From bfe089e3724ad8f9c8922165738d837b8641a6e6 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 17 Aug 2017 10:37:12 -0400 Subject: [PATCH 0001/1087] doc: Fix table column count Author: Erik Rijkers --- doc/src/sgml/high-availability.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml index 1a152cf118..6c54fbd40d 100644 --- a/doc/src/sgml/high-availability.sgml +++ b/doc/src/sgml/high-availability.sgml @@ -301,7 +301,7 @@ protocol to make nodes agree on a serializable transactional order. High Availability, Load Balancing, and Replication Feature Matrix - + Feature From 963af96920fabf5fd7ee28ecc96521f371c13a4b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 17 Aug 2017 11:17:39 -0400 Subject: [PATCH 0002/1087] Add missing "static" marker. Per pademelon. --- src/backend/optimizer/prep/prepunion.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index 6d8f8938b2..f43c3f3007 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -2088,7 +2088,7 @@ adjust_appendrel_attrs_mutator(Node *node, * Substitute child relids for parent relids in a Relid set. The array of * appinfos specifies the substitutions to be performed. */ -Relids +static Relids adjust_child_relids(Relids relids, int nappinfos, AppendRelInfo **appinfos) { Bitmapset *result = NULL; From 79f457e53ac37b5d383845c410e5a41457d74950 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 17 Aug 2017 11:19:07 -0400 Subject: [PATCH 0003/1087] Remove bogus line from comment. Spotted by Tom Lane Discussion: http://postgr.es/m/27897.1502901074@sss.pgh.pa.us --- contrib/postgres_fdw/postgres_fdw.c | 1 - 1 file changed, 1 deletion(-) diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index d77c2a70e4..a30afca1d6 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -4497,7 +4497,6 @@ postgresGetForeignJoinPaths(PlannerInfo *root, * the path list of the joinrel, if one exists. We must be careful to * call it before adding any ForeignPath, since the ForeignPath might * dominate the only suitable local path available. We also do it before - * reconstruct the row for EvalPlanQual(). Find an alternative local path * calling foreign_join_ok(), since that function updates fpinfo and marks * it as pushable if the join is found to be pushable. */ From d54285935072175aac1c446e15ec778b08a8fd75 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 17 Aug 2017 11:39:00 -0400 Subject: [PATCH 0004/1087] doc: Update RFC URLs Consistently use the IETF HTML links instead of a random mix of different sites and formats. Correct one RFC number and fix one broken link. --- doc/src/sgml/client-auth.sgml | 2 +- doc/src/sgml/json.sgml | 2 +- doc/src/sgml/libpq.sgml | 2 +- doc/src/sgml/pgcrypto.sgml | 6 +++--- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 819db811b2..2dd6c29350 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -941,7 +941,7 @@ omicron bryanh guest1 scram-sha-256 performs SCRAM-SHA-256 authentication, as described in - RFC5802. It + RFC 7677. It is a challenge-response scheme, that prevents password sniffing on untrusted connections. It is more secure than the md5 method, but might not be supported by older clients. 
diff --git a/doc/src/sgml/json.sgml b/doc/src/sgml/json.sgml index 3cf78d6394..7dfdf96764 100644 --- a/doc/src/sgml/json.sgml +++ b/doc/src/sgml/json.sgml @@ -13,7 +13,7 @@ JSON data types are for storing JSON (JavaScript Object Notation) - data, as specified in RFC + data, as specified in RFC 7159. Such data can also be stored as text, but the JSON data types have the advantage of enforcing that each stored value is valid according to the JSON rules. There are also diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index ad5e9b95b4..8e0b0b8586 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -775,7 +775,7 @@ PGPing PQping(const char *conninfo); connection parameters. There are two accepted formats for these strings: plain keyword = value strings and URIs. URIs generally follow - RFC + RFC 3986, except that multi-host connection strings are allowed as further described below. diff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml index bf514aacf3..34d8621958 100644 --- a/doc/src/sgml/pgcrypto.sgml +++ b/doc/src/sgml/pgcrypto.sgml @@ -1317,15 +1317,15 @@ gen_random_uuid() returns uuid - + OpenPGP message format. - + The MD5 Message-Digest Algorithm. - + HMAC: Keyed-Hashing for Message Authentication. From b5178c5d08ca59e30f9d9428fa6fdb2741794e65 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 17 Aug 2017 13:13:47 -0400 Subject: [PATCH 0005/1087] Further tweaks to compiler flags for PL/Perl on Windows. It now emerges that we can only rely on Perl to tell us we must use -D_USE_32BIT_TIME_T if it's Perl 5.13.4 or later. For older versions, revert to our previous practice of assuming we need that symbol in all 32-bit Windows builds. This is not ideal, but inquiring into which compiler version Perl was built with seems far too fragile. In any case, we had not previously had complaints about these old Perl versions, so let's assume this is Good Enough. (It's still better than the situation ante commit 5a5c2feca, in that at least the effects are confined to PL/Perl rather than the whole PG build.) Back-patch to all supported versions, like 5a5c2feca and predecessors. Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com --- src/tools/msvc/Mkvcbuild.pm | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm index 159e79ee7d..686c7369f6 100644 --- a/src/tools/msvc/Mkvcbuild.pm +++ b/src/tools/msvc/Mkvcbuild.pm @@ -530,6 +530,18 @@ sub mkvcbuild } } + # Perl versions before 5.13.4 don't provide -D_USE_32BIT_TIME_T + # regardless of how they were built. On 32-bit Windows, assume + # such a version was built with a pre-MSVC-2005 compiler, and + # define the symbol anyway, so that we are compatible if we're + # being built with a later MSVC version. + push(@perl_embed_ccflags, '_USE_32BIT_TIME_T') + if $solution->{platform} eq 'Win32' + && $Config{PERL_REVISION} == 5 + && ($Config{PERL_VERSION} < 13 + || ( $Config{PERL_VERSION} == 13 + && $Config{PERL_SUBVERSION} < 4)); + # Also, a hack to prevent duplicate definitions of uid_t/gid_t push(@perl_embed_ccflags, 'PLPERL_HAVE_UID_GID'); From a2b70c89ca1a5fcf6181d3c777d82e7b83d2de1b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 17 Aug 2017 13:49:22 -0400 Subject: [PATCH 0006/1087] Fix ExecReScanGatherMerge. Not surprisingly, since it'd never ever been tested, ExecReScanGatherMerge didn't work. Fix it, and add a regression test case to exercise it. 
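The cause was simple: ExecReScanGatherMerge reset node->initialized but
not node->gm_initialized, so a subsequent scan skipped rebuilding the
merge state and reused stale state left over from the previous scan.
Clearing that flag as well (the one-line change below) makes the node
re-initialize properly on rescan.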
Amit Kapila Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com --- src/backend/executor/nodeGatherMerge.c | 1 + src/test/regress/expected/select_parallel.out | 43 +++++++++++++++++++ src/test/regress/sql/select_parallel.sql | 16 +++++++ 3 files changed, 60 insertions(+) diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 9a81e22510..64c62398bb 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -334,6 +334,7 @@ ExecReScanGatherMerge(GatherMergeState *node) ExecShutdownGatherMergeWorkers(node); node->initialized = false; + node->gm_initialized = false; if (node->pei) ExecParallelReinitialize(node->pei); diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 0efb211c97..db31837ede 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -300,6 +300,49 @@ select count(*) from tenk1 group by twenty; 500 (20 rows) +--test rescan behavior of gather merge +set enable_material = false; +explain (costs off) +select * from + (select string4, count(unique2) + from tenk1 group by string4 order by string4) ss + right join (values (1),(2),(3)) v(x) on true; + QUERY PLAN +---------------------------------------------------------- + Nested Loop Left Join + -> Values Scan on "*VALUES*" + -> Finalize GroupAggregate + Group Key: tenk1.string4 + -> Gather Merge + Workers Planned: 4 + -> Partial GroupAggregate + Group Key: tenk1.string4 + -> Sort + Sort Key: tenk1.string4 + -> Parallel Seq Scan on tenk1 +(11 rows) + +select * from + (select string4, count(unique2) + from tenk1 group by string4 order by string4) ss + right join (values (1),(2),(3)) v(x) on true; + string4 | count | x +---------+-------+--- + AAAAxx | 2500 | 1 + HHHHxx | 2500 | 1 + OOOOxx | 2500 | 1 + VVVVxx | 2500 | 1 + AAAAxx | 2500 | 2 + HHHHxx | 2500 | 2 + OOOOxx | 2500 | 2 + VVVVxx | 2500 | 2 + AAAAxx | 2500 | 3 + HHHHxx | 2500 | 3 + OOOOxx | 2500 | 3 + VVVVxx | 2500 | 3 +(12 rows) + +reset enable_material; -- gather merge test with 0 worker set max_parallel_workers = 0; explain (costs off) diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index e717f92e53..33ce61a026 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -118,6 +118,22 @@ explain (costs off) select count(*) from tenk1 group by twenty; +--test rescan behavior of gather merge +set enable_material = false; + +explain (costs off) +select * from + (select string4, count(unique2) + from tenk1 group by string4 order by string4) ss + right join (values (1),(2),(3)) v(x) on true; + +select * from + (select string4, count(unique2) + from tenk1 group by string4 order by string4) ss + right join (values (1),(2),(3)) v(x) on true; + +reset enable_material; + -- gather merge test with 0 worker set max_parallel_workers = 0; explain (costs off) From 1e56883a528eb623c9a55ec7e43b4eee3722014b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 17 Aug 2017 14:04:15 -0400 Subject: [PATCH 0007/1087] Attempt to clarify comments related to force_parallel_mode. Per discussion with Tom Lane. 
Discussion: http://postgr.es/m/28589.1502902172@sss.pgh.pa.us --- src/backend/optimizer/plan/planner.c | 22 +++++++++++++++------- 1 file changed, 15 insertions(+), 7 deletions(-) diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 407df9ae79..fdef00ab39 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -291,13 +291,21 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) } /* - * glob->parallelModeNeeded should tell us whether it's necessary to - * impose the parallel mode restrictions, but we don't actually want to - * impose them unless we choose a parallel plan, so it is normally set - * only if a parallel plan is chosen (see create_gather_plan). That way, - * people who mislabel their functions but don't use parallelism anyway - * aren't harmed. But when force_parallel_mode is set, we enable the - * restrictions whenever possible for testing purposes. + * glob->parallelModeNeeded is normally set to false here and changed to + * true during plan creation if a Gather or Gather Merge plan is actually + * created (cf. create_gather_plan, create_gather_merge_plan). + * + * However, if force_parallel_mode = on or force_parallel_mode = regress, + * then we impose parallel mode whenever it's safe to do so, even if the + * final plan doesn't use parallelism. It's not safe to do so if the + * query contains anything parallel-unsafe; parallelModeOK will be false + * in that case. Note that parallelModeOK can't change after this point. + * Otherwise, everything in the query is either parallel-safe or + * parallel-restricted, and in either case it should be OK to impose + * parallel-mode restrictions. If that ends up breaking something, then + * either some function the user included in the query is incorrectly + * labelled as parallel-safe or parallel-restricted when in reality it's + * parallel-unsafe, or else the query planner itself has a bug. */ glob->parallelModeNeeded = glob->parallelModeOK && (force_parallel_mode != FORCE_PARALLEL_OFF); From ecfe59e50fb8316ab7fc653419cd724c8b7a7dd7 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 17 Aug 2017 14:49:45 -0400 Subject: [PATCH 0008/1087] Refactor validation of new partitions a little bit. Move some logic that is currently in ATExecAttachPartition to separate functions to facilitate future code reuse. 
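The logic moves into two new functions:
PartConstraintImpliedByRelConstraint(), which checks whether the table's
existing check constraints and NOT NULL constraints already imply the
partition constraint, and ValidatePartitionConstraints(), which otherwise
queues a scan of the table (or of each of its leaf partitions).
ATExecAttachPartition then shrinks to a single call:

    ValidatePartitionConstraints(wqueue, attachrel, attachrel_children,
                                 partConstraint);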
Ashutosh Bapat and Jeevan Ladhe Discussion: http://postgr.es/m/CA+Tgmobbnamyvii0pRdg9pp_jLHSUvq7u5SiRrVV0tEFFU58Tg@mail.gmail.com --- src/backend/commands/tablecmds.c | 318 +++++++++++++++++-------------- 1 file changed, 172 insertions(+), 146 deletions(-) diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 513a9ec485..83cb460164 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -473,6 +473,11 @@ static void CreateInheritance(Relation child_rel, Relation parent_rel); static void RemoveInheritance(Relation child_rel, Relation parent_rel); static ObjectAddress ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd); +static bool PartConstraintImpliedByRelConstraint(Relation scanrel, + List *partConstraint); +static void ValidatePartitionConstraints(List **wqueue, Relation scanrel, + List *scanrel_children, + List *partConstraint); static ObjectAddress ATExecDetachPartition(Relation rel, RangeVar *name); @@ -13424,6 +13429,169 @@ ComputePartitionAttrs(Relation rel, List *partParams, AttrNumber *partattrs, } } +/* + * PartConstraintImpliedByRelConstraint + * Does scanrel's existing constraints imply the partition constraint? + * + * Existing constraints includes its check constraints and column-level + * NOT NULL constraints and partConstraint describes the partition constraint. + */ +static bool +PartConstraintImpliedByRelConstraint(Relation scanrel, + List *partConstraint) +{ + List *existConstraint = NIL; + TupleConstr *constr = RelationGetDescr(scanrel)->constr; + int num_check, + i; + + if (constr && constr->has_not_null) + { + int natts = scanrel->rd_att->natts; + + for (i = 1; i <= natts; i++) + { + Form_pg_attribute att = scanrel->rd_att->attrs[i - 1]; + + if (att->attnotnull && !att->attisdropped) + { + NullTest *ntest = makeNode(NullTest); + + ntest->arg = (Expr *) makeVar(1, + i, + att->atttypid, + att->atttypmod, + att->attcollation, + 0); + ntest->nulltesttype = IS_NOT_NULL; + + /* + * argisrow=false is correct even for a composite column, + * because attnotnull does not represent a SQL-spec IS NOT + * NULL test in such a case, just IS DISTINCT FROM NULL. + */ + ntest->argisrow = false; + ntest->location = -1; + existConstraint = lappend(existConstraint, ntest); + } + } + } + + num_check = (constr != NULL) ? constr->num_check : 0; + for (i = 0; i < num_check; i++) + { + Node *cexpr; + + /* + * If this constraint hasn't been fully validated yet, we must ignore + * it here. + */ + if (!constr->check[i].ccvalid) + continue; + + cexpr = stringToNode(constr->check[i].ccbin); + + /* + * Run each expression through const-simplification and + * canonicalization. It is necessary, because we will be comparing it + * to similarly-processed partition constraint expressions, and may + * fail to detect valid matches without this. + */ + cexpr = eval_const_expressions(NULL, cexpr); + cexpr = (Node *) canonicalize_qual((Expr *) cexpr); + + existConstraint = list_concat(existConstraint, + make_ands_implicit((Expr *) cexpr)); + } + + if (existConstraint != NIL) + existConstraint = list_make1(make_ands_explicit(existConstraint)); + + /* And away we go ... */ + return predicate_implied_by(partConstraint, existConstraint, true); +} + +/* + * ValidatePartitionConstraints + * + * Check whether all rows in the given table obey the given partition + * constraint; if so, it can be attached as a partition.  
We do this by + * scanning the table (or all of its leaf partitions) row by row, except when + * the existing constraints are sufficient to prove that the new partitioning + * constraint must already hold. + */ +static void +ValidatePartitionConstraints(List **wqueue, Relation scanrel, + List *scanrel_children, + List *partConstraint) +{ + bool found_whole_row; + ListCell *lc; + + if (partConstraint == NIL) + return; + + /* + * Based on the table's existing constraints, determine if we can skip + * scanning the table to validate the partition constraint. + */ + if (PartConstraintImpliedByRelConstraint(scanrel, partConstraint)) + { + ereport(INFO, + (errmsg("partition constraint for table \"%s\" is implied by existing constraints", + RelationGetRelationName(scanrel)))); + return; + } + + /* Constraints proved insufficient, so we need to scan the table. */ + foreach(lc, scanrel_children) + { + AlteredTableInfo *tab; + Oid part_relid = lfirst_oid(lc); + Relation part_rel; + List *my_partconstr = partConstraint; + + /* Lock already taken */ + if (part_relid != RelationGetRelid(scanrel)) + part_rel = heap_open(part_relid, NoLock); + else + part_rel = scanrel; + + /* + * Skip if the partition is itself a partitioned table. We can only + * ever scan RELKIND_RELATION relations. + */ + if (part_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + { + if (part_rel != scanrel) + heap_close(part_rel, NoLock); + continue; + } + + if (part_rel != scanrel) + { + /* + * Adjust the constraint for scanrel so that it matches this + * partition's attribute numbers. + */ + my_partconstr = map_partition_varattnos(my_partconstr, 1, + part_rel, scanrel, + &found_whole_row); + /* There can never be a whole-row reference here */ + if (found_whole_row) + elog(ERROR, "unexpected whole-row reference found in partition key"); + } + + /* Grab a work queue entry. */ + tab = ATGetQueueEntry(wqueue, part_rel); + tab->partition_constraint = (Expr *) linitial(my_partconstr); + + /* keep our lock until commit */ + if (part_rel != scanrel) + heap_close(part_rel, NoLock); + } +} + /* * ALTER TABLE ATTACH PARTITION FOR VALUES * @@ -13435,15 +13603,12 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) Relation attachrel, catalog; List *attachrel_children; - TupleConstr *attachrel_constr; - List *partConstraint, - *existConstraint; + List *partConstraint; SysScanDesc scan; ScanKeyData skey; AttrNumber attno; int natts; TupleDesc tupleDesc; - bool skip_validate = false; ObjectAddress address; const char *trigger_name; bool found_whole_row; @@ -13637,148 +13802,9 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) if (found_whole_row) elog(ERROR, "unexpected whole-row reference found in partition key"); - /* - * Check if we can do away with having to scan the table being attached to - * validate the partition constraint, by *proving* that the existing - * constraints of the table *imply* the partition predicate. We include - * the table's check constraints and NOT NULL constraints in the list of - * clauses passed to predicate_implied_by(). - * - * There is a case in which we cannot rely on just the result of the - * proof. 
- */ - attachrel_constr = tupleDesc->constr; - existConstraint = NIL; - if (attachrel_constr != NULL) - { - int num_check = attachrel_constr->num_check; - int i; - - if (attachrel_constr->has_not_null) - { - int natts = attachrel->rd_att->natts; - - for (i = 1; i <= natts; i++) - { - Form_pg_attribute att = attachrel->rd_att->attrs[i - 1]; - - if (att->attnotnull && !att->attisdropped) - { - NullTest *ntest = makeNode(NullTest); - - ntest->arg = (Expr *) makeVar(1, - i, - att->atttypid, - att->atttypmod, - att->attcollation, - 0); - ntest->nulltesttype = IS_NOT_NULL; - - /* - * argisrow=false is correct even for a composite column, - * because attnotnull does not represent a SQL-spec IS NOT - * NULL test in such a case, just IS DISTINCT FROM NULL. - */ - ntest->argisrow = false; - ntest->location = -1; - existConstraint = lappend(existConstraint, ntest); - } - } - } - - for (i = 0; i < num_check; i++) - { - Node *cexpr; - - /* - * If this constraint hasn't been fully validated yet, we must - * ignore it here. - */ - if (!attachrel_constr->check[i].ccvalid) - continue; - - cexpr = stringToNode(attachrel_constr->check[i].ccbin); - - /* - * Run each expression through const-simplification and - * canonicalization. It is necessary, because we will be - * comparing it to similarly-processed qual clauses, and may fail - * to detect valid matches without this. - */ - cexpr = eval_const_expressions(NULL, cexpr); - cexpr = (Node *) canonicalize_qual((Expr *) cexpr); - - existConstraint = list_concat(existConstraint, - make_ands_implicit((Expr *) cexpr)); - } - - existConstraint = list_make1(make_ands_explicit(existConstraint)); - - /* And away we go ... */ - if (predicate_implied_by(partConstraint, existConstraint, true)) - skip_validate = true; - } - - if (skip_validate) - { - /* No need to scan the table after all. */ - ereport(INFO, - (errmsg("partition constraint for table \"%s\" is implied by existing constraints", - RelationGetRelationName(attachrel)))); - } - else - { - /* Constraints proved insufficient, so we need to scan the table. */ - ListCell *lc; - - foreach(lc, attachrel_children) - { - AlteredTableInfo *tab; - Oid part_relid = lfirst_oid(lc); - Relation part_rel; - List *my_partconstr = partConstraint; - - /* Lock already taken */ - if (part_relid != RelationGetRelid(attachrel)) - part_rel = heap_open(part_relid, NoLock); - else - part_rel = attachrel; - - /* - * Skip if the partition is itself a partitioned table. We can - * only ever scan RELKIND_RELATION relations. - */ - if (part_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) - { - if (part_rel != attachrel) - heap_close(part_rel, NoLock); - continue; - } - - if (part_rel != attachrel) - { - /* - * Adjust the constraint that we constructed above for - * attachRel so that it matches this partition's attribute - * numbers. - */ - my_partconstr = map_partition_varattnos(my_partconstr, 1, - part_rel, attachrel, - &found_whole_row); - /* There can never be a whole-row reference here */ - if (found_whole_row) - elog(ERROR, "unexpected whole-row reference found in partition key"); - } - - /* Grab a work queue entry. */ - tab = ATGetQueueEntry(wqueue, part_rel); - tab->partition_constraint = (Expr *) linitial(my_partconstr); - - /* keep our lock until commit */ - if (part_rel != attachrel) - heap_close(part_rel, NoLock); - } - } + /* Validate partition constraints against the table being attached. 
*/ + ValidatePartitionConstraints(wqueue, attachrel, attachrel_children, + partConstraint); ObjectAddressSet(address, RelationRelationId, RelationGetRelid(attachrel)); From 54cde0c4c05807321d3f4bf96a97c376e3fa91cb Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 17 Aug 2017 15:39:17 -0400 Subject: [PATCH 0009/1087] Don't lock tables in RelationGetPartitionDispatchInfo. Instead, lock them in the caller using find_all_inheritors so that they get locked in the standard order, minimizing deadlock risks. Also in RelationGetPartitionDispatchInfo, avoid opening tables which are not partitioned; there's no need. Amit Langote, reviewed by Ashutosh Bapat and Amit Khandekar Discussion: http://postgr.es/m/91b36fa1-c197-b72f-ca6e-56c593bae68c@lab.ntt.co.jp --- src/backend/catalog/partition.c | 55 +++++++++++++++++---------------- src/backend/executor/execMain.c | 10 ++++-- src/include/catalog/partition.h | 3 +- 3 files changed, 37 insertions(+), 31 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index c1a307c8d3..96a64ce6b2 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -999,12 +999,16 @@ get_partition_qual_relid(Oid relid) * RelationGetPartitionDispatchInfo * Returns information necessary to route tuples down a partition tree * - * All the partitions will be locked with lockmode, unless it is NoLock. - * A list of the OIDs of all the leaf partitions of rel is returned in - * *leaf_part_oids. + * The number of elements in the returned array (that is, the number of + * PartitionDispatch objects for the partitioned tables in the partition tree) + * is returned in *num_parted and a list of the OIDs of all the leaf + * partitions of rel is returned in *leaf_part_oids. + * + * All the relations in the partition tree (including 'rel') must have been + * locked (using at least the AccessShareLock) by the caller. */ PartitionDispatch * -RelationGetPartitionDispatchInfo(Relation rel, int lockmode, +RelationGetPartitionDispatchInfo(Relation rel, int *num_parted, List **leaf_part_oids) { PartitionDispatchData **pd; @@ -1019,14 +1023,18 @@ RelationGetPartitionDispatchInfo(Relation rel, int lockmode, offset; /* - * Lock partitions and make a list of the partitioned ones to prepare - * their PartitionDispatch objects below. + * We rely on the relcache to traverse the partition tree to build both + * the leaf partition OIDs list and the array of PartitionDispatch objects + * for the partitioned tables in the tree. That means every partitioned + * table in the tree must be locked, which is fine since we require the + * caller to lock all the partitions anyway. * - * Cannot use find_all_inheritors() here, because then the order of OIDs - * in parted_rels list would be unknown, which does not help, because we - * assign indexes within individual PartitionDispatch in an order that is - * predetermined (determined by the order of OIDs in individual partition - * descriptors). + * For every partitioned table in the tree, starting with the root + * partitioned table, add its relcache entry to parted_rels, while also + * queuing its partitions (in the order in which they appear in the + * partition descriptor) to be looked at later in the same loop. This is + * a bit tricky but works because the foreach() macro doesn't fetch the + * next list element until the bottom of the loop. 
*/ *num_parted = 1; parted_rels = list_make1(rel); @@ -1035,29 +1043,24 @@ RelationGetPartitionDispatchInfo(Relation rel, int lockmode, APPEND_REL_PARTITION_OIDS(rel, all_parts, all_parents); forboth(lc1, all_parts, lc2, all_parents) { - Relation partrel = heap_open(lfirst_oid(lc1), lockmode); + Oid partrelid = lfirst_oid(lc1); Relation parent = lfirst(lc2); - PartitionDesc partdesc = RelationGetPartitionDesc(partrel); - /* - * If this partition is a partitioned table, add its children to the - * end of the list, so that they are processed as well. - */ - if (partdesc) + if (get_rel_relkind(partrelid) == RELKIND_PARTITIONED_TABLE) { + /* + * Already locked by the caller. Note that it is the + * responsibility of the caller to close the below relcache entry, + * once done using the information being collected here (for + * example, in ExecEndModifyTable). + */ + Relation partrel = heap_open(partrelid, NoLock); + (*num_parted)++; parted_rels = lappend(parted_rels, partrel); parted_rel_parents = lappend(parted_rel_parents, parent); APPEND_REL_PARTITION_OIDS(partrel, all_parts, all_parents); } - else - heap_close(partrel, NoLock); - - /* - * We keep the partitioned ones open until we're done using the - * information being collected here (for example, see - * ExecEndModifyTable). - */ } /* diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 6671a25ffb..74071eede6 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -43,6 +43,7 @@ #include "access/xact.h" #include "catalog/namespace.h" #include "catalog/partition.h" +#include "catalog/pg_inherits_fn.h" #include "catalog/pg_publication.h" #include "commands/matview.h" #include "commands/trigger.h" @@ -3249,9 +3250,12 @@ ExecSetupPartitionTupleRouting(Relation rel, int i; ResultRelInfo *leaf_part_rri; - /* Get the tuple-routing information and lock partitions */ - *pd = RelationGetPartitionDispatchInfo(rel, RowExclusiveLock, num_parted, - &leaf_parts); + /* + * Get the information about the partition tree after locking all the + * partitions. + */ + (void) find_all_inheritors(RelationGetRelid(rel), RowExclusiveLock, NULL); + *pd = RelationGetPartitionDispatchInfo(rel, num_parted, &leaf_parts); *num_partitions = list_length(leaf_parts); *partitions = (ResultRelInfo *) palloc(*num_partitions * sizeof(ResultRelInfo)); diff --git a/src/include/catalog/partition.h b/src/include/catalog/partition.h index bef7a0f5fb..2283c675e9 100644 --- a/src/include/catalog/partition.h +++ b/src/include/catalog/partition.h @@ -88,8 +88,7 @@ extern Expr *get_partition_qual_relid(Oid relid); /* For tuple routing */ extern PartitionDispatch *RelationGetPartitionDispatchInfo(Relation rel, - int lockmode, int *num_parted, - List **leaf_part_oids); + int *num_parted, List **leaf_part_oids); extern void FormPartitionKeyDatum(PartitionDispatch pd, TupleTableSlot *slot, EState *estate, From a20aac890a89e6f88e841dedbbfa8d9d5f7309fc Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 17 Aug 2017 18:35:14 -0400 Subject: [PATCH 0010/1087] Temporarily revert test case from a2b70c89ca1a5fcf6181d3c777d82e7b83d2de1b. That code patch was good as far as it went, but the associated test case has exposed fundamental brain damage in the parallel scan mechanism, which is going to take nontrivial work to correct. In the interests of getting the buildfarm back to green so that unrelated work can proceed, let's temporarily remove the test case. 
--- src/test/regress/expected/select_parallel.out | 43 ------------------- src/test/regress/sql/select_parallel.sql | 16 ------- 2 files changed, 59 deletions(-) diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index db31837ede..0efb211c97 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -300,49 +300,6 @@ select count(*) from tenk1 group by twenty; 500 (20 rows) ---test rescan behavior of gather merge -set enable_material = false; -explain (costs off) -select * from - (select string4, count(unique2) - from tenk1 group by string4 order by string4) ss - right join (values (1),(2),(3)) v(x) on true; - QUERY PLAN ----------------------------------------------------------- - Nested Loop Left Join - -> Values Scan on "*VALUES*" - -> Finalize GroupAggregate - Group Key: tenk1.string4 - -> Gather Merge - Workers Planned: 4 - -> Partial GroupAggregate - Group Key: tenk1.string4 - -> Sort - Sort Key: tenk1.string4 - -> Parallel Seq Scan on tenk1 -(11 rows) - -select * from - (select string4, count(unique2) - from tenk1 group by string4 order by string4) ss - right join (values (1),(2),(3)) v(x) on true; - string4 | count | x ----------+-------+--- - AAAAxx | 2500 | 1 - HHHHxx | 2500 | 1 - OOOOxx | 2500 | 1 - VVVVxx | 2500 | 1 - AAAAxx | 2500 | 2 - HHHHxx | 2500 | 2 - OOOOxx | 2500 | 2 - VVVVxx | 2500 | 2 - AAAAxx | 2500 | 3 - HHHHxx | 2500 | 3 - OOOOxx | 2500 | 3 - VVVVxx | 2500 | 3 -(12 rows) - -reset enable_material; -- gather merge test with 0 worker set max_parallel_workers = 0; explain (costs off) diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 33ce61a026..e717f92e53 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -118,22 +118,6 @@ explain (costs off) select count(*) from tenk1 group by twenty; ---test rescan behavior of gather merge -set enable_material = false; - -explain (costs off) -select * from - (select string4, count(unique2) - from tenk1 group by string4 order by string4) ss - right join (values (1),(2),(3)) v(x) on true; - -select * from - (select string4, count(unique2) - from tenk1 group by string4 order by string4) ss - right join (values (1),(2),(3)) v(x) on true; - -reset enable_material; - -- gather merge test with 0 worker set max_parallel_workers = 0; explain (costs off) From c4b841ba6aa9252ab9dacd59d317aba8cfa9b31a Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 18 Aug 2017 13:01:05 -0400 Subject: [PATCH 0011/1087] Fix interaction of triggers, partitioning, and EXPLAIN ANALYZE. Add a new EState member es_leaf_result_relations, so that the trigger code knows about ResultRelInfos created by tuple routing. Also make sure ExplainPrintTriggers knows about partition-related ResultRelInfos. 
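Concretely, ExecSetupPartitionTupleRouting now appends each leaf
partition's ResultRelInfo to estate->es_leaf_result_relations,
ExecGetTriggerResultRel additionally searches the root and leaf result
relations when resolving a trigger target, and ExplainPrintTriggers
reports triggers fired on all of them, so that EXPLAIN ANALYZE of an
INSERT into a partitioned table accounts for trigger time on the
routed-to partitions.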
Etsuro Fujita, reviewed by Amit Langote Discussion: http://postgr.es/m/57163e18-8e56-da83-337a-22f2c0008051@lab.ntt.co.jp --- src/backend/commands/copy.c | 110 +++++++++++++------------ src/backend/commands/explain.c | 15 +++- src/backend/executor/execMain.c | 45 +++++++--- src/backend/executor/execUtils.c | 5 ++ src/backend/executor/nodeModifyTable.c | 1 + src/include/executor/executor.h | 1 + src/include/nodes/execnodes.h | 3 + 7 files changed, 115 insertions(+), 65 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index a258965c20..375a25fbcf 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -1415,59 +1415,6 @@ BeginCopy(ParseState *pstate, (errcode(ERRCODE_UNDEFINED_COLUMN), errmsg("table \"%s\" does not have OIDs", RelationGetRelationName(cstate->rel)))); - - /* - * If there are any triggers with transition tables on the named - * relation, we need to be prepared to capture transition tuples. - */ - cstate->transition_capture = MakeTransitionCaptureState(rel->trigdesc); - - /* Initialize state for CopyFrom tuple routing. */ - if (is_from && rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) - { - PartitionDispatch *partition_dispatch_info; - ResultRelInfo *partitions; - TupleConversionMap **partition_tupconv_maps; - TupleTableSlot *partition_tuple_slot; - int num_parted, - num_partitions; - - ExecSetupPartitionTupleRouting(rel, - 1, - &partition_dispatch_info, - &partitions, - &partition_tupconv_maps, - &partition_tuple_slot, - &num_parted, &num_partitions); - cstate->partition_dispatch_info = partition_dispatch_info; - cstate->num_dispatch = num_parted; - cstate->partitions = partitions; - cstate->num_partitions = num_partitions; - cstate->partition_tupconv_maps = partition_tupconv_maps; - cstate->partition_tuple_slot = partition_tuple_slot; - - /* - * If we are capturing transition tuples, they may need to be - * converted from partition format back to partitioned table - * format (this is only ever necessary if a BEFORE trigger - * modifies the tuple). - */ - if (cstate->transition_capture != NULL) - { - int i; - - cstate->transition_tupconv_maps = (TupleConversionMap **) - palloc0(sizeof(TupleConversionMap *) * - cstate->num_partitions); - for (i = 0; i < cstate->num_partitions; ++i) - { - cstate->transition_tupconv_maps[i] = - convert_tuples_by_name(RelationGetDescr(cstate->partitions[i].ri_RelationDesc), - RelationGetDescr(rel), - gettext_noop("could not convert row type")); - } - } - } } else { @@ -2482,6 +2429,63 @@ CopyFrom(CopyState cstate) /* Triggers might need a slot as well */ estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate); + /* + * If there are any triggers with transition tables on the named relation, + * we need to be prepared to capture transition tuples. + */ + cstate->transition_capture = + MakeTransitionCaptureState(cstate->rel->trigdesc); + + /* + * If the named relation is a partitioned table, initialize state for + * CopyFrom tuple routing. 
+ */ + if (cstate->rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + { + PartitionDispatch *partition_dispatch_info; + ResultRelInfo *partitions; + TupleConversionMap **partition_tupconv_maps; + TupleTableSlot *partition_tuple_slot; + int num_parted, + num_partitions; + + ExecSetupPartitionTupleRouting(cstate->rel, + 1, + estate, + &partition_dispatch_info, + &partitions, + &partition_tupconv_maps, + &partition_tuple_slot, + &num_parted, &num_partitions); + cstate->partition_dispatch_info = partition_dispatch_info; + cstate->num_dispatch = num_parted; + cstate->partitions = partitions; + cstate->num_partitions = num_partitions; + cstate->partition_tupconv_maps = partition_tupconv_maps; + cstate->partition_tuple_slot = partition_tuple_slot; + + /* + * If we are capturing transition tuples, they may need to be + * converted from partition format back to partitioned table format + * (this is only ever necessary if a BEFORE trigger modifies the + * tuple). + */ + if (cstate->transition_capture != NULL) + { + int i; + + cstate->transition_tupconv_maps = (TupleConversionMap **) + palloc0(sizeof(TupleConversionMap *) * cstate->num_partitions); + for (i = 0; i < cstate->num_partitions; ++i) + { + cstate->transition_tupconv_maps[i] = + convert_tuples_by_name(RelationGetDescr(cstate->partitions[i].ri_RelationDesc), + RelationGetDescr(cstate->rel), + gettext_noop("could not convert row type")); + } + } + } + /* * It's more efficient to prepare a bunch of tuples for insertion, and * insert them in one heap_multi_insert() call, than call heap_insert() diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 7648201218..953e74d73c 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -656,17 +656,30 @@ ExplainPrintTriggers(ExplainState *es, QueryDesc *queryDesc) ResultRelInfo *rInfo; bool show_relname; int numrels = queryDesc->estate->es_num_result_relations; + int numrootrels = queryDesc->estate->es_num_root_result_relations; + List *leafrels = queryDesc->estate->es_leaf_result_relations; List *targrels = queryDesc->estate->es_trig_target_relations; int nr; ListCell *l; ExplainOpenGroup("Triggers", "Triggers", false, es); - show_relname = (numrels > 1 || targrels != NIL); + show_relname = (numrels > 1 || numrootrels > 0 || + leafrels != NIL || targrels != NIL); rInfo = queryDesc->estate->es_result_relations; for (nr = 0; nr < numrels; rInfo++, nr++) report_triggers(rInfo, show_relname, es); + rInfo = queryDesc->estate->es_root_result_relations; + for (nr = 0; nr < numrootrels; rInfo++, nr++) + report_triggers(rInfo, show_relname, es); + + foreach(l, leafrels) + { + rInfo = (ResultRelInfo *) lfirst(l); + report_triggers(rInfo, show_relname, es); + } + foreach(l, targrels) { rInfo = (ResultRelInfo *) lfirst(l); diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 74071eede6..4582a3caa0 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -1365,16 +1365,18 @@ InitResultRelInfo(ResultRelInfo *resultRelInfo, * * Get a ResultRelInfo for a trigger target relation. Most of the time, * triggers are fired on one of the result relations of the query, and so - * we can just return a member of the es_result_relations array. (Note: in - * self-join situations there might be multiple members with the same OID; - * if so it doesn't matter which one we pick.) 
However, it is sometimes - * necessary to fire triggers on other relations; this happens mainly when an - * RI update trigger queues additional triggers on other relations, which will - * be processed in the context of the outer query. For efficiency's sake, - * we want to have a ResultRelInfo for those triggers too; that can avoid - * repeated re-opening of the relation. (It also provides a way for EXPLAIN - * ANALYZE to report the runtimes of such triggers.) So we make additional - * ResultRelInfo's as needed, and save them in es_trig_target_relations. + * we can just return a member of the es_result_relations array, the + * es_root_result_relations array (if any), or the es_leaf_result_relations + * list (if any). (Note: in self-join situations there might be multiple + * members with the same OID; if so it doesn't matter which one we pick.) + * However, it is sometimes necessary to fire triggers on other relations; + * this happens mainly when an RI update trigger queues additional triggers + * on other relations, which will be processed in the context of the outer + * query. For efficiency's sake, we want to have a ResultRelInfo for those + * triggers too; that can avoid repeated re-opening of the relation. (It + * also provides a way for EXPLAIN ANALYZE to report the runtimes of such + * triggers.) So we make additional ResultRelInfo's as needed, and save them + * in es_trig_target_relations. */ ResultRelInfo * ExecGetTriggerResultRel(EState *estate, Oid relid) @@ -1395,6 +1397,23 @@ ExecGetTriggerResultRel(EState *estate, Oid relid) rInfo++; nr--; } + /* Second, search through the root result relations, if any */ + rInfo = estate->es_root_result_relations; + nr = estate->es_num_root_result_relations; + while (nr > 0) + { + if (RelationGetRelid(rInfo->ri_RelationDesc) == relid) + return rInfo; + rInfo++; + nr--; + } + /* Third, search through the leaf result relations, if any */ + foreach(l, estate->es_leaf_result_relations) + { + rInfo = (ResultRelInfo *) lfirst(l); + if (RelationGetRelid(rInfo->ri_RelationDesc) == relid) + return rInfo; + } /* Nope, but maybe we already made an extra ResultRelInfo for it */ foreach(l, estate->es_trig_target_relations) { @@ -3238,6 +3257,7 @@ EvalPlanQualEnd(EPQState *epqstate) void ExecSetupPartitionTupleRouting(Relation rel, Index resultRTindex, + EState *estate, PartitionDispatch **pd, ResultRelInfo **partitions, TupleConversionMap ***tup_conv_maps, @@ -3301,7 +3321,10 @@ ExecSetupPartitionTupleRouting(Relation rel, partrel, resultRTindex, rel, - 0); + estate->es_instrument); + + estate->es_leaf_result_relations = + lappend(estate->es_leaf_result_relations, leaf_part_rri); /* * Open partition indices (remember we do not support ON CONFLICT in diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index 25772fc603..c398846879 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -115,6 +115,11 @@ CreateExecutorState(void) estate->es_num_result_relations = 0; estate->es_result_relation_info = NULL; + estate->es_root_result_relations = NULL; + estate->es_num_root_result_relations = 0; + + estate->es_leaf_result_relations = NIL; + estate->es_trig_target_relations = NIL; estate->es_trig_tuple_slot = NULL; estate->es_trig_oldtup_slot = NULL; diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 36b2b43bc6..70a6b847a0 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -1919,6 +1919,7 @@ 
ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) ExecSetupPartitionTupleRouting(rel, node->nominalRelation, + estate, &partition_dispatch_info, &partitions, &partition_tupconv_maps, diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index 60326f9d03..eacbea3c36 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -208,6 +208,7 @@ extern void EvalPlanQualSetTuple(EPQState *epqstate, Index rti, extern HeapTuple EvalPlanQualGetTuple(EPQState *epqstate, Index rti); extern void ExecSetupPartitionTupleRouting(Relation rel, Index resultRTindex, + EState *estate, PartitionDispatch **pd, ResultRelInfo **partitions, TupleConversionMap ***tup_conv_maps, diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 577499465d..3272c4b315 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -452,6 +452,9 @@ typedef struct EState ResultRelInfo *es_root_result_relations; /* array of ResultRelInfos */ int es_num_root_result_relations; /* length of the array */ + /* Info about leaf partitions of partitioned table(s) for insert queries: */ + List *es_leaf_result_relations; /* List of ResultRelInfos */ + /* Stuff used for firing triggers: */ List *es_trig_target_relations; /* trigger-only ResultRelInfos */ TupleTableSlot *es_trig_tuple_slot; /* for trigger output tuples */ From 24620fc52bd9d4139748591b6cce7327fd299684 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 18 Aug 2017 23:02:28 -0400 Subject: [PATCH 0012/1087] Fix creation of ICU comments for keyword variants It would create the comment referring to the keyword-less parent locale. This was broken in ddb5fdc068635d003a0d1c303cb109d1cb3ebeb1. --- src/backend/commands/collationcmds.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/commands/collationcmds.c b/src/backend/commands/collationcmds.c index 96a6bc9bf0..8572b2dedc 100644 --- a/src/backend/commands/collationcmds.c +++ b/src/backend/commands/collationcmds.c @@ -775,7 +775,7 @@ pg_import_system_collations(PG_FUNCTION_ARGS) CommandCounterIncrement(); - icucomment = get_icu_locale_comment(name); + icucomment = get_icu_locale_comment(localeid); if (icucomment) CreateComments(collid, CollationRelationId, 0, icucomment); From b1c2d76a2fcef812af0be3343082414d401909c8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 19 Aug 2017 13:39:37 -0400 Subject: [PATCH 0013/1087] Fix possible core dump in parallel restore when using a TOC list. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Commit 3eb9a5e7c unintentionally introduced an ordering dependency into restore_toc_entries_prefork(). The existing coding of reduce_dependencies() contains a check to skip moving a TOC entry to the ready_list if it wasn't initially in the pending_list. This used to suffice to prevent reduce_dependencies() from trying to move anything into the ready_list during restore_toc_entries_prefork(), because the pending_list stayed empty throughout that phase; but it no longer does. The problem doesn't manifest unless the TOC has been reordered by SortTocFromFile, which is how I missed it in testing. To fix, just add a test for ready_list == NULL, converting the call with NULL from a poor man's sanity check into an explicit command not to touch TOC items' list membership. 
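With that change, the two usage patterns are explicit (the second call
site is elsewhere in pg_backup_archiver.c, outside this diff; the
comments here paraphrase the new ones):

    /* prefork phase: only adjust depCounts, don't touch list membership */
    reduce_dependencies(AH, next_work_item, NULL);

    /* parallel phase: also move newly-ready items onto the ready_list */
    reduce_dependencies(AH, te, ready_list);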
Clarify some of the comments around this; in particular, note the primary purpose of the check for pending_list membership, which is to ensure that we can't try to restore the same item twice, in case a TOC list forces it to be restored before its dependency count goes to zero. Per report from Fabrízio de Royes Mello. Back-patch to 9.3, like the previous commit. Discussion: https://postgr.es/m/CAFcNs+pjuv0JL_x4+=71TPUPjdLHOXA4YfT32myj_OrrZb4ohA@mail.gmail.com --- src/bin/pg_dump/pg_backup_archiver.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index 4cfb71c013..8ae7515c9c 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -3851,8 +3851,9 @@ restore_toc_entries_prefork(ArchiveHandle *AH, TocEntry *pending_list) * * Note: as of 9.2, it should be guaranteed that all PRE_DATA items appear * before DATA items, and all DATA items before POST_DATA items. That is - * not certain to be true in older archives, though, so this loop is coded - * to not assume it. + * not certain to be true in older archives, though, and in any case use + * of a list file would destroy that ordering (cf. SortTocFromFile). So + * this loop cannot assume that it holds. */ AH->restorePass = RESTORE_PASS_MAIN; skipped_some = false; @@ -3899,7 +3900,7 @@ restore_toc_entries_prefork(ArchiveHandle *AH, TocEntry *pending_list) (void) restore_toc_entry(AH, next_work_item, false); - /* there should be no touch of ready_list here, so pass NULL */ + /* Reduce dependencies, but don't move anything to ready_list */ reduce_dependencies(AH, next_work_item, NULL); } else @@ -4545,7 +4546,7 @@ identify_locking_dependencies(ArchiveHandle *AH, TocEntry *te) /* * Remove the specified TOC entry from the depCounts of items that depend on * it, thereby possibly making them ready-to-run. Any pending item that - * becomes ready should be moved to the ready list. + * becomes ready should be moved to the ready_list, if that's provided. */ static void reduce_dependencies(ArchiveHandle *AH, TocEntry *te, TocEntry *ready_list) @@ -4562,15 +4563,19 @@ reduce_dependencies(ArchiveHandle *AH, TocEntry *te, TocEntry *ready_list) otherte->depCount--; /* - * It's ready if it has no remaining dependencies and it belongs in - * the current restore pass. However, don't move it if it has not yet - * been put into the pending list. + * It's ready if it has no remaining dependencies, and it belongs in + * the current restore pass, and it is currently a member of the + * pending list (that check is needed to prevent double restore in + * some cases where a list-file forces out-of-order restoring). + * However, if ready_list == NULL then caller doesn't want any list + * memberships changed. */ if (otherte->depCount == 0 && _tocEntryRestorePass(otherte) == AH->restorePass && - otherte->par_prev != NULL) + otherte->par_prev != NULL && + ready_list != NULL) { - /* It must be in the pending list, so remove it ... */ + /* Remove it from pending list ... */ par_list_remove(otherte); /* ... and add to ready_list */ par_list_append(ready_list, otherte); From 2cd70845240087da205695baedab6412342d1dbe Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 20 Aug 2017 11:19:07 -0700 Subject: [PATCH 0014/1087] Change tupledesc->attrs[n] to TupleDescAttr(tupledesc, n). This is a mechanical change in preparation for a later commit that will change the layout of TupleDesc. 
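That is, code that used to read

    Form_pg_attribute attr = tupdesc->attrs[i];

now reads

    Form_pg_attribute attr = TupleDescAttr(tupdesc, i);

where the macro is, for now, just a thin wrapper around the existing
attrs array.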
Introducing a macro to abstract the details of where attributes are stored will allow us to change that in separate step and revise it in future. Author: Thomas Munro, editorialized by Andres Freund Reviewed-By: Andres Freund Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com --- contrib/dblink/dblink.c | 31 +++-- contrib/file_fdw/file_fdw.c | 6 +- contrib/hstore/hstore_io.c | 22 +-- contrib/pageinspect/heapfuncs.c | 8 +- contrib/pageinspect/rawpage.c | 2 +- contrib/postgres_fdw/deparse.c | 6 +- contrib/postgres_fdw/postgres_fdw.c | 10 +- contrib/spi/timetravel.c | 2 +- contrib/tablefunc/tablefunc.c | 42 +++--- contrib/tcn/tcn.c | 3 +- contrib/test_decoding/test_decoding.c | 2 +- src/backend/access/brin/brin.c | 10 +- src/backend/access/brin/brin_inclusion.c | 6 +- src/backend/access/brin/brin_minmax.c | 6 +- src/backend/access/brin/brin_tuple.c | 2 +- src/backend/access/common/heaptuple.c | 95 ++++++------- src/backend/access/common/indextuple.c | 58 ++++---- src/backend/access/common/printsimple.c | 4 +- src/backend/access/common/printtup.c | 22 +-- src/backend/access/common/tupconvert.c | 47 ++++--- src/backend/access/common/tupdesc.c | 40 +++--- src/backend/access/gin/ginbulk.c | 3 +- src/backend/access/gin/ginget.c | 2 +- src/backend/access/gin/ginutil.c | 14 +- src/backend/access/gist/gistbuild.c | 4 +- src/backend/access/heap/heapam.c | 8 +- src/backend/access/heap/tuptoaster.c | 38 +++--- src/backend/access/spgist/spgutils.c | 2 +- src/backend/bootstrap/bootstrap.c | 8 +- src/backend/catalog/heap.c | 24 ++-- src/backend/catalog/index.c | 15 +- src/backend/catalog/toasting.c | 24 ++-- src/backend/commands/analyze.c | 2 +- src/backend/commands/cluster.c | 2 +- src/backend/commands/copy.c | 56 ++++---- src/backend/commands/createas.c | 2 +- src/backend/commands/indexcmds.c | 4 +- src/backend/commands/matview.c | 3 +- src/backend/commands/tablecmds.c | 57 +++++--- src/backend/commands/typecmds.c | 8 +- src/backend/commands/view.c | 4 +- src/backend/executor/execExpr.c | 11 +- src/backend/executor/execExprInterp.c | 14 +- src/backend/executor/execJunk.c | 2 +- src/backend/executor/execMain.c | 18 +-- src/backend/executor/execReplication.c | 2 +- src/backend/executor/execSRF.c | 4 +- src/backend/executor/execScan.c | 2 +- src/backend/executor/execTuples.c | 13 +- src/backend/executor/execUtils.c | 6 +- src/backend/executor/functions.c | 4 +- src/backend/executor/nodeAgg.c | 6 +- src/backend/executor/nodeModifyTable.c | 3 +- src/backend/executor/nodeSubplan.c | 4 +- src/backend/executor/nodeTableFuncscan.c | 15 +- src/backend/executor/nodeValuesscan.c | 6 +- src/backend/executor/spi.c | 14 +- src/backend/executor/tqueue.c | 4 +- src/backend/executor/tstoreReceiver.c | 13 +- src/backend/optimizer/prep/preptlist.c | 2 +- src/backend/optimizer/prep/prepunion.c | 6 +- src/backend/optimizer/util/clauses.c | 2 +- src/backend/optimizer/util/plancat.c | 9 +- src/backend/parser/analyze.c | 2 +- src/backend/parser/parse_coerce.c | 9 +- src/backend/parser/parse_func.c | 2 +- src/backend/parser/parse_relation.c | 32 ++--- src/backend/parser/parse_target.c | 14 +- src/backend/parser/parse_utilcmd.c | 15 +- src/backend/replication/logical/proto.c | 8 +- src/backend/replication/logical/relation.c | 5 +- .../replication/logical/reorderbuffer.c | 2 +- src/backend/replication/logical/worker.c | 6 +- src/backend/replication/pgoutput/pgoutput.c | 2 +- src/backend/rewrite/rewriteDefine.c | 2 +- src/backend/rewrite/rewriteHandler.c | 8 +- 
 src/backend/utils/adt/json.c           |   8 +-
 src/backend/utils/adt/jsonb.c          |   8 +-
 src/backend/utils/adt/jsonfuncs.c      |   2 +-
 src/backend/utils/adt/orderedsetaggs.c |   6 +-
 src/backend/utils/adt/rowtypes.c       | 129 ++++++++++--------
 src/backend/utils/adt/ruleutils.c      |  16 ++-
 src/backend/utils/adt/tid.c            |   6 +-
 src/backend/utils/adt/xml.c            |  15 +-
 src/backend/utils/cache/catcache.c     |   5 +-
 src/backend/utils/cache/relcache.c     |  55 ++++----
 src/backend/utils/cache/typcache.c     |   7 +-
 src/backend/utils/fmgr/funcapi.c       |  15 +-
 src/backend/utils/misc/pg_config.c     |   4 +-
 src/include/access/htup_details.h      |   6 +-
 src/include/access/itup.h              |   6 +-
 src/include/access/tupdesc.h           |   2 +
 src/pl/plperl/plperl.c                 |  26 ++--
 src/pl/plpgsql/src/pl_comp.c           |   2 +-
 src/pl/plpgsql/src/pl_exec.c           |  50 ++++---
 src/pl/plpython/plpy_exec.c            |   6 +-
 src/pl/plpython/plpy_resultobject.c    |  18 ++-
 src/pl/plpython/plpy_typeio.c          |  41 +++---
 src/pl/tcl/pltcl.c                     |  23 ++--
 src/test/regress/regress.c             |   4 +-
 100 files changed, 805 insertions(+), 626 deletions(-)

diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 81136b131c..3113b07ab8 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -2172,14 +2172,16 @@ get_sql_insert(Relation rel, int *pkattnums, int pknumatts, char **src_pkattvals
 	needComma = false;
 	for (i = 0; i < natts; i++)
 	{
-		if (tupdesc->attrs[i]->attisdropped)
+		Form_pg_attribute att = TupleDescAttr(tupdesc, i);
+
+		if (att->attisdropped)
 			continue;
 
 		if (needComma)
 			appendStringInfoChar(&buf, ',');
 
 		appendStringInfoString(&buf,
-							   quote_ident_cstr(NameStr(tupdesc->attrs[i]->attname)));
+							   quote_ident_cstr(NameStr(att->attname)));
 		needComma = true;
 	}
 
@@ -2191,7 +2193,7 @@ get_sql_insert(Relation rel, int *pkattnums, int pknumatts, char **src_pkattvals
 	needComma = false;
 	for (i = 0; i < natts; i++)
 	{
-		if (tupdesc->attrs[i]->attisdropped)
+		if (TupleDescAttr(tupdesc, i)->attisdropped)
 			continue;
 
 		if (needComma)
@@ -2237,12 +2239,13 @@ get_sql_delete(Relation rel, int *pkattnums, int pknumatts, char **tgt_pkattvals
 	for (i = 0; i < pknumatts; i++)
 	{
 		int			pkattnum = pkattnums[i];
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, pkattnum);
 
 		if (i > 0)
 			appendStringInfoString(&buf, " AND ");
 
 		appendStringInfoString(&buf,
-							   quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum]->attname)));
+							   quote_ident_cstr(NameStr(attr->attname)));
 
 		if (tgt_pkattvals[i] != NULL)
 			appendStringInfo(&buf, " = %s",
@@ -2289,14 +2292,16 @@ get_sql_update(Relation rel, int *pkattnums, int pknumatts, char **src_pkattvals
 	needComma = false;
 	for (i = 0; i < natts; i++)
 	{
-		if (tupdesc->attrs[i]->attisdropped)
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, i);
+
+		if (attr->attisdropped)
 			continue;
 
 		if (needComma)
 			appendStringInfoString(&buf, ", ");
 
 		appendStringInfo(&buf, "%s = ",
-						 quote_ident_cstr(NameStr(tupdesc->attrs[i]->attname)));
+						 quote_ident_cstr(NameStr(attr->attname)));
 
 		key = get_attnum_pk_pos(pkattnums, pknumatts, i);
 
@@ -2320,12 +2325,13 @@ get_sql_update(Relation rel, int *pkattnums, int pknumatts, char **src_pkattvals
 	for (i = 0; i < pknumatts; i++)
 	{
 		int			pkattnum = pkattnums[i];
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, pkattnum);
 
 		if (i > 0)
 			appendStringInfoString(&buf, " AND ");
 
 		appendStringInfoString(&buf,
-							   quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum]->attname)));
+							   quote_ident_cstr(NameStr(attr->attname)));
 
 		val = tgt_pkattvals[i];
 
@@ -2409,14 +2415,16 @@ get_tuple_of_interest(Relation rel, int *pkattnums, int pknumatts, char **src_pk
 	for (i = 0; i < natts; i++)
 	{
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, i);
+
 		if (i > 0)
 			appendStringInfoString(&buf, ", ");
 
-		if (tupdesc->attrs[i]->attisdropped)
+		if (attr->attisdropped)
 			appendStringInfoString(&buf, "NULL");
 		else
 			appendStringInfoString(&buf,
-								   quote_ident_cstr(NameStr(tupdesc->attrs[i]->attname)));
+								   quote_ident_cstr(NameStr(attr->attname)));
 	}
 
 	appendStringInfo(&buf, " FROM %s WHERE ", relname);
@@ -2424,12 +2432,13 @@ get_tuple_of_interest(Relation rel, int *pkattnums, int pknumatts, char **src_pk
 	for (i = 0; i < pknumatts; i++)
 	{
 		int			pkattnum = pkattnums[i];
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, pkattnum);
 
 		if (i > 0)
 			appendStringInfoString(&buf, " AND ");
 
 		appendStringInfoString(&buf,
-							   quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum]->attname)));
+							   quote_ident_cstr(NameStr(attr->attname)));
 
 		if (src_pkattvals[i] != NULL)
 			appendStringInfo(&buf, " = %s",
@@ -2894,7 +2903,7 @@ validate_pkattnums(Relation rel,
 		for (j = 0; j < natts; j++)
 		{
 			/* dropped columns don't count */
-			if (tupdesc->attrs[j]->attisdropped)
+			if (TupleDescAttr(tupdesc, j)->attisdropped)
 				continue;
 
 			if (++lnum == pkattnum)
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index 2396bd442f..94e50e92f7 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/contrib/file_fdw/file_fdw.c
@@ -430,7 +430,7 @@ get_file_fdw_attribute_options(Oid relid)
 	/* Retrieve FDW options for all user-defined attributes. */
 	for (attnum = 1; attnum <= natts; attnum++)
 	{
-		Form_pg_attribute attr = tupleDesc->attrs[attnum - 1];
+		Form_pg_attribute attr = TupleDescAttr(tupleDesc, attnum - 1);
 		List	   *options;
 		ListCell   *lc;
 
@@ -898,7 +898,7 @@ check_selective_binary_conversion(RelOptInfo *baserel,
 		/* Get user attributes. */
 		if (attnum > 0)
 		{
-			Form_pg_attribute attr = tupleDesc->attrs[attnum - 1];
+			Form_pg_attribute attr = TupleDescAttr(tupleDesc, attnum - 1);
 			char	   *attname = NameStr(attr->attname);
 
 			/* Skip dropped attributes (probably shouldn't see any here). */
@@ -912,7 +912,7 @@ check_selective_binary_conversion(RelOptInfo *baserel,
 	numattrs = 0;
 	for (i = 0; i < tupleDesc->natts; i++)
 	{
-		Form_pg_attribute attr = tupleDesc->attrs[i];
+		Form_pg_attribute attr = TupleDescAttr(tupleDesc, i);
 
 		if (attr->attisdropped)
 			continue;
diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c
index e03005c923..a44c1b2235 100644
--- a/contrib/hstore/hstore_io.c
+++ b/contrib/hstore/hstore_io.c
@@ -855,15 +855,16 @@ hstore_from_record(PG_FUNCTION_ARGS)
 	for (i = 0, j = 0; i < ncolumns; ++i)
 	{
 		ColumnIOData *column_info = &my_extra->columns[i];
-		Oid			column_type = tupdesc->attrs[i]->atttypid;
+		Form_pg_attribute att = TupleDescAttr(tupdesc, i);
+		Oid			column_type = att->atttypid;
 		char	   *value;
 
 		/* Ignore dropped columns in datatype */
-		if (tupdesc->attrs[i]->attisdropped)
+		if (att->attisdropped)
 			continue;
 
-		pairs[j].key = NameStr(tupdesc->attrs[i]->attname);
-		pairs[j].keylen = hstoreCheckKeyLen(strlen(NameStr(tupdesc->attrs[i]->attname)));
+		pairs[j].key = NameStr(att->attname);
+		pairs[j].keylen = hstoreCheckKeyLen(strlen(NameStr(att->attname)));
 
 		if (!nulls || nulls[i])
 		{
@@ -1034,21 +1035,22 @@ hstore_populate_record(PG_FUNCTION_ARGS)
 	for (i = 0; i < ncolumns; ++i)
 	{
 		ColumnIOData *column_info = &my_extra->columns[i];
-		Oid			column_type = tupdesc->attrs[i]->atttypid;
+		Form_pg_attribute att = TupleDescAttr(tupdesc, i);
+		Oid			column_type = att->atttypid;
 		char	   *value;
 		int			idx;
 		int			vallen;
 
 		/* Ignore dropped columns in datatype */
-		if (tupdesc->attrs[i]->attisdropped)
+		if (att->attisdropped)
 		{
 			nulls[i] = true;
 			continue;
 		}
 
 		idx = hstoreFindKey(hs, 0,
-							NameStr(tupdesc->attrs[i]->attname),
-							strlen(NameStr(tupdesc->attrs[i]->attname)));
+							NameStr(att->attname),
+							strlen(NameStr(att->attname)));
 
 		/*
 		 * we can't just skip here if the key wasn't found since we might have
@@ -1082,7 +1084,7 @@ hstore_populate_record(PG_FUNCTION_ARGS)
 			 */
 			values[i] = InputFunctionCall(&column_info->proc, NULL,
 										  column_info->typioparam,
-										  tupdesc->attrs[i]->atttypmod);
+										  att->atttypmod);
 			nulls[i] = true;
 		}
 		else
@@ -1094,7 +1096,7 @@ hstore_populate_record(PG_FUNCTION_ARGS)
 			values[i] = InputFunctionCall(&column_info->proc, value,
 										  column_info->typioparam,
-										  tupdesc->attrs[i]->atttypmod);
+										  att->atttypmod);
 			nulls[i] = false;
 		}
 	}
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index 72d1776a4a..ca4d3f530f 100644
--- a/contrib/pageinspect/heapfuncs.c
+++ b/contrib/pageinspect/heapfuncs.c
@@ -316,7 +316,7 @@ tuple_data_split_internal(Oid relid, char *tupdata,
 		bool		is_null;
 		bytea	   *attr_data = NULL;
 
-		attr = tupdesc->attrs[i];
+		attr = TupleDescAttr(tupdesc, i);
 		is_null = (t_infomask & HEAP_HASNULL) && att_isnull(i, t_bits);
 
 		/*
@@ -334,7 +334,7 @@ tuple_data_split_internal(Oid relid, char *tupdata,
 
 			if (attr->attlen == -1)
 			{
-				off = att_align_pointer(off, tupdesc->attrs[i]->attalign, -1,
+				off = att_align_pointer(off, attr->attalign, -1,
 										tupdata + off);
 
 				/*
@@ -353,7 +353,7 @@ tuple_data_split_internal(Oid relid, char *tupdata,
 			}
 			else
 			{
-				off = att_align_nominal(off, tupdesc->attrs[i]->attalign);
+				off = att_align_nominal(off, attr->attalign);
 				len = attr->attlen;
 			}
 
@@ -371,7 +371,7 @@ tuple_data_split_internal(Oid relid, char *tupdata,
 				memcpy(VARDATA(attr_data), tupdata + off, len);
 			}
 
-			off = att_addlength_pointer(off, tupdesc->attrs[i]->attlen,
+			off = att_addlength_pointer(off, attr->attlen,
 										tupdata + off);
 		}
 
diff --git a/contrib/pageinspect/rawpage.c b/contrib/pageinspect/rawpage.c
index e9d3131bda..25af22f453 100644
--- a/contrib/pageinspect/rawpage.c
+++ b/contrib/pageinspect/rawpage.c
@@ -253,7 +253,7 @@ page_header(PG_FUNCTION_ARGS)
 	lsn = PageGetLSN(page);
 
 	/* pageinspect >= 1.2 uses pg_lsn instead of text for the LSN field. */
-	if (tupdesc->attrs[0]->atttypid == TEXTOID)
+	if (TupleDescAttr(tupdesc, 0)->atttypid == TEXTOID)
 	{
 		char		lsnchar[64];
 
diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c
index 285cf1b2ee..0876589fe5 100644
--- a/contrib/postgres_fdw/deparse.c
+++ b/contrib/postgres_fdw/deparse.c
@@ -1115,7 +1115,7 @@ deparseTargetList(StringInfo buf,
 	first = true;
 	for (i = 1; i <= tupdesc->natts; i++)
 	{
-		Form_pg_attribute attr = tupdesc->attrs[i - 1];
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, i - 1);
 
 		/* Ignore dropped attributes. */
 		if (attr->attisdropped)
@@ -1851,7 +1851,7 @@ deparseAnalyzeSql(StringInfo buf, Relation rel, List **retrieved_attrs)
 	for (i = 0; i < tupdesc->natts; i++)
 	{
 		/* Ignore dropped columns. */
-		if (tupdesc->attrs[i]->attisdropped)
+		if (TupleDescAttr(tupdesc, i)->attisdropped)
 			continue;
 
 		if (!first)
@@ -1859,7 +1859,7 @@ deparseAnalyzeSql(StringInfo buf, Relation rel, List **retrieved_attrs)
 		first = false;
 
 		/* Use attribute name or column_name option. */
-		colname = NameStr(tupdesc->attrs[i]->attname);
+		colname = NameStr(TupleDescAttr(tupdesc, i)->attname);
 		options = GetForeignColumnOptions(relid, i + 1);
 
 		foreach(lc, options)
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index a30afca1d6..32dc4e6301 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -1575,7 +1575,7 @@ postgresPlanForeignModify(PlannerInfo *root,
 
 		for (attnum = 1; attnum <= tupdesc->natts; attnum++)
 		{
-			Form_pg_attribute attr = tupdesc->attrs[attnum - 1];
+			Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1);
 
 			if (!attr->attisdropped)
 				targetAttrs = lappend_int(targetAttrs, attnum);
@@ -1675,6 +1675,7 @@ postgresBeginForeignModify(ModifyTableState *mtstate,
 	Oid			typefnoid;
 	bool		isvarlena;
 	ListCell   *lc;
+	TupleDesc	tupdesc = RelationGetDescr(rel);
 
 	/*
 	 * Do nothing in EXPLAIN (no ANALYZE) case.  resultRelInfo->ri_FdwState
@@ -1719,7 +1720,7 @@ postgresBeginForeignModify(ModifyTableState *mtstate,
 
 	/* Prepare for input conversion of RETURNING results. */
 	if (fmstate->has_returning)
-		fmstate->attinmeta = TupleDescGetAttInMetadata(RelationGetDescr(rel));
+		fmstate->attinmeta = TupleDescGetAttInMetadata(tupdesc);
 
 	/* Prepare for output conversion of parameters used in prepared stmt. */
 	n_params = list_length(fmstate->target_attrs) + 1;
@@ -1748,7 +1749,7 @@ postgresBeginForeignModify(ModifyTableState *mtstate,
 		foreach(lc, fmstate->target_attrs)
 		{
 			int			attnum = lfirst_int(lc);
-			Form_pg_attribute attr = RelationGetDescr(rel)->attrs[attnum - 1];
+			Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1);
 
 			Assert(!attr->attisdropped);
 
@@ -5090,9 +5091,10 @@ conversion_error_callback(void *arg)
 	{
 		/* error occurred in a scan against a foreign table */
 		TupleDesc	tupdesc = RelationGetDescr(errpos->rel);
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, errpos->cur_attno - 1);
 
 		if (errpos->cur_attno > 0 && errpos->cur_attno <= tupdesc->natts)
-			attname = NameStr(tupdesc->attrs[errpos->cur_attno - 1]->attname);
+			attname = NameStr(attr->attname);
 		else if (errpos->cur_attno == SelfItemPointerAttributeNumber)
 			attname = "ctid";
 		else if (errpos->cur_attno == ObjectIdAttributeNumber)
diff --git a/contrib/spi/timetravel.c b/contrib/spi/timetravel.c
index f7905e20db..2c66d888df 100644
--- a/contrib/spi/timetravel.c
+++ b/contrib/spi/timetravel.c
@@ -328,7 +328,7 @@ timetravel(PG_FUNCTION_ARGS)
 		for (i = 1; i <= natts; i++)
 		{
 			ctypes[i - 1] = SPI_gettypeid(tupdesc, i);
-			if (!(tupdesc->attrs[i - 1]->attisdropped)) /* skip dropped columns */
+			if (!(TupleDescAttr(tupdesc, i - 1)->attisdropped)) /* skip dropped columns */
 			{
 				snprintf(sql + strlen(sql), sizeof(sql) - strlen(sql), "%c$%d", separ, i);
 				separ = ',';
diff --git a/contrib/tablefunc/tablefunc.c b/contrib/tablefunc/tablefunc.c
index 0bc8177b61..7369c71351 100644
--- a/contrib/tablefunc/tablefunc.c
+++ b/contrib/tablefunc/tablefunc.c
@@ -1421,7 +1421,7 @@ build_tuplestore_recursively(char *key_fld,
  * Check expected (query runtime) tupdesc suitable for Connectby
  */
 static void
-validateConnectbyTupleDesc(TupleDesc tupdesc, bool show_branch, bool show_serial)
+validateConnectbyTupleDesc(TupleDesc td, bool show_branch, bool show_serial)
 {
 	int			serial_column = 0;
 
@@ -1431,7 +1431,7 @@ validateConnectbyTupleDesc(TupleDesc tupdesc, bool show_branch, bool show_serial
 	/* are there the correct number of columns */
 	if (show_branch)
 	{
-		if (tupdesc->natts != (CONNECTBY_NCOLS + serial_column))
+		if (td->natts != (CONNECTBY_NCOLS + serial_column))
 			ereport(ERROR,
 					(errcode(ERRCODE_DATATYPE_MISMATCH),
 					 errmsg("invalid return type"),
@@ -1440,7 +1440,7 @@ validateConnectbyTupleDesc(TupleDesc tupdesc, bool show_branch, bool show_serial
 	}
 	else
 	{
-		if (tupdesc->natts != CONNECTBY_NCOLS_NOBRANCH + serial_column)
+		if (td->natts != CONNECTBY_NCOLS_NOBRANCH + serial_column)
 			ereport(ERROR,
 					(errcode(ERRCODE_DATATYPE_MISMATCH),
 					 errmsg("invalid return type"),
@@ -1449,14 +1449,14 @@ validateConnectbyTupleDesc(TupleDesc tupdesc, bool show_branch, bool show_serial
 	}
 
 	/* check that the types of the first two columns match */
-	if (tupdesc->attrs[0]->atttypid != tupdesc->attrs[1]->atttypid)
+	if (TupleDescAttr(td, 0)->atttypid != TupleDescAttr(td, 1)->atttypid)
 		ereport(ERROR,
 				(errcode(ERRCODE_DATATYPE_MISMATCH),
 				 errmsg("invalid return type"),
 				 errdetail("First two columns must be the same type.")));
 
 	/* check that the type of the third column is INT4 */
-	if (tupdesc->attrs[2]->atttypid != INT4OID)
+	if (TupleDescAttr(td, 2)->atttypid != INT4OID)
 		ereport(ERROR,
 				(errcode(ERRCODE_DATATYPE_MISMATCH),
 				 errmsg("invalid return type"),
@@ -1464,7 +1464,7 @@ validateConnectbyTupleDesc(TupleDesc tupdesc, bool show_branch, bool show_serial
 					format_type_be(INT4OID))));
 
 	/* check that the type of the fourth column is TEXT if applicable */
-	if (show_branch && tupdesc->attrs[3]->atttypid != TEXTOID)
+	if (show_branch && TupleDescAttr(td, 3)->atttypid != TEXTOID)
 		ereport(ERROR,
 				(errcode(ERRCODE_DATATYPE_MISMATCH),
 				 errmsg("invalid return type"),
@@ -1472,7 +1472,8 @@ validateConnectbyTupleDesc(TupleDesc tupdesc, bool show_branch, bool show_serial
 					format_type_be(TEXTOID))));
 
 	/* check that the type of the fifth column is INT4 */
-	if (show_branch && show_serial && tupdesc->attrs[4]->atttypid != INT4OID)
+	if (show_branch && show_serial &&
+		TupleDescAttr(td, 4)->atttypid != INT4OID)
 		ereport(ERROR,
 				(errcode(ERRCODE_DATATYPE_MISMATCH),
 				 errmsg("query-specified return tuple not valid for Connectby: "
@@ -1480,7 +1481,8 @@ validateConnectbyTupleDesc(TupleDesc tupdesc, bool show_branch, bool show_serial
 					format_type_be(INT4OID))));
 
 	/* check that the type of the fifth column is INT4 */
-	if (!show_branch && show_serial && tupdesc->attrs[3]->atttypid != INT4OID)
+	if (!show_branch && show_serial &&
+		TupleDescAttr(td, 3)->atttypid != INT4OID)
 		ereport(ERROR,
 				(errcode(ERRCODE_DATATYPE_MISMATCH),
 				 errmsg("query-specified return tuple not valid for Connectby: "
@@ -1514,10 +1516,10 @@ compatConnectbyTupleDescs(TupleDesc ret_tupdesc, TupleDesc sql_tupdesc)
 	 * These columns must match the result type indicated by the calling
 	 * query.
 	 */
-	ret_atttypid = ret_tupdesc->attrs[0]->atttypid;
-	sql_atttypid = sql_tupdesc->attrs[0]->atttypid;
-	ret_atttypmod = ret_tupdesc->attrs[0]->atttypmod;
-	sql_atttypmod = sql_tupdesc->attrs[0]->atttypmod;
+	ret_atttypid = TupleDescAttr(ret_tupdesc, 0)->atttypid;
+	sql_atttypid = TupleDescAttr(sql_tupdesc, 0)->atttypid;
+	ret_atttypmod = TupleDescAttr(ret_tupdesc, 0)->atttypmod;
+	sql_atttypmod = TupleDescAttr(sql_tupdesc, 0)->atttypmod;
 	if (ret_atttypid != sql_atttypid ||
 		(ret_atttypmod >= 0 && ret_atttypmod != sql_atttypmod))
 		ereport(ERROR,
@@ -1528,10 +1530,10 @@ compatConnectbyTupleDescs(TupleDesc ret_tupdesc, TupleDesc sql_tupdesc)
 						format_type_with_typemod(ret_atttypid, ret_atttypmod),
 						format_type_with_typemod(sql_atttypid, sql_atttypmod))));
 
-	ret_atttypid = ret_tupdesc->attrs[1]->atttypid;
-	sql_atttypid = sql_tupdesc->attrs[1]->atttypid;
-	ret_atttypmod = ret_tupdesc->attrs[1]->atttypmod;
-	sql_atttypmod = sql_tupdesc->attrs[1]->atttypmod;
+	ret_atttypid = TupleDescAttr(ret_tupdesc, 1)->atttypid;
+	sql_atttypid = TupleDescAttr(sql_tupdesc, 1)->atttypid;
+	ret_atttypmod = TupleDescAttr(ret_tupdesc, 1)->atttypmod;
+	sql_atttypmod = TupleDescAttr(sql_tupdesc, 1)->atttypmod;
 	if (ret_atttypid != sql_atttypid ||
 		(ret_atttypmod >= 0 && ret_atttypmod != sql_atttypmod))
 		ereport(ERROR,
@@ -1562,8 +1564,8 @@ compatCrosstabTupleDescs(TupleDesc ret_tupdesc, TupleDesc sql_tupdesc)
 		return false;
 
 	/* check the rowid types match */
-	ret_atttypid = ret_tupdesc->attrs[0]->atttypid;
-	sql_atttypid = sql_tupdesc->attrs[0]->atttypid;
+	ret_atttypid = TupleDescAttr(ret_tupdesc, 0)->atttypid;
+	sql_atttypid = TupleDescAttr(sql_tupdesc, 0)->atttypid;
 	if (ret_atttypid != sql_atttypid)
 		ereport(ERROR,
 				(errcode(ERRCODE_DATATYPE_MISMATCH),
@@ -1576,10 +1578,10 @@ compatCrosstabTupleDescs(TupleDesc ret_tupdesc, TupleDesc sql_tupdesc)
 	 * attribute [2] of the sql tuple should match attributes [1] to [natts]
 	 * of the return tuple
	 */
-	sql_attr = sql_tupdesc->attrs[2];
+	sql_attr = TupleDescAttr(sql_tupdesc, 2);
 	for (i = 1; i < ret_tupdesc->natts; i++)
 	{
-		ret_attr = ret_tupdesc->attrs[i];
+		ret_attr = TupleDescAttr(ret_tupdesc, i);
 
 		if (ret_attr->atttypid != sql_attr->atttypid)
 			return false;
diff --git a/contrib/tcn/tcn.c b/contrib/tcn/tcn.c
index 0b9acbf848..88674901bb 100644
--- a/contrib/tcn/tcn.c
+++ b/contrib/tcn/tcn.c
@@ -153,9 +153,10 @@ triggered_change_notification(PG_FUNCTION_ARGS)
 				for (i = 0; i < numatts; i++)
 				{
 					int			colno = index->indkey.values[i];
+					Form_pg_attribute attr = TupleDescAttr(tupdesc, colno - 1);
 
 					appendStringInfoCharMacro(payload, ',');
-					strcpy_quoted(payload, NameStr((tupdesc->attrs[colno - 1])->attname), '"');
+					strcpy_quoted(payload, NameStr(attr->attname), '"');
 					appendStringInfoCharMacro(payload, '=');
 					strcpy_quoted(payload, SPI_getvalue(trigtuple, tupdesc, colno), '\'');
 				}
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index a1a7c2ae0c..135b3b7638 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -330,7 +330,7 @@ tuple_to_stringinfo(StringInfo s, TupleDesc tupdesc, HeapTuple tuple, bool skip_
 		Datum		origval;	/* possibly toasted Datum */
 		bool		isnull;		/* column is null? */
 
-		attr = tupdesc->attrs[natt];
+		attr = TupleDescAttr(tupdesc, natt);
 
 		/*
 		 * don't print dropped columns, we can't be sure everything is
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index efebeb035a..b3aa6d1ced 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -473,7 +473,8 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 			 */
 			Assert((key->sk_flags & SK_ISNULL) ||
 				   (key->sk_collation ==
-					bdesc->bd_tupdesc->attrs[keyattno - 1]->attcollation));
+					TupleDescAttr(bdesc->bd_tupdesc,
+								  keyattno - 1)->attcollation));
 
 			/* First time this column? look up consistent function */
 			if (consistentFn[keyattno - 1].fn_oid == InvalidOid)
@@ -622,6 +623,7 @@ brinbuildCallback(Relation index,
 	{
 		FmgrInfo   *addValue;
 		BrinValues *col;
+		Form_pg_attribute attr = TupleDescAttr(state->bs_bdesc->bd_tupdesc, i);
 
 		col = &state->bs_dtuple->bt_columns[i];
 		addValue = index_getprocinfo(index, i + 1,
@@ -631,7 +633,7 @@ brinbuildCallback(Relation index,
 		 * Update dtuple state, if and as necessary.
 		 */
 		FunctionCall4Coll(addValue,
-						  state->bs_bdesc->bd_tupdesc->attrs[i]->attcollation,
+						  attr->attcollation,
 						  PointerGetDatum(state->bs_bdesc),
 						  PointerGetDatum(col),
 						  values[i], isnull[i]);
@@ -1019,12 +1021,12 @@ brin_build_desc(Relation rel)
 	for (keyno = 0; keyno < tupdesc->natts; keyno++)
 	{
 		FmgrInfo   *opcInfoFn;
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, keyno);
 
 		opcInfoFn = index_getprocinfo(rel, keyno + 1, BRIN_PROCNUM_OPCINFO);
 
 		opcinfo[keyno] = (BrinOpcInfo *)
-			DatumGetPointer(FunctionCall1(opcInfoFn,
-										  tupdesc->attrs[keyno]->atttypid));
+			DatumGetPointer(FunctionCall1(opcInfoFn, attr->atttypid));
 		totalstored += opcinfo[keyno]->oi_nstored;
 	}
diff --git a/src/backend/access/brin/brin_inclusion.c b/src/backend/access/brin/brin_inclusion.c
index 9c0a058ccb..449ce5ea4c 100644
--- a/src/backend/access/brin/brin_inclusion.c
+++ b/src/backend/access/brin/brin_inclusion.c
@@ -157,7 +157,7 @@ brin_inclusion_add_value(PG_FUNCTION_ARGS)
 	}
 
 	attno = column->bv_attno;
-	attr = bdesc->bd_tupdesc->attrs[attno - 1];
+	attr = TupleDescAttr(bdesc->bd_tupdesc, attno - 1);
 
 	/*
 	 * If the recorded value is null, copy the new value (which we know to be
@@ -516,7 +516,7 @@ brin_inclusion_union(PG_FUNCTION_ARGS)
 		PG_RETURN_VOID();
 
 	attno = col_a->bv_attno;
-	attr = bdesc->bd_tupdesc->attrs[attno - 1];
+	attr = TupleDescAttr(bdesc->bd_tupdesc, attno - 1);
 
 	/*
 	 * Adjust "allnulls".  If A doesn't have values, just copy the values from
@@ -675,7 +675,7 @@ inclusion_get_strategy_procinfo(BrinDesc *bdesc, uint16 attno, Oid subtype,
 		bool		isNull;
 
 		opfamily = bdesc->bd_index->rd_opfamily[attno - 1];
-		attr = bdesc->bd_tupdesc->attrs[attno - 1];
+		attr = TupleDescAttr(bdesc->bd_tupdesc, attno - 1);
 		tuple = SearchSysCache4(AMOPSTRATEGY, ObjectIdGetDatum(opfamily),
 								ObjectIdGetDatum(attr->atttypid),
 								ObjectIdGetDatum(subtype),
diff --git a/src/backend/access/brin/brin_minmax.c b/src/backend/access/brin/brin_minmax.c
index 62fd90aabe..ce503972f8 100644
--- a/src/backend/access/brin/brin_minmax.c
+++ b/src/backend/access/brin/brin_minmax.c
@@ -90,7 +90,7 @@ brin_minmax_add_value(PG_FUNCTION_ARGS)
 	}
 
 	attno = column->bv_attno;
-	attr = bdesc->bd_tupdesc->attrs[attno - 1];
+	attr = TupleDescAttr(bdesc->bd_tupdesc, attno - 1);
 
 	/*
 	 * If the recorded value is null, store the new value (which we know to be
@@ -260,7 +260,7 @@ brin_minmax_union(PG_FUNCTION_ARGS)
 		PG_RETURN_VOID();
 
 	attno = col_a->bv_attno;
-	attr = bdesc->bd_tupdesc->attrs[attno - 1];
+	attr = TupleDescAttr(bdesc->bd_tupdesc, attno - 1);
 
 	/*
 	 * Adjust "allnulls".  If A doesn't have values, just copy the values from
@@ -347,7 +347,7 @@ minmax_get_strategy_procinfo(BrinDesc *bdesc, uint16 attno, Oid subtype,
 		bool		isNull;
 
 		opfamily = bdesc->bd_index->rd_opfamily[attno - 1];
-		attr = bdesc->bd_tupdesc->attrs[attno - 1];
+		attr = TupleDescAttr(bdesc->bd_tupdesc, attno - 1);
 		tuple = SearchSysCache4(AMOPSTRATEGY, ObjectIdGetDatum(opfamily),
 								ObjectIdGetDatum(attr->atttypid),
 								ObjectIdGetDatum(subtype),
diff --git a/src/backend/access/brin/brin_tuple.c b/src/backend/access/brin/brin_tuple.c
index ed5b4b108d..5c035fb203 100644
--- a/src/backend/access/brin/brin_tuple.c
+++ b/src/backend/access/brin/brin_tuple.c
@@ -559,7 +559,7 @@ brin_deconstruct_tuple(BrinDesc *brdesc,
 			 datumno < brdesc->bd_info[attnum]->oi_nstored;
 			 datumno++)
 		{
-			Form_pg_attribute thisatt = diskdsc->attrs[stored];
+			Form_pg_attribute thisatt = TupleDescAttr(diskdsc, stored);
 
 			if (thisatt->attlen == -1)
 			{
diff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c
index 584a202ab5..13ee528e26 100644
--- a/src/backend/access/common/heaptuple.c
+++ b/src/backend/access/common/heaptuple.c
@@ -89,7 +89,6 @@ heap_compute_data_size(TupleDesc tupleDesc,
 	Size		data_length = 0;
 	int			i;
 	int			numberOfAttributes = tupleDesc->natts;
-	Form_pg_attribute *att = tupleDesc->attrs;
 
 	for (i = 0; i < numberOfAttributes; i++)
 	{
@@ -100,7 +99,7 @@ heap_compute_data_size(TupleDesc tupleDesc,
 			continue;
 
 		val = values[i];
-		atti = att[i];
+		atti = TupleDescAttr(tupleDesc, i);
 
 		if (ATT_IS_PACKABLE(atti) &&
 			VARATT_CAN_MAKE_SHORT(DatumGetPointer(val)))
@@ -152,7 +151,6 @@ heap_fill_tuple(TupleDesc tupleDesc,
 	int			bitmask;
 	int			i;
 	int			numberOfAttributes = tupleDesc->natts;
-	Form_pg_attribute *att = tupleDesc->attrs;
 
 #ifdef USE_ASSERT_CHECKING
 	char	   *start = data;
@@ -174,6 +172,7 @@ heap_fill_tuple(TupleDesc tupleDesc,
 
 	for (i = 0; i < numberOfAttributes; i++)
 	{
+		Form_pg_attribute att = TupleDescAttr(tupleDesc, i);
 		Size		data_length;
 
 		if (bit != NULL)
@@ -201,14 +200,14 @@ heap_fill_tuple(TupleDesc tupleDesc,
 		 * an offset.  This is a bit of a hack.
 		 */
-		if (att[i]->attbyval)
+		if (att->attbyval)
 		{
 			/* pass-by-value */
-			data = (char *) att_align_nominal(data, att[i]->attalign);
-			store_att_byval(data, values[i], att[i]->attlen);
-			data_length = att[i]->attlen;
+			data = (char *) att_align_nominal(data, att->attalign);
+			store_att_byval(data, values[i], att->attlen);
+			data_length = att->attlen;
 		}
-		else if (att[i]->attlen == -1)
+		else if (att->attlen == -1)
 		{
 			/* varlena */
 			Pointer		val = DatumGetPointer(values[i]);
@@ -225,7 +224,7 @@ heap_fill_tuple(TupleDesc tupleDesc,
 				ExpandedObjectHeader *eoh = DatumGetEOHP(values[i]);
 
 				data = (char *) att_align_nominal(data,
-												  att[i]->attalign);
+												  att->attalign);
 				data_length = EOH_get_flat_size(eoh);
 				EOH_flatten_into(eoh, data, data_length);
 			}
@@ -243,7 +242,7 @@ heap_fill_tuple(TupleDesc tupleDesc,
 				data_length = VARSIZE_SHORT(val);
 				memcpy(data, val, data_length);
 			}
-			else if (VARLENA_ATT_IS_PACKABLE(att[i]) &&
+			else if (VARLENA_ATT_IS_PACKABLE(att) &&
 					 VARATT_CAN_MAKE_SHORT(val))
 			{
 				/* convert to short varlena -- no alignment */
@@ -255,25 +254,25 @@ heap_fill_tuple(TupleDesc tupleDesc,
 			{
 				/* full 4-byte header varlena */
 				data = (char *) att_align_nominal(data,
-												  att[i]->attalign);
+												  att->attalign);
 				data_length = VARSIZE(val);
 				memcpy(data, val, data_length);
 			}
 		}
-		else if (att[i]->attlen == -2)
+		else if (att->attlen == -2)
 		{
 			/* cstring ... never needs alignment */
 			*infomask |= HEAP_HASVARWIDTH;
-			Assert(att[i]->attalign == 'c');
+			Assert(att->attalign == 'c');
 			data_length = strlen(DatumGetCString(values[i])) + 1;
 			memcpy(data, DatumGetPointer(values[i]), data_length);
 		}
 		else
 		{
 			/* fixed-length pass-by-reference */
-			data = (char *) att_align_nominal(data, att[i]->attalign);
-			Assert(att[i]->attlen > 0);
-			data_length = att[i]->attlen;
+			data = (char *) att_align_nominal(data, att->attalign);
+			Assert(att->attlen > 0);
+			data_length = att->attlen;
 			memcpy(data, DatumGetPointer(values[i]), data_length);
 		}
 
@@ -354,7 +353,6 @@ nocachegetattr(HeapTuple tuple,
 			   TupleDesc tupleDesc)
 {
 	HeapTupleHeader tup = tuple->t_data;
-	Form_pg_attribute *att = tupleDesc->attrs;
 	char	   *tp;				/* ptr to data part of tuple */
 	bits8	   *bp = tup->t_bits;	/* ptr to null bitmap in tuple */
 	bool		slow = false;	/* do we have to walk attrs? */
@@ -404,15 +402,15 @@ nocachegetattr(HeapTuple tuple,
 
 	if (!slow)
 	{
+		Form_pg_attribute att;
+
 		/*
 		 * If we get here, there are no nulls up to and including the target
 		 * attribute.  If we have a cached offset, we can use it.
 		 */
-		if (att[attnum]->attcacheoff >= 0)
-		{
-			return fetchatt(att[attnum],
-							tp + att[attnum]->attcacheoff);
-		}
+		att = TupleDescAttr(tupleDesc, attnum);
+		if (att->attcacheoff >= 0)
+			return fetchatt(att, tp + att->attcacheoff);
 
 		/*
 		 * Otherwise, check for non-fixed-length attrs up to and including
@@ -425,7 +423,7 @@ nocachegetattr(HeapTuple tuple,
 
 			for (j = 0; j <= attnum; j++)
 			{
-				if (att[j]->attlen <= 0)
+				if (TupleDescAttr(tupleDesc, j)->attlen <= 0)
 				{
 					slow = true;
 					break;
@@ -448,29 +446,32 @@ nocachegetattr(HeapTuple tuple,
 		 * fixed-width columns, in hope of avoiding future visits to this
 		 * routine.
 		 */
-		att[0]->attcacheoff = 0;
+		TupleDescAttr(tupleDesc, 0)->attcacheoff = 0;
 
 		/* we might have set some offsets in the slow path previously */
-		while (j < natts && att[j]->attcacheoff > 0)
+		while (j < natts && TupleDescAttr(tupleDesc, j)->attcacheoff > 0)
 			j++;
 
-		off = att[j - 1]->attcacheoff + att[j - 1]->attlen;
+		off = TupleDescAttr(tupleDesc, j - 1)->attcacheoff +
+			TupleDescAttr(tupleDesc, j - 1)->attlen;
 
 		for (; j < natts; j++)
 		{
-			if (att[j]->attlen <= 0)
+			Form_pg_attribute att = TupleDescAttr(tupleDesc, j);
+
+			if (att->attlen <= 0)
 				break;
 
-			off = att_align_nominal(off, att[j]->attalign);
+			off = att_align_nominal(off, att->attalign);
 
-			att[j]->attcacheoff = off;
+			att->attcacheoff = off;
 
-			off += att[j]->attlen;
+			off += att->attlen;
 		}
 
 		Assert(j > attnum);
 
-		off = att[attnum]->attcacheoff;
+		off = TupleDescAttr(tupleDesc, attnum)->attcacheoff;
 	}
 	else
 	{
@@ -490,6 +491,8 @@ nocachegetattr(HeapTuple tuple,
 		off = 0;
 		for (i = 0;; i++)		/* loop exit is at "break" */
 		{
+			Form_pg_attribute att = TupleDescAttr(tupleDesc, i);
+
 			if (HeapTupleHasNulls(tuple) && att_isnull(i, bp))
 			{
 				usecache = false;
@@ -497,9 +500,9 @@ nocachegetattr(HeapTuple tuple,
 			}
 
 			/* If we know the next offset, we can skip the rest */
-			if (usecache && att[i]->attcacheoff >= 0)
-				off = att[i]->attcacheoff;
-			else if (att[i]->attlen == -1)
+			if (usecache && att->attcacheoff >= 0)
+				off = att->attcacheoff;
+			else if (att->attlen == -1)
 			{
 				/*
 				 * We can only cache the offset for a varlena attribute if the
@@ -508,11 +511,11 @@ nocachegetattr(HeapTuple tuple,
 				 * either an aligned or unaligned value.
 				 */
 				if (usecache &&
-					off == att_align_nominal(off, att[i]->attalign))
-					att[i]->attcacheoff = off;
+					off == att_align_nominal(off, att->attalign))
+					att->attcacheoff = off;
 				else
 				{
-					off = att_align_pointer(off, att[i]->attalign, -1,
+					off = att_align_pointer(off, att->attalign, -1,
 											tp + off);
 					usecache = false;
 				}
@@ -520,23 +523,23 @@ nocachegetattr(HeapTuple tuple,
 			else
 			{
 				/* not varlena, so safe to use att_align_nominal */
-				off = att_align_nominal(off, att[i]->attalign);
+				off = att_align_nominal(off, att->attalign);
 
 				if (usecache)
-					att[i]->attcacheoff = off;
+					att->attcacheoff = off;
 			}
 
 			if (i == attnum)
 				break;
 
-			off = att_addlength_pointer(off, att[i]->attlen, tp + off);
+			off = att_addlength_pointer(off, att->attlen, tp + off);
 
-			if (usecache && att[i]->attlen <= 0)
+			if (usecache && att->attlen <= 0)
 				usecache = false;
 		}
 	}
 
-	return fetchatt(att[attnum], tp + off);
+	return fetchatt(TupleDescAttr(tupleDesc, attnum), tp + off);
 }
 
 /* ----------------
@@ -935,7 +938,6 @@ heap_deform_tuple(HeapTuple tuple, TupleDesc tupleDesc,
 {
 	HeapTupleHeader tup = tuple->t_data;
 	bool		hasnulls = HeapTupleHasNulls(tuple);
-	Form_pg_attribute *att = tupleDesc->attrs;
 	int			tdesc_natts = tupleDesc->natts;
 	int			natts;			/* number of atts to extract */
 	int			attnum;
@@ -959,7 +961,7 @@ heap_deform_tuple(HeapTuple tuple, TupleDesc tupleDesc,
 
 	for (attnum = 0; attnum < natts; attnum++)
 	{
-		Form_pg_attribute thisatt = att[attnum];
+		Form_pg_attribute thisatt = TupleDescAttr(tupleDesc, attnum);
 
 		if (hasnulls && att_isnull(attnum, bp))
 		{
@@ -1039,7 +1041,6 @@ slot_deform_tuple(TupleTableSlot *slot, int natts)
 	bool	   *isnull = slot->tts_isnull;
 	HeapTupleHeader tup = tuple->t_data;
 	bool		hasnulls = HeapTupleHasNulls(tuple);
-	Form_pg_attribute *att = tupleDesc->attrs;
 	int			attnum;
 	char	   *tp;				/* ptr to tuple data */
 	long		off;			/* offset in tuple data */
@@ -1068,7 +1069,7 @@ slot_deform_tuple(TupleTableSlot *slot, int natts)
 
 	for (; attnum < natts; attnum++)
 	{
-		Form_pg_attribute thisatt = att[attnum];
+		Form_pg_attribute thisatt = TupleDescAttr(tupleDesc, attnum);
 
 		if (hasnulls && att_isnull(attnum, bp))
 		{
@@ -1209,7 +1210,7 @@ slot_getattr(TupleTableSlot *slot, int attnum, bool *isnull)
 	 * This case should not happen in normal use, but it could happen if we
 	 * are executing a plan cached before the column was dropped.
 	 */
-	if (tupleDesc->attrs[attnum - 1]->attisdropped)
+	if (TupleDescAttr(tupleDesc, attnum - 1)->attisdropped)
 	{
 		*isnull = true;
 		return (Datum) 0;
diff --git a/src/backend/access/common/indextuple.c b/src/backend/access/common/indextuple.c
index 37a21057d0..138671410a 100644
--- a/src/backend/access/common/indextuple.c
+++ b/src/backend/access/common/indextuple.c
@@ -63,7 +63,7 @@ index_form_tuple(TupleDesc tupleDescriptor,
 #ifdef TOAST_INDEX_HACK
 	for (i = 0; i < numberOfAttributes; i++)
 	{
-		Form_pg_attribute att = tupleDescriptor->attrs[i];
+		Form_pg_attribute att = TupleDescAttr(tupleDescriptor, i);
 
 		untoasted_values[i] = values[i];
 		untoasted_free[i] = false;
@@ -209,7 +209,6 @@ nocache_index_getattr(IndexTuple tup,
 					  int attnum,
 					  TupleDesc tupleDesc)
 {
-	Form_pg_attribute *att = tupleDesc->attrs;
 	char	   *tp;				/* ptr to data part of tuple */
 	bits8	   *bp = NULL;		/* ptr to null bitmap in tuple */
 	bool		slow = false;	/* do we have to walk attrs? */
@@ -271,15 +270,15 @@ nocache_index_getattr(IndexTuple tup,
 
 	if (!slow)
 	{
+		Form_pg_attribute att;
+
 		/*
 		 * If we get here, there are no nulls up to and including the target
 		 * attribute.  If we have a cached offset, we can use it.
 		 */
-		if (att[attnum]->attcacheoff >= 0)
-		{
-			return fetchatt(att[attnum],
-							tp + att[attnum]->attcacheoff);
-		}
+		att = TupleDescAttr(tupleDesc, attnum);
+		if (att->attcacheoff >= 0)
+			return fetchatt(att, tp + att->attcacheoff);
 
 		/*
		 * Otherwise, check for non-fixed-length attrs up to and including
@@ -292,7 +291,7 @@ nocache_index_getattr(IndexTuple tup,
 
 			for (j = 0; j <= attnum; j++)
 			{
-				if (att[j]->attlen <= 0)
+				if (TupleDescAttr(tupleDesc, j)->attlen <= 0)
 				{
 					slow = true;
 					break;
@@ -315,29 +314,32 @@ nocache_index_getattr(IndexTuple tup,
 		 * fixed-width columns, in hope of avoiding future visits to this
 		 * routine.
 		 */
-		att[0]->attcacheoff = 0;
+		TupleDescAttr(tupleDesc, 0)->attcacheoff = 0;
 
 		/* we might have set some offsets in the slow path previously */
-		while (j < natts && att[j]->attcacheoff > 0)
+		while (j < natts && TupleDescAttr(tupleDesc, j)->attcacheoff > 0)
 			j++;
 
-		off = att[j - 1]->attcacheoff + att[j - 1]->attlen;
+		off = TupleDescAttr(tupleDesc, j - 1)->attcacheoff +
+			TupleDescAttr(tupleDesc, j - 1)->attlen;
 
 		for (; j < natts; j++)
 		{
-			if (att[j]->attlen <= 0)
+			Form_pg_attribute att = TupleDescAttr(tupleDesc, j);
+
+			if (att->attlen <= 0)
 				break;
 
-			off = att_align_nominal(off, att[j]->attalign);
+			off = att_align_nominal(off, att->attalign);
 
-			att[j]->attcacheoff = off;
+			att->attcacheoff = off;
 
-			off += att[j]->attlen;
+			off += att->attlen;
 		}
 
 		Assert(j > attnum);
 
-		off = att[attnum]->attcacheoff;
+		off = TupleDescAttr(tupleDesc, attnum)->attcacheoff;
 	}
 	else
 	{
@@ -357,6 +359,8 @@ nocache_index_getattr(IndexTuple tup,
 		off = 0;
 		for (i = 0;; i++)		/* loop exit is at "break" */
 		{
+			Form_pg_attribute att = TupleDescAttr(tupleDesc, i);
+
 			if (IndexTupleHasNulls(tup) && att_isnull(i, bp))
 			{
 				usecache = false;
@@ -364,9 +368,9 @@ nocache_index_getattr(IndexTuple tup,
 			}
 
 			/* If we know the next offset, we can skip the rest */
-			if (usecache && att[i]->attcacheoff >= 0)
-				off = att[i]->attcacheoff;
-			else if (att[i]->attlen == -1)
+			if (usecache && att->attcacheoff >= 0)
+				off = att->attcacheoff;
+			else if (att->attlen == -1)
 			{
 				/*
 				 * We can only cache the offset for a varlena attribute if the
@@ -375,11 +379,11 @@ nocache_index_getattr(IndexTuple tup,
 				 * either an aligned or unaligned value.
 				 */
 				if (usecache &&
-					off == att_align_nominal(off, att[i]->attalign))
-					att[i]->attcacheoff = off;
+					off == att_align_nominal(off, att->attalign))
+					att->attcacheoff = off;
 				else
 				{
-					off = att_align_pointer(off, att[i]->attalign, -1,
+					off = att_align_pointer(off, att->attalign, -1,
 											tp + off);
 					usecache = false;
 				}
@@ -387,23 +391,23 @@ nocache_index_getattr(IndexTuple tup,
 			else
 			{
 				/* not varlena, so safe to use att_align_nominal */
-				off = att_align_nominal(off, att[i]->attalign);
+				off = att_align_nominal(off, att->attalign);
 
 				if (usecache)
-					att[i]->attcacheoff = off;
+					att->attcacheoff = off;
 			}
 
 			if (i == attnum)
 				break;
 
-			off = att_addlength_pointer(off, att[i]->attlen, tp + off);
+			off = att_addlength_pointer(off, att->attlen, tp + off);
 
-			if (usecache && att[i]->attlen <= 0)
+			if (usecache && att->attlen <= 0)
 				usecache = false;
 		}
 	}
 
-	return fetchatt(att[attnum], tp + off);
+	return fetchatt(TupleDescAttr(tupleDesc, attnum), tp + off);
 }
 
 /*
diff --git a/src/backend/access/common/printsimple.c b/src/backend/access/common/printsimple.c
index c863e859fe..b3e9a26b03 100644
--- a/src/backend/access/common/printsimple.c
+++ b/src/backend/access/common/printsimple.c
@@ -38,7 +38,7 @@ printsimple_startup(DestReceiver *self, int operation, TupleDesc tupdesc)
 
 	for (i = 0; i < tupdesc->natts; ++i)
 	{
-		Form_pg_attribute attr = tupdesc->attrs[i];
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, i);
 
 		pq_sendstring(&buf, NameStr(attr->attname));
 		pq_sendint(&buf, 0, 4);	/* table oid */
@@ -71,7 +71,7 @@ printsimple(TupleTableSlot *slot, DestReceiver *self)
 
 	for (i = 0; i < tupdesc->natts; ++i)
 	{
-		Form_pg_attribute attr = tupdesc->attrs[i];
+		Form_pg_attribute attr = TupleDescAttr(tupdesc, i);
 		Datum		value;
 
 		if (slot->tts_isnull[i])
diff --git a/src/backend/access/common/printtup.c b/src/backend/access/common/printtup.c
index a2ca2d74ae..20d20e623e 100644
--- a/src/backend/access/common/printtup.c
+++ b/src/backend/access/common/printtup.c
@@ -187,7 +187,6 @@ printtup_startup(DestReceiver *self, int operation, TupleDesc typeinfo)
 void
 SendRowDescriptionMessage(TupleDesc typeinfo, List *targetlist, int16 *formats)
 {
-	Form_pg_attribute *attrs = typeinfo->attrs;
 	int			natts = typeinfo->natts;
 	int			proto = PG_PROTOCOL_MAJOR(FrontendProtocol);
 	int			i;
@@ -199,10 +198,11 @@ SendRowDescriptionMessage(TupleDesc typeinfo, List *targetlist, int16 *formats)
 
 	for (i = 0; i < natts; ++i)
 	{
-		Oid			atttypid = attrs[i]->atttypid;
-		int32		atttypmod = attrs[i]->atttypmod;
+		Form_pg_attribute att = TupleDescAttr(typeinfo, i);
+		Oid			atttypid = att->atttypid;
+		int32		atttypmod = att->atttypmod;
 
-		pq_sendstring(&buf, NameStr(attrs[i]->attname));
+		pq_sendstring(&buf, NameStr(att->attname));
 		/* column ID info appears in protocol 3.0 and up */
 		if (proto >= 3)
 		{
@@ -228,7 +228,7 @@ SendRowDescriptionMessage(TupleDesc typeinfo, List *targetlist, int16 *formats)
 		/* If column is a domain, send the base type and typmod instead */
 		atttypid = getBaseTypeAndTypmod(atttypid, &atttypmod);
 		pq_sendint(&buf, (int) atttypid, sizeof(atttypid));
-		pq_sendint(&buf, attrs[i]->attlen, sizeof(attrs[i]->attlen));
+		pq_sendint(&buf, att->attlen, sizeof(att->attlen));
 		pq_sendint(&buf, atttypmod, sizeof(atttypmod));
 		/* format info appears in protocol 3.0 and up */
 		if (proto >= 3)
@@ -268,18 +268,19 @@ printtup_prepare_info(DR_printtup *myState, TupleDesc typeinfo, int numAttrs)
 	{
 		PrinttupAttrInfo *thisState = myState->myinfo + i;
 		int16		format = (formats ? formats[i] : 0);
+		Form_pg_attribute attr = TupleDescAttr(typeinfo, i);
 
 		thisState->format = format;
 		if (format == 0)
 		{
-			getTypeOutputInfo(typeinfo->attrs[i]->atttypid,
+			getTypeOutputInfo(attr->atttypid,
 							  &thisState->typoutput,
 							  &thisState->typisvarlena);
 			fmgr_info(thisState->typoutput, &thisState->finfo);
 		}
 		else if (format == 1)
 		{
-			getTypeBinaryOutputInfo(typeinfo->attrs[i]->atttypid,
+			getTypeBinaryOutputInfo(attr->atttypid,
 									&thisState->typsend,
 									&thisState->typisvarlena);
 			fmgr_info(thisState->typsend, &thisState->finfo);
@@ -513,14 +514,13 @@ void
 debugStartup(DestReceiver *self, int operation, TupleDesc typeinfo)
 {
 	int			natts = typeinfo->natts;
-	Form_pg_attribute *attinfo = typeinfo->attrs;
 	int			i;
 
 	/*
	 * show the return type of the tuples
	 */
 	for (i = 0; i < natts; ++i)
-		printatt((unsigned) i + 1, attinfo[i], NULL);
+		printatt((unsigned) i + 1, TupleDescAttr(typeinfo, i), NULL);
 	printf("\t----\n");
 }
 
@@ -545,12 +545,12 @@ debugtup(TupleTableSlot *slot, DestReceiver *self)
 		attr = slot_getattr(slot, i + 1, &isnull);
 		if (isnull)
 			continue;
-		getTypeOutputInfo(typeinfo->attrs[i]->atttypid,
+		getTypeOutputInfo(TupleDescAttr(typeinfo, i)->atttypid,
 						  &typoutput, &typisvarlena);
 
 		value = OidOutputFunctionCall(typoutput, attr);
 
-		printatt((unsigned) i + 1, typeinfo->attrs[i], value);
+		printatt((unsigned) i + 1, TupleDescAttr(typeinfo, i), value);
 	}
 	printf("\t----\n");
diff --git a/src/backend/access/common/tupconvert.c b/src/backend/access/common/tupconvert.c
index 57e44375ea..3d1bc0635b 100644
--- a/src/backend/access/common/tupconvert.c
+++ b/src/backend/access/common/tupconvert.c
@@ -84,7 +84,7 @@ convert_tuples_by_position(TupleDesc indesc,
 	same = true;
 	for (i = 0; i < n; i++)
 	{
-		Form_pg_attribute att = outdesc->attrs[i];
+		Form_pg_attribute att = TupleDescAttr(outdesc, i);
 		Oid			atttypid;
 		int32		atttypmod;
 
@@ -95,7 +95,7 @@ convert_tuples_by_position(TupleDesc indesc,
 		atttypmod = att->atttypmod;
 		for (; j < indesc->natts; j++)
 		{
-			att = indesc->attrs[j];
+			att = TupleDescAttr(indesc, j);
 			if (att->attisdropped)
 				continue;
 			nincols++;
@@ -122,7 +122,7 @@ convert_tuples_by_position(TupleDesc indesc,
 	/* Check for unused input columns */
 	for (; j < indesc->natts; j++)
 	{
-		if (indesc->attrs[j]->attisdropped)
+		if (TupleDescAttr(indesc, j)->attisdropped)
 			continue;
 		nincols++;
 		same = false;			/* we'll complain below */
@@ -149,6 +149,9 @@ convert_tuples_by_position(TupleDesc indesc,
 	{
 		for (i = 0; i < n; i++)
 		{
+			Form_pg_attribute inatt;
+			Form_pg_attribute outatt;
+
 			if (attrMap[i] == (i + 1))
 				continue;
 
@@ -157,10 +160,12 @@ convert_tuples_by_position(TupleDesc indesc,
 			 * also dropped, we needn't convert.  However, attlen and attalign
 			 * must agree.
 			 */
+			inatt = TupleDescAttr(indesc, i);
+			outatt = TupleDescAttr(outdesc, i);
 			if (attrMap[i] == 0 &&
-				indesc->attrs[i]->attisdropped &&
-				indesc->attrs[i]->attlen == outdesc->attrs[i]->attlen &&
-				indesc->attrs[i]->attalign == outdesc->attrs[i]->attalign)
+				inatt->attisdropped &&
+				inatt->attlen == outatt->attlen &&
+				inatt->attalign == outatt->attalign)
 				continue;
 
 			same = false;
@@ -228,6 +233,9 @@ convert_tuples_by_name(TupleDesc indesc,
 	same = true;
 	for (i = 0; i < n; i++)
 	{
+		Form_pg_attribute inatt;
+		Form_pg_attribute outatt;
+
 		if (attrMap[i] == (i + 1))
 			continue;
 
@@ -236,10 +244,12 @@ convert_tuples_by_name(TupleDesc indesc,
 		 * also dropped, we needn't convert.  However, attlen and attalign
 		 * must agree.
 		 */
+		inatt = TupleDescAttr(indesc, i);
+		outatt = TupleDescAttr(outdesc, i);
 		if (attrMap[i] == 0 &&
-			indesc->attrs[i]->attisdropped &&
-			indesc->attrs[i]->attlen == outdesc->attrs[i]->attlen &&
-			indesc->attrs[i]->attalign == outdesc->attrs[i]->attalign)
+			inatt->attisdropped &&
+			inatt->attlen == outatt->attlen &&
+			inatt->attalign == outatt->attalign)
 			continue;
 
 		same = false;
@@ -292,26 +302,27 @@ convert_tuples_by_name_map(TupleDesc indesc,
 	attrMap = (AttrNumber *) palloc0(n * sizeof(AttrNumber));
 	for (i = 0; i < n; i++)
 	{
-		Form_pg_attribute att = outdesc->attrs[i];
+		Form_pg_attribute outatt = TupleDescAttr(outdesc, i);
 		char	   *attname;
 		Oid			atttypid;
 		int32		atttypmod;
 		int			j;
 
-		if (att->attisdropped)
+		if (outatt->attisdropped)
 			continue;			/* attrMap[i] is already 0 */
-		attname = NameStr(att->attname);
-		atttypid = att->atttypid;
-		atttypmod = att->atttypmod;
+		attname = NameStr(outatt->attname);
+		atttypid = outatt->atttypid;
+		atttypmod = outatt->atttypmod;
 		for (j = 0; j < indesc->natts; j++)
 		{
-			att = indesc->attrs[j];
-			if (att->attisdropped)
+			Form_pg_attribute inatt = TupleDescAttr(indesc, j);
+
+			if (inatt->attisdropped)
 				continue;
-			if (strcmp(attname, NameStr(att->attname)) == 0)
+			if (strcmp(attname, NameStr(inatt->attname)) == 0)
 			{
 				/* Found it, check type */
-				if (atttypid != att->atttypid || atttypmod != att->atttypmod)
+				if (atttypid != inatt->atttypid || atttypmod != inatt->atttypmod)
 					ereport(ERROR,
 							(errcode(ERRCODE_DATATYPE_MISMATCH),
 							 errmsg_internal("%s", _(msg)),
diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c
index 9fd7b4e019..a5df2d64e2 100644
--- a/src/backend/access/common/tupdesc.c
+++ b/src/backend/access/common/tupdesc.c
@@ -175,7 +175,9 @@ CreateTupleDescCopyConstr(TupleDesc tupdesc)
 
 	for (i = 0; i < desc->natts; i++)
 	{
-		memcpy(desc->attrs[i], tupdesc->attrs[i], ATTRIBUTE_FIXED_PART_SIZE);
+		memcpy(TupleDescAttr(desc, i),
+			   TupleDescAttr(tupdesc, i),
+			   ATTRIBUTE_FIXED_PART_SIZE);
 	}
 
 	if (constr)
@@ -230,6 +232,9 @@ void
 TupleDescCopyEntry(TupleDesc dst, AttrNumber dstAttno,
 				   TupleDesc src, AttrNumber srcAttno)
 {
+	Form_pg_attribute dstAtt = TupleDescAttr(dst, dstAttno - 1);
+	Form_pg_attribute srcAtt = TupleDescAttr(src, srcAttno - 1);
+
 	/*
	 * sanity checks
	 */
@@ -240,8 +245,7 @@ TupleDescCopyEntry(TupleDesc dst, AttrNumber dstAttno,
 	AssertArg(dstAttno >= 1);
 	AssertArg(dstAttno <= dst->natts);
 
-	memcpy(dst->attrs[dstAttno - 1], src->attrs[srcAttno - 1],
-		   ATTRIBUTE_FIXED_PART_SIZE);
+	memcpy(dstAtt, srcAtt, ATTRIBUTE_FIXED_PART_SIZE);
 
 	/*
	 * Aside from updating the attno, we'd better reset attcacheoff.
@@ -252,13 +256,13 @@ TupleDescCopyEntry(TupleDesc dst, AttrNumber dstAttno,
 	 * by other uses of this function or TupleDescInitEntry.  So we cheat a
 	 * bit to avoid a useless O(N^2) penalty.
 	 */
-	dst->attrs[dstAttno - 1]->attnum = dstAttno;
-	dst->attrs[dstAttno - 1]->attcacheoff = -1;
+	dstAtt->attnum = dstAttno;
+	dstAtt->attcacheoff = -1;
 
 	/* since we're not copying constraints or defaults, clear these */
-	dst->attrs[dstAttno - 1]->attnotnull = false;
-	dst->attrs[dstAttno - 1]->atthasdef = false;
-	dst->attrs[dstAttno - 1]->attidentity = '\0';
+	dstAtt->attnotnull = false;
+	dstAtt->atthasdef = false;
+	dstAtt->attidentity = '\0';
 }
 
 /*
@@ -366,8 +370,8 @@ equalTupleDescs(TupleDesc tupdesc1, TupleDesc tupdesc2)
 
 	for (i = 0; i < tupdesc1->natts; i++)
 	{
-		Form_pg_attribute attr1 = tupdesc1->attrs[i];
-		Form_pg_attribute attr2 = tupdesc2->attrs[i];
+		Form_pg_attribute attr1 = TupleDescAttr(tupdesc1, i);
+		Form_pg_attribute attr2 = TupleDescAttr(tupdesc2, i);
 
 		/*
		 * We do not need to check every single field here: we can disregard
@@ -515,7 +519,7 @@ TupleDescInitEntry(TupleDesc desc,
 	/*
	 * initialize the attribute fields
	 */
-	att = desc->attrs[attributeNumber - 1];
+	att = TupleDescAttr(desc, attributeNumber - 1);
 
 	att->attrelid = 0;			/* dummy value */
 
@@ -580,7 +584,7 @@ TupleDescInitBuiltinEntry(TupleDesc desc,
 	AssertArg(attributeNumber <= desc->natts);
 
 	/* initialize the attribute fields */
-	att = desc->attrs[attributeNumber - 1];
+	att = TupleDescAttr(desc, attributeNumber - 1);
 	att->attrelid = 0;			/* dummy value */
 
 	/* unlike TupleDescInitEntry, we require an attribute name */
@@ -664,7 +668,7 @@ TupleDescInitEntryCollation(TupleDesc desc,
 	AssertArg(attributeNumber >= 1);
 	AssertArg(attributeNumber <= desc->natts);
 
-	desc->attrs[attributeNumber - 1]->attcollation = collationid;
+	TupleDescAttr(desc, attributeNumber - 1)->attcollation = collationid;
 }
 
 
@@ -704,6 +708,7 @@ BuildDescForRelation(List *schema)
 	{
 		ColumnDef  *entry = lfirst(l);
 		AclResult	aclresult;
+		Form_pg_attribute att;
 
 		/*
		 * for each entry in the list, get the name and type information from
@@ -730,17 +735,18 @@ BuildDescForRelation(List *schema)
 
 		TupleDescInitEntry(desc, attnum, attname,
 						   atttypid, atttypmod, attdim);
+		att = TupleDescAttr(desc, attnum - 1);
 
 		/* Override TupleDescInitEntry's settings as requested */
 		TupleDescInitEntryCollation(desc, attnum, attcollation);
 		if (entry->storage)
-			desc->attrs[attnum - 1]->attstorage = entry->storage;
+			att->attstorage = entry->storage;
 
 		/* Fill in additional stuff not handled by TupleDescInitEntry */
-		desc->attrs[attnum - 1]->attnotnull = entry->is_not_null;
+		att->attnotnull = entry->is_not_null;
 		has_not_null |= entry->is_not_null;
-		desc->attrs[attnum - 1]->attislocal = entry->is_local;
-		desc->attrs[attnum - 1]->attinhcount = entry->inhcount;
+		att->attislocal = entry->is_local;
+		att->attinhcount = entry->inhcount;
 	}
 
 	if (has_not_null)
diff --git a/src/backend/access/gin/ginbulk.c b/src/backend/access/gin/ginbulk.c
index 4ff149e59a..c76f504295 100644
--- a/src/backend/access/gin/ginbulk.c
+++ b/src/backend/access/gin/ginbulk.c
@@ -127,9 +127,10 @@ ginInitBA(BuildAccumulator *accum)
 static Datum
 getDatumCopy(BuildAccumulator *accum, OffsetNumber attnum, Datum value)
 {
-	Form_pg_attribute att = accum->ginstate->origTupdesc->attrs[attnum - 1];
+	Form_pg_attribute att;
 	Datum		res;
 
+	att = TupleDescAttr(accum->ginstate->origTupdesc, attnum - 1);
 	if (att->attbyval)
 		res = value;
 	else
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 56a5bf47b8..9895080685 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -129,7 +129,7 @@ collectMatchBitmap(GinBtreeData *btree, GinBtreeStack *stack,
 
 	/* Locate tupdesc entry for key column (for attbyval/attlen data) */
 	attnum = scanEntry->attnum;
-	attr = btree->ginstate->origTupdesc->attrs[attnum - 1];
+	attr = TupleDescAttr(btree->ginstate->origTupdesc, attnum - 1);
 
 	for (;;)
 	{
diff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c
index 91e4a8cf70..136ea27718 100644
--- a/src/backend/access/gin/ginutil.c
+++ b/src/backend/access/gin/ginutil.c
@@ -96,6 +96,8 @@ initGinState(GinState *state, Relation index)
 
 	for (i = 0; i < origTupdesc->natts; i++)
 	{
+		Form_pg_attribute attr = TupleDescAttr(origTupdesc, i);
+
 		if (state->oneCol)
 			state->tupdesc[i] = state->origTupdesc;
 		else
@@ -105,11 +107,11 @@ initGinState(GinState *state, Relation index)
 			TupleDescInitEntry(state->tupdesc[i], (AttrNumber) 1, NULL,
 							   INT2OID, -1, 0);
 			TupleDescInitEntry(state->tupdesc[i], (AttrNumber) 2, NULL,
-							   origTupdesc->attrs[i]->atttypid,
-							   origTupdesc->attrs[i]->atttypmod,
-							   origTupdesc->attrs[i]->attndims);
+							   attr->atttypid,
+							   attr->atttypmod,
+							   attr->attndims);
 			TupleDescInitEntryCollation(state->tupdesc[i], (AttrNumber) 2,
-										origTupdesc->attrs[i]->attcollation);
+										attr->attcollation);
 		}
 
 		/*
@@ -126,13 +128,13 @@ initGinState(GinState *state, Relation index)
 		{
 			TypeCacheEntry *typentry;
 
-			typentry = lookup_type_cache(origTupdesc->attrs[i]->atttypid,
+			typentry = lookup_type_cache(attr->atttypid,
 										 TYPECACHE_CMP_PROC_FINFO);
 			if (!OidIsValid(typentry->cmp_proc_finfo.fn_oid))
 				ereport(ERROR,
 						(errcode(ERRCODE_UNDEFINED_FUNCTION),
 						 errmsg("could not identify a comparison function for type %s",
-								format_type_be(origTupdesc->attrs[i]->atttypid))));
+								format_type_be(attr->atttypid))));
 			fmgr_info_copy(&(state->compareFn[i]),
 						   &(typentry->cmp_proc_finfo),
 						   CurrentMemoryContext);
diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c
index 4756a70ae6..b4cb364869 100644
--- a/src/backend/access/gist/gistbuild.c
+++ b/src/backend/access/gist/gistbuild.c
@@ -295,10 +295,10 @@ gistInitBuffering(GISTBuildState *buildstate)
 	itupMinSize = (Size) MAXALIGN(sizeof(IndexTupleData));
 	for (i = 0; i < index->rd_att->natts; i++)
 	{
-		if (index->rd_att->attrs[i]->attlen < 0)
+		if (TupleDescAttr(index->rd_att, i)->attlen < 0)
 			itupMinSize += VARHDRSZ;
 		else
-			itupMinSize += index->rd_att->attrs[i]->attlen;
+			itupMinSize += TupleDescAttr(index->rd_att, i)->attlen;
 	}
 
 	/* Calculate average and maximal number of index tuples which fit to page */
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 8792f1453c..ff03c68fcd 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1066,11 +1066,11 @@ fastgetattr(HeapTuple tup, int attnum, TupleDesc tupleDesc,
 	 (*(isnull) = false),
 	 HeapTupleNoNulls(tup) ?
 	 (
-	  (tupleDesc)->attrs[(attnum) - 1]->attcacheoff >= 0 ?
+	  TupleDescAttr((tupleDesc), (attnum) - 1)->attcacheoff >= 0 ?
 	  (
-	   fetchatt((tupleDesc)->attrs[(attnum) - 1],
+	   fetchatt(TupleDescAttr((tupleDesc), (attnum) - 1),
 				(char *) (tup)->t_data + (tup)->t_data->t_hoff +
-				(tupleDesc)->attrs[(attnum) - 1]->attcacheoff)
+				TupleDescAttr((tupleDesc), (attnum) - 1)->attcacheoff)
 	  )
 	  :
 	  nocachegetattr((tup), (attnum), (tupleDesc))
@@ -4422,7 +4422,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	else
 	{
 		Assert(attrnum <= tupdesc->natts);
-		att = tupdesc->attrs[attrnum - 1];
+		att = TupleDescAttr(tupdesc, attrnum - 1);
 		return datumIsEqual(value1, value2, att->attbyval, att->attlen);
 	}
 }
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index 458180bc95..5a8f1dab83 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -464,7 +464,6 @@ void
 toast_delete(Relation rel, HeapTuple oldtup, bool is_speculative)
 {
 	TupleDesc	tupleDesc;
-	Form_pg_attribute *att;
 	int			numAttrs;
 	int			i;
 	Datum		toast_values[MaxHeapAttributeNumber];
@@ -489,7 +488,6 @@ toast_delete(Relation rel, HeapTuple oldtup, bool is_speculative)
 	 * least one varlena column, by the way.)
	 */
 	tupleDesc = rel->rd_att;
-	att = tupleDesc->attrs;
 	numAttrs = tupleDesc->natts;
 
 	Assert(numAttrs <= MaxHeapAttributeNumber);
@@ -501,7 +499,7 @@ toast_delete(Relation rel, HeapTuple oldtup, bool is_speculative)
	 */
 	for (i = 0; i < numAttrs; i++)
 	{
-		if (att[i]->attlen == -1)
+		if (TupleDescAttr(tupleDesc, i)->attlen == -1)
 		{
 			Datum		value = toast_values[i];
 
@@ -538,7 +536,6 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 {
 	HeapTuple	result_tuple;
 	TupleDesc	tupleDesc;
-	Form_pg_attribute *att;
 	int			numAttrs;
 	int			i;
 
@@ -579,7 +576,6 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 	 * Get the tuple descriptor and break down the tuple(s) into fields.
	 */
 	tupleDesc = rel->rd_att;
-	att = tupleDesc->attrs;
 	numAttrs = tupleDesc->natts;
 
 	Assert(numAttrs <= MaxHeapAttributeNumber);
@@ -606,6 +602,7 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 
 		for (i = 0; i < numAttrs; i++)
 		{
+			Form_pg_attribute att = TupleDescAttr(tupleDesc, i);
 			struct varlena *old_value;
 			struct varlena *new_value;
 
@@ -621,7 +618,7 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 			 * If the old value is stored on disk, check if it has changed so
 			 * we have to delete it later.
			 */
-			if (att[i]->attlen == -1 && !toast_oldisnull[i] &&
+			if (att->attlen == -1 && !toast_oldisnull[i] &&
 				VARATT_IS_EXTERNAL_ONDISK(old_value))
 			{
 				if (toast_isnull[i] || !VARATT_IS_EXTERNAL_ONDISK(new_value) ||
@@ -668,12 +665,12 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 		/*
		 * Now look at varlena attributes
		 */
-		if (att[i]->attlen == -1)
+		if (att->attlen == -1)
 		{
 			/*
			 * If the table's attribute says PLAIN always, force it so.
			 */
-			if (att[i]->attstorage == 'p')
+			if (att->attstorage == 'p')
 				toast_action[i] = 'p';
 
 			/*
@@ -687,7 +684,7 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 			if (VARATT_IS_EXTERNAL(new_value))
 			{
 				toast_oldexternal[i] = new_value;
-				if (att[i]->attstorage == 'p')
+				if (att->attstorage == 'p')
 					new_value = heap_tuple_untoast_attr(new_value);
 				else
 					new_value = heap_tuple_fetch_attr(new_value);
@@ -749,13 +746,15 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
	 */
 	for (i = 0; i < numAttrs; i++)
 	{
+		Form_pg_attribute att = TupleDescAttr(tupleDesc, i);
+
 		if (toast_action[i] != ' ')
 			continue;
 		if (VARATT_IS_EXTERNAL(DatumGetPointer(toast_values[i])))
 			continue;			/* can't happen, toast_action would be 'p' */
 		if (VARATT_IS_COMPRESSED(DatumGetPointer(toast_values[i])))
 			continue;
-		if (att[i]->attstorage != 'x' && att[i]->attstorage != 'e')
+		if (att->attstorage != 'x' && att->attstorage != 'e')
 			continue;
 		if (toast_sizes[i] > biggest_size)
 		{
@@ -771,7 +770,7 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 		 * Attempt to compress it inline, if it has attstorage 'x'
		 */
 		i = biggest_attno;
-		if (att[i]->attstorage == 'x')
+		if (TupleDescAttr(tupleDesc, i)->attstorage == 'x')
 		{
 			old_value = toast_values[i];
 			new_value = toast_compress_datum(old_value);
@@ -841,11 +840,13 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
	 */
 	for (i = 0; i < numAttrs; i++)
 	{
+		Form_pg_attribute att = TupleDescAttr(tupleDesc, i);
+
 		if (toast_action[i] == 'p')
 			continue;
 		if (VARATT_IS_EXTERNAL(DatumGetPointer(toast_values[i])))
 			continue;			/* can't happen, toast_action would be 'p' */
-		if (att[i]->attstorage != 'x' && att[i]->attstorage != 'e')
+		if (att->attstorage != 'x' && att->attstorage != 'e')
 			continue;
 		if (toast_sizes[i] > biggest_size)
 		{
@@ -896,7 +897,7 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 			continue;			/* can't happen, toast_action would be 'p' */
 		if (VARATT_IS_COMPRESSED(DatumGetPointer(toast_values[i])))
 			continue;
-		if (att[i]->attstorage != 'm')
+		if (TupleDescAttr(tupleDesc, i)->attstorage != 'm')
 			continue;
 		if (toast_sizes[i] > biggest_size)
 		{
@@ -959,7 +960,7 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup,
 			continue;
 		if (VARATT_IS_EXTERNAL(DatumGetPointer(toast_values[i])))
 			continue;			/* can't happen, toast_action would be 'p' */
-		if (att[i]->attstorage != 'm')
+		if (TupleDescAttr(tupleDesc, i)->attstorage != 'm')
 			continue;
 		if (toast_sizes[i] > biggest_size)
 		{
@@ -1084,7 +1085,6 @@ HeapTuple
 toast_flatten_tuple(HeapTuple tup, TupleDesc tupleDesc)
 {
 	HeapTuple	new_tuple;
-	Form_pg_attribute *att = tupleDesc->attrs;
 	int			numAttrs = tupleDesc->natts;
 	int			i;
 	Datum		toast_values[MaxTupleAttributeNumber];
@@ -1104,7 +1104,7 @@ toast_flatten_tuple(HeapTuple tup, TupleDesc tupleDesc)
 		/*
		 * Look at non-null varlena attributes
		 */
-		if (!toast_isnull[i] && att[i]->attlen == -1)
+		if (!toast_isnull[i] && TupleDescAttr(tupleDesc, i)->attlen == -1)
 		{
 			struct varlena *new_value;
 
@@ -1193,7 +1193,6 @@ toast_flatten_tuple_to_datum(HeapTupleHeader tup,
 	int32		new_data_len;
 	int32		new_tuple_len;
 	HeapTupleData tmptup;
-	Form_pg_attribute *att = tupleDesc->attrs;
 	int			numAttrs = tupleDesc->natts;
 	int			i;
 	bool		has_nulls = false;
@@ -1222,7 +1221,7 @@ toast_flatten_tuple_to_datum(HeapTupleHeader tup,
		 */
 		if (toast_isnull[i])
 			has_nulls = true;
-		else if (att[i]->attlen == -1)
+		else if (TupleDescAttr(tupleDesc, i)->attlen == -1)
 		{
 			struct varlena *new_value;
 
@@ -1307,7 +1306,6 @@ toast_build_flattened_tuple(TupleDesc tupleDesc,
 							bool *isnull)
 {
 	HeapTuple	new_tuple;
-	Form_pg_attribute *att = tupleDesc->attrs;
 	int			numAttrs = tupleDesc->natts;
 	int			num_to_free;
 	int			i;
@@ -1327,7 +1325,7 @@ toast_build_flattened_tuple(TupleDesc tupleDesc,
 		/*
		 * Look at non-null varlena attributes
		 */
-		if (!isnull[i] && att[i]->attlen == -1)
+		if (!isnull[i] && TupleDescAttr(tupleDesc, i)->attlen == -1)
 		{
 			struct varlena *new_value;
 
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index 8656af453c..22f64b0103 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -112,7 +112,7 @@ spgGetCache(Relation index)
 		 * tupdesc.  We pass this to the opclass config function so that
 		 * polymorphic opclasses are possible.
		 */
-		atttype = index->rd_att->attrs[0]->atttypid;
+		atttype = TupleDescAttr(index->rd_att, 0)->atttypid;
 
 		/* Call the config function to get config info for the opclass */
 		in.attType = atttype;
diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c
index b3f0b3cc92..0453fd4ac1 100644
--- a/src/backend/bootstrap/bootstrap.c
+++ b/src/backend/bootstrap/bootstrap.c
@@ -609,7 +609,7 @@ boot_openrel(char *relname)
 		if (attrtypes[i] == NULL)
 			attrtypes[i] = AllocateAttribute();
 		memmove((char *) attrtypes[i],
-				(char *) boot_reldesc->rd_att->attrs[i],
+				(char *) TupleDescAttr(boot_reldesc->rd_att, i),
 				ATTRIBUTE_FIXED_PART_SIZE);
 
 		{
@@ -816,7 +816,7 @@ InsertOneValue(char *value, int i)
 
 	elog(DEBUG4, "inserting column %d value \"%s\"", i, value);
 
-	typoid = boot_reldesc->rd_att->attrs[i]->atttypid;
+	typoid = TupleDescAttr(boot_reldesc->rd_att, i)->atttypid;
 
 	boot_get_type_io_data(typoid,
 						  &typlen, &typbyval, &typalign,
@@ -843,10 +843,10 @@ InsertOneNull(int i)
 {
 	elog(DEBUG4, "inserting column %d NULL", i);
 	Assert(i >= 0 && i < MAXATTR);
-	if (boot_reldesc->rd_att->attrs[i]->attnotnull)
+	if (TupleDescAttr(boot_reldesc->rd_att, i)->attnotnull)
 		elog(ERROR,
 			 "NULL value specified for not-null column \"%s\" of relation \"%s\"",
-			 NameStr(boot_reldesc->rd_att->attrs[i]->attname),
+			 NameStr(TupleDescAttr(boot_reldesc->rd_att, i)->attname),
 			 RelationGetRelationName(boot_reldesc));
 	values[i] = PointerGetDatum(NULL);
 	Nulls[i] = true;
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index a376b99f1e..45ee9ac8b9 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -431,12 +431,14 @@ CheckAttributeNamesTypes(TupleDesc tupdesc, char relkind,
 	{
 		for (i = 0; i < natts; i++)
 		{
-			if (SystemAttributeByName(NameStr(tupdesc->attrs[i]->attname),
+			Form_pg_attribute attr = TupleDescAttr(tupdesc, i);
+
+			if (SystemAttributeByName(NameStr(attr->attname),
 									  tupdesc->tdhasoid) != NULL)
 				ereport(ERROR,
 						(errcode(ERRCODE_DUPLICATE_COLUMN),
 						 errmsg("column name \"%s\" conflicts with a system column name",
-								NameStr(tupdesc->attrs[i]->attname))));
+								NameStr(attr->attname))));
 		}
 	}
 
@@ -447,12 +449,12 @@ CheckAttributeNamesTypes(TupleDesc tupdesc, char relkind,
 	{
 		for (j = 0; j < i; j++)
 		{
-			if (strcmp(NameStr(tupdesc->attrs[j]->attname),
-					   NameStr(tupdesc->attrs[i]->attname)) == 0)
+			if (strcmp(NameStr(TupleDescAttr(tupdesc, j)->attname),
+					   NameStr(TupleDescAttr(tupdesc, i)->attname)) == 0)
 				ereport(ERROR,
 						(errcode(ERRCODE_DUPLICATE_COLUMN),
 						 errmsg("column name \"%s\" specified more than once",
-								NameStr(tupdesc->attrs[j]->attname))));
+								NameStr(TupleDescAttr(tupdesc, j)->attname))));
 		}
 	}
 
@@ -461,9 +463,9 @@ CheckAttributeNamesTypes(TupleDesc tupdesc, char relkind,
	 */
 	for (i = 0; i < natts; i++)
 	{
-		CheckAttributeType(NameStr(tupdesc->attrs[i]->attname),
-						   tupdesc->attrs[i]->atttypid,
-						   tupdesc->attrs[i]->attcollation,
+		CheckAttributeType(NameStr(TupleDescAttr(tupdesc, i)->attname),
+						   TupleDescAttr(tupdesc, i)->atttypid,
+						   TupleDescAttr(tupdesc, i)->attcollation,
 						   NIL, /* assume we're creating a new rowtype */
 						   allow_system_table_mods);
 	}
@@ -545,7 +547,7 @@ CheckAttributeType(const char *attname,
 
 		for (i = 0; i < tupdesc->natts; i++)
 		{
-			Form_pg_attribute attr = tupdesc->attrs[i];
+			Form_pg_attribute attr = TupleDescAttr(tupdesc, i);
 
 			if (attr->attisdropped)
 				continue;
@@ -678,7 +680,7 @@ AddNewAttributeTuples(Oid new_rel_oid,
	 */
 	for (i = 0; i < natts; i++)
 	{
-		attr = tupdesc->attrs[i];
+		attr = TupleDescAttr(tupdesc, i);
 		/* Fill in the correct relation OID */
 		attr->attrelid = new_rel_oid;
 		/* Make sure these are OK, too */
@@ -2245,7 +2247,7 @@ AddRelationNewConstraints(Relation rel,
 	foreach(cell, newColDefaults)
 	{
 		RawColumnDefault *colDef = (RawColumnDefault *) lfirst(cell);
-		Form_pg_attribute atp = rel->rd_att->attrs[colDef->attnum - 1];
+		Form_pg_attribute atp = TupleDescAttr(rel->rd_att, colDef->attnum - 1);
 		Oid			defOid;
 
 		expr = cookDefault(pstate, colDef->raw_default,
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 25c5bead9f..c7b2f031f0 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -307,7 +307,7 @@ ConstructTupleDescriptor(Relation heapRelation,
 	for (i = 0; i < numatts; i++)
 	{
 		AttrNumber	atnum = indexInfo->ii_KeyAttrNumbers[i];
-		Form_pg_attribute to = indexTupDesc->attrs[i];
+		Form_pg_attribute to = TupleDescAttr(indexTupDesc, i);
 		HeapTuple	tuple;
 		Form_pg_type typeTup;
 		Form_pg_opclass opclassTup;
@@ -333,7 +333,8 @@ ConstructTupleDescriptor(Relation heapRelation,
			 */
 			if (atnum > natts)	/* safety check */
 				elog(ERROR, "invalid column number %d", atnum);
-			from = heapTupDesc->attrs[AttrNumberGetAttrOffset(atnum)];
+			from = TupleDescAttr(heapTupDesc,
+								 AttrNumberGetAttrOffset(atnum));
 		}
 
 		/*
@@ -495,7 +496,7 @@ InitializeAttributeOids(Relation indexRelation,
 	tupleDescriptor = RelationGetDescr(indexRelation);
 
 	for (i = 0; i < numatts; i += 1)
-		tupleDescriptor->attrs[i]->attrelid = indexoid;
+		TupleDescAttr(tupleDescriptor, i)->attrelid = indexoid;
 }
 
 /* ----------------------------------------------------------------
@@ -524,14 +525,16 @@ AppendAttributeTuples(Relation indexRelation, int numatts)
 
 	for (i = 0; i < numatts; i++)
 	{
+		Form_pg_attribute attr = TupleDescAttr(indexTupDesc, i);
+
 		/*
		 * There used to be very grotty code here to set these fields, but I
		 * think it's unnecessary.  They should be set already.
		 */
-		Assert(indexTupDesc->attrs[i]->attnum == i + 1);
-		Assert(indexTupDesc->attrs[i]->attcacheoff == -1);
+		Assert(attr->attnum == i + 1);
+		Assert(attr->attcacheoff == -1);
 
-		InsertPgAttributeTuple(pg_attribute, indexTupDesc->attrs[i], indstate);
+		InsertPgAttributeTuple(pg_attribute, attr, indstate);
 	}
 
 	CatalogCloseIndexes(indstate);
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 29756eb14e..6f517bbcda 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -235,9 +235,9 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
 	 * toast :-(.  This is essential for chunk_data because type bytea is
 	 * toastable; hit the other two just to be sure.
	 */
-	tupdesc->attrs[0]->attstorage = 'p';
-	tupdesc->attrs[1]->attstorage = 'p';
-	tupdesc->attrs[2]->attstorage = 'p';
+	TupleDescAttr(tupdesc, 0)->attstorage = 'p';
+	TupleDescAttr(tupdesc, 1)->attstorage = 'p';
+	TupleDescAttr(tupdesc, 2)->attstorage = 'p';
 
 	/*
	 * Toast tables for regular relations go in pg_toast; those for temp
@@ -402,33 +402,33 @@ needs_toast_table(Relation rel)
 	bool		maxlength_unknown = false;
 	bool		has_toastable_attrs = false;
 	TupleDesc	tupdesc;
-	Form_pg_attribute *att;
 	int32		tuple_length;
 	int			i;
 
 	tupdesc = rel->rd_att;
-	att = tupdesc->attrs;
 
 	for (i = 0; i < tupdesc->natts; i++)
 	{
-		if (att[i]->attisdropped)
+		Form_pg_attribute att = TupleDescAttr(tupdesc, i);
+
+		if (att->attisdropped)
 			continue;
-		data_length = att_align_nominal(data_length, att[i]->attalign);
-		if (att[i]->attlen > 0)
+		data_length = att_align_nominal(data_length, att->attalign);
+		if (att->attlen > 0)
 		{
 			/* Fixed-length types are never toastable */
-			data_length += att[i]->attlen;
+			data_length += att->attlen;
 		}
 		else
 		{
-			int32		maxlen = type_maximum_size(att[i]->atttypid,
-												   att[i]->atttypmod);
+			int32		maxlen = type_maximum_size(att->atttypid,
+												   att->atttypmod);
 
 			if (maxlen < 0)
 				maxlength_unknown = true;
 			else
 				data_length += maxlen;
-			if (att[i]->attstorage != 'p')
+			if (att->attstorage != 'p')
 				has_toastable_attrs = true;
 		}
 	}
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 2b638271b3..fbad13ea94 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -871,7 +871,7 @@ compute_index_stats(Relation onerel, double totalrows,
 static VacAttrStats *
 examine_attribute(Relation onerel, int attnum, Node *index_expr)
 {
-	Form_pg_attribute attr = onerel->rd_att->attrs[attnum - 1];
+	Form_pg_attribute attr = TupleDescAttr(onerel->rd_att, attnum - 1);
 	HeapTuple	typtuple;
 	VacAttrStats *stats;
 	int			i;
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index f51f8b9492..48f1e6e2ad 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -1714,7 +1714,7 @@ reform_and_rewrite_tuple(HeapTuple tuple,
 	/* Be sure to null out any dropped columns */
 	for (i = 0; i < newTupDesc->natts; i++)
 	{
-		if (newTupDesc->attrs[i]->attisdropped)
+		if (TupleDescAttr(newTupDesc, i)->attisdropped)
 			isnull[i] = true;
 	}
 
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 375a25fbcf..cfa3f059c2 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -1583,12 +1583,13 @@ BeginCopy(ParseState *pstate,
 		foreach(cur, attnums)
 		{
 			int			attnum = lfirst_int(cur);
+			Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);
 
 			if (!list_member_int(cstate->attnumlist, attnum))
 				ereport(ERROR,
 						(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
 						 errmsg("FORCE_QUOTE column \"%s\" not referenced by COPY",
-								NameStr(tupDesc->attrs[attnum - 1]->attname))));
+								NameStr(attr->attname))));
 			cstate->force_quote_flags[attnum - 1] = true;
 		}
 	}
@@ -1605,12 +1606,13 @@ BeginCopy(ParseState *pstate,
 		foreach(cur, attnums)
 		{
 			int			attnum = lfirst_int(cur);
+			Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);
 
 			if (!list_member_int(cstate->attnumlist, attnum))
 				ereport(ERROR,
 						(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
 						 errmsg("FORCE_NOT_NULL column \"%s\" not referenced by COPY",
-								NameStr(tupDesc->attrs[attnum - 1]->attname))));
+								NameStr(attr->attname))));
 			cstate->force_notnull_flags[attnum - 1] = true;
 		}
 	}
@@ -1627,12 +1629,13 @@ BeginCopy(ParseState *pstate,
 		foreach(cur, attnums)
 		{
 			int			attnum = lfirst_int(cur);
+			Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);
 
 			if (!list_member_int(cstate->attnumlist, attnum))
 				ereport(ERROR,
 						(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
 						 errmsg("FORCE_NULL column \"%s\" not referenced by COPY",
-								NameStr(tupDesc->attrs[attnum - 1]->attname))));
+								NameStr(attr->attname))));
 			cstate->force_null_flags[attnum - 1] = true;
 		}
 	}
@@ -1650,12 +1653,13 @@ BeginCopy(ParseState *pstate,
 		foreach(cur, attnums)
 		{
 			int			attnum = lfirst_int(cur);
+			Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);
 
 			if (!list_member_int(cstate->attnumlist, attnum))
 				ereport(ERROR,
 						(errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
 						 errmsg_internal("selected column \"%s\" not referenced by COPY",
-										 NameStr(tupDesc->attrs[attnum - 1]->attname))));
+										 NameStr(attr->attname))));
 			cstate->convert_select_flags[attnum - 1] = true;
 		}
 	}
@@ -1919,7 +1923,6 @@ CopyTo(CopyState cstate)
 {
 	TupleDesc	tupDesc;
 	int			num_phys_attrs;
-	Form_pg_attribute *attr;
 	ListCell   *cur;
 	uint64		processed;
 
@@ -1927,7 +1930,6 @@ CopyTo(CopyState cstate)
 		tupDesc = RelationGetDescr(cstate->rel);
 	else
 		tupDesc = cstate->queryDesc->tupDesc;
-	attr = tupDesc->attrs;
 	num_phys_attrs = tupDesc->natts;
 	cstate->null_print_client = cstate->null_print; /* default */
 
@@ -1941,13 +1943,14 @@ CopyTo(CopyState cstate)
 		int			attnum = lfirst_int(cur);
 		Oid			out_func_oid;
 		bool		isvarlena;
+		Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);
 
 		if (cstate->binary)
-			getTypeBinaryOutputInfo(attr[attnum - 1]->atttypid,
+			getTypeBinaryOutputInfo(attr->atttypid,
 									&out_func_oid,
 									&isvarlena);
 		else
-			getTypeOutputInfo(attr[attnum - 1]->atttypid,
+			getTypeOutputInfo(attr->atttypid,
 							  &out_func_oid,
 							  &isvarlena);
 		fmgr_info(out_func_oid, &cstate->out_functions[attnum - 1]);
@@ -2004,7 +2007,7 @@ CopyTo(CopyState cstate)
 					CopySendChar(cstate, cstate->delim[0]);
 				hdr_delim = true;
 
-				colname = NameStr(attr[attnum - 1]->attname);
+				colname = NameStr(TupleDescAttr(tupDesc, attnum - 1)->attname);
 
 				CopyAttributeOutCSV(cstate, colname, false,
 									list_length(cstate->attnumlist) == 1);
@@ -2969,7 +2972,6 @@ BeginCopyFrom(ParseState *pstate,
 	CopyState	cstate;
 	bool		pipe = (filename == NULL);
 	TupleDesc	tupDesc;
-	Form_pg_attribute *attr;
 	AttrNumber	num_phys_attrs,
 				num_defaults;
 	FmgrInfo   *in_functions;
@@ -3004,7 +3006,6 @@ BeginCopyFrom(ParseState *pstate,
 	cstate->range_table = pstate->p_rtable;
 
 	tupDesc = RelationGetDescr(cstate->rel);
-	attr = tupDesc->attrs;
 	num_phys_attrs = tupDesc->natts;
 	num_defaults = 0;
 	volatile_defexprs = false;
@@ -3022,16 +3023,18 @@ BeginCopyFrom(ParseState *pstate,
 
 	for (attnum = 1; attnum <= num_phys_attrs; attnum++)
 	{
+		Form_pg_attribute att = TupleDescAttr(tupDesc, attnum - 1);
+
 		/* We don't need info for dropped attributes */
-		if (attr[attnum - 1]->attisdropped)
+		if (att->attisdropped)
 			continue;
 
 		/* Fetch the input function and typioparam info */
 		if (cstate->binary)
-			getTypeBinaryInputInfo(attr[attnum - 1]->atttypid,
+			getTypeBinaryInputInfo(att->atttypid,
 								   &in_func_oid, &typioparams[attnum - 1]);
 		else
-			getTypeInputInfo(attr[attnum - 1]->atttypid,
+			getTypeInputInfo(att->atttypid,
 							 &in_func_oid, &typioparams[attnum - 1]);
 		fmgr_info(in_func_oid, &in_functions[attnum - 1]);
 
@@ -3273,7 +3276,6 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext,
 			 Datum *values, bool *nulls, Oid *tupleOid)
 {
 	TupleDesc	tupDesc;
-	Form_pg_attribute *attr;
 	AttrNumber	num_phys_attrs,
 				attr_count,
 				num_defaults = cstate->num_defaults;
@@ -3287,7 +3289,6 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext,
 	ExprState **defexprs = cstate->defexprs;
 
 	tupDesc = 
RelationGetDescr(cstate->rel); - attr = tupDesc->attrs; num_phys_attrs = tupDesc->natts; attr_count = list_length(cstate->attnumlist); nfields = file_has_oids ? (attr_count + 1) : attr_count; @@ -3349,12 +3350,13 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext, { int attnum = lfirst_int(cur); int m = attnum - 1; + Form_pg_attribute att = TupleDescAttr(tupDesc, m); if (fieldno >= fldct) ereport(ERROR, (errcode(ERRCODE_BAD_COPY_FILE_FORMAT), errmsg("missing data for column \"%s\"", - NameStr(attr[m]->attname)))); + NameStr(att->attname)))); string = field_strings[fieldno++]; if (cstate->convert_select_flags && @@ -3388,12 +3390,12 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext, } } - cstate->cur_attname = NameStr(attr[m]->attname); + cstate->cur_attname = NameStr(att->attname); cstate->cur_attval = string; values[m] = InputFunctionCall(&in_functions[m], string, typioparams[m], - attr[m]->atttypmod); + att->atttypmod); if (string != NULL) nulls[m] = false; cstate->cur_attname = NULL; @@ -3472,14 +3474,15 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext, { int attnum = lfirst_int(cur); int m = attnum - 1; + Form_pg_attribute att = TupleDescAttr(tupDesc, m); - cstate->cur_attname = NameStr(attr[m]->attname); + cstate->cur_attname = NameStr(att->attname); i++; values[m] = CopyReadBinaryAttribute(cstate, i, &in_functions[m], typioparams[m], - attr[m]->atttypmod, + att->atttypmod, &nulls[m]); cstate->cur_attname = NULL; } @@ -4709,13 +4712,12 @@ CopyGetAttnums(TupleDesc tupDesc, Relation rel, List *attnamelist) if (attnamelist == NIL) { /* Generate default column list */ - Form_pg_attribute *attr = tupDesc->attrs; int attr_count = tupDesc->natts; int i; for (i = 0; i < attr_count; i++) { - if (attr[i]->attisdropped) + if (TupleDescAttr(tupDesc, i)->attisdropped) continue; attnums = lappend_int(attnums, i + 1); } @@ -4735,11 +4737,13 @@ CopyGetAttnums(TupleDesc tupDesc, Relation rel, List *attnamelist) attnum = InvalidAttrNumber; for (i = 0; i < tupDesc->natts; i++) { - if (tupDesc->attrs[i]->attisdropped) + Form_pg_attribute att = TupleDescAttr(tupDesc, i); + + if (att->attisdropped) continue; - if (namestrcmp(&(tupDesc->attrs[i]->attname), name) == 0) + if (namestrcmp(&(att->attname), name) == 0) { - attnum = tupDesc->attrs[i]->attnum; + attnum = att->attnum; break; } } diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c index 97f9c55d6e..e60210cb24 100644 --- a/src/backend/commands/createas.c +++ b/src/backend/commands/createas.c @@ -468,7 +468,7 @@ intorel_startup(DestReceiver *self, int operation, TupleDesc typeinfo) lc = list_head(into->colNames); for (attnum = 0; attnum < typeinfo->natts; attnum++) { - Form_pg_attribute attribute = typeinfo->attrs[attnum]; + Form_pg_attribute attribute = TupleDescAttr(typeinfo, attnum); ColumnDef *col; char *colname; diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index 620704ec49..b61aaac284 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -242,7 +242,7 @@ CheckIndexCompatible(Oid oldId, for (i = 0; i < old_natts; i++) { if (IsPolymorphicType(get_opclass_input_type(classObjectId[i])) && - irel->rd_att->attrs[i]->atttypid != typeObjectId[i]) + TupleDescAttr(irel->rd_att, i)->atttypid != typeObjectId[i]) { ret = false; break; @@ -270,7 +270,7 @@ CheckIndexCompatible(Oid oldId, op_input_types(indexInfo->ii_ExclusionOps[i], &left, &right); if ((IsPolymorphicType(left) || IsPolymorphicType(right)) && - irel->rd_att->attrs[i]->atttypid != 
typeObjectId[i]) + TupleDescAttr(irel->rd_att, i)->atttypid != typeObjectId[i]) { ret = false; break; diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c index 7d57f97442..d2e0376511 100644 --- a/src/backend/commands/matview.c +++ b/src/backend/commands/matview.c @@ -727,6 +727,7 @@ refresh_by_match_merge(Oid matviewOid, Oid tempOid, Oid relowner, for (i = 0; i < numatts; i++) { int attnum = indexStruct->indkey.values[i]; + Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1); Oid type; Oid op; const char *colname; @@ -745,7 +746,7 @@ refresh_by_match_merge(Oid matviewOid, Oid tempOid, Oid relowner, if (foundUniqueIndex) appendStringInfoString(&querybuf, " AND "); - colname = quote_identifier(NameStr((tupdesc->attrs[attnum - 1])->attname)); + colname = quote_identifier(NameStr(attr->attname)); appendStringInfo(&querybuf, "newdata.%s ", colname); type = attnumTypeId(matviewRel, attnum); op = lookup_type_cache(type, TYPECACHE_EQ_OPR)->eq_opr; diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 83cb460164..0f08245a67 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -685,8 +685,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, foreach(listptr, stmt->tableElts) { ColumnDef *colDef = lfirst(listptr); + Form_pg_attribute attr; attnum++; + attr = TupleDescAttr(descriptor, attnum - 1); if (colDef->raw_default != NULL) { @@ -698,7 +700,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, rawEnt->attnum = attnum; rawEnt->raw_default = colDef->raw_default; rawDefaults = lappend(rawDefaults, rawEnt); - descriptor->attrs[attnum - 1]->atthasdef = true; + attr->atthasdef = true; } else if (colDef->cooked_default != NULL) { @@ -715,11 +717,11 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, cooked->inhcount = 0; /* ditto */ cooked->is_no_inherit = false; cookedDefaults = lappend(cookedDefaults, cooked); - descriptor->attrs[attnum - 1]->atthasdef = true; + attr->atthasdef = true; } if (colDef->identity) - descriptor->attrs[attnum - 1]->attidentity = colDef->identity; + attr->attidentity = colDef->identity; } /* @@ -1833,7 +1835,8 @@ MergeAttributes(List *schema, List *supers, char relpersistence, for (parent_attno = 1; parent_attno <= tupleDesc->natts; parent_attno++) { - Form_pg_attribute attribute = tupleDesc->attrs[parent_attno - 1]; + Form_pg_attribute attribute = TupleDescAttr(tupleDesc, + parent_attno - 1); char *attributeName = NameStr(attribute->attname); int exist_attno; ColumnDef *def; @@ -4417,8 +4420,9 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode) */ for (i = 0; i < newTupDesc->natts; i++) { - if (newTupDesc->attrs[i]->attnotnull && - !newTupDesc->attrs[i]->attisdropped) + Form_pg_attribute attr = TupleDescAttr(newTupDesc, i); + + if (attr->attnotnull && !attr->attisdropped) notnull_attrs = lappend_int(notnull_attrs, i); } if (notnull_attrs) @@ -4482,7 +4486,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode) */ for (i = 0; i < newTupDesc->natts; i++) { - if (newTupDesc->attrs[i]->attisdropped) + if (TupleDescAttr(newTupDesc, i)->attisdropped) dropped_attrs = lappend_int(dropped_attrs, i); } @@ -4556,11 +4560,15 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode) int attn = lfirst_int(l); if (heap_attisnull(tuple, attn + 1)) + { + Form_pg_attribute attr = TupleDescAttr(newTupDesc, attn); + ereport(ERROR, (errcode(ERRCODE_NOT_NULL_VIOLATION), errmsg("column \"%s\" contains 
null values", - NameStr(newTupDesc->attrs[attn]->attname)), + NameStr(attr->attname)), errtablecol(oldrel, attn + 1))); + } } foreach(l, tab->constraints) @@ -4927,7 +4935,7 @@ find_composite_type_dependencies(Oid typeOid, Relation origRelation, continue; rel = relation_open(pg_depend->objid, AccessShareLock); - att = rel->rd_att->attrs[pg_depend->objsubid - 1]; + att = TupleDescAttr(rel->rd_att, pg_depend->objsubid - 1); if (rel->rd_rel->relkind == RELKIND_RELATION || rel->rd_rel->relkind == RELKIND_MATVIEW || @@ -5693,7 +5701,7 @@ ATExecDropNotNull(Relation rel, const char *colName, LOCKMODE lockmode) AttrNumber parent_attnum; parent_attnum = get_attnum(parentId, colName); - if (tupDesc->attrs[parent_attnum - 1]->attnotnull) + if (TupleDescAttr(tupDesc, parent_attnum - 1)->attnotnull) ereport(ERROR, (errcode(ERRCODE_INVALID_TABLE_DEFINITION), errmsg("column \"%s\" is marked NOT NULL in parent table", @@ -7286,13 +7294,15 @@ ATAddForeignKeyConstraint(AlteredTableInfo *tab, Relation rel, CoercionPathType new_pathtype; Oid old_castfunc; Oid new_castfunc; + Form_pg_attribute attr = TupleDescAttr(tab->oldDesc, + fkattnum[i] - 1); /* * Identify coercion pathways from each of the old and new FK-side * column types to the right (foreign) operand type of the pfeqop. * We may assume that pg_constraint.conkey is not changing. */ - old_fktype = tab->oldDesc->attrs[fkattnum[i] - 1]->atttypid; + old_fktype = attr->atttypid; new_fktype = fktype; old_pathtype = findFkeyCast(pfeqop_right, old_fktype, &old_castfunc); @@ -8963,7 +8973,8 @@ ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel, ColumnDef *def = (ColumnDef *) cmd->def; TypeName *typeName = def->typeName; HeapTuple heapTup; - Form_pg_attribute attTup; + Form_pg_attribute attTup, + attOldTup; AttrNumber attnum; HeapTuple typeTuple; Form_pg_type tform; @@ -8989,10 +9000,11 @@ ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel, colName, RelationGetRelationName(rel)))); attTup = (Form_pg_attribute) GETSTRUCT(heapTup); attnum = attTup->attnum; + attOldTup = TupleDescAttr(tab->oldDesc, attnum - 1); /* Check for multiple ALTER TYPE on same column --- can't cope */ - if (attTup->atttypid != tab->oldDesc->attrs[attnum - 1]->atttypid || - attTup->atttypmod != tab->oldDesc->attrs[attnum - 1]->atttypmod) + if (attTup->atttypid != attOldTup->atttypid || + attTup->atttypmod != attOldTup->atttypmod) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("cannot alter type of column \"%s\" twice", @@ -11209,7 +11221,8 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel) for (parent_attno = 1; parent_attno <= parent_natts; parent_attno++) { - Form_pg_attribute attribute = tupleDesc->attrs[parent_attno - 1]; + Form_pg_attribute attribute = TupleDescAttr(tupleDesc, + parent_attno - 1); char *attributeName = NameStr(attribute->attname); /* Ignore dropped columns in the parent. */ @@ -11822,7 +11835,7 @@ ATExecAddOf(Relation rel, const TypeName *ofTypename, LOCKMODE lockmode) *table_attname; /* Get the next non-dropped type attribute. 
*/ - type_attr = typeTupleDesc->attrs[type_attno - 1]; + type_attr = TupleDescAttr(typeTupleDesc, type_attno - 1); if (type_attr->attisdropped) continue; type_attname = NameStr(type_attr->attname); @@ -11835,7 +11848,8 @@ ATExecAddOf(Relation rel, const TypeName *ofTypename, LOCKMODE lockmode) (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("table is missing column \"%s\"", type_attname))); - table_attr = tableTupleDesc->attrs[table_attno++ - 1]; + table_attr = TupleDescAttr(tableTupleDesc, table_attno - 1); + table_attno++; } while (table_attr->attisdropped); table_attname = NameStr(table_attr->attname); @@ -11860,7 +11874,8 @@ ATExecAddOf(Relation rel, const TypeName *ofTypename, LOCKMODE lockmode) /* Any remaining columns at the end of the table had better be dropped. */ for (; table_attno <= tableTupleDesc->natts; table_attno++) { - Form_pg_attribute table_attr = tableTupleDesc->attrs[table_attno - 1]; + Form_pg_attribute table_attr = TupleDescAttr(tableTupleDesc, + table_attno - 1); if (!table_attr->attisdropped) ereport(ERROR, @@ -12147,7 +12162,7 @@ ATExecReplicaIdentity(Relation rel, ReplicaIdentityStmt *stmt, LOCKMODE lockmode errmsg("index \"%s\" cannot be used as replica identity because column %d is a system column", RelationGetRelationName(indexRel), attno))); - attr = rel->rd_att->attrs[attno - 1]; + attr = TupleDescAttr(rel->rd_att, attno - 1); if (!attr->attnotnull) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), @@ -13451,7 +13466,7 @@ PartConstraintImpliedByRelConstraint(Relation scanrel, for (i = 1; i <= natts; i++) { - Form_pg_attribute att = scanrel->rd_att->attrs[i - 1]; + Form_pg_attribute att = TupleDescAttr(scanrel->rd_att, i - 1); if (att->attnotnull && !att->attisdropped) { @@ -13733,7 +13748,7 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) natts = tupleDesc->natts; for (attno = 1; attno <= natts; attno++) { - Form_pg_attribute attribute = tupleDesc->attrs[attno - 1]; + Form_pg_attribute attribute = TupleDescAttr(tupleDesc, attno - 1); char *attributeName = NameStr(attribute->attname); /* Ignore dropped */ diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c index 29ac5d569d..7ed16aeff4 100644 --- a/src/backend/commands/typecmds.c +++ b/src/backend/commands/typecmds.c @@ -2324,6 +2324,7 @@ AlterDomainNotNull(List *names, bool notNull) for (i = 0; i < rtc->natts; i++) { int attnum = rtc->atts[i]; + Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1); if (heap_attisnull(tuple, attnum)) { @@ -2338,7 +2339,7 @@ AlterDomainNotNull(List *names, bool notNull) ereport(ERROR, (errcode(ERRCODE_NOT_NULL_VIOLATION), errmsg("column \"%s\" of table \"%s\" contains null values", - NameStr(tupdesc->attrs[attnum - 1]->attname), + NameStr(attr->attname), RelationGetRelationName(testrel)), errtablecol(testrel, attnum))); } @@ -2722,6 +2723,7 @@ validateDomainConstraint(Oid domainoid, char *ccbin) Datum d; bool isNull; Datum conResult; + Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1); d = heap_getattr(tuple, attnum, tupdesc, &isNull); @@ -2745,7 +2747,7 @@ validateDomainConstraint(Oid domainoid, char *ccbin) ereport(ERROR, (errcode(ERRCODE_CHECK_VIOLATION), errmsg("column \"%s\" of table \"%s\" contains values that violate the new constraint", - NameStr(tupdesc->attrs[attnum - 1]->attname), + NameStr(attr->attname), RelationGetRelationName(testrel)), errtablecol(testrel, attnum))); } @@ -2930,7 +2932,7 @@ get_rels_with_domain(Oid domainOid, LOCKMODE lockmode) */ if (pg_depend->objsubid > 
RelationGetNumberOfAttributes(rtc->rel)) continue; - pg_att = rtc->rel->rd_att->attrs[pg_depend->objsubid - 1]; + pg_att = TupleDescAttr(rtc->rel->rd_att, pg_depend->objsubid - 1); if (pg_att->attisdropped || pg_att->atttypid != domainOid) continue; diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c index f25a5658d6..076e2a3a40 100644 --- a/src/backend/commands/view.c +++ b/src/backend/commands/view.c @@ -283,8 +283,8 @@ checkViewTupleDesc(TupleDesc newdesc, TupleDesc olddesc) for (i = 0; i < olddesc->natts; i++) { - Form_pg_attribute newattr = newdesc->attrs[i]; - Form_pg_attribute oldattr = olddesc->attrs[i]; + Form_pg_attribute newattr = TupleDescAttr(newdesc, i); + Form_pg_attribute oldattr = TupleDescAttr(olddesc, i); /* XXX msg not right, but we don't support DROP COL on view anyway */ if (newattr->attisdropped != oldattr->attisdropped) diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index 7496189fab..be9d23bc32 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -347,7 +347,7 @@ ExecBuildProjectionInfo(List *targetList, isSafeVar = true; /* can't check, just assume OK */ else if (attnum <= inputDesc->natts) { - Form_pg_attribute attr = inputDesc->attrs[attnum - 1]; + Form_pg_attribute attr = TupleDescAttr(inputDesc, attnum - 1); /* * If user attribute is dropped or has a type mismatch, don't @@ -1492,7 +1492,6 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, RowExpr *rowexpr = (RowExpr *) node; int nelems = list_length(rowexpr->args); TupleDesc tupdesc; - Form_pg_attribute *attrs; int i; ListCell *l; @@ -1539,13 +1538,13 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, memset(scratch.d.row.elemnulls, true, sizeof(bool) * nelems); /* Set up evaluation, skipping any deleted columns */ - attrs = tupdesc->attrs; i = 0; foreach(l, rowexpr->args) { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); Expr *e = (Expr *) lfirst(l); - if (!attrs[i]->attisdropped) + if (!att->attisdropped) { /* * Guard against ALTER COLUMN TYPE on rowtype since @@ -1553,12 +1552,12 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, * typmod too? Not sure we can be sure it'll be the * same. 
*/ - if (exprType((Node *) e) != attrs[i]->atttypid) + if (exprType((Node *) e) != att->atttypid) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("ROW() column has type %s instead of type %s", format_type_be(exprType((Node *) e)), - format_type_be(attrs[i]->atttypid)))); + format_type_be(att->atttypid)))); } else { diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index f2a52f6213..83e04471e4 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -1553,7 +1553,7 @@ CheckVarSlotCompatibility(TupleTableSlot *slot, int attnum, Oid vartype) elog(ERROR, "attribute number %d exceeds number of columns %d", attnum, slot_tupdesc->natts); - attr = slot_tupdesc->attrs[attnum - 1]; + attr = TupleDescAttr(slot_tupdesc, attnum - 1); if (attr->attisdropped) ereport(ERROR, @@ -2081,7 +2081,7 @@ ExecEvalRowNullInt(ExprState *state, ExprEvalStep *op, for (att = 1; att <= tupDesc->natts; att++) { /* ignore dropped columns */ - if (tupDesc->attrs[att - 1]->attisdropped) + if (TupleDescAttr(tupDesc, att - 1)->attisdropped) continue; if (heap_attisnull(&tmptup, att)) { @@ -2494,7 +2494,7 @@ ExecEvalFieldSelect(ExprState *state, ExprEvalStep *op, ExprContext *econtext) if (fieldnum > tupDesc->natts) /* should never happen */ elog(ERROR, "attribute number %d exceeds number of columns %d", fieldnum, tupDesc->natts); - attr = tupDesc->attrs[fieldnum - 1]; + attr = TupleDescAttr(tupDesc, fieldnum - 1); /* Check for dropped column, and force a NULL result if so */ if (attr->attisdropped) @@ -3441,8 +3441,8 @@ ExecEvalWholeRowVar(ExprState *state, ExprEvalStep *op, ExprContext *econtext) for (i = 0; i < var_tupdesc->natts; i++) { - Form_pg_attribute vattr = var_tupdesc->attrs[i]; - Form_pg_attribute sattr = slot_tupdesc->attrs[i]; + Form_pg_attribute vattr = TupleDescAttr(var_tupdesc, i); + Form_pg_attribute sattr = TupleDescAttr(slot_tupdesc, i); if (vattr->atttypid == sattr->atttypid) continue; /* no worries */ @@ -3540,8 +3540,8 @@ ExecEvalWholeRowVar(ExprState *state, ExprEvalStep *op, ExprContext *econtext) for (i = 0; i < var_tupdesc->natts; i++) { - Form_pg_attribute vattr = var_tupdesc->attrs[i]; - Form_pg_attribute sattr = tupleDesc->attrs[i]; + Form_pg_attribute vattr = TupleDescAttr(var_tupdesc, i); + Form_pg_attribute sattr = TupleDescAttr(tupleDesc, i); if (!vattr->attisdropped) continue; /* already checked non-dropped cols */ diff --git a/src/backend/executor/execJunk.c b/src/backend/executor/execJunk.c index a422327c88..7fcd940fdb 100644 --- a/src/backend/executor/execJunk.c +++ b/src/backend/executor/execJunk.c @@ -168,7 +168,7 @@ ExecInitJunkFilterConversion(List *targetList, t = list_head(targetList); for (i = 0; i < cleanLength; i++) { - if (cleanTupType->attrs[i]->attisdropped) + if (TupleDescAttr(cleanTupType, i)->attisdropped) continue; /* map entry is already zero */ for (;;) { diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 4582a3caa0..2946a0edee 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -1950,8 +1950,9 @@ ExecConstraints(ResultRelInfo *resultRelInfo, for (attrChk = 1; attrChk <= natts; attrChk++) { - if (tupdesc->attrs[attrChk - 1]->attnotnull && - slot_attisnull(slot, attrChk)) + Form_pg_attribute att = TupleDescAttr(tupdesc, attrChk - 1); + + if (att->attnotnull && slot_attisnull(slot, attrChk)) { char *val_desc; Relation orig_rel = rel; @@ -1994,7 +1995,7 @@ ExecConstraints(ResultRelInfo *resultRelInfo, ereport(ERROR, 
(errcode(ERRCODE_NOT_NULL_VIOLATION), errmsg("null value in column \"%s\" violates not-null constraint", - NameStr(orig_tupdesc->attrs[attrChk - 1]->attname)), + NameStr(att->attname)), val_desc ? errdetail("Failing row contains %s.", val_desc) : 0, errtablecol(orig_rel, attrChk))); } @@ -2261,9 +2262,10 @@ ExecBuildSlotValueDescription(Oid reloid, bool column_perm = false; char *val; int vallen; + Form_pg_attribute att = TupleDescAttr(tupdesc, i); /* ignore dropped columns */ - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) continue; if (!table_perm) @@ -2274,9 +2276,9 @@ ExecBuildSlotValueDescription(Oid reloid, * for the column. If not, omit this column from the error * message. */ - aclresult = pg_attribute_aclcheck(reloid, tupdesc->attrs[i]->attnum, + aclresult = pg_attribute_aclcheck(reloid, att->attnum, GetUserId(), ACL_SELECT); - if (bms_is_member(tupdesc->attrs[i]->attnum - FirstLowInvalidHeapAttributeNumber, + if (bms_is_member(att->attnum - FirstLowInvalidHeapAttributeNumber, modifiedCols) || aclresult == ACLCHECK_OK) { column_perm = any_perm = true; @@ -2286,7 +2288,7 @@ ExecBuildSlotValueDescription(Oid reloid, else write_comma_collist = true; - appendStringInfoString(&collist, NameStr(tupdesc->attrs[i]->attname)); + appendStringInfoString(&collist, NameStr(att->attname)); } } @@ -2299,7 +2301,7 @@ ExecBuildSlotValueDescription(Oid reloid, Oid foutoid; bool typisvarlena; - getTypeOutputInfo(tupdesc->attrs[i]->atttypid, + getTypeOutputInfo(att->atttypid, &foutoid, &typisvarlena); val = OidOutputFunctionCall(foutoid, slot->tts_values[i]); } diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c index 3819de28ad..fbb8108512 100644 --- a/src/backend/executor/execReplication.c +++ b/src/backend/executor/execReplication.c @@ -247,7 +247,7 @@ tuple_equals_slot(TupleDesc desc, HeapTuple tup, TupleTableSlot *slot) if (isnull[attrnum]) continue; - att = desc->attrs[attrnum]; + att = TupleDescAttr(desc, attrnum); typentry = lookup_type_cache(att->atttypid, TYPECACHE_EQ_OPR_FINFO); if (!OidIsValid(typentry->eq_opr_finfo.fn_oid)) diff --git a/src/backend/executor/execSRF.c b/src/backend/executor/execSRF.c index 138e86ac67..8bc90a6c7e 100644 --- a/src/backend/executor/execSRF.c +++ b/src/backend/executor/execSRF.c @@ -903,8 +903,8 @@ tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc) for (i = 0; i < dst_tupdesc->natts; i++) { - Form_pg_attribute dattr = dst_tupdesc->attrs[i]; - Form_pg_attribute sattr = src_tupdesc->attrs[i]; + Form_pg_attribute dattr = TupleDescAttr(dst_tupdesc, i); + Form_pg_attribute sattr = TupleDescAttr(src_tupdesc, i); if (IsBinaryCoercible(sattr->atttypid, dattr->atttypid)) continue; /* no worries */ diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c index 4f131b3ee0..47a34a044a 100644 --- a/src/backend/executor/execScan.c +++ b/src/backend/executor/execScan.c @@ -269,7 +269,7 @@ tlist_matches_tupdesc(PlanState *ps, List *tlist, Index varno, TupleDesc tupdesc /* Check the tlist attributes */ for (attrno = 1; attrno <= numattrs; attrno++) { - Form_pg_attribute att_tup = tupdesc->attrs[attrno - 1]; + Form_pg_attribute att_tup = TupleDescAttr(tupdesc, attrno - 1); Var *var; if (tlist_item == NULL) diff --git a/src/backend/executor/execTuples.c b/src/backend/executor/execTuples.c index 7ae70a877a..31f814c0f0 100644 --- a/src/backend/executor/execTuples.c +++ b/src/backend/executor/execTuples.c @@ -997,7 +997,8 @@ ExecTypeSetColNames(TupleDesc typeInfo, List *namesList) /* Guard 
against too-long names list */ if (colno >= typeInfo->natts) break; - attr = typeInfo->attrs[colno++]; + attr = TupleDescAttr(typeInfo, colno); + colno++; /* Ignore empty aliases (these must be for dropped columns) */ if (cname[0] == '\0') @@ -1090,13 +1091,15 @@ TupleDescGetAttInMetadata(TupleDesc tupdesc) for (i = 0; i < natts; i++) { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + /* Ignore dropped attributes */ - if (!tupdesc->attrs[i]->attisdropped) + if (!att->attisdropped) { - atttypeid = tupdesc->attrs[i]->atttypid; + atttypeid = att->atttypid; getTypeInputInfo(atttypeid, &attinfuncid, &attioparams[i]); fmgr_info(attinfuncid, &attinfuncinfo[i]); - atttypmods[i] = tupdesc->attrs[i]->atttypmod; + atttypmods[i] = att->atttypmod; } } attinmeta->attinfuncs = attinfuncinfo; @@ -1127,7 +1130,7 @@ BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values) /* Call the "in" function for each non-dropped attribute */ for (i = 0; i < natts; i++) { - if (!tupdesc->attrs[i]->attisdropped) + if (!TupleDescAttr(tupdesc, i)->attisdropped) { /* Non-dropped attributes */ dvalues[i] = InputFunctionCall(&attinmeta->attinfuncs[i], diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index c398846879..9528393976 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -917,9 +917,11 @@ GetAttributeByName(HeapTupleHeader tuple, const char *attname, bool *isNull) attrno = InvalidAttrNumber; for (i = 0; i < tupDesc->natts; i++) { - if (namestrcmp(&(tupDesc->attrs[i]->attname), attname) == 0) + Form_pg_attribute att = TupleDescAttr(tupDesc, i); + + if (namestrcmp(&(att->attname), attname) == 0) { - attrno = tupDesc->attrs[i]->attnum; + attrno = att->attnum; break; } } diff --git a/src/backend/executor/functions.c b/src/backend/executor/functions.c index 3630f5d966..b7ac5f7432 100644 --- a/src/backend/executor/functions.c +++ b/src/backend/executor/functions.c @@ -1759,7 +1759,7 @@ check_sql_fn_retval(Oid func_id, Oid rettype, List *queryTreeList, errmsg("return type mismatch in function declared to return %s", format_type_be(rettype)), errdetail("Final statement returns too many columns."))); - attr = tupdesc->attrs[colindex - 1]; + attr = TupleDescAttr(tupdesc, colindex - 1); if (attr->attisdropped && modifyTargetList) { Expr *null_expr; @@ -1816,7 +1816,7 @@ check_sql_fn_retval(Oid func_id, Oid rettype, List *queryTreeList, /* remaining columns in tupdesc had better all be dropped */ for (colindex++; colindex <= tupnatts; colindex++) { - if (!tupdesc->attrs[colindex - 1]->attisdropped) + if (!TupleDescAttr(tupdesc, colindex - 1)->attisdropped) ereport(ERROR, (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), errmsg("return type mismatch in function declared to return %s", diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 6a26773a49..0ae5873868 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -724,12 +724,16 @@ initialize_aggregate(AggState *aggstate, AggStatePerTrans pertrans, * process_ordered_aggregate_single.) 
*/ if (pertrans->numInputs == 1) + { + Form_pg_attribute attr = TupleDescAttr(pertrans->sortdesc, 0); + pertrans->sortstates[aggstate->current_set] = - tuplesort_begin_datum(pertrans->sortdesc->attrs[0]->atttypid, + tuplesort_begin_datum(attr->atttypid, pertrans->sortOperators[0], pertrans->sortCollations[0], pertrans->sortNullsFirst[0], work_mem, false); + } else pertrans->sortstates[aggstate->current_set] = tuplesort_begin_heap(pertrans->sortdesc, diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 70a6b847a0..e12721a9b6 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -95,7 +95,8 @@ ExecCheckPlanOutput(Relation resultRel, List *targetList) (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("table row type and query-specified row type do not match"), errdetail("Query has too many columns."))); - attr = resultDesc->attrs[attno++]; + attr = TupleDescAttr(resultDesc, attno); + attno++; if (!attr->attisdropped) { diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c index fe10e809df..77ef6f3df1 100644 --- a/src/backend/executor/nodeSubplan.c +++ b/src/backend/executor/nodeSubplan.c @@ -360,7 +360,7 @@ ExecScanSubPlan(SubPlanState *node, found = true; /* stash away current value */ - Assert(subplan->firstColType == tdesc->attrs[0]->atttypid); + Assert(subplan->firstColType == TupleDescAttr(tdesc, 0)->atttypid); dvalue = slot_getattr(slot, 1, &disnull); astate = accumArrayResultAny(astate, dvalue, disnull, subplan->firstColType, oldcontext); @@ -992,7 +992,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext) found = true; /* stash away current value */ - Assert(subplan->firstColType == tdesc->attrs[0]->atttypid); + Assert(subplan->firstColType == TupleDescAttr(tdesc, 0)->atttypid); dvalue = slot_getattr(slot, 1, &disnull); astate = accumArrayResultAny(astate, dvalue, disnull, subplan->firstColType, oldcontext); diff --git a/src/backend/executor/nodeTableFuncscan.c b/src/backend/executor/nodeTableFuncscan.c index b03d2ef762..165fae8c83 100644 --- a/src/backend/executor/nodeTableFuncscan.c +++ b/src/backend/executor/nodeTableFuncscan.c @@ -202,7 +202,7 @@ ExecInitTableFuncScan(TableFuncScan *node, EState *estate, int eflags) { Oid in_funcid; - getTypeInputInfo(tupdesc->attrs[i]->atttypid, + getTypeInputInfo(TupleDescAttr(tupdesc, i)->atttypid, &in_funcid, &scanstate->typioparams[i]); fmgr_info(in_funcid, &scanstate->in_functions[i]); } @@ -390,6 +390,7 @@ tfuncInitialize(TableFuncScanState *tstate, ExprContext *econtext, Datum doc) foreach(lc1, tstate->colexprs) { char *colfilter; + Form_pg_attribute att = TupleDescAttr(tupdesc, colno); if (colno != ordinalitycol) { @@ -403,11 +404,11 @@ tfuncInitialize(TableFuncScanState *tstate, ExprContext *econtext, Datum doc) (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED), errmsg("column filter expression must not be null"), errdetail("Filter for column \"%s\" is null.", - NameStr(tupdesc->attrs[colno]->attname)))); + NameStr(att->attname)))); colfilter = TextDatumGetCString(value); } else - colfilter = NameStr(tupdesc->attrs[colno]->attname); + colfilter = NameStr(att->attname); routine->SetColumnFilter(tstate, colfilter, colno); } @@ -453,6 +454,8 @@ tfuncLoadRows(TableFuncScanState *tstate, ExprContext *econtext) */ for (colno = 0; colno < natts; colno++) { + Form_pg_attribute att = TupleDescAttr(tupdesc, colno); + if (colno == ordinalitycol) { /* Fast path for ordinality column */ @@ -465,8 +468,8 @@ 
tfuncLoadRows(TableFuncScanState *tstate, ExprContext *econtext) values[colno] = routine->GetValue(tstate, colno, - tupdesc->attrs[colno]->atttypid, - tupdesc->attrs[colno]->atttypmod, + att->atttypid, + att->atttypmod, &isnull); /* No value? Evaluate and apply the default, if any */ @@ -484,7 +487,7 @@ tfuncLoadRows(TableFuncScanState *tstate, ExprContext *econtext) ereport(ERROR, (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED), errmsg("null is not allowed in column \"%s\"", - NameStr(tupdesc->attrs[colno]->attname)))); + NameStr(att->attname)))); nulls[colno] = isnull; } diff --git a/src/backend/executor/nodeValuesscan.c b/src/backend/executor/nodeValuesscan.c index 6eacaed8bb..1a72bfe160 100644 --- a/src/backend/executor/nodeValuesscan.c +++ b/src/backend/executor/nodeValuesscan.c @@ -95,7 +95,6 @@ ValuesNext(ValuesScanState *node) List *exprstatelist; Datum *values; bool *isnull; - Form_pg_attribute *att; ListCell *lc; int resind; @@ -131,12 +130,13 @@ ValuesNext(ValuesScanState *node) */ values = slot->tts_values; isnull = slot->tts_isnull; - att = slot->tts_tupleDescriptor->attrs; resind = 0; foreach(lc, exprstatelist) { ExprState *estate = (ExprState *) lfirst(lc); + Form_pg_attribute attr = TupleDescAttr(slot->tts_tupleDescriptor, + resind); values[resind] = ExecEvalExpr(estate, econtext, @@ -150,7 +150,7 @@ ValuesNext(ValuesScanState *node) */ values[resind] = MakeExpandedObjectReadOnly(values[resind], isnull[resind], - att[resind]->attlen); + attr->attlen); resind++; } diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c index cd00a6d9f2..afe231fca9 100644 --- a/src/backend/executor/spi.c +++ b/src/backend/executor/spi.c @@ -765,8 +765,10 @@ SPI_fnumber(TupleDesc tupdesc, const char *fname) for (res = 0; res < tupdesc->natts; res++) { - if (namestrcmp(&tupdesc->attrs[res]->attname, fname) == 0 && - !tupdesc->attrs[res]->attisdropped) + Form_pg_attribute attr = TupleDescAttr(tupdesc, res); + + if (namestrcmp(&attr->attname, fname) == 0 && + !attr->attisdropped) return res + 1; } @@ -793,7 +795,7 @@ SPI_fname(TupleDesc tupdesc, int fnumber) } if (fnumber > 0) - att = tupdesc->attrs[fnumber - 1]; + att = TupleDescAttr(tupdesc, fnumber - 1); else att = SystemAttributeDefinition(fnumber, true); @@ -823,7 +825,7 @@ SPI_getvalue(HeapTuple tuple, TupleDesc tupdesc, int fnumber) return NULL; if (fnumber > 0) - typoid = tupdesc->attrs[fnumber - 1]->atttypid; + typoid = TupleDescAttr(tupdesc, fnumber - 1)->atttypid; else typoid = (SystemAttributeDefinition(fnumber, true))->atttypid; @@ -865,7 +867,7 @@ SPI_gettype(TupleDesc tupdesc, int fnumber) } if (fnumber > 0) - typoid = tupdesc->attrs[fnumber - 1]->atttypid; + typoid = TupleDescAttr(tupdesc, fnumber - 1)->atttypid; else typoid = (SystemAttributeDefinition(fnumber, true))->atttypid; @@ -901,7 +903,7 @@ SPI_gettypeid(TupleDesc tupdesc, int fnumber) } if (fnumber > 0) - return tupdesc->attrs[fnumber - 1]->atttypid; + return TupleDescAttr(tupdesc, fnumber - 1)->atttypid; else return (SystemAttributeDefinition(fnumber, true))->atttypid; } diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c index a4cfe9685a..4c4fcf530d 100644 --- a/src/backend/executor/tqueue.c +++ b/src/backend/executor/tqueue.c @@ -551,7 +551,7 @@ TQSendRecordInfo(TQueueDestReceiver *tqueue, int32 typmod, TupleDesc tupledesc) appendBinaryStringInfo(&buf, (char *) &tupledesc->tdhasoid, sizeof(bool)); for (i = 0; i < tupledesc->natts; i++) { - appendBinaryStringInfo(&buf, (char *) tupledesc->attrs[i], + appendBinaryStringInfo(&buf, (char *) 
TupleDescAttr(tupledesc, i), sizeof(FormData_pg_attribute)); } @@ -1253,7 +1253,7 @@ BuildFieldRemapInfo(TupleDesc tupledesc, MemoryContext mycontext) tupledesc->natts * sizeof(TupleRemapInfo *)); for (i = 0; i < tupledesc->natts; i++) { - Form_pg_attribute attr = tupledesc->attrs[i]; + Form_pg_attribute attr = TupleDescAttr(tupledesc, i); if (attr->attisdropped) { diff --git a/src/backend/executor/tstoreReceiver.c b/src/backend/executor/tstoreReceiver.c index eda38b1de1..027fa72f10 100644 --- a/src/backend/executor/tstoreReceiver.c +++ b/src/backend/executor/tstoreReceiver.c @@ -49,7 +49,6 @@ tstoreStartupReceiver(DestReceiver *self, int operation, TupleDesc typeinfo) { TStoreState *myState = (TStoreState *) self; bool needtoast = false; - Form_pg_attribute *attrs = typeinfo->attrs; int natts = typeinfo->natts; int i; @@ -58,9 +57,11 @@ tstoreStartupReceiver(DestReceiver *self, int operation, TupleDesc typeinfo) { for (i = 0; i < natts; i++) { - if (attrs[i]->attisdropped) + Form_pg_attribute attr = TupleDescAttr(typeinfo, i); + + if (attr->attisdropped) continue; - if (attrs[i]->attlen == -1) + if (attr->attlen == -1) { needtoast = true; break; @@ -109,7 +110,6 @@ tstoreReceiveSlot_detoast(TupleTableSlot *slot, DestReceiver *self) { TStoreState *myState = (TStoreState *) self; TupleDesc typeinfo = slot->tts_tupleDescriptor; - Form_pg_attribute *attrs = typeinfo->attrs; int natts = typeinfo->natts; int nfree; int i; @@ -127,10 +127,9 @@ tstoreReceiveSlot_detoast(TupleTableSlot *slot, DestReceiver *self) for (i = 0; i < natts; i++) { Datum val = slot->tts_values[i]; + Form_pg_attribute attr = TupleDescAttr(typeinfo, i); - if (!attrs[i]->attisdropped && - attrs[i]->attlen == -1 && - !slot->tts_isnull[i]) + if (!attr->attisdropped && attr->attlen == -1 && !slot->tts_isnull[i]) { if (VARATT_IS_EXTERNAL(DatumGetPointer(val))) { diff --git a/src/backend/optimizer/prep/preptlist.c b/src/backend/optimizer/prep/preptlist.c index afc733f183..9d75e8612a 100644 --- a/src/backend/optimizer/prep/preptlist.c +++ b/src/backend/optimizer/prep/preptlist.c @@ -248,7 +248,7 @@ expand_targetlist(List *tlist, int command_type, for (attrno = 1; attrno <= numattrs; attrno++) { - Form_pg_attribute att_tup = rel->rd_att->attrs[attrno - 1]; + Form_pg_attribute att_tup = TupleDescAttr(rel->rd_att, attrno - 1); TargetEntry *new_tle = NULL; if (tlist_item != NULL) diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index f43c3f3007..e73c819901 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -1648,7 +1648,7 @@ make_inh_translation_list(Relation oldrelation, Relation newrelation, Oid attcollation; int new_attno; - att = old_tupdesc->attrs[old_attno]; + att = TupleDescAttr(old_tupdesc, old_attno); if (att->attisdropped) { /* Just put NULL into this list entry */ @@ -1686,7 +1686,7 @@ make_inh_translation_list(Relation oldrelation, Relation newrelation, * notational device to include the assignment into the if-clause. 
*/ if (old_attno < newnatts && - (att = new_tupdesc->attrs[old_attno]) != NULL && + (att = TupleDescAttr(new_tupdesc, old_attno)) != NULL && !att->attisdropped && att->attinhcount != 0 && strcmp(attname, NameStr(att->attname)) == 0) new_attno = old_attno; @@ -1694,7 +1694,7 @@ make_inh_translation_list(Relation oldrelation, Relation newrelation, { for (new_attno = 0; new_attno < newnatts; new_attno++) { - att = new_tupdesc->attrs[new_attno]; + att = TupleDescAttr(new_tupdesc, new_attno); if (!att->attisdropped && att->attinhcount != 0 && strcmp(attname, NameStr(att->attname)) == 0) break; diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index 602d17dfb4..93add27dbe 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -2366,7 +2366,7 @@ rowtype_field_matches(Oid rowtypeid, int fieldnum, ReleaseTupleDesc(tupdesc); return false; } - attr = tupdesc->attrs[fieldnum - 1]; + attr = TupleDescAttr(tupdesc, fieldnum - 1); if (attr->attisdropped || attr->atttypid != expectedtype || attr->atttypmod != expectedtypmod || diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index dc0b0b0706..a1ebd4acc8 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -1072,7 +1072,7 @@ get_rel_data_width(Relation rel, int32 *attr_widths) for (i = 1; i <= RelationGetNumberOfAttributes(rel); i++) { - Form_pg_attribute att = rel->rd_att->attrs[i - 1]; + Form_pg_attribute att = TupleDescAttr(rel->rd_att, i - 1); int32 item_width; if (att->attisdropped) @@ -1208,7 +1208,7 @@ get_relation_constraints(PlannerInfo *root, for (i = 1; i <= natts; i++) { - Form_pg_attribute att = relation->rd_att->attrs[i - 1]; + Form_pg_attribute att = TupleDescAttr(relation->rd_att, i - 1); if (att->attnotnull && !att->attisdropped) { @@ -1489,7 +1489,8 @@ build_physical_tlist(PlannerInfo *root, RelOptInfo *rel) numattrs = RelationGetNumberOfAttributes(relation); for (attrno = 1; attrno <= numattrs; attrno++) { - Form_pg_attribute att_tup = relation->rd_att->attrs[attrno - 1]; + Form_pg_attribute att_tup = TupleDescAttr(relation->rd_att, + attrno - 1); if (att_tup->attisdropped) { @@ -1609,7 +1610,7 @@ build_index_tlist(PlannerInfo *root, IndexOptInfo *index, att_tup = SystemAttributeDefinition(indexkey, heapRelation->rd_rel->relhasoids); else - att_tup = heapRelation->rd_att->attrs[indexkey - 1]; + att_tup = TupleDescAttr(heapRelation->rd_att, indexkey - 1); indexvar = (Expr *) makeVar(varno, indexkey, diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c index 4fb793cfbf..757a4a8fd1 100644 --- a/src/backend/parser/analyze.c +++ b/src/backend/parser/analyze.c @@ -1050,7 +1050,7 @@ transformOnConflictClause(ParseState *pstate, */ for (attno = 0; attno < targetrel->rd_rel->relnatts; attno++) { - Form_pg_attribute attr = targetrel->rd_att->attrs[attno]; + Form_pg_attribute attr = TupleDescAttr(targetrel->rd_att, attno); char *name; if (attr->attisdropped) diff --git a/src/backend/parser/parse_coerce.c b/src/backend/parser/parse_coerce.c index 0bc7dba6a0..e95cee1ebf 100644 --- a/src/backend/parser/parse_coerce.c +++ b/src/backend/parser/parse_coerce.c @@ -982,9 +982,10 @@ coerce_record_to_complex(ParseState *pstate, Node *node, Node *expr; Node *cexpr; Oid exprtype; + Form_pg_attribute attr = TupleDescAttr(tupdesc, i); /* Fill in NULLs for dropped columns in rowtype */ - if (tupdesc->attrs[i]->attisdropped) + if (attr->attisdropped) { /* * can't use atttypid here, but 
it doesn't really matter what type @@ -1008,8 +1009,8 @@ coerce_record_to_complex(ParseState *pstate, Node *node, cexpr = coerce_to_target_type(pstate, expr, exprtype, - tupdesc->attrs[i]->atttypid, - tupdesc->attrs[i]->atttypmod, + attr->atttypid, + attr->atttypmod, ccontext, COERCE_IMPLICIT_CAST, -1); @@ -1021,7 +1022,7 @@ coerce_record_to_complex(ParseState *pstate, Node *node, format_type_be(targetTypeId)), errdetail("Cannot cast type %s to %s in column %d.", format_type_be(exprtype), - format_type_be(tupdesc->attrs[i]->atttypid), + format_type_be(attr->atttypid), ucolno), parser_coercion_errposition(pstate, location, expr))); newargs = lappend(newargs, cexpr); diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c index 8487edaa95..2f2f2c7fb0 100644 --- a/src/backend/parser/parse_func.c +++ b/src/backend/parser/parse_func.c @@ -1834,7 +1834,7 @@ ParseComplexProjection(ParseState *pstate, char *funcname, Node *first_arg, for (i = 0; i < tupdesc->natts; i++) { - Form_pg_attribute att = tupdesc->attrs[i]; + Form_pg_attribute att = TupleDescAttr(tupdesc, i); if (strcmp(funcname, NameStr(att->attname)) == 0 && !att->attisdropped) diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index 684a50d3df..88b3e88a21 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -1052,7 +1052,7 @@ buildRelationAliases(TupleDesc tupdesc, Alias *alias, Alias *eref) for (varattno = 0; varattno < maxattrs; varattno++) { - Form_pg_attribute attr = tupdesc->attrs[varattno]; + Form_pg_attribute attr = TupleDescAttr(tupdesc, varattno); Value *attrname; if (attr->attisdropped) @@ -2026,19 +2026,18 @@ addRangeTableEntryForENR(ParseState *pstate, rte->colcollations = NIL; for (attno = 1; attno <= tupdesc->natts; ++attno) { - if (tupdesc->attrs[attno - 1]->atttypid == InvalidOid && - !(tupdesc->attrs[attno - 1]->attisdropped)) + Form_pg_attribute att = TupleDescAttr(tupdesc, attno - 1); + + if (att->atttypid == InvalidOid && + !(att->attisdropped)) elog(ERROR, "atttypid was invalid for column which has not been dropped from \"%s\"", rv->relname); rte->coltypes = - lappend_oid(rte->coltypes, - tupdesc->attrs[attno - 1]->atttypid); + lappend_oid(rte->coltypes, att->atttypid); rte->coltypmods = - lappend_int(rte->coltypmods, - tupdesc->attrs[attno - 1]->atttypmod); + lappend_int(rte->coltypmods, att->atttypmod); rte->colcollations = - lappend_oid(rte->colcollations, - tupdesc->attrs[attno - 1]->attcollation); + lappend_oid(rte->colcollations, att->attcollation); } /* @@ -2514,7 +2513,7 @@ expandTupleDesc(TupleDesc tupdesc, Alias *eref, int count, int offset, Assert(count <= tupdesc->natts); for (varattno = 0; varattno < count; varattno++) { - Form_pg_attribute attr = tupdesc->attrs[varattno]; + Form_pg_attribute attr = TupleDescAttr(tupdesc, varattno); if (attr->attisdropped) { @@ -2749,7 +2748,7 @@ get_rte_attribute_type(RangeTblEntry *rte, AttrNumber attnum, Assert(tupdesc); Assert(attnum <= tupdesc->natts); - att_tup = tupdesc->attrs[attnum - 1]; + att_tup = TupleDescAttr(tupdesc, attnum - 1); /* * If dropped column, pretend it ain't there. 
See @@ -2953,7 +2952,8 @@ get_rte_attribute_is_dropped(RangeTblEntry *rte, AttrNumber attnum) Assert(tupdesc); Assert(attnum - atts_done <= tupdesc->natts); - att_tup = tupdesc->attrs[attnum - atts_done - 1]; + att_tup = TupleDescAttr(tupdesc, + attnum - atts_done - 1); return att_tup->attisdropped; } /* Otherwise, it can't have any dropped columns */ @@ -3042,7 +3042,7 @@ attnameAttNum(Relation rd, const char *attname, bool sysColOK) for (i = 0; i < rd->rd_rel->relnatts; i++) { - Form_pg_attribute att = rd->rd_att->attrs[i]; + Form_pg_attribute att = TupleDescAttr(rd->rd_att, i); if (namestrcmp(&(att->attname), attname) == 0 && !att->attisdropped) return i + 1; @@ -3102,7 +3102,7 @@ attnumAttName(Relation rd, int attid) } if (attid > rd->rd_att->natts) elog(ERROR, "invalid attribute number %d", attid); - return &rd->rd_att->attrs[attid - 1]->attname; + return &TupleDescAttr(rd->rd_att, attid - 1)->attname; } /* @@ -3124,7 +3124,7 @@ attnumTypeId(Relation rd, int attid) } if (attid > rd->rd_att->natts) elog(ERROR, "invalid attribute number %d", attid); - return rd->rd_att->attrs[attid - 1]->atttypid; + return TupleDescAttr(rd->rd_att, attid - 1)->atttypid; } /* @@ -3142,7 +3142,7 @@ attnumCollationId(Relation rd, int attid) } if (attid > rd->rd_att->natts) elog(ERROR, "invalid attribute number %d", attid); - return rd->rd_att->attrs[attid - 1]->attcollation; + return TupleDescAttr(rd->rd_att, attid - 1)->attcollation; } /* diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c index 0a70539fb1..c3cb0357ca 100644 --- a/src/backend/parser/parse_target.c +++ b/src/backend/parser/parse_target.c @@ -484,8 +484,8 @@ transformAssignedExpr(ParseState *pstate, colname), parser_errposition(pstate, location))); attrtype = attnumTypeId(rd, attrno); - attrtypmod = rd->rd_att->attrs[attrno - 1]->atttypmod; - attrcollation = rd->rd_att->attrs[attrno - 1]->attcollation; + attrtypmod = TupleDescAttr(rd->rd_att, attrno - 1)->atttypmod; + attrcollation = TupleDescAttr(rd->rd_att, attrno - 1)->attcollation; /* * If the expression is a DEFAULT placeholder, insert the attribute's @@ -959,19 +959,21 @@ checkInsertTargets(ParseState *pstate, List *cols, List **attrnos) /* * Generate default column list for INSERT. 
*/ - Form_pg_attribute *attr = pstate->p_target_relation->rd_att->attrs; int numcol = pstate->p_target_relation->rd_rel->relnatts; int i; for (i = 0; i < numcol; i++) { ResTarget *col; + Form_pg_attribute attr; - if (attr[i]->attisdropped) + attr = TupleDescAttr(pstate->p_target_relation->rd_att, i); + + if (attr->attisdropped) continue; col = makeNode(ResTarget); - col->name = pstrdup(NameStr(attr[i]->attname)); + col->name = pstrdup(NameStr(attr->attname)); col->indirection = NIL; col->val = NULL; col->location = -1; @@ -1407,7 +1409,7 @@ ExpandRowReference(ParseState *pstate, Node *expr, numAttrs = tupleDesc->natts; for (i = 0; i < numAttrs; i++) { - Form_pg_attribute att = tupleDesc->attrs[i]; + Form_pg_attribute att = TupleDescAttr(tupleDesc, i); FieldSelect *fselect; if (att->attisdropped) diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 495ba3dffc..20586797cc 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -969,7 +969,8 @@ transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_cla for (parent_attno = 1; parent_attno <= tupleDesc->natts; parent_attno++) { - Form_pg_attribute attribute = tupleDesc->attrs[parent_attno - 1]; + Form_pg_attribute attribute = TupleDescAttr(tupleDesc, + parent_attno - 1); char *attributeName = NameStr(attribute->attname); ColumnDef *def; @@ -1219,7 +1220,7 @@ transformOfType(CreateStmtContext *cxt, TypeName *ofTypename) tupdesc = lookup_rowtype_tupdesc(ofTypeId, -1); for (i = 0; i < tupdesc->natts; i++) { - Form_pg_attribute attr = tupdesc->attrs[i]; + Form_pg_attribute attr = TupleDescAttr(tupdesc, i); ColumnDef *n; if (attr->attisdropped) @@ -1256,7 +1257,6 @@ generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx, const AttrNumber *attmap, int attmap_length) { Oid source_relid = RelationGetRelid(source_idx); - Form_pg_attribute *attrs = RelationGetDescr(source_idx)->attrs; HeapTuple ht_idxrel; HeapTuple ht_idx; HeapTuple ht_am; @@ -1434,6 +1434,8 @@ generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx, { IndexElem *iparam; AttrNumber attnum = idxrec->indkey.values[keyno]; + Form_pg_attribute attr = TupleDescAttr(RelationGetDescr(source_idx), + keyno); int16 opt = source_idx->rd_indoption[keyno]; iparam = makeNode(IndexElem); @@ -1481,7 +1483,7 @@ generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx, } /* Copy the original index column name */ - iparam->indexcolname = pstrdup(NameStr(attrs[keyno]->attname)); + iparam->indexcolname = pstrdup(NameStr(attr->attname)); /* Add the collation name, if non-default */ iparam->collation = get_collation(indcollation->values[keyno], keycoltype); @@ -1921,7 +1923,7 @@ transformIndexConstraint(Constraint *constraint, CreateStmtContext *cxt) if (attnum > 0) { Assert(attnum <= heap_rel->rd_att->natts); - attform = heap_rel->rd_att->attrs[attnum - 1]; + attform = TupleDescAttr(heap_rel->rd_att, attnum - 1); } else attform = SystemAttributeDefinition(attnum, @@ -2040,7 +2042,8 @@ transformIndexConstraint(Constraint *constraint, CreateStmtContext *cxt) inh->relname))); for (count = 0; count < rel->rd_att->natts; count++) { - Form_pg_attribute inhattr = rel->rd_att->attrs[count]; + Form_pg_attribute inhattr = TupleDescAttr(rel->rd_att, + count); char *inhname = NameStr(inhattr->attname); if (inhattr->attisdropped) diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c index 94dfee0b24..f19649b113 100644 --- 
a/src/backend/replication/logical/proto.c +++ b/src/backend/replication/logical/proto.c @@ -398,7 +398,7 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple) for (i = 0; i < desc->natts; i++) { - if (desc->attrs[i]->attisdropped) + if (TupleDescAttr(desc, i)->attisdropped) continue; nliveatts++; } @@ -415,7 +415,7 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple) { HeapTuple typtup; Form_pg_type typclass; - Form_pg_attribute att = desc->attrs[i]; + Form_pg_attribute att = TupleDescAttr(desc, i); char *outputstr; /* skip dropped columns */ @@ -518,7 +518,7 @@ logicalrep_write_attrs(StringInfo out, Relation rel) /* send number of live attributes */ for (i = 0; i < desc->natts; i++) { - if (desc->attrs[i]->attisdropped) + if (TupleDescAttr(desc, i)->attisdropped) continue; nliveatts++; } @@ -533,7 +533,7 @@ logicalrep_write_attrs(StringInfo out, Relation rel) /* send the attributes */ for (i = 0; i < desc->natts; i++) { - Form_pg_attribute att = desc->attrs[i]; + Form_pg_attribute att = TupleDescAttr(desc, i); uint8 flags = 0; if (att->attisdropped) diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c index a7ea16d714..408143ae95 100644 --- a/src/backend/replication/logical/relation.c +++ b/src/backend/replication/logical/relation.c @@ -278,15 +278,16 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE lockmode) for (i = 0; i < desc->natts; i++) { int attnum; + Form_pg_attribute attr = TupleDescAttr(desc, i); - if (desc->attrs[i]->attisdropped) + if (attr->attisdropped) { entry->attrmap[i] = -1; continue; } attnum = logicalrep_rel_att_by_name(remoterel, - NameStr(desc->attrs[i]->attname)); + NameStr(attr->attname)); entry->attrmap[i] = attnum; if (attnum >= 0) diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c index 5567bee061..657bafae57 100644 --- a/src/backend/replication/logical/reorderbuffer.c +++ b/src/backend/replication/logical/reorderbuffer.c @@ -2800,7 +2800,7 @@ ReorderBufferToastReplace(ReorderBuffer *rb, ReorderBufferTXN *txn, for (natt = 0; natt < desc->natts; natt++) { - Form_pg_attribute attr = desc->attrs[natt]; + Form_pg_attribute attr = TupleDescAttr(desc, natt); ReorderBufferToastEnt *ent; struct varlena *varlena; diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c index 7c2df57645..041f3873b9 100644 --- a/src/backend/replication/logical/worker.c +++ b/src/backend/replication/logical/worker.c @@ -247,7 +247,7 @@ slot_fill_defaults(LogicalRepRelMapEntry *rel, EState *estate, { Expr *defexpr; - if (desc->attrs[attnum]->attisdropped) + if (TupleDescAttr(desc, attnum)->attisdropped) continue; if (rel->attrmap[attnum] >= 0) @@ -323,7 +323,7 @@ slot_store_cstrings(TupleTableSlot *slot, LogicalRepRelMapEntry *rel, /* Call the "in" function for each non-dropped attribute */ for (i = 0; i < natts; i++) { - Form_pg_attribute att = slot->tts_tupleDescriptor->attrs[i]; + Form_pg_attribute att = TupleDescAttr(slot->tts_tupleDescriptor, i); int remoteattnum = rel->attrmap[i]; if (!att->attisdropped && remoteattnum >= 0 && @@ -388,7 +388,7 @@ slot_modify_cstrings(TupleTableSlot *slot, LogicalRepRelMapEntry *rel, /* Call the "in" function for each replaced attribute */ for (i = 0; i < natts; i++) { - Form_pg_attribute att = slot->tts_tupleDescriptor->attrs[i]; + Form_pg_attribute att = TupleDescAttr(slot->tts_tupleDescriptor, i); int remoteattnum = rel->attrmap[i]; if (remoteattnum >= 0 && 
!replaces[remoteattnum]) diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c index 370b74f232..67c1d3b246 100644 --- a/src/backend/replication/pgoutput/pgoutput.c +++ b/src/backend/replication/pgoutput/pgoutput.c @@ -303,7 +303,7 @@ pgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn, */ for (i = 0; i < desc->natts; i++) { - Form_pg_attribute att = desc->attrs[i]; + Form_pg_attribute att = TupleDescAttr(desc, i); if (att->attisdropped) continue; diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c index 9e6865bef6..d03984a2de 100644 --- a/src/backend/rewrite/rewriteDefine.c +++ b/src/backend/rewrite/rewriteDefine.c @@ -676,7 +676,7 @@ checkRuleResultList(List *targetList, TupleDesc resultDesc, bool isSelect, errmsg("SELECT rule's target list has too many entries") : errmsg("RETURNING list has too many entries"))); - attr = resultDesc->attrs[i - 1]; + attr = TupleDescAttr(resultDesc, i - 1); attname = NameStr(attr->attname); /* diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c index 6b79c69795..ef52dd5b95 100644 --- a/src/backend/rewrite/rewriteHandler.c +++ b/src/backend/rewrite/rewriteHandler.c @@ -751,7 +751,7 @@ rewriteTargetListIU(List *targetList, attrno = old_tle->resno; if (attrno < 1 || attrno > numattrs) elog(ERROR, "bogus resno %d in targetlist", attrno); - att_tup = target_relation->rd_att->attrs[attrno - 1]; + att_tup = TupleDescAttr(target_relation->rd_att, attrno - 1); /* put attrno into attrno_list even if it's dropped */ if (attrno_list) @@ -794,7 +794,7 @@ rewriteTargetListIU(List *targetList, TargetEntry *new_tle = new_tles[attrno - 1]; bool apply_default; - att_tup = target_relation->rd_att->attrs[attrno - 1]; + att_tup = TupleDescAttr(target_relation->rd_att, attrno - 1); /* We can (and must) ignore deleted attributes */ if (att_tup->attisdropped) @@ -1112,7 +1112,7 @@ Node * build_column_default(Relation rel, int attrno) { TupleDesc rd_att = rel->rd_att; - Form_pg_attribute att_tup = rd_att->attrs[attrno - 1]; + Form_pg_attribute att_tup = TupleDescAttr(rd_att, attrno - 1); Oid atttype = att_tup->atttypid; int32 atttypmod = att_tup->atttypmod; Node *expr = NULL; @@ -1247,7 +1247,7 @@ rewriteValuesRTE(RangeTblEntry *rte, Relation target_relation, List *attrnos) Form_pg_attribute att_tup; Node *new_expr; - att_tup = target_relation->rd_att->attrs[attrno - 1]; + att_tup = TupleDescAttr(target_relation->rd_att, attrno - 1); if (!att_tup->attisdropped) new_expr = build_column_default(target_relation, attrno); diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c index 4dd7d977e8..1ddb42b4d0 100644 --- a/src/backend/utils/adt/json.c +++ b/src/backend/utils/adt/json.c @@ -1714,15 +1714,16 @@ composite_to_json(Datum composite, StringInfo result, bool use_line_feeds) char *attname; JsonTypeCategory tcategory; Oid outfuncoid; + Form_pg_attribute att = TupleDescAttr(tupdesc, i); - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) continue; if (needsep) appendStringInfoString(result, sep); needsep = true; - attname = NameStr(tupdesc->attrs[i]->attname); + attname = NameStr(att->attname); escape_json(result, attname); appendStringInfoChar(result, ':'); @@ -1734,8 +1735,7 @@ composite_to_json(Datum composite, StringInfo result, bool use_line_feeds) outfuncoid = InvalidOid; } else - json_categorize_type(tupdesc->attrs[i]->atttypid, - &tcategory, &outfuncoid); + json_categorize_type(att->atttypid, &tcategory, 
&outfuncoid); datum_to_json(val, isnull, result, tcategory, outfuncoid, false); } diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c index 49f41f9f99..1eb7f3d6f9 100644 --- a/src/backend/utils/adt/jsonb.c +++ b/src/backend/utils/adt/jsonb.c @@ -1075,11 +1075,12 @@ composite_to_jsonb(Datum composite, JsonbInState *result) JsonbTypeCategory tcategory; Oid outfuncoid; JsonbValue v; + Form_pg_attribute att = TupleDescAttr(tupdesc, i); - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) continue; - attname = NameStr(tupdesc->attrs[i]->attname); + attname = NameStr(att->attname); v.type = jbvString; /* don't need checkStringLen here - can't exceed maximum name length */ @@ -1096,8 +1097,7 @@ composite_to_jsonb(Datum composite, JsonbInState *result) outfuncoid = InvalidOid; } else - jsonb_categorize_type(tupdesc->attrs[i]->atttypid, - &tcategory, &outfuncoid); + jsonb_categorize_type(att->atttypid, &tcategory, &outfuncoid); datum_to_jsonb(val, isnull, result, tcategory, outfuncoid, false); } diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c index 4779e74895..d92ffa83d9 100644 --- a/src/backend/utils/adt/jsonfuncs.c +++ b/src/backend/utils/adt/jsonfuncs.c @@ -3087,7 +3087,7 @@ populate_record(TupleDesc tupdesc, for (i = 0; i < ncolumns; ++i) { - Form_pg_attribute att = tupdesc->attrs[i]; + Form_pg_attribute att = TupleDescAttr(tupdesc, i); char *colname = NameStr(att->attname); JsValue field = {0}; bool found; diff --git a/src/backend/utils/adt/orderedsetaggs.c b/src/backend/utils/adt/orderedsetaggs.c index 8502fcfc82..25905a3287 100644 --- a/src/backend/utils/adt/orderedsetaggs.c +++ b/src/backend/utils/adt/orderedsetaggs.c @@ -1125,13 +1125,15 @@ hypothetical_check_argtypes(FunctionCallInfo fcinfo, int nargs, /* check that we have an int4 flag column */ if (!tupdesc || (nargs + 1) != tupdesc->natts || - tupdesc->attrs[nargs]->atttypid != INT4OID) + TupleDescAttr(tupdesc, nargs)->atttypid != INT4OID) elog(ERROR, "type mismatch in hypothetical-set function"); /* check that direct args match in type with aggregated args */ for (i = 0; i < nargs; i++) { - if (get_fn_expr_argtype(fcinfo->flinfo, i + 1) != tupdesc->attrs[i]->atttypid) + Form_pg_attribute attr = TupleDescAttr(tupdesc, i); + + if (get_fn_expr_argtype(fcinfo->flinfo, i + 1) != attr->atttypid) elog(ERROR, "type mismatch in hypothetical-set function"); } } diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c index 44acd13c6b..98fe00ff39 100644 --- a/src/backend/utils/adt/rowtypes.c +++ b/src/backend/utils/adt/rowtypes.c @@ -159,12 +159,13 @@ record_in(PG_FUNCTION_ARGS) for (i = 0; i < ncolumns; i++) { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); ColumnIOData *column_info = &my_extra->columns[i]; - Oid column_type = tupdesc->attrs[i]->atttypid; + Oid column_type = att->atttypid; char *column_data; /* Ignore dropped columns in datatype, but fill with nulls */ - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) { values[i] = (Datum) 0; nulls[i] = true; @@ -252,7 +253,7 @@ record_in(PG_FUNCTION_ARGS) values[i] = InputFunctionCall(&column_info->proc, column_data, column_info->typioparam, - tupdesc->attrs[i]->atttypmod); + att->atttypmod); /* * Prep for next column @@ -367,15 +368,16 @@ record_out(PG_FUNCTION_ARGS) for (i = 0; i < ncolumns; i++) { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); ColumnIOData *column_info = &my_extra->columns[i]; - Oid column_type = tupdesc->attrs[i]->atttypid; + Oid column_type = 
att->atttypid; Datum attr; char *value; char *tmp; bool nq; /* Ignore dropped columns in datatype */ - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) continue; if (needComma) @@ -519,7 +521,7 @@ record_recv(PG_FUNCTION_ARGS) validcols = 0; for (i = 0; i < ncolumns; i++) { - if (!tupdesc->attrs[i]->attisdropped) + if (!TupleDescAttr(tupdesc, i)->attisdropped) validcols++; } if (usercols != validcols) @@ -531,8 +533,9 @@ record_recv(PG_FUNCTION_ARGS) /* Process each column */ for (i = 0; i < ncolumns; i++) { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); ColumnIOData *column_info = &my_extra->columns[i]; - Oid column_type = tupdesc->attrs[i]->atttypid; + Oid column_type = att->atttypid; Oid coltypoid; int itemlen; StringInfoData item_buf; @@ -540,7 +543,7 @@ record_recv(PG_FUNCTION_ARGS) char csave; /* Ignore dropped columns in datatype, but fill with nulls */ - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) { values[i] = (Datum) 0; nulls[i] = true; @@ -605,7 +608,7 @@ record_recv(PG_FUNCTION_ARGS) values[i] = ReceiveFunctionCall(&column_info->proc, bufptr, column_info->typioparam, - tupdesc->attrs[i]->atttypmod); + att->atttypmod); if (bufptr) { @@ -712,20 +715,21 @@ record_send(PG_FUNCTION_ARGS) validcols = 0; for (i = 0; i < ncolumns; i++) { - if (!tupdesc->attrs[i]->attisdropped) + if (!TupleDescAttr(tupdesc, i)->attisdropped) validcols++; } pq_sendint(&buf, validcols, 4); for (i = 0; i < ncolumns; i++) { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); ColumnIOData *column_info = &my_extra->columns[i]; - Oid column_type = tupdesc->attrs[i]->atttypid; + Oid column_type = att->atttypid; Datum attr; bytea *outputbytes; /* Ignore dropped columns in datatype */ - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) continue; pq_sendint(&buf, column_type, sizeof(Oid)); @@ -873,18 +877,20 @@ record_cmp(FunctionCallInfo fcinfo) i1 = i2 = j = 0; while (i1 < ncolumns1 || i2 < ncolumns2) { + Form_pg_attribute att1; + Form_pg_attribute att2; TypeCacheEntry *typentry; Oid collation; /* * Skip dropped columns */ - if (i1 < ncolumns1 && tupdesc1->attrs[i1]->attisdropped) + if (i1 < ncolumns1 && TupleDescAttr(tupdesc1, i1)->attisdropped) { i1++; continue; } - if (i2 < ncolumns2 && tupdesc2->attrs[i2]->attisdropped) + if (i2 < ncolumns2 && TupleDescAttr(tupdesc2, i2)->attisdropped) { i2++; continue; @@ -892,24 +898,26 @@ record_cmp(FunctionCallInfo fcinfo) if (i1 >= ncolumns1 || i2 >= ncolumns2) break; /* we'll deal with mismatch below loop */ + att1 = TupleDescAttr(tupdesc1, i1); + att2 = TupleDescAttr(tupdesc2, i2); + /* * Have two matching columns, they must be same type */ - if (tupdesc1->attrs[i1]->atttypid != - tupdesc2->attrs[i2]->atttypid) + if (att1->atttypid != att2->atttypid) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("cannot compare dissimilar column types %s and %s at record column %d", - format_type_be(tupdesc1->attrs[i1]->atttypid), - format_type_be(tupdesc2->attrs[i2]->atttypid), + format_type_be(att1->atttypid), + format_type_be(att2->atttypid), j + 1))); /* * If they're not same collation, we don't complain here, but the * comparison function might. 
*/ - collation = tupdesc1->attrs[i1]->attcollation; - if (collation != tupdesc2->attrs[i2]->attcollation) + collation = att1->attcollation; + if (collation != att2->attcollation) collation = InvalidOid; /* @@ -917,9 +925,9 @@ record_cmp(FunctionCallInfo fcinfo) */ typentry = my_extra->columns[j].typentry; if (typentry == NULL || - typentry->type_id != tupdesc1->attrs[i1]->atttypid) + typentry->type_id != att1->atttypid) { - typentry = lookup_type_cache(tupdesc1->attrs[i1]->atttypid, + typentry = lookup_type_cache(att1->atttypid, TYPECACHE_CMP_PROC_FINFO); if (!OidIsValid(typentry->cmp_proc_finfo.fn_oid)) ereport(ERROR, @@ -1111,6 +1119,8 @@ record_eq(PG_FUNCTION_ARGS) i1 = i2 = j = 0; while (i1 < ncolumns1 || i2 < ncolumns2) { + Form_pg_attribute att1; + Form_pg_attribute att2; TypeCacheEntry *typentry; Oid collation; FunctionCallInfoData locfcinfo; @@ -1119,12 +1129,12 @@ record_eq(PG_FUNCTION_ARGS) /* * Skip dropped columns */ - if (i1 < ncolumns1 && tupdesc1->attrs[i1]->attisdropped) + if (i1 < ncolumns1 && TupleDescAttr(tupdesc1, i1)->attisdropped) { i1++; continue; } - if (i2 < ncolumns2 && tupdesc2->attrs[i2]->attisdropped) + if (i2 < ncolumns2 && TupleDescAttr(tupdesc2, i2)->attisdropped) { i2++; continue; @@ -1132,24 +1142,26 @@ record_eq(PG_FUNCTION_ARGS) if (i1 >= ncolumns1 || i2 >= ncolumns2) break; /* we'll deal with mismatch below loop */ + att1 = TupleDescAttr(tupdesc1, i1); + att2 = TupleDescAttr(tupdesc2, i2); + /* * Have two matching columns, they must be same type */ - if (tupdesc1->attrs[i1]->atttypid != - tupdesc2->attrs[i2]->atttypid) + if (att1->atttypid != att2->atttypid) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("cannot compare dissimilar column types %s and %s at record column %d", - format_type_be(tupdesc1->attrs[i1]->atttypid), - format_type_be(tupdesc2->attrs[i2]->atttypid), + format_type_be(att1->atttypid), + format_type_be(att2->atttypid), j + 1))); /* * If they're not same collation, we don't complain here, but the * equality function might. 
*/ - collation = tupdesc1->attrs[i1]->attcollation; - if (collation != tupdesc2->attrs[i2]->attcollation) + collation = att1->attcollation; + if (collation != att2->attcollation) collation = InvalidOid; /* @@ -1157,9 +1169,9 @@ record_eq(PG_FUNCTION_ARGS) */ typentry = my_extra->columns[j].typentry; if (typentry == NULL || - typentry->type_id != tupdesc1->attrs[i1]->atttypid) + typentry->type_id != att1->atttypid) { - typentry = lookup_type_cache(tupdesc1->attrs[i1]->atttypid, + typentry = lookup_type_cache(att1->atttypid, TYPECACHE_EQ_OPR_FINFO); if (!OidIsValid(typentry->eq_opr_finfo.fn_oid)) ereport(ERROR, @@ -1370,15 +1382,18 @@ record_image_cmp(FunctionCallInfo fcinfo) i1 = i2 = j = 0; while (i1 < ncolumns1 || i2 < ncolumns2) { + Form_pg_attribute att1; + Form_pg_attribute att2; + /* * Skip dropped columns */ - if (i1 < ncolumns1 && tupdesc1->attrs[i1]->attisdropped) + if (i1 < ncolumns1 && TupleDescAttr(tupdesc1, i1)->attisdropped) { i1++; continue; } - if (i2 < ncolumns2 && tupdesc2->attrs[i2]->attisdropped) + if (i2 < ncolumns2 && TupleDescAttr(tupdesc2, i2)->attisdropped) { i2++; continue; @@ -1386,24 +1401,25 @@ record_image_cmp(FunctionCallInfo fcinfo) if (i1 >= ncolumns1 || i2 >= ncolumns2) break; /* we'll deal with mismatch below loop */ + att1 = TupleDescAttr(tupdesc1, i1); + att2 = TupleDescAttr(tupdesc2, i2); + /* * Have two matching columns, they must be same type */ - if (tupdesc1->attrs[i1]->atttypid != - tupdesc2->attrs[i2]->atttypid) + if (att1->atttypid != att2->atttypid) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("cannot compare dissimilar column types %s and %s at record column %d", - format_type_be(tupdesc1->attrs[i1]->atttypid), - format_type_be(tupdesc2->attrs[i2]->atttypid), + format_type_be(att1->atttypid), + format_type_be(att2->atttypid), j + 1))); /* * The same type should have the same length (or both should be * variable). */ - Assert(tupdesc1->attrs[i1]->attlen == - tupdesc2->attrs[i2]->attlen); + Assert(att1->attlen == att2->attlen); /* * We consider two NULLs equal; NULL > not-NULL. 
@@ -1426,7 +1442,7 @@ record_image_cmp(FunctionCallInfo fcinfo) } /* Compare the pair of elements */ - if (tupdesc1->attrs[i1]->attlen == -1) + if (att1->attlen == -1) { Size len1, len2; @@ -1449,9 +1465,9 @@ record_image_cmp(FunctionCallInfo fcinfo) if ((Pointer) arg2val != (Pointer) values2[i2]) pfree(arg2val); } - else if (tupdesc1->attrs[i1]->attbyval) + else if (att1->attbyval) { - switch (tupdesc1->attrs[i1]->attlen) + switch (att1->attlen) { case 1: if (GET_1_BYTE(values1[i1]) != @@ -1495,7 +1511,7 @@ record_image_cmp(FunctionCallInfo fcinfo) { cmpresult = memcmp(DatumGetPointer(values1[i1]), DatumGetPointer(values2[i2]), - tupdesc1->attrs[i1]->attlen); + att1->attlen); } if (cmpresult < 0) @@ -1647,15 +1663,18 @@ record_image_eq(PG_FUNCTION_ARGS) i1 = i2 = j = 0; while (i1 < ncolumns1 || i2 < ncolumns2) { + Form_pg_attribute att1; + Form_pg_attribute att2; + /* * Skip dropped columns */ - if (i1 < ncolumns1 && tupdesc1->attrs[i1]->attisdropped) + if (i1 < ncolumns1 && TupleDescAttr(tupdesc1, i1)->attisdropped) { i1++; continue; } - if (i2 < ncolumns2 && tupdesc2->attrs[i2]->attisdropped) + if (i2 < ncolumns2 && TupleDescAttr(tupdesc2, i2)->attisdropped) { i2++; continue; @@ -1663,16 +1682,18 @@ record_image_eq(PG_FUNCTION_ARGS) if (i1 >= ncolumns1 || i2 >= ncolumns2) break; /* we'll deal with mismatch below loop */ + att1 = TupleDescAttr(tupdesc1, i1); + att2 = TupleDescAttr(tupdesc2, i2); + /* * Have two matching columns, they must be same type */ - if (tupdesc1->attrs[i1]->atttypid != - tupdesc2->attrs[i2]->atttypid) + if (att1->atttypid != att2->atttypid) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("cannot compare dissimilar column types %s and %s at record column %d", - format_type_be(tupdesc1->attrs[i1]->atttypid), - format_type_be(tupdesc2->attrs[i2]->atttypid), + format_type_be(att1->atttypid), + format_type_be(att2->atttypid), j + 1))); /* @@ -1687,7 +1708,7 @@ record_image_eq(PG_FUNCTION_ARGS) } /* Compare the pair of elements */ - if (tupdesc1->attrs[i1]->attlen == -1) + if (att1->attlen == -1) { Size len1, len2; @@ -1716,9 +1737,9 @@ record_image_eq(PG_FUNCTION_ARGS) pfree(arg2val); } } - else if (tupdesc1->attrs[i1]->attbyval) + else if (att1->attbyval) { - switch (tupdesc1->attrs[i1]->attlen) + switch (att1->attlen) { case 1: result = (GET_1_BYTE(values1[i1]) == @@ -1746,7 +1767,7 @@ record_image_eq(PG_FUNCTION_ARGS) { result = (memcmp(DatumGetPointer(values1[i1]), DatumGetPointer(values2[i2]), - tupdesc1->attrs[i1]->attlen) == 0); + att1->attlen) == 0); } if (!result) break; diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 7469ec773c..43646d2c4f 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -3698,10 +3698,12 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte, for (i = 0; i < ncolumns; i++) { - if (tupdesc->attrs[i]->attisdropped) + Form_pg_attribute attr = TupleDescAttr(tupdesc, i); + + if (attr->attisdropped) real_colnames[i] = NULL; else - real_colnames[i] = pstrdup(NameStr(tupdesc->attrs[i]->attname)); + real_colnames[i] = pstrdup(NameStr(attr->attname)); } relation_close(rel, AccessShareLock); } @@ -5391,7 +5393,7 @@ get_target_list(List *targetList, deparse_context *context, * Otherwise, just use what we can find in the TLE. 
*/ if (resultDesc && colno <= resultDesc->natts) - colname = NameStr(resultDesc->attrs[colno - 1]->attname); + colname = NameStr(TupleDescAttr(resultDesc, colno - 1)->attname); else colname = tle->resname; @@ -6741,7 +6743,7 @@ get_name_for_var_field(Var *var, int fieldno, Assert(tupleDesc); /* Got the tupdesc, so we can extract the field name */ Assert(fieldno >= 1 && fieldno <= tupleDesc->natts); - return NameStr(tupleDesc->attrs[fieldno - 1]->attname); + return NameStr(TupleDescAttr(tupleDesc, fieldno - 1)->attname); } /* Find appropriate nesting depth */ @@ -7051,7 +7053,7 @@ get_name_for_var_field(Var *var, int fieldno, Assert(tupleDesc); /* Got the tupdesc, so we can extract the field name */ Assert(fieldno >= 1 && fieldno <= tupleDesc->natts); - return NameStr(tupleDesc->attrs[fieldno - 1]->attname); + return NameStr(TupleDescAttr(tupleDesc, fieldno - 1)->attname); } /* @@ -8180,7 +8182,7 @@ get_rule_expr(Node *node, deparse_context *context, Node *e = (Node *) lfirst(arg); if (tupdesc == NULL || - !tupdesc->attrs[i]->attisdropped) + !TupleDescAttr(tupdesc, i)->attisdropped) { appendStringInfoString(buf, sep); /* Whole-row Vars need special treatment here */ @@ -8193,7 +8195,7 @@ get_rule_expr(Node *node, deparse_context *context, { while (i < tupdesc->natts) { - if (!tupdesc->attrs[i]->attisdropped) + if (!TupleDescAttr(tupdesc, i)->attisdropped) { appendStringInfoString(buf, sep); appendStringInfoString(buf, "NULL"); diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c index 8453b65e78..083f7d60a7 100644 --- a/src/backend/utils/adt/tid.c +++ b/src/backend/utils/adt/tid.c @@ -273,9 +273,11 @@ currtid_for_view(Relation viewrel, ItemPointer tid) for (i = 0; i < natts; i++) { - if (strcmp(NameStr(att->attrs[i]->attname), "ctid") == 0) + Form_pg_attribute attr = TupleDescAttr(att, i); + + if (strcmp(NameStr(attr->attname), "ctid") == 0) { - if (att->attrs[i]->atttypid != TIDOID) + if (attr->atttypid != TIDOID) elog(ERROR, "ctid isn't of type TID"); tididx = i; break; diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c index c47624eff6..24229c2dff 100644 --- a/src/backend/utils/adt/xml.c +++ b/src/backend/utils/adt/xml.c @@ -3099,13 +3099,15 @@ map_sql_table_to_xmlschema(TupleDesc tupdesc, Oid relid, bool nulls, for (i = 0; i < tupdesc->natts; i++) { - if (tupdesc->attrs[i]->attisdropped) + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + + if (att->attisdropped) continue; appendStringInfo(&result, " \n", - map_sql_identifier_to_xml_name(NameStr(tupdesc->attrs[i]->attname), + map_sql_identifier_to_xml_name(NameStr(att->attname), true, false), - map_sql_type_to_xml_name(tupdesc->attrs[i]->atttypid, -1), + map_sql_type_to_xml_name(att->atttypid, -1), nulls ? 
" nillable=\"true\"" : " minOccurs=\"0\""); } @@ -3392,10 +3394,11 @@ map_sql_typecoll_to_xmlschema_types(List *tupdesc_list) for (i = 0; i < tupdesc->natts; i++) { - if (tupdesc->attrs[i]->attisdropped) + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + + if (att->attisdropped) continue; - uniquetypes = list_append_unique_oid(uniquetypes, - tupdesc->attrs[i]->atttypid); + uniquetypes = list_append_unique_oid(uniquetypes, att->atttypid); } } diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c index f894053d80..e092801025 100644 --- a/src/backend/utils/cache/catcache.c +++ b/src/backend/utils/cache/catcache.c @@ -797,7 +797,7 @@ do { \ if (cache->cc_key[i] > 0) { \ elog(DEBUG2, "CatalogCacheInitializeCache: load %d/%d w/%d, %u", \ i+1, cache->cc_nkeys, cache->cc_key[i], \ - tupdesc->attrs[cache->cc_key[i] - 1]->atttypid); \ + TupleDescAttr(tupdesc, cache->cc_key[i] - 1)->atttypid); \ } else { \ elog(DEBUG2, "CatalogCacheInitializeCache: load %d/%d w/%d", \ i+1, cache->cc_nkeys, cache->cc_key[i]); \ @@ -862,7 +862,8 @@ CatalogCacheInitializeCache(CatCache *cache) if (cache->cc_key[i] > 0) { - Form_pg_attribute attr = tupdesc->attrs[cache->cc_key[i] - 1]; + Form_pg_attribute attr = TupleDescAttr(tupdesc, + cache->cc_key[i] - 1); keytype = attr->atttypid; /* cache key columns should always be NOT NULL */ diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index 2150fe9a39..b8e37809b0 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -546,7 +546,7 @@ RelationBuildTupleDesc(Relation relation) elog(ERROR, "invalid attribute number %d for %s", attp->attnum, RelationGetRelationName(relation)); - memcpy(relation->rd_att->attrs[attp->attnum - 1], + memcpy(TupleDescAttr(relation->rd_att, attp->attnum - 1), attp, ATTRIBUTE_FIXED_PART_SIZE); @@ -590,7 +590,7 @@ RelationBuildTupleDesc(Relation relation) int i; for (i = 0; i < relation->rd_rel->relnatts; i++) - Assert(relation->rd_att->attrs[i]->attcacheoff == -1); + Assert(TupleDescAttr(relation->rd_att, i)->attcacheoff == -1); } #endif @@ -600,7 +600,7 @@ RelationBuildTupleDesc(Relation relation) * for attnum=1 that used to exist in fastgetattr() and index_getattr(). 
*/ if (relation->rd_rel->relnatts > 0) - relation->rd_att->attrs[0]->attcacheoff = 0; + TupleDescAttr(relation->rd_att, 0)->attcacheoff = 0; /* * Set up constraint/default info @@ -958,9 +958,11 @@ RelationBuildPartitionKey(Relation relation) /* Collect type information */ if (attno != 0) { - key->parttypid[i] = relation->rd_att->attrs[attno - 1]->atttypid; - key->parttypmod[i] = relation->rd_att->attrs[attno - 1]->atttypmod; - key->parttypcoll[i] = relation->rd_att->attrs[attno - 1]->attcollation; + Form_pg_attribute att = TupleDescAttr(relation->rd_att, attno - 1); + + key->parttypid[i] = att->atttypid; + key->parttypmod[i] = att->atttypmod; + key->parttypcoll[i] = att->attcollation; } else { @@ -1977,16 +1979,16 @@ formrdesc(const char *relationName, Oid relationReltype, has_not_null = false; for (i = 0; i < natts; i++) { - memcpy(relation->rd_att->attrs[i], + memcpy(TupleDescAttr(relation->rd_att, i), &attrs[i], ATTRIBUTE_FIXED_PART_SIZE); has_not_null |= attrs[i].attnotnull; /* make sure attcacheoff is valid */ - relation->rd_att->attrs[i]->attcacheoff = -1; + TupleDescAttr(relation->rd_att, i)->attcacheoff = -1; } /* initialize first attribute's attcacheoff, cf RelationBuildTupleDesc */ - relation->rd_att->attrs[0]->attcacheoff = 0; + TupleDescAttr(relation->rd_att, 0)->attcacheoff = 0; /* mark not-null status */ if (has_not_null) @@ -2000,7 +2002,7 @@ formrdesc(const char *relationName, Oid relationReltype, /* * initialize relation id from info in att array (my, this is ugly) */ - RelationGetRelid(relation) = relation->rd_att->attrs[0]->attrelid; + RelationGetRelid(relation) = TupleDescAttr(relation->rd_att, 0)->attrelid; /* * All relations made with formrdesc are mapped. This is necessarily so @@ -3274,9 +3276,12 @@ RelationBuildLocalRelation(const char *relname, has_not_null = false; for (i = 0; i < natts; i++) { - rel->rd_att->attrs[i]->attidentity = tupDesc->attrs[i]->attidentity; - rel->rd_att->attrs[i]->attnotnull = tupDesc->attrs[i]->attnotnull; - has_not_null |= tupDesc->attrs[i]->attnotnull; + Form_pg_attribute satt = TupleDescAttr(tupDesc, i); + Form_pg_attribute datt = TupleDescAttr(rel->rd_att, i); + + datt->attidentity = satt->attidentity; + datt->attnotnull = satt->attnotnull; + has_not_null |= satt->attnotnull; } if (has_not_null) @@ -3346,7 +3351,7 @@ RelationBuildLocalRelation(const char *relname, RelationGetRelid(rel) = relid; for (i = 0; i < natts; i++) - rel->rd_att->attrs[i]->attrelid = relid; + TupleDescAttr(rel->rd_att, i)->attrelid = relid; rel->rd_rel->reltablespace = reltablespace; @@ -3971,13 +3976,13 @@ BuildHardcodedDescriptor(int natts, const FormData_pg_attribute *attrs, for (i = 0; i < natts; i++) { - memcpy(result->attrs[i], &attrs[i], ATTRIBUTE_FIXED_PART_SIZE); + memcpy(TupleDescAttr(result, i), &attrs[i], ATTRIBUTE_FIXED_PART_SIZE); /* make sure attcacheoff is valid */ - result->attrs[i]->attcacheoff = -1; + TupleDescAttr(result, i)->attcacheoff = -1; } /* initialize first attribute's attcacheoff, cf RelationBuildTupleDesc */ - result->attrs[0]->attcacheoff = 0; + TupleDescAttr(result, 0)->attcacheoff = 0; /* Note: we don't bother to set up a TupleConstr entry */ @@ -4044,6 +4049,7 @@ AttrDefaultFetch(Relation relation) while (HeapTupleIsValid(htup = systable_getnext(adscan))) { Form_pg_attrdef adform = (Form_pg_attrdef) GETSTRUCT(htup); + Form_pg_attribute attr = TupleDescAttr(relation->rd_att, adform->adnum - 1); for (i = 0; i < ndef; i++) { @@ -4051,7 +4057,7 @@ AttrDefaultFetch(Relation relation) continue; if (attrdef[i].adbin != NULL) 
elog(WARNING, "multiple attrdef records found for attr %s of rel %s", - NameStr(relation->rd_att->attrs[adform->adnum - 1]->attname), + NameStr(attr->attname), RelationGetRelationName(relation)); else found++; @@ -4061,7 +4067,7 @@ AttrDefaultFetch(Relation relation) adrel->rd_att, &isnull); if (isnull) elog(WARNING, "null adbin for attr %s of rel %s", - NameStr(relation->rd_att->attrs[adform->adnum - 1]->attname), + NameStr(attr->attname), RelationGetRelationName(relation)); else { @@ -5270,7 +5276,7 @@ errtablecol(Relation rel, int attnum) /* Use reldesc if it's a user attribute, else consult the catalogs */ if (attnum > 0 && attnum <= reldesc->natts) - colname = NameStr(reldesc->attrs[attnum - 1]->attname); + colname = NameStr(TupleDescAttr(reldesc, attnum - 1)->attname); else colname = get_relid_attribute_name(RelationGetRelid(rel), attnum); @@ -5460,14 +5466,16 @@ load_relcache_init_file(bool shared) has_not_null = false; for (i = 0; i < relform->relnatts; i++) { + Form_pg_attribute attr = TupleDescAttr(rel->rd_att, i); + if (fread(&len, 1, sizeof(len), fp) != sizeof(len)) goto read_failed; if (len != ATTRIBUTE_FIXED_PART_SIZE) goto read_failed; - if (fread(rel->rd_att->attrs[i], 1, len, fp) != len) + if (fread(attr, 1, len, fp) != len) goto read_failed; - has_not_null |= rel->rd_att->attrs[i]->attnotnull; + has_not_null |= attr->attnotnull; } /* next read the access method specific field */ @@ -5848,7 +5856,8 @@ write_relcache_init_file(bool shared) /* next, do all the attribute tuple form data entries */ for (i = 0; i < relform->relnatts; i++) { - write_item(rel->rd_att->attrs[i], ATTRIBUTE_FIXED_PART_SIZE, fp); + write_item(TupleDescAttr(rel->rd_att, i), + ATTRIBUTE_FIXED_PART_SIZE, fp); } /* next, do the access method specific field */ diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index 7ec31eb3e3..20567a394b 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -1176,11 +1176,12 @@ cache_record_field_properties(TypeCacheEntry *typentry) for (i = 0; i < tupdesc->natts; i++) { TypeCacheEntry *fieldentry; + Form_pg_attribute attr = TupleDescAttr(tupdesc, i); - if (tupdesc->attrs[i]->attisdropped) + if (attr->attisdropped) continue; - fieldentry = lookup_type_cache(tupdesc->attrs[i]->atttypid, + fieldentry = lookup_type_cache(attr->atttypid, TYPECACHE_EQ_OPR | TYPECACHE_CMP_PROC); if (!OidIsValid(fieldentry->eq_opr)) @@ -1340,7 +1341,7 @@ assign_record_type_typmod(TupleDesc tupDesc) { if (i >= REC_HASH_KEYS) break; - hashkey[i] = tupDesc->attrs[i]->atttypid; + hashkey[i] = TupleDescAttr(tupDesc, i)->atttypid; } recentry = (RecordCacheEntry *) hash_search(RecordCacheHash, (void *) hashkey, diff --git a/src/backend/utils/fmgr/funcapi.c b/src/backend/utils/fmgr/funcapi.c index be47d411c5..9c3f4510ce 100644 --- a/src/backend/utils/fmgr/funcapi.c +++ b/src/backend/utils/fmgr/funcapi.c @@ -419,7 +419,7 @@ resolve_polymorphic_tupdesc(TupleDesc tupdesc, oidvector *declared_args, /* See if there are any polymorphic outputs; quick out if not */ for (i = 0; i < natts; i++) { - switch (tupdesc->attrs[i]->atttypid) + switch (TupleDescAttr(tupdesc, i)->atttypid) { case ANYELEMENTOID: have_anyelement_result = true; @@ -548,13 +548,15 @@ resolve_polymorphic_tupdesc(TupleDesc tupdesc, oidvector *declared_args, /* And finally replace the tuple column types as needed */ for (i = 0; i < natts; i++) { - switch (tupdesc->attrs[i]->atttypid) + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + + switch (att->atttypid) { case 
ANYELEMENTOID: case ANYNONARRAYOID: case ANYENUMOID: TupleDescInitEntry(tupdesc, i + 1, - NameStr(tupdesc->attrs[i]->attname), + NameStr(att->attname), anyelement_type, -1, 0); @@ -562,7 +564,7 @@ resolve_polymorphic_tupdesc(TupleDesc tupdesc, oidvector *declared_args, break; case ANYARRAYOID: TupleDescInitEntry(tupdesc, i + 1, - NameStr(tupdesc->attrs[i]->attname), + NameStr(att->attname), anyarray_type, -1, 0); @@ -570,7 +572,7 @@ resolve_polymorphic_tupdesc(TupleDesc tupdesc, oidvector *declared_args, break; case ANYRANGEOID: TupleDescInitEntry(tupdesc, i + 1, - NameStr(tupdesc->attrs[i]->attname), + NameStr(att->attname), anyrange_type, -1, 0); @@ -1344,9 +1346,10 @@ TypeGetTupleDesc(Oid typeoid, List *colaliases) for (varattno = 0; varattno < natts; varattno++) { char *label = strVal(list_nth(colaliases, varattno)); + Form_pg_attribute attr = TupleDescAttr(tupdesc, varattno); if (label != NULL) - namestrcpy(&(tupdesc->attrs[varattno]->attname), label); + namestrcpy(&(attr->attname), label); } /* The tuple type is now an anonymous record type */ diff --git a/src/backend/utils/misc/pg_config.c b/src/backend/utils/misc/pg_config.c index 468c7cc9e1..a84878994c 100644 --- a/src/backend/utils/misc/pg_config.c +++ b/src/backend/utils/misc/pg_config.c @@ -54,8 +54,8 @@ pg_config(PG_FUNCTION_ARGS) * Check to make sure we have a reasonable tuple descriptor */ if (tupdesc->natts != 2 || - tupdesc->attrs[0]->atttypid != TEXTOID || - tupdesc->attrs[1]->atttypid != TEXTOID) + TupleDescAttr(tupdesc, 0)->atttypid != TEXTOID || + TupleDescAttr(tupdesc, 1)->atttypid != TEXTOID) ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR), errmsg("query-specified return tuple and " diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h index 3e1676c7e6..fa04a63b76 100644 --- a/src/include/access/htup_details.h +++ b/src/include/access/htup_details.h @@ -722,11 +722,11 @@ struct MinimalTupleData (*(isnull) = false), \ HeapTupleNoNulls(tup) ? \ ( \ - (tupleDesc)->attrs[(attnum)-1]->attcacheoff >= 0 ? \ + TupleDescAttr((tupleDesc), (attnum)-1)->attcacheoff >= 0 ? \ ( \ - fetchatt((tupleDesc)->attrs[(attnum)-1], \ + fetchatt(TupleDescAttr((tupleDesc), (attnum)-1), \ (char *) (tup)->t_data + (tup)->t_data->t_hoff + \ - (tupleDesc)->attrs[(attnum)-1]->attcacheoff) \ + TupleDescAttr((tupleDesc), (attnum)-1)->attcacheoff)\ ) \ : \ nocachegetattr((tup), (attnum), (tupleDesc)) \ diff --git a/src/include/access/itup.h b/src/include/access/itup.h index a94e7948b4..c178ae91a9 100644 --- a/src/include/access/itup.h +++ b/src/include/access/itup.h @@ -103,11 +103,11 @@ typedef IndexAttributeBitMapData * IndexAttributeBitMap; *(isnull) = false, \ !IndexTupleHasNulls(tup) ? \ ( \ - (tupleDesc)->attrs[(attnum)-1]->attcacheoff >= 0 ? \ + TupleDescAttr((tupleDesc), (attnum)-1)->attcacheoff >= 0 ? \ ( \ - fetchatt((tupleDesc)->attrs[(attnum)-1], \ + fetchatt(TupleDescAttr((tupleDesc), (attnum)-1), \ (char *) (tup) + IndexInfoFindDataOffset((tup)->t_info) \ - + (tupleDesc)->attrs[(attnum)-1]->attcacheoff) \ + + TupleDescAttr((tupleDesc), (attnum)-1)->attcacheoff) \ ) \ : \ nocache_index_getattr((tup), (attnum), (tupleDesc)) \ diff --git a/src/include/access/tupdesc.h b/src/include/access/tupdesc.h index e7065d70ba..31b77a08fa 100644 --- a/src/include/access/tupdesc.h +++ b/src/include/access/tupdesc.h @@ -80,6 +80,8 @@ typedef struct tupleDesc int tdrefcount; /* reference count, or -1 if not counting */ } *TupleDesc; +/* Accessor for the i'th attribute of tupdesc. 
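+ * Note that i is zero-based while attribute numbers are one-based, so a
+ * caller holding an attnum indexes it as TupleDescAttr(tupdesc, attnum - 1).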
*/ +#define TupleDescAttr(tupdesc, i) ((tupdesc)->attrs[(i)]) extern TupleDesc CreateTemplateTupleDesc(int natts, bool hasoid); diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c index afebec910d..5a575bdbe4 100644 --- a/src/pl/plperl/plperl.c +++ b/src/pl/plperl/plperl.c @@ -1095,6 +1095,7 @@ plperl_build_tuple_result(HV *perlhash, TupleDesc td) SV *val = HeVAL(he); char *key = hek2cstr(he); int attn = SPI_fnumber(td, key); + Form_pg_attribute attr = TupleDescAttr(td, attn - 1); if (attn == SPI_ERROR_NOATTRIBUTE) ereport(ERROR, @@ -1108,8 +1109,8 @@ plperl_build_tuple_result(HV *perlhash, TupleDesc td) key))); values[attn - 1] = plperl_sv_to_datum(val, - td->attrs[attn - 1]->atttypid, - td->attrs[attn - 1]->atttypmod, + attr->atttypid, + attr->atttypmod, NULL, NULL, InvalidOid, @@ -1757,6 +1758,7 @@ plperl_modify_tuple(HV *hvTD, TriggerData *tdata, HeapTuple otup) char *key = hek2cstr(he); SV *val = HeVAL(he); int attn = SPI_fnumber(tupdesc, key); + Form_pg_attribute attr = TupleDescAttr(tupdesc, attn - 1); if (attn == SPI_ERROR_NOATTRIBUTE) ereport(ERROR, @@ -1770,8 +1772,8 @@ plperl_modify_tuple(HV *hvTD, TriggerData *tdata, HeapTuple otup) key))); modvalues[attn - 1] = plperl_sv_to_datum(val, - tupdesc->attrs[attn - 1]->atttypid, - tupdesc->attrs[attn - 1]->atttypmod, + attr->atttypid, + attr->atttypmod, NULL, NULL, InvalidOid, @@ -3014,11 +3016,12 @@ plperl_hash_from_tuple(HeapTuple tuple, TupleDesc tupdesc) typisvarlena; char *attname; Oid typoutput; + Form_pg_attribute att = TupleDescAttr(tupdesc, i); - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) continue; - attname = NameStr(tupdesc->attrs[i]->attname); + attname = NameStr(att->attname); attr = heap_getattr(tuple, i + 1, tupdesc, &isnull); if (isnull) @@ -3032,7 +3035,7 @@ plperl_hash_from_tuple(HeapTuple tuple, TupleDesc tupdesc) continue; } - if (type_is_rowtype(tupdesc->attrs[i]->atttypid)) + if (type_is_rowtype(att->atttypid)) { SV *sv = plperl_hash_from_datum(attr); @@ -3043,17 +3046,16 @@ plperl_hash_from_tuple(HeapTuple tuple, TupleDesc tupdesc) SV *sv; Oid funcid; - if (OidIsValid(get_base_element_type(tupdesc->attrs[i]->atttypid))) - sv = plperl_ref_from_pg_array(attr, tupdesc->attrs[i]->atttypid); - else if ((funcid = get_transform_fromsql(tupdesc->attrs[i]->atttypid, current_call_data->prodesc->lang_oid, current_call_data->prodesc->trftypes))) + if (OidIsValid(get_base_element_type(att->atttypid))) + sv = plperl_ref_from_pg_array(attr, att->atttypid); + else if ((funcid = get_transform_fromsql(att->atttypid, current_call_data->prodesc->lang_oid, current_call_data->prodesc->trftypes))) sv = (SV *) DatumGetPointer(OidFunctionCall1(funcid, attr)); else { char *outputstr; /* XXX should have a way to cache these lookups */ - getTypeOutputInfo(tupdesc->attrs[i]->atttypid, - &typoutput, &typisvarlena); + getTypeOutputInfo(att->atttypid, &typoutput, &typisvarlena); outputstr = OidOutputFunctionCall(typoutput, attr); sv = cstr2sv(outputstr); diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index 662b3c97d7..e9d7ef55e9 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -2012,7 +2012,7 @@ build_row_from_class(Oid classOid) /* * Get the attribute and check for dropped column */ - attrStruct = row->rowtupdesc->attrs[i]; + attrStruct = TupleDescAttr(row->rowtupdesc, i); if (!attrStruct->attisdropped) { diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index 616f5e30f8..9716697259 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ 
b/src/pl/plpgsql/src/pl_exec.c @@ -2841,6 +2841,7 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, PLpgSQL_var *var = (PLpgSQL_var *) retvar; Datum retval = var->value; bool isNull = var->isnull; + Form_pg_attribute attr = TupleDescAttr(tupdesc, 0); if (natts != 1) ereport(ERROR, @@ -2858,8 +2859,8 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, &isNull, var->datatype->typoid, var->datatype->atttypmod, - tupdesc->attrs[0]->atttypid, - tupdesc->attrs[0]->atttypmod); + attr->atttypid, + attr->atttypmod); tuplestore_putvalues(estate->tuple_store, tupdesc, &retval, &isNull); @@ -2968,6 +2969,8 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, } else { + Form_pg_attribute attr = TupleDescAttr(tupdesc, 0); + /* Simple scalar result */ if (natts != 1) ereport(ERROR, @@ -2980,8 +2983,8 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, &isNull, rettype, rettypmod, - tupdesc->attrs[0]->atttypid, - tupdesc->attrs[0]->atttypmod); + attr->atttypid, + attr->atttypmod); tuplestore_putvalues(estate->tuple_store, tupdesc, &retval, &isNull); @@ -4588,8 +4591,8 @@ exec_assign_value(PLpgSQL_execstate *estate, * Now insert the new value, being careful to cast it to the * right type. */ - atttype = rec->tupdesc->attrs[fno - 1]->atttypid; - atttypmod = rec->tupdesc->attrs[fno - 1]->atttypmod; + atttype = TupleDescAttr(rec->tupdesc, fno - 1)->atttypid; + atttypmod = TupleDescAttr(rec->tupdesc, fno - 1)->atttypmod; values[0] = exec_cast_value(estate, value, &isNull, @@ -4913,7 +4916,11 @@ exec_eval_datum(PLpgSQL_execstate *estate, rec->refname, recfield->fieldname))); *typeid = SPI_gettypeid(rec->tupdesc, fno); if (fno > 0) - *typetypmod = rec->tupdesc->attrs[fno - 1]->atttypmod; + { + Form_pg_attribute attr = TupleDescAttr(rec->tupdesc, fno - 1); + + *typetypmod = attr->atttypmod; + } else *typetypmod = -1; *value = SPI_getbinval(rec->tup, rec->tupdesc, fno, isnull); @@ -5089,11 +5096,19 @@ plpgsql_exec_get_datum_type_info(PLpgSQL_execstate *estate, rec->refname, recfield->fieldname))); *typeid = SPI_gettypeid(rec->tupdesc, fno); if (fno > 0) - *typmod = rec->tupdesc->attrs[fno - 1]->atttypmod; + { + Form_pg_attribute attr = TupleDescAttr(rec->tupdesc, fno - 1); + + *typmod = attr->atttypmod; + } else *typmod = -1; if (fno > 0) - *collation = rec->tupdesc->attrs[fno - 1]->attcollation; + { + Form_pg_attribute attr = TupleDescAttr(rec->tupdesc, fno - 1); + + *collation = attr->attcollation; + } else /* no system column types have collation */ *collation = InvalidOid; break; @@ -5172,6 +5187,7 @@ exec_eval_expr(PLpgSQL_execstate *estate, { Datum result = 0; int rc; + Form_pg_attribute attr; /* * If first time through, create a plan for this expression. @@ -5211,8 +5227,9 @@ exec_eval_expr(PLpgSQL_execstate *estate, /* * ... and get the column's datatype. */ - *rettype = estate->eval_tuptable->tupdesc->attrs[0]->atttypid; - *rettypmod = estate->eval_tuptable->tupdesc->attrs[0]->atttypmod; + attr = TupleDescAttr(estate->eval_tuptable->tupdesc, 0); + *rettype = attr->atttypid; + *rettypmod = attr->atttypmod; /* * If there are no rows selected, the result is a NULL of that type. 
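The same accessor applies wherever a query result's descriptor is at hand, for example in ordinary SPI code. A hypothetical sketch (function and query invented for illustration; assumes SPI_connect() has already been called):

#include "postgres.h"
#include "access/tupdesc.h"
#include "executor/spi.h"

/* Report the type and typmod of the first column of a query result. */
static void
describe_first_column(void)
{
	if (SPI_execute("SELECT 1", true, 0) == SPI_OK_SELECT &&
		SPI_tuptable != NULL)
	{
		Form_pg_attribute att = TupleDescAttr(SPI_tuptable->tupdesc, 0);

		elog(NOTICE, "result type %u, typmod %d",
			 att->atttypid, att->atttypmod);
	}
}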
@@ -6030,7 +6047,8 @@ exec_move_row(PLpgSQL_execstate *estate, var = (PLpgSQL_var *) (estate->datums[row->varnos[fnum]]); - while (anum < td_natts && tupdesc->attrs[anum]->attisdropped) + while (anum < td_natts && + TupleDescAttr(tupdesc, anum)->attisdropped) anum++; /* skip dropped column in tuple */ if (anum < td_natts) @@ -6042,8 +6060,8 @@ exec_move_row(PLpgSQL_execstate *estate, value = (Datum) 0; isnull = true; } - valtype = tupdesc->attrs[anum]->atttypid; - valtypmod = tupdesc->attrs[anum]->atttypmod; + valtype = TupleDescAttr(tupdesc, anum)->atttypid; + valtypmod = TupleDescAttr(tupdesc, anum)->atttypmod; anum++; } else @@ -6095,7 +6113,7 @@ make_tuple_from_row(PLpgSQL_execstate *estate, Oid fieldtypeid; int32 fieldtypmod; - if (tupdesc->attrs[i]->attisdropped) + if (TupleDescAttr(tupdesc, i)->attisdropped) { nulls[i] = true; /* leave the column as null */ continue; @@ -6106,7 +6124,7 @@ make_tuple_from_row(PLpgSQL_execstate *estate, exec_eval_datum(estate, estate->datums[row->varnos[i]], &fieldtypeid, &fieldtypmod, &dvalues[i], &nulls[i]); - if (fieldtypeid != tupdesc->attrs[i]->atttypid) + if (fieldtypeid != TupleDescAttr(tupdesc, i)->atttypid) return NULL; /* XXX should we insist on typmod match, too? */ } diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c index c6938d00aa..26f61dd0f3 100644 --- a/src/pl/plpython/plpy_exec.c +++ b/src/pl/plpython/plpy_exec.c @@ -950,6 +950,7 @@ PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata, char *plattstr; int attn; PLyObToDatum *att; + Form_pg_attribute attr; platt = PyList_GetItem(plkeys, i); if (PyString_Check(platt)) @@ -982,11 +983,12 @@ PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata, Py_INCREF(plval); + attr = TupleDescAttr(tupdesc, attn - 1); if (plval != Py_None) { modvalues[attn - 1] = (att->func) (att, - tupdesc->attrs[attn - 1]->atttypmod, + attr->atttypmod, plval, false); modnulls[attn - 1] = false; @@ -997,7 +999,7 @@ PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata, InputFunctionCall(&att->typfunc, NULL, att->typioparam, - tupdesc->attrs[attn - 1]->atttypmod); + attr->atttypmod); modnulls[attn - 1] = true; } modrepls[attn - 1] = true; diff --git a/src/pl/plpython/plpy_resultobject.c b/src/pl/plpython/plpy_resultobject.c index 077bde6dc3..098a366f6f 100644 --- a/src/pl/plpython/plpy_resultobject.c +++ b/src/pl/plpython/plpy_resultobject.c @@ -148,7 +148,11 @@ PLy_result_colnames(PyObject *self, PyObject *unused) list = PyList_New(ob->tupdesc->natts); for (i = 0; i < ob->tupdesc->natts; i++) - PyList_SET_ITEM(list, i, PyString_FromString(NameStr(ob->tupdesc->attrs[i]->attname))); + { + Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i); + + PyList_SET_ITEM(list, i, PyString_FromString(NameStr(attr->attname))); + } return list; } @@ -168,7 +172,11 @@ PLy_result_coltypes(PyObject *self, PyObject *unused) list = PyList_New(ob->tupdesc->natts); for (i = 0; i < ob->tupdesc->natts; i++) - PyList_SET_ITEM(list, i, PyInt_FromLong(ob->tupdesc->attrs[i]->atttypid)); + { + Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i); + + PyList_SET_ITEM(list, i, PyInt_FromLong(attr->atttypid)); + } return list; } @@ -188,7 +196,11 @@ PLy_result_coltypmods(PyObject *self, PyObject *unused) list = PyList_New(ob->tupdesc->natts); for (i = 0; i < ob->tupdesc->natts; i++) - PyList_SET_ITEM(list, i, PyInt_FromLong(ob->tupdesc->attrs[i]->atttypmod)); + { + Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i); + + PyList_SET_ITEM(list, i, 
PyInt_FromLong(attr->atttypmod)); + } return list; } diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c index 91ddcaa7b9..e4af8cc9ef 100644 --- a/src/pl/plpython/plpy_typeio.c +++ b/src/pl/plpython/plpy_typeio.c @@ -152,21 +152,21 @@ PLy_input_tuple_funcs(PLyTypeInfo *arg, TupleDesc desc) for (i = 0; i < desc->natts; i++) { HeapTuple typeTup; + Form_pg_attribute attr = TupleDescAttr(desc, i); - if (desc->attrs[i]->attisdropped) + if (attr->attisdropped) continue; - if (arg->in.r.atts[i].typoid == desc->attrs[i]->atttypid) + if (arg->in.r.atts[i].typoid == attr->atttypid) continue; /* already set up this entry */ - typeTup = SearchSysCache1(TYPEOID, - ObjectIdGetDatum(desc->attrs[i]->atttypid)); + typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(attr->atttypid)); if (!HeapTupleIsValid(typeTup)) elog(ERROR, "cache lookup failed for type %u", - desc->attrs[i]->atttypid); + attr->atttypid); PLy_input_datum_func2(&(arg->in.r.atts[i]), arg->mcxt, - desc->attrs[i]->atttypid, + attr->atttypid, typeTup, exec_ctx->curr_proc->langid, exec_ctx->curr_proc->trftypes); @@ -224,18 +224,18 @@ PLy_output_tuple_funcs(PLyTypeInfo *arg, TupleDesc desc) for (i = 0; i < desc->natts; i++) { HeapTuple typeTup; + Form_pg_attribute attr = TupleDescAttr(desc, i); - if (desc->attrs[i]->attisdropped) + if (attr->attisdropped) continue; - if (arg->out.r.atts[i].typoid == desc->attrs[i]->atttypid) + if (arg->out.r.atts[i].typoid == attr->atttypid) continue; /* already set up this entry */ - typeTup = SearchSysCache1(TYPEOID, - ObjectIdGetDatum(desc->attrs[i]->atttypid)); + typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(attr->atttypid)); if (!HeapTupleIsValid(typeTup)) elog(ERROR, "cache lookup failed for type %u", - desc->attrs[i]->atttypid); + attr->atttypid); PLy_output_datum_func2(&(arg->out.r.atts[i]), arg->mcxt, typeTup, exec_ctx->curr_proc->langid, @@ -306,11 +306,12 @@ PLyDict_FromTuple(PLyTypeInfo *info, HeapTuple tuple, TupleDesc desc) Datum vattr; bool is_null; PyObject *value; + Form_pg_attribute attr = TupleDescAttr(desc, i); - if (desc->attrs[i]->attisdropped) + if (attr->attisdropped) continue; - key = NameStr(desc->attrs[i]->attname); + key = NameStr(attr->attname); vattr = heap_getattr(tuple, (i + 1), desc, &is_null); if (is_null || info->in.r.atts[i].func == NULL) @@ -1183,15 +1184,16 @@ PLyMapping_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *mapping) char *key; PyObject *volatile value; PLyObToDatum *att; + Form_pg_attribute attr = TupleDescAttr(desc, i); - if (desc->attrs[i]->attisdropped) + if (attr->attisdropped) { values[i] = (Datum) 0; nulls[i] = true; continue; } - key = NameStr(desc->attrs[i]->attname); + key = NameStr(attr->attname); value = NULL; att = &info->out.r.atts[i]; PG_TRY(); @@ -1256,7 +1258,7 @@ PLySequence_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *sequence) idx = 0; for (i = 0; i < desc->natts; i++) { - if (!desc->attrs[i]->attisdropped) + if (!TupleDescAttr(desc, i)->attisdropped) idx++; } if (PySequence_Length(sequence) != idx) @@ -1277,7 +1279,7 @@ PLySequence_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *sequence) PyObject *volatile value; PLyObToDatum *att; - if (desc->attrs[i]->attisdropped) + if (TupleDescAttr(desc, i)->attisdropped) { values[i] = (Datum) 0; nulls[i] = true; @@ -1346,15 +1348,16 @@ PLyGenericObject_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *object char *key; PyObject *volatile value; PLyObToDatum *att; + Form_pg_attribute attr = TupleDescAttr(desc, i); - if 
(desc->attrs[i]->attisdropped) + if (attr->attisdropped) { values[i] = (Datum) 0; nulls[i] = true; continue; } - key = NameStr(desc->attrs[i]->attname); + key = NameStr(attr->attname); value = NULL; att = &info->out.r.atts[i]; PG_TRY(); diff --git a/src/pl/tcl/pltcl.c b/src/pl/tcl/pltcl.c index ed494e1210..09f87ec791 100644 --- a/src/pl/tcl/pltcl.c +++ b/src/pl/tcl/pltcl.c @@ -1106,11 +1106,13 @@ pltcl_trigger_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state, Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj()); for (i = 0; i < tupdesc->natts; i++) { - if (tupdesc->attrs[i]->attisdropped) + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + + if (att->attisdropped) Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj()); else Tcl_ListObjAppendElement(NULL, tcl_trigtup, - Tcl_NewStringObj(utf_e2u(NameStr(tupdesc->attrs[i]->attname)), -1)); + Tcl_NewStringObj(utf_e2u(NameStr(att->attname)), -1)); } Tcl_ListObjAppendElement(NULL, tcl_cmd, tcl_trigtup); @@ -2952,15 +2954,17 @@ pltcl_set_tuple_values(Tcl_Interp *interp, const char *arrayname, for (i = 0; i < tupdesc->natts; i++) { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + /* ignore dropped attributes */ - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) continue; /************************************************************ * Get the attribute name ************************************************************/ UTF_BEGIN; - attname = pstrdup(UTF_E2U(NameStr(tupdesc->attrs[i]->attname))); + attname = pstrdup(UTF_E2U(NameStr(att->attname))); UTF_END; /************************************************************ @@ -2978,8 +2982,7 @@ pltcl_set_tuple_values(Tcl_Interp *interp, const char *arrayname, ************************************************************/ if (!isnull) { - getTypeOutputInfo(tupdesc->attrs[i]->atttypid, - &typoutput, &typisvarlena); + getTypeOutputInfo(att->atttypid, &typoutput, &typisvarlena); outputstr = OidOutputFunctionCall(typoutput, attr); UTF_BEGIN; Tcl_SetVar2Ex(interp, *arrptr, *nameptr, @@ -3013,14 +3016,16 @@ pltcl_build_tuple_argument(HeapTuple tuple, TupleDesc tupdesc) for (i = 0; i < tupdesc->natts; i++) { + Form_pg_attribute att = TupleDescAttr(tupdesc, i); + /* ignore dropped attributes */ - if (tupdesc->attrs[i]->attisdropped) + if (att->attisdropped) continue; /************************************************************ * Get the attribute name ************************************************************/ - attname = NameStr(tupdesc->attrs[i]->attname); + attname = NameStr(att->attname); /************************************************************ * Get the attributes value @@ -3037,7 +3042,7 @@ pltcl_build_tuple_argument(HeapTuple tuple, TupleDesc tupdesc) ************************************************************/ if (!isnull) { - getTypeOutputInfo(tupdesc->attrs[i]->atttypid, + getTypeOutputInfo(att->atttypid, &typoutput, &typisvarlena); outputstr = OidOutputFunctionCall(typoutput, attr); UTF_BEGIN; diff --git a/src/test/regress/regress.c b/src/test/regress/regress.c index b73bccec3d..3d33b36e66 100644 --- a/src/test/regress/regress.c +++ b/src/test/regress/regress.c @@ -770,9 +770,9 @@ make_tuple_indirect(PG_FUNCTION_ARGS) struct varatt_indirect redirect_pointer; /* only work on existing, not-null varlenas */ - if (tupdesc->attrs[i]->attisdropped || + if (TupleDescAttr(tupdesc, i)->attisdropped || nulls[i] || - tupdesc->attrs[i]->attlen != -1) + TupleDescAttr(tupdesc, i)->attlen != -1) continue; attr = (struct varlena *) DatumGetPointer(values[i]); From 
c6293249dc178f52dd508c3e6ff353af41c90b58 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Sun, 20 Aug 2017 11:19:12 -0700
Subject: [PATCH 0015/1087] Partially flatten struct tupleDesc so that it can be used in DSM.

TupleDesc's attributes were already stored in contiguous memory after the struct. Go one step further and get rid of the array of pointers to attributes so that they can be stored in shared memory mapped at different addresses in each backend. This won't work for TupleDescs with constraints and defaults, since those point to other objects, but for many purposes only attributes are needed.

Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com
---
 src/backend/access/common/tupdesc.c | 67 ++++++-----------------------
 src/include/access/tupdesc.h | 8 ++--
 2 files changed, 18 insertions(+), 57 deletions(-)

diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c
index a5df2d64e2..75b191ba2a 100644
--- a/src/backend/access/common/tupdesc.c
+++ b/src/backend/access/common/tupdesc.c
@@ -41,8 +41,6 @@ TupleDesc CreateTemplateTupleDesc(int natts, bool hasoid) { TupleDesc desc; - char *stg; - int attroffset; /* * sanity checks */
@@ -51,38 +49,10 @@ CreateTemplateTupleDesc(int natts, bool hasoid) /* * Allocate enough memory for the tuple descriptor, including the - * attribute rows, and set up the attribute row pointers. - * - * Note: we assume that sizeof(struct tupleDesc) is a multiple of the - * struct pointer alignment requirement, and hence we don't need to insert - * alignment padding between the struct and the array of attribute row - * pointers. - * - * Note: Only the fixed part of pg_attribute rows is included in tuple - * descriptors, so we only need ATTRIBUTE_FIXED_PART_SIZE space per attr. - * That might need alignment padding, however. + * attribute rows. */ - attroffset = sizeof(struct tupleDesc) + natts * sizeof(Form_pg_attribute); - attroffset = MAXALIGN(attroffset); - stg = palloc(attroffset + natts * MAXALIGN(ATTRIBUTE_FIXED_PART_SIZE)); - desc = (TupleDesc) stg; - - if (natts > 0) - { - Form_pg_attribute *attrs; - int i; - - attrs = (Form_pg_attribute *) (stg + sizeof(struct tupleDesc)); - desc->attrs = attrs; - stg += attroffset; - for (i = 0; i < natts; i++) - { - attrs[i] = (Form_pg_attribute) stg; - stg += MAXALIGN(ATTRIBUTE_FIXED_PART_SIZE); - } - } - else - desc->attrs = NULL; + desc = (TupleDesc) palloc(offsetof(struct tupleDesc, attrs) + + natts * sizeof(FormData_pg_attribute)); /* * Initialize other fields of the tupdesc. */
@@ -99,12 +69,9 @@ CreateTemplateTupleDesc(int natts, bool hasoid) /* * CreateTupleDesc - * This function allocates a new TupleDesc pointing to a given + * This function allocates a new TupleDesc by copying a given * Form_pg_attribute array. * - * Note: if the TupleDesc is ever freed, the Form_pg_attribute array - * will not be freed thereby. - * * Tuple type ID information is initially set for an anonymous record type; * caller can overwrite this if needed.
*/ @@ -112,20 +79,12 @@ TupleDesc CreateTupleDesc(int natts, bool hasoid, Form_pg_attribute *attrs) { TupleDesc desc; + int i; - /* - * sanity checks - */ - AssertArg(natts >= 0); + desc = CreateTemplateTupleDesc(natts, hasoid); - desc = (TupleDesc) palloc(sizeof(struct tupleDesc)); - desc->attrs = attrs; - desc->natts = natts; - desc->constr = NULL; - desc->tdtypeid = RECORDOID; - desc->tdtypmod = -1; - desc->tdhasoid = hasoid; - desc->tdrefcount = -1; /* assume not reference-counted */ + for (i = 0; i < natts; ++i) + memcpy(TupleDescAttr(desc, i), attrs[i], ATTRIBUTE_FIXED_PART_SIZE); return desc; } @@ -147,10 +106,12 @@ CreateTupleDescCopy(TupleDesc tupdesc) for (i = 0; i < desc->natts; i++) { - memcpy(desc->attrs[i], tupdesc->attrs[i], ATTRIBUTE_FIXED_PART_SIZE); - desc->attrs[i]->attnotnull = false; - desc->attrs[i]->atthasdef = false; - desc->attrs[i]->attidentity = '\0'; + Form_pg_attribute att = TupleDescAttr(desc, i); + + memcpy(att, &tupdesc->attrs[i], ATTRIBUTE_FIXED_PART_SIZE); + att->attnotnull = false; + att->atthasdef = false; + att->attidentity = '\0'; } desc->tdtypeid = tupdesc->tdtypeid; diff --git a/src/include/access/tupdesc.h b/src/include/access/tupdesc.h index 31b77a08fa..39fd59686a 100644 --- a/src/include/access/tupdesc.h +++ b/src/include/access/tupdesc.h @@ -71,17 +71,17 @@ typedef struct tupleConstr typedef struct tupleDesc { int natts; /* number of attributes in the tuple */ - Form_pg_attribute *attrs; - /* attrs[N] is a pointer to the description of Attribute Number N+1 */ - TupleConstr *constr; /* constraints, or NULL if none */ Oid tdtypeid; /* composite type ID for tuple type */ int32 tdtypmod; /* typmod for tuple type */ bool tdhasoid; /* tuple has oid attribute in its header */ int tdrefcount; /* reference count, or -1 if not counting */ + TupleConstr *constr; /* constraints, or NULL if none */ + /* attrs[N] is the description of Attribute Number N+1 */ + FormData_pg_attribute attrs[FLEXIBLE_ARRAY_MEMBER]; } *TupleDesc; /* Accessor for the i'th attribute of tupdesc. */ -#define TupleDescAttr(tupdesc, i) ((tupdesc)->attrs[(i)]) +#define TupleDescAttr(tupdesc, i) (&(tupdesc)->attrs[(i)]) extern TupleDesc CreateTemplateTupleDesc(int natts, bool hasoid); From 66ed3829df959adb47f71d7c903ac59f0670f3e1 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sun, 20 Aug 2017 21:22:18 -0700 Subject: [PATCH 0016/1087] Inject $(ICU_LIBS) regardless of platform. It appeared in a conditional that excludes AIX, Cygwin and MinGW. Give ICU support a chance to work on those platforms. Back-patch to v10, where ICU support was introduced. 
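The link-time dependency injected here comes from the backend's direct use of the ICU C API for collations. A standalone sketch of the kind of call involved (not PostgreSQL code; assumes ICU 50 or later for ucol_strcollUTF8), which is why $(ICU_LIBS) has to reach the link line on every platform:

#include <stdio.h>
#include <unicode/ucol.h>

int
main(void)
{
	UErrorCode	status = U_ZERO_ERROR;
	UCollator  *coll = ucol_open("de", &status);	/* German collation */

	if (U_SUCCESS(status))
	{
		/* compare two strings under the opened collation */
		UCollationResult r = ucol_strcollUTF8(coll, "Bär", -1,
											  "Baer", -1, &status);

		printf("ucol_strcollUTF8: %d\n", (int) r);
		ucol_close(coll);
	}
	return 0;
}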
---
 src/backend/Makefile | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/Makefile b/src/backend/Makefile
index bce9d2c3eb..aab676dbbd 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -39,8 +39,8 @@ OBJS = $(SUBDIROBJS) $(LOCALOBJS) $(top_builddir)/src/port/libpgport_srv.a \ $(top_builddir)/src/common/libpgcommon_srv.a # We put libpgport and libpgcommon into OBJS, so remove it from LIBS; also add -# libldap -LIBS := $(filter-out -lpgport -lpgcommon, $(LIBS)) $(LDAP_LIBS_BE) +# libldap and ICU +LIBS := $(filter-out -lpgport -lpgcommon, $(LIBS)) $(LDAP_LIBS_BE) $(ICU_LIBS) # The backend doesn't need everything that's in LIBS, however LIBS := $(filter-out -lz -lreadline -ledit -ltermcap -lncurses -lcurses, $(LIBS))
@@ -58,7 +58,7 @@ ifneq ($(PORTNAME), win32) ifneq ($(PORTNAME), aix) postgres: $(OBJS) - $(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_EX) $(export_dynamic) $(call expand_subsys,$^) $(LIBS) $(ICU_LIBS) -o $@ + $(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_EX) $(export_dynamic) $(call expand_subsys,$^) $(LIBS) -o $@ endif endif

From 79ccd7cbd5ca44bee0191d12e9e65abf702899e7 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Mon, 21 Aug 2017 14:43:00 -0400
Subject: [PATCH 0017/1087] pg_prewarm: Add automatic prewarm feature.

Periodically while the server is running, and at shutdown, write out a list of blocks in shared buffers. When the server reaches consistency -- unfortunately, we can't do it before that point without breaking things -- reload those blocks into any still-unused shared buffers.

Mithun Cy and Robert Haas, reviewed and tested by Beena Emerson, Amit Kapila, Jim Nasby, and Rafia Sabih.

Discussion: http://postgr.es/m/CAD__OugubOs1Vy7kgF6xTjmEqTR4CrGAv8w+ZbaY_+MZeitukw@mail.gmail.com
---
 contrib/file_fdw/data/list1.csv | 2 +
 contrib/file_fdw/data/list2.bad | 2 +
 contrib/file_fdw/data/list2.csv | 2 +
 contrib/pg_prewarm/Makefile | 4 +-
 contrib/pg_prewarm/autoprewarm.c | 924 ++++++++++++++++++++
 contrib/pg_prewarm/pg_prewarm--1.1--1.2.sql | 14 +
 contrib/pg_prewarm/pg_prewarm.control | 2 +-
 doc/src/sgml/pgprewarm.sgml | 69 +-
 src/backend/storage/buffer/freelist.c | 17 +
 src/include/storage/buf_internals.h | 1 +
 src/tools/pgindent/typedefs.list | 2 +
 11 files changed, 1035 insertions(+), 4 deletions(-)
 create mode 100644 contrib/file_fdw/data/list1.csv
 create mode 100644 contrib/file_fdw/data/list2.bad
 create mode 100644 contrib/file_fdw/data/list2.csv
 create mode 100644 contrib/pg_prewarm/autoprewarm.c
 create mode 100644 contrib/pg_prewarm/pg_prewarm--1.1--1.2.sql

diff --git a/contrib/file_fdw/data/list1.csv b/contrib/file_fdw/data/list1.csv
new file mode 100644
index 0000000000..203f3b2324
--- /dev/null
+++ b/contrib/file_fdw/data/list1.csv
@@ -0,0 +1,2 @@ +1,foo +1,bar
diff --git a/contrib/file_fdw/data/list2.bad b/contrib/file_fdw/data/list2.bad
new file mode 100644
index 0000000000..00af47f5ef
--- /dev/null
+++ b/contrib/file_fdw/data/list2.bad
@@ -0,0 +1,2 @@ +2,baz +1,qux
diff --git a/contrib/file_fdw/data/list2.csv b/contrib/file_fdw/data/list2.csv
new file mode 100644
index 0000000000..2fb133d004
--- /dev/null
+++ b/contrib/file_fdw/data/list2.csv
@@ -0,0 +1,2 @@ +2,baz +2,qux
diff --git a/contrib/pg_prewarm/Makefile b/contrib/pg_prewarm/Makefile
index 7ad941e72b..88580d1118 100644
--- a/contrib/pg_prewarm/Makefile
+++ b/contrib/pg_prewarm/Makefile
@@ -1,10 +1,10 @@ # contrib/pg_prewarm/Makefile MODULE_big = pg_prewarm -OBJS = pg_prewarm.o $(WIN32RES) +OBJS = pg_prewarm.o autoprewarm.o $(WIN32RES) EXTENSION = pg_prewarm -DATA =
pg_prewarm--1.1.sql pg_prewarm--1.0--1.1.sql +DATA = pg_prewarm--1.1--1.2.sql pg_prewarm--1.1.sql pg_prewarm--1.0--1.1.sql PGFILEDESC = "pg_prewarm - preload relation data into system buffer cache" ifdef USE_PGXS diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c new file mode 100644 index 0000000000..cc0350e6d6 --- /dev/null +++ b/contrib/pg_prewarm/autoprewarm.c @@ -0,0 +1,924 @@ +/*------------------------------------------------------------------------- + * + * autoprewarm.c + * Periodically dump information about the blocks present in + * shared_buffers, and reload them on server restart. + * + * Due to locking considerations, we can't actually begin prewarming + * until the server reaches a consistent state. We need the catalogs + * to be consistent so that we can figure out which relation to lock, + * and we need to lock the relations so that we don't try to prewarm + * pages from a relation that is in the process of being dropped. + * + * While prewarming, autoprewarm will use two workers. There's a + * master worker that reads and sorts the list of blocks to be + * prewarmed and then launches a per-database worker for each + * relevant database in turn. The former keeps running after the + * initial prewarm is complete to update the dump file periodically. + * + * Copyright (c) 2016-2017, PostgreSQL Global Development Group + * + * IDENTIFICATION + * contrib/pg_prewarm/autoprewarm.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" +#include <unistd.h> + +#include "access/heapam.h" +#include "access/xact.h" +#include "catalog/pg_class.h" +#include "catalog/pg_type.h" +#include "miscadmin.h" +#include "pgstat.h" +#include "postmaster/bgworker.h" +#include "storage/buf_internals.h" +#include "storage/dsm.h" +#include "storage/ipc.h" +#include "storage/latch.h" +#include "storage/lwlock.h" +#include "storage/proc.h" +#include "storage/procsignal.h" +#include "storage/shmem.h" +#include "storage/smgr.h" +#include "tcop/tcopprot.h" +#include "utils/acl.h" +#include "utils/guc.h" +#include "utils/memutils.h" +#include "utils/rel.h" +#include "utils/relfilenodemap.h" +#include "utils/resowner.h" + +#define AUTOPREWARM_FILE "autoprewarm.blocks" + +/* Metadata for each block we dump. */ +typedef struct BlockInfoRecord +{ + Oid database; + Oid tablespace; + Oid filenode; + ForkNumber forknum; + BlockNumber blocknum; +} BlockInfoRecord; + +/* Shared state information for autoprewarm bgworker.
*/ +typedef struct AutoPrewarmSharedState +{ + LWLock lock; /* mutual exclusion */ + pid_t bgworker_pid; /* for main bgworker */ + pid_t pid_using_dumpfile; /* for autoprewarm or block dump */ + + /* Following items are for communication with per-database worker */ + dsm_handle block_info_handle; + Oid database; + int64 prewarm_start_idx; + int64 prewarm_stop_idx; + int64 prewarmed_blocks; +} AutoPrewarmSharedState; + +void _PG_init(void); +void autoprewarm_main(Datum main_arg); +void autoprewarm_database_main(Datum main_arg); + +PG_FUNCTION_INFO_V1(autoprewarm_start_worker); +PG_FUNCTION_INFO_V1(autoprewarm_dump_now); + +static void apw_load_buffers(void); +static int64 apw_dump_now(bool is_bgworker, bool dump_unlogged); +static void apw_start_master_worker(void); +static void apw_start_database_worker(void); +static bool apw_init_shmem(void); +static void apw_detach_shmem(int code, Datum arg); +static int apw_compare_blockinfo(const void *p, const void *q); +static void apw_sigterm_handler(SIGNAL_ARGS); +static void apw_sighup_handler(SIGNAL_ARGS); + +/* Flags set by signal handlers */ +static volatile sig_atomic_t got_sigterm = false; +static volatile sig_atomic_t got_sighup = false; + +/* Pointer to shared-memory state. */ +static AutoPrewarmSharedState *apw_state = NULL; + +/* GUC variables. */ +static bool autoprewarm = true; /* start worker? */ +static int autoprewarm_interval; /* dump interval */ + +/* + * Module load callback. + */ +void +_PG_init(void) +{ + DefineCustomIntVariable("pg_prewarm.autoprewarm_interval", + "Sets the interval between dumps of shared buffers", + "If set to zero, time-based dumping is disabled.", + &autoprewarm_interval, + 300, + 0, INT_MAX / 1000, + PGC_SIGHUP, + GUC_UNIT_S, + NULL, + NULL, + NULL); + + if (!process_shared_preload_libraries_in_progress) + return; + + /* can't define PGC_POSTMASTER variable after startup */ + DefineCustomBoolVariable("pg_prewarm.autoprewarm", + "Starts the autoprewarm worker.", + NULL, + &autoprewarm, + true, + PGC_POSTMASTER, + 0, + NULL, + NULL, + NULL); + + EmitWarningsOnPlaceholders("pg_prewarm"); + + RequestAddinShmemSpace(MAXALIGN(sizeof(AutoPrewarmSharedState))); + + /* Register autoprewarm worker, if enabled. */ + if (autoprewarm) + apw_start_master_worker(); +} + +/* + * Main entry point for the master autoprewarm process. Per-database workers + * have a separate entry point. + */ +void +autoprewarm_main(Datum main_arg) +{ + bool first_time = true; + TimestampTz last_dump_time = 0; + + /* Establish signal handlers; once that's done, unblock signals. */ + pqsignal(SIGTERM, apw_sigterm_handler); + pqsignal(SIGHUP, apw_sighup_handler); + pqsignal(SIGUSR1, procsignal_sigusr1_handler); + BackgroundWorkerUnblockSignals(); + + /* Create (if necessary) and attach to our shared memory area. */ + if (apw_init_shmem()) + first_time = false; + + /* Set on-detach hook so that our PID will be cleared on exit. */ + on_shmem_exit(apw_detach_shmem, 0); + + /* + * Store our PID in the shared memory area --- unless there's already + * another worker running, in which case just exit. + */ + LWLockAcquire(&apw_state->lock, LW_EXCLUSIVE); + if (apw_state->bgworker_pid != InvalidPid) + { + LWLockRelease(&apw_state->lock); + ereport(LOG, + (errmsg("autoprewarm worker is already running under PID %d", + apw_state->bgworker_pid))); + return; + } + apw_state->bgworker_pid = MyProcPid; + LWLockRelease(&apw_state->lock); + + /* + * Preload buffers from the dump file only if we just created the shared + * memory region. 
Otherwise, it's either already been done or shouldn't + * be done - e.g. because the old dump file has been overwritten since the + * server was started. + * + * There's not much point in performing a dump immediately after we finish + * preloading; so, if we do end up preloading, consider the last dump time + * to be equal to the current time. + */ + if (first_time) + { + apw_load_buffers(); + last_dump_time = GetCurrentTimestamp(); + } + + /* Periodically dump buffers until terminated. */ + while (!got_sigterm) + { + int rc; + + /* In case of a SIGHUP, just reload the configuration. */ + if (got_sighup) + { + got_sighup = false; + ProcessConfigFile(PGC_SIGHUP); + } + + if (autoprewarm_interval <= 0) + { + /* We're only dumping at shutdown, so just wait forever. */ + rc = WaitLatch(&MyProc->procLatch, + WL_LATCH_SET | WL_POSTMASTER_DEATH, + -1L, + PG_WAIT_EXTENSION); + } + else + { + long delay_in_ms = 0; + TimestampTz next_dump_time = 0; + long secs = 0; + int usecs = 0; + + /* Compute the next dump time. */ + next_dump_time = + TimestampTzPlusMilliseconds(last_dump_time, + autoprewarm_interval * 1000); + TimestampDifference(GetCurrentTimestamp(), next_dump_time, + &secs, &usecs); + delay_in_ms = secs * 1000 + (usecs / 1000); + + /* Perform a dump if it's time. */ + if (delay_in_ms <= 0) + { + last_dump_time = GetCurrentTimestamp(); + apw_dump_now(true, false); + continue; + } + + /* Sleep until the next dump time. */ + rc = WaitLatch(&MyProc->procLatch, + WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH, + delay_in_ms, + PG_WAIT_EXTENSION); + } + + /* Reset the latch, bail out if postmaster died, otherwise loop. */ + ResetLatch(&MyProc->procLatch); + if (rc & WL_POSTMASTER_DEATH) + proc_exit(1); + } + + /* + * Dump one last time. We assume this is probably the result of a system + * shutdown, although it's possible that we've merely been terminated. + */ + apw_dump_now(true, true); +} + +/* + * Read the dump file and launch per-database workers one at a time to + * prewarm the buffers found there. + */ +static void +apw_load_buffers(void) +{ + FILE *file = NULL; + int64 num_elements, + i; + BlockInfoRecord *blkinfo; + dsm_segment *seg; + + /* + * Skip the prewarm if the dump file is in use; otherwise, prevent any + * other process from writing it while we're using it. + */ + LWLockAcquire(&apw_state->lock, LW_EXCLUSIVE); + if (apw_state->pid_using_dumpfile == InvalidPid) + apw_state->pid_using_dumpfile = MyProcPid; + else + { + LWLockRelease(&apw_state->lock); + ereport(LOG, + (errmsg("skipping prewarm because block dump file is being written by PID %d", + apw_state->pid_using_dumpfile))); + return; + } + LWLockRelease(&apw_state->lock); + + /* + * Open the block dump file. Exit quietly if it doesn't exist, but report + * any other error. + */ + file = AllocateFile(AUTOPREWARM_FILE, "r"); + if (!file) + { + if (errno == ENOENT) + { + LWLockAcquire(&apw_state->lock, LW_EXCLUSIVE); + apw_state->pid_using_dumpfile = InvalidPid; + LWLockRelease(&apw_state->lock); + return; /* No file to load. */ + } + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read file \"%s\": %m", + AUTOPREWARM_FILE))); + } + + /* First line of the file is a record count. */ + if (fscanf(file, "<<" INT64_FORMAT ">>\n", &num_elements) != 1) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read from file \"%s\": %m", + AUTOPREWARM_FILE))); + + /* Allocate a dynamic shared memory segment to store the record data.
*/ + seg = dsm_create(sizeof(BlockInfoRecord) * num_elements, 0); + blkinfo = (BlockInfoRecord *) dsm_segment_address(seg); + + /* Read records, one per line. */ + for (i = 0; i < num_elements; i++) + { + unsigned forknum; + + if (fscanf(file, "%u,%u,%u,%u,%u\n", &blkinfo[i].database, + &blkinfo[i].tablespace, &blkinfo[i].filenode, + &forknum, &blkinfo[i].blocknum) != 5) + ereport(ERROR, + (errmsg("autoprewarm block dump file is corrupted at line " INT64_FORMAT, + i + 1))); + blkinfo[i].forknum = forknum; + } + + FreeFile(file); + + /* Sort the blocks to be loaded. */ + pg_qsort(blkinfo, num_elements, sizeof(BlockInfoRecord), + apw_compare_blockinfo); + + /* Populate shared memory state. */ + apw_state->block_info_handle = dsm_segment_handle(seg); + apw_state->prewarm_start_idx = apw_state->prewarm_stop_idx = 0; + apw_state->prewarmed_blocks = 0; + + /* Get the info position of the first block of the next database. */ + while (apw_state->prewarm_start_idx < num_elements) + { + uint32 i = apw_state->prewarm_start_idx; + Oid current_db = blkinfo[i].database; + + /* + * Advance the prewarm_stop_idx to the first BlockInfoRecord that does + * not belong to this database. + */ + i++; + while (i < num_elements) + { + if (current_db != blkinfo[i].database) + { + /* + * Combine BlockInfoRecords for global objects with those of + * the database. + */ + if (current_db != InvalidOid) + break; + current_db = blkinfo[i].database; + } + + i++; + } + + /* + * If we reach this point with current_db == InvalidOid, then only + * BlockInfoRecords belonging to global objects exist. We can't + * prewarm without a database connection, so just bail out. + */ + if (current_db == InvalidOid) + break; + + /* Configure stop point and database for next per-database worker. */ + apw_state->prewarm_stop_idx = i; + apw_state->database = current_db; + Assert(apw_state->prewarm_start_idx < apw_state->prewarm_stop_idx); + + /* If we've run out of free buffers, don't launch another worker. */ + if (!have_free_buffer()) + break; + + /* + * Start a per-database worker to load blocks for this database; this + * function will return once the per-database worker exits. + */ + apw_start_database_worker(); + + /* Prepare for next database. */ + apw_state->prewarm_start_idx = apw_state->prewarm_stop_idx; + } + + /* Clean up. */ + dsm_detach(seg); + LWLockAcquire(&apw_state->lock, LW_EXCLUSIVE); + apw_state->block_info_handle = DSM_HANDLE_INVALID; + apw_state->pid_using_dumpfile = InvalidPid; + LWLockRelease(&apw_state->lock); + + /* Report our success. */ + ereport(LOG, + (errmsg("autoprewarm successfully prewarmed " INT64_FORMAT + " of " INT64_FORMAT " previously-loaded blocks", + apw_state->prewarmed_blocks, num_elements))); +} + +/* + * Prewarm all blocks for one database (and possibly also global objects, if + * those got grouped with this database). + */ +void +autoprewarm_database_main(Datum main_arg) +{ + uint32 pos; + BlockInfoRecord *block_info; + Relation rel = NULL; + BlockNumber nblocks = 0; + BlockInfoRecord *old_blk = NULL; + dsm_segment *seg; + + /* Establish signal handlers; once that's done, unblock signals. */ + pqsignal(SIGTERM, die); + BackgroundWorkerUnblockSignals(); + + /* Connect to correct database and get block information.
*/ + apw_init_shmem(); + seg = dsm_attach(apw_state->block_info_handle); + if (seg == NULL) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("could not map dynamic shared memory segment"))); + BackgroundWorkerInitializeConnectionByOid(apw_state->database, InvalidOid); + block_info = (BlockInfoRecord *) dsm_segment_address(seg); + pos = apw_state->prewarm_start_idx; + + /* + * Loop until we run out of blocks to prewarm or until we run out of free + * buffers. + */ + while (pos < apw_state->prewarm_stop_idx && have_free_buffer()) + { + BlockInfoRecord *blk = &block_info[pos++]; + Buffer buf; + + CHECK_FOR_INTERRUPTS(); + + /* + * Quit if we've reached records for another database. If previous + * blocks are of some global objects, then continue pre-warming. + */ + if (old_blk != NULL && old_blk->database != blk->database && + old_blk->database != 0) + break; + + /* + * As soon as we encounter a block of a new relation, close the old + * relation. Note that rel will be NULL if try_relation_open failed + * previously; in that case, there is nothing to close. + */ + if (old_blk != NULL && old_blk->filenode != blk->filenode && + rel != NULL) + { + relation_close(rel, AccessShareLock); + rel = NULL; + CommitTransactionCommand(); + } + + /* + * Try to open each new relation, but only once, when we first + * encounter it. If it's been dropped, skip the associated blocks. + */ + if (old_blk == NULL || old_blk->filenode != blk->filenode) + { + Oid reloid; + + Assert(rel == NULL); + StartTransactionCommand(); + reloid = RelidByRelfilenode(blk->tablespace, blk->filenode); + if (OidIsValid(reloid)) + rel = try_relation_open(reloid, AccessShareLock); + + if (!rel) + CommitTransactionCommand(); + } + if (!rel) + { + old_blk = blk; + continue; + } + + /* Once per fork, check for fork existence and size. */ + if (old_blk == NULL || + old_blk->filenode != blk->filenode || + old_blk->forknum != blk->forknum) + { + RelationOpenSmgr(rel); + + /* + * smgrexists is not safe for illegal forknum, hence check whether + * the passed forknum is valid before using it in smgrexists. + */ + if (blk->forknum > InvalidForkNumber && + blk->forknum <= MAX_FORKNUM && + smgrexists(rel->rd_smgr, blk->forknum)) + nblocks = RelationGetNumberOfBlocksInFork(rel, blk->forknum); + else + nblocks = 0; + } + + /* Check whether blocknum is valid and within fork file size. */ + if (blk->blocknum >= nblocks) + { + /* Move to next forknum. */ + old_blk = blk; + continue; + } + + /* Prewarm buffer. */ + buf = ReadBufferExtended(rel, blk->forknum, blk->blocknum, RBM_NORMAL, + NULL); + if (BufferIsValid(buf)) + { + apw_state->prewarmed_blocks++; + ReleaseBuffer(buf); + } + + old_blk = blk; + } + + dsm_detach(seg); + + /* Release lock on previous relation. */ + if (rel) + { + relation_close(rel, AccessShareLock); + CommitTransactionCommand(); + } +} + +/* + * Dump information on blocks in shared buffers. We use a text format here + * so that it's easy to understand and even change the file contents if + * necessary. 
+ */ +static int64 +apw_dump_now(bool is_bgworker, bool dump_unlogged) +{ + uint32 i; + int ret; + int64 num_blocks; + BlockInfoRecord *block_info_array; + BufferDesc *bufHdr; + FILE *file; + char transient_dump_file_path[MAXPGPATH]; + pid_t pid; + + LWLockAcquire(&apw_state->lock, LW_EXCLUSIVE); + pid = apw_state->pid_using_dumpfile; + if (apw_state->pid_using_dumpfile == InvalidPid) + apw_state->pid_using_dumpfile = MyProcPid; + LWLockRelease(&apw_state->lock); + + if (pid != InvalidPid) + { + if (!is_bgworker) + ereport(ERROR, + (errmsg("could not perform block dump because dump file is being used by PID %d", + apw_state->pid_using_dumpfile))); + + ereport(LOG, + (errmsg("skipping block dump because it is already being performed by PID %d", + apw_state->pid_using_dumpfile))); + return 0; + } + + block_info_array = + (BlockInfoRecord *) palloc(sizeof(BlockInfoRecord) * NBuffers); + + for (num_blocks = 0, i = 0; i < NBuffers; i++) + { + uint32 buf_state; + + CHECK_FOR_INTERRUPTS(); + + bufHdr = GetBufferDescriptor(i); + + /* Lock each buffer header before inspecting. */ + buf_state = LockBufHdr(bufHdr); + + /* + * Unlogged tables will be automatically truncated after a crash or + * unclean shutdown. In such cases we need not prewarm them. Dump them + * only if requested by caller. + */ + if (buf_state & BM_TAG_VALID && + ((buf_state & BM_PERMANENT) || dump_unlogged)) + { + block_info_array[num_blocks].database = bufHdr->tag.rnode.dbNode; + block_info_array[num_blocks].tablespace = bufHdr->tag.rnode.spcNode; + block_info_array[num_blocks].filenode = bufHdr->tag.rnode.relNode; + block_info_array[num_blocks].forknum = bufHdr->tag.forkNum; + block_info_array[num_blocks].blocknum = bufHdr->tag.blockNum; + ++num_blocks; + } + + UnlockBufHdr(bufHdr, buf_state); + } + + snprintf(transient_dump_file_path, MAXPGPATH, "%s.tmp", AUTOPREWARM_FILE); + file = AllocateFile(transient_dump_file_path, "w"); + if (!file) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not open file \"%s\": %m", + transient_dump_file_path))); + + ret = fprintf(file, "<<" INT64_FORMAT ">>\n", num_blocks); + if (ret < 0) + { + int save_errno = errno; + + FreeFile(file); + unlink(transient_dump_file_path); + errno = save_errno; + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not write to file \"%s\" : %m", + transient_dump_file_path))); + } + + for (i = 0; i < num_blocks; i++) + { + CHECK_FOR_INTERRUPTS(); + + ret = fprintf(file, "%u,%u,%u,%u,%u\n", + block_info_array[i].database, + block_info_array[i].tablespace, + block_info_array[i].filenode, + (uint32) block_info_array[i].forknum, + block_info_array[i].blocknum); + if (ret < 0) + { + int save_errno = errno; + + FreeFile(file); + unlink(transient_dump_file_path); + errno = save_errno; + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not write to file \"%s\" : %m", + transient_dump_file_path))); + } + } + + pfree(block_info_array); + + /* + * Rename transient_dump_file_path to AUTOPREWARM_FILE to make things + * permanent. 
+ */ + ret = FreeFile(file); + if (ret != 0) + { + int save_errno = errno; + + unlink(transient_dump_file_path); + errno = save_errno; + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not close file \"%s\" : %m", + transient_dump_file_path))); + } + + (void) durable_rename(transient_dump_file_path, AUTOPREWARM_FILE, ERROR); + apw_state->pid_using_dumpfile = InvalidPid; + + ereport(DEBUG1, + (errmsg("wrote block details for " INT64_FORMAT " blocks", + num_blocks))); + return num_blocks; +} + +/* + * SQL-callable function to launch autoprewarm. + */ +Datum +autoprewarm_start_worker(PG_FUNCTION_ARGS) +{ + pid_t pid; + + if (!autoprewarm) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("autoprewarm is disabled"))); + + apw_init_shmem(); + LWLockAcquire(&apw_state->lock, LW_EXCLUSIVE); + pid = apw_state->bgworker_pid; + LWLockRelease(&apw_state->lock); + + if (pid != InvalidPid) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("autoprewarm worker is already running under PID %d", + pid))); + + apw_start_master_worker(); + + PG_RETURN_VOID(); +} + +/* + * SQL-callable function to perform an immediate block dump. + */ +Datum +autoprewarm_dump_now(PG_FUNCTION_ARGS) +{ + int64 num_blocks; + + apw_init_shmem(); + + PG_ENSURE_ERROR_CLEANUP(apw_detach_shmem, 0); + { + num_blocks = apw_dump_now(false, true); + } + PG_END_ENSURE_ERROR_CLEANUP(apw_detach_shmem, 0); + + PG_RETURN_INT64(num_blocks); +} + +/* + * Allocate and initialize autoprewarm related shared memory, if not already + * done, and set up backend-local pointer to that state. Returns true if an + * existing shared memory segment was found. + */ +static bool +apw_init_shmem(void) +{ + bool found; + + LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE); + apw_state = ShmemInitStruct("autoprewarm", + sizeof(AutoPrewarmSharedState), + &found); + if (!found) + { + /* First time through ... */ + LWLockInitialize(&apw_state->lock, LWLockNewTrancheId()); + apw_state->bgworker_pid = InvalidPid; + apw_state->pid_using_dumpfile = InvalidPid; + } + LWLockRelease(AddinShmemInitLock); + + return found; +} + +/* + * Clear our PID from autoprewarm shared state. + */ +static void +apw_detach_shmem(int code, Datum arg) +{ + LWLockAcquire(&apw_state->lock, LW_EXCLUSIVE); + if (apw_state->pid_using_dumpfile == MyProcPid) + apw_state->pid_using_dumpfile = InvalidPid; + if (apw_state->bgworker_pid == MyProcPid) + apw_state->bgworker_pid = InvalidPid; + LWLockRelease(&apw_state->lock); +} + +/* + * Start autoprewarm master worker process. 
+ */ +static void +apw_start_master_worker(void) +{ + BackgroundWorker worker; + BackgroundWorkerHandle *handle; + BgwHandleStatus status; + pid_t pid; + + MemSet(&worker, 0, sizeof(BackgroundWorker)); + worker.bgw_flags = BGWORKER_SHMEM_ACCESS; + worker.bgw_start_time = BgWorkerStart_ConsistentState; + strcpy(worker.bgw_library_name, "pg_prewarm"); + strcpy(worker.bgw_function_name, "autoprewarm_main"); + strcpy(worker.bgw_name, "autoprewarm"); + + if (process_shared_preload_libraries_in_progress) + { + RegisterBackgroundWorker(&worker); + return; + } + + /* must set notify PID to wait for startup */ + worker.bgw_notify_pid = MyProcPid; + + if (!RegisterDynamicBackgroundWorker(&worker, &handle)) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_RESOURCES), + errmsg("could not register background process"), + errhint("You may need to increase max_worker_processes."))); + + status = WaitForBackgroundWorkerStartup(handle, &pid); + if (status != BGWH_STARTED) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_RESOURCES), + errmsg("could not start background process"), + errhint("More details may be available in the server log."))); +} + +/* + * Start autoprewarm per-database worker process. + */ +static void +apw_start_database_worker(void) +{ + BackgroundWorker worker; + BackgroundWorkerHandle *handle; + + MemSet(&worker, 0, sizeof(BackgroundWorker)); + worker.bgw_flags = + BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION; + worker.bgw_start_time = BgWorkerStart_ConsistentState; + strcpy(worker.bgw_library_name, "pg_prewarm"); + strcpy(worker.bgw_function_name, "autoprewarm_database_main"); + strcpy(worker.bgw_name, "autoprewarm"); + + /* must set notify PID to wait for shutdown */ + worker.bgw_notify_pid = MyProcPid; + + if (!RegisterDynamicBackgroundWorker(&worker, &handle)) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_RESOURCES), + errmsg("registering dynamic bgworker autoprewarm failed"), + errhint("Consider increasing configuration parameter \"max_worker_processes\"."))); + + /* + * Ignore return value; if it fails, postmaster has died, but we have + * checks for that elsewhere. + */ + WaitForBackgroundWorkerShutdown(handle); +} + +/* Compare member elements to check whether they are not equal. */ +#define cmp_member_elem(fld) \ +do { \ + if (a->fld < b->fld) \ + return -1; \ + else if (a->fld > b->fld) \ + return 1; \ +} while(0); + +/* + * apw_compare_blockinfo + * + * We depend on all records for a particular database being consecutive + * in the dump file; each per-database worker will preload blocks until + * it sees a block for some other database. Sorting by tablespace, + * filenode, forknum, and blocknum isn't critical for correctness, but + * helps us get a sequential I/O pattern. 
+ */ +static int +apw_compare_blockinfo(const void *p, const void *q) +{ + BlockInfoRecord *a = (BlockInfoRecord *) p; + BlockInfoRecord *b = (BlockInfoRecord *) q; + + cmp_member_elem(database); + cmp_member_elem(tablespace); + cmp_member_elem(filenode); + cmp_member_elem(forknum); + cmp_member_elem(blocknum); + + return 0; +} + +/* + * Signal handler for SIGTERM + */ +static void +apw_sigterm_handler(SIGNAL_ARGS) +{ + int save_errno = errno; + + got_sigterm = true; + + if (MyProc) + SetLatch(&MyProc->procLatch); + + errno = save_errno; +} + +/* + * Signal handler for SIGHUP + */ +static void +apw_sighup_handler(SIGNAL_ARGS) +{ + int save_errno = errno; + + got_sighup = true; + + if (MyProc) + SetLatch(&MyProc->procLatch); + + errno = save_errno; +} diff --git a/contrib/pg_prewarm/pg_prewarm--1.1--1.2.sql b/contrib/pg_prewarm/pg_prewarm--1.1--1.2.sql new file mode 100644 index 0000000000..2381c06eb9 --- /dev/null +++ b/contrib/pg_prewarm/pg_prewarm--1.1--1.2.sql @@ -0,0 +1,14 @@ +/* contrib/pg_prewarm/pg_prewarm--1.1--1.2.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION pg_prewarm UPDATE TO '1.2'" to load this file. \quit + +CREATE FUNCTION autoprewarm_start_worker() +RETURNS VOID STRICT +AS 'MODULE_PATHNAME', 'autoprewarm_start_worker' +LANGUAGE C; + +CREATE FUNCTION autoprewarm_dump_now() +RETURNS pg_catalog.int8 STRICT +AS 'MODULE_PATHNAME', 'autoprewarm_dump_now' +LANGUAGE C; diff --git a/contrib/pg_prewarm/pg_prewarm.control b/contrib/pg_prewarm/pg_prewarm.control index cf2fb92bed..40e3add481 100644 --- a/contrib/pg_prewarm/pg_prewarm.control +++ b/contrib/pg_prewarm/pg_prewarm.control @@ -1,5 +1,5 @@ # pg_prewarm extension comment = 'prewarm relation data' -default_version = '1.1' +default_version = '1.2' module_pathname = '$libdir/pg_prewarm' relocatable = true diff --git a/doc/src/sgml/pgprewarm.sgml b/doc/src/sgml/pgprewarm.sgml index c090401eca..c6b94a8b72 100644 --- a/doc/src/sgml/pgprewarm.sgml +++ b/doc/src/sgml/pgprewarm.sgml @@ -10,7 +10,13 @@ The pg_prewarm module provides a convenient way to load relation data into either the operating system buffer cache - or the PostgreSQL buffer cache. + or the PostgreSQL buffer cache. Prewarming + can be performed manually using the pg_prewarm function, + or can be performed automatically by including pg_prewarm in + . In the latter case, the + system will run a background worker which periodically records the contents + of shared buffers in a file called autoprewarm.blocks and + will, using 2 background workers, reload those same blocks after a restart. @@ -55,6 +61,67 @@ pg_prewarm(regclass, mode text default 'buffer', fork text default 'main', cache. For these reasons, prewarming is typically most useful at startup, when caches are largely empty. + + +autoprewarm_start_worker() RETURNS void + + + + Launch the main autoprewarm worker. This will normally happen + automatically, but is useful if automatic prewarm was not configured at + server startup time and you wish to start up the worker at a later time. + + + +autoprewarm_dump_now() RETURNS int8 + + + + Update autoprewarm.blocks immediately. This may be useful + if the autoprewarm worker is not running but you anticipate running it + after the next restart. The return value is the number of records written + to autoprewarm.blocks. 
+ + + + + Configuration Parameters + + + + + pg_prewarm.autoprewarm (boolean) + + pg_prewarm.autoprewarm configuration parameter + + + + + Controls whether the server should run the autoprewarm worker. This is + on by default. This parameter can only be set at server start. + + + + + + + + + pg_prewarm.autoprewarm_interval (int) + + pg_prewarm.autoprewarm_interval configuration parameter + + + + + This is the interval between updates to autoprewarm.blocks. + The default is 300 seconds. If set to 0, the file will not be + dumped at regular intervals, but only when the server is shut down. + + + + + diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c index 9d8ae6ae8e..f033323cff 100644 --- a/src/backend/storage/buffer/freelist.c +++ b/src/backend/storage/buffer/freelist.c @@ -168,6 +168,23 @@ ClockSweepTick(void) return victim; } +/* + * have_free_buffer -- a lockless check to see if there is a free buffer in + * the buffer pool. + * + * If the result is true it can become stale as soon as free buffers are + * moved out by other operations, so a caller that strictly wants to use a + * free buffer should not call this. + */ +bool +have_free_buffer() +{ + if (StrategyControl->firstFreeBuffer >= 0) + return true; + else + return false; +} + /* * StrategyGetBuffer * diff --git a/src/include/storage/buf_internals.h b/src/include/storage/buf_internals.h index b768b6fc96..300adfcf9e 100644 --- a/src/include/storage/buf_internals.h +++ b/src/include/storage/buf_internals.h @@ -317,6 +317,7 @@ extern void StrategyNotifyBgWriter(int bgwprocno); extern Size StrategyShmemSize(void); extern void StrategyInitialize(bool init); +extern bool have_free_buffer(void); /* buf_table.c */ extern Size BufTableShmemSize(int size); diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index 8166d86ca1..a4ace383fa 100644 --- a/src/tools/pgindent/typedefs.list +++ b/src/tools/pgindent/typedefs.list @@ -138,6 +138,7 @@ AttrDefault AttrNumber AttributeOpts AuthRequest +AutoPrewarmSharedState AutoVacOpts AutoVacuumShmemStruct AutoVacuumWorkItem @@ -218,6 +219,7 @@ BlobInfo Block BlockId BlockIdData +BlockInfoRecord BlockNumber BlockSampler BlockSamplerData From 1f6d515a67ec98194c23a5db25660856c9aab944 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 21 Aug 2017 14:43:01 -0400 Subject: [PATCH 0018/1087] Push limit through subqueries to underlying sort, where possible. Douglas Doole, reviewed by Ashutosh Bapat and by me. Minor formatting change by me. Discussion: http://postgr.es/m/CADE5jYLuugnEEUsyW6Q_4mZFYTxHxaVCQmGAsF0yiY8ZDggi-w@mail.gmail.com --- src/backend/executor/nodeLimit.c | 26 +++++++++++ src/test/regress/expected/subselect.out | 52 +++++++++++++++++++++++++ src/test/regress/sql/subselect.sql | 46 ++++++++++++++++++++ 3 files changed, 124 insertions(+) diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c index ac5a2ff0e6..09af1a5d8b 100644 --- a/src/backend/executor/nodeLimit.c +++ b/src/backend/executor/nodeLimit.c @@ -308,6 +308,9 @@ recompute_limits(LimitState *node) * since the MergeAppend surely need read no more than that many tuples from * any one input. We also have to be prepared to look through a Result, * since the planner might stick one atop MergeAppend for projection purposes. + * We can also accept one or more levels of subqueries that have no quals or + * SRFs (that is, each subquery is just projecting columns) between the LIMIT + * and any of the above.
* * This is a bit of a kluge, but we don't have any more-abstract way of * communicating between the two nodes; and it doesn't seem worth trying @@ -320,6 +323,29 @@ recompute_limits(LimitState *node) static void pass_down_bound(LimitState *node, PlanState *child_node) { + /* + * If the child is a subquery that does no filtering (no predicates) + * and does not have any SRFs in the target list then we can potentially + * push the limit through the subquery. It is possible that we could have + * multiple subqueries, so tunnel through them all. + */ + while (IsA(child_node, SubqueryScanState)) + { + SubqueryScanState *subqueryScanState; + + subqueryScanState = (SubqueryScanState *) child_node; + + /* + * Non-empty predicates or an SRF means we cannot push down the limit. + */ + if (subqueryScanState->ss.ps.qual != NULL || + expression_returns_set((Node *) child_node->plan->targetlist)) + return; + + /* Use the child in the following checks */ + child_node = subqueryScanState->subplan; + } + if (IsA(child_node, SortState)) { SortState *sortState = (SortState *) child_node; diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out index ed7d6d8034..8419dea08e 100644 --- a/src/test/regress/expected/subselect.out +++ b/src/test/regress/expected/subselect.out @@ -1041,3 +1041,55 @@ NOTICE: x = 9, y = 13 (3 rows) drop function tattle(x int, y int); +-- +-- Test that LIMIT can be pushed to SORT through a subquery that just +-- projects columns +-- +create table sq_limit (pk int primary key, c1 int, c2 int); +insert into sq_limit values + (1, 1, 1), + (2, 2, 2), + (3, 3, 3), + (4, 4, 4), + (5, 1, 1), + (6, 2, 2), + (7, 3, 3), + (8, 4, 4); +-- The explain contains data that may not be invariant, so +-- filter for just the interesting bits. The goal here is that +-- we should see three notices, in order: +-- NOTICE: Limit +-- NOTICE: Subquery +-- NOTICE: Top-N Sort +-- A missing step, or steps out of order means we have a problem. +do $$ + declare x text; + begin + for x in + explain (analyze, summary off, timing off, costs off) + select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3 + loop + if (left(ltrim(x), 5) = 'Limit') then + raise notice 'Limit'; + end if; + if (left(ltrim(x), 12) = '-> Subquery') then + raise notice 'Subquery'; + end if; + if (left(ltrim(x), 18) = 'Sort Method: top-N') then + raise notice 'Top-N Sort'; + end if; + end loop; + end; +$$; +NOTICE: Limit +NOTICE: Subquery +NOTICE: Top-N Sort +select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3; + pk | c2 +----+---- + 1 | 1 + 5 | 1 + 2 | 2 +(3 rows) + +drop table sq_limit; diff --git a/src/test/regress/sql/subselect.sql b/src/test/regress/sql/subselect.sql index 2fc0e26ca0..7087ee27cd 100644 --- a/src/test/regress/sql/subselect.sql +++ b/src/test/regress/sql/subselect.sql @@ -540,3 +540,49 @@ select * from where tattle(x, u); drop function tattle(x int, y int); + +-- +-- Test that LIMIT can be pushed to SORT through a subquery that just +-- projects columns +-- +create table sq_limit (pk int primary key, c1 int, c2 int); +insert into sq_limit values + (1, 1, 1), + (2, 2, 2), + (3, 3, 3), + (4, 4, 4), + (5, 1, 1), + (6, 2, 2), + (7, 3, 3), + (8, 4, 4); + +-- The explain contains data that may not be invariant, so +-- filter for just the interesting bits. The goal here is that +-- we should see three notices, in order: +-- NOTICE: Limit +-- NOTICE: Subquery +-- NOTICE: Top-N Sort +-- A missing step, or steps out of order means we have a problem. 
+do $$ + declare x text; + begin + for x in + explain (analyze, summary off, timing off, costs off) + select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3 + loop + if (left(ltrim(x), 5) = 'Limit') then + raise notice 'Limit'; + end if; + if (left(ltrim(x), 12) = '-> Subquery') then + raise notice 'Subquery'; + end if; + if (left(ltrim(x), 18) = 'Sort Method: top-N') then + raise notice 'Top-N Sort'; + end if; + end loop; + end; +$$; + +select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3; + +drop table sq_limit; From 51e225da306e14616b690308a59fd89e22335035 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 21 Aug 2017 09:17:06 -0400 Subject: [PATCH 0019/1087] Expand set of predefined ICU locales Install language+region combinations even if they are not distinct from the language's base locale. This gives better long-term stability of the set of predefined locales and makes the predefined locales less implementation-dependent and more practical for users. Reviewed-by: Peter Geoghegan --- doc/src/sgml/charset.sgml | 13 ++++++------- src/backend/commands/collationcmds.c | 15 ++++++++++++--- 2 files changed, 18 insertions(+), 10 deletions(-) diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index 48ecfc5f48..f2a4acc115 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -653,9 +653,8 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; string will be accepted as a locale name.) See for information on ICU locale naming. initdb uses the ICU - APIs to extract a set of locales with distinct collation rules to populate - the initial set of collations. Here are some example collations that - might be created: + APIs to extract a set of distinct locales to populate the initial set of + collations. Here are some example collations that might be created: @@ -677,9 +676,9 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; German collation for Austria, default variant - (As of this writing, there is no, - say, de-DE-x-icu or de-CH-x-icu, - because those are equivalent to de-x-icu.) + (There are also, say, de-DE-x-icu + or de-CH-x-icu, but as of this writing, they are + equivalent to de-x-icu.) @@ -690,6 +689,7 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; German collation for Austria, phone book variant + und-x-icu (for undefined) @@ -724,7 +724,6 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; CREATE COLLATION german FROM "de_DE"; CREATE COLLATION french FROM "fr-x-icu"; -CREATE COLLATION "de-DE-x-icu" FROM "de-x-icu"; diff --git a/src/backend/commands/collationcmds.c b/src/backend/commands/collationcmds.c index 8572b2dedc..d36ce53560 100644 --- a/src/backend/commands/collationcmds.c +++ b/src/backend/commands/collationcmds.c @@ -667,7 +667,16 @@ pg_import_system_collations(PG_FUNCTION_ARGS) } #endif /* READ_LOCALE_A_OUTPUT */ - /* Load collations known to ICU */ + /* + * Load collations known to ICU + * + * We use uloc_countAvailable()/uloc_getAvailable() rather than + * ucol_countAvailable()/ucol_getAvailable(). The former returns a full + * set of language+region combinations, whereas the latter only returns + * language+region combinations if they are distinct from the language's + * base collation. So there might not be a de-DE or en-GB, which would be + * confusing. + */ #ifdef USE_ICU { int i; @@ -676,7 +685,7 @@ pg_import_system_collations(PG_FUNCTION_ARGS) * Start the loop at -1 to sneak in the root locale without too much * code duplication.
*/ - for (i = -1; i < ucol_countAvailable(); i++) + for (i = -1; i < uloc_countAvailable(); i++) { /* * In ICU 4.2, ucol_getKeywordValuesForLocale() sometimes returns @@ -706,7 +715,7 @@ pg_import_system_collations(PG_FUNCTION_ARGS) if (i == -1) name = ""; /* ICU root locale */ else - name = ucol_getAvailable(i); + name = uloc_getAvailable(i); langtag = get_icu_language_tag(name); collcollate = U_ICU_VERSION_MAJOR_NUM >= 54 ? langtag : name; From 2bfd1b1ee562c4e4fd065c7f7d1beaa9b9852070 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 21 Aug 2017 11:22:00 -0400 Subject: [PATCH 0020/1087] Don't install ICU collation keyword variants Users can still create them themselves. Instead, document Unicode TR 35 collation options for ICU, so users can create all this themselves. Reviewed-by: Peter Geoghegan --- doc/src/sgml/charset.sgml | 98 ++++++++++++++++++++++++---- src/backend/commands/collationcmds.c | 71 -------------------- 2 files changed, 84 insertions(+), 85 deletions(-) diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index f2a4acc115..44e43503a6 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -664,13 +664,6 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; - - de-u-co-phonebk-x-icu - - German collation, phone book variant - - - de-AT-x-icu @@ -683,13 +676,6 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; - - de-AT-u-co-phonebk-x-icu - - German collation for Austria, phone book variant - - - und-x-icu (for undefined) @@ -709,6 +695,90 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; will draw an error along the lines of collation "de-x-icu" for encoding "WIN874" does not exist. + + + ICU allows collations to be customized beyond the basic language+country + set that is preloaded by initdb. Users are encouraged + to define their own collation objects that make use of these facilities to + suit the sorting behavior to their requirements. Here are some examples: + + + + CREATE COLLATION "de-u-co-phonebk-x-icu" (provider = icu, locale = 'de-u-co-phonebk') + + German collation with phone book collation type + + + + + CREATE COLLATION "und-u-co-emoji-x-icu" (provider = icu, locale = 'und-u-co-emoji') + + + Root collation with Emoji collation type, per Unicode Technical Standard #51 + + + + + + CREATE COLLATION digitslast (provider = icu, locale = 'en-u-kr-latn-digit') + + + Sort digits after Latin letters. (The default is digits before letters.) + + + + + + CREATE COLLATION upperfirst (provider = icu, locale = 'en-u-kf-upper') + + + Sort upper-case letters before lower-case letters. (The default is + lower-case letters first.) + + + + + + CREATE COLLATION special (provider = icu, locale = 'en-u-kf-upper-kr-latn-digit') + + + Combines both of the above options. + + + + + + CREATE COLLATION numeric (provider = icu, locale = 'en-u-kn-true') + + + Numeric ordering, sorts sequences of digits by their numeric value, + for example: A-21 < A-123 + (also known as natural sort). + + + + + + See Unicode + Technical Standard #35 + and BCP 47 for + details. The list of possible collation types (co + subtag) can be found in + the CLDR + repository. + The ICU Locale + Explorer can be used to check the details of a particular locale + definition. + + + + Note that while this system allows creating collations that ignore + case or ignore accents or similar (using + the ks key), PostgreSQL does not at the moment allow + such collations to act in a truly case- or accent-insensitive manner. 
Any + strings that compare equal according to the collation but are not + byte-wise equal will be sorted according to their byte values. + diff --git a/src/backend/commands/collationcmds.c b/src/backend/commands/collationcmds.c index d36ce53560..9437731276 100644 --- a/src/backend/commands/collationcmds.c +++ b/src/backend/commands/collationcmds.c @@ -687,30 +687,11 @@ pg_import_system_collations(PG_FUNCTION_ARGS) */ for (i = -1; i < uloc_countAvailable(); i++) { - /* - * In ICU 4.2, ucol_getKeywordValuesForLocale() sometimes returns - * values that will not be accepted by uloc_toLanguageTag(). Skip - * loading keyword variants in that version. (Both - * ucol_getKeywordValuesForLocale() and uloc_toLanguageTag() are - * new in ICU 4.2, so older versions are not supported at all.) - * - * XXX We have no information about ICU 4.3 through 4.7, but we - * know the code below works with 4.8. - */ -#if U_ICU_VERSION_MAJOR_NUM > 4 || (U_ICU_VERSION_MAJOR_NUM == 4 && U_ICU_VERSION_MINOR_NUM > 2) -#define LOAD_ICU_KEYWORD_VARIANTS -#endif - const char *name; char *langtag; char *icucomment; const char *collcollate; Oid collid; -#ifdef LOAD_ICU_KEYWORD_VARIANTS - UEnumeration *en; - UErrorCode status; - const char *val; -#endif if (i == -1) name = ""; /* ICU root locale */ @@ -744,58 +725,6 @@ pg_import_system_collations(PG_FUNCTION_ARGS) CreateComments(collid, CollationRelationId, 0, icucomment); } - - /* - * Add keyword variants, if enabled. - */ -#ifdef LOAD_ICU_KEYWORD_VARIANTS - status = U_ZERO_ERROR; - en = ucol_getKeywordValuesForLocale("collation", name, TRUE, &status); - if (U_FAILURE(status)) - ereport(ERROR, - (errmsg("could not get keyword values for locale \"%s\": %s", - name, u_errorName(status)))); - - status = U_ZERO_ERROR; - uenum_reset(en, &status); - while ((val = uenum_next(en, NULL, &status))) - { - char *localeid = psprintf("%s@collation=%s", name, val); - - langtag = get_icu_language_tag(localeid); - collcollate = U_ICU_VERSION_MAJOR_NUM >= 54 ? langtag : localeid; - - /* - * Be paranoid about not allowing any non-ASCII strings into - * pg_collation - */ - if (!is_all_ascii(langtag) || !is_all_ascii(collcollate)) - continue; - - collid = CollationCreate(psprintf("%s-x-icu", langtag), - nspid, GetUserId(), - COLLPROVIDER_ICU, -1, - collcollate, collcollate, - get_collation_actual_version(COLLPROVIDER_ICU, collcollate), - true, true); - if (OidIsValid(collid)) - { - ncreated++; - - CommandCounterIncrement(); - - icucomment = get_icu_locale_comment(localeid); - if (icucomment) - CreateComments(collid, CollationRelationId, 0, - icucomment); - } - } - if (U_FAILURE(status)) - ereport(ERROR, - (errmsg("could not get keyword values for locale \"%s\": %s", - name, u_errorName(status)))); - uenum_close(en); -#endif /* LOAD_ICU_KEYWORD_VARIANTS */ } } #endif /* USE_ICU */ From 0052a0243d9c979a06ef273af965508103c456e0 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 22 Aug 2017 15:36:49 -0700 Subject: [PATCH 0021/1087] Add a hash_combine function for mixing hash values. This hash function is derived from Boost's function of the same name. 
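As a rough illustration (a sketch, not part of the patch): a caller could fold
several per-field hash values into one like this, assuming hash_uint32() from
access/hash.h and the hash_combine() definition added below; hash_two_fields()
is a made-up name.

    #include "postgres.h"

    #include "access/hash.h"
    #include "utils/hashutils.h"

    /* Mix two per-field hashes into a single well-mixed value. */
    static uint32
    hash_two_fields(uint32 a, uint32 b)
    {
        uint32 s = hash_combine(0, hash_uint32(a));

        /* hash_combine() is order-sensitive: (a, b) and (b, a) differ. */
        s = hash_combine(s, hash_uint32(b));
        return s;
    }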
Author: Andres Freund, Thomas Munro Discussion: https://postgr.es/m/CAEepm%3D3rdgjfxW4cKvJ0OEmya2-34B0qHNG1xV0vK7TGPJGMUQ%40mail.gmail.com Discussion: https://postgr.es/m/20170731210844.3cwrkmsmbbpt4rjc%40alap3.anarazel.de --- src/include/utils/hashutils.h | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) create mode 100644 src/include/utils/hashutils.h diff --git a/src/include/utils/hashutils.h b/src/include/utils/hashutils.h new file mode 100644 index 0000000000..56b7bfc9cb --- /dev/null +++ b/src/include/utils/hashutils.h @@ -0,0 +1,23 @@ +/* + * Utilities for working with hash values. + * + * Portions Copyright (c) 2017, PostgreSQL Global Development Group + */ + +#ifndef HASHUTILS_H +#define HASHUTILS_H + +/* + * Combine two hash values, resulting in another hash value, with decent bit + * mixing. + * + * Similar to boost's hash_combine(). + */ +static inline uint32 +hash_combine(uint32 a, uint32 b) +{ + a ^= b + 0x9e3779b9 + (a << 6) + (a >> 2); + return a; +} + +#endif /* HASHUTILS_H */ From 35ea75632a56ca8ef22aa8fed03b9dabb9c8c575 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 22 Aug 2017 16:05:48 -0700 Subject: [PATCH 0022/1087] Refactor typcache.c's record typmod hash table. Previously, tuple descriptors were stored in chains keyed by a fixed size array of OIDs. That meant there were effectively two levels of collision chain -- one inside and one outside the hash table. Instead, let dynahash.c look after conflicts for us by supplying a proper hash and equal function pair. This is a nice cleanup on its own, but also simplifies followup changes allowing blessed TupleDescs to be shared between backends participating in parallel query. Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/CAEepm%3D34GVhOL%2BarUx56yx7OPk7%3DqpGsv3CpO54feqjAwQKm5g%40mail.gmail.com --- src/backend/access/common/tupdesc.c | 27 +++++++++++ src/backend/utils/cache/typcache.c | 74 ++++++++++++++--------------- src/include/access/tupdesc.h | 2 + 3 files changed, 65 insertions(+), 38 deletions(-) diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c index 75b191ba2a..4436c86361 100644 --- a/src/backend/access/common/tupdesc.c +++ b/src/backend/access/common/tupdesc.c @@ -19,6 +19,7 @@ #include "postgres.h" +#include "access/hash.h" #include "access/htup_details.h" #include "catalog/pg_collation.h" #include "catalog/pg_type.h" @@ -26,6 +27,7 @@ #include "parser/parse_type.h" #include "utils/acl.h" #include "utils/builtins.h" +#include "utils/hashutils.h" #include "utils/resowner_private.h" #include "utils/syscache.h" @@ -443,6 +445,31 @@ equalTupleDescs(TupleDesc tupdesc1, TupleDesc tupdesc2) return true; } +/* + * hashTupleDesc + * Compute a hash value for a tuple descriptor. + * + * If two tuple descriptors would be considered equal by equalTupleDescs() + * then their hash value will be equal according to this function. + * + * Note that currently contents of constraint are not hashed - it'd be a bit + * painful to do so, and conflicts just due to constraints are unlikely. 
+ */ +uint32 +hashTupleDesc(TupleDesc desc) +{ + uint32 s; + int i; + + s = hash_combine(0, hash_uint32(desc->natts)); + s = hash_combine(s, hash_uint32(desc->tdtypeid)); + s = hash_combine(s, hash_uint32(desc->tdhasoid)); + for (i = 0; i < desc->natts; ++i) + s = hash_combine(s, hash_uint32(TupleDescAttr(desc, i)->atttypid)); + + return s; +} + /* * TupleDescInitEntry * This function initializes a single attribute structure in diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index 20567a394b..691d4987b1 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -133,19 +133,12 @@ typedef struct TypeCacheEnumData * * Stored record types are remembered in a linear array of TupleDescs, * which can be indexed quickly with the assigned typmod. There is also - * a hash table to speed searches for matching TupleDescs. The hash key - * uses just the first N columns' type OIDs, and so we may have multiple - * entries with the same hash key. + * a hash table to speed searches for matching TupleDescs. */ -#define REC_HASH_KEYS 16 /* use this many columns in hash key */ typedef struct RecordCacheEntry { - /* the hash lookup key MUST BE FIRST */ - Oid hashkey[REC_HASH_KEYS]; /* column type IDs, zero-filled */ - - /* list of TupleDescs for record types with this hashkey */ - List *tupdescs; + TupleDesc tupdesc; } RecordCacheEntry; static HTAB *RecordCacheHash = NULL; @@ -1297,6 +1290,28 @@ lookup_rowtype_tupdesc_copy(Oid type_id, int32 typmod) return CreateTupleDescCopyConstr(tmp); } +/* + * Hash function for the hash table of RecordCacheEntry. + */ +static uint32 +record_type_typmod_hash(const void *data, size_t size) +{ + RecordCacheEntry *entry = (RecordCacheEntry *) data; + + return hashTupleDesc(entry->tupdesc); +} + +/* + * Match function for the hash table of RecordCacheEntry. + */ +static int +record_type_typmod_compare(const void *a, const void *b, size_t size) +{ + RecordCacheEntry *left = (RecordCacheEntry *) a; + RecordCacheEntry *right = (RecordCacheEntry *) b; + + return equalTupleDescs(left->tupdesc, right->tupdesc) ? 0 : 1; +} /* * assign_record_type_typmod @@ -1310,10 +1325,7 @@ assign_record_type_typmod(TupleDesc tupDesc) { RecordCacheEntry *recentry; TupleDesc entDesc; - Oid hashkey[REC_HASH_KEYS]; bool found; - int i; - ListCell *l; int32 newtypmod; MemoryContext oldcxt; @@ -1325,45 +1337,31 @@ assign_record_type_typmod(TupleDesc tupDesc) HASHCTL ctl; MemSet(&ctl, 0, sizeof(ctl)); - ctl.keysize = REC_HASH_KEYS * sizeof(Oid); + ctl.keysize = sizeof(TupleDesc); /* just the pointer */ ctl.entrysize = sizeof(RecordCacheEntry); + ctl.hash = record_type_typmod_hash; + ctl.match = record_type_typmod_compare; RecordCacheHash = hash_create("Record information cache", 64, - &ctl, HASH_ELEM | HASH_BLOBS); + &ctl, + HASH_ELEM | HASH_FUNCTION | HASH_COMPARE); /* Also make sure CacheMemoryContext exists */ if (!CacheMemoryContext) CreateCacheMemoryContext(); } - /* Find or create a hashtable entry for this hash class */ - MemSet(hashkey, 0, sizeof(hashkey)); - for (i = 0; i < tupDesc->natts; i++) - { - if (i >= REC_HASH_KEYS) - break; - hashkey[i] = TupleDescAttr(tupDesc, i)->atttypid; - } + /* Find or create a hashtable entry for this tuple descriptor */ recentry = (RecordCacheEntry *) hash_search(RecordCacheHash, - (void *) hashkey, + (void *) &tupDesc, HASH_ENTER, &found); - if (!found) + if (found && recentry->tupdesc != NULL) { - /* New entry ... 
hash_search initialized only the hash key */ - recentry->tupdescs = NIL; - } - - /* Look for existing record cache entry */ - foreach(l, recentry->tupdescs) - { - entDesc = (TupleDesc) lfirst(l); - if (equalTupleDescs(tupDesc, entDesc)) - { - tupDesc->tdtypmod = entDesc->tdtypmod; - return; - } + tupDesc->tdtypmod = recentry->tupdesc->tdtypmod; + return; } /* Not present, so need to manufacture an entry */ + recentry->tupdesc = NULL; oldcxt = MemoryContextSwitchTo(CacheMemoryContext); if (RecordCacheArray == NULL) @@ -1382,7 +1380,7 @@ assign_record_type_typmod(TupleDesc tupDesc) /* if fail in subrs, no damage except possibly some wasted memory... */ entDesc = CreateTupleDescCopy(tupDesc); - recentry->tupdescs = lcons(entDesc, recentry->tupdescs); + recentry->tupdesc = entDesc; /* mark it as a reference-counted tupdesc */ entDesc->tdrefcount = 1; /* now it's safe to advance NextRecordTypmod */ diff --git a/src/include/access/tupdesc.h b/src/include/access/tupdesc.h index 39fd59686a..989fe738bb 100644 --- a/src/include/access/tupdesc.h +++ b/src/include/access/tupdesc.h @@ -114,6 +114,8 @@ extern void DecrTupleDescRefCount(TupleDesc tupdesc); extern bool equalTupleDescs(TupleDesc tupdesc1, TupleDesc tupdesc2); +extern uint32 hashTupleDesc(TupleDesc tupdesc); + extern void TupleDescInitEntry(TupleDesc desc, AttrNumber attributeNumber, const char *attributeName, From b5664cfd4c17eb69e6d7356ce670cc4a98074d13 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 22 Aug 2017 19:55:21 -0400 Subject: [PATCH 0023/1087] doc: Mention identity column feature in section on serial Reported-by: Basil Bourque --- doc/src/sgml/datatype.sgml | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index 5f881a0b74..512756df4a 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -837,6 +837,14 @@ FROM generate_series(-3.5, 3.5, 1) as x; and serial type + + + This section describes a PostgreSQL-specific way to create an + autoincrementing column. Another way is to use the SQL-standard + identity column feature, described at . 
+ + + The data types smallserial, serial and bigserial are not true types, but merely From 7e046e6e8a33f8a7ef641b9539376cf939993105 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 22 Aug 2017 20:32:17 -0400 Subject: [PATCH 0024/1087] pg_upgrade: Message translatability and style fixes --- src/bin/pg_upgrade/check.c | 24 +++++++++++++----------- src/bin/pg_upgrade/exec.c | 4 ++-- src/bin/pg_upgrade/function.c | 2 +- src/bin/pg_upgrade/pg_upgrade.c | 12 ++++++++---- src/bin/pg_upgrade/server.c | 6 +++--- src/bin/pg_upgrade/version.c | 6 +++--- 6 files changed, 30 insertions(+), 24 deletions(-) diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c index 11f804b7e8..327c78e879 100644 --- a/src/bin/pg_upgrade/check.c +++ b/src/bin/pg_upgrade/check.c @@ -62,13 +62,15 @@ output_check_banner(bool live_check) { if (user_opts.check && live_check) { - pg_log(PG_REPORT, "Performing Consistency Checks on Old Live Server\n"); - pg_log(PG_REPORT, "------------------------------------------------\n"); + pg_log(PG_REPORT, + "Performing Consistency Checks on Old Live Server\n" + "------------------------------------------------\n"); } else { - pg_log(PG_REPORT, "Performing Consistency Checks\n"); - pg_log(PG_REPORT, "-----------------------------\n"); + pg_log(PG_REPORT, + "Performing Consistency Checks\n" + "-----------------------------\n"); } } @@ -991,7 +993,7 @@ check_for_jsonb_9_4_usage(ClusterInfo *cluster) bool found = false; char output_path[MAXPGPATH]; - prep_status("Checking for JSONB user data types"); + prep_status("Checking for incompatible jsonb data type"); snprintf(output_path, sizeof(output_path), "tables_using_jsonb.txt"); @@ -1022,7 +1024,7 @@ check_for_jsonb_9_4_usage(ClusterInfo *cluster) " a.atttypid = 'pg_catalog.jsonb'::pg_catalog.regtype AND " " c.relnamespace = n.oid AND " /* exclude possible orphaned temp tables */ - " n.nspname !~ '^pg_temp_' AND " + " n.nspname !~ '^pg_temp_' AND " " n.nspname NOT IN ('pg_catalog', 'information_schema')"); ntups = PQntuples(res); @@ -1057,8 +1059,8 @@ check_for_jsonb_9_4_usage(ClusterInfo *cluster) if (found) { pg_log(PG_REPORT, "fatal\n"); - pg_fatal("Your installation contains one of the JSONB data types in user tables.\n" - "The internal format of JSONB changed during 9.4 beta so this cluster cannot currently\n" + pg_fatal("Your installation contains the \"jsonb\" data type in user tables.\n" + "The internal format of \"jsonb\" changed during 9.4 beta so this cluster cannot currently\n" "be upgraded. You can remove the problem tables and restart the upgrade. 
A list\n" "of the problem columns is in the file:\n" " %s\n\n", output_path); @@ -1078,7 +1080,7 @@ check_for_pg_role_prefix(ClusterInfo *cluster) PGresult *res; PGconn *conn = connectToServer(cluster, "template1"); - prep_status("Checking for roles starting with 'pg_'"); + prep_status("Checking for roles starting with \"pg_\""); res = executeQueryOrDie(conn, "SELECT * " @@ -1088,9 +1090,9 @@ check_for_pg_role_prefix(ClusterInfo *cluster) if (PQntuples(res) != 0) { if (cluster == &old_cluster) - pg_fatal("The source cluster contains roles starting with 'pg_'\n"); + pg_fatal("The source cluster contains roles starting with \"pg_\"\n"); else - pg_fatal("The target cluster contains roles starting with 'pg_'\n"); + pg_fatal("The target cluster contains roles starting with \"pg_\"\n"); } PQclear(res); diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c index cc4b4078db..cb8e29b17c 100644 --- a/src/bin/pg_upgrade/exec.c +++ b/src/bin/pg_upgrade/exec.c @@ -307,7 +307,7 @@ check_single_dir(const char *pg_data, const char *subdir) report_status(PG_FATAL, "check for \"%s\" failed: %s\n", subDirName, strerror(errno)); else if (!S_ISDIR(statBuf.st_mode)) - report_status(PG_FATAL, "%s is not a directory\n", + report_status(PG_FATAL, "\"%s\" is not a directory\n", subDirName); } @@ -370,7 +370,7 @@ check_bin_dir(ClusterInfo *cluster) report_status(PG_FATAL, "check for \"%s\" failed: %s\n", cluster->bindir, strerror(errno)); else if (!S_ISDIR(statBuf.st_mode)) - report_status(PG_FATAL, "%s is not a directory\n", + report_status(PG_FATAL, "\"%s\" is not a directory\n", cluster->bindir); validate_exec(cluster->bindir, "postgres"); diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c index 8383b75325..063a94f0ca 100644 --- a/src/bin/pg_upgrade/function.c +++ b/src/bin/pg_upgrade/function.c @@ -252,7 +252,7 @@ check_loadable_libraries(void) if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL) pg_fatal("could not open file \"%s\": %s\n", output_path, strerror(errno)); - fprintf(script, _("could not load library \"%s\":\n%s\n"), + fprintf(script, _("could not load library \"%s\": %s"), lib, PQerrorMessage(conn)); } diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index ef9793f3bd..2a68ce6efa 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -104,8 +104,10 @@ main(int argc, char **argv) check_new_cluster(); report_clusters_compatible(); - pg_log(PG_REPORT, "\nPerforming Upgrade\n"); - pg_log(PG_REPORT, "------------------\n"); + pg_log(PG_REPORT, + "\n" + "Performing Upgrade\n" + "------------------\n"); prepare_new_cluster(); @@ -164,8 +166,10 @@ main(int argc, char **argv) issue_warnings_and_set_wal_level(); - pg_log(PG_REPORT, "\nUpgrade Complete\n"); - pg_log(PG_REPORT, "----------------\n"); + pg_log(PG_REPORT, + "\n" + "Upgrade Complete\n" + "----------------\n"); output_completion_banner(analyze_script_file_name, deletion_script_file_name); diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c index 58e3593896..26e60fab46 100644 --- a/src/bin/pg_upgrade/server.c +++ b/src/bin/pg_upgrade/server.c @@ -30,7 +30,7 @@ connectToServer(ClusterInfo *cluster, const char *db_name) if (conn == NULL || PQstatus(conn) != CONNECTION_OK) { - pg_log(PG_REPORT, "connection to database failed: %s\n", + pg_log(PG_REPORT, "connection to database failed: %s", PQerrorMessage(conn)); if (conn) @@ -132,7 +132,7 @@ executeQueryOrDie(PGconn *conn, const char *fmt,...) 
if ((status != PGRES_TUPLES_OK) && (status != PGRES_COMMAND_OK))
 	{
-		pg_log(PG_REPORT, "SQL command failed\n%s\n%s\n", query,
+		pg_log(PG_REPORT, "SQL command failed\n%s\n%s", query,
 			   PQerrorMessage(conn));
 		PQclear(result);
 		PQfinish(conn);
@@ -281,7 +281,7 @@ start_postmaster(ClusterInfo *cluster, bool throw_error)
 	if ((conn = get_db_conn(cluster, "template1")) == NULL ||
 		PQstatus(conn) != CONNECTION_OK)
 	{
-		pg_log(PG_REPORT, "\nconnection to database failed: %s\n",
+		pg_log(PG_REPORT, "\nconnection to database failed: %s",
 			   PQerrorMessage(conn));
 		if (conn)
 			PQfinish(conn);
diff --git a/src/bin/pg_upgrade/version.c b/src/bin/pg_upgrade/version.c
index a9071a3448..524b604296 100644
--- a/src/bin/pg_upgrade/version.c
+++ b/src/bin/pg_upgrade/version.c
@@ -82,7 +82,7 @@ new_9_0_populate_pg_largeobject_metadata(ClusterInfo *cluster, bool check_mode)
 		pg_log(PG_WARNING, "\n"
 			   "Your installation contains large objects. The new database has an\n"
 			   "additional large object permission table. After upgrading, you will be\n"
-			   "given a command to populate the pg_largeobject permission table with\n"
+			   "given a command to populate the pg_largeobject_metadata table with\n"
 			   "default permissions.\n\n");
 	else
 		pg_log(PG_WARNING, "\n"
@@ -115,7 +115,7 @@ old_9_3_check_for_line_data_type_usage(ClusterInfo *cluster)
 	bool		found = false;
 	char		output_path[MAXPGPATH];
 
-	prep_status("Checking for invalid \"line\" user columns");
+	prep_status("Checking for incompatible \"line\" data type");
 
 	snprintf(output_path, sizeof(output_path), "tables_using_line.txt");
 
@@ -390,7 +390,7 @@ old_9_6_invalidate_hash_indexes(ClusterInfo *cluster, bool check_mode)
 		pg_log(PG_WARNING, "\n"
 			   "Your installation contains hash indexes. These indexes have different\n"
 			   "internal formats between your old and new clusters, so they must be\n"
-			   "reindexed with the REINDEX command. The file:\n"
+			   "reindexed with the REINDEX command. The file\n"
 			   "    %s\n"
 			   "when executed by psql by the database superuser will recreate all invalid\n"
 			   "indexes; until then, none of these indexes will be used.\n\n",

From 8c0d7bafad36434cb08ac2c78e69ae72c194ca20 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Tue, 22 Aug 2017 22:41:32 -0700
Subject: [PATCH 0025/1087] Hash tables backed by DSA shared memory.

Add general purpose chaining hash tables for DSA memory. Unlike DynaHash in
shared memory mode, these hash tables can grow as required, and cope with
being mapped into different addresses in different backends.

There is a wide range of potential users for such a hash table, though it's
very likely the interface will need to evolve as we come to understand the
needs of different kinds of users. E.g. support for iterators and incremental
resizing is planned for later commits and the details of the callback
signatures are likely to change.
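
For illustration only (an editor's sketch, not part of this commit): a
minimal usage example under stated assumptions. It assumes an
already-attached dsa_area ("area"), a registered LWLock tranche id
("MY_TRANCHE_ID"), and the hypothetical names my_entry/my_params; per the
dshash.h comments below, memcmp and tag_hash (from access/hash.h) have
signatures compatible with the non-arg callback types.

    #include "postgres.h"
    #include "access/hash.h"            /* tag_hash */
    #include "lib/dshash.h"

    typedef struct my_entry
    {
        uint32      key;                /* initial bytes of the entry are the key */
        int         count;              /* user payload */
    } my_entry;

    static const dshash_parameters my_params = {
        sizeof(uint32),                 /* key_size */
        sizeof(my_entry),               /* entry_size */
        (dshash_compare_function) memcmp,   /* compare_function */
        NULL,                           /* compare_arg_function */
        (dshash_hash_function) tag_hash,    /* hash_function */
        NULL,                           /* hash_arg_function */
        MY_TRANCHE_ID                   /* tranche_id, assumed registered */
    };

    /* In the creating backend: */
    dshash_table *table = dshash_create(area, &my_params, NULL);
    dshash_table_handle handle = dshash_get_hash_table_handle(table);

    /* In any other backend attached to the same area, given "handle": */
    dshash_table *table2 = dshash_attach(area, &my_params, handle, NULL);

    /* Upsert: the returned entry is exclusively locked until released. */
    uint32      key = 42;
    bool        found;
    my_entry   *entry = dshash_find_or_insert(table, &key, &found);

    if (!found)
        entry->count = 0;               /* new entry: key bytes are already set */
    entry->count++;
    dshash_release_lock(table, entry);

Since the key occupies the initial key_size bytes of each entry, only the
non-key fields need initializing when 'found' comes back false.
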
Author: Thomas Munro
Reviewed-By: John Gorman, Andres Freund, Dilip Kumar, Robert Haas
Discussion: https://postgr.es/m/CAEepm=3d8o8XdVwYT6O=bHKsKAM2pu2D6sV1S_=4d+jStVCE7w@mail.gmail.com
 https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com
---
 src/backend/lib/Makefile | 4 +-
 src/backend/lib/dshash.c | 889 +++++++++++++++++++++++++++++++
 src/include/lib/dshash.h | 107 ++++
 src/tools/pgindent/typedefs.list | 7 +
 4 files changed, 1005 insertions(+), 2 deletions(-)
 create mode 100644 src/backend/lib/dshash.c
 create mode 100644 src/include/lib/dshash.h

diff --git a/src/backend/lib/Makefile b/src/backend/lib/Makefile
index f222c6c20d..d1fefe43f2 100644
--- a/src/backend/lib/Makefile
+++ b/src/backend/lib/Makefile
@@ -12,7 +12,7 @@ subdir = src/backend/lib
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = binaryheap.o bipartite_match.o hyperloglog.o ilist.o knapsack.o \
-       pairingheap.o rbtree.o stringinfo.o
+OBJS = binaryheap.o bipartite_match.o dshash.o hyperloglog.o ilist.o \
+       knapsack.o pairingheap.o rbtree.o stringinfo.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/lib/dshash.c b/src/backend/lib/dshash.c
new file mode 100644
index 0000000000..988d569b84
--- /dev/null
+++ b/src/backend/lib/dshash.c
@@ -0,0 +1,889 @@
+/*-------------------------------------------------------------------------
+ *
+ * dshash.c
+ *	  Concurrent hash tables backed by dynamic shared memory areas.
+ *
+ * This is an open hashing hash table, with a linked list at each table
+ * entry. It supports dynamic resizing, as required to prevent the linked
+ * lists from growing too long on average. Currently, only growing is
+ * supported: the hash table never becomes smaller.
+ *
+ * To deal with concurrency, it has a fixed size set of partitions, each of
+ * which is independently locked. Each bucket maps to a partition; so insert,
+ * find and iterate operations normally only acquire one lock. Therefore,
+ * good concurrency is achieved whenever such operations don't collide at the
+ * lock partition level. However, when a resize operation begins, all
+ * partition locks must be acquired simultaneously for a brief period. This
+ * is only expected to happen a small number of times until a stable size is
+ * found, since growth is geometric.
+ *
+ * Future versions may support iterators and incremental resizing; for now
+ * the implementation is minimalist.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/backend/lib/dshash.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "lib/dshash.h"
+#include "storage/ipc.h"
+#include "storage/lwlock.h"
+#include "utils/dsa.h"
+#include "utils/memutils.h"
+
+/*
+ * An item in the hash table. This wraps the user's entry object in an
+ * envelope that holds a pointer back to the bucket and a pointer to the next
+ * item in the bucket.
+ */
+struct dshash_table_item
+{
+	/* The next item in the same bucket. */
+	dsa_pointer next;
+	/* The hashed key, to avoid having to recompute it. */
+	dshash_hash hash;
+	/* The user's entry object follows here. See ENTRY_FROM_ITEM(item). */
+};
+
+/*
+ * The number of partitions for locking purposes. This is set to match
+ * NUM_BUFFER_PARTITIONS for now, on the basis that whatever's good enough for
+ * the buffer pool must be good enough for any other purpose. 
This could + * become a runtime parameter in future. + */ +#define DSHASH_NUM_PARTITIONS_LOG2 7 +#define DSHASH_NUM_PARTITIONS (1 << DSHASH_NUM_PARTITIONS_LOG2) + +/* A magic value used to identify our hash tables. */ +#define DSHASH_MAGIC 0x75ff6a20 + +/* + * Tracking information for each lock partition. Initially, each partition + * corresponds to one bucket, but each time the hash table grows, the buckets + * covered by each partition split so the number of buckets covered doubles. + * + * We might want to add padding here so that each partition is on a different + * cache line, but doing so would bloat this structure considerably. + */ +typedef struct dshash_partition +{ + LWLock lock; /* Protects all buckets in this partition. */ + size_t count; /* # of items in this partition's buckets */ +} dshash_partition; + +/* + * The head object for a hash table. This will be stored in dynamic shared + * memory. + */ +typedef struct dshash_table_control +{ + dshash_table_handle handle; + uint32 magic; + dshash_partition partitions[DSHASH_NUM_PARTITIONS]; + int lwlock_tranche_id; + + /* + * The following members are written to only when ALL partitions locks are + * held. They can be read when any one partition lock is held. + */ + + /* Number of buckets expressed as power of 2 (8 = 256 buckets). */ + size_t size_log2; /* log2(number of buckets) */ + dsa_pointer buckets; /* current bucket array */ +} dshash_table_control; + +/* + * Per-backend state for a dynamic hash table. + */ +struct dshash_table +{ + dsa_area *area; /* Backing dynamic shared memory area. */ + dshash_parameters params; /* Parameters. */ + void *arg; /* User-supplied data pointer. */ + dshash_table_control *control; /* Control object in DSM. */ + dsa_pointer *buckets; /* Current bucket pointers in DSM. */ + size_t size_log2; /* log2(number of buckets) */ + bool find_locked; /* Is any partition lock held by 'find'? */ + bool find_exclusively_locked; /* ... exclusively? */ +}; + +/* Given a pointer to an item, find the entry (user data) it holds. */ +#define ENTRY_FROM_ITEM(item) \ + ((char *)(item) + MAXALIGN(sizeof(dshash_table_item))) + +/* Given a pointer to an entry, find the item that holds it. */ +#define ITEM_FROM_ENTRY(entry) \ + ((dshash_table_item *)((char *)(entry) - \ + MAXALIGN(sizeof(dshash_table_item)))) + +/* How many resize operations (bucket splits) have there been? */ +#define NUM_SPLITS(size_log2) \ + (size_log2 - DSHASH_NUM_PARTITIONS_LOG2) + +/* How many buckets are there in each partition at a given size? */ +#define BUCKETS_PER_PARTITION(size_log2) \ + (UINT64CONST(1) << NUM_SPLITS(size_log2)) + +/* Max entries before we need to grow. Half + quarter = 75% load factor. */ +#define MAX_COUNT_PER_PARTITION(hash_table) \ + (BUCKETS_PER_PARTITION(hash_table->size_log2) / 2 + \ + BUCKETS_PER_PARTITION(hash_table->size_log2) / 4) + +/* Choose partition based on the highest order bits of the hash. */ +#define PARTITION_FOR_HASH(hash) \ + (hash >> ((sizeof(dshash_hash) * CHAR_BIT) - DSHASH_NUM_PARTITIONS_LOG2)) + +/* + * Find the bucket index for a given hash and table size. Each time the table + * doubles in size, the appropriate bucket for a given hash value doubles and + * possibly adds one, depending on the newly revealed bit, so that all buckets + * are split. + */ +#define BUCKET_INDEX_FOR_HASH_AND_SIZE(hash, size_log2) \ + (hash >> ((sizeof(dshash_hash) * CHAR_BIT) - (size_log2))) + +/* The index of the first bucket in a given partition. 
*/
+#define BUCKET_INDEX_FOR_PARTITION(partition, size_log2) \
+	((partition) << NUM_SPLITS(size_log2))
+
+/* The head of the active bucket for a given hash value (lvalue). */
+#define BUCKET_FOR_HASH(hash_table, hash) \
+	(hash_table->buckets[ \
+		BUCKET_INDEX_FOR_HASH_AND_SIZE(hash, \
+			hash_table->size_log2)])
+
+static void delete_item(dshash_table *hash_table,
+			dshash_table_item *item);
+static void resize(dshash_table *hash_table, size_t new_size_log2);
+static inline void ensure_valid_bucket_pointers(dshash_table *hash_table);
+static inline dshash_table_item *find_in_bucket(dshash_table *hash_table,
+			const void *key,
+			dsa_pointer item_pointer);
+static void insert_item_into_bucket(dshash_table *hash_table,
+			dsa_pointer item_pointer,
+			dshash_table_item *item,
+			dsa_pointer *bucket);
+static dshash_table_item *insert_into_bucket(dshash_table *hash_table,
+			const void *key,
+			dsa_pointer *bucket);
+static bool delete_key_from_bucket(dshash_table *hash_table,
+			const void *key,
+			dsa_pointer *bucket_head);
+static bool delete_item_from_bucket(dshash_table *hash_table,
+			dshash_table_item *item,
+			dsa_pointer *bucket_head);
+static inline dshash_hash hash_key(dshash_table *hash_table, const void *key);
+static inline bool equal_keys(dshash_table *hash_table,
+			const void *a, const void *b);
+
+#define PARTITION_LOCK(hash_table, i) \
+	(&(hash_table)->control->partitions[(i)].lock)
+
+/*
+ * Create a new hash table backed by the given dynamic shared area, with the
+ * given parameters. The returned object is allocated in backend-local memory
+ * using the current MemoryContext. If 'arg' is non-null, the arg variants of
+ * hash and compare functions must be provided in 'params' and 'arg' will be
+ * passed down to them.
+ */
+dshash_table *
+dshash_create(dsa_area *area, const dshash_parameters *params, void *arg)
+{
+	dshash_table *hash_table;
+	dsa_pointer control;
+
+	/* Sanity checks on the set of supplied functions. */
+	Assert((params->compare_function != NULL) ^
+		   (params->compare_arg_function != NULL));
+	Assert((params->hash_function != NULL) ^
+		   (params->hash_arg_function != NULL));
+	Assert(arg == NULL || (params->compare_arg_function != NULL));
+	Assert(arg == NULL || (params->hash_arg_function != NULL));
+
+	/* Allocate the backend-local object representing the hash table. */
+	hash_table = palloc(sizeof(dshash_table));
+
+	/* Allocate the control object in shared memory. */
+	control = dsa_allocate(area, sizeof(dshash_table_control));
+
+	/* Set up the local and shared hash table structs. */
+	hash_table->area = area;
+	hash_table->params = *params;
+	hash_table->arg = arg;
+	hash_table->control = dsa_get_address(area, control);
+	hash_table->control->handle = control;
+	hash_table->control->magic = DSHASH_MAGIC;
+	hash_table->control->lwlock_tranche_id = params->tranche_id;
+
+	/* Set up the array of lock partitions. */
+	{
+		dshash_partition *partitions = hash_table->control->partitions;
+		int			tranche_id = hash_table->control->lwlock_tranche_id;
+		int			i;
+
+		for (i = 0; i < DSHASH_NUM_PARTITIONS; ++i)
+		{
+			LWLockInitialize(&partitions[i].lock, tranche_id);
+			partitions[i].count = 0;
+		}
+	}
+
+	hash_table->find_locked = false;
+	hash_table->find_exclusively_locked = false;
+
+	/*
+	 * Set up the initial array of buckets. Our initial size is the same as
+	 * the number of partitions.
+	 */
+	hash_table->control->size_log2 = DSHASH_NUM_PARTITIONS_LOG2;
+	hash_table->control->buckets =
+		dsa_allocate(area, sizeof(dsa_pointer) * DSHASH_NUM_PARTITIONS);
+	hash_table->buckets = dsa_get_address(area,
+										  hash_table->control->buckets);
+	memset(hash_table->buckets, 0, sizeof(dsa_pointer) * DSHASH_NUM_PARTITIONS);
+
+	return hash_table;
+}
+
+/*
+ * Attach to an existing hash table using a handle. The returned object is
+ * allocated in backend-local memory using the current MemoryContext. If
+ * 'arg' is non-null, the arg variants of hash and compare functions must be
+ * provided in 'params' and 'arg' will be passed down to them.
+ */
+dshash_table *
+dshash_attach(dsa_area *area, const dshash_parameters *params,
+			  dshash_table_handle handle, void *arg)
+{
+	dshash_table *hash_table;
+	dsa_pointer control;
+
+	/* Sanity checks on the set of supplied functions. */
+	Assert((params->compare_function != NULL) ^
+		   (params->compare_arg_function != NULL));
+	Assert((params->hash_function != NULL) ^
+		   (params->hash_arg_function != NULL));
+	Assert(arg == NULL || (params->compare_arg_function != NULL));
+	Assert(arg == NULL || (params->hash_arg_function != NULL));
+
+	/* Allocate the backend-local object representing the hash table. */
+	hash_table = palloc(sizeof(dshash_table));
+
+	/* Find the control object in shared memory. */
+	control = handle;
+
+	/* Set up the local hash table struct. */
+	hash_table->area = area;
+	hash_table->params = *params;
+	hash_table->arg = arg;
+	hash_table->control = dsa_get_address(area, control);
+	hash_table->find_locked = false;
+	hash_table->find_exclusively_locked = false;
+	Assert(hash_table->control->magic == DSHASH_MAGIC);
+
+	return hash_table;
+}
+
+/*
+ * Detach from a hash table. This frees backend-local resources associated
+ * with the hash table, but the hash table will continue to exist until it is
+ * either explicitly destroyed (by a backend that is still attached to it), or
+ * the area that backs it is returned to the operating system.
+ */
+void
+dshash_detach(dshash_table *hash_table)
+{
+	Assert(!hash_table->find_locked);
+
+	/* The hash table may have been destroyed. Just free local memory. */
+	pfree(hash_table);
+}
+
+/*
+ * Destroy a hash table, returning all memory to the area. The caller must be
+ * certain that no other backend will attempt to access the hash table before
+ * calling this function. Other backends must explicitly call dshash_detach
+ * to free up backend-local memory associated with the hash table. The
+ * backend that calls dshash_destroy must not call dshash_detach.
+ */
+void
+dshash_destroy(dshash_table *hash_table)
+{
+	size_t		size;
+	size_t		i;
+
+	Assert(hash_table->control->magic == DSHASH_MAGIC);
+	ensure_valid_bucket_pointers(hash_table);
+
+	/* Free all the entries. */
+	size = 1 << hash_table->size_log2;
+	for (i = 0; i < size; ++i)
+	{
+		dsa_pointer item_pointer = hash_table->buckets[i];
+
+		while (DsaPointerIsValid(item_pointer))
+		{
+			dshash_table_item *item;
+			dsa_pointer next_item_pointer;
+
+			item = dsa_get_address(hash_table->area, item_pointer);
+			next_item_pointer = item->next;
+			dsa_free(hash_table->area, item_pointer);
+			item_pointer = next_item_pointer;
+		}
+	}
+
+	/*
+	 * Vandalize the control block to help catch programming errors where
+	 * other backends access the memory formerly occupied by this hash table.
+	 */
+	hash_table->control->magic = 0;
+
+	/* Free the active table and control object. 
*/
+	dsa_free(hash_table->area, hash_table->control->buckets);
+	dsa_free(hash_table->area, hash_table->control->handle);
+
+	pfree(hash_table);
+}
+
+/*
+ * Get a handle that can be used by other processes to attach to this hash
+ * table.
+ */
+dshash_table_handle
+dshash_get_hash_table_handle(dshash_table *hash_table)
+{
+	Assert(hash_table->control->magic == DSHASH_MAGIC);
+
+	return hash_table->control->handle;
+}
+
+/*
+ * Look up an entry, given a key. Returns a pointer to an entry if one can be
+ * found with the given key. Returns NULL if the key is not found. If a
+ * non-NULL value is returned, the entry is locked and must be released by
+ * calling dshash_release_lock. If an error is raised before
+ * dshash_release_lock is called, the lock will be released automatically, but
+ * the caller must take care to ensure that the entry is not left corrupted.
+ * The lock mode is either shared or exclusive depending on 'exclusive'.
+ *
+ * The caller must not already hold a lock on the hash table.
+ *
+ * Note that the lock held is in fact an LWLock, so interrupts will be held on
+ * return from this function, and not resumed until dshash_release_lock is
+ * called. It is a very good idea for the caller to release the lock quickly.
+ */
+void *
+dshash_find(dshash_table *hash_table, const void *key, bool exclusive)
+{
+	dshash_hash hash;
+	size_t		partition;
+	dshash_table_item *item;
+
+	hash = hash_key(hash_table, key);
+	partition = PARTITION_FOR_HASH(hash);
+
+	Assert(hash_table->control->magic == DSHASH_MAGIC);
+	Assert(!hash_table->find_locked);
+
+	LWLockAcquire(PARTITION_LOCK(hash_table, partition),
+				  exclusive ? LW_EXCLUSIVE : LW_SHARED);
+	ensure_valid_bucket_pointers(hash_table);
+
+	/* Search the active bucket. */
+	item = find_in_bucket(hash_table, key, BUCKET_FOR_HASH(hash_table, hash));
+
+	if (!item)
+	{
+		/* Not found. */
+		LWLockRelease(PARTITION_LOCK(hash_table, partition));
+		return NULL;
+	}
+	else
+	{
+		/* The caller will free the lock by calling dshash_release_lock. */
+		hash_table->find_locked = true;
+		hash_table->find_exclusively_locked = exclusive;
+		return ENTRY_FROM_ITEM(item);
+	}
+}
+
+/*
+ * Returns a pointer to an exclusively locked item which must be released with
+ * dshash_release_lock. If the key is found in the hash table, 'found' is set
+ * to true and a pointer to the existing entry is returned. If the key is not
+ * found, 'found' is set to false, and a pointer to a newly created entry is
+ * returned.
+ *
+ * Notes above dshash_find() regarding locking and error handling equally
+ * apply here.
+ */
+void *
+dshash_find_or_insert(dshash_table *hash_table,
+					  const void *key,
+					  bool *found)
+{
+	dshash_hash hash;
+	size_t		partition_index;
+	dshash_partition *partition;
+	dshash_table_item *item;
+
+	hash = hash_key(hash_table, key);
+	partition_index = PARTITION_FOR_HASH(hash);
+	partition = &hash_table->control->partitions[partition_index];
+
+	Assert(hash_table->control->magic == DSHASH_MAGIC);
+	Assert(!hash_table->find_locked);
+
+restart:
+	LWLockAcquire(PARTITION_LOCK(hash_table, partition_index),
+				  LW_EXCLUSIVE);
+	ensure_valid_bucket_pointers(hash_table);
+
+	/* Search the active bucket. */
+	item = find_in_bucket(hash_table, key, BUCKET_FOR_HASH(hash_table, hash));
+
+	if (item)
+		*found = true;
+	else
+	{
+		*found = false;
+
+		/* Check if we are getting too full. */
+		if (partition->count > MAX_COUNT_PER_PARTITION(hash_table))
+		{
+			/*
+			 * The load factor (= keys / buckets) for all buckets protected by
+			 * this partition is > 0.75. 
Presumably the same applies + * generally across the whole hash table (though we don't attempt + * to track that directly to avoid contention on some kind of + * central counter; we just assume that this partition is + * representative). This is a good time to resize. + * + * Give up our existing lock first, because resizing needs to + * reacquire all the locks in the right order to avoid deadlocks. + */ + LWLockRelease(PARTITION_LOCK(hash_table, partition_index)); + resize(hash_table, hash_table->size_log2 + 1); + + goto restart; + } + + /* Finally we can try to insert the new item. */ + item = insert_into_bucket(hash_table, key, + &BUCKET_FOR_HASH(hash_table, hash)); + item->hash = hash; + /* Adjust per-lock-partition counter for load factor knowledge. */ + ++partition->count; + } + + /* The caller must release the lock with dshash_release_lock. */ + hash_table->find_locked = true; + hash_table->find_exclusively_locked = true; + return ENTRY_FROM_ITEM(item); +} + +/* + * Remove an entry by key. Returns true if the key was found and the + * corresponding entry was removed. + * + * To delete an entry that you already have a pointer to, see + * dshash_delete_entry. + */ +bool +dshash_delete_key(dshash_table *hash_table, const void *key) +{ + dshash_hash hash; + size_t partition; + bool found; + + Assert(hash_table->control->magic == DSHASH_MAGIC); + Assert(!hash_table->find_locked); + + hash = hash_key(hash_table, key); + partition = PARTITION_FOR_HASH(hash); + + LWLockAcquire(PARTITION_LOCK(hash_table, partition), LW_EXCLUSIVE); + ensure_valid_bucket_pointers(hash_table); + + if (delete_key_from_bucket(hash_table, key, + &BUCKET_FOR_HASH(hash_table, hash))) + { + Assert(hash_table->control->partitions[partition].count > 0); + found = true; + --hash_table->control->partitions[partition].count; + } + else + found = false; + + LWLockRelease(PARTITION_LOCK(hash_table, partition)); + + return found; +} + +/* + * Remove an entry. The entry must already be exclusively locked, and must + * have been obtained by dshash_find or dshash_find_or_insert. Note that this + * function releases the lock just like dshash_release_lock. + * + * To delete an entry by key, see dshash_delete_key. + */ +void +dshash_delete_entry(dshash_table *hash_table, void *entry) +{ + dshash_table_item *item = ITEM_FROM_ENTRY(entry); + size_t partition = PARTITION_FOR_HASH(item->hash); + + Assert(hash_table->control->magic == DSHASH_MAGIC); + Assert(hash_table->find_locked); + Assert(hash_table->find_exclusively_locked); + Assert(LWLockHeldByMeInMode(PARTITION_LOCK(hash_table, partition), + LW_EXCLUSIVE)); + + delete_item(hash_table, item); + hash_table->find_locked = false; + hash_table->find_exclusively_locked = false; + LWLockRelease(PARTITION_LOCK(hash_table, partition)); +} + +/* + * Unlock an entry which was locked by dshash_find or dshash_find_or_insert. + */ +void +dshash_release_lock(dshash_table *hash_table, void *entry) +{ + dshash_table_item *item = ITEM_FROM_ENTRY(entry); + size_t partition_index = PARTITION_FOR_HASH(item->hash); + + Assert(hash_table->control->magic == DSHASH_MAGIC); + Assert(hash_table->find_locked); + Assert(LWLockHeldByMeInMode(PARTITION_LOCK(hash_table, partition_index), + hash_table->find_exclusively_locked + ? LW_EXCLUSIVE : LW_SHARED)); + + hash_table->find_locked = false; + hash_table->find_exclusively_locked = false; + LWLockRelease(PARTITION_LOCK(hash_table, partition_index)); +} + +/* + * Print debugging information about the internal state of the hash table to + * stderr. 
The caller must hold no partition locks.
+ */
+void
+dshash_dump(dshash_table *hash_table)
+{
+	size_t		i;
+	size_t		j;
+
+	Assert(hash_table->control->magic == DSHASH_MAGIC);
+	Assert(!hash_table->find_locked);
+
+	for (i = 0; i < DSHASH_NUM_PARTITIONS; ++i)
+	{
+		Assert(!LWLockHeldByMe(PARTITION_LOCK(hash_table, i)));
+		LWLockAcquire(PARTITION_LOCK(hash_table, i), LW_SHARED);
+	}
+
+	ensure_valid_bucket_pointers(hash_table);
+
+	fprintf(stderr,
+			"hash table size = %zu\n", (size_t) 1 << hash_table->size_log2);
+	for (i = 0; i < DSHASH_NUM_PARTITIONS; ++i)
+	{
+		dshash_partition *partition = &hash_table->control->partitions[i];
+		size_t		begin = BUCKET_INDEX_FOR_PARTITION(i, hash_table->size_log2);
+		size_t		end = BUCKET_INDEX_FOR_PARTITION(i + 1, hash_table->size_log2);
+
+		fprintf(stderr, "  partition %zu\n", i);
+		fprintf(stderr,
+				"    active buckets (key count = %zu)\n", partition->count);
+
+		for (j = begin; j < end; ++j)
+		{
+			size_t		count = 0;
+			dsa_pointer bucket = hash_table->buckets[j];
+
+			while (DsaPointerIsValid(bucket))
+			{
+				dshash_table_item *item;
+
+				item = dsa_get_address(hash_table->area, bucket);
+
+				bucket = item->next;
+				++count;
+			}
+			fprintf(stderr, "      bucket %zu (key count = %zu)\n", j, count);
+		}
+	}
+
+	for (i = 0; i < DSHASH_NUM_PARTITIONS; ++i)
+		LWLockRelease(PARTITION_LOCK(hash_table, i));
+}
+
+/*
+ * Delete a locked item to which we have a pointer.
+ */
+static void
+delete_item(dshash_table *hash_table, dshash_table_item *item)
+{
+	size_t		hash = item->hash;
+	size_t		partition = PARTITION_FOR_HASH(hash);
+
+	Assert(LWLockHeldByMe(PARTITION_LOCK(hash_table, partition)));
+
+	if (delete_item_from_bucket(hash_table, item,
+								&BUCKET_FOR_HASH(hash_table, hash)))
+	{
+		Assert(hash_table->control->partitions[partition].count > 0);
+		--hash_table->control->partitions[partition].count;
+	}
+	else
+	{
+		Assert(false);
+	}
+}
+
+/*
+ * Grow the hash table if necessary to the requested number of buckets. The
+ * requested size must be double some previously observed size. If another
+ * backend has already expanded the table to at least the requested size,
+ * this function returns without doing anything.
+ *
+ * Must be called without any partition lock held.
+ */
+static void
+resize(dshash_table *hash_table, size_t new_size_log2)
+{
+	dsa_pointer old_buckets;
+	dsa_pointer new_buckets_shared;
+	dsa_pointer *new_buckets;
+	size_t		size;
+	size_t		new_size = 1 << new_size_log2;
+	size_t		i;
+
+	/*
+	 * Acquire the locks for all lock partitions. This is expensive, but we
+	 * shouldn't have to do it many times.
+	 */
+	for (i = 0; i < DSHASH_NUM_PARTITIONS; ++i)
+	{
+		Assert(!LWLockHeldByMe(PARTITION_LOCK(hash_table, i)));
+
+		LWLockAcquire(PARTITION_LOCK(hash_table, i), LW_EXCLUSIVE);
+		if (i == 0 && hash_table->control->size_log2 >= new_size_log2)
+		{
+			/*
+			 * Another backend has already increased the size; we can avoid
+			 * obtaining all the locks and return early.
+			 */
+			LWLockRelease(PARTITION_LOCK(hash_table, 0));
+			return;
+		}
+	}
+
+	Assert(new_size_log2 == hash_table->control->size_log2 + 1);
+
+	/* Allocate the space for the new table. */
+	new_buckets_shared = dsa_allocate0(hash_table->area,
+									   sizeof(dsa_pointer) * new_size);
+	new_buckets = dsa_get_address(hash_table->area, new_buckets_shared);
+
+	/*
+	 * We've allocated the new bucket array; all that remains to do now is to
+	 * reinsert all items, which amounts to adjusting all the pointers.
+ */ + size = 1 << hash_table->control->size_log2; + for (i = 0; i < size; ++i) + { + dsa_pointer item_pointer = hash_table->buckets[i]; + + while (DsaPointerIsValid(item_pointer)) + { + dshash_table_item *item; + dsa_pointer next_item_pointer; + + item = dsa_get_address(hash_table->area, item_pointer); + next_item_pointer = item->next; + insert_item_into_bucket(hash_table, item_pointer, item, + &new_buckets[BUCKET_INDEX_FOR_HASH_AND_SIZE(item->hash, + new_size_log2)]); + item_pointer = next_item_pointer; + } + } + + /* Swap the hash table into place and free the old one. */ + old_buckets = hash_table->control->buckets; + hash_table->control->buckets = new_buckets_shared; + hash_table->control->size_log2 = new_size_log2; + hash_table->buckets = new_buckets; + dsa_free(hash_table->area, old_buckets); + + /* Release all the locks. */ + for (i = 0; i < DSHASH_NUM_PARTITIONS; ++i) + LWLockRelease(PARTITION_LOCK(hash_table, i)); +} + +/* + * Make sure that our backend-local bucket pointers are up to date. The + * caller must have locked one lock partition, which prevents resize() from + * running concurrently. + */ +static inline void +ensure_valid_bucket_pointers(dshash_table *hash_table) +{ + if (hash_table->size_log2 != hash_table->control->size_log2) + { + hash_table->buckets = dsa_get_address(hash_table->area, + hash_table->control->buckets); + hash_table->size_log2 = hash_table->control->size_log2; + } +} + +/* + * Scan a locked bucket for a match, using the provided compare function. + */ +static inline dshash_table_item * +find_in_bucket(dshash_table *hash_table, const void *key, + dsa_pointer item_pointer) +{ + while (DsaPointerIsValid(item_pointer)) + { + dshash_table_item *item; + + item = dsa_get_address(hash_table->area, item_pointer); + if (equal_keys(hash_table, key, ENTRY_FROM_ITEM(item))) + return item; + item_pointer = item->next; + } + return NULL; +} + +/* + * Insert an already-allocated item into a bucket. + */ +static void +insert_item_into_bucket(dshash_table *hash_table, + dsa_pointer item_pointer, + dshash_table_item *item, + dsa_pointer *bucket) +{ + Assert(item == dsa_get_address(hash_table->area, item_pointer)); + + item->next = *bucket; + *bucket = item_pointer; +} + +/* + * Allocate space for an entry with the given key and insert it into the + * provided bucket. + */ +static dshash_table_item * +insert_into_bucket(dshash_table *hash_table, + const void *key, + dsa_pointer *bucket) +{ + dsa_pointer item_pointer; + dshash_table_item *item; + + item_pointer = dsa_allocate(hash_table->area, + hash_table->params.entry_size + + MAXALIGN(sizeof(dshash_table_item))); + item = dsa_get_address(hash_table->area, item_pointer); + memcpy(ENTRY_FROM_ITEM(item), key, hash_table->params.key_size); + insert_item_into_bucket(hash_table, item_pointer, item, bucket); + return item; +} + +/* + * Search a bucket for a matching key and delete it. + */ +static bool +delete_key_from_bucket(dshash_table *hash_table, + const void *key, + dsa_pointer *bucket_head) +{ + while (DsaPointerIsValid(*bucket_head)) + { + dshash_table_item *item; + + item = dsa_get_address(hash_table->area, *bucket_head); + + if (equal_keys(hash_table, key, ENTRY_FROM_ITEM(item))) + { + dsa_pointer next; + + next = item->next; + dsa_free(hash_table->area, *bucket_head); + *bucket_head = next; + + return true; + } + bucket_head = &item->next; + } + return false; +} + +/* + * Delete the specified item from the bucket. 
+ */ +static bool +delete_item_from_bucket(dshash_table *hash_table, + dshash_table_item *item, + dsa_pointer *bucket_head) +{ + while (DsaPointerIsValid(*bucket_head)) + { + dshash_table_item *bucket_item; + + bucket_item = dsa_get_address(hash_table->area, *bucket_head); + + if (bucket_item == item) + { + dsa_pointer next; + + next = item->next; + dsa_free(hash_table->area, *bucket_head); + *bucket_head = next; + return true; + } + bucket_head = &bucket_item->next; + } + return false; +} + +/* + * Compute the hash value for a key. + */ +static inline dshash_hash +hash_key(dshash_table *hash_table, const void *key) +{ + if (hash_table->params.hash_arg_function != NULL) + return hash_table->params.hash_arg_function(key, hash_table->arg); + else + return hash_table->params.hash_function(key, + hash_table->params.key_size); +} + +/* + * Check whether two keys compare equal. + */ +static inline bool +equal_keys(dshash_table *hash_table, const void *a, const void *b) +{ + int r; + + if (hash_table->params.compare_arg_function != NULL) + r = hash_table->params.compare_arg_function(a, b, hash_table->arg); + else + r = hash_table->params.compare_function(a, b, + hash_table->params.key_size); + + return r == 0; +} diff --git a/src/include/lib/dshash.h b/src/include/lib/dshash.h new file mode 100644 index 0000000000..187f58b4e4 --- /dev/null +++ b/src/include/lib/dshash.h @@ -0,0 +1,107 @@ +/*------------------------------------------------------------------------- + * + * dshash.h + * Concurrent hash tables backed by dynamic shared memory areas. + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/include/lib/dshash.h + * + *------------------------------------------------------------------------- + */ +#ifndef DSHASH_H +#define DSHASH_H + +#include "utils/dsa.h" + +/* The opaque type representing a hash table. */ +struct dshash_table; +typedef struct dshash_table dshash_table; + +/* A handle for a dshash_table which can be shared with other processes. */ +typedef dsa_pointer dshash_table_handle; + +/* The type for hash values. */ +typedef uint32 dshash_hash; + +/* + * A function used for comparing keys. This version is compatible with + * HashCompareFunction in hsearch.h and memcmp. + */ +typedef int (*dshash_compare_function) (const void *a, const void *b, size_t size); + +/* + * A function type used for comparing keys. This version allows compare + * functions to receive a pointer to arbitrary user data that was given to the + * create or attach function. Similar to qsort_arg_comparator. + */ +typedef int (*dshash_compare_arg_function) (const void *a, const void *b, void *arg); + +/* + * A function type for computing hash values for keys. This version is + * compatible with HashValueFunc in hsearch.h and hash functions like + * tag_hash. + */ +typedef dshash_hash (*dshash_hash_function) (const void *v, size_t size); + +/* + * A function type for computing hash values for keys. This version allows + * hash functions to receive a pointer to arbitrary user data that was given + * to the create or attach function. + */ +typedef dshash_hash (*dshash_hash_arg_function) (const void *v, void *arg); + +/* + * The set of parameters needed to create or attach to a hash table. The + * members tranche_id and tranche_name do not need to be initialized when + * attaching to an existing hash table. 
+ *
+ * Compare and hash functions must be supplied even when attaching, because we
+ * can't safely share function pointers between backends in general. Either
+ * the arg variants or the non-arg variants should be supplied; the other
+ * function pointers should be NULL. If the arg variants are supplied then the
+ * user data pointer supplied to the create and attach functions will be
+ * passed to the hash and compare functions.
+ */
+typedef struct dshash_parameters
+{
+	size_t		key_size;		/* Size of the key (initial bytes of entry) */
+	size_t		entry_size;		/* Total size of entry */
+	dshash_compare_function compare_function;	/* Compare function */
+	dshash_compare_arg_function compare_arg_function;	/* Arg version */
+	dshash_hash_function hash_function;	/* Hash function */
+	dshash_hash_arg_function hash_arg_function;	/* Arg version */
+	int			tranche_id;		/* The tranche ID to use for locks */
+} dshash_parameters;
+
+/* Forward declaration of private types for use only by dshash.c. */
+struct dshash_table_item;
+typedef struct dshash_table_item dshash_table_item;
+
+/* Creating, sharing and destroying hash tables. */
+extern dshash_table *dshash_create(dsa_area *area,
+			  const dshash_parameters *params,
+			  void *arg);
+extern dshash_table *dshash_attach(dsa_area *area,
+			  const dshash_parameters *params,
+			  dshash_table_handle handle,
+			  void *arg);
+extern void dshash_detach(dshash_table *hash_table);
+extern dshash_table_handle dshash_get_hash_table_handle(dshash_table *hash_table);
+extern void dshash_destroy(dshash_table *hash_table);
+
+/* Finding, creating, deleting entries. */
+extern void *dshash_find(dshash_table *hash_table,
+			const void *key, bool exclusive);
+extern void *dshash_find_or_insert(dshash_table *hash_table,
+			const void *key, bool *found);
+extern bool dshash_delete_key(dshash_table *hash_table, const void *key);
+extern void dshash_delete_entry(dshash_table *hash_table, void *entry);
+extern void dshash_release_lock(dshash_table *hash_table, void *entry);
+
+/* Debugging support. */
+extern void dshash_dump(dshash_table *hash_table);
+
+#endif							/* DSHASH_H */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a4ace383fa..17ba2bde5c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2604,6 +2604,13 @@ dsa_pointer
 dsa_segment_header
 dsa_segment_index
 dsa_segment_map
+dshash_hash
+dshash_parameters
+dshash_partition
+dshash_table
+dshash_table_control
+dshash_table_handle
+dshash_table_item
 dsm_control_header
 dsm_control_item
 dsm_handle

From 580ddcec3943216b281c56e3c7fc933fdcf850f4 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Wed, 23 Aug 2017 09:56:38 -0400
Subject: [PATCH 0026/1087] Fix translation marker

This was erroneously removed in
55a70a023c3daefca9bbd68bfbe6862af10ab479.
--- src/backend/commands/vacuumlazy.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c index e9b4045fe5..45b1859475 100644 --- a/src/backend/commands/vacuumlazy.c +++ b/src/backend/commands/vacuumlazy.c @@ -1351,7 +1351,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, "%u pages are entirely empty.\n", empty_pages), empty_pages); - appendStringInfo(&buf, "%s.", pg_rusage_show(&ru0)); + appendStringInfo(&buf, _("%s."), pg_rusage_show(&ru0)); ereport(elevel, (errmsg("\"%s\": found %.0f removable, %.0f nonremovable row versions in %u out of %u pages", From 85f4d6393da2ed2ad3ec4912a60a918348784c2b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 23 Aug 2017 12:01:43 -0400 Subject: [PATCH 0027/1087] Tweak some SCRAM error messages and code comments Clarify/correct some error messages, fix up some code comments that confused SASL and SCRAM, and other minor fixes. No changes in functionality. --- doc/src/sgml/protocol.sgml | 12 ++++++------ src/backend/libpq/auth-scram.c | 22 +++++++++++----------- src/interfaces/libpq/fe-auth-scram.c | 10 +++++----- 3 files changed, 22 insertions(+), 22 deletions(-) diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index c8b083c29c..7c012f59a3 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -1405,13 +1405,13 @@ ErrorMessage. -When SCRAM-SHA-256 is used in PostgreSQL, the server will ignore the username -that the client sends in the client-first-message. The username +When SCRAM-SHA-256 is used in PostgreSQL, the server will ignore the user name +that the client sends in the client-first-message. The user name that was already sent in the startup message is used instead. PostgreSQL supports multiple character encodings, while SCRAM -dictates UTF-8 to be used for the username, so it might be impossible to -represent the PostgreSQL username in UTF-8. To avoid confusion, the client -should use pg_same_as_startup_message as the username in the +dictates UTF-8 to be used for the user name, so it might be impossible to +represent the PostgreSQL user name in UTF-8. To avoid confusion, the client +should use pg_same_as_startup_message as the user name in the client-first-message. @@ -5274,7 +5274,7 @@ RowDescription (B) -SASLInitialresponse (F) +SASLInitialResponse (F) diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c index 0b69f106f1..9161c885e1 100644 --- a/src/backend/libpq/auth-scram.c +++ b/src/backend/libpq/auth-scram.c @@ -573,7 +573,7 @@ mock_scram_verifier(const char *username, int *iterations, char **salt, } /* - * Read the value in a given SASL exchange message for given attribute. + * Read the value in a given SCRAM exchange message for given attribute. 
*/
 static char *
 read_attr_value(char **input, char attr)
@@ -585,7 +585,7 @@ read_attr_value(char **input, char attr)
 		ereport(ERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("malformed SCRAM message"),
-				 errdetail("Expected attribute '%c' but found %s.",
+				 errdetail("Expected attribute \"%c\" but found \"%s\".",
 						   attr, sanitize_char(*begin))));
 	begin++;
@@ -593,7 +593,7 @@ read_attr_value(char **input, char attr)
 		ereport(ERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("malformed SCRAM message"),
-				 errdetail("Expected character = for attribute %c.", attr)));
+				 errdetail("Expected character \"=\" for attribute \"%c\".", attr)));
 	begin++;
 	end = begin;
@@ -652,7 +652,7 @@ sanitize_char(char c)
 }
 
 /*
- * Read the next attribute and value in a SASL exchange message.
+ * Read the next attribute and value in a SCRAM exchange message.
  *
  * Returns NULL if there is no attribute.
  */
@@ -674,7 +674,7 @@ read_any_attr(char **input, char *attr_p)
 		ereport(ERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("malformed SCRAM message"),
-				 errdetail("Attribute expected, but found invalid character %s.",
+				 errdetail("Attribute expected, but found invalid character \"%s\".",
 						   sanitize_char(attr))));
 	if (attr_p)
 		*attr_p = attr;
@@ -684,7 +684,7 @@ read_any_attr(char **input, char *attr_p)
 		ereport(ERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("malformed SCRAM message"),
-				 errdetail("Expected character = for attribute %c.", attr)));
+				 errdetail("Expected character \"=\" for attribute \"%c\".", attr)));
 	begin++;
 	end = begin;
@@ -703,7 +703,7 @@ read_any_attr(char **input, char *attr_p)
 }
 
 /*
- * Read and parse the first message from client in the context of a SASL
+ * Read and parse the first message from client in the context of a SCRAM
  * authentication exchange message.
  *
  * At this stage, any errors will be reported directly with ereport(ERROR).
@@ -802,14 +802,14 @@ read_client_first_message(scram_state *state, char *input)
 			ereport(ERROR,
 					(errcode(ERRCODE_PROTOCOL_VIOLATION),
 					 errmsg("malformed SCRAM message"),
-					 errdetail("Unexpected channel-binding flag %s.",
+					 errdetail("Unexpected channel-binding flag \"%s\".",
 							   sanitize_char(*input))));
 	}
 	if (*input != ',')
 		ereport(ERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("malformed SCRAM message"),
-				 errdetail("Comma expected, but found character %s.",
+				 errdetail("Comma expected, but found character \"%s\".",
 						   sanitize_char(*input))));
 	input++;
@@ -824,7 +824,7 @@ read_client_first_message(scram_state *state, char *input)
 		ereport(ERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("malformed SCRAM message"),
-				 errdetail("Unexpected attribute %s in client-first-message.",
+				 errdetail("Unexpected attribute \"%s\" in client-first-message.",
 						   sanitize_char(*input))));
 	input++;
@@ -929,7 +929,7 @@ verify_client_proof(scram_state *state)
 }
 
 /*
- * Build the first server-side message sent to the client in a SASL
+ * Build the first server-side message sent to the client in a SCRAM
  * communication exchange.
*/ static char * diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index d1c7037101..edfd42df85 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -228,7 +228,7 @@ pg_fe_scram_exchange(void *opaq, char *input, int inputlen, { *success = false; printfPQExpBuffer(errorMessage, - libpq_gettext("invalid server signature\n")); + libpq_gettext("incorrect server signature\n")); } *done = true; state->state = FE_SCRAM_FINISHED; @@ -249,7 +249,7 @@ pg_fe_scram_exchange(void *opaq, char *input, int inputlen, } /* - * Read value for an attribute part of a SASL message. + * Read value for an attribute part of a SCRAM message. */ static char * read_attr_value(char **input, char attr, PQExpBuffer errorMessage) @@ -260,7 +260,7 @@ read_attr_value(char **input, char attr, PQExpBuffer errorMessage) if (*begin != attr) { printfPQExpBuffer(errorMessage, - libpq_gettext("malformed SCRAM message (%c expected)\n"), + libpq_gettext("malformed SCRAM message (attribute \"%c\" expected)\n"), attr); return NULL; } @@ -269,7 +269,7 @@ read_attr_value(char **input, char attr, PQExpBuffer errorMessage) if (*begin != '=') { printfPQExpBuffer(errorMessage, - libpq_gettext("malformed SCRAM message (expected = in attr '%c')\n"), + libpq_gettext("malformed SCRAM message (expected character \"=\" for attribute \"%c\")\n"), attr); return NULL; } @@ -508,7 +508,7 @@ read_server_final_message(fe_scram_state *state, char *input, char *errmsg = read_attr_value(&input, 'e', errormessage); printfPQExpBuffer(errormessage, - libpq_gettext("error received from server in SASL exchange: %s\n"), + libpq_gettext("error received from server in SCRAM exchange: %s\n"), errmsg); return false; } From 237a0b87b1dc90f8789aa5441a2a11e67f46c96e Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 23 Aug 2017 13:56:59 -0400 Subject: [PATCH 0028/1087] Improve plural handling in error message This does not use the normal plural handling, because no numbers appear in the actual message. --- src/backend/parser/parse_oper.c | 5 ++++- src/test/regress/expected/alter_table.out | 2 +- src/test/regress/expected/create_view.out | 2 +- src/test/regress/expected/geometry.out | 2 +- src/test/regress/expected/horology.out | 2 +- src/test/regress/expected/text.out | 2 +- src/test/regress/expected/timetz.out | 2 +- src/test/regress/expected/with.out | 2 +- 8 files changed, 11 insertions(+), 8 deletions(-) diff --git a/src/backend/parser/parse_oper.c b/src/backend/parser/parse_oper.c index e9bf50243f..d7971cc3d9 100644 --- a/src/backend/parser/parse_oper.c +++ b/src/backend/parser/parse_oper.c @@ -723,7 +723,10 @@ op_error(ParseState *pstate, List *op, char oprkind, (errcode(ERRCODE_UNDEFINED_FUNCTION), errmsg("operator does not exist: %s", op_signature_string(op, oprkind, arg1, arg2)), - errhint("No operator matches the given name and argument type(s). " + (!arg1 || !arg2) ? + errhint("No operator matches the given name and argument type. " + "You might need to add an explicit type cast.") : + errhint("No operator matches the given name and argument types. 
" "You might need to add explicit type casts."), parser_errposition(pstate, location))); } diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 58192d2c6a..ed03cb9c63 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -1895,7 +1895,7 @@ alter table anothertab alter column atcol1 drop default; alter table anothertab alter column atcol1 type boolean using case when atcol1 % 2 = 0 then true else false end; -- fails ERROR: operator does not exist: boolean <= integer -HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. alter table anothertab drop constraint anothertab_chk; alter table anothertab drop constraint anothertab_chk; -- fails ERROR: constraint "anothertab_chk" of relation "anothertab" does not exist diff --git a/src/test/regress/expected/create_view.out b/src/test/regress/expected/create_view.out index f909a3cefe..4468c85d77 100644 --- a/src/test/regress/expected/create_view.out +++ b/src/test/regress/expected/create_view.out @@ -1605,7 +1605,7 @@ select 'foo'::text = any((select array['abc','def','foo']::text[])); -- fail ERROR: operator does not exist: text = text[] LINE 1: select 'foo'::text = any((select array['abc','def','foo']::t... ^ -HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. select 'foo'::text = any((select array['abc','def','foo']::text[])::text[]); ?column? ---------- diff --git a/src/test/regress/expected/geometry.out b/src/test/regress/expected/geometry.out index 1271395d4e..e4c0039040 100644 --- a/src/test/regress/expected/geometry.out +++ b/src/test/regress/expected/geometry.out @@ -107,7 +107,7 @@ SELECT '' AS count, p.f1, l.s, l.s # p.f1 AS intersection ERROR: operator does not exist: lseg # point LINE 1: SELECT '' AS count, p.f1, l.s, l.s # p.f1 AS intersection ^ -HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. -- closest point SELECT '' AS thirty, p.f1, l.s, p.f1 ## l.s AS closest FROM LSEG_TBL l, POINT_TBL p; diff --git a/src/test/regress/expected/horology.out b/src/test/regress/expected/horology.out index f9d12e0f8a..7b3d058425 100644 --- a/src/test/regress/expected/horology.out +++ b/src/test/regress/expected/horology.out @@ -321,7 +321,7 @@ SELECT date '1991-02-03' - time with time zone '04:05:06 UTC' AS "Subtract Time ERROR: operator does not exist: date - time with time zone LINE 1: SELECT date '1991-02-03' - time with time zone '04:05:06 UTC... ^ -HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. -- -- timestamp, interval arithmetic -- diff --git a/src/test/regress/expected/text.out b/src/test/regress/expected/text.out index 829f2c224c..d28961cf88 100644 --- a/src/test/regress/expected/text.out +++ b/src/test/regress/expected/text.out @@ -50,7 +50,7 @@ select 3 || 4.0; ERROR: operator does not exist: integer || numeric LINE 1: select 3 || 4.0; ^ -HINT: No operator matches the given name and argument type(s). 
You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. /* * various string functions */ diff --git a/src/test/regress/expected/timetz.out b/src/test/regress/expected/timetz.out index 43911312f9..33ff8e18c9 100644 --- a/src/test/regress/expected/timetz.out +++ b/src/test/regress/expected/timetz.out @@ -92,4 +92,4 @@ SELECT f1 + time with time zone '00:01' AS "Illegal" FROM TIMETZ_TBL; ERROR: operator does not exist: time with time zone + time with time zone LINE 1: SELECT f1 + time with time zone '00:01' AS "Illegal" FROM TI... ^ -HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. diff --git a/src/test/regress/expected/with.out b/src/test/regress/expected/with.out index fdcc4970a1..c32a490580 100644 --- a/src/test/regress/expected/with.out +++ b/src/test/regress/expected/with.out @@ -166,7 +166,7 @@ SELECT n, n IS OF (int) AS is_int FROM t; ERROR: operator does not exist: text + integer LINE 4: SELECT n+1 FROM t WHERE n < 10 ^ -HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. -- -- Some examples with a tree -- From 1e1b01cd1632a7d768fb8c86c95cf3ec82dc58da Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 23 Aug 2017 14:19:35 -0400 Subject: [PATCH 0029/1087] Fix outdated comment Author: Thomas Munro --- src/backend/storage/lmgr/predicate.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c index 74e4b35837..6a6d9d6d5c 100644 --- a/src/backend/storage/lmgr/predicate.c +++ b/src/backend/storage/lmgr/predicate.c @@ -116,10 +116,12 @@ * than its own active transaction must acquire an exclusive * lock. * - * FirstPredicateLockMgrLock based partition locks + * PredicateLockHashPartitionLock(hashcode) * - The same lock protects a target, all locks on that target, and - * the linked list of locks on the target.. - * - When more than one is needed, acquire in ascending order. + * the linked list of locks on the target. + * - When more than one is needed, acquire in ascending address order. + * - When all are needed (rare), acquire in ascending index order with + * PredicateLockHashPartitionLockByIndex(index). * * SerializableXactHashLock * - Protects both PredXact and SerializableXidHash. From 6d242ee980193f29618aa899eb61f67a953bd712 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 23 Aug 2017 14:59:25 -0400 Subject: [PATCH 0030/1087] Update code comment for temporary replication slots Reported-by: Alvaro Herrera --- src/include/replication/slot.h | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/src/include/replication/slot.h b/src/include/replication/slot.h index 0bf2611fe9..0c442330b2 100644 --- a/src/include/replication/slot.h +++ b/src/include/replication/slot.h @@ -22,9 +22,13 @@ * * Slots marked as PERSISTENT are crash-safe and will not be dropped when * released. Slots marked as EPHEMERAL will be dropped when released or after - * restarts. + * restarts. Slots marked TEMPORARY will be dropped at the end of a session + * or on error. * - * EPHEMERAL slots can be made PERSISTENT by calling ReplicationSlotPersist(). 
+ * EPHEMERAL is used as a not-quite-ready state when creating persistent + * slots. EPHEMERAL slots can be made PERSISTENT by calling + * ReplicationSlotPersist(). For a slot that goes away at the end of a + * session, TEMPORARY is the appropriate choice. */ typedef enum ReplicationSlotPersistency { From 27b89876c0fb08faa17768c68101186cda2e4bef Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 24 Aug 2017 11:13:55 -0400 Subject: [PATCH 0031/1087] Fix up secondary expected files for commit 237a0b87b1dc90f8789aa5441a2a11e67f46c96e --- src/test/regress/expected/geometry_1.out | 2 +- src/test/regress/expected/geometry_2.out | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/test/regress/expected/geometry_1.out b/src/test/regress/expected/geometry_1.out index fad246c2b9..3b92e23059 100644 --- a/src/test/regress/expected/geometry_1.out +++ b/src/test/regress/expected/geometry_1.out @@ -107,7 +107,7 @@ SELECT '' AS count, p.f1, l.s, l.s # p.f1 AS intersection ERROR: operator does not exist: lseg # point LINE 1: SELECT '' AS count, p.f1, l.s, l.s # p.f1 AS intersection ^ -HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. -- closest point SELECT '' AS thirty, p.f1, l.s, p.f1 ## l.s AS closest FROM LSEG_TBL l, POINT_TBL p; diff --git a/src/test/regress/expected/geometry_2.out b/src/test/regress/expected/geometry_2.out index c938e66418..5a922bcd3f 100644 --- a/src/test/regress/expected/geometry_2.out +++ b/src/test/regress/expected/geometry_2.out @@ -107,7 +107,7 @@ SELECT '' AS count, p.f1, l.s, l.s # p.f1 AS intersection ERROR: operator does not exist: lseg # point LINE 1: SELECT '' AS count, p.f1, l.s, l.s # p.f1 AS intersection ^ -HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. -- closest point SELECT '' AS thirty, p.f1, l.s, p.f1 ## l.s AS closest FROM LSEG_TBL l, POINT_TBL p; From 1177ab1dabf72bafee8f19d904cee3a299f25892 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 24 Aug 2017 13:39:51 -0400 Subject: [PATCH 0032/1087] Make new regression test case parallel-safe, and improve its output. The test case added by commit 1f6d515a6 fails on buildfarm members that have force_parallel_mode turned on, because we currently don't report sort performance details from worker processes back to the master. To fix that, just make the test table be temp rather than regular; that's a good idea anyway to forestall any possible interference from auto-analyze. (The restriction that workers can't access temp tables might go away someday, but almost certainly not before the other thing gets fixed.) Also, improve the test so that we retain as much as possible of the EXPLAIN ANALYZE output. This aids debugging failures, and might also expose problems that the preceding version masked. 
Discussion: http://postgr.es/m/CADE5jYLuugnEEUsyW6Q_4mZFYTxHxaVCQmGAsF0yiY8ZDggi-w@mail.gmail.com --- src/test/regress/expected/subselect.out | 77 ++++++++++++------------- src/test/regress/sql/subselect.sql | 67 ++++++++++----------- 2 files changed, 66 insertions(+), 78 deletions(-) diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out index 8419dea08e..f009c253f4 100644 --- a/src/test/regress/expected/subselect.out +++ b/src/test/regress/expected/subselect.out @@ -1042,48 +1042,45 @@ NOTICE: x = 9, y = 13 drop function tattle(x int, y int); -- --- Test that LIMIT can be pushed to SORT through a subquery that just --- projects columns +-- Test that LIMIT can be pushed to SORT through a subquery that just projects +-- columns. We check for that having happened by looking to see if EXPLAIN +-- ANALYZE shows that a top-N sort was used. We must suppress or filter away +-- all the non-invariant parts of the EXPLAIN ANALYZE output. -- -create table sq_limit (pk int primary key, c1 int, c2 int); +create temp table sq_limit (pk int primary key, c1 int, c2 int); insert into sq_limit values - (1, 1, 1), - (2, 2, 2), - (3, 3, 3), - (4, 4, 4), - (5, 1, 1), - (6, 2, 2), - (7, 3, 3), - (8, 4, 4); --- The explain contains data that may not be invariant, so --- filter for just the interesting bits. The goal here is that --- we should see three notices, in order: --- NOTICE: Limit --- NOTICE: Subquery --- NOTICE: Top-N Sort --- A missing step, or steps out of order means we have a problem. -do $$ - declare x text; - begin - for x in - explain (analyze, summary off, timing off, costs off) - select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3 - loop - if (left(ltrim(x), 5) = 'Limit') then - raise notice 'Limit'; - end if; - if (left(ltrim(x), 12) = '-> Subquery') then - raise notice 'Subquery'; - end if; - if (left(ltrim(x), 18) = 'Sort Method: top-N') then - raise notice 'Top-N Sort'; - end if; - end loop; - end; + (1, 1, 1), + (2, 2, 2), + (3, 3, 3), + (4, 4, 4), + (5, 1, 1), + (6, 2, 2), + (7, 3, 3), + (8, 4, 4); +create function explain_sq_limit() returns setof text language plpgsql as +$$ +declare ln text; +begin + for ln in + explain (analyze, summary off, timing off, costs off) + select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3 + loop + ln := regexp_replace(ln, 'Memory: \S*', 'Memory: xxx'); + return next ln; + end loop; +end; $$; -NOTICE: Limit -NOTICE: Subquery -NOTICE: Top-N Sort +select * from explain_sq_limit(); + explain_sq_limit +---------------------------------------------------------------- + Limit (actual rows=3 loops=1) + -> Subquery Scan on x (actual rows=3 loops=1) + -> Sort (actual rows=3 loops=1) + Sort Key: sq_limit.c1, sq_limit.pk + Sort Method: top-N heapsort Memory: xxx + -> Seq Scan on sq_limit (actual rows=8 loops=1) +(6 rows) + select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3; pk | c2 ----+---- @@ -1092,4 +1089,4 @@ select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3; 2 | 2 (3 rows) -drop table sq_limit; +drop function explain_sq_limit(); diff --git a/src/test/regress/sql/subselect.sql b/src/test/regress/sql/subselect.sql index 7087ee27cd..9a14832206 100644 --- a/src/test/regress/sql/subselect.sql +++ b/src/test/regress/sql/subselect.sql @@ -542,47 +542,38 @@ select * from drop function tattle(x int, y int); -- --- Test that LIMIT can be pushed to SORT through a subquery that just --- projects columns +-- Test that LIMIT can be pushed to SORT through a 
subquery that just projects +-- columns. We check for that having happened by looking to see if EXPLAIN +-- ANALYZE shows that a top-N sort was used. We must suppress or filter away +-- all the non-invariant parts of the EXPLAIN ANALYZE output. -- -create table sq_limit (pk int primary key, c1 int, c2 int); +create temp table sq_limit (pk int primary key, c1 int, c2 int); insert into sq_limit values - (1, 1, 1), - (2, 2, 2), - (3, 3, 3), - (4, 4, 4), - (5, 1, 1), - (6, 2, 2), - (7, 3, 3), - (8, 4, 4); - --- The explain contains data that may not be invariant, so --- filter for just the interesting bits. The goal here is that --- we should see three notices, in order: --- NOTICE: Limit --- NOTICE: Subquery --- NOTICE: Top-N Sort --- A missing step, or steps out of order means we have a problem. -do $$ - declare x text; - begin - for x in - explain (analyze, summary off, timing off, costs off) - select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3 - loop - if (left(ltrim(x), 5) = 'Limit') then - raise notice 'Limit'; - end if; - if (left(ltrim(x), 12) = '-> Subquery') then - raise notice 'Subquery'; - end if; - if (left(ltrim(x), 18) = 'Sort Method: top-N') then - raise notice 'Top-N Sort'; - end if; - end loop; - end; + (1, 1, 1), + (2, 2, 2), + (3, 3, 3), + (4, 4, 4), + (5, 1, 1), + (6, 2, 2), + (7, 3, 3), + (8, 4, 4); + +create function explain_sq_limit() returns setof text language plpgsql as +$$ +declare ln text; +begin + for ln in + explain (analyze, summary off, timing off, costs off) + select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3 + loop + ln := regexp_replace(ln, 'Memory: \S*', 'Memory: xxx'); + return next ln; + end loop; +end; $$; +select * from explain_sq_limit(); + select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3; -drop table sq_limit; +drop function explain_sq_limit(); From fe7774144d5c3f3a2941a2ca51e61352e4005991 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 24 Aug 2017 14:04:28 -0400 Subject: [PATCH 0033/1087] Increase SCRAM salt length The original value 12 was set based on RFC 5802 for SCRAM-SHA-1, but RFC 7677 for SCRAM-SHA-256 uses 16, so use that. (This does not affect the validity of already stored verifiers.) Discussion: https://www.postgresql.org/message-id/flat/12cc9297-7e05-932f-d863-765e5626ead4%402ndquadrant.com --- src/include/common/scram-common.h | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/src/include/common/scram-common.h b/src/include/common/scram-common.h index ebb733df4b..0c5ee04f26 100644 --- a/src/include/common/scram-common.h +++ b/src/include/common/scram-common.h @@ -28,10 +28,17 @@ */ #define SCRAM_RAW_NONCE_LEN 18 -/* length of salt when generating new verifiers */ -#define SCRAM_DEFAULT_SALT_LEN 12 +/* + * Length of salt when generating new verifiers, in bytes. (It will be stored + * and sent over the wire encoded in Base64.) 16 bytes is what the example in + * RFC 7677 uses. + */ +#define SCRAM_DEFAULT_SALT_LEN 16 -/* default number of iterations when generating verifier */ +/* + * Default number of iterations when generating verifier. Should be at least + * 4096 per RFC 7677. + */ #define SCRAM_DEFAULT_ITERATIONS 4096 /* From 6ce6a61840cc90172ad3da7bf303656132fa5fab Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 24 Aug 2017 15:29:35 -0400 Subject: [PATCH 0034/1087] pg_upgrade: Remove dead code Remove code meant for upgrading to a particular version of PostgreSQL 9.0. 
Since pg_upgrade only supports upgrading to the current major version, this code is no longer useful. --- src/bin/pg_upgrade/check.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c index 327c78e879..86225eaa4c 100644 --- a/src/bin/pg_upgrade/check.c +++ b/src/bin/pg_upgrade/check.c @@ -279,12 +279,6 @@ check_cluster_compatibility(bool live_check) get_control_data(&new_cluster, false); check_control_data(&old_cluster.controldata, &new_cluster.controldata); - /* Is it 9.0 but without tablespace directories? */ - if (GET_MAJOR_VERSION(new_cluster.major_version) == 900 && - new_cluster.controldata.cat_ver < TABLE_SPACE_SUBDIRS_CAT_VER) - pg_fatal("This utility can only upgrade to PostgreSQL version 9.0 after 2010-01-11\n" - "because of backend API changes made during development.\n"); - /* We read the real port number for PG >= 9.1 */ if (live_check && GET_MAJOR_VERSION(old_cluster.major_version) < 901 && old_cluster.port == DEF_PGUPORT) From 0cdc3e47bea442643c9870dc419364b9f2f52dcb Mon Sep 17 00:00:00 2001 From: Stephen Frost Date: Thu, 24 Aug 2017 16:20:50 -0400 Subject: [PATCH 0035/1087] psql: Fix \gx when FETCH_COUNT is used Set expanded output when requested through \gx in ExecQueryUsingCursor() (used when FETCH_COUNT is set). Discussion: https://www.postgresql.org/message-id/CB7A53AA-5645-4BDD-AB07-4D22CD9D8FF1%40gmx.net Author: Tobias Bussmann --- src/bin/psql/common.c | 4 ++++ src/test/regress/expected/psql.out | 25 +++++++++++++++++++++++++ src/test/regress/sql/psql.sql | 10 ++++++++++ 3 files changed, 39 insertions(+) diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c index 044cdb82a7..a41932ff27 100644 --- a/src/bin/psql/common.c +++ b/src/bin/psql/common.c @@ -1565,6 +1565,10 @@ ExecQueryUsingCursor(const char *query, double *elapsed_msec) "FETCH FORWARD %d FROM _psql_cursor", fetch_count); + /* one-shot expanded output requested via \gx */ + if (pset.g_expanded) + my_popt.topt.expanded = 1; + /* prepare to write output to \g argument, if any */ if (pset.gfname) { diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out index d602aeef42..4aaf4c1620 100644 --- a/src/test/regress/expected/psql.out +++ b/src/test/regress/expected/psql.out @@ -51,6 +51,31 @@ four | 4 3 | 4 (1 row) +-- \gx should work in FETCH_COUNT mode too +\set FETCH_COUNT 1 +SELECT 1 as one, 2 as two \g + one | two +-----+----- + 1 | 2 +(1 row) + +\gx +-[ RECORD 1 ] +one | 1 +two | 2 + +SELECT 3 as three, 4 as four \gx +-[ RECORD 1 ] +three | 3 +four | 4 + +\g + three | four +-------+------ + 3 | 4 +(1 row) + +\unset FETCH_COUNT -- \gset select 10 as test01, 20 as test02, 'Hello' as test03 \gset pref01_ \echo :pref01_test01 :pref01_test02 :pref01_test03 diff --git a/src/test/regress/sql/psql.sql b/src/test/regress/sql/psql.sql index b56a05f7f0..4a676c3119 100644 --- a/src/test/regress/sql/psql.sql +++ b/src/test/regress/sql/psql.sql @@ -28,6 +28,16 @@ SELECT 1 as one, 2 as two \g SELECT 3 as three, 4 as four \gx \g +-- \gx should work in FETCH_COUNT mode too +\set FETCH_COUNT 1 + +SELECT 1 as one, 2 as two \g +\gx +SELECT 3 as three, 4 as four \gx +\g + +\unset FETCH_COUNT + -- \gset select 10 as test01, 20 as test02, 'Hello' as test03 \gset pref01_ From 20fbf25533763c8c78c9c668b718d831236fb111 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 24 Aug 2017 15:07:40 -0700 Subject: [PATCH 0036/1087] Fix harmless thinko in dsa.c. 
Commit 16be2fd100199bdf284becfcee02c5eb20d8a11d added DSA_ALLOC_HUGE, DSA_ALLOC_ZERO and DSA_ALLOC_NO_OOM which have the same numerical values and meanings as the similarly named MCXT_... macros. In one place we accidentally used MCXT_ALLOC_NO_OOM when DSA_ALLOC_NO_OOM is wanted, so tidy that up. Author: Thomas Munro Discussion: http://postgr.es/m/CAEepm=2AimHxVkkxnMfQvbZMkXy0uKbVa0-D38c5-qwrCm4CMQ@mail.gmail.com Backpatch: 10, where dsa was introduced. --- src/backend/utils/mmgr/dsa.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/mmgr/dsa.c b/src/backend/utils/mmgr/dsa.c index b3327f676b..fe62788188 100644 --- a/src/backend/utils/mmgr/dsa.c +++ b/src/backend/utils/mmgr/dsa.c @@ -707,7 +707,7 @@ dsa_allocate_extended(dsa_area *area, Size size, int flags) dsa_free(area, span_pointer); /* Raise error unless asked not to. */ - if ((flags & MCXT_ALLOC_NO_OOM) == 0) + if ((flags & DSA_ALLOC_NO_OOM) == 0) ereport(ERROR, (errcode(ERRCODE_OUT_OF_MEMORY), errmsg("out of memory"), From 4569715bd6faa4c43e489a7069ab7abca68ff663 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 24 Aug 2017 16:58:30 -0700 Subject: [PATCH 0037/1087] Fix unlikely shared memory leak after failure in dshash_create(). Tidy-up for commit 8c0d7bafad36434cb08ac2c78e69ae72c194ca20, based on a complaint from Andres Freund. Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/20170823054644.efuzftxjpfi6wwqs%40alap3.anarazel.de --- src/backend/lib/dshash.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/src/backend/lib/dshash.c b/src/backend/lib/dshash.c index 988d569b84..06ff32313c 100644 --- a/src/backend/lib/dshash.c +++ b/src/backend/lib/dshash.c @@ -243,10 +243,20 @@ dshash_create(dsa_area *area, const dshash_parameters *params, void *arg) */ hash_table->control->size_log2 = DSHASH_NUM_PARTITIONS_LOG2; hash_table->control->buckets = - dsa_allocate(area, sizeof(dsa_pointer) * DSHASH_NUM_PARTITIONS); + dsa_allocate_extended(area, + sizeof(dsa_pointer) * DSHASH_NUM_PARTITIONS, + DSA_ALLOC_NO_OOM | DSA_ALLOC_ZERO); + if (!DsaPointerIsValid(hash_table->control->buckets)) + { + dsa_free(area, control); + ereport(ERROR, + (errcode(ERRCODE_OUT_OF_MEMORY), + errmsg("out of memory"), + errdetail("Failed on DSA request of size %zu.", + sizeof(dsa_pointer) * DSHASH_NUM_PARTITIONS))); + } hash_table->buckets = dsa_get_address(area, hash_table->control->buckets); - memset(hash_table->buckets, 0, sizeof(dsa_pointer) * DSHASH_NUM_PARTITIONS); return hash_table; } From d7694fc148707cd8335d9ccfde9f4c17290189db Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 24 Aug 2017 17:01:36 -0700 Subject: [PATCH 0038/1087] Consolidate the function pointer types used by dshash.c. Commit 8c0d7bafad36434cb08ac2c78e69ae72c194ca20 introduced dshash with hash and compare functions like DynaHash's, and also variants that take a user data pointer instead of size. Simplify the interface by merging them into a single pair of function pointer types that take both size and a user data pointer. Since it is anticipated that memcmp and tag_hash behavior will be a common requirement, provide wrapper functions dshash_memcmp and dshash_memhash that conform to the new function types. 
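For illustration only (this sketch is not part of the patch): a caller whose key is a fixed-size binary value can now fill in dshash_parameters with the two wrappers directly. The entry layout, tranche id, and dsa_area below are invented for the example; only the field order and function signatures come from the patch.

    #include "postgres.h"
    #include "lib/dshash.h"

    typedef struct demo_entry
    {
        uint32      key;            /* initial bytes of the entry are the key */
        int         value;
    } demo_entry;

    static const dshash_parameters demo_params = {
        sizeof(uint32),             /* key_size */
        sizeof(demo_entry),         /* entry_size */
        dshash_memcmp,              /* compare_function(a, b, size, arg) */
        dshash_memhash,             /* hash_function(v, size, arg) */
        LWTRANCHE_FIRST_USER_DEFINED    /* placeholder tranche_id */
    };

    static dshash_table *
    demo_create(dsa_area *area)
    {
        /* arg is passed through to the compare and hash functions */
        return dshash_create(area, &demo_params, NULL);
    }

Both wrappers ignore their arg parameter, so passing NULL is fine here; a table whose comparison depends on backend-local state can hang that state off arg instead.
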
Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/20170823054644.efuzftxjpfi6wwqs%40alap3.anarazel.de --- src/backend/lib/dshash.c | 65 ++++++++++++++++++---------------------- src/include/lib/dshash.h | 37 +++++++---------------- 2 files changed, 39 insertions(+), 63 deletions(-) diff --git a/src/backend/lib/dshash.c b/src/backend/lib/dshash.c index 06ff32313c..5dbd0c4227 100644 --- a/src/backend/lib/dshash.c +++ b/src/backend/lib/dshash.c @@ -35,6 +35,7 @@ #include "storage/ipc.h" #include "storage/lwlock.h" #include "utils/dsa.h" +#include "utils/hsearch.h" #include "utils/memutils.h" /* @@ -188,9 +189,8 @@ static inline bool equal_keys(dshash_table *hash_table, /* * Create a new hash table backed by the given dynamic shared area, with the * given parameters. The returned object is allocated in backend-local memory - * using the current MemoryContext. If 'arg' is non-null, the arg variants of - * hash and compare functions must be provided in 'params' and 'arg' will be - * passed down to them. + * using the current MemoryContext. 'arg' will be passed through to the + * compare and hash functions. */ dshash_table * dshash_create(dsa_area *area, const dshash_parameters *params, void *arg) @@ -198,14 +198,6 @@ dshash_create(dsa_area *area, const dshash_parameters *params, void *arg) dshash_table *hash_table; dsa_pointer control; - /* Sanity checks on the set of supplied functions. */ - Assert((params->compare_function != NULL) ^ - (params->compare_arg_function != NULL)); - Assert((params->hash_function != NULL) ^ - (params->hash_arg_function != NULL)); - Assert(arg == NULL || (params->compare_arg_function != NULL)); - Assert(arg == NULL || (params->hash_arg_function != NULL)); - /* Allocate the backend-local object representing the hash table. */ hash_table = palloc(sizeof(dshash_table)); @@ -263,9 +255,8 @@ dshash_create(dsa_area *area, const dshash_parameters *params, void *arg) /* * Attach to an existing hash table using a handle. The returned object is - * allocated in backend-local memory using the current MemoryContext. If - * 'arg' is non-null, the arg variants of hash and compare functions must be - * provided in 'params' and 'arg' will be passed down to them. + * allocated in backend-local memory using the current MemoryContext. 'arg' + * will be passed through to the compare and hash functions. */ dshash_table * dshash_attach(dsa_area *area, const dshash_parameters *params, @@ -274,14 +265,6 @@ dshash_attach(dsa_area *area, const dshash_parameters *params, dshash_table *hash_table; dsa_pointer control; - /* Sanity checks on the set of supplied functions. */ - Assert((params->compare_function != NULL) ^ - (params->compare_arg_function != NULL)); - Assert((params->hash_function != NULL) ^ - (params->hash_arg_function != NULL)); - Assert(arg == NULL || (params->compare_arg_function != NULL)); - Assert(arg == NULL || (params->hash_arg_function != NULL)); - /* Allocate the backend-local object representing the hash table. */ hash_table = palloc(sizeof(dshash_table)); @@ -582,6 +565,24 @@ dshash_release_lock(dshash_table *hash_table, void *entry) LWLockRelease(PARTITION_LOCK(hash_table, partition_index)); } +/* + * A compare function that forwards to memcmp. + */ +int +dshash_memcmp(const void *a, const void *b, size_t size, void *arg) +{ + return memcmp(a, b, size); +} + +/* + * A hash function that forwards to tag_hash. 
+ */ +dshash_hash +dshash_memhash(const void *v, size_t size, void *arg) +{ + return tag_hash(v, size); +} + /* * Print debugging information about the internal state of the hash table to * stderr. The caller must hold no partition locks. @@ -874,11 +875,9 @@ delete_item_from_bucket(dshash_table *hash_table, static inline dshash_hash hash_key(dshash_table *hash_table, const void *key) { - if (hash_table->params.hash_arg_function != NULL) - return hash_table->params.hash_arg_function(key, hash_table->arg); - else - return hash_table->params.hash_function(key, - hash_table->params.key_size); + return hash_table->params.hash_function(key, + hash_table->params.key_size, + hash_table->arg); } /* @@ -887,13 +886,7 @@ hash_key(dshash_table *hash_table, const void *key) static inline bool equal_keys(dshash_table *hash_table, const void *a, const void *b) { - int r; - - if (hash_table->params.compare_arg_function != NULL) - r = hash_table->params.compare_arg_function(a, b, hash_table->arg); - else - r = hash_table->params.compare_function(a, b, - hash_table->params.key_size); - - return r == 0; + return hash_table->params.compare_function(a, b, + hash_table->params.key_size, + hash_table->arg) == 0; } diff --git a/src/include/lib/dshash.h b/src/include/lib/dshash.h index 187f58b4e4..3fd91f8697 100644 --- a/src/include/lib/dshash.h +++ b/src/include/lib/dshash.h @@ -26,32 +26,13 @@ typedef dsa_pointer dshash_table_handle; /* The type for hash values. */ typedef uint32 dshash_hash; -/* - * A function used for comparing keys. This version is compatible with - * HashCompareFunction in hsearch.h and memcmp. - */ -typedef int (*dshash_compare_function) (const void *a, const void *b, size_t size); +/* A function type for comparing keys. */ +typedef int (*dshash_compare_function) (const void *a, const void *b, + size_t size, void *arg); -/* - * A function type used for comparing keys. This version allows compare - * functions to receive a pointer to arbitrary user data that was given to the - * create or attach function. Similar to qsort_arg_comparator. - */ -typedef int (*dshash_compare_arg_function) (const void *a, const void *b, void *arg); - -/* - * A function type for computing hash values for keys. This version is - * compatible with HashValueFunc in hsearch.h and hash functions like - * tag_hash. - */ -typedef dshash_hash (*dshash_hash_function) (const void *v, size_t size); - -/* - * A function type for computing hash values for keys. This version allows - * hash functions to receive a pointer to arbitrary user data that was given - * to the create or attach function. - */ -typedef dshash_hash (*dshash_hash_arg_function) (const void *v, void *arg); +/* A function type for computing hash values for keys. */ +typedef dshash_hash (*dshash_hash_function) (const void *v, size_t size, + void *arg); /* * The set of parameters needed to create or attach to a hash table. 
The @@ -70,9 +51,7 @@ typedef struct dshash_parameters size_t key_size; /* Size of the key (initial bytes of entry) */ size_t entry_size; /* Total size of entry */ dshash_compare_function compare_function; /* Compare function */ - dshash_compare_arg_function compare_arg_function; /* Arg version */ dshash_hash_function hash_function; /* Hash function */ - dshash_hash_arg_function hash_arg_function; /* Arg version */ int tranche_id; /* The tranche ID to use for locks */ } dshash_parameters; @@ -101,6 +80,10 @@ extern bool dshash_delete_key(dshash_table *hash_table, const void *key); extern void dshash_delete_entry(dshash_table *hash_table, void *entry); extern void dshash_release_lock(dshash_table *hash_table, void *entry); +/* Convenience hash and compare functions wrapping memcmp and tag_hash. */ +extern int dshash_memcmp(const void *a, const void *b, size_t size, void *arg); +extern dshash_hash dshash_memhash(const void *v, size_t size, void *arg); + /* Debugging support. */ extern void dshash_dump(dshash_table *hash_table); From d36f7efb39e1b9613193b2e12717dbe2418ddae5 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 24 Aug 2017 17:42:49 -0700 Subject: [PATCH 0039/1087] Add minimal regression test for blessed record type transfer. Test that blessed records can be transferred through a TupleQueue and correctly decoded by another backend. While touching the file, make sure that force_parallel_mode settings only cover relevant tests. Author: Thomas Munro, editorialized by Andres Freund Reviewed-By: Andres Freund Discussion: https://postgr.es/m/20170823054644.efuzftxjpfi6wwqs%40alap3.anarazel.de --- src/test/regress/expected/select_parallel.out | 38 ++++++++++++++++++- src/test/regress/sql/select_parallel.sql | 31 ++++++++++++++- 2 files changed, 66 insertions(+), 3 deletions(-) diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 0efb211c97..084f0f0c8e 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -326,7 +326,8 @@ select string4 from tenk1 order by string4 limit 5; reset max_parallel_workers; reset enable_hashagg; -set force_parallel_mode=1; +SAVEPOINT settings; +SET LOCAL force_parallel_mode = 1; explain (costs off) select stringu1::int2 from tenk1 where unique1 = 1; QUERY PLAN @@ -338,7 +339,38 @@ explain (costs off) Index Cond: (unique1 = 1) (5 rows) +ROLLBACK TO SAVEPOINT settings; +-- exercise record typmod remapping between backends +CREATE OR REPLACE FUNCTION make_record(n int) + RETURNS RECORD LANGUAGE plpgsql PARALLEL SAFE AS +$$ +BEGIN + RETURN CASE n + WHEN 1 THEN ROW(1) + WHEN 2 THEN ROW(1, 2) + WHEN 3 THEN ROW(1, 2, 3) + WHEN 4 THEN ROW(1, 2, 3, 4) + ELSE ROW(1, 2, 3, 4, 5) + END; +END; +$$; +SAVEPOINT settings; +SET LOCAL force_parallel_mode = 1; +SELECT make_record(x) FROM (SELECT generate_series(1, 5) x) ss ORDER BY x; + make_record +------------- + (1) + (1,2) + (1,2,3) + (1,2,3,4) + (1,2,3,4,5) +(5 rows) + +ROLLBACK TO SAVEPOINT settings; +DROP function make_record(n int); -- to increase the parallel query test coverage +SAVEPOINT settings; +SET LOCAL force_parallel_mode = 1; EXPLAIN (analyze, timing off, summary off, costs off) SELECT * FROM tenk1; QUERY PLAN ------------------------------------------------------------- @@ -348,8 +380,12 @@ EXPLAIN (analyze, timing off, summary off, costs off) SELECT * FROM tenk1; -> Parallel Seq Scan on tenk1 (actual rows=2000 loops=5) (4 rows) +ROLLBACK TO SAVEPOINT settings; -- provoke error in worker 
+SAVEPOINT settings; +SET LOCAL force_parallel_mode = 1; select stringu1::int2 from tenk1 where unique1 = 1; ERROR: invalid input syntax for integer: "BAAAAA" CONTEXT: parallel worker +ROLLBACK TO SAVEPOINT settings; rollback; diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index e717f92e53..58c3f59890 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -126,15 +126,42 @@ select string4 from tenk1 order by string4 limit 5; reset max_parallel_workers; reset enable_hashagg; -set force_parallel_mode=1; - +SAVEPOINT settings; +SET LOCAL force_parallel_mode = 1; explain (costs off) select stringu1::int2 from tenk1 where unique1 = 1; +ROLLBACK TO SAVEPOINT settings; + +-- exercise record typmod remapping between backends +CREATE OR REPLACE FUNCTION make_record(n int) + RETURNS RECORD LANGUAGE plpgsql PARALLEL SAFE AS +$$ +BEGIN + RETURN CASE n + WHEN 1 THEN ROW(1) + WHEN 2 THEN ROW(1, 2) + WHEN 3 THEN ROW(1, 2, 3) + WHEN 4 THEN ROW(1, 2, 3, 4) + ELSE ROW(1, 2, 3, 4, 5) + END; +END; +$$; +SAVEPOINT settings; +SET LOCAL force_parallel_mode = 1; +SELECT make_record(x) FROM (SELECT generate_series(1, 5) x) ss ORDER BY x; +ROLLBACK TO SAVEPOINT settings; +DROP function make_record(n int); -- to increase the parallel query test coverage +SAVEPOINT settings; +SET LOCAL force_parallel_mode = 1; EXPLAIN (analyze, timing off, summary off, costs off) SELECT * FROM tenk1; +ROLLBACK TO SAVEPOINT settings; -- provoke error in worker +SAVEPOINT settings; +SET LOCAL force_parallel_mode = 1; select stringu1::int2 from tenk1 where unique1 = 1; +ROLLBACK TO SAVEPOINT settings; rollback; From 3f4c7917b3bc8b421c0c85cb9995974c55e7232b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 25 Aug 2017 09:05:17 -0400 Subject: [PATCH 0040/1087] Code review for pushing LIMIT through subqueries. Minor improvements for commit 1f6d515a6. We do not need the (rather expensive) test for SRFs in the targetlist, because since v10 any such SRFs would appear in separate ProjectSet nodes. Also, make the code look more like the existing cases by turning it into a simple recursion --- the argument that there might be some performance benefit to contorting the code seems unfounded to me, especially since any good compiler should turn the tail-recursion into iteration anyway. Discussion: http://postgr.es/m/CADE5jYLuugnEEUsyW6Q_4mZFYTxHxaVCQmGAsF0yiY8ZDggi-w@mail.gmail.com --- src/backend/executor/nodeLimit.c | 58 +++++++++++++++++--------------- 1 file changed, 30 insertions(+), 28 deletions(-) diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c index 09af1a5d8b..ceb6854b59 100644 --- a/src/backend/executor/nodeLimit.c +++ b/src/backend/executor/nodeLimit.c @@ -303,14 +303,11 @@ recompute_limits(LimitState *node) /* * If we have a COUNT, and our input is a Sort node, notify it that it can - * use bounded sort. Also, if our input is a MergeAppend, we can apply the - * same bound to any Sorts that are direct children of the MergeAppend, - * since the MergeAppend surely need read no more than that many tuples from - * any one input. We also have to be prepared to look through a Result, - * since the planner might stick one atop MergeAppend for projection purposes. - * We can also accept one or more levels of subqueries that have no quals or - * SRFs (that is, each subquery is just projecting columns) between the LIMIT - * and any of the above. + * use bounded sort. 
We can also pass down the bound through plan nodes + * that cannot remove or combine input rows; for example, if our input is a + * MergeAppend, we can apply the same bound to any Sorts that are direct + * children of the MergeAppend, since the MergeAppend surely need not read + * more than that many tuples from any one input. * * This is a bit of a kluge, but we don't have any more-abstract way of * communicating between the two nodes; and it doesn't seem worth trying @@ -324,27 +321,10 @@ static void pass_down_bound(LimitState *node, PlanState *child_node) { /* - * If the child is a subquery that does no filtering (no predicates) - * and does not have any SRFs in the target list then we can potentially - * push the limit through the subquery. It is possible that we could have - * multiple subqueries, so tunnel through them all. + * Since this function recurses, in principle we should check stack depth + * here. In practice, it's probably pointless since the earlier node + * initialization tree traversal would surely have consumed more stack. */ - while (IsA(child_node, SubqueryScanState)) - { - SubqueryScanState *subqueryScanState; - - subqueryScanState = (SubqueryScanState *) child_node; - - /* - * Non-empty predicates or an SRF means we cannot push down the limit. - */ - if (subqueryScanState->ss.ps.qual != NULL || - expression_returns_set((Node *) child_node->plan->targetlist)) - return; - - /* Use the child in the following checks */ - child_node = subqueryScanState->subplan; - } if (IsA(child_node, SortState)) { @@ -365,6 +345,7 @@ pass_down_bound(LimitState *node, PlanState *child_node) } else if (IsA(child_node, MergeAppendState)) { + /* Pass down the bound through MergeAppend */ MergeAppendState *maState = (MergeAppendState *) child_node; int i; @@ -374,6 +355,9 @@ pass_down_bound(LimitState *node, PlanState *child_node) else if (IsA(child_node, ResultState)) { /* + * We also have to be prepared to look through a Result, since the + * planner might stick one atop MergeAppend for projection purposes. + * * If Result supported qual checking, we'd have to punt on seeing a * qual. Note that having a resconstantqual is not a showstopper: if * that fails we're not getting any rows at all. @@ -381,6 +365,24 @@ pass_down_bound(LimitState *node, PlanState *child_node) if (outerPlanState(child_node)) pass_down_bound(node, outerPlanState(child_node)); } + else if (IsA(child_node, SubqueryScanState)) + { + /* + * We can also look through SubqueryScan, but only if it has no qual + * (otherwise it might discard rows). + */ + SubqueryScanState *subqueryState = (SubqueryScanState *) child_node; + + if (subqueryState->ss.ps.qual == NULL) + pass_down_bound(node, subqueryState->subplan); + } + + /* + * In principle we could look through any plan node type that is certain + * not to discard or combine input rows. In practice, there are not many + * node types that the planner might put between Sort and Limit, so trying + * to be very general is not worth the trouble. + */ } /* ---------------------------------------------------------------- From d22e9d530516f7c9c56d00eff53cf19e45ef348c Mon Sep 17 00:00:00 2001 From: Michael Meskes Date: Fri, 25 Aug 2017 15:15:03 +0200 Subject: [PATCH 0041/1087] Implement DO CONTINUE action for ECPG WHENEVER statement. 
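The preprocessor simply emits a guarded continue after each embedded SQL statement that the condition covers. A hand-written sketch of the generated shape (the real code is produced by ecpg; the names match the test case added below, and the ECPGdo arguments are elided):

    /* after EXEC SQL WHENEVER NOT FOUND DO BREAK;
     * and   EXEC SQL WHENEVER SQLERROR DO CONTINUE; */
    while (1)
    {
        /* EXEC SQL FETCH c INTO :emp; expands to roughly: */
        ECPGdo(__LINE__, /* ... statement details ... */);
        if (sqlca.sqlcode == ECPG_NOT_FOUND)
            break;          /* leave the loop once the cursor is exhausted */
        if (sqlca.sqlcode < 0)
            continue;       /* skip the rest of this iteration on error */

        /* only rows that were fetched without error reach this point */
        printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
    }
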
Author: Vinayak Pokale Reviewed-By: Masahiko Sawada --- doc/src/sgml/ecpg.sgml | 12 ++ src/interfaces/ecpg/preproc/ecpg.trailer | 6 + src/interfaces/ecpg/preproc/output.c | 3 + src/interfaces/ecpg/test/ecpg_schedule | 1 + .../expected/preproc-whenever_do_continue.c | 161 ++++++++++++++++++ .../preproc-whenever_do_continue.stderr | 112 ++++++++++++ .../preproc-whenever_do_continue.stdout | 2 + src/interfaces/ecpg/test/preproc/Makefile | 1 + .../test/preproc/whenever_do_continue.pgc | 63 +++++++ 9 files changed, 361 insertions(+) create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout create mode 100644 src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index f13a0e999f..3cb4001cc0 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -4762,6 +4762,17 @@ EXEC SQL WHENEVER condition action + + DO CONTINUE + + + Execute the C statement continue. This should + only be used in loops statements. if executed, will cause the flow + of control to return to the top of the loop. + + + + CALL name (args) DO name (args) @@ -7799,6 +7810,7 @@ WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } ac EXEC SQL WHENEVER NOT FOUND CONTINUE; EXEC SQL WHENEVER NOT FOUND DO BREAK; +EXEC SQL WHENEVER NOT FOUND DO CONTINUE; EXEC SQL WHENEVER SQLWARNING SQLPRINT; EXEC SQL WHENEVER SQLWARNING DO warn(); EXEC SQL WHENEVER SQLERROR sqlprint; diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer index d273070dab..f60a62099d 100644 --- a/src/interfaces/ecpg/preproc/ecpg.trailer +++ b/src/interfaces/ecpg/preproc/ecpg.trailer @@ -1454,6 +1454,12 @@ action : CONTINUE_P $$.command = NULL; $$.str = mm_strdup("break"); } + | DO CONTINUE_P + { + $$.code = W_CONTINUE; + $$.command = NULL; + $$.str = mm_strdup("continue"); + } | SQL_CALL name '(' c_args ')' { $$.code = W_DO; diff --git a/src/interfaces/ecpg/preproc/output.c b/src/interfaces/ecpg/preproc/output.c index 0479c93c99..a55bf2b06a 100644 --- a/src/interfaces/ecpg/preproc/output.c +++ b/src/interfaces/ecpg/preproc/output.c @@ -51,6 +51,9 @@ print_action(struct when *w) case W_BREAK: fprintf(base_yyout, "break;"); break; + case W_CONTINUE: + fprintf(base_yyout, "continue;"); + break; default: fprintf(base_yyout, "{/* %d not implemented yet */}", w->code); break; diff --git a/src/interfaces/ecpg/test/ecpg_schedule b/src/interfaces/ecpg/test/ecpg_schedule index c3ec125c36..cff4eebfde 100644 --- a/src/interfaces/ecpg/test/ecpg_schedule +++ b/src/interfaces/ecpg/test/ecpg_schedule @@ -28,6 +28,7 @@ test: preproc/type test: preproc/variable test: preproc/outofscope test: preproc/whenever +test: preproc/whenever_do_continue test: sql/array test: sql/binary test: sql/code100 diff --git a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c new file mode 100644 index 0000000000..2e95581cdf --- /dev/null +++ b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c @@ -0,0 +1,161 @@ +/* Processed by ecpg (regression mode) */ +/* These include files are added by the preprocessor */ +#include +#include +#include +/* End of automatic include section */ +#define ECPGdebug(X,Y) ECPGdebug((X)+100,(Y)) + +#line 1 "whenever_do_continue.pgc" +#include + + +#line 1 "regression.h" + + + 
+ + + +#line 3 "whenever_do_continue.pgc" + + +/* exec sql whenever sqlerror sqlprint ; */ +#line 5 "whenever_do_continue.pgc" + + +int main(void) +{ + /* exec sql begin declare section */ + + + + + + + + + +#line 15 "whenever_do_continue.pgc" + struct { +#line 12 "whenever_do_continue.pgc" + char ename [ 12 ] ; + +#line 13 "whenever_do_continue.pgc" + float sal ; + +#line 14 "whenever_do_continue.pgc" + float comm ; + } emp ; + +#line 17 "whenever_do_continue.pgc" + char msg [ 128 ] ; +/* exec sql end declare section */ +#line 18 "whenever_do_continue.pgc" + + + ECPGdebug(1, stderr); + + strcpy(msg, "connect"); + { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , NULL, 0); +#line 23 "whenever_do_continue.pgc" + +if (sqlca.sqlcode < 0) sqlprint();} +#line 23 "whenever_do_continue.pgc" + + + strcpy(msg, "create"); + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table emp ( ename varchar , sal double precision , comm double precision )", ECPGt_EOIT, ECPGt_EORT); +#line 26 "whenever_do_continue.pgc" + +if (sqlca.sqlcode < 0) sqlprint();} +#line 26 "whenever_do_continue.pgc" + + + strcpy(msg, "insert"); + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into emp values ( 'Ram' , 111100 , 21 )", ECPGt_EOIT, ECPGt_EORT); +#line 29 "whenever_do_continue.pgc" + +if (sqlca.sqlcode < 0) sqlprint();} +#line 29 "whenever_do_continue.pgc" + + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into emp values ( 'aryan' , 11110 , null )", ECPGt_EOIT, ECPGt_EORT); +#line 30 "whenever_do_continue.pgc" + +if (sqlca.sqlcode < 0) sqlprint();} +#line 30 "whenever_do_continue.pgc" + + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into emp values ( 'josh' , 10000 , 10 )", ECPGt_EOIT, ECPGt_EORT); +#line 31 "whenever_do_continue.pgc" + +if (sqlca.sqlcode < 0) sqlprint();} +#line 31 "whenever_do_continue.pgc" + + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "insert into emp values ( 'tom' , 20000 , null )", ECPGt_EOIT, ECPGt_EORT); +#line 32 "whenever_do_continue.pgc" + +if (sqlca.sqlcode < 0) sqlprint();} +#line 32 "whenever_do_continue.pgc" + + + /* declare c cursor for select ename , sal , comm from emp order by ename asc */ +#line 34 "whenever_do_continue.pgc" + + + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "declare c cursor for select ename , sal , comm from emp order by ename asc", ECPGt_EOIT, ECPGt_EORT); +#line 36 "whenever_do_continue.pgc" + +if (sqlca.sqlcode < 0) sqlprint();} +#line 36 "whenever_do_continue.pgc" + + + /* The 'BREAK' condition to exit the loop. */ + /* exec sql whenever not found break ; */ +#line 39 "whenever_do_continue.pgc" + + + /* The DO CONTINUE makes the loop start at the next iteration when an error occurs.*/ + /* exec sql whenever sqlerror continue ; */ +#line 42 "whenever_do_continue.pgc" + + + while (1) + { + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "fetch c", ECPGt_EOIT, + ECPGt_char,&(emp.ename),(long)12,(long)1,(12)*sizeof(char), + ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, + ECPGt_float,&(emp.sal),(long)1,(long)1,sizeof(float), + ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, + ECPGt_float,&(emp.comm),(long)1,(long)1,sizeof(float), + ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT); +#line 46 "whenever_do_continue.pgc" + +if (sqlca.sqlcode == ECPG_NOT_FOUND) break; +#line 46 "whenever_do_continue.pgc" + +if (sqlca.sqlcode < 0) continue;} +#line 46 "whenever_do_continue.pgc" + + /* The employees with non-NULL commissions will be displayed. 
*/ + printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm); + } + + /* + * This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to + * proceed if any further errors do occur. + */ + /* exec sql whenever sqlerror continue ; */ +#line 55 "whenever_do_continue.pgc" + + + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "close c", ECPGt_EOIT, ECPGt_EORT);} +#line 57 "whenever_do_continue.pgc" + + + strcpy(msg, "drop"); + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table emp", ECPGt_EOIT, ECPGt_EORT);} +#line 60 "whenever_do_continue.pgc" + + + exit(0); +} diff --git a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr new file mode 100644 index 0000000000..b33329bc0c --- /dev/null +++ b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr @@ -0,0 +1,112 @@ +[NO_PID]: ECPGdebug: set to 1 +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ECPGconnect: opening database ecpg1_regression on port +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 26: query: create table emp ( ename varchar , sal double precision , comm double precision ); with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 26: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 26: OK: CREATE TABLE +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 29: query: insert into emp values ( 'Ram' , 111100 , 21 ); with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 29: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 29: OK: INSERT 0 1 +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 30: query: insert into emp values ( 'aryan' , 11110 , null ); with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 30: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 30: OK: INSERT 0 1 +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 31: query: insert into emp values ( 'josh' , 10000 , 10 ); with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 31: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 31: OK: INSERT 0 1 +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 32: query: insert into emp values ( 'tom' , 20000 , null ); with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 32: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 32: OK: INSERT 0 1 +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 36: query: declare c cursor for select ename , sal , comm from emp order by ename asc; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 36: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 36: OK: DECLARE CURSOR +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 46: query: fetch c; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 46: using PQexec 
+[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 46: correctly got 1 tuples with 3 fields +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: aryan offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: 11110 offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: raising sqlcode -213 on line 46: null value without indicator on line 46 +[NO_PID]: sqlca: code: -213, state: 22002 +[NO_PID]: ecpg_execute on line 46: query: fetch c; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 46: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 46: correctly got 1 tuples with 3 fields +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: josh offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: 10000 offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: 10 offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 46: query: fetch c; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 46: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 46: correctly got 1 tuples with 3 fields +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: Ram offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: 111100 offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: 21 offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 46: query: fetch c; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 46: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 46: correctly got 1 tuples with 3 fields +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: tom offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: 20000 offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_get_data on line 46: RESULT: offset: -1; array: no +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: raising sqlcode -213 on line 46: null value without indicator on line 46 +[NO_PID]: sqlca: code: -213, state: 22002 +[NO_PID]: ecpg_execute on line 46: query: fetch c; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 46: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 46: correctly got 0 tuples with 3 fields +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: raising sqlcode 100 on line 46: no data found on line 46 +[NO_PID]: sqlca: code: 100, state: 02000 +[NO_PID]: ecpg_execute on line 57: query: close c; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 57: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 57: OK: CLOSE 
CURSOR +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 60: query: drop table emp; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_execute on line 60: using PQexec +[NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: ecpg_process_output on line 60: OK: DROP TABLE +[NO_PID]: sqlca: code: 0, state: 00000 diff --git a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout new file mode 100644 index 0000000000..75fb6ce270 --- /dev/null +++ b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout @@ -0,0 +1,2 @@ +josh 10000.00 10.00 +Ram 111100.00 21.00 diff --git a/src/interfaces/ecpg/test/preproc/Makefile b/src/interfaces/ecpg/test/preproc/Makefile index d658a4d6b2..39b1974f5f 100644 --- a/src/interfaces/ecpg/test/preproc/Makefile +++ b/src/interfaces/ecpg/test/preproc/Makefile @@ -15,6 +15,7 @@ TESTS = array_of_struct array_of_struct.c \ type type.c \ variable variable.c \ whenever whenever.c \ + whenever_do_continue whenever_do_continue.c \ pointer_to_struct pointer_to_struct.c all: $(TESTS) diff --git a/src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc b/src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc new file mode 100644 index 0000000000..8ceda69927 --- /dev/null +++ b/src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc @@ -0,0 +1,63 @@ +#include + +exec sql include ../regression; + +exec sql whenever sqlerror sqlprint; + +int main(void) +{ + exec sql begin declare section; + struct + { + char ename[12]; + float sal; + float comm; + } emp; + + char msg[128]; + exec sql end declare section; + + ECPGdebug(1, stderr); + + strcpy(msg, "connect"); + exec sql connect to REGRESSDB1; + + strcpy(msg, "create"); + exec sql create table emp(ename varchar,sal double precision, comm double precision); + + strcpy(msg, "insert"); + exec sql insert into emp values ('Ram',111100,21); + exec sql insert into emp values ('aryan',11110,null); + exec sql insert into emp values ('josh',10000,10); + exec sql insert into emp values ('tom',20000,null); + + exec sql declare c cursor for select ename, sal, comm from emp order by ename asc; + + exec sql open c; + + /* The 'BREAK' condition to exit the loop. */ + exec sql whenever not found do break; + + /* The DO CONTINUE makes the loop start at the next iteration when an error occurs.*/ + exec sql whenever sqlerror do continue; + + while (1) + { + exec sql fetch c into :emp; + /* The employees with non-NULL commissions will be displayed. */ + printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm); + } + + /* + * This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to + * proceed if any further errors do occur. 
+ */ + exec sql whenever sqlerror continue; + + exec sql close c; + + strcpy(msg, "drop"); + exec sql drop table emp; + + exit(0); +} From e86ac70d6ef12d8639885fcdb238fdaabec80aa7 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 25 Aug 2017 11:49:05 -0400 Subject: [PATCH 0042/1087] Message translatability fixes --- src/bin/pg_test_fsync/pg_test_fsync.c | 30 +++++++++++++++------------ src/bin/pg_waldump/pg_waldump.c | 20 ++++++++++-------- 2 files changed, 28 insertions(+), 22 deletions(-) diff --git a/src/bin/pg_test_fsync/pg_test_fsync.c b/src/bin/pg_test_fsync/pg_test_fsync.c index 266b2b6e41..c607b5371c 100644 --- a/src/bin/pg_test_fsync/pg_test_fsync.c +++ b/src/bin/pg_test_fsync/pg_test_fsync.c @@ -25,8 +25,9 @@ #define XLOG_BLCKSZ_K (XLOG_BLCKSZ / 1024) #define LABEL_FORMAT " %-30s" -#define NA_FORMAT "%20s" -#define OPS_FORMAT "%13.3f ops/sec %6.0f usecs/op" +#define NA_FORMAT "%21s\n" +/* translator: maintain alignment with NA_FORMAT */ +#define OPS_FORMAT gettext_noop("%13.3f ops/sec %6.0f usecs/op\n") #define USECS_SEC 1000000 /* These are macros to avoid timing the function call overhead. */ @@ -45,7 +46,7 @@ do { \ if (CreateThread(NULL, 0, process_alarm, NULL, 0, NULL) == \ INVALID_HANDLE_VALUE) \ { \ - fprintf(stderr, _("Cannot create thread for alarm\n")); \ + fprintf(stderr, _("Could not create thread for alarm\n")); \ exit(1); \ } \ gettimeofday(&start_t, NULL); \ @@ -191,7 +192,10 @@ handle_args(int argc, char *argv[]) exit(1); } - printf(_("%d seconds per test\n"), secs_per_test); + printf(ngettext("%d second per test\n", + "%d seconds per test\n", + secs_per_test), + secs_per_test); #if PG_O_DIRECT != 0 printf(_("O_DIRECT supported on this platform for open_datasync and open_sync.\n")); #else @@ -255,7 +259,7 @@ test_sync(int writes_per_op) #ifdef OPEN_DATASYNC_FLAG if ((tmpfile = open(filename, O_RDWR | O_DSYNC | PG_O_DIRECT, 0)) == -1) { - printf(NA_FORMAT, _("n/a*\n")); + printf(NA_FORMAT, _("n/a*")); fs_warning = true; } else @@ -273,7 +277,7 @@ test_sync(int writes_per_op) close(tmpfile); } #else - printf(NA_FORMAT, _("n/a\n")); + printf(NA_FORMAT, _("n/a")); #endif /* @@ -298,7 +302,7 @@ test_sync(int writes_per_op) STOP_TIMER; close(tmpfile); #else - printf(NA_FORMAT, _("n/a\n")); + printf(NA_FORMAT, _("n/a")); #endif /* @@ -346,7 +350,7 @@ test_sync(int writes_per_op) STOP_TIMER; close(tmpfile); #else - printf(NA_FORMAT, _("n/a\n")); + printf(NA_FORMAT, _("n/a")); #endif /* @@ -358,7 +362,7 @@ test_sync(int writes_per_op) #ifdef OPEN_SYNC_FLAG if ((tmpfile = open(filename, O_RDWR | OPEN_SYNC_FLAG | PG_O_DIRECT, 0)) == -1) { - printf(NA_FORMAT, _("n/a*\n")); + printf(NA_FORMAT, _("n/a*")); fs_warning = true; } else @@ -383,7 +387,7 @@ test_sync(int writes_per_op) close(tmpfile); } #else - printf(NA_FORMAT, _("n/a\n")); + printf(NA_FORMAT, _("n/a")); #endif if (fs_warning) @@ -424,7 +428,7 @@ test_open_sync(const char *msg, int writes_size) #ifdef OPEN_SYNC_FLAG if ((tmpfile = open(filename, O_RDWR | OPEN_SYNC_FLAG | PG_O_DIRECT, 0)) == -1) - printf(NA_FORMAT, _("n/a*\n")); + printf(NA_FORMAT, _("n/a*")); else { START_TIMER; @@ -441,7 +445,7 @@ test_open_sync(const char *msg, int writes_size) close(tmpfile); } #else - printf(NA_FORMAT, _("n/a\n")); + printf(NA_FORMAT, _("n/a")); #endif } @@ -577,7 +581,7 @@ print_elapse(struct timeval start_t, struct timeval stop_t, int ops) double per_second = ops / total_time; double avg_op_time_us = (total_time / ops) * USECS_SEC; - printf(OPS_FORMAT "\n", per_second, avg_op_time_us); + printf(_(OPS_FORMAT), 
per_second, avg_op_time_us); } #ifndef WIN32 diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c index 81ec3e32fd..5aa3233bd3 100644 --- a/src/bin/pg_waldump/pg_waldump.c +++ b/src/bin/pg_waldump/pg_waldump.c @@ -300,7 +300,7 @@ XLogDumpXLogRead(const char *directory, TimeLineID timeline_id, XLogFileName(fname, timeline_id, sendSegNo); - fatal_error("could not seek in log segment %s to offset %u: %s", + fatal_error("could not seek in log file %s to offset %u: %s", fname, startoff, strerror(err)); } sendOff = startoff; @@ -320,7 +320,7 @@ XLogDumpXLogRead(const char *directory, TimeLineID timeline_id, XLogFileName(fname, timeline_id, sendSegNo); - fatal_error("could not read from log segment %s, offset %d, length %d: %s", + fatal_error("could not read from log file %s, offset %u, length %d: %s", fname, sendOff, segbytes, strerror(err)); } @@ -710,14 +710,14 @@ usage(void) printf(_(" -n, --limit=N number of records to display\n")); printf(_(" -p, --path=PATH directory in which to find log segment files or a\n" " directory with a ./pg_wal that contains such files\n" - " (default: current directory, ./pg_wal, PGDATA/pg_wal)\n")); - printf(_(" -r, --rmgr=RMGR only show records generated by resource manager RMGR\n" + " (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n")); + printf(_(" -r, --rmgr=RMGR only show records generated by resource manager RMGR;\n" " use --rmgr=list to list valid resource manager names\n")); printf(_(" -s, --start=RECPTR start reading at WAL location RECPTR\n")); printf(_(" -t, --timeline=TLI timeline from which to read log records\n" " (default: 1 or the value used in STARTSEG)\n")); printf(_(" -V, --version output version information, then exit\n")); - printf(_(" -x, --xid=XID only show records with TransactionId XID\n")); + printf(_(" -x, --xid=XID only show records with transaction ID XID\n")); printf(_(" -z, --stats[=record] show statistics instead of records\n" " (optionally, show per-record statistics)\n")); printf(_(" -?, --help show this help, then exit\n")); @@ -870,7 +870,7 @@ main(int argc, char **argv) case 'x': if (sscanf(optarg, "%u", &config.filter_by_xid) != 1) { - fprintf(stderr, _("%s: could not parse \"%s\" as a valid xid\n"), + fprintf(stderr, _("%s: could not parse \"%s\" as a transaction ID\n"), progname, optarg); goto bad_argument; } @@ -910,7 +910,7 @@ main(int argc, char **argv) if (!verify_directory(private.inpath)) { fprintf(stderr, - _("%s: path \"%s\" cannot be opened: %s\n"), + _("%s: path \"%s\" could not be opened: %s\n"), progname, private.inpath, strerror(errno)); goto bad_argument; } @@ -931,7 +931,7 @@ main(int argc, char **argv) private.inpath = directory; if (!verify_directory(private.inpath)) - fatal_error("cannot open directory \"%s\": %s", + fatal_error("could not open directory \"%s\": %s", private.inpath, strerror(errno)); } @@ -1029,7 +1029,9 @@ main(int argc, char **argv) * a segment (e.g. we were used in file mode). 
*/ if (first_record != private.startptr && (private.startptr % XLogSegSize) != 0) - printf(_("first record is after %X/%X, at %X/%X, skipping over %u bytes\n"), + printf(ngettext("first record is after %X/%X, at %X/%X, skipping over %u byte\n", + "first record is after %X/%X, at %X/%X, skipping over %u bytes\n", + (first_record - private.startptr)), (uint32) (private.startptr >> 32), (uint32) private.startptr, (uint32) (first_record >> 32), (uint32) first_record, (uint32) (first_record - private.startptr)); From 99ce446ada332fd8879fcdbded9daa891595f089 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 25 Aug 2017 12:02:29 -0400 Subject: [PATCH 0043/1087] pg_upgrade: Remove more dead code related to 6ce6a61840cc90172ad3da7bf303656132fa5fab Reported-by: Christoph Berg --- src/bin/pg_upgrade/pg_upgrade.h | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h index dae068ed83..e44c23654d 100644 --- a/src/bin/pg_upgrade/pg_upgrade.h +++ b/src/bin/pg_upgrade/pg_upgrade.h @@ -95,10 +95,12 @@ extern char *output_files[]; #endif -/* OID system catalog preservation added during PG 9.0 development */ -#define TABLE_SPACE_SUBDIRS_CAT_VER 201001111 -/* postmaster/postgres -b (binary_upgrade) flag added during PG 9.1 development */ +/* + * postmaster/postgres -b (binary_upgrade) flag added during PG 9.1 + * development + */ #define BINARY_UPGRADE_SERVER_FLAG_CAT_VER 201104251 + /* * Visibility map changed with this 9.2 commit, * 8f9fe6edce358f7904e0db119416b4d1080a83aa; pick later catalog version. @@ -109,6 +111,7 @@ extern char *output_files[]; * The format of visibility map is changed with this 9.6 commit, */ #define VISIBILITY_MAP_FROZEN_BIT_CAT_VER 201603011 + /* * pg_multixact format changed in 9.3 commit 0ac5ad5134f2769ccbaefec73844f85, * ("Improve concurrency of foreign key locking") which also updated catalog @@ -128,6 +131,7 @@ extern char *output_files[]; */ #define JSONB_FORMAT_CHANGE_CAT_VER 201409291 + /* * Each relation is represented by a relinfo structure. */ From aae62278e167623bfac9fd50d1242d8e72208b0c Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 25 Aug 2017 14:17:33 -0400 Subject: [PATCH 0044/1087] Fix locale dependency in new ecpg test case. Force sorting in "C" locale so that the output ordering doesn't vary, per buildfarm. In passing, add missing .gitignore entries. 
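As a standalone C illustration (not part of the patch) of the underlying hazard: strcoll() orders strings according to the current LC_COLLATE, so a name like "Ram" may sort before or after "aryan" depending on the build machine's locale, whereas strcmp() gives the fixed byte-wise order that COLLATE "C" requests.

    #include <locale.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        setlocale(LC_COLLATE, "");      /* adopt the environment's locale */

        /* locale-dependent: often > 0 under en_US-style collations */
        printf("strcoll: %d\n", strcoll("Ram", "aryan"));

        /* always < 0, since 'R' (0x52) precedes 'a' (0x61) */
        printf("strcmp:  %d\n", strcmp("Ram", "aryan"));
        return 0;
    }
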
Discussion: https://postgr.es/m/0975f4bb-5dee-c33c-b719-3ce44026d397@chrullrich.net --- .../expected/preproc-whenever_do_continue.c | 4 ++-- .../preproc-whenever_do_continue.stderr | 24 +++++++++---------- .../preproc-whenever_do_continue.stdout | 2 +- src/interfaces/ecpg/test/preproc/.gitignore | 2 ++ .../test/preproc/whenever_do_continue.pgc | 2 +- 5 files changed, 18 insertions(+), 16 deletions(-) diff --git a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c index 2e95581cdf..a367af00f3 100644 --- a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c +++ b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c @@ -98,11 +98,11 @@ if (sqlca.sqlcode < 0) sqlprint();} #line 32 "whenever_do_continue.pgc" - /* declare c cursor for select ename , sal , comm from emp order by ename asc */ + /* declare c cursor for select ename , sal , comm from emp order by ename collate \"C\" asc */ #line 34 "whenever_do_continue.pgc" - { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "declare c cursor for select ename , sal , comm from emp order by ename asc", ECPGt_EOIT, ECPGt_EORT); + { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "declare c cursor for select ename , sal , comm from emp order by ename collate \"C\" asc", ECPGt_EOIT, ECPGt_EORT); #line 36 "whenever_do_continue.pgc" if (sqlca.sqlcode < 0) sqlprint();} diff --git a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr index b33329bc0c..46bc4a5600 100644 --- a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr +++ b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr @@ -32,7 +32,7 @@ [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ecpg_process_output on line 32: OK: INSERT 0 1 [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_execute on line 36: query: declare c cursor for select ename , sal , comm from emp order by ename asc; with 0 parameter(s) on connection ecpg1_regression +[NO_PID]: ecpg_execute on line 36: query: declare c cursor for select ename , sal , comm from emp order by ename collate "C" asc; with 0 parameter(s) on connection ecpg1_regression [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ecpg_execute on line 36: using PQexec [NO_PID]: sqlca: code: 0, state: 00000 @@ -44,37 +44,37 @@ [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ecpg_process_output on line 46: correctly got 1 tuples with 3 fields [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: aryan offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: Ram offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: 11110 offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: 111100 offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: 21 offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: raising sqlcode -213 on line 46: null value without indicator on line 46 -[NO_PID]: sqlca: code: -213, state: 22002 [NO_PID]: ecpg_execute on line 46: query: fetch c; with 0 parameter(s) on connection ecpg1_regression [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ecpg_execute on line 46: using PQexec [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ecpg_process_output on line 46: correctly got 1 tuples with 3 
fields [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: josh offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: aryan offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: 10000 offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: 11110 offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: 10 offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 +[NO_PID]: raising sqlcode -213 on line 46: null value without indicator on line 46 +[NO_PID]: sqlca: code: -213, state: 22002 [NO_PID]: ecpg_execute on line 46: query: fetch c; with 0 parameter(s) on connection ecpg1_regression [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ecpg_execute on line 46: using PQexec [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ecpg_process_output on line 46: correctly got 1 tuples with 3 fields [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: Ram offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: josh offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: 111100 offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: 10000 offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 -[NO_PID]: ecpg_get_data on line 46: RESULT: 21 offset: -1; array: no +[NO_PID]: ecpg_get_data on line 46: RESULT: 10 offset: -1; array: no [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ecpg_execute on line 46: query: fetch c; with 0 parameter(s) on connection ecpg1_regression [NO_PID]: sqlca: code: 0, state: 00000 diff --git a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout index 75fb6ce270..d6ac5a0280 100644 --- a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout +++ b/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout @@ -1,2 +1,2 @@ -josh 10000.00 10.00 Ram 111100.00 21.00 +josh 10000.00 10.00 diff --git a/src/interfaces/ecpg/test/preproc/.gitignore b/src/interfaces/ecpg/test/preproc/.gitignore index ffca98e8c0..fd63e645a3 100644 --- a/src/interfaces/ecpg/test/preproc/.gitignore +++ b/src/interfaces/ecpg/test/preproc/.gitignore @@ -22,3 +22,5 @@ /variable.c /whenever /whenever.c +/whenever_do_continue +/whenever_do_continue.c diff --git a/src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc b/src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc index 8ceda69927..2a925a3c54 100644 --- a/src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc +++ b/src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc @@ -31,7 +31,7 @@ int main(void) exec sql insert into emp values ('josh',10000,10); exec sql insert into emp values ('tom',20000,null); - exec sql declare c cursor for select ename, sal, comm from emp order by ename asc; + exec sql declare c cursor for select ename, sal, comm from emp order by ename collate "C" asc; exec sql open c; From 449338cc644be6035d05afb6b60f536adfd99b3e Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 25 Aug 2017 15:07:44 -0400 Subject: [PATCH 0045/1087] Improve low-level backup documentation. Our documentation hasn't really caught up with the fact that non-exclusive backups can now be taken using pg_start_backup and pg_stop_backup even on standbys. 
Update, also correcting some errors introduced by 52f8a59dd953c6820baf153e97cf07d31b8ac1d6. Updates to the 9.6 documentation are needed as well, but that will need a separate patch as some things are different on that version. David Steele, reviewed by Robert Haas and Michael Paquier Discussion: http://postgr.es/m/d4d951b9-89c0-6bc1-b6ff-d0b2dd5a8966@pgmasters.net --- doc/src/sgml/backup.sgml | 37 ++++++++++++++++++++++--------------- doc/src/sgml/func.sgml | 3 ++- 2 files changed, 24 insertions(+), 16 deletions(-) diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml index 0e7c6e2051..95aeb35507 100644 --- a/doc/src/sgml/backup.sgml +++ b/doc/src/sgml/backup.sgml @@ -889,8 +889,11 @@ SELECT pg_start_backup('label', false, false); SELECT * FROM pg_stop_backup(false, true); - This terminates the backup mode and performs an automatic switch to - the next WAL segment. The reason for the switch is to arrange for + This terminates backup mode. On a primary, it also performs an automatic + switch to the next WAL segment. On a standby, it is not possible to + automatically switch WAL segments, so you may wish to run + pg_switch_wal on the primary to perform a manual + switch. The reason for the switch is to arrange for the last WAL segment file written during the backup interval to be ready to archive. @@ -908,9 +911,12 @@ SELECT * FROM pg_stop_backup(false, true); Once the WAL segment files active during the backup are archived, you are done. The file identified by pg_stop_backup's first return value is the last segment that is required to form a complete set of - backup files. If archive_mode is enabled, + backup files. On a primary, if archive_mode is enabled and the + wait_for_archive parameter is true, pg_stop_backup does not return until the last segment has been archived. + On a standby, archive_mode must be always in order + for pg_stop_backup to wait. Archiving of these files happens automatically since you have already configured archive_command. In most cases this happens quickly, but you are advised to monitor your archive @@ -926,8 +932,9 @@ SELECT * FROM pg_stop_backup(false, true); If the backup process monitors and ensures that all WAL segment files - required for the backup are successfully archived then the second - parameter (which defaults to true) can be set to false to have + required for the backup are successfully archived then the + wait_for_archive parameter (which defaults to true) can be set + to false to have pg_stop_backup return as soon as the stop backup record is written to the WAL. By default, pg_stop_backup will wait until all WAL has been archived, which can take some time. This option @@ -943,9 +950,9 @@ SELECT * FROM pg_stop_backup(false, true); Making an exclusive low level backup The process for an exclusive backup is mostly the same as for a - non-exclusive one, but it differs in a few key steps. It does not allow - more than one concurrent backup to run, and there can be some issues on - the server if it crashes during the backup. Prior to PostgreSQL 9.6, this + non-exclusive one, but it differs in a few key steps. This type of backup + can only be taken on a primary and does not allow concurrent backups. + Prior to PostgreSQL 9.6, this was the only low-level method available, but it is now recommended that all users upgrade their scripts to use non-exclusive backups if possible. @@ -1003,6 +1010,11 @@ SELECT pg_start_backup('label', true); for things to consider during this backup. 
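In outline, the exclusive procedure described in this section reduces to the following sketch ('label' is a placeholder; passing true as the second argument requests a fast checkpoint):

SELECT pg_start_backup('label', true);   -- enter backup mode; writes backup_label
-- copy the data directory with a file-system backup tool
SELECT pg_stop_backup();                 -- leave backup mode; switches to the next WAL segment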
+ + Note that if the server crashes during the backup it may not be + possible to restart until the backup_label file has been + manually deleted from the PGDATA directory. + + @@ -1012,15 +1024,10 @@ SELECT pg_start_backup('label', true); SELECT pg_stop_backup(); - This function, when called on a primary, terminates the backup mode and + This function terminates backup mode and performs an automatic switch to the next WAL segment. The reason for the switch is to arrange for the last WAL segment written during the backup - interval to be ready to archive. When called on a standby, this function - only terminates backup mode. A subsequent WAL segment switch will be - needed in order to ensure that all WAL files needed to restore the backup - can be archived; if the primary does not have sufficient write activity - to trigger one, pg_switch_wal should be executed on - the primary. + interval to be ready to archive. diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index b43ec30a4e..28eda97273 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -18606,7 +18606,8 @@ postgres=# select pg_start_backup('label_goes_here'); - The function also creates a backup history file in the write-ahead log + When executed on a primary, the function also creates a backup history file + in the write-ahead log archive area. The history file includes the label given to pg_start_backup, the starting and ending write-ahead log locations for the backup, and the starting and ending times of the backup. The return From a772624b1d6b47ac00384901e1753f1d34b0cd10 Mon Sep 17 00:00:00 2001 From: Michael Meskes Date: Sat, 26 Aug 2017 12:57:21 +0200 Subject: [PATCH 0046/1087] Make setlocale in ECPG test cases thread-aware on Windows. Fix the threaded test cases so that they do not crash in setlocale(), which on Windows can be global or local to a thread.
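For reference, the pattern the diffs below adopt can be reduced to a minimal C sketch (assuming the MSVC C runtime, where _configthreadlocale() is declared in <locale.h>; fn is a placeholder thread body): by default, setlocale() on Windows changes the locale of the whole process, so each thread must opt into per-thread locale state before touching it.

#include <locale.h>

static void *fn(void *arg)
{
#ifdef WIN32
	/* make subsequent setlocale() calls affect only this thread */
	_configthreadlocale(_ENABLE_PER_THREAD_LOCALE);
#endif
	/* ... per-thread test logic that may change the locale ... */
	return NULL;
}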
Author: Christian Ullrich --- .../ecpg/test/expected/thread-alloc.c | 39 ++++++----- .../ecpg/test/expected/thread-descriptor.c | 19 ++++-- .../ecpg/test/expected/thread-prep.c | 67 ++++++++++--------- .../ecpg/test/expected/thread-thread.c | 60 +++++++++-------- .../test/expected/thread-thread_implicit.c | 60 +++++++++-------- src/interfaces/ecpg/test/thread/alloc.pgc | 5 ++ .../ecpg/test/thread/descriptor.pgc | 5 ++ src/interfaces/ecpg/test/thread/prep.pgc | 5 ++ src/interfaces/ecpg/test/thread/thread.pgc | 6 ++ .../ecpg/test/thread/thread_implicit.pgc | 6 ++ 10 files changed, 163 insertions(+), 109 deletions(-) diff --git a/src/interfaces/ecpg/test/expected/thread-alloc.c b/src/interfaces/ecpg/test/expected/thread-alloc.c index 9f8ac59430..1a60576533 100644 --- a/src/interfaces/ecpg/test/expected/thread-alloc.c +++ b/src/interfaces/ecpg/test/expected/thread-alloc.c @@ -22,6 +22,7 @@ main(void) #define WIN32_LEAN_AND_MEAN #include #include +#include #else #include #endif @@ -99,7 +100,7 @@ struct sqlca_t *ECPGget_sqlca(void); #endif -#line 24 "alloc.pgc" +#line 25 "alloc.pgc" #line 1 "regression.h" @@ -109,14 +110,14 @@ struct sqlca_t *ECPGget_sqlca(void); -#line 25 "alloc.pgc" +#line 26 "alloc.pgc" /* exec sql whenever sqlerror sqlprint ; */ -#line 27 "alloc.pgc" +#line 28 "alloc.pgc" /* exec sql whenever not found sqlprint ; */ -#line 28 "alloc.pgc" +#line 29 "alloc.pgc" #ifdef WIN32 @@ -127,59 +128,63 @@ static void* fn(void* arg) { int i; +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + /* exec sql begin declare section */ -#line 39 "alloc.pgc" +#line 44 "alloc.pgc" int value ; -#line 40 "alloc.pgc" +#line 45 "alloc.pgc" char name [ 100 ] ; -#line 41 "alloc.pgc" +#line 46 "alloc.pgc" char ** r = NULL ; /* exec sql end declare section */ -#line 42 "alloc.pgc" +#line 47 "alloc.pgc" value = (long)arg; sprintf(name, "Connection: %d", value); { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , name, 0); -#line 47 "alloc.pgc" +#line 52 "alloc.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 47 "alloc.pgc" +#line 52 "alloc.pgc" { ECPGsetcommit(__LINE__, "on", NULL); -#line 48 "alloc.pgc" +#line 53 "alloc.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 48 "alloc.pgc" +#line 53 "alloc.pgc" for (i = 1; i <= REPEATS; ++i) { { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select relname from pg_class where relname = 'pg_class'", ECPGt_EOIT, ECPGt_char,&(r),(long)0,(long)0,(1)*sizeof(char), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT); -#line 51 "alloc.pgc" +#line 56 "alloc.pgc" if (sqlca.sqlcode == ECPG_NOT_FOUND) sqlprint(); -#line 51 "alloc.pgc" +#line 56 "alloc.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 51 "alloc.pgc" +#line 56 "alloc.pgc" free(r); r = NULL; } { ECPGdisconnect(__LINE__, name); -#line 55 "alloc.pgc" +#line 60 "alloc.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 55 "alloc.pgc" +#line 60 "alloc.pgc" return 0; diff --git a/src/interfaces/ecpg/test/expected/thread-descriptor.c b/src/interfaces/ecpg/test/expected/thread-descriptor.c index 607df7ce24..d1c20b147e 100644 --- a/src/interfaces/ecpg/test/expected/thread-descriptor.c +++ b/src/interfaces/ecpg/test/expected/thread-descriptor.c @@ -12,6 +12,7 @@ #define WIN32_LEAN_AND_MEAN #include #include +#include #else #include #endif @@ -90,13 +91,13 @@ struct sqlca_t *ECPGget_sqlca(void); #endif -#line 15 "descriptor.pgc" +#line 16 "descriptor.pgc" /* exec sql whenever sqlerror sqlprint ; */ -#line 16 "descriptor.pgc" +#line 17 "descriptor.pgc" /* exec sql whenever not found sqlprint ; 
*/ -#line 17 "descriptor.pgc" +#line 18 "descriptor.pgc" #if defined(ENABLE_THREAD_SAFETY) && defined(WIN32) @@ -107,19 +108,23 @@ static void* fn(void* arg) { int i; +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + for (i = 1; i <= REPEATS; ++i) { ECPGallocate_desc(__LINE__, "mydesc"); -#line 29 "descriptor.pgc" +#line 34 "descriptor.pgc" if (sqlca.sqlcode < 0) sqlprint(); -#line 29 "descriptor.pgc" +#line 34 "descriptor.pgc" ECPGdeallocate_desc(__LINE__, "mydesc"); -#line 30 "descriptor.pgc" +#line 35 "descriptor.pgc" if (sqlca.sqlcode < 0) sqlprint(); -#line 30 "descriptor.pgc" +#line 35 "descriptor.pgc" } diff --git a/src/interfaces/ecpg/test/expected/thread-prep.c b/src/interfaces/ecpg/test/expected/thread-prep.c index 72ca568151..ff872f08b5 100644 --- a/src/interfaces/ecpg/test/expected/thread-prep.c +++ b/src/interfaces/ecpg/test/expected/thread-prep.c @@ -22,6 +22,7 @@ main(void) #define WIN32_LEAN_AND_MEAN #include #include +#include #else #include #endif @@ -99,7 +100,7 @@ struct sqlca_t *ECPGget_sqlca(void); #endif -#line 24 "prep.pgc" +#line 25 "prep.pgc" #line 1 "regression.h" @@ -109,14 +110,14 @@ struct sqlca_t *ECPGget_sqlca(void); -#line 25 "prep.pgc" +#line 26 "prep.pgc" /* exec sql whenever sqlerror sqlprint ; */ -#line 27 "prep.pgc" +#line 28 "prep.pgc" /* exec sql whenever not found sqlprint ; */ -#line 28 "prep.pgc" +#line 29 "prep.pgc" #ifdef WIN32 @@ -127,69 +128,73 @@ static void* fn(void* arg) { int i; +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + /* exec sql begin declare section */ -#line 39 "prep.pgc" +#line 44 "prep.pgc" int value ; -#line 40 "prep.pgc" +#line 45 "prep.pgc" char name [ 100 ] ; -#line 41 "prep.pgc" +#line 46 "prep.pgc" char query [ 256 ] = "INSERT INTO T VALUES ( ? 
)" ; /* exec sql end declare section */ -#line 42 "prep.pgc" +#line 47 "prep.pgc" value = (long)arg; sprintf(name, "Connection: %d", value); { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , name, 0); -#line 47 "prep.pgc" +#line 52 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 47 "prep.pgc" +#line 52 "prep.pgc" { ECPGsetcommit(__LINE__, "on", NULL); -#line 48 "prep.pgc" +#line 53 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 48 "prep.pgc" +#line 53 "prep.pgc" for (i = 1; i <= REPEATS; ++i) { { ECPGprepare(__LINE__, NULL, 0, "i", query); -#line 51 "prep.pgc" +#line 56 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 51 "prep.pgc" +#line 56 "prep.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "i", ECPGt_int,&(value),(long)1,(long)1,sizeof(int), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT); -#line 52 "prep.pgc" +#line 57 "prep.pgc" if (sqlca.sqlcode == ECPG_NOT_FOUND) sqlprint(); -#line 52 "prep.pgc" +#line 57 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 52 "prep.pgc" +#line 57 "prep.pgc" } { ECPGdeallocate(__LINE__, 0, NULL, "i"); -#line 54 "prep.pgc" +#line 59 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 54 "prep.pgc" +#line 59 "prep.pgc" { ECPGdisconnect(__LINE__, name); -#line 55 "prep.pgc" +#line 60 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 55 "prep.pgc" +#line 60 "prep.pgc" return 0; @@ -205,34 +210,34 @@ int main () #endif { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , NULL, 0); -#line 69 "prep.pgc" +#line 74 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 69 "prep.pgc" +#line 74 "prep.pgc" { ECPGsetcommit(__LINE__, "on", NULL); -#line 70 "prep.pgc" +#line 75 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 70 "prep.pgc" +#line 75 "prep.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table if exists T", ECPGt_EOIT, ECPGt_EORT); -#line 71 "prep.pgc" +#line 76 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 71 "prep.pgc" +#line 76 "prep.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table T ( i int )", ECPGt_EOIT, ECPGt_EORT); -#line 72 "prep.pgc" +#line 77 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 72 "prep.pgc" +#line 77 "prep.pgc" { ECPGdisconnect(__LINE__, "CURRENT"); -#line 73 "prep.pgc" +#line 78 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 73 "prep.pgc" +#line 78 "prep.pgc" #ifdef WIN32 diff --git a/src/interfaces/ecpg/test/expected/thread-thread.c b/src/interfaces/ecpg/test/expected/thread-thread.c index 61d3c5c6e4..470fbb252d 100644 --- a/src/interfaces/ecpg/test/expected/thread-thread.c +++ b/src/interfaces/ecpg/test/expected/thread-thread.c @@ -26,6 +26,7 @@ main(void) #include #else #include +#include #endif @@ -36,7 +37,7 @@ main(void) -#line 22 "thread.pgc" +#line 23 "thread.pgc" void *test_thread(void *arg); @@ -55,10 +56,10 @@ int main() /* exec sql begin declare section */ -#line 38 "thread.pgc" +#line 39 "thread.pgc" int l_rows ; /* exec sql end declare section */ -#line 39 "thread.pgc" +#line 40 "thread.pgc" /* Do not switch on debug output for regression tests. 
The threads get executed in @@ -67,22 +68,22 @@ int main() /* setup test_thread table */ { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , NULL, 0); } -#line 46 "thread.pgc" +#line 47 "thread.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table test_thread", ECPGt_EOIT, ECPGt_EORT);} -#line 47 "thread.pgc" +#line 48 "thread.pgc" /* DROP might fail */ { ECPGtrans(__LINE__, NULL, "commit");} -#line 48 "thread.pgc" +#line 49 "thread.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table test_thread ( tstamp timestamp not null default cast ( timeofday ( ) as timestamp ) , thread text not null , iteration integer not null , primary key ( thread , iteration ) )", ECPGt_EOIT, ECPGt_EORT);} -#line 53 "thread.pgc" +#line 54 "thread.pgc" { ECPGtrans(__LINE__, NULL, "commit");} -#line 54 "thread.pgc" +#line 55 "thread.pgc" { ECPGdisconnect(__LINE__, "CURRENT");} -#line 55 "thread.pgc" +#line 56 "thread.pgc" /* create, and start, threads */ @@ -114,18 +115,18 @@ int main() /* and check results */ { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , NULL, 0); } -#line 85 "thread.pgc" +#line 86 "thread.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count ( * ) from test_thread", ECPGt_EOIT, ECPGt_int,&(l_rows),(long)1,(long)1,sizeof(int), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);} -#line 86 "thread.pgc" +#line 87 "thread.pgc" { ECPGtrans(__LINE__, NULL, "commit");} -#line 87 "thread.pgc" +#line 88 "thread.pgc" { ECPGdisconnect(__LINE__, "CURRENT");} -#line 88 "thread.pgc" +#line 89 "thread.pgc" if( l_rows == (nthreads * iterations) ) printf("Success.\n"); @@ -138,17 +139,22 @@ int main() void *test_thread(void *arg) { long threadnum = (long)arg; + +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + /* exec sql begin declare section */ -#line 101 "thread.pgc" +#line 107 "thread.pgc" int l_i ; -#line 102 "thread.pgc" +#line 108 "thread.pgc" char l_connection [ 128 ] ; /* exec sql end declare section */ -#line 103 "thread.pgc" +#line 109 "thread.pgc" /* build up connection name, and connect to database */ @@ -158,13 +164,13 @@ void *test_thread(void *arg) _snprintf(l_connection, sizeof(l_connection), "thread_%03ld", threadnum); #endif /* exec sql whenever sqlerror sqlprint ; */ -#line 111 "thread.pgc" +#line 117 "thread.pgc" { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , l_connection, 0); -#line 112 "thread.pgc" +#line 118 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 112 "thread.pgc" +#line 118 "thread.pgc" if( sqlca.sqlcode != 0 ) { @@ -172,10 +178,10 @@ if (sqlca.sqlcode < 0) sqlprint();} return( NULL ); } { ECPGtrans(__LINE__, l_connection, "begin"); -#line 118 "thread.pgc" +#line 124 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 118 "thread.pgc" +#line 124 "thread.pgc" /* insert into test_thread table */ @@ -186,10 +192,10 @@ if (sqlca.sqlcode < 0) sqlprint();} ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_int,&(l_i),(long)1,(long)1,sizeof(int), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT); -#line 123 "thread.pgc" +#line 129 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 123 "thread.pgc" +#line 129 "thread.pgc" if( sqlca.sqlcode != 0 ) printf("%s: ERROR: insert failed!\n", l_connection); @@ -197,16 +203,16 @@ if (sqlca.sqlcode < 0) sqlprint();} /* all done */ { ECPGtrans(__LINE__, l_connection, "commit"); -#line 129 "thread.pgc" +#line 135 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 129 "thread.pgc" +#line 135 "thread.pgc" { 
ECPGdisconnect(__LINE__, l_connection); -#line 130 "thread.pgc" +#line 136 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 130 "thread.pgc" +#line 136 "thread.pgc" return( NULL ); } diff --git a/src/interfaces/ecpg/test/expected/thread-thread_implicit.c b/src/interfaces/ecpg/test/expected/thread-thread_implicit.c index c43c1ada46..854549e27d 100644 --- a/src/interfaces/ecpg/test/expected/thread-thread_implicit.c +++ b/src/interfaces/ecpg/test/expected/thread-thread_implicit.c @@ -27,6 +27,7 @@ main(void) #include #else #include +#include #endif @@ -37,7 +38,7 @@ main(void) -#line 23 "thread_implicit.pgc" +#line 24 "thread_implicit.pgc" void *test_thread(void *arg); @@ -56,10 +57,10 @@ int main() /* exec sql begin declare section */ -#line 39 "thread_implicit.pgc" +#line 40 "thread_implicit.pgc" int l_rows ; /* exec sql end declare section */ -#line 40 "thread_implicit.pgc" +#line 41 "thread_implicit.pgc" /* Do not switch on debug output for regression tests. The threads get executed in @@ -68,22 +69,22 @@ int main() /* setup test_thread table */ { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , NULL, 0); } -#line 47 "thread_implicit.pgc" +#line 48 "thread_implicit.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table test_thread", ECPGt_EOIT, ECPGt_EORT);} -#line 48 "thread_implicit.pgc" +#line 49 "thread_implicit.pgc" /* DROP might fail */ { ECPGtrans(__LINE__, NULL, "commit");} -#line 49 "thread_implicit.pgc" +#line 50 "thread_implicit.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table test_thread ( tstamp timestamp not null default cast ( timeofday ( ) as timestamp ) , thread text not null , iteration integer not null , primary key ( thread , iteration ) )", ECPGt_EOIT, ECPGt_EORT);} -#line 54 "thread_implicit.pgc" +#line 55 "thread_implicit.pgc" { ECPGtrans(__LINE__, NULL, "commit");} -#line 55 "thread_implicit.pgc" +#line 56 "thread_implicit.pgc" { ECPGdisconnect(__LINE__, "CURRENT");} -#line 56 "thread_implicit.pgc" +#line 57 "thread_implicit.pgc" /* create, and start, threads */ @@ -115,18 +116,18 @@ int main() /* and check results */ { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , NULL, 0); } -#line 86 "thread_implicit.pgc" +#line 87 "thread_implicit.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select count ( * ) from test_thread", ECPGt_EOIT, ECPGt_int,&(l_rows),(long)1,(long)1,sizeof(int), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);} -#line 87 "thread_implicit.pgc" +#line 88 "thread_implicit.pgc" { ECPGtrans(__LINE__, NULL, "commit");} -#line 88 "thread_implicit.pgc" +#line 89 "thread_implicit.pgc" { ECPGdisconnect(__LINE__, "CURRENT");} -#line 89 "thread_implicit.pgc" +#line 90 "thread_implicit.pgc" if( l_rows == (nthreads * iterations) ) printf("Success.\n"); @@ -139,17 +140,22 @@ int main() void *test_thread(void *arg) { long threadnum = (long)arg; + +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + /* exec sql begin declare section */ -#line 102 "thread_implicit.pgc" +#line 108 "thread_implicit.pgc" int l_i ; -#line 103 "thread_implicit.pgc" +#line 109 "thread_implicit.pgc" char l_connection [ 128 ] ; /* exec sql end declare section */ -#line 104 "thread_implicit.pgc" +#line 110 "thread_implicit.pgc" /* build up connection name, and connect to database */ @@ -159,13 +165,13 @@ void *test_thread(void *arg) _snprintf(l_connection, sizeof(l_connection), "thread_%03ld", threadnum); #endif /* exec sql whenever sqlerror sqlprint ; */ -#line 112 "thread_implicit.pgc" +#line 118 
"thread_implicit.pgc" { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , l_connection, 0); -#line 113 "thread_implicit.pgc" +#line 119 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 113 "thread_implicit.pgc" +#line 119 "thread_implicit.pgc" if( sqlca.sqlcode != 0 ) { @@ -173,10 +179,10 @@ if (sqlca.sqlcode < 0) sqlprint();} return( NULL ); } { ECPGtrans(__LINE__, NULL, "begin"); -#line 119 "thread_implicit.pgc" +#line 125 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 119 "thread_implicit.pgc" +#line 125 "thread_implicit.pgc" /* insert into test_thread table */ @@ -187,10 +193,10 @@ if (sqlca.sqlcode < 0) sqlprint();} ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_int,&(l_i),(long)1,(long)1,sizeof(int), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT); -#line 124 "thread_implicit.pgc" +#line 130 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 124 "thread_implicit.pgc" +#line 130 "thread_implicit.pgc" if( sqlca.sqlcode != 0 ) printf("%s: ERROR: insert failed!\n", l_connection); @@ -198,16 +204,16 @@ if (sqlca.sqlcode < 0) sqlprint();} /* all done */ { ECPGtrans(__LINE__, NULL, "commit"); -#line 130 "thread_implicit.pgc" +#line 136 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 130 "thread_implicit.pgc" +#line 136 "thread_implicit.pgc" { ECPGdisconnect(__LINE__, l_connection); -#line 131 "thread_implicit.pgc" +#line 137 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 131 "thread_implicit.pgc" +#line 137 "thread_implicit.pgc" return( NULL ); } diff --git a/src/interfaces/ecpg/test/thread/alloc.pgc b/src/interfaces/ecpg/test/thread/alloc.pgc index ea98495be4..8e6d042d89 100644 --- a/src/interfaces/ecpg/test/thread/alloc.pgc +++ b/src/interfaces/ecpg/test/thread/alloc.pgc @@ -13,6 +13,7 @@ main(void) #define WIN32_LEAN_AND_MEAN #include #include +#include #else #include #endif @@ -35,6 +36,10 @@ static void* fn(void* arg) { int i; +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + EXEC SQL BEGIN DECLARE SECTION; int value; char name[100]; diff --git a/src/interfaces/ecpg/test/thread/descriptor.pgc b/src/interfaces/ecpg/test/thread/descriptor.pgc index e07a5e22b7..c88c05a8a5 100644 --- a/src/interfaces/ecpg/test/thread/descriptor.pgc +++ b/src/interfaces/ecpg/test/thread/descriptor.pgc @@ -3,6 +3,7 @@ #define WIN32_LEAN_AND_MEAN #include #include +#include #else #include #endif @@ -24,6 +25,10 @@ static void* fn(void* arg) { int i; +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + for (i = 1; i <= REPEATS; ++i) { EXEC SQL ALLOCATE DESCRIPTOR mydesc; diff --git a/src/interfaces/ecpg/test/thread/prep.pgc b/src/interfaces/ecpg/test/thread/prep.pgc index 45205ddc8b..1ec96767af 100644 --- a/src/interfaces/ecpg/test/thread/prep.pgc +++ b/src/interfaces/ecpg/test/thread/prep.pgc @@ -13,6 +13,7 @@ main(void) #define WIN32_LEAN_AND_MEAN #include #include +#include #else #include #endif @@ -35,6 +36,10 @@ static void* fn(void* arg) { int i; +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + EXEC SQL BEGIN DECLARE SECTION; int value; char name[100]; diff --git a/src/interfaces/ecpg/test/thread/thread.pgc b/src/interfaces/ecpg/test/thread/thread.pgc index cc23b82484..f08aacdee5 100644 --- a/src/interfaces/ecpg/test/thread/thread.pgc +++ b/src/interfaces/ecpg/test/thread/thread.pgc @@ -17,6 +17,7 @@ main(void) #include #else #include +#include #endif exec sql include ../regression; @@ -97,6 +98,11 @@ int main() void 
*test_thread(void *arg) { long threadnum = (long)arg; + +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + EXEC SQL BEGIN DECLARE SECTION; int l_i; char l_connection[128]; diff --git a/src/interfaces/ecpg/test/thread/thread_implicit.pgc b/src/interfaces/ecpg/test/thread/thread_implicit.pgc index 96e0e993ac..aab758ed92 100644 --- a/src/interfaces/ecpg/test/thread/thread_implicit.pgc +++ b/src/interfaces/ecpg/test/thread/thread_implicit.pgc @@ -18,6 +18,7 @@ main(void) #include #else #include +#include #endif exec sql include ../regression; @@ -98,6 +99,11 @@ int main() void *test_thread(void *arg) { long threadnum = (long)arg; + +#ifdef WIN32 + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif + EXEC SQL BEGIN DECLARE SECTION; int l_i; char l_connection[128]; From 2073c641b43e1310784dc40aef32f71119313bdc Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 26 Aug 2017 09:21:46 -0400 Subject: [PATCH 0047/1087] pg_test_timing: Some NLS fixes The string "% of total" was marked by xgettext to be a c-format, but it is actually not, so mark up the source to prevent that. Compute the column widths of the final display dynamically based on the translated strings, so that translations don't mess up the display accidentally. --- src/bin/pg_test_timing/pg_test_timing.c | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/src/bin/pg_test_timing/pg_test_timing.c b/src/bin/pg_test_timing/pg_test_timing.c index 2f1ab7cd60..6e2fd1ab8c 100644 --- a/src/bin/pg_test_timing/pg_test_timing.c +++ b/src/bin/pg_test_timing/pg_test_timing.c @@ -172,13 +172,22 @@ output(uint64 loop_count) { int64 max_bit = 31, i; + char *header1 = _("< us"); + char *header2 = /* xgettext:no-c-format */ _("% of total"); + char *header3 = _("count"); + int len1 = strlen(header1); + int len2 = strlen(header2); + int len3 = strlen(header3); /* find highest bit value */ while (max_bit > 0 && histogram[max_bit] == 0) max_bit--; printf(_("Histogram of timing durations:\n")); - printf("%6s %10s %10s\n", _("< us"), _("% of total"), _("count")); + printf("%*s %*s %*s\n", + Max(6, len1), header1, + Max(10, len2), header2, + Max(10, len3), header3); for (i = 0; i <= max_bit; i++) { @@ -186,7 +195,9 @@ output(uint64 loop_count) /* lame hack to work around INT64_FORMAT deficiencies */ snprintf(buf, sizeof(buf), INT64_FORMAT, histogram[i]); - printf("%6ld %9.5f %10s\n", 1l << i, - (double) histogram[i] * 100 / loop_count, buf); + printf("%*ld %*.5f %*s\n", + Max(6, len1), 1l << i, + Max(10, len2) - 1, (double) histogram[i] * 100 / loop_count, + Max(10, len3), buf); } } From 04fbe0e4516d26de9420637f6fc90041e574b4b0 Mon Sep 17 00:00:00 2001 From: Michael Meskes Date: Sat, 26 Aug 2017 19:07:25 +0200 Subject: [PATCH 0048/1087] Changed order of statements and added an additional MSVC safeguard to make ecpg thread test cases work on Windows.
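In outline, each thread body in the diffs below now takes the following shape (fn is a placeholder name): the inner _MSC_VER check is needed because _configthreadlocale() is an MSVC runtime function that other Windows toolchains, such as MinGW, may lack, and the call is moved after the declarations so that the generated C stays valid for compilers that require declarations before statements.

static void *fn(void *arg)
{
	int i;                      /* declarations first, as C89 requires */

#ifdef WIN32
#ifdef _MSC_VER                 /* _configthreadlocale() requires MSVC */
	_configthreadlocale(_ENABLE_PER_THREAD_LOCALE);
#endif
#endif

	/* ... per-thread test logic ... */
	return NULL;
}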
--- .../ecpg/test/expected/thread-alloc.c | 36 ++++++----- .../ecpg/test/expected/thread-descriptor.c | 10 +-- .../ecpg/test/expected/thread-prep.c | 64 ++++++++++--------- .../ecpg/test/expected/thread-thread.c | 38 +++++------ .../test/expected/thread-thread_implicit.c | 38 +++++------ src/interfaces/ecpg/test/thread/alloc.pgc | 10 +-- .../ecpg/test/thread/descriptor.pgc | 2 + src/interfaces/ecpg/test/thread/prep.pgc | 10 +-- src/interfaces/ecpg/test/thread/thread.pgc | 10 +-- .../ecpg/test/thread/thread_implicit.pgc | 10 +-- 10 files changed, 124 insertions(+), 104 deletions(-) diff --git a/src/interfaces/ecpg/test/expected/thread-alloc.c b/src/interfaces/ecpg/test/expected/thread-alloc.c index 1a60576533..e7b69b387f 100644 --- a/src/interfaces/ecpg/test/expected/thread-alloc.c +++ b/src/interfaces/ecpg/test/expected/thread-alloc.c @@ -128,63 +128,65 @@ static void* fn(void* arg) { int i; -#ifdef WIN32 - _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); -#endif - /* exec sql begin declare section */ -#line 44 "alloc.pgc" +#line 40 "alloc.pgc" int value ; -#line 45 "alloc.pgc" +#line 41 "alloc.pgc" char name [ 100 ] ; -#line 46 "alloc.pgc" +#line 42 "alloc.pgc" char ** r = NULL ; /* exec sql end declare section */ -#line 47 "alloc.pgc" +#line 43 "alloc.pgc" + +#ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif +#endif value = (long)arg; sprintf(name, "Connection: %d", value); { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , name, 0); -#line 52 "alloc.pgc" +#line 54 "alloc.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 52 "alloc.pgc" +#line 54 "alloc.pgc" { ECPGsetcommit(__LINE__, "on", NULL); -#line 53 "alloc.pgc" +#line 55 "alloc.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 53 "alloc.pgc" +#line 55 "alloc.pgc" for (i = 1; i <= REPEATS; ++i) { { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "select relname from pg_class where relname = 'pg_class'", ECPGt_EOIT, ECPGt_char,&(r),(long)0,(long)0,(1)*sizeof(char), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT); -#line 56 "alloc.pgc" +#line 58 "alloc.pgc" if (sqlca.sqlcode == ECPG_NOT_FOUND) sqlprint(); -#line 56 "alloc.pgc" +#line 58 "alloc.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 56 "alloc.pgc" +#line 58 "alloc.pgc" free(r); r = NULL; } { ECPGdisconnect(__LINE__, name); -#line 60 "alloc.pgc" +#line 62 "alloc.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 60 "alloc.pgc" +#line 62 "alloc.pgc" return 0; diff --git a/src/interfaces/ecpg/test/expected/thread-descriptor.c b/src/interfaces/ecpg/test/expected/thread-descriptor.c index d1c20b147e..03cebad603 100644 --- a/src/interfaces/ecpg/test/expected/thread-descriptor.c +++ b/src/interfaces/ecpg/test/expected/thread-descriptor.c @@ -109,22 +109,24 @@ static void* fn(void* arg) int i; #ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif #endif for (i = 1; i <= REPEATS; ++i) { ECPGallocate_desc(__LINE__, "mydesc"); -#line 34 "descriptor.pgc" +#line 36 "descriptor.pgc" if (sqlca.sqlcode < 0) sqlprint(); -#line 34 "descriptor.pgc" +#line 36 "descriptor.pgc" ECPGdeallocate_desc(__LINE__, "mydesc"); -#line 35 "descriptor.pgc" +#line 37 "descriptor.pgc" if (sqlca.sqlcode < 0) sqlprint(); -#line 35 "descriptor.pgc" +#line 37 "descriptor.pgc" } diff --git a/src/interfaces/ecpg/test/expected/thread-prep.c b/src/interfaces/ecpg/test/expected/thread-prep.c index ff872f08b5..94e02933cd 100644 --- a/src/interfaces/ecpg/test/expected/thread-prep.c +++ 
b/src/interfaces/ecpg/test/expected/thread-prep.c @@ -128,73 +128,75 @@ static void* fn(void* arg) { int i; -#ifdef WIN32 - _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); -#endif - /* exec sql begin declare section */ -#line 44 "prep.pgc" +#line 40 "prep.pgc" int value ; -#line 45 "prep.pgc" +#line 41 "prep.pgc" char name [ 100 ] ; -#line 46 "prep.pgc" +#line 42 "prep.pgc" char query [ 256 ] = "INSERT INTO T VALUES ( ? )" ; /* exec sql end declare section */ -#line 47 "prep.pgc" +#line 43 "prep.pgc" +#ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif +#endif + value = (long)arg; sprintf(name, "Connection: %d", value); { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , name, 0); -#line 52 "prep.pgc" +#line 54 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 52 "prep.pgc" +#line 54 "prep.pgc" { ECPGsetcommit(__LINE__, "on", NULL); -#line 53 "prep.pgc" +#line 55 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 53 "prep.pgc" +#line 55 "prep.pgc" for (i = 1; i <= REPEATS; ++i) { { ECPGprepare(__LINE__, NULL, 0, "i", query); -#line 56 "prep.pgc" +#line 58 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 56 "prep.pgc" +#line 58 "prep.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, "i", ECPGt_int,&(value),(long)1,(long)1,sizeof(int), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT); -#line 57 "prep.pgc" +#line 59 "prep.pgc" if (sqlca.sqlcode == ECPG_NOT_FOUND) sqlprint(); -#line 57 "prep.pgc" +#line 59 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 57 "prep.pgc" +#line 59 "prep.pgc" } { ECPGdeallocate(__LINE__, 0, NULL, "i"); -#line 59 "prep.pgc" +#line 61 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 59 "prep.pgc" +#line 61 "prep.pgc" { ECPGdisconnect(__LINE__, name); -#line 60 "prep.pgc" +#line 62 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 60 "prep.pgc" +#line 62 "prep.pgc" return 0; @@ -210,34 +212,34 @@ int main () #endif { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , NULL, 0); -#line 74 "prep.pgc" +#line 76 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 74 "prep.pgc" +#line 76 "prep.pgc" { ECPGsetcommit(__LINE__, "on", NULL); -#line 75 "prep.pgc" +#line 77 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 75 "prep.pgc" +#line 77 "prep.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table if exists T", ECPGt_EOIT, ECPGt_EORT); -#line 76 "prep.pgc" +#line 78 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 76 "prep.pgc" +#line 78 "prep.pgc" { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "create table T ( i int )", ECPGt_EOIT, ECPGt_EORT); -#line 77 "prep.pgc" +#line 79 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 77 "prep.pgc" +#line 79 "prep.pgc" { ECPGdisconnect(__LINE__, "CURRENT"); -#line 78 "prep.pgc" +#line 80 "prep.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 78 "prep.pgc" +#line 80 "prep.pgc" #ifdef WIN32 diff --git a/src/interfaces/ecpg/test/expected/thread-thread.c b/src/interfaces/ecpg/test/expected/thread-thread.c index 470fbb252d..6e809d60fb 100644 --- a/src/interfaces/ecpg/test/expected/thread-thread.c +++ b/src/interfaces/ecpg/test/expected/thread-thread.c @@ -140,22 +140,24 @@ void *test_thread(void *arg) { long threadnum = (long)arg; -#ifdef WIN32 - _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); -#endif - /* exec sql begin declare section */ -#line 107 "thread.pgc" +#line 103 "thread.pgc" int l_i ; -#line 108 "thread.pgc" +#line 104 "thread.pgc" char l_connection [ 128 ] ; /* exec sql end 
declare section */ -#line 109 "thread.pgc" +#line 105 "thread.pgc" + +#ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif +#endif /* build up connection name, and connect to database */ #ifndef _MSC_VER @@ -164,13 +166,13 @@ void *test_thread(void *arg) _snprintf(l_connection, sizeof(l_connection), "thread_%03ld", threadnum); #endif /* exec sql whenever sqlerror sqlprint ; */ -#line 117 "thread.pgc" +#line 119 "thread.pgc" { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , l_connection, 0); -#line 118 "thread.pgc" +#line 120 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 118 "thread.pgc" +#line 120 "thread.pgc" if( sqlca.sqlcode != 0 ) { @@ -178,10 +180,10 @@ if (sqlca.sqlcode < 0) sqlprint();} return( NULL ); } { ECPGtrans(__LINE__, l_connection, "begin"); -#line 124 "thread.pgc" +#line 126 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 124 "thread.pgc" +#line 126 "thread.pgc" /* insert into test_thread table */ @@ -192,10 +194,10 @@ if (sqlca.sqlcode < 0) sqlprint();} ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_int,&(l_i),(long)1,(long)1,sizeof(int), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT); -#line 129 "thread.pgc" +#line 131 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 129 "thread.pgc" +#line 131 "thread.pgc" if( sqlca.sqlcode != 0 ) printf("%s: ERROR: insert failed!\n", l_connection); @@ -203,16 +205,16 @@ if (sqlca.sqlcode < 0) sqlprint();} /* all done */ { ECPGtrans(__LINE__, l_connection, "commit"); -#line 135 "thread.pgc" +#line 137 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 135 "thread.pgc" +#line 137 "thread.pgc" { ECPGdisconnect(__LINE__, l_connection); -#line 136 "thread.pgc" +#line 138 "thread.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 136 "thread.pgc" +#line 138 "thread.pgc" return( NULL ); } diff --git a/src/interfaces/ecpg/test/expected/thread-thread_implicit.c b/src/interfaces/ecpg/test/expected/thread-thread_implicit.c index 854549e27d..b42c556633 100644 --- a/src/interfaces/ecpg/test/expected/thread-thread_implicit.c +++ b/src/interfaces/ecpg/test/expected/thread-thread_implicit.c @@ -141,22 +141,24 @@ void *test_thread(void *arg) { long threadnum = (long)arg; -#ifdef WIN32 - _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); -#endif - /* exec sql begin declare section */ -#line 108 "thread_implicit.pgc" +#line 104 "thread_implicit.pgc" int l_i ; -#line 109 "thread_implicit.pgc" +#line 105 "thread_implicit.pgc" char l_connection [ 128 ] ; /* exec sql end declare section */ -#line 110 "thread_implicit.pgc" +#line 106 "thread_implicit.pgc" + +#ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif +#endif /* build up connection name, and connect to database */ #ifndef _MSC_VER @@ -165,13 +167,13 @@ void *test_thread(void *arg) _snprintf(l_connection, sizeof(l_connection), "thread_%03ld", threadnum); #endif /* exec sql whenever sqlerror sqlprint ; */ -#line 118 "thread_implicit.pgc" +#line 120 "thread_implicit.pgc" { ECPGconnect(__LINE__, 0, "ecpg1_regression" , NULL, NULL , l_connection, 0); -#line 119 "thread_implicit.pgc" +#line 121 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 119 "thread_implicit.pgc" +#line 121 "thread_implicit.pgc" if( sqlca.sqlcode != 0 ) { @@ -179,10 +181,10 @@ if (sqlca.sqlcode < 0) sqlprint();} return( NULL ); } { ECPGtrans(__LINE__, NULL, "begin"); -#line 125 "thread_implicit.pgc" +#line 127 "thread_implicit.pgc" if (sqlca.sqlcode < 0) 
sqlprint();} -#line 125 "thread_implicit.pgc" +#line 127 "thread_implicit.pgc" /* insert into test_thread table */ @@ -193,10 +195,10 @@ if (sqlca.sqlcode < 0) sqlprint();} ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_int,&(l_i),(long)1,(long)1,sizeof(int), ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT); -#line 130 "thread_implicit.pgc" +#line 132 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 130 "thread_implicit.pgc" +#line 132 "thread_implicit.pgc" if( sqlca.sqlcode != 0 ) printf("%s: ERROR: insert failed!\n", l_connection); @@ -204,16 +206,16 @@ if (sqlca.sqlcode < 0) sqlprint();} /* all done */ { ECPGtrans(__LINE__, NULL, "commit"); -#line 136 "thread_implicit.pgc" +#line 138 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 136 "thread_implicit.pgc" +#line 138 "thread_implicit.pgc" { ECPGdisconnect(__LINE__, l_connection); -#line 137 "thread_implicit.pgc" +#line 139 "thread_implicit.pgc" if (sqlca.sqlcode < 0) sqlprint();} -#line 137 "thread_implicit.pgc" +#line 139 "thread_implicit.pgc" return( NULL ); } diff --git a/src/interfaces/ecpg/test/thread/alloc.pgc b/src/interfaces/ecpg/test/thread/alloc.pgc index 8e6d042d89..b13bcb860b 100644 --- a/src/interfaces/ecpg/test/thread/alloc.pgc +++ b/src/interfaces/ecpg/test/thread/alloc.pgc @@ -36,16 +36,18 @@ static void* fn(void* arg) { int i; -#ifdef WIN32 - _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); -#endif - EXEC SQL BEGIN DECLARE SECTION; int value; char name[100]; char **r = NULL; EXEC SQL END DECLARE SECTION; +#ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif +#endif + value = (long)arg; sprintf(name, "Connection: %d", value); diff --git a/src/interfaces/ecpg/test/thread/descriptor.pgc b/src/interfaces/ecpg/test/thread/descriptor.pgc index c88c05a8a5..3f28c6d329 100644 --- a/src/interfaces/ecpg/test/thread/descriptor.pgc +++ b/src/interfaces/ecpg/test/thread/descriptor.pgc @@ -26,7 +26,9 @@ static void* fn(void* arg) int i; #ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif #endif for (i = 1; i <= REPEATS; ++i) diff --git a/src/interfaces/ecpg/test/thread/prep.pgc b/src/interfaces/ecpg/test/thread/prep.pgc index 1ec96767af..3a2467c9ab 100644 --- a/src/interfaces/ecpg/test/thread/prep.pgc +++ b/src/interfaces/ecpg/test/thread/prep.pgc @@ -36,16 +36,18 @@ static void* fn(void* arg) { int i; -#ifdef WIN32 - _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); -#endif - EXEC SQL BEGIN DECLARE SECTION; int value; char name[100]; char query[256] = "INSERT INTO T VALUES ( ? 
)"; EXEC SQL END DECLARE SECTION; +#ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif +#endif + value = (long)arg; sprintf(name, "Connection: %d", value); diff --git a/src/interfaces/ecpg/test/thread/thread.pgc b/src/interfaces/ecpg/test/thread/thread.pgc index f08aacdee5..c5fbe928d9 100644 --- a/src/interfaces/ecpg/test/thread/thread.pgc +++ b/src/interfaces/ecpg/test/thread/thread.pgc @@ -99,15 +99,17 @@ void *test_thread(void *arg) { long threadnum = (long)arg; -#ifdef WIN32 - _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); -#endif - EXEC SQL BEGIN DECLARE SECTION; int l_i; char l_connection[128]; EXEC SQL END DECLARE SECTION; +#ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif +#endif + /* build up connection name, and connect to database */ #ifndef _MSC_VER snprintf(l_connection, sizeof(l_connection), "thread_%03ld", threadnum); diff --git a/src/interfaces/ecpg/test/thread/thread_implicit.pgc b/src/interfaces/ecpg/test/thread/thread_implicit.pgc index aab758ed92..d65f17c073 100644 --- a/src/interfaces/ecpg/test/thread/thread_implicit.pgc +++ b/src/interfaces/ecpg/test/thread/thread_implicit.pgc @@ -100,15 +100,17 @@ void *test_thread(void *arg) { long threadnum = (long)arg; -#ifdef WIN32 - _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); -#endif - EXEC SQL BEGIN DECLARE SECTION; int l_i; char l_connection[128]; EXEC SQL END DECLARE SECTION; +#ifdef WIN32 +#ifdef _MSC_VER /* requires MSVC */ + _configthreadlocale(_ENABLE_PER_THREAD_LOCALE); +#endif +#endif + /* build up connection name, and connect to database */ #ifndef _MSC_VER snprintf(l_connection, sizeof(l_connection), "thread_%03ld", threadnum); From f1b10496a55a64b2872633850e55a2cd9d1c9108 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 26 Aug 2017 15:19:24 -0400 Subject: [PATCH 0049/1087] First-draft release notes for 9.6.5. As usual, the release notes for other branches will be made by cutting these down, but put them up for community review first. Note the first entry is only for 9.4. --- doc/src/sgml/release-9.6.sgml | 265 ++++++++++++++++++++++++++++++++++ 1 file changed, 265 insertions(+) diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index 078ac87841..e9bee98732 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -1,6 +1,271 @@ + + Release 9.6.5 + + + Release date: + 2017-08-31 + + + + This release contains a small number of fixes from 9.6.4. + For information about new features in the 9.6 major release, see + . + + + + Migration to Version 9.6.5 + + + A dump/restore is not required for those running 9.6.X. + + + + However, if you are upgrading from a version earlier than 9.6.4, + see . + + + + + Changes + + + + + + + Fix failure of walsender processes to respond to shutdown signals + (Marco Nenciarini) + + + + A missed flag update resulted in walsenders continuing to run as long + as they had a standby server connected, preventing primary-server + shutdown unless immediate shutdown mode is used. + + + + + + + Show foreign tables + in information_schema.table_privileges + view (Peter Eisentraut) + + + + All other relevant information_schema views include + foreign tables, but this one ignored them. + + + + Since this view definition is installed by initdb, + merely upgrading will not fix the problem. 
If you need to fix this + in an existing installation, you can, as a superuser, do this + in psql: + +BEGIN; +DROP SCHEMA information_schema CASCADE; +\i SHAREDIR/information_schema.sql +COMMIT; + + (Run pg_config --sharedir if you're uncertain + where SHAREDIR is.) This must be repeated in each + database to be fixed. + + + + + + + Clean up handling of a fatal exit (e.g., due to receipt + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) + + + + This situation could result in an assertion failure. In production + builds, the exit would still occur, but it would log an unexpected + message about cannot drop active portal. + + + + + + + Remove assertion that could trigger during a fatal exit (Tom Lane) + + + + + + + Correctly identify columns that are of a range type or domain type over + a composite type or domain type being searched for (Tom Lane) + + + + Certain ALTER commands that change the definition of a + composite type or domain type are supposed to fail if there are any + stored values of that type in the database, because they lack the + infrastructure needed to update or check such values. Previously, + these checks could miss relevant values that are wrapped inside range + types or sub-domains, possibly allowing the database to become + inconsistent. + + + + + + + Prevent crash when passing fixed-length pass-by-reference data types + to parallel worker processes (Tom Lane) + + + + + + + Fix crash in pg_restore when using parallel mode and + using a list file to select a subset of items to restore (Tom Lane) + + + + + + + Change ecpg's parser to allow RETURNING + clauses without attached C variables (Michael Meskes) + + + + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) + rather than using it to define values to be returned to the client. + + + + + + + Change ecpg's parser to recognize backslash + continuation of C preprocessor command lines (Michael Meskes) + + + + + + + Improve selection of compiler flags for PL/Perl on Windows (Tom Lane) + + + + This fix avoids possible crashes of PL/Perl due to inconsistent + assumptions about the width of time_t values. + A side-effect that may be visible to extension developers is + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. + + + + + + + Fix make check to behave correctly when invoked via a + non-GNU make program (Thomas Munro) + + + + + + + + Release 9.6.4 From 6a5366e69acf9ae04988488f1e365705ff591d65 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 26 Aug 2017 16:50:19 -0400 Subject: [PATCH 0050/1087] Doc: update v10 release notes through today. --- doc/src/sgml/release-10.sgml | 33 ++++++++++++++++++++++++++++++++- 1 file changed, 32 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 269f1aac86..1330a9992c 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -6,7 +6,7 @@ Release date: - 2017-??-?? (current as of 2017-08-05, commit eccead9ed) + 2017-??-?? 
(current as of 2017-08-26, commit 145ca364d) @@ -1490,6 +1490,7 @@ Allow users to disable + + + + Remove restriction on placement of + + @@ -2464,6 +2475,16 @@ + + Support using synchronized snapshots when dumping from a standby + server (Petr Jelinek) + + + + + @@ -2851,6 +2872,16 @@ + + + + Allow WaitLatchOrSocket() to wait for socket + connection on Windows (Andres Freund) + + + + + Release 9.2.23 + + + Release date: + 2017-08-31 + + + + This release contains a small number of fixes from 9.2.22. + For information about new features in the 9.2 major release, see + . + + + + The PostgreSQL community will stop releasing updates + for the 9.2.X release series in September 2017. + Users are encouraged to update to a newer release branch soon. + + + + Migration to Version 9.2.23 + + + A dump/restore is not required for those running 9.2.X. + + + + However, if you are upgrading from a version earlier than 9.2.22, + see . + + + + + + Changes + + + + + + Show foreign tables + in information_schema.table_privileges + view (Peter Eisentraut) + + + + All other relevant information_schema views include + foreign tables, but this one ignored them. + + + + Since this view definition is installed by initdb, + merely upgrading will not fix the problem. If you need to fix this + in an existing installation, you can, as a superuser, do this + in psql: + +BEGIN; +DROP SCHEMA information_schema CASCADE; +\i SHAREDIR/information_schema.sql +COMMIT; + + (Run pg_config --sharedir if you're uncertain + where SHAREDIR is.) This must be repeated in each + database to be fixed. + + + + + + Clean up handling of a fatal exit (e.g., due to receipt + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) + + + + This situation could result in an assertion failure. In production + builds, the exit would still occur, but it would log an unexpected + message about cannot drop active portal. + + + + + + Remove assertion that could trigger during a fatal exit (Tom Lane) + + + + + + Correctly identify columns that are of a range type or domain type over + a composite type or domain type being searched for (Tom Lane) + + + + Certain ALTER commands that change the definition of a + composite type or domain type are supposed to fail if there are any + stored values of that type in the database, because they lack the + infrastructure needed to update or check such values. Previously, + these checks could miss relevant values that are wrapped inside range + types or sub-domains, possibly allowing the database to become + inconsistent. + + + + + + Change ecpg's parser to allow RETURNING + clauses without attached C variables (Michael Meskes) + + + + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) + rather than using it to define values to be returned to the client. + + + + + + Improve selection of compiler flags for PL/Perl on Windows (Tom Lane) + + + + This fix avoids possible crashes of PL/Perl due to inconsistent + assumptions about the width of time_t values. + A side-effect that may be visible to extension developers is + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. 
+ + + + + + + + Release 9.2.22 diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml index e95efefd66..a4ec3edb6c 100644 --- a/doc/src/sgml/release-9.3.sgml +++ b/doc/src/sgml/release-9.3.sgml @@ -1,6 +1,146 @@ + + Release 9.3.19 + + + Release date: + 2017-08-31 + + + + This release contains a small number of fixes from 9.3.18. + For information about new features in the 9.3 major release, see + . + + + + Migration to Version 9.3.19 + + + A dump/restore is not required for those running 9.3.X. + + + + However, if you are upgrading from a version earlier than 9.3.18, + see . + + + + + + Changes + + + + + + Show foreign tables + in information_schema.table_privileges + view (Peter Eisentraut) + + + + All other relevant information_schema views include + foreign tables, but this one ignored them. + + + + Since this view definition is installed by initdb, + merely upgrading will not fix the problem. If you need to fix this + in an existing installation, you can, as a superuser, do this + in psql: + +BEGIN; +DROP SCHEMA information_schema CASCADE; +\i SHAREDIR/information_schema.sql +COMMIT; + + (Run pg_config --sharedir if you're uncertain + where SHAREDIR is.) This must be repeated in each + database to be fixed. + + + + + + Clean up handling of a fatal exit (e.g., due to receipt + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) + + + + This situation could result in an assertion failure. In production + builds, the exit would still occur, but it would log an unexpected + message about cannot drop active portal. + + + + + + Remove assertion that could trigger during a fatal exit (Tom Lane) + + + + + + Correctly identify columns that are of a range type or domain type over + a composite type or domain type being searched for (Tom Lane) + + + + Certain ALTER commands that change the definition of a + composite type or domain type are supposed to fail if there are any + stored values of that type in the database, because they lack the + infrastructure needed to update or check such values. Previously, + these checks could miss relevant values that are wrapped inside range + types or sub-domains, possibly allowing the database to become + inconsistent. + + + + + + Fix crash in pg_restore when using parallel mode and + using a list file to select a subset of items to restore (Tom Lane) + + + + + + Change ecpg's parser to allow RETURNING + clauses without attached C variables (Michael Meskes) + + + + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) + rather than using it to define values to be returned to the client. + + + + + + Improve selection of compiler flags for PL/Perl on Windows (Tom Lane) + + + + This fix avoids possible crashes of PL/Perl due to inconsistent + assumptions about the width of time_t values. + A side-effect that may be visible to extension developers is + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. + + + + + + + + Release 9.3.18 diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index c616c1a514..cc7fb4ee7e 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -1,6 +1,162 @@ + + Release 9.4.14 + + + Release date: + 2017-08-31 + + + + This release contains a small number of fixes from 9.4.13. 
+ For information about new features in the 9.4 major release, see + . + + + + Migration to Version 9.4.14 + + + A dump/restore is not required for those running 9.4.X. + + + + However, if you are upgrading from a version earlier than 9.4.13, + see . + + + + + Changes + + + + + + + Fix failure of walsender processes to respond to shutdown signals + (Marco Nenciarini) + + + + A missed flag update resulted in walsenders continuing to run as long + as they had a standby server connected, preventing primary-server + shutdown unless immediate shutdown mode is used. + + + + + + Show foreign tables + in information_schema.table_privileges + view (Peter Eisentraut) + + + + All other relevant information_schema views include + foreign tables, but this one ignored them. + + + + Since this view definition is installed by initdb, + merely upgrading will not fix the problem. If you need to fix this + in an existing installation, you can, as a superuser, do this + in psql: + +BEGIN; +DROP SCHEMA information_schema CASCADE; +\i SHAREDIR/information_schema.sql +COMMIT; + + (Run pg_config --sharedir if you're uncertain + where SHAREDIR is.) This must be repeated in each + database to be fixed. + + + + + + Clean up handling of a fatal exit (e.g., due to receipt + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) + + + + This situation could result in an assertion failure. In production + builds, the exit would still occur, but it would log an unexpected + message about cannot drop active portal. + + + + + + Remove assertion that could trigger during a fatal exit (Tom Lane) + + + + + + Correctly identify columns that are of a range type or domain type over + a composite type or domain type being searched for (Tom Lane) + + + + Certain ALTER commands that change the definition of a + composite type or domain type are supposed to fail if there are any + stored values of that type in the database, because they lack the + infrastructure needed to update or check such values. Previously, + these checks could miss relevant values that are wrapped inside range + types or sub-domains, possibly allowing the database to become + inconsistent. + + + + + + Fix crash in pg_restore when using parallel mode and + using a list file to select a subset of items to restore (Tom Lane) + + + + + + Change ecpg's parser to allow RETURNING + clauses without attached C variables (Michael Meskes) + + + + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) + rather than using it to define values to be returned to the client. + + + + + + Improve selection of compiler flags for PL/Perl on Windows (Tom Lane) + + + + This fix avoids possible crashes of PL/Perl due to inconsistent + assumptions about the width of time_t values. + A side-effect that may be visible to extension developers is + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. + + + + + + + + Release 9.4.13 diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index ceece4b8a5..b4a2af1de0 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -1,6 +1,152 @@ + + Release 9.5.9 + + + Release date: + 2017-08-31 + + + + This release contains a small number of fixes from 9.5.8. + For information about new features in the 9.5 major release, see + . 
+ + + + Migration to Version 9.5.9 + + + A dump/restore is not required for those running 9.5.X. + + + + However, if you are upgrading from a version earlier than 9.5.8, + see . + + + + + Changes + + + + + + Show foreign tables + in information_schema.table_privileges + view (Peter Eisentraut) + + + + All other relevant information_schema views include + foreign tables, but this one ignored them. + + + + Since this view definition is installed by initdb, + merely upgrading will not fix the problem. If you need to fix this + in an existing installation, you can, as a superuser, do this + in psql: + +BEGIN; +DROP SCHEMA information_schema CASCADE; +\i SHAREDIR/information_schema.sql +COMMIT; + + (Run pg_config --sharedir if you're uncertain + where SHAREDIR is.) This must be repeated in each + database to be fixed. + + + + + + Clean up handling of a fatal exit (e.g., due to receipt + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) + + + + This situation could result in an assertion failure. In production + builds, the exit would still occur, but it would log an unexpected + message about cannot drop active portal. + + + + + + Remove assertion that could trigger during a fatal exit (Tom Lane) + + + + + + Correctly identify columns that are of a range type or domain type over + a composite type or domain type being searched for (Tom Lane) + + + + Certain ALTER commands that change the definition of a + composite type or domain type are supposed to fail if there are any + stored values of that type in the database, because they lack the + infrastructure needed to update or check such values. Previously, + these checks could miss relevant values that are wrapped inside range + types or sub-domains, possibly allowing the database to become + inconsistent. + + + + + + Fix crash in pg_restore when using parallel mode and + using a list file to select a subset of items to restore (Tom Lane) + + + + + + Change ecpg's parser to allow RETURNING + clauses without attached C variables (Michael Meskes) + + + + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) + rather than using it to define values to be returned to the client. + + + + + + Improve selection of compiler flags for PL/Perl on Windows (Tom Lane) + + + + This fix avoids possible crashes of PL/Perl due to inconsistent + assumptions about the width of time_t values. + A side-effect that may be visible to extension developers is + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. + + + + + + Fix make check to behave correctly when invoked via a + non-GNU make program (Thomas Munro) + + + + + + + + Release 9.5.8 diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index e9bee98732..21c3c8ef3f 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -35,23 +35,6 @@ - - Fix failure of walsender processes to respond to shutdown signals - (Marco Nenciarini) - - - - A missed flag update resulted in walsenders continuing to run as long - as they had a standby server connected, preventing primary-server - shutdown unless immediate shutdown mode is used. 
- - - - - Fix crash in pg_restore when using parallel mode and - using a list file to select a subset of items to restore (Tom Lane) + using a list file to select a subset of items to restore + (Fabrízio de Royes Mello) From ce5dcf54b942a469194ae390730f803b3f3fb928 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 29 Aug 2017 09:34:21 -0400 Subject: [PATCH 0055/1087] Improve docs about numeric formatting patterns (to_char/to_number). The explanation about "0" versus "9" format characters was confusing and arguably wrong; the discussion of sign handling wasn't very good either. Notably, while it's accurate to say that "FM" strips leading zeroes in date/time values, what it really does with numeric values is to strip *trailing* zeroes, and then only if you wrote "9" rather than "0". Per gripes from Erwin Brandstetter. Discussion: https://postgr.es/m/CAGHENJ7jgRbTn6nf48xNZ=FHgL2WQ4X8mYsUAU57f-vq8PubEw@mail.gmail.com Discussion: https://postgr.es/m/CAGHENJ45ymd=GOCu1vwV9u7GmCR80_5tW0fP9C_gJKbruGMHvQ@mail.gmail.com --- doc/src/sgml/func.sgml | 63 ++++++++++++++++++++++++++++++------------ 1 file changed, 46 insertions(+), 17 deletions(-) diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 28eda97273..641b3b8f4e 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -6351,11 +6351,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); 9 - value with the specified number of digits + digit position (can be dropped if insignificant) 0 - value with leading zeros + digit position (will not be dropped, even if insignificant) . (period) @@ -6363,7 +6363,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); , (comma) - group (thousand) separator + group (thousands) separator PR @@ -6421,6 +6421,39 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Usage notes for numeric formatting: + + + 0 specifies a digit position that will always be printed, + even if it contains a leading/trailing zero. 9 also + specifies a digit position, but if it is a leading zero then it will + be replaced by a space, while if it is a trailing zero and fill mode + is specified then it will be deleted. (For to_number(), + these two pattern characters are equivalent.) + + + + + + The pattern characters S, L, D, + and G represent the sign, currency symbol, decimal point, + and thousands separator characters defined by the current locale + (see + and ). The pattern characters period + and comma represent those exact characters, with the meanings of + decimal point and thousands separator, regardless of locale. + + + + + + If no explicit provision is made for a sign + in to_char()'s pattern, one column will be reserved for + the sign, and it will be anchored to (appear just left of) the + number. If S appears just left of some 9's, + it will likewise be anchored to the number. + + + A sign formatted using SG, PL, or @@ -6428,18 +6461,10 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); the number; for example, to_char(-12, 'MI9999') produces '-  12' but to_char(-12, 'S9999') produces '  -12'. - The Oracle implementation does not allow the use of + (The Oracle implementation does not allow the use of MI before 9, but rather requires that 9 precede - MI. - - - - - - 9 results in a value with the same number of - digits as there are 9s. If a digit is - not available it outputs a space. + MI.) 
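(A few illustrative conversions may help make the 0-versus-9 and fill-mode rules above concrete. These are not quoted from the patch, but the outputs follow directly from the rules just described, shown in the same style as the documentation's own example table:

    to_char(12, '9999')        '   12'     leading zeroes become spaces
    to_char(12, '0999')        ' 0012'     0 positions are always printed
    to_char(12.3, 'FM99.99')   '12.3'      fill mode deletes the trailing zero
    to_char(12.3, 'FM99.00')   '12.30'     0 positions survive fill mode)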
@@ -6486,8 +6511,8 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Certain modifiers can be applied to any template pattern to alter its - behavior. For example, FM9999 - is the 9999 pattern with the + behavior. For example, FM99.99 + is the 99.99 pattern with the FM modifier. shows the modifier patterns for numeric formatting. @@ -6506,8 +6531,8 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); FM prefix - fill mode (suppress leading zeroes and padding blanks) - FM9999 + fill mode (suppress trailing zeroes and padding blanks) + FM99.99 TH suffix @@ -6554,6 +6579,10 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); to_char(-0.1, 'FM9.99') '-.1' + + to_char(-0.1, 'FM90.99') + '-0.1' + to_char(0.1, '0.9') ' 0.1' From 3452dc5240da43e833118484e1e9b4894d04431c Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 29 Aug 2017 13:12:23 -0400 Subject: [PATCH 0056/1087] Push tuple limits through Gather and Gather Merge. If we only need, say, 10 tuples in total, then we certainly don't need more than 10 tuples from any single process. Pushing down the limit lets workers exit early when possible. For Gather Merge, there is an additional benefit: a Sort immediately below the Gather Merge can be done as a bounded sort if there is an applicable limit. Robert Haas and Tom Lane Discussion: http://postgr.es/m/CA+TgmoYa3QKKrLj5rX7UvGqhH73G1Li4B-EKxrmASaca2tFu9Q@mail.gmail.com --- src/backend/executor/execParallel.c | 54 ++++++-- src/backend/executor/execProcnode.c | 121 ++++++++++++++++++ src/backend/executor/nodeGather.c | 4 +- src/backend/executor/nodeGatherMerge.c | 4 +- src/backend/executor/nodeLimit.c | 98 +++----------- src/include/executor/execParallel.h | 2 +- src/include/executor/executor.h | 1 + src/include/nodes/execnodes.h | 2 + src/test/regress/expected/select_parallel.out | 24 +++- src/test/regress/sql/select_parallel.sql | 9 +- 10 files changed, 222 insertions(+), 97 deletions(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index ce47f1d4a8..ad9eba63dd 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -47,16 +47,25 @@ * greater than any 32-bit integer here so that values < 2^32 can be used * by individual parallel nodes to store their own state. */ -#define PARALLEL_KEY_PLANNEDSTMT UINT64CONST(0xE000000000000001) -#define PARALLEL_KEY_PARAMS UINT64CONST(0xE000000000000002) -#define PARALLEL_KEY_BUFFER_USAGE UINT64CONST(0xE000000000000003) -#define PARALLEL_KEY_TUPLE_QUEUE UINT64CONST(0xE000000000000004) -#define PARALLEL_KEY_INSTRUMENTATION UINT64CONST(0xE000000000000005) -#define PARALLEL_KEY_DSA UINT64CONST(0xE000000000000006) -#define PARALLEL_KEY_QUERY_TEXT UINT64CONST(0xE000000000000007) +#define PARALLEL_KEY_EXECUTOR_FIXED UINT64CONST(0xE000000000000001) +#define PARALLEL_KEY_PLANNEDSTMT UINT64CONST(0xE000000000000002) +#define PARALLEL_KEY_PARAMS UINT64CONST(0xE000000000000003) +#define PARALLEL_KEY_BUFFER_USAGE UINT64CONST(0xE000000000000004) +#define PARALLEL_KEY_TUPLE_QUEUE UINT64CONST(0xE000000000000005) +#define PARALLEL_KEY_INSTRUMENTATION UINT64CONST(0xE000000000000006) +#define PARALLEL_KEY_DSA UINT64CONST(0xE000000000000007) +#define PARALLEL_KEY_QUERY_TEXT UINT64CONST(0xE000000000000008) #define PARALLEL_TUPLE_QUEUE_SIZE 65536 +/* + * Fixed-size random stuff that we need to pass to parallel workers. 
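+ * (So far that is just the tuple bound: tuples_needed < 0 means
+ * "no limit", the same convention ExecSetTupleBound uses.)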
+ */ +typedef struct FixedParallelExecutorState +{ + int64 tuples_needed; /* tuple bound, see ExecSetTupleBound */ +} FixedParallelExecutorState; + /* * DSM structure for accumulating per-PlanState instrumentation. * @@ -381,12 +390,14 @@ ExecParallelReinitialize(ParallelExecutorInfo *pei) * execution and return results to the main backend. */ ParallelExecutorInfo * -ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers) +ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, + int64 tuples_needed) { ParallelExecutorInfo *pei; ParallelContext *pcxt; ExecParallelEstimateContext e; ExecParallelInitializeDSMContext d; + FixedParallelExecutorState *fpes; char *pstmt_data; char *pstmt_space; char *param_space; @@ -418,6 +429,11 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers) * for the various things we need to store. */ + /* Estimate space for fixed-size state. */ + shm_toc_estimate_chunk(&pcxt->estimator, + sizeof(FixedParallelExecutorState)); + shm_toc_estimate_keys(&pcxt->estimator, 1); + /* Estimate space for query text. */ query_len = strlen(estate->es_sourceText); shm_toc_estimate_chunk(&pcxt->estimator, query_len); @@ -487,6 +503,11 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers) * asked for has been allocated or initialized yet, though, so do that. */ + /* Store fixed-size state. */ + fpes = shm_toc_allocate(pcxt->toc, sizeof(FixedParallelExecutorState)); + fpes->tuples_needed = tuples_needed; + shm_toc_insert(pcxt->toc, PARALLEL_KEY_EXECUTOR_FIXED, fpes); + /* Store query string */ query_string = shm_toc_allocate(pcxt->toc, query_len); memcpy(query_string, estate->es_sourceText, query_len); @@ -833,6 +854,7 @@ ExecParallelInitializeWorker(PlanState *planstate, shm_toc *toc) void ParallelQueryMain(dsm_segment *seg, shm_toc *toc) { + FixedParallelExecutorState *fpes; BufferUsage *buffer_usage; DestReceiver *receiver; QueryDesc *queryDesc; @@ -841,6 +863,9 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc) void *area_space; dsa_area *area; + /* Get fixed-size state. */ + fpes = shm_toc_lookup(toc, PARALLEL_KEY_EXECUTOR_FIXED, false); + /* Set up DestReceiver, SharedExecutorInstrumentation, and QueryDesc. */ receiver = ExecParallelGetReceiver(seg, toc); instrumentation = shm_toc_lookup(toc, PARALLEL_KEY_INSTRUMENTATION, true); @@ -868,8 +893,17 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc) queryDesc->planstate->state->es_query_dsa = area; ExecParallelInitializeWorker(queryDesc->planstate, toc); - /* Run the plan */ - ExecutorRun(queryDesc, ForwardScanDirection, 0L, true); + /* Pass down any tuple bound */ + ExecSetTupleBound(fpes->tuples_needed, queryDesc->planstate); + + /* + * Run the plan. If we specified a tuple bound, be careful not to demand + * more tuples than that. + */ + ExecutorRun(queryDesc, + ForwardScanDirection, + fpes->tuples_needed < 0 ? (int64) 0 : fpes->tuples_needed, + true); /* Shut down the executor */ ExecutorFinish(queryDesc); diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c index 36d2914249..c1aa5064c9 100644 --- a/src/backend/executor/execProcnode.c +++ b/src/backend/executor/execProcnode.c @@ -757,3 +757,124 @@ ExecShutdownNode(PlanState *node) return false; } + +/* + * ExecSetTupleBound + * + * Set a tuple bound for a planstate node. This lets child plan nodes + * optimize based on the knowledge that the maximum number of tuples that + * their parent will demand is limited. 
The tuple bound for a node may + * only be changed between scans (i.e., after node initialization or just + * before an ExecReScan call). + * + * Any negative tuples_needed value means "no limit", which should be the + * default assumption when this is not called at all for a particular node. + * + * Note: if this is called repeatedly on a plan tree, the exact same set + * of nodes must be updated with the new limit each time; be careful that + * only unchanging conditions are tested here. + */ +void +ExecSetTupleBound(int64 tuples_needed, PlanState *child_node) +{ + /* + * Since this function recurses, in principle we should check stack depth + * here. In practice, it's probably pointless since the earlier node + * initialization tree traversal would surely have consumed more stack. + */ + + if (IsA(child_node, SortState)) + { + /* + * If it is a Sort node, notify it that it can use bounded sort. + * + * Note: it is the responsibility of nodeSort.c to react properly to + * changes of these parameters. If we ever redesign this, it'd be a + * good idea to integrate this signaling with the parameter-change + * mechanism. + */ + SortState *sortState = (SortState *) child_node; + + if (tuples_needed < 0) + { + /* make sure flag gets reset if needed upon rescan */ + sortState->bounded = false; + } + else + { + sortState->bounded = true; + sortState->bound = tuples_needed; + } + } + else if (IsA(child_node, MergeAppendState)) + { + /* + * If it is a MergeAppend, we can apply the bound to any nodes that + * are children of the MergeAppend, since the MergeAppend surely need + * read no more than that many tuples from any one input. + */ + MergeAppendState *maState = (MergeAppendState *) child_node; + int i; + + for (i = 0; i < maState->ms_nplans; i++) + ExecSetTupleBound(tuples_needed, maState->mergeplans[i]); + } + else if (IsA(child_node, ResultState)) + { + /* + * Similarly, for a projecting Result, we can apply the bound to its + * child node. + * + * If Result supported qual checking, we'd have to punt on seeing a + * qual. Note that having a resconstantqual is not a showstopper: if + * that condition succeeds it affects nothing, while if it fails, no + * rows will be demanded from the Result child anyway. + */ + if (outerPlanState(child_node)) + ExecSetTupleBound(tuples_needed, outerPlanState(child_node)); + } + else if (IsA(child_node, SubqueryScanState)) + { + /* + * We can also descend through SubqueryScan, but only if it has no + * qual (otherwise it might discard rows). + */ + SubqueryScanState *subqueryState = (SubqueryScanState *) child_node; + + if (subqueryState->ss.ps.qual == NULL) + ExecSetTupleBound(tuples_needed, subqueryState->subplan); + } + else if (IsA(child_node, GatherState)) + { + /* + * A Gather node can propagate the bound to its workers. As with + * MergeAppend, no one worker could possibly need to return more + * tuples than the Gather itself needs to. + * + * Note: As with Sort, the Gather node is responsible for reacting + * properly to changes to this parameter. 
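+ * (For example, a bound of 10 arriving here is recorded in
+ * gstate->tuples_needed for the workers and also applied to the
+ * leader's own copy of the child plan, so neither side produces
+ * more than 10 tuples.)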
+ */ + GatherState *gstate = (GatherState *) child_node; + + gstate->tuples_needed = tuples_needed; + + /* Also pass down the bound to our own copy of the child plan */ + ExecSetTupleBound(tuples_needed, outerPlanState(child_node)); + } + else if (IsA(child_node, GatherMergeState)) + { + /* Same comments as for Gather */ + GatherMergeState *gstate = (GatherMergeState *) child_node; + + gstate->tuples_needed = tuples_needed; + + ExecSetTupleBound(tuples_needed, outerPlanState(child_node)); + } + + /* + * In principle we could descend through any plan node type that is + * certain not to discard or combine input rows; but on seeing a node that + * can do that, we can't propagate the bound any further. For the moment + * it's unclear that any other cases are worth checking here. + */ +} diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index e8d94ee6f3..a0f5a60d93 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -72,6 +72,7 @@ ExecInitGather(Gather *node, EState *estate, int eflags) gatherstate->ps.state = estate; gatherstate->ps.ExecProcNode = ExecGather; gatherstate->need_to_scan_locally = !node->single_copy; + gatherstate->tuples_needed = -1; /* * Miscellaneous initialization @@ -156,7 +157,8 @@ ExecGather(PlanState *pstate) if (!node->pei) node->pei = ExecInitParallelPlan(node->ps.lefttree, estate, - gather->num_workers); + gather->num_workers, + node->tuples_needed); /* * Register backend workers. We might not get as many as we diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 64c62398bb..2526c584fd 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -77,6 +77,7 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) gm_state->ps.plan = (Plan *) node; gm_state->ps.state = estate; gm_state->ps.ExecProcNode = ExecGatherMerge; + gm_state->tuples_needed = -1; /* * Miscellaneous initialization @@ -190,7 +191,8 @@ ExecGatherMerge(PlanState *pstate) if (!node->pei) node->pei = ExecInitParallelPlan(node->ps.lefttree, estate, - gm->num_workers); + gm->num_workers, + node->tuples_needed); /* Try to launch workers. */ pcxt = node->pei->pcxt; diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c index ceb6854b59..883f46ce7c 100644 --- a/src/backend/executor/nodeLimit.c +++ b/src/backend/executor/nodeLimit.c @@ -27,7 +27,7 @@ #include "nodes/nodeFuncs.h" static void recompute_limits(LimitState *node); -static void pass_down_bound(LimitState *node, PlanState *child_node); +static int64 compute_tuples_needed(LimitState *node); /* ---------------------------------------------------------------- @@ -297,92 +297,26 @@ recompute_limits(LimitState *node) /* Set state-machine state */ node->lstate = LIMIT_RESCAN; - /* Notify child node about limit, if useful */ - pass_down_bound(node, outerPlanState(node)); + /* + * Notify child node about limit. Note: think not to "optimize" by + * skipping ExecSetTupleBound if compute_tuples_needed returns < 0. We + * must update the child node anyway, in case this is a rescan and the + * previous time we got a different result. + */ + ExecSetTupleBound(compute_tuples_needed(node), outerPlanState(node)); } /* - * If we have a COUNT, and our input is a Sort node, notify it that it can - * use bounded sort. 
We can also pass down the bound through plan nodes - * that cannot remove or combine input rows; for example, if our input is a - * MergeAppend, we can apply the same bound to any Sorts that are direct - * children of the MergeAppend, since the MergeAppend surely need not read - * more than that many tuples from any one input. - * - * This is a bit of a kluge, but we don't have any more-abstract way of - * communicating between the two nodes; and it doesn't seem worth trying - * to invent one without some more examples of special communication needs. - * - * Note: it is the responsibility of nodeSort.c to react properly to - * changes of these parameters. If we ever do redesign this, it'd be a - * good idea to integrate this signaling with the parameter-change mechanism. + * Compute the maximum number of tuples needed to satisfy this Limit node. + * Return a negative value if there is not a determinable limit. */ -static void -pass_down_bound(LimitState *node, PlanState *child_node) +static int64 +compute_tuples_needed(LimitState *node) { - /* - * Since this function recurses, in principle we should check stack depth - * here. In practice, it's probably pointless since the earlier node - * initialization tree traversal would surely have consumed more stack. - */ - - if (IsA(child_node, SortState)) - { - SortState *sortState = (SortState *) child_node; - int64 tuples_needed = node->count + node->offset; - - /* negative test checks for overflow in sum */ - if (node->noCount || tuples_needed < 0) - { - /* make sure flag gets reset if needed upon rescan */ - sortState->bounded = false; - } - else - { - sortState->bounded = true; - sortState->bound = tuples_needed; - } - } - else if (IsA(child_node, MergeAppendState)) - { - /* Pass down the bound through MergeAppend */ - MergeAppendState *maState = (MergeAppendState *) child_node; - int i; - - for (i = 0; i < maState->ms_nplans; i++) - pass_down_bound(node, maState->mergeplans[i]); - } - else if (IsA(child_node, ResultState)) - { - /* - * We also have to be prepared to look through a Result, since the - * planner might stick one atop MergeAppend for projection purposes. - * - * If Result supported qual checking, we'd have to punt on seeing a - * qual. Note that having a resconstantqual is not a showstopper: if - * that fails we're not getting any rows at all. - */ - if (outerPlanState(child_node)) - pass_down_bound(node, outerPlanState(child_node)); - } - else if (IsA(child_node, SubqueryScanState)) - { - /* - * We can also look through SubqueryScan, but only if it has no qual - * (otherwise it might discard rows). - */ - SubqueryScanState *subqueryState = (SubqueryScanState *) child_node; - - if (subqueryState->ss.ps.qual == NULL) - pass_down_bound(node, subqueryState->subplan); - } - - /* - * In principle we could look through any plan node type that is certain - * not to discard or combine input rows. In practice, there are not many - * node types that the planner might put between Sort and Limit, so trying - * to be very general is not worth the trouble. 
- */ + if (node->noCount) + return -1; + /* Note: if this overflows, we'll return a negative value, which is OK */ + return node->count + node->offset; } /* ---------------------------------------------------------------- diff --git a/src/include/executor/execParallel.h b/src/include/executor/execParallel.h index bd0a87fa04..79b886706f 100644 --- a/src/include/executor/execParallel.h +++ b/src/include/executor/execParallel.h @@ -33,7 +33,7 @@ typedef struct ParallelExecutorInfo } ParallelExecutorInfo; extern ParallelExecutorInfo *ExecInitParallelPlan(PlanState *planstate, - EState *estate, int nworkers); + EState *estate, int nworkers, int64 tuples_needed); extern void ExecParallelFinish(ParallelExecutorInfo *pei); extern void ExecParallelCleanup(ParallelExecutorInfo *pei); extern void ExecParallelReinitialize(ParallelExecutorInfo *pei); diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index eacbea3c36..f48a603dae 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -232,6 +232,7 @@ extern PlanState *ExecInitNode(Plan *node, EState *estate, int eflags); extern Node *MultiExecProcNode(PlanState *node); extern void ExecEndNode(PlanState *node); extern bool ExecShutdownNode(PlanState *node); +extern void ExecSetTupleBound(int64 tuples_needed, PlanState *child_node); /* ---------------------------------------------------------------- diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 3272c4b315..15a84269ec 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1919,6 +1919,7 @@ typedef struct GatherState struct TupleQueueReader **reader; TupleTableSlot *funnel_slot; bool need_to_scan_locally; + int64 tuples_needed; /* tuple bound, see ExecSetTupleBound */ } GatherState; /* ---------------- @@ -1944,6 +1945,7 @@ typedef struct GatherMergeState struct binaryheap *gm_heap; /* binary heap of slot indices */ bool gm_initialized; /* gather merge initilized ? 
*/ bool need_to_scan_locally; + int64 tuples_needed; /* tuple bound, see ExecSetTupleBound */ int gm_nkeys; SortSupport gm_sortkeys; /* array of length ms_nkeys */ struct GMReaderTupleBuffer *gm_tuple_buffers; /* tuple buffer per reader */ diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 084f0f0c8e..ccad18e978 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -300,6 +300,29 @@ select count(*) from tenk1 group by twenty; 500 (20 rows) +reset enable_hashagg; +-- gather merge test with a LIMIT +explain (costs off) + select fivethous from tenk1 order by fivethous limit 4; + QUERY PLAN +---------------------------------------------- + Limit + -> Gather Merge + Workers Planned: 4 + -> Sort + Sort Key: fivethous + -> Parallel Seq Scan on tenk1 +(6 rows) + +select fivethous from tenk1 order by fivethous limit 4; + fivethous +----------- + 0 + 0 + 1 + 1 +(4 rows) + -- gather merge test with 0 worker set max_parallel_workers = 0; explain (costs off) @@ -325,7 +348,6 @@ select string4 from tenk1 order by string4 limit 5; (5 rows) reset max_parallel_workers; -reset enable_hashagg; SAVEPOINT settings; SET LOCAL force_parallel_mode = 1; explain (costs off) diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 58c3f59890..c0debddbcd 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -118,13 +118,20 @@ explain (costs off) select count(*) from tenk1 group by twenty; +reset enable_hashagg; + +-- gather merge test with a LIMIT +explain (costs off) + select fivethous from tenk1 order by fivethous limit 4; + +select fivethous from tenk1 order by fivethous limit 4; + -- gather merge test with 0 worker set max_parallel_workers = 0; explain (costs off) select string4 from tenk1 order by string4 limit 5; select string4 from tenk1 order by string4 limit 5; reset max_parallel_workers; -reset enable_hashagg; SAVEPOINT settings; SET LOCAL force_parallel_mode = 1; From bf11e7ee2e3607bb67d25aec73aa53b2d7e9961b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 29 Aug 2017 13:22:49 -0400 Subject: [PATCH 0057/1087] Propagate sort instrumentation from workers back to leader. Up until now, when parallel query was used, no details about the sort method or space used by the workers were available; details were shown only for any sorting done by the leader. Fix that. Commit 1177ab1dabf72bafee8f19d904cee3a299f25892 forced the test case added by commit 1f6d515a67ec98194c23a5db25660856c9aab944 to run without parallelism; now that we have this infrastructure, allow that again, with a little tweaking to make it pass with and without force_parallel_mode. 
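With this infrastructure, EXPLAIN (ANALYZE) of a parallel sort can emit one line per worker alongside the leader's usual "Sort Method" line, matching the format string added to show_sort_info below, e.g. (illustrative output only; the kB figures are invented):

    Worker 0: Sort Method: external merge Disk: 2496kB
    Worker 1: Sort Method: external merge Disk: 2512kB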
Robert Haas and Tom Lane Discussion: http://postgr.es/m/CA+Tgmoa2VBZW6S8AAXfhpHczb=Rf6RqQ2br+zJvEgwJ0uoD_tQ@mail.gmail.com --- src/backend/commands/explain.c | 57 ++++++++- src/backend/executor/execParallel.c | 155 ++++++++++++++---------- src/backend/executor/nodeSort.c | 97 +++++++++++++++ src/backend/utils/sort/tuplesort.c | 56 +++++++-- src/include/executor/nodeSort.h | 7 ++ src/include/nodes/execnodes.h | 12 ++ src/include/utils/tuplesort.h | 34 +++++- src/test/regress/expected/subselect.out | 5 +- src/test/regress/sql/subselect.sql | 6 +- 9 files changed, 342 insertions(+), 87 deletions(-) diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 953e74d73c..4cee357336 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -2292,15 +2292,21 @@ show_tablesample(TableSampleClause *tsc, PlanState *planstate, static void show_sort_info(SortState *sortstate, ExplainState *es) { - if (es->analyze && sortstate->sort_Done && - sortstate->tuplesortstate != NULL) + if (!es->analyze) + return; + + if (sortstate->sort_Done && sortstate->tuplesortstate != NULL) { Tuplesortstate *state = (Tuplesortstate *) sortstate->tuplesortstate; + TuplesortInstrumentation stats; const char *sortMethod; const char *spaceType; long spaceUsed; - tuplesort_get_stats(state, &sortMethod, &spaceType, &spaceUsed); + tuplesort_get_stats(state, &stats); + sortMethod = tuplesort_method_name(stats.sortMethod); + spaceType = tuplesort_space_type_name(stats.spaceType); + spaceUsed = stats.spaceUsed; if (es->format == EXPLAIN_FORMAT_TEXT) { @@ -2315,6 +2321,51 @@ show_sort_info(SortState *sortstate, ExplainState *es) ExplainPropertyText("Sort Space Type", spaceType, es); } } + + if (sortstate->shared_info != NULL) + { + int n; + bool opened_group = false; + + for (n = 0; n < sortstate->shared_info->num_workers; n++) + { + TuplesortInstrumentation *sinstrument; + const char *sortMethod; + const char *spaceType; + long spaceUsed; + + sinstrument = &sortstate->shared_info->sinstrument[n]; + if (sinstrument->sortMethod == SORT_TYPE_STILL_IN_PROGRESS) + continue; /* ignore any unfilled slots */ + sortMethod = tuplesort_method_name(sinstrument->sortMethod); + spaceType = tuplesort_space_type_name(sinstrument->spaceType); + spaceUsed = sinstrument->spaceUsed; + + if (es->format == EXPLAIN_FORMAT_TEXT) + { + appendStringInfoSpaces(es->str, es->indent * 2); + appendStringInfo(es->str, + "Worker %d: Sort Method: %s %s: %ldkB\n", + n, sortMethod, spaceType, spaceUsed); + } + else + { + if (!opened_group) + { + ExplainOpenGroup("Workers", "Workers", false, es); + opened_group = true; + } + ExplainOpenGroup("Worker", NULL, true, es); + ExplainPropertyInteger("Worker Number", n, es); + ExplainPropertyText("Sort Method", sortMethod, es); + ExplainPropertyLong("Sort Space Used", spaceUsed, es); + ExplainPropertyText("Sort Space Type", spaceType, es); + ExplainCloseGroup("Worker", NULL, true, es); + } + } + if (opened_group) + ExplainCloseGroup("Workers", "Workers", false, es); + } } /* diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index ad9eba63dd..01316ff5d9 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -28,9 +28,10 @@ #include "executor/nodeBitmapHeapscan.h" #include "executor/nodeCustom.h" #include "executor/nodeForeignscan.h" -#include "executor/nodeSeqscan.h" #include "executor/nodeIndexscan.h" #include "executor/nodeIndexonlyscan.h" +#include "executor/nodeSeqscan.h" +#include 
"executor/nodeSort.h" #include "executor/tqueue.h" #include "nodes/nodeFuncs.h" #include "optimizer/planmain.h" @@ -202,10 +203,10 @@ ExecSerializePlan(Plan *plan, EState *estate) } /* - * Ordinary plan nodes won't do anything here, but parallel-aware plan nodes - * may need some state which is shared across all parallel workers. Before - * we size the DSM, give them a chance to call shm_toc_estimate_chunk or - * shm_toc_estimate_keys on &pcxt->estimator. + * Parallel-aware plan nodes (and occasionally others) may need some state + * which is shared across all parallel workers. Before we size the DSM, give + * them a chance to call shm_toc_estimate_chunk or shm_toc_estimate_keys on + * &pcxt->estimator. * * While we're at it, count the number of PlanState nodes in the tree, so * we know how many SharedPlanStateInstrumentation structures we need. @@ -219,38 +220,43 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e) /* Count this node. */ e->nnodes++; - /* Call estimators for parallel-aware nodes. */ - if (planstate->plan->parallel_aware) + switch (nodeTag(planstate)) { - switch (nodeTag(planstate)) - { - case T_SeqScanState: + case T_SeqScanState: + if (planstate->plan->parallel_aware) ExecSeqScanEstimate((SeqScanState *) planstate, e->pcxt); - break; - case T_IndexScanState: + break; + case T_IndexScanState: + if (planstate->plan->parallel_aware) ExecIndexScanEstimate((IndexScanState *) planstate, e->pcxt); - break; - case T_IndexOnlyScanState: + break; + case T_IndexOnlyScanState: + if (planstate->plan->parallel_aware) ExecIndexOnlyScanEstimate((IndexOnlyScanState *) planstate, e->pcxt); - break; - case T_ForeignScanState: + break; + case T_ForeignScanState: + if (planstate->plan->parallel_aware) ExecForeignScanEstimate((ForeignScanState *) planstate, e->pcxt); - break; - case T_CustomScanState: + break; + case T_CustomScanState: + if (planstate->plan->parallel_aware) ExecCustomScanEstimate((CustomScanState *) planstate, e->pcxt); - break; - case T_BitmapHeapScanState: + break; + case T_BitmapHeapScanState: + if (planstate->plan->parallel_aware) ExecBitmapHeapEstimate((BitmapHeapScanState *) planstate, e->pcxt); - break; - default: - break; - } + break; + case T_SortState: + /* even when not parallel-aware */ + ExecSortEstimate((SortState *) planstate, e->pcxt); + default: + break; } return planstate_tree_walker(planstate, ExecParallelEstimate, e); @@ -276,46 +282,51 @@ ExecParallelInitializeDSM(PlanState *planstate, d->nnodes++; /* - * Call initializers for parallel-aware plan nodes. + * Call initializers for DSM-using plan nodes. * - * Ordinary plan nodes won't do anything here, but parallel-aware plan - * nodes may need to initialize shared state in the DSM before parallel - * workers are available. They can allocate the space they previously + * Most plan nodes won't do anything here, but plan nodes that allocated + * DSM may need to initialize shared state in the DSM before parallel + * workers are launched. They can allocate the space they previously * estimated using shm_toc_allocate, and add the keys they previously * estimated using shm_toc_insert, in each case targeting pcxt->toc. 
*/ - if (planstate->plan->parallel_aware) + switch (nodeTag(planstate)) { - switch (nodeTag(planstate)) - { - case T_SeqScanState: + case T_SeqScanState: + if (planstate->plan->parallel_aware) ExecSeqScanInitializeDSM((SeqScanState *) planstate, d->pcxt); - break; - case T_IndexScanState: + break; + case T_IndexScanState: + if (planstate->plan->parallel_aware) ExecIndexScanInitializeDSM((IndexScanState *) planstate, d->pcxt); - break; - case T_IndexOnlyScanState: + break; + case T_IndexOnlyScanState: + if (planstate->plan->parallel_aware) ExecIndexOnlyScanInitializeDSM((IndexOnlyScanState *) planstate, d->pcxt); - break; - case T_ForeignScanState: + break; + case T_ForeignScanState: + if (planstate->plan->parallel_aware) ExecForeignScanInitializeDSM((ForeignScanState *) planstate, d->pcxt); - break; - case T_CustomScanState: + break; + case T_CustomScanState: + if (planstate->plan->parallel_aware) ExecCustomScanInitializeDSM((CustomScanState *) planstate, d->pcxt); - break; - case T_BitmapHeapScanState: + break; + case T_BitmapHeapScanState: + if (planstate->plan->parallel_aware) ExecBitmapHeapInitializeDSM((BitmapHeapScanState *) planstate, d->pcxt); - break; - - default: - break; - } + break; + case T_SortState: + /* even when not parallel-aware */ + ExecSortInitializeDSM((SortState *) planstate, d->pcxt); + default: + break; } return planstate_tree_walker(planstate, ExecParallelInitializeDSM, d); @@ -642,6 +653,13 @@ ExecParallelRetrieveInstrumentation(PlanState *planstate, planstate->worker_instrument->num_workers = instrumentation->num_workers; memcpy(&planstate->worker_instrument->instrument, instrument, ibytes); + /* + * Perform any node-type-specific work that needs to be done. Currently, + * only Sort nodes need to do anything here. + */ + if (IsA(planstate, SortState)) + ExecSortRetrieveInstrumentation((SortState *) planstate); + return planstate_tree_walker(planstate, ExecParallelRetrieveInstrumentation, instrumentation); } @@ -801,35 +819,40 @@ ExecParallelInitializeWorker(PlanState *planstate, shm_toc *toc) if (planstate == NULL) return false; - /* Call initializers for parallel-aware plan nodes. 
*/ - if (planstate->plan->parallel_aware) + switch (nodeTag(planstate)) { - switch (nodeTag(planstate)) - { - case T_SeqScanState: + case T_SeqScanState: + if (planstate->plan->parallel_aware) ExecSeqScanInitializeWorker((SeqScanState *) planstate, toc); - break; - case T_IndexScanState: + break; + case T_IndexScanState: + if (planstate->plan->parallel_aware) ExecIndexScanInitializeWorker((IndexScanState *) planstate, toc); - break; - case T_IndexOnlyScanState: + break; + case T_IndexOnlyScanState: + if (planstate->plan->parallel_aware) ExecIndexOnlyScanInitializeWorker((IndexOnlyScanState *) planstate, toc); - break; - case T_ForeignScanState: + break; + case T_ForeignScanState: + if (planstate->plan->parallel_aware) ExecForeignScanInitializeWorker((ForeignScanState *) planstate, toc); - break; - case T_CustomScanState: + break; + case T_CustomScanState: + if (planstate->plan->parallel_aware) ExecCustomScanInitializeWorker((CustomScanState *) planstate, toc); - break; - case T_BitmapHeapScanState: + break; + case T_BitmapHeapScanState: + if (planstate->plan->parallel_aware) ExecBitmapHeapInitializeWorker( (BitmapHeapScanState *) planstate, toc); - break; - default: - break; - } + break; + case T_SortState: + /* even when not parallel-aware */ + ExecSortInitializeWorker((SortState *) planstate, toc); + default: + break; } return planstate_tree_walker(planstate, ExecParallelInitializeWorker, toc); diff --git a/src/backend/executor/nodeSort.c b/src/backend/executor/nodeSort.c index aae4150e2c..66ef109c12 100644 --- a/src/backend/executor/nodeSort.c +++ b/src/backend/executor/nodeSort.c @@ -15,6 +15,7 @@ #include "postgres.h" +#include "access/parallel.h" #include "executor/execdebug.h" #include "executor/nodeSort.h" #include "miscadmin.h" @@ -127,6 +128,15 @@ ExecSort(PlanState *pstate) node->sort_Done = true; node->bounded_Done = node->bounded; node->bound_Done = node->bound; + if (node->shared_info && node->am_worker) + { + TuplesortInstrumentation *si; + + Assert(IsParallelWorker()); + Assert(ParallelWorkerNumber <= node->shared_info->num_workers); + si = &node->shared_info->sinstrument[ParallelWorkerNumber]; + tuplesort_get_stats(tuplesortstate, si); + } SO1_printf("ExecSort: %s\n", "sorting done"); } @@ -334,3 +344,90 @@ ExecReScanSort(SortState *node) else tuplesort_rescan((Tuplesortstate *) node->tuplesortstate); } + +/* ---------------------------------------------------------------- + * Parallel Query Support + * ---------------------------------------------------------------- + */ + +/* ---------------------------------------------------------------- + * ExecSortEstimate + * + * Estimate space required to propagate sort statistics. + * ---------------------------------------------------------------- + */ +void +ExecSortEstimate(SortState *node, ParallelContext *pcxt) +{ + Size size; + + /* don't need this if not instrumenting or no workers */ + if (!node->ss.ps.instrument || pcxt->nworkers == 0) + return; + + size = mul_size(pcxt->nworkers, sizeof(TuplesortInstrumentation)); + size = add_size(size, offsetof(SharedSortInfo, sinstrument)); + shm_toc_estimate_chunk(&pcxt->estimator, size); + shm_toc_estimate_keys(&pcxt->estimator, 1); +} + +/* ---------------------------------------------------------------- + * ExecSortInitializeDSM + * + * Initialize DSM space for sort statistics. 
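+ * (One TuplesortInstrumentation slot is allocated per planned worker,
+ * keyed in the TOC by the Sort node's plan_node_id.)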
+ * ---------------------------------------------------------------- + */ +void +ExecSortInitializeDSM(SortState *node, ParallelContext *pcxt) +{ + Size size; + + /* don't need this if not instrumenting or no workers */ + if (!node->ss.ps.instrument || pcxt->nworkers == 0) + return; + + size = offsetof(SharedSortInfo, sinstrument) + + pcxt->nworkers * sizeof(TuplesortInstrumentation); + node->shared_info = shm_toc_allocate(pcxt->toc, size); + /* ensure any unfilled slots will contain zeroes */ + memset(node->shared_info, 0, size); + node->shared_info->num_workers = pcxt->nworkers; + shm_toc_insert(pcxt->toc, node->ss.ps.plan->plan_node_id, + node->shared_info); +} + +/* ---------------------------------------------------------------- + * ExecSortInitializeWorker + * + * Attach worker to DSM space for sort statistics. + * ---------------------------------------------------------------- + */ +void +ExecSortInitializeWorker(SortState *node, shm_toc *toc) +{ + node->shared_info = + shm_toc_lookup(toc, node->ss.ps.plan->plan_node_id, true); + node->am_worker = true; +} + +/* ---------------------------------------------------------------- + * ExecSortRetrieveInstrumentation + * + * Transfer sort statistics from DSM to private memory. + * ---------------------------------------------------------------- + */ +void +ExecSortRetrieveInstrumentation(SortState *node) +{ + Size size; + SharedSortInfo *si; + + if (node->shared_info == NULL) + return; + + size = offsetof(SharedSortInfo, sinstrument) + + node->shared_info->num_workers * sizeof(TuplesortInstrumentation); + si = palloc(size); + memcpy(si, node->shared_info, size); + node->shared_info = si; +} diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c index 59cd28e595..17e1b6860b 100644 --- a/src/backend/utils/sort/tuplesort.c +++ b/src/backend/utils/sort/tuplesort.c @@ -3227,13 +3227,10 @@ tuplesort_restorepos(Tuplesortstate *state) * * This can be called after tuplesort_performsort() finishes to obtain * printable summary information about how the sort was performed. - * spaceUsed is measured in kilobytes. */ void tuplesort_get_stats(Tuplesortstate *state, - const char **sortMethod, - const char **spaceType, - long *spaceUsed) + TuplesortInstrumentation *stats) { /* * Note: it might seem we should provide both memory and disk usage for a @@ -3246,35 +3243,68 @@ tuplesort_get_stats(Tuplesortstate *state, */ if (state->tapeset) { - *spaceType = "Disk"; - *spaceUsed = LogicalTapeSetBlocks(state->tapeset) * (BLCKSZ / 1024); + stats->spaceType = SORT_SPACE_TYPE_DISK; + stats->spaceUsed = LogicalTapeSetBlocks(state->tapeset) * (BLCKSZ / 1024); } else { - *spaceType = "Memory"; - *spaceUsed = (state->allowedMem - state->availMem + 1023) / 1024; + stats->spaceType = SORT_SPACE_TYPE_MEMORY; + stats->spaceUsed = (state->allowedMem - state->availMem + 1023) / 1024; } switch (state->status) { case TSS_SORTEDINMEM: if (state->boundUsed) - *sortMethod = "top-N heapsort"; + stats->sortMethod = SORT_TYPE_TOP_N_HEAPSORT; else - *sortMethod = "quicksort"; + stats->sortMethod = SORT_TYPE_QUICKSORT; break; case TSS_SORTEDONTAPE: - *sortMethod = "external sort"; + stats->sortMethod = SORT_TYPE_EXTERNAL_SORT; break; case TSS_FINALMERGE: - *sortMethod = "external merge"; + stats->sortMethod = SORT_TYPE_EXTERNAL_MERGE; break; default: - *sortMethod = "still in progress"; + stats->sortMethod = SORT_TYPE_STILL_IN_PROGRESS; break; } } +/* + * Convert TuplesortMethod to a string. 
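+ * (These are the strings EXPLAIN prints as the "Sort Method", e.g.
+ * "quicksort" or "external merge".)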
+ */ +const char * +tuplesort_method_name(TuplesortMethod m) +{ + switch (m) + { + case SORT_TYPE_STILL_IN_PROGRESS: + return "still in progress"; + case SORT_TYPE_TOP_N_HEAPSORT: + return "top-N heapsort"; + case SORT_TYPE_QUICKSORT: + return "quicksort"; + case SORT_TYPE_EXTERNAL_SORT: + return "external sort"; + case SORT_TYPE_EXTERNAL_MERGE: + return "external merge"; + } + + return "unknown"; +} + +/* + * Convert TuplesortSpaceType to a string. + */ +const char * +tuplesort_space_type_name(TuplesortSpaceType t) +{ + Assert(t == SORT_SPACE_TYPE_DISK || t == SORT_SPACE_TYPE_MEMORY); + return t == SORT_SPACE_TYPE_DISK ? "Disk" : "Memory"; +} + /* * Heap manipulation routines, per Knuth's Algorithm 5.2.3H. diff --git a/src/include/executor/nodeSort.h b/src/include/executor/nodeSort.h index ed0e9dbb53..77ac06597f 100644 --- a/src/include/executor/nodeSort.h +++ b/src/include/executor/nodeSort.h @@ -14,6 +14,7 @@ #ifndef NODESORT_H #define NODESORT_H +#include "access/parallel.h" #include "nodes/execnodes.h" extern SortState *ExecInitSort(Sort *node, EState *estate, int eflags); @@ -22,4 +23,10 @@ extern void ExecSortMarkPos(SortState *node); extern void ExecSortRestrPos(SortState *node); extern void ExecReScanSort(SortState *node); +/* parallel instrumentation support */ +extern void ExecSortEstimate(SortState *node, ParallelContext *pcxt); +extern void ExecSortInitializeDSM(SortState *node, ParallelContext *pcxt); +extern void ExecSortInitializeWorker(SortState *node, shm_toc *toc); +extern void ExecSortRetrieveInstrumentation(SortState *node); + #endif /* NODESORT_H */ diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 15a84269ec..d1565e7496 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1730,6 +1730,16 @@ typedef struct MaterialState Tuplestorestate *tuplestorestate; } MaterialState; +/* ---------------- + * Shared memory container for per-worker sort information + * ---------------- + */ +typedef struct SharedSortInfo +{ + int num_workers; + TuplesortInstrumentation sinstrument[FLEXIBLE_ARRAY_MEMBER]; +} SharedSortInfo; + /* ---------------- * SortState information * ---------------- @@ -1744,6 +1754,8 @@ typedef struct SortState bool bounded_Done; /* value of bounded we did the sort with */ int64 bound_Done; /* value of bound we did the sort with */ void *tuplesortstate; /* private state of tuplesort.c */ + bool am_worker; /* are we a worker? */ + SharedSortInfo *shared_info; /* one entry per worker */ } SortState; /* --------------------- diff --git a/src/include/utils/tuplesort.h b/src/include/utils/tuplesort.h index 28c168a801..b6b8c8ef8c 100644 --- a/src/include/utils/tuplesort.h +++ b/src/include/utils/tuplesort.h @@ -31,6 +31,34 @@ */ typedef struct Tuplesortstate Tuplesortstate; +/* + * Data structures for reporting sort statistics. Note that + * TuplesortInstrumentation can't contain any pointers because we + * sometimes put it in shared memory. 
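+ * (A pointer stored by one process would be meaningless when read by
+ * another, since each backend maps memory at its own addresses.)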
+ */ +typedef enum +{ + SORT_TYPE_STILL_IN_PROGRESS = 0, + SORT_TYPE_TOP_N_HEAPSORT, + SORT_TYPE_QUICKSORT, + SORT_TYPE_EXTERNAL_SORT, + SORT_TYPE_EXTERNAL_MERGE +} TuplesortMethod; + +typedef enum +{ + SORT_SPACE_TYPE_DISK, + SORT_SPACE_TYPE_MEMORY +} TuplesortSpaceType; + +typedef struct TuplesortInstrumentation +{ + TuplesortMethod sortMethod; /* sort algorithm used */ + TuplesortSpaceType spaceType; /* type of space spaceUsed represents */ + long spaceUsed; /* space consumption, in kB */ +} TuplesortInstrumentation; + + /* * We provide multiple interfaces to what is essentially the same code, * since different callers have different data to be sorted and want to @@ -107,9 +135,9 @@ extern bool tuplesort_skiptuples(Tuplesortstate *state, int64 ntuples, extern void tuplesort_end(Tuplesortstate *state); extern void tuplesort_get_stats(Tuplesortstate *state, - const char **sortMethod, - const char **spaceType, - long *spaceUsed); + TuplesortInstrumentation *stats); +extern const char *tuplesort_method_name(TuplesortMethod m); +extern const char *tuplesort_space_type_name(TuplesortSpaceType t); extern int tuplesort_merge_order(int64 allowedMem); diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out index f009c253f4..f3ebad5857 100644 --- a/src/test/regress/expected/subselect.out +++ b/src/test/regress/expected/subselect.out @@ -1047,7 +1047,7 @@ drop function tattle(x int, y int); -- ANALYZE shows that a top-N sort was used. We must suppress or filter away -- all the non-invariant parts of the EXPLAIN ANALYZE output. -- -create temp table sq_limit (pk int primary key, c1 int, c2 int); +create table sq_limit (pk int primary key, c1 int, c2 int); insert into sq_limit values (1, 1, 1), (2, 2, 2), @@ -1066,6 +1066,8 @@ begin select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3 loop ln := regexp_replace(ln, 'Memory: \S*', 'Memory: xxx'); + -- this case might occur if force_parallel_mode is on: + ln := regexp_replace(ln, 'Worker 0: Sort Method', 'Sort Method'); return next ln; end loop; end; @@ -1090,3 +1092,4 @@ select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3; (3 rows) drop function explain_sq_limit(); +drop table sq_limit; diff --git a/src/test/regress/sql/subselect.sql b/src/test/regress/sql/subselect.sql index 9a14832206..5ac8badabe 100644 --- a/src/test/regress/sql/subselect.sql +++ b/src/test/regress/sql/subselect.sql @@ -547,7 +547,7 @@ drop function tattle(x int, y int); -- ANALYZE shows that a top-N sort was used. We must suppress or filter away -- all the non-invariant parts of the EXPLAIN ANALYZE output. -- -create temp table sq_limit (pk int primary key, c1 int, c2 int); +create table sq_limit (pk int primary key, c1 int, c2 int); insert into sq_limit values (1, 1, 1), (2, 2, 2), @@ -567,6 +567,8 @@ begin select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3 loop ln := regexp_replace(ln, 'Memory: \S*', 'Memory: xxx'); + -- this case might occur if force_parallel_mode is on: + ln := regexp_replace(ln, 'Worker 0: Sort Method', 'Sort Method'); return next ln; end loop; end; @@ -577,3 +579,5 @@ select * from explain_sq_limit(); select * from (select pk,c2 from sq_limit order by c1,pk) as x limit 3; drop function explain_sq_limit(); + +drop table sq_limit; From 2e70d6b5e99b7e7b53336b1838f869bbea1b5024 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 29 Aug 2017 15:18:01 -0400 Subject: [PATCH 0058/1087] Teach libpq to detect integer overflow in the row count of a PGresult. 
Adding more than 1 billion rows to a PGresult would overflow its ntups and tupArrSize fields, leading to client crashes. It'd be desirable to use wider fields on 64-bit machines, but because all of libpq's external APIs use plain "int" for row counters, that's going to be hard to accomplish without an ABI break. Given the lack of complaints so far, and the general pain that would be involved in using such huge PGresults, let's settle for just preventing the overflow and reporting a useful error message if it does happen. Also, for a couple more lines of code we can increase the threshold of trouble from INT_MAX/2 to INT_MAX rows. To do that, refactor pqAddTuple() to allow returning an error message that replaces the default assumption that it failed because of out-of-memory. Along the way, fix PQsetvalue() so that it reports all failures via pqInternalNotice(). It already did so in the case of bad field number, but neglected to report anything for other error causes. Because of the potential for crashes, this seems like a back-patchable bug fix, despite the lack of field reports. Michael Paquier, per a complaint from Igor Korot. Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com --- src/interfaces/libpq/fe-exec.c | 69 +++++++++++++++++++++++++++++----- 1 file changed, 60 insertions(+), 9 deletions(-) diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c index e1e2d18e3a..a97e73cf99 100644 --- a/src/interfaces/libpq/fe-exec.c +++ b/src/interfaces/libpq/fe-exec.c @@ -16,6 +16,7 @@ #include #include +#include #include "libpq-fe.h" #include "libpq-int.h" @@ -51,7 +52,8 @@ static bool static_std_strings = false; static PGEvent *dupEvents(PGEvent *events, int count); -static bool pqAddTuple(PGresult *res, PGresAttValue *tup); +static bool pqAddTuple(PGresult *res, PGresAttValue *tup, + const char **errmsgp); static bool PQsendQueryStart(PGconn *conn); static int PQsendQueryGuts(PGconn *conn, const char *command, @@ -416,18 +418,26 @@ dupEvents(PGEvent *events, int count) * equal to PQntuples(res). If it is equal, a new tuple is created and * added to the result. * Returns a non-zero value for success and zero for failure. + * (On failure, we report the specific problem via pqInternalNotice.) */ int PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) { PGresAttValue *attval; + const char *errmsg = NULL; + /* Note that this check also protects us against null "res" */ if (!check_field_number(res, field_num)) return FALSE; /* Invalid tup_num, must be <= ntups */ if (tup_num < 0 || tup_num > res->ntups) + { + pqInternalNotice(&res->noticeHooks, + "row number %d is out of range 0..%d", + tup_num, res->ntups); return FALSE; + } /* need to allocate a new tuple? 
*/ if (tup_num == res->ntups) @@ -440,7 +450,7 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) TRUE); if (!tup) - return FALSE; + goto fail; /* initialize each column to NULL */ for (i = 0; i < res->numAttributes; i++) @@ -450,8 +460,8 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) } /* add it to the array */ - if (!pqAddTuple(res, tup)) - return FALSE; + if (!pqAddTuple(res, tup, &errmsg)) + goto fail; } attval = &res->tuples[tup_num][field_num]; @@ -471,13 +481,24 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) { attval->value = (char *) pqResultAlloc(res, len + 1, TRUE); if (!attval->value) - return FALSE; + goto fail; attval->len = len; memcpy(attval->value, value, len); attval->value[len] = '\0'; } return TRUE; + + /* + * Report failure via pqInternalNotice. If preceding code didn't provide + * an error message, assume "out of memory" was meant. + */ +fail: + if (!errmsg) + errmsg = libpq_gettext("out of memory"); + pqInternalNotice(&res->noticeHooks, "%s", errmsg); + + return FALSE; } /* @@ -847,10 +868,13 @@ pqInternalNotice(const PGNoticeHooks *hooks, const char *fmt,...) /* * pqAddTuple * add a row pointer to the PGresult structure, growing it if necessary - * Returns TRUE if OK, FALSE if not enough memory to add the row + * Returns TRUE if OK, FALSE if an error prevented adding the row + * + * On error, *errmsgp can be set to an error string to be returned. + * If it is left NULL, the error is presumed to be "out of memory". */ static bool -pqAddTuple(PGresult *res, PGresAttValue *tup) +pqAddTuple(PGresult *res, PGresAttValue *tup, const char **errmsgp) { if (res->ntups >= res->tupArrSize) { @@ -865,9 +889,36 @@ pqAddTuple(PGresult *res, PGresAttValue *tup) * existing allocation. Note that the positions beyond res->ntups are * garbage, not necessarily NULL. */ - int newSize = (res->tupArrSize > 0) ? res->tupArrSize * 2 : 128; + int newSize; PGresAttValue **newTuples; + /* + * Since we use integers for row numbers, we can't support more than + * INT_MAX rows. Make sure we allow that many, though. + */ + if (res->tupArrSize <= INT_MAX / 2) + newSize = (res->tupArrSize > 0) ? res->tupArrSize * 2 : 128; + else if (res->tupArrSize < INT_MAX) + newSize = INT_MAX; + else + { + *errmsgp = libpq_gettext("PGresult cannot support more than INT_MAX tuples"); + return FALSE; + } + + /* + * Also, on 32-bit platforms we could, in theory, overflow size_t even + * before newSize gets to INT_MAX. (In practice we'd doubtless hit + * OOM long before that, but let's check.) + */ +#if INT_MAX >= (SIZE_MAX / 2) + if (newSize > SIZE_MAX / sizeof(PGresAttValue *)) + { + *errmsgp = libpq_gettext("size_t overflow"); + return FALSE; + } +#endif + if (res->tuples == NULL) newTuples = (PGresAttValue **) malloc(newSize * sizeof(PGresAttValue *)); @@ -1093,7 +1144,7 @@ pqRowProcessor(PGconn *conn, const char **errmsgp) } /* And add the tuple to the PGresult's tuple array */ - if (!pqAddTuple(res, tup)) + if (!pqAddTuple(res, tup, errmsgp)) goto fail; /* From 6cbee65eee20c144bb4de169c6802f20e76785b0 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 29 Aug 2017 15:38:05 -0400 Subject: [PATCH 0059/1087] Doc: document libpq's restriction to INT_MAX rows in a PGresult. As long as PQntuples, PQgetvalue, etc, use "int" for row numbers, we're pretty much stuck with this limitation. 
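A minimal sketch of the client-side pattern this limitation guarantees will work (process_row is a hypothetical callback; PQntuples and PQgetvalue are the ordinary libpq calls):

    int ntups = PQntuples(res);   /* row count fits in an int by construction */
    int i;

    for (i = 0; i < ntups; i++)
        process_row(PQgetvalue(res, i, 0));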
The documentation formerly stated that the result of PQntuples "might overflow on 32-bit operating systems", which is just nonsense: that's not where the overflow would happen, and if you did reach an overflow it would not be on a 32-bit machine, because you'd have OOM'd long since. Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com --- doc/src/sgml/libpq.sgml | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 8e0b0b8586..f154b6b5fa 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -3199,9 +3199,10 @@ void PQclear(PGresult *res); - Returns the number of rows (tuples) in the query result. Because - it returns an integer result, large result sets might overflow the - return value on 32-bit operating systems. + Returns the number of rows (tuples) in the query result. + (Note that PGresult objects are limited to no more + than INT_MAX rows, so an int result is + sufficient.) int PQntuples(const PGresult *res); From 00f6d5c2c3ae2f6d198e41800e5edcf0150d485b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 29 Aug 2017 19:33:24 -0400 Subject: [PATCH 0060/1087] doc: Avoid sidebar element The formatting of the sidebar element didn't carry over to the new tool chain. Instead of inventing a whole new way of dealing with it, just convert the one use to a "note". --- doc/src/sgml/client-auth.sgml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 2dd6c29350..1b568683a4 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -320,7 +320,7 @@ hostnossl database user hostssl, and hostnossl records. - + Users sometimes wonder why host names are handled in this seemingly complicated way, with two name resolutions @@ -350,7 +350,7 @@ hostnossl database user implementations of host name-based access control, such as the Apache HTTP Server and TCP Wrappers. - + From 7df2c1f8daeb361133ac8bdeaf59ceb0484e315a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 30 Aug 2017 09:29:55 -0400 Subject: [PATCH 0061/1087] Force rescanning of parallel-aware scan nodes below a Gather[Merge]. The ExecReScan machinery contains various optimizations for postponing or skipping rescans of plan subtrees; for example a HashAgg node may conclude that it can re-use the table it built before, instead of re-reading its input subtree. But that is wrong if the input contains a parallel-aware table scan node, since the portion of the table scanned by the leader process is likely to vary from one rescan to the next. This explains the timing-dependent buildfarm failures we saw after commit a2b70c89c. The established mechanism for showing that a plan node's output is potentially variable is to mark it as depending on some runtime Param. Hence, to fix this, invent a dummy Param (one that has a PARAM_EXEC parameter number, but carries no actual value) associated with each Gather or GatherMerge node, mark parallel-aware nodes below that node as dependent on that Param, and arrange for ExecReScanGather[Merge] to flag that Param as changed whenever the Gather[Merge] node is rescanned. This solution breaks an undocumented assumption made by the parallel executor logic, namely that all rescans of nodes below a Gather[Merge] will happen synchronously during the ReScan of the top node itself. 
But that's fundamentally contrary to the design of the ExecReScan code, and so was doomed to fail someday anyway (even if you want to argue that the bug being fixed here wasn't a failure of that assumption). A follow-on patch will address that issue. In the meantime, the worst that's expected to happen is that given very bad timing luck, the leader might have to do all the work during a rescan, because workers think they have nothing to do, if they are able to start up before the eventual ReScan of the leader's parallel-aware table scan node has reset the shared scan state. Although this problem exists in 9.6, there does not seem to be any way for it to manifest there. Without GatherMerge, it seems that a plan tree that has a rescan-short-circuiting node below Gather will always also have one above it that will short-circuit in the same cases, preventing the Gather from being rescanned. Hence we won't take the risk of back-patching this change into 9.6. But v10 needs it. Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com --- src/backend/executor/nodeGather.c | 22 +++++++- src/backend/executor/nodeGatherMerge.c | 22 +++++++- src/backend/nodes/copyfuncs.c | 2 + src/backend/nodes/outfuncs.c | 2 + src/backend/nodes/readfuncs.c | 2 + src/backend/optimizer/README | 21 ++++--- src/backend/optimizer/plan/createplan.c | 8 ++- src/backend/optimizer/plan/planner.c | 6 ++ src/backend/optimizer/plan/subselect.c | 73 +++++++++++++++++++++++-- src/include/nodes/plannodes.h | 15 ++++- src/include/nodes/relation.h | 6 +- 11 files changed, 155 insertions(+), 24 deletions(-) diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index a0f5a60d93..58f88a5724 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -432,6 +432,9 @@ ExecShutdownGather(GatherState *node) void ExecReScanGather(GatherState *node) { + Gather *gather = (Gather *) node->ps.plan; + PlanState *outerPlan = outerPlanState(node); + /* * Re-initialize the parallel workers to perform rescan of relation. We * want to gracefully shutdown all the workers so that they should be able @@ -445,5 +448,22 @@ ExecReScanGather(GatherState *node) if (node->pei) ExecParallelReinitialize(node->pei); - ExecReScan(node->ps.lefttree); + /* + * Set child node's chgParam to tell it that the next scan might deliver a + * different set of rows within the leader process. (The overall rowset + * shouldn't change, but the leader process's subset might; hence nodes + * between here and the parallel table scan node mustn't optimize on the + * assumption of an unchanging rowset.) + */ + if (gather->rescan_param >= 0) + outerPlan->chgParam = bms_add_member(outerPlan->chgParam, + gather->rescan_param); + + + /* + * if chgParam of subnode is not null then plan will be re-scanned by + * first ExecProcNode. + */ + if (outerPlan->chgParam == NULL) + ExecReScan(outerPlan); } diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 2526c584fd..f50841699c 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -327,6 +327,9 @@ ExecShutdownGatherMergeWorkers(GatherMergeState *node) void ExecReScanGatherMerge(GatherMergeState *node) { + GatherMerge *gm = (GatherMerge *) node->ps.plan; + PlanState *outerPlan = outerPlanState(node); + /* * Re-initialize the parallel workers to perform rescan of relation. 
We * want to gracefully shutdown all the workers so that they should be able @@ -341,7 +344,24 @@ ExecReScanGatherMerge(GatherMergeState *node) if (node->pei) ExecParallelReinitialize(node->pei); - ExecReScan(node->ps.lefttree); + /* + * Set child node's chgParam to tell it that the next scan might deliver a + * different set of rows within the leader process. (The overall rowset + * shouldn't change, but the leader process's subset might; hence nodes + * between here and the parallel table scan node mustn't optimize on the + * assumption of an unchanging rowset.) + */ + if (gm->rescan_param >= 0) + outerPlan->chgParam = bms_add_member(outerPlan->chgParam, + gm->rescan_param); + + + /* + * if chgParam of subnode is not null then plan will be re-scanned by + * first ExecProcNode. + */ + if (outerPlan->chgParam == NULL) + ExecReScan(outerPlan); } /* diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 72041693df..f9ddf4ed76 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -361,6 +361,7 @@ _copyGather(const Gather *from) * copy remainder of node */ COPY_SCALAR_FIELD(num_workers); + COPY_SCALAR_FIELD(rescan_param); COPY_SCALAR_FIELD(single_copy); COPY_SCALAR_FIELD(invisible); @@ -384,6 +385,7 @@ _copyGatherMerge(const GatherMerge *from) * copy remainder of node */ COPY_SCALAR_FIELD(num_workers); + COPY_SCALAR_FIELD(rescan_param); COPY_SCALAR_FIELD(numCols); COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber)); COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid)); diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 5ce3c7c599..9ee3e23761 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -479,6 +479,7 @@ _outGather(StringInfo str, const Gather *node) _outPlanInfo(str, (const Plan *) node); WRITE_INT_FIELD(num_workers); + WRITE_INT_FIELD(rescan_param); WRITE_BOOL_FIELD(single_copy); WRITE_BOOL_FIELD(invisible); } @@ -493,6 +494,7 @@ _outGatherMerge(StringInfo str, const GatherMerge *node) _outPlanInfo(str, (const Plan *) node); WRITE_INT_FIELD(num_workers); + WRITE_INT_FIELD(rescan_param); WRITE_INT_FIELD(numCols); appendStringInfoString(str, " :sortColIdx"); diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 86c811de49..67b9e19d29 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -2163,6 +2163,7 @@ _readGather(void) ReadCommonPlan(&local_node->plan); READ_INT_FIELD(num_workers); + READ_INT_FIELD(rescan_param); READ_BOOL_FIELD(single_copy); READ_BOOL_FIELD(invisible); @@ -2180,6 +2181,7 @@ _readGatherMerge(void) ReadCommonPlan(&local_node->plan); READ_INT_FIELD(num_workers); + READ_INT_FIELD(rescan_param); READ_INT_FIELD(numCols); READ_ATTRNUMBER_ARRAY(sortColIdx, local_node->numCols); READ_OID_ARRAY(sortOperators, local_node->numCols); diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README index fc0fca4107..62242e8564 100644 --- a/src/backend/optimizer/README +++ b/src/backend/optimizer/README @@ -374,6 +374,7 @@ RelOptInfo - a relation or joined relations MaterialPath - a Material plan node UniquePath - remove duplicate rows (either by hashing or sorting) GatherPath - collect the results of parallel workers + GatherMergePath - collect parallel results, preserving their common sort order ProjectionPath - a Result plan node with child (used for projection) ProjectSetPath - a ProjectSet plan node applied to some sub-path SortPath - a Sort plan node applied to some sub-path @@ 
-1030,7 +1031,7 @@ either by an entire query or some portion of the query in such a way that some of that work can be done by one or more worker processes, which are called parallel workers. Parallel workers are a subtype of dynamic background workers; see src/backend/access/transam/README.parallel for a -fuller description. Academic literature on parallel query suggests that +fuller description. The academic literature on parallel query suggests that parallel execution strategies can be divided into essentially two categories: pipelined parallelism, where the execution of the query is divided into multiple stages and each stage is handled by a separate @@ -1046,16 +1047,14 @@ that the underlying table be partitioned. It only requires that (1) there is some method of dividing the data from at least one of the base tables involved in the relation across multiple processes, (2) allowing each process to handle its own portion of the data, and then (3) -collecting the results. Requirements (2) and (3) is satisfied by the -executor node Gather, which launches any number of worker processes and -executes its single child plan in all of them (and perhaps in the leader -also, if the children aren't generating enough data to keep the leader -busy). Requirement (1) is handled by the SeqScan node: when invoked -with parallel_aware = true, this node will, in effect, partition the -table on a block by block basis, returning a subset of the tuples from -the relation in each worker where that SeqScan is executed. A similar -scheme could be (and probably should be) implemented for bitmap heap -scans. +collecting the results. Requirements (2) and (3) are satisfied by the +executor node Gather (or GatherMerge), which launches any number of worker +processes and executes its single child plan in all of them, and perhaps +in the leader also, if the children aren't generating enough data to keep +the leader busy. Requirement (1) is handled by the table scan node: when +invoked with parallel_aware = true, this node will, in effect, partition +the table on a block by block basis, returning a subset of the tuples from +the relation in each worker where that scan node is executed. Just as we do for non-parallel access methods, we build Paths to represent access strategies that can be used in a parallel plan. These diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index 5c934f223d..28216629aa 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -267,7 +267,7 @@ static Unique *make_unique_from_sortclauses(Plan *lefttree, List *distinctList); static Unique *make_unique_from_pathkeys(Plan *lefttree, List *pathkeys, int numCols); static Gather *make_gather(List *qptlist, List *qpqual, - int nworkers, bool single_copy, Plan *subplan); + int nworkers, int rescan_param, bool single_copy, Plan *subplan); static SetOp *make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree, List *distinctList, AttrNumber flagColIdx, int firstFlag, long numGroups); @@ -1471,6 +1471,7 @@ create_gather_plan(PlannerInfo *root, GatherPath *best_path) gather_plan = make_gather(tlist, NIL, best_path->num_workers, + SS_assign_special_param(root), best_path->single_copy, subplan); @@ -1505,6 +1506,9 @@ create_gather_merge_plan(PlannerInfo *root, GatherMergePath *best_path) gm_plan->num_workers = best_path->num_workers; copy_generic_path_info(&gm_plan->plan, &best_path->path); + /* Assign the rescan Param. 
*/ + gm_plan->rescan_param = SS_assign_special_param(root); + /* Gather Merge is pointless with no pathkeys; use Gather instead. */ Assert(pathkeys != NIL); @@ -6238,6 +6242,7 @@ static Gather * make_gather(List *qptlist, List *qpqual, int nworkers, + int rescan_param, bool single_copy, Plan *subplan) { @@ -6249,6 +6254,7 @@ make_gather(List *qptlist, plan->lefttree = subplan; plan->righttree = NULL; node->num_workers = nworkers; + node->rescan_param = rescan_param; node->single_copy = single_copy; node->invisible = false; diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index fdef00ab39..966230256e 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -374,6 +374,12 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) gather->single_copy = true; gather->invisible = (force_parallel_mode == FORCE_PARALLEL_REGRESS); + /* + * Since this Gather has no parallel-aware descendants to signal to, + * we don't need a rescan Param. + */ + gather->rescan_param = -1; + /* * Ideally we'd use cost_gather here, but setting up dummy path data * to satisfy it doesn't seem much cleaner than knowing what it does. diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c index ffbd3eeed7..1103984779 100644 --- a/src/backend/optimizer/plan/subselect.c +++ b/src/backend/optimizer/plan/subselect.c @@ -79,6 +79,7 @@ static Node *process_sublinks_mutator(Node *node, process_sublinks_context *context); static Bitmapset *finalize_plan(PlannerInfo *root, Plan *plan, + int gather_param, Bitmapset *valid_params, Bitmapset *scan_params); static bool finalize_primnode(Node *node, finalize_primnode_context *context); @@ -2217,12 +2218,15 @@ void SS_finalize_plan(PlannerInfo *root, Plan *plan) { /* No setup needed, just recurse through plan tree. */ - (void) finalize_plan(root, plan, root->outer_params, NULL); + (void) finalize_plan(root, plan, -1, root->outer_params, NULL); } /* * Recursive processing of all nodes in the plan tree * + * gather_param is the rescan_param of an ancestral Gather/GatherMerge, + * or -1 if there is none. + * * valid_params is the set of param IDs supplied by outer plan levels * that are valid to reference in this plan node or its children. * @@ -2249,7 +2253,9 @@ SS_finalize_plan(PlannerInfo *root, Plan *plan) * can be handled more cleanly. */ static Bitmapset * -finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, +finalize_plan(PlannerInfo *root, Plan *plan, + int gather_param, + Bitmapset *valid_params, Bitmapset *scan_params) { finalize_primnode_context context; @@ -2302,6 +2308,18 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, finalize_primnode((Node *) plan->targetlist, &context); finalize_primnode((Node *) plan->qual, &context); + /* + * If it's a parallel-aware scan node, mark it as dependent on the parent + * Gather/GatherMerge's rescan Param. 
+ */ + if (plan->parallel_aware) + { + if (gather_param < 0) + elog(ERROR, "parallel-aware plan node is not below a Gather"); + context.paramids = + bms_add_member(context.paramids, gather_param); + } + /* Check additional node-type-specific fields */ switch (nodeTag(plan)) { @@ -2512,6 +2530,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, bms_add_members(context.paramids, finalize_plan(root, (Plan *) lfirst(lc), + gather_param, valid_params, scan_params)); } @@ -2542,6 +2561,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, bms_add_members(context.paramids, finalize_plan(root, (Plan *) lfirst(l), + gather_param, valid_params, scan_params)); } @@ -2558,6 +2578,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, bms_add_members(context.paramids, finalize_plan(root, (Plan *) lfirst(l), + gather_param, valid_params, scan_params)); } @@ -2574,6 +2595,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, bms_add_members(context.paramids, finalize_plan(root, (Plan *) lfirst(l), + gather_param, valid_params, scan_params)); } @@ -2590,6 +2612,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, bms_add_members(context.paramids, finalize_plan(root, (Plan *) lfirst(l), + gather_param, valid_params, scan_params)); } @@ -2606,6 +2629,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, bms_add_members(context.paramids, finalize_plan(root, (Plan *) lfirst(l), + gather_param, valid_params, scan_params)); } @@ -2697,13 +2721,51 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, &context); break; + case T_Gather: + /* child nodes are allowed to reference rescan_param, if any */ + locally_added_param = ((Gather *) plan)->rescan_param; + if (locally_added_param >= 0) + { + valid_params = bms_add_member(bms_copy(valid_params), + locally_added_param); + + /* + * We currently don't support nested Gathers. The issue so + * far as this function is concerned would be how to identify + * which child nodes depend on which Gather. + */ + Assert(gather_param < 0); + /* Pass down rescan_param to child parallel-aware nodes */ + gather_param = locally_added_param; + } + /* rescan_param does *not* get added to scan_params */ + break; + + case T_GatherMerge: + /* child nodes are allowed to reference rescan_param, if any */ + locally_added_param = ((GatherMerge *) plan)->rescan_param; + if (locally_added_param >= 0) + { + valid_params = bms_add_member(bms_copy(valid_params), + locally_added_param); + + /* + * We currently don't support nested Gathers. The issue so + * far as this function is concerned would be how to identify + * which child nodes depend on which Gather. 
+ */ + Assert(gather_param < 0); + /* Pass down rescan_param to child parallel-aware nodes */ + gather_param = locally_added_param; + } + /* rescan_param does *not* get added to scan_params */ + break; + case T_ProjectSet: case T_Hash: case T_Material: case T_Sort: case T_Unique: - case T_Gather: - case T_GatherMerge: case T_SetOp: case T_Group: /* no node-type-specific fields need fixing */ @@ -2717,6 +2779,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, /* Process left and right child plans, if any */ child_params = finalize_plan(root, plan->lefttree, + gather_param, valid_params, scan_params); context.paramids = bms_add_members(context.paramids, child_params); @@ -2726,6 +2789,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, /* right child can reference nestloop_params as well as valid_params */ child_params = finalize_plan(root, plan->righttree, + gather_param, bms_union(nestloop_params, valid_params), scan_params); /* ... and they don't count as parameters used at my level */ @@ -2737,6 +2801,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params, /* easy case */ child_params = finalize_plan(root, plan->righttree, + gather_param, valid_params, scan_params); } diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h index 7c51e7f9d2..a382331f41 100644 --- a/src/include/nodes/plannodes.h +++ b/src/include/nodes/plannodes.h @@ -825,13 +825,21 @@ typedef struct Unique /* ------------ * gather node + * + * Note: rescan_param is the ID of a PARAM_EXEC parameter slot. That slot + * will never actually contain a value, but the Gather node must flag it as + * having changed whenever it is rescanned. The child parallel-aware scan + * nodes are marked as depending on that parameter, so that the rescan + * machinery is aware that their output is likely to change across rescans. + * In some cases we don't need a rescan Param, so rescan_param is set to -1. * ------------ */ typedef struct Gather { Plan plan; - int num_workers; - bool single_copy; + int num_workers; /* planned number of worker processes */ + int rescan_param; /* ID of Param that signals a rescan, or -1 */ + bool single_copy; /* don't execute plan more than once */ bool invisible; /* suppress EXPLAIN display (for testing)? */ } Gather; @@ -842,7 +850,8 @@ typedef struct Gather typedef struct GatherMerge { Plan plan; - int num_workers; + int num_workers; /* planned number of worker processes */ + int rescan_param; /* ID of Param that signals a rescan, or -1 */ /* remaining fields are just like the sort-key info in struct Sort */ int numCols; /* number of sort-key columns */ AttrNumber *sortColIdx; /* their indexes in the target list */ diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 3ccc9d1b03..a39e59d8ac 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -1268,9 +1268,9 @@ typedef struct GatherPath } GatherPath; /* - * GatherMergePath runs several copies of a plan in parallel and - * collects the results. For gather merge parallel leader always execute the - * plan. + * GatherMergePath runs several copies of a plan in parallel and collects + * the results, preserving their common sort order. For gather merge, the + * parallel leader always executes the plan too, so we don't need single_copy. 
*/ typedef struct GatherMergePath { From 6c2c5bea3cec4c874d1ee225bb6e222055c03d75 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 30 Aug 2017 09:59:23 -0400 Subject: [PATCH 0062/1087] Restore test case from a2b70c89ca1a5fcf6181d3c777d82e7b83d2de1b. Revert the reversion commits a20aac890 and 9b644745c. In the wake of commit 7df2c1f8d, we should get stable buildfarm results from this test; if not, I'd like to know sooner not later. Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com --- src/test/regress/expected/select_parallel.out | 43 +++++++++++++++++++ src/test/regress/sql/select_parallel.sql | 16 +++++++ 2 files changed, 59 insertions(+) diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index ccad18e978..888da5abf2 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -300,6 +300,49 @@ select count(*) from tenk1 group by twenty; 500 (20 rows) +--test rescan behavior of gather merge +set enable_material = false; +explain (costs off) +select * from + (select string4, count(unique2) + from tenk1 group by string4 order by string4) ss + right join (values (1),(2),(3)) v(x) on true; + QUERY PLAN +---------------------------------------------------------- + Nested Loop Left Join + -> Values Scan on "*VALUES*" + -> Finalize GroupAggregate + Group Key: tenk1.string4 + -> Gather Merge + Workers Planned: 4 + -> Partial GroupAggregate + Group Key: tenk1.string4 + -> Sort + Sort Key: tenk1.string4 + -> Parallel Seq Scan on tenk1 +(11 rows) + +select * from + (select string4, count(unique2) + from tenk1 group by string4 order by string4) ss + right join (values (1),(2),(3)) v(x) on true; + string4 | count | x +---------+-------+--- + AAAAxx | 2500 | 1 + HHHHxx | 2500 | 1 + OOOOxx | 2500 | 1 + VVVVxx | 2500 | 1 + AAAAxx | 2500 | 2 + HHHHxx | 2500 | 2 + OOOOxx | 2500 | 2 + VVVVxx | 2500 | 2 + AAAAxx | 2500 | 3 + HHHHxx | 2500 | 3 + OOOOxx | 2500 | 3 + VVVVxx | 2500 | 3 +(12 rows) + +reset enable_material; reset enable_hashagg; -- gather merge test with a LIMIT explain (costs off) diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index c0debddbcd..cefb5a27d4 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -118,6 +118,22 @@ explain (costs off) select count(*) from tenk1 group by twenty; +--test rescan behavior of gather merge +set enable_material = false; + +explain (costs off) +select * from + (select string4, count(unique2) + from tenk1 group by string4 order by string4) ss + right join (values (1),(2),(3)) v(x) on true; + +select * from + (select string4, count(unique2) + from tenk1 group by string4 order by string4) ss + right join (values (1),(2),(3)) v(x) on true; + +reset enable_material; + reset enable_hashagg; -- gather merge test with a LIMIT From 41b0dd987d44089dc48e9c70024277e253b396b7 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 30 Aug 2017 13:18:16 -0400 Subject: [PATCH 0063/1087] Separate reinitialization of shared parallel-scan state from ExecReScan. Previously, the parallel executor logic did reinitialization of shared state within the ExecReScan code for parallel-aware scan nodes. This is problematic, because it means that the ExecReScan call has to occur synchronously (ie, during the parent Gather node's ReScan call). 
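(Editorial aside, condensed from the execParallel.c change in this patch; names are from the patch.) The leader-side entry point after this change resets all shared state before any fresh worker can start:

    void
    ExecParallelReinitialize(PlanState *planstate,
                             ParallelExecutorInfo *pei)
    {
        /* Old workers must already be shut down */
        Assert(pei->finished);

        ReinitializeParallelDSM(pei->pcxt);
        pei->tqueue = ExecParallelSetupTupleQueues(pei->pcxt, true);
        pei->finished = false;

        /* Traverse plan tree and let each child node reset associated state. */
        ExecParallelReInitializeDSM(planstate, pei->pcxt);
    }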
That is swimming very much against the tide so far as the ExecReScan machinery is concerned; the fact that it works at all today depends on a lot of fragile assumptions, such as that no plan node between Gather and a parallel-aware scan node is parameterized. Another objection is that because ExecReScan might be called in workers as well as the leader, hacky extra tests are needed in some places to prevent unwanted shared-state resets. Hence, let's separate this code into two functions, a ReInitializeDSM call and the ReScan call proper. ReInitializeDSM is called only in the leader and is guaranteed to run before we start new workers. ReScan is returned to its traditional function of resetting only local state, which means that ExecReScan's usual habits of delaying or eliminating child rescan calls are safe again. As with the preceding commit 7df2c1f8d, it doesn't seem to be necessary to make these changes in 9.6, which is a good thing because the FDW and CustomScan APIs are impacted. Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com --- doc/src/sgml/custom-scan.sgml | 29 +++++-- doc/src/sgml/fdwhandler.sgml | 43 ++++++--- src/backend/access/heap/heapam.c | 28 +++--- src/backend/executor/execParallel.c | 101 ++++++++++++++++++---- src/backend/executor/nodeBitmapHeapscan.c | 42 +++++---- src/backend/executor/nodeCustom.c | 15 ++++ src/backend/executor/nodeForeignscan.c | 23 ++++- src/backend/executor/nodeGather.c | 29 ++++--- src/backend/executor/nodeGatherMerge.c | 29 ++++--- src/backend/executor/nodeIndexonlyscan.c | 29 +++---- src/backend/executor/nodeIndexscan.c | 38 ++++---- src/backend/executor/nodeSeqscan.c | 16 ++++ src/backend/executor/nodeSort.c | 17 ++++ src/include/access/heapam.h | 1 + src/include/executor/execParallel.h | 3 +- src/include/executor/nodeBitmapHeapscan.h | 2 + src/include/executor/nodeCustom.h | 2 + src/include/executor/nodeForeignscan.h | 2 + src/include/executor/nodeIndexonlyscan.h | 2 + src/include/executor/nodeIndexscan.h | 1 + src/include/executor/nodeSeqscan.h | 1 + src/include/executor/nodeSort.h | 1 + src/include/foreign/fdwapi.h | 4 + src/include/nodes/extensible.h | 3 + 24 files changed, 328 insertions(+), 133 deletions(-) diff --git a/doc/src/sgml/custom-scan.sgml b/doc/src/sgml/custom-scan.sgml index 6159c3a24e..9d1ca7bfe1 100644 --- a/doc/src/sgml/custom-scan.sgml +++ b/doc/src/sgml/custom-scan.sgml @@ -320,22 +320,39 @@ void (*InitializeDSMCustomScan) (CustomScanState *node, void *coordinate); Initialize the dynamic shared memory that will be required for parallel - operation; coordinate points to an amount of allocated space - equal to the return value of EstimateDSMCustomScan. + operation. coordinate points to a shared memory area of + size equal to the return value of EstimateDSMCustomScan. This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. +void (*ReInitializeDSMCustomScan) (CustomScanState *node, + ParallelContext *pcxt, + void *coordinate); + + Re-initialize the dynamic shared memory required for parallel operation + when the custom-scan plan node is about to be re-scanned. + This callback is optional, and need only be supplied if this custom + scan provider supports parallel execution. + Recommended practice is that this callback reset only shared state, + while the ReScanCustomScan callback resets only local + state. Currently, this callback will be called + before ReScanCustomScan, but it's best not to rely on + that ordering. 
+ + + + void (*InitializeWorkerCustomScan) (CustomScanState *node, shm_toc *toc, void *coordinate); - Initialize a parallel worker's custom state based on the shared state - set up in the leader by InitializeDSMCustomScan. - This callback is optional, and needs only be supplied if this - custom path supports parallel execution. + Initialize a parallel worker's local state based on the shared state + set up by the leader during InitializeDSMCustomScan. + This callback is optional, and need only be supplied if this custom + scan provider supports parallel execution. diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml index dbeaab555d..cfa6808417 100644 --- a/doc/src/sgml/fdwhandler.sgml +++ b/doc/src/sgml/fdwhandler.sgml @@ -1191,12 +1191,12 @@ ImportForeignSchema (ImportForeignSchemaStmt *stmt, Oid serverOid); A ForeignScan node can, optionally, support parallel execution. A parallel ForeignScan will be executed - in multiple processes and should return each row only once across + in multiple processes and must return each row exactly once across all cooperating processes. To do this, processes can coordinate through - fixed size chunks of dynamic shared memory. This shared memory is not - guaranteed to be mapped at the same address in every process, so pointers - may not be used. The following callbacks are all optional in general, - but required if parallel execution is to be supported. + fixed-size chunks of dynamic shared memory. This shared memory is not + guaranteed to be mapped at the same address in every process, so it + must not contain pointers. The following functions are all optional, + but most are required if parallel execution is to be supported. @@ -1215,7 +1215,7 @@ IsForeignScanParallelSafe(PlannerInfo *root, RelOptInfo *rel, - If this callback is not defined, it is assumed that the scan must take + If this function is not defined, it is assumed that the scan must take place within the parallel leader. Note that returning true does not mean that the scan itself can be done in parallel, only that the scan can be performed within a parallel worker. Therefore, it can be useful to define @@ -1230,6 +1230,9 @@ EstimateDSMForeignScan(ForeignScanState *node, ParallelContext *pcxt); Estimate the amount of dynamic shared memory that will be required for parallel operation. This may be higher than the amount that will actually be used, but it must not be lower. The return value is in bytes. + This function is optional, and can be omitted if not needed; but if it + is omitted, the next three functions must be omitted as well, because + no shared memory will be allocated for the FDW's use. @@ -1239,8 +1242,25 @@ InitializeDSMForeignScan(ForeignScanState *node, ParallelContext *pcxt, void *coordinate); Initialize the dynamic shared memory that will be required for parallel - operation; coordinate points to an amount of allocated space - equal to the return value of EstimateDSMForeignScan. + operation. coordinate points to a shared memory area of + size equal to the return value of EstimateDSMForeignScan. + This function is optional, and can be omitted if not needed. + + + + +void +ReInitializeDSMForeignScan(ForeignScanState *node, ParallelContext *pcxt, + void *coordinate); + + Re-initialize the dynamic shared memory required for parallel operation + when the foreign-scan plan node is about to be re-scanned. + This function is optional, and can be omitted if not needed. 
+ Recommended practice is that this function reset only shared state, + while the ReScanForeignScan function resets only local + state. Currently, this function will be called + before ReScanForeignScan, but it's best not to rely on + that ordering. @@ -1249,10 +1269,9 @@ void InitializeWorkerForeignScan(ForeignScanState *node, shm_toc *toc, void *coordinate); - Initialize a parallel worker's custom state based on the shared state - set up in the leader by InitializeDSMForeignScan. - This callback is optional, and needs only be supplied if this - custom path supports parallel execution. + Initialize a parallel worker's local state based on the shared state + set up by the leader during InitializeDSMForeignScan. + This function is optional, and can be omitted if not needed. diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index ff03c68fcd..e29c5ad086 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -1531,21 +1531,6 @@ heap_rescan(HeapScanDesc scan, * reinitialize scan descriptor */ initscan(scan, key, true); - - /* - * reset parallel scan, if present - */ - if (scan->rs_parallel != NULL) - { - ParallelHeapScanDesc parallel_scan; - - /* - * Caller is responsible for making sure that all workers have - * finished the scan before calling this. - */ - parallel_scan = scan->rs_parallel; - pg_atomic_write_u64(¶llel_scan->phs_nallocated, 0); - } } /* ---------------- @@ -1642,6 +1627,19 @@ heap_parallelscan_initialize(ParallelHeapScanDesc target, Relation relation, SerializeSnapshot(snapshot, target->phs_snapshot_data); } +/* ---------------- + * heap_parallelscan_reinitialize - reset a parallel scan + * + * Call this in the leader process. Caller is responsible for + * making sure that all workers have finished the scan beforehand. + * ---------------- + */ +void +heap_parallelscan_reinitialize(ParallelHeapScanDesc parallel_scan) +{ + pg_atomic_write_u64(¶llel_scan->phs_nallocated, 0); +} + /* ---------------- * heap_beginscan_parallel - join a parallel scan * diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 01316ff5d9..c713b85139 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -119,6 +119,8 @@ static bool ExecParallelInitializeDSM(PlanState *node, ExecParallelInitializeDSMContext *d); static shm_mq_handle **ExecParallelSetupTupleQueues(ParallelContext *pcxt, bool reinitialize); +static bool ExecParallelReInitializeDSM(PlanState *planstate, + ParallelContext *pcxt); static bool ExecParallelRetrieveInstrumentation(PlanState *planstate, SharedExecutorInstrumentation *instrumentation); @@ -255,6 +257,8 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e) case T_SortState: /* even when not parallel-aware */ ExecSortEstimate((SortState *) planstate, e->pcxt); + break; + default: break; } @@ -325,6 +329,8 @@ ExecParallelInitializeDSM(PlanState *planstate, case T_SortState: /* even when not parallel-aware */ ExecSortInitializeDSM((SortState *) planstate, d->pcxt); + break; + default: break; } @@ -384,18 +390,6 @@ ExecParallelSetupTupleQueues(ParallelContext *pcxt, bool reinitialize) return responseq; } -/* - * Re-initialize the parallel executor info such that it can be reused by - * workers. 
- */ -void -ExecParallelReinitialize(ParallelExecutorInfo *pei) -{ - ReinitializeParallelDSM(pei->pcxt); - pei->tqueue = ExecParallelSetupTupleQueues(pei->pcxt, true); - pei->finished = false; -} - /* * Sets up the required infrastructure for backend workers to perform * execution and return results to the main backend. @@ -599,7 +593,7 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, ExecParallelInitializeDSM(planstate, &d); /* - * Make sure that the world hasn't shifted under our feat. This could + * Make sure that the world hasn't shifted under our feet. This could * probably just be an Assert(), but let's be conservative for now. */ if (e.nnodes != d.nnodes) @@ -609,6 +603,82 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, return pei; } +/* + * Re-initialize the parallel executor shared memory state before launching + * a fresh batch of workers. + */ +void +ExecParallelReinitialize(PlanState *planstate, + ParallelExecutorInfo *pei) +{ + /* Old workers must already be shut down */ + Assert(pei->finished); + + ReinitializeParallelDSM(pei->pcxt); + pei->tqueue = ExecParallelSetupTupleQueues(pei->pcxt, true); + pei->finished = false; + + /* Traverse plan tree and let each child node reset associated state. */ + ExecParallelReInitializeDSM(planstate, pei->pcxt); +} + +/* + * Traverse plan tree to reinitialize per-node dynamic shared memory state + */ +static bool +ExecParallelReInitializeDSM(PlanState *planstate, + ParallelContext *pcxt) +{ + if (planstate == NULL) + return false; + + /* + * Call reinitializers for DSM-using plan nodes. + */ + switch (nodeTag(planstate)) + { + case T_SeqScanState: + if (planstate->plan->parallel_aware) + ExecSeqScanReInitializeDSM((SeqScanState *) planstate, + pcxt); + break; + case T_IndexScanState: + if (planstate->plan->parallel_aware) + ExecIndexScanReInitializeDSM((IndexScanState *) planstate, + pcxt); + break; + case T_IndexOnlyScanState: + if (planstate->plan->parallel_aware) + ExecIndexOnlyScanReInitializeDSM((IndexOnlyScanState *) planstate, + pcxt); + break; + case T_ForeignScanState: + if (planstate->plan->parallel_aware) + ExecForeignScanReInitializeDSM((ForeignScanState *) planstate, + pcxt); + break; + case T_CustomScanState: + if (planstate->plan->parallel_aware) + ExecCustomScanReInitializeDSM((CustomScanState *) planstate, + pcxt); + break; + case T_BitmapHeapScanState: + if (planstate->plan->parallel_aware) + ExecBitmapHeapReInitializeDSM((BitmapHeapScanState *) planstate, + pcxt); + break; + case T_SortState: + /* even when not parallel-aware */ + ExecSortReInitializeDSM((SortState *) planstate, pcxt); + break; + + default: + break; + } + + return planstate_tree_walker(planstate, ExecParallelReInitializeDSM, pcxt); +} + /* * Copy instrumentation information about this node and its descendants from * dynamic shared memory. 
@@ -845,12 +915,13 @@ ExecParallelInitializeWorker(PlanState *planstate, shm_toc *toc) break; case T_BitmapHeapScanState: if (planstate->plan->parallel_aware) - ExecBitmapHeapInitializeWorker( - (BitmapHeapScanState *) planstate, toc); + ExecBitmapHeapInitializeWorker((BitmapHeapScanState *) planstate, toc); break; case T_SortState: /* even when not parallel-aware */ ExecSortInitializeWorker((SortState *) planstate, toc); + break; + default: break; } diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c index 79f534e4e9..f7e55e0b45 100644 --- a/src/backend/executor/nodeBitmapHeapscan.c +++ b/src/backend/executor/nodeBitmapHeapscan.c @@ -705,23 +705,6 @@ ExecReScanBitmapHeapScan(BitmapHeapScanState *node) node->shared_tbmiterator = NULL; node->shared_prefetch_iterator = NULL; - /* Reset parallel bitmap state, if present */ - if (node->pstate) - { - dsa_area *dsa = node->ss.ps.state->es_query_dsa; - - node->pstate->state = BM_INITIAL; - - if (DsaPointerIsValid(node->pstate->tbmiterator)) - tbm_free_shared_area(dsa, node->pstate->tbmiterator); - - if (DsaPointerIsValid(node->pstate->prefetch_iterator)) - tbm_free_shared_area(dsa, node->pstate->prefetch_iterator); - - node->pstate->tbmiterator = InvalidDsaPointer; - node->pstate->prefetch_iterator = InvalidDsaPointer; - } - ExecScanReScan(&node->ss); /* @@ -999,6 +982,31 @@ ExecBitmapHeapInitializeDSM(BitmapHeapScanState *node, node->pstate = pstate; } +/* ---------------------------------------------------------------- + * ExecBitmapHeapReInitializeDSM + * + * Reset shared state before beginning a fresh scan. + * ---------------------------------------------------------------- + */ +void +ExecBitmapHeapReInitializeDSM(BitmapHeapScanState *node, + ParallelContext *pcxt) +{ + ParallelBitmapHeapState *pstate = node->pstate; + dsa_area *dsa = node->ss.ps.state->es_query_dsa; + + pstate->state = BM_INITIAL; + + if (DsaPointerIsValid(pstate->tbmiterator)) + tbm_free_shared_area(dsa, pstate->tbmiterator); + + if (DsaPointerIsValid(pstate->prefetch_iterator)) + tbm_free_shared_area(dsa, pstate->prefetch_iterator); + + pstate->tbmiterator = InvalidDsaPointer; + pstate->prefetch_iterator = InvalidDsaPointer; +} + /* ---------------------------------------------------------------- * ExecBitmapHeapInitializeWorker * diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c index fb7645b1f4..07dcabef55 100644 --- a/src/backend/executor/nodeCustom.c +++ b/src/backend/executor/nodeCustom.c @@ -194,6 +194,21 @@ ExecCustomScanInitializeDSM(CustomScanState *node, ParallelContext *pcxt) } } +void +ExecCustomScanReInitializeDSM(CustomScanState *node, ParallelContext *pcxt) +{ + const CustomExecMethods *methods = node->methods; + + if (methods->ReInitializeDSMCustomScan) + { + int plan_node_id = node->ss.ps.plan->plan_node_id; + void *coordinate; + + coordinate = shm_toc_lookup(pcxt->toc, plan_node_id, false); + methods->ReInitializeDSMCustomScan(node, pcxt, coordinate); + } +} + void ExecCustomScanInitializeWorker(CustomScanState *node, shm_toc *toc) { diff --git a/src/backend/executor/nodeForeignscan.c b/src/backend/executor/nodeForeignscan.c index 140e82ef5e..20892d6d5f 100644 --- a/src/backend/executor/nodeForeignscan.c +++ b/src/backend/executor/nodeForeignscan.c @@ -332,7 +332,28 @@ ExecForeignScanInitializeDSM(ForeignScanState *node, ParallelContext *pcxt) } /* ---------------------------------------------------------------- - * ExecForeignScanInitializeDSM + * 
ExecForeignScanReInitializeDSM + * + * Reset shared state before beginning a fresh scan. + * ---------------------------------------------------------------- + */ +void +ExecForeignScanReInitializeDSM(ForeignScanState *node, ParallelContext *pcxt) +{ + FdwRoutine *fdwroutine = node->fdwroutine; + + if (fdwroutine->ReInitializeDSMForeignScan) + { + int plan_node_id = node->ss.ps.plan->plan_node_id; + void *coordinate; + + coordinate = shm_toc_lookup(pcxt->toc, plan_node_id, false); + fdwroutine->ReInitializeDSMForeignScan(node, pcxt, coordinate); + } +} + +/* ---------------------------------------------------------------- + * ExecForeignScanInitializeWorker * * Initialization according to the parallel coordination information * ---------------------------------------------------------------- diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 58f88a5724..f9cf1b2f87 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -153,12 +153,15 @@ ExecGather(PlanState *pstate) { ParallelContext *pcxt; - /* Initialize the workers required to execute Gather node. */ + /* Initialize, or re-initialize, shared state needed by workers. */ if (!node->pei) node->pei = ExecInitParallelPlan(node->ps.lefttree, estate, gather->num_workers, node->tuples_needed); + else + ExecParallelReinitialize(node->ps.lefttree, + node->pei); /* * Register backend workers. We might not get as many as we @@ -426,7 +429,7 @@ ExecShutdownGather(GatherState *node) /* ---------------------------------------------------------------- * ExecReScanGather * - * Re-initialize the workers and rescans a relation via them. + * Prepare to re-scan the result of a Gather. * ---------------------------------------------------------------- */ void @@ -435,19 +438,12 @@ ExecReScanGather(GatherState *node) Gather *gather = (Gather *) node->ps.plan; PlanState *outerPlan = outerPlanState(node); - /* - * Re-initialize the parallel workers to perform rescan of relation. We - * want to gracefully shutdown all the workers so that they should be able - * to propagate any error or other information to master backend before - * dying. Parallel context will be reused for rescan. - */ + /* Make sure any existing workers are gracefully shut down */ ExecShutdownGatherWorkers(node); + /* Mark node so that shared state will be rebuilt at next call */ node->initialized = false; - if (node->pei) - ExecParallelReinitialize(node->pei); - /* * Set child node's chgParam to tell it that the next scan might deliver a * different set of rows within the leader process. (The overall rowset @@ -459,10 +455,15 @@ ExecReScanGather(GatherState *node) outerPlan->chgParam = bms_add_member(outerPlan->chgParam, gather->rescan_param); - /* - * if chgParam of subnode is not null then plan will be re-scanned by - * first ExecProcNode. + * If chgParam of subnode is not null then plan will be re-scanned by + * first ExecProcNode. Note: because this does nothing if we have a + * rescan_param, it's currently guaranteed that parallel-aware child nodes + * will not see a ReScan call until after they get a ReInitializeDSM call. + * That ordering might not be something to rely on, though. A good rule + * of thumb is that ReInitializeDSM should reset only shared state, ReScan + * should reset only local state, and anything that depends on both of + * those steps being finished must wait until the first ExecProcNode call. 
*/ if (outerPlan->chgParam == NULL) ExecReScan(outerPlan); diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index f50841699c..0bd5da38b4 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -187,12 +187,15 @@ ExecGatherMerge(PlanState *pstate) { ParallelContext *pcxt; - /* Initialize data structures for workers. */ + /* Initialize, or re-initialize, shared state needed by workers. */ if (!node->pei) node->pei = ExecInitParallelPlan(node->ps.lefttree, estate, gm->num_workers, node->tuples_needed); + else + ExecParallelReinitialize(node->ps.lefttree, + node->pei); /* Try to launch workers. */ pcxt = node->pei->pcxt; @@ -321,7 +324,7 @@ ExecShutdownGatherMergeWorkers(GatherMergeState *node) /* ---------------------------------------------------------------- * ExecReScanGatherMerge * - * Re-initialize the workers and rescans a relation via them. + * Prepare to re-scan the result of a GatherMerge. * ---------------------------------------------------------------- */ void @@ -330,20 +333,13 @@ ExecReScanGatherMerge(GatherMergeState *node) GatherMerge *gm = (GatherMerge *) node->ps.plan; PlanState *outerPlan = outerPlanState(node); - /* - * Re-initialize the parallel workers to perform rescan of relation. We - * want to gracefully shutdown all the workers so that they should be able - * to propagate any error or other information to master backend before - * dying. Parallel context will be reused for rescan. - */ + /* Make sure any existing workers are gracefully shut down */ ExecShutdownGatherMergeWorkers(node); + /* Mark node so that shared state will be rebuilt at next call */ node->initialized = false; node->gm_initialized = false; - if (node->pei) - ExecParallelReinitialize(node->pei); - /* * Set child node's chgParam to tell it that the next scan might deliver a * different set of rows within the leader process. (The overall rowset @@ -355,10 +351,15 @@ ExecReScanGatherMerge(GatherMergeState *node) outerPlan->chgParam = bms_add_member(outerPlan->chgParam, gm->rescan_param); - /* - * if chgParam of subnode is not null then plan will be re-scanned by - * first ExecProcNode. + * If chgParam of subnode is not null then plan will be re-scanned by + * first ExecProcNode. Note: because this does nothing if we have a + * rescan_param, it's currently guaranteed that parallel-aware child nodes + * will not see a ReScan call until after they get a ReInitializeDSM call. + * That ordering might not be something to rely on, though. A good rule + * of thumb is that ReInitializeDSM should reset only shared state, ReScan + * should reset only local state, and anything that depends on both of + * those steps being finished must wait until the first ExecProcNode call. 
*/ if (outerPlan->chgParam == NULL) ExecReScan(outerPlan); diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c index fe7ba3f1a4..5351cb8981 100644 --- a/src/backend/executor/nodeIndexonlyscan.c +++ b/src/backend/executor/nodeIndexonlyscan.c @@ -25,6 +25,7 @@ * parallel index-only scan * ExecIndexOnlyScanInitializeDSM initialize DSM for parallel * index-only scan + * ExecIndexOnlyScanReInitializeDSM reinitialize DSM for fresh scan * ExecIndexOnlyScanInitializeWorker attach to DSM info in parallel worker */ #include "postgres.h" @@ -336,16 +337,6 @@ ExecIndexOnlyScan(PlanState *pstate) void ExecReScanIndexOnlyScan(IndexOnlyScanState *node) { - bool reset_parallel_scan = true; - - /* - * If we are here to just update the scan keys, then don't reset parallel - * scan. For detailed reason behind this look in the comments for - * ExecReScanIndexScan. - */ - if (node->ioss_NumRuntimeKeys != 0 && !node->ioss_RuntimeKeysReady) - reset_parallel_scan = false; - /* * If we are doing runtime key calculations (ie, any of the index key * values weren't simple Consts), compute the new key values. But first, @@ -366,15 +357,10 @@ ExecReScanIndexOnlyScan(IndexOnlyScanState *node) /* reset index scan */ if (node->ioss_ScanDesc) - { - index_rescan(node->ioss_ScanDesc, node->ioss_ScanKeys, node->ioss_NumScanKeys, node->ioss_OrderByKeys, node->ioss_NumOrderByKeys); - if (reset_parallel_scan && node->ioss_ScanDesc->parallel_scan) - index_parallelrescan(node->ioss_ScanDesc); - } ExecScanReScan(&node->ss); } @@ -671,6 +657,19 @@ ExecIndexOnlyScanInitializeDSM(IndexOnlyScanState *node, node->ioss_OrderByKeys, node->ioss_NumOrderByKeys); } +/* ---------------------------------------------------------------- + * ExecIndexOnlyScanReInitializeDSM + * + * Reset shared state before beginning a fresh scan. + * ---------------------------------------------------------------- + */ +void +ExecIndexOnlyScanReInitializeDSM(IndexOnlyScanState *node, + ParallelContext *pcxt) +{ + index_parallelrescan(node->ioss_ScanDesc); +} + /* ---------------------------------------------------------------- * ExecIndexOnlyScanInitializeWorker * diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c index 404076d593..638b17b07c 100644 --- a/src/backend/executor/nodeIndexscan.c +++ b/src/backend/executor/nodeIndexscan.c @@ -24,6 +24,7 @@ * ExecIndexRestrPos restores scan position. * ExecIndexScanEstimate estimates DSM space needed for parallel index scan * ExecIndexScanInitializeDSM initialize DSM for parallel indexscan + * ExecIndexScanReInitializeDSM reinitialize DSM for fresh scan * ExecIndexScanInitializeWorker attach to DSM info in parallel worker */ #include "postgres.h" @@ -577,18 +578,6 @@ ExecIndexScan(PlanState *pstate) void ExecReScanIndexScan(IndexScanState *node) { - bool reset_parallel_scan = true; - - /* - * If we are here to just update the scan keys, then don't reset parallel - * scan. We don't want each of the participating process in the parallel - * scan to update the shared parallel scan state at the start of the scan. - * It is quite possible that one of the participants has already begun - * scanning the index when another has yet to start it. - */ - if (node->iss_NumRuntimeKeys != 0 && !node->iss_RuntimeKeysReady) - reset_parallel_scan = false; - /* * If we are doing runtime key calculations (ie, any of the index key * values weren't simple Consts), compute the new key values. 
But first, @@ -614,21 +603,11 @@ ExecReScanIndexScan(IndexScanState *node) reorderqueue_pop(node); } - /* - * Reset (parallel) index scan. For parallel-aware nodes, the scan - * descriptor is initialized during actual execution of node and we can - * reach here before that (ex. during execution of nest loop join). So, - * avoid updating the scan descriptor at that time. - */ + /* reset index scan */ if (node->iss_ScanDesc) - { index_rescan(node->iss_ScanDesc, node->iss_ScanKeys, node->iss_NumScanKeys, node->iss_OrderByKeys, node->iss_NumOrderByKeys); - - if (reset_parallel_scan && node->iss_ScanDesc->parallel_scan) - index_parallelrescan(node->iss_ScanDesc); - } node->iss_ReachedEnd = false; ExecScanReScan(&node->ss); @@ -1716,6 +1695,19 @@ ExecIndexScanInitializeDSM(IndexScanState *node, node->iss_OrderByKeys, node->iss_NumOrderByKeys); } +/* ---------------------------------------------------------------- + * ExecIndexScanReInitializeDSM + * + * Reset shared state before beginning a fresh scan. + * ---------------------------------------------------------------- + */ +void +ExecIndexScanReInitializeDSM(IndexScanState *node, + ParallelContext *pcxt) +{ + index_parallelrescan(node->iss_ScanDesc); +} + /* ---------------------------------------------------------------- * ExecIndexScanInitializeWorker * diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c index 5c49d4ca8a..d4ac939c9b 100644 --- a/src/backend/executor/nodeSeqscan.c +++ b/src/backend/executor/nodeSeqscan.c @@ -22,6 +22,7 @@ * * ExecSeqScanEstimate estimates DSM space needed for parallel scan * ExecSeqScanInitializeDSM initialize DSM for parallel scan + * ExecSeqScanReInitializeDSM reinitialize DSM for fresh parallel scan * ExecSeqScanInitializeWorker attach to DSM info in parallel worker */ #include "postgres.h" @@ -324,6 +325,21 @@ ExecSeqScanInitializeDSM(SeqScanState *node, heap_beginscan_parallel(node->ss.ss_currentRelation, pscan); } +/* ---------------------------------------------------------------- + * ExecSeqScanReInitializeDSM + * + * Reset shared state before beginning a fresh scan. + * ---------------------------------------------------------------- + */ +void +ExecSeqScanReInitializeDSM(SeqScanState *node, + ParallelContext *pcxt) +{ + HeapScanDesc scan = node->ss.ss_currentScanDesc; + + heap_parallelscan_reinitialize(scan->rs_parallel); +} + /* ---------------------------------------------------------------- * ExecSeqScanInitializeWorker * diff --git a/src/backend/executor/nodeSort.c b/src/backend/executor/nodeSort.c index 66ef109c12..98bcaeb66f 100644 --- a/src/backend/executor/nodeSort.c +++ b/src/backend/executor/nodeSort.c @@ -396,6 +396,23 @@ ExecSortInitializeDSM(SortState *node, ParallelContext *pcxt) node->shared_info); } +/* ---------------------------------------------------------------- + * ExecSortReInitializeDSM + * + * Reset shared state before beginning a fresh scan. 
+ * ---------------------------------------------------------------- + */ +void +ExecSortReInitializeDSM(SortState *node, ParallelContext *pcxt) +{ + /* If there's any instrumentation space, clear it for next time */ + if (node->shared_info != NULL) + { + memset(node->shared_info->sinstrument, 0, + node->shared_info->num_workers * sizeof(TuplesortInstrumentation)); + } +} + /* ---------------------------------------------------------------- * ExecSortInitializeWorker * diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h index b2132e723e..4e41024e92 100644 --- a/src/include/access/heapam.h +++ b/src/include/access/heapam.h @@ -130,6 +130,7 @@ extern HeapTuple heap_getnext(HeapScanDesc scan, ScanDirection direction); extern Size heap_parallelscan_estimate(Snapshot snapshot); extern void heap_parallelscan_initialize(ParallelHeapScanDesc target, Relation relation, Snapshot snapshot); +extern void heap_parallelscan_reinitialize(ParallelHeapScanDesc parallel_scan); extern HeapScanDesc heap_beginscan_parallel(Relation, ParallelHeapScanDesc); extern bool heap_fetch(Relation relation, Snapshot snapshot, diff --git a/src/include/executor/execParallel.h b/src/include/executor/execParallel.h index 79b886706f..1cb895d898 100644 --- a/src/include/executor/execParallel.h +++ b/src/include/executor/execParallel.h @@ -36,7 +36,8 @@ extern ParallelExecutorInfo *ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, int64 tuples_needed); extern void ExecParallelFinish(ParallelExecutorInfo *pei); extern void ExecParallelCleanup(ParallelExecutorInfo *pei); -extern void ExecParallelReinitialize(ParallelExecutorInfo *pei); +extern void ExecParallelReinitialize(PlanState *planstate, + ParallelExecutorInfo *pei); extern void ParallelQueryMain(dsm_segment *seg, shm_toc *toc); diff --git a/src/include/executor/nodeBitmapHeapscan.h b/src/include/executor/nodeBitmapHeapscan.h index c77694cf22..10844a405a 100644 --- a/src/include/executor/nodeBitmapHeapscan.h +++ b/src/include/executor/nodeBitmapHeapscan.h @@ -24,6 +24,8 @@ extern void ExecBitmapHeapEstimate(BitmapHeapScanState *node, ParallelContext *pcxt); extern void ExecBitmapHeapInitializeDSM(BitmapHeapScanState *node, ParallelContext *pcxt); +extern void ExecBitmapHeapReInitializeDSM(BitmapHeapScanState *node, + ParallelContext *pcxt); extern void ExecBitmapHeapInitializeWorker(BitmapHeapScanState *node, shm_toc *toc); diff --git a/src/include/executor/nodeCustom.h b/src/include/executor/nodeCustom.h index a1cc63ae1f..25767b6a4a 100644 --- a/src/include/executor/nodeCustom.h +++ b/src/include/executor/nodeCustom.h @@ -34,6 +34,8 @@ extern void ExecCustomScanEstimate(CustomScanState *node, ParallelContext *pcxt); extern void ExecCustomScanInitializeDSM(CustomScanState *node, ParallelContext *pcxt); +extern void ExecCustomScanReInitializeDSM(CustomScanState *node, + ParallelContext *pcxt); extern void ExecCustomScanInitializeWorker(CustomScanState *node, shm_toc *toc); extern void ExecShutdownCustomScan(CustomScanState *node); diff --git a/src/include/executor/nodeForeignscan.h b/src/include/executor/nodeForeignscan.h index 0b662597d8..0354c2c430 100644 --- a/src/include/executor/nodeForeignscan.h +++ b/src/include/executor/nodeForeignscan.h @@ -25,6 +25,8 @@ extern void ExecForeignScanEstimate(ForeignScanState *node, ParallelContext *pcxt); extern void ExecForeignScanInitializeDSM(ForeignScanState *node, ParallelContext *pcxt); +extern void ExecForeignScanReInitializeDSM(ForeignScanState *node, + ParallelContext *pcxt); extern 
void ExecForeignScanInitializeWorker(ForeignScanState *node, shm_toc *toc); extern void ExecShutdownForeignScan(ForeignScanState *node); diff --git a/src/include/executor/nodeIndexonlyscan.h b/src/include/executor/nodeIndexonlyscan.h index c8a709c26e..690b5dbfe5 100644 --- a/src/include/executor/nodeIndexonlyscan.h +++ b/src/include/executor/nodeIndexonlyscan.h @@ -28,6 +28,8 @@ extern void ExecIndexOnlyScanEstimate(IndexOnlyScanState *node, ParallelContext *pcxt); extern void ExecIndexOnlyScanInitializeDSM(IndexOnlyScanState *node, ParallelContext *pcxt); +extern void ExecIndexOnlyScanReInitializeDSM(IndexOnlyScanState *node, + ParallelContext *pcxt); extern void ExecIndexOnlyScanInitializeWorker(IndexOnlyScanState *node, shm_toc *toc); diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h index 1668e347ee..0670e87e39 100644 --- a/src/include/executor/nodeIndexscan.h +++ b/src/include/executor/nodeIndexscan.h @@ -24,6 +24,7 @@ extern void ExecIndexRestrPos(IndexScanState *node); extern void ExecReScanIndexScan(IndexScanState *node); extern void ExecIndexScanEstimate(IndexScanState *node, ParallelContext *pcxt); extern void ExecIndexScanInitializeDSM(IndexScanState *node, ParallelContext *pcxt); +extern void ExecIndexScanReInitializeDSM(IndexScanState *node, ParallelContext *pcxt); extern void ExecIndexScanInitializeWorker(IndexScanState *node, shm_toc *toc); /* diff --git a/src/include/executor/nodeSeqscan.h b/src/include/executor/nodeSeqscan.h index 0fba79f8de..eb96799cad 100644 --- a/src/include/executor/nodeSeqscan.h +++ b/src/include/executor/nodeSeqscan.h @@ -24,6 +24,7 @@ extern void ExecReScanSeqScan(SeqScanState *node); /* parallel scan support */ extern void ExecSeqScanEstimate(SeqScanState *node, ParallelContext *pcxt); extern void ExecSeqScanInitializeDSM(SeqScanState *node, ParallelContext *pcxt); +extern void ExecSeqScanReInitializeDSM(SeqScanState *node, ParallelContext *pcxt); extern void ExecSeqScanInitializeWorker(SeqScanState *node, shm_toc *toc); #endif /* NODESEQSCAN_H */ diff --git a/src/include/executor/nodeSort.h b/src/include/executor/nodeSort.h index 77ac06597f..1ab8f76721 100644 --- a/src/include/executor/nodeSort.h +++ b/src/include/executor/nodeSort.h @@ -26,6 +26,7 @@ extern void ExecReScanSort(SortState *node); /* parallel instrumentation support */ extern void ExecSortEstimate(SortState *node, ParallelContext *pcxt); extern void ExecSortInitializeDSM(SortState *node, ParallelContext *pcxt); +extern void ExecSortReInitializeDSM(SortState *node, ParallelContext *pcxt); extern void ExecSortInitializeWorker(SortState *node, shm_toc *toc); extern void ExecSortRetrieveInstrumentation(SortState *node); diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h index e391f20fb8..ef0fbe6f9c 100644 --- a/src/include/foreign/fdwapi.h +++ b/src/include/foreign/fdwapi.h @@ -148,6 +148,9 @@ typedef Size (*EstimateDSMForeignScan_function) (ForeignScanState *node, typedef void (*InitializeDSMForeignScan_function) (ForeignScanState *node, ParallelContext *pcxt, void *coordinate); +typedef void (*ReInitializeDSMForeignScan_function) (ForeignScanState *node, + ParallelContext *pcxt, + void *coordinate); typedef void (*InitializeWorkerForeignScan_function) (ForeignScanState *node, shm_toc *toc, void *coordinate); @@ -224,6 +227,7 @@ typedef struct FdwRoutine IsForeignScanParallelSafe_function IsForeignScanParallelSafe; EstimateDSMForeignScan_function EstimateDSMForeignScan; InitializeDSMForeignScan_function 
InitializeDSMForeignScan; + ReInitializeDSMForeignScan_function ReInitializeDSMForeignScan; InitializeWorkerForeignScan_function InitializeWorkerForeignScan; ShutdownForeignScan_function ShutdownForeignScan; } FdwRoutine; diff --git a/src/include/nodes/extensible.h b/src/include/nodes/extensible.h index 7325bf536a..0654e79c7b 100644 --- a/src/include/nodes/extensible.h +++ b/src/include/nodes/extensible.h @@ -136,6 +136,9 @@ typedef struct CustomExecMethods void (*InitializeDSMCustomScan) (CustomScanState *node, ParallelContext *pcxt, void *coordinate); + void (*ReInitializeDSMCustomScan) (CustomScanState *node, + ParallelContext *pcxt, + void *coordinate); void (*InitializeWorkerCustomScan) (CustomScanState *node, shm_toc *toc, void *coordinate); From 04e9678614ec64ad9043174ac99a25b1dc45233a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 30 Aug 2017 17:21:08 -0400 Subject: [PATCH 0064/1087] Code review for nodeGatherMerge.c. Comment the fields of GatherMergeState, and organize them a bit more sensibly. Comment GMReaderTupleBuffer more usefully too. Improve assorted other comments that were obsolete or just not very good English. Get rid of the use of a GMReaderTupleBuffer for the leader process; that was confusing, since only the "done" field was used, and that in a way redundant with need_to_scan_locally. In gather_merge_init, avoid calling load_tuple_array for already-known-exhausted workers. I'm not sure if there's a live bug there, but the case is unlikely to be well tested due to timing considerations. Remove some useless code, such as duplicating the tts_isempty test done by TupIsNull. Remove useless initialization of ps.qual, replacing that with an assertion that we have no qual to check. (If we did, the code would fail to check it.) Avoid applying heap_copytuple to a null tuple. While that fails to crash, it's confusing and it makes the code less legible not more so IMO. Propagate a couple of these changes into nodeGather.c, as well. Back-patch to v10, partly because of the possibility that the gather_merge_init change is fixing a live bug, but mostly to keep the branches in sync to ease future bug fixes. --- src/backend/executor/nodeGather.c | 21 +-- src/backend/executor/nodeGatherMerge.c | 185 ++++++++++++++----------- src/include/nodes/execnodes.h | 46 +++--- 3 files changed, 138 insertions(+), 114 deletions(-) diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index f9cf1b2f87..d93fbacdf9 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -71,6 +71,8 @@ ExecInitGather(Gather *node, EState *estate, int eflags) gatherstate->ps.plan = (Plan *) node; gatherstate->ps.state = estate; gatherstate->ps.ExecProcNode = ExecGather; + + gatherstate->initialized = false; gatherstate->need_to_scan_locally = !node->single_copy; gatherstate->tuples_needed = -1; @@ -82,10 +84,10 @@ ExecInitGather(Gather *node, EState *estate, int eflags) ExecAssignExprContext(estate, &gatherstate->ps); /* - * initialize child expressions + * Gather doesn't support checking a qual (it's always more efficient to + * do it in the child node). 
*/ - gatherstate->ps.qual = - ExecInitQual(node->plan.qual, (PlanState *) gatherstate); + Assert(!node->plan.qual); /* * tuple table initialization @@ -169,15 +171,16 @@ ExecGather(PlanState *pstate) */ pcxt = node->pei->pcxt; LaunchParallelWorkers(pcxt); + /* We save # workers launched for the benefit of EXPLAIN */ node->nworkers_launched = pcxt->nworkers_launched; + node->nreaders = 0; + node->nextreader = 0; /* Set up tuple queue readers to read the results. */ if (pcxt->nworkers_launched > 0) { - node->nreaders = 0; - node->nextreader = 0; - node->reader = - palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *)); + node->reader = palloc(pcxt->nworkers_launched * + sizeof(TupleQueueReader *)); for (i = 0; i < pcxt->nworkers_launched; ++i) { @@ -316,8 +319,8 @@ gather_readnext(GatherState *gatherstate) tup = TupleQueueReaderNext(reader, true, &readerdone); /* - * If this reader is done, remove it. If all readers are done, clean - * up remaining worker state. + * If this reader is done, remove it, and collapse the array. If all + * readers are done, clean up remaining worker state. */ if (readerdone) { diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 0bd5da38b4..67da5ff71f 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -26,24 +26,30 @@ #include "utils/memutils.h" #include "utils/rel.h" -/* - * Tuple array for each worker - */ -typedef struct GMReaderTupleBuffer -{ - HeapTuple *tuple; - int readCounter; - int nTuples; - bool done; -} GMReaderTupleBuffer; - /* * When we read tuples from workers, it's a good idea to read several at once * for efficiency when possible: this minimizes context-switching overhead. * But reading too many at a time wastes memory without improving performance. + * We'll read up to MAX_TUPLE_STORE tuples (in addition to the first one). */ #define MAX_TUPLE_STORE 10 +/* + * Pending-tuple array for each worker. This holds additional tuples that + * we were able to fetch from the worker, but can't process yet. In addition, + * this struct holds the "done" flag indicating the worker is known to have + * no more tuples. (We do not use this struct for the leader; we don't keep + * any pending tuples for the leader, and the need_to_scan_locally flag serves + * as its "done" indicator.) 
+ */ +typedef struct GMReaderTupleBuffer +{ + HeapTuple *tuple; /* array of length MAX_TUPLE_STORE */ + int nTuples; /* number of tuples currently stored */ + int readCounter; /* index of next tuple to extract */ + bool done; /* true if reader is known exhausted */ +} GMReaderTupleBuffer; + static TupleTableSlot *ExecGatherMerge(PlanState *pstate); static int32 heap_compare_slots(Datum a, Datum b, void *arg); static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state); @@ -53,7 +59,7 @@ static void gather_merge_init(GatherMergeState *gm_state); static void ExecShutdownGatherMergeWorkers(GatherMergeState *node); static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait); -static void form_tuple_array(GatherMergeState *gm_state, int reader); +static void load_tuple_array(GatherMergeState *gm_state, int reader); /* ---------------------------------------------------------------- * ExecInitGather @@ -77,6 +83,9 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) gm_state->ps.plan = (Plan *) node; gm_state->ps.state = estate; gm_state->ps.ExecProcNode = ExecGatherMerge; + + gm_state->initialized = false; + gm_state->gm_initialized = false; gm_state->tuples_needed = -1; /* @@ -87,10 +96,10 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) ExecAssignExprContext(estate, &gm_state->ps); /* - * initialize child expressions + * GatherMerge doesn't support checking a qual (it's always more efficient + * to do it in the child node). */ - gm_state->ps.qual = - ExecInitQual(node->plan.qual, &gm_state->ps); + Assert(!node->plan.qual); /* * tuple table initialization @@ -109,8 +118,6 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) ExecAssignResultTypeFromTL(&gm_state->ps); ExecAssignProjectionInfo(&gm_state->ps, NULL); - gm_state->gm_initialized = false; - /* * initialize sort-key information */ @@ -177,7 +184,7 @@ ExecGatherMerge(PlanState *pstate) if (!node->initialized) { EState *estate = node->ps.state; - GatherMerge *gm = (GatherMerge *) node->ps.plan; + GatherMerge *gm = castNode(GatherMerge, node->ps.plan); /* * Sometimes we might have to run without parallelism; but if parallel @@ -200,17 +207,16 @@ ExecGatherMerge(PlanState *pstate) /* Try to launch workers. */ pcxt = node->pei->pcxt; LaunchParallelWorkers(pcxt); + /* We save # workers launched for the benefit of EXPLAIN */ node->nworkers_launched = pcxt->nworkers_launched; + node->nreaders = 0; /* Set up tuple queue readers to read the results. */ if (pcxt->nworkers_launched > 0) { - node->nreaders = 0; node->reader = palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *)); - Assert(gm->numCols); - for (i = 0; i < pcxt->nworkers_launched; ++i) { shm_mq_set_handle(node->pei->tqueue[i], @@ -248,9 +254,7 @@ ExecGatherMerge(PlanState *pstate) return NULL; /* - * form the result tuple using ExecProject(), and return it --- unless the - * projection produces an empty set, in which case we must loop back - * around for another tuple + * Form the result tuple using ExecProject(), and return it. */ econtext->ecxt_outertuple = slot; return ExecProject(node->ps.ps_ProjInfo); @@ -374,17 +378,16 @@ static void gather_merge_init(GatherMergeState *gm_state) { int nreaders = gm_state->nreaders; - bool initialize = true; + bool nowait = true; int i; /* - * Allocate gm_slots for the number of worker + one more slot for leader. + * Allocate gm_slots for the number of workers + one more slot for leader. * Last slot is always for leader. 
Leader always calls ExecProcNode() to * read the tuple which will return the TupleTableSlot. Later it will * directly get assigned to gm_slot. So just initialize leader gm_slot - * with NULL. For other slots below code will call - * ExecInitExtraTupleSlot() which will do the initialization of worker - * slots. + * with NULL. For other slots, code below will call + * ExecInitExtraTupleSlot() to create a slot for the worker's results. */ gm_state->gm_slots = palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *)); @@ -393,10 +396,10 @@ gather_merge_init(GatherMergeState *gm_state) /* Initialize the tuple slot and tuple array for each worker */ gm_state->gm_tuple_buffers = (GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) * - (gm_state->nreaders + 1)); + gm_state->nreaders); for (i = 0; i < gm_state->nreaders; i++) { - /* Allocate the tuple array with MAX_TUPLE_STORE size */ + /* Allocate the tuple array with length MAX_TUPLE_STORE */ gm_state->gm_tuple_buffers[i].tuple = (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE); @@ -413,39 +416,53 @@ gather_merge_init(GatherMergeState *gm_state) /* * First, try to read a tuple from each worker (including leader) in - * nowait mode, so that we initialize read from each worker as well as - * leader. After this, if all active workers are unable to produce a - * tuple, then re-read and this time use wait mode. For workers that were - * able to produce a tuple in the earlier loop and are still active, just - * try to fill the tuple array if more tuples are avaiable. + * nowait mode. After this, if not all workers were able to produce a + * tuple (or a "done" indication), then re-read from remaining workers, + * this time using wait mode. Add all live readers (those producing at + * least one tuple) to the heap. */ reread: for (i = 0; i < nreaders + 1; i++) { CHECK_FOR_INTERRUPTS(); - if (!gm_state->gm_tuple_buffers[i].done && - (TupIsNull(gm_state->gm_slots[i]) || - gm_state->gm_slots[i]->tts_isempty)) + /* ignore this source if already known done */ + if ((i < nreaders) ? + !gm_state->gm_tuple_buffers[i].done : + gm_state->need_to_scan_locally) { - if (gather_merge_readnext(gm_state, i, initialize)) + if (TupIsNull(gm_state->gm_slots[i])) { - binaryheap_add_unordered(gm_state->gm_heap, - Int32GetDatum(i)); + /* Don't have a tuple yet, try to get one */ + if (gather_merge_readnext(gm_state, i, nowait)) + binaryheap_add_unordered(gm_state->gm_heap, + Int32GetDatum(i)); + } + else + { + /* + * We already got at least one tuple from this worker, but + * might as well see if it has any more ready by now. + */ + load_tuple_array(gm_state, i); } } - else - form_tuple_array(gm_state, i); } - initialize = false; + /* need not recheck leader, since nowait doesn't matter for it */ for (i = 0; i < nreaders; i++) + { if (!gm_state->gm_tuple_buffers[i].done && - (TupIsNull(gm_state->gm_slots[i]) || - gm_state->gm_slots[i]->tts_isempty)) + TupIsNull(gm_state->gm_slots[i])) + { + nowait = false; goto reread; + } + } + /* Now heapify the heap. 
*/ binaryheap_build(gm_state->gm_heap); + gm_state->gm_initialized = true; } @@ -460,7 +477,7 @@ gather_merge_clear_slots(GatherMergeState *gm_state) for (i = 0; i < gm_state->nreaders; i++) { pfree(gm_state->gm_tuple_buffers[i].tuple); - gm_state->gm_slots[i] = ExecClearTuple(gm_state->gm_slots[i]); + ExecClearTuple(gm_state->gm_slots[i]); } /* Free tuple array as we don't need it any more */ @@ -500,7 +517,10 @@ gather_merge_getnext(GatherMergeState *gm_state) if (gather_merge_readnext(gm_state, i, false)) binaryheap_replace_first(gm_state->gm_heap, Int32GetDatum(i)); else + { + /* reader exhausted, remove it from heap */ (void) binaryheap_remove_first(gm_state->gm_heap); + } } if (binaryheap_empty(gm_state->gm_heap)) @@ -518,37 +538,37 @@ gather_merge_getnext(GatherMergeState *gm_state) } /* - * Read the tuple for given reader in nowait mode, and form the tuple array. + * Read tuple(s) for given reader in nowait mode, and load into its tuple + * array, until we have MAX_TUPLE_STORE of them or would have to block. */ static void -form_tuple_array(GatherMergeState *gm_state, int reader) +load_tuple_array(GatherMergeState *gm_state, int reader) { - GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[reader]; + GMReaderTupleBuffer *tuple_buffer; int i; - /* Last slot is for leader and we don't build tuple array for leader */ + /* Don't do anything if this is the leader. */ if (reader == gm_state->nreaders) return; - /* - * We here because we already read all the tuples from the tuple array, so - * initialize the counter to zero. - */ + tuple_buffer = &gm_state->gm_tuple_buffers[reader]; + + /* If there's nothing in the array, reset the counters to zero. */ if (tuple_buffer->nTuples == tuple_buffer->readCounter) tuple_buffer->nTuples = tuple_buffer->readCounter = 0; - /* Tuple array is already full? */ - if (tuple_buffer->nTuples == MAX_TUPLE_STORE) - return; - + /* Try to fill additional slots in the array. */ for (i = tuple_buffer->nTuples; i < MAX_TUPLE_STORE; i++) { - tuple_buffer->tuple[i] = heap_copytuple(gm_readnext_tuple(gm_state, - reader, - false, - &tuple_buffer->done)); - if (!HeapTupleIsValid(tuple_buffer->tuple[i])) + HeapTuple tuple; + + tuple = gm_readnext_tuple(gm_state, + reader, + true, + &tuple_buffer->done); + if (!HeapTupleIsValid(tuple)) break; + tuple_buffer->tuple[i] = heap_copytuple(tuple); tuple_buffer->nTuples++; } } @@ -556,13 +576,15 @@ form_tuple_array(GatherMergeState *gm_state, int reader) /* * Store the next tuple for a given reader into the appropriate slot. * - * Returns false if the reader is exhausted, and true otherwise. + * Returns true if successful, false if not (either reader is exhausted, + * or we didn't want to wait for a tuple). Sets done flag if reader + * is found to be exhausted. 
*/ static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) { GMReaderTupleBuffer *tuple_buffer; - HeapTuple tup = NULL; + HeapTuple tup; /* * If we're being asked to generate a tuple from the leader, then we just @@ -582,7 +604,7 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) gm_state->gm_slots[reader] = outerTupleSlot; return true; } - gm_state->gm_tuple_buffers[reader].done = true; + /* need_to_scan_locally serves as "done" flag for leader */ gm_state->need_to_scan_locally = false; } return false; @@ -594,7 +616,6 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) if (tuple_buffer->nTuples > tuple_buffer->readCounter) { /* Return any tuple previously read that is still buffered. */ - tuple_buffer = &gm_state->gm_tuple_buffers[reader]; tup = tuple_buffer->tuple[tuple_buffer->readCounter++]; } else if (tuple_buffer->done) @@ -607,19 +628,19 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) else { /* Read and buffer next tuple. */ - tup = heap_copytuple(gm_readnext_tuple(gm_state, - reader, - nowait, - &tuple_buffer->done)); + tup = gm_readnext_tuple(gm_state, + reader, + nowait, + &tuple_buffer->done); + if (!HeapTupleIsValid(tup)) + return false; + tup = heap_copytuple(tup); /* * Attempt to read more tuples in nowait mode and store them in the - * tuple array. + * pending-tuple array for the reader. */ - if (HeapTupleIsValid(tup)) - form_tuple_array(gm_state, reader); - else - return false; + load_tuple_array(gm_state, reader); } Assert(HeapTupleIsValid(tup)); @@ -642,15 +663,10 @@ gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done) { TupleQueueReader *reader; - HeapTuple tup = NULL; + HeapTuple tup; MemoryContext oldContext; MemoryContext tupleContext; - tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory; - - if (done != NULL) - *done = false; - /* Check for async events, particularly messages from workers. */ CHECK_FOR_INTERRUPTS(); @@ -658,6 +674,7 @@ gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, reader = gm_state->reader[nreader]; /* Run TupleQueueReaders in per-tuple context */ + tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory; oldContext = MemoryContextSwitchTo(tupleContext); tup = TupleQueueReaderNext(reader, nowait, done); MemoryContextSwitchTo(oldContext); diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index d1565e7496..6cf128a7f0 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1923,15 +1923,17 @@ typedef struct UniqueState typedef struct GatherState { PlanState ps; /* its first field is NodeTag */ - bool initialized; - struct ParallelExecutorInfo *pei; - int nreaders; - int nextreader; - int nworkers_launched; - struct TupleQueueReader **reader; - TupleTableSlot *funnel_slot; - bool need_to_scan_locally; + bool initialized; /* workers launched? */ + bool need_to_scan_locally; /* need to read from local plan? 
*/ int64 tuples_needed; /* tuple bound, see ExecSetTupleBound */ + /* these fields are set up once: */ + TupleTableSlot *funnel_slot; + struct ParallelExecutorInfo *pei; + /* all remaining fields are reinitialized during a rescan: */ + int nworkers_launched; /* original number of workers */ + int nreaders; /* number of still-active workers */ + int nextreader; /* next one to try to read from */ + struct TupleQueueReader **reader; /* array with nreaders active entries */ } GatherState; /* ---------------- @@ -1942,25 +1944,27 @@ typedef struct GatherState * merge the results into a single sorted stream. * ---------------- */ -struct GMReaderTuple; +struct GMReaderTupleBuffer; /* private in nodeGatherMerge.c */ typedef struct GatherMergeState { PlanState ps; /* its first field is NodeTag */ - bool initialized; + bool initialized; /* workers launched? */ + bool gm_initialized; /* gather_merge_init() done? */ + bool need_to_scan_locally; /* need to read from local plan? */ + int64 tuples_needed; /* tuple bound, see ExecSetTupleBound */ + /* these fields are set up once: */ + TupleDesc tupDesc; /* descriptor for subplan result tuples */ + int gm_nkeys; /* number of sort columns */ + SortSupport gm_sortkeys; /* array of length gm_nkeys */ struct ParallelExecutorInfo *pei; - int nreaders; - int nworkers_launched; - struct TupleQueueReader **reader; - TupleDesc tupDesc; - TupleTableSlot **gm_slots; + /* all remaining fields are reinitialized during a rescan: */ + int nworkers_launched; /* original number of workers */ + int nreaders; /* number of active workers */ + TupleTableSlot **gm_slots; /* array with nreaders+1 entries */ + struct TupleQueueReader **reader; /* array with nreaders active entries */ + struct GMReaderTupleBuffer *gm_tuple_buffers; /* nreaders tuple buffers */ struct binaryheap *gm_heap; /* binary heap of slot indices */ - bool gm_initialized; /* gather merge initilized ? */ - bool need_to_scan_locally; - int64 tuples_needed; /* tuple bound, see ExecSetTupleBound */ - int gm_nkeys; - SortSupport gm_sortkeys; /* array of length ms_nkeys */ - struct GMReaderTupleBuffer *gm_tuple_buffers; /* tuple buffer per reader */ } GatherMergeState; /* ---------------- From b5c75feca7ffb2667c42b86286e262d6cb709b76 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 30 Aug 2017 22:40:24 -0400 Subject: [PATCH 0065/1087] Remove pre-8.2 coding convention for PG_MODULE_MAGIC PG_MODULE_MAGIC has been around since 8.2, with 8.1 long since EOL, so remove the mention of #ifdef guards for compiling against pre-8.2 sources from the documentation.
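For illustration, a minimal module source file under the post-8.2 convention would look like the sketch below. The unconditional PG_MODULE_MAGIC line is the point of the change; add_one is only a hypothetical example function, not code touched by this patch.

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
	int32		arg = PG_GETARG_INT32(0);	/* hypothetical example function */

	PG_RETURN_INT32(arg + 1);
}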
Author: Daniel Gustafsson --- contrib/citext/citext.c | 2 -- doc/src/sgml/plhandler.sgml | 2 -- doc/src/sgml/spi.sgml | 2 -- doc/src/sgml/xfunc.sgml | 13 +------------ src/test/regress/regress.c | 2 -- 5 files changed, 1 insertion(+), 20 deletions(-) diff --git a/contrib/citext/citext.c b/contrib/citext/citext.c index 04f604b15f..0ba47828ba 100644 --- a/contrib/citext/citext.c +++ b/contrib/citext/citext.c @@ -9,9 +9,7 @@ #include "utils/formatting.h" #include "utils/varlena.h" -#ifdef PG_MODULE_MAGIC PG_MODULE_MAGIC; -#endif /* * ==================== diff --git a/doc/src/sgml/plhandler.sgml b/doc/src/sgml/plhandler.sgml index 57a2a05ed2..2573e67743 100644 --- a/doc/src/sgml/plhandler.sgml +++ b/doc/src/sgml/plhandler.sgml @@ -108,9 +108,7 @@ #include "catalog/pg_proc.h" #include "catalog/pg_type.h" -#ifdef PG_MODULE_MAGIC PG_MODULE_MAGIC; -#endif PG_FUNCTION_INFO_V1(plsample_call_handler); diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml index 86be87c0fd..d04b5a2125 100644 --- a/doc/src/sgml/spi.sgml +++ b/doc/src/sgml/spi.sgml @@ -4352,9 +4352,7 @@ INSERT INTO a SELECT * FROM a; #include "executor/spi.h" #include "utils/builtins.h" -#ifdef PG_MODULE_MAGIC PG_MODULE_MAGIC; -#endif int64 execq(text *sql, int cnt); diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index cd6dd840ba..7475288354 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -1757,20 +1757,13 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision file contains a magic block with the appropriate contents. This allows the server to detect obvious incompatibilities, such as code compiled for a different major version of - PostgreSQL. A magic block is required as of - PostgreSQL 8.2. To include a magic block, + PostgreSQL. To include a magic block, write this in one (and only one) of the module source files, after having included the header fmgr.h: -#ifdef PG_MODULE_MAGIC PG_MODULE_MAGIC; -#endif - - The #ifdef test can be omitted if the code doesn't - need to compile against pre-8.2 PostgreSQL - releases. @@ -2214,9 +2207,7 @@ PG_FUNCTION_INFO_V1(funcname); #include "fmgr.h" #include "utils/geo_decls.h" -#ifdef PG_MODULE_MAGIC PG_MODULE_MAGIC; -#endif /* by value */ @@ -2554,9 +2545,7 @@ SELECT name, c_overpaid(emp, 1500) AS overpaid #include "postgres.h" #include "executor/executor.h" /* for GetAttributeByName() */ -#ifdef PG_MODULE_MAGIC PG_MODULE_MAGIC; -#endif PG_FUNCTION_INFO_V1(c_overpaid); diff --git a/src/test/regress/regress.c b/src/test/regress/regress.c index 3d33b36e66..734947cc98 100644 --- a/src/test/regress/regress.c +++ b/src/test/regress/regress.c @@ -46,9 +46,7 @@ extern PATH *poly2path(POLYGON *poly); extern void regress_lseg_construct(LSEG *lseg, Point *pt1, Point *pt2); -#ifdef PG_MODULE_MAGIC PG_MODULE_MAGIC; -#endif /* From 4b1dd62a257a469f92fef4f4cce37beab6c0b98b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 31 Aug 2017 13:15:54 -0400 Subject: [PATCH 0066/1087] Improve code coverage of select_parallel test. Make sure that rescans of parallel indexscans are tested. Per code coverage report. 
--- src/test/regress/expected/select_parallel.out | 55 +++++++++++++++++++ src/test/regress/sql/select_parallel.sql | 20 +++++++ 2 files changed, 75 insertions(+) diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 888da5abf2..2ae600f1bb 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -179,6 +179,61 @@ select count(*) from tenk1 where thousand > 95; 9040 (1 row) +-- test rescan cases too +set enable_material = false; +explain (costs off) +select * from + (select count(unique1) from tenk1 where hundred > 10) ss + right join (values (1),(2),(3)) v(x) on true; + QUERY PLAN +-------------------------------------------------------------------------- + Nested Loop Left Join + -> Values Scan on "*VALUES*" + -> Finalize Aggregate + -> Gather + Workers Planned: 4 + -> Partial Aggregate + -> Parallel Index Scan using tenk1_hundred on tenk1 + Index Cond: (hundred > 10) +(8 rows) + +select * from + (select count(unique1) from tenk1 where hundred > 10) ss + right join (values (1),(2),(3)) v(x) on true; + count | x +-------+--- + 8900 | 1 + 8900 | 2 + 8900 | 3 +(3 rows) + +explain (costs off) +select * from + (select count(*) from tenk1 where thousand > 99) ss + right join (values (1),(2),(3)) v(x) on true; + QUERY PLAN +-------------------------------------------------------------------------------------- + Nested Loop Left Join + -> Values Scan on "*VALUES*" + -> Finalize Aggregate + -> Gather + Workers Planned: 4 + -> Partial Aggregate + -> Parallel Index Only Scan using tenk1_thous_tenthous on tenk1 + Index Cond: (thousand > 99) +(8 rows) + +select * from + (select count(*) from tenk1 where thousand > 99) ss + right join (values (1),(2),(3)) v(x) on true; + count | x +-------+--- + 9000 | 1 + 9000 | 2 + 9000 | 3 +(3 rows) + +reset enable_material; reset enable_seqscan; reset enable_bitmapscan; -- test parallel bitmap heap scan. diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index cefb5a27d4..89fe80a35c 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -65,6 +65,26 @@ explain (costs off) select count(*) from tenk1 where thousand > 95; select count(*) from tenk1 where thousand > 95; +-- test rescan cases too +set enable_material = false; + +explain (costs off) +select * from + (select count(unique1) from tenk1 where hundred > 10) ss + right join (values (1),(2),(3)) v(x) on true; +select * from + (select count(unique1) from tenk1 where hundred > 10) ss + right join (values (1),(2),(3)) v(x) on true; + +explain (costs off) +select * from + (select count(*) from tenk1 where thousand > 99) ss + right join (values (1),(2),(3)) v(x) on true; +select * from + (select count(*) from tenk1 where thousand > 99) ss + right join (values (1),(2),(3)) v(x) on true; + +reset enable_material; reset enable_seqscan; reset enable_bitmapscan; From 6708e447efb5046c95bdcf900b6da97f56f97ae8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 31 Aug 2017 15:10:24 -0400 Subject: [PATCH 0067/1087] Clean up shm_mq cleanup. The logic around shm_mq_detach was a few bricks shy of a load, because (contrary to the comments for shm_mq_attach) all it did was update the shared shm_mq state. That left us leaking a bit of process-local memory, but much worse, the on_dsm_detach callback for shm_mq_detach was still armed. 
That means that whenever we ultimately detach from the DSM segment, we'd run shm_mq_detach again for already-detached, possibly long-dead queues. This accidentally fails to fail today, because we only ever re-use a shm_mq's memory for another shm_mq, and multiple detach attempts on the last such shm_mq are fairly harmless. But it's gonna bite us someday, so let's clean it up. To do that, change shm_mq_detach's API so it takes a shm_mq_handle not the underlying shm_mq. This makes the callers simpler in most cases anyway. Also fix a few places in parallel.c that were just pfree'ing the handle structs rather than doing proper cleanup. Back-patch to v10 because of the risk that the revenant shm_mq_detach callbacks would cause a live bug sometime. Since this is an API change, it's too late to do it in 9.6. (We could make a variant patch that preserves API, but I'm not excited enough to do that.) Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us --- src/backend/access/transam/parallel.c | 6 ++-- src/backend/executor/tqueue.c | 9 ++++-- src/backend/libpq/pqmq.c | 10 ++----- src/backend/storage/ipc/shm_mq.c | 43 ++++++++++++++++++++++----- src/include/storage/shm_mq.h | 4 +-- 5 files changed, 51 insertions(+), 21 deletions(-) diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index 17b10383e4..ce1b907deb 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -480,7 +480,7 @@ LaunchParallelWorkers(ParallelContext *pcxt) */ any_registrations_failed = true; pcxt->worker[i].bgwhandle = NULL; - pfree(pcxt->worker[i].error_mqh); + shm_mq_detach(pcxt->worker[i].error_mqh); pcxt->worker[i].error_mqh = NULL; } } @@ -612,7 +612,7 @@ DestroyParallelContext(ParallelContext *pcxt) { TerminateBackgroundWorker(pcxt->worker[i].bgwhandle); - pfree(pcxt->worker[i].error_mqh); + shm_mq_detach(pcxt->worker[i].error_mqh); pcxt->worker[i].error_mqh = NULL; } } @@ -861,7 +861,7 @@ HandleParallelMessage(ParallelContext *pcxt, int i, StringInfo msg) case 'X': /* Terminate, indicating clean exit */ { - pfree(pcxt->worker[i].error_mqh); + shm_mq_detach(pcxt->worker[i].error_mqh); pcxt->worker[i].error_mqh = NULL; break; } diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c index 4c4fcf530d..ee4bec0385 100644 --- a/src/backend/executor/tqueue.c +++ b/src/backend/executor/tqueue.c @@ -578,7 +578,9 @@ tqueueShutdownReceiver(DestReceiver *self) { TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self; - shm_mq_detach(shm_mq_get_queue(tqueue->queue)); + if (tqueue->queue != NULL) + shm_mq_detach(tqueue->queue); + tqueue->queue = NULL; } /* @@ -589,6 +591,9 @@ tqueueDestroyReceiver(DestReceiver *self) { TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self; + /* We probably already detached from queue, but let's be sure */ + if (tqueue->queue != NULL) + shm_mq_detach(tqueue->queue); if (tqueue->tmpcontext != NULL) MemoryContextDelete(tqueue->tmpcontext); if (tqueue->recordhtab != NULL) @@ -650,7 +655,7 @@ CreateTupleQueueReader(shm_mq_handle *handle, TupleDesc tupledesc) void DestroyTupleQueueReader(TupleQueueReader *reader) { - shm_mq_detach(shm_mq_get_queue(reader->queue)); + shm_mq_detach(reader->queue); if (reader->typmodmap != NULL) hash_destroy(reader->typmodmap); /* Is it worth trying to free substructure of the remap tree? 
*/ diff --git a/src/backend/libpq/pqmq.c b/src/backend/libpq/pqmq.c index 8fbc03819d..e1a24b62c8 100644 --- a/src/backend/libpq/pqmq.c +++ b/src/backend/libpq/pqmq.c @@ -21,7 +21,6 @@ #include "tcop/tcopprot.h" #include "utils/builtins.h" -static shm_mq *pq_mq; static shm_mq_handle *pq_mq_handle; static bool pq_mq_busy = false; static pid_t pq_mq_parallel_master_pid = 0; @@ -56,7 +55,6 @@ void pq_redirect_to_shm_mq(dsm_segment *seg, shm_mq_handle *mqh) { PqCommMethods = &PqCommMqMethods; - pq_mq = shm_mq_get_queue(mqh); pq_mq_handle = mqh; whereToSendOutput = DestRemote; FrontendProtocol = PG_PROTOCOL_LATEST; @@ -70,7 +68,6 @@ pq_redirect_to_shm_mq(dsm_segment *seg, shm_mq_handle *mqh) static void pq_cleanup_redirect_to_shm_mq(dsm_segment *seg, Datum arg) { - pq_mq = NULL; pq_mq_handle = NULL; whereToSendOutput = DestNone; } @@ -135,9 +132,8 @@ mq_putmessage(char msgtype, const char *s, size_t len) */ if (pq_mq_busy) { - if (pq_mq != NULL) - shm_mq_detach(pq_mq); - pq_mq = NULL; + if (pq_mq_handle != NULL) + shm_mq_detach(pq_mq_handle); pq_mq_handle = NULL; return EOF; } @@ -148,7 +144,7 @@ mq_putmessage(char msgtype, const char *s, size_t len) * be generated late in the shutdown sequence, after all DSMs have already * been detached. */ - if (pq_mq == NULL) + if (pq_mq_handle == NULL) return 0; pq_mq_busy = true; diff --git a/src/backend/storage/ipc/shm_mq.c b/src/backend/storage/ipc/shm_mq.c index f45a67cc27..770559a03e 100644 --- a/src/backend/storage/ipc/shm_mq.c +++ b/src/backend/storage/ipc/shm_mq.c @@ -83,7 +83,9 @@ struct shm_mq * This structure is a backend-private handle for access to a queue. * * mqh_queue is a pointer to the queue we've attached, and mqh_segment is - * a pointer to the dynamic shared memory segment that contains it. + * an optional pointer to the dynamic shared memory segment that contains it. + * (If mqh_segment is provided, we register an on_dsm_detach callback to + * make sure we detach from the queue before detaching from DSM.) * * If this queue is intended to connect the current process with a background * worker that started it, the user can pass a pointer to the worker handle @@ -139,6 +141,7 @@ struct shm_mq_handle MemoryContext mqh_context; }; +static void shm_mq_detach_internal(shm_mq *mq); static shm_mq_result shm_mq_send_bytes(shm_mq_handle *mq, Size nbytes, const void *data, bool nowait, Size *bytes_written); static shm_mq_result shm_mq_receive_bytes(shm_mq *mq, Size bytes_needed, @@ -288,14 +291,15 @@ shm_mq_attach(shm_mq *mq, dsm_segment *seg, BackgroundWorkerHandle *handle) Assert(mq->mq_receiver == MyProc || mq->mq_sender == MyProc); mqh->mqh_queue = mq; mqh->mqh_segment = seg; - mqh->mqh_buffer = NULL; mqh->mqh_handle = handle; + mqh->mqh_buffer = NULL; mqh->mqh_buflen = 0; mqh->mqh_consume_pending = 0; - mqh->mqh_context = CurrentMemoryContext; mqh->mqh_partial_bytes = 0; + mqh->mqh_expected_bytes = 0; mqh->mqh_length_word_complete = false; mqh->mqh_counterparty_attached = false; + mqh->mqh_context = CurrentMemoryContext; if (seg != NULL) on_dsm_detach(seg, shm_mq_detach_callback, PointerGetDatum(mq)); @@ -765,7 +769,28 @@ shm_mq_wait_for_attach(shm_mq_handle *mqh) } /* - * Detach a shared message queue. + * Detach from a shared message queue, and destroy the shm_mq_handle. + */ +void +shm_mq_detach(shm_mq_handle *mqh) +{ + /* Notify counterparty that we're outta here. */ + shm_mq_detach_internal(mqh->mqh_queue); + + /* Cancel on_dsm_detach callback, if any. 
*/ + if (mqh->mqh_segment) + cancel_on_dsm_detach(mqh->mqh_segment, + shm_mq_detach_callback, + PointerGetDatum(mqh->mqh_queue)); + + /* Release local memory associated with handle. */ + if (mqh->mqh_buffer != NULL) + pfree(mqh->mqh_buffer); + pfree(mqh); +} + +/* + * Notify counterparty that we're detaching from shared message queue. * * The purpose of this function is to make sure that the process * with which we're communicating doesn't block forever waiting for us to @@ -773,9 +798,13 @@ shm_mq_wait_for_attach(shm_mq_handle *mqh) * detaches, the receiver can read any messages remaining in the queue; * further reads will return SHM_MQ_DETACHED. If the receiver detaches, * further attempts to send messages will likewise return SHM_MQ_DETACHED. + * + * This is separated out from shm_mq_detach() because if the on_dsm_detach + * callback fires, we only want to do this much. We do not try to touch + * the local shm_mq_handle, as it may have been pfree'd already. */ -void -shm_mq_detach(shm_mq *mq) +static void +shm_mq_detach_internal(shm_mq *mq) { volatile shm_mq *vmq = mq; PGPROC *victim; @@ -1193,5 +1222,5 @@ shm_mq_detach_callback(dsm_segment *seg, Datum arg) { shm_mq *mq = (shm_mq *) DatumGetPointer(arg); - shm_mq_detach(mq); + shm_mq_detach_internal(mq); } diff --git a/src/include/storage/shm_mq.h b/src/include/storage/shm_mq.h index 02a93e0222..7709efcc48 100644 --- a/src/include/storage/shm_mq.h +++ b/src/include/storage/shm_mq.h @@ -62,8 +62,8 @@ extern shm_mq_handle *shm_mq_attach(shm_mq *mq, dsm_segment *seg, /* Associate worker handle with shm_mq. */ extern void shm_mq_set_handle(shm_mq_handle *, BackgroundWorkerHandle *); -/* Break connection. */ -extern void shm_mq_detach(shm_mq *); +/* Break connection, release handle resources. */ +extern void shm_mq_detach(shm_mq_handle *mqh); /* Get the shm_mq from handle. */ extern shm_mq *shm_mq_get_queue(shm_mq_handle *mqh); From 30833ba154e0c1106d61e3270242dc5999a3e4f3 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 31 Aug 2017 15:50:18 -0400 Subject: [PATCH 0068/1087] Expand partitioned tables in PartDesc order. Previously, we expanded the inheritance hierarchy in the order in which find_all_inheritors had locked the tables, but that turns out to block quite a bit of useful optimization. For example, a partition-wise join can't count on two tables with matching bounds to get expanded in the same order. Where possible, this change results in expanding partitioned tables in *bound* order. Bound order isn't well-defined for a list-partitioned table with a null-accepting partition or for a list-partitioned table where the bounds for a single partition are interleaved with other partitions. However, when expansion in bound order is possible, it opens up further opportunities for optimization, such as strength-reducing MergeAppend to Append when the expansion order matches the desired sort order. Patch by me, with cosmetic revisions by Ashutosh Bapat. 
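To make the new ordering easier to follow before diving into the diff, here is a toy sketch (hypothetical Rel type and emit callback; not PostgreSQL code) of the traversal the patch establishes: emit the parent first, then visit each partition in the order it appears in the partition descriptor, recursing depth-first into sub-partitioned children.

/* Toy model of the new expansion order; Rel and emit are made up. */
typedef struct Rel
{
	const char *name;			/* label, illustration only */
	int			nparts;			/* 0 for a plain leaf table */
	struct Rel **parts;			/* children, in bound order */
} Rel;

static void
expand_rel(Rel *rel, void (*emit) (Rel *))
{
	int			i;

	emit(rel);					/* expand the parent (or leaf) itself first */
	for (i = 0; i < rel->nparts; i++)
		expand_rel(rel->parts[i], emit);	/* then recurse, depth-first */
}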
Discussion: http://postgr.es/m/CA+TgmoZrKj7kEzcMSum3aXV4eyvvbh9WD=c6m=002WMheDyE3A@mail.gmail.com --- src/backend/optimizer/prep/prepunion.c | 328 ++++++++++++++++--------- src/test/regress/expected/insert.out | 4 +- 2 files changed, 220 insertions(+), 112 deletions(-) diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index e73c819901..ccf21453fd 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -33,6 +33,7 @@ #include "access/heapam.h" #include "access/htup_details.h" #include "access/sysattr.h" +#include "catalog/partition.h" #include "catalog/pg_inherits_fn.h" #include "catalog/pg_type.h" #include "miscadmin.h" @@ -100,6 +101,19 @@ static List *generate_append_tlist(List *colTypes, List *colCollations, static List *generate_setop_grouplist(SetOperationStmt *op, List *targetlist); static void expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti); +static void expand_partitioned_rtentry(PlannerInfo *root, + RangeTblEntry *parentrte, + Index parentRTindex, Relation parentrel, + PlanRowMark *parentrc, PartitionDesc partdesc, + LOCKMODE lockmode, + bool *has_child, List **appinfos, + List **partitioned_child_rels); +static void expand_single_inheritance_child(PlannerInfo *root, + RangeTblEntry *parentrte, + Index parentRTindex, Relation parentrel, + PlanRowMark *parentrc, Relation childrel, + bool *has_child, List **appinfos, + List **partitioned_child_rels); static void make_inh_translation_list(Relation oldrelation, Relation newrelation, Index newvarno, @@ -1455,131 +1469,62 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) /* Scan the inheritance set and expand it */ appinfos = NIL; has_child = false; - foreach(l, inhOIDs) + if (RelationGetPartitionDesc(oldrelation) != NULL) { - Oid childOID = lfirst_oid(l); - Relation newrelation; - RangeTblEntry *childrte; - Index childRTindex; - AppendRelInfo *appinfo; - - /* Open rel if needed; we already have required locks */ - if (childOID != parentOID) - newrelation = heap_open(childOID, NoLock); - else - newrelation = oldrelation; - - /* - * It is possible that the parent table has children that are temp - * tables of other backends. We cannot safely access such tables - * (because of buffering issues), and the best thing to do seems to be - * to silently ignore them. - */ - if (childOID != parentOID && RELATION_IS_OTHER_TEMP(newrelation)) - { - heap_close(newrelation, lockmode); - continue; - } - /* - * Build an RTE for the child, and attach to query's rangetable list. - * We copy most fields of the parent's RTE, but replace relation OID - * and relkind, and set inh = false. Also, set requiredPerms to zero - * since all required permissions checks are done on the original RTE. - * Likewise, set the child's securityQuals to empty, because we only - * want to apply the parent's RLS conditions regardless of what RLS - * properties individual children may have. (This is an intentional - * choice to make inherited RLS work like regular permissions checks.) - * The parent securityQuals will be propagated to children along with - * other base restriction clauses, so we don't need to do it here. + * If this table has partitions, recursively expand them in the order + * in which they appear in the PartitionDesc. But first, expand the + * parent itself. 
*/ - childrte = copyObject(rte); - childrte->relid = childOID; - childrte->relkind = newrelation->rd_rel->relkind; - childrte->inh = false; - childrte->requiredPerms = 0; - childrte->securityQuals = NIL; - parse->rtable = lappend(parse->rtable, childrte); - childRTindex = list_length(parse->rtable); - + expand_single_inheritance_child(root, rte, rti, oldrelation, oldrc, + oldrelation, + &has_child, &appinfos, + &partitioned_child_rels); + expand_partitioned_rtentry(root, rte, rti, oldrelation, oldrc, + RelationGetPartitionDesc(oldrelation), + lockmode, + &has_child, &appinfos, + &partitioned_child_rels); + } + else + { /* - * Build an AppendRelInfo for this parent and child, unless the child - * is a partitioned table. + * This table has no partitions. Expand any plain inheritance + * children in the order the OIDs were returned by + * find_all_inheritors. */ - if (childrte->relkind != RELKIND_PARTITIONED_TABLE) + foreach(l, inhOIDs) { - /* Remember if we saw a real child. */ + Oid childOID = lfirst_oid(l); + Relation newrelation; + + /* Open rel if needed; we already have required locks */ if (childOID != parentOID) - has_child = true; - - appinfo = makeNode(AppendRelInfo); - appinfo->parent_relid = rti; - appinfo->child_relid = childRTindex; - appinfo->parent_reltype = oldrelation->rd_rel->reltype; - appinfo->child_reltype = newrelation->rd_rel->reltype; - make_inh_translation_list(oldrelation, newrelation, childRTindex, - &appinfo->translated_vars); - appinfo->parent_reloid = parentOID; - appinfos = lappend(appinfos, appinfo); + newrelation = heap_open(childOID, NoLock); + else + newrelation = oldrelation; /* - * Translate the column permissions bitmaps to the child's attnums - * (we have to build the translated_vars list before we can do - * this). But if this is the parent table, leave copyObject's - * result alone. - * - * Note: we need to do this even though the executor won't run any - * permissions checks on the child RTE. The - * insertedCols/updatedCols bitmaps may be examined for - * trigger-firing purposes. + * It is possible that the parent table has children that are temp + * tables of other backends. We cannot safely access such tables + * (because of buffering issues), and the best thing to do seems + * to be to silently ignore them. */ - if (childOID != parentOID) + if (childOID != parentOID && RELATION_IS_OTHER_TEMP(newrelation)) { - childrte->selectedCols = translate_col_privs(rte->selectedCols, - appinfo->translated_vars); - childrte->insertedCols = translate_col_privs(rte->insertedCols, - appinfo->translated_vars); - childrte->updatedCols = translate_col_privs(rte->updatedCols, - appinfo->translated_vars); + heap_close(newrelation, lockmode); + continue; } - } - else - partitioned_child_rels = lappend_int(partitioned_child_rels, - childRTindex); - /* - * Build a PlanRowMark if parent is marked FOR UPDATE/SHARE. - */ - if (oldrc) - { - PlanRowMark *newrc = makeNode(PlanRowMark); - - newrc->rti = childRTindex; - newrc->prti = rti; - newrc->rowmarkId = oldrc->rowmarkId; - /* Reselect rowmark type, because relkind might not match parent */ - newrc->markType = select_rowmark_type(childrte, oldrc->strength); - newrc->allMarkTypes = (1 << newrc->markType); - newrc->strength = oldrc->strength; - newrc->waitPolicy = oldrc->waitPolicy; - - /* - * We mark RowMarks for partitioned child tables as parent - * RowMarks so that the executor ignores them (except their - * existence means that the child tables be locked using - * appropriate mode). 
- */ - newrc->isParent = (childrte->relkind == RELKIND_PARTITIONED_TABLE); - - /* Include child's rowmark type in parent's allMarkTypes */ - oldrc->allMarkTypes |= newrc->allMarkTypes; + expand_single_inheritance_child(root, rte, rti, oldrelation, oldrc, + newrelation, + &has_child, &appinfos, + &partitioned_child_rels); - root->rowMarks = lappend(root->rowMarks, newrc); + /* Close child relations, but keep locks */ + if (childOID != parentOID) + heap_close(newrelation, NoLock); } - - /* Close child relations, but keep locks */ - if (childOID != parentOID) - heap_close(newrelation, NoLock); } heap_close(oldrelation, NoLock); @@ -1620,6 +1565,169 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) root->append_rel_list = list_concat(root->append_rel_list, appinfos); } +static void +expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, + Index parentRTindex, Relation parentrel, + PlanRowMark *parentrc, PartitionDesc partdesc, + LOCKMODE lockmode, + bool *has_child, List **appinfos, + List **partitioned_child_rels) +{ + int i; + + check_stack_depth(); + + for (i = 0; i < partdesc->nparts; i++) + { + Oid childOID = partdesc->oids[i]; + Relation childrel; + + /* Open rel; we already have required locks */ + childrel = heap_open(childOID, NoLock); + + /* As in expand_inherited_rtentry, skip non-local temp tables */ + if (RELATION_IS_OTHER_TEMP(childrel)) + { + heap_close(childrel, lockmode); + continue; + } + + expand_single_inheritance_child(root, parentrte, parentRTindex, + parentrel, parentrc, childrel, + has_child, appinfos, + partitioned_child_rels); + + /* If this child is itself partitioned, recurse */ + if (childrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + expand_partitioned_rtentry(root, parentrte, parentRTindex, + parentrel, parentrc, + RelationGetPartitionDesc(childrel), + lockmode, + has_child, appinfos, + partitioned_child_rels); + + /* Close child relation, but keep locks */ + heap_close(childrel, NoLock); + } +} + +/* + * expand_single_inheritance_child + * Expand a single inheritance child, if needed. + * + * If this is a temp table of another backend, we'll return without doing + * anything at all. Otherwise, we'll set "has_child" to true, build a + * RangeTblEntry and either a PartitionedChildRelInfo or AppendRelInfo as + * appropriate, plus maybe a PlanRowMark. + */ +static void +expand_single_inheritance_child(PlannerInfo *root, RangeTblEntry *parentrte, + Index parentRTindex, Relation parentrel, + PlanRowMark *parentrc, Relation childrel, + bool *has_child, List **appinfos, + List **partitioned_child_rels) +{ + Query *parse = root->parse; + Oid parentOID = RelationGetRelid(parentrel); + Oid childOID = RelationGetRelid(childrel); + RangeTblEntry *childrte; + Index childRTindex; + AppendRelInfo *appinfo; + + /* + * Build an RTE for the child, and attach to query's rangetable list. We + * copy most fields of the parent's RTE, but replace relation OID and + * relkind, and set inh = false. Also, set requiredPerms to zero since + * all required permissions checks are done on the original RTE. Likewise, + * set the child's securityQuals to empty, because we only want to apply + * the parent's RLS conditions regardless of what RLS properties + * individual children may have. (This is an intentional choice to make + * inherited RLS work like regular permissions checks.) The parent + * securityQuals will be propagated to children along with other base + * restriction clauses, so we don't need to do it here. 
+ */ + childrte = copyObject(parentrte); + childrte->relid = childOID; + childrte->relkind = childrel->rd_rel->relkind; + childrte->inh = false; + childrte->requiredPerms = 0; + childrte->securityQuals = NIL; + parse->rtable = lappend(parse->rtable, childrte); + childRTindex = list_length(parse->rtable); + + /* + * Build an AppendRelInfo for this parent and child, unless the child is a + * partitioned table. + */ + if (childrte->relkind != RELKIND_PARTITIONED_TABLE) + { + /* Remember if we saw a real child. */ + if (childOID != parentOID) + *has_child = true; + + appinfo = makeNode(AppendRelInfo); + appinfo->parent_relid = parentRTindex; + appinfo->child_relid = childRTindex; + appinfo->parent_reltype = parentrel->rd_rel->reltype; + appinfo->child_reltype = childrel->rd_rel->reltype; + make_inh_translation_list(parentrel, childrel, childRTindex, + &appinfo->translated_vars); + appinfo->parent_reloid = parentOID; + *appinfos = lappend(*appinfos, appinfo); + + /* + * Translate the column permissions bitmaps to the child's attnums (we + * have to build the translated_vars list before we can do this). But + * if this is the parent table, leave copyObject's result alone. + * + * Note: we need to do this even though the executor won't run any + * permissions checks on the child RTE. The insertedCols/updatedCols + * bitmaps may be examined for trigger-firing purposes. + */ + if (childOID != parentOID) + { + childrte->selectedCols = translate_col_privs(parentrte->selectedCols, + appinfo->translated_vars); + childrte->insertedCols = translate_col_privs(parentrte->insertedCols, + appinfo->translated_vars); + childrte->updatedCols = translate_col_privs(parentrte->updatedCols, + appinfo->translated_vars); + } + } + else + *partitioned_child_rels = lappend_int(*partitioned_child_rels, + childRTindex); + + /* + * Build a PlanRowMark if parent is marked FOR UPDATE/SHARE. + */ + if (parentrc) + { + PlanRowMark *childrc = makeNode(PlanRowMark); + + childrc->rti = childRTindex; + childrc->prti = parentRTindex; + childrc->rowmarkId = parentrc->rowmarkId; + /* Reselect rowmark type, because relkind might not match parent */ + childrc->markType = select_rowmark_type(childrte, parentrc->strength); + childrc->allMarkTypes = (1 << childrc->markType); + childrc->strength = parentrc->strength; + childrc->waitPolicy = parentrc->waitPolicy; + + /* + * We mark RowMarks for partitioned child tables as parent RowMarks so + * that the executor ignores them (except their existence means that + * the child tables be locked using appropriate mode). 
+ */ + childrc->isParent = (childrte->relkind == RELKIND_PARTITIONED_TABLE); + + /* Include child's rowmark type in parent's allMarkTypes */ + parentrc->allMarkTypes |= childrc->allMarkTypes; + + root->rowMarks = lappend(root->rowMarks, childrc); + } +} + /* * make_inh_translation_list * Build the list of translations from parent Vars to child Vars for diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out index a2d9469592..e159d62b66 100644 --- a/src/test/regress/expected/insert.out +++ b/src/test/regress/expected/insert.out @@ -278,12 +278,12 @@ select tableoid::regclass, * from list_parted; -------------+----+---- part_aa_bb | aA | part_cc_dd | cC | 1 - part_null | | 0 - part_null | | 1 part_ee_ff1 | ff | 1 part_ee_ff1 | EE | 1 part_ee_ff2 | ff | 11 part_ee_ff2 | EE | 10 + part_null | | 0 + part_null | | 1 (8 rows) -- some more tests to exercise tuple-routing with multi-level partitioning From 2d44c58c79aeef2d376be0141057afbb9ec6b5bc Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 31 Aug 2017 16:20:58 -0400 Subject: [PATCH 0069/1087] Avoid memory leaks when a GatherMerge node is rescanned. Rescanning a GatherMerge led to leaking some memory in the executor's query-lifespan context, because most of the node's working data structures were simply abandoned and rebuilt from scratch. In practice, this might never amount to much, given the cost of relaunching worker processes --- but it's still pretty messy, so let's fix it. We can rearrange things so that the tuple arrays are simply cleared and reused, and we don't need to rebuild the TupleTableSlots either, just clear them. One small complication is that because we might get a different number of workers on each iteration, we can't keep the old convention that the leader's gm_slots[] entry is the last one; the leader might clobber a TupleTableSlot that we need for a worker in a future iteration. Hence, adjust the logic so that the leader has slot 0 always, while the active workers have slots 1..n. Back-patch to v10 to keep all the existing versions of nodeGatherMerge.c in sync --- because of the renumbering of the slots, there would otherwise be a very large risk that any future backpatches in this module would introduce bugs. 
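To keep the arithmetic straight while reading the diff, here is a tiny hypothetical sketch (not PostgreSQL code) of the revised numbering: the leader always owns slot 0, worker i owns slot i + 1, and the 0-based per-worker arrays (tuple buffers, queue readers) are reached by subtracting one from the slot index.

#include <assert.h>

static int
slot_for_worker(int worker)		/* 0-based worker index */
{
	return worker + 1;			/* slot 0 is reserved for the leader */
}

static int
buffer_for_slot(int slot)		/* 1-based worker slot index */
{
	assert(slot > 0);			/* the leader keeps no tuple buffer */
	return slot - 1;
}

int
main(void)
{
	assert(slot_for_worker(0) == 1);	/* first worker gets slot 1 */
	assert(buffer_for_slot(4) == 3);	/* slot 4 maps to buffer 3 */
	return 0;
}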
Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us --- src/backend/executor/nodeGatherMerge.c | 159 ++++++++++++++++--------- src/include/nodes/execnodes.h | 3 +- 2 files changed, 107 insertions(+), 55 deletions(-) diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 67da5ff71f..b8bb4f8eb0 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -55,8 +55,10 @@ static int32 heap_compare_slots(Datum a, Datum b, void *arg); static TupleTableSlot *gather_merge_getnext(GatherMergeState *gm_state); static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, bool *done); -static void gather_merge_init(GatherMergeState *gm_state); static void ExecShutdownGatherMergeWorkers(GatherMergeState *node); +static void gather_merge_setup(GatherMergeState *gm_state); +static void gather_merge_init(GatherMergeState *gm_state); +static void gather_merge_clear_tuples(GatherMergeState *gm_state); static bool gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait); static void load_tuple_array(GatherMergeState *gm_state, int reader); @@ -149,14 +151,17 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) } /* - * store the tuple descriptor into gather merge state, so we can use it - * later while initializing the gather merge slots. + * Store the tuple descriptor into gather merge state, so we can use it + * while initializing the gather merge slots. */ if (!ExecContextForcesOids(&gm_state->ps, &hasoid)) hasoid = false; tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid); gm_state->tupDesc = tupDesc; + /* Now allocate the workspace for gather merge */ + gather_merge_setup(gm_state); + return gm_state; } @@ -340,6 +345,9 @@ ExecReScanGatherMerge(GatherMergeState *node) /* Make sure any existing workers are gracefully shut down */ ExecShutdownGatherMergeWorkers(node); + /* Free any unused tuples, so we don't leak memory across rescans */ + gather_merge_clear_tuples(node); + /* Mark node so that shared state will be rebuilt at next call */ node->initialized = false; node->gm_initialized = false; @@ -370,49 +378,93 @@ ExecReScanGatherMerge(GatherMergeState *node) } /* - * Initialize the Gather merge tuple read. + * Set up the data structures that we'll need for Gather Merge. + * + * We allocate these once on the basis of gm->num_workers, which is an + * upper bound for the number of workers we'll actually have. During + * a rescan, we reset the structures to empty. This approach simplifies + * not leaking memory across rescans. * - * Pull at least a single tuple from each worker + leader and set up the heap. + * In the gm_slots[] array, index 0 is for the leader, and indexes 1 to n + * are for workers. The values placed into gm_heap correspond to indexes + * in gm_slots[]. The gm_tuple_buffers[] array, however, is indexed from + * 0 to n-1; it has no entry for the leader. */ static void -gather_merge_init(GatherMergeState *gm_state) +gather_merge_setup(GatherMergeState *gm_state) { - int nreaders = gm_state->nreaders; - bool nowait = true; + GatherMerge *gm = castNode(GatherMerge, gm_state->ps.plan); + int nreaders = gm->num_workers; int i; /* * Allocate gm_slots for the number of workers + one more slot for leader. - * Last slot is always for leader. Leader always calls ExecProcNode() to - * read the tuple which will return the TupleTableSlot. Later it will - * directly get assigned to gm_slot. So just initialize leader gm_slot - * with NULL. 
For other slots, code below will call - * ExecInitExtraTupleSlot() to create a slot for the worker's results. + * Slot 0 is always for the leader. Leader always calls ExecProcNode() to + * read the tuple, and then stores it directly into its gm_slots entry. + * For other slots, code below will call ExecInitExtraTupleSlot() to + * create a slot for the worker's results. Note that during any single + * scan, we might have fewer than num_workers available workers, in which + * case the extra array entries go unused. */ - gm_state->gm_slots = - palloc((gm_state->nreaders + 1) * sizeof(TupleTableSlot *)); - gm_state->gm_slots[gm_state->nreaders] = NULL; - - /* Initialize the tuple slot and tuple array for each worker */ - gm_state->gm_tuple_buffers = - (GMReaderTupleBuffer *) palloc0(sizeof(GMReaderTupleBuffer) * - gm_state->nreaders); - for (i = 0; i < gm_state->nreaders; i++) + gm_state->gm_slots = (TupleTableSlot **) + palloc0((nreaders + 1) * sizeof(TupleTableSlot *)); + + /* Allocate the tuple slot and tuple array for each worker */ + gm_state->gm_tuple_buffers = (GMReaderTupleBuffer *) + palloc0(nreaders * sizeof(GMReaderTupleBuffer)); + + for (i = 0; i < nreaders; i++) { /* Allocate the tuple array with length MAX_TUPLE_STORE */ gm_state->gm_tuple_buffers[i].tuple = (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE); - /* Initialize slot for worker */ - gm_state->gm_slots[i] = ExecInitExtraTupleSlot(gm_state->ps.state); - ExecSetSlotDescriptor(gm_state->gm_slots[i], + /* Initialize tuple slot for worker */ + gm_state->gm_slots[i + 1] = ExecInitExtraTupleSlot(gm_state->ps.state); + ExecSetSlotDescriptor(gm_state->gm_slots[i + 1], gm_state->tupDesc); } /* Allocate the resources for the merge */ - gm_state->gm_heap = binaryheap_allocate(gm_state->nreaders + 1, + gm_state->gm_heap = binaryheap_allocate(nreaders + 1, heap_compare_slots, gm_state); +} + +/* + * Initialize the Gather Merge. + * + * Reset data structures to ensure they're empty. Then pull at least one + * tuple from leader + each worker (or set its "done" indicator), and set up + * the heap. + */ +static void +gather_merge_init(GatherMergeState *gm_state) +{ + int nreaders = gm_state->nreaders; + bool nowait = true; + int i; + + /* Assert that gather_merge_setup made enough space */ + Assert(nreaders <= castNode(GatherMerge, gm_state->ps.plan)->num_workers); + + /* Reset leader's tuple slot to empty */ + gm_state->gm_slots[0] = NULL; + + /* Reset the tuple slot and tuple array for each worker */ + for (i = 0; i < nreaders; i++) + { + /* Reset tuple array to empty */ + gm_state->gm_tuple_buffers[i].nTuples = 0; + gm_state->gm_tuple_buffers[i].readCounter = 0; + /* Reset done flag to not-done */ + gm_state->gm_tuple_buffers[i].done = false; + /* Ensure output slot is empty */ + ExecClearTuple(gm_state->gm_slots[i + 1]); + } + + /* Reset binary heap to empty */ + binaryheap_reset(gm_state->gm_heap); /* * First, try to read a tuple from each worker (including leader) in @@ -422,14 +474,13 @@ gather_merge_init(GatherMergeState *gm_state) * least one tuple) to the heap. */ reread: - for (i = 0; i < nreaders + 1; i++) + for (i = 0; i <= nreaders; i++) { CHECK_FOR_INTERRUPTS(); - /* ignore this source if already known done */ - if ((i < nreaders) ? - !gm_state->gm_tuple_buffers[i].done : - gm_state->need_to_scan_locally) + /* skip this source if already known done */ + if ((i == 0) ? 
gm_state->need_to_scan_locally : + !gm_state->gm_tuple_buffers[i - 1].done) { if (TupIsNull(gm_state->gm_slots[i])) { @@ -450,9 +501,9 @@ gather_merge_init(GatherMergeState *gm_state) } /* need not recheck leader, since nowait doesn't matter for it */ - for (i = 0; i < nreaders; i++) + for (i = 1; i <= nreaders; i++) { - if (!gm_state->gm_tuple_buffers[i].done && + if (!gm_state->gm_tuple_buffers[i - 1].done && TupIsNull(gm_state->gm_slots[i])) { nowait = false; @@ -467,23 +518,23 @@ gather_merge_init(GatherMergeState *gm_state) } /* - * Clear out the tuple table slots for each gather merge input. + * Clear out the tuple table slot, and any unused pending tuples, + * for each gather merge input. */ static void -gather_merge_clear_slots(GatherMergeState *gm_state) +gather_merge_clear_tuples(GatherMergeState *gm_state) { int i; for (i = 0; i < gm_state->nreaders; i++) { - pfree(gm_state->gm_tuple_buffers[i].tuple); - ExecClearTuple(gm_state->gm_slots[i]); - } + GMReaderTupleBuffer *tuple_buffer = &gm_state->gm_tuple_buffers[i]; - /* Free tuple array as we don't need it any more */ - pfree(gm_state->gm_tuple_buffers); - /* Free the binaryheap, which was created for sort */ - binaryheap_free(gm_state->gm_heap); + while (tuple_buffer->readCounter < tuple_buffer->nTuples) + heap_freetuple(tuple_buffer->tuple[tuple_buffer->readCounter++]); + + ExecClearTuple(gm_state->gm_slots[i + 1]); + } } /* @@ -526,7 +577,7 @@ gather_merge_getnext(GatherMergeState *gm_state) if (binaryheap_empty(gm_state->gm_heap)) { /* All the queues are exhausted, and so is the heap */ - gather_merge_clear_slots(gm_state); + gather_merge_clear_tuples(gm_state); return NULL; } else @@ -548,10 +599,10 @@ load_tuple_array(GatherMergeState *gm_state, int reader) int i; /* Don't do anything if this is the leader. */ - if (reader == gm_state->nreaders) + if (reader == 0) return; - tuple_buffer = &gm_state->gm_tuple_buffers[reader]; + tuple_buffer = &gm_state->gm_tuple_buffers[reader - 1]; /* If there's nothing in the array, reset the counters to zero. */ if (tuple_buffer->nTuples == tuple_buffer->readCounter) @@ -590,7 +641,7 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) * If we're being asked to generate a tuple from the leader, then we just * call ExecProcNode as normal to produce one. */ - if (gm_state->nreaders == reader) + if (reader == 0) { if (gm_state->need_to_scan_locally) { @@ -601,7 +652,7 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) if (!TupIsNull(outerTupleSlot)) { - gm_state->gm_slots[reader] = outerTupleSlot; + gm_state->gm_slots[0] = outerTupleSlot; return true; } /* need_to_scan_locally serves as "done" flag for leader */ @@ -611,7 +662,7 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) } /* Otherwise, check the state of the relevant tuple buffer. */ - tuple_buffer = &gm_state->gm_tuple_buffers[reader]; + tuple_buffer = &gm_state->gm_tuple_buffers[reader - 1]; if (tuple_buffer->nTuples > tuple_buffer->readCounter) { @@ -621,8 +672,8 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) else if (tuple_buffer->done) { /* Reader is known to be exhausted. 
*/ - DestroyTupleQueueReader(gm_state->reader[reader]); - gm_state->reader[reader] = NULL; + DestroyTupleQueueReader(gm_state->reader[reader - 1]); + gm_state->reader[reader - 1] = NULL; return false; } else @@ -649,14 +700,14 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) ExecStoreTuple(tup, /* tuple to store */ gm_state->gm_slots[reader], /* slot in which to store the * tuple */ - InvalidBuffer, /* buffer associated with this tuple */ - true); /* pfree this pointer if not from heap */ + InvalidBuffer, /* no buffer associated with tuple */ + true); /* pfree tuple when done with it */ return true; } /* - * Attempt to read a tuple from given reader. + * Attempt to read a tuple from given worker. */ static HeapTuple gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, @@ -671,7 +722,7 @@ gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, CHECK_FOR_INTERRUPTS(); /* Attempt to read a tuple. */ - reader = gm_state->reader[nreader]; + reader = gm_state->reader[nreader - 1]; /* Run TupleQueueReaders in per-tuple context */ tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory; diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 6cf128a7f0..90a60abc4d 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1958,7 +1958,8 @@ typedef struct GatherMergeState int gm_nkeys; /* number of sort columns */ SortSupport gm_sortkeys; /* array of length gm_nkeys */ struct ParallelExecutorInfo *pei; - /* all remaining fields are reinitialized during a rescan: */ + /* all remaining fields are reinitialized during a rescan */ + /* (but the arrays are not reallocated, just cleared) */ int nworkers_launched; /* original number of workers */ int nreaders; /* number of active workers */ TupleTableSlot **gm_slots; /* array with nreaders+1 entries */ From 81c5e46c490e2426db243eada186995da5bb0ba7 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 31 Aug 2017 22:21:21 -0400 Subject: [PATCH 0070/1087] Introduce 64-bit hash functions with a 64-bit seed. This will be useful for hash partitioning, which needs a way to seed the hash functions to avoid problems such as a hash index on a hash partitioned table clumping all values into a small portion of the bucket space; it's also useful for anything that wants a 64-bit hash value rather than a 32-bit hash value. Just in case somebody wants a 64-bit hash value that is compatible with the existing 32-bit hash values, make the low 32-bits of the 64-bit hash value match the 32-bit hash value when the seed is 0. 
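For instance (a minimal illustrative sketch, not code from this patch;
"key" and "len" stand for any byte string being hashed):

    uint32 h32 = DatumGetUInt32(hash_any(key, len));
    uint64 h64 = DatumGetUInt64(hash_any_extended(key, len, 0));
    Assert((uint32) h64 == h32);    /* seed 0: low 32 bits agree */

With a non-zero seed, hash_any_extended mixes the seed into its
internal state up front, yielding a differently-distributed value.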
Robert Haas and Amul Sul Discussion: http://postgr.es/m/CA+Tgmoafx2yoJuhCQQOL5CocEi-w_uG4S2xT0EtgiJnPGcHW3g@mail.gmail.com --- doc/src/sgml/xindex.sgml | 13 +- src/backend/access/hash/hashfunc.c | 372 +++++++++++++++++++- src/backend/access/hash/hashpage.c | 2 +- src/backend/access/hash/hashutil.c | 6 +- src/backend/access/hash/hashvalidate.c | 42 ++- src/backend/commands/opclasscmds.c | 34 +- src/backend/utils/adt/acl.c | 15 + src/backend/utils/adt/arrayfuncs.c | 79 +++++ src/backend/utils/adt/date.c | 21 ++ src/backend/utils/adt/jsonb_op.c | 43 +++ src/backend/utils/adt/jsonb_util.c | 43 +++ src/backend/utils/adt/mac.c | 9 + src/backend/utils/adt/mac8.c | 9 + src/backend/utils/adt/network.c | 10 + src/backend/utils/adt/numeric.c | 60 ++++ src/backend/utils/adt/pg_lsn.c | 6 + src/backend/utils/adt/rangetypes.c | 63 ++++ src/backend/utils/adt/timestamp.c | 19 + src/backend/utils/adt/uuid.c | 8 + src/backend/utils/adt/varchar.c | 18 + src/backend/utils/cache/lsyscache.c | 8 +- src/backend/utils/cache/typcache.c | 58 ++- src/include/access/hash.h | 32 +- src/include/catalog/catversion.h | 2 +- src/include/catalog/pg_amproc.h | 36 ++ src/include/catalog/pg_proc.h | 54 +++ src/include/fmgr.h | 1 + src/include/utils/jsonb.h | 2 + src/include/utils/typcache.h | 4 + src/test/regress/expected/alter_generic.out | 4 +- src/test/regress/expected/hash_func.out | 300 ++++++++++++++++ src/test/regress/parallel_schedule | 2 +- src/test/regress/sql/hash_func.sql | 222 ++++++++++++ 33 files changed, 1555 insertions(+), 42 deletions(-) create mode 100644 src/test/regress/expected/hash_func.out create mode 100644 src/test/regress/sql/hash_func.sql diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index 333a36c456..745b4d5619 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -436,7 +436,8 @@
- Hash indexes require one support function, shown in . @@ -451,9 +452,17 @@ - Compute the hash value for a key + Compute the 32-bit hash value for a key 1 + + + Compute the 64-bit hash value for a key given a 64-bit salt; if + the salt is 0, the low 32 bits will match the value that would + have been computed by function 1 + + 2 + diff --git a/src/backend/access/hash/hashfunc.c b/src/backend/access/hash/hashfunc.c index a127f3f8b1..413e6b6ca3 100644 --- a/src/backend/access/hash/hashfunc.c +++ b/src/backend/access/hash/hashfunc.c @@ -46,18 +46,36 @@ hashchar(PG_FUNCTION_ARGS) return hash_uint32((int32) PG_GETARG_CHAR(0)); } +Datum +hashcharextended(PG_FUNCTION_ARGS) +{ + return hash_uint32_extended((int32) PG_GETARG_CHAR(0), PG_GETARG_INT64(1)); +} + Datum hashint2(PG_FUNCTION_ARGS) { return hash_uint32((int32) PG_GETARG_INT16(0)); } +Datum +hashint2extended(PG_FUNCTION_ARGS) +{ + return hash_uint32_extended((int32) PG_GETARG_INT16(0), PG_GETARG_INT64(1)); +} + Datum hashint4(PG_FUNCTION_ARGS) { return hash_uint32(PG_GETARG_INT32(0)); } +Datum +hashint4extended(PG_FUNCTION_ARGS) +{ + return hash_uint32_extended(PG_GETARG_INT32(0), PG_GETARG_INT64(1)); +} + Datum hashint8(PG_FUNCTION_ARGS) { @@ -78,18 +96,43 @@ hashint8(PG_FUNCTION_ARGS) return hash_uint32(lohalf); } +Datum +hashint8extended(PG_FUNCTION_ARGS) +{ + /* Same approach as hashint8 */ + int64 val = PG_GETARG_INT64(0); + uint32 lohalf = (uint32) val; + uint32 hihalf = (uint32) (val >> 32); + + lohalf ^= (val >= 0) ? hihalf : ~hihalf; + + return hash_uint32_extended(lohalf, PG_GETARG_INT64(1)); +} + Datum hashoid(PG_FUNCTION_ARGS) { return hash_uint32((uint32) PG_GETARG_OID(0)); } +Datum +hashoidextended(PG_FUNCTION_ARGS) +{ + return hash_uint32_extended((uint32) PG_GETARG_OID(0), PG_GETARG_INT64(1)); +} + Datum hashenum(PG_FUNCTION_ARGS) { return hash_uint32((uint32) PG_GETARG_OID(0)); } +Datum +hashenumextended(PG_FUNCTION_ARGS) +{ + return hash_uint32_extended((uint32) PG_GETARG_OID(0), PG_GETARG_INT64(1)); +} + Datum hashfloat4(PG_FUNCTION_ARGS) { @@ -116,6 +159,21 @@ hashfloat4(PG_FUNCTION_ARGS) return hash_any((unsigned char *) &key8, sizeof(key8)); } +Datum +hashfloat4extended(PG_FUNCTION_ARGS) +{ + float4 key = PG_GETARG_FLOAT4(0); + uint64 seed = PG_GETARG_INT64(1); + float8 key8; + + /* Same approach as hashfloat4 */ + if (key == (float4) 0) + PG_RETURN_UINT64(seed); + key8 = key; + + return hash_any_extended((unsigned char *) &key8, sizeof(key8), seed); +} + Datum hashfloat8(PG_FUNCTION_ARGS) { @@ -132,6 +190,19 @@ hashfloat8(PG_FUNCTION_ARGS) return hash_any((unsigned char *) &key, sizeof(key)); } +Datum +hashfloat8extended(PG_FUNCTION_ARGS) +{ + float8 key = PG_GETARG_FLOAT8(0); + uint64 seed = PG_GETARG_INT64(1); + + /* Same approach as hashfloat8 */ + if (key == (float8) 0) + PG_RETURN_UINT64(seed); + + return hash_any_extended((unsigned char *) &key, sizeof(key), seed); +} + Datum hashoidvector(PG_FUNCTION_ARGS) { @@ -140,6 +211,16 @@ hashoidvector(PG_FUNCTION_ARGS) return hash_any((unsigned char *) key->values, key->dim1 * sizeof(Oid)); } +Datum +hashoidvectorextended(PG_FUNCTION_ARGS) +{ + oidvector *key = (oidvector *) PG_GETARG_POINTER(0); + + return hash_any_extended((unsigned char *) key->values, + key->dim1 * sizeof(Oid), + PG_GETARG_INT64(1)); +} + Datum hashname(PG_FUNCTION_ARGS) { @@ -148,6 +229,15 @@ hashname(PG_FUNCTION_ARGS) return hash_any((unsigned char *) key, strlen(key)); } +Datum +hashnameextended(PG_FUNCTION_ARGS) +{ + char *key = NameStr(*PG_GETARG_NAME(0)); + + return 
hash_any_extended((unsigned char *) key, strlen(key), + PG_GETARG_INT64(1)); +} + Datum hashtext(PG_FUNCTION_ARGS) { @@ -168,6 +258,22 @@ hashtext(PG_FUNCTION_ARGS) return result; } +Datum +hashtextextended(PG_FUNCTION_ARGS) +{ + text *key = PG_GETARG_TEXT_PP(0); + Datum result; + + /* Same approach as hashtext */ + result = hash_any_extended((unsigned char *) VARDATA_ANY(key), + VARSIZE_ANY_EXHDR(key), + PG_GETARG_INT64(1)); + + PG_FREE_IF_COPY(key, 0); + + return result; +} + /* * hashvarlena() can be used for any varlena datatype in which there are * no non-significant bits, ie, distinct bitpatterns never compare as equal. @@ -187,6 +293,21 @@ hashvarlena(PG_FUNCTION_ARGS) return result; } +Datum +hashvarlenaextended(PG_FUNCTION_ARGS) +{ + struct varlena *key = PG_GETARG_VARLENA_PP(0); + Datum result; + + result = hash_any_extended((unsigned char *) VARDATA_ANY(key), + VARSIZE_ANY_EXHDR(key), + PG_GETARG_INT64(1)); + + PG_FREE_IF_COPY(key, 0); + + return result; +} + /* * This hash function was written by Bob Jenkins * (bob_jenkins@burtleburtle.net), and superficially adapted @@ -502,7 +623,227 @@ hash_any(register const unsigned char *k, register int keylen) } /* - * hash_uint32() -- hash a 32-bit value + * hash_any_extended() -- hash into a 64-bit value, using an optional seed + * k : the key (the unaligned variable-length array of bytes) + * len : the length of the key, counting by bytes + * seed : a 64-bit seed (0 means no seed) + * + * Returns a uint64 value. Otherwise similar to hash_any. + */ +Datum +hash_any_extended(register const unsigned char *k, register int keylen, + uint64 seed) +{ + register uint32 a, + b, + c, + len; + + /* Set up the internal state */ + len = keylen; + a = b = c = 0x9e3779b9 + len + 3923095; + + /* If the seed is non-zero, use it to perturb the internal state. */ + if (seed != 0) + { + /* + * In essence, the seed is treated as part of the data being hashed, + * but for simplicity, we pretend that it's padded with four bytes of + * zeroes so that the seed constitutes a 12-byte chunk. 
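+	 * (When the seed is 0, this mixing step is skipped entirely, so the
+	 * internal state, and hence the low 32 bits of the result, match
+	 * what hash_any would compute for the same key.)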
+ */ + a += (uint32) (seed >> 32); + b += (uint32) seed; + mix(a, b, c); + } + + /* If the source pointer is word-aligned, we use word-wide fetches */ + if (((uintptr_t) k & UINT32_ALIGN_MASK) == 0) + { + /* Code path for aligned source data */ + register const uint32 *ka = (const uint32 *) k; + + /* handle most of the key */ + while (len >= 12) + { + a += ka[0]; + b += ka[1]; + c += ka[2]; + mix(a, b, c); + ka += 3; + len -= 12; + } + + /* handle the last 11 bytes */ + k = (const unsigned char *) ka; +#ifdef WORDS_BIGENDIAN + switch (len) + { + case 11: + c += ((uint32) k[10] << 8); + /* fall through */ + case 10: + c += ((uint32) k[9] << 16); + /* fall through */ + case 9: + c += ((uint32) k[8] << 24); + /* the lowest byte of c is reserved for the length */ + /* fall through */ + case 8: + b += ka[1]; + a += ka[0]; + break; + case 7: + b += ((uint32) k[6] << 8); + /* fall through */ + case 6: + b += ((uint32) k[5] << 16); + /* fall through */ + case 5: + b += ((uint32) k[4] << 24); + /* fall through */ + case 4: + a += ka[0]; + break; + case 3: + a += ((uint32) k[2] << 8); + /* fall through */ + case 2: + a += ((uint32) k[1] << 16); + /* fall through */ + case 1: + a += ((uint32) k[0] << 24); + /* case 0: nothing left to add */ + } +#else /* !WORDS_BIGENDIAN */ + switch (len) + { + case 11: + c += ((uint32) k[10] << 24); + /* fall through */ + case 10: + c += ((uint32) k[9] << 16); + /* fall through */ + case 9: + c += ((uint32) k[8] << 8); + /* the lowest byte of c is reserved for the length */ + /* fall through */ + case 8: + b += ka[1]; + a += ka[0]; + break; + case 7: + b += ((uint32) k[6] << 16); + /* fall through */ + case 6: + b += ((uint32) k[5] << 8); + /* fall through */ + case 5: + b += k[4]; + /* fall through */ + case 4: + a += ka[0]; + break; + case 3: + a += ((uint32) k[2] << 16); + /* fall through */ + case 2: + a += ((uint32) k[1] << 8); + /* fall through */ + case 1: + a += k[0]; + /* case 0: nothing left to add */ + } +#endif /* WORDS_BIGENDIAN */ + } + else + { + /* Code path for non-aligned source data */ + + /* handle most of the key */ + while (len >= 12) + { +#ifdef WORDS_BIGENDIAN + a += (k[3] + ((uint32) k[2] << 8) + ((uint32) k[1] << 16) + ((uint32) k[0] << 24)); + b += (k[7] + ((uint32) k[6] << 8) + ((uint32) k[5] << 16) + ((uint32) k[4] << 24)); + c += (k[11] + ((uint32) k[10] << 8) + ((uint32) k[9] << 16) + ((uint32) k[8] << 24)); +#else /* !WORDS_BIGENDIAN */ + a += (k[0] + ((uint32) k[1] << 8) + ((uint32) k[2] << 16) + ((uint32) k[3] << 24)); + b += (k[4] + ((uint32) k[5] << 8) + ((uint32) k[6] << 16) + ((uint32) k[7] << 24)); + c += (k[8] + ((uint32) k[9] << 8) + ((uint32) k[10] << 16) + ((uint32) k[11] << 24)); +#endif /* WORDS_BIGENDIAN */ + mix(a, b, c); + k += 12; + len -= 12; + } + + /* handle the last 11 bytes */ +#ifdef WORDS_BIGENDIAN + switch (len) /* all the case statements fall through */ + { + case 11: + c += ((uint32) k[10] << 8); + case 10: + c += ((uint32) k[9] << 16); + case 9: + c += ((uint32) k[8] << 24); + /* the lowest byte of c is reserved for the length */ + case 8: + b += k[7]; + case 7: + b += ((uint32) k[6] << 8); + case 6: + b += ((uint32) k[5] << 16); + case 5: + b += ((uint32) k[4] << 24); + case 4: + a += k[3]; + case 3: + a += ((uint32) k[2] << 8); + case 2: + a += ((uint32) k[1] << 16); + case 1: + a += ((uint32) k[0] << 24); + /* case 0: nothing left to add */ + } +#else /* !WORDS_BIGENDIAN */ + switch (len) /* all the case statements fall through */ + { + case 11: + c += ((uint32) k[10] << 24); + case 10: + c += ((uint32) 
k[9] << 16); + case 9: + c += ((uint32) k[8] << 8); + /* the lowest byte of c is reserved for the length */ + case 8: + b += ((uint32) k[7] << 24); + case 7: + b += ((uint32) k[6] << 16); + case 6: + b += ((uint32) k[5] << 8); + case 5: + b += k[4]; + case 4: + a += ((uint32) k[3] << 24); + case 3: + a += ((uint32) k[2] << 16); + case 2: + a += ((uint32) k[1] << 8); + case 1: + a += k[0]; + /* case 0: nothing left to add */ + } +#endif /* WORDS_BIGENDIAN */ + } + + final(a, b, c); + + /* report the result */ + PG_RETURN_UINT64(((uint64) b << 32) | c); +} + +/* + * hash_uint32() -- hash a 32-bit value to a 32-bit value * * This has the same result as * hash_any(&k, sizeof(uint32)) @@ -523,3 +864,32 @@ hash_uint32(uint32 k) /* report the result */ return UInt32GetDatum(c); } + +/* + * hash_uint32_extended() -- hash a 32-bit value to a 64-bit value, with a seed + * + * Like hash_uint32, this is a convenience function. + */ +Datum +hash_uint32_extended(uint32 k, uint64 seed) +{ + register uint32 a, + b, + c; + + a = b = c = 0x9e3779b9 + (uint32) sizeof(uint32) + 3923095; + + if (seed != 0) + { + a += (uint32) (seed >> 32); + b += (uint32) seed; + mix(a, b, c); + } + + a += k; + + final(a, b, c); + + /* report the result */ + PG_RETURN_UINT64(((uint64) b << 32) | c); +} diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c index 7b2906b0ca..05798419fc 100644 --- a/src/backend/access/hash/hashpage.c +++ b/src/backend/access/hash/hashpage.c @@ -373,7 +373,7 @@ _hash_init(Relation rel, double num_tuples, ForkNumber forkNum) if (ffactor < 10) ffactor = 10; - procid = index_getprocid(rel, 1, HASHPROC); + procid = index_getprocid(rel, 1, HASHSTANDARD_PROC); /* * We initialize the metapage, the first N bucket pages, and the first diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c index 9b803af7c2..869cbc1081 100644 --- a/src/backend/access/hash/hashutil.c +++ b/src/backend/access/hash/hashutil.c @@ -85,7 +85,7 @@ _hash_datum2hashkey(Relation rel, Datum key) Oid collation; /* XXX assumes index has only one attribute */ - procinfo = index_getprocinfo(rel, 1, HASHPROC); + procinfo = index_getprocinfo(rel, 1, HASHSTANDARD_PROC); collation = rel->rd_indcollation[0]; return DatumGetUInt32(FunctionCall1Coll(procinfo, collation, key)); @@ -108,10 +108,10 @@ _hash_datum2hashkey_type(Relation rel, Datum key, Oid keytype) hash_proc = get_opfamily_proc(rel->rd_opfamily[0], keytype, keytype, - HASHPROC); + HASHSTANDARD_PROC); if (!RegProcedureIsValid(hash_proc)) elog(ERROR, "missing support function %d(%u,%u) for index \"%s\"", - HASHPROC, keytype, keytype, + HASHSTANDARD_PROC, keytype, keytype, RelationGetRelationName(rel)); collation = rel->rd_indcollation[0]; diff --git a/src/backend/access/hash/hashvalidate.c b/src/backend/access/hash/hashvalidate.c index 30b29cb100..8b633c273a 100644 --- a/src/backend/access/hash/hashvalidate.c +++ b/src/backend/access/hash/hashvalidate.c @@ -29,7 +29,7 @@ #include "utils/syscache.h" -static bool check_hash_func_signature(Oid funcid, Oid restype, Oid argtype); +static bool check_hash_func_signature(Oid funcid, int16 amprocnum, Oid argtype); /* @@ -105,8 +105,9 @@ hashvalidate(Oid opclassoid) /* Check procedure numbers and function signatures */ switch (procform->amprocnum) { - case HASHPROC: - if (!check_hash_func_signature(procform->amproc, INT4OID, + case HASHSTANDARD_PROC: + case HASHEXTENDED_PROC: + if (!check_hash_func_signature(procform->amproc, procform->amprocnum, procform->amproclefttype)) { 
ereport(INFO, @@ -264,19 +265,37 @@ hashvalidate(Oid opclassoid) * hacks in the core hash opclass definitions. */ static bool -check_hash_func_signature(Oid funcid, Oid restype, Oid argtype) +check_hash_func_signature(Oid funcid, int16 amprocnum, Oid argtype) { bool result = true; + Oid restype; + int16 nargs; HeapTuple tp; Form_pg_proc procform; + switch (amprocnum) + { + case HASHSTANDARD_PROC: + restype = INT4OID; + nargs = 1; + break; + + case HASHEXTENDED_PROC: + restype = INT8OID; + nargs = 2; + break; + + default: + elog(ERROR, "invalid amprocnum"); + } + tp = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid)); if (!HeapTupleIsValid(tp)) elog(ERROR, "cache lookup failed for function %u", funcid); procform = (Form_pg_proc) GETSTRUCT(tp); if (procform->prorettype != restype || procform->proretset || - procform->pronargs != 1) + procform->pronargs != nargs) result = false; if (!IsBinaryCoercible(argtype, procform->proargtypes.values[0])) @@ -290,24 +309,29 @@ check_hash_func_signature(Oid funcid, Oid restype, Oid argtype) * identity, not just its input type, because hashvarlena() takes * INTERNAL and allowing any such function seems too scary. */ - if (funcid == F_HASHINT4 && + if ((funcid == F_HASHINT4 || funcid == F_HASHINT4EXTENDED) && (argtype == DATEOID || argtype == ABSTIMEOID || argtype == RELTIMEOID || argtype == XIDOID || argtype == CIDOID)) /* okay, allowed use of hashint4() */ ; - else if (funcid == F_TIMESTAMP_HASH && + else if ((funcid == F_TIMESTAMP_HASH || + funcid == F_TIMESTAMP_HASH_EXTENDED) && argtype == TIMESTAMPTZOID) /* okay, allowed use of timestamp_hash() */ ; - else if (funcid == F_HASHCHAR && + else if ((funcid == F_HASHCHAR || funcid == F_HASHCHAREXTENDED) && argtype == BOOLOID) /* okay, allowed use of hashchar() */ ; - else if (funcid == F_HASHVARLENA && + else if ((funcid == F_HASHVARLENA || funcid == F_HASHVARLENAEXTENDED) && argtype == BYTEAOID) /* okay, allowed use of hashvarlena() */ ; else result = false; } + /* If function takes a second argument, it must be for a 64-bit salt. */ + if (nargs == 2 && procform->proargtypes.values[1] != INT8OID) + result = false; + ReleaseSysCache(tp); return result; } diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c index a31b1acb9c..d23e6d6f25 100644 --- a/src/backend/commands/opclasscmds.c +++ b/src/backend/commands/opclasscmds.c @@ -18,6 +18,7 @@ #include #include "access/genam.h" +#include "access/hash.h" #include "access/heapam.h" #include "access/nbtree.h" #include "access/htup_details.h" @@ -1129,7 +1130,8 @@ assignProcTypes(OpFamilyMember *member, Oid amoid, Oid typeoid) /* * btree comparison procs must be 2-arg procs returning int4, while btree * sortsupport procs must take internal and return void. hash support - * procs must be 1-arg procs returning int4. Otherwise we don't know. + * proc 1 must be a 1-arg proc returning int4, while proc 2 must be a + * 2-arg proc returning int8. Otherwise we don't know. 
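+	 * (The second argument of proc 2 is the int8 salt; see
+	 * HASHEXTENDED_PROC in access/hash.h.)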
*/ if (amoid == BTREE_AM_OID) { @@ -1172,14 +1174,28 @@ assignProcTypes(OpFamilyMember *member, Oid amoid, Oid typeoid) } else if (amoid == HASH_AM_OID) { - if (procform->pronargs != 1) - ereport(ERROR, - (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), - errmsg("hash procedures must have one argument"))); - if (procform->prorettype != INT4OID) - ereport(ERROR, - (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), - errmsg("hash procedures must return integer"))); + if (member->number == HASHSTANDARD_PROC) + { + if (procform->pronargs != 1) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("hash procedure 1 must have one argument"))); + if (procform->prorettype != INT4OID) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("hash procedure 1 must return integer"))); + } + else if (member->number == HASHEXTENDED_PROC) + { + if (procform->pronargs != 2) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("hash procedure 2 must have two arguments"))); + if (procform->prorettype != INT8OID) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("hash procedure 2 must return bigint"))); + } /* * If lefttype/righttype isn't specified, use the proc's input type diff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c index 2efb6c94e1..0c26e44d82 100644 --- a/src/backend/utils/adt/acl.c +++ b/src/backend/utils/adt/acl.c @@ -16,6 +16,7 @@ #include +#include "access/hash.h" #include "access/htup_details.h" #include "catalog/catalog.h" #include "catalog/namespace.h" @@ -717,6 +718,20 @@ hash_aclitem(PG_FUNCTION_ARGS) PG_RETURN_UINT32((uint32) (a->ai_privs + a->ai_grantee + a->ai_grantor)); } +/* + * 64-bit hash function for aclitem. + * + * Similar to hash_aclitem, but accepts a seed and returns a uint64 value. + */ +Datum +hash_aclitem_extended(PG_FUNCTION_ARGS) +{ + AclItem *a = PG_GETARG_ACLITEM_P(0); + uint64 seed = PG_GETARG_INT64(1); + uint32 sum = (uint32) (a->ai_privs + a->ai_grantee + a->ai_grantor); + + return (seed == 0) ? UInt64GetDatum(sum) : hash_uint32_extended(sum, seed); +} /* * acldefault() --- create an ACL describing default access permissions diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c index 34dadd6e19..522af7affc 100644 --- a/src/backend/utils/adt/arrayfuncs.c +++ b/src/backend/utils/adt/arrayfuncs.c @@ -20,6 +20,7 @@ #endif #include +#include "access/hash.h" #include "access/htup_details.h" #include "catalog/pg_type.h" #include "funcapi.h" @@ -4020,6 +4021,84 @@ hash_array(PG_FUNCTION_ARGS) PG_RETURN_UINT32(result); } +/* + * Returns 64-bit value by hashing a value to a 64-bit value, with a seed. + * Otherwise, similar to hash_array. 
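+ * Element hashes are combined as result = result * 31 + elthash (the
+ * shift-and-subtract below), with NULL elements hashed as zero.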
+ */ +Datum +hash_array_extended(PG_FUNCTION_ARGS) +{ + AnyArrayType *array = PG_GETARG_ANY_ARRAY(0); + uint64 seed = PG_GETARG_INT64(1); + int ndims = AARR_NDIM(array); + int *dims = AARR_DIMS(array); + Oid element_type = AARR_ELEMTYPE(array); + uint64 result = 1; + int nitems; + TypeCacheEntry *typentry; + int typlen; + bool typbyval; + char typalign; + int i; + array_iter iter; + FunctionCallInfoData locfcinfo; + + typentry = (TypeCacheEntry *) fcinfo->flinfo->fn_extra; + if (typentry == NULL || + typentry->type_id != element_type) + { + typentry = lookup_type_cache(element_type, + TYPECACHE_HASH_EXTENDED_PROC_FINFO); + if (!OidIsValid(typentry->hash_extended_proc_finfo.fn_oid)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("could not identify an extended hash function for type %s", + format_type_be(element_type)))); + fcinfo->flinfo->fn_extra = (void *) typentry; + } + typlen = typentry->typlen; + typbyval = typentry->typbyval; + typalign = typentry->typalign; + + InitFunctionCallInfoData(locfcinfo, &typentry->hash_extended_proc_finfo, 2, + InvalidOid, NULL, NULL); + + /* Loop over source data */ + nitems = ArrayGetNItems(ndims, dims); + array_iter_setup(&iter, array); + + for (i = 0; i < nitems; i++) + { + Datum elt; + bool isnull; + uint64 elthash; + + /* Get element, checking for NULL */ + elt = array_iter_next(&iter, &isnull, i, typlen, typbyval, typalign); + + if (isnull) + { + elthash = 0; + } + else + { + /* Apply the hash function */ + locfcinfo.arg[0] = elt; + locfcinfo.arg[1] = seed; + locfcinfo.argnull[0] = false; + locfcinfo.argnull[1] = false; + locfcinfo.isnull = false; + elthash = DatumGetUInt64(FunctionCallInvoke(&locfcinfo)); + } + + result = (result << 5) - result + elthash; + } + + AARR_FREE_IF_COPY(array, 0); + + PG_RETURN_UINT64(result); +} + /*----------------------------------------------------------------------------- * array overlap/containment comparisons diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c index 7d89d79438..34c0b52d58 100644 --- a/src/backend/utils/adt/date.c +++ b/src/backend/utils/adt/date.c @@ -1508,6 +1508,12 @@ time_hash(PG_FUNCTION_ARGS) return hashint8(fcinfo); } +Datum +time_hash_extended(PG_FUNCTION_ARGS) +{ + return hashint8extended(fcinfo); +} + Datum time_larger(PG_FUNCTION_ARGS) { @@ -2213,6 +2219,21 @@ timetz_hash(PG_FUNCTION_ARGS) PG_RETURN_UINT32(thash); } +Datum +timetz_hash_extended(PG_FUNCTION_ARGS) +{ + TimeTzADT *key = PG_GETARG_TIMETZADT_P(0); + uint64 seed = PG_GETARG_DATUM(1); + uint64 thash; + + /* Same approach as timetz_hash */ + thash = DatumGetUInt64(DirectFunctionCall2(hashint8extended, + Int64GetDatumFast(key->time), + seed)); + thash ^= DatumGetUInt64(hash_uint32_extended(key->zone, seed)); + PG_RETURN_UINT64(thash); +} + Datum timetz_larger(PG_FUNCTION_ARGS) { diff --git a/src/backend/utils/adt/jsonb_op.c b/src/backend/utils/adt/jsonb_op.c index d4c490e948..c4a7dc3f13 100644 --- a/src/backend/utils/adt/jsonb_op.c +++ b/src/backend/utils/adt/jsonb_op.c @@ -291,3 +291,46 @@ jsonb_hash(PG_FUNCTION_ARGS) PG_FREE_IF_COPY(jb, 0); PG_RETURN_INT32(hash); } + +Datum +jsonb_hash_extended(PG_FUNCTION_ARGS) +{ + Jsonb *jb = PG_GETARG_JSONB(0); + uint64 seed = PG_GETARG_INT64(1); + JsonbIterator *it; + JsonbValue v; + JsonbIteratorToken r; + uint64 hash = 0; + + if (JB_ROOT_COUNT(jb) == 0) + PG_RETURN_UINT64(seed); + + it = JsonbIteratorInit(&jb->root); + + while ((r = JsonbIteratorNext(&it, &v, false)) != WJB_DONE) + { + switch (r) + { + /* Rotation is left to 
JsonbHashScalarValueExtended() */ + case WJB_BEGIN_ARRAY: + hash ^= ((UINT64CONST(JB_FARRAY) << 32) | UINT64CONST(JB_FARRAY)); + break; + case WJB_BEGIN_OBJECT: + hash ^= ((UINT64CONST(JB_FOBJECT) << 32) | UINT64CONST(JB_FOBJECT)); + break; + case WJB_KEY: + case WJB_VALUE: + case WJB_ELEM: + JsonbHashScalarValueExtended(&v, &hash, seed); + break; + case WJB_END_ARRAY: + case WJB_END_OBJECT: + break; + default: + elog(ERROR, "invalid JsonbIteratorNext rc: %d", (int) r); + } + } + + PG_FREE_IF_COPY(jb, 0); + PG_RETURN_UINT64(hash); +} diff --git a/src/backend/utils/adt/jsonb_util.c b/src/backend/utils/adt/jsonb_util.c index 4850569bb5..d425f32403 100644 --- a/src/backend/utils/adt/jsonb_util.c +++ b/src/backend/utils/adt/jsonb_util.c @@ -1249,6 +1249,49 @@ JsonbHashScalarValue(const JsonbValue *scalarVal, uint32 *hash) *hash ^= tmp; } +/* + * Hash a value to a 64-bit value, with a seed. Otherwise, similar to + * JsonbHashScalarValue. + */ +void +JsonbHashScalarValueExtended(const JsonbValue *scalarVal, uint64 *hash, + uint64 seed) +{ + uint64 tmp; + + switch (scalarVal->type) + { + case jbvNull: + tmp = seed + 0x01; + break; + case jbvString: + tmp = DatumGetUInt64(hash_any_extended((const unsigned char *) scalarVal->val.string.val, + scalarVal->val.string.len, + seed)); + break; + case jbvNumeric: + tmp = DatumGetUInt64(DirectFunctionCall2(hash_numeric_extended, + NumericGetDatum(scalarVal->val.numeric), + UInt64GetDatum(seed))); + break; + case jbvBool: + if (seed) + tmp = DatumGetUInt64(DirectFunctionCall2(hashcharextended, + BoolGetDatum(scalarVal->val.boolean), + UInt64GetDatum(seed))); + else + tmp = scalarVal->val.boolean ? 0x02 : 0x04; + + break; + default: + elog(ERROR, "invalid jsonb scalar type"); + break; + } + + *hash = ROTATE_HIGH_AND_LOW_32BITS(*hash); + *hash ^= tmp; +} + /* * Are two scalar JsonbValues of the same type a and b equal? */ diff --git a/src/backend/utils/adt/mac.c b/src/backend/utils/adt/mac.c index d1c20c3086..60521cc21f 100644 --- a/src/backend/utils/adt/mac.c +++ b/src/backend/utils/adt/mac.c @@ -271,6 +271,15 @@ hashmacaddr(PG_FUNCTION_ARGS) return hash_any((unsigned char *) key, sizeof(macaddr)); } +Datum +hashmacaddrextended(PG_FUNCTION_ARGS) +{ + macaddr *key = PG_GETARG_MACADDR_P(0); + + return hash_any_extended((unsigned char *) key, sizeof(macaddr), + PG_GETARG_INT64(1)); +} + /* * Arithmetic functions: bitwise NOT, AND, OR. */ diff --git a/src/backend/utils/adt/mac8.c b/src/backend/utils/adt/mac8.c index 482d1fb5bf..0410b9888a 100644 --- a/src/backend/utils/adt/mac8.c +++ b/src/backend/utils/adt/mac8.c @@ -407,6 +407,15 @@ hashmacaddr8(PG_FUNCTION_ARGS) return hash_any((unsigned char *) key, sizeof(macaddr8)); } +Datum +hashmacaddr8extended(PG_FUNCTION_ARGS) +{ + macaddr8 *key = PG_GETARG_MACADDR8_P(0); + + return hash_any_extended((unsigned char *) key, sizeof(macaddr8), + PG_GETARG_INT64(1)); +} + /* * Arithmetic functions: bitwise NOT, AND, OR. */ diff --git a/src/backend/utils/adt/network.c b/src/backend/utils/adt/network.c index 5573c34097..ec4ac20bb7 100644 --- a/src/backend/utils/adt/network.c +++ b/src/backend/utils/adt/network.c @@ -486,6 +486,16 @@ hashinet(PG_FUNCTION_ARGS) return hash_any((unsigned char *) VARDATA_ANY(addr), addrsize + 2); } +Datum +hashinetextended(PG_FUNCTION_ARGS) +{ + inet *addr = PG_GETARG_INET_PP(0); + int addrsize = ip_addrsize(addr); + + return hash_any_extended((unsigned char *) VARDATA_ANY(addr), addrsize + 2, + PG_GETARG_INT64(1)); +} + /* * Boolean network-inclusion tests. 
*/ diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index 3e5614ece3..22d5898927 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -2230,6 +2230,66 @@ hash_numeric(PG_FUNCTION_ARGS) PG_RETURN_DATUM(result); } +/* + * Returns 64-bit value by hashing a value to a 64-bit value, with a seed. + * Otherwise, similar to hash_numeric. + */ +Datum +hash_numeric_extended(PG_FUNCTION_ARGS) +{ + Numeric key = PG_GETARG_NUMERIC(0); + uint64 seed = PG_GETARG_INT64(1); + Datum digit_hash; + Datum result; + int weight; + int start_offset; + int end_offset; + int i; + int hash_len; + NumericDigit *digits; + + if (NUMERIC_IS_NAN(key)) + PG_RETURN_UINT64(seed); + + weight = NUMERIC_WEIGHT(key); + start_offset = 0; + end_offset = 0; + + digits = NUMERIC_DIGITS(key); + for (i = 0; i < NUMERIC_NDIGITS(key); i++) + { + if (digits[i] != (NumericDigit) 0) + break; + + start_offset++; + + weight--; + } + + if (NUMERIC_NDIGITS(key) == start_offset) + PG_RETURN_UINT64(seed - 1); + + for (i = NUMERIC_NDIGITS(key) - 1; i >= 0; i--) + { + if (digits[i] != (NumericDigit) 0) + break; + + end_offset++; + } + + Assert(start_offset + end_offset < NUMERIC_NDIGITS(key)); + + hash_len = NUMERIC_NDIGITS(key) - start_offset - end_offset; + digit_hash = hash_any_extended((unsigned char *) (NUMERIC_DIGITS(key) + + start_offset), + hash_len * sizeof(NumericDigit), + seed); + + result = digit_hash ^ weight; + + PG_RETURN_DATUM(result); +} + /* ---------------------------------------------------------------------- * diff --git a/src/backend/utils/adt/pg_lsn.c b/src/backend/utils/adt/pg_lsn.c index aefbb87680..7ad30a260a 100644 --- a/src/backend/utils/adt/pg_lsn.c +++ b/src/backend/utils/adt/pg_lsn.c @@ -179,6 +179,12 @@ pg_lsn_hash(PG_FUNCTION_ARGS) return hashint8(fcinfo); } +Datum +pg_lsn_hash_extended(PG_FUNCTION_ARGS) +{ + return hashint8extended(fcinfo); +} + /*---------------------------------------------------------- * Arithmetic operators on PostgreSQL LSNs. diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c index 09a4f14a17..d7ba271317 100644 --- a/src/backend/utils/adt/rangetypes.c +++ b/src/backend/utils/adt/rangetypes.c @@ -1280,6 +1280,69 @@ hash_range(PG_FUNCTION_ARGS) PG_RETURN_INT32(result); } +/* + * Returns 64-bit value by hashing a value to a 64-bit value, with a seed. + * Otherwise, similar to hash_range. 
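+ * The flag byte is hashed with the seed, then the lower-bound hash is
+ * XOR-ed in, both 32-bit halves are rotated, and the upper-bound hash
+ * is XOR-ed in.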
+ */ +Datum +hash_range_extended(PG_FUNCTION_ARGS) +{ + RangeType *r = PG_GETARG_RANGE(0); + uint64 seed = PG_GETARG_INT64(1); + uint64 result; + TypeCacheEntry *typcache; + TypeCacheEntry *scache; + RangeBound lower; + RangeBound upper; + bool empty; + char flags; + uint64 lower_hash; + uint64 upper_hash; + + check_stack_depth(); + + typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r)); + + range_deserialize(typcache, r, &lower, &upper, &empty); + flags = range_get_flags(r); + + scache = typcache->rngelemtype; + if (!OidIsValid(scache->hash_extended_proc_finfo.fn_oid)) + { + scache = lookup_type_cache(scache->type_id, + TYPECACHE_HASH_EXTENDED_PROC_FINFO); + if (!OidIsValid(scache->hash_extended_proc_finfo.fn_oid)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("could not identify a hash function for type %s", + format_type_be(scache->type_id)))); + } + + if (RANGE_HAS_LBOUND(flags)) + lower_hash = DatumGetUInt64(FunctionCall2Coll(&scache->hash_extended_proc_finfo, + typcache->rng_collation, + lower.val, + seed)); + else + lower_hash = 0; + + if (RANGE_HAS_UBOUND(flags)) + upper_hash = DatumGetUInt64(FunctionCall2Coll(&scache->hash_extended_proc_finfo, + typcache->rng_collation, + upper.val, + seed)); + else + upper_hash = 0; + + /* Merge hashes of flags and bounds */ + result = hash_uint32_extended((uint32) flags, seed); + result ^= lower_hash; + result = ROTATE_HIGH_AND_LOW_32BITS(result); + result ^= upper_hash; + + PG_RETURN_UINT64(result); +} + /* *---------------------------------------------------------- * CANONICAL FUNCTIONS diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c index 6fa126d295..b11d452fc8 100644 --- a/src/backend/utils/adt/timestamp.c +++ b/src/backend/utils/adt/timestamp.c @@ -2113,6 +2113,11 @@ timestamp_hash(PG_FUNCTION_ARGS) return hashint8(fcinfo); } +Datum +timestamp_hash_extended(PG_FUNCTION_ARGS) +{ + return hashint8extended(fcinfo); +} /* * Cross-type comparison functions for timestamp vs timestamptz @@ -2419,6 +2424,20 @@ interval_hash(PG_FUNCTION_ARGS) return DirectFunctionCall1(hashint8, Int64GetDatumFast(span64)); } +Datum +interval_hash_extended(PG_FUNCTION_ARGS) +{ + Interval *interval = PG_GETARG_INTERVAL_P(0); + INT128 span = interval_cmp_value(interval); + int64 span64; + + /* Same approach as interval_hash */ + span64 = int128_to_int64(span); + + return DirectFunctionCall2(hashint8extended, Int64GetDatumFast(span64), + PG_GETARG_DATUM(1)); +} + /* overlaps_timestamp() --- implements the SQL OVERLAPS operator. * * Algorithm is per SQL spec. 
This is much harder than you'd think diff --git a/src/backend/utils/adt/uuid.c b/src/backend/utils/adt/uuid.c index 5f15c8e619..f73c695878 100644 --- a/src/backend/utils/adt/uuid.c +++ b/src/backend/utils/adt/uuid.c @@ -408,3 +408,11 @@ uuid_hash(PG_FUNCTION_ARGS) return hash_any(key->data, UUID_LEN); } + +Datum +uuid_hash_extended(PG_FUNCTION_ARGS) +{ + pg_uuid_t *key = PG_GETARG_UUID_P(0); + + return hash_any_extended(key->data, UUID_LEN, PG_GETARG_INT64(1)); +} diff --git a/src/backend/utils/adt/varchar.c b/src/backend/utils/adt/varchar.c index cbc62b00be..2df6f2ccb0 100644 --- a/src/backend/utils/adt/varchar.c +++ b/src/backend/utils/adt/varchar.c @@ -947,6 +947,24 @@ hashbpchar(PG_FUNCTION_ARGS) return result; } +Datum +hashbpcharextended(PG_FUNCTION_ARGS) +{ + BpChar *key = PG_GETARG_BPCHAR_PP(0); + char *keydata; + int keylen; + Datum result; + + keydata = VARDATA_ANY(key); + keylen = bcTruelen(key); + + result = hash_any_extended((unsigned char *) keydata, keylen, + PG_GETARG_INT64(1)); + + PG_FREE_IF_COPY(key, 0); + + return result; +} /* * The following operators support character-by-character comparison diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c index 82763f8013..b7a14dc87e 100644 --- a/src/backend/utils/cache/lsyscache.c +++ b/src/backend/utils/cache/lsyscache.c @@ -490,8 +490,8 @@ get_compatible_hash_operators(Oid opno, /* * get_op_hash_functions - * Get the OID(s) of hash support function(s) compatible with the given - * operator, operating on its LHS and/or RHS datatype as required. + * Get the OID(s) of the standard hash support function(s) compatible with + * the given operator, operating on its LHS and/or RHS datatype as required. * * A function for the LHS type is sought and returned into *lhs_procno if * lhs_procno isn't NULL. Similarly, a function for the RHS type is sought @@ -542,7 +542,7 @@ get_op_hash_functions(Oid opno, *lhs_procno = get_opfamily_proc(aform->amopfamily, aform->amoplefttype, aform->amoplefttype, - HASHPROC); + HASHSTANDARD_PROC); if (!OidIsValid(*lhs_procno)) continue; /* Matching LHS found, done if caller doesn't want RHS */ @@ -564,7 +564,7 @@ get_op_hash_functions(Oid opno, *rhs_procno = get_opfamily_proc(aform->amopfamily, aform->amoprighttype, aform->amoprighttype, - HASHPROC); + HASHSTANDARD_PROC); if (!OidIsValid(*rhs_procno)) { /* Forget any LHS function from this opfamily */ diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index 691d4987b1..2e633f08c5 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -90,6 +90,7 @@ static TypeCacheEntry *firstDomainTypeEntry = NULL; #define TCFLAGS_HAVE_FIELD_EQUALITY 0x1000 #define TCFLAGS_HAVE_FIELD_COMPARE 0x2000 #define TCFLAGS_CHECKED_DOMAIN_CONSTRAINTS 0x4000 +#define TCFLAGS_CHECKED_HASH_EXTENDED_PROC 0x8000 /* * Data stored about a domain type's constraints. Note that we do not create @@ -307,6 +308,8 @@ lookup_type_cache(Oid type_id, int flags) flags |= TYPECACHE_HASH_OPFAMILY; if ((flags & (TYPECACHE_HASH_PROC | TYPECACHE_HASH_PROC_FINFO | + TYPECACHE_HASH_EXTENDED_PROC | + TYPECACHE_HASH_EXTENDED_PROC_FINFO | TYPECACHE_HASH_OPFAMILY)) && !(typentry->flags & TCFLAGS_CHECKED_HASH_OPCLASS)) { @@ -329,6 +332,7 @@ lookup_type_cache(Oid type_id, int flags) * decision is still good. 
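+		 * (The extended hash proc is chosen from the same opclass, so its
+		 * cached state must be invalidated here as well.)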
*/ typentry->flags &= ~(TCFLAGS_CHECKED_HASH_PROC); + typentry->flags &= ~(TCFLAGS_CHECKED_HASH_EXTENDED_PROC); typentry->flags |= TCFLAGS_CHECKED_HASH_OPCLASS; } @@ -372,11 +376,12 @@ lookup_type_cache(Oid type_id, int flags) typentry->eq_opr = eq_opr; /* - * Reset info about hash function whenever we pick up new info about - * equality operator. This is so we can ensure that the hash function - * matches the operator. + * Reset info about hash functions whenever we pick up new info about + * equality operator. This is so we can ensure that the hash functions + * match the operator. */ typentry->flags &= ~(TCFLAGS_CHECKED_HASH_PROC); + typentry->flags &= ~(TCFLAGS_CHECKED_HASH_EXTENDED_PROC); typentry->flags |= TCFLAGS_CHECKED_EQ_OPR; } if ((flags & TYPECACHE_LT_OPR) && @@ -467,7 +472,7 @@ lookup_type_cache(Oid type_id, int flags) hash_proc = get_opfamily_proc(typentry->hash_opf, typentry->hash_opintype, typentry->hash_opintype, - HASHPROC); + HASHSTANDARD_PROC); /* * As above, make sure hash_array will succeed. We don't currently @@ -485,6 +490,43 @@ lookup_type_cache(Oid type_id, int flags) typentry->hash_proc = hash_proc; typentry->flags |= TCFLAGS_CHECKED_HASH_PROC; } + if ((flags & (TYPECACHE_HASH_EXTENDED_PROC | + TYPECACHE_HASH_EXTENDED_PROC_FINFO)) && + !(typentry->flags & TCFLAGS_CHECKED_HASH_EXTENDED_PROC)) + { + Oid hash_extended_proc = InvalidOid; + + /* + * We insist that the eq_opr, if one has been determined, match the + * hash opclass; else report there is no hash function. + */ + if (typentry->hash_opf != InvalidOid && + (!OidIsValid(typentry->eq_opr) || + typentry->eq_opr == get_opfamily_member(typentry->hash_opf, + typentry->hash_opintype, + typentry->hash_opintype, + HTEqualStrategyNumber))) + hash_extended_proc = get_opfamily_proc(typentry->hash_opf, + typentry->hash_opintype, + typentry->hash_opintype, + HASHEXTENDED_PROC); + + /* + * As above, make sure hash_array_extended will succeed. We don't + * currently support hashing for composite types, but when we do, + * we'll need more logic here to check that case too. + */ + if (hash_extended_proc == F_HASH_ARRAY_EXTENDED && + !array_element_has_hashing(typentry)) + hash_extended_proc = InvalidOid; + + /* Force update of hash_proc_finfo only if we're changing state */ + if (typentry->hash_extended_proc != hash_extended_proc) + typentry->hash_extended_proc_finfo.fn_oid = InvalidOid; + + typentry->hash_extended_proc = hash_extended_proc; + typentry->flags |= TCFLAGS_CHECKED_HASH_EXTENDED_PROC; + } /* * Set up fmgr lookup info as requested @@ -523,6 +565,14 @@ lookup_type_cache(Oid type_id, int flags) fmgr_info_cxt(typentry->hash_proc, &typentry->hash_proc_finfo, CacheMemoryContext); } + if ((flags & TYPECACHE_HASH_EXTENDED_PROC_FINFO) && + typentry->hash_extended_proc_finfo.fn_oid == InvalidOid && + typentry->hash_extended_proc != InvalidOid) + { + fmgr_info_cxt(typentry->hash_extended_proc, + &typentry->hash_extended_proc_finfo, + CacheMemoryContext); + } /* * If it's a composite type (row type), get tupdesc if requested diff --git a/src/include/access/hash.h b/src/include/access/hash.h index 72fce3038c..c06dcb214f 100644 --- a/src/include/access/hash.h +++ b/src/include/access/hash.h @@ -38,6 +38,17 @@ typedef uint32 Bucket; #define BUCKET_TO_BLKNO(metap,B) \ ((BlockNumber) ((B) + ((B) ? (metap)->hashm_spares[_hash_spareindex((B)+1)-1] : 0)) + 1) +/* + * Rotate the high 32 bits and the low 32 bits separately. The standard + * hash function sometimes rotates the low 32 bits by one bit when + * combining elements. 
We want extended hash functions to be compatible with
+ * that algorithm when the seed is 0, so we can't just do a normal rotation.
+ * This works, though.
+ */
+#define ROTATE_HIGH_AND_LOW_32BITS(v) \
+	((((v) << 1) & UINT64CONST(0xfffffffefffffffe)) | \
+	 (((v) >> 31) & UINT64CONST(0x100000001)))
+
 /*
  * Special space for hash index pages.
  *
@@ -289,12 +300,20 @@ typedef HashMetaPageData *HashMetaPage;
 #define HTMaxStrategyNumber 1

 /*
- * When a new operator class is declared, we require that the user supply
- * us with an amproc procudure for hashing a key of the new type.
- * Since we only have one such proc in amproc, it's number 1.
+ * When a new operator class is declared, we require that the user supply
+ * us with an amproc procedure for hashing a key of the new type, returning
+ * a 32-bit hash value.  We call this the "standard" hash procedure.  We
+ * also allow an optional "extended" hash procedure which accepts a salt and
+ * returns a 64-bit hash value.  This is highly recommended but, for reasons
+ * of backward compatibility, optional.
+ *
+ * When the salt is 0, the low 32 bits of the value returned by the extended
+ * hash procedure should match the value that would have been returned by the
+ * standard hash procedure.
  */
-#define HASHPROC		1
-#define HASHNProcs		1
+#define HASHSTANDARD_PROC		1
+#define HASHEXTENDED_PROC		2
+#define HASHNProcs				2

 /* public routines */

@@ -322,7 +341,10 @@ extern bytea *hashoptions(Datum reloptions, bool validate);
 extern bool hashvalidate(Oid opclassoid);

 extern Datum hash_any(register const unsigned char *k, register int keylen);
+extern Datum hash_any_extended(register const unsigned char *k,
+				   register int keylen, uint64 seed);
 extern Datum hash_uint32(uint32 k);
+extern Datum hash_uint32_extended(uint32 k, uint64 seed);

 /* private routines */
diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index 0dafd6bf2a..6525da970d 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
  */

 /*							yyyymmddN */
-#define CATALOG_VERSION_NO	201707211
+#define CATALOG_VERSION_NO	201708311

 #endif
diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h
index 7d245b1271..fb6a829c90 100644
--- a/src/include/catalog/pg_amproc.h
+++ b/src/include/catalog/pg_amproc.h
@@ -153,41 +153,77 @@ DATA(insert ( 4033 3802 3802 1 4044 ));

 /* hash */
 DATA(insert ( 427 1042 1042 1 1080 ));
+DATA(insert ( 427 1042 1042 2 972 ));
 DATA(insert ( 431 18 18 1 454 ));
+DATA(insert ( 431 18 18 2 446 ));
 DATA(insert ( 435 1082 1082 1 450 ));
+DATA(insert ( 435 1082 1082 2 425 ));
 DATA(insert ( 627 2277 2277 1 626 ));
+DATA(insert ( 627 2277 2277 2 782 ));
 DATA(insert ( 1971 700 700 1 451 ));
+DATA(insert ( 1971 700 700 2 443 ));
 DATA(insert ( 1971 701 701 1 452 ));
+DATA(insert ( 1971 701 701 2 444 ));
 DATA(insert ( 1975 869 869 1 422 ));
+DATA(insert ( 1975 869 869 2 779 ));
 DATA(insert ( 1977 21 21 1 449 ));
+DATA(insert ( 1977 21 21 2 441 ));
 DATA(insert ( 1977 23 23 1 450 ));
+DATA(insert ( 1977 23 23 2 425 ));
 DATA(insert ( 1977 20 20 1 949 ));
+DATA(insert ( 1977 20 20 2 442 ));
 DATA(insert ( 1983 1186 1186 1 1697 ));
+DATA(insert ( 1983 1186 1186 2 3418 ));
 DATA(insert ( 1985 829 829 1 399 ));
+DATA(insert ( 1985 829 829 2 778 ));
 DATA(insert ( 1987 19 19 1 455 ));
+DATA(insert ( 1987 19 19 2 447 ));
 DATA(insert ( 1990 26 26 1 453 ));
+DATA(insert ( 1990 26 26 2 445 ));
 DATA(insert ( 1992 30 30 1 457 ));
+DATA(insert ( 1992 30 30 2 776 ));
 DATA(insert ( 1995 25 25 1 400 ));
+DATA(insert ( 1995 25 25 2 448)); DATA(insert ( 1997 1083 1083 1 1688 )); +DATA(insert ( 1997 1083 1083 2 3409 )); DATA(insert ( 1998 1700 1700 1 432 )); +DATA(insert ( 1998 1700 1700 2 780 )); DATA(insert ( 1999 1184 1184 1 2039 )); +DATA(insert ( 1999 1184 1184 2 3411 )); DATA(insert ( 2001 1266 1266 1 1696 )); +DATA(insert ( 2001 1266 1266 2 3410 )); DATA(insert ( 2040 1114 1114 1 2039 )); +DATA(insert ( 2040 1114 1114 2 3411 )); DATA(insert ( 2222 16 16 1 454 )); +DATA(insert ( 2222 16 16 2 446 )); DATA(insert ( 2223 17 17 1 456 )); +DATA(insert ( 2223 17 17 2 772 )); DATA(insert ( 2225 28 28 1 450 )); +DATA(insert ( 2225 28 28 2 425)); DATA(insert ( 2226 29 29 1 450 )); +DATA(insert ( 2226 29 29 2 425 )); DATA(insert ( 2227 702 702 1 450 )); +DATA(insert ( 2227 702 702 2 425 )); DATA(insert ( 2228 703 703 1 450 )); +DATA(insert ( 2228 703 703 2 425 )); DATA(insert ( 2229 25 25 1 400 )); +DATA(insert ( 2229 25 25 2 448 )); DATA(insert ( 2231 1042 1042 1 1080 )); +DATA(insert ( 2231 1042 1042 2 972 )); DATA(insert ( 2235 1033 1033 1 329 )); +DATA(insert ( 2235 1033 1033 2 777 )); DATA(insert ( 2969 2950 2950 1 2963 )); +DATA(insert ( 2969 2950 2950 2 3412 )); DATA(insert ( 3254 3220 3220 1 3252 )); +DATA(insert ( 3254 3220 3220 2 3413 )); DATA(insert ( 3372 774 774 1 328 )); +DATA(insert ( 3372 774 774 2 781 )); DATA(insert ( 3523 3500 3500 1 3515 )); +DATA(insert ( 3523 3500 3500 2 3414 )); DATA(insert ( 3903 3831 3831 1 3902 )); +DATA(insert ( 3903 3831 3831 2 3417 )); DATA(insert ( 4034 3802 3802 1 4045 )); +DATA(insert ( 4034 3802 3802 2 3416)); /* gist */ diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index 8b33b4e0ea..d820b56aa1 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -668,36 +668,68 @@ DESCR("convert char(n) to name"); DATA(insert OID = 449 ( hashint2 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "21" _null_ _null_ _null_ _null_ _null_ hashint2 _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 441 ( hashint2extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "21 20" _null_ _null_ _null_ _null_ _null_ hashint2extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 450 ( hashint4 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "23" _null_ _null_ _null_ _null_ _null_ hashint4 _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 425 ( hashint4extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "23 20" _null_ _null_ _null_ _null_ _null_ hashint4extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 949 ( hashint8 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "20" _null_ _null_ _null_ _null_ _null_ hashint8 _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 442 ( hashint8extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "20 20" _null_ _null_ _null_ _null_ _null_ hashint8extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 451 ( hashfloat4 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "700" _null_ _null_ _null_ _null_ _null_ hashfloat4 _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 443 ( hashfloat4extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "700 20" _null_ _null_ _null_ _null_ _null_ hashfloat4extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 452 ( hashfloat8 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "701" _null_ _null_ _null_ _null_ _null_ hashfloat8 _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 444 ( hashfloat8extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "701 20" _null_ _null_ 
_null_ _null_ _null_ hashfloat8extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 453 ( hashoid PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "26" _null_ _null_ _null_ _null_ _null_ hashoid _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 445 ( hashoidextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "26 20" _null_ _null_ _null_ _null_ _null_ hashoidextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 454 ( hashchar PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "18" _null_ _null_ _null_ _null_ _null_ hashchar _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 446 ( hashcharextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "18 20" _null_ _null_ _null_ _null_ _null_ hashcharextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 455 ( hashname PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "19" _null_ _null_ _null_ _null_ _null_ hashname _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 447 ( hashnameextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "19 20" _null_ _null_ _null_ _null_ _null_ hashnameextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 400 ( hashtext PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "25" _null_ _null_ _null_ _null_ _null_ hashtext _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 448 ( hashtextextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "25 20" _null_ _null_ _null_ _null_ _null_ hashtextextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 456 ( hashvarlena PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "2281" _null_ _null_ _null_ _null_ _null_ hashvarlena _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 772 ( hashvarlenaextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "2281 20" _null_ _null_ _null_ _null_ _null_ hashvarlenaextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 457 ( hashoidvector PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "30" _null_ _null_ _null_ _null_ _null_ hashoidvector _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 776 ( hashoidvectorextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "30 20" _null_ _null_ _null_ _null_ _null_ hashoidvectorextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 329 ( hash_aclitem PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "1033" _null_ _null_ _null_ _null_ _null_ hash_aclitem _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 777 ( hash_aclitem_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "1033 20" _null_ _null_ _null_ _null_ _null_ hash_aclitem_extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 399 ( hashmacaddr PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "829" _null_ _null_ _null_ _null_ _null_ hashmacaddr _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 778 ( hashmacaddrextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "829 20" _null_ _null_ _null_ _null_ _null_ hashmacaddrextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 422 ( hashinet PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "869" _null_ _null_ _null_ _null_ _null_ hashinet _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 779 ( hashinetextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "869 20" _null_ _null_ _null_ _null_ _null_ hashinetextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 432 ( hash_numeric PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "1700" _null_ _null_ _null_ _null_ _null_ hash_numeric _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert 
OID = 780 ( hash_numeric_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "1700 20" _null_ _null_ _null_ _null_ _null_ hash_numeric_extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 328 ( hashmacaddr8 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "774" _null_ _null_ _null_ _null_ _null_ hashmacaddr8 _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 781 ( hashmacaddr8extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "774 20" _null_ _null_ _null_ _null_ _null_ hashmacaddr8extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 438 ( num_nulls PGNSP PGUID 12 1 0 2276 0 f f f f f f i s 1 0 23 "2276" "{2276}" "{v}" _null_ _null_ _null_ pg_num_nulls _null_ _null_ _null_ )); DESCR("count the number of NULL arguments"); @@ -747,6 +779,8 @@ DESCR("convert float8 to int8"); DATA(insert OID = 626 ( hash_array PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "2277" _null_ _null_ _null_ _null_ _null_ hash_array _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 782 ( hash_array_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "2277 20" _null_ _null_ _null_ _null_ _null_ hash_array_extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 652 ( float4 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 700 "20" _null_ _null_ _null_ _null_ _null_ i8tof _null_ _null_ _null_ )); DESCR("convert int8 to float4"); @@ -1155,6 +1189,8 @@ DATA(insert OID = 3328 ( bpchar_sortsupport PGNSP PGUID 12 1 0 0 0 f f f f t f i DESCR("sort support"); DATA(insert OID = 1080 ( hashbpchar PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "1042" _null_ _null_ _null_ _null_ _null_ hashbpchar _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 972 ( hashbpcharextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "1042 20" _null_ _null_ _null_ _null_ _null_ hashbpcharextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 1081 ( format_type PGNSP PGUID 12 1 0 0 0 f f f f f f s s 2 0 25 "26 23" _null_ _null_ _null_ _null_ _null_ format_type _null_ _null_ _null_ )); DESCR("format a type oid and atttypmod to canonical SQL"); DATA(insert OID = 1084 ( date_in PGNSP PGUID 12 1 0 0 0 f f f f t f s s 1 0 1082 "2275" _null_ _null_ _null_ _null_ _null_ date_in _null_ _null_ _null_ )); @@ -2286,10 +2322,16 @@ DESCR("less-equal-greater"); DATA(insert OID = 1688 ( time_hash PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "1083" _null_ _null_ _null_ _null_ _null_ time_hash _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 3409 ( time_hash_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "1083 20" _null_ _null_ _null_ _null_ _null_ time_hash_extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 1696 ( timetz_hash PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "1266" _null_ _null_ _null_ _null_ _null_ timetz_hash _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 3410 ( timetz_hash_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "1266 20" _null_ _null_ _null_ _null_ _null_ timetz_hash_extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 1697 ( interval_hash PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "1186" _null_ _null_ _null_ _null_ _null_ interval_hash _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 3418 ( interval_hash_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "1186 20" _null_ _null_ _null_ _null_ _null_ interval_hash_extended _null_ _null_ _null_ )); +DESCR("hash"); /* OID's 1700 - 1799 NUMERIC data type */ @@ -3078,6 +3120,8 @@ DATA(insert OID = 2038 ( timezone PGNSP PGUID 12 1 0 0 
0 f f f f t f i s 2 0 DESCR("adjust time with time zone to new zone"); DATA(insert OID = 2039 ( timestamp_hash PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "1114" _null_ _null_ _null_ _null_ _null_ timestamp_hash _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 3411 ( timestamp_hash_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "1114 20" _null_ _null_ _null_ _null_ _null_ timestamp_hash_extended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 2041 ( overlaps PGNSP PGUID 12 1 0 0 0 f f f f f f i s 4 0 16 "1114 1114 1114 1114" _null_ _null_ _null_ _null_ _null_ overlaps_timestamp _null_ _null_ _null_ )); DESCR("intervals overlap?"); DATA(insert OID = 2042 ( overlaps PGNSP PGUID 14 1 0 0 0 f f f f f f i s 4 0 16 "1114 1186 1114 1186" _null_ _null_ _null_ _null_ _null_ "select ($1, ($1 + $2)) overlaps ($3, ($3 + $4))" _null_ _null_ _null_ )); @@ -4543,6 +4587,8 @@ DATA(insert OID = 2962 ( uuid_send PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 DESCR("I/O"); DATA(insert OID = 2963 ( uuid_hash PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "2950" _null_ _null_ _null_ _null_ _null_ uuid_hash _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 3412 ( uuid_hash_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "2950 20" _null_ _null_ _null_ _null_ _null_ uuid_hash_extended _null_ _null_ _null_ )); +DESCR("hash"); /* pg_lsn */ DATA(insert OID = 3229 ( pg_lsn_in PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 3220 "2275" _null_ _null_ _null_ _null_ _null_ pg_lsn_in _null_ _null_ _null_ )); @@ -4564,6 +4610,8 @@ DATA(insert OID = 3251 ( pg_lsn_cmp PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 DESCR("less-equal-greater"); DATA(insert OID = 3252 ( pg_lsn_hash PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "3220" _null_ _null_ _null_ _null_ _null_ pg_lsn_hash _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 3413 ( pg_lsn_hash_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "3220 20" _null_ _null_ _null_ _null_ _null_ pg_lsn_hash_extended _null_ _null_ _null_ )); +DESCR("hash"); /* enum related procs */ DATA(insert OID = 3504 ( anyenum_in PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 3500 "2275" _null_ _null_ _null_ _null_ _null_ anyenum_in _null_ _null_ _null_ )); @@ -4584,6 +4632,8 @@ DATA(insert OID = 3514 ( enum_cmp PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 2 DESCR("less-equal-greater"); DATA(insert OID = 3515 ( hashenum PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "3500" _null_ _null_ _null_ _null_ _null_ hashenum _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 3414 ( hashenumextended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "3500 20" _null_ _null_ _null_ _null_ _null_ hashenumextended _null_ _null_ _null_ )); +DESCR("hash"); DATA(insert OID = 3524 ( enum_smaller PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 3500 "3500 3500" _null_ _null_ _null_ _null_ _null_ enum_smaller _null_ _null_ _null_ )); DESCR("smaller of two"); DATA(insert OID = 3525 ( enum_larger PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 3500 "3500 3500" _null_ _null_ _null_ _null_ _null_ enum_larger _null_ _null_ _null_ )); @@ -4981,6 +5031,8 @@ DATA(insert OID = 4044 ( jsonb_cmp PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 DESCR("less-equal-greater"); DATA(insert OID = 4045 ( jsonb_hash PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "3802" _null_ _null_ _null_ _null_ _null_ jsonb_hash _null_ _null_ _null_ )); DESCR("hash"); +DATA(insert OID = 3416 ( jsonb_hash_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "3802 20" _null_ _null_ _null_ _null_ _null_ jsonb_hash_extended _null_ 
_null_ _null_ )); +DESCR("hash"); DATA(insert OID = 4046 ( jsonb_contains PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "3802 3802" _null_ _null_ _null_ _null_ _null_ jsonb_contains _null_ _null_ _null_ )); DATA(insert OID = 4047 ( jsonb_exists PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "3802 25" _null_ _null_ _null_ _null_ _null_ jsonb_exists _null_ _null_ _null_ )); DATA(insert OID = 4048 ( jsonb_exists_any PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "3802 1009" _null_ _null_ _null_ _null_ _null_ jsonb_exists_any _null_ _null_ _null_ )); @@ -5171,6 +5223,8 @@ DATA(insert OID = 3881 ( range_gist_same PGNSP PGUID 12 1 0 0 0 f f f f t f i DESCR("GiST support"); DATA(insert OID = 3902 ( hash_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 23 "3831" _null_ _null_ _null_ _null_ _null_ hash_range _null_ _null_ _null_ )); DESCR("hash a range"); +DATA(insert OID = 3417 ( hash_range_extended PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 20 "3831 20" _null_ _null_ _null_ _null_ _null_ hash_range_extended _null_ _null_ _null_ )); +DESCR("hash a range"); DATA(insert OID = 3916 ( range_typanalyze PGNSP PGUID 12 1 0 0 0 f f f f t f s s 1 0 16 "2281" _null_ _null_ _null_ _null_ _null_ range_typanalyze _null_ _null_ _null_ )); DESCR("range typanalyze"); DATA(insert OID = 3169 ( rangesel PGNSP PGUID 12 1 0 0 0 f f f f t f s s 4 0 701 "2281 26 2281 23" _null_ _null_ _null_ _null_ _null_ rangesel _null_ _null_ _null_ )); diff --git a/src/include/fmgr.h b/src/include/fmgr.h index 0216965bfc..b604a5c162 100644 --- a/src/include/fmgr.h +++ b/src/include/fmgr.h @@ -325,6 +325,7 @@ extern struct varlena *pg_detoast_datum_packed(struct varlena *datum); #define PG_RETURN_FLOAT4(x) return Float4GetDatum(x) #define PG_RETURN_FLOAT8(x) return Float8GetDatum(x) #define PG_RETURN_INT64(x) return Int64GetDatum(x) +#define PG_RETURN_UINT64(x) return UInt64GetDatum(x) /* RETURN macros for other pass-by-ref types will typically look like this: */ #define PG_RETURN_BYTEA_P(x) PG_RETURN_POINTER(x) #define PG_RETURN_TEXT_P(x) PG_RETURN_POINTER(x) diff --git a/src/include/utils/jsonb.h b/src/include/utils/jsonb.h index ea9dd17540..24f491663b 100644 --- a/src/include/utils/jsonb.h +++ b/src/include/utils/jsonb.h @@ -370,6 +370,8 @@ extern Jsonb *JsonbValueToJsonb(JsonbValue *val); extern bool JsonbDeepContains(JsonbIterator **val, JsonbIterator **mContained); extern void JsonbHashScalarValue(const JsonbValue *scalarVal, uint32 *hash); +extern void JsonbHashScalarValueExtended(const JsonbValue *scalarVal, + uint64 *hash, uint64 seed); /* jsonb.c support functions */ extern char *JsonbToCString(StringInfo out, JsonbContainer *in, diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h index c12631dafe..b4f7592162 100644 --- a/src/include/utils/typcache.h +++ b/src/include/utils/typcache.h @@ -56,6 +56,7 @@ typedef struct TypeCacheEntry Oid gt_opr; /* the greater-than operator */ Oid cmp_proc; /* the btree comparison function */ Oid hash_proc; /* the hash calculation function */ + Oid hash_extended_proc; /* the extended hash calculation function */ /* * Pre-set-up fmgr call info for the equality operator, the btree @@ -67,6 +68,7 @@ typedef struct TypeCacheEntry FmgrInfo eq_opr_finfo; FmgrInfo cmp_proc_finfo; FmgrInfo hash_proc_finfo; + FmgrInfo hash_extended_proc_finfo; /* * Tuple descriptor if it's a composite type (row type). 
NULL if not @@ -120,6 +122,8 @@ typedef struct TypeCacheEntry #define TYPECACHE_HASH_OPFAMILY 0x0400 #define TYPECACHE_RANGE_INFO 0x0800 #define TYPECACHE_DOMAIN_INFO 0x1000 +#define TYPECACHE_HASH_EXTENDED_PROC 0x2000 +#define TYPECACHE_HASH_EXTENDED_PROC_FINFO 0x4000 /* * Callers wishing to maintain a long-lived reference to a domain's constraint diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out index 9f6ad4de33..767c09bec5 100644 --- a/src/test/regress/expected/alter_generic.out +++ b/src/test/regress/expected/alter_generic.out @@ -421,7 +421,7 @@ BEGIN TRANSACTION; CREATE OPERATOR FAMILY alt_opf13 USING hash; CREATE FUNCTION fn_opf13 (int4) RETURNS BIGINT AS 'SELECT NULL::BIGINT;' LANGUAGE SQL; ALTER OPERATOR FAMILY alt_opf13 USING hash ADD FUNCTION 1 fn_opf13(int4); -ERROR: hash procedures must return integer +ERROR: hash procedure 1 must return integer DROP OPERATOR FAMILY alt_opf13 USING hash; ERROR: current transaction is aborted, commands ignored until end of transaction block ROLLBACK; @@ -439,7 +439,7 @@ BEGIN TRANSACTION; CREATE OPERATOR FAMILY alt_opf15 USING hash; CREATE FUNCTION fn_opf15 (int4, int2) RETURNS BIGINT AS 'SELECT NULL::BIGINT;' LANGUAGE SQL; ALTER OPERATOR FAMILY alt_opf15 USING hash ADD FUNCTION 1 fn_opf15(int4, int2); -ERROR: hash procedures must have one argument +ERROR: hash procedure 1 must have one argument DROP OPERATOR FAMILY alt_opf15 USING hash; ERROR: current transaction is aborted, commands ignored until end of transaction block ROLLBACK; diff --git a/src/test/regress/expected/hash_func.out b/src/test/regress/expected/hash_func.out new file mode 100644 index 0000000000..da0948e95a --- /dev/null +++ b/src/test/regress/expected/hash_func.out @@ -0,0 +1,300 @@ +-- +-- Test hash functions +-- +-- When the salt is 0, the extended hash function should produce a result +-- whose low 32 bits match the standard hash function. When the salt is +-- not 0, we should get a different result. 
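+-- (An illustrative spot check, not part of the committed test: for any one
+-- value, the first condition reduces to an equality that should hold, e.g.
+--   SELECT hashint4(42)::bit(32) = hashint4extended(42, 0)::bit(32);
+-- is expected to return true.)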
+-- +SELECT v as value, hashint2(v)::bit(32) as standard, + hashint2extended(v, 0)::bit(32) as extended0, + hashint2extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0::int2), (1::int2), (17::int2), (42::int2)) x(v) +WHERE hashint2(v)::bit(32) != hashint2extended(v, 0)::bit(32) + OR hashint2(v)::bit(32) = hashint2extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashint4(v)::bit(32) as standard, + hashint4extended(v, 0)::bit(32) as extended0, + hashint4extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashint4(v)::bit(32) != hashint4extended(v, 0)::bit(32) + OR hashint4(v)::bit(32) = hashint4extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashint8(v)::bit(32) as standard, + hashint8extended(v, 0)::bit(32) as extended0, + hashint8extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashint8(v)::bit(32) != hashint8extended(v, 0)::bit(32) + OR hashint8(v)::bit(32) = hashint8extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashfloat4(v)::bit(32) as standard, + hashfloat4extended(v, 0)::bit(32) as extended0, + hashfloat4extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashfloat4(v)::bit(32) != hashfloat4extended(v, 0)::bit(32) + OR hashfloat4(v)::bit(32) = hashfloat4extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashfloat8(v)::bit(32) as standard, + hashfloat8extended(v, 0)::bit(32) as extended0, + hashfloat8extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashfloat8(v)::bit(32) != hashfloat8extended(v, 0)::bit(32) + OR hashfloat8(v)::bit(32) = hashfloat8extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashoid(v)::bit(32) as standard, + hashoidextended(v, 0)::bit(32) as extended0, + hashoidextended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashoid(v)::bit(32) != hashoidextended(v, 0)::bit(32) + OR hashoid(v)::bit(32) = hashoidextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashchar(v)::bit(32) as standard, + hashcharextended(v, 0)::bit(32) as extended0, + hashcharextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::"char"), ('1'), ('x'), ('X'), ('p'), ('N')) x(v) +WHERE hashchar(v)::bit(32) != hashcharextended(v, 0)::bit(32) + OR hashchar(v)::bit(32) = hashcharextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashname(v)::bit(32) as standard, + hashnameextended(v, 0)::bit(32) as extended0, + hashnameextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL), ('PostgreSQL'), ('eIpUEtqmY89'), ('AXKEJBTK'), + ('muop28x03'), ('yi3nm0d73')) x(v) +WHERE hashname(v)::bit(32) != hashnameextended(v, 0)::bit(32) + OR hashname(v)::bit(32) = hashnameextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 
rows) + +SELECT v as value, hashtext(v)::bit(32) as standard, + hashtextextended(v, 0)::bit(32) as extended0, + hashtextextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL), ('PostgreSQL'), ('eIpUEtqmY89'), ('AXKEJBTK'), + ('muop28x03'), ('yi3nm0d73')) x(v) +WHERE hashtext(v)::bit(32) != hashtextextended(v, 0)::bit(32) + OR hashtext(v)::bit(32) = hashtextextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashoidvector(v)::bit(32) as standard, + hashoidvectorextended(v, 0)::bit(32) as extended0, + hashoidvectorextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::oidvector), ('0 1 2 3 4'), ('17 18 19 20'), + ('42 43 42 45'), ('550273 550273 570274'), + ('207112489 207112499 21512 2155 372325 1363252')) x(v) +WHERE hashoidvector(v)::bit(32) != hashoidvectorextended(v, 0)::bit(32) + OR hashoidvector(v)::bit(32) = hashoidvectorextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hash_aclitem(v)::bit(32) as standard, + hash_aclitem_extended(v, 0)::bit(32) as extended0, + hash_aclitem_extended(v, 1)::bit(32) as extended1 +FROM (SELECT DISTINCT(relacl[1]) FROM pg_class LIMIT 10) x(v) +WHERE hash_aclitem(v)::bit(32) != hash_aclitem_extended(v, 0)::bit(32) + OR hash_aclitem(v)::bit(32) = hash_aclitem_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashmacaddr(v)::bit(32) as standard, + hashmacaddrextended(v, 0)::bit(32) as extended0, + hashmacaddrextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::macaddr), ('08:00:2b:01:02:04'), ('08:00:2b:01:02:04'), + ('e2:7f:51:3e:70:49'), ('d6:a9:4a:78:1c:d5'), + ('ea:29:b1:5e:1f:a5')) x(v) +WHERE hashmacaddr(v)::bit(32) != hashmacaddrextended(v, 0)::bit(32) + OR hashmacaddr(v)::bit(32) = hashmacaddrextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashinet(v)::bit(32) as standard, + hashinetextended(v, 0)::bit(32) as extended0, + hashinetextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::inet), ('192.168.100.128/25'), ('192.168.100.0/8'), + ('172.168.10.126/16'), ('172.18.103.126/24'), ('192.188.13.16/32')) x(v) +WHERE hashinet(v)::bit(32) != hashinetextended(v, 0)::bit(32) + OR hashinet(v)::bit(32) = hashinetextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hash_numeric(v)::bit(32) as standard, + hash_numeric_extended(v, 0)::bit(32) as extended0, + hash_numeric_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1.149484958), (17.149484958), (42.149484958), + (149484958.550273), (2071124898672)) x(v) +WHERE hash_numeric(v)::bit(32) != hash_numeric_extended(v, 0)::bit(32) + OR hash_numeric(v)::bit(32) = hash_numeric_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashmacaddr8(v)::bit(32) as standard, + hashmacaddr8extended(v, 0)::bit(32) as extended0, + hashmacaddr8extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::macaddr8), ('08:00:2b:01:02:04:36:49'), + ('08:00:2b:01:02:04:f0:e8'), ('e2:7f:51:3e:70:49:16:29'), + ('d6:a9:4a:78:1c:d5:47:32'), ('ea:29:b1:5e:1f:a5')) x(v) +WHERE hashmacaddr8(v)::bit(32) != hashmacaddr8extended(v, 0)::bit(32) + OR hashmacaddr8(v)::bit(32) = 
hashmacaddr8extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hash_array(v)::bit(32) as standard, + hash_array_extended(v, 0)::bit(32) as extended0, + hash_array_extended(v, 1)::bit(32) as extended1 +FROM (VALUES ('{0}'::int4[]), ('{0,1,2,3,4}'), ('{17,18,19,20}'), + ('{42,34,65,98}'), ('{550273,590027, 870273}'), + ('{207112489, 807112489}')) x(v) +WHERE hash_array(v)::bit(32) != hash_array_extended(v, 0)::bit(32) + OR hash_array(v)::bit(32) = hash_array_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hashbpchar(v)::bit(32) as standard, + hashbpcharextended(v, 0)::bit(32) as extended0, + hashbpcharextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL), ('PostgreSQL'), ('eIpUEtqmY89'), ('AXKEJBTK'), + ('muop28x03'), ('yi3nm0d73')) x(v) +WHERE hashbpchar(v)::bit(32) != hashbpcharextended(v, 0)::bit(32) + OR hashbpchar(v)::bit(32) = hashbpcharextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, time_hash(v)::bit(32) as standard, + time_hash_extended(v, 0)::bit(32) as extended0, + time_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::time), ('11:09:59'), ('1:09:59'), ('11:59:59'), + ('7:9:59'), ('5:15:59')) x(v) +WHERE time_hash(v)::bit(32) != time_hash_extended(v, 0)::bit(32) + OR time_hash(v)::bit(32) = time_hash_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, timetz_hash(v)::bit(32) as standard, + timetz_hash_extended(v, 0)::bit(32) as extended0, + timetz_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::timetz), ('00:11:52.518762-07'), ('00:11:52.51762-08'), + ('00:11:52.62-01'), ('00:11:52.62+01'), ('11:59:59+04')) x(v) +WHERE timetz_hash(v)::bit(32) != timetz_hash_extended(v, 0)::bit(32) + OR timetz_hash(v)::bit(32) = timetz_hash_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, interval_hash(v)::bit(32) as standard, + interval_hash_extended(v, 0)::bit(32) as extended0, + interval_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::interval), + ('5 month 7 day 46 minutes'), ('1 year 7 day 46 minutes'), + ('1 year 7 month 20 day 46 minutes'), ('5 month'), + ('17 year 11 month 7 day 9 hours 46 minutes 5 seconds')) x(v) +WHERE interval_hash(v)::bit(32) != interval_hash_extended(v, 0)::bit(32) + OR interval_hash(v)::bit(32) = interval_hash_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, timestamp_hash(v)::bit(32) as standard, + timestamp_hash_extended(v, 0)::bit(32) as extended0, + timestamp_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::timestamp), ('2017-08-22 00:09:59.518762'), + ('2015-08-20 00:11:52.51762-08'), + ('2017-05-22 00:11:52.62-01'), + ('2013-08-22 00:11:52.62+01'), ('2013-08-22 11:59:59+04')) x(v) +WHERE timestamp_hash(v)::bit(32) != timestamp_hash_extended(v, 0)::bit(32) + OR timestamp_hash(v)::bit(32) = timestamp_hash_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, uuid_hash(v)::bit(32) as standard, + uuid_hash_extended(v, 0)::bit(32) as extended0, + 
uuid_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::uuid), ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'), + ('5a9ba4ac-8d6f-11e7-bb31-be2e44b06b34'), + ('99c6705c-d939-461c-a3c9-1690ad64ed7b'), + ('7deed3ca-8d6f-11e7-bb31-be2e44b06b34'), + ('9ad46d4f-6f2a-4edd-aadb-745993928e1e')) x(v) +WHERE uuid_hash(v)::bit(32) != uuid_hash_extended(v, 0)::bit(32) + OR uuid_hash(v)::bit(32) = uuid_hash_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, pg_lsn_hash(v)::bit(32) as standard, + pg_lsn_hash_extended(v, 0)::bit(32) as extended0, + pg_lsn_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::pg_lsn), ('16/B374D84'), ('30/B374D84'), + ('255/B374D84'), ('25/B379D90'), ('900/F37FD90')) x(v) +WHERE pg_lsn_hash(v)::bit(32) != pg_lsn_hash_extended(v, 0)::bit(32) + OR pg_lsn_hash(v)::bit(32) = pg_lsn_hash_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy'); +SELECT v as value, hashenum(v)::bit(32) as standard, + hashenumextended(v, 0)::bit(32) as extended0, + hashenumextended(v, 1)::bit(32) as extended1 +FROM (VALUES ('sad'::mood), ('ok'), ('happy')) x(v) +WHERE hashenum(v)::bit(32) != hashenumextended(v, 0)::bit(32) + OR hashenum(v)::bit(32) = hashenumextended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +DROP TYPE mood; +SELECT v as value, jsonb_hash(v)::bit(32) as standard, + jsonb_hash_extended(v, 0)::bit(32) as extended0, + jsonb_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::jsonb), + ('{"a": "aaa bbb ddd ccc", "b": ["eee fff ggg"], "c": {"d": "hhh iii"}}'), + ('{"foo": [true, "bar"], "tags": {"e": 1, "f": null}}'), + ('{"g": {"h": "value"}}')) x(v) +WHERE jsonb_hash(v)::bit(32) != jsonb_hash_extended(v, 0)::bit(32) + OR jsonb_hash(v)::bit(32) = jsonb_hash_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + +SELECT v as value, hash_range(v)::bit(32) as standard, + hash_range_extended(v, 0)::bit(32) as extended0, + hash_range_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (int4range(10, 20)), (int4range(23, 43)), + (int4range(5675, 550273)), + (int4range(550274, 1550274)), (int4range(1550275, 208112489))) x(v) +WHERE hash_range(v)::bit(32) != hash_range_extended(v, 0)::bit(32) + OR hash_range(v)::bit(32) = hash_range_extended(v, 1)::bit(32); + value | standard | extended0 | extended1 +-------+----------+-----------+----------- +(0 rows) + diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index eefdeeacae..2fd3f2b1b1 100644 --- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -60,7 +60,7 @@ test: create_index create_view # ---------- # Another group of parallel tests # ---------- -test: create_aggregate create_function_3 create_cast constraints triggers inherit create_table_like typed_table vacuum drop_if_exists updatable_views rolenames roleattributes create_am +test: create_aggregate create_function_3 create_cast constraints triggers inherit create_table_like typed_table vacuum drop_if_exists updatable_views rolenames roleattributes create_am hash_func # ---------- # sanity_check does a vacuum, affecting the sort order of SELECT * diff --git a/src/test/regress/sql/hash_func.sql b/src/test/regress/sql/hash_func.sql new file mode 100644 index 
0000000000..b7ce8b21a3 --- /dev/null +++ b/src/test/regress/sql/hash_func.sql @@ -0,0 +1,222 @@ +-- +-- Test hash functions +-- +-- When the salt is 0, the extended hash function should produce a result +-- whose low 32 bits match the standard hash function. When the salt is +-- not 0, we should get a different result. +-- + +SELECT v as value, hashint2(v)::bit(32) as standard, + hashint2extended(v, 0)::bit(32) as extended0, + hashint2extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0::int2), (1::int2), (17::int2), (42::int2)) x(v) +WHERE hashint2(v)::bit(32) != hashint2extended(v, 0)::bit(32) + OR hashint2(v)::bit(32) = hashint2extended(v, 1)::bit(32); + +SELECT v as value, hashint4(v)::bit(32) as standard, + hashint4extended(v, 0)::bit(32) as extended0, + hashint4extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashint4(v)::bit(32) != hashint4extended(v, 0)::bit(32) + OR hashint4(v)::bit(32) = hashint4extended(v, 1)::bit(32); + +SELECT v as value, hashint8(v)::bit(32) as standard, + hashint8extended(v, 0)::bit(32) as extended0, + hashint8extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashint8(v)::bit(32) != hashint8extended(v, 0)::bit(32) + OR hashint8(v)::bit(32) = hashint8extended(v, 1)::bit(32); + +SELECT v as value, hashfloat4(v)::bit(32) as standard, + hashfloat4extended(v, 0)::bit(32) as extended0, + hashfloat4extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashfloat4(v)::bit(32) != hashfloat4extended(v, 0)::bit(32) + OR hashfloat4(v)::bit(32) = hashfloat4extended(v, 1)::bit(32); + +SELECT v as value, hashfloat8(v)::bit(32) as standard, + hashfloat8extended(v, 0)::bit(32) as extended0, + hashfloat8extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashfloat8(v)::bit(32) != hashfloat8extended(v, 0)::bit(32) + OR hashfloat8(v)::bit(32) = hashfloat8extended(v, 1)::bit(32); + +SELECT v as value, hashoid(v)::bit(32) as standard, + hashoidextended(v, 0)::bit(32) as extended0, + hashoidextended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1), (17), (42), (550273), (207112489)) x(v) +WHERE hashoid(v)::bit(32) != hashoidextended(v, 0)::bit(32) + OR hashoid(v)::bit(32) = hashoidextended(v, 1)::bit(32); + +SELECT v as value, hashchar(v)::bit(32) as standard, + hashcharextended(v, 0)::bit(32) as extended0, + hashcharextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::"char"), ('1'), ('x'), ('X'), ('p'), ('N')) x(v) +WHERE hashchar(v)::bit(32) != hashcharextended(v, 0)::bit(32) + OR hashchar(v)::bit(32) = hashcharextended(v, 1)::bit(32); + +SELECT v as value, hashname(v)::bit(32) as standard, + hashnameextended(v, 0)::bit(32) as extended0, + hashnameextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL), ('PostgreSQL'), ('eIpUEtqmY89'), ('AXKEJBTK'), + ('muop28x03'), ('yi3nm0d73')) x(v) +WHERE hashname(v)::bit(32) != hashnameextended(v, 0)::bit(32) + OR hashname(v)::bit(32) = hashnameextended(v, 1)::bit(32); + +SELECT v as value, hashtext(v)::bit(32) as standard, + hashtextextended(v, 0)::bit(32) as extended0, + hashtextextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL), ('PostgreSQL'), ('eIpUEtqmY89'), ('AXKEJBTK'), + ('muop28x03'), ('yi3nm0d73')) x(v) +WHERE hashtext(v)::bit(32) != hashtextextended(v, 0)::bit(32) + OR hashtext(v)::bit(32) = hashtextextended(v, 1)::bit(32); + +SELECT v as value, hashoidvector(v)::bit(32) as standard, + 
hashoidvectorextended(v, 0)::bit(32) as extended0, + hashoidvectorextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::oidvector), ('0 1 2 3 4'), ('17 18 19 20'), + ('42 43 42 45'), ('550273 550273 570274'), + ('207112489 207112499 21512 2155 372325 1363252')) x(v) +WHERE hashoidvector(v)::bit(32) != hashoidvectorextended(v, 0)::bit(32) + OR hashoidvector(v)::bit(32) = hashoidvectorextended(v, 1)::bit(32); + +SELECT v as value, hash_aclitem(v)::bit(32) as standard, + hash_aclitem_extended(v, 0)::bit(32) as extended0, + hash_aclitem_extended(v, 1)::bit(32) as extended1 +FROM (SELECT DISTINCT(relacl[1]) FROM pg_class LIMIT 10) x(v) +WHERE hash_aclitem(v)::bit(32) != hash_aclitem_extended(v, 0)::bit(32) + OR hash_aclitem(v)::bit(32) = hash_aclitem_extended(v, 1)::bit(32); + +SELECT v as value, hashmacaddr(v)::bit(32) as standard, + hashmacaddrextended(v, 0)::bit(32) as extended0, + hashmacaddrextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::macaddr), ('08:00:2b:01:02:04'), ('08:00:2b:01:02:04'), + ('e2:7f:51:3e:70:49'), ('d6:a9:4a:78:1c:d5'), + ('ea:29:b1:5e:1f:a5')) x(v) +WHERE hashmacaddr(v)::bit(32) != hashmacaddrextended(v, 0)::bit(32) + OR hashmacaddr(v)::bit(32) = hashmacaddrextended(v, 1)::bit(32); + +SELECT v as value, hashinet(v)::bit(32) as standard, + hashinetextended(v, 0)::bit(32) as extended0, + hashinetextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::inet), ('192.168.100.128/25'), ('192.168.100.0/8'), + ('172.168.10.126/16'), ('172.18.103.126/24'), ('192.188.13.16/32')) x(v) +WHERE hashinet(v)::bit(32) != hashinetextended(v, 0)::bit(32) + OR hashinet(v)::bit(32) = hashinetextended(v, 1)::bit(32); + +SELECT v as value, hash_numeric(v)::bit(32) as standard, + hash_numeric_extended(v, 0)::bit(32) as extended0, + hash_numeric_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (0), (1.149484958), (17.149484958), (42.149484958), + (149484958.550273), (2071124898672)) x(v) +WHERE hash_numeric(v)::bit(32) != hash_numeric_extended(v, 0)::bit(32) + OR hash_numeric(v)::bit(32) = hash_numeric_extended(v, 1)::bit(32); + +SELECT v as value, hashmacaddr8(v)::bit(32) as standard, + hashmacaddr8extended(v, 0)::bit(32) as extended0, + hashmacaddr8extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::macaddr8), ('08:00:2b:01:02:04:36:49'), + ('08:00:2b:01:02:04:f0:e8'), ('e2:7f:51:3e:70:49:16:29'), + ('d6:a9:4a:78:1c:d5:47:32'), ('ea:29:b1:5e:1f:a5')) x(v) +WHERE hashmacaddr8(v)::bit(32) != hashmacaddr8extended(v, 0)::bit(32) + OR hashmacaddr8(v)::bit(32) = hashmacaddr8extended(v, 1)::bit(32); + +SELECT v as value, hash_array(v)::bit(32) as standard, + hash_array_extended(v, 0)::bit(32) as extended0, + hash_array_extended(v, 1)::bit(32) as extended1 +FROM (VALUES ('{0}'::int4[]), ('{0,1,2,3,4}'), ('{17,18,19,20}'), + ('{42,34,65,98}'), ('{550273,590027, 870273}'), + ('{207112489, 807112489}')) x(v) +WHERE hash_array(v)::bit(32) != hash_array_extended(v, 0)::bit(32) + OR hash_array(v)::bit(32) = hash_array_extended(v, 1)::bit(32); + +SELECT v as value, hashbpchar(v)::bit(32) as standard, + hashbpcharextended(v, 0)::bit(32) as extended0, + hashbpcharextended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL), ('PostgreSQL'), ('eIpUEtqmY89'), ('AXKEJBTK'), + ('muop28x03'), ('yi3nm0d73')) x(v) +WHERE hashbpchar(v)::bit(32) != hashbpcharextended(v, 0)::bit(32) + OR hashbpchar(v)::bit(32) = hashbpcharextended(v, 1)::bit(32); + +SELECT v as value, time_hash(v)::bit(32) as standard, + time_hash_extended(v, 0)::bit(32) as extended0, + time_hash_extended(v, 1)::bit(32) as extended1 
+FROM (VALUES (NULL::time), ('11:09:59'), ('1:09:59'), ('11:59:59'), + ('7:9:59'), ('5:15:59')) x(v) +WHERE time_hash(v)::bit(32) != time_hash_extended(v, 0)::bit(32) + OR time_hash(v)::bit(32) = time_hash_extended(v, 1)::bit(32); + +SELECT v as value, timetz_hash(v)::bit(32) as standard, + timetz_hash_extended(v, 0)::bit(32) as extended0, + timetz_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::timetz), ('00:11:52.518762-07'), ('00:11:52.51762-08'), + ('00:11:52.62-01'), ('00:11:52.62+01'), ('11:59:59+04')) x(v) +WHERE timetz_hash(v)::bit(32) != timetz_hash_extended(v, 0)::bit(32) + OR timetz_hash(v)::bit(32) = timetz_hash_extended(v, 1)::bit(32); + +SELECT v as value, interval_hash(v)::bit(32) as standard, + interval_hash_extended(v, 0)::bit(32) as extended0, + interval_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::interval), + ('5 month 7 day 46 minutes'), ('1 year 7 day 46 minutes'), + ('1 year 7 month 20 day 46 minutes'), ('5 month'), + ('17 year 11 month 7 day 9 hours 46 minutes 5 seconds')) x(v) +WHERE interval_hash(v)::bit(32) != interval_hash_extended(v, 0)::bit(32) + OR interval_hash(v)::bit(32) = interval_hash_extended(v, 1)::bit(32); + +SELECT v as value, timestamp_hash(v)::bit(32) as standard, + timestamp_hash_extended(v, 0)::bit(32) as extended0, + timestamp_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::timestamp), ('2017-08-22 00:09:59.518762'), + ('2015-08-20 00:11:52.51762-08'), + ('2017-05-22 00:11:52.62-01'), + ('2013-08-22 00:11:52.62+01'), ('2013-08-22 11:59:59+04')) x(v) +WHERE timestamp_hash(v)::bit(32) != timestamp_hash_extended(v, 0)::bit(32) + OR timestamp_hash(v)::bit(32) = timestamp_hash_extended(v, 1)::bit(32); + +SELECT v as value, uuid_hash(v)::bit(32) as standard, + uuid_hash_extended(v, 0)::bit(32) as extended0, + uuid_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::uuid), ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'), + ('5a9ba4ac-8d6f-11e7-bb31-be2e44b06b34'), + ('99c6705c-d939-461c-a3c9-1690ad64ed7b'), + ('7deed3ca-8d6f-11e7-bb31-be2e44b06b34'), + ('9ad46d4f-6f2a-4edd-aadb-745993928e1e')) x(v) +WHERE uuid_hash(v)::bit(32) != uuid_hash_extended(v, 0)::bit(32) + OR uuid_hash(v)::bit(32) = uuid_hash_extended(v, 1)::bit(32); + +SELECT v as value, pg_lsn_hash(v)::bit(32) as standard, + pg_lsn_hash_extended(v, 0)::bit(32) as extended0, + pg_lsn_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::pg_lsn), ('16/B374D84'), ('30/B374D84'), + ('255/B374D84'), ('25/B379D90'), ('900/F37FD90')) x(v) +WHERE pg_lsn_hash(v)::bit(32) != pg_lsn_hash_extended(v, 0)::bit(32) + OR pg_lsn_hash(v)::bit(32) = pg_lsn_hash_extended(v, 1)::bit(32); + +CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy'); +SELECT v as value, hashenum(v)::bit(32) as standard, + hashenumextended(v, 0)::bit(32) as extended0, + hashenumextended(v, 1)::bit(32) as extended1 +FROM (VALUES ('sad'::mood), ('ok'), ('happy')) x(v) +WHERE hashenum(v)::bit(32) != hashenumextended(v, 0)::bit(32) + OR hashenum(v)::bit(32) = hashenumextended(v, 1)::bit(32); +DROP TYPE mood; + +SELECT v as value, jsonb_hash(v)::bit(32) as standard, + jsonb_hash_extended(v, 0)::bit(32) as extended0, + jsonb_hash_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (NULL::jsonb), + ('{"a": "aaa bbb ddd ccc", "b": ["eee fff ggg"], "c": {"d": "hhh iii"}}'), + ('{"foo": [true, "bar"], "tags": {"e": 1, "f": null}}'), + ('{"g": {"h": "value"}}')) x(v) +WHERE jsonb_hash(v)::bit(32) != jsonb_hash_extended(v, 0)::bit(32) + OR jsonb_hash(v)::bit(32) = jsonb_hash_extended(v, 
1)::bit(32); + +SELECT v as value, hash_range(v)::bit(32) as standard, + hash_range_extended(v, 0)::bit(32) as extended0, + hash_range_extended(v, 1)::bit(32) as extended1 +FROM (VALUES (int4range(10, 20)), (int4range(23, 43)), + (int4range(5675, 550273)), + (int4range(550274, 1550274)), (int4range(1550275, 208112489))) x(v) +WHERE hash_range(v)::bit(32) != hash_range_extended(v, 0)::bit(32) + OR hash_range(v)::bit(32) = hash_range_extended(v, 1)::bit(32); From 0d9506d125beef18247a5e38a219d3b23e2d312e Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 31 Aug 2017 23:09:00 -0400 Subject: [PATCH 0071/1087] Try to repair poorly-considered code in previous commit. --- src/backend/utils/adt/jsonb_op.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/utils/adt/jsonb_op.c b/src/backend/utils/adt/jsonb_op.c index c4a7dc3f13..52a7e19a54 100644 --- a/src/backend/utils/adt/jsonb_op.c +++ b/src/backend/utils/adt/jsonb_op.c @@ -313,10 +313,10 @@ jsonb_hash_extended(PG_FUNCTION_ARGS) { /* Rotation is left to JsonbHashScalarValueExtended() */ case WJB_BEGIN_ARRAY: - hash ^= ((UINT64CONST(JB_FARRAY) << 32) | UINT64CONST(JB_FARRAY)); + hash ^= ((uint64) JB_FARRAY) << 32 | JB_FARRAY; break; case WJB_BEGIN_OBJECT: - hash ^= ((UINT64CONST(JB_FOBJECT) << 32) | UINT64CONST(JB_FOBJECT)); + hash ^= ((uint64) JB_FOBJECT) << 32 | JB_FOBJECT; break; case WJB_KEY: case WJB_VALUE: From 7b69b6ceb8047979ddf82af12ec1de143da62263 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Sep 2017 00:13:25 -0400 Subject: [PATCH 0072/1087] Fix assorted carelessness about Datum vs. int64 vs. uint64 Bugs introduced by commit 81c5e46c490e2426db243eada186995da5bb0ba7 --- src/backend/utils/adt/arrayfuncs.c | 2 +- src/backend/utils/adt/date.c | 5 +++-- src/backend/utils/adt/numeric.c | 2 +- src/backend/utils/adt/rangetypes.c | 5 +++-- 4 files changed, 8 insertions(+), 6 deletions(-) diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c index 522af7affc..2a4de41bbc 100644 --- a/src/backend/utils/adt/arrayfuncs.c +++ b/src/backend/utils/adt/arrayfuncs.c @@ -4084,7 +4084,7 @@ hash_array_extended(PG_FUNCTION_ARGS) { /* Apply the hash function */ locfcinfo.arg[0] = elt; - locfcinfo.arg[1] = seed; + locfcinfo.arg[1] = Int64GetDatum(seed); locfcinfo.argnull[0] = false; locfcinfo.argnull[1] = false; locfcinfo.isnull = false; diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c index 34c0b52d58..0992bb3fdd 100644 --- a/src/backend/utils/adt/date.c +++ b/src/backend/utils/adt/date.c @@ -2223,14 +2223,15 @@ Datum timetz_hash_extended(PG_FUNCTION_ARGS) { TimeTzADT *key = PG_GETARG_TIMETZADT_P(0); - uint64 seed = PG_GETARG_DATUM(1); + Datum seed = PG_GETARG_DATUM(1); uint64 thash; /* Same approach as timetz_hash */ thash = DatumGetUInt64(DirectFunctionCall2(hashint8extended, Int64GetDatumFast(key->time), seed)); - thash ^= DatumGetUInt64(hash_uint32_extended(key->zone, seed)); + thash ^= DatumGetUInt64(hash_uint32_extended(key->zone, + DatumGetInt64(seed))); PG_RETURN_UINT64(thash); } diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index 22d5898927..bc01f3c284 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -2285,7 +2285,7 @@ hash_numeric_extended(PG_FUNCTION_ARGS) hash_len * sizeof(NumericDigit), seed); - result = digit_hash ^ weight; + result = UInt64GetDatum(DatumGetUInt64(digit_hash) ^ weight); PG_RETURN_DATUM(result); } diff --git a/src/backend/utils/adt/rangetypes.c 
b/src/backend/utils/adt/rangetypes.c
index d7ba271317..dae505159e 100644
--- a/src/backend/utils/adt/rangetypes.c
+++ b/src/backend/utils/adt/rangetypes.c
@@ -1288,7 +1288,7 @@ Datum
 hash_range_extended(PG_FUNCTION_ARGS)
 {
 	RangeType  *r = PG_GETARG_RANGE(0);
-	uint64		seed = PG_GETARG_INT64(1);
+	Datum		seed = PG_GETARG_DATUM(1);
 	uint64		result;
 	TypeCacheEntry *typcache;
 	TypeCacheEntry *scache;
@@ -1335,7 +1335,8 @@ hash_range_extended(PG_FUNCTION_ARGS)
 		upper_hash = 0;
 
 	/* Merge hashes of flags and bounds */
-	result = hash_uint32_extended((uint32) flags, seed);
+	result = DatumGetUInt64(hash_uint32_extended((uint32) flags,
+												 DatumGetInt64(seed)));
 	result ^= lower_hash;
 	result = ROTATE_HIGH_AND_LOW_32BITS(result);
 	result ^= upper_hash;

From abe85ef1d00187a42e7a757ea0413bc4965a4525 Mon Sep 17 00:00:00 2001
From: Simon Riggs
Date: Fri, 1 Sep 2017 07:57:05 +0100
Subject: [PATCH 0073/1087] Add note about disk space usage of pg_commit_ts

Author: Thomas Munro
---
 doc/src/sgml/maintenance.sgml | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index fe1e0ed2b3..616aece6c0 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -526,19 +526,23 @@
    The sole disadvantage of increasing autovacuum_freeze_max_age
-    (and vacuum_freeze_table_age along with it)
-    is that the pg_xact subdirectory of the database cluster
-    will take more space, because it must store the commit status of all
-    transactions back to the autovacuum_freeze_max_age horizon.
-    The commit status uses two bits per transaction, so if
-    autovacuum_freeze_max_age is set to its maximum allowed
-    value of two billion, pg_xact can be expected to
-    grow to about half a gigabyte.  If this is trivial compared to your
-    total database size, setting autovacuum_freeze_max_age to
-    its maximum allowed value is recommended.  Otherwise, set it depending
-    on what you are willing to allow for pg_xact storage.
-    (The default, 200 million transactions, translates to about 50MB of
-    pg_xact storage.)
+    (and vacuum_freeze_table_age along with it) is that
+    the pg_xact and pg_commit_ts
+    subdirectories of the database cluster will take more space, because they
+    must store the commit status and (if track_commit_timestamp is
+    enabled) timestamp of all transactions back to
+    the autovacuum_freeze_max_age horizon.  The commit status uses
+    two bits per transaction, so if
+    autovacuum_freeze_max_age is set to its maximum allowed value
+    of two billion, pg_xact can be expected to grow to about half
+    a gigabyte and pg_commit_ts to about 20GB.  If this
+    is trivial compared to your total database size,
+    setting autovacuum_freeze_max_age to its maximum allowed value
+    is recommended.  Otherwise, set it depending on what you are willing to
+    allow for pg_xact and pg_commit_ts storage.
+    (The default, 200 million transactions, translates to about 50MB
+    of pg_xact storage and about 2GB of pg_commit_ts
+    storage.)

From be7161566db247fd519e1a888ea8cd36b3c72088 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera
Date: Fri, 1 Sep 2017 13:44:14 +0200
Subject: [PATCH 0074/1087] Add a WAIT option to DROP_REPLICATION_SLOT

Commit 9915de6c1cb2 changed the default behavior of
DROP_REPLICATION_SLOT so that it would wait until any session keeping
the slot active had released it, instead of raising an error.  But
users are already depending on the original behavior, so revert to it
by default and add a WAIT option to invoke the new behavior.
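For example (the slot name here is purely illustrative), a client that
wants the blocking behavior now requests it explicitly:

    DROP_REPLICATION_SLOT sub1_slot WAIT

while the same command without WAIT fails if the slot is in use by an
active connection, as it did before 9915de6c1cb2.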
Per complaint from Simone Gotti, in Discussion: https://postgr.es/m/CAEvsy6Wgdf90O6pUvg2wSVXL2omH5OPC-38OD4Zzgk-FXavj3Q@mail.gmail.com --- doc/src/sgml/logicaldecoding.sgml | 2 +- doc/src/sgml/protocol.sgml | 17 ++++++++++++++--- src/backend/commands/subscriptioncmds.c | 2 +- src/backend/replication/repl_gram.y | 10 ++++++++++ src/backend/replication/repl_scanner.l | 1 + src/backend/replication/slotfuncs.c | 2 +- src/backend/replication/walsender.c | 2 +- src/include/nodes/replnodes.h | 1 + 8 files changed, 30 insertions(+), 7 deletions(-) diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml index 8dcfc6c742..f8142518c1 100644 --- a/doc/src/sgml/logicaldecoding.sgml +++ b/doc/src/sgml/logicaldecoding.sgml @@ -303,7 +303,7 @@ $ pg_recvlogical -d postgres --slot test --drop-slot - DROP_REPLICATION_SLOT slot_name + DROP_REPLICATION_SLOT slot_name WAIT diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 7c012f59a3..2bb4e38a9d 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -2173,13 +2173,13 @@ The commands accepted in walsender mode are: - DROP_REPLICATION_SLOT slot_name + + DROP_REPLICATION_SLOT slot_name WAIT DROP_REPLICATION_SLOT - Drops a replication slot, freeing any reserved server-side resources. If - the slot is currently in use by an active connection, this command fails. + Drops a replication slot, freeing any reserved server-side resources. If the slot is a logical slot that was created in a database other than the database the walsender is connected to, this command fails. @@ -2192,6 +2192,17 @@ The commands accepted in walsender mode are: + + + WAIT + + + This option causes the command to wait if the slot is active until + it becomes inactive, instead of the default behavior of raising an + error. 
+ + + diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c index 9bc1d178fc..2ef414e084 100644 --- a/src/backend/commands/subscriptioncmds.c +++ b/src/backend/commands/subscriptioncmds.c @@ -959,7 +959,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel) load_file("libpqwalreceiver", false); initStringInfo(&cmd); - appendStringInfo(&cmd, "DROP_REPLICATION_SLOT %s", quote_identifier(slotname)); + appendStringInfo(&cmd, "DROP_REPLICATION_SLOT %s WAIT", quote_identifier(slotname)); wrconn = walrcv_connect(conninfo, true, subname, &err); if (wrconn == NULL) diff --git a/src/backend/replication/repl_gram.y b/src/backend/replication/repl_gram.y index ec047c827c..a012447fa2 100644 --- a/src/backend/replication/repl_gram.y +++ b/src/backend/replication/repl_gram.y @@ -72,6 +72,7 @@ static SQLCmd *make_sqlcmd(void); %token K_LABEL %token K_PROGRESS %token K_FAST +%token K_WAIT %token K_NOWAIT %token K_MAX_RATE %token K_WAL @@ -272,6 +273,15 @@ drop_replication_slot: DropReplicationSlotCmd *cmd; cmd = makeNode(DropReplicationSlotCmd); cmd->slotname = $2; + cmd->wait = false; + $$ = (Node *) cmd; + } + | K_DROP_REPLICATION_SLOT IDENT K_WAIT + { + DropReplicationSlotCmd *cmd; + cmd = makeNode(DropReplicationSlotCmd); + cmd->slotname = $2; + cmd->wait = true; $$ = (Node *) cmd; } ; diff --git a/src/backend/replication/repl_scanner.l b/src/backend/replication/repl_scanner.l index 52ae7b343f..62bb5288c0 100644 --- a/src/backend/replication/repl_scanner.l +++ b/src/backend/replication/repl_scanner.l @@ -103,6 +103,7 @@ TEMPORARY { return K_TEMPORARY; } EXPORT_SNAPSHOT { return K_EXPORT_SNAPSHOT; } NOEXPORT_SNAPSHOT { return K_NOEXPORT_SNAPSHOT; } USE_SNAPSHOT { return K_USE_SNAPSHOT; } +WAIT { return K_WAIT; } "," { return ','; } ";" { return ';'; } diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c index d4cbd83bde..ab776e85d2 100644 --- a/src/backend/replication/slotfuncs.c +++ b/src/backend/replication/slotfuncs.c @@ -171,7 +171,7 @@ pg_drop_replication_slot(PG_FUNCTION_ARGS) CheckSlotRequirements(); - ReplicationSlotDrop(NameStr(*name), false); + ReplicationSlotDrop(NameStr(*name), true); PG_RETURN_VOID(); } diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index 03e1cf44de..db346e6edb 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -1028,7 +1028,7 @@ CreateReplicationSlot(CreateReplicationSlotCmd *cmd) static void DropReplicationSlot(DropReplicationSlotCmd *cmd) { - ReplicationSlotDrop(cmd->slotname, false); + ReplicationSlotDrop(cmd->slotname, !cmd->wait); EndCommand("DROP_REPLICATION_SLOT", DestRemote); } diff --git a/src/include/nodes/replnodes.h b/src/include/nodes/replnodes.h index dea61e90e9..2053ffabe0 100644 --- a/src/include/nodes/replnodes.h +++ b/src/include/nodes/replnodes.h @@ -68,6 +68,7 @@ typedef struct DropReplicationSlotCmd { NodeTag type; char *slotname; + bool wait; } DropReplicationSlotCmd; From 4f27c674fd9fb5ba1f2952e2db53886bb5954e8b Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Fri, 1 Sep 2017 14:55:44 +0100 Subject: [PATCH 0075/1087] Avoid race condition in logical replication test Wait for slot to become inactive before continuing. 
Author: Petr Jelinek --- src/test/recovery/t/006_logical_decoding.pl | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/test/recovery/t/006_logical_decoding.pl b/src/test/recovery/t/006_logical_decoding.pl index 4a90e9ac7e..8b35bc8438 100644 --- a/src/test/recovery/t/006_logical_decoding.pl +++ b/src/test/recovery/t/006_logical_decoding.pl @@ -78,6 +78,11 @@ is($stdout_recv, $expected, 'got same expected output from pg_recvlogical decoding session'); +$node_master->poll_query_until('postgres', +"SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = 'test_slot' AND active_pid IS NULL)" +) + or die "slot never became inactive"; + $stdout_recv = $node_master->pg_recvlogical_upto( 'postgres', 'test_slot', $endpos, 10, 'include-xids' => '0', From a6979c3a68e2caa6696021f7a15278a99e7409a1 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 1 Sep 2017 16:30:02 +0200 Subject: [PATCH 0076/1087] Restore behavior for replication origin drop Do for replication origins what the previous commit did for replication slots: restore the original behavior of replication origin drop to raise an error rather than blocking, because users might be depending on the original behavior. Maintain the blocking behavior when invoked internally from logical replication subscription handling. Discussion: https://postgr.es/m/20170830133922.tlpo3lgfejm4n2cs@alvherre.pgsql --- src/backend/replication/logical/origin.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c index 14cb3d0bf2..edc6efb8a6 100644 --- a/src/backend/replication/logical/origin.c +++ b/src/backend/replication/logical/origin.c @@ -1205,7 +1205,7 @@ pg_replication_origin_drop(PG_FUNCTION_ARGS) roident = replorigin_by_name(name, false); Assert(OidIsValid(roident)); - replorigin_drop(roident, false); + replorigin_drop(roident, true); pfree(name); From 89c59b742a7f89eb598a25b70aaa3ab97381f67d Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 1 Sep 2017 16:51:55 +0200 Subject: [PATCH 0077/1087] Fix two-phase commit test for recovery mode MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The original code had a race condition because it never ensured the standby was caught up before proceeding; add a wait similar to every other place that does this. Author: Michaël Paquier Discussion: https://postgr.es/m/CAB7nPqTm9p+LCm1mVJYvgpwagRK+uibT-pKq0O2-paOWxT62jw@mail.gmail.com --- src/test/recovery/t/009_twophase.pl | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/src/test/recovery/t/009_twophase.pl b/src/test/recovery/t/009_twophase.pl index 6c50139572..95f22bc421 100644 --- a/src/test/recovery/t/009_twophase.pl +++ b/src/test/recovery/t/009_twophase.pl @@ -331,6 +331,14 @@ sub configure_and_reload CHECKPOINT; COMMIT PREPARED 'xact_009_13';"); +# Ensure that last transaction is replayed on standby. +my $cur_master_lsn = + $cur_master->safe_psql('postgres', "SELECT pg_current_wal_lsn()"); +my $caughtup_query = + "SELECT '$cur_master_lsn'::pg_lsn <= pg_last_wal_replay_lsn()"; +$cur_standby->poll_query_until('postgres', $caughtup_query) + or die "Timed out while waiting for standby to catch up"; + $cur_standby->psql( 'postgres', "SELECT count(*) FROM t_009_tbl2", From baaf272ac908ea27c09076e34f62c45fa7d1e448 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Sep 2017 11:45:17 -0400 Subject: [PATCH 0078/1087] Use group updates when setting transaction status in clog. 
Commit 0e141c0fbb211bdd23783afa731e3eef95c9ad7a introduced a mechanism to reduce contention on ProcArrayLock by having a single process clear XIDs in the procArray on behalf of multiple processes, reducing the need to hand the lock around. A previous attempt to introduce a similar mechanism for CLogControlLock in ccce90b398673d55b0387b3de66639b1b30d451b crashed and burned, but the design problem which resulted in those failures is believed to have been corrected in this version. Amit Kapila, with some cosmetic changes by me. See the previous commit message for additional credits. Discussion: http://postgr.es/m/CAA4eK1KudxzgWhuywY_X=yeSAhJMT4DwCjroV5Ay60xaeB2Eew@mail.gmail.com --- doc/src/sgml/monitoring.sgml | 6 +- src/backend/access/transam/clog.c | 264 ++++++++++++++++++++++++++++-- src/backend/postmaster/pgstat.c | 3 + src/backend/storage/lmgr/proc.c | 9 + src/include/pgstat.h | 1 + src/include/storage/proc.h | 14 ++ 6 files changed, 285 insertions(+), 12 deletions(-) diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index 5575c2c837..38bf63658a 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -1250,7 +1250,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Waiting in an extension. - IPC + IPC BgWorkerShutdown Waiting for background worker to shut down. @@ -1302,6 +1302,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser ProcArrayGroupUpdate Waiting for group leader to clear transaction id at transaction end. + + ClogGroupUpdate + Waiting for group leader to update transaction status at transaction end. + ReplicationOriginDrop Waiting for a replication origin to become inactive to be dropped. diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c index 0a7e2b310f..9003b22193 100644 --- a/src/backend/access/transam/clog.c +++ b/src/backend/access/transam/clog.c @@ -39,7 +39,9 @@ #include "access/xloginsert.h" #include "access/xlogutils.h" #include "miscadmin.h" +#include "pgstat.h" #include "pg_trace.h" +#include "storage/proc.h" /* * Defines for CLOG page sizes. A page is the same BLCKSZ as is used @@ -71,6 +73,12 @@ #define GetLSNIndex(slotno, xid) ((slotno) * CLOG_LSNS_PER_PAGE + \ ((xid) % (TransactionId) CLOG_XACTS_PER_PAGE) / CLOG_XACTS_PER_LSN_GROUP) +/* + * The number of subtransactions below which we consider to apply clog group + * update optimization. Testing reveals that the number higher than this can + * hurt performance. 
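+ * (A transaction with more cached subtransaction XIDs than this simply
+ * takes CLogControlLock itself, via the regular locking path below.)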
+ */ +#define THRESHOLD_SUBTRANS_CLOG_OPT 5 /* * Link to shared-memory data structures for CLOG control @@ -87,11 +95,17 @@ static void WriteTruncateXlogRec(int pageno, TransactionId oldestXact, Oid oldestXidDb); static void TransactionIdSetPageStatus(TransactionId xid, int nsubxids, TransactionId *subxids, XidStatus status, - XLogRecPtr lsn, int pageno); + XLogRecPtr lsn, int pageno, + bool all_xact_same_page); static void TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, int slotno); static void set_status_by_pages(int nsubxids, TransactionId *subxids, XidStatus status, XLogRecPtr lsn); +static bool TransactionGroupUpdateXidStatus(TransactionId xid, + XidStatus status, XLogRecPtr lsn, int pageno); +static void TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids, + TransactionId *subxids, XidStatus status, + XLogRecPtr lsn, int pageno); /* @@ -174,7 +188,7 @@ TransactionIdSetTreeStatus(TransactionId xid, int nsubxids, * Set the parent and all subtransactions in a single call */ TransactionIdSetPageStatus(xid, nsubxids, subxids, status, lsn, - pageno); + pageno, true); } else { @@ -201,7 +215,7 @@ TransactionIdSetTreeStatus(TransactionId xid, int nsubxids, */ pageno = TransactionIdToPage(xid); TransactionIdSetPageStatus(xid, nsubxids_on_first_page, subxids, status, - lsn, pageno); + lsn, pageno, false); /* * Now work through the rest of the subxids one clog page at a time, @@ -239,22 +253,92 @@ set_status_by_pages(int nsubxids, TransactionId *subxids, TransactionIdSetPageStatus(InvalidTransactionId, num_on_page, subxids + offset, - status, lsn, pageno); + status, lsn, pageno, false); offset = i; pageno = TransactionIdToPage(subxids[offset]); } } /* - * Record the final state of transaction entries in the commit log for - * all entries on a single page. Atomic only on this page. - * - * Otherwise API is same as TransactionIdSetTreeStatus() + * Record the final state of transaction entries in the commit log for all + * entries on a single page. Atomic only on this page. */ static void TransactionIdSetPageStatus(TransactionId xid, int nsubxids, TransactionId *subxids, XidStatus status, - XLogRecPtr lsn, int pageno) + XLogRecPtr lsn, int pageno, + bool all_xact_same_page) +{ + /* Can't use group update when PGPROC overflows. */ + StaticAssertStmt(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS, + "group clog threshold less than PGPROC cached subxids"); + + /* + * When there is contention on CLogControlLock, we try to group multiple + * updates; a single leader process will perform transaction status + * updates for multiple backends so that the number of times + * CLogControlLock needs to be acquired is reduced. + * + * For this optimization to be safe, the XID in MyPgXact and the subxids + * in MyProc must be the same as the ones for which we're setting the + * status. Check that this is the case. + * + * For this optimization to be efficient, we shouldn't have too many + * sub-XIDs and all of the XIDs for which we're adjusting clog should be + * on the same page. Check those conditions, too. + */ + if (all_xact_same_page && xid == MyPgXact->xid && + nsubxids <= THRESHOLD_SUBTRANS_CLOG_OPT && + nsubxids == MyPgXact->nxids && + memcmp(subxids, MyProc->subxids.xids, + nsubxids * sizeof(TransactionId)) == 0) + { + /* + * We don't try to do group update optimization if a process has + * overflowed the subxids array in its PGPROC, since in that case we + * don't have a complete list of XIDs for it. 
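+		 * (The nsubxids and memcmp checks above cannot match a transaction
+		 * whose subxid cache has overflowed, so such transactions fall
+		 * through to the regular locking path.)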
+		 */
+		Assert(THRESHOLD_SUBTRANS_CLOG_OPT <= PGPROC_MAX_CACHED_SUBXIDS);
+
+		/*
+		 * If we can immediately acquire CLogControlLock, we update the status
+		 * of our own XID and release the lock.  If not, try to use group XID
+		 * update.  If that doesn't work out, fall back to waiting for the
+		 * lock to perform an update for this transaction only.
+		 */
+		if (LWLockConditionalAcquire(CLogControlLock, LW_EXCLUSIVE))
+		{
+			/* Got the lock without waiting!  Do the update. */
+			TransactionIdSetPageStatusInternal(xid, nsubxids, subxids, status,
+											   lsn, pageno);
+			LWLockRelease(CLogControlLock);
+			return;
+		}
+		else if (TransactionGroupUpdateXidStatus(xid, status, lsn, pageno))
+		{
+			/* Group update mechanism has done the work. */
+			return;
+		}
+
+		/* Fall through only if update isn't done yet. */
+	}
+
+	/* Group update not applicable, or couldn't accept this page number. */
+	LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+	TransactionIdSetPageStatusInternal(xid, nsubxids, subxids, status,
+									   lsn, pageno);
+	LWLockRelease(CLogControlLock);
+}
+
+/*
+ * Record the final state of a transaction entry in the commit log
+ *
+ * We don't do any locking here; caller must handle that.
+ */
+static void
+TransactionIdSetPageStatusInternal(TransactionId xid, int nsubxids,
+								   TransactionId *subxids, XidStatus status,
+								   XLogRecPtr lsn, int pageno)
 {
 	int			slotno;
 	int			i;
@@ -262,8 +346,7 @@ TransactionIdSetPageStatus(TransactionId xid, int nsubxids,
 	Assert(status == TRANSACTION_STATUS_COMMITTED ||
 		   status == TRANSACTION_STATUS_ABORTED ||
 		   (status == TRANSACTION_STATUS_SUB_COMMITTED && !TransactionIdIsValid(xid)));
-
-	LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);
+	Assert(LWLockHeldByMeInMode(CLogControlLock, LW_EXCLUSIVE));
 
 	/*
 	 * If we're doing an async commit (ie, lsn is valid), then we must wait
@@ -311,8 +394,167 @@ TransactionIdSetPageStatus(TransactionId xid, int nsubxids,
 	}
 
 	ClogCtl->shared->page_dirty[slotno] = true;
+}
+
+/*
+ * When we cannot immediately acquire CLogControlLock in exclusive mode at
+ * commit time, add ourselves to a list of processes that need their XID
+ * status updated.  The first process to add itself to the list will acquire
+ * CLogControlLock in exclusive mode and set transaction status as required
+ * on behalf of all group members.  This avoids a great deal of contention
+ * around CLogControlLock when many processes are trying to commit at once,
+ * since the lock need not be repeatedly handed off from one committing
+ * process to the next.
+ *
+ * Returns true when transaction status has been updated in clog; returns
+ * false if we decided against applying the optimization because the page
+ * number we need to update differs from those of the processes already
+ * waiting.
+ */
+static bool
+TransactionGroupUpdateXidStatus(TransactionId xid, XidStatus status,
+								XLogRecPtr lsn, int pageno)
+{
+	volatile PROC_HDR *procglobal = ProcGlobal;
+	PGPROC	   *proc = MyProc;
+	uint32		nextidx;
+	uint32		wakeidx;
+
+	/* We should definitely have an XID whose status needs to be updated. */
+	Assert(TransactionIdIsValid(xid));
+
+	/*
+	 * Add ourselves to the list of processes needing a group XID status
+	 * update.
+ */ + proc->clogGroupMember = true; + proc->clogGroupMemberXid = xid; + proc->clogGroupMemberXidStatus = status; + proc->clogGroupMemberPage = pageno; + proc->clogGroupMemberLsn = lsn; + + nextidx = pg_atomic_read_u32(&procglobal->clogGroupFirst); + while (true) + { + /* + * Add the proc to the list, if the clog page where we need to update + * the current transaction status is the same as the group leader's + * clog page. + * + * There is a race condition here, which is that after doing the below + * check and before adding this proc's clog update to a group, the + * group leader might have already finished the group update for this + * page and become the group leader of another group. This will lead to + * a situation where a single group can have different clog page + * updates. This isn't likely and will still work, just maybe a bit + * less efficiently. + */ + if (nextidx != INVALID_PGPROCNO && + ProcGlobal->allProcs[nextidx].clogGroupMemberPage != proc->clogGroupMemberPage) + { + proc->clogGroupMember = false; + return false; + } + + pg_atomic_write_u32(&proc->clogGroupNext, nextidx); + + if (pg_atomic_compare_exchange_u32(&procglobal->clogGroupFirst, + &nextidx, + (uint32) proc->pgprocno)) + break; + } + + /* + * If the list was not empty, the leader will update the status of our + * XID. It is impossible to have followers without a leader because the + * first process that has added itself to the list will always have + * nextidx as INVALID_PGPROCNO. + */ + if (nextidx != INVALID_PGPROCNO) + { + int extraWaits = 0; + + /* Sleep until the leader updates our XID status. */ + pgstat_report_wait_start(WAIT_EVENT_CLOG_GROUP_UPDATE); + for (;;) + { + /* acts as a read barrier */ + PGSemaphoreLock(proc->sem); + if (!proc->clogGroupMember) + break; + extraWaits++; + } + pgstat_report_wait_end(); + + Assert(pg_atomic_read_u32(&proc->clogGroupNext) == INVALID_PGPROCNO); + + /* Fix semaphore count for any absorbed wakeups */ + while (extraWaits-- > 0) + PGSemaphoreUnlock(proc->sem); + return true; + } + + /* We are the leader. Acquire the lock on behalf of everyone. */ + LWLockAcquire(CLogControlLock, LW_EXCLUSIVE); + + /* + * Now that we've got the lock, clear the list of processes waiting for + * group XID status update, saving a pointer to the head of the list. + * Trying to pop elements one at a time could lead to an ABA problem. + */ + nextidx = pg_atomic_exchange_u32(&procglobal->clogGroupFirst, + INVALID_PGPROCNO); + + /* Remember head of list so we can perform wakeups after dropping lock. */ + wakeidx = nextidx; + + /* Walk the list and update the status of all XIDs. */ + while (nextidx != INVALID_PGPROCNO) + { + PGPROC *proc = &ProcGlobal->allProcs[nextidx]; + PGXACT *pgxact = &ProcGlobal->allPgXact[nextidx]; + + /* + * Overflowed transactions should not use the group XID status update + * mechanism. + */ + Assert(!pgxact->overflowed); + + TransactionIdSetPageStatusInternal(proc->clogGroupMemberXid, + pgxact->nxids, + proc->subxids.xids, + proc->clogGroupMemberXidStatus, + proc->clogGroupMemberLsn, + proc->clogGroupMemberPage); + + /* Move to next proc in list. */ + nextidx = pg_atomic_read_u32(&proc->clogGroupNext); + } + + /* We're done with the lock now. */ LWLockRelease(CLogControlLock); + + /* + * Now that we've released the lock, go back and wake everybody up. We + * don't do this under the lock so as to keep lock hold times to a + * minimum.
+ */ + while (wakeidx != INVALID_PGPROCNO) + { + PGPROC *proc = &ProcGlobal->allProcs[wakeidx]; + + wakeidx = pg_atomic_read_u32(&proc->clogGroupNext); + pg_atomic_write_u32(&proc->clogGroupNext, INVALID_PGPROCNO); + + /* ensure all previous writes are visible before follower continues. */ + pg_write_barrier(); + + proc->clogGroupMember = false; + + if (proc != MyProc) + PGSemaphoreUnlock(proc->sem); + } + + return true; } /* diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index 1f75e2e97d..accf302cf7 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -3609,6 +3609,9 @@ pgstat_get_wait_ipc(WaitEventIPC w) case WAIT_EVENT_PROCARRAY_GROUP_UPDATE: event_name = "ProcArrayGroupUpdate"; break; + case WAIT_EVENT_CLOG_GROUP_UPDATE: + event_name = "ClogGroupUpdate"; + break; case WAIT_EVENT_REPLICATION_ORIGIN_DROP: event_name = "ReplicationOriginDrop"; break; diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c index bfa84992ea..5f6727d501 100644 --- a/src/backend/storage/lmgr/proc.c +++ b/src/backend/storage/lmgr/proc.c @@ -186,6 +186,7 @@ InitProcGlobal(void) ProcGlobal->walwriterLatch = NULL; ProcGlobal->checkpointerLatch = NULL; pg_atomic_init_u32(&ProcGlobal->procArrayGroupFirst, INVALID_PGPROCNO); + pg_atomic_init_u32(&ProcGlobal->clogGroupFirst, INVALID_PGPROCNO); /* * Create and initialize all the PGPROC structures we'll need. There are @@ -408,6 +409,14 @@ InitProcess(void) /* Initialize wait event information. */ MyProc->wait_event_info = 0; + /* Initialize fields for group transaction status update. */ + MyProc->clogGroupMember = false; + MyProc->clogGroupMemberXid = InvalidTransactionId; + MyProc->clogGroupMemberXidStatus = TRANSACTION_STATUS_IN_PROGRESS; + MyProc->clogGroupMemberPage = -1; + MyProc->clogGroupMemberLsn = InvalidXLogRecPtr; + pg_atomic_init_u32(&MyProc->clogGroupNext, INVALID_PGPROCNO); + /* * Acquire ownership of the PGPROC's latch, so that we can use WaitLatch * on it. That allows us to repoint the process latch, which so far diff --git a/src/include/pgstat.h b/src/include/pgstat.h index cb05d9b81e..57ac5d41e4 100644 --- a/src/include/pgstat.h +++ b/src/include/pgstat.h @@ -812,6 +812,7 @@ typedef enum WAIT_EVENT_PARALLEL_FINISH, WAIT_EVENT_PARALLEL_BITMAP_SCAN, WAIT_EVENT_PROCARRAY_GROUP_UPDATE, + WAIT_EVENT_CLOG_GROUP_UPDATE, WAIT_EVENT_REPLICATION_ORIGIN_DROP, WAIT_EVENT_REPLICATION_SLOT_DROP, WAIT_EVENT_SAFE_SNAPSHOT, diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h index 7dbaa81a8f..205f484510 100644 --- a/src/include/storage/proc.h +++ b/src/include/storage/proc.h @@ -14,6 +14,7 @@ #ifndef _PROC_H_ #define _PROC_H_ +#include "access/clog.h" #include "access/xlogdefs.h" #include "lib/ilist.h" #include "storage/latch.h" @@ -171,6 +172,17 @@ struct PGPROC uint32 wait_event_info; /* proc's wait information */ + /* Support for group transaction status update. */ + bool clogGroupMember; /* true, if member of clog group */ + pg_atomic_uint32 clogGroupNext; /* next clog group member */ + TransactionId clogGroupMemberXid; /* transaction id of clog group member */ + XidStatus clogGroupMemberXidStatus; /* transaction status of clog + * group member */ + int clogGroupMemberPage; /* clog page corresponding to + * transaction id of clog group member */ + XLogRecPtr clogGroupMemberLsn; /* WAL location of commit record for clog + * group member */ + /* Per-backend LWLock. Protects fields below (but not group fields). 
*/ LWLock backendLock; @@ -242,6 +254,8 @@ typedef struct PROC_HDR PGPROC *bgworkerFreeProcs; /* First pgproc waiting for group XID clear */ pg_atomic_uint32 procArrayGroupFirst; + /* First pgproc waiting for group transaction status update */ + pg_atomic_uint32 clogGroupFirst; /* WALWriter process's latch */ Latch *walwriterLatch; /* Checkpointer process's latch */ From 2f5ada2710d6e5a668d6d6b27f93ac545a01bafd Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Fri, 1 Sep 2017 17:20:18 +0100 Subject: [PATCH 0079/1087] Provisional list of Major Features --- doc/src/sgml/release-10.sgml | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 1330a9992c..f439869613 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -20,7 +20,13 @@ - (to be written) + (yet to be finalized) + Logical replication using publish/subscribe + Declarative Table Partitioning + Improved Query Parallelism + Significant general performance improvements + SCRAM-SHA-256 strong authentication + Improved monitoring and control From 84be67181aab22ea8723ba0625ee690223cd8785 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Sep 2017 12:23:16 -0400 Subject: [PATCH 0080/1087] pg_dumpall: Add a -E flag to set the encoding, like pg_dump has. Michael Paquier, reviewed by Fabien Coelho Discussion: http://postgr.es/m/CAB7nPqQcYWmrm2n-dVaMUhYPKFU_DxQwPuUGuC4ZF+8B=dS5xQ@mail.gmail.com --- doc/src/sgml/ref/pg_dumpall.sgml | 13 +++++++++++++ src/bin/pg_dump/pg_dumpall.c | 24 +++++++++++++++++++++++- 2 files changed, 36 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml index f8a2521743..1dba702ad9 100644 --- a/doc/src/sgml/ref/pg_dumpall.sgml +++ b/doc/src/sgml/ref/pg_dumpall.sgml @@ -99,6 +99,19 @@ PostgreSQL documentation + + + + + + Create the dump in the specified character set encoding. By default, + the dump is created in the database encoding. (Another way to get the + same result is to set the PGCLIENTENCODING environment + variable to the desired dump encoding.) 
+ + + + diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c index 806b537e64..41c5ff89b7 100644 --- a/src/bin/pg_dump/pg_dumpall.c +++ b/src/bin/pg_dump/pg_dumpall.c @@ -97,6 +97,7 @@ main(int argc, char *argv[]) static struct option long_options[] = { {"data-only", no_argument, NULL, 'a'}, {"clean", no_argument, NULL, 'c'}, + {"encoding", required_argument, NULL, 'E'}, {"file", required_argument, NULL, 'f'}, {"globals-only", no_argument, NULL, 'g'}, {"host", required_argument, NULL, 'h'}, @@ -147,6 +148,7 @@ main(int argc, char *argv[]) char *pguser = NULL; char *pgdb = NULL; char *use_role = NULL; + const char *dumpencoding = NULL; trivalue prompt_password = TRI_DEFAULT; bool data_only = false; bool globals_only = false; @@ -204,7 +206,7 @@ main(int argc, char *argv[]) pgdumpopts = createPQExpBuffer(); - while ((c = getopt_long(argc, argv, "acd:f:gh:l:oOp:rsS:tU:vwWx", long_options, &optindex)) != -1) + while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:oOp:rsS:tU:vwWx", long_options, &optindex)) != -1) { switch (c) { @@ -221,6 +223,12 @@ main(int argc, char *argv[]) connstr = pg_strdup(optarg); break; + case 'E': + dumpencoding = pg_strdup(optarg); + appendPQExpBufferStr(pgdumpopts, " -E "); + appendShellString(pgdumpopts, optarg); + break; + case 'f': filename = pg_strdup(optarg); appendPQExpBufferStr(pgdumpopts, " -f "); @@ -453,6 +461,19 @@ main(int argc, char *argv[]) else OPF = stdout; + /* + * Set the client encoding if requested. + */ + if (dumpencoding) + { + if (PQsetClientEncoding(conn, dumpencoding) < 0) + { + fprintf(stderr, _("%s: invalid client encoding \"%s\" specified\n"), + progname, dumpencoding); + exit_nicely(1); + } + } + /* * Get the active encoding and the standard_conforming_strings setting, so * we know how to escape strings. @@ -588,6 +609,7 @@ help(void) printf(_("\nOptions controlling the output content:\n")); printf(_(" -a, --data-only dump only the data, not the schema\n")); printf(_(" -c, --clean clean (drop) databases before recreating\n")); + printf(_(" -E, --encoding=ENCODING dump the data in encoding ENCODING\n")); printf(_(" -g, --globals-only dump only global objects, no databases\n")); printf(_(" -o, --oids include OIDs in dump\n")); printf(_(" -O, --no-owner skip restoration of object ownership\n")); From b79d69b087561eb6687373031a5098b0694f9ec6 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 1 Sep 2017 13:52:53 -0400 Subject: [PATCH 0081/1087] Ensure SIZE_MAX can be used throughout our code. Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX. We have a couple of places using the hack "(size_t) -1" as a fallback, but it wasn't universally available; which means the code added in commit 2e70d6b5e fails to compile everywhere. Move that hack to c.h so that we can rely on having SIZE_MAX everywhere. Per discussion, it'd be a good idea to make the macro's value safe for use in #if-tests, but that will take a bit more work. This is just a quick expedient to get the buildfarm green again. Back-patch to all supported branches, like the previous commit.
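For illustration, a minimal standalone sketch (not PostgreSQL source; the directive quoted in the comment is hypothetical) of why the "(size_t) -1" fallback works in ordinary expressions but not in preprocessor tests:

    #include <stddef.h>
    #include <stdio.h>

    /* the fallback now centralized in c.h: all-ones bit pattern of size_t */
    #ifndef SIZE_MAX
    #define SIZE_MAX ((size_t) -1)
    #endif

    int
    main(void)
    {
        /* fine in ordinary expressions, e.g. computing an allocation limit */
        size_t huge = SIZE_MAX / 2;

        printf("SIZE_MAX / 2 = %lu\n", (unsigned long) huge);

        /*
         * Not fine in the preprocessor: a directive such as
         *     #if SIZE_MAX > 0xFFFFFFFF
         * cannot evaluate the (size_t) cast; making the macro safe for such
         * tests is exactly the "bit more work" deferred to a later commit.
         */
        return 0;
    }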
Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us --- src/include/c.h | 5 +++++ src/include/utils/memutils.h | 2 +- src/timezone/private.h | 4 ---- 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/src/include/c.h b/src/include/c.h index af799dc1df..4f8bbfcd62 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -343,6 +343,11 @@ typedef unsigned PG_INT128_TYPE uint128; #define PG_INT64_MAX INT64CONST(0x7FFFFFFFFFFFFFFF) #define PG_UINT64_MAX UINT64CONST(0xFFFFFFFFFFFFFFFF) +/* Max value of size_t might also be missing if we don't have stdint.h */ +#ifndef SIZE_MAX +#define SIZE_MAX ((size_t) -1) +#endif + /* * We now always use int64 timestamps, but keep this symbol defined for the * benefit of external code that might test it. diff --git a/src/include/utils/memutils.h b/src/include/utils/memutils.h index c553349066..869c59dc85 100644 --- a/src/include/utils/memutils.h +++ b/src/include/utils/memutils.h @@ -41,7 +41,7 @@ #define AllocSizeIsValid(size) ((Size) (size) <= MaxAllocSize) -#define MaxAllocHugeSize ((Size) -1 >> 1) /* SIZE_MAX / 2 */ +#define MaxAllocHugeSize (SIZE_MAX / 2) #define AllocHugeSizeIsValid(size) ((Size) (size) <= MaxAllocHugeSize) diff --git a/src/timezone/private.h b/src/timezone/private.h index c141fb6131..e65cd1bb4e 100644 --- a/src/timezone/private.h +++ b/src/timezone/private.h @@ -48,10 +48,6 @@ /* Unlike <ctype.h>'s isdigit, this also works if c < 0 | c > UCHAR_MAX. */ #define is_digit(c) ((unsigned)(c) - '0' <= 9) -#ifndef SIZE_MAX -#define SIZE_MAX ((size_t) -1) -#endif - /* * SunOS 4.1.1 libraries lack remove. */ From a0572203532560423c92066b90d13383720dce3a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 1 Sep 2017 14:18:45 -0400 Subject: [PATCH 0082/1087] doc: Remove mentions of server-side CRL and CA file names Commit a445cb92ef5b3a31313ebce30e18cc1d6e0bdecb removed the default file names for server-side CRL and CA files, but left them in the docs with a small note. This removes the note and the previous default names to clarify, as well as changes mentions of the file names to make it clearer that they are configurable. Author: Daniel Gustafsson Reviewed-by: Michael Paquier --- doc/src/sgml/config.sgml | 8 -------- doc/src/sgml/libpq.sgml | 4 ++-- doc/src/sgml/runtime.sgml | 8 ++++---- doc/src/sgml/sslinfo.sgml | 2 +- 4 files changed, 7 insertions(+), 15 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 2b6255ed95..5f59a382f1 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -983,10 +983,6 @@ include_dir 'conf.d' The default is empty, meaning no CA file is loaded, and client certificate verification is not performed. - - In previous releases of PostgreSQL, the name of this file was - hard-coded as root.crt. - @@ -1022,10 +1018,6 @@ include_dir 'conf.d' file or on the server command line. The default is empty, meaning no CRL file is loaded. - - In previous releases of PostgreSQL, the name of this file was - hard-coded as root.crl. - diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index f154b6b5fa..957096681a 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -7638,8 +7638,8 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) certificate of the signing authority to the postgresql.crt file, then its parent authority's certificate, and so on up to a certificate authority, root or intermediate, that is trusted by - the server, i.e. signed by a certificate in the server's - root.crt file. + the server, i.e.
signed by a certificate in the server's root CA file + (). diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index 6d57525515..088316cfb6 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -2264,7 +2264,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 To require the client to supply a trusted certificate, place certificates of the certificate authorities (CAs) - you trust in the file root.crt in the data + you trust in a file named root.crt in the data directory, set the parameter in postgresql.conf to root.crt, and add the authentication option clientcert=1 to the @@ -2321,7 +2321,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 summarizes the files that are relevant to the SSL setup on the server. (The shown file names are default - or typical names. The locally configured names could be different.) + names. The locally configured names could be different.) @@ -2351,14 +2351,14 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - ($PGDATA/root.crt) + trusted certificate authorities checks that client certificate is signed by a trusted certificate authority - ($PGDATA/root.crl) + certificates revoked by certificate authorities client certificate must not be on this list diff --git a/doc/src/sgml/sslinfo.sgml b/doc/src/sgml/sslinfo.sgml index 7bda33efa3..1fd323a0b6 100644 --- a/doc/src/sgml/sslinfo.sgml +++ b/doc/src/sgml/sslinfo.sgml @@ -150,7 +150,7 @@ This function is really useful only if you have more than one trusted CA - certificate in your server's root.crt file, or if this CA + certificate in your server's certificate authority file, or if this CA has issued some intermediate certificate authority certificates. From 9d6b160d7db76809f0c696d9073f6955dd5a973a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 1 Sep 2017 15:14:18 -0400 Subject: [PATCH 0083/1087] Make [U]INT64CONST safe for use in #if conditions. Instead of using a cast to force the constant to be the right width, assume we can plaster on an L, UL, LL, or ULL suffix as appropriate. The old approach to this is very hoary, dating from before we were willing to require compilers to have working int64 types. This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX constants safe to use in preprocessor conditions, where a cast doesn't work. Other symbolic constants that might be defined using [U]INT64CONST are likewise safer than before. Also fix the SIZE_MAX macro to be similarly safe, if we are forced to provide a definition for that. The test added in commit 2e70d6b5e happens to do what we want even with the hack "(size_t) -1" definition, but we could easily get burnt on other tests in future. Back-patch to all supported branches, like the previous commits. Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us --- configure | 18 ------------------ configure.in | 12 ------------ src/include/c.h | 21 ++++++++++----------- src/include/pg_config.h.in | 4 ---- src/include/pg_config.h.win32 | 8 +------- 5 files changed, 11 insertions(+), 52 deletions(-) diff --git a/configure b/configure index a2f9a256b4..0d76e5ea42 100755 --- a/configure +++ b/configure @@ -14254,24 +14254,6 @@ cat >>confdefs.h <<_ACEOF _ACEOF - -if test x"$HAVE_LONG_LONG_INT_64" = xyes ; then - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. 
*/ - -#define INT64CONST(x) x##LL -long long int foo = INT64CONST(0x1234567890123456); - -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - -$as_echo "#define HAVE_LL_CONSTANTS 1" >>confdefs.h - -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext -fi - - # If we found "long int" is 64 bits, assume snprintf handles it. If # we found we need to use "long long int", better check. We cope with # snprintfs that use %lld, %qd, or %I64d as the format. If none of these diff --git a/configure.in b/configure.in index e94fba5235..bdc41b071f 100644 --- a/configure.in +++ b/configure.in @@ -1735,18 +1735,6 @@ fi AC_DEFINE_UNQUOTED(PG_INT64_TYPE, $pg_int64_type, [Define to the name of a signed 64-bit integer type.]) -dnl If we need to use "long long int", figure out whether nnnLL notation works. - -if test x"$HAVE_LONG_LONG_INT_64" = xyes ; then - AC_COMPILE_IFELSE([AC_LANG_SOURCE([ -#define INT64CONST(x) x##LL -long long int foo = INT64CONST(0x1234567890123456); -])], - [AC_DEFINE(HAVE_LL_CONSTANTS, 1, [Define to 1 if constants of type 'long long int' should have the suffix LL.])], - []) -fi - - # If we found "long int" is 64 bits, assume snprintf handles it. If # we found we need to use "long long int", better check. We cope with # snprintfs that use %lld, %qd, or %I64d as the format. If none of these diff --git a/src/include/c.h b/src/include/c.h index 4f8bbfcd62..4fb8ef0c2f 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -288,6 +288,8 @@ typedef long int int64; #ifndef HAVE_UINT64 typedef unsigned long int uint64; #endif +#define INT64CONST(x) (x##L) +#define UINT64CONST(x) (x##UL) #elif defined(HAVE_LONG_LONG_INT_64) /* We have working support for "long long int", use that */ @@ -297,20 +299,13 @@ typedef long long int int64; #ifndef HAVE_UINT64 typedef unsigned long long int uint64; #endif +#define INT64CONST(x) (x##LL) +#define UINT64CONST(x) (x##ULL) #else /* neither HAVE_LONG_INT_64 nor HAVE_LONG_LONG_INT_64 */ #error must have a working 64-bit integer datatype #endif -/* Decide if we need to decorate 64-bit constants */ -#ifdef HAVE_LL_CONSTANTS -#define INT64CONST(x) ((int64) x##LL) -#define UINT64CONST(x) ((uint64) x##ULL) -#else -#define INT64CONST(x) ((int64) x) -#define UINT64CONST(x) ((uint64) x) -#endif - /* snprintf format strings to use for 64-bit integers */ #define INT64_FORMAT "%" INT64_MODIFIER "d" #define UINT64_FORMAT "%" INT64_MODIFIER "u" @@ -338,14 +333,18 @@ typedef unsigned PG_INT128_TYPE uint128; #define PG_UINT16_MAX (0xFFFF) #define PG_INT32_MIN (-0x7FFFFFFF-1) #define PG_INT32_MAX (0x7FFFFFFF) -#define PG_UINT32_MAX (0xFFFFFFFF) +#define PG_UINT32_MAX (0xFFFFFFFFU) #define PG_INT64_MIN (-INT64CONST(0x7FFFFFFFFFFFFFFF) - 1) #define PG_INT64_MAX INT64CONST(0x7FFFFFFFFFFFFFFF) #define PG_UINT64_MAX UINT64CONST(0xFFFFFFFFFFFFFFFF) /* Max value of size_t might also be missing if we don't have stdint.h */ #ifndef SIZE_MAX -#define SIZE_MAX ((size_t) -1) +#if SIZEOF_SIZE_T == 8 +#define SIZE_MAX PG_UINT64_MAX +#else +#define SIZE_MAX PG_UINT32_MAX +#endif #endif /* diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index dcb7a1a320..579d195663 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -339,10 +339,6 @@ /* Define to 1 if you have the `z' library (-lz). */ #undef HAVE_LIBZ -/* Define to 1 if constants of type 'long long int' should have the suffix LL. - */ -#undef HAVE_LL_CONSTANTS - /* Define to 1 if the system has the type `locale_t'. 
*/ #undef HAVE_LOCALE_T diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index 0b4110472e..27aab21be7 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -223,12 +223,6 @@ /* Define to 1 if you have the `z' library (-lz). */ /* #undef HAVE_LIBZ */ -/* Define to 1 if constants of type 'long long int' should have the suffix LL. - */ -#if (_MSC_VER > 1200) -#define HAVE_LL_CONSTANTS 1 -#endif - /* Define to 1 if the system has the type `locale_t'. */ #define HAVE_LOCALE_T 1 @@ -237,7 +231,7 @@ /* Define to 1 if `long long int' works and is 64 bits. */ #if (_MSC_VER > 1200) -#define HAVE_LONG_LONG_INT_64 +#define HAVE_LONG_LONG_INT_64 1 #endif /* Define to 1 if you have the `mbstowcs_l' function. */ From 0cb8b7531db063bce7def2ef24f616285f1f4b04 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Sep 2017 15:16:44 -0400 Subject: [PATCH 0084/1087] Tighten up some code in RelationBuildPartitionDesc. This probably doesn't save anything meaningful in terms of performance, but making the code simpler is a good idea anyway. Code by Beena Emerson, extracted from a larger patch by Jeevan Ladhe, slightly adjusted by me. Discussion: http://postgr.es/m/CAOgcT0ONgwajdtkoq+AuYkdTPY9cLWWLjxt_k4SXue3eieAr+g@mail.gmail.com --- src/backend/catalog/partition.c | 54 +++++++++++---------------------- 1 file changed, 17 insertions(+), 37 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 96a64ce6b2..50162632f5 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -303,21 +303,18 @@ RelationBuildPartitionDesc(Relation rel) } else if (key->strategy == PARTITION_STRATEGY_RANGE) { - int j, - k; + int k; PartitionRangeBound **all_bounds, *prev; - bool *distinct_indexes; all_bounds = (PartitionRangeBound **) palloc0(2 * nparts * sizeof(PartitionRangeBound *)); - distinct_indexes = (bool *) palloc(2 * nparts * sizeof(bool)); /* * Create a unified list of range bounds across all the * partitions. */ - i = j = 0; + i = ndatums = 0; foreach(cell, boundspecs) { PartitionBoundSpec *spec = castNode(PartitionBoundSpec, @@ -332,12 +329,12 @@ RelationBuildPartitionDesc(Relation rel) true); upper = make_one_range_bound(key, i, spec->upperdatums, false); - all_bounds[j] = lower; - all_bounds[j + 1] = upper; - j += 2; + all_bounds[ndatums++] = lower; + all_bounds[ndatums++] = upper; i++; } - Assert(j == 2 * nparts); + + Assert(ndatums == nparts * 2); /* Sort all the bounds in ascending order */ qsort_arg(all_bounds, 2 * nparts, @@ -345,13 +342,12 @@ RelationBuildPartitionDesc(Relation rel) qsort_partition_rbound_cmp, (void *) key); - /* - * Count the number of distinct bounds to allocate an array of - * that size. - */ - ndatums = 0; + /* Save distinct bounds from all_bounds into rbounds. */ + rbounds = (PartitionRangeBound **) + palloc(ndatums * sizeof(PartitionRangeBound *)); + k = 0; prev = NULL; - for (i = 0; i < 2 * nparts; i++) + for (i = 0; i < ndatums; i++) { PartitionRangeBound *cur = all_bounds[i]; bool is_distinct = false; @@ -388,34 +384,18 @@ RelationBuildPartitionDesc(Relation rel) } /* - * Count the current bound if it is distinct from the previous - * one. Also, store if the index i contains a distinct bound - * that we'd like put in the relcache array. + * Only if the bound is distinct save it into a temporary + * array i.e. rbounds which is later copied into boundinfo + * datums array. 
*/ if (is_distinct) - { - distinct_indexes[i] = true; - ndatums++; - } - else - distinct_indexes[i] = false; + rbounds[k++] = all_bounds[i]; prev = cur; } - /* - * Finally save them in an array from where they will be copied - * into the relcache. - */ - rbounds = (PartitionRangeBound **) palloc(ndatums * - sizeof(PartitionRangeBound *)); - k = 0; - for (i = 0; i < 2 * nparts; i++) - { - if (distinct_indexes[i]) - rbounds[k++] = all_bounds[i]; - } - Assert(k == ndatums); + /* Update ndatums to hold the count of distinct datums. */ + ndatums = k; } else elog(ERROR, "unexpected partition strategy: %d", From c039ba0716383ccaf88c9be1a7f0803a77823de1 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 1 Sep 2017 15:36:33 -0400 Subject: [PATCH 0085/1087] Add memory info to getrusage output Add the maxrss field to the getrusage output (log_*_stats). This was previously omitted because of portability concerns, but we feel this might not be a concern anymore. based on patch by Justin Pryzby --- src/backend/tcop/postgres.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index b8d860ebdb..8d3fecf6d6 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -4421,11 +4421,8 @@ ShowUsage(const char *title) } /* - * the only stats we don't show here are for memory usage -- i can't - * figure out how to interpret the relevant fields in the rusage struct, - * and they change names across o/s platforms, anyway. if you can figure - * out what the entries mean, you can somehow extract resident set size, - * shared text size, and unshared data and stack sizes. + * The only stats we don't show here are ixrss, idrss, isrss. It takes + * some work to interpret them, and most platforms don't fill them in. */ initStringInfo(&str); @@ -4445,6 +4442,16 @@ ShowUsage(const char *title) (long) sys.tv_sec, (long) sys.tv_usec); #if defined(HAVE_GETRUSAGE) + appendStringInfo(&str, + "!\t%ld kB max resident size\n", +#if defined(__darwin__) + /* in bytes on macOS */ + r.ru_maxrss/1024 +#else + /* in kilobytes on most other platforms */ + r.ru_maxrss +#endif + ); appendStringInfo(&str, "!\t%ld/%ld [%ld/%ld] filesystem blocks in/out\n", r.ru_inblock - Save_r.ru_inblock, From 51daa7bdb39e1bdc31eb99fd3f54f61743ebb7ae Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 1 Sep 2017 17:38:54 -0400 Subject: [PATCH 0086/1087] Improve division of labor between execParallel.c and nodeGather[Merge].c. Move the responsibility for creating/destroying TupleQueueReaders into execParallel.c, to avoid duplicative coding in nodeGather.c and nodeGatherMerge.c. Also, instead of having DestroyTupleQueueReader do shm_mq_detach, do it in the caller (which is now only ExecParallelFinish). This means execParallel.c does both the attaching and detaching of the tuple-queue-reader shm_mqs, which seems less weird than the previous arrangement. These changes also eliminate a vestigial memory leak (of the pei->tqueue array). It's now demonstrable that rescans of Gather or GatherMerge don't leak memory. 
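For illustration, the ownership rule this commit settles on can be reduced to a toy sketch (illustrative C with made-up Queue/Reader types, not executor code): the caller detaches the communication channel itself, and the destroy function no longer touches it.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Queue { int attached; } Queue;
    typedef struct Reader { Queue *queue; } Reader;

    static Reader *
    reader_create(Queue *queue)
    {
        Reader *reader = malloc(sizeof(Reader));

        reader->queue = queue;
        return reader;
    }

    static void
    reader_destroy(Reader *reader)
    {
        /* deliberately leaves reader->queue alone: it may be detached already */
        free(reader);
    }

    int
    main(void)
    {
        Queue queue = { 1 };
        Reader *reader = reader_create(&queue);

        queue.attached = 0;     /* caller detaches the queue first... */
        reader_destroy(reader); /* ...then destroys the reader */
        puts("teardown ordered as in the new ExecParallelFinish");
        return 0;
    }

Keeping creation and destruction together in execParallel.c is also what removes the duplicated reader-management code from nodeGather.c and nodeGatherMerge.c, as the hunks below show.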
Discussion: https://postgr.es/m/8670.1504192177@sss.pgh.pa.us --- src/backend/executor/execParallel.c | 72 ++++++++++++++++++++++++-- src/backend/executor/nodeGather.c | 64 ++++++++--------------- src/backend/executor/nodeGatherMerge.c | 50 ++++++------------ src/backend/executor/tqueue.c | 4 +- src/include/executor/execParallel.h | 18 ++++--- 5 files changed, 119 insertions(+), 89 deletions(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index c713b85139..59f3744a14 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -534,9 +534,12 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, shm_toc_insert(pcxt->toc, PARALLEL_KEY_BUFFER_USAGE, bufusage_space); pei->buffer_usage = bufusage_space; - /* Set up tuple queues. */ + /* Set up the tuple queues that the workers will write into. */ pei->tqueue = ExecParallelSetupTupleQueues(pcxt, false); + /* We don't need the TupleQueueReaders yet, though. */ + pei->reader = NULL; + /* * If instrumentation options were supplied, allocate space for the data. * It only gets partially initialized here; the rest happens during @@ -603,6 +606,37 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, return pei; } +/* + * Set up tuple queue readers to read the results of a parallel subplan. + * All the workers are expected to return tuples matching tupDesc. + * + * This is separate from ExecInitParallelPlan() because we can launch the + * worker processes and let them start doing something before we do this. + */ +void +ExecParallelCreateReaders(ParallelExecutorInfo *pei, + TupleDesc tupDesc) +{ + int nworkers = pei->pcxt->nworkers_launched; + int i; + + Assert(pei->reader == NULL); + + if (nworkers > 0) + { + pei->reader = (TupleQueueReader **) + palloc(nworkers * sizeof(TupleQueueReader *)); + + for (i = 0; i < nworkers; i++) + { + shm_mq_set_handle(pei->tqueue[i], + pei->pcxt->worker[i].bgwhandle); + pei->reader[i] = CreateTupleQueueReader(pei->tqueue[i], + tupDesc); + } + } +} + /* * Re-initialize the parallel executor shared memory state before launching * a fresh batch of workers. @@ -616,6 +650,7 @@ ExecParallelReinitialize(PlanState *planstate, ReinitializeParallelDSM(pei->pcxt); pei->tqueue = ExecParallelSetupTupleQueues(pei->pcxt, true); + pei->reader = NULL; pei->finished = false; /* Traverse plan tree and let each child node reset associated state. */ @@ -741,16 +776,45 @@ ExecParallelRetrieveInstrumentation(PlanState *planstate, void ExecParallelFinish(ParallelExecutorInfo *pei) { + int nworkers = pei->pcxt->nworkers_launched; int i; + /* Make this be a no-op if called twice in a row. */ if (pei->finished) return; - /* First, wait for the workers to finish. */ + /* + * Detach from tuple queues ASAP, so that any still-active workers will + * notice that no further results are wanted. + */ + if (pei->tqueue != NULL) + { + for (i = 0; i < nworkers; i++) + shm_mq_detach(pei->tqueue[i]); + pfree(pei->tqueue); + pei->tqueue = NULL; + } + + /* + * While we're waiting for the workers to finish, let's get rid of the + * tuple queue readers. (Any other local cleanup could be done here too.) + */ + if (pei->reader != NULL) + { + for (i = 0; i < nworkers; i++) + DestroyTupleQueueReader(pei->reader[i]); + pfree(pei->reader); + pei->reader = NULL; + } + + /* Now wait for the workers to finish. */ WaitForParallelWorkersToFinish(pei->pcxt); - /* Next, accumulate buffer usage. 
*/ - for (i = 0; i < pei->pcxt->nworkers_launched; ++i) + /* + * Next, accumulate buffer usage. (This must wait for the workers to + * finish, or we might get incomplete data.) + */ + for (i = 0; i < nworkers; i++) InstrAccumParallelQuery(&pei->buffer_usage[i]); /* Finally, accumulate instrumentation, if any. */ diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index d93fbacdf9..022d75b4b8 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -130,7 +130,6 @@ ExecGather(PlanState *pstate) { GatherState *node = castNode(GatherState, pstate); TupleTableSlot *fslot = node->funnel_slot; - int i; TupleTableSlot *slot; ExprContext *econtext; @@ -173,33 +172,30 @@ ExecGather(PlanState *pstate) LaunchParallelWorkers(pcxt); /* We save # workers launched for the benefit of EXPLAIN */ node->nworkers_launched = pcxt->nworkers_launched; - node->nreaders = 0; - node->nextreader = 0; /* Set up tuple queue readers to read the results. */ if (pcxt->nworkers_launched > 0) { - node->reader = palloc(pcxt->nworkers_launched * - sizeof(TupleQueueReader *)); - - for (i = 0; i < pcxt->nworkers_launched; ++i) - { - shm_mq_set_handle(node->pei->tqueue[i], - pcxt->worker[i].bgwhandle); - node->reader[node->nreaders++] = - CreateTupleQueueReader(node->pei->tqueue[i], - fslot->tts_tupleDescriptor); - } + ExecParallelCreateReaders(node->pei, + fslot->tts_tupleDescriptor); + /* Make a working array showing the active readers */ + node->nreaders = pcxt->nworkers_launched; + node->reader = (TupleQueueReader **) + palloc(node->nreaders * sizeof(TupleQueueReader *)); + memcpy(node->reader, node->pei->reader, + node->nreaders * sizeof(TupleQueueReader *)); } else { /* No workers? Then never mind. */ - ExecShutdownGatherWorkers(node); + node->nreaders = 0; + node->reader = NULL; } + node->nextreader = 0; } /* Run plan locally if no workers or not single-copy. */ - node->need_to_scan_locally = (node->reader == NULL) + node->need_to_scan_locally = (node->nreaders == 0) || !gather->single_copy; node->initialized = true; } @@ -258,11 +254,11 @@ gather_getnext(GatherState *gatherstate) MemoryContext tupleContext = gatherstate->ps.ps_ExprContext->ecxt_per_tuple_memory; HeapTuple tup; - while (gatherstate->reader != NULL || gatherstate->need_to_scan_locally) + while (gatherstate->nreaders > 0 || gatherstate->need_to_scan_locally) { CHECK_FOR_INTERRUPTS(); - if (gatherstate->reader != NULL) + if (gatherstate->nreaders > 0) { MemoryContext oldContext; @@ -319,19 +315,15 @@ gather_readnext(GatherState *gatherstate) tup = TupleQueueReaderNext(reader, true, &readerdone); /* - * If this reader is done, remove it, and collapse the array. If all - * readers are done, clean up remaining worker state. + * If this reader is done, remove it from our working array of active + * readers. If all readers are done, we're outta here. */ if (readerdone) { Assert(!tup); - DestroyTupleQueueReader(reader); --gatherstate->nreaders; if (gatherstate->nreaders == 0) - { - ExecShutdownGatherWorkers(gatherstate); return NULL; - } memmove(&gatherstate->reader[gatherstate->nextreader], &gatherstate->reader[gatherstate->nextreader + 1], sizeof(TupleQueueReader *) @@ -378,37 +370,25 @@ gather_readnext(GatherState *gatherstate) /* ---------------------------------------------------------------- * ExecShutdownGatherWorkers * - * Destroy the parallel workers. Collect all the stats after - * workers are stopped, else some work done by workers won't be - * accounted. + * Stop all the parallel workers. 
* ---------------------------------------------------------------- */ static void ExecShutdownGatherWorkers(GatherState *node) { - /* Shut down tuple queue readers before shutting down workers. */ - if (node->reader != NULL) - { - int i; - - for (i = 0; i < node->nreaders; ++i) - DestroyTupleQueueReader(node->reader[i]); - - pfree(node->reader); - node->reader = NULL; - } - - /* Now shut down the workers. */ if (node->pei != NULL) ExecParallelFinish(node->pei); + + /* Flush local copy of reader array */ + if (node->reader) + pfree(node->reader); + node->reader = NULL; } /* ---------------------------------------------------------------- * ExecShutdownGather * * Destroy the setup for parallel workers including parallel context. - * Collect all the stats after workers are stopped, else some work - * done by workers won't be accounted. * ---------------------------------------------------------------- */ void diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index b8bb4f8eb0..d20d46606e 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -178,7 +178,6 @@ ExecGatherMerge(PlanState *pstate) GatherMergeState *node = castNode(GatherMergeState, pstate); TupleTableSlot *slot; ExprContext *econtext; - int i; CHECK_FOR_INTERRUPTS(); @@ -214,27 +213,23 @@ ExecGatherMerge(PlanState *pstate) LaunchParallelWorkers(pcxt); /* We save # workers launched for the benefit of EXPLAIN */ node->nworkers_launched = pcxt->nworkers_launched; - node->nreaders = 0; /* Set up tuple queue readers to read the results. */ if (pcxt->nworkers_launched > 0) { - node->reader = palloc(pcxt->nworkers_launched * - sizeof(TupleQueueReader *)); - - for (i = 0; i < pcxt->nworkers_launched; ++i) - { - shm_mq_set_handle(node->pei->tqueue[i], - pcxt->worker[i].bgwhandle); - node->reader[node->nreaders++] = - CreateTupleQueueReader(node->pei->tqueue[i], - node->tupDesc); - } + ExecParallelCreateReaders(node->pei, node->tupDesc); + /* Make a working array showing the active readers */ + node->nreaders = pcxt->nworkers_launched; + node->reader = (TupleQueueReader **) + palloc(node->nreaders * sizeof(TupleQueueReader *)); + memcpy(node->reader, node->pei->reader, + node->nreaders * sizeof(TupleQueueReader *)); } else { /* No workers? Then never mind. */ - ExecShutdownGatherMergeWorkers(node); + node->nreaders = 0; + node->reader = NULL; } } @@ -284,8 +279,6 @@ ExecEndGatherMerge(GatherMergeState *node) * ExecShutdownGatherMerge * * Destroy the setup for parallel workers including parallel context. - * Collect all the stats after workers are stopped, else some work - * done by workers won't be accounted. * ---------------------------------------------------------------- */ void @@ -304,30 +297,19 @@ ExecShutdownGatherMerge(GatherMergeState *node) /* ---------------------------------------------------------------- * ExecShutdownGatherMergeWorkers * - * Destroy the parallel workers. Collect all the stats after - * workers are stopped, else some work done by workers won't be - * accounted. + * Stop all the parallel workers. * ---------------------------------------------------------------- */ static void ExecShutdownGatherMergeWorkers(GatherMergeState *node) { - /* Shut down tuple queue readers before shutting down workers. 
*/ - if (node->reader != NULL) - { - int i; - - for (i = 0; i < node->nreaders; ++i) - if (node->reader[i]) - DestroyTupleQueueReader(node->reader[i]); - - pfree(node->reader); - node->reader = NULL; - } - - /* Now shut down the workers. */ if (node->pei != NULL) ExecParallelFinish(node->pei); + + /* Flush local copy of reader array */ + if (node->reader) + pfree(node->reader); + node->reader = NULL; } /* ---------------------------------------------------------------- @@ -672,8 +654,6 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) else if (tuple_buffer->done) { /* Reader is known to be exhausted. */ - DestroyTupleQueueReader(gm_state->reader[reader - 1]); - gm_state->reader[reader - 1] = NULL; return false; } else diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c index ee4bec0385..6afcd1a30a 100644 --- a/src/backend/executor/tqueue.c +++ b/src/backend/executor/tqueue.c @@ -651,11 +651,13 @@ CreateTupleQueueReader(shm_mq_handle *handle, TupleDesc tupledesc) /* * Destroy a tuple queue reader. + * + * Note: cleaning up the underlying shm_mq is the caller's responsibility. + * We won't access it here, as it may be detached already. */ void DestroyTupleQueueReader(TupleQueueReader *reader) { - shm_mq_detach(reader->queue); if (reader->typmodmap != NULL) hash_destroy(reader->typmodmap); /* Is it worth trying to free substructure of the remap tree? */ diff --git a/src/include/executor/execParallel.h b/src/include/executor/execParallel.h index 1cb895d898..ed231f2d53 100644 --- a/src/include/executor/execParallel.h +++ b/src/include/executor/execParallel.h @@ -23,17 +23,21 @@ typedef struct SharedExecutorInstrumentation SharedExecutorInstrumentation; typedef struct ParallelExecutorInfo { - PlanState *planstate; - ParallelContext *pcxt; - BufferUsage *buffer_usage; - SharedExecutorInstrumentation *instrumentation; - shm_mq_handle **tqueue; - dsa_area *area; - bool finished; + PlanState *planstate; /* plan subtree we're running in parallel */ + ParallelContext *pcxt; /* parallel context we're using */ + BufferUsage *buffer_usage; /* points to bufusage area in DSM */ + SharedExecutorInstrumentation *instrumentation; /* optional */ + dsa_area *area; /* points to DSA area in DSM */ + bool finished; /* set true by ExecParallelFinish */ + /* These two arrays have pcxt->nworkers_launched entries: */ + shm_mq_handle **tqueue; /* tuple queues for worker output */ + struct TupleQueueReader **reader; /* tuple reader/writer support */ } ParallelExecutorInfo; extern ParallelExecutorInfo *ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, int64 tuples_needed); +extern void ExecParallelCreateReaders(ParallelExecutorInfo *pei, + TupleDesc tupDesc); extern void ExecParallelFinish(ParallelExecutorInfo *pei); extern void ExecParallelCleanup(ParallelExecutorInfo *pei); extern void ExecParallelReinitialize(PlanState *planstate, From afc58affb6616a415ea991763e0383832346e7c7 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 1 Sep 2017 23:34:12 -0400 Subject: [PATCH 0087/1087] doc: Fix typos and other minor issues Author: Alexander Lakhin --- doc/src/sgml/catalogs.sgml | 2 +- doc/src/sgml/event-trigger.sgml | 1 + doc/src/sgml/indexam.sgml | 2 +- doc/src/sgml/logicaldecoding.sgml | 2 +- doc/src/sgml/perform.sgml | 2 +- doc/src/sgml/ref/alter_sequence.sgml | 2 +- doc/src/sgml/ref/alter_table.sgml | 4 ++-- doc/src/sgml/ref/copy.sgml | 2 +- doc/src/sgml/ref/create_sequence.sgml | 2 +- doc/src/sgml/ref/create_table.sgml | 4 ++-- 
doc/src/sgml/release-10.sgml | 4 ++-- 11 files changed, 14 insertions(+), 13 deletions(-) diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index ef7054cf26..3126990f4d 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -5385,7 +5385,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pubname - Name + name Name of the publication diff --git a/doc/src/sgml/event-trigger.sgml b/doc/src/sgml/event-trigger.sgml index 3ed14f08c0..c7b880d7c9 100644 --- a/doc/src/sgml/event-trigger.sgml +++ b/doc/src/sgml/event-trigger.sgml @@ -565,6 +565,7 @@ X - - + CREATE USER MAPPING diff --git a/doc/src/sgml/indexam.sgml b/doc/src/sgml/indexam.sgml index ac512588e2..aa3d371d2e 100644 --- a/doc/src/sgml/indexam.sgml +++ b/doc/src/sgml/indexam.sgml @@ -559,7 +559,7 @@ amgettuple (IndexScanDesc scan, a HeapTuple pointer stored at scan->xs_hitup, with tuple descriptor scan->xs_hitupdesc. (The latter format should be used when reconstructing data that might possibly not fit - into an IndexTuple.) In either case, + into an IndexTuple.) In either case, management of the data referenced by the pointer is the access method's responsibility. The data must remain good at least until the next amgettuple, amrescan, or amendscan diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml index f8142518c1..35ac5abbe5 100644 --- a/doc/src/sgml/logicaldecoding.sgml +++ b/doc/src/sgml/logicaldecoding.sgml @@ -345,7 +345,7 @@ $ pg_recvlogical -d postgres --slot test --drop-slot The pg_replication_slots view and the - pg_stat_replication + pg_stat_replication view provide information about the current state of replication slots and streaming replication connections respectively. These views apply to both physical and logical replication. diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml index 924f6091ba..1346328653 100644 --- a/doc/src/sgml/perform.sgml +++ b/doc/src/sgml/perform.sgml @@ -1840,7 +1840,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Increase and ; this reduces the frequency + linkend="guc-checkpoint-timeout">; this reduces the frequency of checkpoints, but increases the storage requirements of /pg_wal. diff --git a/doc/src/sgml/ref/alter_sequence.sgml b/doc/src/sgml/ref/alter_sequence.sgml index 3a04d07ecc..190c8d6485 100644 --- a/doc/src/sgml/ref/alter_sequence.sgml +++ b/doc/src/sgml/ref/alter_sequence.sgml @@ -89,7 +89,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S The optional clause AS data_type changes the data type of the sequence. Valid types are - are smallint, integer, + smallint, integer, and bigint. diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 69600321e6..dae63077ee 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -620,8 +620,8 @@ ALTER TABLE [ IF EXISTS ] name SHARE UPDATE EXCLUSIVE lock will be taken for fillfactor and autovacuum storage parameters, as well as the following planner related parameters: - effective_io_concurrency, parallel_workers, seq_page_cost - random_page_cost, n_distinct and n_distinct_inherited. + effective_io_concurrency, parallel_workers, seq_page_cost, + random_page_cost, n_distinct and n_distinct_inherited. 
diff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml index 8de1150dfb..732efe69e6 100644 --- a/doc/src/sgml/ref/copy.sgml +++ b/doc/src/sgml/ref/copy.sgml @@ -482,7 +482,7 @@ COPY count For identity columns, the COPY FROM command will always write the column values provided in the input data, like - the INPUT option OVERRIDING SYSTEM + the INSERT option OVERRIDING SYSTEM VALUE. diff --git a/doc/src/sgml/ref/create_sequence.sgml b/doc/src/sgml/ref/create_sequence.sgml index f1448e7ab3..2af8c8d23e 100644 --- a/doc/src/sgml/ref/create_sequence.sgml +++ b/doc/src/sgml/ref/create_sequence.sgml @@ -119,7 +119,7 @@ SELECT * FROM name; The optional clause AS data_type specifies the data type of the sequence. Valid types are - are smallint, integer, + smallint, integer, and bigint. bigint is the default. The data type determines the default minimum and maximum values of the sequence. diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index e9c2c49533..a6ca590249 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -473,8 +473,8 @@ FROM ( { numeric_literal | partitioned table. The parenthesized list of columns or expressions forms the partition key for the table. When using range partitioning, the partition key can - include multiple columns or expressions (up to 32, but this limit can - altered when building PostgreSQL.), but for + include multiple columns or expressions (up to 32, but this limit can be + altered when building PostgreSQL), but for list partitioning, the partition key must consist of a single column or expression. If no B-tree operator class is specified when creating a partitioned table, the default B-tree operator class for the datatype will diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index f439869613..1a9110614d 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -2905,7 +2905,7 @@ --> Overhaul documentation build - process (Alexander Lakhin, Alexander Law) + process (Alexander Lakhin) @@ -2919,7 +2919,7 @@ - Previously Jade, DSSSL, and + Previously Jade, DSSSL, and JadeTex were used. From e451901804bd96a6b0fe3875b5c90aa0555c6a05 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 3 Sep 2017 11:01:08 -0400 Subject: [PATCH 0088/1087] Fix macro-redefinition warning on MSVC. In commit 9d6b160d7, I tweaked pg_config.h.win32 to use "#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty, for consistency with what happens in an autoconf'd build. But Solution.pm injects another definition of that macro into ecpg_config.h, leading to justifiable (though harmless) compiler whining. Make that one consistent too. Back-patch, like the previous patch. 
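For illustration, the whining is easy to reproduce in a standalone translation unit (not PostgreSQL source): C permits redefining a macro only with an identical replacement list, so an empty definition followed by a definition as 1 draws a macro-redefinition warning.

    /* empty definition, as Solution.pm used to inject into ecpg_config.h */
    #define HAVE_LONG_LONG_INT_64

    /* different replacement list: the compiler warns about this redefinition */
    #define HAVE_LONG_LONG_INT_64 1

    int
    main(void)
    {
        return 0;
    }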
Discussion: https://postgr.es/m/CAEepm=1dWsXROuSbRg8PbKLh0S=8Ou-V8sr05DxmJOF5chBxqQ@mail.gmail.com --- src/tools/msvc/Solution.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm index 01e5846b63..19a95ddc0e 100644 --- a/src/tools/msvc/Solution.pm +++ b/src/tools/msvc/Solution.pm @@ -425,7 +425,7 @@ s{PG_VERSION_STR "[^"]+"}{PG_VERSION_STR "PostgreSQL $self->{strver}$extraver, c || confess "Could not open ecpg_config.h"; print $o <<EOF; #if (_MSC_VER > 1200) -#define HAVE_LONG_LONG_INT_64 +#define HAVE_LONG_LONG_INT_64 1 #define ENABLE_THREAD_SAFETY 1 EOF print $o "#endif\n"; From 4faa1dc2eb02ba67303110e025d44abb40b12725 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 3 Sep 2017 11:12:29 -0400 Subject: [PATCH 0089/1087] Suppress compiler warnings in dshash.c. Some compilers complain, not unreasonably, about left-shifting an int32 "1" and then assigning the result to an int64. In practice I sure hope that this data structure never gets large enough that an overflow would actually occur; but let's cast the constant to the right type to avoid the hazard. In passing, fix a typo in dshash.h. Amit Kapila, adjusted as per comment from Thomas Munro. Discussion: https://postgr.es/m/CAA4eK1+5vfVMYtjK_NX8O3-42yM3o80qdqWnQzGquPrbq6mb+A@mail.gmail.com --- src/backend/lib/dshash.c | 8 ++++---- src/include/lib/dshash.h | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/src/backend/lib/dshash.c b/src/backend/lib/dshash.c index 5dbd0c4227..448e058725 100644 --- a/src/backend/lib/dshash.c +++ b/src/backend/lib/dshash.c @@ -315,7 +315,7 @@ dshash_destroy(dshash_table *hash_table) ensure_valid_bucket_pointers(hash_table); /* Free all the entries. */ - size = 1 << hash_table->size_log2; + size = ((size_t) 1) << hash_table->size_log2; for (i = 0; i < size; ++i) { dsa_pointer item_pointer = hash_table->buckets[i]; @@ -676,7 +676,7 @@ resize(dshash_table *hash_table, size_t new_size_log2) dsa_pointer new_buckets_shared; dsa_pointer *new_buckets; size_t size; - size_t new_size = 1 << new_size_log2; + size_t new_size = ((size_t) 1) << new_size_log2; size_t i; /* @@ -707,10 +707,10 @@ resize(dshash_table *hash_table, size_t new_size_log2) new_buckets = dsa_get_address(hash_table->area, new_buckets_shared); /* - * We've allocate the new bucket array; all that remains to do now is to + * We've allocated the new bucket array; all that remains to do now is to * reinsert all items, which amounts to adjusting all the pointers. */ - size = 1 << hash_table->control->size_log2; + size = ((size_t) 1) << hash_table->control->size_log2; for (i = 0; i < size; ++i) { dsa_pointer item_pointer = hash_table->buckets[i]; diff --git a/src/include/lib/dshash.h b/src/include/lib/dshash.h index 3fd91f8697..362871bfe0 100644 --- a/src/include/lib/dshash.h +++ b/src/include/lib/dshash.h @@ -39,7 +39,7 @@ typedef dshash_hash (*dshash_hash_function) (const void *v, size_t size, * members tranche_id and tranche_name do not need to be initialized when * attaching to an existing hash table. * - * Compare and hash functions mus be supplied even when attaching, because we + * Compare and hash functions must be supplied even when attaching, because we * can't safely share function pointers between backends in general. Either * the arg variants or the non-arg variants should be supplied; the other * function pointers should be NULL.
If the arg variants are supplied then the From 863d75439e8733b4bf6195a2c8a09966f04d8fbe Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 4 Sep 2017 11:08:52 +0200 Subject: [PATCH 0090/1087] Fix translatable string Discussion: https://postgr.es/m/20170828130545.sdajqlpr37hmmd6a@alvherre.pgsql --- src/bin/pg_rewind/libpq_fetch.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/src/bin/pg_rewind/libpq_fetch.c b/src/bin/pg_rewind/libpq_fetch.c index a6ff4e3817..0cdff55cab 100644 --- a/src/bin/pg_rewind/libpq_fetch.c +++ b/src/bin/pg_rewind/libpq_fetch.c @@ -270,6 +270,7 @@ receiveFileChunks(const char *sql) char *filename; int filenamelen; int64 chunkoff; + char chunkoff_str[32]; int chunksize; char *chunk; @@ -342,8 +343,13 @@ receiveFileChunks(const char *sql) continue; } - pg_log(PG_DEBUG, "received chunk for file \"%s\", offset " INT64_FORMAT ", size %d\n", - filename, chunkoff, chunksize); + /* + * Separate step to keep platform-dependent format code out of + * translatable strings. + */ + snprintf(chunkoff_str, sizeof(chunkoff_str), INT64_FORMAT, chunkoff); + pg_log(PG_DEBUG, "received chunk for file \"%s\", offset %s, size %d\n", + filename, chunkoff_str, chunksize); open_target_file(filename, false); From 9d36a386608d7349964e76120e48987e3ec67d04 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 4 Sep 2017 13:45:20 -0400 Subject: [PATCH 0091/1087] Adjust pgbench to allow non-ASCII characters in variable names. This puts it in sync with psql's notion of what is a valid variable name. Like psql, we document that "non-Latin letters" are allowed, but actually any non-ASCII character is accepted. Fabien Coelho Discussion: https://postgr.es/m/20170405.094548.1184280384967203518.t-ishii@sraoss.co.jp --- doc/src/sgml/ref/pgbench.sgml | 2 ++ src/bin/pgbench/exprscan.l | 4 +-- src/bin/pgbench/pgbench.c | 47 ++++++++++++++++++++++++++++------- 3 files changed, 42 insertions(+), 11 deletions(-) diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index 03e1212d50..f5db8d18d3 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -771,6 +771,8 @@ pgbench options dbname There is a simple variable-substitution facility for script files. + Variable names must consist of letters (including non-Latin letters), + digits, and underscores. Variables can be set by the command-line - - - The built-in default editors are vi on Unix - systems and notepad.exe on Windows systems. + If none of them is set, the default is to use vi + on Unix systems or notepad.exe on Windows systems. @@ -4192,6 +4174,27 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' + + PSQL_PAGER PAGER + + + + If a query's results do not fit on the screen, they are piped + through this command. Typical values are more + or less. + Use of the pager can be disabled by setting PSQL_PAGER + or PAGER to an empty string, or by adjusting the + pager-related options of the \pset command. + These variables are examined in the order listed; + the first that is set is used. + If none of them is set, the default is to use more on most + platforms, but less on Cygwin.
+ + + + + PSQLRC diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c index 724cf8e761..9d366180af 100644 --- a/src/bin/psql/help.c +++ b/src/bin/psql/help.c @@ -459,8 +459,6 @@ helpVariables(unsigned short int pager) fprintf(output, _(" COLUMNS\n" " number of columns for wrapped format\n")); - fprintf(output, _(" PAGER\n" - " name of external pager program\n")); fprintf(output, _(" PGAPPNAME\n" " same as the application_name connection parameter\n")); fprintf(output, _(" PGDATABASE\n" @@ -481,6 +479,8 @@ helpVariables(unsigned short int pager) " how to specify a line number when invoking the editor\n")); fprintf(output, _(" PSQL_HISTORY\n" " alternative location for the command history file\n")); + fprintf(output, _(" PSQL_PAGER, PAGER\n" + " name of external pager program\n")); fprintf(output, _(" PSQLRC\n" " alternative location for the user's .psqlrc file\n")); fprintf(output, _(" SHELL\n" diff --git a/src/fe_utils/print.c b/src/fe_utils/print.c index f756f767e5..8af5bbe97e 100644 --- a/src/fe_utils/print.c +++ b/src/fe_utils/print.c @@ -2870,7 +2870,9 @@ PageOutput(int lines, const printTableOpt *topt) const char *pagerprog; FILE *pagerpipe; - pagerprog = getenv("PAGER"); + pagerprog = getenv("PSQL_PAGER"); + if (!pagerprog) + pagerprog = getenv("PAGER"); if (!pagerprog) pagerprog = DEFAULT_PAGER; else diff --git a/src/interfaces/libpq/fe-print.c b/src/interfaces/libpq/fe-print.c index 89bc4c5429..6dbf847280 100644 --- a/src/interfaces/libpq/fe-print.c +++ b/src/interfaces/libpq/fe-print.c @@ -165,6 +165,13 @@ PQprint(FILE *fout, const PGresult *res, const PQprintOpt *po) screen_size.ws_row = 24; screen_size.ws_col = 80; #endif + + /* + * Since this function is no longer used by psql, we don't examine + * PSQL_PAGER. It's possible that the hypothetical external users + * of the function would like that to happen, but in the name of + * backwards compatibility, we'll stick to just examining PAGER. + */ pagerenv = getenv("PAGER"); /* if PAGER is unset, empty or all-white-space, don't use pager */ if (pagerenv != NULL && From 90627cf98a8e7d0531789391fd798c9bfcc3bc1a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 5 Sep 2017 12:22:33 -0400 Subject: [PATCH 0098/1087] Support retaining data dirs on successful TAP tests This moves the data directories from using temporary directories with randomness in the directory name to a static name, to make it easier to debug. The data directory will be retained if tests fail or the test code dies/exits with failure, and is automatically removed on the next make check. If the environment variable PG_TEST_NOCLEAN is defined, the data directories will be retained regardless of test or exit status. 
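For illustration, the retention policy described above boils down to an END handler along these lines (a simplified sketch, not the actual PostgresNode.pm code; @all_nodes, $died, and the node methods are assumed from the surrounding module):

    END
    {
        my $exit_code = $?;    # preserve the script's exit status

        foreach my $node (@all_nodes)
        {
            $node->teardown_node;

            # keep the data directory on request, or on any failure
            next if defined $ENV{'PG_TEST_NOCLEAN'};
            $node->clean_node if $exit_code == 0 && !defined $died;
        }

        $? = $exit_code;
    }

With this in place, a run such as PG_TEST_NOCLEAN=1 make check leaves each t_${testname}_${name}_data directory under tmp_check for inspection.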
Author: Daniel Gustafsson --- src/Makefile.global.in | 6 ++- src/bin/pg_rewind/RewindTest.pm | 7 +++- src/bin/pg_rewind/t/001_basic.pl | 4 +- src/bin/pg_rewind/t/002_databases.pl | 4 +- src/bin/pg_rewind/t/003_extrafiles.pl | 4 +- src/bin/pg_rewind/t/004_pg_xlog_symlink.pl | 4 +- src/test/perl/PostgresNode.pm | 47 ++++++++++++++++++++-- 7 files changed, 61 insertions(+), 15 deletions(-) diff --git a/src/Makefile.global.in b/src/Makefile.global.in index e8b3a519cb..fae8068150 100644 --- a/src/Makefile.global.in +++ b/src/Makefile.global.in @@ -362,12 +362,14 @@ endef ifeq ($(enable_tap_tests),yes) define prove_installcheck -rm -rf $(CURDIR)/tmp_check/log +rm -rf '$(CURDIR)'/tmp_check +$(MKDIR_P) '$(CURDIR)'/tmp_check cd $(srcdir) && TESTDIR='$(CURDIR)' PATH="$(bindir):$$PATH" PGPORT='6$(DEF_PGPORT)' top_builddir='$(CURDIR)/$(top_builddir)' PG_REGRESS='$(CURDIR)/$(top_builddir)/src/test/regress/pg_regress' $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl) endef define prove_check -rm -rf $(CURDIR)/tmp_check/log +rm -rf '$(CURDIR)'/tmp_check +$(MKDIR_P) '$(CURDIR)'/tmp_check cd $(srcdir) && TESTDIR='$(CURDIR)' $(with_temp_install) PGPORT='6$(DEF_PGPORT)' PG_REGRESS='$(CURDIR)/$(top_builddir)/src/test/regress/pg_regress' $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl) endef diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm index 6649c22b4f..76ce295cef 100644 --- a/src/bin/pg_rewind/RewindTest.pm +++ b/src/bin/pg_rewind/RewindTest.pm @@ -114,9 +114,10 @@ sub check_query sub setup_cluster { + my $extra_name = shift; # Initialize master, data checksums are mandatory - $node_master = get_new_node('master'); + $node_master = get_new_node('master' . ($extra_name ? "_${extra_name}" : '')); $node_master->init(allows_streaming => 1); } @@ -130,7 +131,9 @@ sub start_master sub create_standby { - $node_standby = get_new_node('standby'); + my $extra_name = shift; + + $node_standby = get_new_node('standby' . ($extra_name ? "_${extra_name}" : '')); $node_master->backup('my_backup'); $node_standby->init_from_backup($node_master, 'my_backup'); my $connstr_master = $node_master->connstr(); diff --git a/src/bin/pg_rewind/t/001_basic.pl b/src/bin/pg_rewind/t/001_basic.pl index 1764b17c90..736f34eae3 100644 --- a/src/bin/pg_rewind/t/001_basic.pl +++ b/src/bin/pg_rewind/t/001_basic.pl @@ -9,7 +9,7 @@ sub run_test { my $test_mode = shift; - RewindTest::setup_cluster(); + RewindTest::setup_cluster($test_mode); RewindTest::start_master(); # Create a test table and insert a row in master. @@ -28,7 +28,7 @@ sub run_test master_psql("CHECKPOINT"); - RewindTest::create_standby(); + RewindTest::create_standby($test_mode); # Insert additional data on master that will be replicated to standby master_psql("INSERT INTO tbl1 values ('in master, before promotion')"); diff --git a/src/bin/pg_rewind/t/002_databases.pl b/src/bin/pg_rewind/t/002_databases.pl index 20bdb4ab59..37cdd712f3 100644 --- a/src/bin/pg_rewind/t/002_databases.pl +++ b/src/bin/pg_rewind/t/002_databases.pl @@ -9,13 +9,13 @@ sub run_test { my $test_mode = shift; - RewindTest::setup_cluster(); + RewindTest::setup_cluster($test_mode); RewindTest::start_master(); # Create a database in master. 
master_psql('CREATE DATABASE inmaster'); - RewindTest::create_standby(); + RewindTest::create_standby($test_mode); # Create another database, the creation is replicated to the standby master_psql('CREATE DATABASE beforepromotion'); diff --git a/src/bin/pg_rewind/t/003_extrafiles.pl b/src/bin/pg_rewind/t/003_extrafiles.pl index cedde1409b..2433a4aab6 100644 --- a/src/bin/pg_rewind/t/003_extrafiles.pl +++ b/src/bin/pg_rewind/t/003_extrafiles.pl @@ -14,7 +14,7 @@ sub run_test { my $test_mode = shift; - RewindTest::setup_cluster(); + RewindTest::setup_cluster($test_mode); RewindTest::start_master(); my $test_master_datadir = $node_master->data_dir; @@ -27,7 +27,7 @@ sub run_test append_to_file "$test_master_datadir/tst_both_dir/both_subdir/both_file3", "in both3"; - RewindTest::create_standby(); + RewindTest::create_standby($test_mode); # Create different subdirs and files in master and standby my $test_standby_datadir = $node_standby->data_dir; diff --git a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl index 12950ea1ca..feadaa6a0f 100644 --- a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl +++ b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl @@ -26,7 +26,7 @@ sub run_test my $master_xlogdir = "${TestLib::tmp_check}/xlog_master"; rmtree($master_xlogdir); - RewindTest::setup_cluster(); + RewindTest::setup_cluster($test_mode); my $test_master_datadir = $node_master->data_dir; @@ -43,7 +43,7 @@ sub run_test master_psql("CHECKPOINT"); - RewindTest::create_standby(); + RewindTest::create_standby($test_mode); # Insert additional data on master that will be replicated to standby master_psql("INSERT INTO tbl1 values ('in master, before promotion')"); diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm index d9aeb277d9..3a81c1c60b 100644 --- a/src/test/perl/PostgresNode.pm +++ b/src/test/perl/PostgresNode.pm @@ -86,6 +86,7 @@ use Config; use Cwd; use Exporter 'import'; use File::Basename; +use File::Path qw(rmtree); use File::Spec; use File::Temp (); use IPC::Run; @@ -100,7 +101,7 @@ our @EXPORT = qw( get_new_node ); -our ($test_localhost, $test_pghost, $last_port_assigned, @all_nodes); +our ($test_localhost, $test_pghost, $last_port_assigned, @all_nodes, $died); # Windows path to virtual file system root @@ -149,11 +150,13 @@ sub new my $self = { _port => $pgport, _host => $pghost, - _basedir => TestLib::tempdir("data_" . $name), + _basedir => "$TestLib::tmp_check/t_${testname}_${name}_data", _name => $name, _logfile => "$TestLib::log_path/${testname}_${name}.log" }; bless $self, $class; + mkdir $self->{_basedir} or + BAIL_OUT("could not create data directory \"$self->{_basedir}\": $!"); $self->dump_info; return $self; @@ -928,9 +931,24 @@ sub get_new_node return $node; } +# Retain the errno on die() if set, else assume a generic errno of 1. +# This will instruct the END handler on how to handle artifacts left +# behind from tests. +$SIG{__DIE__} = sub +{ + if ($!) + { + $died = $!; + } + else + { + $died = 1; + } +}; + # Automatically shut down any still-running nodes when the test script exits. # Note that this just stops the postmasters (in the same order the nodes were -# created in). Temporary PGDATA directories are deleted, in an unspecified +# created in). Any temporary directories are deleted, in an unspecified # order, later when the File::Temp objects are destroyed. 
END { @@ -941,6 +959,13 @@ END foreach my $node (@all_nodes) { $node->teardown_node; + + # skip clean if we are requested to retain the basedir + next if defined $ENV{'PG_TEST_NOCLEAN'}; + + # clean basedir on clean test invocation + $node->clean_node + if TestLib::all_tests_passing() && !defined $died && !$exit_code; } $? = $exit_code; @@ -959,6 +984,22 @@ sub teardown_node my $self = shift; $self->stop('immediate'); + +} + +=pod + +=item $node->clean_node() + +Remove the base directory of the node if the node has been stopped. + +=cut + +sub clean_node +{ + my $self = shift; + + rmtree $self->{_basedir} unless defined $self->{_pid}; } =pod From 5a739e7b2c26aa95ee2871071c87fa248df1776b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 17 Aug 2017 12:39:20 -0400 Subject: [PATCH 0099/1087] fuzzystrmatch: Remove dead code Remnants left behind by a323ede2802956f115d71599514fbc01f2575dee Reviewed-by: Michael Paquier Reviewed-by: Ryan Murphy --- contrib/fuzzystrmatch/fuzzystrmatch.c | 34 +++++---------------------- 1 file changed, 6 insertions(+), 28 deletions(-) diff --git a/contrib/fuzzystrmatch/fuzzystrmatch.c b/contrib/fuzzystrmatch/fuzzystrmatch.c index ce58a6a7fc..5b98adfa0c 100644 --- a/contrib/fuzzystrmatch/fuzzystrmatch.c +++ b/contrib/fuzzystrmatch/fuzzystrmatch.c @@ -94,18 +94,6 @@ soundex_code(char letter) ****************************************************************************/ -/************************************************************************** - my constants -- constants I like - - Probably redundant. - -***************************************************************************/ - -#define META_ERROR FALSE -#define META_SUCCESS TRUE -#define META_FAILURE FALSE - - /* I add modifications to the traditional metaphone algorithm that you might find in books. Define this if you want metaphone to behave traditionally */ @@ -116,7 +104,7 @@ soundex_code(char letter) #define TH '0' static char Lookahead(char *word, int how_far); -static int _metaphone(char *word, int max_phonemes, char **phoned_word); +static void _metaphone(char *word, int max_phonemes, char **phoned_word); /* Metachar.h ... 
little bits about characters for metaphone */ @@ -272,7 +260,6 @@ metaphone(PG_FUNCTION_ARGS) size_t str_i_len = strlen(str_i); int reqlen; char *metaph; - int retval; /* return an empty string if we receive one */ if (!(str_i_len > 0)) @@ -296,17 +283,8 @@ metaphone(PG_FUNCTION_ARGS) (errcode(ERRCODE_ZERO_LENGTH_CHARACTER_STRING), errmsg("output cannot be empty string"))); - - retval = _metaphone(str_i, reqlen, &metaph); - if (retval == META_SUCCESS) - PG_RETURN_TEXT_P(cstring_to_text(metaph)); - else - { - /* internal error */ - elog(ERROR, "metaphone: failure"); - /* keep the compiler quiet */ - PG_RETURN_NULL(); - } + _metaphone(str_i, reqlen, &metaph); + PG_RETURN_TEXT_P(cstring_to_text(metaph)); } @@ -362,7 +340,7 @@ Lookahead(char *word, int how_far) #define Isbreak(c) (!isalpha((unsigned char) (c))) -static int +static void _metaphone(char *word, /* IN */ int max_phonemes, char **phoned_word) /* OUT */ @@ -404,7 +382,7 @@ _metaphone(char *word, /* IN */ if (Curr_Letter == '\0') { End_Phoned_Word; - return META_SUCCESS; /* For testing */ + return; } } @@ -721,7 +699,7 @@ _metaphone(char *word, /* IN */ End_Phoned_Word; - return (META_SUCCESS); + return; } /* END metaphone */ From ba26f5cf768a31e0cbdf5eb8675ee187ad35fd0b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 17 Aug 2017 12:39:20 -0400 Subject: [PATCH 0100/1087] Remove our own definition of NULL Surely everyone has that by now. Reviewed-by: Michael Paquier Reviewed-by: Ryan Murphy --- src/include/c.h | 12 ++---------- 1 file changed, 2 insertions(+), 10 deletions(-) diff --git a/src/include/c.h b/src/include/c.h index 4fb8ef0c2f..56e7f792d2 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -27,7 +27,7 @@ * ------- ------------------------------------------------ * 0) pg_config.h and standard system headers * 1) hacks to cope with non-ANSI C compilers - * 2) bool, true, false, TRUE, FALSE, NULL + * 2) bool, true, false, TRUE, FALSE * 3) standard system types * 4) IsValid macros for system types * 5) offsetof, lengthof, endof, alignment @@ -184,7 +184,7 @@ #endif /* ---------------------------------------------------------------- - * Section 2: bool, true, false, TRUE, FALSE, NULL + * Section 2: bool, true, false, TRUE, FALSE * ---------------------------------------------------------------- */ @@ -221,14 +221,6 @@ typedef bool *BoolPtr; #define FALSE 0 #endif -/* - * NULL - * Null pointer. - */ -#ifndef NULL -#define NULL ((void *) 0) -#endif - /* ---------------------------------------------------------------- * Section 3: standard system types From 17273d059cd3a5cba818505b0d47a444c36a3513 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 17 Aug 2017 12:39:20 -0400 Subject: [PATCH 0101/1087] Remove unnecessary parentheses in return statements The parenthesized style has only been used in a few modules. Change that to use the style that is predominant across the whole tree. 
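For illustration, a minimal before/after sketch of the rewrite this patch applies (the function below is hypothetical, written only to contrast the two styles; it is not taken from the tree):

    #include <stdbool.h>

    /* old, parenthesized style used in a few modules */
    static bool
    is_null_pair_old(const void *a, const void *b)
    {
        return ((a == NULL) || (b == NULL));
    }

    /* predominant style used across the rest of the tree */
    static bool
    is_null_pair_new(const void *a, const void *b)
    {
        return (a == NULL) || (b == NULL);
    }

As the diff below shows, only parentheses that merely wrap the returned value are dropped; parentheses grouping a compound arithmetic expression, such as return (Min(a1, a2) - Max(b1, b2)); in cube.c, are left in place.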
Reviewed-by: Michael Paquier Reviewed-by: Ryan Murphy --- contrib/btree_gist/btree_utils_num.c | 2 +- contrib/btree_gist/btree_utils_var.c | 4 +- contrib/cube/cube.c | 34 +++--- contrib/dblink/dblink.c | 10 +- contrib/intarray/_int_bool.c | 4 +- contrib/ltree/ltxtquery_op.c | 2 +- contrib/pgcrypto/crypt-des.c | 30 +++--- contrib/pgstattuple/pgstatindex.c | 2 +- contrib/seg/seg.c | 8 +- contrib/spi/refint.c | 2 +- contrib/spi/timetravel.c | 4 +- doc/src/sgml/spi.sgml | 2 +- src/backend/bootstrap/bootscanner.l | 56 +++++----- src/backend/tsearch/spell.c | 10 +- src/backend/utils/adt/inet_cidr_ntop.c | 18 ++-- src/backend/utils/adt/inet_net_pton.c | 44 ++++---- src/backend/utils/adt/tsgistidx.c | 4 +- src/backend/utils/mb/Unicode/convutils.pm | 2 +- .../conversion_procs/euc_tw_and_big5/big5.c | 4 +- src/interfaces/ecpg/compatlib/informix.c | 16 +-- src/interfaces/ecpg/ecpglib/connect.c | 12 +-- src/interfaces/ecpg/ecpglib/data.c | 48 ++++----- src/interfaces/ecpg/ecpglib/descriptor.c | 44 ++++---- src/interfaces/ecpg/ecpglib/error.c | 16 +-- src/interfaces/ecpg/ecpglib/execute.c | 102 +++++++++--------- src/interfaces/ecpg/ecpglib/memory.c | 6 +- src/interfaces/ecpg/ecpglib/misc.c | 24 ++--- src/interfaces/ecpg/ecpglib/prepare.c | 22 ++-- src/interfaces/ecpg/pgtypeslib/common.c | 4 +- src/interfaces/ecpg/pgtypeslib/numeric.c | 14 +-- src/interfaces/ecpg/pgtypeslib/timestamp.c | 10 +- src/interfaces/ecpg/preproc/ecpg.c | 4 +- src/interfaces/ecpg/preproc/ecpg.header | 10 +- src/interfaces/ecpg/preproc/pgc.l | 86 +++++++-------- src/interfaces/ecpg/preproc/type.c | 84 +++++++-------- src/interfaces/ecpg/preproc/variable.c | 36 +++---- .../ecpg/test/compat_informix/dec_test.pgc | 2 +- .../ecpg/test/compat_informix/describe.pgc | 2 +- .../ecpg/test/compat_informix/rfmtdate.pgc | 2 +- .../ecpg/test/compat_informix/rfmtlong.pgc | 2 +- .../ecpg/test/compat_informix/sqlda.pgc | 2 +- src/interfaces/ecpg/test/connect/test1.pgc | 2 +- src/interfaces/ecpg/test/connect/test2.pgc | 2 +- src/interfaces/ecpg/test/connect/test3.pgc | 2 +- src/interfaces/ecpg/test/connect/test4.pgc | 2 +- src/interfaces/ecpg/test/connect/test5.pgc | 2 +- .../test/expected/compat_informix-dec_test.c | 2 +- .../test/expected/compat_informix-describe.c | 2 +- .../test/expected/compat_informix-rfmtdate.c | 2 +- .../test/expected/compat_informix-rfmtlong.c | 2 +- .../test/expected/compat_informix-sqlda.c | 2 +- .../ecpg/test/expected/connect-test1.c | 2 +- .../ecpg/test/expected/connect-test2.c | 2 +- .../ecpg/test/expected/connect-test3.c | 2 +- .../ecpg/test/expected/connect-test4.c | 2 +- .../ecpg/test/expected/connect-test5.c | 2 +- .../ecpg/test/expected/pgtypeslib-dt_test.c | 2 +- .../ecpg/test/expected/pgtypeslib-dt_test2.c | 2 +- .../ecpg/test/expected/pgtypeslib-nan_test.c | 2 +- .../ecpg/test/expected/pgtypeslib-num_test.c | 2 +- .../ecpg/test/expected/pgtypeslib-num_test2.c | 2 +- .../test/expected/preproc-array_of_struct.c | 2 +- .../ecpg/test/expected/preproc-cursor.c | 2 +- .../ecpg/test/expected/preproc-define.c | 2 +- .../ecpg/test/expected/preproc-describe.c | 2 +- .../ecpg/test/expected/preproc-outofscope.c | 2 +- .../test/expected/preproc-pointer_to_struct.c | 2 +- .../ecpg/test/expected/preproc-strings.c | 2 +- .../ecpg/test/expected/preproc-variable.c | 2 +- src/interfaces/ecpg/test/expected/sql-array.c | 2 +- .../ecpg/test/expected/sql-describe.c | 2 +- .../ecpg/test/expected/sql-execute.c | 2 +- .../ecpg/test/expected/sql-oldexec.c | 2 +- src/interfaces/ecpg/test/expected/sql-sqlda.c | 2 +- 
.../ecpg/test/expected/sql-twophase.c | 2 +- .../ecpg/test/expected/thread-thread.c | 8 +- .../test/expected/thread-thread_implicit.c | 8 +- .../ecpg/test/performance/perftest.pgc | 2 +- .../ecpg/test/pgtypeslib/dt_test.pgc | 2 +- .../ecpg/test/pgtypeslib/dt_test2.pgc | 2 +- .../ecpg/test/pgtypeslib/nan_test.pgc | 2 +- .../ecpg/test/pgtypeslib/num_test.pgc | 2 +- .../ecpg/test/pgtypeslib/num_test2.pgc | 2 +- .../ecpg/test/preproc/array_of_struct.pgc | 2 +- src/interfaces/ecpg/test/preproc/cursor.pgc | 2 +- src/interfaces/ecpg/test/preproc/define.pgc | 2 +- .../ecpg/test/preproc/outofscope.pgc | 2 +- .../ecpg/test/preproc/pointer_to_struct.pgc | 2 +- src/interfaces/ecpg/test/preproc/strings.pgc | 2 +- src/interfaces/ecpg/test/preproc/variable.pgc | 2 +- src/interfaces/ecpg/test/sql/array.pgc | 2 +- src/interfaces/ecpg/test/sql/describe.pgc | 2 +- src/interfaces/ecpg/test/sql/execute.pgc | 2 +- src/interfaces/ecpg/test/sql/oldexec.pgc | 2 +- src/interfaces/ecpg/test/sql/sqlda.pgc | 2 +- src/interfaces/ecpg/test/sql/twophase.pgc | 2 +- src/interfaces/ecpg/test/thread/thread.pgc | 8 +- .../ecpg/test/thread/thread_implicit.pgc | 8 +- src/test/isolation/specscanner.l | 10 +- 99 files changed, 469 insertions(+), 469 deletions(-) diff --git a/contrib/btree_gist/btree_utils_num.c b/contrib/btree_gist/btree_utils_num.c index bae32c4064..b2295f2c7d 100644 --- a/contrib/btree_gist/btree_utils_num.c +++ b/contrib/btree_gist/btree_utils_num.c @@ -296,7 +296,7 @@ gbt_num_consistent(const GBT_NUMKEY_R *key, retval = false; } - return (retval); + return retval; } diff --git a/contrib/btree_gist/btree_utils_var.c b/contrib/btree_gist/btree_utils_var.c index 2c636ad2fa..ecc87f3bb3 100644 --- a/contrib/btree_gist/btree_utils_var.c +++ b/contrib/btree_gist/btree_utils_var.c @@ -159,7 +159,7 @@ gbt_var_node_cp_len(const GBT_VARKEY *node, const gbtree_vinfo *tinfo) l--; i++; } - return (ml); /* lower == upper */ + return ml; /* lower == upper */ } @@ -299,7 +299,7 @@ gbt_var_compress(GISTENTRY *entry, const gbtree_vinfo *tinfo) else retval = entry; - return (retval); + return retval; } diff --git a/contrib/cube/cube.c b/contrib/cube/cube.c index 149558c763..0e968d1c76 100644 --- a/contrib/cube/cube.c +++ b/contrib/cube/cube.c @@ -625,7 +625,7 @@ g_cube_leaf_consistent(NDBOX *key, default: retval = FALSE; } - return (retval); + return retval; } bool @@ -652,7 +652,7 @@ g_cube_internal_consistent(NDBOX *key, default: retval = FALSE; } - return (retval); + return retval; } NDBOX * @@ -663,7 +663,7 @@ g_cube_binary_union(NDBOX *r1, NDBOX *r2, int *sizep) retval = cube_union_v0(r1, r2); *sizep = VARSIZE(retval); - return (retval); + return retval; } @@ -729,7 +729,7 @@ cube_union_v0(NDBOX *a, NDBOX *b) SET_POINT_BIT(result); } - return (result); + return result; } Datum @@ -1058,7 +1058,7 @@ cube_contains_v0(NDBOX *a, NDBOX *b) int i; if ((a == NULL) || (b == NULL)) - return (FALSE); + return FALSE; if (DIM(a) < DIM(b)) { @@ -1070,9 +1070,9 @@ cube_contains_v0(NDBOX *a, NDBOX *b) for (i = DIM(a); i < DIM(b); i++) { if (LL_COORD(b, i) != 0) - return (FALSE); + return FALSE; if (UR_COORD(b, i) != 0) - return (FALSE); + return FALSE; } } @@ -1081,13 +1081,13 @@ cube_contains_v0(NDBOX *a, NDBOX *b) { if (Min(LL_COORD(a, i), UR_COORD(a, i)) > Min(LL_COORD(b, i), UR_COORD(b, i))) - return (FALSE); + return FALSE; if (Max(LL_COORD(a, i), UR_COORD(a, i)) < Max(LL_COORD(b, i), UR_COORD(b, i))) - return (FALSE); + return FALSE; } - return (TRUE); + return TRUE; } Datum @@ -1128,7 +1128,7 @@ cube_overlap_v0(NDBOX *a, NDBOX *b) 
int i; if ((a == NULL) || (b == NULL)) - return (FALSE); + return FALSE; /* swap the box pointers if needed */ if (DIM(a) < DIM(b)) @@ -1143,21 +1143,21 @@ cube_overlap_v0(NDBOX *a, NDBOX *b) for (i = 0; i < DIM(b); i++) { if (Min(LL_COORD(a, i), UR_COORD(a, i)) > Max(LL_COORD(b, i), UR_COORD(b, i))) - return (FALSE); + return FALSE; if (Max(LL_COORD(a, i), UR_COORD(a, i)) < Min(LL_COORD(b, i), UR_COORD(b, i))) - return (FALSE); + return FALSE; } /* compare to zero those dimensions in (a) absent in (b) */ for (i = DIM(b); i < DIM(a); i++) { if (Min(LL_COORD(a, i), UR_COORD(a, i)) > 0) - return (FALSE); + return FALSE; if (Max(LL_COORD(a, i), UR_COORD(a, i)) < 0) - return (FALSE); + return FALSE; } - return (TRUE); + return TRUE; } @@ -1385,7 +1385,7 @@ distance_1D(double a1, double a2, double b1, double b2) return (Min(a1, a2) - Max(b1, b2)); /* the rest are all sorts of intersections */ - return (0.0); + return 0.0; } /* Test if a box is also a point */ diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c index 3113b07ab8..7dc7716a3a 100644 --- a/contrib/dblink/dblink.c +++ b/contrib/dblink/dblink.c @@ -2217,7 +2217,7 @@ get_sql_insert(Relation rel, int *pkattnums, int pknumatts, char **src_pkattvals } appendStringInfoChar(&buf, ')'); - return (buf.data); + return buf.data; } static char * @@ -2254,7 +2254,7 @@ get_sql_delete(Relation rel, int *pkattnums, int pknumatts, char **tgt_pkattvals appendStringInfoString(&buf, " IS NULL"); } - return (buf.data); + return buf.data; } static char * @@ -2341,7 +2341,7 @@ get_sql_update(Relation rel, int *pkattnums, int pknumatts, char **src_pkattvals appendStringInfoString(&buf, " IS NULL"); } - return (buf.data); + return buf.data; } /* @@ -2549,9 +2549,9 @@ getConnectionByName(const char *name) key, HASH_FIND, NULL); if (hentry) - return (hentry->rconn); + return hentry->rconn; - return (NULL); + return NULL; } static HTAB * diff --git a/contrib/intarray/_int_bool.c b/contrib/intarray/_int_bool.c index a18c645606..01e47c53d0 100644 --- a/contrib/intarray/_int_bool.c +++ b/contrib/intarray/_int_bool.c @@ -245,7 +245,7 @@ checkcondition_arr(void *checkval, ITEM *item) { StopMiddle = StopLow + (StopHigh - StopLow) / 2; if (*StopMiddle == item->val) - return (true); + return true; else if (*StopMiddle < item->val) StopLow = StopMiddle + 1; else @@ -274,7 +274,7 @@ execute(ITEM *curitem, void *checkval, bool calcnot, return (*chkcond) (checkval, curitem); else if (curitem->val == (int32) '!') { - return (calcnot) ? + return calcnot ? ((execute(curitem - 1, checkval, calcnot, chkcond)) ? false : true) : true; } diff --git a/contrib/ltree/ltxtquery_op.c b/contrib/ltree/ltxtquery_op.c index 1428c8b478..6e9dbc4690 100644 --- a/contrib/ltree/ltxtquery_op.c +++ b/contrib/ltree/ltxtquery_op.c @@ -26,7 +26,7 @@ ltree_execute(ITEM *curitem, void *checkval, bool calcnot, bool (*chkcond) (void return (*chkcond) (checkval, curitem); else if (curitem->val == (int32) '!') { - return (calcnot) ? + return calcnot ? ((ltree_execute(curitem + 1, checkval, calcnot, chkcond)) ? 
false : true) : true; } diff --git a/contrib/pgcrypto/crypt-des.c b/contrib/pgcrypto/crypt-des.c index 60bdbb0c91..ee3a0f2169 100644 --- a/contrib/pgcrypto/crypt-des.c +++ b/contrib/pgcrypto/crypt-des.c @@ -206,18 +206,18 @@ static inline int ascii_to_bin(char ch) { if (ch > 'z') - return (0); + return 0; if (ch >= 'a') return (ch - 'a' + 38); if (ch > 'Z') - return (0); + return 0; if (ch >= 'A') return (ch - 'A' + 12); if (ch > '9') - return (0); + return 0; if (ch >= '.') return (ch - '.'); - return (0); + return 0; } static void @@ -420,7 +420,7 @@ des_setkey(const char *key) * (which is weak and has bad parity anyway) in order to simplify the * starting conditions. */ - return (0); + return 0; } old_rawkey0 = rawkey0; old_rawkey1 = rawkey1; @@ -479,7 +479,7 @@ des_setkey(const char *key) | comp_maskr[6][(t1 >> 7) & 0x7f] | comp_maskr[7][t1 & 0x7f]; } - return (0); + return 0; } static int @@ -500,7 +500,7 @@ do_des(uint32 l_in, uint32 r_in, uint32 *l_out, uint32 *r_out, int count) int round; if (count == 0) - return (1); + return 1; else if (count > 0) { /* @@ -613,7 +613,7 @@ do_des(uint32 l_in, uint32 r_in, uint32 *l_out, uint32 *r_out, int count) | fp_maskr[5][(r >> 16) & 0xff] | fp_maskr[6][(r >> 8) & 0xff] | fp_maskr[7][r & 0xff]; - return (0); + return 0; } static int @@ -639,7 +639,7 @@ des_cipher(const char *in, char *out, long salt, int count) retval = do_des(rawl, rawr, &l_out, &r_out, count); if (retval) - return (retval); + return retval; buffer[0] = htonl(l_out); buffer[1] = htonl(r_out); @@ -647,7 +647,7 @@ des_cipher(const char *in, char *out, long salt, int count) /* copy data to avoid assuming output is word-aligned */ memcpy(out, buffer, sizeof(buffer)); - return (retval); + return retval; } char * @@ -680,7 +680,7 @@ px_crypt_des(const char *key, const char *setting) key++; } if (des_setkey((char *) keybuf)) - return (NULL); + return NULL; #ifndef DISABLE_XDES if (*setting == _PASSWORD_EFMT1) @@ -711,7 +711,7 @@ px_crypt_des(const char *key, const char *setting) * Encrypt the key with itself. */ if (des_cipher((char *) keybuf, (char *) keybuf, 0L, 1)) - return (NULL); + return NULL; /* * And XOR with the next 8 characters of the key. @@ -721,7 +721,7 @@ px_crypt_des(const char *key, const char *setting) *q++ ^= *key++ << 1; if (des_setkey((char *) keybuf)) - return (NULL); + return NULL; } StrNCpy(output, setting, 10); @@ -767,7 +767,7 @@ px_crypt_des(const char *key, const char *setting) * Do it. */ if (do_des(0L, 0L, &r0, &r1, count)) - return (NULL); + return NULL; /* * Now encode the result... 
@@ -790,5 +790,5 @@ px_crypt_des(const char *key, const char *setting) *p++ = _crypt_a64[l & 0x3f]; *p = 0; - return (output); + return output; } diff --git a/contrib/pgstattuple/pgstatindex.c b/contrib/pgstattuple/pgstatindex.c index 9365ba7e02..75317b96a2 100644 --- a/contrib/pgstattuple/pgstatindex.c +++ b/contrib/pgstattuple/pgstatindex.c @@ -568,7 +568,7 @@ pgstatginindex_internal(Oid relid, FunctionCallInfo fcinfo) tuple = heap_form_tuple(tupleDesc, values, nulls); result = HeapTupleGetDatum(tuple); - return (result); + return result; } /* ------------------------------------------------------ diff --git a/contrib/seg/seg.c b/contrib/seg/seg.c index 4fc18130e1..e707b18fc6 100644 --- a/contrib/seg/seg.c +++ b/contrib/seg/seg.c @@ -528,7 +528,7 @@ gseg_binary_union(Datum r1, Datum r2, int *sizep) retval = DirectFunctionCall2(seg_union, r1, r2); *sizep = sizeof(SEG); - return (retval); + return retval; } @@ -1040,7 +1040,7 @@ restore(char *result, float val, int n) /* ... this is not done yet. */ } - return (strlen(result)); + return strlen(result); } @@ -1080,7 +1080,7 @@ significant_digits(char *s) } if (!n) - return (zeroes); + return zeroes; - return (n); + return n; } diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c index 46205c7613..70def95ac5 100644 --- a/contrib/spi/refint.c +++ b/contrib/spi/refint.c @@ -636,5 +636,5 @@ find_plan(char *ident, EPlan **eplan, int *nplans) newp->splan = NULL; (*nplans)++; - return (newp); + return newp; } diff --git a/contrib/spi/timetravel.c b/contrib/spi/timetravel.c index 2c66d888df..816cc549ae 100644 --- a/contrib/spi/timetravel.c +++ b/contrib/spi/timetravel.c @@ -517,7 +517,7 @@ findTTStatus(char *name) AbsoluteTime currabstime() { - return (GetCurrentAbsoluteTime()); + return GetCurrentAbsoluteTime(); } */ @@ -549,5 +549,5 @@ find_plan(char *ident, EPlan **eplan, int *nplans) newp->splan = NULL; (*nplans)++; - return (newp); + return newp; } diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml index d04b5a2125..31535a307d 100644 --- a/doc/src/sgml/spi.sgml +++ b/doc/src/sgml/spi.sgml @@ -4397,7 +4397,7 @@ execq(text *sql, int cnt) SPI_finish(); pfree(command); - return (proc); + return proc; } diff --git a/src/backend/bootstrap/bootscanner.l b/src/backend/bootstrap/bootscanner.l index 6467882fa3..51c5e5e3cd 100644 --- a/src/backend/bootstrap/bootscanner.l +++ b/src/backend/bootstrap/bootscanner.l @@ -72,25 +72,25 @@ arrayid [A-Za-z0-9_]+\[{D}*\] %% -open { return(OPEN); } +open { return OPEN; } -close { return(XCLOSE); } +close { return XCLOSE; } -create { return(XCREATE); } +create { return XCREATE; } -OID { return(OBJ_ID); } -bootstrap { return(XBOOTSTRAP); } -"shared_relation" { return(XSHARED_RELATION); } -"without_oids" { return(XWITHOUT_OIDS); } -"rowtype_oid" { return(XROWTYPE_OID); } -_null_ { return(NULLVAL); } +OID { return OBJ_ID; } +bootstrap { return XBOOTSTRAP; } +"shared_relation" { return XSHARED_RELATION; } +"without_oids" { return XWITHOUT_OIDS; } +"rowtype_oid" { return XROWTYPE_OID; } +_null_ { return NULLVAL; } -insert { return(INSERT_TUPLE); } +insert { return INSERT_TUPLE; } -"," { return(COMMA); } -"=" { return(EQUALS); } -"(" { return(LPAREN); } -")" { return(RPAREN); } +"," { return COMMA; } +"=" { return EQUALS; } +"(" { return LPAREN; } +")" { return RPAREN; } [\n] { yyline++; } [\t] ; @@ -99,31 +99,31 @@ insert { return(INSERT_TUPLE); } ^\#[^\n]* ; /* drop everything after "#" for comments */ -"declare" { return(XDECLARE); } -"build" { return(XBUILD); } -"indices" { return(INDICES); } -"unique" { 
return(UNIQUE); } -"index" { return(INDEX); } -"on" { return(ON); } -"using" { return(USING); } -"toast" { return(XTOAST); } -"FORCE" { return(XFORCE); } -"NOT" { return(XNOT); } -"NULL" { return(XNULL); } +"declare" { return XDECLARE; } +"build" { return XBUILD; } +"indices" { return INDICES; } +"unique" { return UNIQUE; } +"index" { return INDEX; } +"on" { return ON; } +"using" { return USING; } +"toast" { return XTOAST; } +"FORCE" { return XFORCE; } +"NOT" { return XNOT; } +"NULL" { return XNULL; } {arrayid} { yylval.str = MapArrayTypeName(yytext); - return(ID); + return ID; } {id} { yylval.str = scanstr(yytext); - return(ID); + return ID; } {sid} { yytext[strlen(yytext)-1] = '\0'; /* strip off quotes */ yylval.str = scanstr(yytext+1); yytext[strlen(yytext)] = '"'; /* restore quotes */ - return(ID); + return ID; } . { diff --git a/src/backend/tsearch/spell.c b/src/backend/tsearch/spell.c index 1020250490..6527c73731 100644 --- a/src/backend/tsearch/spell.c +++ b/src/backend/tsearch/spell.c @@ -195,14 +195,14 @@ static char *VoidString = ""; static int cmpspell(const void *s1, const void *s2) { - return (strcmp((*(SPELL *const *) s1)->word, (*(SPELL *const *) s2)->word)); + return strcmp((*(SPELL *const *) s1)->word, (*(SPELL *const *) s2)->word); } static int cmpspellaffix(const void *s1, const void *s2) { - return (strcmp((*(SPELL *const *) s1)->p.flag, - (*(SPELL *const *) s2)->p.flag)); + return strcmp((*(SPELL *const *) s1)->p.flag, + (*(SPELL *const *) s2)->p.flag); } static int @@ -2240,9 +2240,9 @@ NormalizeSubWord(IspellDict *Conf, char *word, int flag) if (cur == forms) { pfree(forms); - return (NULL); + return NULL; } - return (forms); + return forms; } typedef struct SplitVar diff --git a/src/backend/utils/adt/inet_cidr_ntop.c b/src/backend/utils/adt/inet_cidr_ntop.c index 2973d56658..30b3673789 100644 --- a/src/backend/utils/adt/inet_cidr_ntop.c +++ b/src/backend/utils/adt/inet_cidr_ntop.c @@ -58,12 +58,12 @@ inet_cidr_ntop(int af, const void *src, int bits, char *dst, size_t size) switch (af) { case PGSQL_AF_INET: - return (inet_cidr_ntop_ipv4(src, bits, dst, size)); + return inet_cidr_ntop_ipv4(src, bits, dst, size); case PGSQL_AF_INET6: - return (inet_cidr_ntop_ipv6(src, bits, dst, size)); + return inet_cidr_ntop_ipv6(src, bits, dst, size); default: errno = EAFNOSUPPORT; - return (NULL); + return NULL; } } @@ -92,7 +92,7 @@ inet_cidr_ntop_ipv4(const u_char *src, int bits, char *dst, size_t size) if (bits < 0 || bits > 32) { errno = EINVAL; - return (NULL); + return NULL; } if (bits == 0) @@ -137,11 +137,11 @@ inet_cidr_ntop_ipv4(const u_char *src, int bits, char *dst, size_t size) if (size <= sizeof "/32") goto emsgsize; dst += SPRINTF((dst, "/%u", bits)); - return (odst); + return odst; emsgsize: errno = EMSGSIZE; - return (NULL); + return NULL; } /* @@ -182,7 +182,7 @@ inet_cidr_ntop_ipv6(const u_char *src, int bits, char *dst, size_t size) if (bits < 0 || bits > 128) { errno = EINVAL; - return (NULL); + return NULL; } cp = outbuf; @@ -286,9 +286,9 @@ inet_cidr_ntop_ipv6(const u_char *src, int bits, char *dst, size_t size) goto emsgsize; strcpy(dst, outbuf); - return (dst); + return dst; emsgsize: errno = EMSGSIZE; - return (NULL); + return NULL; } diff --git a/src/backend/utils/adt/inet_net_pton.c b/src/backend/utils/adt/inet_net_pton.c index be788d37cd..6f3ece1209 100644 --- a/src/backend/utils/adt/inet_net_pton.c +++ b/src/backend/utils/adt/inet_net_pton.c @@ -73,7 +73,7 @@ inet_net_pton(int af, const char *src, void *dst, size_t size) inet_cidr_pton_ipv6(src, dst, 
size); default: errno = EAFNOSUPPORT; - return (-1); + return -1; } } @@ -228,15 +228,15 @@ inet_cidr_pton_ipv4(const char *src, u_char *dst, size_t size) goto emsgsize; *dst++ = '\0'; } - return (bits); + return bits; enoent: errno = ENOENT; - return (-1); + return -1; emsgsize: errno = EMSGSIZE; - return (-1); + return -1; } /* @@ -338,11 +338,11 @@ inet_net_pton_ipv4(const char *src, u_char *dst) enoent: errno = ENOENT; - return (-1); + return -1; emsgsize: errno = EMSGSIZE; - return (-1); + return -1; } static int @@ -363,19 +363,19 @@ getbits(const char *src, int *bitsp) if (pch != NULL) { if (n++ != 0 && val == 0) /* no leading zeros */ - return (0); + return 0; val *= 10; val += (pch - digits); if (val > 128) /* range */ - return (0); + return 0; continue; } - return (0); + return 0; } if (n == 0) - return (0); + return 0; *bitsp = val; - return (1); + return 1; } static int @@ -397,32 +397,32 @@ getv4(const char *src, u_char *dst, int *bitsp) if (pch != NULL) { if (n++ != 0 && val == 0) /* no leading zeros */ - return (0); + return 0; val *= 10; val += (pch - digits); if (val > 255) /* range */ - return (0); + return 0; continue; } if (ch == '.' || ch == '/') { if (dst - odst > 3) /* too many octets? */ - return (0); + return 0; *dst++ = val; if (ch == '/') - return (getbits(src, bitsp)); + return getbits(src, bitsp); val = 0; n = 0; continue; } - return (0); + return 0; } if (n == 0) - return (0); + return 0; if (dst - odst > 3) /* too many octets? */ - return (0); + return 0; *dst++ = val; - return (1); + return 1; } static int @@ -552,13 +552,13 @@ inet_cidr_pton_ipv6(const char *src, u_char *dst, size_t size) */ memcpy(dst, tmp, NS_IN6ADDRSZ); - return (bits); + return bits; enoent: errno = ENOENT; - return (-1); + return -1; emsgsize: errno = EMSGSIZE; - return (-1); + return -1; } diff --git a/src/backend/utils/adt/tsgistidx.c b/src/backend/utils/adt/tsgistidx.c index 7ce2699b5c..d8b86f6393 100644 --- a/src/backend/utils/adt/tsgistidx.c +++ b/src/backend/utils/adt/tsgistidx.c @@ -317,14 +317,14 @@ checkcondition_arr(void *checkval, QueryOperand *val, ExecPhraseData *data) { StopMiddle = StopLow + (StopHigh - StopLow) / 2; if (*StopMiddle == val->valcrc) - return (true); + return true; else if (*StopMiddle < val->valcrc) StopLow = StopMiddle + 1; else StopHigh = StopMiddle; } - return (false); + return false; } static bool diff --git a/src/backend/utils/mb/Unicode/convutils.pm b/src/backend/utils/mb/Unicode/convutils.pm index 43cadf5303..22d02ca485 100644 --- a/src/backend/utils/mb/Unicode/convutils.pm +++ b/src/backend/utils/mb/Unicode/convutils.pm @@ -812,7 +812,7 @@ sub ucs2utf (((($ucs & 0x3ffff) >> 12) | 0x80) << 16) | (((($ucs & 0x0fc0) >> 6) | 0x80) << 8) | (($ucs & 0x003f) | 0x80); } - return ($utf); + return $utf; } 1; diff --git a/src/backend/utils/mb/conversion_procs/euc_tw_and_big5/big5.c b/src/backend/utils/mb/conversion_procs/euc_tw_and_big5/big5.c index 1d9b10f8a7..68f76aa8cb 100644 --- a/src/backend/utils/mb/conversion_procs/euc_tw_and_big5/big5.c +++ b/src/backend/utils/mb/conversion_procs/euc_tw_and_big5/big5.c @@ -361,14 +361,14 @@ CNStoBIG5(unsigned short cns, unsigned char lc) for (i = 0; i < sizeof(b2c3) / (sizeof(unsigned short) * 2); i++) { if (b2c3[i][1] == cns) - return (b2c3[i][0]); + return b2c3[i][0]; } break; case LC_CNS11643_4: for (i = 0; i < sizeof(b1c4) / (sizeof(unsigned short) * 2); i++) { if (b1c4[i][1] == cns) - return (b1c4[i][0]); + return b1c4[i][0]; } default: break; diff --git a/src/interfaces/ecpg/compatlib/informix.c 
b/src/interfaces/ecpg/compatlib/informix.c index 2508ed9b8f..e9bcb4cde2 100644 --- a/src/interfaces/ecpg/compatlib/informix.c +++ b/src/interfaces/ecpg/compatlib/informix.c @@ -79,7 +79,7 @@ deccall2(decimal *arg1, decimal *arg2, int (*ptr) (numeric *, numeric *)) PGTYPESnumeric_free(a1); PGTYPESnumeric_free(a2); - return (i); + return i; } static int @@ -143,7 +143,7 @@ deccall3(decimal *arg1, decimal *arg2, decimal *result, int (*ptr) (numeric *, n PGTYPESnumeric_free(a1); PGTYPESnumeric_free(a2); - return (i); + return i; } /* we start with the numeric functions */ @@ -166,7 +166,7 @@ decadd(decimal *arg1, decimal *arg2, decimal *sum) int deccmp(decimal *arg1, decimal *arg2) { - return (deccall2(arg1, arg2, PGTYPESnumeric_cmp)); + return deccall2(arg1, arg2, PGTYPESnumeric_cmp); } void @@ -261,7 +261,7 @@ deccvdbl(double dbl, decimal *np) result = PGTYPESnumeric_to_decimal(nres, np); PGTYPESnumeric_free(nres); - return (result); + return result; } int @@ -283,7 +283,7 @@ deccvint(int in, decimal *np) result = PGTYPESnumeric_to_decimal(nres, np); PGTYPESnumeric_free(nres); - return (result); + return result; } int @@ -305,7 +305,7 @@ deccvlong(long lng, decimal *np) result = PGTYPESnumeric_to_decimal(nres, np); PGTYPESnumeric_free(nres); - return (result); + return result; } int @@ -598,7 +598,7 @@ rmdyjul(short mdy[3], date * d) int rdayofweek(date d) { - return (PGTYPESdate_dayofweek(d)); + return PGTYPESdate_dayofweek(d); } /* And the datetime stuff */ @@ -1049,5 +1049,5 @@ rsetnull(int t, char *ptr) int risnull(int t, char *ptr) { - return (ECPGis_noind_null(t, ptr)); + return ECPGis_noind_null(t, ptr); } diff --git a/src/interfaces/ecpg/ecpglib/connect.c b/src/interfaces/ecpg/ecpglib/connect.c index 0716abdd7e..71fa275363 100644 --- a/src/interfaces/ecpg/ecpglib/connect.c +++ b/src/interfaces/ecpg/ecpglib/connect.c @@ -67,7 +67,7 @@ ecpg_get_connection_nr(const char *connection_name) ret = con; } - return (ret); + return ret; } struct connection * @@ -106,7 +106,7 @@ ecpg_get_connection(const char *connection_name) #endif } - return (ret); + return ret; } static void @@ -168,7 +168,7 @@ ECPGsetcommit(int lineno, const char *mode, const char *connection_name) PGresult *results; if (!ecpg_init(con, connection_name, lineno)) - return (false); + return false; ecpg_log("ECPGsetcommit on line %d: action \"%s\"; connection \"%s\"\n", lineno, mode, con->name); @@ -204,7 +204,7 @@ ECPGsetconn(int lineno, const char *connection_name) struct connection *con = ecpg_get_connection(connection_name); if (!ecpg_init(con, connection_name, lineno)) - return (false); + return false; #ifdef ENABLE_THREAD_SAFETY pthread_setspecific(actual_connection_key, con); @@ -675,7 +675,7 @@ ECPGdisconnect(int lineno, const char *connection_name) { ecpg_raise(lineno, ECPG_OUT_OF_MEMORY, ECPG_SQLSTATE_ECPG_OUT_OF_MEMORY, NULL); - return (false); + return false; } #ifdef ENABLE_THREAD_SAFETY @@ -702,7 +702,7 @@ ECPGdisconnect(int lineno, const char *connection_name) #ifdef ENABLE_THREAD_SAFETY pthread_mutex_unlock(&connections_mutex); #endif - return (false); + return false; } else ecpg_finish(con); diff --git a/src/interfaces/ecpg/ecpglib/data.c b/src/interfaces/ecpg/ecpglib/data.c index 5dbfded873..a2f3916f38 100644 --- a/src/interfaces/ecpg/ecpglib/data.c +++ b/src/interfaces/ecpg/ecpglib/data.c @@ -134,7 +134,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_OUT_OF_MEMORY, ECPG_SQLSTATE_ECPG_OUT_OF_MEMORY, NULL); - return (false); + return false; } /* 
@@ -156,7 +156,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, * at least one tuple, but let's play it safe. */ ecpg_raise(lineno, ECPG_NOT_FOUND, ECPG_SQLSTATE_NO_DATA, NULL); - return (false); + return false; } /* We will have to decode the value */ @@ -204,7 +204,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, ecpg_raise(lineno, ECPG_MISSING_INDICATOR, ECPG_SQLSTATE_NULL_VALUE_NO_INDICATOR_PARAMETER, NULL); - return (false); + return false; } } break; @@ -212,12 +212,12 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, ecpg_raise(lineno, ECPG_UNSUPPORTED, ECPG_SQLSTATE_ECPG_INTERNAL_ERROR, ecpg_type_name(ind_type)); - return (false); + return false; break; } if (value_for_indicator == -1) - return (true); + return true; /* let's check if it really is an array if it should be one */ if (isarray == ECPG_ARRAY_ARRAY) @@ -226,7 +226,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_DATA_NOT_ARRAY, ECPG_SQLSTATE_DATATYPE_MISMATCH, NULL); - return (false); + return false; } switch (type) @@ -307,7 +307,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_INT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } pval = scan_length; @@ -336,7 +336,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_UINT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } pval = scan_length; @@ -364,7 +364,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, if (garbage_left(isarray, scan_length, compat)) { ecpg_raise(lineno, ECPG_INT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } pval = scan_length; @@ -376,7 +376,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, if (garbage_left(isarray, scan_length, compat)) { ecpg_raise(lineno, ECPG_UINT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } pval = scan_length; @@ -399,7 +399,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_FLOAT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } pval = scan_length; @@ -438,7 +438,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, ecpg_raise(lineno, ECPG_CONVERT_BOOL, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; break; case ECPGt_char: @@ -581,14 +581,14 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_OUT_OF_MEMORY, ECPG_SQLSTATE_ECPG_OUT_OF_MEMORY, NULL); - return (false); + return false; } } else { ecpg_raise(lineno, ECPG_NUMERIC_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } } else @@ -598,7 +598,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, free(nres); ecpg_raise(lineno, ECPG_NUMERIC_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } } pval = scan_length; @@ -635,7 +635,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, */ ires = (interval *) ecpg_alloc(sizeof(interval), lineno); if (!ires) - return (false); + return false; ECPGset_noind_null(ECPGt_interval, ires); } @@ -643,7 
+643,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_INTERVAL_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } } else @@ -656,7 +656,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, free(ires); ecpg_raise(lineno, ECPG_INTERVAL_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } } pval = scan_length; @@ -693,7 +693,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_DATE_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } } else @@ -705,7 +705,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_DATE_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } } @@ -741,7 +741,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_TIMESTAMP_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } } else @@ -753,7 +753,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, { ecpg_raise(lineno, ECPG_TIMESTAMP_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); - return (false); + return false; } } @@ -765,7 +765,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, ecpg_raise(lineno, ECPG_UNSUPPORTED, ECPG_SQLSTATE_ECPG_INTERNAL_ERROR, ecpg_type_name(type)); - return (false); + return false; break; } if (ECPG_IS_ARRAY(isarray)) @@ -791,5 +791,5 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, } } while (*pval != '\0' && !array_boundary(isarray, *pval)); - return (true); + return true; } diff --git a/src/interfaces/ecpg/ecpglib/descriptor.c b/src/interfaces/ecpg/ecpglib/descriptor.c index 1fa00b892f..bdd25184dc 100644 --- a/src/interfaces/ecpg/ecpglib/descriptor.c +++ b/src/interfaces/ecpg/ecpglib/descriptor.c @@ -150,10 +150,10 @@ get_int_item(int lineno, void *var, enum ECPGttype vartype, int value) break; default: ecpg_raise(lineno, ECPG_VAR_NOT_NUMERIC, ECPG_SQLSTATE_RESTRICTED_DATA_TYPE_ATTRIBUTE_VIOLATION, NULL); - return (false); + return false; } - return (true); + return true; } static bool @@ -195,7 +195,7 @@ set_int_item(int lineno, int *target, const void *var, enum ECPGttype vartype) break; default: ecpg_raise(lineno, ECPG_VAR_NOT_NUMERIC, ECPG_SQLSTATE_RESTRICTED_DATA_TYPE_ATTRIBUTE_VIOLATION, NULL); - return (false); + return false; } return true; @@ -228,17 +228,17 @@ get_char_item(int lineno, void *var, enum ECPGttype vartype, char *value, int va break; default: ecpg_raise(lineno, ECPG_VAR_NOT_CHAR, ECPG_SQLSTATE_RESTRICTED_DATA_TYPE_ATTRIBUTE_VIOLATION, NULL); - return (false); + return false; } - return (true); + return true; } #define RETURN_IF_NO_DATA if (ntuples < 1) \ { \ va_end(args); \ ecpg_raise(lineno, ECPG_NOT_FOUND, ECPG_SQLSTATE_NO_DATA, NULL); \ - return (false); \ + return false; \ } bool @@ -265,7 +265,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!ECPGresult) { va_end(args); - return (false); + return false; } ntuples = PQntuples(ECPGresult); @@ -274,7 +274,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) 
{ ecpg_raise(lineno, ECPG_INVALID_DESCRIPTOR_INDEX, ECPG_SQLSTATE_INVALID_DESCRIPTOR_INDEX, NULL); va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: reading items for tuple %d\n", index); @@ -333,7 +333,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_char_item(lineno, var, vartype, PQfname(ECPGresult, index), varcharsize)) { va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: NAME = %s\n", PQfname(ECPGresult, index)); @@ -343,7 +343,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, 1)) { va_end(args); - return (false); + return false; } break; @@ -352,7 +352,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, 0)) { va_end(args); - return (false); + return false; } break; @@ -361,7 +361,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, (PQfmod(ECPGresult, index) - VARHDRSZ) & 0xffff)) { va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: SCALE = %d\n", (PQfmod(ECPGresult, index) - VARHDRSZ) & 0xffff); @@ -371,7 +371,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, PQfmod(ECPGresult, index) >> 16)) { va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: PRECISION = %d\n", PQfmod(ECPGresult, index) >> 16); @@ -381,7 +381,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, PQfsize(ECPGresult, index))) { va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: OCTET_LENGTH = %d\n", PQfsize(ECPGresult, index)); @@ -391,7 +391,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, PQfmod(ECPGresult, index) - VARHDRSZ)) { va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: LENGTH = %d\n", PQfmod(ECPGresult, index) - VARHDRSZ); @@ -401,7 +401,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, ecpg_dynamic_type(PQftype(ECPGresult, index)))) { va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: TYPE = %d\n", ecpg_dynamic_type(PQftype(ECPGresult, index))); @@ -411,7 +411,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, ecpg_dynamic_type_DDT(PQftype(ECPGresult, index)))) { va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: TYPE = %d\n", ecpg_dynamic_type_DDT(PQftype(ECPGresult, index))); @@ -421,7 +421,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, PQntuples(ECPGresult))) { va_end(args); - return (false); + return false; } ecpg_log("ECPGget_desc: CARDINALITY = %d\n", PQntuples(ECPGresult)); @@ -462,7 +462,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, var, vartype, PQgetlength(ECPGresult, act_tuple, index))) { va_end(args); - return (false); + return false; } var = (char *) var + offset; ecpg_log("ECPGget_desc: RETURNED[%d] = %d\n", act_tuple, PQgetlength(ECPGresult, act_tuple, index)); @@ -473,7 +473,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) 
snprintf(type_str, sizeof(type_str), "%d", type); ecpg_raise(lineno, ECPG_UNKNOWN_DESCRIPTOR_ITEM, ECPG_SQLSTATE_ECPG_INTERNAL_ERROR, type_str); va_end(args); - return (false); + return false; } type = va_arg(args, enum ECPGdtype); @@ -539,7 +539,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) if (!get_int_item(lineno, data_var.ind_value, data_var.ind_type, -PQgetisnull(ECPGresult, act_tuple, index))) { va_end(args); - return (false); + return false; } data_var.ind_value = (char *) data_var.ind_value + data_var.ind_offset; ecpg_log("ECPGget_desc: INDICATOR[%d] = %d\n", act_tuple, -PQgetisnull(ECPGresult, act_tuple, index)); @@ -547,7 +547,7 @@ ECPGget_desc(int lineno, const char *desc_name, int index,...) } sqlca->sqlerrd[2] = ntuples; va_end(args); - return (true); + return true; } #undef RETURN_IF_NO_DATA diff --git a/src/interfaces/ecpg/ecpglib/error.c b/src/interfaces/ecpg/ecpglib/error.c index 77d6cc2dae..f34ae4afb8 100644 --- a/src/interfaces/ecpg/ecpglib/error.c +++ b/src/interfaces/ecpg/ecpglib/error.c @@ -286,23 +286,23 @@ ecpg_check_PQresult(PGresult *results, int lineno, PGconn *connection, enum COMP { ecpg_log("ecpg_check_PQresult on line %d: no result - %s", lineno, PQerrorMessage(connection)); ecpg_raise_backend(lineno, NULL, connection, compat); - return (false); + return false; } switch (PQresultStatus(results)) { case PGRES_TUPLES_OK: - return (true); + return true; break; case PGRES_EMPTY_QUERY: /* do nothing */ ecpg_raise(lineno, ECPG_EMPTY, ECPG_SQLSTATE_ECPG_INTERNAL_ERROR, NULL); PQclear(results); - return (false); + return false; break; case PGRES_COMMAND_OK: - return (true); + return true; break; case PGRES_NONFATAL_ERROR: case PGRES_FATAL_ERROR: @@ -310,23 +310,23 @@ ecpg_check_PQresult(PGresult *results, int lineno, PGconn *connection, enum COMP ecpg_log("ecpg_check_PQresult on line %d: bad response - %s", lineno, PQresultErrorMessage(results)); ecpg_raise_backend(lineno, results, connection, compat); PQclear(results); - return (false); + return false; break; case PGRES_COPY_OUT: - return (true); + return true; break; case PGRES_COPY_IN: ecpg_log("ecpg_check_PQresult on line %d: COPY IN data transfer in progress\n", lineno); PQendcopy(connection); PQclear(results); - return (false); + return false; break; default: ecpg_log("ecpg_check_PQresult on line %d: unknown execution status type\n", lineno); ecpg_raise_backend(lineno, results, connection, compat); PQclear(results); - return (false); + return false; break; } } diff --git a/src/interfaces/ecpg/ecpglib/execute.c b/src/interfaces/ecpg/ecpglib/execute.c index 03c55d3593..50f831aa2b 100644 --- a/src/interfaces/ecpg/ecpglib/execute.c +++ b/src/interfaces/ecpg/ecpglib/execute.c @@ -58,7 +58,7 @@ quote_postgres(char *arg, bool quote, int lineno) buffer_len = 2 * length + 1; res = (char *) ecpg_alloc(buffer_len + 3, lineno); if (!res) - return (res); + return res; escaped_len = PQescapeString(res + 1, arg, buffer_len); if (length == escaped_len) { @@ -151,13 +151,13 @@ ecpg_type_infocache_push(struct ECPGtype_information_cache **cache, int oid, enu = (struct ECPGtype_information_cache *) ecpg_alloc(sizeof(struct ECPGtype_information_cache), lineno); if (new_entry == NULL) - return (false); + return false; new_entry->oid = oid; new_entry->isarray = isarray; new_entry->next = *cache; *cache = new_entry; - return (true); + return true; } static enum ARRAY_TYPE @@ -178,89 +178,89 @@ ecpg_is_type_an_array(int type, const struct statement *stmt, const struct varia /* populate cache with well known types 
to speed things up */ if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), BOOLOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), BYTEAOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), CHAROID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), NAMEOID, not_an_array_in_ecpg, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), INT8OID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), INT2OID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), INT2VECTOROID, ECPG_ARRAY_VECTOR, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), INT4OID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), REGPROCOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TEXTOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), OIDOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TIDOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), XIDOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), CIDOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), OIDVECTOROID, ECPG_ARRAY_VECTOR, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), POINTOID, ECPG_ARRAY_VECTOR, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), LSEGOID, ECPG_ARRAY_VECTOR, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), PATHOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), BOXOID, ECPG_ARRAY_VECTOR, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), POLYGONOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), LINEOID, ECPG_ARRAY_VECTOR, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), FLOAT4OID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + 
return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), FLOAT8OID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), ABSTIMEOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), RELTIMEOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TINTERVALOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), UNKNOWNOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), CIRCLEOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), CASHOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), INETOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), CIDROID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), BPCHAROID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), VARCHAROID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), DATEOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TIMEOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TIMESTAMPOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TIMESTAMPTZOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), INTERVALOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), TIMETZOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), ZPBITOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), VARBITOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; if (!ecpg_type_infocache_push(&(stmt->connection->cache_head), NUMERICOID, ECPG_ARRAY_NONE, stmt->lineno)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; } for (cache_entry = (stmt->connection->cache_head); cache_entry != NULL; cache_entry = cache_entry->next) @@ -271,13 +271,13 @@ ecpg_is_type_an_array(int type, const struct statement *stmt, const struct varia array_query = (char *) ecpg_alloc(strlen("select typlen from pg_type 
where oid= and typelem<>0") + 11, stmt->lineno); if (array_query == NULL) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; sprintf(array_query, "select typlen from pg_type where oid=%d and typelem<>0", type); query = PQexec(stmt->connection->connection, array_query); ecpg_free(array_query); if (!ecpg_check_PQresult(query, stmt->lineno, stmt->connection->connection, stmt->compat)) - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; else if (PQresultStatus(query) == PGRES_TUPLES_OK) { if (PQntuples(query) == 0) @@ -297,7 +297,7 @@ ecpg_is_type_an_array(int type, const struct statement *stmt, const struct varia PQclear(query); } else - return (ECPG_ARRAY_ERROR); + return ECPG_ARRAY_ERROR; ecpg_type_infocache_push(&(stmt->connection->cache_head), type, isarray, stmt->lineno); ecpg_log("ecpg_is_type_an_array on line %d: type (%d); C (%d); array (%s)\n", stmt->lineno, type, var->type, ECPG_IS_ARRAY(isarray) ? "yes" : "no"); @@ -1486,7 +1486,7 @@ ecpg_process_output(struct statement *stmt, bool clear_result) { ecpg_raise(stmt->lineno, ECPG_OUT_OF_MEMORY, ECPG_SQLSTATE_ECPG_OUT_OF_MEMORY, NULL); - return (false); + return false; } var = stmt->outlist; @@ -1654,7 +1654,7 @@ ecpg_process_output(struct statement *stmt, bool clear_result) else if (!INFORMIX_MODE(stmt->compat)) { ecpg_raise(stmt->lineno, ECPG_TOO_FEW_ARGUMENTS, ECPG_SQLSTATE_USING_CLAUSE_DOES_NOT_MATCH_TARGETS, NULL); - return (false); + return false; } } @@ -1830,7 +1830,7 @@ ecpg_do_prologue(int lineno, const int compat, const int force_indicator, { ecpg_raise(lineno, ECPG_INVALID_STMT, ECPG_SQLSTATE_INVALID_SQL_STATEMENT_NAME, stmt->command); ecpg_do_epilogue(stmt); - return (false); + return false; } } diff --git a/src/interfaces/ecpg/ecpglib/memory.c b/src/interfaces/ecpg/ecpglib/memory.c index a7268bb0f6..dc548a4cda 100644 --- a/src/interfaces/ecpg/ecpglib/memory.c +++ b/src/interfaces/ecpg/ecpglib/memory.c @@ -26,7 +26,7 @@ ecpg_alloc(long size, int lineno) return NULL; } - return (new); + return new; } char * @@ -40,7 +40,7 @@ ecpg_realloc(void *ptr, long size, int lineno) return NULL; } - return (new); + return new; } char * @@ -58,7 +58,7 @@ ecpg_strdup(const char *string, int lineno) return NULL; } - return (new); + return new; } /* keep a list of memory we allocated for the user */ diff --git a/src/interfaces/ecpg/ecpglib/misc.c b/src/interfaces/ecpg/ecpglib/misc.c index edd7302d54..2084d7fe60 100644 --- a/src/interfaces/ecpg/ecpglib/misc.c +++ b/src/interfaces/ecpg/ecpglib/misc.c @@ -110,7 +110,7 @@ ecpg_init(const struct connection *con, const char *connection_name, const int l { ecpg_raise(lineno, ECPG_OUT_OF_MEMORY, ECPG_SQLSTATE_ECPG_OUT_OF_MEMORY, NULL); - return (false); + return false; } ecpg_init_sqlca(sqlca); @@ -118,10 +118,10 @@ ecpg_init(const struct connection *con, const char *connection_name, const int l { ecpg_raise(lineno, ECPG_NO_CONN, ECPG_SQLSTATE_CONNECTION_DOES_NOT_EXIST, connection_name ? connection_name : ecpg_gettext("NULL")); - return (false); + return false; } - return (true); + return true; } #ifdef ENABLE_THREAD_SAFETY @@ -155,9 +155,9 @@ ECPGget_sqlca(void) ecpg_init_sqlca(sqlca); pthread_setspecific(sqlca_key, sqlca); } - return (sqlca); + return sqlca; #else - return (&sqlca); + return &sqlca; #endif } @@ -167,7 +167,7 @@ ECPGstatus(int lineno, const char *connection_name) struct connection *con = ecpg_get_connection(connection_name); if (!ecpg_init(con, connection_name, lineno)) - return (false); + return false; /* are we connected? 
*/ if (con->connection == NULL) @@ -176,7 +176,7 @@ ECPGstatus(int lineno, const char *connection_name) return false; } - return (true); + return true; } PGTransactionStatusType @@ -202,7 +202,7 @@ ECPGtrans(int lineno, const char *connection_name, const char *transaction) struct connection *con = ecpg_get_connection(connection_name); if (!ecpg_init(con, connection_name, lineno)) - return (false); + return false; ecpg_log("ECPGtrans on line %d: action \"%s\"; connection \"%s\"\n", lineno, transaction, con ? con->name : "null"); @@ -419,10 +419,10 @@ ECPGis_noind_null(enum ECPGttype type, void *ptr) break; #endif /* HAVE_LONG_LONG_INT */ case ECPGt_float: - return (_check(ptr, sizeof(float))); + return _check(ptr, sizeof(float)); break; case ECPGt_double: - return (_check(ptr, sizeof(double))); + return _check(ptr, sizeof(double)); break; case ECPGt_varchar: if (*(((struct ECPGgeneric_varchar *) ptr)->arr) == 0x00) @@ -437,10 +437,10 @@ ECPGis_noind_null(enum ECPGttype type, void *ptr) return true; break; case ECPGt_interval: - return (_check(ptr, sizeof(interval))); + return _check(ptr, sizeof(interval)); break; case ECPGt_timestamp: - return (_check(ptr, sizeof(timestamp))); + return _check(ptr, sizeof(timestamp)); break; default: break; diff --git a/src/interfaces/ecpg/ecpglib/prepare.c b/src/interfaces/ecpg/ecpglib/prepare.c index 151aa80dc6..e76b645024 100644 --- a/src/interfaces/ecpg/ecpglib/prepare.c +++ b/src/interfaces/ecpg/ecpglib/prepare.c @@ -42,7 +42,7 @@ isvarchar(unsigned char c) if (c >= 128) return true; - return (false); + return false; } static bool @@ -371,7 +371,7 @@ SearchStmtCache(const char *ecpgQuery) if (entIx >= stmtCacheEntPerBucket) entNo = 0; - return (entNo); + return entNo; } /* @@ -389,14 +389,14 @@ ecpg_freeStmtCacheEntry(int lineno, int compat, int entNo) /* entry # to free */ entry = &stmtCacheEntries[entNo]; if (!entry->stmtID[0]) /* return if the entry isn't in use */ - return (0); + return 0; con = ecpg_get_connection(entry->connection); /* free the 'prepared_statement' list entry */ this = ecpg_find_prepared_statement(entry->stmtID, con, &prev); if (this && !deallocate_one(lineno, compat, con, prev, this)) - return (-1); + return -1; entry->stmtID[0] = '\0'; @@ -407,7 +407,7 @@ ecpg_freeStmtCacheEntry(int lineno, int compat, int entNo) /* entry # to free */ entry->ecpgQuery = 0; } - return (entNo); + return entNo; } /* @@ -450,7 +450,7 @@ AddStmtToCache(int lineno, /* line # of statement */ /* 'entNo' is the entry to use - make sure its free */ if (ecpg_freeStmtCacheEntry(lineno, compat, entNo) < 0) - return (-1); + return -1; /* add the query to the entry */ entry = &stmtCacheEntries[entNo]; @@ -460,7 +460,7 @@ AddStmtToCache(int lineno, /* line # of statement */ entry->execs = 0; memcpy(entry->stmtID, stmtID, sizeof(entry->stmtID)); - return (entNo); + return entNo; } /* handle cache and preparation of statements in auto-prepare mode */ @@ -487,7 +487,7 @@ ecpg_auto_prepare(int lineno, const char *connection_name, const int compat, cha prep = ecpg_find_prepared_statement(stmtID, con, NULL); /* This prepared name doesn't exist on this connection. 
*/ if (!prep && !prepare_common(lineno, con, stmtID, query)) - return (false); + return false; *name = ecpg_strdup(stmtID, lineno); } @@ -501,9 +501,9 @@ ecpg_auto_prepare(int lineno, const char *connection_name, const int compat, cha sprintf(stmtID, "ecpg%d", nextStmtID++); if (!ECPGprepare(lineno, connection_name, 0, stmtID, query)) - return (false); + return false; if (AddStmtToCache(lineno, stmtID, connection_name, compat, query) < 0) - return (false); + return false; *name = ecpg_strdup(stmtID, lineno); } @@ -511,5 +511,5 @@ ecpg_auto_prepare(int lineno, const char *connection_name, const int compat, cha /* increase usage counter */ stmtCacheEntries[entNo].execs++; - return (true); + return true; } diff --git a/src/interfaces/ecpg/pgtypeslib/common.c b/src/interfaces/ecpg/pgtypeslib/common.c index 9084fd06b4..ae29b6c4ab 100644 --- a/src/interfaces/ecpg/pgtypeslib/common.c +++ b/src/interfaces/ecpg/pgtypeslib/common.c @@ -12,7 +12,7 @@ pgtypes_alloc(long size) if (!new) errno = ENOMEM; - return (new); + return new; } char * @@ -22,7 +22,7 @@ pgtypes_strdup(const char *str) if (!new) errno = ENOMEM; - return (new); + return new; } int diff --git a/src/interfaces/ecpg/pgtypeslib/numeric.c b/src/interfaces/ecpg/pgtypeslib/numeric.c index a93d074de2..a8619168ff 100644 --- a/src/interfaces/ecpg/pgtypeslib/numeric.c +++ b/src/interfaces/ecpg/pgtypeslib/numeric.c @@ -40,7 +40,7 @@ apply_typmod(numeric *var, long typmod) /* Do nothing if we have a default typmod (-1) */ if (typmod < (long) (VARHDRSZ)) - return (0); + return 0; typmod -= VARHDRSZ; precision = (typmod >> 16) & 0xffff; @@ -100,7 +100,7 @@ apply_typmod(numeric *var, long typmod) var->rscale = scale; var->dscale = scale; - return (0); + return 0; } #endif @@ -296,7 +296,7 @@ set_var_from_str(char *str, char **ptr, numeric *dest) dest->weight = 0; dest->rscale = dest->dscale; - return (0); + return 0; } @@ -412,16 +412,16 @@ PGTYPESnumeric_from_asc(char *str, char **endptr) char **ptr = (endptr != NULL) ? 
endptr : &realptr; if (!value) - return (NULL); + return NULL; ret = set_var_from_str(str, ptr, value); if (ret) { PGTYPESnumeric_free(value); - return (NULL); + return NULL; } - return (value); + return value; } char * @@ -445,7 +445,7 @@ PGTYPESnumeric_to_asc(numeric *num, int dscale) /* get_str_from_var may change its argument */ s = get_str_from_var(numcopy, dscale); PGTYPESnumeric_free(numcopy); - return (s); + return s; } /* ---------- diff --git a/src/interfaces/ecpg/pgtypeslib/timestamp.c b/src/interfaces/ecpg/pgtypeslib/timestamp.c index 78931399e6..fa5b32ed9d 100644 --- a/src/interfaces/ecpg/pgtypeslib/timestamp.c +++ b/src/interfaces/ecpg/pgtypeslib/timestamp.c @@ -224,14 +224,14 @@ PGTYPEStimestamp_from_asc(char *str, char **endptr) if (strlen(str) > MAXDATELEN) { errno = PGTYPES_TS_BAD_TIMESTAMP; - return (noresult); + return noresult; } if (ParseDateTime(str, lowstr, field, ftype, &nf, ptr) != 0 || DecodeDateTime(field, ftype, nf, &dtype, tm, &fsec, 0) != 0) { errno = PGTYPES_TS_BAD_TIMESTAMP; - return (noresult); + return noresult; } switch (dtype) @@ -240,7 +240,7 @@ PGTYPEStimestamp_from_asc(char *str, char **endptr) if (tm2timestamp(tm, fsec, NULL, &result) != 0) { errno = PGTYPES_TS_BAD_TIMESTAMP; - return (noresult); + return noresult; } break; @@ -258,11 +258,11 @@ PGTYPEStimestamp_from_asc(char *str, char **endptr) case DTK_INVALID: errno = PGTYPES_TS_BAD_TIMESTAMP; - return (noresult); + return noresult; default: errno = PGTYPES_TS_BAD_TIMESTAMP; - return (noresult); + return noresult; } /* AdjustTimestampForTypmod(&result, typmod); */ diff --git a/src/interfaces/ecpg/preproc/ecpg.c b/src/interfaces/ecpg/preproc/ecpg.c index bad0a667fc..536185fa1c 100644 --- a/src/interfaces/ecpg/preproc/ecpg.c +++ b/src/interfaces/ecpg/preproc/ecpg.c @@ -137,7 +137,7 @@ main(int argc, char *const argv[]) if (find_my_exec(argv[0], my_exec_path) < 0) { fprintf(stderr, _("%s: could not locate my own executable path\n"), argv[0]); - return (ILLEGAL_OPTION); + return ILLEGAL_OPTION; } if (argc > 1) @@ -266,7 +266,7 @@ main(int argc, char *const argv[]) { fprintf(stderr, _("%s: no input files specified\n"), progname); fprintf(stderr, _("Try \"%s --help\" for more information.\n"), argv[0]); - return (ILLEGAL_OPTION); + return ILLEGAL_OPTION; } else { diff --git a/src/interfaces/ecpg/preproc/ecpg.header b/src/interfaces/ecpg/preproc/ecpg.header index 2562366bbe..e28d7e694d 100644 --- a/src/interfaces/ecpg/preproc/ecpg.header +++ b/src/interfaces/ecpg/preproc/ecpg.header @@ -142,7 +142,7 @@ cat2_str(char *str1, char *str2) strcat(res_str, str2); free(str1); free(str2); - return(res_str); + return res_str; } static char * @@ -162,7 +162,7 @@ cat_str(int count, ...) 
va_end(args); - return(res_str); + return res_str; } static char * @@ -174,7 +174,7 @@ make2_str(char *str1, char *str2) strcat(res_str, str2); free(str1); free(str2); - return(res_str); + return res_str; } static char * @@ -188,7 +188,7 @@ make3_str(char *str1, char *str2, char *str3) free(str1); free(str2); free(str3); - return(res_str); + return res_str; } /* and the rest */ @@ -233,7 +233,7 @@ create_questionmarks(char *name, bool array) /* removed the trailing " ," */ result[strlen(result)-3] = '\0'; - return(result); + return result; } static char * diff --git a/src/interfaces/ecpg/preproc/pgc.l b/src/interfaces/ecpg/preproc/pgc.l index 3598a200d0..fc450f30ab 100644 --- a/src/interfaces/ecpg/preproc/pgc.l +++ b/src/interfaces/ecpg/preproc/pgc.l @@ -768,7 +768,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ } :{identifier}((("->"|\.){identifier})|(\[{array}\]))* { base_yylval.str = mm_strdup(yytext+1); - return(CVARIABLE); + return CVARIABLE; } {identifier} { const ScanKeyword *keyword; @@ -832,7 +832,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { base_yylval.str = mm_strdup(yytext); - return(CPP_LINE); + return CPP_LINE; } } {cppinclude_next} { @@ -844,12 +844,12 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { base_yylval.str = mm_strdup(yytext); - return(CPP_LINE); + return CPP_LINE; } } {cppline} { base_yylval.str = mm_strdup(yytext); - return(CPP_LINE); + return CPP_LINE; } {identifier} { const ScanKeyword *keyword; @@ -879,38 +879,38 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ } } {xcstop} { mmerror(PARSE_ERROR, ET_ERROR, "nested /* ... */ comments"); } -":" { return(':'); } -";" { return(';'); } -"," { return(','); } -"*" { return('*'); } -"%" { return('%'); } -"/" { return('/'); } -"+" { return('+'); } -"-" { return('-'); } -"(" { parenths_open++; return('('); } -")" { parenths_open--; return(')'); } +":" { return ':'; } +";" { return ';'; } +"," { return ','; } +"*" { return '*'; } +"%" { return '%'; } +"/" { return '/'; } +"+" { return '+'; } +"-" { return '-'; } +"(" { parenths_open++; return '('; } +")" { parenths_open--; return ')'; } {space} { ECHO; } -\{ { return('{'); } -\} { return('}'); } -\[ { return('['); } -\] { return(']'); } -\= { return('='); } -"->" { return(S_MEMBER); } -">>" { return(S_RSHIFT); } -"<<" { return(S_LSHIFT); } -"||" { return(S_OR); } -"&&" { return(S_AND); } -"++" { return(S_INC); } -"--" { return(S_DEC); } -"==" { return(S_EQUAL); } -"!=" { return(S_NEQUAL); } -"+=" { return(S_ADD); } -"-=" { return(S_SUB); } -"*=" { return(S_MUL); } -"/=" { return(S_DIV); } -"%=" { return(S_MOD); } -"->*" { return(S_MEMPOINT); } -".*" { return(S_DOTPOINT); } +\{ { return '{'; } +\} { return '}'; } +\[ { return '['; } +\] { return ']'; } +\= { return '='; } +"->" { return S_MEMBER; } +">>" { return S_RSHIFT; } +"<<" { return S_LSHIFT; } +"||" { return S_OR; } +"&&" { return S_AND; } +"++" { return S_INC; } +"--" { return S_DEC; } +"==" { return S_EQUAL; } +"!=" { return S_NEQUAL; } +"+=" { return S_ADD; } +"-=" { return S_SUB; } +"*=" { return S_MUL; } +"/=" { return S_DIV; } +"%=" { return S_MOD; } +"->*" { return S_MEMPOINT; } +".*" { return S_DOTPOINT; } {other} { return S_ANYTHING; } {exec_sql}{define}{space}* { BEGIN(def_ident); } {informix_special}{define}{space}* { @@ -922,7 +922,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { yyless(1); - return 
(S_ANYTHING); + return S_ANYTHING; } } {exec_sql}{undef}{space}* { BEGIN(undef); } @@ -935,7 +935,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { yyless(1); - return (S_ANYTHING); + return S_ANYTHING; } } {identifier}{space}*";" { @@ -984,7 +984,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { yyless(1); - return (S_ANYTHING); + return S_ANYTHING; } } {exec_sql}{ifdef}{space}* { ifcond = TRUE; BEGIN(xcond); } @@ -998,7 +998,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { yyless(1); - return (S_ANYTHING); + return S_ANYTHING; } } {exec_sql}{ifndef}{space}* { ifcond = FALSE; BEGIN(xcond); } @@ -1012,7 +1012,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { yyless(1); - return (S_ANYTHING); + return S_ANYTHING; } } {exec_sql}{elif}{space}* { /* pop stack */ @@ -1043,7 +1043,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { yyless(1); - return (S_ANYTHING); + return S_ANYTHING; } } @@ -1085,7 +1085,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { yyless(1); - return (S_ANYTHING); + return S_ANYTHING; } } {exec_sql}{endif}{space}*";" { @@ -1116,7 +1116,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else { yyless(1); - return (S_ANYTHING); + return S_ANYTHING; } } diff --git a/src/interfaces/ecpg/preproc/type.c b/src/interfaces/ecpg/preproc/type.c index 750cdf9c7c..256a3c395c 100644 --- a/src/interfaces/ecpg/preproc/type.c +++ b/src/interfaces/ecpg/preproc/type.c @@ -69,7 +69,7 @@ ECPGstruct_member_dup(struct ECPGstruct_member *rm) rm = rm->next; } - return (new); + return new; } /* The NAME argument is copied. The type argument is preserved as a pointer. 
*/ @@ -135,78 +135,78 @@ get_type(enum ECPGttype type) switch (type) { case ECPGt_char: - return ("ECPGt_char"); + return "ECPGt_char"; break; case ECPGt_unsigned_char: - return ("ECPGt_unsigned_char"); + return "ECPGt_unsigned_char"; break; case ECPGt_short: - return ("ECPGt_short"); + return "ECPGt_short"; break; case ECPGt_unsigned_short: - return ("ECPGt_unsigned_short"); + return "ECPGt_unsigned_short"; break; case ECPGt_int: - return ("ECPGt_int"); + return "ECPGt_int"; break; case ECPGt_unsigned_int: - return ("ECPGt_unsigned_int"); + return "ECPGt_unsigned_int"; break; case ECPGt_long: - return ("ECPGt_long"); + return "ECPGt_long"; break; case ECPGt_unsigned_long: - return ("ECPGt_unsigned_long"); + return "ECPGt_unsigned_long"; break; case ECPGt_long_long: - return ("ECPGt_long_long"); + return "ECPGt_long_long"; break; case ECPGt_unsigned_long_long: - return ("ECPGt_unsigned_long_long"); + return "ECPGt_unsigned_long_long"; break; case ECPGt_float: - return ("ECPGt_float"); + return "ECPGt_float"; break; case ECPGt_double: - return ("ECPGt_double"); + return "ECPGt_double"; break; case ECPGt_bool: - return ("ECPGt_bool"); + return "ECPGt_bool"; break; case ECPGt_varchar: - return ("ECPGt_varchar"); + return "ECPGt_varchar"; case ECPGt_NO_INDICATOR: /* no indicator */ - return ("ECPGt_NO_INDICATOR"); + return "ECPGt_NO_INDICATOR"; break; case ECPGt_char_variable: /* string that should not be quoted */ - return ("ECPGt_char_variable"); + return "ECPGt_char_variable"; break; case ECPGt_const: /* constant string quoted */ - return ("ECPGt_const"); + return "ECPGt_const"; break; case ECPGt_decimal: - return ("ECPGt_decimal"); + return "ECPGt_decimal"; break; case ECPGt_numeric: - return ("ECPGt_numeric"); + return "ECPGt_numeric"; break; case ECPGt_interval: - return ("ECPGt_interval"); + return "ECPGt_interval"; break; case ECPGt_descriptor: - return ("ECPGt_descriptor"); + return "ECPGt_descriptor"; break; case ECPGt_sqlda: - return ("ECPGt_sqlda"); + return "ECPGt_sqlda"; break; case ECPGt_date: - return ("ECPGt_date"); + return "ECPGt_date"; break; case ECPGt_timestamp: - return ("ECPGt_timestamp"); + return "ECPGt_timestamp"; break; case ECPGt_string: - return ("ECPGt_string"); + return "ECPGt_string"; break; default: mmerror(PARSE_ERROR, ET_ERROR, "unrecognized variable type code %d", type); @@ -674,51 +674,51 @@ get_dtype(enum ECPGdtype type) switch (type) { case ECPGd_count: - return ("ECPGd_countr"); + return "ECPGd_countr"; break; case ECPGd_data: - return ("ECPGd_data"); + return "ECPGd_data"; break; case ECPGd_di_code: - return ("ECPGd_di_code"); + return "ECPGd_di_code"; break; case ECPGd_di_precision: - return ("ECPGd_di_precision"); + return "ECPGd_di_precision"; break; case ECPGd_indicator: - return ("ECPGd_indicator"); + return "ECPGd_indicator"; break; case ECPGd_key_member: - return ("ECPGd_key_member"); + return "ECPGd_key_member"; break; case ECPGd_length: - return ("ECPGd_length"); + return "ECPGd_length"; break; case ECPGd_name: - return ("ECPGd_name"); + return "ECPGd_name"; break; case ECPGd_nullable: - return ("ECPGd_nullable"); + return "ECPGd_nullable"; break; case ECPGd_octet: - return ("ECPGd_octet"); + return "ECPGd_octet"; break; case ECPGd_precision: - return ("ECPGd_precision"); + return "ECPGd_precision"; break; case ECPGd_ret_length: - return ("ECPGd_ret_length"); + return "ECPGd_ret_length"; case ECPGd_ret_octet: - return ("ECPGd_ret_octet"); + return "ECPGd_ret_octet"; break; case ECPGd_scale: - return ("ECPGd_scale"); + return "ECPGd_scale"; 
break; case ECPGd_type: - return ("ECPGd_type"); + return "ECPGd_type"; break; case ECPGd_cardinality: - return ("ECPGd_cardinality"); + return "ECPGd_cardinality"; default: mmerror(PARSE_ERROR, ET_ERROR, "unrecognized descriptor item code %d", type); } diff --git a/src/interfaces/ecpg/preproc/variable.c b/src/interfaces/ecpg/preproc/variable.c index 31225738e0..39bf3b2474 100644 --- a/src/interfaces/ecpg/preproc/variable.c +++ b/src/interfaces/ecpg/preproc/variable.c @@ -18,7 +18,7 @@ new_variable(const char *name, struct ECPGtype *type, int brace_level) p->next = allvariables; allvariables = p; - return (p); + return p; } static struct variable * @@ -44,12 +44,12 @@ find_struct_member(char *name, char *str, struct ECPGstruct_member *members, int switch (members->type->type) { case ECPGt_array: - return (new_variable(name, ECPGmake_array_type(ECPGmake_simple_type(members->type->u.element->type, members->type->u.element->size, members->type->u.element->counter), members->type->size), brace_level)); + return new_variable(name, ECPGmake_array_type(ECPGmake_simple_type(members->type->u.element->type, members->type->u.element->size, members->type->u.element->counter), members->type->size), brace_level); case ECPGt_struct: case ECPGt_union: - return (new_variable(name, ECPGmake_struct_type(members->type->u.members, members->type->type, members->type->type_name, members->type->struct_sizeof), brace_level)); + return new_variable(name, ECPGmake_struct_type(members->type->u.members, members->type->type, members->type->type_name, members->type->struct_sizeof), brace_level); default: - return (new_variable(name, ECPGmake_simple_type(members->type->type, members->type->size, members->type->counter), brace_level)); + return new_variable(name, ECPGmake_simple_type(members->type->type, members->type->size, members->type->counter), brace_level); } } else @@ -91,26 +91,26 @@ find_struct_member(char *name, char *str, struct ECPGstruct_member *members, int switch (members->type->u.element->type) { case ECPGt_array: - return (new_variable(name, ECPGmake_array_type(ECPGmake_simple_type(members->type->u.element->u.element->type, members->type->u.element->u.element->size, members->type->u.element->u.element->counter), members->type->u.element->size), brace_level)); + return new_variable(name, ECPGmake_array_type(ECPGmake_simple_type(members->type->u.element->u.element->type, members->type->u.element->u.element->size, members->type->u.element->u.element->counter), members->type->u.element->size), brace_level); case ECPGt_struct: case ECPGt_union: - return (new_variable(name, ECPGmake_struct_type(members->type->u.element->u.members, members->type->u.element->type, members->type->u.element->type_name, members->type->u.element->struct_sizeof), brace_level)); + return new_variable(name, ECPGmake_struct_type(members->type->u.element->u.members, members->type->u.element->type, members->type->u.element->type_name, members->type->u.element->struct_sizeof), brace_level); default: - return (new_variable(name, ECPGmake_simple_type(members->type->u.element->type, members->type->u.element->size, members->type->u.element->counter), brace_level)); + return new_variable(name, ECPGmake_simple_type(members->type->u.element->type, members->type->u.element->size, members->type->u.element->counter), brace_level); } break; case '-': if (members->type->type == ECPGt_array) - return (find_struct_member(name, ++end, members->type->u.element->u.members, brace_level)); + return find_struct_member(name, ++end, 
members->type->u.element->u.members, brace_level); else - return (find_struct_member(name, ++end, members->type->u.members, brace_level)); + return find_struct_member(name, ++end, members->type->u.members, brace_level); break; break; case '.': if (members->type->type == ECPGt_array) - return (find_struct_member(name, end, members->type->u.element->u.members, brace_level)); + return find_struct_member(name, end, members->type->u.element->u.members, brace_level); else - return (find_struct_member(name, end, members->type->u.members, brace_level)); + return find_struct_member(name, end, members->type->u.members, brace_level); break; default: mmfatal(PARSE_ERROR, "incorrectly formed variable \"%s\"", name); @@ -120,7 +120,7 @@ find_struct_member(char *name, char *str, struct ECPGstruct_member *members, int } } - return (NULL); + return NULL; } static struct variable * @@ -185,7 +185,7 @@ find_simple(char *name) return p; } - return (NULL); + return NULL; } /* Note that this function will end the program in case of an unknown */ @@ -236,12 +236,12 @@ find_variable(char *name) switch (p->type->u.element->type) { case ECPGt_array: - return (new_variable(name, ECPGmake_array_type(ECPGmake_simple_type(p->type->u.element->u.element->type, p->type->u.element->u.element->size, p->type->u.element->u.element->counter), p->type->u.element->size), p->brace_level)); + return new_variable(name, ECPGmake_array_type(ECPGmake_simple_type(p->type->u.element->u.element->type, p->type->u.element->u.element->size, p->type->u.element->u.element->counter), p->type->u.element->size), p->brace_level); case ECPGt_struct: case ECPGt_union: - return (new_variable(name, ECPGmake_struct_type(p->type->u.element->u.members, p->type->u.element->type, p->type->u.element->type_name, p->type->u.element->struct_sizeof), p->brace_level)); + return new_variable(name, ECPGmake_struct_type(p->type->u.element->u.members, p->type->u.element->type, p->type->u.element->type_name, p->type->u.element->struct_sizeof), p->brace_level); default: - return (new_variable(name, ECPGmake_simple_type(p->type->u.element->type, p->type->u.element->size, p->type->u.element->counter), p->brace_level)); + return new_variable(name, ECPGmake_simple_type(p->type->u.element->type, p->type->u.element->size, p->type->u.element->counter), p->brace_level); } } } @@ -254,7 +254,7 @@ find_variable(char *name) if (p == NULL) mmfatal(PARSE_ERROR, "variable \"%s\" is not declared", name); - return (p); + return p; } void @@ -505,7 +505,7 @@ get_typedef(char *name) if (!this) mmfatal(PARSE_ERROR, "unrecognized data type name \"%s\"", name); - return (this); + return this; } void diff --git a/src/interfaces/ecpg/test/compat_informix/dec_test.pgc b/src/interfaces/ecpg/test/compat_informix/dec_test.pgc index b374bda724..c6a4ed85ee 100644 --- a/src/interfaces/ecpg/test/compat_informix/dec_test.pgc +++ b/src/interfaces/ecpg/test/compat_informix/dec_test.pgc @@ -206,7 +206,7 @@ main(void) } free(decarr); - return (0); + return 0; } static void diff --git a/src/interfaces/ecpg/test/compat_informix/describe.pgc b/src/interfaces/ecpg/test/compat_informix/describe.pgc index 6fcccc6ab4..4ee7254dff 100644 --- a/src/interfaces/ecpg/test/compat_informix/describe.pgc +++ b/src/interfaces/ecpg/test/compat_informix/describe.pgc @@ -195,5 +195,5 @@ exec sql end declare section; strcpy(msg, "disconnect"); exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/compat_informix/rfmtdate.pgc b/src/interfaces/ecpg/test/compat_informix/rfmtdate.pgc index 
c799de6762..f1a9048889 100644 --- a/src/interfaces/ecpg/test/compat_informix/rfmtdate.pgc +++ b/src/interfaces/ecpg/test/compat_informix/rfmtdate.pgc @@ -147,7 +147,7 @@ main(void) /* ECPG_INFORMIX_BAD_YEAR */ /* ??? */ - return (0); + return 0; } static void diff --git a/src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc b/src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc index 162b42505f..a1070e1331 100644 --- a/src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc +++ b/src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc @@ -45,7 +45,7 @@ main(void) fmtlong(-8494493, "abc: ################+-+"); fmtlong(-8494493, "+<<<<,<<<,<<<,<<<"); - return (0); + return 0; } static void diff --git a/src/interfaces/ecpg/test/compat_informix/sqlda.pgc b/src/interfaces/ecpg/test/compat_informix/sqlda.pgc index e1142d2b22..423ce41089 100644 --- a/src/interfaces/ecpg/test/compat_informix/sqlda.pgc +++ b/src/interfaces/ecpg/test/compat_informix/sqlda.pgc @@ -246,5 +246,5 @@ exec sql end declare section; strcpy(msg, "disconnect"); exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/connect/test1.pgc b/src/interfaces/ecpg/test/connect/test1.pgc index 4868b3dd81..86633a7af6 100644 --- a/src/interfaces/ecpg/test/connect/test1.pgc +++ b/src/interfaces/ecpg/test/connect/test1.pgc @@ -61,5 +61,5 @@ exec sql end declare section; exec sql connect to unix:postgresql://localhost/ecpg2_regression user regress_ecpg_user1 identified by "wrongpw"; /* no disconnect necessary */ - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/connect/test2.pgc b/src/interfaces/ecpg/test/connect/test2.pgc index 0ced76ec6e..f31a7f9bb0 100644 --- a/src/interfaces/ecpg/test/connect/test2.pgc +++ b/src/interfaces/ecpg/test/connect/test2.pgc @@ -42,5 +42,5 @@ exec sql end declare section; /* disconnect from "second" */ exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/connect/test3.pgc b/src/interfaces/ecpg/test/connect/test3.pgc index ecf68d42ac..5d075f0e99 100644 --- a/src/interfaces/ecpg/test/connect/test3.pgc +++ b/src/interfaces/ecpg/test/connect/test3.pgc @@ -48,5 +48,5 @@ exec sql end declare section; * are used in other tests */ - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/connect/test4.pgc b/src/interfaces/ecpg/test/connect/test4.pgc index 185582ca2a..b20b17471c 100644 --- a/src/interfaces/ecpg/test/connect/test4.pgc +++ b/src/interfaces/ecpg/test/connect/test4.pgc @@ -16,5 +16,5 @@ main(void) exec sql disconnect DEFAULT; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/connect/test5.pgc b/src/interfaces/ecpg/test/connect/test5.pgc index d64ca50c93..53b86556b1 100644 --- a/src/interfaces/ecpg/test/connect/test5.pgc +++ b/src/interfaces/ecpg/test/connect/test5.pgc @@ -72,5 +72,5 @@ exec sql end declare section; /* not connected */ exec sql disconnect nonexistant; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/compat_informix-dec_test.c b/src/interfaces/ecpg/test/expected/compat_informix-dec_test.c index 3b443e3ffd..8951cdb227 100644 --- a/src/interfaces/ecpg/test/expected/compat_informix-dec_test.c +++ b/src/interfaces/ecpg/test/expected/compat_informix-dec_test.c @@ -226,7 +226,7 @@ main(void) } free(decarr); - return (0); + return 0; } static void diff --git a/src/interfaces/ecpg/test/expected/compat_informix-describe.c b/src/interfaces/ecpg/test/expected/compat_informix-describe.c index 1b5aae0df7..031a2d776c 100644 --- 
a/src/interfaces/ecpg/test/expected/compat_informix-describe.c +++ b/src/interfaces/ecpg/test/expected/compat_informix-describe.c @@ -463,5 +463,5 @@ if (sqlca.sqlcode < 0) exit (1);} #line 196 "describe.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/compat_informix-rfmtdate.c b/src/interfaces/ecpg/test/expected/compat_informix-rfmtdate.c index ac133c52ef..87a435e9bd 100644 --- a/src/interfaces/ecpg/test/expected/compat_informix-rfmtdate.c +++ b/src/interfaces/ecpg/test/expected/compat_informix-rfmtdate.c @@ -158,7 +158,7 @@ main(void) /* ECPG_INFORMIX_BAD_YEAR */ /* ??? */ - return (0); + return 0; } static void diff --git a/src/interfaces/ecpg/test/expected/compat_informix-rfmtlong.c b/src/interfaces/ecpg/test/expected/compat_informix-rfmtlong.c index 5f44b35ee7..70e015a130 100644 --- a/src/interfaces/ecpg/test/expected/compat_informix-rfmtlong.c +++ b/src/interfaces/ecpg/test/expected/compat_informix-rfmtlong.c @@ -56,7 +56,7 @@ main(void) fmtlong(-8494493, "abc: ################+-+"); fmtlong(-8494493, "+<<<<,<<<,<<<,<<<"); - return (0); + return 0; } static void diff --git a/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c b/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c index 1df87f83ef..fa2e569bbd 100644 --- a/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c +++ b/src/interfaces/ecpg/test/expected/compat_informix-sqlda.c @@ -526,5 +526,5 @@ if (sqlca.sqlcode < 0) exit (1);} #line 247 "sqlda.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/connect-test1.c b/src/interfaces/ecpg/test/expected/connect-test1.c index 6471abb623..18e5968d3a 100644 --- a/src/interfaces/ecpg/test/expected/connect-test1.c +++ b/src/interfaces/ecpg/test/expected/connect-test1.c @@ -120,5 +120,5 @@ main(void) /* no disconnect necessary */ - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/connect-test2.c b/src/interfaces/ecpg/test/expected/connect-test2.c index cf87c63386..deb7f19170 100644 --- a/src/interfaces/ecpg/test/expected/connect-test2.c +++ b/src/interfaces/ecpg/test/expected/connect-test2.c @@ -100,5 +100,5 @@ main(void) #line 43 "test2.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/connect-test3.c b/src/interfaces/ecpg/test/expected/connect-test3.c index 5bab6ba8f0..1a74a06973 100644 --- a/src/interfaces/ecpg/test/expected/connect-test3.c +++ b/src/interfaces/ecpg/test/expected/connect-test3.c @@ -102,5 +102,5 @@ main(void) * are used in other tests */ - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/connect-test4.c b/src/interfaces/ecpg/test/expected/connect-test4.c index e1ae3e9a66..ff13e4ec41 100644 --- a/src/interfaces/ecpg/test/expected/connect-test4.c +++ b/src/interfaces/ecpg/test/expected/connect-test4.c @@ -40,5 +40,5 @@ main(void) #line 17 "test4.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/connect-test5.c b/src/interfaces/ecpg/test/expected/connect-test5.c index e991ee79b6..4b6569e763 100644 --- a/src/interfaces/ecpg/test/expected/connect-test5.c +++ b/src/interfaces/ecpg/test/expected/connect-test5.c @@ -158,5 +158,5 @@ main(void) #line 73 "test5.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test.c b/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test.c index 00d43915b2..6801669a0e 100644 --- a/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test.c +++ b/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test.c @@ -449,5 +449,5 @@ if 
(sqlca.sqlcode < 0) sqlprint ( );} #line 366 "dt_test.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test2.c b/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test2.c index b6e77562b2..4024980f1e 100644 --- a/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test2.c +++ b/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test2.c @@ -182,5 +182,5 @@ main(void) PGTYPESinterval_free(i1); } - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/pgtypeslib-nan_test.c b/src/interfaces/ecpg/test/expected/pgtypeslib-nan_test.c index ecd2343117..c831284059 100644 --- a/src/interfaces/ecpg/test/expected/pgtypeslib-nan_test.c +++ b/src/interfaces/ecpg/test/expected/pgtypeslib-nan_test.c @@ -269,5 +269,5 @@ if (sqlca.sqlcode < 0) sqlprint ( );} #line 91 "nan_test.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/pgtypeslib-num_test.c b/src/interfaces/ecpg/test/expected/pgtypeslib-num_test.c index 8019a8f63e..47c320291f 100644 --- a/src/interfaces/ecpg/test/expected/pgtypeslib-num_test.c +++ b/src/interfaces/ecpg/test/expected/pgtypeslib-num_test.c @@ -157,5 +157,5 @@ if (sqlca.sqlcode < 0) sqlprint ( );} #line 94 "num_test.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/pgtypeslib-num_test2.c b/src/interfaces/ecpg/test/expected/pgtypeslib-num_test2.c index 83636ad880..d31834e2c0 100644 --- a/src/interfaces/ecpg/test/expected/pgtypeslib-num_test2.c +++ b/src/interfaces/ecpg/test/expected/pgtypeslib-num_test2.c @@ -228,7 +228,7 @@ main(void) } free(numarr); - return (0); + return 0; } static void diff --git a/src/interfaces/ecpg/test/expected/preproc-array_of_struct.c b/src/interfaces/ecpg/test/expected/preproc-array_of_struct.c index c4ae862b49..1cf371092f 100644 --- a/src/interfaces/ecpg/test/expected/preproc-array_of_struct.c +++ b/src/interfaces/ecpg/test/expected/preproc-array_of_struct.c @@ -284,5 +284,5 @@ if (sqlca.sqlcode < 0) sqlprint();} #line 92 "array_of_struct.pgc" - return( 0 ); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/preproc-cursor.c b/src/interfaces/ecpg/test/expected/preproc-cursor.c index f7da753a3d..4822901742 100644 --- a/src/interfaces/ecpg/test/expected/preproc-cursor.c +++ b/src/interfaces/ecpg/test/expected/preproc-cursor.c @@ -830,5 +830,5 @@ if (sqlca.sqlcode < 0) exit (1);} #line 253 "cursor.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/preproc-define.c b/src/interfaces/ecpg/test/expected/preproc-define.c index c8ae6f98dc..bde15b74a0 100644 --- a/src/interfaces/ecpg/test/expected/preproc-define.c +++ b/src/interfaces/ecpg/test/expected/preproc-define.c @@ -164,5 +164,5 @@ if (sqlca.sqlcode < 0) sqlprint();} #line 59 "define.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/preproc-describe.c b/src/interfaces/ecpg/test/expected/preproc-describe.c index 1a9dd85438..143e966261 100644 --- a/src/interfaces/ecpg/test/expected/preproc-describe.c +++ b/src/interfaces/ecpg/test/expected/preproc-describe.c @@ -477,5 +477,5 @@ if (sqlca.sqlcode < 0) exit (1);} #line 144 "describe.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/preproc-outofscope.c b/src/interfaces/ecpg/test/expected/preproc-outofscope.c index b3deb221d7..f4676a083a 100644 --- a/src/interfaces/ecpg/test/expected/preproc-outofscope.c +++ b/src/interfaces/ecpg/test/expected/preproc-outofscope.c @@ -374,5 +374,5 @@ if (sqlca.sqlcode < 0) exit (1);} #line 124 "outofscope.pgc" - return 
(0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/preproc-pointer_to_struct.c b/src/interfaces/ecpg/test/expected/preproc-pointer_to_struct.c index 5a0f9caee3..7b1f58e835 100644 --- a/src/interfaces/ecpg/test/expected/preproc-pointer_to_struct.c +++ b/src/interfaces/ecpg/test/expected/preproc-pointer_to_struct.c @@ -289,5 +289,5 @@ if (sqlca.sqlcode < 0) sqlprint();} /* All the memory will anyway be freed at the end */ - return( 0 ); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/preproc-strings.c b/src/interfaces/ecpg/test/expected/preproc-strings.c index 89d17e96c9..2053443e81 100644 --- a/src/interfaces/ecpg/test/expected/preproc-strings.c +++ b/src/interfaces/ecpg/test/expected/preproc-strings.c @@ -66,5 +66,5 @@ int main(void) { ECPGdisconnect(__LINE__, "CURRENT");} #line 25 "strings.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/preproc-variable.c b/src/interfaces/ecpg/test/expected/preproc-variable.c index 7fd03ba7d3..08e2355d16 100644 --- a/src/interfaces/ecpg/test/expected/preproc-variable.c +++ b/src/interfaces/ecpg/test/expected/preproc-variable.c @@ -272,5 +272,5 @@ if (sqlca.sqlcode < 0) exit (1);} #line 98 "variable.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/sql-array.c b/src/interfaces/ecpg/test/expected/sql-array.c index 781c426771..f5eb73d185 100644 --- a/src/interfaces/ecpg/test/expected/sql-array.c +++ b/src/interfaces/ecpg/test/expected/sql-array.c @@ -351,5 +351,5 @@ if (sqlca.sqlcode < 0) sqlprint();} free(t); - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/sql-describe.c b/src/interfaces/ecpg/test/expected/sql-describe.c index 155e206f29..b79a6f4016 100644 --- a/src/interfaces/ecpg/test/expected/sql-describe.c +++ b/src/interfaces/ecpg/test/expected/sql-describe.c @@ -461,5 +461,5 @@ if (sqlca.sqlcode < 0) exit (1);} #line 196 "describe.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/sql-execute.c b/src/interfaces/ecpg/test/expected/sql-execute.c index aee3c1bcb7..cac91dc599 100644 --- a/src/interfaces/ecpg/test/expected/sql-execute.c +++ b/src/interfaces/ecpg/test/expected/sql-execute.c @@ -327,5 +327,5 @@ if (sqlca.sqlcode < 0) sqlprint();} #line 110 "execute.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/sql-oldexec.c b/src/interfaces/ecpg/test/expected/sql-oldexec.c index 5b74dda9b5..d6a661e3fb 100644 --- a/src/interfaces/ecpg/test/expected/sql-oldexec.c +++ b/src/interfaces/ecpg/test/expected/sql-oldexec.c @@ -247,5 +247,5 @@ if (sqlca.sqlcode < 0) sqlprint();} #line 87 "oldexec.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/sql-sqlda.c b/src/interfaces/ecpg/test/expected/sql-sqlda.c index 15c81c6b12..ffaf52ca5c 100644 --- a/src/interfaces/ecpg/test/expected/sql-sqlda.c +++ b/src/interfaces/ecpg/test/expected/sql-sqlda.c @@ -527,5 +527,5 @@ if (sqlca.sqlcode < 0) exit (1);} #line 247 "sqlda.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/sql-twophase.c b/src/interfaces/ecpg/test/expected/sql-twophase.c index cf491fc078..20b54d35e5 100644 --- a/src/interfaces/ecpg/test/expected/sql-twophase.c +++ b/src/interfaces/ecpg/test/expected/sql-twophase.c @@ -110,5 +110,5 @@ if (sqlca.sqlcode < 0) sqlprint();} #line 41 "twophase.pgc" - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/expected/thread-thread.c b/src/interfaces/ecpg/test/expected/thread-thread.c index 6e809d60fb..420bbf194a 100644 --- 
a/src/interfaces/ecpg/test/expected/thread-thread.c +++ b/src/interfaces/ecpg/test/expected/thread-thread.c @@ -91,7 +91,7 @@ int main() if( threads == NULL ) { fprintf(stderr, "Cannot alloc memory\n"); - return( 1 ); + return 1; } for( n = 0; n < nthreads; n++ ) { @@ -133,7 +133,7 @@ int main() else printf("ERROR: Failure - expecting %d rows, got %d.\n", nthreads * iterations, l_rows); - return( 0 ); + return 0; } void *test_thread(void *arg) @@ -177,7 +177,7 @@ if (sqlca.sqlcode < 0) sqlprint();} if( sqlca.sqlcode != 0 ) { printf("%s: ERROR: cannot connect to database!\n", l_connection); - return( NULL ); + return NULL; } { ECPGtrans(__LINE__, l_connection, "begin"); #line 126 "thread.pgc" @@ -216,6 +216,6 @@ if (sqlca.sqlcode < 0) sqlprint();} if (sqlca.sqlcode < 0) sqlprint();} #line 138 "thread.pgc" - return( NULL ); + return NULL; } #endif /* ENABLE_THREAD_SAFETY */ diff --git a/src/interfaces/ecpg/test/expected/thread-thread_implicit.c b/src/interfaces/ecpg/test/expected/thread-thread_implicit.c index b42c556633..4bddca9fb9 100644 --- a/src/interfaces/ecpg/test/expected/thread-thread_implicit.c +++ b/src/interfaces/ecpg/test/expected/thread-thread_implicit.c @@ -92,7 +92,7 @@ int main() if( threads == NULL ) { fprintf(stderr, "Cannot alloc memory\n"); - return( 1 ); + return 1; } for( n = 0; n < nthreads; n++ ) { @@ -134,7 +134,7 @@ int main() else printf("ERROR: Failure - expecting %d rows, got %d.\n", nthreads * iterations, l_rows); - return( 0 ); + return 0; } void *test_thread(void *arg) @@ -178,7 +178,7 @@ if (sqlca.sqlcode < 0) sqlprint();} if( sqlca.sqlcode != 0 ) { printf("%s: ERROR: cannot connect to database!\n", l_connection); - return( NULL ); + return NULL; } { ECPGtrans(__LINE__, NULL, "begin"); #line 127 "thread_implicit.pgc" @@ -217,6 +217,6 @@ if (sqlca.sqlcode < 0) sqlprint();} if (sqlca.sqlcode < 0) sqlprint();} #line 139 "thread_implicit.pgc" - return( NULL ); + return NULL; } #endif /* ENABLE_THREAD_SAFETY */ diff --git a/src/interfaces/ecpg/test/performance/perftest.pgc b/src/interfaces/ecpg/test/performance/perftest.pgc index 3ed2ba0f5e..c8a9934986 100644 --- a/src/interfaces/ecpg/test/performance/perftest.pgc +++ b/src/interfaces/ecpg/test/performance/perftest.pgc @@ -140,5 +140,5 @@ exec sql end declare section; exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc b/src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc index 768cbd5e6f..5e0d86847d 100644 --- a/src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc +++ b/src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc @@ -365,5 +365,5 @@ main(void) exec sql rollback; exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/pgtypeslib/dt_test2.pgc b/src/interfaces/ecpg/test/pgtypeslib/dt_test2.pgc index d519305e18..9e1f432823 100644 --- a/src/interfaces/ecpg/test/pgtypeslib/dt_test2.pgc +++ b/src/interfaces/ecpg/test/pgtypeslib/dt_test2.pgc @@ -147,5 +147,5 @@ main(void) PGTYPESinterval_free(i1); } - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/pgtypeslib/nan_test.pgc b/src/interfaces/ecpg/test/pgtypeslib/nan_test.pgc index 3b5781632e..bc682b93d5 100644 --- a/src/interfaces/ecpg/test/pgtypeslib/nan_test.pgc +++ b/src/interfaces/ecpg/test/pgtypeslib/nan_test.pgc @@ -90,5 +90,5 @@ main(void) exec sql rollback; exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/pgtypeslib/num_test.pgc b/src/interfaces/ecpg/test/pgtypeslib/num_test.pgc index a024c8702f..d276f70772 100644 --- 
a/src/interfaces/ecpg/test/pgtypeslib/num_test.pgc +++ b/src/interfaces/ecpg/test/pgtypeslib/num_test.pgc @@ -93,5 +93,5 @@ main(void) exec sql rollback; exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/pgtypeslib/num_test2.pgc b/src/interfaces/ecpg/test/pgtypeslib/num_test2.pgc index 2ac666f7c0..16ca6a44a5 100644 --- a/src/interfaces/ecpg/test/pgtypeslib/num_test2.pgc +++ b/src/interfaces/ecpg/test/pgtypeslib/num_test2.pgc @@ -210,7 +210,7 @@ main(void) } free(numarr); - return (0); + return 0; } static void diff --git a/src/interfaces/ecpg/test/preproc/array_of_struct.pgc b/src/interfaces/ecpg/test/preproc/array_of_struct.pgc index f9e1946b3f..69f5758474 100644 --- a/src/interfaces/ecpg/test/preproc/array_of_struct.pgc +++ b/src/interfaces/ecpg/test/preproc/array_of_struct.pgc @@ -91,5 +91,5 @@ int main() EXEC SQL disconnect all; - return( 0 ); + return 0; } diff --git a/src/interfaces/ecpg/test/preproc/cursor.pgc b/src/interfaces/ecpg/test/preproc/cursor.pgc index 4247912989..8a286ad523 100644 --- a/src/interfaces/ecpg/test/preproc/cursor.pgc +++ b/src/interfaces/ecpg/test/preproc/cursor.pgc @@ -252,5 +252,5 @@ exec sql end declare section; strcpy(msg, "disconnect"); exec sql disconnect all; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/preproc/define.pgc b/src/interfaces/ecpg/test/preproc/define.pgc index 2161733f49..0d07ebfe63 100644 --- a/src/interfaces/ecpg/test/preproc/define.pgc +++ b/src/interfaces/ecpg/test/preproc/define.pgc @@ -58,5 +58,5 @@ exec sql end declare section; exec sql commit; exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/preproc/outofscope.pgc b/src/interfaces/ecpg/test/preproc/outofscope.pgc index 6b5d2707ce..aae53250a5 100644 --- a/src/interfaces/ecpg/test/preproc/outofscope.pgc +++ b/src/interfaces/ecpg/test/preproc/outofscope.pgc @@ -123,5 +123,5 @@ main (void) strcpy(msg, "disconnect"); exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/preproc/pointer_to_struct.pgc b/src/interfaces/ecpg/test/preproc/pointer_to_struct.pgc index ec94273408..1ec651e3fc 100644 --- a/src/interfaces/ecpg/test/preproc/pointer_to_struct.pgc +++ b/src/interfaces/ecpg/test/preproc/pointer_to_struct.pgc @@ -96,5 +96,5 @@ int main() EXEC SQL disconnect all; /* All the memory will anyway be freed at the end */ - return( 0 ); + return 0; } diff --git a/src/interfaces/ecpg/test/preproc/strings.pgc b/src/interfaces/ecpg/test/preproc/strings.pgc index d6ec9a4cb8..f004ddf6dc 100644 --- a/src/interfaces/ecpg/test/preproc/strings.pgc +++ b/src/interfaces/ecpg/test/preproc/strings.pgc @@ -23,5 +23,5 @@ int main(void) printf("%s %s %s %s %s %s\n", s1, s2, s3, s4, s5, s6); exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/preproc/variable.pgc b/src/interfaces/ecpg/test/preproc/variable.pgc index 05420afdb2..697a7dc814 100644 --- a/src/interfaces/ecpg/test/preproc/variable.pgc +++ b/src/interfaces/ecpg/test/preproc/variable.pgc @@ -97,5 +97,5 @@ exec sql end declare section; strcpy(msg, "disconnect"); exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/sql/array.pgc b/src/interfaces/ecpg/test/sql/array.pgc index 5f12c472c9..15c9cfa5f7 100644 --- a/src/interfaces/ecpg/test/sql/array.pgc +++ b/src/interfaces/ecpg/test/sql/array.pgc @@ -107,5 +107,5 @@ EXEC SQL END DECLARE SECTION; free(t); - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/sql/describe.pgc 
b/src/interfaces/ecpg/test/sql/describe.pgc index b95ab351bd..87d6bd9a29 100644 --- a/src/interfaces/ecpg/test/sql/describe.pgc +++ b/src/interfaces/ecpg/test/sql/describe.pgc @@ -195,5 +195,5 @@ exec sql end declare section; strcpy(msg, "disconnect"); exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/sql/execute.pgc b/src/interfaces/ecpg/test/sql/execute.pgc index b8364c78bb..cc9814e9be 100644 --- a/src/interfaces/ecpg/test/sql/execute.pgc +++ b/src/interfaces/ecpg/test/sql/execute.pgc @@ -109,5 +109,5 @@ exec sql end declare section; exec sql commit; exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/sql/oldexec.pgc b/src/interfaces/ecpg/test/sql/oldexec.pgc index 2988f2ab8a..4f94a18aa1 100644 --- a/src/interfaces/ecpg/test/sql/oldexec.pgc +++ b/src/interfaces/ecpg/test/sql/oldexec.pgc @@ -86,5 +86,5 @@ exec sql end declare section; exec sql commit; exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/sql/sqlda.pgc b/src/interfaces/ecpg/test/sql/sqlda.pgc index 29774b5909..eaf5c6f7e1 100644 --- a/src/interfaces/ecpg/test/sql/sqlda.pgc +++ b/src/interfaces/ecpg/test/sql/sqlda.pgc @@ -246,5 +246,5 @@ exec sql end declare section; strcpy(msg, "disconnect"); exec sql disconnect; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/sql/twophase.pgc b/src/interfaces/ecpg/test/sql/twophase.pgc index 867a28e4c4..38913d7af2 100644 --- a/src/interfaces/ecpg/test/sql/twophase.pgc +++ b/src/interfaces/ecpg/test/sql/twophase.pgc @@ -40,5 +40,5 @@ int main(void) strcpy(msg, "disconnect"); exec sql disconnect current; - return (0); + return 0; } diff --git a/src/interfaces/ecpg/test/thread/thread.pgc b/src/interfaces/ecpg/test/thread/thread.pgc index c5fbe928d9..ae6b229962 100644 --- a/src/interfaces/ecpg/test/thread/thread.pgc +++ b/src/interfaces/ecpg/test/thread/thread.pgc @@ -60,7 +60,7 @@ int main() if( threads == NULL ) { fprintf(stderr, "Cannot alloc memory\n"); - return( 1 ); + return 1; } for( n = 0; n < nthreads; n++ ) { @@ -92,7 +92,7 @@ int main() else printf("ERROR: Failure - expecting %d rows, got %d.\n", nthreads * iterations, l_rows); - return( 0 ); + return 0; } void *test_thread(void *arg) @@ -121,7 +121,7 @@ void *test_thread(void *arg) if( sqlca.sqlcode != 0 ) { printf("%s: ERROR: cannot connect to database!\n", l_connection); - return( NULL ); + return NULL; } EXEC SQL AT :l_connection BEGIN; @@ -136,6 +136,6 @@ void *test_thread(void *arg) /* all done */ EXEC SQL AT :l_connection COMMIT; EXEC SQL DISCONNECT :l_connection; - return( NULL ); + return NULL; } #endif /* ENABLE_THREAD_SAFETY */ diff --git a/src/interfaces/ecpg/test/thread/thread_implicit.pgc b/src/interfaces/ecpg/test/thread/thread_implicit.pgc index d65f17c073..0dfcb7172b 100644 --- a/src/interfaces/ecpg/test/thread/thread_implicit.pgc +++ b/src/interfaces/ecpg/test/thread/thread_implicit.pgc @@ -61,7 +61,7 @@ int main() if( threads == NULL ) { fprintf(stderr, "Cannot alloc memory\n"); - return( 1 ); + return 1; } for( n = 0; n < nthreads; n++ ) { @@ -93,7 +93,7 @@ int main() else printf("ERROR: Failure - expecting %d rows, got %d.\n", nthreads * iterations, l_rows); - return( 0 ); + return 0; } void *test_thread(void *arg) @@ -122,7 +122,7 @@ void *test_thread(void *arg) if( sqlca.sqlcode != 0 ) { printf("%s: ERROR: cannot connect to database!\n", l_connection); - return( NULL ); + return NULL; } EXEC SQL BEGIN; @@ -137,6 +137,6 @@ void *test_thread(void *arg) /* all done */ EXEC SQL COMMIT; EXEC 
SQL DISCONNECT :l_connection; - return( NULL ); + return NULL; } #endif /* ENABLE_THREAD_SAFETY */ diff --git a/src/test/isolation/specscanner.l b/src/test/isolation/specscanner.l index aed9269c63..a9528bda6b 100644 --- a/src/test/isolation/specscanner.l +++ b/src/test/isolation/specscanner.l @@ -39,11 +39,11 @@ comment ("#"{non_newline}*) %% -permutation { return(PERMUTATION); } -session { return(SESSION); } -setup { return(SETUP); } -step { return(STEP); } -teardown { return(TEARDOWN); } +permutation { return PERMUTATION; } +session { return SESSION; } +setup { return SETUP; } +step { return STEP; } +teardown { return TEARDOWN; } [\n] { yyline++; } {comment} { /* ignore */ } From ec3a4375961abaa209116162966bc7af2d51148a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 17 Aug 2017 12:39:20 -0400 Subject: [PATCH 0102/1087] Remove unnecessary casts Reviewed-by: Michael Paquier Reviewed-by: Ryan Murphy --- contrib/cube/cube.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/contrib/cube/cube.c b/contrib/cube/cube.c index 0e968d1c76..1032b997f9 100644 --- a/contrib/cube/cube.c +++ b/contrib/cube/cube.c @@ -609,18 +609,18 @@ g_cube_leaf_consistent(NDBOX *key, switch (strategy) { case RTOverlapStrategyNumber: - retval = (bool) cube_overlap_v0(key, query); + retval = cube_overlap_v0(key, query); break; case RTSameStrategyNumber: - retval = (bool) (cube_cmp_v0(key, query) == 0); + retval = (cube_cmp_v0(key, query) == 0); break; case RTContainsStrategyNumber: case RTOldContainsStrategyNumber: - retval = (bool) cube_contains_v0(key, query); + retval = cube_contains_v0(key, query); break; case RTContainedByStrategyNumber: case RTOldContainedByStrategyNumber: - retval = (bool) cube_contains_v0(query, key); + retval = cube_contains_v0(query, key); break; default: retval = FALSE; From 153a49bb331005bf70b1e76e69fe28f1c417cc91 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 17 Aug 2017 12:39:20 -0400 Subject: [PATCH 0103/1087] Remove endof macro It has not been used in a long time, and it doesn't seem safe anyway, so drop it. Reviewed-by: Michael Paquier Reviewed-by: Ryan Murphy --- src/include/c.h | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/src/include/c.h b/src/include/c.h index 56e7f792d2..630dfbfc41 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -30,7 +30,7 @@ * 2) bool, true, false, TRUE, FALSE * 3) standard system types * 4) IsValid macros for system types - * 5) offsetof, lengthof, endof, alignment + * 5) offsetof, lengthof, alignment * 6) assertions * 7) widely useful macros * 8) random stuff @@ -537,7 +537,7 @@ typedef NameData *Name; /* ---------------------------------------------------------------- - * Section 5: offsetof, lengthof, endof, alignment + * Section 5: offsetof, lengthof, alignment * ---------------------------------------------------------------- */ /* @@ -557,12 +557,6 @@ typedef NameData *Name; */ #define lengthof(array) (sizeof (array) / sizeof ((array)[0])) -/* - * endof - * Address of the element one past the last in an array. - */ -#define endof(array) (&(array)[lengthof(array)]) - /* ---------------- * Alignment macros: align a length or address appropriately for a given type. * The fooALIGN() macros round up to a multiple of the required alignment, From 6e427aa4e5f3ad79a79b463c470daf93fa15767b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 5 Sep 2017 15:57:48 -0400 Subject: [PATCH 0104/1087] Use lfirst_node() and linitial_node() where appropriate in planner.c. 
There's no particular reason to target this module for the first wholesale application of these macros; but we gotta start somewhere. Ashutosh Bapat and Jeevan Chalke Discussion: https://postgr.es/m/CAFjFpRcNr3r=u0ni=7A4GD9NnHQVq+dkFafzqo2rS6zy=dt1eg@mail.gmail.com --- src/backend/optimizer/plan/planner.c | 122 +++++++++++++-------------- 1 file changed, 61 insertions(+), 61 deletions(-) diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 966230256e..6b79b3ad99 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -411,7 +411,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) forboth(lp, glob->subplans, lr, glob->subroots) { Plan *subplan = (Plan *) lfirst(lp); - PlannerInfo *subroot = (PlannerInfo *) lfirst(lr); + PlannerInfo *subroot = lfirst_node(PlannerInfo, lr); SS_finalize_plan(subroot, subplan); } @@ -430,7 +430,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) forboth(lp, glob->subplans, lr, glob->subroots) { Plan *subplan = (Plan *) lfirst(lp); - PlannerInfo *subroot = (PlannerInfo *) lfirst(lr); + PlannerInfo *subroot = lfirst_node(PlannerInfo, lr); lfirst(lp) = set_plan_references(subroot, subplan); } @@ -586,7 +586,7 @@ subquery_planner(PlannerGlobal *glob, Query *parse, hasOuterJoins = false; foreach(l, parse->rtable) { - RangeTblEntry *rte = (RangeTblEntry *) lfirst(l); + RangeTblEntry *rte = lfirst_node(RangeTblEntry, l); if (rte->rtekind == RTE_JOIN) { @@ -643,7 +643,7 @@ subquery_planner(PlannerGlobal *glob, Query *parse, newWithCheckOptions = NIL; foreach(l, parse->withCheckOptions) { - WithCheckOption *wco = (WithCheckOption *) lfirst(l); + WithCheckOption *wco = lfirst_node(WithCheckOption, l); wco->qual = preprocess_expression(root, wco->qual, EXPRKIND_QUAL); @@ -663,7 +663,7 @@ subquery_planner(PlannerGlobal *glob, Query *parse, foreach(l, parse->windowClause) { - WindowClause *wc = (WindowClause *) lfirst(l); + WindowClause *wc = lfirst_node(WindowClause, l); /* partitionClause/orderClause are sort/group expressions */ wc->startOffset = preprocess_expression(root, wc->startOffset, @@ -705,7 +705,7 @@ subquery_planner(PlannerGlobal *glob, Query *parse, /* Also need to preprocess expressions within RTEs */ foreach(l, parse->rtable) { - RangeTblEntry *rte = (RangeTblEntry *) lfirst(l); + RangeTblEntry *rte = lfirst_node(RangeTblEntry, l); int kind; ListCell *lcsq; @@ -1080,7 +1080,7 @@ inheritance_planner(PlannerInfo *root) rti = 1; foreach(lc, parse->rtable) { - RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc); + RangeTblEntry *rte = lfirst_node(RangeTblEntry, lc); if (rte->rtekind == RTE_SUBQUERY) subqueryRTindexes = bms_add_member(subqueryRTindexes, rti); @@ -1102,7 +1102,7 @@ inheritance_planner(PlannerInfo *root) { foreach(lc, root->append_rel_list) { - AppendRelInfo *appinfo = (AppendRelInfo *) lfirst(lc); + AppendRelInfo *appinfo = lfirst_node(AppendRelInfo, lc); if (bms_is_member(appinfo->parent_relid, subqueryRTindexes) || bms_is_member(appinfo->child_relid, subqueryRTindexes) || @@ -1130,7 +1130,7 @@ inheritance_planner(PlannerInfo *root) */ foreach(lc, root->append_rel_list) { - AppendRelInfo *appinfo = (AppendRelInfo *) lfirst(lc); + AppendRelInfo *appinfo = lfirst_node(AppendRelInfo, lc); PlannerInfo *subroot; RangeTblEntry *child_rte; RelOptInfo *sub_final_rel; @@ -1192,7 +1192,7 @@ inheritance_planner(PlannerInfo *root) subroot->append_rel_list = NIL; foreach(lc2, root->append_rel_list) { - AppendRelInfo 
*appinfo2 = (AppendRelInfo *) lfirst(lc2); + AppendRelInfo *appinfo2 = lfirst_node(AppendRelInfo, lc2); if (bms_is_member(appinfo2->child_relid, modifiableARIindexes)) appinfo2 = copyObject(appinfo2); @@ -1227,7 +1227,7 @@ inheritance_planner(PlannerInfo *root) rti = 1; foreach(lr, parse->rtable) { - RangeTblEntry *rte = (RangeTblEntry *) lfirst(lr); + RangeTblEntry *rte = lfirst_node(RangeTblEntry, lr); if (bms_is_member(rti, subqueryRTindexes)) { @@ -1249,7 +1249,7 @@ inheritance_planner(PlannerInfo *root) foreach(lc2, subroot->append_rel_list) { - AppendRelInfo *appinfo2 = (AppendRelInfo *) lfirst(lc2); + AppendRelInfo *appinfo2 = lfirst_node(AppendRelInfo, lc2); if (bms_is_member(appinfo2->child_relid, modifiableARIindexes)) @@ -1407,7 +1407,7 @@ inheritance_planner(PlannerInfo *root) rti = 1; foreach(lc, final_rtable) { - RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc); + RangeTblEntry *rte = lfirst_node(RangeTblEntry, lc); root->simple_rte_array[rti++] = rte; } @@ -1556,8 +1556,8 @@ grouping_planner(PlannerInfo *root, bool inheritance_update, /*------ translator: %s is a SQL row locking clause such as FOR UPDATE */ errmsg("%s is not allowed with UNION/INTERSECT/EXCEPT", - LCS_asString(((RowMarkClause *) - linitial(parse->rowMarks))->strength)))); + LCS_asString(linitial_node(RowMarkClause, + parse->rowMarks)->strength)))); /* * Calculate pathkeys that represent result ordering requirements @@ -1687,7 +1687,7 @@ grouping_planner(PlannerInfo *root, bool inheritance_update, qp_extra.tlist = tlist; qp_extra.activeWindows = activeWindows; qp_extra.groupClause = (gset_data - ? (gset_data->rollups ? ((RollupData *) linitial(gset_data->rollups))->groupClause : NIL) + ? (gset_data->rollups ? linitial_node(RollupData, gset_data->rollups)->groupClause : NIL) : parse->groupClause); /* @@ -1757,25 +1757,25 @@ grouping_planner(PlannerInfo *root, bool inheritance_update, split_pathtarget_at_srfs(root, final_target, sort_input_target, &final_targets, &final_targets_contain_srfs); - final_target = (PathTarget *) linitial(final_targets); + final_target = linitial_node(PathTarget, final_targets); Assert(!linitial_int(final_targets_contain_srfs)); /* likewise for sort_input_target vs. grouping_target */ split_pathtarget_at_srfs(root, sort_input_target, grouping_target, &sort_input_targets, &sort_input_targets_contain_srfs); - sort_input_target = (PathTarget *) linitial(sort_input_targets); + sort_input_target = linitial_node(PathTarget, sort_input_targets); Assert(!linitial_int(sort_input_targets_contain_srfs)); /* likewise for grouping_target vs. 
scanjoin_target */ split_pathtarget_at_srfs(root, grouping_target, scanjoin_target, &grouping_targets, &grouping_targets_contain_srfs); - grouping_target = (PathTarget *) linitial(grouping_targets); + grouping_target = linitial_node(PathTarget, grouping_targets); Assert(!linitial_int(grouping_targets_contain_srfs)); /* scanjoin_target will not have any SRFs precomputed for it */ split_pathtarget_at_srfs(root, scanjoin_target, NULL, &scanjoin_targets, &scanjoin_targets_contain_srfs); - scanjoin_target = (PathTarget *) linitial(scanjoin_targets); + scanjoin_target = linitial_node(PathTarget, scanjoin_targets); Assert(!linitial_int(scanjoin_targets_contain_srfs)); } else @@ -2106,7 +2106,7 @@ preprocess_grouping_sets(PlannerInfo *root) foreach(lc, parse->groupClause) { - SortGroupClause *gc = lfirst(lc); + SortGroupClause *gc = lfirst_node(SortGroupClause, lc); Index ref = gc->tleSortGroupRef; if (ref > maxref) @@ -2135,7 +2135,7 @@ preprocess_grouping_sets(PlannerInfo *root) foreach(lc, parse->groupingSets) { - List *gset = lfirst(lc); + List *gset = (List *) lfirst(lc); if (bms_overlap_list(gd->unsortable_refs, gset)) { @@ -2194,7 +2194,7 @@ preprocess_grouping_sets(PlannerInfo *root) /* * Get the initial (and therefore largest) grouping set. */ - gs = linitial(current_sets); + gs = linitial_node(GroupingSetData, current_sets); /* * Order the groupClause appropriately. If the first grouping set is @@ -2269,7 +2269,7 @@ remap_to_groupclause_idx(List *groupClause, foreach(lc, groupClause) { - SortGroupClause *gc = lfirst(lc); + SortGroupClause *gc = lfirst_node(SortGroupClause, lc); tleref_to_colnum_map[gc->tleSortGroupRef] = ref++; } @@ -2278,7 +2278,7 @@ remap_to_groupclause_idx(List *groupClause, { List *set = NIL; ListCell *lc2; - GroupingSetData *gs = lfirst(lc); + GroupingSetData *gs = lfirst_node(GroupingSetData, lc); foreach(lc2, gs->set) { @@ -2344,8 +2344,8 @@ preprocess_rowmarks(PlannerInfo *root) * CTIDs invalid. This is also checked at parse time, but that's * insufficient because of rule substitution, query pullup, etc. 
*/ - CheckSelectLocking(parse, ((RowMarkClause *) - linitial(parse->rowMarks))->strength); + CheckSelectLocking(parse, linitial_node(RowMarkClause, + parse->rowMarks)->strength); } else { @@ -2373,7 +2373,7 @@ preprocess_rowmarks(PlannerInfo *root) prowmarks = NIL; foreach(l, parse->rowMarks) { - RowMarkClause *rc = (RowMarkClause *) lfirst(l); + RowMarkClause *rc = lfirst_node(RowMarkClause, l); RangeTblEntry *rte = rt_fetch(rc->rti, parse->rtable); PlanRowMark *newrc; @@ -2413,7 +2413,7 @@ preprocess_rowmarks(PlannerInfo *root) i = 0; foreach(l, parse->rtable) { - RangeTblEntry *rte = (RangeTblEntry *) lfirst(l); + RangeTblEntry *rte = lfirst_node(RangeTblEntry, l); PlanRowMark *newrc; i++; @@ -2772,7 +2772,7 @@ remove_useless_groupby_columns(PlannerInfo *root) (list_length(parse->rtable) + 1)); foreach(lc, parse->groupClause) { - SortGroupClause *sgc = (SortGroupClause *) lfirst(lc); + SortGroupClause *sgc = lfirst_node(SortGroupClause, lc); TargetEntry *tle = get_sortgroupclause_tle(sgc, parse->targetList); Var *var = (Var *) tle->expr; @@ -2805,7 +2805,7 @@ remove_useless_groupby_columns(PlannerInfo *root) relid = 0; foreach(lc, parse->rtable) { - RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc); + RangeTblEntry *rte = lfirst_node(RangeTblEntry, lc); Bitmapset *relattnos; Bitmapset *pkattnos; Oid constraintOid; @@ -2863,7 +2863,7 @@ remove_useless_groupby_columns(PlannerInfo *root) foreach(lc, parse->groupClause) { - SortGroupClause *sgc = (SortGroupClause *) lfirst(lc); + SortGroupClause *sgc = lfirst_node(SortGroupClause, lc); TargetEntry *tle = get_sortgroupclause_tle(sgc, parse->targetList); Var *var = (Var *) tle->expr; @@ -2938,11 +2938,11 @@ preprocess_groupclause(PlannerInfo *root, List *force) */ foreach(sl, parse->sortClause) { - SortGroupClause *sc = (SortGroupClause *) lfirst(sl); + SortGroupClause *sc = lfirst_node(SortGroupClause, sl); foreach(gl, parse->groupClause) { - SortGroupClause *gc = (SortGroupClause *) lfirst(gl); + SortGroupClause *gc = lfirst_node(SortGroupClause, gl); if (equal(gc, sc)) { @@ -2971,7 +2971,7 @@ preprocess_groupclause(PlannerInfo *root, List *force) */ foreach(gl, parse->groupClause) { - SortGroupClause *gc = (SortGroupClause *) lfirst(gl); + SortGroupClause *gc = lfirst_node(SortGroupClause, gl); if (list_member_ptr(new_groupclause, gc)) continue; /* it matched an ORDER BY item */ @@ -3071,7 +3071,7 @@ extract_rollup_sets(List *groupingSets) for_each_cell(lc, lc1) { - List *candidate = lfirst(lc); + List *candidate = (List *) lfirst(lc); Bitmapset *candidate_set = NULL; ListCell *lc2; int dup_of = 0; @@ -3228,7 +3228,7 @@ reorder_grouping_sets(List *groupingsets, List *sortclause) foreach(lc, groupingsets) { - List *candidate = lfirst(lc); + List *candidate = (List *) lfirst(lc); List *new_elems = list_difference_int(candidate, previous); GroupingSetData *gs = makeNode(GroupingSetData); @@ -3296,7 +3296,7 @@ standard_qp_callback(PlannerInfo *root, void *extra) /* We consider only the first (bottom) window in pathkeys logic */ if (activeWindows != NIL) { - WindowClause *wc = (WindowClause *) linitial(activeWindows); + WindowClause *wc = linitial_node(WindowClause, activeWindows); root->window_pathkeys = make_pathkeys_for_window(root, wc, @@ -3384,7 +3384,7 @@ get_number_of_groups(PlannerInfo *root, foreach(lc, gd->rollups) { - RollupData *rollup = lfirst(lc); + RollupData *rollup = lfirst_node(RollupData, lc); ListCell *lc; groupExprs = get_sortgrouplist_exprs(rollup->groupClause, @@ -3395,7 +3395,7 @@ get_number_of_groups(PlannerInfo *root, 
forboth(lc, rollup->gsets, lc2, rollup->gsets_data) { List *gset = (List *) lfirst(lc); - GroupingSetData *gs = lfirst(lc2); + GroupingSetData *gs = lfirst_node(GroupingSetData, lc2); double numGroups = estimate_num_groups(root, groupExprs, path_rows, @@ -3420,7 +3420,7 @@ get_number_of_groups(PlannerInfo *root, forboth(lc, gd->hash_sets_idx, lc2, gd->unsortable_sets) { List *gset = (List *) lfirst(lc); - GroupingSetData *gs = lfirst(lc2); + GroupingSetData *gs = lfirst_node(GroupingSetData, lc2); double numGroups = estimate_num_groups(root, groupExprs, path_rows, @@ -4194,7 +4194,7 @@ consider_groupingsets_paths(PlannerInfo *root, if (pathkeys_contained_in(root->group_pathkeys, path->pathkeys)) { - unhashed_rollup = lfirst(l_start); + unhashed_rollup = lfirst_node(RollupData, l_start); exclude_groups = unhashed_rollup->numGroups; l_start = lnext(l_start); } @@ -4219,7 +4219,7 @@ consider_groupingsets_paths(PlannerInfo *root, for_each_cell(lc, l_start) { - RollupData *rollup = lfirst(lc); + RollupData *rollup = lfirst_node(RollupData, lc); /* * If we find an unhashable rollup that's not been skipped by the @@ -4239,7 +4239,7 @@ consider_groupingsets_paths(PlannerInfo *root, } foreach(lc, sets_data) { - GroupingSetData *gs = lfirst(lc); + GroupingSetData *gs = lfirst_node(GroupingSetData, lc); List *gset = gs->set; RollupData *rollup; @@ -4381,7 +4381,7 @@ consider_groupingsets_paths(PlannerInfo *root, i = 0; for_each_cell(lc, lnext(list_head(gd->rollups))) { - RollupData *rollup = lfirst(lc); + RollupData *rollup = lfirst_node(RollupData, lc); if (rollup->hashable) { @@ -4415,7 +4415,7 @@ consider_groupingsets_paths(PlannerInfo *root, i = 0; for_each_cell(lc, lnext(list_head(gd->rollups))) { - RollupData *rollup = lfirst(lc); + RollupData *rollup = lfirst_node(RollupData, lc); if (rollup->hashable) { @@ -4437,7 +4437,7 @@ consider_groupingsets_paths(PlannerInfo *root, foreach(lc, hash_sets) { - GroupingSetData *gs = lfirst(lc); + GroupingSetData *gs = lfirst_node(GroupingSetData, lc); RollupData *rollup = makeNode(RollupData); Assert(gs->set != NIL); @@ -4616,7 +4616,7 @@ create_one_window_path(PlannerInfo *root, foreach(l, activeWindows) { - WindowClause *wc = (WindowClause *) lfirst(l); + WindowClause *wc = lfirst_node(WindowClause, l); List *window_pathkeys; window_pathkeys = make_pathkeys_for_window(root, @@ -5280,7 +5280,7 @@ postprocess_setop_tlist(List *new_tlist, List *orig_tlist) foreach(l, new_tlist) { - TargetEntry *new_tle = (TargetEntry *) lfirst(l); + TargetEntry *new_tle = lfirst_node(TargetEntry, l); TargetEntry *orig_tle; /* ignore resjunk columns in setop result */ @@ -5288,7 +5288,7 @@ postprocess_setop_tlist(List *new_tlist, List *orig_tlist) continue; Assert(orig_tlist_item != NULL); - orig_tle = (TargetEntry *) lfirst(orig_tlist_item); + orig_tle = lfirst_node(TargetEntry, orig_tlist_item); orig_tlist_item = lnext(orig_tlist_item); if (orig_tle->resjunk) /* should not happen */ elog(ERROR, "resjunk output columns are not implemented"); @@ -5316,7 +5316,7 @@ select_active_windows(PlannerInfo *root, WindowFuncLists *wflists) actives = NIL; foreach(lc, root->parse->windowClause) { - WindowClause *wc = (WindowClause *) lfirst(lc); + WindowClause *wc = lfirst_node(WindowClause, lc); /* It's only active if wflists shows some related WindowFuncs */ Assert(wc->winref <= wflists->maxWinRef); @@ -5339,7 +5339,7 @@ select_active_windows(PlannerInfo *root, WindowFuncLists *wflists) result = NIL; while (actives != NIL) { - WindowClause *wc = (WindowClause *) linitial(actives); + 
WindowClause *wc = linitial_node(WindowClause, actives); ListCell *prev; ListCell *next; @@ -5351,7 +5351,7 @@ select_active_windows(PlannerInfo *root, WindowFuncLists *wflists) prev = NULL; for (lc = list_head(actives); lc; lc = next) { - WindowClause *wc2 = (WindowClause *) lfirst(lc); + WindowClause *wc2 = lfirst_node(WindowClause, lc); next = lnext(lc); /* framing options are NOT to be compared here! */ @@ -5424,18 +5424,18 @@ make_window_input_target(PlannerInfo *root, sgrefs = NULL; foreach(lc, activeWindows) { - WindowClause *wc = (WindowClause *) lfirst(lc); + WindowClause *wc = lfirst_node(WindowClause, lc); ListCell *lc2; foreach(lc2, wc->partitionClause) { - SortGroupClause *sortcl = (SortGroupClause *) lfirst(lc2); + SortGroupClause *sortcl = lfirst_node(SortGroupClause, lc2); sgrefs = bms_add_member(sgrefs, sortcl->tleSortGroupRef); } foreach(lc2, wc->orderClause) { - SortGroupClause *sortcl = (SortGroupClause *) lfirst(lc2); + SortGroupClause *sortcl = lfirst_node(SortGroupClause, lc2); sgrefs = bms_add_member(sgrefs, sortcl->tleSortGroupRef); } @@ -5444,7 +5444,7 @@ make_window_input_target(PlannerInfo *root, /* Add in sortgroupref numbers of GROUP BY clauses, too */ foreach(lc, parse->groupClause) { - SortGroupClause *grpcl = (SortGroupClause *) lfirst(lc); + SortGroupClause *grpcl = lfirst_node(SortGroupClause, lc); sgrefs = bms_add_member(sgrefs, grpcl->tleSortGroupRef); } @@ -5864,7 +5864,7 @@ adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel, Assert(subpath->param_info == NULL); forboth(lc1, targets, lc2, targets_contain_srfs) { - PathTarget *thistarget = (PathTarget *) lfirst(lc1); + PathTarget *thistarget = lfirst_node(PathTarget, lc1); bool contains_srfs = (bool) lfirst_int(lc2); /* If this level doesn't contain SRFs, do regular projection */ @@ -5897,7 +5897,7 @@ adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel, Assert(subpath->param_info == NULL); forboth(lc1, targets, lc2, targets_contain_srfs) { - PathTarget *thistarget = (PathTarget *) lfirst(lc1); + PathTarget *thistarget = lfirst_node(PathTarget, lc1); bool contains_srfs = (bool) lfirst_int(lc2); /* If this level doesn't contain SRFs, do regular projection */ @@ -6023,7 +6023,7 @@ plan_cluster_use_sort(Oid tableOid, Oid indexOid) indexInfo = NULL; foreach(lc, rel->indexlist) { - indexInfo = (IndexOptInfo *) lfirst(lc); + indexInfo = lfirst_node(IndexOptInfo, lc); if (indexInfo->indexoid == indexOid) break; } @@ -6086,7 +6086,7 @@ get_partitioned_child_rels(PlannerInfo *root, Index rti) foreach(l, root->pcinfo_list) { - PartitionedChildRelInfo *pc = lfirst(l); + PartitionedChildRelInfo *pc = lfirst_node(PartitionedChildRelInfo, l); if (pc->parent_relid == rti) { From 49ca462eb165dea297f1f110e8eac064308e9d51 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 5 Sep 2017 18:17:47 -0400 Subject: [PATCH 0105/1087] Add \gdesc psql command. This command acts somewhat like \g, but instead of executing the query buffer, it merely prints a description of the columns that the query result would have. (Of course, this still requires parsing the query; if parse analysis fails, you get an error anyway.) We accomplish this using an unnamed prepared statement, which should be invisible to psql users. 
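The same describe-without-executing trick is available to any libpq client. A minimal standalone sketch (the function name, output format, and error handling here are illustrative assumptions, not code from this patch):

#include <stdio.h>
#include <libpq-fe.h>

/*
 * Print the column names, type OIDs, and typmods of a query's result
 * without executing it, via the unnamed prepared statement.
 */
static void
describe_query(PGconn *conn, const char *query)
{
	PGresult   *res;
	int			i;

	/* Parse the query into the unnamed prepared statement */
	res = PQprepare(conn, "", query, 0, NULL);
	if (PQresultStatus(res) != PGRES_COMMAND_OK)
	{
		fprintf(stderr, "%s", PQerrorMessage(conn));
		PQclear(res);
		return;
	}
	PQclear(res);

	/* Ask the server to describe it; nothing is executed */
	res = PQdescribePrepared(conn, "");
	if (PQresultStatus(res) == PGRES_COMMAND_OK)
	{
		for (i = 0; i < PQnfields(res); i++)
			printf("%s\toid=%u\ttypmod=%d\n",
				   PQfname(res, i), PQftype(res, i), PQfmod(res, i));
	}
	PQclear(res);
}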
Pavel Stehule, reviewed by Fabien Coelho Discussion: https://postgr.es/m/CAFj8pRBhYVvO34FU=EKb=nAF5t3b++krKt1FneCmR0kuF5m-QA@mail.gmail.com --- doc/src/sgml/ref/psql-ref.sgml | 19 +++++ src/bin/psql/command.c | 20 +++++ src/bin/psql/common.c | 131 ++++++++++++++++++++++++++++- src/bin/psql/help.c | 3 +- src/bin/psql/settings.h | 3 +- src/bin/psql/tab-complete.c | 2 +- src/test/regress/expected/psql.out | 85 +++++++++++++++++++ src/test/regress/sql/psql.sql | 36 ++++++++ 8 files changed, 293 insertions(+), 6 deletions(-) diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index fd2ca15d0a..5bdbc1e9cf 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -1949,6 +1949,25 @@ Tue Oct 26 21:40:57 CEST 1999 + + \gdesc + + + + Shows the description (that is, the column names and data types) + of the result of the current query buffer. The query is not + actually executed; however, if it contains some type of syntax + error, that error will be reported in the normal way. + + + + If the current query buffer is empty, the most recently sent query + is described instead. + + + + + \gexec diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c index 4283bf35af..fe0b83ea24 100644 --- a/src/bin/psql/command.c +++ b/src/bin/psql/command.c @@ -88,6 +88,7 @@ static backslashResult exec_command_errverbose(PsqlScanState scan_state, bool ac static backslashResult exec_command_f(PsqlScanState scan_state, bool active_branch); static backslashResult exec_command_g(PsqlScanState scan_state, bool active_branch, const char *cmd); +static backslashResult exec_command_gdesc(PsqlScanState scan_state, bool active_branch); static backslashResult exec_command_gexec(PsqlScanState scan_state, bool active_branch); static backslashResult exec_command_gset(PsqlScanState scan_state, bool active_branch); static backslashResult exec_command_help(PsqlScanState scan_state, bool active_branch); @@ -337,6 +338,8 @@ exec_command(const char *cmd, status = exec_command_f(scan_state, active_branch); else if (strcmp(cmd, "g") == 0 || strcmp(cmd, "gx") == 0) status = exec_command_g(scan_state, active_branch, cmd); + else if (strcmp(cmd, "gdesc") == 0) + status = exec_command_gdesc(scan_state, active_branch); else if (strcmp(cmd, "gexec") == 0) status = exec_command_gexec(scan_state, active_branch); else if (strcmp(cmd, "gset") == 0) @@ -1330,6 +1333,23 @@ exec_command_g(PsqlScanState scan_state, bool active_branch, const char *cmd) return status; } +/* + * \gdesc -- describe query result + */ +static backslashResult +exec_command_gdesc(PsqlScanState scan_state, bool active_branch) +{ + backslashResult status = PSQL_CMD_SKIP_LINE; + + if (active_branch) + { + pset.gdesc_flag = true; + status = PSQL_CMD_SEND; + } + + return status; +} + /* * \gexec -- send query and execute each field of result */ diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c index a41932ff27..b99705886f 100644 --- a/src/bin/psql/common.c +++ b/src/bin/psql/common.c @@ -29,6 +29,7 @@ #include "fe_utils/mbprint.h" +static bool DescribeQuery(const char *query, double *elapsed_msec); static bool ExecQueryUsingCursor(const char *query, double *elapsed_msec); static bool command_no_begin(const char *query); static bool is_select_command(const char *query); @@ -1323,8 +1324,15 @@ SendQuery(const char *query) } } - if (pset.fetch_count <= 0 || pset.gexec_flag || - pset.crosstab_flag || !is_select_command(query)) + if (pset.gdesc_flag) + { + /* Describe query's result columns, without executing it */ + OK = 
DescribeQuery(query, &elapsed_msec); + ResetCancelConn(); + results = NULL; /* PQclear(NULL) does nothing */ + } + else if (pset.fetch_count <= 0 || pset.gexec_flag || + pset.crosstab_flag || !is_select_command(query)) { /* Default fetch-it-all-and-print mode */ instr_time before, @@ -1467,6 +1475,9 @@ SendQuery(const char *query) pset.gset_prefix = NULL; } + /* reset \gdesc trigger */ + pset.gdesc_flag = false; + /* reset \gexec trigger */ pset.gexec_flag = false; @@ -1482,6 +1493,118 @@ SendQuery(const char *query) } +/* + * DescribeQuery: describe the result columns of a query, without executing it + * + * Returns true if the operation executed successfully, false otherwise. + * + * If pset.timing is on, total query time (exclusive of result-printing) is + * stored into *elapsed_msec. + */ +static bool +DescribeQuery(const char *query, double *elapsed_msec) +{ + PGresult *results; + bool OK; + instr_time before, + after; + + *elapsed_msec = 0; + + if (pset.timing) + INSTR_TIME_SET_CURRENT(before); + + /* + * To parse the query but not execute it, we prepare it, using the unnamed + * prepared statement. This is invisible to psql users, since there's no + * way to access the unnamed prepared statement from psql user space. The + * next Parse or Query protocol message would overwrite the statement + * anyway. (So there's no great need to clear it when done, which is a + * good thing because libpq provides no easy way to do that.) + */ + results = PQprepare(pset.db, "", query, 0, NULL); + if (PQresultStatus(results) != PGRES_COMMAND_OK) + { + psql_error("%s", PQerrorMessage(pset.db)); + ClearOrSaveResult(results); + return false; + } + PQclear(results); + + results = PQdescribePrepared(pset.db, ""); + OK = AcceptResult(results) && + (PQresultStatus(results) == PGRES_COMMAND_OK); + if (OK && results) + { + if (PQnfields(results) > 0) + { + PQExpBufferData buf; + int i; + + initPQExpBuffer(&buf); + + printfPQExpBuffer(&buf, + "SELECT name AS \"%s\", pg_catalog.format_type(tp, tpm) AS \"%s\"\n" + "FROM (VALUES ", + gettext_noop("Column"), + gettext_noop("Type")); + + for (i = 0; i < PQnfields(results); i++) + { + const char *name; + char *escname; + + if (i > 0) + appendPQExpBufferStr(&buf, ","); + + name = PQfname(results, i); + escname = PQescapeLiteral(pset.db, name, strlen(name)); + + if (escname == NULL) + { + psql_error("%s", PQerrorMessage(pset.db)); + PQclear(results); + termPQExpBuffer(&buf); + return false; + } + + appendPQExpBuffer(&buf, "(%s, '%u'::pg_catalog.oid, %d)", + escname, + PQftype(results, i), + PQfmod(results, i)); + + PQfreemem(escname); + } + + appendPQExpBufferStr(&buf, ") s(name, tp, tpm)"); + PQclear(results); + + results = PQexec(pset.db, buf.data); + OK = AcceptResult(results); + + if (pset.timing) + { + INSTR_TIME_SET_CURRENT(after); + INSTR_TIME_SUBTRACT(after, before); + *elapsed_msec += INSTR_TIME_GET_MILLISEC(after); + } + + if (OK && results) + OK = PrintQueryResults(results); + + termPQExpBuffer(&buf); + } + else + fprintf(pset.queryFout, + _("The command has no result, or the result has no columns.\n")); + } + + ClearOrSaveResult(results); + + return OK; +} + + /* * ExecQueryUsingCursor: run a SELECT-like query using a cursor * @@ -1627,7 +1750,9 @@ ExecQueryUsingCursor(const char *query, double *elapsed_msec) break; } - /* Note we do not deal with \gexec or \crosstabview modes here */ + /* + * Note we do not deal with \gdesc, \gexec or \crosstabview modes here + */ ntuples = PQntuples(results); diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c index 
9d366180af..4d1c0ec3c6 100644 --- a/src/bin/psql/help.c +++ b/src/bin/psql/help.c @@ -167,13 +167,14 @@ slashUsage(unsigned short int pager) * Use "psql --help=commands | wc" to count correctly. It's okay to count * the USE_READLINE line even in builds without that. */ - output = PageOutput(122, pager ? &(pset.popt.topt) : NULL); + output = PageOutput(125, pager ? &(pset.popt.topt) : NULL); fprintf(output, _("General\n")); fprintf(output, _(" \\copyright show PostgreSQL usage and distribution terms\n")); fprintf(output, _(" \\crosstabview [COLUMNS] execute query and display results in crosstab\n")); fprintf(output, _(" \\errverbose show most recent error message at maximum verbosity\n")); fprintf(output, _(" \\g [FILE] or ; execute query (and send results to file or |pipe)\n")); + fprintf(output, _(" \\gdesc describe result of query, without executing it\n")); fprintf(output, _(" \\gexec execute query, then execute each value in its result\n")); fprintf(output, _(" \\gset [PREFIX] execute query and store results in psql variables\n")); fprintf(output, _(" \\gx [FILE] as \\g, but forces expanded output mode\n")); diff --git a/src/bin/psql/settings.h b/src/bin/psql/settings.h index b78f151acd..96338c3197 100644 --- a/src/bin/psql/settings.h +++ b/src/bin/psql/settings.h @@ -93,7 +93,8 @@ typedef struct _psqlSettings char *gfname; /* one-shot file output argument for \g */ bool g_expanded; /* one-shot expanded output requested via \gx */ char *gset_prefix; /* one-shot prefix argument for \gset */ - bool gexec_flag; /* one-shot flag to execute query's results */ + bool gdesc_flag; /* one-shot request to describe query results */ + bool gexec_flag; /* one-shot request to execute query results */ bool crosstab_flag; /* one-shot request to crosstab results */ char *ctv_args[4]; /* \crosstabview arguments */ diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index 1583cfa998..7959f9ac16 100644 --- a/src/bin/psql/tab-complete.c +++ b/src/bin/psql/tab-complete.c @@ -1433,7 +1433,7 @@ psql_completion(const char *text, int start, int end) "\\e", "\\echo", "\\ef", "\\elif", "\\else", "\\encoding", "\\endif", "\\errverbose", "\\ev", "\\f", - "\\g", "\\gexec", "\\gset", "\\gx", + "\\g", "\\gdesc", "\\gexec", "\\gset", "\\gx", "\\h", "\\help", "\\H", "\\i", "\\if", "\\ir", "\\l", "\\lo_import", "\\lo_export", "\\lo_list", "\\lo_unlink", diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out index 4aaf4c1620..7957268388 100644 --- a/src/test/regress/expected/psql.out +++ b/src/test/regress/expected/psql.out @@ -126,6 +126,91 @@ more than one row returned for \gset select 10 as test01, 20 as test02 from generate_series(1,0) \gset no rows returned for \gset \unset FETCH_COUNT +-- \gdesc +SELECT + NULL AS zero, + 1 AS one, + 2.0 AS two, + 'three' AS three, + $1 AS four, + sin($2) as five, + 'foo'::varchar(4) as six, + CURRENT_DATE AS now +\gdesc + Column | Type +--------+---------------------- + zero | text + one | integer + two | numeric + three | text + four | text + five | double precision + six | character varying(4) + now | date +(8 rows) + +-- should work with tuple-returning utilities, such as EXECUTE +PREPARE test AS SELECT 1 AS first, 2 AS second; +EXECUTE test \gdesc + Column | Type +--------+--------- + first | integer + second | integer +(2 rows) + +EXPLAIN EXECUTE test \gdesc + Column | Type +------------+------ + QUERY PLAN | text +(1 row) + +-- should fail cleanly - syntax error +SELECT 1 + \gdesc +ERROR: syntax error at end of input +LINE 1: 
SELECT 1 + + ^ +-- check behavior with empty results +SELECT \gdesc +The command has no result, or the result has no columns. +CREATE TABLE bububu(a int) \gdesc +The command has no result, or the result has no columns. +-- subject command should not have executed +TABLE bububu; -- fail +ERROR: relation "bububu" does not exist +LINE 1: TABLE bububu; + ^ +-- query buffer should remain unchanged +SELECT 1 AS x, 'Hello', 2 AS y, true AS "dirty\name" +\gdesc + Column | Type +------------+--------- + x | integer + ?column? | text + y | integer + dirty\name | boolean +(4 rows) + +\g + x | ?column? | y | dirty\name +---+----------+---+------------ + 1 | Hello | 2 | t +(1 row) + +-- all on one line +SELECT 3 AS x, 'Hello', 4 AS y, true AS "dirty\name" \gdesc \g + Column | Type +------------+--------- + x | integer + ?column? | text + y | integer + dirty\name | boolean +(4 rows) + + x | ?column? | y | dirty\name +---+----------+---+------------ + 3 | Hello | 4 | t +(1 row) + -- \gexec create temporary table gexec_test(a int, b text, c date, d float); select format('create index on gexec_test(%I)', attname) diff --git a/src/test/regress/sql/psql.sql b/src/test/regress/sql/psql.sql index 4a676c3119..0556b7c159 100644 --- a/src/test/regress/sql/psql.sql +++ b/src/test/regress/sql/psql.sql @@ -73,6 +73,42 @@ select 10 as test01, 20 as test02 from generate_series(1,0) \gset \unset FETCH_COUNT +-- \gdesc + +SELECT + NULL AS zero, + 1 AS one, + 2.0 AS two, + 'three' AS three, + $1 AS four, + sin($2) as five, + 'foo'::varchar(4) as six, + CURRENT_DATE AS now +\gdesc + +-- should work with tuple-returning utilities, such as EXECUTE +PREPARE test AS SELECT 1 AS first, 2 AS second; +EXECUTE test \gdesc +EXPLAIN EXECUTE test \gdesc + +-- should fail cleanly - syntax error +SELECT 1 + \gdesc + +-- check behavior with empty results +SELECT \gdesc +CREATE TABLE bububu(a int) \gdesc + +-- subject command should not have executed +TABLE bububu; -- fail + +-- query buffer should remain unchanged +SELECT 1 AS x, 'Hello', 2 AS y, true AS "dirty\name" +\gdesc +\g + +-- all on one line +SELECT 3 AS x, 'Hello', 4 AS y, true AS "dirty\name" \gdesc \g + -- \gexec create temporary table gexec_test(a int, b text, c date, d float); From 0b554e4e63a4ba4852c01951311713e23acdae02 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 5 Sep 2017 21:35:59 -0400 Subject: [PATCH 0106/1087] doc: Clarify pg_inherits description Reported-by: mjustin.lists@gmail.com --- doc/src/sgml/catalogs.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index 3126990f4d..4f56188a1c 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -3848,7 +3848,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_inherits records information about table inheritance hierarchies. There is one entry for each direct - child table in the database. (Indirect inheritance can be determined + parent-child table relationship in the database. (Indirect inheritance can be determined by following chains of entries.) From 8689e38263af7755b8100203e325a5953ed1e602 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 6 Sep 2017 10:41:05 -0400 Subject: [PATCH 0107/1087] Clean up handling of dropped columns in NAMEDTUPLESTORE RTEs. The NAMEDTUPLESTORE patch piggybacked on the infrastructure for TABLEFUNC/VALUES/CTE RTEs, none of which can ever have dropped columns, so the possibility was ignored most places. 
Fix that, including adding a specification to parsenodes.h about what it's supposed to look like. In passing, clean up assorted comments that hadn't been maintained properly by said patch. Per bug #14799 from Philippe Beaudoin. Back-patch to v10. Discussion: https://postgr.es/m/20170906120005.25630.84360@wrigleys.postgresql.org --- src/backend/optimizer/util/relnode.c | 4 +- src/backend/parser/parse_relation.c | 98 +++++++++++++++++++--------- src/backend/parser/parse_target.c | 4 +- src/backend/utils/adt/ruleutils.c | 4 +- src/include/nodes/parsenodes.h | 5 ++ 5 files changed, 77 insertions(+), 38 deletions(-) diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c index 8ad0b4a669..c7b2695ebb 100644 --- a/src/backend/optimizer/util/relnode.c +++ b/src/backend/optimizer/util/relnode.c @@ -178,8 +178,8 @@ build_simple_rel(PlannerInfo *root, int relid, RelOptInfo *parent) case RTE_NAMEDTUPLESTORE: /* - * Subquery, function, tablefunc, or values list --- set up attr - * range and arrays + * Subquery, function, tablefunc, values list, CTE, or ENR --- set + * up attr range and arrays * * Note: 0 is included in range to support whole-row Vars */ diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index 88b3e88a21..4c5c684b44 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -2014,11 +2014,13 @@ addRangeTableEntryForENR(ParseState *pstate, /* * Build the list of effective column names using user-supplied aliases - * and/or actual column names. Also build the cannibalized fields. + * and/or actual column names. */ tupdesc = ENRMetadataGetTupDesc(enrmd); rte->eref = makeAlias(refname, NIL); buildRelationAliases(tupdesc, alias, rte->eref); + + /* Record additional data for ENR, including column type info */ rte->enrname = enrmd->name; rte->enrtuples = enrmd->enrtuples; rte->coltypes = NIL; @@ -2028,16 +2030,24 @@ addRangeTableEntryForENR(ParseState *pstate, { Form_pg_attribute att = TupleDescAttr(tupdesc, attno - 1); - if (att->atttypid == InvalidOid && - !(att->attisdropped)) - elog(ERROR, "atttypid was invalid for column which has not been dropped from \"%s\"", - rv->relname); - rte->coltypes = - lappend_oid(rte->coltypes, att->atttypid); - rte->coltypmods = - lappend_int(rte->coltypmods, att->atttypmod); - rte->colcollations = - lappend_oid(rte->colcollations, att->attcollation); + if (att->attisdropped) + { + /* Record zeroes for a dropped column */ + rte->coltypes = lappend_oid(rte->coltypes, InvalidOid); + rte->coltypmods = lappend_int(rte->coltypmods, 0); + rte->colcollations = lappend_oid(rte->colcollations, InvalidOid); + } + else + { + /* Let's just make sure we can tell this isn't dropped */ + if (att->atttypid == InvalidOid) + elog(ERROR, "atttypid is invalid for non-dropped column in \"%s\"", + rv->relname); + rte->coltypes = lappend_oid(rte->coltypes, att->atttypid); + rte->coltypmods = lappend_int(rte->coltypmods, att->atttypmod); + rte->colcollations = lappend_oid(rte->colcollations, + att->attcollation); + } } /* @@ -2416,7 +2426,7 @@ expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up, case RTE_CTE: case RTE_NAMEDTUPLESTORE: { - /* Tablefunc, Values or CTE RTE */ + /* Tablefunc, Values, CTE, or ENR RTE */ ListCell *aliasp_item = list_head(rte->eref->colnames); ListCell *lct; ListCell *lcm; @@ -2436,23 +2446,43 @@ expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up, if (colnames) { /* Assume there is one alias per output column */ - char *label = 
strVal(lfirst(aliasp_item)); + if (OidIsValid(coltype)) + { + char *label = strVal(lfirst(aliasp_item)); + + *colnames = lappend(*colnames, + makeString(pstrdup(label))); + } + else if (include_dropped) + *colnames = lappend(*colnames, + makeString(pstrdup(""))); - *colnames = lappend(*colnames, - makeString(pstrdup(label))); aliasp_item = lnext(aliasp_item); } if (colvars) { - Var *varnode; + if (OidIsValid(coltype)) + { + Var *varnode; - varnode = makeVar(rtindex, varattno, - coltype, coltypmod, colcoll, - sublevels_up); - varnode->location = location; + varnode = makeVar(rtindex, varattno, + coltype, coltypmod, colcoll, + sublevels_up); + varnode->location = location; - *colvars = lappend(*colvars, varnode); + *colvars = lappend(*colvars, varnode); + } + else if (include_dropped) + { + /* + * It doesn't really matter what type the Const + * claims to be. + */ + *colvars = lappend(*colvars, + makeNullConst(INT4OID, -1, + InvalidOid)); + } } } } @@ -2831,13 +2861,21 @@ get_rte_attribute_type(RangeTblEntry *rte, AttrNumber attnum, case RTE_NAMEDTUPLESTORE: { /* - * tablefunc, VALUES or CTE RTE --- get type info from lists - * in the RTE + * tablefunc, VALUES, CTE, or ENR RTE --- get type info from + * lists in the RTE */ Assert(attnum > 0 && attnum <= list_length(rte->coltypes)); *vartype = list_nth_oid(rte->coltypes, attnum - 1); *vartypmod = list_nth_int(rte->coltypmods, attnum - 1); *varcollid = list_nth_oid(rte->colcollations, attnum - 1); + + /* For ENR, better check for dropped column */ + if (!OidIsValid(*vartype)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_COLUMN), + errmsg("column %d of relation \"%s\" does not exist", + attnum, + rte->eref->aliasname))); } break; default: @@ -2888,15 +2926,11 @@ get_rte_attribute_is_dropped(RangeTblEntry *rte, AttrNumber attnum) break; case RTE_NAMEDTUPLESTORE: { - Assert(rte->enrname); - - /* - * We checked when we loaded coltypes for the tuplestore that - * InvalidOid was only used for dropped columns, so it is safe - * to count on that here. - */ - result = - ((list_nth_oid(rte->coltypes, attnum - 1) == InvalidOid)); + /* Check dropped-ness by testing for valid coltype */ + if (attnum <= 0 || + attnum > list_length(rte->coltypes)) + elog(ERROR, "invalid varattno %d", attnum); + result = !OidIsValid((list_nth_oid(rte->coltypes, attnum - 1))); } break; case RTE_JOIN: diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c index c3cb0357ca..fce863600c 100644 --- a/src/backend/parser/parse_target.c +++ b/src/backend/parser/parse_target.c @@ -1511,8 +1511,8 @@ expandRecordVariable(ParseState *pstate, Var *var, int levelsup) case RTE_NAMEDTUPLESTORE: /* - * This case should not occur: a column of a table or values list - * shouldn't have type RECORD. Fall through and fail (most + * This case should not occur: a column of a table, values list, + * or ENR shouldn't have type RECORD. Fall through and fail (most * likely) at the bottom. */ break; diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 43646d2c4f..f9ea7ed771 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -6846,8 +6846,8 @@ get_name_for_var_field(Var *var, int fieldno, case RTE_NAMEDTUPLESTORE: /* - * This case should not occur: a column of a table or values list - * shouldn't have type RECORD. Fall through and fail (most + * This case should not occur: a column of a table, values list, + * or ENR shouldn't have type RECORD. Fall through and fail (most * likely) at the bottom. 
*/ break; diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 5f2a4a75da..ef6753e31a 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -1025,6 +1025,11 @@ typedef struct RangeTblEntry * from the catalogs if 'relid' was supplied, but we'd still need these * for TupleDesc-based ENRs, so we might as well always store the type * info here). + * + * For ENRs only, we have to consider the possibility of dropped columns. + * A dropped column is included in these lists, but it will have zeroes in + * all three lists (as well as an empty-string entry in eref). Testing + * for zero coltype is the standard way to detect a dropped column. */ List *coltypes; /* OID list of column type OIDs */ List *coltypmods; /* integer list of column typmods */ From 1c53f612bc8c9dbf97aa5a29910654a66dcdd307 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 6 Sep 2017 11:22:43 -0400 Subject: [PATCH 0108/1087] Escape < and & in SGML This is not required in SGML, but will be in XML, so this is a step to prepare for the conversion to XML. (It is still not required to escape >, but we did it here in some cases for symmetry.) Add a command-line option to osx/onsgmls calls to warn about unescaped occurrences in the future. Author: Alexander Law Author: Peter Eisentraut --- doc/src/sgml/Makefile | 9 ++++++--- doc/src/sgml/array.sgml | 2 +- doc/src/sgml/fdwhandler.sgml | 6 +++--- doc/src/sgml/plpgsql.sgml | 4 ++-- doc/src/sgml/ref/alter_operator.sgml | 4 ++-- doc/src/sgml/ref/create_view.sgml | 2 +- doc/src/sgml/ref/pgtesttiming.sgml | 12 ++++++------ doc/src/sgml/release-8.4.sgml | 2 +- doc/src/sgml/release-9.0.sgml | 2 +- doc/src/sgml/release-9.1.sgml | 2 +- doc/src/sgml/release-9.2.sgml | 2 +- doc/src/sgml/release-9.3.sgml | 2 +- doc/src/sgml/rules.sgml | 4 ++-- doc/src/sgml/syntax.sgml | 2 +- 14 files changed, 29 insertions(+), 26 deletions(-) diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile index 8a73cc796f..7458ef4de2 100644 --- a/doc/src/sgml/Makefile +++ b/doc/src/sgml/Makefile @@ -66,10 +66,13 @@ ALLSGML := $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml) $(GENERATED_SGML) # Enable some extra warnings # -wfully-tagged needed to throw a warning on missing tags # for older tool chains, 2007-08-31 -# Note: try "make SPFLAGS=-wxml" to catch a lot of other dubious constructs, -# in particular < and & that haven't been made into entities. It's far too -# noisy to turn on by default, unfortunately. override SPFLAGS += -wall -wno-unused-param -wno-empty -wfully-tagged +# Additional warnings for XML compatibility. The conditional is meant +# to detect whether we are using OpenSP rather than the ancient +# original SP. 
+ifneq (,$(filter o%,$(notdir $(OSX)))) +override SPFLAGS += -wdata-delim +endif ## diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 58878451f0..dd0d20e541 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -654,7 +654,7 @@ SELECT * FROM For instance: -SELECT * FROM sal_emp WHERE pay_by_quarter && ARRAY[10000]; +SELECT * FROM sal_emp WHERE pay_by_quarter && ARRAY[10000]; This and other array operators are further described in diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml index cfa6808417..edf1029fe6 100644 --- a/doc/src/sgml/fdwhandler.sgml +++ b/doc/src/sgml/fdwhandler.sgml @@ -696,9 +696,9 @@ IsForeignRelUpdatable (Relation rel); The return value should be a bit mask of rule event numbers indicating which operations are supported by the foreign table, using the CmdType enumeration; that is, - (1 << CMD_UPDATE) = 4 for UPDATE, - (1 << CMD_INSERT) = 8 for INSERT, and - (1 << CMD_DELETE) = 16 for DELETE. + (1 << CMD_UPDATE) = 4 for UPDATE, + (1 << CMD_INSERT) = 8 for INSERT, and + (1 << CMD_DELETE) = 16 for DELETE. diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index 2f166d2d59..6dc438a152 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -1823,8 +1823,8 @@ $BODY$ BEGIN RETURN QUERY SELECT flightid FROM flight - WHERE flightdate >= $1 - AND flightdate < ($1 + 1); + WHERE flightdate >= $1 + AND flightdate < ($1 + 1); -- Since execution is not finished, we can check whether rows were returned -- and raise exception if not. diff --git a/doc/src/sgml/ref/alter_operator.sgml b/doc/src/sgml/ref/alter_operator.sgml index b2eaa7a263..9579d00b78 100644 --- a/doc/src/sgml/ref/alter_operator.sgml +++ b/doc/src/sgml/ref/alter_operator.sgml @@ -134,9 +134,9 @@ ALTER OPERATOR @@ (text, text) OWNER TO joe; - Change the restriction and join selectivity estimator functions of a custom operator a && b for type int[]: + Change the restriction and join selectivity estimator functions of a custom operator a && b for type int[]: -ALTER OPERATOR && (_int4, _int4) SET (RESTRICT = _int_contsel, JOIN = _int_contjoinsel); +ALTER OPERATOR && (_int4, _int4) SET (RESTRICT = _int_contsel, JOIN = _int_contjoinsel); diff --git a/doc/src/sgml/ref/create_view.sgml b/doc/src/sgml/ref/create_view.sgml index a83d9564e5..319335051b 100644 --- a/doc/src/sgml/ref/create_view.sgml +++ b/doc/src/sgml/ref/create_view.sgml @@ -466,7 +466,7 @@ CREATE VIEW comedies AS CREATE RECURSIVE VIEW public.nums_1_100 (n) AS VALUES (1) UNION ALL - SELECT n+1 FROM nums_1_100 WHERE n < 100; + SELECT n+1 FROM nums_1_100 WHERE n < 100; Notice that although the recursive view's name is schema-qualified in this CREATE, its internal self-reference is not schema-qualified. diff --git a/doc/src/sgml/ref/pgtesttiming.sgml b/doc/src/sgml/ref/pgtesttiming.sgml index e3539cf764..c659101361 100644 --- a/doc/src/sgml/ref/pgtesttiming.sgml +++ b/doc/src/sgml/ref/pgtesttiming.sgml @@ -94,7 +94,7 @@ nanoseconds. 
This example from an Intel i7-860 system using a TSC clock source shows excellent performance: - + +]]> @@ -152,7 +152,7 @@ EXPLAIN ANALYZE SELECT COUNT(*) FROM t; possible from switching to the slower acpi_pm time source, on the same system used for the fast results above: - + /sys/devices/system/clocksource/clocksource0/current_clocksource @@ -165,7 +165,7 @@ Histogram of timing durations: 4 0.07810 3241 8 0.01357 563 16 0.00007 3 - +]]> @@ -201,7 +201,7 @@ kern.timecounter.hardware: ACPI-fast -> TSC implementation, which can have good resolution when it's backed by fast enough timing hardware, as in this example: - + +]]> diff --git a/doc/src/sgml/release-8.4.sgml b/doc/src/sgml/release-8.4.sgml index 16004edb74..53e319ff33 100644 --- a/doc/src/sgml/release-8.4.sgml +++ b/doc/src/sgml/release-8.4.sgml @@ -962,7 +962,7 @@ to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). diff --git a/doc/src/sgml/release-9.0.sgml b/doc/src/sgml/release-9.0.sgml index e7d2ffddaf..f7c63fc567 100644 --- a/doc/src/sgml/release-9.0.sgml +++ b/doc/src/sgml/release-9.0.sgml @@ -2900,7 +2900,7 @@ to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). diff --git a/doc/src/sgml/release-9.1.sgml b/doc/src/sgml/release-9.1.sgml index 0454f849d4..c354b7d1bc 100644 --- a/doc/src/sgml/release-9.1.sgml +++ b/doc/src/sgml/release-9.1.sgml @@ -4654,7 +4654,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). diff --git a/doc/src/sgml/release-9.2.sgml b/doc/src/sgml/release-9.2.sgml index 57a8e93b43..faa7ae4d57 100644 --- a/doc/src/sgml/release-9.2.sgml +++ b/doc/src/sgml/release-9.2.sgml @@ -6553,7 +6553,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml index 2ad5dee09c..f3b00a70d5 100644 --- a/doc/src/sgml/release-9.3.sgml +++ b/doc/src/sgml/release-9.3.sgml @@ -9930,7 +9930,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). 
diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index bcbc170335..61423c25ef 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -970,7 +970,7 @@ CREATE MATERIALIZED VIEW sales_summary AS invoice_date, sum(invoice_amt)::numeric(13,2) as sales_amt FROM invoice - WHERE invoice_date < CURRENT_DATE + WHERE invoice_date < CURRENT_DATE GROUP BY seller_no, invoice_date @@ -1058,7 +1058,7 @@ SELECT count(*) FROM words WHERE word = 'caterpiler'; have wanted. Again using file_fdw: -SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; +SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; word --------------- diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml index a2d136eaf8..06f0f0b8e0 100644 --- a/doc/src/sgml/syntax.sgml +++ b/doc/src/sgml/syntax.sgml @@ -1725,7 +1725,7 @@ SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households; SELECT count(*) AS unfiltered, - count(*) FILTER (WHERE i < 5) AS filtered + count(*) FILTER (WHERE i < 5) AS filtered FROM generate_series(1,10) AS s(i); unfiltered | filtered ------------+---------- From 34ae182833a4f69ad5d93f06588665a918ee5b03 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 6 Sep 2017 11:38:28 -0400 Subject: [PATCH 0109/1087] doc: Make function synopsis formatting more uniform Whitespace use was inconsistent in the same chapter. --- doc/src/sgml/fdwhandler.sgml | 177 ++++++++++++++++++----------------- 1 file changed, 90 insertions(+), 87 deletions(-) diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml index edf1029fe6..a59e03af98 100644 --- a/doc/src/sgml/fdwhandler.sgml +++ b/doc/src/sgml/fdwhandler.sgml @@ -101,9 +101,9 @@ void -GetForeignRelSize (PlannerInfo *root, - RelOptInfo *baserel, - Oid foreigntableid); +GetForeignRelSize(PlannerInfo *root, + RelOptInfo *baserel, + Oid foreigntableid); Obtain relation size estimates for a foreign table. This is called @@ -132,9 +132,9 @@ GetForeignRelSize (PlannerInfo *root, void -GetForeignPaths (PlannerInfo *root, - RelOptInfo *baserel, - Oid foreigntableid); +GetForeignPaths(PlannerInfo *root, + RelOptInfo *baserel, + Oid foreigntableid); Create possible access paths for a scan on a foreign table. @@ -163,13 +163,13 @@ GetForeignPaths (PlannerInfo *root, ForeignScan * -GetForeignPlan (PlannerInfo *root, - RelOptInfo *baserel, - Oid foreigntableid, - ForeignPath *best_path, - List *tlist, - List *scan_clauses, - Plan *outer_plan); +GetForeignPlan(PlannerInfo *root, + RelOptInfo *baserel, + Oid foreigntableid, + ForeignPath *best_path, + List *tlist, + List *scan_clauses, + Plan *outer_plan); Create a ForeignScan plan node from the selected foreign @@ -199,8 +199,8 @@ GetForeignPlan (PlannerInfo *root, void -BeginForeignScan (ForeignScanState *node, - int eflags); +BeginForeignScan(ForeignScanState *node, + int eflags); Begin executing a foreign scan. This is called during executor startup. @@ -227,7 +227,7 @@ BeginForeignScan (ForeignScanState *node, TupleTableSlot * -IterateForeignScan (ForeignScanState *node); +IterateForeignScan(ForeignScanState *node); Fetch one row from the foreign source, returning it in a tuple table slot @@ -264,7 +264,7 @@ IterateForeignScan (ForeignScanState *node); void -ReScanForeignScan (ForeignScanState *node); +ReScanForeignScan(ForeignScanState *node); Restart the scan from the beginning. 
Note that any parameters the @@ -275,7 +275,7 @@ ReScanForeignScan (ForeignScanState *node); void -EndForeignScan (ForeignScanState *node); +EndForeignScan(ForeignScanState *node); End the scan and release resources. It is normally not important @@ -297,12 +297,12 @@ EndForeignScan (ForeignScanState *node); void -GetForeignJoinPaths (PlannerInfo *root, - RelOptInfo *joinrel, - RelOptInfo *outerrel, - RelOptInfo *innerrel, - JoinType jointype, - JoinPathExtraData *extra); +GetForeignJoinPaths(PlannerInfo *root, + RelOptInfo *joinrel, + RelOptInfo *outerrel, + RelOptInfo *innerrel, + JoinType jointype, + JoinPathExtraData *extra); Create possible access paths for a join of two (or more) foreign tables that all belong to the same foreign server. This optional @@ -356,10 +356,10 @@ GetForeignJoinPaths (PlannerInfo *root, void -GetForeignUpperPaths (PlannerInfo *root, - UpperRelationKind stage, - RelOptInfo *input_rel, - RelOptInfo *output_rel); +GetForeignUpperPaths(PlannerInfo *root, + UpperRelationKind stage, + RelOptInfo *input_rel, + RelOptInfo *output_rel); Create possible access paths for upper relation processing, which is the planner's term for all post-scan/join query processing, such @@ -404,9 +404,9 @@ GetForeignUpperPaths (PlannerInfo *root, void -AddForeignUpdateTargets (Query *parsetree, - RangeTblEntry *target_rte, - Relation target_relation); +AddForeignUpdateTargets(Query *parsetree, + RangeTblEntry *target_rte, + Relation target_relation); UPDATE and DELETE operations are performed @@ -451,10 +451,10 @@ AddForeignUpdateTargets (Query *parsetree, List * -PlanForeignModify (PlannerInfo *root, - ModifyTable *plan, - Index resultRelation, - int subplan_index); +PlanForeignModify(PlannerInfo *root, + ModifyTable *plan, + Index resultRelation, + int subplan_index); Perform any additional planning actions needed for an insert, update, or @@ -490,11 +490,11 @@ PlanForeignModify (PlannerInfo *root, void -BeginForeignModify (ModifyTableState *mtstate, - ResultRelInfo *rinfo, - List *fdw_private, - int subplan_index, - int eflags); +BeginForeignModify(ModifyTableState *mtstate, + ResultRelInfo *rinfo, + List *fdw_private, + int subplan_index, + int eflags); Begin executing a foreign table modification operation. This routine is @@ -536,10 +536,10 @@ BeginForeignModify (ModifyTableState *mtstate, TupleTableSlot * -ExecForeignInsert (EState *estate, - ResultRelInfo *rinfo, - TupleTableSlot *slot, - TupleTableSlot *planSlot); +ExecForeignInsert(EState *estate, + ResultRelInfo *rinfo, + TupleTableSlot *slot, + TupleTableSlot *planSlot); Insert one tuple into the foreign table. @@ -582,10 +582,10 @@ ExecForeignInsert (EState *estate, TupleTableSlot * -ExecForeignUpdate (EState *estate, - ResultRelInfo *rinfo, - TupleTableSlot *slot, - TupleTableSlot *planSlot); +ExecForeignUpdate(EState *estate, + ResultRelInfo *rinfo, + TupleTableSlot *slot, + TupleTableSlot *planSlot); Update one tuple in the foreign table. @@ -628,10 +628,10 @@ ExecForeignUpdate (EState *estate, TupleTableSlot * -ExecForeignDelete (EState *estate, - ResultRelInfo *rinfo, - TupleTableSlot *slot, - TupleTableSlot *planSlot); +ExecForeignDelete(EState *estate, + ResultRelInfo *rinfo, + TupleTableSlot *slot, + TupleTableSlot *planSlot); Delete one tuple from the foreign table. @@ -672,8 +672,8 @@ ExecForeignDelete (EState *estate, void -EndForeignModify (EState *estate, - ResultRelInfo *rinfo); +EndForeignModify(EState *estate, + ResultRelInfo *rinfo); End the table update and release resources. 
It is normally not important @@ -689,7 +689,7 @@ EndForeignModify (EState *estate, int -IsForeignRelUpdatable (Relation rel); +IsForeignRelUpdatable(Relation rel); Report which update operations the specified foreign table supports. @@ -729,10 +729,10 @@ IsForeignRelUpdatable (Relation rel); bool -PlanDirectModify (PlannerInfo *root, - ModifyTable *plan, - Index resultRelation, - int subplan_index); +PlanDirectModify(PlannerInfo *root, + ModifyTable *plan, + Index resultRelation, + int subplan_index); Decide whether it is safe to execute a direct modification @@ -771,8 +771,8 @@ PlanDirectModify (PlannerInfo *root, void -BeginDirectModify (ForeignScanState *node, - int eflags); +BeginDirectModify(ForeignScanState *node, + int eflags); Prepare to execute a direct modification on the remote server. @@ -805,7 +805,7 @@ BeginDirectModify (ForeignScanState *node, TupleTableSlot * -IterateDirectModify (ForeignScanState *node); +IterateDirectModify(ForeignScanState *node); When the INSERT, UPDATE or DELETE @@ -851,7 +851,7 @@ IterateDirectModify (ForeignScanState *node); void -EndDirectModify (ForeignScanState *node); +EndDirectModify(ForeignScanState *node); Clean up following a direct modification on the remote server. It is @@ -879,8 +879,8 @@ EndDirectModify (ForeignScanState *node); RowMarkType -GetForeignRowMarkType (RangeTblEntry *rte, - LockClauseStrength strength); +GetForeignRowMarkType(RangeTblEntry *rte, + LockClauseStrength strength); Report which row-marking option to use for a foreign table. @@ -911,10 +911,10 @@ GetForeignRowMarkType (RangeTblEntry *rte, HeapTuple -RefetchForeignRow (EState *estate, - ExecRowMark *erm, - Datum rowid, - bool *updated); +RefetchForeignRow(EState *estate, + ExecRowMark *erm, + Datum rowid, + bool *updated); Re-fetch one tuple from the foreign table, after locking it if required. @@ -970,7 +970,8 @@ RefetchForeignRow (EState *estate, bool -RecheckForeignScan (ForeignScanState *node, TupleTableSlot *slot); +RecheckForeignScan(ForeignScanState *node, + TupleTableSlot *slot); Recheck that a previously-returned tuple still matches the relevant scan and join qualifiers, and possibly provide a modified version of @@ -1011,8 +1012,8 @@ RecheckForeignScan (ForeignScanState *node, TupleTableSlot *slot); void -ExplainForeignScan (ForeignScanState *node, - ExplainState *es); +ExplainForeignScan(ForeignScanState *node, + ExplainState *es); Print additional EXPLAIN output for a foreign table scan. @@ -1033,11 +1034,11 @@ ExplainForeignScan (ForeignScanState *node, void -ExplainForeignModify (ModifyTableState *mtstate, - ResultRelInfo *rinfo, - List *fdw_private, - int subplan_index, - struct ExplainState *es); +ExplainForeignModify(ModifyTableState *mtstate, + ResultRelInfo *rinfo, + List *fdw_private, + int subplan_index, + struct ExplainState *es); Print additional EXPLAIN output for a foreign table update. 
@@ -1059,8 +1060,8 @@ ExplainForeignModify (ModifyTableState *mtstate, void -ExplainDirectModify (ForeignScanState *node, - ExplainState *es); +ExplainDirectModify(ForeignScanState *node, + ExplainState *es); Print additional EXPLAIN output for a direct modification @@ -1087,9 +1088,9 @@ ExplainDirectModify (ForeignScanState *node, bool -AnalyzeForeignTable (Relation relation, - AcquireSampleRowsFunc *func, - BlockNumber *totalpages); +AnalyzeForeignTable(Relation relation, + AcquireSampleRowsFunc *func, + BlockNumber *totalpages); This function is called when is executed on @@ -1109,10 +1110,12 @@ AnalyzeForeignTable (Relation relation, If provided, the sample collection function must have the signature int -AcquireSampleRowsFunc (Relation relation, int elevel, - HeapTuple *rows, int targrows, - double *totalrows, - double *totaldeadrows); +AcquireSampleRowsFunc(Relation relation, + int elevel, + HeapTuple *rows, + int targrows, + double *totalrows, + double *totaldeadrows); A random sample of up to targrows rows should be collected @@ -1132,7 +1135,7 @@ AcquireSampleRowsFunc (Relation relation, int elevel, List * -ImportForeignSchema (ImportForeignSchemaStmt *stmt, Oid serverOid); +ImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid); Obtain a list of foreign table creation commands. This function is From e530be96859eb0a0e0bab98a79029268ddc98a1d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 6 Sep 2017 14:06:09 -0400 Subject: [PATCH 0110/1087] Remove duplicate reads from the inner loops in generic atomic ops. The pg_atomic_compare_exchange_xxx functions are defined to update *expected to whatever they read from the target variable. Therefore, there's no need to do additional explicit reads after we've initialized the "old" variable. The actual benefit of this is somewhat debatable, but it seems fairly unlikely to hurt anything, especially since we will override the generic implementations in most performance-sensitive cases. 
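The contract being relied on also holds for standard C11 atomics, so the new loop shape can be sketched independently of the pg_atomic layer (illustration only, using <stdatomic.h> rather than this patch's code):

#include <stdatomic.h>

/*
 * One explicit read before the loop is enough: when the CAS fails,
 * atomic_compare_exchange_strong() stores the value it actually saw
 * back into "old", so the next iteration retries with fresh data.
 */
static unsigned
fetch_or_sketch(atomic_uint *ptr, unsigned or_)
{
	unsigned	old = atomic_load(ptr);

	while (!atomic_compare_exchange_strong(ptr, &old, old | or_))
		/* skip: "old" was refreshed by the failed CAS */ ;
	return old;
}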
Yura Sokolov, reviewed by Jesper Pedersen and myself Discussion: https://postgr.es/m/7f65886daca545067f82bf2b463b218d@postgrespro.ru --- src/include/port/atomics/generic.h | 72 ++++++++++-------------------- 1 file changed, 24 insertions(+), 48 deletions(-) diff --git a/src/include/port/atomics/generic.h b/src/include/port/atomics/generic.h index 424543604a..75ffaf6e87 100644 --- a/src/include/port/atomics/generic.h +++ b/src/include/port/atomics/generic.h @@ -170,12 +170,9 @@ static inline uint32 pg_atomic_exchange_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 xchg_) { uint32 old; - while (true) - { - old = pg_atomic_read_u32_impl(ptr); - if (pg_atomic_compare_exchange_u32_impl(ptr, &old, xchg_)) - break; - } + old = pg_atomic_read_u32_impl(ptr); + while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, xchg_)) + /* skip */; return old; } #endif @@ -186,12 +183,9 @@ static inline uint32 pg_atomic_fetch_add_u32_impl(volatile pg_atomic_uint32 *ptr, int32 add_) { uint32 old; - while (true) - { - old = pg_atomic_read_u32_impl(ptr); - if (pg_atomic_compare_exchange_u32_impl(ptr, &old, old + add_)) - break; - } + old = pg_atomic_read_u32_impl(ptr); + while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, old + add_)) + /* skip */; return old; } #endif @@ -211,12 +205,9 @@ static inline uint32 pg_atomic_fetch_and_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 and_) { uint32 old; - while (true) - { - old = pg_atomic_read_u32_impl(ptr); - if (pg_atomic_compare_exchange_u32_impl(ptr, &old, old & and_)) - break; - } + old = pg_atomic_read_u32_impl(ptr); + while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, old & and_)) + /* skip */; return old; } #endif @@ -227,12 +218,9 @@ static inline uint32 pg_atomic_fetch_or_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 or_) { uint32 old; - while (true) - { - old = pg_atomic_read_u32_impl(ptr); - if (pg_atomic_compare_exchange_u32_impl(ptr, &old, old | or_)) - break; - } + old = pg_atomic_read_u32_impl(ptr); + while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, old | or_)) + /* skip */; return old; } #endif @@ -261,12 +249,9 @@ static inline uint64 pg_atomic_exchange_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 xchg_) { uint64 old; - while (true) - { - old = ptr->value; - if (pg_atomic_compare_exchange_u64_impl(ptr, &old, xchg_)) - break; - } + old = ptr->value; + while (!pg_atomic_compare_exchange_u64_impl(ptr, &old, xchg_)) + /* skip */; return old; } #endif @@ -357,12 +342,9 @@ static inline uint64 pg_atomic_fetch_add_u64_impl(volatile pg_atomic_uint64 *ptr, int64 add_) { uint64 old; - while (true) - { - old = pg_atomic_read_u64_impl(ptr); - if (pg_atomic_compare_exchange_u64_impl(ptr, &old, old + add_)) - break; - } + old = pg_atomic_read_u64_impl(ptr); + while (!pg_atomic_compare_exchange_u64_impl(ptr, &old, old + add_)) + /* skip */; return old; } #endif @@ -382,12 +364,9 @@ static inline uint64 pg_atomic_fetch_and_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 and_) { uint64 old; - while (true) - { - old = pg_atomic_read_u64_impl(ptr); - if (pg_atomic_compare_exchange_u64_impl(ptr, &old, old & and_)) - break; - } + old = pg_atomic_read_u64_impl(ptr); + while (!pg_atomic_compare_exchange_u64_impl(ptr, &old, old & and_)) + /* skip */; return old; } #endif @@ -398,12 +377,9 @@ static inline uint64 pg_atomic_fetch_or_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 or_) { uint64 old; - while (true) - { - old = pg_atomic_read_u64_impl(ptr); - if (pg_atomic_compare_exchange_u64_impl(ptr, &old, old | or_)) - break; - } + old = 
pg_atomic_read_u64_impl(ptr); + while (!pg_atomic_compare_exchange_u64_impl(ptr, &old, old | or_)) + /* skip */; return old; } #endif From e09db94c0a5f3b440d96c5c9e8e6c1638d1ec39f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 6 Sep 2017 14:21:39 -0400 Subject: [PATCH 0111/1087] Use more of gcc's __sync_fetch_and_xxx builtin functions for atomic ops. In addition to __sync_fetch_and_add, gcc offers __sync_fetch_and_sub, __sync_fetch_and_and, and __sync_fetch_and_or, which correspond directly to primitive atomic ops that we want. Testing shows that in some cases they generate better code than our generic implementations, so use them. We've assumed that configure's test for __sync_val_compare_and_swap is sufficient to allow assuming that __sync_fetch_and_add is available, so make the same assumption for these functions. Should that prove to be wrong, we can add more configure tests. Yura Sokolov, reviewed by Jesper Pedersen and myself Discussion: https://postgr.es/m/7f65886daca545067f82bf2b463b218d@postgrespro.ru --- src/include/port/atomics/generic-gcc.h | 58 ++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/src/include/port/atomics/generic-gcc.h b/src/include/port/atomics/generic-gcc.h index 7efc0861e7..e6871646a7 100644 --- a/src/include/port/atomics/generic-gcc.h +++ b/src/include/port/atomics/generic-gcc.h @@ -176,6 +176,8 @@ pg_atomic_compare_exchange_u32_impl(volatile pg_atomic_uint32 *ptr, } #endif +/* if we have 32-bit __sync_val_compare_and_swap, assume we have these too: */ + #if !defined(PG_HAVE_ATOMIC_FETCH_ADD_U32) && defined(HAVE_GCC__SYNC_INT32_CAS) #define PG_HAVE_ATOMIC_FETCH_ADD_U32 static inline uint32 @@ -185,6 +187,33 @@ pg_atomic_fetch_add_u32_impl(volatile pg_atomic_uint32 *ptr, int32 add_) } #endif +#if !defined(PG_HAVE_ATOMIC_FETCH_SUB_U32) && defined(HAVE_GCC__SYNC_INT32_CAS) +#define PG_HAVE_ATOMIC_FETCH_SUB_U32 +static inline uint32 +pg_atomic_fetch_sub_u32_impl(volatile pg_atomic_uint32 *ptr, int32 sub_) +{ + return __sync_fetch_and_sub(&ptr->value, sub_); +} +#endif + +#if !defined(PG_HAVE_ATOMIC_FETCH_AND_U32) && defined(HAVE_GCC__SYNC_INT32_CAS) +#define PG_HAVE_ATOMIC_FETCH_AND_U32 +static inline uint32 +pg_atomic_fetch_and_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 and_) +{ + return __sync_fetch_and_and(&ptr->value, and_); +} +#endif + +#if !defined(PG_HAVE_ATOMIC_FETCH_OR_U32) && defined(HAVE_GCC__SYNC_INT32_CAS) +#define PG_HAVE_ATOMIC_FETCH_OR_U32 +static inline uint32 +pg_atomic_fetch_or_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 or_) +{ + return __sync_fetch_and_or(&ptr->value, or_); +} +#endif + #if !defined(PG_DISABLE_64_BIT_ATOMICS) @@ -214,6 +243,8 @@ pg_atomic_compare_exchange_u64_impl(volatile pg_atomic_uint64 *ptr, } #endif +/* if we have 64-bit __sync_val_compare_and_swap, assume we have these too: */ + #if !defined(PG_HAVE_ATOMIC_FETCH_ADD_U64) && defined(HAVE_GCC__SYNC_INT64_CAS) #define PG_HAVE_ATOMIC_FETCH_ADD_U64 static inline uint64 @@ -223,6 +254,33 @@ pg_atomic_fetch_add_u64_impl(volatile pg_atomic_uint64 *ptr, int64 add_) } #endif +#if !defined(PG_HAVE_ATOMIC_FETCH_SUB_U64) && defined(HAVE_GCC__SYNC_INT64_CAS) +#define PG_HAVE_ATOMIC_FETCH_SUB_U64 +static inline uint64 +pg_atomic_fetch_sub_u64_impl(volatile pg_atomic_uint64 *ptr, int64 sub_) +{ + return __sync_fetch_and_sub(&ptr->value, sub_); +} +#endif + +#if !defined(PG_HAVE_ATOMIC_FETCH_AND_U64) && defined(HAVE_GCC__SYNC_INT64_CAS) +#define PG_HAVE_ATOMIC_FETCH_AND_U64 +static inline uint64 +pg_atomic_fetch_and_u64_impl(volatile pg_atomic_uint64 
*ptr, uint64 and_) +{ + return __sync_fetch_and_and(&ptr->value, and_); +} +#endif + +#if !defined(PG_HAVE_ATOMIC_FETCH_OR_U64) && defined(HAVE_GCC__SYNC_INT64_CAS) +#define PG_HAVE_ATOMIC_FETCH_OR_U64 +static inline uint64 +pg_atomic_fetch_or_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 or_) +{ + return __sync_fetch_and_or(&ptr->value, or_); +} +#endif + #endif /* !defined(PG_DISABLE_64_BIT_ATOMICS) */ #endif /* defined(HAVE_ATOMICS) */ From 5b6d13eec72b960eb0f78542199380e49c8583d4 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Wed, 6 Sep 2017 13:46:01 -0700 Subject: [PATCH 0112/1087] Allow SET STATISTICS on expression indexes Index columns are referenced by ordinal number rather than name, e.g. CREATE INDEX coord_idx ON measured (x, y, (z + t)); ALTER INDEX coord_idx ALTER COLUMN 3 SET STATISTICS 1000; Incompatibility note for release notes: \d+ for indexes now also displays Stats Target Authors: Alexander Korotkov, with contribution by Adrien NAYRAT Review: Adrien NAYRAT, Simon Riggs Wordsmith: Simon Riggs --- doc/src/sgml/ref/alter_index.sgml | 39 +++++++++++++++ src/backend/commands/tablecmds.c | 55 +++++++++++++++++----- src/backend/nodes/copyfuncs.c | 1 + src/backend/nodes/equalfuncs.c | 1 + src/backend/parser/gram.y | 16 +++++++ src/backend/utils/cache/syscache.c | 46 ++++++++++++++++++ src/bin/psql/describe.c | 2 + src/bin/psql/tab-complete.c | 5 +- src/include/nodes/parsenodes.h | 2 + src/include/utils/syscache.h | 3 ++ src/test/regress/expected/alter_table.out | 24 ++++++++++ src/test/regress/expected/create_index.out | 8 ++-- src/test/regress/sql/alter_table.sql | 16 +++++++ 13 files changed, 201 insertions(+), 17 deletions(-) diff --git a/doc/src/sgml/ref/alter_index.sgml b/doc/src/sgml/ref/alter_index.sgml index ad77b5743a..7d6553d2db 100644 --- a/doc/src/sgml/ref/alter_index.sgml +++ b/doc/src/sgml/ref/alter_index.sgml @@ -26,6 +26,8 @@ ALTER INDEX [ IF EXISTS ] name SET ALTER INDEX name DEPENDS ON EXTENSION extension_name ALTER INDEX [ IF EXISTS ] name SET ( storage_parameter = value [, ... ] ) ALTER INDEX [ IF EXISTS ] name RESET ( storage_parameter [, ... ] ) +ALTER INDEX [ IF EXISTS ] name ALTER [ COLUMN ] column_number + SET STATISTICS integer ALTER INDEX ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ] SET TABLESPACE new_tablespace [ NOWAIT ] @@ -110,6 +112,25 @@ ALTER INDEX ALL IN TABLESPACE name + + ALTER [ COLUMN ] column_number SET STATISTICS integer + + + This form sets the per-column statistics-gathering target for + subsequent operations, though can + be used only on index columns that are defined as an expression. + Since expressions lack a unique name, we refer to them using the + ordinal number of the index column. + The target can be set in the range 0 to 10000; alternatively, set it + to -1 to revert to using the system default statistics + target (). + For more information on the use of statistics by the + PostgreSQL query planner, refer to + . + + + + @@ -130,6 +151,16 @@ ALTER INDEX ALL IN TABLESPACE name + + column_number + + + The ordinal number refers to the ordinal (left-to-right) position + of the index column. 
+ + + + name @@ -235,6 +266,14 @@ ALTER INDEX distributors SET (fillfactor = 75); REINDEX INDEX distributors; + + Set the statistics-gathering target for an expression index: + +CREATE INDEX coord_idx ON measured (x, y, (z + t)); +ALTER INDEX coord_idx ALTER COLUMN 3 SET STATISTICS 1000; + + + diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 0f08245a67..c8fc9cb7fe 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -375,9 +375,9 @@ static ObjectAddress ATExecAddIdentity(Relation rel, const char *colName, static ObjectAddress ATExecSetIdentity(Relation rel, const char *colName, Node *def, LOCKMODE lockmode); static ObjectAddress ATExecDropIdentity(Relation rel, const char *colName, bool missing_ok, LOCKMODE lockmode); -static void ATPrepSetStatistics(Relation rel, const char *colName, +static void ATPrepSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newValue, LOCKMODE lockmode); -static ObjectAddress ATExecSetStatistics(Relation rel, const char *colName, +static ObjectAddress ATExecSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newValue, LOCKMODE lockmode); static ObjectAddress ATExecSetOptions(Relation rel, const char *colName, Node *options, bool isReset, LOCKMODE lockmode); @@ -3525,7 +3525,7 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd, case AT_SetStatistics: /* ALTER COLUMN SET STATISTICS */ ATSimpleRecursion(wqueue, rel, cmd, recurse, lockmode); /* Performs own permission checks */ - ATPrepSetStatistics(rel, cmd->name, cmd->def, lockmode); + ATPrepSetStatistics(rel, cmd->name, cmd->num, cmd->def, lockmode); pass = AT_PASS_MISC; break; case AT_SetOptions: /* ALTER COLUMN SET ( options ) */ @@ -3848,7 +3848,7 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab, Relation rel, address = ATExecSetNotNull(tab, rel, cmd->name, lockmode); break; case AT_SetStatistics: /* ALTER COLUMN SET STATISTICS */ - address = ATExecSetStatistics(rel, cmd->name, cmd->def, lockmode); + address = ATExecSetStatistics(rel, cmd->name, cmd->num, cmd->def, lockmode); break; case AT_SetOptions: /* ALTER COLUMN SET ( options ) */ address = ATExecSetOptions(rel, cmd->name, cmd->def, false, lockmode); @@ -6120,7 +6120,7 @@ ATExecDropIdentity(Relation rel, const char *colName, bool missing_ok, LOCKMODE * ALTER TABLE ALTER COLUMN SET STATISTICS */ static void -ATPrepSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE lockmode) +ATPrepSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newValue, LOCKMODE lockmode) { /* * We do our own permission checking because (a) we want to allow SET @@ -6138,6 +6138,15 @@ ATPrepSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE errmsg("\"%s\" is not a table, materialized view, index, or foreign table", RelationGetRelationName(rel)))); + /* + * We allow referencing columns by numbers only for indexes, since + * table column numbers could contain gaps if columns are later dropped. 
+ */ + if (rel->rd_rel->relkind != RELKIND_INDEX && !colName) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot refer to non-index column by number"))); + /* Permissions checks */ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId())) aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, @@ -6148,7 +6157,7 @@ ATPrepSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE * Return value is the address of the modified column */ static ObjectAddress -ATExecSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE lockmode) +ATExecSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newValue, LOCKMODE lockmode) { int newtarget; Relation attrelation; @@ -6181,13 +6190,27 @@ ATExecSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE attrelation = heap_open(AttributeRelationId, RowExclusiveLock); - tuple = SearchSysCacheCopyAttName(RelationGetRelid(rel), colName); + if (colName) + { + tuple = SearchSysCacheCopyAttName(RelationGetRelid(rel), colName); + + if (!HeapTupleIsValid(tuple)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_COLUMN), + errmsg("column \"%s\" of relation \"%s\" does not exist", + colName, RelationGetRelationName(rel)))); + } + else + { + tuple = SearchSysCacheCopyAttNum(RelationGetRelid(rel), colNum); + + if (!HeapTupleIsValid(tuple)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_COLUMN), + errmsg("column number %d of relation \"%s\" does not exist", + colNum, RelationGetRelationName(rel)))); + } - if (!HeapTupleIsValid(tuple)) - ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_COLUMN), - errmsg("column \"%s\" of relation \"%s\" does not exist", - colName, RelationGetRelationName(rel)))); attrtuple = (Form_pg_attribute) GETSTRUCT(tuple); attnum = attrtuple->attnum; @@ -6197,6 +6220,14 @@ ATExecSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE errmsg("cannot alter system column \"%s\"", colName))); + if (rel->rd_rel->relkind == RELKIND_INDEX && + rel->rd_index->indkey.values[attnum - 1] != 0) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot alter statistics on non-expression column \"%s\" of index \"%s\"", + NameStr(attrtuple->attname), RelationGetRelationName(rel)), + errhint("Alter statistics on table column instead."))); + attrtuple->attstattarget = newtarget; CatalogTupleUpdate(attrelation, &tuple->t_self, tuple); diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index f9ddf4ed76..9bae2647fd 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -3087,6 +3087,7 @@ _copyAlterTableCmd(const AlterTableCmd *from) COPY_SCALAR_FIELD(subtype); COPY_STRING_FIELD(name); + COPY_SCALAR_FIELD(num); COPY_NODE_FIELD(newowner); COPY_NODE_FIELD(def); COPY_SCALAR_FIELD(behavior); diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 8d92c03633..11731da80a 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -1098,6 +1098,7 @@ _equalAlterTableCmd(const AlterTableCmd *a, const AlterTableCmd *b) { COMPARE_SCALAR_FIELD(subtype); COMPARE_STRING_FIELD(name); + COMPARE_SCALAR_FIELD(num); COMPARE_NODE_FIELD(newowner); COMPARE_NODE_FIELD(def); COMPARE_SCALAR_FIELD(behavior); diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index 7d0de99baf..5eb398118e 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -2078,6 +2078,22 @@ alter_table_cmd: n->def = (Node *) makeInteger($6); $$ = (Node *)n; } + /* ALTER 
TABLE ALTER [COLUMN] SET STATISTICS */ + | ALTER opt_column Iconst SET STATISTICS SignedIconst + { + AlterTableCmd *n = makeNode(AlterTableCmd); + + if ($3 <= 0 || $3 > PG_INT16_MAX) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("column number must be in range from 1 to %d", PG_INT16_MAX), + parser_errposition(@3))); + + n->subtype = AT_SetStatistics; + n->num = (int16) $3; + n->def = (Node *) makeInteger($6); + $$ = (Node *)n; + } /* ALTER TABLE ALTER [COLUMN] SET ( column_parameter = value [, ... ] ) */ | ALTER opt_column ColId SET reloptions { diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c index 607fe9db79..fcbb683a99 100644 --- a/src/backend/utils/cache/syscache.c +++ b/src/backend/utils/cache/syscache.c @@ -1256,6 +1256,52 @@ SearchSysCacheExistsAttName(Oid relid, const char *attname) } +/* + * SearchSysCacheAttNum + * + * This routine is equivalent to SearchSysCache on the ATTNUM cache, + * except that it will return NULL if the found attribute is marked + * attisdropped. This is convenient for callers that want to act as + * though dropped attributes don't exist. + */ +HeapTuple +SearchSysCacheAttNum(Oid relid, int16 attnum) +{ + HeapTuple tuple; + + tuple = SearchSysCache2(ATTNUM, + ObjectIdGetDatum(relid), + Int16GetDatum(attnum)); + if (!HeapTupleIsValid(tuple)) + return NULL; + if (((Form_pg_attribute) GETSTRUCT(tuple))->attisdropped) + { + ReleaseSysCache(tuple); + return NULL; + } + return tuple; +} + +/* + * SearchSysCacheCopyAttNum + * + * As above, an attisdropped-aware version of SearchSysCacheCopy. + */ +HeapTuple +SearchSysCacheCopyAttNum(Oid relid, int16 attnum) +{ + HeapTuple tuple, + newtuple; + + tuple = SearchSysCacheAttNum(relid, attnum); + if (!HeapTupleIsValid(tuple)) + return NULL; + newtuple = heap_copytuple(tuple); + ReleaseSysCache(tuple); + return newtuple; +} + + /* * SysCacheGetAttr * diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index f6049cc9e5..6fb9bdd063 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -1742,6 +1742,7 @@ describeOneTableDetails(const char *schemaname, { headers[cols++] = gettext_noop("Storage"); if (tableinfo.relkind == RELKIND_RELATION || + tableinfo.relkind == RELKIND_INDEX || tableinfo.relkind == RELKIND_MATVIEW || tableinfo.relkind == RELKIND_FOREIGN_TABLE || tableinfo.relkind == RELKIND_PARTITIONED_TABLE) @@ -1841,6 +1842,7 @@ describeOneTableDetails(const char *schemaname, /* Statistics target, if the relkind supports this feature */ if (tableinfo.relkind == RELKIND_RELATION || + tableinfo.relkind == RELKIND_INDEX || tableinfo.relkind == RELKIND_MATVIEW || tableinfo.relkind == RELKIND_FOREIGN_TABLE || tableinfo.relkind == RELKIND_PARTITIONED_TABLE) diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index 7959f9ac16..2ab8809fa5 100644 --- a/src/bin/psql/tab-complete.c +++ b/src/bin/psql/tab-complete.c @@ -1644,7 +1644,10 @@ psql_completion(const char *text, int start, int end) "UNION SELECT 'ALL IN TABLESPACE'"); /* ALTER INDEX */ else if (Matches3("ALTER", "INDEX", MatchAny)) - COMPLETE_WITH_LIST4("OWNER TO", "RENAME TO", "SET", "RESET"); + COMPLETE_WITH_LIST5("ALTER COLUMN", "OWNER TO", "RENAME TO", "SET", "RESET"); + /* ALTER INDEX ALTER COLUMN */ + else if (Matches6("ALTER", "INDEX", MatchAny, "ALTER", "COLUMN", MatchAny)) + COMPLETE_WITH_CONST("SET STATISTICS"); /* ALTER INDEX SET */ else if (Matches4("ALTER", "INDEX", MatchAny, "SET")) COMPLETE_WITH_LIST2("(", "TABLESPACE"); diff --git 
a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index ef6753e31a..3171815320 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -1777,6 +1777,8 @@ typedef struct AlterTableCmd /* one subcommand of an ALTER TABLE */ AlterTableType subtype; /* Type of table alteration to apply */ char *name; /* column, constraint, or trigger to act on, * or tablespace */ + int16 num; /* attribute number for columns referenced + * by number */ RoleSpec *newowner; Node *def; /* definition of new column, index, * constraint, or parent table */ diff --git a/src/include/utils/syscache.h b/src/include/utils/syscache.h index 8352b40f4e..8a92ea27ac 100644 --- a/src/include/utils/syscache.h +++ b/src/include/utils/syscache.h @@ -131,6 +131,9 @@ extern HeapTuple SearchSysCacheAttName(Oid relid, const char *attname); extern HeapTuple SearchSysCacheCopyAttName(Oid relid, const char *attname); extern bool SearchSysCacheExistsAttName(Oid relid, const char *attname); +extern HeapTuple SearchSysCacheAttNum(Oid relid, int16 attnum); +extern HeapTuple SearchSysCacheCopyAttNum(Oid relid, int16 attnum); + extern Datum SysCacheGetAttr(int cacheId, HeapTuple tup, AttrNumber attributeNumber, bool *isNull); diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index ed03cb9c63..0f36423163 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -94,6 +94,30 @@ SELECT * FROM tmp; | 4 | name | text | 4.1 | 4.1 | 2 | ((4.1,4.1),(3.1,3.1)) | Mon May 01 00:30:30 1995 PDT | c | {"Mon May 01 00:30:30 1995 PDT","Mon Aug 24 14:43:07 1992 PDT","Wed Dec 31 16:00:00 1969 PST"} | 314159 | (1,1) | 512 | 1 2 3 4 5 6 7 8 | magnetic disk | (1.1,1.1) | [(4.1,4.1),(3.1,3.1)] | ((0,2),(4.1,4.1),(3.1,3.1)) | (4.1,4.1),(3.1,3.1) | ["Wed Dec 31 16:00:00 1969 PST" "infinity"] | Thu Jan 01 00:00:00 1970 | @ 1 hour 10 secs | {1,2,3,4} | {1,2,3,4} | {1,2,3,4} (1 row) +CREATE INDEX tmp_idx ON tmp (a, (d + e), b); +ALTER INDEX tmp_idx ALTER COLUMN 0 SET STATISTICS 1000; +ERROR: column number must be in range from 1 to 32767 +LINE 1: ALTER INDEX tmp_idx ALTER COLUMN 0 SET STATISTICS 1000; + ^ +ALTER INDEX tmp_idx ALTER COLUMN 1 SET STATISTICS 1000; +ERROR: cannot alter statistics on non-expression column "a" of index "tmp_idx" +HINT: Alter statistics on table column instead. +ALTER INDEX tmp_idx ALTER COLUMN 2 SET STATISTICS 1000; +\d+ tmp_idx + Index "public.tmp_idx" + Column | Type | Definition | Storage | Stats target +--------+------------------+------------+---------+-------------- + a | integer | a | plain | + expr | double precision | (d + e) | plain | 1000 + b | cstring | b | plain | +btree, for table "public.tmp" + +ALTER INDEX tmp_idx ALTER COLUMN 3 SET STATISTICS 1000; +ERROR: cannot alter statistics on non-expression column "b" of index "tmp_idx" +HINT: Alter statistics on table column instead. 
+ALTER INDEX tmp_idx ALTER COLUMN 4 SET STATISTICS 1000; +ERROR: column number 4 of relation "tmp_idx" does not exist +ALTER INDEX tmp_idx ALTER COLUMN 2 SET STATISTICS -1; DROP TABLE tmp; -- -- rename - check on both non-temp and temp tables diff --git a/src/test/regress/expected/create_index.out b/src/test/regress/expected/create_index.out index 064adb4640..8450f2463e 100644 --- a/src/test/regress/expected/create_index.out +++ b/src/test/regress/expected/create_index.out @@ -2324,10 +2324,10 @@ DROP TABLE array_gin_test; CREATE INDEX gin_relopts_test ON array_index_op_test USING gin (i) WITH (FASTUPDATE=on, GIN_PENDING_LIST_LIMIT=128); \d+ gin_relopts_test - Index "public.gin_relopts_test" - Column | Type | Definition | Storage ---------+---------+------------+--------- - i | integer | i | plain + Index "public.gin_relopts_test" + Column | Type | Definition | Storage | Stats target +--------+---------+------------+---------+-------------- + i | integer | i | plain | gin, for table "public.array_index_op_test" Options: fastupdate=on, gin_pending_list_limit=128 diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index 9a20dd141a..e6f6669880 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -142,6 +142,22 @@ INSERT INTO tmp (a, b, c, d, e, f, g, h, i, j, k, l, m, n, p, q, r, s, t, u, SELECT * FROM tmp; +CREATE INDEX tmp_idx ON tmp (a, (d + e), b); + +ALTER INDEX tmp_idx ALTER COLUMN 0 SET STATISTICS 1000; + +ALTER INDEX tmp_idx ALTER COLUMN 1 SET STATISTICS 1000; + +ALTER INDEX tmp_idx ALTER COLUMN 2 SET STATISTICS 1000; + +\d+ tmp_idx + +ALTER INDEX tmp_idx ALTER COLUMN 3 SET STATISTICS 1000; + +ALTER INDEX tmp_idx ALTER COLUMN 4 SET STATISTICS 1000; + +ALTER INDEX tmp_idx ALTER COLUMN 2 SET STATISTICS -1; + DROP TABLE tmp; From ca4e20fde87d182aa699c5384fb1b6091f6e5f79 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 6 Sep 2017 17:32:40 -0400 Subject: [PATCH 0113/1087] Merge duplicative code for \sf/\sv, \ef/\ev in psql/command.c. Saves ~150 lines, costs little. 
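The shape of the merge is the classic one for de-duplicating twin functions: keep a single control flow and reduce the per-object differences (minimum server version, message wording, object type) to data selected up front by a flag. A contrived, self-contained C sketch of that shape -- the version cutoffs mirror the commit, but show_definition and server_version are hypothetical names, not psql's:

    #include <stdio.h>

    typedef enum { OBJ_FUNCTION, OBJ_VIEW } ObjKind;

    static int server_version = 90600; /* stand-in for pset.sversion */

    static int
    show_definition(ObjKind kind, const char *name)
    {
        /* the per-kind differences live in data ... */
        int         min_version = (kind == OBJ_FUNCTION) ? 80400 : 70400;
        const char *what = (kind == OBJ_FUNCTION) ? "function" : "view";

        if (server_version < min_version)
        {
            fprintf(stderr, "server too old to show %s definitions\n", what);
            return 1;
        }
        /* ... while the lookup-and-print path is written only once */
        printf("-- definition of %s \"%s\" goes here\n", what, name);
        return 0;
    }

    int
    main(void)
    {
        return show_definition(OBJ_VIEW, "pg_stat_activity");
    }
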
Fabien Coelho, reviewed by Victor Drobny Discussion: https://postgr.es/m/alpine.DEB.2.20.1703311958001.14355@lancre --- src/bin/psql/command.c | 322 +++++++++++------------------------------ 1 file changed, 87 insertions(+), 235 deletions(-) diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c index fe0b83ea24..041b5e0c87 100644 --- a/src/bin/psql/command.c +++ b/src/bin/psql/command.c @@ -71,10 +71,8 @@ static backslashResult exec_command_d(PsqlScanState scan_state, bool active_bran const char *cmd); static backslashResult exec_command_edit(PsqlScanState scan_state, bool active_branch, PQExpBuffer query_buf, PQExpBuffer previous_buf); -static backslashResult exec_command_ef(PsqlScanState scan_state, bool active_branch, - PQExpBuffer query_buf); -static backslashResult exec_command_ev(PsqlScanState scan_state, bool active_branch, - PQExpBuffer query_buf); +static backslashResult exec_command_ef_ev(PsqlScanState scan_state, bool active_branch, + PQExpBuffer query_buf, bool is_func); static backslashResult exec_command_echo(PsqlScanState scan_state, bool active_branch, const char *cmd); static backslashResult exec_command_elif(PsqlScanState scan_state, ConditionalStack cstack, @@ -115,10 +113,8 @@ static backslashResult exec_command_s(PsqlScanState scan_state, bool active_bran static backslashResult exec_command_set(PsqlScanState scan_state, bool active_branch); static backslashResult exec_command_setenv(PsqlScanState scan_state, bool active_branch, const char *cmd); -static backslashResult exec_command_sf(PsqlScanState scan_state, bool active_branch, - const char *cmd); -static backslashResult exec_command_sv(PsqlScanState scan_state, bool active_branch, - const char *cmd); +static backslashResult exec_command_sf_sv(PsqlScanState scan_state, bool active_branch, + const char *cmd, bool is_func); static backslashResult exec_command_t(PsqlScanState scan_state, bool active_branch); static backslashResult exec_command_T(PsqlScanState scan_state, bool active_branch); static backslashResult exec_command_timing(PsqlScanState scan_state, bool active_branch); @@ -319,9 +315,9 @@ exec_command(const char *cmd, status = exec_command_edit(scan_state, active_branch, query_buf, previous_buf); else if (strcmp(cmd, "ef") == 0) - status = exec_command_ef(scan_state, active_branch, query_buf); + status = exec_command_ef_ev(scan_state, active_branch, query_buf, true); else if (strcmp(cmd, "ev") == 0) - status = exec_command_ev(scan_state, active_branch, query_buf); + status = exec_command_ef_ev(scan_state, active_branch, query_buf, false); else if (strcmp(cmd, "echo") == 0 || strcmp(cmd, "qecho") == 0) status = exec_command_echo(scan_state, active_branch, cmd); else if (strcmp(cmd, "elif") == 0) @@ -380,9 +376,9 @@ exec_command(const char *cmd, else if (strcmp(cmd, "setenv") == 0) status = exec_command_setenv(scan_state, active_branch, cmd); else if (strcmp(cmd, "sf") == 0 || strcmp(cmd, "sf+") == 0) - status = exec_command_sf(scan_state, active_branch, cmd); + status = exec_command_sf_sv(scan_state, active_branch, cmd, true); else if (strcmp(cmd, "sv") == 0 || strcmp(cmd, "sv+") == 0) - status = exec_command_sv(scan_state, active_branch, cmd); + status = exec_command_sf_sv(scan_state, active_branch, cmd, false); else if (strcmp(cmd, "t") == 0) status = exec_command_t(scan_state, active_branch); else if (strcmp(cmd, "T") == 0) @@ -979,28 +975,34 @@ exec_command_edit(PsqlScanState scan_state, bool active_branch, } /* - * \ef -- edit the named function, or present a blank CREATE FUNCTION - * template if 
no argument is given + * \ef/\ev -- edit the named function/view, or + * present a blank CREATE FUNCTION/VIEW template if no argument is given */ static backslashResult -exec_command_ef(PsqlScanState scan_state, bool active_branch, - PQExpBuffer query_buf) +exec_command_ef_ev(PsqlScanState scan_state, bool active_branch, + PQExpBuffer query_buf, bool is_func) { backslashResult status = PSQL_CMD_SKIP_LINE; if (active_branch) { - char *func = psql_scan_slash_option(scan_state, - OT_WHOLE_LINE, NULL, true); + char *obj_desc = psql_scan_slash_option(scan_state, + OT_WHOLE_LINE, + NULL, true); int lineno = -1; - if (pset.sversion < 80400) + if (pset.sversion < (is_func ? 80400 : 70400)) { char sverbuf[32]; - psql_error("The server (version %s) does not support editing function source.\n", - formatPGVersionNumber(pset.sversion, false, - sverbuf, sizeof(sverbuf))); + formatPGVersionNumber(pset.sversion, false, + sverbuf, sizeof(sverbuf)); + if (is_func) + psql_error("The server (version %s) does not support editing function source.\n", + sverbuf); + else + psql_error("The server (version %s) does not support editing view definitions.\n", + sverbuf); status = PSQL_CMD_ERROR; } else if (!query_buf) @@ -1010,36 +1012,44 @@ exec_command_ef(PsqlScanState scan_state, bool active_branch, } else { - Oid foid = InvalidOid; + Oid obj_oid = InvalidOid; + EditableObjectType eot = is_func ? EditableFunction : EditableView; - lineno = strip_lineno_from_objdesc(func); + lineno = strip_lineno_from_objdesc(obj_desc); if (lineno == 0) { /* error already reported */ status = PSQL_CMD_ERROR; } - else if (!func) + else if (!obj_desc) { /* set up an empty command to fill in */ - printfPQExpBuffer(query_buf, - "CREATE FUNCTION ( )\n" - " RETURNS \n" - " LANGUAGE \n" - " -- common options: IMMUTABLE STABLE STRICT SECURITY DEFINER\n" - "AS $function$\n" - "\n$function$\n"); + resetPQExpBuffer(query_buf); + if (is_func) + appendPQExpBufferStr(query_buf, + "CREATE FUNCTION ( )\n" + " RETURNS \n" + " LANGUAGE \n" + " -- common options: IMMUTABLE STABLE STRICT SECURITY DEFINER\n" + "AS $function$\n" + "\n$function$\n"); + else + appendPQExpBufferStr(query_buf, + "CREATE VIEW AS\n" + " SELECT \n" + " -- something...\n"); } - else if (!lookup_object_oid(EditableFunction, func, &foid)) + else if (!lookup_object_oid(eot, obj_desc, &obj_oid)) { /* error already reported */ status = PSQL_CMD_ERROR; } - else if (!get_create_object_cmd(EditableFunction, foid, query_buf)) + else if (!get_create_object_cmd(eot, obj_oid, query_buf)) { /* error already reported */ status = PSQL_CMD_ERROR; } - else if (lineno > 0) + else if (is_func && lineno > 0) { /* * lineno "1" should correspond to the first line of the @@ -1078,89 +1088,8 @@ exec_command_ef(PsqlScanState scan_state, bool active_branch, status = PSQL_CMD_NEWEDIT; } - if (func) - free(func); - } - else - ignore_slash_whole_line(scan_state); - - return status; -} - -/* - * \ev -- edit the named view, or present a blank CREATE VIEW - * template if no argument is given - */ -static backslashResult -exec_command_ev(PsqlScanState scan_state, bool active_branch, - PQExpBuffer query_buf) -{ - backslashResult status = PSQL_CMD_SKIP_LINE; - - if (active_branch) - { - char *view = psql_scan_slash_option(scan_state, - OT_WHOLE_LINE, NULL, true); - int lineno = -1; - - if (pset.sversion < 70400) - { - char sverbuf[32]; - - psql_error("The server (version %s) does not support editing view definitions.\n", - formatPGVersionNumber(pset.sversion, false, - sverbuf, sizeof(sverbuf))); - status = 
PSQL_CMD_ERROR; - } - else if (!query_buf) - { - psql_error("no query buffer\n"); - status = PSQL_CMD_ERROR; - } - else - { - Oid view_oid = InvalidOid; - - lineno = strip_lineno_from_objdesc(view); - if (lineno == 0) - { - /* error already reported */ - status = PSQL_CMD_ERROR; - } - else if (!view) - { - /* set up an empty command to fill in */ - printfPQExpBuffer(query_buf, - "CREATE VIEW AS\n" - " SELECT \n" - " -- something...\n"); - } - else if (!lookup_object_oid(EditableView, view, &view_oid)) - { - /* error already reported */ - status = PSQL_CMD_ERROR; - } - else if (!get_create_object_cmd(EditableView, view_oid, query_buf)) - { - /* error already reported */ - status = PSQL_CMD_ERROR; - } - } - - if (status != PSQL_CMD_ERROR) - { - bool edited = false; - - if (!do_edit(NULL, query_buf, lineno, &edited)) - status = PSQL_CMD_ERROR; - else if (!edited) - puts(_("No changes")); - else - status = PSQL_CMD_NEWEDIT; - } - - if (view) - free(view); + if (obj_desc) + free(obj_desc); } else ignore_slash_whole_line(scan_state); @@ -2234,43 +2163,53 @@ exec_command_setenv(PsqlScanState scan_state, bool active_branch, } /* - * \sf -- show a function's source code + * \sf/\sv -- show a function/view's source code */ static backslashResult -exec_command_sf(PsqlScanState scan_state, bool active_branch, const char *cmd) +exec_command_sf_sv(PsqlScanState scan_state, bool active_branch, + const char *cmd, bool is_func) { backslashResult status = PSQL_CMD_SKIP_LINE; if (active_branch) { - bool show_linenumbers = (strcmp(cmd, "sf+") == 0); - PQExpBuffer func_buf; - char *func; - Oid foid = InvalidOid; + bool show_linenumbers = (strchr(cmd, '+') != NULL); + PQExpBuffer buf; + char *obj_desc; + Oid obj_oid = InvalidOid; + EditableObjectType eot = is_func ? EditableFunction : EditableView; - func_buf = createPQExpBuffer(); - func = psql_scan_slash_option(scan_state, - OT_WHOLE_LINE, NULL, true); - if (pset.sversion < 80400) + buf = createPQExpBuffer(); + obj_desc = psql_scan_slash_option(scan_state, + OT_WHOLE_LINE, NULL, true); + if (pset.sversion < (is_func ? 
80400 : 70400)) { char sverbuf[32]; - psql_error("The server (version %s) does not support showing function source.\n", - formatPGVersionNumber(pset.sversion, false, - sverbuf, sizeof(sverbuf))); + formatPGVersionNumber(pset.sversion, false, + sverbuf, sizeof(sverbuf)); + if (is_func) + psql_error("The server (version %s) does not support showing function source.\n", + sverbuf); + else + psql_error("The server (version %s) does not support showing view definitions.\n", + sverbuf); status = PSQL_CMD_ERROR; } - else if (!func) + else if (!obj_desc) { - psql_error("function name is required\n"); + if (is_func) + psql_error("function name is required\n"); + else + psql_error("view name is required\n"); status = PSQL_CMD_ERROR; } - else if (!lookup_object_oid(EditableFunction, func, &foid)) + else if (!lookup_object_oid(eot, obj_desc, &obj_oid)) { /* error already reported */ status = PSQL_CMD_ERROR; } - else if (!get_create_object_cmd(EditableFunction, foid, func_buf)) + else if (!get_create_object_cmd(eot, obj_oid, buf)) { /* error already reported */ status = PSQL_CMD_ERROR; @@ -2284,7 +2223,7 @@ exec_command_sf(PsqlScanState scan_state, bool active_branch, const char *cmd) if (pset.queryFout == stdout) { /* count lines in function to see if pager is needed */ - int lineno = count_lines_in_buf(func_buf); + int lineno = count_lines_in_buf(buf); output = PageOutput(lineno, &(pset.popt.topt)); is_pager = true; @@ -2299,115 +2238,28 @@ exec_command_sf(PsqlScanState scan_state, bool active_branch, const char *cmd) if (show_linenumbers) { /* - * lineno "1" should correspond to the first line of the - * function body. We expect that pg_get_functiondef() will - * emit that on a line beginning with "AS ", and that there - * can be no such line before the real start of the function - * body. + * For functions, lineno "1" should correspond to the first + * line of the function body. We expect that + * pg_get_functiondef() will emit that on a line beginning + * with "AS ", and that there can be no such line before the + * real start of the function body. 
*/ - print_with_linenumbers(output, func_buf->data, "AS "); - } - else - { - /* just send the function definition to output */ - fputs(func_buf->data, output); - } - - if (is_pager) - ClosePager(output); - } - - if (func) - free(func); - destroyPQExpBuffer(func_buf); - } - else - ignore_slash_whole_line(scan_state); - - return status; -} - -/* - * \sv -- show a view's source code - */ -static backslashResult -exec_command_sv(PsqlScanState scan_state, bool active_branch, const char *cmd) -{ - backslashResult status = PSQL_CMD_SKIP_LINE; - - if (active_branch) - { - bool show_linenumbers = (strcmp(cmd, "sv+") == 0); - PQExpBuffer view_buf; - char *view; - Oid view_oid = InvalidOid; - - view_buf = createPQExpBuffer(); - view = psql_scan_slash_option(scan_state, - OT_WHOLE_LINE, NULL, true); - if (pset.sversion < 70400) - { - char sverbuf[32]; - - psql_error("The server (version %s) does not support showing view definitions.\n", - formatPGVersionNumber(pset.sversion, false, - sverbuf, sizeof(sverbuf))); - status = PSQL_CMD_ERROR; - } - else if (!view) - { - psql_error("view name is required\n"); - status = PSQL_CMD_ERROR; - } - else if (!lookup_object_oid(EditableView, view, &view_oid)) - { - /* error already reported */ - status = PSQL_CMD_ERROR; - } - else if (!get_create_object_cmd(EditableView, view_oid, view_buf)) - { - /* error already reported */ - status = PSQL_CMD_ERROR; - } - else - { - FILE *output; - bool is_pager; - - /* Select output stream: stdout, pager, or file */ - if (pset.queryFout == stdout) - { - /* count lines in view to see if pager is needed */ - int lineno = count_lines_in_buf(view_buf); - - output = PageOutput(lineno, &(pset.popt.topt)); - is_pager = true; - } - else - { - /* use previously set output file, without pager */ - output = pset.queryFout; - is_pager = false; - } - - if (show_linenumbers) - { - /* add line numbers, numbering all lines */ - print_with_linenumbers(output, view_buf->data, NULL); + print_with_linenumbers(output, buf->data, + is_func ? "AS " : NULL); } else { - /* just send the view definition to output */ - fputs(view_buf->data, output); + /* just send the definition to output */ + fputs(buf->data, output); } if (is_pager) ClosePager(output); } - if (view) - free(view); - destroyPQExpBuffer(view_buf); + if (obj_desc) + free(obj_desc); + destroyPQExpBuffer(buf); } else ignore_slash_whole_line(scan_state); From 793a89c1966733c84edacaa25ce47b72a75f3afb Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 6 Sep 2017 17:52:08 -0400 Subject: [PATCH 0114/1087] Sync function prototype with its actual definition. Use the same parameter names as in the definition. Cosmetic fix only. 
Tatsuro Yamada Discussion: https://postgr.es/m/58E711AF.7070305@lab.ntt.co.jp --- contrib/postgres_fdw/postgres_fdw.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index 32dc4e6301..fb65e2eb20 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -278,7 +278,7 @@ static void postgresGetForeignPaths(PlannerInfo *root, RelOptInfo *baserel, Oid foreigntableid); static ForeignScan *postgresGetForeignPlan(PlannerInfo *root, - RelOptInfo *baserel, + RelOptInfo *foreignrel, Oid foreigntableid, ForeignPath *best_path, List *tlist, From f06588a8e6d1e2bf56f9dfa58d97e7956050ddc7 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Thu, 7 Sep 2017 04:56:34 -0700 Subject: [PATCH 0115/1087] Exclude special values in recovery_target_time MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit recovery_target_time accepts timestamp input, though does not allow use of special values, e.g. “today”. Report a useful error message for these cases. Reported-by: Piotr Stefaniak Author: Simon Riggs Discussion: https://postgr.es/m/CANP8+jJdKA+BkkYLWz9zAm16Y0s2ExBv0WfpAwXdTpPfWnA9Bg@mail.gmail.com --- src/backend/access/transam/xlog.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index df4843f409..442341a707 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -5265,6 +5265,18 @@ readRecoveryCommandFile(void) { recoveryTarget = RECOVERY_TARGET_TIME; + if (strcmp(item->value, "epoch") == 0 || + strcmp(item->value, "infinity") == 0 || + strcmp(item->value, "-infinity") == 0 || + strcmp(item->value, "now") == 0 || + strcmp(item->value, "today") == 0 || + strcmp(item->value, "tomorrow") == 0 || + strcmp(item->value, "yesterday") == 0) + ereport(FATAL, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("recovery_target_time is not a valid timestamp: \"%s\"", + item->value))); + /* * Convert the time string given by the user to TimestampTz form. */ From bfea92563c511931bc98163ec70ba2809b14afa1 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 7 Sep 2017 08:50:01 -0400 Subject: [PATCH 0116/1087] Further marginal hacking on generic atomic ops. In the generic atomic ops that rely on a loop around a CAS primitive, there's no need to force the initial read of the "old" value to be atomic. In the typically-rare case that we get a torn value, that simply means that the first CAS attempt will fail; but it will update "old" to the atomically-read value, so the next attempt has a chance of succeeding. It was already being done that way in pg_atomic_exchange_u64_impl(), but let's duplicate the approach in the rest. (Given the current coding of the pg_atomic_read functions, this change is a no-op anyway on popular platforms; it only makes a difference where pg_atomic_read_u64_impl() is implemented as a CAS.) In passing, also remove unnecessary take-a-pointer-and-dereference-it coding in the pg_atomic_read functions. That seems to have been based on a misunderstanding of what the C standard requires. What actually matters is that the pointer be declared as pointing to volatile, which it is. I don't believe this will change the assembly code at all on x86 platforms (even ignoring the likelihood that these implementations get overridden by others); but it may help on less-mainstream CPUs. 
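The "compare/exchange with 0" trick that the revised comment describes can be shown in isolation with C11 atomics (a sketch of the idea, not the pg_atomic layer itself):

    #include <stdatomic.h>
    #include <stdint.h>

    /*
     * Emulate an atomic 64-bit read on a platform that offers only CAS.
     * If *ptr happens to be 0 the CAS "succeeds" and stores 0, which
     * changes nothing; otherwise it fails and writes the value it found
     * into "old". Either way "old" ends up holding an atomic snapshot
     * of *ptr.
     */
    static uint64_t
    read_u64_via_cas(_Atomic uint64_t *ptr)
    {
        uint64_t old = 0;

        atomic_compare_exchange_strong(ptr, &old, 0);
        return old;
    }
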
Discussion: https://postgr.es/m/13707.1504718238@sss.pgh.pa.us --- src/include/port/atomics/generic.h | 30 ++++++++++++++---------------- 1 file changed, 14 insertions(+), 16 deletions(-) diff --git a/src/include/port/atomics/generic.h b/src/include/port/atomics/generic.h index 75ffaf6e87..c7566919de 100644 --- a/src/include/port/atomics/generic.h +++ b/src/include/port/atomics/generic.h @@ -45,7 +45,7 @@ typedef pg_atomic_uint32 pg_atomic_flag; static inline uint32 pg_atomic_read_u32_impl(volatile pg_atomic_uint32 *ptr) { - return *(&ptr->value); + return ptr->value; } #endif @@ -170,7 +170,7 @@ static inline uint32 pg_atomic_exchange_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 xchg_) { uint32 old; - old = pg_atomic_read_u32_impl(ptr); + old = ptr->value; /* ok if read is not atomic */ while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, xchg_)) /* skip */; return old; @@ -183,7 +183,7 @@ static inline uint32 pg_atomic_fetch_add_u32_impl(volatile pg_atomic_uint32 *ptr, int32 add_) { uint32 old; - old = pg_atomic_read_u32_impl(ptr); + old = ptr->value; /* ok if read is not atomic */ while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, old + add_)) /* skip */; return old; @@ -205,7 +205,7 @@ static inline uint32 pg_atomic_fetch_and_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 and_) { uint32 old; - old = pg_atomic_read_u32_impl(ptr); + old = ptr->value; /* ok if read is not atomic */ while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, old & and_)) /* skip */; return old; @@ -218,7 +218,7 @@ static inline uint32 pg_atomic_fetch_or_u32_impl(volatile pg_atomic_uint32 *ptr, uint32 or_) { uint32 old; - old = pg_atomic_read_u32_impl(ptr); + old = ptr->value; /* ok if read is not atomic */ while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, old | or_)) /* skip */; return old; @@ -249,7 +249,7 @@ static inline uint64 pg_atomic_exchange_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 xchg_) { uint64 old; - old = ptr->value; + old = ptr->value; /* ok if read is not atomic */ while (!pg_atomic_compare_exchange_u64_impl(ptr, &old, xchg_)) /* skip */; return old; @@ -299,12 +299,10 @@ static inline uint64 pg_atomic_read_u64_impl(volatile pg_atomic_uint64 *ptr) { /* - * On this platform aligned 64bit reads are guaranteed to be atomic, - * except if using the fallback implementation, where can't guarantee the - * required alignment. + * On this platform aligned 64-bit reads are guaranteed to be atomic. */ AssertPointerAlignment(ptr, 8); - return *(&ptr->value); + return ptr->value; } #else @@ -315,10 +313,10 @@ pg_atomic_read_u64_impl(volatile pg_atomic_uint64 *ptr) uint64 old = 0; /* - * 64 bit reads aren't safe on all platforms. In the generic + * 64-bit reads aren't atomic on all platforms. In the generic * implementation implement them as a compare/exchange with 0. That'll - * fail or succeed, but always return the old value. Possible might store - * a 0, but only if the prev. value also was a 0 - i.e. harmless. + * fail or succeed, but always return the old value. Possibly might store + * a 0, but only if the previous value also was a 0 - i.e. harmless. 
*/ pg_atomic_compare_exchange_u64_impl(ptr, &old, 0); @@ -342,7 +340,7 @@ static inline uint64 pg_atomic_fetch_add_u64_impl(volatile pg_atomic_uint64 *ptr, int64 add_) { uint64 old; - old = pg_atomic_read_u64_impl(ptr); + old = ptr->value; /* ok if read is not atomic */ while (!pg_atomic_compare_exchange_u64_impl(ptr, &old, old + add_)) /* skip */; return old; @@ -364,7 +362,7 @@ static inline uint64 pg_atomic_fetch_and_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 and_) { uint64 old; - old = pg_atomic_read_u64_impl(ptr); + old = ptr->value; /* ok if read is not atomic */ while (!pg_atomic_compare_exchange_u64_impl(ptr, &old, old & and_)) /* skip */; return old; @@ -377,7 +375,7 @@ static inline uint64 pg_atomic_fetch_or_u64_impl(volatile pg_atomic_uint64 *ptr, uint64 or_) { uint64 old; - old = pg_atomic_read_u64_impl(ptr); + old = ptr->value; /* ok if read is not atomic */ while (!pg_atomic_compare_exchange_u64_impl(ptr, &old, old | or_)) /* skip */; return old; From 6eb52da3948dc8bc7c8a61cbacac14823b670c58 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 7 Sep 2017 09:49:55 -0400 Subject: [PATCH 0117/1087] Fix handling of savepoint commands within multi-statement Query strings. Issuing a savepoint-related command in a Query message that contains multiple SQL statements led to a FATAL exit with a complaint about "unexpected state STARTED". This is a shortcoming of commit 4f896dac1, which attempted to prevent such misbehaviors in multi-statement strings; its quick hack of marking the individual statements as "not top-level" does the wrong thing in this case, and isn't a very accurate description of the situation anyway. To fix, let's introduce into xact.c an explicit model of what happens for multi-statement Query strings. This is an "implicit transaction block in progress" state, which for many purposes works like the normal TBLOCK_INPROGRESS state --- in particular, IsTransactionBlock returns true, causing the desired result that PreventTransactionChain will throw error. But in case of error abort it works like TBLOCK_STARTED, allowing the transaction to be cancelled without need for an explicit ROLLBACK command. Commit 4f896dac1 is reverted in toto, so that we go back to treating the individual statements as "top level". We could have left it as-is, but this allows sharpening the error message for PreventTransactionChain calls inside functions. Except for getting a normal error instead of a FATAL exit for savepoint commands, this patch should result in no user-visible behavioral change (other than that one error message rewording). There are some things we might want to do in the line of changing the appearance or wording of error and warning messages around this behavior, which would be much simpler to do now that it's an explicitly modeled state. But I haven't done them here. Although this fixes a long-standing bug, no backpatch. The consequences of the bug don't seem severe enough to justify the risk that this commit itself creates some new issue. Patch by me, but it owes something to previous investigation by Takayuki Tsunakawa, who also reported the bug in the first place. Also thanks to Michael Paquier for reviewing. 
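Concretely, once the implicit-block state exists, a savepoint command buried in a multi-statement Query fails cleanly. The following is a sketch of the expected client-visible behavior (psql's backslash-quoted semicolons keep the statements in one Query message; the error text is the one added by this patch):

    SELECT 1\; SAVEPOINT one\; SELECT 2;
    ERROR:  SAVEPOINT can only be used in transaction blocks

whereas starting the string with an explicit BEGIN converts the implicit block into a regular one, so the savepoint becomes legal:

    BEGIN\; SELECT 1\; SAVEPOINT one\; ROLLBACK TO one\; COMMIT;
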
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F6BE40D@G01JPEXMBYT05 --- src/backend/access/transam/xact.c | 160 +++++++++++++++++++-- src/backend/tcop/postgres.c | 57 +++++--- src/include/access/xact.h | 2 + src/test/regress/expected/transactions.out | 84 +++++++++++ src/test/regress/sql/transactions.sql | 54 +++++++ 5 files changed, 320 insertions(+), 37 deletions(-) diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index 5e7e812200..bc07354f9a 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -145,6 +145,7 @@ typedef enum TBlockState /* transaction block states */ TBLOCK_BEGIN, /* starting transaction block */ TBLOCK_INPROGRESS, /* live transaction */ + TBLOCK_IMPLICIT_INPROGRESS, /* live transaction after implicit BEGIN */ TBLOCK_PARALLEL_INPROGRESS, /* live transaction inside parallel worker */ TBLOCK_END, /* COMMIT received */ TBLOCK_ABORT, /* failed xact, awaiting ROLLBACK */ @@ -2700,6 +2701,7 @@ StartTransactionCommand(void) * previous CommitTransactionCommand.) */ case TBLOCK_INPROGRESS: + case TBLOCK_IMPLICIT_INPROGRESS: case TBLOCK_SUBINPROGRESS: break; @@ -2790,6 +2792,7 @@ CommitTransactionCommand(void) * counter and return. */ case TBLOCK_INPROGRESS: + case TBLOCK_IMPLICIT_INPROGRESS: case TBLOCK_SUBINPROGRESS: CommandCounterIncrement(); break; @@ -3014,10 +3017,12 @@ AbortCurrentTransaction(void) break; /* - * if we aren't in a transaction block, we just do the basic abort - * & cleanup transaction. + * If we aren't in a transaction block, we just do the basic abort + * & cleanup transaction. For this purpose, we treat an implicit + * transaction block as if it were a simple statement. */ case TBLOCK_STARTED: + case TBLOCK_IMPLICIT_INPROGRESS: AbortTransaction(); CleanupTransaction(); s->blockState = TBLOCK_DEFAULT; @@ -3148,9 +3153,8 @@ AbortCurrentTransaction(void) * completes). Subtransactions are verboten too. * * isTopLevel: passed down from ProcessUtility to determine whether we are - * inside a function or multi-query querystring. (We will always fail if - * this is false, but it's convenient to centralize the check here instead of - * making callers do it.) + * inside a function. (We will always fail if this is false, but it's + * convenient to centralize the check here instead of making callers do it.) * stmtType: statement type name, for error messages. */ void @@ -3183,8 +3187,7 @@ PreventTransactionChain(bool isTopLevel, const char *stmtType) ereport(ERROR, (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION), /* translator: %s represents an SQL statement name */ - errmsg("%s cannot be executed from a function or multi-command string", - stmtType))); + errmsg("%s cannot be executed from a function", stmtType))); /* If we got past IsTransactionBlock test, should be in default state */ if (CurrentTransactionState->blockState != TBLOCK_DEFAULT && @@ -3428,6 +3431,15 @@ BeginTransactionBlock(void) s->blockState = TBLOCK_BEGIN; break; + /* + * BEGIN converts an implicit transaction block to a regular one. + * (Note that we allow this even if we've already done some + * commands, which is a bit odd but matches historical practice.) + */ + case TBLOCK_IMPLICIT_INPROGRESS: + s->blockState = TBLOCK_BEGIN; + break; + /* * Already a transaction block in progress. */ @@ -3503,7 +3515,8 @@ PrepareTransactionBlock(char *gid) * ignore case where we are not in a transaction; * EndTransactionBlock already issued a warning. 
*/ - Assert(s->blockState == TBLOCK_STARTED); + Assert(s->blockState == TBLOCK_STARTED || + s->blockState == TBLOCK_IMPLICIT_INPROGRESS); /* Don't send back a PREPARE result tag... */ result = false; } @@ -3541,6 +3554,18 @@ EndTransactionBlock(void) result = true; break; + /* + * In an implicit transaction block, commit, but issue a warning + * because there was no explicit BEGIN before this. + */ + case TBLOCK_IMPLICIT_INPROGRESS: + ereport(WARNING, + (errcode(ERRCODE_NO_ACTIVE_SQL_TRANSACTION), + errmsg("there is no transaction in progress"))); + s->blockState = TBLOCK_END; + result = true; + break; + /* * We are in a failed transaction block. Tell * CommitTransactionCommand it's time to exit the block. @@ -3705,8 +3730,14 @@ UserAbortTransactionBlock(void) * WARNING and go to abort state. The upcoming call to * CommitTransactionCommand() will then put us back into the * default state. + * + * We do the same thing with ABORT inside an implicit transaction, + * although in this case we might be rolling back actual database + * state changes. (It's debatable whether we should issue a + * WARNING in this case, but we have done so historically.) */ case TBLOCK_STARTED: + case TBLOCK_IMPLICIT_INPROGRESS: ereport(WARNING, (errcode(ERRCODE_NO_ACTIVE_SQL_TRANSACTION), errmsg("there is no transaction in progress"))); @@ -3743,6 +3774,58 @@ UserAbortTransactionBlock(void) } } +/* + * BeginImplicitTransactionBlock + * Start an implicit transaction block if we're not already in one. + * + * Unlike BeginTransactionBlock, this is called directly from the main loop + * in postgres.c, not within a Portal. So we can just change blockState + * without a lot of ceremony. We do not expect caller to do + * CommitTransactionCommand/StartTransactionCommand. + */ +void +BeginImplicitTransactionBlock(void) +{ + TransactionState s = CurrentTransactionState; + + /* + * If we are in STARTED state (that is, no transaction block is open), + * switch to IMPLICIT_INPROGRESS state, creating an implicit transaction + * block. + * + * For caller convenience, we consider all other transaction states as + * legal here; otherwise the caller would need its own state check, which + * seems rather pointless. + */ + if (s->blockState == TBLOCK_STARTED) + s->blockState = TBLOCK_IMPLICIT_INPROGRESS; +} + +/* + * EndImplicitTransactionBlock + * End an implicit transaction block, if we're in one. + * + * Like EndTransactionBlock, we just make any needed blockState change here. + * The real work will be done in the upcoming CommitTransactionCommand(). + */ +void +EndImplicitTransactionBlock(void) +{ + TransactionState s = CurrentTransactionState; + + /* + * If we are in IMPLICIT_INPROGRESS state, switch back to STARTED state, + * allowing CommitTransactionCommand to commit whatever happened during + * the implicit transaction block as though it were a single statement. + * + * For caller convenience, we consider all other transaction states as + * legal here; otherwise the caller would need its own state check, which + * seems rather pointless. + */ + if (s->blockState == TBLOCK_IMPLICIT_INPROGRESS) + s->blockState = TBLOCK_STARTED; +} + /* * DefineSavepoint * This executes a SAVEPOINT command. @@ -3780,6 +3863,28 @@ DefineSavepoint(char *name) s->name = MemoryContextStrdup(TopTransactionContext, name); break; + /* + * We disallow savepoint commands in implicit transaction blocks. 
+ * There would be no great difficulty in allowing them so far as + * this module is concerned, but a savepoint seems inconsistent + * with exec_simple_query's behavior of abandoning the whole query + * string upon error. Also, the point of an implicit transaction + * block (as opposed to a regular one) is to automatically close + * after an error, so it's hard to see how a savepoint would fit + * into that. + * + * The error messages for this are phrased as if there were no + * active transaction block at all, which is historical but + * perhaps could be improved. + */ + case TBLOCK_IMPLICIT_INPROGRESS: + ereport(ERROR, + (errcode(ERRCODE_NO_ACTIVE_SQL_TRANSACTION), + /* translator: %s represents an SQL statement name */ + errmsg("%s can only be used in transaction blocks", + "SAVEPOINT"))); + break; + /* These cases are invalid. */ case TBLOCK_DEFAULT: case TBLOCK_STARTED: @@ -3834,8 +3939,7 @@ ReleaseSavepoint(List *options) switch (s->blockState) { /* - * We can't rollback to a savepoint if there is no savepoint - * defined. + * We can't release a savepoint if there is no savepoint defined. */ case TBLOCK_INPROGRESS: ereport(ERROR, @@ -3843,6 +3947,15 @@ ReleaseSavepoint(List *options) errmsg("no such savepoint"))); break; + case TBLOCK_IMPLICIT_INPROGRESS: + /* See comment about implicit transactions in DefineSavepoint */ + ereport(ERROR, + (errcode(ERRCODE_NO_ACTIVE_SQL_TRANSACTION), + /* translator: %s represents an SQL statement name */ + errmsg("%s can only be used in transaction blocks", + "RELEASE SAVEPOINT"))); + break; + /* * We are in a non-aborted subtransaction. This is the only valid * case. @@ -3957,6 +4070,15 @@ RollbackToSavepoint(List *options) errmsg("no such savepoint"))); break; + case TBLOCK_IMPLICIT_INPROGRESS: + /* See comment about implicit transactions in DefineSavepoint */ + ereport(ERROR, + (errcode(ERRCODE_NO_ACTIVE_SQL_TRANSACTION), + /* translator: %s represents an SQL statement name */ + errmsg("%s can only be used in transaction blocks", + "ROLLBACK TO SAVEPOINT"))); + break; + /* * There is at least one savepoint, so proceed. */ @@ -4046,11 +4168,12 @@ RollbackToSavepoint(List *options) /* * BeginInternalSubTransaction * This is the same as DefineSavepoint except it allows TBLOCK_STARTED, - * TBLOCK_END, and TBLOCK_PREPARE states, and therefore it can safely be - * used in functions that might be called when not inside a BEGIN block - * or when running deferred triggers at COMMIT/PREPARE time. Also, it - * automatically does CommitTransactionCommand/StartTransactionCommand - * instead of expecting the caller to do it. + * TBLOCK_IMPLICIT_INPROGRESS, TBLOCK_END, and TBLOCK_PREPARE states, + * and therefore it can safely be used in functions that might be called + * when not inside a BEGIN block or when running deferred triggers at + * COMMIT/PREPARE time. Also, it automatically does + * CommitTransactionCommand/StartTransactionCommand instead of expecting + * the caller to do it. 
*/ void BeginInternalSubTransaction(char *name) @@ -4076,6 +4199,7 @@ BeginInternalSubTransaction(char *name) { case TBLOCK_STARTED: case TBLOCK_INPROGRESS: + case TBLOCK_IMPLICIT_INPROGRESS: case TBLOCK_END: case TBLOCK_PREPARE: case TBLOCK_SUBINPROGRESS: @@ -4180,6 +4304,7 @@ RollbackAndReleaseCurrentSubTransaction(void) case TBLOCK_DEFAULT: case TBLOCK_STARTED: case TBLOCK_BEGIN: + case TBLOCK_IMPLICIT_INPROGRESS: case TBLOCK_PARALLEL_INPROGRESS: case TBLOCK_SUBBEGIN: case TBLOCK_INPROGRESS: @@ -4211,6 +4336,7 @@ RollbackAndReleaseCurrentSubTransaction(void) s = CurrentTransactionState; /* changed by pop */ AssertState(s->blockState == TBLOCK_SUBINPROGRESS || s->blockState == TBLOCK_INPROGRESS || + s->blockState == TBLOCK_IMPLICIT_INPROGRESS || s->blockState == TBLOCK_STARTED); } @@ -4259,6 +4385,7 @@ AbortOutOfAnyTransaction(void) case TBLOCK_STARTED: case TBLOCK_BEGIN: case TBLOCK_INPROGRESS: + case TBLOCK_IMPLICIT_INPROGRESS: case TBLOCK_PARALLEL_INPROGRESS: case TBLOCK_END: case TBLOCK_ABORT_PENDING: @@ -4369,6 +4496,7 @@ TransactionBlockStatusCode(void) case TBLOCK_BEGIN: case TBLOCK_SUBBEGIN: case TBLOCK_INPROGRESS: + case TBLOCK_IMPLICIT_INPROGRESS: case TBLOCK_PARALLEL_INPROGRESS: case TBLOCK_SUBINPROGRESS: case TBLOCK_END: @@ -5036,6 +5164,8 @@ BlockStateAsString(TBlockState blockState) return "BEGIN"; case TBLOCK_INPROGRESS: return "INPROGRESS"; + case TBLOCK_IMPLICIT_INPROGRESS: + return "IMPLICIT_INPROGRESS"; case TBLOCK_PARALLEL_INPROGRESS: return "PARALLEL_INPROGRESS"; case TBLOCK_END: diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 8d3fecf6d6..c10d891260 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -883,10 +883,9 @@ exec_simple_query(const char *query_string) ListCell *parsetree_item; bool save_log_statement_stats = log_statement_stats; bool was_logged = false; - bool isTopLevel; + bool use_implicit_block; char msec_str[32]; - /* * Report query to various monitoring facilities. */ @@ -947,13 +946,14 @@ exec_simple_query(const char *query_string) MemoryContextSwitchTo(oldcontext); /* - * We'll tell PortalRun it's a top-level command iff there's exactly one - * raw parsetree. If more than one, it's effectively a transaction block - * and we want PreventTransactionChain to reject unsafe commands. (Note: - * we're assuming that query rewrite cannot add commands that are - * significant to PreventTransactionChain.) + * For historical reasons, if multiple SQL statements are given in a + * single "simple Query" message, we execute them as a single transaction, + * unless explicit transaction control commands are included to make + * portions of the list be separate transactions. To represent this + * behavior properly in the transaction machinery, we use an "implicit" + * transaction block. */ - isTopLevel = (list_length(parsetree_list) == 1); + use_implicit_block = (list_length(parsetree_list) > 1); /* * Run through the raw parsetree(s) and process each one. @@ -1001,6 +1001,16 @@ exec_simple_query(const char *query_string) /* Make sure we are in a transaction command */ start_xact_command(); + /* + * If using an implicit transaction block, and we're not already in a + * transaction block, start an implicit block to force this statement + * to be grouped together with any following ones. (We must do this + * each time through the loop; otherwise, a COMMIT/ROLLBACK in the + * list would cause later statements to not be grouped.) 
+ */ + if (use_implicit_block) + BeginImplicitTransactionBlock(); + /* If we got a cancel signal in parsing or prior command, quit */ CHECK_FOR_INTERRUPTS(); @@ -1098,7 +1108,7 @@ exec_simple_query(const char *query_string) */ (void) PortalRun(portal, FETCH_ALL, - isTopLevel, + true, /* always top level */ true, receiver, receiver, @@ -1108,15 +1118,7 @@ exec_simple_query(const char *query_string) PortalDrop(portal, false); - if (IsA(parsetree->stmt, TransactionStmt)) - { - /* - * If this was a transaction control statement, commit it. We will - * start a new xact command for the next command (if any). - */ - finish_xact_command(); - } - else if (lnext(parsetree_item) == NULL) + if (lnext(parsetree_item) == NULL) { /* * If this is the last parsetree of the query string, close down @@ -1124,9 +1126,18 @@ exec_simple_query(const char *query_string) * is so that any end-of-transaction errors are reported before * the command-complete message is issued, to avoid confusing * clients who will expect either a command-complete message or an - * error, not one and then the other. But for compatibility with - * historical Postgres behavior, we do not force a transaction - * boundary between queries appearing in a single query string. + * error, not one and then the other. Also, if we're using an + * implicit transaction block, we must close that out first. + */ + if (use_implicit_block) + EndImplicitTransactionBlock(); + finish_xact_command(); + } + else if (IsA(parsetree->stmt, TransactionStmt)) + { + /* + * If this was a transaction control statement, commit it. We will + * start a new xact command for the next command. */ finish_xact_command(); } @@ -1149,7 +1160,9 @@ exec_simple_query(const char *query_string) } /* end loop over parsetrees */ /* - * Close down transaction statement, if one is open. + * Close down transaction statement, if one is open. (This will only do + * something if the parsetree list was empty; otherwise the last loop + * iteration already did it.) */ finish_xact_command(); diff --git a/src/include/access/xact.h b/src/include/access/xact.h index ad5aad96df..f2c10f905f 100644 --- a/src/include/access/xact.h +++ b/src/include/access/xact.h @@ -352,6 +352,8 @@ extern void BeginTransactionBlock(void); extern bool EndTransactionBlock(void); extern bool PrepareTransactionBlock(char *gid); extern void UserAbortTransactionBlock(void); +extern void BeginImplicitTransactionBlock(void); +extern void EndImplicitTransactionBlock(void); extern void ReleaseSavepoint(List *options); extern void DefineSavepoint(char *name); extern void RollbackToSavepoint(List *options); diff --git a/src/test/regress/expected/transactions.out b/src/test/regress/expected/transactions.out index d9b702d016..a7fdcf45fd 100644 --- a/src/test/regress/expected/transactions.out +++ b/src/test/regress/expected/transactions.out @@ -659,6 +659,90 @@ ERROR: portal "ctt" cannot be run COMMIT; DROP FUNCTION create_temp_tab(); DROP FUNCTION invert(x float8); +-- Test assorted behaviors around the implicit transaction block created +-- when multiple SQL commands are sent in a single Query message. These +-- tests rely on the fact that psql will not break SQL commands apart at a +-- backslash-quoted semicolon, but will send them as one Query. +create temp table i_table (f1 int); +-- psql will show only the last result in a multi-statement Query +SELECT 1\; SELECT 2\; SELECT 3; + ?column? 
+---------- + 3 +(1 row) + +-- this implicitly commits: +insert into i_table values(1)\; select * from i_table; + f1 +---- + 1 +(1 row) + +-- 1/0 error will cause rolling back the whole implicit transaction +insert into i_table values(2)\; select * from i_table\; select 1/0; +ERROR: division by zero +select * from i_table; + f1 +---- + 1 +(1 row) + +rollback; -- we are not in a transaction at this point +WARNING: there is no transaction in progress +-- can use regular begin/commit/rollback within a single Query +begin\; insert into i_table values(3)\; commit; +rollback; -- we are not in a transaction at this point +WARNING: there is no transaction in progress +begin\; insert into i_table values(4)\; rollback; +rollback; -- we are not in a transaction at this point +WARNING: there is no transaction in progress +-- begin converts implicit transaction into a regular one that +-- can extend past the end of the Query +select 1\; begin\; insert into i_table values(5); +commit; +select 1\; begin\; insert into i_table values(6); +rollback; +-- commit in implicit-transaction state commits but issues a warning. +insert into i_table values(7)\; commit\; insert into i_table values(8)\; select 1/0; +WARNING: there is no transaction in progress +ERROR: division by zero +-- similarly, rollback aborts but issues a warning. +insert into i_table values(9)\; rollback\; select 2; +WARNING: there is no transaction in progress + ?column? +---------- + 2 +(1 row) + +select * from i_table; + f1 +---- + 1 + 3 + 5 + 7 +(4 rows) + +rollback; -- we are not in a transaction at this point +WARNING: there is no transaction in progress +-- implicit transaction block is still a transaction block, for e.g. VACUUM +SELECT 1\; VACUUM; +ERROR: VACUUM cannot run inside a transaction block +SELECT 1\; COMMIT\; VACUUM; +WARNING: there is no transaction in progress +ERROR: VACUUM cannot run inside a transaction block +-- we disallow savepoint-related commands in implicit-transaction state +SELECT 1\; SAVEPOINT sp; +ERROR: SAVEPOINT can only be used in transaction blocks +SELECT 1\; COMMIT\; SAVEPOINT sp; +WARNING: there is no transaction in progress +ERROR: SAVEPOINT can only be used in transaction blocks +ROLLBACK TO SAVEPOINT sp\; SELECT 2; +ERROR: ROLLBACK TO SAVEPOINT can only be used in transaction blocks +SELECT 2\; RELEASE SAVEPOINT sp\; SELECT 3; +ERROR: RELEASE SAVEPOINT can only be used in transaction blocks +-- but this is OK, because the BEGIN converts it to a regular xact +SELECT 1\; BEGIN\; SAVEPOINT sp\; ROLLBACK TO SAVEPOINT sp\; COMMIT; -- Test for successful cleanup of an aborted transaction at session exit. -- THIS MUST BE THE LAST TEST IN THIS FILE. begin; diff --git a/src/test/regress/sql/transactions.sql b/src/test/regress/sql/transactions.sql index bf9cb05971..82661ab610 100644 --- a/src/test/regress/sql/transactions.sql +++ b/src/test/regress/sql/transactions.sql @@ -419,6 +419,60 @@ DROP FUNCTION create_temp_tab(); DROP FUNCTION invert(x float8); +-- Test assorted behaviors around the implicit transaction block created +-- when multiple SQL commands are sent in a single Query message. These +-- tests rely on the fact that psql will not break SQL commands apart at a +-- backslash-quoted semicolon, but will send them as one Query. 
+
+create temp table i_table (f1 int);
+
+-- psql will show only the last result in a multi-statement Query
+SELECT 1\; SELECT 2\; SELECT 3;
+
+-- this implicitly commits:
+insert into i_table values(1)\; select * from i_table;
+-- 1/0 error will cause rolling back the whole implicit transaction
+insert into i_table values(2)\; select * from i_table\; select 1/0;
+select * from i_table;
+
+rollback; -- we are not in a transaction at this point
+
+-- can use regular begin/commit/rollback within a single Query
+begin\; insert into i_table values(3)\; commit;
+rollback; -- we are not in a transaction at this point
+begin\; insert into i_table values(4)\; rollback;
+rollback; -- we are not in a transaction at this point
+
+-- begin converts implicit transaction into a regular one that
+-- can extend past the end of the Query
+select 1\; begin\; insert into i_table values(5);
+commit;
+select 1\; begin\; insert into i_table values(6);
+rollback;
+
+-- commit in implicit-transaction state commits but issues a warning.
+insert into i_table values(7)\; commit\; insert into i_table values(8)\; select 1/0;
+-- similarly, rollback aborts but issues a warning.
+insert into i_table values(9)\; rollback\; select 2;
+
+select * from i_table;
+
+rollback; -- we are not in a transaction at this point
+
+-- implicit transaction block is still a transaction block, for e.g. VACUUM
+SELECT 1\; VACUUM;
+SELECT 1\; COMMIT\; VACUUM;
+
+-- we disallow savepoint-related commands in implicit-transaction state
+SELECT 1\; SAVEPOINT sp;
+SELECT 1\; COMMIT\; SAVEPOINT sp;
+ROLLBACK TO SAVEPOINT sp\; SELECT 2;
+SELECT 2\; RELEASE SAVEPOINT sp\; SELECT 3;
+
+-- but this is OK, because the BEGIN converts it to a regular xact
+SELECT 1\; BEGIN\; SAVEPOINT sp\; ROLLBACK TO SAVEPOINT sp\; COMMIT;
+
+
 -- Test for successful cleanup of an aborted transaction at session exit.
 -- THIS MUST BE THE LAST TEST IN THIS FILE.

From 9d71323daca412e6e175595e1e42809fb5e1172d Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 7 Sep 2017 10:55:45 -0400
Subject: [PATCH 0118/1087] Even if some partitions are foreign, allow tuple routing.

This doesn't allow routing tuples to the foreign partitions themselves,
but it permits tuples to be routed to regular partitions despite the
presence of foreign partitions in the same inheritance hierarchy.

Etsuro Fujita, reviewed by Amit Langote and by me.
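As a quick illustration of the resulting behavior (a hypothetical
sketch, not part of the patch: "loopback" stands for any foreign
server, and the file_fdw regression tests below exercise the real
cases):

    CREATE TABLE pt (a int, b text) PARTITION BY LIST (a);
    CREATE FOREIGN TABLE p1 PARTITION OF pt FOR VALUES IN (1) SERVER loopback;
    CREATE TABLE p2 PARTITION OF pt FOR VALUES IN (2);

    INSERT INTO pt VALUES (2, 'ok');   -- routed to regular partition p2
    INSERT INTO pt VALUES (1, 'no');   -- ERROR: cannot route inserted tuples
                                       -- to a foreign table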
Discussion: http://postgr.es/m/bc3db4c1-1693-3b8a-559f-33ad2b50b7ad@lab.ntt.co.jp --- contrib/file_fdw/input/file_fdw.source | 21 +++++++ contrib/file_fdw/output/file_fdw.source | 81 +++++++++++++++++++++++++ src/backend/executor/execMain.c | 25 +++++--- src/backend/executor/nodeModifyTable.c | 2 +- src/include/executor/executor.h | 2 +- 5 files changed, 120 insertions(+), 11 deletions(-) diff --git a/contrib/file_fdw/input/file_fdw.source b/contrib/file_fdw/input/file_fdw.source index 685561fc2a..e6821d64d4 100644 --- a/contrib/file_fdw/input/file_fdw.source +++ b/contrib/file_fdw/input/file_fdw.source @@ -162,6 +162,27 @@ SELECT tableoid::regclass, * FROM agg FOR UPDATE; ALTER FOREIGN TABLE agg_csv NO INHERIT agg; DROP TABLE agg; +-- declarative partitioning tests +SET ROLE regress_file_fdw_superuser; +CREATE TABLE pt (a int, b text) partition by list (a); +CREATE FOREIGN TABLE p1 partition of pt for values in (1) SERVER file_server +OPTIONS (format 'csv', filename '@abs_srcdir@/data/list1.csv', delimiter ','); +CREATE TABLE p2 partition of pt for values in (2); +SELECT tableoid::regclass, * FROM pt; +SELECT tableoid::regclass, * FROM p1; +SELECT tableoid::regclass, * FROM p2; +COPY pt FROM '@abs_srcdir@/data/list2.bad' with (format 'csv', delimiter ','); -- ERROR +COPY pt FROM '@abs_srcdir@/data/list2.csv' with (format 'csv', delimiter ','); +SELECT tableoid::regclass, * FROM pt; +SELECT tableoid::regclass, * FROM p1; +SELECT tableoid::regclass, * FROM p2; +INSERT INTO pt VALUES (1, 'xyzzy'); -- ERROR +INSERT INTO pt VALUES (2, 'xyzzy'); +SELECT tableoid::regclass, * FROM pt; +SELECT tableoid::regclass, * FROM p1; +SELECT tableoid::regclass, * FROM p2; +DROP TABLE pt; + -- privilege tests SET ROLE regress_file_fdw_superuser; SELECT * FROM agg_text ORDER BY a; diff --git a/contrib/file_fdw/output/file_fdw.source b/contrib/file_fdw/output/file_fdw.source index 01e2690a82..709c43ec80 100644 --- a/contrib/file_fdw/output/file_fdw.source +++ b/contrib/file_fdw/output/file_fdw.source @@ -289,6 +289,87 @@ SELECT tableoid::regclass, * FROM agg FOR UPDATE; ALTER FOREIGN TABLE agg_csv NO INHERIT agg; DROP TABLE agg; +-- declarative partitioning tests +SET ROLE regress_file_fdw_superuser; +CREATE TABLE pt (a int, b text) partition by list (a); +CREATE FOREIGN TABLE p1 partition of pt for values in (1) SERVER file_server +OPTIONS (format 'csv', filename '@abs_srcdir@/data/list1.csv', delimiter ','); +CREATE TABLE p2 partition of pt for values in (2); +SELECT tableoid::regclass, * FROM pt; + tableoid | a | b +----------+---+----- + p1 | 1 | foo + p1 | 1 | bar +(2 rows) + +SELECT tableoid::regclass, * FROM p1; + tableoid | a | b +----------+---+----- + p1 | 1 | foo + p1 | 1 | bar +(2 rows) + +SELECT tableoid::regclass, * FROM p2; + tableoid | a | b +----------+---+--- +(0 rows) + +COPY pt FROM '@abs_srcdir@/data/list2.bad' with (format 'csv', delimiter ','); -- ERROR +ERROR: cannot route inserted tuples to a foreign table +CONTEXT: COPY pt, line 2: "1,qux" +COPY pt FROM '@abs_srcdir@/data/list2.csv' with (format 'csv', delimiter ','); +SELECT tableoid::regclass, * FROM pt; + tableoid | a | b +----------+---+----- + p1 | 1 | foo + p1 | 1 | bar + p2 | 2 | baz + p2 | 2 | qux +(4 rows) + +SELECT tableoid::regclass, * FROM p1; + tableoid | a | b +----------+---+----- + p1 | 1 | foo + p1 | 1 | bar +(2 rows) + +SELECT tableoid::regclass, * FROM p2; + tableoid | a | b +----------+---+----- + p2 | 2 | baz + p2 | 2 | qux +(2 rows) + +INSERT INTO pt VALUES (1, 'xyzzy'); -- ERROR +ERROR: cannot route inserted 
tuples to a foreign table
+INSERT INTO pt VALUES (2, 'xyzzy');
+SELECT tableoid::regclass, * FROM pt;
+ tableoid | a | b
+----------+---+-------
+ p1 | 1 | foo
+ p1 | 1 | bar
+ p2 | 2 | baz
+ p2 | 2 | qux
+ p2 | 2 | xyzzy
+(5 rows)
+
+SELECT tableoid::regclass, * FROM p1;
+ tableoid | a | b
+----------+---+-----
+ p1 | 1 | foo
+ p1 | 1 | bar
+(2 rows)
+
+SELECT tableoid::regclass, * FROM p2;
+ tableoid | a | b
+----------+---+-------
+ p2 | 2 | baz
+ p2 | 2 | qux
+ p2 | 2 | xyzzy
+(3 rows)
+
+DROP TABLE pt;
 -- privilege tests
 SET ROLE regress_file_fdw_superuser;
 SELECT * FROM agg_text ORDER BY a;
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 2946a0edee..b6f9f1b65f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1097,8 +1097,9 @@ InitPlan(QueryDesc *queryDesc, int eflags)
  * CheckValidRowMarkRel.
  */
 void
-CheckValidResultRel(Relation resultRel, CmdType operation)
+CheckValidResultRel(ResultRelInfo *resultRelInfo, CmdType operation)
 {
+    Relation resultRel = resultRelInfo->ri_RelationDesc;
     TriggerDesc *trigDesc = resultRel->trigdesc;
     FdwRoutine *fdwroutine;

@@ -1169,10 +1170,16 @@ CheckValidResultRel(Relation resultRel, CmdType operation)
             break;
         case RELKIND_FOREIGN_TABLE:
             /* Okay only if the FDW supports it */
-            fdwroutine = GetFdwRoutineForRelation(resultRel, false);
+            fdwroutine = resultRelInfo->ri_FdwRoutine;
             switch (operation)
             {
                 case CMD_INSERT:
+                    /*
+                     * If this is a foreign partition used as a tuple-routing
+                     * target, skip the check; that case is disallowed elsewhere.
+                     */
+                    if (resultRelInfo->ri_PartitionRoot)
+                        break;
                     if (fdwroutine->ExecForeignInsert == NULL)
                         ereport(ERROR,
                                 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@@ -3307,11 +3314,6 @@ ExecSetupPartitionTupleRouting(Relation rel,
         partrel = heap_open(lfirst_oid(cell), NoLock);
         part_tupdesc = RelationGetDescr(partrel);

-        /*
-         * Verify result relation is a valid target for the current operation.
-         */
-        CheckValidResultRel(partrel, CMD_INSERT);
-
         /*
         * Save a tuple conversion map to convert a tuple routed to this
         * partition from the parent's type to the partition's.
@@ -3325,8 +3327,10 @@ ExecSetupPartitionTupleRouting(Relation rel,
                                           rel,
                                           estate->es_instrument);

-        estate->es_leaf_result_relations =
-            lappend(estate->es_leaf_result_relations, leaf_part_rri);
+        /*
+         * Verify result relation is a valid target for INSERT.
+ */ + CheckValidResultRel(leaf_part_rri, CMD_INSERT); /* * Open partition indices (remember we do not support ON CONFLICT in @@ -3337,6 +3341,9 @@ ExecSetupPartitionTupleRouting(Relation rel, leaf_part_rri->ri_IndexRelationDescs == NULL) ExecOpenIndices(leaf_part_rri, false); + estate->es_leaf_result_relations = + lappend(estate->es_leaf_result_relations, leaf_part_rri); + leaf_part_rri++; i++; } diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index e12721a9b6..bd84778739 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -1854,7 +1854,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) /* * Verify result relation is a valid target for the current operation */ - CheckValidResultRel(resultRelInfo->ri_RelationDesc, operation); + CheckValidResultRel(resultRelInfo, operation); /* * If there are indices on the result relation, open them and save diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index f48a603dae..ad228d1394 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -177,7 +177,7 @@ extern void ExecutorEnd(QueryDesc *queryDesc); extern void standard_ExecutorEnd(QueryDesc *queryDesc); extern void ExecutorRewind(QueryDesc *queryDesc); extern bool ExecCheckRTPerms(List *rangeTable, bool ereport_on_violation); -extern void CheckValidResultRel(Relation resultRel, CmdType operation); +extern void CheckValidResultRel(ResultRelInfo *resultRelInfo, CmdType operation); extern void InitResultRelInfo(ResultRelInfo *resultRelInfo, Relation resultRelationDesc, Index resultRelationIndex, From 1356f78ea93395c107cbc75dc923e29a0efccd8a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 7 Sep 2017 12:06:23 -0400 Subject: [PATCH 0119/1087] Reduce excessive dereferencing of function pointers It is equivalent in ANSI C to write (*funcptr) () and funcptr(). These two styles have been applied inconsistently. After discussion, we'll use the more verbose style for plain function pointer variables, to make it clear that it's a variable, and the shorter style when the function pointer is in a struct (s.func() or s->func()), because then it's clear that it's not a plain function name, and otherwise the excessive punctuation makes some of those invocations hard to read. 
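To illustrate the convention (a standalone sketch, not code from the
tree):

    #include <stdio.h>

    static void hello(void) { puts("hello"); }

    struct ops { void (*func)(void); };

    int main(void)
    {
        void (*funcptr)(void) = hello;
        struct ops s = {hello};

        (*funcptr)();  /* plain function pointer variable: keep the
                        * explicit dereference, to flag it as a variable */
        s.func();      /* pointer stored in a struct: call it directly; the
                        * punctuation already shows it's not a plain name */
        return 0;
    }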
Discussion: https://www.postgresql.org/message-id/f52c16db-14ed-757d-4b48-7ef360b1631d@2ndquadrant.com --- contrib/btree_gist/btree_utils_num.c | 34 +++++----- contrib/btree_gist/btree_utils_var.c | 44 ++++++------- src/backend/access/transam/xact.c | 4 +- src/backend/commands/analyze.c | 4 +- src/backend/commands/portalcmds.c | 2 +- src/backend/commands/seclabel.c | 2 +- src/backend/executor/execCurrent.c | 2 +- src/backend/executor/execExprInterp.c | 18 ++--- src/backend/executor/execMain.c | 6 +- src/backend/executor/execParallel.c | 2 +- src/backend/executor/execTuples.c | 6 +- src/backend/executor/execUtils.c | 2 +- src/backend/executor/functions.c | 2 +- src/backend/nodes/params.c | 6 +- src/backend/parser/parse_coerce.c | 2 +- src/backend/parser/parse_expr.c | 10 +-- src/backend/parser/parse_target.c | 4 +- src/backend/rewrite/rewriteManip.c | 2 +- src/backend/storage/ipc/ipc.c | 12 ++-- src/backend/storage/smgr/smgr.c | 44 ++++++------- src/backend/tcop/postgres.c | 4 +- src/backend/tcop/pquery.c | 8 +-- src/backend/utils/adt/array_typanalyze.c | 2 +- src/backend/utils/adt/expandeddatum.c | 4 +- src/backend/utils/adt/jsonfuncs.c | 4 +- src/backend/utils/cache/inval.c | 8 +-- src/backend/utils/error/elog.c | 4 +- src/backend/utils/mb/mbutils.c | 16 ++--- src/backend/utils/mb/wchar.c | 12 ++-- src/backend/utils/misc/guc.c | 60 ++++++++--------- src/backend/utils/misc/timeout.c | 2 +- src/backend/utils/mmgr/README | 2 +- src/backend/utils/mmgr/mcxt.c | 39 ++++++----- src/backend/utils/mmgr/portalmem.c | 10 +-- src/backend/utils/resowner/resowner.c | 2 +- src/bin/pg_dump/pg_backup_archiver.c | 84 ++++++++++++------------ src/bin/pg_dump/pg_backup_null.c | 2 +- src/bin/pg_dump/pg_backup_utils.c | 4 +- src/bin/psql/variables.c | 4 +- src/include/executor/executor.h | 4 +- src/include/utils/selfuncs.h | 2 +- src/include/utils/sortsupport.h | 4 +- src/interfaces/libpq/fe-connect.c | 4 +- src/interfaces/libpq/fe-exec.c | 2 +- src/interfaces/libpq/fe-protocol2.c | 2 +- src/interfaces/libpq/fe-protocol3.c | 2 +- 46 files changed, 249 insertions(+), 250 deletions(-) diff --git a/contrib/btree_gist/btree_utils_num.c b/contrib/btree_gist/btree_utils_num.c index b2295f2c7d..c2faf8b25a 100644 --- a/contrib/btree_gist/btree_utils_num.c +++ b/contrib/btree_gist/btree_utils_num.c @@ -184,10 +184,10 @@ gbt_num_union(GBT_NUMKEY *out, const GistEntryVector *entryvec, const gbtree_nin c.lower = &cur[0]; c.upper = &cur[tinfo->size]; /* if out->lower > cur->lower, adopt cur as lower */ - if ((*tinfo->f_gt) (o.lower, c.lower, flinfo)) + if (tinfo->f_gt(o.lower, c.lower, flinfo)) memcpy((void *) o.lower, (void *) c.lower, tinfo->size); /* if out->upper < cur->upper, adopt cur as upper */ - if ((*tinfo->f_lt) (o.upper, c.upper, flinfo)) + if (tinfo->f_lt(o.upper, c.upper, flinfo)) memcpy((void *) o.upper, (void *) c.upper, tinfo->size); } @@ -211,8 +211,8 @@ gbt_num_same(const GBT_NUMKEY *a, const GBT_NUMKEY *b, const gbtree_ninfo *tinfo b2.lower = &(((GBT_NUMKEY *) b)[0]); b2.upper = &(((GBT_NUMKEY *) b)[tinfo->size]); - return ((*tinfo->f_eq) (b1.lower, b2.lower, flinfo) && - (*tinfo->f_eq) (b1.upper, b2.upper, flinfo)); + return (tinfo->f_eq(b1.lower, b2.lower, flinfo) && + tinfo->f_eq(b1.upper, b2.upper, flinfo)); } @@ -236,9 +236,9 @@ gbt_num_bin_union(Datum *u, GBT_NUMKEY *e, const gbtree_ninfo *tinfo, FmgrInfo * ur.lower = &(((GBT_NUMKEY *) DatumGetPointer(*u))[0]); ur.upper = &(((GBT_NUMKEY *) DatumGetPointer(*u))[tinfo->size]); - if ((*tinfo->f_gt) ((void *) ur.lower, (void *) rd.lower, flinfo)) + if 
(tinfo->f_gt((void *) ur.lower, (void *) rd.lower, flinfo)) memcpy((void *) ur.lower, (void *) rd.lower, tinfo->size); - if ((*tinfo->f_lt) ((void *) ur.upper, (void *) rd.upper, flinfo)) + if (tinfo->f_lt((void *) ur.upper, (void *) rd.upper, flinfo)) memcpy((void *) ur.upper, (void *) rd.upper, tinfo->size); } } @@ -264,33 +264,33 @@ gbt_num_consistent(const GBT_NUMKEY_R *key, switch (*strategy) { case BTLessEqualStrategyNumber: - retval = (*tinfo->f_ge) (query, key->lower, flinfo); + retval = tinfo->f_ge(query, key->lower, flinfo); break; case BTLessStrategyNumber: if (is_leaf) - retval = (*tinfo->f_gt) (query, key->lower, flinfo); + retval = tinfo->f_gt(query, key->lower, flinfo); else - retval = (*tinfo->f_ge) (query, key->lower, flinfo); + retval = tinfo->f_ge(query, key->lower, flinfo); break; case BTEqualStrategyNumber: if (is_leaf) - retval = (*tinfo->f_eq) (query, key->lower, flinfo); + retval = tinfo->f_eq(query, key->lower, flinfo); else - retval = ((*tinfo->f_le) (key->lower, query, flinfo) && - (*tinfo->f_le) (query, key->upper, flinfo)); + retval = (tinfo->f_le(key->lower, query, flinfo) && + tinfo->f_le(query, key->upper, flinfo)); break; case BTGreaterStrategyNumber: if (is_leaf) - retval = (*tinfo->f_lt) (query, key->upper, flinfo); + retval = tinfo->f_lt(query, key->upper, flinfo); else - retval = (*tinfo->f_le) (query, key->upper, flinfo); + retval = tinfo->f_le(query, key->upper, flinfo); break; case BTGreaterEqualStrategyNumber: - retval = (*tinfo->f_le) (query, key->upper, flinfo); + retval = tinfo->f_le(query, key->upper, flinfo); break; case BtreeGistNotEqualStrategyNumber: - retval = (!((*tinfo->f_eq) (query, key->lower, flinfo) && - (*tinfo->f_eq) (query, key->upper, flinfo))); + retval = (!(tinfo->f_eq(query, key->lower, flinfo) && + tinfo->f_eq(query, key->upper, flinfo))); break; default: retval = false; diff --git a/contrib/btree_gist/btree_utils_var.c b/contrib/btree_gist/btree_utils_var.c index ecc87f3bb3..586de63a4d 100644 --- a/contrib/btree_gist/btree_utils_var.c +++ b/contrib/btree_gist/btree_utils_var.c @@ -109,7 +109,7 @@ gbt_var_leaf2node(GBT_VARKEY *leaf, const gbtree_vinfo *tinfo, FmgrInfo *flinfo) GBT_VARKEY *out = leaf; if (tinfo->f_l2n) - out = (*tinfo->f_l2n) (leaf, flinfo); + out = tinfo->f_l2n(leaf, flinfo); return out; } @@ -255,13 +255,13 @@ gbt_var_bin_union(Datum *u, GBT_VARKEY *e, Oid collation, nr.lower = ro.lower; nr.upper = ro.upper; - if ((*tinfo->f_cmp) (ro.lower, eo.lower, collation, flinfo) > 0) + if (tinfo->f_cmp(ro.lower, eo.lower, collation, flinfo) > 0) { nr.lower = eo.lower; update = true; } - if ((*tinfo->f_cmp) (ro.upper, eo.upper, collation, flinfo) < 0) + if (tinfo->f_cmp(ro.upper, eo.upper, collation, flinfo) < 0) { nr.upper = eo.upper; update = true; @@ -371,8 +371,8 @@ gbt_var_same(Datum d1, Datum d2, Oid collation, r1 = gbt_var_key_readable(t1); r2 = gbt_var_key_readable(t2); - return ((*tinfo->f_cmp) (r1.lower, r2.lower, collation, flinfo) == 0 && - (*tinfo->f_cmp) (r1.upper, r2.upper, collation, flinfo) == 0); + return (tinfo->f_cmp(r1.lower, r2.lower, collation, flinfo) == 0 && + tinfo->f_cmp(r1.upper, r2.upper, collation, flinfo) == 0); } @@ -400,9 +400,9 @@ gbt_var_penalty(float *res, const GISTENTRY *o, const GISTENTRY *n, if ((VARSIZE(ok.lower) - VARHDRSZ) == 0 && (VARSIZE(ok.upper) - VARHDRSZ) == 0) *res = 0.0; - else if (!(((*tinfo->f_cmp) (nk.lower, ok.lower, collation, flinfo) >= 0 || + else if (!((tinfo->f_cmp(nk.lower, ok.lower, collation, flinfo) >= 0 || gbt_bytea_pf_match(ok.lower, nk.lower, tinfo)) && 
- ((*tinfo->f_cmp) (nk.upper, ok.upper, collation, flinfo) <= 0 || + (tinfo->f_cmp(nk.upper, ok.upper, collation, flinfo) <= 0 || gbt_bytea_pf_match(ok.upper, nk.upper, tinfo)))) { Datum d = PointerGetDatum(0); @@ -449,9 +449,9 @@ gbt_vsrt_cmp(const void *a, const void *b, void *arg) const gbt_vsrt_arg *varg = (const gbt_vsrt_arg *) arg; int res; - res = (*varg->tinfo->f_cmp) (ar.lower, br.lower, varg->collation, varg->flinfo); + res = varg->tinfo->f_cmp(ar.lower, br.lower, varg->collation, varg->flinfo); if (res == 0) - return (*varg->tinfo->f_cmp) (ar.upper, br.upper, varg->collation, varg->flinfo); + return varg->tinfo->f_cmp(ar.upper, br.upper, varg->collation, varg->flinfo); return res; } @@ -567,44 +567,44 @@ gbt_var_consistent(GBT_VARKEY_R *key, { case BTLessEqualStrategyNumber: if (is_leaf) - retval = (*tinfo->f_ge) (query, key->lower, collation, flinfo); + retval = tinfo->f_ge(query, key->lower, collation, flinfo); else - retval = (*tinfo->f_cmp) (query, key->lower, collation, flinfo) >= 0 + retval = tinfo->f_cmp(query, key->lower, collation, flinfo) >= 0 || gbt_var_node_pf_match(key, query, tinfo); break; case BTLessStrategyNumber: if (is_leaf) - retval = (*tinfo->f_gt) (query, key->lower, collation, flinfo); + retval = tinfo->f_gt(query, key->lower, collation, flinfo); else - retval = (*tinfo->f_cmp) (query, key->lower, collation, flinfo) >= 0 + retval = tinfo->f_cmp(query, key->lower, collation, flinfo) >= 0 || gbt_var_node_pf_match(key, query, tinfo); break; case BTEqualStrategyNumber: if (is_leaf) - retval = (*tinfo->f_eq) (query, key->lower, collation, flinfo); + retval = tinfo->f_eq(query, key->lower, collation, flinfo); else retval = - ((*tinfo->f_cmp) (key->lower, query, collation, flinfo) <= 0 && - (*tinfo->f_cmp) (query, key->upper, collation, flinfo) <= 0) || + (tinfo->f_cmp(key->lower, query, collation, flinfo) <= 0 && + tinfo->f_cmp(query, key->upper, collation, flinfo) <= 0) || gbt_var_node_pf_match(key, query, tinfo); break; case BTGreaterStrategyNumber: if (is_leaf) - retval = (*tinfo->f_lt) (query, key->upper, collation, flinfo); + retval = tinfo->f_lt(query, key->upper, collation, flinfo); else - retval = (*tinfo->f_cmp) (query, key->upper, collation, flinfo) <= 0 + retval = tinfo->f_cmp(query, key->upper, collation, flinfo) <= 0 || gbt_var_node_pf_match(key, query, tinfo); break; case BTGreaterEqualStrategyNumber: if (is_leaf) - retval = (*tinfo->f_le) (query, key->upper, collation, flinfo); + retval = tinfo->f_le(query, key->upper, collation, flinfo); else - retval = (*tinfo->f_cmp) (query, key->upper, collation, flinfo) <= 0 + retval = tinfo->f_cmp(query, key->upper, collation, flinfo) <= 0 || gbt_var_node_pf_match(key, query, tinfo); break; case BtreeGistNotEqualStrategyNumber: - retval = !((*tinfo->f_eq) (query, key->lower, collation, flinfo) && - (*tinfo->f_eq) (query, key->upper, collation, flinfo)); + retval = !(tinfo->f_eq(query, key->lower, collation, flinfo) && + tinfo->f_eq(query, key->upper, collation, flinfo)); break; default: retval = FALSE; diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index bc07354f9a..93dca7a72a 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -3347,7 +3347,7 @@ CallXactCallbacks(XactEvent event) XactCallbackItem *item; for (item = Xact_callbacks; item; item = item->next) - (*item->callback) (event, item->arg); + item->callback(event, item->arg); } @@ -3404,7 +3404,7 @@ CallSubXactCallbacks(SubXactEvent event, SubXactCallbackItem *item; for (item = 
SubXact_callbacks; item; item = item->next) - (*item->callback) (event, mySubid, parentSubid, item->arg); + item->callback(event, mySubid, parentSubid, item->arg); } diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c index fbad13ea94..08fc18e96b 100644 --- a/src/backend/commands/analyze.c +++ b/src/backend/commands/analyze.c @@ -526,7 +526,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params, stats->rows = rows; stats->tupDesc = onerel->rd_att; - (*stats->compute_stats) (stats, + stats->compute_stats(stats, std_fetch_func, numrows, totalrows); @@ -830,7 +830,7 @@ compute_index_stats(Relation onerel, double totalrows, stats->exprvals = exprvals + i; stats->exprnulls = exprnulls + i; stats->rowstride = attr_cnt; - (*stats->compute_stats) (stats, + stats->compute_stats(stats, ind_fetch_func, numindexrows, totalindexrows); diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c index 46369cf3db..b36473fba4 100644 --- a/src/backend/commands/portalcmds.c +++ b/src/backend/commands/portalcmds.c @@ -397,7 +397,7 @@ PersistHoldablePortal(Portal portal) /* Fetch the result set into the tuplestore */ ExecutorRun(queryDesc, ForwardScanDirection, 0L, false); - (*queryDesc->dest->rDestroy) (queryDesc->dest); + queryDesc->dest->rDestroy(queryDesc->dest); queryDesc->dest = NULL; /* diff --git a/src/backend/commands/seclabel.c b/src/backend/commands/seclabel.c index 5f16d6cf1c..b0b06fc91f 100644 --- a/src/backend/commands/seclabel.c +++ b/src/backend/commands/seclabel.c @@ -122,7 +122,7 @@ ExecSecLabelStmt(SecLabelStmt *stmt) } /* Provider gets control here, may throw ERROR to veto new label. */ - (*provider->hook) (&address, stmt->label); + provider->hook(&address, stmt->label); /* Apply new label. 
*/ SetSecurityLabel(&address, provider->provider_name, stmt->label); diff --git a/src/backend/executor/execCurrent.c b/src/backend/executor/execCurrent.c index f00fce5913..f42df3916e 100644 --- a/src/backend/executor/execCurrent.c +++ b/src/backend/executor/execCurrent.c @@ -220,7 +220,7 @@ fetch_cursor_param_value(ExprContext *econtext, int paramId) /* give hook a chance in case parameter is dynamic */ if (!OidIsValid(prm->ptype) && paramInfo->paramFetch != NULL) - (*paramInfo->paramFetch) (paramInfo, paramId); + paramInfo->paramFetch(paramInfo, paramId); if (OidIsValid(prm->ptype) && !prm->isnull) { diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 83e04471e4..bd8a15d6c3 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -647,7 +647,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) FunctionCallInfo fcinfo = op->d.func.fcinfo_data; fcinfo->isnull = false; - *op->resvalue = (op->d.func.fn_addr) (fcinfo); + *op->resvalue = op->d.func.fn_addr(fcinfo); *op->resnull = fcinfo->isnull; EEO_NEXT(); @@ -669,7 +669,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) } } fcinfo->isnull = false; - *op->resvalue = (op->d.func.fn_addr) (fcinfo); + *op->resvalue = op->d.func.fn_addr(fcinfo); *op->resnull = fcinfo->isnull; strictfail: @@ -684,7 +684,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) pgstat_init_function_usage(fcinfo, &fcusage); fcinfo->isnull = false; - *op->resvalue = (op->d.func.fn_addr) (fcinfo); + *op->resvalue = op->d.func.fn_addr(fcinfo); *op->resnull = fcinfo->isnull; pgstat_end_function_usage(&fcusage, true); @@ -712,7 +712,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) pgstat_init_function_usage(fcinfo, &fcusage); fcinfo->isnull = false; - *op->resvalue = (op->d.func.fn_addr) (fcinfo); + *op->resvalue = op->d.func.fn_addr(fcinfo); *op->resnull = fcinfo->isnull; pgstat_end_function_usage(&fcusage, true); @@ -1170,7 +1170,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) Datum eqresult; fcinfo->isnull = false; - eqresult = (op->d.func.fn_addr) (fcinfo); + eqresult = op->d.func.fn_addr(fcinfo); /* Must invert result of "="; safe to do even if null */ *op->resvalue = BoolGetDatum(!DatumGetBool(eqresult)); *op->resnull = fcinfo->isnull; @@ -1192,7 +1192,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) Datum result; fcinfo->isnull = false; - result = (op->d.func.fn_addr) (fcinfo); + result = op->d.func.fn_addr(fcinfo); /* if the arguments are equal return null */ if (!fcinfo->isnull && DatumGetBool(result)) @@ -1279,7 +1279,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) /* Apply comparison function */ fcinfo->isnull = false; - *op->resvalue = (op->d.rowcompare_step.fn_addr) (fcinfo); + *op->resvalue = op->d.rowcompare_step.fn_addr(fcinfo); /* force NULL result if NULL function result */ if (fcinfo->isnull) @@ -1878,7 +1878,7 @@ ExecEvalParamExtern(ExprState *state, ExprEvalStep *op, ExprContext *econtext) /* give hook a chance in case parameter is dynamic */ if (!OidIsValid(prm->ptype) && paramInfo->paramFetch != NULL) - (*paramInfo->paramFetch) (paramInfo, paramId); + paramInfo->paramFetch(paramInfo, paramId); if (likely(OidIsValid(prm->ptype))) { @@ -3000,7 +3000,7 @@ ExecEvalScalarArrayOp(ExprState *state, ExprEvalStep *op) else { fcinfo->isnull = false; - thisresult = (op->d.scalararrayop.fn_addr) (fcinfo); + 
thisresult = op->d.scalararrayop.fn_addr(fcinfo); } /* Combine results per OR or AND semantics */ diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index b6f9f1b65f..4b594d489c 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -349,7 +349,7 @@ standard_ExecutorRun(QueryDesc *queryDesc, queryDesc->plannedstmt->hasReturning); if (sendTuples) - (*dest->rStartup) (dest, operation, queryDesc->tupDesc); + dest->rStartup(dest, operation, queryDesc->tupDesc); /* * run plan @@ -375,7 +375,7 @@ standard_ExecutorRun(QueryDesc *queryDesc, * shutdown tuple receiver, if we started it */ if (sendTuples) - (*dest->rShutdown) (dest); + dest->rShutdown(dest); if (queryDesc->totaltime) InstrStopNode(queryDesc->totaltime, estate->es_processed); @@ -1752,7 +1752,7 @@ ExecutePlan(EState *estate, * has closed and no more tuples can be sent. If that's the case, * end the loop. */ - if (!((*dest->receiveSlot) (slot, dest))) + if (!dest->receiveSlot(slot, dest)) break; } diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 59f3744a14..8737cc1cef 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -1081,5 +1081,5 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc) /* Cleanup. */ dsa_detach(area); FreeQueryDesc(queryDesc); - (*receiver->rDestroy) (receiver); + receiver->rDestroy(receiver); } diff --git a/src/backend/executor/execTuples.c b/src/backend/executor/execTuples.c index 31f814c0f0..51d2c5d166 100644 --- a/src/backend/executor/execTuples.c +++ b/src/backend/executor/execTuples.c @@ -1241,7 +1241,7 @@ begin_tup_output_tupdesc(DestReceiver *dest, TupleDesc tupdesc) tstate->slot = MakeSingleTupleTableSlot(tupdesc); tstate->dest = dest; - (*tstate->dest->rStartup) (tstate->dest, (int) CMD_SELECT, tupdesc); + tstate->dest->rStartup(tstate->dest, (int) CMD_SELECT, tupdesc); return tstate; } @@ -1266,7 +1266,7 @@ do_tup_output(TupOutputState *tstate, Datum *values, bool *isnull) ExecStoreVirtualTuple(slot); /* send the tuple to the receiver */ - (void) (*tstate->dest->receiveSlot) (slot, tstate->dest); + (void) tstate->dest->receiveSlot(slot, tstate->dest); /* clean up */ ExecClearTuple(slot); @@ -1310,7 +1310,7 @@ do_text_output_multiline(TupOutputState *tstate, const char *txt) void end_tup_output(TupOutputState *tstate) { - (*tstate->dest->rShutdown) (tstate->dest); + tstate->dest->rShutdown(tstate->dest); /* note that destroying the dest is not ours to do */ ExecDropSingleTupleTableSlot(tstate->slot); pfree(tstate); diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index 9528393976..ee6c4af055 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -813,7 +813,7 @@ ShutdownExprContext(ExprContext *econtext, bool isCommit) { econtext->ecxt_callbacks = ecxt_callback->next; if (isCommit) - (*ecxt_callback->function) (ecxt_callback->arg); + ecxt_callback->function(ecxt_callback->arg); pfree(ecxt_callback); } diff --git a/src/backend/executor/functions.c b/src/backend/executor/functions.c index b7ac5f7432..42a4ca94e9 100644 --- a/src/backend/executor/functions.c +++ b/src/backend/executor/functions.c @@ -886,7 +886,7 @@ postquel_end(execution_state *es) ExecutorEnd(es->qd); } - (*es->qd->dest->rDestroy) (es->qd->dest); + es->qd->dest->rDestroy(es->qd->dest); FreeQueryDesc(es->qd); es->qd = NULL; diff --git a/src/backend/nodes/params.c b/src/backend/nodes/params.c index 110732081b..51429af1e3 100644 --- 
a/src/backend/nodes/params.c +++ b/src/backend/nodes/params.c @@ -73,7 +73,7 @@ copyParamList(ParamListInfo from) /* give hook a chance in case parameter is dynamic */ if (!OidIsValid(oprm->ptype) && from->paramFetch != NULL) - (*from->paramFetch) (from, i + 1); + from->paramFetch(from, i + 1); /* flat-copy the parameter info */ *nprm = *oprm; @@ -115,7 +115,7 @@ EstimateParamListSpace(ParamListInfo paramLI) { /* give hook a chance in case parameter is dynamic */ if (!OidIsValid(prm->ptype) && paramLI->paramFetch != NULL) - (*paramLI->paramFetch) (paramLI, i + 1); + paramLI->paramFetch(paramLI, i + 1); typeOid = prm->ptype; } @@ -184,7 +184,7 @@ SerializeParamList(ParamListInfo paramLI, char **start_address) { /* give hook a chance in case parameter is dynamic */ if (!OidIsValid(prm->ptype) && paramLI->paramFetch != NULL) - (*paramLI->paramFetch) (paramLI, i + 1); + paramLI->paramFetch(paramLI, i + 1); typeOid = prm->ptype; } diff --git a/src/backend/parser/parse_coerce.c b/src/backend/parser/parse_coerce.c index e95cee1ebf..e79ad26e71 100644 --- a/src/backend/parser/parse_coerce.c +++ b/src/backend/parser/parse_coerce.c @@ -369,7 +369,7 @@ coerce_type(ParseState *pstate, Node *node, * transformed node (very possibly the same Param node), or return * NULL to indicate we should proceed with normal coercion. */ - result = (*pstate->p_coerce_param_hook) (pstate, + result = pstate->p_coerce_param_hook(pstate, (Param *) node, targetTypeId, targetTypeMod, diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c index 6d8cb07766..1aaa5244e6 100644 --- a/src/backend/parser/parse_expr.c +++ b/src/backend/parser/parse_expr.c @@ -527,7 +527,7 @@ transformColumnRef(ParseState *pstate, ColumnRef *cref) */ if (pstate->p_pre_columnref_hook != NULL) { - node = (*pstate->p_pre_columnref_hook) (pstate, cref); + node = pstate->p_pre_columnref_hook(pstate, cref); if (node != NULL) return node; } @@ -758,7 +758,7 @@ transformColumnRef(ParseState *pstate, ColumnRef *cref) { Node *hookresult; - hookresult = (*pstate->p_post_columnref_hook) (pstate, cref, node); + hookresult = pstate->p_post_columnref_hook(pstate, cref, node); if (node == NULL) node = hookresult; else if (hookresult != NULL) @@ -813,7 +813,7 @@ transformParamRef(ParseState *pstate, ParamRef *pref) * call it. If not, or if the hook returns NULL, throw a generic error. 
*/ if (pstate->p_paramref_hook != NULL) - result = (*pstate->p_paramref_hook) (pstate, pref); + result = pstate->p_paramref_hook(pstate, pref); else result = NULL; @@ -2585,9 +2585,9 @@ transformCurrentOfExpr(ParseState *pstate, CurrentOfExpr *cexpr) /* See if there is a translation available from a parser hook */ if (pstate->p_pre_columnref_hook != NULL) - node = (*pstate->p_pre_columnref_hook) (pstate, cref); + node = pstate->p_pre_columnref_hook(pstate, cref); if (node == NULL && pstate->p_post_columnref_hook != NULL) - node = (*pstate->p_post_columnref_hook) (pstate, cref, NULL); + node = pstate->p_post_columnref_hook(pstate, cref, NULL); /* * XXX Should we throw an error if we get a translation that isn't a diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c index fce863600c..2547524025 100644 --- a/src/backend/parser/parse_target.c +++ b/src/backend/parser/parse_target.c @@ -1108,7 +1108,7 @@ ExpandColumnRefStar(ParseState *pstate, ColumnRef *cref, { Node *node; - node = (*pstate->p_pre_columnref_hook) (pstate, cref); + node = pstate->p_pre_columnref_hook(pstate, cref); if (node != NULL) return ExpandRowReference(pstate, node, make_target_entry); } @@ -1163,7 +1163,7 @@ ExpandColumnRefStar(ParseState *pstate, ColumnRef *cref, { Node *node; - node = (*pstate->p_post_columnref_hook) (pstate, cref, + node = pstate->p_post_columnref_hook(pstate, cref, (Node *) rte); if (node != NULL) { diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c index ba706b25b4..5c17213720 100644 --- a/src/backend/rewrite/rewriteManip.c +++ b/src/backend/rewrite/rewriteManip.c @@ -1143,7 +1143,7 @@ replace_rte_variables_mutator(Node *node, /* Found a matching variable, make the substitution */ Node *newnode; - newnode = (*context->callback) (var, context); + newnode = context->callback(var, context); /* Detect if we are adding a sublink to query */ if (!context->inserted_sublink) context->inserted_sublink = checkExprHasSubLink(newnode); diff --git a/src/backend/storage/ipc/ipc.c b/src/backend/storage/ipc/ipc.c index 90dee4f51a..dfb47e7c39 100644 --- a/src/backend/storage/ipc/ipc.c +++ b/src/backend/storage/ipc/ipc.c @@ -197,8 +197,8 @@ proc_exit_prepare(int code) * possible. 
*/ while (--on_proc_exit_index >= 0) - (*on_proc_exit_list[on_proc_exit_index].function) (code, - on_proc_exit_list[on_proc_exit_index].arg); + on_proc_exit_list[on_proc_exit_index].function(code, + on_proc_exit_list[on_proc_exit_index].arg); on_proc_exit_index = 0; } @@ -225,8 +225,8 @@ shmem_exit(int code) elog(DEBUG3, "shmem_exit(%d): %d before_shmem_exit callbacks to make", code, before_shmem_exit_index); while (--before_shmem_exit_index >= 0) - (*before_shmem_exit_list[before_shmem_exit_index].function) (code, - before_shmem_exit_list[before_shmem_exit_index].arg); + before_shmem_exit_list[before_shmem_exit_index].function(code, + before_shmem_exit_list[before_shmem_exit_index].arg); before_shmem_exit_index = 0; /* @@ -258,8 +258,8 @@ shmem_exit(int code) elog(DEBUG3, "shmem_exit(%d): %d on_shmem_exit callbacks to make", code, on_shmem_exit_index); while (--on_shmem_exit_index >= 0) - (*on_shmem_exit_list[on_shmem_exit_index].function) (code, - on_shmem_exit_list[on_shmem_exit_index].arg); + on_shmem_exit_list[on_shmem_exit_index].function(code, + on_shmem_exit_list[on_shmem_exit_index].arg); on_shmem_exit_index = 0; } diff --git a/src/backend/storage/smgr/smgr.c b/src/backend/storage/smgr/smgr.c index 0ca095c4d6..5d5b7dd95e 100644 --- a/src/backend/storage/smgr/smgr.c +++ b/src/backend/storage/smgr/smgr.c @@ -106,7 +106,7 @@ smgrinit(void) for (i = 0; i < NSmgr; i++) { if (smgrsw[i].smgr_init) - (*(smgrsw[i].smgr_init)) (); + smgrsw[i].smgr_init(); } /* register the shutdown proc */ @@ -124,7 +124,7 @@ smgrshutdown(int code, Datum arg) for (i = 0; i < NSmgr; i++) { if (smgrsw[i].smgr_shutdown) - (*(smgrsw[i].smgr_shutdown)) (); + smgrsw[i].smgr_shutdown(); } } @@ -286,7 +286,7 @@ remove_from_unowned_list(SMgrRelation reln) bool smgrexists(SMgrRelation reln, ForkNumber forknum) { - return (*(smgrsw[reln->smgr_which].smgr_exists)) (reln, forknum); + return smgrsw[reln->smgr_which].smgr_exists(reln, forknum); } /* @@ -299,7 +299,7 @@ smgrclose(SMgrRelation reln) ForkNumber forknum; for (forknum = 0; forknum <= MAX_FORKNUM; forknum++) - (*(smgrsw[reln->smgr_which].smgr_close)) (reln, forknum); + smgrsw[reln->smgr_which].smgr_close(reln, forknum); owner = reln->smgr_owner; @@ -395,7 +395,7 @@ smgrcreate(SMgrRelation reln, ForkNumber forknum, bool isRedo) reln->smgr_rnode.node.dbNode, isRedo); - (*(smgrsw[reln->smgr_which].smgr_create)) (reln, forknum, isRedo); + smgrsw[reln->smgr_which].smgr_create(reln, forknum, isRedo); } /* @@ -419,7 +419,7 @@ smgrdounlink(SMgrRelation reln, bool isRedo) /* Close the forks at smgr level */ for (forknum = 0; forknum <= MAX_FORKNUM; forknum++) - (*(smgrsw[which].smgr_close)) (reln, forknum); + smgrsw[which].smgr_close(reln, forknum); /* * Get rid of any remaining buffers for the relation. bufmgr will just @@ -451,7 +451,7 @@ smgrdounlink(SMgrRelation reln, bool isRedo) * ERROR, because we've already decided to commit or abort the current * xact. 
*/ - (*(smgrsw[which].smgr_unlink)) (rnode, InvalidForkNumber, isRedo); + smgrsw[which].smgr_unlink(rnode, InvalidForkNumber, isRedo); } /* @@ -491,7 +491,7 @@ smgrdounlinkall(SMgrRelation *rels, int nrels, bool isRedo) /* Close the forks at smgr level */ for (forknum = 0; forknum <= MAX_FORKNUM; forknum++) - (*(smgrsw[which].smgr_close)) (rels[i], forknum); + smgrsw[which].smgr_close(rels[i], forknum); } /* @@ -529,7 +529,7 @@ smgrdounlinkall(SMgrRelation *rels, int nrels, bool isRedo) int which = rels[i]->smgr_which; for (forknum = 0; forknum <= MAX_FORKNUM; forknum++) - (*(smgrsw[which].smgr_unlink)) (rnodes[i], forknum, isRedo); + smgrsw[which].smgr_unlink(rnodes[i], forknum, isRedo); } pfree(rnodes); @@ -552,7 +552,7 @@ smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum, bool isRedo) int which = reln->smgr_which; /* Close the fork at smgr level */ - (*(smgrsw[which].smgr_close)) (reln, forknum); + smgrsw[which].smgr_close(reln, forknum); /* * Get rid of any remaining buffers for the fork. bufmgr will just drop @@ -584,7 +584,7 @@ smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum, bool isRedo) * ERROR, because we've already decided to commit or abort the current * xact. */ - (*(smgrsw[which].smgr_unlink)) (rnode, forknum, isRedo); + smgrsw[which].smgr_unlink(rnode, forknum, isRedo); } /* @@ -600,7 +600,7 @@ void smgrextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, char *buffer, bool skipFsync) { - (*(smgrsw[reln->smgr_which].smgr_extend)) (reln, forknum, blocknum, + smgrsw[reln->smgr_which].smgr_extend(reln, forknum, blocknum, buffer, skipFsync); } @@ -610,7 +610,7 @@ smgrextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, void smgrprefetch(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum) { - (*(smgrsw[reln->smgr_which].smgr_prefetch)) (reln, forknum, blocknum); + smgrsw[reln->smgr_which].smgr_prefetch(reln, forknum, blocknum); } /* @@ -625,7 +625,7 @@ void smgrread(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, char *buffer) { - (*(smgrsw[reln->smgr_which].smgr_read)) (reln, forknum, blocknum, buffer); + smgrsw[reln->smgr_which].smgr_read(reln, forknum, blocknum, buffer); } /* @@ -647,7 +647,7 @@ void smgrwrite(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, char *buffer, bool skipFsync) { - (*(smgrsw[reln->smgr_which].smgr_write)) (reln, forknum, blocknum, + smgrsw[reln->smgr_which].smgr_write(reln, forknum, blocknum, buffer, skipFsync); } @@ -660,7 +660,7 @@ void smgrwriteback(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, BlockNumber nblocks) { - (*(smgrsw[reln->smgr_which].smgr_writeback)) (reln, forknum, blocknum, + smgrsw[reln->smgr_which].smgr_writeback(reln, forknum, blocknum, nblocks); } @@ -671,7 +671,7 @@ smgrwriteback(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, BlockNumber smgrnblocks(SMgrRelation reln, ForkNumber forknum) { - return (*(smgrsw[reln->smgr_which].smgr_nblocks)) (reln, forknum); + return smgrsw[reln->smgr_which].smgr_nblocks(reln, forknum); } /* @@ -704,7 +704,7 @@ smgrtruncate(SMgrRelation reln, ForkNumber forknum, BlockNumber nblocks) /* * Do the truncation. 
*/ - (*(smgrsw[reln->smgr_which].smgr_truncate)) (reln, forknum, nblocks); + smgrsw[reln->smgr_which].smgr_truncate(reln, forknum, nblocks); } /* @@ -733,7 +733,7 @@ smgrtruncate(SMgrRelation reln, ForkNumber forknum, BlockNumber nblocks) void smgrimmedsync(SMgrRelation reln, ForkNumber forknum) { - (*(smgrsw[reln->smgr_which].smgr_immedsync)) (reln, forknum); + smgrsw[reln->smgr_which].smgr_immedsync(reln, forknum); } @@ -748,7 +748,7 @@ smgrpreckpt(void) for (i = 0; i < NSmgr; i++) { if (smgrsw[i].smgr_pre_ckpt) - (*(smgrsw[i].smgr_pre_ckpt)) (); + smgrsw[i].smgr_pre_ckpt(); } } @@ -763,7 +763,7 @@ smgrsync(void) for (i = 0; i < NSmgr; i++) { if (smgrsw[i].smgr_sync) - (*(smgrsw[i].smgr_sync)) (); + smgrsw[i].smgr_sync(); } } @@ -778,7 +778,7 @@ smgrpostckpt(void) for (i = 0; i < NSmgr; i++) { if (smgrsw[i].smgr_post_ckpt) - (*(smgrsw[i].smgr_post_ckpt)) (); + smgrsw[i].smgr_post_ckpt(); } } diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index c10d891260..4eb85720a7 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -1114,7 +1114,7 @@ exec_simple_query(const char *query_string) receiver, completionTag); - (*receiver->rDestroy) (receiver); + receiver->rDestroy(receiver); PortalDrop(portal, false); @@ -2002,7 +2002,7 @@ exec_execute_message(const char *portal_name, long max_rows) receiver, completionTag); - (*receiver->rDestroy) (receiver); + receiver->rDestroy(receiver); if (completed) { diff --git a/src/backend/tcop/pquery.c b/src/backend/tcop/pquery.c index 7e820d05dd..cc462efc37 100644 --- a/src/backend/tcop/pquery.c +++ b/src/backend/tcop/pquery.c @@ -1049,7 +1049,7 @@ FillPortalStore(Portal portal, bool isTopLevel) if (completionTag[0] != '\0') portal->commandTag = pstrdup(completionTag); - (*treceiver->rDestroy) (treceiver); + treceiver->rDestroy(treceiver); } /* @@ -1073,7 +1073,7 @@ RunFromStore(Portal portal, ScanDirection direction, uint64 count, slot = MakeSingleTupleTableSlot(portal->tupDesc); - (*dest->rStartup) (dest, CMD_SELECT, portal->tupDesc); + dest->rStartup(dest, CMD_SELECT, portal->tupDesc); if (ScanDirectionIsNoMovement(direction)) { @@ -1103,7 +1103,7 @@ RunFromStore(Portal portal, ScanDirection direction, uint64 count, * has closed and no more tuples can be sent. If that's the case, * end the loop. */ - if (!((*dest->receiveSlot) (slot, dest))) + if (!dest->receiveSlot(slot, dest)) break; ExecClearTuple(slot); @@ -1119,7 +1119,7 @@ RunFromStore(Portal portal, ScanDirection direction, uint64 count, } } - (*dest->rShutdown) (dest); + dest->rShutdown(dest); ExecDropSingleTupleTableSlot(slot); diff --git a/src/backend/utils/adt/array_typanalyze.c b/src/backend/utils/adt/array_typanalyze.c index 78153d232f..470ef0c4b0 100644 --- a/src/backend/utils/adt/array_typanalyze.c +++ b/src/backend/utils/adt/array_typanalyze.c @@ -247,7 +247,7 @@ compute_array_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc, * temporarily install that. 
*/ stats->extra_data = extra_data->std_extra_data; - (*extra_data->std_compute_stats) (stats, fetchfunc, samplerows, totalrows); + extra_data->std_compute_stats(stats, fetchfunc, samplerows, totalrows); stats->extra_data = extra_data; /* diff --git a/src/backend/utils/adt/expandeddatum.c b/src/backend/utils/adt/expandeddatum.c index 3d77686af7..49854b39f4 100644 --- a/src/backend/utils/adt/expandeddatum.c +++ b/src/backend/utils/adt/expandeddatum.c @@ -74,14 +74,14 @@ EOH_init_header(ExpandedObjectHeader *eohptr, Size EOH_get_flat_size(ExpandedObjectHeader *eohptr) { - return (*eohptr->eoh_methods->get_flat_size) (eohptr); + return eohptr->eoh_methods->get_flat_size(eohptr); } void EOH_flatten_into(ExpandedObjectHeader *eohptr, void *result, Size allocated_size) { - (*eohptr->eoh_methods->flatten_into) (eohptr, result, allocated_size); + eohptr->eoh_methods->flatten_into(eohptr, result, allocated_size); } /* diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c index d92ffa83d9..619547d6bf 100644 --- a/src/backend/utils/adt/jsonfuncs.c +++ b/src/backend/utils/adt/jsonfuncs.c @@ -4860,7 +4860,7 @@ iterate_string_values_scalar(void *state, char *token, JsonTokenType tokentype) IterateJsonStringValuesState *_state = (IterateJsonStringValuesState *) state; if (tokentype == JSON_TOKEN_STRING) - (*_state->action) (_state->action_state, token, strlen(token)); + _state->action(_state->action_state, token, strlen(token)); } /* @@ -5011,7 +5011,7 @@ transform_string_values_scalar(void *state, char *token, JsonTokenType tokentype if (tokentype == JSON_TOKEN_STRING) { - text *out = (*_state->action) (_state->action_state, token, strlen(token)); + text *out = _state->action(_state->action_state, token, strlen(token)); escape_json(_state->strval, text_to_cstring(out)); } diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c index d0e54b8535..0e61b4b79f 100644 --- a/src/backend/utils/cache/inval.c +++ b/src/backend/utils/cache/inval.c @@ -590,7 +590,7 @@ LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg) { struct RELCACHECALLBACK *ccitem = relcache_callback_list + i; - (*ccitem->function) (ccitem->arg, msg->rc.relId); + ccitem->function(ccitem->arg, msg->rc.relId); } } } @@ -650,14 +650,14 @@ InvalidateSystemCaches(void) { struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i; - (*ccitem->function) (ccitem->arg, ccitem->id, 0); + ccitem->function(ccitem->arg, ccitem->id, 0); } for (i = 0; i < relcache_callback_count; i++) { struct RELCACHECALLBACK *ccitem = relcache_callback_list + i; - (*ccitem->function) (ccitem->arg, InvalidOid); + ccitem->function(ccitem->arg, InvalidOid); } } @@ -1460,7 +1460,7 @@ CallSyscacheCallbacks(int cacheid, uint32 hashvalue) struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i; Assert(ccitem->id == cacheid); - (*ccitem->function) (ccitem->arg, cacheid, hashvalue); + ccitem->function(ccitem->arg, cacheid, hashvalue); i = ccitem->link - 1; } } diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c index 918db0a8f2..977c03834a 100644 --- a/src/backend/utils/error/elog.c +++ b/src/backend/utils/error/elog.c @@ -435,7 +435,7 @@ errfinish(int dummy,...) for (econtext = error_context_stack; econtext != NULL; econtext = econtext->previous) - (*econtext->callback) (econtext->arg); + econtext->callback(econtext->arg); /* * If ERROR (not more nor less) we pass it off to the current handler. 
@@ -1837,7 +1837,7 @@ GetErrorContextStack(void) for (econtext = error_context_stack; econtext != NULL; econtext = econtext->previous) - (*econtext->callback) (econtext->arg); + econtext->callback(econtext->arg); /* * Clean ourselves off the stack, any allocations done should have been diff --git a/src/backend/utils/mb/mbutils.c b/src/backend/utils/mb/mbutils.c index ca7f129ebe..c4fbe0903b 100644 --- a/src/backend/utils/mb/mbutils.c +++ b/src/backend/utils/mb/mbutils.c @@ -726,14 +726,14 @@ perform_default_encoding_conversion(const char *src, int len, int pg_mb2wchar(const char *from, pg_wchar *to) { - return (*pg_wchar_table[DatabaseEncoding->encoding].mb2wchar_with_len) ((const unsigned char *) from, to, strlen(from)); + return pg_wchar_table[DatabaseEncoding->encoding].mb2wchar_with_len((const unsigned char *) from, to, strlen(from)); } /* convert a multibyte string to a wchar with a limited length */ int pg_mb2wchar_with_len(const char *from, pg_wchar *to, int len) { - return (*pg_wchar_table[DatabaseEncoding->encoding].mb2wchar_with_len) ((const unsigned char *) from, to, len); + return pg_wchar_table[DatabaseEncoding->encoding].mb2wchar_with_len((const unsigned char *) from, to, len); } /* same, with any encoding */ @@ -741,21 +741,21 @@ int pg_encoding_mb2wchar_with_len(int encoding, const char *from, pg_wchar *to, int len) { - return (*pg_wchar_table[encoding].mb2wchar_with_len) ((const unsigned char *) from, to, len); + return pg_wchar_table[encoding].mb2wchar_with_len((const unsigned char *) from, to, len); } /* convert a wchar string to a multibyte */ int pg_wchar2mb(const pg_wchar *from, char *to) { - return (*pg_wchar_table[DatabaseEncoding->encoding].wchar2mb_with_len) (from, (unsigned char *) to, pg_wchar_strlen(from)); + return pg_wchar_table[DatabaseEncoding->encoding].wchar2mb_with_len(from, (unsigned char *) to, pg_wchar_strlen(from)); } /* convert a wchar string to a multibyte with a limited length */ int pg_wchar2mb_with_len(const pg_wchar *from, char *to, int len) { - return (*pg_wchar_table[DatabaseEncoding->encoding].wchar2mb_with_len) (from, (unsigned char *) to, len); + return pg_wchar_table[DatabaseEncoding->encoding].wchar2mb_with_len(from, (unsigned char *) to, len); } /* same, with any encoding */ @@ -763,21 +763,21 @@ int pg_encoding_wchar2mb_with_len(int encoding, const pg_wchar *from, char *to, int len) { - return (*pg_wchar_table[encoding].wchar2mb_with_len) (from, (unsigned char *) to, len); + return pg_wchar_table[encoding].wchar2mb_with_len(from, (unsigned char *) to, len); } /* returns the byte length of a multibyte character */ int pg_mblen(const char *mbstr) { - return ((*pg_wchar_table[DatabaseEncoding->encoding].mblen) ((const unsigned char *) mbstr)); + return pg_wchar_table[DatabaseEncoding->encoding].mblen((const unsigned char *) mbstr); } /* returns the display length of a multibyte character */ int pg_dsplen(const char *mbstr) { - return ((*pg_wchar_table[DatabaseEncoding->encoding].dsplen) ((const unsigned char *) mbstr)); + return pg_wchar_table[DatabaseEncoding->encoding].dsplen((const unsigned char *) mbstr); } /* returns the length (counted in wchars) of a multibyte string */ diff --git a/src/backend/utils/mb/wchar.c b/src/backend/utils/mb/wchar.c index 765815a199..a5fdda456e 100644 --- a/src/backend/utils/mb/wchar.c +++ b/src/backend/utils/mb/wchar.c @@ -1785,8 +1785,8 @@ int pg_encoding_mblen(int encoding, const char *mbstr) { return (PG_VALID_ENCODING(encoding) ? 
- ((*pg_wchar_table[encoding].mblen) ((const unsigned char *) mbstr)) : - ((*pg_wchar_table[PG_SQL_ASCII].mblen) ((const unsigned char *) mbstr))); + pg_wchar_table[encoding].mblen((const unsigned char *) mbstr) : + pg_wchar_table[PG_SQL_ASCII].mblen((const unsigned char *) mbstr)); } /* @@ -1796,8 +1796,8 @@ int pg_encoding_dsplen(int encoding, const char *mbstr) { return (PG_VALID_ENCODING(encoding) ? - ((*pg_wchar_table[encoding].dsplen) ((const unsigned char *) mbstr)) : - ((*pg_wchar_table[PG_SQL_ASCII].dsplen) ((const unsigned char *) mbstr))); + pg_wchar_table[encoding].dsplen((const unsigned char *) mbstr) : + pg_wchar_table[PG_SQL_ASCII].dsplen((const unsigned char *) mbstr)); } /* @@ -1809,8 +1809,8 @@ int pg_encoding_verifymb(int encoding, const char *mbstr, int len) { return (PG_VALID_ENCODING(encoding) ? - ((*pg_wchar_table[encoding].mbverify) ((const unsigned char *) mbstr, len)) : - ((*pg_wchar_table[PG_SQL_ASCII].mbverify) ((const unsigned char *) mbstr, len))); + pg_wchar_table[encoding].mbverify((const unsigned char *) mbstr, len) : + pg_wchar_table[PG_SQL_ASCII].mbverify((const unsigned char *) mbstr, len)); } /* diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 246fea8693..969e80f756 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -4602,7 +4602,7 @@ InitializeOneGUCOption(struct config_generic *gconf) elog(FATAL, "failed to initialize %s to %d", conf->gen.name, (int) newval); if (conf->assign_hook) - (*conf->assign_hook) (newval, extra); + conf->assign_hook(newval, extra); *conf->variable = conf->reset_val = newval; conf->gen.extra = conf->reset_extra = extra; break; @@ -4620,7 +4620,7 @@ InitializeOneGUCOption(struct config_generic *gconf) elog(FATAL, "failed to initialize %s to %d", conf->gen.name, newval); if (conf->assign_hook) - (*conf->assign_hook) (newval, extra); + conf->assign_hook(newval, extra); *conf->variable = conf->reset_val = newval; conf->gen.extra = conf->reset_extra = extra; break; @@ -4638,7 +4638,7 @@ InitializeOneGUCOption(struct config_generic *gconf) elog(FATAL, "failed to initialize %s to %g", conf->gen.name, newval); if (conf->assign_hook) - (*conf->assign_hook) (newval, extra); + conf->assign_hook(newval, extra); *conf->variable = conf->reset_val = newval; conf->gen.extra = conf->reset_extra = extra; break; @@ -4660,7 +4660,7 @@ InitializeOneGUCOption(struct config_generic *gconf) elog(FATAL, "failed to initialize %s to \"%s\"", conf->gen.name, newval ? 
newval : ""); if (conf->assign_hook) - (*conf->assign_hook) (newval, extra); + conf->assign_hook(newval, extra); *conf->variable = conf->reset_val = newval; conf->gen.extra = conf->reset_extra = extra; break; @@ -4676,7 +4676,7 @@ InitializeOneGUCOption(struct config_generic *gconf) elog(FATAL, "failed to initialize %s to %d", conf->gen.name, newval); if (conf->assign_hook) - (*conf->assign_hook) (newval, extra); + conf->assign_hook(newval, extra); *conf->variable = conf->reset_val = newval; conf->gen.extra = conf->reset_extra = extra; break; @@ -4901,7 +4901,7 @@ ResetAllOptions(void) struct config_bool *conf = (struct config_bool *) gconf; if (conf->assign_hook) - (*conf->assign_hook) (conf->reset_val, + conf->assign_hook(conf->reset_val, conf->reset_extra); *conf->variable = conf->reset_val; set_extra_field(&conf->gen, &conf->gen.extra, @@ -4913,7 +4913,7 @@ ResetAllOptions(void) struct config_int *conf = (struct config_int *) gconf; if (conf->assign_hook) - (*conf->assign_hook) (conf->reset_val, + conf->assign_hook(conf->reset_val, conf->reset_extra); *conf->variable = conf->reset_val; set_extra_field(&conf->gen, &conf->gen.extra, @@ -4925,7 +4925,7 @@ ResetAllOptions(void) struct config_real *conf = (struct config_real *) gconf; if (conf->assign_hook) - (*conf->assign_hook) (conf->reset_val, + conf->assign_hook(conf->reset_val, conf->reset_extra); *conf->variable = conf->reset_val; set_extra_field(&conf->gen, &conf->gen.extra, @@ -4937,7 +4937,7 @@ ResetAllOptions(void) struct config_string *conf = (struct config_string *) gconf; if (conf->assign_hook) - (*conf->assign_hook) (conf->reset_val, + conf->assign_hook(conf->reset_val, conf->reset_extra); set_string_field(conf, conf->variable, conf->reset_val); set_extra_field(&conf->gen, &conf->gen.extra, @@ -4949,7 +4949,7 @@ ResetAllOptions(void) struct config_enum *conf = (struct config_enum *) gconf; if (conf->assign_hook) - (*conf->assign_hook) (conf->reset_val, + conf->assign_hook(conf->reset_val, conf->reset_extra); *conf->variable = conf->reset_val; set_extra_field(&conf->gen, &conf->gen.extra, @@ -5240,7 +5240,7 @@ AtEOXact_GUC(bool isCommit, int nestLevel) conf->gen.extra != newextra) { if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); *conf->variable = newval; set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -5258,7 +5258,7 @@ AtEOXact_GUC(bool isCommit, int nestLevel) conf->gen.extra != newextra) { if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); *conf->variable = newval; set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -5276,7 +5276,7 @@ AtEOXact_GUC(bool isCommit, int nestLevel) conf->gen.extra != newextra) { if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); *conf->variable = newval; set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -5294,7 +5294,7 @@ AtEOXact_GUC(bool isCommit, int nestLevel) conf->gen.extra != newextra) { if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); set_string_field(conf, conf->variable, newval); set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -5321,7 +5321,7 @@ AtEOXact_GUC(bool isCommit, int nestLevel) conf->gen.extra != newextra) { if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); *conf->variable = newval; set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -6211,7 +6211,7 @@ 
set_config_option(const char *name, const char *value, push_old_value(&conf->gen, action); if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); *conf->variable = newval; set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -6301,7 +6301,7 @@ set_config_option(const char *name, const char *value, push_old_value(&conf->gen, action); if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); *conf->variable = newval; set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -6391,7 +6391,7 @@ set_config_option(const char *name, const char *value, push_old_value(&conf->gen, action); if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); *conf->variable = newval; set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -6499,7 +6499,7 @@ set_config_option(const char *name, const char *value, push_old_value(&conf->gen, action); if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); set_string_field(conf, conf->variable, newval); set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -6594,7 +6594,7 @@ set_config_option(const char *name, const char *value, push_old_value(&conf->gen, action); if (conf->assign_hook) - (*conf->assign_hook) (newval, newextra); + conf->assign_hook(newval, newextra); *conf->variable = newval; set_extra_field(&conf->gen, &conf->gen.extra, newextra); @@ -8653,7 +8653,7 @@ _ShowOption(struct config_generic *record, bool use_units) struct config_bool *conf = (struct config_bool *) record; if (conf->show_hook) - val = (*conf->show_hook) (); + val = conf->show_hook(); else val = *conf->variable ? "on" : "off"; } @@ -8664,7 +8664,7 @@ _ShowOption(struct config_generic *record, bool use_units) struct config_int *conf = (struct config_int *) record; if (conf->show_hook) - val = (*conf->show_hook) (); + val = conf->show_hook(); else { /* @@ -8694,7 +8694,7 @@ _ShowOption(struct config_generic *record, bool use_units) struct config_real *conf = (struct config_real *) record; if (conf->show_hook) - val = (*conf->show_hook) (); + val = conf->show_hook(); else { snprintf(buffer, sizeof(buffer), "%g", @@ -8709,7 +8709,7 @@ _ShowOption(struct config_generic *record, bool use_units) struct config_string *conf = (struct config_string *) record; if (conf->show_hook) - val = (*conf->show_hook) (); + val = conf->show_hook(); else if (*conf->variable && **conf->variable) val = *conf->variable; else @@ -8722,7 +8722,7 @@ _ShowOption(struct config_generic *record, bool use_units) struct config_enum *conf = (struct config_enum *) record; if (conf->show_hook) - val = (*conf->show_hook) (); + val = conf->show_hook(); else val = config_enum_lookup_by_value(conf, *conf->variable); } @@ -9807,7 +9807,7 @@ call_bool_check_hook(struct config_bool *conf, bool *newval, void **extra, GUC_check_errdetail_string = NULL; GUC_check_errhint_string = NULL; - if (!(*conf->check_hook) (newval, extra, source)) + if (!conf->check_hook(newval, extra, source)) { ereport(elevel, (errcode(GUC_check_errcode_value), @@ -9841,7 +9841,7 @@ call_int_check_hook(struct config_int *conf, int *newval, void **extra, GUC_check_errdetail_string = NULL; GUC_check_errhint_string = NULL; - if (!(*conf->check_hook) (newval, extra, source)) + if (!conf->check_hook(newval, extra, source)) { ereport(elevel, (errcode(GUC_check_errcode_value), @@ -9875,7 +9875,7 @@ call_real_check_hook(struct config_real *conf, double *newval, 
void **extra, GUC_check_errdetail_string = NULL; GUC_check_errhint_string = NULL; - if (!(*conf->check_hook) (newval, extra, source)) + if (!conf->check_hook(newval, extra, source)) { ereport(elevel, (errcode(GUC_check_errcode_value), @@ -9909,7 +9909,7 @@ call_string_check_hook(struct config_string *conf, char **newval, void **extra, GUC_check_errdetail_string = NULL; GUC_check_errhint_string = NULL; - if (!(*conf->check_hook) (newval, extra, source)) + if (!conf->check_hook(newval, extra, source)) { ereport(elevel, (errcode(GUC_check_errcode_value), @@ -9943,7 +9943,7 @@ call_enum_check_hook(struct config_enum *conf, int *newval, void **extra, GUC_check_errdetail_string = NULL; GUC_check_errhint_string = NULL; - if (!(*conf->check_hook) (newval, extra, source)) + if (!conf->check_hook(newval, extra, source)) { ereport(elevel, (errcode(GUC_check_errcode_value), diff --git a/src/backend/utils/misc/timeout.c b/src/backend/utils/misc/timeout.c index d7fc040ad3..75159ea5b1 100644 --- a/src/backend/utils/misc/timeout.c +++ b/src/backend/utils/misc/timeout.c @@ -302,7 +302,7 @@ handle_sig_alarm(SIGNAL_ARGS) this_timeout->indicator = true; /* And call its handler function */ - (*this_timeout->timeout_handler) (); + this_timeout->timeout_handler(); /* * The handler might not take negligible time (CheckDeadLock diff --git a/src/backend/utils/mmgr/README b/src/backend/utils/mmgr/README index 387c337985..0ab81bd80f 100644 --- a/src/backend/utils/mmgr/README +++ b/src/backend/utils/mmgr/README @@ -402,7 +402,7 @@ GetMemoryChunkContext()) and then invoke the corresponding method for the context - (*context->methods->free_p) (p); + context->methods->free_p(p); More Control Over aset.c Behavior diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c index 5d173d7e60..cd696f16bc 100644 --- a/src/backend/utils/mmgr/mcxt.c +++ b/src/backend/utils/mmgr/mcxt.c @@ -159,7 +159,7 @@ MemoryContextResetOnly(MemoryContext context) if (!context->isReset) { MemoryContextCallResetCallbacks(context); - (*context->methods->reset) (context); + context->methods->reset(context); context->isReset = true; VALGRIND_DESTROY_MEMPOOL(context); VALGRIND_CREATE_MEMPOOL(context, 0, false); @@ -222,7 +222,7 @@ MemoryContextDelete(MemoryContext context) */ MemoryContextSetParent(context, NULL); - (*context->methods->delete_context) (context); + context->methods->delete_context(context); VALGRIND_DESTROY_MEMPOOL(context); pfree(context); } @@ -291,7 +291,7 @@ MemoryContextCallResetCallbacks(MemoryContext context) while ((cb = context->reset_cbs) != NULL) { context->reset_cbs = cb->next; - (*cb->func) (cb->arg); + cb->func(cb->arg); } } @@ -391,8 +391,7 @@ GetMemoryChunkSpace(void *pointer) { MemoryContext context = GetMemoryChunkContext(pointer); - return (context->methods->get_chunk_space) (context, - pointer); + return context->methods->get_chunk_space(context, pointer); } /* @@ -423,7 +422,7 @@ MemoryContextIsEmpty(MemoryContext context) if (context->firstchild != NULL) return false; /* Otherwise use the type-specific inquiry */ - return (*context->methods->is_empty) (context); + return context->methods->is_empty(context); } /* @@ -481,7 +480,7 @@ MemoryContextStatsInternal(MemoryContext context, int level, AssertArg(MemoryContextIsValid(context)); /* Examine the context itself */ - (*context->methods->stats) (context, level, print, totals); + context->methods->stats(context, level, print, totals); /* * Examine children. 
If there are more than max_children of them, we do @@ -546,7 +545,7 @@ MemoryContextCheck(MemoryContext context) AssertArg(MemoryContextIsValid(context)); - (*context->methods->check) (context); + context->methods->check(context); for (child = context->firstchild; child != NULL; child = child->nextchild) MemoryContextCheck(child); } @@ -675,7 +674,7 @@ MemoryContextCreate(NodeTag tag, Size size, strcpy(node->name, name); /* Type-specific routine finishes any other essential initialization */ - (*node->methods->init) (node); + node->methods->init(node); /* OK to link node to parent (if any) */ /* Could use MemoryContextSetParent here, but doesn't seem worthwhile */ @@ -716,7 +715,7 @@ MemoryContextAlloc(MemoryContext context, Size size) context->isReset = false; - ret = (*context->methods->alloc) (context, size); + ret = context->methods->alloc(context, size); if (ret == NULL) { MemoryContextStats(TopMemoryContext); @@ -751,7 +750,7 @@ MemoryContextAllocZero(MemoryContext context, Size size) context->isReset = false; - ret = (*context->methods->alloc) (context, size); + ret = context->methods->alloc(context, size); if (ret == NULL) { MemoryContextStats(TopMemoryContext); @@ -788,7 +787,7 @@ MemoryContextAllocZeroAligned(MemoryContext context, Size size) context->isReset = false; - ret = (*context->methods->alloc) (context, size); + ret = context->methods->alloc(context, size); if (ret == NULL) { MemoryContextStats(TopMemoryContext); @@ -823,7 +822,7 @@ MemoryContextAllocExtended(MemoryContext context, Size size, int flags) context->isReset = false; - ret = (*context->methods->alloc) (context, size); + ret = context->methods->alloc(context, size); if (ret == NULL) { if ((flags & MCXT_ALLOC_NO_OOM) == 0) @@ -859,7 +858,7 @@ palloc(Size size) CurrentMemoryContext->isReset = false; - ret = (*CurrentMemoryContext->methods->alloc) (CurrentMemoryContext, size); + ret = CurrentMemoryContext->methods->alloc(CurrentMemoryContext, size); if (ret == NULL) { MemoryContextStats(TopMemoryContext); @@ -888,7 +887,7 @@ palloc0(Size size) CurrentMemoryContext->isReset = false; - ret = (*CurrentMemoryContext->methods->alloc) (CurrentMemoryContext, size); + ret = CurrentMemoryContext->methods->alloc(CurrentMemoryContext, size); if (ret == NULL) { MemoryContextStats(TopMemoryContext); @@ -920,7 +919,7 @@ palloc_extended(Size size, int flags) CurrentMemoryContext->isReset = false; - ret = (*CurrentMemoryContext->methods->alloc) (CurrentMemoryContext, size); + ret = CurrentMemoryContext->methods->alloc(CurrentMemoryContext, size); if (ret == NULL) { if ((flags & MCXT_ALLOC_NO_OOM) == 0) @@ -951,7 +950,7 @@ pfree(void *pointer) { MemoryContext context = GetMemoryChunkContext(pointer); - (*context->methods->free_p) (context, pointer); + context->methods->free_p(context, pointer); VALGRIND_MEMPOOL_FREE(context, pointer); } @@ -973,7 +972,7 @@ repalloc(void *pointer, Size size) /* isReset must be false already */ Assert(!context->isReset); - ret = (*context->methods->realloc) (context, pointer, size); + ret = context->methods->realloc(context, pointer, size); if (ret == NULL) { MemoryContextStats(TopMemoryContext); @@ -1007,7 +1006,7 @@ MemoryContextAllocHuge(MemoryContext context, Size size) context->isReset = false; - ret = (*context->methods->alloc) (context, size); + ret = context->methods->alloc(context, size); if (ret == NULL) { MemoryContextStats(TopMemoryContext); @@ -1041,7 +1040,7 @@ repalloc_huge(void *pointer, Size size) /* isReset must be false already */ Assert(!context->isReset); - ret = 
(*context->methods->realloc) (context, pointer, size); + ret = context->methods->realloc(context, pointer, size); if (ret == NULL) { MemoryContextStats(TopMemoryContext); diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c index 369e181709..89db08464f 100644 --- a/src/backend/utils/mmgr/portalmem.c +++ b/src/backend/utils/mmgr/portalmem.c @@ -420,7 +420,7 @@ MarkPortalDone(Portal portal) */ if (PointerIsValid(portal->cleanup)) { - (*portal->cleanup) (portal); + portal->cleanup(portal); portal->cleanup = NULL; } } @@ -448,7 +448,7 @@ MarkPortalFailed(Portal portal) */ if (PointerIsValid(portal->cleanup)) { - (*portal->cleanup) (portal); + portal->cleanup(portal); portal->cleanup = NULL; } } @@ -486,7 +486,7 @@ PortalDrop(Portal portal, bool isTopCommit) */ if (PointerIsValid(portal->cleanup)) { - (*portal->cleanup) (portal); + portal->cleanup(portal); portal->cleanup = NULL; } @@ -786,7 +786,7 @@ AtAbort_Portals(void) */ if (PointerIsValid(portal->cleanup)) { - (*portal->cleanup) (portal); + portal->cleanup(portal); portal->cleanup = NULL; } @@ -980,7 +980,7 @@ AtSubAbort_Portals(SubTransactionId mySubid, */ if (PointerIsValid(portal->cleanup)) { - (*portal->cleanup) (portal); + portal->cleanup(portal); portal->cleanup = NULL; } diff --git a/src/backend/utils/resowner/resowner.c b/src/backend/utils/resowner/resowner.c index 4a4a287148..bd19fad77e 100644 --- a/src/backend/utils/resowner/resowner.c +++ b/src/backend/utils/resowner/resowner.c @@ -672,7 +672,7 @@ ResourceOwnerReleaseInternal(ResourceOwner owner, /* Let add-on modules get a chance too */ for (item = ResourceRelease_callbacks; item; item = item->next) - (*item->callback) (phase, isCommit, isTopLevel, item->arg); + item->callback(phase, isCommit, isTopLevel, item->arg); CurrentResourceOwner = save; } diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index 8ae7515c9c..ec2fa8b9b9 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -202,7 +202,7 @@ setupRestoreWorker(Archive *AHX) { ArchiveHandle *AH = (ArchiveHandle *) AHX; - (AH->ReopenPtr) (AH); + AH->ReopenPtr(AH); } @@ -237,7 +237,7 @@ CloseArchive(Archive *AHX) int res = 0; ArchiveHandle *AH = (ArchiveHandle *) AHX; - (*AH->ClosePtr) (AH); + AH->ClosePtr(AH); /* Close the output */ if (AH->gzOut) @@ -359,7 +359,7 @@ RestoreArchive(Archive *AHX) * It's also not gonna work if we can't reopen the input file, so * let's try that immediately. */ - (AH->ReopenPtr) (AH); + AH->ReopenPtr(AH); } /* @@ -865,7 +865,7 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel) if (strcmp(te->desc, "BLOB COMMENTS") == 0) AH->outputKind = OUTPUT_OTHERDATA; - (*AH->PrintTocDataPtr) (AH, te); + AH->PrintTocDataPtr(AH, te); AH->outputKind = OUTPUT_SQLCMDS; } @@ -918,7 +918,7 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel) else AH->outputKind = OUTPUT_OTHERDATA; - (*AH->PrintTocDataPtr) (AH, te); + AH->PrintTocDataPtr(AH, te); /* * Terminate COPY if needed. 
@@ -1038,7 +1038,7 @@ WriteData(Archive *AHX, const void *data, size_t dLen) if (!AH->currToc) exit_horribly(modulename, "internal error -- WriteData cannot be called outside the context of a DataDumper routine\n"); - (*AH->WriteDataPtr) (AH, data, dLen); + AH->WriteDataPtr(AH, data, dLen); return; } @@ -1109,7 +1109,7 @@ ArchiveEntry(Archive *AHX, newToc->formatData = NULL; if (AH->ArchiveEntryPtr != NULL) - (*AH->ArchiveEntryPtr) (AH, newToc); + AH->ArchiveEntryPtr(AH, newToc); } /* Public */ @@ -1236,7 +1236,7 @@ StartBlob(Archive *AHX, Oid oid) if (!AH->StartBlobPtr) exit_horribly(modulename, "large-object output not supported in chosen format\n"); - (*AH->StartBlobPtr) (AH, AH->currToc, oid); + AH->StartBlobPtr(AH, AH->currToc, oid); return 1; } @@ -1248,7 +1248,7 @@ EndBlob(Archive *AHX, Oid oid) ArchiveHandle *AH = (ArchiveHandle *) AHX; if (AH->EndBlobPtr) - (*AH->EndBlobPtr) (AH, AH->currToc, oid); + AH->EndBlobPtr(AH, AH->currToc, oid); return 1; } @@ -1920,12 +1920,12 @@ WriteOffset(ArchiveHandle *AH, pgoff_t o, int wasSet) int off; /* Save the flag */ - (*AH->WriteBytePtr) (AH, wasSet); + AH->WriteBytePtr(AH, wasSet); /* Write out pgoff_t smallest byte first, prevents endian mismatch */ for (off = 0; off < sizeof(pgoff_t); off++) { - (*AH->WriteBytePtr) (AH, o & 0xFF); + AH->WriteBytePtr(AH, o & 0xFF); o >>= 8; } return sizeof(pgoff_t) + 1; @@ -1964,7 +1964,7 @@ ReadOffset(ArchiveHandle *AH, pgoff_t * o) * This used to be handled by a negative or zero pointer, now we use an * extra byte specifically for the state. */ - offsetFlg = (*AH->ReadBytePtr) (AH) & 0xFF; + offsetFlg = AH->ReadBytePtr(AH) & 0xFF; switch (offsetFlg) { @@ -1984,10 +1984,10 @@ ReadOffset(ArchiveHandle *AH, pgoff_t * o) for (off = 0; off < AH->offSize; off++) { if (off < sizeof(pgoff_t)) - *o |= ((pgoff_t) ((*AH->ReadBytePtr) (AH))) << (off * 8); + *o |= ((pgoff_t) (AH->ReadBytePtr(AH))) << (off * 8); else { - if ((*AH->ReadBytePtr) (AH) != 0) + if (AH->ReadBytePtr(AH) != 0) exit_horribly(modulename, "file offset in dump file is too large\n"); } } @@ -2011,15 +2011,15 @@ WriteInt(ArchiveHandle *AH, int i) /* SIGN byte */ if (i < 0) { - (*AH->WriteBytePtr) (AH, 1); + AH->WriteBytePtr(AH, 1); i = -i; } else - (*AH->WriteBytePtr) (AH, 0); + AH->WriteBytePtr(AH, 0); for (b = 0; b < AH->intSize; b++) { - (*AH->WriteBytePtr) (AH, i & 0xFF); + AH->WriteBytePtr(AH, i & 0xFF); i >>= 8; } @@ -2037,11 +2037,11 @@ ReadInt(ArchiveHandle *AH) if (AH->version > K_VERS_1_0) /* Read a sign byte */ - sign = (*AH->ReadBytePtr) (AH); + sign = AH->ReadBytePtr(AH); for (b = 0; b < AH->intSize; b++) { - bv = (*AH->ReadBytePtr) (AH) & 0xFF; + bv = AH->ReadBytePtr(AH) & 0xFF; if (bv != 0) res = res + (bv << bitShift); bitShift += 8; @@ -2063,7 +2063,7 @@ WriteStr(ArchiveHandle *AH, const char *c) int len = strlen(c); res = WriteInt(AH, len); - (*AH->WriteBufPtr) (AH, c, len); + AH->WriteBufPtr(AH, c, len); res += len; } else @@ -2084,7 +2084,7 @@ ReadStr(ArchiveHandle *AH) else { buf = (char *) pg_malloc(l + 1); - (*AH->ReadBufPtr) (AH, (void *) buf, l); + AH->ReadBufPtr(AH, (void *) buf, l); buf[l] = '\0'; } @@ -2495,7 +2495,7 @@ WriteDataChunksForTocEntry(ArchiveHandle *AH, TocEntry *te) /* * The user-provided DataDumper routine needs to call AH->WriteData */ - (*te->dataDumper) ((Archive *) AH, te->dataDumperArg); + te->dataDumper((Archive *) AH, te->dataDumperArg); if (endPtr != NULL) (*endPtr) (AH, te); @@ -2557,7 +2557,7 @@ WriteToc(ArchiveHandle *AH) WriteStr(AH, NULL); /* Terminate List */ if (AH->WriteExtraTocPtr) - 
(*AH->WriteExtraTocPtr) (AH, te); + AH->WriteExtraTocPtr(AH, te); } } @@ -2699,7 +2699,7 @@ ReadToc(ArchiveHandle *AH) } if (AH->ReadExtraTocPtr) - (*AH->ReadExtraTocPtr) (AH, te); + AH->ReadExtraTocPtr(AH, te); ahlog(AH, 3, "read TOC entry %d (ID %d) for %s %s\n", i, te->dumpId, te->desc, te->tag); @@ -3520,7 +3520,7 @@ _printTocEntry(ArchiveHandle *AH, TocEntry *te, bool isData) ahprintf(AH, "\n"); if (AH->PrintExtraTocPtr != NULL) - (*AH->PrintExtraTocPtr) (AH, te); + AH->PrintExtraTocPtr(AH, te); ahprintf(AH, "--\n\n"); } @@ -3648,13 +3648,13 @@ WriteHead(ArchiveHandle *AH) { struct tm crtm; - (*AH->WriteBufPtr) (AH, "PGDMP", 5); /* Magic code */ - (*AH->WriteBytePtr) (AH, ARCHIVE_MAJOR(AH->version)); - (*AH->WriteBytePtr) (AH, ARCHIVE_MINOR(AH->version)); - (*AH->WriteBytePtr) (AH, ARCHIVE_REV(AH->version)); - (*AH->WriteBytePtr) (AH, AH->intSize); - (*AH->WriteBytePtr) (AH, AH->offSize); - (*AH->WriteBytePtr) (AH, AH->format); + AH->WriteBufPtr(AH, "PGDMP", 5); /* Magic code */ + AH->WriteBytePtr(AH, ARCHIVE_MAJOR(AH->version)); + AH->WriteBytePtr(AH, ARCHIVE_MINOR(AH->version)); + AH->WriteBytePtr(AH, ARCHIVE_REV(AH->version)); + AH->WriteBytePtr(AH, AH->intSize); + AH->WriteBytePtr(AH, AH->offSize); + AH->WriteBytePtr(AH, AH->format); WriteInt(AH, AH->compression); crtm = *localtime(&AH->createDate); WriteInt(AH, crtm.tm_sec); @@ -3688,16 +3688,16 @@ ReadHead(ArchiveHandle *AH) vmin, vrev; - (*AH->ReadBufPtr) (AH, tmpMag, 5); + AH->ReadBufPtr(AH, tmpMag, 5); if (strncmp(tmpMag, "PGDMP", 5) != 0) exit_horribly(modulename, "did not find magic string in file header\n"); - vmaj = (*AH->ReadBytePtr) (AH); - vmin = (*AH->ReadBytePtr) (AH); + vmaj = AH->ReadBytePtr(AH); + vmin = AH->ReadBytePtr(AH); if (vmaj > 1 || (vmaj == 1 && vmin > 0)) /* Version > 1.0 */ - vrev = (*AH->ReadBytePtr) (AH); + vrev = AH->ReadBytePtr(AH); else vrev = 0; @@ -3707,7 +3707,7 @@ ReadHead(ArchiveHandle *AH) exit_horribly(modulename, "unsupported version (%d.%d) in file header\n", vmaj, vmin); - AH->intSize = (*AH->ReadBytePtr) (AH); + AH->intSize = AH->ReadBytePtr(AH); if (AH->intSize > 32) exit_horribly(modulename, "sanity check on integer size (%lu) failed\n", (unsigned long) AH->intSize); @@ -3716,11 +3716,11 @@ ReadHead(ArchiveHandle *AH) write_msg(modulename, "WARNING: archive was made on a machine with larger integers, some operations might fail\n"); if (AH->version >= K_VERS_1_7) - AH->offSize = (*AH->ReadBytePtr) (AH); + AH->offSize = AH->ReadBytePtr(AH); else AH->offSize = AH->intSize; - fmt = (*AH->ReadBytePtr) (AH); + fmt = AH->ReadBytePtr(AH); if (AH->format != fmt) exit_horribly(modulename, "expected format (%d) differs from format found in file (%d)\n", @@ -3730,7 +3730,7 @@ ReadHead(ArchiveHandle *AH) if (AH->version >= K_VERS_1_2) { if (AH->version < K_VERS_1_4) - AH->compression = (*AH->ReadBytePtr) (AH); + AH->compression = AH->ReadBytePtr(AH); else AH->compression = ReadInt(AH); } @@ -4700,7 +4700,7 @@ CloneArchive(ArchiveHandle *AH) } /* Let the format-specific code have a chance too */ - (clone->ClonePtr) (clone); + clone->ClonePtr(clone); Assert(clone->connection != NULL); return clone; @@ -4718,7 +4718,7 @@ DeCloneArchive(ArchiveHandle *AH) Assert(AH->connection == NULL); /* Clear format-specific state */ - (AH->DeClonePtr) (AH); + AH->DeClonePtr(AH); /* Clear state allocated by CloneArchive */ if (AH->sqlparse.curCmd) diff --git a/src/bin/pg_dump/pg_backup_null.c b/src/bin/pg_dump/pg_backup_null.c index ff419bb82f..62f6e624f0 100644 --- a/src/bin/pg_dump/pg_backup_null.c +++ 
b/src/bin/pg_dump/pg_backup_null.c @@ -202,7 +202,7 @@ _PrintTocData(ArchiveHandle *AH, TocEntry *te) if (strcmp(te->desc, "BLOBS") == 0) _StartBlobs(AH, te); - (*te->dataDumper) ((Archive *) AH, te->dataDumperArg); + te->dataDumper((Archive *) AH, te->dataDumperArg); if (strcmp(te->desc, "BLOBS") == 0) _EndBlobs(AH, te); diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c index 1e5967e5cc..066121449e 100644 --- a/src/bin/pg_dump/pg_backup_utils.c +++ b/src/bin/pg_dump/pg_backup_utils.c @@ -144,8 +144,8 @@ exit_nicely(int code) int i; for (i = on_exit_nicely_index - 1; i >= 0; i--) - (*on_exit_nicely_list[i].function) (code, - on_exit_nicely_list[i].arg); + on_exit_nicely_list[i].function(code, + on_exit_nicely_list[i].arg); #ifdef WIN32 if (parallel_init_done && GetCurrentThreadId() != mainThreadId) diff --git a/src/bin/psql/variables.c b/src/bin/psql/variables.c index 806d39bfbe..c6a59ed478 100644 --- a/src/bin/psql/variables.c +++ b/src/bin/psql/variables.c @@ -246,10 +246,10 @@ SetVariable(VariableSpace space, const char *name, const char *value) bool confirmed; if (current->substitute_hook) - new_value = (*current->substitute_hook) (new_value); + new_value = current->substitute_hook(new_value); if (current->assign_hook) - confirmed = (*current->assign_hook) (new_value); + confirmed = current->assign_hook(new_value); else confirmed = true; diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index ad228d1394..770881849c 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -287,7 +287,7 @@ ExecEvalExpr(ExprState *state, ExprContext *econtext, bool *isNull) { - return (*state->evalfunc) (state, econtext, isNull); + return state->evalfunc(state, econtext, isNull); } #endif @@ -306,7 +306,7 @@ ExecEvalExprSwitchContext(ExprState *state, MemoryContext oldContext; oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory); - retDatum = (*state->evalfunc) (state, econtext, isNull); + retDatum = state->evalfunc(state, econtext, isNull); MemoryContextSwitchTo(oldContext); return retDatum; } diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h index dc6069d435..199a6317f5 100644 --- a/src/include/utils/selfuncs.h +++ b/src/include/utils/selfuncs.h @@ -81,7 +81,7 @@ typedef struct VariableStatData #define ReleaseVariableStats(vardata) \ do { \ if (HeapTupleIsValid((vardata).statsTuple)) \ - (* (vardata).freefunc) ((vardata).statsTuple); \ + (vardata).freefunc((vardata).statsTuple); \ } while(0) diff --git a/src/include/utils/sortsupport.h b/src/include/utils/sortsupport.h index 6e8444b4ff..a98420c37e 100644 --- a/src/include/utils/sortsupport.h +++ b/src/include/utils/sortsupport.h @@ -222,7 +222,7 @@ ApplySortComparator(Datum datum1, bool isNull1, } else { - compare = (*ssup->comparator) (datum1, datum2, ssup); + compare = ssup->comparator(datum1, datum2, ssup); if (ssup->ssup_reverse) compare = -compare; } @@ -260,7 +260,7 @@ ApplySortAbbrevFullComparator(Datum datum1, bool isNull1, } else { - compare = (*ssup->abbrev_full_comparator) (datum1, datum2, ssup); + compare = ssup->abbrev_full_comparator(datum1, datum2, ssup); if (ssup->ssup_reverse) compare = -compare; } diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c index d0e97ecdd4..c580d91135 100644 --- a/src/interfaces/libpq/fe-connect.c +++ b/src/interfaces/libpq/fe-connect.c @@ -6297,8 +6297,8 @@ defaultNoticeReceiver(void *arg, const PGresult *res) { (void) arg; /* not used */ if 
(res->noticeHooks.noticeProc != NULL) - (*res->noticeHooks.noticeProc) (res->noticeHooks.noticeProcArg, - PQresultErrorMessage(res)); + res->noticeHooks.noticeProc(res->noticeHooks.noticeProcArg, + PQresultErrorMessage(res)); } /* diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c index a97e73cf99..c24bce62dd 100644 --- a/src/interfaces/libpq/fe-exec.c +++ b/src/interfaces/libpq/fe-exec.c @@ -860,7 +860,7 @@ pqInternalNotice(const PGNoticeHooks *hooks, const char *fmt,...) /* * Pass to receiver, then free it. */ - (*res->noticeHooks.noticeRec) (res->noticeHooks.noticeRecArg, res); + res->noticeHooks.noticeRec(res->noticeHooks.noticeRecArg, res); } PQclear(res); } diff --git a/src/interfaces/libpq/fe-protocol2.c b/src/interfaces/libpq/fe-protocol2.c index a58f701e18..83f74f3985 100644 --- a/src/interfaces/libpq/fe-protocol2.c +++ b/src/interfaces/libpq/fe-protocol2.c @@ -1055,7 +1055,7 @@ pqGetErrorNotice2(PGconn *conn, bool isError) if (res) { if (res->noticeHooks.noticeRec != NULL) - (*res->noticeHooks.noticeRec) (res->noticeHooks.noticeRecArg, res); + res->noticeHooks.noticeRec(res->noticeHooks.noticeRecArg, res); PQclear(res); } } diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c index a484fe80a1..7da5fb28fb 100644 --- a/src/interfaces/libpq/fe-protocol3.c +++ b/src/interfaces/libpq/fe-protocol3.c @@ -960,7 +960,7 @@ pqGetErrorNotice3(PGconn *conn, bool isError) /* We can cheat a little here and not copy the message. */ res->errMsg = workBuf.data; if (res->noticeHooks.noticeRec != NULL) - (*res->noticeHooks.noticeRec) (res->noticeHooks.noticeRecArg, res); + res->noticeHooks.noticeRec(res->noticeHooks.noticeRecArg, res); PQclear(res); } } From b976499480bdbab6d69a11e47991febe53865adc Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 7 Sep 2017 14:04:41 -0400 Subject: [PATCH 0120/1087] Improve documentation about behavior of multi-statement Query messages. We've long done our best to sweep this topic under the rug, but in view of recent work it seems like it's time to explain it more precisely. Here's an initial cut at doing that. Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F6BE40D@G01JPEXMBYT05 --- doc/src/sgml/libpq.sgml | 4 +- doc/src/sgml/protocol.sgml | 119 +++++++++++++++++++++++++++++++++ doc/src/sgml/ref/psql-ref.sgml | 49 +++++++++++++- 3 files changed, 168 insertions(+), 4 deletions(-) diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 957096681a..096a8be605 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -2223,7 +2223,9 @@ PGresult *PQexec(PGconn *conn, const char *command); PQexec call are processed in a single transaction, unless there are explicit BEGIN/COMMIT commands included in the query string to divide it into multiple - transactions. Note however that the returned + transactions. (See + for more details about how the server handles multi-query strings.) + Note however that the returned PGresult structure describes only the result of the last command executed from the string. Should one of the commands fail, processing of the string stops with it and the returned diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 2bb4e38a9d..76d1c13cc4 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -675,6 +675,125 @@ that will accept any message type at any time that it could make sense, rather than wiring in assumptions about the exact sequence of messages. 
+ + + Multiple Statements in a Simple Query + + + When a simple Query message contains more than one SQL statement + (separated by semicolons), those statements are executed as a single + transaction, unless explicit transaction control commands are included + to force a different behavior. For example, if the message contains + +INSERT INTO mytable VALUES(1); +SELECT 1/0; +INSERT INTO mytable VALUES(2); + + then the divide-by-zero failure in the SELECT will force + rollback of the first INSERT. Furthermore, because + execution of the message is abandoned at the first error, the second + INSERT is never attempted at all. + + + + If instead the message contains + +BEGIN; +INSERT INTO mytable VALUES(1); +COMMIT; +INSERT INTO mytable VALUES(2); +SELECT 1/0; + + then the first INSERT is committed by the + explicit COMMIT command. The second INSERT + and the SELECT are still treated as a single transaction, + so that the divide-by-zero failure will roll back the + second INSERT, but not the first one. + + + + This behavior is implemented by running the statements in a + multi-statement Query message in an implicit transaction + block unless there is some explicit transaction block for them to + run in. The main difference between an implicit transaction block and + a regular one is that an implicit block is closed automatically at the + end of the Query message, either by an implicit commit if there was no + error, or an implicit rollback if there was an error. This is similar + to the implicit commit or rollback that happens for a statement + executed by itself (when not in a transaction block). + + + + If the session is already in a transaction block, as a result of + a BEGIN in some previous message, then the Query message + simply continues that transaction block, whether the message contains + one statement or several. However, if the Query message contains + a COMMIT or ROLLBACK closing the existing + transaction block, then any following statements are executed in an + implicit transaction block. + Conversely, if a BEGIN appears in a multi-statement Query + message, then it starts a regular transaction block that will only be + terminated by an explicit COMMIT or ROLLBACK, + whether that appears in this Query message or a later one. + If the BEGIN follows some statements that were executed as + an implicit transaction block, those statements are not immediately + committed; in effect, they are retroactively included into the new + regular transaction block. + + + + A COMMIT or ROLLBACK appearing in an implicit + transaction block is executed as normal, closing the implicit block; + however, a warning will be issued since a COMMIT + or ROLLBACK without a previous BEGIN might + represent a mistake. If more statements follow, a new implicit + transaction block will be started for them. + + + + Savepoints are not allowed in an implicit transaction block, since + they would conflict with the behavior of automatically closing the + block upon any error. + + + + Remember that, regardless of any transaction control commands that may + be present, execution of the Query message stops at the first error. + Thus for example given + +BEGIN; +SELECT 1/0; +ROLLBACK; + + in a single Query message, the session will be left inside a failed + regular transaction block, since the ROLLBACK is not + reached after the divide-by-zero error. Another ROLLBACK + will be needed to restore the session to a usable state. 
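(For illustration, here is how that recovery sequence might look from
psql, with all three commands sent in one Query message via
backslash-semicolons; this transcript is an editorial sketch rather than
part of the patch, and prompts and output are abbreviated:

=> BEGIN\; SELECT 1/0\; ROLLBACK;
ERROR:  division by zero
=> SELECT 1;
ERROR:  current transaction is aborted, commands ignored until end of transaction block
=> ROLLBACK;
ROLLBACK
=> SELECT 1;
 ?column?
----------
        1
(1 row)

The ROLLBACK inside the message is never reached, so one more is needed
before the session accepts commands again.)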
+ + + + Another behavior of note is that initial lexical and syntactic + analysis is done on the entire query string before any of it is + executed. Thus simple errors (such as a misspelled keyword) in later + statements can prevent execution of any of the statements. This + is normally invisible to users since the statements would all roll + back anyway when done as an implicit transaction block. However, + it can be visible when attempting to do multiple transactions within a + multi-statement Query. For instance, if a typo turned our previous + example into + +BEGIN; +INSERT INTO mytable VALUES(1); +COMMIT; +INSERT INTO mytable VALUES(2); +SELCT 1/0; + + then none of the statements would get run, resulting in the visible + difference that the first INSERT is not committed. + Errors detected at semantic analysis or later, such as a misspelled + table or column name, do not have this effect. + + diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index 5bdbc1e9cf..79468a5663 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -120,12 +120,14 @@ echo '\x \\ SELECT * FROM foo;' | psql Each SQL command string passed - to is sent to the server as a single query. + to is sent to the server as a single request. Because of this, the server executes it as a single transaction even if the string contains multiple SQL commands, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple - transactions. Also, psql only prints the + transactions. (See + for more details about how the server handles multi-query strings.) + Also, psql only prints the result of the last SQL command in the string. This is different from the behavior when the same string is read from a file or fed to psql's standard input, @@ -133,7 +135,7 @@ echo '\x \\ SELECT * FROM foo;' | psql each SQL command separately. - Because of this behavior, putting more than one command in a + Because of this behavior, putting more than one SQL command in a single string often has unexpected results. It's better to use repeated commands or feed multiple commands to psql's standard input, @@ -3179,6 +3181,47 @@ testdb=> \setenv LESS -imx4F + + + \; + + + Backslash-semicolon is not a meta-command in the same way as the + preceding commands; rather, it simply causes a semicolon to be + added to the query buffer without any further processing. + + + + Normally, psql will dispatch a SQL command to the + server as soon as it reaches the command-ending semicolon, even if + more input remains on the current line. Thus for example entering + +select 1; select 2; select 3; + + will result in the three SQL commands being individually sent to + the server, with each one's results being displayed before + continuing to the next command. However, a semicolon entered + as \; will not trigger command processing, so that the + command before it and the one after are effectively combined and + sent to the server in one request. So for example + +select 1\; select 2\; select 3; + + results in sending the three SQL commands to the server in a single + request, when the non-backslashed semicolon is reached. + The server executes such a request as a single transaction, + unless there are explicit BEGIN/COMMIT + commands included in the string to divide it into multiple + transactions. (See + for more details about how the server handles multi-query strings.) 
+ psql prints only the last query result + it receives for each request; in this example, although all + three SELECTs are indeed executed, psql + only prints the 3. + + + + From 3ca930fc39ccf987c1c22fd04a1e7463b5dd0dfd Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 7 Sep 2017 19:41:51 -0400 Subject: [PATCH 0121/1087] Improve performance of get_actual_variable_range with recently-dead tuples. In commit fccebe421, we hacked get_actual_variable_range() to scan the index with SnapshotDirty, so that if there are many uncommitted tuples at the end of the index range, it wouldn't laboriously scan through all of them looking for a live value to return. However, that didn't fix it for the case of many recently-dead tuples at the end of the index; SnapshotDirty recognizes those as committed dead and so we're back to the same problem. To improve the situation, invent a "SnapshotNonVacuumable" snapshot type and use that instead. The reason this helps is that, if the snapshot rejects a given index entry, we know that the indexscan will mark that index entry as killed. This means the next get_actual_variable_range() scan will proceed past that entry without visiting the heap, making the scan a lot faster. We may end up accepting a recently-dead tuple as being the estimated extremal value, but that doesn't seem much worse than the compromise we made before to accept not-yet-committed extremal values. The cost of the scan is still proportional to the number of dead index entries at the end of the range, so in the interval after a mass delete but before VACUUM's cleaned up the mess, it's still possible for get_actual_variable_range() to take a noticeable amount of time, if you've got enough such dead entries. But the constant factor is much much better than before, since all we need to do with each index entry is test its "killed" bit. We chose to back-patch commit fccebe421 at the time, but I'm hesitant to do so here, because this form of the problem seems to affect many fewer people. Also, even when it happens, it's less bad than the case fixed by commit fccebe421 because we don't get the contention effects from expensive TransactionIdIsInProgress tests. Dmitriy Sarafannikov, reviewed by Andrey Borodin Discussion: https://postgr.es/m/05C72CF7-B5F6-4DB9-8A09-5AC897653113@yandex.ru --- src/backend/access/heap/heapam.c | 3 +++ src/backend/utils/adt/selfuncs.c | 40 +++++++++++++++++++++----------- src/backend/utils/time/tqual.c | 22 ++++++++++++++++++ src/include/utils/snapshot.h | 4 +++- src/include/utils/tqual.h | 10 ++++++++ 5 files changed, 65 insertions(+), 14 deletions(-) diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index e29c5ad086..d20f0381f3 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -2118,6 +2118,9 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer, * If we can't see it, maybe no one else can either. At caller * request, check whether all chain members are dead to all * transactions. + * + * Note: if you change the criterion here for what is "dead", fix the + * planner's get_actual_variable_range() function to match. 
*/ if (all_dead && *all_dead && !HeapTupleIsSurelyDead(heapTuple, RecentGlobalXmin)) diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c index 23e5526a8e..81b0bc37d2 100644 --- a/src/backend/utils/adt/selfuncs.c +++ b/src/backend/utils/adt/selfuncs.c @@ -142,6 +142,7 @@ #include "utils/pg_locale.h" #include "utils/rel.h" #include "utils/selfuncs.h" +#include "utils/snapmgr.h" #include "utils/spccache.h" #include "utils/syscache.h" #include "utils/timestamp.h" @@ -5328,7 +5329,7 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata, HeapTuple tup; Datum values[INDEX_MAX_KEYS]; bool isnull[INDEX_MAX_KEYS]; - SnapshotData SnapshotDirty; + SnapshotData SnapshotNonVacuumable; estate = CreateExecutorState(); econtext = GetPerTupleExprContext(estate); @@ -5351,7 +5352,7 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata, slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel)); econtext->ecxt_scantuple = slot; get_typlenbyval(vardata->atttype, &typLen, &typByVal); - InitDirtySnapshot(SnapshotDirty); + InitNonVacuumableSnapshot(SnapshotNonVacuumable, RecentGlobalXmin); /* set up an IS NOT NULL scan key so that we ignore nulls */ ScanKeyEntryInitialize(&scankeys[0], @@ -5373,17 +5374,29 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata, * active snapshot, which is the best approximation we've got * to what the query will see when executed. But that won't * be exact if a new snap is taken before running the query, - * and it can be very expensive if a lot of uncommitted rows - * exist at the end of the index (because we'll laboriously - * fetch each one and reject it). What seems like a good - * compromise is to use SnapshotDirty. That will accept - * uncommitted rows, and thus avoid fetching multiple heap - * tuples in this scenario. On the other hand, it will reject - * known-dead rows, and thus not give a bogus answer when the - * extreme value has been deleted; that case motivates not - * using SnapshotAny here. + * and it can be very expensive if a lot of recently-dead or + * uncommitted rows exist at the beginning or end of the index + * (because we'll laboriously fetch each one and reject it). + * Instead, we use SnapshotNonVacuumable. That will accept + * recently-dead and uncommitted rows as well as normal + * visible rows. On the other hand, it will reject known-dead + * rows, and thus not give a bogus answer when the extreme + * value has been deleted (unless the deletion was quite + * recent); that case motivates not using SnapshotAny here. + * + * A crucial point here is that SnapshotNonVacuumable, with + * RecentGlobalXmin as horizon, yields the inverse of the + * condition that the indexscan will use to decide that index + * entries are killable (see heap_hot_search_buffer()). + * Therefore, if the snapshot rejects a tuple and we have to + * continue scanning past it, we know that the indexscan will + * mark that index entry killed. That means that the next + * get_actual_variable_range() call will not have to visit + * that heap entry. In this way we avoid repetitive work when + * this function is used a lot during planning. 
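			 *
			 * [Illustrative recap, not part of the original change: the
			 * resulting call pattern, using the same names as elsewhere
			 * in this function, is simply
			 *
			 *	SnapshotData snap;
			 *	InitNonVacuumableSnapshot(snap, RecentGlobalXmin);
			 *	scan = index_beginscan(heapRel, indexRel, &snap, 1, 0);
			 *
			 * i.e., a drop-in replacement for the InitDirtySnapshot /
			 * SnapshotDirty pair being removed here.]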
*/ - index_scan = index_beginscan(heapRel, indexRel, &SnapshotDirty, + index_scan = index_beginscan(heapRel, indexRel, + &SnapshotNonVacuumable, 1, 0); index_rescan(index_scan, scankeys, 1, NULL, 0); @@ -5415,7 +5428,8 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata, /* If max is requested, and we didn't find the index is empty */ if (max && have_data) { - index_scan = index_beginscan(heapRel, indexRel, &SnapshotDirty, + index_scan = index_beginscan(heapRel, indexRel, + &SnapshotNonVacuumable, 1, 0); index_rescan(index_scan, scankeys, 1, NULL, 0); diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c index f9da9e17f5..bbac4083c9 100644 --- a/src/backend/utils/time/tqual.c +++ b/src/backend/utils/time/tqual.c @@ -45,6 +45,8 @@ * like HeapTupleSatisfiesSelf(), but includes open transactions * HeapTupleSatisfiesVacuum() * visible to any running transaction, used by VACUUM + * HeapTupleSatisfiesNonVacuumable() + * Snapshot-style API for HeapTupleSatisfiesVacuum * HeapTupleSatisfiesToast() * visible unless part of interrupted vacuum, used for TOAST * HeapTupleSatisfiesAny() @@ -1392,6 +1394,26 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin, return HEAPTUPLE_DEAD; } + +/* + * HeapTupleSatisfiesNonVacuumable + * + * True if tuple might be visible to some transaction; false if it's + * surely dead to everyone, ie, vacuumable. + * + * This is an interface to HeapTupleSatisfiesVacuum that meets the + * SnapshotSatisfiesFunc API, so it can be used through a Snapshot. + * snapshot->xmin must have been set up with the xmin horizon to use. + */ +bool +HeapTupleSatisfiesNonVacuumable(HeapTuple htup, Snapshot snapshot, + Buffer buffer) +{ + return HeapTupleSatisfiesVacuum(htup, snapshot->xmin, buffer) + != HEAPTUPLE_DEAD; +} + + /* * HeapTupleIsSurelyDead * diff --git a/src/include/utils/snapshot.h b/src/include/utils/snapshot.h index 074cc81864..bf519778df 100644 --- a/src/include/utils/snapshot.h +++ b/src/include/utils/snapshot.h @@ -41,6 +41,7 @@ typedef bool (*SnapshotSatisfiesFunc) (HeapTuple htup, * * MVCC snapshots taken during recovery (in Hot-Standby mode) * * Historic MVCC snapshots used during logical decoding * * snapshots passed to HeapTupleSatisfiesDirty() + * * snapshots passed to HeapTupleSatisfiesNonVacuumable() * * snapshots used for SatisfiesAny, Toast, Self where no members are * accessed. * @@ -56,7 +57,8 @@ typedef struct SnapshotData /* * The remaining fields are used only for MVCC snapshots, and are normally * just zeroes in special snapshots. (But xmin and xmax are used - * specially by HeapTupleSatisfiesDirty.) + * specially by HeapTupleSatisfiesDirty, and xmin is used specially by + * HeapTupleSatisfiesNonVacuumable.) * * An MVCC snapshot can never see the effects of XIDs >= xmax. It can see * the effects of all older XIDs except those listed in the snapshot. 
xmin diff --git a/src/include/utils/tqual.h b/src/include/utils/tqual.h index 036d9898d6..9a3b56e5f0 100644 --- a/src/include/utils/tqual.h +++ b/src/include/utils/tqual.h @@ -66,6 +66,8 @@ extern bool HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot, Buffer buffer); extern bool HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot, Buffer buffer); +extern bool HeapTupleSatisfiesNonVacuumable(HeapTuple htup, + Snapshot snapshot, Buffer buffer); extern bool HeapTupleSatisfiesHistoricMVCC(HeapTuple htup, Snapshot snapshot, Buffer buffer); @@ -100,6 +102,14 @@ extern bool ResolveCminCmaxDuringDecoding(struct HTAB *tuplecid_data, #define InitDirtySnapshot(snapshotdata) \ ((snapshotdata).satisfies = HeapTupleSatisfiesDirty) +/* + * Similarly, some initialization is required for a NonVacuumable snapshot. + * The caller must supply the xmin horizon to use (e.g., RecentGlobalXmin). + */ +#define InitNonVacuumableSnapshot(snapshotdata, xmin_horizon) \ + ((snapshotdata).satisfies = HeapTupleSatisfiesNonVacuumable, \ + (snapshotdata).xmin = (xmin_horizon)) + /* * Similarly, some initialization is required for SnapshotToast. We need * to set lsn and whenTaken correctly to support snapshot_too_old. From f0a0c17c1b126882a37ec6bf42ab45a963794c3e Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 7 Sep 2017 21:07:47 -0400 Subject: [PATCH 0122/1087] Refactor get_partition_for_tuple a bit. Pending patches for both default partitioning and hash partitioning find the current coding pattern to be inconvenient. Change it so that we switch on the partitioning method first and then do whatever is needed. Amul Sul, reviewed by Jeevan Ladhe, with a few adjustments by me. Discussion: http://postgr.es/m/CAAJ_b97mTb=dG2pv6+1ougxEVZFVnZJajW+0QHj46mEE7WsoOQ@mail.gmail.com Discussion: http://postgr.es/m/CAOgcT0M37CAztEinpvjJc18EdHfm23fw0EG9-36Ya=+rEFUqaQ@mail.gmail.com --- src/backend/catalog/partition.c | 100 +++++++++++++++++--------------- 1 file changed, 52 insertions(+), 48 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 50162632f5..c6bd02f77d 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -1920,10 +1920,7 @@ get_partition_for_tuple(PartitionDispatch *pd, PartitionDispatch parent; Datum values[PARTITION_MAX_KEYS]; bool isnull[PARTITION_MAX_KEYS]; - int cur_offset, - cur_index; - int i, - result; + int result; ExprContext *ecxt = GetPerTupleExprContext(estate); TupleTableSlot *ecxt_scantuple_old = ecxt->ecxt_scantuple; @@ -1935,6 +1932,7 @@ get_partition_for_tuple(PartitionDispatch *pd, PartitionDesc partdesc = parent->partdesc; TupleTableSlot *myslot = parent->tupslot; TupleConversionMap *map = parent->tupmap; + int cur_index = -1; if (myslot != NULL && map != NULL) { @@ -1966,61 +1964,67 @@ get_partition_for_tuple(PartitionDispatch *pd, ecxt->ecxt_scantuple = slot; FormPartitionKeyDatum(parent, slot, estate, values, isnull); - if (key->strategy == PARTITION_STRATEGY_RANGE) + /* Route as appropriate based on partitioning strategy. 
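	 * [Aside, separate from the patch: the user-visible effect of the
	 *  branches below is that a NULL partition key can be routed only
	 *  under LIST partitioning, and only when some partition's bound
	 *  accepts NULLs; under RANGE partitioning, or under LIST with no
	 *  null-accepting partition, the lookup fails and the caller
	 *  reports "no partition of relation ... found for row".]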
*/ + switch (key->strategy) { - /* - * Since we cannot route tuples with NULL partition keys through a - * range-partitioned table, simply return that no partition exists - */ - for (i = 0; i < key->partnatts; i++) - { - if (isnull[i]) + case PARTITION_STRATEGY_LIST: + + if (isnull[0]) { - *failed_at = parent; - *failed_slot = slot; - result = -1; - goto error_exit; + if (partition_bound_accepts_nulls(partdesc->boundinfo)) + cur_index = partdesc->boundinfo->null_index; } - } - } - - /* - * A null partition key is only acceptable if null-accepting list - * partition exists. - */ - cur_index = -1; - if (isnull[0] && partition_bound_accepts_nulls(partdesc->boundinfo)) - cur_index = partdesc->boundinfo->null_index; - else if (!isnull[0]) - { - /* Else bsearch in partdesc->boundinfo */ - bool equal = false; - - cur_offset = partition_bound_bsearch(key, partdesc->boundinfo, - values, false, &equal); - switch (key->strategy) - { - case PARTITION_STRATEGY_LIST: + else + { + bool equal = false; + int cur_offset; + + cur_offset = partition_bound_bsearch(key, + partdesc->boundinfo, + values, + false, + &equal); if (cur_offset >= 0 && equal) cur_index = partdesc->boundinfo->indexes[cur_offset]; - else - cur_index = -1; - break; + } + break; + + case PARTITION_STRATEGY_RANGE: + { + bool equal = false; + int cur_offset; + int i; + + /* No range includes NULL. */ + for (i = 0; i < key->partnatts; i++) + { + if (isnull[i]) + { + *failed_at = parent; + *failed_slot = slot; + result = -1; + goto error_exit; + } + } - case PARTITION_STRATEGY_RANGE: + cur_offset = partition_bound_bsearch(key, + partdesc->boundinfo, + values, + false, + &equal); /* - * Offset returned is such that the bound at offset is - * found to be less or equal with the tuple. So, the bound - * at offset+1 would be the upper bound. + * The offset returned is such that the bound at cur_offset + * is less than or equal to the tuple value, so the bound + * at offset+1 is the upper bound. */ cur_index = partdesc->boundinfo->indexes[cur_offset + 1]; - break; + } + break; - default: - elog(ERROR, "unexpected partition strategy: %d", - (int) key->strategy); - } + default: + elog(ERROR, "unexpected partition strategy: %d", + (int) key->strategy); } /* From ed8a7c6fcf92b6b57ed8003bbd4a4eb92a6039bc Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 09:32:50 -0400 Subject: [PATCH 0123/1087] Add much-more-extensive TAP tests for pgbench. Fabien Coelho, reviewed by Nikolay Shaplov and myself Discussion: https://postgr.es/m/alpine.DEB.2.20.1704171422500.4025@lancre --- src/bin/pgbench/t/001_pgbench.pl | 25 - src/bin/pgbench/t/001_pgbench_with_server.pl | 498 +++++++++++++++++++ src/bin/pgbench/t/002_pgbench_no_server.pl | 123 +++++ src/test/perl/PostgresNode.pm | 31 +- src/test/perl/TestLib.pm | 38 ++ 5 files changed, 683 insertions(+), 32 deletions(-) delete mode 100644 src/bin/pgbench/t/001_pgbench.pl create mode 100644 src/bin/pgbench/t/001_pgbench_with_server.pl create mode 100644 src/bin/pgbench/t/002_pgbench_no_server.pl diff --git a/src/bin/pgbench/t/001_pgbench.pl b/src/bin/pgbench/t/001_pgbench.pl deleted file mode 100644 index 34d686ea86..0000000000 --- a/src/bin/pgbench/t/001_pgbench.pl +++ /dev/null @@ -1,25 +0,0 @@ -use strict; -use warnings; - -use PostgresNode; -use TestLib; -use Test::More tests => 3; - -# Test concurrent insertion into table with UNIQUE oid column. DDL expects -# GetNewOidWithIndex() to successfully avoid violating uniqueness for indexes -# like pg_class_oid_index and pg_proc_oid_index. 
This indirectly exercises -# LWLock and spinlock concurrency. This test makes a 5-MiB table. -my $node = get_new_node('main'); -$node->init; -$node->start; -$node->safe_psql('postgres', - 'CREATE UNLOGGED TABLE oid_tbl () WITH OIDS; ' - . 'ALTER TABLE oid_tbl ADD UNIQUE (oid);'); -my $script = $node->basedir . '/pgbench_script'; -append_to_file($script, - 'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);'); -$node->command_like( - [ qw(pgbench --no-vacuum --client=5 --protocol=prepared - --transactions=25 --file), $script ], - qr{processed: 125/125}, - 'concurrent OID generation'); diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl new file mode 100644 index 0000000000..032195e28a --- /dev/null +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -0,0 +1,498 @@ +use strict; +use warnings; + +use PostgresNode; +use TestLib; +use Test::More; + +# start a pgbench specific server +my $node = get_new_node('main'); +$node->init; +$node->start; + +# invoke pgbench +sub pgbench +{ + my ($opts, $stat, $out, $err, $name, $files) = @_; + my @cmd = ('pgbench', split /\s+/, $opts); + my @filenames = (); + if (defined $files) + { + + # note: files are ordered for determinism + for my $fn (sort keys %$files) + { + my $filename = $node->basedir . '/' . $fn; + push @cmd, '-f', $filename; + + # cleanup file weight + $filename =~ s/\@\d+$//; + + #push @filenames, $filename; + append_to_file($filename, $$files{$fn}); + } + } + $node->command_checks_all(\@cmd, $stat, $out, $err, $name); + + # cleanup? + #unlink @filenames or die "cannot unlink files (@filenames): $!"; +} + +# Test concurrent insertion into table with UNIQUE oid column. DDL expects +# GetNewOidWithIndex() to successfully avoid violating uniqueness for indexes +# like pg_class_oid_index and pg_proc_oid_index. This indirectly exercises +# LWLock and spinlock concurrency. This test makes a 5-MiB table. + +$node->safe_psql('postgres', + 'CREATE UNLOGGED TABLE oid_tbl () WITH OIDS; ' + . 'ALTER TABLE oid_tbl ADD UNIQUE (oid);'); + +pgbench( + '--no-vacuum --client=5 --protocol=prepared --transactions=25', + 0, + [qr{processed: 125/125}], + [qr{^$}], + 'concurrency OID generation', + { '001_pgbench_concurrent_oid_generation' => + 'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);' }); + +# cleanup +$node->safe_psql('postgres', 'DROP TABLE oid_tbl;'); + +# Trigger various connection errors +pgbench( + 'no-such-database', + 1, + [qr{^$}], + [ qr{connection to database "no-such-database" failed}, + qr{FATAL: database "no-such-database" does not exist} ], + 'no such database'); + +pgbench( + '-U no-such-user template0', + 1, + [qr{^$}], + [ qr{connection to database "template0" failed}, + qr{FATAL: role "no-such-user" does not exist} ], + 'no such user'); + +pgbench( + '-S -t 1', 1, [qr{^$}], + [qr{Perhaps you need to do initialization}], + 'run without init'); + +# Initialize pgbench tables scale 1 +pgbench( + '-i', 0, [qr{^$}], + [ qr{creating tables}, qr{vacuum}, qr{set primary keys}, qr{done\.} ], + 'pgbench scale 1 initialization',); + +# Again, with all possible options +pgbench( + + # unlogged => faster test + '--initialize --scale=1 --unlogged --fillfactor=98 --foreign-keys --quiet' + . 
' --tablespace=pg_default --index-tablespace=pg_default', + 0, + [qr{^$}i], + [ qr{creating tables}, + qr{vacuum}, + qr{set primary keys}, + qr{set foreign keys}, + qr{done\.} ], + 'pgbench scale 1 initialization'); + +# Run all builtin scripts, for a few transactions each +pgbench( + '--transactions=5 -Dfoo=bla --client=2 --protocol=simple --builtin=t' + . ' --connect -n -v -n', + 0, + [ qr{builtin: TPC-B}, + qr{clients: 2\b}, + qr{processed: 10/10}, + qr{mode: simple} ], + [qr{^$}], + 'pgbench tpcb-like'); + +pgbench( +'--transactions=20 --client=5 -M extended --builtin=si -C --no-vacuum -s 1', + 0, + [ qr{builtin: simple update}, + qr{clients: 5\b}, + qr{threads: 1\b}, + qr{processed: 100/100}, + qr{mode: extended} ], + [qr{scale option ignored}], + 'pgbench simple update'); + +pgbench( + '-t 100 -c 7 -M prepared -b se --debug', + 0, + [ qr{builtin: select only}, + qr{clients: 7\b}, + qr{threads: 1\b}, + qr{processed: 700/700}, + qr{mode: prepared} ], + [ qr{vacuum}, qr{client 0}, qr{client 1}, qr{sending}, + qr{receiving}, qr{executing} ], + 'pgbench select only'); + +# run custom scripts +pgbench( + '-t 100 -c 1 -j 2 -M prepared -n', + 0, + [ qr{type: multiple scripts}, + qr{mode: prepared}, + qr{script 1: .*/001_pgbench_custom_script_1}, + qr{weight: 2}, + qr{script 2: .*/001_pgbench_custom_script_2}, + qr{weight: 1}, + qr{processed: 100/100} ], + [qr{^$}], + 'pgbench custom scripts', + { '001_pgbench_custom_script_1@1' => q{-- select only +\set aid random(1, :scale * 100000) +SELECT abalance::INTEGER AS balance + FROM pgbench_accounts + WHERE aid=:aid; +}, + '001_pgbench_custom_script_2@2' => q{-- special variables +BEGIN; +\set foo 1 +-- cast are needed for typing under -M prepared +SELECT :foo::INT + :scale::INT * :client_id::INT AS bla; +COMMIT; +} }); + +pgbench( + '-n -t 10 -c 1 -M simple', + 0, + [ qr{type: .*/001_pgbench_custom_script_3}, + qr{processed: 10/10}, + qr{mode: simple} ], + [qr{^$}], + 'pgbench custom script', + { '001_pgbench_custom_script_3' => q{-- select only variant +\set aid random(1, :scale * 100000) +BEGIN; +SELECT abalance::INTEGER AS balance + FROM pgbench_accounts + WHERE aid=:aid; +COMMIT; +} }); + +pgbench( + '-n -t 10 -c 2 -M extended', + 0, + [ qr{type: .*/001_pgbench_custom_script_4}, + qr{processed: 20/20}, + qr{mode: extended} ], + [qr{^$}], + 'pgbench custom script', + { '001_pgbench_custom_script_4' => q{-- select only variant +\set aid random(1, :scale * 100000) +BEGIN; +SELECT abalance::INTEGER AS balance + FROM pgbench_accounts + WHERE aid=:aid; +COMMIT; +} }); + +# test expressions +pgbench( + '-t 1 -Dfoo=-10.1 -Dbla=false -Di=+3 -Dminint=-9223372036854775808', + 0, + [ qr{type: .*/001_pgbench_expressions}, qr{processed: 1/1} ], + [ qr{command=4.: int 4\b}, + qr{command=5.: int 5\b}, + qr{command=6.: int 6\b}, + qr{command=7.: int 7\b}, + qr{command=8.: int 8\b}, + qr{command=9.: int 9\b}, + qr{command=10.: int 10\b}, + qr{command=11.: int 11\b}, + qr{command=12.: int 12\b}, + qr{command=13.: double 13\b}, + qr{command=14.: double 14\b}, + qr{command=15.: double 15\b}, + qr{command=16.: double 16\b}, + qr{command=17.: double 17\b}, + qr{command=18.: double 18\b}, + qr{command=19.: double 19\b}, + qr{command=20.: double 20\b}, + qr{command=21.: double -?nan\b}, + qr{command=22.: double inf\b}, + qr{command=23.: double -inf\b}, + qr{command=24.: int 9223372036854775807\b}, ], + 'pgbench expressions', + { '001_pgbench_expressions' => q{-- integer functions +\set i1 debug(random(1, 100)) +\set i2 debug(random_exponential(1, 100, 10.0)) +\set i3 
debug(random_gaussian(1, 100, 10.0)) +\set i4 debug(abs(-4)) +\set i5 debug(greatest(5, 4, 3, 2)) +\set i6 debug(11 + least(-5, -4, -3, -2)) +\set i7 debug(int(7.3)) +-- integer operators +\set i8 debug(17 / 5 + 5) +\set i9 debug(- (3 * 4 - 3) / -1 + 3 % -1) +\set ia debug(10 + (0 + 0 * 0 - 0 / 1)) +\set ib debug(:ia + :scale) +\set ic debug(64 % 13) +-- double functions +\set d1 debug(sqrt(3.0) * abs(-0.8E1)) +\set d2 debug(double(1 + 1) * 7) +\set pi debug(pi() * 4.9) +\set d4 debug(greatest(4, 2, -1.17) * 4.0) +\set d5 debug(least(-5.18, .0E0, 1.0/0) * -3.3) +-- double operators +\set d6 debug((0.5 * 12.1 - 0.05) * (31.0 / 10)) +\set d7 debug(11.1 + 7.9) +\set d8 debug(:foo * -2) +-- special values +\set nan debug(0.0 / 0.0) +\set pin debug(1.0 / 0.0) +\set nin debug(-1.0 / 0.0) +\set maxint debug(:minint - 1) +-- reset a variable +\set i1 0 +} }); + +# backslash commands +pgbench( + '-t 1', 0, + [ qr{type: .*/001_pgbench_backslash_commands}, + qr{processed: 1/1}, + qr{shell-echo-output} ], + [qr{command=8.: int 2\b}], + 'pgbench backslash commands', + { '001_pgbench_backslash_commands' => q{-- run set +\set zero 0 +\set one 1.0 +-- sleep +\sleep :one ms +\sleep 100 us +\sleep 0 s +\sleep :zero +-- setshell and continuation +\setshell two\ + expr \ + 1 + :one +\set n debug(:two) +-- shell +\shell echo shell-echo-output +} }); + +# trigger many expression errors +my @errors = ( + + # [ test name, script number, status, stderr match ] + # SQL + [ 'sql syntax error', + 0, + [ qr{ERROR: syntax error}, qr{prepared statement .* does not exist} + ], + q{-- SQL syntax error + SELECT 1 + ; +} ], + [ 'sql too many args', 1, [qr{statement has too many arguments.*\b9\b}], + q{-- MAX_ARGS=10 for prepared +\set i 0 +SELECT LEAST(:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i); +} ], + + # SHELL + [ 'shell bad command', 0, + [qr{meta-command 'shell' failed}], q{\shell no-such-command} ], + [ 'shell undefined variable', 0, + [qr{undefined variable ":nosuchvariable"}], + q{-- undefined variable in shell +\shell echo ::foo :nosuchvariable +} ], + [ 'shell missing command', 1, [qr{missing command }], q{\shell} ], + [ 'shell too many args', 1, [qr{too many arguments in command "shell"}], + q{-- 257 arguments to \shell +\shell echo \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F \ + 0 1 2 3 4 5 6 7 8 9 A B C D E F +} ], + + # SET + [ 'set syntax error', 1, + [qr{syntax error in command "set"}], q{\set i 1 +} ], + [ 'set no such function', 1, + [qr{unexpected function name}], q{\set i noSuchFunction()} ], + [ 'set invalid variable name', 0, + [qr{invalid variable name}], q{\set . 
1} ], + [ 'set int overflow', 0, + [qr{double to int overflow for 100}], q{\set i int(1E32)} ], + [ 'set division by zero', 0, [qr{division by zero}], q{\set i 1/0} ], + [ 'set bigint out of range', 0, + [qr{bigint out of range}], q{\set i 9223372036854775808 / -1} ], + [ 'set undefined variable', + 0, + [qr{undefined variable "nosuchvariable"}], + q{\set i :nosuchvariable} ], + [ 'set unexpected char', 1, [qr{unexpected character .;.}], q{\set i ;} ], + [ 'set too many args', + 0, + [qr{too many function arguments}], + q{\set i least(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)} ], + [ 'set empty random range', 0, + [qr{empty range given to random}], q{\set i random(5,3)} ], + [ 'set random range too large', + 0, + [qr{random range is too large}], + q{\set i random(-9223372036854775808, 9223372036854775807)} ], + [ 'set gaussian param too small', + 0, + [qr{gaussian param.* at least 2}], + q{\set i random_gaussian(0, 10, 1.0)} ], + [ 'set exponential param > 0', + 0, + [qr{exponential parameter must be greater }], + q{\set i random_exponential(0, 10, 0.0)} ], + [ 'set non numeric value', 0, + [qr{malformed variable "foo" value: "bla"}], q{\set i :foo + 1} ], + [ 'set no expression', 1, [qr{syntax error}], q{\set i} ], + [ 'set missing argument', 1, [qr{missing argument}i], q{\set} ], + + # SETSHELL + [ 'setshell not an int', 0, + [qr{command must return an integer}], q{\setshell i echo -n one} ], + [ 'setshell missing arg', 1, [qr{missing argument }], q{\setshell var} ], + [ 'setshell no such command', 0, + [qr{could not read result }], q{\setshell var no-such-command} ], + + # SLEEP + [ 'sleep undefined variable', 0, + [qr{sleep: undefined variable}], q{\sleep :nosuchvariable} ], + [ 'sleep too many args', 1, + [qr{too many arguments}], q{\sleep too many args} ], + [ 'sleep missing arg', 1, + [ qr{missing argument}, qr{\\sleep} ], q{\sleep} ], + [ 'sleep unknown unit', 1, + [qr{unrecognized time unit}], q{\sleep 1 week} ], + + # MISC + [ 'misc invalid backslash command', 1, + [qr{invalid command .* "nosuchcommand"}], q{\nosuchcommand} ], + [ 'misc empty script', 1, [qr{empty command list for script}], q{} ],); + +for my $e (@errors) +{ + my ($name, $status, $re, $script) = @$e; + my $n = '001_pgbench_error_' . $name; + $n =~ s/ /_/g; + pgbench( + '-n -t 1 -Dfoo=bla -M prepared', + $status, + [ $status ? qr{^$} : qr{processed: 0/1} ], + $re, + 'pgbench script error: ' . 
$name, + { $n => $script }); } + +# throttling +pgbench( + '-t 100 -S --rate=100000 --latency-limit=1000000 -c 2 -n -r', + 0, + [ qr{processed: 200/200}, qr{builtin: select only} ], + [qr{^$}], + 'pgbench throttling'); + +pgbench( + + # given the expected rate and the 2 ms tx duration, at most one is executed + '-t 10 --rate=100000 --latency-limit=1 -n -r', + 0, + [ qr{processed: [01]/10}, + qr{type: .*/001_pgbench_sleep}, + qr{above the 1.0 ms latency limit: [01] } ], + [qr{^$}i], + 'pgbench late throttling', + { '001_pgbench_sleep' => q{\sleep 2ms} }); + +# check log contents and cleanup +sub check_pgbench_logs +{ + my ($prefix, $nb, $min, $max, $re) = @_; + + my @logs = <$prefix.*>; + ok(@logs == $nb, "number of log files"); + ok(grep(/^$prefix\.\d+(\.\d+)?$/, @logs) == $nb, "file name format"); + + my $log_number = 0; + for my $log (sort @logs) + { + eval { + open LOG, $log or die "$@"; + my @contents = <LOG>; + my $clen = @contents; + ok( $min <= $clen && $clen <= $max, + "transaction count for $log ($clen)"); + ok( grep($re, @contents) == $clen, + "transaction format for $prefix"); + close LOG or die "$@"; + }; + } + ok(unlink(@logs), "remove log files"); +} + +# note: --progress-timestamp is not tested +pgbench( + '-T 2 -P 1 -l --log-prefix=001_pgbench_log_1 --aggregate-interval=1' + . ' -S -b se@2 --rate=20 --latency-limit=1000 -j 2 -c 3 -r', + 0, + [ qr{type: multiple}, + qr{clients: 3}, + qr{threads: 2}, + qr{duration: 2 s}, + qr{script 1: .* select only}, + qr{script 2: .* select only}, + qr{statement latencies in milliseconds}, + qr{FROM pgbench_accounts} ], + [ qr{vacuum}, qr{progress: 1\b} ], + 'pgbench progress'); + +# 2 threads 2 seconds, sometimes only one aggregated line is written +check_pgbench_logs('001_pgbench_log_1', 2, 1, 2, + qr{^\d+ \d{1,2} \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+$}); + +# with sampling rate +pgbench( +'-n -S -t 50 -c 2 --log --log-prefix=001_pgbench_log_2 --sampling-rate=0.5', + 0, + [ qr{select only}, qr{processed: 100/100} ], + [qr{^$}], + 'pgbench logs'); + +check_pgbench_logs('001_pgbench_log_2', 1, 8, 92, + qr{^0 \d{1,2} \d+ \d \d+ \d+$}); + +# check log file in some detail +pgbench( + '-n -b se -t 10 -l --log-prefix=001_pgbench_log_3', + 0, [ qr{select only}, qr{processed: 10/10} ], + [qr{^$}], 'pgbench logs contents'); + +check_pgbench_logs('001_pgbench_log_3', 1, 10, 10, + qr{^\d \d{1,2} \d+ \d \d+ \d+$}); + +# done +$node->stop; +done_testing(); diff --git a/src/bin/pgbench/t/002_pgbench_no_server.pl b/src/bin/pgbench/t/002_pgbench_no_server.pl new file mode 100644 index 0000000000..acc0205f5b --- /dev/null +++ b/src/bin/pgbench/t/002_pgbench_no_server.pl @@ -0,0 +1,123 @@ +# +# pgbench tests which do not need a server +# + +use strict; +use warnings; + +use TestLib; +use Test::More; + +# invoke pgbench +sub pgbench +{ + my ($opts, $stat, $out, $err, $name) = @_; + print STDERR "opts=$opts, stat=$stat, out=$out, err=$err, name=$name"; + command_checks_all([ 'pgbench', split(/\s+/, $opts) ], + $stat, $out, $err, $name); +} + +# +# Option various errors +# + +my @options = ( + + # name, options, stderr checks + [ 'bad option', + '-h home -p 5432 -U calvin -d stuff --bad-option', + [ qr{unrecognized option}, qr{--help.*more information} ] ], + [ 'no file', + '-f no-such-file', + [qr{could not open file "no-such-file":}] ], + [ 'no builtin', + '-b no-such-builtin', + [qr{no builtin script .* "no-such-builtin"}] ], + [ 'invalid weight', + '--builtin=select-only@one', + [qr{invalid weight specification: \@one}] ], + [ 'invalid weight', + '-b 
select-only@-1', + [qr{weight spec.* out of range .*: -1}] ], + [ 'too many scripts', '-S ' x 129, [qr{at most 128 SQL scripts}] ], + [ 'bad #clients', '-c three', [qr{invalid number of clients: "three"}] ], + [ 'bad #threads', '-j eleven', [qr{invalid number of threads: "eleven"}] + ], + [ 'bad scale', '-i -s two', [qr{invalid scaling factor: "two"}] ], + [ 'invalid #transactions', + '-t zil', + [qr{invalid number of transactions: "zil"}] ], + [ 'invalid duration', '-T ten', [qr{invalid duration: "ten"}] ], + [ '-t XOR -T', + '-N -l --aggregate-interval=5 --log-prefix=notused -t 1000 -T 1', + [qr{specify either }] ], + [ '-T XOR -t', + '-P 1 --progress-timestamp -l --sampling-rate=0.001 -T 10 -t 1000', + [qr{specify either }] ], + [ 'bad variable', '--define foobla', [qr{invalid variable definition}] ], + [ 'invalid fillfactor', '-F 1', [qr{invalid fillfactor}] ], + [ 'invalid query mode', '-M no-such-mode', [qr{invalid query mode}] ], + [ 'invalid progress', '--progress=0', + [qr{invalid thread progress delay}] ], + [ 'invalid rate', '--rate=0.0', [qr{invalid rate limit}] ], + [ 'invalid latency', '--latency-limit=0.0', [qr{invalid latency limit}] ], + [ 'invalid sampling rate', '--sampling-rate=0', + [qr{invalid sampling rate}] ], + [ 'invalid aggregate interval', '--aggregate-interval=-3', + [qr{invalid .* seconds for}] ], + [ 'weight zero', + '-b se@0 -b si@0 -b tpcb@0', + [qr{weight must not be zero}] ], + [ 'init vs run', '-i -S', [qr{cannot be used in initialization}] ], + [ 'run vs init', '-S -F 90', [qr{cannot be used in benchmarking}] ], + [ 'ambiguous builtin', '-b s', [qr{ambiguous}] ], + [ '--progress-timestamp => --progress', '--progress-timestamp', + [qr{allowed only under}] ], + + # loging sub-options + [ 'sampling => log', '--sampling-rate=0.01', + [qr{log sampling .* only when}] ], + [ 'sampling XOR aggregate', + '-l --sampling-rate=0.1 --aggregate-interval=3', + [qr{sampling .* aggregation .* cannot be used at the same time}] ], + [ 'aggregate => log', '--aggregate-interval=3', + [qr{aggregation .* only when}] ], + [ 'log-prefix => log', '--log-prefix=x', [qr{prefix .* only when}] ], + [ 'duration & aggregation', + '-l -T 1 --aggregate-interval=3', + [qr{aggr.* not be higher}] ], + [ 'duration % aggregation', + '-l -T 5 --aggregate-interval=3', + [qr{multiple}] ],); + +for my $o (@options) +{ + my ($name, $opts, $err_checks) = @$o; + pgbench($opts, 1, [qr{^$}], $err_checks, + 'pgbench option error: ' . $name); +} + +# Help +pgbench( + '--help', 0, + [ qr{benchmarking tool for PostgreSQL}, + qr{Usage}, + qr{Initialization options:}, + qr{Common options:}, + qr{Report bugs to} ], + [qr{^$}], + 'pgbench help'); + +# Version +pgbench('-V', 0, [qr{^pgbench .PostgreSQL. 
}], [qr{^$}], 'pgbench version'); + +# list of builtins +pgbench( + '-b list', + 0, + [qr{^$}], + [ qr{Available builtin scripts:}, qr{tpcb-like}, + qr{simple-update}, qr{select-only} ], + 'pgbench builtin list'); + +done_testing(); diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm index 3a81c1c60b..edcac6fb9f 100644 --- a/src/test/perl/PostgresNode.pm +++ b/src/test/perl/PostgresNode.pm @@ -155,8 +155,9 @@ sub new _logfile => "$TestLib::log_path/${testname}_${name}.log" }; bless $self, $class; - mkdir $self->{_basedir} or - BAIL_OUT("could not create data directory \"$self->{_basedir}\": $!"); + mkdir $self->{_basedir} + or + BAIL_OUT("could not create data directory \"$self->{_basedir}\": $!"); $self->dump_info; return $self; @@ -934,8 +935,7 @@ sub get_new_node # Retain the errno on die() if set, else assume a generic errno of 1. # This will instruct the END handler on how to handle artifacts left # behind from tests. -$SIG{__DIE__} = sub -{ +$SIG{__DIE__} = sub { if ($!) { $died = $!; @@ -965,7 +965,7 @@ END # clean basedir on clean test invocation $node->clean_node - if TestLib::all_tests_passing() && !defined $died && !$exit_code; + if TestLib::all_tests_passing() && !defined $died && !$exit_code; } $? = $exit_code; @@ -1325,9 +1325,9 @@ sub command_ok =pod -=item $node->command_fails(...) - TestLib::command_fails with our PGPORT +=item $node->command_fails(...) -See command_ok(...) +TestLib::command_fails with our PGPORT. See command_ok(...) =cut @@ -1359,6 +1359,23 @@ sub command_like =pod +=item $node->command_checks_all(...) + +TestLib::command_checks_all with our PGPORT. See command_ok(...) + +=cut + +sub command_checks_all +{ + my $self = shift; + + local $ENV{PGPORT} = $self->port; + + TestLib::command_checks_all(@_); +} + +=pod + =item $node->issues_sql_like(cmd, expected_sql, test_name) Run a command on the node, then verify that $expected_sql appears in the diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm index 6dba21c073..0e73c99130 100644 --- a/src/test/perl/TestLib.pm +++ b/src/test/perl/TestLib.pm @@ -39,6 +39,7 @@ our @EXPORT = qw( command_like command_like_safe command_fails_like + command_checks_all $windows_os ); @@ -330,4 +331,41 @@ sub command_fails_like like($stderr, $expected_stderr, "$test_name: matches"); } +# Run a command and check its status and outputs. +# The 5 arguments are: +# - cmd: ref to list for command, options and arguments to run +# - ret: expected exit status +# - out: ref to list of re to be checked against stdout (all must match) +# - err: ref to list of re to be checked against stderr (all must match) +# - test_name: name of test +sub command_checks_all +{ + my ($cmd, $ret, $out, $err, $test_name) = @_; + + # run command + my ($stdout, $stderr); + print("# Running: " . join(" ", @{$cmd}) . "\n"); + IPC::Run::run($cmd, '>', \$stdout, '2>', \$stderr); + + # On Windows, the exit status of the process is returned directly as the + # process's exit code, while on Unix, it's returned in the high bits + # of the exit code. + my $status = $windows_os ? $? : $? 
>> 8; + + # check status + ok($ret == $status, "$test_name status (got $status vs expected $ret)"); + + # check stdout + for my $re (@$out) + { + like($stdout, $re, "$test_name stdout /$re/"); + } + + # check stderr + for my $re (@$err) + { + like($stderr, $re, "$test_name stderr /$re/"); + } +} + 1; From 869aa40a27fa4908ad4112f1079bf732d1a12e13 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 11:28:02 -0400 Subject: [PATCH 0124/1087] Fix assorted portability issues in new pgbench TAP tests. * Our own version of getopt_long doesn't support abbreviation of long options. * It doesn't do automatic rearrangement of non-option arguments to the end, either. * Test was way too optimistic about the platform independence of NaN and Infinity outputs. I rather imagine we might have to lose those tests altogether, but for the moment just allow case variation and fully spelled out Infinity. Per buildfarm. --- src/bin/pgbench/t/001_pgbench_with_server.pl | 11 ++++------- src/bin/pgbench/t/002_pgbench_no_server.pl | 2 +- 2 files changed, 5 insertions(+), 8 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 032195e28a..66df4bc81b 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -89,10 +89,7 @@ sub pgbench # Again, with all possible options pgbench( - - # unlogged => faster test - '--initialize --scale=1 --unlogged --fillfactor=98 --foreign-keys --quiet' - . ' --tablespace=pg_default --index-tablespace=pg_default', + '--initialize --scale=1 --unlogged-tables --fillfactor=98 --foreign-keys --quiet --tablespace=pg_default --index-tablespace=pg_default', 0, [qr{^$}i], [ qr{creating tables}, @@ -220,9 +217,9 @@ sub pgbench qr{command=18.: double 18\b}, qr{command=19.: double 19\b}, qr{command=20.: double 20\b}, - qr{command=21.: double -?nan\b}, - qr{command=22.: double inf\b}, - qr{command=23.: double -inf\b}, + qr{command=21.: double -?nan}i, + qr{command=22.: double inf}i, + qr{command=23.: double -inf}i, qr{command=24.: int 9223372036854775807\b}, ], 'pgbench expressions', { '001_pgbench_expressions' => q{-- integer functions diff --git a/src/bin/pgbench/t/002_pgbench_no_server.pl b/src/bin/pgbench/t/002_pgbench_no_server.pl index acc0205f5b..631aa73ed3 100644 --- a/src/bin/pgbench/t/002_pgbench_no_server.pl +++ b/src/bin/pgbench/t/002_pgbench_no_server.pl @@ -25,7 +25,7 @@ sub pgbench # name, options, stderr checks [ 'bad option', - '-h home -p 5432 -U calvin -d stuff --bad-option', + '-h home -p 5432 -U calvin -d --bad-option', [ qr{unrecognized option}, qr{--help.*more information} ] ], [ 'no file', '-f no-such-file', From 9361bc347c85b685280fad742c519234d6e42bee Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 30 Aug 2017 22:28:36 -0400 Subject: [PATCH 0125/1087] Remove useless dead code Reviewed-by: Aleksandr Parfenov --- .../ecpg/test/expected/pgtypeslib-dt_test.c | 22 +++++++++---------- .../ecpg/test/pgtypeslib/dt_test.pgc | 22 +++++++++---------- 2 files changed, 22 insertions(+), 22 deletions(-) diff --git a/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test.c b/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test.c index 6801669a0e..3f90022b54 100644 --- a/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test.c +++ b/src/interfaces/ecpg/test/expected/pgtypeslib-dt_test.c @@ -155,7 +155,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} /* rdate_defmt_asc() */ - date1 = 0; text = ""; + date1 = 0; fmt = "yy/mm/dd"; in = "In the year 1995, the month 
of December, it is the 25th day"; /* 0123456789012345678901234567890123456789012345678901234567890 @@ -166,7 +166,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc1: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mmmm. dd. yyyy"; in = "12/25/95"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -174,7 +174,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc2: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "yy/mm/dd"; in = "95/12/25"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -182,7 +182,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc3: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "yy/mm/dd"; in = "1995, December 25th"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -190,7 +190,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc4: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "dd-mm-yy"; in = "This is 25th day of December, 1995"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -198,7 +198,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc5: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mmddyy"; in = "Dec. 25th, 1995"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -206,7 +206,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc6: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mmm. dd. yyyy"; in = "dec 25th 1995"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -214,7 +214,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc7: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mmm. dd. yyyy"; in = "DEC-25-1995"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -222,7 +222,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc8: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mm yy dd."; in = "12199525"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -230,7 +230,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc9: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "yyyy fierj mm dd."; in = "19951225"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -238,7 +238,7 @@ if (sqlca.sqlcode < 0) sqlprint ( );} printf("date_defmt_asc10: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mm/dd/yy"; in = "122595"; PGTYPESdate_defmt_asc(&date1, fmt, in); diff --git a/src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc b/src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc index 5e0d86847d..1cc156c3dc 100644 --- a/src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc +++ b/src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc @@ -81,7 +81,7 @@ main(void) /* rdate_defmt_asc() */ - date1 = 0; text = ""; + date1 = 0; fmt = "yy/mm/dd"; in = "In the year 1995, the month of December, it is the 25th day"; /* 0123456789012345678901234567890123456789012345678901234567890 @@ -92,7 +92,7 @@ main(void) printf("date_defmt_asc1: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mmmm. dd. 
yyyy"; in = "12/25/95"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -100,7 +100,7 @@ main(void) printf("date_defmt_asc2: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "yy/mm/dd"; in = "95/12/25"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -108,7 +108,7 @@ main(void) printf("date_defmt_asc3: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "yy/mm/dd"; in = "1995, December 25th"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -116,7 +116,7 @@ main(void) printf("date_defmt_asc4: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "dd-mm-yy"; in = "This is 25th day of December, 1995"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -124,7 +124,7 @@ main(void) printf("date_defmt_asc5: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mmddyy"; in = "Dec. 25th, 1995"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -132,7 +132,7 @@ main(void) printf("date_defmt_asc6: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mmm. dd. yyyy"; in = "dec 25th 1995"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -140,7 +140,7 @@ main(void) printf("date_defmt_asc7: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mmm. dd. yyyy"; in = "DEC-25-1995"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -148,7 +148,7 @@ main(void) printf("date_defmt_asc8: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mm yy dd."; in = "12199525"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -156,7 +156,7 @@ main(void) printf("date_defmt_asc9: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "yyyy fierj mm dd."; in = "19951225"; PGTYPESdate_defmt_asc(&date1, fmt, in); @@ -164,7 +164,7 @@ main(void) printf("date_defmt_asc10: %s\n", text); free(text); - date1 = 0; text = ""; + date1 = 0; fmt = "mm/dd/yy"; in = "122595"; PGTYPESdate_defmt_asc(&date1, fmt, in); From 8e673801262c66af4a54837f63ff596407835c20 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 30 Aug 2017 22:28:36 -0400 Subject: [PATCH 0126/1087] Remove useless empty string initializations This coding style probably stems from the days of shell scripts. 
Reviewed-by: Aleksandr Parfenov --- src/bin/initdb/initdb.c | 70 ++++++++++++++------------- src/bin/pg_basebackup/pg_basebackup.c | 6 +-- 2 files changed, 39 insertions(+), 37 deletions(-) diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c index 7303bbe892..e4a0aba1eb 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -118,29 +118,29 @@ static const char *const auth_methods_local[] = { static char *share_path = NULL; /* values to be obtained from arguments */ -static char *pg_data = ""; -static char *encoding = ""; -static char *locale = ""; -static char *lc_collate = ""; -static char *lc_ctype = ""; -static char *lc_monetary = ""; -static char *lc_numeric = ""; -static char *lc_time = ""; -static char *lc_messages = ""; -static const char *default_text_search_config = ""; -static char *username = ""; +static char *pg_data = NULL; +static char *encoding = NULL; +static char *locale = NULL; +static char *lc_collate = NULL; +static char *lc_ctype = NULL; +static char *lc_monetary = NULL; +static char *lc_numeric = NULL; +static char *lc_time = NULL; +static char *lc_messages = NULL; +static const char *default_text_search_config = NULL; +static char *username = NULL; static bool pwprompt = false; static char *pwfilename = NULL; static char *superuser_password = NULL; -static const char *authmethodhost = ""; -static const char *authmethodlocal = ""; +static const char *authmethodhost = NULL; +static const char *authmethodlocal = NULL; static bool debug = false; static bool noclean = false; static bool do_sync = true; static bool sync_only = false; static bool show_setting = false; static bool data_checksums = false; -static char *xlog_dir = ""; +static char *xlog_dir = NULL; /* internal vars */ @@ -1285,7 +1285,6 @@ bootstrap_template1(void) { PG_CMD_DECL; char **line; - char *talkargs = ""; char **bki_lines; char headerline[MAXPGPATH]; char buf[64]; @@ -1293,9 +1292,6 @@ bootstrap_template1(void) printf(_("running bootstrap script ... ")); fflush(stdout); - if (debug) - talkargs = "-d 5"; - bki_lines = readfile(bki_file); /* Check that bki file appears to be of the right version */ @@ -1359,7 +1355,9 @@ bootstrap_template1(void) "\"%s\" --boot -x1 %s %s %s", backend_exec, data_checksums ? "-k" : "", - boot_options, talkargs); + boot_options, + debug ? "-d 5" : ""); + PG_CMD_OPEN; @@ -2136,6 +2134,10 @@ check_locale_name(int category, const char *locale, char **canonname) /* save may be pointing at a modifiable scratch variable, so copy it. */ save = pg_strdup(save); + /* for setlocale() call */ + if (!locale) + locale = ""; + /* set the locale with setlocale, to see if it accepts it. 
*/ res = setlocale(category, locale); @@ -2223,19 +2225,19 @@ setlocales(void) /* set empty lc_* values to locale config if set */ - if (strlen(locale) > 0) + if (locale) { - if (strlen(lc_ctype) == 0) + if (!lc_ctype) lc_ctype = locale; - if (strlen(lc_collate) == 0) + if (!lc_collate) lc_collate = locale; - if (strlen(lc_numeric) == 0) + if (!lc_numeric) lc_numeric = locale; - if (strlen(lc_time) == 0) + if (!lc_time) lc_time = locale; - if (strlen(lc_monetary) == 0) + if (!lc_monetary) lc_monetary = locale; - if (strlen(lc_messages) == 0) + if (!lc_messages) lc_messages = locale; } @@ -2310,7 +2312,7 @@ usage(const char *progname) static void check_authmethod_unspecified(const char **authmethod) { - if (*authmethod == NULL || strlen(*authmethod) == 0) + if (*authmethod == NULL) { authwarning = _("\nWARNING: enabling \"trust\" authentication for local connections\n" "You can change this by editing pg_hba.conf or using the option -A, or\n" @@ -2367,7 +2369,7 @@ setup_pgdata(void) char *pgdata_get_env, *pgdata_set_env; - if (strlen(pg_data) == 0) + if (!pg_data) { pgdata_get_env = getenv("PGDATA"); if (pgdata_get_env && strlen(pgdata_get_env)) @@ -2479,7 +2481,7 @@ setup_locale_encoding(void) lc_time); } - if (strlen(encoding) == 0) + if (!encoding) { int ctype_enc; @@ -2589,10 +2591,10 @@ setup_data_file_paths(void) void setup_text_search(void) { - if (strlen(default_text_search_config) == 0) + if (!default_text_search_config) { default_text_search_config = find_matching_ts_config(lc_ctype); - if (default_text_search_config == NULL) + if (!default_text_search_config) { printf(_("%s: could not find suitable text search configuration for locale \"%s\"\n"), progname, lc_ctype); @@ -2728,7 +2730,7 @@ create_xlog_or_symlink(void) /* form name of the place for the subdirectory or symlink */ subdirloc = psprintf("%s/pg_wal", pg_data); - if (strcmp(xlog_dir, "") != 0) + if (xlog_dir) { int ret; @@ -3131,7 +3133,7 @@ main(int argc, char *argv[]) * Non-option argument specifies data directory as long as it wasn't * already specified with -D / --pgdata */ - if (optind < argc && strlen(pg_data) == 0) + if (optind < argc && !pg_data) { pg_data = pg_strdup(argv[optind]); optind++; @@ -3187,7 +3189,7 @@ main(int argc, char *argv[]) setup_bin_paths(argv[0]); effective_user = get_id(); - if (strlen(username) == 0) + if (!username) username = effective_user; if (strncmp(username, "pg_", 3) == 0) diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c index dfb9b5ddcb..51509d150e 100644 --- a/src/bin/pg_basebackup/pg_basebackup.c +++ b/src/bin/pg_basebackup/pg_basebackup.c @@ -76,7 +76,7 @@ typedef enum /* Global options */ static char *basedir = NULL; static TablespaceList tablespace_dirs = {NULL, NULL}; -static char *xlog_dir = ""; +static char *xlog_dir = NULL; static char format = 'p'; /* p(lain)/t(ar) */ static char *label = "pg_basebackup base backup"; static bool noclean = false; @@ -2347,7 +2347,7 @@ main(int argc, char **argv) temp_replication_slot = false; } - if (strcmp(xlog_dir, "") != 0) + if (xlog_dir) { if (format != 'p') { @@ -2398,7 +2398,7 @@ main(int argc, char **argv) } /* Create pg_wal symlink, if required */ - if (strcmp(xlog_dir, "") != 0) + if (xlog_dir) { char *linkloc; From ee24d2b5cf059cab83711992c0cf110ad44df5f9 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 30 Aug 2017 22:28:36 -0400 Subject: [PATCH 0127/1087] Clean up excessive code The encoding ID was converted between string and number too many times, probably a remnant from the 
shell script days. Reviewed-by: Aleksandr Parfenov --- src/bin/initdb/initdb.c | 26 +++++++++++--------------- 1 file changed, 11 insertions(+), 15 deletions(-) diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c index e4a0aba1eb..9d1e5d789f 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -145,7 +145,7 @@ static char *xlog_dir = NULL; /* internal vars */ static const char *progname; -static char *encodingid = "0"; +static int encodingid; static char *bki_file; static char *desc_file; static char *shdesc_file; @@ -236,7 +236,7 @@ static void writefile(char *path, char **lines); static FILE *popen_check(const char *command, const char *mode); static void exit_nicely(void); static char *get_id(void); -static char *get_encoding_id(char *encoding_name); +static int get_encoding_id(char *encoding_name); static void set_input(char **dest, char *filename); static void check_input(char *path); static void write_version_file(char *extrapath); @@ -636,7 +636,7 @@ encodingid_to_string(int enc) /* * get the encoding id for a given encoding name */ -static char * +static int get_encoding_id(char *encoding_name) { int enc; @@ -644,7 +644,7 @@ get_encoding_id(char *encoding_name) if (encoding_name && *encoding_name) { if ((enc = pg_valid_server_encoding(encoding_name)) >= 0) - return encodingid_to_string(enc); + return enc; } fprintf(stderr, _("%s: \"%s\" is not a valid server encoding name\n"), progname, encoding_name ? encoding_name : "(null)"); @@ -1328,7 +1328,7 @@ bootstrap_template1(void) bki_lines = replace_token(bki_lines, "POSTGRES", escape_quotes(username)); - bki_lines = replace_token(bki_lines, "ENCODING", encodingid); + bki_lines = replace_token(bki_lines, "ENCODING", encodingid_to_string(encodingid)); bki_lines = replace_token(bki_lines, "LC_COLLATE", escape_quotes(lc_collate)); @@ -2454,8 +2454,6 @@ setup_bin_paths(const char *argv0) void setup_locale_encoding(void) { - int user_enc; - setlocales(); if (strcmp(lc_ctype, lc_collate) == 0 && @@ -2505,12 +2503,11 @@ setup_locale_encoding(void) * UTF-8. */ #ifdef WIN32 + encodingid = PG_UTF8; printf(_("Encoding \"%s\" implied by locale is not allowed as a server-side encoding.\n" "The default database encoding will be set to \"%s\" instead.\n"), pg_encoding_to_char(ctype_enc), - pg_encoding_to_char(PG_UTF8)); - ctype_enc = PG_UTF8; - encodingid = encodingid_to_string(ctype_enc); + pg_encoding_to_char(encodingid)); #else fprintf(stderr, _("%s: locale \"%s\" requires unsupported encoding \"%s\"\n"), @@ -2524,17 +2521,16 @@ setup_locale_encoding(void) } else { - encodingid = encodingid_to_string(ctype_enc); + encodingid = ctype_enc; printf(_("The default database encoding has accordingly been set to \"%s\".\n"), - pg_encoding_to_char(ctype_enc)); + pg_encoding_to_char(encodingid)); } } else encodingid = get_encoding_id(encoding); - user_enc = atoi(encodingid); - if (!check_locale_encoding(lc_ctype, user_enc) || - !check_locale_encoding(lc_collate, user_enc)) + if (!check_locale_encoding(lc_ctype, encodingid) || + !check_locale_encoding(lc_collate, encodingid)) exit(1); /* check_locale_encoding printed the error */ } From 77d63b7eafd44469c2766c1f29b75533981e4911 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 13:36:13 -0400 Subject: [PATCH 0128/1087] Fix more portability issues in new pgbench TAP tests. * Remove no-such-user test case, output isn't stable, and we really don't need to be testing such cases here anyway. 
* Fix the process exit code test logic to match PostgresNode::psql (but I didn't bother with looking at the "core" flag). * Give up on inf/nan tests. Per buildfarm. --- src/bin/pgbench/t/001_pgbench_with_server.pl | 20 +++----------------- src/test/perl/TestLib.pm | 14 ++++++++------ 2 files changed, 11 insertions(+), 23 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 66df4bc81b..8458270637 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -68,14 +68,6 @@ sub pgbench qr{FATAL: database "no-such-database" does not exist} ], 'no such database'); -pgbench( - '-U no-such-user template0', - 1, - [qr{^$}], - [ qr{connection to database "template0" failed}, - qr{FATAL: role "no-such-user" does not exist} ], - 'no such user'); - pgbench( '-S -t 1', 1, [qr{^$}], [qr{Perhaps you need to do initialization}], @@ -89,7 +81,7 @@ sub pgbench # Again, with all possible options pgbench( - '--initialize --scale=1 --unlogged-tables --fillfactor=98 --foreign-keys --quiet --tablespace=pg_default --index-tablespace=pg_default', +'--initialize --scale=1 --unlogged-tables --fillfactor=98 --foreign-keys --quiet --tablespace=pg_default --index-tablespace=pg_default', 0, [qr{^$}i], [ qr{creating tables}, @@ -217,10 +209,7 @@ sub pgbench qr{command=18.: double 18\b}, qr{command=19.: double 19\b}, qr{command=20.: double 20\b}, - qr{command=21.: double -?nan}i, - qr{command=22.: double inf}i, - qr{command=23.: double -inf}i, - qr{command=24.: int 9223372036854775807\b}, ], + qr{command=21.: int 9223372036854775807\b}, ], 'pgbench expressions', { '001_pgbench_expressions' => q{-- integer functions \set i1 debug(random(1, 100)) @@ -246,10 +235,7 @@ sub pgbench \set d6 debug((0.5 * 12.1 - 0.05) * (31.0 / 10)) \set d7 debug(11.1 + 7.9) \set d8 debug(:foo * -2) --- special values -\set nan debug(0.0 / 0.0) -\set pin debug(1.0 / 0.0) -\set nin debug(-1.0 / 0.0) +-- forced overflow \set maxint debug(:minint - 1) -- reset a variable \set i1 0 diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm index 0e73c99130..d1a2eb5883 100644 --- a/src/test/perl/TestLib.pm +++ b/src/test/perl/TestLib.pm @@ -340,20 +340,22 @@ sub command_fails_like # - test_name: name of test sub command_checks_all { - my ($cmd, $ret, $out, $err, $test_name) = @_; + my ($cmd, $expected_ret, $out, $err, $test_name) = @_; # run command my ($stdout, $stderr); print("# Running: " . join(" ", @{$cmd}) . "\n"); IPC::Run::run($cmd, '>', \$stdout, '2>', \$stderr); - # On Windows, the exit status of the process is returned directly as the - # process's exit code, while on Unix, it's returned in the high bits - # of the exit code. - my $status = $windows_os ? $? : $? >> 8; + # See http://perldoc.perl.org/perlvar.html#%24CHILD_ERROR + my $ret = $?; + die "command exited with signal " . ($ret & 127) + if $ret & 127; + $ret = $ret >> 8; # check status - ok($ret == $status, "$test_name status (got $status vs expected $ret)"); + ok($ret == $expected_ret, + "$test_name status (got $ret vs expected $expected_ret)"); # check stdout for my $re (@$out) From 933851033becf0848e0bb903f310bbd725e19489 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 14:01:51 -0400 Subject: [PATCH 0129/1087] Fix more portability issues in new pgbench TAP tests. Strike two on the --bad-option test. Three strikes and it's out. 
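For background: the complaint about an unknown long option is printed by getopt_long() itself, and its wording varies. GNU libc prints "unrecognized option", while other implementations (evidently including the getopt_long in use on some buildfarm members) print "illegal option", which is what the widened pattern below accepts. A minimal standalone sketch of the situation, not pgbench code; the option table is invented:

    #include <stdio.h>
    #include <getopt.h>

    int
    main(int argc, char **argv)
    {
        static struct option long_options[] = {
            {"host", required_argument, NULL, 'h'},
            {NULL, 0, NULL, 0}
        };
        int c;

        /*
         * getopt_long() prints its own, implementation-specific complaint
         * about an unknown option; only the hint below is under our control.
         */
        while ((c = getopt_long(argc, argv, "h:", long_options, NULL)) != -1)
        {
            if (c == '?')
                fprintf(stderr, "Try \"%s --help\" for more information.\n",
                        argv[0]);
        }

        return 0;
    }

Run as "./demo --bad-option", this prints the implementation-specific complaint plus the hint, so a portable test can only match on the parts common to both wordings.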
Fabien Coelho, per buildfarm --- src/bin/pgbench/t/002_pgbench_no_server.pl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/pgbench/t/002_pgbench_no_server.pl b/src/bin/pgbench/t/002_pgbench_no_server.pl index 631aa73ed3..d6b3d4f926 100644 --- a/src/bin/pgbench/t/002_pgbench_no_server.pl +++ b/src/bin/pgbench/t/002_pgbench_no_server.pl @@ -26,7 +26,7 @@ sub pgbench # name, options, stderr checks [ 'bad option', '-h home -p 5432 -U calvin -d --bad-option', - [ qr{unrecognized option}, qr{--help.*more information} ] ], + [ qr{(unrecognized|illegal) option}, qr{--help.*more information} ] ], [ 'no file', '-f no-such-file', [qr{could not open file "no-such-file":}] ], From 3cf17c9d47b1b427b7514c7baa6818a683293ff3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 14:38:54 -0400 Subject: [PATCH 0130/1087] Remove mention of password_encryption = plain in postgresql.conf.sample. Evidently missed in commit eb61136dc. Spotted by Oleg Bartunov. Discussion: https://postgr.es/m/CAF4Au4wz_iK5r4fnTnnd8XqioAZQs-P7-VsEAfivW34zMVpAmw@mail.gmail.com --- src/backend/utils/misc/postgresql.conf.sample | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample index df5d2f3f22..53aa006df5 100644 --- a/src/backend/utils/misc/postgresql.conf.sample +++ b/src/backend/utils/misc/postgresql.conf.sample @@ -85,7 +85,7 @@ #ssl_key_file = 'server.key' #ssl_ca_file = '' #ssl_crl_file = '' -#password_encryption = md5 # md5, scram-sha-256, or plain +#password_encryption = md5 # md5 or scram-sha-256 #db_user_namespace = off #row_security = on From c1602c7a1b2e49acbba680cb72949d4fa3a8d2ee Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 16:59:26 -0400 Subject: [PATCH 0131/1087] Doc: update v10 release notes through today. Also, another round of copy-editing. I merged a few items that didn't seem to be meaningfully different from a user's perspective. --- doc/src/sgml/release-10.sgml | 290 +++++++++++++++++------------------ 1 file changed, 144 insertions(+), 146 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 1a9110614d..7ace37c8b6 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -6,7 +6,7 @@ Release date: - 2017-??-?? (current as of 2017-08-26, commit 145ca364d) + 2017-??-?? (current as of 2017-09-07, commit 08cb36417) @@ -20,12 +20,11 @@ - (yet to be finalized) Logical replication using publish/subscribe - Declarative Table Partitioning - Improved Query Parallelism + Declarative table partitioning + Improved query parallelism Significant general performance improvements - SCRAM-SHA-256 strong authentication + Stronger password authentication based on SCRAM-SHA-256 Improved monitoring and control @@ -56,11 +55,12 @@ Hash indexes must be rebuilt after pg_upgrade-ing from any previous major PostgreSQL version (Mithun - Cy, Robert Haas) + Cy, Robert Haas, Amit Kapila) @@ -329,8 +329,8 @@ Changing this setting from the default value caused queries referencing parent tables to not include child tables. The SQL - standard requires such behavior and this has been the default since - PostgreSQL 7.1. + standard requires them to be included, however, and this has been the + default since PostgreSQL 7.1. @@ -393,8 +393,8 @@ This removes configure's @@ -403,7 +403,7 @@ 2016-10-11 [2f1eaf87e] Drop server support for FE/BE protocol version 1.0. 
--> - Remove support for client/server protocol version 1.0 (Tom Lane) + Remove server support for client/server protocol version 1.0 (Tom Lane) @@ -424,8 +424,8 @@ This replaces the hardcoded, undocumented file name dh1024.pem. Note that dh1024.pem is - no longer examined by default; you must set this option to use custom - DH parameters. + no longer examined by default; you must set this option if you want + to use custom DH parameters. @@ -485,9 +485,9 @@ - These were deprecated since PostgreSQL 9.1. Instead, - use CREATE EXTENSION and DROP EXTENSION - directly. + These had been deprecated since PostgreSQL 9.1. + Instead, use CREATE EXTENSION and DROP + EXTENSION directly. @@ -626,25 +626,41 @@ - Add SP-GiST index support for INET and - CIDR data types (Emre Hasegeli) + Add write-ahead logging support to hash indexes (Amit Kapila) - These data types already had GiST support. + This makes hash indexes crash-safe and replicatable. + The former warning message about their use is removed. - Reduce page locking during vacuuming of GIN indexes - (Andrey Borodin) + Improve hash index performance (Amit Kapila, Mithun Cy, Ashutosh + Sharma) + + + + + + + Add SP-GiST index support for INET and + CIDR data types (Emre Hasegeli) @@ -658,8 +674,8 @@ - Specifically, a new CREATE - INDEX option allows auto-summarization of the + A new CREATE + INDEX option enables auto-summarization of the previous BRIN page range when a new page range is created. @@ -705,65 +721,17 @@ - - - - <link linkend="indexes-types">Hash Indexes</link> - - - - - - - Add write-ahead logging support to hash indexes (Amit Kapila) - - - - This makes hash indexes crash-safe and replicatable. - The former warning message about their use is removed. - - - - - - - Improve hash index bucket split performance by reducing locking - requirements (Amit Kapila, Mithun Cy) - - - - Also cache hash index meta-information for faster lookups. - - - - - - - Improve efficiency of hash index growth (Amit Kapila, Mithun Cy) - - - - + - - Allow page-at-a-time hash index pruning (Ashutosh Sharma) - - - - + + Reduce page locking during vacuuming of GIN indexes + (Andrey Borodin) + + - + @@ -986,17 +954,6 @@ - - Properly update the statistics collector during REFRESH MATERIALIZED - VIEW (Jim Mlodgenski) - - - - - @@ -1010,6 +967,17 @@ + + + + Properly update the statistics collector during REFRESH MATERIALIZED + VIEW (Jim Mlodgenski) + + + @@ -1120,25 +1088,17 @@ - Add pg_stat_activity reporting of latch wait states - (Michael Paquier, Robert Haas) + Add pg_stat_activity reporting of low-level wait + states (Michael Paquier, Robert Haas, Rushabh Lathia) - This includes the remaining wait events, like client reads, - client writes, and synchronous replication. - - - - - - - Add pg_stat_activity reporting of waits on reads, - writes, and fsyncs (Rushabh Lathia) + This change enables reporting of numerous low-level wait conditions, + including latch waits, file reads/writes/fsyncs, client reads/writes, + and synchronous replication. @@ -1315,8 +1275,8 @@ 2017-03-27 [1b02be21f] Fsync directory after creating or unlinking file. --> - Perform an fsync on the directory after creating or unlinking files - (Michael Paquier) + After creating or unlinking files, perform an fsync on their parent + directory (Michael Paquier) @@ -1367,7 +1327,7 @@ - Larger WAL segment sizes allows for fewer + A larger WAL segment size allows for fewer invocations and fewer WAL files to manage. 
@@ -1400,7 +1360,7 @@ Logical replication allows more flexibility than physical replication does, including replication between different major - versions of PostgreSQL and selective-table + versions of PostgreSQL and selective replication. @@ -1455,7 +1415,7 @@ Previously pg_hba.conf's replication connection - lines were commented out. This is particularly useful for + lines were commented out by default. This is particularly useful for . @@ -1654,7 +1614,7 @@ Previously all security policies were permissive, meaning that any - matching policy allowed access. Optional restrictive policies must + matching policy allowed access. A restrictive policy must match for access to be granted. These policy types can be combined. @@ -1829,7 +1789,7 @@ This complements the existing support for EUI-48 MAC addresses - as macaddr. + (type macaddr). @@ -2253,22 +2213,6 @@ - - Improve psql's \d (display relation) - and \dD (display domain) commands to show collation, - nullable, and default properties in separate columns (Peter - Eisentraut) - - - - Previous they were shown in a single Modifiers column. - - - - - @@ -2311,6 +2255,47 @@ + + Add variables showing server version and psql version + (Fabien Coelho) + + + + + + + Improve psql's \d (display relation) + and \dD (display domain) commands to show collation, + nullable, and default properties in separate columns (Peter + Eisentraut) + + + + Previously they were shown in a single Modifiers column. + + + + + + + Make the various \d commands handle no-matching-object + cases more consistently (Daniel Gustafsson) + + + + They now all print the message about that to stderr, not stdout, + and the message wording is more consistent. + + + + + + + tupconvert.c functions no longer convert tuples just to + embed a different composite-type OID in them (Ashutosh Bapat, Tom Lane) + + + + The majority of callers don't care about the composite-type OID; + but if the result tuple is to be used as a composite Datum, steps + should be taken to make sure the correct OID is inserted in it. + + + - Push aggregates to foreign data wrapper servers, where possible + In postgres_fdw, + push aggregate functions to the remote server, when possible (Jeevan Chalke, Ashutosh Bapat) - This reduces the amount of data that must be passed - from the foreign data wrapper server, and offloads - aggregate computation from the requesting server. The postgres_fdw FDW is able to - perform this optimization. There are also improvements in - pushing down joins involving extensions. + This reduces the amount of data that must be passed from the remote + server, and offloads aggregate computation from the requesting server. - Allow push down of FULL JOIN queries containing - subqueries in the - FROM clause to foreign servers (Etsuro Fujita) + In postgres_fdw, push joins to the remote server in + more cases (David Rowley, Ashutosh Bapat, Etsuro Fujita) @@ -3072,7 +3070,7 @@ - This allows it to be less disruptive when run on production systems. + This makes it less disruptive when run on production systems. From 2cf15ec8b1cb29bea149559700566a21a790b6d3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 17:25:11 -0400 Subject: [PATCH 0132/1087] Fix pgbench TAP tests to work with --disable-thread-safety. Probably matters to nobody but me; but I'd like to still be able to get through the TAP tests on gaur/pademelon, from time to time. 
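For context, the stderr probe added below keys on a message that pgbench prints when built without thread support. The following standalone sketch shows that style of guard; the #ifdef structure and the wording after the semicolon are assumptions rather than quotes from pgbench.c, and the test itself matches only the phrase "threads are not supported on this platform":

    #include <stdio.h>
    #include <stdlib.h>

    /* normally supplied by pg_config.h unless --disable-thread-safety */
    /* #define ENABLE_THREAD_SAFETY */

    static void
    check_thread_count(int nthreads)
    {
    #ifndef ENABLE_THREAD_SAFETY
        if (nthreads != 1)
        {
            /* the TAP test greps only the clause before the semicolon */
            fprintf(stderr,
                    "threads are not supported on this platform; use -j1\n");
            exit(1);
        }
    #endif
        printf("using %d thread(s)\n", nthreads);
    }

    int
    main(void)
    {
        check_thread_count(2);    /* exits unless ENABLE_THREAD_SAFETY is set */
        return 0;
    }

Under --disable-thread-safety the symbol is not defined, so any -j value above 1 must fail up front rather than misbehave at run time.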
--- src/bin/pgbench/t/001_pgbench_with_server.pl | 20 +++++++++++++++----- 1 file changed, 15 insertions(+), 5 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 8458270637..b80640a8cc 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -126,9 +126,18 @@ sub pgbench qr{receiving}, qr{executing} ], 'pgbench select only'); +# check if threads are supported +my $nthreads = 2; + +{ + my ($stderr); + run_log([ 'pgbench', '-j', '2', '--bad-option' ], '2>', \$stderr); + $nthreads = 1 if $stderr =~ 'threads are not supported on this platform'; +} + # run custom scripts pgbench( - '-t 100 -c 1 -j 2 -M prepared -n', + "-t 100 -c 1 -j $nthreads -M prepared -n", 0, [ qr{type: multiple scripts}, qr{mode: prepared}, @@ -439,11 +448,12 @@ sub check_pgbench_logs # note: --progress-timestamp is not tested pgbench( '-T 2 -P 1 -l --log-prefix=001_pgbench_log_1 --aggregate-interval=1' - . ' -S -b se@2 --rate=20 --latency-limit=1000 -j 2 -c 3 -r', + . ' -S -b se@2 --rate=20 --latency-limit=1000 -j ' . $nthreads + . ' -c 3 -r', 0, [ qr{type: multiple}, qr{clients: 3}, - qr{threads: 2}, + qr{threads: $nthreads}, qr{duration: 2 s}, qr{script 1: .* select only}, qr{script 2: .* select only}, @@ -452,8 +462,8 @@ sub check_pgbench_logs [ qr{vacuum}, qr{progress: 1\b} ], 'pgbench progress'); -# 2 threads 2 seconds, sometimes only one aggregated line is written -check_pgbench_logs('001_pgbench_log_1', 2, 1, 2, +# $nthreads threads, 2 seconds, sometimes only one aggregated line is written +check_pgbench_logs('001_pgbench_log_1', $nthreads, 1, 2, qr{^\d+ \d{1,2} \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+$}); # with sampling rate From 6f6b99d1335be8ea1b74581fc489a97b109dd08a Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 8 Sep 2017 17:28:04 -0400 Subject: [PATCH 0133/1087] Allow a partitioned table to have a default partition. Any tuples that don't route to any other partition will route to the default partition. Jeevan Ladhe, Beena Emerson, Ashutosh Bapat, Rahila Syed, and Robert Haas, with review and testing at various stages by (at least) Rushabh Lathia, Keith Fiske, Amit Langote, Amul Sul, Rajkumar Raghuanshi, Sven Kunze, Kyotaro Horiguchi, Thom Brown, Rafia Sabih, and Dilip Kumar. 
Discussion: http://postgr.es/m/CAH2L28tbN4SYyhS7YV1YBWcitkqbhSWfQCy0G=apRcC_PEO-bg@mail.gmail.com Discussion: http://postgr.es/m/CAOG9ApEYj34fWMcvBMBQ-YtqR9fTdXhdN82QEKG0SVZ6zeL1xg@mail.gmail.com --- doc/src/sgml/catalogs.sgml | 11 + doc/src/sgml/ref/alter_table.sgml | 31 +- doc/src/sgml/ref/create_table.sgml | 35 +- src/backend/catalog/heap.c | 41 +- src/backend/catalog/partition.c | 644 ++++++++++++++++++--- src/backend/commands/tablecmds.c | 187 +++++- src/backend/nodes/copyfuncs.c | 1 + src/backend/nodes/equalfuncs.c | 1 + src/backend/nodes/outfuncs.c | 1 + src/backend/nodes/readfuncs.c | 1 + src/backend/parser/gram.y | 27 +- src/backend/parser/parse_utilcmd.c | 12 + src/backend/utils/adt/ruleutils.c | 8 +- src/bin/psql/describe.c | 11 +- src/bin/psql/tab-complete.c | 4 +- src/include/catalog/catversion.h | 2 +- src/include/catalog/partition.h | 7 + src/include/catalog/pg_partitioned_table.h | 13 +- src/include/commands/tablecmds.h | 4 + src/include/nodes/parsenodes.h | 1 + src/test/regress/expected/alter_table.out | 49 ++ src/test/regress/expected/create_table.out | 20 + src/test/regress/expected/insert.out | 147 ++++- src/test/regress/expected/plancache.out | 26 + src/test/regress/expected/sanity_check.out | 4 + src/test/regress/expected/update.out | 33 ++ src/test/regress/sql/alter_table.sql | 47 ++ src/test/regress/sql/create_table.sql | 20 + src/test/regress/sql/insert.sql | 69 ++- src/test/regress/sql/plancache.sql | 21 + src/test/regress/sql/update.sql | 24 + 31 files changed, 1367 insertions(+), 135 deletions(-) diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index 4f56188a1c..4978b47f0e 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -4738,6 +4738,17 @@ SCRAM-SHA-256$<iteration count>:<salt>< The number of columns in partition key + + partdefid + oid + pg_class.oid + + The OID of the pg_class entry for the default partition + of this partitioned table, or zero if this partitioned table does not + have a default partition. + + + partattrs int2vector diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index dae63077ee..0fb385ece7 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -34,7 +34,7 @@ ALTER TABLE [ IF EXISTS ] name ALTER TABLE ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ] SET TABLESPACE new_tablespace [ NOWAIT ] ALTER TABLE [ IF EXISTS ] name - ATTACH PARTITION partition_name FOR VALUES partition_bound_spec + ATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT } ALTER TABLE [ IF EXISTS ] name DETACH PARTITION partition_name @@ -765,11 +765,18 @@ ALTER TABLE [ IF EXISTS ] name - ATTACH PARTITION partition_name FOR VALUES partition_bound_spec + ATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT } This form attaches an existing table (which might itself be partitioned) - as a partition of the target table using the same syntax for + as a partition of the target table. The table can be attached + as a partition for specific values using FOR VALUES + or as a default partition by using DEFAULT + . + + + + A partition using FOR VALUES uses same syntax for partition_bound_spec as . The partition bound specification must correspond to the partitioning strategy and partition key of the @@ -806,6 +813,17 @@ ALTER TABLE [ IF EXISTS ] name (See the discussion in about constraints on the foreign table.) 
+ + + When a table has a default partition, defining a new partition changes + the partition constraint for the default partition. The default + partition can't contain any rows that would need to be moved to the new + partition, and will be scanned to verify that none are present. This + scan, like the scan of the new partition, can be avoided if an + appropriate CHECK constraint is present. Also like + the scan of the new partition, it is always skipped when the default + partition is a foreign table. + @@ -1395,6 +1413,13 @@ ALTER TABLE cities ATTACH PARTITION cities_ab FOR VALUES IN ('a', 'b'); + + Attach a default partition to a partitioned table: + +ALTER TABLE cities + ATTACH PARTITION cities_partdef DEFAULT; + + Detach a partition from partitioned table: diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index a6ca590249..824253de40 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -49,7 +49,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ] | table_constraint } [, ... ] -) ] FOR VALUES partition_bound_spec +) ] { FOR VALUES partition_bound_spec | DEFAULT } [ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] [ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ] @@ -250,11 +250,13 @@ FROM ( { numeric_literal | - PARTITION OF parent_table FOR VALUES partition_bound_spec + PARTITION OF parent_table { FOR VALUES partition_bound_spec | DEFAULT } Creates the table as a partition of the specified - parent table. + parent table. The table can be created either as a partition for specific + values using FOR VALUES or as a default partition + using DEFAULT. @@ -342,6 +344,26 @@ FROM ( { numeric_literal | + + If DEFAULT is specified, the table will be + created as a default partition of the parent table. The parent can + either be a list or range partitioned table. A partition key value + not fitting into any other partition of the given parent will be + routed to the default partition. There can be only one default + partition for a given parent table. + + + + When a table has an existing DEFAULT partition and + a new partition is added to it, the existing default partition must + be scanned to verify that it does not contain any rows which properly + belong in the new partition. If the default partition contains a + large number of rows, this may be slow. The scan will be skipped if + the default partition is a foreign table or if it has a constraint which + proves that it cannot contain rows which should be placed in the new + partition. + + A partition must have the same column names and types as the partitioned table to which it belongs. 
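The one-default-per-parent rule can be seen directly (editorial sketch, hypothetical names; the error text matches check_new_partition_bound() in this patch):

CREATE TABLE t_def2 PARTITION OF t DEFAULT;
ERROR:  partition "t_def2" conflicts with existing default partition "t_def"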
If the parent is specified WITH @@ -1679,6 +1701,13 @@ CREATE TABLE cities_ab CREATE TABLE cities_ab_10000_to_100000 PARTITION OF cities_ab FOR VALUES FROM (10000) TO (100000); + + + Create a default partition: + +CREATE TABLE cities_partdef + PARTITION OF cities DEFAULT; + diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 45ee9ac8b9..05e70818e7 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -1759,7 +1759,8 @@ heap_drop_with_catalog(Oid relid) { Relation rel; HeapTuple tuple; - Oid parentOid = InvalidOid; + Oid parentOid = InvalidOid, + defaultPartOid = InvalidOid; /* * To drop a partition safely, we must grab exclusive lock on its parent, @@ -1775,6 +1776,14 @@ heap_drop_with_catalog(Oid relid) { parentOid = get_partition_parent(relid); LockRelationOid(parentOid, AccessExclusiveLock); + + /* + * If this is not the default partition, dropping it will change the + * default partition's partition constraint, so we must lock it. + */ + defaultPartOid = get_default_partition_oid(parentOid); + if (OidIsValid(defaultPartOid) && relid != defaultPartOid) + LockRelationOid(defaultPartOid, AccessExclusiveLock); } ReleaseSysCache(tuple); @@ -1825,6 +1834,13 @@ heap_drop_with_catalog(Oid relid) if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) RemovePartitionKeyByRelId(relid); + /* + * If the relation being dropped is the default partition itself, + * invalidate its entry in pg_partitioned_table. + */ + if (relid == defaultPartOid) + update_default_partition_oid(parentOid, InvalidOid); + /* * Schedule unlinking of the relation's physical files at commit. */ @@ -1884,6 +1900,14 @@ heap_drop_with_catalog(Oid relid) if (OidIsValid(parentOid)) { + /* + * If this is not the default partition, the partition constraint of + * the default partition has changed to include the portion of the key + * space previously covered by the dropped partition. + */ + if (OidIsValid(defaultPartOid) && relid != defaultPartOid) + CacheInvalidateRelcacheByRelid(defaultPartOid); + /* * Invalidate the parent's relcache so that the partition is no longer * included in its partition descriptor. @@ -3138,6 +3162,7 @@ StorePartitionKey(Relation rel, values[Anum_pg_partitioned_table_partrelid - 1] = ObjectIdGetDatum(RelationGetRelid(rel)); values[Anum_pg_partitioned_table_partstrat - 1] = CharGetDatum(strategy); values[Anum_pg_partitioned_table_partnatts - 1] = Int16GetDatum(partnatts); + values[Anum_pg_partitioned_table_partdefid - 1] = ObjectIdGetDatum(InvalidOid); values[Anum_pg_partitioned_table_partattrs - 1] = PointerGetDatum(partattrs_vec); values[Anum_pg_partitioned_table_partclass - 1] = PointerGetDatum(partopclass_vec); values[Anum_pg_partitioned_table_partcollation - 1] = PointerGetDatum(partcollation_vec); @@ -3223,7 +3248,8 @@ RemovePartitionKeyByRelId(Oid relid) * relispartition to true * * Also, invalidate the parent's relcache, so that the next rebuild will load - * the new partition's info into its partition descriptor. + * the new partition's info into its partition descriptor.  If there is a + * default partition, we must invalidate its relcache entry as well. 
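+ *
+ * (Editorial illustration, hypothetical names: after CREATE TABLE t9
+ * PARTITION OF t FOR VALUES IN (9), rows with a = 9 are no longer
+ * acceptable to the default partition t_def, so any cached copy of its
+ * constraint must be discarded and rebuilt.)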
*/ void StorePartitionBound(Relation rel, Relation parent, PartitionBoundSpec *bound) @@ -3234,6 +3260,7 @@ StorePartitionBound(Relation rel, Relation parent, PartitionBoundSpec *bound) Datum new_val[Natts_pg_class]; bool new_null[Natts_pg_class], new_repl[Natts_pg_class]; + Oid defaultPartOid; /* Update pg_class tuple */ classRel = heap_open(RelationRelationId, RowExclusiveLock); @@ -3271,5 +3298,15 @@ StorePartitionBound(Relation rel, Relation parent, PartitionBoundSpec *bound) heap_freetuple(newtuple); heap_close(classRel, RowExclusiveLock); + /* + * The partition constraint for the default partition depends on the + * partition bounds of every other partition, so we must invalidate the + * relcache entry for that partition every time a partition is added or + * removed. + */ + defaultPartOid = get_default_oid_from_partdesc(RelationGetPartitionDesc(parent)); + if (OidIsValid(defaultPartOid)) + CacheInvalidateRelcacheByRelid(defaultPartOid); + CacheInvalidateRelcache(parent); } diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index c6bd02f77d..7e426ba9c8 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -27,7 +27,9 @@ #include "catalog/pg_inherits.h" #include "catalog/pg_inherits_fn.h" #include "catalog/pg_opclass.h" +#include "catalog/pg_partitioned_table.h" #include "catalog/pg_type.h" +#include "commands/tablecmds.h" #include "executor/executor.h" #include "miscadmin.h" #include "nodes/makefuncs.h" @@ -35,6 +37,7 @@ #include "nodes/parsenodes.h" #include "optimizer/clauses.h" #include "optimizer/planmain.h" +#include "optimizer/prep.h" #include "optimizer/var.h" #include "rewrite/rewriteManip.h" #include "storage/lmgr.h" @@ -80,9 +83,12 @@ typedef struct PartitionBoundInfoData * partitioned table) */ int null_index; /* Index of the null-accepting partition; -1 * if there isn't one */ + int default_index; /* Index of the default partition; -1 if there + * isn't one */ } PartitionBoundInfoData; #define partition_bound_accepts_nulls(bi) ((bi)->null_index != -1) +#define partition_bound_has_default(bi) ((bi)->default_index != -1) /* * When qsort'ing partition bounds after reading from the catalog, each bound @@ -120,8 +126,10 @@ static void get_range_key_properties(PartitionKey key, int keynum, ListCell **partexprs_item, Expr **keyCol, Const **lower_val, Const **upper_val); -static List *get_qual_for_list(PartitionKey key, PartitionBoundSpec *spec); -static List *get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec); +static List *get_qual_for_list(Relation parent, PartitionBoundSpec *spec); +static List *get_qual_for_range(Relation parent, PartitionBoundSpec *spec, + bool for_default); +static List *get_range_nulltest(PartitionKey key); static List *generate_partition_qual(Relation rel); static PartitionRangeBound *make_one_range_bound(PartitionKey key, int index, @@ -162,6 +170,7 @@ RelationBuildPartitionDesc(Relation rel) MemoryContext oldcxt; int ndatums = 0; + int default_index = -1; /* List partitioning specific */ PartitionListValue **all_values = NULL; @@ -213,6 +222,22 @@ RelationBuildPartitionDesc(Relation rel) &isnull); Assert(!isnull); boundspec = (Node *) stringToNode(TextDatumGetCString(datum)); + + /* + * Sanity check: If the PartitionBoundSpec says this is the default + * partition, its OID should correspond to whatever's stored in + * pg_partitioned_table.partdefid; if not, the catalog is corrupt. 
+ */ + if (castNode(PartitionBoundSpec, boundspec)->is_default) + { + Oid partdefid; + + partdefid = get_default_partition_oid(RelationGetRelid(rel)); + if (partdefid != inhrelid) + elog(ERROR, "expected partdefid %u, but got %u", + inhrelid, partdefid); + } + boundspecs = lappend(boundspecs, boundspec); partoids = lappend_oid(partoids, inhrelid); ReleaseSysCache(tuple); @@ -246,6 +271,18 @@ RelationBuildPartitionDesc(Relation rel) if (spec->strategy != PARTITION_STRATEGY_LIST) elog(ERROR, "invalid strategy in partition bound spec"); + /* + * Note the index of the partition bound spec for the default + * partition. There's no datum to add to the list of non-null + * datums for this partition. + */ + if (spec->is_default) + { + default_index = i; + i++; + continue; + } + foreach(c, spec->listdatums) { Const *val = castNode(Const, lfirst(c)); @@ -325,6 +362,17 @@ RelationBuildPartitionDesc(Relation rel) if (spec->strategy != PARTITION_STRATEGY_RANGE) elog(ERROR, "invalid strategy in partition bound spec"); + /* + * Note the index of the partition bound spec for the default + * partition. There's no datum to add to the allbounds array + * for this partition. + */ + if (spec->is_default) + { + default_index = i++; + continue; + } + lower = make_one_range_bound(key, i, spec->lowerdatums, true); upper = make_one_range_bound(key, i, spec->upperdatums, @@ -334,10 +382,11 @@ RelationBuildPartitionDesc(Relation rel) i++; } - Assert(ndatums == nparts * 2); + Assert(ndatums == nparts * 2 || + (default_index != -1 && ndatums == (nparts - 1) * 2)); /* Sort all the bounds in ascending order */ - qsort_arg(all_bounds, 2 * nparts, + qsort_arg(all_bounds, ndatums, sizeof(PartitionRangeBound *), qsort_partition_rbound_cmp, (void *) key); @@ -421,6 +470,7 @@ RelationBuildPartitionDesc(Relation rel) boundinfo = (PartitionBoundInfoData *) palloc0(sizeof(PartitionBoundInfoData)); boundinfo->strategy = key->strategy; + boundinfo->default_index = -1; boundinfo->ndatums = ndatums; boundinfo->null_index = -1; boundinfo->datums = (Datum **) palloc0(ndatums * sizeof(Datum *)); @@ -473,6 +523,21 @@ RelationBuildPartitionDesc(Relation rel) boundinfo->null_index = mapping[null_index]; } + /* Assign mapped index for the default partition. */ + if (default_index != -1) + { + /* + * The default partition accepts any value not + * specified in the lists of other partitions, hence + * it should not get mapped index while assigning + * those for non-null datums. + */ + Assert(default_index >= 0 && + mapping[default_index] == -1); + mapping[default_index] = next_index++; + boundinfo->default_index = mapping[default_index]; + } + /* All partition must now have a valid mapping */ Assert(next_index == nparts); break; @@ -527,6 +592,14 @@ RelationBuildPartitionDesc(Relation rel) boundinfo->indexes[i] = mapping[orig_index]; } } + + /* Assign mapped index for the default partition. 
*/ + if (default_index != -1) + { + Assert(default_index >= 0 && mapping[default_index] == -1); + mapping[default_index] = next_index++; + boundinfo->default_index = mapping[default_index]; + } boundinfo->indexes[i] = -1; break; } @@ -577,6 +650,9 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval, if (b1->null_index != b2->null_index) return false; + if (b1->default_index != b2->default_index) + return false; + for (i = 0; i < b1->ndatums; i++) { int j; @@ -635,10 +711,24 @@ check_new_partition_bound(char *relname, Relation parent, { PartitionKey key = RelationGetPartitionKey(parent); PartitionDesc partdesc = RelationGetPartitionDesc(parent); + PartitionBoundInfo boundinfo = partdesc->boundinfo; ParseState *pstate = make_parsestate(NULL); int with = -1; bool overlap = false; + if (spec->is_default) + { + if (boundinfo == NULL || !partition_bound_has_default(boundinfo)) + return; + + /* Default partition already exists, error out. */ + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("partition \"%s\" conflicts with existing default partition \"%s\"", + relname, get_rel_name(partdesc->oids[boundinfo->default_index])), + parser_errposition(pstate, spec->location))); + } + switch (key->strategy) { case PARTITION_STRATEGY_LIST: @@ -647,13 +737,13 @@ check_new_partition_bound(char *relname, Relation parent, if (partdesc->nparts > 0) { - PartitionBoundInfo boundinfo = partdesc->boundinfo; ListCell *cell; Assert(boundinfo && boundinfo->strategy == PARTITION_STRATEGY_LIST && (boundinfo->ndatums > 0 || - partition_bound_accepts_nulls(boundinfo))); + partition_bound_accepts_nulls(boundinfo) || + partition_bound_has_default(boundinfo))); foreach(cell, spec->listdatums) { @@ -718,8 +808,10 @@ check_new_partition_bound(char *relname, Relation parent, int offset; bool equal; - Assert(boundinfo && boundinfo->ndatums > 0 && - boundinfo->strategy == PARTITION_STRATEGY_RANGE); + Assert(boundinfo && + boundinfo->strategy == PARTITION_STRATEGY_RANGE && + (boundinfo->ndatums > 0 || + partition_bound_has_default(boundinfo))); /* * Test whether the new lower bound (which is treated @@ -796,6 +888,139 @@ check_new_partition_bound(char *relname, Relation parent, } } +/* + * check_default_allows_bound + * + * This function checks if there exists a row in the default partition that + * would properly belong to the new partition being added. If it finds one, + * it throws an error. + */ +void +check_default_allows_bound(Relation parent, Relation default_rel, + PartitionBoundSpec *new_spec) +{ + List *new_part_constraints; + List *def_part_constraints; + List *all_parts; + ListCell *lc; + + new_part_constraints = (new_spec->strategy == PARTITION_STRATEGY_LIST) + ? get_qual_for_list(parent, new_spec) + : get_qual_for_range(parent, new_spec, false); + def_part_constraints = + get_proposed_default_constraint(new_part_constraints); + + /* + * If the existing constraints on the default partition imply that it will + * not contain any row that would belong to the new partition, we can + * avoid scanning the default partition. + */ + if (PartConstraintImpliedByRelConstraint(default_rel, def_part_constraints)) + { + ereport(INFO, + (errmsg("partition constraint for table \"%s\" is implied by existing constraints", + RelationGetRelationName(default_rel)))); + return; + } + + /* + * Scan the default partition and its subpartitions, and check for rows + * that do not satisfy the revised partition constraints. 
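+ *
+ * In SQL terms (editorial sketch, hypothetical names): with a row with
+ * a = 7 present in the default partition t_def,
+ *   CREATE TABLE t7 PARTITION OF t FOR VALUES IN (7);
+ * fails with
+ *   ERROR:  updated partition constraint for default partition "t_def"
+ *           would be violated by some row
+ * and succeeds once the offending row is deleted from t_def.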
+ */ + if (default_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + all_parts = find_all_inheritors(RelationGetRelid(default_rel), + AccessExclusiveLock, NULL); + else + all_parts = list_make1_oid(RelationGetRelid(default_rel)); + + foreach(lc, all_parts) + { + Oid part_relid = lfirst_oid(lc); + Relation part_rel; + Expr *constr; + Expr *partition_constraint; + EState *estate; + HeapTuple tuple; + ExprState *partqualstate = NULL; + Snapshot snapshot; + TupleDesc tupdesc; + ExprContext *econtext; + HeapScanDesc scan; + MemoryContext oldCxt; + TupleTableSlot *tupslot; + + /* Lock already taken above. */ + if (part_relid != RelationGetRelid(default_rel)) + part_rel = heap_open(part_relid, NoLock); + else + part_rel = default_rel; + + /* + * Only RELKIND_RELATION relations (i.e. leaf partitions) need to be + * scanned. + */ + if (part_rel->rd_rel->relkind != RELKIND_RELATION) + { + if (part_rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE) + ereport(WARNING, + (errcode(ERRCODE_CHECK_VIOLATION), + errmsg("skipped scanning foreign table \"%s\" which is a partition of default partition \"%s\"", + RelationGetRelationName(part_rel), + RelationGetRelationName(default_rel)))); + + if (RelationGetRelid(default_rel) != RelationGetRelid(part_rel)) + heap_close(part_rel, NoLock); + + continue; + } + + tupdesc = CreateTupleDescCopy(RelationGetDescr(part_rel)); + constr = linitial(def_part_constraints); + partition_constraint = (Expr *) + map_partition_varattnos((List *) constr, + 1, part_rel, parent, NULL); + estate = CreateExecutorState(); + + /* Build expression execution states for partition check quals */ + partqualstate = ExecPrepareExpr(partition_constraint, estate); + + econtext = GetPerTupleExprContext(estate); + snapshot = RegisterSnapshot(GetLatestSnapshot()); + scan = heap_beginscan(part_rel, snapshot, 0, NULL); + tupslot = MakeSingleTupleTableSlot(tupdesc); + + /* + * Switch to per-tuple memory context and reset it for each tuple + * produced, so we don't leak memory. + */ + oldCxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate)); + + while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL) + { + ExecStoreTuple(tuple, tupslot, InvalidBuffer, false); + econtext->ecxt_scantuple = tupslot; + + if (!ExecCheck(partqualstate, econtext)) + ereport(ERROR, + (errcode(ERRCODE_CHECK_VIOLATION), + errmsg("updated partition constraint for default partition \"%s\" would be violated by some row", + RelationGetRelationName(default_rel)))); + + ResetExprContext(econtext); + CHECK_FOR_INTERRUPTS(); + } + + MemoryContextSwitchTo(oldCxt); + heap_endscan(scan); + UnregisterSnapshot(snapshot); + ExecDropSingleTupleTableSlot(tupslot); + FreeExecutorState(estate); + + if (RelationGetRelid(default_rel) != RelationGetRelid(part_rel)) + heap_close(part_rel, NoLock); /* keep the lock until commit */ + } +} + /* * get_partition_parent * @@ -860,12 +1085,12 @@ get_qual_from_partbound(Relation rel, Relation parent, { case PARTITION_STRATEGY_LIST: Assert(spec->strategy == PARTITION_STRATEGY_LIST); - my_qual = get_qual_for_list(key, spec); + my_qual = get_qual_for_list(parent, spec); break; case PARTITION_STRATEGY_RANGE: Assert(spec->strategy == PARTITION_STRATEGY_RANGE); - my_qual = get_qual_for_range(key, spec); + my_qual = get_qual_for_range(parent, spec, false); break; default: @@ -935,7 +1160,8 @@ RelationGetPartitionQual(Relation rel) * get_partition_qual_relid * * Returns an expression tree describing the passed-in relation's partition - * constraint. + * constraint. 
If there is no partition constraint returns NULL; this can + * happen if the default partition is the only partition. */ Expr * get_partition_qual_relid(Oid relid) @@ -948,7 +1174,10 @@ get_partition_qual_relid(Oid relid) if (rel->rd_rel->relispartition) { and_args = generate_partition_qual(rel); - if (list_length(and_args) > 1) + + if (and_args == NIL) + result = NULL; + else if (list_length(and_args) > 1) result = makeBoolExpr(AND_EXPR, and_args, -1); else result = linitial(and_args); @@ -1263,10 +1492,14 @@ make_partition_op_expr(PartitionKey key, int keynum, * * Returns an implicit-AND list of expressions to use as a list partition's * constraint, given the partition key and bound structures. + * + * The function returns NIL for a default partition when it's the only + * partition since in that case there is no constraint. */ static List * -get_qual_for_list(PartitionKey key, PartitionBoundSpec *spec) +get_qual_for_list(Relation parent, PartitionBoundSpec *spec) { + PartitionKey key = RelationGetPartitionKey(parent); List *result; Expr *keyCol; ArrayExpr *arr; @@ -1293,15 +1526,63 @@ get_qual_for_list(PartitionKey key, PartitionBoundSpec *spec) else keyCol = (Expr *) copyObject(linitial(key->partexprs)); - /* Create list of Consts for the allowed values, excluding any nulls */ - foreach(cell, spec->listdatums) + /* + * For default list partition, collect datums for all the partitions. The + * default partition constraint should check that the partition key is + * equal to none of those. + */ + if (spec->is_default) { - Const *val = castNode(Const, lfirst(cell)); + int i; + int ndatums = 0; + PartitionDesc pdesc = RelationGetPartitionDesc(parent); + PartitionBoundInfo boundinfo = pdesc->boundinfo; - if (val->constisnull) - list_has_null = true; - else - arrelems = lappend(arrelems, copyObject(val)); + if (boundinfo) + { + ndatums = boundinfo->ndatums; + + if (partition_bound_accepts_nulls(boundinfo)) + list_has_null = true; + } + + /* + * If default is the only partition, there need not be any partition + * constraint on it. + */ + if (ndatums == 0 && !list_has_null) + return NIL; + + for (i = 0; i < ndatums; i++) + { + Const *val; + + /* Construct const from datum */ + val = makeConst(key->parttypid[0], + key->parttypmod[0], + key->parttypcoll[0], + key->parttyplen[0], + *boundinfo->datums[i], + false, /* isnull */ + key->parttypbyval[0]); + + arrelems = lappend(arrelems, val); + } + } + else + { + /* + * Create list of Consts for the allowed values, excluding any nulls. + */ + foreach(cell, spec->listdatums) + { + Const *val = castNode(Const, lfirst(cell)); + + if (val->constisnull) + list_has_null = true; + else + arrelems = lappend(arrelems, copyObject(val)); + } } if (arrelems) @@ -1365,6 +1646,18 @@ get_qual_for_list(PartitionKey key, PartitionBoundSpec *spec) result = list_make1(nulltest); } + /* + * Note that, in general, applying NOT to a constraint expression doesn't + * necessarily invert the set of rows it accepts, because NOT (NULL) is + * NULL. However, the partition constraints we construct here never + * evaluate to NULL, so applying NOT works as intended. 
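+ *
+ * For instance, with sibling partitions accepting 1 and 2 and no
+ * null-accepting partition, the default partition's constraint can be
+ * inspected with pg_get_partition_constraintdef() and deparses roughly
+ * as (editorial sketch, hypothetical table name):
+ *   SELECT pg_get_partition_constraintdef('t_def'::regclass);
+ *   -- NOT ((a IS NOT NULL) AND (a = ANY (ARRAY[1, 2])))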
+ */ + if (spec->is_default) + { + result = list_make1(make_ands_explicit(result)); + result = list_make1(makeBoolExpr(NOT_EXPR, result, -1)); + } + return result; } @@ -1421,6 +1714,53 @@ get_range_key_properties(PartitionKey key, int keynum, *upper_val = NULL; } + /* + * get_range_nulltest + * + * A non-default range partition table does not currently allow partition + * keys to be null, so emit an IS NOT NULL expression for each key column. + */ +static List * +get_range_nulltest(PartitionKey key) +{ + List *result = NIL; + NullTest *nulltest; + ListCell *partexprs_item; + int i; + + partexprs_item = list_head(key->partexprs); + for (i = 0; i < key->partnatts; i++) + { + Expr *keyCol; + + if (key->partattrs[i] != 0) + { + keyCol = (Expr *) makeVar(1, + key->partattrs[i], + key->parttypid[i], + key->parttypmod[i], + key->parttypcoll[i], + 0); + } + else + { + if (partexprs_item == NULL) + elog(ERROR, "wrong number of partition key expressions"); + keyCol = copyObject(lfirst(partexprs_item)); + partexprs_item = lnext(partexprs_item); + } + + nulltest = makeNode(NullTest); + nulltest->arg = keyCol; + nulltest->nulltesttype = IS_NOT_NULL; + nulltest->argisrow = false; + nulltest->location = -1; + result = lappend(result, nulltest); + } + + return result; +} + /* * get_qual_for_range * @@ -1459,11 +1799,15 @@ get_range_key_properties(PartitionKey key, int keynum, * In most common cases with only one partition column, say a, the following * expression tree will be generated: a IS NOT NULL AND a >= al AND a < au * - * If we end up with an empty result list, we return a single-member list - * containing a constant TRUE, because callers expect a non-empty list. + * For default partition, it returns the negation of the constraints of all + * the other partitions. + * + * External callers should pass for_default as false; we set it to true only + * when recursing. */ static List * -get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec) +get_qual_for_range(Relation parent, PartitionBoundSpec *spec, + bool for_default) { List *result = NIL; ListCell *cell1, @@ -1474,10 +1818,10 @@ get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec) j; PartitionRangeDatum *ldatum, *udatum; + PartitionKey key = RelationGetPartitionKey(parent); Expr *keyCol; Const *lower_val, *upper_val; - NullTest *nulltest; List *lower_or_arms, *upper_or_arms; int num_or_arms, @@ -1487,44 +1831,77 @@ get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec) bool need_next_lower_arm, need_next_upper_arm; - lower_or_start_datum = list_head(spec->lowerdatums); - upper_or_start_datum = list_head(spec->upperdatums); - num_or_arms = key->partnatts; - - /* - * A range-partitioned table does not currently allow partition keys to be - * null, so emit an IS NOT NULL expression for each key column. 
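By analogy with the list case, the default branch added just below builds the range default partition's constraint as the negation of the OR of every sibling's bound constraint; for a single sibling declared FOR VALUES FROM (1) TO (10) it comes out roughly as (editorial sketch; the per-arm IS NOT NULL tests are skipped in the recursive calls, as the code notes):

NOT ((a >= 1) AND (a < 10))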
- */ - partexprs_item = list_head(key->partexprs); - for (i = 0; i < key->partnatts; i++) + if (spec->is_default) { - Expr *keyCol; + List *or_expr_args = NIL; + PartitionDesc pdesc = RelationGetPartitionDesc(parent); + Oid *inhoids = pdesc->oids; + int nparts = pdesc->nparts, + i; - if (key->partattrs[i] != 0) + for (i = 0; i < nparts; i++) { - keyCol = (Expr *) makeVar(1, - key->partattrs[i], - key->parttypid[i], - key->parttypmod[i], - key->parttypcoll[i], - 0); + Oid inhrelid = inhoids[i]; + HeapTuple tuple; + Datum datum; + bool isnull; + PartitionBoundSpec *bspec; + + tuple = SearchSysCache1(RELOID, inhrelid); + if (!HeapTupleIsValid(tuple)) + elog(ERROR, "cache lookup failed for relation %u", inhrelid); + + datum = SysCacheGetAttr(RELOID, tuple, + Anum_pg_class_relpartbound, + &isnull); + + Assert(!isnull); + bspec = (PartitionBoundSpec *) + stringToNode(TextDatumGetCString(datum)); + if (!IsA(bspec, PartitionBoundSpec)) + elog(ERROR, "expected PartitionBoundSpec"); + + if (!bspec->is_default) + { + List *part_qual; + + part_qual = get_qual_for_range(parent, bspec, true); + + /* + * AND the constraints of the partition and add to + * or_expr_args + */ + or_expr_args = lappend(or_expr_args, list_length(part_qual) > 1 + ? makeBoolExpr(AND_EXPR, part_qual, -1) + : linitial(part_qual)); + } + ReleaseSysCache(tuple); } - else + + if (or_expr_args != NIL) { - if (partexprs_item == NULL) - elog(ERROR, "wrong number of partition key expressions"); - keyCol = copyObject(lfirst(partexprs_item)); - partexprs_item = lnext(partexprs_item); + /* OR all the non-default partition constraints; then negate it */ + result = lappend(result, + list_length(or_expr_args) > 1 + ? makeBoolExpr(OR_EXPR, or_expr_args, -1) + : linitial(or_expr_args)); + result = list_make1(makeBoolExpr(NOT_EXPR, result, -1)); } - nulltest = makeNode(NullTest); - nulltest->arg = keyCol; - nulltest->nulltesttype = IS_NOT_NULL; - nulltest->argisrow = false; - nulltest->location = -1; - result = lappend(result, nulltest); + return result; } + lower_or_start_datum = list_head(spec->lowerdatums); + upper_or_start_datum = list_head(spec->upperdatums); + num_or_arms = key->partnatts; + + /* + * If it is the recursive call for default, we skip the get_range_nulltest + * to avoid accumulating the NullTest on the same keys for each partition. + */ + if (!for_default) + result = get_range_nulltest(key); + /* * Iterate over the key columns and check if the corresponding lower and * upper datums are equal using the btree equality operator for the @@ -1746,9 +2123,16 @@ get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec) ? makeBoolExpr(OR_EXPR, upper_or_arms, -1) : linitial(upper_or_arms)); - /* As noted above, caller expects the list to be non-empty. */ + /* + * As noted above, for non-default, we return list with constant TRUE. If + * the result is NIL during the recursive call for default, it implies + * this is the only other partition which can hold every value of the key + * except NULL. Hence we return the NullTest result skipped earlier. + */ if (result == NIL) - result = list_make1(makeBoolConst(true, false)); + result = for_default + ? get_range_nulltest(key) + : list_make1(makeBoolConst(true, false)); return result; } @@ -1756,7 +2140,8 @@ get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec) /* * generate_partition_qual * - * Generate partition predicate from rel's partition bound expression + * Generate partition predicate from rel's partition bound expression. 
The + * function returns a NIL list if there is no predicate. * * Result expression tree is stored CacheMemoryContext to ensure it survives * as long as the relcache entry. But we should be running in a less long-lived @@ -1932,7 +2317,7 @@ get_partition_for_tuple(PartitionDispatch *pd, PartitionDesc partdesc = parent->partdesc; TupleTableSlot *myslot = parent->tupslot; TupleConversionMap *map = parent->tupmap; - int cur_index = -1; + int cur_index = -1; if (myslot != NULL && map != NULL) { @@ -1991,14 +2376,25 @@ get_partition_for_tuple(PartitionDispatch *pd, case PARTITION_STRATEGY_RANGE: { - bool equal = false; + bool equal = false, + range_partkey_has_null = false; int cur_offset; int i; - /* No range includes NULL. */ + /* + * No range includes NULL, so this will be accepted by the + * default partition if there is one, and otherwise + * rejected. + */ for (i = 0; i < key->partnatts; i++) { - if (isnull[i]) + if (isnull[i] && + partition_bound_has_default(partdesc->boundinfo)) + { + range_partkey_has_null = true; + break; + } + else if (isnull[i]) { *failed_at = parent; *failed_slot = slot; @@ -2007,6 +2403,13 @@ get_partition_for_tuple(PartitionDispatch *pd, } } + /* + * No need to search for partition, as the null key will + * be routed to the default partition. + */ + if (range_partkey_has_null) + break; + cur_offset = partition_bound_bsearch(key, partdesc->boundinfo, values, @@ -2014,9 +2417,9 @@ get_partition_for_tuple(PartitionDispatch *pd, &equal); /* - * The offset returned is such that the bound at cur_offset - * is less than or equal to the tuple value, so the bound - * at offset+1 is the upper bound. + * The offset returned is such that the bound at + * cur_offset is less than or equal to the tuple value, so + * the bound at offset+1 is the upper bound. */ cur_index = partdesc->boundinfo->indexes[cur_offset + 1]; } @@ -2029,8 +2432,16 @@ get_partition_for_tuple(PartitionDispatch *pd, /* * cur_index < 0 means we failed to find a partition of this parent. - * cur_index >= 0 means we either found the leaf partition, or the - * next parent to find a partition of. + * Use the default partition, if there is one. + */ + if (cur_index < 0) + cur_index = partdesc->boundinfo->default_index; + + /* + * If cur_index is still less than 0 at this point, there's no + * partition for this tuple. Otherwise, we either found the leaf + * partition, or a child partitioned table through which we have to + * route the tuple. */ if (cur_index < 0) { @@ -2084,6 +2495,8 @@ make_one_range_bound(PartitionKey key, int index, List *datums, bool lower) ListCell *lc; int i; + Assert(datums != NIL); + bound = (PartitionRangeBound *) palloc0(sizeof(PartitionRangeBound)); bound->index = index; bound->datums = (Datum *) palloc0(key->partnatts * sizeof(Datum)); @@ -2320,3 +2733,104 @@ partition_bound_bsearch(PartitionKey key, PartitionBoundInfo boundinfo, return lo; } + +/* + * get_default_oid_from_partdesc + * + * Given a partition descriptor, return the OID of the default partition, if + * one exists; else, return InvalidOid. + */ +Oid +get_default_oid_from_partdesc(PartitionDesc partdesc) +{ + if (partdesc && partdesc->boundinfo && + partition_bound_has_default(partdesc->boundinfo)) + return partdesc->oids[partdesc->boundinfo->default_index]; + + return InvalidOid; +} + +/* + * get_default_partition_oid + * + * Given a relation OID, return the OID of the default partition, if one + * exists. Use get_default_oid_from_partdesc where possible, for + * efficiency. 
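+ *
+ * (Editorial sketch: since partdefid is plain catalog data, the default
+ * partition can also be looked up from SQL, e.g.
+ *   SELECT partdefid::regclass FROM pg_partitioned_table
+ *   WHERE partrelid = 't'::regclass;
+ * for the hypothetical parent t used in earlier notes.)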
+ */ +Oid +get_default_partition_oid(Oid parentId) +{ + HeapTuple tuple; + Oid defaultPartId = InvalidOid; + + tuple = SearchSysCache1(PARTRELID, ObjectIdGetDatum(parentId)); + + if (HeapTupleIsValid(tuple)) + { + Form_pg_partitioned_table part_table_form; + + part_table_form = (Form_pg_partitioned_table) GETSTRUCT(tuple); + defaultPartId = part_table_form->partdefid; + ReleaseSysCache(tuple); + } + + return defaultPartId; +} + +/* + * update_default_partition_oid + * + * Update pg_partitioned_table.partdefid with a new default partition OID. + */ +void +update_default_partition_oid(Oid parentId, Oid defaultPartId) +{ + HeapTuple tuple; + Relation pg_partitioned_table; + Form_pg_partitioned_table part_table_form; + + pg_partitioned_table = heap_open(PartitionedRelationId, RowExclusiveLock); + + tuple = SearchSysCacheCopy1(PARTRELID, ObjectIdGetDatum(parentId)); + + if (!HeapTupleIsValid(tuple)) + elog(ERROR, "cache lookup failed for partition key of relation %u", + parentId); + + part_table_form = (Form_pg_partitioned_table) GETSTRUCT(tuple); + part_table_form->partdefid = defaultPartId; + CatalogTupleUpdate(pg_partitioned_table, &tuple->t_self, tuple); + + heap_freetuple(tuple); + heap_close(pg_partitioned_table, RowExclusiveLock); +} + +/* + * get_proposed_default_constraint + * + * This function returns the negation of new_part_constraints, which + * would be an integral part of the default partition constraints after + * addition of the partition to which the new_part_constraints belongs. + */ +List * +get_proposed_default_constraint(List *new_part_constraints) +{ + Expr *defPartConstraint; + + defPartConstraint = make_ands_explicit(new_part_constraints); + + /* + * Derive the partition constraints of default partition by negating the + * given partition constraints. The partition constraint never evaluates + * to NULL, so negating it like this is safe.
+ */ + defPartConstraint = makeBoolExpr(NOT_EXPR, + list_make1(defPartConstraint), + -1); + defPartConstraint = + (Expr *) eval_const_expressions(NULL, + (Node *) defPartConstraint); + defPartConstraint = canonicalize_qual(defPartConstraint); + + return list_make1(defPartConstraint); +} diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index c8fc9cb7fe..d2167eda23 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -168,6 +168,8 @@ typedef struct AlteredTableInfo bool chgPersistence; /* T if SET LOGGED/UNLOGGED is used */ char newrelpersistence; /* if above is true */ Expr *partition_constraint; /* for attach partition validation */ + /* true, if validating default due to some other attach/detach */ + bool validate_default; /* Objects to rebuild after completing ALTER TYPE operations */ List *changedConstraintOids; /* OIDs of constraints to rebuild */ List *changedConstraintDefs; /* string definitions of same */ @@ -473,11 +475,10 @@ static void CreateInheritance(Relation child_rel, Relation parent_rel); static void RemoveInheritance(Relation child_rel, Relation parent_rel); static ObjectAddress ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd); -static bool PartConstraintImpliedByRelConstraint(Relation scanrel, - List *partConstraint); static void ValidatePartitionConstraints(List **wqueue, Relation scanrel, List *scanrel_children, - List *partConstraint); + List *partConstraint, + bool validate_default); static ObjectAddress ATExecDetachPartition(Relation rel, RangeVar *name); @@ -774,8 +775,10 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, { PartitionBoundSpec *bound; ParseState *pstate; - Oid parentId = linitial_oid(inheritOids); - Relation parent; + Oid parentId = linitial_oid(inheritOids), + defaultPartOid; + Relation parent, + defaultRel = NULL; /* Already have strong enough lock on the parent */ parent = heap_open(parentId, NoLock); @@ -790,6 +793,30 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, errmsg("\"%s\" is not partitioned", RelationGetRelationName(parent)))); + /* + * The partition constraint of the default partition depends on the + * partition bounds of every other partition. It is possible that + * another backend might be about to execute a query on the default + * partition table, and that the query relies on previously cached + * default partition constraints. We must therefore take a table lock + * strong enough to prevent all queries on the default partition from + * proceeding until we commit and send out a shared-cache-inval notice + * that will make them update their index lists. + * + * Order of locking: The relation being added won't be visible to + * other backends until it is committed, hence here in + * DefineRelation() the order of locking the default partition and the + * relation being added does not matter. But at all other places we + * need to lock the default relation before we lock the relation being + * added or removed i.e. we should take the lock in same order at all + * the places such that lock parent, lock default partition and then + * lock the partition so as to avoid a deadlock. 
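+ *
+ * A two-session illustration (editorial sketch, hypothetical names):
+ *   S1: BEGIN;
+ *       CREATE TABLE t9 PARTITION OF t FOR VALUES IN (9);
+ *       -- takes AccessExclusiveLock on the default partition t_def
+ *   S2: SELECT * FROM t_def;
+ *       -- blocks until S1 commits and the invalidation is processed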
+ */ + defaultPartOid = + get_default_oid_from_partdesc(RelationGetPartitionDesc(parent)); + if (OidIsValid(defaultPartOid)) + defaultRel = heap_open(defaultPartOid, AccessExclusiveLock); + /* Transform the bound values */ pstate = make_parsestate(NULL); pstate->p_sourcetext = queryString; @@ -798,14 +825,31 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, /* * Check first that the new partition's bound is valid and does not - * overlap with any of existing partitions of the parent - note that - * it does not return on error. + * overlap with any of existing partitions of the parent. */ check_new_partition_bound(relname, parent, bound); + /* + * If the default partition exists, its partition constraints will + * change after the addition of this new partition such that it won't + * allow any row that qualifies for this new partition. So, check that + * the existing data in the default partition satisfies the constraint + * as it will exist after adding this partition. + */ + if (OidIsValid(defaultPartOid)) + { + check_default_allows_bound(parent, defaultRel, bound); + /* Keep the lock until commit. */ + heap_close(defaultRel, NoLock); + } + /* Update the pg_class entry. */ StorePartitionBound(rel, parent, bound); + /* Update the default partition oid */ + if (bound->is_default) + update_default_partition_oid(RelationGetRelid(parent), relationId); + heap_close(parent, NoLock); /* @@ -4595,9 +4639,16 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode) } if (partqualstate && !ExecCheck(partqualstate, econtext)) - ereport(ERROR, - (errcode(ERRCODE_CHECK_VIOLATION), - errmsg("partition constraint is violated by some row"))); + { + if (tab->validate_default) + ereport(ERROR, + (errcode(ERRCODE_CHECK_VIOLATION), + errmsg("updated partition constraint for default partition would be violated by some row"))); + else + ereport(ERROR, + (errcode(ERRCODE_CHECK_VIOLATION), + errmsg("partition constraint is violated by some row"))); + } /* Write the tuple out to the new relation */ if (newrel) @@ -13482,7 +13533,7 @@ ComputePartitionAttrs(Relation rel, List *partParams, AttrNumber *partattrs, * Existing constraints include its check constraints and column-level * NOT NULL constraints and partConstraint describes the partition constraint. */ -static bool +bool PartConstraintImpliedByRelConstraint(Relation scanrel, List *partConstraint) { @@ -13569,7 +13620,8 @@ PartConstraintImpliedByRelConstraint(Relation scanrel, static void ValidatePartitionConstraints(List **wqueue, Relation scanrel, List *scanrel_children, - List *partConstraint) + List *partConstraint, + bool validate_default) { bool found_whole_row; ListCell *lc; @@ -13631,6 +13683,7 @@ ValidatePartitionConstraints(List **wqueue, Relation scanrel, /* Grab a work queue entry. */ tab = ATGetQueueEntry(wqueue, part_rel); tab->partition_constraint = (Expr *) linitial(my_partconstr); + tab->validate_default = validate_default; /* keep our lock until commit */ if (part_rel != scanrel) @@ -13658,6 +13711,17 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) ObjectAddress address; const char *trigger_name; bool found_whole_row; + Oid defaultPartOid; + List *partBoundConstraint; + + /* + * We must lock the default partition, because attaching a new partition + * will change its partition constraint.
+ */ + defaultPartOid = + get_default_oid_from_partdesc(RelationGetPartitionDesc(rel)); + if (OidIsValid(defaultPartOid)) + LockRelationOid(defaultPartOid, AccessExclusiveLock); attachrel = heap_openrv(cmd->name, AccessExclusiveLock); @@ -13814,6 +13878,11 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) /* OK to create inheritance. Rest of the checks performed there */ CreateInheritance(attachrel, rel); + /* Update the default partition oid */ + if (cmd->bound->is_default) + update_default_partition_oid(RelationGetRelid(rel), + RelationGetRelid(attachrel)); + /* * Check that the new partition's bound is valid and does not overlap any * of existing partitions of the parent - note that it does not return on @@ -13830,27 +13899,61 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) * If the parent itself is a partition, make sure to include its * constraint as well. */ - partConstraint = list_concat(get_qual_from_partbound(attachrel, rel, - cmd->bound), + partBoundConstraint = get_qual_from_partbound(attachrel, rel, cmd->bound); + partConstraint = list_concat(partBoundConstraint, RelationGetPartitionQual(rel)); - partConstraint = (List *) eval_const_expressions(NULL, - (Node *) partConstraint); - partConstraint = (List *) canonicalize_qual((Expr *) partConstraint); - partConstraint = list_make1(make_ands_explicit(partConstraint)); + + /* Skip validation if there are no constraints to validate. */ + if (partConstraint) + { + partConstraint = + (List *) eval_const_expressions(NULL, + (Node *) partConstraint); + partConstraint = (List *) canonicalize_qual((Expr *) partConstraint); + partConstraint = list_make1(make_ands_explicit(partConstraint)); + + /* + * Adjust the generated constraint to match this partition's attribute + * numbers. + */ + partConstraint = map_partition_varattnos(partConstraint, 1, attachrel, + rel, &found_whole_row); + /* There can never be a whole-row reference here */ + if (found_whole_row) + elog(ERROR, + "unexpected whole-row reference found in partition key"); + + /* Validate partition constraints against the table being attached. */ + ValidatePartitionConstraints(wqueue, attachrel, attachrel_children, + partConstraint, false); + } /* - * Adjust the generated constraint to match this partition's attribute - * numbers. + * Check whether default partition has a row that would fit the partition + * being attached. */ - partConstraint = map_partition_varattnos(partConstraint, 1, attachrel, - rel, &found_whole_row); - /* There can never be a whole-row reference here */ - if (found_whole_row) - elog(ERROR, "unexpected whole-row reference found in partition key"); + defaultPartOid = + get_default_oid_from_partdesc(RelationGetPartitionDesc(rel)); + if (OidIsValid(defaultPartOid)) + { + Relation defaultrel; + List *defaultrel_children; + List *defPartConstraint; + + /* We already have taken a lock on default partition. */ + defaultrel = heap_open(defaultPartOid, NoLock); + defPartConstraint = + get_proposed_default_constraint(partBoundConstraint); + defaultrel_children = + find_all_inheritors(defaultPartOid, + AccessExclusiveLock, NULL); + ValidatePartitionConstraints(wqueue, defaultrel, + defaultrel_children, + defPartConstraint, true); - /* Validate partition constraints against the table being attached. */ - ValidatePartitionConstraints(wqueue, attachrel, attachrel_children, - partConstraint); + /* keep our lock until commit. 
*/ + heap_close(defaultrel, NoLock); + } ObjectAddressSet(address, RelationRelationId, RelationGetRelid(attachrel)); @@ -13877,6 +13980,16 @@ ATExecDetachPartition(Relation rel, RangeVar *name) new_null[Natts_pg_class], new_repl[Natts_pg_class]; ObjectAddress address; + Oid defaultPartOid; + + /* + * We must lock the default partition, because detaching this partition + * will change its partition constraint. + */ + defaultPartOid = + get_default_oid_from_partdesc(RelationGetPartitionDesc(rel)); + if (OidIsValid(defaultPartOid)) + LockRelationOid(defaultPartOid, AccessExclusiveLock); partRel = heap_openrv(name, AccessShareLock); @@ -13908,6 +14021,24 @@ ATExecDetachPartition(Relation rel, RangeVar *name) heap_freetuple(newtuple); heap_close(classRel, RowExclusiveLock); + if (OidIsValid(defaultPartOid)) + { + /* + * If the relation being detached is the default partition itself, + * invalidate its entry in pg_partitioned_table. + */ + if (RelationGetRelid(partRel) == defaultPartOid) + update_default_partition_oid(RelationGetRelid(rel), InvalidOid); + else + { + /* + * We must invalidate default partition's relcache, for the same + * reasons explained in StorePartitionBound(). + */ + CacheInvalidateRelcacheByRelid(defaultPartOid); + } + } + /* * Invalidate the parent's relcache so that the partition is no longer * included in its partition descriptor. diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 9bae2647fd..f1bed14e2b 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -4450,6 +4450,7 @@ _copyPartitionBoundSpec(const PartitionBoundSpec *from) PartitionBoundSpec *newnode = makeNode(PartitionBoundSpec); COPY_SCALAR_FIELD(strategy); + COPY_SCALAR_FIELD(is_default); COPY_NODE_FIELD(listdatums); COPY_NODE_FIELD(lowerdatums); COPY_NODE_FIELD(upperdatums); diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 11731da80a..8b56b9146a 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2839,6 +2839,7 @@ static bool _equalPartitionBoundSpec(const PartitionBoundSpec *a, const PartitionBoundSpec *b) { COMPARE_SCALAR_FIELD(strategy); + COMPARE_SCALAR_FIELD(is_default); COMPARE_NODE_FIELD(listdatums); COMPARE_NODE_FIELD(lowerdatums); COMPARE_NODE_FIELD(upperdatums); diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 9ee3e23761..b83d919e40 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -3573,6 +3573,7 @@ _outPartitionBoundSpec(StringInfo str, const PartitionBoundSpec *node) WRITE_NODE_TYPE("PARTITIONBOUNDSPEC"); WRITE_CHAR_FIELD(strategy); + WRITE_BOOL_FIELD(is_default); WRITE_NODE_FIELD(listdatums); WRITE_NODE_FIELD(lowerdatums); WRITE_NODE_FIELD(upperdatums); diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 67b9e19d29..fbf8330735 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -2390,6 +2390,7 @@ _readPartitionBoundSpec(void) READ_LOCALS(PartitionBoundSpec); READ_CHAR_FIELD(strategy); + READ_BOOL_FIELD(is_default); READ_NODE_FIELD(listdatums); READ_NODE_FIELD(lowerdatums); READ_NODE_FIELD(upperdatums); diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index 5eb398118e..c303818c9b 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -575,7 +575,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); %type part_strategy %type part_elem %type part_params -%type ForValues
%type partbound_datum PartitionRangeDatum %type partbound_datum_list range_datum_list @@ -1980,7 +1980,7 @@ alter_table_cmds: partition_cmd: /* ALTER TABLE ATTACH PARTITION FOR VALUES */ - ATTACH PARTITION qualified_name ForValues + ATTACH PARTITION qualified_name PartitionBoundSpec { AlterTableCmd *n = makeNode(AlterTableCmd); PartitionCmd *cmd = makeNode(PartitionCmd); @@ -2635,13 +2635,14 @@ alter_identity_column_option: } ; -ForValues: +PartitionBoundSpec: /* a LIST partition */ FOR VALUES IN_P '(' partbound_datum_list ')' { PartitionBoundSpec *n = makeNode(PartitionBoundSpec); n->strategy = PARTITION_STRATEGY_LIST; + n->is_default = false; n->listdatums = $5; n->location = @3; @@ -2654,10 +2655,22 @@ ForValues: PartitionBoundSpec *n = makeNode(PartitionBoundSpec); n->strategy = PARTITION_STRATEGY_RANGE; + n->is_default = false; n->lowerdatums = $5; n->upperdatums = $9; n->location = @3; + $$ = n; + } + + /* a DEFAULT partition */ + | DEFAULT + { + PartitionBoundSpec *n = makeNode(PartitionBoundSpec); + + n->is_default = true; + n->location = @1; + $$ = n; } ; @@ -3130,7 +3143,7 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')' $$ = (Node *)n; } | CREATE OptTemp TABLE qualified_name PARTITION OF qualified_name - OptTypedTableElementList ForValues OptPartitionSpec OptWith + OptTypedTableElementList PartitionBoundSpec OptPartitionSpec OptWith OnCommitOption OptTableSpace { CreateStmt *n = makeNode(CreateStmt); @@ -3149,7 +3162,7 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')' $$ = (Node *)n; } | CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name PARTITION OF - qualified_name OptTypedTableElementList ForValues OptPartitionSpec + qualified_name OptTypedTableElementList PartitionBoundSpec OptPartitionSpec OptWith OnCommitOption OptTableSpace { CreateStmt *n = makeNode(CreateStmt); @@ -4864,7 +4877,7 @@ CreateForeignTableStmt: $$ = (Node *) n; } | CREATE FOREIGN TABLE qualified_name - PARTITION OF qualified_name OptTypedTableElementList ForValues + PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec SERVER name create_generic_options { CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt); @@ -4885,7 +4898,7 @@ CreateForeignTableStmt: $$ = (Node *) n; } | CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name - PARTITION OF qualified_name OptTypedTableElementList ForValues + PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec SERVER name create_generic_options { CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt); diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 20586797cc..655da02c10 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -3307,6 +3307,18 @@ transformPartitionBound(ParseState *pstate, Relation parent, /* Avoid scribbling on input */ result_spec = copyObject(spec); + if (spec->is_default) + { + /* + * In case of the default partition, parser had no way to identify the + * partition strategy. Assign the parent's strategy to the default + * partition bound spec. 
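+ *
+ * (Editorial sketch: the stored bound deparses back as DEFAULT, e.g.
+ *   SELECT pg_get_expr(relpartbound, oid) FROM pg_class
+ *   WHERE relname = 't_def';
+ * returns DEFAULT for a hypothetical default partition t_def.)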
+ */ + result_spec->strategy = strategy; + + return result_spec; + } + if (strategy == PARTITION_STRATEGY_LIST) { ListCell *cell; diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index f9ea7ed771..0ea5078218 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -1750,7 +1750,7 @@ pg_get_partition_constraintdef(PG_FUNCTION_ARGS) constr_expr = get_partition_qual_relid(relationId); - /* Quick exit if not a partition */ + /* Quick exit if no partition constraint */ if (constr_expr == NULL) PG_RETURN_NULL(); @@ -8699,6 +8699,12 @@ get_rule_expr(Node *node, deparse_context *context, ListCell *cell; char *sep; + if (spec->is_default) + { + appendStringInfoString(buf, "DEFAULT"); + break; + } + switch (spec->strategy) { case PARTITION_STRATEGY_LIST: diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index 6fb9bdd063..d22ec68431 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -1893,19 +1893,20 @@ describeOneTableDetails(const char *schemaname, parent_name = PQgetvalue(result, 0, 0); partdef = PQgetvalue(result, 0, 1); - if (PQnfields(result) == 3) + if (PQnfields(result) == 3 && !PQgetisnull(result, 0, 2)) partconstraintdef = PQgetvalue(result, 0, 2); printfPQExpBuffer(&tmpbuf, _("Partition of: %s %s"), parent_name, partdef); printTableAddFooter(&cont, tmpbuf.data); - if (partconstraintdef) - { + /* If there isn't any constraint, show that explicitly */ + if (partconstraintdef == NULL || partconstraintdef[0] == '\0') + printfPQExpBuffer(&tmpbuf, _("No partition constraint")); + else printfPQExpBuffer(&tmpbuf, _("Partition constraint: %s"), partconstraintdef); - printTableAddFooter(&cont, tmpbuf.data); - } + printTableAddFooter(&cont, tmpbuf.data); PQclear(result); } diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index 2ab8809fa5..a09c49d6cf 100644 --- a/src/bin/psql/tab-complete.c +++ b/src/bin/psql/tab-complete.c @@ -2053,7 +2053,7 @@ psql_completion(const char *text, int start, int end) COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, ""); /* Limited completion support for partition bound specification */ else if (TailMatches3("ATTACH", "PARTITION", MatchAny)) - COMPLETE_WITH_CONST("FOR VALUES"); + COMPLETE_WITH_LIST2("FOR VALUES", "DEFAULT"); else if (TailMatches2("FOR", "VALUES")) COMPLETE_WITH_LIST2("FROM (", "IN ("); @@ -2492,7 +2492,7 @@ psql_completion(const char *text, int start, int end) COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_partitioned_tables, ""); /* Limited completion support for partition bound specification */ else if (TailMatches3("PARTITION", "OF", MatchAny)) - COMPLETE_WITH_CONST("FOR VALUES"); + COMPLETE_WITH_LIST2("FOR VALUES", "DEFAULT"); /* CREATE TABLESPACE */ else if (Matches3("CREATE", "TABLESPACE", MatchAny)) diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 6525da970d..56642671b6 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201708311 +#define CATALOG_VERSION_NO 201709081 #endif diff --git a/src/include/catalog/partition.h b/src/include/catalog/partition.h index 2283c675e9..454a940a23 100644 --- a/src/include/catalog/partition.h +++ b/src/include/catalog/partition.h @@ -99,4 +99,11 @@ extern int get_partition_for_tuple(PartitionDispatch *pd, EState *estate, PartitionDispatchData **failed_at, TupleTableSlot **failed_slot); +extern Oid get_default_oid_from_partdesc(PartitionDesc partdesc); +extern 
Oid get_default_partition_oid(Oid parentId); +extern void update_default_partition_oid(Oid parentId, Oid defaultPartId); +extern void check_default_allows_bound(Relation parent, Relation defaultRel, + PartitionBoundSpec *new_spec); +extern List *get_proposed_default_constraint(List *new_part_constaints); + #endif /* PARTITION_H */ diff --git a/src/include/catalog/pg_partitioned_table.h b/src/include/catalog/pg_partitioned_table.h index 38d64d6511..525e541f93 100644 --- a/src/include/catalog/pg_partitioned_table.h +++ b/src/include/catalog/pg_partitioned_table.h @@ -32,6 +32,8 @@ CATALOG(pg_partitioned_table,3350) BKI_WITHOUT_OIDS Oid partrelid; /* partitioned table oid */ char partstrat; /* partitioning strategy */ int16 partnatts; /* number of partition key columns */ + Oid partdefid; /* default partition oid; InvalidOid if there + * isn't one */ /* * variable-length fields start here, but we allow direct access to @@ -62,13 +64,14 @@ typedef FormData_pg_partitioned_table *Form_pg_partitioned_table; * compiler constants for pg_partitioned_table * ---------------- */ -#define Natts_pg_partitioned_table 7 +#define Natts_pg_partitioned_table 8 #define Anum_pg_partitioned_table_partrelid 1 #define Anum_pg_partitioned_table_partstrat 2 #define Anum_pg_partitioned_table_partnatts 3 -#define Anum_pg_partitioned_table_partattrs 4 -#define Anum_pg_partitioned_table_partclass 5 -#define Anum_pg_partitioned_table_partcollation 6 -#define Anum_pg_partitioned_table_partexprs 7 +#define Anum_pg_partitioned_table_partdefid 4 +#define Anum_pg_partitioned_table_partattrs 5 +#define Anum_pg_partitioned_table_partclass 6 +#define Anum_pg_partitioned_table_partcollation 7 +#define Anum_pg_partitioned_table_partexprs 8 #endif /* PG_PARTITIONED_TABLE_H */ diff --git a/src/include/commands/tablecmds.h b/src/include/commands/tablecmds.h index abd31b68d4..da3ff5dbee 100644 --- a/src/include/commands/tablecmds.h +++ b/src/include/commands/tablecmds.h @@ -18,6 +18,7 @@ #include "catalog/dependency.h" #include "catalog/objectaddress.h" #include "nodes/parsenodes.h" +#include "catalog/partition.h" #include "storage/lock.h" #include "utils/relcache.h" @@ -87,4 +88,7 @@ extern void RangeVarCallbackOwnsTable(const RangeVar *relation, extern void RangeVarCallbackOwnsRelation(const RangeVar *relation, Oid relId, Oid oldRelId, void *noCatalogs); +extern bool PartConstraintImpliedByRelConstraint(Relation scanrel, + List *partConstraint); + #endif /* TABLECMDS_H */ diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 3171815320..f3e4c69753 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -797,6 +797,7 @@ typedef struct PartitionBoundSpec NodeTag type; char strategy; /* see PARTITION_STRATEGY codes above */ + bool is_default; /* is it a default partition bound? 
*/ /* Partitioning info for LIST strategy: */ List *listdatums; /* List of Consts (or A_Consts in raw tree) */ diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 0f36423163..0d400d9778 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3297,6 +3297,14 @@ SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = 'part_1'::reg CREATE TABLE fail_part (LIKE part_1 INCLUDING CONSTRAINTS); ALTER TABLE list_parted ATTACH PARTITION fail_part FOR VALUES IN (1); ERROR: partition "fail_part" would overlap partition "part_1" +-- check that an existing table can be attached as a default partition +CREATE TABLE def_part (LIKE list_parted INCLUDING CONSTRAINTS); +ALTER TABLE list_parted ATTACH PARTITION def_part DEFAULT; +-- check attaching default partition fails if a default partition already +-- exists +CREATE TABLE fail_def_part (LIKE part_1 INCLUDING CONSTRAINTS); +ALTER TABLE list_parted ATTACH PARTITION fail_def_part DEFAULT; +ERROR: partition "fail_def_part" conflicts with existing default partition "def_part" -- check validation when attaching list partitions CREATE TABLE list_parted2 ( a int, @@ -3310,6 +3318,15 @@ ERROR: partition constraint is violated by some row -- should be ok after deleting the bad row DELETE FROM part_2; ALTER TABLE list_parted2 ATTACH PARTITION part_2 FOR VALUES IN (2); +-- check partition cannot be attached if default has some row for its values +CREATE TABLE list_parted2_def PARTITION OF list_parted2 DEFAULT; +INSERT INTO list_parted2_def VALUES (11, 'z'); +CREATE TABLE part_3 (LIKE list_parted2); +ALTER TABLE list_parted2 ATTACH PARTITION part_3 FOR VALUES IN (11); +ERROR: updated partition constraint for default partition would be violated by some row +-- should be ok after deleting the bad row +DELETE FROM list_parted2_def WHERE a = 11; +ALTER TABLE list_parted2 ATTACH PARTITION part_3 FOR VALUES IN (11); -- adding constraints that describe the desired partition constraint -- (or more restrictive) will help skip the validation scan CREATE TABLE part_3_4 ( @@ -3325,6 +3342,10 @@ ALTER TABLE list_parted2 DETACH PARTITION part_3_4; ALTER TABLE part_3_4 ALTER a SET NOT NULL; ALTER TABLE list_parted2 ATTACH PARTITION part_3_4 FOR VALUES IN (3, 4); INFO: partition constraint for table "part_3_4" is implied by existing constraints +-- check if default partition scan skipped +ALTER TABLE list_parted2_def ADD CONSTRAINT check_a CHECK (a IN (5, 6)); +CREATE TABLE part_55_66 PARTITION OF list_parted2 FOR VALUES IN (55, 66); +INFO: partition constraint for table "list_parted2_def" is implied by existing constraints -- check validation when attaching range partitions CREATE TABLE range_parted ( a int, @@ -3350,6 +3371,19 @@ CREATE TABLE part2 ( ); ALTER TABLE range_parted ATTACH PARTITION part2 FOR VALUES FROM (1, 10) TO (1, 20); INFO: partition constraint for table "part2" is implied by existing constraints +-- Create default partition +CREATE TABLE partr_def1 PARTITION OF range_parted DEFAULT; +-- Only one default partition is allowed, hence, following should give error +CREATE TABLE partr_def2 (LIKE part1 INCLUDING CONSTRAINTS); +ALTER TABLE range_parted ATTACH PARTITION partr_def2 DEFAULT; +ERROR: partition "partr_def2" conflicts with existing default partition "partr_def1" +-- Overlapping partitions cannot be attached, hence, following should give error +INSERT INTO partr_def1 VALUES (2, 10); +CREATE TABLE part3 (LIKE range_parted); +ALTER TABLE 
range_parted ATTACH partition part3 FOR VALUES FROM (2, 10) TO (2, 20); +ERROR: updated partition constraint for default partition would be violated by some row +-- Attaching partitions should be successful when there are no overlapping rows +ALTER TABLE range_parted ATTACH partition part3 FOR VALUES FROM (3, 10) TO (3, 20); -- check that leaf partitions are scanned when attaching a partitioned -- table CREATE TABLE part_5 ( @@ -3402,6 +3436,7 @@ ALTER TABLE part_7 ATTACH PARTITION part_7_a_null FOR VALUES IN ('a', null); INFO: partition constraint for table "part_7_a_null" is implied by existing constraints ALTER TABLE list_parted2 ATTACH PARTITION part_7 FOR VALUES IN (7); INFO: partition constraint for table "part_7" is implied by existing constraints +INFO: partition constraint for table "list_parted2_def" is implied by existing constraints -- Same example, but check this time that the constraint correctly detects -- violating rows ALTER TABLE list_parted2 DETACH PARTITION part_7; @@ -3415,7 +3450,20 @@ SELECT tableoid::regclass, a, b FROM part_7 order by a; (2 rows) ALTER TABLE list_parted2 ATTACH PARTITION part_7 FOR VALUES IN (7); +INFO: partition constraint for table "list_parted2_def" is implied by existing constraints ERROR: partition constraint is violated by some row +-- check that leaf partitions of default partition are scanned when +-- attaching a partitioned table. +ALTER TABLE part_5 DROP CONSTRAINT check_a; +CREATE TABLE part5_def PARTITION OF part_5 DEFAULT PARTITION BY LIST(a); +CREATE TABLE part5_def_p1 PARTITION OF part5_def FOR VALUES IN (5); +INSERT INTO part5_def_p1 VALUES (5, 'y'); +CREATE TABLE part5_p1 (LIKE part_5); +ALTER TABLE part_5 ATTACH PARTITION part5_p1 FOR VALUES IN ('y'); +ERROR: updated partition constraint for default partition would be violated by some row +-- should be ok after deleting the bad row +DELETE FROM part5_def_p1 WHERE b = 'y'; +ALTER TABLE part_5 ATTACH PARTITION part5_p1 FOR VALUES IN ('y'); -- check that the table being attached is not already a partition ALTER TABLE list_parted2 ATTACH PARTITION part_2 FOR VALUES IN (2); ERROR: "part_2" is already a partition @@ -3538,6 +3586,7 @@ ALTER TABLE list_parted2 ALTER COLUMN b TYPE text; ERROR: cannot alter type of column named in partition key -- cleanup DROP TABLE list_parted, list_parted2, range_parted; +DROP TABLE fail_def_part; -- more tests for certain multi-level partitioning scenarios create table p (a int, b int) partition by range (a, b); create table p1 (b int, a int not null) partition by range (b); diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index babda8978c..58c755be50 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -467,6 +467,10 @@ CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES FROM (1) TO (2); ERROR: invalid bound specification for a list partition LINE 1: ...BLE fail_part PARTITION OF list_parted FOR VALUES FROM (1) T... 
^ +-- check default partition cannot be created more than once +CREATE TABLE part_default PARTITION OF list_parted DEFAULT; +CREATE TABLE fail_default_part PARTITION OF list_parted DEFAULT; +ERROR: partition "fail_default_part" conflicts with existing default partition "part_default" -- specified literal can't be cast to the partition column data type CREATE TABLE bools ( a bool @@ -558,10 +562,15 @@ CREATE TABLE list_parted2 ( ) PARTITION BY LIST (a); CREATE TABLE part_null_z PARTITION OF list_parted2 FOR VALUES IN (null, 'z'); CREATE TABLE part_ab PARTITION OF list_parted2 FOR VALUES IN ('a', 'b'); +CREATE TABLE list_parted2_def PARTITION OF list_parted2 DEFAULT; CREATE TABLE fail_part PARTITION OF list_parted2 FOR VALUES IN (null); ERROR: partition "fail_part" would overlap partition "part_null_z" CREATE TABLE fail_part PARTITION OF list_parted2 FOR VALUES IN ('b', 'c'); ERROR: partition "fail_part" would overlap partition "part_ab" +-- check default partition overlap +INSERT INTO list_parted2 VALUES('X'); +CREATE TABLE fail_part PARTITION OF list_parted2 FOR VALUES IN ('W', 'X', 'Y'); +ERROR: updated partition constraint for default partition "list_parted2_def" would be violated by some row CREATE TABLE range_parted2 ( a int ) PARTITION BY RANGE (a); @@ -585,6 +594,16 @@ CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (10) TO (30); ERROR: partition "fail_part" would overlap partition "part2" CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (10) TO (50); ERROR: partition "fail_part" would overlap partition "part2" +-- Create a default partition for range partitioned table +CREATE TABLE range2_default PARTITION OF range_parted2 DEFAULT; +-- More than one default partition is not allowed, so this should give error +CREATE TABLE fail_default_part PARTITION OF range_parted2 DEFAULT; +ERROR: partition "fail_default_part" conflicts with existing default partition "range2_default" +-- Check if the range for default partitions overlap +INSERT INTO range_parted2 VALUES (85); +CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (80) TO (90); +ERROR: updated partition constraint for default partition "range2_default" would be violated by some row +CREATE TABLE part4 PARTITION OF range_parted2 FOR VALUES FROM (90) TO (100); -- now check for multi-column range partition key CREATE TABLE range_parted3 ( a int, @@ -598,6 +617,7 @@ CREATE TABLE part11 PARTITION OF range_parted3 FOR VALUES FROM (1, 1) TO (1, 10) CREATE TABLE part12 PARTITION OF range_parted3 FOR VALUES FROM (1, 10) TO (1, maxvalue); CREATE TABLE fail_part PARTITION OF range_parted3 FOR VALUES FROM (1, 10) TO (1, 20); ERROR: partition "fail_part" would overlap partition "part12" +CREATE TABLE range3_default PARTITION OF range_parted3 DEFAULT; -- cannot create a partition that says column b is allowed to range -- from -infinity to +infinity, while there exist partitions that have -- more specific ranges diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out index e159d62b66..73a5600f19 100644 --- a/src/test/regress/expected/insert.out +++ b/src/test/regress/expected/insert.out @@ -219,17 +219,63 @@ insert into part_null values (null, 0); create table part_ee_ff partition of list_parted for values in ('ee', 'ff') partition by range (b); create table part_ee_ff1 partition of part_ee_ff for values from (1) to (10); create table part_ee_ff2 partition of part_ee_ff for values from (10) to (20); +-- test default partition +create table part_default partition of 
list_parted default; +-- Negative test: a row, which would fit in other partition, does not fit +-- default partition, even when inserted directly +insert into part_default values ('aa', 2); +ERROR: new row for relation "part_default" violates partition constraint +DETAIL: Failing row contains (aa, 2). +insert into part_default values (null, 2); +ERROR: new row for relation "part_default" violates partition constraint +DETAIL: Failing row contains (null, 2). +-- ok +insert into part_default values ('Zz', 2); +-- test if default partition works as expected for multi-level partitioned +-- table as well as when default partition itself is further partitioned +drop table part_default; +create table part_xx_yy partition of list_parted for values in ('xx', 'yy') partition by list (a); +create table part_xx_yy_p1 partition of part_xx_yy for values in ('xx'); +create table part_xx_yy_defpart partition of part_xx_yy default; +create table part_default partition of list_parted default partition by range(b); +create table part_default_p1 partition of part_default for values from (20) to (30); +create table part_default_p2 partition of part_default for values from (30) to (40); -- fail insert into part_ee_ff1 values ('EE', 11); ERROR: new row for relation "part_ee_ff1" violates partition constraint DETAIL: Failing row contains (EE, 11). +insert into part_default_p2 values ('gg', 43); +ERROR: new row for relation "part_default_p2" violates partition constraint +DETAIL: Failing row contains (gg, 43). -- fail (even the parent's, ie, part_ee_ff's partition constraint applies) insert into part_ee_ff1 values ('cc', 1); ERROR: new row for relation "part_ee_ff1" violates partition constraint DETAIL: Failing row contains (cc, 1). +insert into part_default values ('gg', 43); +ERROR: no partition of relation "part_default" found for row +DETAIL: Partition key of the failing row contains (b) = (43). -- ok insert into part_ee_ff1 values ('ff', 1); insert into part_ee_ff2 values ('ff', 11); +insert into part_default_p1 values ('cd', 25); +insert into part_default_p2 values ('de', 35); +insert into list_parted values ('ab', 21); +insert into list_parted values ('xx', 1); +insert into list_parted values ('yy', 2); +select tableoid::regclass, * from list_parted; + tableoid | a | b +--------------------+----+---- + part_cc_dd | cC | 1 + part_ee_ff1 | ff | 1 + part_ee_ff2 | ff | 11 + part_xx_yy_p1 | xx | 1 + part_xx_yy_defpart | yy | 2 + part_null | | 0 + part_default_p1 | cd | 25 + part_default_p1 | ab | 21 + part_default_p2 | de | 35 +(9 rows) + -- Check tuple routing for partitioned tables -- fail insert into range_parted values ('a', 0); @@ -249,6 +295,18 @@ insert into range_parted values ('b', 10); insert into range_parted values ('a'); ERROR: no partition of relation "range_parted" found for row DETAIL: Partition key of the failing row contains (a, (b + 0)) = (a, null). +-- Check default partition +create table part_def partition of range_parted default; +-- fail +insert into part_def values ('b', 10); +ERROR: new row for relation "part_def" violates partition constraint +DETAIL: Failing row contains (b, 10). 
+-- ok +insert into part_def values ('c', 10); +insert into range_parted values (null, null); +insert into range_parted values ('a', null); +insert into range_parted values (null, 19); +insert into range_parted values ('b', 20); select tableoid::regclass, * from range_parted; tableoid | a | b ----------+---+---- @@ -258,7 +316,12 @@ select tableoid::regclass, * from range_parted; part3 | b | 1 part4 | b | 10 part4 | b | 10 -(6 rows) + part_def | c | 10 + part_def | | + part_def | a | + part_def | | 19 + part_def | b | 20 +(11 rows) -- ok insert into list_parted values (null, 1); @@ -274,17 +337,22 @@ DETAIL: Partition key of the failing row contains (b) = (0). insert into list_parted values ('EE', 1); insert into part_ee_ff values ('EE', 10); select tableoid::regclass, * from list_parted; - tableoid | a | b --------------+----+---- - part_aa_bb | aA | - part_cc_dd | cC | 1 - part_ee_ff1 | ff | 1 - part_ee_ff1 | EE | 1 - part_ee_ff2 | ff | 11 - part_ee_ff2 | EE | 10 - part_null | | 0 - part_null | | 1 -(8 rows) + tableoid | a | b +--------------------+----+---- + part_aa_bb | aA | + part_cc_dd | cC | 1 + part_ee_ff1 | ff | 1 + part_ee_ff1 | EE | 1 + part_ee_ff2 | ff | 11 + part_ee_ff2 | EE | 10 + part_xx_yy_p1 | xx | 1 + part_xx_yy_defpart | yy | 2 + part_null | | 0 + part_null | | 1 + part_default_p1 | cd | 25 + part_default_p1 | ab | 21 + part_default_p2 | de | 35 +(13 rows) -- some more tests to exercise tuple-routing with multi-level partitioning create table part_gg partition of list_parted for values in ('gg') partition by range (b); @@ -316,6 +384,31 @@ select tableoid::regclass::text, a, min(b) as min_b, max(b) as max_b from list_p -- cleanup drop table range_parted, list_parted; +-- test that a default partition added as the first partition accepts any value +-- including null +create table list_parted (a int) partition by list (a); +create table part_default partition of list_parted default; +\d+ part_default + Table "public.part_default" + Column | Type | Collation | Nullable | Default | Storage | Stats target | Description +--------+---------+-----------+----------+---------+---------+--------------+------------- + a | integer | | | | plain | | +Partition of: list_parted DEFAULT +No partition constraint + +insert into part_default values (null); +insert into part_default values (1); +insert into part_default values (-1); +select tableoid::regclass, a from list_parted; + tableoid | a +--------------+---- + part_default | + part_default | 1 + part_default | -1 +(3 rows) + +-- cleanup +drop table list_parted; -- more tests for certain multi-level partitioning scenarios create table mlparted (a int, b int) partition by range (a, b); create table mlparted1 (b int not null, a int not null) partition by range ((b+0)); @@ -425,6 +518,36 @@ insert into mlparted5 (a, b, c) values (1, 40, 'a'); ERROR: new row for relation "mlparted5a" violates partition constraint DETAIL: Failing row contains (b, 1, 40). 
drop table mlparted5; +alter table mlparted drop constraint check_b; +-- Check multi-level default partition +create table mlparted_def partition of mlparted default partition by range(a); +create table mlparted_def1 partition of mlparted_def for values from (40) to (50); +create table mlparted_def2 partition of mlparted_def for values from (50) to (60); +insert into mlparted values (40, 100); +insert into mlparted_def1 values (42, 100); +insert into mlparted_def2 values (54, 50); +-- fail +insert into mlparted values (70, 100); +ERROR: no partition of relation "mlparted_def" found for row +DETAIL: Partition key of the failing row contains (a) = (70). +insert into mlparted_def1 values (52, 50); +ERROR: new row for relation "mlparted_def1" violates partition constraint +DETAIL: Failing row contains (52, 50, null). +insert into mlparted_def2 values (34, 50); +ERROR: new row for relation "mlparted_def2" violates partition constraint +DETAIL: Failing row contains (34, 50, null). +-- ok +create table mlparted_defd partition of mlparted_def default; +insert into mlparted values (70, 100); +select tableoid::regclass, * from mlparted_def; + tableoid | a | b | c +---------------+----+-----+--- + mlparted_def1 | 40 | 100 | + mlparted_def1 | 42 | 100 | + mlparted_def2 | 54 | 50 | + mlparted_defd | 70 | 100 | +(4 rows) + -- check that message shown after failure to find a partition shows the -- appropriate key description (or none) in various situations create table key_desc (a int, b int) partition by list ((a+0)); diff --git a/src/test/regress/expected/plancache.out b/src/test/regress/expected/plancache.out index 3f3db337c5..c2eeff1614 100644 --- a/src/test/regress/expected/plancache.out +++ b/src/test/regress/expected/plancache.out @@ -252,3 +252,29 @@ NOTICE: 3 (1 row) +-- Check that addition or removal of any partition is correctly dealt with by +-- default partition table when it is being used in prepared statement. +create table list_parted (a int) partition by list(a); +create table list_part_null partition of list_parted for values in (null); +create table list_part_1 partition of list_parted for values in (1); +create table list_part_def partition of list_parted default; +prepare pstmt_def_insert (int) as insert into list_part_def values($1); +-- should fail +execute pstmt_def_insert(null); +ERROR: new row for relation "list_part_def" violates partition constraint +DETAIL: Failing row contains (null). +execute pstmt_def_insert(1); +ERROR: new row for relation "list_part_def" violates partition constraint +DETAIL: Failing row contains (1). +create table list_part_2 partition of list_parted for values in (2); +execute pstmt_def_insert(2); +ERROR: new row for relation "list_part_def" violates partition constraint +DETAIL: Failing row contains (2). 
+alter table list_parted detach partition list_part_null; +-- should be ok +execute pstmt_def_insert(null); +drop table list_part_1; +-- should be ok +execute pstmt_def_insert(1); +drop table list_parted, list_part_null; +deallocate pstmt_def_insert; diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out index 6750152e0f..e996640593 100644 --- a/src/test/regress/expected/sanity_check.out +++ b/src/test/regress/expected/sanity_check.out @@ -77,6 +77,10 @@ mlparted12|f mlparted2|f mlparted3|f mlparted4|f +mlparted_def|f +mlparted_def1|f +mlparted_def2|f +mlparted_defd|f money_data|f num_data|f num_exp_add|t diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out index 9366f04255..cef70b1a1e 100644 --- a/src/test/regress/expected/update.out +++ b/src/test/regress/expected/update.out @@ -218,5 +218,38 @@ ERROR: new row for relation "part_b_10_b_20" violates partition constraint DETAIL: Failing row contains (b, 9). -- ok update range_parted set b = b + 1 where b = 10; +-- Creating default partition for range +create table part_def partition of range_parted default; +\d+ part_def + Table "public.part_def" + Column | Type | Collation | Nullable | Default | Storage | Stats target | Description +--------+---------+-----------+----------+---------+----------+--------------+------------- + a | text | | | | extended | | + b | integer | | | | plain | | +Partition of: range_parted DEFAULT +Partition constraint: (NOT (((a = 'a'::text) AND (b >= 1) AND (b < 10)) OR ((a = 'a'::text) AND (b >= 10) AND (b < 20)) OR ((a = 'b'::text) AND (b >= 1) AND (b < 10)) OR ((a = 'b'::text) AND (b >= 10) AND (b < 20)))) + +insert into range_parted values ('c', 9); +-- ok +update part_def set a = 'd' where a = 'c'; +-- fail +update part_def set a = 'a' where a = 'd'; +ERROR: new row for relation "part_def" violates partition constraint +DETAIL: Failing row contains (a, 9). +create table list_parted ( + a text, + b int +) partition by list (a); +create table list_part1 partition of list_parted for values in ('a', 'b'); +create table list_default partition of list_parted default; +insert into list_part1 values ('a', 1); +insert into list_default values ('d', 10); +-- fail +update list_default set a = 'a' where a = 'd'; +ERROR: new row for relation "list_default" violates partition constraint +DETAIL: Failing row contains (a, 10). 
+-- ok +update list_default set a = 'x' where a = 'd'; -- cleanup drop table range_parted; +drop table list_parted; diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index e6f6669880..37cca72620 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -2111,6 +2111,13 @@ SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = 'part_1'::reg -- check that the new partition won't overlap with an existing partition CREATE TABLE fail_part (LIKE part_1 INCLUDING CONSTRAINTS); ALTER TABLE list_parted ATTACH PARTITION fail_part FOR VALUES IN (1); +-- check that an existing table can be attached as a default partition +CREATE TABLE def_part (LIKE list_parted INCLUDING CONSTRAINTS); +ALTER TABLE list_parted ATTACH PARTITION def_part DEFAULT; +-- check attaching default partition fails if a default partition already +-- exists +CREATE TABLE fail_def_part (LIKE part_1 INCLUDING CONSTRAINTS); +ALTER TABLE list_parted ATTACH PARTITION fail_def_part DEFAULT; -- check validation when attaching list partitions CREATE TABLE list_parted2 ( @@ -2127,6 +2134,15 @@ ALTER TABLE list_parted2 ATTACH PARTITION part_2 FOR VALUES IN (2); DELETE FROM part_2; ALTER TABLE list_parted2 ATTACH PARTITION part_2 FOR VALUES IN (2); +-- check partition cannot be attached if default has some row for its values +CREATE TABLE list_parted2_def PARTITION OF list_parted2 DEFAULT; +INSERT INTO list_parted2_def VALUES (11, 'z'); +CREATE TABLE part_3 (LIKE list_parted2); +ALTER TABLE list_parted2 ATTACH PARTITION part_3 FOR VALUES IN (11); +-- should be ok after deleting the bad row +DELETE FROM list_parted2_def WHERE a = 11; +ALTER TABLE list_parted2 ATTACH PARTITION part_3 FOR VALUES IN (11); + -- adding constraints that describe the desired partition constraint -- (or more restrictive) will help skip the validation scan CREATE TABLE part_3_4 ( @@ -2144,6 +2160,9 @@ ALTER TABLE list_parted2 DETACH PARTITION part_3_4; ALTER TABLE part_3_4 ALTER a SET NOT NULL; ALTER TABLE list_parted2 ATTACH PARTITION part_3_4 FOR VALUES IN (3, 4); +-- check if default partition scan skipped +ALTER TABLE list_parted2_def ADD CONSTRAINT check_a CHECK (a IN (5, 6)); +CREATE TABLE part_55_66 PARTITION OF list_parted2 FOR VALUES IN (55, 66); -- check validation when attaching range partitions CREATE TABLE range_parted ( @@ -2172,6 +2191,21 @@ CREATE TABLE part2 ( ); ALTER TABLE range_parted ATTACH PARTITION part2 FOR VALUES FROM (1, 10) TO (1, 20); +-- Create default partition +CREATE TABLE partr_def1 PARTITION OF range_parted DEFAULT; + +-- Only one default partition is allowed, hence, following should give error +CREATE TABLE partr_def2 (LIKE part1 INCLUDING CONSTRAINTS); +ALTER TABLE range_parted ATTACH PARTITION partr_def2 DEFAULT; + +-- Overlapping partitions cannot be attached, hence, following should give error +INSERT INTO partr_def1 VALUES (2, 10); +CREATE TABLE part3 (LIKE range_parted); +ALTER TABLE range_parted ATTACH partition part3 FOR VALUES FROM (2, 10) TO (2, 20); + +-- Attaching partitions should be successful when there are no overlapping rows +ALTER TABLE range_parted ATTACH partition part3 FOR VALUES FROM (3, 10) TO (3, 20); + -- check that leaf partitions are scanned when attaching a partitioned -- table CREATE TABLE part_5 ( @@ -2232,6 +2266,18 @@ INSERT INTO part_7 (a, b) VALUES (8, null), (9, 'a'); SELECT tableoid::regclass, a, b FROM part_7 order by a; ALTER TABLE list_parted2 ATTACH PARTITION part_7 FOR VALUES IN (7); +-- check that leaf 
partitions of default partition are scanned when +-- attaching a partitioned table. +ALTER TABLE part_5 DROP CONSTRAINT check_a; +CREATE TABLE part5_def PARTITION OF part_5 DEFAULT PARTITION BY LIST(a); +CREATE TABLE part5_def_p1 PARTITION OF part5_def FOR VALUES IN (5); +INSERT INTO part5_def_p1 VALUES (5, 'y'); +CREATE TABLE part5_p1 (LIKE part_5); +ALTER TABLE part_5 ATTACH PARTITION part5_p1 FOR VALUES IN ('y'); +-- should be ok after deleting the bad row +DELETE FROM part5_def_p1 WHERE b = 'y'; +ALTER TABLE part_5 ATTACH PARTITION part5_p1 FOR VALUES IN ('y'); + -- check that the table being attached is not already a partition ALTER TABLE list_parted2 ATTACH PARTITION part_2 FOR VALUES IN (2); @@ -2327,6 +2373,7 @@ ALTER TABLE list_parted2 ALTER COLUMN b TYPE text; -- cleanup DROP TABLE list_parted, list_parted2, range_parted; +DROP TABLE fail_def_part; -- more tests for certain multi-level partitioning scenarios create table p (a int, b int) partition by range (a, b); diff --git a/src/test/regress/sql/create_table.sql b/src/test/regress/sql/create_table.sql index 1c0ce92763..eeab5d91ff 100644 --- a/src/test/regress/sql/create_table.sql +++ b/src/test/regress/sql/create_table.sql @@ -447,6 +447,10 @@ CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES IN (); -- trying to specify range for list partitioned table CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES FROM (1) TO (2); +-- check default partition cannot be created more than once +CREATE TABLE part_default PARTITION OF list_parted DEFAULT; +CREATE TABLE fail_default_part PARTITION OF list_parted DEFAULT; + -- specified literal can't be cast to the partition column data type CREATE TABLE bools ( a bool @@ -524,9 +528,13 @@ CREATE TABLE list_parted2 ( ) PARTITION BY LIST (a); CREATE TABLE part_null_z PARTITION OF list_parted2 FOR VALUES IN (null, 'z'); CREATE TABLE part_ab PARTITION OF list_parted2 FOR VALUES IN ('a', 'b'); +CREATE TABLE list_parted2_def PARTITION OF list_parted2 DEFAULT; CREATE TABLE fail_part PARTITION OF list_parted2 FOR VALUES IN (null); CREATE TABLE fail_part PARTITION OF list_parted2 FOR VALUES IN ('b', 'c'); +-- check default partition overlap +INSERT INTO list_parted2 VALUES('X'); +CREATE TABLE fail_part PARTITION OF list_parted2 FOR VALUES IN ('W', 'X', 'Y'); CREATE TABLE range_parted2 ( a int @@ -546,6 +554,17 @@ CREATE TABLE part3 PARTITION OF range_parted2 FOR VALUES FROM (30) TO (40); CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (10) TO (30); CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (10) TO (50); +-- Create a default partition for range partitioned table +CREATE TABLE range2_default PARTITION OF range_parted2 DEFAULT; + +-- More than one default partition is not allowed, so this should give error +CREATE TABLE fail_default_part PARTITION OF range_parted2 DEFAULT; + +-- Check if the range for default partitions overlap +INSERT INTO range_parted2 VALUES (85); +CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (80) TO (90); +CREATE TABLE part4 PARTITION OF range_parted2 FOR VALUES FROM (90) TO (100); + -- now check for multi-column range partition key CREATE TABLE range_parted3 ( a int, @@ -559,6 +578,7 @@ CREATE TABLE part10 PARTITION OF range_parted3 FOR VALUES FROM (1, minvalue) TO CREATE TABLE part11 PARTITION OF range_parted3 FOR VALUES FROM (1, 1) TO (1, 10); CREATE TABLE part12 PARTITION OF range_parted3 FOR VALUES FROM (1, 10) TO (1, maxvalue); CREATE TABLE fail_part PARTITION OF range_parted3 FOR VALUES FROM (1, 10) 
TO (1, 20); +CREATE TABLE range3_default PARTITION OF range_parted3 DEFAULT; -- cannot create a partition that says column b is allowed to range -- from -infinity to +infinity, while there exist partitions that have diff --git a/src/test/regress/sql/insert.sql b/src/test/regress/sql/insert.sql index 6f17872087..a2948e4dd0 100644 --- a/src/test/regress/sql/insert.sql +++ b/src/test/regress/sql/insert.sql @@ -132,13 +132,39 @@ create table part_ee_ff partition of list_parted for values in ('ee', 'ff') part create table part_ee_ff1 partition of part_ee_ff for values from (1) to (10); create table part_ee_ff2 partition of part_ee_ff for values from (10) to (20); +-- test default partition +create table part_default partition of list_parted default; +-- Negative test: a row, which would fit in other partition, does not fit +-- default partition, even when inserted directly +insert into part_default values ('aa', 2); +insert into part_default values (null, 2); +-- ok +insert into part_default values ('Zz', 2); +-- test if default partition works as expected for multi-level partitioned +-- table as well as when default partition itself is further partitioned +drop table part_default; +create table part_xx_yy partition of list_parted for values in ('xx', 'yy') partition by list (a); +create table part_xx_yy_p1 partition of part_xx_yy for values in ('xx'); +create table part_xx_yy_defpart partition of part_xx_yy default; +create table part_default partition of list_parted default partition by range(b); +create table part_default_p1 partition of part_default for values from (20) to (30); +create table part_default_p2 partition of part_default for values from (30) to (40); + -- fail insert into part_ee_ff1 values ('EE', 11); +insert into part_default_p2 values ('gg', 43); -- fail (even the parent's, ie, part_ee_ff's partition constraint applies) insert into part_ee_ff1 values ('cc', 1); +insert into part_default values ('gg', 43); -- ok insert into part_ee_ff1 values ('ff', 1); insert into part_ee_ff2 values ('ff', 11); +insert into part_default_p1 values ('cd', 25); +insert into part_default_p2 values ('de', 35); +insert into list_parted values ('ab', 21); +insert into list_parted values ('xx', 1); +insert into list_parted values ('yy', 2); +select tableoid::regclass, * from list_parted; -- Check tuple routing for partitioned tables @@ -154,8 +180,19 @@ insert into range_parted values ('b', 1); insert into range_parted values ('b', 10); -- fail (partition key (b+0) is null) insert into range_parted values ('a'); -select tableoid::regclass, * from range_parted; +-- Check default partition +create table part_def partition of range_parted default; +-- fail +insert into part_def values ('b', 10); +-- ok +insert into part_def values ('c', 10); +insert into range_parted values (null, null); +insert into range_parted values ('a', null); +insert into range_parted values (null, 19); +insert into range_parted values ('b', 20); + +select tableoid::regclass, * from range_parted; -- ok insert into list_parted values (null, 1); insert into list_parted (a) values ('aA'); @@ -188,6 +225,18 @@ select tableoid::regclass::text, a, min(b) as min_b, max(b) as max_b from list_p -- cleanup drop table range_parted, list_parted; +-- test that a default partition added as the first partition accepts any value +-- including null +create table list_parted (a int) partition by list (a); +create table part_default partition of list_parted default; +\d+ part_default +insert into part_default values (null); +insert into 
part_default values (1); +insert into part_default values (-1); +select tableoid::regclass, a from list_parted; +-- cleanup +drop table list_parted; + -- more tests for certain multi-level partitioning scenarios create table mlparted (a int, b int) partition by range (a, b); create table mlparted1 (b int not null, a int not null) partition by range ((b+0)); @@ -274,6 +323,24 @@ create function mlparted5abrtrig_func() returns trigger as $$ begin new.c = 'b'; create trigger mlparted5abrtrig before insert on mlparted5a for each row execute procedure mlparted5abrtrig_func(); insert into mlparted5 (a, b, c) values (1, 40, 'a'); drop table mlparted5; +alter table mlparted drop constraint check_b; + +-- Check multi-level default partition +create table mlparted_def partition of mlparted default partition by range(a); +create table mlparted_def1 partition of mlparted_def for values from (40) to (50); +create table mlparted_def2 partition of mlparted_def for values from (50) to (60); +insert into mlparted values (40, 100); +insert into mlparted_def1 values (42, 100); +insert into mlparted_def2 values (54, 50); +-- fail +insert into mlparted values (70, 100); +insert into mlparted_def1 values (52, 50); +insert into mlparted_def2 values (34, 50); +-- ok +create table mlparted_defd partition of mlparted_def default; +insert into mlparted values (70, 100); + +select tableoid::regclass, * from mlparted_def; -- check that message shown after failure to find a partition shows the -- appropriate key description (or none) in various situations diff --git a/src/test/regress/sql/plancache.sql b/src/test/regress/sql/plancache.sql index bc2086166b..cb2a551487 100644 --- a/src/test/regress/sql/plancache.sql +++ b/src/test/regress/sql/plancache.sql @@ -156,3 +156,24 @@ end$$ language plpgsql; select cachebug(); select cachebug(); + +-- Check that addition or removal of any partition is correctly dealt with by +-- default partition table when it is being used in prepared statement. 
+create table list_parted (a int) partition by list(a); +create table list_part_null partition of list_parted for values in (null); +create table list_part_1 partition of list_parted for values in (1); +create table list_part_def partition of list_parted default; +prepare pstmt_def_insert (int) as insert into list_part_def values($1); +-- should fail +execute pstmt_def_insert(null); +execute pstmt_def_insert(1); +create table list_part_2 partition of list_parted for values in (2); +execute pstmt_def_insert(2); +alter table list_parted detach partition list_part_null; +-- should be ok +execute pstmt_def_insert(null); +drop table list_part_1; +-- should be ok +execute pstmt_def_insert(1); +drop table list_parted, list_part_null; +deallocate pstmt_def_insert; diff --git a/src/test/regress/sql/update.sql b/src/test/regress/sql/update.sql index 663711997b..66d1feca10 100644 --- a/src/test/regress/sql/update.sql +++ b/src/test/regress/sql/update.sql @@ -125,5 +125,29 @@ update range_parted set b = b - 1 where b = 10; -- ok update range_parted set b = b + 1 where b = 10; +-- Creating default partition for range +create table part_def partition of range_parted default; +\d+ part_def +insert into range_parted values ('c', 9); +-- ok +update part_def set a = 'd' where a = 'c'; +-- fail +update part_def set a = 'a' where a = 'd'; + +create table list_parted ( + a text, + b int +) partition by list (a); +create table list_part1 partition of list_parted for values in ('a', 'b'); +create table list_default partition of list_parted default; +insert into list_part1 values ('a', 1); +insert into list_default values ('d', 10); + +-- fail +update list_default set a = 'a' where a = 'd'; +-- ok +update list_default set a = 'x' where a = 'd'; + -- cleanup drop table range_parted; +drop table list_parted; From f25000c832f2e147986110116d4ba1a57b9d9256 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 17:37:43 -0400 Subject: [PATCH 0134/1087] Fix more portability issues in new pgbench TAP tests. Not completely sure, but I think bowerbird is spitting up on attempting to include ">" in a temporary file name. (Why in the world are we writing this stuff into files at all? A hash would be a better answer.) --- src/bin/pgbench/t/001_pgbench_with_server.pl | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index b80640a8cc..3609b9bede 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -132,7 +132,7 @@ sub pgbench { my ($stderr); run_log([ 'pgbench', '-j', '2', '--bad-option' ], '2>', \$stderr); - $nthreads = 1 if $stderr =~ 'threads are not supported on this platform'; + $nthreads = 1 if $stderr =~ m/threads are not supported on this platform/; } # run custom scripts @@ -354,7 +354,7 @@ sub pgbench 0, [qr{gaussian param.* at least 2}], q{\set i random_gaussian(0, 10, 1.0)} ], - [ 'set exponential param > 0', + [ 'set exponential param greater 0', 0, [qr{exponential parameter must be greater }], q{\set i random_exponential(0, 10, 0.0)} ], From e56dd7cf5078d9651d715a72cd802a3aa346c63a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 19:04:32 -0400 Subject: [PATCH 0135/1087] Fix uninitialized-variable bug. map_partition_varattnos() failed to set its found_whole_row output parameter if the given expression list was NIL. This seems to be a pre-existing bug that chanced to be exposed by commit 6f6b99d13. 
It might be unreachable in v10, but I have little faith in that proposition, so back-patch. Per buildfarm. --- src/backend/catalog/partition.c | 28 +++++++++++++++------------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 7e426ba9c8..c94ee941de 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -1120,21 +1120,23 @@ map_partition_varattnos(List *expr, int target_varno, Relation partrel, Relation parent, bool *found_whole_row) { - AttrNumber *part_attnos; - bool my_found_whole_row; + bool my_found_whole_row = false; - if (expr == NIL) - return NIL; + if (expr != NIL) + { + AttrNumber *part_attnos; + + part_attnos = convert_tuples_by_name_map(RelationGetDescr(partrel), + RelationGetDescr(parent), + gettext_noop("could not convert row type")); + expr = (List *) map_variable_attnos((Node *) expr, + target_varno, 0, + part_attnos, + RelationGetDescr(parent)->natts, + RelationGetForm(partrel)->reltype, + &my_found_whole_row); + } - part_attnos = convert_tuples_by_name_map(RelationGetDescr(partrel), - RelationGetDescr(parent), - gettext_noop("could not convert row type")); - expr = (List *) map_variable_attnos((Node *) expr, - target_varno, 0, - part_attnos, - RelationGetDescr(parent)->natts, - RelationGetForm(partrel)->reltype, - &my_found_whole_row); if (found_whole_row) *found_whole_row = my_found_whole_row; From fdf87ed451ef1ccb710f4e65dddbc6da17e92ba7 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Sep 2017 20:45:31 -0400 Subject: [PATCH 0136/1087] Fix failure-to-copy bug in commit 6f6b99d13. The previous coding of get_qual_for_list() was careful to copy everything it was using from the input data structure. The new version missed making a copy of pass-by-ref datum values that it's inserting into Consts. This is not optional, however, as revealed by buildfarm failures on machines running -DRELCACHE_FORCE_RELEASE: we're copying from a relcache entry that could go away before the required lifespan of our output expression. I'm pretty sure -DCLOBBER_CACHE_ALWAYS machines won't like this either, but none of them have reported in yet. --- src/backend/catalog/partition.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index c94ee941de..73eff17202 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -1559,12 +1559,18 @@ get_qual_for_list(Relation parent, PartitionBoundSpec *spec) { Const *val; - /* Construct const from datum */ + /* + * Construct Const from known-not-null datum. We must be careful + * to copy the value, because our result has to be able to outlive + * the relcache entry we're copying from. 
+ */ val = makeConst(key->parttypid[0], key->parttypmod[0], key->parttypcoll[0], key->parttyplen[0], - *boundinfo->datums[i], + datumCopy(*boundinfo->datums[i], + key->parttypbyval[0], + key->parttyplen[0]), false, /* isnull */ key->parttypbyval[0]); From c824c7e29fe752110346fc821ad6d01357aa12f8 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 9 Sep 2017 17:32:10 -0400 Subject: [PATCH 0137/1087] pg_upgrade: Message style fixes --- src/bin/pg_upgrade/check.c | 2 +- src/bin/pg_upgrade/exec.c | 8 ++++---- src/bin/pg_upgrade/file.c | 2 +- src/bin/pg_upgrade/option.c | 16 +++++++++------- src/bin/pg_upgrade/pg_upgrade.c | 4 ++-- src/bin/pg_upgrade/server.c | 2 +- 6 files changed, 18 insertions(+), 16 deletions(-) diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c index 86225eaa4c..b7e1e4be19 100644 --- a/src/bin/pg_upgrade/check.c +++ b/src/bin/pg_upgrade/check.c @@ -987,7 +987,7 @@ check_for_jsonb_9_4_usage(ClusterInfo *cluster) bool found = false; char output_path[MAXPGPATH]; - prep_status("Checking for incompatible jsonb data type"); + prep_status("Checking for incompatible \"jsonb\" data type"); snprintf(output_path, sizeof(output_path), "tables_using_jsonb.txt"); diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c index cb8e29b17c..1cf64e1a45 100644 --- a/src/bin/pg_upgrade/exec.c +++ b/src/bin/pg_upgrade/exec.c @@ -51,7 +51,7 @@ get_bin_version(ClusterInfo *cluster) *strchr(cmd_output, '\n') = '\0'; if (sscanf(cmd_output, "%*s %*s %d.%d", &pre_dot, &post_dot) < 1) - pg_fatal("could not get version from %s\n", cmd); + pg_fatal("could not get pg_ctl version output from %s\n", cmd); cluster->bin_version = (pre_dot * 100 + post_dot) * 100; } @@ -143,7 +143,7 @@ exec_prog(const char *log_file, const char *opt_log_file, #endif if (log == NULL) - pg_fatal("cannot write to log file %s\n", log_file); + pg_fatal("could not write to log file \"%s\"\n", log_file); #ifdef WIN32 /* Are we printing "command:" before its output? */ @@ -198,7 +198,7 @@ exec_prog(const char *log_file, const char *opt_log_file, * log these commands to a third file, but that just adds complexity. 
*/ if ((log = fopen(log_file, "a")) == NULL) - pg_fatal("cannot write to log file %s\n", log_file); + pg_fatal("could not write to log file \"%s\"\n", log_file); fprintf(log, "\n\n"); fclose(log); #endif @@ -426,7 +426,7 @@ validate_exec(const char *dir, const char *cmdName) pg_fatal("check for \"%s\" failed: %s\n", path, strerror(errno)); else if (!S_ISREG(buf.st_mode)) - pg_fatal("check for \"%s\" failed: not an executable file\n", + pg_fatal("check for \"%s\" failed: not a regular file\n", path); /* diff --git a/src/bin/pg_upgrade/file.c b/src/bin/pg_upgrade/file.c index eb925d1e0f..ae8d89fb66 100644 --- a/src/bin/pg_upgrade/file.c +++ b/src/bin/pg_upgrade/file.c @@ -290,7 +290,7 @@ check_hard_link(void) if (pg_link_file(existing_file, new_link_file) < 0) pg_fatal("could not create hard link between old and new data directories: %s\n" - "In link mode the old and new data directories must be on the same file system volume.\n", + "In link mode the old and new data directories must be on the same file system.\n", strerror(errno)); unlink(new_link_file); diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c index bbe364741c..c74eb25e18 100644 --- a/src/bin/pg_upgrade/option.c +++ b/src/bin/pg_upgrade/option.c @@ -98,7 +98,7 @@ parseCommandLine(int argc, char *argv[]) pg_fatal("%s: cannot be run as root\n", os_info.progname); if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL) - pg_fatal("cannot write to log file %s\n", INTERNAL_LOG_FILE); + pg_fatal("could not write to log file \"%s\"\n", INTERNAL_LOG_FILE); while ((option = getopt_long(argc, argv, "d:D:b:B:cj:ko:O:p:P:rU:v", long_options, &optindex)) != -1) @@ -214,7 +214,7 @@ parseCommandLine(int argc, char *argv[]) for (filename = output_files; *filename != NULL; filename++) { if ((fp = fopen_priv(*filename, "a")) == NULL) - pg_fatal("cannot write to log file %s\n", *filename); + pg_fatal("could not write to log file \"%s\"\n", *filename); /* Start with newline because we might be appending to a file. 
*/ fprintf(fp, "\n" @@ -262,7 +262,7 @@ parseCommandLine(int argc, char *argv[]) canonicalize_path(new_cluster_pgdata); if (!getcwd(cwd, MAXPGPATH)) - pg_fatal("cannot find current directory\n"); + pg_fatal("could not determine current directory\n"); canonicalize_path(cwd); if (path_is_prefix_of_path(new_cluster_pgdata, cwd)) pg_fatal("cannot run pg_upgrade from inside the new cluster data directory on Windows\n"); @@ -459,7 +459,7 @@ get_sock_dir(ClusterInfo *cluster, bool live_check) /* Use the current directory for the socket */ cluster->sockdir = pg_malloc(MAXPGPATH); if (!getcwd(cluster->sockdir, MAXPGPATH)) - pg_fatal("cannot find current directory\n"); + pg_fatal("could not determine current directory\n"); } else { @@ -477,14 +477,16 @@ get_sock_dir(ClusterInfo *cluster, bool live_check) snprintf(filename, sizeof(filename), "%s/postmaster.pid", cluster->pgdata); if ((fp = fopen(filename, "r")) == NULL) - pg_fatal("Cannot open file %s: %m\n", filename); + pg_fatal("could not open file \"%s\": %s\n", + filename, strerror(errno)); for (lineno = 1; lineno <= Max(LOCK_FILE_LINE_PORT, LOCK_FILE_LINE_SOCKET_DIR); lineno++) { if (fgets(line, sizeof(line), fp) == NULL) - pg_fatal("Cannot read line %d from %s: %m\n", lineno, filename); + pg_fatal("could not read line %d from file \"%s\": %s\n", + lineno, filename, strerror(errno)); /* potentially overwrite user-supplied value */ if (lineno == LOCK_FILE_LINE_PORT) @@ -501,7 +503,7 @@ get_sock_dir(ClusterInfo *cluster, bool live_check) /* warn of port number correction */ if (orig_port != DEF_PGUPORT && old_cluster.port != orig_port) - pg_log(PG_WARNING, "User-supplied old port number %hu corrected to %hu\n", + pg_log(PG_WARNING, "user-supplied old port number %hu corrected to %hu\n", orig_port, cluster->port); } } diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index 2a68ce6efa..d44fefb457 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -261,7 +261,7 @@ prepare_new_cluster(void) * datfrozenxid, relfrozenxids, and relminmxid later to match the new xid * counter later. */ - prep_status("Freezing all rows on the new cluster"); + prep_status("Freezing all rows in the new cluster"); exec_prog(UTILITY_LOG_FILE, NULL, true, "\"%s/vacuumdb\" %s --all --freeze %s", new_cluster.bindir, cluster_conn_opts(&new_cluster), @@ -471,7 +471,7 @@ copy_xact_xlog_xid(void) */ remove_new_subdir("pg_multixact/offsets", false); - prep_status("Setting oldest multixact ID on new cluster"); + prep_status("Setting oldest multixact ID in new cluster"); /* * We don't preserve files in this case, but it's important that the diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c index 26e60fab46..3e3323a6e8 100644 --- a/src/bin/pg_upgrade/server.c +++ b/src/bin/pg_upgrade/server.c @@ -167,7 +167,7 @@ get_major_server_version(ClusterInfo *cluster) if (fscanf(version_fd, "%63s", cluster->major_version_str) == 0 || sscanf(cluster->major_version_str, "%d.%d", &integer_version, &fractional_version) < 1) - pg_fatal("could not get version from %s\n", cluster->pgdata); + pg_fatal("could not parse PG_VERSION file from %s\n", cluster->pgdata); fclose(version_fd); From f80e782a6b4dcdea78f053f1505fff316f3a3289 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 10 Sep 2017 13:19:11 -0400 Subject: [PATCH 0138/1087] Remove pre-order and post-order traversal logic for red-black trees. This code isn't used, and there's no clear reason why anybody would ever want to use it. 
These traversal mechanisms don't yield a visitation order that is semantically meaningful for any external purpose, nor are they any faster or simpler than the left-to-right or right-to-left traversals. (In fact, some rough testing suggests they are slower :-(.) Moreover, these mechanisms are impossible to test in any arm's-length fashion; doing so requires knowledge of the red-black tree's internal implementation. Hence, let's just jettison them. Discussion: https://postgr.es/m/17735.1505003111@sss.pgh.pa.us --- src/backend/lib/rbtree.c | 129 +-------------------------------------- src/include/lib/rbtree.h | 5 +- 2 files changed, 3 insertions(+), 131 deletions(-) diff --git a/src/backend/lib/rbtree.c b/src/backend/lib/rbtree.c index 3d80090a8c..5362acc6ff 100644 --- a/src/backend/lib/rbtree.c +++ b/src/backend/lib/rbtree.c @@ -62,17 +62,6 @@ struct RBTree static RBNode sentinel = {RBBLACK, RBNIL, RBNIL, NULL}; -/* - * Values used in the RBTreeIterator.next_state field, with an - * InvertedWalk iterator. - */ -typedef enum InvertedWalkNextStep -{ - NextStepBegin, - NextStepUp, - NextStepLeft, - NextStepRight -} InvertedWalkNextStep; /* * rb_create: create an empty RBTree @@ -567,6 +556,7 @@ rb_delete_node(RBTree *rb, RBNode *z) RBNode *x, *y; + /* This is just paranoia: we should only get called on a valid node */ if (!z || z == RBNIL) return; @@ -730,114 +720,6 @@ rb_right_left_iterator(RBTreeIterator *iter) return iter->last_visited; } -static RBNode * -rb_direct_iterator(RBTreeIterator *iter) -{ - if (iter->last_visited == NULL) - { - iter->last_visited = iter->rb->root; - return iter->last_visited; - } - - if (iter->last_visited->left != RBNIL) - { - iter->last_visited = iter->last_visited->left; - return iter->last_visited; - } - - do - { - if (iter->last_visited->right != RBNIL) - { - iter->last_visited = iter->last_visited->right; - break; - } - - /* go up and one step right */ - for (;;) - { - RBNode *came_from = iter->last_visited; - - iter->last_visited = iter->last_visited->parent; - if (iter->last_visited == NULL) - { - iter->is_over = true; - break; - } - - if ((iter->last_visited->right != came_from) && (iter->last_visited->right != RBNIL)) - { - iter->last_visited = iter->last_visited->right; - return iter->last_visited; - } - } - } - while (iter->last_visited != NULL); - - return iter->last_visited; -} - -static RBNode * -rb_inverted_iterator(RBTreeIterator *iter) -{ - RBNode *came_from; - RBNode *current; - - current = iter->last_visited; - -loop: - switch ((InvertedWalkNextStep) iter->next_step) - { - /* First call, begin from root */ - case NextStepBegin: - current = iter->rb->root; - iter->next_step = NextStepLeft; - goto loop; - - case NextStepLeft: - while (current->left != RBNIL) - current = current->left; - - iter->next_step = NextStepRight; - goto loop; - - case NextStepRight: - if (current->right != RBNIL) - { - current = current->right; - iter->next_step = NextStepLeft; - goto loop; - } - else /* not moved - return current, then go up */ - iter->next_step = NextStepUp; - break; - - case NextStepUp: - came_from = current; - current = current->parent; - if (current == NULL) - { - iter->is_over = true; - break; /* end of iteration */ - } - else if (came_from == current->right) - { - /* return current, then continue to go up */ - break; - } - else - { - /* otherwise we came from the left */ - Assert(came_from == current->left); - iter->next_step = NextStepRight; - goto loop; - } - } - - iter->last_visited = current; - return current; -} - /* * rb_begin_iterate: 
prepare to traverse the tree in any of several orders * @@ -849,7 +731,7 @@ rb_inverted_iterator(RBTreeIterator *iter) * tree are allowed. * * The iterator state is stored in the 'iter' struct. The caller should - * treat it as opaque struct. + * treat it as an opaque struct. */ void rb_begin_iterate(RBTree *rb, RBOrderControl ctrl, RBTreeIterator *iter) @@ -867,13 +749,6 @@ rb_begin_iterate(RBTree *rb, RBOrderControl ctrl, RBTreeIterator *iter) case RightLeftWalk: /* visit right, then self, then left */ iter->iterate = rb_right_left_iterator; break; - case DirectWalk: /* visit self, then left, then right */ - iter->iterate = rb_direct_iterator; - break; - case InvertedWalk: /* visit left, then right, then self */ - iter->iterate = rb_inverted_iterator; - iter->next_step = NextStepBegin; - break; default: elog(ERROR, "unrecognized rbtree iteration order: %d", ctrl); } diff --git a/src/include/lib/rbtree.h b/src/include/lib/rbtree.h index a7183bb0b4..a4288d4fc4 100644 --- a/src/include/lib/rbtree.h +++ b/src/include/lib/rbtree.h @@ -35,9 +35,7 @@ typedef struct RBTree RBTree; typedef enum RBOrderControl { LeftRightWalk, /* inorder: left child, node, right child */ - RightLeftWalk, /* reverse inorder: right, node, left */ - DirectWalk, /* preorder: node, left child, right child */ - InvertedWalk /* postorder: left child, right child, node */ + RightLeftWalk /* reverse inorder: right, node, left */ } RBOrderControl; /* @@ -52,7 +50,6 @@ struct RBTreeIterator RBTree *rb; RBNode *(*iterate) (RBTreeIterator *iter); RBNode *last_visited; - char next_step; bool is_over; }; From 610bbdd8acfcbeedad1176188f53ce5c7905e280 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 10 Sep 2017 13:26:46 -0400 Subject: [PATCH 0139/1087] Add a test harness for the red-black tree code. This improves the regression tests' coverage of rbtree.c from pretty awful (because some of the functions aren't used yet) to basically 100%. 
Victor Drobny, reviewed by Aleksander Alekseev and myself Discussion: https://postgr.es/m/c9d61310e16e75f8acaf6cb1c48b7b77@postgrespro.ru --- src/test/modules/Makefile | 1 + src/test/modules/test_rbtree/.gitignore | 4 + src/test/modules/test_rbtree/Makefile | 21 + src/test/modules/test_rbtree/README | 13 + .../test_rbtree/expected/test_rbtree.out | 12 + .../modules/test_rbtree/sql/test_rbtree.sql | 8 + .../modules/test_rbtree/test_rbtree--1.0.sql | 8 + src/test/modules/test_rbtree/test_rbtree.c | 413 ++++++++++++++++++ .../modules/test_rbtree/test_rbtree.control | 4 + 9 files changed, 484 insertions(+) create mode 100644 src/test/modules/test_rbtree/.gitignore create mode 100644 src/test/modules/test_rbtree/Makefile create mode 100644 src/test/modules/test_rbtree/README create mode 100644 src/test/modules/test_rbtree/expected/test_rbtree.out create mode 100644 src/test/modules/test_rbtree/sql/test_rbtree.sql create mode 100644 src/test/modules/test_rbtree/test_rbtree--1.0.sql create mode 100644 src/test/modules/test_rbtree/test_rbtree.c create mode 100644 src/test/modules/test_rbtree/test_rbtree.control diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile index 3ce99046f8..b7ed0af021 100644 --- a/src/test/modules/Makefile +++ b/src/test/modules/Makefile @@ -13,6 +13,7 @@ SUBDIRS = \ test_extensions \ test_parser \ test_pg_dump \ + test_rbtree \ test_rls_hooks \ test_shm_mq \ worker_spi diff --git a/src/test/modules/test_rbtree/.gitignore b/src/test/modules/test_rbtree/.gitignore new file mode 100644 index 0000000000..5dcb3ff972 --- /dev/null +++ b/src/test/modules/test_rbtree/.gitignore @@ -0,0 +1,4 @@ +# Generated subdirectories +/log/ +/results/ +/tmp_check/ diff --git a/src/test/modules/test_rbtree/Makefile b/src/test/modules/test_rbtree/Makefile new file mode 100644 index 0000000000..a4184b4d2e --- /dev/null +++ b/src/test/modules/test_rbtree/Makefile @@ -0,0 +1,21 @@ +# src/test/modules/test_rbtree/Makefile + +MODULE_big = test_rbtree +OBJS = test_rbtree.o $(WIN32RES) +PGFILEDESC = "test_rbtree - test code for red-black tree library" + +EXTENSION = test_rbtree +DATA = test_rbtree--1.0.sql + +REGRESS = test_rbtree + +ifdef USE_PGXS +PG_CONFIG = pg_config +PGXS := $(shell $(PG_CONFIG) --pgxs) +include $(PGXS) +else +subdir = src/test/modules/test_rbtree +top_builddir = ../../../.. +include $(top_builddir)/src/Makefile.global +include $(top_srcdir)/contrib/contrib-global.mk +endif diff --git a/src/test/modules/test_rbtree/README b/src/test/modules/test_rbtree/README new file mode 100644 index 0000000000..d69eb8d3e3 --- /dev/null +++ b/src/test/modules/test_rbtree/README @@ -0,0 +1,13 @@ +test_rbtree is a test module for checking the correctness of red-black +tree operations. + +These tests are performed on red-black trees that store integers. +Since the rbtree logic treats the comparison function as a black +box, it shouldn't be important exactly what the key type is. + +Checking the correctness of traversals is based on the fact that a red-black +tree is a binary search tree, so the elements should be visited in increasing +(for Left-Current-Right) or decreasing (for Right-Current-Left) order. + +Also, this module does some checks of the correctness of the find, delete +and leftmost operations. 
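
For readers skimming the patch, the calling convention this harness exercises is worth spelling out: a caller embeds RBNode as the first field of its own node struct and hands rb_create() a comparator, a combiner, an allocator, and a free function, each of which also receives the tree's opaque arg. What follows is a condensed, hypothetical sketch of that pattern; it reuses the IntRBTreeNode type and the irb_* callbacks defined in test_rbtree.c later in this patch and is illustrative only, not part of the committed module.

/*
 * Hypothetical usage sketch (not part of the patch): drive the rbtree
 * API with the IntRBTreeNode type and irb_* callbacks from test_rbtree.c.
 */
static void
rbtree_usage_sketch(void)
{
	RBTree	   *tree;
	RBTreeIterator iter;
	IntRBTreeNode node;
	IntRBTreeNode *found;
	bool		isNew;

	/* node_size covers the whole caller struct, RBNode header included */
	tree = rb_create(sizeof(IntRBTreeNode),
					 irb_cmp, irb_combine, irb_alloc, irb_free,
					 NULL);

	/* rb_insert copies the template node; isNew is false on a key collision */
	node.key = 42;
	rb_insert(tree, (RBNode *) &node, &isNew);

	/* lookups go through the same comparator */
	found = (IntRBTreeNode *) rb_find(tree, (RBNode *) &node);
	if (found == NULL || found->key != 42)
		elog(ERROR, "lookup failed");

	/* LeftRightWalk visits keys in increasing order */
	rb_begin_iterate(tree, LeftRightWalk, &iter);
	while ((found = (IntRBTreeNode *) rb_iterate(&iter)) != NULL)
		elog(NOTICE, "visited %d", found->key);
}

Because rb_insert() copies the caller's bytes into a node obtained from the supplied allocator, passing a stack-allocated template node as above is safe: the tree never retains a pointer into the caller's storage.
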
diff --git a/src/test/modules/test_rbtree/expected/test_rbtree.out b/src/test/modules/test_rbtree/expected/test_rbtree.out new file mode 100644 index 0000000000..3e3295696e --- /dev/null +++ b/src/test/modules/test_rbtree/expected/test_rbtree.out @@ -0,0 +1,12 @@ +CREATE EXTENSION test_rbtree; +-- +-- These tests don't produce any interesting output. We're checking that +-- the operations complete without crashing or hanging and that none of their +-- internal sanity tests fail. +-- +SELECT test_rb_tree(10000); + test_rb_tree +-------------- + +(1 row) + diff --git a/src/test/modules/test_rbtree/sql/test_rbtree.sql b/src/test/modules/test_rbtree/sql/test_rbtree.sql new file mode 100644 index 0000000000..d8dc88e057 --- /dev/null +++ b/src/test/modules/test_rbtree/sql/test_rbtree.sql @@ -0,0 +1,8 @@ +CREATE EXTENSION test_rbtree; + +-- +-- These tests don't produce any interesting output. We're checking that +-- the operations complete without crashing or hanging and that none of their +-- internal sanity tests fail. +-- +SELECT test_rb_tree(10000); diff --git a/src/test/modules/test_rbtree/test_rbtree--1.0.sql b/src/test/modules/test_rbtree/test_rbtree--1.0.sql new file mode 100644 index 0000000000..04f2a3ada6 --- /dev/null +++ b/src/test/modules/test_rbtree/test_rbtree--1.0.sql @@ -0,0 +1,8 @@ +/* src/test/modules/test_rbtree/test_rbtree--1.0.sql */ + +-- complain if script is sourced in psql, rather than via CREATE EXTENSION +\echo Use "CREATE EXTENSION test_rbtree" to load this file. \quit + +CREATE FUNCTION test_rb_tree(size INTEGER) + RETURNS pg_catalog.void STRICT + AS 'MODULE_PATHNAME' LANGUAGE C; diff --git a/src/test/modules/test_rbtree/test_rbtree.c b/src/test/modules/test_rbtree/test_rbtree.c new file mode 100644 index 0000000000..688ebbbbad --- /dev/null +++ b/src/test/modules/test_rbtree/test_rbtree.c @@ -0,0 +1,413 @@ +/*-------------------------------------------------------------------------- + * + * test_rbtree.c + * Test correctness of red-black tree operations. + * + * Copyright (c) 2009-2017, PostgreSQL Global Development Group + * + * IDENTIFICATION + * src/test/modules/test_rbtree/test_rbtree.c + * + * ------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#include "fmgr.h" +#include "lib/rbtree.h" +#include "utils/memutils.h" + +PG_MODULE_MAGIC; + + +/* + * Our test trees store an integer key, and nothing else. + */ +typedef struct IntRBTreeNode +{ + RBNode rbnode; + int key; +} IntRBTreeNode; + + +/* + * Node comparator. We don't worry about overflow in the subtraction, + * since none of our test keys are negative. + */ +static int +irb_cmp(const RBNode *a, const RBNode *b, void *arg) +{ + const IntRBTreeNode *ea = (const IntRBTreeNode *) a; + const IntRBTreeNode *eb = (const IntRBTreeNode *) b; + + return ea->key - eb->key; +} + +/* + * Node combiner. For testing purposes, just check that library doesn't + * try to combine unequal keys. 
+ */ +static void +irb_combine(RBNode *existing, const RBNode *newdata, void *arg) +{ + const IntRBTreeNode *eexist = (const IntRBTreeNode *) existing; + const IntRBTreeNode *enew = (const IntRBTreeNode *) newdata; + + if (eexist->key != enew->key) + elog(ERROR, "red-black tree combines %d into %d", + enew->key, eexist->key); +} + +/* Node allocator */ +static RBNode * +irb_alloc(void *arg) +{ + return (RBNode *) palloc(sizeof(IntRBTreeNode)); +} + +/* Node freer */ +static void +irb_free(RBNode *node, void *arg) +{ + pfree(node); +} + +/* + * Create a red-black tree using our support functions + */ +static RBTree * +create_int_rbtree(void) +{ + return rb_create(sizeof(IntRBTreeNode), + irb_cmp, + irb_combine, + irb_alloc, + irb_free, + NULL); +} + +/* + * Generate a random permutation of the integers 0..size-1 + */ +static int * +GetPermutation(int size) +{ + int *permutation; + int i; + + permutation = (int *) palloc(size * sizeof(int)); + + permutation[0] = 0; + + /* + * This is the "inside-out" variant of the Fisher-Yates shuffle algorithm. + * Notionally, we append each new value to the array and then swap it with + * a randomly-chosen array element (possibly including itself, else we + * fail to generate permutations with the last integer last). The swap + * step can be optimized by combining it with the insertion. + */ + for (i = 1; i < size; i++) + { + int j = random() % (i + 1); + + if (j < i) /* avoid fetching undefined data if j=i */ + permutation[i] = permutation[j]; + permutation[j] = i; + } + + return permutation; +} + +/* + * Populate an empty RBTree with "size" integers having the values + * 0, step, 2*step, 3*step, ..., inserting them in random order + */ +static void +rb_populate(RBTree *tree, int size, int step) +{ + int *permutation = GetPermutation(size); + IntRBTreeNode node; + bool isNew; + int i; + + /* Insert values. We don't expect any collisions. */ + for (i = 0; i < size; i++) + { + node.key = step * permutation[i]; + rb_insert(tree, (RBNode *) &node, &isNew); + if (!isNew) + elog(ERROR, "unexpected !isNew result from rb_insert"); + } + + /* + * Re-insert the first value to make sure collisions work right. It's + * probably not useful to test that case over again for all the values. + */ + if (size > 0) + { + node.key = step * permutation[0]; + rb_insert(tree, (RBNode *) &node, &isNew); + if (isNew) + elog(ERROR, "unexpected isNew result from rb_insert"); + } + + pfree(permutation); +} + +/* + * Check the correctness of left-right traversal. + * Left-right traversal is correct if all elements are + * visited in increasing order. 
+ */ +static void +testleftright(int size) +{ + RBTree *tree = create_int_rbtree(); + IntRBTreeNode *node; + RBTreeIterator iter; + int lastKey = -1; + int count = 0; + + /* check iteration over empty tree */ + rb_begin_iterate(tree, LeftRightWalk, &iter); + if (rb_iterate(&iter) != NULL) + elog(ERROR, "left-right walk over empty tree produced an element"); + + /* fill tree with consecutive natural numbers */ + rb_populate(tree, size, 1); + + /* iterate over the tree */ + rb_begin_iterate(tree, LeftRightWalk, &iter); + + while ((node = (IntRBTreeNode *) rb_iterate(&iter)) != NULL) + { + /* check that order is increasing */ + if (node->key <= lastKey) + elog(ERROR, "left-right walk gives elements not in sorted order"); + lastKey = node->key; + count++; + } + + if (lastKey != size - 1) + elog(ERROR, "left-right walk did not reach end"); + if (count != size) + elog(ERROR, "left-right walk missed some elements"); +} + +/* + * Check the correctness of right-left traversal. + * Right-left traversal is correct if all elements are + * visited in decreasing order. + */ +static void +testrightleft(int size) +{ + RBTree *tree = create_int_rbtree(); + IntRBTreeNode *node; + RBTreeIterator iter; + int lastKey = size; + int count = 0; + + /* check iteration over empty tree */ + rb_begin_iterate(tree, RightLeftWalk, &iter); + if (rb_iterate(&iter) != NULL) + elog(ERROR, "right-left walk over empty tree produced an element"); + + /* fill tree with consecutive natural numbers */ + rb_populate(tree, size, 1); + + /* iterate over the tree */ + rb_begin_iterate(tree, RightLeftWalk, &iter); + + while ((node = (IntRBTreeNode *) rb_iterate(&iter)) != NULL) + { + /* check that order is decreasing */ + if (node->key >= lastKey) + elog(ERROR, "right-left walk gives elements not in sorted order"); + lastKey = node->key; + count++; + } + + if (lastKey != 0) + elog(ERROR, "right-left walk did not reach end"); + if (count != size) + elog(ERROR, "right-left walk missed some elements"); +} + +/* + * Check the correctness of the rb_find operation by searching for + * both elements we inserted and elements we didn't. + */ +static void +testfind(int size) +{ + RBTree *tree = create_int_rbtree(); + int i; + + /* Insert even integers from 0 to 2 * (size-1) */ + rb_populate(tree, size, 2); + + /* Check that all inserted elements can be found */ + for (i = 0; i < size; i++) + { + IntRBTreeNode node; + IntRBTreeNode *resultNode; + + node.key = 2 * i; + resultNode = (IntRBTreeNode *) rb_find(tree, (RBNode *) &node); + if (resultNode == NULL) + elog(ERROR, "inserted element was not found"); + if (node.key != resultNode->key) + elog(ERROR, "find operation in rbtree gave wrong result"); + } + + /* + * Check that not-inserted elements can not be found, being sure to try + * values before the first and after the last element. + */ + for (i = -1; i <= 2 * size; i += 2) + { + IntRBTreeNode node; + IntRBTreeNode *resultNode; + + node.key = i; + resultNode = (IntRBTreeNode *) rb_find(tree, (RBNode *) &node); + if (resultNode != NULL) + elog(ERROR, "not-inserted element was found"); + } +} + +/* + * Check the correctness of the rb_leftmost operation. + * This operation should always return the smallest element of the tree. 
+ */ +static void +testleftmost(int size) +{ + RBTree *tree = create_int_rbtree(); + IntRBTreeNode *result; + + /* Check that empty tree has no leftmost element */ + if (rb_leftmost(tree) != NULL) + elog(ERROR, "leftmost node of empty tree is not NULL"); + + /* fill tree with consecutive natural numbers */ + rb_populate(tree, size, 1); + + /* Check that leftmost element is the smallest one */ + result = (IntRBTreeNode *) rb_leftmost(tree); + if (result == NULL || result->key != 0) + elog(ERROR, "rb_leftmost gave wrong result"); +} + +/* + * Check the correctness of the rb_delete operation. + */ +static void +testdelete(int size, int delsize) +{ + RBTree *tree = create_int_rbtree(); + int *deleteIds; + bool *chosen; + int i; + + /* fill tree with consecutive natural numbers */ + rb_populate(tree, size, 1); + + /* Choose unique ids to delete */ + deleteIds = (int *) palloc(delsize * sizeof(int)); + chosen = (bool *) palloc0(size * sizeof(bool)); + + for (i = 0; i < delsize; i++) + { + int k = random() % size; + + while (chosen[k]) + k = (k + 1) % size; + deleteIds[i] = k; + chosen[k] = true; + } + + /* Delete elements */ + for (i = 0; i < delsize; i++) + { + IntRBTreeNode find; + IntRBTreeNode *node; + + find.key = deleteIds[i]; + /* Locate the node to be deleted */ + node = (IntRBTreeNode *) rb_find(tree, (RBNode *) &find); + if (node == NULL || node->key != deleteIds[i]) + elog(ERROR, "expected element was not found during deleting"); + /* Delete it */ + rb_delete(tree, (RBNode *) node); + } + + /* Check that deleted elements are deleted */ + for (i = 0; i < size; i++) + { + IntRBTreeNode node; + IntRBTreeNode *result; + + node.key = i; + result = (IntRBTreeNode *) rb_find(tree, (RBNode *) &node); + if (chosen[i]) + { + /* Deleted element should be absent */ + if (result != NULL) + elog(ERROR, "deleted element still present in the rbtree"); + } + else + { + /* Else it should be present */ + if (result == NULL || result->key != i) + elog(ERROR, "delete operation removed wrong rbtree value"); + } + } + + /* Delete remaining elements, so as to exercise reducing tree to empty */ + for (i = 0; i < size; i++) + { + IntRBTreeNode find; + IntRBTreeNode *node; + + if (chosen[i]) + continue; + find.key = i; + /* Locate the node to be deleted */ + node = (IntRBTreeNode *) rb_find(tree, (RBNode *) &find); + if (node == NULL || node->key != i) + elog(ERROR, "expected element was not found during deleting"); + /* Delete it */ + rb_delete(tree, (RBNode *) node); + } + + /* Tree should now be empty */ + if (rb_leftmost(tree) != NULL) + elog(ERROR, "deleting all elements failed"); + + pfree(deleteIds); + pfree(chosen); +} + +/* + * SQL-callable entry point to perform all tests + * + * Argument is the number of entries to put in the trees + */ +PG_FUNCTION_INFO_V1(test_rb_tree); + +Datum +test_rb_tree(PG_FUNCTION_ARGS) +{ + int size = PG_GETARG_INT32(0); + + if (size <= 0 || size > MaxAllocSize / sizeof(int)) + elog(ERROR, "invalid size for test_rb_tree: %d", size); + testleftright(size); + testrightleft(size); + testfind(size); + testleftmost(size); + testdelete(size, Max(size / 10, 1)); + PG_RETURN_VOID(); +} diff --git a/src/test/modules/test_rbtree/test_rbtree.control b/src/test/modules/test_rbtree/test_rbtree.control new file mode 100644 index 0000000000..17966a5d3f --- /dev/null +++ b/src/test/modules/test_rbtree/test_rbtree.control @@ -0,0 +1,4 @@ +comment = 'Test code for red-black tree library' +default_version = '1.0' +module_pathname = '$libdir/test_rbtree' +relocatable = true From 
3c435952176ae5d294b37e5963cd72ddb66edead Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 10 Sep 2017 14:59:56 -0400 Subject: [PATCH 0140/1087] Quick-hack fix for foreign key cascade vs triggers with transition tables. AFTER triggers using transition tables crashed if they were fired due to a foreign key ON CASCADE update. This is because ExecEndModifyTable flushes the transition tables, on the assumption that any trigger that could need them was already fired during ExecutorFinish. Normally that's true, because we don't allow transition-table-using triggers to be deferred. However, foreign key CASCADE updates force any triggers on the referencing table to be deferred to the outer query level, by means of the EXEC_FLAG_SKIP_TRIGGERS flag. I don't recall all the details of why it's like that and am pretty loath to redesign it right now. Instead, just teach ExecEndModifyTable to skip destroying the TransitionCaptureState when that flag is set. This will allow the transition table data to survive until end of the current subtransaction. This isn't a terribly satisfactory solution, because (1) we might be leaking the transition tables for much longer than really necessary, and (2) as things stand, an AFTER STATEMENT trigger will fire once per RI updating query, ie once per row updated or deleted in the referenced table. I suspect that is not per SQL spec. But redesigning this is a research project that we're certainly not going to get done for v10. So let's go with this hackish answer for now. In passing, tweak AfterTriggerSaveEvent to not save the transition_capture pointer into the event record for a deferrable trigger. This is not necessary to fix the current bug, but it avoids letting dangling pointers to long-gone transition tables persist in the trigger event queue. That's at least a safety feature. It might also allow merging shared trigger states in more cases than before. I added a regression test that demonstrates the crash on unpatched code, and also exposes the behavior of firing the AFTER STATEMENT triggers once per row update. Per bug #14808 from Philippe Beaudoin. Back-patch to v10. Discussion: https://postgr.es/m/20170909064853.25630.12825@wrigleys.postgresql.org --- src/backend/commands/trigger.c | 4 +- src/backend/executor/nodeModifyTable.c | 10 ++++- src/test/regress/expected/triggers.out | 52 ++++++++++++++++++++++++++ src/test/regress/sql/triggers.sql | 41 ++++++++++++++++++++ 4 files changed, 104 insertions(+), 3 deletions(-) diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index da0850bfd6..bbfbc06db9 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -5474,7 +5474,9 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, new_shared.ats_tgoid = trigger->tgoid; new_shared.ats_relid = RelationGetRelid(rel); new_shared.ats_firing_id = 0; - new_shared.ats_transition_capture = transition_capture; + /* deferrable triggers cannot access transition data */ + new_shared.ats_transition_capture = + trigger->tgdeferrable ? 
NULL : transition_capture; afterTriggerAddEvent(&afterTriggers.query_stack[afterTriggers.query_depth], &new_event, &new_shared); diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index bd84778739..49586a3c03 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -2318,8 +2318,14 @@ ExecEndModifyTable(ModifyTableState *node) { int i; - /* Free transition tables */ - if (node->mt_transition_capture != NULL) + /* + * Free transition tables, unless this query is being run in + * EXEC_FLAG_SKIP_TRIGGERS mode, which means that it may have queued AFTER + * triggers that won't be run till later. In that case we'll just leak + * the transition tables till end of (sub)transaction. + */ + if (node->mt_transition_capture != NULL && + !(node->ps.state->es_top_eflags & EXEC_FLAG_SKIP_TRIGGERS)) DestroyTransitionCaptureState(node->mt_transition_capture); /* diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index ac132b042d..2f8029a2f7 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -2257,6 +2257,58 @@ create trigger my_table_multievent_trig for each statement execute procedure dump_insert(); ERROR: Transition tables cannot be specified for triggers with more than one event drop table my_table; +-- +-- Test firing of triggers with transition tables by foreign key cascades +-- +create table refd_table (a int primary key, b text); +create table trig_table (a int, b text, + foreign key (a) references refd_table on update cascade on delete cascade +); +create trigger trig_table_insert_trig + after insert on trig_table referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger trig_table_update_trig + after update on trig_table referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger trig_table_delete_trig + after delete on trig_table referencing old table as old_table + for each statement execute procedure dump_delete(); +insert into refd_table values + (1, 'one'), + (2, 'two'), + (3, 'three'); +insert into trig_table values + (1, 'one a'), + (1, 'one b'), + (2, 'two a'), + (2, 'two b'), + (3, 'three a'), + (3, 'three b'); +NOTICE: trigger = trig_table_insert_trig, new table = (1,"one a"), (1,"one b"), (2,"two a"), (2,"two b"), (3,"three a"), (3,"three b") +update refd_table set a = 11 where b = 'one'; +NOTICE: trigger = trig_table_update_trig, old table = (1,"one a"), (1,"one b"), new table = (11,"one a"), (11,"one b") +select * from trig_table; + a | b +----+--------- + 2 | two a + 2 | two b + 3 | three a + 3 | three b + 11 | one a + 11 | one b +(6 rows) + +delete from refd_table where length(b) = 3; +NOTICE: trigger = trig_table_delete_trig, old table = (2,"two a"), (2,"two b") +NOTICE: trigger = trig_table_delete_trig, old table = (11,"one a"), (11,"one b") +select * from trig_table; + a | b +---+--------- + 3 | three a + 3 | three b +(2 rows) + +drop table refd_table, trig_table; -- cleanup drop function dump_insert(); drop function dump_update(); diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index b10159a1cf..c6deb56c50 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -1771,6 +1771,47 @@ create trigger my_table_multievent_trig drop table my_table; +-- +-- Test firing of triggers with transition tables by foreign key cascades +-- + 
+create table refd_table (a int primary key, b text); +create table trig_table (a int, b text, + foreign key (a) references refd_table on update cascade on delete cascade +); + +create trigger trig_table_insert_trig + after insert on trig_table referencing new table as new_table + for each statement execute procedure dump_insert(); +create trigger trig_table_update_trig + after update on trig_table referencing old table as old_table new table as new_table + for each statement execute procedure dump_update(); +create trigger trig_table_delete_trig + after delete on trig_table referencing old table as old_table + for each statement execute procedure dump_delete(); + +insert into refd_table values + (1, 'one'), + (2, 'two'), + (3, 'three'); +insert into trig_table values + (1, 'one a'), + (1, 'one b'), + (2, 'two a'), + (2, 'two b'), + (3, 'three a'), + (3, 'three b'); + +update refd_table set a = 11 where b = 'one'; + +select * from trig_table; + +delete from refd_table where length(b) = 3; + +select * from trig_table; + +drop table refd_table, trig_table; + -- cleanup drop function dump_insert(); drop function dump_update(); From 821fb8cdbf700a8aadbe12d5b46ca4e61be5a8a8 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 11 Sep 2017 11:20:47 -0400 Subject: [PATCH 0141/1087] Message style fixes --- doc/src/sgml/catalogs.sgml | 2 +- doc/src/sgml/perform.sgml | 4 ++-- doc/src/sgml/ref/create_statistics.sgml | 10 ++++---- doc/src/sgml/ref/psql-ref.sgml | 2 +- src/backend/access/transam/twophase.c | 14 +++++------ src/backend/access/transam/xlog.c | 2 +- src/backend/commands/publicationcmds.c | 2 +- src/backend/commands/statscmds.c | 10 ++++---- src/backend/commands/tablecmds.c | 2 +- src/backend/commands/trigger.c | 2 +- src/backend/executor/execReplication.c | 4 ++-- src/backend/libpq/hba.c | 2 +- src/backend/replication/logical/relation.c | 5 ++-- src/backend/replication/logical/worker.c | 6 ++--- src/backend/replication/pgoutput/pgoutput.c | 2 +- src/backend/rewrite/rewriteDefine.c | 4 ++-- src/backend/storage/lmgr/predicate.c | 2 +- src/backend/utils/adt/int8.c | 8 +++---- src/backend/utils/adt/jsonfuncs.c | 6 ++--- src/backend/utils/adt/mac8.c | 4 ++-- src/backend/utils/adt/numutils.c | 12 +++++----- src/backend/utils/adt/txid.c | 14 +++++------ src/backend/utils/misc/guc.c | 6 ++--- src/backend/utils/time/snapmgr.c | 2 +- src/bin/psql/variables.c | 2 +- src/include/catalog/pg_statistic_ext.h | 2 +- src/test/regress/expected/alter_table.out | 2 +- src/test/regress/expected/foreign_data.out | 2 +- src/test/regress/expected/json.out | 26 ++++++++++----------- src/test/regress/expected/jsonb.out | 26 ++++++++++----------- src/test/regress/expected/psql.out | 4 ++-- src/test/regress/expected/rules.out | 4 ++-- src/test/regress/expected/stats_ext.out | 4 ++-- src/test/regress/expected/triggers.out | 2 +- 34 files changed, 100 insertions(+), 101 deletions(-) diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index 4978b47f0e..9af77c1f5a 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -6453,7 +6453,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< char[] - An array containing codes for the enabled statistic types; + An array containing codes for the enabled statistic kinds; valid values are: d for n-distinct statistics, f for functional dependency statistics diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml index 1346328653..d3b47bc5a5 100644 --- a/doc/src/sgml/perform.sgml +++ b/doc/src/sgml/perform.sgml @@ -1107,7 +1107,7 @@ WHERE 
tablename = 'road'; - The following subsections describe the types of extended statistics + The following subsections describe the kinds of extended statistics that are currently supported. @@ -1115,7 +1115,7 @@ WHERE tablename = 'road'; Functional Dependencies - The simplest type of extended statistics tracks functional + The simplest kind of extended statistics tracks functional dependencies, a concept used in definitions of database normal forms. We say that column b is functionally dependent on column a if knowledge of the value of diff --git a/doc/src/sgml/ref/create_statistics.sgml b/doc/src/sgml/ref/create_statistics.sgml index deda21fec7..ef4e4852bd 100644 --- a/doc/src/sgml/ref/create_statistics.sgml +++ b/doc/src/sgml/ref/create_statistics.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation CREATE STATISTICS [ IF NOT EXISTS ] statistics_name - [ ( statistic_type [, ... ] ) ] + [ ( statistics_kind [, ... ] ) ] ON column_name, column_name [, ...] FROM table_name @@ -76,15 +76,15 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - statistic_type + statistics_kind - A statistic type to be computed in this statistics object. - Currently supported types are + A statistics kind to be computed in this statistics object. + Currently supported kinds are ndistinct, which enables n-distinct statistics, and dependencies, which enables functional dependency statistics. - If this clause is omitted, all supported statistic types are + If this clause is omitted, all supported statistics kinds are included in the statistics object. For more information, see and . diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index 79468a5663..a74caf8a6c 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -3629,7 +3629,7 @@ bar will terminate the application. If set to a larger numeric value, that many consecutive EOF characters must be typed to make an interactive session terminate. If the variable is set to a - non-numeric value, it is interpreted as 10. + non-numeric value, it is interpreted as 10. The default is 0. 
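The backend C hunks that follow apply a handful of recurring message-style rules rather than one localized fix. Inferred from the changes: the primary errmsg() starts lowercase and takes no trailing period; errdetail() and errhint() are complete sentences, capitalized and period-terminated (hence the many regression-output updates later in the patch); "cannot" is preferred for operations that are disallowed outright, while "could not" is kept for runtime failures; abbreviations such as PID are capitalized; and integers are formatted with %d, not %i. For reference, a sketch in that style, assembled verbatim from the snapmgr.c/predicate.c hunks below (sourcepid is the variable those hunks use):

    /* Style illustration only; message text is taken from hunks in this patch. */
    ereport(ERROR,
            (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
             /* primary message: starts lowercase, no trailing period;
              * "could not" because this is a runtime failure, not a
              * permanent refusal like the "cannot convert" cases below */
             errmsg("could not import the requested snapshot"),
             /* detail: complete sentence, capitalized, period-terminated;
              * "PID" capitalized, integer formatted with %d */
             errdetail("The source process with PID %d is not running anymore.",
                       sourcepid)));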
diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c index ba03d9687e..ae832917ce 100644 --- a/src/backend/access/transam/twophase.c +++ b/src/backend/access/transam/twophase.c @@ -2031,14 +2031,14 @@ ProcessTwoPhaseBuffer(TransactionId xid, if (fromdisk) { ereport(WARNING, - (errmsg("removing stale two-phase state file for \"%u\"", + (errmsg("removing stale two-phase state file for transaction %u", xid))); RemoveTwoPhaseFile(xid, true); } else { ereport(WARNING, - (errmsg("removing stale two-phase state from shared memory for \"%u\"", + (errmsg("removing stale two-phase state from memory for transaction %u", xid))); PrepareRedoRemove(xid, true); } @@ -2051,14 +2051,14 @@ ProcessTwoPhaseBuffer(TransactionId xid, if (fromdisk) { ereport(WARNING, - (errmsg("removing future two-phase state file for \"%u\"", + (errmsg("removing future two-phase state file for transaction %u", xid))); RemoveTwoPhaseFile(xid, true); } else { ereport(WARNING, - (errmsg("removing future two-phase state from memory for \"%u\"", + (errmsg("removing future two-phase state from memory for transaction %u", xid))); PrepareRedoRemove(xid, true); } @@ -2072,7 +2072,7 @@ ProcessTwoPhaseBuffer(TransactionId xid, if (buf == NULL) { ereport(WARNING, - (errmsg("removing corrupt two-phase state file for \"%u\"", + (errmsg("removing corrupt two-phase state file for transaction %u", xid))); RemoveTwoPhaseFile(xid, true); return NULL; @@ -2091,14 +2091,14 @@ ProcessTwoPhaseBuffer(TransactionId xid, if (fromdisk) { ereport(WARNING, - (errmsg("removing corrupt two-phase state file for \"%u\"", + (errmsg("removing corrupt two-phase state file for transaction %u", xid))); RemoveTwoPhaseFile(xid, true); } else { ereport(WARNING, - (errmsg("removing corrupt two-phase state from memory for \"%u\"", + (errmsg("removing corrupt two-phase state from memory for transaction %u", xid))); PrepareRedoRemove(xid, true); } diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index 442341a707..a3e8ce092f 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -8658,7 +8658,7 @@ CreateCheckPoint(int flags) LWLockRelease(CheckpointLock); END_CRIT_SECTION(); ereport(DEBUG1, - (errmsg("checkpoint skipped due to an idle system"))); + (errmsg("checkpoint skipped because system is idle"))); return; } } diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c index 610cb499d2..f298d3d381 100644 --- a/src/backend/commands/publicationcmds.c +++ b/src/backend/commands/publicationcmds.c @@ -103,7 +103,7 @@ parse_publication_options(List *options, if (!SplitIdentifierString(publish, ',', &publish_list)) ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR), - errmsg("invalid publish list"))); + errmsg("invalid list syntax for \"publish\" option"))); /* Process the option list. 
*/ foreach(lc, publish_list) diff --git a/src/backend/commands/statscmds.c b/src/backend/commands/statscmds.c index 476505512b..c70a28de4b 100644 --- a/src/backend/commands/statscmds.c +++ b/src/backend/commands/statscmds.c @@ -180,7 +180,7 @@ CreateStatistics(CreateStatsStmt *stmt) if (!HeapTupleIsValid(atttuple)) ereport(ERROR, (errcode(ERRCODE_UNDEFINED_COLUMN), - errmsg("column \"%s\" referenced in statistics does not exist", + errmsg("column \"%s\" does not exist", attname))); attForm = (Form_pg_attribute) GETSTRUCT(atttuple); @@ -195,8 +195,8 @@ CreateStatistics(CreateStatsStmt *stmt) if (type->lt_opr == InvalidOid) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("column \"%s\" cannot be used in statistics because its type has no default btree operator class", - attname))); + errmsg("column \"%s\" cannot be used in statistics because its type %s has no default btree operator class", + attname, format_type_be(attForm->atttypid)))); /* Make sure no more than STATS_MAX_DIMENSIONS columns are used */ if (numcols >= STATS_MAX_DIMENSIONS) @@ -242,7 +242,7 @@ CreateStatistics(CreateStatsStmt *stmt) stxkeys = buildint2vector(attnums, numcols); /* - * Parse the statistics types. + * Parse the statistics kinds. */ build_ndistinct = false; build_dependencies = false; @@ -263,7 +263,7 @@ CreateStatistics(CreateStatsStmt *stmt) else ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR), - errmsg("unrecognized statistic type \"%s\"", + errmsg("unrecognized statistics kind \"%s\"", type))); } /* If no statistic type was specified, build them all. */ diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index d2167eda23..96354bdee5 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -13859,7 +13859,7 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) errmsg("table \"%s\" contains column \"%s\" not found in parent \"%s\"", RelationGetRelationName(attachrel), attributeName, RelationGetRelationName(rel)), - errdetail("New partition should contain only the columns present in parent."))); + errdetail("The new partition may contain only the columns present in parent."))); } /* diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index bbfbc06db9..269c9e17dd 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -416,7 +416,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString, (TRIGGER_FOR_DELETE(tgtype) ? 
1 : 0)) != 1) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("Transition tables cannot be specified for triggers with more than one event"))); + errmsg("transition tables cannot be specified for triggers with more than one event"))); if (tt->isNew) { diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c index fbb8108512..5a75e0211f 100644 --- a/src/backend/executor/execReplication.c +++ b/src/backend/executor/execReplication.c @@ -559,13 +559,13 @@ CheckCmdReplicaIdentity(Relation rel, CmdType cmd) if (cmd == CMD_UPDATE && pubactions->pubupdate) ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("cannot update table \"%s\" because it does not have replica identity and publishes updates", + errmsg("cannot update table \"%s\" because it does not have a replica identity and publishes updates", RelationGetRelationName(rel)), errhint("To enable updating the table, set REPLICA IDENTITY using ALTER TABLE."))); else if (cmd == CMD_DELETE && pubactions->pubdelete) ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("cannot delete from table \"%s\" because it does not have replica identity and publishes deletes", + errmsg("cannot delete from table \"%s\" because it does not have a replica identity and publishes deletes", RelationGetRelationName(rel)), errhint("To enable deleting from the table, set REPLICA IDENTITY using ALTER TABLE."))); } diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c index 42afead9fd..ba011b6d61 100644 --- a/src/backend/libpq/hba.c +++ b/src/backend/libpq/hba.c @@ -1608,7 +1608,7 @@ verify_option_list_length(List *options, char *optionname, List *masters, char * ereport(LOG, (errcode(ERRCODE_CONFIG_FILE_ERROR), - errmsg("the number of %s (%i) must be 1 or the same as the number of %s (%i)", + errmsg("the number of %s (%d) must be 1 or the same as the number of %s (%d)", optionname, list_length(options), mastername, diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c index 408143ae95..4b2d8a153f 100644 --- a/src/backend/replication/logical/relation.c +++ b/src/backend/replication/logical/relation.c @@ -454,9 +454,8 @@ logicalrep_typmap_getid(Oid remoteid) { if (!get_typisdefined(remoteid)) ereport(ERROR, - (errmsg("builtin type %u not found", remoteid), - errhint("This can be caused by having publisher with " - "higher major version than subscriber"))); + (errmsg("built-in type %u not found", remoteid), + errhint("This can be caused by having a publisher with a higher PostgreSQL major version than the subscriber."))); return remoteid; } diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c index 041f3873b9..bc6d8246a7 100644 --- a/src/backend/replication/logical/worker.c +++ b/src/backend/replication/logical/worker.c @@ -629,7 +629,7 @@ check_relation_updatable(LogicalRepRelMapEntry *rel) { ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("publisher does not send replica identity column " + errmsg("publisher did not send replica identity column " "expected by the logical replication target relation \"%s.%s\"", rel->remoterel.nspname, rel->remoterel.relname))); } @@ -844,7 +844,7 @@ apply_handle_delete(StringInfo s) /* The tuple to be deleted could not be found. 
*/ ereport(DEBUG1, (errmsg("logical replication could not find row for delete " - "in replication target %s", + "in replication target relation \"%s\"", RelationGetRelationName(rel->localrel)))); } @@ -910,7 +910,7 @@ apply_dispatch(StringInfo s) default: ereport(ERROR, (errcode(ERRCODE_PROTOCOL_VIOLATION), - errmsg("invalid logical replication message type %c", action))); + errmsg("invalid logical replication message type \"%c\"", action))); } } diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c index 67c1d3b246..9ab954a6e0 100644 --- a/src/backend/replication/pgoutput/pgoutput.c +++ b/src/backend/replication/pgoutput/pgoutput.c @@ -115,7 +115,7 @@ parse_output_parameters(List *options, uint32 *protocol_version, if (parsed > PG_UINT32_MAX || parsed < 0) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("proto_verson \"%s\" out of range", + errmsg("proto_version \"%s\" out of range", strVal(defel->arg)))); *protocol_version = (uint32) parsed; diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c index d03984a2de..071b3a9ec9 100644 --- a/src/backend/rewrite/rewriteDefine.c +++ b/src/backend/rewrite/rewriteDefine.c @@ -425,13 +425,13 @@ DefineQueryRewrite(char *rulename, if (event_relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), - errmsg("could not convert partitioned table \"%s\" to a view", + errmsg("cannot convert partitioned table \"%s\" to a view", RelationGetRelationName(event_relation)))); if (event_relation->rd_rel->relispartition) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), - errmsg("could not convert partition \"%s\" to a view", + errmsg("cannot convert partition \"%s\" to a view", RelationGetRelationName(event_relation)))); snapshot = RegisterSnapshot(GetLatestSnapshot()); diff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c index 6a6d9d6d5c..251a359bff 100644 --- a/src/backend/storage/lmgr/predicate.c +++ b/src/backend/storage/lmgr/predicate.c @@ -1769,7 +1769,7 @@ GetSerializableTransactionSnapshotInt(Snapshot snapshot, ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), errmsg("could not import the requested snapshot"), - errdetail("The source process with pid %d is not running anymore.", + errdetail("The source process with PID %d is not running anymore.", sourcepid))); } diff --git a/src/backend/utils/adt/int8.c b/src/backend/utils/adt/int8.c index e8354dee44..afa434cfee 100644 --- a/src/backend/utils/adt/int8.c +++ b/src/backend/utils/adt/int8.c @@ -95,8 +95,8 @@ scanint8(const char *str, bool errorOK, int64 *result) else ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), - errmsg("invalid input syntax for %s: \"%s\"", - "integer", str))); + errmsg("invalid input syntax for integer: \"%s\"", + str))); } /* process digits */ @@ -130,8 +130,8 @@ scanint8(const char *str, bool errorOK, int64 *result) else ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), - errmsg("invalid input syntax for %s: \"%s\"", - "integer", str))); + errmsg("invalid input syntax for integer: \"%s\"", + str))); } *result = (sign < 0) ? 
-tmp : tmp; diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c index 619547d6bf..68feeb2c5b 100644 --- a/src/backend/utils/adt/jsonfuncs.c +++ b/src/backend/utils/adt/jsonfuncs.c @@ -2314,7 +2314,7 @@ populate_array_report_expected_array(PopulateArrayContext *ctx, int ndim) ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), errmsg("expected json array"), - errhint("see the value of key \"%s\"", ctx->colname))); + errhint("See the value of key \"%s\".", ctx->colname))); else ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), @@ -2336,13 +2336,13 @@ populate_array_report_expected_array(PopulateArrayContext *ctx, int ndim) ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), errmsg("expected json array"), - errhint("see the array element %s of key \"%s\"", + errhint("See the array element %s of key \"%s\".", indices.data, ctx->colname))); else ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), errmsg("expected json array"), - errhint("see the array element %s", + errhint("See the array element %s.", indices.data))); } } diff --git a/src/backend/utils/adt/mac8.c b/src/backend/utils/adt/mac8.c index 0410b9888a..1533cfdca0 100644 --- a/src/backend/utils/adt/mac8.c +++ b/src/backend/utils/adt/mac8.c @@ -562,8 +562,8 @@ macaddr8tomacaddr(PG_FUNCTION_ARGS) (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("macaddr8 data out of range to convert to macaddr"), errhint("Only addresses that have FF and FE as values in the " - "4th and 5th bytes, from the left, for example: " - "XX-XX-XX-FF-FE-XX-XX-XX, are eligible to be converted " + "4th and 5th bytes from the left, for example " + "xx:xx:xx:ff:fe:xx:xx:xx, are eligible to be converted " "from macaddr8 to macaddr."))); result->a = addr->a; diff --git a/src/backend/utils/adt/numutils.c b/src/backend/utils/adt/numutils.c index 07682723b7..244904ea94 100644 --- a/src/backend/utils/adt/numutils.c +++ b/src/backend/utils/adt/numutils.c @@ -48,8 +48,8 @@ pg_atoi(const char *s, int size, int c) if (*s == 0) ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), - errmsg("invalid input syntax for %s: \"%s\"", - "integer", s))); + errmsg("invalid input syntax for integer: \"%s\"", + s))); errno = 0; l = strtol(s, &badp, 10); @@ -58,8 +58,8 @@ pg_atoi(const char *s, int size, int c) if (s == badp) ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), - errmsg("invalid input syntax for %s: \"%s\"", - "integer", s))); + errmsg("invalid input syntax for integer: \"%s\"", + s))); switch (size) { @@ -102,8 +102,8 @@ pg_atoi(const char *s, int size, int c) if (*badp && *badp != c) ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), - errmsg("invalid input syntax for %s: \"%s\"", - "integer", s))); + errmsg("invalid input syntax for integer: \"%s\"", + s))); return (int32) l; } diff --git a/src/backend/utils/adt/txid.c b/src/backend/utils/adt/txid.c index 5dd996f62c..1e38ca2aa5 100644 --- a/src/backend/utils/adt/txid.c +++ b/src/backend/utils/adt/txid.c @@ -132,8 +132,8 @@ TransactionIdInRecentPast(uint64 xid_with_epoch, TransactionId *extracted_xid) || (xid_epoch == now_epoch && xid > now_epoch_last_xid)) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("transaction ID " UINT64_FORMAT " is in the future", - xid_with_epoch))); + errmsg("transaction ID %s is in the future", + psprintf(UINT64_FORMAT, xid_with_epoch)))); /* * ShmemVariableCache->oldestClogXid is protected by CLogTruncationLock, @@ -755,11 +755,11 @@ txid_status(PG_FUNCTION_ARGS) 
Assert(TransactionIdIsValid(xid)); if (TransactionIdIsCurrentTransactionId(xid)) - status = gettext_noop("in progress"); + status = "in progress"; else if (TransactionIdDidCommit(xid)) - status = gettext_noop("committed"); + status = "committed"; else if (TransactionIdDidAbort(xid)) - status = gettext_noop("aborted"); + status = "aborted"; else { /* @@ -774,9 +774,9 @@ txid_status(PG_FUNCTION_ARGS) * checked commit/abort status). */ if (TransactionIdPrecedes(xid, GetActiveSnapshot()->xmin)) - status = gettext_noop("aborted"); + status = "aborted"; else - status = gettext_noop("in progress"); + status = "in progress"; } } else diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 969e80f756..a05fb1a7eb 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -2201,7 +2201,7 @@ static struct config_int ConfigureNamesInt[] = {"max_pred_locks_per_relation", PGC_SIGHUP, LOCK_MANAGEMENT, gettext_noop("Sets the maximum number of predicate-locked pages and tuples per relation."), gettext_noop("If more than this total of pages and tuples in the same relation are locked " - "by a connection, those locks are replaced by a relation level lock.") + "by a connection, those locks are replaced by a relation-level lock.") }, &max_predicate_locks_per_relation, -2, INT_MIN, INT_MAX, @@ -2212,7 +2212,7 @@ static struct config_int ConfigureNamesInt[] = {"max_pred_locks_per_page", PGC_SIGHUP, LOCK_MANAGEMENT, gettext_noop("Sets the maximum number of predicate-locked tuples per page."), gettext_noop("If more than this number of tuples on the same page are locked " - "by a connection, those locks are replaced by a page level lock.") + "by a connection, those locks are replaced by a page-level lock.") }, &max_predicate_locks_per_page, 2, 0, INT_MAX, @@ -3608,7 +3608,7 @@ static struct config_string ConfigureNamesString[] = { {"ssl_dh_params_file", PGC_SIGHUP, CONN_AUTH_SECURITY, - gettext_noop("Location of the SSL DH params file."), + gettext_noop("Location of the SSL DH parameters file."), NULL, GUC_SUPERUSER_ONLY }, diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c index 08a08c8e8f..294ab705f1 100644 --- a/src/backend/utils/time/snapmgr.c +++ b/src/backend/utils/time/snapmgr.c @@ -625,7 +625,7 @@ SetTransactionSnapshot(Snapshot sourcesnap, VirtualTransactionId *sourcevxid, ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), errmsg("could not import the requested snapshot"), - errdetail("The source process with pid %d is not running anymore.", + errdetail("The source process with PID %d is not running anymore.", sourcepid))); /* diff --git a/src/bin/psql/variables.c b/src/bin/psql/variables.c index c6a59ed478..5f7f6ce822 100644 --- a/src/bin/psql/variables.c +++ b/src/bin/psql/variables.c @@ -136,7 +136,7 @@ ParseVariableBool(const char *value, const char *name, bool *result) { /* string is not recognized; don't clobber *result */ if (name) - psql_error("unrecognized value \"%s\" for \"%s\": boolean expected\n", + psql_error("unrecognized value \"%s\" for \"%s\": Boolean expected\n", value, name); valid = false; } diff --git a/src/include/catalog/pg_statistic_ext.h b/src/include/catalog/pg_statistic_ext.h index 78138026db..e6d1a8c3bc 100644 --- a/src/include/catalog/pg_statistic_ext.h +++ b/src/include/catalog/pg_statistic_ext.h @@ -45,7 +45,7 @@ CATALOG(pg_statistic_ext,3381) int2vector stxkeys; /* array of column keys */ #ifdef CATALOG_VARLEN - char stxkind[1] BKI_FORCE_NOT_NULL; /* statistic types requested + char 
stxkind[1] BKI_FORCE_NOT_NULL; /* statistics kinds requested * to build */ pg_ndistinct stxndistinct; /* ndistinct coefficients (serialized) */ pg_dependencies stxdependencies; /* dependencies (serialized) */ diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 0d400d9778..0478a8ac60 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3242,7 +3242,7 @@ DROP TABLE fail_part; CREATE TABLE fail_part (like list_parted, c int); ALTER TABLE list_parted ATTACH PARTITION fail_part FOR VALUES IN (1); ERROR: table "fail_part" contains column "c" not found in parent "list_parted" -DETAIL: New partition should contain only the columns present in parent. +DETAIL: The new partition may contain only the columns present in parent. DROP TABLE fail_part; -- check that the table being attached has every column of the parent CREATE TABLE fail_part (a int NOT NULL); diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out index 927d0189a0..c6e558b07f 100644 --- a/src/test/regress/expected/foreign_data.out +++ b/src/test/regress/expected/foreign_data.out @@ -1872,7 +1872,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value') ALTER TABLE pt2 ATTACH PARTITION pt2_1 FOR VALUES IN (1); -- ERROR ERROR: table "pt2_1" contains column "c4" not found in parent "pt2" -DETAIL: New partition should contain only the columns present in parent. +DETAIL: The new partition may contain only the columns present in parent. DROP FOREIGN TABLE pt2_1; \d+ pt2 Table "public.pt2" diff --git a/src/test/regress/expected/json.out b/src/test/regress/expected/json.out index b25e20ca20..d7abae9867 100644 --- a/src/test/regress/expected/json.out +++ b/src/test/regress/expected/json.out @@ -1400,7 +1400,7 @@ SELECT ia FROM json_populate_record(NULL::jsrec, '{"ia": null}') q; SELECT ia FROM json_populate_record(NULL::jsrec, '{"ia": 123}') q; ERROR: expected json array -HINT: see the value of key "ia" +HINT: See the value of key "ia". SELECT ia FROM json_populate_record(NULL::jsrec, '{"ia": [1, "2", null, 4]}') q; ia -------------- @@ -1415,7 +1415,7 @@ SELECT ia FROM json_populate_record(NULL::jsrec, '{"ia": [[1, 2], [3, 4]]}') q; SELECT ia FROM json_populate_record(NULL::jsrec, '{"ia": [[1], 2]}') q; ERROR: expected json array -HINT: see the array element [1] of key "ia" +HINT: See the array element [1] of key "ia". SELECT ia FROM json_populate_record(NULL::jsrec, '{"ia": [[1], [2, 3]]}') q; ERROR: malformed json array DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions. @@ -1433,7 +1433,7 @@ SELECT ia1 FROM json_populate_record(NULL::jsrec, '{"ia1": null}') q; SELECT ia1 FROM json_populate_record(NULL::jsrec, '{"ia1": 123}') q; ERROR: expected json array -HINT: see the value of key "ia1" +HINT: See the value of key "ia1". SELECT ia1 FROM json_populate_record(NULL::jsrec, '{"ia1": [1, "2", null, 4]}') q; ia1 -------------- @@ -1454,7 +1454,7 @@ SELECT ia1d FROM json_populate_record(NULL::jsrec, '{"ia1d": null}') q; SELECT ia1d FROM json_populate_record(NULL::jsrec, '{"ia1d": 123}') q; ERROR: expected json array -HINT: see the value of key "ia1d" +HINT: See the value of key "ia1d". 
SELECT ia1d FROM json_populate_record(NULL::jsrec, '{"ia1d": [1, "2", null, 4]}') q; ERROR: value for domain js_int_array_1d violates check constraint "js_int_array_1d_check" SELECT ia1d FROM json_populate_record(NULL::jsrec, '{"ia1d": [1, "2", null]}') q; @@ -1486,7 +1486,7 @@ ERROR: malformed json array DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions. SELECT ia2 FROM json_populate_record(NULL::jsrec, '{"ia2": [[1, 2], 3, 4]}') q; ERROR: expected json array -HINT: see the array element [1] of key "ia2" +HINT: See the array element [1] of key "ia2". SELECT ia2d FROM json_populate_record(NULL::jsrec, '{"ia2d": [[1, "2"], [null, 4]]}') q; ERROR: value for domain js_int_array_2d violates check constraint "js_int_array_2d_check" SELECT ia2d FROM json_populate_record(NULL::jsrec, '{"ia2d": [[1, "2", 3], [null, 5, 6]]}') q; @@ -1536,7 +1536,7 @@ SELECT ta FROM json_populate_record(NULL::jsrec, '{"ta": null}') q; SELECT ta FROM json_populate_record(NULL::jsrec, '{"ta": 123}') q; ERROR: expected json array -HINT: see the value of key "ta" +HINT: See the value of key "ta". SELECT ta FROM json_populate_record(NULL::jsrec, '{"ta": [1, "2", null, 4]}') q; ta -------------- @@ -1545,7 +1545,7 @@ SELECT ta FROM json_populate_record(NULL::jsrec, '{"ta": [1, "2", null, 4]}') q; SELECT ta FROM json_populate_record(NULL::jsrec, '{"ta": [[1, 2, 3], {"k": "v"}]}') q; ERROR: expected json array -HINT: see the array element [1] of key "ta" +HINT: See the array element [1] of key "ta". SELECT c FROM json_populate_record(NULL::jsrec, '{"c": null}') q; c --- @@ -1574,7 +1574,7 @@ SELECT ca FROM json_populate_record(NULL::jsrec, '{"ca": null}') q; SELECT ca FROM json_populate_record(NULL::jsrec, '{"ca": 123}') q; ERROR: expected json array -HINT: see the value of key "ca" +HINT: See the value of key "ca". SELECT ca FROM json_populate_record(NULL::jsrec, '{"ca": [1, "2", null, 4]}') q; ca ----------------------------------------------- @@ -1585,7 +1585,7 @@ SELECT ca FROM json_populate_record(NULL::jsrec, '{"ca": ["aaaaaaaaaaaaaaaa"]}') ERROR: value too long for type character(10) SELECT ca FROM json_populate_record(NULL::jsrec, '{"ca": [[1, 2, 3], {"k": "v"}]}') q; ERROR: expected json array -HINT: see the array element [1] of key "ca" +HINT: See the array element [1] of key "ca". SELECT js FROM json_populate_record(NULL::jsrec, '{"js": null}') q; js ---- @@ -1678,7 +1678,7 @@ SELECT jsa FROM json_populate_record(NULL::jsrec, '{"jsa": null}') q; SELECT jsa FROM json_populate_record(NULL::jsrec, '{"jsa": 123}') q; ERROR: expected json array -HINT: see the value of key "jsa" +HINT: See the value of key "jsa". SELECT jsa FROM json_populate_record(NULL::jsrec, '{"jsa": [1, "2", null, 4]}') q; jsa -------------------- @@ -1709,7 +1709,7 @@ SELECT rec FROM json_populate_record(NULL::jsrec, '{"rec": "(abc,42,01.02.2003)" SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": 123}') q; ERROR: expected json array -HINT: see the value of key "reca" +HINT: See the value of key "reca". SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": [1, 2]}') q; ERROR: cannot call populate_composite on a scalar SELECT reca FROM json_populate_record(NULL::jsrec, '{"reca": [{"a": "abc", "b": 456}, null, {"c": "01.02.2003", "x": 43.2}]}') q; @@ -2043,7 +2043,7 @@ select * from json_to_record('{"ia": null}') as x(ia _int4); select * from json_to_record('{"ia": 123}') as x(ia _int4); ERROR: expected json array -HINT: see the value of key "ia" +HINT: See the value of key "ia". 
select * from json_to_record('{"ia": [1, "2", null, 4]}') as x(ia _int4); ia -------------- @@ -2058,7 +2058,7 @@ select * from json_to_record('{"ia": [[1, 2], [3, 4]]}') as x(ia _int4); select * from json_to_record('{"ia": [[1], 2]}') as x(ia _int4); ERROR: expected json array -HINT: see the array element [1] of key "ia" +HINT: See the array element [1] of key "ia". select * from json_to_record('{"ia": [[1], [2, 3]]}') as x(ia _int4); ERROR: malformed json array DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions. diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out index 79547035bd..dcea6a47a3 100644 --- a/src/test/regress/expected/jsonb.out +++ b/src/test/regress/expected/jsonb.out @@ -1984,7 +1984,7 @@ SELECT ia FROM jsonb_populate_record(NULL::jsbrec, '{"ia": null}') q; SELECT ia FROM jsonb_populate_record(NULL::jsbrec, '{"ia": 123}') q; ERROR: expected json array -HINT: see the value of key "ia" +HINT: See the value of key "ia". SELECT ia FROM jsonb_populate_record(NULL::jsbrec, '{"ia": [1, "2", null, 4]}') q; ia -------------- @@ -1999,7 +1999,7 @@ SELECT ia FROM jsonb_populate_record(NULL::jsbrec, '{"ia": [[1, 2], [3, 4]]}') q SELECT ia FROM jsonb_populate_record(NULL::jsbrec, '{"ia": [[1], 2]}') q; ERROR: expected json array -HINT: see the array element [1] of key "ia" +HINT: See the array element [1] of key "ia". SELECT ia FROM jsonb_populate_record(NULL::jsbrec, '{"ia": [[1], [2, 3]]}') q; ERROR: malformed json array DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions. @@ -2017,7 +2017,7 @@ SELECT ia1 FROM jsonb_populate_record(NULL::jsbrec, '{"ia1": null}') q; SELECT ia1 FROM jsonb_populate_record(NULL::jsbrec, '{"ia1": 123}') q; ERROR: expected json array -HINT: see the value of key "ia1" +HINT: See the value of key "ia1". SELECT ia1 FROM jsonb_populate_record(NULL::jsbrec, '{"ia1": [1, "2", null, 4]}') q; ia1 -------------- @@ -2038,7 +2038,7 @@ SELECT ia1d FROM jsonb_populate_record(NULL::jsbrec, '{"ia1d": null}') q; SELECT ia1d FROM jsonb_populate_record(NULL::jsbrec, '{"ia1d": 123}') q; ERROR: expected json array -HINT: see the value of key "ia1d" +HINT: See the value of key "ia1d". SELECT ia1d FROM jsonb_populate_record(NULL::jsbrec, '{"ia1d": [1, "2", null, 4]}') q; ERROR: value for domain jsb_int_array_1d violates check constraint "jsb_int_array_1d_check" SELECT ia1d FROM jsonb_populate_record(NULL::jsbrec, '{"ia1d": [1, "2", null]}') q; @@ -2070,7 +2070,7 @@ ERROR: malformed json array DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions. SELECT ia2 FROM jsonb_populate_record(NULL::jsbrec, '{"ia2": [[1, 2], 3, 4]}') q; ERROR: expected json array -HINT: see the array element [1] of key "ia2" +HINT: See the array element [1] of key "ia2". SELECT ia2d FROM jsonb_populate_record(NULL::jsbrec, '{"ia2d": [[1, "2"], [null, 4]]}') q; ERROR: value for domain jsb_int_array_2d violates check constraint "jsb_int_array_2d_check" SELECT ia2d FROM jsonb_populate_record(NULL::jsbrec, '{"ia2d": [[1, "2", 3], [null, 5, 6]]}') q; @@ -2120,7 +2120,7 @@ SELECT ta FROM jsonb_populate_record(NULL::jsbrec, '{"ta": null}') q; SELECT ta FROM jsonb_populate_record(NULL::jsbrec, '{"ta": 123}') q; ERROR: expected json array -HINT: see the value of key "ta" +HINT: See the value of key "ta". 
SELECT ta FROM jsonb_populate_record(NULL::jsbrec, '{"ta": [1, "2", null, 4]}') q; ta -------------- @@ -2129,7 +2129,7 @@ SELECT ta FROM jsonb_populate_record(NULL::jsbrec, '{"ta": [1, "2", null, 4]}') SELECT ta FROM jsonb_populate_record(NULL::jsbrec, '{"ta": [[1, 2, 3], {"k": "v"}]}') q; ERROR: expected json array -HINT: see the array element [1] of key "ta" +HINT: See the array element [1] of key "ta". SELECT c FROM jsonb_populate_record(NULL::jsbrec, '{"c": null}') q; c --- @@ -2158,7 +2158,7 @@ SELECT ca FROM jsonb_populate_record(NULL::jsbrec, '{"ca": null}') q; SELECT ca FROM jsonb_populate_record(NULL::jsbrec, '{"ca": 123}') q; ERROR: expected json array -HINT: see the value of key "ca" +HINT: See the value of key "ca". SELECT ca FROM jsonb_populate_record(NULL::jsbrec, '{"ca": [1, "2", null, 4]}') q; ca ----------------------------------------------- @@ -2169,7 +2169,7 @@ SELECT ca FROM jsonb_populate_record(NULL::jsbrec, '{"ca": ["aaaaaaaaaaaaaaaa"]} ERROR: value too long for type character(10) SELECT ca FROM jsonb_populate_record(NULL::jsbrec, '{"ca": [[1, 2, 3], {"k": "v"}]}') q; ERROR: expected json array -HINT: see the array element [1] of key "ca" +HINT: See the array element [1] of key "ca". SELECT js FROM jsonb_populate_record(NULL::jsbrec, '{"js": null}') q; js ---- @@ -2262,7 +2262,7 @@ SELECT jsa FROM jsonb_populate_record(NULL::jsbrec, '{"jsa": null}') q; SELECT jsa FROM jsonb_populate_record(NULL::jsbrec, '{"jsa": 123}') q; ERROR: expected json array -HINT: see the value of key "jsa" +HINT: See the value of key "jsa". SELECT jsa FROM jsonb_populate_record(NULL::jsbrec, '{"jsa": [1, "2", null, 4]}') q; jsa -------------------- @@ -2293,7 +2293,7 @@ SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": "(abc,42,01.02.2003 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": 123}') q; ERROR: expected json array -HINT: see the value of key "reca" +HINT: See the value of key "reca". SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": [1, 2]}') q; ERROR: cannot call populate_composite on a scalar SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": [{"a": "abc", "b": 456}, null, {"c": "01.02.2003", "x": 43.2}]}') q; @@ -2423,7 +2423,7 @@ select * from jsonb_to_record('{"ia": null}') as x(ia _int4); select * from jsonb_to_record('{"ia": 123}') as x(ia _int4); ERROR: expected json array -HINT: see the value of key "ia" +HINT: See the value of key "ia". select * from jsonb_to_record('{"ia": [1, "2", null, 4]}') as x(ia _int4); ia -------------- @@ -2438,7 +2438,7 @@ select * from jsonb_to_record('{"ia": [[1, 2], [3, 4]]}') as x(ia _int4); select * from jsonb_to_record('{"ia": [[1], 2]}') as x(ia _int4); ERROR: expected json array -HINT: see the array element [1] of key "ia" +HINT: See the array element [1] of key "ia". select * from jsonb_to_record('{"ia": [[1], [2, 3]]}') as x(ia _int4); ERROR: malformed json array DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions. 
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out index 7957268388..bda8960bf3 100644 --- a/src/test/regress/expected/psql.out +++ b/src/test/regress/expected/psql.out @@ -8,7 +8,7 @@ invalid variable name: "invalid/name" -- fail: invalid value for special variable \set AUTOCOMMIT foo -unrecognized value "foo" for "AUTOCOMMIT": boolean expected +unrecognized value "foo" for "AUTOCOMMIT": Boolean expected \set FETCH_COUNT foo invalid value "foo" for "FETCH_COUNT": integer expected -- check handling of built-in boolean variable @@ -2939,7 +2939,7 @@ second thing true \endif -- invalid boolean expressions are false \if invalid boolean expression -unrecognized value "invalid boolean expression" for "\if expression": boolean expected +unrecognized value "invalid boolean expression" for "\if expression": Boolean expected \echo 'will not print #6-1' \else \echo 'will print anyway #6-2' diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out index d582bc9ee4..f1c1b44d6f 100644 --- a/src/test/regress/expected/rules.out +++ b/src/test/regress/expected/rules.out @@ -2571,12 +2571,12 @@ drop view fooview; create table fooview (x int, y text) partition by list (x); create rule "_RETURN" as on select to fooview do instead select 1 as x, 'aaa'::text as y; -ERROR: could not convert partitioned table "fooview" to a view +ERROR: cannot convert partitioned table "fooview" to a view -- nor can one convert a partition to view create table fooview_part partition of fooview for values in (1); create rule "_RETURN" as on select to fooview_part do instead select 1 as x, 'aaa'::text as y; -ERROR: could not convert partition "fooview_part" to a view +ERROR: cannot convert partition "fooview_part" to a view -- -- check for planner problems with complex inherited UPDATES -- diff --git a/src/test/regress/expected/stats_ext.out b/src/test/regress/expected/stats_ext.out index 441cfaa411..054a381dad 100644 --- a/src/test/regress/expected/stats_ext.out +++ b/src/test/regress/expected/stats_ext.out @@ -21,7 +21,7 @@ LINE 1: CREATE STATISTICS tst FROM sometab; CREATE STATISTICS tst ON a, b FROM nonexistant; ERROR: relation "nonexistant" does not exist CREATE STATISTICS tst ON a, b FROM pg_class; -ERROR: column "a" referenced in statistics does not exist +ERROR: column "a" does not exist CREATE STATISTICS tst ON relname, relname, relnatts FROM pg_class; ERROR: duplicate column name in statistics definition CREATE STATISTICS tst ON relnatts + relpages FROM pg_class; @@ -29,7 +29,7 @@ ERROR: only simple column references are allowed in CREATE STATISTICS CREATE STATISTICS tst ON (relpages, reltuples) FROM pg_class; ERROR: only simple column references are allowed in CREATE STATISTICS CREATE STATISTICS tst (unrecognized) ON relname, relnatts FROM pg_class; -ERROR: unrecognized statistic type "unrecognized" +ERROR: unrecognized statistics kind "unrecognized" -- Ensure stats are dropped sanely, and test IF NOT EXISTS while at it CREATE TABLE ab1 (a INTEGER, b INTEGER, c INTEGER); CREATE STATISTICS IF NOT EXISTS ab1_a_b_stats ON a, b FROM ab1; diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 2f8029a2f7..620fac1e2c 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -2255,7 +2255,7 @@ NOTICE: trigger = my_table_insert_trig, new table = create trigger my_table_multievent_trig after insert or update on my_table referencing new table as new_table for each statement execute 
procedure dump_insert(); -ERROR: Transition tables cannot be specified for triggers with more than one event +ERROR: transition tables cannot be specified for triggers with more than one event drop table my_table; -- -- Test firing of triggers with transition tables by foreign key cascades From 3612019a7925012445af29b9ea7af84bd68a5932 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 11 Sep 2017 14:47:15 -0400 Subject: [PATCH 0142/1087] doc: Document function pointer source code style as implemented in 1356f78ea93395c107cbc75dc923e29a0efccd8a --- doc/src/sgml/sources.sgml | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/doc/src/sgml/sources.sgml b/doc/src/sgml/sources.sgml index 877fcedbb3..7777bf5199 100644 --- a/doc/src/sgml/sources.sgml +++ b/doc/src/sgml/sources.sgml @@ -964,5 +964,23 @@ handle_sighup(SIGNAL_ARGS) + + Calling Function Pointers + + + For clarity, it is preferred to explicitly dereference a function pointer + when calling the pointed-to function if the pointer is a simple variable, + for example: + +(*emit_log_hook) (edata); + + (even though emit_log_hook(edata) would also work). + When the function pointer is part of a structure, then the extra + punctuation can and usually should be omitted, for example: + +paramInfo->paramFetch(paramInfo, paramId); + + + From b8060e41b5994a3cffb3ececaab10ed39b8d5dfd Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 11 Sep 2017 16:24:34 -0400 Subject: [PATCH 0143/1087] Prefer argument name over "$n" for the refname of a plpgsql argument. If a function argument has a name, use that as the "refname" of the PLpgSQL_datum representing the argument, instead of $n as before. This allows better error messages in some cases. Pavel Stehule, reviewed by Jeevan Chalke Discussion: https://postgr.es/m/CAFj8pRB9GyU2U1Sb2ssgP26DZ_yq-FYDfpvUvGQ=k4R=yOPVjg@mail.gmail.com --- src/pl/plpgsql/src/pl_comp.c | 11 ++++++++--- src/test/regress/expected/plpgsql.out | 11 +++++++++++ src/test/regress/sql/plpgsql.sql | 9 +++++++++ 3 files changed, 28 insertions(+), 3 deletions(-) diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index e9d7ef55e9..9931ee038f 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -433,9 +433,14 @@ do_compile(FunctionCallInfo fcinfo, errmsg("PL/pgSQL functions cannot accept type %s", format_type_be(argtypeid)))); - /* Build variable and add to datum list */ - argvariable = plpgsql_build_variable(buf, 0, - argdtype, false); + /* + * Build variable and add to datum list. If there's a name + * for the argument, use that as refname, else use $n name. + */ + argvariable = plpgsql_build_variable((argnames && + argnames[i][0] != '\0') ? 
+ argnames[i] : buf, + 0, argdtype, false); if (argvariable->dtype == PLPGSQL_DTYPE_VAR) { diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index 71099969a4..7d3e9225bb 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -6029,3 +6029,14 @@ SELECT * FROM list_partitioned_table() AS t; 2 (2 rows) +-- +-- Check argument name is used instead of $n in error message +-- +CREATE FUNCTION fx(x WSlot) RETURNS void AS $$ +BEGIN + GET DIAGNOSTICS x = ROW_COUNT; + RETURN; +END; $$ LANGUAGE plpgsql; +ERROR: "x" is not a scalar variable +LINE 3: GET DIAGNOSTICS x = ROW_COUNT; + ^ diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 771d68282e..6c9399696b 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -4811,3 +4811,12 @@ BEGIN END; $$ LANGUAGE plpgsql; SELECT * FROM list_partitioned_table() AS t; + +-- +-- Check argument name is used instead of $n in error message +-- +CREATE FUNCTION fx(x WSlot) RETURNS void AS $$ +BEGIN + GET DIAGNOSTICS x = ROW_COUNT; + RETURN; +END; $$ LANGUAGE plpgsql; From c1898c3e1e235ae35b4759d233253eff221b976a Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 10 Sep 2017 16:20:41 -0700 Subject: [PATCH 0144/1087] Constify numeric.c. This allows the compiler/linker to move the static variables to a read-only segment. Not all the signature changes are necessary, but it seems better to apply const in a consistent manner. Reviewed-By: Tom Lane Discussion: https://postgr.es/m/20170910232154.asgml44ji2b7lv3d@alap3.anarazel.de --- src/backend/utils/adt/numeric.c | 212 +++++++++++++++++--------------- 1 file changed, 111 insertions(+), 101 deletions(-) diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index bc01f3c284..ddc44d5179 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -367,59 +367,59 @@ typedef struct NumericSumAccum * Some preinitialized constants * ---------- */ -static NumericDigit const_zero_data[1] = {0}; -static NumericVar const_zero = -{0, 0, NUMERIC_POS, 0, NULL, const_zero_data}; +static const NumericDigit const_zero_data[1] = {0}; +static const NumericVar const_zero = +{0, 0, NUMERIC_POS, 0, NULL, (NumericDigit *) const_zero_data}; -static NumericDigit const_one_data[1] = {1}; -static NumericVar const_one = -{1, 0, NUMERIC_POS, 0, NULL, const_one_data}; +static const NumericDigit const_one_data[1] = {1}; +static const NumericVar const_one = +{1, 0, NUMERIC_POS, 0, NULL, (NumericDigit *) const_one_data}; -static NumericDigit const_two_data[1] = {2}; -static NumericVar const_two = -{1, 0, NUMERIC_POS, 0, NULL, const_two_data}; +static const NumericDigit const_two_data[1] = {2}; +static const NumericVar const_two = +{1, 0, NUMERIC_POS, 0, NULL, (NumericDigit *) const_two_data}; #if DEC_DIGITS == 4 || DEC_DIGITS == 2 -static NumericDigit const_ten_data[1] = {10}; -static NumericVar const_ten = -{1, 0, NUMERIC_POS, 0, NULL, const_ten_data}; +static const NumericDigit const_ten_data[1] = {10}; +static const NumericVar const_ten = +{1, 0, NUMERIC_POS, 0, NULL, (NumericDigit *) const_ten_data}; #elif DEC_DIGITS == 1 -static NumericDigit const_ten_data[1] = {1}; -static NumericVar const_ten = -{1, 1, NUMERIC_POS, 0, NULL, const_ten_data}; +static const NumericDigit const_ten_data[1] = {1}; +static const NumericVar const_ten = +{1, 1, NUMERIC_POS, 0, NULL, (NumericDigit *) const_ten_data}; #endif #if DEC_DIGITS == 4 -static NumericDigit 
const_zero_point_five_data[1] = {5000}; +static const NumericDigit const_zero_point_five_data[1] = {5000}; #elif DEC_DIGITS == 2 -static NumericDigit const_zero_point_five_data[1] = {50}; +static const NumericDigit const_zero_point_five_data[1] = {50}; #elif DEC_DIGITS == 1 -static NumericDigit const_zero_point_five_data[1] = {5}; +static const NumericDigit const_zero_point_five_data[1] = {5}; #endif -static NumericVar const_zero_point_five = -{1, -1, NUMERIC_POS, 1, NULL, const_zero_point_five_data}; +static const NumericVar const_zero_point_five = +{1, -1, NUMERIC_POS, 1, NULL, (NumericDigit *) const_zero_point_five_data}; #if DEC_DIGITS == 4 -static NumericDigit const_zero_point_nine_data[1] = {9000}; +static const NumericDigit const_zero_point_nine_data[1] = {9000}; #elif DEC_DIGITS == 2 -static NumericDigit const_zero_point_nine_data[1] = {90}; +static const NumericDigit const_zero_point_nine_data[1] = {90}; #elif DEC_DIGITS == 1 -static NumericDigit const_zero_point_nine_data[1] = {9}; +static const NumericDigit const_zero_point_nine_data[1] = {9}; #endif -static NumericVar const_zero_point_nine = -{1, -1, NUMERIC_POS, 1, NULL, const_zero_point_nine_data}; +static const NumericVar const_zero_point_nine = +{1, -1, NUMERIC_POS, 1, NULL, (NumericDigit *) const_zero_point_nine_data}; #if DEC_DIGITS == 4 -static NumericDigit const_one_point_one_data[2] = {1, 1000}; +static const NumericDigit const_one_point_one_data[2] = {1, 1000}; #elif DEC_DIGITS == 2 -static NumericDigit const_one_point_one_data[2] = {1, 10}; +static const NumericDigit const_one_point_one_data[2] = {1, 10}; #elif DEC_DIGITS == 1 -static NumericDigit const_one_point_one_data[2] = {1, 1}; +static const NumericDigit const_one_point_one_data[2] = {1, 1}; #endif -static NumericVar const_one_point_one = -{2, 0, NUMERIC_POS, 1, NULL, const_one_point_one_data}; +static const NumericVar const_one_point_one = +{2, 0, NUMERIC_POS, 1, NULL, (NumericDigit *) const_one_point_one_data}; -static NumericVar const_nan = +static const NumericVar const_nan = {0, 0, NUMERIC_NAN, 0, NULL, NULL}; #if DEC_DIGITS == 4 @@ -467,74 +467,84 @@ static const char *set_var_from_str(const char *str, const char *cp, NumericVar *dest); static void set_var_from_num(Numeric value, NumericVar *dest); static void init_var_from_num(Numeric num, NumericVar *dest); -static void set_var_from_var(NumericVar *value, NumericVar *dest); -static char *get_str_from_var(NumericVar *var); -static char *get_str_from_var_sci(NumericVar *var, int rscale); +static void set_var_from_var(const NumericVar *value, NumericVar *dest); +static char *get_str_from_var(const NumericVar *var); +static char *get_str_from_var_sci(const NumericVar *var, int rscale); -static Numeric make_result(NumericVar *var); +static Numeric make_result(const NumericVar *var); static void apply_typmod(NumericVar *var, int32 typmod); -static int32 numericvar_to_int32(NumericVar *var); -static bool numericvar_to_int64(NumericVar *var, int64 *result); +static int32 numericvar_to_int32(const NumericVar *var); +static bool numericvar_to_int64(const NumericVar *var, int64 *result); static void int64_to_numericvar(int64 val, NumericVar *var); #ifdef HAVE_INT128 -static bool numericvar_to_int128(NumericVar *var, int128 *result); +static bool numericvar_to_int128(const NumericVar *var, int128 *result); static void int128_to_numericvar(int128 val, NumericVar *var); #endif static double numeric_to_double_no_overflow(Numeric num); -static double numericvar_to_double_no_overflow(NumericVar *var); +static double 
numericvar_to_double_no_overflow(const NumericVar *var); static Datum numeric_abbrev_convert(Datum original_datum, SortSupport ssup); static bool numeric_abbrev_abort(int memtupcount, SortSupport ssup); static int numeric_fast_cmp(Datum x, Datum y, SortSupport ssup); static int numeric_cmp_abbrev(Datum x, Datum y, SortSupport ssup); -static Datum numeric_abbrev_convert_var(NumericVar *var, NumericSortSupport *nss); +static Datum numeric_abbrev_convert_var(const NumericVar *var, + NumericSortSupport *nss); static int cmp_numerics(Numeric num1, Numeric num2); -static int cmp_var(NumericVar *var1, NumericVar *var2); +static int cmp_var(const NumericVar *var1, const NumericVar *var2); static int cmp_var_common(const NumericDigit *var1digits, int var1ndigits, int var1weight, int var1sign, const NumericDigit *var2digits, int var2ndigits, int var2weight, int var2sign); -static void add_var(NumericVar *var1, NumericVar *var2, NumericVar *result); -static void sub_var(NumericVar *var1, NumericVar *var2, NumericVar *result); -static void mul_var(NumericVar *var1, NumericVar *var2, NumericVar *result, +static void add_var(const NumericVar *var1, const NumericVar *var2, + NumericVar *result); +static void sub_var(const NumericVar *var1, const NumericVar *var2, + NumericVar *result); +static void mul_var(const NumericVar *var1, const NumericVar *var2, + NumericVar *result, int rscale); -static void div_var(NumericVar *var1, NumericVar *var2, NumericVar *result, +static void div_var(const NumericVar *var1, const NumericVar *var2, + NumericVar *result, int rscale, bool round); -static void div_var_fast(NumericVar *var1, NumericVar *var2, NumericVar *result, - int rscale, bool round); -static int select_div_scale(NumericVar *var1, NumericVar *var2); -static void mod_var(NumericVar *var1, NumericVar *var2, NumericVar *result); -static void ceil_var(NumericVar *var, NumericVar *result); -static void floor_var(NumericVar *var, NumericVar *result); - -static void sqrt_var(NumericVar *arg, NumericVar *result, int rscale); -static void exp_var(NumericVar *arg, NumericVar *result, int rscale); -static int estimate_ln_dweight(NumericVar *var); -static void ln_var(NumericVar *arg, NumericVar *result, int rscale); -static void log_var(NumericVar *base, NumericVar *num, NumericVar *result); -static void power_var(NumericVar *base, NumericVar *exp, NumericVar *result); -static void power_var_int(NumericVar *base, int exp, NumericVar *result, +static void div_var_fast(const NumericVar *var1, const NumericVar *var2, + NumericVar *result, int rscale, bool round); +static int select_div_scale(const NumericVar *var1, const NumericVar *var2); +static void mod_var(const NumericVar *var1, const NumericVar *var2, + NumericVar *result); +static void ceil_var(const NumericVar *var, NumericVar *result); +static void floor_var(const NumericVar *var, NumericVar *result); + +static void sqrt_var(const NumericVar *arg, NumericVar *result, int rscale); +static void exp_var(const NumericVar *arg, NumericVar *result, int rscale); +static int estimate_ln_dweight(const NumericVar *var); +static void ln_var(const NumericVar *arg, NumericVar *result, int rscale); +static void log_var(const NumericVar *base, const NumericVar *num, + NumericVar *result); +static void power_var(const NumericVar *base, const NumericVar *exp, + NumericVar *result); +static void power_var_int(const NumericVar *base, int exp, NumericVar *result, int rscale); -static int cmp_abs(NumericVar *var1, NumericVar *var2); +static int cmp_abs(const NumericVar *var1, const 
NumericVar *var2); static int cmp_abs_common(const NumericDigit *var1digits, int var1ndigits, int var1weight, const NumericDigit *var2digits, int var2ndigits, int var2weight); -static void add_abs(NumericVar *var1, NumericVar *var2, NumericVar *result); -static void sub_abs(NumericVar *var1, NumericVar *var2, NumericVar *result); +static void add_abs(const NumericVar *var1, const NumericVar *var2, + NumericVar *result); +static void sub_abs(const NumericVar *var1, const NumericVar *var2, + NumericVar *result); static void round_var(NumericVar *var, int rscale); static void trunc_var(NumericVar *var, int rscale); static void strip_var(NumericVar *var); static void compute_bucket(Numeric operand, Numeric bound1, Numeric bound2, - NumericVar *count_var, NumericVar *result_var); + const NumericVar *count_var, NumericVar *result_var); -static void accum_sum_add(NumericSumAccum *accum, NumericVar *var1); -static void accum_sum_rescale(NumericSumAccum *accum, NumericVar *val); +static void accum_sum_add(NumericSumAccum *accum, const NumericVar *var1); +static void accum_sum_rescale(NumericSumAccum *accum, const NumericVar *val); static void accum_sum_carry(NumericSumAccum *accum); static void accum_sum_reset(NumericSumAccum *accum); static void accum_sum_final(NumericSumAccum *accum, NumericVar *result); @@ -1551,7 +1561,7 @@ width_bucket_numeric(PG_FUNCTION_ARGS) */ static void compute_bucket(Numeric operand, Numeric bound1, Numeric bound2, - NumericVar *count_var, NumericVar *result_var) + const NumericVar *count_var, NumericVar *result_var) { NumericVar bound1_var; NumericVar bound2_var; @@ -1883,7 +1893,7 @@ numeric_cmp_abbrev(Datum x, Datum y, SortSupport ssup) #if NUMERIC_ABBREV_BITS == 64 static Datum -numeric_abbrev_convert_var(NumericVar *var, NumericSortSupport *nss) +numeric_abbrev_convert_var(const NumericVar *var, NumericSortSupport *nss) { int ndigits = var->ndigits; int weight = var->weight; @@ -1938,7 +1948,7 @@ numeric_abbrev_convert_var(NumericVar *var, NumericSortSupport *nss) #if NUMERIC_ABBREV_BITS == 32 static Datum -numeric_abbrev_convert_var(NumericVar *var, NumericSortSupport *nss) +numeric_abbrev_convert_var(const NumericVar *var, NumericSortSupport *nss) { int ndigits = var->ndigits; int weight = var->weight; @@ -3002,7 +3012,7 @@ numeric_int4(PG_FUNCTION_ARGS) * ereport(). The input NumericVar is *not* free'd. */ static int32 -numericvar_to_int32(NumericVar *var) +numericvar_to_int32(const NumericVar *var) { int32 result; int64 val; @@ -4719,7 +4729,7 @@ numeric_stddev_internal(NumericAggState *state, vsumX, vsumX2, vNminus1; - NumericVar *comp; + const NumericVar *comp; int rscale; /* Deal with empty input and NaN-input cases */ @@ -5715,7 +5725,7 @@ init_var_from_num(Numeric num, NumericVar *dest) * Copy one variable into another */ static void -set_var_from_var(NumericVar *value, NumericVar *dest) +set_var_from_var(const NumericVar *value, NumericVar *dest) { NumericDigit *newbuf; @@ -5741,7 +5751,7 @@ set_var_from_var(NumericVar *value, NumericVar *dest) * Returns a palloc'd string. */ static char * -get_str_from_var(NumericVar *var) +get_str_from_var(const NumericVar *var) { int dscale; char *str; @@ -5894,7 +5904,7 @@ get_str_from_var(NumericVar *var) * Returns a palloc'd string. */ static char * -get_str_from_var_sci(NumericVar *var, int rscale) +get_str_from_var_sci(const NumericVar *var, int rscale) { int32 exponent; NumericVar denominator; @@ -5980,7 +5990,7 @@ get_str_from_var_sci(NumericVar *var, int rscale) * a variable. 
*/ static Numeric -make_result(NumericVar *var) +make_result(const NumericVar *var) { Numeric result; NumericDigit *digits = var->digits; @@ -6143,7 +6153,7 @@ apply_typmod(NumericVar *var, int32 typmod) * If overflow, return FALSE (no error is raised). Return TRUE if okay. */ static bool -numericvar_to_int64(NumericVar *var, int64 *result) +numericvar_to_int64(const NumericVar *var, int64 *result) { NumericDigit *digits; int ndigits; @@ -6262,7 +6272,7 @@ int64_to_numericvar(int64 val, NumericVar *var) * If overflow, return FALSE (no error is raised). Return TRUE if okay. */ static bool -numericvar_to_int128(NumericVar *var, int128 *result) +numericvar_to_int128(const NumericVar *var, int128 *result) { NumericDigit *digits; int ndigits; @@ -6406,7 +6416,7 @@ numeric_to_double_no_overflow(Numeric num) /* As above, but work from a NumericVar */ static double -numericvar_to_double_no_overflow(NumericVar *var) +numericvar_to_double_no_overflow(const NumericVar *var) { char *tmp; double val; @@ -6438,7 +6448,7 @@ numericvar_to_double_no_overflow(NumericVar *var) * truncated to no digits. */ static int -cmp_var(NumericVar *var1, NumericVar *var2) +cmp_var(const NumericVar *var1, const NumericVar *var2) { return cmp_var_common(var1->digits, var1->ndigits, var1->weight, var1->sign, @@ -6496,7 +6506,7 @@ cmp_var_common(const NumericDigit *var1digits, int var1ndigits, * result might point to one of the operands too without danger. */ static void -add_var(NumericVar *var1, NumericVar *var2, NumericVar *result) +add_var(const NumericVar *var1, const NumericVar *var2, NumericVar *result) { /* * Decide on the signs of the two variables what to do @@ -6613,7 +6623,7 @@ add_var(NumericVar *var1, NumericVar *var2, NumericVar *result) * result might point to one of the operands too without danger. */ static void -sub_var(NumericVar *var1, NumericVar *var2, NumericVar *result) +sub_var(const NumericVar *var1, const NumericVar *var2, NumericVar *result) { /* * Decide on the signs of the two variables what to do @@ -6734,7 +6744,7 @@ sub_var(NumericVar *var1, NumericVar *var2, NumericVar *result) * in result. Result is rounded to no more than rscale fractional digits. */ static void -mul_var(NumericVar *var1, NumericVar *var2, NumericVar *result, +mul_var(const NumericVar *var1, const NumericVar *var2, NumericVar *result, int rscale) { int res_ndigits; @@ -6763,7 +6773,7 @@ mul_var(NumericVar *var1, NumericVar *var2, NumericVar *result, */ if (var1->ndigits > var2->ndigits) { - NumericVar *tmp = var1; + const NumericVar *tmp = var1; var1 = var2; var2 = tmp; @@ -6931,7 +6941,7 @@ mul_var(NumericVar *var1, NumericVar *var2, NumericVar *result, * is truncated (towards zero) at that digit. */ static void -div_var(NumericVar *var1, NumericVar *var2, NumericVar *result, +div_var(const NumericVar *var1, const NumericVar *var2, NumericVar *result, int rscale, bool round) { int div_ndigits; @@ -7216,8 +7226,8 @@ div_var(NumericVar *var1, NumericVar *var2, NumericVar *result, * the correct answer is 1. */ static void -div_var_fast(NumericVar *var1, NumericVar *var2, NumericVar *result, - int rscale, bool round) +div_var_fast(const NumericVar *var1, const NumericVar *var2, + NumericVar *result, int rscale, bool round) { int div_ndigits; int res_sign; @@ -7511,7 +7521,7 @@ div_var_fast(NumericVar *var1, NumericVar *var2, NumericVar *result, * Returns the appropriate result scale for the division result. 
*/ static int -select_div_scale(NumericVar *var1, NumericVar *var2) +select_div_scale(const NumericVar *var1, const NumericVar *var2) { int weight1, weight2, @@ -7580,7 +7590,7 @@ select_div_scale(NumericVar *var1, NumericVar *var2) * Calculate the modulo of two numerics at variable level */ static void -mod_var(NumericVar *var1, NumericVar *var2, NumericVar *result) +mod_var(const NumericVar *var1, const NumericVar *var2, NumericVar *result) { NumericVar tmp; @@ -7609,7 +7619,7 @@ mod_var(NumericVar *var1, NumericVar *var2, NumericVar *result) * on variable level */ static void -ceil_var(NumericVar *var, NumericVar *result) +ceil_var(const NumericVar *var, NumericVar *result) { NumericVar tmp; @@ -7633,7 +7643,7 @@ ceil_var(NumericVar *var, NumericVar *result) * on variable level */ static void -floor_var(NumericVar *var, NumericVar *result) +floor_var(const NumericVar *var, NumericVar *result) { NumericVar tmp; @@ -7656,7 +7666,7 @@ floor_var(NumericVar *var, NumericVar *result) * Compute the square root of x using Newton's algorithm */ static void -sqrt_var(NumericVar *arg, NumericVar *result, int rscale) +sqrt_var(const NumericVar *arg, NumericVar *result, int rscale) { NumericVar tmp_arg; NumericVar tmp_val; @@ -7729,7 +7739,7 @@ sqrt_var(NumericVar *arg, NumericVar *result, int rscale) * Raise e to the power of x, computed to rscale fractional digits */ static void -exp_var(NumericVar *arg, NumericVar *result, int rscale) +exp_var(const NumericVar *arg, NumericVar *result, int rscale) { NumericVar x; NumericVar elem; @@ -7855,7 +7865,7 @@ exp_var(NumericVar *arg, NumericVar *result, int rscale) * determine the appropriate rscale when computing natural logarithms. */ static int -estimate_ln_dweight(NumericVar *var) +estimate_ln_dweight(const NumericVar *var) { int ln_dweight; @@ -7933,7 +7943,7 @@ estimate_ln_dweight(NumericVar *var) * Compute the natural log of x */ static void -ln_var(NumericVar *arg, NumericVar *result, int rscale) +ln_var(const NumericVar *arg, NumericVar *result, int rscale) { NumericVar x; NumericVar xx; @@ -8040,7 +8050,7 @@ ln_var(NumericVar *arg, NumericVar *result, int rscale) * Note: this routine chooses dscale of the result. */ static void -log_var(NumericVar *base, NumericVar *num, NumericVar *result) +log_var(const NumericVar *base, const NumericVar *num, NumericVar *result) { NumericVar ln_base; NumericVar ln_num; @@ -8100,7 +8110,7 @@ log_var(NumericVar *base, NumericVar *num, NumericVar *result) * Note: this routine chooses dscale of the result. */ static void -power_var(NumericVar *base, NumericVar *exp, NumericVar *result) +power_var(const NumericVar *base, const NumericVar *exp, NumericVar *result) { NumericVar ln_base; NumericVar ln_num; @@ -8215,7 +8225,7 @@ power_var(NumericVar *base, NumericVar *exp, NumericVar *result) * Raise base to the power of exp, where exp is an integer. 
*/ static void -power_var_int(NumericVar *base, int exp, NumericVar *result, int rscale) +power_var_int(const NumericVar *base, int exp, NumericVar *result, int rscale) { double f; int p; @@ -8405,7 +8415,7 @@ power_var_int(NumericVar *base, int exp, NumericVar *result, int rscale) * ---------- */ static int -cmp_abs(NumericVar *var1, NumericVar *var2) +cmp_abs(const NumericVar *var1, const NumericVar *var2) { return cmp_abs_common(var1->digits, var1->ndigits, var1->weight, var2->digits, var2->ndigits, var2->weight); @@ -8483,7 +8493,7 @@ cmp_abs_common(const NumericDigit *var1digits, int var1ndigits, int var1weight, * result might point to one of the operands without danger. */ static void -add_abs(NumericVar *var1, NumericVar *var2, NumericVar *result) +add_abs(const NumericVar *var1, const NumericVar *var2, NumericVar *result) { NumericDigit *res_buf; NumericDigit *res_digits; @@ -8568,7 +8578,7 @@ add_abs(NumericVar *var1, NumericVar *var2, NumericVar *result) * ABS(var1) MUST BE GREATER OR EQUAL ABS(var2) !!! */ static void -sub_abs(NumericVar *var1, NumericVar *var2, NumericVar *result) +sub_abs(const NumericVar *var1, const NumericVar *var2, NumericVar *result) { NumericDigit *res_buf; NumericDigit *res_digits; @@ -8875,7 +8885,7 @@ accum_sum_reset(NumericSumAccum *accum) * Accumulate a new value. */ static void -accum_sum_add(NumericSumAccum *accum, NumericVar *val) +accum_sum_add(NumericSumAccum *accum, const NumericVar *val) { int32 *accum_digits; int i, @@ -8996,7 +9006,7 @@ accum_sum_carry(NumericSumAccum *accum) * accumulator, enlarge the buffers. */ static void -accum_sum_rescale(NumericSumAccum *accum, NumericVar *val) +accum_sum_rescale(NumericSumAccum *accum, const NumericVar *val) { int old_weight = accum->weight; int old_ndigits = accum->ndigits; From 6d9fa52645e71711410a66b5349df3be0dd49608 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 11 Sep 2017 16:30:50 -0400 Subject: [PATCH 0145/1087] pg_receivewal: Add --endpos option This is primarily useful for making tests of this utility more deterministic, to avoid the complexity of starting pg_receivewal as a daemon in TAP tests. While this is less useful than the equivalent pg_recvlogical option, users can also use it, for example, to enforce WAL streaming up to an end-of-backup position, to save only a minimal amount of WAL. Use this new option to stream WAL data in a deterministic way within a new set of TAP tests. Author: Michael Paquier --- doc/src/sgml/ref/pg_receivewal.sgml | 16 +++++++ src/bin/pg_basebackup/pg_receivewal.c | 38 ++++++++++++++--- src/bin/pg_basebackup/t/020_pg_receivewal.pl | 45 +++++++++++++++++++- 3 files changed, 91 insertions(+), 8 deletions(-) diff --git a/doc/src/sgml/ref/pg_receivewal.sgml b/doc/src/sgml/ref/pg_receivewal.sgml index 7c82e36c7c..f0513dad2a 100644 --- a/doc/src/sgml/ref/pg_receivewal.sgml +++ b/doc/src/sgml/ref/pg_receivewal.sgml @@ -98,6 +98,22 @@ PostgreSQL documentation + + + + + + Automatically stop replication and exit with normal exit status 0 when + receiving reaches the specified LSN. + + + + If there is a record with LSN exactly equal to lsn, + the record will be processed.
+ + + + diff --git a/src/bin/pg_basebackup/pg_receivewal.c b/src/bin/pg_basebackup/pg_receivewal.c index 4a1a5658fb..710a33ab4d 100644 --- a/src/bin/pg_basebackup/pg_receivewal.c +++ b/src/bin/pg_basebackup/pg_receivewal.c @@ -36,12 +36,13 @@ static int verbose = 0; static int compresslevel = 0; static int noloop = 0; static int standby_message_timeout = 10 * 1000; /* 10 sec = default */ -static volatile bool time_to_abort = false; +static volatile bool time_to_stop = false; static bool do_create_slot = false; static bool slot_exists_ok = false; static bool do_drop_slot = false; static bool synchronous = false; static char *replication_slot = NULL; +static XLogRecPtr endpos = InvalidXLogRecPtr; static void usage(void); @@ -77,6 +78,7 @@ usage(void) printf(_(" %s [OPTION]...\n"), progname); printf(_("\nOptions:\n")); printf(_(" -D, --directory=DIR receive write-ahead log files into this directory\n")); + printf(_(" -E, --endpos=LSN exit after receiving the specified LSN\n")); printf(_(" --if-not-exists do not error if slot already exists when creating a slot\n")); printf(_(" -n, --no-loop do not loop on connection lost\n")); printf(_(" -s, --status-interval=SECS\n" @@ -112,6 +114,16 @@ stop_streaming(XLogRecPtr xlogpos, uint32 timeline, bool segment_finished) progname, (uint32) (xlogpos >> 32), (uint32) xlogpos, timeline); + if (!XLogRecPtrIsInvalid(endpos) && endpos < xlogpos) + { + if (verbose) + fprintf(stderr, _("%s: stopped streaming at %X/%X (timeline %u)\n"), + progname, (uint32) (xlogpos >> 32), (uint32) xlogpos, + timeline); + time_to_stop = true; + return true; + } + /* * Note that we report the previous, not current, position here. After a * timeline switch, xlogpos points to the beginning of the segment because @@ -128,7 +140,7 @@ stop_streaming(XLogRecPtr xlogpos, uint32 timeline, bool segment_finished) prevtimeline = timeline; prevpos = xlogpos; - if (time_to_abort) + if (time_to_stop) { if (verbose) fprintf(stderr, _("%s: received interrupt signal, exiting\n"), @@ -448,7 +460,7 @@ StreamLog(void) static void sigint_handler(int signum) { - time_to_abort = true; + time_to_stop = true; } #endif @@ -460,6 +472,7 @@ main(int argc, char **argv) {"version", no_argument, NULL, 'V'}, {"directory", required_argument, NULL, 'D'}, {"dbname", required_argument, NULL, 'd'}, + {"endpos", required_argument, NULL, 'E'}, {"host", required_argument, NULL, 'h'}, {"port", required_argument, NULL, 'p'}, {"username", required_argument, NULL, 'U'}, @@ -481,6 +494,7 @@ main(int argc, char **argv) int c; int option_index; char *db_name; + uint32 hi, lo; progname = get_progname(argv[0]); set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_basebackup")); @@ -500,7 +514,7 @@ main(int argc, char **argv) } } - while ((c = getopt_long(argc, argv, "D:d:h:p:U:s:S:nwWvZ:", + while ((c = getopt_long(argc, argv, "D:d:E:h:p:U:s:S:nwWvZ:", long_options, &option_index)) != -1) { switch (c) @@ -544,6 +558,16 @@ main(int argc, char **argv) case 'S': replication_slot = pg_strdup(optarg); break; + case 'E': + if (sscanf(optarg, "%X/%X", &hi, &lo) != 2) + { + fprintf(stderr, + _("%s: could not parse end position \"%s\"\n"), + progname, optarg); + exit(1); + } + endpos = ((uint64) hi) << 32 | lo; + break; case 'n': noloop = 1; break; @@ -714,11 +738,11 @@ main(int argc, char **argv) while (true) { StreamLog(); - if (time_to_abort) + if (time_to_stop) { /* - * We've been Ctrl-C'ed. That's not an error, so exit without an - * errorcode. 
+ * We've been Ctrl-C'ed or end of streaming position has been + * willingly reached, so exit without an error code. */ exit(0); } diff --git a/src/bin/pg_basebackup/t/020_pg_receivewal.pl b/src/bin/pg_basebackup/t/020_pg_receivewal.pl index b4cb6f729d..101a36466d 100644 --- a/src/bin/pg_basebackup/t/020_pg_receivewal.pl +++ b/src/bin/pg_basebackup/t/020_pg_receivewal.pl @@ -1,8 +1,51 @@ use strict; use warnings; use TestLib; -use Test::More tests => 8; +use PostgresNode; +use Test::More tests => 14; program_help_ok('pg_receivewal'); program_version_ok('pg_receivewal'); program_options_handling_ok('pg_receivewal'); + +my $primary = get_new_node('primary'); +$primary->init(allows_streaming => 1); +$primary->start; + +my $stream_dir = $primary->basedir . '/archive_wal'; +mkdir($stream_dir); + +# Sanity checks for command line options. +$primary->command_fails(['pg_receivewal'], + 'pg_receivewal needs target directory specified'); +$primary->command_fails( + [ 'pg_receivewal', '-D', $stream_dir, '--create-slot', '--drop-slot' ], + 'failure if both --create-slot and --drop-slot specified'); +$primary->command_fails( + [ 'pg_receivewal', '-D', $stream_dir, '--create-slot' ], + 'failure if --create-slot specified without --slot'); + +# Slot creation and drop +my $slot_name = 'test'; +$primary->command_ok( + [ 'pg_receivewal', '--slot', $slot_name, '--create-slot' ], + 'creating a replication slot'); +$primary->command_ok([ 'pg_receivewal', '--slot', $slot_name, '--drop-slot' ], + 'dropping a replication slot'); + +# Generate some WAL. Use --synchronous at the same time to add more +# code coverage. Switch to the next segment first so that subsequent +# restarts of pg_receivewal will see this segment as full.. +$primary->psql('postgres', 'CREATE TABLE test_table(x integer);'); +$primary->psql('postgres', 'SELECT pg_switch_wal();'); +my $nextlsn = + $primary->safe_psql('postgres', 'SELECT pg_current_wal_insert_lsn();'); +chomp($nextlsn); +$primary->psql('postgres', + 'INSERT INTO test_table VALUES (generate_series(1,100));'); + +# Stream up to the given position. +$primary->command_ok( + [ 'pg_receivewal', '-D', $stream_dir, '--verbose', + '--endpos', $nextlsn, '--synchronous', '--no-loop' ], + 'streaming some WAL with --synchronous'); From 3126433ae7464ffc25a8317110e79defaa3d8865 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Mon, 11 Sep 2017 19:43:49 -0400 Subject: [PATCH 0146/1087] PG 10 release notes: update PL/Tcl functions item Update attribution of PL/Tcl functions item from Jim Nasby to Karl Lehenbauer. Reported-by: Jim Nasby Discussion: https://postgr.es/m/ed42f3d6-4251-dabc-747f-1ff936763b2b@nasby.net Backpatch-through: 10 --- doc/src/sgml/release-10.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 7ace37c8b6..2fb13e5d78 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -2046,7 +2046,7 @@ --> Allow PL/Tcl functions to return composite types and sets - (Jim Nasby) + (Karl Lehenbauer) From 57e1c007939447ecf8c2d2aa2f507124613324ad Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Mon, 11 Sep 2017 19:56:44 -0400 Subject: [PATCH 0147/1087] PG 10 release notes: change trigger transition tables Add attribution of trigger transition tables for Thomas Munro. 
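For context, the item being attributed is the new transition tables capability: an AFTER ... FOR EACH STATEMENT trigger can be given named, table-like access to all rows changed by the triggering statement. A minimal sketch of the syntax (the table, function, and trigger names here are hypothetical, not taken from the patch):

    CREATE TABLE audit_demo (id int, payload text);

    CREATE FUNCTION audit_demo_count() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- new_rows is the transition table declared by the trigger below;
        -- it holds every row inserted by the triggering statement
        RAISE NOTICE 'rows inserted: %', (SELECT count(*) FROM new_rows);
        RETURN NULL;
    END;
    $$;

    CREATE TRIGGER audit_demo_trig
        AFTER INSERT ON audit_demo
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE PROCEDURE audit_demo_count();

    INSERT INTO audit_demo SELECT g, 'row' FROM generate_series(1, 3) g;
    -- expected: NOTICE:  rows inserted: 3

Statement-level access to the full set of changed rows is what distinguishes this from the long-standing row-level OLD/NEW trigger variables.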
Reported-by: Thomas Munro Discussion: https://postgr.es/m/CAEepm=2bDFgr4ut+1-QjKQY4MA=5ek8Ap3nyB19y2tpTL6xxtA@mail.gmail.com Backpatch-through: 10 --- doc/src/sgml/release-10.sgml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 2fb13e5d78..a2b1d7cd40 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -1594,7 +1594,8 @@ --> Add AFTER trigger - transition tables to record changed rows (Kevin Grittner) + transition tables to record changed rows (Kevin Grittner, Thomas + Munro) From e183530550dc1b73d24fb5ae7d84e85286e88ffb Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 11 Sep 2017 22:02:58 -0400 Subject: [PATCH 0148/1087] Fix RecursiveCopy.pm to cope with disappearing files. When copying from an active database tree, it's possible for files to be deleted after we see them in a readdir() scan but before we can open them. (Once we've got a file open, we don't expect any further errors from it getting unlinked, though.) Tweak RecursiveCopy so it can cope with this case, so as to avoid irreproducible test failures. Back-patch to 9.6 where this code was added. In v10 and HEAD, also remove unused "use RecursiveCopy" in one recovery test script. Michael Paquier and Tom Lane Discussion: https://postgr.es/m/24621.1504924323@sss.pgh.pa.us --- src/test/perl/RecursiveCopy.pm | 64 +++++++++++++------ .../t/010_logical_decoding_timelines.pl | 1 - 2 files changed, 45 insertions(+), 20 deletions(-) diff --git a/src/test/perl/RecursiveCopy.pm b/src/test/perl/RecursiveCopy.pm index 28ecaf6db2..19f7dd2fff 100644 --- a/src/test/perl/RecursiveCopy.pm +++ b/src/test/perl/RecursiveCopy.pm @@ -29,12 +29,17 @@ use File::Copy; =head2 copypath($from, $to, %params) Recursively copy all files and directories from $from to $to. +Does not preserve file metadata (e.g., permissions). Only regular files and subdirectories are copied. Trying to copy other types of directory entries raises an exception. Raises an exception if a file would be overwritten, the source directory can't -be read, or any I/O operation fails. Always returns true. +be read, or any I/O operation fails. However, we silently ignore ENOENT on +open, because when copying from a live database it's possible for a file/dir +to be deleted after we see its directory entry but before we can open it. + +Always returns true. If the B parameter is given, it must be a subroutine reference. This subroutine will be called for each entry in the source directory with its @@ -74,6 +79,9 @@ sub copypath $filterfn = sub { return 1; }; } + # Complain if original path is bogus, because _copypath_recurse won't. + die "\"$base_src_dir\" does not exist" if !-e $base_src_dir; + # Start recursive copy from current directory return _copypath_recurse($base_src_dir, $base_dest_dir, "", $filterfn); } @@ -89,12 +97,8 @@ sub _copypath_recurse return 1 unless &$filterfn($curr_path); # Check for symlink -- needed only on source dir - die "Cannot operate on symlinks" if -l $srcpath; - - # Can't handle symlinks or other weird things - die "Source path \"$srcpath\" is not a regular file or directory" - unless -f $srcpath - or -d $srcpath; + # (note: this will fall through quietly if file is already gone) + die "Cannot operate on symlink \"$srcpath\"" if -l $srcpath; # Abort if destination path already exists. Should we allow directories # to exist already? @@ -104,25 +108,47 @@ sub _copypath_recurse # same name and we're done. 
if (-f $srcpath) { - copy($srcpath, $destpath) + my $fh; + unless (open($fh, '<', $srcpath)) + { + return 1 if ($!{ENOENT}); + die "open($srcpath) failed: $!"; + } + copy($fh, $destpath) or die "copy $srcpath -> $destpath failed: $!"; + close $fh; return 1; } - # Otherwise this is directory: create it on dest and recurse onto it. - mkdir($destpath) or die "mkdir($destpath) failed: $!"; - - opendir(my $directory, $srcpath) or die "could not opendir($srcpath): $!"; - while (my $entry = readdir($directory)) + # If it's a directory, create it on dest and recurse into it. + if (-d $srcpath) { - next if ($entry eq '.' or $entry eq '..'); - _copypath_recurse($base_src_dir, $base_dest_dir, - $curr_path eq '' ? $entry : "$curr_path/$entry", $filterfn) - or die "copypath $srcpath/$entry -> $destpath/$entry failed"; + my $directory; + unless (opendir($directory, $srcpath)) + { + return 1 if ($!{ENOENT}); + die "opendir($srcpath) failed: $!"; + } + + mkdir($destpath) or die "mkdir($destpath) failed: $!"; + + while (my $entry = readdir($directory)) + { + next if ($entry eq '.' or $entry eq '..'); + _copypath_recurse($base_src_dir, $base_dest_dir, + $curr_path eq '' ? $entry : "$curr_path/$entry", $filterfn) + or die "copypath $srcpath/$entry -> $destpath/$entry failed"; + } + + closedir($directory); + return 1; } - closedir($directory); - return 1; + # If it disappeared from sight, that's OK. + return 1 if !-e $srcpath; + + # Else it's some weird file type; complain. + die "Source path \"$srcpath\" is not a regular file or directory"; } 1; diff --git a/src/test/recovery/t/010_logical_decoding_timelines.pl b/src/test/recovery/t/010_logical_decoding_timelines.pl index edc0219c9c..5620450acf 100644 --- a/src/test/recovery/t/010_logical_decoding_timelines.pl +++ b/src/test/recovery/t/010_logical_decoding_timelines.pl @@ -24,7 +24,6 @@ use PostgresNode; use TestLib; use Test::More tests => 13; -use RecursiveCopy; use File::Copy; use IPC::Run (); use Scalar::Util qw(blessed); From 35e15688269a2af13f4cddff0c13536a9a42115d Mon Sep 17 00:00:00 2001 From: Michael Meskes Date: Mon, 11 Sep 2017 21:10:36 +0200 Subject: [PATCH 0149/1087] Fixed ECPG to correctly handle out-of-scope cursor declarations with pointers or array variables. --- src/interfaces/ecpg/preproc/ecpg.header | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/interfaces/ecpg/preproc/ecpg.header b/src/interfaces/ecpg/preproc/ecpg.header index e28d7e694d..8921bcbeae 100644 --- a/src/interfaces/ecpg/preproc/ecpg.header +++ b/src/interfaces/ecpg/preproc/ecpg.header @@ -352,7 +352,7 @@ adjust_outofscope_cursor_vars(struct cursor *cur) else { newvar = new_variable(cat_str(4, mm_strdup("("), - mm_strdup(ecpg_type_name(ptr->variable->type->type)), + mm_strdup(ecpg_type_name(ptr->variable->type->u.element->type)), mm_strdup(" *)(ECPGget_var("), mm_strdup(var_text)), ECPGmake_array_type(ECPGmake_simple_type(ptr->variable->type->u.element->type, From 83aaac41c66959a3ebaec7daadc4885b5f98f561 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 12 Sep 2017 09:46:14 -0400 Subject: [PATCH 0150/1087] Allow custom search filters to be configured for LDAP auth Before, only filters of the form "(attribute=username)" could be used to search an LDAP server. Introduce ldapsearchfilter so that more general filters can be configured using patterns, like "(|(uid=$username)(mail=$username))" and "(&(uid=$username) (objectClass=posixAccount))". Also allow search filters to be included in an LDAP URL.
Author: Thomas Munro Reviewed-By: Peter Eisentraut, Mark Cave-Ayland, Magnus Hagander Discussion: https://postgr.es/m/CAEepm=0XTkYvMci0WRubZcf_1am8=gP=7oJErpsUfRYcKF2gwg@mail.gmail.com --- doc/src/sgml/client-auth.sgml | 43 +++++++++++++++++++++++++++++--- src/backend/libpq/auth.c | 44 ++++++++++++++++++++++++++------ src/backend/libpq/hba.c | 47 +++++++++++++++++++++++++---------- src/include/libpq/hba.h | 1 + 4 files changed, 110 insertions(+), 25 deletions(-) diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 1b568683a4..405bf26832 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -1507,6 +1507,17 @@ omicron bryanh guest1 + + ldapsearchfilter + + + The search filter to use when doing search+bind authentication. + Occurrences of $username will be replaced with the + user name. This allows for more flexible search filters than + ldapsearchattribute. + + + ldapurl @@ -1514,13 +1525,16 @@ omicron bryanh guest1 An RFC 4516 LDAP URL. This is an alternative way to write some of the other LDAP options in a more compact and standard form. The format is -ldap://host[:port]/basedn[?[attribute][?[scope]]] +ldap://host[:port]/basedn[?[attribute][?[scope][?[filter]]]] scope must be one of base, one, sub, - typically the latter. Only one attribute is used, and some other - components of standard LDAP URLs such as filters and extensions are - not supported. + typically the last. attribute can + nominate a single attribute, in which case it is used as a value for + ldapsearchattribute. If + attribute is empty then + filter can be used as a value for + ldapsearchfilter. @@ -1549,6 +1563,17 @@ ldap://host[:port]/ + + When using search+bind mode, the search can be performed using a single + attribute specified with ldapsearchattribute, or using + a custom search filter specified with + ldapsearchfilter. + Specifying ldapsearchattribute=foo is equivalent to + specifying ldapsearchfilter="(foo=$username)". If neither + option is specified the default is + ldapsearchattribute=uid. + + Here is an example for a simple-bind LDAP configuration: @@ -1584,6 +1609,16 @@ host ... ldap ldapurl="ldap://ldap.example.net/dc=example,dc=net?uid?sub" same URL format, so it will be easier to share the configuration. + + Here is an example for a search+bind configuration that uses + ldapsearchfilter instead of + ldapsearchattribute to allow authentication by + user ID or email address: + +host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapsearchfilter="(|(uid=$username)(mail=$username))" + + + Since LDAP often uses commas and spaces to separate the different diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index cb30fc7b71..62ff624dbd 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -2394,6 +2394,34 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) return STATUS_OK; } +/* Placeholders recognized by FormatSearchFilter. For now just one. */ +#define LPH_USERNAME "$username" +#define LPH_USERNAME_LEN (sizeof(LPH_USERNAME) - 1) + +/* + * Return a newly allocated C string copied from "pattern" with all + * occurrences of the placeholder "$username" replaced with "user_name". 
+ */ +static char * +FormatSearchFilter(const char *pattern, const char *user_name) +{ + StringInfoData output; + + initStringInfo(&output); + while (*pattern != '\0') + { + if (strncmp(pattern, LPH_USERNAME, LPH_USERNAME_LEN) == 0) + { + appendStringInfoString(&output, user_name); + pattern += LPH_USERNAME_LEN; + } + else + appendStringInfoChar(&output, *pattern++); + } + + return output.data; +} + /* * Perform LDAP authentication */ @@ -2437,7 +2465,7 @@ CheckLDAPAuth(Port *port) char *filter; LDAPMessage *search_message; LDAPMessage *entry; - char *attributes[2]; + char *attributes[] = { LDAP_NO_ATTRS, NULL }; char *dn; char *c; int count; @@ -2479,13 +2507,13 @@ CheckLDAPAuth(Port *port) return STATUS_ERROR; } - /* Fetch just one attribute, else *all* attributes are returned */ - attributes[0] = port->hba->ldapsearchattribute ? port->hba->ldapsearchattribute : "uid"; - attributes[1] = NULL; - - filter = psprintf("(%s=%s)", - attributes[0], - port->user_name); + /* Build a custom filter or a single attribute filter? */ + if (port->hba->ldapsearchfilter) + filter = FormatSearchFilter(port->hba->ldapsearchfilter, port->user_name); + else if (port->hba->ldapsearchattribute) + filter = psprintf("(%s=%s)", port->hba->ldapsearchattribute, port->user_name); + else + filter = psprintf("(uid=%s)", port->user_name); r = ldap_search_s(ldap, port->hba->ldapbasedn, diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c index ba011b6d61..b2c487a8e8 100644 --- a/src/backend/libpq/hba.c +++ b/src/backend/libpq/hba.c @@ -1505,22 +1505,24 @@ parse_hba_line(TokenizedLine *tok_line, int elevel) /* * LDAP can operate in two modes: either with a direct bind, using * ldapprefix and ldapsuffix, or using a search+bind, using - * ldapbasedn, ldapbinddn, ldapbindpasswd and ldapsearchattribute. - * Disallow mixing these parameters. + * ldapbasedn, ldapbinddn, ldapbindpasswd and one of + * ldapsearchattribute or ldapsearchfilter. Disallow mixing these + * parameters. */ if (parsedline->ldapprefix || parsedline->ldapsuffix) { if (parsedline->ldapbasedn || parsedline->ldapbinddn || parsedline->ldapbindpasswd || - parsedline->ldapsearchattribute) + parsedline->ldapsearchattribute || + parsedline->ldapsearchfilter) { ereport(elevel, (errcode(ERRCODE_CONFIG_FILE_ERROR), - errmsg("cannot use ldapbasedn, ldapbinddn, ldapbindpasswd, ldapsearchattribute, or ldapurl together with ldapprefix"), + errmsg("cannot use ldapbasedn, ldapbinddn, ldapbindpasswd, ldapsearchattribute, ldapsearchfilter or ldapurl together with ldapprefix"), errcontext("line %d of configuration file \"%s\"", line_num, HbaFileName))); - *err_msg = "cannot use ldapbasedn, ldapbinddn, ldapbindpasswd, ldapsearchattribute, or ldapurl together with ldapprefix"; + *err_msg = "cannot use ldapbasedn, ldapbinddn, ldapbindpasswd, ldapsearchattribute, ldapsearchfilter or ldapurl together with ldapprefix"; return NULL; } } @@ -1534,6 +1536,22 @@ parse_hba_line(TokenizedLine *tok_line, int elevel) *err_msg = "authentication method \"ldap\" requires argument \"ldapbasedn\", \"ldapprefix\", or \"ldapsuffix\" to be set"; return NULL; } + + /* + * When using search+bind, you can either use a simple attribute + * (defaulting to "uid") or a fully custom search filter. You can't + * do both. 
+ */ + if (parsedline->ldapsearchattribute && parsedline->ldapsearchfilter) + { + ereport(elevel, + (errcode(ERRCODE_CONFIG_FILE_ERROR), + errmsg("cannot use ldapsearchattribute together with ldapsearchfilter"), + errcontext("line %d of configuration file \"%s\"", + line_num, HbaFileName))); + *err_msg = "cannot use ldapsearchattribute together with ldapsearchfilter"; + return NULL; + } } if (parsedline->auth_method == uaRADIUS) @@ -1729,14 +1747,7 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline, hbaline->ldapsearchattribute = pstrdup(urldata->lud_attrs[0]); /* only use first one */ hbaline->ldapscope = urldata->lud_scope; if (urldata->lud_filter) - { - ereport(elevel, - (errcode(ERRCODE_CONFIG_FILE_ERROR), - errmsg("filters not supported in LDAP URLs"))); - *err_msg = "filters not supported in LDAP URLs"; - ldap_free_urldesc(urldata); - return false; - } + hbaline->ldapsearchfilter = pstrdup(urldata->lud_filter); ldap_free_urldesc(urldata); #else /* not OpenLDAP */ ereport(elevel, @@ -1788,6 +1799,11 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline, REQUIRE_AUTH_OPTION(uaLDAP, "ldapsearchattribute", "ldap"); hbaline->ldapsearchattribute = pstrdup(val); } + else if (strcmp(name, "ldapsearchfilter") == 0) + { + REQUIRE_AUTH_OPTION(uaLDAP, "ldapsearchfilter", "ldap"); + hbaline->ldapsearchfilter = pstrdup(val); + } else if (strcmp(name, "ldapbasedn") == 0) { REQUIRE_AUTH_OPTION(uaLDAP, "ldapbasedn", "ldap"); @@ -2266,6 +2282,11 @@ gethba_options(HbaLine *hba) CStringGetTextDatum(psprintf("ldapsearchattribute=%s", hba->ldapsearchattribute)); + if (hba->ldapsearchfilter) + options[noptions++] = + CStringGetTextDatum(psprintf("ldapsearchfilter=%s", + hba->ldapsearchfilter)); + if (hba->ldapscope) options[noptions++] = CStringGetTextDatum(psprintf("ldapscope=%d", hba->ldapscope)); diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h index 07d92d4f9f..e711bee8bf 100644 --- a/src/include/libpq/hba.h +++ b/src/include/libpq/hba.h @@ -80,6 +80,7 @@ typedef struct HbaLine char *ldapbinddn; char *ldapbindpasswd; char *ldapsearchattribute; + char *ldapsearchfilter; char *ldapbasedn; int ldapscope; char *ldapprefix; From 58bd60995f1c7470c0542f591b303bcc586a5d5f Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 12 Sep 2017 10:02:34 -0400 Subject: [PATCH 0151/1087] doc: Document default scope in LDAP URL --- doc/src/sgml/client-auth.sgml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 405bf26832..26c3d1242b 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -1529,7 +1529,8 @@ ldap://host[:port]/ scope must be one of base, one, sub, - typically the last. attribute can + typically the last. (The default is base, which + is normally not useful in this application.) attribute can nominate a single attribute, in which case it is used as a value for ldapsearchattribute. If attribute is empty then From 2eeaa74b5ba20bc75bbaf10837a1ae966094d6cc Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 12 Sep 2017 10:55:04 -0400 Subject: [PATCH 0152/1087] doc: Remove useless marked section This was left around when this text was moved from installation.sgml in c5ba11f8fb1701441b96a755ea410b96bfe36170. 
--- doc/src/sgml/runtime.sgml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index 088316cfb6..6c4c7f4a8e 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -1863,8 +1863,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` Install the new version of PostgreSQL as - outlined in - .]]> + outlined in . From 2d4a614e1ec34a746aca43d6a02aa3344dcf5fd4 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Tue, 12 Sep 2017 13:17:52 -0400 Subject: [PATCH 0153/1087] docs: improve pg_upgrade rsync instructions This explains how rsync accomplishes updating standby servers and clarifies the instructions. Reported-by: Andreas Joseph Krogh Discussion: https://postgr.es/m/VisenaEmail.10.2b4049e43870bd16.15d898d696f@tc7-visena Backpatch-through: 9.5 --- doc/src/sgml/ref/pgupgrade.sgml | 40 ++++++++++++++++++++------------- 1 file changed, 24 insertions(+), 16 deletions(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index d44431803b..f8d963019e 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -332,7 +332,7 @@ NET STOP postgresql-&majorversion; Also, if upgrading standby servers, change wal_level to replica in the postgresql.conf file on - the new master cluster. + the new primary cluster. @@ -425,8 +425,8 @@ pg_upgrade.exe linkend="streaming-replication">) or Log-Shipping (see ) standby servers, follow these steps to upgrade them. You will not be running pg_upgrade - on the standby servers, but rather rsync. Do not - start any servers yet. + on the standby servers, but rather rsync on the + primary. Do not start any servers yet. @@ -455,7 +455,7 @@ pg_upgrade.exe Install the same custom shared object files on the new standbys - that you installed in the new master cluster. + that you installed in the new primary cluster. @@ -482,25 +482,33 @@ pg_upgrade.exe Run <application>rsync</> - From a directory that is above the old and new database cluster - directories, run this for each standby: + From a directory on the primary server that is above the old and + new database cluster directories, run this on the + primary for each standby server: rsync --archive --delete --hard-links --size-only old_pgdata new_pgdata remote_dir where + + + What rsync does is to copy files from the + primary to the standby, and, if pg_upgrade's + @@ -518,7 +526,7 @@ rsync --archive --delete --hard-links --size-only old_pgdata new_pgdata remote_d Configure the servers for log shipping. (You do not need to run pg_start_backup() and pg_stop_backup() or take a file system backup as the standbys are still synchronized - with the master.) + with the primary.) From 6e7baa322773ff8c79d4d8883c99fdeff5bfa679 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 12 Sep 2017 12:13:12 -0700 Subject: [PATCH 0154/1087] Introduce BYTES unit for GUCs. This is already useful for track_activity_query_size, and will further be used in a later commit making the WAL segment size configurable. 
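To make the effect concrete: with GUC_UNIT_BYTE in place, a byte-valued parameter such as track_activity_query_size accepts the usual memory suffixes instead of only a bare number. A small sketch of the resulting usage (the values below are illustrative only, not part of this commit):

    -- both forms parse to the same internal value (16384 bytes);
    -- track_activity_query_size is a postmaster-level parameter,
    -- so a server restart is required before the new value applies
    ALTER SYSTEM SET track_activity_query_size = '16kB';
    ALTER SYSTEM SET track_activity_query_size = 16384;

    -- after the restart, the setting is reported back with a unit
    SHOW track_activity_query_size;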
Author: Beena Emerson Reviewed-By: Andres Freund Discussion: https://postgr.es/m/CAOG9ApEu8bXVwBxkOO9J7ZpM76TASK_vFMEEiCEjwhMmSLiaqQ@mail.gmail.com --- src/backend/utils/misc/guc.c | 14 +++++++++----- src/include/utils/guc.h | 1 + 2 files changed, 10 insertions(+), 5 deletions(-) diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index a05fb1a7eb..bc9f09a086 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -722,6 +722,11 @@ static const char *memory_units_hint = gettext_noop("Valid units for this parame static const unit_conversion memory_unit_conversion_table[] = { + {"GB", GUC_UNIT_BYTE, 1024 * 1024 * 1024}, + {"MB", GUC_UNIT_BYTE, 1024 * 1024}, + {"kB", GUC_UNIT_BYTE, 1024}, + {"B", GUC_UNIT_BYTE, 1}, + {"TB", GUC_UNIT_KB, 1024 * 1024 * 1024}, {"GB", GUC_UNIT_KB, 1024 * 1024}, {"MB", GUC_UNIT_KB, 1024}, @@ -2863,11 +2868,7 @@ static struct config_int ConfigureNamesInt[] = {"track_activity_query_size", PGC_POSTMASTER, RESOURCES_MEM, gettext_noop("Sets the size reserved for pg_stat_activity.query, in bytes."), NULL, - - /* - * There is no _bytes_ unit, so the user can't supply units for - * this. - */ + GUC_UNIT_BYTE }, &pgstat_track_activity_query_size, 1024, 100, 102400, @@ -8113,6 +8114,9 @@ GetConfigOptionByNum(int varnum, const char **values, bool *noshow) { switch (conf->flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME)) { + case GUC_UNIT_BYTE: + values[2] = "B"; + break; case GUC_UNIT_KB: values[2] = "kB"; break; diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h index c1870d2130..467125a09d 100644 --- a/src/include/utils/guc.h +++ b/src/include/utils/guc.h @@ -219,6 +219,7 @@ typedef enum #define GUC_UNIT_BLOCKS 0x2000 /* value is in blocks */ #define GUC_UNIT_XBLOCKS 0x3000 /* value is in xlog blocks */ #define GUC_UNIT_MB 0x4000 /* value is in megabytes */ +#define GUC_UNIT_BYTE 0x8000 /* value is in bytes */ #define GUC_UNIT_MEMORY 0xF000 /* mask for size-related units */ #define GUC_UNIT_MS 0x10000 /* value is in milliseconds */ From 69835bc8988812c960f4ed5aeee86b62ac73602a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 12 Sep 2017 19:27:48 -0400 Subject: [PATCH 0155/1087] Add psql variables to track success/failure of SQL queries. This patch adds ERROR, SQLSTATE, and ROW_COUNT, which are updated after every query, as well as LAST_ERROR_MESSAGE and LAST_ERROR_SQLSTATE, which are updated only when a query fails. The expected usage of these is for scripting. Fabien Coelho, reviewed by Pavel Stehule Discussion: https://postgr.es/m/alpine.DEB.2.20.1704042158020.12290@lancre --- doc/src/sgml/ref/psql-ref.sgml | 44 ++++++++++ src/bin/psql/common.c | 71 ++++++++++++++++ src/bin/psql/help.c | 11 ++- src/bin/psql/startup.c | 4 + src/test/regress/expected/psql.out | 131 +++++++++++++++++++++++++++++ src/test/regress/sql/psql.sql | 64 ++++++++++++++ 6 files changed, 324 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index a74caf8a6c..60bafa8175 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -3517,6 +3517,16 @@ bar + + ERROR + + + true if the last SQL query failed, false if + it succeeded. See also SQLSTATE. + + + + FETCH_COUNT @@ -3653,6 +3663,19 @@ bar + + LAST_ERROR_MESSAGE + LAST_ERROR_SQLSTATE + + + The primary error message and associated SQLSTATE code for the most + recent failed query in the current psql session, or + an empty string and 00000 if no error has occurred in + the current session. 
+ + + + ON_ERROR_ROLLBACK @@ -3732,6 +3755,16 @@ bar + + ROW_COUNT + + + The number of rows returned or affected by the last SQL query, or 0 + if the query failed or did not report a row count. + + + + SERVER_VERSION_NAME SERVER_VERSION_NUM @@ -3784,6 +3817,17 @@ bar + + SQLSTATE + + + The error code (see ) associated + with the last SQL query's failure, or 00000 if it + succeeded. + + + + USER diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c index b99705886f..9b59ee840b 100644 --- a/src/bin/psql/common.c +++ b/src/bin/psql/common.c @@ -548,11 +548,58 @@ AcceptResult(const PGresult *result) } +/* + * Set special variables from a query result + * - ERROR: true/false, whether an error occurred on this query + * - SQLSTATE: code of error, or "00000" if no error, or "" if unknown + * - ROW_COUNT: how many rows were returned or affected, or "0" + * - LAST_ERROR_SQLSTATE: same for last error + * - LAST_ERROR_MESSAGE: message of last error + * + * Note: current policy is to apply this only to the results of queries + * entered by the user, not queries generated by slash commands. + */ +static void +SetResultVariables(PGresult *results, bool success) +{ + if (success) + { + const char *ntuples = PQcmdTuples(results); + + SetVariable(pset.vars, "ERROR", "false"); + SetVariable(pset.vars, "SQLSTATE", "00000"); + SetVariable(pset.vars, "ROW_COUNT", *ntuples ? ntuples : "0"); + } + else + { + const char *code = PQresultErrorField(results, PG_DIAG_SQLSTATE); + const char *mesg = PQresultErrorField(results, PG_DIAG_MESSAGE_PRIMARY); + + SetVariable(pset.vars, "ERROR", "true"); + + /* + * If there is no SQLSTATE code, use an empty string. This can happen + * for libpq-detected errors (e.g., lost connection, ENOMEM). + */ + if (code == NULL) + code = ""; + SetVariable(pset.vars, "SQLSTATE", code); + SetVariable(pset.vars, "ROW_COUNT", "0"); + SetVariable(pset.vars, "LAST_ERROR_SQLSTATE", code); + SetVariable(pset.vars, "LAST_ERROR_MESSAGE", mesg ? mesg : ""); + } +} + + /* * ClearOrSaveResult * * If the result represents an error, remember it for possible display by * \errverbose. Otherwise, just PQclear() it. + * + * Note: current policy is to apply this to the results of all queries, + * including "back door" queries, for debugging's sake. It's OK to use + * PQclear() directly on results known to not be error results, however. 
*/ static void ClearOrSaveResult(PGresult *result) @@ -1107,6 +1154,8 @@ ProcessResult(PGresult **results) first_cycle = false; } + SetResultVariables(*results, success); + /* may need this to recover from conn loss during COPY */ if (!first_cycle && !CheckConnection()) return false; @@ -1526,6 +1575,7 @@ DescribeQuery(const char *query, double *elapsed_msec) if (PQresultStatus(results) != PGRES_COMMAND_OK) { psql_error("%s", PQerrorMessage(pset.db)); + SetResultVariables(results, false); ClearOrSaveResult(results); return false; } @@ -1599,6 +1649,7 @@ DescribeQuery(const char *query, double *elapsed_msec) _("The command has no result, or the result has no columns.\n")); } + SetResultVariables(results, OK); ClearOrSaveResult(results); return OK; @@ -1626,6 +1677,7 @@ ExecQueryUsingCursor(const char *query, double *elapsed_msec) bool is_pipe; bool is_pager = false; bool started_txn = false; + int64 total_tuples = 0; int ntuples; int fetch_count; char fetch_cmd[64]; @@ -1663,6 +1715,8 @@ ExecQueryUsingCursor(const char *query, double *elapsed_msec) results = PQexec(pset.db, buf.data); OK = AcceptResult(results) && (PQresultStatus(results) == PGRES_COMMAND_OK); + if (!OK) + SetResultVariables(results, OK); ClearOrSaveResult(results); termPQExpBuffer(&buf); if (!OK) @@ -1738,6 +1792,7 @@ ExecQueryUsingCursor(const char *query, double *elapsed_msec) OK = AcceptResult(results); Assert(!OK); + SetResultVariables(results, OK); ClearOrSaveResult(results); break; } @@ -1755,6 +1810,7 @@ ExecQueryUsingCursor(const char *query, double *elapsed_msec) */ ntuples = PQntuples(results); + total_tuples += ntuples; if (ntuples < fetch_count) { @@ -1816,6 +1872,21 @@ ExecQueryUsingCursor(const char *query, double *elapsed_msec) ClosePager(fout); } + if (OK) + { + /* + * We don't have a PGresult here, and even if we did it wouldn't have + * the right row count, so fake SetResultVariables(). In error cases, + * we already set the result variables above. + */ + char buf[32]; + + SetVariable(pset.vars, "ERROR", "false"); + SetVariable(pset.vars, "SQLSTATE", "00000"); + snprintf(buf, sizeof(buf), INT64_FORMAT, total_tuples); + SetVariable(pset.vars, "ROW_COUNT", buf); + } + cleanup: if (pset.timing) INSTR_TIME_SET_CURRENT(before); diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c index 4d1c0ec3c6..a926c40b9b 100644 --- a/src/bin/psql/help.c +++ b/src/bin/psql/help.c @@ -337,7 +337,7 @@ helpVariables(unsigned short int pager) * Windows builds currently print one more line than non-Windows builds. * Using the larger number is fine. */ - output = PageOutput(147, pager ? &(pset.popt.topt) : NULL); + output = PageOutput(156, pager ? 
&(pset.popt.topt) : NULL); fprintf(output, _("List of specially treated variables\n\n")); @@ -360,6 +360,8 @@ helpVariables(unsigned short int pager) " if set to \"noexec\", just show them without execution\n")); fprintf(output, _(" ENCODING\n" " current client character set encoding\n")); + fprintf(output, _(" ERROR\n" + " true if last query failed, else false\n")); fprintf(output, _(" FETCH_COUNT\n" " the number of result rows to fetch and display at a time (0 = unlimited)\n")); fprintf(output, _(" HISTCONTROL\n" @@ -374,6 +376,9 @@ helpVariables(unsigned short int pager) " number of EOFs needed to terminate an interactive session\n")); fprintf(output, _(" LASTOID\n" " value of the last affected OID\n")); + fprintf(output, _(" LAST_ERROR_MESSAGE\n" + " LAST_ERROR_SQLSTATE\n" + " message and SQLSTATE of last error, or empty string and \"00000\" if none\n")); fprintf(output, _(" ON_ERROR_ROLLBACK\n" " if set, an error doesn't stop a transaction (uses implicit savepoints)\n")); fprintf(output, _(" ON_ERROR_STOP\n" @@ -388,6 +393,8 @@ helpVariables(unsigned short int pager) " specifies the prompt used during COPY ... FROM STDIN\n")); fprintf(output, _(" QUIET\n" " run quietly (same as -q option)\n")); + fprintf(output, _(" ROW_COUNT\n" + " number of rows returned or affected by last query, or 0\n")); fprintf(output, _(" SERVER_VERSION_NAME\n" " SERVER_VERSION_NUM\n" " server's version (in short string or numeric format)\n")); @@ -397,6 +404,8 @@ helpVariables(unsigned short int pager) " if set, end of line terminates SQL commands (same as -S option)\n")); fprintf(output, _(" SINGLESTEP\n" " single-step mode (same as -s option)\n")); + fprintf(output, _(" SQLSTATE\n" + " SQLSTATE of last query, or \"00000\" if no error\n")); fprintf(output, _(" USER\n" " the currently connected database user\n")); fprintf(output, _(" VERBOSITY\n" diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c index 1e48f4ad5a..0dbd7841fb 100644 --- a/src/bin/psql/startup.c +++ b/src/bin/psql/startup.c @@ -165,6 +165,10 @@ main(int argc, char *argv[]) SetVariable(pset.vars, "VERSION_NAME", PG_VERSION); SetVariable(pset.vars, "VERSION_NUM", CppAsString2(PG_VERSION_NUM)); + /* Initialize variables for last error */ + SetVariable(pset.vars, "LAST_ERROR_MESSAGE", ""); + SetVariable(pset.vars, "LAST_ERROR_SQLSTATE", "00000"); + /* Default values for variables (that don't match the result of \unset) */ SetVariableBool(pset.vars, "AUTOCOMMIT"); SetVariable(pset.vars, "PROMPT1", DEFAULT_PROMPT1); diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out index bda8960bf3..aa72a5b1eb 100644 --- a/src/test/regress/expected/psql.out +++ b/src/test/regress/expected/psql.out @@ -3074,3 +3074,134 @@ SELECT 3 UNION SELECT 4 UNION SELECT 5 ORDER BY 1; +-- tests for special result variables +-- working query, 2 rows selected +SELECT 1 AS stuff UNION SELECT 2; + stuff +------- + 1 + 2 +(2 rows) + +\echo 'error:' :ERROR +error: false +\echo 'error code:' :SQLSTATE +error code: 00000 +\echo 'number of rows:' :ROW_COUNT +number of rows: 2 +-- syntax error +SELECT 1 UNION; +ERROR: syntax error at or near ";" +LINE 1: SELECT 1 UNION; + ^ +\echo 'error:' :ERROR +error: true +\echo 'error code:' :SQLSTATE +error code: 42601 +\echo 'number of rows:' :ROW_COUNT +number of rows: 0 +\echo 'last error message:' :LAST_ERROR_MESSAGE +last error message: syntax error at or near ";" +\echo 'last error code:' :LAST_ERROR_SQLSTATE +last error code: 42601 +-- empty query +; +\echo 'error:' :ERROR +error: false +\echo 'error 
code:' :SQLSTATE +error code: 00000 +\echo 'number of rows:' :ROW_COUNT +number of rows: 0 +-- must have kept previous values +\echo 'last error message:' :LAST_ERROR_MESSAGE +last error message: syntax error at or near ";" +\echo 'last error code:' :LAST_ERROR_SQLSTATE +last error code: 42601 +-- other query error +DROP TABLE this_table_does_not_exist; +ERROR: table "this_table_does_not_exist" does not exist +\echo 'error:' :ERROR +error: true +\echo 'error code:' :SQLSTATE +error code: 42P01 +\echo 'number of rows:' :ROW_COUNT +number of rows: 0 +\echo 'last error message:' :LAST_ERROR_MESSAGE +last error message: table "this_table_does_not_exist" does not exist +\echo 'last error code:' :LAST_ERROR_SQLSTATE +last error code: 42P01 +-- working \gdesc +SELECT 3 AS three, 4 AS four \gdesc + Column | Type +--------+--------- + three | integer + four | integer +(2 rows) + +\echo 'error:' :ERROR +error: false +\echo 'error code:' :SQLSTATE +error code: 00000 +\echo 'number of rows:' :ROW_COUNT +number of rows: 2 +-- \gdesc with an error +SELECT 4 AS \gdesc +ERROR: syntax error at end of input +LINE 1: SELECT 4 AS + ^ +\echo 'error:' :ERROR +error: true +\echo 'error code:' :SQLSTATE +error code: 42601 +\echo 'number of rows:' :ROW_COUNT +number of rows: 0 +\echo 'last error message:' :LAST_ERROR_MESSAGE +last error message: syntax error at end of input +\echo 'last error code:' :LAST_ERROR_SQLSTATE +last error code: 42601 +-- check row count for a cursor-fetched query +\set FETCH_COUNT 10 +select unique2 from tenk1 limit 19; + unique2 +--------- + 0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 12 + 13 + 14 + 15 + 16 + 17 + 18 +(19 rows) + +\echo 'error:' :ERROR +error: false +\echo 'error code:' :SQLSTATE +error code: 00000 +\echo 'number of rows:' :ROW_COUNT +number of rows: 19 +-- cursor-fetched query with an error +select 1/unique1 from tenk1; +ERROR: division by zero +\echo 'error:' :ERROR +error: true +\echo 'error code:' :SQLSTATE +error code: 22012 +\echo 'number of rows:' :ROW_COUNT +number of rows: 0 +\echo 'last error message:' :LAST_ERROR_MESSAGE +last error message: division by zero +\echo 'last error code:' :LAST_ERROR_SQLSTATE +last error code: 22012 +\unset FETCH_COUNT diff --git a/src/test/regress/sql/psql.sql b/src/test/regress/sql/psql.sql index 0556b7c159..29a17e1ae4 100644 --- a/src/test/regress/sql/psql.sql +++ b/src/test/regress/sql/psql.sql @@ -606,3 +606,67 @@ UNION SELECT 5 ORDER BY 1; \r \p + +-- tests for special result variables + +-- working query, 2 rows selected +SELECT 1 AS stuff UNION SELECT 2; +\echo 'error:' :ERROR +\echo 'error code:' :SQLSTATE +\echo 'number of rows:' :ROW_COUNT + +-- syntax error +SELECT 1 UNION; +\echo 'error:' :ERROR +\echo 'error code:' :SQLSTATE +\echo 'number of rows:' :ROW_COUNT +\echo 'last error message:' :LAST_ERROR_MESSAGE +\echo 'last error code:' :LAST_ERROR_SQLSTATE + +-- empty query +; +\echo 'error:' :ERROR +\echo 'error code:' :SQLSTATE +\echo 'number of rows:' :ROW_COUNT +-- must have kept previous values +\echo 'last error message:' :LAST_ERROR_MESSAGE +\echo 'last error code:' :LAST_ERROR_SQLSTATE + +-- other query error +DROP TABLE this_table_does_not_exist; +\echo 'error:' :ERROR +\echo 'error code:' :SQLSTATE +\echo 'number of rows:' :ROW_COUNT +\echo 'last error message:' :LAST_ERROR_MESSAGE +\echo 'last error code:' :LAST_ERROR_SQLSTATE + +-- working \gdesc +SELECT 3 AS three, 4 AS four \gdesc +\echo 'error:' :ERROR +\echo 'error code:' :SQLSTATE +\echo 'number of rows:' :ROW_COUNT + +-- \gdesc with an error 
+SELECT 4 AS \gdesc +\echo 'error:' :ERROR +\echo 'error code:' :SQLSTATE +\echo 'number of rows:' :ROW_COUNT +\echo 'last error message:' :LAST_ERROR_MESSAGE +\echo 'last error code:' :LAST_ERROR_SQLSTATE + +-- check row count for a cursor-fetched query +\set FETCH_COUNT 10 +select unique2 from tenk1 limit 19; +\echo 'error:' :ERROR +\echo 'error code:' :SQLSTATE +\echo 'number of rows:' :ROW_COUNT + +-- cursor-fetched query with an error +select 1/unique1 from tenk1; +\echo 'error:' :ERROR +\echo 'error code:' :SQLSTATE +\echo 'number of rows:' :ROW_COUNT +\echo 'last error message:' :LAST_ERROR_MESSAGE +\echo 'last error code:' :LAST_ERROR_SQLSTATE + +\unset FETCH_COUNT From 1a2fdc99a4b341feb6c01304e58f01dd0e095d9a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 13 Sep 2017 08:20:45 -0400 Subject: [PATCH 0156/1087] Define LDAP_NO_ATTRS if necessary. Commit 83aaac41c66959a3ebaec7daadc4885b5f98f561 introduced the use of LDAP_NO_ATTRS to avoid requesting a dummy attribute when doing search+bind LDAP authentication. It turns out that not all LDAP implementations define that macro, but its value is fixed by the protocol so we can define it ourselves if it's missing. Author: Thomas Munro Reported-By: Ashutosh Sharma Discussion: https://postgr.es/m/CAE9k0Pm6FKCfPCiAr26-L_SMGOA7dT_k0%2B3pEbB8%2B-oT39xRpw%40mail.gmail.com --- src/backend/libpq/auth.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 62ff624dbd..39a57d4835 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -2398,6 +2398,11 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) #define LPH_USERNAME "$username" #define LPH_USERNAME_LEN (sizeof(LPH_USERNAME) - 1) +/* Not all LDAP implementations define this. */ +#ifndef LDAP_NO_ATTRS +#define LDAP_NO_ATTRS "1.1" +#endif + /* * Return a newly allocated C string copied from "pattern" with all * occurrences of the placeholder "$username" replaced with "user_name". From 61975d6c2cf5bbcf40a2e3160914ecad7a21df1a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 13 Sep 2017 08:31:03 -0400 Subject: [PATCH 0157/1087] Improve error message in WAL sender The previous error message when attempting to run a general SQL command in a physical replication WAL sender was a bit sloppy. Reported-by: Fujii Masao --- src/backend/replication/walsender.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index db346e6edb..1fbe8ed71b 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -1545,7 +1545,7 @@ exec_replication_command(const char *cmd_string) case T_SQLCmd: if (MyDatabaseId == InvalidOid) ereport(ERROR, - (errmsg("not connected to database"))); + (errmsg("cannot execute SQL commands in WAL sender for physical replication"))); /* Tell the caller that this wasn't a WalSender command. */ return false; From 9521ce4a7a1125385fb4de9689f345db594c516a Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 13 Sep 2017 09:11:28 -0400 Subject: [PATCH 0158/1087] docs: improve pg_upgrade standby instructions This makes it clear that pg_upgrade standby upgrade instructions should only be used in link mode, adds examples, and explains how rsync works with links. 
Reported-by: Andreas Joseph Krogh Discussion: https://postgr.es/m/VisenaEmail.6c.c0e592c5af4ef0a2.15e785dcb61@tc7-visena Backpatch-through: 9.5 --- doc/src/sgml/ref/pgupgrade.sgml | 64 +++++++++++++++++++++------------ 1 file changed, 41 insertions(+), 23 deletions(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index f8d963019e..ab2defe039 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -423,10 +423,12 @@ pg_upgrade.exe If you have Streaming Replication (see ) or Log-Shipping (see ) standby servers, follow these steps to - upgrade them. You will not be running pg_upgrade - on the standby servers, but rather rsync on the - primary. Do not start any servers yet. + linkend="warm-standby">) standby servers, and used link mode, + follow these steps to upgrade them. You will not be running + pg_upgrade on the standby servers, but rather + rsync on the primary. Do not start any servers yet. + If you did not use link mode, skip the instructions in + this section and simply recreate the standby servers. @@ -482,9 +484,11 @@ pg_upgrade.exe Run <application>rsync</> - From a directory on the primary server that is above the old and - new database cluster directories, run this on the - primary for each standby server: + When using link mode, standby servers can be quickly upgraded using + rsync. To accomplish this, from a directory on + the primary server that is above the old and new database cluster + directories, run this on the primary for each standby + server: rsync --archive --delete --hard-links --size-only old_pgdata new_pgdata remote_dir @@ -492,30 +496,44 @@ rsync --archive --delete --hard-links --size-only old_pgdata new_pgdata remote_d where - What rsync does is to copy files from the - primary to the standby, and, if pg_upgrade's - If you have tablespaces, you will need to run a similar - rsync command for each tablespace directory. If you - have relocated pg_wal outside the data directories, - rsync must be run on those directories too. + rsync command for each tablespace directory, e.g.: + + +rsync --archive --delete --hard-links --size-only /vol1/pg_tblsp/PG_9.5_201510051 \ + /vol1/pg_tblsp/PG_9.6_201608131 standby.example.com:/vol1/pg_tblsp + + + If you have relocated pg_wal outside the data + directories, rsync must be run on those directories + too. From 82e367ddbfdf798ea8a30da15db3984017277342 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 13 Sep 2017 09:22:18 -0400 Subject: [PATCH 0159/1087] docs: adjust "link mode" mention in pg_upgrade streaming steps Backpatch-through: 9.5 --- doc/src/sgml/ref/pgupgrade.sgml | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index ab2defe039..60011d8167 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -421,14 +421,14 @@ pg_upgrade.exe Upgrade Streaming Replication and Log-Shipping standby servers - If you have Streaming Replication (see ) or Log-Shipping (see ) standby servers, and used link mode, - follow these steps to upgrade them. You will not be running - pg_upgrade on the standby servers, but rather - rsync on the primary. Do not start any servers yet. - If you did not use link mode, skip the instructions in - this section and simply recreate the standby servers. + linkend="warm-standby">) standby servers, follow these steps to + upgrade them. 
You will not be running pg_upgrade on + the standby servers, but rather rsync on the primary. + Do not start any servers yet. If you did not use link + mode, skip the instructions in this section and simply recreate the + standby servers. From 089880ba9af5f95e1a3b050874a90dbe5c33fd61 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 13 Sep 2017 10:10:34 -0400 Subject: [PATCH 0160/1087] doc: Remove incorrect SCRAM protocol documentation The documentation claimed that one should send "pg_same_as_startup_message" as the user name in the SCRAM messages, but this did not match the actual implementation, so remove it. --- doc/src/sgml/protocol.sgml | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 76d1c13cc4..526e8011de 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -1529,9 +1529,7 @@ that the client sends in the client-first-message. The user name that was already sent in the startup message is used instead. PostgreSQL supports multiple character encodings, while SCRAM dictates UTF-8 to be used for the user name, so it might be impossible to -represent the PostgreSQL user name in UTF-8. To avoid confusion, the client -should use pg_same_as_startup_message as the user name in the -client-first-message. +represent the PostgreSQL user name in UTF-8. From 7d08ce286cd5854d58152e428c28636a616bdc42 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 13 Sep 2017 11:12:39 -0400 Subject: [PATCH 0161/1087] Distinguish selectivity of < from <= and > from >=. Historically, the selectivity functions have simply not distinguished < from <=, or > from >=, arguing that the fraction of the population that satisfies the "=" aspect can be considered to be vanishingly small, if the comparison value isn't any of the most-common-values for the variable. (If it is, the code path that executes the operator against each MCV will take care of things properly.) But that isn't really true unless we're dealing with a continuum of variable values, and in practice we seldom are. If "x = const" would estimate a nonzero number of rows for a given const value, then it follows that we ought to estimate different numbers of rows for "x < const" and "x <= const", even if the const is not one of the MCVs. Handling this more honestly makes a significant difference in edge cases, such as the estimate for a tight range (x BETWEEN y AND z where y and z are close together). Hence, split scalarltsel into scalarltsel/scalarlesel, and similarly split scalargtsel into scalargtsel/scalargesel. Adjust <= and >= operator definitions to reference the new selectivity functions. Improve the core ineq_histogram_selectivity() function to make a correction for equality. (Along the way, I learned quite a bit about exactly why that function gives good answers, which I tried to memorialize in improved comments.) The corresponding join selectivity functions were, and remain, just stubs. But I chose to split them similarly, to avoid confusion and to prevent the need for doing this exercise again if someone ever makes them less stubby. In passing, change ineq_histogram_selectivity's clamp for extreme probability estimates so that it varies depending on the histogram size, instead of being hardwired at 0.0001. With the default histogram size of 100 entries, you still get the old clamp value, but bigger histograms should allow us to put more faith in edge values. 
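To see the practical effect, a small sketch under stated assumptions (the table t is hypothetical, and the exact numbers depend on the histogram that ANALYZE collects). Note also the clamp arithmetic: with the default 100-entry histogram the cutoff works out to 0.01/(100-1), i.e. roughly the old 0.0001.

    -- Hypothetical table whose histogram lets the planner tell "<" from "<=".
    CREATE TABLE t AS SELECT g AS x FROM generate_series(1, 10000) g;
    ANALYZE t;

    -- Before this commit both queries used scalarltsel and produced identical
    -- row estimates; now the second goes through scalarlesel and is credited
    -- with roughly the extra fraction satisfying "x = 42".
    EXPLAIN SELECT * FROM t WHERE x < 42;
    EXPLAIN SELECT * FROM t WHERE x <= 42;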
Tom Lane, reviewed by Aleksander Alekseev and Kuntal Ghosh Discussion: https://postgr.es/m/12232.1499140410@sss.pgh.pa.us --- contrib/citext/Makefile | 3 +- contrib/citext/citext--1.4--1.5.sql | 14 ++ contrib/citext/citext.control | 2 +- contrib/cube/Makefile | 3 +- contrib/cube/cube--1.2--1.3.sql | 12 + contrib/cube/cube.control | 2 +- contrib/hstore/Makefile | 3 +- contrib/hstore/hstore--1.4--1.5.sql | 14 ++ contrib/hstore/hstore.control | 2 +- contrib/isn/Makefile | 3 +- contrib/isn/isn--1.1--1.2.sql | 228 +++++++++++++++++ contrib/isn/isn.control | 2 +- doc/src/sgml/xindex.sgml | 3 +- doc/src/sgml/xoper.sgml | 35 ++- src/backend/optimizer/path/clausesel.c | 14 +- src/backend/utils/adt/network.c | 4 +- src/backend/utils/adt/selfuncs.c | 323 ++++++++++++++++--------- src/include/catalog/catversion.h | 2 +- src/include/catalog/pg_operator.h | 224 ++++++++--------- src/include/catalog/pg_proc.h | 9 + src/tutorial/complex.source | 4 +- 21 files changed, 633 insertions(+), 273 deletions(-) create mode 100644 contrib/citext/citext--1.4--1.5.sql create mode 100644 contrib/cube/cube--1.2--1.3.sql create mode 100644 contrib/hstore/hstore--1.4--1.5.sql create mode 100644 contrib/isn/isn--1.1--1.2.sql diff --git a/contrib/citext/Makefile b/contrib/citext/Makefile index 563cd22dcc..e32a7de946 100644 --- a/contrib/citext/Makefile +++ b/contrib/citext/Makefile @@ -3,7 +3,8 @@ MODULES = citext EXTENSION = citext -DATA = citext--1.4.sql citext--1.3--1.4.sql \ +DATA = citext--1.4.sql citext--1.4--1.5.sql \ + citext--1.3--1.4.sql \ citext--1.2--1.3.sql citext--1.1--1.2.sql \ citext--1.0--1.1.sql citext--unpackaged--1.0.sql PGFILEDESC = "citext - case-insensitive character string data type" diff --git a/contrib/citext/citext--1.4--1.5.sql b/contrib/citext/citext--1.4--1.5.sql new file mode 100644 index 0000000000..97942cb7bf --- /dev/null +++ b/contrib/citext/citext--1.4--1.5.sql @@ -0,0 +1,14 @@ +/* contrib/citext/citext--1.4--1.5.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION citext UPDATE TO '1.5'" to load this file. \quit + +ALTER OPERATOR <= (citext, citext) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel +); + +ALTER OPERATOR >= (citext, citext) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel +); diff --git a/contrib/citext/citext.control b/contrib/citext/citext.control index 17fce4e887..4cd6e09331 100644 --- a/contrib/citext/citext.control +++ b/contrib/citext/citext.control @@ -1,5 +1,5 @@ # citext extension comment = 'data type for case-insensitive character strings' -default_version = '1.4' +default_version = '1.5' module_pathname = '$libdir/citext' relocatable = true diff --git a/contrib/cube/Makefile b/contrib/cube/Makefile index be7a1bc1a0..244c1d9bbf 100644 --- a/contrib/cube/Makefile +++ b/contrib/cube/Makefile @@ -4,7 +4,8 @@ MODULE_big = cube OBJS= cube.o cubeparse.o $(WIN32RES) EXTENSION = cube -DATA = cube--1.2.sql cube--1.1--1.2.sql cube--1.0--1.1.sql \ +DATA = cube--1.2.sql cube--1.2--1.3.sql \ + cube--1.1--1.2.sql cube--1.0--1.1.sql \ cube--unpackaged--1.0.sql PGFILEDESC = "cube - multidimensional cube data type" diff --git a/contrib/cube/cube--1.2--1.3.sql b/contrib/cube/cube--1.2--1.3.sql new file mode 100644 index 0000000000..a688f19f02 --- /dev/null +++ b/contrib/cube/cube--1.2--1.3.sql @@ -0,0 +1,12 @@ +/* contrib/cube/cube--1.2--1.3.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION cube UPDATE TO '1.3'" to load this file. 
\quit + +ALTER OPERATOR <= (cube, cube) SET ( + RESTRICT = scalarlesel, JOIN = scalarlejoinsel +); + +ALTER OPERATOR >= (cube, cube) SET ( + RESTRICT = scalargesel, JOIN = scalargejoinsel +); diff --git a/contrib/cube/cube.control b/contrib/cube/cube.control index b03cfa0a58..af062d4843 100644 --- a/contrib/cube/cube.control +++ b/contrib/cube/cube.control @@ -1,5 +1,5 @@ # cube extension comment = 'data type for multidimensional cubes' -default_version = '1.2' +default_version = '1.3' module_pathname = '$libdir/cube' relocatable = true diff --git a/contrib/hstore/Makefile b/contrib/hstore/Makefile index 311cc099e5..ab7fef3979 100644 --- a/contrib/hstore/Makefile +++ b/contrib/hstore/Makefile @@ -5,7 +5,8 @@ OBJS = hstore_io.o hstore_op.o hstore_gist.o hstore_gin.o hstore_compat.o \ $(WIN32RES) EXTENSION = hstore -DATA = hstore--1.4.sql hstore--1.3--1.4.sql hstore--1.2--1.3.sql \ +DATA = hstore--1.4.sql hstore--1.4--1.5.sql \ + hstore--1.3--1.4.sql hstore--1.2--1.3.sql \ hstore--1.1--1.2.sql hstore--1.0--1.1.sql \ hstore--unpackaged--1.0.sql PGFILEDESC = "hstore - key/value pair data type" diff --git a/contrib/hstore/hstore--1.4--1.5.sql b/contrib/hstore/hstore--1.4--1.5.sql new file mode 100644 index 0000000000..92c1832dce --- /dev/null +++ b/contrib/hstore/hstore--1.4--1.5.sql @@ -0,0 +1,14 @@ +/* contrib/hstore/hstore--1.4--1.5.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION hstore UPDATE TO '1.5'" to load this file. \quit + +ALTER OPERATOR #<=# (hstore, hstore) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel +); + +ALTER OPERATOR #>=# (hstore, hstore) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel +); diff --git a/contrib/hstore/hstore.control b/contrib/hstore/hstore.control index f99a937acc..8a719475b8 100644 --- a/contrib/hstore/hstore.control +++ b/contrib/hstore/hstore.control @@ -1,5 +1,5 @@ # hstore extension comment = 'data type for storing sets of (key, value) pairs' -default_version = '1.4' +default_version = '1.5' module_pathname = '$libdir/hstore' relocatable = true diff --git a/contrib/isn/Makefile b/contrib/isn/Makefile index 9543a4b1cf..ab6b175f9a 100644 --- a/contrib/isn/Makefile +++ b/contrib/isn/Makefile @@ -3,7 +3,8 @@ MODULES = isn EXTENSION = isn -DATA = isn--1.1.sql isn--1.0--1.1.sql isn--unpackaged--1.0.sql +DATA = isn--1.1.sql isn--1.1--1.2.sql \ + isn--1.0--1.1.sql isn--unpackaged--1.0.sql PGFILEDESC = "isn - data types for international product numbering standards" REGRESS = isn diff --git a/contrib/isn/isn--1.1--1.2.sql b/contrib/isn/isn--1.1--1.2.sql new file mode 100644 index 0000000000..d626a5f44d --- /dev/null +++ b/contrib/isn/isn--1.1--1.2.sql @@ -0,0 +1,228 @@ +/* contrib/isn/isn--1.1--1.2.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION isn UPDATE TO '1.2'" to load this file. 
\quit + +ALTER OPERATOR <= (ean13, ean13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ean13, ean13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ean13, isbn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ean13, isbn13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (isbn13, ean13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (isbn13, ean13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ean13, ismn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ean13, ismn13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ismn13, ean13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ismn13, ean13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ean13, issn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ean13, issn13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ean13, isbn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ean13, isbn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ean13, ismn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ean13, ismn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ean13, issn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ean13, issn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ean13, upc) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ean13, upc) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (isbn13, isbn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (isbn13, isbn13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (isbn13, isbn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (isbn13, isbn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (isbn, isbn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (isbn, isbn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (isbn, isbn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (isbn, isbn13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (isbn, ean13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (isbn, ean13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ismn13, ismn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ismn13, ismn13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ismn13, ismn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ismn13, ismn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ismn, ismn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ismn, ismn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ismn, ismn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ismn, ismn13) 
SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (ismn, ean13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (ismn, ean13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (issn13, issn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (issn13, issn13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (issn13, issn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (issn13, issn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (issn13, ean13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (issn13, ean13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (issn, issn) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (issn, issn) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (issn, issn13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (issn, issn13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (issn, ean13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (issn, ean13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (upc, upc) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (upc, upc) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); + +ALTER OPERATOR <= (upc, ean13) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel); + +ALTER OPERATOR >= (upc, ean13) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel); diff --git a/contrib/isn/isn.control b/contrib/isn/isn.control index 544bd8d0bf..765dce0e0a 100644 --- a/contrib/isn/isn.control +++ b/contrib/isn/isn.control @@ -1,5 +1,5 @@ # isn extension comment = 'data types for international product numbering standards' -default_version = '1.1' +default_version = '1.2' module_pathname = '$libdir/isn' relocatable = true diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index 745b4d5619..b951a58e0a 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -801,8 +801,7 @@ CREATE OPERATOR < ( It is important to specify the correct commutator and negator operators, as well as suitable restriction and join selectivity functions, otherwise the optimizer will be unable to make effective - use of the index. Note that the less-than, equal, and - greater-than cases should use different selectivity functions. + use of the index. diff --git a/doc/src/sgml/xoper.sgml b/doc/src/sgml/xoper.sgml index 8568e21216..d484d80105 100644 --- a/doc/src/sgml/xoper.sgml +++ b/doc/src/sgml/xoper.sgml @@ -242,20 +242,11 @@ column OP constant eqsel for = neqsel for <> - scalarltsel for < or <= - scalargtsel for > or >= - - It might seem a little odd that these are the categories, but they - make sense if you think about it. = will typically accept only - a small fraction of the rows in a table; <> will typically reject - only a small fraction. < will accept a fraction that depends on - where the given constant falls in the range of values for that table - column (which, it just so happens, is information collected by - ANALYZE and made available to the selectivity estimator). 
- <= will accept a slightly larger fraction than < for the same - comparison constant, but they're close enough to not be worth - distinguishing, especially since we're not likely to do better than a - rough guess anyhow. Similar remarks apply to > and >=. + scalarltsel for < + scalarlesel for <= + scalargtsel for > + scalargesel for >= + @@ -267,10 +258,12 @@ column OP constant - You can use scalarltsel and scalargtsel for comparisons on data types that - have some sensible means of being converted into numeric scalars for - range comparisons. If possible, add the data type to those understood - by the function convert_to_scalar() in src/backend/utils/adt/selfuncs.c. + You can use scalarltsel, scalarlesel, + scalargtsel and scalargesel for comparisons on + data types that have some sensible means of being converted into numeric + scalars for range comparisons. If possible, add the data type to those + understood by the function convert_to_scalar() in + src/backend/utils/adt/selfuncs.c. (Eventually, this function should be replaced by per-data-type functions identified through a column of the pg_type system catalog; but that hasn't happened yet.) If you do not do this, things will still work, but the optimizer's @@ -310,8 +303,10 @@ table1.column1 OP table2.column2 eqjoinsel for = neqjoinsel for <> - scalarltjoinsel for < or <= - scalargtjoinsel for > or >= + scalarltjoinsel for < + scalarlejoinsel for <= + scalargtjoinsel for > + scalargejoinsel for >= areajoinsel for 2D area-based comparisons positionjoinsel for 2D position-based comparisons contjoinsel for 2D containment-based comparisons diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.c index 9d340255c3..b4cbc34ef1 100644 --- a/src/backend/optimizer/path/clausesel.c +++ b/src/backend/optimizer/path/clausesel.c @@ -71,7 +71,7 @@ static RelOptInfo *find_single_rel_for_clauses(PlannerInfo *root, * * We also recognize "range queries", such as "x > 34 AND x < 42". Clauses * are recognized as possible range query components if they are restriction - * opclauses whose operators have scalarltsel() or scalargtsel() as their + * opclauses whose operators have scalarltsel or a related function as their * restriction selectivity estimator. We pair up clauses of this form that * refer to the same variable. An unpairable clause of this kind is simply * multiplied into the selectivity product in the normal way. But when we @@ -92,8 +92,8 @@ static RelOptInfo *find_single_rel_for_clauses(PlannerInfo *root, * A free side-effect is that we can recognize redundant inequalities such * as "x < 4 AND x < 5"; only the tighter constraint will be counted. * - * Of course this is all very dependent on the behavior of - * scalarltsel/scalargtsel; perhaps some day we can generalize the approach. + * Of course this is all very dependent on the behavior of the inequality + * selectivity functions; perhaps some day we can generalize the approach. */ Selectivity clauselist_selectivity(PlannerInfo *root, @@ -218,17 +218,19 @@ clauselist_selectivity(PlannerInfo *root, if (ok) { /* - * If it's not a "<" or ">" operator, just merge the + * If it's not a "<"/"<="/">"/">=" operator, just merge the * selectivity in generically. But if it's the right oprrest, * add the clause to rqlist for later processing. 
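					 * (Thus "x < 10" and "x <= 10" both reach addRangeClause
					 * below, arriving via F_SCALARLTSEL and F_SCALARLESEL
					 * respectively.)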
*/ switch (get_oprrest(expr->opno)) { case F_SCALARLTSEL: + case F_SCALARLESEL: addRangeClause(&rqlist, clause, varonleft, true, s2); break; case F_SCALARGTSEL: + case F_SCALARGESEL: addRangeClause(&rqlist, clause, varonleft, false, s2); break; @@ -368,7 +370,7 @@ addRangeClause(RangeQueryClause **rqlist, Node *clause, /*------ * We have found two similar clauses, such as - * x < y AND x < z. + * x < y AND x <= z. * Keep only the more restrictive one. *------ */ @@ -388,7 +390,7 @@ addRangeClause(RangeQueryClause **rqlist, Node *clause, /*------ * We have found two similar clauses, such as - * x > y AND x > z. + * x > y AND x >= z. * Keep only the more restrictive one. *------ */ diff --git a/src/backend/utils/adt/network.c b/src/backend/utils/adt/network.c index ec4ac20bb7..aac7621717 100644 --- a/src/backend/utils/adt/network.c +++ b/src/backend/utils/adt/network.c @@ -957,8 +957,8 @@ convert_network_to_scalar(Datum value, Oid typid) } /* - * Can't get here unless someone tries to use scalarltsel/scalargtsel on - * an operator with one network and one non-network operand. + * Can't get here unless someone tries to use scalarineqsel() on an + * operator with one network and one non-network operand. */ elog(ERROR, "unsupported type: %u", typid); return 0; diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c index 81b0bc37d2..db1792bf8d 100644 --- a/src/backend/utils/adt/selfuncs.c +++ b/src/backend/utils/adt/selfuncs.c @@ -164,7 +164,7 @@ static double var_eq_non_const(VariableStatData *vardata, Oid operator, bool varonleft, bool negate); static double ineq_histogram_selectivity(PlannerInfo *root, VariableStatData *vardata, - FmgrInfo *opproc, bool isgt, + FmgrInfo *opproc, bool isgt, bool iseq, Datum constval, Oid consttype); static double eqjoinsel_inner(Oid operator, VariableStatData *vardata1, VariableStatData *vardata2); @@ -545,18 +545,21 @@ neqsel(PG_FUNCTION_ARGS) /* * scalarineqsel - Selectivity of "<", "<=", ">", ">=" for scalars. * - * This is the guts of both scalarltsel and scalargtsel. The caller has - * commuted the clause, if necessary, so that we can treat the variable as - * being on the left. The caller must also make sure that the other side - * of the clause is a non-null Const, and dissect same into a value and - * datatype. + * This is the guts of scalarltsel/scalarlesel/scalargtsel/scalargesel. + * The isgt and iseq flags distinguish which of the four cases apply. + * + * The caller has commuted the clause, if necessary, so that we can treat + * the variable as being on the left. The caller must also make sure that + * the other side of the clause is a non-null Const, and dissect that into + * a value and datatype. (This definition simplifies some callers that + * want to estimate against a computed value instead of a Const node.) * * This routine works for any datatype (or pair of datatypes) known to * convert_to_scalar(). If it is applied to some other datatype, * it will return a default estimate. */ static double -scalarineqsel(PlannerInfo *root, Oid operator, bool isgt, +scalarineqsel(PlannerInfo *root, Oid operator, bool isgt, bool iseq, VariableStatData *vardata, Datum constval, Oid consttype) { Form_pg_statistic stats; @@ -588,7 +591,8 @@ scalarineqsel(PlannerInfo *root, Oid operator, bool isgt, * If there is a histogram, determine which bin the constant falls in, and * compute the resulting contribution to selectivity. 
*/ - hist_selec = ineq_histogram_selectivity(root, vardata, &opproc, isgt, + hist_selec = ineq_histogram_selectivity(root, vardata, + &opproc, isgt, iseq, constval, consttype); /* @@ -758,7 +762,8 @@ histogram_selectivity(VariableStatData *vardata, FmgrInfo *opproc, * ineq_histogram_selectivity - Examine the histogram for scalarineqsel * * Determine the fraction of the variable's histogram population that - * satisfies the inequality condition, ie, VAR < CONST or VAR > CONST. + * satisfies the inequality condition, ie, VAR < (or <=, >, >=) CONST. + * The isgt and iseq flags distinguish which of the four cases apply. * * Returns -1 if there is no histogram (valid results will always be >= 0). * @@ -769,7 +774,7 @@ histogram_selectivity(VariableStatData *vardata, FmgrInfo *opproc, static double ineq_histogram_selectivity(PlannerInfo *root, VariableStatData *vardata, - FmgrInfo *opproc, bool isgt, + FmgrInfo *opproc, bool isgt, bool iseq, Datum constval, Oid consttype) { double hist_selec; @@ -796,11 +801,17 @@ ineq_histogram_selectivity(PlannerInfo *root, if (sslot.nvalues > 1) { /* - * Use binary search to find proper location, ie, the first slot - * at which the comparison fails. (If the given operator isn't - * actually sort-compatible with the histogram, you'll get garbage - * results ... but probably not any more garbage-y than you would - * from the old linear search.) + * Use binary search to find the desired location, namely the + * right end of the histogram bin containing the comparison value, + * which is the leftmost entry for which the comparison operator + * succeeds (if isgt) or fails (if !isgt). (If the given operator + * isn't actually sort-compatible with the histogram, you'll get + * garbage results ... but probably not any more garbage-y than + * you would have from the old linear search.) + * + * In this loop, we pay no attention to whether the operator iseq + * or not; that detail will be mopped up below. (We cannot tell, + * anyway, whether the operator thinks the values are equal.) * * If the binary search accesses the first or last histogram * entry, we try to replace that endpoint with the true column min @@ -865,25 +876,74 @@ ineq_histogram_selectivity(PlannerInfo *root, if (lobound <= 0) { - /* Constant is below lower histogram boundary. */ + /* + * Constant is below lower histogram boundary. More + * precisely, we have found that no entry in the histogram + * satisfies the inequality clause (if !isgt) or they all do + * (if isgt). We estimate that that's true of the entire + * table, so set histfrac to 0.0 (which we'll flip to 1.0 + * below, if isgt). + */ histfrac = 0.0; } else if (lobound >= sslot.nvalues) { - /* Constant is above upper histogram boundary. */ + /* + * Inverse case: constant is above upper histogram boundary. + */ histfrac = 1.0; } else { + /* We have values[i-1] <= constant <= values[i]. */ int i = lobound; + double eq_selec = 0; double val, high, low; double binfrac; /* - * We have values[i-1] <= constant <= values[i]. + * In the cases where we'll need it below, obtain an estimate + * of the selectivity of "x = constval". We use a calculation + * similar to what var_eq_const() does for a non-MCV constant, + * ie, estimate that all distinct non-MCV values occur equally + * often. But multiplication by "1.0 - sumcommon - nullfrac" + * will be done by our caller, so we shouldn't do that here. + * Therefore we can't try to clamp the estimate by reference + * to the least common MCV; the result would be too small. 
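+			 * (For example: with 1000 distinct values of which 10 are
+			 * MCVs, otherdistinct becomes 990 and eq_selec is 1/990,
+			 * about 0.001.)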
* + * Note: since this is effectively assuming that constval + * isn't an MCV, it's logically dubious if constval in fact is + * one. But we have to apply *some* correction for equality, + * and anyway we cannot tell if constval is an MCV, since we + * don't have a suitable equality operator at hand. + */ + if (i == 1 || isgt == iseq) + { + double otherdistinct; + bool isdefault; + AttStatsSlot mcvslot; + + /* Get estimated number of distinct values */ + otherdistinct = get_variable_numdistinct(vardata, + &isdefault); + + /* Subtract off the number of known MCVs */ + if (get_attstatsslot(&mcvslot, vardata->statsTuple, + STATISTIC_KIND_MCV, InvalidOid, + ATTSTATSSLOT_NUMBERS)) + { + otherdistinct -= mcvslot.nnumbers; + free_attstatsslot(&mcvslot); + } + + /* If result doesn't seem sane, leave eq_selec at 0 */ + if (otherdistinct > 1) + eq_selec = 1.0 / otherdistinct; + } + + /* * Convert the constant and the two nearest bin boundary * values to a uniform comparison scale, and do a linear * interpolation within this bin. @@ -937,13 +997,54 @@ ineq_histogram_selectivity(PlannerInfo *root, */ histfrac = (double) (i - 1) + binfrac; histfrac /= (double) (sslot.nvalues - 1); + + /* + * At this point, histfrac is an estimate of the fraction of + * the population represented by the histogram that satisfies + * "x <= constval". Somewhat remarkably, this statement is + * true regardless of which operator we were doing the probes + * with, so long as convert_to_scalar() delivers reasonable + * results. If the probe constant is equal to some histogram + * entry, we would have considered the bin to the left of that + * entry if probing with "<" or ">=", or the bin to the right + * if probing with "<=" or ">"; but binfrac would have come + * out as 1.0 in the first case and 0.0 in the second, leading + * to the same histfrac in either case. For probe constants + * between histogram entries, we find the same bin and get the + * same estimate with any operator. + * + * The fact that the estimate corresponds to "x <= constval" + * and not "x < constval" is because of the way that ANALYZE + * constructs the histogram: each entry is, effectively, the + * rightmost value in its sample bucket. So selectivity + * values that are exact multiples of 1/(histogram_size-1) + * should be understood as estimates including a histogram + * entry plus everything to its left. + * + * However, that breaks down for the first histogram entry, + * which necessarily is the leftmost value in its sample + * bucket. That means the first histogram bin is slightly + * narrower than the rest, by an amount equal to eq_selec. + * Another way to say that is that we want "x <= leftmost" to + * be estimated as eq_selec not zero. So, if we're dealing + * with the first bin (i==1), rescale to make that true while + * adjusting the rest of that bin linearly. + */ + if (i == 1) + histfrac += eq_selec * (1.0 - binfrac); + + /* + * "x <= constval" is good if we want an estimate for "<=" or + * ">", but if we are estimating for "<" or ">=", we now need + * to decrease the estimate by eq_selec. + */ + if (isgt == iseq) + histfrac -= eq_selec; } /* - * Now histfrac = fraction of histogram entries below the - * constant. - * - * Account for "<" vs ">" + * Now the estimate is finished for "<" and "<=" cases. If we are + * estimating for ">" or ">=", flip it. */ hist_selec = isgt ? 
(1.0 - histfrac) : histfrac; @@ -951,16 +1052,21 @@ ineq_histogram_selectivity(PlannerInfo *root, * The histogram boundaries are only approximate to begin with, * and may well be out of date anyway. Therefore, don't believe * extremely small or large selectivity estimates --- unless we - * got actual current endpoint values from the table. + * got actual current endpoint values from the table, in which + * case just do the usual sanity clamp. Somewhat arbitrarily, we + * set the cutoff for other cases at a hundredth of the histogram + * resolution. */ if (have_end) CLAMP_PROBABILITY(hist_selec); else { - if (hist_selec < 0.0001) - hist_selec = 0.0001; - else if (hist_selec > 0.9999) - hist_selec = 0.9999; + double cutoff = 0.01 / (double) (sslot.nvalues - 1); + + if (hist_selec < cutoff) + hist_selec = cutoff; + else if (hist_selec > 1.0 - cutoff) + hist_selec = 1.0 - cutoff; } } @@ -971,10 +1077,11 @@ ineq_histogram_selectivity(PlannerInfo *root, } /* - * scalarltsel - Selectivity of "<" (also "<=") for scalars. + * Common wrapper function for the selectivity estimators that simply + * invoke scalarineqsel(). */ -Datum -scalarltsel(PG_FUNCTION_ARGS) +static Datum +scalarineqsel_wrapper(PG_FUNCTION_ARGS, bool isgt, bool iseq) { PlannerInfo *root = (PlannerInfo *) PG_GETARG_POINTER(0); Oid operator = PG_GETARG_OID(1); @@ -985,7 +1092,6 @@ scalarltsel(PG_FUNCTION_ARGS) bool varonleft; Datum constval; Oid consttype; - bool isgt; double selec; /* @@ -1020,14 +1126,8 @@ scalarltsel(PG_FUNCTION_ARGS) /* * Force the var to be on the left to simplify logic in scalarineqsel. */ - if (varonleft) + if (!varonleft) { - /* we have var < other */ - isgt = false; - } - else - { - /* we have other < var, commute to make var > other */ operator = get_commutator(operator); if (!operator) { @@ -1035,10 +1135,12 @@ scalarltsel(PG_FUNCTION_ARGS) ReleaseVariableStats(vardata); PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL); } - isgt = true; + isgt = !isgt; } - selec = scalarineqsel(root, operator, isgt, &vardata, constval, consttype); + /* The rest of the work is done by scalarineqsel(). */ + selec = scalarineqsel(root, operator, isgt, iseq, + &vardata, constval, consttype); ReleaseVariableStats(vardata); @@ -1046,78 +1148,39 @@ scalarltsel(PG_FUNCTION_ARGS) } /* - * scalargtsel - Selectivity of ">" (also ">=") for integers. + * scalarltsel - Selectivity of "<" for scalars. */ Datum -scalargtsel(PG_FUNCTION_ARGS) +scalarltsel(PG_FUNCTION_ARGS) { - PlannerInfo *root = (PlannerInfo *) PG_GETARG_POINTER(0); - Oid operator = PG_GETARG_OID(1); - List *args = (List *) PG_GETARG_POINTER(2); - int varRelid = PG_GETARG_INT32(3); - VariableStatData vardata; - Node *other; - bool varonleft; - Datum constval; - Oid consttype; - bool isgt; - double selec; - - /* - * If expression is not variable op something or something op variable, - * then punt and return a default estimate. - */ - if (!get_restriction_variable(root, args, varRelid, - &vardata, &other, &varonleft)) - PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL); - - /* - * Can't do anything useful if the something is not a constant, either. - */ - if (!IsA(other, Const)) - { - ReleaseVariableStats(vardata); - PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL); - } - - /* - * If the constant is NULL, assume operator is strict and return zero, ie, - * operator will never return TRUE. 
- */ - if (((Const *) other)->constisnull) - { - ReleaseVariableStats(vardata); - PG_RETURN_FLOAT8(0.0); - } - constval = ((Const *) other)->constvalue; - consttype = ((Const *) other)->consttype; - - /* - * Force the var to be on the left to simplify logic in scalarineqsel. - */ - if (varonleft) - { - /* we have var > other */ - isgt = true; - } - else - { - /* we have other > var, commute to make var < other */ - operator = get_commutator(operator); - if (!operator) - { - /* Use default selectivity (should we raise an error instead?) */ - ReleaseVariableStats(vardata); - PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL); - } - isgt = false; - } + return scalarineqsel_wrapper(fcinfo, false, false); +} - selec = scalarineqsel(root, operator, isgt, &vardata, constval, consttype); +/* + * scalarlesel - Selectivity of "<=" for scalars. + */ +Datum +scalarlesel(PG_FUNCTION_ARGS) +{ + return scalarineqsel_wrapper(fcinfo, false, true); +} - ReleaseVariableStats(vardata); +/* + * scalargtsel - Selectivity of ">" for scalars. + */ +Datum +scalargtsel(PG_FUNCTION_ARGS) +{ + return scalarineqsel_wrapper(fcinfo, true, false); +} - PG_RETURN_FLOAT8((float8) selec); +/* + * scalargesel - Selectivity of ">=" for scalars. + */ +Datum +scalargesel(PG_FUNCTION_ARGS) +{ + return scalarineqsel_wrapper(fcinfo, true, true); } /* @@ -2722,7 +2785,7 @@ neqjoinsel(PG_FUNCTION_ARGS) } /* - * scalarltjoinsel - Join selectivity of "<" and "<=" for scalars + * scalarltjoinsel - Join selectivity of "<" for scalars */ Datum scalarltjoinsel(PG_FUNCTION_ARGS) @@ -2731,7 +2794,16 @@ scalarltjoinsel(PG_FUNCTION_ARGS) } /* - * scalargtjoinsel - Join selectivity of ">" and ">=" for scalars + * scalarlejoinsel - Join selectivity of "<=" for scalars + */ +Datum +scalarlejoinsel(PG_FUNCTION_ARGS) +{ + PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL); +} + +/* + * scalargtjoinsel - Join selectivity of ">" for scalars */ Datum scalargtjoinsel(PG_FUNCTION_ARGS) @@ -2739,6 +2811,15 @@ scalargtjoinsel(PG_FUNCTION_ARGS) PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL); } +/* + * scalargejoinsel - Join selectivity of ">=" for scalars + */ +Datum +scalargejoinsel(PG_FUNCTION_ARGS) +{ + PG_RETURN_FLOAT8(DEFAULT_INEQ_SEL); +} + /* * patternjoinsel - Generic code for pattern-match join selectivity. */ @@ -3036,13 +3117,13 @@ mergejoinscansel(PlannerInfo *root, Node *clause, * fraction that's <= the right-side maximum value. But only believe * non-default estimates, else stick with our 1.0. */ - selec = scalarineqsel(root, leop, isgt, &leftvar, + selec = scalarineqsel(root, leop, isgt, true, &leftvar, rightmax, op_righttype); if (selec != DEFAULT_INEQ_SEL) *leftend = selec; /* And similarly for the right variable. */ - selec = scalarineqsel(root, revleop, isgt, &rightvar, + selec = scalarineqsel(root, revleop, isgt, true, &rightvar, leftmax, op_lefttype); if (selec != DEFAULT_INEQ_SEL) *rightend = selec; @@ -3066,13 +3147,13 @@ mergejoinscansel(PlannerInfo *root, Node *clause, * minimum value. But only believe non-default estimates, else stick with * our own default. */ - selec = scalarineqsel(root, ltop, isgt, &leftvar, + selec = scalarineqsel(root, ltop, isgt, false, &leftvar, rightmin, op_righttype); if (selec != DEFAULT_INEQ_SEL) *leftstart = selec; /* And similarly for the right variable. 
*/ - selec = scalarineqsel(root, revltop, isgt, &rightvar, + selec = scalarineqsel(root, revltop, isgt, false, &rightvar, leftmin, op_lefttype); if (selec != DEFAULT_INEQ_SEL) *rightstart = selec; @@ -4029,8 +4110,8 @@ convert_numeric_to_scalar(Datum value, Oid typid) } /* - * Can't get here unless someone tries to use scalarltsel/scalargtsel on - * an operator with one numeric and one non-numeric operand. + * Can't get here unless someone tries to use scalarineqsel() on an + * operator with one numeric and one non-numeric operand. */ elog(ERROR, "unsupported type: %u", typid); return 0; @@ -4211,8 +4292,8 @@ convert_string_datum(Datum value, Oid typid) default: /* - * Can't get here unless someone tries to use scalarltsel on an - * operator with one string and one non-string operand. + * Can't get here unless someone tries to use scalarineqsel() on + * an operator with one string and one non-string operand. */ elog(ERROR, "unsupported type: %u", typid); return NULL; @@ -4416,8 +4497,8 @@ convert_timevalue_to_scalar(Datum value, Oid typid) } /* - * Can't get here unless someone tries to use scalarltsel/scalargtsel on - * an operator with one timevalue and one non-timevalue operand. + * Can't get here unless someone tries to use scalarineqsel() on an + * operator with one timevalue and one non-timevalue operand. */ elog(ERROR, "unsupported type: %u", typid); return 0; @@ -5806,7 +5887,8 @@ prefix_selectivity(PlannerInfo *root, VariableStatData *vardata, elog(ERROR, "no >= operator for opfamily %u", opfamily); fmgr_info(get_opcode(cmpopr), &opproc); - prefixsel = ineq_histogram_selectivity(root, vardata, &opproc, true, + prefixsel = ineq_histogram_selectivity(root, vardata, + &opproc, true, true, prefixcon->constvalue, prefixcon->consttype); @@ -5832,7 +5914,8 @@ prefix_selectivity(PlannerInfo *root, VariableStatData *vardata, { Selectivity topsel; - topsel = ineq_histogram_selectivity(root, vardata, &opproc, false, + topsel = ineq_histogram_selectivity(root, vardata, + &opproc, false, false, greaterstrcon->constvalue, greaterstrcon->consttype); diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 56642671b6..032b244fb8 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201709081 +#define CATALOG_VERSION_NO 201709131 #endif diff --git a/src/include/catalog/pg_operator.h b/src/include/catalog/pg_operator.h index ffabc2003b..ff9b47077b 100644 --- a/src/include/catalog/pg_operator.h +++ b/src/include/catalog/pg_operator.h @@ -97,9 +97,9 @@ DATA(insert OID = 37 ( "<" PGNSP PGUID b f f 23 20 16 419 82 int48lt scalar DESCR("less than"); DATA(insert OID = 76 ( ">" PGNSP PGUID b f f 23 20 16 418 80 int48gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 80 ( "<=" PGNSP PGUID b f f 23 20 16 430 76 int48le scalarltsel scalarltjoinsel )); +DATA(insert OID = 80 ( "<=" PGNSP PGUID b f f 23 20 16 430 76 int48le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 82 ( ">=" PGNSP PGUID b f f 23 20 16 420 37 int48ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 82 ( ">=" PGNSP PGUID b f f 23 20 16 420 37 int48ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 58 ( "<" PGNSP PGUID b f f 16 16 16 59 1695 boollt scalarltsel scalarltjoinsel )); @@ -112,9 +112,9 @@ DESCR("not equal"); DATA(insert OID = 91 ( "=" PGNSP PGUID b t t 16 16 16 91 85 booleq eqsel eqjoinsel )); DESCR("equal"); 
#define BooleanEqualOperator 91 -DATA(insert OID = 1694 ( "<=" PGNSP PGUID b f f 16 16 16 1695 59 boolle scalarltsel scalarltjoinsel )); +DATA(insert OID = 1694 ( "<=" PGNSP PGUID b f f 16 16 16 1695 59 boolle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 1695 ( ">=" PGNSP PGUID b f f 16 16 16 1694 58 boolge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1695 ( ">=" PGNSP PGUID b f f 16 16 16 1694 58 boolge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 92 ( "=" PGNSP PGUID b t t 18 18 16 92 630 chareq eqsel eqjoinsel )); @@ -167,9 +167,9 @@ DESCR("less than"); #define TIDLessOperator 2799 DATA(insert OID = 2800 ( ">" PGNSP PGUID b f f 27 27 16 2799 2801 tidgt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 2801 ( "<=" PGNSP PGUID b f f 27 27 16 2802 2800 tidle scalarltsel scalarltjoinsel )); +DATA(insert OID = 2801 ( "<=" PGNSP PGUID b f f 27 27 16 2802 2800 tidle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 2802 ( ">=" PGNSP PGUID b f f 27 27 16 2801 2799 tidge scalargtsel scalargtjoinsel )); +DATA(insert OID = 2802 ( ">=" PGNSP PGUID b f f 27 27 16 2801 2799 tidge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 410 ( "=" PGNSP PGUID b t t 20 20 16 410 411 int8eq eqsel eqjoinsel )); @@ -181,9 +181,9 @@ DESCR("less than"); #define Int8LessOperator 412 DATA(insert OID = 413 ( ">" PGNSP PGUID b f f 20 20 16 412 414 int8gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 414 ( "<=" PGNSP PGUID b f f 20 20 16 415 413 int8le scalarltsel scalarltjoinsel )); +DATA(insert OID = 414 ( "<=" PGNSP PGUID b f f 20 20 16 415 413 int8le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 415 ( ">=" PGNSP PGUID b f f 20 20 16 414 412 int8ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 415 ( ">=" PGNSP PGUID b f f 20 20 16 414 412 int8ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 416 ( "=" PGNSP PGUID b t t 20 23 16 15 417 int84eq eqsel eqjoinsel )); @@ -194,9 +194,9 @@ DATA(insert OID = 418 ( "<" PGNSP PGUID b f f 20 23 16 76 430 int84lt scalar DESCR("less than"); DATA(insert OID = 419 ( ">" PGNSP PGUID b f f 20 23 16 37 420 int84gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 420 ( "<=" PGNSP PGUID b f f 20 23 16 82 419 int84le scalarltsel scalarltjoinsel )); +DATA(insert OID = 420 ( "<=" PGNSP PGUID b f f 20 23 16 82 419 int84le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 430 ( ">=" PGNSP PGUID b f f 20 23 16 80 418 int84ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 430 ( ">=" PGNSP PGUID b f f 20 23 16 80 418 int84ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 439 ( "%" PGNSP PGUID b f f 20 20 20 0 0 int8mod - - )); DESCR("modulus"); @@ -277,13 +277,13 @@ DATA(insert OID = 520 ( ">" PGNSP PGUID b f f 21 21 16 95 522 int2gt scalarg DESCR("greater than"); DATA(insert OID = 521 ( ">" PGNSP PGUID b f f 23 23 16 97 523 int4gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 522 ( "<=" PGNSP PGUID b f f 21 21 16 524 520 int2le scalarltsel scalarltjoinsel )); +DATA(insert OID = 522 ( "<=" PGNSP PGUID b f f 21 21 16 524 520 int2le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 523 ( "<=" PGNSP PGUID b f f 23 23 16 525 521 int4le scalarltsel scalarltjoinsel )); +DATA(insert OID = 523 
( "<=" PGNSP PGUID b f f 23 23 16 525 521 int4le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 524 ( ">=" PGNSP PGUID b f f 21 21 16 522 95 int2ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 524 ( ">=" PGNSP PGUID b f f 21 21 16 522 95 int2ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); -DATA(insert OID = 525 ( ">=" PGNSP PGUID b f f 23 23 16 523 97 int4ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 525 ( ">=" PGNSP PGUID b f f 23 23 16 523 97 int4ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 526 ( "*" PGNSP PGUID b f f 21 21 21 526 0 int2mul - - )); DESCR("multiply"); @@ -313,13 +313,13 @@ DATA(insert OID = 538 ( "<>" PGNSP PGUID b f f 21 23 16 539 532 int24ne neqs DESCR("not equal"); DATA(insert OID = 539 ( "<>" PGNSP PGUID b f f 23 21 16 538 533 int42ne neqsel neqjoinsel )); DESCR("not equal"); -DATA(insert OID = 540 ( "<=" PGNSP PGUID b f f 21 23 16 543 536 int24le scalarltsel scalarltjoinsel )); +DATA(insert OID = 540 ( "<=" PGNSP PGUID b f f 21 23 16 543 536 int24le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 541 ( "<=" PGNSP PGUID b f f 23 21 16 542 537 int42le scalarltsel scalarltjoinsel )); +DATA(insert OID = 541 ( "<=" PGNSP PGUID b f f 23 21 16 542 537 int42le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 542 ( ">=" PGNSP PGUID b f f 21 23 16 541 534 int24ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 542 ( ">=" PGNSP PGUID b f f 21 23 16 541 534 int24ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); -DATA(insert OID = 543 ( ">=" PGNSP PGUID b f f 23 21 16 540 535 int42ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 543 ( ">=" PGNSP PGUID b f f 23 21 16 540 535 int42ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 544 ( "*" PGNSP PGUID b f f 21 23 23 545 0 int24mul - - )); DESCR("multiply"); @@ -357,9 +357,9 @@ DATA(insert OID = 562 ( "<" PGNSP PGUID b f f 702 702 16 563 565 abstimelt s DESCR("less than"); DATA(insert OID = 563 ( ">" PGNSP PGUID b f f 702 702 16 562 564 abstimegt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 564 ( "<=" PGNSP PGUID b f f 702 702 16 565 563 abstimele scalarltsel scalarltjoinsel )); +DATA(insert OID = 564 ( "<=" PGNSP PGUID b f f 702 702 16 565 563 abstimele scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 565 ( ">=" PGNSP PGUID b f f 702 702 16 564 562 abstimege scalargtsel scalargtjoinsel )); +DATA(insert OID = 565 ( ">=" PGNSP PGUID b f f 702 702 16 564 562 abstimege scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 566 ( "=" PGNSP PGUID b t t 703 703 16 566 567 reltimeeq eqsel eqjoinsel )); DESCR("equal"); @@ -369,9 +369,9 @@ DATA(insert OID = 568 ( "<" PGNSP PGUID b f f 703 703 16 569 571 reltimelt s DESCR("less than"); DATA(insert OID = 569 ( ">" PGNSP PGUID b f f 703 703 16 568 570 reltimegt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 570 ( "<=" PGNSP PGUID b f f 703 703 16 571 569 reltimele scalarltsel scalarltjoinsel )); +DATA(insert OID = 570 ( "<=" PGNSP PGUID b f f 703 703 16 571 569 reltimele scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 571 ( ">=" PGNSP PGUID b f f 703 703 16 570 568 reltimege scalargtsel scalargtjoinsel )); +DATA(insert OID = 571 ( ">=" PGNSP PGUID b f f 703 703 16 570 568 reltimege scalargesel scalargejoinsel )); 
DESCR("greater than or equal"); DATA(insert OID = 572 ( "~=" PGNSP PGUID b f f 704 704 16 572 0 tintervalsame eqsel eqjoinsel )); DESCR("same as"); @@ -438,9 +438,9 @@ DATA(insert OID = 609 ( "<" PGNSP PGUID b f f 26 26 16 610 612 oidlt scalarl DESCR("less than"); DATA(insert OID = 610 ( ">" PGNSP PGUID b f f 26 26 16 609 611 oidgt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 611 ( "<=" PGNSP PGUID b f f 26 26 16 612 610 oidle scalarltsel scalarltjoinsel )); +DATA(insert OID = 611 ( "<=" PGNSP PGUID b f f 26 26 16 612 610 oidle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 612 ( ">=" PGNSP PGUID b f f 26 26 16 611 609 oidge scalargtsel scalargtjoinsel )); +DATA(insert OID = 612 ( ">=" PGNSP PGUID b f f 26 26 16 611 609 oidge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 644 ( "<>" PGNSP PGUID b f f 30 30 16 644 649 oidvectorne neqsel neqjoinsel )); @@ -449,9 +449,9 @@ DATA(insert OID = 645 ( "<" PGNSP PGUID b f f 30 30 16 646 648 oidvectorlt s DESCR("less than"); DATA(insert OID = 646 ( ">" PGNSP PGUID b f f 30 30 16 645 647 oidvectorgt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 647 ( "<=" PGNSP PGUID b f f 30 30 16 648 646 oidvectorle scalarltsel scalarltjoinsel )); +DATA(insert OID = 647 ( "<=" PGNSP PGUID b f f 30 30 16 648 646 oidvectorle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 648 ( ">=" PGNSP PGUID b f f 30 30 16 647 645 oidvectorge scalargtsel scalargtjoinsel )); +DATA(insert OID = 648 ( ">=" PGNSP PGUID b f f 30 30 16 647 645 oidvectorge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 649 ( "=" PGNSP PGUID b t t 30 30 16 649 644 oidvectoreq eqsel eqjoinsel )); DESCR("equal"); @@ -477,20 +477,20 @@ DATA(insert OID = 622 ( "<" PGNSP PGUID b f f 700 700 16 623 625 float4lt s DESCR("less than"); DATA(insert OID = 623 ( ">" PGNSP PGUID b f f 700 700 16 622 624 float4gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 624 ( "<=" PGNSP PGUID b f f 700 700 16 625 623 float4le scalarltsel scalarltjoinsel )); +DATA(insert OID = 624 ( "<=" PGNSP PGUID b f f 700 700 16 625 623 float4le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 625 ( ">=" PGNSP PGUID b f f 700 700 16 624 622 float4ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 625 ( ">=" PGNSP PGUID b f f 700 700 16 624 622 float4ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 630 ( "<>" PGNSP PGUID b f f 18 18 16 630 92 charne neqsel neqjoinsel )); DESCR("not equal"); DATA(insert OID = 631 ( "<" PGNSP PGUID b f f 18 18 16 633 634 charlt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 632 ( "<=" PGNSP PGUID b f f 18 18 16 634 633 charle scalarltsel scalarltjoinsel )); +DATA(insert OID = 632 ( "<=" PGNSP PGUID b f f 18 18 16 634 633 charle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 633 ( ">" PGNSP PGUID b f f 18 18 16 631 632 chargt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 634 ( ">=" PGNSP PGUID b f f 18 18 16 632 631 charge scalargtsel scalargtjoinsel )); +DATA(insert OID = 634 ( ">=" PGNSP PGUID b f f 18 18 16 632 631 charge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 639 ( "~" PGNSP PGUID b f f 19 25 16 0 640 nameregexeq regexeqsel regexeqjoinsel )); @@ -510,19 +510,19 @@ DESCR("concatenate"); DATA(insert OID 
= 660 ( "<" PGNSP PGUID b f f 19 19 16 662 663 namelt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 661 ( "<=" PGNSP PGUID b f f 19 19 16 663 662 namele scalarltsel scalarltjoinsel )); +DATA(insert OID = 661 ( "<=" PGNSP PGUID b f f 19 19 16 663 662 namele scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 662 ( ">" PGNSP PGUID b f f 19 19 16 660 661 namegt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 663 ( ">=" PGNSP PGUID b f f 19 19 16 661 660 namege scalargtsel scalargtjoinsel )); +DATA(insert OID = 663 ( ">=" PGNSP PGUID b f f 19 19 16 661 660 namege scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 664 ( "<" PGNSP PGUID b f f 25 25 16 666 667 text_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 665 ( "<=" PGNSP PGUID b f f 25 25 16 667 666 text_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 665 ( "<=" PGNSP PGUID b f f 25 25 16 667 666 text_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 666 ( ">" PGNSP PGUID b f f 25 25 16 664 665 text_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 667 ( ">=" PGNSP PGUID b f f 25 25 16 665 664 text_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 667 ( ">=" PGNSP PGUID b f f 25 25 16 665 664 text_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 670 ( "=" PGNSP PGUID b t t 701 701 16 670 671 float8eq eqsel eqjoinsel )); @@ -532,11 +532,11 @@ DESCR("not equal"); DATA(insert OID = 672 ( "<" PGNSP PGUID b f f 701 701 16 674 675 float8lt scalarltsel scalarltjoinsel )); DESCR("less than"); #define Float8LessOperator 672 -DATA(insert OID = 673 ( "<=" PGNSP PGUID b f f 701 701 16 675 674 float8le scalarltsel scalarltjoinsel )); +DATA(insert OID = 673 ( "<=" PGNSP PGUID b f f 701 701 16 675 674 float8le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 674 ( ">" PGNSP PGUID b f f 701 701 16 672 673 float8gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 675 ( ">=" PGNSP PGUID b f f 701 701 16 673 672 float8ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 675 ( ">=" PGNSP PGUID b f f 701 701 16 673 672 float8ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 682 ( "@" PGNSP PGUID l f f 0 21 21 0 0 int2abs - - )); @@ -677,9 +677,9 @@ DATA(insert OID = 813 ( "<" PGNSP PGUID b f f 704 704 16 814 816 tintervallt DESCR("less than"); DATA(insert OID = 814 ( ">" PGNSP PGUID b f f 704 704 16 813 815 tintervalgt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 815 ( "<=" PGNSP PGUID b f f 704 704 16 816 814 tintervalle scalarltsel scalarltjoinsel )); +DATA(insert OID = 815 ( "<=" PGNSP PGUID b f f 704 704 16 816 814 tintervalle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 816 ( ">=" PGNSP PGUID b f f 704 704 16 815 813 tintervalge scalargtsel scalargtjoinsel )); +DATA(insert OID = 816 ( ">=" PGNSP PGUID b f f 704 704 16 815 813 tintervalge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 843 ( "*" PGNSP PGUID b f f 790 700 790 845 0 cash_mul_flt4 - - )); @@ -697,9 +697,9 @@ DATA(insert OID = 902 ( "<" PGNSP PGUID b f f 790 790 16 903 905 cash_lt sc DESCR("less than"); DATA(insert OID = 903 ( ">" PGNSP PGUID b f f 790 790 16 902 904 cash_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 904 ( "<=" PGNSP PGUID b 
f f 790 790 16 905 903 cash_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 904 ( "<=" PGNSP PGUID b f f 790 790 16 905 903 cash_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 905 ( ">=" PGNSP PGUID b f f 790 790 16 904 902 cash_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 905 ( ">=" PGNSP PGUID b f f 790 790 16 904 902 cash_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 906 ( "+" PGNSP PGUID b f f 790 790 790 906 0 cash_pl - - )); DESCR("add"); @@ -763,11 +763,11 @@ DATA(insert OID = 1057 ( "<>" PGNSP PGUID b f f 1042 1042 16 1057 1054 bpcha DESCR("not equal"); DATA(insert OID = 1058 ( "<" PGNSP PGUID b f f 1042 1042 16 1060 1061 bpcharlt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1059 ( "<=" PGNSP PGUID b f f 1042 1042 16 1061 1060 bpcharle scalarltsel scalarltjoinsel )); +DATA(insert OID = 1059 ( "<=" PGNSP PGUID b f f 1042 1042 16 1061 1060 bpcharle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1060 ( ">" PGNSP PGUID b f f 1042 1042 16 1058 1059 bpchargt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1061 ( ">=" PGNSP PGUID b f f 1042 1042 16 1059 1058 bpcharge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1061 ( ">=" PGNSP PGUID b f f 1042 1042 16 1059 1058 bpcharge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* generic array comparison operators */ @@ -782,9 +782,9 @@ DESCR("less than"); DATA(insert OID = 1073 ( ">" PGNSP PGUID b f f 2277 2277 16 1072 1074 array_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); #define ARRAY_GT_OP 1073 -DATA(insert OID = 1074 ( "<=" PGNSP PGUID b f f 2277 2277 16 1075 1073 array_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1074 ( "<=" PGNSP PGUID b f f 2277 2277 16 1075 1073 array_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 1075 ( ">=" PGNSP PGUID b f f 2277 2277 16 1074 1072 array_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1075 ( ">=" PGNSP PGUID b f f 2277 2277 16 1074 1072 array_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* date operators */ @@ -798,11 +798,11 @@ DATA(insert OID = 1094 ( "<>" PGNSP PGUID b f f 1082 1082 16 1094 1093 date DESCR("not equal"); DATA(insert OID = 1095 ( "<" PGNSP PGUID b f f 1082 1082 16 1097 1098 date_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1096 ( "<=" PGNSP PGUID b f f 1082 1082 16 1098 1097 date_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1096 ( "<=" PGNSP PGUID b f f 1082 1082 16 1098 1097 date_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1097 ( ">" PGNSP PGUID b f f 1082 1082 16 1095 1096 date_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1098 ( ">=" PGNSP PGUID b f f 1082 1082 16 1096 1095 date_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1098 ( ">=" PGNSP PGUID b f f 1082 1082 16 1096 1095 date_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 1099 ( "-" PGNSP PGUID b f f 1082 1082 23 0 0 date_mi - - )); DESCR("subtract"); @@ -818,11 +818,11 @@ DATA(insert OID = 1109 ( "<>" PGNSP PGUID b f f 1083 1083 16 1109 1108 time_ DESCR("not equal"); DATA(insert OID = 1110 ( "<" PGNSP PGUID b f f 1083 1083 16 1112 1113 time_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1111 ( "<=" PGNSP PGUID b f f 1083 1083 16 1113 1112 time_le scalarltsel 
scalarltjoinsel )); +DATA(insert OID = 1111 ( "<=" PGNSP PGUID b f f 1083 1083 16 1113 1112 time_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1112 ( ">" PGNSP PGUID b f f 1083 1083 16 1110 1111 time_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1113 ( ">=" PGNSP PGUID b f f 1083 1083 16 1111 1110 time_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1113 ( ">=" PGNSP PGUID b f f 1083 1083 16 1111 1110 time_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* timetz operators */ @@ -832,11 +832,11 @@ DATA(insert OID = 1551 ( "<>" PGNSP PGUID b f f 1266 1266 16 1551 1550 timetz DESCR("not equal"); DATA(insert OID = 1552 ( "<" PGNSP PGUID b f f 1266 1266 16 1554 1555 timetz_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1553 ( "<=" PGNSP PGUID b f f 1266 1266 16 1555 1554 timetz_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1553 ( "<=" PGNSP PGUID b f f 1266 1266 16 1555 1554 timetz_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1554 ( ">" PGNSP PGUID b f f 1266 1266 16 1552 1553 timetz_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1555 ( ">=" PGNSP PGUID b f f 1266 1266 16 1553 1552 timetz_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1555 ( ">=" PGNSP PGUID b f f 1266 1266 16 1553 1552 timetz_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* float48 operators */ @@ -856,9 +856,9 @@ DATA(insert OID = 1122 ( "<" PGNSP PGUID b f f 700 701 16 1133 1125 float48l DESCR("less than"); DATA(insert OID = 1123 ( ">" PGNSP PGUID b f f 700 701 16 1132 1124 float48gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1124 ( "<=" PGNSP PGUID b f f 700 701 16 1135 1123 float48le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1124 ( "<=" PGNSP PGUID b f f 700 701 16 1135 1123 float48le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 1125 ( ">=" PGNSP PGUID b f f 700 701 16 1134 1122 float48ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1125 ( ">=" PGNSP PGUID b f f 700 701 16 1134 1122 float48ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* float84 operators */ @@ -878,9 +878,9 @@ DATA(insert OID = 1132 ( "<" PGNSP PGUID b f f 701 700 16 1123 1135 float84l DESCR("less than"); DATA(insert OID = 1133 ( ">" PGNSP PGUID b f f 701 700 16 1122 1134 float84gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1134 ( "<=" PGNSP PGUID b f f 701 700 16 1125 1133 float84le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1134 ( "<=" PGNSP PGUID b f f 701 700 16 1125 1133 float84le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 1135 ( ">=" PGNSP PGUID b f f 701 700 16 1124 1132 float84ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1135 ( ">=" PGNSP PGUID b f f 701 700 16 1124 1132 float84ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); @@ -925,11 +925,11 @@ DATA(insert OID = 1321 ( "<>" PGNSP PGUID b f f 1184 1184 16 1321 1320 time DESCR("not equal"); DATA(insert OID = 1322 ( "<" PGNSP PGUID b f f 1184 1184 16 1324 1325 timestamptz_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1323 ( "<=" PGNSP PGUID b f f 1184 1184 16 1325 1324 timestamptz_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1323 ( "<=" PGNSP PGUID b f f 1184 1184 16 1325 1324 timestamptz_le scalarlesel scalarlejoinsel )); DESCR("less than 
or equal"); DATA(insert OID = 1324 ( ">" PGNSP PGUID b f f 1184 1184 16 1322 1323 timestamptz_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1325 ( ">=" PGNSP PGUID b f f 1184 1184 16 1323 1322 timestamptz_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1325 ( ">=" PGNSP PGUID b f f 1184 1184 16 1323 1322 timestamptz_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 1327 ( "+" PGNSP PGUID b f f 1184 1186 1184 2554 0 timestamptz_pl_interval - - )); DESCR("add"); @@ -945,11 +945,11 @@ DATA(insert OID = 1331 ( "<>" PGNSP PGUID b f f 1186 1186 16 1331 1330 inte DESCR("not equal"); DATA(insert OID = 1332 ( "<" PGNSP PGUID b f f 1186 1186 16 1334 1335 interval_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1333 ( "<=" PGNSP PGUID b f f 1186 1186 16 1335 1334 interval_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1333 ( "<=" PGNSP PGUID b f f 1186 1186 16 1335 1334 interval_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1334 ( ">" PGNSP PGUID b f f 1186 1186 16 1332 1333 interval_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1335 ( ">=" PGNSP PGUID b f f 1186 1186 16 1333 1332 interval_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1335 ( ">=" PGNSP PGUID b f f 1186 1186 16 1333 1332 interval_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 1336 ( "-" PGNSP PGUID l f f 0 1186 1186 0 0 interval_um - - )); @@ -1126,11 +1126,11 @@ DATA(insert OID = 1221 ( "<>" PGNSP PGUID b f f 829 829 16 1221 1220 macadd DESCR("not equal"); DATA(insert OID = 1222 ( "<" PGNSP PGUID b f f 829 829 16 1224 1225 macaddr_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1223 ( "<=" PGNSP PGUID b f f 829 829 16 1225 1224 macaddr_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1223 ( "<=" PGNSP PGUID b f f 829 829 16 1225 1224 macaddr_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1224 ( ">" PGNSP PGUID b f f 829 829 16 1222 1223 macaddr_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1225 ( ">=" PGNSP PGUID b f f 829 829 16 1223 1222 macaddr_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1225 ( ">=" PGNSP PGUID b f f 829 829 16 1223 1222 macaddr_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 3147 ( "~" PGNSP PGUID l f f 0 829 829 0 0 macaddr_not - - )); @@ -1147,11 +1147,11 @@ DATA(insert OID = 3363 ( "<>" PGNSP PGUID b f f 774 774 16 3363 3362 macadd DESCR("not equal"); DATA(insert OID = 3364 ( "<" PGNSP PGUID b f f 774 774 16 3366 3367 macaddr8_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 3365 ( "<=" PGNSP PGUID b f f 774 774 16 3367 3366 macaddr8_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 3365 ( "<=" PGNSP PGUID b f f 774 774 16 3367 3366 macaddr8_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 3366 ( ">" PGNSP PGUID b f f 774 774 16 3364 3365 macaddr8_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 3367 ( ">=" PGNSP PGUID b f f 774 774 16 3365 3364 macaddr8_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 3367 ( ">=" PGNSP PGUID b f f 774 774 16 3365 3364 macaddr8_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 3368 ( "~" PGNSP PGUID l f f 0 774 774 0 0 macaddr8_not - - )); @@ -1168,11 +1168,11 @@ DATA(insert OID = 1202 ( "<>" 
PGNSP PGUID b f f 869 869 16 1202 1201 networ DESCR("not equal"); DATA(insert OID = 1203 ( "<" PGNSP PGUID b f f 869 869 16 1205 1206 network_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1204 ( "<=" PGNSP PGUID b f f 869 869 16 1206 1205 network_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1204 ( "<=" PGNSP PGUID b f f 869 869 16 1206 1205 network_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1205 ( ">" PGNSP PGUID b f f 869 869 16 1203 1204 network_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1206 ( ">=" PGNSP PGUID b f f 869 869 16 1204 1203 network_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1206 ( ">=" PGNSP PGUID b f f 869 869 16 1204 1203 network_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 931 ( "<<" PGNSP PGUID b f f 869 869 16 933 0 network_sub networksel networkjoinsel )); DESCR("is subnet"); @@ -1231,11 +1231,11 @@ DATA(insert OID = 1753 ( "<>" PGNSP PGUID b f f 1700 1700 16 1753 1752 nume DESCR("not equal"); DATA(insert OID = 1754 ( "<" PGNSP PGUID b f f 1700 1700 16 1756 1757 numeric_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1755 ( "<=" PGNSP PGUID b f f 1700 1700 16 1757 1756 numeric_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1755 ( "<=" PGNSP PGUID b f f 1700 1700 16 1757 1756 numeric_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1756 ( ">" PGNSP PGUID b f f 1700 1700 16 1754 1755 numeric_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1757 ( ">=" PGNSP PGUID b f f 1700 1700 16 1755 1754 numeric_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1757 ( ">=" PGNSP PGUID b f f 1700 1700 16 1755 1754 numeric_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 1758 ( "+" PGNSP PGUID b f f 1700 1700 1700 1758 0 numeric_add - - )); DESCR("add"); @@ -1260,9 +1260,9 @@ DATA(insert OID = 1786 ( "<" PGNSP PGUID b f f 1560 1560 16 1787 1789 bitlt s DESCR("less than"); DATA(insert OID = 1787 ( ">" PGNSP PGUID b f f 1560 1560 16 1786 1788 bitgt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1788 ( "<=" PGNSP PGUID b f f 1560 1560 16 1789 1787 bitle scalarltsel scalarltjoinsel )); +DATA(insert OID = 1788 ( "<=" PGNSP PGUID b f f 1560 1560 16 1789 1787 bitle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 1789 ( ">=" PGNSP PGUID b f f 1560 1560 16 1788 1786 bitge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1789 ( ">=" PGNSP PGUID b f f 1560 1560 16 1788 1786 bitge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 1791 ( "&" PGNSP PGUID b f f 1560 1560 1560 1791 0 bitand - - )); DESCR("bitwise and"); @@ -1296,9 +1296,9 @@ DATA(insert OID = 1806 ( "<" PGNSP PGUID b f f 1562 1562 16 1807 1809 varbitl DESCR("less than"); DATA(insert OID = 1807 ( ">" PGNSP PGUID b f f 1562 1562 16 1806 1808 varbitgt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1808 ( "<=" PGNSP PGUID b f f 1562 1562 16 1809 1807 varbitle scalarltsel scalarltjoinsel )); +DATA(insert OID = 1808 ( "<=" PGNSP PGUID b f f 1562 1562 16 1809 1807 varbitle scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 1809 ( ">=" PGNSP PGUID b f f 1562 1562 16 1808 1806 varbitge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1809 ( ">=" PGNSP PGUID b f f 1562 1562 16 1808 1806 varbitge 
scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 1849 ( "+" PGNSP PGUID b f f 1186 1083 1083 1800 0 interval_pl_time - - )); @@ -1312,9 +1312,9 @@ DATA(insert OID = 1864 ( "<" PGNSP PGUID b f f 21 20 16 1871 1867 int28lt sc DESCR("less than"); DATA(insert OID = 1865 ( ">" PGNSP PGUID b f f 21 20 16 1870 1866 int28gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1866 ( "<=" PGNSP PGUID b f f 21 20 16 1873 1865 int28le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1866 ( "<=" PGNSP PGUID b f f 21 20 16 1873 1865 int28le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 1867 ( ">=" PGNSP PGUID b f f 21 20 16 1872 1864 int28ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1867 ( ">=" PGNSP PGUID b f f 21 20 16 1872 1864 int28ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 1868 ( "=" PGNSP PGUID b t t 20 21 16 1862 1869 int82eq eqsel eqjoinsel )); @@ -1325,9 +1325,9 @@ DATA(insert OID = 1870 ( "<" PGNSP PGUID b f f 20 21 16 1865 1873 int82lt sca DESCR("less than"); DATA(insert OID = 1871 ( ">" PGNSP PGUID b f f 20 21 16 1864 1872 int82gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1872 ( "<=" PGNSP PGUID b f f 20 21 16 1867 1871 int82le scalarltsel scalarltjoinsel )); +DATA(insert OID = 1872 ( "<=" PGNSP PGUID b f f 20 21 16 1867 1871 int82le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 1873 ( ">=" PGNSP PGUID b f f 20 21 16 1866 1870 int82ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 1873 ( ">=" PGNSP PGUID b f f 20 21 16 1866 1870 int82ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 1874 ( "&" PGNSP PGUID b f f 21 21 21 1874 0 int2and - - )); @@ -1389,11 +1389,11 @@ DATA(insert OID = 1956 ( "<>" PGNSP PGUID b f f 17 17 16 1956 1955 byteane ne DESCR("not equal"); DATA(insert OID = 1957 ( "<" PGNSP PGUID b f f 17 17 16 1959 1960 bytealt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 1958 ( "<=" PGNSP PGUID b f f 17 17 16 1960 1959 byteale scalarltsel scalarltjoinsel )); +DATA(insert OID = 1958 ( "<=" PGNSP PGUID b f f 17 17 16 1960 1959 byteale scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 1959 ( ">" PGNSP PGUID b f f 17 17 16 1957 1958 byteagt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 1960 ( ">=" PGNSP PGUID b f f 17 17 16 1958 1957 byteage scalargtsel scalargtjoinsel )); +DATA(insert OID = 1960 ( ">=" PGNSP PGUID b f f 17 17 16 1958 1957 byteage scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2016 ( "~~" PGNSP PGUID b f f 17 17 16 0 2017 bytealike likesel likejoinsel )); @@ -1411,11 +1411,11 @@ DATA(insert OID = 2061 ( "<>" PGNSP PGUID b f f 1114 1114 16 2061 2060 time DESCR("not equal"); DATA(insert OID = 2062 ( "<" PGNSP PGUID b f f 1114 1114 16 2064 2065 timestamp_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2063 ( "<=" PGNSP PGUID b f f 1114 1114 16 2065 2064 timestamp_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 2063 ( "<=" PGNSP PGUID b f f 1114 1114 16 2065 2064 timestamp_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 2064 ( ">" PGNSP PGUID b f f 1114 1114 16 2062 2063 timestamp_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 2065 ( ">=" PGNSP PGUID b f f 1114 1114 16 2063 2062 timestamp_ge scalargtsel scalargtjoinsel )); 
+DATA(insert OID = 2065 ( ">=" PGNSP PGUID b f f 1114 1114 16 2063 2062 timestamp_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2066 ( "+" PGNSP PGUID b f f 1114 1186 1114 2553 0 timestamp_pl_interval - - )); DESCR("add"); @@ -1428,18 +1428,18 @@ DESCR("subtract"); DATA(insert OID = 2314 ( "~<~" PGNSP PGUID b f f 25 25 16 2318 2317 text_pattern_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2315 ( "~<=~" PGNSP PGUID b f f 25 25 16 2317 2318 text_pattern_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 2315 ( "~<=~" PGNSP PGUID b f f 25 25 16 2317 2318 text_pattern_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 2317 ( "~>=~" PGNSP PGUID b f f 25 25 16 2315 2314 text_pattern_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 2317 ( "~>=~" PGNSP PGUID b f f 25 25 16 2315 2314 text_pattern_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2318 ( "~>~" PGNSP PGUID b f f 25 25 16 2314 2315 text_pattern_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); DATA(insert OID = 2326 ( "~<~" PGNSP PGUID b f f 1042 1042 16 2330 2329 bpchar_pattern_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2327 ( "~<=~" PGNSP PGUID b f f 1042 1042 16 2329 2330 bpchar_pattern_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 2327 ( "~<=~" PGNSP PGUID b f f 1042 1042 16 2329 2330 bpchar_pattern_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 2329 ( "~>=~" PGNSP PGUID b f f 1042 1042 16 2327 2326 bpchar_pattern_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 2329 ( "~>=~" PGNSP PGUID b f f 1042 1042 16 2327 2326 bpchar_pattern_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2330 ( "~>~" PGNSP PGUID b f f 1042 1042 16 2326 2327 bpchar_pattern_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1448,11 +1448,11 @@ DESCR("greater than"); DATA(insert OID = 2345 ( "<" PGNSP PGUID b f f 1082 1114 16 2375 2348 date_lt_timestamp scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2346 ( "<=" PGNSP PGUID b f f 1082 1114 16 2374 2349 date_le_timestamp scalarltsel scalarltjoinsel )); +DATA(insert OID = 2346 ( "<=" PGNSP PGUID b f f 1082 1114 16 2374 2349 date_le_timestamp scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 2347 ( "=" PGNSP PGUID b t f 1082 1114 16 2373 2350 date_eq_timestamp eqsel eqjoinsel )); DESCR("equal"); -DATA(insert OID = 2348 ( ">=" PGNSP PGUID b f f 1082 1114 16 2372 2345 date_ge_timestamp scalargtsel scalargtjoinsel )); +DATA(insert OID = 2348 ( ">=" PGNSP PGUID b f f 1082 1114 16 2372 2345 date_ge_timestamp scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2349 ( ">" PGNSP PGUID b f f 1082 1114 16 2371 2346 date_gt_timestamp scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1461,11 +1461,11 @@ DESCR("not equal"); DATA(insert OID = 2358 ( "<" PGNSP PGUID b f f 1082 1184 16 2388 2361 date_lt_timestamptz scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2359 ( "<=" PGNSP PGUID b f f 1082 1184 16 2387 2362 date_le_timestamptz scalarltsel scalarltjoinsel )); +DATA(insert OID = 2359 ( "<=" PGNSP PGUID b f f 1082 1184 16 2387 2362 date_le_timestamptz scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 2360 ( "=" PGNSP PGUID b t f 1082 1184 16 2386 2363 date_eq_timestamptz eqsel eqjoinsel )); DESCR("equal"); 
-DATA(insert OID = 2361 ( ">=" PGNSP PGUID b f f 1082 1184 16 2385 2358 date_ge_timestamptz scalargtsel scalargtjoinsel )); +DATA(insert OID = 2361 ( ">=" PGNSP PGUID b f f 1082 1184 16 2385 2358 date_ge_timestamptz scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2362 ( ">" PGNSP PGUID b f f 1082 1184 16 2384 2359 date_gt_timestamptz scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1474,11 +1474,11 @@ DESCR("not equal"); DATA(insert OID = 2371 ( "<" PGNSP PGUID b f f 1114 1082 16 2349 2374 timestamp_lt_date scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2372 ( "<=" PGNSP PGUID b f f 1114 1082 16 2348 2375 timestamp_le_date scalarltsel scalarltjoinsel )); +DATA(insert OID = 2372 ( "<=" PGNSP PGUID b f f 1114 1082 16 2348 2375 timestamp_le_date scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 2373 ( "=" PGNSP PGUID b t f 1114 1082 16 2347 2376 timestamp_eq_date eqsel eqjoinsel )); DESCR("equal"); -DATA(insert OID = 2374 ( ">=" PGNSP PGUID b f f 1114 1082 16 2346 2371 timestamp_ge_date scalargtsel scalargtjoinsel )); +DATA(insert OID = 2374 ( ">=" PGNSP PGUID b f f 1114 1082 16 2346 2371 timestamp_ge_date scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2375 ( ">" PGNSP PGUID b f f 1114 1082 16 2345 2372 timestamp_gt_date scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1487,11 +1487,11 @@ DESCR("not equal"); DATA(insert OID = 2384 ( "<" PGNSP PGUID b f f 1184 1082 16 2362 2387 timestamptz_lt_date scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2385 ( "<=" PGNSP PGUID b f f 1184 1082 16 2361 2388 timestamptz_le_date scalarltsel scalarltjoinsel )); +DATA(insert OID = 2385 ( "<=" PGNSP PGUID b f f 1184 1082 16 2361 2388 timestamptz_le_date scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 2386 ( "=" PGNSP PGUID b t f 1184 1082 16 2360 2389 timestamptz_eq_date eqsel eqjoinsel )); DESCR("equal"); -DATA(insert OID = 2387 ( ">=" PGNSP PGUID b f f 1184 1082 16 2359 2384 timestamptz_ge_date scalargtsel scalargtjoinsel )); +DATA(insert OID = 2387 ( ">=" PGNSP PGUID b f f 1184 1082 16 2359 2384 timestamptz_ge_date scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2388 ( ">" PGNSP PGUID b f f 1184 1082 16 2358 2385 timestamptz_gt_date scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1502,11 +1502,11 @@ DESCR("not equal"); DATA(insert OID = 2534 ( "<" PGNSP PGUID b f f 1114 1184 16 2544 2537 timestamp_lt_timestamptz scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2535 ( "<=" PGNSP PGUID b f f 1114 1184 16 2543 2538 timestamp_le_timestamptz scalarltsel scalarltjoinsel )); +DATA(insert OID = 2535 ( "<=" PGNSP PGUID b f f 1114 1184 16 2543 2538 timestamp_le_timestamptz scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 2536 ( "=" PGNSP PGUID b t f 1114 1184 16 2542 2539 timestamp_eq_timestamptz eqsel eqjoinsel )); DESCR("equal"); -DATA(insert OID = 2537 ( ">=" PGNSP PGUID b f f 1114 1184 16 2541 2534 timestamp_ge_timestamptz scalargtsel scalargtjoinsel )); +DATA(insert OID = 2537 ( ">=" PGNSP PGUID b f f 1114 1184 16 2541 2534 timestamp_ge_timestamptz scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2538 ( ">" PGNSP PGUID b f f 1114 1184 16 2540 2535 timestamp_gt_timestamptz scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1515,11 +1515,11 @@ DESCR("not equal"); DATA(insert OID = 2540 
( "<" PGNSP PGUID b f f 1184 1114 16 2538 2543 timestamptz_lt_timestamp scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 2541 ( "<=" PGNSP PGUID b f f 1184 1114 16 2537 2544 timestamptz_le_timestamp scalarltsel scalarltjoinsel )); +DATA(insert OID = 2541 ( "<=" PGNSP PGUID b f f 1184 1114 16 2537 2544 timestamptz_le_timestamp scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 2542 ( "=" PGNSP PGUID b t f 1184 1114 16 2536 2545 timestamptz_eq_timestamp eqsel eqjoinsel )); DESCR("equal"); -DATA(insert OID = 2543 ( ">=" PGNSP PGUID b f f 1184 1114 16 2535 2540 timestamptz_ge_timestamp scalargtsel scalargtjoinsel )); +DATA(insert OID = 2543 ( ">=" PGNSP PGUID b f f 1184 1114 16 2535 2540 timestamptz_ge_timestamp scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 2544 ( ">" PGNSP PGUID b f f 1184 1114 16 2534 2541 timestamptz_gt_timestamp scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1624,9 +1624,9 @@ DATA(insert OID = 2974 ( "<" PGNSP PGUID b f f 2950 2950 16 2975 2977 uuid_l DESCR("less than"); DATA(insert OID = 2975 ( ">" PGNSP PGUID b f f 2950 2950 16 2974 2976 uuid_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 2976 ( "<=" PGNSP PGUID b f f 2950 2950 16 2977 2975 uuid_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 2976 ( "<=" PGNSP PGUID b f f 2950 2950 16 2977 2975 uuid_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 2977 ( ">=" PGNSP PGUID b f f 2950 2950 16 2976 2974 uuid_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 2977 ( ">=" PGNSP PGUID b f f 2950 2950 16 2976 2974 uuid_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* pg_lsn operators */ @@ -1638,9 +1638,9 @@ DATA(insert OID = 3224 ( "<" PGNSP PGUID b f f 3220 3220 16 3225 3227 pg_lsn DESCR("less than"); DATA(insert OID = 3225 ( ">" PGNSP PGUID b f f 3220 3220 16 3224 3226 pg_lsn_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 3226 ( "<=" PGNSP PGUID b f f 3220 3220 16 3227 3225 pg_lsn_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 3226 ( "<=" PGNSP PGUID b f f 3220 3220 16 3227 3225 pg_lsn_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 3227 ( ">=" PGNSP PGUID b f f 3220 3220 16 3226 3224 pg_lsn_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 3227 ( ">=" PGNSP PGUID b f f 3220 3220 16 3226 3224 pg_lsn_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 3228 ( "-" PGNSP PGUID b f f 3220 3220 1700 0 0 pg_lsn_mi - - )); DESCR("minus"); @@ -1654,9 +1654,9 @@ DATA(insert OID = 3518 ( "<" PGNSP PGUID b f f 3500 3500 16 3519 3521 enum_l DESCR("less than"); DATA(insert OID = 3519 ( ">" PGNSP PGUID b f f 3500 3500 16 3518 3520 enum_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 3520 ( "<=" PGNSP PGUID b f f 3500 3500 16 3521 3519 enum_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 3520 ( "<=" PGNSP PGUID b f f 3500 3500 16 3521 3519 enum_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 3521 ( ">=" PGNSP PGUID b f f 3500 3500 16 3520 3518 enum_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 3521 ( ">=" PGNSP PGUID b f f 3500 3500 16 3520 3518 enum_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* @@ -1664,13 +1664,13 @@ DESCR("greater than or equal"); */ DATA(insert OID = 3627 ( "<" PGNSP PGUID b f f 3614 3614 16 3632 3631 tsvector_lt scalarltsel 
scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 3628 ( "<=" PGNSP PGUID b f f 3614 3614 16 3631 3632 tsvector_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 3628 ( "<=" PGNSP PGUID b f f 3614 3614 16 3631 3632 tsvector_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 3629 ( "=" PGNSP PGUID b t f 3614 3614 16 3629 3630 tsvector_eq eqsel eqjoinsel )); DESCR("equal"); DATA(insert OID = 3630 ( "<>" PGNSP PGUID b f f 3614 3614 16 3630 3629 tsvector_ne neqsel neqjoinsel )); DESCR("not equal"); -DATA(insert OID = 3631 ( ">=" PGNSP PGUID b f f 3614 3614 16 3628 3627 tsvector_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 3631 ( ">=" PGNSP PGUID b f f 3614 3614 16 3628 3627 tsvector_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 3632 ( ">" PGNSP PGUID b f f 3614 3614 16 3627 3628 tsvector_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1686,13 +1686,13 @@ DATA(insert OID = 3661 ( "@@@" PGNSP PGUID b f f 3615 3614 16 3660 0 ts_m DESCR("deprecated, use @@ instead"); DATA(insert OID = 3674 ( "<" PGNSP PGUID b f f 3615 3615 16 3679 3678 tsquery_lt scalarltsel scalarltjoinsel )); DESCR("less than"); -DATA(insert OID = 3675 ( "<=" PGNSP PGUID b f f 3615 3615 16 3678 3679 tsquery_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 3675 ( "<=" PGNSP PGUID b f f 3615 3615 16 3678 3679 tsquery_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); DATA(insert OID = 3676 ( "=" PGNSP PGUID b t f 3615 3615 16 3676 3677 tsquery_eq eqsel eqjoinsel )); DESCR("equal"); DATA(insert OID = 3677 ( "<>" PGNSP PGUID b f f 3615 3615 16 3677 3676 tsquery_ne neqsel neqjoinsel )); DESCR("not equal"); -DATA(insert OID = 3678 ( ">=" PGNSP PGUID b f f 3615 3615 16 3675 3674 tsquery_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 3678 ( ">=" PGNSP PGUID b f f 3615 3615 16 3675 3674 tsquery_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 3679 ( ">" PGNSP PGUID b f f 3615 3615 16 3674 3675 tsquery_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); @@ -1726,9 +1726,9 @@ DESCR("less than"); DATA(insert OID = 2991 ( ">" PGNSP PGUID b f f 2249 2249 16 2990 2992 record_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); #define RECORD_GT_OP 2991 -DATA(insert OID = 2992 ( "<=" PGNSP PGUID b f f 2249 2249 16 2993 2991 record_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 2992 ( "<=" PGNSP PGUID b f f 2249 2249 16 2993 2991 record_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 2993 ( ">=" PGNSP PGUID b f f 2249 2249 16 2992 2990 record_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 2993 ( ">=" PGNSP PGUID b f f 2249 2249 16 2992 2990 record_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* byte-oriented tests for identical rows and fast sorting */ @@ -1740,9 +1740,9 @@ DATA(insert OID = 3190 ( "*<" PGNSP PGUID b f f 2249 2249 16 3191 3193 recor DESCR("less than"); DATA(insert OID = 3191 ( "*>" PGNSP PGUID b f f 2249 2249 16 3190 3192 record_image_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 3192 ( "*<=" PGNSP PGUID b f f 2249 2249 16 3193 3191 record_image_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 3192 ( "*<=" PGNSP PGUID b f f 2249 2249 16 3193 3191 record_image_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 3193 ( "*>=" PGNSP PGUID b f f 2249 2249 16 3192 3190 record_image_ge scalargtsel scalargtjoinsel )); 
+DATA(insert OID = 3193 ( "*>=" PGNSP PGUID b f f 2249 2249 16 3192 3190 record_image_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); /* generic range type operators */ @@ -1753,10 +1753,10 @@ DESCR("not equal"); DATA(insert OID = 3884 ( "<" PGNSP PGUID b f f 3831 3831 16 3887 3886 range_lt rangesel scalarltjoinsel )); DESCR("less than"); #define OID_RANGE_LESS_OP 3884 -DATA(insert OID = 3885 ( "<=" PGNSP PGUID b f f 3831 3831 16 3886 3887 range_le rangesel scalarltjoinsel )); +DATA(insert OID = 3885 ( "<=" PGNSP PGUID b f f 3831 3831 16 3886 3887 range_le rangesel scalarlejoinsel )); DESCR("less than or equal"); #define OID_RANGE_LESS_EQUAL_OP 3885 -DATA(insert OID = 3886 ( ">=" PGNSP PGUID b f f 3831 3831 16 3885 3884 range_ge rangesel scalargtjoinsel )); +DATA(insert OID = 3886 ( ">=" PGNSP PGUID b f f 3831 3831 16 3885 3884 range_ge rangesel scalargejoinsel )); DESCR("greater than or equal"); #define OID_RANGE_GREATER_EQUAL_OP 3886 DATA(insert OID = 3887 ( ">" PGNSP PGUID b f f 3831 3831 16 3884 3885 range_gt rangesel scalargtjoinsel )); @@ -1829,9 +1829,9 @@ DATA(insert OID = 3242 ( "<" PGNSP PGUID b f f 3802 3802 16 3243 3245 jsonb_lt DESCR("less than"); DATA(insert OID = 3243 ( ">" PGNSP PGUID b f f 3802 3802 16 3242 3244 jsonb_gt scalargtsel scalargtjoinsel )); DESCR("greater than"); -DATA(insert OID = 3244 ( "<=" PGNSP PGUID b f f 3802 3802 16 3245 3243 jsonb_le scalarltsel scalarltjoinsel )); +DATA(insert OID = 3244 ( "<=" PGNSP PGUID b f f 3802 3802 16 3245 3243 jsonb_le scalarlesel scalarlejoinsel )); DESCR("less than or equal"); -DATA(insert OID = 3245 ( ">=" PGNSP PGUID b f f 3802 3802 16 3244 3242 jsonb_ge scalargtsel scalargtjoinsel )); +DATA(insert OID = 3245 ( ">=" PGNSP PGUID b f f 3802 3802 16 3244 3242 jsonb_ge scalargesel scalargejoinsel )); DESCR("greater than or equal"); DATA(insert OID = 3246 ( "@>" PGNSP PGUID b f f 3802 3802 16 3250 0 jsonb_contains contsel contjoinsel )); DESCR("contains"); diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index d820b56aa1..f73c6c6201 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -264,6 +264,15 @@ DESCR("join selectivity of < and related operators on scalar datatypes"); DATA(insert OID = 108 ( scalargtjoinsel PGNSP PGUID 12 1 0 0 0 f f f f t f s s 5 0 701 "2281 26 2281 21 2281" _null_ _null_ _null_ _null_ _null_ scalargtjoinsel _null_ _null_ _null_ )); DESCR("join selectivity of > and related operators on scalar datatypes"); +DATA(insert OID = 336 ( scalarlesel PGNSP PGUID 12 1 0 0 0 f f f f t f s s 4 0 701 "2281 26 2281 23" _null_ _null_ _null_ _null_ _null_ scalarlesel _null_ _null_ _null_ )); +DESCR("restriction selectivity of <= and related operators on scalar datatypes"); +DATA(insert OID = 337 ( scalargesel PGNSP PGUID 12 1 0 0 0 f f f f t f s s 4 0 701 "2281 26 2281 23" _null_ _null_ _null_ _null_ _null_ scalargesel _null_ _null_ _null_ )); +DESCR("restriction selectivity of >= and related operators on scalar datatypes"); +DATA(insert OID = 386 ( scalarlejoinsel PGNSP PGUID 12 1 0 0 0 f f f f t f s s 5 0 701 "2281 26 2281 21 2281" _null_ _null_ _null_ _null_ _null_ scalarlejoinsel _null_ _null_ _null_ )); +DESCR("join selectivity of <= and related operators on scalar datatypes"); +DATA(insert OID = 398 ( scalargejoinsel PGNSP PGUID 12 1 0 0 0 f f f f t f s s 5 0 701 "2281 26 2281 21 2281" _null_ _null_ _null_ _null_ _null_ scalargejoinsel _null_ _null_ _null_ )); +DESCR("join selectivity of >= and related operators on scalar datatypes"); + 
DATA(insert OID = 109 ( unknownin PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 705 "2275" _null_ _null_ _null_ _null_ _null_ unknownin _null_ _null_ _null_ )); DESCR("I/O"); DATA(insert OID = 110 ( unknownout PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2275 "705" _null_ _null_ _null_ _null_ _null_ unknownout _null_ _null_ _null_ )); diff --git a/src/tutorial/complex.source b/src/tutorial/complex.source index 035c7a7d13..a2307b9447 100644 --- a/src/tutorial/complex.source +++ b/src/tutorial/complex.source @@ -174,7 +174,7 @@ CREATE OPERATOR < ( CREATE OPERATOR <= ( leftarg = complex, rightarg = complex, procedure = complex_abs_le, commutator = >= , negator = > , - restrict = scalarltsel, join = scalarltjoinsel + restrict = scalarlesel, join = scalarlejoinsel ); CREATE OPERATOR = ( leftarg = complex, rightarg = complex, procedure = complex_abs_eq, @@ -186,7 +186,7 @@ CREATE OPERATOR = ( CREATE OPERATOR >= ( leftarg = complex, rightarg = complex, procedure = complex_abs_ge, commutator = <= , negator = < , - restrict = scalargtsel, join = scalargtjoinsel + restrict = scalargesel, join = scalargejoinsel ); CREATE OPERATOR > ( leftarg = complex, rightarg = complex, procedure = complex_abs_gt, From 44ba2920644903d7dfceda810e5facdbcbab58a8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 13 Sep 2017 11:54:55 -0400 Subject: [PATCH 0162/1087] Update contrib/seg for new scalarlesel/scalargesel selectivity functions. I somehow missed this module in commit 7d08ce286. --- contrib/seg/Makefile | 3 ++- contrib/seg/seg--1.1--1.2.sql | 14 ++++++++++++++ contrib/seg/seg.control | 2 +- 3 files changed, 17 insertions(+), 2 deletions(-) create mode 100644 contrib/seg/seg--1.1--1.2.sql diff --git a/contrib/seg/Makefile b/contrib/seg/Makefile index c8f0f8b9a2..00a5472d3b 100644 --- a/contrib/seg/Makefile +++ b/contrib/seg/Makefile @@ -4,7 +4,8 @@ MODULE_big = seg OBJS = seg.o segparse.o $(WIN32RES) EXTENSION = seg -DATA = seg--1.1.sql seg--1.0--1.1.sql seg--unpackaged--1.0.sql +DATA = seg--1.1.sql seg--1.1--1.2.sql \ + seg--1.0--1.1.sql seg--unpackaged--1.0.sql PGFILEDESC = "seg - line segment data type" REGRESS = seg diff --git a/contrib/seg/seg--1.1--1.2.sql b/contrib/seg/seg--1.1--1.2.sql new file mode 100644 index 0000000000..a6e4456f07 --- /dev/null +++ b/contrib/seg/seg--1.1--1.2.sql @@ -0,0 +1,14 @@ +/* contrib/seg/seg--1.1--1.2.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION seg UPDATE TO '1.2'" to load this file. \quit + +ALTER OPERATOR <= (seg, seg) SET ( + RESTRICT = scalarlesel, + JOIN = scalarlejoinsel +); + +ALTER OPERATOR >= (seg, seg) SET ( + RESTRICT = scalargesel, + JOIN = scalargejoinsel +); diff --git a/contrib/seg/seg.control b/contrib/seg/seg.control index f210cf5e04..ba3d092c25 100644 --- a/contrib/seg/seg.control +++ b/contrib/seg/seg.control @@ -1,5 +1,5 @@ # seg extension comment = 'data type for representing line segments or floating-point intervals' -default_version = '1.1' +default_version = '1.2' module_pathname = '$libdir/seg' relocatable = true From 76e134fefd7de0554536e1b8d45a1878f96cf9c0 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 13 Sep 2017 12:27:01 -0400 Subject: [PATCH 0163/1087] Adjust unstable regression test case. Test queries added by commit 69835bc89 are giving unexpected results on some smaller buildfarm critters. I think probably the seqscan logic is kicking in to cause the scans to not start at the beginning of the table. Add ORDER BY to make them be indexscans instead. 
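For context, a minimal sketch of the cursor-fetch behavior the revised test relies on (tenk1 and unique2 are the regression suite's own objects; the annotations here are editorial, not part of the patch):

    \set FETCH_COUNT 10
    -- psql fetches cursor results in groups of 10, so the first ten rows
    -- (unique2 = 0..9, each printing 0) are returned and displayed before
    -- the second fetch reaches unique2 = 15 and fails with "division by
    -- zero".
    select 1/(15-unique2) from tenk1 order by unique2 limit 19;
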
Per buildfarm member chipmunk. --- src/test/regress/expected/psql.out | 18 +++++++++++++++--- src/test/regress/sql/psql.sql | 6 +++--- 2 files changed, 18 insertions(+), 6 deletions(-) diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out index aa72a5b1eb..836d8510fd 100644 --- a/src/test/regress/expected/psql.out +++ b/src/test/regress/expected/psql.out @@ -3161,7 +3161,7 @@ last error message: syntax error at end of input last error code: 42601 -- check row count for a cursor-fetched query \set FETCH_COUNT 10 -select unique2 from tenk1 limit 19; +select unique2 from tenk1 order by unique2 limit 19; unique2 --------- 0 @@ -3191,8 +3191,20 @@ error: false error code: 00000 \echo 'number of rows:' :ROW_COUNT number of rows: 19 --- cursor-fetched query with an error -select 1/unique1 from tenk1; +-- cursor-fetched query with an error after the first group +select 1/(15-unique2) from tenk1 order by unique2 limit 19; + ?column? +---------- + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 ERROR: division by zero \echo 'error:' :ERROR error: true diff --git a/src/test/regress/sql/psql.sql b/src/test/regress/sql/psql.sql index 29a17e1ae4..ddae1bf1e7 100644 --- a/src/test/regress/sql/psql.sql +++ b/src/test/regress/sql/psql.sql @@ -656,13 +656,13 @@ SELECT 4 AS \gdesc -- check row count for a cursor-fetched query \set FETCH_COUNT 10 -select unique2 from tenk1 limit 19; +select unique2 from tenk1 order by unique2 limit 19; \echo 'error:' :ERROR \echo 'error code:' :SQLSTATE \echo 'number of rows:' :ROW_COUNT --- cursor-fetched query with an error -select 1/unique1 from tenk1; +-- cursor-fetched query with an error after the first group +select 1/(15-unique2) from tenk1 order by unique2 limit 19; \echo 'error:' :ERROR \echo 'error code:' :SQLSTATE \echo 'number of rows:' :ROW_COUNT From d2e40b310aea1050fd499f62f391329f2c331f6a Mon Sep 17 00:00:00 2001 From: Stephen Frost Date: Wed, 13 Sep 2017 20:02:09 -0400 Subject: [PATCH 0164/1087] Fix ordering in pg_dump of GRANTs The order in which GRANTs are output is important as GRANTs which have been GRANT'd by individuals via WITH GRANT OPTION GRANTs have to come after the GRANT which included the WITH GRANT OPTION. This happens naturally in the backend during normal operation as we only change existing ACLs in-place, only add new ACLs to the end, and when removing an ACL we remove any which depend on it also. Also, adjust the comments in acl.h to make this clear. Unfortunately, the updates to pg_dump to handle initial privileges involved pulling apart ACLs and then combining them back together and could end up putting them back together in an invalid order, leading to dumps which wouldn't restore. Fix this by adjusting the queries used by pg_dump to ensure that the ACLs are rebuilt in the same order in which they were originally. Back-patch to 9.6 where the changes for initial privileges were done. --- src/bin/pg_dump/dumputils.c | 51 ++++++++++++++++++++++++++----------- src/include/utils/acl.h | 14 +++++++--- 2 files changed, 47 insertions(+), 18 deletions(-) diff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c index dfc611848b..e4c95feb63 100644 --- a/src/bin/pg_dump/dumputils.c +++ b/src/bin/pg_dump/dumputils.c @@ -722,21 +722,36 @@ buildACLQueries(PQExpBuffer acl_subquery, PQExpBuffer racl_subquery, * We always perform this delta on all ACLs and expect that by the time * these are run the initial privileges will be in place, even in a binary * upgrade situation (see below). 
+ *
+ * Finally, the order in which privileges are in the ACL string (the order
+ * they have been GRANT'd in, which the backend maintains) must be preserved to
+ * ensure that GRANTs WITH GRANT OPTION and subsequent GRANTs based on
+ * those are dumped in the correct order.
  */
- printfPQExpBuffer(acl_subquery, "(SELECT pg_catalog.array_agg(acl) FROM "
- "(SELECT pg_catalog.unnest(coalesce(%s,pg_catalog.acldefault(%s,%s))) AS acl "
- "EXCEPT "
- "SELECT pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(%s,%s)))) as foo)",
+ printfPQExpBuffer(acl_subquery,
+ "(SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM "
+ "(SELECT acl, row_n FROM "
+ "pg_catalog.unnest(coalesce(%s,pg_catalog.acldefault(%s,%s))) "
+ "WITH ORDINALITY AS perm(acl,row_n) "
+ "WHERE NOT EXISTS ( "
+ "SELECT 1 FROM "
+ "pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(%s,%s))) "
+ "AS init(init_acl) WHERE acl = init_acl)) as foo)",
  acl_column,
  obj_kind,
  acl_owner,
  obj_kind,
  acl_owner);

- printfPQExpBuffer(racl_subquery, "(SELECT pg_catalog.array_agg(acl) FROM "
- "(SELECT pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(%s,%s))) AS acl "
- "EXCEPT "
- "SELECT pg_catalog.unnest(coalesce(%s,pg_catalog.acldefault(%s,%s)))) as foo)",
+ printfPQExpBuffer(racl_subquery,
+ "(SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM "
+ "(SELECT acl, row_n FROM "
+ "pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(%s,%s))) "
+ "WITH ORDINALITY AS initp(acl,row_n) "
+ "WHERE NOT EXISTS ( "
+ "SELECT 1 FROM "
+ "pg_catalog.unnest(coalesce(%s,pg_catalog.acldefault(%s,%s))) "
+ "AS permp(orig_acl) WHERE acl = orig_acl)) as foo)",
  obj_kind,
  acl_owner,
  acl_column,
@@ -761,19 +776,25 @@ buildACLQueries(PQExpBuffer acl_subquery, PQExpBuffer racl_subquery,
 {
 printfPQExpBuffer(init_acl_subquery,
 "CASE WHEN privtype = 'e' THEN "
- "(SELECT pg_catalog.array_agg(acl) FROM "
- "(SELECT pg_catalog.unnest(pip.initprivs) AS acl "
- "EXCEPT "
- "SELECT pg_catalog.unnest(pg_catalog.acldefault(%s,%s))) as foo) END",
+ "(SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM "
+ "(SELECT acl, row_n FROM pg_catalog.unnest(pip.initprivs) "
+ "WITH ORDINALITY AS initp(acl,row_n) "
+ "WHERE NOT EXISTS ( "
+ "SELECT 1 FROM "
+ "pg_catalog.unnest(pg_catalog.acldefault(%s,%s)) "
+ "AS privm(orig_acl) WHERE acl = orig_acl)) as foo) END",
  obj_kind,
  acl_owner);

 printfPQExpBuffer(init_racl_subquery,
 "CASE WHEN privtype = 'e' THEN "
 "(SELECT pg_catalog.array_agg(acl) FROM "
- "(SELECT pg_catalog.unnest(pg_catalog.acldefault(%s,%s)) AS acl "
- "EXCEPT "
- "SELECT pg_catalog.unnest(pip.initprivs)) as foo) END",
+ "(SELECT acl, row_n FROM "
+ "pg_catalog.unnest(pg_catalog.acldefault(%s,%s)) "
+ "WITH ORDINALITY AS privp(acl,row_n) "
+ "WHERE NOT EXISTS ( "
+ "SELECT 1 FROM pg_catalog.unnest(pip.initprivs) "
+ "AS initp(init_acl) WHERE acl = init_acl)) as foo) END",
  obj_kind,
  acl_owner);
 }
diff --git a/src/include/utils/acl.h b/src/include/utils/acl.h
index 43273eaab5..254a811aff 100644
--- a/src/include/utils/acl.h
+++ b/src/include/utils/acl.h
@@ -12,9 +12,17 @@
  * NOTES
  * An ACL array is simply an array of AclItems, representing the union
  * of the privileges represented by the individual items. A zero-length
- * array represents "no privileges". There are no assumptions about the
- * ordering of the items, but we do expect that there are no two entries
- * in the array with the same grantor and grantee.
+ * array represents "no privileges".
+ * + * The order of items in the array is important as client utilities (in + * particular, pg_dump, though possibly other clients) expect to be able + * to issue GRANTs in the ordering of the items in the array. The reason + * this matters is that GRANTs WITH GRANT OPTION must be before any GRANTs + * which depend on it. This happens naturally in the backend during + * operations as we update ACLs in-place, new items are appended, and + * existing entries are only removed if there's no dependency on them (no + * GRANT has been based on it, or, if there was, those GRANTs are also + * removed). * * For backward-compatibility purposes we have to allow null ACL entries * in system catalogs. A null ACL will be treated as meaning "default From 1ab973ab600dc4295dbbd38d1643f9bd26f81d8e Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 14 Sep 2017 01:53:10 -0700 Subject: [PATCH 0165/1087] Properly check interrupts in execScan.c. During the development of d47cfef711 the CFI()s in ExecScan() were moved back and forth, ending up in the wrong place. Thus queries that largely spend their time in ExecScan(), and have neither projection nor a qual, can't be cancelled in a timely manner. Reported-By: Jeff Janes Author: Andres Freund Discussion: https://postgr.es/m/CAMkU=1weDXp8eLLPt9SO1LEUsJYYK9cScaGhLKpuN+WbYo9b5g@mail.gmail.com Backpatch: 10, as d47cfef711 --- src/backend/executor/execScan.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c index 47a34a044a..5dfc49deb9 100644 --- a/src/backend/executor/execScan.c +++ b/src/backend/executor/execScan.c @@ -27,7 +27,7 @@ static bool tlist_matches_tupdesc(PlanState *ps, List *tlist, Index varno, Tuple /* - * ExecScanFetch -- fetch next potential tuple + * ExecScanFetch -- check interrupts & fetch next potential tuple * * This routine is concerned with substituting a test tuple if we are * inside an EvalPlanQual recheck. If we aren't, just execute @@ -40,6 +40,8 @@ ExecScanFetch(ScanState *node, { EState *estate = node->ps.state; + CHECK_FOR_INTERRUPTS(); + if (estate->es_epqTuple != NULL) { /* @@ -133,6 +135,8 @@ ExecScan(ScanState *node, projInfo = node->ps.ps_ProjInfo; econtext = node->ps.ps_ExprContext; + /* interrupt checks are in ExecScanFetch */ + /* * If we have neither a qual to check nor a projection to do, just skip * all the overhead and return the raw scan tuple. @@ -157,8 +161,6 @@ ExecScan(ScanState *node, { TupleTableSlot *slot; - CHECK_FOR_INTERRUPTS(); - slot = ExecScanFetch(node, accessMtd, recheckMtd); /* From 1555566d9ee1a996a28cc4601840a67831112695 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 14 Sep 2017 10:43:44 -0400 Subject: [PATCH 0166/1087] Set partitioned_rels appropriately when UNION ALL is used. In most cases, this omission won't matter, because the appropriate locks will have been acquired during parse/plan or by AcquireExecutorLocks. But it's a bug all the same. Report by Ashutosh Bapat. Patch by me, reviewed by Amit Langote.
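To illustrate the affected shape (hypothetical tables, not taken from this patch's regression tests): here the append rel's parent is the UNION ALL subquery RTE, so the partitioned_rels lists must be accumulated from its partitioned children rather than looked up via a PartitionedChildRelInfo for the parent itself:

    create table pt1 (a int) partition by list (a);
    create table pt1p partition of pt1 for values in (1);
    create table pt2 (a int) partition by list (a);
    create table pt2p partition of pt2 for values in (1);

    select a from pt1 union all select a from pt2;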
Discussion: http://postgr.es/m/CAFjFpRdHb_ZnoDTuBXqrudWXh3H1ibLkr6nHsCFT96fSK4DXtA@mail.gmail.com --- src/backend/optimizer/path/allpaths.c | 42 ++++++++++++++++++++++++--- src/backend/optimizer/plan/planner.c | 6 ++-- 2 files changed, 40 insertions(+), 8 deletions(-) diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 2d7e1d84d0..e8e7202e11 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -1287,13 +1287,34 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, ListCell *l; List *partitioned_rels = NIL; RangeTblEntry *rte; + bool build_partitioned_rels = false; + /* + * A plain relation will already have a PartitionedChildRelInfo if it is + * partitioned. For a subquery RTE, no PartitionedChildRelInfo exists; we + * collect all partitioned_rels associated with any child. (This assumes + * that we don't need to look through multiple levels of subquery RTEs; if + * we ever do, we could create a PartitionedChildRelInfo with the + * accumulated list of partitioned_rels which would then be found when + * populating our parent rel with paths. For the present, that appears to + * be unnecessary.) + */ rte = planner_rt_fetch(rel->relid, root); - if (rte->relkind == RELKIND_PARTITIONED_TABLE) + switch (rte->rtekind) { - partitioned_rels = get_partitioned_child_rels(root, rel->relid); - /* The root partitioned table is included as a child rel */ - Assert(list_length(partitioned_rels) >= 1); + case RTE_RELATION: + if (rte->relkind == RELKIND_PARTITIONED_TABLE) + { + partitioned_rels = + get_partitioned_child_rels(root, rel->relid); + Assert(list_length(partitioned_rels) >= 1); + } + break; + case RTE_SUBQUERY: + build_partitioned_rels = true; + break; + default: + elog(ERROR, "unexpected rtekind: %d", (int) rte->rtekind); } /* @@ -1306,6 +1327,19 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, RelOptInfo *childrel = lfirst(l); ListCell *lcp; + /* + * If we need to build partitioned_rels, accumulate the partitioned + * rels for this child. + */ + if (build_partitioned_rels) + { + List *cprels; + + cprels = get_partitioned_child_rels(root, childrel->relid); + partitioned_rels = list_concat(partitioned_rels, + list_copy(cprels)); + } + /* * If child has an unparameterized cheapest-total path, add that to * the unparameterized Append path we are constructing for the parent. diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 6b79b3ad99..907622eadb 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -6076,7 +6076,8 @@ plan_cluster_use_sort(Oid tableOid, Oid indexOid) * Returns a list of the RT indexes of the partitioned child relations * with rti as the root parent RT index. * - * Note: Only call this function on RTEs known to be partitioned tables. + * Note: This function might get called even for range table entries that + * are not partitioned tables; in such a case, it will simply return NIL. */ List * get_partitioned_child_rels(PlannerInfo *root, Index rti) @@ -6095,8 +6096,5 @@ get_partitioned_child_rels(PlannerInfo *root, Index rti) } } - /* The root partitioned table is included as a child rel */ - Assert(list_length(result) >= 1); - return result; } From 42651bdd68a123544d5bfd0773a170aa3b443f1b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 14 Sep 2017 11:11:12 -0400 Subject: [PATCH 0167/1087] Fix inconsistent capitalization.
Amit Langote Discussion: http://postgr.es/m/a83a0899-19f5-594c-9aac-3ba0f16989a1@lab.ntt.co.jp --- src/backend/commands/tablecmds.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 96354bdee5..563bcda30c 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -13779,7 +13779,7 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) * Prevent circularity by seeing if rel is a partition of attachrel. (In * particular, this disallows making a rel a partition of itself.) * - * We do that by checking if rel is a member of the list of attachRel's + * We do that by checking if rel is a member of the list of attachrel's * partitions provided the latter is partitioned at all. We want to avoid * having to construct this list again, so we request the strongest lock * on all partitions. We need the strongest lock, because we may decide From 0ec2e908babfbfde83a3925680f06b16408739ff Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 16 Aug 2017 00:22:32 -0400 Subject: [PATCH 0168/1087] Fix bool/int type confusion Using ++ on a bool variable doesn't work well when stdbool.h is in use. The original BSD code appears to use int here, so use that instead. Reviewed-by: Thomas Munro --- src/timezone/localtime.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/timezone/localtime.c b/src/timezone/localtime.c index 08642d1236..82c18e8544 100644 --- a/src/timezone/localtime.c +++ b/src/timezone/localtime.c @@ -1379,7 +1379,7 @@ timesub(const pg_time_t *timep, int32 offset, int y; const int *ip; int64 corr; - bool hit; + int hit; int i; corr = 0; From 8951c65df2701a4620ea43f12b9fbabdb653c164 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 16 Aug 2017 00:22:32 -0400 Subject: [PATCH 0169/1087] Remove BoolPtr type Not used and doesn't seem useful. Reviewed-by: Thomas Munro --- src/include/c.h | 2 -- 1 file changed, 2 deletions(-) diff --git a/src/include/c.h b/src/include/c.h index 630dfbfc41..fd53010e24 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -211,8 +211,6 @@ typedef char bool; #endif #endif /* not C++ */ -typedef bool *BoolPtr; - #ifndef TRUE #define TRUE 1 #endif From 77b6b5e9ceca04dbd6f0f6cd3fc881519acc8714 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 14 Sep 2017 12:28:50 -0400 Subject: [PATCH 0170/1087] Make RelationGetPartitionDispatchInfo expand depth-first. With this change, the order of leaf partitions as returned by RelationGetPartitionDispatchInfo should now be the same as the order used by expand_inherited_rtentry. This will make it simpler for future patches to match up the partition dispatch information with the planner data structures. The new code is also, in my opinion anyway, simpler and easier to understand. Amit Langote, reviewed by Amit Khandekar. I also reviewed and made a few cosmetic revisions. 
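A sketch of the resulting order, using hypothetical tables:

    create table root (a int, b int) partition by list (a);
    create table p1 partition of root for values in (1)
        partition by list (b);
    create table p11 partition of p1 for values in (1);
    create table p2 partition of root for values in (2);

Depth-first expansion returns the leaf partitions as (p11, p2), the same order in which expand_inherited_rtentry creates their RTEs, whereas the old breadth-first code returned (p2, p11).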
Discussion: http://postgr.es/m/d98d4761-5071-1762-501e-0e15047c714b@lab.ntt.co.jp --- src/backend/catalog/partition.c | 252 +++++++++++-------------- src/backend/optimizer/prep/prepunion.c | 7 + 2 files changed, 116 insertions(+), 143 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 73eff17202..1ab6dba7ae 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -147,6 +147,8 @@ static int32 partition_bound_cmp(PartitionKey key, static int partition_bound_bsearch(PartitionKey key, PartitionBoundInfo boundinfo, void *probe, bool probe_is_bound, bool *is_equal); +static void get_partition_dispatch_recurse(Relation rel, Relation parent, + List **pds, List **leaf_part_oids); /* * RelationBuildPartitionDesc @@ -1191,21 +1193,6 @@ get_partition_qual_relid(Oid relid) return result; } -/* - * Append OIDs of rel's partitions to the list 'partoids' and for each OID, - * append pointer rel to the list 'parents'. - */ -#define APPEND_REL_PARTITION_OIDS(rel, partoids, parents) \ - do\ - {\ - int i;\ - for (i = 0; i < (rel)->rd_partdesc->nparts; i++)\ - {\ - (partoids) = lappend_oid((partoids), (rel)->rd_partdesc->oids[i]);\ - (parents) = lappend((parents), (rel));\ - }\ - } while(0) - /* * RelationGetPartitionDispatchInfo * Returns information necessary to route tuples down a partition tree @@ -1222,151 +1209,130 @@ PartitionDispatch * RelationGetPartitionDispatchInfo(Relation rel, int *num_parted, List **leaf_part_oids) { + List *pdlist = NIL; PartitionDispatchData **pd; - List *all_parts = NIL, - *all_parents = NIL, - *parted_rels, - *parted_rel_parents; - ListCell *lc1, - *lc2; - int i, - k, - offset; + ListCell *lc; + int i; - /* - * We rely on the relcache to traverse the partition tree to build both - * the leaf partition OIDs list and the array of PartitionDispatch objects - * for the partitioned tables in the tree. That means every partitioned - * table in the tree must be locked, which is fine since we require the - * caller to lock all the partitions anyway. - * - * For every partitioned table in the tree, starting with the root - * partitioned table, add its relcache entry to parted_rels, while also - * queuing its partitions (in the order in which they appear in the - * partition descriptor) to be looked at later in the same loop. This is - * a bit tricky but works because the foreach() macro doesn't fetch the - * next list element until the bottom of the loop. - */ - *num_parted = 1; - parted_rels = list_make1(rel); - /* Root partitioned table has no parent, so NULL for parent */ - parted_rel_parents = list_make1(NULL); - APPEND_REL_PARTITION_OIDS(rel, all_parts, all_parents); - forboth(lc1, all_parts, lc2, all_parents) + Assert(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); + + *num_parted = 0; + *leaf_part_oids = NIL; + + get_partition_dispatch_recurse(rel, NULL, &pdlist, leaf_part_oids); + *num_parted = list_length(pdlist); + pd = (PartitionDispatchData **) palloc(*num_parted * + sizeof(PartitionDispatchData *)); + i = 0; + foreach(lc, pdlist) { - Oid partrelid = lfirst_oid(lc1); - Relation parent = lfirst(lc2); + pd[i++] = lfirst(lc); + } - if (get_rel_relkind(partrelid) == RELKIND_PARTITIONED_TABLE) - { - /* - * Already locked by the caller. Note that it is the - * responsibility of the caller to close the below relcache entry, - * once done using the information being collected here (for - * example, in ExecEndModifyTable). 
- */ - Relation partrel = heap_open(partrelid, NoLock); + return pd; +} - (*num_parted)++; - parted_rels = lappend(parted_rels, partrel); - parted_rel_parents = lappend(parted_rel_parents, parent); - APPEND_REL_PARTITION_OIDS(partrel, all_parts, all_parents); - } +/* + * get_partition_dispatch_recurse + * Recursively expand partition tree rooted at rel + * + * As the partition tree is expanded in a depth-first manner, we maintain two + * global lists: of PartitionDispatch objects corresponding to partitioned + * tables in *pds and of the leaf partition OIDs in *leaf_part_oids. + * + * Note that the order of OIDs of leaf partitions in leaf_part_oids matches + * the order in which the planner's expand_partitioned_rtentry() processes + * them. It's not necessarily the case that the offsets match up exactly, + * because constraint exclusion might prune away some partitions on the + * planner side, whereas we'll always have the complete list; but unpruned + * partitions will appear in the same order in the plan as they are returned + * here. + */ +static void +get_partition_dispatch_recurse(Relation rel, Relation parent, + List **pds, List **leaf_part_oids) +{ + TupleDesc tupdesc = RelationGetDescr(rel); + PartitionDesc partdesc = RelationGetPartitionDesc(rel); + PartitionKey partkey = RelationGetPartitionKey(rel); + PartitionDispatch pd; + int i; + + check_stack_depth(); + + /* Build a PartitionDispatch for this table and add it to *pds. */ + pd = (PartitionDispatch) palloc(sizeof(PartitionDispatchData)); + *pds = lappend(*pds, pd); + pd->reldesc = rel; + pd->key = partkey; + pd->keystate = NIL; + pd->partdesc = partdesc; + if (parent != NULL) + { + /* + * For every partitioned table other than the root, we must store a + * tuple table slot initialized with its tuple descriptor and a tuple + * conversion map to convert a tuple from its parent's rowtype to its + * own. That is to make sure that we are looking at the correct row + * using the correct tuple descriptor when computing its partition key + * for tuple routing. + */ + pd->tupslot = MakeSingleTupleTableSlot(tupdesc); + pd->tupmap = convert_tuples_by_name(RelationGetDescr(parent), + tupdesc, + gettext_noop("could not convert row type")); + } + else + { + /* Not required for the root partitioned table */ + pd->tupslot = NULL; + pd->tupmap = NULL; } /* - * We want to create two arrays - one for leaf partitions and another for - * partitioned tables (including the root table and internal partitions). - * While we only create the latter here, leaf partition array of suitable - * objects (such as, ResultRelInfo) is created by the caller using the - * list of OIDs we return. Indexes into these arrays get assigned in a - * breadth-first manner, whereby partitions of any given level are placed - * consecutively in the respective arrays. + * Go look at each partition of this table. If it's a leaf partition, + * simply add its OID to *leaf_part_oids. If it's a partitioned table, + * recursively call get_partition_dispatch_recurse(), so that its + * partitions are processed as well and a corresponding PartitionDispatch + * object gets added to *pds. + * + * About the values in pd->indexes: for a leaf partition, it contains the + * leaf partition's position in the global list *leaf_part_oids minus 1, + * whereas for a partitioned table partition, it contains the partition's + * position in the global list *pds multiplied by -1.
The latter is + * multiplied by -1 to distinguish partitioned tables from leaf partitions + * when going through the values in pd->indexes. So, for example, when + * using it during tuple-routing, encountering a value >= 0 means we found + * a leaf partition. It is immediately returned as the index in the array + * of ResultRelInfos of all the leaf partitions, using which we insert the + * tuple into that leaf partition. A negative value means we found a + * partitioned table. The value multiplied by -1 is returned as the index + * in the array of PartitionDispatch objects of all partitioned tables in + * the tree. This value is used to continue the search in the next level + * of the partition tree. */ - pd = (PartitionDispatchData **) palloc(*num_parted * - sizeof(PartitionDispatchData *)); - *leaf_part_oids = NIL; - i = k = offset = 0; - forboth(lc1, parted_rels, lc2, parted_rel_parents) + pd->indexes = (int *) palloc(partdesc->nparts * sizeof(int)); + for (i = 0; i < partdesc->nparts; i++) { - Relation partrel = lfirst(lc1); - Relation parent = lfirst(lc2); - PartitionKey partkey = RelationGetPartitionKey(partrel); - TupleDesc tupdesc = RelationGetDescr(partrel); - PartitionDesc partdesc = RelationGetPartitionDesc(partrel); - int j, - m; - - pd[i] = (PartitionDispatch) palloc(sizeof(PartitionDispatchData)); - pd[i]->reldesc = partrel; - pd[i]->key = partkey; - pd[i]->keystate = NIL; - pd[i]->partdesc = partdesc; - if (parent != NULL) + Oid partrelid = partdesc->oids[i]; + + if (get_rel_relkind(partrelid) != RELKIND_PARTITIONED_TABLE) { - /* - * For every partitioned table other than root, we must store a - * tuple table slot initialized with its tuple descriptor and a - * tuple conversion map to convert a tuple from its parent's - * rowtype to its own. That is to make sure that we are looking at - * the correct row using the correct tuple descriptor when - * computing its partition key for tuple routing. - */ - pd[i]->tupslot = MakeSingleTupleTableSlot(tupdesc); - pd[i]->tupmap = convert_tuples_by_name(RelationGetDescr(parent), - tupdesc, - gettext_noop("could not convert row type")); + *leaf_part_oids = lappend_oid(*leaf_part_oids, partrelid); + pd->indexes[i] = list_length(*leaf_part_oids) - 1; } else { - /* Not required for the root partitioned table */ - pd[i]->tupslot = NULL; - pd[i]->tupmap = NULL; - } - pd[i]->indexes = (int *) palloc(partdesc->nparts * sizeof(int)); - - /* - * Indexes corresponding to the internal partitions are multiplied by - * -1 to distinguish them from those of leaf partitions. Encountering - * an index >= 0 means we found a leaf partition, which is immediately - * returned as the partition we are looking for. A negative index - * means we found a partitioned table, whose PartitionDispatch object - * is located at the above index multiplied back by -1. Using the - * PartitionDispatch object, search is continued further down the - * partition tree. - */ - m = 0; - for (j = 0; j < partdesc->nparts; j++) - { - Oid partrelid = partdesc->oids[j]; + /* + * We assume all tables in the partition tree were already locked + * by the caller. + */ + Relation partrel = heap_open(partrelid, NoLock); - if (get_rel_relkind(partrelid) != RELKIND_PARTITIONED_TABLE) - { - *leaf_part_oids = lappend_oid(*leaf_part_oids, partrelid); - pd[i]->indexes[j] = k++; - } - else - { - /* - * offset denotes the number of partitioned tables of upper - * levels including those of the current level. 
Any partition - * of this table must belong to the next level and hence will - * be placed after the last partitioned table of this level. - */ - pd[i]->indexes[j] = -(1 + offset + m); - m++; - } + pd->indexes[i] = -list_length(*pds); + get_partition_dispatch_recurse(partrel, rel, pds, leaf_part_oids); } - i++; - - /* - * This counts the number of partitioned tables at upper levels - * including those of the current level. - */ - offset += m; } - - return pd; } /* Module-local functions */ diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index ccf21453fd..e1dbf38d16 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -1565,6 +1565,13 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) root->append_rel_list = list_concat(root->append_rel_list, appinfos); } +/* + * expand_partitioned_rtentry + * Recursively expand an RTE for a partitioned table. + * + * Note that RelationGetPartitionDispatchInfo will expand partitions in the + * same order as this code. + */ static void expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, Index parentRTindex, Relation parentrel, From 0c4b879b74f891c19b3b431c5f34f94e50daa09b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 16 Aug 2017 00:22:32 -0400 Subject: [PATCH 0171/1087] Avoid use of bool in thread_test.c It's not necessary for such a small program, and it causes unnecessary extra work to get the correct definition of bool, more so if we are going to introduce stdbool.h later. Reviewed-by: Thomas Munro --- src/test/thread/thread_test.c | 33 ++++++++++----------------------- 1 file changed, 10 insertions(+), 23 deletions(-) diff --git a/src/test/thread/thread_test.c b/src/test/thread/thread_test.c index 32ce80e57f..282a95872c 100644 --- a/src/test/thread/thread_test.c +++ b/src/test/thread/thread_test.c @@ -22,19 +22,6 @@ #if !defined(IN_CONFIGURE) && !defined(WIN32) #include "postgres.h" -#else -/* From src/include/c.h" */ -#ifndef bool -typedef char bool; -#endif - -#ifndef true -#define true ((bool) 1) -#endif - -#ifndef false -#define false ((bool) 0) -#endif #endif #include @@ -93,23 +80,23 @@ static volatile int errno2_set = 0; #ifndef HAVE_STRERROR_R static char *strerror_p1; static char *strerror_p2; -static bool strerror_threadsafe = false; +static int strerror_threadsafe = 0; #endif #if !defined(WIN32) && !defined(HAVE_GETPWUID_R) static struct passwd *passwd_p1; static struct passwd *passwd_p2; -static bool getpwuid_threadsafe = false; +static int getpwuid_threadsafe = 0; #endif #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R) static struct hostent *hostent_p1; static struct hostent *hostent_p2; static char myhostname[MAXHOSTNAMELEN]; -static bool gethostbyname_threadsafe = false; +static int gethostbyname_threadsafe = 0; #endif -static bool platform_is_threadsafe = true; +static int platform_is_threadsafe = 1; int main(int argc, char *argv[]) @@ -187,17 +174,17 @@ main(int argc, char *argv[]) #ifndef HAVE_STRERROR_R if (strerror_p1 != strerror_p2) - strerror_threadsafe = true; + strerror_threadsafe = 1; #endif #if !defined(WIN32) && !defined(HAVE_GETPWUID_R) if (passwd_p1 != passwd_p2) - getpwuid_threadsafe = true; + getpwuid_threadsafe = 1; #endif #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R) if (hostent_p1 != hostent_p2) - gethostbyname_threadsafe = true; + gethostbyname_threadsafe = 1; #endif /* close down threads */ @@ -218,7 +205,7 @@ main(int argc, char 
*argv[]) else { printf("not thread-safe. **\n"); - platform_is_threadsafe = false; + platform_is_threadsafe = 0; } #endif @@ -233,7 +220,7 @@ main(int argc, char *argv[]) else { printf("not thread-safe. **\n"); - platform_is_threadsafe = false; + platform_is_threadsafe = 0; } #endif @@ -249,7 +236,7 @@ main(int argc, char *argv[]) else { printf("not thread-safe. **\n"); - platform_is_threadsafe = false; + platform_is_threadsafe = 0; } #endif From 0a480502b092195a9b25a2f0f199a21d592a9c57 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 14 Sep 2017 15:41:08 -0400 Subject: [PATCH 0172/1087] Expand partitioned table RTEs level by level, without flattening. Flattening the partitioning hierarchy at this stage makes various desirable optimizations difficult. The original use case for this patch was partition-wise join, which wants to match up the partitions in one partitioning hierarchy with those in another such hierarchy. However, it now seems that it will also be useful in making partition pruning work using the PartitionDesc rather than constraint exclusion, because with a flattened expansion, we have no easy way to figure out which PartitionDescs apply to which leaf tables in a multi-level partition hierarchy. As it turns out, we end up creating both rte->inh and !rte->inh RTEs for each intermediate partitioned table, just as we previously did for the root table. This seems unnecessary since the partitioned tables have no storage and are not scanned. We might want to go back and rejigger things so that no partitioned tables (including the parent) need !rte->inh RTEs, but that seems to require some adjustments not related to the core purpose of this patch. Ashutosh Bapat, reviewed by me and by Amit Langote. Some final adjustments by me. Discussion: http://postgr.es/m/CAFjFpRd=1venqLL7oGU=C1dEkuvk2DJgvF+7uKbnPHaum1mvHQ@mail.gmail.com --- src/backend/optimizer/path/allpaths.c | 28 +-- src/backend/optimizer/plan/initsplan.c | 22 ++- src/backend/optimizer/plan/planner.c | 80 +++++++-- src/backend/optimizer/prep/prepunion.c | 234 ++++++++++++++----------- src/include/nodes/relation.h | 8 +- src/test/regress/expected/inherit.out | 22 +++ src/test/regress/expected/join.out | 53 ++++++ src/test/regress/sql/inherit.sql | 17 ++ src/test/regress/sql/join.sql | 23 +++ 9 files changed, 350 insertions(+), 137 deletions(-) diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index e8e7202e11..5b746a906a 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -24,6 +24,7 @@ #include "catalog/pg_operator.h" #include "catalog/pg_proc.h" #include "foreign/fdwapi.h" +#include "miscadmin.h" #include "nodes/makefuncs.h" #include "nodes/nodeFuncs.h" #ifdef OPTIMIZER_DEBUG @@ -352,8 +353,8 @@ set_rel_size(PlannerInfo *root, RelOptInfo *rel, else if (rte->relkind == RELKIND_PARTITIONED_TABLE) { /* - * A partitioned table without leaf partitions is marked - * as a dummy rel. + * A partitioned table without any partitions is marked as + * a dummy rel. */ set_dummy_rel_pathlist(rel); } @@ -867,6 +868,9 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel, int nattrs; ListCell *l; + /* Guard against stack overflow due to overly deep inheritance tree. */ + check_stack_depth(); + Assert(IS_SIMPLE_REL(rel)); /* @@ -1290,25 +1294,23 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, bool build_partitioned_rels = false; /* - * A plain relation will already have a PartitionedChildRelInfo if it is - * partitioned. 
For a subquery RTE, no PartitionedChildRelInfo exists; we - * collect all partitioned_rels associated with any child. (This assumes - * that we don't need to look through multiple levels of subquery RTEs; if - * we ever do, we could create a PartitionedChildRelInfo with the - * accumulated list of partitioned_rels which would then be found when - * populating our parent rel with paths. For the present, that appears to - * be unnecessary.) + * A root partition will already have a PartitionedChildRelInfo, and a + * non-root partitioned table doesn't need one, because its Append paths + * will get flattened into the parent anyway. For a subquery RTE, no + * PartitionedChildRelInfo exists; we collect all partitioned_rels + * associated with any child. (This assumes that we don't need to look + * through multiple levels of subquery RTEs; if we ever do, we could + * create a PartitionedChildRelInfo with the accumulated list of + * partitioned_rels which would then be found when populating our parent + * rel with paths. For the present, that appears to be unnecessary.) */ rte = planner_rt_fetch(rel->relid, root); switch (rte->rtekind) { case RTE_RELATION: if (rte->relkind == RELKIND_PARTITIONED_TABLE) - { partitioned_rels = get_partitioned_child_rels(root, rel->relid); - Assert(list_length(partitioned_rels) >= 1); - } break; case RTE_SUBQUERY: build_partitioned_rels = true; diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c index 987c20ac9f..ad81f0f82f 100644 --- a/src/backend/optimizer/plan/initsplan.c +++ b/src/backend/optimizer/plan/initsplan.c @@ -15,6 +15,7 @@ #include "postgres.h" #include "catalog/pg_type.h" +#include "catalog/pg_class.h" #include "nodes/nodeFuncs.h" #include "optimizer/clauses.h" #include "optimizer/cost.h" @@ -629,11 +630,28 @@ create_lateral_join_info(PlannerInfo *root) for (rti = 1; rti < root->simple_rel_array_size; rti++) { RelOptInfo *brel = root->simple_rel_array[rti]; + RangeTblEntry *brte = root->simple_rte_array[rti]; - if (brel == NULL || brel->reloptkind != RELOPT_BASEREL) + if (brel == NULL) + continue; + + /* + * In the case of table inheritance, the parent RTE is directly linked + * to every child table via an AppendRelInfo. In the case of table + * partitioning, the inheritance hierarchy is expanded one level at a + * time rather than flattened. Therefore, an other member rel that is + * a partitioned table may have children of its own, and must + * therefore be marked with the appropriate lateral info so that those + * children eventually get marked also.
+ */ + Assert(IS_SIMPLE_REL(brel)); + Assert(brte); + if (brel->reloptkind == RELOPT_OTHER_MEMBER_REL && + (brte->rtekind != RTE_RELATION || + brte->relkind != RELKIND_PARTITIONED_TABLE)) continue; - if (root->simple_rte_array[rti]->inh) + if (brte->inh) { foreach(lc, root->append_rel_list) { diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 907622eadb..7f146d670c 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -1038,7 +1038,7 @@ static void inheritance_planner(PlannerInfo *root) { Query *parse = root->parse; - int parentRTindex = parse->resultRelation; + int top_parentRTindex = parse->resultRelation; Bitmapset *subqueryRTindexes; Bitmapset *modifiableARIindexes; int nominalRelation = -1; @@ -1056,6 +1056,10 @@ inheritance_planner(PlannerInfo *root) Index rti; RangeTblEntry *parent_rte; List *partitioned_rels = NIL; + PlannerInfo *parent_root; + Query *parent_parse; + Bitmapset *parent_relids = bms_make_singleton(top_parentRTindex); + PlannerInfo **parent_roots = NULL; Assert(parse->commandType != CMD_INSERT); @@ -1119,11 +1123,31 @@ inheritance_planner(PlannerInfo *root) * (including the root parent) as child members of the inheritance set do * not appear anywhere else in the plan. The situation is exactly the * opposite in the case of non-partitioned inheritance parent as described - * below. + * below. For the same reason, collect the list of descendant partitioned + * tables to be saved in ModifyTable node, so that executor can lock those + * as well. */ - parent_rte = rt_fetch(parentRTindex, root->parse->rtable); + parent_rte = rt_fetch(top_parentRTindex, root->parse->rtable); if (parent_rte->relkind == RELKIND_PARTITIONED_TABLE) - nominalRelation = parentRTindex; + { + nominalRelation = top_parentRTindex; + partitioned_rels = get_partitioned_child_rels(root, top_parentRTindex); + /* The root partitioned table is included as a child rel */ + Assert(list_length(partitioned_rels) >= 1); + } + + /* + * The PlannerInfo for each child is obtained by translating the relevant + * members of the PlannerInfo for its immediate parent, which we find + * using the parent_relid in its AppendRelInfo. We save the PlannerInfo + * for each parent in an array indexed by relid for fast retrieval. Since + * the maximum number of parents is limited by the number of RTEs in the + * query, we use that number to allocate the array. An extra entry is + * needed since relids start from 1. + */ + parent_roots = (PlannerInfo **) palloc0((list_length(parse->rtable) + 1) * + sizeof(PlannerInfo *)); + parent_roots[top_parentRTindex] = root; /* * And now we can get on with generating a plan for each child table. @@ -1137,15 +1161,24 @@ inheritance_planner(PlannerInfo *root) Path *subpath; /* append_rel_list contains all append rels; ignore others */ - if (appinfo->parent_relid != parentRTindex) + if (!bms_is_member(appinfo->parent_relid, parent_relids)) continue; + /* + * expand_inherited_rtentry() always processes a parent before any of + * that parent's children, so the parent_root for this relation should + * already be available. + */ + parent_root = parent_roots[appinfo->parent_relid]; + Assert(parent_root != NULL); + parent_parse = parent_root->parse; + /* * We need a working copy of the PlannerInfo so that we can control * propagation of information back to the main copy. 
*/ subroot = makeNode(PlannerInfo); - memcpy(subroot, root, sizeof(PlannerInfo)); + memcpy(subroot, parent_root, sizeof(PlannerInfo)); /* * Generate modified query with this rel as target. We first apply @@ -1154,15 +1187,15 @@ inheritance_planner(PlannerInfo *root) * then fool around with subquery RTEs. */ subroot->parse = (Query *) - adjust_appendrel_attrs(root, - (Node *) parse, + adjust_appendrel_attrs(parent_root, + (Node *) parent_parse, 1, &appinfo); /* * If there are securityQuals attached to the parent, move them to the * child rel (they've already been transformed properly for that). */ - parent_rte = rt_fetch(parentRTindex, subroot->parse->rtable); + parent_rte = rt_fetch(appinfo->parent_relid, subroot->parse->rtable); child_rte = rt_fetch(appinfo->child_relid, subroot->parse->rtable); child_rte->securityQuals = parent_rte->securityQuals; parent_rte->securityQuals = NIL; @@ -1173,7 +1206,7 @@ inheritance_planner(PlannerInfo *root) * executor doesn't need to see the modified copies --- we can just * pass it the original rowMarks list.) */ - subroot->rowMarks = copyObject(root->rowMarks); + subroot->rowMarks = copyObject(parent_root->rowMarks); /* * The append_rel_list likewise might contain references to subquery @@ -1190,7 +1223,7 @@ inheritance_planner(PlannerInfo *root) ListCell *lc2; subroot->append_rel_list = NIL; - foreach(lc2, root->append_rel_list) + foreach(lc2, parent_root->append_rel_list) { AppendRelInfo *appinfo2 = lfirst_node(AppendRelInfo, lc2); @@ -1225,7 +1258,7 @@ inheritance_planner(PlannerInfo *root) ListCell *lr; rti = 1; - foreach(lr, parse->rtable) + foreach(lr, parent_parse->rtable) { RangeTblEntry *rte = lfirst_node(RangeTblEntry, lr); @@ -1272,6 +1305,22 @@ inheritance_planner(PlannerInfo *root) /* hack to mark target relation as an inheritance partition */ subroot->hasInheritedTarget = true; + /* + * If the child is further partitioned, remember it as a parent. Since + * a partitioned table does not have any data, we don't need to create + * a plan for it. We do, however, need to remember the PlannerInfo for + * use when processing its children. 
+ */ + if (child_rte->inh) + { + Assert(child_rte->relkind == RELKIND_PARTITIONED_TABLE); + parent_relids = + bms_add_member(parent_relids, appinfo->child_relid); + parent_roots[appinfo->child_relid] = subroot; + + continue; + } + /* Generate Path(s) for accessing this result relation */ grouping_planner(subroot, true, 0.0 /* retrieve all tuples */ ); @@ -1368,13 +1417,6 @@ inheritance_planner(PlannerInfo *root) Assert(!parse->onConflict); } - if (parent_rte->relkind == RELKIND_PARTITIONED_TABLE) - { - partitioned_rels = get_partitioned_child_rels(root, parentRTindex); - /* The root partitioned table is included as a child rel */ - Assert(list_length(partitioned_rels) >= 1); - } - /* Result path must go into outer query's FINAL upperrel */ final_rel = fetch_upper_rel(root, UPPERREL_FINAL, NULL); diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index e1dbf38d16..3e0c3de86d 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -104,16 +104,14 @@ static void expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, static void expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, Index parentRTindex, Relation parentrel, - PlanRowMark *parentrc, PartitionDesc partdesc, - LOCKMODE lockmode, - bool *has_child, List **appinfos, - List **partitioned_child_rels); + PlanRowMark *top_parentrc, LOCKMODE lockmode, + List **appinfos, List **partitioned_child_rels); static void expand_single_inheritance_child(PlannerInfo *root, RangeTblEntry *parentrte, Index parentRTindex, Relation parentrel, - PlanRowMark *parentrc, Relation childrel, - bool *has_child, List **appinfos, - List **partitioned_child_rels); + PlanRowMark *top_parentrc, Relation childrel, + List **appinfos, RangeTblEntry **childrte_p, + Index *childRTindex_p); static void make_inh_translation_list(Relation oldrelation, Relation newrelation, Index newvarno, @@ -1348,9 +1346,9 @@ expand_inherited_tables(PlannerInfo *root) ListCell *rl; /* - * expand_inherited_rtentry may add RTEs to parse->rtable; there is no - * need to scan them since they can't have inh=true. So just scan as far - * as the original end of the rtable list. + * expand_inherited_rtentry may add RTEs to parse->rtable. The function is + * expected to recursively handle any RTEs that it creates with inh=true. + * So just scan as far as the original end of the rtable list. */ nrtes = list_length(root->parse->rtable); rl = list_head(root->parse->rtable); @@ -1392,11 +1390,7 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) Relation oldrelation; LOCKMODE lockmode; List *inhOIDs; - List *appinfos; ListCell *l; - bool has_child; - PartitionedChildRelInfo *pcinfo; - List *partitioned_child_rels = NIL; /* Does RT entry allow inheritance? */ if (!rte->inh) @@ -1467,27 +1461,44 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) oldrelation = heap_open(parentOID, NoLock); /* Scan the inheritance set and expand it */ - appinfos = NIL; - has_child = false; if (RelationGetPartitionDesc(oldrelation) != NULL) { + List *partitioned_child_rels = NIL; + + Assert(rte->relkind == RELKIND_PARTITIONED_TABLE); + /* * If this table has partitions, recursively expand them in the order - * in which they appear in the PartitionDesc. But first, expand the - * parent itself. + * in which they appear in the PartitionDesc. 
*/ - expand_single_inheritance_child(root, rte, rti, oldrelation, oldrc, - oldrelation, - &has_child, &appinfos, - &partitioned_child_rels); expand_partitioned_rtentry(root, rte, rti, oldrelation, oldrc, - RelationGetPartitionDesc(oldrelation), - lockmode, - &has_child, &appinfos, - &partitioned_child_rels); + lockmode, &root->append_rel_list, + &partitioned_child_rels); + + /* + * We keep a list of objects in root, each of which maps a root + * partitioned parent RT index to the list of RT indexes of descendant + * partitioned child tables. When creating an Append or a ModifyTable + * path for the parent, we copy the child RT index list verbatim to + * the path so that it could be carried over to the executor so that + * the latter could identify the partitioned child tables. + */ + if (rte->inh && partitioned_child_rels != NIL) + { + PartitionedChildRelInfo *pcinfo; + + pcinfo = makeNode(PartitionedChildRelInfo); + pcinfo->parent_relid = rti; + pcinfo->child_rels = partitioned_child_rels; + root->pcinfo_list = lappend(root->pcinfo_list, pcinfo); + } } else { + List *appinfos = NIL; + RangeTblEntry *childrte; + Index childRTindex; + /* * This table has no partitions. Expand any plain inheritance * children in the order the OIDs were returned by @@ -1518,51 +1529,30 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) expand_single_inheritance_child(root, rte, rti, oldrelation, oldrc, newrelation, - &has_child, &appinfos, - &partitioned_child_rels); + &appinfos, &childrte, + &childRTindex); /* Close child relations, but keep locks */ if (childOID != parentOID) heap_close(newrelation, NoLock); } - } - - heap_close(oldrelation, NoLock); - /* - * If all the children were temp tables or a partitioned parent did not - * have any leaf partitions, pretend it's a non-inheritance situation; we - * don't need Append node in that case. The duplicate RTE we added for - * the parent table is harmless, so we don't bother to get rid of it; - * ditto for the useless PlanRowMark node. - */ - if (!has_child) - { - /* Clear flag before returning */ - rte->inh = false; - return; - } - - /* - * We keep a list of objects in root, each of which maps a partitioned - * parent RT index to the list of RT indexes of its partitioned child - * tables. When creating an Append or a ModifyTable path for the parent, - * we copy the child RT index list verbatim to the path so that it could - * be carried over to the executor so that the latter could identify the - * partitioned child tables. - */ - if (partitioned_child_rels != NIL) - { - pcinfo = makeNode(PartitionedChildRelInfo); + /* + * If all the children were temp tables, pretend it's a + * non-inheritance situation; we don't need Append node in that case. + * The duplicate RTE we added for the parent table is harmless, so we + * don't bother to get rid of it; ditto for the useless PlanRowMark + * node. 
+ */ + if (list_length(appinfos) < 2) + rte->inh = false; + else + root->append_rel_list = list_concat(root->append_rel_list, + appinfos); } - /* Otherwise, OK to add to root->append_rel_list */ - root->append_rel_list = list_concat(root->append_rel_list, appinfos); + heap_close(oldrelation, NoLock); } /* @@ -1575,15 +1565,35 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) static void expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, Index parentRTindex, Relation parentrel, - PlanRowMark *parentrc, PartitionDesc partdesc, - LOCKMODE lockmode, - bool *has_child, List **appinfos, - List **partitioned_child_rels) + PlanRowMark *top_parentrc, LOCKMODE lockmode, + List **appinfos, List **partitioned_child_rels) { int i; + RangeTblEntry *childrte; + Index childRTindex; + bool has_child = false; + PartitionDesc partdesc = RelationGetPartitionDesc(parentrel); check_stack_depth(); + /* A partitioned table should always have a partition descriptor. */ + Assert(partdesc); + + Assert(parentrte->inh); + + /* First expand the partitioned table itself. */ + expand_single_inheritance_child(root, parentrte, parentRTindex, parentrel, + top_parentrc, parentrel, + appinfos, &childrte, &childRTindex); + + /* + * The partitioned table does not have data for itself but still needs to + * be locked. Update the given list of partitioned children with the RTI + * of this partitioned relation. + */ + *partitioned_child_rels = lappend_int(*partitioned_child_rels, + childRTindex); + for (i = 0; i < partdesc->nparts; i++) { Oid childOID = partdesc->oids[i]; @@ -1599,23 +1609,30 @@ expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, continue; } + /* We have a real partition. */ + has_child = true; + expand_single_inheritance_child(root, parentrte, parentRTindex, - parentrel, parentrc, childrel, - has_child, appinfos, - partitioned_child_rels); + parentrel, top_parentrc, childrel, + appinfos, &childrte, &childRTindex); /* If this child is itself partitioned, recurse */ if (childrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) - expand_partitioned_rtentry(root, parentrte, parentRTindex, - parentrel, parentrc, - RelationGetPartitionDesc(childrel), - lockmode, - has_child, appinfos, - partitioned_child_rels); + expand_partitioned_rtentry(root, childrte, childRTindex, + childrel, top_parentrc, lockmode, + appinfos, partitioned_child_rels); /* Close child relation, but keep locks */ heap_close(childrel, NoLock); } + + /* + * If the partitioned table has no partitions or all the partitions are + * temporary tables from other backends, treat this as a non-inheritance + * case. + */ + if (!has_child) + parentrte->inh = false; } /* @@ -1623,16 +1640,31 @@ expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, * Expand a single inheritance child, if needed. * * If this is a temp table of another backend, we'll return without doing - * anything at all. Otherwise, we'll set "has_child" to true, build a - * RangeTblEntry and either a PartitionedChildRelInfo or AppendRelInfo as + * anything at all. Otherwise, build a RangeTblEntry and an AppendRelInfo, if * appropriate, plus maybe a PlanRowMark.
+ * + * We now expand the partition hierarchy level by level, creating a + * corresponding hierarchy of AppendRelInfos and RelOptInfos, where each + * partitioned descendant acts as a parent of its immediate partitions. + * (This is a difference from what older versions of PostgreSQL did and what + * is still done in the case of table inheritance for unpartitioned tables, + * where the hierarchy is flattened during RTE expansion.) + * + * PlanRowMarks still carry the top-parent's RTI, and the top-parent's + * allMarkTypes field still accumulates values from all descendants. + * + * "parentrte" and "parentRTindex" are immediate parent's RTE and + * RTI. "top_parentrc" is top parent's PlanRowMark. + * + * The child RangeTblEntry and its RTI are returned in "childrte_p" and + * "childRTindex_p" resp. + */ static void expand_single_inheritance_child(PlannerInfo *root, RangeTblEntry *parentrte, Index parentRTindex, Relation parentrel, - PlanRowMark *parentrc, Relation childrel, - bool *has_child, List **appinfos, - List **partitioned_child_rels) + PlanRowMark *top_parentrc, Relation childrel, + List **appinfos, RangeTblEntry **childrte_p, + Index *childRTindex_p) { Query *parse = root->parse; Oid parentOID = RelationGetRelid(parentrel); @@ -1654,24 +1686,30 @@ expand_single_inheritance_child(PlannerInfo *root, RangeTblEntry *parentrte, * restriction clauses, so we don't need to do it here. */ childrte = copyObject(parentrte); + *childrte_p = childrte; childrte->relid = childOID; childrte->relkind = childrel->rd_rel->relkind; - childrte->inh = false; + /* A partitioned child will need to be expanded further. */ + if (childOID != parentOID && + childrte->relkind == RELKIND_PARTITIONED_TABLE) + childrte->inh = true; + else + childrte->inh = false; childrte->requiredPerms = 0; childrte->securityQuals = NIL; parse->rtable = lappend(parse->rtable, childrte); childRTindex = list_length(parse->rtable); + *childRTindex_p = childRTindex; /* - * Build an AppendRelInfo for this parent and child, unless the child is a - * partitioned table. + * We need an AppendRelInfo if paths will be built for the child RTE. If + * childrte->inh is true, then we'll always need to generate append paths + * for it. If childrte->inh is false, we must scan it if it's not a + * partitioned table; but if it is a partitioned table, then it never has + * any data of its own and need not be scanned. */ - if (childrte->relkind != RELKIND_PARTITIONED_TABLE) + if (childrte->relkind != RELKIND_PARTITIONED_TABLE || childrte->inh) { - /* Remember if we saw a real child. */ - if (childOID != parentOID) - *has_child = true; - appinfo = makeNode(AppendRelInfo); appinfo->parent_relid = parentRTindex; appinfo->child_relid = childRTindex; @@ -1701,25 +1739,23 @@ expand_single_inheritance_child(PlannerInfo *root, RangeTblEntry *parentrte, appinfo->translated_vars); } } - else - *partitioned_child_rels = lappend_int(*partitioned_child_rels, - childRTindex); /* * Build a PlanRowMark if parent is marked FOR UPDATE/SHARE.
*/ - if (parentrc) + if (top_parentrc) { PlanRowMark *childrc = makeNode(PlanRowMark); childrc->rti = childRTindex; - childrc->prti = parentRTindex; - childrc->rowmarkId = parentrc->rowmarkId; + childrc->prti = top_parentrc->rti; + childrc->rowmarkId = top_parentrc->rowmarkId; /* Reselect rowmark type, because relkind might not match parent */ - childrc->markType = select_rowmark_type(childrte, parentrc->strength); + childrc->markType = select_rowmark_type(childrte, + top_parentrc->strength); childrc->allMarkTypes = (1 << childrc->markType); - childrc->strength = parentrc->strength; - childrc->waitPolicy = parentrc->waitPolicy; + childrc->strength = top_parentrc->strength; + childrc->waitPolicy = top_parentrc->waitPolicy; /* * We mark RowMarks for partitioned child tables as parent RowMarks so @@ -1728,8 +1764,8 @@ expand_single_inheritance_child(PlannerInfo *root, RangeTblEntry *parentrte, */ childrc->isParent = (childrte->relkind == RELKIND_PARTITIONED_TABLE); - /* Include child's rowmark type in parent's allMarkTypes */ - parentrc->allMarkTypes |= childrc->allMarkTypes; + /* Include child's rowmark type in top parent's allMarkTypes */ + top_parentrc->allMarkTypes |= childrc->allMarkTypes; root->rowMarks = lappend(root->rowMarks, childrc); } diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index a39e59d8ac..d50ff55681 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -1935,10 +1935,10 @@ typedef struct SpecialJoinInfo * * When we expand an inheritable table or a UNION-ALL subselect into an * "append relation" (essentially, a list of child RTEs), we build an - * AppendRelInfo for each non-partitioned child RTE. The list of - * AppendRelInfos indicates which child RTEs must be included when expanding - * the parent, and each node carries information needed to translate Vars - * referencing the parent into Vars referencing that child. + * AppendRelInfo for each child RTE. The list of AppendRelInfos indicates + * which child RTEs must be included when expanding the parent, and each node + * carries information needed to translate Vars referencing the parent into + * Vars referencing that child. * * These structs are kept in the PlannerInfo node's append_rel_list. 
* Note that we just throw all the structs into one list, and scan the diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out index 1fa9650ec9..2fb0b4d86e 100644 --- a/src/test/regress/expected/inherit.out +++ b/src/test/regress/expected/inherit.out @@ -625,6 +625,28 @@ select tableoid::regclass::text as relname, parted_tab.* from parted_tab order b (3 rows) drop table parted_tab; +-- Check UPDATE with multi-level partitioned inherited target +create table mlparted_tab (a int, b char, c text) partition by list (a); +create table mlparted_tab_part1 partition of mlparted_tab for values in (1); +create table mlparted_tab_part2 partition of mlparted_tab for values in (2) partition by list (b); +create table mlparted_tab_part3 partition of mlparted_tab for values in (3); +create table mlparted_tab_part2a partition of mlparted_tab_part2 for values in ('a'); +create table mlparted_tab_part2b partition of mlparted_tab_part2 for values in ('b'); +insert into mlparted_tab values (1, 'a'), (2, 'a'), (2, 'b'), (3, 'a'); +update mlparted_tab mlp set c = 'xxx' +from + (select a from some_tab union all select a+1 from some_tab) ss (a) +where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3; +select tableoid::regclass::text as relname, mlparted_tab.* from mlparted_tab order by 1,2; + relname | a | b | c +---------------------+---+---+----- + mlparted_tab_part1 | 1 | a | + mlparted_tab_part2a | 2 | a | + mlparted_tab_part2b | 2 | b | xxx + mlparted_tab_part3 | 3 | a | xxx +(4 rows) + +drop table mlparted_tab; drop table some_tab cascade; NOTICE: drop cascades to table some_tab_child /* Test multiple inheritance of column defaults */ diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index 9f4c88dab4..06a84e8e1c 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -5328,6 +5328,59 @@ LINE 1: ...xx1 using lateral (select * from int4_tbl where f1 = x1) ss; ^ HINT: There is an entry for table "xx1", but it cannot be referenced from this part of the query. -- +-- test LATERAL reference propagation down a multi-level inheritance hierarchy +-- produced for a multi-level partitioned table hierarchy. 
+-- +create table pt1 (a int, b int, c varchar) partition by range(a); +create table pt1p1 partition of pt1 for values from (0) to (100) partition by range(b); +create table pt1p2 partition of pt1 for values from (100) to (200); +create table pt1p1p1 partition of pt1p1 for values from (0) to (100); +insert into pt1 values (1, 1, 'x'), (101, 101, 'y'); +create table ut1 (a int, b int, c varchar); +insert into ut1 values (101, 101, 'y'), (2, 2, 'z'); +explain (verbose, costs off) +select t1.b, ss.phv from ut1 t1 left join lateral + (select t2.a as t2a, t3.a t3a, least(t1.a, t2.a, t3.a) phv + from pt1 t2 join ut1 t3 on t2.a = t3.b) ss + on t1.a = ss.t2a order by t1.a; + QUERY PLAN +------------------------------------------------------------- + Sort + Output: t1.b, (LEAST(t1.a, t2.a, t3.a)), t1.a + Sort Key: t1.a + -> Nested Loop Left Join + Output: t1.b, (LEAST(t1.a, t2.a, t3.a)), t1.a + -> Seq Scan on public.ut1 t1 + Output: t1.a, t1.b, t1.c + -> Hash Join + Output: t2.a, LEAST(t1.a, t2.a, t3.a) + Hash Cond: (t3.b = t2.a) + -> Seq Scan on public.ut1 t3 + Output: t3.a, t3.b, t3.c + -> Hash + Output: t2.a + -> Append + -> Seq Scan on public.pt1p1p1 t2 + Output: t2.a + Filter: (t1.a = t2.a) + -> Seq Scan on public.pt1p2 t2_1 + Output: t2_1.a + Filter: (t1.a = t2_1.a) +(21 rows) + +select t1.b, ss.phv from ut1 t1 left join lateral + (select t2.a as t2a, t3.a t3a, least(t1.a, t2.a, t3.a) phv + from pt1 t2 join ut1 t3 on t2.a = t3.b) ss + on t1.a = ss.t2a order by t1.a; + b | phv +-----+----- + 2 | + 101 | 101 +(2 rows) + +drop table pt1; +drop table ut1; +-- -- test that foreign key join estimation performs sanely for outer joins -- begin; diff --git a/src/test/regress/sql/inherit.sql b/src/test/regress/sql/inherit.sql index c96580cd81..01780d4977 100644 --- a/src/test/regress/sql/inherit.sql +++ b/src/test/regress/sql/inherit.sql @@ -154,6 +154,23 @@ where parted_tab.a = ss.a; select tableoid::regclass::text as relname, parted_tab.* from parted_tab order by 1,2; drop table parted_tab; + +-- Check UPDATE with multi-level partitioned inherited target +create table mlparted_tab (a int, b char, c text) partition by list (a); +create table mlparted_tab_part1 partition of mlparted_tab for values in (1); +create table mlparted_tab_part2 partition of mlparted_tab for values in (2) partition by list (b); +create table mlparted_tab_part3 partition of mlparted_tab for values in (3); +create table mlparted_tab_part2a partition of mlparted_tab_part2 for values in ('a'); +create table mlparted_tab_part2b partition of mlparted_tab_part2 for values in ('b'); +insert into mlparted_tab values (1, 'a'), (2, 'a'), (2, 'b'), (3, 'a'); + +update mlparted_tab mlp set c = 'xxx' +from + (select a from some_tab union all select a+1 from some_tab) ss (a) +where (mlp.a = ss.a and mlp.b = 'b') or mlp.a = 3; +select tableoid::regclass::text as relname, mlparted_tab.* from mlparted_tab order by 1,2; + +drop table mlparted_tab; drop table some_tab cascade; /* Test multiple inheritance of column defaults */ diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index 835d67551c..8b21838e92 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -1733,6 +1733,29 @@ delete from xx1 using (select * from int4_tbl where f1 = x1) ss; delete from xx1 using (select * from int4_tbl where f1 = xx1.x1) ss; delete from xx1 using lateral (select * from int4_tbl where f1 = x1) ss; +-- +-- test LATERAL reference propagation down a multi-level inheritance hierarchy +-- produced for a multi-level 
partitioned table hierarchy. +-- +create table pt1 (a int, b int, c varchar) partition by range(a); +create table pt1p1 partition of pt1 for values from (0) to (100) partition by range(b); +create table pt1p2 partition of pt1 for values from (100) to (200); +create table pt1p1p1 partition of pt1p1 for values from (0) to (100); +insert into pt1 values (1, 1, 'x'), (101, 101, 'y'); +create table ut1 (a int, b int, c varchar); +insert into ut1 values (101, 101, 'y'), (2, 2, 'z'); +explain (verbose, costs off) +select t1.b, ss.phv from ut1 t1 left join lateral + (select t2.a as t2a, t3.a t3a, least(t1.a, t2.a, t3.a) phv + from pt1 t2 join ut1 t3 on t2.a = t3.b) ss + on t1.a = ss.t2a order by t1.a; +select t1.b, ss.phv from ut1 t1 left join lateral + (select t2.a as t2a, t3.a t3a, least(t1.a, t2.a, t3.a) phv + from pt1 t2 join ut1 t3 on t2.a = t3.b) ss + on t1.a = ss.t2a order by t1.a; + +drop table pt1; +drop table ut1; -- -- test that foreign key join estimation performs sanely for outer joins -- From 8356753c212a5865469c9befc4cf1e637a9d8bbc Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 13 Sep 2017 02:12:17 -0700 Subject: [PATCH 0173/1087] Perform only one ReadControlFile() during startup. Previously we read the control file in multiple places. But soon the segment size will be configurable and stored in the control file, and that needs to be available earlier than it currently is needed. Instead of adding yet another place where it's read, refactor things so there's a single processing of the control file during startup (in EXEC_BACKEND that's every individual backend's startup). Author: Andres Freund Discussion: http://postgr.es/m/20170913092828.aozd3gvvmw67gmyc@alap3.anarazel.de --- src/backend/access/transam/xlog.c | 48 ++++++++++++++++++++--------- src/backend/postmaster/postmaster.c | 6 ++++ src/backend/tcop/postgres.c | 3 ++ src/include/access/xlog.h | 1 + 4 files changed, 44 insertions(+), 14 deletions(-) diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index a3e8ce092f..b8f648927a 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -4799,6 +4799,22 @@ check_wal_buffers(int *newval, void **extra, GucSource source) return true; } +/* + * Read the control file, set respective GUCs. + * + * This is to be called during startup, unless in bootstrap mode, where no + * control file yet exists. As there's no shared memory yet (its sizing can + * depend on the contents of the control file!), first store data in local + * memory. XLOGShmemInit() will then copy it to shared memory later. + */ +void +LocalProcessControlFile(void) +{ + Assert(ControlFile == NULL); + ControlFile = palloc(sizeof(ControlFileData)); + ReadControlFile(); +} + /* * Initialization of shared memory for XLOG */ @@ -4850,6 +4866,7 @@ XLOGShmemInit(void) foundXLog; char *allocptr; int i; + ControlFileData *localControlFile; #ifdef WAL_DEBUG @@ -4867,8 +4884,18 @@ XLOGShmemInit(void) } #endif + /* + * Already have read control file locally, unless in bootstrap mode. Move + * local version into shared memory.
+	 */
+	localControlFile = ControlFile;
 	ControlFile = (ControlFileData *)
 		ShmemInitStruct("Control File", sizeof(ControlFileData), &foundCFile);
+	if (localControlFile)
+	{
+		memcpy(ControlFile, localControlFile, sizeof(ControlFileData));
+		pfree(localControlFile);
+	}
 
 	XLogCtl = (XLogCtlData *)
 		ShmemInitStruct("XLOG Ctl", XLOGShmemSize(), &foundXLog);
@@ -4933,14 +4960,6 @@ XLOGShmemInit(void)
 	SpinLockInit(&XLogCtl->info_lck);
 	SpinLockInit(&XLogCtl->ulsn_lck);
 	InitSharedLatch(&XLogCtl->recoveryWakeupLatch);
-
-	/*
-	 * If we are not in bootstrap mode, pg_control should already exist. Read
-	 * and validate it immediately (see comments in ReadControlFile() for the
-	 * reasons why).
-	 */
-	if (!IsBootstrapProcessingMode())
-		ReadControlFile();
 }
 
 /*
@@ -5129,6 +5148,12 @@ BootStrapXLOG(void)
 	BootStrapMultiXact();
 
 	pfree(buffer);
+
+	/*
+	 * Force control file to be read - in contrast to normal processing we'd
+	 * otherwise never run the checks and GUC related initializations therein.
+	 */
+	ReadControlFile();
 }
 
 static char *
@@ -6227,13 +6252,8 @@ StartupXLOG(void)
 	struct stat st;
 
 	/*
-	 * Read control file and check XLOG status looks valid.
-	 *
-	 * Note: in most control paths, *ControlFile is already valid and we need
-	 * not do ReadControlFile() here, but might as well do it to be sure.
+	 * Verify XLOG status looks valid.
 	 */
-	ReadControlFile();
-
 	if (ControlFile->state < DB_SHUTDOWNED ||
 		ControlFile->state > DB_IN_PRODUCTION ||
 		!XRecOffIsValid(ControlFile->checkPoint))
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 95180b2ef5..e4f8f597c6 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -950,6 +950,9 @@ PostmasterMain(int argc, char *argv[])
 	 */
 	CreateDataDirLockFile(true);
 
+	/* read control file (error checking and contains config) */
+	LocalProcessControlFile();
+
 	/*
 	 * Initialize SSL library, if specified.
 	 */
@@ -4805,6 +4808,9 @@ SubPostmasterMain(int argc, char *argv[])
 	/* Read in remaining GUC variables */
 	read_nondefault_variables();
 
+	/* (re-)read control file (contains config) */
+	LocalProcessControlFile();
+
 	/*
 	 * Reload any libraries that were preloaded by the postmaster. Since we
 	 * exec'd this process, those libraries didn't come along with us; but we
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 4eb85720a7..46b662266b 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -3717,6 +3717,9 @@ PostgresMain(int argc, char *argv[],
 	 */
 	CreateDataDirLockFile(false);
 
+	/* read control file (error checking and contains config) */
+	LocalProcessControlFile();
+
 	/* Initialize MaxBackends (if under postmaster, was done already) */
 	InitializeMaxBackends();
 }
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index 66bfb77295..e0635ab4e6 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -261,6 +261,7 @@ extern XLogRecPtr GetFakeLSNForUnloggedRel(void);
 extern Size XLOGShmemSize(void);
 extern void XLOGShmemInit(void);
 extern void BootStrapXLOG(void);
+extern void LocalProcessControlFile(void);
 extern void StartupXLOG(void);
 extern void ShutdownXLOG(int code, Datum arg);
 extern void InitXLOGAccess(void);

From 81276fdd3931d286e62b86b2512a517de2ba2de8 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 14 Sep 2017 16:25:19 -0400
Subject: [PATCH 0174/1087] Add missing tags to GetCommandLogLevel.

Otherwise, log_statement = 'ddl' causes errors if those statement
types are used.
Michael Paquier, reviewed by Ashutosh Sharma Discussion: http://postgr.es/m/CAB7nPqStC3HkE76Q1MnHsVd1vF1Td9zXApzYadzDMyLMRkkGrw@mail.gmail.com --- src/backend/tcop/utility.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index 775477c6cf..5c69ecf0f7 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -3007,6 +3007,10 @@ GetCommandLogLevel(Node *parsetree) lev = LOGSTMT_DDL; break; + case T_AlterOperatorStmt: + lev = LOGSTMT_DDL; + break; + case T_AlterTableMoveAllStmt: case T_AlterTableStmt: lev = LOGSTMT_DDL; @@ -3291,6 +3295,14 @@ GetCommandLogLevel(Node *parsetree) lev = LOGSTMT_DDL; break; + case T_CreateStatsStmt: + lev = LOGSTMT_DDL; + break; + + case T_AlterCollationStmt: + lev = LOGSTMT_DDL; + break; + /* already-planned queries */ case T_PlannedStmt: { From b28dfa6d6f4e9a7a518d3c22b28375cad8a22272 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 11 Aug 2017 21:04:04 -0400 Subject: [PATCH 0175/1087] adminpack: Add test suite Reviewed-by: David Steele --- contrib/adminpack/.gitignore | 4 + contrib/adminpack/Makefile | 2 + contrib/adminpack/expected/adminpack.out | 146 +++++++++++++++++++++++ contrib/adminpack/sql/adminpack.sql | 60 ++++++++++ 4 files changed, 212 insertions(+) create mode 100644 contrib/adminpack/.gitignore create mode 100644 contrib/adminpack/expected/adminpack.out create mode 100644 contrib/adminpack/sql/adminpack.sql diff --git a/contrib/adminpack/.gitignore b/contrib/adminpack/.gitignore new file mode 100644 index 0000000000..5dcb3ff972 --- /dev/null +++ b/contrib/adminpack/.gitignore @@ -0,0 +1,4 @@ +# Generated subdirectories +/log/ +/results/ +/tmp_check/ diff --git a/contrib/adminpack/Makefile b/contrib/adminpack/Makefile index f065f84bfb..89c249bc0d 100644 --- a/contrib/adminpack/Makefile +++ b/contrib/adminpack/Makefile @@ -8,6 +8,8 @@ EXTENSION = adminpack DATA = adminpack--1.0.sql PGFILEDESC = "adminpack - support functions for pgAdmin" +REGRESS = adminpack + ifdef USE_PGXS PG_CONFIG = pg_config PGXS := $(shell $(PG_CONFIG) --pgxs) diff --git a/contrib/adminpack/expected/adminpack.out b/contrib/adminpack/expected/adminpack.out new file mode 100644 index 0000000000..b0d72ddab2 --- /dev/null +++ b/contrib/adminpack/expected/adminpack.out @@ -0,0 +1,146 @@ +CREATE EXTENSION adminpack; +-- create new file +SELECT pg_file_write('test_file1', 'test1', false); + pg_file_write +--------------- + 5 +(1 row) + +SELECT pg_read_file('test_file1'); + pg_read_file +-------------- + test1 +(1 row) + +-- append +SELECT pg_file_write('test_file1', 'test1', true); + pg_file_write +--------------- + 5 +(1 row) + +SELECT pg_read_file('test_file1'); + pg_read_file +-------------- + test1test1 +(1 row) + +-- error, already exists +SELECT pg_file_write('test_file1', 'test1', false); +ERROR: file "test_file1" exists +SELECT pg_read_file('test_file1'); + pg_read_file +-------------- + test1test1 +(1 row) + +-- disallowed file paths +SELECT pg_file_write('../test_file0', 'test0', false); +ERROR: path must be in or below the current directory +SELECT pg_file_write('/tmp/test_file0', 'test0', false); +ERROR: absolute path not allowed +SELECT pg_file_write(current_setting('data_directory') || '/test_file4', 'test4', false); + pg_file_write +--------------- + 5 +(1 row) + +SELECT pg_file_write(current_setting('data_directory') || '/../test_file4', 'test4', false); +ERROR: reference to parent directory ("..") not allowed +-- rename file +SELECT pg_file_rename('test_file1', 
'test_file2'); + pg_file_rename +---------------- + t +(1 row) + +SELECT pg_read_file('test_file1'); -- not there +ERROR: could not stat file "test_file1": No such file or directory +SELECT pg_read_file('test_file2'); + pg_read_file +-------------- + test1test1 +(1 row) + +-- error +SELECT pg_file_rename('test_file1', 'test_file2'); +WARNING: file "test_file1" is not accessible: No such file or directory + pg_file_rename +---------------- + f +(1 row) + +-- rename file and archive +SELECT pg_file_write('test_file3', 'test3', false); + pg_file_write +--------------- + 5 +(1 row) + +SELECT pg_file_rename('test_file2', 'test_file3', 'test_file3_archive'); + pg_file_rename +---------------- + t +(1 row) + +SELECT pg_read_file('test_file2'); -- not there +ERROR: could not stat file "test_file2": No such file or directory +SELECT pg_read_file('test_file3'); + pg_read_file +-------------- + test1test1 +(1 row) + +SELECT pg_read_file('test_file3_archive'); + pg_read_file +-------------- + test3 +(1 row) + +-- unlink +SELECT pg_file_unlink('test_file1'); -- does not exist + pg_file_unlink +---------------- + f +(1 row) + +SELECT pg_file_unlink('test_file2'); -- does not exist + pg_file_unlink +---------------- + f +(1 row) + +SELECT pg_file_unlink('test_file3'); + pg_file_unlink +---------------- + t +(1 row) + +SELECT pg_file_unlink('test_file3_archive'); + pg_file_unlink +---------------- + t +(1 row) + +SELECT pg_file_unlink('test_file4'); + pg_file_unlink +---------------- + t +(1 row) + +-- superuser checks +CREATE USER regress_user1; +SET ROLE regress_user1; +SELECT pg_file_write('test_file0', 'test0', false); +ERROR: only superuser may access generic file functions +SELECT pg_file_rename('test_file0', 'test_file0'); +ERROR: only superuser may access generic file functions +CONTEXT: SQL function "pg_file_rename" statement 1 +SELECT pg_file_unlink('test_file0'); +ERROR: only superuser may access generic file functions +SELECT pg_logdir_ls(); +ERROR: only superuser can list the log directory +RESET ROLE; +DROP USER regress_user1; +-- no further tests for pg_logdir_ls() because it depends on the +-- server's logging setup diff --git a/contrib/adminpack/sql/adminpack.sql b/contrib/adminpack/sql/adminpack.sql new file mode 100644 index 0000000000..13621bd043 --- /dev/null +++ b/contrib/adminpack/sql/adminpack.sql @@ -0,0 +1,60 @@ +CREATE EXTENSION adminpack; + +-- create new file +SELECT pg_file_write('test_file1', 'test1', false); +SELECT pg_read_file('test_file1'); + +-- append +SELECT pg_file_write('test_file1', 'test1', true); +SELECT pg_read_file('test_file1'); + +-- error, already exists +SELECT pg_file_write('test_file1', 'test1', false); +SELECT pg_read_file('test_file1'); + +-- disallowed file paths +SELECT pg_file_write('../test_file0', 'test0', false); +SELECT pg_file_write('/tmp/test_file0', 'test0', false); +SELECT pg_file_write(current_setting('data_directory') || '/test_file4', 'test4', false); +SELECT pg_file_write(current_setting('data_directory') || '/../test_file4', 'test4', false); + + +-- rename file +SELECT pg_file_rename('test_file1', 'test_file2'); +SELECT pg_read_file('test_file1'); -- not there +SELECT pg_read_file('test_file2'); + +-- error +SELECT pg_file_rename('test_file1', 'test_file2'); + +-- rename file and archive +SELECT pg_file_write('test_file3', 'test3', false); +SELECT pg_file_rename('test_file2', 'test_file3', 'test_file3_archive'); +SELECT pg_read_file('test_file2'); -- not there +SELECT pg_read_file('test_file3'); +SELECT 
pg_read_file('test_file3_archive'); + + +-- unlink +SELECT pg_file_unlink('test_file1'); -- does not exist +SELECT pg_file_unlink('test_file2'); -- does not exist +SELECT pg_file_unlink('test_file3'); +SELECT pg_file_unlink('test_file3_archive'); +SELECT pg_file_unlink('test_file4'); + + +-- superuser checks +CREATE USER regress_user1; +SET ROLE regress_user1; + +SELECT pg_file_write('test_file0', 'test0', false); +SELECT pg_file_rename('test_file0', 'test_file0'); +SELECT pg_file_unlink('test_file0'); +SELECT pg_logdir_ls(); + +RESET ROLE; +DROP USER regress_user1; + + +-- no further tests for pg_logdir_ls() because it depends on the +-- server's logging setup From 6141123a827a47d02b8b6c8eb97643c33aa4461d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 11 Aug 2017 21:04:04 -0400 Subject: [PATCH 0176/1087] fuzzystrmatch: Add test suite Reviewed-by: David Steele --- contrib/fuzzystrmatch/.gitignore | 4 ++ contrib/fuzzystrmatch/Makefile | 2 + .../fuzzystrmatch/expected/fuzzystrmatch.out | 67 +++++++++++++++++++ contrib/fuzzystrmatch/sql/fuzzystrmatch.sql | 21 ++++++ doc/src/sgml/fuzzystrmatch.sgml | 8 +-- 5 files changed, 98 insertions(+), 4 deletions(-) create mode 100644 contrib/fuzzystrmatch/.gitignore create mode 100644 contrib/fuzzystrmatch/expected/fuzzystrmatch.out create mode 100644 contrib/fuzzystrmatch/sql/fuzzystrmatch.sql diff --git a/contrib/fuzzystrmatch/.gitignore b/contrib/fuzzystrmatch/.gitignore new file mode 100644 index 0000000000..5dcb3ff972 --- /dev/null +++ b/contrib/fuzzystrmatch/.gitignore @@ -0,0 +1,4 @@ +# Generated subdirectories +/log/ +/results/ +/tmp_check/ diff --git a/contrib/fuzzystrmatch/Makefile b/contrib/fuzzystrmatch/Makefile index 51e215a919..bd6f5e50d1 100644 --- a/contrib/fuzzystrmatch/Makefile +++ b/contrib/fuzzystrmatch/Makefile @@ -8,6 +8,8 @@ DATA = fuzzystrmatch--1.1.sql fuzzystrmatch--1.0--1.1.sql \ fuzzystrmatch--unpackaged--1.0.sql PGFILEDESC = "fuzzystrmatch - similarities and distance between strings" +REGRESS = fuzzystrmatch + ifdef USE_PGXS PG_CONFIG = pg_config PGXS := $(shell $(PG_CONFIG) --pgxs) diff --git a/contrib/fuzzystrmatch/expected/fuzzystrmatch.out b/contrib/fuzzystrmatch/expected/fuzzystrmatch.out new file mode 100644 index 0000000000..493c95cdfa --- /dev/null +++ b/contrib/fuzzystrmatch/expected/fuzzystrmatch.out @@ -0,0 +1,67 @@ +CREATE EXTENSION fuzzystrmatch; +SELECT soundex('hello world!'); + soundex +--------- + H464 +(1 row) + +SELECT soundex('Anne'), soundex('Ann'), difference('Anne', 'Ann'); + soundex | soundex | difference +---------+---------+------------ + A500 | A500 | 4 +(1 row) + +SELECT soundex('Anne'), soundex('Andrew'), difference('Anne', 'Andrew'); + soundex | soundex | difference +---------+---------+------------ + A500 | A536 | 2 +(1 row) + +SELECT soundex('Anne'), soundex('Margaret'), difference('Anne', 'Margaret'); + soundex | soundex | difference +---------+---------+------------ + A500 | M626 | 0 +(1 row) + +SELECT levenshtein('GUMBO', 'GAMBOL'); + levenshtein +------------- + 2 +(1 row) + +SELECT levenshtein('GUMBO', 'GAMBOL', 2, 1, 1); + levenshtein +------------- + 3 +(1 row) + +SELECT levenshtein_less_equal('extensive', 'exhaustive', 2); + levenshtein_less_equal +------------------------ + 3 +(1 row) + +SELECT levenshtein_less_equal('extensive', 'exhaustive', 4); + levenshtein_less_equal +------------------------ + 4 +(1 row) + +SELECT metaphone('GUMBO', 4); + metaphone +----------- + KM +(1 row) + +SELECT dmetaphone('gumbo'); + dmetaphone +------------ + KMP +(1 row) + +SELECT 
dmetaphone_alt('gumbo'); + dmetaphone_alt +---------------- + KMP +(1 row) + diff --git a/contrib/fuzzystrmatch/sql/fuzzystrmatch.sql b/contrib/fuzzystrmatch/sql/fuzzystrmatch.sql new file mode 100644 index 0000000000..f05dc28ffb --- /dev/null +++ b/contrib/fuzzystrmatch/sql/fuzzystrmatch.sql @@ -0,0 +1,21 @@ +CREATE EXTENSION fuzzystrmatch; + + +SELECT soundex('hello world!'); + +SELECT soundex('Anne'), soundex('Ann'), difference('Anne', 'Ann'); +SELECT soundex('Anne'), soundex('Andrew'), difference('Anne', 'Andrew'); +SELECT soundex('Anne'), soundex('Margaret'), difference('Anne', 'Margaret'); + + +SELECT levenshtein('GUMBO', 'GAMBOL'); +SELECT levenshtein('GUMBO', 'GAMBOL', 2, 1, 1); +SELECT levenshtein_less_equal('extensive', 'exhaustive', 2); +SELECT levenshtein_less_equal('extensive', 'exhaustive', 4); + + +SELECT metaphone('GUMBO', 4); + + +SELECT dmetaphone('gumbo'); +SELECT dmetaphone_alt('gumbo'); diff --git a/doc/src/sgml/fuzzystrmatch.sgml b/doc/src/sgml/fuzzystrmatch.sgml index feb06861da..ff5bc08fea 100644 --- a/doc/src/sgml/fuzzystrmatch.sgml +++ b/doc/src/sgml/fuzzystrmatch.sgml @@ -133,19 +133,19 @@ test=# SELECT levenshtein('GUMBO', 'GAMBOL'); 2 (1 row) -test=# SELECT levenshtein('GUMBO', 'GAMBOL', 2,1,1); +test=# SELECT levenshtein('GUMBO', 'GAMBOL', 2, 1, 1); levenshtein ------------- 3 (1 row) -test=# SELECT levenshtein_less_equal('extensive', 'exhaustive',2); +test=# SELECT levenshtein_less_equal('extensive', 'exhaustive', 2); levenshtein_less_equal ------------------------ 3 (1 row) -test=# SELECT levenshtein_less_equal('extensive', 'exhaustive',4); +test=# SELECT levenshtein_less_equal('extensive', 'exhaustive', 4); levenshtein_less_equal ------------------------ 4 @@ -227,7 +227,7 @@ dmetaphone_alt(text source) returns text -test=# select dmetaphone('gumbo'); +test=# SELECT dmetaphone('gumbo'); dmetaphone ------------ KMP From 4cb89d830626d009ed6a4482bed3a141c5039a7c Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 11 Aug 2017 21:04:04 -0400 Subject: [PATCH 0177/1087] lo: Add test suite Reviewed-by: David Steele --- contrib/lo/.gitignore | 4 ++++ contrib/lo/Makefile | 2 ++ contrib/lo/expected/lo.out | 42 ++++++++++++++++++++++++++++++++++++++ contrib/lo/sql/lo.sql | 25 +++++++++++++++++++++++ doc/src/sgml/lo.sgml | 2 +- 5 files changed, 74 insertions(+), 1 deletion(-) create mode 100644 contrib/lo/.gitignore create mode 100644 contrib/lo/expected/lo.out create mode 100644 contrib/lo/sql/lo.sql diff --git a/contrib/lo/.gitignore b/contrib/lo/.gitignore new file mode 100644 index 0000000000..5dcb3ff972 --- /dev/null +++ b/contrib/lo/.gitignore @@ -0,0 +1,4 @@ +# Generated subdirectories +/log/ +/results/ +/tmp_check/ diff --git a/contrib/lo/Makefile b/contrib/lo/Makefile index 71f0cb0d24..bd4fd6b72d 100644 --- a/contrib/lo/Makefile +++ b/contrib/lo/Makefile @@ -6,6 +6,8 @@ EXTENSION = lo DATA = lo--1.1.sql lo--1.0--1.1.sql lo--unpackaged--1.0.sql PGFILEDESC = "lo - management for large objects" +REGRESS = lo + ifdef USE_PGXS PG_CONFIG = pg_config PGXS := $(shell $(PG_CONFIG) --pgxs) diff --git a/contrib/lo/expected/lo.out b/contrib/lo/expected/lo.out new file mode 100644 index 0000000000..f7104aee3f --- /dev/null +++ b/contrib/lo/expected/lo.out @@ -0,0 +1,42 @@ +CREATE EXTENSION lo; +CREATE TABLE image (title text, raster lo); +CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image + FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster); +SELECT lo_create(43213); + lo_create +----------- + 43213 +(1 row) + +SELECT lo_create(43214); + lo_create +----------- + 
43214 +(1 row) + +INSERT INTO image (title, raster) VALUES ('beautiful image', 43213); +SELECT lo_get(43213); + lo_get +-------- + \x +(1 row) + +SELECT lo_get(43214); + lo_get +-------- + \x +(1 row) + +UPDATE image SET raster = 43214 WHERE title = 'beautiful image'; +SELECT lo_get(43213); +ERROR: large object 43213 does not exist +SELECT lo_get(43214); + lo_get +-------- + \x +(1 row) + +DELETE FROM image; +SELECT lo_get(43214); +ERROR: large object 43214 does not exist +DROP TABLE image; diff --git a/contrib/lo/sql/lo.sql b/contrib/lo/sql/lo.sql new file mode 100644 index 0000000000..34ba6f00ec --- /dev/null +++ b/contrib/lo/sql/lo.sql @@ -0,0 +1,25 @@ +CREATE EXTENSION lo; + +CREATE TABLE image (title text, raster lo); + +CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image + FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster); + +SELECT lo_create(43213); +SELECT lo_create(43214); + +INSERT INTO image (title, raster) VALUES ('beautiful image', 43213); + +SELECT lo_get(43213); +SELECT lo_get(43214); + +UPDATE image SET raster = 43214 WHERE title = 'beautiful image'; + +SELECT lo_get(43213); +SELECT lo_get(43214); + +DELETE FROM image; + +SELECT lo_get(43214); + +DROP TABLE image; diff --git a/doc/src/sgml/lo.sgml b/doc/src/sgml/lo.sgml index cd4ed6030b..9c318f1c98 100644 --- a/doc/src/sgml/lo.sgml +++ b/doc/src/sgml/lo.sgml @@ -67,7 +67,7 @@ -CREATE TABLE image (title TEXT, raster lo); +CREATE TABLE image (title text, raster lo); CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster); From 8423bf4f25ecd7afdd1d89adfbf29ea28992678f Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 11 Aug 2017 21:04:04 -0400 Subject: [PATCH 0178/1087] chkpass: Add test suite Reviewed-by: David Steele --- contrib/chkpass/.gitignore | 4 ++++ contrib/chkpass/Makefile | 2 ++ contrib/chkpass/expected/chkpass.out | 18 ++++++++++++++++++ contrib/chkpass/sql/chkpass.sql | 7 +++++++ 4 files changed, 31 insertions(+) create mode 100644 contrib/chkpass/.gitignore create mode 100644 contrib/chkpass/expected/chkpass.out create mode 100644 contrib/chkpass/sql/chkpass.sql diff --git a/contrib/chkpass/.gitignore b/contrib/chkpass/.gitignore new file mode 100644 index 0000000000..5dcb3ff972 --- /dev/null +++ b/contrib/chkpass/.gitignore @@ -0,0 +1,4 @@ +# Generated subdirectories +/log/ +/results/ +/tmp_check/ diff --git a/contrib/chkpass/Makefile b/contrib/chkpass/Makefile index a2599ea239..dbecc3360b 100644 --- a/contrib/chkpass/Makefile +++ b/contrib/chkpass/Makefile @@ -9,6 +9,8 @@ PGFILEDESC = "chkpass - encrypted password data type" SHLIB_LINK = $(filter -lcrypt, $(LIBS)) +REGRESS = chkpass + ifdef USE_PGXS PG_CONFIG = pg_config PGXS := $(shell $(PG_CONFIG) --pgxs) diff --git a/contrib/chkpass/expected/chkpass.out b/contrib/chkpass/expected/chkpass.out new file mode 100644 index 0000000000..b53557bf2a --- /dev/null +++ b/contrib/chkpass/expected/chkpass.out @@ -0,0 +1,18 @@ +CREATE EXTENSION chkpass; +WARNING: type input function chkpass_in should not be volatile +CREATE TABLE test (i int, p chkpass); +INSERT INTO test VALUES (1, 'hello'), (2, 'goodbye'); +SELECT i, p = 'hello' AS "hello?" FROM test; + i | hello? +---+-------- + 1 | t + 2 | f +(2 rows) + +SELECT i, p <> 'hello' AS "!hello?" FROM test; + i | !hello? 
+---+--------- + 1 | f + 2 | t +(2 rows) + diff --git a/contrib/chkpass/sql/chkpass.sql b/contrib/chkpass/sql/chkpass.sql new file mode 100644 index 0000000000..595683e249 --- /dev/null +++ b/contrib/chkpass/sql/chkpass.sql @@ -0,0 +1,7 @@ +CREATE EXTENSION chkpass; + +CREATE TABLE test (i int, p chkpass); +INSERT INTO test VALUES (1, 'hello'), (2, 'goodbye'); + +SELECT i, p = 'hello' AS "hello?" FROM test; +SELECT i, p <> 'hello' AS "!hello?" FROM test; From af7211e92dc2bba66f90de9e5bea6ae5fa914c61 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 11 Aug 2017 21:04:04 -0400 Subject: [PATCH 0179/1087] passwordcheck: Add test suite Also improve one error message. Reviewed-by: David Steele --- contrib/passwordcheck/.gitignore | 4 ++++ contrib/passwordcheck/Makefile | 5 +++++ .../passwordcheck/expected/passwordcheck.out | 18 ++++++++++++++++ contrib/passwordcheck/passwordcheck.c | 2 +- contrib/passwordcheck/passwordcheck.conf | 1 + contrib/passwordcheck/sql/passwordcheck.sql | 21 +++++++++++++++++++ 6 files changed, 50 insertions(+), 1 deletion(-) create mode 100644 contrib/passwordcheck/.gitignore create mode 100644 contrib/passwordcheck/expected/passwordcheck.out create mode 100644 contrib/passwordcheck/passwordcheck.conf create mode 100644 contrib/passwordcheck/sql/passwordcheck.sql diff --git a/contrib/passwordcheck/.gitignore b/contrib/passwordcheck/.gitignore new file mode 100644 index 0000000000..5dcb3ff972 --- /dev/null +++ b/contrib/passwordcheck/.gitignore @@ -0,0 +1,4 @@ +# Generated subdirectories +/log/ +/results/ +/tmp_check/ diff --git a/contrib/passwordcheck/Makefile b/contrib/passwordcheck/Makefile index 4652aeb3d7..7edc968b90 100644 --- a/contrib/passwordcheck/Makefile +++ b/contrib/passwordcheck/Makefile @@ -8,6 +8,11 @@ PGFILEDESC = "passwordcheck - strengthen user password checks" # PG_CPPFLAGS = -DUSE_CRACKLIB '-DCRACKLIB_DICTPATH="/usr/lib/cracklib_dict"' # SHLIB_LINK = -lcrack +REGRESS_OPTS = --temp-config $(srcdir)/passwordcheck.conf +REGRESS = passwordcheck +# disabled because these tests require setting shared_preload_libraries +NO_INSTALLCHECK = 1 + ifdef USE_PGXS PG_CONFIG = pg_config PGXS := $(shell $(PG_CONFIG) --pgxs) diff --git a/contrib/passwordcheck/expected/passwordcheck.out b/contrib/passwordcheck/expected/passwordcheck.out new file mode 100644 index 0000000000..b3515df3e8 --- /dev/null +++ b/contrib/passwordcheck/expected/passwordcheck.out @@ -0,0 +1,18 @@ +CREATE USER regress_user1; +-- ok +ALTER USER regress_user1 PASSWORD 'a_nice_long_password'; +-- error: too short +ALTER USER regress_user1 PASSWORD 'tooshrt'; +ERROR: password is too short +-- error: contains user name +ALTER USER regress_user1 PASSWORD 'xyzregress_user1'; +ERROR: password must not contain user name +-- error: contains only letters +ALTER USER regress_user1 PASSWORD 'alessnicelongpassword'; +ERROR: password must contain both letters and nonletters +-- encrypted ok (password is "secret") +ALTER USER regress_user1 PASSWORD 'md51a44d829a20a23eac686d9f0d258af13'; +-- error: password is user name +ALTER USER regress_user1 PASSWORD 'md5e589150ae7d28f93333afae92b36ef48'; +ERROR: password must not equal user name +DROP USER regress_user1; diff --git a/contrib/passwordcheck/passwordcheck.c b/contrib/passwordcheck/passwordcheck.c index b80fd458ad..64d43462f0 100644 --- a/contrib/passwordcheck/passwordcheck.c +++ b/contrib/passwordcheck/passwordcheck.c @@ -70,7 +70,7 @@ check_password(const char *username, if (plain_crypt_verify(username, shadow_pass, username, &logdetail) == STATUS_OK) 
ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("password must not contain user name"))); + errmsg("password must not equal user name"))); } else { diff --git a/contrib/passwordcheck/passwordcheck.conf b/contrib/passwordcheck/passwordcheck.conf new file mode 100644 index 0000000000..f6604f3d6b --- /dev/null +++ b/contrib/passwordcheck/passwordcheck.conf @@ -0,0 +1 @@ +shared_preload_libraries = 'passwordcheck' diff --git a/contrib/passwordcheck/sql/passwordcheck.sql b/contrib/passwordcheck/sql/passwordcheck.sql new file mode 100644 index 0000000000..59c84f522e --- /dev/null +++ b/contrib/passwordcheck/sql/passwordcheck.sql @@ -0,0 +1,21 @@ +CREATE USER regress_user1; + +-- ok +ALTER USER regress_user1 PASSWORD 'a_nice_long_password'; + +-- error: too short +ALTER USER regress_user1 PASSWORD 'tooshrt'; + +-- error: contains user name +ALTER USER regress_user1 PASSWORD 'xyzregress_user1'; + +-- error: contains only letters +ALTER USER regress_user1 PASSWORD 'alessnicelongpassword'; + +-- encrypted ok (password is "secret") +ALTER USER regress_user1 PASSWORD 'md51a44d829a20a23eac686d9f0d258af13'; + +-- error: password is user name +ALTER USER regress_user1 PASSWORD 'md5e589150ae7d28f93333afae92b36ef48'; + +DROP USER regress_user1; From 98470fdfa72b78ec49dea9a25e658876e6e51989 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 11 Aug 2017 21:04:04 -0400 Subject: [PATCH 0180/1087] pg_archivecleanup: Add test suite Reviewed-by: David Steele --- src/bin/pg_archivecleanup/.gitignore | 2 + src/bin/pg_archivecleanup/Makefile | 7 ++ .../t/010_pg_archivecleanup.pl | 81 +++++++++++++++++++ 3 files changed, 90 insertions(+) create mode 100644 src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl diff --git a/src/bin/pg_archivecleanup/.gitignore b/src/bin/pg_archivecleanup/.gitignore index 804089070d..bd05d00156 100644 --- a/src/bin/pg_archivecleanup/.gitignore +++ b/src/bin/pg_archivecleanup/.gitignore @@ -1 +1,3 @@ /pg_archivecleanup + +/tmp_check/ diff --git a/src/bin/pg_archivecleanup/Makefile b/src/bin/pg_archivecleanup/Makefile index 5bda78490c..c5bf99db0f 100644 --- a/src/bin/pg_archivecleanup/Makefile +++ b/src/bin/pg_archivecleanup/Makefile @@ -25,3 +25,10 @@ uninstall: clean distclean maintainer-clean: rm -f pg_archivecleanup$(X) $(OBJS) + rm -rf tmp_check + +check: + $(prove_check) + +installcheck: + $(prove_installcheck) diff --git a/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl b/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl new file mode 100644 index 0000000000..1d3a1e4fb9 --- /dev/null +++ b/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl @@ -0,0 +1,81 @@ +use strict; +use warnings; +use TestLib; +use Test::More tests => 42; + +program_help_ok('pg_archivecleanup'); +program_version_ok('pg_archivecleanup'); +program_options_handling_ok('pg_archivecleanup'); + +my $tempdir = TestLib::tempdir; + +my @walfiles = ( + '00000001000000370000000C.gz', + '00000001000000370000000D', + '00000001000000370000000E', + '00000001000000370000000F.partial', +); + +sub create_files +{ + foreach my $fn (@walfiles, 'unrelated_file') + { + open my $file, '>', "$tempdir/$fn"; + print $file 'CONTENT'; + close $file; + } +} + +create_files(); + +command_fails_like(['pg_archivecleanup'], + qr/must specify archive location/, + 'fails if archive location is not specified'); + +command_fails_like(['pg_archivecleanup', $tempdir], + qr/must specify oldest kept WAL file/, + 'fails if oldest kept WAL file name is not specified'); + +command_fails_like(['pg_archivecleanup', 
'notexist', 'foo'], + qr/archive location .* does not exist/, + 'fails if archive location does not exist'); + +command_fails_like(['pg_archivecleanup', $tempdir, 'foo', 'bar'], + qr/too many command-line arguments/, + 'fails with too many command-line arguments'); + +command_fails_like(['pg_archivecleanup', $tempdir, 'foo'], + qr/invalid file name argument/, + 'fails with invalid restart file name'); + +{ + # like command_like but checking stderr + my $stderr; + my $result = IPC::Run::run ['pg_archivecleanup', '-d', '-n', $tempdir, $walfiles[2]], '2>', \$stderr; + ok($result, "pg_archivecleanup dry run: exit code 0"); + like($stderr, qr/$walfiles[1].*would be removed/, "pg_archivecleanup dry run: matches"); + foreach my $fn (@walfiles) + { + ok(-f "$tempdir/$fn", "$fn not removed"); + } +} + +sub run_check +{ + my ($suffix, $test_name) = @_; + + create_files(); + + command_ok(['pg_archivecleanup', '-x', '.gz', $tempdir, $walfiles[2] . $suffix], + "$test_name: runs"); + + ok(! -f "$tempdir/$walfiles[0]", "$test_name: first older WAL file was cleaned up"); + ok(! -f "$tempdir/$walfiles[1]", "$test_name: second older WAL file was cleaned up"); + ok(-f "$tempdir/$walfiles[2]", "$test_name: restartfile was not cleaned up"); + ok(-f "$tempdir/$walfiles[3]", "$test_name: newer WAL file was not cleaned up"); + ok(-f "$tempdir/unrelated_file", "$test_name: unrelated file was not cleaned up"); +} + +run_check('', 'pg_archivecleanup'); +run_check('.partial', 'pg_archivecleanup with .partial file'); +run_check('.00000020.backup', 'pg_archivecleanup with .backup file'); From 9b6cb4650bc6a56114000678c1944afdb95f8333 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 11 Aug 2017 21:04:04 -0400 Subject: [PATCH 0181/1087] isn: Fix debug code The ISN_DEBUG code did not compile. Fix that code, don't hide it behind an #ifdef, make it run when building with asserts, and make it error out instead of just logging if it fails. Reviewed-by: David Steele --- contrib/isn/isn.c | 41 +++++++++++++++++++++++------------------ 1 file changed, 23 insertions(+), 18 deletions(-) diff --git a/contrib/isn/isn.c b/contrib/isn/isn.c index 4d845b716f..0148f9549f 100644 --- a/contrib/isn/isn.c +++ b/contrib/isn/isn.c @@ -26,6 +26,12 @@ PG_MODULE_MAGIC; +#ifdef USE_ASSERT_CHECKING +#define ISN_DEBUG 1 +#else +#define ISN_DEBUG 0 +#endif + #define MAXEAN13LEN 18 enum isn_type @@ -36,7 +42,6 @@ enum isn_type static const char *const isn_names[] = {"EAN13/UPC/ISxN", "EAN13/UPC/ISxN", "EAN13", "ISBN", "ISMN", "ISSN", "UPC"}; static bool g_weak = false; -static bool g_initialized = false; /*********************************************************************** @@ -56,7 +61,7 @@ static bool g_initialized = false; /* * Check if the table and its index is correct (just for debugging) */ -#ifdef ISN_DEBUG +pg_attribute_unused() static bool check_table(const char *(*TABLE)[2], const unsigned TABLE_index[10][2]) { @@ -68,7 +73,6 @@ check_table(const char *(*TABLE)[2], const unsigned TABLE_index[10][2]) y = -1, i = 0, j, - cnt = 0, init = 0; if (TABLE == NULL || TABLE_index == NULL) @@ -131,7 +135,6 @@ check_table(const char *(*TABLE)[2], const unsigned TABLE_index[10][2]) elog(DEBUG1, "index %d is invalid", j); return false; } -#endif /* ISN_DEBUG */ /*---------------------------------------------------------- * Formatting and conversion routines. @@ -922,22 +925,24 @@ string2ean(const char *str, bool errorOK, ean13 *result, * Exported routines. 
*---------------------------------------------------------*/ +void _PG_init(void); + void -initialize(void) +_PG_init(void) { -#ifdef ISN_DEBUG - if (!check_table(EAN13, EAN13_index)) - elog(LOG, "EAN13 failed check"); - if (!check_table(ISBN, ISBN_index)) - elog(LOG, "ISBN failed check"); - if (!check_table(ISMN, ISMN_index)) - elog(LOG, "ISMN failed check"); - if (!check_table(ISSN, ISSN_index)) - elog(LOG, "ISSN failed check"); - if (!check_table(UPC, UPC_index)) - elog(LOG, "UPC failed check"); -#endif - g_initialized = true; + if (ISN_DEBUG) + { + if (!check_table(EAN13_range, EAN13_index)) + elog(ERROR, "EAN13 failed check"); + if (!check_table(ISBN_range, ISBN_index)) + elog(ERROR, "ISBN failed check"); + if (!check_table(ISMN_range, ISMN_index)) + elog(ERROR, "ISMN failed check"); + if (!check_table(ISSN_range, ISSN_index)) + elog(ERROR, "ISSN failed check"); + if (!check_table(UPC_range, UPC_index)) + elog(ERROR, "UPC failed check"); + } } /* isn_out From cc5f81366c36b3dd8f02bd9be1cf75b2cc8482bd Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 14 Sep 2017 19:59:21 -0700 Subject: [PATCH 0182/1087] Add support for coordinating record typmods among parallel workers. Tuples can have type RECORDOID and a typmod number that identifies a blessed TupleDesc in a backend-private cache. To support the sharing of such tuples through shared memory and temporary files, provide a typmod registry in shared memory. To achieve that, introduce per-session DSM segments, created on demand when a backend first runs a parallel query. The per-session DSM segment has a table-of-contents just like the per-query DSM segment, and initially the contents are a shared record typmod registry and a DSA area to provide the space it needs to grow. State relating to the current session is accessed via a Session object reached through global variable CurrentSession that may require significant redesign further down the road as we figure out what else needs to be shared or remodelled. Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com --- src/backend/access/common/Makefile | 2 +- src/backend/access/common/session.c | 208 +++++++++ src/backend/access/common/tupdesc.c | 16 + src/backend/access/transam/parallel.c | 44 +- src/backend/storage/lmgr/lwlock.c | 8 +- src/backend/utils/cache/typcache.c | 634 ++++++++++++++++++++++++-- src/backend/utils/init/postinit.c | 4 + src/include/access/session.h | 44 ++ src/include/access/tupdesc.h | 6 + src/include/storage/lwlock.h | 3 + src/include/utils/typcache.h | 10 + src/tools/pgindent/typedefs.list | 4 + 12 files changed, 946 insertions(+), 37 deletions(-) create mode 100644 src/backend/access/common/session.c create mode 100644 src/include/access/session.h diff --git a/src/backend/access/common/Makefile b/src/backend/access/common/Makefile index fb27944b89..f130b6e350 100644 --- a/src/backend/access/common/Makefile +++ b/src/backend/access/common/Makefile @@ -13,6 +13,6 @@ top_builddir = ../../../.. 
 include $(top_builddir)/src/Makefile.global
 
 OBJS = bufmask.o heaptuple.o indextuple.o printsimple.o printtup.o \
-	reloptions.o scankey.o tupconvert.o tupdesc.o
+	reloptions.o scankey.o session.o tupconvert.o tupdesc.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/access/common/session.c b/src/backend/access/common/session.c
new file mode 100644
index 0000000000..865999b063
--- /dev/null
+++ b/src/backend/access/common/session.c
@@ -0,0 +1,208 @@
+/*-------------------------------------------------------------------------
+ *
+ * session.c
+ *	  Encapsulation of user session.
+ *
+ * This is intended to contain data that needs to be shared between backends
+ * performing work for a client session. In particular such a session is
+ * shared between the leader and worker processes for parallel queries. At
+ * some later point it might also become useful infrastructure for separating
+ * backends from client connections, e.g. for the purpose of pooling.
+ *
+ * Currently this infrastructure is used to share:
+ * - the typmod registry for ephemeral row types, i.e. BlessTupleDesc etc.
+ *
+ * Portions Copyright (c) 2017, PostgreSQL Global Development Group
+ *
+ * src/backend/access/common/session.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "access/session.h"
+#include "storage/lwlock.h"
+#include "storage/shm_toc.h"
+#include "utils/memutils.h"
+#include "utils/typcache.h"
+
+/* Magic number for per-session DSM TOC. */
+#define SESSION_MAGIC 0xabb0fbc9
+
+/*
+ * We want to create a DSA area to store shared state that has the same
+ * lifetime as a session. So far, it's only used to hold the shared record
+ * type registry. We don't want it to have to create any DSM segments just
+ * yet in common cases, so we'll give it enough space to hold a very small
+ * SharedRecordTypmodRegistry.
+ */
+#define SESSION_DSA_SIZE 0x30000
+
+/*
+ * Magic numbers for state sharing in the per-session DSM area.
+ */
+#define SESSION_KEY_DSA UINT64CONST(0xFFFFFFFFFFFF0001)
+#define SESSION_KEY_RECORD_TYPMOD_REGISTRY UINT64CONST(0xFFFFFFFFFFFF0002)
+
+/* This backend's current session. */
+Session *CurrentSession = NULL;
+
+/*
+ * Set up CurrentSession to point to an empty Session object.
+ */
+void
+InitializeSession(void)
+{
+	CurrentSession = MemoryContextAllocZero(TopMemoryContext, sizeof(Session));
+}
+
+/*
+ * Initialize the per-session DSM segment if it isn't already initialized, and
+ * return its handle so that worker processes can attach to it.
+ *
+ * Unlike the per-context DSM segment, this segment and its contents are
+ * reused for future parallel queries.
+ *
+ * Return DSM_HANDLE_INVALID if a segment can't be allocated due to lack of
+ * resources.
+ */
+dsm_handle
+GetSessionDsmHandle(void)
+{
+	shm_toc_estimator estimator;
+	shm_toc    *toc;
+	dsm_segment *seg;
+	size_t		typmod_registry_size;
+	size_t		size;
+	void	   *dsa_space;
+	void	   *typmod_registry_space;
+	dsa_area   *dsa;
+	MemoryContext old_context;
+
+	/*
+	 * If we have already created a session-scope DSM segment in this backend,
+	 * return its handle. The same segment will be used for the rest of this
+	 * backend's lifetime.
+	 */
+	if (CurrentSession->segment != NULL)
+		return dsm_segment_handle(CurrentSession->segment);
+
+	/* Otherwise, prepare to set one up. */
+	old_context = MemoryContextSwitchTo(TopMemoryContext);
+	shm_toc_initialize_estimator(&estimator);
+
+	/* Estimate space for the per-session DSA area. */
+	shm_toc_estimate_keys(&estimator, 1);
+	shm_toc_estimate_chunk(&estimator, SESSION_DSA_SIZE);
+
+	/* Estimate space for the per-session record typmod registry. */
+	typmod_registry_size = SharedRecordTypmodRegistryEstimate();
+	shm_toc_estimate_keys(&estimator, 1);
+	shm_toc_estimate_chunk(&estimator, typmod_registry_size);
+
+	/* Set up segment and TOC. */
+	size = shm_toc_estimate(&estimator);
+	seg = dsm_create(size, DSM_CREATE_NULL_IF_MAXSEGMENTS);
+	if (seg == NULL)
+	{
+		MemoryContextSwitchTo(old_context);
+
+		return DSM_HANDLE_INVALID;
+	}
+	toc = shm_toc_create(SESSION_MAGIC,
+						 dsm_segment_address(seg),
+						 size);
+
+	/* Create per-session DSA area. */
+	dsa_space = shm_toc_allocate(toc, SESSION_DSA_SIZE);
+	dsa = dsa_create_in_place(dsa_space,
+							  SESSION_DSA_SIZE,
+							  LWTRANCHE_SESSION_DSA,
+							  seg);
+	shm_toc_insert(toc, SESSION_KEY_DSA, dsa_space);
+
+
+	/* Create session-scoped shared record typmod registry. */
+	typmod_registry_space = shm_toc_allocate(toc, typmod_registry_size);
+	SharedRecordTypmodRegistryInit((SharedRecordTypmodRegistry *)
+								   typmod_registry_space, seg, dsa);
+	shm_toc_insert(toc, SESSION_KEY_RECORD_TYPMOD_REGISTRY,
+				   typmod_registry_space);
+
+	/*
+	 * If we got this far, we can pin the shared memory so it stays mapped for
+	 * the rest of this backend's life. If we don't make it this far, cleanup
+	 * callbacks for anything we installed above (i.e. currently
+	 * SharedRecordTypmodRegistry) will run when the DSM segment is detached
+	 * by CurrentResourceOwner, so we aren't left with a broken CurrentSession.
+	 */
+	dsm_pin_mapping(seg);
+	dsa_pin_mapping(dsa);
+
+	/* Make segment and area available via CurrentSession. */
+	CurrentSession->segment = seg;
+	CurrentSession->area = dsa;
+
+	MemoryContextSwitchTo(old_context);
+
+	return dsm_segment_handle(seg);
+}
+
+/*
+ * Attach to a per-session DSM segment provided by a parallel leader.
+ */
+void
+AttachSession(dsm_handle handle)
+{
+	dsm_segment *seg;
+	shm_toc    *toc;
+	void	   *dsa_space;
+	void	   *typmod_registry_space;
+	dsa_area   *dsa;
+	MemoryContext old_context;
+
+	old_context = MemoryContextSwitchTo(TopMemoryContext);
+
+	/* Attach to the DSM segment. */
+	seg = dsm_attach(handle);
+	if (seg == NULL)
+		elog(ERROR, "could not attach to per-session DSM segment");
+	toc = shm_toc_attach(SESSION_MAGIC, dsm_segment_address(seg));
+
+	/* Attach to the DSA area. */
+	dsa_space = shm_toc_lookup(toc, SESSION_KEY_DSA, false);
+	dsa = dsa_attach_in_place(dsa_space, seg);
+
+	/* Make them available via the current session. */
+	CurrentSession->segment = seg;
+	CurrentSession->area = dsa;
+
+	/* Attach to the shared record typmod registry. */
+	typmod_registry_space =
+		shm_toc_lookup(toc, SESSION_KEY_RECORD_TYPMOD_REGISTRY, false);
+	SharedRecordTypmodRegistryAttach((SharedRecordTypmodRegistry *)
+									 typmod_registry_space);
+
+	/* Remain attached until end of backend or DetachSession(). */
+	dsm_pin_mapping(seg);
+	dsa_pin_mapping(dsa);
+
+	MemoryContextSwitchTo(old_context);
+}
+
+/*
+ * Detach from the current session DSM segment. It's not strictly necessary
+ * to do this explicitly since we'll detach automatically at backend exit, but
+ * if we ever reuse parallel workers it will become important for workers to
+ * detach from one session before attaching to another. Note that this runs
+ * detach hooks.
+ */
+void
+DetachSession(void)
+{
+	/* Runs detach hooks. 
*/ + dsm_detach(CurrentSession->segment); + CurrentSession->segment = NULL; + dsa_detach(CurrentSession->area); + CurrentSession->area = NULL; +} diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c index 4436c86361..9e37ca73a8 100644 --- a/src/backend/access/common/tupdesc.c +++ b/src/backend/access/common/tupdesc.c @@ -184,6 +184,22 @@ CreateTupleDescCopyConstr(TupleDesc tupdesc) return desc; } +/* + * TupleDescCopy + * Copy a tuple descriptor into caller-supplied memory. + * The memory may be shared memory mapped at any address, and must + * be sufficient to hold TupleDescSize(src) bytes. + * + * !!! Constraints and defaults are not copied !!! + */ +void +TupleDescCopy(TupleDesc dst, TupleDesc src) +{ + memcpy(dst, src, TupleDescSize(src)); + dst->constr = NULL; + dst->tdrefcount = -1; +} + /* * TupleDescCopyEntry * This function copies a single attribute structure from one tuple diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index ce1b907deb..13c8ba3b19 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -15,6 +15,7 @@ #include "postgres.h" #include "access/parallel.h" +#include "access/session.h" #include "access/xact.h" #include "access/xlog.h" #include "catalog/namespace.h" @@ -36,6 +37,7 @@ #include "utils/memutils.h" #include "utils/resowner.h" #include "utils/snapmgr.h" +#include "utils/typcache.h" /* @@ -51,8 +53,9 @@ #define PARALLEL_MAGIC 0x50477c7c /* - * Magic numbers for parallel state sharing. Higher-level code should use - * smaller values, leaving these very large ones for use by this module. + * Magic numbers for per-context parallel state sharing. Higher-level code + * should use smaller values, leaving these very large ones for use by this + * module. */ #define PARALLEL_KEY_FIXED UINT64CONST(0xFFFFFFFFFFFF0001) #define PARALLEL_KEY_ERROR_QUEUE UINT64CONST(0xFFFFFFFFFFFF0002) @@ -63,6 +66,7 @@ #define PARALLEL_KEY_ACTIVE_SNAPSHOT UINT64CONST(0xFFFFFFFFFFFF0007) #define PARALLEL_KEY_TRANSACTION_STATE UINT64CONST(0xFFFFFFFFFFFF0008) #define PARALLEL_KEY_ENTRYPOINT UINT64CONST(0xFFFFFFFFFFFF0009) +#define PARALLEL_KEY_SESSION_DSM UINT64CONST(0xFFFFFFFFFFFF000A) /* Fixed-size parallel state. */ typedef struct FixedParallelState @@ -197,6 +201,7 @@ InitializeParallelDSM(ParallelContext *pcxt) Size segsize = 0; int i; FixedParallelState *fps; + dsm_handle session_dsm_handle = DSM_HANDLE_INVALID; Snapshot transaction_snapshot = GetTransactionSnapshot(); Snapshot active_snapshot = GetActiveSnapshot(); @@ -211,6 +216,21 @@ InitializeParallelDSM(ParallelContext *pcxt) * Normally, the user will have requested at least one worker process, but * if by chance they have not, we can skip a bunch of things here. */ + if (pcxt->nworkers > 0) + { + /* Get (or create) the per-session DSM segment's handle. */ + session_dsm_handle = GetSessionDsmHandle(); + + /* + * If we weren't able to create a per-session DSM segment, then we can + * continue but we can't safely launch any workers because their + * record typmods would be incompatible so they couldn't exchange + * tuples. + */ + if (session_dsm_handle == DSM_HANDLE_INVALID) + pcxt->nworkers = 0; + } + if (pcxt->nworkers > 0) { /* Estimate space for various kinds of state sharing. 
*/ @@ -226,8 +246,9 @@ InitializeParallelDSM(ParallelContext *pcxt) shm_toc_estimate_chunk(&pcxt->estimator, asnaplen); tstatelen = EstimateTransactionStateSpace(); shm_toc_estimate_chunk(&pcxt->estimator, tstatelen); + shm_toc_estimate_chunk(&pcxt->estimator, sizeof(dsm_handle)); /* If you add more chunks here, you probably need to add keys. */ - shm_toc_estimate_keys(&pcxt->estimator, 6); + shm_toc_estimate_keys(&pcxt->estimator, 7); /* Estimate space need for error queues. */ StaticAssertStmt(BUFFERALIGN(PARALLEL_ERROR_QUEUE_SIZE) == @@ -295,6 +316,7 @@ InitializeParallelDSM(ParallelContext *pcxt) char *asnapspace; char *tstatespace; char *error_queue_space; + char *session_dsm_handle_space; char *entrypointstate; Size lnamelen; @@ -322,6 +344,13 @@ InitializeParallelDSM(ParallelContext *pcxt) SerializeSnapshot(active_snapshot, asnapspace); shm_toc_insert(pcxt->toc, PARALLEL_KEY_ACTIVE_SNAPSHOT, asnapspace); + /* Provide the handle for per-session segment. */ + session_dsm_handle_space = shm_toc_allocate(pcxt->toc, + sizeof(dsm_handle)); + *(dsm_handle *) session_dsm_handle_space = session_dsm_handle; + shm_toc_insert(pcxt->toc, PARALLEL_KEY_SESSION_DSM, + session_dsm_handle_space); + /* Serialize transaction state. */ tstatespace = shm_toc_allocate(pcxt->toc, tstatelen); SerializeTransactionState(tstatelen, tstatespace); @@ -938,6 +967,7 @@ ParallelWorkerMain(Datum main_arg) char *asnapspace; char *tstatespace; StringInfoData msgbuf; + char *session_dsm_handle_space; /* Set flag to indicate that we're initializing a parallel worker. */ InitializingParallelWorker = true; @@ -1064,6 +1094,11 @@ ParallelWorkerMain(Datum main_arg) combocidspace = shm_toc_lookup(toc, PARALLEL_KEY_COMBO_CID, false); RestoreComboCIDState(combocidspace); + /* Attach to the per-session DSM segment and contained objects. */ + session_dsm_handle_space = + shm_toc_lookup(toc, PARALLEL_KEY_SESSION_DSM, false); + AttachSession(*(dsm_handle *) session_dsm_handle_space); + /* Restore transaction snapshot. */ tsnapspace = shm_toc_lookup(toc, PARALLEL_KEY_TRANSACTION_SNAPSHOT, false); RestoreTransactionSnapshot(RestoreSnapshot(tsnapspace), @@ -1110,6 +1145,9 @@ ParallelWorkerMain(Datum main_arg) /* Shut down the parallel-worker transaction. */ EndParallelWorkerTransaction(); + /* Detach from the per-session DSM segment. */ + DetachSession(); + /* Report success. */ pq_putmessage('X', NULL, 0); } diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c index 82a1cf5150..f1060f9675 100644 --- a/src/backend/storage/lmgr/lwlock.c +++ b/src/backend/storage/lmgr/lwlock.c @@ -494,7 +494,7 @@ RegisterLWLockTranches(void) if (LWLockTrancheArray == NULL) { - LWLockTranchesAllocated = 64; + LWLockTranchesAllocated = 128; LWLockTrancheArray = (char **) MemoryContextAllocZero(TopMemoryContext, LWLockTranchesAllocated * sizeof(char *)); @@ -510,6 +510,12 @@ RegisterLWLockTranches(void) "predicate_lock_manager"); LWLockRegisterTranche(LWTRANCHE_PARALLEL_QUERY_DSA, "parallel_query_dsa"); + LWLockRegisterTranche(LWTRANCHE_SESSION_DSA, + "session_dsa"); + LWLockRegisterTranche(LWTRANCHE_SESSION_RECORD_TABLE, + "session_record_table"); + LWLockRegisterTranche(LWTRANCHE_SESSION_TYPMOD_TABLE, + "session_typmod_table"); LWLockRegisterTranche(LWTRANCHE_TBM, "tbm"); /* Register named tranches. 
*/
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 2e633f08c5..3be853a85a 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -46,6 +46,8 @@
 #include "access/heapam.h"
 #include "access/htup_details.h"
 #include "access/nbtree.h"
+#include "access/parallel.h"
+#include "access/session.h"
 #include "catalog/indexing.h"
 #include "catalog/pg_am.h"
 #include "catalog/pg_constraint.h"
@@ -55,7 +57,9 @@
 #include "catalog/pg_type.h"
 #include "commands/defrem.h"
 #include "executor/executor.h"
+#include "lib/dshash.h"
 #include "optimizer/planner.h"
+#include "storage/lwlock.h"
 #include "utils/builtins.h"
 #include "utils/catcache.h"
 #include "utils/fmgroids.h"
@@ -142,6 +146,117 @@ typedef struct RecordCacheEntry
 	TupleDesc	tupdesc;
 } RecordCacheEntry;
 
+/*
+ * To deal with non-anonymous record types that are exchanged by backends
+ * involved in a parallel query, we also need a shared version of the above.
+ */
+struct SharedRecordTypmodRegistry
+{
+	/* A hash table for finding a matching TupleDesc. */
+	dshash_table_handle record_table_handle;
+	/* A hash table for finding a TupleDesc by typmod. */
+	dshash_table_handle typmod_table_handle;
+	/* A source of new record typmod numbers. */
+	pg_atomic_uint32 next_typmod;
+};
+
+/*
+ * When using shared tuple descriptors as hash table keys we need a way to be
+ * able to search for an equal shared TupleDesc using a backend-local
+ * TupleDesc. So we use this type which can hold either, and hash and compare
+ * functions that know how to handle both.
+ */
+typedef struct SharedRecordTableKey
+{
+	union
+	{
+		TupleDesc	local_tupdesc;
+		dsa_pointer shared_tupdesc;
+	};
+	bool		shared;
+} SharedRecordTableKey;
+
+/*
+ * The shared version of RecordCacheEntry. This lets us look up a typmod
+ * using a TupleDesc which may be in local or shared memory.
+ */
+typedef struct SharedRecordTableEntry
+{
+	SharedRecordTableKey key;
+} SharedRecordTableEntry;
+
+/*
+ * An entry in SharedRecordTypmodRegistry's typmod table. This lets us look
+ * up a TupleDesc in shared memory using a typmod.
+ */
+typedef struct SharedTypmodTableEntry
+{
+	uint32		typmod;
+	dsa_pointer shared_tupdesc;
+} SharedTypmodTableEntry;
+
+/*
+ * A comparator function for SharedRecordTableKey.
+ */
+static int
+shared_record_table_compare(const void *a, const void *b, size_t size,
+							void *arg)
+{
+	dsa_area   *area = (dsa_area *) arg;
+	SharedRecordTableKey *k1 = (SharedRecordTableKey *) a;
+	SharedRecordTableKey *k2 = (SharedRecordTableKey *) b;
+	TupleDesc	t1;
+	TupleDesc	t2;
+
+	if (k1->shared)
+		t1 = (TupleDesc) dsa_get_address(area, k1->shared_tupdesc);
+	else
+		t1 = k1->local_tupdesc;
+
+	if (k2->shared)
+		t2 = (TupleDesc) dsa_get_address(area, k2->shared_tupdesc);
+	else
+		t2 = k2->local_tupdesc;
+
+	return equalTupleDescs(t1, t2) ? 0 : 1;
+}
+
+/*
+ * A hash function for SharedRecordTableKey.
+ */
+static uint32
+shared_record_table_hash(const void *a, size_t size, void *arg)
+{
+	dsa_area   *area = (dsa_area *) arg;
+	SharedRecordTableKey *k = (SharedRecordTableKey *) a;
+	TupleDesc	t;
+
+	if (k->shared)
+		t = (TupleDesc) dsa_get_address(area, k->shared_tupdesc);
+	else
+		t = k->local_tupdesc;
+
+	return hashTupleDesc(t);
+}
+
+/* Parameters for SharedRecordTypmodRegistry's TupleDesc table. */
+static const dshash_parameters srtr_record_table_params = {
+	sizeof(SharedRecordTableKey),	/* unused */
+	sizeof(SharedRecordTableEntry),
+	shared_record_table_compare,
+	shared_record_table_hash,
+	LWTRANCHE_SESSION_RECORD_TABLE
+};
+
+/* Parameters for SharedRecordTypmodRegistry's typmod hash table. */
+static const dshash_parameters srtr_typmod_table_params = {
+	sizeof(uint32),
+	sizeof(SharedTypmodTableEntry),
+	dshash_memcmp,
+	dshash_memhash,
+	LWTRANCHE_SESSION_TYPMOD_TABLE
+};
+
 static HTAB *RecordCacheHash = NULL;
 
 static TupleDesc *RecordCacheArray = NULL;
@@ -168,6 +283,13 @@ static void TypeCacheConstrCallback(Datum arg, int cacheid, uint32 hashvalue);
 static void load_enum_cache_data(TypeCacheEntry *tcache);
 static EnumItem *find_enumitem(TypeCacheEnumData *enumdata, Oid arg);
 static int	enum_oid_cmp(const void *left, const void *right);
+static void shared_record_typmod_registry_detach(dsm_segment *segment,
+									 Datum datum);
+static void shared_record_typmod_registry_worker_detach(dsm_segment *segment,
+											Datum datum);
+static TupleDesc find_or_make_matching_shared_tupledesc(TupleDesc tupdesc);
+static dsa_pointer share_tupledesc(dsa_area *area, TupleDesc tupdesc,
+				uint32 typmod);
 
 /*
@@ -377,8 +499,8 @@ lookup_type_cache(Oid type_id, int flags)
 
 		/*
 		 * Reset info about hash functions whenever we pick up new info about
-		 * equality operator. This is so we can ensure that the hash functions
-		 * match the operator.
+		 * equality operator. This is so we can ensure that the hash
+		 * functions match the operator.
 		 */
 		typentry->flags &= ~(TCFLAGS_CHECKED_HASH_PROC);
 		typentry->flags &= ~(TCFLAGS_CHECKED_HASH_EXTENDED_PROC);
@@ -1243,6 +1365,33 @@ cache_record_field_properties(TypeCacheEntry *typentry)
 	typentry->flags |= TCFLAGS_CHECKED_FIELD_PROPERTIES;
 }
 
+/*
+ * Make sure that RecordCacheArray is large enough to store 'typmod'.
+ */
+static void
+ensure_record_cache_typmod_slot_exists(int32 typmod)
+{
+	if (RecordCacheArray == NULL)
+	{
+		RecordCacheArray = (TupleDesc *)
+			MemoryContextAllocZero(CacheMemoryContext, 64 * sizeof(TupleDesc));
+		RecordCacheArrayLen = 64;
+	}
+
+	if (typmod >= RecordCacheArrayLen)
+	{
+		int32		newlen = RecordCacheArrayLen * 2;
+
+		while (typmod >= newlen)
+			newlen *= 2;
+
+		RecordCacheArray = (TupleDesc *) repalloc(RecordCacheArray,
+												  newlen * sizeof(TupleDesc));
+		memset(RecordCacheArray + RecordCacheArrayLen, 0,
+			   (newlen - RecordCacheArrayLen) * sizeof(TupleDesc *));
+		RecordCacheArrayLen = newlen;
+	}
+}
 
 /*
  * lookup_rowtype_tupdesc_internal --- internal routine to lookup a rowtype
@@ -1273,15 +1422,53 @@ lookup_rowtype_tupdesc_internal(Oid type_id, int32 typmod, bool noError)
 	/*
 	 * It's a transient record type, so look in our record-type table.
 	 */
-	if (typmod < 0 || typmod >= NextRecordTypmod)
+	if (typmod >= 0)
 	{
-		if (!noError)
-			ereport(ERROR,
-					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-					 errmsg("record type has not been registered")));
-		return NULL;
+		/* Is it already in our local cache? */
+		if (typmod < RecordCacheArrayLen &&
+			RecordCacheArray[typmod] != NULL)
+			return RecordCacheArray[typmod];
+
+		/* Are we attached to a shared record typmod registry? */
+		if (CurrentSession->shared_typmod_registry != NULL)
+		{
+			SharedTypmodTableEntry *entry;
+
+			/* Try to find it in the shared typmod index. */
+			entry = dshash_find(CurrentSession->shared_typmod_table,
+								&typmod, false);
+			if (entry != NULL)
+			{
+				TupleDesc	tupdesc;
+
+				tupdesc = (TupleDesc)
+					dsa_get_address(CurrentSession->area,
+									entry->shared_tupdesc);
+				Assert(typmod == tupdesc->tdtypmod);
+
+				/* We may need to extend the local RecordCacheArray. */
+				ensure_record_cache_typmod_slot_exists(typmod);
+
+				/*
+				 * Our local array can now point directly to the TupleDesc
+				 * in shared memory.
+				 */
+				RecordCacheArray[typmod] = tupdesc;
+				Assert(tupdesc->tdrefcount == -1);
+
+				dshash_release_lock(CurrentSession->shared_typmod_table,
+									entry);
+
+				return RecordCacheArray[typmod];
+			}
+		}
+	}
-	return RecordCacheArray[typmod];
+
+	if (!noError)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("record type has not been registered")));
+	return NULL;
 	}
 }
@@ -1303,7 +1490,7 @@ lookup_rowtype_tupdesc(Oid type_id, int32 typmod)
 	TupleDesc	tupDesc;
 
 	tupDesc = lookup_rowtype_tupdesc_internal(type_id, typmod, false);
-	IncrTupleDescRefCount(tupDesc);
+	PinTupleDesc(tupDesc);
 	return tupDesc;
 }
@@ -1321,7 +1508,7 @@ lookup_rowtype_tupdesc_noerror(Oid type_id, int32 typmod, bool noError)
 
 	tupDesc = lookup_rowtype_tupdesc_internal(type_id, typmod, noError);
 	if (tupDesc != NULL)
-		IncrTupleDescRefCount(tupDesc);
+		PinTupleDesc(tupDesc);
 	return tupDesc;
 }
@@ -1376,7 +1563,6 @@ assign_record_type_typmod(TupleDesc tupDesc)
 	RecordCacheEntry *recentry;
 	TupleDesc	entDesc;
 	bool		found;
-	int32		newtypmod;
 	MemoryContext oldcxt;
 
 	Assert(tupDesc->tdtypeid == RECORDOID);
@@ -1414,34 +1600,208 @@ assign_record_type_typmod(TupleDesc tupDesc)
 	recentry->tupdesc = NULL;
 	oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
 
-	if (RecordCacheArray == NULL)
+	/* Look in the SharedRecordTypmodRegistry, if attached */
+	entDesc = find_or_make_matching_shared_tupledesc(tupDesc);
+	if (entDesc == NULL)
 	{
-		RecordCacheArray = (TupleDesc *) palloc(64 * sizeof(TupleDesc));
-		RecordCacheArrayLen = 64;
+		/* Reference-counted local cache only. */
+		entDesc = CreateTupleDescCopy(tupDesc);
+		entDesc->tdrefcount = 1;
+		entDesc->tdtypmod = NextRecordTypmod++;
 	}
-	else if (NextRecordTypmod >= RecordCacheArrayLen)
+	ensure_record_cache_typmod_slot_exists(entDesc->tdtypmod);
+	RecordCacheArray[entDesc->tdtypmod] = entDesc;
+	recentry->tupdesc = entDesc;
+
+	/* Update the caller's tuple descriptor. */
+	tupDesc->tdtypmod = entDesc->tdtypmod;
+
+	MemoryContextSwitchTo(oldcxt);
+}
+
+/*
+ * Return the amount of shmem required to hold a SharedRecordTypmodRegistry.
+ * This exists only to avoid exposing private innards of
+ * SharedRecordTypmodRegistry in a header.
+ */
+size_t
+SharedRecordTypmodRegistryEstimate(void)
+{
+	return sizeof(SharedRecordTypmodRegistry);
+}
+
+/*
+ * Initialize 'registry' in a pre-existing shared memory region, which must be
+ * maximally aligned and have space for SharedRecordTypmodRegistryEstimate()
+ * bytes.
+ *
+ * 'area' will be used to allocate shared memory space as required for the
+ * typmod registration. The current process, expected to be a leader process
+ * in a parallel query, will be attached automatically and its current record
+ * types will be loaded into *registry. While attached, all calls to
+ * assign_record_type_typmod will use the shared registry. Worker backends
+ * will need to attach explicitly.
+ *
+ * Note that this function takes 'area' and 'segment' as arguments rather than
+ * accessing them via CurrentSession, because they aren't installed there
+ * until after this function runs.
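+ *
+ * GetSessionDsmHandle() in session.c calls this, and then installs 'segment'
+ * and 'area' in CurrentSession.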
+ */
+void
+SharedRecordTypmodRegistryInit(SharedRecordTypmodRegistry *registry,
+							   dsm_segment *segment,
+							   dsa_area *area)
+{
+	MemoryContext old_context;
+	dshash_table *record_table;
+	dshash_table *typmod_table;
+	int32		typmod;
+
+	Assert(!IsParallelWorker());
+
+	/* We can't already be attached to a shared registry. */
+	Assert(CurrentSession->shared_typmod_registry == NULL);
+	Assert(CurrentSession->shared_record_table == NULL);
+	Assert(CurrentSession->shared_typmod_table == NULL);
+
+	old_context = MemoryContextSwitchTo(TopMemoryContext);
+
+	/* Create the hash table of tuple descriptors indexed by themselves. */
+	record_table = dshash_create(area, &srtr_record_table_params, area);
+
+	/* Create the hash table of tuple descriptors indexed by typmod. */
+	typmod_table = dshash_create(area, &srtr_typmod_table_params, NULL);
+
+	MemoryContextSwitchTo(old_context);
+
+	/* Initialize the SharedRecordTypmodRegistry. */
+	registry->record_table_handle = dshash_get_hash_table_handle(record_table);
+	registry->typmod_table_handle = dshash_get_hash_table_handle(typmod_table);
+	pg_atomic_init_u32(&registry->next_typmod, NextRecordTypmod);
+
+	/*
+	 * Copy all entries from this backend's private registry into the shared
+	 * registry.
+	 */
+	for (typmod = 0; typmod < NextRecordTypmod; ++typmod)
 	{
-		int32		newlen = RecordCacheArrayLen * 2;
+		SharedTypmodTableEntry *typmod_table_entry;
+		SharedRecordTableEntry *record_table_entry;
+		SharedRecordTableKey record_table_key;
+		dsa_pointer shared_dp;
+		TupleDesc	tupdesc;
+		bool		found;
 
-		RecordCacheArray = (TupleDesc *) repalloc(RecordCacheArray,
-												  newlen * sizeof(TupleDesc));
-		RecordCacheArrayLen = newlen;
+		tupdesc = RecordCacheArray[typmod];
+		if (tupdesc == NULL)
+			continue;
+
+		/* Copy the TupleDesc into shared memory. */
+		shared_dp = share_tupledesc(area, tupdesc, typmod);
+
+		/* Insert into the typmod table. */
+		typmod_table_entry = dshash_find_or_insert(typmod_table,
+												   &tupdesc->tdtypmod,
+												   &found);
+		if (found)
+			elog(ERROR, "cannot create duplicate shared record typmod");
+		typmod_table_entry->typmod = tupdesc->tdtypmod;
+		typmod_table_entry->shared_tupdesc = shared_dp;
+		dshash_release_lock(typmod_table, typmod_table_entry);
+
+		/* Insert into the record table. */
+		record_table_key.shared = false;
+		record_table_key.local_tupdesc = tupdesc;
+		record_table_entry = dshash_find_or_insert(record_table,
+												   &record_table_key,
+												   &found);
+		if (!found)
+		{
+			record_table_entry->key.shared = true;
+			record_table_entry->key.shared_tupdesc = shared_dp;
+		}
+		dshash_release_lock(record_table, record_table_entry);
 	}
 
-	/* if fail in subrs, no damage except possibly some wasted memory... */
-	entDesc = CreateTupleDescCopy(tupDesc);
-	recentry->tupdesc = entDesc;
-	/* mark it as a reference-counted tupdesc */
-	entDesc->tdrefcount = 1;
-	/* now it's safe to advance NextRecordTypmod */
-	newtypmod = NextRecordTypmod++;
-	entDesc->tdtypmod = newtypmod;
-	RecordCacheArray[newtypmod] = entDesc;
+	/*
+	 * Set up the global state that will tell assign_record_type_typmod and
+	 * lookup_rowtype_tupdesc_internal about the shared registry.
+	 */
+	CurrentSession->shared_record_table = record_table;
+	CurrentSession->shared_typmod_table = typmod_table;
+	CurrentSession->shared_typmod_registry = registry;
 
-	/* report to caller as well */
-	tupDesc->tdtypmod = newtypmod;
+	/*
+	 * We install a detach hook in the leader, but only to handle cleanup on
+	 * failure during GetSessionDsmHandle().  Once GetSessionDsmHandle() pins
+	 * the memory, the leader process will use a shared registry until it
+	 * exits.
+	 */
+	on_dsm_detach(segment, shared_record_typmod_registry_detach, (Datum) 0);
+}
 
-	MemoryContextSwitchTo(oldcxt);
+/*
+ * Attach to 'registry', which must have been initialized already by another
+ * backend.  Future calls to assign_record_type_typmod and
+ * lookup_rowtype_tupdesc_internal will use the shared registry until the
+ * current session is detached.
+ */
+void
+SharedRecordTypmodRegistryAttach(SharedRecordTypmodRegistry *registry)
+{
+	MemoryContext old_context;
+	dshash_table *record_table;
+	dshash_table *typmod_table;
+
+	Assert(IsParallelWorker());
+
+	/* We can't already be attached to a shared registry. */
+	Assert(CurrentSession != NULL);
+	Assert(CurrentSession->segment != NULL);
+	Assert(CurrentSession->area != NULL);
+	Assert(CurrentSession->shared_typmod_registry == NULL);
+	Assert(CurrentSession->shared_record_table == NULL);
+	Assert(CurrentSession->shared_typmod_table == NULL);
+
+	/*
+	 * We can't already have typmods in our local cache, because they'd clash
+	 * with those imported by SharedRecordTypmodRegistryInit.  This should be
+	 * a freshly started parallel worker.  If we ever support worker
+	 * recycling, a worker would need to zap its local cache in between
+	 * servicing different queries, in order to be able to call this and
+	 * synchronize typmods with a new leader; see
+	 * shared_record_typmod_registry_detach().
+	 */
+	Assert(NextRecordTypmod == 0);
+
+	old_context = MemoryContextSwitchTo(TopMemoryContext);
+
+	/* Attach to the two hash tables. */
+	record_table = dshash_attach(CurrentSession->area,
+								 &srtr_record_table_params,
+								 registry->record_table_handle,
+								 CurrentSession->area);
+	typmod_table = dshash_attach(CurrentSession->area,
+								 &srtr_typmod_table_params,
+								 registry->typmod_table_handle,
+								 NULL);
+
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * We install a different detach callback that performs a more complete
+	 * reset of backend local state.
+	 */
+	on_dsm_detach(CurrentSession->segment,
+				  shared_record_typmod_registry_worker_detach,
+				  PointerGetDatum(registry));
+
+	/*
+	 * Set up the session state that will tell assign_record_type_typmod and
+	 * lookup_rowtype_tupdesc_internal about the shared registry.
+	 */
+	CurrentSession->shared_typmod_registry = registry;
+	CurrentSession->shared_record_table = record_table;
+	CurrentSession->shared_typmod_table = typmod_table;
 }
 
 /*
@@ -1858,3 +2218,213 @@ enum_oid_cmp(const void *left, const void *right)
 	else
 		return 0;
 }
+
+/*
+ * Copy 'tupdesc' into newly allocated shared memory in 'area', set its typmod
+ * to the given value and return a dsa_pointer.
+ */
+static dsa_pointer
+share_tupledesc(dsa_area *area, TupleDesc tupdesc, uint32 typmod)
+{
+	dsa_pointer shared_dp;
+	TupleDesc	shared;
+
+	shared_dp = dsa_allocate(area, TupleDescSize(tupdesc));
+	shared = (TupleDesc) dsa_get_address(area, shared_dp);
+	TupleDescCopy(shared, tupdesc);
+	shared->tdtypmod = typmod;
+
+	return shared_dp;
+}
+
+/*
+ * If we are attached to a SharedRecordTypmodRegistry, use it to find or
+ * create a shared TupleDesc that matches 'tupdesc'.  Otherwise return NULL.
+ * Tuple descriptors returned by this function are not reference counted, and
+ * will exist at least as long as the current backend remains attached to the
+ * current session.
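+ * (assign_record_type_typmod falls back to a backend-local,
+ * reference-counted tuple descriptor when this function returns NULL.)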
+ */
+static TupleDesc
+find_or_make_matching_shared_tupledesc(TupleDesc tupdesc)
+{
+	TupleDesc	result;
+	SharedRecordTableKey key;
+	SharedRecordTableEntry *record_table_entry;
+	SharedTypmodTableEntry *typmod_table_entry;
+	dsa_pointer shared_dp;
+	bool		found;
+	uint32		typmod;
+
+	/* If not even attached, nothing to do. */
+	if (CurrentSession->shared_typmod_registry == NULL)
+		return NULL;
+
+	/* Try to find a matching tuple descriptor in the record table. */
+	key.shared = false;
+	key.local_tupdesc = tupdesc;
+	record_table_entry = (SharedRecordTableEntry *)
+		dshash_find(CurrentSession->shared_record_table, &key, false);
+	if (record_table_entry)
+	{
+		Assert(record_table_entry->key.shared);
+		dshash_release_lock(CurrentSession->shared_record_table,
+							record_table_entry);
+		result = (TupleDesc)
+			dsa_get_address(CurrentSession->area,
+							record_table_entry->key.shared_tupdesc);
+		Assert(result->tdrefcount == -1);
+
+		return result;
+	}
+
+	/* Allocate a new typmod number.  This will be wasted if we error out. */
+	typmod = (int)
+		pg_atomic_fetch_add_u32(&CurrentSession->shared_typmod_registry->next_typmod,
+								1);
+
+	/* Copy the TupleDesc into shared memory. */
+	shared_dp = share_tupledesc(CurrentSession->area, tupdesc, typmod);
+
+	/*
+	 * Create an entry in the typmod table so that others will understand this
+	 * typmod number.
+	 */
+	PG_TRY();
+	{
+		typmod_table_entry = (SharedTypmodTableEntry *)
+			dshash_find_or_insert(CurrentSession->shared_typmod_table,
+								  &typmod, &found);
+		if (found)
+			elog(ERROR, "cannot create duplicate shared record typmod");
+	}
+	PG_CATCH();
+	{
+		dsa_free(CurrentSession->area, shared_dp);
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+	typmod_table_entry->typmod = typmod;
+	typmod_table_entry->shared_tupdesc = shared_dp;
+	dshash_release_lock(CurrentSession->shared_typmod_table,
+						typmod_table_entry);
+
+	/*
+	 * Finally create an entry in the record table so others with matching
+	 * tuple descriptors can reuse the typmod.
+	 */
+	record_table_entry = (SharedRecordTableEntry *)
+		dshash_find_or_insert(CurrentSession->shared_record_table, &key,
+							  &found);
+	if (found)
+	{
+		/*
+		 * Someone concurrently inserted a matching tuple descriptor since the
+		 * first time we checked.  Use that one instead.
+		 */
+		dshash_release_lock(CurrentSession->shared_record_table,
+							record_table_entry);
+
+		/* Might as well free up the space used by the one we created. */
+		found = dshash_delete_key(CurrentSession->shared_typmod_table,
+								  &typmod);
+		Assert(found);
+		dsa_free(CurrentSession->area, shared_dp);
+
+		/* Return the one we found. */
+		Assert(record_table_entry->key.shared);
+		result = (TupleDesc)
+			dsa_get_address(CurrentSession->area,
+							record_table_entry->key.shared_tupdesc);
+		Assert(result->tdrefcount == -1);
+
+		return result;
+	}
+
+	/* Store it and return it. */
+	record_table_entry->key.shared = true;
+	record_table_entry->key.shared_tupdesc = shared_dp;
+	dshash_release_lock(CurrentSession->shared_record_table,
+						record_table_entry);
+	result = (TupleDesc)
+		dsa_get_address(CurrentSession->area, shared_dp);
+	Assert(result->tdrefcount == -1);
+
+	return result;
+}
+
+/*
+ * Detach hook to forget about the current shared record typmod
+ * infrastructure.  This is registered directly in leader backends, and
+ * reached only in case of error or shutdown.  It's also reached indirectly
+ * via the worker detach callback below.
+ */
+static void
+shared_record_typmod_registry_detach(dsm_segment *segment, Datum datum)
+{
+	/* Be cautious here: maybe we didn't finish initializing. */
+	if (CurrentSession->shared_record_table != NULL)
+	{
+		dshash_detach(CurrentSession->shared_record_table);
+		CurrentSession->shared_record_table = NULL;
+	}
+	if (CurrentSession->shared_typmod_table != NULL)
+	{
+		dshash_detach(CurrentSession->shared_typmod_table);
+		CurrentSession->shared_typmod_table = NULL;
+	}
+	CurrentSession->shared_typmod_registry = NULL;
+}
+
+/*
+ * Detach hook allowing workers to disconnect from shared record typmod
+ * registry.  The resulting state should allow a worker to attach to a
+ * different leader, if worker reuse pools are invented.
+ */
+static void
+shared_record_typmod_registry_worker_detach(dsm_segment *segment, Datum datum)
+{
+	/*
+	 * Forget everything we learned about record typmods as part of the
+	 * session we are disconnecting from, and return to the initial state.
+	 */
+	if (RecordCacheArray != NULL)
+	{
+		int32		i;
+
+		for (i = 0; i < RecordCacheArrayLen; ++i)
+		{
+			if (RecordCacheArray[i] != NULL)
+			{
+				TupleDesc	tupdesc = RecordCacheArray[i];
+
+				/*
+				 * Pointers to tuple descriptors in shared memory are not
+				 * reference counted, so we are not responsible for freeing
+				 * them.  They'll survive as long as the shared session
+				 * exists, which should be as long as the owning leader
+				 * backend exists.  In theory we do need to free local
+				 * reference counted tuple descriptors however, and we can't
+				 * do that with DecrTupleDescRefCount() because we aren't
+				 * using a resource owner.  In practice we don't expect to
+				 * find any non-shared TupleDesc object in a worker.
+				 */
+				if (tupdesc->tdrefcount != -1)
+				{
+					Assert(tupdesc->tdrefcount > 0);
+					if (--tupdesc->tdrefcount == 0)
+						FreeTupleDesc(tupdesc);
+				}
+			}
+		}
+		pfree(RecordCacheArray);
+		RecordCacheArray = NULL;
+	}
+	if (RecordCacheHash != NULL)
+	{
+		hash_destroy(RecordCacheHash);
+		RecordCacheHash = NULL;
+	}
+	NextRecordTypmod = 0;
+	/* Call the code common to leader and worker detach. */
+	shared_record_typmod_registry_detach(segment, datum);
+}
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index eb6960d93f..20f1d279e9 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -21,6 +21,7 @@
 
 #include "access/heapam.h"
 #include "access/htup_details.h"
+#include "access/session.h"
 #include "access/sysattr.h"
 #include "access/xact.h"
 #include "access/xlog.h"
@@ -1027,6 +1028,9 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	/* initialize client encoding */
 	InitializeClientEncoding();
 
+	/* Initialize this backend's session state. */
+	InitializeSession();
+
 	/* report this backend in the PgBackendStatus array */
 	if (!bootstrap)
 		pgstat_bestart();
diff --git a/src/include/access/session.h b/src/include/access/session.h
new file mode 100644
index 0000000000..8376dc5312
--- /dev/null
+++ b/src/include/access/session.h
@@ -0,0 +1,44 @@
+/*-------------------------------------------------------------------------
+ *
+ * session.h
+ *	  Encapsulation of user session.
+ *
+ * Copyright (c) 2017, PostgreSQL Global Development Group
+ *
+ * src/include/access/session.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef SESSION_H
+#define SESSION_H
+
+#include "lib/dshash.h"
+
+/* Defined in typcache.c */
+typedef struct SharedRecordTypmodRegistry SharedRecordTypmodRegistry;
+
+/*
+ * A struct encapsulating some elements of a user's session.  For now this
+ * manages state that applies to parallel query, but in principle it could
+ * include other things that are currently global variables.
+ */
+typedef struct Session
+{
+	dsm_segment *segment;		/* The session-scoped DSM segment. */
+	dsa_area   *area;			/* The session-scoped DSA area. */
+
+	/* State managed by typcache.c. */
+	SharedRecordTypmodRegistry *shared_typmod_registry;
+	dshash_table *shared_record_table;
+	dshash_table *shared_typmod_table;
+} Session;
+
+extern void InitializeSession(void);
+extern dsm_handle GetSessionDsmHandle(void);
+extern void AttachSession(dsm_handle handle);
+extern void DetachSession(void);
+
+/* The current session, or NULL for none. */
+extern Session *CurrentSession;
+
+#endif							/* SESSION_H */
diff --git a/src/include/access/tupdesc.h b/src/include/access/tupdesc.h
index 989fe738bb..c15610e767 100644
--- a/src/include/access/tupdesc.h
+++ b/src/include/access/tupdesc.h
@@ -92,6 +92,12 @@ extern TupleDesc CreateTupleDescCopy(TupleDesc tupdesc);
 
 extern TupleDesc CreateTupleDescCopyConstr(TupleDesc tupdesc);
 
+#define TupleDescSize(src) \
+	(offsetof(struct tupleDesc, attrs) + \
+	 (src)->natts * sizeof(FormData_pg_attribute))
+
+extern void TupleDescCopy(TupleDesc dst, TupleDesc src);
+
 extern void TupleDescCopyEntry(TupleDesc dst, AttrNumber dstAttno,
 				   TupleDesc src, AttrNumber srcAttno);
 
diff --git a/src/include/storage/lwlock.h b/src/include/storage/lwlock.h
index 3d16132c88..f4c4aed7f9 100644
--- a/src/include/storage/lwlock.h
+++ b/src/include/storage/lwlock.h
@@ -212,6 +212,9 @@ typedef enum BuiltinTrancheIds
 	LWTRANCHE_LOCK_MANAGER,
 	LWTRANCHE_PREDICATE_LOCK_MANAGER,
 	LWTRANCHE_PARALLEL_QUERY_DSA,
+	LWTRANCHE_SESSION_DSA,
+	LWTRANCHE_SESSION_RECORD_TABLE,
+	LWTRANCHE_SESSION_TYPMOD_TABLE,
 	LWTRANCHE_TBM,
 	LWTRANCHE_FIRST_USER_DEFINED
 } BuiltinTrancheIds;
diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h
index b4f7592162..41b645a58f 100644
--- a/src/include/utils/typcache.h
+++ b/src/include/utils/typcache.h
@@ -18,6 +18,8 @@
 
 #include "access/tupdesc.h"
 #include "fmgr.h"
+#include "storage/dsm.h"
+#include "utils/dsa.h"
 
 
 /* DomainConstraintCache is an opaque struct known only within typcache.c */
@@ -143,6 +145,7 @@ typedef struct DomainConstraintRef
 	MemoryContextCallback callback; /* used to release refcount when done */
 } DomainConstraintRef;
 
+typedef struct SharedRecordTypmodRegistry SharedRecordTypmodRegistry;
 
 extern TypeCacheEntry *lookup_type_cache(Oid type_id, int flags);
 
@@ -164,4 +167,11 @@ extern void assign_record_type_typmod(TupleDesc tupDesc);
 extern int	compare_values_of_enum(TypeCacheEntry *tcache,
 						   Oid arg1, Oid arg2);
 
+extern size_t SharedRecordTypmodRegistryEstimate(void);
+
+extern void SharedRecordTypmodRegistryInit(SharedRecordTypmodRegistry *,
+							   dsm_segment *segment, dsa_area *area);
+
+extern void SharedRecordTypmodRegistryAttach(SharedRecordTypmodRegistry *);
+
 #endif							/* TYPCACHE_H */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 17ba2bde5c..8ce97da2ee 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2016,6 +2016,10 @@ SharedInvalRelmapMsg
 SharedInvalSmgrMsg
 SharedInvalSnapshotMsg
 SharedInvalidationMessage
+SharedRecordTableEntry
+SharedRecordTableKey
+SharedRecordTypmodRegistry
+SharedTypmodTableEntry
 ShellTypeInfo
 ShippableCacheEntry
 ShippableCacheKey

From 6b65a7fe62e129d5c2b85cd74d6a91d8f7564608 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Thu, 14 Sep 2017 19:59:02 -0700
Subject: [PATCH 0183/1087] Remove TupleDesc 
remapping logic from tqueue.c. With the introduction of a shared memory record typmod registry, it is no longer necessary to remap record typmods when sending tuples between backends so most of tqueue.c can be removed. Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com --- src/backend/executor/execParallel.c | 7 +- src/backend/executor/nodeGather.c | 3 +- src/backend/executor/nodeGatherMerge.c | 2 +- src/backend/executor/tqueue.c | 1119 +----------------------- src/include/executor/execParallel.h | 3 +- src/include/executor/tqueue.h | 3 +- 6 files changed, 27 insertions(+), 1110 deletions(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 8737cc1cef..5dc26ed17a 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -608,14 +608,12 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, /* * Set up tuple queue readers to read the results of a parallel subplan. - * All the workers are expected to return tuples matching tupDesc. * * This is separate from ExecInitParallelPlan() because we can launch the * worker processes and let them start doing something before we do this. */ void -ExecParallelCreateReaders(ParallelExecutorInfo *pei, - TupleDesc tupDesc) +ExecParallelCreateReaders(ParallelExecutorInfo *pei) { int nworkers = pei->pcxt->nworkers_launched; int i; @@ -631,8 +629,7 @@ ExecParallelCreateReaders(ParallelExecutorInfo *pei, { shm_mq_set_handle(pei->tqueue[i], pei->pcxt->worker[i].bgwhandle); - pei->reader[i] = CreateTupleQueueReader(pei->tqueue[i], - tupDesc); + pei->reader[i] = CreateTupleQueueReader(pei->tqueue[i]); } } } diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 022d75b4b8..8370037c43 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -176,8 +176,7 @@ ExecGather(PlanState *pstate) /* Set up tuple queue readers to read the results. */ if (pcxt->nworkers_launched > 0) { - ExecParallelCreateReaders(node->pei, - fslot->tts_tupleDescriptor); + ExecParallelCreateReaders(node->pei); /* Make a working array showing the active readers */ node->nreaders = pcxt->nworkers_launched; node->reader = (TupleQueueReader **) diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index d20d46606e..70f33a9a28 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -217,7 +217,7 @@ ExecGatherMerge(PlanState *pstate) /* Set up tuple queue readers to read the results. */ if (pcxt->nworkers_launched > 0) { - ExecParallelCreateReaders(node->pei, node->tupDesc); + ExecParallelCreateReaders(node->pei); /* Make a working array showing the active readers */ node->nreaders = pcxt->nworkers_launched; node->reader = (TupleQueueReader **) diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c index 6afcd1a30a..e9a5d5a1a5 100644 --- a/src/backend/executor/tqueue.c +++ b/src/backend/executor/tqueue.c @@ -3,25 +3,10 @@ * tqueue.c * Use shm_mq to send & receive tuples between parallel backends * - * Most of the complexity in this module arises from transient RECORD types, - * which all have type RECORDOID and are distinguished by typmod numbers - * that are managed per-backend (see src/backend/utils/cache/typcache.c). - * The sender's set of RECORD typmod assignments probably doesn't match the - * receiver's. 
To deal with this, we make the sender send a description - * of each transient RECORD type appearing in the data it sends. The - * receiver finds or creates a matching type in its own typcache, and then - * maps the sender's typmod for that type to its own typmod. - * * A DestReceiver of type DestTupleQueue, which is a TQueueDestReceiver - * under the hood, writes tuples from the executor to a shm_mq. If - * necessary, it also writes control messages describing transient - * record types used within the tuple. + * under the hood, writes tuples from the executor to a shm_mq. * - * A TupleQueueReader reads tuples, and control messages if any are sent, - * from a shm_mq and returns the tuples. If transient record types are - * in use, it registers those types locally based on the control messages - * and rewrites the typmods sent by the remote side to the corresponding - * local record typmods. + * A TupleQueueReader reads tuples from a shm_mq and returns the tuples. * * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California @@ -35,186 +20,31 @@ #include "postgres.h" #include "access/htup_details.h" -#include "catalog/pg_type.h" #include "executor/tqueue.h" -#include "funcapi.h" -#include "lib/stringinfo.h" -#include "miscadmin.h" -#include "utils/array.h" -#include "utils/lsyscache.h" -#include "utils/memutils.h" -#include "utils/rangetypes.h" -#include "utils/syscache.h" -#include "utils/typcache.h" - - -/* - * The data transferred through the shm_mq is divided into messages. - * One-byte messages are mode-switch messages, telling the receiver to switch - * between "control" and "data" modes. (We always start up in "data" mode.) - * Otherwise, when in "data" mode, each message is a tuple. When in "control" - * mode, each message defines one transient-typmod-to-tupledesc mapping to - * let us interpret future tuples. Both of those cases certainly require - * more than one byte, so no confusion is possible. - */ -#define TUPLE_QUEUE_MODE_CONTROL 'c' /* mode-switch message contents */ -#define TUPLE_QUEUE_MODE_DATA 'd' - -/* - * Both the sender and receiver build trees of TupleRemapInfo nodes to help - * them identify which (sub) fields of transmitted tuples are composite and - * may thus need remap processing. We might need to look within arrays and - * ranges, not only composites, to find composite sub-fields. A NULL - * TupleRemapInfo pointer indicates that it is known that the described field - * is not composite and has no composite substructure. - * - * Note that we currently have to look at each composite field at runtime, - * even if we believe it's of a named composite type (i.e., not RECORD). - * This is because we allow the actual value to be a compatible transient - * RECORD type. That's grossly inefficient, and it would be good to get - * rid of the requirement, but it's not clear what would need to change. - * - * Also, we allow the top-level tuple structure, as well as the actual - * structure of composite subfields, to change from one tuple to the next - * at runtime. This may well be entirely historical, but it's mostly free - * to support given the previous requirement; and other places in the system - * also permit this, so it's not entirely clear if we could drop it. 
- */ - -typedef enum -{ - TQUEUE_REMAP_ARRAY, /* array */ - TQUEUE_REMAP_RANGE, /* range */ - TQUEUE_REMAP_RECORD /* composite type, named or transient */ -} TupleRemapClass; - -typedef struct TupleRemapInfo TupleRemapInfo; - -typedef struct ArrayRemapInfo -{ - int16 typlen; /* array element type's storage properties */ - bool typbyval; - char typalign; - TupleRemapInfo *element_remap; /* array element type's remap info */ -} ArrayRemapInfo; - -typedef struct RangeRemapInfo -{ - TypeCacheEntry *typcache; /* range type's typcache entry */ - TupleRemapInfo *bound_remap; /* range bound type's remap info */ -} RangeRemapInfo; - -typedef struct RecordRemapInfo -{ - /* Original (remote) type ID info last seen for this composite field */ - Oid rectypid; - int32 rectypmod; - /* Local RECORD typmod, or -1 if unset; not used on sender side */ - int32 localtypmod; - /* If no fields of the record require remapping, these are NULL: */ - TupleDesc tupledesc; /* copy of record's tupdesc */ - TupleRemapInfo **field_remap; /* each field's remap info */ -} RecordRemapInfo; - -struct TupleRemapInfo -{ - TupleRemapClass remapclass; - union - { - ArrayRemapInfo arr; - RangeRemapInfo rng; - RecordRemapInfo rec; - } u; -}; /* * DestReceiver object's private contents * - * queue and tupledesc are pointers to data supplied by DestReceiver's caller. - * The recordhtab and remap info are owned by the DestReceiver and are kept - * in mycontext. tmpcontext is a tuple-lifespan context to hold cruft - * created while traversing each tuple to find record subfields. + * queue is a pointer to data supplied by DestReceiver's caller. */ typedef struct TQueueDestReceiver { DestReceiver pub; /* public fields */ shm_mq_handle *queue; /* shm_mq to send to */ - MemoryContext mycontext; /* context containing TQueueDestReceiver */ - MemoryContext tmpcontext; /* per-tuple context, if needed */ - HTAB *recordhtab; /* table of transmitted typmods, if needed */ - char mode; /* current message mode */ - TupleDesc tupledesc; /* current top-level tuple descriptor */ - TupleRemapInfo **field_remapinfo; /* current top-level remap info */ } TQueueDestReceiver; -/* - * Hash table entries for mapping remote to local typmods. - */ -typedef struct RecordTypmodMap -{ - int32 remotetypmod; /* hash key (must be first!) */ - int32 localtypmod; -} RecordTypmodMap; - /* * TupleQueueReader object's private contents * - * queue and tupledesc are pointers to data supplied by reader's caller. - * The typmodmap and remap info are owned by the TupleQueueReader and - * are kept in mycontext. + * queue is a pointer to data supplied by reader's caller. 
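+ * No other state is needed here: thanks to the shared record typmod
+ * registry, incoming tuples' typmods are already valid in this backend.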
* * "typedef struct TupleQueueReader TupleQueueReader" is in tqueue.h */ struct TupleQueueReader { shm_mq_handle *queue; /* shm_mq to receive from */ - MemoryContext mycontext; /* context containing TupleQueueReader */ - HTAB *typmodmap; /* RecordTypmodMap hashtable, if needed */ - char mode; /* current message mode */ - TupleDesc tupledesc; /* current top-level tuple descriptor */ - TupleRemapInfo **field_remapinfo; /* current top-level remap info */ }; -/* Local function prototypes */ -static void TQExamine(TQueueDestReceiver *tqueue, - TupleRemapInfo *remapinfo, - Datum value); -static void TQExamineArray(TQueueDestReceiver *tqueue, - ArrayRemapInfo *remapinfo, - Datum value); -static void TQExamineRange(TQueueDestReceiver *tqueue, - RangeRemapInfo *remapinfo, - Datum value); -static void TQExamineRecord(TQueueDestReceiver *tqueue, - RecordRemapInfo *remapinfo, - Datum value); -static void TQSendRecordInfo(TQueueDestReceiver *tqueue, int32 typmod, - TupleDesc tupledesc); -static void TupleQueueHandleControlMessage(TupleQueueReader *reader, - Size nbytes, char *data); -static HeapTuple TupleQueueHandleDataMessage(TupleQueueReader *reader, - Size nbytes, HeapTupleHeader data); -static HeapTuple TQRemapTuple(TupleQueueReader *reader, - TupleDesc tupledesc, - TupleRemapInfo **field_remapinfo, - HeapTuple tuple); -static Datum TQRemap(TupleQueueReader *reader, TupleRemapInfo *remapinfo, - Datum value, bool *changed); -static Datum TQRemapArray(TupleQueueReader *reader, ArrayRemapInfo *remapinfo, - Datum value, bool *changed); -static Datum TQRemapRange(TupleQueueReader *reader, RangeRemapInfo *remapinfo, - Datum value, bool *changed); -static Datum TQRemapRecord(TupleQueueReader *reader, RecordRemapInfo *remapinfo, - Datum value, bool *changed); -static TupleRemapInfo *BuildTupleRemapInfo(Oid typid, MemoryContext mycontext); -static TupleRemapInfo *BuildArrayRemapInfo(Oid elemtypid, - MemoryContext mycontext); -static TupleRemapInfo *BuildRangeRemapInfo(Oid rngtypid, - MemoryContext mycontext); -static TupleRemapInfo **BuildFieldRemapInfo(TupleDesc tupledesc, - MemoryContext mycontext); - - /* * Receive a tuple from a query, and send it to the designated shm_mq. * @@ -224,86 +54,9 @@ static bool tqueueReceiveSlot(TupleTableSlot *slot, DestReceiver *self) { TQueueDestReceiver *tqueue = (TQueueDestReceiver *) self; - TupleDesc tupledesc = slot->tts_tupleDescriptor; HeapTuple tuple; shm_mq_result result; - /* - * If first time through, compute remapping info for the top-level fields. - * On later calls, if the tupledesc has changed, set up for the new - * tupledesc. (This is a strange test both because the executor really - * shouldn't change the tupledesc, and also because it would be unsafe if - * the old tupledesc could be freed and a new one allocated at the same - * address. But since some very old code in printtup.c uses a similar - * approach, we adopt it here as well.) - * - * Here and elsewhere in this module, when replacing remapping info we - * pfree the top-level object because that's easy, but we don't bother to - * recursively free any substructure. This would lead to query-lifespan - * memory leaks if the mapping info actually changed frequently, but since - * we don't expect that to happen, it doesn't seem worth expending code to - * prevent it. - */ - if (tqueue->tupledesc != tupledesc) - { - /* Is it worth trying to free substructure of the remap tree? 
*/ - if (tqueue->field_remapinfo != NULL) - pfree(tqueue->field_remapinfo); - tqueue->field_remapinfo = BuildFieldRemapInfo(tupledesc, - tqueue->mycontext); - tqueue->tupledesc = tupledesc; - } - - /* - * When, because of the types being transmitted, no record typmod mapping - * can be needed, we can skip a good deal of work. - */ - if (tqueue->field_remapinfo != NULL) - { - TupleRemapInfo **remapinfo = tqueue->field_remapinfo; - int i; - MemoryContext oldcontext = NULL; - - /* Deform the tuple so we can examine fields, if not done already. */ - slot_getallattrs(slot); - - /* Iterate over each attribute and search it for transient typmods. */ - for (i = 0; i < tupledesc->natts; i++) - { - /* Ignore nulls and types that don't need special handling. */ - if (slot->tts_isnull[i] || remapinfo[i] == NULL) - continue; - - /* Switch to temporary memory context to avoid leaking. */ - if (oldcontext == NULL) - { - if (tqueue->tmpcontext == NULL) - tqueue->tmpcontext = - AllocSetContextCreate(tqueue->mycontext, - "tqueue sender temp context", - ALLOCSET_DEFAULT_SIZES); - oldcontext = MemoryContextSwitchTo(tqueue->tmpcontext); - } - - /* Examine the value. */ - TQExamine(tqueue, remapinfo[i], slot->tts_values[i]); - } - - /* If we used the temp context, reset it and restore prior context. */ - if (oldcontext != NULL) - { - MemoryContextSwitchTo(oldcontext); - MemoryContextReset(tqueue->tmpcontext); - } - - /* If we entered control mode, switch back to data mode. */ - if (tqueue->mode != TUPLE_QUEUE_MODE_DATA) - { - tqueue->mode = TUPLE_QUEUE_MODE_DATA; - shm_mq_send(tqueue->queue, sizeof(char), &tqueue->mode, false); - } - } - /* Send the tuple itself. */ tuple = ExecMaterializeSlot(slot); result = shm_mq_send(tqueue->queue, tuple->t_len, tuple->t_data, false); @@ -319,248 +72,6 @@ tqueueReceiveSlot(TupleTableSlot *slot, DestReceiver *self) return true; } -/* - * Examine the given datum and send any necessary control messages for - * transient record types contained in it. - * - * remapinfo is previously-computed remapping info about the datum's type. - * - * This function just dispatches based on the remap class. - */ -static void -TQExamine(TQueueDestReceiver *tqueue, TupleRemapInfo *remapinfo, Datum value) -{ - /* This is recursive, so it could be driven to stack overflow. */ - check_stack_depth(); - - switch (remapinfo->remapclass) - { - case TQUEUE_REMAP_ARRAY: - TQExamineArray(tqueue, &remapinfo->u.arr, value); - break; - case TQUEUE_REMAP_RANGE: - TQExamineRange(tqueue, &remapinfo->u.rng, value); - break; - case TQUEUE_REMAP_RECORD: - TQExamineRecord(tqueue, &remapinfo->u.rec, value); - break; - } -} - -/* - * Examine a record datum and send any necessary control messages for - * transient record types contained in it. - */ -static void -TQExamineRecord(TQueueDestReceiver *tqueue, RecordRemapInfo *remapinfo, - Datum value) -{ - HeapTupleHeader tup; - Oid typid; - int32 typmod; - TupleDesc tupledesc; - - /* Extract type OID and typmod from tuple. */ - tup = DatumGetHeapTupleHeader(value); - typid = HeapTupleHeaderGetTypeId(tup); - typmod = HeapTupleHeaderGetTypMod(tup); - - /* - * If first time through, or if this isn't the same composite type as last - * time, consider sending a control message, and then look up the - * necessary information for examining the fields. - */ - if (typid != remapinfo->rectypid || typmod != remapinfo->rectypmod) - { - /* Free any old data. 
*/ - if (remapinfo->tupledesc != NULL) - FreeTupleDesc(remapinfo->tupledesc); - /* Is it worth trying to free substructure of the remap tree? */ - if (remapinfo->field_remap != NULL) - pfree(remapinfo->field_remap); - - /* Look up tuple descriptor in typcache. */ - tupledesc = lookup_rowtype_tupdesc(typid, typmod); - - /* - * If this is a transient record type, send the tupledesc in a control - * message. (TQSendRecordInfo is smart enough to do this only once - * per typmod.) - */ - if (typid == RECORDOID) - TQSendRecordInfo(tqueue, typmod, tupledesc); - - /* Figure out whether fields need recursive processing. */ - remapinfo->field_remap = BuildFieldRemapInfo(tupledesc, - tqueue->mycontext); - if (remapinfo->field_remap != NULL) - { - /* - * We need to inspect the record contents, so save a copy of the - * tupdesc. (We could possibly just reference the typcache's - * copy, but then it's problematic when to release the refcount.) - */ - MemoryContext oldcontext = MemoryContextSwitchTo(tqueue->mycontext); - - remapinfo->tupledesc = CreateTupleDescCopy(tupledesc); - MemoryContextSwitchTo(oldcontext); - } - else - { - /* No fields of the record require remapping. */ - remapinfo->tupledesc = NULL; - } - remapinfo->rectypid = typid; - remapinfo->rectypmod = typmod; - - /* Release reference count acquired by lookup_rowtype_tupdesc. */ - DecrTupleDescRefCount(tupledesc); - } - - /* - * If field remapping is required, deform the tuple and examine each - * field. - */ - if (remapinfo->field_remap != NULL) - { - Datum *values; - bool *isnull; - HeapTupleData tdata; - int i; - - /* Deform the tuple so we can check each column within. */ - tupledesc = remapinfo->tupledesc; - values = (Datum *) palloc(tupledesc->natts * sizeof(Datum)); - isnull = (bool *) palloc(tupledesc->natts * sizeof(bool)); - tdata.t_len = HeapTupleHeaderGetDatumLength(tup); - ItemPointerSetInvalid(&(tdata.t_self)); - tdata.t_tableOid = InvalidOid; - tdata.t_data = tup; - heap_deform_tuple(&tdata, tupledesc, values, isnull); - - /* Recursively check each interesting non-NULL attribute. */ - for (i = 0; i < tupledesc->natts; i++) - { - if (!isnull[i] && remapinfo->field_remap[i]) - TQExamine(tqueue, remapinfo->field_remap[i], values[i]); - } - - /* Need not clean up, since we're in a short-lived context. */ - } -} - -/* - * Examine an array datum and send any necessary control messages for - * transient record types contained in it. - */ -static void -TQExamineArray(TQueueDestReceiver *tqueue, ArrayRemapInfo *remapinfo, - Datum value) -{ - ArrayType *arr = DatumGetArrayTypeP(value); - Oid typid = ARR_ELEMTYPE(arr); - Datum *elem_values; - bool *elem_nulls; - int num_elems; - int i; - - /* Deconstruct the array. */ - deconstruct_array(arr, typid, remapinfo->typlen, - remapinfo->typbyval, remapinfo->typalign, - &elem_values, &elem_nulls, &num_elems); - - /* Examine each element. */ - for (i = 0; i < num_elems; i++) - { - if (!elem_nulls[i]) - TQExamine(tqueue, remapinfo->element_remap, elem_values[i]); - } -} - -/* - * Examine a range datum and send any necessary control messages for - * transient record types contained in it. - */ -static void -TQExamineRange(TQueueDestReceiver *tqueue, RangeRemapInfo *remapinfo, - Datum value) -{ - RangeType *range = DatumGetRangeType(value); - RangeBound lower; - RangeBound upper; - bool empty; - - /* Extract the lower and upper bounds. */ - range_deserialize(remapinfo->typcache, range, &lower, &upper, &empty); - - /* Nothing to do for an empty range. 
*/ - if (empty) - return; - - /* Examine each bound, if present. */ - if (!upper.infinite) - TQExamine(tqueue, remapinfo->bound_remap, upper.val); - if (!lower.infinite) - TQExamine(tqueue, remapinfo->bound_remap, lower.val); -} - -/* - * Send tuple descriptor information for a transient typmod, unless we've - * already done so previously. - */ -static void -TQSendRecordInfo(TQueueDestReceiver *tqueue, int32 typmod, TupleDesc tupledesc) -{ - StringInfoData buf; - bool found; - int i; - - /* Initialize hash table if not done yet. */ - if (tqueue->recordhtab == NULL) - { - HASHCTL ctl; - - MemSet(&ctl, 0, sizeof(ctl)); - /* Hash table entries are just typmods */ - ctl.keysize = sizeof(int32); - ctl.entrysize = sizeof(int32); - ctl.hcxt = tqueue->mycontext; - tqueue->recordhtab = hash_create("tqueue sender record type hashtable", - 100, &ctl, - HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); - } - - /* Have we already seen this record type? If not, must report it. */ - hash_search(tqueue->recordhtab, &typmod, HASH_ENTER, &found); - if (found) - return; - - elog(DEBUG3, "sending tqueue control message for record typmod %d", typmod); - - /* If message queue is in data mode, switch to control mode. */ - if (tqueue->mode != TUPLE_QUEUE_MODE_CONTROL) - { - tqueue->mode = TUPLE_QUEUE_MODE_CONTROL; - shm_mq_send(tqueue->queue, sizeof(char), &tqueue->mode, false); - } - - /* Assemble a control message. */ - initStringInfo(&buf); - appendBinaryStringInfo(&buf, (char *) &typmod, sizeof(int32)); - appendBinaryStringInfo(&buf, (char *) &tupledesc->natts, sizeof(int)); - appendBinaryStringInfo(&buf, (char *) &tupledesc->tdhasoid, sizeof(bool)); - for (i = 0; i < tupledesc->natts; i++) - { - appendBinaryStringInfo(&buf, (char *) TupleDescAttr(tupledesc, i), - sizeof(FormData_pg_attribute)); - } - - /* Send control message. */ - shm_mq_send(tqueue->queue, buf.len, buf.data, false); - - /* We assume it's OK to leak buf because we're in a short-lived context. */ -} - /* * Prepare to receive tuples from executor. */ @@ -594,13 +105,6 @@ tqueueDestroyReceiver(DestReceiver *self) /* We probably already detached from queue, but let's be sure */ if (tqueue->queue != NULL) shm_mq_detach(tqueue->queue); - if (tqueue->tmpcontext != NULL) - MemoryContextDelete(tqueue->tmpcontext); - if (tqueue->recordhtab != NULL) - hash_destroy(tqueue->recordhtab); - /* Is it worth trying to free substructure of the remap tree? */ - if (tqueue->field_remapinfo != NULL) - pfree(tqueue->field_remapinfo); pfree(self); } @@ -620,13 +124,6 @@ CreateTupleQueueDestReceiver(shm_mq_handle *handle) self->pub.rDestroy = tqueueDestroyReceiver; self->pub.mydest = DestTupleQueue; self->queue = handle; - self->mycontext = CurrentMemoryContext; - self->tmpcontext = NULL; - self->recordhtab = NULL; - self->mode = TUPLE_QUEUE_MODE_DATA; - /* Top-level tupledesc is not known yet */ - self->tupledesc = NULL; - self->field_remapinfo = NULL; return (DestReceiver *) self; } @@ -635,16 +132,11 @@ CreateTupleQueueDestReceiver(shm_mq_handle *handle) * Create a tuple queue reader. 
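+ * The TupleDesc parameter is gone: with the shared record typmod registry,
+ * tuples can be returned without any per-queue descriptor or remapping.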
*/ TupleQueueReader * -CreateTupleQueueReader(shm_mq_handle *handle, TupleDesc tupledesc) +CreateTupleQueueReader(shm_mq_handle *handle) { TupleQueueReader *reader = palloc0(sizeof(TupleQueueReader)); reader->queue = handle; - reader->mycontext = CurrentMemoryContext; - reader->typmodmap = NULL; - reader->mode = TUPLE_QUEUE_MODE_DATA; - reader->tupledesc = tupledesc; - reader->field_remapinfo = BuildFieldRemapInfo(tupledesc, reader->mycontext); return reader; } @@ -658,11 +150,6 @@ CreateTupleQueueReader(shm_mq_handle *handle, TupleDesc tupledesc) void DestroyTupleQueueReader(TupleQueueReader *reader) { - if (reader->typmodmap != NULL) - hash_destroy(reader->typmodmap); - /* Is it worth trying to free substructure of the remap tree? */ - if (reader->field_remapinfo != NULL) - pfree(reader->field_remapinfo); pfree(reader); } @@ -674,9 +161,6 @@ DestroyTupleQueueReader(TupleQueueReader *reader) * is set to true when there are no remaining tuples and otherwise to false. * * The returned tuple, if any, is allocated in CurrentMemoryContext. - * That should be a short-lived (tuple-lifespan) context, because we are - * pretty cavalier about leaking memory in that context if we have to do - * tuple remapping. * * Even when shm_mq_receive() returns SHM_MQ_WOULD_BLOCK, this can still * accumulate bytes from a partially-read message, so it's useful to call @@ -685,64 +169,29 @@ DestroyTupleQueueReader(TupleQueueReader *reader) HeapTuple TupleQueueReaderNext(TupleQueueReader *reader, bool nowait, bool *done) { + HeapTupleData htup; shm_mq_result result; + Size nbytes; + void *data; if (done != NULL) *done = false; - for (;;) - { - Size nbytes; - void *data; + /* Attempt to read a message. */ + result = shm_mq_receive(reader->queue, &nbytes, &data, nowait); - /* Attempt to read a message. */ - result = shm_mq_receive(reader->queue, &nbytes, &data, nowait); - - /* If queue is detached, set *done and return NULL. */ - if (result == SHM_MQ_DETACHED) - { - if (done != NULL) - *done = true; - return NULL; - } - - /* In non-blocking mode, bail out if no message ready yet. */ - if (result == SHM_MQ_WOULD_BLOCK) - return NULL; - Assert(result == SHM_MQ_SUCCESS); - - /* - * We got a message (see message spec at top of file). Process it. - */ - if (nbytes == 1) - { - /* Mode switch message. */ - reader->mode = ((char *) data)[0]; - } - else if (reader->mode == TUPLE_QUEUE_MODE_DATA) - { - /* Tuple data. */ - return TupleQueueHandleDataMessage(reader, nbytes, data); - } - else if (reader->mode == TUPLE_QUEUE_MODE_CONTROL) - { - /* Control message, describing a transient record type. */ - TupleQueueHandleControlMessage(reader, nbytes, data); - } - else - elog(ERROR, "unrecognized tqueue mode: %d", (int) reader->mode); + /* If queue is detached, set *done and return NULL. */ + if (result == SHM_MQ_DETACHED) + { + if (done != NULL) + *done = true; + return NULL; } -} -/* - * Handle a data message - that is, a tuple - from the remote side. - */ -static HeapTuple -TupleQueueHandleDataMessage(TupleQueueReader *reader, - Size nbytes, - HeapTupleHeader data) -{ - HeapTupleData htup; + /* In non-blocking mode, bail out if no message ready yet. */ + if (result == SHM_MQ_WOULD_BLOCK) + return NULL; + Assert(result == SHM_MQ_SUCCESS); /* * Set up a dummy HeapTupleData pointing to the data from the shm_mq @@ -753,531 +202,5 @@ TupleQueueHandleDataMessage(TupleQueueReader *reader, htup.t_len = nbytes; htup.t_data = data; - /* - * Either just copy the data into a regular palloc'd tuple, or remap it, - * as required. 
- */ - return TQRemapTuple(reader, - reader->tupledesc, - reader->field_remapinfo, - &htup); -} - -/* - * Copy the given tuple, remapping any transient typmods contained in it. - */ -static HeapTuple -TQRemapTuple(TupleQueueReader *reader, - TupleDesc tupledesc, - TupleRemapInfo **field_remapinfo, - HeapTuple tuple) -{ - Datum *values; - bool *isnull; - bool changed = false; - int i; - - /* - * If no remapping is necessary, just copy the tuple into a single - * palloc'd chunk, as caller will expect. - */ - if (field_remapinfo == NULL) - return heap_copytuple(tuple); - - /* Deform tuple so we can remap record typmods for individual attrs. */ - values = (Datum *) palloc(tupledesc->natts * sizeof(Datum)); - isnull = (bool *) palloc(tupledesc->natts * sizeof(bool)); - heap_deform_tuple(tuple, tupledesc, values, isnull); - - /* Recursively process each interesting non-NULL attribute. */ - for (i = 0; i < tupledesc->natts; i++) - { - if (isnull[i] || field_remapinfo[i] == NULL) - continue; - values[i] = TQRemap(reader, field_remapinfo[i], values[i], &changed); - } - - /* Reconstruct the modified tuple, if anything was modified. */ - if (changed) - return heap_form_tuple(tupledesc, values, isnull); - else - return heap_copytuple(tuple); -} - -/* - * Process the given datum and replace any transient record typmods - * contained in it. Set *changed to TRUE if we actually changed the datum. - * - * remapinfo is previously-computed remapping info about the datum's type. - * - * This function just dispatches based on the remap class. - */ -static Datum -TQRemap(TupleQueueReader *reader, TupleRemapInfo *remapinfo, - Datum value, bool *changed) -{ - /* This is recursive, so it could be driven to stack overflow. */ - check_stack_depth(); - - switch (remapinfo->remapclass) - { - case TQUEUE_REMAP_ARRAY: - return TQRemapArray(reader, &remapinfo->u.arr, value, changed); - - case TQUEUE_REMAP_RANGE: - return TQRemapRange(reader, &remapinfo->u.rng, value, changed); - - case TQUEUE_REMAP_RECORD: - return TQRemapRecord(reader, &remapinfo->u.rec, value, changed); - } - - elog(ERROR, "unrecognized tqueue remap class: %d", - (int) remapinfo->remapclass); - return (Datum) 0; -} - -/* - * Process the given array datum and replace any transient record typmods - * contained in it. Set *changed to TRUE if we actually changed the datum. - */ -static Datum -TQRemapArray(TupleQueueReader *reader, ArrayRemapInfo *remapinfo, - Datum value, bool *changed) -{ - ArrayType *arr = DatumGetArrayTypeP(value); - Oid typid = ARR_ELEMTYPE(arr); - bool element_changed = false; - Datum *elem_values; - bool *elem_nulls; - int num_elems; - int i; - - /* Deconstruct the array. */ - deconstruct_array(arr, typid, remapinfo->typlen, - remapinfo->typbyval, remapinfo->typalign, - &elem_values, &elem_nulls, &num_elems); - - /* Remap each element. */ - for (i = 0; i < num_elems; i++) - { - if (!elem_nulls[i]) - elem_values[i] = TQRemap(reader, - remapinfo->element_remap, - elem_values[i], - &element_changed); - } - - if (element_changed) - { - /* Reconstruct and return the array. */ - *changed = true; - arr = construct_md_array(elem_values, elem_nulls, - ARR_NDIM(arr), ARR_DIMS(arr), ARR_LBOUND(arr), - typid, remapinfo->typlen, - remapinfo->typbyval, remapinfo->typalign); - return PointerGetDatum(arr); - } - - /* Else just return the value as-is. */ - return value; -} - -/* - * Process the given range datum and replace any transient record typmods - * contained in it. Set *changed to TRUE if we actually changed the datum. 
- */ -static Datum -TQRemapRange(TupleQueueReader *reader, RangeRemapInfo *remapinfo, - Datum value, bool *changed) -{ - RangeType *range = DatumGetRangeType(value); - bool bound_changed = false; - RangeBound lower; - RangeBound upper; - bool empty; - - /* Extract the lower and upper bounds. */ - range_deserialize(remapinfo->typcache, range, &lower, &upper, &empty); - - /* Nothing to do for an empty range. */ - if (empty) - return value; - - /* Remap each bound, if present. */ - if (!upper.infinite) - upper.val = TQRemap(reader, remapinfo->bound_remap, - upper.val, &bound_changed); - if (!lower.infinite) - lower.val = TQRemap(reader, remapinfo->bound_remap, - lower.val, &bound_changed); - - if (bound_changed) - { - /* Reserialize. */ - *changed = true; - range = range_serialize(remapinfo->typcache, &lower, &upper, empty); - return RangeTypeGetDatum(range); - } - - /* Else just return the value as-is. */ - return value; -} - -/* - * Process the given record datum and replace any transient record typmods - * contained in it. Set *changed to TRUE if we actually changed the datum. - */ -static Datum -TQRemapRecord(TupleQueueReader *reader, RecordRemapInfo *remapinfo, - Datum value, bool *changed) -{ - HeapTupleHeader tup; - Oid typid; - int32 typmod; - bool changed_typmod; - TupleDesc tupledesc; - - /* Extract type OID and typmod from tuple. */ - tup = DatumGetHeapTupleHeader(value); - typid = HeapTupleHeaderGetTypeId(tup); - typmod = HeapTupleHeaderGetTypMod(tup); - - /* - * If first time through, or if this isn't the same composite type as last - * time, identify the required typmod mapping, and then look up the - * necessary information for processing the fields. - */ - if (typid != remapinfo->rectypid || typmod != remapinfo->rectypmod) - { - /* Free any old data. */ - if (remapinfo->tupledesc != NULL) - FreeTupleDesc(remapinfo->tupledesc); - /* Is it worth trying to free substructure of the remap tree? */ - if (remapinfo->field_remap != NULL) - pfree(remapinfo->field_remap); - - /* If transient record type, look up matching local typmod. */ - if (typid == RECORDOID) - { - RecordTypmodMap *mapent; - - Assert(reader->typmodmap != NULL); - mapent = hash_search(reader->typmodmap, &typmod, - HASH_FIND, NULL); - if (mapent == NULL) - elog(ERROR, "tqueue received unrecognized remote typmod %d", - typmod); - remapinfo->localtypmod = mapent->localtypmod; - } - else - remapinfo->localtypmod = -1; - - /* Look up tuple descriptor in typcache. */ - tupledesc = lookup_rowtype_tupdesc(typid, remapinfo->localtypmod); - - /* Figure out whether fields need recursive processing. */ - remapinfo->field_remap = BuildFieldRemapInfo(tupledesc, - reader->mycontext); - if (remapinfo->field_remap != NULL) - { - /* - * We need to inspect the record contents, so save a copy of the - * tupdesc. (We could possibly just reference the typcache's - * copy, but then it's problematic when to release the refcount.) - */ - MemoryContext oldcontext = MemoryContextSwitchTo(reader->mycontext); - - remapinfo->tupledesc = CreateTupleDescCopy(tupledesc); - MemoryContextSwitchTo(oldcontext); - } - else - { - /* No fields of the record require remapping. */ - remapinfo->tupledesc = NULL; - } - remapinfo->rectypid = typid; - remapinfo->rectypmod = typmod; - - /* Release reference count acquired by lookup_rowtype_tupdesc. */ - DecrTupleDescRefCount(tupledesc); - } - - /* If transient record, replace remote typmod with local typmod. 
*/ - if (typid == RECORDOID && typmod != remapinfo->localtypmod) - { - typmod = remapinfo->localtypmod; - changed_typmod = true; - } - else - changed_typmod = false; - - /* - * If we need to change the typmod, or if there are any potentially - * remappable fields, replace the tuple. - */ - if (changed_typmod || remapinfo->field_remap != NULL) - { - HeapTupleData htup; - HeapTuple atup; - - /* For now, assume we always need to change the tuple in this case. */ - *changed = true; - - /* Copy tuple, possibly remapping contained fields. */ - ItemPointerSetInvalid(&htup.t_self); - htup.t_tableOid = InvalidOid; - htup.t_len = HeapTupleHeaderGetDatumLength(tup); - htup.t_data = tup; - atup = TQRemapTuple(reader, - remapinfo->tupledesc, - remapinfo->field_remap, - &htup); - - /* Apply the correct labeling for a local Datum. */ - HeapTupleHeaderSetTypeId(atup->t_data, typid); - HeapTupleHeaderSetTypMod(atup->t_data, typmod); - HeapTupleHeaderSetDatumLength(atup->t_data, htup.t_len); - - /* And return the results. */ - return HeapTupleHeaderGetDatum(atup->t_data); - } - - /* Else just return the value as-is. */ - return value; -} - -/* - * Handle a control message from the tuple queue reader. - * - * Control messages are sent when the remote side is sending tuples that - * contain transient record types. We need to arrange to bless those - * record types locally and translate between remote and local typmods. - */ -static void -TupleQueueHandleControlMessage(TupleQueueReader *reader, Size nbytes, - char *data) -{ - int32 remotetypmod; - int natts; - bool hasoid; - Size offset = 0; - Form_pg_attribute *attrs; - TupleDesc tupledesc; - RecordTypmodMap *mapent; - bool found; - int i; - - /* Extract remote typmod. */ - memcpy(&remotetypmod, &data[offset], sizeof(int32)); - offset += sizeof(int32); - - /* Extract attribute count. */ - memcpy(&natts, &data[offset], sizeof(int)); - offset += sizeof(int); - - /* Extract hasoid flag. */ - memcpy(&hasoid, &data[offset], sizeof(bool)); - offset += sizeof(bool); - - /* Extract attribute details. The tupledesc made here is just transient. */ - attrs = palloc(natts * sizeof(Form_pg_attribute)); - for (i = 0; i < natts; i++) - { - attrs[i] = palloc(sizeof(FormData_pg_attribute)); - memcpy(attrs[i], &data[offset], sizeof(FormData_pg_attribute)); - offset += sizeof(FormData_pg_attribute); - } - - /* We should have read the whole message. */ - Assert(offset == nbytes); - - /* Construct TupleDesc, and assign a local typmod. */ - tupledesc = CreateTupleDesc(natts, hasoid, attrs); - tupledesc = BlessTupleDesc(tupledesc); - - /* Create mapping hashtable if it doesn't exist already. */ - if (reader->typmodmap == NULL) - { - HASHCTL ctl; - - MemSet(&ctl, 0, sizeof(ctl)); - ctl.keysize = sizeof(int32); - ctl.entrysize = sizeof(RecordTypmodMap); - ctl.hcxt = reader->mycontext; - reader->typmodmap = hash_create("tqueue receiver record type hashtable", - 100, &ctl, - HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); - } - - /* Create map entry. */ - mapent = hash_search(reader->typmodmap, &remotetypmod, HASH_ENTER, - &found); - if (found) - elog(ERROR, "duplicate tqueue control message for typmod %d", - remotetypmod); - mapent->localtypmod = tupledesc->tdtypmod; - - elog(DEBUG3, "tqueue mapping remote typmod %d to local typmod %d", - remotetypmod, mapent->localtypmod); -} - -/* - * Build remap info for the specified data type, storing it in mycontext. - * Returns NULL if neither the type nor any subtype could require remapping. 
- */ -static TupleRemapInfo * -BuildTupleRemapInfo(Oid typid, MemoryContext mycontext) -{ - HeapTuple tup; - Form_pg_type typ; - - /* This is recursive, so it could be driven to stack overflow. */ - check_stack_depth(); - -restart: - tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid)); - if (!HeapTupleIsValid(tup)) - elog(ERROR, "cache lookup failed for type %u", typid); - typ = (Form_pg_type) GETSTRUCT(tup); - - /* Look through domains to underlying base type. */ - if (typ->typtype == TYPTYPE_DOMAIN) - { - typid = typ->typbasetype; - ReleaseSysCache(tup); - goto restart; - } - - /* If it's a true array type, deal with it that way. */ - if (OidIsValid(typ->typelem) && typ->typlen == -1) - { - typid = typ->typelem; - ReleaseSysCache(tup); - return BuildArrayRemapInfo(typid, mycontext); - } - - /* Similarly, deal with ranges appropriately. */ - if (typ->typtype == TYPTYPE_RANGE) - { - ReleaseSysCache(tup); - return BuildRangeRemapInfo(typid, mycontext); - } - - /* - * If it's a composite type (including RECORD), set up for remapping. We - * don't attempt to determine the status of subfields here, since we do - * not have enough information yet; just mark everything invalid. - */ - if (typ->typtype == TYPTYPE_COMPOSITE || typid == RECORDOID) - { - TupleRemapInfo *remapinfo; - - remapinfo = (TupleRemapInfo *) - MemoryContextAlloc(mycontext, sizeof(TupleRemapInfo)); - remapinfo->remapclass = TQUEUE_REMAP_RECORD; - remapinfo->u.rec.rectypid = InvalidOid; - remapinfo->u.rec.rectypmod = -1; - remapinfo->u.rec.localtypmod = -1; - remapinfo->u.rec.tupledesc = NULL; - remapinfo->u.rec.field_remap = NULL; - ReleaseSysCache(tup); - return remapinfo; - } - - /* Nothing else can possibly need remapping attention. */ - ReleaseSysCache(tup); - return NULL; -} - -static TupleRemapInfo * -BuildArrayRemapInfo(Oid elemtypid, MemoryContext mycontext) -{ - TupleRemapInfo *remapinfo; - TupleRemapInfo *element_remapinfo; - - /* See if element type requires remapping. */ - element_remapinfo = BuildTupleRemapInfo(elemtypid, mycontext); - /* If not, the array doesn't either. */ - if (element_remapinfo == NULL) - return NULL; - /* OK, set up to remap the array. */ - remapinfo = (TupleRemapInfo *) - MemoryContextAlloc(mycontext, sizeof(TupleRemapInfo)); - remapinfo->remapclass = TQUEUE_REMAP_ARRAY; - get_typlenbyvalalign(elemtypid, - &remapinfo->u.arr.typlen, - &remapinfo->u.arr.typbyval, - &remapinfo->u.arr.typalign); - remapinfo->u.arr.element_remap = element_remapinfo; - return remapinfo; -} - -static TupleRemapInfo * -BuildRangeRemapInfo(Oid rngtypid, MemoryContext mycontext) -{ - TupleRemapInfo *remapinfo; - TupleRemapInfo *bound_remapinfo; - TypeCacheEntry *typcache; - - /* - * Get range info from the typcache. We assume this pointer will stay - * valid for the duration of the query. - */ - typcache = lookup_type_cache(rngtypid, TYPECACHE_RANGE_INFO); - if (typcache->rngelemtype == NULL) - elog(ERROR, "type %u is not a range type", rngtypid); - - /* See if range bound type requires remapping. */ - bound_remapinfo = BuildTupleRemapInfo(typcache->rngelemtype->type_id, - mycontext); - /* If not, the range doesn't either. */ - if (bound_remapinfo == NULL) - return NULL; - /* OK, set up to remap the range. 
*/ - remapinfo = (TupleRemapInfo *) - MemoryContextAlloc(mycontext, sizeof(TupleRemapInfo)); - remapinfo->remapclass = TQUEUE_REMAP_RANGE; - remapinfo->u.rng.typcache = typcache; - remapinfo->u.rng.bound_remap = bound_remapinfo; - return remapinfo; -} - -/* - * Build remap info for fields of the type described by the given tupdesc. - * Returns an array of TupleRemapInfo pointers, or NULL if no field - * requires remapping. Data is allocated in mycontext. - */ -static TupleRemapInfo ** -BuildFieldRemapInfo(TupleDesc tupledesc, MemoryContext mycontext) -{ - TupleRemapInfo **remapinfo; - bool noop = true; - int i; - - /* Recursively determine the remapping status of each field. */ - remapinfo = (TupleRemapInfo **) - MemoryContextAlloc(mycontext, - tupledesc->natts * sizeof(TupleRemapInfo *)); - for (i = 0; i < tupledesc->natts; i++) - { - Form_pg_attribute attr = TupleDescAttr(tupledesc, i); - - if (attr->attisdropped) - { - remapinfo[i] = NULL; - continue; - } - remapinfo[i] = BuildTupleRemapInfo(attr->atttypid, mycontext); - if (remapinfo[i] != NULL) - noop = false; - } - - /* If no fields require remapping, report that by returning NULL. */ - if (noop) - { - pfree(remapinfo); - remapinfo = NULL; - } - - return remapinfo; + return heap_copytuple(&htup); } diff --git a/src/include/executor/execParallel.h b/src/include/executor/execParallel.h index ed231f2d53..e1b3e7af1f 100644 --- a/src/include/executor/execParallel.h +++ b/src/include/executor/execParallel.h @@ -36,8 +36,7 @@ typedef struct ParallelExecutorInfo extern ParallelExecutorInfo *ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, int64 tuples_needed); -extern void ExecParallelCreateReaders(ParallelExecutorInfo *pei, - TupleDesc tupDesc); +extern void ExecParallelCreateReaders(ParallelExecutorInfo *pei); extern void ExecParallelFinish(ParallelExecutorInfo *pei); extern void ExecParallelCleanup(ParallelExecutorInfo *pei); extern void ExecParallelReinitialize(PlanState *planstate, diff --git a/src/include/executor/tqueue.h b/src/include/executor/tqueue.h index a717ac6184..fdc9deb2b2 100644 --- a/src/include/executor/tqueue.h +++ b/src/include/executor/tqueue.h @@ -24,8 +24,7 @@ typedef struct TupleQueueReader TupleQueueReader; extern DestReceiver *CreateTupleQueueDestReceiver(shm_mq_handle *handle); /* Use these to receive tuples from a shm_mq. */ -extern TupleQueueReader *CreateTupleQueueReader(shm_mq_handle *handle, - TupleDesc tupledesc); +extern TupleQueueReader *CreateTupleQueueReader(shm_mq_handle *handle); extern void DestroyTupleQueueReader(TupleQueueReader *reader); extern HeapTuple TupleQueueReaderNext(TupleQueueReader *reader, bool nowait, bool *done); From fba366555659fc1dc66a825196be3cc68640d289 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 15 Sep 2017 00:25:33 -0400 Subject: [PATCH 0184/1087] Avoid duplicate typedef for SharedRecordTypmodRegistry. This isn't our usual solution for such problems, and older compilers (not terribly old, either) don't like it. Per buildfarm and local testing. 
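For context, the incompatibility is the duplicated typedef itself: C89
(and pre-C11 compilers generally) reject repeating a typedef even with
an identical definition, while a bare struct declaration may be repeated
freely.  A minimal sketch of the pattern, assuming typcache.h carries
the canonical typedef:

    /* typcache.h (canonical definition site): */
    typedef struct SharedRecordTypmodRegistry SharedRecordTypmodRegistry;

    /* session.h must not repeat the typedef; under strict C89 that is
     * a redefinition error.  A forward struct declaration suffices,
     * because the struct is only ever used through a pointer: */
    struct SharedRecordTypmodRegistry;

    struct Session
    {
        /* a pointer to an incomplete type is always legal */
        struct SharedRecordTypmodRegistry *shared_typmod_registry;
    };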
--- src/include/access/session.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/include/access/session.h b/src/include/access/session.h index 8376dc5312..45986208c8 100644 --- a/src/include/access/session.h +++ b/src/include/access/session.h @@ -14,8 +14,8 @@ #include "lib/dshash.h" -/* Defined in typcache.c */ -typedef struct SharedRecordTypmodRegistry SharedRecordTypmodRegistry; +/* Avoid including typcache.h */ +struct SharedRecordTypmodRegistry; /* * A struct encapsulating some elements of a user's session. For now this @@ -28,7 +28,7 @@ typedef struct Session dsa_area *area; /* The session-scoped DSA area. */ /* State managed by typcache.c. */ - SharedRecordTypmodRegistry *shared_typmod_registry; + struct SharedRecordTypmodRegistry *shared_typmod_registry; dshash_table *shared_record_table; dshash_table *shared_typmod_table; } Session; From eaa4070543c2e36f0521f831d051265139875254 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 15 Sep 2017 00:57:38 -0400 Subject: [PATCH 0185/1087] Don't use anonymous unions. Commit cc5f81366c36b3dd8f02bd9be1cf75b2cc8482bd introduced a language feature that is not acceptable to strict C89 compilers. Thomas Munro Per buildfarm. --- src/backend/utils/cache/typcache.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index 3be853a85a..9b94f1b3c3 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -172,7 +172,7 @@ typedef struct SharedRecordTableKey { TupleDesc local_tupdesc; dsa_pointer shared_tupdesc; - }; + } u; bool shared; } SharedRecordTableKey; @@ -209,14 +209,14 @@ shared_record_table_compare(const void *a, const void *b, size_t size, TupleDesc t2; if (k1->shared) - t1 = (TupleDesc) dsa_get_address(area, k1->shared_tupdesc); + t1 = (TupleDesc) dsa_get_address(area, k1->u.shared_tupdesc); else - t1 = k1->local_tupdesc; + t1 = k1->u.local_tupdesc; if (k2->shared) - t2 = (TupleDesc) dsa_get_address(area, k2->shared_tupdesc); + t2 = (TupleDesc) dsa_get_address(area, k2->u.shared_tupdesc); else - t2 = k2->local_tupdesc; + t2 = k2->u.local_tupdesc; return equalTupleDescs(t1, t2) ? 0 : 1; } @@ -232,9 +232,9 @@ shared_record_table_hash(const void *a, size_t size, void *arg) TupleDesc t; if (k->shared) - t = (TupleDesc) dsa_get_address(area, k->shared_tupdesc); + t = (TupleDesc) dsa_get_address(area, k->u.shared_tupdesc); else - t = k->local_tupdesc; + t = k->u.local_tupdesc; return hashTupleDesc(t); } @@ -1710,14 +1710,14 @@ SharedRecordTypmodRegistryInit(SharedRecordTypmodRegistry *registry, /* Insert into the record table. */ record_table_key.shared = false; - record_table_key.local_tupdesc = tupdesc; + record_table_key.u.local_tupdesc = tupdesc; record_table_entry = dshash_find_or_insert(record_table, &record_table_key, &found); if (!found) { record_table_entry->key.shared = true; - record_table_entry->key.shared_tupdesc = shared_dp; + record_table_entry->key.u.shared_tupdesc = shared_dp; } dshash_release_lock(record_table, record_table_entry); } @@ -2261,7 +2261,7 @@ find_or_make_matching_shared_tupledesc(TupleDesc tupdesc) /* Try to find a matching tuple descriptor in the record table. 
*/ key.shared = false; - key.local_tupdesc = tupdesc; + key.u.local_tupdesc = tupdesc; record_table_entry = (SharedRecordTableEntry *) dshash_find(CurrentSession->shared_record_table, &key, false); if (record_table_entry) @@ -2271,7 +2271,7 @@ find_or_make_matching_shared_tupledesc(TupleDesc tupdesc) record_table_entry); result = (TupleDesc) dsa_get_address(CurrentSession->area, - record_table_entry->key.shared_tupdesc); + record_table_entry->key.u.shared_tupdesc); Assert(result->tdrefcount == -1); return result; @@ -2342,7 +2342,7 @@ find_or_make_matching_shared_tupledesc(TupleDesc tupdesc) /* Store it and return it. */ record_table_entry->key.shared = true; - record_table_entry->key.shared_tupdesc = shared_dp; + record_table_entry->key.u.shared_tupdesc = shared_dp; dshash_release_lock(CurrentSession->shared_record_table, record_table_entry); result = (TupleDesc) From 60cd2f8a2d1a1e763b2df015e2e660caa9e39a67 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 15 Sep 2017 08:07:22 -0400 Subject: [PATCH 0186/1087] Test coverage for CREATE/ALTER FOREIGN DATA WRAPPER .. HANDLER. Amit Langote, per a suggestion from Mark Dilger. Reviewed by Marc Dilger and Ashutosh Bapat. Discussion: http://postgr.es/m/CAFjFpReL0oeN7SCpnsEPbqJhB2Bp1wnH1uvbOF_w6KEuv6ZXvg@mail.gmail.com --- src/test/regress/expected/foreign_data.out | 28 +++++++++++++++---- .../regress/input/create_function_1.source | 6 ++++ .../regress/output/create_function_1.source | 5 ++++ src/test/regress/regress.c | 7 +++++ src/test/regress/sql/foreign_data.sql | 13 +++++++++ 5 files changed, 53 insertions(+), 6 deletions(-) diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out index c6e558b07f..331f7a911f 100644 --- a/src/test/regress/expected/foreign_data.out +++ b/src/test/regress/expected/foreign_data.out @@ -89,6 +89,14 @@ CREATE FOREIGN DATA WRAPPER foo VALIDATOR postgresql_fdw_validator; postgresql | regress_foreign_data_user | - | postgresql_fdw_validator | | | (3 rows) +-- HANDLER related checks +CREATE FUNCTION invalid_fdw_handler() RETURNS int LANGUAGE SQL AS 'SELECT 1;'; +CREATE FOREIGN DATA WRAPPER test_fdw HANDLER invalid_fdw_handler; -- ERROR +ERROR: function invalid_fdw_handler must return type fdw_handler +CREATE FOREIGN DATA WRAPPER test_fdw HANDLER test_fdw_handler HANDLER invalid_fdw_handler; -- ERROR +ERROR: conflicting or redundant options +CREATE FOREIGN DATA WRAPPER test_fdw HANDLER test_fdw_handler; +DROP FOREIGN DATA WRAPPER test_fdw; -- ALTER FOREIGN DATA WRAPPER ALTER FOREIGN DATA WRAPPER foo; -- ERROR ERROR: syntax error at or near ";" @@ -188,18 +196,26 @@ ALTER FOREIGN DATA WRAPPER foo RENAME TO foo1; (3 rows) ALTER FOREIGN DATA WRAPPER foo1 RENAME TO foo; +-- HANDLER related checks +ALTER FOREIGN DATA WRAPPER foo HANDLER invalid_fdw_handler; -- ERROR +ERROR: function invalid_fdw_handler must return type fdw_handler +ALTER FOREIGN DATA WRAPPER foo HANDLER test_fdw_handler HANDLER anything; -- ERROR +ERROR: conflicting or redundant options +ALTER FOREIGN DATA WRAPPER foo HANDLER test_fdw_handler; +WARNING: changing the foreign-data wrapper handler can change behavior of existing foreign tables +DROP FUNCTION invalid_fdw_handler(); -- DROP FOREIGN DATA WRAPPER DROP FOREIGN DATA WRAPPER nonexistent; -- ERROR ERROR: foreign-data wrapper "nonexistent" does not exist DROP FOREIGN DATA WRAPPER IF EXISTS nonexistent; NOTICE: foreign-data wrapper "nonexistent" does not exist, skipping \dew+ - List of foreign-data wrappers - Name | Owner | Handler | Validator | Access 
privileges | FDW options | Description -------------+---------------------------+---------+--------------------------+-------------------+------------------------------+------------- - dummy | regress_foreign_data_user | - | - | | | useless - foo | regress_test_role_super | - | - | | (b '3', c '4', a '2', d '5') | - postgresql | regress_foreign_data_user | - | postgresql_fdw_validator | | | + List of foreign-data wrappers + Name | Owner | Handler | Validator | Access privileges | FDW options | Description +------------+---------------------------+------------------+--------------------------+-------------------+------------------------------+------------- + dummy | regress_foreign_data_user | - | - | | | useless + foo | regress_test_role_super | test_fdw_handler | - | | (b '3', c '4', a '2', d '5') | + postgresql | regress_foreign_data_user | - | postgresql_fdw_validator | | | (3 rows) DROP ROLE regress_test_role_super; -- ERROR diff --git a/src/test/regress/input/create_function_1.source b/src/test/regress/input/create_function_1.source index f2b1561cc2..cde78eb1a0 100644 --- a/src/test/regress/input/create_function_1.source +++ b/src/test/regress/input/create_function_1.source @@ -62,6 +62,12 @@ CREATE FUNCTION test_atomic_ops() AS '@libdir@/regress@DLSUFFIX@' LANGUAGE C; +-- Tests creating a FDW handler +CREATE FUNCTION test_fdw_handler() + RETURNS fdw_handler + AS '@libdir@/regress@DLSUFFIX@', 'test_fdw_handler' + LANGUAGE C; + -- Things that shouldn't work: CREATE FUNCTION test1 (int) RETURNS int LANGUAGE SQL diff --git a/src/test/regress/output/create_function_1.source b/src/test/regress/output/create_function_1.source index 957595c51e..ab601be375 100644 --- a/src/test/regress/output/create_function_1.source +++ b/src/test/regress/output/create_function_1.source @@ -55,6 +55,11 @@ CREATE FUNCTION test_atomic_ops() RETURNS bool AS '@libdir@/regress@DLSUFFIX@' LANGUAGE C; +-- Tests creating a FDW handler +CREATE FUNCTION test_fdw_handler() + RETURNS fdw_handler + AS '@libdir@/regress@DLSUFFIX@', 'test_fdw_handler' + LANGUAGE C; -- Things that shouldn't work: CREATE FUNCTION test1 (int) RETURNS int LANGUAGE SQL AS 'SELECT ''not an integer'';'; diff --git a/src/test/regress/regress.c b/src/test/regress/regress.c index 734947cc98..0a123f2b39 100644 --- a/src/test/regress/regress.c +++ b/src/test/regress/regress.c @@ -1096,3 +1096,10 @@ test_atomic_ops(PG_FUNCTION_ARGS) PG_RETURN_BOOL(true); } + +PG_FUNCTION_INFO_V1(test_fdw_handler); +Datum +test_fdw_handler(PG_FUNCTION_ARGS) +{ + PG_RETURN_NULL(); +} diff --git a/src/test/regress/sql/foreign_data.sql b/src/test/regress/sql/foreign_data.sql index ebe8ffbffe..1af7258718 100644 --- a/src/test/regress/sql/foreign_data.sql +++ b/src/test/regress/sql/foreign_data.sql @@ -51,6 +51,13 @@ RESET ROLE; CREATE FOREIGN DATA WRAPPER foo VALIDATOR postgresql_fdw_validator; \dew+ +-- HANDLER related checks +CREATE FUNCTION invalid_fdw_handler() RETURNS int LANGUAGE SQL AS 'SELECT 1;'; +CREATE FOREIGN DATA WRAPPER test_fdw HANDLER invalid_fdw_handler; -- ERROR +CREATE FOREIGN DATA WRAPPER test_fdw HANDLER test_fdw_handler HANDLER invalid_fdw_handler; -- ERROR +CREATE FOREIGN DATA WRAPPER test_fdw HANDLER test_fdw_handler; +DROP FOREIGN DATA WRAPPER test_fdw; + -- ALTER FOREIGN DATA WRAPPER ALTER FOREIGN DATA WRAPPER foo; -- ERROR ALTER FOREIGN DATA WRAPPER foo VALIDATOR bar; -- ERROR @@ -88,6 +95,12 @@ ALTER FOREIGN DATA WRAPPER foo RENAME TO foo1; \dew+ ALTER FOREIGN DATA WRAPPER foo1 RENAME TO foo; +-- HANDLER related checks +ALTER FOREIGN DATA WRAPPER 
foo HANDLER invalid_fdw_handler; -- ERROR +ALTER FOREIGN DATA WRAPPER foo HANDLER test_fdw_handler HANDLER anything; -- ERROR +ALTER FOREIGN DATA WRAPPER foo HANDLER test_fdw_handler; +DROP FUNCTION invalid_fdw_handler(); + -- DROP FOREIGN DATA WRAPPER DROP FOREIGN DATA WRAPPER nonexistent; -- ERROR DROP FOREIGN DATA WRAPPER IF EXISTS nonexistent; From 71aa4801a8184eb422c6bf51631bda76f1011278 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 15 Sep 2017 10:52:30 -0400 Subject: [PATCH 0187/1087] Get rid of shared_record_typmod_registry_worker_detach; it doesn't work. This code is unsafe, as proven by buildfarm failures, because it tries to access shared memory that might already be gone. It's also unnecessary, because we're about to exit the process anyway and so the record type cache should never be accessed again. The idea was to lay some foundations for someday recycling workers --- which would require attaching to a different shared tupdesc registry --- but that will require considerably more thought. In the meantime let's save some bytes by just removing the nonfunctional code. Problem identification, and proposal to fix by removing functionality from the detach function, by Thomas Munro. I went a bit further by removing the function altogether. Discussion: https://postgr.es/m/E1dsguX-00056N-9x@gemulon.postgresql.org --- src/backend/utils/cache/typcache.c | 74 ++++-------------------------- 1 file changed, 9 insertions(+), 65 deletions(-) diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index 9b94f1b3c3..fd80c128cb 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -285,8 +285,6 @@ static EnumItem *find_enumitem(TypeCacheEnumData *enumdata, Oid arg); static int enum_oid_cmp(const void *left, const void *right); static void shared_record_typmod_registry_detach(dsm_segment *segment, Datum datum); -static void shared_record_typmod_registry_worker_detach(dsm_segment *segment, - Datum datum); static TupleDesc find_or_make_matching_shared_tupledesc(TupleDesc tupdesc); static dsa_pointer share_tupledesc(dsa_area *area, TupleDesc tupdesc, uint32 typmod); @@ -1768,8 +1766,9 @@ SharedRecordTypmodRegistryAttach(SharedRecordTypmodRegistry *registry) * a freshly started parallel worker. If we ever support worker * recycling, a worker would need to zap its local cache in between * servicing different queries, in order to be able to call this and - * synchronize typmods with a new leader; see - * shared_record_typmod_registry_detach(). + * synchronize typmods with a new leader; but that's problematic because + * we can't be very sure that record-typmod-related state hasn't escaped + * to anywhere else in the process. */ Assert(NextRecordTypmod == 0); @@ -1788,11 +1787,12 @@ SharedRecordTypmodRegistryAttach(SharedRecordTypmodRegistry *registry) MemoryContextSwitchTo(old_context); /* - * We install a different detach callback that performs a more complete - * reset of backend local state. + * Set up detach hook to run at worker exit. Currently this is the same + * as the leader's detach hook, but in future they might need to be + * different. */ on_dsm_detach(CurrentSession->segment, - shared_record_typmod_registry_worker_detach, + shared_record_typmod_registry_detach, PointerGetDatum(registry)); /* @@ -2353,10 +2353,8 @@ find_or_make_matching_shared_tupledesc(TupleDesc tupdesc) } /* - * Detach hook to forget about the current shared record typmod - * infrastructure. 
This is registered directly in leader backends, and - * reached only in case of error or shutdown. It's also reached indirectly - * via the worker detach callback below. + * On-DSM-detach hook to forget about the current shared record typmod + * infrastructure. This is currently used by both leader and workers. */ static void shared_record_typmod_registry_detach(dsm_segment *segment, Datum datum) @@ -2374,57 +2372,3 @@ shared_record_typmod_registry_detach(dsm_segment *segment, Datum datum) } CurrentSession->shared_typmod_registry = NULL; } - -/* - * Deatch hook allowing workers to disconnect from shared record typmod - * registry. The resulting state should allow a worker to attach to a - * different leader, if worker reuse pools are invented. - */ -static void -shared_record_typmod_registry_worker_detach(dsm_segment *segment, Datum datum) -{ - /* - * Forget everything we learned about record typmods as part of the - * session we are disconnecting from, and return to the initial state. - */ - if (RecordCacheArray != NULL) - { - int32 i; - - for (i = 0; i < RecordCacheArrayLen; ++i) - { - if (RecordCacheArray[i] != NULL) - { - TupleDesc tupdesc = RecordCacheArray[i]; - - /* - * Pointers to tuple descriptors in shared memory are not - * reference counted, so we are not responsible for freeing - * them. They'll survive as long as the shared session - * exists, which should be as long as the owning leader - * backend exists. In theory we do need to free local - * reference counted tuple descriptors however, and we can't - * do that with DescTupleDescRefCount() because we aren't - * using a resource owner. In practice we don't expect to - * find any non-shared TupleDesc object in a worker. - */ - if (tupdesc->tdrefcount != -1) - { - Assert(tupdesc->tdrefcount > 0); - if (--tupdesc->tdrefcount == 0) - FreeTupleDesc(tupdesc); - } - } - } - pfree(RecordCacheArray); - RecordCacheArray = NULL; - } - if (RecordCacheHash != NULL) - { - hash_destroy(RecordCacheHash); - RecordCacheHash = NULL; - } - NextRecordTypmod = 0; - /* Call the code common to leader and worker detach. */ - shared_record_typmod_registry_detach(segment, datum); -} From f0e60ee4bc04fd4865dbaf2139d50d6fe71c1bc3 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 15 Sep 2017 11:41:15 -0400 Subject: [PATCH 0188/1087] Add LDAP authentication test suite Like the SSL test suite, this will not be run by default. Reviewed-by: Thomas Munro --- src/test/Makefile | 7 +- src/test/ldap/.gitignore | 2 + src/test/ldap/Makefile | 20 ++++ src/test/ldap/README | 20 ++++ src/test/ldap/authdata.ldif | 32 +++++++ src/test/ldap/t/001_auth.pl | 177 ++++++++++++++++++++++++++++++++++++ 6 files changed, 255 insertions(+), 3 deletions(-) create mode 100644 src/test/ldap/.gitignore create mode 100644 src/test/ldap/Makefile create mode 100644 src/test/ldap/README create mode 100644 src/test/ldap/authdata.ldif create mode 100644 src/test/ldap/t/001_auth.pl diff --git a/src/test/Makefile b/src/test/Makefile index dbfa799a84..73abf163f1 100644 --- a/src/test/Makefile +++ b/src/test/Makefile @@ -15,9 +15,10 @@ include $(top_builddir)/src/Makefile.global SUBDIRS = perl regress isolation modules authentication recovery subscription # We don't build or execute examples/, locale/, or thread/ by default, -# but we do want "make clean" etc to recurse into them. Likewise for ssl/, -# because the SSL test suite is not secure to run on a multi-user system. -ALWAYS_SUBDIRS = examples locale thread ssl +# but we do want "make clean" etc to recurse into them. 
Likewise for +# ldap/ and ssl/, because these test suites are not secure to run on a +# multi-user system. +ALWAYS_SUBDIRS = examples ldap locale thread ssl # We want to recurse to all subdirs for all standard targets, except that # installcheck and install should not recurse into the subdirectory "modules". diff --git a/src/test/ldap/.gitignore b/src/test/ldap/.gitignore new file mode 100644 index 0000000000..871e943d50 --- /dev/null +++ b/src/test/ldap/.gitignore @@ -0,0 +1,2 @@ +# Generated by test suite +/tmp_check/ diff --git a/src/test/ldap/Makefile b/src/test/ldap/Makefile new file mode 100644 index 0000000000..9dd1bbeade --- /dev/null +++ b/src/test/ldap/Makefile @@ -0,0 +1,20 @@ +#------------------------------------------------------------------------- +# +# Makefile for src/test/ldap +# +# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1994, Regents of the University of California +# +# src/test/ldap/Makefile +# +#------------------------------------------------------------------------- + +subdir = src/test/ldap +top_builddir = ../../.. +include $(top_builddir)/src/Makefile.global + +check: + $(prove_check) + +clean distclean maintainer-clean: + rm -rf tmp_check diff --git a/src/test/ldap/README b/src/test/ldap/README new file mode 100644 index 0000000000..61579f87c6 --- /dev/null +++ b/src/test/ldap/README @@ -0,0 +1,20 @@ +src/test/ldap/README + +Tests for LDAP functionality +============================ + +This directory contains a test suite for LDAP functionality. This +requires a full OpenLDAP installation, including server and client +tools, and is therefore kept separate and not run by default. You +might need to adjust some paths in the test file to have it find +OpenLDAP in a place that hadn't been thought of yet. + +Also, this test suite creates an LDAP server that listens for TCP/IP +connections on localhost without any real access control, so it is not +safe to run this on a system where there might be untrusted local +users. 
+ +Running the tests +================= + + make check diff --git a/src/test/ldap/authdata.ldif b/src/test/ldap/authdata.ldif new file mode 100644 index 0000000000..c0a15daffb --- /dev/null +++ b/src/test/ldap/authdata.ldif @@ -0,0 +1,32 @@ +dn: dc=example,dc=net +objectClass: top +objectClass: dcObject +objectClass: organization +dc: example +o: ExampleCo + +dn: uid=test1,dc=example,dc=net +objectClass: inetOrgPerson +objectClass: posixAccount +uid: test1 +sn: Lastname +givenName: Firstname +cn: First Test User +displayName: First Test User +uidNumber: 101 +gidNumber: 100 +homeDirectory: /home/test1 +mail: test1@example.net + +dn: uid=test2,dc=example,dc=net +objectClass: inetOrgPerson +objectClass: posixAccount +uid: test2 +sn: Lastname +givenName: Firstname +cn: Second Test User +displayName: Second Test User +uidNumber: 102 +gidNumber: 100 +homeDirectory: /home/test2 +mail: test2@example.net diff --git a/src/test/ldap/t/001_auth.pl b/src/test/ldap/t/001_auth.pl new file mode 100644 index 0000000000..01b90e95a1 --- /dev/null +++ b/src/test/ldap/t/001_auth.pl @@ -0,0 +1,177 @@ +use strict; +use warnings; +use TestLib; +use PostgresNode; +use Test::More tests => 14; + +my ($slapd, $ldap_bin_dir, $ldap_schema_dir); + +$ldap_bin_dir = undef; # usually in PATH + +if ($^O eq 'darwin') +{ + $slapd = '/usr/local/opt/openldap/libexec/slapd'; + $ldap_schema_dir = '/usr/local/etc/openldap/schema'; +} +elsif ($^O eq 'linux') +{ + $slapd = '/usr/sbin/slapd'; + $ldap_schema_dir = '/etc/ldap/schema' if -f '/etc/ldap/schema'; + $ldap_schema_dir = '/etc/openldap/schema' if -f '/etc/openldap/schema'; +} +elsif ($^O eq 'freebsd') +{ + $slapd = '/usr/local/libexec/slapd'; + $ldap_schema_dir = '/usr/local/etc/openldap/schema'; +} + +# make your own edits here +#$slapd = ''; +#$ldap_bin_dir = ''; +#$ldap_schema_dir = ''; + +$ENV{PATH} = "$ldap_bin_dir:$ENV{PATH}" if $ldap_bin_dir; + +my $ldap_datadir = "${TestLib::tmp_check}/openldap-data"; +my $slapd_conf = "${TestLib::tmp_check}/slapd.conf"; +my $slapd_pidfile = "${TestLib::tmp_check}/slapd.pid"; +my $slapd_logfile = "${TestLib::tmp_check}/slapd.log"; +my $ldap_conf = "${TestLib::tmp_check}/ldap.conf"; +my $ldap_server = 'localhost'; +my $ldap_port = int(rand() * 16384) + 49152; +my $ldap_url = "ldap://$ldap_server:$ldap_port"; +my $ldap_basedn = 'dc=example,dc=net'; +my $ldap_rootdn = 'cn=Manager,dc=example,dc=net'; +my $ldap_rootpw = 'secret'; +my $ldap_pwfile = "${TestLib::tmp_check}/ldappassword"; + +note "setting up slapd"; + +append_to_file($slapd_conf, +qq{include $ldap_schema_dir/core.schema +include $ldap_schema_dir/cosine.schema +include $ldap_schema_dir/nis.schema +include $ldap_schema_dir/inetorgperson.schema + +pidfile $slapd_pidfile +logfile $slapd_logfile + +access to * + by * read + by anonymous auth + +database ldif +directory $ldap_datadir + +suffix "dc=example,dc=net" +rootdn "$ldap_rootdn" +rootpw $ldap_rootpw}); + +mkdir $ldap_datadir or die; + +system_or_bail $slapd, '-f', $slapd_conf, '-h', $ldap_url; + +END +{ + kill 'INT', `cat $slapd_pidfile` if -f $slapd_pidfile; +} + +append_to_file($ldap_pwfile, $ldap_rootpw); +chmod 0600, $ldap_pwfile or die; + +$ENV{'LDAPURI'} = $ldap_url; +$ENV{'LDAPBINDDN'} = $ldap_rootdn; + +note "loading LDAP data"; + +system_or_bail 'ldapadd', '-x', '-y', $ldap_pwfile, '-f', 'authdata.ldif'; +system_or_bail 'ldappasswd', '-x', '-y', $ldap_pwfile, '-s', 'secret1', 'uid=test1,dc=example,dc=net'; +system_or_bail 'ldappasswd', '-x', '-y', $ldap_pwfile, '-s', 'secret2', 'uid=test2,dc=example,dc=net'; + +note 
"setting up PostgreSQL instance"; + +my $node = get_new_node('node'); +$node->init; +$node->start; + +$node->safe_psql('postgres', 'CREATE USER test0;'); +$node->safe_psql('postgres', 'CREATE USER test1;'); +$node->safe_psql('postgres', 'CREATE USER "test2@example.net";'); + +note "running tests"; + +sub test_access +{ + my ($node, $role, $expected_res, $test_name) = @_; + + my $res = $node->psql('postgres', 'SELECT 1', extra_params => [ '-U', $role ]); + is($res, $expected_res, $test_name); +} + +note "simple bind"; + +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapprefix="uid=" ldapsuffix=",dc=example,dc=net"}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'wrong'; +test_access($node, 'test0', 2, 'simple bind authentication fails if user not found in LDAP'); +test_access($node, 'test1', 2, 'simple bind authentication fails with wrong password'); +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'simple bind authentication succeeds'); + +note "search+bind"; + +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapbasedn="$ldap_basedn"}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'wrong'; +test_access($node, 'test0', 2, 'search+bind authentication fails if user not found in LDAP'); +test_access($node, 'test1', 2, 'search+bind authentication fails with wrong password'); +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'search+bind authentication succeeds'); + +note "LDAP URLs"; + +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldap_url/$ldap_basedn?uid?sub"}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'wrong'; +test_access($node, 'test0', 2, 'search+bind with LDAP URL authentication fails if user not found in LDAP'); +test_access($node, 'test1', 2, 'search+bind with LDAP URL authentication fails with wrong password'); +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'search+bind with LDAP URL authentication succeeds'); + +note "search filters"; + +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapbasedn="$ldap_basedn" ldapsearchfilter="(|(uid=\$username)(mail=\$username))"}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'search filter finds by uid'); +$ENV{"PGPASSWORD"} = 'secret2'; +test_access($node, 'test2@example.net', 0, 'search filter finds by mail'); + +note "search filters in LDAP URLs"; + +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldap_url/$ldap_basedn??sub?(|(uid=\$username)(mail=\$username))"}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'search filter finds by uid'); +$ENV{"PGPASSWORD"} = 'secret2'; +test_access($node, 'test2@example.net', 0, 'search filter finds by mail'); + +# This is not documented: You can combine ldapurl and other ldap* +# settings. ldapurl is always parsed first, then the other settings +# override. It might be useful in a case like this. +unlink($node->data_dir . 
'/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldap_url/$ldap_basedn??sub" ldapsearchfilter="(|(uid=\$username)(mail=\$username))"}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'combined LDAP URL and search filter'); From 3012061b8653a57a098c85f06f1f80ec9576711b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 15 Sep 2017 14:04:51 -0400 Subject: [PATCH 0189/1087] Apply pg_get_serial_sequence() to identity column sequences as well Bug: #14813 --- doc/src/sgml/func.sgml | 37 +++++++++++++++----------- src/backend/utils/adt/ruleutils.c | 11 ++++---- src/test/regress/expected/identity.out | 6 +++++ src/test/regress/expected/sequence.out | 6 +++++ src/test/regress/sql/identity.sql | 2 ++ src/test/regress/sql/sequence.sql | 2 ++ 6 files changed, 44 insertions(+), 20 deletions(-) diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 641b3b8f4e..2f036015cc 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -17034,8 +17034,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_serial_sequence(table_name, column_name) text - get name of the sequence that a serial, smallserial or bigserial column - uses + get name of the sequence that a serial or identity column uses pg_get_statisticsobjdef(statobj_oid) @@ -17223,19 +17222,27 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_serial_sequence returns the name of the sequence associated with a column, or NULL if no sequence is associated - with the column. The first input parameter is a table name with - optional schema, and the second parameter is a column name. Because - the first parameter is potentially a schema and table, it is not treated - as a double-quoted identifier, meaning it is lower cased by default, - while the second parameter, being just a column name, is treated as - double-quoted and has its case preserved. The function returns a value - suitably formatted for passing to sequence functions (see ). This association can be modified or - removed with ALTER SEQUENCE OWNED BY. (The function - probably should have been called - pg_get_owned_sequence; its current name reflects the fact - that it's typically used with serial or bigserial - columns.) + with the column. If the column is an identity column, the associated + sequence is the sequence internally created for the identity column. For + columns created using one of the serial types + (serial, smallserial, bigserial), it + is the sequence created for that serial column definition. In the latter + case, this association can be modified or removed with ALTER + SEQUENCE OWNED BY. (The function probably should have been called + pg_get_owned_sequence; its current name reflects the + fact that it has typically been used with serial + or bigserial columns.) The first input parameter is a table name + with optional schema, and the second parameter is a column name. Because + the first parameter is potentially a schema and table, it is not treated as + a double-quoted identifier, meaning it is lower cased by default, while the + second parameter, being just a column name, is treated as double-quoted and + has its case preserved. The function returns a value suitably formatted + for passing to sequence functions + (see ). 
A typical use is in reading the + current value of a sequence for an identity or serial column, for example: + +SELECT currval(pg_get_serial_sequence('sometable', 'id')); + diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 0ea5078218..84759b6149 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -2322,7 +2322,7 @@ pg_get_userbyid(PG_FUNCTION_ARGS) /* * pg_get_serial_sequence - * Get the name of the sequence used by a serial column, + * Get the name of the sequence used by an identity or serial column, * formatted suitably for passing to setval, nextval or currval. * First parameter is not treated as double-quoted, second parameter * is --- see documentation for reason. @@ -2380,13 +2380,14 @@ pg_get_serial_sequence(PG_FUNCTION_ARGS) Form_pg_depend deprec = (Form_pg_depend) GETSTRUCT(tup); /* - * We assume any auto dependency of a sequence on a column must be - * what we are looking for. (We need the relkind test because indexes - * can also have auto dependencies on columns.) + * Look for an auto dependency (serial column) or internal dependency + * (identity column) of a sequence on a column. (We need the relkind + * test because indexes can also have auto dependencies on columns.) */ if (deprec->classid == RelationRelationId && deprec->objsubid == 0 && - deprec->deptype == DEPENDENCY_AUTO && + (deprec->deptype == DEPENDENCY_AUTO || + deprec->deptype == DEPENDENCY_INTERNAL) && get_rel_relkind(deprec->objid) == RELKIND_SEQUENCE) { sequenceId = deprec->objid; diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out index 88b56dad93..2800ed7caa 100644 --- a/src/test/regress/expected/identity.out +++ b/src/test/regress/expected/identity.out @@ -26,6 +26,12 @@ SELECT sequence_name FROM information_schema.sequences WHERE sequence_name LIKE --------------- (0 rows) +SELECT pg_get_serial_sequence('itest1', 'a'); + pg_get_serial_sequence +------------------------ + public.itest1_a_seq +(1 row) + CREATE TABLE itest4 (a int, b text); ALTER TABLE itest4 ALTER COLUMN a ADD GENERATED ALWAYS AS IDENTITY; -- error, requires NOT NULL ERROR: column "a" of relation "itest4" must be declared NOT NULL before identity can be added diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out index a43b52cfc1..ea05a3382b 100644 --- a/src/test/regress/expected/sequence.out +++ b/src/test/regress/expected/sequence.out @@ -79,6 +79,12 @@ SELECT * FROM serialTest1; force | 100 (3 rows) +SELECT pg_get_serial_sequence('serialTest1', 'f2'); + pg_get_serial_sequence +--------------------------- + public.serialtest1_f2_seq +(1 row) + -- test smallserial / bigserial CREATE TABLE serialTest2 (f1 text, f2 serial, f3 smallserial, f4 serial2, f5 bigserial, f6 serial8); diff --git a/src/test/regress/sql/identity.sql b/src/test/regress/sql/identity.sql index a7e7b15737..7886456a56 100644 --- a/src/test/regress/sql/identity.sql +++ b/src/test/regress/sql/identity.sql @@ -12,6 +12,8 @@ SELECT table_name, column_name, column_default, is_nullable, is_identity, identi -- internal sequences should not be shown here SELECT sequence_name FROM information_schema.sequences WHERE sequence_name LIKE 'itest%'; +SELECT pg_get_serial_sequence('itest1', 'a'); + CREATE TABLE itest4 (a int, b text); ALTER TABLE itest4 ALTER COLUMN a ADD GENERATED ALWAYS AS IDENTITY; -- error, requires NOT NULL ALTER TABLE itest4 ALTER COLUMN a SET NOT NULL; diff --git a/src/test/regress/sql/sequence.sql 
b/src/test/regress/sql/sequence.sql index b41c5a753d..c50834a5b9 100644 --- a/src/test/regress/sql/sequence.sql +++ b/src/test/regress/sql/sequence.sql @@ -61,6 +61,8 @@ INSERT INTO serialTest1 VALUES ('wrong', NULL); SELECT * FROM serialTest1; +SELECT pg_get_serial_sequence('serialTest1', 'f2'); + -- test smallserial / bigserial CREATE TABLE serialTest2 (f1 text, f2 serial, f3 smallserial, f4 serial2, f5 bigserial, f6 serial8); From c29145f00df2aa873672ab9f1b3fc4ec6a0ec05d Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Sat, 16 Sep 2017 00:39:37 +0200 Subject: [PATCH 0190/1087] src/test/ldap: Fix test function in Linux port --- src/test/ldap/t/001_auth.pl | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/test/ldap/t/001_auth.pl b/src/test/ldap/t/001_auth.pl index 01b90e95a1..a7cac6210b 100644 --- a/src/test/ldap/t/001_auth.pl +++ b/src/test/ldap/t/001_auth.pl @@ -16,8 +16,8 @@ elsif ($^O eq 'linux') { $slapd = '/usr/sbin/slapd'; - $ldap_schema_dir = '/etc/ldap/schema' if -f '/etc/ldap/schema'; - $ldap_schema_dir = '/etc/openldap/schema' if -f '/etc/openldap/schema'; + $ldap_schema_dir = '/etc/ldap/schema' if -d '/etc/ldap/schema'; + $ldap_schema_dir = '/etc/openldap/schema' if -d '/etc/openldap/schema'; } elsif ($^O eq 'freebsd') { From 9361f6f54e3ff9bab84e80d4b1e15be79b48d60e Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 15 Sep 2017 21:15:55 -0400 Subject: [PATCH 0191/1087] After a MINVALUE/MAXVALUE bound, allow only more of the same. In the old syntax, which used UNBOUNDED, we had a similar restriction, but commit d363d42bb9a4399a0207bd3b371c966e22e06bd3, which changed the syntax, eliminated it. Put it back. Patch by me, reviewed by Dean Rasheed. Discussion: http://postgr.es/m/CA+Tgmobs+pLPC27tS3gOpEAxAffHrq5w509cvkwTf9pF6cWYbg@mail.gmail.com --- doc/src/sgml/ref/create_table.sgml | 11 +++-- src/backend/parser/parse_utilcmd.c | 48 ++++++++++++++++++++++ src/test/regress/expected/create_table.out | 12 +++--- src/test/regress/expected/inherit.out | 4 +- src/test/regress/expected/insert.out | 35 +++++++++++----- src/test/regress/sql/create_table.sql | 6 +-- src/test/regress/sql/inherit.sql | 4 +- src/test/regress/sql/insert.sql | 19 +++++---- 8 files changed, 102 insertions(+), 37 deletions(-) diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index 824253de40..1477288851 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -324,11 +324,10 @@ FROM ( { numeric_literal | - Note that any values after MINVALUE or - MAXVALUE in a partition bound are ignored; so the bound - (10, MINVALUE, 0) is equivalent to - (10, MINVALUE, 10) and (10, MINVALUE, MINVALUE) - and (10, MINVALUE, MAXVALUE). + Note that if MINVALUE or MAXVALUE is used for + one column of a partitioning bound, the same value must be used for all + subsequent columns. For example, (10, MINVALUE, 0) is not + a valid bound; you should write (10, MINVALUE, MINVALUE). 
@@ -1665,7 +1664,7 @@ CREATE TABLE measurement_y2016m07 CREATE TABLE measurement_ym_older PARTITION OF measurement_year_month - FOR VALUES FROM (MINVALUE, 0) TO (2016, 11); + FOR VALUES FROM (MINVALUE, MINVALUE) TO (2016, 11); CREATE TABLE measurement_ym_y2016m11 PARTITION OF measurement_year_month diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 655da02c10..27e568fc62 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -135,6 +135,7 @@ static void transformConstraintAttrs(CreateStmtContext *cxt, static void transformColumnType(CreateStmtContext *cxt, ColumnDef *column); static void setSchemaName(char *context_schema, char **stmt_schema_name); static void transformPartitionCmd(CreateStmtContext *cxt, PartitionCmd *cmd); +static void validateInfiniteBounds(ParseState *pstate, List *blist); static Const *transformPartitionBoundValue(ParseState *pstate, A_Const *con, const char *colName, Oid colType, int32 colTypmod); @@ -3397,6 +3398,13 @@ transformPartitionBound(ParseState *pstate, Relation parent, (errcode(ERRCODE_INVALID_TABLE_DEFINITION), errmsg("TO must specify exactly one value per partitioning column"))); + /* + * Once we see MINVALUE or MAXVALUE for one column, the remaining + * columns must be the same. + */ + validateInfiniteBounds(pstate, spec->lowerdatums); + validateInfiniteBounds(pstate, spec->upperdatums); + /* Transform all the constants */ i = j = 0; result_spec->lowerdatums = result_spec->upperdatums = NIL; @@ -3468,6 +3476,46 @@ transformPartitionBound(ParseState *pstate, Relation parent, return result_spec; } +/* + * validateInfiniteBounds + * + * Check that a MAXVALUE or MINVALUE specification in a partition bound is + * followed only by more of the same. + */ +static void +validateInfiniteBounds(ParseState *pstate, List *blist) +{ + ListCell *lc; + PartitionRangeDatumKind kind = PARTITION_RANGE_DATUM_VALUE; + + foreach(lc, blist) + { + PartitionRangeDatum *prd = castNode(PartitionRangeDatum, lfirst(lc)); + + if (kind == prd->kind) + continue; + + switch (kind) + { + case PARTITION_RANGE_DATUM_VALUE: + kind = prd->kind; + break; + + case PARTITION_RANGE_DATUM_MAXVALUE: + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("every bound following MAXVALUE must also be MAXVALUE"), + parser_errposition(pstate, exprLocation((Node *) prd)))); + + case PARTITION_RANGE_DATUM_MINVALUE: + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("every bound following MINVALUE must also be MINVALUE"), + parser_errposition(pstate, exprLocation((Node *) prd)))); + } + } +} + /* * Transform one constant in a partition bound spec */ diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index 58c755be50..60ab28a96a 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -723,7 +723,7 @@ Number of partitions: 3 (Use \d+ to list them.) 
-- check that we get the expected partition constraints CREATE TABLE range_parted4 (a int, b int, c int) PARTITION BY RANGE (abs(a), abs(b), c); -CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, 0, 0) TO (MAXVALUE, 0, 0); +CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE); \d+ unbounded_range_part Table "public.unbounded_range_part" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description @@ -731,11 +731,11 @@ CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MI a | integer | | | | plain | | b | integer | | | | plain | | c | integer | | | | plain | | -Partition of: range_parted4 FOR VALUES FROM (MINVALUE, 0, 0) TO (MAXVALUE, 0, 0) +Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE) Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL)) DROP TABLE unbounded_range_part; -CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, 0, 0) TO (1, MAXVALUE, 0); +CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE); \d+ range_parted4_1 Table "public.range_parted4_1" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description @@ -743,7 +743,7 @@ CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALU a | integer | | | | plain | | b | integer | | | | plain | | c | integer | | | | plain | | -Partition of: range_parted4 FOR VALUES FROM (MINVALUE, 0, 0) TO (1, MAXVALUE, 0) +Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE) Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND (abs(a) <= 1)) CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE); @@ -757,7 +757,7 @@ CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5 Partition of: range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE) Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 3) OR ((abs(a) = 3) AND (abs(b) > 4)) OR ((abs(a) = 3) AND (abs(b) = 4) AND (c >= 5))) AND ((abs(a) < 6) OR ((abs(a) = 6) AND (abs(b) <= 7)))) -CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, 0); +CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE); \d+ range_parted4_3 Table "public.range_parted4_3" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description @@ -765,7 +765,7 @@ CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, M a | integer | | | | plain | | b | integer | | | | plain | | c | integer | | | | plain | | -Partition of: range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, 0) +Partition of: range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE) Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 6) OR ((abs(a) = 6) AND (abs(b) >= 8))) AND (abs(a) <= 9)) DROP TABLE range_parted4; diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out index 2fb0b4d86e..c698faff2f 100644 --- a/src/test/regress/expected/inherit.out +++ 
b/src/test/regress/expected/inherit.out @@ -1853,12 +1853,12 @@ drop table range_list_parted; -- check that constraint exclusion is able to cope with the partition -- constraint emitted for multi-column range partitioned tables create table mcrparted (a int, b int, c int) partition by range (a, abs(b), c); -create table mcrparted0 partition of mcrparted for values from (minvalue, 0, 0) to (1, 1, 1); +create table mcrparted0 partition of mcrparted for values from (minvalue, minvalue, minvalue) to (1, 1, 1); create table mcrparted1 partition of mcrparted for values from (1, 1, 1) to (10, 5, 10); create table mcrparted2 partition of mcrparted for values from (10, 5, 10) to (10, 10, 10); create table mcrparted3 partition of mcrparted for values from (11, 1, 1) to (20, 10, 10); create table mcrparted4 partition of mcrparted for values from (20, 10, 10) to (20, 20, 20); -create table mcrparted5 partition of mcrparted for values from (20, 20, 20) to (maxvalue, 0, 0); +create table mcrparted5 partition of mcrparted for values from (20, 20, 20) to (maxvalue, maxvalue, maxvalue); explain (costs off) select * from mcrparted where a = 0; -- scans mcrparted0 QUERY PLAN ------------------------------ diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out index 73a5600f19..b715619313 100644 --- a/src/test/regress/expected/insert.out +++ b/src/test/regress/expected/insert.out @@ -574,15 +574,28 @@ revoke all on key_desc from someone_else; revoke all on key_desc_1 from someone_else; drop role someone_else; drop table key_desc, key_desc_1; +-- test minvalue/maxvalue restrictions +create table mcrparted (a int, b int, c int) partition by range (a, abs(b), c); +create table mcrparted0 partition of mcrparted for values from (minvalue, 0, 0) to (1, maxvalue, maxvalue); +ERROR: every bound following MINVALUE must also be MINVALUE +LINE 1: ...partition of mcrparted for values from (minvalue, 0, 0) to (... + ^ +create table mcrparted2 partition of mcrparted for values from (10, 6, minvalue) to (10, maxvalue, minvalue); +ERROR: every bound following MAXVALUE must also be MAXVALUE +LINE 1: ...r values from (10, 6, minvalue) to (10, maxvalue, minvalue); + ^ +create table mcrparted4 partition of mcrparted for values from (21, minvalue, 0) to (30, 20, minvalue); +ERROR: every bound following MINVALUE must also be MINVALUE +LINE 1: ...ition of mcrparted for values from (21, minvalue, 0) to (30,... 
+ ^ -- check multi-column range partitioning expression enforces the same -- constraint as what tuple-routing would determine it to be -create table mcrparted (a int, b int, c int) partition by range (a, abs(b), c); -create table mcrparted0 partition of mcrparted for values from (minvalue, 0, 0) to (1, maxvalue, 0); +create table mcrparted0 partition of mcrparted for values from (minvalue, minvalue, minvalue) to (1, maxvalue, maxvalue); create table mcrparted1 partition of mcrparted for values from (2, 1, minvalue) to (10, 5, 10); -create table mcrparted2 partition of mcrparted for values from (10, 6, minvalue) to (10, maxvalue, 0); +create table mcrparted2 partition of mcrparted for values from (10, 6, minvalue) to (10, maxvalue, maxvalue); create table mcrparted3 partition of mcrparted for values from (11, 1, 1) to (20, 10, 10); -create table mcrparted4 partition of mcrparted for values from (21, minvalue, 0) to (30, 20, maxvalue); -create table mcrparted5 partition of mcrparted for values from (30, 21, 20) to (maxvalue, 0, 0); +create table mcrparted4 partition of mcrparted for values from (21, minvalue, minvalue) to (30, 20, maxvalue); +create table mcrparted5 partition of mcrparted for values from (30, 21, 20) to (maxvalue, maxvalue, maxvalue); -- routed to mcrparted0 insert into mcrparted values (0, 1, 1); insert into mcrparted0 values (0, 1, 1); @@ -666,14 +679,14 @@ drop table brtrigpartcon; drop function brtrigpartcon1trigf(); -- check multi-column range partitioning with minvalue/maxvalue constraints create table mcrparted (a text, b int) partition by range(a, b); -create table mcrparted1_lt_b partition of mcrparted for values from (minvalue, 0) to ('b', minvalue); +create table mcrparted1_lt_b partition of mcrparted for values from (minvalue, minvalue) to ('b', minvalue); create table mcrparted2_b partition of mcrparted for values from ('b', minvalue) to ('c', minvalue); create table mcrparted3_c_to_common partition of mcrparted for values from ('c', minvalue) to ('common', minvalue); create table mcrparted4_common_lt_0 partition of mcrparted for values from ('common', minvalue) to ('common', 0); create table mcrparted5_common_0_to_10 partition of mcrparted for values from ('common', 0) to ('common', 10); create table mcrparted6_common_ge_10 partition of mcrparted for values from ('common', 10) to ('common', maxvalue); create table mcrparted7_gt_common_lt_d partition of mcrparted for values from ('common', maxvalue) to ('d', minvalue); -create table mcrparted8_ge_d partition of mcrparted for values from ('d', minvalue) to (maxvalue, 0); +create table mcrparted8_ge_d partition of mcrparted for values from ('d', minvalue) to (maxvalue, maxvalue); \d+ mcrparted Table "public.mcrparted" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description @@ -681,14 +694,14 @@ create table mcrparted8_ge_d partition of mcrparted for values from ('d', minval a | text | | | | extended | | b | integer | | | | plain | | Partition key: RANGE (a, b) -Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, 0) TO ('b', MINVALUE), +Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVALUE), mcrparted2_b FOR VALUES FROM ('b', MINVALUE) TO ('c', MINVALUE), mcrparted3_c_to_common FOR VALUES FROM ('c', MINVALUE) TO ('common', MINVALUE), mcrparted4_common_lt_0 FOR VALUES FROM ('common', MINVALUE) TO ('common', 0), mcrparted5_common_0_to_10 FOR VALUES FROM ('common', 0) TO ('common', 10), mcrparted6_common_ge_10 FOR VALUES FROM ('common', 10) TO ('common', 
MAXVALUE), mcrparted7_gt_common_lt_d FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE), - mcrparted8_ge_d FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, 0) + mcrparted8_ge_d FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE) \d+ mcrparted1_lt_b Table "public.mcrparted1_lt_b" @@ -696,7 +709,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, 0) TO ('b', MINVALUE), --------+---------+-----------+----------+---------+----------+--------------+------------- a | text | | | | extended | | b | integer | | | | plain | | -Partition of: mcrparted FOR VALUES FROM (MINVALUE, 0) TO ('b', MINVALUE) +Partition of: mcrparted FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVALUE) Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text)) \d+ mcrparted2_b @@ -759,7 +772,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::te --------+---------+-----------+----------+---------+----------+--------------+------------- a | text | | | | extended | | b | integer | | | | plain | | -Partition of: mcrparted FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, 0) +Partition of: mcrparted FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE) Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'd'::text)) insert into mcrparted values ('aaa', 0), ('b', 0), ('bz', 10), ('c', -10), diff --git a/src/test/regress/sql/create_table.sql b/src/test/regress/sql/create_table.sql index eeab5d91ff..df6a6d7326 100644 --- a/src/test/regress/sql/create_table.sql +++ b/src/test/regress/sql/create_table.sql @@ -641,14 +641,14 @@ CREATE TABLE part_c_1_10 PARTITION OF part_c FOR VALUES FROM (1) TO (10); -- check that we get the expected partition constraints CREATE TABLE range_parted4 (a int, b int, c int) PARTITION BY RANGE (abs(a), abs(b), c); -CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, 0, 0) TO (MAXVALUE, 0, 0); +CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE); \d+ unbounded_range_part DROP TABLE unbounded_range_part; -CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, 0, 0) TO (1, MAXVALUE, 0); +CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE); \d+ range_parted4_1 CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE); \d+ range_parted4_2 -CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, 0); +CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE); \d+ range_parted4_3 DROP TABLE range_parted4; diff --git a/src/test/regress/sql/inherit.sql b/src/test/regress/sql/inherit.sql index 01780d4977..169d0dc0f5 100644 --- a/src/test/regress/sql/inherit.sql +++ b/src/test/regress/sql/inherit.sql @@ -664,12 +664,12 @@ drop table range_list_parted; -- check that constraint exclusion is able to cope with the partition -- constraint emitted for multi-column range partitioned tables create table mcrparted (a int, b int, c int) partition by range (a, abs(b), c); -create table mcrparted0 partition of mcrparted for values from (minvalue, 0, 0) to (1, 1, 1); +create table mcrparted0 partition of mcrparted for values from (minvalue, minvalue, minvalue) to (1, 1, 1); create table mcrparted1 partition of mcrparted for values from (1, 1, 1) to (10, 5, 10); create table 
mcrparted2 partition of mcrparted for values from (10, 5, 10) to (10, 10, 10); create table mcrparted3 partition of mcrparted for values from (11, 1, 1) to (20, 10, 10); create table mcrparted4 partition of mcrparted for values from (20, 10, 10) to (20, 20, 20); -create table mcrparted5 partition of mcrparted for values from (20, 20, 20) to (maxvalue, 0, 0); +create table mcrparted5 partition of mcrparted for values from (20, 20, 20) to (maxvalue, maxvalue, maxvalue); explain (costs off) select * from mcrparted where a = 0; -- scans mcrparted0 explain (costs off) select * from mcrparted where a = 10 and abs(b) < 5; -- scans mcrparted1 explain (costs off) select * from mcrparted where a = 10 and abs(b) = 5; -- scans mcrparted1, mcrparted2 diff --git a/src/test/regress/sql/insert.sql b/src/test/regress/sql/insert.sql index a2948e4dd0..d741514414 100644 --- a/src/test/regress/sql/insert.sql +++ b/src/test/regress/sql/insert.sql @@ -369,15 +369,20 @@ revoke all on key_desc_1 from someone_else; drop role someone_else; drop table key_desc, key_desc_1; +-- test minvalue/maxvalue restrictions +create table mcrparted (a int, b int, c int) partition by range (a, abs(b), c); +create table mcrparted0 partition of mcrparted for values from (minvalue, 0, 0) to (1, maxvalue, maxvalue); +create table mcrparted2 partition of mcrparted for values from (10, 6, minvalue) to (10, maxvalue, minvalue); +create table mcrparted4 partition of mcrparted for values from (21, minvalue, 0) to (30, 20, minvalue); + -- check multi-column range partitioning expression enforces the same -- constraint as what tuple-routing would determine it to be -create table mcrparted (a int, b int, c int) partition by range (a, abs(b), c); -create table mcrparted0 partition of mcrparted for values from (minvalue, 0, 0) to (1, maxvalue, 0); +create table mcrparted0 partition of mcrparted for values from (minvalue, minvalue, minvalue) to (1, maxvalue, maxvalue); create table mcrparted1 partition of mcrparted for values from (2, 1, minvalue) to (10, 5, 10); -create table mcrparted2 partition of mcrparted for values from (10, 6, minvalue) to (10, maxvalue, 0); +create table mcrparted2 partition of mcrparted for values from (10, 6, minvalue) to (10, maxvalue, maxvalue); create table mcrparted3 partition of mcrparted for values from (11, 1, 1) to (20, 10, 10); -create table mcrparted4 partition of mcrparted for values from (21, minvalue, 0) to (30, 20, maxvalue); -create table mcrparted5 partition of mcrparted for values from (30, 21, 20) to (maxvalue, 0, 0); +create table mcrparted4 partition of mcrparted for values from (21, minvalue, minvalue) to (30, 20, maxvalue); +create table mcrparted5 partition of mcrparted for values from (30, 21, 20) to (maxvalue, maxvalue, maxvalue); -- routed to mcrparted0 insert into mcrparted values (0, 1, 1); @@ -442,14 +447,14 @@ drop function brtrigpartcon1trigf(); -- check multi-column range partitioning with minvalue/maxvalue constraints create table mcrparted (a text, b int) partition by range(a, b); -create table mcrparted1_lt_b partition of mcrparted for values from (minvalue, 0) to ('b', minvalue); +create table mcrparted1_lt_b partition of mcrparted for values from (minvalue, minvalue) to ('b', minvalue); create table mcrparted2_b partition of mcrparted for values from ('b', minvalue) to ('c', minvalue); create table mcrparted3_c_to_common partition of mcrparted for values from ('c', minvalue) to ('common', minvalue); create table mcrparted4_common_lt_0 partition of mcrparted for values from 
('common', minvalue) to ('common', 0); create table mcrparted5_common_0_to_10 partition of mcrparted for values from ('common', 0) to ('common', 10); create table mcrparted6_common_ge_10 partition of mcrparted for values from ('common', 10) to ('common', maxvalue); create table mcrparted7_gt_common_lt_d partition of mcrparted for values from ('common', maxvalue) to ('d', minvalue); -create table mcrparted8_ge_d partition of mcrparted for values from ('d', minvalue) to (maxvalue, 0); +create table mcrparted8_ge_d partition of mcrparted for values from ('d', minvalue) to (maxvalue, maxvalue); \d+ mcrparted \d+ mcrparted1_lt_b From 04b64b8ddf9926950fe86d7d489825c46665dc01 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Sat, 16 Sep 2017 11:58:00 -0400 Subject: [PATCH 0192/1087] docs: clarify pg_upgrade docs regarding standbys and rsync Document that rsync is an _optional_ way to upgrade standbys, suggest rsync option --dry-run, and mention a way of upgrading one standby from another using rsync. Also clarify some instructions by specifying if they operate on the old or new clusters. Reported-by: Stephen Frost, Magnus Hagander Discussion: https://postgr.es/m/20170914191250.GB6595@momjian.us Backpatch-through: 9.5 --- doc/src/sgml/ref/pgupgrade.sgml | 41 +++++++++++++++++++-------------- 1 file changed, 24 insertions(+), 17 deletions(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index 60011d8167..146b3af620 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -320,20 +320,14 @@ NET STOP postgresql-&majorversion; Prepare for standby server upgrades - If you are upgrading standby servers (as outlined in section ), verify that the old standby + If you are upgrading standby servers using methods outlined in section , verify that the old standby servers are caught up by running pg_controldata against the old primary and standby clusters. Verify that the Latest checkpoint location values match in all clusters. (There will be a mismatch if old standby servers were shut down before the old primary.) - - - Also, if upgrading standby servers, change wal_level - to replica in the postgresql.conf file on - the new primary cluster. - @@ -423,12 +417,18 @@ pg_upgrade.exe If you used link mode and have Streaming Replication (see ) or Log-Shipping (see ) standby servers, follow these steps to - upgrade them. You will not be running pg_upgrade on + ) standby servers, you can follow these steps to + quickly upgrade them. You will not be running pg_upgrade on the standby servers, but rather rsync on the primary. - Do not start any servers yet. If you did not use link - mode, skip the instructions in this section and simply recreate the - standby servers. + Do not start any servers yet. + + + + If you did not use link mode, do not have or do not + want to use rsync, or want an easier solution, skip + the instructions in this section and simply recreate the standby + servers once pg_upgrade completes and the new primary + is running. @@ -448,7 +448,7 @@ pg_upgrade.exe Make sure the new standby data directories do not exist or are empty. If initdb was run, delete - the standby server data directories. + the standby servers' new data directories. @@ -474,9 +474,10 @@ pg_upgrade.exe Save configuration files - Save any configuration files from the standbys you need to keep, - e.g. postgresql.conf, recovery.conf, - as these will be overwritten or removed in the next step.
+ Save any configuration files from the old standbys' data + directories you need to keep, e.g. postgresql.conf, + recovery.conf, because these will be overwritten or + removed in the next step. @@ -507,6 +508,12 @@ rsync --archive --delete --hard-links --size-only /opt/PostgreSQL/9.5/data \ /opt/PostgreSQL/9.6/data standby.example.com:/opt/PostgreSQL + You can verify what the command will do using + rsync's --dry-run option. From 0f79440fb0b4c5a9baa9a95570c01828a9093802 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 16 Sep 2017 13:20:32 -0400 Subject: [PATCH 0193/1087] Fix SQL-spec incompatibilities in new transition table feature. The standard says that all changes of the same kind (insert, update, or delete) caused in one table by a single SQL statement should be reported in a single transition table; and by that, they mean to include foreign key enforcement actions cascading from the statement's direct effects. It's also reasonable to conclude that if the standard had wCTEs, they would say that effects of wCTEs applying to the same table as each other or the outer statement should be merged into one transition table. We weren't doing it like that. Hence, arrange to merge tuples from multiple update actions into a single transition table as much as we can. There is a problem, which is that if the firing of FK enforcement triggers and after-row triggers with transition tables is interspersed, we might need to report more tuples after some triggers have already seen the transition table. It seems like a bad idea for the transition table to be mutable between trigger calls. There's no good way around this without a major redesign of the FK logic, so for now, resolve it by opening a new transition table each time this happens. Also, ensure that AFTER STATEMENT triggers fire just once per statement, or once per transition table when we're forced to make more than one. Previous versions of Postgres have allowed each FK enforcement query to cause an additional firing of the AFTER STATEMENT triggers for the referencing table, but that's certainly not per spec. (We're still doing multiple firings of BEFORE STATEMENT triggers, though; is that something worth changing?) Also, forbid using transition tables with column-specific UPDATE triggers. The spec requires such transition tables to show only the tuples for which the UPDATE trigger would have fired, which means maintaining multiple transition tables or else somehow filtering the contents at readout. Maybe someday we'll bother to support that option, but it looks like a lot of trouble for a marginal feature. The transition tables are now managed by the AfterTriggers data structures, rather than being directly the responsibility of ModifyTable nodes. This removes a subtransaction-lifespan memory leak introduced by my previous band-aid patch 3c4359521. In passing, refactor the AfterTriggers data structures to reduce the management overhead for them, by using arrays of structs rather than several parallel arrays for per-query-level and per-subtransaction state. I failed to resist the temptation to do some copy-editing on the SGML docs about triggers, above and beyond merely documenting the effects of this patch. Back-patch to v10, because we don't want the semantics of transition tables to change post-release. Patch by me, with help and review from Thomas Munro.
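As an illustrative aside, not part of the patch: the SQL sketch below uses invented object names (p, c, c_delete_report) to show the merged-transition-table behavior described above. The statement-level trigger on the referencing table sees the FK-cascaded deletions in a single transition table.

CREATE TABLE p (id int PRIMARY KEY);
CREATE TABLE c (pid int REFERENCES p(id) ON DELETE CASCADE);

CREATE FUNCTION c_delete_report() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- old_rows holds every row the statement removed from c,
    -- including rows deleted by the FK cascade.
    RAISE NOTICE 'rows deleted from c: %', (SELECT count(*) FROM old_rows);
    RETURN NULL;
END $$;

CREATE TRIGGER c_delete_stmt
    AFTER DELETE ON c
    REFERENCING OLD TABLE AS old_rows
    FOR EACH STATEMENT EXECUTE PROCEDURE c_delete_report();

INSERT INTO p VALUES (1);
INSERT INTO c VALUES (1), (1);
DELETE FROM p WHERE id = 1;  -- reports 2 rows, seen in one transition table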
Discussion: https://postgr.es/m/20170909064853.25630.12825@wrigleys.postgresql.org --- doc/src/sgml/ref/create_trigger.sgml | 112 +++- doc/src/sgml/trigger.sgml | 54 +- src/backend/commands/copy.c | 10 +- src/backend/commands/trigger.c | 820 ++++++++++++++++--------- src/backend/executor/README | 2 +- src/backend/executor/execMain.c | 11 +- src/backend/executor/nodeModifyTable.c | 57 +- src/include/commands/trigger.h | 29 +- src/include/nodes/execnodes.h | 4 +- src/test/regress/expected/triggers.out | 52 +- src/test/regress/sql/triggers.sql | 42 ++ 11 files changed, 803 insertions(+), 390 deletions(-) diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index 18efe6a9ed..065c827271 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -52,7 +52,7 @@ CREATE [ CONSTRAINT ] TRIGGER name trigger will be associated with the specified table, view, or foreign table and will execute the specified function function_name when - certain events occur. + certain operations are performed on that table. @@ -82,10 +82,7 @@ CREATE [ CONSTRAINT ] TRIGGER name executes once for any given operation, regardless of how many rows it modifies (in particular, an operation that modifies zero rows will still result in the execution of any applicable FOR - EACH STATEMENT triggers). Note that with an - INSERT with an ON CONFLICT DO UPDATE - clause, both INSERT and - UPDATE statement level trigger will be fired. + EACH STATEMENT triggers). @@ -174,7 +171,8 @@ CREATE [ CONSTRAINT ] TRIGGER name constraint trigger. This is the same as a regular trigger except that the timing of the trigger firing can be adjusted using . - Constraint triggers must be AFTER ROW triggers on tables. They + Constraint triggers must be AFTER ROW triggers on plain + tables (not foreign tables). They can be fired either at the end of the statement causing the triggering event, or at the end of the containing transaction; in the latter case they are said to be deferred. A pending deferred-trigger firing @@ -184,18 +182,29 @@ CREATE [ CONSTRAINT ] TRIGGER name - The REFERENCING option is only allowed for an AFTER - trigger which is not a constraint trigger. OLD TABLE may only - be specified once, and only on a trigger which can fire on - UPDATE or DELETE. NEW TABLE may only - be specified once, and only on a trigger which can fire on - UPDATE or INSERT. + The REFERENCING option enables collection + of transition relations, which are row sets that include all + of the rows inserted, deleted, or modified by the current SQL statement. + This feature lets the trigger see a global view of what the statement did, + not just one row at a time. This option is only allowed for + an AFTER trigger that is not a constraint trigger; also, if + the trigger is an UPDATE trigger, it must not specify + a column_name list. + OLD TABLE may only be specified once, and only for a trigger + that can fire on UPDATE or DELETE; it creates a + transition relation containing the before-images of all rows + updated or deleted by the statement. + Similarly, NEW TABLE may only be specified once, and only for + a trigger that can fire on UPDATE or INSERT; + it creates a transition relation containing the after-images + of all rows updated or inserted by the statement. SELECT does not modify any rows so you cannot - create SELECT triggers. Rules and views are more - appropriate in such cases. + create SELECT triggers. 
Rules and views may provide + workable solutions to problems that seem to need SELECT + triggers. @@ -300,12 +309,9 @@ UPDATE OF column_name1 [, column_name2REFERENCING - This immediately precedes the declaration of one or two relations which - can be used to read the before and/or after images of all rows directly - affected by the triggering statement. An AFTER EACH ROW - trigger is allowed to use both these transition relation names and the - row names (OLD and NEW) which reference each - individual row for which the trigger fires. + This keyword immediately precedes the declaration of one or two + relation names that provide access to the transition relations of the + triggering statement. @@ -315,8 +321,9 @@ UPDATE OF column_name1 [, column_name2NEW TABLE - This specifies whether the named relation contains the before or after - images for rows affected by the statement which fired the trigger. + This clause indicates whether the following relation name is for the + before-image transition relation or the after-image transition + relation. @@ -325,7 +332,8 @@ UPDATE OF column_name1 [, column_name2transition_relation_name - The (unqualified) name to be used within the trigger for this relation. + The (unqualified) name to be used within the trigger for this + transition relation. @@ -458,6 +466,35 @@ UPDATE OF column_name1 [, column_name2 + + In some cases it is possible for a single SQL command to fire more than + one kind of trigger. For instance an INSERT with + an ON CONFLICT DO UPDATE clause may cause both insert and + update operations, so it will fire both kinds of triggers as needed. + The transition relations supplied to triggers are + specific to their event type; thus an INSERT trigger + will see only the inserted rows, while an UPDATE + trigger will see only the updated rows. + + + + Row updates or deletions caused by foreign-key enforcement actions, such + as ON UPDATE CASCADE or ON DELETE SET NULL, are + treated as part of the SQL command that caused them (note that such + actions are never deferred). Relevant triggers on the affected table will + be fired, so that this provides another way in which a SQL command might + fire triggers not directly matching its type. In simple cases, triggers + that request transition relations will see all changes caused in their + table by a single original SQL command as a single transition relation. + However, there are cases in which the presence of an AFTER ROW + trigger that requests transition relations will cause the foreign-key + enforcement actions triggered by a single SQL command to be split into + multiple steps, each with its own transition relation(s). In such cases, + any AFTER STATEMENT triggers that are present will be fired + once per creation of a transition relation, ensuring that the triggers see + each affected row once and only once. + + Modifying a partitioned table or a table with inheritance children fires statement-level triggers directly attached to that table, but not @@ -589,19 +626,30 @@ CREATE TRIGGER paired_items_update - While transition tables for AFTER triggers are specified - using the REFERENCING clause in the standard way, the row - variables used in FOR EACH ROW triggers may not be - specified in REFERENCING clause. They are available in a - manner which is dependent on the language in which the trigger function - is written. Some languages effectively behave as though there is a - REFERENCING clause containing OLD ROW AS OLD NEW - ROW AS NEW. 
+ While transition table names for AFTER triggers are + specified using the REFERENCING clause in the standard way, + the row variables used in FOR EACH ROW triggers may not be + specified in a REFERENCING clause. They are available in a + manner that is dependent on the language in which the trigger function + is written, but is fixed for any one language. Some languages + effectively behave as though there is a REFERENCING clause + containing OLD ROW AS OLD NEW ROW AS NEW. - PostgreSQL only allows the execution + + The standard allows transition tables to be used with + column-specific UPDATE triggers, but then the set of rows + that should be visible in the transition tables depends on the + trigger's column list. This is not currently implemented by + PostgreSQL. + + + + + + PostgreSQL only allows the execution of a user-defined function for the triggered action. The standard allows the execution of a number of other SQL commands, such as CREATE TABLE, as the triggered action. This diff --git a/doc/src/sgml/trigger.sgml b/doc/src/sgml/trigger.sgml index 950245d19a..a16256056f 100644 --- a/doc/src/sgml/trigger.sgml +++ b/doc/src/sgml/trigger.sgml @@ -41,17 +41,13 @@ On tables and foreign tables, triggers can be defined to execute either before or after any INSERT, UPDATE, or DELETE operation, either once per modified row, - or once per SQL statement. If an - INSERT contains an ON CONFLICT DO UPDATE - clause, it is possible that the effects of a BEFORE insert trigger and - a BEFORE update trigger can both be applied together, if a reference to - an EXCLUDED column appears. UPDATE - triggers can moreover be set to fire only if certain columns are - mentioned in the SET clause of the - UPDATE statement. Triggers can also fire for - TRUNCATE statements. If a trigger event occurs, + or once per SQL statement. + UPDATE triggers can moreover be set to fire only if + certain columns are mentioned in the SET clause of + the UPDATE statement. Triggers can also fire + for TRUNCATE statements. If a trigger event occurs, the trigger's function is called at the appropriate time to handle the - event. Foreign tables do not support the TRUNCATE statement at all. + event. @@ -97,10 +93,7 @@ two types of triggers are sometimes called row-level triggers and statement-level triggers, respectively. Triggers on TRUNCATE may only be - defined at statement level. On views, triggers that fire before or - after may only be defined at statement level, while triggers that fire - instead of an INSERT, UPDATE, - or DELETE may only be defined at row level. + defined at statement level, not per-row. @@ -117,9 +110,9 @@ operated on, while row-level AFTER triggers fire at the end of the statement (but before any statement-level AFTER triggers). These types of triggers may only be defined on non-partitioned tables and - foreign tables. Row-level INSTEAD OF triggers may only be - defined on views, and fire immediately as each row in the view is - identified as needing to be operated on. + foreign tables, not views. INSTEAD OF triggers may only be + defined on views, and only at row level; they fire immediately as each + row in the view is identified as needing to be operated on. 
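As a hedged illustration of the REFERENCING syntax documented above (the table and function names here are invented, not taken from the patch):

CREATE TRIGGER accounts_update_summary
    AFTER UPDATE ON accounts
    REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE summarize_update();

Within summarize_update(), old_rows and new_rows can be queried with ordinary SELECT statements, giving the before- and after-images of every row the UPDATE touched.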
@@ -132,18 +125,19 @@ If an INSERT contains an ON CONFLICT - DO UPDATE clause, it is possible that the effects of all - row-level BEFORE INSERT triggers - and all row-level BEFORE UPDATE triggers can + DO UPDATE clause, it is possible that the effects of + row-level BEFORE INSERT triggers and + row-level BEFORE UPDATE triggers can both be applied in a way that is apparent from the final state of the updated row, if an EXCLUDED column is referenced. There need not be an EXCLUDED column reference for - both sets of row-level BEFORE triggers to execute, though. The + both sets of row-level BEFORE triggers to execute, + though. The possibility of surprising outcomes should be considered when there are both BEFORE INSERT and BEFORE UPDATE row-level triggers - that both affect a row being inserted/updated (this can still be - problematic if the modifications are more or less equivalent if + that change a row being inserted/updated (this can be + problematic even if the modifications are more or less equivalent, if they're not also idempotent). Note that statement-level UPDATE triggers are executed when ON CONFLICT DO UPDATE is specified, regardless of whether or not @@ -314,8 +308,18 @@ NEW row for INSERT and UPDATE triggers, and/or the OLD row for UPDATE and DELETE triggers. - Statement-level triggers do not currently have any way to examine the - individual row(s) modified by the statement. + + + + By default, statement-level triggers do not have any way to examine the + individual row(s) modified by the statement. But an AFTER + STATEMENT trigger can request that transition tables + be created to make the sets of affected rows available to the trigger. + AFTER ROW triggers can also request transition tables, so + that they can see the total changes in the table as well as the change in + the individual row they are currently being fired for. The syntax for + examining the transition tables again depends on the programming language + that is being used. diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index cfa3f059c2..c6fa44563c 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -2432,12 +2432,17 @@ CopyFrom(CopyState cstate) /* Triggers might need a slot as well */ estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate); + /* Prepare to catch AFTER triggers. */ + AfterTriggerBeginQuery(); + /* * If there are any triggers with transition tables on the named relation, * we need to be prepared to capture transition tuples. */ cstate->transition_capture = - MakeTransitionCaptureState(cstate->rel->trigdesc); + MakeTransitionCaptureState(cstate->rel->trigdesc, + RelationGetRelid(cstate->rel), + CMD_INSERT); /* * If the named relation is a partitioned table, initialize state for @@ -2513,9 +2518,6 @@ CopyFrom(CopyState cstate) bufferedTuples = palloc(MAX_BUFFERED_TUPLES * sizeof(HeapTuple)); } - /* Prepare to catch AFTER triggers. */ - AfterTriggerBeginQuery(); - /* * Check BEFORE STATEMENT insertion triggers. 
It's debatable whether we * should do this for COPY, since it's not really an "INSERT" statement as diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index 269c9e17dd..7e391a1092 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -234,6 +234,11 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString, RelationGetRelationName(rel)), errdetail("Foreign tables cannot have TRUNCATE triggers."))); + /* + * We disallow constraint triggers to protect the assumption that + * triggers on FKs can't be deferred. See notes with AfterTriggers + * data structures, below. + */ if (stmt->isconstraint) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), @@ -418,6 +423,26 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("transition tables cannot be specified for triggers with more than one event"))); + /* + * We currently don't allow column-specific triggers with + * transition tables. Per spec, that seems to require + * accumulating separate transition tables for each combination of + * columns, which is a lot of work for a rather marginal feature. + */ + if (stmt->columns != NIL) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("transition tables cannot be specified for triggers with column lists"))); + + /* + * We disallow constraint triggers with transition tables, to + * protect the assumption that such triggers can't be deferred. + * See notes with AfterTriggers data structures, below. + * + * Currently this is enforced by the grammar, so just Assert here. + */ + Assert(!stmt->isconstraint); + if (tt->isNew) { if (!(TRIGGER_FOR_INSERT(tgtype) || @@ -2085,96 +2110,6 @@ FindTriggerIncompatibleWithInheritance(TriggerDesc *trigdesc) return NULL; } -/* - * Make a TransitionCaptureState object from a given TriggerDesc. The - * resulting object holds the flags which control whether transition tuples - * are collected when tables are modified, and the tuplestores themselves. - * Note that we copy the flags from a parent table into this struct (rather - * than using each relation's TriggerDesc directly) so that we can use it to - * control the collection of transition tuples from child tables. - * - * If there are no triggers with transition tables configured for 'trigdesc', - * then return NULL. - * - * The resulting object can be passed to the ExecAR* functions. The caller - * should set tcs_map or tcs_original_insert_tuple as appropriate when dealing - * with child tables. - */ -TransitionCaptureState * -MakeTransitionCaptureState(TriggerDesc *trigdesc) -{ - TransitionCaptureState *state = NULL; - - if (trigdesc != NULL && - (trigdesc->trig_delete_old_table || trigdesc->trig_update_old_table || - trigdesc->trig_update_new_table || trigdesc->trig_insert_new_table)) - { - MemoryContext oldcxt; - ResourceOwner saveResourceOwner; - - /* - * Normally DestroyTransitionCaptureState should be called after - * executing all AFTER triggers for the current statement. - * - * To handle error cleanup, TransitionCaptureState and the tuplestores - * it contains will live in the current [sub]transaction's memory - * context. Likewise for the current resource owner, because we also - * want to clean up temporary files spilled to disk by the tuplestore - * in that scenario. This scope is sufficient, because AFTER triggers - * with transition tables cannot be deferred (only constraint triggers - * can be deferred, and constraint triggers cannot have transition - * tables). 
The AFTER trigger queue may contain pointers to this - * TransitionCaptureState, but any such entries will be processed or - * discarded before the end of the current [sub]transaction. - * - * If a future release allows deferred triggers with transition - * tables, we'll need to reconsider the scope of the - * TransitionCaptureState object. - */ - oldcxt = MemoryContextSwitchTo(CurTransactionContext); - saveResourceOwner = CurrentResourceOwner; - - state = (TransitionCaptureState *) - palloc0(sizeof(TransitionCaptureState)); - state->tcs_delete_old_table = trigdesc->trig_delete_old_table; - state->tcs_update_old_table = trigdesc->trig_update_old_table; - state->tcs_update_new_table = trigdesc->trig_update_new_table; - state->tcs_insert_new_table = trigdesc->trig_insert_new_table; - PG_TRY(); - { - CurrentResourceOwner = CurTransactionResourceOwner; - if (trigdesc->trig_delete_old_table || trigdesc->trig_update_old_table) - state->tcs_old_tuplestore = tuplestore_begin_heap(false, false, work_mem); - if (trigdesc->trig_insert_new_table) - state->tcs_insert_tuplestore = tuplestore_begin_heap(false, false, work_mem); - if (trigdesc->trig_update_new_table) - state->tcs_update_tuplestore = tuplestore_begin_heap(false, false, work_mem); - } - PG_CATCH(); - { - CurrentResourceOwner = saveResourceOwner; - PG_RE_THROW(); - } - PG_END_TRY(); - CurrentResourceOwner = saveResourceOwner; - MemoryContextSwitchTo(oldcxt); - } - - return state; -} - -void -DestroyTransitionCaptureState(TransitionCaptureState *tcs) -{ - if (tcs->tcs_insert_tuplestore != NULL) - tuplestore_end(tcs->tcs_insert_tuplestore); - if (tcs->tcs_update_tuplestore != NULL) - tuplestore_end(tcs->tcs_update_tuplestore); - if (tcs->tcs_old_tuplestore != NULL) - tuplestore_end(tcs->tcs_old_tuplestore); - pfree(tcs); -} - /* * Call a trigger function. * @@ -3338,9 +3273,11 @@ TriggerEnabled(EState *estate, ResultRelInfo *relinfo, * during the current transaction tree. (BEFORE triggers are fired * immediately so we don't need any persistent state about them.) The struct * and most of its subsidiary data are kept in TopTransactionContext; however - * the individual event records are kept in a separate sub-context. This is - * done mainly so that it's easy to tell from a memory context dump how much - * space is being eaten by trigger events. + * some data that can be discarded sooner appears in the CurTransactionContext + * of the relevant subtransaction. Also, the individual event records are + * kept in a separate sub-context of TopTransactionContext. This is done + * mainly so that it's easy to tell from a memory context dump how much space + * is being eaten by trigger events. * * Because the list of pending events can grow large, we go to some * considerable effort to minimize per-event memory consumption. The event @@ -3400,6 +3337,13 @@ typedef SetConstraintStateData *SetConstraintState; * tuple(s). This permits storing tuples once regardless of the number of * row-level triggers on a foreign table. * + * Note that we need triggers on foreign tables to be fired in exactly the + * order they were queued, so that the tuples come out of the tuplestore in + * the right order. To ensure that, we forbid deferrable (constraint) + * triggers on foreign tables. This also ensures that such triggers do not + * get deferred into outer trigger query levels, meaning that it's okay to + * destroy the tuplestore at the end of the query level. + * * Statement-level triggers always bear AFTER_TRIGGER_1CTID, though they * require no ctid field. 
We lack the flag bit space to neatly represent that * distinct case, and it seems unlikely to be worth much trouble. @@ -3433,7 +3377,7 @@ typedef struct AfterTriggerSharedData Oid ats_tgoid; /* the trigger's ID */ Oid ats_relid; /* the relation it's on */ CommandId ats_firing_id; /* ID for firing cycle */ - TransitionCaptureState *ats_transition_capture; + struct AfterTriggersTableData *ats_table; /* transition table access */ } AfterTriggerSharedData; typedef struct AfterTriggerEventData *AfterTriggerEvent; @@ -3505,6 +3449,14 @@ typedef struct AfterTriggerEventList #define for_each_event_chunk(eptr, cptr, evtlist) \ for_each_chunk(cptr, evtlist) for_each_event(eptr, cptr) +/* Macros for iterating from a start point that might not be list start */ +#define for_each_chunk_from(cptr) \ + for (; cptr != NULL; cptr = cptr->next) +#define for_each_event_from(eptr, cptr) \ + for (; \ + (char *) eptr < (cptr)->freeptr; \ + eptr = (AfterTriggerEvent) (((char *) eptr) + SizeofTriggerEvent(eptr))) + /* * All per-transaction data for the AFTER TRIGGERS module. @@ -3529,60 +3481,107 @@ typedef struct AfterTriggerEventList * query_depth is the current depth of nested AfterTriggerBeginQuery calls * (-1 when the stack is empty). * - * query_stack[query_depth] is a list of AFTER trigger events queued by the - * current query (and the query_stack entries below it are lists of trigger - * events queued by calling queries). None of these are valid until the - * matching AfterTriggerEndQuery call occurs. At that point we fire - * immediate-mode triggers, and append any deferred events to the main events - * list. + * query_stack[query_depth] is the per-query-level data, including these fields: + * + * events is a list of AFTER trigger events queued by the current query. + * None of these are valid until the matching AfterTriggerEndQuery call + * occurs. At that point we fire immediate-mode triggers, and append any + * deferred events to the main events list. * - * fdw_tuplestores[query_depth] is a tuplestore containing the foreign tuples - * needed for the current query. + * fdw_tuplestore is a tuplestore containing the foreign-table tuples + * needed by events queued by the current query. (Note: we use just one + * tuplestore even though more than one foreign table might be involved. + * This is okay because tuplestores don't really care what's in the tuples + * they store; but it's possible that someday it'd break.) * - * maxquerydepth is just the allocated length of query_stack and the - * tuplestores. + * tables is a List of AfterTriggersTableData structs for target tables + * of the current query (see below). * - * state_stack is a stack of pointers to saved copies of the SET CONSTRAINTS - * state data; each subtransaction level that modifies that state first + * maxquerydepth is just the allocated length of query_stack. + * + * trans_stack holds per-subtransaction data, including these fields: + * + * state is NULL or a pointer to a saved copy of the SET CONSTRAINTS + * state data. Each subtransaction level that modifies that state first * saves a copy, which we use to restore the state if we abort. * - * events_stack is a stack of copies of the events head/tail pointers, + * events is a copy of the events head/tail pointers, * which we use to restore those values during subtransaction abort. 
* - * depth_stack is a stack of copies of subtransaction-start-time query_depth, + * query_depth is the subtransaction-start-time value of query_depth, * which we similarly use to clean up at subtransaction abort. * - * firing_stack is a stack of copies of subtransaction-start-time - * firing_counter. We use this to recognize which deferred triggers were - * fired (or marked for firing) within an aborted subtransaction. + * firing_counter is the subtransaction-start-time value of firing_counter. + * We use this to recognize which deferred triggers were fired (or marked + * for firing) within an aborted subtransaction. * * We use GetCurrentTransactionNestLevel() to determine the correct array - * index in these stacks. maxtransdepth is the number of allocated entries in - * each stack. (By not keeping our own stack pointer, we can avoid trouble + * index in trans_stack. maxtransdepth is the number of allocated entries in + * trans_stack. (By not keeping our own stack pointer, we can avoid trouble * in cases where errors during subxact abort cause multiple invocations * of AfterTriggerEndSubXact() at the same nesting depth.) + * + * We create an AfterTriggersTableData struct for each target table of the + * current query, and each operation mode (INSERT/UPDATE/DELETE), that has + * either transition tables or AFTER STATEMENT triggers. This is used to + * hold the relevant transition tables, as well as info tracking whether + * we already queued the AFTER STATEMENT triggers. (We use that info to + * prevent, as much as possible, firing the same AFTER STATEMENT trigger + * more than once per statement.) These structs, along with the transition + * table tuplestores, live in the (sub)transaction's CurTransactionContext. + * That's sufficient lifespan because we don't allow transition tables to be + * used by deferrable triggers, so they only need to survive until + * AfterTriggerEndQuery. 
*/ +typedef struct AfterTriggersQueryData AfterTriggersQueryData; +typedef struct AfterTriggersTransData AfterTriggersTransData; +typedef struct AfterTriggersTableData AfterTriggersTableData; + typedef struct AfterTriggersData { CommandId firing_counter; /* next firing ID to assign */ SetConstraintState state; /* the active S C state */ AfterTriggerEventList events; /* deferred-event list */ - int query_depth; /* current query list index */ - AfterTriggerEventList *query_stack; /* events pending from each query */ - Tuplestorestate **fdw_tuplestores; /* foreign tuples for one row from - * each query */ - int maxquerydepth; /* allocated len of above array */ MemoryContext event_cxt; /* memory context for events, if any */ - /* these fields are just for resetting at subtrans abort: */ + /* per-query-level data: */ + AfterTriggersQueryData *query_stack; /* array of structs shown below */ + int query_depth; /* current index in above array */ + int maxquerydepth; /* allocated len of above array */ - SetConstraintState *state_stack; /* stacked S C states */ - AfterTriggerEventList *events_stack; /* stacked list pointers */ - int *depth_stack; /* stacked query_depths */ - CommandId *firing_stack; /* stacked firing_counters */ - int maxtransdepth; /* allocated len of above arrays */ + /* per-subtransaction-level data: */ + AfterTriggersTransData *trans_stack; /* array of structs shown below */ + int maxtransdepth; /* allocated len of above array */ } AfterTriggersData; +struct AfterTriggersQueryData +{ + AfterTriggerEventList events; /* events pending from this query */ + Tuplestorestate *fdw_tuplestore; /* foreign tuples for said events */ + List *tables; /* list of AfterTriggersTableData, see below */ +}; + +struct AfterTriggersTransData +{ + /* these fields are just for resetting at subtrans abort: */ + SetConstraintState state; /* saved S C state, or NULL if not yet saved */ + AfterTriggerEventList events; /* saved list pointer */ + int query_depth; /* saved query_depth */ + CommandId firing_counter; /* saved firing_counter */ +}; + +struct AfterTriggersTableData +{ + /* relid + cmdType form the lookup key for these structs: */ + Oid relid; /* target table's OID */ + CmdType cmdType; /* event type, CMD_INSERT/UPDATE/DELETE */ + bool closed; /* true when no longer OK to add tuples */ + bool stmt_trig_done; /* did we already queue stmt-level triggers? 
*/ + AfterTriggerEventList stmt_trig_events; /* if so, saved list pointer */ + Tuplestorestate *old_tuplestore; /* "old" transition table, if any */ + Tuplestorestate *new_tuplestore; /* "new" transition table, if any */ +}; + static AfterTriggersData afterTriggers; static void AfterTriggerExecute(AfterTriggerEvent event, @@ -3591,38 +3590,41 @@ static void AfterTriggerExecute(AfterTriggerEvent event, Instrumentation *instr, MemoryContext per_tuple_context, TupleTableSlot *trig_tuple_slot1, - TupleTableSlot *trig_tuple_slot2, - TransitionCaptureState *transition_capture); + TupleTableSlot *trig_tuple_slot2); +static AfterTriggersTableData *GetAfterTriggersTableData(Oid relid, + CmdType cmdType); +static void AfterTriggerFreeQuery(AfterTriggersQueryData *qs); static SetConstraintState SetConstraintStateCreate(int numalloc); static SetConstraintState SetConstraintStateCopy(SetConstraintState state); static SetConstraintState SetConstraintStateAddItem(SetConstraintState state, Oid tgoid, bool tgisdeferred); +static void cancel_prior_stmt_triggers(Oid relid, CmdType cmdType, int tgevent); /* - * Gets a current query transition tuplestore and initializes it if necessary. + * Get the FDW tuplestore for the current trigger query level, creating it + * if necessary. */ static Tuplestorestate * -GetTriggerTransitionTuplestore(Tuplestorestate **tss) +GetCurrentFDWTuplestore(void) { Tuplestorestate *ret; - ret = tss[afterTriggers.query_depth]; + ret = afterTriggers.query_stack[afterTriggers.query_depth].fdw_tuplestore; if (ret == NULL) { MemoryContext oldcxt; ResourceOwner saveResourceOwner; /* - * Make the tuplestore valid until end of transaction. This is the - * allocation lifespan of the associated events list, but we really + * Make the tuplestore valid until end of subtransaction. We really * only need it until AfterTriggerEndQuery(). 
*/ - oldcxt = MemoryContextSwitchTo(TopTransactionContext); + oldcxt = MemoryContextSwitchTo(CurTransactionContext); saveResourceOwner = CurrentResourceOwner; PG_TRY(); { - CurrentResourceOwner = TopTransactionResourceOwner; + CurrentResourceOwner = CurTransactionResourceOwner; ret = tuplestore_begin_heap(false, false, work_mem); } PG_CATCH(); @@ -3634,7 +3636,7 @@ GetTriggerTransitionTuplestore(Tuplestorestate **tss) CurrentResourceOwner = saveResourceOwner; MemoryContextSwitchTo(oldcxt); - tss[afterTriggers.query_depth] = ret; + afterTriggers.query_stack[afterTriggers.query_depth].fdw_tuplestore = ret; } return ret; @@ -3780,7 +3782,7 @@ afterTriggerAddEvent(AfterTriggerEventList *events, if (newshared->ats_tgoid == evtshared->ats_tgoid && newshared->ats_relid == evtshared->ats_relid && newshared->ats_event == evtshared->ats_event && - newshared->ats_transition_capture == evtshared->ats_transition_capture && + newshared->ats_table == evtshared->ats_table && newshared->ats_firing_id == 0) break; } @@ -3892,8 +3894,7 @@ AfterTriggerExecute(AfterTriggerEvent event, FmgrInfo *finfo, Instrumentation *instr, MemoryContext per_tuple_context, TupleTableSlot *trig_tuple_slot1, - TupleTableSlot *trig_tuple_slot2, - TransitionCaptureState *transition_capture) + TupleTableSlot *trig_tuple_slot2) { AfterTriggerShared evtshared = GetTriggerSharedData(event); Oid tgoid = evtshared->ats_tgoid; @@ -3934,9 +3935,7 @@ AfterTriggerExecute(AfterTriggerEvent event, { case AFTER_TRIGGER_FDW_FETCH: { - Tuplestorestate *fdw_tuplestore = - GetTriggerTransitionTuplestore - (afterTriggers.fdw_tuplestores); + Tuplestorestate *fdw_tuplestore = GetCurrentFDWTuplestore(); if (!tuplestore_gettupleslot(fdw_tuplestore, true, false, trig_tuple_slot1)) @@ -4006,36 +4005,25 @@ AfterTriggerExecute(AfterTriggerEvent event, } /* - * Set up the tuplestore information. + * Set up the tuplestore information to let the trigger have access to + * transition tables. When we first make a transition table available to + * a trigger, mark it "closed" so that it cannot change anymore. If any + * additional events of the same type get queued in the current trigger + * query level, they'll go into new transition tables. */ LocTriggerData.tg_oldtable = LocTriggerData.tg_newtable = NULL; - if (transition_capture != NULL) + if (evtshared->ats_table) { if (LocTriggerData.tg_trigger->tgoldtable) - LocTriggerData.tg_oldtable = transition_capture->tcs_old_tuplestore; - if (LocTriggerData.tg_trigger->tgnewtable) { - /* - * Currently a trigger with transition tables may only be defined - * for a single event type (here AFTER INSERT or AFTER UPDATE, but - * not AFTER INSERT OR ...). - */ - Assert((TRIGGER_FOR_INSERT(LocTriggerData.tg_trigger->tgtype) != 0) ^ - (TRIGGER_FOR_UPDATE(LocTriggerData.tg_trigger->tgtype) != 0)); + LocTriggerData.tg_oldtable = evtshared->ats_table->old_tuplestore; + evtshared->ats_table->closed = true; + } - /* - * Show either the insert or update new tuple images, depending on - * which event type the trigger was registered for. A single - * statement may have produced both in the case of INSERT ... ON - * CONFLICT ... DO UPDATE, and in that case the event determines - * which tuplestore the trigger sees as the NEW TABLE. 
- */ - if (TRIGGER_FOR_INSERT(LocTriggerData.tg_trigger->tgtype)) - LocTriggerData.tg_newtable = - transition_capture->tcs_insert_tuplestore; - else - LocTriggerData.tg_newtable = - transition_capture->tcs_update_tuplestore; + if (LocTriggerData.tg_trigger->tgnewtable) + { + LocTriggerData.tg_newtable = evtshared->ats_table->new_tuplestore; + evtshared->ats_table->closed = true; } } @@ -4245,8 +4233,7 @@ afterTriggerInvokeEvents(AfterTriggerEventList *events, * won't try to re-fire it. */ AfterTriggerExecute(event, rel, trigdesc, finfo, instr, - per_tuple_context, slot1, slot2, - evtshared->ats_transition_capture); + per_tuple_context, slot1, slot2); /* * Mark the event as done. @@ -4296,6 +4283,166 @@ afterTriggerInvokeEvents(AfterTriggerEventList *events, } +/* + * GetAfterTriggersTableData + * + * Find or create an AfterTriggersTableData struct for the specified + * trigger event (relation + operation type). Ignore existing structs + * marked "closed"; we don't want to put any additional tuples into them, + * nor change their stmt-triggers-fired state. + * + * Note: the AfterTriggersTableData list is allocated in the current + * (sub)transaction's CurTransactionContext. This is OK because + * we don't need it to live past AfterTriggerEndQuery. + */ +static AfterTriggersTableData * +GetAfterTriggersTableData(Oid relid, CmdType cmdType) +{ + AfterTriggersTableData *table; + AfterTriggersQueryData *qs; + MemoryContext oldcxt; + ListCell *lc; + + /* Caller should have ensured query_depth is OK. */ + Assert(afterTriggers.query_depth >= 0 && + afterTriggers.query_depth < afterTriggers.maxquerydepth); + qs = &afterTriggers.query_stack[afterTriggers.query_depth]; + + foreach(lc, qs->tables) + { + table = (AfterTriggersTableData *) lfirst(lc); + if (table->relid == relid && table->cmdType == cmdType && + !table->closed) + return table; + } + + oldcxt = MemoryContextSwitchTo(CurTransactionContext); + + table = (AfterTriggersTableData *) palloc0(sizeof(AfterTriggersTableData)); + table->relid = relid; + table->cmdType = cmdType; + qs->tables = lappend(qs->tables, table); + + MemoryContextSwitchTo(oldcxt); + + return table; +} + + +/* + * MakeTransitionCaptureState + * + * Make a TransitionCaptureState object for the given TriggerDesc, target + * relation, and operation type. The TCS object holds all the state needed + * to decide whether to capture tuples in transition tables. + * + * If there are no triggers in 'trigdesc' that request relevant transition + * tables, then return NULL. + * + * The resulting object can be passed to the ExecAR* functions. The caller + * should set tcs_map or tcs_original_insert_tuple as appropriate when dealing + * with child tables. + * + * Note that we copy the flags from a parent table into this struct (rather + * than subsequently using the relation's TriggerDesc directly) so that we can + * use it to control collection of transition tuples from child tables. + * + * Per SQL spec, all operations of the same kind (INSERT/UPDATE/DELETE) + * on the same table during one query should share one transition table. + * Therefore, the Tuplestores are owned by an AfterTriggersTableData struct + * looked up using the table OID + CmdType, and are merely referenced by + * the TransitionCaptureState objects we hand out to callers. 
+ */ +TransitionCaptureState * +MakeTransitionCaptureState(TriggerDesc *trigdesc, Oid relid, CmdType cmdType) +{ + TransitionCaptureState *state; + bool need_old, + need_new; + AfterTriggersTableData *table; + MemoryContext oldcxt; + ResourceOwner saveResourceOwner; + + if (trigdesc == NULL) + return NULL; + + /* Detect which table(s) we need. */ + switch (cmdType) + { + case CMD_INSERT: + need_old = false; + need_new = trigdesc->trig_insert_new_table; + break; + case CMD_UPDATE: + need_old = trigdesc->trig_update_old_table; + need_new = trigdesc->trig_update_new_table; + break; + case CMD_DELETE: + need_old = trigdesc->trig_delete_old_table; + need_new = false; + break; + default: + elog(ERROR, "unexpected CmdType: %d", (int) cmdType); + need_old = need_new = false; /* keep compiler quiet */ + break; + } + if (!need_old && !need_new) + return NULL; + + /* Check state, like AfterTriggerSaveEvent. */ + if (afterTriggers.query_depth < 0) + elog(ERROR, "MakeTransitionCaptureState() called outside of query"); + + /* Be sure we have enough space to record events at this query depth. */ + if (afterTriggers.query_depth >= afterTriggers.maxquerydepth) + AfterTriggerEnlargeQueryState(); + + /* + * Find or create an AfterTriggersTableData struct to hold the + * tuplestore(s). If there's a matching struct but it's marked closed, + * ignore it; we need a newer one. + * + * Note: the AfterTriggersTableData list, as well as the tuplestores, are + * allocated in the current (sub)transaction's CurTransactionContext, and + * the tuplestores are managed by the (sub)transaction's resource owner. + * This is sufficient lifespan because we do not allow triggers using + * transition tables to be deferrable; they will be fired during + * AfterTriggerEndQuery, after which it's okay to delete the data. + */ + table = GetAfterTriggersTableData(relid, cmdType); + + /* Now create required tuplestore(s), if we don't have them already. 
*/ + oldcxt = MemoryContextSwitchTo(CurTransactionContext); + saveResourceOwner = CurrentResourceOwner; + PG_TRY(); + { + CurrentResourceOwner = CurTransactionResourceOwner; + if (need_old && table->old_tuplestore == NULL) + table->old_tuplestore = tuplestore_begin_heap(false, false, work_mem); + if (need_new && table->new_tuplestore == NULL) + table->new_tuplestore = tuplestore_begin_heap(false, false, work_mem); + } + PG_CATCH(); + { + CurrentResourceOwner = saveResourceOwner; + PG_RE_THROW(); + } + PG_END_TRY(); + CurrentResourceOwner = saveResourceOwner; + MemoryContextSwitchTo(oldcxt); + + /* Now build the TransitionCaptureState struct, in caller's context */ + state = (TransitionCaptureState *) palloc0(sizeof(TransitionCaptureState)); + state->tcs_delete_old_table = trigdesc->trig_delete_old_table; + state->tcs_update_old_table = trigdesc->trig_update_old_table; + state->tcs_update_new_table = trigdesc->trig_update_new_table; + state->tcs_insert_new_table = trigdesc->trig_insert_new_table; + state->tcs_private = table; + + return state; +} + + /* ---------- * AfterTriggerBeginXact() * @@ -4319,14 +4466,10 @@ AfterTriggerBeginXact(void) */ Assert(afterTriggers.state == NULL); Assert(afterTriggers.query_stack == NULL); - Assert(afterTriggers.fdw_tuplestores == NULL); Assert(afterTriggers.maxquerydepth == 0); Assert(afterTriggers.event_cxt == NULL); Assert(afterTriggers.events.head == NULL); - Assert(afterTriggers.state_stack == NULL); - Assert(afterTriggers.events_stack == NULL); - Assert(afterTriggers.depth_stack == NULL); - Assert(afterTriggers.firing_stack == NULL); + Assert(afterTriggers.trans_stack == NULL); Assert(afterTriggers.maxtransdepth == 0); } @@ -4362,9 +4505,6 @@ AfterTriggerBeginQuery(void) void AfterTriggerEndQuery(EState *estate) { - AfterTriggerEventList *events; - Tuplestorestate *fdw_tuplestore; - /* Must be inside a query, too */ Assert(afterTriggers.query_depth >= 0); @@ -4393,38 +4533,86 @@ AfterTriggerEndQuery(EState *estate) * will instead fire any triggers in a dedicated query level. Foreign key * enforcement triggers do add to the current query level, thanks to their * passing fire_triggers = false to SPI_execute_snapshot(). Other - * C-language triggers might do likewise. Be careful here: firing a - * trigger could result in query_stack being repalloc'd, so we can't save - * its address across afterTriggerInvokeEvents calls. + * C-language triggers might do likewise. * * If we find no firable events, we don't have to increment * firing_counter. */ for (;;) { - events = &afterTriggers.query_stack[afterTriggers.query_depth]; - if (afterTriggerMarkEvents(events, &afterTriggers.events, true)) + AfterTriggersQueryData *qs; + + /* + * Firing a trigger could result in query_stack being repalloc'd, so + * we must recalculate qs after each afterTriggerInvokeEvents call. 
+ */ + qs = &afterTriggers.query_stack[afterTriggers.query_depth]; + + if (afterTriggerMarkEvents(&qs->events, &afterTriggers.events, true)) { CommandId firing_id = afterTriggers.firing_counter++; /* OK to delete the immediate events after processing them */ - if (afterTriggerInvokeEvents(events, firing_id, estate, true)) + if (afterTriggerInvokeEvents(&qs->events, firing_id, estate, true)) break; /* all fired */ } else break; } - /* Release query-local storage for events, including tuplestore if any */ - fdw_tuplestore = afterTriggers.fdw_tuplestores[afterTriggers.query_depth]; - if (fdw_tuplestore) + /* Release query-level-local storage, including tuplestores if any */ + AfterTriggerFreeQuery(&afterTriggers.query_stack[afterTriggers.query_depth]); + + afterTriggers.query_depth--; +} + + +/* + * AfterTriggerFreeQuery + * Release subsidiary storage for a trigger query level. + * This includes closing down tuplestores. + * Note: it's important for this to be safe if interrupted by an error + * and then called again for the same query level. + */ +static void +AfterTriggerFreeQuery(AfterTriggersQueryData *qs) +{ + Tuplestorestate *ts; + List *tables; + ListCell *lc; + + /* Drop the trigger events */ + afterTriggerFreeEventList(&qs->events); + + /* Drop FDW tuplestore if any */ + ts = qs->fdw_tuplestore; + qs->fdw_tuplestore = NULL; + if (ts) + tuplestore_end(ts); + + /* Release per-table subsidiary storage */ + tables = qs->tables; + foreach(lc, tables) { - tuplestore_end(fdw_tuplestore); - afterTriggers.fdw_tuplestores[afterTriggers.query_depth] = NULL; + AfterTriggersTableData *table = (AfterTriggersTableData *) lfirst(lc); + + ts = table->old_tuplestore; + table->old_tuplestore = NULL; + if (ts) + tuplestore_end(ts); + ts = table->new_tuplestore; + table->new_tuplestore = NULL; + if (ts) + tuplestore_end(ts); } - afterTriggerFreeEventList(&afterTriggers.query_stack[afterTriggers.query_depth]); - afterTriggers.query_depth--; + /* + * Now free the AfterTriggersTableData structs and list cells. Reset list + * pointer first; if list_free_deep somehow gets an error, better to leak + * that storage than have an infinite loop. + */ + qs->tables = NIL; + list_free_deep(tables); } @@ -4521,10 +4709,7 @@ AfterTriggerEndXact(bool isCommit) * large, we let the eventual reset of TopTransactionContext free the * memory instead of doing it here. */ - afterTriggers.state_stack = NULL; - afterTriggers.events_stack = NULL; - afterTriggers.depth_stack = NULL; - afterTriggers.firing_stack = NULL; + afterTriggers.trans_stack = NULL; afterTriggers.maxtransdepth = 0; @@ -4534,7 +4719,6 @@ AfterTriggerEndXact(bool isCommit) * memory here. */ afterTriggers.query_stack = NULL; - afterTriggers.fdw_tuplestores = NULL; afterTriggers.maxquerydepth = 0; afterTriggers.state = NULL; @@ -4553,48 +4737,28 @@ AfterTriggerBeginSubXact(void) int my_level = GetCurrentTransactionNestLevel(); /* - * Allocate more space in the stacks if needed. (Note: because the + * Allocate more space in the trans_stack if needed. (Note: because the * minimum nest level of a subtransaction is 2, we waste the first couple - * entries of each array; not worth the notational effort to avoid it.) + * entries of the array; not worth the notational effort to avoid it.) 
*/ while (my_level >= afterTriggers.maxtransdepth) { if (afterTriggers.maxtransdepth == 0) { - MemoryContext old_cxt; - - old_cxt = MemoryContextSwitchTo(TopTransactionContext); - -#define DEFTRIG_INITALLOC 8 - afterTriggers.state_stack = (SetConstraintState *) - palloc(DEFTRIG_INITALLOC * sizeof(SetConstraintState)); - afterTriggers.events_stack = (AfterTriggerEventList *) - palloc(DEFTRIG_INITALLOC * sizeof(AfterTriggerEventList)); - afterTriggers.depth_stack = (int *) - palloc(DEFTRIG_INITALLOC * sizeof(int)); - afterTriggers.firing_stack = (CommandId *) - palloc(DEFTRIG_INITALLOC * sizeof(CommandId)); - afterTriggers.maxtransdepth = DEFTRIG_INITALLOC; - - MemoryContextSwitchTo(old_cxt); + /* Arbitrarily initialize for max of 8 subtransaction levels */ + afterTriggers.trans_stack = (AfterTriggersTransData *) + MemoryContextAlloc(TopTransactionContext, + 8 * sizeof(AfterTriggersTransData)); + afterTriggers.maxtransdepth = 8; } else { - /* repalloc will keep the stacks in the same context */ + /* repalloc will keep the stack in the same context */ int new_alloc = afterTriggers.maxtransdepth * 2; - afterTriggers.state_stack = (SetConstraintState *) - repalloc(afterTriggers.state_stack, - new_alloc * sizeof(SetConstraintState)); - afterTriggers.events_stack = (AfterTriggerEventList *) - repalloc(afterTriggers.events_stack, - new_alloc * sizeof(AfterTriggerEventList)); - afterTriggers.depth_stack = (int *) - repalloc(afterTriggers.depth_stack, - new_alloc * sizeof(int)); - afterTriggers.firing_stack = (CommandId *) - repalloc(afterTriggers.firing_stack, - new_alloc * sizeof(CommandId)); + afterTriggers.trans_stack = (AfterTriggersTransData *) + repalloc(afterTriggers.trans_stack, + new_alloc * sizeof(AfterTriggersTransData)); afterTriggers.maxtransdepth = new_alloc; } } @@ -4604,10 +4768,10 @@ AfterTriggerBeginSubXact(void) * is not saved until/unless changed. Likewise, we don't make a * per-subtransaction event context until needed. */ - afterTriggers.state_stack[my_level] = NULL; - afterTriggers.events_stack[my_level] = afterTriggers.events; - afterTriggers.depth_stack[my_level] = afterTriggers.query_depth; - afterTriggers.firing_stack[my_level] = afterTriggers.firing_counter; + afterTriggers.trans_stack[my_level].state = NULL; + afterTriggers.trans_stack[my_level].events = afterTriggers.events; + afterTriggers.trans_stack[my_level].query_depth = afterTriggers.query_depth; + afterTriggers.trans_stack[my_level].firing_counter = afterTriggers.firing_counter; } /* @@ -4631,70 +4795,58 @@ AfterTriggerEndSubXact(bool isCommit) { Assert(my_level < afterTriggers.maxtransdepth); /* If we saved a prior state, we don't need it anymore */ - state = afterTriggers.state_stack[my_level]; + state = afterTriggers.trans_stack[my_level].state; if (state != NULL) pfree(state); /* this avoids double pfree if error later: */ - afterTriggers.state_stack[my_level] = NULL; + afterTriggers.trans_stack[my_level].state = NULL; Assert(afterTriggers.query_depth == - afterTriggers.depth_stack[my_level]); + afterTriggers.trans_stack[my_level].query_depth); } else { /* * Aborting. It is possible subxact start failed before calling * AfterTriggerBeginSubXact, in which case we mustn't risk touching - * stack levels that aren't there. + * trans_stack levels that aren't there. */ if (my_level >= afterTriggers.maxtransdepth) return; /* - * Release any event lists from queries being aborted, and restore + * Release query-level storage for queries being aborted, and restore * query_depth to its pre-subxact value. 
This assumes that a * subtransaction will not add events to query levels started in a * earlier transaction state. */ - while (afterTriggers.query_depth > afterTriggers.depth_stack[my_level]) + while (afterTriggers.query_depth > afterTriggers.trans_stack[my_level].query_depth) { if (afterTriggers.query_depth < afterTriggers.maxquerydepth) - { - Tuplestorestate *ts; - - ts = afterTriggers.fdw_tuplestores[afterTriggers.query_depth]; - if (ts) - { - tuplestore_end(ts); - afterTriggers.fdw_tuplestores[afterTriggers.query_depth] = NULL; - } - - afterTriggerFreeEventList(&afterTriggers.query_stack[afterTriggers.query_depth]); - } - + AfterTriggerFreeQuery(&afterTriggers.query_stack[afterTriggers.query_depth]); afterTriggers.query_depth--; } Assert(afterTriggers.query_depth == - afterTriggers.depth_stack[my_level]); + afterTriggers.trans_stack[my_level].query_depth); /* * Restore the global deferred-event list to its former length, * discarding any events queued by the subxact. */ afterTriggerRestoreEventList(&afterTriggers.events, - &afterTriggers.events_stack[my_level]); + &afterTriggers.trans_stack[my_level].events); /* * Restore the trigger state. If the saved state is NULL, then this * subxact didn't save it, so it doesn't need restoring. */ - state = afterTriggers.state_stack[my_level]; + state = afterTriggers.trans_stack[my_level].state; if (state != NULL) { pfree(afterTriggers.state); afterTriggers.state = state; } /* this avoids double pfree if error later: */ - afterTriggers.state_stack[my_level] = NULL; + afterTriggers.trans_stack[my_level].state = NULL; /* * Scan for any remaining deferred events that were marked DONE or IN @@ -4704,7 +4856,7 @@ AfterTriggerEndSubXact(bool isCommit) * (This essentially assumes that the current subxact includes all * subxacts started after it.) */ - subxact_firing_id = afterTriggers.firing_stack[my_level]; + subxact_firing_id = afterTriggers.trans_stack[my_level].firing_counter; for_each_event_chunk(event, chunk, afterTriggers.events) { AfterTriggerShared evtshared = GetTriggerSharedData(event); @@ -4740,12 +4892,9 @@ AfterTriggerEnlargeQueryState(void) { int new_alloc = Max(afterTriggers.query_depth + 1, 8); - afterTriggers.query_stack = (AfterTriggerEventList *) + afterTriggers.query_stack = (AfterTriggersQueryData *) MemoryContextAlloc(TopTransactionContext, - new_alloc * sizeof(AfterTriggerEventList)); - afterTriggers.fdw_tuplestores = (Tuplestorestate **) - MemoryContextAllocZero(TopTransactionContext, - new_alloc * sizeof(Tuplestorestate *)); + new_alloc * sizeof(AfterTriggersQueryData)); afterTriggers.maxquerydepth = new_alloc; } else @@ -4755,27 +4904,22 @@ AfterTriggerEnlargeQueryState(void) int new_alloc = Max(afterTriggers.query_depth + 1, old_alloc * 2); - afterTriggers.query_stack = (AfterTriggerEventList *) + afterTriggers.query_stack = (AfterTriggersQueryData *) repalloc(afterTriggers.query_stack, - new_alloc * sizeof(AfterTriggerEventList)); - afterTriggers.fdw_tuplestores = (Tuplestorestate **) - repalloc(afterTriggers.fdw_tuplestores, - new_alloc * sizeof(Tuplestorestate *)); - /* Clear newly-allocated slots for subsequent lazy initialization. 
*/ - memset(afterTriggers.fdw_tuplestores + old_alloc, - 0, (new_alloc - old_alloc) * sizeof(Tuplestorestate *)); + new_alloc * sizeof(AfterTriggersQueryData)); afterTriggers.maxquerydepth = new_alloc; } - /* Initialize new query lists to empty */ + /* Initialize new array entries to empty */ while (init_depth < afterTriggers.maxquerydepth) { - AfterTriggerEventList *events; + AfterTriggersQueryData *qs = &afterTriggers.query_stack[init_depth]; - events = &afterTriggers.query_stack[init_depth]; - events->head = NULL; - events->tail = NULL; - events->tailfree = NULL; + qs->events.head = NULL; + qs->events.tail = NULL; + qs->events.tailfree = NULL; + qs->fdw_tuplestore = NULL; + qs->tables = NIL; ++init_depth; } @@ -4873,9 +5017,9 @@ AfterTriggerSetState(ConstraintsSetStmt *stmt) * save it so it can be restored if the subtransaction aborts. */ if (my_level > 1 && - afterTriggers.state_stack[my_level] == NULL) + afterTriggers.trans_stack[my_level].state == NULL) { - afterTriggers.state_stack[my_level] = + afterTriggers.trans_stack[my_level].state = SetConstraintStateCopy(afterTriggers.state); } @@ -5184,7 +5328,7 @@ AfterTriggerPendingOnRel(Oid relid) */ for (depth = 0; depth <= afterTriggers.query_depth && depth < afterTriggers.maxquerydepth; depth++) { - for_each_event_chunk(event, chunk, afterTriggers.query_stack[depth]) + for_each_event_chunk(event, chunk, afterTriggers.query_stack[depth].events) { AfterTriggerShared evtshared = GetTriggerSharedData(event); @@ -5229,7 +5373,7 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, TriggerDesc *trigdesc = relinfo->ri_TrigDesc; AfterTriggerEventData new_event; AfterTriggerSharedData new_shared; - char relkind = relinfo->ri_RelationDesc->rd_rel->relkind; + char relkind = rel->rd_rel->relkind; int tgtype_event; int tgtype_level; int i; @@ -5266,7 +5410,7 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, Tuplestorestate *old_tuplestore; Assert(oldtup != NULL); - old_tuplestore = transition_capture->tcs_old_tuplestore; + old_tuplestore = transition_capture->tcs_private->old_tuplestore; if (map != NULL) { @@ -5284,10 +5428,7 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, Tuplestorestate *new_tuplestore; Assert(newtup != NULL); - if (event == TRIGGER_EVENT_INSERT) - new_tuplestore = transition_capture->tcs_insert_tuplestore; - else - new_tuplestore = transition_capture->tcs_update_tuplestore; + new_tuplestore = transition_capture->tcs_private->new_tuplestore; if (original_insert_tuple != NULL) tuplestore_puttuple(new_tuplestore, original_insert_tuple); @@ -5316,6 +5457,11 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, * The event code will be used both as a bitmask and an array offset, so * validation is important to make sure we don't walk off the edge of our * arrays. + * + * Also, if we're considering statement-level triggers, check whether we + * already queued a set of them for this event, and cancel the prior set + * if so. This preserves the behavior that statement-level triggers fire + * just once per statement and fire after row-level triggers. 
*/ switch (event) { @@ -5334,6 +5480,8 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, Assert(newtup == NULL); ItemPointerSetInvalid(&(new_event.ate_ctid1)); ItemPointerSetInvalid(&(new_event.ate_ctid2)); + cancel_prior_stmt_triggers(RelationGetRelid(rel), + CMD_INSERT, event); } break; case TRIGGER_EVENT_DELETE: @@ -5351,6 +5499,8 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, Assert(newtup == NULL); ItemPointerSetInvalid(&(new_event.ate_ctid1)); ItemPointerSetInvalid(&(new_event.ate_ctid2)); + cancel_prior_stmt_triggers(RelationGetRelid(rel), + CMD_DELETE, event); } break; case TRIGGER_EVENT_UPDATE: @@ -5368,6 +5518,8 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, Assert(newtup == NULL); ItemPointerSetInvalid(&(new_event.ate_ctid1)); ItemPointerSetInvalid(&(new_event.ate_ctid2)); + cancel_prior_stmt_triggers(RelationGetRelid(rel), + CMD_UPDATE, event); } break; case TRIGGER_EVENT_TRUNCATE: @@ -5407,9 +5559,7 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, { if (fdw_tuplestore == NULL) { - fdw_tuplestore = - GetTriggerTransitionTuplestore - (afterTriggers.fdw_tuplestores); + fdw_tuplestore = GetCurrentFDWTuplestore(); new_event.ate_flags = AFTER_TRIGGER_FDW_FETCH; } else @@ -5465,6 +5615,8 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, /* * Fill in event structure and add it to the current query's queue. + * Note we set ats_table to NULL whenever this trigger doesn't use + * transition tables, to improve sharability of the shared event data. */ new_shared.ats_event = (event & TRIGGER_EVENT_OPMASK) | @@ -5474,11 +5626,13 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, new_shared.ats_tgoid = trigger->tgoid; new_shared.ats_relid = RelationGetRelid(rel); new_shared.ats_firing_id = 0; - /* deferrable triggers cannot access transition data */ - new_shared.ats_transition_capture = - trigger->tgdeferrable ? NULL : transition_capture; + if ((trigger->tgoldtable || trigger->tgnewtable) && + transition_capture != NULL) + new_shared.ats_table = transition_capture->tcs_private; + else + new_shared.ats_table = NULL; - afterTriggerAddEvent(&afterTriggers.query_stack[afterTriggers.query_depth], + afterTriggerAddEvent(&afterTriggers.query_stack[afterTriggers.query_depth].events, &new_event, &new_shared); } @@ -5496,6 +5650,100 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, } } +/* + * If we previously queued a set of AFTER STATEMENT triggers for the given + * relation + operation, and they've not been fired yet, cancel them. The + * caller will queue a fresh set that's after any row-level triggers that may + * have been queued by the current sub-statement, preserving (as much as + * possible) the property that AFTER ROW triggers fire before AFTER STATEMENT + * triggers, and that the latter only fire once. This deals with the + * situation where several FK enforcement triggers sequentially queue triggers + * for the same table into the same trigger query level. We can't fully + * prevent odd behavior though: if there are AFTER ROW triggers taking + * transition tables, we don't want to change the transition tables once the + * first such trigger has seen them. In such a case, any additional events + * will result in creating new transition tables and allowing new firings of + * statement triggers. + * + * This also saves the current event list location so that a later invocation + * of this function can cheaply find the triggers we're about to queue and + * cancel them. 
+ */ +static void +cancel_prior_stmt_triggers(Oid relid, CmdType cmdType, int tgevent) +{ + AfterTriggersTableData *table; + AfterTriggersQueryData *qs = &afterTriggers.query_stack[afterTriggers.query_depth]; + + /* + * We keep this state in the AfterTriggersTableData that also holds + * transition tables for the relation + operation. In this way, if we are + * forced to make a new set of transition tables because more tuples get + * entered after we've already fired triggers, we will allow a new set of + * statement triggers to get queued without canceling the old ones. + */ + table = GetAfterTriggersTableData(relid, cmdType); + + if (table->stmt_trig_done) + { + /* + * We want to start scanning from the tail location that existed just + * before we inserted any statement triggers. But the events list + * might've been entirely empty then, in which case scan from the + * current head. + */ + AfterTriggerEvent event; + AfterTriggerEventChunk *chunk; + + if (table->stmt_trig_events.tail) + { + chunk = table->stmt_trig_events.tail; + event = (AfterTriggerEvent) table->stmt_trig_events.tailfree; + } + else + { + chunk = qs->events.head; + event = NULL; + } + + for_each_chunk_from(chunk) + { + if (event == NULL) + event = (AfterTriggerEvent) CHUNK_DATA_START(chunk); + for_each_event_from(event, chunk) + { + AfterTriggerShared evtshared = GetTriggerSharedData(event); + + /* + * Exit loop when we reach events that aren't AS triggers for + * the target relation. + */ + if (evtshared->ats_relid != relid) + goto done; + if ((evtshared->ats_event & TRIGGER_EVENT_OPMASK) != tgevent) + goto done; + if (!TRIGGER_FIRED_FOR_STATEMENT(evtshared->ats_event)) + goto done; + if (!TRIGGER_FIRED_AFTER(evtshared->ats_event)) + goto done; + /* OK, mark it DONE */ + event->ate_flags &= ~AFTER_TRIGGER_IN_PROGRESS; + event->ate_flags |= AFTER_TRIGGER_DONE; + } + /* signal we must reinitialize event ptr for next chunk */ + event = NULL; + } + } +done: + + /* In any case, save current insertion point for next time */ + table->stmt_trig_done = true; + table->stmt_trig_events = qs->events; +} + +/* + * SQL function pg_trigger_depth() + */ Datum pg_trigger_depth(PG_FUNCTION_ARGS) { diff --git a/src/backend/executor/README b/src/backend/executor/README index a0045067fb..b3e74aa1a5 100644 --- a/src/backend/executor/README +++ b/src/backend/executor/README @@ -241,11 +241,11 @@ This is a sketch of control flow for full query processing: CreateExecutorState creates per-query context switch to per-query context to run ExecInitNode + AfterTriggerBeginQuery ExecInitNode --- recursively scans plan tree CreateExprContext creates per-tuple context ExecInitExpr - AfterTriggerBeginQuery ExecutorRun ExecProcNode --- recursively called in per-query context diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 4b594d489c..62fb05efac 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -251,11 +251,6 @@ standard_ExecutorStart(QueryDesc *queryDesc, int eflags) estate->es_top_eflags = eflags; estate->es_instrument = queryDesc->instrument_options; - /* - * Initialize the plan state tree - */ - InitPlan(queryDesc, eflags); - /* * Set up an AFTER-trigger statement context, unless told not to, or * unless it's EXPLAIN-only mode (when ExecutorFinish won't be called). 
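
As a minimal SQL sketch of the case cancel_prior_stmt_triggers() handles (hypothetical table, trigger, and function names — not part of this patch): a single outer statement whose two foreign-key enforcement actions each issue a cascaded DELETE against the same table. Previously the table's AFTER STATEMENT trigger could fire once per enforcement action; with this change the earlier-queued set is cancelled, so it is expected to fire just once.

CREATE TABLE parent (a int PRIMARY KEY);
CREATE TABLE child (
    x int REFERENCES parent ON DELETE CASCADE,
    y int REFERENCES parent ON DELETE CASCADE
);

CREATE FUNCTION note_as_delete() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    -- fires once per surviving (non-cancelled) statement-trigger set
    RAISE NOTICE 'AFTER STATEMENT DELETE on child';
    RETURN NULL;
END;
$$;

CREATE TRIGGER child_as_trig
    AFTER DELETE ON child
    FOR EACH STATEMENT EXECUTE PROCEDURE note_as_delete();

INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (1, 1);

-- Both enforcement triggers queue statement-trigger events for child in the
-- same trigger query level; queuing the second set cancels the first, so a
-- single NOTICE is expected for this one outer statement.
DELETE FROM parent;
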
@@ -263,6 +258,11 @@ standard_ExecutorStart(QueryDesc *queryDesc, int eflags) if (!(eflags & (EXEC_FLAG_SKIP_TRIGGERS | EXEC_FLAG_EXPLAIN_ONLY))) AfterTriggerBeginQuery(); + /* + * Initialize the plan state tree + */ + InitPlan(queryDesc, eflags); + MemoryContextSwitchTo(oldcontext); } @@ -1174,6 +1174,7 @@ CheckValidResultRel(ResultRelInfo *resultRelInfo, CmdType operation) switch (operation) { case CMD_INSERT: + /* * If foreign partition to do tuple-routing for, skip the * check; it's disallowed elsewhere. diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 49586a3c03..845c409540 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -343,6 +343,9 @@ ExecInsert(ModifyTableState *mtstate, mtstate->mt_transition_capture->tcs_map = NULL; } } + if (mtstate->mt_oc_transition_capture != NULL) + mtstate->mt_oc_transition_capture->tcs_map = + mtstate->mt_transition_tupconv_maps[leaf_part_index]; /* * We might need to convert from the parent rowtype to the partition @@ -1158,6 +1161,8 @@ lreplace:; /* AFTER ROW UPDATE Triggers */ ExecARUpdateTriggers(estate, resultRelInfo, tupleid, oldtuple, tuple, recheckIndexes, + mtstate->operation == CMD_INSERT ? + mtstate->mt_oc_transition_capture : mtstate->mt_transition_capture); list_free(recheckIndexes); @@ -1444,7 +1449,7 @@ fireASTriggers(ModifyTableState *node) if (node->mt_onconflict == ONCONFLICT_UPDATE) ExecASUpdateTriggers(node->ps.state, resultRelInfo, - node->mt_transition_capture); + node->mt_oc_transition_capture); ExecASInsertTriggers(node->ps.state, resultRelInfo, node->mt_transition_capture); break; @@ -1474,14 +1479,24 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) /* Check for transition tables on the directly targeted relation. */ mtstate->mt_transition_capture = - MakeTransitionCaptureState(targetRelInfo->ri_TrigDesc); + MakeTransitionCaptureState(targetRelInfo->ri_TrigDesc, + RelationGetRelid(targetRelInfo->ri_RelationDesc), + mtstate->operation); + if (mtstate->operation == CMD_INSERT && + mtstate->mt_onconflict == ONCONFLICT_UPDATE) + mtstate->mt_oc_transition_capture = + MakeTransitionCaptureState(targetRelInfo->ri_TrigDesc, + RelationGetRelid(targetRelInfo->ri_RelationDesc), + CMD_UPDATE); /* * If we found that we need to collect transition tuples then we may also * need tuple conversion maps for any children that have TupleDescs that - * aren't compatible with the tuplestores. + * aren't compatible with the tuplestores. (We can share these maps + * between the regular and ON CONFLICT cases.) */ - if (mtstate->mt_transition_capture != NULL) + if (mtstate->mt_transition_capture != NULL || + mtstate->mt_oc_transition_capture != NULL) { ResultRelInfo *resultRelInfos; int numResultRelInfos; @@ -1522,10 +1537,12 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) /* * Install the conversion map for the first plan for UPDATE and DELETE * operations. It will be advanced each time we switch to the next - * plan. (INSERT operations set it every time.) + * plan. (INSERT operations set it every time, so we need not update + * mtstate->mt_oc_transition_capture here.) 
*/ - mtstate->mt_transition_capture->tcs_map = - mtstate->mt_transition_tupconv_maps[0]; + if (mtstate->mt_transition_capture) + mtstate->mt_transition_capture->tcs_map = + mtstate->mt_transition_tupconv_maps[0]; } } @@ -1629,13 +1646,19 @@ ExecModifyTable(PlanState *pstate) estate->es_result_relation_info = resultRelInfo; EvalPlanQualSetPlan(&node->mt_epqstate, subplanstate->plan, node->mt_arowmarks[node->mt_whichplan]); + /* Prepare to convert transition tuples from this child. */ if (node->mt_transition_capture != NULL) { - /* Prepare to convert transition tuples from this child. */ Assert(node->mt_transition_tupconv_maps != NULL); node->mt_transition_capture->tcs_map = node->mt_transition_tupconv_maps[node->mt_whichplan]; } + if (node->mt_oc_transition_capture != NULL) + { + Assert(node->mt_transition_tupconv_maps != NULL); + node->mt_oc_transition_capture->tcs_map = + node->mt_transition_tupconv_maps[node->mt_whichplan]; + } continue; } else @@ -1934,8 +1957,12 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) mtstate->mt_partition_tuple_slot = partition_tuple_slot; } - /* Build state for collecting transition tuples */ - ExecSetupTransitionCaptureState(mtstate, estate); + /* + * Build state for collecting transition tuples. This requires having a + * valid trigger query context, so skip it in explain-only mode. + */ + if (!(eflags & EXEC_FLAG_EXPLAIN_ONLY)) + ExecSetupTransitionCaptureState(mtstate, estate); /* * Initialize any WITH CHECK OPTION constraints if needed. @@ -2318,16 +2345,6 @@ ExecEndModifyTable(ModifyTableState *node) { int i; - /* - * Free transition tables, unless this query is being run in - * EXEC_FLAG_SKIP_TRIGGERS mode, which means that it may have queued AFTER - * triggers that won't be run till later. In that case we'll just leak - * the transition tables till end of (sub)transaction. - */ - if (node->mt_transition_capture != NULL && - !(node->ps.state->es_top_eflags & EXEC_FLAG_SKIP_TRIGGERS)) - DestroyTransitionCaptureState(node->mt_transition_capture); - /* * Allow any FDWs to shut down */ diff --git a/src/include/commands/trigger.h b/src/include/commands/trigger.h index aeb363f13e..adbcfa1297 100644 --- a/src/include/commands/trigger.h +++ b/src/include/commands/trigger.h @@ -43,13 +43,21 @@ typedef struct TriggerData /* * The state for capturing old and new tuples into transition tables for a - * single ModifyTable node. + * single ModifyTable node (or other operation source, e.g. copy.c). + * + * This is per-caller to avoid conflicts in setting tcs_map or + * tcs_original_insert_tuple. Note, however, that the pointed-to + * private data may be shared across multiple callers. */ +struct AfterTriggersTableData; /* private in trigger.c */ + typedef struct TransitionCaptureState { /* * Is there at least one trigger specifying each transition relation on * the relation explicitly named in the DML statement or COPY command? + * Note: in current usage, these flags could be part of the private state, + * but it seems possibly useful to let callers see them. */ bool tcs_delete_old_table; bool tcs_update_old_table; @@ -60,7 +68,7 @@ typedef struct TransitionCaptureState * For UPDATE and DELETE, AfterTriggerSaveEvent may need to convert the * new and old tuples from a child table's format to the format of the * relation named in a query so that it is compatible with the transition - * tuplestores. + * tuplestores. The caller must store the conversion map here if so. 
*/ TupleConversionMap *tcs_map; @@ -74,17 +82,9 @@ typedef struct TransitionCaptureState HeapTuple tcs_original_insert_tuple; /* - * The tuplestores backing the transition tables. We use separate - * tuplestores for INSERT and UPDATE, because INSERT ... ON CONFLICT ... - * DO UPDATE causes INSERT and UPDATE triggers to fire and needs a way to - * keep track of the new tuple images resulting from the two cases - * separately. We only need a single old image tuplestore, because there - * is no statement that can both update and delete at the same time. + * Private data including the tuplestore(s) into which to insert tuples. */ - Tuplestorestate *tcs_old_tuplestore; /* for DELETE and UPDATE old - * images */ - Tuplestorestate *tcs_insert_tuplestore; /* for INSERT new images */ - Tuplestorestate *tcs_update_tuplestore; /* for UPDATE new images */ + struct AfterTriggersTableData *tcs_private; } TransitionCaptureState; /* @@ -174,8 +174,9 @@ extern void RelationBuildTriggers(Relation relation); extern TriggerDesc *CopyTriggerDesc(TriggerDesc *trigdesc); extern const char *FindTriggerIncompatibleWithInheritance(TriggerDesc *trigdesc); -extern TransitionCaptureState *MakeTransitionCaptureState(TriggerDesc *trigdesc); -extern void DestroyTransitionCaptureState(TransitionCaptureState *tcs); + +extern TransitionCaptureState *MakeTransitionCaptureState(TriggerDesc *trigdesc, + Oid relid, CmdType cmdType); extern void FreeTriggerDesc(TriggerDesc *trigdesc); diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 90a60abc4d..c6d3021c85 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -983,7 +983,9 @@ typedef struct ModifyTableState /* Per partition tuple conversion map */ TupleTableSlot *mt_partition_tuple_slot; struct TransitionCaptureState *mt_transition_capture; - /* controls transition table population */ + /* controls transition table population for specified operation */ + struct TransitionCaptureState *mt_oc_transition_capture; + /* controls transition table population for INSERT...ON CONFLICT UPDATE */ TupleConversionMap **mt_transition_tupconv_maps; /* Per plan/partition tuple conversion */ } ModifyTableState; diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 620fac1e2c..3ab6be3421 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -2217,6 +2217,23 @@ with wcte as (insert into table1 values (42)) insert into table2 values ('hello world'); NOTICE: trigger = table2_trig, new table = ("hello world") NOTICE: trigger = table1_trig, new table = (42) +with wcte as (insert into table1 values (43)) + insert into table1 values (44); +NOTICE: trigger = table1_trig, new table = (43), (44) +select * from table1; + a +---- + 42 + 44 + 43 +(3 rows) + +select * from table2; + a +------------- + hello world +(1 row) + drop table table1; drop table table2; -- @@ -2256,6 +2273,14 @@ create trigger my_table_multievent_trig after insert or update on my_table referencing new table as new_table for each statement execute procedure dump_insert(); ERROR: transition tables cannot be specified for triggers with more than one event +-- +-- Verify that you can't create a trigger with transition tables with +-- a column list. 
+-- +create trigger my_table_col_update_trig + after update of b on my_table referencing new table as new_table + for each statement execute procedure dump_insert(); +ERROR: transition tables cannot be specified for triggers with column lists drop table my_table; -- -- Test firing of triggers with transition tables by foreign key cascades @@ -2299,8 +2324,7 @@ select * from trig_table; (6 rows) delete from refd_table where length(b) = 3; -NOTICE: trigger = trig_table_delete_trig, old table = (2,"two a"), (2,"two b") -NOTICE: trigger = trig_table_delete_trig, old table = (11,"one a"), (11,"one b") +NOTICE: trigger = trig_table_delete_trig, old table = (2,"two a"), (2,"two b"), (11,"one a"), (11,"one b") select * from trig_table; a | b ---+--------- @@ -2309,6 +2333,30 @@ select * from trig_table; (2 rows) drop table refd_table, trig_table; +-- +-- self-referential FKs are even more fun +-- +create table self_ref (a int primary key, + b int references self_ref(a) on delete cascade); +create trigger self_ref_r_trig + after delete on self_ref referencing old table as old_table + for each row execute procedure dump_delete(); +create trigger self_ref_s_trig + after delete on self_ref referencing old table as old_table + for each statement execute procedure dump_delete(); +insert into self_ref values (1, null), (2, 1), (3, 2); +delete from self_ref where a = 1; +NOTICE: trigger = self_ref_r_trig, old table = (1,), (2,1) +NOTICE: trigger = self_ref_r_trig, old table = (1,), (2,1) +NOTICE: trigger = self_ref_s_trig, old table = (1,), (2,1) +NOTICE: trigger = self_ref_r_trig, old table = (3,2) +NOTICE: trigger = self_ref_s_trig, old table = (3,2) +-- without AR trigger, cascaded deletes all end up in one transition table +drop trigger self_ref_r_trig on self_ref; +insert into self_ref values (1, null), (2, 1), (3, 2), (4, 3); +delete from self_ref where a = 1; +NOTICE: trigger = self_ref_s_trig, old table = (1,), (2,1), (3,2), (4,3) +drop table self_ref; -- cleanup drop function dump_insert(); drop function dump_update(); diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index c6deb56c50..30bb7d17b0 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -1729,6 +1729,12 @@ create trigger table2_trig with wcte as (insert into table1 values (42)) insert into table2 values ('hello world'); +with wcte as (insert into table1 values (43)) + insert into table1 values (44); + +select * from table1; +select * from table2; + drop table table1; drop table table2; @@ -1769,6 +1775,15 @@ create trigger my_table_multievent_trig after insert or update on my_table referencing new table as new_table for each statement execute procedure dump_insert(); +-- +-- Verify that you can't create a trigger with transition tables with +-- a column list. 
+-- + +create trigger my_table_col_update_trig + after update of b on my_table referencing new table as new_table + for each statement execute procedure dump_insert(); + drop table my_table; -- @@ -1812,6 +1827,33 @@ select * from trig_table; drop table refd_table, trig_table; +-- +-- self-referential FKs are even more fun +-- + +create table self_ref (a int primary key, + b int references self_ref(a) on delete cascade); + +create trigger self_ref_r_trig + after delete on self_ref referencing old table as old_table + for each row execute procedure dump_delete(); +create trigger self_ref_s_trig + after delete on self_ref referencing old table as old_table + for each statement execute procedure dump_delete(); + +insert into self_ref values (1, null), (2, 1), (3, 2); + +delete from self_ref where a = 1; + +-- without AR trigger, cascaded deletes all end up in one transition table +drop trigger self_ref_r_trig on self_ref; + +insert into self_ref values (1, null), (2, 1), (3, 2), (4, 3); + +delete from self_ref where a = 1; + +drop table self_ref; + -- cleanup drop function dump_insert(); drop function dump_update(); From 936df5ba80a46fb40bfc93da49a709cbc0aafe5e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 16 Sep 2017 15:31:26 -0400 Subject: [PATCH 0194/1087] Doc: add example of transition table use in a trigger. I noticed that there were exactly no complete examples of use of a transition table in a trigger function, and no clear description of just how you'd do it either. Improve that. --- doc/src/sgml/plpgsql.sgml | 78 ++++++++++++++++++++++++++++++++++++++- doc/src/sgml/trigger.sgml | 6 ++- 2 files changed, 81 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index 6dc438a152..d18b48c40c 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -4013,7 +4013,7 @@ CREATE OR REPLACE FUNCTION process_emp_audit() RETURNS TRIGGER AS $emp_audit$ BEGIN -- -- Create a row in emp_audit to reflect the operation performed on emp, - -- make use of the special variable TG_OP to work out the operation. + -- making use of the special variable TG_OP to work out the operation. -- IF (TG_OP = 'DELETE') THEN INSERT INTO emp_audit SELECT 'D', now(), user, OLD.*; @@ -4265,6 +4265,82 @@ UPDATE sales_fact SET units_sold = units_sold * 2; SELECT * FROM sales_summary_bytime; + + + AFTER triggers can also make use of transition + tables to inspect the entire set of rows changed by the triggering + statement. The CREATE TRIGGER command assigns names to one + or both transition tables, and then the function can refer to those names + as though they were read-only temporary tables. + shows an example. + + + + Auditing with Transition Tables + + + This example produces the same results as + , but instead of using a + trigger that fires for every row, it uses a trigger that fires once + per statement, after collecting the relevant information in a transition + table. This can be significantly faster than the row-trigger approach + when the invoking statement has modified many rows. Notice that we must + make a separate trigger declaration for each kind of event, since the + REFERENCING clauses must be different for each case. But + this does not stop us from using a single trigger function if we choose. + (In practice, it might be better to use three separate functions and + avoid the run-time tests on TG_OP.) 
+ + + +CREATE TABLE emp ( + empname text NOT NULL, + salary integer +); + +CREATE TABLE emp_audit( + operation char(1) NOT NULL, + stamp timestamp NOT NULL, + userid text NOT NULL, + empname text NOT NULL, + salary integer +); + +CREATE OR REPLACE FUNCTION process_emp_audit() RETURNS TRIGGER AS $emp_audit$ + BEGIN + -- + -- Create rows in emp_audit to reflect the operations performed on emp, + -- making use of the special variable TG_OP to work out the operation. + -- + IF (TG_OP = 'DELETE') THEN + INSERT INTO emp_audit + SELECT 'D', now(), user, o.* FROM old_table o; + ELSIF (TG_OP = 'UPDATE') THEN + INSERT INTO emp_audit + SELECT 'U', now(), user, n.* FROM new_table n; + ELSIF (TG_OP = 'INSERT') THEN + INSERT INTO emp_audit + SELECT 'I', now(), user, n.* FROM new_table n; + END IF; + RETURN NULL; -- result is ignored since this is an AFTER trigger + END; +$emp_audit$ LANGUAGE plpgsql; + +CREATE TRIGGER emp_audit_ins + AFTER INSERT ON emp + REFERENCING NEW TABLE AS new_table + FOR EACH STATEMENT EXECUTE PROCEDURE process_emp_audit(); +CREATE TRIGGER emp_audit_upd + AFTER UPDATE ON emp + REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table + FOR EACH STATEMENT EXECUTE PROCEDURE process_emp_audit(); +CREATE TRIGGER emp_audit_del + AFTER DELETE ON emp + REFERENCING OLD TABLE AS old_table + FOR EACH STATEMENT EXECUTE PROCEDURE process_emp_audit(); + + + diff --git a/doc/src/sgml/trigger.sgml b/doc/src/sgml/trigger.sgml index a16256056f..f5f74af5a1 100644 --- a/doc/src/sgml/trigger.sgml +++ b/doc/src/sgml/trigger.sgml @@ -317,9 +317,11 @@ be created to make the sets of affected rows available to the trigger. AFTER ROW triggers can also request transition tables, so that they can see the total changes in the table as well as the change in - the individual row they are currently being fired for. The syntax for + the individual row they are currently being fired for. The method for examining the transition tables again depends on the programming language - that is being used. + that is being used, but the typical approach is to make the transition + tables act like read-only temporary tables that can be accessed by SQL + commands issued within the trigger function. From cad22075bc2ce9c1fbe61e8d3969d4dbdb5bc1f3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 17 Sep 2017 11:35:27 -0400 Subject: [PATCH 0195/1087] Fix bogus size calculation introduced by commit cc5f81366. The elements of RecordCacheArray are TupleDesc, not TupleDesc *. Those are actually the same size, so that this error is harmless, but it's still wrong --- and it might bite us someday, if TupleDesc ever became a struct, say. Per Coverity. --- src/backend/utils/cache/typcache.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index fd80c128cb..16c52c5a38 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -1386,7 +1386,7 @@ ensure_record_cache_typmod_slot_exists(int32 typmod) RecordCacheArray = (TupleDesc *) repalloc(RecordCacheArray, newlen * sizeof(TupleDesc)); memset(RecordCacheArray + RecordCacheArrayLen, 0, - (newlen - RecordCacheArrayLen) * sizeof(TupleDesc *)); + (newlen - RecordCacheArrayLen) * sizeof(TupleDesc)); RecordCacheArrayLen = newlen; } } From fd31f9f033213e2ebf00b57ef837e1828c338fc4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 17 Sep 2017 12:16:38 -0400 Subject: [PATCH 0196/1087] Ensure that BEFORE STATEMENT triggers fire the right number of times. 
Commit 0f79440fb introduced mechanism to keep AFTER STATEMENT triggers from firing more than once per statement, which was formerly possible if more than one FK enforcement action had to be applied to a given table. Add a similar mechanism for BEFORE STATEMENT triggers, so that we don't have the unexpected situation of firing BEFORE STATEMENT triggers more often than AFTER STATEMENT. As with the previous patch, back-patch to v10. Discussion: https://postgr.es/m/22315.1505584992@sss.pgh.pa.us --- doc/src/sgml/ref/create_trigger.sgml | 14 +++-- src/backend/commands/trigger.c | 72 +++++++++++++++++++++----- src/test/regress/expected/triggers.out | 12 +++++ src/test/regress/sql/triggers.sql | 6 +++ 4 files changed, 87 insertions(+), 17 deletions(-) diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index 065c827271..2496250bed 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -250,7 +250,8 @@ CREATE [ CONSTRAINT ] TRIGGER name One of INSERT, UPDATE, DELETE, or TRUNCATE; this specifies the event that will fire the trigger. Multiple - events can be specified using OR. + events can be specified using OR, except when + transition relations are requested. @@ -263,7 +264,10 @@ UPDATE OF column_name1 [, column_name2UPDATE command. - INSTEAD OF UPDATE events do not support lists of columns. + + INSTEAD OF UPDATE events do not allow a list of columns. + A column list cannot be specified when requesting transition relations, + either. @@ -490,9 +494,9 @@ UPDATE OF column_name1 [, column_name2AFTER STATEMENT triggers that are present will be fired - once per creation of a transition relation, ensuring that the triggers see - each affected row once and only once. + any statement-level triggers that are present will be fired once per + creation of a transition relation set, ensuring that the triggers see + each affected row in a transition relation once and only once. 
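
A minimal SQL sketch of the behavior this patch enforces (hypothetical table and function names): a writable CTE and its containing statement both insert into the same table. Each arm has its own ModifyTable node, so before this fix the table's BEFORE STATEMENT trigger could fire once per node rather than once per statement.

CREATE TABLE t (a int);

CREATE FUNCTION note_bs_insert() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'BEFORE STATEMENT INSERT on t';
    RETURN NULL;
END;
$$;

CREATE TRIGGER t_bs_trig
    BEFORE INSERT ON t
    FOR EACH STATEMENT EXECUTE PROCEDURE note_bs_insert();

-- The CTE and the outer INSERT are separate plan nodes but one statement;
-- with before_stmt_triggers_fired() the NOTICE is expected to appear once.
WITH w AS (INSERT INTO t VALUES (1))
INSERT INTO t VALUES (2);
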
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index 7e391a1092..7b411c130b 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -100,6 +100,7 @@ static void AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, List *recheckIndexes, Bitmapset *modifiedCols, TransitionCaptureState *transition_capture); static void AfterTriggerEnlargeQueryState(void); +static bool before_stmt_triggers_fired(Oid relid, CmdType cmdType); /* @@ -2229,6 +2230,11 @@ ExecBSInsertTriggers(EState *estate, ResultRelInfo *relinfo) if (!trigdesc->trig_insert_before_statement) return; + /* no-op if we already fired BS triggers in this context */ + if (before_stmt_triggers_fired(RelationGetRelid(relinfo->ri_RelationDesc), + CMD_INSERT)) + return; + LocTriggerData.type = T_TriggerData; LocTriggerData.tg_event = TRIGGER_EVENT_INSERT | TRIGGER_EVENT_BEFORE; @@ -2439,6 +2445,11 @@ ExecBSDeleteTriggers(EState *estate, ResultRelInfo *relinfo) if (!trigdesc->trig_delete_before_statement) return; + /* no-op if we already fired BS triggers in this context */ + if (before_stmt_triggers_fired(RelationGetRelid(relinfo->ri_RelationDesc), + CMD_DELETE)) + return; + LocTriggerData.type = T_TriggerData; LocTriggerData.tg_event = TRIGGER_EVENT_DELETE | TRIGGER_EVENT_BEFORE; @@ -2651,6 +2662,11 @@ ExecBSUpdateTriggers(EState *estate, ResultRelInfo *relinfo) if (!trigdesc->trig_update_before_statement) return; + /* no-op if we already fired BS triggers in this context */ + if (before_stmt_triggers_fired(RelationGetRelid(relinfo->ri_RelationDesc), + CMD_UPDATE)) + return; + updatedCols = GetUpdatedColumns(relinfo, estate); LocTriggerData.type = T_TriggerData; @@ -3523,11 +3539,11 @@ typedef struct AfterTriggerEventList * * We create an AfterTriggersTableData struct for each target table of the * current query, and each operation mode (INSERT/UPDATE/DELETE), that has - * either transition tables or AFTER STATEMENT triggers. This is used to + * either transition tables or statement-level triggers. This is used to * hold the relevant transition tables, as well as info tracking whether - * we already queued the AFTER STATEMENT triggers. (We use that info to - * prevent, as much as possible, firing the same AFTER STATEMENT trigger - * more than once per statement.) These structs, along with the transition + * we already queued the statement triggers. (We use that info to prevent + * firing the same statement triggers more than once per statement, or really + * once per transition table set.) These structs, along with the transition * table tuplestores, live in the (sub)transaction's CurTransactionContext. * That's sufficient lifespan because we don't allow transition tables to be * used by deferrable triggers, so they only need to survive until @@ -3576,8 +3592,9 @@ struct AfterTriggersTableData Oid relid; /* target table's OID */ CmdType cmdType; /* event type, CMD_INSERT/UPDATE/DELETE */ bool closed; /* true when no longer OK to add tuples */ - bool stmt_trig_done; /* did we already queue stmt-level triggers? */ - AfterTriggerEventList stmt_trig_events; /* if so, saved list pointer */ + bool before_trig_done; /* did we already queue BS triggers? */ + bool after_trig_done; /* did we already queue AS triggers? 
*/ + AfterTriggerEventList after_trig_events; /* if so, saved list pointer */ Tuplestorestate *old_tuplestore; /* "old" transition table, if any */ Tuplestorestate *new_tuplestore; /* "new" transition table, if any */ }; @@ -5650,6 +5667,37 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, } } +/* + * Detect whether we already queued BEFORE STATEMENT triggers for the given + * relation + operation, and set the flag so the next call will report "true". + */ +static bool +before_stmt_triggers_fired(Oid relid, CmdType cmdType) +{ + bool result; + AfterTriggersTableData *table; + + /* Check state, like AfterTriggerSaveEvent. */ + if (afterTriggers.query_depth < 0) + elog(ERROR, "before_stmt_triggers_fired() called outside of query"); + + /* Be sure we have enough space to record events at this query depth. */ + if (afterTriggers.query_depth >= afterTriggers.maxquerydepth) + AfterTriggerEnlargeQueryState(); + + /* + * We keep this state in the AfterTriggersTableData that also holds + * transition tables for the relation + operation. In this way, if we are + * forced to make a new set of transition tables because more tuples get + * entered after we've already fired triggers, we will allow a new set of + * statement triggers to get queued. + */ + table = GetAfterTriggersTableData(relid, cmdType); + result = table->before_trig_done; + table->before_trig_done = true; + return result; +} + /* * If we previously queued a set of AFTER STATEMENT triggers for the given * relation + operation, and they've not been fired yet, cancel them. The @@ -5684,7 +5732,7 @@ cancel_prior_stmt_triggers(Oid relid, CmdType cmdType, int tgevent) */ table = GetAfterTriggersTableData(relid, cmdType); - if (table->stmt_trig_done) + if (table->after_trig_done) { /* * We want to start scanning from the tail location that existed just @@ -5695,10 +5743,10 @@ cancel_prior_stmt_triggers(Oid relid, CmdType cmdType, int tgevent) AfterTriggerEvent event; AfterTriggerEventChunk *chunk; - if (table->stmt_trig_events.tail) + if (table->after_trig_events.tail) { - chunk = table->stmt_trig_events.tail; - event = (AfterTriggerEvent) table->stmt_trig_events.tailfree; + chunk = table->after_trig_events.tail; + event = (AfterTriggerEvent) table->after_trig_events.tailfree; } else { @@ -5737,8 +5785,8 @@ cancel_prior_stmt_triggers(Oid relid, CmdType cmdType, int tgevent) done: /* In any case, save current insertion point for next time */ - table->stmt_trig_done = true; - table->stmt_trig_events = qs->events; + table->after_trig_done = true; + table->after_trig_events = qs->events; } /* diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 3ab6be3421..85d948741e 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -2289,6 +2289,9 @@ create table refd_table (a int primary key, b text); create table trig_table (a int, b text, foreign key (a) references refd_table on update cascade on delete cascade ); +create trigger trig_table_before_trig + before insert or update or delete on trig_table + for each statement execute procedure trigger_func('trig_table'); create trigger trig_table_insert_trig after insert on trig_table referencing new table as new_table for each statement execute procedure dump_insert(); @@ -2309,8 +2312,10 @@ insert into trig_table values (2, 'two b'), (3, 'three a'), (3, 'three b'); +NOTICE: trigger_func(trig_table) called: action = INSERT, when = BEFORE, level = STATEMENT NOTICE: trigger = trig_table_insert_trig, new 
table = (1,"one a"), (1,"one b"), (2,"two a"), (2,"two b"), (3,"three a"), (3,"three b") update refd_table set a = 11 where b = 'one'; +NOTICE: trigger_func(trig_table) called: action = UPDATE, when = BEFORE, level = STATEMENT NOTICE: trigger = trig_table_update_trig, old table = (1,"one a"), (1,"one b"), new table = (11,"one a"), (11,"one b") select * from trig_table; a | b @@ -2324,6 +2329,7 @@ select * from trig_table; (6 rows) delete from refd_table where length(b) = 3; +NOTICE: trigger_func(trig_table) called: action = DELETE, when = BEFORE, level = STATEMENT NOTICE: trigger = trig_table_delete_trig, old table = (2,"two a"), (2,"two b"), (11,"one a"), (11,"one b") select * from trig_table; a | b @@ -2338,6 +2344,9 @@ drop table refd_table, trig_table; -- create table self_ref (a int primary key, b int references self_ref(a) on delete cascade); +create trigger self_ref_before_trig + before delete on self_ref + for each statement execute procedure trigger_func('self_ref'); create trigger self_ref_r_trig after delete on self_ref referencing old table as old_table for each row execute procedure dump_delete(); @@ -2346,7 +2355,9 @@ create trigger self_ref_s_trig for each statement execute procedure dump_delete(); insert into self_ref values (1, null), (2, 1), (3, 2); delete from self_ref where a = 1; +NOTICE: trigger_func(self_ref) called: action = DELETE, when = BEFORE, level = STATEMENT NOTICE: trigger = self_ref_r_trig, old table = (1,), (2,1) +NOTICE: trigger_func(self_ref) called: action = DELETE, when = BEFORE, level = STATEMENT NOTICE: trigger = self_ref_r_trig, old table = (1,), (2,1) NOTICE: trigger = self_ref_s_trig, old table = (1,), (2,1) NOTICE: trigger = self_ref_r_trig, old table = (3,2) @@ -2355,6 +2366,7 @@ NOTICE: trigger = self_ref_s_trig, old table = (3,2) drop trigger self_ref_r_trig on self_ref; insert into self_ref values (1, null), (2, 1), (3, 2), (4, 3); delete from self_ref where a = 1; +NOTICE: trigger_func(self_ref) called: action = DELETE, when = BEFORE, level = STATEMENT NOTICE: trigger = self_ref_s_trig, old table = (1,), (2,1), (3,2), (4,3) drop table self_ref; -- cleanup diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index 30bb7d17b0..2b2236ed7d 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -1795,6 +1795,9 @@ create table trig_table (a int, b text, foreign key (a) references refd_table on update cascade on delete cascade ); +create trigger trig_table_before_trig + before insert or update or delete on trig_table + for each statement execute procedure trigger_func('trig_table'); create trigger trig_table_insert_trig after insert on trig_table referencing new table as new_table for each statement execute procedure dump_insert(); @@ -1834,6 +1837,9 @@ drop table refd_table, trig_table; create table self_ref (a int primary key, b int references self_ref(a) on delete cascade); +create trigger self_ref_before_trig + before delete on self_ref + for each statement execute procedure trigger_func('self_ref'); create trigger self_ref_r_trig after delete on self_ref referencing old table as old_table for each row execute procedure dump_delete(); From 27c6619e9c8ff80cd78c7f66443aa005734cda90 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 17 Sep 2017 14:50:01 -0400 Subject: [PATCH 0197/1087] Fix possible dangling pointer dereference in trigger.c. 
AfterTriggerEndQuery correctly notes that the query_stack could get repalloc'd during a trigger firing, but it nonetheless passes the address of a query_stack entry to afterTriggerInvokeEvents, so that if such a repalloc occurs, afterTriggerInvokeEvents is already working with an obsolete dangling pointer while it scans the rest of the events. Oops. The only code at risk is its "delete_ok" cleanup code, so we can prevent unsafe behavior by passing delete_ok = false instead of true. However, that could have a significant performance penalty, because the point of passing delete_ok = true is to not have to re-scan possibly a large number of dead trigger events on the next time through the loop. There's more than one way to skin that cat, though. What we can do is delete all the "chunks" in the event list except the last one, since we know all events in them must be dead. Deleting the chunks is work we'd have had to do later in AfterTriggerEndQuery anyway, and it ends up saving rescanning of just about the same events we'd have gotten rid of with delete_ok = true. In v10 and HEAD, we also have to be careful to mop up any per-table after_trig_events pointers that would become dangling. This is slightly annoying, but I don't think that normal use-cases will traverse this code path often enough for it to be a performance problem. It's pretty hard to hit this in practice because of the unlikelihood of the query_stack getting resized at just the wrong time. Nonetheless, it's definitely a live bug of ancient standing, so back-patch to all supported branches. Discussion: https://postgr.es/m/2891.1505419542@sss.pgh.pa.us --- src/backend/commands/trigger.c | 86 ++++++++++++++++++++++++++++------ 1 file changed, 71 insertions(+), 15 deletions(-) diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index 7b411c130b..e75a59d299 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -3831,14 +3831,12 @@ static void afterTriggerFreeEventList(AfterTriggerEventList *events) { AfterTriggerEventChunk *chunk; - AfterTriggerEventChunk *next_chunk; - for (chunk = events->head; chunk != NULL; chunk = next_chunk) + while ((chunk = events->head) != NULL) { - next_chunk = chunk->next; + events->head = chunk->next; pfree(chunk); } - events->head = NULL; events->tail = NULL; events->tailfree = NULL; } @@ -3882,6 +3880,45 @@ afterTriggerRestoreEventList(AfterTriggerEventList *events, } } +/* ---------- + * afterTriggerDeleteHeadEventChunk() + * + * Remove the first chunk of events from the query level's event list. + * Keep any event list pointers elsewhere in the query level's data + * structures in sync. + * ---------- + */ +static void +afterTriggerDeleteHeadEventChunk(AfterTriggersQueryData *qs) +{ + AfterTriggerEventChunk *target = qs->events.head; + ListCell *lc; + + Assert(target && target->next); + + /* + * First, update any pointers in the per-table data, so that they won't be + * dangling. Resetting obsoleted pointers to NULL will make + * cancel_prior_stmt_triggers start from the list head, which is fine. 
+ */ + foreach(lc, qs->tables) + { + AfterTriggersTableData *table = (AfterTriggersTableData *) lfirst(lc); + + if (table->after_trig_done && + table->after_trig_events.tail == target) + { + table->after_trig_events.head = NULL; + table->after_trig_events.tail = NULL; + table->after_trig_events.tailfree = NULL; + } + } + + /* Now we can flush the head chunk */ + qs->events.head = target->next; + pfree(target); +} + /* ---------- * AfterTriggerExecute() @@ -4274,7 +4311,7 @@ afterTriggerInvokeEvents(AfterTriggerEventList *events, /* * If it's last chunk, must sync event list's tailfree too. Note * that delete_ok must NOT be passed as true if there could be - * stacked AfterTriggerEventList values pointing at this event + * additional AfterTriggerEventList values pointing at this event * list, since we'd fail to fix their copies of tailfree. */ if (chunk == events->tail) @@ -4522,6 +4559,8 @@ AfterTriggerBeginQuery(void) void AfterTriggerEndQuery(EState *estate) { + AfterTriggersQueryData *qs; + /* Must be inside a query, too */ Assert(afterTriggers.query_depth >= 0); @@ -4555,23 +4594,40 @@ AfterTriggerEndQuery(EState *estate) * If we find no firable events, we don't have to increment * firing_counter. */ + qs = &afterTriggers.query_stack[afterTriggers.query_depth]; + for (;;) { - AfterTriggersQueryData *qs; - - /* - * Firing a trigger could result in query_stack being repalloc'd, so - * we must recalculate qs after each afterTriggerInvokeEvents call. - */ - qs = &afterTriggers.query_stack[afterTriggers.query_depth]; - if (afterTriggerMarkEvents(&qs->events, &afterTriggers.events, true)) { CommandId firing_id = afterTriggers.firing_counter++; + AfterTriggerEventChunk *oldtail = qs->events.tail; - /* OK to delete the immediate events after processing them */ - if (afterTriggerInvokeEvents(&qs->events, firing_id, estate, true)) + if (afterTriggerInvokeEvents(&qs->events, firing_id, estate, false)) break; /* all fired */ + + /* + * Firing a trigger could result in query_stack being repalloc'd, + * so we must recalculate qs after each afterTriggerInvokeEvents + * call. Furthermore, it's unsafe to pass delete_ok = true here, + * because that could cause afterTriggerInvokeEvents to try to + * access qs->events after the stack has been repalloc'd. + */ + qs = &afterTriggers.query_stack[afterTriggers.query_depth]; + + /* + * We'll need to scan the events list again. To reduce the cost + * of doing so, get rid of completely-fired chunks. We know that + * all events were marked IN_PROGRESS or DONE at the conclusion of + * afterTriggerMarkEvents, so any still-interesting events must + * have been added after that, and so must be in the chunk that + * was then the tail chunk, or in later chunks. So, zap all + * chunks before oldtail. This is approximately the same set of + * events we would have gotten rid of by passing delete_ok = true. + */ + Assert(oldtail != NULL); + while (qs->events.head != oldtail) + afterTriggerDeleteHeadEventChunk(qs); } else break; From 6f44fe7f121ac7c29c1ac8553e4e209f9c3bfbcb Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 17 Sep 2017 15:28:51 -0400 Subject: [PATCH 0198/1087] Allow rel_is_distinct_for() to look through RelabelType below OpExpr. This lets it do the right thing for, eg, varchar columns. Back-patch to 9.5 where this logic appeared. 
David Rowley, per report from Kim Rose Carlsen Discussion: https://postgr.es/m/VI1PR05MB17091F9A9876528055D6A827C76D0@VI1PR05MB1709.eurprd05.prod.outlook.com --- src/backend/optimizer/plan/analyzejoins.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c index 34317fe778..511603b581 100644 --- a/src/backend/optimizer/plan/analyzejoins.c +++ b/src/backend/optimizer/plan/analyzejoins.c @@ -703,6 +703,14 @@ rel_is_distinct_for(PlannerInfo *root, RelOptInfo *rel, List *clause_list) else var = (Var *) get_leftop(rinfo->clause); + /* + * We may ignore any RelabelType node above the operand. (There + * won't be more than one, since eval_const_expressions() has been + * applied already.) + */ + if (var && IsA(var, RelabelType)) + var = (Var *) ((RelabelType *) var)->arg; + /* * If inner side isn't a Var referencing a subquery output column, * this clause doesn't help us. From 68ab9acd8557a9401a115a5369a14bf0a169e8e7 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 17 Sep 2017 17:04:21 -0400 Subject: [PATCH 0199/1087] Doc: update v10 release notes through today. Add item about number of times statement-level triggers will be fired. Rearrange the compatibility items into (what seems to me) a less random ordering. --- doc/src/sgml/release-10.sgml | 205 +++++++++++++++++++---------------- 1 file changed, 113 insertions(+), 92 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index a2b1d7cd40..2658b73ca6 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -6,7 +6,7 @@ Release date: - 2017-??-?? (current as of 2017-09-07, commit 08cb36417) + 2017-??-?? (current as of 2017-09-17, commit 244b4a37e) @@ -157,6 +157,38 @@ + + When ALTER TABLE ... ADD PRIMARY KEY marks + columns NOT NULL, that change now propagates to + inheritance child tables as well (Michael Paquier) + + + + + + + Prevent statement-level triggers from firing more than once per + statement (Tom Lane) + + + + Cases involving writable CTEs updating the same table updated by the + containing statement, or by another writable CTE, fired BEFORE + STATEMENT or AFTER STATEMENT triggers more than once. + Also, if there were statement-level triggers on a table affected by a + foreign key enforcement action (such as ON DELETE CASCADE), + they could fire more than once per outer SQL statement. This is + contrary to the SQL standard, so change it. + + + + + + + Change the default value of the + server parameter from pg_log to log + (Andreas Karlsson) + + + + + + + Add configuration option to + specify file name for custom OpenSSL DH parameters (Heikki Linnakangas) + + + + This replaces the hardcoded, undocumented file + name dh1024.pem. Note that dh1024.pem is + no longer examined by default; you must set this option if you want + to use custom DH parameters. + + + + + + + Increase the size of the default DH parameters used for OpenSSL + ephemeral DH ciphers to 2048 bits (Heikki Linnakangas) + + + + The size of the compiled-in DH parameters has been increased from + 1024 to 2048 bits, making DH key exchange more resistant to + brute-force attacks. However, some old SSL implementations, notably + some revisions of Java Runtime Environment version 6, will not accept + DH parameters longer than 1024 bits, and hence will not be able to + connect over SSL. If it's necessary to support such old clients, you + can use custom 1024-bit DH parameters instead of the compiled-in + defaults. See . 
+ + + + + @@ -271,55 +352,39 @@ - Allow multi-dimensional arrays to be passed into PL/Python functions, - and returned as nested Python lists (Alexey Grishchenko, Dave Cramer, - Heikki Linnakangas) + Add + and server + parameters to control parallel queries (Amit Kapila, Robert Haas) - This feature requires a backwards-incompatible change to the handling - of arrays of composite types in PL/Python. Previously, you could - return an array of composite values by writing, e.g., [[col1, - col2], [col1, col2]]; but now that is interpreted as a - two-dimensional array. Composite types in arrays must now be written - as Python tuples, not lists, to resolve the ambiguity; that is, - write [(col1, col2), (col1, col2)] instead. + These replace min_parallel_relation_size, which was + found to be too generic. - Remove PL/Tcl's module auto-loading facility (Tom Lane) + Don't downcase unquoted text + within and related + server parameters (QL Zhuo) - This functionality has been replaced by new server - parameters - and , which are easier to use - and more similar to features available in other PLs. + These settings are really lists of file names, but they were + previously treated as lists of SQL identifiers, which have different + parsing rules. - - Change the default value of the - server parameter from pg_log to log - (Andreas Karlsson) - - - - - @@ -336,34 +401,39 @@ - Add - and server - parameters to control parallel queries (Amit Kapila, Robert Haas) + Allow multi-dimensional arrays to be passed into PL/Python functions, + and returned as nested Python lists (Alexey Grishchenko, Dave Cramer, + Heikki Linnakangas) - These replace min_parallel_relation_size, which was - found to be too generic. + This feature requires a backwards-incompatible change to the handling + of arrays of composite types in PL/Python. Previously, you could + return an array of composite values by writing, e.g., [[col1, + col2], [col1, col2]]; but now that is interpreted as a + two-dimensional array. Composite types in arrays must now be written + as Python tuples, not lists, to resolve the ambiguity; that is, + write [(col1, col2), (col1, col2)] instead. - Don't downcase unquoted text - within and related - server parameters (QL Zhuo) + Remove PL/Tcl's module auto-loading facility (Tom Lane) - These settings are really lists of file names, but they were - previously treated as lists of SQL identifiers, which have different - parsing rules. + This functionality has been replaced by new server + parameters + and , which are easier to use + and more similar to features available in other PLs. @@ -414,55 +484,6 @@ - - Add configuration option to - specify file name for custom OpenSSL DH parameters (Heikki Linnakangas) - - - - This replaces the hardcoded, undocumented file - name dh1024.pem. Note that dh1024.pem is - no longer examined by default; you must set this option if you want - to use custom DH parameters. - - - - - - - Increase the size of the default DH parameters used for OpenSSL - ephemeral DH ciphers to 2048 bits (Heikki Linnakangas) - - - - The size of the compiled-in DH parameters has been increased from - 1024 to 2048 bits, making DH key exchange more resistant to - brute-force attacks. However, some old SSL implementations, notably - some revisions of Java Runtime Environment version 6, will not accept - DH parameters longer than 1024 bits, and hence will not be able to - connect over SSL. 
If it's necessary to support such old clients, you - can use custom 1024-bit DH parameters instead of the compiled-in - defaults. See . - - - - - - - When ALTER TABLE ... ADD PRIMARY KEY marks - columns NOT NULL, that change now propagates to - inheritance child tables as well (Michael Paquier) - - - - - From 8edacab209957520423770851351ab4013cb0167 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 17 Sep 2017 21:37:02 -0400 Subject: [PATCH 0200/1087] Fix DROP SUBSCRIPTION hang When ALTER SUBSCRIPTION DISABLE is run in the same transaction before DROP SUBSCRIPTION, the latter will hang because workers will still be running, not having seen the DISABLE committed, and DROP SUBSCRIPTION will wait until the workers have vacated the replication origin slots. Previously, DROP SUBSCRIPTION killed the logical replication workers immediately only if it was going to drop the replication slot, otherwise it scheduled the worker killing for the end of the transaction, as a result of 7e174fa793a2df89fe03d002a5087ef67abcdde8. This, however, causes the present problem. To fix, kill the workers immediately in all cases. This covers all cases: A subscription that doesn't have a replication slot must be disabled. It was either disabled in the same transaction, or it was already disabled before the current transaction, but then there shouldn't be any workers left and this won't make a difference. Reported-by: Arseny Sher Discussion: https://www.postgresql.org/message-id/flat/87mv6av84w.fsf%40ars-thinkpad --- src/backend/commands/subscriptioncmds.c | 19 +++++---- src/test/subscription/t/007_ddl.pl | 51 +++++++++++++++++++++++++ 2 files changed, 63 insertions(+), 7 deletions(-) create mode 100644 src/test/subscription/t/007_ddl.pl diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c index 2ef414e084..372fa1b634 100644 --- a/src/backend/commands/subscriptioncmds.c +++ b/src/backend/commands/subscriptioncmds.c @@ -909,9 +909,17 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel) ReleaseSysCache(tup); /* - * If we are dropping the replication slot, stop all the subscription - * workers immediately, so that the slot becomes accessible. Otherwise - * just schedule the stopping for the end of the transaction. + * Stop all the subscription workers immediately. + * + * This is necessary if we are dropping the replication slot, so that the + * slot becomes accessible. + * + * It is also necessary if the subscription is disabled and was disabled + * in the same transaction. Then the workers haven't seen the disabling + * yet and will still be running, leading to hangs later when we want to + * drop the replication origin. If the subscription was disabled before + * this transaction, then there shouldn't be any workers left, so this + * won't make a difference. * * New workers won't be started because we hold an exclusive lock on the * subscription till the end of the transaction. 
@@ -923,10 +931,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel) { LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc); - if (slotname) - logicalrep_worker_stop(w->subid, w->relid); - else - logicalrep_worker_stop_at_commit(w->subid, w->relid); + logicalrep_worker_stop(w->subid, w->relid); } list_free(subworkers); diff --git a/src/test/subscription/t/007_ddl.pl b/src/test/subscription/t/007_ddl.pl new file mode 100644 index 0000000000..3f36238840 --- /dev/null +++ b/src/test/subscription/t/007_ddl.pl @@ -0,0 +1,51 @@ +# Test some logical replication DDL behavior +use strict; +use warnings; +use PostgresNode; +use TestLib; +use Test::More tests => 1; + +sub wait_for_caught_up +{ + my ($node, $appname) = @_; + + $node->poll_query_until('postgres', +"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';" + ) or die "Timed out while waiting for subscriber to catch up"; +} + +my $node_publisher = get_new_node('publisher'); +$node_publisher->init(allows_streaming => 'logical'); +$node_publisher->start; + +my $node_subscriber = get_new_node('subscriber'); +$node_subscriber->init(allows_streaming => 'logical'); +$node_subscriber->start; + +my $ddl = "CREATE TABLE test1 (a int, b text);"; +$node_publisher->safe_psql('postgres', $ddl); +$node_subscriber->safe_psql('postgres', $ddl); + +my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres'; +my $appname = 'replication_test'; + +$node_publisher->safe_psql('postgres', + "CREATE PUBLICATION mypub FOR ALL TABLES;"); +$node_subscriber->safe_psql('postgres', +"CREATE SUBSCRIPTION mysub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION mypub;" +); + +wait_for_caught_up($node_publisher, $appname); + +$node_subscriber->safe_psql('postgres', q{ +BEGIN; +ALTER SUBSCRIPTION mysub DISABLE; +ALTER SUBSCRIPTION mysub SET (slot_name = NONE); +DROP SUBSCRIPTION mysub; +COMMIT; +}); + +pass "subscription disable and drop in same transaction did not hang"; + +$node_subscriber->stop; +$node_publisher->stop; From d31892e2105cf48d8430807d74d5fdf1434af541 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 18 Sep 2017 10:41:48 -0400 Subject: [PATCH 0201/1087] Remove dead external links from documentation --- doc/src/sgml/installation.sgml | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index 12866b4bf7..b178d3074b 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -2567,17 +2567,13 @@ PHSS_30966 s700_800 ld(1) and linker tools cumulative patch On general principles you should be current on libc and ld/dld patches, as well as compiler patches if you are using HP's C compiler. See HP's support sites such - as and - for free + as for free copies of their latest patches. If you are building on a PA-RISC 2.0 machine and want to have - 64-bit binaries using GCC, you must use GCC 64-bit version. GCC - binaries for HP-UX PA-RISC and Itanium are available from - . Don't forget to - get and install binutils at the same time. + 64-bit binaries using GCC, you must use a GCC 64-bit version. @@ -2806,8 +2802,7 @@ LIBOBJS = snprintf.o Yes, using DTrace is possible. See ]]> for further - information. You can also find more information in this - article: . + information. 
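For readers tracing the DROP SUBSCRIPTION hang fix above (patch 0200), the
behavior change in DropSubscription can be condensed into one illustrative
C sketch. This is not the verbatim source: the BEFORE_THE_FIX guard is only
a marker for the old code path, and locking and error handling are elided.

    foreach(lc, subworkers)
    {
        LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);

    #ifdef BEFORE_THE_FIX
        if (slotname)
            logicalrep_worker_stop(w->subid, w->relid);
        else
            /*
             * Deferring the stop is what hung: a worker that has not yet
             * seen the committed ALTER SUBSCRIPTION ... DISABLE keeps its
             * replication origin slot in use, so dropping the origin
             * later in the same transaction waits forever.
             */
            logicalrep_worker_stop_at_commit(w->subid, w->relid);
    #else
        /* Stop immediately in all cases, per the reasoning in the patch. */
        logicalrep_worker_stop(w->subid, w->relid);
    #endif
    }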
From 4b17c894293d0c3ed944da76aeb9bc2bb02a6db6 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 18 Sep 2017 11:09:15 -0400 Subject: [PATCH 0202/1087] Update some dead external links in the documentation --- doc/src/sgml/sepgsql.sgml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/sepgsql.sgml b/doc/src/sgml/sepgsql.sgml index 0b611eeeca..6a8d3765a2 100644 --- a/doc/src/sgml/sepgsql.sgml +++ b/doc/src/sgml/sepgsql.sgml @@ -762,17 +762,17 @@ ERROR: SELinux: security policy violation - Fedora SELinux User Guide + SELinux User's and Administrator's Guide This document provides a wide spectrum of knowledge to administer SELinux on your systems. - It focuses primarily on Fedora, but is not limited to Fedora. + It focuses primarily on Red Hat operating systems, but is not limited to them. - Fedora SELinux FAQ + Fedora SELinux FAQ This document answers frequently asked questions about From 3e1683d37e1d751eb2df9a5cb0507bebc6cf7d05 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 18 Sep 2017 11:39:44 -0400 Subject: [PATCH 0203/1087] Fix, or at least ameliorate, bugs in logicalrep_worker_launch(). If we failed to get a background worker slot, the code just walked away from the logicalrep-worker slot it already had, leaving that looking like the worker is still starting up. This led to an indefinite hang in subscription startup, as reported by Thomas Munro. We must release the slot on failure. Also fix a thinko: we must capture the worker slot's generation before releasing LogicalRepWorkerLock the first time, else testing to see if it's changed is pretty meaningless. BTW, the CHECK_FOR_INTERRUPTS() in WaitForReplicationWorkerAttach is a ticking time bomb, even without considering the possibility of elog(ERROR) in one of the other functions it calls. Really, this entire business needs a redesign with some actual thought about error recovery. But for now I'm just band-aiding the case observed in testing. Back-patch to v10 where this code was added. Discussion: https://postgr.es/m/CAEepm=2bP3TBMFBArP6o20AZaRduWjMnjCjt22hSdnA-EvrtCw@mail.gmail.com --- src/backend/replication/logical/launcher.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c index 6c894421a3..44bdcab3b9 100644 --- a/src/backend/replication/logical/launcher.c +++ b/src/backend/replication/logical/launcher.c @@ -168,14 +168,11 @@ get_subscription_list(void) */ static void WaitForReplicationWorkerAttach(LogicalRepWorker *worker, + uint16 generation, BackgroundWorkerHandle *handle) { BgwHandleStatus status; int rc; - uint16 generation; - - /* Remember generation for future identification. */ - generation = worker->generation; for (;;) { @@ -282,7 +279,7 @@ logicalrep_workers_find(Oid subid, bool only_running) } /* - * Start new apply background worker. + * Start new apply background worker, if possible. */ void logicalrep_worker_launch(Oid dbid, Oid subid, const char *subname, Oid userid, @@ -290,6 +287,7 @@ logicalrep_worker_launch(Oid dbid, Oid subid, const char *subname, Oid userid, { BackgroundWorker bgw; BackgroundWorkerHandle *bgw_handle; + uint16 generation; int i; int slot = 0; LogicalRepWorker *worker = NULL; @@ -406,6 +404,9 @@ logicalrep_worker_launch(Oid dbid, Oid subid, const char *subname, Oid userid, worker->reply_lsn = InvalidXLogRecPtr; TIMESTAMP_NOBEGIN(worker->reply_time); + /* Before releasing lock, remember generation for future identification. 
*/ + generation = worker->generation; + LWLockRelease(LogicalRepWorkerLock); /* Register the new dynamic worker. */ @@ -428,6 +429,12 @@ logicalrep_worker_launch(Oid dbid, Oid subid, const char *subname, Oid userid, if (!RegisterDynamicBackgroundWorker(&bgw, &bgw_handle)) { + /* Failed to start worker, so clean up the worker slot. */ + LWLockAcquire(LogicalRepWorkerLock, LW_EXCLUSIVE); + Assert(generation == worker->generation); + logicalrep_worker_cleanup(worker); + LWLockRelease(LogicalRepWorkerLock); + ereport(WARNING, (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED), errmsg("out of background worker slots"), @@ -436,7 +443,7 @@ logicalrep_worker_launch(Oid dbid, Oid subid, const char *subname, Oid userid, } /* Now wait until it attaches. */ - WaitForReplicationWorkerAttach(worker, bgw_handle); + WaitForReplicationWorkerAttach(worker, generation, bgw_handle); } /* From 4bd1994650fddf49e717e35f1930d62208845974 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 18 Sep 2017 15:21:23 -0400 Subject: [PATCH 0204/1087] Make DatumGetFoo/PG_GETARG_FOO/PG_RETURN_FOO macro names more consistent. By project convention, these names should include "P" when dealing with a pointer type; that is, if the result of a GETARG macro is of type FOO *, it should be called PG_GETARG_FOO_P not just PG_GETARG_FOO. Some newer types such as JSONB and ranges had not followed the convention, and a number of contrib modules hadn't gotten that memo either. Rename the offending macros to improve consistency. In passing, fix a few places that thought PG_DETOAST_DATUM() returns a Datum; it does not, it returns "struct varlena *". Applying DatumGetPointer to that happens not to cause any bad effects today, but it's formally wrong. Also, adjust an ltree macro that was designed without any thought for what pgindent would do with it. This is all cosmetic and shouldn't have any impact on generated code. 
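To illustrate the convention with a hypothetical pass-by-reference varlena
type FOO (these example macros are illustrative, not part of the patch;
compare the real cubedata.h changes below):

    /* Pointer-producing accessors carry the "P"/"_P" suffix ... */
    #define DatumGetFooP(X)     ((FOO *) PG_DETOAST_DATUM(X))
    #define PG_GETARG_FOO_P(n)  DatumGetFooP(PG_GETARG_DATUM(n))
    #define PG_RETURN_FOO_P(x)  PG_RETURN_POINTER(x)

    /* ... while pass-by-value types keep the bare name, as in fmgr.h: */
    #define PG_GETARG_INT32(n)  DatumGetInt32(PG_GETARG_DATUM(n))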
Mark Dilger, some further tweaks by me Discussion: https://postgr.es/m/EA5676F4-766F-4F38-8348-ECC7DB427C6A@gmail.com --- contrib/btree_gist/btree_text.c | 2 +- contrib/btree_gist/btree_utils_var.c | 6 +- contrib/cube/cube.c | 151 +++++++++--------- contrib/cube/cubedata.h | 6 +- contrib/hstore/hstore.h | 2 +- contrib/hstore/hstore_gin.c | 4 +- contrib/hstore/hstore_gist.c | 2 +- contrib/hstore/hstore_io.c | 14 +- contrib/hstore/hstore_op.c | 50 +++--- contrib/hstore_plperl/hstore_plperl.c | 2 +- contrib/hstore_plpython/hstore_plpython.c | 2 +- contrib/ltree/_ltree_gist.c | 2 +- contrib/ltree/_ltree_op.c | 16 +- contrib/ltree/lquery_op.c | 6 +- contrib/ltree/ltree.h | 21 ++- contrib/ltree/ltree_gist.c | 26 +-- contrib/ltree/ltree_io.c | 4 +- contrib/ltree/ltree_op.c | 68 ++++---- contrib/ltree/ltxtquery_io.c | 2 +- contrib/ltree/ltxtquery_op.c | 4 +- contrib/ltree_plpython/ltree_plpython.c | 2 +- src/backend/tsearch/to_tsany.c | 6 +- src/backend/tsearch/wparser.c | 4 +- src/backend/utils/adt/array_expanded.c | 4 +- src/backend/utils/adt/arrayfuncs.c | 46 +++--- src/backend/utils/adt/jsonb.c | 8 +- src/backend/utils/adt/jsonb_gin.c | 12 +- src/backend/utils/adt/jsonb_op.c | 48 +++--- src/backend/utils/adt/jsonfuncs.c | 94 +++++------ src/backend/utils/adt/rangetypes.c | 134 ++++++++-------- src/backend/utils/adt/rangetypes_gist.c | 96 +++++------ src/backend/utils/adt/rangetypes_selfuncs.c | 4 +- src/backend/utils/adt/rangetypes_spgist.c | 54 +++---- src/backend/utils/adt/rangetypes_typanalyze.c | 2 +- src/backend/utils/adt/tsgistidx.c | 4 +- src/include/utils/array.h | 4 +- src/include/utils/jsonb.h | 8 +- src/include/utils/rangetypes.h | 12 +- 38 files changed, 471 insertions(+), 461 deletions(-) diff --git a/contrib/btree_gist/btree_text.c b/contrib/btree_gist/btree_text.c index 090c849470..02cc0a45b1 100644 --- a/contrib/btree_gist/btree_text.c +++ b/contrib/btree_gist/btree_text.c @@ -171,7 +171,7 @@ Datum gbt_bpchar_consistent(PG_FUNCTION_ARGS) { GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - void *query = (void *) DatumGetPointer(PG_DETOAST_DATUM(PG_GETARG_DATUM(1))); + void *query = (void *) DatumGetTextP(PG_GETARG_DATUM(1)); StrategyNumber strategy = (StrategyNumber) PG_GETARG_UINT16(2); /* Oid subtype = PG_GETARG_OID(3); */ diff --git a/contrib/btree_gist/btree_utils_var.c b/contrib/btree_gist/btree_utils_var.c index 586de63a4d..a43d81a165 100644 --- a/contrib/btree_gist/btree_utils_var.c +++ b/contrib/btree_gist/btree_utils_var.c @@ -37,7 +37,7 @@ Datum gbt_var_decompress(PG_FUNCTION_ARGS) { GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - GBT_VARKEY *key = (GBT_VARKEY *) DatumGetPointer(PG_DETOAST_DATUM(entry->key)); + GBT_VARKEY *key = (GBT_VARKEY *) PG_DETOAST_DATUM(entry->key); if (key != (GBT_VARKEY *) DatumGetPointer(entry->key)) { @@ -159,7 +159,7 @@ gbt_var_node_cp_len(const GBT_VARKEY *node, const gbtree_vinfo *tinfo) l--; i++; } - return ml; /* lower == upper */ + return ml; /* lower == upper */ } @@ -307,7 +307,7 @@ Datum gbt_var_fetch(PG_FUNCTION_ARGS) { GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - GBT_VARKEY *key = (GBT_VARKEY *) DatumGetPointer(PG_DETOAST_DATUM(entry->key)); + GBT_VARKEY *key = (GBT_VARKEY *) PG_DETOAST_DATUM(entry->key); GBT_VARKEY_R r = gbt_var_key_readable(key); GISTENTRY *retval; diff --git a/contrib/cube/cube.c b/contrib/cube/cube.c index 1032b997f9..b7702716fe 100644 --- a/contrib/cube/cube.c +++ b/contrib/cube/cube.c @@ -126,7 +126,7 @@ cube_in(PG_FUNCTION_ARGS) cube_scanner_finish(); - PG_RETURN_NDBOX(result); + 
PG_RETURN_NDBOX_P(result); } @@ -187,7 +187,7 @@ cube_a_f8_f8(PG_FUNCTION_ARGS) else SET_POINT_BIT(result); - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } /* @@ -221,13 +221,13 @@ cube_a_f8(PG_FUNCTION_ARGS) for (i = 0; i < dim; i++) result->x[i] = dur[i]; - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } Datum cube_subset(PG_FUNCTION_ARGS) { - NDBOX *c = PG_GETARG_NDBOX(0); + NDBOX *c = PG_GETARG_NDBOX_P(0); ArrayType *idx = PG_GETARG_ARRAYTYPE_P(1); NDBOX *result; int size, @@ -263,13 +263,13 @@ cube_subset(PG_FUNCTION_ARGS) } PG_FREE_IF_COPY(c, 0); - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } Datum cube_out(PG_FUNCTION_ARGS) { - NDBOX *cube = PG_GETARG_NDBOX(0); + NDBOX *cube = PG_GETARG_NDBOX_P(0); StringInfoData buf; int dim = DIM(cube); int i; @@ -316,7 +316,7 @@ Datum g_cube_consistent(PG_FUNCTION_ARGS) { GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - NDBOX *query = PG_GETARG_NDBOX(1); + NDBOX *query = PG_GETARG_NDBOX_P(1); StrategyNumber strategy = (StrategyNumber) PG_GETARG_UINT16(2); /* Oid subtype = PG_GETARG_OID(3); */ @@ -331,10 +331,10 @@ g_cube_consistent(PG_FUNCTION_ARGS) * g_cube_leaf_consistent */ if (GIST_LEAF(entry)) - res = g_cube_leaf_consistent(DatumGetNDBOX(entry->key), + res = g_cube_leaf_consistent(DatumGetNDBOXP(entry->key), query, strategy); else - res = g_cube_internal_consistent(DatumGetNDBOX(entry->key), + res = g_cube_internal_consistent(DatumGetNDBOXP(entry->key), query, strategy); PG_FREE_IF_COPY(query, 1); @@ -355,7 +355,7 @@ g_cube_union(PG_FUNCTION_ARGS) NDBOX *tmp; int i; - tmp = DatumGetNDBOX(entryvec->vector[0].key); + tmp = DatumGetNDBOXP(entryvec->vector[0].key); /* * sizep = sizeof(NDBOX); -- NDBOX has variable size @@ -365,7 +365,7 @@ g_cube_union(PG_FUNCTION_ARGS) for (i = 1; i < entryvec->n; i++) { out = g_cube_binary_union(tmp, - DatumGetNDBOX(entryvec->vector[i].key), + DatumGetNDBOXP(entryvec->vector[i].key), sizep); tmp = out; } @@ -388,9 +388,9 @@ Datum g_cube_decompress(PG_FUNCTION_ARGS) { GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - NDBOX *key = DatumGetNDBOX(PG_DETOAST_DATUM(entry->key)); + NDBOX *key = DatumGetNDBOXP(entry->key); - if (key != DatumGetNDBOX(entry->key)) + if (key != DatumGetNDBOXP(entry->key)) { GISTENTRY *retval = (GISTENTRY *) palloc(sizeof(GISTENTRY)); @@ -417,10 +417,10 @@ g_cube_penalty(PG_FUNCTION_ARGS) double tmp1, tmp2; - ud = cube_union_v0(DatumGetNDBOX(origentry->key), - DatumGetNDBOX(newentry->key)); + ud = cube_union_v0(DatumGetNDBOXP(origentry->key), + DatumGetNDBOXP(newentry->key)); rt_cube_size(ud, &tmp1); - rt_cube_size(DatumGetNDBOX(origentry->key), &tmp2); + rt_cube_size(DatumGetNDBOXP(origentry->key), &tmp2); *result = (float) (tmp1 - tmp2); PG_RETURN_FLOAT8(*result); @@ -473,17 +473,18 @@ g_cube_picksplit(PG_FUNCTION_ARGS) for (i = FirstOffsetNumber; i < maxoff; i = OffsetNumberNext(i)) { - datum_alpha = DatumGetNDBOX(entryvec->vector[i].key); + datum_alpha = DatumGetNDBOXP(entryvec->vector[i].key); for (j = OffsetNumberNext(i); j <= maxoff; j = OffsetNumberNext(j)) { - datum_beta = DatumGetNDBOX(entryvec->vector[j].key); + datum_beta = DatumGetNDBOXP(entryvec->vector[j].key); /* compute the wasted space by unioning these guys */ /* size_waste = size_union - size_inter; */ union_d = cube_union_v0(datum_alpha, datum_beta); rt_cube_size(union_d, &size_union); - inter_d = DatumGetNDBOX(DirectFunctionCall2(cube_inter, - entryvec->vector[i].key, entryvec->vector[j].key)); + inter_d = DatumGetNDBOXP(DirectFunctionCall2(cube_inter, + 
entryvec->vector[i].key, + entryvec->vector[j].key)); rt_cube_size(inter_d, &size_inter); size_waste = size_union - size_inter; @@ -506,10 +507,10 @@ g_cube_picksplit(PG_FUNCTION_ARGS) right = v->spl_right; v->spl_nright = 0; - datum_alpha = DatumGetNDBOX(entryvec->vector[seed_1].key); + datum_alpha = DatumGetNDBOXP(entryvec->vector[seed_1].key); datum_l = cube_union_v0(datum_alpha, datum_alpha); rt_cube_size(datum_l, &size_l); - datum_beta = DatumGetNDBOX(entryvec->vector[seed_2].key); + datum_beta = DatumGetNDBOXP(entryvec->vector[seed_2].key); datum_r = cube_union_v0(datum_beta, datum_beta); rt_cube_size(datum_r, &size_r); @@ -548,7 +549,7 @@ g_cube_picksplit(PG_FUNCTION_ARGS) } /* okay, which page needs least enlargement? */ - datum_alpha = DatumGetNDBOX(entryvec->vector[i].key); + datum_alpha = DatumGetNDBOXP(entryvec->vector[i].key); union_dl = cube_union_v0(datum_l, datum_alpha); union_dr = cube_union_v0(datum_r, datum_alpha); rt_cube_size(union_dl, &size_alpha); @@ -584,8 +585,8 @@ g_cube_picksplit(PG_FUNCTION_ARGS) Datum g_cube_same(PG_FUNCTION_ARGS) { - NDBOX *b1 = PG_GETARG_NDBOX(0); - NDBOX *b2 = PG_GETARG_NDBOX(1); + NDBOX *b1 = PG_GETARG_NDBOX_P(0); + NDBOX *b2 = PG_GETARG_NDBOX_P(1); bool *result = (bool *) PG_GETARG_POINTER(2); if (cube_cmp_v0(b1, b2) == 0) @@ -593,7 +594,7 @@ g_cube_same(PG_FUNCTION_ARGS) else *result = FALSE; - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } /* @@ -735,23 +736,23 @@ cube_union_v0(NDBOX *a, NDBOX *b) Datum cube_union(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0); - NDBOX *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0); + NDBOX *b = PG_GETARG_NDBOX_P(1); NDBOX *res; res = cube_union_v0(a, b); PG_FREE_IF_COPY(a, 0); PG_FREE_IF_COPY(b, 1); - PG_RETURN_NDBOX(res); + PG_RETURN_NDBOX_P(res); } /* cube_inter */ Datum cube_inter(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0); - NDBOX *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0); + NDBOX *b = PG_GETARG_NDBOX_P(1); NDBOX *result; bool swapped = false; int i; @@ -823,14 +824,14 @@ cube_inter(PG_FUNCTION_ARGS) /* * Is it OK to return a non-null intersection for non-overlapping boxes? 
*/ - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } /* cube_size */ Datum cube_size(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0); + NDBOX *a = PG_GETARG_NDBOX_P(0); double result; rt_cube_size(a, &result); @@ -948,8 +949,8 @@ cube_cmp_v0(NDBOX *a, NDBOX *b) Datum cube_cmp(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); int32 res; res = cube_cmp_v0(a, b); @@ -963,8 +964,8 @@ cube_cmp(PG_FUNCTION_ARGS) Datum cube_eq(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); int32 res; res = cube_cmp_v0(a, b); @@ -978,8 +979,8 @@ cube_eq(PG_FUNCTION_ARGS) Datum cube_ne(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); int32 res; res = cube_cmp_v0(a, b); @@ -993,8 +994,8 @@ cube_ne(PG_FUNCTION_ARGS) Datum cube_lt(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); int32 res; res = cube_cmp_v0(a, b); @@ -1008,8 +1009,8 @@ cube_lt(PG_FUNCTION_ARGS) Datum cube_gt(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); int32 res; res = cube_cmp_v0(a, b); @@ -1023,8 +1024,8 @@ cube_gt(PG_FUNCTION_ARGS) Datum cube_le(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); int32 res; res = cube_cmp_v0(a, b); @@ -1038,8 +1039,8 @@ cube_le(PG_FUNCTION_ARGS) Datum cube_ge(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); int32 res; res = cube_cmp_v0(a, b); @@ -1093,8 +1094,8 @@ cube_contains_v0(NDBOX *a, NDBOX *b) Datum cube_contains(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); bool res; res = cube_contains_v0(a, b); @@ -1109,8 +1110,8 @@ cube_contains(PG_FUNCTION_ARGS) Datum cube_contained(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); bool res; res = cube_contains_v0(b, a); @@ -1164,8 +1165,8 @@ cube_overlap_v0(NDBOX *a, NDBOX *b) Datum cube_overlap(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); bool res; res = cube_overlap_v0(a, b); @@ -1184,8 +1185,8 @@ cube_overlap(PG_FUNCTION_ARGS) Datum cube_distance(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); bool swapped = false; double d, distance; @@ -1233,8 +1234,8 @@ cube_distance(PG_FUNCTION_ARGS) Datum distance_taxicab(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); bool swapped = false; double distance; int i; @@ -1277,8 +1278,8 @@ distance_taxicab(PG_FUNCTION_ARGS) Datum distance_chebyshev(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0), - *b = PG_GETARG_NDBOX(1); + NDBOX *a = PG_GETARG_NDBOX_P(0), + *b = PG_GETARG_NDBOX_P(1); bool swapped = false; double d, distance; @@ -1331,7 +1332,7 @@ g_cube_distance(PG_FUNCTION_ARGS) { 
GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); StrategyNumber strategy = (StrategyNumber) PG_GETARG_UINT16(2); - NDBOX *cube = DatumGetNDBOX(entry->key); + NDBOX *cube = DatumGetNDBOXP(entry->key); double retval; if (strategy == CubeKNNDistanceCoord) @@ -1348,7 +1349,7 @@ g_cube_distance(PG_FUNCTION_ARGS) } else { - NDBOX *query = PG_GETARG_NDBOX(1); + NDBOX *query = PG_GETARG_NDBOX_P(1); switch (strategy) { @@ -1392,7 +1393,7 @@ distance_1D(double a1, double a2, double b1, double b2) Datum cube_is_point(PG_FUNCTION_ARGS) { - NDBOX *cube = PG_GETARG_NDBOX(0); + NDBOX *cube = PG_GETARG_NDBOX_P(0); bool result; result = cube_is_point_internal(cube); @@ -1427,7 +1428,7 @@ cube_is_point_internal(NDBOX *cube) Datum cube_dim(PG_FUNCTION_ARGS) { - NDBOX *c = PG_GETARG_NDBOX(0); + NDBOX *c = PG_GETARG_NDBOX_P(0); int dim = DIM(c); PG_FREE_IF_COPY(c, 0); @@ -1438,7 +1439,7 @@ cube_dim(PG_FUNCTION_ARGS) Datum cube_ll_coord(PG_FUNCTION_ARGS) { - NDBOX *c = PG_GETARG_NDBOX(0); + NDBOX *c = PG_GETARG_NDBOX_P(0); int n = PG_GETARG_INT32(1); double result; @@ -1455,7 +1456,7 @@ cube_ll_coord(PG_FUNCTION_ARGS) Datum cube_ur_coord(PG_FUNCTION_ARGS) { - NDBOX *c = PG_GETARG_NDBOX(0); + NDBOX *c = PG_GETARG_NDBOX_P(0); int n = PG_GETARG_INT32(1); double result; @@ -1476,7 +1477,7 @@ cube_ur_coord(PG_FUNCTION_ARGS) Datum cube_coord(PG_FUNCTION_ARGS) { - NDBOX *cube = PG_GETARG_NDBOX(0); + NDBOX *cube = PG_GETARG_NDBOX_P(0); int coord = PG_GETARG_INT32(1); if (coord <= 0 || coord > 2 * DIM(cube)) @@ -1504,7 +1505,7 @@ cube_coord(PG_FUNCTION_ARGS) Datum cube_coord_llur(PG_FUNCTION_ARGS) { - NDBOX *cube = PG_GETARG_NDBOX(0); + NDBOX *cube = PG_GETARG_NDBOX_P(0); int coord = PG_GETARG_INT32(1); if (coord <= 0 || coord > 2 * DIM(cube)) @@ -1534,7 +1535,7 @@ cube_coord_llur(PG_FUNCTION_ARGS) Datum cube_enlarge(PG_FUNCTION_ARGS) { - NDBOX *a = PG_GETARG_NDBOX(0); + NDBOX *a = PG_GETARG_NDBOX_P(0); double r = PG_GETARG_FLOAT8(1); int32 n = PG_GETARG_INT32(2); NDBOX *result; @@ -1592,7 +1593,7 @@ cube_enlarge(PG_FUNCTION_ARGS) } PG_FREE_IF_COPY(a, 0); - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } /* Create a one dimensional box with identical upper and lower coordinates */ @@ -1610,7 +1611,7 @@ cube_f8(PG_FUNCTION_ARGS) SET_POINT_BIT(result); result->x[0] = x; - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } /* Create a one dimensional box */ @@ -1641,7 +1642,7 @@ cube_f8_f8(PG_FUNCTION_ARGS) result->x[1] = x1; } - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } /* Add a dimension to an existing cube with the same values for the new @@ -1649,7 +1650,7 @@ cube_f8_f8(PG_FUNCTION_ARGS) Datum cube_c_f8(PG_FUNCTION_ARGS) { - NDBOX *cube = PG_GETARG_NDBOX(0); + NDBOX *cube = PG_GETARG_NDBOX_P(0); double x = PG_GETARG_FLOAT8(1); NDBOX *result; int size; @@ -1682,14 +1683,14 @@ cube_c_f8(PG_FUNCTION_ARGS) } PG_FREE_IF_COPY(cube, 0); - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } /* Add a dimension to an existing cube */ Datum cube_c_f8_f8(PG_FUNCTION_ARGS) { - NDBOX *cube = PG_GETARG_NDBOX(0); + NDBOX *cube = PG_GETARG_NDBOX_P(0); double x1 = PG_GETARG_FLOAT8(1); double x2 = PG_GETARG_FLOAT8(2); NDBOX *result; @@ -1723,5 +1724,5 @@ cube_c_f8_f8(PG_FUNCTION_ARGS) } PG_FREE_IF_COPY(cube, 0); - PG_RETURN_NDBOX(result); + PG_RETURN_NDBOX_P(result); } diff --git a/contrib/cube/cubedata.h b/contrib/cube/cubedata.h index 6e6ddfd3d7..dbe7d4f742 100644 --- a/contrib/cube/cubedata.h +++ b/contrib/cube/cubedata.h @@ -49,9 +49,9 @@ typedef struct NDBOX #define CUBE_SIZE(_dim) 
(offsetof(NDBOX, x) + sizeof(double)*(_dim)*2) /* fmgr interface macros */ -#define DatumGetNDBOX(x) ((NDBOX *) PG_DETOAST_DATUM(x)) -#define PG_GETARG_NDBOX(x) DatumGetNDBOX(PG_GETARG_DATUM(x)) -#define PG_RETURN_NDBOX(x) PG_RETURN_POINTER(x) +#define DatumGetNDBOXP(x) ((NDBOX *) PG_DETOAST_DATUM(x)) +#define PG_GETARG_NDBOX_P(x) DatumGetNDBOXP(PG_GETARG_DATUM(x)) +#define PG_RETURN_NDBOX_P(x) PG_RETURN_POINTER(x) /* GiST operator strategy numbers */ #define CubeKNNDistanceCoord 15 /* ~> */ diff --git a/contrib/hstore/hstore.h b/contrib/hstore/hstore.h index c4862a82e1..bf4a565ed9 100644 --- a/contrib/hstore/hstore.h +++ b/contrib/hstore/hstore.h @@ -151,7 +151,7 @@ extern HStore *hstoreUpgrade(Datum orig); #define DatumGetHStoreP(d) hstoreUpgrade(d) -#define PG_GETARG_HS(x) DatumGetHStoreP(PG_GETARG_DATUM(x)) +#define PG_GETARG_HSTORE_P(x) DatumGetHStoreP(PG_GETARG_DATUM(x)) /* diff --git a/contrib/hstore/hstore_gin.c b/contrib/hstore/hstore_gin.c index d98fb38458..4c3a422643 100644 --- a/contrib/hstore/hstore_gin.c +++ b/contrib/hstore/hstore_gin.c @@ -43,7 +43,7 @@ makeitem(char *str, int len, char flag) Datum gin_extract_hstore(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); int32 *nentries = (int32 *) PG_GETARG_POINTER(1); Datum *entries = NULL; HEntry *hsent = ARRPTR(hs); @@ -155,7 +155,7 @@ gin_consistent_hstore(PG_FUNCTION_ARGS) bool *check = (bool *) PG_GETARG_POINTER(0); StrategyNumber strategy = PG_GETARG_UINT16(1); - /* HStore *query = PG_GETARG_HS(2); */ + /* HStore *query = PG_GETARG_HSTORE_P(2); */ int32 nkeys = PG_GETARG_INT32(3); /* Pointer *extra_data = (Pointer *) PG_GETARG_POINTER(4); */ diff --git a/contrib/hstore/hstore_gist.c b/contrib/hstore/hstore_gist.c index f8f5934e40..3a61342019 100644 --- a/contrib/hstore/hstore_gist.c +++ b/contrib/hstore/hstore_gist.c @@ -518,7 +518,7 @@ ghstore_consistent(PG_FUNCTION_ARGS) if (strategy == HStoreContainsStrategyNumber || strategy == HStoreOldContainsStrategyNumber) { - HStore *query = PG_GETARG_HS(1); + HStore *query = PG_GETARG_HSTORE_P(1); HEntry *qe = ARRPTR(query); char *qv = STRPTR(query); int count = HS_COUNT(query); diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c index a44c1b2235..6363c321c5 100644 --- a/contrib/hstore/hstore_io.c +++ b/contrib/hstore/hstore_io.c @@ -962,7 +962,7 @@ hstore_populate_record(PG_FUNCTION_ARGS) tupTypmod = HeapTupleHeaderGetTypMod(rec); } - hs = PG_GETARG_HS(1); + hs = PG_GETARG_HSTORE_P(1); entries = ARRPTR(hs); ptr = STRPTR(hs); @@ -1127,7 +1127,7 @@ PG_FUNCTION_INFO_V1(hstore_out); Datum hstore_out(PG_FUNCTION_ARGS) { - HStore *in = PG_GETARG_HS(0); + HStore *in = PG_GETARG_HSTORE_P(0); int buflen, i; int count = HS_COUNT(in); @@ -1198,7 +1198,7 @@ PG_FUNCTION_INFO_V1(hstore_send); Datum hstore_send(PG_FUNCTION_ARGS) { - HStore *in = PG_GETARG_HS(0); + HStore *in = PG_GETARG_HSTORE_P(0); int i; int count = HS_COUNT(in); char *base = STRPTR(in); @@ -1244,7 +1244,7 @@ PG_FUNCTION_INFO_V1(hstore_to_json_loose); Datum hstore_to_json_loose(PG_FUNCTION_ARGS) { - HStore *in = PG_GETARG_HS(0); + HStore *in = PG_GETARG_HSTORE_P(0); int i; int count = HS_COUNT(in); char *base = STRPTR(in); @@ -1299,7 +1299,7 @@ PG_FUNCTION_INFO_V1(hstore_to_json); Datum hstore_to_json(PG_FUNCTION_ARGS) { - HStore *in = PG_GETARG_HS(0); + HStore *in = PG_GETARG_HSTORE_P(0); int i; int count = HS_COUNT(in); char *base = STRPTR(in); @@ -1344,7 +1344,7 @@ PG_FUNCTION_INFO_V1(hstore_to_jsonb); Datum hstore_to_jsonb(PG_FUNCTION_ARGS) { - HStore *in = 
PG_GETARG_HS(0); + HStore *in = PG_GETARG_HSTORE_P(0); int i; int count = HS_COUNT(in); char *base = STRPTR(in); @@ -1387,7 +1387,7 @@ PG_FUNCTION_INFO_V1(hstore_to_jsonb_loose); Datum hstore_to_jsonb_loose(PG_FUNCTION_ARGS) { - HStore *in = PG_GETARG_HS(0); + HStore *in = PG_GETARG_HSTORE_P(0); int i; int count = HS_COUNT(in); char *base = STRPTR(in); diff --git a/contrib/hstore/hstore_op.c b/contrib/hstore/hstore_op.c index 612be23a74..8f9277f8da 100644 --- a/contrib/hstore/hstore_op.c +++ b/contrib/hstore/hstore_op.c @@ -130,7 +130,7 @@ PG_FUNCTION_INFO_V1(hstore_fetchval); Datum hstore_fetchval(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); text *key = PG_GETARG_TEXT_PP(1); HEntry *entries = ARRPTR(hs); text *out; @@ -151,7 +151,7 @@ PG_FUNCTION_INFO_V1(hstore_exists); Datum hstore_exists(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); text *key = PG_GETARG_TEXT_PP(1); int idx = hstoreFindKey(hs, NULL, VARDATA_ANY(key), VARSIZE_ANY_EXHDR(key)); @@ -164,7 +164,7 @@ PG_FUNCTION_INFO_V1(hstore_exists_any); Datum hstore_exists_any(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); ArrayType *keys = PG_GETARG_ARRAYTYPE_P(1); int nkeys; Pairs *key_pairs = hstoreArrayToPairs(keys, &nkeys); @@ -198,7 +198,7 @@ PG_FUNCTION_INFO_V1(hstore_exists_all); Datum hstore_exists_all(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); ArrayType *keys = PG_GETARG_ARRAYTYPE_P(1); int nkeys; Pairs *key_pairs = hstoreArrayToPairs(keys, &nkeys); @@ -232,7 +232,7 @@ PG_FUNCTION_INFO_V1(hstore_defined); Datum hstore_defined(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); text *key = PG_GETARG_TEXT_PP(1); HEntry *entries = ARRPTR(hs); int idx = hstoreFindKey(hs, NULL, @@ -247,7 +247,7 @@ PG_FUNCTION_INFO_V1(hstore_delete); Datum hstore_delete(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); text *key = PG_GETARG_TEXT_PP(1); char *keyptr = VARDATA_ANY(key); int keylen = VARSIZE_ANY_EXHDR(key); @@ -294,7 +294,7 @@ PG_FUNCTION_INFO_V1(hstore_delete_array); Datum hstore_delete_array(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); HStore *out = palloc(VARSIZE(hs)); int hs_count = HS_COUNT(hs); char *ps, @@ -373,8 +373,8 @@ PG_FUNCTION_INFO_V1(hstore_delete_hstore); Datum hstore_delete_hstore(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); - HStore *hs2 = PG_GETARG_HS(1); + HStore *hs = PG_GETARG_HSTORE_P(0); + HStore *hs2 = PG_GETARG_HSTORE_P(1); HStore *out = palloc(VARSIZE(hs)); int hs_count = HS_COUNT(hs); int hs2_count = HS_COUNT(hs2); @@ -473,8 +473,8 @@ PG_FUNCTION_INFO_V1(hstore_concat); Datum hstore_concat(PG_FUNCTION_ARGS) { - HStore *s1 = PG_GETARG_HS(0); - HStore *s2 = PG_GETARG_HS(1); + HStore *s1 = PG_GETARG_HSTORE_P(0); + HStore *s2 = PG_GETARG_HSTORE_P(1); HStore *out = palloc(VARSIZE(s1) + VARSIZE(s2)); char *ps1, *ps2, @@ -571,7 +571,7 @@ PG_FUNCTION_INFO_V1(hstore_slice_to_array); Datum hstore_slice_to_array(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); HEntry *entries = ARRPTR(hs); char *ptr = STRPTR(hs); ArrayType *key_array = PG_GETARG_ARRAYTYPE_P(1); @@ -634,7 +634,7 @@ PG_FUNCTION_INFO_V1(hstore_slice_to_hstore); Datum hstore_slice_to_hstore(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); HEntry *entries = ARRPTR(hs); char 
*ptr = STRPTR(hs); ArrayType *key_array = PG_GETARG_ARRAYTYPE_P(1); @@ -696,7 +696,7 @@ PG_FUNCTION_INFO_V1(hstore_akeys); Datum hstore_akeys(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); Datum *d; ArrayType *a; HEntry *entries = ARRPTR(hs); @@ -731,7 +731,7 @@ PG_FUNCTION_INFO_V1(hstore_avals); Datum hstore_avals(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); Datum *d; bool *nulls; ArrayType *a; @@ -827,7 +827,7 @@ PG_FUNCTION_INFO_V1(hstore_to_array); Datum hstore_to_array(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); ArrayType *out = hstore_to_array_internal(hs, 1); PG_RETURN_POINTER(out); @@ -837,7 +837,7 @@ PG_FUNCTION_INFO_V1(hstore_to_matrix); Datum hstore_to_matrix(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); ArrayType *out = hstore_to_array_internal(hs, 2); PG_RETURN_POINTER(out); @@ -891,7 +891,7 @@ hstore_skeys(PG_FUNCTION_ARGS) if (SRF_IS_FIRSTCALL()) { - hs = PG_GETARG_HS(0); + hs = PG_GETARG_HSTORE_P(0); funcctx = SRF_FIRSTCALL_INIT(); setup_firstcall(funcctx, hs, NULL); } @@ -925,7 +925,7 @@ hstore_svals(PG_FUNCTION_ARGS) if (SRF_IS_FIRSTCALL()) { - hs = PG_GETARG_HS(0); + hs = PG_GETARG_HSTORE_P(0); funcctx = SRF_FIRSTCALL_INIT(); setup_firstcall(funcctx, hs, NULL); } @@ -967,8 +967,8 @@ PG_FUNCTION_INFO_V1(hstore_contains); Datum hstore_contains(PG_FUNCTION_ARGS) { - HStore *val = PG_GETARG_HS(0); - HStore *tmpl = PG_GETARG_HS(1); + HStore *val = PG_GETARG_HSTORE_P(0); + HStore *tmpl = PG_GETARG_HSTORE_P(1); bool res = true; HEntry *te = ARRPTR(tmpl); char *tstr = STRPTR(tmpl); @@ -1032,7 +1032,7 @@ hstore_each(PG_FUNCTION_ARGS) if (SRF_IS_FIRSTCALL()) { - hs = PG_GETARG_HS(0); + hs = PG_GETARG_HSTORE_P(0); funcctx = SRF_FIRSTCALL_INIT(); setup_firstcall(funcctx, hs, fcinfo); } @@ -1087,8 +1087,8 @@ PG_FUNCTION_INFO_V1(hstore_cmp); Datum hstore_cmp(PG_FUNCTION_ARGS) { - HStore *hs1 = PG_GETARG_HS(0); - HStore *hs2 = PG_GETARG_HS(1); + HStore *hs1 = PG_GETARG_HSTORE_P(0); + HStore *hs2 = PG_GETARG_HSTORE_P(1); int hcount1 = HS_COUNT(hs1); int hcount2 = HS_COUNT(hs2); int res = 0; @@ -1235,7 +1235,7 @@ PG_FUNCTION_INFO_V1(hstore_hash); Datum hstore_hash(PG_FUNCTION_ARGS) { - HStore *hs = PG_GETARG_HS(0); + HStore *hs = PG_GETARG_HSTORE_P(0); Datum hval = hash_any((unsigned char *) VARDATA(hs), VARSIZE(hs) - VARHDRSZ); diff --git a/contrib/hstore_plperl/hstore_plperl.c b/contrib/hstore_plperl/hstore_plperl.c index cc46a525f6..6bc3bb37fc 100644 --- a/contrib/hstore_plperl/hstore_plperl.c +++ b/contrib/hstore_plperl/hstore_plperl.c @@ -68,7 +68,7 @@ Datum hstore_to_plperl(PG_FUNCTION_ARGS) { dTHX; - HStore *in = PG_GETARG_HS(0); + HStore *in = PG_GETARG_HSTORE_P(0); int i; int count = HS_COUNT(in); char *base = STRPTR(in); diff --git a/contrib/hstore_plpython/hstore_plpython.c b/contrib/hstore_plpython/hstore_plpython.c index b184324ebf..22366bd40f 100644 --- a/contrib/hstore_plpython/hstore_plpython.c +++ b/contrib/hstore_plpython/hstore_plpython.c @@ -85,7 +85,7 @@ PG_FUNCTION_INFO_V1(hstore_to_plpython); Datum hstore_to_plpython(PG_FUNCTION_ARGS) { - HStore *in = PG_GETARG_HS(0); + HStore *in = PG_GETARG_HSTORE_P(0); int i; int count = HS_COUNT(in); char *base = STRPTR(in); diff --git a/contrib/ltree/_ltree_gist.c b/contrib/ltree/_ltree_gist.c index a387f5b899..23952df4af 100644 --- a/contrib/ltree/_ltree_gist.c +++ b/contrib/ltree/_ltree_gist.c @@ -545,7 +545,7 @@ Datum 
_ltree_consistent(PG_FUNCTION_ARGS) { GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - char *query = (char *) DatumGetPointer(PG_DETOAST_DATUM(PG_GETARG_DATUM(1))); + void *query = (void *) PG_DETOAST_DATUM(PG_GETARG_DATUM(1)); StrategyNumber strategy = (StrategyNumber) PG_GETARG_UINT16(2); /* Oid subtype = PG_GETARG_OID(3); */ diff --git a/contrib/ltree/_ltree_op.c b/contrib/ltree/_ltree_op.c index fdf6ebb43b..9bb6bcaeff 100644 --- a/contrib/ltree/_ltree_op.c +++ b/contrib/ltree/_ltree_op.c @@ -71,7 +71,7 @@ Datum _ltree_isparent(PG_FUNCTION_ARGS) { ArrayType *la = PG_GETARG_ARRAYTYPE_P(0); - ltree *query = PG_GETARG_LTREE(1); + ltree *query = PG_GETARG_LTREE_P(1); bool res = array_iterator(la, ltree_isparent, (void *) query, NULL); PG_FREE_IF_COPY(la, 0); @@ -92,7 +92,7 @@ Datum _ltree_risparent(PG_FUNCTION_ARGS) { ArrayType *la = PG_GETARG_ARRAYTYPE_P(0); - ltree *query = PG_GETARG_LTREE(1); + ltree *query = PG_GETARG_LTREE_P(1); bool res = array_iterator(la, ltree_risparent, (void *) query, NULL); PG_FREE_IF_COPY(la, 0); @@ -113,7 +113,7 @@ Datum _ltq_regex(PG_FUNCTION_ARGS) { ArrayType *la = PG_GETARG_ARRAYTYPE_P(0); - lquery *query = PG_GETARG_LQUERY(1); + lquery *query = PG_GETARG_LQUERY_P(1); bool res = array_iterator(la, ltq_regex, (void *) query, NULL); PG_FREE_IF_COPY(la, 0); @@ -178,7 +178,7 @@ Datum _ltxtq_exec(PG_FUNCTION_ARGS) { ArrayType *la = PG_GETARG_ARRAYTYPE_P(0); - ltxtquery *query = PG_GETARG_LTXTQUERY(1); + ltxtquery *query = PG_GETARG_LTXTQUERY_P(1); bool res = array_iterator(la, ltxtq_exec, (void *) query, NULL); PG_FREE_IF_COPY(la, 0); @@ -200,7 +200,7 @@ Datum _ltree_extract_isparent(PG_FUNCTION_ARGS) { ArrayType *la = PG_GETARG_ARRAYTYPE_P(0); - ltree *query = PG_GETARG_LTREE(1); + ltree *query = PG_GETARG_LTREE_P(1); ltree *found, *item; @@ -223,7 +223,7 @@ Datum _ltree_extract_risparent(PG_FUNCTION_ARGS) { ArrayType *la = PG_GETARG_ARRAYTYPE_P(0); - ltree *query = PG_GETARG_LTREE(1); + ltree *query = PG_GETARG_LTREE_P(1); ltree *found, *item; @@ -246,7 +246,7 @@ Datum _ltq_extract_regex(PG_FUNCTION_ARGS) { ArrayType *la = PG_GETARG_ARRAYTYPE_P(0); - lquery *query = PG_GETARG_LQUERY(1); + lquery *query = PG_GETARG_LQUERY_P(1); ltree *found, *item; @@ -269,7 +269,7 @@ Datum _ltxtq_extract_exec(PG_FUNCTION_ARGS) { ArrayType *la = PG_GETARG_ARRAYTYPE_P(0); - ltxtquery *query = PG_GETARG_LTXTQUERY(1); + ltxtquery *query = PG_GETARG_LTXTQUERY_P(1); ltree *found, *item; diff --git a/contrib/ltree/lquery_op.c b/contrib/ltree/lquery_op.c index 229ddd0ae3..b6d2deb1af 100644 --- a/contrib/ltree/lquery_op.c +++ b/contrib/ltree/lquery_op.c @@ -302,8 +302,8 @@ checkCond(lquery_level *curq, int query_numlevel, ltree_level *curt, int tree_nu Datum ltq_regex(PG_FUNCTION_ARGS) { - ltree *tree = PG_GETARG_LTREE(0); - lquery *query = PG_GETARG_LQUERY(1); + ltree *tree = PG_GETARG_LTREE_P(0); + lquery *query = PG_GETARG_LQUERY_P(1); bool res = false; if (query->flag & LQUERY_HASNOT) @@ -338,7 +338,7 @@ ltq_rregex(PG_FUNCTION_ARGS) Datum lt_q_regex(PG_FUNCTION_ARGS) { - ltree *tree = PG_GETARG_LTREE(0); + ltree *tree = PG_GETARG_LTREE_P(0); ArrayType *_query = PG_GETARG_ARRAYTYPE_P(1); lquery *query = (lquery *) ARR_DATA_PTR(_query); bool res = false; diff --git a/contrib/ltree/ltree.h b/contrib/ltree/ltree.h index fd86323ffe..e4b8c84fa6 100644 --- a/contrib/ltree/ltree.h +++ b/contrib/ltree/ltree.h @@ -165,12 +165,21 @@ bool compare_subnode(ltree_level *t, char *q, int len, ltree *lca_inner(ltree **a, int len); int ltree_strncasecmp(const char *a, const char *b, size_t 
s); -#define PG_GETARG_LTREE(x) ((ltree*)DatumGetPointer(PG_DETOAST_DATUM(PG_GETARG_DATUM(x)))) -#define PG_GETARG_LTREE_COPY(x) ((ltree*)DatumGetPointer(PG_DETOAST_DATUM_COPY(PG_GETARG_DATUM(x)))) -#define PG_GETARG_LQUERY(x) ((lquery*)DatumGetPointer(PG_DETOAST_DATUM(PG_GETARG_DATUM(x)))) -#define PG_GETARG_LQUERY_COPY(x) ((lquery*)DatumGetPointer(PG_DETOAST_DATUM_COPY(PG_GETARG_DATUM(x)))) -#define PG_GETARG_LTXTQUERY(x) ((ltxtquery*)DatumGetPointer(PG_DETOAST_DATUM(PG_GETARG_DATUM(x)))) -#define PG_GETARG_LTXTQUERY_COPY(x) ((ltxtquery*)DatumGetPointer(PG_DETOAST_DATUM_COPY(PG_GETARG_DATUM(x)))) +/* fmgr macros for ltree objects */ +#define DatumGetLtreeP(X) ((ltree *) PG_DETOAST_DATUM(X)) +#define DatumGetLtreePCopy(X) ((ltree *) PG_DETOAST_DATUM_COPY(X)) +#define PG_GETARG_LTREE_P(n) DatumGetLtreeP(PG_GETARG_DATUM(n)) +#define PG_GETARG_LTREE_P_COPY(n) DatumGetLtreePCopy(PG_GETARG_DATUM(n)) + +#define DatumGetLqueryP(X) ((lquery *) PG_DETOAST_DATUM(X)) +#define DatumGetLqueryPCopy(X) ((lquery *) PG_DETOAST_DATUM_COPY(X)) +#define PG_GETARG_LQUERY_P(n) DatumGetLqueryP(PG_GETARG_DATUM(n)) +#define PG_GETARG_LQUERY_P_COPY(n) DatumGetLqueryPCopy(PG_GETARG_DATUM(n)) + +#define DatumGetLtxtqueryP(X) ((ltxtquery *) PG_DETOAST_DATUM(X)) +#define DatumGetLtxtqueryPCopy(X) ((ltxtquery *) PG_DETOAST_DATUM_COPY(X)) +#define PG_GETARG_LTXTQUERY_P(n) DatumGetLtxtqueryP(PG_GETARG_DATUM(n)) +#define PG_GETARG_LTXTQUERY_P_COPY(n) DatumGetLtxtqueryPCopy(PG_GETARG_DATUM(n)) /* GiST support for ltree */ diff --git a/contrib/ltree/ltree_gist.c b/contrib/ltree/ltree_gist.c index 70e78a672a..ecfd9d84d7 100644 --- a/contrib/ltree/ltree_gist.c +++ b/contrib/ltree/ltree_gist.c @@ -53,7 +53,7 @@ ltree_compress(PG_FUNCTION_ARGS) if (entry->leafkey) { /* ltree */ ltree_gist *key; - ltree *val = (ltree *) DatumGetPointer(PG_DETOAST_DATUM(entry->key)); + ltree *val = DatumGetLtreeP(entry->key); int32 len = LTG_HDRSIZE + VARSIZE(val); key = (ltree_gist *) palloc0(len); @@ -73,7 +73,7 @@ Datum ltree_decompress(PG_FUNCTION_ARGS) { GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - ltree_gist *key = (ltree_gist *) DatumGetPointer(PG_DETOAST_DATUM(entry->key)); + ltree_gist *key = (ltree_gist *) PG_DETOAST_DATUM(entry->key); if (PointerGetDatum(key) != entry->key) { @@ -621,18 +621,18 @@ ltree_consistent(PG_FUNCTION_ARGS) switch (strategy) { case BTLessStrategyNumber: - query = PG_GETARG_LTREE(1); + query = PG_GETARG_LTREE_P(1); res = (GIST_LEAF(entry)) ? (ltree_compare((ltree *) query, LTG_NODE(key)) > 0) : (ltree_compare((ltree *) query, LTG_GETLNODE(key)) >= 0); break; case BTLessEqualStrategyNumber: - query = PG_GETARG_LTREE(1); + query = PG_GETARG_LTREE_P(1); res = (ltree_compare((ltree *) query, LTG_GETLNODE(key)) >= 0); break; case BTEqualStrategyNumber: - query = PG_GETARG_LTREE(1); + query = PG_GETARG_LTREE_P(1); if (GIST_LEAF(entry)) res = (ltree_compare((ltree *) query, LTG_NODE(key)) == 0); else @@ -643,25 +643,25 @@ ltree_consistent(PG_FUNCTION_ARGS) ); break; case BTGreaterEqualStrategyNumber: - query = PG_GETARG_LTREE(1); + query = PG_GETARG_LTREE_P(1); res = (ltree_compare((ltree *) query, LTG_GETRNODE(key)) <= 0); break; case BTGreaterStrategyNumber: - query = PG_GETARG_LTREE(1); + query = PG_GETARG_LTREE_P(1); res = (GIST_LEAF(entry)) ? (ltree_compare((ltree *) query, LTG_GETRNODE(key)) < 0) : (ltree_compare((ltree *) query, LTG_GETRNODE(key)) <= 0); break; case 10: - query = PG_GETARG_LTREE_COPY(1); + query = PG_GETARG_LTREE_P_COPY(1); res = (GIST_LEAF(entry)) ? 
inner_isparent((ltree *) query, LTG_NODE(key)) : gist_isparent(key, (ltree *) query); break; case 11: - query = PG_GETARG_LTREE(1); + query = PG_GETARG_LTREE_P(1); res = (GIST_LEAF(entry)) ? inner_isparent(LTG_NODE(key), (ltree *) query) : @@ -669,7 +669,7 @@ ltree_consistent(PG_FUNCTION_ARGS) break; case 12: case 13: - query = PG_GETARG_LQUERY(1); + query = PG_GETARG_LQUERY_P(1); if (GIST_LEAF(entry)) res = DatumGetBool(DirectFunctionCall2(ltq_regex, PointerGetDatum(LTG_NODE(key)), @@ -680,18 +680,18 @@ ltree_consistent(PG_FUNCTION_ARGS) break; case 14: case 15: - query = PG_GETARG_LQUERY(1); + query = PG_GETARG_LTXTQUERY_P(1); if (GIST_LEAF(entry)) res = DatumGetBool(DirectFunctionCall2(ltxtq_exec, PointerGetDatum(LTG_NODE(key)), - PointerGetDatum((lquery *) query) + PointerGetDatum((ltxtquery *) query) )); else res = gist_qtxt(key, (ltxtquery *) query); break; case 16: case 17: - query = DatumGetPointer(PG_DETOAST_DATUM(PG_GETARG_DATUM(1))); + query = PG_GETARG_ARRAYTYPE_P(1); if (GIST_LEAF(entry)) res = DatumGetBool(DirectFunctionCall2(lt_q_regex, PointerGetDatum(LTG_NODE(key)), diff --git a/contrib/ltree/ltree_io.c b/contrib/ltree/ltree_io.c index 34ca597a48..f54f037443 100644 --- a/contrib/ltree/ltree_io.c +++ b/contrib/ltree/ltree_io.c @@ -149,7 +149,7 @@ ltree_in(PG_FUNCTION_ARGS) Datum ltree_out(PG_FUNCTION_ARGS) { - ltree *in = PG_GETARG_LTREE(0); + ltree *in = PG_GETARG_LTREE_P(0); char *buf, *ptr; int i; @@ -521,7 +521,7 @@ lquery_in(PG_FUNCTION_ARGS) Datum lquery_out(PG_FUNCTION_ARGS) { - lquery *in = PG_GETARG_LQUERY(0); + lquery *in = PG_GETARG_LQUERY_P(0); char *buf, *ptr; int i, diff --git a/contrib/ltree/ltree_op.c b/contrib/ltree/ltree_op.c index aa1e9918be..d62ca02521 100644 --- a/contrib/ltree/ltree_op.c +++ b/contrib/ltree/ltree_op.c @@ -67,65 +67,65 @@ ltree_compare(const ltree *a, const ltree *b) } #define RUNCMP \ -ltree *a = PG_GETARG_LTREE(0); \ -ltree *b = PG_GETARG_LTREE(1); \ -int res = ltree_compare(a,b); \ -PG_FREE_IF_COPY(a,0); \ -PG_FREE_IF_COPY(b,1); \ +ltree *a = PG_GETARG_LTREE_P(0); \ +ltree *b = PG_GETARG_LTREE_P(1); \ +int res = ltree_compare(a,b); \ +PG_FREE_IF_COPY(a,0); \ +PG_FREE_IF_COPY(b,1) Datum ltree_cmp(PG_FUNCTION_ARGS) { - RUNCMP - PG_RETURN_INT32(res); + RUNCMP; + PG_RETURN_INT32(res); } Datum ltree_lt(PG_FUNCTION_ARGS) { - RUNCMP - PG_RETURN_BOOL((res < 0) ? true : false); + RUNCMP; + PG_RETURN_BOOL((res < 0) ? true : false); } Datum ltree_le(PG_FUNCTION_ARGS) { - RUNCMP - PG_RETURN_BOOL((res <= 0) ? true : false); + RUNCMP; + PG_RETURN_BOOL((res <= 0) ? true : false); } Datum ltree_eq(PG_FUNCTION_ARGS) { - RUNCMP - PG_RETURN_BOOL((res == 0) ? true : false); + RUNCMP; + PG_RETURN_BOOL((res == 0) ? true : false); } Datum ltree_ge(PG_FUNCTION_ARGS) { - RUNCMP - PG_RETURN_BOOL((res >= 0) ? true : false); + RUNCMP; + PG_RETURN_BOOL((res >= 0) ? true : false); } Datum ltree_gt(PG_FUNCTION_ARGS) { - RUNCMP - PG_RETURN_BOOL((res > 0) ? true : false); + RUNCMP; + PG_RETURN_BOOL((res > 0) ? true : false); } Datum ltree_ne(PG_FUNCTION_ARGS) { - RUNCMP - PG_RETURN_BOOL((res != 0) ? true : false); + RUNCMP; + PG_RETURN_BOOL((res != 0) ? 
true : false); } Datum nlevel(PG_FUNCTION_ARGS) { - ltree *a = PG_GETARG_LTREE(0); + ltree *a = PG_GETARG_LTREE_P(0); int res = a->numlevel; PG_FREE_IF_COPY(a, 0); @@ -159,8 +159,8 @@ inner_isparent(const ltree *c, const ltree *p) Datum ltree_isparent(PG_FUNCTION_ARGS) { - ltree *c = PG_GETARG_LTREE(1); - ltree *p = PG_GETARG_LTREE(0); + ltree *c = PG_GETARG_LTREE_P(1); + ltree *p = PG_GETARG_LTREE_P(0); bool res = inner_isparent(c, p); PG_FREE_IF_COPY(c, 1); @@ -171,8 +171,8 @@ ltree_isparent(PG_FUNCTION_ARGS) Datum ltree_risparent(PG_FUNCTION_ARGS) { - ltree *c = PG_GETARG_LTREE(0); - ltree *p = PG_GETARG_LTREE(1); + ltree *c = PG_GETARG_LTREE_P(0); + ltree *p = PG_GETARG_LTREE_P(1); bool res = inner_isparent(c, p); PG_FREE_IF_COPY(c, 0); @@ -223,7 +223,7 @@ inner_subltree(ltree *t, int32 startpos, int32 endpos) Datum subltree(PG_FUNCTION_ARGS) { - ltree *t = PG_GETARG_LTREE(0); + ltree *t = PG_GETARG_LTREE_P(0); ltree *res = inner_subltree(t, PG_GETARG_INT32(1), PG_GETARG_INT32(2)); PG_FREE_IF_COPY(t, 0); @@ -233,7 +233,7 @@ subltree(PG_FUNCTION_ARGS) Datum subpath(PG_FUNCTION_ARGS) { - ltree *t = PG_GETARG_LTREE(0); + ltree *t = PG_GETARG_LTREE_P(0); int32 start = PG_GETARG_INT32(1); int32 len = (fcinfo->nargs == 3) ? PG_GETARG_INT32(2) : 0; int32 end; @@ -282,8 +282,8 @@ ltree_concat(ltree *a, ltree *b) Datum ltree_addltree(PG_FUNCTION_ARGS) { - ltree *a = PG_GETARG_LTREE(0); - ltree *b = PG_GETARG_LTREE(1); + ltree *a = PG_GETARG_LTREE_P(0); + ltree *b = PG_GETARG_LTREE_P(1); ltree *r; r = ltree_concat(a, b); @@ -295,7 +295,7 @@ ltree_addltree(PG_FUNCTION_ARGS) Datum ltree_addtext(PG_FUNCTION_ARGS) { - ltree *a = PG_GETARG_LTREE(0); + ltree *a = PG_GETARG_LTREE_P(0); text *b = PG_GETARG_TEXT_PP(1); char *s; ltree *r, @@ -320,8 +320,8 @@ ltree_addtext(PG_FUNCTION_ARGS) Datum ltree_index(PG_FUNCTION_ARGS) { - ltree *a = PG_GETARG_LTREE(0); - ltree *b = PG_GETARG_LTREE(1); + ltree *a = PG_GETARG_LTREE_P(0); + ltree *b = PG_GETARG_LTREE_P(1); int start = (fcinfo->nargs == 3) ? 
PG_GETARG_INT32(2) : 0; int i, j; @@ -380,7 +380,7 @@ ltree_index(PG_FUNCTION_ARGS) Datum ltree_textadd(PG_FUNCTION_ARGS) { - ltree *a = PG_GETARG_LTREE(1); + ltree *a = PG_GETARG_LTREE_P(1); text *b = PG_GETARG_TEXT_PP(0); char *s; ltree *r, @@ -476,7 +476,7 @@ lca(PG_FUNCTION_ARGS) a = (ltree **) palloc(sizeof(ltree *) * fcinfo->nargs); for (i = 0; i < fcinfo->nargs; i++) - a[i] = PG_GETARG_LTREE(i); + a[i] = PG_GETARG_LTREE_P(i); res = lca_inner(a, (int) fcinfo->nargs); for (i = 0; i < fcinfo->nargs; i++) PG_FREE_IF_COPY(a[i], i); @@ -508,7 +508,7 @@ text2ltree(PG_FUNCTION_ARGS) Datum ltree2text(PG_FUNCTION_ARGS) { - ltree *in = PG_GETARG_LTREE(0); + ltree *in = PG_GETARG_LTREE_P(0); char *ptr; int i; ltree_level *curlevel; diff --git a/contrib/ltree/ltxtquery_io.c b/contrib/ltree/ltxtquery_io.c index 9ca1994249..56bf39d145 100644 --- a/contrib/ltree/ltxtquery_io.c +++ b/contrib/ltree/ltxtquery_io.c @@ -515,7 +515,7 @@ infix(INFIX *in, bool first) Datum ltxtq_out(PG_FUNCTION_ARGS) { - ltxtquery *query = PG_GETARG_LTXTQUERY(0); + ltxtquery *query = PG_GETARG_LTXTQUERY_P(0); INFIX nrm; if (query->size == 0) diff --git a/contrib/ltree/ltxtquery_op.c b/contrib/ltree/ltxtquery_op.c index 6e9dbc4690..dc0ee82bb6 100644 --- a/contrib/ltree/ltxtquery_op.c +++ b/contrib/ltree/ltxtquery_op.c @@ -86,8 +86,8 @@ checkcondition_str(void *checkval, ITEM *val) Datum ltxtq_exec(PG_FUNCTION_ARGS) { - ltree *val = PG_GETARG_LTREE(0); - ltxtquery *query = PG_GETARG_LTXTQUERY(1); + ltree *val = PG_GETARG_LTREE_P(0); + ltxtquery *query = PG_GETARG_LTXTQUERY_P(1); CHKVAL chkval; bool result; diff --git a/contrib/ltree_plpython/ltree_plpython.c b/contrib/ltree_plpython/ltree_plpython.c index bdd462a91b..ae9b90dd10 100644 --- a/contrib/ltree_plpython/ltree_plpython.c +++ b/contrib/ltree_plpython/ltree_plpython.c @@ -40,7 +40,7 @@ PG_FUNCTION_INFO_V1(ltree_to_plpython); Datum ltree_to_plpython(PG_FUNCTION_ARGS) { - ltree *in = PG_GETARG_LTREE(0); + ltree *in = PG_GETARG_LTREE_P(0); int i; PyObject *list; ltree_level *curlevel; diff --git a/src/backend/tsearch/to_tsany.c b/src/backend/tsearch/to_tsany.c index 35d9ab276c..cf55e3910d 100644 --- a/src/backend/tsearch/to_tsany.c +++ b/src/backend/tsearch/to_tsany.c @@ -271,7 +271,7 @@ Datum jsonb_to_tsvector_byid(PG_FUNCTION_ARGS) { Oid cfgId = PG_GETARG_OID(0); - Jsonb *jb = PG_GETARG_JSONB(1); + Jsonb *jb = PG_GETARG_JSONB_P(1); TSVector result; TSVectorBuildState state; ParsedText prs; @@ -293,13 +293,13 @@ jsonb_to_tsvector_byid(PG_FUNCTION_ARGS) Datum jsonb_to_tsvector(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); Oid cfgId; cfgId = getTSCurrentConfig(true); PG_RETURN_DATUM(DirectFunctionCall2(jsonb_to_tsvector_byid, ObjectIdGetDatum(cfgId), - JsonbGetDatum(jb))); + JsonbPGetDatum(jb))); } Datum diff --git a/src/backend/tsearch/wparser.c b/src/backend/tsearch/wparser.c index c9ce80a91a..523c3edd7d 100644 --- a/src/backend/tsearch/wparser.c +++ b/src/backend/tsearch/wparser.c @@ -383,7 +383,7 @@ Datum ts_headline_jsonb_byid_opt(PG_FUNCTION_ARGS) { Oid tsconfig = PG_GETARG_OID(0); - Jsonb *jb = PG_GETARG_JSONB(1); + Jsonb *jb = PG_GETARG_JSONB_P(1); TSQuery query = PG_GETARG_TSQUERY(2); text *opt = (PG_NARGS() > 3 && PG_GETARG_POINTER(3)) ? 
PG_GETARG_TEXT_P(3) : NULL; Jsonb *out; @@ -424,7 +424,7 @@ ts_headline_jsonb_byid_opt(PG_FUNCTION_ARGS) pfree(prs.stopsel); } - PG_RETURN_JSONB(out); + PG_RETURN_JSONB_P(out); } Datum diff --git a/src/backend/utils/adt/array_expanded.c b/src/backend/utils/adt/array_expanded.c index f256c7f13d..31583f9033 100644 --- a/src/backend/utils/adt/array_expanded.c +++ b/src/backend/utils/adt/array_expanded.c @@ -394,11 +394,11 @@ DatumGetExpandedArrayX(Datum d, ArrayMetaState *metacache) } /* - * DatumGetAnyArray: return either an expanded array or a detoasted varlena + * DatumGetAnyArrayP: return either an expanded array or a detoasted varlena * array. The result must not be modified in-place. */ AnyArrayType * -DatumGetAnyArray(Datum d) +DatumGetAnyArrayP(Datum d) { ExpandedArrayHeader *eah; diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c index 2a4de41bbc..e4101c9af0 100644 --- a/src/backend/utils/adt/arrayfuncs.c +++ b/src/backend/utils/adt/arrayfuncs.c @@ -1011,7 +1011,7 @@ CopyArrayEls(ArrayType *array, Datum array_out(PG_FUNCTION_ARGS) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); Oid element_type = AARR_ELEMTYPE(v); int typlen; bool typbyval; @@ -1534,7 +1534,7 @@ ReadArrayBinary(StringInfo buf, Datum array_send(PG_FUNCTION_ARGS) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); Oid element_type = AARR_ELEMTYPE(v); int typlen; bool typbyval; @@ -1638,7 +1638,7 @@ array_send(PG_FUNCTION_ARGS) Datum array_ndims(PG_FUNCTION_ARGS) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); /* Sanity check: does it look like an array at all? */ if (AARR_NDIM(v) <= 0 || AARR_NDIM(v) > MAXDIM) @@ -1654,7 +1654,7 @@ array_ndims(PG_FUNCTION_ARGS) Datum array_dims(PG_FUNCTION_ARGS) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); char *p; int i; int *dimv, @@ -1692,7 +1692,7 @@ array_dims(PG_FUNCTION_ARGS) Datum array_lower(PG_FUNCTION_ARGS) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); int reqdim = PG_GETARG_INT32(1); int *lb; int result; @@ -1719,7 +1719,7 @@ array_lower(PG_FUNCTION_ARGS) Datum array_upper(PG_FUNCTION_ARGS) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); int reqdim = PG_GETARG_INT32(1); int *dimv, *lb; @@ -1749,7 +1749,7 @@ array_upper(PG_FUNCTION_ARGS) Datum array_length(PG_FUNCTION_ARGS) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); int reqdim = PG_GETARG_INT32(1); int *dimv; int result; @@ -1776,7 +1776,7 @@ array_length(PG_FUNCTION_ARGS) Datum array_cardinality(PG_FUNCTION_ARGS) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); PG_RETURN_INT32(ArrayGetNItems(AARR_NDIM(v), AARR_DIMS(v))); } @@ -3147,7 +3147,7 @@ array_map(FunctionCallInfo fcinfo, Oid retType, ArrayMapState *amstate) elog(ERROR, "invalid nargs: %d", fcinfo->nargs); if (PG_ARGISNULL(0)) elog(ERROR, "null input array"); - v = PG_GETARG_ANY_ARRAY(0); + v = PG_GETARG_ANY_ARRAY_P(0); inpType = AARR_ELEMTYPE(v); ndim = AARR_NDIM(v); @@ -3589,8 +3589,8 @@ array_contains_nulls(ArrayType *array) Datum array_eq(PG_FUNCTION_ARGS) { - AnyArrayType *array1 = PG_GETARG_ANY_ARRAY(0); - AnyArrayType *array2 = PG_GETARG_ANY_ARRAY(1); + AnyArrayType *array1 = PG_GETARG_ANY_ARRAY_P(0); + AnyArrayType *array2 = PG_GETARG_ANY_ARRAY_P(1); Oid collation = 
PG_GET_COLLATION(); int ndims1 = AARR_NDIM(array1); int ndims2 = AARR_NDIM(array2); @@ -3760,8 +3760,8 @@ btarraycmp(PG_FUNCTION_ARGS) static int array_cmp(FunctionCallInfo fcinfo) { - AnyArrayType *array1 = PG_GETARG_ANY_ARRAY(0); - AnyArrayType *array2 = PG_GETARG_ANY_ARRAY(1); + AnyArrayType *array1 = PG_GETARG_ANY_ARRAY_P(0); + AnyArrayType *array2 = PG_GETARG_ANY_ARRAY_P(1); Oid collation = PG_GET_COLLATION(); int ndims1 = AARR_NDIM(array1); int ndims2 = AARR_NDIM(array2); @@ -3931,7 +3931,7 @@ array_cmp(FunctionCallInfo fcinfo) Datum hash_array(PG_FUNCTION_ARGS) { - AnyArrayType *array = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *array = PG_GETARG_ANY_ARRAY_P(0); int ndims = AARR_NDIM(array); int *dims = AARR_DIMS(array); Oid element_type = AARR_ELEMTYPE(array); @@ -4028,7 +4028,7 @@ hash_array(PG_FUNCTION_ARGS) Datum hash_array_extended(PG_FUNCTION_ARGS) { - AnyArrayType *array = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *array = PG_GETARG_ANY_ARRAY_P(0); uint64 seed = PG_GETARG_INT64(1); int ndims = AARR_NDIM(array); int *dims = AARR_DIMS(array); @@ -4260,8 +4260,8 @@ array_contain_compare(AnyArrayType *array1, AnyArrayType *array2, Oid collation, Datum arrayoverlap(PG_FUNCTION_ARGS) { - AnyArrayType *array1 = PG_GETARG_ANY_ARRAY(0); - AnyArrayType *array2 = PG_GETARG_ANY_ARRAY(1); + AnyArrayType *array1 = PG_GETARG_ANY_ARRAY_P(0); + AnyArrayType *array2 = PG_GETARG_ANY_ARRAY_P(1); Oid collation = PG_GET_COLLATION(); bool result; @@ -4278,8 +4278,8 @@ arrayoverlap(PG_FUNCTION_ARGS) Datum arraycontains(PG_FUNCTION_ARGS) { - AnyArrayType *array1 = PG_GETARG_ANY_ARRAY(0); - AnyArrayType *array2 = PG_GETARG_ANY_ARRAY(1); + AnyArrayType *array1 = PG_GETARG_ANY_ARRAY_P(0); + AnyArrayType *array2 = PG_GETARG_ANY_ARRAY_P(1); Oid collation = PG_GET_COLLATION(); bool result; @@ -4296,8 +4296,8 @@ arraycontains(PG_FUNCTION_ARGS) Datum arraycontained(PG_FUNCTION_ARGS) { - AnyArrayType *array1 = PG_GETARG_ANY_ARRAY(0); - AnyArrayType *array2 = PG_GETARG_ANY_ARRAY(1); + AnyArrayType *array1 = PG_GETARG_ANY_ARRAY_P(0); + AnyArrayType *array2 = PG_GETARG_ANY_ARRAY_P(1); Oid collation = PG_GET_COLLATION(); bool result; @@ -5634,7 +5634,7 @@ generate_subscripts(PG_FUNCTION_ARGS) /* stuff done only on the first call of the function */ if (SRF_IS_FIRSTCALL()) { - AnyArrayType *v = PG_GETARG_ANY_ARRAY(0); + AnyArrayType *v = PG_GETARG_ANY_ARRAY_P(0); int reqdim = PG_GETARG_INT32(1); int *lb, *dimv; @@ -5996,7 +5996,7 @@ array_unnest(PG_FUNCTION_ARGS) * and not before. (If no detoast happens, we assume the originally * passed array will stick around till then.) 
*/ - arr = PG_GETARG_ANY_ARRAY(0); + arr = PG_GETARG_ANY_ARRAY_P(0); /* allocate memory for user context */ fctx = (array_unnest_fctx *) palloc(sizeof(array_unnest_fctx)); diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c index 1eb7f3d6f9..95db895538 100644 --- a/src/backend/utils/adt/jsonb.c +++ b/src/backend/utils/adt/jsonb.c @@ -130,7 +130,7 @@ jsonb_recv(PG_FUNCTION_ARGS) Datum jsonb_out(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); char *out; out = JsonbToCString(NULL, &jb->root, VARSIZE(jb)); @@ -146,7 +146,7 @@ jsonb_out(PG_FUNCTION_ARGS) Datum jsonb_send(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); StringInfoData buf; StringInfo jtext = makeStringInfo(); int version = 1; @@ -171,7 +171,7 @@ jsonb_send(PG_FUNCTION_ARGS) Datum jsonb_typeof(PG_FUNCTION_ARGS) { - Jsonb *in = PG_GETARG_JSONB(0); + Jsonb *in = PG_GETARG_JSONB_P(0); JsonbIterator *it; JsonbValue v; char *result; @@ -878,7 +878,7 @@ datum_to_jsonb(Datum val, bool is_null, JsonbInState *result, break; case JSONBTYPE_JSONB: { - Jsonb *jsonb = DatumGetJsonb(val); + Jsonb *jsonb = DatumGetJsonbP(val); JsonbIterator *it; it = JsonbIteratorInit(&jsonb->root); diff --git a/src/backend/utils/adt/jsonb_gin.c b/src/backend/utils/adt/jsonb_gin.c index 8e8e8fd850..4e1ba10e9c 100644 --- a/src/backend/utils/adt/jsonb_gin.c +++ b/src/backend/utils/adt/jsonb_gin.c @@ -66,7 +66,7 @@ gin_compare_jsonb(PG_FUNCTION_ARGS) Datum gin_extract_jsonb(PG_FUNCTION_ARGS) { - Jsonb *jb = (Jsonb *) PG_GETARG_JSONB(0); + Jsonb *jb = (Jsonb *) PG_GETARG_JSONB_P(0); int32 *nentries = (int32 *) PG_GETARG_POINTER(1); int total = 2 * JB_ROOT_COUNT(jb); JsonbIterator *it; @@ -196,7 +196,7 @@ gin_consistent_jsonb(PG_FUNCTION_ARGS) bool *check = (bool *) PG_GETARG_POINTER(0); StrategyNumber strategy = PG_GETARG_UINT16(1); - /* Jsonb *query = PG_GETARG_JSONB(2); */ + /* Jsonb *query = PG_GETARG_JSONB_P(2); */ int32 nkeys = PG_GETARG_INT32(3); /* Pointer *extra_data = (Pointer *) PG_GETARG_POINTER(4); */ @@ -268,7 +268,7 @@ gin_triconsistent_jsonb(PG_FUNCTION_ARGS) GinTernaryValue *check = (GinTernaryValue *) PG_GETARG_POINTER(0); StrategyNumber strategy = PG_GETARG_UINT16(1); - /* Jsonb *query = PG_GETARG_JSONB(2); */ + /* Jsonb *query = PG_GETARG_JSONB_P(2); */ int32 nkeys = PG_GETARG_INT32(3); /* Pointer *extra_data = (Pointer *) PG_GETARG_POINTER(4); */ @@ -329,7 +329,7 @@ gin_triconsistent_jsonb(PG_FUNCTION_ARGS) Datum gin_extract_jsonb_path(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); int32 *nentries = (int32 *) PG_GETARG_POINTER(1); int total = 2 * JB_ROOT_COUNT(jb); JsonbIterator *it; @@ -454,7 +454,7 @@ gin_consistent_jsonb_path(PG_FUNCTION_ARGS) bool *check = (bool *) PG_GETARG_POINTER(0); StrategyNumber strategy = PG_GETARG_UINT16(1); - /* Jsonb *query = PG_GETARG_JSONB(2); */ + /* Jsonb *query = PG_GETARG_JSONB_P(2); */ int32 nkeys = PG_GETARG_INT32(3); /* Pointer *extra_data = (Pointer *) PG_GETARG_POINTER(4); */ @@ -492,7 +492,7 @@ gin_triconsistent_jsonb_path(PG_FUNCTION_ARGS) GinTernaryValue *check = (GinTernaryValue *) PG_GETARG_POINTER(0); StrategyNumber strategy = PG_GETARG_UINT16(1); - /* Jsonb *query = PG_GETARG_JSONB(2); */ + /* Jsonb *query = PG_GETARG_JSONB_P(2); */ int32 nkeys = PG_GETARG_INT32(3); /* Pointer *extra_data = (Pointer *) PG_GETARG_POINTER(4); */ diff --git a/src/backend/utils/adt/jsonb_op.c b/src/backend/utils/adt/jsonb_op.c index 52a7e19a54..d54a07d204 100644 --- 
a/src/backend/utils/adt/jsonb_op.c +++ b/src/backend/utils/adt/jsonb_op.c @@ -21,7 +21,7 @@ Datum jsonb_exists(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); text *key = PG_GETARG_TEXT_PP(1); JsonbValue kval; JsonbValue *v = NULL; @@ -46,7 +46,7 @@ jsonb_exists(PG_FUNCTION_ARGS) Datum jsonb_exists_any(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); ArrayType *keys = PG_GETARG_ARRAYTYPE_P(1); int i; Datum *key_datums; @@ -79,7 +79,7 @@ jsonb_exists_any(PG_FUNCTION_ARGS) Datum jsonb_exists_all(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); ArrayType *keys = PG_GETARG_ARRAYTYPE_P(1); int i; Datum *key_datums; @@ -112,8 +112,8 @@ jsonb_exists_all(PG_FUNCTION_ARGS) Datum jsonb_contains(PG_FUNCTION_ARGS) { - Jsonb *val = PG_GETARG_JSONB(0); - Jsonb *tmpl = PG_GETARG_JSONB(1); + Jsonb *val = PG_GETARG_JSONB_P(0); + Jsonb *tmpl = PG_GETARG_JSONB_P(1); JsonbIterator *it1, *it2; @@ -131,8 +131,8 @@ Datum jsonb_contained(PG_FUNCTION_ARGS) { /* Commutator of "contains" */ - Jsonb *tmpl = PG_GETARG_JSONB(0); - Jsonb *val = PG_GETARG_JSONB(1); + Jsonb *tmpl = PG_GETARG_JSONB_P(0); + Jsonb *val = PG_GETARG_JSONB_P(1); JsonbIterator *it1, *it2; @@ -149,8 +149,8 @@ jsonb_contained(PG_FUNCTION_ARGS) Datum jsonb_ne(PG_FUNCTION_ARGS) { - Jsonb *jba = PG_GETARG_JSONB(0); - Jsonb *jbb = PG_GETARG_JSONB(1); + Jsonb *jba = PG_GETARG_JSONB_P(0); + Jsonb *jbb = PG_GETARG_JSONB_P(1); bool res; res = (compareJsonbContainers(&jba->root, &jbb->root) != 0); @@ -166,8 +166,8 @@ jsonb_ne(PG_FUNCTION_ARGS) Datum jsonb_lt(PG_FUNCTION_ARGS) { - Jsonb *jba = PG_GETARG_JSONB(0); - Jsonb *jbb = PG_GETARG_JSONB(1); + Jsonb *jba = PG_GETARG_JSONB_P(0); + Jsonb *jbb = PG_GETARG_JSONB_P(1); bool res; res = (compareJsonbContainers(&jba->root, &jbb->root) < 0); @@ -180,8 +180,8 @@ jsonb_lt(PG_FUNCTION_ARGS) Datum jsonb_gt(PG_FUNCTION_ARGS) { - Jsonb *jba = PG_GETARG_JSONB(0); - Jsonb *jbb = PG_GETARG_JSONB(1); + Jsonb *jba = PG_GETARG_JSONB_P(0); + Jsonb *jbb = PG_GETARG_JSONB_P(1); bool res; res = (compareJsonbContainers(&jba->root, &jbb->root) > 0); @@ -194,8 +194,8 @@ jsonb_gt(PG_FUNCTION_ARGS) Datum jsonb_le(PG_FUNCTION_ARGS) { - Jsonb *jba = PG_GETARG_JSONB(0); - Jsonb *jbb = PG_GETARG_JSONB(1); + Jsonb *jba = PG_GETARG_JSONB_P(0); + Jsonb *jbb = PG_GETARG_JSONB_P(1); bool res; res = (compareJsonbContainers(&jba->root, &jbb->root) <= 0); @@ -208,8 +208,8 @@ jsonb_le(PG_FUNCTION_ARGS) Datum jsonb_ge(PG_FUNCTION_ARGS) { - Jsonb *jba = PG_GETARG_JSONB(0); - Jsonb *jbb = PG_GETARG_JSONB(1); + Jsonb *jba = PG_GETARG_JSONB_P(0); + Jsonb *jbb = PG_GETARG_JSONB_P(1); bool res; res = (compareJsonbContainers(&jba->root, &jbb->root) >= 0); @@ -222,8 +222,8 @@ jsonb_ge(PG_FUNCTION_ARGS) Datum jsonb_eq(PG_FUNCTION_ARGS) { - Jsonb *jba = PG_GETARG_JSONB(0); - Jsonb *jbb = PG_GETARG_JSONB(1); + Jsonb *jba = PG_GETARG_JSONB_P(0); + Jsonb *jbb = PG_GETARG_JSONB_P(1); bool res; res = (compareJsonbContainers(&jba->root, &jbb->root) == 0); @@ -236,8 +236,8 @@ jsonb_eq(PG_FUNCTION_ARGS) Datum jsonb_cmp(PG_FUNCTION_ARGS) { - Jsonb *jba = PG_GETARG_JSONB(0); - Jsonb *jbb = PG_GETARG_JSONB(1); + Jsonb *jba = PG_GETARG_JSONB_P(0); + Jsonb *jbb = PG_GETARG_JSONB_P(1); int res; res = compareJsonbContainers(&jba->root, &jbb->root); @@ -253,7 +253,7 @@ jsonb_cmp(PG_FUNCTION_ARGS) Datum jsonb_hash(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); JsonbIterator *it; JsonbValue v; 
JsonbIteratorToken r; @@ -295,7 +295,7 @@ jsonb_hash(PG_FUNCTION_ARGS) Datum jsonb_hash_extended(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); uint64 seed = PG_GETARG_INT64(1); JsonbIterator *it; JsonbValue v; @@ -311,7 +311,7 @@ jsonb_hash_extended(PG_FUNCTION_ARGS) { switch (r) { - /* Rotation is left to JsonbHashScalarValueExtended() */ + /* Rotation is left to JsonbHashScalarValueExtended() */ case WJB_BEGIN_ARRAY: hash ^= ((uint64) JB_FARRAY) << 32 | JB_FARRAY; break; diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c index 68feeb2c5b..d36fd9e929 100644 --- a/src/backend/utils/adt/jsonfuncs.c +++ b/src/backend/utils/adt/jsonfuncs.c @@ -499,7 +499,7 @@ jsonb_object_keys(PG_FUNCTION_ARGS) if (SRF_IS_FIRSTCALL()) { MemoryContext oldcontext; - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); bool skipNested = false; JsonbIterator *it; JsonbValue v; @@ -703,7 +703,7 @@ json_object_field(PG_FUNCTION_ARGS) Datum jsonb_object_field(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); text *key = PG_GETARG_TEXT_PP(1); JsonbValue *v; @@ -715,7 +715,7 @@ jsonb_object_field(PG_FUNCTION_ARGS) VARSIZE_ANY_EXHDR(key)); if (v != NULL) - PG_RETURN_JSONB(JsonbValueToJsonb(v)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(v)); PG_RETURN_NULL(); } @@ -739,7 +739,7 @@ json_object_field_text(PG_FUNCTION_ARGS) Datum jsonb_object_field_text(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); text *key = PG_GETARG_TEXT_PP(1); JsonbValue *v; @@ -805,7 +805,7 @@ json_array_element(PG_FUNCTION_ARGS) Datum jsonb_array_element(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); int element = PG_GETARG_INT32(1); JsonbValue *v; @@ -825,7 +825,7 @@ jsonb_array_element(PG_FUNCTION_ARGS) v = getIthJsonbValueFromContainer(&jb->root, element); if (v != NULL) - PG_RETURN_JSONB(JsonbValueToJsonb(v)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(v)); PG_RETURN_NULL(); } @@ -848,7 +848,7 @@ json_array_element_text(PG_FUNCTION_ARGS) Datum jsonb_array_element_text(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); int element = PG_GETARG_INT32(1); JsonbValue *v; @@ -1375,7 +1375,7 @@ jsonb_extract_path_text(PG_FUNCTION_ARGS) static Datum get_jsonb_path_all(FunctionCallInfo fcinfo, bool as_text) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); ArrayType *path = PG_GETARG_ARRAYTYPE_P(1); Jsonb *res; Datum *pathtext; @@ -1435,7 +1435,7 @@ get_jsonb_path_all(FunctionCallInfo fcinfo, bool as_text) else { /* not text mode - just hand back the jsonb */ - PG_RETURN_JSONB(jb); + PG_RETURN_JSONB_P(jb); } } @@ -1533,7 +1533,7 @@ get_jsonb_path_all(FunctionCallInfo fcinfo, bool as_text) else { /* not text mode - just hand back the jsonb */ - PG_RETURN_JSONB(res); + PG_RETURN_JSONB_P(res); } } @@ -1571,7 +1571,7 @@ json_array_length(PG_FUNCTION_ARGS) Datum jsonb_array_length(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); if (JB_ROOT_IS_SCALAR(jb)) ereport(ERROR, @@ -1661,7 +1661,7 @@ jsonb_each_text(PG_FUNCTION_ARGS) static Datum each_worker_jsonb(FunctionCallInfo fcinfo, const char *funcname, bool as_text) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); ReturnSetInfo *rsi; Tuplestorestate *tuple_store; TupleDesc tupdesc; @@ -1976,7 +1976,7 @@ static Datum elements_worker_jsonb(FunctionCallInfo fcinfo, const char 
*funcname, bool as_text) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); ReturnSetInfo *rsi; Tuplestorestate *tuple_store; TupleDesc tupdesc; @@ -2799,7 +2799,7 @@ populate_scalar(ScalarIOData *io, Oid typid, int32 typmod, JsValue *jsv) { Jsonb *jsonb = JsonbValueToJsonb(jbv); /* directly use jsonb */ - return JsonbGetDatum(jsonb); + return JsonbPGetDatum(jsonb); } /* convert jsonb to string for typio call */ else if (typid == JSONOID && jbv->type != jbvBinary) @@ -3235,7 +3235,7 @@ populate_record_worker(FunctionCallInfo fcinfo, const char *funcname, } else { - Jsonb *jb = PG_GETARG_JSONB(json_arg_num); + Jsonb *jb = PG_GETARG_JSONB_P(json_arg_num); jsv.val.jsonb = &jbv; @@ -3552,7 +3552,7 @@ populate_recordset_worker(FunctionCallInfo fcinfo, const char *funcname, } else { - Jsonb *jb = PG_GETARG_JSONB(json_arg_num); + Jsonb *jb = PG_GETARG_JSONB_P(json_arg_num); JsonbIterator *it; JsonbValue v; bool skipNested = false; @@ -3904,7 +3904,7 @@ json_strip_nulls(PG_FUNCTION_ARGS) Datum jsonb_strip_nulls(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); JsonbIterator *it; JsonbParseState *parseState = NULL; JsonbValue *res = NULL; @@ -4013,7 +4013,7 @@ addJsonbToParseState(JsonbParseState **jbps, Jsonb *jb) Datum jsonb_pretty(PG_FUNCTION_ARGS) { - Jsonb *jb = PG_GETARG_JSONB(0); + Jsonb *jb = PG_GETARG_JSONB_P(0); StringInfo str = makeStringInfo(); JsonbToCStringIndent(str, &jb->root, VARSIZE(jb)); @@ -4029,8 +4029,8 @@ jsonb_pretty(PG_FUNCTION_ARGS) Datum jsonb_concat(PG_FUNCTION_ARGS) { - Jsonb *jb1 = PG_GETARG_JSONB(0); - Jsonb *jb2 = PG_GETARG_JSONB(1); + Jsonb *jb1 = PG_GETARG_JSONB_P(0); + Jsonb *jb2 = PG_GETARG_JSONB_P(1); JsonbParseState *state = NULL; JsonbValue *res; JsonbIterator *it1, @@ -4045,9 +4045,9 @@ jsonb_concat(PG_FUNCTION_ARGS) if (JB_ROOT_IS_OBJECT(jb1) == JB_ROOT_IS_OBJECT(jb2)) { if (JB_ROOT_COUNT(jb1) == 0 && !JB_ROOT_IS_SCALAR(jb2)) - PG_RETURN_JSONB(jb2); + PG_RETURN_JSONB_P(jb2); else if (JB_ROOT_COUNT(jb2) == 0 && !JB_ROOT_IS_SCALAR(jb1)) - PG_RETURN_JSONB(jb1); + PG_RETURN_JSONB_P(jb1); } it1 = JsonbIteratorInit(&jb1->root); @@ -4057,7 +4057,7 @@ jsonb_concat(PG_FUNCTION_ARGS) Assert(res != NULL); - PG_RETURN_JSONB(JsonbValueToJsonb(res)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(res)); } @@ -4070,7 +4070,7 @@ jsonb_concat(PG_FUNCTION_ARGS) Datum jsonb_delete(PG_FUNCTION_ARGS) { - Jsonb *in = PG_GETARG_JSONB(0); + Jsonb *in = PG_GETARG_JSONB_P(0); text *key = PG_GETARG_TEXT_PP(1); char *keyptr = VARDATA_ANY(key); int keylen = VARSIZE_ANY_EXHDR(key); @@ -4087,7 +4087,7 @@ jsonb_delete(PG_FUNCTION_ARGS) errmsg("cannot delete from scalar"))); if (JB_ROOT_COUNT(in) == 0) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); it = JsonbIteratorInit(&in->root); @@ -4111,7 +4111,7 @@ jsonb_delete(PG_FUNCTION_ARGS) Assert(res != NULL); - PG_RETURN_JSONB(JsonbValueToJsonb(res)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(res)); } /* @@ -4123,7 +4123,7 @@ jsonb_delete(PG_FUNCTION_ARGS) Datum jsonb_delete_array(PG_FUNCTION_ARGS) { - Jsonb *in = PG_GETARG_JSONB(0); + Jsonb *in = PG_GETARG_JSONB_P(0); ArrayType *keys = PG_GETARG_ARRAYTYPE_P(1); Datum *keys_elems; bool *keys_nulls; @@ -4146,13 +4146,13 @@ jsonb_delete_array(PG_FUNCTION_ARGS) errmsg("cannot delete from scalar"))); if (JB_ROOT_COUNT(in) == 0) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); deconstruct_array(keys, TEXTOID, -1, false, 'i', &keys_elems, &keys_nulls, &keys_len); if (keys_len == 0) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); it = 
JsonbIteratorInit(&in->root); @@ -4197,7 +4197,7 @@ jsonb_delete_array(PG_FUNCTION_ARGS) Assert(res != NULL); - PG_RETURN_JSONB(JsonbValueToJsonb(res)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(res)); } /* @@ -4210,7 +4210,7 @@ jsonb_delete_array(PG_FUNCTION_ARGS) Datum jsonb_delete_idx(PG_FUNCTION_ARGS) { - Jsonb *in = PG_GETARG_JSONB(0); + Jsonb *in = PG_GETARG_JSONB_P(0); int idx = PG_GETARG_INT32(1); JsonbParseState *state = NULL; JsonbIterator *it; @@ -4231,7 +4231,7 @@ jsonb_delete_idx(PG_FUNCTION_ARGS) errmsg("cannot delete from object using integer index"))); if (JB_ROOT_COUNT(in) == 0) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); it = JsonbIteratorInit(&in->root); @@ -4248,7 +4248,7 @@ jsonb_delete_idx(PG_FUNCTION_ARGS) } if (idx >= n) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); pushJsonbValue(&state, r, NULL); @@ -4265,7 +4265,7 @@ jsonb_delete_idx(PG_FUNCTION_ARGS) Assert(res != NULL); - PG_RETURN_JSONB(JsonbValueToJsonb(res)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(res)); } /* @@ -4275,9 +4275,9 @@ jsonb_delete_idx(PG_FUNCTION_ARGS) Datum jsonb_set(PG_FUNCTION_ARGS) { - Jsonb *in = PG_GETARG_JSONB(0); + Jsonb *in = PG_GETARG_JSONB_P(0); ArrayType *path = PG_GETARG_ARRAYTYPE_P(1); - Jsonb *newval = PG_GETARG_JSONB(2); + Jsonb *newval = PG_GETARG_JSONB_P(2); bool create = PG_GETARG_BOOL(3); JsonbValue *res = NULL; Datum *path_elems; @@ -4297,13 +4297,13 @@ jsonb_set(PG_FUNCTION_ARGS) errmsg("cannot set path in scalar"))); if (JB_ROOT_COUNT(in) == 0 && !create) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); deconstruct_array(path, TEXTOID, -1, false, 'i', &path_elems, &path_nulls, &path_len); if (path_len == 0) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); it = JsonbIteratorInit(&in->root); @@ -4312,7 +4312,7 @@ jsonb_set(PG_FUNCTION_ARGS) Assert(res != NULL); - PG_RETURN_JSONB(JsonbValueToJsonb(res)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(res)); } @@ -4322,7 +4322,7 @@ jsonb_set(PG_FUNCTION_ARGS) Datum jsonb_delete_path(PG_FUNCTION_ARGS) { - Jsonb *in = PG_GETARG_JSONB(0); + Jsonb *in = PG_GETARG_JSONB_P(0); ArrayType *path = PG_GETARG_ARRAYTYPE_P(1); JsonbValue *res = NULL; Datum *path_elems; @@ -4342,13 +4342,13 @@ jsonb_delete_path(PG_FUNCTION_ARGS) errmsg("cannot delete path in scalar"))); if (JB_ROOT_COUNT(in) == 0) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); deconstruct_array(path, TEXTOID, -1, false, 'i', &path_elems, &path_nulls, &path_len); if (path_len == 0) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); it = JsonbIteratorInit(&in->root); @@ -4357,7 +4357,7 @@ jsonb_delete_path(PG_FUNCTION_ARGS) Assert(res != NULL); - PG_RETURN_JSONB(JsonbValueToJsonb(res)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(res)); } /* @@ -4367,9 +4367,9 @@ jsonb_delete_path(PG_FUNCTION_ARGS) Datum jsonb_insert(PG_FUNCTION_ARGS) { - Jsonb *in = PG_GETARG_JSONB(0); + Jsonb *in = PG_GETARG_JSONB_P(0); ArrayType *path = PG_GETARG_ARRAYTYPE_P(1); - Jsonb *newval = PG_GETARG_JSONB(2); + Jsonb *newval = PG_GETARG_JSONB_P(2); bool after = PG_GETARG_BOOL(3); JsonbValue *res = NULL; Datum *path_elems; @@ -4392,7 +4392,7 @@ jsonb_insert(PG_FUNCTION_ARGS) &path_elems, &path_nulls, &path_len); if (path_len == 0) - PG_RETURN_JSONB(in); + PG_RETURN_JSONB_P(in); it = JsonbIteratorInit(&in->root); @@ -4401,7 +4401,7 @@ jsonb_insert(PG_FUNCTION_ARGS) Assert(res != NULL); - PG_RETURN_JSONB(JsonbValueToJsonb(res)); + PG_RETURN_JSONB_P(JsonbValueToJsonb(res)); } /* diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c index dae505159e..d0aa33c010 100644 --- 
a/src/backend/utils/adt/rangetypes.c +++ b/src/backend/utils/adt/rangetypes.c @@ -115,13 +115,13 @@ range_in(PG_FUNCTION_ARGS) /* serialize and canonicalize */ range = make_range(cache->typcache, &lower, &upper, flags & RANGE_EMPTY); - PG_RETURN_RANGE(range); + PG_RETURN_RANGE_P(range); } Datum range_out(PG_FUNCTION_ARGS) { - RangeType *range = PG_GETARG_RANGE(0); + RangeType *range = PG_GETARG_RANGE_P(0); char *output_str; RangeIOData *cache; char flags; @@ -238,13 +238,13 @@ range_recv(PG_FUNCTION_ARGS) /* serialize and canonicalize */ range = make_range(cache->typcache, &lower, &upper, flags & RANGE_EMPTY); - PG_RETURN_RANGE(range); + PG_RETURN_RANGE_P(range); } Datum range_send(PG_FUNCTION_ARGS) { - RangeType *range = PG_GETARG_RANGE(0); + RangeType *range = PG_GETARG_RANGE_P(0); StringInfo buf = makeStringInfo(); RangeIOData *cache; char flags; @@ -381,7 +381,7 @@ range_constructor2(PG_FUNCTION_ARGS) range = make_range(typcache, &lower, &upper, false); - PG_RETURN_RANGE(range); + PG_RETURN_RANGE_P(range); } /* Construct general range value from three arguments */ @@ -418,7 +418,7 @@ range_constructor3(PG_FUNCTION_ARGS) range = make_range(typcache, &lower, &upper, false); - PG_RETURN_RANGE(range); + PG_RETURN_RANGE_P(range); } @@ -428,7 +428,7 @@ range_constructor3(PG_FUNCTION_ARGS) Datum range_lower(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); + RangeType *r1 = PG_GETARG_RANGE_P(0); TypeCacheEntry *typcache; RangeBound lower; RangeBound upper; @@ -449,7 +449,7 @@ range_lower(PG_FUNCTION_ARGS) Datum range_upper(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); + RangeType *r1 = PG_GETARG_RANGE_P(0); TypeCacheEntry *typcache; RangeBound lower; RangeBound upper; @@ -473,7 +473,7 @@ range_upper(PG_FUNCTION_ARGS) Datum range_empty(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); + RangeType *r1 = PG_GETARG_RANGE_P(0); char flags = range_get_flags(r1); PG_RETURN_BOOL(flags & RANGE_EMPTY); @@ -483,7 +483,7 @@ range_empty(PG_FUNCTION_ARGS) Datum range_lower_inc(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); + RangeType *r1 = PG_GETARG_RANGE_P(0); char flags = range_get_flags(r1); PG_RETURN_BOOL(flags & RANGE_LB_INC); @@ -493,7 +493,7 @@ range_lower_inc(PG_FUNCTION_ARGS) Datum range_upper_inc(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); + RangeType *r1 = PG_GETARG_RANGE_P(0); char flags = range_get_flags(r1); PG_RETURN_BOOL(flags & RANGE_UB_INC); @@ -503,7 +503,7 @@ range_upper_inc(PG_FUNCTION_ARGS) Datum range_lower_inf(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); + RangeType *r1 = PG_GETARG_RANGE_P(0); char flags = range_get_flags(r1); PG_RETURN_BOOL(flags & RANGE_LB_INF); @@ -513,7 +513,7 @@ range_lower_inf(PG_FUNCTION_ARGS) Datum range_upper_inf(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); + RangeType *r1 = PG_GETARG_RANGE_P(0); char flags = range_get_flags(r1); PG_RETURN_BOOL(flags & RANGE_UB_INF); @@ -526,7 +526,7 @@ range_upper_inf(PG_FUNCTION_ARGS) Datum range_contains_elem(PG_FUNCTION_ARGS) { - RangeType *r = PG_GETARG_RANGE(0); + RangeType *r = PG_GETARG_RANGE_P(0); Datum val = PG_GETARG_DATUM(1); TypeCacheEntry *typcache; @@ -540,7 +540,7 @@ Datum elem_contained_by_range(PG_FUNCTION_ARGS) { Datum val = PG_GETARG_DATUM(0); - RangeType *r = PG_GETARG_RANGE(1); + RangeType *r = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r)); @@ -587,8 +587,8 @@ range_eq_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2) Datum range_eq(PG_FUNCTION_ARGS) 
{ - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -607,8 +607,8 @@ range_ne_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2) Datum range_ne(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -620,8 +620,8 @@ range_ne(PG_FUNCTION_ARGS) Datum range_contains(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -633,8 +633,8 @@ range_contains(PG_FUNCTION_ARGS) Datum range_contained_by(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -671,8 +671,8 @@ range_before_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2) Datum range_before(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -709,8 +709,8 @@ range_after_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2) Datum range_after(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -810,8 +810,8 @@ range_adjacent_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2) Datum range_adjacent(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -856,8 +856,8 @@ range_overlaps_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2) Datum range_overlaps(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -897,8 +897,8 @@ range_overleft_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2) Datum range_overleft(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); @@ -938,8 +938,8 @@ range_overright_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2) Datum range_overright(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, 
RangeTypeGetOid(r1)); @@ -954,8 +954,8 @@ range_overright(PG_FUNCTION_ARGS) Datum range_minus(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; RangeBound lower1, lower2; @@ -979,7 +979,7 @@ range_minus(PG_FUNCTION_ARGS) /* if either is empty, r1 is the correct answer */ if (empty1 || empty2) - PG_RETURN_RANGE(r1); + PG_RETURN_RANGE_P(r1); cmp_l1l2 = range_cmp_bounds(typcache, &lower1, &lower2); cmp_l1u2 = range_cmp_bounds(typcache, &lower1, &upper2); @@ -992,23 +992,23 @@ range_minus(PG_FUNCTION_ARGS) errmsg("result of range difference would not be contiguous"))); if (cmp_l1u2 > 0 || cmp_u1l2 < 0) - PG_RETURN_RANGE(r1); + PG_RETURN_RANGE_P(r1); if (cmp_l1l2 >= 0 && cmp_u1u2 <= 0) - PG_RETURN_RANGE(make_empty_range(typcache)); + PG_RETURN_RANGE_P(make_empty_range(typcache)); if (cmp_l1l2 <= 0 && cmp_u1l2 >= 0 && cmp_u1u2 <= 0) { lower2.inclusive = !lower2.inclusive; lower2.lower = false; /* it will become the upper bound */ - PG_RETURN_RANGE(make_range(typcache, &lower1, &lower2, false)); + PG_RETURN_RANGE_P(make_range(typcache, &lower1, &lower2, false)); } if (cmp_l1l2 >= 0 && cmp_u1u2 >= 0 && cmp_l1u2 <= 0) { upper2.inclusive = !upper2.inclusive; upper2.lower = true; /* it will become the lower bound */ - PG_RETURN_RANGE(make_range(typcache, &upper2, &upper1, false)); + PG_RETURN_RANGE_P(make_range(typcache, &upper2, &upper1, false)); } elog(ERROR, "unexpected case in range_minus"); @@ -1068,13 +1068,13 @@ range_union_internal(TypeCacheEntry *typcache, RangeType *r1, RangeType *r2, Datum range_union(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); - PG_RETURN_RANGE(range_union_internal(typcache, r1, r2, true)); + PG_RETURN_RANGE_P(range_union_internal(typcache, r1, r2, true)); } /* @@ -1084,21 +1084,21 @@ range_union(PG_FUNCTION_ARGS) Datum range_merge(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; typcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1)); - PG_RETURN_RANGE(range_union_internal(typcache, r1, r2, false)); + PG_RETURN_RANGE_P(range_union_internal(typcache, r1, r2, false)); } /* set intersection */ Datum range_intersect(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; RangeBound lower1, lower2; @@ -1119,7 +1119,7 @@ range_intersect(PG_FUNCTION_ARGS) range_deserialize(typcache, r2, &lower2, &upper2, &empty2); if (empty1 || empty2 || !DatumGetBool(range_overlaps(fcinfo))) - PG_RETURN_RANGE(make_empty_range(typcache)); + PG_RETURN_RANGE_P(make_empty_range(typcache)); if (range_cmp_bounds(typcache, &lower1, &lower2) >= 0) result_lower = &lower1; @@ -1131,7 +1131,7 @@ range_intersect(PG_FUNCTION_ARGS) else result_upper = &upper2; - PG_RETURN_RANGE(make_range(typcache, result_lower, result_upper, false)); + PG_RETURN_RANGE_P(make_range(typcache, result_lower, result_upper, false)); } /* Btree support */ @@ -1140,8 +1140,8 @@ range_intersect(PG_FUNCTION_ARGS) Datum range_cmp(PG_FUNCTION_ARGS) { - RangeType *r1 = 
PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); TypeCacheEntry *typcache; RangeBound lower1, lower2; @@ -1221,7 +1221,7 @@ range_gt(PG_FUNCTION_ARGS) Datum hash_range(PG_FUNCTION_ARGS) { - RangeType *r = PG_GETARG_RANGE(0); + RangeType *r = PG_GETARG_RANGE_P(0); uint32 result; TypeCacheEntry *typcache; TypeCacheEntry *scache; @@ -1287,7 +1287,7 @@ hash_range(PG_FUNCTION_ARGS) Datum hash_range_extended(PG_FUNCTION_ARGS) { - RangeType *r = PG_GETARG_RANGE(0); + RangeType *r = PG_GETARG_RANGE_P(0); Datum seed = PG_GETARG_DATUM(1); uint64 result; TypeCacheEntry *typcache; @@ -1355,7 +1355,7 @@ hash_range_extended(PG_FUNCTION_ARGS) Datum int4range_canonical(PG_FUNCTION_ARGS) { - RangeType *r = PG_GETARG_RANGE(0); + RangeType *r = PG_GETARG_RANGE_P(0); TypeCacheEntry *typcache; RangeBound lower; RangeBound upper; @@ -1366,7 +1366,7 @@ int4range_canonical(PG_FUNCTION_ARGS) range_deserialize(typcache, r, &lower, &upper, &empty); if (empty) - PG_RETURN_RANGE(r); + PG_RETURN_RANGE_P(r); if (!lower.infinite && !lower.inclusive) { @@ -1380,13 +1380,13 @@ int4range_canonical(PG_FUNCTION_ARGS) upper.inclusive = false; } - PG_RETURN_RANGE(range_serialize(typcache, &lower, &upper, false)); + PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false)); } Datum int8range_canonical(PG_FUNCTION_ARGS) { - RangeType *r = PG_GETARG_RANGE(0); + RangeType *r = PG_GETARG_RANGE_P(0); TypeCacheEntry *typcache; RangeBound lower; RangeBound upper; @@ -1397,7 +1397,7 @@ int8range_canonical(PG_FUNCTION_ARGS) range_deserialize(typcache, r, &lower, &upper, &empty); if (empty) - PG_RETURN_RANGE(r); + PG_RETURN_RANGE_P(r); if (!lower.infinite && !lower.inclusive) { @@ -1411,13 +1411,13 @@ int8range_canonical(PG_FUNCTION_ARGS) upper.inclusive = false; } - PG_RETURN_RANGE(range_serialize(typcache, &lower, &upper, false)); + PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false)); } Datum daterange_canonical(PG_FUNCTION_ARGS) { - RangeType *r = PG_GETARG_RANGE(0); + RangeType *r = PG_GETARG_RANGE_P(0); TypeCacheEntry *typcache; RangeBound lower; RangeBound upper; @@ -1428,7 +1428,7 @@ daterange_canonical(PG_FUNCTION_ARGS) range_deserialize(typcache, r, &lower, &upper, &empty); if (empty) - PG_RETURN_RANGE(r); + PG_RETURN_RANGE_P(r); if (!lower.infinite && !lower.inclusive) { @@ -1442,7 +1442,7 @@ daterange_canonical(PG_FUNCTION_ARGS) upper.inclusive = false; } - PG_RETURN_RANGE(range_serialize(typcache, &lower, &upper, false)); + PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false)); } /* @@ -1799,8 +1799,8 @@ make_range(TypeCacheEntry *typcache, RangeBound *lower, RangeBound *upper, /* no need to call canonical on empty ranges ... 
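 * (the canonical function, e.g. int4range_canonical above, rewrites a
 * discrete range's bounds into the inclusive-lower/exclusive-upper form,
 * so the call below may replace "range" with a new value)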
*/ if (OidIsValid(typcache->rng_canonical_finfo.fn_oid) && !RangeIsEmpty(range)) - range = DatumGetRangeType(FunctionCall1(&typcache->rng_canonical_finfo, - RangeTypeGetDatum(range))); + range = DatumGetRangeTypeP(FunctionCall1(&typcache->rng_canonical_finfo, + RangeTypePGetDatum(range))); return range; } diff --git a/src/backend/utils/adt/rangetypes_gist.c b/src/backend/utils/adt/rangetypes_gist.c index a1f4f4d372..cb2d5a3b73 100644 --- a/src/backend/utils/adt/rangetypes_gist.c +++ b/src/backend/utils/adt/rangetypes_gist.c @@ -177,7 +177,7 @@ range_gist_consistent(PG_FUNCTION_ARGS) /* Oid subtype = PG_GETARG_OID(3); */ bool *recheck = (bool *) PG_GETARG_POINTER(4); - RangeType *key = DatumGetRangeType(entry->key); + RangeType *key = DatumGetRangeTypeP(entry->key); TypeCacheEntry *typcache; /* All operators served by this function are exact */ @@ -203,17 +203,17 @@ range_gist_union(PG_FUNCTION_ARGS) TypeCacheEntry *typcache; int i; - result_range = DatumGetRangeType(ent[0].key); + result_range = DatumGetRangeTypeP(ent[0].key); typcache = range_get_typcache(fcinfo, RangeTypeGetOid(result_range)); for (i = 1; i < entryvec->n; i++) { result_range = range_super_union(typcache, result_range, - DatumGetRangeType(ent[i].key)); + DatumGetRangeTypeP(ent[i].key)); } - PG_RETURN_RANGE(result_range); + PG_RETURN_RANGE_P(result_range); } /* compress, decompress, fetch are no-ops */ @@ -257,8 +257,8 @@ range_gist_penalty(PG_FUNCTION_ARGS) GISTENTRY *origentry = (GISTENTRY *) PG_GETARG_POINTER(0); GISTENTRY *newentry = (GISTENTRY *) PG_GETARG_POINTER(1); float *penalty = (float *) PG_GETARG_POINTER(2); - RangeType *orig = DatumGetRangeType(origentry->key); - RangeType *new = DatumGetRangeType(newentry->key); + RangeType *orig = DatumGetRangeTypeP(origentry->key); + RangeType *new = DatumGetRangeTypeP(newentry->key); TypeCacheEntry *typcache; bool has_subtype_diff; RangeBound orig_lower, @@ -526,7 +526,7 @@ range_gist_picksplit(PG_FUNCTION_ARGS) int total_count; /* use first item to look up range type's info */ - pred_left = DatumGetRangeType(entryvec->vector[FirstOffsetNumber].key); + pred_left = DatumGetRangeTypeP(entryvec->vector[FirstOffsetNumber].key); typcache = range_get_typcache(fcinfo, RangeTypeGetOid(pred_left)); maxoff = entryvec->n - 1; @@ -540,7 +540,7 @@ range_gist_picksplit(PG_FUNCTION_ARGS) memset(count_in_classes, 0, sizeof(count_in_classes)); for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i)) { - RangeType *range = DatumGetRangeType(entryvec->vector[i].key); + RangeType *range = DatumGetRangeTypeP(entryvec->vector[i].key); count_in_classes[get_gist_range_class(range)]++; } @@ -670,8 +670,8 @@ range_gist_picksplit(PG_FUNCTION_ARGS) Datum range_gist_same(PG_FUNCTION_ARGS) { - RangeType *r1 = PG_GETARG_RANGE(0); - RangeType *r2 = PG_GETARG_RANGE(1); + RangeType *r1 = PG_GETARG_RANGE_P(0); + RangeType *r2 = PG_GETARG_RANGE_P(1); bool *result = (bool *) PG_GETARG_POINTER(2); /* @@ -787,39 +787,39 @@ range_gist_consistent_int(TypeCacheEntry *typcache, StrategyNumber strategy, switch (strategy) { case RANGESTRAT_BEFORE: - if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeType(query))) + if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeTypeP(query))) return false; return (!range_overright_internal(typcache, key, - DatumGetRangeType(query))); + DatumGetRangeTypeP(query))); case RANGESTRAT_OVERLEFT: - if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeType(query))) + if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeTypeP(query))) return false; return 
(!range_after_internal(typcache, key, - DatumGetRangeType(query))); + DatumGetRangeTypeP(query))); case RANGESTRAT_OVERLAPS: return range_overlaps_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_OVERRIGHT: - if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeType(query))) + if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeTypeP(query))) return false; return (!range_before_internal(typcache, key, - DatumGetRangeType(query))); + DatumGetRangeTypeP(query))); case RANGESTRAT_AFTER: - if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeType(query))) + if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeTypeP(query))) return false; return (!range_overleft_internal(typcache, key, - DatumGetRangeType(query))); + DatumGetRangeTypeP(query))); case RANGESTRAT_ADJACENT: - if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeType(query))) + if (RangeIsEmpty(key) || RangeIsEmpty(DatumGetRangeTypeP(query))) return false; if (range_adjacent_internal(typcache, key, - DatumGetRangeType(query))) + DatumGetRangeTypeP(query))) return true; return range_overlaps_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_CONTAINS: return range_contains_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_CONTAINED_BY: /* @@ -830,7 +830,7 @@ range_gist_consistent_int(TypeCacheEntry *typcache, StrategyNumber strategy, if (RangeIsOrContainsEmpty(key)) return true; return range_overlaps_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_CONTAINS_ELEM: return range_contains_elem_internal(typcache, key, query); case RANGESTRAT_EQ: @@ -839,10 +839,10 @@ range_gist_consistent_int(TypeCacheEntry *typcache, StrategyNumber strategy, * If query is empty, descend only if the key is or contains any * empty ranges. Otherwise, descend if key contains query. 
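 * (Equality implies containment: a leaf range equal to the query must
 * lie within this subtree's union key, so the containment test below
 * cannot miss a match.)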
*/ - if (RangeIsEmpty(DatumGetRangeType(query))) + if (RangeIsEmpty(DatumGetRangeTypeP(query))) return RangeIsOrContainsEmpty(key); return range_contains_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); default: elog(ERROR, "unrecognized range strategy: %d", strategy); return false; /* keep compiler quiet */ @@ -860,32 +860,32 @@ range_gist_consistent_leaf(TypeCacheEntry *typcache, StrategyNumber strategy, { case RANGESTRAT_BEFORE: return range_before_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_OVERLEFT: return range_overleft_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_OVERLAPS: return range_overlaps_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_OVERRIGHT: return range_overright_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_AFTER: return range_after_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_ADJACENT: return range_adjacent_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_CONTAINS: return range_contains_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_CONTAINED_BY: return range_contained_by_internal(typcache, key, - DatumGetRangeType(query)); + DatumGetRangeTypeP(query)); case RANGESTRAT_CONTAINS_ELEM: return range_contains_elem_internal(typcache, key, query); case RANGESTRAT_EQ: - return range_eq_internal(typcache, key, DatumGetRangeType(query)); + return range_eq_internal(typcache, key, DatumGetRangeTypeP(query)); default: elog(ERROR, "unrecognized range strategy: %d", strategy); return false; /* keep compiler quiet */ @@ -915,7 +915,7 @@ range_gist_fallback_split(TypeCacheEntry *typcache, v->spl_nright = 0; for (i = FirstOffsetNumber; i <= maxoff; i++) { - RangeType *range = DatumGetRangeType(entryvec->vector[i].key); + RangeType *range = DatumGetRangeTypeP(entryvec->vector[i].key); if (i < split_idx) PLACE_LEFT(range, i); @@ -923,8 +923,8 @@ range_gist_fallback_split(TypeCacheEntry *typcache, PLACE_RIGHT(range, i); } - v->spl_ldatum = RangeTypeGetDatum(left_range); - v->spl_rdatum = RangeTypeGetDatum(right_range); + v->spl_ldatum = RangeTypePGetDatum(left_range); + v->spl_rdatum = RangeTypePGetDatum(right_range); } /* @@ -951,7 +951,7 @@ range_gist_class_split(TypeCacheEntry *typcache, v->spl_nright = 0; for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i)) { - RangeType *range = DatumGetRangeType(entryvec->vector[i].key); + RangeType *range = DatumGetRangeTypeP(entryvec->vector[i].key); int class; /* Get class of range */ @@ -967,8 +967,8 @@ range_gist_class_split(TypeCacheEntry *typcache, } } - v->spl_ldatum = RangeTypeGetDatum(left_range); - v->spl_rdatum = RangeTypeGetDatum(right_range); + v->spl_ldatum = RangeTypePGetDatum(left_range); + v->spl_rdatum = RangeTypePGetDatum(right_range); } /* @@ -1000,7 +1000,7 @@ range_gist_single_sorting_split(TypeCacheEntry *typcache, */ for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i)) { - RangeType *range = DatumGetRangeType(entryvec->vector[i].key); + RangeType *range = DatumGetRangeTypeP(entryvec->vector[i].key); RangeBound bound2; bool empty; @@ -1026,7 +1026,7 @@ range_gist_single_sorting_split(TypeCacheEntry *typcache, for (i = 0; i < maxoff; i++) { int idx = sortItems[i].index; - RangeType *range = 
DatumGetRangeType(entryvec->vector[idx].key); + RangeType *range = DatumGetRangeTypeP(entryvec->vector[idx].key); if (i < split_idx) PLACE_LEFT(range, idx); @@ -1034,8 +1034,8 @@ range_gist_single_sorting_split(TypeCacheEntry *typcache, PLACE_RIGHT(range, idx); } - v->spl_ldatum = RangeTypeGetDatum(left_range); - v->spl_rdatum = RangeTypeGetDatum(right_range); + v->spl_ldatum = RangeTypePGetDatum(left_range); + v->spl_rdatum = RangeTypePGetDatum(right_range); } /* @@ -1102,7 +1102,7 @@ range_gist_double_sorting_split(TypeCacheEntry *typcache, /* Fill arrays of bounds */ for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i)) { - RangeType *range = DatumGetRangeType(entryvec->vector[i].key); + RangeType *range = DatumGetRangeTypeP(entryvec->vector[i].key); bool empty; range_deserialize(typcache, range, @@ -1277,7 +1277,7 @@ range_gist_double_sorting_split(TypeCacheEntry *typcache, /* * Get upper and lower bounds along selected axis. */ - range = DatumGetRangeType(entryvec->vector[i].key); + range = DatumGetRangeTypeP(entryvec->vector[i].key); range_deserialize(typcache, range, &lower, &upper, &empty); @@ -1347,7 +1347,7 @@ range_gist_double_sorting_split(TypeCacheEntry *typcache, { int idx = common_entries[i].index; - range = DatumGetRangeType(entryvec->vector[idx].key); + range = DatumGetRangeTypeP(entryvec->vector[idx].key); /* * Check if we have to place this entry in either group to achieve diff --git a/src/backend/utils/adt/rangetypes_selfuncs.c b/src/backend/utils/adt/rangetypes_selfuncs.c index ed13c27fcb..cba8974dbe 100644 --- a/src/backend/utils/adt/rangetypes_selfuncs.c +++ b/src/backend/utils/adt/rangetypes_selfuncs.c @@ -203,7 +203,7 @@ rangesel(PG_FUNCTION_ARGS) /* Both sides are the same range type */ typcache = range_get_typcache(fcinfo, vardata.vartype); - constrange = DatumGetRangeType(((Const *) other)->constvalue); + constrange = DatumGetRangeTypeP(((Const *) other)->constvalue); } /* @@ -406,7 +406,7 @@ calc_hist_selectivity(TypeCacheEntry *typcache, VariableStatData *vardata, hist_upper = (RangeBound *) palloc(sizeof(RangeBound) * nhist); for (i = 0; i < nhist; i++) { - range_deserialize(typcache, DatumGetRangeType(hslot.values[i]), + range_deserialize(typcache, DatumGetRangeTypeP(hslot.values[i]), &hist_lower[i], &hist_upper[i], &empty); /* The histogram should not contain any empty ranges */ if (empty) diff --git a/src/backend/utils/adt/rangetypes_spgist.c b/src/backend/utils/adt/rangetypes_spgist.c index e82c4e1a9e..d934105d32 100644 --- a/src/backend/utils/adt/rangetypes_spgist.c +++ b/src/backend/utils/adt/rangetypes_spgist.c @@ -132,7 +132,7 @@ spg_range_quad_choose(PG_FUNCTION_ARGS) { spgChooseIn *in = (spgChooseIn *) PG_GETARG_POINTER(0); spgChooseOut *out = (spgChooseOut *) PG_GETARG_POINTER(1); - RangeType *inRange = DatumGetRangeType(in->datum), + RangeType *inRange = DatumGetRangeTypeP(in->datum), *centroid; int16 quadrant; TypeCacheEntry *typcache; @@ -142,7 +142,7 @@ spg_range_quad_choose(PG_FUNCTION_ARGS) out->resultType = spgMatchNode; /* nodeN will be set by core */ out->result.matchNode.levelAdd = 0; - out->result.matchNode.restDatum = RangeTypeGetDatum(inRange); + out->result.matchNode.restDatum = RangeTypePGetDatum(inRange); PG_RETURN_VOID(); } @@ -161,11 +161,11 @@ spg_range_quad_choose(PG_FUNCTION_ARGS) else out->result.matchNode.nodeN = 1; out->result.matchNode.levelAdd = 1; - out->result.matchNode.restDatum = RangeTypeGetDatum(inRange); + out->result.matchNode.restDatum = RangeTypePGetDatum(inRange); PG_RETURN_VOID(); } - centroid = 
DatumGetRangeType(in->prefixDatum); + centroid = DatumGetRangeTypeP(in->prefixDatum); quadrant = getQuadrant(typcache, centroid, inRange); Assert(quadrant <= in->nNodes); @@ -174,7 +174,7 @@ spg_range_quad_choose(PG_FUNCTION_ARGS) out->resultType = spgMatchNode; out->result.matchNode.nodeN = quadrant - 1; out->result.matchNode.levelAdd = 1; - out->result.matchNode.restDatum = RangeTypeGetDatum(inRange); + out->result.matchNode.restDatum = RangeTypePGetDatum(inRange); PG_RETURN_VOID(); } @@ -213,7 +213,7 @@ spg_range_quad_picksplit(PG_FUNCTION_ARGS) *upperBounds; typcache = range_get_typcache(fcinfo, - RangeTypeGetOid(DatumGetRangeType(in->datums[0]))); + RangeTypeGetOid(DatumGetRangeTypeP(in->datums[0]))); /* Allocate memory for bounds */ lowerBounds = palloc(sizeof(RangeBound) * in->nTuples); @@ -223,7 +223,7 @@ spg_range_quad_picksplit(PG_FUNCTION_ARGS) /* Deserialize bounds of ranges, count non-empty ranges */ for (i = 0; i < in->nTuples; i++) { - range_deserialize(typcache, DatumGetRangeType(in->datums[i]), + range_deserialize(typcache, DatumGetRangeTypeP(in->datums[i]), &lowerBounds[j], &upperBounds[j], &empty); if (!empty) j++; @@ -249,9 +249,9 @@ spg_range_quad_picksplit(PG_FUNCTION_ARGS) /* Place all ranges into node 0 */ for (i = 0; i < in->nTuples; i++) { - RangeType *range = DatumGetRangeType(in->datums[i]); + RangeType *range = DatumGetRangeTypeP(in->datums[i]); - out->leafTupleDatums[i] = RangeTypeGetDatum(range); + out->leafTupleDatums[i] = RangeTypePGetDatum(range); out->mapTuplesToNodes[i] = 0; } PG_RETURN_VOID(); @@ -267,7 +267,7 @@ spg_range_quad_picksplit(PG_FUNCTION_ARGS) centroid = range_serialize(typcache, &lowerBounds[nonEmptyCount / 2], &upperBounds[nonEmptyCount / 2], false); out->hasPrefix = true; - out->prefixDatum = RangeTypeGetDatum(centroid); + out->prefixDatum = RangeTypePGetDatum(centroid); /* Create node for empty ranges only if it is a root node */ out->nNodes = (in->level == 0) ? 5 : 4; @@ -282,10 +282,10 @@ spg_range_quad_picksplit(PG_FUNCTION_ARGS) */ for (i = 0; i < in->nTuples; i++) { - RangeType *range = DatumGetRangeType(in->datums[i]); + RangeType *range = DatumGetRangeTypeP(in->datums[i]); int16 quadrant = getQuadrant(typcache, centroid, range); - out->leafTupleDatums[i] = RangeTypeGetDatum(range); + out->leafTupleDatums[i] = RangeTypePGetDatum(range); out->mapTuplesToNodes[i] = quadrant - 1; } @@ -347,7 +347,7 @@ spg_range_quad_inner_consistent(PG_FUNCTION_ARGS) */ if (strategy != RANGESTRAT_CONTAINS_ELEM) empty = RangeIsEmpty( - DatumGetRangeType(in->scankeys[i].sk_argument)); + DatumGetRangeTypeP(in->scankeys[i].sk_argument)); else empty = false; @@ -415,9 +415,9 @@ spg_range_quad_inner_consistent(PG_FUNCTION_ARGS) RangeType *centroid; /* This node has a centroid. Fetch it. 
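 * (The centroid is the prefix datum that spg_range_quad_picksplit stored
 * for this inner node, built from the middle lower and upper bounds of
 * the non-empty input ranges.)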
*/ - centroid = DatumGetRangeType(in->prefixDatum); + centroid = DatumGetRangeTypeP(in->prefixDatum); typcache = range_get_typcache(fcinfo, - RangeTypeGetOid(DatumGetRangeType(centroid))); + RangeTypeGetOid(DatumGetRangeTypeP(centroid))); range_deserialize(typcache, centroid, ¢roidLower, ¢roidUpper, ¢roidEmpty); @@ -482,7 +482,7 @@ spg_range_quad_inner_consistent(PG_FUNCTION_ARGS) } else { - range = DatumGetRangeType(in->scankeys[i].sk_argument); + range = DatumGetRangeTypeP(in->scankeys[i].sk_argument); range_deserialize(typcache, range, &lower, &upper, &empty); } @@ -558,7 +558,7 @@ spg_range_quad_inner_consistent(PG_FUNCTION_ARGS) */ if (in->traversalValue != (Datum) 0) { - prevCentroid = DatumGetRangeType(in->traversalValue); + prevCentroid = DatumGetRangeTypeP(in->traversalValue); range_deserialize(typcache, prevCentroid, &prevLower, &prevUpper, &prevEmpty); } @@ -921,7 +921,7 @@ spg_range_quad_leaf_consistent(PG_FUNCTION_ARGS) { spgLeafConsistentIn *in = (spgLeafConsistentIn *) PG_GETARG_POINTER(0); spgLeafConsistentOut *out = (spgLeafConsistentOut *) PG_GETARG_POINTER(1); - RangeType *leafRange = DatumGetRangeType(in->leafDatum); + RangeType *leafRange = DatumGetRangeTypeP(in->leafDatum); TypeCacheEntry *typcache; bool res; int i; @@ -945,35 +945,35 @@ spg_range_quad_leaf_consistent(PG_FUNCTION_ARGS) { case RANGESTRAT_BEFORE: res = range_before_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; case RANGESTRAT_OVERLEFT: res = range_overleft_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; case RANGESTRAT_OVERLAPS: res = range_overlaps_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; case RANGESTRAT_OVERRIGHT: res = range_overright_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; case RANGESTRAT_AFTER: res = range_after_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; case RANGESTRAT_ADJACENT: res = range_adjacent_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; case RANGESTRAT_CONTAINS: res = range_contains_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; case RANGESTRAT_CONTAINED_BY: res = range_contained_by_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; case RANGESTRAT_CONTAINS_ELEM: res = range_contains_elem_internal(typcache, leafRange, @@ -981,7 +981,7 @@ spg_range_quad_leaf_consistent(PG_FUNCTION_ARGS) break; case RANGESTRAT_EQ: res = range_eq_internal(typcache, leafRange, - DatumGetRangeType(keyDatum)); + DatumGetRangeTypeP(keyDatum)); break; default: elog(ERROR, "unrecognized range strategy: %d", diff --git a/src/backend/utils/adt/rangetypes_typanalyze.c b/src/backend/utils/adt/rangetypes_typanalyze.c index 324bbe48e5..3efd982d1b 100644 --- a/src/backend/utils/adt/rangetypes_typanalyze.c +++ b/src/backend/utils/adt/rangetypes_typanalyze.c @@ -144,7 +144,7 @@ compute_range_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc, total_width += VARSIZE_ANY(DatumGetPointer(value)); /* Get range and deserialize it for further analysis. 
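 * (DatumGetRangeTypeP goes through PG_DETOAST_DATUM, so "range" may be a
 * freshly palloc'd copy rather than the original datum.)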
*/ - range = DatumGetRangeType(value); + range = DatumGetRangeTypeP(value); range_deserialize(typcache, range, &lower, &upper, &empty); if (!empty) diff --git a/src/backend/utils/adt/tsgistidx.c b/src/backend/utils/adt/tsgistidx.c index d8b86f6393..732d87f22f 100644 --- a/src/backend/utils/adt/tsgistidx.c +++ b/src/backend/utils/adt/tsgistidx.c @@ -110,7 +110,7 @@ static int outbuf_maxlen = 0; Datum gtsvectorout(PG_FUNCTION_ARGS) { - SignTSVector *key = (SignTSVector *) DatumGetPointer(PG_DETOAST_DATUM(PG_GETARG_POINTER(0))); + SignTSVector *key = (SignTSVector *) PG_DETOAST_DATUM(PG_GETARG_POINTER(0)); char *outbuf; if (outbuf_maxlen == 0) @@ -273,7 +273,7 @@ Datum gtsvector_decompress(PG_FUNCTION_ARGS) { GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - SignTSVector *key = (SignTSVector *) DatumGetPointer(PG_DETOAST_DATUM(entry->key)); + SignTSVector *key = (SignTSVector *) PG_DETOAST_DATUM(entry->key); if (key != (SignTSVector *) DatumGetPointer(entry->key)) { diff --git a/src/include/utils/array.h b/src/include/utils/array.h index 61a67a21e3..d6d3c582b6 100644 --- a/src/include/utils/array.h +++ b/src/include/utils/array.h @@ -252,7 +252,7 @@ typedef struct ArrayIteratorData *ArrayIterator; #define PG_RETURN_EXPANDED_ARRAY(x) PG_RETURN_DATUM(EOHPGetRWDatum(&(x)->hdr)) /* fmgr macros for AnyArrayType (ie, get either varlena or expanded form) */ -#define PG_GETARG_ANY_ARRAY(n) DatumGetAnyArray(PG_GETARG_DATUM(n)) +#define PG_GETARG_ANY_ARRAY_P(n) DatumGetAnyArrayP(PG_GETARG_DATUM(n)) /* * Access macros for varlena array header fields. @@ -440,7 +440,7 @@ extern Datum expand_array(Datum arraydatum, MemoryContext parentcontext, extern ExpandedArrayHeader *DatumGetExpandedArray(Datum d); extern ExpandedArrayHeader *DatumGetExpandedArrayX(Datum d, ArrayMetaState *metacache); -extern AnyArrayType *DatumGetAnyArray(Datum d); +extern AnyArrayType *DatumGetAnyArrayP(Datum d); extern void deconstruct_expanded_array(ExpandedArrayHeader *eah); #endif /* ARRAY_H */ diff --git a/src/include/utils/jsonb.h b/src/include/utils/jsonb.h index 24f491663b..d639bbc960 100644 --- a/src/include/utils/jsonb.h +++ b/src/include/utils/jsonb.h @@ -65,10 +65,10 @@ typedef enum #define JGIN_MAXLENGTH 125 /* max length of text part before hashing */ /* Convenience macros */ -#define DatumGetJsonb(d) ((Jsonb *) PG_DETOAST_DATUM(d)) -#define JsonbGetDatum(p) PointerGetDatum(p) -#define PG_GETARG_JSONB(x) DatumGetJsonb(PG_GETARG_DATUM(x)) -#define PG_RETURN_JSONB(x) PG_RETURN_POINTER(x) +#define DatumGetJsonbP(d) ((Jsonb *) PG_DETOAST_DATUM(d)) +#define JsonbPGetDatum(p) PointerGetDatum(p) +#define PG_GETARG_JSONB_P(x) DatumGetJsonbP(PG_GETARG_DATUM(x)) +#define PG_RETURN_JSONB_P(x) PG_RETURN_POINTER(x) typedef struct JsonbPair JsonbPair; typedef struct JsonbValue JsonbValue; diff --git a/src/include/utils/rangetypes.h b/src/include/utils/rangetypes.h index 5544889317..1ef5e54253 100644 --- a/src/include/utils/rangetypes.h +++ b/src/include/utils/rangetypes.h @@ -68,12 +68,12 @@ typedef struct /* * fmgr macros for range type objects */ -#define DatumGetRangeType(X) ((RangeType *) PG_DETOAST_DATUM(X)) -#define DatumGetRangeTypeCopy(X) ((RangeType *) PG_DETOAST_DATUM_COPY(X)) -#define RangeTypeGetDatum(X) PointerGetDatum(X) -#define PG_GETARG_RANGE(n) DatumGetRangeType(PG_GETARG_DATUM(n)) -#define PG_GETARG_RANGE_COPY(n) DatumGetRangeTypeCopy(PG_GETARG_DATUM(n)) -#define PG_RETURN_RANGE(x) return RangeTypeGetDatum(x) +#define DatumGetRangeTypeP(X) ((RangeType *) PG_DETOAST_DATUM(X)) +#define 
DatumGetRangeTypePCopy(X) ((RangeType *) PG_DETOAST_DATUM_COPY(X)) +#define RangeTypePGetDatum(X) PointerGetDatum(X) +#define PG_GETARG_RANGE_P(n) DatumGetRangeTypeP(PG_GETARG_DATUM(n)) +#define PG_GETARG_RANGE_P_COPY(n) DatumGetRangeTypePCopy(PG_GETARG_DATUM(n)) +#define PG_RETURN_RANGE_P(x) return RangeTypePGetDatum(x) /* Operator strategy numbers used in the GiST and SP-GiST range opclasses */ /* Numbers are chosen to match up operator names with existing usages */ From 66917bfaa7bb0b6bae52a5fe631a8b6443203f55 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 18 Sep 2017 16:01:16 -0400 Subject: [PATCH 0205/1087] Make ExplainOpenGroup and ExplainCloseGroup public. Extensions with custom plan nodes might like to use these in their EXPLAIN output. Hadi Moshayedi Discussion: https://postgr.es/m/CA+_kT_dU-rHCN0u6pjA6bN5CZniMfD=-wVqPY4QLrKUY_uJq5w@mail.gmail.com --- src/backend/commands/explain.c | 8 ++------ src/include/commands/explain.h | 5 +++++ 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 4cee357336..c1602c59cc 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -124,10 +124,6 @@ static void ExplainCustomChildren(CustomScanState *css, List *ancestors, ExplainState *es); static void ExplainProperty(const char *qlabel, const char *value, bool numeric, ExplainState *es); -static void ExplainOpenGroup(const char *objtype, const char *labelname, - bool labeled, ExplainState *es); -static void ExplainCloseGroup(const char *objtype, const char *labelname, - bool labeled, ExplainState *es); static void ExplainDummyGroup(const char *objtype, const char *labelname, ExplainState *es); static void ExplainXMLTag(const char *tagname, int flags, ExplainState *es); @@ -3277,7 +3273,7 @@ ExplainPropertyBool(const char *qlabel, bool value, ExplainState *es) * If labeled is true, the group members will be labeled properties, * while if it's false, they'll be unlabeled objects. */ -static void +void ExplainOpenGroup(const char *objtype, const char *labelname, bool labeled, ExplainState *es) { @@ -3340,7 +3336,7 @@ ExplainOpenGroup(const char *objtype, const char *labelname, * Close a group of related objects. * Parameters must match the corresponding ExplainOpenGroup call. */ -static void +void ExplainCloseGroup(const char *objtype, const char *labelname, bool labeled, ExplainState *es) { diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h index 78822b766a..543b2bb0c6 100644 --- a/src/include/commands/explain.h +++ b/src/include/commands/explain.h @@ -101,4 +101,9 @@ extern void ExplainPropertyFloat(const char *qlabel, double value, int ndigits, extern void ExplainPropertyBool(const char *qlabel, bool value, ExplainState *es); +extern void ExplainOpenGroup(const char *objtype, const char *labelname, + bool labeled, ExplainState *es); +extern void ExplainCloseGroup(const char *objtype, const char *labelname, + bool labeled, ExplainState *es); + #endif /* EXPLAIN_H */ From eb5c404b17752ca566947f12cb702438dcccdcb1 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 18 Sep 2017 16:36:28 -0400 Subject: [PATCH 0206/1087] Minor code-cleanliness improvements for btree. Make the btree page-flags test macros (P_ISLEAF and friends) return clean boolean values, rather than values that might not fit in a bool. Use them in a few places that were randomly referencing the flag bits directly. In passing, change access/nbtree/'s only direct use of BUFFER_LOCK_SHARE to BT_READ. 
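To sketch the hazard (an illustration, not part of the commit; assume a
build where c.h typedefs bool as char, and let BTP_WIDE_FLAG be a
hypothetical flag bit with value 0x100):

    bool    b1 = opaque->btpo_flags & BTP_WIDE_FLAG;        /* 256 narrows to 0: wrong */
    bool    b2 = (opaque->btpo_flags & BTP_WIDE_FLAG) != 0; /* always a clean 0 or 1 */

The BUFFER_LOCK_SHARE change is cosmetic, since nbtree.h defines BT_READ
as BUFFER_LOCK_SHARE.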
(Some think we should go the other way, but as long as we have BT_READ/BT_WRITE, let's use them consistently.) Masahiko Sawada, reviewed by Doug Doole Discussion: https://postgr.es/m/CAD21AoBmWPeN=WBB5Jvyz_Nt3rmW1ebUyAnk3ZbJP3RMXALJog@mail.gmail.com --- contrib/amcheck/verify_nbtree.c | 4 ++-- contrib/pgstattuple/pgstattuple.c | 2 +- src/backend/access/nbtree/nbtpage.c | 6 +++--- src/backend/access/nbtree/nbtxlog.c | 4 ++-- src/include/access/nbtree.h | 16 ++++++++-------- 5 files changed, 16 insertions(+), 16 deletions(-) diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c index 9ae83dc839..868c14ec8f 100644 --- a/contrib/amcheck/verify_nbtree.c +++ b/contrib/amcheck/verify_nbtree.c @@ -1195,7 +1195,7 @@ palloc_btree_page(BtreeCheckState *state, BlockNumber blocknum) opaque = (BTPageOpaque) PageGetSpecialPointer(page); - if (opaque->btpo_flags & BTP_META && blocknum != BTREE_METAPAGE) + if (P_ISMETA(opaque) && blocknum != BTREE_METAPAGE) ereport(ERROR, (errcode(ERRCODE_INDEX_CORRUPTED), errmsg("invalid meta page found at block %u in index \"%s\"", @@ -1206,7 +1206,7 @@ palloc_btree_page(BtreeCheckState *state, BlockNumber blocknum) { BTMetaPageData *metad = BTPageGetMeta(page); - if (!(opaque->btpo_flags & BTP_META) || + if (!P_ISMETA(opaque) || metad->btm_magic != BTREE_MAGIC) ereport(ERROR, (errcode(ERRCODE_INDEX_CORRUPTED), diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c index 7a91cc3468..7ca1bb24d2 100644 --- a/contrib/pgstattuple/pgstattuple.c +++ b/contrib/pgstattuple/pgstattuple.c @@ -416,7 +416,7 @@ pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno, BTPageOpaque opaque; opaque = (BTPageOpaque) PageGetSpecialPointer(page); - if (opaque->btpo_flags & (BTP_DELETED | BTP_HALF_DEAD)) + if (P_IGNORE(opaque)) { /* recyclable page */ stat->free_space += BLCKSZ; diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c index 5c817b6510..10697e9e23 100644 --- a/src/backend/access/nbtree/nbtpage.c +++ b/src/backend/access/nbtree/nbtpage.c @@ -162,7 +162,7 @@ _bt_getroot(Relation rel, int access) metad = BTPageGetMeta(metapg); /* sanity-check the metapage */ - if (!(metaopaque->btpo_flags & BTP_META) || + if (!P_ISMETA(metaopaque) || metad->btm_magic != BTREE_MAGIC) ereport(ERROR, (errcode(ERRCODE_INDEX_CORRUPTED), @@ -365,7 +365,7 @@ _bt_gettrueroot(Relation rel) metaopaque = (BTPageOpaque) PageGetSpecialPointer(metapg); metad = BTPageGetMeta(metapg); - if (!(metaopaque->btpo_flags & BTP_META) || + if (!P_ISMETA(metaopaque) || metad->btm_magic != BTREE_MAGIC) ereport(ERROR, (errcode(ERRCODE_INDEX_CORRUPTED), @@ -452,7 +452,7 @@ _bt_getrootheight(Relation rel) metad = BTPageGetMeta(metapg); /* sanity-check the metapage */ - if (!(metaopaque->btpo_flags & BTP_META) || + if (!P_ISMETA(metaopaque) || metad->btm_magic != BTREE_MAGIC) ereport(ERROR, (errcode(ERRCODE_INDEX_CORRUPTED), diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c index 3610c7c7e0..4afdf4736f 100644 --- a/src/backend/access/nbtree/nbtxlog.c +++ b/src/backend/access/nbtree/nbtxlog.c @@ -135,7 +135,7 @@ _bt_clear_incomplete_split(XLogReaderState *record, uint8 block_id) Page page = (Page) BufferGetPage(buf); BTPageOpaque pageop = (BTPageOpaque) PageGetSpecialPointer(page); - Assert((pageop->btpo_flags & BTP_INCOMPLETE_SPLIT) != 0); + Assert(P_INCOMPLETE_SPLIT(pageop)); pageop->btpo_flags &= ~BTP_INCOMPLETE_SPLIT; PageSetLSN(page, lsn); @@ -598,7 +598,7 @@ 
btree_xlog_delete_get_latestRemovedXid(XLogReaderState *record) UnlockReleaseBuffer(ibuffer); return InvalidTransactionId; } - LockBuffer(hbuffer, BUFFER_LOCK_SHARE); + LockBuffer(hbuffer, BT_READ); hpage = (Page) BufferGetPage(hbuffer); /* diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h index e6abbec280..2d4c36d0b8 100644 --- a/src/include/access/nbtree.h +++ b/src/include/access/nbtree.h @@ -173,14 +173,14 @@ typedef struct BTMetaPageData */ #define P_LEFTMOST(opaque) ((opaque)->btpo_prev == P_NONE) #define P_RIGHTMOST(opaque) ((opaque)->btpo_next == P_NONE) -#define P_ISLEAF(opaque) ((opaque)->btpo_flags & BTP_LEAF) -#define P_ISROOT(opaque) ((opaque)->btpo_flags & BTP_ROOT) -#define P_ISDELETED(opaque) ((opaque)->btpo_flags & BTP_DELETED) -#define P_ISMETA(opaque) ((opaque)->btpo_flags & BTP_META) -#define P_ISHALFDEAD(opaque) ((opaque)->btpo_flags & BTP_HALF_DEAD) -#define P_IGNORE(opaque) ((opaque)->btpo_flags & (BTP_DELETED|BTP_HALF_DEAD)) -#define P_HAS_GARBAGE(opaque) ((opaque)->btpo_flags & BTP_HAS_GARBAGE) -#define P_INCOMPLETE_SPLIT(opaque) ((opaque)->btpo_flags & BTP_INCOMPLETE_SPLIT) +#define P_ISLEAF(opaque) (((opaque)->btpo_flags & BTP_LEAF) != 0) +#define P_ISROOT(opaque) (((opaque)->btpo_flags & BTP_ROOT) != 0) +#define P_ISDELETED(opaque) (((opaque)->btpo_flags & BTP_DELETED) != 0) +#define P_ISMETA(opaque) (((opaque)->btpo_flags & BTP_META) != 0) +#define P_ISHALFDEAD(opaque) (((opaque)->btpo_flags & BTP_HALF_DEAD) != 0) +#define P_IGNORE(opaque) (((opaque)->btpo_flags & (BTP_DELETED|BTP_HALF_DEAD)) != 0) +#define P_HAS_GARBAGE(opaque) (((opaque)->btpo_flags & BTP_HAS_GARBAGE) != 0) +#define P_INCOMPLETE_SPLIT(opaque) (((opaque)->btpo_flags & BTP_INCOMPLETE_SPLIT) != 0) /* * Lehman and Yao's algorithm requires a ``high key'' on every non-rightmost From ec9e05b3c392ba9587f283507459737684539574 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 17 Sep 2017 01:00:39 -0700 Subject: [PATCH 0207/1087] Fix crash restart bug introduced in 8356753c212. The bug was caused by not re-reading the control file during crash recovery restarts, which led to an attempt to pfree() shared memory contents. The fix is to re-read the control file, which seems good anyway. It's unclear as of this moment whether we want to keep the refactoring introduced in the commit referenced above or come up with an alternative approach. But fixing the bug in the meantime seems like a good idea regardless. A followup commit will introduce regression test coverage for crash restarts. Reported-By: Tom Lane Discussion: https://postgr.es/m/14134.1505572349@sss.pgh.pa.us --- src/backend/access/transam/xlog.c | 44 ++++++++++++++++++----------- src/backend/postmaster/postmaster.c | 13 +++++++-- src/backend/tcop/postgres.c | 2 +- src/include/access/xlog.h | 2 +- 4 files changed, 39 insertions(+), 22 deletions(-) diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index b8f648927a..96ebf32a58 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -4802,15 +4802,19 @@ check_wal_buffers(int *newval, void **extra, GucSource source) /* * Read the control file, set respective GUCs. * - * This is to be called during startup, unless in bootstrap mode, where no - * control file yet exists. As there's no shared memory yet (its sizing can - * depend on the contents of the control file!), first store data in local - * memory. XLOGShemInit() will then copy it to shared memory later.
+ * This is to be called during startup, including a crash recovery cycle, + * unless in bootstrap mode, where no control file yet exists. As there's no + * usable shared memory yet (its sizing can depend on the contents of the + * control file!), first store the contents in local memory. XLOGShemInit() + * will then copy it to shared memory later. + * + * reset just controls whether previous contents are to be expected (in the + * reset case, there's a dangling pointer into old shared memory), or not. */ void -LocalProcessControlFile(void) +LocalProcessControlFile(bool reset) { - Assert(ControlFile == NULL); + Assert(reset || ControlFile == NULL); ControlFile = palloc(sizeof(ControlFileData)); ReadControlFile(); } @@ -4884,20 +4888,13 @@ XLOGShmemInit(void) } #endif - /* - * Already have read control file locally, unless in bootstrap mode. Move - * local version into shared memory. - */ + + XLogCtl = (XLogCtlData *) + ShmemInitStruct("XLOG Ctl", XLOGShmemSize(), &foundXLog); + localControlFile = ControlFile; ControlFile = (ControlFileData *) ShmemInitStruct("Control File", sizeof(ControlFileData), &foundCFile); - if (localControlFile) - { - memcpy(ControlFile, localControlFile, sizeof(ControlFileData)); - pfree(localControlFile); - } - XLogCtl = (XLogCtlData *) - ShmemInitStruct("XLOG Ctl", XLOGShmemSize(), &foundXLog); if (foundCFile || foundXLog) { @@ -4908,10 +4905,23 @@ XLOGShmemInit(void) WALInsertLocks = XLogCtl->Insert.WALInsertLocks; LWLockRegisterTranche(LWTRANCHE_WAL_INSERT, "wal_insert"); + + if (localControlFile) + pfree(localControlFile); return; } memset(XLogCtl, 0, sizeof(XLogCtlData)); + /* + * Already have read control file locally, unless in bootstrap mode. Move + * contents into shared memory. + */ + if (localControlFile) + { + memcpy(ControlFile, localControlFile, sizeof(ControlFileData)); + pfree(localControlFile); + } + /* * Since XLogCtlData contains XLogRecPtr fields, its sizeof should be a * multiple of the alignment for same, so no extra alignment padding is diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index e4f8f597c6..160b555294 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -951,7 +951,7 @@ PostmasterMain(int argc, char *argv[]) CreateDataDirLockFile(true); /* read control file (error checking and contains config) */ - LocalProcessControlFile(); + LocalProcessControlFile(false); /* * Initialize SSL library, if specified. @@ -3829,6 +3829,10 @@ PostmasterStateMachine(void) ResetBackgroundWorkerCrashTimes(); shmem_exit(1); + + /* re-read control file into local memory */ + LocalProcessControlFile(true); + reset_shared(PostPortNumber); StartupPID = StartupDataBase(); @@ -4808,8 +4812,11 @@ SubPostmasterMain(int argc, char *argv[]) /* Read in remaining GUC variables */ read_nondefault_variables(); - /* (re-)read control file (contains config) */ - LocalProcessControlFile(); + /* + * (re-)read control file, as it contains config. The postmaster will + * already have read this, but this process doesn't know about that. + */ + LocalProcessControlFile(false); /* * Reload any libraries that were preloaded by the postmaster. 
Since we diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 46b662266b..dfd52b3c87 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -3718,7 +3718,7 @@ PostgresMain(int argc, char *argv[], CreateDataDirLockFile(false); /* read control file (error checking and contains config ) */ - LocalProcessControlFile(); + LocalProcessControlFile(false); /* Initialize MaxBackends (if under postmaster, was done already) */ InitializeMaxBackends(); diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h index e0635ab4e6..7213af0e81 100644 --- a/src/include/access/xlog.h +++ b/src/include/access/xlog.h @@ -261,7 +261,7 @@ extern XLogRecPtr GetFakeLSNForUnloggedRel(void); extern Size XLOGShmemSize(void); extern void XLOGShmemInit(void); extern void BootStrapXLOG(void); -extern void LocalProcessControlFile(void); +extern void LocalProcessControlFile(bool reset); extern void StartupXLOG(void); extern void ShutdownXLOG(int code, Datum arg); extern void InitXLOGAccess(void); From a1924a4ea29399111e5155532ca24c9c51d3c82d Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 17 Sep 2017 23:05:58 -0700 Subject: [PATCH 0208/1087] Add test for postmaster crash restarts. Given that I managed to break this... We probably should extend the tests to also cover other sub-processes dying, but that's something for later. Author: Andres Freund Discussion: https://postgr.es/m/20170917080752.rcmihzfmgbeuqjk2@alap3.anarazel.de --- src/test/recovery/t/013_crash_restart.pl | 192 +++++++++++++++++++++++ 1 file changed, 192 insertions(+) create mode 100644 src/test/recovery/t/013_crash_restart.pl diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl new file mode 100644 index 0000000000..e8ad24941b --- /dev/null +++ b/src/test/recovery/t/013_crash_restart.pl @@ -0,0 +1,192 @@ +# +# Tests restarts of postgres due to crashes of a subprocess. +# +# Two longer-running psql subprocesses are used: One to kill a +# backend, triggering a crash-restart cycle, one to detect when +# postmaster noticed the backend died. The second backend is +# necessary because it's otherwise hard to determine if postmaster is +# still accepting new sessions (because it hasn't noticed that the +# backend died), or because it's already restarted. +# +use strict; +use warnings; +use PostgresNode; +use TestLib; +use Test::More; +use Config; +use Time::HiRes qw(usleep); + +if ($Config{osname} eq 'MSWin32') +{ + # some Windows Perls at least don't like IPC::Run's + # start/kill_kill regime. + plan skip_all => "Test fails on Windows perl"; +} +else +{ + plan tests => 12; +} + +my $node = get_new_node('master'); +$node->init(allows_streaming => 1); +$node->start(); + +# by default PostgresNode doesn't doesn't restart after a crash +$node->safe_psql('postgres', + q[ALTER SYSTEM SET restart_after_crash = 1; + ALTER SYSTEM SET log_connections = 1; + SELECT pg_reload_conf();]); + +# Run psql, keeping session alive, so we have an alive backend to kill. +my ($killme_stdin, $killme_stdout, $killme_stderr) = ('', '', ''); +my $killme = IPC::Run::start( + [ 'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d', + $node->connstr('postgres') ], + '<', + \$killme_stdin, + '>', + \$killme_stdout, + '2>', + \$killme_stderr); + +# Need a second psql to check if crash-restart happened. 
+my ($monitor_stdin, $monitor_stdout, $monitor_stderr) = ('', '', ''); +my $monitor = IPC::Run::start( + [ 'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d', + $node->connstr('postgres') ], + '<', + \$monitor_stdin, + '>', + \$monitor_stdout, + '2>', + \$monitor_stderr); + +#create table, insert row that should survive +$killme_stdin .= q[ +CREATE TABLE alive(status text); +INSERT INTO alive VALUES($$committed-before-sigquit$$); +SELECT pg_backend_pid(); +]; +$killme->pump until $killme_stdout =~ /[[:digit:]]+[\r\n]$/; +my $pid = $killme_stdout; +chomp($pid); +$killme_stdout = ''; + +#insert a row that should *not* survive, due to in-progress xact +$killme_stdin .= q[ +BEGIN; +INSERT INTO alive VALUES($$in-progress-before-sigquit$$) RETURNING status; +]; +$killme->pump until $killme_stdout =~ /in-progress-before-sigquit/; +$killme_stdout = ''; + + +# Start longrunning query in second session, it's failure will signal +# that crash-restart has occurred. +$monitor_stdin .= q[ +SELECT pg_sleep(3600); +]; +$monitor->pump; + + +# kill once with QUIT - we expect psql to exit, while emitting error message first +my $cnt = kill 'QUIT', $pid; + +# Exactly process should have been alive to be killed +is($cnt, 1, "exactly one process killed with SIGQUIT"); + +# Check that psql sees the killed backend as having been terminated +$killme_stdin .= q[ +SELECT 1; +]; +$killme->pump until $killme_stderr =~ /WARNING: terminating connection because of crash of another server process/; + +ok(1, "psql query died successfully after SIGQUIT"); +$killme->kill_kill; + +# Check if the crash-restart cycle has occurred +$monitor->pump until $monitor_stderr =~ /WARNING: terminating connection because of crash of another server process/; +$monitor->kill_kill; +ok(1, "psql monitor died successfully after SIGQUIT"); + +# Wait till server restarts +is($node->poll_query_until('postgres', 'SELECT $$restarted$$;', 'restarted'), "1", "reconnected after SIGQUIT"); + +# restart psql processes, now that the crash cycle finished +($killme_stdin, $killme_stdout, $killme_stderr) = ('', '', ''); +$killme->run(); +($monitor_stdin, $monitor_stdout, $monitor_stderr) = ('', '', ''); +$monitor->run(); + + +# Acquire pid of new backend +$killme_stdin .= q[ +SELECT pg_backend_pid(); +]; +$killme->pump until $killme_stdout =~ /[[:digit:]]+[\r\n]$/; +$pid = $killme_stdout; +chomp($pid); +$pid = $killme_stdout; + +# Insert test rows +$killme_stdin .= q[ +INSERT INTO alive VALUES($$committed-before-sigkill$$) RETURNING status; +BEGIN; +INSERT INTO alive VALUES($$in-progress-before-sigkill$$) RETURNING status; +]; +$killme->pump until $killme_stdout =~ /in-progress-before-sigkill/; +$killme_stdout = ''; + +$monitor_stdin .= q[ +SELECT $$restart$$; +]; +$monitor->pump until $monitor_stdout =~ /restart/; +$monitor_stdout = ''; + +# Re-start longrunning query in second session, it's failure will signal +# that crash-restart has occurred. +$monitor_stdin = q[ +SELECT pg_sleep(3600); +]; +$monitor->pump_nb; # don't wait for query results to come back + + +# kill with SIGKILL this time - we expect the backend to exit, without +# being able to emit an error error message +$cnt = kill 'KILL', $pid; +is($cnt, 1, "exactly one process killed with KILL"); + +# Check that psql sees the server as being terminated. No WARNING, +# because signal handlers aren't being run on SIGKILL. 
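+# (The backend's SIGQUIT handler, quickdie(), is what sends that
+# WARNING before exiting; SIGKILL gives it no chance to run, so libpq
+# only ever sees the socket close.)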
+$killme_stdin .= q[ +SELECT 1; +]; +$killme->pump until $killme_stderr =~ /server closed the connection unexpectedly/; +$killme->kill_kill; +ok(1, "psql query died successfully after SIGKILL"); + +# Wait till server restarts (note that we should get the WARNING here) +$monitor->pump until $monitor_stderr =~ /WARNING: terminating connection because of crash of another server process/; +ok(1, "psql monitor died successfully after SIGKILL"); +$monitor->kill_kill; + +# Wait till server restarts +is($node->poll_query_until('postgres', 'SELECT 1', '1'), "1", "reconnected after SIGKILL"); + +# Make sure the committed rows survived, in-progress ones not +is($node->safe_psql('postgres', 'SELECT * FROM alive'), + "committed-before-sigquit\ncommitted-before-sigkill", 'data survived'); + +is($node->safe_psql('postgres', 'INSERT INTO alive VALUES($$before-orderly-restart$$) RETURNING status'), + 'before-orderly-restart', 'can still write after crash restart'); + +# Just to be sure, check that an orderly restart now still works +$node->restart(); + +is($node->safe_psql('postgres', 'SELECT * FROM alive'), + "committed-before-sigquit\ncommitted-before-sigkill\nbefore-orderly-restart", 'data survived'); + +is($node->safe_psql('postgres', 'INSERT INTO alive VALUES($$after-orderly-restart$$) RETURNING status'), + 'after-orderly-restart', 'can still write after orderly restart'); + +$node->stop(); From 0fb9e4ace5ce4d479d839a720f32b99fdc87f455 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Mon, 18 Sep 2017 17:43:37 -0700 Subject: [PATCH 0209/1087] Fix uninitialized variable in dshash.c. A bugfix for commit 8c0d7bafad36434cb08ac2c78e69ae72c194ca20. The code would have crashed if hashtable->size_log2 ever had the same value as hashtable->control->size_log2 by coincidence. Per Valgrind. Author: Thomas Munro Reported-By: Tomas Vondra Discussion: https://postgr.es/m/e72fb33c-4f31-f276-e972-263d9b59554d%402ndquadrant.com --- src/backend/lib/dshash.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/src/backend/lib/dshash.c b/src/backend/lib/dshash.c index 448e058725..dd87573067 100644 --- a/src/backend/lib/dshash.c +++ b/src/backend/lib/dshash.c @@ -249,6 +249,7 @@ dshash_create(dsa_area *area, const dshash_parameters *params, void *arg) } hash_table->buckets = dsa_get_address(area, hash_table->control->buckets); + hash_table->size_log2 = hash_table->control->size_log2; return hash_table; } @@ -280,6 +281,14 @@ dshash_attach(dsa_area *area, const dshash_parameters *params, hash_table->find_exclusively_locked = false; Assert(hash_table->control->magic == DSHASH_MAGIC); + /* + * These will later be set to the correct values by + * ensure_valid_bucket_pointers(), at which time we'll be holding a + * partition lock for interlocking against concurrent resizing. + */ + hash_table->buckets = NULL; + hash_table->size_log2 = 0; + return hash_table; } From f8e5f156b30efee5d0038b03e38735773abcb7ed Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Mon, 18 Sep 2017 19:36:44 -0700 Subject: [PATCH 0210/1087] Rearm statement_timeout after each executed query. Previously statement_timeout, in the extended protocol, affected all messages till a Sync message. For clients that pipeline/batch query execution that's problematic. Instead disable timeout after each Execute message, and enable, if necessary, the timer in start_xact_command(). As that's done only for Execute and not Parse / Bind, pipelining the latter two could still cause undesirable timeouts. 
But a survey of protocol implementations shows that all drivers issue Sync messages when preparing, and adding timeout rearming to both is fairly expensive for the common parse / bind / execute sequence. Author: Tatsuo Ishii, editorialized by Andres Freund Reviewed-By: Takayuki Tsunakawa, Andres Freund Discussion: https://postgr.es/m/20170222.115044.1665674502985097185.t-ishii@sraoss.co.jp --- src/backend/tcop/postgres.c | 77 +++++++++++++++++++++++++++++++------ 1 file changed, 65 insertions(+), 12 deletions(-) diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index dfd52b3c87..c807b00b0b 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -143,6 +143,11 @@ static bool DoingCommandRead = false; static bool doing_extended_query_message = false; static bool ignore_till_sync = false; +/* + * Flag to keep track of whether statement timeout timer is active. + */ +static bool stmt_timeout_active = false; + /* * If an unnamed prepared statement exists, it's stored here. * We keep it separate from the hashtable kept by commands/prepare.c @@ -182,6 +187,8 @@ static bool IsTransactionExitStmtList(List *pstmts); static bool IsTransactionStmtList(List *pstmts); static void drop_unnamed_stmt(void); static void log_disconnections(int code, Datum arg); +static void enable_statement_timeout(void); +static void disable_statement_timeout(void); /* ---------------------------------------------------------------- @@ -1241,7 +1248,8 @@ exec_parse_message(const char *query_string, /* string to execute */ /* * Start up a transaction command so we can run parse analysis etc. (Note * that this will normally change current memory context.) Nothing happens - * if we are already in one. + * if we are already in one. This also arms the statement timeout if + * necessary. */ start_xact_command(); @@ -1529,7 +1537,8 @@ exec_bind_message(StringInfo input_message) /* * Start up a transaction command so we can call functions etc. (Note that * this will normally change current memory context.) Nothing happens if - * we are already in one. + * we are already in one. This also arms the statement timeout if + * necessary. */ start_xact_command(); @@ -2021,6 +2030,9 @@ exec_execute_message(const char *portal_name, long max_rows) * those that start or end a transaction block. */ CommandCounterIncrement(); + + /* full command has been executed, reset timeout */ + disable_statement_timeout(); } /* Send appropriate CommandComplete to client */ @@ -2450,25 +2462,27 @@ start_xact_command(void) { StartTransactionCommand(); - /* Set statement timeout running, if any */ - /* NB: this mustn't be enabled until we are within an xact */ - if (StatementTimeout > 0) - enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout); - else - disable_timeout(STATEMENT_TIMEOUT, false); - xact_started = true; } + + /* + * Start statement timeout if necessary. Note that this'll intentionally + * not reset the clock on an already started timeout, to avoid the timing + * overhead when start_xact_command() is invoked repeatedly, without an + * interceding finish_xact_command() (e.g. parse/bind/execute). If that's + * not desired, the timeout has to be disabled explicitly. 
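+ *
+ * For illustration: when a client pipelines Parse/Bind/Execute/Execute/
+ * Sync, the first start_xact_command() arms the timer; Bind's and the
+ * first Execute's calls are no-ops because stmt_timeout_active is
+ * already set; the end of the first Execute disarms it; and the second
+ * Execute's start_xact_command() re-arms it for the second statement.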
+ */ + enable_statement_timeout(); } static void finish_xact_command(void) { + /* cancel active statement timeout after each command */ + disable_statement_timeout(); + if (xact_started) { - /* Cancel any active statement timeout before committing */ - disable_timeout(STATEMENT_TIMEOUT, false); - CommitTransactionCommand(); #ifdef MEMORY_CONTEXT_CHECKING @@ -4537,3 +4551,42 @@ log_disconnections(int code, Datum arg) port->user_name, port->database_name, port->remote_host, port->remote_port[0] ? " port=" : "", port->remote_port))); } + +/* + * Start statement timeout timer, if enabled. + * + * If there's already a timeout running, don't restart the timer. That + * enables compromises between accuracy of timeouts and cost of starting a + * timeout. + */ +static void +enable_statement_timeout(void) +{ + /* must be within an xact */ + Assert(xact_started); + + if (StatementTimeout > 0) + { + if (!stmt_timeout_active) + { + enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout); + stmt_timeout_active = true; + } + } + else + disable_timeout(STATEMENT_TIMEOUT, false); +} + +/* + * Disable statement timeout, if active. + */ +static void +disable_statement_timeout(void) +{ + if (stmt_timeout_active) + { + disable_timeout(STATEMENT_TIMEOUT, false); + + stmt_timeout_active = false; + } +} From f2464997644c64b5dec93ab3c08305f48bfe14f1 Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Tue, 19 Sep 2017 08:31:45 -0400 Subject: [PATCH 0211/1087] Add citext_pattern_ops for citext contrib module This is similar to text_pattern_ops. Alexey Chernyshov, reviewed by Jacob Champion. --- contrib/citext/citext--1.4--1.5.sql | 74 ++++++ contrib/citext/citext.c | 121 +++++++++ contrib/citext/expected/citext.out | 370 +++++++++++++++++++++++++++ contrib/citext/expected/citext_1.out | 370 +++++++++++++++++++++++++++ contrib/citext/sql/citext.sql | 78 ++++++ 5 files changed, 1013 insertions(+) diff --git a/contrib/citext/citext--1.4--1.5.sql b/contrib/citext/citext--1.4--1.5.sql index 97942cb7bf..5ae522b7da 100644 --- a/contrib/citext/citext--1.4--1.5.sql +++ b/contrib/citext/citext--1.4--1.5.sql @@ -12,3 +12,77 @@ ALTER OPERATOR >= (citext, citext) SET ( RESTRICT = scalargesel, JOIN = scalargejoinsel ); + +CREATE FUNCTION citext_pattern_lt( citext, citext ) +RETURNS bool +AS 'MODULE_PATHNAME' +LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE; + +CREATE FUNCTION citext_pattern_le( citext, citext ) +RETURNS bool +AS 'MODULE_PATHNAME' +LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE; + +CREATE FUNCTION citext_pattern_gt( citext, citext ) +RETURNS bool +AS 'MODULE_PATHNAME' +LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE; + +CREATE FUNCTION citext_pattern_ge( citext, citext ) +RETURNS bool +AS 'MODULE_PATHNAME' +LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE; + +CREATE OPERATOR ~<~ ( + LEFTARG = CITEXT, + RIGHTARG = CITEXT, + NEGATOR = ~>=~, + COMMUTATOR = ~>~, + PROCEDURE = citext_pattern_lt, + RESTRICT = scalarltsel, + JOIN = scalarltjoinsel +); + +CREATE OPERATOR ~<=~ ( + LEFTARG = CITEXT, + RIGHTARG = CITEXT, + NEGATOR = ~>~, + COMMUTATOR = ~>=~, + PROCEDURE = citext_pattern_le, + RESTRICT = scalarltsel, + JOIN = scalarltjoinsel +); + +CREATE OPERATOR ~>=~ ( + LEFTARG = CITEXT, + RIGHTARG = CITEXT, + NEGATOR = ~<~, + COMMUTATOR = ~<=~, + PROCEDURE = citext_pattern_ge, + RESTRICT = scalargtsel, + JOIN = scalargtjoinsel +); + +CREATE OPERATOR ~>~ ( + LEFTARG = CITEXT, + RIGHTARG = CITEXT, + NEGATOR = ~<=~, + COMMUTATOR = ~<~, + PROCEDURE = citext_pattern_gt, + RESTRICT = scalargtsel, + JOIN = scalargtjoinsel +); + +CREATE FUNCTION 
citext_pattern_cmp(citext, citext) +RETURNS int4 +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE; + +CREATE OPERATOR CLASS citext_pattern_ops +FOR TYPE CITEXT USING btree AS + OPERATOR 1 ~<~ (citext, citext), + OPERATOR 2 ~<=~ (citext, citext), + OPERATOR 3 = (citext, citext), + OPERATOR 4 ~>=~ (citext, citext), + OPERATOR 5 ~>~ (citext, citext), + FUNCTION 1 citext_pattern_cmp(citext, citext); diff --git a/contrib/citext/citext.c b/contrib/citext/citext.c index 0ba47828ba..2c0e48e2bc 100644 --- a/contrib/citext/citext.c +++ b/contrib/citext/citext.c @@ -18,6 +18,7 @@ PG_MODULE_MAGIC; */ static int32 citextcmp(text *left, text *right, Oid collid); +static int32 internal_citext_pattern_cmp(text *left, text *right, Oid collid); /* * ================= @@ -58,6 +59,41 @@ citextcmp(text *left, text *right, Oid collid) return result; } +/* + * citext_pattern_cmp() + * Internal character-by-character comparison function for citext strings. + * Returns int32 negative, zero, or positive. + */ +static int32 +internal_citext_pattern_cmp(text *left, text *right, Oid collid) +{ + char *lcstr, + *rcstr; + int llen, + rlen; + int32 result; + + lcstr = str_tolower(VARDATA_ANY(left), VARSIZE_ANY_EXHDR(left), DEFAULT_COLLATION_OID); + rcstr = str_tolower(VARDATA_ANY(right), VARSIZE_ANY_EXHDR(right), DEFAULT_COLLATION_OID); + + llen = strlen(lcstr); + rlen = strlen(rcstr); + + result = memcmp((void *) lcstr, (void *) rcstr, Min(llen, rlen)); + if (result == 0) + { + if (llen < rlen) + result = -1; + else if (llen > rlen) + result = 1; + } + + pfree(lcstr); + pfree(rcstr); + + return result; +} + /* * ================== * INDEXING FUNCTIONS @@ -81,6 +117,23 @@ citext_cmp(PG_FUNCTION_ARGS) PG_RETURN_INT32(result); } +PG_FUNCTION_INFO_V1(citext_pattern_cmp); + +Datum +citext_pattern_cmp(PG_FUNCTION_ARGS) +{ + text *left = PG_GETARG_TEXT_PP(0); + text *right = PG_GETARG_TEXT_PP(1); + int32 result; + + result = internal_citext_pattern_cmp(left, right, PG_GET_COLLATION()); + + PG_FREE_IF_COPY(left, 0); + PG_FREE_IF_COPY(right, 1); + + PG_RETURN_INT32(result); +} + PG_FUNCTION_INFO_V1(citext_hash); Datum @@ -234,6 +287,74 @@ citext_ge(PG_FUNCTION_ARGS) PG_RETURN_BOOL(result); } +PG_FUNCTION_INFO_V1(citext_pattern_lt); + +Datum +citext_pattern_lt(PG_FUNCTION_ARGS) +{ + text *left = PG_GETARG_TEXT_PP(0); + text *right = PG_GETARG_TEXT_PP(1); + bool result; + + result = internal_citext_pattern_cmp(left, right, PG_GET_COLLATION()) < 0; + + PG_FREE_IF_COPY(left, 0); + PG_FREE_IF_COPY(right, 1); + + PG_RETURN_BOOL(result); +} + +PG_FUNCTION_INFO_V1(citext_pattern_le); + +Datum +citext_pattern_le(PG_FUNCTION_ARGS) +{ + text *left = PG_GETARG_TEXT_PP(0); + text *right = PG_GETARG_TEXT_PP(1); + bool result; + + result = internal_citext_pattern_cmp(left, right, PG_GET_COLLATION()) <= 0; + + PG_FREE_IF_COPY(left, 0); + PG_FREE_IF_COPY(right, 1); + + PG_RETURN_BOOL(result); +} + +PG_FUNCTION_INFO_V1(citext_pattern_gt); + +Datum +citext_pattern_gt(PG_FUNCTION_ARGS) +{ + text *left = PG_GETARG_TEXT_PP(0); + text *right = PG_GETARG_TEXT_PP(1); + bool result; + + result = internal_citext_pattern_cmp(left, right, PG_GET_COLLATION()) > 0; + + PG_FREE_IF_COPY(left, 0); + PG_FREE_IF_COPY(right, 1); + + PG_RETURN_BOOL(result); +} + +PG_FUNCTION_INFO_V1(citext_pattern_ge); + +Datum +citext_pattern_ge(PG_FUNCTION_ARGS) +{ + text *left = PG_GETARG_TEXT_PP(0); + text *right = PG_GETARG_TEXT_PP(1); + bool result; + + result = internal_citext_pattern_cmp(left, right, PG_GET_COLLATION()) >= 0; + + PG_FREE_IF_COPY(left, 0); + 
PG_FREE_IF_COPY(right, 1); + + PG_RETURN_BOOL(result); +} + /* * =================== * AGGREGATE FUNCTIONS diff --git a/contrib/citext/expected/citext.out b/contrib/citext/expected/citext.out index 9cc94f4c1b..56fb0e9036 100644 --- a/contrib/citext/expected/citext.out +++ b/contrib/citext/expected/citext.out @@ -2351,3 +2351,373 @@ SELECT * FROM citext_matview ORDER BY id; 5 | (5 rows) +-- test citext_pattern_cmp() function explicitly. +SELECT citext_pattern_cmp('aardvark'::citext, 'aardvark'::citext) AS zero; + zero +------ + 0 +(1 row) + +SELECT citext_pattern_cmp('aardvark'::citext, 'aardVark'::citext) AS zero; + zero +------ + 0 +(1 row) + +SELECT citext_pattern_cmp('AARDVARK'::citext, 'AARDVARK'::citext) AS zero; + zero +------ + 0 +(1 row) + +SELECT citext_pattern_cmp('B'::citext, 'a'::citext) > 0 AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_cmp('a'::citext, 'B'::citext) < 0 AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_cmp('A'::citext, 'b'::citext) < 0 AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_cmp('ABCD'::citext, 'abc'::citext) > 0 AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_cmp('ABC'::citext, 'abcd'::citext) < 0 AS true; + true +------ + t +(1 row) + +-- test operator functions +-- lt +SELECT citext_pattern_lt('a'::citext, 'b'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_lt('A'::citext, 'b'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_lt('a'::citext, 'B'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_lt('b'::citext, 'a'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_lt('B'::citext, 'a'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_lt('b'::citext, 'A'::citext) AS false; + false +------- + f +(1 row) + +-- le +SELECT citext_pattern_le('a'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('a'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('A'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('A'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('a'::citext, 'B'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('A'::citext, 'b'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('a'::citext, 'B'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('b'::citext, 'a'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_le('B'::citext, 'a'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_le('b'::citext, 'A'::citext) AS false; + false +------- + f +(1 row) + +-- gt +SELECT citext_pattern_gt('a'::citext, 'b'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_gt('A'::citext, 'b'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_gt('a'::citext, 'B'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_gt('b'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_gt('B'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_gt('b'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +-- ge +SELECT citext_pattern_ge('a'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('a'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('A'::citext, 
'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('A'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('a'::citext, 'B'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_ge('A'::citext, 'b'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_ge('a'::citext, 'B'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_ge('b'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('B'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('b'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +-- Test ~<~ and ~<=~ +SELECT 'a'::citext ~<~ 'B'::citext AS t; + t +--- + t +(1 row) + +SELECT 'b'::citext ~<~ 'A'::citext AS f; + f +--- + f +(1 row) + +SELECT 'à'::citext ~<~ 'À'::citext AS f; + f +--- + f +(1 row) + +SELECT 'a'::citext ~<=~ 'B'::citext AS t; + t +--- + t +(1 row) + +SELECT 'a'::citext ~<=~ 'A'::citext AS t; + t +--- + t +(1 row) + +SELECT 'à'::citext ~<=~ 'À'::citext AS t; + t +--- + f +(1 row) + +-- Test ~>~ and ~>=~ +SELECT 'B'::citext ~>~ 'a'::citext AS t; + t +--- + t +(1 row) + +SELECT 'b'::citext ~>~ 'A'::citext AS t; + t +--- + t +(1 row) + +SELECT 'à'::citext ~>~ 'À'::citext AS f; + f +--- + t +(1 row) + +SELECT 'B'::citext ~>~ 'b'::citext AS f; + f +--- + f +(1 row) + +SELECT 'B'::citext ~>=~ 'b'::citext AS t; + t +--- + t +(1 row) + +SELECT 'à'::citext ~>=~ 'À'::citext AS t; + t +--- + t +(1 row) + +-- Test implicit casting. citext casts to text, but not vice-versa. +SELECT 'B'::citext ~<~ 'a'::text AS t; -- text wins. + t +--- + t +(1 row) + +SELECT 'B'::citext ~<=~ 'a'::text AS t; -- text wins. + t +--- + t +(1 row) + +SELECT 'a'::citext ~>~ 'B'::text AS t; -- text wins. + t +--- + t +(1 row) + +SELECT 'a'::citext ~>=~ 'B'::text AS t; -- text wins. + t +--- + t +(1 row) + +-- Test implicit casting. citext casts to varchar, but not vice-versa. +SELECT 'B'::citext ~<~ 'a'::varchar AS t; -- varchar wins. + t +--- + t +(1 row) + +SELECT 'B'::citext ~<=~ 'a'::varchar AS t; -- varchar wins. + t +--- + t +(1 row) + +SELECT 'a'::citext ~>~ 'B'::varchar AS t; -- varchar wins. + t +--- + t +(1 row) + +SELECT 'a'::citext ~>=~ 'B'::varchar AS t; -- varchar wins. + t +--- + t +(1 row) + diff --git a/contrib/citext/expected/citext_1.out b/contrib/citext/expected/citext_1.out index d1fb1e14e0..95549c5888 100644 --- a/contrib/citext/expected/citext_1.out +++ b/contrib/citext/expected/citext_1.out @@ -2351,3 +2351,373 @@ SELECT * FROM citext_matview ORDER BY id; 5 | (5 rows) +-- test citext_pattern_cmp() function explicitly. 
+SELECT citext_pattern_cmp('aardvark'::citext, 'aardvark'::citext) AS zero; + zero +------ + 0 +(1 row) + +SELECT citext_pattern_cmp('aardvark'::citext, 'aardVark'::citext) AS zero; + zero +------ + 0 +(1 row) + +SELECT citext_pattern_cmp('AARDVARK'::citext, 'AARDVARK'::citext) AS zero; + zero +------ + 0 +(1 row) + +SELECT citext_pattern_cmp('B'::citext, 'a'::citext) > 0 AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_cmp('a'::citext, 'B'::citext) < 0 AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_cmp('A'::citext, 'b'::citext) < 0 AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_cmp('ABCD'::citext, 'abc'::citext) > 0 AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_cmp('ABC'::citext, 'abcd'::citext) < 0 AS true; + true +------ + t +(1 row) + +-- test operator functions +-- lt +SELECT citext_pattern_lt('a'::citext, 'b'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_lt('A'::citext, 'b'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_lt('a'::citext, 'B'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_lt('b'::citext, 'a'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_lt('B'::citext, 'a'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_lt('b'::citext, 'A'::citext) AS false; + false +------- + f +(1 row) + +-- le +SELECT citext_pattern_le('a'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('a'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('A'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('A'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('a'::citext, 'B'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('A'::citext, 'b'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('a'::citext, 'B'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_le('b'::citext, 'a'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_le('B'::citext, 'a'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_le('b'::citext, 'A'::citext) AS false; + false +------- + f +(1 row) + +-- gt +SELECT citext_pattern_gt('a'::citext, 'b'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_gt('A'::citext, 'b'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_gt('a'::citext, 'B'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_gt('b'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_gt('B'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_gt('b'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +-- ge +SELECT citext_pattern_ge('a'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('a'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('A'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('A'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('a'::citext, 'B'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_ge('A'::citext, 'b'::citext) AS false; + false +------- + f +(1 row) + +SELECT citext_pattern_ge('a'::citext, 'B'::citext) AS false; + false +------- + f +(1 row) + +SELECT 
citext_pattern_ge('b'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('B'::citext, 'a'::citext) AS true; + true +------ + t +(1 row) + +SELECT citext_pattern_ge('b'::citext, 'A'::citext) AS true; + true +------ + t +(1 row) + +-- Test ~<~ and ~<=~ +SELECT 'a'::citext ~<~ 'B'::citext AS t; + t +--- + t +(1 row) + +SELECT 'b'::citext ~<~ 'A'::citext AS f; + f +--- + f +(1 row) + +SELECT 'à'::citext ~<~ 'À'::citext AS f; + f +--- + f +(1 row) + +SELECT 'a'::citext ~<=~ 'B'::citext AS t; + t +--- + t +(1 row) + +SELECT 'a'::citext ~<=~ 'A'::citext AS t; + t +--- + t +(1 row) + +SELECT 'à'::citext ~<=~ 'À'::citext AS t; + t +--- + t +(1 row) + +-- Test ~>~ and ~>=~ +SELECT 'B'::citext ~>~ 'a'::citext AS t; + t +--- + t +(1 row) + +SELECT 'b'::citext ~>~ 'A'::citext AS t; + t +--- + t +(1 row) + +SELECT 'à'::citext ~>~ 'À'::citext AS f; + f +--- + f +(1 row) + +SELECT 'B'::citext ~>~ 'b'::citext AS f; + f +--- + f +(1 row) + +SELECT 'B'::citext ~>=~ 'b'::citext AS t; + t +--- + t +(1 row) + +SELECT 'à'::citext ~>=~ 'À'::citext AS t; + t +--- + t +(1 row) + +-- Test implicit casting. citext casts to text, but not vice-versa. +SELECT 'B'::citext ~<~ 'a'::text AS t; -- text wins. + t +--- + t +(1 row) + +SELECT 'B'::citext ~<=~ 'a'::text AS t; -- text wins. + t +--- + t +(1 row) + +SELECT 'a'::citext ~>~ 'B'::text AS t; -- text wins. + t +--- + t +(1 row) + +SELECT 'a'::citext ~>=~ 'B'::text AS t; -- text wins. + t +--- + t +(1 row) + +-- Test implicit casting. citext casts to varchar, but not vice-versa. +SELECT 'B'::citext ~<~ 'a'::varchar AS t; -- varchar wins. + t +--- + t +(1 row) + +SELECT 'B'::citext ~<=~ 'a'::varchar AS t; -- varchar wins. + t +--- + t +(1 row) + +SELECT 'a'::citext ~>~ 'B'::varchar AS t; -- varchar wins. + t +--- + t +(1 row) + +SELECT 'a'::citext ~>=~ 'B'::varchar AS t; -- varchar wins. + t +--- + t +(1 row) + diff --git a/contrib/citext/sql/citext.sql b/contrib/citext/sql/citext.sql index f70f9ebae9..e9acd4664f 100644 --- a/contrib/citext/sql/citext.sql +++ b/contrib/citext/sql/citext.sql @@ -752,3 +752,81 @@ SELECT * WHERE t.id IS NULL OR m.id IS NULL; REFRESH MATERIALIZED VIEW CONCURRENTLY citext_matview; SELECT * FROM citext_matview ORDER BY id; + +-- test citext_pattern_cmp() function explicitly. 
+SELECT citext_pattern_cmp('aardvark'::citext, 'aardvark'::citext) AS zero; +SELECT citext_pattern_cmp('aardvark'::citext, 'aardVark'::citext) AS zero; +SELECT citext_pattern_cmp('AARDVARK'::citext, 'AARDVARK'::citext) AS zero; +SELECT citext_pattern_cmp('B'::citext, 'a'::citext) > 0 AS true; +SELECT citext_pattern_cmp('a'::citext, 'B'::citext) < 0 AS true; +SELECT citext_pattern_cmp('A'::citext, 'b'::citext) < 0 AS true; +SELECT citext_pattern_cmp('ABCD'::citext, 'abc'::citext) > 0 AS true; +SELECT citext_pattern_cmp('ABC'::citext, 'abcd'::citext) < 0 AS true; + +-- test operator functions +-- lt +SELECT citext_pattern_lt('a'::citext, 'b'::citext) AS true; +SELECT citext_pattern_lt('A'::citext, 'b'::citext) AS true; +SELECT citext_pattern_lt('a'::citext, 'B'::citext) AS true; +SELECT citext_pattern_lt('b'::citext, 'a'::citext) AS false; +SELECT citext_pattern_lt('B'::citext, 'a'::citext) AS false; +SELECT citext_pattern_lt('b'::citext, 'A'::citext) AS false; +-- le +SELECT citext_pattern_le('a'::citext, 'a'::citext) AS true; +SELECT citext_pattern_le('a'::citext, 'A'::citext) AS true; +SELECT citext_pattern_le('A'::citext, 'a'::citext) AS true; +SELECT citext_pattern_le('A'::citext, 'A'::citext) AS true; +SELECT citext_pattern_le('a'::citext, 'B'::citext) AS true; +SELECT citext_pattern_le('A'::citext, 'b'::citext) AS true; +SELECT citext_pattern_le('a'::citext, 'B'::citext) AS true; +SELECT citext_pattern_le('b'::citext, 'a'::citext) AS false; +SELECT citext_pattern_le('B'::citext, 'a'::citext) AS false; +SELECT citext_pattern_le('b'::citext, 'A'::citext) AS false; +-- gt +SELECT citext_pattern_gt('a'::citext, 'b'::citext) AS false; +SELECT citext_pattern_gt('A'::citext, 'b'::citext) AS false; +SELECT citext_pattern_gt('a'::citext, 'B'::citext) AS false; +SELECT citext_pattern_gt('b'::citext, 'a'::citext) AS true; +SELECT citext_pattern_gt('B'::citext, 'a'::citext) AS true; +SELECT citext_pattern_gt('b'::citext, 'A'::citext) AS true; +-- ge +SELECT citext_pattern_ge('a'::citext, 'a'::citext) AS true; +SELECT citext_pattern_ge('a'::citext, 'A'::citext) AS true; +SELECT citext_pattern_ge('A'::citext, 'a'::citext) AS true; +SELECT citext_pattern_ge('A'::citext, 'A'::citext) AS true; +SELECT citext_pattern_ge('a'::citext, 'B'::citext) AS false; +SELECT citext_pattern_ge('A'::citext, 'b'::citext) AS false; +SELECT citext_pattern_ge('a'::citext, 'B'::citext) AS false; +SELECT citext_pattern_ge('b'::citext, 'a'::citext) AS true; +SELECT citext_pattern_ge('B'::citext, 'a'::citext) AS true; +SELECT citext_pattern_ge('b'::citext, 'A'::citext) AS true; + +-- Test ~<~ and ~<=~ +SELECT 'a'::citext ~<~ 'B'::citext AS t; +SELECT 'b'::citext ~<~ 'A'::citext AS f; +SELECT 'à'::citext ~<~ 'À'::citext AS f; +SELECT 'a'::citext ~<=~ 'B'::citext AS t; +SELECT 'a'::citext ~<=~ 'A'::citext AS t; +SELECT 'à'::citext ~<=~ 'À'::citext AS t; + +-- Test ~>~ and ~>=~ +SELECT 'B'::citext ~>~ 'a'::citext AS t; +SELECT 'b'::citext ~>~ 'A'::citext AS t; +SELECT 'à'::citext ~>~ 'À'::citext AS f; +SELECT 'B'::citext ~>~ 'b'::citext AS f; +SELECT 'B'::citext ~>=~ 'b'::citext AS t; +SELECT 'à'::citext ~>=~ 'À'::citext AS t; + +-- Test implicit casting. citext casts to text, but not vice-versa. +SELECT 'B'::citext ~<~ 'a'::text AS t; -- text wins. +SELECT 'B'::citext ~<=~ 'a'::text AS t; -- text wins. + +SELECT 'a'::citext ~>~ 'B'::text AS t; -- text wins. +SELECT 'a'::citext ~>=~ 'B'::text AS t; -- text wins. + +-- Test implicit casting. citext casts to varchar, but not vice-versa. 
+SELECT 'B'::citext ~<~ 'a'::varchar AS t; -- varchar wins. +SELECT 'B'::citext ~<=~ 'a'::varchar AS t; -- varchar wins. + +SELECT 'a'::citext ~>~ 'B'::varchar AS t; -- varchar wins. +SELECT 'a'::citext ~>=~ 'B'::varchar AS t; -- varchar wins. From d61f5bb7c444255b064a60df782907f7dddad61a Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Tue, 19 Sep 2017 12:23:18 -0400 Subject: [PATCH 0212/1087] doc: add example of % substitution for connection URIs Reported-by: Zhou Digoal Discussion: https://postgr.es/m/20170912133722.25637.91@wrigleys.postgresql.org Backpatch-through: 10 --- doc/src/sgml/libpq.sgml | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 096a8be605..0aedd837dc 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -840,7 +840,9 @@ postgresql:///mydb?host=localhost&port=5433 Percent-encoding may be used to include symbols with special meaning in any - of the URI parts. + of the URI parts, e.g. replace = with + %3D. + From 1910353675bd149e1020b29c0fae02538fc358cd Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 19 Sep 2017 10:32:51 -0700 Subject: [PATCH 0213/1087] Make new crash restart test a bit more robust. Add timeouts in case psql doesn't deliver the expected output, and try to cause the monitoring psql to be fully connected to a backend. This isn't necessarily everything needed, but at least the timeouts should reduce the pain for buildfarm owners. Author: Andres Freund Reported-By: Tom Lane, BF animals prairiedog and calliphoridae Discussion: https://postgr.es/m/E1du6ZT-00043I-91@gemulon.postgresql.org --- src/test/recovery/t/013_crash_restart.pl | 34 +++++++++++++++--------- 1 file changed, 21 insertions(+), 13 deletions(-) diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl index e8ad24941b..161dbd86ee 100644 --- a/src/test/recovery/t/013_crash_restart.pl +++ b/src/test/recovery/t/013_crash_restart.pl @@ -27,6 +27,12 @@ plan tests => 12; } +# To avoid hanging while expecting some specific input from a psql +# instance being driven by us, add a timeout high enough that it +# should never trigger in a normal run, but low enough to actually see +# failures in a realistic amount of time. +my $psql_timeout = 180; + my $node = get_new_node('master'); $node->init(allows_streaming => 1); $node->start(); @@ -47,7 +53,8 @@ '>', \$killme_stdout, '2>', - \$killme_stderr); + \$killme_stderr, + IPC::Run::timeout($psql_timeout)); # Need a second psql to check if crash-restart happened. my ($monitor_stdin, $monitor_stdout, $monitor_stderr) = ('', '', ''); @@ -59,7 +66,8 @@ '>', \$monitor_stdout, '2>', - \$monitor_stderr); + \$monitor_stderr, + IPC::Run::timeout($psql_timeout)); #create table, insert row that should survive $killme_stdin .= q[ @@ -82,11 +90,13 @@ # Start longrunning query in second session, it's failure will signal -# that crash-restart has occurred. +# that crash-restart has occurred. The initial wait for the trivial +# select is to be sure that psql successfully connected to backend. 
$monitor_stdin .= q[ +SELECT $$psql-connected$$; SELECT pg_sleep(3600); ]; -$monitor->pump; +$monitor->pump until $monitor_stdout =~ /psql-connected/; # kill once with QUIT - we expect psql to exit, while emitting error message first @@ -137,18 +147,16 @@ $killme->pump until $killme_stdout =~ /in-progress-before-sigkill/; $killme_stdout = ''; -$monitor_stdin .= q[ -SELECT $$restart$$; -]; -$monitor->pump until $monitor_stdout =~ /restart/; -$monitor_stdout = ''; - -# Re-start longrunning query in second session, it's failure will signal -# that crash-restart has occurred. +# Re-start longrunning query in second session, it's failure will +# signal that crash-restart has occurred. The initial wait for the +# trivial select is to be sure that psql successfully connected to +# backend. $monitor_stdin = q[ +SELECT $$psql-connected$$; SELECT pg_sleep(3600); ]; -$monitor->pump_nb; # don't wait for query results to come back +$monitor->pump until $monitor_stdout =~ /psql-connected/; +$monitor_stdout = ''; # kill with SIGKILL this time - we expect the backend to exit, without From 890faaf1957759c6e17fbcbfd16f7cabc4a59d07 Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Tue, 19 Sep 2017 14:51:51 -0400 Subject: [PATCH 0214/1087] Set client encoding to UTF8 for the citext regression script Problem introduced with non-ascii characters in commit f2464997644c and discovered on various buildfarm animals. --- contrib/citext/expected/citext.out | 2 ++ contrib/citext/expected/citext_1.out | 2 ++ contrib/citext/sql/citext.sql | 3 +++ 3 files changed, 7 insertions(+) diff --git a/contrib/citext/expected/citext.out b/contrib/citext/expected/citext.out index 56fb0e9036..ff0a6ed588 100644 --- a/contrib/citext/expected/citext.out +++ b/contrib/citext/expected/citext.out @@ -1,6 +1,8 @@ -- -- Test citext datatype -- +--- script setup +set client_encoding = 'utf8'; CREATE EXTENSION citext; -- Check whether any of our opclasses fail amvalidate SELECT amname, opcname diff --git a/contrib/citext/expected/citext_1.out b/contrib/citext/expected/citext_1.out index 95549c5888..43a609b066 100644 --- a/contrib/citext/expected/citext_1.out +++ b/contrib/citext/expected/citext_1.out @@ -1,6 +1,8 @@ -- -- Test citext datatype -- +--- script setup +set client_encoding = 'utf8'; CREATE EXTENSION citext; -- Check whether any of our opclasses fail amvalidate SELECT amname, opcname diff --git a/contrib/citext/sql/citext.sql b/contrib/citext/sql/citext.sql index e9acd4664f..91dd7d03d0 100644 --- a/contrib/citext/sql/citext.sql +++ b/contrib/citext/sql/citext.sql @@ -2,6 +2,9 @@ -- Test citext datatype -- +--- script setup +set client_encoding = 'utf8'; + CREATE EXTENSION citext; -- Check whether any of our opclasses fail amvalidate From ed22fb8b0091deea23747310fa7609079a96cf82 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 19 Sep 2017 15:09:34 -0400 Subject: [PATCH 0215/1087] Cache datatype-output-function lookup info across calls of concat(). Testing indicates this can save a third to a half of the runtime of the function. 
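The mechanism is the standard fn_extra caching idiom: do the catalog lookups once, stash the resulting FmgrInfo array in flinfo->fn_extra (allocated in fn_mcxt, so it lives exactly as long as the expression state), and reuse it on every subsequent call. A minimal, self-contained sketch of the same idiom for a one-argument function follows; the function name my_datum_to_text is illustrative only, not part of this patch.

#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_datum_to_text);

/*
 * Render the argument as text via its type output function, looking the
 * function up only on the first call and caching it in fn_extra.
 * Assumes a STRICT declaration at the SQL level, so no null handling.
 */
Datum
my_datum_to_text(PG_FUNCTION_ARGS)
{
    FmgrInfo   *fout = (FmgrInfo *) fcinfo->flinfo->fn_extra;

    if (fout == NULL)
    {
        Oid         valtype = get_fn_expr_argtype(fcinfo->flinfo, 0);
        Oid         typOutput;
        bool        typIsVarlena;

        if (!OidIsValid(valtype))
            elog(ERROR, "could not determine input data type");
        getTypeOutputInfo(valtype, &typOutput, &typIsVarlena);

        /* allocate in fn_mcxt so the cache survives across calls */
        fout = (FmgrInfo *) MemoryContextAlloc(fcinfo->flinfo->fn_mcxt,
                                               sizeof(FmgrInfo));
        fmgr_info_cxt(typOutput, fout, fcinfo->flinfo->fn_mcxt);
        fcinfo->flinfo->fn_extra = fout;
    }

    /* all later calls skip the catalog lookups entirely */
    PG_RETURN_TEXT_P(cstring_to_text(OutputFunctionCall(fout,
                                                        PG_GETARG_DATUM(0))));
}

Because fn_extra is reset whenever the executor rebuilds the expression state, the cache needs no explicit invalidation, which is what makes it safe here: the argument types of any one concat() call site are fixed across the series of calls.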
Pavel Stehule, reviewed by Alexander Kuzmenkov Discussion: https://postgr.es/m/CAFj8pRAT62pRgjoHbgTfJUc2uLmeQ4saUj+yVJAEZUiMwNCmdg@mail.gmail.com --- src/backend/utils/adt/varlena.c | 54 +++++++++++++++++++++++++++------ 1 file changed, 45 insertions(+), 9 deletions(-) diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c index ebfb823fb8..260efd519a 100644 --- a/src/backend/utils/adt/varlena.c +++ b/src/backend/utils/adt/varlena.c @@ -4733,11 +4733,48 @@ string_agg_finalfn(PG_FUNCTION_ARGS) PG_RETURN_NULL(); } +/* + * Prepare cache with fmgr info for the output functions of the datatypes of + * the arguments of a concat-like function, beginning with argument "argidx". + * (Arguments before that will have corresponding slots in the resulting + * FmgrInfo array, but we don't fill those slots.) + */ +static FmgrInfo * +build_concat_foutcache(FunctionCallInfo fcinfo, int argidx) +{ + FmgrInfo *foutcache; + int i; + + /* We keep the info in fn_mcxt so it survives across calls */ + foutcache = (FmgrInfo *) MemoryContextAlloc(fcinfo->flinfo->fn_mcxt, + PG_NARGS() * sizeof(FmgrInfo)); + + for (i = argidx; i < PG_NARGS(); i++) + { + Oid valtype; + Oid typOutput; + bool typIsVarlena; + + valtype = get_fn_expr_argtype(fcinfo->flinfo, i); + if (!OidIsValid(valtype)) + elog(ERROR, "could not determine data type of concat() input"); + + getTypeOutputInfo(valtype, &typOutput, &typIsVarlena); + fmgr_info_cxt(typOutput, &foutcache[i], fcinfo->flinfo->fn_mcxt); + } + + fcinfo->flinfo->fn_extra = foutcache; + + return foutcache; +} + /* * Implementation of both concat() and concat_ws(). * * sepstr is the separator string to place between values. - * argidx identifies the first argument to concatenate (counting from zero). + * argidx identifies the first argument to concatenate (counting from zero); + * note that this must be constant across any one series of calls. + * * Returns NULL if result should be NULL, else text value. 
*/ static text * @@ -4746,6 +4783,7 @@ concat_internal(const char *sepstr, int argidx, { text *result; StringInfoData str; + FmgrInfo *foutcache; bool first_arg = true; int i; @@ -4787,14 +4825,16 @@ concat_internal(const char *sepstr, int argidx, /* Normal case without explicit VARIADIC marker */ initStringInfo(&str); + /* Get output function info, building it if first time through */ + foutcache = (FmgrInfo *) fcinfo->flinfo->fn_extra; + if (foutcache == NULL) + foutcache = build_concat_foutcache(fcinfo, argidx); + for (i = argidx; i < PG_NARGS(); i++) { if (!PG_ARGISNULL(i)) { Datum value = PG_GETARG_DATUM(i); - Oid valtype; - Oid typOutput; - bool typIsVarlena; /* add separator if appropriate */ if (first_arg) @@ -4803,12 +4843,8 @@ concat_internal(const char *sepstr, int argidx, appendStringInfoString(&str, sepstr); /* call the appropriate type output function, append the result */ - valtype = get_fn_expr_argtype(fcinfo->flinfo, i); - if (!OidIsValid(valtype)) - elog(ERROR, "could not determine data type of concat() input"); - getTypeOutputInfo(valtype, &typOutput, &typIsVarlena); appendStringInfoString(&str, - OidOutputFunctionCall(typOutput, value)); + OutputFunctionCall(&foutcache[i], value)); } } From d1687c6926819f023c78b353458950a303796aba Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Tue, 19 Sep 2017 15:31:37 -0400 Subject: [PATCH 0216/1087] Disable multi-byte citext tests This reverts commit 890faaf1 which attempted unsuccessfully to deal with the problem, and instead just comments out these tests like other similar tests elsewhere in the script. --- contrib/citext/expected/citext.out | 32 ++++++---------------------- contrib/citext/expected/citext_1.out | 32 ++++++---------------------- contrib/citext/sql/citext.sql | 14 ++++++------ 3 files changed, 19 insertions(+), 59 deletions(-) diff --git a/contrib/citext/expected/citext.out b/contrib/citext/expected/citext.out index ff0a6ed588..95373182af 100644 --- a/contrib/citext/expected/citext.out +++ b/contrib/citext/expected/citext.out @@ -1,8 +1,6 @@ -- -- Test citext datatype -- ---- script setup -set client_encoding = 'utf8'; CREATE EXTENSION citext; -- Check whether any of our opclasses fail amvalidate SELECT amname, opcname @@ -2599,6 +2597,8 @@ SELECT citext_pattern_ge('b'::citext, 'A'::citext) AS true; t (1 row) +-- Multi-byte tests below are diabled like the sanity tests above. +-- Uncomment to run them. -- Test ~<~ and ~<=~ SELECT 'a'::citext ~<~ 'B'::citext AS t; t @@ -2612,12 +2612,7 @@ SELECT 'b'::citext ~<~ 'A'::citext AS f; f (1 row) -SELECT 'à'::citext ~<~ 'À'::citext AS f; - f ---- - f -(1 row) - +-- SELECT 'à'::citext ~<~ 'À'::citext AS f; SELECT 'a'::citext ~<=~ 'B'::citext AS t; t --- @@ -2630,12 +2625,7 @@ SELECT 'a'::citext ~<=~ 'A'::citext AS t; t (1 row) -SELECT 'à'::citext ~<=~ 'À'::citext AS t; - t ---- - f -(1 row) - +-- SELECT 'à'::citext ~<=~ 'À'::citext AS t; -- Test ~>~ and ~>=~ SELECT 'B'::citext ~>~ 'a'::citext AS t; t @@ -2649,12 +2639,7 @@ SELECT 'b'::citext ~>~ 'A'::citext AS t; t (1 row) -SELECT 'à'::citext ~>~ 'À'::citext AS f; - f ---- - t -(1 row) - +-- SELECT 'à'::citext ~>~ 'À'::citext AS f; SELECT 'B'::citext ~>~ 'b'::citext AS f; f --- @@ -2667,12 +2652,7 @@ SELECT 'B'::citext ~>=~ 'b'::citext AS t; t (1 row) -SELECT 'à'::citext ~>=~ 'À'::citext AS t; - t ---- - t -(1 row) - +-- SELECT 'à'::citext ~>=~ 'À'::citext AS t; -- Test implicit casting. citext casts to text, but not vice-versa. SELECT 'B'::citext ~<~ 'a'::text AS t; -- text wins. 
t diff --git a/contrib/citext/expected/citext_1.out b/contrib/citext/expected/citext_1.out index 43a609b066..855ec3f10b 100644 --- a/contrib/citext/expected/citext_1.out +++ b/contrib/citext/expected/citext_1.out @@ -1,8 +1,6 @@ -- -- Test citext datatype -- ---- script setup -set client_encoding = 'utf8'; CREATE EXTENSION citext; -- Check whether any of our opclasses fail amvalidate SELECT amname, opcname @@ -2599,6 +2597,8 @@ SELECT citext_pattern_ge('b'::citext, 'A'::citext) AS true; t (1 row) +-- Multi-byte tests below are diabled like the sanity tests above. +-- Uncomment to run them. -- Test ~<~ and ~<=~ SELECT 'a'::citext ~<~ 'B'::citext AS t; t @@ -2612,12 +2612,7 @@ SELECT 'b'::citext ~<~ 'A'::citext AS f; f (1 row) -SELECT 'à'::citext ~<~ 'À'::citext AS f; - f ---- - f -(1 row) - +-- SELECT 'à'::citext ~<~ 'À'::citext AS f; SELECT 'a'::citext ~<=~ 'B'::citext AS t; t --- @@ -2630,12 +2625,7 @@ SELECT 'a'::citext ~<=~ 'A'::citext AS t; t (1 row) -SELECT 'à'::citext ~<=~ 'À'::citext AS t; - t ---- - t -(1 row) - +-- SELECT 'à'::citext ~<=~ 'À'::citext AS t; -- Test ~>~ and ~>=~ SELECT 'B'::citext ~>~ 'a'::citext AS t; t @@ -2649,12 +2639,7 @@ SELECT 'b'::citext ~>~ 'A'::citext AS t; t (1 row) -SELECT 'à'::citext ~>~ 'À'::citext AS f; - f ---- - f -(1 row) - +-- SELECT 'à'::citext ~>~ 'À'::citext AS f; SELECT 'B'::citext ~>~ 'b'::citext AS f; f --- @@ -2667,12 +2652,7 @@ SELECT 'B'::citext ~>=~ 'b'::citext AS t; t (1 row) -SELECT 'à'::citext ~>=~ 'À'::citext AS t; - t ---- - t -(1 row) - +-- SELECT 'à'::citext ~>=~ 'À'::citext AS t; -- Test implicit casting. citext casts to text, but not vice-versa. SELECT 'B'::citext ~<~ 'a'::text AS t; -- text wins. t diff --git a/contrib/citext/sql/citext.sql b/contrib/citext/sql/citext.sql index 91dd7d03d0..2732be436d 100644 --- a/contrib/citext/sql/citext.sql +++ b/contrib/citext/sql/citext.sql @@ -2,9 +2,6 @@ -- Test citext datatype -- ---- script setup -set client_encoding = 'utf8'; - CREATE EXTENSION citext; -- Check whether any of our opclasses fail amvalidate @@ -804,21 +801,24 @@ SELECT citext_pattern_ge('b'::citext, 'a'::citext) AS true; SELECT citext_pattern_ge('B'::citext, 'a'::citext) AS true; SELECT citext_pattern_ge('b'::citext, 'A'::citext) AS true; +-- Multi-byte tests below are diabled like the sanity tests above. +-- Uncomment to run them. + -- Test ~<~ and ~<=~ SELECT 'a'::citext ~<~ 'B'::citext AS t; SELECT 'b'::citext ~<~ 'A'::citext AS f; -SELECT 'à'::citext ~<~ 'À'::citext AS f; +-- SELECT 'à'::citext ~<~ 'À'::citext AS f; SELECT 'a'::citext ~<=~ 'B'::citext AS t; SELECT 'a'::citext ~<=~ 'A'::citext AS t; -SELECT 'à'::citext ~<=~ 'À'::citext AS t; +-- SELECT 'à'::citext ~<=~ 'À'::citext AS t; -- Test ~>~ and ~>=~ SELECT 'B'::citext ~>~ 'a'::citext AS t; SELECT 'b'::citext ~>~ 'A'::citext AS t; -SELECT 'à'::citext ~>~ 'À'::citext AS f; +-- SELECT 'à'::citext ~>~ 'À'::citext AS f; SELECT 'B'::citext ~>~ 'b'::citext AS f; SELECT 'B'::citext ~>=~ 'b'::citext AS t; -SELECT 'à'::citext ~>=~ 'À'::citext AS t; +-- SELECT 'à'::citext ~>=~ 'À'::citext AS t; -- Test implicit casting. citext casts to text, but not vice-versa. SELECT 'B'::citext ~<~ 'a'::text AS t; -- text wins. From 54b6cd589ac2f5635a42511236a5eb7299e2dcaf Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 19 Sep 2017 11:46:07 -0700 Subject: [PATCH 0217/1087] Speedup pgstat_report_activity by moving mb-aware truncation to read side. 
Previously multi-byte aware truncation was done on every pgstat_report_activity() call - proving to be a bottleneck for workloads with long query strings that execute quickly. Instead move the truncation to the read side, which commonly is executed far less frequently. That's possible because all server encodings allow to determine the length of a multi-byte string from the first byte. Rename PgBackendStatus.st_activity to st_activity_raw so existing extension users of the field break - their code has to be adjusted to use pgstat_clip_activity(). Author: Andres Freund Tested-By: Khuntal Ghosh Reviewed-By: Robert Haas, Tom Lane Discussion: https://postgr.es/m/20170912071948.pa7igbpkkkviecpz@alap3.anarazel.de --- src/backend/postmaster/pgstat.c | 63 ++++++++++++++++++++++------- src/backend/utils/adt/pgstatfuncs.c | 17 ++++++-- src/include/pgstat.h | 12 +++++- 3 files changed, 72 insertions(+), 20 deletions(-) diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index accf302cf7..1ffdac5448 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -2701,7 +2701,7 @@ CreateSharedBackendStatus(void) buffer = BackendActivityBuffer; for (i = 0; i < NumBackendStatSlots; i++) { - BackendStatusArray[i].st_activity = buffer; + BackendStatusArray[i].st_activity_raw = buffer; buffer += pgstat_track_activity_query_size; } } @@ -2922,11 +2922,11 @@ pgstat_bestart(void) #endif beentry->st_state = STATE_UNDEFINED; beentry->st_appname[0] = '\0'; - beentry->st_activity[0] = '\0'; + beentry->st_activity_raw[0] = '\0'; /* Also make sure the last byte in each string area is always 0 */ beentry->st_clienthostname[NAMEDATALEN - 1] = '\0'; beentry->st_appname[NAMEDATALEN - 1] = '\0'; - beentry->st_activity[pgstat_track_activity_query_size - 1] = '\0'; + beentry->st_activity_raw[pgstat_track_activity_query_size - 1] = '\0'; beentry->st_progress_command = PROGRESS_COMMAND_INVALID; beentry->st_progress_command_target = InvalidOid; @@ -3017,7 +3017,7 @@ pgstat_report_activity(BackendState state, const char *cmd_str) pgstat_increment_changecount_before(beentry); beentry->st_state = STATE_DISABLED; beentry->st_state_start_timestamp = 0; - beentry->st_activity[0] = '\0'; + beentry->st_activity_raw[0] = '\0'; beentry->st_activity_start_timestamp = 0; /* st_xact_start_timestamp and wait_event_info are also disabled */ beentry->st_xact_start_timestamp = 0; @@ -3034,8 +3034,12 @@ pgstat_report_activity(BackendState state, const char *cmd_str) start_timestamp = GetCurrentStatementStartTimestamp(); if (cmd_str != NULL) { - len = pg_mbcliplen(cmd_str, strlen(cmd_str), - pgstat_track_activity_query_size - 1); + /* + * Compute length of to-be-stored string unaware of multi-byte + * characters. For speed reasons that'll get corrected on read, rather + * than computed every write. 
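+ * (That correction happens in pgstat_clip_activity(), added at the end of
+ * this file.)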
+ */ + len = Min(strlen(cmd_str), pgstat_track_activity_query_size - 1); } current_timestamp = GetCurrentTimestamp(); @@ -3049,8 +3053,8 @@ pgstat_report_activity(BackendState state, const char *cmd_str) if (cmd_str != NULL) { - memcpy((char *) beentry->st_activity, cmd_str, len); - beentry->st_activity[len] = '\0'; + memcpy((char *) beentry->st_activity_raw, cmd_str, len); + beentry->st_activity_raw[len] = '\0'; beentry->st_activity_start_timestamp = start_timestamp; } @@ -3278,8 +3282,8 @@ pgstat_read_current_status(void) */ strcpy(localappname, (char *) beentry->st_appname); localentry->backendStatus.st_appname = localappname; - strcpy(localactivity, (char *) beentry->st_activity); - localentry->backendStatus.st_activity = localactivity; + strcpy(localactivity, (char *) beentry->st_activity_raw); + localentry->backendStatus.st_activity_raw = localactivity; localentry->backendStatus.st_ssl = beentry->st_ssl; #ifdef USE_SSL if (beentry->st_ssl) @@ -3945,10 +3949,13 @@ pgstat_get_backend_current_activity(int pid, bool checkUser) /* Now it is safe to use the non-volatile pointer */ if (checkUser && !superuser() && beentry->st_userid != GetUserId()) return "<insufficient privilege>"; - else if (*(beentry->st_activity) == '\0') + else if (*(beentry->st_activity_raw) == '\0') return "<command string not enabled>"; else - return beentry->st_activity; + { + /* this'll leak a bit of memory, but that seems acceptable */ + return pgstat_clip_activity(beentry->st_activity_raw); + } } beentry++; @@ -3994,7 +4001,7 @@ pgstat_get_crashed_backend_activity(int pid, char *buffer, int buflen) if (beentry->st_procpid == pid) { /* Read pointer just once, so it can't change after validation */ - const char *activity = beentry->st_activity; + const char *activity = beentry->st_activity_raw; const char *activity_last; /* @@ -4017,7 +4024,8 @@ pgstat_get_crashed_backend_activity(int pid, char *buffer, int buflen) /* * Copy only ASCII-safe characters so we don't run into encoding * problems when reporting the message; and be sure not to run off - * the end of memory. + * the end of memory. As only ASCII characters are reported, it + * doesn't seem necessary to perform multibyte aware clipping. */ ascii_safe_strlcpy(buffer, activity, Min(buflen, pgstat_track_activity_query_size)); @@ -6270,3 +6278,30 @@ pgstat_db_requested(Oid databaseid) return false; } + +/* + * Convert a potentially unsafely truncated activity string (see + * PgBackendStatus.st_activity_raw's documentation) into a correctly truncated + * one. + * + * The returned string is allocated in the caller's memory context and may be + * freed. + */ +char * +pgstat_clip_activity(const char *activity) +{ + int rawlen = strnlen(activity, pgstat_track_activity_query_size - 1); + int cliplen; + + /* + * All supported server-encodings make it possible to determine the length + * of a multi-byte character from its first byte (this is not the case for + * client encodings, see GB18030). As st_activity is always stored using + * server encoding, this allows us to perform multi-byte aware truncation, + * even if the string earlier was truncated in the middle of a multi-byte + * character.
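+ * (GB18030 is supported only as a client-side encoding for exactly this
+ * reason: a character's byte length there cannot be determined from its
+ * first byte alone.)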
*/ cliplen = pg_mbcliplen(activity, rawlen, + pgstat_track_activity_query_size - 1); + return pnstrdup(activity, cliplen); +} diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c index 20ce48b2d8..5a968e3758 100644 --- a/src/backend/utils/adt/pgstatfuncs.c +++ b/src/backend/utils/adt/pgstatfuncs.c @@ -664,6 +664,7 @@ pg_stat_get_activity(PG_FUNCTION_ARGS) is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_ALL_STATS)) { SockAddr zero_clientaddr; + char *clipped_activity; switch (beentry->st_state) { @@ -690,7 +691,9 @@ pg_stat_get_activity(PG_FUNCTION_ARGS) break; } - values[5] = CStringGetTextDatum(beentry->st_activity); + clipped_activity = pgstat_clip_activity(beentry->st_activity_raw); + values[5] = CStringGetTextDatum(clipped_activity); + pfree(clipped_activity); proc = BackendPidGetProc(beentry->st_procpid); if (proc != NULL) @@ -906,17 +909,23 @@ pg_stat_get_backend_activity(PG_FUNCTION_ARGS) int32 beid = PG_GETARG_INT32(0); PgBackendStatus *beentry; const char *activity; + char *clipped_activity; + text *ret; if ((beentry = pgstat_fetch_stat_beentry(beid)) == NULL) activity = "<backend information not available>"; else if (!has_privs_of_role(GetUserId(), beentry->st_userid)) activity = "<insufficient privilege>"; - else if (*(beentry->st_activity) == '\0') + else if (*(beentry->st_activity_raw) == '\0') activity = "<command string not enabled>"; else - activity = beentry->st_activity; + activity = beentry->st_activity_raw; - PG_RETURN_TEXT_P(cstring_to_text(activity)); + clipped_activity = pgstat_clip_activity(activity); + ret = cstring_to_text(clipped_activity); + pfree(clipped_activity); + + PG_RETURN_TEXT_P(ret); } Datum diff --git a/src/include/pgstat.h b/src/include/pgstat.h index 57ac5d41e4..52af0aa541 100644 --- a/src/include/pgstat.h +++ b/src/include/pgstat.h @@ -1003,8 +1003,14 @@ typedef struct PgBackendStatus /* application name; MUST be null-terminated */ char *st_appname; - /* current command string; MUST be null-terminated */ - char *st_activity; + /* + * Current command string; MUST be null-terminated. Note that this string + * possibly is truncated in the middle of a multi-byte character. As + * activity strings are stored more frequently than read, that allows to + * move the cost of correct truncation to the display side. Use + * pgstat_clip_activity() to truncate correctly. + */ + char *st_activity_raw; /* * Command progress reporting. Any command which wishes can advertise @@ -1193,6 +1199,8 @@ extern PgStat_BackendFunctionEntry *find_funcstat_entry(Oid func_id); extern void pgstat_initstats(Relation rel); +extern char *pgstat_clip_activity(const char *activity); + /* ---------- * pgstat_report_wait_start() - * From 71edbb6f66f7139d6209334ef8734a122ba06b56 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 19 Sep 2017 14:17:20 -0700 Subject: [PATCH 0218/1087] Avoid use of non-portable strnlen() in pgstat_clip_activity(). The use of strnlen rather than strlen was just paranoia. Instead of giving up on the paranoia, just implement the safeguard differently. And add a comment explaining why we're careful. Author: Andres Freund Discussion: https://postgr.es/m/E1duOkJ-0001Mc-U5@gemulon.postgresql.org --- src/backend/postmaster/pgstat.c | 25 +++++++++++++++++++++---- src/include/pgstat.h | 2 +- 2 files changed, 22 insertions(+), 5 deletions(-) diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index 1ffdac5448..9e2dce4f4c 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -6288,10 +6288,24 @@ pgstat_db_requested(Oid databaseid) * freed.
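 *
 * A sketch of typical caller usage (hypothetical extension code adjusting to
 * the st_activity_raw rename, not part of this patch):
 *
 *		char *activity = pgstat_clip_activity(beentry->st_activity_raw);
 *		... use the correctly clipped string ...
 *		pfree(activity);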
*/ char * -pgstat_clip_activity(const char *activity) +pgstat_clip_activity(const char *raw_activity) { - int rawlen = strnlen(activity, pgstat_track_activity_query_size - 1); - int cliplen; + char *activity; + int rawlen; + int cliplen; + + /* + * Some callers, like pgstat_get_backend_current_activity(), do not + * guarantee that the buffer isn't concurrently modified. We try to take + * care that the buffer is always terminated by a NULL byte regardless, + * but let's still be paranoid about the string's length. In those cases + * the underlying buffer is guaranteed to be + * pgstat_track_activity_query_size large. + */ + activity = pnstrdup(raw_activity, pgstat_track_activity_query_size - 1); + + /* now double-guaranteed to be NULL terminated */ + rawlen = strlen(activity); /* * All supported server-encodings make it possible to determine the length @@ -6303,5 +6317,8 @@ pgstat_clip_activity(const char *activity) */ cliplen = pg_mbcliplen(activity, rawlen, pgstat_track_activity_query_size - 1); - return pnstrdup(activity, cliplen); + + activity[cliplen] = '\0'; + + return activity; } diff --git a/src/include/pgstat.h b/src/include/pgstat.h index 52af0aa541..089b7c3a10 100644 --- a/src/include/pgstat.h +++ b/src/include/pgstat.h @@ -1199,7 +1199,7 @@ extern PgStat_BackendFunctionEntry *find_funcstat_entry(Oid func_id); extern void pgstat_initstats(Relation rel); -extern char *pgstat_clip_activity(const char *activity); +extern char *pgstat_clip_activity(const char *raw_activity); /* ---------- * pgstat_report_wait_start() - From f41e56c76e39f02bef7ba002c9de03d62b76de4d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 19 Sep 2017 18:29:12 -0400 Subject: [PATCH 0219/1087] Add basic TAP test setup for pg_upgrade The plan is to convert the current pg_upgrade test to the TAP framework. This commit just puts a basic TAP test in place so that we can see how the build farm behaves, since the build farm client has some special knowledge of the pg_upgrade tests. Author: Michael Paquier --- src/bin/pg_upgrade/Makefile | 7 ++++--- src/bin/pg_upgrade/t/001_basic.pl | 9 +++++++++ 2 files changed, 13 insertions(+), 3 deletions(-) create mode 100644 src/bin/pg_upgrade/t/001_basic.pl diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile index 1d6ee702c6..e5c98596a1 100644 --- a/src/bin/pg_upgrade/Makefile +++ b/src/bin/pg_upgrade/Makefile @@ -36,8 +36,9 @@ clean distclean maintainer-clean: pg_upgrade_dump_globals.sql \ pg_upgrade_dump_*.custom pg_upgrade_*.log -check: test.sh all +check: test.sh + $(prove_check) MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) EXTRA_REGRESS_OPTS="$(EXTRA_REGRESS_OPTS)" $(SHELL) $< --install -# installcheck is not supported because there's no meaningful way to test -# pg_upgrade against a single already-running server +installcheck: + $(prove_installcheck) diff --git a/src/bin/pg_upgrade/t/001_basic.pl b/src/bin/pg_upgrade/t/001_basic.pl new file mode 100644 index 0000000000..605a7f622f --- /dev/null +++ b/src/bin/pg_upgrade/t/001_basic.pl @@ -0,0 +1,9 @@ +use strict; +use warnings; + +use TestLib; +use Test::More tests => 8; + +program_help_ok('pg_upgrade'); +program_version_ok('pg_upgrade'); +program_options_handling_ok('pg_upgrade'); From 896537f078ba4d709ce754ecaff8350fd55bdfd8 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 19 Sep 2017 16:39:18 -0700 Subject: [PATCH 0220/1087] s/NULL byte/NUL byte/ in comment refering to C string terminator. 
Reported-By: Robert Haas Discussion: https://postgr.es/m/CA+Tgmoa+YBvWgFST2NVoeXjVSohEpK=vqnVCsoCkhTVVxfLcVQ@mail.gmail.com --- src/backend/postmaster/pgstat.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index 9e2dce4f4c..fd6ebc976a 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -6297,14 +6297,14 @@ pgstat_clip_activity(const char *raw_activity) /* * Some callers, like pgstat_get_backend_current_activity(), do not * guarantee that the buffer isn't concurrently modified. We try to take - * care that the buffer is always terminated by a NULL byte regardless, - * but let's still be paranoid about the string's length. In those cases - * the underlying buffer is guaranteed to be - * pgstat_track_activity_query_size large. + * care that the buffer is always terminated by a NUL byte regardless, but + * let's still be paranoid about the string's length. In those cases the + * underlying buffer is guaranteed to be pgstat_track_activity_query_size + * large. */ activity = pnstrdup(raw_activity, pgstat_track_activity_query_size - 1); - /* now double-guaranteed to be NULL terminated */ + /* now double-guaranteed to be NUL terminated */ rawlen = strlen(activity); /* From d3a4f89d8a3e500bd7c0b7a8a8a5ce1b47859128 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 19 Sep 2017 23:32:27 -0400 Subject: [PATCH 0221/1087] Allow no-op GiST support functions to be omitted. There are common use-cases in which the compress and/or decompress functions can be omitted, with the result being that we make no data transformation when storing or retrieving index values. Previously, you had to provide a no-op function anyway, but this patch allows such opclass support functions to be omitted. Furthermore, if the compress function is omitted, then the core code knows that the stored representation is the same as the original data. This means we can allow index-only scans without requiring a fetch function to be provided either. Previously you had to provide a no-op fetch function if you wanted IOS to work. This reportedly provides a small performance benefit in such cases, but IMO the real reason for doing it is just to reduce the amount of useless boilerplate code that has to be written for GiST opclasses. Andrey Borodin, reviewed by Dmitriy Sarafannikov Discussion: https://postgr.es/m/CAJEAwVELVx9gYscpE=Be6iJxvdW5unZ_LkcAaVNSeOwvdwtD=A@mail.gmail.com --- doc/src/sgml/gist.sgml | 34 +++++++++++++++------ src/backend/access/gist/gist.c | 24 +++++++++++---- src/backend/access/gist/gistget.c | 4 ++- src/backend/access/gist/gistutil.c | 42 +++++++++++++++++++++++--- src/backend/access/gist/gistvalidate.c | 3 +- 5 files changed, 85 insertions(+), 22 deletions(-) diff --git a/doc/src/sgml/gist.sgml b/doc/src/sgml/gist.sgml index b3cc347e5c..1648eb3672 100644 --- a/doc/src/sgml/gist.sgml +++ b/doc/src/sgml/gist.sgml @@ -267,14 +267,14 @@ CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); - There are seven methods that an index operator class for - GiST must provide, and two that are optional. + There are five methods that an index operator class for + GiST must provide, and four that are optional. Correctness of the index is ensured by proper implementation of the same, consistent and union methods, while efficiency (size and speed) of the index will depend on the penalty and picksplit methods. 
- The remaining two basic methods are compress and + Two optional methods are compress and decompress, which allow an index to have internal tree data of a different type than the data it indexes. The leaves are to be of the indexed data type, while the other tree nodes can be of any C struct (but @@ -285,7 +285,8 @@ CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); The optional eighth method is distance, which is needed if the operator class wishes to support ordered scans (nearest-neighbor searches). The optional ninth method fetch is needed if the - operator class wishes to support index-only scans. + operator class wishes to support index-only scans, except when the + compress method is omitted. @@ -468,8 +469,10 @@ my_union(PG_FUNCTION_ARGS) compress - Converts the data item into a format suitable for physical storage in + Converts a data item into a format suitable for physical storage in an index page. + If the compress method is omitted, data items are stored + in the index without modification. @@ -527,9 +530,17 @@ my_compress(PG_FUNCTION_ARGS) decompress - The reverse of the compress method. Converts the - index representation of the data item into a format that can be - manipulated by the other GiST methods in the operator class. + Converts the stored representation of a data item into a format that + can be manipulated by the other GiST methods in the operator class. + If the decompress method is omitted, it is assumed that + the other GiST methods can work directly on the stored data format. + (decompress is not necessarily the reverse of + the compress method; in particular, + if compress is lossy then it's impossible + for decompress to exactly reconstruct the original + data. decompress is not necessarily equivalent + to fetch, either, since the other GiST methods might not + require full reconstruction of the data.) @@ -555,7 +566,8 @@ my_decompress(PG_FUNCTION_ARGS) The above skeleton is suitable for the case where no decompression - is needed. + is needed. (But, of course, omitting the method altogether is even + easier, and is recommended in such cases.) @@ -883,7 +895,9 @@ LANGUAGE C STRICT; struct, whose key field contains the same datum in its original, uncompressed form. If the opclass's compress function does nothing for leaf entries, the fetch method can return the - argument as-is. + argument as-is. Or, if the opclass does not have a compress function, + the fetch method can be omitted as well, since it would + necessarily be a no-op. 
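For illustration, a minimal operator class relying on these defaults could be declared
as follows (a sketch only: my_box_ops is a hypothetical name, and the support-function
signatures follow the box_ops regression-test opclass updated later in this series):

CREATE OPERATOR CLASS my_box_ops
FOR TYPE box USING gist AS
	OPERATOR 14 @,
	FUNCTION 1 gist_box_consistent(internal, box, smallint, oid, internal),
	FUNCTION 2 gist_box_union(internal, internal),
	-- support functions 3 (compress), 4 (decompress), and 9 (fetch) are omitted
	FUNCTION 5 gist_box_penalty(internal, internal, internal),
	FUNCTION 6 gist_box_picksplit(internal, internal),
	FUNCTION 7 gist_box_same(box, box, internal);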
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c index 565525bbdf..aec174cd00 100644 --- a/src/backend/access/gist/gist.c +++ b/src/backend/access/gist/gist.c @@ -1453,12 +1453,23 @@ initGISTstate(Relation index) fmgr_info_copy(&(giststate->unionFn[i]), index_getprocinfo(index, i + 1, GIST_UNION_PROC), scanCxt); - fmgr_info_copy(&(giststate->compressFn[i]), - index_getprocinfo(index, i + 1, GIST_COMPRESS_PROC), - scanCxt); - fmgr_info_copy(&(giststate->decompressFn[i]), - index_getprocinfo(index, i + 1, GIST_DECOMPRESS_PROC), - scanCxt); + + /* opclasses are not required to provide a Compress method */ + if (OidIsValid(index_getprocid(index, i + 1, GIST_COMPRESS_PROC))) + fmgr_info_copy(&(giststate->compressFn[i]), + index_getprocinfo(index, i + 1, GIST_COMPRESS_PROC), + scanCxt); + else + giststate->compressFn[i].fn_oid = InvalidOid; + + /* opclasses are not required to provide a Decompress method */ + if (OidIsValid(index_getprocid(index, i + 1, GIST_DECOMPRESS_PROC))) + fmgr_info_copy(&(giststate->decompressFn[i]), + index_getprocinfo(index, i + 1, GIST_DECOMPRESS_PROC), + scanCxt); + else + giststate->decompressFn[i].fn_oid = InvalidOid; + fmgr_info_copy(&(giststate->penaltyFn[i]), index_getprocinfo(index, i + 1, GIST_PENALTY_PROC), scanCxt); @@ -1468,6 +1479,7 @@ initGISTstate(Relation index) fmgr_info_copy(&(giststate->equalFn[i]), index_getprocinfo(index, i + 1, GIST_EQUAL_PROC), scanCxt); + /* opclasses are not required to provide a Distance method */ if (OidIsValid(index_getprocid(index, i + 1, GIST_DISTANCE_PROC))) fmgr_info_copy(&(giststate->distanceFn[i]), diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c index 760ea0c997..06dac0bb53 100644 --- a/src/backend/access/gist/gistget.c +++ b/src/backend/access/gist/gistget.c @@ -801,11 +801,13 @@ gistgetbitmap(IndexScanDesc scan, TIDBitmap *tbm) * Can we do index-only scans on the given index column? * * Opclasses that implement a fetch function support index-only scans. + * Opclasses without compression functions also support index-only scans. 
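+ * (In the latter case the stored key is the original datum, so it can be
+ * returned as-is; see gistFetchTuple.)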
*/ bool gistcanreturn(Relation index, int attno) { - if (OidIsValid(index_getprocid(index, attno, GIST_FETCH_PROC))) + if (OidIsValid(index_getprocid(index, attno, GIST_FETCH_PROC)) || + !OidIsValid(index_getprocid(index, attno, GIST_COMPRESS_PROC))) return true; else return false; diff --git a/src/backend/access/gist/gistutil.c b/src/backend/access/gist/gistutil.c index b6ccc1a66a..26d89f79ae 100644 --- a/src/backend/access/gist/gistutil.c +++ b/src/backend/access/gist/gistutil.c @@ -550,6 +550,11 @@ gistdentryinit(GISTSTATE *giststate, int nkey, GISTENTRY *e, GISTENTRY *dep; gistentryinit(*e, k, r, pg, o, l); + + /* there may not be a decompress function in opclass */ + if (!OidIsValid(giststate->decompressFn[nkey].fn_oid)) + return; + dep = (GISTENTRY *) DatumGetPointer(FunctionCall1Coll(&giststate->decompressFn[nkey], giststate->supportCollation[nkey], @@ -585,10 +590,14 @@ gistFormTuple(GISTSTATE *giststate, Relation r, gistentryinit(centry, attdata[i], r, NULL, (OffsetNumber) 0, isleaf); - cep = (GISTENTRY *) - DatumGetPointer(FunctionCall1Coll(&giststate->compressFn[i], - giststate->supportCollation[i], - PointerGetDatum(¢ry))); + /* there may not be a compress function in opclass */ + if (OidIsValid(giststate->compressFn[i].fn_oid)) + cep = (GISTENTRY *) + DatumGetPointer(FunctionCall1Coll(&giststate->compressFn[i], + giststate->supportCollation[i], + PointerGetDatum(¢ry))); + else + cep = ¢ry; compatt[i] = cep->key; } } @@ -648,6 +657,17 @@ gistFetchTuple(GISTSTATE *giststate, Relation r, IndexTuple tuple) else fetchatt[i] = (Datum) 0; } + else if (giststate->compressFn[i].fn_oid == InvalidOid) + { + /* + * If opclass does not provide compress method that could change + * original value, att is necessarily stored in original form. + */ + if (!isnull[i]) + fetchatt[i] = datum; + else + fetchatt[i] = (Datum) 0; + } else { /* @@ -934,6 +954,20 @@ gistproperty(Oid index_oid, int attno, ObjectIdGetDatum(opcintype), ObjectIdGetDatum(opcintype), Int16GetDatum(procno)); + + /* + * Special case: even without a fetch function, AMPROP_RETURNABLE is true + * if the opclass has no compress function. + */ + if (prop == AMPROP_RETURNABLE && !*res) + { + *res = !SearchSysCacheExists4(AMPROCNUM, + ObjectIdGetDatum(opfamily), + ObjectIdGetDatum(opcintype), + ObjectIdGetDatum(opcintype), + Int16GetDatum(GIST_COMPRESS_PROC)); + } + return true; } diff --git a/src/backend/access/gist/gistvalidate.c b/src/backend/access/gist/gistvalidate.c index 42254c5f15..42f91ac0c9 100644 --- a/src/backend/access/gist/gistvalidate.c +++ b/src/backend/access/gist/gistvalidate.c @@ -258,7 +258,8 @@ gistvalidate(Oid opclassoid) if (opclassgroup && (opclassgroup->functionset & (((uint64) 1) << i)) != 0) continue; /* got it */ - if (i == GIST_DISTANCE_PROC || i == GIST_FETCH_PROC) + if (i == GIST_DISTANCE_PROC || i == GIST_FETCH_PROC || + i == GIST_COMPRESS_PROC || i == GIST_DECOMPRESS_PROC) continue; /* optional methods */ ereport(INFO, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), From 2d484f9b058035d41204f2eb8a0a8d2e8ee57b44 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 19 Sep 2017 23:32:45 -0400 Subject: [PATCH 0222/1087] Remove no-op GiST support functions in the core GiST opclasses. The preceding patch allowed us to remove useless GiST support functions. This patch actually does that for all the no-op cases in the core GiST code. This buys us whatever performance gain is to be had, and more importantly exercises the preceding patch. 
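All of the removals below follow the same trivial pattern. A representative sketch of
such a no-op support function (my_noop_decompress is a generic stand-in, not one of the
functions removed here; it assumes the usual fmgr.h and access/gist.h declarations):

Datum
my_noop_decompress(PG_FUNCTION_ARGS)
{
	/* hand back the input GISTENTRY pointer unchanged */
	GISTENTRY  *entry = (GISTENTRY *) PG_GETARG_POINTER(0);

	PG_RETURN_POINTER(entry);
}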
There remain no-op functions in the contrib GiST opclasses, but those will take more work to remove. Discussion: https://postgr.es/m/CAJEAwVELVx9gYscpE=Be6iJxvdW5unZ_LkcAaVNSeOwvdwtD=A@mail.gmail.com --- src/backend/access/gist/gistproc.c | 32 ++----------------------- src/backend/utils/adt/network_gist.c | 12 ++-------- src/backend/utils/adt/rangetypes_gist.c | 29 ++++------------------ src/backend/utils/adt/tsgistidx.c | 4 ++++ src/backend/utils/adt/tsquery_gist.c | 9 ++++--- src/include/catalog/catversion.h | 2 +- src/include/catalog/pg_amproc.h | 11 --------- src/include/catalog/pg_proc.h | 16 ------------- src/test/regress/expected/create_am.out | 6 ++--- src/test/regress/sql/create_am.sql | 6 ++--- 10 files changed, 22 insertions(+), 105 deletions(-) diff --git a/src/backend/access/gist/gistproc.c b/src/backend/access/gist/gistproc.c index 08990f5a1b..d1919fc74b 100644 --- a/src/backend/access/gist/gistproc.c +++ b/src/backend/access/gist/gistproc.c @@ -185,37 +185,9 @@ gist_box_union(PG_FUNCTION_ARGS) } /* - * GiST Compress methods for boxes - * - * do not do anything. + * We store boxes as boxes in GiST indexes, so we do not need + * compress, decompress, or fetch functions. */ -Datum -gist_box_compress(PG_FUNCTION_ARGS) -{ - PG_RETURN_POINTER(PG_GETARG_POINTER(0)); -} - -/* - * GiST DeCompress method for boxes (also used for points, polygons - * and circles) - * - * do not do anything --- we just use the stored box as is. - */ -Datum -gist_box_decompress(PG_FUNCTION_ARGS) -{ - PG_RETURN_POINTER(PG_GETARG_POINTER(0)); -} - -/* - * GiST Fetch method for boxes - * do not do anything --- we just return the stored box as is. - */ -Datum -gist_box_fetch(PG_FUNCTION_ARGS) -{ - PG_RETURN_POINTER(PG_GETARG_POINTER(0)); -} /* * The GiST Penalty method for boxes (also used for points) diff --git a/src/backend/utils/adt/network_gist.c b/src/backend/utils/adt/network_gist.c index a0097dae9c..0e36b7685d 100644 --- a/src/backend/utils/adt/network_gist.c +++ b/src/backend/utils/adt/network_gist.c @@ -576,17 +576,9 @@ inet_gist_compress(PG_FUNCTION_ARGS) } /* - * The GiST decompress function - * - * do not do anything --- we just use the stored GistInetKey as-is. + * We do not need a decompress function, because the other GiST inet + * support functions work with the GistInetKey representation. */ -Datum -inet_gist_decompress(PG_FUNCTION_ARGS) -{ - GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - - PG_RETURN_POINTER(entry); -} /* * The GiST fetch function diff --git a/src/backend/utils/adt/rangetypes_gist.c b/src/backend/utils/adt/rangetypes_gist.c index cb2d5a3b73..29fa1ae325 100644 --- a/src/backend/utils/adt/rangetypes_gist.c +++ b/src/backend/utils/adt/rangetypes_gist.c @@ -216,30 +216,11 @@ range_gist_union(PG_FUNCTION_ARGS) PG_RETURN_RANGE_P(result_range); } -/* compress, decompress, fetch are no-ops */ -Datum -range_gist_compress(PG_FUNCTION_ARGS) -{ - GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - - PG_RETURN_POINTER(entry); -} - -Datum -range_gist_decompress(PG_FUNCTION_ARGS) -{ - GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - - PG_RETURN_POINTER(entry); -} - -Datum -range_gist_fetch(PG_FUNCTION_ARGS) -{ - GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); - - PG_RETURN_POINTER(entry); -} +/* + * We store ranges as ranges in GiST indexes, so we do not need + * compress, decompress, or fetch functions. Note this implies a limit + * on the size of range values that can be indexed. + */ /* * GiST page split penalty function. 
diff --git a/src/backend/utils/adt/tsgistidx.c b/src/backend/utils/adt/tsgistidx.c index 732d87f22f..578af5d512 100644 --- a/src/backend/utils/adt/tsgistidx.c +++ b/src/backend/utils/adt/tsgistidx.c @@ -272,6 +272,10 @@ gtsvector_compress(PG_FUNCTION_ARGS) Datum gtsvector_decompress(PG_FUNCTION_ARGS) { + /* + * We need to detoast the stored value, because the other gtsvector + * support functions don't cope with toasted values. + */ GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); SignTSVector *key = (SignTSVector *) PG_DETOAST_DATUM(entry->key); diff --git a/src/backend/utils/adt/tsquery_gist.c b/src/backend/utils/adt/tsquery_gist.c index 85518dc7d9..05bc0d6adb 100644 --- a/src/backend/utils/adt/tsquery_gist.c +++ b/src/backend/utils/adt/tsquery_gist.c @@ -43,11 +43,10 @@ gtsquery_compress(PG_FUNCTION_ARGS) PG_RETURN_POINTER(retval); } -Datum -gtsquery_decompress(PG_FUNCTION_ARGS) -{ - PG_RETURN_DATUM(PG_GETARG_DATUM(0)); -} +/* + * We do not need a decompress function, because the other gtsquery + * support functions work with the compressed representation. + */ Datum gtsquery_consistent(PG_FUNCTION_ARGS) diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 032b244fb8..5d57a95d8b 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201709131 +#define CATALOG_VERSION_NO 201709191 #endif diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h index fb6a829c90..1c95846207 100644 --- a/src/include/catalog/pg_amproc.h +++ b/src/include/catalog/pg_amproc.h @@ -230,7 +230,6 @@ DATA(insert ( 4034 3802 3802 2 3416)); DATA(insert ( 1029 600 600 1 2179 )); DATA(insert ( 1029 600 600 2 2583 )); DATA(insert ( 1029 600 600 3 1030 )); -DATA(insert ( 1029 600 600 4 2580 )); DATA(insert ( 1029 600 600 5 2581 )); DATA(insert ( 1029 600 600 6 2582 )); DATA(insert ( 1029 600 600 7 2584 )); @@ -238,16 +237,12 @@ DATA(insert ( 1029 600 600 8 3064 )); DATA(insert ( 1029 600 600 9 3282 )); DATA(insert ( 2593 603 603 1 2578 )); DATA(insert ( 2593 603 603 2 2583 )); -DATA(insert ( 2593 603 603 3 2579 )); -DATA(insert ( 2593 603 603 4 2580 )); DATA(insert ( 2593 603 603 5 2581 )); DATA(insert ( 2593 603 603 6 2582 )); DATA(insert ( 2593 603 603 7 2584 )); -DATA(insert ( 2593 603 603 9 3281 )); DATA(insert ( 2594 604 604 1 2585 )); DATA(insert ( 2594 604 604 2 2583 )); DATA(insert ( 2594 604 604 3 2586 )); -DATA(insert ( 2594 604 604 4 2580 )); DATA(insert ( 2594 604 604 5 2581 )); DATA(insert ( 2594 604 604 6 2582 )); DATA(insert ( 2594 604 604 7 2584 )); @@ -255,7 +250,6 @@ DATA(insert ( 2594 604 604 8 3288 )); DATA(insert ( 2595 718 718 1 2591 )); DATA(insert ( 2595 718 718 2 2583 )); DATA(insert ( 2595 718 718 3 2592 )); -DATA(insert ( 2595 718 718 4 2580 )); DATA(insert ( 2595 718 718 5 2581 )); DATA(insert ( 2595 718 718 6 2582 )); DATA(insert ( 2595 718 718 7 2584 )); @@ -270,22 +264,17 @@ DATA(insert ( 3655 3614 3614 7 3652 )); DATA(insert ( 3702 3615 3615 1 3701 )); DATA(insert ( 3702 3615 3615 2 3698 )); DATA(insert ( 3702 3615 3615 3 3695 )); -DATA(insert ( 3702 3615 3615 4 3696 )); DATA(insert ( 3702 3615 3615 5 3700 )); DATA(insert ( 3702 3615 3615 6 3697 )); DATA(insert ( 3702 3615 3615 7 3699 )); DATA(insert ( 3919 3831 3831 1 3875 )); DATA(insert ( 3919 3831 3831 2 3876 )); -DATA(insert ( 3919 3831 3831 3 3877 )); -DATA(insert ( 3919 3831 3831 4 3878 )); DATA(insert ( 3919 3831 3831 5 3879 )); DATA(insert ( 3919 3831 3831 6 3880 )); 
DATA(insert ( 3919 3831 3831 7 3881 )); -DATA(insert ( 3919 3831 3831 9 3996 )); DATA(insert ( 3550 869 869 1 3553 )); DATA(insert ( 3550 869 869 2 3554 )); DATA(insert ( 3550 869 869 3 3555 )); -DATA(insert ( 3550 869 869 4 3556 )); DATA(insert ( 3550 869 869 5 3557 )); DATA(insert ( 3550 869 869 6 3558 )); DATA(insert ( 3550 869 869 7 3559 )); diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index f73c6c6201..93c031aad7 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -2293,8 +2293,6 @@ DATA(insert OID = 3554 ( inet_gist_union PGNSP PGUID 12 1 0 0 0 f f f f t f i DESCR("GiST support"); DATA(insert OID = 3555 ( inet_gist_compress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ inet_gist_compress _null_ _null_ _null_ )); DESCR("GiST support"); -DATA(insert OID = 3556 ( inet_gist_decompress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ inet_gist_decompress _null_ _null_ _null_ )); -DESCR("GiST support"); DATA(insert OID = 3573 ( inet_gist_fetch PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ inet_gist_fetch _null_ _null_ _null_ )); DESCR("GiST support"); DATA(insert OID = 3557 ( inet_gist_penalty PGNSP PGUID 12 1 0 0 0 f f f f t f i s 3 0 2281 "2281 2281 2281" _null_ _null_ _null_ _null_ _null_ inet_gist_penalty _null_ _null_ _null_ )); @@ -4310,12 +4308,6 @@ DATA(insert OID = 2588 ( circle_overabove PGNSP PGUID 12 1 0 0 0 f f f f t f i /* support functions for GiST r-tree emulation */ DATA(insert OID = 2578 ( gist_box_consistent PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "2281 603 21 26 2281" _null_ _null_ _null_ _null_ _null_ gist_box_consistent _null_ _null_ _null_ )); DESCR("GiST support"); -DATA(insert OID = 2579 ( gist_box_compress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ gist_box_compress _null_ _null_ _null_ )); -DESCR("GiST support"); -DATA(insert OID = 2580 ( gist_box_decompress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ gist_box_decompress _null_ _null_ _null_ )); -DESCR("GiST support"); -DATA(insert OID = 3281 ( gist_box_fetch PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ gist_box_fetch _null_ _null_ _null_ )); -DESCR("GiST support"); DATA(insert OID = 2581 ( gist_box_penalty PGNSP PGUID 12 1 0 0 0 f f f f t f i s 3 0 2281 "2281 2281 2281" _null_ _null_ _null_ _null_ _null_ gist_box_penalty _null_ _null_ _null_ )); DESCR("GiST support"); DATA(insert OID = 2582 ( gist_box_picksplit PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 2281 "2281 2281" _null_ _null_ _null_ _null_ _null_ gist_box_picksplit _null_ _null_ _null_ )); @@ -4796,8 +4788,6 @@ DESCR("rewrite tsquery"); DATA(insert OID = 3695 ( gtsquery_compress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ gtsquery_compress _null_ _null_ _null_ )); DESCR("GiST tsquery support"); -DATA(insert OID = 3696 ( gtsquery_decompress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ gtsquery_decompress _null_ _null_ _null_ )); -DESCR("GiST tsquery support"); DATA(insert OID = 3697 ( gtsquery_picksplit PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 2281 "2281 2281" _null_ _null_ _null_ _null_ _null_ gtsquery_picksplit _null_ _null_ _null_ )); DESCR("GiST tsquery support"); DATA(insert OID = 3698 ( gtsquery_union PGNSP PGUID 12 1 0 0 0 f f f f 
t f i s 2 0 20 "2281 2281" _null_ _null_ _null_ _null_ _null_ gtsquery_union _null_ _null_ _null_ )); @@ -5218,12 +5208,6 @@ DATA(insert OID = 3875 ( range_gist_consistent PGNSP PGUID 12 1 0 0 0 f f f f t DESCR("GiST support"); DATA(insert OID = 3876 ( range_gist_union PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 3831 "2281 2281" _null_ _null_ _null_ _null_ _null_ range_gist_union _null_ _null_ _null_ )); DESCR("GiST support"); -DATA(insert OID = 3877 ( range_gist_compress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ range_gist_compress _null_ _null_ _null_ )); -DESCR("GiST support"); -DATA(insert OID = 3878 ( range_gist_decompress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ range_gist_decompress _null_ _null_ _null_ )); -DESCR("GiST support"); -DATA(insert OID = 3996 ( range_gist_fetch PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2281 "2281" _null_ _null_ _null_ _null_ _null_ range_gist_fetch _null_ _null_ _null_ )); -DESCR("GiST support"); DATA(insert OID = 3879 ( range_gist_penalty PGNSP PGUID 12 1 0 0 0 f f f f t f i s 3 0 2281 "2281 2281 2281" _null_ _null_ _null_ _null_ _null_ range_gist_penalty _null_ _null_ _null_ )); DESCR("GiST support"); DATA(insert OID = 3880 ( range_gist_picksplit PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 2281 "2281 2281" _null_ _null_ _null_ _null_ _null_ range_gist_picksplit _null_ _null_ _null_ )); diff --git a/src/test/regress/expected/create_am.out b/src/test/regress/expected/create_am.out index 1b464aae2d..47dd885c4e 100644 --- a/src/test/regress/expected/create_am.out +++ b/src/test/regress/expected/create_am.out @@ -26,12 +26,10 @@ CREATE OPERATOR CLASS box_ops DEFAULT OPERATOR 14 @, FUNCTION 1 gist_box_consistent(internal, box, smallint, oid, internal), FUNCTION 2 gist_box_union(internal, internal), - FUNCTION 3 gist_box_compress(internal), - FUNCTION 4 gist_box_decompress(internal), + -- don't need compress, decompress, or fetch functions FUNCTION 5 gist_box_penalty(internal, internal, internal), FUNCTION 6 gist_box_picksplit(internal, internal), - FUNCTION 7 gist_box_same(box, box, internal), - FUNCTION 9 gist_box_fetch(internal); + FUNCTION 7 gist_box_same(box, box, internal); -- Create gist2 index on fast_emp4000 CREATE INDEX grect2ind2 ON fast_emp4000 USING gist2 (home_base); -- Now check the results from plain indexscan; temporarily drop existing diff --git a/src/test/regress/sql/create_am.sql b/src/test/regress/sql/create_am.sql index 2f116d98c7..3e0ac104f3 100644 --- a/src/test/regress/sql/create_am.sql +++ b/src/test/regress/sql/create_am.sql @@ -27,12 +27,10 @@ CREATE OPERATOR CLASS box_ops DEFAULT OPERATOR 14 @, FUNCTION 1 gist_box_consistent(internal, box, smallint, oid, internal), FUNCTION 2 gist_box_union(internal, internal), - FUNCTION 3 gist_box_compress(internal), - FUNCTION 4 gist_box_decompress(internal), + -- don't need compress, decompress, or fetch functions FUNCTION 5 gist_box_penalty(internal, internal, internal), FUNCTION 6 gist_box_picksplit(internal, internal), - FUNCTION 7 gist_box_same(box, box, internal), - FUNCTION 9 gist_box_fetch(internal); + FUNCTION 7 gist_box_same(box, box, internal); -- Create gist2 index on fast_emp4000 CREATE INDEX grect2ind2 ON fast_emp4000 USING gist2 (home_base); From 5ada1fcd0c30be1b0b793a802cf6da386a6c1925 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 19 Sep 2017 21:29:51 -0700 Subject: [PATCH 0223/1087] Accept that server might not be able to send error in crash recovery test. 
As it turns out we can't rely that the script's monitoring session is terminated with a proper error by the server, because the session might be terminated while already trying to send data. Also improve robustness and error reporting facilities of the test, developed while debugging this issue. Discussion: https://postgr.es/m/20170920020038.kllxgilo7xzwmtto@alap3.anarazel.de --- src/test/recovery/t/013_crash_restart.pl | 98 ++++++++++++++++++------ 1 file changed, 74 insertions(+), 24 deletions(-) diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl index 161dbd86ee..23024716e6 100644 --- a/src/test/recovery/t/013_crash_restart.pl +++ b/src/test/recovery/t/013_crash_restart.pl @@ -24,14 +24,14 @@ } else { - plan tests => 12; + plan tests => 18; } # To avoid hanging while expecting some specific input from a psql # instance being driven by us, add a timeout high enough that it # should never trigger in a normal run, but low enough to actually see # failures in a realistic amount of time. -my $psql_timeout = 180; +my $psql_timeout = IPC::Run::timer(10); my $node = get_new_node('master'); $node->init(allows_streaming => 1); @@ -54,7 +54,7 @@ \$killme_stdout, '2>', \$killme_stderr, - IPC::Run::timeout($psql_timeout)); + $psql_timeout); # Need a second psql to check if crash-restart happened. my ($monitor_stdin, $monitor_stdout, $monitor_stderr) = ('', '', ''); @@ -67,7 +67,7 @@ \$monitor_stdout, '2>', \$monitor_stderr, - IPC::Run::timeout($psql_timeout)); + $psql_timeout); #create table, insert row that should survive $killme_stdin .= q[ @@ -75,18 +75,22 @@ INSERT INTO alive VALUES($$committed-before-sigquit$$); SELECT pg_backend_pid(); ]; -$killme->pump until $killme_stdout =~ /[[:digit:]]+[\r\n]$/; +ok(pump_until($killme, \$killme_stdout, qr/[[:digit:]]+[\r\n]$/m), + 'acquired pid for SIGQUIT'); my $pid = $killme_stdout; chomp($pid); $killme_stdout = ''; +$killme_stderr = ''; #insert a row that should *not* survive, due to in-progress xact $killme_stdin .= q[ BEGIN; INSERT INTO alive VALUES($$in-progress-before-sigquit$$) RETURNING status; ]; -$killme->pump until $killme_stdout =~ /in-progress-before-sigquit/; +ok(pump_until($killme, \$killme_stdout, qr/in-progress-before-sigquit/m), + 'inserted in-progress-before-sigquit'); $killme_stdout = ''; +$killme_stderr = ''; # Start longrunning query in second session, it's failure will signal @@ -96,8 +100,10 @@ SELECT $$psql-connected$$; SELECT pg_sleep(3600); ]; -$monitor->pump until $monitor_stdout =~ /psql-connected/; - +ok(pump_until($monitor, \$monitor_stdout, qr/psql-connected/m), + 'monitor connected'); +$monitor_stdout = ''; +$monitor_stderr = ''; # kill once with QUIT - we expect psql to exit, while emitting error message first my $cnt = kill 'QUIT', $pid; @@ -105,22 +111,27 @@ # Exactly process should have been alive to be killed is($cnt, 1, "exactly one process killed with SIGQUIT"); -# Check that psql sees the killed backend as having been terminated +# Check that psql sees the killed backend as having been terminated $killme_stdin .= q[ SELECT 1; ]; -$killme->pump until $killme_stderr =~ /WARNING: terminating connection because of crash of another server process/; - -ok(1, "psql query died successfully after SIGQUIT"); +ok(pump_until($killme, \$killme_stderr, qr/WARNING: terminating connection because of crash of another server process/m), + "psql query died successfully after SIGQUIT"); +$killme_stderr = ''; +$killme_stdout = ''; $killme->kill_kill; -# Check if the crash-restart cycle has 
occurred -$monitor->pump until $monitor_stderr =~ /WARNING: terminating connection because of crash of another server process/; +# Wait till server restarts - we should get the WARNING here, but +# sometimes the server is unable to send that, if interrupted while +# sending. +ok(pump_until($monitor, \$monitor_stderr, qr/WARNING: terminating connection because of crash of another server process|server closed the connection unexpectedly/m), + "psql monitor died successfully after SIGQUIT"); $monitor->kill_kill; -ok(1, "psql monitor died successfully after SIGQUIT"); # Wait till server restarts -is($node->poll_query_until('postgres', 'SELECT $$restarted$$;', 'restarted'), "1", "reconnected after SIGQUIT"); +is($node->poll_query_until('postgres', 'SELECT $$restarted after sigquit$$;', 'restarted after sigquit'), + "1", "reconnected after SIGQUIT"); + # restart psql processes, now that the crash cycle finished ($killme_stdin, $killme_stdout, $killme_stderr) = ('', '', ''); @@ -133,10 +144,13 @@ $killme_stdin .= q[ SELECT pg_backend_pid(); ]; -$killme->pump until $killme_stdout =~ /[[:digit:]]+[\r\n]$/; +ok(pump_until($killme, \$killme_stdout, qr/[[:digit:]]+[\r\n]$/m), + "acquired pid for SIGKILL"); $pid = $killme_stdout; chomp($pid); $pid = $killme_stdout; +$killme_stdout = ''; +$killme_stderr = ''; # Insert test rows $killme_stdin .= q[ @@ -144,8 +158,10 @@ BEGIN; INSERT INTO alive VALUES($$in-progress-before-sigkill$$) RETURNING status; ]; -$killme->pump until $killme_stdout =~ /in-progress-before-sigkill/; +ok(pump_until($killme, \$killme_stdout, qr/in-progress-before-sigkill/m), + 'inserted in-progress-before-sigkill'); $killme_stdout = ''; +$killme_stderr = ''; # Re-start longrunning query in second session, it's failure will # signal that crash-restart has occurred. The initial wait for the @@ -155,8 +171,10 @@ SELECT $$psql-connected$$; SELECT pg_sleep(3600); ]; -$monitor->pump until $monitor_stdout =~ /psql-connected/; +ok(pump_until($monitor, \$monitor_stdout, qr/psql-connected/m), + 'monitor connected'); $monitor_stdout = ''; +$monitor_stderr = ''; # kill with SIGKILL this time - we expect the backend to exit, without @@ -169,13 +187,15 @@ $killme_stdin .= q[ SELECT 1; ]; -$killme->pump until $killme_stderr =~ /server closed the connection unexpectedly/; +ok(pump_until($killme, \$killme_stderr, qr/server closed the connection unexpectedly/m), + "psql query died successfully after SIGKILL"); $killme->kill_kill; -ok(1, "psql query died successfully after SIGKILL"); -# Wait till server restarts (note that we should get the WARNING here) -$monitor->pump until $monitor_stderr =~ /WARNING: terminating connection because of crash of another server process/; -ok(1, "psql monitor died successfully after SIGKILL"); +# Wait till server restarts - we should get the WARNING here, but +# sometimes the server is unable to send that, if interrupted while +# sending. 
+ok(pump_until($monitor, \$monitor_stderr, qr/WARNING: terminating connection because of crash of another server process|server closed the connection unexpectedly/m), + "psql monitor died successfully after SIGKILL"); $monitor->kill_kill; # Wait till server restarts @@ -198,3 +218,33 @@ 'after-orderly-restart', 'can still write after orderly restart'); $node->stop(); + +# Pump until string is matched, or timeout occurs +sub pump_until +{ + my ($proc, $stream, $untl) = @_; + $proc->pump_nb(); + while (1) + { + if ($psql_timeout->is_expired) + { + diag("aborting wait: program timed out"); + diag("stream contents: >>", $$stream,"<<"); + diag("pattern searched for: ", $untl); + + return 0; + } + if (not $proc->pumpable()) + { + diag("aborting wait: program died"); + diag("stream contents: >>", $$stream,"<<"); + diag("pattern searched for: ", $untl); + + return 0; + } + $proc->pump(); + last if $$stream =~ /$untl/; + } + return 1; + +}; From fc49e24fa69a15efacd5b8958115ed9c43c48f9a Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 19 Sep 2017 22:03:48 -0700 Subject: [PATCH 0224/1087] Make WAL segment size configurable at initdb time. For performance reasons a larger segment size than the default 16MB can be useful. A larger segment size has two main benefits: Firstly, in setups using archiving, it makes it easier to write scripts that can keep up with higher amounts of WAL, secondly, the WAL has to be written and synced to disk less frequently. But at the same time large segment size are disadvantageous for smaller databases. So far the segment size had to be configured at compile time, often making it unrealistic to choose one fitting to a particularly load. Therefore change it to a initdb time setting. This includes a breaking changes to the xlogreader.h API, which now requires the current segment size to be configured. For that and similar reasons a number of binaries had to be taught how to recognize the current segment size. 
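For example, a new cluster with 64MB WAL segments can now be created with the
initdb option introduced here (hypothetical data directory path):

    initdb -D /path/to/data --wal-segsize=64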
Author: Beena Emerson, editorialized by Andres Freund Reviewed-By: Andres Freund, David Steele, Kuntal Ghosh, Michael Paquier, Peter Eisentraut, Robert Hass, Tushar Ahuja Discussion: https://postgr.es/m/CAOG9ApEAcQ--1ieKbhFzXSQPw_YLmepaa4hNdnY5+ZULpt81Mw@mail.gmail.com --- configure | 54 ---- configure.in | 31 --- contrib/pg_standby/pg_standby.c | 115 +++++++- doc/src/sgml/backup.sgml | 2 +- doc/src/sgml/installation.sgml | 14 - doc/src/sgml/ref/initdb.sgml | 15 ++ doc/src/sgml/wal.sgml | 13 +- src/backend/access/transam/twophase.c | 3 +- src/backend/access/transam/xlog.c | 255 +++++++++++------- src/backend/access/transam/xlogarchive.c | 14 +- src/backend/access/transam/xlogfuncs.c | 10 +- src/backend/access/transam/xlogreader.c | 32 +-- src/backend/access/transam/xlogutils.c | 36 ++- src/backend/bootstrap/bootstrap.c | 15 +- src/backend/postmaster/checkpointer.c | 5 +- src/backend/replication/basebackup.c | 34 +-- src/backend/replication/logical/logical.c | 2 +- .../replication/logical/reorderbuffer.c | 19 +- src/backend/replication/slot.c | 2 +- src/backend/replication/walreceiver.c | 14 +- src/backend/replication/walreceiverfuncs.c | 4 +- src/backend/replication/walsender.c | 16 +- src/backend/utils/misc/guc.c | 20 +- src/backend/utils/misc/pg_controldata.c | 5 +- src/backend/utils/misc/postgresql.conf.sample | 2 +- src/bin/initdb/initdb.c | 58 +++- src/bin/pg_basebackup/pg_basebackup.c | 7 +- src/bin/pg_basebackup/pg_receivewal.c | 16 +- src/bin/pg_basebackup/receivelog.c | 36 +-- src/bin/pg_basebackup/streamutil.c | 76 ++++++ src/bin/pg_basebackup/streamutil.h | 2 + src/bin/pg_controldata/pg_controldata.c | 15 +- src/bin/pg_resetwal/pg_resetwal.c | 55 ++-- src/bin/pg_rewind/parsexlog.c | 30 ++- src/bin/pg_rewind/pg_rewind.c | 12 +- src/bin/pg_rewind/pg_rewind.h | 1 + src/bin/pg_test_fsync/pg_test_fsync.c | 7 +- src/bin/pg_upgrade/test.sh | 4 +- src/bin/pg_waldump/pg_waldump.c | 246 ++++++++++++----- src/include/access/xlog.h | 1 + src/include/access/xlog_internal.h | 76 +++--- src/include/access/xlogreader.h | 8 +- src/include/catalog/pg_control.h | 2 +- src/include/pg_config.h.in | 5 - src/include/pg_config_manual.h | 6 + src/tools/msvc/Solution.pm | 2 - 46 files changed, 897 insertions(+), 500 deletions(-) diff --git a/configure b/configure index 0d76e5ea42..5c38149a3d 100755 --- a/configure +++ b/configure @@ -821,7 +821,6 @@ enable_tap_tests with_blocksize with_segsize with_wal_blocksize -with_wal_segsize with_CC enable_depend enable_cassert @@ -1518,8 +1517,6 @@ Optional Packages: --with-segsize=SEGSIZE set table segment size in GB [1] --with-wal-blocksize=BLOCKSIZE set WAL block size in kB [8] - --with-wal-segsize=SEGSIZE - set WAL segment size in MB [16] --with-CC=CMD set compiler (deprecated) --with-icu build with ICU support --with-tcl build Tcl modules (PL/Tcl) @@ -3733,57 +3730,6 @@ cat >>confdefs.h <<_ACEOF _ACEOF -# -# WAL segment size -# -{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for WAL segment size" >&5 -$as_echo_n "checking for WAL segment size... " >&6; } - - - -# Check whether --with-wal-segsize was given. -if test "${with_wal_segsize+set}" = set; then : - withval=$with_wal_segsize; - case $withval in - yes) - as_fn_error $? "argument required for --with-wal-segsize option" "$LINENO" 5 - ;; - no) - as_fn_error $? 
"argument required for --with-wal-segsize option" "$LINENO" 5 - ;; - *) - wal_segsize=$withval - ;; - esac - -else - wal_segsize=16 -fi - - -case ${wal_segsize} in - 1) ;; - 2) ;; - 4) ;; - 8) ;; - 16) ;; - 32) ;; - 64) ;; - 128) ;; - 256) ;; - 512) ;; - 1024) ;; - *) as_fn_error $? "Invalid WAL segment size. Allowed values are 1,2,4,8,16,32,64,128,256,512,1024." "$LINENO" 5 -esac -{ $as_echo "$as_me:${as_lineno-$LINENO}: result: ${wal_segsize}MB" >&5 -$as_echo "${wal_segsize}MB" >&6; } - - -cat >>confdefs.h <<_ACEOF -#define XLOG_SEG_SIZE (${wal_segsize} * 1024 * 1024) -_ACEOF - - # # C compiler # diff --git a/configure.in b/configure.in index bdc41b071f..176b29a792 100644 --- a/configure.in +++ b/configure.in @@ -343,37 +343,6 @@ AC_DEFINE_UNQUOTED([XLOG_BLCKSZ], ${XLOG_BLCKSZ}, [ Changing XLOG_BLCKSZ requires an initdb. ]) -# -# WAL segment size -# -AC_MSG_CHECKING([for WAL segment size]) -PGAC_ARG_REQ(with, wal-segsize, [SEGSIZE], [set WAL segment size in MB [16]], - [wal_segsize=$withval], - [wal_segsize=16]) -case ${wal_segsize} in - 1) ;; - 2) ;; - 4) ;; - 8) ;; - 16) ;; - 32) ;; - 64) ;; - 128) ;; - 256) ;; - 512) ;; - 1024) ;; - *) AC_MSG_ERROR([Invalid WAL segment size. Allowed values are 1,2,4,8,16,32,64,128,256,512,1024.]) -esac -AC_MSG_RESULT([${wal_segsize}MB]) - -AC_DEFINE_UNQUOTED([XLOG_SEG_SIZE], [(${wal_segsize} * 1024 * 1024)], [ - XLOG_SEG_SIZE is the size of a single WAL file. This must be a power of 2 - and larger than XLOG_BLCKSZ (preferably, a great deal larger than - XLOG_BLCKSZ). - - Changing XLOG_SEG_SIZE requires an initdb. -]) - # # C compiler # diff --git a/contrib/pg_standby/pg_standby.c b/contrib/pg_standby/pg_standby.c index d7fa2a80c6..6aeca6e8f7 100644 --- a/contrib/pg_standby/pg_standby.c +++ b/contrib/pg_standby/pg_standby.c @@ -36,6 +36,8 @@ const char *progname; +int WalSegSz = -1; + /* Options and defaults */ int sleeptime = 5; /* amount of time to sleep between file checks */ int waittime = -1; /* how long we have been waiting, -1 no wait @@ -100,6 +102,10 @@ int nextWALFileType; struct stat stat_buf; +static bool SetWALFileNameForCleanup(void); +static bool SetWALSegSize(void); + + /* ===================================================================== * * Customizable section @@ -175,6 +181,35 @@ CustomizableNextWALFileReady(void) { if (stat(WALFilePath, &stat_buf) == 0) { + /* + * If we've not seen any WAL segments, we don't know the WAL segment + * size, which we need. If it looks like a WAL segment, determine size + * of segments for the cluster. + */ + if (WalSegSz == -1 && IsXLogFileName(nextWALFileName)) + { + if (SetWALSegSize()) + { + /* + * Successfully determined WAL segment size. Can compute + * cleanup cutoff now. + */ + need_cleanup = SetWALFileNameForCleanup(); + if (debug) + { + fprintf(stderr, + _("WAL segment size: %d \n"), WalSegSz); + fprintf(stderr, "Keep archive history: "); + + if (need_cleanup) + fprintf(stderr, "%s and later\n", + exclusiveCleanupFileName); + else + fprintf(stderr, "no cleanup required\n"); + } + } + } + /* * If it's a backup file, return immediately. If it's a regular file * return only if it's the right size already. 
@@ -184,7 +219,7 @@ CustomizableNextWALFileReady(void) nextWALFileType = XLOG_BACKUP_LABEL; return true; } - else if (stat_buf.st_size == XLOG_SEG_SIZE) + else if (WalSegSz > 0 && stat_buf.st_size == WalSegSz) { #ifdef WIN32 @@ -204,7 +239,7 @@ CustomizableNextWALFileReady(void) /* * If still too small, wait until it is the correct size */ - if (stat_buf.st_size > XLOG_SEG_SIZE) + if (WalSegSz > 0 && stat_buf.st_size > WalSegSz) { if (debug) { @@ -218,8 +253,6 @@ CustomizableNextWALFileReady(void) return false; } -#define MaxSegmentsPerLogFile ( 0xFFFFFFFF / XLOG_SEG_SIZE ) - static void CustomizableCleanupPriorWALFiles(void) { @@ -315,6 +348,7 @@ SetWALFileNameForCleanup(void) uint32 log_diff = 0, seg_diff = 0; bool cleanup = false; + int max_segments_per_logfile = (0xFFFFFFFF / WalSegSz); if (restartWALFileName) { @@ -336,12 +370,12 @@ SetWALFileNameForCleanup(void) sscanf(nextWALFileName, "%08X%08X%08X", &tli, &log, &seg); if (tli > 0 && seg > 0) { - log_diff = keepfiles / MaxSegmentsPerLogFile; - seg_diff = keepfiles % MaxSegmentsPerLogFile; + log_diff = keepfiles / max_segments_per_logfile; + seg_diff = keepfiles % max_segments_per_logfile; if (seg_diff > seg) { log_diff++; - seg = MaxSegmentsPerLogFile - (seg_diff - seg); + seg = max_segments_per_logfile - (seg_diff - seg); } else seg -= seg_diff; @@ -364,6 +398,66 @@ SetWALFileNameForCleanup(void) return cleanup; } +/* + * Try to set the wal segment size from the WAL file specified by WALFilePath. + * + * Return true if size could be determined, false otherwise. + */ +static bool +SetWALSegSize(void) +{ + bool ret_val = false; + int fd; + char *buf = (char *) malloc(XLOG_BLCKSZ); + + Assert(WalSegSz == -1); + + if ((fd = open(WALFilePath, O_RDWR, 0)) < 0) + { + fprintf(stderr, "%s: couldn't open WAL file \"%s\"\n", + progname, WALFilePath); + return false; + } + if (read(fd, buf, XLOG_BLCKSZ) == XLOG_BLCKSZ) + { + XLogLongPageHeader longhdr = (XLogLongPageHeader) buf; + + WalSegSz = longhdr->xlp_seg_size; + + if (IsValidWalSegSize(WalSegSz)) + { + /* successfully retrieved WAL segment size */ + ret_val = true; + } + else + fprintf(stderr, + "%s: WAL segment size must be a power of two between 1MB and 1GB, but the WAL file header specifies %d bytes\n", + progname, WalSegSz); + close(fd); + } + else + { + /* + * Don't complain loudly, this is to be expected for segments being + * created. + */ + if (errno != 0) + { + if (debug) + fprintf(stderr, "could not read file \"%s\": %s", + WALFilePath, strerror(errno)); + } + else + { + if (debug) + fprintf(stderr, "not enough data in file \"%s\"", WALFilePath); + } + } + + fflush(stderr); + return ret_val; +} + /* * CheckForExternalTrigger() * @@ -708,8 +802,6 @@ main(int argc, char **argv) CustomizableInitialize(); - need_cleanup = SetWALFileNameForCleanup(); - if (debug) { fprintf(stderr, "Trigger file: %s\n", triggerPath ? triggerPath : ""); @@ -721,11 +813,6 @@ main(int argc, char **argv) fprintf(stderr, "Max wait interval: %d %s\n", maxwaittime, (maxwaittime > 0 ? 
"seconds" : "forever")); fprintf(stderr, "Command for restore: %s\n", restoreCommand); - fprintf(stderr, "Keep archive history: "); - if (need_cleanup) - fprintf(stderr, "%s and later\n", exclusiveCleanupFileName); - else - fprintf(stderr, "no cleanup required\n"); fflush(stderr); } diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml index 95aeb35507..bd55e8bb77 100644 --- a/doc/src/sgml/backup.sgml +++ b/doc/src/sgml/backup.sgml @@ -562,7 +562,7 @@ tar -cf backup.tar /usr/local/pgsql/data produces an indefinitely long sequence of WAL records. The system physically divides this sequence into WAL segment files, which are normally 16MB apiece (although the segment size - can be altered when building PostgreSQL). The segment + can be altered during initdb). The segment files are given numeric names that reflect their position in the abstract WAL sequence. When not using WAL archiving, the system normally creates just a few segment files and then diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index b178d3074b..a1bae95145 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -1058,20 +1058,6 @@ su - postgres - - - - - Set the WAL segment size, in megabytes. This is - the size of each individual file in the WAL log. It may be useful - to adjust this size to control the granularity of WAL log shipping. - The default size is 16 megabytes. - The value must be a power of 2 between 1 and 1024 (megabytes). - Note that changing this value requires an initdb. - - - - diff --git a/doc/src/sgml/ref/initdb.sgml b/doc/src/sgml/ref/initdb.sgml index 6efb2e442d..732fecab8e 100644 --- a/doc/src/sgml/ref/initdb.sgml +++ b/doc/src/sgml/ref/initdb.sgml @@ -316,6 +316,21 @@ PostgreSQL documentation + + + + Set the WAL segment size, in megabytes. This is + the size of each individual file in the WAL log. It may be useful + to adjust this size to control the granularity of WAL log shipping. + This option can only be set during initialization, and cannot be + changed later. + The default size is 16 megabytes. + The value must be a power of 2 between 1 and 1024 (megabytes). + + + + + diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml index 940c37b21a..ddcef5fbf5 100644 --- a/doc/src/sgml/wal.sgml +++ b/doc/src/sgml/wal.sgml @@ -752,13 +752,12 @@ WAL logs are stored in the directory pg_wal under the data directory, as a set of segment files, normally each 16 MB in size (but the size can be changed - by altering the diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c index b14e6f7924..8287de97a2 100644 --- a/src/backend/bootstrap/bootstrap.c +++ b/src/backend/bootstrap/bootstrap.c @@ -321,19 +321,19 @@ AuxiliaryProcessMain(int argc, char *argv[]) switch (MyAuxProcType) { case StartupProcess: - statmsg = "startup process"; + statmsg = pgstat_get_backend_desc(B_STARTUP); break; case BgWriterProcess: - statmsg = "writer process"; + statmsg = pgstat_get_backend_desc(B_BG_WRITER); break; case CheckpointerProcess: - statmsg = "checkpointer process"; + statmsg = pgstat_get_backend_desc(B_CHECKPOINTER); break; case WalWriterProcess: - statmsg = "wal writer process"; + statmsg = pgstat_get_backend_desc(B_WAL_WRITER); break; case WalReceiverProcess: - statmsg = "wal receiver process"; + statmsg = pgstat_get_backend_desc(B_WAL_RECEIVER); break; default: statmsg = "??? 
process"; diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index 776b1c0a9d..b745d8962e 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -436,7 +436,7 @@ AutoVacLauncherMain(int argc, char *argv[]) am_autovacuum_launcher = true; /* Identify myself via ps */ - init_ps_display("autovacuum launcher process", "", "", ""); + init_ps_display(pgstat_get_backend_desc(B_AUTOVAC_LAUNCHER), "", "", ""); ereport(DEBUG1, (errmsg("autovacuum launcher started"))); @@ -1519,7 +1519,7 @@ AutoVacWorkerMain(int argc, char *argv[]) am_autovacuum_worker = true; /* Identify myself via ps */ - init_ps_display("autovacuum worker process", "", "", ""); + init_ps_display(pgstat_get_backend_desc(B_AUTOVAC_WORKER), "", "", ""); SetProcessingMode(InitProcessing); diff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c index ddf9d698e0..1c6cf83f8c 100644 --- a/src/backend/postmaster/pgarch.c +++ b/src/backend/postmaster/pgarch.c @@ -236,7 +236,7 @@ PgArchiverMain(int argc, char *argv[]) /* * Identify myself via ps */ - init_ps_display("archiver process", "", "", ""); + init_ps_display("archiver", "", "", ""); pgarch_MainLoop(); diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index fd6ebc976a..3a0b49c7c4 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -4224,7 +4224,7 @@ PgstatCollectorMain(int argc, char *argv[]) /* * Identify myself via ps */ - init_ps_display("stats collector process", "", "", ""); + init_ps_display("stats collector", "", "", ""); /* * Read in existing stats files or initialize the stats to zero. diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index 160b555294..1bcbce537a 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -4266,14 +4266,14 @@ BackendInitialize(Port *port) * * For a walsender, the ps display is set in the following form: * - * postgres: wal sender process + * postgres: walsender * - * To achieve that, we pass "wal sender process" as username and username + * To achieve that, we pass "walsender" as username and username * as dbname to init_ps_display(). XXX: should add a new variant of * init_ps_display() to avoid abusing the parameters like this. */ if (am_walsender) - init_ps_display("wal sender process", port->user_name, remote_ps_data, + init_ps_display(pgstat_get_backend_desc(B_WAL_SENDER), port->user_name, remote_ps_data, update_process_title ? 
"authentication" : ""); else init_ps_display(port->user_name, port->database_name, remote_ps_data, diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c index 3255b42c7d..aeb117796d 100644 --- a/src/backend/postmaster/syslogger.c +++ b/src/backend/postmaster/syslogger.c @@ -173,7 +173,7 @@ SysLoggerMain(int argc, char *argv[]) am_syslogger = true; - init_ps_display("logger process", "", "", ""); + init_ps_display("logger", "", "", ""); /* * If we restarted, our stderr is already redirected into our own input From d42294fc00da4b97d04ddb4401b76295e8d86816 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 20 Sep 2017 09:03:04 -0400 Subject: [PATCH 0227/1087] Fix compiler warning from gcc-7 -Wformat-truncation (via -Wall) --- src/bin/initdb/initdb.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c index 1d4a138618..27fcf5a87f 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -1009,12 +1009,12 @@ static char * pretty_wal_size(int segment_count) { int sz = wal_segment_size_mb * segment_count; - char *result = pg_malloc(10); + char *result = pg_malloc(11); if ((sz % 1024) == 0) - snprintf(result, 10, "%dGB", sz / 1024); + snprintf(result, 11, "%dGB", sz / 1024); else - snprintf(result, 10, "%dMB", sz); + snprintf(result, 11, "%dMB", sz); return result; } From 00210e3fb974ff2b9affc4d8f3b29f9cb3645a60 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 20 Sep 2017 09:36:19 -0400 Subject: [PATCH 0228/1087] docs: re-add instructions on setting wal_level for rsync use This step was erroneously removed four days ago by me. Reported-by: Magnus via IM Backpatch-through: 9.5 --- doc/src/sgml/ref/pgupgrade.sgml | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index dabf9978cd..c3df343571 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -326,7 +326,9 @@ NET STOP postgresql-&majorversion; against the old primary and standby clusters. Verify that the Latest checkpoint location values match in all clusters. (There will be a mismatch if old standby servers were shut down - before the old primary.) + before the old primary.) Also, change wal_level to + replica in the postgresql.conf file on the + new primary cluster. From 7f3a3312abf34ea7e899046e326775612802764b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 20 Sep 2017 10:07:53 -0400 Subject: [PATCH 0229/1087] Fix typo. Thomas Munro Discussion: http://postgr.es/m/CAEepm=2j-HAgnBUrAazwS0ry7Z_ihk+d7g+Ye3u99+6WbiGt_Q@mail.gmail.com --- src/backend/optimizer/path/allpaths.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 5b746a906a..a7866a99e0 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -1316,7 +1316,7 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, build_partitioned_rels = true; break; default: - elog(ERROR, "unexpcted rtekind: %d", (int) rte->rtekind); + elog(ERROR, "unexpected rtekind: %d", (int) rte->rtekind); } /* From 57eebca03a9eb61eb18f8ea9db94775653f797d1 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 20 Sep 2017 10:20:10 -0400 Subject: [PATCH 0230/1087] Fix create_lateral_join_info to handle dead relations properly. Commit 0a480502b092195a9b25a2f0f199a21d592a9c57 broke it. Report by Andreas Seltenreich. 
Fix by Ashutosh Bapat. Discussion: http://postgr.es/m/874ls2vrnx.fsf@ansel.ydns.eu --- src/backend/optimizer/plan/initsplan.c | 7 +++++-- src/test/regress/expected/join.out | 12 ++++++++++++ src/test/regress/sql/join.sql | 5 +++++ 3 files changed, 22 insertions(+), 2 deletions(-) diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c index ad81f0f82f..9931dddba4 100644 --- a/src/backend/optimizer/plan/initsplan.c +++ b/src/backend/optimizer/plan/initsplan.c @@ -632,7 +632,11 @@ create_lateral_join_info(PlannerInfo *root) RelOptInfo *brel = root->simple_rel_array[rti]; RangeTblEntry *brte = root->simple_rte_array[rti]; - if (brel == NULL) + /* + * Skip empty slots. Also skip non-simple relations i.e. dead + * relations. + */ + if (brel == NULL || !IS_SIMPLE_REL(brel)) continue; /* @@ -644,7 +648,6 @@ create_lateral_join_info(PlannerInfo *root) * therefore be marked with the appropriate lateral info so that those * children eventually get marked also. */ - Assert(IS_SIMPLE_REL(brel)); Assert(brte); if (brel->reloptkind == RELOPT_OTHER_MEMBER_REL && (brte->rtekind != RTE_RELATION || diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index 06a84e8e1c..f47449b1c4 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -4060,6 +4060,18 @@ select i8.* from int8_tbl i8 left join (select f1 from int4_tbl group by f1) i4 Seq Scan on int8_tbl i8 (1 row) +-- check join removal with lateral references +explain (costs off) +select 1 from (select a.id FROM a left join b on a.b_id = b.id) q, + lateral generate_series(1, q.id) gs(i) where q.id = gs.i; + QUERY PLAN +------------------------------------------- + Nested Loop + -> Seq Scan on a + -> Function Scan on generate_series gs + Filter: (a.id = i) +(4 rows) + rollback; create temp table parent (k int primary key, pd int); create temp table child (k int unique, cd int); diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index 8b21838e92..d847d53653 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -1336,6 +1336,11 @@ explain (costs off) select i8.* from int8_tbl i8 left join (select f1 from int4_tbl group by f1) i4 on i8.q1 = i4.f1; +-- check join removal with lateral references +explain (costs off) +select 1 from (select a.id FROM a left join b on a.b_id = b.id) q, + lateral generate_series(1, q.id) gs(i) where q.id = gs.i; + rollback; create temp table parent (k int primary key, pd int); From 36b564c648a044e42ca461466ae14d8588e6c5e2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 20 Sep 2017 11:10:36 -0400 Subject: [PATCH 0231/1087] Fix erroneous documentation about noise word GROUP. GRANT, REVOKE, and some allied commands allow the noise word GROUP before a role name (cf. grantee production in gram.y). This option does not exist elsewhere, but it had nonetheless snuck into the documentation for ALTER ROLE, ALTER USER, and CREATE SCHEMA. Seems to be a copy-and-pasteo in commit 31eae6028, which did expand the syntax choices here, but not in that way. Back-patch to 9.5 where that came in. 
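For illustration, the distinction at issue reads like this in a session sketch
(role and table names below are invented; only the grantee position of GRANT
and REVOKE accepts the noise word):

    CREATE ROLE readers;
    CREATE TABLE docs (id int);

    -- GROUP is an accepted noise word here, per the grantee production:
    GRANT SELECT ON docs TO GROUP readers;
    REVOKE SELECT ON docs FROM GROUP readers;

    -- It was never valid in ALTER ROLE, ALTER USER, or CREATE SCHEMA,
    -- despite what the documentation claimed:
    ALTER ROLE GROUP readers NOLOGIN;  -- fails with a syntax error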
Discussion: https://postgr.es/m/20170916123750.8885.66941@wrigleys.postgresql.org --- doc/src/sgml/ref/alter_role.sgml | 2 +- doc/src/sgml/ref/alter_user.sgml | 2 +- doc/src/sgml/ref/create_schema.sgml | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml index 8cd8602bc4..ccdd5c107c 100644 --- a/doc/src/sgml/ref/alter_role.sgml +++ b/doc/src/sgml/ref/alter_role.sgml @@ -45,7 +45,7 @@ ALTER ROLE { role_specification | A where role_specification can be: - [ GROUP ] role_name + role_name | CURRENT_USER | SESSION_USER diff --git a/doc/src/sgml/ref/alter_user.sgml b/doc/src/sgml/ref/alter_user.sgml index 411a6dcc38..c26d264de5 100644 --- a/doc/src/sgml/ref/alter_user.sgml +++ b/doc/src/sgml/ref/alter_user.sgml @@ -45,7 +45,7 @@ ALTER USER { role_specification | A where role_specification can be: - [ GROUP ] role_name + role_name | CURRENT_USER | SESSION_USER diff --git a/doc/src/sgml/ref/create_schema.sgml b/doc/src/sgml/ref/create_schema.sgml index 554a4483c5..5d29cd768a 100644 --- a/doc/src/sgml/ref/create_schema.sgml +++ b/doc/src/sgml/ref/create_schema.sgml @@ -28,7 +28,7 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp where role_specification can be: - [ GROUP ] user_name + user_name | CURRENT_USER | SESSION_USER From 4939488af9b86edfff9b981773cd388d361c5830 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 20 Sep 2017 11:28:34 -0400 Subject: [PATCH 0232/1087] Fix instability in subscription regression test. 005_encoding.pl neglected to wait for the subscriber's initial synchronization to happen. While we have not seen this fail in the buildfarm, it's pretty easy to demonstrate there's an issue by hacking logicalrep_worker_launch() to fail most of the time. Michael Paquier Discussion: https://postgr.es/m/27032.1505749806@sss.pgh.pa.us --- src/test/subscription/t/005_encoding.pl | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/src/test/subscription/t/005_encoding.pl b/src/test/subscription/t/005_encoding.pl index 26a40c0b7f..2b0c47c07d 100644 --- a/src/test/subscription/t/005_encoding.pl +++ b/src/test/subscription/t/005_encoding.pl @@ -41,6 +41,12 @@ sub wait_for_caught_up wait_for_caught_up($node_publisher, $appname); +# Wait for initial sync to finish as well +my $synced_query = + "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');"; +$node_subscriber->poll_query_until('postgres', $synced_query) + or die "Timed out while waiting for subscriber to synchronize data"; + $node_publisher->safe_psql('postgres', q{INSERT INTO test1 VALUES (1, E'Mot\xc3\xb6rhead')}); # hand-rolled UTF-8 From 7b86c2ac9563ffd9b870cfd73a769431b7922e81 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 20 Sep 2017 13:52:36 -0400 Subject: [PATCH 0233/1087] Improve dubious memory management in pg_newlocale_from_collation(). pg_newlocale_from_collation() used malloc() and strdup() directly, which is generally not per backend coding style, and it didn't bother to check for failure results, but would just SIGSEGV instead. Also, if one of the numerous error checks in the middle of the function failed, the already-allocated memory would be leaked permanently. Admittedly, it's not a lot of memory, but it could build up if this function were called repeatedly for a bad collation. The first two problems are easily cured by palloc'ing in TopMemoryContext instead of calling libc directly. 
We can fairly easily dodge the leakage problem for the struct pg_locale_struct by filling in a temporary variable and allocating permanent storage only once we reach the bottom of the function. It's harder to get rid of the potential leakage for ICU's copy of the collcollate string, but at least that's only allocated after most of the error checks; so live with that aspect. Back-patch to v10 where this code came in, with one or another of the ICU patches. --- src/backend/utils/adt/pg_locale.c | 23 +++++++++++++++-------- 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/src/backend/utils/adt/pg_locale.c b/src/backend/utils/adt/pg_locale.c index 13e7625b20..3d3d8aa4b6 100644 --- a/src/backend/utils/adt/pg_locale.c +++ b/src/backend/utils/adt/pg_locale.c @@ -1292,7 +1292,8 @@ pg_newlocale_from_collation(Oid collid) Form_pg_collation collform; const char *collcollate; const char *collctype pg_attribute_unused(); - pg_locale_t result; + struct pg_locale_struct result; + pg_locale_t resultp; Datum collversion; bool isnull; @@ -1304,9 +1305,9 @@ pg_newlocale_from_collation(Oid collid) collcollate = NameStr(collform->collcollate); collctype = NameStr(collform->collctype); - result = malloc(sizeof(*result)); - memset(result, 0, sizeof(*result)); - result->provider = collform->collprovider; + /* We'll fill in the result struct locally before allocating memory */ + memset(&result, 0, sizeof(result)); + result.provider = collform->collprovider; if (collform->collprovider == COLLPROVIDER_LIBC) { @@ -1353,7 +1354,7 @@ pg_newlocale_from_collation(Oid collid) #endif } - result->info.lt = loc; + result.info.lt = loc; #else /* not HAVE_LOCALE_T */ /* platform that doesn't support locale_t */ ereport(ERROR, @@ -1379,8 +1380,10 @@ pg_newlocale_from_collation(Oid collid) (errmsg("could not open collator for locale \"%s\": %s", collcollate, u_errorName(status)))); - result->info.icu.locale = strdup(collcollate); - result->info.icu.ucol = collator; + /* We will leak this string if we get an error below :-( */ + result.info.icu.locale = MemoryContextStrdup(TopMemoryContext, + collcollate); + result.info.icu.ucol = collator; #else /* not USE_ICU */ /* could get here if a collation was created by a build with ICU */ ereport(ERROR, @@ -1427,7 +1430,11 @@ pg_newlocale_from_collation(Oid collid) ReleaseSysCache(tp); - cache_entry->locale = result; + /* We'll keep the pg_locale_t structures in TopMemoryContext */ + resultp = MemoryContextAlloc(TopMemoryContext, sizeof(*resultp)); + *resultp = result; + + cache_entry->locale = resultp; } return cache_entry->locale; From 9140cf8269b0c4ae002b2748d93979d535891311 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 20 Sep 2017 23:33:04 -0400 Subject: [PATCH 0234/1087] Associate partitioning information with each RelOptInfo. This is not used for anything yet, but it is necessary infrastructure for partition-wise join and for partition pruning without constraint exclusion. Ashutosh Bapat, reviewed by Amit Langote and with quite a few changes, mostly cosmetic, by me. Additional review and testing of this patch series by Antonin Houska, Amit Khandekar, Rafia Sabih, Rajkumar Raghuwanshi, Thomas Munro, and Dilip Kumar. 
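As a sketch of the matching rule described in the PartitionSchemeData comment
below (table names invented): two tables share one canonical PartitionScheme
when their partitioning strategy, number of key columns, and key type
information all match, regardless of column names.

    -- Both RANGE-partitioned on a single timestamptz key with the default
    -- opclass, so the planner can point both at the same PartitionScheme:
    CREATE TABLE measurements (ts timestamptz, v numeric)
        PARTITION BY RANGE (ts);
    CREATE TABLE events (at timestamptz, payload text)
        PARTITION BY RANGE (at);

    -- Different strategy and key type, so this one gets its own scheme:
    CREATE TABLE labels (code int) PARTITION BY LIST (code);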
Discussion: http://postgr.es/m/CAFjFpRfneFG3H+F6BaiXemMrKF+FY-POpx3Ocy+RiH3yBmXSNw@mail.gmail.com --- src/backend/optimizer/util/plancat.c | 159 +++++++++++++++++++++++++++ src/backend/optimizer/util/relnode.c | 37 ++++++- src/include/nodes/relation.h | 56 +++++++++- 3 files changed, 249 insertions(+), 3 deletions(-) diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index a1ebd4acc8..cac46bedf9 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -68,6 +68,10 @@ static List *get_relation_constraints(PlannerInfo *root, static List *build_index_tlist(PlannerInfo *root, IndexOptInfo *index, Relation heapRelation); static List *get_relation_statistics(RelOptInfo *rel, Relation relation); +static void set_relation_partition_info(PlannerInfo *root, RelOptInfo *rel, + Relation relation); +static PartitionScheme find_partition_scheme(PlannerInfo *root, Relation rel); +static List **build_baserel_partition_key_exprs(Relation relation, Index varno); /* * get_relation_info - @@ -420,6 +424,13 @@ get_relation_info(PlannerInfo *root, Oid relationObjectId, bool inhparent, /* Collect info about relation's foreign keys, if relevant */ get_relation_foreign_keys(root, rel, relation, inhparent); + /* + * Collect info about relation's partitioning scheme, if any. Only + * inheritance parents may be partitioned. + */ + if (inhparent && relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + set_relation_partition_info(root, rel, relation); + heap_close(relation, NoLock); /* @@ -1802,3 +1813,151 @@ has_row_triggers(PlannerInfo *root, Index rti, CmdType event) heap_close(relation, NoLock); return result; } + +/* + * set_relation_partition_info + * + * Set partitioning scheme and related information for a partitioned table. + */ +static void +set_relation_partition_info(PlannerInfo *root, RelOptInfo *rel, + Relation relation) +{ + PartitionDesc partdesc; + + Assert(relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); + + partdesc = RelationGetPartitionDesc(relation); + rel->part_scheme = find_partition_scheme(root, relation); + Assert(partdesc != NULL && rel->part_scheme != NULL); + rel->boundinfo = partdesc->boundinfo; + rel->nparts = partdesc->nparts; + rel->partexprs = build_baserel_partition_key_exprs(relation, rel->relid); +} + +/* + * find_partition_scheme + * + * Find or create a PartitionScheme for this Relation. + */ +static PartitionScheme +find_partition_scheme(PlannerInfo *root, Relation relation) +{ + PartitionKey partkey = RelationGetPartitionKey(relation); + ListCell *lc; + int partnatts; + PartitionScheme part_scheme; + + /* A partitioned table should have a partition key. */ + Assert(partkey != NULL); + + partnatts = partkey->partnatts; + + /* Search for a matching partition scheme and return if found one. */ + foreach(lc, root->part_schemes) + { + part_scheme = lfirst(lc); + + /* Match partitioning strategy and number of keys. */ + if (partkey->strategy != part_scheme->strategy || + partnatts != part_scheme->partnatts) + continue; + + /* Match the partition key types. */ + if (memcmp(partkey->partopfamily, part_scheme->partopfamily, + sizeof(Oid) * partnatts) != 0 || + memcmp(partkey->partopcintype, part_scheme->partopcintype, + sizeof(Oid) * partnatts) != 0 || + memcmp(partkey->parttypcoll, part_scheme->parttypcoll, + sizeof(Oid) * partnatts) != 0) + continue; + + /* + * Length and byval information should match when partopcintype + * matches. 
+ */ + Assert(memcmp(partkey->parttyplen, part_scheme->parttyplen, + sizeof(int16) * partnatts) == 0); + Assert(memcmp(partkey->parttypbyval, part_scheme->parttypbyval, + sizeof(bool) * partnatts) == 0); + + /* Found matching partition scheme. */ + return part_scheme; + } + + /* + * Did not find matching partition scheme. Create one copying relevant + * information from the relcache. Instead of copying whole arrays, copy + * the pointers in relcache. It's safe to do so since + * RelationClearRelation() wouldn't change it while planner is using it. + */ + part_scheme = (PartitionScheme) palloc0(sizeof(PartitionSchemeData)); + part_scheme->strategy = partkey->strategy; + part_scheme->partnatts = partkey->partnatts; + part_scheme->partopfamily = partkey->partopfamily; + part_scheme->partopcintype = partkey->partopcintype; + part_scheme->parttypcoll = partkey->parttypcoll; + part_scheme->parttyplen = partkey->parttyplen; + part_scheme->parttypbyval = partkey->parttypbyval; + + /* Add the partitioning scheme to PlannerInfo. */ + root->part_schemes = lappend(root->part_schemes, part_scheme); + + return part_scheme; +} + +/* + * build_baserel_partition_key_exprs + * + * Collects partition key expressions for a given base relation. Any single + * column partition keys are converted to Var nodes. All Var nodes are set + * to the given varno. The partition key expressions are returned as an array + * of single element lists to be stored in RelOptInfo of the base relation. + */ +static List ** +build_baserel_partition_key_exprs(Relation relation, Index varno) +{ + PartitionKey partkey = RelationGetPartitionKey(relation); + int partnatts; + int cnt; + List **partexprs; + ListCell *lc; + + /* A partitioned table should have a partition key. */ + Assert(partkey != NULL); + + partnatts = partkey->partnatts; + partexprs = (List **) palloc(sizeof(List *) * partnatts); + lc = list_head(partkey->partexprs); + + for (cnt = 0; cnt < partnatts; cnt++) + { + Expr *partexpr; + AttrNumber attno = partkey->partattrs[cnt]; + + if (attno != InvalidAttrNumber) + { + /* Single column partition key is stored as a Var node. */ + Assert(attno > 0); + + partexpr = (Expr *) makeVar(varno, attno, + partkey->parttypid[cnt], + partkey->parttypmod[cnt], + partkey->parttypcoll[cnt], 0); + } + else + { + if (lc == NULL) + elog(ERROR, "wrong number of partition key expressions"); + + /* Re-stamp the expression with given varno. */ + partexpr = (Expr *) copyObject(lfirst(lc)); + ChangeVarNodes((Node *) partexpr, 1, varno, 0); + lc = lnext(lc); + } + + partexprs[cnt] = list_make1(partexpr); + } + + return partexprs; +} diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c index c7b2695ebb..077e89ae43 100644 --- a/src/backend/optimizer/util/relnode.c +++ b/src/backend/optimizer/util/relnode.c @@ -146,6 +146,11 @@ build_simple_rel(PlannerInfo *root, int relid, RelOptInfo *parent) rel->baserestrict_min_security = UINT_MAX; rel->joininfo = NIL; rel->has_eclass_joins = false; + rel->part_scheme = NULL; + rel->nparts = 0; + rel->boundinfo = NULL; + rel->part_rels = NULL; + rel->partexprs = NULL; /* * Pass top parent's relids down the inheritance hierarchy. 
If the parent @@ -218,18 +223,41 @@ build_simple_rel(PlannerInfo *root, int relid, RelOptInfo *parent) if (rte->inh) { ListCell *l; + int nparts = rel->nparts; + int cnt_parts = 0; + + if (nparts > 0) + rel->part_rels = (RelOptInfo **) + palloc(sizeof(RelOptInfo *) * nparts); foreach(l, root->append_rel_list) { AppendRelInfo *appinfo = (AppendRelInfo *) lfirst(l); + RelOptInfo *childrel; /* append_rel_list contains all append rels; ignore others */ if (appinfo->parent_relid != relid) continue; - (void) build_simple_rel(root, appinfo->child_relid, - rel); + childrel = build_simple_rel(root, appinfo->child_relid, + rel); + + /* Nothing more to do for an unpartitioned table. */ + if (!rel->part_scheme) + continue; + + /* + * The order of partition OIDs in append_rel_list is the same as + * the order in the PartitionDesc, so the order of part_rels will + * also match the PartitionDesc. See expand_partitioned_rtentry. + */ + Assert(cnt_parts < nparts); + rel->part_rels[cnt_parts] = childrel; + cnt_parts++; } + + /* We should have seen all the child partitions. */ + Assert(cnt_parts == nparts); } return rel; @@ -527,6 +555,11 @@ build_join_rel(PlannerInfo *root, joinrel->joininfo = NIL; joinrel->has_eclass_joins = false; joinrel->top_parent_relids = NULL; + joinrel->part_scheme = NULL; + joinrel->nparts = 0; + joinrel->boundinfo = NULL; + joinrel->part_rels = NULL; + joinrel->partexprs = NULL; /* Compute information relevant to the foreign relations. */ set_foreign_rel_properties(joinrel, outer_rel, inner_rel); diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index d50ff55681..48e6012f7f 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -266,6 +266,9 @@ typedef struct PlannerInfo List *distinct_pathkeys; /* distinctClause pathkeys, if any */ List *sort_pathkeys; /* sortClause pathkeys, if any */ + List *part_schemes; /* Canonicalised partition schemes used in the + * query. */ + List *initial_rels; /* RelOptInfos we are now trying to join */ /* Use fetch_upper_rel() to get any particular upper rel */ @@ -326,6 +329,34 @@ typedef struct PlannerInfo ((root)->simple_rte_array ? (root)->simple_rte_array[rti] : \ rt_fetch(rti, (root)->parse->rtable)) +/* + * If multiple relations are partitioned the same way, all such partitions + * will have a pointer to the same PartitionScheme. A list of PartitionScheme + * objects is attached to the PlannerInfo. By design, the partition scheme + * incorporates only the general properties of the partition method (LIST vs. + * RANGE, number of partitioning columns and the type information for each) + * and not the specific bounds. + * + * We store the opclass-declared input data types instead of the partition key + * datatypes since the former rather than the latter are used to compare + * partition bounds. Since partition key data types and the opclass declared + * input data types are expected to be binary compatible (per ResolveOpClass), + * both of those should have same byval and length properties. + */ +typedef struct PartitionSchemeData +{ + char strategy; /* partition strategy */ + int16 partnatts; /* number of partition attributes */ + Oid *partopfamily; /* OIDs of operator families */ + Oid *partopcintype; /* OIDs of opclass declared input data types */ + Oid *parttypcoll; /* OIDs of collations of partition keys. */ + + /* Cached information about partition key data types. 
*/ + int16 *parttyplen; + bool *parttypbyval; +} PartitionSchemeData; + +typedef struct PartitionSchemeData *PartitionScheme; /*---------- * RelOptInfo @@ -456,7 +487,7 @@ typedef struct PlannerInfo * other rels for which we have tried and failed to prove * this one unique * - * The presence of the remaining fields depends on the restrictions + * The presence of the following fields depends on the restrictions * and joins that the relation participates in: * * baserestrictinfo - List of RestrictInfo nodes, containing info about @@ -487,6 +518,21 @@ typedef struct PlannerInfo * We store baserestrictcost in the RelOptInfo (for base relations) because * we know we will need it at least once (to price the sequential scan) * and may need it multiple times to price index scans. + * + * If the relation is partitioned, these fields will be set: + * + * part_scheme - Partitioning scheme of the relation + * boundinfo - Partition bounds + * nparts - Number of partitions + * part_rels - RelOptInfos for each partition + * partexprs - Partition key expressions + * + * Note: A base relation always has only one set of partition keys, but a join + * relation may have as many sets of partition keys as the number of relations + * being joined. partexprs is an array containing part_scheme->partnatts + * elements, each of which is a list of partition key expressions. For a base + * relation each list contains only one expression, but for a join relation + * there can be one per baserel. *---------- */ typedef enum RelOptKind @@ -592,6 +638,14 @@ typedef struct RelOptInfo /* used by "other" relations */ Relids top_parent_relids; /* Relids of topmost parents */ + + /* used for partitioned relations */ + PartitionScheme part_scheme; /* Partitioning scheme. */ + int nparts; /* number of partitions */ + struct PartitionBoundInfoData *boundinfo; /* Partition bounds */ + struct RelOptInfo **part_rels; /* Array of RelOptInfos of partitions, + * stored in the same order of bounds */ + List **partexprs; /* Partition key expressions. */ } RelOptInfo; /* From 28ae524bbf865d23eb10f6ae1b996d59dcc30e4e Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Thu, 21 Sep 2017 08:41:14 -0400 Subject: [PATCH 0235/1087] Quieten warnings about unused variables These variables are only ever written to in assertion-enabled builds, and the latest Microsoft compilers complain about such variables in non-assertion-enabled builds. Apparently they don't worry so much about variables that are written to but not read from, so most of our PG_USED_FOR_ASSERTS_ONLY variables don't cause the problem. Discussion: https://postgr.es/m/7800.1505950322@sss.pgh.pa.us --- src/backend/optimizer/path/costsize.c | 13 ++----------- 1 file changed, 2 insertions(+), 11 deletions(-) diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 051a8544b0..0baf9785c9 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -4550,15 +4550,11 @@ set_subquery_size_estimates(PlannerInfo *root, RelOptInfo *rel) { PlannerInfo *subroot = rel->subroot; RelOptInfo *sub_final_rel; - RangeTblEntry *rte PG_USED_FOR_ASSERTS_ONLY; ListCell *lc; /* Should only be applied to base relations that are subqueries */ Assert(rel->relid > 0); -#ifdef USE_ASSERT_CHECKING - rte = planner_rt_fetch(rel->relid, root); - Assert(rte->rtekind == RTE_SUBQUERY); -#endif + Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_SUBQUERY); /* * Copy raw number of output rows from subquery. 
All of its paths should
@@ -4670,14 +4666,9 @@ set_function_size_estimates(PlannerInfo *root, RelOptInfo *rel)
 void
 set_tablefunc_size_estimates(PlannerInfo *root, RelOptInfo *rel)
 {
-	RangeTblEntry *rte PG_USED_FOR_ASSERTS_ONLY;
-
 	/* Should only be applied to base relations that are functions */
 	Assert(rel->relid > 0);
-#ifdef USE_ASSERT_CHECKING
-	rte = planner_rt_fetch(rel->relid, root);
-	Assert(rte->rtekind == RTE_TABLEFUNC);
-#endif
+	Assert(planner_rt_fetch(rel->relid, root)->rtekind == RTE_TABLEFUNC);
 
 	rel->tuples = 100;
 
From 71480501057fee9fa3649b072173ff10e2b842d0 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Thu, 21 Sep 2017 18:13:11 -0400
Subject: [PATCH 0236/1087] Give a better error for duplicate entries in
 VACUUM/ANALYZE column list.

Previously, the code didn't think about this case and would just try
to analyze such a column twice. That would fail at the point of
inserting the second version of the pg_statistic row, with obscure
error messages like "duplicate key value violates unique constraint"
or "tuple already updated by self", depending on context and PG
version. We could allow the case by ignoring duplicate column
specifications, but it seems better to reject it explicitly.

The bogus error messages seem like arguably a bug, so back-patch to
all supported versions.

Nathan Bossart, per a report from Michael Paquier, and whacked around
a bit by me.

Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
---
 src/backend/commands/analyze.c       | 25 ++++++++++++++++++-------
 src/test/regress/expected/vacuum.out |  5 +++++
 src/test/regress/sql/vacuum.sql      |  5 +++++
 3 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 08fc18e96b..1248b2ee5c 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -370,10 +370,14 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	/*
 	 * Determine which columns to analyze
 	 *
-	 * Note that system attributes are never analyzed.
+	 * Note that system attributes are never analyzed, so we just reject them
+	 * at the lookup stage.  We also reject duplicate column mentions.  (We
+	 * could alternatively ignore duplicates, but analyzing a column twice
+	 * won't work; we'd end up making a conflicting update in pg_statistic.)
*/ if (va_cols != NIL) { + Bitmapset *unique_cols = NULL; ListCell *le; vacattrstats = (VacAttrStats **) palloc(list_length(va_cols) * @@ -389,6 +393,13 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params, (errcode(ERRCODE_UNDEFINED_COLUMN), errmsg("column \"%s\" of relation \"%s\" does not exist", col, RelationGetRelationName(onerel)))); + if (bms_is_member(i, unique_cols)) + ereport(ERROR, + (errcode(ERRCODE_DUPLICATE_COLUMN), + errmsg("column \"%s\" of relation \"%s\" is specified twice", + col, RelationGetRelationName(onerel)))); + unique_cols = bms_add_member(unique_cols, i); + vacattrstats[tcnt] = examine_attribute(onerel, i, NULL); if (vacattrstats[tcnt] != NULL) tcnt++; @@ -527,9 +538,9 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params, stats->rows = rows; stats->tupDesc = onerel->rd_att; stats->compute_stats(stats, - std_fetch_func, - numrows, - totalrows); + std_fetch_func, + numrows, + totalrows); /* * If the appropriate flavor of the n_distinct option is @@ -831,9 +842,9 @@ compute_index_stats(Relation onerel, double totalrows, stats->exprnulls = exprnulls + i; stats->rowstride = attr_cnt; stats->compute_stats(stats, - ind_fetch_func, - numindexrows, - totalindexrows); + ind_fetch_func, + numindexrows, + totalindexrows); /* * If the n_distinct option is specified, it overrides the diff --git a/src/test/regress/expected/vacuum.out b/src/test/regress/expected/vacuum.out index 6f68663087..a16f26da7e 100644 --- a/src/test/regress/expected/vacuum.out +++ b/src/test/regress/expected/vacuum.out @@ -90,4 +90,9 @@ UPDATE vacparted SET b = 'b'; VACUUM (ANALYZE) vacparted; VACUUM (FULL) vacparted; VACUUM (FREEZE) vacparted; +-- check behavior with duplicate column mentions +VACUUM ANALYZE vacparted(a,b,a); +ERROR: column "a" of relation "vacparted" is specified twice +ANALYZE vacparted(a,b,b); +ERROR: column "b" of relation "vacparted" is specified twice DROP TABLE vacparted; diff --git a/src/test/regress/sql/vacuum.sql b/src/test/regress/sql/vacuum.sql index 7c5fb04917..96a848ca95 100644 --- a/src/test/regress/sql/vacuum.sql +++ b/src/test/regress/sql/vacuum.sql @@ -73,4 +73,9 @@ UPDATE vacparted SET b = 'b'; VACUUM (ANALYZE) vacparted; VACUUM (FULL) vacparted; VACUUM (FREEZE) vacparted; + +-- check behavior with duplicate column mentions +VACUUM ANALYZE vacparted(a,b,a); +ANALYZE vacparted(a,b,b); + DROP TABLE vacparted; From d57c7a7c506276597af619bdb8c62fa5b592745a Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Thu, 21 Sep 2017 19:02:23 -0400 Subject: [PATCH 0237/1087] Provide a test for variable existence in psql "\if :{?variable_name}" will be translated to "\if TRUE" if the variable exists and "\if FALSE" otherwise. Thus it will be possible to execute code conditionally on the existence of the variable, regardless of its value. Fabien Coelho, with some review by Robins Tharakan and some light text editing by me. 
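A sketch of the intended use beyond the regression tests (script and variable
names invented): since :{?name} always substitutes to TRUE or FALSE, a script
can branch on whether the caller supplied a variable, regardless of its value.

    -- Invoked as: psql -v target_schema=myschema -f script.sql
    \if :{?target_schema}
        SET search_path TO :"target_schema";
    \else
        \echo 'target_schema not set; keeping the default search_path'
    \endif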
Discussion: https://postgr.es/m/alpine.DEB.2.20.1708260835520.3627@lancre --- doc/src/sgml/ref/psql-ref.sgml | 10 +++++++ src/bin/psql/psqlscanslash.l | 18 +++++++++++++ src/fe_utils/psqlscan.l | 42 ++++++++++++++++++++++++++++- src/include/fe_utils/psqlscan_int.h | 2 ++ src/test/regress/expected/psql.out | 26 ++++++++++++++++++ src/test/regress/sql/psql.sql | 18 +++++++++++++ 6 files changed, 115 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index 60bafa8175..e7a3e17c67 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -783,6 +783,10 @@ testdb=> The forms :'variable_name' and :"variable_name" described there work as well. + The :{?variable_name} syntax allows + testing whether a variable is defined. It is substituted by + TRUE or FALSE. + Escaping the colon with a backslash protects it from substitution. @@ -3938,6 +3942,12 @@ testdb=> INSERT INTO my_table VALUES (:'content'); can escape a colon with a backslash to protect it from substitution. + + The :{?name} special syntax returns TRUE + or FALSE depending on whether the variable exists or not, and is thus + always substituted, unless the colon is backslash-escaped. + + The colon syntax for variables is standard SQL for embedded query languages, such as ECPG. diff --git a/src/bin/psql/psqlscanslash.l b/src/bin/psql/psqlscanslash.l index db7a1b9eea..9a53cb3e02 100644 --- a/src/bin/psql/psqlscanslash.l +++ b/src/bin/psql/psqlscanslash.l @@ -281,6 +281,10 @@ other . unquoted_option_chars = 0; } +:\{\?{variable_char}+\} { + psqlscan_test_variable(cur_state, yytext, yyleng); + } + :'{variable_char}* { /* Throw back everything but the colon */ yyless(1); @@ -295,6 +299,20 @@ other . ECHO; } +:\{\?{variable_char}* { + /* Throw back everything but the colon */ + yyless(1); + unquoted_option_chars++; + ECHO; + } + +:\{ { + /* Throw back everything but the colon */ + yyless(1); + unquoted_option_chars++; + ECHO; + } + {other} { unquoted_option_chars++; ECHO; diff --git a/src/fe_utils/psqlscan.l b/src/fe_utils/psqlscan.l index 27689d72da..4375142a00 100644 --- a/src/fe_utils/psqlscan.l +++ b/src/fe_utils/psqlscan.l @@ -745,9 +745,13 @@ other . PQUOTE_SQL_IDENT); } +:\{\?{variable_char}+\} { + psqlscan_test_variable(cur_state, yytext, yyleng); + } + /* * These rules just avoid the need for scanner backup if one of the - * two rules above fails to match completely. + * three rules above fails to match completely. */ :'{variable_char}* { @@ -762,6 +766,17 @@ other . ECHO; } +:\{\?{variable_char}* { + /* Throw back everything but the colon */ + yyless(1); + ECHO; + } +:\{ { + /* Throw back everything but the colon */ + yyless(1); + ECHO; + } + /* * Back to backend-compatible rules. 
*/ @@ -1442,3 +1457,28 @@ psqlscan_escape_variable(PsqlScanState state, const char *txt, int len, psqlscan_emit(state, txt, len); } } + +void +psqlscan_test_variable(PsqlScanState state, const char *txt, int len) +{ + char *varname; + char *value; + + varname = psqlscan_extract_substring(state, txt + 3, len - 4); + if (state->callbacks->get_variable) + value = state->callbacks->get_variable(varname, PQUOTE_PLAIN, + state->cb_passthrough); + else + value = NULL; + free(varname); + + if (value != NULL) + { + psqlscan_emit(state, "TRUE", 4); + free(value); + } + else + { + psqlscan_emit(state, "FALSE", 5); + } +} diff --git a/src/include/fe_utils/psqlscan_int.h b/src/include/fe_utils/psqlscan_int.h index c70ff29f4e..e9b351756b 100644 --- a/src/include/fe_utils/psqlscan_int.h +++ b/src/include/fe_utils/psqlscan_int.h @@ -142,5 +142,7 @@ extern char *psqlscan_extract_substring(PsqlScanState state, extern void psqlscan_escape_variable(PsqlScanState state, const char *txt, int len, PsqlScanQuoteType quote); +extern void psqlscan_test_variable(PsqlScanState state, + const char *txt, int len); #endif /* PSQLSCAN_INT_H */ diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out index 836d8510fd..3818cfea7e 100644 --- a/src/test/regress/expected/psql.out +++ b/src/test/regress/expected/psql.out @@ -3014,6 +3014,32 @@ bar 'bar' "bar" \echo 'should print #8-1' should print #8-1 \endif +-- :{?...} defined variable test +\set i 1 +\if :{?i} + \echo '#9-1 ok, variable i is defined' +#9-1 ok, variable i is defined +\else + \echo 'should not print #9-2' +\endif +\if :{?no_such_variable} + \echo 'should not print #10-1' +\else + \echo '#10-2 ok, variable no_such_variable is not defined' +#10-2 ok, variable no_such_variable is not defined +\endif +SELECT :{?i} AS i_is_defined; + i_is_defined +-------------- + t +(1 row) + +SELECT NOT :{?no_such_var} AS no_such_var_is_not_defined; + no_such_var_is_not_defined +---------------------------- + t +(1 row) + -- SHOW_CONTEXT \set SHOW_CONTEXT never do $$ diff --git a/src/test/regress/sql/psql.sql b/src/test/regress/sql/psql.sql index ddae1bf1e7..b45da9bb8d 100644 --- a/src/test/regress/sql/psql.sql +++ b/src/test/regress/sql/psql.sql @@ -572,6 +572,24 @@ select \if false \\ (bogus \else \\ 42 \endif \\ forty_two; \echo 'should print #8-1' \endif +-- :{?...} defined variable test +\set i 1 +\if :{?i} + \echo '#9-1 ok, variable i is defined' +\else + \echo 'should not print #9-2' +\endif + +\if :{?no_such_variable} + \echo 'should not print #10-1' +\else + \echo '#10-2 ok, variable no_such_variable is not defined' +\endif + +SELECT :{?i} AS i_is_defined; + +SELECT NOT :{?no_such_var} AS no_such_var_is_not_defined; + -- SHOW_CONTEXT \set SHOW_CONTEXT never From a890432a872afc9ca2327573f3313fd994d17384 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 21 Sep 2017 19:32:19 -0400 Subject: [PATCH 0238/1087] Revert "Fix bool/int type confusion" This reverts commit 0ec2e908babfbfde83a3925680f06b16408739ff. We'll use the upstream (IANA) fix instead. 
--- src/timezone/localtime.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/timezone/localtime.c b/src/timezone/localtime.c index 82c18e8544..08642d1236 100644 --- a/src/timezone/localtime.c +++ b/src/timezone/localtime.c @@ -1379,7 +1379,7 @@ timesub(const pg_time_t *timep, int32 offset, int y; const int *ip; int64 corr; - int hit; + bool hit; int i; corr = 0; From 47f849a3c9005852926dca551d70ad8111f09f3a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 22 Sep 2017 00:04:21 -0400 Subject: [PATCH 0239/1087] Sync our copy of the timezone library with IANA tzcode master. This patch absorbs a few unreleased fixes in the IANA code. It corresponds to commit 2d8b944c1cec0808ac4f7a9ee1a463c28f9cd00a in https://github.com/eggert/tz. Non-cosmetic changes include: TZDEFRULESTRING is updated to match current US DST practice, rather than what it was over ten years ago. This only matters for interpretation of POSIX-style zone names (e.g., "EST5EDT"), and only if the timezone database doesn't include either an exact match for the zone name or a "posixrules" entry. The latter should not be true in any current Postgres installation, but this could possibly matter when using --with-system-tzdata. Get rid of a nonportable use of "++var" on a bool var. This is part of a larger fix that eliminates some vestigial support for consecutive leap seconds, and adds checks to the "zic" compiler that the data files do not specify that. Remove a couple of ancient compatibility hacks. The IANA crew think these are obsolete, and I tend to agree. But perhaps our buildfarm will think different. Back-patch to all supported branches, in line with our policy that all branches should be using current IANA code. Before v10, this includes application of current pgindent rules, to avoid whitespace problems in future back-patches. Discussion: https://postgr.es/m/E1dsWhf-0000pT-F9@gemulon.postgresql.org --- src/timezone/localtime.c | 100 +++++++++++++-------------- src/timezone/private.h | 25 ++----- src/timezone/strftime.c | 45 ++++++------- src/timezone/zic.c | 142 +++++++++++++++++++++++++++------------ 4 files changed, 173 insertions(+), 139 deletions(-) diff --git a/src/timezone/localtime.c b/src/timezone/localtime.c index 08642d1236..d946e882aa 100644 --- a/src/timezone/localtime.c +++ b/src/timezone/localtime.c @@ -26,15 +26,15 @@ #ifndef WILDABBR /* * Someone might make incorrect use of a time zone abbreviation: - * 1. They might reference tzname[0] before calling tzset (explicitly + * 1. They might reference tzname[0] before calling tzset (explicitly * or implicitly). - * 2. They might reference tzname[1] before calling tzset (explicitly + * 2. They might reference tzname[1] before calling tzset (explicitly * or implicitly). - * 3. They might reference tzname[1] after setting to a time zone + * 3. They might reference tzname[1] after setting to a time zone * in which Daylight Saving Time is never observed. - * 4. They might reference tzname[0] after setting to a time zone + * 4. They might reference tzname[0] after setting to a time zone * in which Standard Time is never observed. - * 5. They might reference tm.TM_ZONE after calling offtime. + * 5. They might reference tm.TM_ZONE after calling offtime. * What's best to do in the above cases is open to debate; * for now, we just set things up so that in any of the five cases * WILDABBR is used. 
Another possibility: initialize tzname[0] to the @@ -50,12 +50,8 @@ static const char wildabbr[] = WILDABBR; static const char gmt[] = "GMT"; -/* The minimum and maximum finite time values. This assumes no padding. */ -static const pg_time_t time_t_min = MINVAL(pg_time_t, TYPE_BIT(pg_time_t)); -static const pg_time_t time_t_max = MAXVAL(pg_time_t, TYPE_BIT(pg_time_t)); - /* - * We cache the result of trying to load the TZDEFRULES zone here. + * PG: We cache the result of trying to load the TZDEFRULES zone here. * tzdefrules_loaded is 0 if not tried yet, +1 if good, -1 if failed. */ static struct state tzdefrules_s; @@ -63,12 +59,12 @@ static int tzdefrules_loaded = 0; /* * The DST rules to use if TZ has no rules and we can't load TZDEFRULES. - * We default to US rules as of 1999-08-17. + * Default to US rules as of 2017-05-07. * POSIX 1003.1 section 8.1.1 says that the default DST rules are * implementation dependent; for historical reasons, US rules are a * common default. */ -#define TZDEFRULESTRING ",M4.1.0,M10.5.0" +#define TZDEFRULESTRING ",M3.2.0,M11.1.0" /* structs ttinfo, lsinfo, state have been moved to pgtz.h */ @@ -195,10 +191,8 @@ union input_buffer /* Local storage needed for 'tzloadbody'. */ union local_storage { - /* We don't need the "fullname" member */ - /* The results of analyzing the file's contents after it is opened. */ - struct + struct file_analysis { /* The input buffer. */ union input_buffer u; @@ -206,6 +200,8 @@ union local_storage /* A temporary state used for parsing a TZ string in the file. */ struct state st; } u; + + /* We don't need the "fullname" member */ }; /* Load tz data from the file named NAME into *SP. Read extended @@ -255,6 +251,8 @@ tzloadbody(char const *name, char *canonname, struct state *sp, bool doextend, { int32 ttisstdcnt = detzcode(up->tzhead.tzh_ttisstdcnt); int32 ttisgmtcnt = detzcode(up->tzhead.tzh_ttisgmtcnt); + int64 prevtr = 0; + int32 prevcorr = 0; int32 leapcnt = detzcode(up->tzhead.tzh_leapcnt); int32 timecnt = detzcode(up->tzhead.tzh_timecnt); int32 typecnt = detzcode(up->tzhead.tzh_typecnt); @@ -285,8 +283,8 @@ tzloadbody(char const *name, char *canonname, struct state *sp, bool doextend, /* * Read transitions, discarding those out of pg_time_t range. But - * pretend the last transition before time_t_min occurred at - * time_t_min. + * pretend the last transition before TIME_T_MIN occurred at + * TIME_T_MIN. */ timecnt = 0; for (i = 0; i < sp->timecnt; ++i) @@ -294,12 +292,12 @@ tzloadbody(char const *name, char *canonname, struct state *sp, bool doextend, int64 at = stored == 4 ? detzcode(p) : detzcode64(p); - sp->types[i] = at <= time_t_max; + sp->types[i] = at <= TIME_T_MAX; if (sp->types[i]) { pg_time_t attime - = ((TYPE_SIGNED(pg_time_t) ? at < time_t_min : at < 0) - ? time_t_min : at); + = ((TYPE_SIGNED(pg_time_t) ? at < TIME_T_MIN : at < 0) + ? TIME_T_MIN : at); if (timecnt && attime <= sp->ats[timecnt - 1]) { @@ -354,20 +352,22 @@ tzloadbody(char const *name, char *canonname, struct state *sp, bool doextend, int32 corr = detzcode(p + stored); p += stored + 4; - if (tr <= time_t_max) + /* Leap seconds cannot occur before the Epoch. */ + if (tr < 0) + return EINVAL; + if (tr <= TIME_T_MAX) { - pg_time_t trans - = ((TYPE_SIGNED(pg_time_t) ? tr < time_t_min : tr < 0) - ? 
time_t_min : tr); - - if (leapcnt && trans <= sp->lsis[leapcnt - 1].ls_trans) - { - if (trans < sp->lsis[leapcnt - 1].ls_trans) - return EINVAL; - leapcnt--; - } - sp->lsis[leapcnt].ls_trans = trans; - sp->lsis[leapcnt].ls_corr = corr; + /* + * Leap seconds cannot occur more than once per UTC month, and + * UTC months are at least 28 days long (minus 1 second for a + * negative leap second). Each leap second's correction must + * differ from the previous one's by 1 second. + */ + if (tr - prevtr < 28 * SECSPERDAY - 1 + || (corr != prevcorr - 1 && corr != prevcorr + 1)) + return EINVAL; + sp->lsis[leapcnt].ls_trans = prevtr = tr; + sp->lsis[leapcnt].ls_corr = prevcorr = corr; leapcnt++; } } @@ -1361,11 +1361,18 @@ pg_gmtime(const pg_time_t *timep) * Return the number of leap years through the end of the given year * where, to make the math easy, the answer for year zero is defined as zero. */ +static int +leaps_thru_end_of_nonneg(int y) +{ + return y / 4 - y / 100 + y / 400; +} + static int leaps_thru_end_of(const int y) { - return (y >= 0) ? (y / 4 - y / 100 + y / 400) : - -(leaps_thru_end_of(-(y + 1)) + 1); + return (y < 0 + ? -1 - leaps_thru_end_of_nonneg(-1 - y) + : leaps_thru_end_of_nonneg(y)); } static struct pg_tm * @@ -1390,22 +1397,9 @@ timesub(const pg_time_t *timep, int32 offset, lp = &sp->lsis[i]; if (*timep >= lp->ls_trans) { - if (*timep == lp->ls_trans) - { - hit = ((i == 0 && lp->ls_corr > 0) || - lp->ls_corr > sp->lsis[i - 1].ls_corr); - if (hit) - while (i > 0 && - sp->lsis[i].ls_trans == - sp->lsis[i - 1].ls_trans + 1 && - sp->lsis[i].ls_corr == - sp->lsis[i - 1].ls_corr + 1) - { - ++hit; - --i; - } - } corr = lp->ls_corr; + hit = (*timep == lp->ls_trans + && (i == 0 ? 0 : lp[-1].ls_corr) < corr); break; } } @@ -1529,13 +1523,13 @@ increment_overflow_time(pg_time_t *tp, int32 j) { /*---------- * This is like - * 'if (! (time_t_min <= *tp + j && *tp + j <= time_t_max)) ...', + * 'if (! (TIME_T_MIN <= *tp + j && *tp + j <= TIME_T_MAX)) ...', * except that it does the right thing even if *tp + j would overflow. *---------- */ if (!(j < 0 - ? (TYPE_SIGNED(pg_time_t) ? time_t_min - j <= *tp : -1 - j < *tp) - : *tp <= time_t_max - j)) + ? (TYPE_SIGNED(pg_time_t) ? TIME_T_MIN - j <= *tp : -1 - j < *tp) + : *tp <= TIME_T_MAX - j)) return true; *tp += j; return false; diff --git a/src/timezone/private.h b/src/timezone/private.h index e65cd1bb4e..701112ec5b 100644 --- a/src/timezone/private.h +++ b/src/timezone/private.h @@ -38,26 +38,9 @@ #define EOVERFLOW EINVAL #endif -#ifndef WIFEXITED -#define WIFEXITED(status) (((status) & 0xff) == 0) -#endif /* !defined WIFEXITED */ -#ifndef WEXITSTATUS -#define WEXITSTATUS(status) (((status) >> 8) & 0xff) -#endif /* !defined WEXITSTATUS */ - /* Unlike 's isdigit, this also works if c < 0 | c > UCHAR_MAX. */ #define is_digit(c) ((unsigned)(c) - '0' <= 9) -/* - * SunOS 4.1.1 libraries lack remove. - */ - -#ifndef remove -extern int unlink(const char *filename); - -#define remove unlink -#endif /* !defined remove */ - /* * Finally, some convenience items. @@ -78,6 +61,10 @@ extern int unlink(const char *filename); #define MINVAL(t, b) \ ((t) (TYPE_SIGNED(t) ? - TWOS_COMPLEMENT(t) - MAXVAL(t, b) : 0)) +/* The extreme time values, assuming no padding. */ +#define TIME_T_MIN MINVAL(pg_time_t, TYPE_BIT(pg_time_t)) +#define TIME_T_MAX MAXVAL(pg_time_t, TYPE_BIT(pg_time_t)) + /* * 302 / 1000 is log10(2.0) rounded up. 
* Subtract one for the sign bit if the type is signed; @@ -91,7 +78,7 @@ extern int unlink(const char *filename); /* * INITIALIZE(x) */ -#define INITIALIZE(x) ((x) = 0) +#define INITIALIZE(x) ((x) = 0) #undef _ #define _(msgid) (msgid) @@ -146,7 +133,7 @@ extern int unlink(const char *filename); * or * isleap(a + b) == isleap(a % 400 + b % 400) * This is true even if % means modulo rather than Fortran remainder - * (which is allowed by C89 but not C99). + * (which is allowed by C89 but not by C99 or later). * We use this to avoid addition overflow problems. */ diff --git a/src/timezone/strftime.c b/src/timezone/strftime.c index 7cbafc9d83..e1c6483443 100644 --- a/src/timezone/strftime.c +++ b/src/timezone/strftime.c @@ -82,17 +82,17 @@ static const struct lc_time_T C_time_locale = { /* * x_fmt * - * C99 requires this format. Using just numbers (as here) makes Quakers - * happier; it's also compatible with SVR4. + * C99 and later require this format. Using just numbers (as here) makes + * Quakers happier; it's also compatible with SVR4. */ "%m/%d/%y", /* * c_fmt * - * C99 requires this format. Previously this code used "%D %X", but we now - * conform to C99. Note that "%a %b %d %H:%M:%S %Y" is used by Solaris - * 2.3. + * C99 and later require this format. Previously this code used "%D %X", + * but we now conform to C99. Note that "%a %b %d %H:%M:%S %Y" is used by + * Solaris 2.3. */ "%a %b %e %T %Y", @@ -106,26 +106,25 @@ static const struct lc_time_T C_time_locale = { "%a %b %e %H:%M:%S %Z %Y" }; +enum warn +{ + IN_NONE, IN_SOME, IN_THIS, IN_ALL +}; + static char *_add(const char *, char *, const char *); static char *_conv(int, const char *, char *, const char *); -static char *_fmt(const char *, const struct pg_tm *, char *, - const char *, int *); +static char *_fmt(const char *, const struct pg_tm *, char *, const char *, + enum warn *); static char *_yconv(int, int, bool, bool, char *, const char *); -#define IN_NONE 0 -#define IN_SOME 1 -#define IN_THIS 2 -#define IN_ALL 3 - size_t pg_strftime(char *s, size_t maxsize, const char *format, const struct pg_tm *t) { char *p; - int warn; + enum warn warn = IN_NONE; - warn = IN_NONE; p = _fmt(format, t, s, s + maxsize, &warn); if (p == s + maxsize) return 0; @@ -134,8 +133,8 @@ pg_strftime(char *s, size_t maxsize, const char *format, } static char * -_fmt(const char *format, const struct pg_tm *t, char *pt, const char *ptlim, - int *warnp) +_fmt(const char *format, const struct pg_tm *t, char *pt, + const char *ptlim, enum warn *warnp) { for (; *format; ++format) { @@ -184,7 +183,7 @@ _fmt(const char *format, const struct pg_tm *t, char *pt, const char *ptlim, continue; case 'c': { - int warn2 = IN_SOME; + enum warn warn2 = IN_SOME; pt = _fmt(Locale->c_fmt, t, pt, ptlim, &warn2); if (warn2 == IN_ALL) @@ -203,9 +202,9 @@ _fmt(const char *format, const struct pg_tm *t, char *pt, const char *ptlim, case 'O': /* - * C99 locale modifiers. The sequences %Ec %EC %Ex %EX - * %Ey %EY %Od %oe %OH %OI %Om %OM %OS %Ou %OU %OV %Ow - * %OW %Oy are supposed to provide alternate + * Locale modifiers of C99 and later. The sequences %Ec + * %EC %Ex %EX %Ey %EY %Od %oe %OH %OI %Om %OM %OS %Ou %OU + * %OV %Ow %OW %Oy are supposed to provide alternate * representations. 
*/ goto label; @@ -417,7 +416,7 @@ _fmt(const char *format, const struct pg_tm *t, char *pt, const char *ptlim, continue; case 'x': { - int warn2 = IN_SOME; + enum warn warn2 = IN_SOME; pt = _fmt(Locale->x_fmt, t, pt, ptlim, &warn2); if (warn2 == IN_ALL) @@ -442,8 +441,8 @@ _fmt(const char *format, const struct pg_tm *t, char *pt, const char *ptlim, pt = _add(t->tm_zone, pt, ptlim); /* - * C99 says that %Z must be replaced by the empty string - * if the time zone is not determinable. + * C99 and later say that %Z must be replaced by the empty + * string if the time zone is not determinable. */ continue; case 'z': diff --git a/src/timezone/zic.c b/src/timezone/zic.c index 27c841be9e..db119265c3 100644 --- a/src/timezone/zic.c +++ b/src/timezone/zic.c @@ -11,7 +11,6 @@ #include #include #include -#include #include "pg_getopt.h" @@ -46,9 +45,12 @@ typedef int64 zic_t; static ptrdiff_t const PTRDIFF_MAX = MAXVAL(ptrdiff_t, TYPE_BIT(ptrdiff_t)); #endif -/* The type and printf format for line numbers. */ +/* + * The type for line numbers. In Postgres, use %d to format them; upstream + * uses PRIdMAX but we prefer not to rely on that, not least because it + * results in platform-dependent strings to be translated. + */ typedef int lineno_t; -#define PRIdLINENO "d" struct rule { @@ -293,10 +295,13 @@ struct lookup static struct lookup const *byword(const char *string, const struct lookup *lp); -static struct lookup const line_codes[] = { +static struct lookup const zi_line_codes[] = { {"Rule", LC_RULE}, {"Zone", LC_ZONE}, {"Link", LC_LINK}, + {NULL, 0} +}; +static struct lookup const leap_line_codes[] = { {"Leap", LC_LEAP}, {NULL, 0} }; @@ -435,7 +440,8 @@ growalloc(void *ptr, size_t itemsize, ptrdiff_t nitems, ptrdiff_t *nitems_alloc) return ptr; else { - ptrdiff_t amax = PTRDIFF_MAX - WORK_AROUND_QTBUG_53071; + ptrdiff_t nitems_max = PTRDIFF_MAX - WORK_AROUND_QTBUG_53071; + ptrdiff_t amax = nitems_max < SIZE_MAX ? nitems_max : SIZE_MAX; if ((amax - 1) / 3 * 2 < *nitems_alloc) memory_exhausted(_("integer overflow")); @@ -471,10 +477,10 @@ verror(const char *string, va_list args) * "*" -v on BSD systems. */ if (filename) - fprintf(stderr, _("\"%s\", line %" PRIdLINENO ": "), filename, linenum); + fprintf(stderr, _("\"%s\", line %d: "), filename, linenum); vfprintf(stderr, string, args); if (rfilename != NULL) - fprintf(stderr, _(" (rule from \"%s\", line %" PRIdLINENO ")"), + fprintf(stderr, _(" (rule from \"%s\", line %d)"), rfilename, rlinenum); fprintf(stderr, "\n"); } @@ -563,7 +569,7 @@ static const char *leapsec; static const char *yitcommand; int -main(int argc, char *argv[]) +main(int argc, char **argv) { int c, k; @@ -572,7 +578,7 @@ main(int argc, char *argv[]) #ifndef WIN32 umask(umask(S_IWGRP | S_IWOTH) | (S_IWGRP | S_IWOTH)); -#endif /* !WIN32 */ +#endif progname = argv[0]; if (TYPE_BIT(zic_t) <64) { @@ -631,7 +637,10 @@ main(int argc, char *argv[]) break; case 'y': if (yitcommand == NULL) + { + warning(_("-y is obsolescent")); yitcommand = strdup(optarg); + } else { fprintf(stderr, @@ -1202,6 +1211,9 @@ infile(const char *name) wantcont = inzcont(fields, nfields); else { + struct lookup const *line_codes + = name == leapsec ? 
leap_line_codes : zi_line_codes; + lp = byword(fields[0], line_codes); if (lp == NULL) error(_("input line of unknown type")); @@ -1220,12 +1232,7 @@ infile(const char *name) wantcont = false; break; case LC_LEAP: - if (name != leapsec) - warning(_("%s: Leap line in non leap" - " seconds file %s"), - progname, name); - else - inleap(fields, nfields); + inleap(fields, nfields); wantcont = false; break; default: /* "cannot happen" */ @@ -1359,7 +1366,7 @@ inzone(char **fields, int nfields) strcmp(zones[i].z_name, fields[ZF_NAME]) == 0) { error(_("duplicate zone name %s" - " (file \"%s\", line %" PRIdLINENO ")"), + " (file \"%s\", line %d)"), fields[ZF_NAME], zones[i].z_filename, zones[i].z_linenum); @@ -1573,21 +1580,11 @@ inleap(char **fields, int nfields) positive = false; count = 1; } - else if (strcmp(cp, "--") == 0) - { - positive = false; - count = 2; - } else if (strcmp(cp, "+") == 0) { positive = true; count = 1; } - else if (strcmp(cp, "++") == 0) - { - positive = true; - count = 2; - } else { error(_("illegal CORRECTION field on Leap line")); @@ -1599,9 +1596,9 @@ inleap(char **fields, int nfields) return; } t = tadd(t, tod); - if (t < early_time) + if (t < 0) { - error(_("leap second precedes Big Bang")); + error(_("leap second precedes Epoch")); return; } leapadd(t, positive, lp->l_value, count); @@ -1753,11 +1750,14 @@ rulesub(struct rule *rp, const char *loyearp, const char *hiyearp, error(_("typed single year")); return; } + warning(_("year type \"%s\" is obsolete; use \"-\" instead"), + typep); rp->r_yrtype = ecpyalloc(typep); } /* - * Day work. Accept things such as: 1 last-Sunday Sun<=20 Sun>=7 + * Day work. Accept things such as: 1 lastSunday last-Sunday + * (undocumented; warn about this) Sun<=20 Sun>=7 */ dp = ecpyalloc(dayp); if ((lp = byword(dp, lasts)) != NULL) @@ -2850,9 +2850,10 @@ outzone(const struct zone *zpfirst, ptrdiff_t zonecount) { ptrdiff_t k; zic_t jtime, - ktime = 0; + ktime; zic_t offset; + INITIALIZE(ktime); if (useuntil) { /* @@ -2929,7 +2930,8 @@ outzone(const struct zone *zpfirst, ptrdiff_t zonecount) continue; } if (*startbuf == '\0' && - startoff == oadd(zp->z_gmtoff, stdoff)) + startoff == oadd(zp->z_gmtoff, + stdoff)) { doabbr(startbuf, zp, @@ -3104,14 +3106,7 @@ leapadd(zic_t t, bool positive, int rolling, int count) } for (i = 0; i < leapcnt; ++i) if (t <= trans[i]) - { - if (t == trans[i]) - { - error(_("repeated leap second moment")); - exit(EXIT_FAILURE); - } break; - } do { for (j = leapcnt; j > i; --j) @@ -3132,12 +3127,19 @@ adjleap(void) { int i; zic_t last = 0; + zic_t prevtrans = 0; /* * propagate leap seconds forward */ for (i = 0; i < leapcnt; ++i) { + if (trans[i] - prevtrans < 28 * SECSPERDAY) + { + error(_("Leap seconds too close together")); + exit(EXIT_FAILURE); + } + prevtrans = trans[i]; trans[i] = tadd(trans[i], last); last = corr[i] += last; } @@ -3191,7 +3193,7 @@ yearistype(zic_t year, const char *type) exit(EXIT_FAILURE); } -/* Is A a space character in the C locale? */ +/* Is A a space character in the C locale? */ static bool is_space(char a) { @@ -3362,6 +3364,19 @@ itsabbr(const char *abbr, const char *word) return true; } +/* Return true if ABBR is an initial prefix of WORD, ignoring ASCII case. 
*/ + +static bool +ciprefix(char const *abbr, char const *word) +{ + do + if (!*abbr) + return true; + while (lowerit(*abbr++) == lowerit(*word++)); + + return false; +} + static const struct lookup * byword(const char *word, const struct lookup *table) { @@ -3371,6 +3386,23 @@ byword(const char *word, const struct lookup *table) if (word == NULL || table == NULL) return NULL; + /* + * If TABLE is LASTS and the word starts with "last" followed by a + * non-'-', skip the "last" and look in WDAY_NAMES instead. Warn about any + * usage of the undocumented prefix "last-". + */ + if (table == lasts && ciprefix("last", word) && word[4]) + { + if (word[4] == '-') + warning(_("\"%s\" is undocumented; use \"last%s\" instead"), + word, word + 5); + else + { + word += 4; + table = wday_names; + } + } + /* * Look for exact match. */ @@ -3383,13 +3415,31 @@ byword(const char *word, const struct lookup *table) */ foundlp = NULL; for (lp = table; lp->l_word != NULL; ++lp) - if (itsabbr(word, lp->l_word)) + if (ciprefix(word, lp->l_word)) { if (foundlp == NULL) foundlp = lp; else return NULL; /* multiple inexact matches */ } + + /* Warn about any backward-compatibility issue with pre-2017c zic. */ + if (foundlp) + { + bool pre_2017c_match = false; + + for (lp = table; lp->l_word; lp++) + if (itsabbr(word, lp->l_word)) + { + if (pre_2017c_match) + { + warning(_("\"%s\" is ambiguous in pre-2017c zic"), word); + break; + } + pre_2017c_match = true; + } + } + return foundlp; } @@ -3621,11 +3671,15 @@ mkdirs(char const *argname, bool ancestors) cp = name = ecpyalloc(argname); + /* + * On MS-Windows systems, do not worry about drive letters or backslashes, + * as this should suffice in practice. Time zone names do not use drive + * letters and backslashes. If the -d option of zic does not name an + * already-existing directory, it can use slashes to separate the + * already-existing ancestor prefix from the to-be-created subdirectories. + */ + /* Do not mkdir a root directory, as it must exist. */ -#ifdef WIN32 - if (is_alpha(name[0]) && name[1] == ':') - cp += 2; -#endif while (*cp == '/') cp++; From 885cab58115a5af9484926ddee8dca3dc0106c1e Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 22 Sep 2017 13:35:54 +0200 Subject: [PATCH 0240/1087] Document further existing locks as wait events Reported-by: Jeremy Schneider Author: Michael Paquier Discussion: https://postgr.es/m/CA+fnDAZaPCwfY8Lp-pfLnUGFAXRu1VfLyRgdup-L-kwcBj8MqQ@mail.gmail.com --- doc/src/sgml/monitoring.sgml | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index 8a3cf5d4c3..18fb9c2aa6 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -845,7 +845,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - LWLock + LWLock ShmemIndexLock Waiting to find or allocate space in shared memory. @@ -1030,6 +1030,14 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser OldSnapshotTimeMapLockWaiting to read or update old snapshot control information. + + BackendRandomLock + Waiting to generate a random number. + + + LogicalRepWorkerLock + Waiting for action on logical replication worker to finish. + CLogTruncationLock Waiting to execute txid_status or update the oldest transaction ID available to it.
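A quick way to see these names in the field is to sample the view this table documents; this assumes at least one backend happens to be blocked on an LWLock at the moment the query runs:

    -- Show backends currently waiting on any LWLock, including the newly
    -- documented BackendRandomLock and LogicalRepWorkerLock.
    SELECT pid, wait_event_type, wait_event
      FROM pg_stat_activity
     WHERE wait_event_type = 'LWLock';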
From e6023ee7fa73a2d9a2d7524f63584844b2291def Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 21 Sep 2017 14:42:10 -0400 Subject: [PATCH 0241/1087] Fix build with !USE_WIDE_UPPER_LOWER The placement of the ifdef blocks in formatting.c was pretty bogus, so the code failed to compile if USE_WIDE_UPPER_LOWER was not defined. Reported-by: Peter Geoghegan Reported-by: Noah Misch --- src/backend/utils/adt/formatting.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c index 46f45f6654..2bf484cda3 100644 --- a/src/backend/utils/adt/formatting.c +++ b/src/backend/utils/adt/formatting.c @@ -1528,7 +1528,6 @@ str_tolower(const char *buff, size_t nbytes, Oid collid) { result = asc_tolower(buff, nbytes); } -#ifdef USE_WIDE_UPPER_LOWER else { pg_locale_t mylocale = 0; @@ -1566,6 +1565,7 @@ str_tolower(const char *buff, size_t nbytes, Oid collid) else #endif { +#ifdef USE_WIDE_UPPER_LOWER if (pg_database_encoding_max_length() > 1) { wchar_t *workspace; @@ -1603,8 +1603,8 @@ str_tolower(const char *buff, size_t nbytes, Oid collid) wchar2char(result, workspace, result_size, mylocale); pfree(workspace); } -#endif /* USE_WIDE_UPPER_LOWER */ else +#endif /* USE_WIDE_UPPER_LOWER */ { char *p; @@ -1652,7 +1652,6 @@ str_toupper(const char *buff, size_t nbytes, Oid collid) { result = asc_toupper(buff, nbytes); } -#ifdef USE_WIDE_UPPER_LOWER else { pg_locale_t mylocale = 0; @@ -1690,6 +1689,7 @@ str_toupper(const char *buff, size_t nbytes, Oid collid) else #endif { +#ifdef USE_WIDE_UPPER_LOWER if (pg_database_encoding_max_length() > 1) { wchar_t *workspace; @@ -1727,8 +1727,8 @@ str_toupper(const char *buff, size_t nbytes, Oid collid) wchar2char(result, workspace, result_size, mylocale); pfree(workspace); } -#endif /* USE_WIDE_UPPER_LOWER */ else +#endif /* USE_WIDE_UPPER_LOWER */ { char *p; @@ -1777,7 +1777,6 @@ str_initcap(const char *buff, size_t nbytes, Oid collid) { result = asc_initcap(buff, nbytes); } -#ifdef USE_WIDE_UPPER_LOWER else { pg_locale_t mylocale = 0; @@ -1815,6 +1814,7 @@ str_initcap(const char *buff, size_t nbytes, Oid collid) else #endif { +#ifdef USE_WIDE_UPPER_LOWER if (pg_database_encoding_max_length() > 1) { wchar_t *workspace; @@ -1864,8 +1864,8 @@ str_initcap(const char *buff, size_t nbytes, Oid collid) wchar2char(result, workspace, result_size, mylocale); pfree(workspace); } -#endif /* USE_WIDE_UPPER_LOWER */ else +#endif /* USE_WIDE_UPPER_LOWER */ { char *p; From 85feb77aa09cda9ff3e12cf95c757c499dc25343 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 22 Sep 2017 11:00:58 -0400 Subject: [PATCH 0242/1087] Assume wcstombs(), towlower(), and sibling functions are always present. These functions are required by SUS v2, which is our minimum baseline for Unix platforms, and are present on all interesting Windows versions as well. Even our oldest buildfarm members have them. Thus, we were not testing the "!USE_WIDE_UPPER_LOWER" code paths, which explains why the bug fixed in commit e6023ee7f escaped detection. Per discussion, there seems to be no more real-world value in maintaining this option. Hence, remove the configure-time tests for wcstombs() and towlower(), remove the USE_WIDE_UPPER_LOWER symbol, and remove all the !USE_WIDE_UPPER_LOWER code. There's not actually all that much of the latter, but simplifying the #if nests is a win in itself. 
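For instance, in a UTF8 database with a libc-based, non-C LC_CTYPE, a simple case-mapping call now always runs through the wide-character path (char2wchar, towlower/towlower_l, wchar2char) that this commit makes unconditional; the sample string is arbitrary:

    -- Exercises the formerly optional wide-character lower-casing path.
    SELECT lower('ΣΟΦΙΑ');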
Discussion: https://postgr.es/m/20170921052928.GA188913@rfd.leadboat.com --- configure | 2 +- configure.in | 2 +- src/backend/regex/regc_pg_locale.c | 46 +++++++----------------------- src/backend/tsearch/regis.c | 5 ---- src/backend/tsearch/ts_locale.c | 9 ------ src/backend/tsearch/wparser_def.c | 40 -------------------------- src/backend/utils/adt/formatting.c | 6 ---- src/backend/utils/adt/pg_locale.c | 5 +--- src/include/c.h | 8 ------ src/include/pg_config.h.in | 6 ---- src/include/pg_config.h.win32 | 6 ---- src/include/tsearch/ts_locale.h | 20 +++---------- src/include/utils/pg_locale.h | 2 -- 13 files changed, 18 insertions(+), 139 deletions(-) diff --git a/configure b/configure index 5c38149a3d..70d8aa649d 100755 --- a/configure +++ b/configure @@ -12970,7 +12970,7 @@ fi LIBS_including_readline="$LIBS" LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'` -for ac_func in cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll pstat pthread_is_threaded_np readlink setproctitle setsid shm_open symlink sync_file_range towlower utime utimes wcstombs wcstombs_l +for ac_func in cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll pstat pthread_is_threaded_np readlink setproctitle setsid shm_open symlink sync_file_range utime utimes wcstombs_l do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" diff --git a/configure.in b/configure.in index 176b29a792..cdf11bf673 100644 --- a/configure.in +++ b/configure.in @@ -1399,7 +1399,7 @@ PGAC_FUNC_WCSTOMBS_L LIBS_including_readline="$LIBS" LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'` -AC_CHECK_FUNCS([cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll pstat pthread_is_threaded_np readlink setproctitle setsid shm_open symlink sync_file_range towlower utime utimes wcstombs wcstombs_l]) +AC_CHECK_FUNCS([cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll pstat pthread_is_threaded_np readlink setproctitle setsid shm_open symlink sync_file_range utime utimes wcstombs_l]) AC_REPLACE_FUNCS(fseeko) case $host_os in diff --git a/src/backend/regex/regc_pg_locale.c b/src/backend/regex/regc_pg_locale.c index 6982879688..b2122e9e8f 100644 --- a/src/backend/regex/regc_pg_locale.c +++ b/src/backend/regex/regc_pg_locale.c @@ -268,7 +268,6 @@ pg_set_regex_collation(Oid collation) pg_regex_strategy = PG_REGEX_LOCALE_ICU; else #endif -#ifdef USE_WIDE_UPPER_LOWER if (GetDatabaseEncoding() == PG_UTF8) { if (pg_regex_locale) @@ -277,7 +276,6 @@ pg_set_regex_collation(Oid collation) pg_regex_strategy = PG_REGEX_LOCALE_WIDE; } else -#endif /* USE_WIDE_UPPER_LOWER */ { if (pg_regex_locale) pg_regex_strategy = PG_REGEX_LOCALE_1BYTE_L; @@ -298,16 +296,14 @@ pg_wc_isdigit(pg_wchar c) return (c <= (pg_wchar) 127 && (pg_char_properties[c] & PG_ISDIGIT)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswdigit((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && isdigit((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswdigit_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -336,16 +332,14 @@ pg_wc_isalpha(pg_wchar c) return (c <= (pg_wchar) 127 && (pg_char_properties[c] 
& PG_ISALPHA)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswalpha((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && isalpha((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswalpha_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -374,16 +368,14 @@ pg_wc_isalnum(pg_wchar c) return (c <= (pg_wchar) 127 && (pg_char_properties[c] & PG_ISALNUM)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswalnum((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && isalnum((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswalnum_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -412,16 +404,14 @@ pg_wc_isupper(pg_wchar c) return (c <= (pg_wchar) 127 && (pg_char_properties[c] & PG_ISUPPER)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswupper((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && isupper((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswupper_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -450,16 +440,14 @@ pg_wc_islower(pg_wchar c) return (c <= (pg_wchar) 127 && (pg_char_properties[c] & PG_ISLOWER)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswlower((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && islower((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswlower_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -488,16 +476,14 @@ pg_wc_isgraph(pg_wchar c) return (c <= (pg_wchar) 127 && (pg_char_properties[c] & PG_ISGRAPH)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswgraph((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && isgraph((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswgraph_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -526,16 +512,14 @@ pg_wc_isprint(pg_wchar c) return (c <= (pg_wchar) 127 && (pg_char_properties[c] & PG_ISPRINT)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswprint((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && isprint((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswprint_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -564,16 +548,14 @@ pg_wc_ispunct(pg_wchar c) 
return (c <= (pg_wchar) 127 && (pg_char_properties[c] & PG_ISPUNCT)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswpunct((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && ispunct((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswpunct_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -602,16 +584,14 @@ pg_wc_isspace(pg_wchar c) return (c <= (pg_wchar) 127 && (pg_char_properties[c] & PG_ISSPACE)); case PG_REGEX_LOCALE_WIDE: -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswspace((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: return (c <= (pg_wchar) UCHAR_MAX && isspace((unsigned char) c)); case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return iswspace_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -644,10 +624,8 @@ pg_wc_toupper(pg_wchar c) /* force C behavior for ASCII characters, per comments above */ if (c <= (pg_wchar) 127) return pg_ascii_toupper((unsigned char) c); -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return towupper((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: /* force C behavior for ASCII characters, per comments above */ @@ -657,7 +635,7 @@ pg_wc_toupper(pg_wchar c) return toupper((unsigned char) c); return c; case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return towupper_l((wint_t) c, pg_regex_locale->info.lt); #endif @@ -690,10 +668,8 @@ pg_wc_tolower(pg_wchar c) /* force C behavior for ASCII characters, per comments above */ if (c <= (pg_wchar) 127) return pg_ascii_tolower((unsigned char) c); -#ifdef USE_WIDE_UPPER_LOWER if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return towlower((wint_t) c); -#endif /* FALL THRU */ case PG_REGEX_LOCALE_1BYTE: /* force C behavior for ASCII characters, per comments above */ @@ -703,7 +679,7 @@ pg_wc_tolower(pg_wchar c) return tolower((unsigned char) c); return c; case PG_REGEX_LOCALE_WIDE_L: -#if defined(HAVE_LOCALE_T) && defined(USE_WIDE_UPPER_LOWER) +#ifdef HAVE_LOCALE_T if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF) return towlower_l((wint_t) c, pg_regex_locale->info.lt); #endif diff --git a/src/backend/tsearch/regis.c b/src/backend/tsearch/regis.c index 2b89f596f0..4a799f2585 100644 --- a/src/backend/tsearch/regis.c +++ b/src/backend/tsearch/regis.c @@ -178,7 +178,6 @@ RS_free(Regis *r) r->node = NULL; } -#ifdef USE_WIDE_UPPER_LOWER static bool mb_strchr(char *str, char *c) { @@ -209,10 +208,6 @@ mb_strchr(char *str, char *c) return res; } -#else -#define mb_strchr(s,c) ( (strchr((s),*(c)) == NULL) ? 
false : true ) -#endif - bool RS_execute(Regis *r, char *str) diff --git a/src/backend/tsearch/ts_locale.c b/src/backend/tsearch/ts_locale.c index 1aa3e23733..7a6f0bc722 100644 --- a/src/backend/tsearch/ts_locale.c +++ b/src/backend/tsearch/ts_locale.c @@ -21,8 +21,6 @@ static void tsearch_readline_callback(void *arg); -#ifdef USE_WIDE_UPPER_LOWER - int t_isdigit(const char *ptr) { @@ -86,7 +84,6 @@ t_isprint(const char *ptr) return iswprint((wint_t) character[0]); } -#endif /* USE_WIDE_UPPER_LOWER */ /* @@ -244,17 +241,12 @@ char * lowerstr_with_len(const char *str, int len) { char *out; - -#ifdef USE_WIDE_UPPER_LOWER Oid collation = DEFAULT_COLLATION_OID; /* TODO */ pg_locale_t mylocale = 0; /* TODO */ -#endif if (len == 0) return pstrdup(""); -#ifdef USE_WIDE_UPPER_LOWER - /* * Use wide char code only when max encoding length > 1 and ctype != C. * Some operating systems fail with multi-byte encodings and a C locale. @@ -300,7 +292,6 @@ lowerstr_with_len(const char *str, int len) Assert(wlen < len); } else -#endif /* USE_WIDE_UPPER_LOWER */ { const char *ptr = str; char *outptr; diff --git a/src/backend/tsearch/wparser_def.c b/src/backend/tsearch/wparser_def.c index e841a1ccf0..c118357336 100644 --- a/src/backend/tsearch/wparser_def.c +++ b/src/backend/tsearch/wparser_def.c @@ -241,11 +241,9 @@ typedef struct TParser /* string and position information */ char *str; /* multibyte string */ int lenstr; /* length of mbstring */ -#ifdef USE_WIDE_UPPER_LOWER wchar_t *wstr; /* wide character string */ pg_wchar *pgwstr; /* wide character string for C-locale */ bool usewide; -#endif /* State of parse */ int charmaxlen; @@ -294,8 +292,6 @@ TParserInit(char *str, int len) prs->str = str; prs->lenstr = len; -#ifdef USE_WIDE_UPPER_LOWER - /* * Use wide char code only when max encoding length > 1. */ @@ -323,7 +319,6 @@ TParserInit(char *str, int len) } else prs->usewide = false; -#endif prs->state = newTParserPosition(NULL); prs->state->state = TPS_Base; @@ -360,15 +355,12 @@ TParserCopyInit(const TParser *orig) prs->charmaxlen = orig->charmaxlen; prs->str = orig->str + orig->state->posbyte; prs->lenstr = orig->lenstr - orig->state->posbyte; - -#ifdef USE_WIDE_UPPER_LOWER prs->usewide = orig->usewide; if (orig->pgwstr) prs->pgwstr = orig->pgwstr + orig->state->poschar; if (orig->wstr) prs->wstr = orig->wstr + orig->state->poschar; -#endif prs->state = newTParserPosition(NULL); prs->state->state = TPS_Base; @@ -393,12 +385,10 @@ TParserClose(TParser *prs) prs->state = ptr; } -#ifdef USE_WIDE_UPPER_LOWER if (prs->wstr) pfree(prs->wstr); if (prs->pgwstr) pfree(prs->pgwstr); -#endif #ifdef WPARSER_TRACE fprintf(stderr, "closing parser\n"); @@ -437,8 +427,6 @@ TParserCopyClose(TParser *prs) * - if locale is C then we use pgwstr instead of wstr. */ -#ifdef USE_WIDE_UPPER_LOWER - #define p_iswhat(type) \ static int \ p_is##type(TParser *prs) { \ @@ -536,31 +524,6 @@ p_iseq(TParser *prs, char c) Assert(prs->state); return ((prs->state->charlen == 1 && *(prs->str + prs->state->posbyte) == c)) ? 1 : 0; } -#else /* USE_WIDE_UPPER_LOWER */ - -#define p_iswhat(type) \ -static int \ -p_is##type(TParser *prs) { \ - Assert( prs->state ); \ - return is##type( (unsigned char)*( prs->str + prs->state->posbyte ) ); \ -} \ - \ -static int \ -p_isnot##type(TParser *prs) { \ - return !p_is##type(prs); \ -} - - -static int -p_iseq(TParser *prs, char c) -{ - Assert(prs->state); - return (*(prs->str + prs->state->posbyte) == c) ? 
1 : 0; -} - -p_iswhat(alnum) -p_iswhat(alpha) -#endif /* USE_WIDE_UPPER_LOWER */ p_iswhat(digit) p_iswhat(lower) @@ -785,8 +748,6 @@ p_isspecial(TParser *prs) if (pg_dsplen(prs->str + prs->state->posbyte) == 0) return 1; -#ifdef USE_WIDE_UPPER_LOWER - /* * Unicode Characters in the 'Mark, Spacing Combining' Category That * characters are not alpha although they are not breakers of word too. @@ -1050,7 +1011,6 @@ p_isspecial(TParser *prs) StopHigh = StopMiddle; } } -#endif return 0; } diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c index 2bf484cda3..7877af2d6b 100644 --- a/src/backend/utils/adt/formatting.c +++ b/src/backend/utils/adt/formatting.c @@ -1565,7 +1565,6 @@ str_tolower(const char *buff, size_t nbytes, Oid collid) else #endif { -#ifdef USE_WIDE_UPPER_LOWER if (pg_database_encoding_max_length() > 1) { wchar_t *workspace; @@ -1604,7 +1603,6 @@ str_tolower(const char *buff, size_t nbytes, Oid collid) pfree(workspace); } else -#endif /* USE_WIDE_UPPER_LOWER */ { char *p; @@ -1689,7 +1687,6 @@ str_toupper(const char *buff, size_t nbytes, Oid collid) else #endif { -#ifdef USE_WIDE_UPPER_LOWER if (pg_database_encoding_max_length() > 1) { wchar_t *workspace; @@ -1728,7 +1725,6 @@ str_toupper(const char *buff, size_t nbytes, Oid collid) pfree(workspace); } else -#endif /* USE_WIDE_UPPER_LOWER */ { char *p; @@ -1814,7 +1810,6 @@ str_initcap(const char *buff, size_t nbytes, Oid collid) else #endif { -#ifdef USE_WIDE_UPPER_LOWER if (pg_database_encoding_max_length() > 1) { wchar_t *workspace; @@ -1865,7 +1860,6 @@ str_initcap(const char *buff, size_t nbytes, Oid collid) pfree(workspace); } else -#endif /* USE_WIDE_UPPER_LOWER */ { char *p; diff --git a/src/backend/utils/adt/pg_locale.c b/src/backend/utils/adt/pg_locale.c index 3d3d8aa4b6..5ad75efb7a 100644 --- a/src/backend/utils/adt/pg_locale.c +++ b/src/backend/utils/adt/pg_locale.c @@ -1587,6 +1587,7 @@ icu_from_uchar(char **result, const UChar *buff_uchar, int32_t len_uchar) return len_result; } + #endif /* USE_ICU */ /* @@ -1594,8 +1595,6 @@ icu_from_uchar(char **result, const UChar *buff_uchar, int32_t len_uchar) * Therefore we keep them here rather than with the mbutils code. */ -#ifdef USE_WIDE_UPPER_LOWER - /* * wchar2char --- convert wide characters to multibyte format * @@ -1762,5 +1761,3 @@ char2wchar(wchar_t *to, size_t tolen, const char *from, size_t fromlen, return result; } - -#endif /* USE_WIDE_UPPER_LOWER */ diff --git a/src/include/c.h b/src/include/c.h index fd53010e24..b6a969787a 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -1095,14 +1095,6 @@ extern int fdatasync(int fildes); #define HAVE_STRTOULL 1 #endif -/* - * We assume if we have these two functions, we have their friends too, and - * can use the wide-character functions. - */ -#if defined(HAVE_WCSTOMBS) && defined(HAVE_TOWLOWER) -#define USE_WIDE_UPPER_LOWER -#endif - /* EXEC_BACKEND defines */ #ifdef EXEC_BACKEND #define NON_EXEC_STATIC diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index 85deb29d83..2a4e9f6050 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -593,9 +593,6 @@ `HAVE_STRUCT_TM_TM_ZONE' instead. */ #undef HAVE_TM_ZONE -/* Define to 1 if you have the `towlower' function. */ -#undef HAVE_TOWLOWER - /* Define to 1 if your compiler understands `typeof' or something similar. */ #undef HAVE_TYPEOF @@ -659,9 +656,6 @@ /* Define to 1 if you have the header file. */ #undef HAVE_WCHAR_H -/* Define to 1 if you have the `wcstombs' function. 
*/ -#undef HAVE_WCSTOMBS - /* Define to 1 if you have the `wcstombs_l' function. */ #undef HAVE_WCSTOMBS_L diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index 27aab21be7..b6808d581b 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -442,9 +442,6 @@ `HAVE_STRUCT_TM_TM_ZONE' instead. */ /* #undef HAVE_TM_ZONE */ -/* Define to 1 if you have the `towlower' function. */ -#define HAVE_TOWLOWER 1 - /* Define to 1 if your compiler understands `typeof' or something similar. */ /* #undef HAVE_TYPEOF */ @@ -484,9 +481,6 @@ /* Define to 1 if you have the header file. */ #define HAVE_WCHAR_H 1 -/* Define to 1 if you have the `wcstombs' function. */ -#define HAVE_WCSTOMBS 1 - /* Define to 1 if you have the `wcstombs_l' function. */ #define HAVE_WCSTOMBS_L 1 diff --git a/src/include/tsearch/ts_locale.h b/src/include/tsearch/ts_locale.h index c32f0743aa..3ec276fc05 100644 --- a/src/include/tsearch/ts_locale.h +++ b/src/include/tsearch/ts_locale.h @@ -41,27 +41,15 @@ typedef struct #define TOUCHAR(x) (*((const unsigned char *) (x))) -#ifdef USE_WIDE_UPPER_LOWER - -extern int t_isdigit(const char *ptr); -extern int t_isspace(const char *ptr); -extern int t_isalpha(const char *ptr); -extern int t_isprint(const char *ptr); - /* The second argument of t_iseq() must be a plain ASCII character */ #define t_iseq(x,c) (TOUCHAR(x) == (unsigned char) (c)) #define COPYCHAR(d,s) memcpy(d, s, pg_mblen(s)) -#else /* not USE_WIDE_UPPER_LOWER */ -#define t_isdigit(x) isdigit(TOUCHAR(x)) -#define t_isspace(x) isspace(TOUCHAR(x)) -#define t_isalpha(x) isalpha(TOUCHAR(x)) -#define t_isprint(x) isprint(TOUCHAR(x)) -#define t_iseq(x,c) (TOUCHAR(x) == (unsigned char) (c)) - -#define COPYCHAR(d,s) (*((unsigned char *) (d)) = TOUCHAR(s)) -#endif /* USE_WIDE_UPPER_LOWER */ +extern int t_isdigit(const char *ptr); +extern int t_isspace(const char *ptr); +extern int t_isalpha(const char *ptr); +extern int t_isprint(const char *ptr); extern char *lowerstr(const char *str); extern char *lowerstr_with_len(const char *str, int len); diff --git a/src/include/utils/pg_locale.h b/src/include/utils/pg_locale.h index f3e04d4d8c..b633511a7a 100644 --- a/src/include/utils/pg_locale.h +++ b/src/include/utils/pg_locale.h @@ -110,11 +110,9 @@ extern int32_t icu_from_uchar(char **result, const UChar *buff_uchar, int32_t le #endif /* These functions convert from/to libc's wchar_t, *not* pg_wchar_t */ -#ifdef USE_WIDE_UPPER_LOWER extern size_t wchar2char(char *to, const wchar_t *from, size_t tolen, pg_locale_t locale); extern size_t char2wchar(wchar_t *to, size_t tolen, const char *from, size_t fromlen, pg_locale_t locale); -#endif #endif /* _PG_LOCALE_ */ From ed87e1980706975e7aa412bee200087774c5ff22 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 22 Sep 2017 11:35:12 -0400 Subject: [PATCH 0243/1087] Mop-up for commit 85feb77aa09cda9ff3e12cf95c757c499dc25343. Adjust commentary in regc_pg_locale.c to remove mention of the possibility of not having functions, since we no longer consider that. Eliminate duplicate code in wparser_def.c by generalizing the p_iswhat macro to take a parameter saying what to return for non-ASCII chars in C locale. (That's not really a consequence of the USE_WIDE_UPPER_LOWER-ectomy, but I noticed it while doing that.) 
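One easy way to exercise the generalized p_iswhat() classifiers is the default text search parser, which uses them to classify each input character; how the non-ASCII letters are tokenized depends on the database encoding and LC_CTYPE:

    -- The parser decides alpha/digit/space per character via the p_is*()
    -- functions generated by the p_iswhat macro.
    SELECT tokid, token FROM ts_parse('default', 'naïve café 42');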
--- src/backend/regex/regc_pg_locale.c | 24 +++--- src/backend/tsearch/wparser_def.c | 113 +++++++---------------------- 2 files changed, 40 insertions(+), 97 deletions(-) diff --git a/src/backend/regex/regc_pg_locale.c b/src/backend/regex/regc_pg_locale.c index b2122e9e8f..e39ee7ae09 100644 --- a/src/backend/regex/regc_pg_locale.c +++ b/src/backend/regex/regc_pg_locale.c @@ -29,20 +29,20 @@ * * 2. In the "default" collation (which is supposed to obey LC_CTYPE): * - * 2a. When working in UTF8 encoding, we use the functions if - * available. This assumes that every platform uses Unicode codepoints - * directly as the wchar_t representation of Unicode. On some platforms + * 2a. When working in UTF8 encoding, we use the functions. + * This assumes that every platform uses Unicode codepoints directly + * as the wchar_t representation of Unicode. On some platforms * wchar_t is only 16 bits wide, so we have to punt for codepoints > 0xFFFF. * - * 2b. In all other encodings, or on machines that lack , we use - * the functions for pg_wchar values up to 255, and punt for values - * above that. This is only 100% correct in single-byte encodings such as - * LATINn. However, non-Unicode multibyte encodings are mostly Far Eastern - * character sets for which the properties being tested here aren't very - * relevant for higher code values anyway. The difficulty with using the - * functions with non-Unicode multibyte encodings is that we can - * have no certainty that the platform's wchar_t representation matches - * what we do in pg_wchar conversions. + * 2b. In all other encodings, we use the functions for pg_wchar + * values up to 255, and punt for values above that. This is 100% correct + * only in single-byte encodings such as LATINn. However, non-Unicode + * multibyte encodings are mostly Far Eastern character sets for which the + * properties being tested here aren't very relevant for higher code values + * anyway. The difficulty with using the functions with + * non-Unicode multibyte encodings is that we can have no certainty that + * the platform's wchar_t representation matches what we do in pg_wchar + * conversions. * * 3. Other collations are only supported on platforms that HAVE_LOCALE_T. * Here, we use the locale_t-extended forms of the and diff --git a/src/backend/tsearch/wparser_def.c b/src/backend/tsearch/wparser_def.c index c118357336..8450e1c08e 100644 --- a/src/backend/tsearch/wparser_def.c +++ b/src/backend/tsearch/wparser_def.c @@ -427,94 +427,45 @@ TParserCopyClose(TParser *prs) * - if locale is C then we use pgwstr instead of wstr. 
*/ -#define p_iswhat(type) \ +#define p_iswhat(type, nonascii) \ + \ static int \ -p_is##type(TParser *prs) { \ - Assert( prs->state ); \ - if ( prs->usewide ) \ +p_is##type(TParser *prs) \ +{ \ + Assert(prs->state); \ + if (prs->usewide) \ { \ - if ( prs->pgwstr ) \ + if (prs->pgwstr) \ { \ unsigned int c = *(prs->pgwstr + prs->state->poschar); \ - if ( c > 0x7f ) \ - return 0; \ - return is##type( c ); \ + if (c > 0x7f) \ + return nonascii; \ + return is##type(c); \ } \ - return isw##type( *( prs->wstr + prs->state->poschar ) ); \ + return isw##type(*(prs->wstr + prs->state->poschar)); \ } \ - \ - return is##type( *(unsigned char*)( prs->str + prs->state->posbyte ) ); \ -} \ + return is##type(*(unsigned char *) (prs->str + prs->state->posbyte)); \ +} \ \ static int \ -p_isnot##type(TParser *prs) { \ +p_isnot##type(TParser *prs) \ +{ \ return !p_is##type(prs); \ } -static int -p_isalnum(TParser *prs) -{ - Assert(prs->state); - - if (prs->usewide) - { - if (prs->pgwstr) - { - unsigned int c = *(prs->pgwstr + prs->state->poschar); - - /* - * any non-ascii symbol with multibyte encoding with C-locale is - * an alpha character - */ - if (c > 0x7f) - return 1; - - return isalnum(c); - } - - return iswalnum(*(prs->wstr + prs->state->poschar)); - } - - return isalnum(*(unsigned char *) (prs->str + prs->state->posbyte)); -} -static int -p_isnotalnum(TParser *prs) -{ - return !p_isalnum(prs); -} - -static int -p_isalpha(TParser *prs) -{ - Assert(prs->state); - - if (prs->usewide) - { - if (prs->pgwstr) - { - unsigned int c = *(prs->pgwstr + prs->state->poschar); - - /* - * any non-ascii symbol with multibyte encoding with C-locale is - * an alpha character - */ - if (c > 0x7f) - return 1; - - return isalpha(c); - } - - return iswalpha(*(prs->wstr + prs->state->poschar)); - } - - return isalpha(*(unsigned char *) (prs->str + prs->state->posbyte)); -} - -static int -p_isnotalpha(TParser *prs) -{ - return !p_isalpha(prs); -} +/* + * In C locale with a multibyte encoding, any non-ASCII symbol is considered + * an alpha character, but not a member of other char classes. + */ +p_iswhat(alnum, 1) +p_iswhat(alpha, 1) +p_iswhat(digit, 0) +p_iswhat(lower, 0) +p_iswhat(print, 0) +p_iswhat(punct, 0) +p_iswhat(space, 0) +p_iswhat(upper, 0) +p_iswhat(xdigit, 0) /* p_iseq should be used only for ascii symbols */ @@ -525,14 +476,6 @@ p_iseq(TParser *prs, char c) return ((prs->state->charlen == 1 && *(prs->str + prs->state->posbyte) == c)) ? 1 : 0; } -p_iswhat(digit) -p_iswhat(lower) -p_iswhat(print) -p_iswhat(punct) -p_iswhat(space) -p_iswhat(upper) -p_iswhat(xdigit) - static int p_isEOF(TParser *prs) { From 5d3cad564729f64d972c5c803ff34f0eb40bfd0c Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 22 Sep 2017 10:59:46 -0400 Subject: [PATCH 0244/1087] Remove contrib/chkpass The recent addition of a test suite for this module revealed a few problems. It uses a crypt() method that is no longer considered secure and doesn't work anymore on some platforms. Using a volatile input function violates internal sanity check assumptions and leads to failures on the build farm. So this module is neither a usable security tool nor a good example for an extension. No one wanted to argue for keeping or improving it, so remove it. 
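For anyone still relying on the type, one plausible migration path (sketched here with hypothetical table and column names, not part of this commit) is pgcrypto's adaptive password hashing:

    -- Requires the pgcrypto extension.
    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    CREATE TABLE accounts (login text PRIMARY KEY, pw text);
    -- Store a salted bcrypt hash instead of chkpass's crypt(3) DES output.
    INSERT INTO accounts VALUES ('alice', crypt('hello', gen_salt('bf')));
    -- Verify by re-hashing the candidate with the stored hash as the salt.
    SELECT crypt('hello', pw) = pw AS ok FROM accounts WHERE login = 'alice';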
Discussion: https://www.postgresql.org/message-id/5645b0d7-cc40-6ab5-c553-292a91091ee7%402ndquadrant.com --- contrib/Makefile | 1 - contrib/chkpass/.gitignore | 4 - contrib/chkpass/Makefile | 23 --- contrib/chkpass/chkpass--1.0.sql | 70 -------- contrib/chkpass/chkpass--unpackaged--1.0.sql | 13 -- contrib/chkpass/chkpass.c | 175 ------------------- contrib/chkpass/chkpass.control | 5 - contrib/chkpass/expected/chkpass.out | 18 -- contrib/chkpass/sql/chkpass.sql | 7 - doc/src/sgml/chkpass.sgml | 95 ---------- doc/src/sgml/contrib.sgml | 1 - doc/src/sgml/filelist.sgml | 1 - 12 files changed, 413 deletions(-) delete mode 100644 contrib/chkpass/.gitignore delete mode 100644 contrib/chkpass/Makefile delete mode 100644 contrib/chkpass/chkpass--1.0.sql delete mode 100644 contrib/chkpass/chkpass--unpackaged--1.0.sql delete mode 100644 contrib/chkpass/chkpass.c delete mode 100644 contrib/chkpass/chkpass.control delete mode 100644 contrib/chkpass/expected/chkpass.out delete mode 100644 contrib/chkpass/sql/chkpass.sql delete mode 100644 doc/src/sgml/chkpass.sgml diff --git a/contrib/Makefile b/contrib/Makefile index e84eb67008..8046ca4f39 100644 --- a/contrib/Makefile +++ b/contrib/Makefile @@ -12,7 +12,6 @@ SUBDIRS = \ bloom \ btree_gin \ btree_gist \ - chkpass \ citext \ cube \ dblink \ diff --git a/contrib/chkpass/.gitignore b/contrib/chkpass/.gitignore deleted file mode 100644 index 5dcb3ff972..0000000000 --- a/contrib/chkpass/.gitignore +++ /dev/null @@ -1,4 +0,0 @@ -# Generated subdirectories -/log/ -/results/ -/tmp_check/ diff --git a/contrib/chkpass/Makefile b/contrib/chkpass/Makefile deleted file mode 100644 index dbecc3360b..0000000000 --- a/contrib/chkpass/Makefile +++ /dev/null @@ -1,23 +0,0 @@ -# contrib/chkpass/Makefile - -MODULE_big = chkpass -OBJS = chkpass.o $(WIN32RES) - -EXTENSION = chkpass -DATA = chkpass--1.0.sql chkpass--unpackaged--1.0.sql -PGFILEDESC = "chkpass - encrypted password data type" - -SHLIB_LINK = $(filter -lcrypt, $(LIBS)) - -REGRESS = chkpass - -ifdef USE_PGXS -PG_CONFIG = pg_config -PGXS := $(shell $(PG_CONFIG) --pgxs) -include $(PGXS) -else -subdir = contrib/chkpass -top_builddir = ../.. -include $(top_builddir)/src/Makefile.global -include $(top_srcdir)/contrib/contrib-global.mk -endif diff --git a/contrib/chkpass/chkpass--1.0.sql b/contrib/chkpass/chkpass--1.0.sql deleted file mode 100644 index 406a61924c..0000000000 --- a/contrib/chkpass/chkpass--1.0.sql +++ /dev/null @@ -1,70 +0,0 @@ -/* contrib/chkpass/chkpass--1.0.sql */ - --- complain if script is sourced in psql, rather than via CREATE EXTENSION -\echo Use "CREATE EXTENSION chkpass" to load this file. \quit - --- --- Input and output functions and the type itself: --- - -CREATE FUNCTION chkpass_in(cstring) - RETURNS chkpass - AS 'MODULE_PATHNAME' - LANGUAGE C STRICT VOLATILE; --- Note: chkpass_in actually is volatile, because of its use of random(). --- In hindsight that was a bad idea, but there's no way to change it without --- breaking some usage patterns. 
- -CREATE FUNCTION chkpass_out(chkpass) - RETURNS cstring - AS 'MODULE_PATHNAME' - LANGUAGE C STRICT IMMUTABLE; - -CREATE TYPE chkpass ( - internallength = 16, - input = chkpass_in, - output = chkpass_out -); - -CREATE FUNCTION raw(chkpass) - RETURNS text - AS 'MODULE_PATHNAME', 'chkpass_rout' - LANGUAGE C STRICT; - --- --- The various boolean tests: --- - -CREATE FUNCTION eq(chkpass, text) - RETURNS bool - AS 'MODULE_PATHNAME', 'chkpass_eq' - LANGUAGE C STRICT; - -CREATE FUNCTION ne(chkpass, text) - RETURNS bool - AS 'MODULE_PATHNAME', 'chkpass_ne' - LANGUAGE C STRICT; - --- --- Now the operators. --- - -CREATE OPERATOR = ( - leftarg = chkpass, - rightarg = text, - negator = <>, - procedure = eq -); - -CREATE OPERATOR <> ( - leftarg = chkpass, - rightarg = text, - negator = =, - procedure = ne -); - -COMMENT ON TYPE chkpass IS 'password type with checks'; - --- --- eof --- diff --git a/contrib/chkpass/chkpass--unpackaged--1.0.sql b/contrib/chkpass/chkpass--unpackaged--1.0.sql deleted file mode 100644 index 8bdecddfa5..0000000000 --- a/contrib/chkpass/chkpass--unpackaged--1.0.sql +++ /dev/null @@ -1,13 +0,0 @@ -/* contrib/chkpass/chkpass--unpackaged--1.0.sql */ - --- complain if script is sourced in psql, rather than via CREATE EXTENSION -\echo Use "CREATE EXTENSION chkpass FROM unpackaged" to load this file. \quit - -ALTER EXTENSION chkpass ADD type chkpass; -ALTER EXTENSION chkpass ADD function chkpass_in(cstring); -ALTER EXTENSION chkpass ADD function chkpass_out(chkpass); -ALTER EXTENSION chkpass ADD function raw(chkpass); -ALTER EXTENSION chkpass ADD function eq(chkpass,text); -ALTER EXTENSION chkpass ADD function ne(chkpass,text); -ALTER EXTENSION chkpass ADD operator <>(chkpass,text); -ALTER EXTENSION chkpass ADD operator =(chkpass,text); diff --git a/contrib/chkpass/chkpass.c b/contrib/chkpass/chkpass.c deleted file mode 100644 index 3803ccff9a..0000000000 --- a/contrib/chkpass/chkpass.c +++ /dev/null @@ -1,175 +0,0 @@ -/* - * PostgreSQL type definitions for chkpass - * Written by D'Arcy J.M. Cain - * darcy@druid.net - * http://www.druid.net/darcy/ - * - * contrib/chkpass/chkpass.c - * best viewed with tabs set to 4 - */ - -#include "postgres.h" - -#include -#include -#ifdef HAVE_CRYPT_H -#include -#endif - -#include "fmgr.h" -#include "utils/backend_random.h" -#include "utils/builtins.h" - -PG_MODULE_MAGIC; - -/* - * This type encrypts it's input unless the first character is a colon. - * The output is the encrypted form with a leading colon. The output - * format is designed to allow dump and reload operations to work as - * expected without doing special tricks. - */ - - -/* - * This is the internal storage format for CHKPASSs. - * 15 is all I need but add a little buffer - */ - -typedef struct chkpass -{ - char password[16]; -} chkpass; - - -/* This function checks that the password is a good one - * It's just a placeholder for now */ -static int -verify_pass(const char *str) -{ - return 0; -} - -/* - * CHKPASS reader. 
- */ -PG_FUNCTION_INFO_V1(chkpass_in); -Datum -chkpass_in(PG_FUNCTION_ARGS) -{ - char *str = PG_GETARG_CSTRING(0); - chkpass *result; - char mysalt[4]; - char *crypt_output; - static char salt_chars[] = - "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; - - /* special case to let us enter encrypted passwords */ - if (*str == ':') - { - result = (chkpass *) palloc0(sizeof(chkpass)); - strlcpy(result->password, str + 1, 13 + 1); - PG_RETURN_POINTER(result); - } - - if (verify_pass(str) != 0) - ereport(ERROR, - (errcode(ERRCODE_DATA_EXCEPTION), - errmsg("password \"%s\" is weak", str))); - - result = (chkpass *) palloc0(sizeof(chkpass)); - - if (!pg_backend_random(mysalt, 2)) - ereport(ERROR, - (errmsg("could not generate random salt"))); - - mysalt[0] = salt_chars[mysalt[0] & 0x3f]; - mysalt[1] = salt_chars[mysalt[1] & 0x3f]; - mysalt[2] = 0; /* technically the terminator is not necessary - * but I like to play safe */ - - crypt_output = crypt(str, mysalt); - if (crypt_output == NULL) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("crypt() failed"))); - - strlcpy(result->password, crypt_output, sizeof(result->password)); - - PG_RETURN_POINTER(result); -} - -/* - * CHKPASS output function. - * Just like any string but we know it is max 15 (13 plus colon and terminator.) - */ - -PG_FUNCTION_INFO_V1(chkpass_out); -Datum -chkpass_out(PG_FUNCTION_ARGS) -{ - chkpass *password = (chkpass *) PG_GETARG_POINTER(0); - char *result; - - result = (char *) palloc(16); - result[0] = ':'; - strlcpy(result + 1, password->password, 15); - - PG_RETURN_CSTRING(result); -} - - -/* - * special output function that doesn't output the colon - */ - -PG_FUNCTION_INFO_V1(chkpass_rout); -Datum -chkpass_rout(PG_FUNCTION_ARGS) -{ - chkpass *password = (chkpass *) PG_GETARG_POINTER(0); - - PG_RETURN_TEXT_P(cstring_to_text(password->password)); -} - - -/* - * Boolean tests - */ - -PG_FUNCTION_INFO_V1(chkpass_eq); -Datum -chkpass_eq(PG_FUNCTION_ARGS) -{ - chkpass *a1 = (chkpass *) PG_GETARG_POINTER(0); - text *a2 = PG_GETARG_TEXT_PP(1); - char str[9]; - char *crypt_output; - - text_to_cstring_buffer(a2, str, sizeof(str)); - crypt_output = crypt(str, a1->password); - if (crypt_output == NULL) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("crypt() failed"))); - - PG_RETURN_BOOL(strcmp(a1->password, crypt_output) == 0); -} - -PG_FUNCTION_INFO_V1(chkpass_ne); -Datum -chkpass_ne(PG_FUNCTION_ARGS) -{ - chkpass *a1 = (chkpass *) PG_GETARG_POINTER(0); - text *a2 = PG_GETARG_TEXT_PP(1); - char str[9]; - char *crypt_output; - - text_to_cstring_buffer(a2, str, sizeof(str)); - crypt_output = crypt(str, a1->password); - if (crypt_output == NULL) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("crypt() failed"))); - - PG_RETURN_BOOL(strcmp(a1->password, crypt_output) != 0); -} diff --git a/contrib/chkpass/chkpass.control b/contrib/chkpass/chkpass.control deleted file mode 100644 index bd4b3d3d0d..0000000000 --- a/contrib/chkpass/chkpass.control +++ /dev/null @@ -1,5 +0,0 @@ -# chkpass extension -comment = 'data type for auto-encrypted passwords' -default_version = '1.0' -module_pathname = '$libdir/chkpass' -relocatable = true diff --git a/contrib/chkpass/expected/chkpass.out b/contrib/chkpass/expected/chkpass.out deleted file mode 100644 index b53557bf2a..0000000000 --- a/contrib/chkpass/expected/chkpass.out +++ /dev/null @@ -1,18 +0,0 @@ -CREATE EXTENSION chkpass; -WARNING: type input function chkpass_in should not be volatile -CREATE TABLE test 
(i int, p chkpass); -INSERT INTO test VALUES (1, 'hello'), (2, 'goodbye'); -SELECT i, p = 'hello' AS "hello?" FROM test; - i | hello? ----+-------- - 1 | t - 2 | f -(2 rows) - -SELECT i, p <> 'hello' AS "!hello?" FROM test; - i | !hello? ----+--------- - 1 | f - 2 | t -(2 rows) - diff --git a/contrib/chkpass/sql/chkpass.sql b/contrib/chkpass/sql/chkpass.sql deleted file mode 100644 index 595683e249..0000000000 --- a/contrib/chkpass/sql/chkpass.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE EXTENSION chkpass; - -CREATE TABLE test (i int, p chkpass); -INSERT INTO test VALUES (1, 'hello'), (2, 'goodbye'); - -SELECT i, p = 'hello' AS "hello?" FROM test; -SELECT i, p <> 'hello' AS "!hello?" FROM test; diff --git a/doc/src/sgml/chkpass.sgml b/doc/src/sgml/chkpass.sgml deleted file mode 100644 index 9f682d8981..0000000000 --- a/doc/src/sgml/chkpass.sgml +++ /dev/null @@ -1,95 +0,0 @@ - - - - chkpass - - - chkpass - - - - This module implements a data type chkpass that is - designed for storing encrypted passwords. - Each password is automatically converted to encrypted form upon entry, - and is always stored encrypted. To compare, simply compare against a clear - text password and the comparison function will encrypt it before comparing. - - - - There are provisions in the code to report an error if the password is - determined to be easily crackable. However, this is currently just - a stub that does nothing. - - - - If you precede an input string with a colon, it is assumed to be an - already-encrypted password, and is stored without further encryption. - This allows entry of previously-encrypted passwords. - - - - On output, a colon is prepended. This makes it possible to dump and reload - passwords without re-encrypting them. If you want the encrypted password - without the colon then use the raw() function. - This allows you to use the - type with things like Apache's Auth_PostgreSQL module. - - - - The encryption uses the standard Unix function crypt(), - and so it suffers - from all the usual limitations of that function; notably that only the - first eight characters of a password are considered. - - - - Note that the chkpass data type is not indexable. - - - - - Sample usage: - - - -test=# create table test (p chkpass); -CREATE TABLE -test=# insert into test values ('hello'); -INSERT 0 1 -test=# select * from test; - p ----------------- - :dVGkpXdOrE3ko -(1 row) - -test=# select raw(p) from test; - raw ---------------- - dVGkpXdOrE3ko -(1 row) - -test=# select p = 'hello' from test; - ?column? ----------- - t -(1 row) - -test=# select p = 'goodbye' from test; - ?column? ----------- - f -(1 row) - - - - Author - - - D'Arcy J.M. Cain (darcy@druid.net) - - - - diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml index eaaa36cb87..f32b8a81a2 100644 --- a/doc/src/sgml/contrib.sgml +++ b/doc/src/sgml/contrib.sgml @@ -109,7 +109,6 @@ CREATE EXTENSION module_name FROM unpackaged; &bloom; &btree-gin; &btree-gist; - &chkpass; &citext; &cube; &dblink; diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml index b914086009..bd371fd1d3 100644 --- a/doc/src/sgml/filelist.sgml +++ b/doc/src/sgml/filelist.sgml @@ -110,7 +110,6 @@ - From 0f574a7afb5c998d19dc3d981e45cb10267286ed Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 22 Sep 2017 12:59:44 -0400 Subject: [PATCH 0245/1087] Allow up to 3 "-P 1" reports per thread in pgbench run of 2 seconds. There seems to be some considerable imprecision in the timing of -P progress reports. 
Nominally each thread ought to produce 2 reports in this test, but about 10% of the time we only get one, and 1% of the time we get three, as per buildfarm results so far. Pending further investigation, treat the last case as a "pass". (I, tgl, am suspicious that this still might not be lax enough, now that it's obvious that the behavior is load-dependent; but there's not yet buildfarm evidence to confirm that suspicion.) Fabien Coelho Discussion: https://postgr.es/m/26654.1505232433@sss.pgh.pa.us --- src/bin/pgbench/t/001_pgbench_with_server.pl | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 3609b9bede..7db4bc8c97 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -462,8 +462,9 @@ sub check_pgbench_logs [ qr{vacuum}, qr{progress: 1\b} ], 'pgbench progress'); -# $nthreads threads, 2 seconds, sometimes only one aggregated line is written -check_pgbench_logs('001_pgbench_log_1', $nthreads, 1, 2, +# $nthreads threads, 2 seconds, but due to timing imprecision we might get +# only 1 or as many as 3 progress reports per thread. +check_pgbench_logs('001_pgbench_log_1', $nthreads, 1, 3, qr{^\d+ \d{1,2} \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+$}); # with sampling rate From 7c75ef571579a3ad7a1d3ee909f11dba5e0b9440 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 22 Sep 2017 13:26:25 -0400 Subject: [PATCH 0246/1087] hash: Implement page-at-a-time scan. Commit 09cb5c0e7d6fbc9dee26dc429e4fc0f2a88e5272 added a similar optimization to btree back in 2006, but nobody bothered to implement the same thing for hash indexes, probably because they weren't WAL-logged and had lots of other performance problems as well. As with the corresponding btree case, this eliminates the problem of potentially needing to refind our position within the page, and cuts down on pin/unpin traffic as well. Ashutosh Sharma, reviewed by Alexander Korotkov, Jesper Pedersen, Amit Kapila, and me. Some final edits to comments and README by me. 
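A minimal way to exercise the new scan code is an equality probe through a hash index (table and index names here are illustrative only):

    -- Matching heap TIDs on each bucket page are now collected in a single
    -- pass rather than re-found one tuple at a time.
    CREATE TABLE hash_t (k integer);
    INSERT INTO hash_t SELECT g % 100 FROM generate_series(1, 100000) g;
    CREATE INDEX hash_t_k_idx ON hash_t USING hash (k);
    SET enable_seqscan = off;
    SELECT count(*) FROM hash_t WHERE k = 42;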
Discussion: http://postgr.es/m/CAE9k0Pm3KTx93K8_5j6VMzG4h5F+SyknxUwXrN-zqSZ9X8ZS3w@mail.gmail.com --- src/backend/access/hash/README | 79 ++-- src/backend/access/hash/hash.c | 165 ++------ src/backend/access/hash/hashpage.c | 10 +- src/backend/access/hash/hashsearch.c | 569 +++++++++++++++++---------- src/backend/access/hash/hashutil.c | 67 +++- src/include/access/hash.h | 70 +++- 6 files changed, 562 insertions(+), 398 deletions(-) diff --git a/src/backend/access/hash/README b/src/backend/access/hash/README index c8a0ec78a9..5827389a70 100644 --- a/src/backend/access/hash/README +++ b/src/backend/access/hash/README @@ -259,10 +259,11 @@ The reader algorithm is: -- then, per read request: reacquire content lock on current page step to next page if necessary (no chaining of content locks, but keep - the pin on the primary bucket throughout the scan; we also maintain - a pin on the page currently being scanned) - get tuple - release content lock + the pin on the primary bucket throughout the scan) + save all the matching tuples from current index page into an items array + release pin and content lock (but if it is primary bucket page retain + its pin till the end of the scan) + get tuple from an item array -- at scan shutdown: release all pins still held @@ -270,15 +271,13 @@ Holding the buffer pin on the primary bucket page for the whole scan prevents the reader's current-tuple pointer from being invalidated by splits or compactions. (Of course, other buckets can still be split or compacted.) -To keep concurrency reasonably good, we require readers to cope with -concurrent insertions, which means that they have to be able to re-find -their current scan position after re-acquiring the buffer content lock on -page. Since deletion is not possible while a reader holds the pin on bucket, -and we assume that heap tuple TIDs are unique, this can be implemented by -searching for the same heap tuple TID previously returned. Insertion does -not move index entries across pages, so the previously-returned index entry -should always be on the same page, at the same or higher offset number, -as it was before. +To minimize lock/unlock traffic, hash index scan always searches the entire +hash page to identify all the matching items at once, copying their heap tuple +IDs into backend-local storage. The heap tuple IDs are then processed while not +holding any page lock within the index thereby, allowing concurrent insertion +to happen on the same index page without any requirement of re-finding the +current scan position for the reader. We do continue to hold a pin on the +bucket page, to protect against concurrent deletions and bucket split. To allow for scans during a bucket split, if at the start of the scan, the bucket is marked as bucket-being-populated, it scan all the tuples in that @@ -415,23 +414,43 @@ The fourth operation is garbage collection (bulk deletion): Note that this is designed to allow concurrent splits and scans. If a split occurs, tuples relocated into the new bucket will be visited twice by the -scan, but that does no harm. As we release the lock on bucket page during -cleanup scan of a bucket, it will allow concurrent scan to start on a bucket -and ensures that scan will always be behind cleanup. It is must to keep scans -behind cleanup, else vacuum could decrease the TIDs that are required to -complete the scan. 
Now, as the scan that returns multiple tuples from the -same bucket page always expect next valid TID to be greater than or equal to -the current TID, it might miss the tuples. This holds true for backward scans -as well (backward scans first traverse each bucket starting from first bucket -to last overflow page in the chain). We must be careful about the statistics -reported by the VACUUM operation. What we can do is count the number of -tuples scanned, and believe this in preference to the stored tuple count if -the stored tuple count and number of buckets did *not* change at any time -during the scan. This provides a way of correcting the stored tuple count if -it gets out of sync for some reason. But if a split or insertion does occur -concurrently, the scan count is untrustworthy; instead, subtract the number of -tuples deleted from the stored tuple count and use that. - +scan, but that does no harm. See also "Interlocking Between Scans and +VACUUM", below. + +We must be careful about the statistics reported by the VACUUM operation. +What we can do is count the number of tuples scanned, and believe this in +preference to the stored tuple count if the stored tuple count and number of +buckets did *not* change at any time during the scan. This provides a way of +correcting the stored tuple count if it gets out of sync for some reason. But +if a split or insertion does occur concurrently, the scan count is +untrustworthy; instead, subtract the number of tuples deleted from the stored +tuple count and use that. + +Interlocking Between Scans and VACUUM +------------------------------------- + +Since we release the lock on bucket page during a cleanup scan of a bucket, a +concurrent scan could start in that bucket before we've finished vacuuming it. +If a scan gets ahead of cleanup, we could have the following problem: (1) the +scan sees heap TIDs that are about to be removed before they are processed by +VACUUM, (2) the scan decides that one or more of those TIDs are dead, (3) +VACUUM completes, (4) one or more of the TIDs the scan decided were dead are +reused for an unrelated tuple, and finally (5) the scan wakes up and +erroneously kills the new tuple. + +Note that this requires VACUUM and a scan to be active in the same bucket at +the same time. If VACUUM completes before the scan starts, the scan never has +a chance to see the dead tuples; if the scan completes before the VACUUM +starts, the heap TIDs can't have been reused meanwhile. Furthermore, VACUUM +can't start on a bucket that has an active scan, because the scan holds a pin +on the primary bucket page, and VACUUM must take a cleanup lock on that page +in order to begin cleanup. Therefore, the only way this problem can occur is +for a scan to start after VACUUM has released the cleanup lock on the bucket +but before it has processed the entire bucket and then overtake the cleanup +operation. + +Currently, we prevent this using lock chaining: cleanup locks the next page +in the chain before releasing the lock and pin on the page just processed.
Free Space Management --------------------- diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c index d89c192862..0fef60a858 100644 --- a/src/backend/access/hash/hash.c +++ b/src/backend/access/hash/hash.c @@ -268,65 +268,20 @@ bool hashgettuple(IndexScanDesc scan, ScanDirection dir) { HashScanOpaque so = (HashScanOpaque) scan->opaque; - Relation rel = scan->indexRelation; - Buffer buf; - Page page; - OffsetNumber offnum; - ItemPointer current; bool res; /* Hash indexes are always lossy since we store only the hash code */ scan->xs_recheck = true; - /* - * We hold pin but not lock on current buffer while outside the hash AM. - * Reacquire the read lock here. - */ - if (BufferIsValid(so->hashso_curbuf)) - LockBuffer(so->hashso_curbuf, BUFFER_LOCK_SHARE); - /* * If we've already initialized this scan, we can just advance it in the * appropriate direction. If we haven't done so yet, we call a routine to * get the first item in the scan. */ - current = &(so->hashso_curpos); - if (ItemPointerIsValid(current)) + if (!HashScanPosIsValid(so->currPos)) + res = _hash_first(scan, dir); + else { - /* - * An insertion into the current index page could have happened while - * we didn't have read lock on it. Re-find our position by looking - * for the TID we previously returned. (Because we hold a pin on the - * primary bucket page, no deletions or splits could have occurred; - * therefore we can expect that the TID still exists in the current - * index page, at an offset >= where we were.) - */ - OffsetNumber maxoffnum; - - buf = so->hashso_curbuf; - Assert(BufferIsValid(buf)); - page = BufferGetPage(buf); - - /* - * We don't need test for old snapshot here as the current buffer is - * pinned, so vacuum can't clean the page. - */ - maxoffnum = PageGetMaxOffsetNumber(page); - for (offnum = ItemPointerGetOffsetNumber(current); - offnum <= maxoffnum; - offnum = OffsetNumberNext(offnum)) - { - IndexTuple itup; - - itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum)); - if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid))) - break; - } - if (offnum > maxoffnum) - elog(ERROR, "failed to re-find scan position within index \"%s\"", - RelationGetRelationName(rel)); - ItemPointerSetOffsetNumber(current, offnum); - /* * Check to see if we should kill the previously-fetched tuple. */ @@ -341,16 +296,11 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir) * entries. */ if (so->killedItems == NULL) - so->killedItems = palloc(MaxIndexTuplesPerPage * - sizeof(HashScanPosItem)); + so->killedItems = (int *) + palloc(MaxIndexTuplesPerPage * sizeof(int)); if (so->numKilled < MaxIndexTuplesPerPage) - { - so->killedItems[so->numKilled].heapTid = so->hashso_heappos; - so->killedItems[so->numKilled].indexOffset = - ItemPointerGetOffsetNumber(&(so->hashso_curpos)); - so->numKilled++; - } + so->killedItems[so->numKilled++] = so->currPos.itemIndex; } /* @@ -358,30 +308,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir) */ res = _hash_next(scan, dir); } - else - res = _hash_first(scan, dir); - - /* - * Skip killed tuples if asked to. 
- */ - if (scan->ignore_killed_tuples) - { - while (res) - { - offnum = ItemPointerGetOffsetNumber(current); - page = BufferGetPage(so->hashso_curbuf); - if (!ItemIdIsDead(PageGetItemId(page, offnum))) - break; - res = _hash_next(scan, dir); - } - } - - /* Release read lock on current buffer, but keep it pinned */ - if (BufferIsValid(so->hashso_curbuf)) - LockBuffer(so->hashso_curbuf, BUFFER_LOCK_UNLOCK); - - /* Return current heap TID on success */ - scan->xs_ctup.t_self = so->hashso_heappos; return res; } @@ -396,35 +322,21 @@ hashgetbitmap(IndexScanDesc scan, TIDBitmap *tbm) HashScanOpaque so = (HashScanOpaque) scan->opaque; bool res; int64 ntids = 0; + HashScanPosItem *currItem; res = _hash_first(scan, ForwardScanDirection); while (res) { - bool add_tuple; + currItem = &so->currPos.items[so->currPos.itemIndex]; /* - * Skip killed tuples if asked to. + * _hash_first and _hash_next handle eliminate dead index entries + * whenever scan->ignored_killed_tuples is true. Therefore, there's + * nothing to do here except add the results to the TIDBitmap. */ - if (scan->ignore_killed_tuples) - { - Page page; - OffsetNumber offnum; - - offnum = ItemPointerGetOffsetNumber(&(so->hashso_curpos)); - page = BufferGetPage(so->hashso_curbuf); - add_tuple = !ItemIdIsDead(PageGetItemId(page, offnum)); - } - else - add_tuple = true; - - /* Save tuple ID, and continue scanning */ - if (add_tuple) - { - /* Note we mark the tuple ID as requiring recheck */ - tbm_add_tuples(tbm, &(so->hashso_heappos), 1, true); - ntids++; - } + tbm_add_tuples(tbm, &(currItem->heapTid), 1, true); + ntids++; res = _hash_next(scan, ForwardScanDirection); } @@ -448,12 +360,9 @@ hashbeginscan(Relation rel, int nkeys, int norderbys) scan = RelationGetIndexScan(rel, nkeys, norderbys); so = (HashScanOpaque) palloc(sizeof(HashScanOpaqueData)); - so->hashso_curbuf = InvalidBuffer; + HashScanPosInvalidate(so->currPos); so->hashso_bucket_buf = InvalidBuffer; so->hashso_split_bucket_buf = InvalidBuffer; - /* set position invalid (this will cause _hash_first call) */ - ItemPointerSetInvalid(&(so->hashso_curpos)); - ItemPointerSetInvalid(&(so->hashso_heappos)); so->hashso_buc_populated = false; so->hashso_buc_split = false; @@ -476,22 +385,17 @@ hashrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys, HashScanOpaque so = (HashScanOpaque) scan->opaque; Relation rel = scan->indexRelation; - /* - * Before leaving current page, deal with any killed items. Also, ensure - * that we acquire lock on current page before calling _hash_kill_items. - */ - if (so->numKilled > 0) + if (HashScanPosIsValid(so->currPos)) { - LockBuffer(so->hashso_curbuf, BUFFER_LOCK_SHARE); - _hash_kill_items(scan); - LockBuffer(so->hashso_curbuf, BUFFER_LOCK_UNLOCK); + /* Before leaving current page, deal with any killed items */ + if (so->numKilled > 0) + _hash_kill_items(scan); } _hash_dropscanbuf(rel, so); /* set position invalid (this will cause _hash_first call) */ - ItemPointerSetInvalid(&(so->hashso_curpos)); - ItemPointerSetInvalid(&(so->hashso_heappos)); + HashScanPosInvalidate(so->currPos); /* Update scan key, if a new one is given */ if (scankey && scan->numberOfKeys > 0) @@ -514,15 +418,11 @@ hashendscan(IndexScanDesc scan) HashScanOpaque so = (HashScanOpaque) scan->opaque; Relation rel = scan->indexRelation; - /* - * Before leaving current page, deal with any killed items. Also, ensure - * that we acquire lock on current page before calling _hash_kill_items. 
- */ - if (so->numKilled > 0) + if (HashScanPosIsValid(so->currPos)) { - LockBuffer(so->hashso_curbuf, BUFFER_LOCK_SHARE); - _hash_kill_items(scan); - LockBuffer(so->hashso_curbuf, BUFFER_LOCK_UNLOCK); + /* Before leaving current page, deal with any killed items */ + if (so->numKilled > 0) + _hash_kill_items(scan); } _hash_dropscanbuf(rel, so); @@ -755,16 +655,15 @@ hashvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats) * primary bucket page. The lock won't necessarily be held continuously, * though, because we'll release it when visiting overflow pages. * - * It would be very bad if this function cleaned a page while some other - * backend was in the midst of scanning it, because hashgettuple assumes - * that the next valid TID will be greater than or equal to the current - * valid TID. There can't be any concurrent scans in progress when we first - * enter this function because of the cleanup lock we hold on the primary - * bucket page, but as soon as we release that lock, there might be. We - * handle that by conspiring to prevent those scans from passing our cleanup - * scan. To do that, we lock the next page in the bucket chain before - * releasing the lock on the previous page. (This type of lock chaining is - * not ideal, so we might want to look for a better solution at some point.) + * There can't be any concurrent scans in progress when we first enter this + * function because of the cleanup lock we hold on the primary bucket page, + * but as soon as we release that lock, there might be. If those scans got + * ahead of our cleanup scan, they might see a tuple before we kill it and + * wake up only after VACUUM has completed and the TID has been recycled for + * an unrelated tuple. To avoid that calamity, we prevent scans from passing + * our cleanup scan by locking the next page in the bucket chain before + * releasing the lock on the previous page. (This type of lock chaining is not + * ideal, so we might want to look for a better solution at some point.) * * We need to retain a pin on the primary bucket to ensure that no concurrent * split can start. 
diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c index 05798419fc..f279dcea1d 100644 --- a/src/backend/access/hash/hashpage.c +++ b/src/backend/access/hash/hashpage.c @@ -298,20 +298,20 @@ _hash_dropscanbuf(Relation rel, HashScanOpaque so) { /* release pin we hold on primary bucket page */ if (BufferIsValid(so->hashso_bucket_buf) && - so->hashso_bucket_buf != so->hashso_curbuf) + so->hashso_bucket_buf != so->currPos.buf) _hash_dropbuf(rel, so->hashso_bucket_buf); so->hashso_bucket_buf = InvalidBuffer; /* release pin we hold on primary bucket page of bucket being split */ if (BufferIsValid(so->hashso_split_bucket_buf) && - so->hashso_split_bucket_buf != so->hashso_curbuf) + so->hashso_split_bucket_buf != so->currPos.buf) _hash_dropbuf(rel, so->hashso_split_bucket_buf); so->hashso_split_bucket_buf = InvalidBuffer; /* release any pin we still hold */ - if (BufferIsValid(so->hashso_curbuf)) - _hash_dropbuf(rel, so->hashso_curbuf); - so->hashso_curbuf = InvalidBuffer; + if (BufferIsValid(so->currPos.buf)) + _hash_dropbuf(rel, so->currPos.buf); + so->currPos.buf = InvalidBuffer; /* reset split scan */ so->hashso_buc_populated = false; diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c index 3e461ad7a0..ce5515dbcb 100644 --- a/src/backend/access/hash/hashsearch.c +++ b/src/backend/access/hash/hashsearch.c @@ -20,44 +20,105 @@ #include "pgstat.h" #include "utils/rel.h" +static bool _hash_readpage(IndexScanDesc scan, Buffer *bufP, + ScanDirection dir); +static int _hash_load_qualified_items(IndexScanDesc scan, Page page, + OffsetNumber offnum, ScanDirection dir); +static inline void _hash_saveitem(HashScanOpaque so, int itemIndex, + OffsetNumber offnum, IndexTuple itup); +static void _hash_readnext(IndexScanDesc scan, Buffer *bufp, + Page *pagep, HashPageOpaque *opaquep); /* * _hash_next() -- Get the next item in a scan. * - * On entry, we have a valid hashso_curpos in the scan, and a - * pin and read lock on the page that contains that item. - * We find the next item in the scan, if any. - * On success exit, we have the page containing the next item - * pinned and locked. + * On entry, so->currPos describes the current page, which may + * be pinned but not locked, and so->currPos.itemIndex identifies + * which item was previously returned. + * + * On successful exit, scan->xs_ctup.t_self is set to the TID + * of the next heap tuple. so->currPos is updated as needed. + * + * On failure exit (no more tuples), we return FALSE with pin + * held on bucket page but no pins or locks held on overflow + * page. */ bool _hash_next(IndexScanDesc scan, ScanDirection dir) { Relation rel = scan->indexRelation; HashScanOpaque so = (HashScanOpaque) scan->opaque; + HashScanPosItem *currItem; + BlockNumber blkno; Buffer buf; - Page page; - OffsetNumber offnum; - ItemPointer current; - IndexTuple itup; - - /* we still have the buffer pinned and read-locked */ - buf = so->hashso_curbuf; - Assert(BufferIsValid(buf)); + bool end_of_scan = false; /* - * step to next valid tuple. + * Advance to the next tuple on the current page; or if done, try to read + * data from the next or previous page based on the scan direction. Before + * moving to the next or previous page make sure that we deal with all the + * killed items. 
 */
-	if (!_hash_step(scan, &buf, dir))
+	if (ScanDirectionIsForward(dir))
+	{
+		if (++so->currPos.itemIndex > so->currPos.lastItem)
+		{
+			if (so->numKilled > 0)
+				_hash_kill_items(scan);
+
+			blkno = so->currPos.nextPage;
+			if (BlockNumberIsValid(blkno))
+			{
+				buf = _hash_getbuf(rel, blkno, HASH_READ, LH_OVERFLOW_PAGE);
+				TestForOldSnapshot(scan->xs_snapshot, rel, BufferGetPage(buf));
+				if (!_hash_readpage(scan, &buf, dir))
+					end_of_scan = true;
+			}
+			else
+				end_of_scan = true;
+		}
+	}
+	else
+	{
+		if (--so->currPos.itemIndex < so->currPos.firstItem)
+		{
+			if (so->numKilled > 0)
+				_hash_kill_items(scan);
+
+			blkno = so->currPos.prevPage;
+			if (BlockNumberIsValid(blkno))
+			{
+				buf = _hash_getbuf(rel, blkno, HASH_READ,
+								   LH_BUCKET_PAGE | LH_OVERFLOW_PAGE);
+				TestForOldSnapshot(scan->xs_snapshot, rel, BufferGetPage(buf));
+
+				/*
+				 * We always maintain the pin on the bucket page for the
+				 * whole scan operation, so release the additional pin we
+				 * have acquired here.
+				 */
+				if (buf == so->hashso_bucket_buf ||
+					buf == so->hashso_split_bucket_buf)
+					_hash_dropbuf(rel, buf);
+
+				if (!_hash_readpage(scan, &buf, dir))
+					end_of_scan = true;
+			}
+			else
+				end_of_scan = true;
+		}
+	}
+
+	if (end_of_scan)
+	{
+		_hash_dropscanbuf(rel, so);
+		HashScanPosInvalidate(so->currPos);
 		return false;
+	}
 
-	/* if we're here, _hash_step found a valid tuple */
-	current = &(so->hashso_curpos);
-	offnum = ItemPointerGetOffsetNumber(current);
-	_hash_checkpage(rel, buf, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE);
-	page = BufferGetPage(buf);
-	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
-	so->hashso_heappos = itup->t_tid;
+	/* OK, itemIndex says what to return */
+	currItem = &so->currPos.items[so->currPos.itemIndex];
+	scan->xs_ctup.t_self = currItem->heapTid;
 
 	return true;
 }
@@ -212,11 +273,18 @@ _hash_readprev(IndexScanDesc scan,
 /*
  *	_hash_first() -- Find the first item in a scan.
  *
- *		Find the first item in the index that
- *		satisfies the qualification associated with the scan descriptor.  On
- *		success, the page containing the current index tuple is read locked
- *		and pinned, and the scan's opaque data entry is updated to
- *		include the buffer.
+ *		We find the first item (or, if backward scan, the last item) in the
+ *		index that satisfies the qualification associated with the scan
+ *		descriptor.
+ *
+ *		On successful exit, if the page containing the current index tuple is
+ *		an overflow page, both its pin and lock are released, whereas a bucket
+ *		page is kept pinned but not locked.  In either case, data about the
+ *		matching tuple(s) on the page has been loaded into so->currPos, and
+ *		scan->xs_ctup.t_self is set to the heap TID of the current tuple.
+ *
+ *		On failure exit (no more tuples), we return FALSE, with pin held on
+ *		bucket page but no pins or locks held on overflow page.
 */
 bool
 _hash_first(IndexScanDesc scan, ScanDirection dir)
@@ -229,15 +297,10 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	Buffer		buf;
 	Page		page;
 	HashPageOpaque opaque;
-	IndexTuple	itup;
-	ItemPointer current;
-	OffsetNumber offnum;
+	HashScanPosItem *currItem;
 
 	pgstat_count_index_scan(rel);
 
-	current = &(so->hashso_curpos);
-	ItemPointerSetInvalid(current);
-
 	/*
 	 * We do not support hash scans with no index qualification, because we
 	 * would have to read the whole index rather than just one bucket.
That
@@ -356,222 +419,308 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 			_hash_readnext(scan, &buf, &page, &opaque);
 	}
 
-	/* Now find the first tuple satisfying the qualification */
-	if (!_hash_step(scan, &buf, dir))
+	/* remember which buffer we have pinned, if any */
+	Assert(BufferIsInvalid(so->currPos.buf));
+	so->currPos.buf = buf;
+
+	/* Now find all the tuples satisfying the qualification from a page */
+	if (!_hash_readpage(scan, &buf, dir))
 		return false;
 
-	/* if we're here, _hash_step found a valid tuple */
-	offnum = ItemPointerGetOffsetNumber(current);
-	_hash_checkpage(rel, buf, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE);
-	page = BufferGetPage(buf);
-	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
-	so->hashso_heappos = itup->t_tid;
+	/* OK, itemIndex says what to return */
+	currItem = &so->currPos.items[so->currPos.itemIndex];
+	scan->xs_ctup.t_self = currItem->heapTid;
 
+	/* if we're here, _hash_readpage found valid tuples */
 	return true;
 }
 
 /*
- *	_hash_step() -- step to the next valid item in a scan in the bucket.
- *
- *		If no valid record exists in the requested direction, return
- *		false.  Else, return true and set the hashso_curpos for the
- *		scan to the right thing.
+ *	_hash_readpage() -- Load data from current index page into so->currPos
  *
- *		Here we need to ensure that if the scan has started during split, then
- *		skip the tuples that are moved by split while scanning bucket being
- *		populated and then scan the bucket being split to cover all such
- *		tuples.  This is done to ensure that we don't miss tuples in the scans
- *		that are started during split.
+ *		We scan all the items in the current index page and save them into
+ *		so->currPos if they satisfy the qualification.  If no matching items
+ *		are found in the current page, we move to the next or previous page
+ *		in a bucket chain as indicated by the direction.
  *
- *		'bufP' points to the current buffer, which is pinned and read-locked.
- *		On success exit, we have pin and read-lock on whichever page
- *		contains the right item; on failure, we have released all buffers.
+ *		Return true if any matching items are found, else return false.
 */
-bool
-_hash_step(IndexScanDesc scan, Buffer *bufP, ScanDirection dir)
+static bool
+_hash_readpage(IndexScanDesc scan, Buffer *bufP, ScanDirection dir)
 {
 	Relation	rel = scan->indexRelation;
 	HashScanOpaque so = (HashScanOpaque) scan->opaque;
-	ItemPointer current;
 	Buffer		buf;
 	Page		page;
 	HashPageOpaque opaque;
-	OffsetNumber maxoff;
 	OffsetNumber offnum;
-	BlockNumber blkno;
-	IndexTuple	itup;
-
-	current = &(so->hashso_curpos);
+	uint16		itemIndex;
 
 	buf = *bufP;
+	Assert(BufferIsValid(buf));
 	_hash_checkpage(rel, buf, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE);
 	page = BufferGetPage(buf);
 	opaque = (HashPageOpaque) PageGetSpecialPointer(page);
 
-	/*
-	 * If _hash_step is called from _hash_first, current will not be valid, so
-	 * we can't dereference it.  However, in that case, we presumably want to
-	 * start at the beginning/end of the page...
-	 */
-	maxoff = PageGetMaxOffsetNumber(page);
-	if (ItemPointerIsValid(current))
-		offnum = ItemPointerGetOffsetNumber(current);
-	else
-		offnum = InvalidOffsetNumber;
+	so->currPos.buf = buf;
 
 	/*
-	 * 'offnum' now points to the last tuple we examined (if any).
-	 *
-	 * continue to step through tuples until: 1) we get to the end of the
-	 * bucket chain or 2) we find a valid tuple.
+	 * We save the LSN of the page as we read it, so that we know whether it
+	 * is safe to apply LP_DEAD hints to the page later.
*/ - do + so->currPos.lsn = PageGetLSN(page); + so->currPos.currPage = BufferGetBlockNumber(buf); + + if (ScanDirectionIsForward(dir)) { - switch (dir) + BlockNumber prev_blkno = InvalidBlockNumber; + + for (;;) { - case ForwardScanDirection: - if (offnum != InvalidOffsetNumber) - offnum = OffsetNumberNext(offnum); /* move forward */ - else - { - /* new page, locate starting position by binary search */ - offnum = _hash_binsearch(page, so->hashso_sk_hash); - } - - for (;;) - { - /* - * check if we're still in the range of items with the - * target hash key - */ - if (offnum <= maxoff) - { - Assert(offnum >= FirstOffsetNumber); - itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum)); - - /* - * skip the tuples that are moved by split operation - * for the scan that has started when split was in - * progress - */ - if (so->hashso_buc_populated && !so->hashso_buc_split && - (itup->t_info & INDEX_MOVED_BY_SPLIT_MASK)) - { - offnum = OffsetNumberNext(offnum); /* move forward */ - continue; - } - - if (so->hashso_sk_hash == _hash_get_indextuple_hashkey(itup)) - break; /* yes, so exit for-loop */ - } - - /* Before leaving current page, deal with any killed items */ - if (so->numKilled > 0) - _hash_kill_items(scan); - - /* - * ran off the end of this page, try the next - */ - _hash_readnext(scan, &buf, &page, &opaque); - if (BufferIsValid(buf)) - { - maxoff = PageGetMaxOffsetNumber(page); - offnum = _hash_binsearch(page, so->hashso_sk_hash); - } - else - { - itup = NULL; - break; /* exit for-loop */ - } - } + /* new page, locate starting position by binary search */ + offnum = _hash_binsearch(page, so->hashso_sk_hash); + + itemIndex = _hash_load_qualified_items(scan, page, offnum, dir); + + if (itemIndex != 0) break; - case BackwardScanDirection: - if (offnum != InvalidOffsetNumber) - offnum = OffsetNumberPrev(offnum); /* move back */ - else - { - /* new page, locate starting position by binary search */ - offnum = _hash_binsearch_last(page, so->hashso_sk_hash); - } - - for (;;) - { - /* - * check if we're still in the range of items with the - * target hash key - */ - if (offnum >= FirstOffsetNumber) - { - Assert(offnum <= maxoff); - itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum)); - - /* - * skip the tuples that are moved by split operation - * for the scan that has started when split was in - * progress - */ - if (so->hashso_buc_populated && !so->hashso_buc_split && - (itup->t_info & INDEX_MOVED_BY_SPLIT_MASK)) - { - offnum = OffsetNumberPrev(offnum); /* move back */ - continue; - } - - if (so->hashso_sk_hash == _hash_get_indextuple_hashkey(itup)) - break; /* yes, so exit for-loop */ - } - - /* Before leaving current page, deal with any killed items */ - if (so->numKilled > 0) - _hash_kill_items(scan); - - /* - * ran off the end of this page, try the next - */ - _hash_readprev(scan, &buf, &page, &opaque); - if (BufferIsValid(buf)) - { - TestForOldSnapshot(scan->xs_snapshot, rel, page); - maxoff = PageGetMaxOffsetNumber(page); - offnum = _hash_binsearch_last(page, so->hashso_sk_hash); - } - else - { - itup = NULL; - break; /* exit for-loop */ - } - } + /* + * Could not find any matching tuples in the current page, move to + * the next page. Before leaving the current page, deal with any + * killed items. + */ + if (so->numKilled > 0) + _hash_kill_items(scan); + + /* + * If this is a primary bucket page, hasho_prevblkno is not a real + * block number. 
+ */ + if (so->currPos.buf == so->hashso_bucket_buf || + so->currPos.buf == so->hashso_split_bucket_buf) + prev_blkno = InvalidBlockNumber; + else + prev_blkno = opaque->hasho_prevblkno; + + _hash_readnext(scan, &buf, &page, &opaque); + if (BufferIsValid(buf)) + { + so->currPos.buf = buf; + so->currPos.currPage = BufferGetBlockNumber(buf); + so->currPos.lsn = PageGetLSN(page); + } + else + { + /* + * Remember next and previous block numbers for scrollable + * cursors to know the start position and return FALSE + * indicating that no more matching tuples were found. Also, + * don't reset currPage or lsn, because we expect + * _hash_kill_items to be called for the old page after this + * function returns. + */ + so->currPos.prevPage = prev_blkno; + so->currPos.nextPage = InvalidBlockNumber; + so->currPos.buf = buf; + return false; + } + } + + so->currPos.firstItem = 0; + so->currPos.lastItem = itemIndex - 1; + so->currPos.itemIndex = 0; + } + else + { + BlockNumber next_blkno = InvalidBlockNumber; + + for (;;) + { + /* new page, locate starting position by binary search */ + offnum = _hash_binsearch_last(page, so->hashso_sk_hash); + + itemIndex = _hash_load_qualified_items(scan, page, offnum, dir); + + if (itemIndex != MaxIndexTuplesPerPage) break; - default: - /* NoMovementScanDirection */ - /* this should not be reached */ - itup = NULL; + /* + * Could not find any matching tuples in the current page, move to + * the previous page. Before leaving the current page, deal with + * any killed items. + */ + if (so->numKilled > 0) + _hash_kill_items(scan); + + if (so->currPos.buf == so->hashso_bucket_buf || + so->currPos.buf == so->hashso_split_bucket_buf) + next_blkno = opaque->hasho_nextblkno; + + _hash_readprev(scan, &buf, &page, &opaque); + if (BufferIsValid(buf)) + { + so->currPos.buf = buf; + so->currPos.currPage = BufferGetBlockNumber(buf); + so->currPos.lsn = PageGetLSN(page); + } + else + { + /* + * Remember next and previous block numbers for scrollable + * cursors to know the start position and return FALSE + * indicating that no more matching tuples were found. Also, + * don't reset currPage or lsn, because we expect + * _hash_kill_items to be called for the old page after this + * function returns. + */ + so->currPos.prevPage = InvalidBlockNumber; + so->currPos.nextPage = next_blkno; + so->currPos.buf = buf; + return false; + } + } + + so->currPos.firstItem = itemIndex; + so->currPos.lastItem = MaxIndexTuplesPerPage - 1; + so->currPos.itemIndex = MaxIndexTuplesPerPage - 1; + } + + if (so->currPos.buf == so->hashso_bucket_buf || + so->currPos.buf == so->hashso_split_bucket_buf) + { + so->currPos.prevPage = InvalidBlockNumber; + so->currPos.nextPage = opaque->hasho_nextblkno; + LockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK); + } + else + { + so->currPos.prevPage = opaque->hasho_prevblkno; + so->currPos.nextPage = opaque->hasho_nextblkno; + _hash_relbuf(rel, so->currPos.buf); + so->currPos.buf = InvalidBuffer; + } + + Assert(so->currPos.firstItem <= so->currPos.lastItem); + return true; +} + +/* + * Load all the qualified items from a current index page + * into so->currPos. Helper function for _hash_readpage. 
+ */ +static int +_hash_load_qualified_items(IndexScanDesc scan, Page page, + OffsetNumber offnum, ScanDirection dir) +{ + HashScanOpaque so = (HashScanOpaque) scan->opaque; + IndexTuple itup; + int itemIndex; + OffsetNumber maxoff; + + maxoff = PageGetMaxOffsetNumber(page); + + if (ScanDirectionIsForward(dir)) + { + /* load items[] in ascending order */ + itemIndex = 0; + + while (offnum <= maxoff) + { + Assert(offnum >= FirstOffsetNumber); + itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum)); + + /* + * skip the tuples that are moved by split operation for the scan + * that has started when split was in progress. Also, skip the + * tuples that are marked as dead. + */ + if ((so->hashso_buc_populated && !so->hashso_buc_split && + (itup->t_info & INDEX_MOVED_BY_SPLIT_MASK)) || + (scan->ignore_killed_tuples && + (ItemIdIsDead(PageGetItemId(page, offnum))))) + { + offnum = OffsetNumberNext(offnum); /* move forward */ + continue; + } + + if (so->hashso_sk_hash == _hash_get_indextuple_hashkey(itup) && + _hash_checkqual(scan, itup)) + { + /* tuple is qualified, so remember it */ + _hash_saveitem(so, itemIndex, offnum, itup); + itemIndex++; + } + else + { + /* + * No more matching tuples exist in this page. so, exit while + * loop. + */ break; + } + + offnum = OffsetNumberNext(offnum); } - if (itup == NULL) + Assert(itemIndex <= MaxIndexTuplesPerPage); + return itemIndex; + } + else + { + /* load items[] in descending order */ + itemIndex = MaxIndexTuplesPerPage; + + while (offnum >= FirstOffsetNumber) { + Assert(offnum <= maxoff); + itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum)); + /* - * We ran off the end of the bucket without finding a match. - * Release the pin on bucket buffers. Normally, such pins are - * released at end of scan, however scrolling cursors can - * reacquire the bucket lock and pin in the same scan multiple - * times. + * skip the tuples that are moved by split operation for the scan + * that has started when split was in progress. Also, skip the + * tuples that are marked as dead. */ - *bufP = so->hashso_curbuf = InvalidBuffer; - ItemPointerSetInvalid(current); - _hash_dropscanbuf(rel, so); - return false; + if ((so->hashso_buc_populated && !so->hashso_buc_split && + (itup->t_info & INDEX_MOVED_BY_SPLIT_MASK)) || + (scan->ignore_killed_tuples && + (ItemIdIsDead(PageGetItemId(page, offnum))))) + { + offnum = OffsetNumberPrev(offnum); /* move back */ + continue; + } + + if (so->hashso_sk_hash == _hash_get_indextuple_hashkey(itup) && + _hash_checkqual(scan, itup)) + { + itemIndex--; + /* tuple is qualified, so remember it */ + _hash_saveitem(so, itemIndex, offnum, itup); + } + else + { + /* + * No more matching tuples exist in this page. so, exit while + * loop. 
+ */ + break; + } + + offnum = OffsetNumberPrev(offnum); } - /* check the tuple quals, loop around if not met */ - } while (!_hash_checkqual(scan, itup)); + Assert(itemIndex >= 0); + return itemIndex; + } +} + +/* Save an index item into so->currPos.items[itemIndex] */ +static inline void +_hash_saveitem(HashScanOpaque so, int itemIndex, + OffsetNumber offnum, IndexTuple itup) +{ + HashScanPosItem *currItem = &so->currPos.items[itemIndex]; - /* if we made it to here, we've found a valid tuple */ - blkno = BufferGetBlockNumber(buf); - *bufP = so->hashso_curbuf = buf; - ItemPointerSet(current, blkno, offnum); - return true; + currItem->heapTid = itup->t_tid; + currItem->indexOffset = offnum; } diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c index 869cbc1081..a825b82706 100644 --- a/src/backend/access/hash/hashutil.c +++ b/src/backend/access/hash/hashutil.c @@ -522,13 +522,30 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket, * current page and killed tuples thereon (generally, this should only be * called if so->numKilled > 0). * + * The caller does not have a lock on the page and may or may not have the + * page pinned in a buffer. Note that read-lock is sufficient for setting + * LP_DEAD status (which is only a hint). + * + * The caller must have pin on bucket buffer, but may or may not have pin + * on overflow buffer, as indicated by HashScanPosIsPinned(so->currPos). + * * We match items by heap TID before assuming they are the right ones to * delete. + * + * Note that we keep the pin on the bucket page throughout the scan. Hence, + * there is no chance of VACUUM deleting any items from that page. However, + * having pin on the overflow page doesn't guarantee that vacuum won't delete + * any items. + * + * See _bt_killitems() for more details. */ void _hash_kill_items(IndexScanDesc scan) { HashScanOpaque so = (HashScanOpaque) scan->opaque; + Relation rel = scan->indexRelation; + BlockNumber blkno; + Buffer buf; Page page; HashPageOpaque opaque; OffsetNumber offnum, @@ -536,9 +553,11 @@ _hash_kill_items(IndexScanDesc scan) int numKilled = so->numKilled; int i; bool killedsomething = false; + bool havePin = false; Assert(so->numKilled > 0); Assert(so->killedItems != NULL); + Assert(HashScanPosIsValid(so->currPos)); /* * Always reset the scan state, so we don't look for same items on other @@ -546,20 +565,54 @@ _hash_kill_items(IndexScanDesc scan) */ so->numKilled = 0; - page = BufferGetPage(so->hashso_curbuf); + blkno = so->currPos.currPage; + if (HashScanPosIsPinned(so->currPos)) + { + /* + * We already have pin on this buffer, so, all we need to do is + * acquire lock on it. + */ + havePin = true; + buf = so->currPos.buf; + LockBuffer(buf, BUFFER_LOCK_SHARE); + } + else + buf = _hash_getbuf(rel, blkno, HASH_READ, LH_OVERFLOW_PAGE); + + /* + * If page LSN differs it means that the page was modified since the last + * read. killedItems could be not valid so applying LP_DEAD hints is not + * safe. 
+ */ + page = BufferGetPage(buf); + if (PageGetLSN(page) != so->currPos.lsn) + { + if (havePin) + LockBuffer(buf, BUFFER_LOCK_UNLOCK); + else + _hash_relbuf(rel, buf); + return; + } + opaque = (HashPageOpaque) PageGetSpecialPointer(page); maxoff = PageGetMaxOffsetNumber(page); for (i = 0; i < numKilled; i++) { - offnum = so->killedItems[i].indexOffset; + int itemIndex = so->killedItems[i]; + HashScanPosItem *currItem = &so->currPos.items[itemIndex]; + + offnum = currItem->indexOffset; + + Assert(itemIndex >= so->currPos.firstItem && + itemIndex <= so->currPos.lastItem); while (offnum <= maxoff) { ItemId iid = PageGetItemId(page, offnum); IndexTuple ituple = (IndexTuple) PageGetItem(page, iid); - if (ItemPointerEquals(&ituple->t_tid, &so->killedItems[i].heapTid)) + if (ItemPointerEquals(&ituple->t_tid, &currItem->heapTid)) { /* found the item */ ItemIdMarkDead(iid); @@ -578,6 +631,12 @@ _hash_kill_items(IndexScanDesc scan) if (killedsomething) { opaque->hasho_flag |= LH_PAGE_HAS_DEAD_TUPLES; - MarkBufferDirtyHint(so->hashso_curbuf, true); + MarkBufferDirtyHint(buf, true); } + + if (so->hashso_bucket_buf == so->currPos.buf || + havePin) + LockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK); + else + _hash_relbuf(rel, buf); } diff --git a/src/include/access/hash.h b/src/include/access/hash.h index c06dcb214f..0e0f3e17a7 100644 --- a/src/include/access/hash.h +++ b/src/include/access/hash.h @@ -114,6 +114,53 @@ typedef struct HashScanPosItem /* what we remember about each match */ OffsetNumber indexOffset; /* index item's location within page */ } HashScanPosItem; +typedef struct HashScanPosData +{ + Buffer buf; /* if valid, the buffer is pinned */ + XLogRecPtr lsn; /* pos in the WAL stream when page was read */ + BlockNumber currPage; /* current hash index page */ + BlockNumber nextPage; /* next overflow page */ + BlockNumber prevPage; /* prev overflow or bucket page */ + + /* + * The items array is always ordered in index order (ie, increasing + * indexoffset). When scanning backwards it is convenient to fill the + * array back-to-front, so we start at the last slot and fill downwards. + * Hence we need both a first-valid-entry and a last-valid-entry counter. + * itemIndex is a cursor showing which entry was last returned to caller. + */ + int firstItem; /* first valid index in items[] */ + int lastItem; /* last valid index in items[] */ + int itemIndex; /* current index in items[] */ + + HashScanPosItem items[MaxIndexTuplesPerPage]; /* MUST BE LAST */ +} HashScanPosData; + +#define HashScanPosIsPinned(scanpos) \ +( \ + AssertMacro(BlockNumberIsValid((scanpos).currPage) || \ + !BufferIsValid((scanpos).buf)), \ + BufferIsValid((scanpos).buf) \ +) + +#define HashScanPosIsValid(scanpos) \ +( \ + AssertMacro(BlockNumberIsValid((scanpos).currPage) || \ + !BufferIsValid((scanpos).buf)), \ + BlockNumberIsValid((scanpos).currPage) \ +) + +#define HashScanPosInvalidate(scanpos) \ + do { \ + (scanpos).buf = InvalidBuffer; \ + (scanpos).lsn = InvalidXLogRecPtr; \ + (scanpos).currPage = InvalidBlockNumber; \ + (scanpos).nextPage = InvalidBlockNumber; \ + (scanpos).prevPage = InvalidBlockNumber; \ + (scanpos).firstItem = 0; \ + (scanpos).lastItem = 0; \ + (scanpos).itemIndex = 0; \ + } while (0); /* * HashScanOpaqueData is private state for a hash index scan. @@ -123,14 +170,6 @@ typedef struct HashScanOpaqueData /* Hash value of the scan key, ie, the hash key we seek */ uint32 hashso_sk_hash; - /* - * We also want to remember which buffer we're currently examining in the - * scan. 
We keep the buffer pinned (but not locked) across hashgettuple - * calls, in order to avoid doing a ReadBuffer() for every tuple in the - * index. - */ - Buffer hashso_curbuf; - /* remember the buffer associated with primary bucket */ Buffer hashso_bucket_buf; @@ -141,12 +180,6 @@ typedef struct HashScanOpaqueData */ Buffer hashso_split_bucket_buf; - /* Current position of the scan, as an index TID */ - ItemPointerData hashso_curpos; - - /* Current position of the scan, as a heap TID */ - ItemPointerData hashso_heappos; - /* Whether scan starts on bucket being populated due to split */ bool hashso_buc_populated; @@ -156,8 +189,14 @@ typedef struct HashScanOpaqueData */ bool hashso_buc_split; /* info about killed items if any (killedItems is NULL if never used) */ - HashScanPosItem *killedItems; /* tids and offset numbers of killed items */ + int *killedItems; /* currPos.items indexes of killed items */ int numKilled; /* number of currently stored items */ + + /* + * Identify all the matching items on a page and save them in + * HashScanPosData + */ + HashScanPosData currPos; /* current position data */ } HashScanOpaqueData; typedef HashScanOpaqueData *HashScanOpaque; @@ -401,7 +440,6 @@ extern void _hash_finish_split(Relation rel, Buffer metabuf, Buffer obuf, /* hashsearch.c */ extern bool _hash_next(IndexScanDesc scan, ScanDirection dir); extern bool _hash_first(IndexScanDesc scan, ScanDirection dir); -extern bool _hash_step(IndexScanDesc scan, Buffer *bufP, ScanDirection dir); /* hashsort.c */ typedef struct HSpool HSpool; /* opaque struct in hashsort.c */ From 6a2fa09c0cba0e5a11854d733872ac18511f4c83 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 22 Sep 2017 14:28:22 -0400 Subject: [PATCH 0247/1087] For wal_consistency_checking, mask page checksum as well as page LSN. If the LSN is different, the checksum will be different, too. Ashwin Agrawal, reviewed by Michael Paquier and Kuntal Ghosh Discussion: http://postgr.es/m/CALfoeis5iqrAU-+JAN+ZzXkpPr7+-0OAGv7QUHwFn=-wDy4o4Q@mail.gmail.com --- src/backend/access/brin/brin_xlog.c | 2 +- src/backend/access/common/bufmask.c | 8 +++++--- src/backend/access/gin/ginxlog.c | 2 +- src/backend/access/gist/gistxlog.c | 4 ++-- src/backend/access/hash/hash_xlog.c | 2 +- src/backend/access/heap/heapam.c | 2 +- src/backend/access/nbtree/nbtxlog.c | 2 +- src/backend/access/spgist/spgxlog.c | 2 +- src/backend/access/transam/generic_xlog.c | 2 +- src/backend/commands/sequence.c | 2 +- src/include/access/bufmask.h | 2 +- 11 files changed, 16 insertions(+), 14 deletions(-) diff --git a/src/backend/access/brin/brin_xlog.c b/src/backend/access/brin/brin_xlog.c index dff7198a39..60daa54a95 100644 --- a/src/backend/access/brin/brin_xlog.c +++ b/src/backend/access/brin/brin_xlog.c @@ -332,7 +332,7 @@ brin_mask(char *pagedata, BlockNumber blkno) { Page page = (Page) pagedata; - mask_page_lsn(page); + mask_page_lsn_and_checksum(page); mask_page_hint_bits(page); diff --git a/src/backend/access/common/bufmask.c b/src/backend/access/common/bufmask.c index 10253d3354..d880aef7ba 100644 --- a/src/backend/access/common/bufmask.c +++ b/src/backend/access/common/bufmask.c @@ -23,15 +23,17 @@ * mask_page_lsn * * In consistency checks, the LSN of the two pages compared will likely be - * different because of concurrent operations when the WAL is generated - * and the state of the page when WAL is applied. + * different because of concurrent operations when the WAL is generated and + * the state of the page when WAL is applied. 
Also, mask out the checksum, since
+ * masking anything else on the page means the checksum will not match either.
 */
 void
-mask_page_lsn(Page page)
+mask_page_lsn_and_checksum(Page page)
 {
 	PageHeader	phdr = (PageHeader) page;
 
 	PageXLogRecPtrSet(phdr->pd_lsn, (uint64) MASK_MARKER);
+	phdr->pd_checksum = MASK_MARKER;
 }
 
 /*
diff --git a/src/backend/access/gin/ginxlog.c b/src/backend/access/gin/ginxlog.c
index 7ba04e324f..92cafe950b 100644
--- a/src/backend/access/gin/ginxlog.c
+++ b/src/backend/access/gin/ginxlog.c
@@ -770,7 +770,7 @@ gin_mask(char *pagedata, BlockNumber blkno)
 	Page		page = (Page) pagedata;
 	GinPageOpaque opaque;
 
-	mask_page_lsn(page);
+	mask_page_lsn_and_checksum(page);
 	opaque = GinPageGetOpaque(page);
 
 	mask_page_hint_bits(page);
diff --git a/src/backend/access/gist/gistxlog.c b/src/backend/access/gist/gistxlog.c
index 4f4fe8fab5..7fd91ce640 100644
--- a/src/backend/access/gist/gistxlog.c
+++ b/src/backend/access/gist/gistxlog.c
@@ -352,14 +352,14 @@ gist_mask(char *pagedata, BlockNumber blkno)
 {
 	Page		page = (Page) pagedata;
 
-	mask_page_lsn(page);
+	mask_page_lsn_and_checksum(page);
 
 	mask_page_hint_bits(page);
 	mask_unused_space(page);
 
 	/*
 	 * NSN is nothing but a special purpose LSN.  Hence, mask it for the same
-	 * reason as mask_page_lsn.
+	 * reason as mask_page_lsn_and_checksum.
 	 */
 	GistPageSetNSN(page, (uint64) MASK_MARKER);
 
diff --git a/src/backend/access/hash/hash_xlog.c b/src/backend/access/hash/hash_xlog.c
index 67a856c142..f19f6fdfaf 100644
--- a/src/backend/access/hash/hash_xlog.c
+++ b/src/backend/access/hash/hash_xlog.c
@@ -1263,7 +1263,7 @@ hash_mask(char *pagedata, BlockNumber blkno)
 	HashPageOpaque opaque;
 	int			pagetype;
 
-	mask_page_lsn(page);
+	mask_page_lsn_and_checksum(page);
 
 	mask_page_hint_bits(page);
 	mask_unused_space(page);
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index d20f0381f3..d03f544d26 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -9166,7 +9166,7 @@ heap_mask(char *pagedata, BlockNumber blkno)
 	Page		page = (Page) pagedata;
 	OffsetNumber off;
 
-	mask_page_lsn(page);
+	mask_page_lsn_and_checksum(page);
 
 	mask_page_hint_bits(page);
 	mask_unused_space(page);
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index 4afdf4736f..82337f8ef2 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -1034,7 +1034,7 @@ btree_mask(char *pagedata, BlockNumber blkno)
 	Page		page = (Page) pagedata;
 	BTPageOpaque maskopaq;
 
-	mask_page_lsn(page);
+	mask_page_lsn_and_checksum(page);
 
 	mask_page_hint_bits(page);
 	mask_unused_space(page);
diff --git a/src/backend/access/spgist/spgxlog.c b/src/backend/access/spgist/spgxlog.c
index c440d21715..87def79ee5 100644
--- a/src/backend/access/spgist/spgxlog.c
+++ b/src/backend/access/spgist/spgxlog.c
@@ -1034,7 +1034,7 @@ spg_mask(char *pagedata, BlockNumber blkno)
 {
 	Page		page = (Page) pagedata;
 
-	mask_page_lsn(page);
+	mask_page_lsn_and_checksum(page);
 
 	mask_page_hint_bits(page);
 
diff --git a/src/backend/access/transam/generic_xlog.c b/src/backend/access/transam/generic_xlog.c
index fbc6810c2f..3adbf7b949 100644
--- a/src/backend/access/transam/generic_xlog.c
+++ b/src/backend/access/transam/generic_xlog.c
@@ -541,7 +541,7 @@ generic_redo(XLogReaderState *record)
 void
 generic_mask(char *page, BlockNumber blkno)
 {
-	mask_page_lsn(page);
+	mask_page_lsn_and_checksum(page);
 
 	mask_unused_space(page);
 }
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 62937124ef..5c2ce78946
100644 --- a/src/backend/commands/sequence.c +++ b/src/backend/commands/sequence.c @@ -1941,7 +1941,7 @@ ResetSequenceCaches(void) void seq_mask(char *page, BlockNumber blkno) { - mask_page_lsn(page); + mask_page_lsn_and_checksum(page); mask_unused_space(page); } diff --git a/src/include/access/bufmask.h b/src/include/access/bufmask.h index 95c6c3ae02..6a24c947ef 100644 --- a/src/include/access/bufmask.h +++ b/src/include/access/bufmask.h @@ -23,7 +23,7 @@ /* Marker used to mask pages consistently */ #define MASK_MARKER 0 -extern void mask_page_lsn(Page page); +extern void mask_page_lsn_and_checksum(Page page); extern void mask_page_hint_bits(Page page); extern void mask_unused_space(Page page); extern void mask_lp_flags(Page page); From f9583e86b4bfa8c4e4d83ab33e5dcdaeab5c45a1 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 19 Sep 2017 22:59:36 -0700 Subject: [PATCH 0248/1087] Fix s/intidb/initdb/ typo. Reported-By: Michael Paquier Discussion: https://postgr.es/m/CAB7nPqTfaKAYZ4wuUM-W8kc4VnXrxX1=5-a9i==VoUPTMFpsgg@mail.gmail.com --- src/include/pg_config_manual.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h index 9615a389af..b048175321 100644 --- a/src/include/pg_config_manual.h +++ b/src/include/pg_config_manual.h @@ -14,7 +14,7 @@ */ /* - * This is default value for wal_segment_size to be used at intidb when run + * This is default value for wal_segment_size to be used at initdb when run * without --walsegsize option. Must be a valid segment size. */ #define DEFAULT_XLOG_SEG_SIZE (16*1024*1024) From 8d926029e817d280b2376433e3aaa3895e1a7128 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 22 Sep 2017 11:29:05 -0700 Subject: [PATCH 0249/1087] Expand expected output for recovery test even further. I'd assumed that the backend being killed should be able to get out an error message - but it turns out it's not guaranteed that it's not still sending a ready-for-query. Really need to do something about getting these error message to the client. Reported-By: Thomas Munro, Tom Lane Discussion: https://postgr.es/m/CAEepm=0TE90nded+bNthP45_PEvGAAr=3gxhHJObL4xmOLtX0w@mail.gmail.com https://postgr.es/m/14968.1506101414@sss.pgh.pa.us --- src/test/recovery/t/013_crash_restart.pl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl index 23024716e6..ce35bdd633 100644 --- a/src/test/recovery/t/013_crash_restart.pl +++ b/src/test/recovery/t/013_crash_restart.pl @@ -115,7 +115,7 @@ $killme_stdin .= q[ SELECT 1; ]; -ok(pump_until($killme, \$killme_stderr, qr/WARNING: terminating connection because of crash of another server process/m), +ok(pump_until($killme, \$killme_stderr, qr/WARNING: terminating connection because of crash of another server process|server closed the connection unexpectedly/m), "psql query died successfully after SIGQUIT"); $killme_stderr = ''; $killme_stdout = ''; From 91ad8b416cee753eaa6f520ee2d21c2d41853381 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 22 Sep 2017 15:01:13 -0400 Subject: [PATCH 0250/1087] doc: Document commands that cannot be run in a transaction block Mainly covering the new CREATE SUBSCRIPTION and DROP SUBSCRIPTION, but ALTER DATABASE SET TABLESPACE was also missing. 
--- doc/src/sgml/ref/alter_database.sgml | 4 ++++ doc/src/sgml/ref/create_subscription.sgml | 10 +++++----- doc/src/sgml/ref/drop_subscription.sgml | 5 +++++ 3 files changed, 14 insertions(+), 5 deletions(-) diff --git a/doc/src/sgml/ref/alter_database.sgml b/doc/src/sgml/ref/alter_database.sgml index cfc28cf9a7..9ab86127af 100644 --- a/doc/src/sgml/ref/alter_database.sgml +++ b/doc/src/sgml/ref/alter_database.sgml @@ -164,6 +164,10 @@ ALTER DATABASE name RESET ALL The new default tablespace of the database. + + + This form of the command cannot be executed inside a transaction block. + diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml index 9f45b6f574..de505ea8d3 100644 --- a/doc/src/sgml/ref/create_subscription.sgml +++ b/doc/src/sgml/ref/create_subscription.sgml @@ -48,11 +48,6 @@ CREATE SUBSCRIPTION subscription_name - - CREATE SUBSCRIPTION cannot be executed inside a - transaction block when the parameter create_slot is specified. - - Additional info about subscriptions and logical replication as a whole can is available at and @@ -227,6 +222,11 @@ CREATE SUBSCRIPTION subscription_name + + When creating a replication slot (the default behavior), CREATE + SUBSCRIPTION cannot be executed inside a transaction block. + + Creating a subscription that connects to the same database cluster (for example, to replicate between databases in the same cluster or to replicate diff --git a/doc/src/sgml/ref/drop_subscription.sgml b/doc/src/sgml/ref/drop_subscription.sgml index f535c000c4..f5734e6f30 100644 --- a/doc/src/sgml/ref/drop_subscription.sgml +++ b/doc/src/sgml/ref/drop_subscription.sgml @@ -93,6 +93,11 @@ DROP SUBSCRIPTION [ IF EXISTS ] name. + + + If a subscription is associated with a replication slot, then DROP + SUBSCRIPTION cannot be executed inside a transaction block. + From 791961f59b792fbd4f0a992d3ccab47298e79103 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 22 Sep 2017 13:38:42 -0700 Subject: [PATCH 0251/1087] Add inline murmurhash32(uint32) function. The function already existed in tidbitmap.c but more users requiring fast hashing of 32bit ints are coming up. Author: Andres Freund Discussion: https://postgr.es/m/20170914061207.zxotvyopetm7lrrp@alap3.anarazel.de --- src/backend/nodes/tidbitmap.c | 20 ++------------------ src/include/utils/hashutils.h | 18 ++++++++++++++++++ 2 files changed, 20 insertions(+), 18 deletions(-) diff --git a/src/backend/nodes/tidbitmap.c b/src/backend/nodes/tidbitmap.c index c4e53adb0c..01d6bc5c11 100644 --- a/src/backend/nodes/tidbitmap.c +++ b/src/backend/nodes/tidbitmap.c @@ -45,6 +45,7 @@ #include "nodes/tidbitmap.h" #include "storage/lwlock.h" #include "utils/dsa.h" +#include "utils/hashutils.h" /* * The maximum number of tuples per page is not large (typically 256 with @@ -237,30 +238,13 @@ static int tbm_comparator(const void *left, const void *right); static int tbm_shared_comparator(const void *left, const void *right, void *arg); -/* - * Simple inline murmur hash implementation for the exact width required, for - * performance. 
- */ -static inline uint32 -hash_blockno(BlockNumber b) -{ - uint32 h = b; - - h ^= h >> 16; - h *= 0x85ebca6b; - h ^= h >> 13; - h *= 0xc2b2ae35; - h ^= h >> 16; - return h; -} - /* define hashtable mapping block numbers to PagetableEntry's */ #define SH_USE_NONDEFAULT_ALLOCATOR #define SH_PREFIX pagetable #define SH_ELEMENT_TYPE PagetableEntry #define SH_KEY_TYPE BlockNumber #define SH_KEY blockno -#define SH_HASH_KEY(tb, key) hash_blockno(key) +#define SH_HASH_KEY(tb, key) murmurhash32(key) #define SH_EQUAL(tb, a, b) a == b #define SH_SCOPE static inline #define SH_DEFINE diff --git a/src/include/utils/hashutils.h b/src/include/utils/hashutils.h index 56b7bfc9cb..35281689e8 100644 --- a/src/include/utils/hashutils.h +++ b/src/include/utils/hashutils.h @@ -20,4 +20,22 @@ hash_combine(uint32 a, uint32 b) return a; } + +/* + * Simple inline murmur hash implementation hashing a 32 bit ingeger, for + * performance. + */ +static inline uint32 +murmurhash32(uint32 data) +{ + uint32 h = data; + + h ^= h >> 16; + h *= 0x85ebca6b; + h ^= h >> 13; + h *= 0xc2b2ae35; + h ^= h >> 16; + return h; +} + #endif /* HASHUTILS_H */ From 58ffe141eb37c3f027acd25c1fc6b36513bf9380 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 22 Sep 2017 16:34:46 -0400 Subject: [PATCH 0252/1087] Revert "Add basic TAP test setup for pg_upgrade" This reverts commit f41e56c76e39f02bef7ba002c9de03d62b76de4d. The build farm client would run the pg_upgrade tests twice, once as part of the existing pg_upgrade check run and once as part of picking up all TAP tests by looking for "t" directories. Since the pg_upgrade tests are pretty slow, we will need a better solution or possibly a build farm client change before we can proceed with this. --- src/bin/pg_upgrade/Makefile | 7 +++---- src/bin/pg_upgrade/t/001_basic.pl | 9 --------- 2 files changed, 3 insertions(+), 13 deletions(-) delete mode 100644 src/bin/pg_upgrade/t/001_basic.pl diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile index e5c98596a1..1d6ee702c6 100644 --- a/src/bin/pg_upgrade/Makefile +++ b/src/bin/pg_upgrade/Makefile @@ -36,9 +36,8 @@ clean distclean maintainer-clean: pg_upgrade_dump_globals.sql \ pg_upgrade_dump_*.custom pg_upgrade_*.log -check: test.sh - $(prove_check) +check: test.sh all MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) EXTRA_REGRESS_OPTS="$(EXTRA_REGRESS_OPTS)" $(SHELL) $< --install -installcheck: - $(prove_installcheck) +# installcheck is not supported because there's no meaningful way to test +# pg_upgrade against a single already-running server diff --git a/src/bin/pg_upgrade/t/001_basic.pl b/src/bin/pg_upgrade/t/001_basic.pl deleted file mode 100644 index 605a7f622f..0000000000 --- a/src/bin/pg_upgrade/t/001_basic.pl +++ /dev/null @@ -1,9 +0,0 @@ -use strict; -use warnings; - -use TestLib; -use Test::More tests => 8; - -program_help_ok('pg_upgrade'); -program_version_ok('pg_upgrade'); -program_options_handling_ok('pg_upgrade'); From aa6b7b72d9bcf967cbccd378de4bc5cef33d02f9 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 22 Sep 2017 16:50:59 -0400 Subject: [PATCH 0253/1087] Fix saving and restoring umask In two cases, we set a different umask for some piece of code and restore it afterwards. But if the contained code errors out, the umask is not restored. So add TRY/CATCH blocks to fix that. 
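Schematically, the fix is the usual save/guard/restore pattern (a minimal
sketch; open_the_file() is a placeholder for the AllocateFile()/
OpenTransientFile() calls in the hunks below):

	mode_t		oumask = umask(S_IWGRP | S_IWOTH);	/* tighten temporarily */

	PG_TRY();
	{
		/* an elog(ERROR) in here would otherwise leak the tightened umask */
		open_the_file();
	}
	PG_CATCH();
	{
		umask(oumask);		/* restore on the error path ... */
		PG_RE_THROW();
	}
	PG_END_TRY();
	umask(oumask);			/* ... and on the normal path */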
--- src/backend/commands/copy.c | 11 ++++++++++- src/backend/libpq/be-fsstubs.c | 13 +++++++++++-- 2 files changed, 21 insertions(+), 3 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index c6fa44563c..7c004ffad8 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -1826,7 +1826,16 @@ BeginCopyTo(ParseState *pstate, errmsg("relative path not allowed for COPY to file"))); oumask = umask(S_IWGRP | S_IWOTH); - cstate->copy_file = AllocateFile(cstate->filename, PG_BINARY_W); + PG_TRY(); + { + cstate->copy_file = AllocateFile(cstate->filename, PG_BINARY_W); + } + PG_CATCH(); + { + umask(oumask); + PG_RE_THROW(); + } + PG_END_TRY(); umask(oumask); if (cstate->copy_file == NULL) { diff --git a/src/backend/libpq/be-fsstubs.c b/src/backend/libpq/be-fsstubs.c index bf45461b2f..19b34bfc84 100644 --- a/src/backend/libpq/be-fsstubs.c +++ b/src/backend/libpq/be-fsstubs.c @@ -538,8 +538,17 @@ be_lo_export(PG_FUNCTION_ARGS) */ text_to_cstring_buffer(filename, fnamebuf, sizeof(fnamebuf)); oumask = umask(S_IWGRP | S_IWOTH); - fd = OpenTransientFile(fnamebuf, O_CREAT | O_WRONLY | O_TRUNC | PG_BINARY, - S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); + PG_TRY(); + { + fd = OpenTransientFile(fnamebuf, O_CREAT | O_WRONLY | O_TRUNC | PG_BINARY, + S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); + } + PG_CATCH(); + { + umask(oumask); + PG_RE_THROW(); + } + PG_END_TRY(); umask(oumask); if (fd < 0) ereport(ERROR, From 404ba54e8fd3036eee0f9241f68b17092ce734ee Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Sat, 23 Sep 2017 14:05:57 +0200 Subject: [PATCH 0254/1087] Test BRIN autosummarization There was no coverage for this code. Reported-by: Nikolay Shaplov, Tom Lane Discussion: https://postgr.es/m/2700647.XEouBYNZic@x200m https://postgr.es/m/13849.1506114543@sss.pgh.pa.us --- src/test/modules/brin/Makefile | 7 +++-- src/test/modules/brin/t/01_workitems.pl | 39 +++++++++++++++++++++++++ 2 files changed, 44 insertions(+), 2 deletions(-) create mode 100644 src/test/modules/brin/t/01_workitems.pl diff --git a/src/test/modules/brin/Makefile b/src/test/modules/brin/Makefile index dda84c23c7..18c5cafd5e 100644 --- a/src/test/modules/brin/Makefile +++ b/src/test/modules/brin/Makefile @@ -16,7 +16,7 @@ include $(top_builddir)/src/Makefile.global include $(top_srcdir)/contrib/contrib-global.mk endif -check: isolation-check +check: isolation-check prove-check isolation-check: | submake-isolation $(MKDIR_P) isolation_output @@ -24,7 +24,10 @@ isolation-check: | submake-isolation --outputdir=./isolation_output \ $(ISOLATIONCHECKS) -.PHONY: check isolation-check +prove-check: + $(prove_check) + +.PHONY: check isolation-check prove-check submake-isolation: $(MAKE) -C $(top_builddir)/src/test/isolation all diff --git a/src/test/modules/brin/t/01_workitems.pl b/src/test/modules/brin/t/01_workitems.pl new file mode 100644 index 0000000000..11c9981d40 --- /dev/null +++ b/src/test/modules/brin/t/01_workitems.pl @@ -0,0 +1,39 @@ +# Verify that work items work correctly + +use strict; +use warnings; + +use TestLib; +use Test::More tests => 2; +use PostgresNode; + +my $node = get_new_node('tango'); +$node->init; +$node->append_conf('postgresql.conf', 'autovacuum_naptime=1s'); +$node->start; + +$node->safe_psql('postgres', 'create extension pageinspect'); + +# Create a table with an autosummarizing BRIN index +$node->safe_psql('postgres', + 'create table brin_wi (a int) with (fillfactor = 10); + create index brin_wi_idx on brin_wi using brin (a) with (pages_per_range=1, autosummarize=on); + ' 
+); +my $count = $node->safe_psql('postgres', + "select count(*) from brin_page_items(get_raw_page('brin_wi_idx', 2), 'brin_wi_idx'::regclass)" +); +is($count, '1', "initial index state is correct"); + +$node->safe_psql('postgres', + 'insert into brin_wi select * from generate_series(1, 100)'); + +$node->poll_query_until('postgres', + "select count(*) > 1 from brin_page_items(get_raw_page('brin_wi_idx', 2), 'brin_wi_idx'::regclass)", + 't'); + +$count = $node->safe_psql('postgres', + "select count(*) > 1 from brin_page_items(get_raw_page('brin_wi_idx', 2), 'brin_wi_idx'::regclass)" +); +is($count, 't', "index got summarized"); +$node->stop; From 0c5803b450e0cc29b3527df3f352e6f18a038cc6 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 23 Sep 2017 09:49:22 -0400 Subject: [PATCH 0255/1087] Refactor new file permission handling The file handling functions from fd.c were called with a diverse mix of notations for the file permissions when they were opening new files. Almost all files created by the server should have the same permissions set. So change the API so that e.g. OpenTransientFile() automatically uses the standard permissions set, and OpenTransientFilePerm() is a new function that takes an explicit permissions set for the few cases where it is needed. This also saves an unnecessary argument for call sites that are just opening an existing file. While we're reviewing these APIs, get rid of the FileName typedef and use the standard const char * for the file name and mode_t for the file mode. This makes these functions match other file handling functions and removes an unnecessary layer of mysteriousness. We can also get rid of a few casts that way. Author: David Steele --- .../pg_stat_statements/pg_stat_statements.c | 5 +- src/backend/access/heap/rewriteheap.c | 8 +-- src/backend/access/transam/slru.c | 7 +- src/backend/access/transam/timeline.c | 8 +-- src/backend/access/transam/twophase.c | 5 +- src/backend/access/transam/xlog.c | 28 +++----- src/backend/access/transam/xlogutils.c | 2 +- src/backend/catalog/catalog.c | 2 +- src/backend/libpq/be-fsstubs.c | 6 +- src/backend/replication/logical/origin.c | 7 +- .../replication/logical/reorderbuffer.c | 7 +- src/backend/replication/logical/snapbuild.c | 5 +- src/backend/replication/slot.c | 6 +- src/backend/replication/walsender.c | 4 +- src/backend/storage/file/copydir.c | 5 +- src/backend/storage/file/fd.c | 68 ++++++++++++++----- src/backend/storage/ipc/dsm_impl.c | 2 +- src/backend/storage/smgr/md.c | 12 ++-- src/backend/utils/cache/relmapper.c | 7 +- src/backend/utils/misc/guc.c | 3 +- src/include/storage/fd.h | 15 ++-- 21 files changed, 110 insertions(+), 102 deletions(-) diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c index fa409d72b7..3ab1fd2db4 100644 --- a/contrib/pg_stat_statements/pg_stat_statements.c +++ b/contrib/pg_stat_statements/pg_stat_statements.c @@ -1869,8 +1869,7 @@ qtext_store(const char *query, int query_len, *query_offset = off; /* Now write the data into the successfully-reserved part of the file */ - fd = OpenTransientFile(PGSS_TEXT_FILE, O_RDWR | O_CREAT | PG_BINARY, - S_IRUSR | S_IWUSR); + fd = OpenTransientFile(PGSS_TEXT_FILE, O_RDWR | O_CREAT | PG_BINARY); if (fd < 0) goto error; @@ -1934,7 +1933,7 @@ qtext_load_file(Size *buffer_size) int fd; struct stat stat; - fd = OpenTransientFile(PGSS_TEXT_FILE, O_RDONLY | PG_BINARY, 0); + fd = OpenTransientFile(PGSS_TEXT_FILE, O_RDONLY | PG_BINARY); if (fd < 0) { if (errno != ENOENT) diff --git 
a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c index bd560e47e1..f93c194e18 100644 --- a/src/backend/access/heap/rewriteheap.c +++ b/src/backend/access/heap/rewriteheap.c @@ -1013,8 +1013,7 @@ logical_rewrite_log_mapping(RewriteState state, TransactionId xid, src->off = 0; memcpy(src->path, path, sizeof(path)); src->vfd = PathNameOpenFile(path, - O_CREAT | O_EXCL | O_WRONLY | PG_BINARY, - S_IRUSR | S_IWUSR); + O_CREAT | O_EXCL | O_WRONLY | PG_BINARY); if (src->vfd < 0) ereport(ERROR, (errcode_for_file_access(), @@ -1133,8 +1132,7 @@ heap_xlog_logical_rewrite(XLogReaderState *r) xlrec->mapped_xid, XLogRecGetXid(r)); fd = OpenTransientFile(path, - O_CREAT | O_WRONLY | PG_BINARY, - S_IRUSR | S_IWUSR); + O_CREAT | O_WRONLY | PG_BINARY); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), @@ -1258,7 +1256,7 @@ CheckPointLogicalRewriteHeap(void) } else { - int fd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0); + int fd = OpenTransientFile(path, O_RDONLY | PG_BINARY); /* * The file cannot vanish due to concurrency since this function diff --git a/src/backend/access/transam/slru.c b/src/backend/access/transam/slru.c index 77edc51e1c..9dd77190ec 100644 --- a/src/backend/access/transam/slru.c +++ b/src/backend/access/transam/slru.c @@ -599,7 +599,7 @@ SimpleLruDoesPhysicalPageExist(SlruCtl ctl, int pageno) SlruFileName(ctl, path, segno); - fd = OpenTransientFile(path, O_RDWR | PG_BINARY, S_IRUSR | S_IWUSR); + fd = OpenTransientFile(path, O_RDWR | PG_BINARY); if (fd < 0) { /* expected: file doesn't exist */ @@ -654,7 +654,7 @@ SlruPhysicalReadPage(SlruCtl ctl, int pageno, int slotno) * SlruPhysicalWritePage). Hence, if we are InRecovery, allow the case * where the file doesn't exist, and return zeroes instead. */ - fd = OpenTransientFile(path, O_RDWR | PG_BINARY, S_IRUSR | S_IWUSR); + fd = OpenTransientFile(path, O_RDWR | PG_BINARY); if (fd < 0) { if (errno != ENOENT || !InRecovery) @@ -804,8 +804,7 @@ SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno, SlruFlush fdata) * don't use O_EXCL or O_TRUNC or anything like that. 
*/ SlruFileName(ctl, path, segno); - fd = OpenTransientFile(path, O_RDWR | O_CREAT | PG_BINARY, - S_IRUSR | S_IWUSR); + fd = OpenTransientFile(path, O_RDWR | O_CREAT | PG_BINARY); if (fd < 0) { slru_errcause = SLRU_OPEN_FAILED; diff --git a/src/backend/access/transam/timeline.c b/src/backend/access/transam/timeline.c index 63db8a981d..3d65e5624a 100644 --- a/src/backend/access/transam/timeline.c +++ b/src/backend/access/transam/timeline.c @@ -307,8 +307,7 @@ writeTimeLineHistory(TimeLineID newTLI, TimeLineID parentTLI, unlink(tmppath); /* do not use get_sync_bit() here --- want to fsync only at end of fill */ - fd = OpenTransientFile(tmppath, O_RDWR | O_CREAT | O_EXCL, - S_IRUSR | S_IWUSR); + fd = OpenTransientFile(tmppath, O_RDWR | O_CREAT | O_EXCL); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), @@ -325,7 +324,7 @@ writeTimeLineHistory(TimeLineID newTLI, TimeLineID parentTLI, else TLHistoryFilePath(path, parentTLI); - srcfd = OpenTransientFile(path, O_RDONLY, 0); + srcfd = OpenTransientFile(path, O_RDONLY); if (srcfd < 0) { if (errno != ENOENT) @@ -459,8 +458,7 @@ writeTimeLineHistoryFile(TimeLineID tli, char *content, int size) unlink(tmppath); /* do not use get_sync_bit() here --- want to fsync only at end of fill */ - fd = OpenTransientFile(tmppath, O_RDWR | O_CREAT | O_EXCL, - S_IRUSR | S_IWUSR); + fd = OpenTransientFile(tmppath, O_RDWR | O_CREAT | O_EXCL); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c index bfd800bc16..cfaf8da781 100644 --- a/src/backend/access/transam/twophase.c +++ b/src/backend/access/transam/twophase.c @@ -1195,7 +1195,7 @@ ReadTwoPhaseFile(TransactionId xid, bool give_warnings) TwoPhaseFilePath(path, xid); - fd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0); + fd = OpenTransientFile(path, O_RDONLY | PG_BINARY); if (fd < 0) { if (give_warnings) @@ -1581,8 +1581,7 @@ RecreateTwoPhaseFile(TransactionId xid, void *content, int len) TwoPhaseFilePath(path, xid); fd = OpenTransientFile(path, - O_CREAT | O_TRUNC | O_WRONLY | PG_BINARY, - S_IRUSR | S_IWUSR); + O_CREAT | O_TRUNC | O_WRONLY | PG_BINARY); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index 051347163b..dd028a12a4 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -3185,8 +3185,7 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock) */ if (*use_existent) { - fd = BasicOpenFile(path, O_RDWR | PG_BINARY | get_sync_bit(sync_method), - S_IRUSR | S_IWUSR); + fd = BasicOpenFile(path, O_RDWR | PG_BINARY | get_sync_bit(sync_method)); if (fd < 0) { if (errno != ENOENT) @@ -3211,8 +3210,7 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock) unlink(tmppath); /* do not use get_sync_bit() here --- want to fsync only at end of fill */ - fd = BasicOpenFile(tmppath, O_RDWR | O_CREAT | O_EXCL | PG_BINARY, - S_IRUSR | S_IWUSR); + fd = BasicOpenFile(tmppath, O_RDWR | O_CREAT | O_EXCL | PG_BINARY); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), @@ -3308,8 +3306,7 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock) *use_existent = false; /* Now open original target segment (might not be file I just made) */ - fd = BasicOpenFile(path, O_RDWR | PG_BINARY | get_sync_bit(sync_method), - S_IRUSR | S_IWUSR); + fd = BasicOpenFile(path, O_RDWR | PG_BINARY | get_sync_bit(sync_method)); if (fd < 0) ereport(ERROR, 
(errcode_for_file_access(), @@ -3350,7 +3347,7 @@ XLogFileCopy(XLogSegNo destsegno, TimeLineID srcTLI, XLogSegNo srcsegno, * Open the source file */ XLogFilePath(path, srcTLI, srcsegno, wal_segment_size); - srcfd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0); + srcfd = OpenTransientFile(path, O_RDONLY | PG_BINARY); if (srcfd < 0) ereport(ERROR, (errcode_for_file_access(), @@ -3364,8 +3361,7 @@ XLogFileCopy(XLogSegNo destsegno, TimeLineID srcTLI, XLogSegNo srcsegno, unlink(tmppath); /* do not use get_sync_bit() here --- want to fsync only at end of fill */ - fd = OpenTransientFile(tmppath, O_RDWR | O_CREAT | O_EXCL | PG_BINARY, - S_IRUSR | S_IWUSR); + fd = OpenTransientFile(tmppath, O_RDWR | O_CREAT | O_EXCL | PG_BINARY); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), @@ -3543,8 +3539,7 @@ XLogFileOpen(XLogSegNo segno) XLogFilePath(path, ThisTimeLineID, segno, wal_segment_size); - fd = BasicOpenFile(path, O_RDWR | PG_BINARY | get_sync_bit(sync_method), - S_IRUSR | S_IWUSR); + fd = BasicOpenFile(path, O_RDWR | PG_BINARY | get_sync_bit(sync_method)); if (fd < 0) ereport(PANIC, (errcode_for_file_access(), @@ -3610,7 +3605,7 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli, snprintf(path, MAXPGPATH, XLOGDIR "/%s", xlogfname); } - fd = BasicOpenFile(path, O_RDONLY | PG_BINARY, 0); + fd = BasicOpenFile(path, O_RDONLY | PG_BINARY); if (fd >= 0) { /* Success! */ @@ -4449,8 +4444,7 @@ WriteControlFile(void) memcpy(buffer, ControlFile, sizeof(ControlFileData)); fd = BasicOpenFile(XLOG_CONTROL_FILE, - O_RDWR | O_CREAT | O_EXCL | PG_BINARY, - S_IRUSR | S_IWUSR); + O_RDWR | O_CREAT | O_EXCL | PG_BINARY); if (fd < 0) ereport(PANIC, (errcode_for_file_access(), @@ -4494,8 +4488,7 @@ ReadControlFile(void) * Read data... */ fd = BasicOpenFile(XLOG_CONTROL_FILE, - O_RDWR | PG_BINARY, - S_IRUSR | S_IWUSR); + O_RDWR | PG_BINARY); if (fd < 0) ereport(PANIC, (errcode_for_file_access(), @@ -4695,8 +4688,7 @@ UpdateControlFile(void) FIN_CRC32C(ControlFile->crc); fd = BasicOpenFile(XLOG_CONTROL_FILE, - O_RDWR | PG_BINARY, - S_IRUSR | S_IWUSR); + O_RDWR | PG_BINARY); if (fd < 0) ereport(PANIC, (errcode_for_file_access(), diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c index b11c94c9b6..3af6e19c98 100644 --- a/src/backend/access/transam/xlogutils.c +++ b/src/backend/access/transam/xlogutils.c @@ -694,7 +694,7 @@ XLogRead(char *buf, int segsize, TimeLineID tli, XLogRecPtr startptr, XLogFilePath(path, tli, sendSegNo, segsize); - sendFile = BasicOpenFile(path, O_RDONLY | PG_BINARY, 0); + sendFile = BasicOpenFile(path, O_RDONLY | PG_BINARY); if (sendFile < 0) { diff --git a/src/backend/catalog/catalog.c b/src/backend/catalog/catalog.c index 92d943cac7..f50ae3e41d 100644 --- a/src/backend/catalog/catalog.c +++ b/src/backend/catalog/catalog.c @@ -444,7 +444,7 @@ GetNewRelFileNode(Oid reltablespace, Relation pg_class, char relpersistence) /* Check for existing file of same name */ rpath = relpath(rnode, MAIN_FORKNUM); - fd = BasicOpenFile(rpath, O_RDONLY | PG_BINARY, 0); + fd = BasicOpenFile(rpath, O_RDONLY | PG_BINARY); if (fd >= 0) { diff --git a/src/backend/libpq/be-fsstubs.c b/src/backend/libpq/be-fsstubs.c index 19b34bfc84..84c2d26402 100644 --- a/src/backend/libpq/be-fsstubs.c +++ b/src/backend/libpq/be-fsstubs.c @@ -462,7 +462,7 @@ lo_import_internal(text *filename, Oid lobjOid) * open the file to be read in */ text_to_cstring_buffer(filename, fnamebuf, sizeof(fnamebuf)); - fd = OpenTransientFile(fnamebuf, O_RDONLY | PG_BINARY, S_IRWXU); + fd = 
OpenTransientFile(fnamebuf, O_RDONLY | PG_BINARY); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), @@ -540,8 +540,8 @@ be_lo_export(PG_FUNCTION_ARGS) oumask = umask(S_IWGRP | S_IWOTH); PG_TRY(); { - fd = OpenTransientFile(fnamebuf, O_CREAT | O_WRONLY | O_TRUNC | PG_BINARY, - S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); + fd = OpenTransientFilePerm(fnamebuf, O_CREAT | O_WRONLY | O_TRUNC | PG_BINARY, + S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); } PG_CATCH(); { diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c index edc6efb8a6..20d32679e0 100644 --- a/src/backend/replication/logical/origin.c +++ b/src/backend/replication/logical/origin.c @@ -546,9 +546,8 @@ CheckPointReplicationOrigin(void) * no other backend can perform this at the same time, we're protected by * CheckpointLock. */ - tmpfd = OpenTransientFile((char *) tmppath, - O_CREAT | O_EXCL | O_WRONLY | PG_BINARY, - S_IRUSR | S_IWUSR); + tmpfd = OpenTransientFile(tmppath, + O_CREAT | O_EXCL | O_WRONLY | PG_BINARY); if (tmpfd < 0) ereport(PANIC, (errcode_for_file_access(), @@ -660,7 +659,7 @@ StartupReplicationOrigin(void) elog(DEBUG2, "starting up replication origin progress state"); - fd = OpenTransientFile((char *) path, O_RDONLY | PG_BINARY, 0); + fd = OpenTransientFile(path, O_RDONLY | PG_BINARY); /* * might have had max_replication_slots == 0 last run, or we just brought diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c index 68766d522d..0f607bab70 100644 --- a/src/backend/replication/logical/reorderbuffer.c +++ b/src/backend/replication/logical/reorderbuffer.c @@ -2104,8 +2104,7 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn) /* open segment, create it if necessary */ fd = OpenTransientFile(path, - O_CREAT | O_WRONLY | O_APPEND | PG_BINARY, - S_IRUSR | S_IWUSR); + O_CREAT | O_WRONLY | O_APPEND | PG_BINARY); if (fd < 0) ereport(ERROR, @@ -2349,7 +2348,7 @@ ReorderBufferRestoreChanges(ReorderBuffer *rb, ReorderBufferTXN *txn, NameStr(MyReplicationSlot->data.name), txn->xid, (uint32) (recptr >> 32), (uint32) recptr); - *fd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0); + *fd = OpenTransientFile(path, O_RDONLY | PG_BINARY); if (*fd < 0 && errno == ENOENT) { *fd = -1; @@ -3038,7 +3037,7 @@ ApplyLogicalMappingFile(HTAB *tuplecid_data, Oid relid, const char *fname) LogicalRewriteMappingData map; sprintf(path, "pg_logical/mappings/%s", fname); - fd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0); + fd = OpenTransientFile(path, O_RDONLY | PG_BINARY); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c index fba57a0470..ad65b9831d 100644 --- a/src/backend/replication/logical/snapbuild.c +++ b/src/backend/replication/logical/snapbuild.c @@ -1597,8 +1597,7 @@ SnapBuildSerialize(SnapBuild *builder, XLogRecPtr lsn) /* we have valid data now, open tempfile and write it there */ fd = OpenTransientFile(tmppath, - O_CREAT | O_EXCL | O_WRONLY | PG_BINARY, - S_IRUSR | S_IWUSR); + O_CREAT | O_EXCL | O_WRONLY | PG_BINARY); if (fd < 0) ereport(ERROR, (errmsg("could not open file \"%s\": %m", path))); @@ -1682,7 +1681,7 @@ SnapBuildRestore(SnapBuild *builder, XLogRecPtr lsn) sprintf(path, "pg_logical/snapshots/%X-%X.snap", (uint32) (lsn >> 32), (uint32) lsn); - fd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0); + fd = OpenTransientFile(path, O_RDONLY | PG_BINARY); if (fd < 0 && errno == ENOENT) return false; 
diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c index 23de2577ef..0d27b6f39e 100644 --- a/src/backend/replication/slot.c +++ b/src/backend/replication/slot.c @@ -1233,9 +1233,7 @@ SaveSlotToPath(ReplicationSlot *slot, const char *dir, int elevel) sprintf(tmppath, "%s/state.tmp", dir); sprintf(path, "%s/state", dir); - fd = OpenTransientFile(tmppath, - O_CREAT | O_EXCL | O_WRONLY | PG_BINARY, - S_IRUSR | S_IWUSR); + fd = OpenTransientFile(tmppath, O_CREAT | O_EXCL | O_WRONLY | PG_BINARY); if (fd < 0) { ereport(elevel, @@ -1354,7 +1352,7 @@ RestoreSlotFromDisk(const char *name) elog(DEBUG1, "restoring replication slot from \"%s\"", path); - fd = OpenTransientFile(path, O_RDWR | PG_BINARY, 0); + fd = OpenTransientFile(path, O_RDWR | PG_BINARY); /* * We do not need to handle this as we are rename()ing the directory into diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index 56999e9315..6ec4e63161 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -472,7 +472,7 @@ SendTimeLineHistory(TimeLineHistoryCmd *cmd) pq_sendint(&buf, len, 4); /* col1 len */ pq_sendbytes(&buf, histfname, len); - fd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0666); + fd = OpenTransientFile(path, O_RDONLY | PG_BINARY); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), @@ -2366,7 +2366,7 @@ XLogRead(char *buf, XLogRecPtr startptr, Size count) XLogFilePath(path, curFileTimeLine, sendSegNo, wal_segment_size); - sendFile = BasicOpenFile(path, O_RDONLY | PG_BINARY, 0); + sendFile = BasicOpenFile(path, O_RDONLY | PG_BINARY); if (sendFile < 0) { /* diff --git a/src/backend/storage/file/copydir.c b/src/backend/storage/file/copydir.c index 1e2691685a..a5e074ead8 100644 --- a/src/backend/storage/file/copydir.c +++ b/src/backend/storage/file/copydir.c @@ -148,14 +148,13 @@ copy_file(char *fromfile, char *tofile) /* * Open the files */ - srcfd = OpenTransientFile(fromfile, O_RDONLY | PG_BINARY, 0); + srcfd = OpenTransientFile(fromfile, O_RDONLY | PG_BINARY); if (srcfd < 0) ereport(ERROR, (errcode_for_file_access(), errmsg("could not open file \"%s\": %m", fromfile))); - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY, - S_IRUSR | S_IWUSR); + dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY); if (dstfd < 0) ereport(ERROR, (errcode_for_file_access(), diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index 83b061a036..b0c174284b 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -116,6 +116,11 @@ */ #define FD_MINFREE 10 +/* + * Default mode for created files, unless something else is specified using + * the *Perm() function variants. 
+ */ +#define PG_FILE_MODE_DEFAULT (S_IRUSR | S_IWUSR) /* * A number of platforms allow individual processes to open many more files @@ -186,7 +191,7 @@ typedef struct vfd char *fileName; /* name of file, or NULL for unused VFD */ /* NB: fileName is malloc'd, and must be free'd when closing the VFD */ int fileFlags; /* open(2) flags for (re)opening the file */ - int fileMode; /* mode to pass to open(2) */ + mode_t fileMode; /* mode to pass to open(2) */ } Vfd; /* @@ -604,7 +609,7 @@ durable_rename(const char *oldfile, const char *newfile, int elevel) if (fsync_fname_ext(oldfile, false, false, elevel) != 0) return -1; - fd = OpenTransientFile((char *) newfile, PG_BINARY | O_RDWR, 0); + fd = OpenTransientFile(newfile, PG_BINARY | O_RDWR); if (fd < 0) { if (errno != ENOENT) @@ -917,7 +922,17 @@ set_max_safe_fds(void) } /* - * BasicOpenFile --- same as open(2) except can free other FDs if needed + * Open a file with BasicOpenFilePerm() and pass default file mode for the + * fileMode parameter. + */ +int +BasicOpenFile(const char *fileName, int fileFlags) +{ + return BasicOpenFilePerm(fileName, fileFlags, PG_FILE_MODE_DEFAULT); +} + +/* + * BasicOpenFilePerm --- same as open(2) except can free other FDs if needed * * This is exported for use by places that really want a plain kernel FD, * but need to be proof against running out of FDs. Once an FD has been @@ -933,7 +948,7 @@ set_max_safe_fds(void) * this module wouldn't have any open files to close at that point anyway. */ int -BasicOpenFile(FileName fileName, int fileFlags, int fileMode) +BasicOpenFilePerm(const char *fileName, int fileFlags, mode_t fileMode) { int fd; @@ -1084,8 +1099,8 @@ LruInsert(File file) * overall system file table being full. So, be prepared to release * another FD if necessary... */ - vfdP->fd = BasicOpenFile(vfdP->fileName, vfdP->fileFlags, - vfdP->fileMode); + vfdP->fd = BasicOpenFilePerm(vfdP->fileName, vfdP->fileFlags, + vfdP->fileMode); if (vfdP->fd < 0) { DO_DB(elog(LOG, "re-open failed: %m")); @@ -1292,6 +1307,16 @@ FileInvalidate(File file) } #endif +/* + * Open a file with PathNameOpenFilePerm() and pass default file mode for the + * fileMode parameter. + */ +File +PathNameOpenFile(const char *fileName, int fileFlags) +{ + return PathNameOpenFilePerm(fileName, fileFlags, PG_FILE_MODE_DEFAULT); +} + /* * open a file in an arbitrary directory * @@ -1300,13 +1325,13 @@ FileInvalidate(File file) * (which should always be $PGDATA when this code is running). */ File -PathNameOpenFile(FileName fileName, int fileFlags, int fileMode) +PathNameOpenFilePerm(const char *fileName, int fileFlags, mode_t fileMode) { char *fnamecopy; File file; Vfd *vfdP; - DO_DB(elog(LOG, "PathNameOpenFile: %s %x %o", + DO_DB(elog(LOG, "PathNameOpenFilePerm: %s %x %o", fileName, fileFlags, fileMode)); /* @@ -1324,7 +1349,7 @@ PathNameOpenFile(FileName fileName, int fileFlags, int fileMode) /* Close excess kernel FDs. */ ReleaseLruFiles(); - vfdP->fd = BasicOpenFile(fileName, fileFlags, fileMode); + vfdP->fd = BasicOpenFilePerm(fileName, fileFlags, fileMode); if (vfdP->fd < 0) { @@ -1461,8 +1486,7 @@ OpenTemporaryFileInTablespace(Oid tblspcOid, bool rejectError) * temp file that can be reused. 
*/ file = PathNameOpenFile(tempfilepath, - O_RDWR | O_CREAT | O_TRUNC | PG_BINARY, - 0600); + O_RDWR | O_CREAT | O_TRUNC | PG_BINARY); if (file <= 0) { /* @@ -1476,8 +1500,7 @@ OpenTemporaryFileInTablespace(Oid tblspcOid, bool rejectError) mkdir(tempdirpath, S_IRWXU); file = PathNameOpenFile(tempfilepath, - O_RDWR | O_CREAT | O_TRUNC | PG_BINARY, - 0600); + O_RDWR | O_CREAT | O_TRUNC | PG_BINARY); if (file <= 0 && rejectError) elog(ERROR, "could not create temporary file \"%s\": %m", tempfilepath); @@ -2006,7 +2029,7 @@ FileGetRawFlags(File file) /* * FileGetRawMode - returns the mode bitmask passed to open(2) */ -int +mode_t FileGetRawMode(File file) { Assert(FileIsValid(file)); @@ -2136,12 +2159,21 @@ AllocateFile(const char *name, const char *mode) return NULL; } +/* + * Open a file with OpenTransientFilePerm() and pass default file mode for + * the fileMode parameter. + */ +int +OpenTransientFile(const char *fileName, int fileFlags) +{ + return OpenTransientFilePerm(fileName, fileFlags, PG_FILE_MODE_DEFAULT); +} /* * Like AllocateFile, but returns an unbuffered fd like open(2) */ int -OpenTransientFile(FileName fileName, int fileFlags, int fileMode) +OpenTransientFilePerm(const char *fileName, int fileFlags, mode_t fileMode) { int fd; @@ -2158,7 +2190,7 @@ OpenTransientFile(FileName fileName, int fileFlags, int fileMode) /* Close excess kernel FDs. */ ReleaseLruFiles(); - fd = BasicOpenFile(fileName, fileFlags, fileMode); + fd = BasicOpenFilePerm(fileName, fileFlags, fileMode); if (fd >= 0) { @@ -3081,7 +3113,7 @@ pre_sync_fname(const char *fname, bool isdir, int elevel) if (isdir) return; - fd = OpenTransientFile((char *) fname, O_RDONLY | PG_BINARY, 0); + fd = OpenTransientFile(fname, O_RDONLY | PG_BINARY); if (fd < 0) { @@ -3141,7 +3173,7 @@ fsync_fname_ext(const char *fname, bool isdir, bool ignore_perm, int elevel) else flags |= O_RDONLY; - fd = OpenTransientFile((char *) fname, flags, 0); + fd = OpenTransientFile(fname, flags); /* * Some OSs don't allow us to open directories at all (Windows returns diff --git a/src/backend/storage/ipc/dsm_impl.c b/src/backend/storage/ipc/dsm_impl.c index 1500465d31..c63780139e 100644 --- a/src/backend/storage/ipc/dsm_impl.c +++ b/src/backend/storage/ipc/dsm_impl.c @@ -835,7 +835,7 @@ dsm_impl_mmap(dsm_op op, dsm_handle handle, Size request_size, /* Create new segment or open an existing one for attach or resize. */ flags = O_RDWR | (op == DSM_OP_CREATE ? O_CREAT | O_EXCL : 0); - if ((fd = OpenTransientFile(name, flags, 0600)) == -1) + if ((fd = OpenTransientFile(name, flags)) == -1) { if (errno != EEXIST) ereport(elevel, diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c index 65e0abe9ec..64a4ccf0db 100644 --- a/src/backend/storage/smgr/md.c +++ b/src/backend/storage/smgr/md.c @@ -304,7 +304,7 @@ mdcreate(SMgrRelation reln, ForkNumber forkNum, bool isRedo) path = relpath(reln->smgr_rnode, forkNum); - fd = PathNameOpenFile(path, O_RDWR | O_CREAT | O_EXCL | PG_BINARY, 0600); + fd = PathNameOpenFile(path, O_RDWR | O_CREAT | O_EXCL | PG_BINARY); if (fd < 0) { @@ -317,7 +317,7 @@ mdcreate(SMgrRelation reln, ForkNumber forkNum, bool isRedo) * already, even if isRedo is not set. 
(See also mdopen) */ if (isRedo || IsBootstrapProcessingMode()) - fd = PathNameOpenFile(path, O_RDWR | PG_BINARY, 0600); + fd = PathNameOpenFile(path, O_RDWR | PG_BINARY); if (fd < 0) { /* be sure to report the error reported by create, not open */ @@ -430,7 +430,7 @@ mdunlinkfork(RelFileNodeBackend rnode, ForkNumber forkNum, bool isRedo) /* truncate(2) would be easier here, but Windows hasn't got it */ int fd; - fd = OpenTransientFile(path, O_RDWR | PG_BINARY, 0); + fd = OpenTransientFile(path, O_RDWR | PG_BINARY); if (fd >= 0) { int save_errno; @@ -583,7 +583,7 @@ mdopen(SMgrRelation reln, ForkNumber forknum, int behavior) path = relpath(reln->smgr_rnode, forknum); - fd = PathNameOpenFile(path, O_RDWR | PG_BINARY, 0600); + fd = PathNameOpenFile(path, O_RDWR | PG_BINARY); if (fd < 0) { @@ -594,7 +594,7 @@ mdopen(SMgrRelation reln, ForkNumber forknum, int behavior) * substitute for mdcreate() in bootstrap mode only. (See mdcreate) */ if (IsBootstrapProcessingMode()) - fd = PathNameOpenFile(path, O_RDWR | O_CREAT | O_EXCL | PG_BINARY, 0600); + fd = PathNameOpenFile(path, O_RDWR | O_CREAT | O_EXCL | PG_BINARY); if (fd < 0) { if ((behavior & EXTENSION_RETURN_NULL) && @@ -1780,7 +1780,7 @@ _mdfd_openseg(SMgrRelation reln, ForkNumber forknum, BlockNumber segno, fullpath = _mdfd_segpath(reln, forknum, segno); /* open the file */ - fd = PathNameOpenFile(fullpath, O_RDWR | PG_BINARY | oflags, 0600); + fd = PathNameOpenFile(fullpath, O_RDWR | PG_BINARY | oflags); pfree(fullpath); diff --git a/src/backend/utils/cache/relmapper.c b/src/backend/utils/cache/relmapper.c index f5394dc43d..41c2ba7f97 100644 --- a/src/backend/utils/cache/relmapper.c +++ b/src/backend/utils/cache/relmapper.c @@ -644,8 +644,7 @@ load_relmap_file(bool shared) } /* Read data ... */ - fd = OpenTransientFile(mapfilename, - O_RDONLY | PG_BINARY, S_IRUSR | S_IWUSR); + fd = OpenTransientFile(mapfilename, O_RDONLY | PG_BINARY); if (fd < 0) ereport(FATAL, (errcode_for_file_access(), @@ -745,9 +744,7 @@ write_relmap_file(bool shared, RelMapFile *newmap, realmap = &local_map; } - fd = OpenTransientFile(mapfilename, - O_WRONLY | O_CREAT | PG_BINARY, - S_IRUSR | S_IWUSR); + fd = OpenTransientFile(mapfilename, O_WRONLY | O_CREAT | PG_BINARY); if (fd < 0) ereport(ERROR, (errcode_for_file_access(), diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index e1fd446ce5..47a5f25707 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -7242,8 +7242,7 @@ AlterSystemSetConfigFile(AlterSystemStmt *altersysstmt) * truncate and reuse it. */ Tmpfd = BasicOpenFile(AutoConfTmpFileName, - O_CREAT | O_RDWR | O_TRUNC, - S_IRUSR | S_IWUSR); + O_CREAT | O_RDWR | O_TRUNC); if (Tmpfd < 0) ereport(ERROR, (errcode_for_file_access(), diff --git a/src/include/storage/fd.h b/src/include/storage/fd.h index faef39e78d..6ea26e63b8 100644 --- a/src/include/storage/fd.h +++ b/src/include/storage/fd.h @@ -22,7 +22,7 @@ * Use them for all file activity... * * File fd; - * fd = PathNameOpenFile("foo", O_RDONLY, 0600); + * fd = PathNameOpenFile("foo", O_RDONLY); * * AllocateFile(); * FreeFile(); @@ -46,8 +46,6 @@ * FileSeek uses the standard UNIX lseek(2) flags. 
*/ -typedef char *FileName; - typedef int File; @@ -65,7 +63,8 @@ extern int max_safe_fds; */ /* Operations on virtual Files --- equivalent to Unix kernel file ops */ -extern File PathNameOpenFile(FileName fileName, int fileFlags, int fileMode); +extern File PathNameOpenFile(const char *fileName, int fileFlags); +extern File PathNameOpenFilePerm(const char *fileName, int fileFlags, mode_t fileMode); extern File OpenTemporaryFile(bool interXact); extern void FileClose(File file); extern int FilePrefetch(File file, off_t offset, int amount, uint32 wait_event_info); @@ -78,7 +77,7 @@ extern void FileWriteback(File file, off_t offset, off_t nbytes, uint32 wait_eve extern char *FilePathName(File file); extern int FileGetRawDesc(File file); extern int FileGetRawFlags(File file); -extern int FileGetRawMode(File file); +extern mode_t FileGetRawMode(File file); /* Operations that allow use of regular stdio --- USE WITH CAUTION */ extern FILE *AllocateFile(const char *name, const char *mode); @@ -94,11 +93,13 @@ extern struct dirent *ReadDir(DIR *dir, const char *dirname); extern int FreeDir(DIR *dir); /* Operations to allow use of a plain kernel FD, with automatic cleanup */ -extern int OpenTransientFile(FileName fileName, int fileFlags, int fileMode); +extern int OpenTransientFile(const char *fileName, int fileFlags); +extern int OpenTransientFilePerm(const char *fileName, int fileFlags, mode_t fileMode); extern int CloseTransientFile(int fd); /* If you've really really gotta have a plain kernel FD, use this */ -extern int BasicOpenFile(FileName fileName, int fileFlags, int fileMode); +extern int BasicOpenFile(const char *fileName, int fileFlags); +extern int BasicOpenFilePerm(const char *fileName, int fileFlags, mode_t fileMode); /* Miscellaneous support routines */ extern void InitFileAccess(void); From 01c7d3ef85d4b0e1c52cc1a3542864f95f386f76 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 23 Sep 2017 12:56:31 -0400 Subject: [PATCH 0256/1087] Ten-second timeout in 013_crash_restart.pl is not enough, let's try 60. Per buildfarm member topminnow. --- src/test/recovery/t/013_crash_restart.pl | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl index ce35bdd633..ca02054ff0 100644 --- a/src/test/recovery/t/013_crash_restart.pl +++ b/src/test/recovery/t/013_crash_restart.pl @@ -29,9 +29,9 @@ # To avoid hanging while expecting some specific input from a psql # instance being driven by us, add a timeout high enough that it -# should never trigger in a normal run, but low enough to actually see -# failures in a realistic amount of time. -my $psql_timeout = IPC::Run::timer(10); +# should never trigger even on very slow machines, unless something +# is really wrong. +my $psql_timeout = IPC::Run::timer(60); my $node = get_new_node('master'); $node->init(allows_streaming => 1); From ad51c6fb5708342e603d12a730bbc4e663bd637e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 23 Sep 2017 13:02:30 -0400 Subject: [PATCH 0257/1087] Remove pgbench "progress" test pending solution of its timing issues. Buildfarm member skink shows that this is even more flaky than I thought. There are probably some actual pgbench bugs here as well as a timing dependency. But we can't have stuff this unstable in the buildfarm, it obscures other issues. 
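To illustrate why such a test is inherently racy, here is a minimal C sketch of a -P style once-per-second progress loop. This is not pgbench's actual implementation; the function and its workload are hypothetical. Reports fire when wall-clock time crosses a one-second boundary, so the number of reports emitted in a fixed-duration run depends on when the thread starts relative to those boundaries and on scheduling delays, which is why the removed test below had to accept anywhere from 1 to 3 reports for a 2-second run.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical sketch of a once-per-second progress reporter. */
    static void
    run_with_progress(int duration)
    {
        time_t      start = time(NULL);
        time_t      next_report = start + 1;

        while (time(NULL) - start < duration)
        {
            usleep(10000);      /* stand-in for one unit of benchmark work */

            /*
             * Whether the final one-second boundary is crossed before the
             * loop exits depends on thread start time and scheduling, so
             * the report count for a run of T seconds varies by +/- 1.
             */
            if (time(NULL) >= next_report)
            {
                printf("progress: %ld s\n", (long) (time(NULL) - start));
                next_report++;
            }
        }
    }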
--- src/bin/pgbench/t/001_pgbench_with_server.pl | 22 -------------------- 1 file changed, 22 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 7db4bc8c97..11bc0fecfe 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -445,28 +445,6 @@ sub check_pgbench_logs ok(unlink(@logs), "remove log files"); } -# note: --progress-timestamp is not tested -pgbench( - '-T 2 -P 1 -l --log-prefix=001_pgbench_log_1 --aggregate-interval=1' - . ' -S -b se@2 --rate=20 --latency-limit=1000 -j ' . $nthreads - . ' -c 3 -r', - 0, - [ qr{type: multiple}, - qr{clients: 3}, - qr{threads: $nthreads}, - qr{duration: 2 s}, - qr{script 1: .* select only}, - qr{script 2: .* select only}, - qr{statement latencies in milliseconds}, - qr{FROM pgbench_accounts} ], - [ qr{vacuum}, qr{progress: 1\b} ], - 'pgbench progress'); - -# $nthreads threads, 2 seconds, but due to timing imprecision we might get -# only 1 or as many as 3 progress reports per thread. -check_pgbench_logs('001_pgbench_log_1', $nthreads, 1, 3, - qr{^\d+ \d{1,2} \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+ \d+$}); - # with sampling rate pgbench( '-n -S -t 50 -c 2 --log --log-prefix=001_pgbench_log_2 --sampling-rate=0.5', From 335f3d04e4c8dd495c4dd30ab1049b6fe8f25052 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 23 Sep 2017 13:28:16 -0400 Subject: [PATCH 0258/1087] Improve memory management in autovacuum.c. Invoke vacuum(), as well as "work item" processing, in the PortalContext that do_autovacuum() has manufactured, which will be reset before each such invocation. This ensures cleanup of any memory leaked by these operations. It also avoids the rather dangerous practice of calling vacuum() in a context that vacuum() itself will destroy while it runs. There's no known live bug there, but it's not hard to imagine introducing one if we leave it like this. Tom Lane, reviewed by Michael Paquier and Alvaro Herrera Discussion: https://postgr.es/m/13849.1506114543@sss.pgh.pa.us --- src/backend/postmaster/autovacuum.c | 22 +++++++++++++++++----- 1 file changed, 17 insertions(+), 5 deletions(-) diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index b745d8962e..db6d91ffdf 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -2444,8 +2444,10 @@ do_autovacuum(void) */ PG_TRY(); { + /* Use PortalContext for any per-table allocations */ + MemoryContextSwitchTo(PortalContext); + /* have at it */ - MemoryContextSwitchTo(TopTransactionContext); autovacuum_do_vac_analyze(tab, bstrategy); /* @@ -2482,6 +2484,9 @@ do_autovacuum(void) } PG_END_TRY(); + /* Make sure we're back in AutovacMemCxt */ + MemoryContextSwitchTo(AutovacMemCxt); + did_vacuum = true; /* the PGXACT flags are reset at the next end of transaction */ @@ -2533,8 +2538,7 @@ do_autovacuum(void) perform_work_item(workitem); /* - * Check for config changes before acquiring lock for further - * jobs. + * Check for config changes before acquiring lock for further jobs. */ CHECK_FOR_INTERRUPTS(); if (got_SIGHUP) @@ -2605,6 +2609,7 @@ perform_work_item(AutoVacuumWorkItem *workitem) * must live in a long-lived memory context because we call vacuum and * analyze in different transactions. 
*/ + Assert(CurrentMemoryContext == AutovacMemCxt); cur_relname = get_rel_name(workitem->avw_relation); cur_nspname = get_namespace_name(get_rel_namespace(workitem->avw_relation)); @@ -2614,6 +2619,9 @@ perform_work_item(AutoVacuumWorkItem *workitem) autovac_report_workitem(workitem, cur_nspname, cur_datname); + /* clean up memory before each work item */ + MemoryContextResetAndDeleteChildren(PortalContext); + /* * We will abort the current work item if something errors out, and * continue with the next one; in particular, this happens if we are @@ -2622,9 +2630,10 @@ perform_work_item(AutoVacuumWorkItem *workitem) */ PG_TRY(); { - /* have at it */ - MemoryContextSwitchTo(TopTransactionContext); + /* Use PortalContext for any per-work-item allocations */ + MemoryContextSwitchTo(PortalContext); + /* have at it */ switch (workitem->avw_type) { case AVW_BRINSummarizeRange: @@ -2668,6 +2677,9 @@ perform_work_item(AutoVacuumWorkItem *workitem) } PG_END_TRY(); + /* Make sure we're back in AutovacMemCxt */ + MemoryContextSwitchTo(AutovacMemCxt); + /* We intentionally do not set did_vacuum here */ /* be tidy */ From 737639017c87d5a0a466e8676f1eadc61d775c78 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 23 Sep 2017 15:01:59 -0400 Subject: [PATCH 0259/1087] Fix bogus size calculation in strlist_to_textarray(). It's making an array of Datum, not an array of text *. The mistake is harmless since those are currently the same size, but it's still wrong. --- src/backend/catalog/objectaddress.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c index 6cac2dfd1d..c2ad7c675e 100644 --- a/src/backend/catalog/objectaddress.c +++ b/src/backend/catalog/objectaddress.c @@ -5051,7 +5051,7 @@ getRelationIdentity(StringInfo buffer, Oid relid, List **object) } /* - * Auxiliary function to return a TEXT array out of a list of C-strings. + * Auxiliary function to build a TEXT array out of a list of C-strings. */ ArrayType * strlist_to_textarray(List *list) @@ -5063,12 +5063,14 @@ strlist_to_textarray(List *list) MemoryContext memcxt; MemoryContext oldcxt; + /* Work in a temp context; easier than individually pfree'ing the Datums */ memcxt = AllocSetContextCreate(CurrentMemoryContext, "strlist to array", ALLOCSET_DEFAULT_SIZES); oldcxt = MemoryContextSwitchTo(memcxt); - datums = palloc(sizeof(text *) * list_length(list)); + datums = (Datum *) palloc(sizeof(Datum) * list_length(list)); + foreach(cell, list) { char *name = lfirst(cell); @@ -5080,6 +5082,7 @@ strlist_to_textarray(List *list) arr = construct_array(datums, list_length(list), TEXTOID, -1, false, 'i'); + MemoryContextDelete(memcxt); return arr; From 24541ffd788d56009126fff52b2341ada6c84245 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 23 Sep 2017 15:16:48 -0400 Subject: [PATCH 0260/1087] ... and the very same bug in publicationListToArray(). Sigh. --- src/backend/commands/subscriptioncmds.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c index 372fa1b634..086a6ef85e 100644 --- a/src/backend/commands/subscriptioncmds.c +++ b/src/backend/commands/subscriptioncmds.c @@ -244,7 +244,7 @@ parse_subscription_options(List *options, bool *connect, bool *enabled_given, } /* - * Auxiliary function to return a text array out of a list of String nodes. + * Auxiliary function to build a text array out of a list of String nodes. 
*/ static Datum publicationListToArray(List *publist) @@ -264,7 +264,8 @@ publicationListToArray(List *publist) ALLOCSET_DEFAULT_MAXSIZE); oldcxt = MemoryContextSwitchTo(memcxt); - datums = palloc(sizeof(text *) * list_length(publist)); + datums = (Datum *) palloc(sizeof(Datum) * list_length(publist)); + foreach(cell, publist) { char *name = strVal(lfirst(cell)); @@ -275,7 +276,7 @@ publicationListToArray(List *publist) { char *pname = strVal(lfirst(pcell)); - if (name == pname) + if (pcell == cell) break; if (strcmp(name, pname) == 0) @@ -292,6 +293,7 @@ publicationListToArray(List *publist) arr = construct_array(datums, list_length(publist), TEXTOID, -1, false, 'i'); + MemoryContextDelete(memcxt); return PointerGetDatum(arr); From 74ca8f9b9077017529fe658e445a11da296ac6ab Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 23 Sep 2017 22:59:26 -0400 Subject: [PATCH 0261/1087] Fix pg_basebackup test to original intent One test case was meant to check that pg_basebackup does not succeed when a slot is specified with -S but WAL streaming is not selected, which used to require specifying -X stream. Since -X stream is the default in PostgreSQL 10, this test case no longer covers that meaning, but the pg_basebackup invocation happened to fail anyway for the unrelated reason that the specified replication slot does not exist. To fix, move the test case to later in the file where the slot does exist, and add -X none to the invocation so that it covers the originally meant behavior. extracted from a patch by Michael Banck --- src/bin/pg_basebackup/t/010_pg_basebackup.pl | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl index a00f7b0e1a..cce14b83e1 100644 --- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl +++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl @@ -255,9 +255,6 @@ 'stream', '--no-slot' ], 'pg_basebackup -X stream runs with --no-slot'); -$node->command_fails( - [ 'pg_basebackup', '-D', "$tempdir/fail", '-S', 'slot1' ], - 'pg_basebackup with replication slot fails without -X stream'); $node->command_fails( [ 'pg_basebackup', '-D', "$tempdir/backupxs_sl_fail", '-X', @@ -271,6 +268,9 @@ q{SELECT restart_lsn FROM pg_replication_slots WHERE slot_name = 'slot1'} ); is($lsn, '', 'restart LSN of new slot is null'); +$node->command_fails( + [ 'pg_basebackup', '-D', "$tempdir/fail", '-S', 'slot1', '-X', 'none' ], + 'pg_basebackup with replication slot fails without WAL streaming'); $node->command_ok( [ 'pg_basebackup', '-D', "$tempdir/backupxs_sl", '-X', 'stream', '-S', 'slot1' ], From 9b31c72a9492880e657b68b1ed971dec3c361c95 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 24 Sep 2017 00:29:59 -0400 Subject: [PATCH 0262/1087] doc: Expand user documentation on SCRAM Explain more about how the different password authentication methods and the password_encryption settings relate to each other, give some upgrading advice, and set a better link from the release notes. 
Reviewed-by: Jeff Janes --- doc/src/sgml/client-auth.sgml | 127 ++++++++++++++++++++++++++-------- doc/src/sgml/config.sgml | 2 +- doc/src/sgml/release-10.sgml | 2 +- 3 files changed, 100 insertions(+), 31 deletions(-) diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 26c3d1242b..c76d5faf44 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -916,46 +916,82 @@ omicron bryanh guest1 MD5 + + SCRAM + password authentication - The password-based authentication methods are scram-sha-256, - md5, and password. These methods operate - similarly except for the way that the password is sent across the + There are several password-based authentication methods. These methods + operate similarly but differ in how the users' passwords are stored on the + server and how the password provided by a client is sent across the connection. - - Plain password sends the password in clear-text, and is - therefore vulnerable to password sniffing attacks. It should - always be avoided if possible. If the connection is protected by SSL - encryption then password can be used safely, though. - (Though SSL certificate authentication might be a better choice if one - is depending on using SSL). - + + + scram-sha-256 + + + The method scram-sha-256 performs SCRAM-SHA-256 + authentication, as described in + RFC 7677. It + is a challenge-response scheme that prevents password sniffing on + untrusted connections and supports storing passwords on the server in a + cryptographically hashed form that is thought to be secure. + + + This is the most secure of the currently provided methods, but it is + not supported by older client libraries. + + + - - scram-sha-256 performs SCRAM-SHA-256 authentication, as - described in - RFC 7677. It - is a challenge-response scheme, that prevents password sniffing on - untrusted connections. It is more secure than the md5 - method, but might not be supported by older clients. - + + md5 + + + The method md5 uses a custom less secure challenge-response + mechanism. It prevents password sniffing and avoids storing passwords + on the server in plain text but provides no protection if an attacker + manages to steal the password hash from the server. Also, the MD5 hash + algorithm is nowadays no longer considered to be secure against + determined attacks. + - - md5 allows falling back to a less secure challenge-response - mechanism for those users with an MD5 hashed password. - The fallback mechanism also prevents password sniffing, but provides no - protection if an attacker manages to steal the password hash from the - server, and it cannot be used with the feature. For all other users, - md5 works the same as scram-sha-256. - + + The md5 method cannot be used with + the feature. + + + + To ease transition from the md5 method to the newer + SCRAM method, if md5 is specified as a method + in pg_hba.conf but the user's password on the + server is encrypted for SCRAM (see below), then SCRAM-based + authentication will automatically be chosen instead. + + + + + + password + + + The method password sends the password in clear-text and is + therefore vulnerable to password sniffing attacks. It should + always be avoided if possible. If the connection is protected by SSL + encryption then password can be used safely, though. + (Though SSL certificate authentication might be a better choice if one + is depending on using SSL). + + + + PostgreSQL database passwords are @@ -964,11 +1000,44 @@ omicron bryanh guest1 catalog.
Passwords can be managed with the SQL commands and , - e.g., CREATE USER foo WITH PASSWORD 'secret'. + e.g., CREATE USER foo WITH PASSWORD 'secret', + or the psql + command \password. If no password has been set up for a user, the stored password is null and password authentication will always fail for that user. + + The availability of the different password-based authentication methods + depends on how a user's password on the server is encrypted (or hashed, + more accurately). This is controlled by the configuration + parameter at the time the + password is set. If a password was encrypted using + the scram-sha-256 setting, then it can be used for the + authentication methods scram-sha-256 + and password (but password transmission will be in + plain text in the latter case). The authentication method + specification md5 will automatically switch to using + the scram-sha-256 method in this case, as explained + above, so it will also work. If a password was encrypted using + the md5 setting, then it can be used only for + the md5 and password authentication + method specifications (again, with the password transmitted in plain text + in the latter case). (Previous PostgreSQL releases supported storing the + password on the server in plain text. This is no longer possible.) To + check the currently stored password hashes, see the system + catalog pg_authid. + + + + To upgrade an existing installation from md5 + to scram-sha-256, after having ensured that all client + libraries in use are new enough to support SCRAM, + set password_encryption = 'scram-sha-256' + in postgresql.conf, make all users set new passwords, + and change the authentication method specifications + in pg_hba.conf to scram-sha-256. + diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 5f59a382f1..4b265d9e40 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -1190,7 +1190,7 @@ include_dir 'conf.d' Note that older clients might lack support for the SCRAM authentication mechanism, and hence not work with passwords encrypted with - SCRAM-SHA-256. + SCRAM-SHA-256. See for more details. diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 2658b73ca6..9fd3b2c8ac 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -1184,7 +1184,7 @@ 2017-04-18 [c727f120f] Rename "scram" to "scram-sha-256" in pg_hba.conf and pas --> - Add SCRAM-SHA-256 + Add SCRAM-SHA-256 support for password negotiation and storage (Michael Paquier, Heikki Linnakangas) From 6dda0998afc7d449145b9ba216844bdba7a817d6 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 24 Sep 2017 00:56:31 -0400 Subject: [PATCH 0263/1087] Allow ICU to use SortSupport on Windows with UTF-8 There is no reason to ever prevent the use of SortSupport on Windows when ICU locales are used. We previously avoided SortSupport on Windows with UTF-8 server encoding and a non C-locale due to restrictions in Windows' libc functionality. This is now considered to be a restriction in one platform's libc collation provider, and not a more general platform restriction. 
Reported-by: Peter Geoghegan --- src/backend/utils/adt/varlena.c | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c index 260efd519a..4b5483dbb9 100644 --- a/src/backend/utils/adt/varlena.c +++ b/src/backend/utils/adt/varlena.c @@ -1823,12 +1823,6 @@ varstr_sortsupport(SortSupport ssup, Oid collid, bool bpchar) * requirements of BpChar callers. However, if LC_COLLATE = C, we can * make things quite a bit faster with varstrfastcmp_c or bpcharfastcmp_c, * both of which use memcmp() rather than strcoll(). - * - * There is a further exception on Windows. When the database encoding is - * UTF-8 and we are not using the C collation, complex hacks are required. - * We don't currently have a comparator that handles that case, so we fall - * back on the slow method of having the sort code invoke bttextcmp() (in - * the case of text) via the fmgr trampoline. */ if (lc_collate_is_c(collid)) { @@ -1839,14 +1833,8 @@ varstr_sortsupport(SortSupport ssup, Oid collid, bool bpchar) collate_c = true; } -#ifdef WIN32 - else if (GetDatabaseEncoding() == PG_UTF8) - return; -#endif else { - ssup->comparator = varstrfastcmp_locale; - /* * We need a collation-sensitive comparison. To make things faster, * we'll figure out the collation based on the locale id and cache the @@ -1867,6 +1855,22 @@ varstr_sortsupport(SortSupport ssup, Oid collid, bool bpchar) } locale = pg_newlocale_from_collation(collid); } + + /* + * There is a further exception on Windows. When the database + * encoding is UTF-8 and we are not using the C collation, complex + * hacks are required. We don't currently have a comparator that + * handles that case, so we fall back on the slow method of having the + * sort code invoke bttextcmp() (in the case of text) via the fmgr + * trampoline. ICU locales work just the same on Windows, however. + */ +#ifdef WIN32 + if (GetDatabaseEncoding() == PG_UTF8 && + !(locale && locale->provider == COLLPROVIDER_ICU)) + return; +#endif + + ssup->comparator = varstrfastcmp_locale; } /* From 8485a25a8c9a419ff3e0d30e43e4abd5e680cc65 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 24 Sep 2017 12:05:06 -0400 Subject: [PATCH 0264/1087] Fix assorted infelicities in new SetWALSegSize() function. * Failure to check for malloc failure (ok, pretty unlikely here, but that's not an excuse). * Leakage of open fd on read error, and of malloc'd buffer always. * Incorrect assumption that a short read would set errno to zero. * Failure to adhere to message style conventions (in particular, not reporting errno where relevant; using "couldn't open" rather than "could not open" is not really in line with project style either). * Missing newlines on some messages. Coverity spotted the leak problems; I noticed the rest while fixing the leaks. 
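The corrected shape of this code generalizes into a common idiom. Below is a minimal C sketch — illustrative names, not the actual pg_standby code — of reading a fixed-size file header while honoring the points above: errno is cleared before read(2) so a short read can be distinguished from a hard error, the fd is closed on every path, and messages follow the "could not ..." convention with errno reported. (The buffer is owned by the caller, so nothing is leaked here; in the real patch the abort-on-failure wrapper pg_malloc covers the allocation-check point.)

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical helper: read exactly "len" header bytes from "path". */
    static int
    read_header(const char *progname, const char *path, char *buf, size_t len)
    {
        int         fd;
        ssize_t     nread;

        if ((fd = open(path, O_RDONLY, 0)) < 0)
        {
            fprintf(stderr, "%s: could not open file \"%s\": %s\n",
                    progname, path, strerror(errno));
            return -1;
        }

        errno = 0;              /* a short read leaves errno at 0 */
        nread = read(fd, buf, len);
        close(fd);              /* release the fd on every path */

        if (nread == (ssize_t) len)
            return 0;
        if (errno != 0)
            fprintf(stderr, "%s: could not read file \"%s\": %s\n",
                    progname, path, strerror(errno));
        else
            fprintf(stderr, "%s: not enough data in file \"%s\"\n",
                    progname, path);
        return -1;
    }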
--- contrib/pg_standby/pg_standby.c | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-) diff --git a/contrib/pg_standby/pg_standby.c b/contrib/pg_standby/pg_standby.c index 6aeca6e8f7..cb785971a9 100644 --- a/contrib/pg_standby/pg_standby.c +++ b/contrib/pg_standby/pg_standby.c @@ -408,16 +408,21 @@ SetWALSegSize(void) { bool ret_val = false; int fd; - char *buf = (char *) malloc(XLOG_BLCKSZ); + + /* malloc this buffer to ensure sufficient alignment: */ + char *buf = (char *) pg_malloc(XLOG_BLCKSZ); Assert(WalSegSz == -1); if ((fd = open(WALFilePath, O_RDWR, 0)) < 0) { - fprintf(stderr, "%s: couldn't open WAL file \"%s\"\n", - progname, WALFilePath); + fprintf(stderr, "%s: could not open WAL file \"%s\": %s\n", + progname, WALFilePath, strerror(errno)); + pg_free(buf); return false; } + + errno = 0; if (read(fd, buf, XLOG_BLCKSZ) == XLOG_BLCKSZ) { XLogLongPageHeader longhdr = (XLogLongPageHeader) buf; @@ -433,7 +438,6 @@ SetWALSegSize(void) fprintf(stderr, "%s: WAL segment size must be a power of two between 1MB and 1GB, but the WAL file header specifies %d bytes\n", progname, WalSegSz); - close(fd); } else { @@ -444,17 +448,21 @@ SetWALSegSize(void) if (errno != 0) { if (debug) - fprintf(stderr, "could not read file \"%s\": %s", + fprintf(stderr, "could not read file \"%s\": %s\n", WALFilePath, strerror(errno)); } else { if (debug) - fprintf(stderr, "not enough data in file \"%s\"", WALFilePath); + fprintf(stderr, "not enough data in file \"%s\"\n", + WALFilePath); } } fflush(stderr); + + close(fd); + pg_free(buf); return ret_val; } From f2ab3898f3a25ef431db4ea90a8d128b974dbffe Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Mon, 25 Sep 2017 08:03:05 -0400 Subject: [PATCH 0265/1087] Support building with Visual Studio 2017 Haribabu Kommi, reviewed by Takeshi Ideriha and Christian Ullrich Backpatch to 9.6 --- doc/src/sgml/install-windows.sgml | 16 +++++++++------- src/tools/msvc/MSBuildProject.pm | 23 +++++++++++++++++++++++ src/tools/msvc/README | 13 +++++++------ src/tools/msvc/Solution.pm | 26 ++++++++++++++++++++++++++ src/tools/msvc/VSObjectFactory.pm | 13 +++++++++++++ 5 files changed, 78 insertions(+), 13 deletions(-) diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml index 1861e7e2f7..696c620b18 100644 --- a/doc/src/sgml/install-windows.sgml +++ b/doc/src/sgml/install-windows.sgml @@ -19,10 +19,10 @@ There are several different ways of building PostgreSQL on Windows. The simplest way to build with - Microsoft tools is to install Visual Studio Express 2015 + Microsoft tools is to install Visual Studio Express 2017 for Windows Desktop and use the included compiler. It is also possible to build with the full - Microsoft Visual C++ 2005 to 2015. + Microsoft Visual C++ 2005 to 2017. In some cases that requires the installation of the Windows SDK in addition to the compiler. @@ -69,19 +69,19 @@ Visual Studio Express or some versions of the Microsoft Windows SDK. If you do not already have a Visual Studio environment set up, the easiest - ways are to use the compilers from Visual Studio Express 2015 + ways are to use the compilers from Visual Studio Express 2017 for Windows Desktop or those in the Windows SDK - 7.1, which are both free downloads from Microsoft. + 8.1, which are both free downloads from Microsoft. Both 32-bit and 64-bit builds are possible with the Microsoft Compiler suite. 
32-bit PostgreSQL builds are possible with Visual Studio 2005 to - Visual Studio 2015 (including Express editions), - as well as standalone Windows SDK releases 6.0 to 7.1. + Visual Studio 2017 (including Express editions), + as well as standalone Windows SDK releases 6.0 to 8.1. 64-bit PostgreSQL builds are supported with - Microsoft Windows SDK version 6.0a to 7.1 or + Microsoft Windows SDK version 6.0a to 8.1 or Visual Studio 2008 and above. Compilation is supported down to Windows XP and Windows Server 2003 when building with @@ -89,6 +89,8 @@ Visual Studio 2013. Building with Visual Studio 2015 is supported down to Windows Vista and Windows Server 2008. + Building with Visual Studio 2017 is supported + down to Windows 7 SP1 and Windows Server 2008 R2 SP1. diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm index 27329f9e36..7a287bd0bd 100644 --- a/src/tools/msvc/MSBuildProject.pm +++ b/src/tools/msvc/MSBuildProject.pm @@ -483,4 +483,27 @@ sub new return $self; } +package VC2017Project; + +# +# Package that encapsulates a Visual C++ 2017 project file +# + +use strict; +use warnings; +use base qw(VC2012Project); + +sub new +{ + my $classname = shift; + my $self = $classname->SUPER::_new(@_); + bless($self, $classname); + + $self->{vcver} = '15.00'; + $self->{PlatformToolset} = 'v141'; + $self->{ToolsVersion} = '15.0'; + + return $self; +} + 1; diff --git a/src/tools/msvc/README b/src/tools/msvc/README index b61ddb8791..48082cab90 100644 --- a/src/tools/msvc/README +++ b/src/tools/msvc/README @@ -4,7 +4,7 @@ MSVC build ========== This directory contains the tools required to build PostgreSQL using -Microsoft Visual Studio 2005 - 2011. This builds the whole backend, not just +Microsoft Visual Studio 2005 - 2017. This builds the whole backend, not just the libpq frontend library. For more information, see the documentation chapter "Installation on Windows" and the description below. @@ -92,11 +92,12 @@ These configuration arguments are passed over to Mkvcbuild::mkvcbuild (Mkvcbuild.pm) which creates the Visual Studio project and solution files. It does this by using VSObjectFactory::CreateSolution to create an object implementing the Solution interface (this could be either a VS2005Solution, -a VS2008Solution, a VS2010Solution or a VS2012Solution, all in Solution.pm, -depending on the user's build environment) and adding objects implementing -the corresponding Project interface (VC2005Project or VC2008Project from -VCBuildProject.pm or VC2010Project or VC2012Project from MSBuildProject.pm) -to it. +a VS2008Solution, a VS2010Solution or a VS2012Solution or a VS2013Solution, +or a VS2015Solution or a VS2017Solution, all in Solution.pm, depending on +the user's build environment) and adding objects implementing the corresponding +Project interface (VC2005Project or VC2008Project from VCBuildProject.pm or +VC2010Project or VC2012Project or VC2013Project or VC2015Project or VC2017Project +from MSBuildProject.pm) to it. When Solution::Save is called, the implementations of Solution and Project save their content in the appropriate format. 
The final step of starting the appropriate build program (msbuild or vcbuild) diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm index 5d5f716b6f..0925bef139 100644 --- a/src/tools/msvc/Solution.pm +++ b/src/tools/msvc/Solution.pm @@ -849,6 +849,32 @@ sub new return $self; } +package VS2017Solution; + +# +# Package that encapsulates a Visual Studio 2017 solution file +# + +use Carp; +use strict; +use warnings; +use base qw(Solution); + +sub new +{ + my $classname = shift; + my $self = $classname->SUPER::_new(@_); + bless($self, $classname); + + $self->{solutionFileVersion} = '12.00'; + $self->{vcver} = '15.00'; + $self->{visualStudioName} = 'Visual Studio 2017'; + $self->{VisualStudioVersion} = '15.0.26730.3'; + $self->{MinimumVisualStudioVersion} = '10.0.40219.1'; + + return $self; +} + sub GetAdditionalHeaders { my ($self, $f) = @_; diff --git a/src/tools/msvc/VSObjectFactory.pm b/src/tools/msvc/VSObjectFactory.pm index 4190ada618..2f3480a1f6 100644 --- a/src/tools/msvc/VSObjectFactory.pm +++ b/src/tools/msvc/VSObjectFactory.pm @@ -53,8 +53,14 @@ sub CreateSolution { return new VS2015Solution(@_); } + # visual 2017 hasn't changed the nmake version to 15, so adjust the check to support it. + elsif (($visualStudioVersion ge '14.10') or ($visualStudioVersion eq '15.00')) + { + return new VS2017Solution(@_); + } else { + croak $visualStudioVersion; croak "The requested Visual Studio version is not supported."; } } @@ -92,8 +98,14 @@ sub CreateProject { return new VC2015Project(@_); } + # visual 2017 hasn't changed the nmake version to 15, so adjust the check to support it. + elsif (($visualStudioVersion ge '14.10') or ($visualStudioVersion eq '15.00')) + { + return new VC2017Project(@_); + } else { + croak $visualStudioVersion; croak "The requested Visual Studio version is not supported."; } } @@ -120,6 +132,7 @@ sub DetermineVisualStudioVersion sub _GetVisualStudioVersion { my ($major, $minor) = @_; + # visual 2017 hasn't changed the nmake version to 15, so still using the older version for comparison. if ($major > 14) { carp From 716ea626a88ac510523ab3af5bc779d78eeced58 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 25 Sep 2017 11:55:24 -0400 Subject: [PATCH 0266/1087] Make construct_[md_]array return a valid empty array for zero-size input. If construct_array() or construct_md_array() were given a dimension of zero, they'd produce an array that contains no elements but has positive dimension. This violates a general expectation that empty arrays should have ndims = 0; in particular, while arrays like this print as empty, they don't compare equal to other empty arrays. Up to now we've expected callers to avoid making such calls and instead be careful to call construct_empty_array() if there would be no elements. But this has always been an easily missed case, and we've repeatedly had to fix callers to do it right. In bug #14826, Erwin Brandstetter pointed out yet another such oversight, in ts_lexize(); and a bit of examination of other call sites found at least two more with similar issues. So let's fix the problem centrally and permanently by changing these two functions to construct a proper zero-D empty array whenever the array would be empty. This renders a few explicit calls of construct_empty_array() redundant, but the only such place I found that really seemed worth changing was in ExecEvalArrayExpr(). 
Although this fixes some very old bugs, no back-patch: the problem is pretty minor and the risk of changing behavior seems to outweigh the benefit in stable branches. Discussion: https://postgr.es/m/20170923125723.1448.39412@wrigleys.postgresql.org Discussion: https://postgr.es/m/20570.1506198383@sss.pgh.pa.us --- src/backend/executor/execExprInterp.c | 8 -------- src/backend/utils/adt/arrayfuncs.c | 10 ++++++---- src/test/regress/expected/tsearch.out | 11 +++++++++++ src/test/regress/sql/tsearch.sql | 4 ++++ 4 files changed, 21 insertions(+), 12 deletions(-) diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index bd8a15d6c3..09abd46dda 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -2131,14 +2131,6 @@ ExecEvalArrayExpr(ExprState *state, ExprEvalStep *op) Datum *dvalues = op->d.arrayexpr.elemvalues; bool *dnulls = op->d.arrayexpr.elemnulls; - /* Shouldn't happen here, but if length is 0, return empty array */ - if (nelems == 0) - { - *op->resvalue = - PointerGetDatum(construct_empty_array(element_type)); - return; - } - /* setup for 1-D array of the given length */ ndims = 1; dims[0] = nelems; diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c index e4101c9af0..d1f2fe7d95 100644 --- a/src/backend/utils/adt/arrayfuncs.c +++ b/src/backend/utils/adt/arrayfuncs.c @@ -3297,6 +3297,7 @@ array_map(FunctionCallInfo fcinfo, Oid retType, ArrayMapState *amstate) * * A palloc'd 1-D array object is constructed and returned. Note that * elem values will be copied into the object even if pass-by-ref type. + * Also note the result will be 0-D not 1-D if nelems = 0. * * NOTE: it would be cleaner to look up the elmlen/elmbval/elmalign info * from the system catalogs, given the elmtype. However, the caller is @@ -3331,6 +3332,7 @@ construct_array(Datum *elems, int nelems, * * A palloc'd ndims-D array object is constructed and returned. Note that * elem values will be copied into the object even if pass-by-ref type. + * Also note the result will be 0-D not ndims-D if any dims[i] = 0. * * NOTE: it would be cleaner to look up the elmlen/elmbval/elmalign info * from the system catalogs, given the elmtype. 
However, the caller is @@ -3362,12 +3364,12 @@ construct_md_array(Datum *elems, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", ndims, MAXDIM))); - /* fast track for empty array */ - if (ndims == 0) - return construct_empty_array(elmtype); - nelems = ArrayGetNItems(ndims, dims); + /* if ndims <= 0 or any dims[i] == 0, return empty array */ + if (nelems <= 0) + return construct_empty_array(elmtype); + /* compute required space */ nbytes = 0; hasnulls = false; diff --git a/src/test/regress/expected/tsearch.out b/src/test/regress/expected/tsearch.out index b2fc9e207e..d63fb12f1d 100644 --- a/src/test/regress/expected/tsearch.out +++ b/src/test/regress/expected/tsearch.out @@ -618,6 +618,17 @@ SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx'); url_path | URL path | /?xx | {simple} | simple | {/?xx} (3 rows) +SELECT token, alias, + dictionaries, dictionaries is null as dnull, array_dims(dictionaries) as ddims, + lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims +from ts_debug('english', 'a title'); + token | alias | dictionaries | dnull | ddims | lexemes | lnull | ldims +-------+-----------+----------------+-------+-------+---------+-------+------- + a | asciiword | {english_stem} | f | [1:1] | {} | f | + | blank | {} | f | | | t | + title | asciiword | {english_stem} | f | [1:1] | {titl} | f | [1:1] +(3 rows) + -- to_tsquery SELECT to_tsquery('english', 'qwe & sKies '); to_tsquery diff --git a/src/test/regress/sql/tsearch.sql b/src/test/regress/sql/tsearch.sql index e4b21f8f18..1c8520b3e9 100644 --- a/src/test/regress/sql/tsearch.sql +++ b/src/test/regress/sql/tsearch.sql @@ -145,6 +145,10 @@ SELECT * from ts_debug('english', 'http://www.harewoodsolutions.co.uk/press.aspx SELECT * from ts_debug('english', 'http://aew.wer0c.ewr/id?ad=qwe&dw'); SELECT * from ts_debug('english', 'http://5aew.werc.ewr:8100/?'); SELECT * from ts_debug('english', '5aew.werc.ewr:8100/?xx'); +SELECT token, alias, + dictionaries, dictionaries is null as dnull, array_dims(dictionaries) as ddims, + lexemes, lexemes is null as lnull, array_dims(lexemes) as ldims +from ts_debug('english', 'a title'); -- to_tsquery From 899bd785c0edf376077d3f5d65c316f92c1b64b5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 25 Sep 2017 16:09:19 -0400 Subject: [PATCH 0267/1087] Avoid SIGBUS on Linux when a DSM memory request overruns tmpfs. On Linux, shared memory segments created with shm_open() are backed by swap files created in tmpfs. If the swap file needs to be extended, but there's no tmpfs space left, you get a very unfriendly SIGBUS trap. To avoid this, force allocation of the full request size when we create the segment. This adds a few cycles, but none that we wouldn't expend later anyway, assuming the request isn't hugely bigger than the actual need. Make this code #ifdef __linux__, because (a) there's not currently a reason to think the same problem exists on other platforms, and (b) applying posix_fallocate() to an FD created by shm_open() isn't very portable anyway. Back-patch to 9.4 where the DSM code came in. 
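For illustration only (not part of the patch), the failure mode described
above reduces to roughly the following fragment (assumes <fcntl.h>,
<sys/mman.h> and <unistd.h>; the segment name is made up and error checks
are omitted):

    int     fd = shm_open("/demo_seg", O_CREAT | O_RDWR, 0600);
    char   *p;

    ftruncate(fd, request_size);    /* sparse tmpfs file: a hole, no pages */
    p = mmap(NULL, request_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    p[request_size - 1] = 0;        /* first touch must allocate a tmpfs page;
                                     * if tmpfs is already full, the process
                                     * gets SIGBUS here rather than ENOSPC */

Calling posix_fallocate() immediately after creating or resizing the segment
forces the allocation to happen up front, where it can fail cleanly with
ENOSPC.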
Thomas Munro, per a bug report from Amul Sul Discussion: https://postgr.es/m/1002664500.12301802.1471008223422.JavaMail.yahoo@mail.yahoo.com --- configure | 2 +- configure.in | 2 +- src/backend/storage/ipc/dsm_impl.c | 54 ++++++++++++++++++++++++++++-- src/include/pg_config.h.in | 3 ++ src/include/pg_config.h.win32 | 3 ++ 5 files changed, 60 insertions(+), 4 deletions(-) diff --git a/configure b/configure index 70d8aa649d..4f3b97c7cf 100755 --- a/configure +++ b/configure @@ -12970,7 +12970,7 @@ fi LIBS_including_readline="$LIBS" LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'` -for ac_func in cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll pstat pthread_is_threaded_np readlink setproctitle setsid shm_open symlink sync_file_range utime utimes wcstombs_l +for ac_func in cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setsid shm_open symlink sync_file_range utime utimes wcstombs_l do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" diff --git a/configure.in b/configure.in index cdf11bf673..fa48369630 100644 --- a/configure.in +++ b/configure.in @@ -1399,7 +1399,7 @@ PGAC_FUNC_WCSTOMBS_L LIBS_including_readline="$LIBS" LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'` -AC_CHECK_FUNCS([cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll pstat pthread_is_threaded_np readlink setproctitle setsid shm_open symlink sync_file_range utime utimes wcstombs_l]) +AC_CHECK_FUNCS([cbrt clock_gettime dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setsid shm_open symlink sync_file_range utime utimes wcstombs_l]) AC_REPLACE_FUNCS(fseeko) case $host_os in diff --git a/src/backend/storage/ipc/dsm_impl.c b/src/backend/storage/ipc/dsm_impl.c index c63780139e..a5879060d0 100644 --- a/src/backend/storage/ipc/dsm_impl.c +++ b/src/backend/storage/ipc/dsm_impl.c @@ -73,6 +73,7 @@ static bool dsm_impl_posix(dsm_op op, dsm_handle handle, Size request_size, void **impl_private, void **mapped_address, Size *mapped_size, int elevel); +static int dsm_impl_posix_resize(int fd, off_t size); #endif #ifdef USE_DSM_SYSV static bool dsm_impl_sysv(dsm_op op, dsm_handle handle, Size request_size, @@ -319,7 +320,8 @@ dsm_impl_posix(dsm_op op, dsm_handle handle, Size request_size, } request_size = st.st_size; } - else if (*mapped_size != request_size && ftruncate(fd, request_size)) + else if (*mapped_size != request_size && + dsm_impl_posix_resize(fd, request_size) != 0) { int save_errno; @@ -392,7 +394,55 @@ dsm_impl_posix(dsm_op op, dsm_handle handle, Size request_size, return true; } -#endif + +/* + * Set the size of a virtual memory region associated with a file descriptor. + * If necessary, also ensure that virtual memory is actually allocated by the + * operating system, to avoid nasty surprises later. + * + * Returns non-zero if either truncation or allocation fails, and sets errno. + */ +static int +dsm_impl_posix_resize(int fd, off_t size) +{ + int rc; + + /* Truncate (or extend) the file to the requested size. */ + rc = ftruncate(fd, size); + + /* + * On Linux, a shm_open fd is backed by a tmpfs file. After resizing with + * ftruncate, the file may contain a hole. 
Accessing memory backed by a + * hole causes tmpfs to allocate pages, which fails with SIGBUS if there + * is no more tmpfs space available. So we ask tmpfs to allocate pages + * here, so we can fail gracefully with ENOSPC now rather than risking + * SIGBUS later. + */ +#if defined(HAVE_POSIX_FALLOCATE) && defined(__linux__) + if (rc == 0) + { + /* We may get interrupted, if so just retry. */ + do + { + rc = posix_fallocate(fd, 0, size); + } while (rc == -1 && errno == EINTR); + + if (rc != 0 && errno == ENOSYS) + { + /* + * Kernel too old (< 2.6.23). Rather than fail, just trust that + * we won't hit the problem (it typically doesn't show up without + * many-GB-sized requests, anyway). + */ + rc = 0; + } + } +#endif /* HAVE_POSIX_FALLOCATE && __linux__ */ + + return rc; +} + +#endif /* USE_DSM_POSIX */ #ifdef USE_DSM_SYSV /* diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index 2a4e9f6050..39286836dc 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -393,6 +393,9 @@ /* Define to 1 if you have the `posix_fadvise' function. */ #undef HAVE_POSIX_FADVISE +/* Define to 1 if you have the `posix_fallocate' function. */ +#undef HAVE_POSIX_FALLOCATE + /* Define to 1 if the assembler supports PPC's LWARX mutex hint bit. */ #undef HAVE_PPC_LWARX_MUTEX_HINT diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index b6808d581b..d90f5a6b5b 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -261,6 +261,9 @@ /* Define to 1 if you have the header file. */ /* #undef HAVE_POLL_H */ +/* Define to 1 if you have the `posix_fallocate' function. */ +/* #undef HAVE_POSIX_FALLOCATE */ + /* Define to 1 if you have the `pstat' function. */ /* #undef HAVE_PSTAT */ From 79a4a665c046af91d4216fe69b535c429039d0d0 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 26 Sep 2017 08:58:06 -0400 Subject: [PATCH 0268/1087] Fix trivial mistake in README. You might think I (Robert) could manage to count to five without messing it up, but if you did, you would be wrong. Amit Kapila Discussion: http://postgr.es/m/CAA4eK1JxqqcuC5Un7YLQVhOYSZBS+t=3xqZuEkt5RyquyuxpwQ@mail.gmail.com --- src/backend/access/hash/README | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/access/hash/README b/src/backend/access/hash/README index 5827389a70..bb90722778 100644 --- a/src/backend/access/hash/README +++ b/src/backend/access/hash/README @@ -434,7 +434,7 @@ concurrent scan could start in that bucket before we've finished vacuuming it. If a scan gets ahead of cleanup, we could have the following problem: (1) the scan sees heap TIDs that are about to be removed before they are processed by VACUUM, (2) the scan decides that one or more of those TIDs are dead, (3) -VACUUM completes, (3) one or more of the TIDs the scan decided were dead are +VACUUM completes, (4) one or more of the TIDs the scan decided were dead are reused for an unrelated tuple, and finally (5) the scan wakes up and erroneously kills the new tuple. From 22c5e73562c53437979efec4c26cd9fff408777c Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 26 Sep 2017 09:16:45 -0400 Subject: [PATCH 0269/1087] Remove lsn from HashScanPosData. This was intended as infrastructure for weakening VACUUM's locking requirements, similar to what was done for btree indexes in commit 2ed5b87f96d473962ec5230fd820abfeaccb2069. However, for hash indexes, it seems that the improvements which are possible are actually extremely marginal. 
Furthermore, performing the LSN cross-check will end up skipping cleanup far more often than is necessary; we only care about page modifications due to a VACUUM, but the LSN check will fail if ANY modification has occurred. So, rather than pressing forward with that "optimization", just rip the LSN field out. Patch by me, reviewed by Ashutosh Sharma and Amit Kapila Discussion: http://postgr.es/m/CAA4eK1JxqqcuC5Un7YLQVhOYSZBS+t=3xqZuEkt5RyquyuxpwQ@mail.gmail.com --- src/backend/access/hash/hashsearch.c | 8 -------- src/backend/access/hash/hashutil.c | 27 +++++++-------------------- src/include/access/hash.h | 2 -- 3 files changed, 7 insertions(+), 30 deletions(-) diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c index ce5515dbcb..81a206eeb7 100644 --- a/src/backend/access/hash/hashsearch.c +++ b/src/backend/access/hash/hashsearch.c @@ -463,12 +463,6 @@ _hash_readpage(IndexScanDesc scan, Buffer *bufP, ScanDirection dir) opaque = (HashPageOpaque) PageGetSpecialPointer(page); so->currPos.buf = buf; - - /* - * We save the LSN of the page as we read it, so that we know whether it - * is safe to apply LP_DEAD hints to the page later. - */ - so->currPos.lsn = PageGetLSN(page); so->currPos.currPage = BufferGetBlockNumber(buf); if (ScanDirectionIsForward(dir)) @@ -508,7 +502,6 @@ _hash_readpage(IndexScanDesc scan, Buffer *bufP, ScanDirection dir) { so->currPos.buf = buf; so->currPos.currPage = BufferGetBlockNumber(buf); - so->currPos.lsn = PageGetLSN(page); } else { @@ -562,7 +555,6 @@ _hash_readpage(IndexScanDesc scan, Buffer *bufP, ScanDirection dir) { so->currPos.buf = buf; so->currPos.currPage = BufferGetBlockNumber(buf); - so->currPos.lsn = PageGetLSN(page); } else { diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c index a825b82706..f2a1c5b6ab 100644 --- a/src/backend/access/hash/hashutil.c +++ b/src/backend/access/hash/hashutil.c @@ -532,12 +532,13 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket, * We match items by heap TID before assuming they are the right ones to * delete. * - * Note that we keep the pin on the bucket page throughout the scan. Hence, - * there is no chance of VACUUM deleting any items from that page. However, - * having pin on the overflow page doesn't guarantee that vacuum won't delete - * any items. - * - * See _bt_killitems() for more details. + * There are never any scans active in a bucket at the time VACUUM begins, + * because VACUUM takes a cleanup lock on the primary bucket page and scans + * hold a pin. A scan can begin after VACUUM leaves the primary bucket page + * but before it finishes the entire bucket, but it can never pass VACUUM, + * because VACUUM always locks the next page before releasing the lock on + * the previous one. Therefore, we don't have to worry about accidentally + * killing a TID that has been reused for an unrelated tuple. */ void _hash_kill_items(IndexScanDesc scan) @@ -579,21 +580,7 @@ _hash_kill_items(IndexScanDesc scan) else buf = _hash_getbuf(rel, blkno, HASH_READ, LH_OVERFLOW_PAGE); - /* - * If page LSN differs it means that the page was modified since the last - * read. killedItems could be not valid so applying LP_DEAD hints is not - * safe. 
- */ page = BufferGetPage(buf); - if (PageGetLSN(page) != so->currPos.lsn) - { - if (havePin) - LockBuffer(buf, BUFFER_LOCK_UNLOCK); - else - _hash_relbuf(rel, buf); - return; - } - opaque = (HashPageOpaque) PageGetSpecialPointer(page); maxoff = PageGetMaxOffsetNumber(page); diff --git a/src/include/access/hash.h b/src/include/access/hash.h index 0e0f3e17a7..e3135c1738 100644 --- a/src/include/access/hash.h +++ b/src/include/access/hash.h @@ -117,7 +117,6 @@ typedef struct HashScanPosItem /* what we remember about each match */ typedef struct HashScanPosData { Buffer buf; /* if valid, the buffer is pinned */ - XLogRecPtr lsn; /* pos in the WAL stream when page was read */ BlockNumber currPage; /* current hash index page */ BlockNumber nextPage; /* next overflow page */ BlockNumber prevPage; /* prev overflow or bucket page */ @@ -153,7 +152,6 @@ typedef struct HashScanPosData #define HashScanPosInvalidate(scanpos) \ do { \ (scanpos).buf = InvalidBuffer; \ - (scanpos).lsn = InvalidXLogRecPtr; \ (scanpos).currPage = InvalidBlockNumber; \ (scanpos).nextPage = InvalidBlockNumber; \ (scanpos).prevPage = InvalidBlockNumber; \ From ab28feae2bd3d4629bd73ae3548e671c57d785f0 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 26 Sep 2017 10:03:56 -0400 Subject: [PATCH 0270/1087] Handle heap rewrites better in logical replication A FOR ALL TABLES publication naturally considers all base tables to be a candidate for replication. This includes transient heaps that are created during a table rewrite during DDL. This causes failures on the subscriber side because it will not have a table like pg_temp_16386 to receive data (and if it did, it would be the wrong table). The prevent this problem, we filter out any tables that match this naming pattern and match an actual table from FOR ALL TABLES publications. This is only a heuristic, meaning that user tables that match that naming could accidentally be omitted. A more robust solution might require an explicit marking of such tables in pg_class somehow. Reported-by: yxq Bug: #14785 Reviewed-by: Andres Freund Reviewed-by: Petr Jelinek --- src/backend/replication/pgoutput/pgoutput.c | 26 ++++++++ src/test/subscription/t/006_rewrite.pl | 73 +++++++++++++++++++++ 2 files changed, 99 insertions(+) create mode 100644 src/test/subscription/t/006_rewrite.pl diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c index 9ab954a6e0..c3126545b4 100644 --- a/src/backend/replication/pgoutput/pgoutput.c +++ b/src/backend/replication/pgoutput/pgoutput.c @@ -21,6 +21,7 @@ #include "utils/inval.h" #include "utils/int8.h" +#include "utils/lsyscache.h" #include "utils/memutils.h" #include "utils/syscache.h" #include "utils/varlena.h" @@ -509,6 +510,31 @@ get_rel_sync_entry(PGOutputData *data, Oid relid) { Publication *pub = lfirst(lc); + /* + * Skip tables that look like they are from a heap rewrite (see + * make_new_heap()). We need to skip them because the subscriber + * won't have a table by that name to receive the data. That + * means we won't ship the new data in, say, an added column with + * a DEFAULT, but if the user applies the same DDL manually on the + * subscriber, then this will work out for them. + * + * We only need to consider the alltables case, because such a + * transient heap won't be an explicit member of a publication. 
+ */ + if (pub->alltables) + { + char *relname = get_rel_name(relid); + unsigned int u; + int n; + + if (sscanf(relname, "pg_temp_%u%n", &u, &n) == 1 && + relname[n] == '\0') + { + if (get_rel_relkind(u) == RELKIND_RELATION) + break; + } + } + if (pub->alltables || list_member_oid(pubids, pub->oid)) { entry->pubactions.pubinsert |= pub->pubactions.pubinsert; diff --git a/src/test/subscription/t/006_rewrite.pl b/src/test/subscription/t/006_rewrite.pl new file mode 100644 index 0000000000..5e3211aefa --- /dev/null +++ b/src/test/subscription/t/006_rewrite.pl @@ -0,0 +1,73 @@ +# Test logical replication behavior with heap rewrites +use strict; +use warnings; +use PostgresNode; +use TestLib; +use Test::More tests => 2; + +sub wait_for_caught_up +{ + my ($node, $appname) = @_; + + $node->poll_query_until('postgres', +"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';" + ) or die "Timed out while waiting for subscriber to catch up"; +} + +my $node_publisher = get_new_node('publisher'); +$node_publisher->init(allows_streaming => 'logical'); +$node_publisher->start; + +my $node_subscriber = get_new_node('subscriber'); +$node_subscriber->init(allows_streaming => 'logical'); +$node_subscriber->start; + +my $ddl = "CREATE TABLE test1 (a int, b text);"; +$node_publisher->safe_psql('postgres', $ddl); +$node_subscriber->safe_psql('postgres', $ddl); + +my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres'; +my $appname = 'encoding_test'; + +$node_publisher->safe_psql('postgres', + "CREATE PUBLICATION mypub FOR ALL TABLES;"); +$node_subscriber->safe_psql('postgres', +"CREATE SUBSCRIPTION mysub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION mypub;" +); + +wait_for_caught_up($node_publisher, $appname); + +# Wait for initial sync to finish as well +my $synced_query = + "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');"; +$node_subscriber->poll_query_until('postgres', $synced_query) + or die "Timed out while waiting for subscriber to synchronize data"; + +$node_publisher->safe_psql('postgres', q{INSERT INTO test1 (a, b) VALUES (1, 'one'), (2, 'two');}); + +wait_for_caught_up($node_publisher, $appname); + +is($node_subscriber->safe_psql('postgres', q{SELECT a, b FROM test1}), + qq(1|one +2|two), + 'initial data replicated to subscriber'); + +# DDL that causes a heap rewrite +my $ddl2 = "ALTER TABLE test1 ADD c int NOT NULL DEFAULT 0;"; +$node_subscriber->safe_psql('postgres', $ddl2); +$node_publisher->safe_psql('postgres', $ddl2); + +wait_for_caught_up($node_publisher, $appname); + +$node_publisher->safe_psql('postgres', q{INSERT INTO test1 (a, b, c) VALUES (3, 'three', 33);}); + +wait_for_caught_up($node_publisher, $appname); + +is($node_subscriber->safe_psql('postgres', q{SELECT a, b, c FROM test1}), + qq(1|one|0 +2|two|0 +3|three|33), + 'data replicated to subscriber'); + +$node_subscriber->stop; +$node_publisher->stop; From 15a8010ed691f190aad19c0a205f4a17868591e9 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 26 Sep 2017 11:58:22 -0400 Subject: [PATCH 0271/1087] Sort pg_basebackup options better The --slot option somehow ended up under options controlling the output, and some other options were in a nonsensical place or were not moved after recent renamings, so tidy all that up a bit. 
--- doc/src/sgml/ref/pg_basebackup.sgml | 100 +++++++++++++------------- src/bin/pg_basebackup/pg_basebackup.c | 6 +- 2 files changed, 53 insertions(+), 53 deletions(-) diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml index 2454d35af3..b7aa128f7f 100644 --- a/doc/src/sgml/ref/pg_basebackup.sgml +++ b/doc/src/sgml/ref/pg_basebackup.sgml @@ -226,48 +226,6 @@ PostgreSQL documentation - - - - - - This option can only be used together with -X - stream. It causes the WAL streaming to use the specified - replication slot. If the base backup is intended to be used as a - streaming replication standby using replication slots, it should then - use the same replication slot name - in recovery.conf. That way, it is ensured that - the server does not remove any necessary WAL data in the time between - the end of the base backup and the start of streaming replication. - - - If this option is not specified and the server supports temporary - replication slots (version 10 and later), then a temporary replication - slot is automatically used for WAL streaming. - - - - - - - - - This option prevents the creation of a temporary replication slot - during the backup even if it's supported by the server. - - - Temporary replication slots are created by default if no slot name - is given with the option when using log streaming. - - - The main purpose of this option is to allow taking a base backup when - the server is out of free replication slots. Using replication slots - is almost always preferred, because it prevents needed WAL from being - removed by the server during the backup. - - - - @@ -453,6 +411,21 @@ PostgreSQL documentation + + + + + + By default, pg_basebackup will wait for all files + to be written safely to disk. This option causes + pg_basebackup to return without waiting, which is + faster, but means that a subsequent operating system crash can leave + the base backup corrupt. Generally, this option is useful for testing + but should not be used when creating a production installation. + + + + @@ -476,16 +449,43 @@ PostgreSQL documentation - - + + - By default, pg_basebackup will wait for all files - to be written safely to disk. This option causes - pg_basebackup to return without waiting, which is - faster, but means that a subsequent operating system crash can leave - the base backup corrupt. Generally, this option is useful for testing - but should not be used when creating a production installation. + This option can only be used together with -X + stream. It causes the WAL streaming to use the specified + replication slot. If the base backup is intended to be used as a + streaming replication standby using replication slots, it should then + use the same replication slot name + in recovery.conf. That way, it is ensured that + the server does not remove any necessary WAL data in the time between + the end of the base backup and the start of streaming replication. + + + If this option is not specified and the server supports temporary + replication slots (version 10 and later), then a temporary replication + slot is automatically used for WAL streaming. + + + + + + + + + This option prevents the creation of a temporary replication slot + during the backup even if it's supported by the server. + + + Temporary replication slots are created by default if no slot name + is given with the option when using log streaming. + + + The main purpose of this option is to allow taking a base backup when + the server is out of free replication slots. 
Using replication slots + is almost always preferred, because it prevents needed WAL from being + removed by the server during the backup. diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c index 2d039d5a33..537978090e 100644 --- a/src/bin/pg_basebackup/pg_basebackup.c +++ b/src/bin/pg_basebackup/pg_basebackup.c @@ -336,13 +336,11 @@ usage(void) " (in kB/s, or use suffix \"k\" or \"M\")\n")); printf(_(" -R, --write-recovery-conf\n" " write recovery.conf for replication\n")); - printf(_(" -S, --slot=SLOTNAME replication slot to use\n")); - printf(_(" --no-slot prevent creation of temporary replication slot\n")); printf(_(" -T, --tablespace-mapping=OLDDIR=NEWDIR\n" " relocate tablespace in OLDDIR to NEWDIR\n")); + printf(_(" --waldir=WALDIR location for the write-ahead log directory\n")); printf(_(" -X, --wal-method=none|fetch|stream\n" " include required WAL files with specified method\n")); - printf(_(" --waldir=WALDIR location for the write-ahead log directory\n")); printf(_(" -z, --gzip compress tar output\n")); printf(_(" -Z, --compress=0-9 compress tar output with given compression level\n")); printf(_("\nGeneral options:\n")); @@ -352,6 +350,8 @@ usage(void) printf(_(" -n, --no-clean do not clean up after errors\n")); printf(_(" -N, --no-sync do not wait for changes to be written safely to disk\n")); printf(_(" -P, --progress show progress information\n")); + printf(_(" -S, --slot=SLOTNAME replication slot to use\n")); + printf(_(" --no-slot prevent creation of temporary replication slot\n")); printf(_(" -v, --verbose output verbose messages\n")); printf(_(" -V, --version output version information, then exit\n")); printf(_(" -?, --help show this help, then exit\n")); From 1635e80d30b16df98aebead12f2b82f17efd9bc8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 26 Sep 2017 13:12:03 -0400 Subject: [PATCH 0272/1087] Use a blacklist to distinguish original from add-on enum values. Commit 15bc038f9 allowed ALTER TYPE ADD VALUE to be executed inside transaction blocks, by disallowing the use of the added value later in the same transaction, except under limited circumstances. However, the test for "limited circumstances" was heuristic and could reject references to enum values that were created during CREATE TYPE AS ENUM, not just later. This breaks the use-case of restoring pg_dump scripts in a single transaction, as reported in bug #14825 from Balazs Szilfai. We can improve this by keeping a "blacklist" table of enum value OIDs created by ALTER TYPE ADD VALUE during the current transaction. Any visible-but-uncommitted value whose OID is not in the blacklist must have been created by CREATE TYPE AS ENUM, and can be used safely because it could not have a lifespan shorter than its parent enum type. This change also removes the restriction that a renamed enum value can't be used before being committed (unless it was on the blacklist). Andrew Dunstan, with cosmetic improvements by me. Back-patch to v10. 
Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org --- doc/src/sgml/ref/alter_type.sgml | 2 - src/backend/access/transam/xact.c | 4 ++ src/backend/catalog/pg_enum.c | 64 ++++++++++++++++++++++++++++++ src/backend/utils/adt/enum.c | 9 +++++ src/include/catalog/pg_enum.h | 2 + src/test/regress/expected/enum.out | 21 ++++++++++ src/test/regress/sql/enum.sql | 13 ++++++ 7 files changed, 113 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml index d65f70f674..446e08b175 100644 --- a/doc/src/sgml/ref/alter_type.sgml +++ b/doc/src/sgml/ref/alter_type.sgml @@ -294,8 +294,6 @@ ALTER TYPE name RENAME VALUE diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index 93dca7a72a..52408fc6b0 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -32,6 +32,7 @@ #include "access/xlogutils.h" #include "catalog/catalog.h" #include "catalog/namespace.h" +#include "catalog/pg_enum.h" #include "catalog/storage.h" #include "commands/async.h" #include "commands/tablecmds.h" @@ -2128,6 +2129,7 @@ CommitTransaction(void) AtCommit_Notify(); AtEOXact_GUC(true, 1); AtEOXact_SPI(true); + AtEOXact_Enum(); AtEOXact_on_commit_actions(true); AtEOXact_Namespace(true, is_parallel_worker); AtEOXact_SMgr(); @@ -2406,6 +2408,7 @@ PrepareTransaction(void) /* PREPARE acts the same as COMMIT as far as GUC is concerned */ AtEOXact_GUC(true, 1); AtEOXact_SPI(true); + AtEOXact_Enum(); AtEOXact_on_commit_actions(true); AtEOXact_Namespace(true, false); AtEOXact_SMgr(); @@ -2608,6 +2611,7 @@ AbortTransaction(void) AtEOXact_GUC(false, 1); AtEOXact_SPI(false); + AtEOXact_Enum(); AtEOXact_on_commit_actions(false); AtEOXact_Namespace(false, is_parallel_worker); AtEOXact_SMgr(); diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c index fe61d4dacc..0f7b36e11d 100644 --- a/src/backend/catalog/pg_enum.c +++ b/src/backend/catalog/pg_enum.c @@ -28,6 +28,8 @@ #include "utils/builtins.h" #include "utils/catcache.h" #include "utils/fmgroids.h" +#include "utils/hsearch.h" +#include "utils/memutils.h" #include "utils/syscache.h" #include "utils/tqual.h" @@ -35,6 +37,17 @@ /* Potentially set by pg_upgrade_support functions */ Oid binary_upgrade_next_pg_enum_oid = InvalidOid; +/* + * Hash table of enum value OIDs created during the current transaction by + * AddEnumLabel. We disallow using these values until the transaction is + * committed; otherwise, they might get into indexes where we can't clean + * them up, and then if the transaction rolls back we have a broken index. + * (See comments for check_safe_enum_use() in enum.c.) Values created by + * EnumValuesCreate are *not* blacklisted; we assume those are created during + * CREATE TYPE, so they can't go away unless the enum type itself does. 
+ */ +static HTAB *enum_blacklist = NULL; + static void RenumberEnumType(Relation pg_enum, HeapTuple *existing, int nelems); static int sort_order_cmp(const void *p1, const void *p2); @@ -460,6 +473,24 @@ AddEnumLabel(Oid enumTypeOid, heap_freetuple(enum_tup); heap_close(pg_enum, RowExclusiveLock); + + /* Set up the blacklist hash if not already done in this transaction */ + if (enum_blacklist == NULL) + { + HASHCTL hash_ctl; + + memset(&hash_ctl, 0, sizeof(hash_ctl)); + hash_ctl.keysize = sizeof(Oid); + hash_ctl.entrysize = sizeof(Oid); + hash_ctl.hcxt = TopTransactionContext; + enum_blacklist = hash_create("Enum value blacklist", + 32, + &hash_ctl, + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); + } + + /* Add the new value to the blacklist */ + (void) hash_search(enum_blacklist, &newOid, HASH_ENTER, NULL); } @@ -547,6 +578,39 @@ RenameEnumLabel(Oid enumTypeOid, } +/* + * Test if the given enum value is on the blacklist + */ +bool +EnumBlacklisted(Oid enum_id) +{ + bool found; + + /* If we've made no blacklist table, all values are safe */ + if (enum_blacklist == NULL) + return false; + + /* Else, is it in the table? */ + (void) hash_search(enum_blacklist, &enum_id, HASH_FIND, &found); + return found; +} + + +/* + * Clean up enum stuff after end of top-level transaction. + */ +void +AtEOXact_Enum(void) +{ + /* + * Reset the blacklist table, as all our enum values are now committed. + * The memory will go away automatically when TopTransactionContext is + * freed; it's sufficient to clear our pointer. + */ + enum_blacklist = NULL; +} + + /* * RenumberEnumType * Renumber existing enum elements to have sort positions 1..n. diff --git a/src/backend/utils/adt/enum.c b/src/backend/utils/adt/enum.c index 973397cc85..401e7299fa 100644 --- a/src/backend/utils/adt/enum.c +++ b/src/backend/utils/adt/enum.c @@ -76,6 +76,15 @@ check_safe_enum_use(HeapTuple enumval_tup) TransactionIdDidCommit(xmin)) return; + /* + * Check if the enum value is blacklisted. If not, it's safe, because it + * was made during CREATE TYPE AS ENUM and can't be shorter-lived than its + * owning type. (This'd also be false for values made by other + * transactions; but the previous tests should have handled all of those.) + */ + if (!EnumBlacklisted(HeapTupleGetOid(enumval_tup))) + return; + /* It is a new enum value, so check to see if the whole enum is new */ en = (Form_pg_enum) GETSTRUCT(enumval_tup); enumtyp_tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(en->enumtypid)); diff --git a/src/include/catalog/pg_enum.h b/src/include/catalog/pg_enum.h index 5938ba5cac..dff3d2f481 100644 --- a/src/include/catalog/pg_enum.h +++ b/src/include/catalog/pg_enum.h @@ -69,5 +69,7 @@ extern void AddEnumLabel(Oid enumTypeOid, const char *newVal, bool skipIfExists); extern void RenameEnumLabel(Oid enumTypeOid, const char *oldVal, const char *newVal); +extern bool EnumBlacklisted(Oid enum_id); +extern void AtEOXact_Enum(void); #endif /* PG_ENUM_H */ diff --git a/src/test/regress/expected/enum.out b/src/test/regress/expected/enum.out index 0e6030443f..6bbe488736 100644 --- a/src/test/regress/expected/enum.out +++ b/src/test/regress/expected/enum.out @@ -633,8 +633,29 @@ ERROR: unsafe use of new value "bad" of enum type bogon LINE 1: SELECT 'bad'::bogon; ^ HINT: New enum values must be committed before they can be used. 
+ROLLBACK; +-- but a renamed value is safe to use later in same transaction +BEGIN; +ALTER TYPE bogus RENAME VALUE 'good' to 'bad'; +SELECT 'bad'::bogus; + bogus +------- + bad +(1 row) + ROLLBACK; DROP TYPE bogus; +-- check that values created during CREATE TYPE can be used in any case +BEGIN; +CREATE TYPE bogus AS ENUM('good','bad','ugly'); +ALTER TYPE bogus RENAME TO bogon; +select enum_range(null::bogon); + enum_range +----------------- + {good,bad,ugly} +(1 row) + +ROLLBACK; -- check that we can add new values to existing enums in a transaction -- and use them, if the type is new as well BEGIN; diff --git a/src/test/regress/sql/enum.sql b/src/test/regress/sql/enum.sql index d7e87143a0..eb464a72c5 100644 --- a/src/test/regress/sql/enum.sql +++ b/src/test/regress/sql/enum.sql @@ -300,8 +300,21 @@ ALTER TYPE bogon ADD VALUE 'bad'; SELECT 'bad'::bogon; ROLLBACK; +-- but a renamed value is safe to use later in same transaction +BEGIN; +ALTER TYPE bogus RENAME VALUE 'good' to 'bad'; +SELECT 'bad'::bogus; +ROLLBACK; + DROP TYPE bogus; +-- check that values created during CREATE TYPE can be used in any case +BEGIN; +CREATE TYPE bogus AS ENUM('good','bad','ugly'); +ALTER TYPE bogus RENAME TO bogon; +select enum_range(null::bogon); +ROLLBACK; + -- check that we can add new values to existing enums in a transaction -- and use them, if the type is new as well BEGIN; From 984c92074d84a81dc17e9865fc79e264eb50ad61 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 26 Sep 2017 13:12:13 -0400 Subject: [PATCH 0273/1087] Remove heuristic same-transaction test from check_safe_enum_use(). The blacklist mechanism added by the preceding commit directly fixes most of the practical cases that the same-transaction test was meant to cover. What remains is use-cases like begin; create type e as enum('x'); alter type e add value 'y'; -- use 'y' somehow commit; However, because the same-transaction test is heuristic, it fails on small variants of that, such as renaming the type or changing its owner. Rather than try to explain the behavior to users, let's remove it and just have a rule that the newly added value can't be used before being committed, full stop. Perhaps later it will be worth the implementation effort and overhead to have a more accurate test for type-was-created-in-this-transaction. We'll wait for some field experience with v10 before deciding to do that. Back-patch to v10. Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org --- doc/src/sgml/ref/alter_type.sgml | 3 +-- src/backend/utils/adt/enum.c | 41 +++++++----------------------- src/test/regress/expected/enum.out | 18 ++++++------- src/test/regress/sql/enum.sql | 11 ++++---- 4 files changed, 24 insertions(+), 49 deletions(-) diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml index 446e08b175..7e2258e1e3 100644 --- a/doc/src/sgml/ref/alter_type.sgml +++ b/doc/src/sgml/ref/alter_type.sgml @@ -292,8 +292,7 @@ ALTER TYPE name RENAME VALUE If ALTER TYPE ... ADD VALUE (the form that adds a new value to an enum type) is executed inside a transaction block, the new value cannot - be used until after the transaction has been committed, except in the case - that the enum type itself was created earlier in the same transaction. + be used until after the transaction has been committed. 
diff --git a/src/backend/utils/adt/enum.c b/src/backend/utils/adt/enum.c index 401e7299fa..c0124f497e 100644 --- a/src/backend/utils/adt/enum.c +++ b/src/backend/utils/adt/enum.c @@ -48,7 +48,14 @@ static ArrayType *enum_range_internal(Oid enumtypoid, Oid lower, Oid upper); * However, it's okay to allow use of uncommitted values belonging to enum * types that were themselves created in the same transaction, because then * any such index would also be new and would go away altogether on rollback. - * (This case is required by pg_upgrade.) + * We don't implement that fully right now, but we do allow free use of enum + * values created during CREATE TYPE AS ENUM, which are surely of the same + * lifespan as the enum type. (This case is required by "pg_restore -1".) + * Values added by ALTER TYPE ADD VALUE are currently restricted, but could + * be allowed if the enum type could be proven to have been created earlier + * in the same transaction. (Note that comparing tuple xmins would not work + * for that, because the type tuple might have been updated in the current + * transaction. Subtransactions also create hazards to be accounted for.) * * This function needs to be called (directly or indirectly) in any of the * functions below that could return an enum value to SQL operations. @@ -58,7 +65,6 @@ check_safe_enum_use(HeapTuple enumval_tup) { TransactionId xmin; Form_pg_enum en; - HeapTuple enumtyp_tup; /* * If the row is hinted as committed, it's surely safe. This provides a @@ -85,40 +91,11 @@ check_safe_enum_use(HeapTuple enumval_tup) if (!EnumBlacklisted(HeapTupleGetOid(enumval_tup))) return; - /* It is a new enum value, so check to see if the whole enum is new */ - en = (Form_pg_enum) GETSTRUCT(enumval_tup); - enumtyp_tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(en->enumtypid)); - if (!HeapTupleIsValid(enumtyp_tup)) - elog(ERROR, "cache lookup failed for type %u", en->enumtypid); - - /* - * We insist that the type have been created in the same (sub)transaction - * as the enum value. It would be safe to allow the type's originating - * xact to be a subcommitted child of the enum value's xact, but not vice - * versa (since we might now be in a subxact of the type's originating - * xact, which could roll back along with the enum value's subxact). The - * former case seems a sufficiently weird usage pattern as to not be worth - * spending code for, so we're left with a simple equality check. - * - * We also insist that the type's pg_type row not be HEAP_UPDATED. If it - * is, we can't tell whether the row was created or only modified in the - * apparent originating xact, so it might be older than that xact. (We do - * not worry whether the enum value is HEAP_UPDATED; if it is, we might - * think it's too new and throw an unnecessary error, but we won't allow - * an unsafe case.) - */ - if (xmin == HeapTupleHeaderGetXmin(enumtyp_tup->t_data) && - !(enumtyp_tup->t_data->t_infomask & HEAP_UPDATED)) - { - /* same (sub)transaction, so safe */ - ReleaseSysCache(enumtyp_tup); - return; - } - /* * There might well be other tests we could do here to narrow down the * unsafe conditions, but for now just raise an exception. 
*/ + en = (Form_pg_enum) GETSTRUCT(enumval_tup); ereport(ERROR, (errcode(ERRCODE_UNSAFE_NEW_ENUM_VALUE_USAGE), errmsg("unsafe use of new value \"%s\" of enum type %s", diff --git a/src/test/regress/expected/enum.out b/src/test/regress/expected/enum.out index 6bbe488736..4f839ce027 100644 --- a/src/test/regress/expected/enum.out +++ b/src/test/regress/expected/enum.out @@ -656,18 +656,16 @@ select enum_range(null::bogon); (1 row) ROLLBACK; --- check that we can add new values to existing enums in a transaction --- and use them, if the type is new as well +-- ideally, we'd allow this usage; but it requires keeping track of whether +-- the enum type was created in the current transaction, which is expensive BEGIN; CREATE TYPE bogus AS ENUM('good'); -ALTER TYPE bogus ADD VALUE 'bad'; -ALTER TYPE bogus ADD VALUE 'ugly'; -SELECT enum_range(null::bogus); - enum_range ------------------ - {good,bad,ugly} -(1 row) - +ALTER TYPE bogus RENAME TO bogon; +ALTER TYPE bogon ADD VALUE 'bad'; +ALTER TYPE bogon ADD VALUE 'ugly'; +select enum_range(null::bogon); -- fails +ERROR: unsafe use of new value "bad" of enum type bogon +HINT: New enum values must be committed before they can be used. ROLLBACK; -- -- Cleanup diff --git a/src/test/regress/sql/enum.sql b/src/test/regress/sql/enum.sql index eb464a72c5..6affd0d1eb 100644 --- a/src/test/regress/sql/enum.sql +++ b/src/test/regress/sql/enum.sql @@ -315,13 +315,14 @@ ALTER TYPE bogus RENAME TO bogon; select enum_range(null::bogon); ROLLBACK; --- check that we can add new values to existing enums in a transaction --- and use them, if the type is new as well +-- ideally, we'd allow this usage; but it requires keeping track of whether +-- the enum type was created in the current transaction, which is expensive BEGIN; CREATE TYPE bogus AS ENUM('good'); -ALTER TYPE bogus ADD VALUE 'bad'; -ALTER TYPE bogus ADD VALUE 'ugly'; -SELECT enum_range(null::bogus); +ALTER TYPE bogus RENAME TO bogon; +ALTER TYPE bogon ADD VALUE 'bad'; +ALTER TYPE bogon ADD VALUE 'ugly'; +select enum_range(null::bogon); -- fails ROLLBACK; -- From 5ea96efaa0100e96b6b927eb1e67869143e1db4e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 26 Sep 2017 13:42:53 -0400 Subject: [PATCH 0274/1087] Fix failure-to-read-man-page in commit 899bd785c. posix_fallocate() is not quite a drop-in replacement for fallocate(), because it is defined to return the error code as its function result, not in "errno". I (tgl) missed this because RHEL6's version seems to set errno as well. That is not the case on more modern Linuxen, though, as per buildfarm results. Aside from fixing the return-convention confusion, remove the test for ENOSYS; we expect that glibc will mask that for posix_fallocate, though it does not for fallocate. Keep the test for EINTR, because POSIX specifies that as a possible result, and buildfarm results suggest that it can happen in practice. Back-patch to 9.4, like the previous commit. 
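For reference (not part of the patch), the contrast between the two
conventions, wrapped in a hypothetical helper (assumes <errno.h> and
<fcntl.h>):

    static int
    reserve_space(int fd, off_t size)
    {
        int     rc;

        /*
         * posix_fallocate() returns the error number as its function
         * result and leaves errno alone -- unlike Linux fallocate(),
         * which returns -1 and sets errno.  So the EINTR retry must
         * test the return value, not errno.
         */
        do
            rc = posix_fallocate(fd, 0, size);
        while (rc == EINTR);

        errno = rc;             /* mimic the errno convention for callers */
        return rc;
    }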
Thomas Munro Discussion: https://postgr.es/m/1002664500.12301802.1471008223422.JavaMail.yahoo@mail.yahoo.com --- src/backend/storage/ipc/dsm_impl.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/src/backend/storage/ipc/dsm_impl.c b/src/backend/storage/ipc/dsm_impl.c index a5879060d0..b18bea64c6 100644 --- a/src/backend/storage/ipc/dsm_impl.c +++ b/src/backend/storage/ipc/dsm_impl.c @@ -425,17 +425,14 @@ dsm_impl_posix_resize(int fd, off_t size) do { rc = posix_fallocate(fd, 0, size); - } while (rc == -1 && errno == EINTR); + } while (rc == EINTR); - if (rc != 0 && errno == ENOSYS) - { - /* - * Kernel too old (< 2.6.23). Rather than fail, just trust that - * we won't hit the problem (it typically doesn't show up without - * many-GB-sized requests, anyway). - */ - rc = 0; - } + /* + * The caller expects errno to be set, but posix_fallocate() doesn't + * set it. Instead it returns error numbers directly. So set errno, + * even though we'll also return rc to indicate success or failure. + */ + errno = rc; } #endif /* HAVE_POSIX_FALLOCATE && __linux__ */ From 9a50a93c7b1427f6182ed1f21ed76da5b1d6a57c Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 26 Sep 2017 15:25:56 -0400 Subject: [PATCH 0275/1087] Improve wording of error message added in commit 714805010. Per suggestions from Peter Eisentraut and David Johnston. Back-patch, like the previous commit. Discussion: https://postgr.es/m/E1dv9jI-0006oT-Fn@gemulon.postgresql.org --- src/backend/commands/analyze.c | 2 +- src/test/regress/expected/vacuum.out | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c index 1248b2ee5c..283845cf2a 100644 --- a/src/backend/commands/analyze.c +++ b/src/backend/commands/analyze.c @@ -396,7 +396,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params, if (bms_is_member(i, unique_cols)) ereport(ERROR, (errcode(ERRCODE_DUPLICATE_COLUMN), - errmsg("column \"%s\" of relation \"%s\" is specified twice", + errmsg("column \"%s\" of relation \"%s\" appears more than once", col, RelationGetRelationName(onerel)))); unique_cols = bms_add_member(unique_cols, i); diff --git a/src/test/regress/expected/vacuum.out b/src/test/regress/expected/vacuum.out index a16f26da7e..ced53ca9aa 100644 --- a/src/test/regress/expected/vacuum.out +++ b/src/test/regress/expected/vacuum.out @@ -92,7 +92,7 @@ VACUUM (FULL) vacparted; VACUUM (FREEZE) vacparted; -- check behavior with duplicate column mentions VACUUM ANALYZE vacparted(a,b,a); -ERROR: column "a" of relation "vacparted" is specified twice +ERROR: column "a" of relation "vacparted" appears more than once ANALYZE vacparted(a,b,b); -ERROR: column "b" of relation "vacparted" is specified twice +ERROR: column "b" of relation "vacparted" appears more than once DROP TABLE vacparted; From 43588f58aa045c736af168267e0f1c5934333e15 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 26 Sep 2017 15:00:05 -0400 Subject: [PATCH 0276/1087] Turn on log_replication_commands in PostgresNode This is useful for example for the pg_basebackup and related tests. 
--- src/test/perl/PostgresNode.pm | 1 + 1 file changed, 1 insertion(+) diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm index edcac6fb9f..b44f70d27c 100644 --- a/src/test/perl/PostgresNode.pm +++ b/src/test/perl/PostgresNode.pm @@ -419,6 +419,7 @@ sub init print $conf "restart_after_crash = off\n"; print $conf "log_line_prefix = '%m [%p] %q%a '\n"; print $conf "log_statement = all\n"; + print $conf "log_replication_commands = on\n"; print $conf "wal_retrieve_retry_interval = '500ms'\n"; print $conf "port = $port\n"; From fa41461205ae4eb417045825583c3209e5a4f339 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 26 Sep 2017 16:41:20 -0400 Subject: [PATCH 0277/1087] Add some more pg_receivewal tests Add some more tests for the --create-slot and --drop-slot options, verifying that the right kind of slot was created and that the slot was dropped. While working on an unrelated patch for pg_basebackup, some of this was temporarily broken without any tests noticing. --- src/bin/pg_basebackup/t/020_pg_receivewal.pl | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/src/bin/pg_basebackup/t/020_pg_receivewal.pl b/src/bin/pg_basebackup/t/020_pg_receivewal.pl index 101a36466d..f9f7bf75ab 100644 --- a/src/bin/pg_basebackup/t/020_pg_receivewal.pl +++ b/src/bin/pg_basebackup/t/020_pg_receivewal.pl @@ -2,7 +2,7 @@ use warnings; use TestLib; use PostgresNode; -use Test::More tests => 14; +use Test::More tests => 17; program_help_ok('pg_receivewal'); program_version_ok('pg_receivewal'); @@ -30,8 +30,12 @@ $primary->command_ok( [ 'pg_receivewal', '--slot', $slot_name, '--create-slot' ], 'creating a replication slot'); +my $slot = $primary->slot($slot_name); +is($slot->{'slot_type'}, 'physical', 'physical replication slot was created'); +is($slot->{'restart_lsn'}, '', 'restart LSN of new slot is null'); $primary->command_ok([ 'pg_receivewal', '--slot', $slot_name, '--drop-slot' ], 'dropping a replication slot'); +is($primary->slot($slot_name)->{'slot_type'}, '', 'replication slot was removed'); # Generate some WAL. Use --synchronous at the same time to add more # code coverage. Switch to the next segment first so that subsequent From 59597e6485847ae40eab2e80ff04af3e8663f2d8 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Tue, 26 Sep 2017 22:39:44 -0700 Subject: [PATCH 0278/1087] Don't recommend "DROP SCHEMA information_schema CASCADE". It drops objects outside information_schema that depend on objects inside information_schema. For example, it will drop a user-defined view if the view query refers to information_schema. 
Discussion: https://postgr.es/m/20170831025345.GE3963697@rfd.leadboat.com --- doc/src/sgml/release-9.2.sgml | 44 +++++++++++++++++++++++++++++------ doc/src/sgml/release-9.3.sgml | 44 +++++++++++++++++++++++++++++------ doc/src/sgml/release-9.4.sgml | 44 +++++++++++++++++++++++++++++------ doc/src/sgml/release-9.5.sgml | 44 +++++++++++++++++++++++++++++------ doc/src/sgml/release-9.6.sgml | 44 +++++++++++++++++++++++++++++------ 5 files changed, 185 insertions(+), 35 deletions(-) diff --git a/doc/src/sgml/release-9.2.sgml b/doc/src/sgml/release-9.2.sgml index faa7ae4d57..6fa21e3759 100644 --- a/doc/src/sgml/release-9.2.sgml +++ b/doc/src/sgml/release-9.2.sgml @@ -58,14 +58,44 @@ in an existing installation, you can, as a superuser, do this in psql: -BEGIN; -DROP SCHEMA information_schema CASCADE; -\i SHAREDIR/information_schema.sql -COMMIT; +SET search_path TO information_schema; +CREATE OR REPLACE VIEW table_privileges AS + SELECT CAST(u_grantor.rolname AS sql_identifier) AS grantor, + CAST(grantee.rolname AS sql_identifier) AS grantee, + CAST(current_database() AS sql_identifier) AS table_catalog, + CAST(nc.nspname AS sql_identifier) AS table_schema, + CAST(c.relname AS sql_identifier) AS table_name, + CAST(c.prtype AS character_data) AS privilege_type, + CAST( + CASE WHEN + -- object owner always has grant options + pg_has_role(grantee.oid, c.relowner, 'USAGE') + OR c.grantable + THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_grantable, + CAST(CASE WHEN c.prtype = 'SELECT' THEN 'YES' ELSE 'NO' END AS yes_or_no) AS with_hierarchy + + FROM ( + SELECT oid, relname, relnamespace, relkind, relowner, (aclexplode(coalesce(relacl, acldefault('r', relowner)))).* FROM pg_class + ) AS c (oid, relname, relnamespace, relkind, relowner, grantor, grantee, prtype, grantable), + pg_namespace nc, + pg_authid u_grantor, + ( + SELECT oid, rolname FROM pg_authid + UNION ALL + SELECT 0::oid, 'PUBLIC' + ) AS grantee (oid, rolname) + + WHERE c.relnamespace = nc.oid + AND c.relkind IN ('r', 'v', 'f') + AND c.grantee = grantee.oid + AND c.grantor = u_grantor.oid + AND c.prtype IN ('INSERT', 'SELECT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER') + AND (pg_has_role(u_grantor.oid, 'USAGE') + OR pg_has_role(grantee.oid, 'USAGE') + OR grantee.rolname = 'PUBLIC'); - (Run pg_config --sharedir if you're uncertain - where SHAREDIR is.) This must be repeated in each - database to be fixed. + This must be repeated in each database to be fixed, + including template0. 
diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml index f3b00a70d5..91fbb34399 100644 --- a/doc/src/sgml/release-9.3.sgml +++ b/doc/src/sgml/release-9.3.sgml @@ -52,14 +52,44 @@ in an existing installation, you can, as a superuser, do this in psql: -BEGIN; -DROP SCHEMA information_schema CASCADE; -\i SHAREDIR/information_schema.sql -COMMIT; +SET search_path TO information_schema; +CREATE OR REPLACE VIEW table_privileges AS + SELECT CAST(u_grantor.rolname AS sql_identifier) AS grantor, + CAST(grantee.rolname AS sql_identifier) AS grantee, + CAST(current_database() AS sql_identifier) AS table_catalog, + CAST(nc.nspname AS sql_identifier) AS table_schema, + CAST(c.relname AS sql_identifier) AS table_name, + CAST(c.prtype AS character_data) AS privilege_type, + CAST( + CASE WHEN + -- object owner always has grant options + pg_has_role(grantee.oid, c.relowner, 'USAGE') + OR c.grantable + THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_grantable, + CAST(CASE WHEN c.prtype = 'SELECT' THEN 'YES' ELSE 'NO' END AS yes_or_no) AS with_hierarchy + + FROM ( + SELECT oid, relname, relnamespace, relkind, relowner, (aclexplode(coalesce(relacl, acldefault('r', relowner)))).* FROM pg_class + ) AS c (oid, relname, relnamespace, relkind, relowner, grantor, grantee, prtype, grantable), + pg_namespace nc, + pg_authid u_grantor, + ( + SELECT oid, rolname FROM pg_authid + UNION ALL + SELECT 0::oid, 'PUBLIC' + ) AS grantee (oid, rolname) + + WHERE c.relnamespace = nc.oid + AND c.relkind IN ('r', 'v', 'f') + AND c.grantee = grantee.oid + AND c.grantor = u_grantor.oid + AND c.prtype IN ('INSERT', 'SELECT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER') + AND (pg_has_role(u_grantor.oid, 'USAGE') + OR pg_has_role(grantee.oid, 'USAGE') + OR grantee.rolname = 'PUBLIC'); - (Run pg_config --sharedir if you're uncertain - where SHAREDIR is.) This must be repeated in each - database to be fixed. + This must be repeated in each database to be fixed, + including template0. 
diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index 227e5e231c..c665f90ca1 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -68,14 +68,44 @@ Branch: REL9_4_STABLE [b51c8efc6] 2017-08-24 15:21:32 -0700 in an existing installation, you can, as a superuser, do this in psql: -BEGIN; -DROP SCHEMA information_schema CASCADE; -\i SHAREDIR/information_schema.sql -COMMIT; +SET search_path TO information_schema; +CREATE OR REPLACE VIEW table_privileges AS + SELECT CAST(u_grantor.rolname AS sql_identifier) AS grantor, + CAST(grantee.rolname AS sql_identifier) AS grantee, + CAST(current_database() AS sql_identifier) AS table_catalog, + CAST(nc.nspname AS sql_identifier) AS table_schema, + CAST(c.relname AS sql_identifier) AS table_name, + CAST(c.prtype AS character_data) AS privilege_type, + CAST( + CASE WHEN + -- object owner always has grant options + pg_has_role(grantee.oid, c.relowner, 'USAGE') + OR c.grantable + THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_grantable, + CAST(CASE WHEN c.prtype = 'SELECT' THEN 'YES' ELSE 'NO' END AS yes_or_no) AS with_hierarchy + + FROM ( + SELECT oid, relname, relnamespace, relkind, relowner, (aclexplode(coalesce(relacl, acldefault('r', relowner)))).* FROM pg_class + ) AS c (oid, relname, relnamespace, relkind, relowner, grantor, grantee, prtype, grantable), + pg_namespace nc, + pg_authid u_grantor, + ( + SELECT oid, rolname FROM pg_authid + UNION ALL + SELECT 0::oid, 'PUBLIC' + ) AS grantee (oid, rolname) + + WHERE c.relnamespace = nc.oid + AND c.relkind IN ('r', 'v', 'f') + AND c.grantee = grantee.oid + AND c.grantor = u_grantor.oid + AND c.prtype IN ('INSERT', 'SELECT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER') + AND (pg_has_role(u_grantor.oid, 'USAGE') + OR pg_has_role(grantee.oid, 'USAGE') + OR grantee.rolname = 'PUBLIC'); - (Run pg_config --sharedir if you're uncertain - where SHAREDIR is.) This must be repeated in each - database to be fixed. + This must be repeated in each database to be fixed, + including template0. 
diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index 62b311486a..0f700dd5d3 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -51,14 +51,44 @@ in an existing installation, you can, as a superuser, do this in psql: -BEGIN; -DROP SCHEMA information_schema CASCADE; -\i SHAREDIR/information_schema.sql -COMMIT; +SET search_path TO information_schema; +CREATE OR REPLACE VIEW table_privileges AS + SELECT CAST(u_grantor.rolname AS sql_identifier) AS grantor, + CAST(grantee.rolname AS sql_identifier) AS grantee, + CAST(current_database() AS sql_identifier) AS table_catalog, + CAST(nc.nspname AS sql_identifier) AS table_schema, + CAST(c.relname AS sql_identifier) AS table_name, + CAST(c.prtype AS character_data) AS privilege_type, + CAST( + CASE WHEN + -- object owner always has grant options + pg_has_role(grantee.oid, c.relowner, 'USAGE') + OR c.grantable + THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_grantable, + CAST(CASE WHEN c.prtype = 'SELECT' THEN 'YES' ELSE 'NO' END AS yes_or_no) AS with_hierarchy + + FROM ( + SELECT oid, relname, relnamespace, relkind, relowner, (aclexplode(coalesce(relacl, acldefault('r', relowner)))).* FROM pg_class + ) AS c (oid, relname, relnamespace, relkind, relowner, grantor, grantee, prtype, grantable), + pg_namespace nc, + pg_authid u_grantor, + ( + SELECT oid, rolname FROM pg_authid + UNION ALL + SELECT 0::oid, 'PUBLIC' + ) AS grantee (oid, rolname) + + WHERE c.relnamespace = nc.oid + AND c.relkind IN ('r', 'v', 'f') + AND c.grantee = grantee.oid + AND c.grantor = u_grantor.oid + AND c.prtype IN ('INSERT', 'SELECT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER') + AND (pg_has_role(u_grantor.oid, 'USAGE') + OR pg_has_role(grantee.oid, 'USAGE') + OR grantee.rolname = 'PUBLIC'); - (Run pg_config --sharedir if you're uncertain - where SHAREDIR is.) This must be repeated in each - database to be fixed. + This must be repeated in each database to be fixed, + including template0. 
diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index fa5355f873..dc811c4ca5 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -61,14 +61,44 @@ Branch: REL9_2_STABLE [98e6784aa] 2017-08-15 19:33:04 -0400 in an existing installation, you can, as a superuser, do this in psql: -BEGIN; -DROP SCHEMA information_schema CASCADE; -\i SHAREDIR/information_schema.sql -COMMIT; +SET search_path TO information_schema; +CREATE OR REPLACE VIEW table_privileges AS + SELECT CAST(u_grantor.rolname AS sql_identifier) AS grantor, + CAST(grantee.rolname AS sql_identifier) AS grantee, + CAST(current_database() AS sql_identifier) AS table_catalog, + CAST(nc.nspname AS sql_identifier) AS table_schema, + CAST(c.relname AS sql_identifier) AS table_name, + CAST(c.prtype AS character_data) AS privilege_type, + CAST( + CASE WHEN + -- object owner always has grant options + pg_has_role(grantee.oid, c.relowner, 'USAGE') + OR c.grantable + THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_grantable, + CAST(CASE WHEN c.prtype = 'SELECT' THEN 'YES' ELSE 'NO' END AS yes_or_no) AS with_hierarchy + + FROM ( + SELECT oid, relname, relnamespace, relkind, relowner, (aclexplode(coalesce(relacl, acldefault('r', relowner)))).* FROM pg_class + ) AS c (oid, relname, relnamespace, relkind, relowner, grantor, grantee, prtype, grantable), + pg_namespace nc, + pg_authid u_grantor, + ( + SELECT oid, rolname FROM pg_authid + UNION ALL + SELECT 0::oid, 'PUBLIC' + ) AS grantee (oid, rolname) + + WHERE c.relnamespace = nc.oid + AND c.relkind IN ('r', 'v', 'f') + AND c.grantee = grantee.oid + AND c.grantor = u_grantor.oid + AND c.prtype IN ('INSERT', 'SELECT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER') + AND (pg_has_role(u_grantor.oid, 'USAGE') + OR pg_has_role(grantee.oid, 'USAGE') + OR grantee.rolname = 'PUBLIC'); - (Run pg_config --sharedir if you're uncertain - where SHAREDIR is.) This must be repeated in each - database to be fixed. + This must be repeated in each database to be fixed, + including template0. From 3709ca1cf069cee24ef8000cb6a479813b5537df Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 26 Sep 2017 16:07:52 -0400 Subject: [PATCH 0279/1087] pg_basebackup: Add option to create replication slot When requesting a particular replication slot, the new pg_basebackup option -C/--create-slot creates it before starting to replicate from it. Further refactor the slot creation logic to include the temporary slot creation logic into the same function. Add new arguments is_temporary and preserve_wal to CreateReplicationSlot(). Print in --verbose mode that a slot has been created. Author: Michael Banck --- doc/src/sgml/ref/pg_basebackup.sgml | 16 +++++ src/bin/pg_basebackup/pg_basebackup.c | 63 +++++++++++++++++--- src/bin/pg_basebackup/pg_receivewal.c | 3 +- src/bin/pg_basebackup/pg_recvlogical.c | 4 +- src/bin/pg_basebackup/receivelog.c | 18 ------ src/bin/pg_basebackup/receivelog.h | 1 - src/bin/pg_basebackup/streamutil.c | 16 +++-- src/bin/pg_basebackup/streamutil.h | 5 +- src/bin/pg_basebackup/t/010_pg_basebackup.pl | 27 ++++++++- 9 files changed, 112 insertions(+), 41 deletions(-) diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml index b7aa128f7f..f790c56003 100644 --- a/doc/src/sgml/ref/pg_basebackup.sgml +++ b/doc/src/sgml/ref/pg_basebackup.sgml @@ -382,6 +382,18 @@ PostgreSQL documentation + + + + + + This option causes the replication slot specified by the + option --slot to be created before starting the + backup. 
In this case, an error is raised if the slot already exists. + + + + @@ -462,6 +474,10 @@ PostgreSQL documentation the server does not remove any necessary WAL data in the time between the end of the base backup and the start of streaming replication. + + The specified replication slot has to exist unless the + option is also used. + If this option is not specified and the server supports temporary replication slots (version 10 and later), then a temporary replication diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c index 537978090e..dac7299ff4 100644 --- a/src/bin/pg_basebackup/pg_basebackup.c +++ b/src/bin/pg_basebackup/pg_basebackup.c @@ -93,6 +93,8 @@ static pg_time_t last_progress_report = 0; static int32 maxrate = 0; /* no limit by default */ static char *replication_slot = NULL; static bool temp_replication_slot = true; +static bool create_slot = false; +static bool no_slot = false; static bool success = false; static bool made_new_pgdata = false; @@ -346,6 +348,7 @@ usage(void) printf(_("\nGeneral options:\n")); printf(_(" -c, --checkpoint=fast|spread\n" " set fast or spread checkpointing\n")); + printf(_(" -C, --create-slot create replication slot\n")); printf(_(" -l, --label=LABEL set backup label\n")); printf(_(" -n, --no-clean do not clean up after errors\n")); printf(_(" -N, --no-sync do not wait for changes to be written safely to disk\n")); @@ -466,7 +469,6 @@ typedef struct char xlog[MAXPGPATH]; /* directory or tarfile depending on mode */ char *sysidentifier; int timeline; - bool temp_slot; } logstreamer_param; static int @@ -492,9 +494,6 @@ LogStreamerMain(logstreamer_param *param) stream.mark_done = true; stream.partial_suffix = NULL; stream.replication_slot = replication_slot; - stream.temp_slot = param->temp_slot; - if (stream.temp_slot && !stream.replication_slot) - stream.replication_slot = psprintf("pg_basebackup_%d", (int) PQbackendPID(param->bgconn)); if (format == 'p') stream.walmethod = CreateWalDirectoryMethod(param->xlog, 0, do_sync); @@ -583,9 +582,29 @@ StartLogStreamer(char *startpos, uint32 timeline, char *sysidentifier) /* Temporary replication slots are only supported in 10 and newer */ if (PQserverVersion(conn) < MINIMUM_VERSION_FOR_TEMP_SLOTS) - param->temp_slot = false; - else - param->temp_slot = temp_replication_slot; + temp_replication_slot = false; + + /* + * Create replication slot if requested + */ + if (temp_replication_slot && !replication_slot) + replication_slot = psprintf("pg_basebackup_%d", (int) PQbackendPID(param->bgconn)); + if (temp_replication_slot || create_slot) + { + if (!CreateReplicationSlot(param->bgconn, replication_slot, NULL, + temp_replication_slot, true, true, false)) + disconnect_and_exit(1); + + if (verbose) + { + if (temp_replication_slot) + fprintf(stderr, _("%s: created temporary replication slot \"%s\"\n"), + progname, replication_slot); + else + fprintf(stderr, _("%s: created replication slot \"%s\"\n"), + progname, replication_slot); + } + } if (format == 'p') { @@ -2079,6 +2098,7 @@ main(int argc, char **argv) {"pgdata", required_argument, NULL, 'D'}, {"format", required_argument, NULL, 'F'}, {"checkpoint", required_argument, NULL, 'c'}, + {"create-slot", no_argument, NULL, 'C'}, {"max-rate", required_argument, NULL, 'r'}, {"write-recovery-conf", no_argument, NULL, 'R'}, {"slot", required_argument, NULL, 'S'}, @@ -2105,7 +2125,6 @@ main(int argc, char **argv) int c; int option_index; - bool no_slot = false; progname = get_progname(argv[0]); set_pglocale_pgservice(argv[0], 
PG_TEXTDOMAIN("pg_basebackup")); @@ -2127,11 +2146,14 @@ main(int argc, char **argv) atexit(cleanup_directories_atexit); - while ((c = getopt_long(argc, argv, "D:F:r:RT:X:l:nNzZ:d:c:h:p:U:s:S:wWvP", + while ((c = getopt_long(argc, argv, "CD:F:r:RS:T:X:l:nNzZ:d:c:h:p:U:s:wWvP", long_options, &option_index)) != -1) { switch (c) { + case 'C': + create_slot = true; + break; case 'D': basedir = pg_strdup(optarg); break; @@ -2348,6 +2370,29 @@ main(int argc, char **argv) temp_replication_slot = false; } + if (create_slot) + { + if (!replication_slot) + { + fprintf(stderr, + _("%s: --create-slot needs a slot to be specified using --slot\n"), + progname); + fprintf(stderr, _("Try \"%s --help\" for more information.\n"), + progname); + exit(1); + } + + if (no_slot) + { + fprintf(stderr, + _("%s: --create-slot and --no-slot are incompatible options\n"), + progname); + fprintf(stderr, _("Try \"%s --help\" for more information.\n"), + progname); + exit(1); + } + } + if (xlog_dir) { if (format != 'p') diff --git a/src/bin/pg_basebackup/pg_receivewal.c b/src/bin/pg_basebackup/pg_receivewal.c index fbac0df93d..888ae6c571 100644 --- a/src/bin/pg_basebackup/pg_receivewal.c +++ b/src/bin/pg_basebackup/pg_receivewal.c @@ -431,7 +431,6 @@ StreamLog(void) stream.do_sync); stream.partial_suffix = ".partial"; stream.replication_slot = replication_slot; - stream.temp_slot = false; ReceiveXlogStream(conn, &stream); @@ -728,7 +727,7 @@ main(int argc, char **argv) _("%s: creating replication slot \"%s\"\n"), progname, replication_slot); - if (!CreateReplicationSlot(conn, replication_slot, NULL, true, + if (!CreateReplicationSlot(conn, replication_slot, NULL, false, true, false, slot_exists_ok)) disconnect_and_exit(1); disconnect_and_exit(0); diff --git a/src/bin/pg_basebackup/pg_recvlogical.c b/src/bin/pg_basebackup/pg_recvlogical.c index 6811a55e76..3109d0f99f 100644 --- a/src/bin/pg_basebackup/pg_recvlogical.c +++ b/src/bin/pg_basebackup/pg_recvlogical.c @@ -979,8 +979,8 @@ main(int argc, char **argv) _("%s: creating replication slot \"%s\"\n"), progname, replication_slot); - if (!CreateReplicationSlot(conn, replication_slot, plugin, - false, slot_exists_ok)) + if (!CreateReplicationSlot(conn, replication_slot, plugin, false, + false, false, slot_exists_ok)) disconnect_and_exit(1); startpos = InvalidXLogRecPtr; } diff --git a/src/bin/pg_basebackup/receivelog.c b/src/bin/pg_basebackup/receivelog.c index 65931f6454..07509cb825 100644 --- a/src/bin/pg_basebackup/receivelog.c +++ b/src/bin/pg_basebackup/receivelog.c @@ -522,24 +522,6 @@ ReceiveXlogStream(PGconn *conn, StreamCtl *stream) PQclear(res); } - /* - * Create temporary replication slot if one is needed - */ - if (stream->temp_slot) - { - snprintf(query, sizeof(query), - "CREATE_REPLICATION_SLOT \"%s\" TEMPORARY PHYSICAL RESERVE_WAL", - stream->replication_slot); - res = PQexec(conn, query); - if (PQresultStatus(res) != PGRES_TUPLES_OK) - { - fprintf(stderr, _("%s: could not create temporary replication slot \"%s\": %s"), - progname, stream->replication_slot, PQerrorMessage(conn)); - PQclear(res); - return false; - } - } - /* * initialize flush position to starting point, it's the caller's * responsibility that that's sane. 
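For reference, the replication commands that the refactored CreateReplicationSlot() in streamutil.c (below) ends up sending on behalf of pg_basebackup are roughly the following; the slot names are illustrative, and 12345 stands in for the backend PID:

    -- pg_basebackup -C -S slot0: permanent physical slot, WAL reserved
    CREATE_REPLICATION_SLOT "slot0" PHYSICAL RESERVE_WAL
    -- default on server version 10 or later: temporary slot, replacing
    -- the inline query just removed from receivelog.c
    CREATE_REPLICATION_SLOT "pg_basebackup_12345" TEMPORARY PHYSICAL RESERVE_WAL

These are issued over a replication connection, not as regular SQL.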
diff --git a/src/bin/pg_basebackup/receivelog.h b/src/bin/pg_basebackup/receivelog.h index bb786ce289..5b8c33fc26 100644 --- a/src/bin/pg_basebackup/receivelog.h +++ b/src/bin/pg_basebackup/receivelog.h @@ -47,7 +47,6 @@ typedef struct StreamCtl WalWriteMethod *walmethod; /* How to write the WAL */ char *partial_suffix; /* Suffix appended to partially received files */ char *replication_slot; /* Replication slot to use, or NULL */ - bool temp_slot; /* Create temporary replication slot */ } StreamCtl; diff --git a/src/bin/pg_basebackup/streamutil.c b/src/bin/pg_basebackup/streamutil.c index df17f60596..81fef8cd51 100644 --- a/src/bin/pg_basebackup/streamutil.c +++ b/src/bin/pg_basebackup/streamutil.c @@ -398,7 +398,8 @@ RunIdentifySystem(PGconn *conn, char **sysid, TimeLineID *starttli, */ bool CreateReplicationSlot(PGconn *conn, const char *slot_name, const char *plugin, - bool is_physical, bool slot_exists_ok) + bool is_temporary, bool is_physical, bool reserve_wal, + bool slot_exists_ok) { PQExpBuffer query; PGresult *res; @@ -410,13 +411,18 @@ CreateReplicationSlot(PGconn *conn, const char *slot_name, const char *plugin, Assert(slot_name != NULL); /* Build query */ + appendPQExpBuffer(query, "CREATE_REPLICATION_SLOT \"%s\"", slot_name); + if (is_temporary) + appendPQExpBuffer(query, " TEMPORARY"); if (is_physical) - appendPQExpBuffer(query, "CREATE_REPLICATION_SLOT \"%s\" PHYSICAL", - slot_name); + { + appendPQExpBuffer(query, " PHYSICAL"); + if (reserve_wal) + appendPQExpBuffer(query, " RESERVE_WAL"); + } else { - appendPQExpBuffer(query, "CREATE_REPLICATION_SLOT \"%s\" LOGICAL \"%s\"", - slot_name, plugin); + appendPQExpBuffer(query, " LOGICAL \"%s\"", plugin); if (PQserverVersion(conn) >= 100000) /* pg_recvlogical doesn't use an exported snapshot, so suppress */ appendPQExpBuffer(query, " NOEXPORT_SNAPSHOT"); diff --git a/src/bin/pg_basebackup/streamutil.h b/src/bin/pg_basebackup/streamutil.h index ec227712d5..908fd68c2b 100644 --- a/src/bin/pg_basebackup/streamutil.h +++ b/src/bin/pg_basebackup/streamutil.h @@ -33,8 +33,9 @@ extern PGconn *GetConnection(void); /* Replication commands */ extern bool CreateReplicationSlot(PGconn *conn, const char *slot_name, - const char *plugin, bool is_physical, - bool slot_exists_ok); + const char *plugin, bool is_temporary, + bool is_physical, bool reserve_wal, + bool slot_exists_ok); extern bool DropReplicationSlot(PGconn *conn, const char *slot_name); extern bool RunIdentifySystem(PGconn *conn, char **sysid, TimeLineID *starttli, diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl index cce14b83e1..6a8be09f4c 100644 --- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl +++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl @@ -4,7 +4,7 @@ use Config; use PostgresNode; use TestLib; -use Test::More tests => 72; +use Test::More tests => 78; program_help_ok('pg_basebackup'); program_version_ok('pg_basebackup'); @@ -259,9 +259,32 @@ [ 'pg_basebackup', '-D', "$tempdir/backupxs_sl_fail", '-X', 'stream', '-S', - 'slot1' ], + 'slot0' ], 'pg_basebackup fails with nonexistent replication slot'); +$node->command_fails( + [ 'pg_basebackup', '-D', "$tempdir/backupxs_slot", '-C' ], + 'pg_basebackup -C fails without slot name'); + +$node->command_fails( + [ 'pg_basebackup', '-D', "$tempdir/backupxs_slot", '-C', '-S', 'slot0', '--no-slot' ], + 'pg_basebackup fails with -C -S --no-slot'); + +$node->command_ok( + [ 'pg_basebackup', '-D', "$tempdir/backupxs_slot", '-C', '-S', 'slot0' ], + 'pg_basebackup -C runs'); + 
+is($node->safe_psql('postgres', q{SELECT slot_name FROM pg_replication_slots WHERE slot_name = 'slot0'}), + 'slot0', + 'replication slot was created'); +isnt($node->safe_psql('postgres', q{SELECT restart_lsn FROM pg_replication_slots WHERE slot_name = 'slot0'}), + '', + 'restart LSN of new slot is not null'); + +$node->command_fails( + [ 'pg_basebackup', '-D', "$tempdir/backupxs_slot1", '-C', '-S', 'slot0' ], + 'pg_basebackup fails with -C -S and a previously existing slot'); + $node->safe_psql('postgres', q{SELECT * FROM pg_create_physical_replication_slot('slot1')}); my $lsn = $node->safe_psql('postgres', From 684cf76b83e9dc8aed12aeb9131d2208f61bd31f Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 15 Sep 2017 10:17:37 -0400 Subject: [PATCH 0280/1087] Get rid of parameterized marked sections in SGML Previously, we created a variant of the installation instructions for producing the plain-text INSTALL file by marking up certain parts of installation.sgml using SGML parameterized marked sections. Marked sections will not work anymore in XML, so before we can convert the documentation to XML, we need a new approach. DocBook provides a "profiling" feature that allows selecting content based on attributes, which would work here. But it imposes a noticeable overhead when building the full documentation and causes complications when building some output formats, and given that we recently spent a fair amount of effort optimizing the documentation build time, it seems sad to have to accept that. So as an alternative, (1) we create our own mini-profiling layer that adjusts just the text we want, and (2) assemble the pieces of content that we want in the INSTALL file using XInclude. That way, there is no overhead when building the full documentation and most of the "ugly" stuff in installation.sgml can be removed and dealt with out of line. 
--- doc/src/sgml/Makefile | 5 +- doc/src/sgml/filelist.sgml | 9 - doc/src/sgml/installation.sgml | 245 ++++----------------------- doc/src/sgml/standalone-install.sgml | 28 --- doc/src/sgml/standalone-install.xml | 167 ++++++++++++++++++ doc/src/sgml/standalone-profile.xsl | 81 +++++++++ 6 files changed, 280 insertions(+), 255 deletions(-) delete mode 100644 doc/src/sgml/standalone-install.sgml create mode 100644 doc/src/sgml/standalone-install.xml create mode 100644 doc/src/sgml/standalone-profile.xsl diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile index 7458ef4de2..128d827c1a 100644 --- a/doc/src/sgml/Makefile +++ b/doc/src/sgml/Makefile @@ -134,9 +134,8 @@ INSTALL.html: %.html : stylesheet-text.xsl %.xml $(XMLLINT) --noout --valid $*.xml $(XSLTPROC) $(XSLTPROCFLAGS) $(XSLTPROC_HTML_FLAGS) $^ >$@ -INSTALL.xml: standalone-install.sgml installation.sgml version.sgml - $(OSX) $(SPFLAGS) $(SGMLINCLUDE) -x lower $(filter-out version.sgml,$^) >$@.tmp - $(call mangle-xml,chapter) +INSTALL.xml: standalone-profile.xsl standalone-install.xml postgres.xml + $(XSLTPROC) $(XSLTPROCFLAGS) --xinclude $(wordlist 1,2,$^) >$@ ## diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml index bd371fd1d3..9050559abd 100644 --- a/doc/src/sgml/filelist.sgml +++ b/doc/src/sgml/filelist.sgml @@ -190,12 +190,3 @@ - - - - diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index a1bae95145..f4e4fc7c5e 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -1,28 +1,25 @@ - <![%standalone-include[<productname>PostgreSQL</>]]> - Installation from Source Code + Installation from Source Code installation - This - describes the installation of + This chapter describes the installation of PostgreSQL using the source code distribution. (If you are installing a pre-packaged distribution, - such as an RPM or Debian package, ignore this - - + such as an RPM or Debian package, ignore this chapter and read the packager's instructions instead.) @@ -45,8 +42,7 @@ su - postgres /usr/local/pgsql/bin/psql test The long version is the rest of this - - + chapter. @@ -197,8 +193,7 @@ su - postgres required version is Python 2.4. Python 3 is supported if it's version 3.1 or later; but see - PL/Python documentation]]> - ]]> + when using Python 3. @@ -267,9 +262,7 @@ su - postgres To build the PostgreSQL documentation, there is a separate set of requirements; see - .]]> - + . @@ -340,7 +333,6 @@ su - postgres - Getting The Source @@ -369,7 +361,6 @@ su - postgres . -]]> Installation Procedure @@ -844,9 +835,8 @@ su - postgres Build with LDAPLDAP support for authentication and connection parameter lookup (see - and - ]]> for more information). On Unix, + and + for more information). On Unix, this requires the OpenLDAP package to be installed. On Windows, the default WinLDAP library is used. configure will check for the required @@ -865,8 +855,8 @@ su - postgres for systemdsystemd service notifications. This improves integration if the server binary is started under systemd but has no impact - otherwise for more - information]]>. libsystemd and the + otherwise; see for more + information. libsystemd and the associated header files need to be installed to be able to use this option. @@ -911,8 +901,7 @@ su - postgres - Build the - ]]> module + Build the module (which provides functions to generate UUIDs), using the specified UUID library.UUID LIBRARY must be one of: @@ -979,8 +968,7 @@ su - postgres Use libxslt when building the - - ]]> + module. 
xml2 relies on this library to perform XSL transformations of XML. @@ -1096,8 +1084,7 @@ su - postgres has no support for strong random numbers on the platform. A source of random numbers is needed for some authentication protocols, as well as some routines in the - - ]]> + module. disables functionality that requires cryptographically strong random numbers, and substitutes a weak pseudo-random-number-generator for the generation of @@ -1201,8 +1188,8 @@ su - postgres code coverage testing instrumentation. When run, they generate files in the build directory with code coverage metrics. - - for more information.]]> This option is for use only with GCC + See + for more information. This option is for use only with GCC and when doing development work. @@ -1262,8 +1249,8 @@ su - postgres Compiles PostgreSQL with support for the dynamic tracing tool DTrace. - - for more information.]]> + See + for more information. @@ -1298,7 +1285,7 @@ su - postgres Enable tests using the Perl TAP tools. This requires a Perl installation and the Perl module IPC::Run. - for more information.]]> + See for more information. @@ -1455,9 +1442,7 @@ su - postgres whether Python 2 or 3 is specified here (or otherwise implicitly chosen) determines which variant of the PL/Python language becomes available. See - PL/Python - documentation]]> - ]]> + for more information. @@ -1584,10 +1569,7 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. make check (This won't work as root; do it as an unprivileged user.) - src/test/regress/README and the - documentation contain]]> - contains]]> + See for detailed information about interpreting the test results. You can repeat this test at any later time by issuing the same command. @@ -1599,8 +1581,7 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. If you are upgrading an existing system be sure to read - - ]]> + , which has instructions about upgrading a cluster. @@ -1858,167 +1839,6 @@ export MANPATH - - - Getting Started - - - The following is a quick summary of how to get PostgreSQL up and - running once installed. The main documentation contains more information. - - - - - - Create a user account for the PostgreSQL - server. This is the user the server will run as. For production - use you should create a separate, unprivileged account - (postgres is commonly used). If you do not have root - access or just want to play around, your own user account is - enough, but running the server as root is a security risk and - will not work. - -adduser postgres - - - - - - - Create a database installation with the initdb - command. To run initdb you must be logged in to your - PostgreSQL server account. It will not work as - root. - -root# mkdir /usr/local/pgsql/data -root# chown postgres /usr/local/pgsql/data -root# su - postgres -postgres$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data - - - - - The - - - - - At this point, if you did not use the initdb -A - option, you might want to modify pg_hba.conf to control - local access to the server before you start it. The default is to - trust all local users. - - - - - - The previous initdb step should have told you how to - start up the database server. Do so now. The command should look - something like: - -/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data - - This will start the server in the foreground. 
To put the server - in the background use something like: - -nohup /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data \ - </dev/null >>server.log 2>&1 </dev/null & - - - - - To stop a server running in the background you can type: - -kill `cat /usr/local/pgsql/data/postmaster.pid` - - - - - - - Create a database: - -createdb testdb - - Then enter: - -psql testdb - - to connect to that database. At the prompt you can enter SQL - commands and start experimenting. - - - - - - - What Now? - - - - - - The PostgreSQL distribution contains a - comprehensive documentation set, which you should read sometime. - After installation, the documentation can be accessed by - pointing your browser to - /usr/local/pgsql/doc/html/index.html, unless you - changed the installation directories. - - - - The first few chapters of the main documentation are the Tutorial, - which should be your first reading if you are completely new to - SQL databases. If you are familiar with database - concepts then you want to proceed with part on server - administration, which contains information about how to set up - the database server, database users, and authentication. - - - - - - Usually, you will want to modify your computer so that it will - automatically start the database server whenever it boots. Some - suggestions for this are in the documentation. - - - - - - Run the regression tests against the installed server (using - make installcheck). If you didn't run the - tests before installation, you should definitely do it now. This - is also explained in the documentation. - - - - - - By default, PostgreSQL is configured to run on - minimal hardware. This allows it to start up with almost any - hardware configuration. The default configuration is, however, - not designed for optimum performance. To achieve optimum - performance, several server parameters must be adjusted, the two - most common being shared_buffers and - work_mem. - Other parameters mentioned in the documentation also affect - performance. - - - - - -]]> - - Supported Platforms @@ -2076,9 +1896,7 @@ kill `cat /usr/local/pgsql/data/postmaster.pid` regarding the installation and setup of PostgreSQL. Be sure to read the installation instructions, and in particular as well. Also, - check src/test/regress/README and the documentation]]> - ]]> regarding the + check regarding the interpretation of regression test results. @@ -2429,7 +2247,7 @@ ERROR: could not load library "/opt/dbs/pgsql/lib/plperl.so": Bad address PostgreSQL can be built using Cygwin, a Linux-like environment for Windows, but that method is inferior to the native Windows build - )]]> and + (see ) and running a server under Cygwin is no longer recommended. @@ -2623,8 +2441,7 @@ PHSS_30849 s700_800 u2comp/be/plugin library Patch Microsoft's Visual C++ compiler suite. The MinGW build variant uses the normal build system described in this chapter; the Visual C++ build works completely differently - and is described in ]]>. + and is described in . It is a fully native build and uses no additional software like MinGW. A ready-made installer is available on the main PostgreSQL web site. @@ -2785,10 +2602,8 @@ LIBOBJS = snprintf.o Using DTrace for Tracing PostgreSQL - Yes, using DTrace is possible. See - ]]> for further - information. + Yes, using DTrace is possible. See for + further information. 
diff --git a/doc/src/sgml/standalone-install.sgml b/doc/src/sgml/standalone-install.sgml deleted file mode 100644 index 1942f9dc4c..0000000000 --- a/doc/src/sgml/standalone-install.sgml +++ /dev/null @@ -1,28 +0,0 @@ - - - - - -%version; - - - - - - - -]> diff --git a/doc/src/sgml/standalone-install.xml b/doc/src/sgml/standalone-install.xml new file mode 100644 index 0000000000..49d94c5187 --- /dev/null +++ b/doc/src/sgml/standalone-install.xml @@ -0,0 +1,167 @@ + + + +
+ <productname>PostgreSQL</productname> Installation from Source Code + + + This document describes the installation of + PostgreSQL using this source code distribution. + + + + + + + + + Getting Started + + + The following is a quick summary of how to get PostgreSQL up and + running once installed. The main documentation contains more information. + + + + + + Create a user account for the PostgreSQL + server. This is the user the server will run as. For production + use you should create a separate, unprivileged account + (postgres is commonly used). If you do not have root + access or just want to play around, your own user account is + enough, but running the server as root is a security risk and + will not work. +adduser postgres + + + + + + Create a database installation with the initdb + command. To run initdb you must be logged in to your + PostgreSQL server account. It will not work as + root. +root# mkdir /usr/local/pgsql/data +root# chown postgres /usr/local/pgsql/data +root# su - postgres +postgres$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data + + + + The option specifies the location where the data + will be stored. You can use any path you want, it does not have + to be under the installation directory. Just make sure that the + server account can write to the directory (or create it, if it + doesn't already exist) before starting initdb, as + illustrated here. + + + + + + At this point, if you did not use the initdb -A + option, you might want to modify pg_hba.conf to control + local access to the server before you start it. The default is to + trust all local users. + + + + + + The previous initdb step should have told you how to + start up the database server. Do so now. The command should look + something like: +/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data + This will start the server in the foreground. To put the server + in the background use something like: +nohup /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data \ + </dev/null >>server.log 2>&1 </dev/null & + + + + To stop a server running in the background you can type: +kill `cat /usr/local/pgsql/data/postmaster.pid` + + + + + + Create a database: +createdb testdb + Then enter: +psql testdb + to connect to that database. At the prompt you can enter SQL + commands and start experimenting. + + + + + + + What Now? + + + + + + The PostgreSQL distribution contains a + comprehensive documentation set, which you should read sometime. + After installation, the documentation can be accessed by + pointing your browser to + /usr/local/pgsql/doc/html/index.html, unless you + changed the installation directories. + + + + The first few chapters of the main documentation are the Tutorial, + which should be your first reading if you are completely new to + SQL databases. If you are familiar with database + concepts then you want to proceed with part on server + administration, which contains information about how to set up + the database server, database users, and authentication. + + + + + + Usually, you will want to modify your computer so that it will + automatically start the database server whenever it boots. Some + suggestions for this are in the documentation. + + + + + + Run the regression tests against the installed server (using + make installcheck). If you didn't run the + tests before installation, you should definitely do it now. This + is also explained in the documentation. + + + + + + By default, PostgreSQL is configured to run on + minimal hardware. 
This allows it to start up with almost any + hardware configuration. The default configuration is, however, + not designed for optimum performance. To achieve optimum + performance, several server parameters must be adjusted, the two + most common being shared_buffers and + work_mem. + Other parameters mentioned in the documentation also affect + performance. + + + + + + + + +
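A concrete illustration of that last point (a sketch; the values are placeholders, not recommendations):

    ALTER SYSTEM SET shared_buffers = '1GB';  -- needs a server restart
    ALTER SYSTEM SET work_mem = '32MB';       -- picked up on reload
    SELECT pg_reload_conf();                  -- apply reloadable changes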
diff --git a/doc/src/sgml/standalone-profile.xsl b/doc/src/sgml/standalone-profile.xsl new file mode 100644 index 0000000000..ff464c1654 --- /dev/null +++ b/doc/src/sgml/standalone-profile.xsl @@ -0,0 +1,81 @@ + + + + + + + + + + + + + + + + + + + + + + document + + + + the documentation about client authentication and libpq + + + + the main documentation's appendix on documentation + + + + the documentation + + + + the documentation + + + + pgcrypto + + + + the PL/Python documentation + + + + the file + src/test/regress/README + and the documentation + + + + the documentation + + + + uuid-ossp + + + + xml2 + + + From 639928c988c1c2f52bbe7ca89e8c7c78a041b3e2 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 10 Aug 2017 23:33:47 -0400 Subject: [PATCH 0281/1087] Improve vpath support in plperl build Run xsubpp with the -output option instead of redirecting stdout. That ensures that the #line directives in the output file point to the right place in a vpath build. This in turn fixes an error in coverage builds that it can't find the source files. Refactor the makefile rules while we're here. Reviewed-by: Michael Paquier --- src/pl/plperl/GNUmakefile | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/src/pl/plperl/GNUmakefile b/src/pl/plperl/GNUmakefile index 191f74067a..66a2c3d4c9 100644 --- a/src/pl/plperl/GNUmakefile +++ b/src/pl/plperl/GNUmakefile @@ -81,13 +81,9 @@ perlchunks.h: $(PERLCHUNKS) all: all-lib -SPI.c: SPI.xs plperl_helpers.h +%.c: %.xs @if [ x"$(perl_privlibexp)" = x"" ]; then echo "configure switch --with-perl was not specified."; exit 1; fi - $(PERL) $(XSUBPPDIR)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap $< >$@ - -Util.c: Util.xs plperl_helpers.h - @if [ x"$(perl_privlibexp)" = x"" ]; then echo "configure switch --with-perl was not specified."; exit 1; fi - $(PERL) $(XSUBPPDIR)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap $< >$@ + $(PERL) $(XSUBPPDIR)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap -output $@ $< install: all install-lib install-data From af44cbd5ecd7e1db0ae4bce75c8f1bce14b1d6db Mon Sep 17 00:00:00 2001 From: Dean Rasheed Date: Wed, 27 Sep 2017 17:16:15 +0100 Subject: [PATCH 0282/1087] Improve the CREATE POLICY documentation. Provide a correct description of how multiple policies are combined, clarify when SELECT permissions are required, mention SELECT FOR UPDATE/SHARE, and do some other more minor tidying up. Reviewed by Stephen Frost Discussion: https://postgr.es/m/CAEZATCVrxyYbOFU8XbGHicz%2BmXPYzw%3DhfNL2XTphDt-53TomQQ%40mail.gmail.com Back-patch to 9.5. --- doc/src/sgml/ref/create_policy.sgml | 206 ++++++++++++++++++---------- 1 file changed, 134 insertions(+), 72 deletions(-) diff --git a/doc/src/sgml/ref/create_policy.sgml b/doc/src/sgml/ref/create_policy.sgml index c0dfe1ea4b..70df22c059 100644 --- a/doc/src/sgml/ref/create_policy.sgml +++ b/doc/src/sgml/ref/create_policy.sgml @@ -73,20 +73,17 @@ CREATE POLICY name ON Policies can be applied for specific commands or for specific roles. The default for newly created policies is that they apply for all commands and - roles, unless otherwise specified. If multiple policies apply to a given - statement, they will be combined using OR (although ON CONFLICT DO - UPDATE and INSERT policies are not combined in this way, but - rather enforced as noted at each stage of ON CONFLICT execution). + roles, unless otherwise specified.
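By way of illustration (all names hypothetical), a policy created without FOR or TO clauses applies to all commands and all roles:

    CREATE TABLE accounts (id int, owner text, balance numeric);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    -- no FOR or TO clause: applies to ALL commands and to PUBLIC;
    -- the USING expression also serves as the WITH CHECK expression,
    -- as described below
    CREATE POLICY accounts_owner ON accounts
        USING (owner = current_user);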
- For commands that can have both USING - and WITH CHECK policies (ALL + For policies that can have both USING + and WITH CHECK expressions (ALL and UPDATE), if no WITH CHECK - policy is defined, then the USING policy will be used - both for which rows are visible (normal USING case) - and for which rows will be allowed to be added (WITH - CHECK case). + expression is defined, then the USING expression will be + used both to determine which rows are visible (normal + USING case) and which new rows will be allowed to be + added (WITH CHECK case). @@ -144,6 +141,16 @@ CREATE POLICY name ON + + + Note that there needs to be at least one permissive policy to grant + access to records before restrictive policies can be usefully used to + reduce that access. If only restrictive policies exist, then no records + will be accessible. When a mix of permissive and restrictive policies + are present, a record is only accessible if at least one of the + permissive policies passes, in addition to all the restrictive + policies. + @@ -210,7 +217,7 @@ CREATE POLICY name ON - + Per-Command Policies @@ -223,8 +230,7 @@ CREATE POLICY name ON ALL
policy exists and more specific policies exist, then both the ALL policy and the more - specific policy (or policies) will be combined using - OR, as usual for overlapping policies. + specific policy (or policies) will be applied. Additionally, ALL policies will be applied to both the selection side of a query and the modification side, using the USING expression for both cases if only @@ -293,11 +299,12 @@ CREATE POLICY name ON Using UPDATE for a policy means that it will apply - to UPDATE commands (or auxiliary ON - CONFLICT DO UPDATE clauses of INSERT - commands). Since UPDATE involves pulling an - existing record and then making changes to some portion (but - possibly not all) of the record, UPDATE + to UPDATE, SELECT FOR UPDATE + and SELECT FOR SHARE commands, as well as + auxiliary ON CONFLICT DO UPDATE clauses of + INSERT commands. Since UPDATE + involves pulling an existing record and replacing it with a new + modified record, UPDATE policies accept both a USING expression and a WITH CHECK expression. The USING expression determines which records @@ -306,22 +313,6 @@ CREATE POLICY name ON - - When an UPDATE command is used with a - WHERE clause or a RETURNING - clause, SELECT rights are also required on the - relation being updated and the appropriate SELECT - and ALL policies will be combined (using OR for any - overlapping SELECT related policies found) with the - USING clause of the UPDATE policy - using AND. Therefore, in order for a user to be able to - UPDATE specific rows, the user must have access - to the row(s) through a SELECT - or ALL policy and the row(s) must pass - the UPDATE policy's USING - expression. - - Any rows whose updated values do not pass the WITH CHECK expression will cause an error, and the @@ -331,21 +322,33 @@ CREATE POLICY name ON - Note, however, that INSERT with ON CONFLICT - DO UPDATE requires that an UPDATE policy - USING expression always be enforced as a - WITH CHECK expression. This - UPDATE policy must always pass when the - UPDATE path is taken. Any existing row that - necessitates that the UPDATE path be taken must - pass the (UPDATE or ALL) - USING qualifications (combined using OR), which - are always enforced as WITH CHECK - options in this context. (The UPDATE path will - never be silently avoided; an error will be thrown - instead.) Finally, the final row appended to the relation must pass - any WITH CHECK options that a conventional - UPDATE is required to pass. + Typically an UPDATE command also needs to read + data from columns in the relation being updated (e.g., in a + WHERE clause or a RETURNING + clause, or in an expression on the right hand side of the + SET clause). In this case, + SELECT rights are also required on the relation + being updated, and the appropriate SELECT or + ALL policies will be applied in addition to + the UPDATE policies. Thus the user must have + access to the row(s) being updated through a SELECT + or ALL policy in addition to being granted + permission to update the row(s) via an UPDATE + or ALL policy. + + + + When an INSERT command has an auxiliary + ON CONFLICT DO UPDATE clause, if the + UPDATE path is taken, the row to be updated is + first checked against the USING expressions of + any UPDATE policies, and then the new updated row + is checked against the WITH CHECK expressions. + Note, however, that unlike a standalone UPDATE + command, if the existing row does not pass the + USING expressions, an error will be thrown (the + UPDATE path will never be silently + avoided). 
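A sketch of the interaction just described (hypothetical table and policies): an UPDATE that reads existing rows through its WHERE clause is checked against the SELECT policies in addition to the UPDATE policies.

    CREATE TABLE docs (id int, owner text, body text);
    ALTER TABLE docs ENABLE ROW LEVEL SECURITY;
    CREATE POLICY docs_sel ON docs FOR SELECT
        USING (owner = current_user);
    CREATE POLICY docs_upd ON docs FOR UPDATE
        USING (owner = current_user)
        WITH CHECK (owner = current_user);
    -- reading id in the WHERE clause requires docs_sel to pass;
    -- the updated row must pass docs_upd's WITH CHECK expression
    UPDATE docs SET body = 'revised' WHERE id = 1;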
@@ -364,19 +367,18 @@ CREATE POLICY name ON - When a DELETE command is used with a - WHERE clause or a RETURNING - clause, SELECT rights are also required on the - relation being updated and the appropriate SELECT - and ALL policies will be combined (using OR for any - overlapping SELECT related policies found) with the - USING clause of the DELETE policy - using AND. Therefore, in order for a user to be able to - DELETE specific rows, the user must have access - to the row(s) through a SELECT - or ALL policy and the row(s) must pass - the DELETE policy's USING - expression. + In most cases a DELETE command also needs to read + data from columns in the relation that it is deleting from (e.g., + in a WHERE clause or a + RETURNING clause). In this case, + SELECT rights are also required on the relation, + and the appropriate SELECT or + ALL policies will be applied in addition to + the DELETE policies. Thus the user must have + access to the row(s) being deleted through a SELECT + or ALL policy in addition to being granted + permission to delete the row(s) via a DELETE or + ALL policy. @@ -390,6 +392,76 @@ CREATE POLICY name ON + + + Application of Multiple Policies + + + When multiple policies of different command types apply to the same command + (for example, SELECT and UPDATE + policies applied to an UPDATE command), then the user + must have both types of permissions (for example, permission to select rows + from the relation as well as permission to update them). Thus the + expressions for one type of policy are combined with the expressions for + the other type of policy using the AND operator. + + + + When multiple policies of the same command type apply to the same command, + then there must be at least one PERMISSIVE policy + granting access to the relation, and all of the + RESTRICTIVE policies must pass. Thus all the + PERMISSIVE policy expressions are combined using + OR, all the RESTRICTIVE policy + expressions are combined using AND, and the results are + combined using AND. If there are no + PERMISSIVE policies, then access is denied. + + + + Note that, for the purposes of combining multiple policies, + ALL policies are treated as having the same type as + whichever other type of policy is being applied. + + + + For example, in an UPDATE command requiring both + SELECT and UPDATE permissions, if + there are multiple applicable policies of each type, they will be combined + as follows: + + +expression from RESTRICTIVE SELECT/ALL policy 1 +AND +expression from RESTRICTIVE SELECT/ALL policy 2 +AND +... +AND +( + expression from PERMISSIVE SELECT/ALL policy 1 + OR + expression from PERMISSIVE SELECT/ALL policy 2 + OR + ... +) +AND +expression from RESTRICTIVE UPDATE/ALL policy 1 +AND +expression from RESTRICTIVE UPDATE/ALL policy 2 +AND +... +AND +( + expression from PERMISSIVE UPDATE/ALL policy 1 + OR + expression from PERMISSIVE UPDATE/ALL policy 2 + OR + ... +) + + + + @@ -418,16 +490,6 @@ CREATE POLICY name ON - - Note that there needs to be at least one permissive policy to grant - access to records before restrictive policies can be usefully used to - reduce that access. If only restrictive policies exist, then no records - will be accessible. When a mix of permissive and restrictive policies - are present, a record is only accessible if at least one of the - permissive policies passes, in addition to all the restrictive - policies. 
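To make the combination rules concrete (a sketch with hypothetical names, not part of the patch): with one permissive and one restrictive policy of the same command type, a row is visible only if it passes the restrictive policy and at least one permissive policy.

    CREATE TABLE events (id int, owner text, archived boolean);
    ALTER TABLE events ENABLE ROW LEVEL SECURITY;
    CREATE POLICY events_owner ON events FOR SELECT   -- permissive (default)
        USING (owner = current_user);
    CREATE POLICY events_live ON events AS RESTRICTIVE FOR SELECT
        USING (NOT archived);
    -- visible rows satisfy: (NOT archived) AND (owner = current_user)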
- - Generally, the system will enforce filter conditions imposed using security policies prior to qualifications that appear in user queries, From 65c865620237bf1964757436a36c40af591d30fb Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 27 Sep 2017 15:51:04 -0400 Subject: [PATCH 0283/1087] Fix plperl build The changes in 639928c988c1c2f52bbe7ca89e8c7c78a041b3e2 turned out to require Perl 5.9.3, which is newer than our minimum required version. So revert back to the old code for the normal case and only use the new variant when both coverage and vpath are used. As the minimum Perl version moves forward, we can drop the old code sometime. --- src/pl/plperl/GNUmakefile | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/pl/plperl/GNUmakefile b/src/pl/plperl/GNUmakefile index 66a2c3d4c9..91d1296b21 100644 --- a/src/pl/plperl/GNUmakefile +++ b/src/pl/plperl/GNUmakefile @@ -83,7 +83,12 @@ all: all-lib %.c: %.xs @if [ x"$(perl_privlibexp)" = x"" ]; then echo "configure switch --with-perl was not specified."; exit 1; fi +# xsubpp -output option is required for coverage+vpath, but requires Perl 5.9.3 +ifeq ($(enable_coverage)$(vpath_build),yesyes) $(PERL) $(XSUBPPDIR)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap -output $@ $< +else + $(PERL) $(XSUBPPDIR)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap $< >$@ +endif install: all install-lib install-data From 28e07270768518524291d7d7906668eb67f6b8a5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 27 Sep 2017 16:14:37 -0400 Subject: [PATCH 0284/1087] Revert to 9.6 treatment of ALTER TYPE enumtype ADD VALUE. This reverts commit 15bc038f9, along with the followon commits 1635e80d3 and 984c92074 that tried to clean up the problems exposed by bug #14825. The result was incomplete because it failed to address parallel-query requirements. With 10.0 release so close upon us, now does not seem like the time to be adding more code to fix that. I hope we can un-revert this code and add the missing parallel query support during the v11 cycle. Back-patch to v10. Discussion: https://postgr.es/m/20170922185904.1448.16585@wrigleys.postgresql.org --- doc/src/sgml/ref/alter_type.sgml | 5 +- src/backend/access/transam/xact.c | 4 -- src/backend/catalog/pg_enum.c | 64 --------------------- src/backend/commands/typecmds.c | 29 ++++++++-- src/backend/tcop/utility.c | 2 +- src/backend/utils/adt/enum.c | 90 ------------------------------ src/backend/utils/errcodes.txt | 1 - src/include/catalog/pg_enum.h | 2 - src/include/commands/typecmds.h | 2 +- src/test/regress/expected/enum.out | 78 ++++---------------------- src/test/regress/sql/enum.sql | 42 +++----------- 11 files changed, 49 insertions(+), 270 deletions(-) diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml index 7e2258e1e3..4027c1b8f7 100644 --- a/doc/src/sgml/ref/alter_type.sgml +++ b/doc/src/sgml/ref/alter_type.sgml @@ -290,9 +290,8 @@ ALTER TYPE name RENAME VALUE Notes - If ALTER TYPE ... ADD VALUE (the form that adds a new value to - an enum type) is executed inside a transaction block, the new value cannot - be used until after the transaction has been committed. + ALTER TYPE ... ADD VALUE (the form that adds a new value to an + enum type) cannot be executed inside a transaction block. 
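The reverted behavior, illustrated (a sketch; the type names are hypothetical, and the error text matches the regression output in this patch):

    CREATE TYPE rainbow AS ENUM ('red', 'green');
    BEGIN;
    ALTER TYPE rainbow ADD VALUE 'blue';
    -- ERROR:  ALTER TYPE ... ADD cannot run inside a transaction block
    ROLLBACK;
    ALTER TYPE rainbow ADD VALUE 'blue';  -- fine outside a block
    -- the one exception: a type created earlier in the same
    -- transaction (needed by pg_dump --binary-upgrade)
    BEGIN;
    CREATE TYPE mood AS ENUM ();
    ALTER TYPE mood ADD VALUE 'happy';    -- allowed
    ROLLBACK;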
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index 52408fc6b0..93dca7a72a 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -32,7 +32,6 @@ #include "access/xlogutils.h" #include "catalog/catalog.h" #include "catalog/namespace.h" -#include "catalog/pg_enum.h" #include "catalog/storage.h" #include "commands/async.h" #include "commands/tablecmds.h" @@ -2129,7 +2128,6 @@ CommitTransaction(void) AtCommit_Notify(); AtEOXact_GUC(true, 1); AtEOXact_SPI(true); - AtEOXact_Enum(); AtEOXact_on_commit_actions(true); AtEOXact_Namespace(true, is_parallel_worker); AtEOXact_SMgr(); @@ -2408,7 +2406,6 @@ PrepareTransaction(void) /* PREPARE acts the same as COMMIT as far as GUC is concerned */ AtEOXact_GUC(true, 1); AtEOXact_SPI(true); - AtEOXact_Enum(); AtEOXact_on_commit_actions(true); AtEOXact_Namespace(true, false); AtEOXact_SMgr(); @@ -2611,7 +2608,6 @@ AbortTransaction(void) AtEOXact_GUC(false, 1); AtEOXact_SPI(false); - AtEOXact_Enum(); AtEOXact_on_commit_actions(false); AtEOXact_Namespace(false, is_parallel_worker); AtEOXact_SMgr(); diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c index 0f7b36e11d..fe61d4dacc 100644 --- a/src/backend/catalog/pg_enum.c +++ b/src/backend/catalog/pg_enum.c @@ -28,8 +28,6 @@ #include "utils/builtins.h" #include "utils/catcache.h" #include "utils/fmgroids.h" -#include "utils/hsearch.h" -#include "utils/memutils.h" #include "utils/syscache.h" #include "utils/tqual.h" @@ -37,17 +35,6 @@ /* Potentially set by pg_upgrade_support functions */ Oid binary_upgrade_next_pg_enum_oid = InvalidOid; -/* - * Hash table of enum value OIDs created during the current transaction by - * AddEnumLabel. We disallow using these values until the transaction is - * committed; otherwise, they might get into indexes where we can't clean - * them up, and then if the transaction rolls back we have a broken index. - * (See comments for check_safe_enum_use() in enum.c.) Values created by - * EnumValuesCreate are *not* blacklisted; we assume those are created during - * CREATE TYPE, so they can't go away unless the enum type itself does. - */ -static HTAB *enum_blacklist = NULL; - static void RenumberEnumType(Relation pg_enum, HeapTuple *existing, int nelems); static int sort_order_cmp(const void *p1, const void *p2); @@ -473,24 +460,6 @@ AddEnumLabel(Oid enumTypeOid, heap_freetuple(enum_tup); heap_close(pg_enum, RowExclusiveLock); - - /* Set up the blacklist hash if not already done in this transaction */ - if (enum_blacklist == NULL) - { - HASHCTL hash_ctl; - - memset(&hash_ctl, 0, sizeof(hash_ctl)); - hash_ctl.keysize = sizeof(Oid); - hash_ctl.entrysize = sizeof(Oid); - hash_ctl.hcxt = TopTransactionContext; - enum_blacklist = hash_create("Enum value blacklist", - 32, - &hash_ctl, - HASH_ELEM | HASH_BLOBS | HASH_CONTEXT); - } - - /* Add the new value to the blacklist */ - (void) hash_search(enum_blacklist, &newOid, HASH_ENTER, NULL); } @@ -578,39 +547,6 @@ RenameEnumLabel(Oid enumTypeOid, } -/* - * Test if the given enum value is on the blacklist - */ -bool -EnumBlacklisted(Oid enum_id) -{ - bool found; - - /* If we've made no blacklist table, all values are safe */ - if (enum_blacklist == NULL) - return false; - - /* Else, is it in the table? */ - (void) hash_search(enum_blacklist, &enum_id, HASH_FIND, &found); - return found; -} - - -/* - * Clean up enum stuff after end of top-level transaction. 
- */ -void -AtEOXact_Enum(void) -{ - /* - * Reset the blacklist table, as all our enum values are now committed. - * The memory will go away automatically when TopTransactionContext is - * freed; it's sufficient to clear our pointer. - */ - enum_blacklist = NULL; -} - - /* * RenumberEnumType * Renumber existing enum elements to have sort positions 1..n. diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c index 7ed16aeff4..4c490ed5c1 100644 --- a/src/backend/commands/typecmds.c +++ b/src/backend/commands/typecmds.c @@ -1222,10 +1222,10 @@ DefineEnum(CreateEnumStmt *stmt) /* * AlterEnum - * Adds a new label to an existing enum. + * ALTER TYPE on an enum. */ ObjectAddress -AlterEnum(AlterEnumStmt *stmt) +AlterEnum(AlterEnumStmt *stmt, bool isTopLevel) { Oid enum_type_oid; TypeName *typename; @@ -1243,8 +1243,6 @@ AlterEnum(AlterEnumStmt *stmt) /* Check it's an enum and check user has permission to ALTER the enum */ checkEnumOwner(tup); - ReleaseSysCache(tup); - if (stmt->oldVal) { /* Rename an existing label */ @@ -1253,6 +1251,27 @@ AlterEnum(AlterEnumStmt *stmt) else { /* Add a new label */ + + /* + * Ordinarily we disallow adding values within transaction blocks, + * because we can't cope with enum OID values getting into indexes and + * then having their defining pg_enum entries go away. However, it's + * okay if the enum type was created in the current transaction, since + * then there can be no such indexes that wouldn't themselves go away + * on rollback. (We support this case because pg_dump + * --binary-upgrade needs it.) We test this by seeing if the pg_type + * row has xmin == current XID and is not HEAP_UPDATED. If it is + * HEAP_UPDATED, we can't be sure whether the type was created or only + * modified in this xact. So we are disallowing some cases that could + * theoretically be safe; but fortunately pg_dump only needs the + * simplest case. + */ + if (HeapTupleHeaderGetXmin(tup->t_data) == GetCurrentTransactionId() && + !(tup->t_data->t_infomask & HEAP_UPDATED)) + /* safe to do inside transaction block */ ; + else + PreventTransactionChain(isTopLevel, "ALTER TYPE ... ADD"); + AddEnumLabel(enum_type_oid, stmt->newVal, stmt->newValNeighbor, stmt->newValIsAfter, stmt->skipIfNewValExists); @@ -1262,6 +1281,8 @@ AlterEnum(AlterEnumStmt *stmt) ObjectAddressSet(address, TypeRelationId, enum_type_oid); + ReleaseSysCache(tup); + return address; } diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index 5c69ecf0f7..82a707af7b 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -1412,7 +1412,7 @@ ProcessUtilitySlow(ParseState *pstate, break; case T_AlterEnumStmt: /* ALTER TYPE (enum) */ - address = AlterEnum((AlterEnumStmt *) parsetree); + address = AlterEnum((AlterEnumStmt *) parsetree, isTopLevel); break; case T_ViewStmt: /* CREATE VIEW */ diff --git a/src/backend/utils/adt/enum.c b/src/backend/utils/adt/enum.c index c0124f497e..048a08dd85 100644 --- a/src/backend/utils/adt/enum.c +++ b/src/backend/utils/adt/enum.c @@ -19,7 +19,6 @@ #include "catalog/indexing.h" #include "catalog/pg_enum.h" #include "libpq/pqformat.h" -#include "storage/procarray.h" #include "utils/array.h" #include "utils/builtins.h" #include "utils/fmgroids.h" @@ -32,79 +31,6 @@ static Oid enum_endpoint(Oid enumtypoid, ScanDirection direction); static ArrayType *enum_range_internal(Oid enumtypoid, Oid lower, Oid upper); -/* - * Disallow use of an uncommitted pg_enum tuple. 
- * - * We need to make sure that uncommitted enum values don't get into indexes. - * If they did, and if we then rolled back the pg_enum addition, we'd have - * broken the index because value comparisons will not work reliably without - * an underlying pg_enum entry. (Note that removal of the heap entry - * containing an enum value is not sufficient to ensure that it doesn't appear - * in upper levels of indexes.) To do this we prevent an uncommitted row from - * being used for any SQL-level purpose. This is stronger than necessary, - * since the value might not be getting inserted into a table or there might - * be no index on its column, but it's easy to enforce centrally. - * - * However, it's okay to allow use of uncommitted values belonging to enum - * types that were themselves created in the same transaction, because then - * any such index would also be new and would go away altogether on rollback. - * We don't implement that fully right now, but we do allow free use of enum - * values created during CREATE TYPE AS ENUM, which are surely of the same - * lifespan as the enum type. (This case is required by "pg_restore -1".) - * Values added by ALTER TYPE ADD VALUE are currently restricted, but could - * be allowed if the enum type could be proven to have been created earlier - * in the same transaction. (Note that comparing tuple xmins would not work - * for that, because the type tuple might have been updated in the current - * transaction. Subtransactions also create hazards to be accounted for.) - * - * This function needs to be called (directly or indirectly) in any of the - * functions below that could return an enum value to SQL operations. - */ -static void -check_safe_enum_use(HeapTuple enumval_tup) -{ - TransactionId xmin; - Form_pg_enum en; - - /* - * If the row is hinted as committed, it's surely safe. This provides a - * fast path for all normal use-cases. - */ - if (HeapTupleHeaderXminCommitted(enumval_tup->t_data)) - return; - - /* - * Usually, a row would get hinted as committed when it's read or loaded - * into syscache; but just in case not, let's check the xmin directly. - */ - xmin = HeapTupleHeaderGetXmin(enumval_tup->t_data); - if (!TransactionIdIsInProgress(xmin) && - TransactionIdDidCommit(xmin)) - return; - - /* - * Check if the enum value is blacklisted. If not, it's safe, because it - * was made during CREATE TYPE AS ENUM and can't be shorter-lived than its - * owning type. (This'd also be false for values made by other - * transactions; but the previous tests should have handled all of those.) - */ - if (!EnumBlacklisted(HeapTupleGetOid(enumval_tup))) - return; - - /* - * There might well be other tests we could do here to narrow down the - * unsafe conditions, but for now just raise an exception. - */ - en = (Form_pg_enum) GETSTRUCT(enumval_tup); - ereport(ERROR, - (errcode(ERRCODE_UNSAFE_NEW_ENUM_VALUE_USAGE), - errmsg("unsafe use of new value \"%s\" of enum type %s", - NameStr(en->enumlabel), - format_type_be(en->enumtypid)), - errhint("New enum values must be committed before they can be used."))); -} - - /* Basic I/O support */ Datum @@ -133,9 +59,6 @@ enum_in(PG_FUNCTION_ARGS) format_type_be(enumtypoid), name))); - /* check it's safe to use in SQL */ - check_safe_enum_use(tup); - /* * This comes from pg_enum.oid and stores system oids in user tables. This * oid must be preserved by binary upgrades. 
@@ -201,9 +124,6 @@ enum_recv(PG_FUNCTION_ARGS) format_type_be(enumtypoid), name))); - /* check it's safe to use in SQL */ - check_safe_enum_use(tup); - enumoid = HeapTupleGetOid(tup); ReleaseSysCache(tup); @@ -411,16 +331,9 @@ enum_endpoint(Oid enumtypoid, ScanDirection direction) enum_tuple = systable_getnext_ordered(enum_scan, direction); if (HeapTupleIsValid(enum_tuple)) - { - /* check it's safe to use in SQL */ - check_safe_enum_use(enum_tuple); minmax = HeapTupleGetOid(enum_tuple); - } else - { - /* should only happen with an empty enum */ minmax = InvalidOid; - } systable_endscan_ordered(enum_scan); index_close(enum_idx, AccessShareLock); @@ -581,9 +494,6 @@ enum_range_internal(Oid enumtypoid, Oid lower, Oid upper) if (left_found) { - /* check it's safe to use in SQL */ - check_safe_enum_use(enum_tuple); - if (cnt >= max) { max *= 2; diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt index 4f35471762..76fe79eac0 100644 --- a/src/backend/utils/errcodes.txt +++ b/src/backend/utils/errcodes.txt @@ -400,7 +400,6 @@ Section: Class 55 - Object Not In Prerequisite State 55006 E ERRCODE_OBJECT_IN_USE object_in_use 55P02 E ERRCODE_CANT_CHANGE_RUNTIME_PARAM cant_change_runtime_param 55P03 E ERRCODE_LOCK_NOT_AVAILABLE lock_not_available -55P04 E ERRCODE_UNSAFE_NEW_ENUM_VALUE_USAGE unsafe_new_enum_value_usage Section: Class 57 - Operator Intervention diff --git a/src/include/catalog/pg_enum.h b/src/include/catalog/pg_enum.h index dff3d2f481..5938ba5cac 100644 --- a/src/include/catalog/pg_enum.h +++ b/src/include/catalog/pg_enum.h @@ -69,7 +69,5 @@ extern void AddEnumLabel(Oid enumTypeOid, const char *newVal, bool skipIfExists); extern void RenameEnumLabel(Oid enumTypeOid, const char *oldVal, const char *newVal); -extern bool EnumBlacklisted(Oid enum_id); -extern void AtEOXact_Enum(void); #endif /* PG_ENUM_H */ diff --git a/src/include/commands/typecmds.h b/src/include/commands/typecmds.h index 34f6fe328f..8f3fc65536 100644 --- a/src/include/commands/typecmds.h +++ b/src/include/commands/typecmds.h @@ -26,7 +26,7 @@ extern void RemoveTypeById(Oid typeOid); extern ObjectAddress DefineDomain(CreateDomainStmt *stmt); extern ObjectAddress DefineEnum(CreateEnumStmt *stmt); extern ObjectAddress DefineRange(CreateRangeStmt *stmt); -extern ObjectAddress AlterEnum(AlterEnumStmt *stmt); +extern ObjectAddress AlterEnum(AlterEnumStmt *stmt, bool isTopLevel); extern ObjectAddress DefineCompositeType(RangeVar *typevar, List *coldeflist); extern Oid AssignTypeArrayOid(void); diff --git a/src/test/regress/expected/enum.out b/src/test/regress/expected/enum.out index 4f839ce027..a0b81608a1 100644 --- a/src/test/regress/expected/enum.out +++ b/src/test/regress/expected/enum.out @@ -581,60 +581,19 @@ ERROR: enum label "green" already exists -- check transactional behaviour of ALTER TYPE ... ADD VALUE -- CREATE TYPE bogus AS ENUM('good'); --- check that we can add new values to existing enums in a transaction --- but we can't use them +-- check that we can't add new values to existing enums in a transaction BEGIN; -ALTER TYPE bogus ADD VALUE 'new'; -SAVEPOINT x; -SELECT 'new'::bogus; -- unsafe -ERROR: unsafe use of new value "new" of enum type bogus -LINE 1: SELECT 'new'::bogus; - ^ -HINT: New enum values must be committed before they can be used. 
-ROLLBACK TO x; -SELECT enum_first(null::bogus); -- safe - enum_first ------------- - good -(1 row) - -SELECT enum_last(null::bogus); -- unsafe -ERROR: unsafe use of new value "new" of enum type bogus -HINT: New enum values must be committed before they can be used. -ROLLBACK TO x; -SELECT enum_range(null::bogus); -- unsafe -ERROR: unsafe use of new value "new" of enum type bogus -HINT: New enum values must be committed before they can be used. -ROLLBACK TO x; +ALTER TYPE bogus ADD VALUE 'bad'; +ERROR: ALTER TYPE ... ADD cannot run inside a transaction block COMMIT; -SELECT 'new'::bogus; -- now safe - bogus -------- - new -(1 row) - -SELECT enumlabel, enumsortorder -FROM pg_enum -WHERE enumtypid = 'bogus'::regtype -ORDER BY 2; - enumlabel | enumsortorder ------------+--------------- - good | 1 - new | 2 -(2 rows) - -- check that we recognize the case where the enum already existed but was --- modified in the current txn; this should not be considered safe +-- modified in the current txn BEGIN; ALTER TYPE bogus RENAME TO bogon; ALTER TYPE bogon ADD VALUE 'bad'; -SELECT 'bad'::bogon; -ERROR: unsafe use of new value "bad" of enum type bogon -LINE 1: SELECT 'bad'::bogon; - ^ -HINT: New enum values must be committed before they can be used. +ERROR: ALTER TYPE ... ADD cannot run inside a transaction block ROLLBACK; --- but a renamed value is safe to use later in same transaction +-- but ALTER TYPE RENAME VALUE is safe in a transaction BEGIN; ALTER TYPE bogus RENAME VALUE 'good' to 'bad'; SELECT 'bad'::bogus; @@ -645,27 +604,12 @@ SELECT 'bad'::bogus; ROLLBACK; DROP TYPE bogus; --- check that values created during CREATE TYPE can be used in any case -BEGIN; -CREATE TYPE bogus AS ENUM('good','bad','ugly'); -ALTER TYPE bogus RENAME TO bogon; -select enum_range(null::bogon); - enum_range ------------------ - {good,bad,ugly} -(1 row) - -ROLLBACK; --- ideally, we'd allow this usage; but it requires keeping track of whether --- the enum type was created in the current transaction, which is expensive +-- check that we *can* add new values to existing enums in a transaction, +-- if the type is new as well BEGIN; -CREATE TYPE bogus AS ENUM('good'); -ALTER TYPE bogus RENAME TO bogon; -ALTER TYPE bogon ADD VALUE 'bad'; -ALTER TYPE bogon ADD VALUE 'ugly'; -select enum_range(null::bogon); -- fails -ERROR: unsafe use of new value "bad" of enum type bogon -HINT: New enum values must be committed before they can be used. 
+CREATE TYPE bogus AS ENUM(); +ALTER TYPE bogus ADD VALUE 'good'; +ALTER TYPE bogus ADD VALUE 'ugly'; ROLLBACK; -- -- Cleanup diff --git a/src/test/regress/sql/enum.sql b/src/test/regress/sql/enum.sql index 6affd0d1eb..7b68b2fe37 100644 --- a/src/test/regress/sql/enum.sql +++ b/src/test/regress/sql/enum.sql @@ -273,34 +273,19 @@ ALTER TYPE rainbow RENAME VALUE 'blue' TO 'green'; -- CREATE TYPE bogus AS ENUM('good'); --- check that we can add new values to existing enums in a transaction --- but we can't use them +-- check that we can't add new values to existing enums in a transaction BEGIN; -ALTER TYPE bogus ADD VALUE 'new'; -SAVEPOINT x; -SELECT 'new'::bogus; -- unsafe -ROLLBACK TO x; -SELECT enum_first(null::bogus); -- safe -SELECT enum_last(null::bogus); -- unsafe -ROLLBACK TO x; -SELECT enum_range(null::bogus); -- unsafe -ROLLBACK TO x; +ALTER TYPE bogus ADD VALUE 'bad'; COMMIT; -SELECT 'new'::bogus; -- now safe -SELECT enumlabel, enumsortorder -FROM pg_enum -WHERE enumtypid = 'bogus'::regtype -ORDER BY 2; -- check that we recognize the case where the enum already existed but was --- modified in the current txn; this should not be considered safe +-- modified in the current txn BEGIN; ALTER TYPE bogus RENAME TO bogon; ALTER TYPE bogon ADD VALUE 'bad'; -SELECT 'bad'::bogon; ROLLBACK; --- but a renamed value is safe to use later in same transaction +-- but ALTER TYPE RENAME VALUE is safe in a transaction BEGIN; ALTER TYPE bogus RENAME VALUE 'good' to 'bad'; SELECT 'bad'::bogus; @@ -308,21 +293,12 @@ ROLLBACK; DROP TYPE bogus; --- check that values created during CREATE TYPE can be used in any case -BEGIN; -CREATE TYPE bogus AS ENUM('good','bad','ugly'); -ALTER TYPE bogus RENAME TO bogon; -select enum_range(null::bogon); -ROLLBACK; - --- ideally, we'd allow this usage; but it requires keeping track of whether --- the enum type was created in the current transaction, which is expensive +-- check that we *can* add new values to existing enums in a transaction, +-- if the type is new as well BEGIN; -CREATE TYPE bogus AS ENUM('good'); -ALTER TYPE bogus RENAME TO bogon; -ALTER TYPE bogon ADD VALUE 'bad'; -ALTER TYPE bogon ADD VALUE 'ugly'; -select enum_range(null::bogon); -- fails +CREATE TYPE bogus AS ENUM(); +ALTER TYPE bogus ADD VALUE 'good'; +ALTER TYPE bogus ADD VALUE 'ugly'; ROLLBACK; -- From 7769fc000aa3b959d3e1c7d7c3c2555aba7722c3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 27 Sep 2017 17:05:53 -0400 Subject: [PATCH 0285/1087] Fix behavior when converting a float infinity to numeric. float8_numeric() and float4_numeric() failed to consider the possibility that the input is an IEEE infinity. The results depended on the platform-specific behavior of sprintf(): on most platforms you'd get something like ERROR: invalid input syntax for type numeric: "inf" but at least on Windows it's possible for the conversion to succeed and deliver a finite value (typically 1), due to a nonstandard output format from sprintf and lack of syntax error checking in these functions. Since our numeric type lacks the concept of infinity, a suitable conversion is impossible; the best thing to do is throw an explicit error before letting sprintf do its thing. While at it, let's use snprintf not sprintf. Overrunning the buffer should be impossible if sprintf does what it's supposed to, but this is cheap insurance against a stack smash if it doesn't. Problem reported by Taiki Kondo. Patch by me based on fix suggestion from KaiGai Kohei. Back-patch to all supported branches. 
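As a minimal standalone sketch of the guard this patch installs (toy code,
not the PostgreSQL function itself; toy_error() is a hypothetical stand-in
for ereport(), and only the C standard library is assumed):

#include <float.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for ereport(ERROR, ...) -- illustration only. */
static void
toy_error(const char *msg)
{
	fprintf(stderr, "ERROR: %s\n", msg);
	exit(1);
}

/*
 * Format a double the way float8_numeric() needs to: NaN has a numeric
 * counterpart, infinity does not, and sprintf's rendering of infinity is
 * platform-dependent ("inf", "1.#INF", ...), so it must be rejected before
 * formatting rather than parsed afterwards.
 */
static void
float8_to_decimal(double val, char *buf, size_t buflen)
{
	if (isnan(val))
	{
		snprintf(buf, buflen, "NaN");	/* numeric does have a NaN */
		return;
	}
	if (isinf(val))
		toy_error("cannot convert infinity to numeric");

	/* snprintf, not sprintf: cheap insurance against buffer overrun */
	snprintf(buf, buflen, "%.*g", DBL_DIG, val);
}

int
main(void)
{
	char		buf[64];

	float8_to_decimal(0.1 + 0.2, buf, sizeof(buf));
	printf("%s\n", buf);		/* prints 0.3, rounded to DBL_DIG digits */
	float8_to_decimal(INFINITY, buf, sizeof(buf));	/* reports the error */
	return 0;
}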
Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp --- src/backend/utils/adt/numeric.c | 14 ++++++++++++-- src/test/regress/expected/numeric.out | 21 +++++++++++++++++++++ src/test/regress/sql/numeric.sql | 8 ++++++++ 3 files changed, 41 insertions(+), 2 deletions(-) diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index ddc44d5179..48d95e9050 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -3143,7 +3143,12 @@ float8_numeric(PG_FUNCTION_ARGS) if (isnan(val)) PG_RETURN_NUMERIC(make_result(&const_nan)); - sprintf(buf, "%.*g", DBL_DIG, val); + if (isinf(val)) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot convert infinity to numeric"))); + + snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, val); init_var(&result); @@ -3209,7 +3214,12 @@ float4_numeric(PG_FUNCTION_ARGS) if (isnan(val)) PG_RETURN_NUMERIC(make_result(&const_nan)); - sprintf(buf, "%.*g", FLT_DIG, val); + if (isinf(val)) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot convert infinity to numeric"))); + + snprintf(buf, sizeof(buf), "%.*g", FLT_DIG, val); init_var(&result); diff --git a/src/test/regress/expected/numeric.out b/src/test/regress/expected/numeric.out index ae0beb9b68..7e55b0e293 100644 --- a/src/test/regress/expected/numeric.out +++ b/src/test/regress/expected/numeric.out @@ -708,6 +708,27 @@ SELECT * FROM fract_only; (6 rows) DROP TABLE fract_only; +-- Check inf/nan conversion behavior +SELECT 'NaN'::float8::numeric; + numeric +--------- + NaN +(1 row) + +SELECT 'Infinity'::float8::numeric; +ERROR: cannot convert infinity to numeric +SELECT '-Infinity'::float8::numeric; +ERROR: cannot convert infinity to numeric +SELECT 'NaN'::float4::numeric; + numeric +--------- + NaN +(1 row) + +SELECT 'Infinity'::float4::numeric; +ERROR: cannot convert infinity to numeric +SELECT '-Infinity'::float4::numeric; +ERROR: cannot convert infinity to numeric -- Simple check that ceil(), floor(), and round() work correctly CREATE TABLE ceil_floor_round (a numeric); INSERT INTO ceil_floor_round VALUES ('-5.5'); diff --git a/src/test/regress/sql/numeric.sql b/src/test/regress/sql/numeric.sql index b51225c47f..9675b6eabf 100644 --- a/src/test/regress/sql/numeric.sql +++ b/src/test/regress/sql/numeric.sql @@ -655,6 +655,14 @@ INSERT INTO fract_only VALUES (8, '0.00017'); SELECT * FROM fract_only; DROP TABLE fract_only; +-- Check inf/nan conversion behavior +SELECT 'NaN'::float8::numeric; +SELECT 'Infinity'::float8::numeric; +SELECT '-Infinity'::float8::numeric; +SELECT 'NaN'::float4::numeric; +SELECT 'Infinity'::float4::numeric; +SELECT '-Infinity'::float4::numeric; + -- Simple check that ceil(), floor(), and round() work correctly CREATE TABLE ceil_floor_round (a numeric); INSERT INTO ceil_floor_round VALUES ('-5.5'); From 504923a0ed5c75775196c8ed0cd59b15d55cd39b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 10 Aug 2017 23:33:47 -0400 Subject: [PATCH 0286/1087] Run only top-level recursive lcov This is the way lcov was intended to be used. It is much faster and more robust and makes the makefiles simpler than running it in each subdirectory. The previous coding ran gcov before lcov, but that is useless because lcov/geninfo call gcov internally and use that information. Moreover, this led to complications and failures during parallel make. 
This separates the two targets: You either use "make coverage" to get textual output from gcov or "make coverage-html" to get an HTML report via lcov. (Using both is still problematic because they write the same output files.) Reviewed-by: Michael Paquier --- .gitignore | 1 + doc/src/sgml/regress.sgml | 13 +++++++++++++ src/Makefile.global.in | 28 ++++++++++++++++------------ 3 files changed, 30 insertions(+), 12 deletions(-) diff --git a/.gitignore b/.gitignore index 4976fd9119..94e2c582f5 100644 --- a/.gitignore +++ b/.gitignore @@ -23,6 +23,7 @@ objfiles.txt *.gcov.out lcov.info coverage/ +coverage-html-stamp *.vcproj *.vcxproj win32ver.rc diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml index 7c2b1029c2..14747e5f3b 100644 --- a/doc/src/sgml/regress.sgml +++ b/doc/src/sgml/regress.sgml @@ -706,6 +706,19 @@ make coverage-html The make commands also work in subdirectories. + + If you don't have lcov or prefer text output over an + HTML report, you can also run + +make coverage + + instead of make coverage-html, which will + produce .gcov output files for each source file + relevant to the test. (make coverage and make + coverage-html will overwrite each other's files, so mixing them + might be confusing.) + + To reset the execution counts between test runs, run: diff --git a/src/Makefile.global.in b/src/Makefile.global.in index fae8068150..f352ba20e2 100644 --- a/src/Makefile.global.in +++ b/src/Makefile.global.in @@ -874,25 +874,29 @@ endif # enable_nls ifeq ($(enable_coverage), yes) -# There is a strange interaction between lcov and existing .gcov -# output files. Hence the rm command and the ordering dependency. +# make coverage -- text output -gcda_files := $(wildcard *.gcda) +local_gcda_files = $(wildcard *.gcda) -lcov.info: $(gcda_files) - rm -f *.gcov .*.gcov - $(if $^,$(LCOV) -d . -c -o $@ $(LCOVFLAGS) --gcov-tool $(GCOV)) +coverage: $(local_gcda_files:.gcda=.c.gcov) -%.c.gcov: %.gcda | lcov.info +%.c.gcov: %.gcda $(GCOV) -b -f -p -o . $(GCOVFLAGS) $*.c >$*.c.gcov.out -coverage: $(gcda_files:.gcda=.c.gcov) lcov.info +# make coverage-html -- HTML output via lcov .PHONY: coverage-html -coverage-html: coverage +coverage-html: coverage-html-stamp + +coverage-html-stamp: lcov.info rm -rf coverage - mkdir coverage - $(GENHTML) --show-details --legend --output-directory=coverage --title=PostgreSQL --num-spaces=4 --prefix=$(abs_top_srcdir) `find . -name lcov.info -print` + $(GENHTML) --show-details --legend --output-directory=coverage --title=PostgreSQL --num-spaces=4 --prefix=$(abs_top_srcdir) $< + touch $@ + +all_gcda_files = $(shell find . -name '*.gcda' -print) + +lcov.info: $(all_gcda_files) + $(LCOV) -d . -c -o $@ $(LCOVFLAGS) --gcov-tool $(GCOV) # hook for clean-up @@ -900,7 +904,7 @@ clean distclean maintainer-clean: clean-coverage .PHONY: clean-coverage clean-coverage: - rm -rf coverage + rm -rf coverage coverage-html-stamp rm -f *.gcda *.gcno lcov.info *.gcov .*.gcov *.gcov.out From 66fd86a6a3d2ac9772f977ec43af190ea3fe6ddb Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 10 Aug 2017 23:33:47 -0400 Subject: [PATCH 0287/1087] Have lcov exclude external files Call lcov with --no-external option to exclude external files (for example, system headers with inline functions) from output. 
Reviewed-by: Michael Paquier
---
 src/Makefile.global.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index f352ba20e2..2b22f0de29 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -896,7 +896,7 @@ coverage-html-stamp: lcov.info
 all_gcda_files = $(shell find . -name '*.gcda' -print)

 lcov.info: $(all_gcda_files)
-	$(LCOV) -d . -c -o $@ $(LCOVFLAGS) --gcov-tool $(GCOV)
+	$(LCOV) -d . -c -o $@ $(LCOVFLAGS) --gcov-tool $(GCOV) --no-external

 # hook for clean-up

From 20b655224249e6d2daf7ef0595995228baddb381 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera
Date: Thu, 28 Sep 2017 16:44:01 +0200
Subject: [PATCH 0288/1087] Fix freezing of a dead HOT-updated tuple
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Vacuum calls page-level HOT prune to remove dead HOT tuples before doing
liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples.  But
concurrent transaction commit/abort may turn DEAD some of the HOT tuples
that survived the prune, before HeapTupleSatisfiesVacuum tests them.
This happens to activate the code that decides to freeze the tuple
... which resuscitates it, duplicating data.  (This is especially bad if
there are any unique constraints, because those are now internally
violated due to the duplicate entries, though you won't know until you
try to REINDEX or dump/restore the table.)

One possible fix would be to simply skip doing anything to the tuple,
and hope that the next HOT prune would remove it.  But there is a
problem: if the tuple is older than the freeze horizon, this would leave
an unfrozen XID behind, and if no HOT prune happens to clean it up before
the containing pg_clog segment is truncated away, it'd later cause an
error when the XID is looked up.

Fix the problem by having the tuple freezing routines cope with the
situation: don't freeze the tuple (and keep it dead).  In cases where
the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag
so that there is no need to look up the XID in pg_clog later on.

An isolation test is included, authored by Michael Paquier, loosely
based on Daniel Wood's original reproducer.  It only tests one
particular scenario, though, not all the possible ways for this problem
to surface; it'd be good to have a more reliable way to test this more
fully, but it'd require more work.
In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql
I outlined another test case (more closely matching Dan Wood's) that
exposed a few more ways for the problem to occur.

Backpatch all the way back to 9.3, where this problem was introduced by
multixact juggling.  In branches 9.3 and 9.4, this includes a backpatch
of commit e5ff9fefcd50 (of 9.5 era), since the original is not
correctable without matching the coding pattern in 9.5 and up.
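A compressed, self-contained schematic of the new decision rule described
above. This is a toy sketch, not the heapam.c code in the diff below:
ToyTransactionId, toy_did_commit(), and the plain XID comparison are
hypothetical stand-ins for the real transaction-status machinery
(TransactionIdDidCommit(), TransactionIdPrecedes(), and friends):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t ToyTransactionId;

/*
 * Hypothetical stand-in for a pg_clog lookup; here, even XIDs "committed"
 * and odd XIDs "aborted".  Illustration only.
 */
static bool
toy_did_commit(ToyTransactionId xid)
{
	return (xid % 2) == 0;
}

typedef enum
{
	KEEP_AS_IS,					/* updater not old enough to freeze yet */
	INVALIDATE_XMAX,			/* updater aborted: clear xmax, tuple lives */
	MARK_COMMITTED				/* updater committed: remember that, keep
								 * the tuple dead, and never consult
								 * pg_clog for this XID again */
} ToyFreezeAction;

/*
 * The old rule assumed an updater XID older than the cutoff must have
 * aborted ("otherwise pruning would have removed the tuple") and always
 * cleared xmax -- resuscitating the tuple when the updater had in fact
 * committed between the prune and the visibility check.  The new rule
 * checks commit status instead.  (Real code compares XIDs with
 * TransactionIdPrecedes(), which handles wraparound; plain >= does not.)
 */
static ToyFreezeAction
freeze_old_updater(ToyTransactionId xid, ToyTransactionId cutoff_xid)
{
	if (xid >= cutoff_xid)
		return KEEP_AS_IS;
	return toy_did_commit(xid) ? MARK_COMMITTED : INVALIDATE_XMAX;
}

int
main(void)
{
	/* committed updater older than cutoff: keep the tuple dead */
	printf("%d\n", freeze_old_updater(90, 100));	/* 2 = MARK_COMMITTED */
	/* aborted updater older than cutoff: safe to clear xmax */
	printf("%d\n", freeze_old_updater(91, 100));	/* 1 = INVALIDATE_XMAX */
	return 0;
}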
Reported-by: Daniel Wood Diagnosed-by: Daniel Wood Reviewed-by: Yi Wen Wong, Michaël Paquier Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com --- src/backend/access/heap/heapam.c | 57 +++++++--- src/backend/commands/vacuumlazy.c | 20 ++-- .../isolation/expected/freeze-the-dead.out | 101 ++++++++++++++++++ src/test/isolation/isolation_schedule | 1 + src/test/isolation/specs/freeze-the-dead.spec | 27 +++++ 5 files changed, 179 insertions(+), 27 deletions(-) create mode 100644 src/test/isolation/expected/freeze-the-dead.out create mode 100644 src/test/isolation/specs/freeze-the-dead.spec diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index d03f544d26..c435482cd2 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -6405,14 +6405,23 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, Assert(TransactionIdIsValid(xid)); /* - * If the xid is older than the cutoff, it has to have aborted, - * otherwise the tuple would have gotten pruned away. + * The updating transaction cannot possibly be still running, but + * verify whether it has committed, and request to set the + * COMMITTED flag if so. (We normally don't see any tuples in + * this state, because they are removed by page pruning before we + * try to freeze the page; but this can happen if the updating + * transaction commits after the page is pruned but before + * HeapTupleSatisfiesVacuum). */ if (TransactionIdPrecedes(xid, cutoff_xid)) { - Assert(!TransactionIdDidCommit(xid)); - *flags |= FRM_INVALIDATE_XMAX; - xid = InvalidTransactionId; /* not strictly necessary */ + if (TransactionIdDidCommit(xid)) + *flags = FRM_MARK_COMMITTED | FRM_RETURN_IS_XID; + else + { + *flags |= FRM_INVALIDATE_XMAX; + xid = InvalidTransactionId; /* not strictly necessary */ + } } else { @@ -6485,13 +6494,16 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, /* * It's an update; should we keep it? If the transaction is known * aborted or crashed then it's okay to ignore it, otherwise not. - * Note that an updater older than cutoff_xid cannot possibly be - * committed, because HeapTupleSatisfiesVacuum would have returned - * HEAPTUPLE_DEAD and we would not be trying to freeze the tuple. * * As with all tuple visibility routines, it's critical to test * TransactionIdIsInProgress before TransactionIdDidCommit, * because of race conditions explained in detail in tqual.c. + * + * We normally don't see committed updating transactions earlier + * than the cutoff xid, because they are removed by page pruning + * before we try to freeze the page; but it can happen if the + * updating transaction commits after the page is pruned but + * before HeapTupleSatisfiesVacuum. */ if (TransactionIdIsCurrentTransactionId(xid) || TransactionIdIsInProgress(xid)) @@ -6516,13 +6528,6 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, * we can ignore it. */ - /* - * Since the tuple wasn't marked HEAPTUPLE_DEAD by vacuum, the - * update Xid cannot possibly be older than the xid cutoff. 
- */ - Assert(!TransactionIdIsValid(update_xid) || - !TransactionIdPrecedes(update_xid, cutoff_xid)); - /* * If we determined that it's an Xid corresponding to an update * that must be retained, additionally add it to the list of @@ -6601,7 +6606,10 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, * * It is assumed that the caller has checked the tuple with * HeapTupleSatisfiesVacuum() and determined that it is not HEAPTUPLE_DEAD - * (else we should be removing the tuple, not freezing it). + * (else we should be removing the tuple, not freezing it). However, note + * that we don't remove HOT tuples even if they are dead, and it'd be incorrect + * to freeze them (because that would make them visible), so we mark them as + * update-committed, and needing further freezing later on. * * NB: cutoff_xid *must* be <= the current global xmin, to ensure that any * XID older than it could neither be running nor seen as running by any @@ -6712,7 +6720,22 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid, else if (TransactionIdIsNormal(xid)) { if (TransactionIdPrecedes(xid, cutoff_xid)) - freeze_xmax = true; + { + /* + * Must freeze regular XIDs older than the cutoff. We must not + * freeze a HOT-updated tuple, though; doing so would bring it + * back to life. + */ + if (HeapTupleHeaderIsHotUpdated(tuple)) + { + frz->t_infomask |= HEAP_XMAX_COMMITTED; + totally_frozen = false; + changed = true; + /* must not freeze */ + } + else + freeze_xmax = true; + } else totally_frozen = false; } diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c index 45b1859475..30b1c08c6c 100644 --- a/src/backend/commands/vacuumlazy.c +++ b/src/backend/commands/vacuumlazy.c @@ -2018,17 +2018,17 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats, ItemPointer itemptr) { /* - * The array shouldn't overflow under normal behavior, but perhaps it - * could if we are given a really small maintenance_work_mem. In that - * case, just forget the last few tuples (we'll get 'em next time). + * The array must never overflow, since we rely on all deletable tuples + * being removed; inability to remove a tuple might cause an old XID to + * persist beyond the freeze limit, which could be disastrous later on. 
*/ - if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples) - { - vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr; - vacrelstats->num_dead_tuples++; - pgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES, - vacrelstats->num_dead_tuples); - } + if (vacrelstats->num_dead_tuples >= vacrelstats->max_dead_tuples) + elog(ERROR, "dead tuple array overflow"); + + vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr; + vacrelstats->num_dead_tuples++; + pgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES, + vacrelstats->num_dead_tuples); } /* diff --git a/src/test/isolation/expected/freeze-the-dead.out b/src/test/isolation/expected/freeze-the-dead.out new file mode 100644 index 0000000000..dd045613f9 --- /dev/null +++ b/src/test/isolation/expected/freeze-the-dead.out @@ -0,0 +1,101 @@ +Parsed test spec with 2 sessions + +starting permutation: s1_update s1_commit s1_vacuum s2_key_share s2_commit +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s1_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s2_commit: COMMIT; + +starting permutation: s1_update s1_commit s2_key_share s1_vacuum s2_commit +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s1_commit: COMMIT; +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s1_vacuum: VACUUM FREEZE tab_freeze; +step s2_commit: COMMIT; + +starting permutation: s1_update s1_commit s2_key_share s2_commit s1_vacuum +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s1_commit: COMMIT; +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s2_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; + +starting permutation: s1_update s2_key_share s1_commit s1_vacuum s2_commit +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s1_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; +step s2_commit: COMMIT; + +starting permutation: s1_update s2_key_share s1_commit s2_commit s1_vacuum +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s1_commit: COMMIT; +step s2_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; + +starting permutation: s1_update s2_key_share s2_commit s1_commit s1_vacuum +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s2_commit: COMMIT; +step s1_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; + +starting permutation: s2_key_share s1_update s1_commit s1_vacuum s2_commit +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s1_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; +step s2_commit: COMMIT; + +starting permutation: s2_key_share s1_update s1_commit s2_commit s1_vacuum +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s1_commit: COMMIT; +step s2_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; + +starting permutation: s2_key_share s1_update s2_commit s1_commit s1_vacuum +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR 
KEY SHARE; +id + +3 +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s2_commit: COMMIT; +step s1_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; + +starting permutation: s2_key_share s2_commit s1_update s1_commit s1_vacuum +step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +id + +3 +step s2_commit: COMMIT; +step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; +step s1_commit: COMMIT; +step s1_vacuum: VACUUM FREEZE tab_freeze; diff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule index 32c965b2a0..7dad3c2316 100644 --- a/src/test/isolation/isolation_schedule +++ b/src/test/isolation/isolation_schedule @@ -44,6 +44,7 @@ test: update-locked-tuple test: propagate-lock-delete test: tuplelock-conflict test: tuplelock-update +test: freeze-the-dead test: nowait test: nowait-2 test: nowait-3 diff --git a/src/test/isolation/specs/freeze-the-dead.spec b/src/test/isolation/specs/freeze-the-dead.spec new file mode 100644 index 0000000000..3cd9965b2f --- /dev/null +++ b/src/test/isolation/specs/freeze-the-dead.spec @@ -0,0 +1,27 @@ +# Test for interactions of tuple freezing with dead, as well as recently-dead +# tuples using multixacts via FOR KEY SHARE. +setup +{ + CREATE TABLE tab_freeze ( + id int PRIMARY KEY, + name char(3), + x int); + INSERT INTO tab_freeze VALUES (1, '111', 0); + INSERT INTO tab_freeze VALUES (3, '333', 0); +} + +teardown +{ + DROP TABLE tab_freeze; +} + +session "s1" +setup { BEGIN; } +step "s1_update" { UPDATE tab_freeze SET x = x + 1 WHERE id = 3; } +step "s1_commit" { COMMIT; } +step "s1_vacuum" { VACUUM FREEZE tab_freeze; } + +session "s2" +setup { BEGIN; } +step "s2_key_share" { SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; } +step "s2_commit" { COMMIT; } From 22d9764646d03ac7d3419c4fd0effd256568c922 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 28 Sep 2017 16:17:28 -0400 Subject: [PATCH 0289/1087] Remove SGML marked sections For XML compatibility, replace marked sections with comments . In some cases it seemed better to remove the ignored text altogether, and in one case the text should not have been ignored. --- doc/src/sgml/Makefile | 2 +- doc/src/sgml/ecpg.sgml | 7 +-- doc/src/sgml/func.sgml | 9 --- doc/src/sgml/release-old.sgml | 100 ---------------------------------- doc/src/sgml/rules.sgml | 4 +- 5 files changed, 5 insertions(+), 117 deletions(-) diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile index 128d827c1a..164c00bb63 100644 --- a/doc/src/sgml/Makefile +++ b/doc/src/sgml/Makefile @@ -71,7 +71,7 @@ override SPFLAGS += -wall -wno-unused-param -wno-empty -wfully-tagged # to detect whether we are using OpenSP rather than the ancient # original SP. 
ifneq (,$(filter o%,$(notdir $(OSX)))) -override SPFLAGS += -wdata-delim +override SPFLAGS += -wdata-delim -winstance-ignore-ms -winstance-include-ms -winstance-param-entity endif diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index 3cb4001cc0..c88b0c2fb3 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -5275,8 +5275,6 @@ while (1) - -216 (ECPG_ARRAY_INSERT) @@ -5286,7 +5284,6 @@ while (1) -]]> -220 (ECPG_NO_CONN) @@ -5441,8 +5438,8 @@ while (1) - + -602 (ECPG_WARNING_UNKNOWN_PORTAL) diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 2f036015cc..1839bddceb 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -8663,15 +8663,6 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple convert path to closed pclose(path '[(0,0),(1,1),(2,0)]')
- - - point(lseg, lseg) - point - intersection - point(lseg '((-1,0),(1,0))',lseg '((-2,-2),(2,2))') - -]]> popen(path) path diff --git a/doc/src/sgml/release-old.sgml b/doc/src/sgml/release-old.sgml index d4de6b1357..24a7233378 100644 --- a/doc/src/sgml/release-old.sgml +++ b/doc/src/sgml/release-old.sgml @@ -6555,103 +6555,3 @@ The following bugs have been fixed in postgres95-beta-0.02: Initial release. - - - Timing Results - - - These timing results are from running the regression test with the commands - - -% cd src/test/regress -% make all -% time make runtest - - - - Timing under Linux 2.0.27 seems to have a roughly 5% variation from run - to run, presumably due to the scheduling vagaries of multitasking systems. - - - - Version 6.5 - - - As has been the case for previous releases, timing between - releases is not directly comparable since new regression tests - have been added. In general, 6.5 is faster than previous - releases. - - - - Timing with fsync() disabled: - - -Time System -02:00 Dual Pentium Pro 180, 224MB, UW-SCSI, Linux 2.0.36, gcc 2.7.2.3 -O2 -m486 -04:38 Sparc Ultra 1 143MHz, 64MB, Solaris 2.6 - - - - - Timing with fsync() enabled: - - -Time System -04:21 Dual Pentium Pro 180, 224MB, UW-SCSI, Linux 2.0.36, gcc 2.7.2.3 -O2 -m486 - - - For the Linux system above, using UW-SCSI disks rather than (older) IDE - disks leads to a 50% improvement in speed on the regression test. - - - - -Version 6.4beta - - -The times for this release are not directly comparable to those for previous releases -since some additional regression tests have been included. -In general, however, 6.4 should be slightly faster than the previous release (thanks, Bruce!). - - - -Time System -02:26 Dual Pentium Pro 180, 96MB, UW-SCSI, Linux 2.0.30, gcc 2.7.2.1 -O2 -m486 - - - - - -Version 6.3 - - -The times for this release are not directly comparable to those for previous releases -since some additional regression tests have been included and some obsolete tests involving -time travel have been removed. -In general, however, 6.3 is substantially faster than previous releases (thanks, Bruce!). - - - - Time System - 02:30 Dual Pentium Pro 180, 96MB, UW-SCSI, Linux 2.0.30, gcc 2.7.2.1 -O2 -m486 - 04:12 Dual Pentium Pro 180, 96MB, EIDE, Linux 2.0.30, gcc 2.7.2.1 -O2 -m486 - - - - - -Version 6.1 - - - - Time System - 06:12 Pentium Pro 180, 32MB, EIDE, Linux 2.0.30, gcc 2.7.2 -O2 -m486 - 12:06 P-100, 48MB, Linux 2.0.29, gcc - 39:58 Sparc IPC 32MB, Solaris 2.5, gcc 2.7.2.1 -O -g - - - - -]]> diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index 61423c25ef..61c801a693 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -2434,8 +2434,8 @@ Nestloop in a command. - + The summary is, rules will only be significantly slower than From 4bb5a2536bcff5dfef9242818979faaa0659b1af Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 10 Aug 2017 23:33:47 -0400 Subject: [PATCH 0290/1087] Add lcov --initial By just running lcov on the produced .gcda data files, we don't account for source files that are not touched by tests at all. To fix that, run lcov --initial to create a base line info file with all zero counters, and merge that with the actual counters when creating the final report. 
Reviewed-by: Michael Paquier
---
 .gitignore             |  2 +-
 src/Makefile.global.in | 25 +++++++++++++++++++------
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/.gitignore b/.gitignore
index 94e2c582f5..a59e3da3be 100644
--- a/.gitignore
+++ b/.gitignore
@@ -21,7 +21,7 @@ objfiles.txt
 *.gcda
 *.gcov
 *.gcov.out
-lcov.info
+lcov*.info
 coverage/
 coverage-html-stamp
 *.vcproj
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 2b22f0de29..c0a88c9152 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -869,8 +869,13 @@ endif # enable_nls
 # gcov from foo.gcda (by "make coverage")
 # foo.c.gcov.out stdout captured when foo.c.gcov is created, mildly
 # interesting
-# lcov.info lcov tracefile, built from gcda files in one directory,
+# lcov_test.info
+# lcov tracefile, built from gcda files in one directory,
 # later collected by "make coverage-html"
+# lcov_base.info
+# tracefile for zero counters for every file, so that
+# even files that are not touched by tests are counted
+# for the overall coverage rate

 ifeq ($(enable_coverage), yes)

@@ -888,15 +893,23 @@ coverage: $(local_gcda_files:.gcda=.c.gcov)
 .PHONY: coverage-html
 coverage-html: coverage-html-stamp

-coverage-html-stamp: lcov.info
+coverage-html-stamp: lcov_base.info lcov_test.info
 	rm -rf coverage
-	$(GENHTML) --show-details --legend --output-directory=coverage --title=PostgreSQL --num-spaces=4 --prefix=$(abs_top_srcdir) $<
+	$(GENHTML) --show-details --legend --output-directory=coverage --title=PostgreSQL --num-spaces=4 --prefix=$(abs_top_srcdir) $^
 	touch $@

+LCOV += --gcov-tool $(GCOV)
+LCOVFLAGS = --no-external
+
+all_gcno_files = $(shell find . -name '*.gcno' -print)
+
+lcov_base.info: $(all_gcno_files)
+	$(LCOV) $(LCOVFLAGS) -c -i -d . -o $@
+
 all_gcda_files = $(shell find . -name '*.gcda' -print)

-lcov.info: $(all_gcda_files)
-	$(LCOV) -d . -c -o $@ $(LCOVFLAGS) --gcov-tool $(GCOV) --no-external
+lcov_test.info: $(all_gcda_files)
+	$(LCOV) $(LCOVFLAGS) -c -d . -o $@

 # hook for clean-up

@@ -905,7 +918,7 @@ clean distclean maintainer-clean: clean-coverage
 .PHONY: clean-coverage
 clean-coverage:
 	rm -rf coverage coverage-html-stamp
-	rm -f *.gcda *.gcno lcov.info *.gcov .*.gcov *.gcov.out
+	rm -f *.gcda *.gcno lcov*.info *.gcov .*.gcov *.gcov.out

 # User-callable target to reset counts between test runs

From d2773f9bcd980cf6ed720928cd0700196608ef19 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 10 Aug 2017 23:33:47 -0400
Subject: [PATCH 0291/1087] Add PostgreSQL version to coverage output

Also make overriding the title easier.  That helps tell where the
report came from and label different variants of a report.
Reviewed-by: Michael Paquier --- src/Makefile.global.in | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/src/Makefile.global.in b/src/Makefile.global.in index c0a88c9152..1a0faf9023 100644 --- a/src/Makefile.global.in +++ b/src/Makefile.global.in @@ -893,9 +893,12 @@ coverage: $(local_gcda_files:.gcda=.c.gcov) .PHONY: coverage-html coverage-html: coverage-html-stamp +GENHTML_FLAGS = --show-details --legend +GENHTML_TITLE = PostgreSQL $(VERSION) + coverage-html-stamp: lcov_base.info lcov_test.info rm -rf coverage - $(GENHTML) --show-details --legend --output-directory=coverage --title=PostgreSQL --num-spaces=4 --prefix=$(abs_top_srcdir) $^ + $(GENHTML) $(GENHTML_FLAGS) -o coverage --title='$(GENHTML_TITLE)' --num-spaces=4 --prefix='$(abs_top_srcdir)' $^ touch $@ LCOV += --gcov-tool $(GCOV) From 8b304b8b72b0a60f1968d39f01cf817c8df863ec Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 29 Sep 2017 10:20:44 -0400 Subject: [PATCH 0292/1087] Remove replacement selection sort. At the time replacement_sort_tuples was introduced, there were still cases where replacement selection sort noticeably outperformed using quicksort even for the first run. However, those cases seem to have evaporated as a result of further improvements made since that time (and perhaps also advances in CPU technology). So remove replacement selection and the controlling GUC entirely. This makes tuplesort.c noticeably simpler and probably paves the way for further optimizations someone might want to do later. Peter Geoghegan, with review and testing by Tomas Vondra and me. Discussion: https://postgr.es/m/CAH2-WzmmNjG_K0R9nqYwMq3zjyJJK+hCbiZYNGhAy-Zyjs64GQ@mail.gmail.com --- doc/src/sgml/config.sgml | 39 -- doc/src/sgml/release-9.6.sgml | 2 +- src/backend/utils/init/globals.c | 1 - src/backend/utils/misc/guc.c | 10 - src/backend/utils/misc/postgresql.conf.sample | 1 - src/backend/utils/sort/tuplesort.c | 415 +++--------------- src/include/miscadmin.h | 1 - src/test/regress/expected/cluster.out | 17 +- src/test/regress/sql/cluster.sql | 14 +- 9 files changed, 52 insertions(+), 448 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 4b265d9e40..c13f60230f 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -1505,45 +1505,6 @@ include_dir 'conf.d' - - replacement_sort_tuples (integer) - - replacement_sort_tuples configuration parameter - - - - - When the number of tuples to be sorted is smaller than this number, - a sort will produce its first output run using replacement selection - rather than quicksort. This may be useful in memory-constrained - environments where tuples that are input into larger sort operations - have a strong physical-to-logical correlation. Note that this does - not include input tuples with an inverse - correlation. It is possible for the replacement selection algorithm - to generate one long run that requires no merging, where use of the - default strategy would result in many runs that must be merged - to produce a final sorted output. This may allow sort - operations to complete sooner. - - - The default is 150,000 tuples. Note that higher values are typically - not much more effective, and may be counter-productive, since the - priority queue is sensitive to the size of available CPU cache, whereas - the default strategy sorts runs using a cache - oblivious algorithm. This property allows the default sort - strategy to automatically and transparently make effective use - of available CPU cache. 
- - - Setting maintenance_work_mem to its default - value usually prevents utility command external sorts (e.g., - sorts used by CREATE INDEX to build B-Tree - indexes) from ever using replacement selection sort, unless the - input tuples are quite wide. - - - - autovacuum_work_mem (integer) diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index dc811c4ca5..09b6b90254 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -5140,7 +5140,7 @@ and many others in the same vein The new approach makes better use of the CPU cache for typical cache sizes and data volumes. Where necessary, the behavior can be adjusted via the new configuration parameter - . + replacement_sort_tuples. diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c index 7c09498dc0..9680a4b0f7 100644 --- a/src/backend/utils/init/globals.c +++ b/src/backend/utils/init/globals.c @@ -112,7 +112,6 @@ bool enableFsync = true; bool allowSystemTableMods = false; int work_mem = 1024; int maintenance_work_mem = 16384; -int replacement_sort_tuples = 150000; /* * Primary determinants of sizes of shared-memory structures. diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 47a5f25707..8292df00bb 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -1933,16 +1933,6 @@ static struct config_int ConfigureNamesInt[] = NULL, NULL, NULL }, - { - {"replacement_sort_tuples", PGC_USERSET, RESOURCES_MEM, - gettext_noop("Sets the maximum number of tuples to be sorted using replacement selection."), - gettext_noop("When more tuples than this are present, quicksort will be used.") - }, - &replacement_sort_tuples, - 150000, 0, INT_MAX, - NULL, NULL, NULL - }, - /* * We use the hopefully-safely-small value of 100kB as the compiled-in * default for max_stack_depth. InitializeGUCOptions will increase it if diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample index 8ba6b1d08a..cf4ddcd94a 100644 --- a/src/backend/utils/misc/postgresql.conf.sample +++ b/src/backend/utils/misc/postgresql.conf.sample @@ -121,7 +121,6 @@ # you actively intend to use prepared transactions. #work_mem = 4MB # min 64kB #maintenance_work_mem = 64MB # min 1MB -#replacement_sort_tuples = 150000 # limits use of replacement selection sort #autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem #max_stack_depth = 2MB # min 100kB #dynamic_shared_memory_type = posix # the default is the first option diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c index 17e1b6860b..60522cb442 100644 --- a/src/backend/utils/sort/tuplesort.c +++ b/src/backend/utils/sort/tuplesort.c @@ -13,47 +13,11 @@ * See Knuth, volume 3, for more than you want to know about the external * sorting algorithm. Historically, we divided the input into sorted runs * using replacement selection, in the form of a priority tree implemented - * as a heap (essentially his Algorithm 5.2.3H), but now we only do that - * for the first run, and only if the run would otherwise end up being very - * short. We merge the runs using polyphase merge, Knuth's Algorithm - * 5.4.2D. The logical "tapes" used by Algorithm D are implemented by - * logtape.c, which avoids space wastage by recycling disk space as soon - * as each block is read from its "tape". 
- * - * We do not use Knuth's recommended data structure (Algorithm 5.4.1R) for - * the replacement selection, because it uses a fixed number of records - * in memory at all times. Since we are dealing with tuples that may vary - * considerably in size, we want to be able to vary the number of records - * kept in memory to ensure full utilization of the allowed sort memory - * space. So, we keep the tuples in a variable-size heap, with the next - * record to go out at the top of the heap. Like Algorithm 5.4.1R, each - * record is stored with the run number that it must go into, and we use - * (run number, key) as the ordering key for the heap. When the run number - * at the top of the heap changes, we know that no more records of the prior - * run are left in the heap. Note that there are in practice only ever two - * distinct run numbers, because since PostgreSQL 9.6, we only use - * replacement selection to form the first run. - * - * In PostgreSQL 9.6, a heap (based on Knuth's Algorithm H, with some small - * customizations) is only used with the aim of producing just one run, - * thereby avoiding all merging. Only the first run can use replacement - * selection, which is why there are now only two possible valid run - * numbers, and why heapification is customized to not distinguish between - * tuples in the second run (those will be quicksorted). We generally - * prefer a simple hybrid sort-merge strategy, where runs are sorted in much - * the same way as the entire input of an internal sort is sorted (using - * qsort()). The replacement_sort_tuples GUC controls the limited remaining - * use of replacement selection for the first run. - * - * There are several reasons to favor a hybrid sort-merge strategy. - * Maintaining a priority tree/heap has poor CPU cache characteristics. - * Furthermore, the growth in main memory sizes has greatly diminished the - * value of having runs that are larger than available memory, even in the - * case where there is partially sorted input and runs can be made far - * larger by using a heap. In most cases, a single-pass merge step is all - * that is required even when runs are no larger than available memory. - * Avoiding multiple merge passes was traditionally considered to be the - * major advantage of using replacement selection. + * as a heap (essentially his Algorithm 5.2.3H), but now we always use + * quicksort for run generation. We merge the runs using polyphase merge, + * Knuth's Algorithm 5.4.2D. The logical "tapes" used by Algorithm D are + * implemented by logtape.c, which avoids space wastage by recycling disk + * space as soon as each block is read from its "tape". * * The approximate amount of memory allowed for any one sort operation * is specified in kilobytes by the caller (most pass work_mem). Initially, @@ -64,9 +28,8 @@ * workMem, we begin to emit tuples into sorted runs in temporary tapes. * When tuples are dumped in batch after quicksorting, we begin a new run * with a new output tape (selected per Algorithm D). After the end of the - * input is reached, we dump out remaining tuples in memory into a final run - * (or two, when replacement selection is still used), then merge the runs - * using Algorithm D. + * input is reached, we dump out remaining tuples in memory into a final run, + * then merge the runs using Algorithm D. 
* * When merging runs, we use a heap containing just the frontmost tuple from * each source run; we repeatedly output the smallest tuple and replace it @@ -188,13 +151,8 @@ bool optimize_bounded_sort = true; * described above. Accordingly, "tuple" is always used in preference to * datum1 as the authoritative value for pass-by-reference cases. * - * While building initial runs, tupindex holds the tuple's run number. - * Historically, the run number could meaningfully distinguish many runs, but - * it now only distinguishes RUN_FIRST and HEAP_RUN_NEXT, since replacement - * selection is always abandoned after the first run; no other run number - * should be represented here. During merge passes, we re-use it to hold the - * input tape number that each tuple in the heap was read from. tupindex goes - * unused if the sort occurs entirely in memory. + * tupindex holds the input tape number that each tuple in the heap was read + * from during merge passes. */ typedef struct { @@ -253,15 +211,6 @@ typedef enum #define TAPE_BUFFER_OVERHEAD BLCKSZ #define MERGE_BUFFER_SIZE (BLCKSZ * 32) - /* - * Run numbers, used during external sort operations. - * - * HEAP_RUN_NEXT is only used for SortTuple.tupindex, never state.currentRun. - */ -#define RUN_FIRST 0 -#define HEAP_RUN_NEXT INT_MAX -#define RUN_SECOND 1 - typedef int (*SortTupleComparator) (const SortTuple *a, const SortTuple *b, Tuplesortstate *state); @@ -381,16 +330,8 @@ struct Tuplesortstate void *lastReturnedTuple; /* - * While building initial runs, this indicates if the replacement - * selection strategy is in use. When it isn't, then a simple hybrid - * sort-merge strategy is in use instead (runs are quicksorted). - */ - bool replaceActive; - - /* - * While building initial runs, this is the current output run number - * (starting at RUN_FIRST). Afterwards, it is the number of initial runs - * we made. + * While building initial runs, this is the current output run number. + * Afterwards, it is the number of initial runs we made. 
*/ int currentRun; @@ -583,7 +524,6 @@ struct Tuplesortstate static Tuplesortstate *tuplesort_begin_common(int workMem, bool randomAccess); static void puttuple_common(Tuplesortstate *state, SortTuple *tuple); static bool consider_abort_common(Tuplesortstate *state); -static bool useselection(Tuplesortstate *state); static void inittapes(Tuplesortstate *state); static void selectnewtape(Tuplesortstate *state); static void init_slab_allocator(Tuplesortstate *state, int numSlots); @@ -592,15 +532,12 @@ static void mergeonerun(Tuplesortstate *state); static void beginmerge(Tuplesortstate *state); static bool mergereadnext(Tuplesortstate *state, int srcTape, SortTuple *stup); static void dumptuples(Tuplesortstate *state, bool alltuples); -static void dumpbatch(Tuplesortstate *state, bool alltuples); static void make_bounded_heap(Tuplesortstate *state); static void sort_bounded_heap(Tuplesortstate *state); static void tuplesort_sort_memtuples(Tuplesortstate *state); -static void tuplesort_heap_insert(Tuplesortstate *state, SortTuple *tuple, - bool checkIndex); -static void tuplesort_heap_replace_top(Tuplesortstate *state, SortTuple *tuple, - bool checkIndex); -static void tuplesort_heap_delete_top(Tuplesortstate *state, bool checkIndex); +static void tuplesort_heap_insert(Tuplesortstate *state, SortTuple *tuple); +static void tuplesort_heap_replace_top(Tuplesortstate *state, SortTuple *tuple); +static void tuplesort_heap_delete_top(Tuplesortstate *state); static void reversedirection(Tuplesortstate *state); static unsigned int getlen(Tuplesortstate *state, int tapenum, bool eofOK); static void markrunend(Tuplesortstate *state, int tapenum); @@ -738,7 +675,7 @@ tuplesort_begin_common(int workMem, bool randomAccess) if (LACKMEM(state)) elog(ERROR, "insufficient memory allowed for sort"); - state->currentRun = RUN_FIRST; + state->currentRun = 0; /* * maxTapes, tapeRange, and Algorithm D variables will be initialized by @@ -1622,7 +1559,7 @@ puttuple_common(Tuplesortstate *state, SortTuple *tuple) inittapes(state); /* - * Dump tuples until we are back under the limit. + * Dump all tuples. */ dumptuples(state, false); break; @@ -1647,74 +1584,20 @@ puttuple_common(Tuplesortstate *state, SortTuple *tuple) { /* discard top of heap, replacing it with the new tuple */ free_sort_tuple(state, &state->memtuples[0]); - tuple->tupindex = 0; /* not used */ - tuplesort_heap_replace_top(state, tuple, false); + tuplesort_heap_replace_top(state, tuple); } break; case TSS_BUILDRUNS: /* - * Insert the tuple into the heap, with run number currentRun if - * it can go into the current run, else HEAP_RUN_NEXT. The tuple - * can go into the current run if it is >= the first - * not-yet-output tuple. (Actually, it could go into the current - * run if it is >= the most recently output tuple ... but that - * would require keeping around the tuple we last output, and it's - * simplest to let writetup free each tuple as soon as it's - * written.) - * - * Note that this only applies when: - * - * - currentRun is RUN_FIRST - * - * - Replacement selection is in use (typically it is never used). - * - * When these two conditions are not both true, all tuples are - * appended indifferently, much like the TSS_INITIAL case. - * - * There should always be room to store the incoming tuple. 
+ * Save the tuple into the unsorted array (there must be + * space) */ - Assert(!state->replaceActive || state->memtupcount > 0); - if (state->replaceActive && - COMPARETUP(state, tuple, &state->memtuples[0]) >= 0) - { - Assert(state->currentRun == RUN_FIRST); - - /* - * Insert tuple into first, fully heapified run. - * - * Unlike classic replacement selection, which this module was - * previously based on, only RUN_FIRST tuples are fully - * heapified. Any second/next run tuples are appended - * indifferently. While HEAP_RUN_NEXT tuples may be sifted - * out of the way of first run tuples, COMPARETUP() will never - * be called for the run's tuples during sifting (only our - * initial COMPARETUP() call is required for the tuple, to - * determine that the tuple does not belong in RUN_FIRST). - */ - tuple->tupindex = state->currentRun; - tuplesort_heap_insert(state, tuple, true); - } - else - { - /* - * Tuple was determined to not belong to heapified RUN_FIRST, - * or replacement selection not in play. Append the tuple to - * memtuples indifferently. - * - * dumptuples() does not trust that the next run's tuples are - * heapified. Anything past the first run will always be - * quicksorted even when replacement selection is initially - * used. (When it's never used, every tuple still takes this - * path.) - */ - tuple->tupindex = HEAP_RUN_NEXT; - state->memtuples[state->memtupcount++] = *tuple; - } + state->memtuples[state->memtupcount++] = *tuple; /* - * If we are over the memory limit, dump tuples till we're under. + * If we are over the memory limit, dump all tuples. */ dumptuples(state, false); break; @@ -2068,7 +1951,7 @@ tuplesort_gettuple_common(Tuplesortstate *state, bool forward, * If no more data, we've reached end of run on this tape. * Remove the top node from the heap. */ - tuplesort_heap_delete_top(state, false); + tuplesort_heap_delete_top(state); /* * Rewind to free the read buffer. It'd go away at the @@ -2079,7 +1962,7 @@ tuplesort_gettuple_common(Tuplesortstate *state, bool forward, return true; } newtup.tupindex = srcTape; - tuplesort_heap_replace_top(state, &newtup, false); + tuplesort_heap_replace_top(state, &newtup); return true; } return false; @@ -2336,28 +2219,6 @@ tuplesort_merge_order(int64 allowedMem) return mOrder; } -/* - * useselection - determine algorithm to use to sort first run. - * - * It can sometimes be useful to use the replacement selection algorithm if it - * results in one large run, and there is little available workMem. See - * remarks on RUN_SECOND optimization within dumptuples(). - */ -static bool -useselection(Tuplesortstate *state) -{ - /* - * memtupsize might be noticeably higher than memtupcount here in atypical - * cases. It seems slightly preferable to not allow recent outliers to - * impact this determination. Note that caller's trace_sort output - * reports memtupcount instead. - */ - if (state->memtupsize <= replacement_sort_tuples) - return true; - - return false; -} - /* * inittapes - initialize for tape sorting. * @@ -2413,44 +2274,7 @@ inittapes(Tuplesortstate *state) state->tp_dummy = (int *) palloc0(maxTapes * sizeof(int)); state->tp_tapenum = (int *) palloc0(maxTapes * sizeof(int)); - /* - * Give replacement selection a try based on user setting. There will be - * a switch to a simple hybrid sort-merge strategy after the first run - * (iff we could not output one long run). - */ - state->replaceActive = useselection(state); - - if (state->replaceActive) - { - /* - * Convert the unsorted contents of memtuples[] into a heap. 
Each - * tuple is marked as belonging to run number zero. - * - * NOTE: we pass false for checkIndex since there's no point in - * comparing indexes in this step, even though we do intend the - * indexes to be part of the sort key... - */ - int ntuples = state->memtupcount; - -#ifdef TRACE_SORT - if (trace_sort) - elog(LOG, "replacement selection will sort %d first run tuples", - state->memtupcount); -#endif - state->memtupcount = 0; /* make the heap empty */ - - for (j = 0; j < ntuples; j++) - { - /* Must copy source tuple to avoid possible overwrite */ - SortTuple stup = state->memtuples[j]; - - stup.tupindex = RUN_FIRST; - tuplesort_heap_insert(state, &stup, false); - } - Assert(state->memtupcount == ntuples); - } - - state->currentRun = RUN_FIRST; + state->currentRun = 0; /* * Initialize variables of Algorithm D (step D1). @@ -2624,22 +2448,6 @@ mergeruns(Tuplesortstate *state) else init_slab_allocator(state, 0); - /* - * If we produced only one initial run (quite likely if the total data - * volume is between 1X and 2X workMem when replacement selection is used, - * but something we particularly count on when input is presorted), we can - * just use that tape as the finished output, rather than doing a useless - * merge. (This obvious optimization is not in Knuth's algorithm.) - */ - if (state->currentRun == RUN_SECOND) - { - state->result_tape = state->tp_tapenum[state->destTape]; - /* must freeze and rewind the finished output tape */ - LogicalTapeFreeze(state->tapeset, state->result_tape); - state->status = TSS_SORTEDONTAPE; - return; - } - /* * Allocate a new 'memtuples' array, for the heap. It will hold one tuple * from each input tape. @@ -2826,11 +2634,11 @@ mergeonerun(Tuplesortstate *state) if (mergereadnext(state, srcTape, &stup)) { stup.tupindex = srcTape; - tuplesort_heap_replace_top(state, &stup, false); + tuplesort_heap_replace_top(state, &stup); } else - tuplesort_heap_delete_top(state, false); + tuplesort_heap_delete_top(state); } /* @@ -2892,7 +2700,7 @@ beginmerge(Tuplesortstate *state) if (mergereadnext(state, srcTape, &tup)) { tup.tupindex = srcTape; - tuplesort_heap_insert(state, &tup, false); + tuplesort_heap_insert(state, &tup); } } } @@ -2922,124 +2730,25 @@ mergereadnext(Tuplesortstate *state, int srcTape, SortTuple *stup) } /* - * dumptuples - remove tuples from memtuples and write to tape - * - * This is used during initial-run building, but not during merging. + * dumptuples - remove tuples from memtuples and write initial run to tape * - * When alltuples = false and replacement selection is still active, dump - * only enough tuples to get under the availMem limit (and leave at least - * one tuple in memtuples, since puttuple will then assume it is a heap that - * has a tuple to compare to). We always insist there be at least one free - * slot in the memtuples[] array. - * - * When alltuples = true, dump everything currently in memory. (This - * case is only used at end of input data, although in practice only the - * first run could fail to dump all tuples when we LACKMEM(), and only - * when replacement selection is active.) - * - * If, when replacement selection is active, we see that the tuple run - * number at the top of the heap has changed, start a new run. This must be - * the first run, because replacement selection is always abandoned for all - * further runs. + * When alltuples = true, dump everything currently in memory. (This case is + * only used at end of input data.) 
*/ static void dumptuples(Tuplesortstate *state, bool alltuples) -{ - while (alltuples || - (LACKMEM(state) && state->memtupcount > 1) || - state->memtupcount >= state->memtupsize) - { - if (state->replaceActive) - { - /* - * Still holding out for a case favorable to replacement - * selection. Still incrementally spilling using heap. - * - * Dump the heap's frontmost entry, and remove it from the heap. - */ - Assert(state->memtupcount > 0); - WRITETUP(state, state->tp_tapenum[state->destTape], - &state->memtuples[0]); - tuplesort_heap_delete_top(state, true); - } - else - { - /* - * Once committed to quicksorting runs, never incrementally spill - */ - dumpbatch(state, alltuples); - break; - } - - /* - * If top run number has changed, we've finished the current run (this - * can only be the first run), and will no longer spill incrementally. - */ - if (state->memtupcount == 0 || - state->memtuples[0].tupindex == HEAP_RUN_NEXT) - { - markrunend(state, state->tp_tapenum[state->destTape]); - Assert(state->currentRun == RUN_FIRST); - state->currentRun++; - state->tp_runs[state->destTape]++; - state->tp_dummy[state->destTape]--; /* per Alg D step D2 */ - -#ifdef TRACE_SORT - if (trace_sort) - elog(LOG, "finished incrementally writing %s run %d to tape %d: %s", - (state->memtupcount == 0) ? "only" : "first", - state->currentRun, state->destTape, - pg_rusage_show(&state->ru_start)); -#endif - - /* - * Done if heap is empty, which is possible when there is only one - * long run. - */ - Assert(state->currentRun == RUN_SECOND); - if (state->memtupcount == 0) - { - /* - * Replacement selection best case; no final merge required, - * because there was only one initial run (second run has no - * tuples). See RUN_SECOND case in mergeruns(). - */ - break; - } - - /* - * Abandon replacement selection for second run (as well as any - * subsequent runs). - */ - state->replaceActive = false; - - /* - * First tuple of next run should not be heapified, and so will - * bear placeholder run number. In practice this must actually be - * the second run, which just became the currentRun, so we're - * clear to quicksort and dump the tuples in batch next time - * memtuples becomes full. - */ - Assert(state->memtuples[0].tupindex == HEAP_RUN_NEXT); - selectnewtape(state); - } - } -} - -/* - * dumpbatch - sort and dump all memtuples, forming one run on tape - * - * Second or subsequent runs are never heapified by this module (although - * heapification still respects run number differences between the first and - * second runs), and a heap (replacement selection priority queue) is often - * avoided in the first place. - */ -static void -dumpbatch(Tuplesortstate *state, bool alltuples) { int memtupwrite; int i; + /* + * Nothing to do if we still fit in available memory and have array + * slots, unless this is the final call during initial run generation. + */ + if (state->memtupcount < state->memtupsize && !LACKMEM(state) && + !alltuples) + return; + /* * Final call might require no sorting, in rare cases where we just so * happen to have previously LACKMEM()'d at the point where exactly all @@ -3308,21 +3017,8 @@ tuplesort_space_type_name(TuplesortSpaceType t) /* * Heap manipulation routines, per Knuth's Algorithm 5.2.3H. - * - * Compare two SortTuples. If checkIndex is true, use the tuple index - * as the front of the sort key; otherwise, no. 
- * - * Note that for checkIndex callers, the heap invariant is never - * maintained beyond the first run, and so there are no COMPARETUP() - * calls needed to distinguish tuples in HEAP_RUN_NEXT. */ -#define HEAPCOMPARE(tup1,tup2) \ - (checkIndex && ((tup1)->tupindex != (tup2)->tupindex || \ - (tup1)->tupindex == HEAP_RUN_NEXT) ? \ - ((tup1)->tupindex) - ((tup2)->tupindex) : \ - COMPARETUP(state, tup1, tup2)) - /* * Convert the existing unordered array of SortTuples to a bounded heap, * discarding all but the smallest "state->bound" tuples. @@ -3331,10 +3027,6 @@ tuplesort_space_type_name(TuplesortSpaceType t) * at the root (array entry zero), instead of the smallest as in the normal * sort case. This allows us to discard the largest entry cheaply. * Therefore, we temporarily reverse the sort direction. - * - * We assume that all entries in a bounded heap will always have tupindex - * zero; it therefore doesn't matter that HEAPCOMPARE() doesn't reverse - * the direction of comparison for tupindexes. */ static void make_bounded_heap(Tuplesortstate *state) @@ -3358,8 +3050,7 @@ make_bounded_heap(Tuplesortstate *state) /* Must copy source tuple to avoid possible overwrite */ SortTuple stup = state->memtuples[i]; - stup.tupindex = 0; /* not used */ - tuplesort_heap_insert(state, &stup, false); + tuplesort_heap_insert(state, &stup); } else { @@ -3374,7 +3065,7 @@ make_bounded_heap(Tuplesortstate *state) CHECK_FOR_INTERRUPTS(); } else - tuplesort_heap_replace_top(state, &state->memtuples[i], false); + tuplesort_heap_replace_top(state, &state->memtuples[i]); } } @@ -3404,7 +3095,7 @@ sort_bounded_heap(Tuplesortstate *state) SortTuple stup = state->memtuples[0]; /* this sifts-up the next-largest entry and decreases memtupcount */ - tuplesort_heap_delete_top(state, false); + tuplesort_heap_delete_top(state); state->memtuples[state->memtupcount] = stup; } state->memtupcount = tupcount; @@ -3422,10 +3113,7 @@ sort_bounded_heap(Tuplesortstate *state) /* * Sort all memtuples using specialized qsort() routines. * - * Quicksort is used for small in-memory sorts. Quicksort is also generally - * preferred to replacement selection for generating runs during external sort - * operations, although replacement selection is sometimes used for the first - * run. + * Quicksort is used for small in-memory sorts, and external sort runs. */ static void tuplesort_sort_memtuples(Tuplesortstate *state) @@ -3454,15 +3142,13 @@ tuplesort_sort_memtuples(Tuplesortstate *state) * is, it might get overwritten before being moved into the heap! */ static void -tuplesort_heap_insert(Tuplesortstate *state, SortTuple *tuple, - bool checkIndex) +tuplesort_heap_insert(Tuplesortstate *state, SortTuple *tuple) { SortTuple *memtuples; int j; memtuples = state->memtuples; Assert(state->memtupcount < state->memtupsize); - Assert(!checkIndex || tuple->tupindex == RUN_FIRST); CHECK_FOR_INTERRUPTS(); @@ -3475,7 +3161,7 @@ tuplesort_heap_insert(Tuplesortstate *state, SortTuple *tuple, { int i = (j - 1) >> 1; - if (HEAPCOMPARE(tuple, &memtuples[i]) >= 0) + if (COMPARETUP(state, tuple, &memtuples[i]) >= 0) break; memtuples[j] = memtuples[i]; j = i; @@ -3491,12 +3177,11 @@ tuplesort_heap_insert(Tuplesortstate *state, SortTuple *tuple, * if necessary. 
*/ static void -tuplesort_heap_delete_top(Tuplesortstate *state, bool checkIndex) +tuplesort_heap_delete_top(Tuplesortstate *state) { SortTuple *memtuples = state->memtuples; SortTuple *tuple; - Assert(!checkIndex || state->currentRun == RUN_FIRST); if (--state->memtupcount <= 0) return; @@ -3505,7 +3190,7 @@ tuplesort_heap_delete_top(Tuplesortstate *state, bool checkIndex) * current top node with it. */ tuple = &memtuples[state->memtupcount]; - tuplesort_heap_replace_top(state, tuple, checkIndex); + tuplesort_heap_replace_top(state, tuple); } /* @@ -3516,14 +3201,12 @@ tuplesort_heap_delete_top(Tuplesortstate *state, bool checkIndex) * Heapsort, steps H3-H8). */ static void -tuplesort_heap_replace_top(Tuplesortstate *state, SortTuple *tuple, - bool checkIndex) +tuplesort_heap_replace_top(Tuplesortstate *state, SortTuple *tuple) { SortTuple *memtuples = state->memtuples; unsigned int i, n; - Assert(!checkIndex || state->currentRun == RUN_FIRST); Assert(state->memtupcount >= 1); CHECK_FOR_INTERRUPTS(); @@ -3542,9 +3225,9 @@ tuplesort_heap_replace_top(Tuplesortstate *state, SortTuple *tuple, if (j >= n) break; if (j + 1 < n && - HEAPCOMPARE(&memtuples[j], &memtuples[j + 1]) > 0) + COMPARETUP(state, &memtuples[j], &memtuples[j + 1]) > 0) j++; - if (HEAPCOMPARE(tuple, &memtuples[j]) <= 0) + if (COMPARETUP(state, tuple, &memtuples[j]) <= 0) break; memtuples[i] = memtuples[j]; i = j; diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h index dad98de98d..3950054368 100644 --- a/src/include/miscadmin.h +++ b/src/include/miscadmin.h @@ -241,7 +241,6 @@ extern bool enableFsync; extern bool allowSystemTableMods; extern PGDLLIMPORT int work_mem; extern PGDLLIMPORT int maintenance_work_mem; -extern PGDLLIMPORT int replacement_sort_tuples; extern int VacuumCostPageHit; extern int VacuumCostPageMiss; diff --git a/src/test/regress/expected/cluster.out b/src/test/regress/expected/cluster.out index 097ac2b006..82713bfa2c 100644 --- a/src/test/regress/expected/cluster.out +++ b/src/test/regress/expected/cluster.out @@ -444,22 +444,8 @@ create table clstr_4 as select * from tenk1; create index cluster_sort on clstr_4 (hundred, thousand, tenthous); -- ensure we don't use the index in CLUSTER nor the checking SELECTs set enable_indexscan = off; --- Use external sort that only ever uses quicksort to sort runs: +-- Use external sort: set maintenance_work_mem = '1MB'; -set replacement_sort_tuples = 0; -cluster clstr_4 using cluster_sort; -select * from -(select hundred, lag(hundred) over () as lhundred, - thousand, lag(thousand) over () as lthousand, - tenthous, lag(tenthous) over () as ltenthous from clstr_4) ss -where row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous); - hundred | lhundred | thousand | lthousand | tenthous | ltenthous ----------+----------+----------+-----------+----------+----------- -(0 rows) - --- Replacement selection will now be forced. 
It should only produce a single
--- run, due to the fact that input is found to be presorted:
-set replacement_sort_tuples = 150000;
 cluster clstr_4 using cluster_sort;
 select * from
 (select hundred, lag(hundred) over () as lhundred,
     thousand, lag(thousand) over () as lthousand,
     tenthous, lag(tenthous) over () as ltenthous from clstr_4) ss
 where row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous);
  hundred | lhundred | thousand | lthousand | tenthous | ltenthous
 ---------+----------+----------+-----------+----------+-----------
 (0 rows)
@@ -472,7 +458,6 @@ where row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous);
 reset enable_indexscan;
 reset maintenance_work_mem;
-reset replacement_sort_tuples;
 -- clean up
 DROP TABLE clustertest;
 DROP TABLE clstr_1;
diff --git a/src/test/regress/sql/cluster.sql b/src/test/regress/sql/cluster.sql
index 8dd9459bda..a6c2757efa 100644
--- a/src/test/regress/sql/cluster.sql
+++ b/src/test/regress/sql/cluster.sql
@@ -203,19 +203,8 @@ create index cluster_sort on clstr_4 (hundred, thousand, tenthous);
 -- ensure we don't use the index in CLUSTER nor the checking SELECTs
 set enable_indexscan = off;
--- Use external sort that only ever uses quicksort to sort runs:
+-- Use external sort:
 set maintenance_work_mem = '1MB';
-set replacement_sort_tuples = 0;
-cluster clstr_4 using cluster_sort;
-select * from
-(select hundred, lag(hundred) over () as lhundred,
-    thousand, lag(thousand) over () as lthousand,
-    tenthous, lag(tenthous) over () as ltenthous from clstr_4) ss
-where row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous);
-
--- Replacement selection will now be forced. It should only produce a single
--- run, due to the fact that input is found to be presorted:
-set replacement_sort_tuples = 150000;
 cluster clstr_4 using cluster_sort;
 select * from
 (select hundred, lag(hundred) over () as lhundred,
@@ -225,7 +214,6 @@ where row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous);
 reset enable_indexscan;
 reset maintenance_work_mem;
-reset replacement_sort_tuples;
 -- clean up
 DROP TABLE clustertest;

From 5373bc2a0867048bb78f93aede54ac1309b5e227 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 31 Aug 2017 12:24:47 -0400
Subject: [PATCH 0293/1087] Add background worker type

Add bgw_type field to background worker structure. It is intended to be
set to the same value for all workers of the same type, so they can be
grouped in pg_stat_activity, for example.

The backend_type column in pg_stat_activity now shows bgw_type for a
background worker. The ps listing also no longer calls out that a
process is a background worker but just shows the bgw_type. That way,
being a background worker is now more of an implementation detail that
is not shown to the user. However, most log messages still refer to
'background worker "%s"'; otherwise constructing sensible and
translatable log messages would become tricky.
Reviewed-by: Michael Paquier Reviewed-by: Daniel Gustafsson --- contrib/pg_prewarm/autoprewarm.c | 6 ++- doc/src/sgml/bgworker.sgml | 11 ++++- src/backend/access/transam/parallel.c | 1 + src/backend/postmaster/bgworker.c | 51 ++++++++++++++++++++-- src/backend/postmaster/postmaster.c | 5 ++- src/backend/replication/logical/launcher.c | 3 ++ src/backend/utils/adt/pgstatfuncs.c | 16 ++++++- src/include/postmaster/bgworker.h | 2 + src/test/modules/test_shm_mq/setup.c | 2 +- src/test/modules/worker_spi/worker_spi.c | 8 ++-- 10 files changed, 89 insertions(+), 16 deletions(-) diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c index cc0350e6d6..006c3153db 100644 --- a/contrib/pg_prewarm/autoprewarm.c +++ b/contrib/pg_prewarm/autoprewarm.c @@ -800,7 +800,8 @@ apw_start_master_worker(void) worker.bgw_start_time = BgWorkerStart_ConsistentState; strcpy(worker.bgw_library_name, "pg_prewarm"); strcpy(worker.bgw_function_name, "autoprewarm_main"); - strcpy(worker.bgw_name, "autoprewarm"); + strcpy(worker.bgw_name, "autoprewarm master"); + strcpy(worker.bgw_type, "autoprewarm master"); if (process_shared_preload_libraries_in_progress) { @@ -840,7 +841,8 @@ apw_start_database_worker(void) worker.bgw_start_time = BgWorkerStart_ConsistentState; strcpy(worker.bgw_library_name, "pg_prewarm"); strcpy(worker.bgw_function_name, "autoprewarm_database_main"); - strcpy(worker.bgw_name, "autoprewarm"); + strcpy(worker.bgw_name, "autoprewarm worker"); + strcpy(worker.bgw_type, "autoprewarm worker"); /* must set notify PID to wait for shutdown */ worker.bgw_notify_pid = MyProcPid; diff --git a/doc/src/sgml/bgworker.sgml b/doc/src/sgml/bgworker.sgml index b422323081..ea1b5c0c8e 100644 --- a/doc/src/sgml/bgworker.sgml +++ b/doc/src/sgml/bgworker.sgml @@ -51,6 +51,7 @@ typedef void (*bgworker_main_type)(Datum main_arg); typedef struct BackgroundWorker { char bgw_name[BGW_MAXLEN]; + char bgw_type[BGW_MAXLEN]; int bgw_flags; BgWorkerStartTime bgw_start_time; int bgw_restart_time; /* in seconds, or BGW_NEVER_RESTART */ @@ -64,8 +65,14 @@ typedef struct BackgroundWorker - bgw_name is a string to be used in log messages, process - listings and similar contexts. + bgw_name and bgw_type are + strings to be used in log messages, process listings and similar contexts. + bgw_type should be the same for all background + workers of the same type, so that it is possible to group such workers in a + process listing, for example. bgw_name on the + other hand can contain additional information about the specific process. + (Typically, the string for bgw_name will contain + the type somehow, but that is not strictly required.) 
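For illustration, this is roughly how an extension would now fill in both
fields at registration time. This is a hedged sketch, not part of the patch:
the extension name "my_ext", its entry point my_ext_main, and the helper
function are hypothetical; bgw_type stays constant across workers of the
kind, while bgw_name varies per process:

	#include "postgres.h"
	#include "postmaster/bgworker.h"

	void
	my_ext_register_worker(int worker_number)
	{
		BackgroundWorker worker;

		memset(&worker, 0, sizeof(worker));
		worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
		worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
		worker.bgw_restart_time = BGW_NEVER_RESTART;
		snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_ext");
		snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_ext_main");
		/* same for every worker of this kind, so pg_stat_activity can group them */
		snprintf(worker.bgw_type, BGW_MAXLEN, "my_ext worker");
		/* per-process string, free to carry extra detail */
		snprintf(worker.bgw_name, BGW_MAXLEN, "my_ext worker %d", worker_number);
		worker.bgw_main_arg = (Datum) 0;
		worker.bgw_notify_pid = 0;
		RegisterBackgroundWorker(&worker);
	}
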
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index 13c8ba3b19..c6f7b7af0e 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -467,6 +467,7 @@ LaunchParallelWorkers(ParallelContext *pcxt) memset(&worker, 0, sizeof(worker)); snprintf(worker.bgw_name, BGW_MAXLEN, "parallel worker for PID %d", MyProcPid); + snprintf(worker.bgw_type, BGW_MAXLEN, "parallel worker"); worker.bgw_flags = BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION | BGWORKER_CLASS_PARALLEL; diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c index 28af6f0f07..4a3c4b4cc9 100644 --- a/src/backend/postmaster/bgworker.c +++ b/src/backend/postmaster/bgworker.c @@ -344,6 +344,8 @@ BackgroundWorkerStateChange(void) */ ascii_safe_strlcpy(rw->rw_worker.bgw_name, slot->worker.bgw_name, BGW_MAXLEN); + ascii_safe_strlcpy(rw->rw_worker.bgw_type, + slot->worker.bgw_type, BGW_MAXLEN); ascii_safe_strlcpy(rw->rw_worker.bgw_library_name, slot->worker.bgw_library_name, BGW_MAXLEN); ascii_safe_strlcpy(rw->rw_worker.bgw_function_name, @@ -630,6 +632,12 @@ SanityCheckBackgroundWorker(BackgroundWorker *worker, int elevel) return false; } + /* + * If bgw_type is not filled in, use bgw_name. + */ + if (strcmp(worker->bgw_type, "") == 0) + strcpy(worker->bgw_type, worker->bgw_name); + return true; } @@ -671,7 +679,7 @@ bgworker_die(SIGNAL_ARGS) ereport(FATAL, (errcode(ERRCODE_ADMIN_SHUTDOWN), errmsg("terminating background worker \"%s\" due to administrator command", - MyBgworkerEntry->bgw_name))); + MyBgworkerEntry->bgw_type))); } /* @@ -700,7 +708,6 @@ void StartBackgroundWorker(void) { sigjmp_buf local_sigjmp_buf; - char buf[MAXPGPATH]; BackgroundWorker *worker = MyBgworkerEntry; bgworker_main_type entrypt; @@ -710,8 +717,7 @@ StartBackgroundWorker(void) IsBackgroundWorker = true; /* Identify myself via ps */ - snprintf(buf, MAXPGPATH, "bgworker: %s", worker->bgw_name); - init_ps_display(buf, "", "", ""); + init_ps_display(worker->bgw_name, "", "", ""); /* * If we're not supposed to have shared memory access, then detach from @@ -1233,3 +1239,40 @@ LookupBackgroundWorkerFunction(const char *libraryname, const char *funcname) return (bgworker_main_type) load_external_function(libraryname, funcname, true, NULL); } + +/* + * Given a PID, get the bgw_type of the background worker. Returns NULL if + * not a valid background worker. + * + * The return value is in static memory belonging to this function, so it has + * to be used before calling this function again. This is so that the caller + * doesn't have to worry about the background worker locking protocol. 
+ */ +const char * +GetBackgroundWorkerTypeByPid(pid_t pid) +{ + int slotno; + bool found = false; + static char result[BGW_MAXLEN]; + + LWLockAcquire(BackgroundWorkerLock, LW_SHARED); + + for (slotno = 0; slotno < BackgroundWorkerData->total_slots; slotno++) + { + BackgroundWorkerSlot *slot = &BackgroundWorkerData->slot[slotno]; + + if (slot->pid > 0 && slot->pid == pid) + { + strcpy(result, slot->worker.bgw_type); + found = true; + break; + } + } + + LWLockRelease(BackgroundWorkerLock); + + if (!found) + return NULL; + + return result; +} diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index 1bcbce537a..8a2cc2fc2b 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -3117,8 +3117,9 @@ CleanupBackgroundWorker(int pid, exitstatus = 0; #endif - snprintf(namebuf, MAXPGPATH, "%s: %s", _("worker process"), - rw->rw_worker.bgw_name); + snprintf(namebuf, MAXPGPATH, _("background worker \"%s\""), + rw->rw_worker.bgw_type); + if (!EXIT_STATUS_0(exitstatus)) { diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c index 44bdcab3b9..a613ef4757 100644 --- a/src/backend/replication/logical/launcher.c +++ b/src/backend/replication/logical/launcher.c @@ -422,6 +422,7 @@ logicalrep_worker_launch(Oid dbid, Oid subid, const char *subname, Oid userid, else snprintf(bgw.bgw_name, BGW_MAXLEN, "logical replication worker for subscription %u", subid); + snprintf(bgw.bgw_type, BGW_MAXLEN, "logical replication worker"); bgw.bgw_restart_time = BGW_NEVER_RESTART; bgw.bgw_notify_pid = MyProcPid; @@ -775,6 +776,8 @@ ApplyLauncherRegister(void) snprintf(bgw.bgw_function_name, BGW_MAXLEN, "ApplyLauncherMain"); snprintf(bgw.bgw_name, BGW_MAXLEN, "logical replication launcher"); + snprintf(bgw.bgw_type, BGW_MAXLEN, + "logical replication launcher"); bgw.bgw_restart_time = 5; bgw.bgw_notify_pid = 0; bgw.bgw_main_arg = (Datum) 0; diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c index 5a968e3758..8d9e7c10ae 100644 --- a/src/backend/utils/adt/pgstatfuncs.c +++ b/src/backend/utils/adt/pgstatfuncs.c @@ -21,6 +21,7 @@ #include "funcapi.h" #include "miscadmin.h" #include "pgstat.h" +#include "postmaster/bgworker_internals.h" #include "postmaster/postmaster.h" #include "storage/proc.h" #include "storage/procarray.h" @@ -823,8 +824,19 @@ pg_stat_get_activity(PG_FUNCTION_ARGS) } } /* Add backend type */ - values[17] = - CStringGetTextDatum(pgstat_get_backend_desc(beentry->st_backendType)); + if (beentry->st_backendType == B_BG_WORKER) + { + const char *bgw_type; + + bgw_type = GetBackgroundWorkerTypeByPid(beentry->st_procpid); + if (bgw_type) + values[17] = CStringGetTextDatum(bgw_type); + else + nulls[17] = true; + } + else + values[17] = + CStringGetTextDatum(pgstat_get_backend_desc(beentry->st_backendType)); } else { diff --git a/src/include/postmaster/bgworker.h b/src/include/postmaster/bgworker.h index e2ecd3c9eb..6b4e631880 100644 --- a/src/include/postmaster/bgworker.h +++ b/src/include/postmaster/bgworker.h @@ -88,6 +88,7 @@ typedef enum typedef struct BackgroundWorker { char bgw_name[BGW_MAXLEN]; + char bgw_type[BGW_MAXLEN]; int bgw_flags; BgWorkerStartTime bgw_start_time; int bgw_restart_time; /* in seconds, or BGW_NEVER_RESTART */ @@ -122,6 +123,7 @@ extern BgwHandleStatus GetBackgroundWorkerPid(BackgroundWorkerHandle *handle, extern BgwHandleStatus WaitForBackgroundWorkerStartup(BackgroundWorkerHandle *handle, pid_t *pid); extern BgwHandleStatus 
WaitForBackgroundWorkerShutdown(BackgroundWorkerHandle *); +extern const char *GetBackgroundWorkerTypeByPid(pid_t pid); /* Terminate a bgworker */ extern void TerminateBackgroundWorker(BackgroundWorkerHandle *handle); diff --git a/src/test/modules/test_shm_mq/setup.c b/src/test/modules/test_shm_mq/setup.c index 3ae9018360..561f6f9bac 100644 --- a/src/test/modules/test_shm_mq/setup.c +++ b/src/test/modules/test_shm_mq/setup.c @@ -219,7 +219,7 @@ setup_background_workers(int nworkers, dsm_segment *seg) worker.bgw_restart_time = BGW_NEVER_RESTART; sprintf(worker.bgw_library_name, "test_shm_mq"); sprintf(worker.bgw_function_name, "test_shm_mq_main"); - snprintf(worker.bgw_name, BGW_MAXLEN, "test_shm_mq"); + snprintf(worker.bgw_type, BGW_MAXLEN, "test_shm_mq"); worker.bgw_main_arg = UInt32GetDatum(dsm_segment_handle(seg)); /* set bgw_notify_pid, so we can detect if the worker stops */ worker.bgw_notify_pid = MyProcPid; diff --git a/src/test/modules/worker_spi/worker_spi.c b/src/test/modules/worker_spi/worker_spi.c index 12c8cd5774..4c6ab6d575 100644 --- a/src/test/modules/worker_spi/worker_spi.c +++ b/src/test/modules/worker_spi/worker_spi.c @@ -111,7 +111,7 @@ initialize_worker_spi(worktable *table) StartTransactionCommand(); SPI_connect(); PushActiveSnapshot(GetTransactionSnapshot()); - pgstat_report_activity(STATE_RUNNING, "initializing spi_worker schema"); + pgstat_report_activity(STATE_RUNNING, "initializing worker_spi schema"); /* XXX could we use CREATE SCHEMA IF NOT EXISTS? */ initStringInfo(&buf); @@ -359,7 +359,8 @@ _PG_init(void) */ for (i = 1; i <= worker_spi_total_workers; i++) { - snprintf(worker.bgw_name, BGW_MAXLEN, "worker %d", i); + snprintf(worker.bgw_name, BGW_MAXLEN, "worker_spi worker %d", i); + snprintf(worker.bgw_type, BGW_MAXLEN, "worker_spi"); worker.bgw_main_arg = Int32GetDatum(i); RegisterBackgroundWorker(&worker); @@ -385,7 +386,8 @@ worker_spi_launch(PG_FUNCTION_ARGS) worker.bgw_restart_time = BGW_NEVER_RESTART; sprintf(worker.bgw_library_name, "worker_spi"); sprintf(worker.bgw_function_name, "worker_spi_main"); - snprintf(worker.bgw_name, BGW_MAXLEN, "worker %d", i); + snprintf(worker.bgw_name, BGW_MAXLEN, "worker_spi worker %d", i); + snprintf(worker.bgw_type, BGW_MAXLEN, "worker_spi"); worker.bgw_main_arg = Int32GetDatum(i); /* set bgw_notify_pid so that we can use WaitForBackgroundWorkerStartup */ worker.bgw_notify_pid = MyProcPid; From 136ab7c5a5f54fecea7c28c8550c19123245acf0 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 29 Sep 2017 11:32:05 -0400 Subject: [PATCH 0294/1087] Marginal improvement for generated code in execExprInterp.c. Avoid the coding pattern "*op->resvalue = f();", as some compilers think that requires them to evaluate "op->resvalue" before the function call. Unless there are lots of free registers, this can lead to a useless register spill and reload across the call. I changed all the cases like this in ExecInterpExpr(), but didn't bother in the out-of-line opcode eval subroutines, since those are presumably not as performance-critical. 
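Distilled to its essence, each call site changed from the first form below to
the second; f here is only a stand-in for whichever evaluation function the
opcode invokes (e.g. op->d.func.fn_addr):

	/* before: some compilers evaluate op->resvalue into a register
	 * ahead of the call, risking a spill/reload across it */
	*op->resvalue = f(fcinfo);

	/* after: call first, then store through the pointer */
	{
		Datum		d;

		d = f(fcinfo);
		*op->resvalue = d;
	}
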
Discussion: https://postgr.es/m/2508.1506630094@sss.pgh.pa.us --- src/backend/executor/execExprInterp.c | 59 +++++++++++++++++++-------- 1 file changed, 42 insertions(+), 17 deletions(-) diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 09abd46dda..39d50f98a9 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -501,15 +501,17 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_CASE(EEOP_INNER_SYSVAR) { int attnum = op->d.var.attnum; + Datum d; /* these asserts must match defenses in slot_getattr */ Assert(innerslot->tts_tuple != NULL); Assert(innerslot->tts_tuple != &(innerslot->tts_minhdr)); - /* heap_getsysattr has sufficient defenses against bad attnums */ - *op->resvalue = heap_getsysattr(innerslot->tts_tuple, attnum, - innerslot->tts_tupleDescriptor, - op->resnull); + /* heap_getsysattr has sufficient defenses against bad attnums */ + d = heap_getsysattr(innerslot->tts_tuple, attnum, + innerslot->tts_tupleDescriptor, + op->resnull); + *op->resvalue = d; EEO_NEXT(); } @@ -517,15 +519,17 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_CASE(EEOP_OUTER_SYSVAR) { int attnum = op->d.var.attnum; + Datum d; /* these asserts must match defenses in slot_getattr */ Assert(outerslot->tts_tuple != NULL); Assert(outerslot->tts_tuple != &(outerslot->tts_minhdr)); /* heap_getsysattr has sufficient defenses against bad attnums */ - *op->resvalue = heap_getsysattr(outerslot->tts_tuple, attnum, - outerslot->tts_tupleDescriptor, - op->resnull); + d = heap_getsysattr(outerslot->tts_tuple, attnum, + outerslot->tts_tupleDescriptor, + op->resnull); + *op->resvalue = d; EEO_NEXT(); } @@ -533,15 +537,17 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_CASE(EEOP_SCAN_SYSVAR) { int attnum = op->d.var.attnum; + Datum d; /* these asserts must match defenses in slot_getattr */ Assert(scanslot->tts_tuple != NULL); Assert(scanslot->tts_tuple != &(scanslot->tts_minhdr)); - /* heap_getsysattr has sufficient defenses against bad attnums */ - *op->resvalue = heap_getsysattr(scanslot->tts_tuple, attnum, - scanslot->tts_tupleDescriptor, - op->resnull); + /* heap_getsysattr has sufficient defenses against bad attnums */ + d = heap_getsysattr(scanslot->tts_tuple, attnum, + scanslot->tts_tupleDescriptor, + op->resnull); + *op->resvalue = d; EEO_NEXT(); } @@ -641,13 +647,22 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) * As both STRICT checks and function-usage are noticeable performance * wise, and function calls are a very hot-path (they also back * operators!), it's worth having so many separate opcodes. + * + * Note: the reason for using a temporary variable "d", here and in + * other places, is that some compilers think "*op->resvalue = f();" + * requires them to evaluate op->resvalue into a register before + * calling f(), just in case f() is able to modify op->resvalue + * somehow. The extra line of code can save a useless register spill + * and reload across the function call. 
*/ EEO_CASE(EEOP_FUNCEXPR) { FunctionCallInfo fcinfo = op->d.func.fcinfo_data; + Datum d; fcinfo->isnull = false; - *op->resvalue = op->d.func.fn_addr(fcinfo); + d = op->d.func.fn_addr(fcinfo); + *op->resvalue = d; *op->resnull = fcinfo->isnull; EEO_NEXT(); @@ -658,6 +673,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) FunctionCallInfo fcinfo = op->d.func.fcinfo_data; bool *argnull = fcinfo->argnull; int argno; + Datum d; /* strict function, so check for NULL args */ for (argno = 0; argno < op->d.func.nargs; argno++) @@ -669,7 +685,8 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) } } fcinfo->isnull = false; - *op->resvalue = op->d.func.fn_addr(fcinfo); + d = op->d.func.fn_addr(fcinfo); + *op->resvalue = d; *op->resnull = fcinfo->isnull; strictfail: @@ -680,11 +697,13 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) { FunctionCallInfo fcinfo = op->d.func.fcinfo_data; PgStat_FunctionCallUsage fcusage; + Datum d; pgstat_init_function_usage(fcinfo, &fcusage); fcinfo->isnull = false; - *op->resvalue = op->d.func.fn_addr(fcinfo); + d = op->d.func.fn_addr(fcinfo); + *op->resvalue = d; *op->resnull = fcinfo->isnull; pgstat_end_function_usage(&fcusage, true); @@ -698,6 +717,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) PgStat_FunctionCallUsage fcusage; bool *argnull = fcinfo->argnull; int argno; + Datum d; /* strict function, so check for NULL args */ for (argno = 0; argno < op->d.func.nargs; argno++) @@ -712,7 +732,8 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) pgstat_init_function_usage(fcinfo, &fcusage); fcinfo->isnull = false; - *op->resvalue = op->d.func.fn_addr(fcinfo); + d = op->d.func.fn_addr(fcinfo); + *op->resvalue = d; *op->resnull = fcinfo->isnull; pgstat_end_function_usage(&fcusage, true); @@ -1113,6 +1134,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) if (!op->d.iocoerce.finfo_in->fn_strict || str != NULL) { FunctionCallInfo fcinfo_in; + Datum d; fcinfo_in = op->d.iocoerce.fcinfo_data_in; fcinfo_in->arg[0] = PointerGetDatum(str); @@ -1120,7 +1142,8 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) /* second and third arguments are already set up */ fcinfo_in->isnull = false; - *op->resvalue = FunctionCallInvoke(fcinfo_in); + d = FunctionCallInvoke(fcinfo_in); + *op->resvalue = d; /* Should get null result if and only if str is NULL */ if (str == NULL) @@ -1268,6 +1291,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_CASE(EEOP_ROWCOMPARE_STEP) { FunctionCallInfo fcinfo = op->d.rowcompare_step.fcinfo_data; + Datum d; /* force NULL result if strict fn and NULL input */ if (op->d.rowcompare_step.finfo->fn_strict && @@ -1279,7 +1303,8 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) /* Apply comparison function */ fcinfo->isnull = false; - *op->resvalue = op->d.rowcompare_step.fn_addr(fcinfo); + d = op->d.rowcompare_step.fn_addr(fcinfo); + *op->resvalue = d; /* force NULL result if NULL function result */ if (fcinfo->isnull) From 2a14b9609df1de4f2eb5a97aff674aaad033a7e6 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 25 Sep 2017 11:59:46 -0400 Subject: [PATCH 0295/1087] psql: Update \d sequence display For \d sequencename, the psql code just did SELECT * FROM sequencename to get the information to display, but this does not contain much interesting information anymore in PostgreSQL 10, because the metadata has been moved to a separate system 
catalog. This patch creates a newly designed sequence display that is not merely an extension of the general relation/table display as it was previously. Example: PostgreSQL 9.6: => \d foobar Sequence "public.foobar" Column | Type | Value ---------------+---------+--------------------- sequence_name | name | foobar last_value | bigint | 1 start_value | bigint | 1 increment_by | bigint | 1 max_value | bigint | 9223372036854775807 min_value | bigint | 1 cache_value | bigint | 1 log_cnt | bigint | 0 is_cycled | boolean | f is_called | boolean | f PostgreSQL 10 before this change: => \d foobar Sequence "public.foobar" Column | Type | Value ------------+---------+------- last_value | bigint | 1 log_cnt | bigint | 0 is_called | boolean | f New: => \d foobar Sequence "public.foobar" Type | Start | Minimum | Maximum | Increment | Cycles? | Cache --------+-------+---------+---------------------+-----------+---------+------- bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1 Reviewed-by: Fabien COELHO --- src/bin/psql/describe.c | 189 ++++++++++++++----------- src/test/regress/expected/identity.out | 7 + src/test/regress/expected/sequence.out | 13 ++ src/test/regress/sql/identity.sql | 2 + src/test/regress/sql/sequence.sql | 4 + 5 files changed, 135 insertions(+), 80 deletions(-) diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index d22ec68431..a3209b73fa 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -1380,8 +1380,6 @@ describeOneTableDetails(const char *schemaname, int i; char *view_def = NULL; char *headers[11]; - char **seq_values = NULL; - char **ptr; PQExpBufferData title; PQExpBufferData tmpbuf; int cols; @@ -1563,27 +1561,125 @@ describeOneTableDetails(const char *schemaname, res = NULL; /* - * If it's a sequence, fetch its values and store into an array that will - * be used later. + * If it's a sequence, deal with it here separately. 
*/ if (tableinfo.relkind == RELKIND_SEQUENCE) { - printfPQExpBuffer(&buf, "SELECT * FROM %s", fmtId(schemaname)); - /* must be separate because fmtId isn't reentrant */ - appendPQExpBuffer(&buf, ".%s;", fmtId(relationname)); + PGresult *result = NULL; + printQueryOpt myopt = pset.popt; + char *footers[2] = {NULL, NULL}; + + if (pset.sversion >= 100000) + { + printfPQExpBuffer(&buf, + "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n" + " seqstart AS \"%s\",\n" + " seqmin AS \"%s\",\n" + " seqmax AS \"%s\",\n" + " seqincrement AS \"%s\",\n" + " CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n" + " seqcache AS \"%s\"\n", + gettext_noop("Type"), + gettext_noop("Start"), + gettext_noop("Minimum"), + gettext_noop("Maximum"), + gettext_noop("Increment"), + gettext_noop("yes"), + gettext_noop("no"), + gettext_noop("Cycles?"), + gettext_noop("Cache")); + appendPQExpBuffer(&buf, + "FROM pg_catalog.pg_sequence\n" + "WHERE seqrelid = '%s';", + oid); + } + else + { + printfPQExpBuffer(&buf, + "SELECT pg_catalog.format_type('bigint'::regtype, NULL) AS \"%s\",\n" + " start_value AS \"%s\",\n" + " min_value AS \"%s\",\n" + " max_value AS \"%s\",\n" + " increment_by AS \"%s\",\n" + " CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n" + " cache_value AS \"%s\"\n", + gettext_noop("Type"), + gettext_noop("Start"), + gettext_noop("Minimum"), + gettext_noop("Maximum"), + gettext_noop("Increment"), + gettext_noop("yes"), + gettext_noop("no"), + gettext_noop("Cycles?"), + gettext_noop("Cache")); + appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname)); + /* must be separate because fmtId isn't reentrant */ + appendPQExpBuffer(&buf, ".%s;", fmtId(relationname)); + } res = PSQLexec(buf.data); if (!res) goto error_return; - seq_values = pg_malloc((PQnfields(res) + 1) * sizeof(*seq_values)); + /* Footer information about a sequence */ + + /* Get the column that owns this sequence */ + printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||" + "\n pg_catalog.quote_ident(relname) || '.' ||" + "\n pg_catalog.quote_ident(attname)," + "\n d.deptype" + "\nFROM pg_catalog.pg_class c" + "\nINNER JOIN pg_catalog.pg_depend d ON c.oid=d.refobjid" + "\nINNER JOIN pg_catalog.pg_namespace n ON n.oid=c.relnamespace" + "\nINNER JOIN pg_catalog.pg_attribute a ON (" + "\n a.attrelid=c.oid AND" + "\n a.attnum=d.refobjsubid)" + "\nWHERE d.classid='pg_catalog.pg_class'::pg_catalog.regclass" + "\n AND d.refclassid='pg_catalog.pg_class'::pg_catalog.regclass" + "\n AND d.objid='%s'" + "\n AND d.deptype IN ('a', 'i')", + oid); + + result = PSQLexec(buf.data); + + /* + * If we get no rows back, don't show anything (obviously). We should + * never get more than one row back, but if we do, just ignore it and + * don't print anything. 
+ */ + if (!result) + goto error_return; + else if (PQntuples(result) == 1) + { + switch (PQgetvalue(result, 0, 1)[0]) + { + case 'a': + footers[0] = psprintf(_("Owned by: %s"), + PQgetvalue(result, 0, 0)); + break; + case 'i': + footers[0] = psprintf(_("Sequence for identity column: %s"), + PQgetvalue(result, 0, 0)); + break; + } + } + PQclear(result); + + printfPQExpBuffer(&title, _("Sequence \"%s.%s\""), + schemaname, relationname); - for (i = 0; i < PQnfields(res); i++) - seq_values[i] = pg_strdup(PQgetvalue(res, 0, i)); - seq_values[i] = NULL; + myopt.footers = footers; + myopt.topt.default_footer = false; + myopt.title = title.data; + myopt.translate_header = true; - PQclear(res); - res = NULL; + printQuery(res, &myopt, pset.queryFout, false, pset.logfile); + + if (footers[0]) + free(footers[0]); + + retval = true; + goto error_return; /* not an error, just return early */ } /* @@ -1667,10 +1763,6 @@ describeOneTableDetails(const char *schemaname, printfPQExpBuffer(&title, _("Materialized view \"%s.%s\""), schemaname, relationname); break; - case RELKIND_SEQUENCE: - printfPQExpBuffer(&title, _("Sequence \"%s.%s\""), - schemaname, relationname); - break; case RELKIND_INDEX: if (tableinfo.relpersistence == 'u') printfPQExpBuffer(&title, _("Unlogged index \"%s.%s\""), @@ -1729,9 +1821,6 @@ describeOneTableDetails(const char *schemaname, show_column_details = true; } - if (tableinfo.relkind == RELKIND_SEQUENCE) - headers[cols++] = gettext_noop("Value"); - if (tableinfo.relkind == RELKIND_INDEX) headers[cols++] = gettext_noop("Definition"); @@ -1814,10 +1903,6 @@ describeOneTableDetails(const char *schemaname, printTableAddCell(&cont, default_str, false, false); } - /* Value: for sequences only */ - if (tableinfo.relkind == RELKIND_SEQUENCE) - printTableAddCell(&cont, seq_values[i], false, false); - /* Expression for index column */ if (tableinfo.relkind == RELKIND_INDEX) printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false); @@ -2030,55 +2115,6 @@ describeOneTableDetails(const char *schemaname, PQclear(result); } - else if (tableinfo.relkind == RELKIND_SEQUENCE) - { - /* Footer information about a sequence */ - PGresult *result = NULL; - - /* Get the column that owns this sequence */ - printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||" - "\n pg_catalog.quote_ident(relname) || '.' ||" - "\n pg_catalog.quote_ident(attname)," - "\n d.deptype" - "\nFROM pg_catalog.pg_class c" - "\nINNER JOIN pg_catalog.pg_depend d ON c.oid=d.refobjid" - "\nINNER JOIN pg_catalog.pg_namespace n ON n.oid=c.relnamespace" - "\nINNER JOIN pg_catalog.pg_attribute a ON (" - "\n a.attrelid=c.oid AND" - "\n a.attnum=d.refobjsubid)" - "\nWHERE d.classid='pg_catalog.pg_class'::pg_catalog.regclass" - "\n AND d.refclassid='pg_catalog.pg_class'::pg_catalog.regclass" - "\n AND d.objid='%s'" - "\n AND d.deptype IN ('a', 'i')", - oid); - - result = PSQLexec(buf.data); - if (!result) - goto error_return; - else if (PQntuples(result) == 1) - { - switch (PQgetvalue(result, 0, 1)[0]) - { - case 'a': - printfPQExpBuffer(&buf, _("Owned by: %s"), - PQgetvalue(result, 0, 0)); - printTableAddFooter(&cont, buf.data); - break; - case 'i': - printfPQExpBuffer(&buf, _("Sequence for identity column: %s"), - PQgetvalue(result, 0, 0)); - printTableAddFooter(&cont, buf.data); - break; - } - } - - /* - * If we get no rows back, don't show anything (obviously). We should - * never get more than one row back, but if we do, just ignore it and - * don't print anything. 
- */ - PQclear(result); - } else if (tableinfo.relkind == RELKIND_RELATION || tableinfo.relkind == RELKIND_MATVIEW || tableinfo.relkind == RELKIND_FOREIGN_TABLE || @@ -2963,13 +2999,6 @@ describeOneTableDetails(const char *schemaname, termPQExpBuffer(&title); termPQExpBuffer(&tmpbuf); - if (seq_values) - { - for (ptr = seq_values; *ptr; ptr++) - free(*ptr); - free(seq_values); - } - if (view_def) free(view_def); diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out index 2800ed7caa..5fa585d6cc 100644 --- a/src/test/regress/expected/identity.out +++ b/src/test/regress/expected/identity.out @@ -32,6 +32,13 @@ SELECT pg_get_serial_sequence('itest1', 'a'); public.itest1_a_seq (1 row) +\d itest1_a_seq + Sequence "public.itest1_a_seq" + Type | Start | Minimum | Maximum | Increment | Cycles? | Cache +---------+-------+---------+------------+-----------+---------+------- + integer | 1 | 1 | 2147483647 | 1 | no | 1 +Sequence for identity column: public.itest1.a + CREATE TABLE itest4 (a int, b text); ALTER TABLE itest4 ALTER COLUMN a ADD GENERATED ALWAYS AS IDENTITY; -- error, requires NOT NULL ERROR: column "a" of relation "itest4" must be declared NOT NULL before identity can be added diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out index ea05a3382b..2384b7dd81 100644 --- a/src/test/regress/expected/sequence.out +++ b/src/test/regress/expected/sequence.out @@ -535,6 +535,19 @@ SELECT * FROM pg_sequence_parameters('sequence_test4'::regclass); -1 | -9223372036854775808 | -1 | -1 | f | 1 | 20 (1 row) +\d sequence_test4 + Sequence "public.sequence_test4" + Type | Start | Minimum | Maximum | Increment | Cycles? | Cache +--------+-------+----------------------+---------+-----------+---------+------- + bigint | -1 | -9223372036854775808 | -1 | -1 | no | 1 + +\d serialtest2_f2_seq + Sequence "public.serialtest2_f2_seq" + Type | Start | Minimum | Maximum | Increment | Cycles? | Cache +---------+-------+---------+------------+-----------+---------+------- + integer | 1 | 1 | 2147483647 | 1 | no | 1 +Owned by: public.serialtest2.f2 + -- Test comments COMMENT ON SEQUENCE asdf IS 'won''t work'; ERROR: relation "asdf" does not exist diff --git a/src/test/regress/sql/identity.sql b/src/test/regress/sql/identity.sql index 7886456a56..e1b5a074c9 100644 --- a/src/test/regress/sql/identity.sql +++ b/src/test/regress/sql/identity.sql @@ -14,6 +14,8 @@ SELECT sequence_name FROM information_schema.sequences WHERE sequence_name LIKE SELECT pg_get_serial_sequence('itest1', 'a'); +\d itest1_a_seq + CREATE TABLE itest4 (a int, b text); ALTER TABLE itest4 ALTER COLUMN a ADD GENERATED ALWAYS AS IDENTITY; -- error, requires NOT NULL ALTER TABLE itest4 ALTER COLUMN a SET NOT NULL; diff --git a/src/test/regress/sql/sequence.sql b/src/test/regress/sql/sequence.sql index c50834a5b9..a7b9e63372 100644 --- a/src/test/regress/sql/sequence.sql +++ b/src/test/regress/sql/sequence.sql @@ -246,6 +246,10 @@ WHERE sequencename ~ ANY(ARRAY['sequence_test', 'serialtest']) SELECT * FROM pg_sequence_parameters('sequence_test4'::regclass); +\d sequence_test4 +\d serialtest2_f2_seq + + -- Test comments COMMENT ON SEQUENCE asdf IS 'won''t work'; COMMENT ON SEQUENCE sequence_test2 IS 'will work'; From e55d9643ecb87f41185941b54d632641b3852aaa Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 29 Sep 2017 13:51:14 -0400 Subject: [PATCH 0296/1087] pgbench: If we fail to send a command to the server, fail. This beats the old behavior of busy-waiting hands down. 
Oversight in commit 12788ae49e1933f463bc59a6efe46c4a01701b76. Report by Pavan Deolasee. Patch by Fabien Coelho. Reviewed by Pavan Deolasee. Discussion: http://postgr.es/m/CABOikdPhfXTypckMC1Ux6Ko+hKBWwUBA=EXsvamXYSg8M9J94w@mail.gmail.com --- src/bin/pgbench/pgbench.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index e37496c971..f039413690 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -2194,12 +2194,8 @@ doCustom(TState *thread, CState *st, StatsData *agg) { if (!sendCommand(st, command)) { - /* - * Failed. Stay in CSTATE_START_COMMAND state, to - * retry. ??? What the point or retrying? Should - * rather abort? - */ - return; + commandFailed(st, "SQL command send failed"); + st->state = CSTATE_ABORTED; } else st->state = CSTATE_WAIT_RESULT; From 69c16983e103f913ee0dae7f288611de006ba2ba Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 29 Sep 2017 15:59:11 -0400 Subject: [PATCH 0297/1087] psql: Don't try to print a partition constraint we didn't fetch. If \d rather than \d+ is used, then verbose is false and we don't ask the server for the partition constraint; so we shouldn't print it in that case either. Maksim Milyutin, per a report from Jesper Pedersen. Reviewed by Jesper Pedersen and Amit Langote. Discussion: http://postgr.es/m/2af5fc4d-7bcc-daa8-4fe6-86274bea363c@redhat.com --- src/bin/psql/describe.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index a3209b73fa..06885715a6 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -1985,13 +1985,16 @@ describeOneTableDetails(const char *schemaname, partdef); printTableAddFooter(&cont, tmpbuf.data); - /* If there isn't any constraint, show that explicitly */ - if (partconstraintdef == NULL || partconstraintdef[0] == '\0') - printfPQExpBuffer(&tmpbuf, _("No partition constraint")); - else - printfPQExpBuffer(&tmpbuf, _("Partition constraint: %s"), - partconstraintdef); - printTableAddFooter(&cont, tmpbuf.data); + if (verbose) + { + /* If there isn't any constraint, show that explicitly */ + if (partconstraintdef == NULL || partconstraintdef[0] == '\0') + printfPQExpBuffer(&tmpbuf, _("No partition constraint")); + else + printfPQExpBuffer(&tmpbuf, _("Partition constraint: %s"), + partconstraintdef); + printTableAddFooter(&cont, tmpbuf.data); + } PQclear(result); } From 19de0ab23ccba12567c18640f00b49f01471018d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 29 Sep 2017 16:26:21 -0400 Subject: [PATCH 0298/1087] Fix inadequate locking during get_rel_oids(). get_rel_oids used to not take any relation locks at all, but that stopped being a good idea with commit 3c3bb9933, which inserted a syscache lookup into the function. A concurrent DROP TABLE could now produce "cache lookup failed", which we don't want to have happen in normal operation. The best solution seems to be to transiently take a lock on the relation named by the RangeVar (which also makes the result of RangeVarGetRelid a lot less spongy). But we shouldn't hold the lock beyond this function, because we don't want VACUUM to lock more than one table at a time. (That would not be a big problem right now, but it will become one after the pending feature patch to allow multiple tables to be named in VACUUM.) 
In passing, adjust vacuum_rel and analyze_rel to document that we don't trust the passed RangeVar to be accurate, and allow the RangeVar to possibly be NULL --- which it is anyway for a whole-database VACUUM, though we accidentally didn't crash for that case. The passed RangeVar is in fact inaccurate when dealing with a child partition, as of v10, and it has been wrong for a whole long time in the case of vacuum_rel() recursing to a TOAST table. None of these things present visible bugs up to now, because the passed RangeVar is in fact only consulted for autovacuum logging, and in that particular context it's always accurate because autovacuum doesn't let vacuum.c expand partitions nor recurse to toast tables. Still, this seems like trouble waiting to happen, so let's nail the door at least partly shut. (Further cleanup is planned, in HEAD only, as part of the pending feature patch.) Fix some sadly inaccurate/obsolete comments too. Back-patch to v10. Michael Paquier and Tom Lane Discussion: https://postgr.es/m/25023.1506107590@sss.pgh.pa.us --- src/backend/commands/analyze.c | 7 +++- src/backend/commands/vacuum.c | 60 ++++++++++++++++++++++------------ 2 files changed, 45 insertions(+), 22 deletions(-) diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c index 283845cf2a..d432f8208d 100644 --- a/src/backend/commands/analyze.c +++ b/src/backend/commands/analyze.c @@ -106,6 +106,10 @@ static Datum ind_fetch_func(VacAttrStatsP stats, int rownum, bool *isNull); /* * analyze_rel() -- analyze one relation + * + * relid identifies the relation to analyze. If relation is supplied, use + * the name therein for reporting any failure to open/lock the rel; do not + * use it once we've successfully opened the rel, since it might be stale. */ void analyze_rel(Oid relid, RangeVar *relation, int options, @@ -145,7 +149,8 @@ analyze_rel(Oid relid, RangeVar *relation, int options, else { onerel = NULL; - if (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0) + if (relation && + IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0) ereport(LOG, (errcode(ERRCODE_LOCK_NOT_AVAILABLE), errmsg("skipping analyze of \"%s\" --- lock not available", diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c index faa181207a..d533cef6a6 100644 --- a/src/backend/commands/vacuum.c +++ b/src/backend/commands/vacuum.c @@ -128,9 +128,11 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel) * * options is a bitmask of VacuumOption flags, indicating what to do. * - * relid, if not InvalidOid, indicate the relation to process; otherwise, - * the RangeVar is used. (The latter must always be passed, because it's - * used for error messages.) + * relid, if not InvalidOid, indicates the relation to process; otherwise, + * if a RangeVar is supplied, that's what to process; otherwise, we process + * all relevant tables in the database. (If both relid and a RangeVar are + * supplied, the relid is what is processed, but we use the RangeVar's name + * to report any open/lock failure.) * * params contains a set of parameters that can be used to customize the * behavior. @@ -226,8 +228,8 @@ vacuum(int options, RangeVar *relation, Oid relid, VacuumParams *params, vac_strategy = bstrategy; /* - * Build list of relations to process, unless caller gave us one. (If we - * build one, we put it in vac_context for safekeeping.) + * Build list of relation OID(s) to process, putting it in vac_context for + * safekeeping. 
*/ relations = get_rel_oids(relid, relation); @@ -393,22 +395,18 @@ get_rel_oids(Oid relid, const RangeVar *vacrel) } else if (vacrel) { - /* Process a specific relation */ + /* Process a specific relation, and possibly partitions thereof */ Oid relid; HeapTuple tuple; Form_pg_class classForm; bool include_parts; /* - * Since we don't take a lock here, the relation might be gone, or the - * RangeVar might no longer refer to the OID we look up here. In the - * former case, VACUUM will do nothing; in the latter case, it will - * process the OID we looked up here, rather than the new one. Neither - * is ideal, but there's little practical alternative, since we're - * going to commit this transaction and begin a new one between now - * and then. + * We transiently take AccessShareLock to protect the syscache lookup + * below, as well as find_all_inheritors's expectation that the caller + * holds some lock on the starting relation. */ - relid = RangeVarGetRelid(vacrel, NoLock, false); + relid = RangeVarGetRelid(vacrel, AccessShareLock, false); /* * To check whether the relation is a partitioned table, fetch its @@ -422,10 +420,11 @@ get_rel_oids(Oid relid, const RangeVar *vacrel) ReleaseSysCache(tuple); /* - * Make relation list entries for this guy and its partitions, if any. - * Note that the list returned by find_all_inheritors() include the - * passed-in OID at its head. Also note that we did not request a - * lock to be taken to match what would be done otherwise. + * Make relation list entries for this rel and its partitions, if any. + * Note that the list returned by find_all_inheritors() includes the + * passed-in OID at its head. There's no point in taking locks on the + * individual partitions yet, and doing so would just add unnecessary + * deadlock risk. */ oldcontext = MemoryContextSwitchTo(vac_context); if (include_parts) @@ -434,6 +433,19 @@ get_rel_oids(Oid relid, const RangeVar *vacrel) else oid_list = lappend_oid(oid_list, relid); MemoryContextSwitchTo(oldcontext); + + /* + * Release lock again. This means that by the time we actually try to + * process the table, it might be gone or renamed. In the former case + * we'll silently ignore it; in the latter case we'll process it + * anyway, but we must beware that the RangeVar doesn't necessarily + * identify it anymore. This isn't ideal, perhaps, but there's little + * practical alternative, since we're typically going to commit this + * transaction and begin a new one between now and then. Moreover, + * holding locks on multiple relations would create significant risk + * of deadlock. + */ + UnlockRelationOid(relid, AccessShareLock); } else { @@ -463,7 +475,7 @@ get_rel_oids(Oid relid, const RangeVar *vacrel) classForm->relkind != RELKIND_PARTITIONED_TABLE) continue; - /* Make a relation list entry for this guy */ + /* Make a relation list entry for this rel */ oldcontext = MemoryContextSwitchTo(vac_context); oid_list = lappend_oid(oid_list, HeapTupleGetOid(tuple)); MemoryContextSwitchTo(oldcontext); @@ -1212,6 +1224,11 @@ vac_truncate_clog(TransactionId frozenXID, /* * vacuum_rel() -- vacuum one heap relation * + * relid identifies the relation to vacuum. If relation is supplied, + * use the name therein for reporting any failure to open/lock the rel; + * do not use it once we've successfully opened the rel, since it might + * be stale. + * * Doing one heap at a time incurs extra overhead, since we need to * check that the heap exists again just before we vacuum it. 
The * reason that we do this is so that vacuuming can be spread across @@ -1300,7 +1317,8 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) else { onerel = NULL; - if (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0) + if (relation && + IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0) ereport(LOG, (errcode(ERRCODE_LOCK_NOT_AVAILABLE), errmsg("skipping vacuum of \"%s\" --- lock not available", @@ -1468,7 +1486,7 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) * totally unimportant for toast relations. */ if (toast_relid != InvalidOid) - vacuum_rel(toast_relid, relation, options, params); + vacuum_rel(toast_relid, NULL, options, params); /* * Now release the session-level lock on the master table. From 0008a106d4f84206a96fc1fb09a1e6b09f1627ec Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 29 Sep 2017 16:50:01 -0400 Subject: [PATCH 0299/1087] Use Py_RETURN_NONE where suitable This is more idiomatic style and available as of Python 2.4, which is our minimum. --- src/pl/plpython/plpy_cursorobject.c | 3 +-- src/pl/plpython/plpy_plpymodule.c | 3 +-- src/pl/plpython/plpy_subxactobject.c | 3 +-- 3 files changed, 3 insertions(+), 6 deletions(-) diff --git a/src/pl/plpython/plpy_cursorobject.c b/src/pl/plpython/plpy_cursorobject.c index 2ad663cf66..0108471bfe 100644 --- a/src/pl/plpython/plpy_cursorobject.c +++ b/src/pl/plpython/plpy_cursorobject.c @@ -509,6 +509,5 @@ PLy_cursor_close(PyObject *self, PyObject *unused) cursor->closed = true; } - Py_INCREF(Py_None); - return Py_None; + Py_RETURN_NONE; } diff --git a/src/pl/plpython/plpy_plpymodule.c b/src/pl/plpython/plpy_plpymodule.c index feaf203256..759ad44932 100644 --- a/src/pl/plpython/plpy_plpymodule.c +++ b/src/pl/plpython/plpy_plpymodule.c @@ -575,6 +575,5 @@ PLy_output(volatile int level, PyObject *self, PyObject *args, PyObject *kw) /* * return a legal object so the interpreter will continue on its merry way */ - Py_INCREF(Py_None); - return Py_None; + Py_RETURN_NONE; } diff --git a/src/pl/plpython/plpy_subxactobject.c b/src/pl/plpython/plpy_subxactobject.c index 9f1caa87d9..331d2b859c 100644 --- a/src/pl/plpython/plpy_subxactobject.c +++ b/src/pl/plpython/plpy_subxactobject.c @@ -212,6 +212,5 @@ PLy_subtransaction_exit(PyObject *self, PyObject *args) CurrentResourceOwner = subxactdata->oldowner; pfree(subxactdata); - Py_INCREF(Py_None); - return Py_None; + Py_RETURN_NONE; } From 510b8cbff15fcece246f66f2273ccf830a6c7e98 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 29 Sep 2017 15:52:55 -0700 Subject: [PATCH 0300/1087] Extend & revamp pg_bswap.h infrastructure. Upcoming patches are going to address performance issues that involve slow system provided ntohs/htons etc. To address that expand pg_bswap.h to provide pg_ntoh{16,32,64}, pg_hton{16,32,64} and optimize their respective implementations by using compiler intrinsics for gcc compatible compilers and msvc. Fall back to manual implementations using shifts etc otherwise. Additionally remove multiple evaluation hazards from the existing BSWAP32/64 macros, by replacing them with inline functions when necessary. In the course of that the naming scheme is changed to pg_bswap16/32/64. 
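To see the multiple-evaluation hazard concretely, consider a hypothetical
caller (not from the patch) built against the fallback macro definition
rather than a compiler builtin:

	uint32		buf[4] = {1, 2, 3, 4};
	uint32	   *p = buf;
	uint32		v;

	v = BSWAP32(*p++);			/* old macro: the (x) argument expands four
								 * times, so p advances four elements */
	v = pg_bswap32(*p++);		/* inline function: argument evaluated once,
								 * p advances exactly one element */
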
Author: Andres Freund Discussion: https://postgr.es/m/20170927172019.gheidqy6xvlxb325@alap3.anarazel.de --- config/c-compiler.m4 | 17 ++++ configure | 24 ++++++ configure.in | 1 + contrib/btree_gist/btree_uuid.c | 4 +- src/include/pg_config.h.in | 3 + src/include/pg_config.h.win32 | 3 + src/include/port/pg_bswap.h | 132 ++++++++++++++++++++++++++------ src/include/port/pg_crc32c.h | 2 +- 8 files changed, 159 insertions(+), 27 deletions(-) diff --git a/config/c-compiler.m4 b/config/c-compiler.m4 index 7275ea69fe..6dcc790649 100644 --- a/config/c-compiler.m4 +++ b/config/c-compiler.m4 @@ -224,6 +224,23 @@ AC_DEFINE(HAVE__BUILTIN_TYPES_COMPATIBLE_P, 1, fi])# PGAC_C_TYPES_COMPATIBLE +# PGAC_C_BUILTIN_BSWAP16 +# ------------------------- +# Check if the C compiler understands __builtin_bswap16(), +# and define HAVE__BUILTIN_BSWAP16 if so. +AC_DEFUN([PGAC_C_BUILTIN_BSWAP16], +[AC_CACHE_CHECK(for __builtin_bswap16, pgac_cv__builtin_bswap16, +[AC_COMPILE_IFELSE([AC_LANG_SOURCE( +[static unsigned long int x = __builtin_bswap16(0xaabb);] +)], +[pgac_cv__builtin_bswap16=yes], +[pgac_cv__builtin_bswap16=no])]) +if test x"$pgac_cv__builtin_bswap16" = xyes ; then +AC_DEFINE(HAVE__BUILTIN_BSWAP16, 1, + [Define to 1 if your compiler understands __builtin_bswap16.]) +fi])# PGAC_C_BUILTIN_BSWAP16 + + # PGAC_C_BUILTIN_BSWAP32 # ------------------------- diff --git a/configure b/configure index 4f3b97c7cf..216447e739 100755 --- a/configure +++ b/configure @@ -11816,6 +11816,30 @@ if test x"$pgac_cv__types_compatible" = xyes ; then $as_echo "#define HAVE__BUILTIN_TYPES_COMPATIBLE_P 1" >>confdefs.h +fi +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_bswap16" >&5 +$as_echo_n "checking for __builtin_bswap16... " >&6; } +if ${pgac_cv__builtin_bswap16+:} false; then : + $as_echo_n "(cached) " >&6 +else + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ +static unsigned long int x = __builtin_bswap16(0xaabb); + +_ACEOF +if ac_fn_c_try_compile "$LINENO"; then : + pgac_cv__builtin_bswap16=yes +else + pgac_cv__builtin_bswap16=no +fi +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +fi +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_bswap16" >&5 +$as_echo "$pgac_cv__builtin_bswap16" >&6; } +if test x"$pgac_cv__builtin_bswap16" = xyes ; then + +$as_echo "#define HAVE__BUILTIN_BSWAP16 1" >>confdefs.h + fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_bswap32" >&5 $as_echo_n "checking for __builtin_bswap32... " >&6; } diff --git a/configure.in b/configure.in index fa48369630..a2e3d8331a 100644 --- a/configure.in +++ b/configure.in @@ -1306,6 +1306,7 @@ PGAC_C_FUNCNAME_SUPPORT PGAC_C_STATIC_ASSERT PGAC_C_TYPEOF PGAC_C_TYPES_COMPATIBLE +PGAC_C_BUILTIN_BSWAP16 PGAC_C_BUILTIN_BSWAP32 PGAC_C_BUILTIN_BSWAP64 PGAC_C_BUILTIN_CONSTANT_P diff --git a/contrib/btree_gist/btree_uuid.c b/contrib/btree_gist/btree_uuid.c index ecf357d662..9ff421ea55 100644 --- a/contrib/btree_gist/btree_uuid.c +++ b/contrib/btree_gist/btree_uuid.c @@ -182,8 +182,8 @@ uuid_2_double(const pg_uuid_t *u) * machine, byte-swap each half so we can use native uint64 arithmetic. */ #ifndef WORDS_BIGENDIAN - uu[0] = BSWAP64(uu[0]); - uu[1] = BSWAP64(uu[1]); + uu[0] = pg_bswap64(uu[0]); + uu[1] = pg_bswap64(uu[1]); #endif /* diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index 39286836dc..368a297e6d 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -668,6 +668,9 @@ /* Define to 1 if you have the header file. 
*/ #undef HAVE_WINLDAP_H +/* Define to 1 if your compiler understands __builtin_bswap16. */ +#undef HAVE__BUILTIN_BSWAP16 + /* Define to 1 if your compiler understands __builtin_bswap32. */ #undef HAVE__BUILTIN_BSWAP32 diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index d90f5a6b5b..3537b6f704 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -493,6 +493,9 @@ /* Define to 1 if you have the header file. */ /* #undef HAVE_WINLDAP_H */ +/* Define to 1 if your compiler understands __builtin_bswap16. */ +/* #undef HAVE__BUILTIN_BSWAP16 */ + /* Define to 1 if your compiler understands __builtin_bswap32. */ /* #undef HAVE__BUILTIN_BSWAP32 */ diff --git a/src/include/port/pg_bswap.h b/src/include/port/pg_bswap.h index 50a6bd106b..f67ad4b133 100644 --- a/src/include/port/pg_bswap.h +++ b/src/include/port/pg_bswap.h @@ -3,15 +3,13 @@ * pg_bswap.h * Byte swapping. * - * Macros for reversing the byte order of 32-bit and 64-bit unsigned integers. + * Macros for reversing the byte order of 16, 32 and 64-bit unsigned integers. * For example, 0xAABBCCDD becomes 0xDDCCBBAA. These are just wrappers for * built-in functions provided by the compiler where support exists. - * Elsewhere, beware of multiple evaluations of the arguments! * - * Note that the GCC built-in functions __builtin_bswap32() and - * __builtin_bswap64() are documented as accepting single arguments of type - * uint32_t and uint64_t respectively (these are also the respective return - * types). Use caution when using these wrapper macros with signed integers. + * Note that all of these functions accept unsigned integers as arguments and + * return the same. Use caution when using these wrapper macros with signed + * integers. * * Copyright (c) 2015-2017, PostgreSQL Global Development Group * @@ -22,28 +20,114 @@ #ifndef PG_BSWAP_H #define PG_BSWAP_H -#ifdef HAVE__BUILTIN_BSWAP32 -#define BSWAP32(x) __builtin_bswap32(x) + +/* In all supported versions msvc provides _byteswap_* functions in stdlib.h */ +#ifdef _MSC_VER +#include +#endif + + +/* implementation of uint16 pg_bswap16(uint16) */ +#if defined(HAVE__BUILTIN_BSWAP16) + +#define pg_bswap16(x) __builtin_bswap16(x) + +#elif defined(_MSC_VER) + +#define pg_bswap16(x) _byteswap_ushort(x) + +#else + +static inline uint16 +pg_bswap16(uint16 x) +{ + return + ((x << 8) & 0xff00) | + ((x >> 8) & 0x00ff); +} + +#endif /* HAVE__BUILTIN_BSWAP16 */ + + +/* implementation of uint32 pg_bswap32(uint32) */ +#if defined(HAVE__BUILTIN_BSWAP32) + +#define pg_bswap32(x) __builtin_bswap32(x) + +#elif defined(_MSC_VER) + +#define pg_bswap32(x) _byteswap_ulong(x) + #else -#define BSWAP32(x) ((((x) << 24) & 0xff000000) | \ - (((x) << 8) & 0x00ff0000) | \ - (((x) >> 8) & 0x0000ff00) | \ - (((x) >> 24) & 0x000000ff)) + +static inline uint32 +pg_bswap32(uint32 x) +{ + return + ((x << 24) & 0xff000000) | + ((x << 8) & 0x00ff0000) | + ((x >> 8) & 0x0000ff00) | + ((x >> 24) & 0x000000ff); +} + #endif /* HAVE__BUILTIN_BSWAP32 */ -#ifdef HAVE__BUILTIN_BSWAP64 -#define BSWAP64(x) __builtin_bswap64(x) + +/* implementation of uint64 pg_bswap64(uint64) */ +#if defined(HAVE__BUILTIN_BSWAP64) + +#define pg_bswap64(x) __builtin_bswap64(x) + + +#elif defined(_MSC_VER) + +#define pg_bswap64(x) _byteswap_uint64(x) + #else -#define BSWAP64(x) ((((x) << 56) & UINT64CONST(0xff00000000000000)) | \ - (((x) << 40) & UINT64CONST(0x00ff000000000000)) | \ - (((x) << 24) & UINT64CONST(0x0000ff0000000000)) | \ - (((x) << 8) & UINT64CONST(0x000000ff00000000)) | \ - (((x) >> 8) & 
UINT64CONST(0x00000000ff000000)) | \ - (((x) >> 24) & UINT64CONST(0x0000000000ff0000)) | \ - (((x) >> 40) & UINT64CONST(0x000000000000ff00)) | \ - (((x) >> 56) & UINT64CONST(0x00000000000000ff))) + +static inline uint16 +pg_bswap64(uint16 x) +{ + return + ((x << 56) & UINT64CONST(0xff00000000000000)) | + ((x << 40) & UINT64CONST(0x00ff000000000000)) | + ((x << 24) & UINT64CONST(0x0000ff0000000000)) | + ((x << 8) & UINT64CONST(0x000000ff00000000)) | + ((x >> 8) & UINT64CONST(0x00000000ff000000)) | + ((x >> 24) & UINT64CONST(0x0000000000ff0000)) | + ((x >> 40) & UINT64CONST(0x000000000000ff00)) | + ((x >> 56) & UINT64CONST(0x00000000000000ff)); +} #endif /* HAVE__BUILTIN_BSWAP64 */ + +/* + * Portable and fast equivalents for ntohs, ntohl, htons, htonl, + * additionally extended to 64 bits. + */ +#ifdef WORDS_BIGENDIAN + +#define pg_hton16(x) (x) +#define pg_hton32(x) (x) +#define pg_hton64(x) (x) + +#define pg_ntoh16(x) (x) +#define pg_ntoh32(x) (x) +#define pg_ntoh64(x) (x) + +#else + +#define pg_hton16(x) pg_bswap16(x) +#define pg_hton32(x) pg_bswap32(x) +#define pg_hton64(x) pg_bswap64(x) + +#define pg_ntoh16(x) pg_bswap16(x) +#define pg_ntoh32(x) pg_bswap32(x) +#define pg_ntoh64(x) pg_bswap64(x) + +#endif /* WORDS_BIGENDIAN */ + + /* * Rearrange the bytes of a Datum from big-endian order into the native byte * order. On big-endian machines, this does nothing at all. Note that the C @@ -60,9 +144,9 @@ #define DatumBigEndianToNative(x) (x) #else /* !WORDS_BIGENDIAN */ #if SIZEOF_DATUM == 8 -#define DatumBigEndianToNative(x) BSWAP64(x) +#define DatumBigEndianToNative(x) pg_bswap64(x) #else /* SIZEOF_DATUM != 8 */ -#define DatumBigEndianToNative(x) BSWAP32(x) +#define DatumBigEndianToNative(x) pg_bswap32(x) #endif /* SIZEOF_DATUM == 8 */ #endif /* WORDS_BIGENDIAN */ diff --git a/src/include/port/pg_crc32c.h b/src/include/port/pg_crc32c.h index cd58ecc988..32d7176273 100644 --- a/src/include/port/pg_crc32c.h +++ b/src/include/port/pg_crc32c.h @@ -73,7 +73,7 @@ extern pg_crc32c (*pg_comp_crc32c) (pg_crc32c crc, const void *data, size_t len) #define COMP_CRC32C(crc, data, len) \ ((crc) = pg_comp_crc32c_sb8((crc), (data), (len))) #ifdef WORDS_BIGENDIAN -#define FIN_CRC32C(crc) ((crc) = BSWAP32(crc) ^ 0xFFFFFFFF) +#define FIN_CRC32C(crc) ((crc) = pg_bswap32(crc) ^ 0xFFFFFFFF) #else #define FIN_CRC32C(crc) ((crc) ^= 0xFFFFFFFF) #endif From f14241236ea2e306dc665635665c7f88669b6ca4 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 29 Sep 2017 15:52:55 -0700 Subject: [PATCH 0301/1087] Fix typo. Reported-By: Thomas Munro and Jesper Pedersen --- src/include/utils/hashutils.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/include/utils/hashutils.h b/src/include/utils/hashutils.h index 35281689e8..366bd0e78b 100644 --- a/src/include/utils/hashutils.h +++ b/src/include/utils/hashutils.h @@ -22,7 +22,7 @@ hash_combine(uint32 a, uint32 b) /* - * Simple inline murmur hash implementation hashing a 32 bit ingeger, for + * Simple inline murmur hash implementation hashing a 32 bit integer, for * performance. */ static inline uint32 From 248e33756b425335d94a32ffc8e9aace04f82c31 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 29 Sep 2017 17:41:20 -0700 Subject: [PATCH 0302/1087] Fix copy & pasto in 510b8cbff15f.
Reported-By: Peter Geoghegan --- src/include/port/pg_bswap.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/include/port/pg_bswap.h b/src/include/port/pg_bswap.h index f67ad4b133..bba0ca5490 100644 --- a/src/include/port/pg_bswap.h +++ b/src/include/port/pg_bswap.h @@ -85,8 +85,8 @@ pg_bswap32(uint32 x) #else -static inline uint16 -pg_bswap64(uint16 x) +static inline uint64 +pg_bswap64(uint64 x) { return ((x << 56) & UINT64CONST(0xff00000000000000)) | From c12d570fa147d0ec273df53de3a2802925d551ba Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 30 Sep 2017 13:40:56 -0400 Subject: [PATCH 0303/1087] Support arrays over domains. Allowing arrays with a domain type as their element type was left un-done in the original domain patch, but not for any very good reason. This omission leads to such surprising results as array_agg() not working on a domain column, because the parser can't identify a suitable output type for the polymorphic aggregate. In order to fix this, first clean up the APIs of coerce_to_domain() and some internal functions in parse_coerce.c so that we consistently pass around a CoercionContext along with CoercionForm. Previously, we sometimes passed an "isExplicit" boolean flag instead, which is strictly less information; and coerce_to_domain() didn't even get that, but instead had to reverse-engineer isExplicit from CoercionForm. That's contrary to the documentation in primnodes.h that says that CoercionForm only affects display and not semantics. I don't think this change fixes any live bugs, but it makes things more consistent. The main reason for doing it though is that now build_coercion_expression() receives ccontext, which it needs in order to be able to recursively invoke coerce_to_target_type(). Next, reimplement ArrayCoerceExpr so that the node does not directly know any details of what has to be done to the individual array elements while performing the array coercion. Instead, the per-element processing is represented by a sub-expression whose input is a source array element and whose output is a target array element. This simplifies life in parse_coerce.c, because it can build that sub-expression by a recursive invocation of coerce_to_target_type(). The executor now handles the per-element processing as a compiled expression instead of hard-wired code. The main advantage of this is that we can use a single ArrayCoerceExpr to handle as many as three successive steps per element: base type conversion, typmod coercion, and domain constraint checking. The old code used two stacked ArrayCoerceExprs to handle type + typmod coercion, which was pretty inefficient, and adding yet another array deconstruction to do domain constraint checking seemed very unappetizing. In the case where we just need a single, very simple coercion function, doing this straightforwardly leads to a noticeable increase in the per-array-element runtime cost. Hence, add an additional shortcut evalfunc in execExprInterp.c that skips unnecessary overhead for that specific form of expression. The runtime speed of simple cases is within 1% or so of where it was before, while cases that previously required two levels of array processing are significantly faster. Finally, create an implicit array type for every domain type, as we do for base types, enums, etc. Everything except the array-coercion case seems to just work without further effort. 
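
For illustration, a minimal example of what the new implicit domain array types make possible (the table name here is arbitrary; the posint domain matches the regression tests added below):

    create domain posint as int check (value > 0);
    create table agg_test (f1 posint);
    insert into agg_test values (1), (2);
    select array_agg(f1) from agg_test;  -- formerly failed: no array type for posint
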
Tom Lane, reviewed by Andrew Dunstan Discussion: https://postgr.es/m/9852.1499791473@sss.pgh.pa.us --- .../pg_stat_statements/pg_stat_statements.c | 1 + doc/src/sgml/array.sgml | 5 +- src/backend/catalog/dependency.c | 9 +- src/backend/commands/typecmds.c | 50 +++++- src/backend/executor/execExpr.c | 58 ++++--- src/backend/executor/execExprInterp.c | 86 ++++++---- src/backend/nodes/copyfuncs.c | 3 +- src/backend/nodes/equalfuncs.c | 3 +- src/backend/nodes/nodeFuncs.c | 20 +-- src/backend/nodes/outfuncs.c | 3 +- src/backend/nodes/readfuncs.c | 3 +- src/backend/optimizer/path/costsize.c | 13 +- src/backend/optimizer/plan/setrefs.c | 6 - src/backend/optimizer/prep/preptlist.c | 2 +- src/backend/optimizer/util/clauses.c | 46 ++++-- src/backend/parser/parse_coerce.c | 155 ++++++++++-------- src/backend/rewrite/rewriteHandler.c | 4 +- src/backend/rewrite/rewriteManip.c | 2 +- src/backend/utils/adt/arrayfuncs.c | 85 ++++------ src/backend/utils/adt/selfuncs.c | 15 +- src/backend/utils/fmgr/fmgr.c | 12 +- src/include/catalog/catversion.h | 2 +- src/include/executor/execExpr.h | 7 +- src/include/nodes/primnodes.h | 14 +- src/include/parser/parse_coerce.h | 5 +- src/include/utils/array.h | 9 +- src/test/regress/expected/domain.out | 95 +++++++++++ src/test/regress/sql/domain.sql | 43 +++++ 28 files changed, 492 insertions(+), 264 deletions(-) diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c index 3ab1fd2db4..a0e7a46871 100644 --- a/contrib/pg_stat_statements/pg_stat_statements.c +++ b/contrib/pg_stat_statements/pg_stat_statements.c @@ -2630,6 +2630,7 @@ JumbleExpr(pgssJumbleState *jstate, Node *node) APP_JUMB(acexpr->resulttype); JumbleExpr(jstate, (Node *) acexpr->arg); + JumbleExpr(jstate, (Node *) acexpr->elemexpr); } break; case T_ConvertRowtypeExpr: diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index dd0d20e541..88eb4be04d 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -10,9 +10,8 @@ PostgreSQL allows columns of a table to be defined as variable-length multidimensional arrays. Arrays of any - built-in or user-defined base type, enum type, or composite type - can be created. - Arrays of domains are not yet supported. + built-in or user-defined base type, enum type, composite type, range type, + or domain can be created. 
diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c index 6fffc290fa..2668650f27 100644 --- a/src/backend/catalog/dependency.c +++ b/src/backend/catalog/dependency.c @@ -1738,11 +1738,14 @@ find_expr_references_walker(Node *node, { ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node; - if (OidIsValid(acoerce->elemfuncid)) - add_object_address(OCLASS_PROC, acoerce->elemfuncid, 0, - context->addrs); + /* as above, depend on type */ add_object_address(OCLASS_TYPE, acoerce->resulttype, 0, context->addrs); + /* the collation might not be referenced anywhere else, either */ + if (OidIsValid(acoerce->resultcollid) && + acoerce->resultcollid != DEFAULT_COLLATION_OID) + add_object_address(OCLASS_COLLATION, acoerce->resultcollid, 0, + context->addrs); /* fall through to examine arguments */ } else if (IsA(node, ConvertRowtypeExpr)) diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c index 4c490ed5c1..c1b87e09e7 100644 --- a/src/backend/commands/typecmds.c +++ b/src/backend/commands/typecmds.c @@ -729,6 +729,7 @@ ObjectAddress DefineDomain(CreateDomainStmt *stmt) { char *domainName; + char *domainArrayName; Oid domainNamespace; AclResult aclresult; int16 internalLength; @@ -757,6 +758,7 @@ DefineDomain(CreateDomainStmt *stmt) Oid basetypeoid; Oid old_type_oid; Oid domaincoll; + Oid domainArrayOid; Form_pg_type baseType; int32 basetypeMod; Oid baseColl; @@ -1027,6 +1029,9 @@ DefineDomain(CreateDomainStmt *stmt) } } + /* Allocate OID for array type */ + domainArrayOid = AssignTypeArrayOid(); + /* * Have TypeCreate do all the real work. */ @@ -1051,7 +1056,7 @@ DefineDomain(CreateDomainStmt *stmt) analyzeProcedure, /* analyze procedure */ InvalidOid, /* no array element type */ false, /* this isn't an array */ - InvalidOid, /* no arrays for domains (yet) */ + domainArrayOid, /* array type we are about to create */ basetypeoid, /* base type ID */ defaultValue, /* default type value (text) */ defaultValueBin, /* default type value (binary) */ @@ -1063,6 +1068,48 @@ DefineDomain(CreateDomainStmt *stmt) typNotNull, /* Type NOT NULL */ domaincoll); /* type's collation */ + /* + * Create the array type that goes with it. + */ + domainArrayName = makeArrayTypeName(domainName, domainNamespace); + + /* alignment must be 'i' or 'd' for arrays */ + alignment = (alignment == 'd') ? 
'd' : 'i'; + + TypeCreate(domainArrayOid, /* force assignment of this type OID */ + domainArrayName, /* type name */ + domainNamespace, /* namespace */ + InvalidOid, /* relation oid (n/a here) */ + 0, /* relation kind (ditto) */ + GetUserId(), /* owner's ID */ + -1, /* internal size (always varlena) */ + TYPTYPE_BASE, /* type-type (base type) */ + TYPCATEGORY_ARRAY, /* type-category (array) */ + false, /* array types are never preferred */ + delimiter, /* array element delimiter */ + F_ARRAY_IN, /* input procedure */ + F_ARRAY_OUT, /* output procedure */ + F_ARRAY_RECV, /* receive procedure */ + F_ARRAY_SEND, /* send procedure */ + InvalidOid, /* typmodin procedure - none */ + InvalidOid, /* typmodout procedure - none */ + F_ARRAY_TYPANALYZE, /* analyze procedure */ + address.objectId, /* element type ID */ + true, /* yes this is an array type */ + InvalidOid, /* no further array type */ + InvalidOid, /* base type ID */ + NULL, /* never a default type value */ + NULL, /* binary default isn't sent either */ + false, /* never passed by value */ + alignment, /* see above */ + 'x', /* ARRAY is always toastable */ + -1, /* typMod (Domains only) */ + 0, /* Array dimensions of typbasetype */ + false, /* Type NOT NULL */ + domaincoll); /* type's collation */ + + pfree(domainArrayName); + /* * Process constraints which refer to the domain ID returned by TypeCreate */ @@ -1139,6 +1186,7 @@ DefineEnum(CreateEnumStmt *stmt) errmsg("type \"%s\" already exists", enumName))); } + /* Allocate OID for array type */ enumArrayOid = AssignTypeArrayOid(); /* Create the pg_type entry */ diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index be9d23bc32..e0839616e1 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -1225,6 +1225,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, { ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node; Oid resultelemtype; + ExprState *elemstate; /* evaluate argument into step's result area */ ExecInitExprRec(acoerce->arg, parent, state, resv, resnull); @@ -1234,42 +1235,49 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("target type is not an array"))); - /* Arrays over domains aren't supported yet */ - Assert(getBaseType(resultelemtype) == resultelemtype); - scratch.opcode = EEOP_ARRAYCOERCE; - scratch.d.arraycoerce.coerceexpr = acoerce; - scratch.d.arraycoerce.resultelemtype = resultelemtype; + /* + * Construct a sub-expression for the per-element expression; + * but don't ready it until after we check it for triviality. + * We assume it hasn't any Var references, but does have a + * CaseTestExpr representing the source array element values. 
+ */ + elemstate = makeNode(ExprState); + elemstate->expr = acoerce->elemexpr; + elemstate->innermost_caseval = (Datum *) palloc(sizeof(Datum)); + elemstate->innermost_casenull = (bool *) palloc(sizeof(bool)); - if (OidIsValid(acoerce->elemfuncid)) - { - AclResult aclresult; + ExecInitExprRec(acoerce->elemexpr, parent, elemstate, + &elemstate->resvalue, &elemstate->resnull); - /* Check permission to call function */ - aclresult = pg_proc_aclcheck(acoerce->elemfuncid, - GetUserId(), - ACL_EXECUTE); - if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, - get_func_name(acoerce->elemfuncid)); - InvokeFunctionExecuteHook(acoerce->elemfuncid); + if (elemstate->steps_len == 1 && + elemstate->steps[0].opcode == EEOP_CASE_TESTVAL) + { + /* Trivial, so we need no per-element work at runtime */ + elemstate = NULL; + } + else + { + /* Not trivial, so append a DONE step */ + scratch.opcode = EEOP_DONE; + ExprEvalPushStep(elemstate, &scratch); + /* and ready the subexpression */ + ExecReadyExpr(elemstate); + } - /* Set up the primary fmgr lookup information */ - scratch.d.arraycoerce.elemfunc = - (FmgrInfo *) palloc0(sizeof(FmgrInfo)); - fmgr_info(acoerce->elemfuncid, - scratch.d.arraycoerce.elemfunc); - fmgr_info_set_expr((Node *) acoerce, - scratch.d.arraycoerce.elemfunc); + scratch.opcode = EEOP_ARRAYCOERCE; + scratch.d.arraycoerce.elemexprstate = elemstate; + scratch.d.arraycoerce.resultelemtype = resultelemtype; + if (elemstate) + { /* Set up workspace for array_map */ scratch.d.arraycoerce.amstate = (ArrayMapState *) palloc0(sizeof(ArrayMapState)); } else { - /* Don't need workspace if there's no conversion func */ - scratch.d.arraycoerce.elemfunc = NULL; + /* Don't need workspace if there's no subexpression */ scratch.d.arraycoerce.amstate = NULL; } diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 39d50f98a9..c5e97ef9e2 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -34,10 +34,8 @@ * * For very simple instructions the overhead of the full interpreter * "startup", as minimal as it is, is noticeable. Therefore - * ExecReadyInterpretedExpr will choose to implement simple scalar Var - * and Const expressions using special fast-path routines (ExecJust*). - * Benchmarking shows anything more complex than those may as well use the - * "full interpreter". + * ExecReadyInterpretedExpr will choose to implement certain simple + * opcode patterns using special fast-path routines (ExecJust*). * * Complex or uncommon instructions are not implemented in-line in * ExecInterpExpr(), rather we call out to a helper function appearing later @@ -149,6 +147,7 @@ static Datum ExecJustConst(ExprState *state, ExprContext *econtext, bool *isnull static Datum ExecJustAssignInnerVar(ExprState *state, ExprContext *econtext, bool *isnull); static Datum ExecJustAssignOuterVar(ExprState *state, ExprContext *econtext, bool *isnull); static Datum ExecJustAssignScanVar(ExprState *state, ExprContext *econtext, bool *isnull); +static Datum ExecJustApplyFuncToCase(ExprState *state, ExprContext *econtext, bool *isnull); /* @@ -184,10 +183,8 @@ ExecReadyInterpretedExpr(ExprState *state) /* * Select fast-path evalfuncs for very simple expressions. "Starting up" - * the full interpreter is a measurable overhead for these. Plain Vars - * and Const seem to be the only ones where the intrinsic cost is small - * enough that the overhead of ExecInterpExpr matters. 
For more complex - * expressions it's cheaper to use ExecInterpExpr always. + * the full interpreter is a measurable overhead for these, and these + * patterns occur often enough to be worth optimizing. */ if (state->steps_len == 3) { @@ -230,6 +227,13 @@ ExecReadyInterpretedExpr(ExprState *state) state->evalfunc = ExecJustAssignScanVar; return; } + else if (step0 == EEOP_CASE_TESTVAL && + step1 == EEOP_FUNCEXPR_STRICT && + state->steps[0].d.casetest.value) + { + state->evalfunc = ExecJustApplyFuncToCase; + return; + } } else if (state->steps_len == 2 && state->steps[0].opcode == EEOP_CONST) @@ -1275,7 +1279,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_CASE(EEOP_ARRAYCOERCE) { /* too complex for an inline implementation */ - ExecEvalArrayCoerce(state, op); + ExecEvalArrayCoerce(state, op, econtext); EEO_NEXT(); } @@ -1811,6 +1815,43 @@ ExecJustAssignScanVar(ExprState *state, ExprContext *econtext, bool *isnull) return 0; } +/* Evaluate CASE_TESTVAL and apply a strict function to it */ +static Datum +ExecJustApplyFuncToCase(ExprState *state, ExprContext *econtext, bool *isnull) +{ + ExprEvalStep *op = &state->steps[0]; + FunctionCallInfo fcinfo; + bool *argnull; + int argno; + Datum d; + + /* + * XXX with some redesign of the CaseTestExpr mechanism, maybe we could + * get rid of this data shuffling? + */ + *op->resvalue = *op->d.casetest.value; + *op->resnull = *op->d.casetest.isnull; + + op++; + + fcinfo = op->d.func.fcinfo_data; + argnull = fcinfo->argnull; + + /* strict function, so check for NULL args */ + for (argno = 0; argno < op->d.func.nargs; argno++) + { + if (argnull[argno]) + { + *isnull = true; + return (Datum) 0; + } + } + fcinfo->isnull = false; + d = op->d.func.fn_addr(fcinfo); + *isnull = fcinfo->isnull; + return d; +} + /* * Do one-time initialization of interpretation machinery. @@ -2345,11 +2386,9 @@ ExecEvalArrayExpr(ExprState *state, ExprEvalStep *op) * Source array is in step's result variable. */ void -ExecEvalArrayCoerce(ExprState *state, ExprEvalStep *op) +ExecEvalArrayCoerce(ExprState *state, ExprEvalStep *op, ExprContext *econtext) { - ArrayCoerceExpr *acoerce = op->d.arraycoerce.coerceexpr; Datum arraydatum; - FunctionCallInfoData locfcinfo; /* NULL array -> NULL result */ if (*op->resnull) @@ -2361,7 +2400,7 @@ ExecEvalArrayCoerce(ExprState *state, ExprEvalStep *op) * If it's binary-compatible, modify the element type in the array header, * but otherwise leave the array as we received it. */ - if (!OidIsValid(acoerce->elemfuncid)) + if (op->d.arraycoerce.elemexprstate == NULL) { /* Detoast input array if necessary, and copy in any case */ ArrayType *array = DatumGetArrayTypePCopy(arraydatum); @@ -2372,23 +2411,12 @@ ExecEvalArrayCoerce(ExprState *state, ExprEvalStep *op) } /* - * Use array_map to apply the function to each array element. - * - * We pass on the desttypmod and isExplicit flags whether or not the - * function wants them. - * - * Note: coercion functions are assumed to not use collation. + * Use array_map to apply the sub-expression to each array element. 
*/ - InitFunctionCallInfoData(locfcinfo, op->d.arraycoerce.elemfunc, 3, - InvalidOid, NULL, NULL); - locfcinfo.arg[0] = arraydatum; - locfcinfo.arg[1] = Int32GetDatum(acoerce->resulttypmod); - locfcinfo.arg[2] = BoolGetDatum(acoerce->isExplicit); - locfcinfo.argnull[0] = false; - locfcinfo.argnull[1] = false; - locfcinfo.argnull[2] = false; - - *op->resvalue = array_map(&locfcinfo, op->d.arraycoerce.resultelemtype, + *op->resvalue = array_map(arraydatum, + op->d.arraycoerce.elemexprstate, + econtext, + op->d.arraycoerce.resultelemtype, op->d.arraycoerce.amstate); } diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index f1bed14e2b..b274af26a4 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -1698,11 +1698,10 @@ _copyArrayCoerceExpr(const ArrayCoerceExpr *from) ArrayCoerceExpr *newnode = makeNode(ArrayCoerceExpr); COPY_NODE_FIELD(arg); - COPY_SCALAR_FIELD(elemfuncid); + COPY_NODE_FIELD(elemexpr); COPY_SCALAR_FIELD(resulttype); COPY_SCALAR_FIELD(resulttypmod); COPY_SCALAR_FIELD(resultcollid); - COPY_SCALAR_FIELD(isExplicit); COPY_SCALAR_FIELD(coerceformat); COPY_LOCATION_FIELD(location); diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 8b56b9146a..5c839f4c31 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -513,11 +513,10 @@ static bool _equalArrayCoerceExpr(const ArrayCoerceExpr *a, const ArrayCoerceExpr *b) { COMPARE_NODE_FIELD(arg); - COMPARE_SCALAR_FIELD(elemfuncid); + COMPARE_NODE_FIELD(elemexpr); COMPARE_SCALAR_FIELD(resulttype); COMPARE_SCALAR_FIELD(resulttypmod); COMPARE_SCALAR_FIELD(resultcollid); - COMPARE_SCALAR_FIELD(isExplicit); COMPARE_COERCIONFORM_FIELD(coerceformat); COMPARE_LOCATION_FIELD(location); diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c index e3eb0c5788..8e6f27e153 100644 --- a/src/backend/nodes/nodeFuncs.c +++ b/src/backend/nodes/nodeFuncs.c @@ -1717,15 +1717,6 @@ check_functions_in_node(Node *node, check_function_callback checker, return true; } break; - case T_ArrayCoerceExpr: - { - ArrayCoerceExpr *expr = (ArrayCoerceExpr *) node; - - if (OidIsValid(expr->elemfuncid) && - checker(expr->elemfuncid, context)) - return true; - } - break; case T_RowCompareExpr: { RowCompareExpr *rcexpr = (RowCompareExpr *) node; @@ -2023,7 +2014,15 @@ expression_tree_walker(Node *node, case T_CoerceViaIO: return walker(((CoerceViaIO *) node)->arg, context); case T_ArrayCoerceExpr: - return walker(((ArrayCoerceExpr *) node)->arg, context); + { + ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node; + + if (walker(acoerce->arg, context)) + return true; + if (walker(acoerce->elemexpr, context)) + return true; + } + break; case T_ConvertRowtypeExpr: return walker(((ConvertRowtypeExpr *) node)->arg, context); case T_CollateExpr: @@ -2705,6 +2704,7 @@ expression_tree_mutator(Node *node, FLATCOPY(newnode, acoerce, ArrayCoerceExpr); MUTATE(newnode->arg, acoerce->arg, Expr *); + MUTATE(newnode->elemexpr, acoerce->elemexpr, Expr *); return (Node *) newnode; } break; diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index b83d919e40..2532edc94a 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -1394,11 +1394,10 @@ _outArrayCoerceExpr(StringInfo str, const ArrayCoerceExpr *node) WRITE_NODE_TYPE("ARRAYCOERCEEXPR"); WRITE_NODE_FIELD(arg); - WRITE_OID_FIELD(elemfuncid); + WRITE_NODE_FIELD(elemexpr); WRITE_OID_FIELD(resulttype); WRITE_INT_FIELD(resulttypmod); WRITE_OID_FIELD(resultcollid); - 
WRITE_BOOL_FIELD(isExplicit); WRITE_ENUM_FIELD(coerceformat, CoercionForm); WRITE_LOCATION_FIELD(location); } diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index fbf8330735..07ba69178c 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -892,11 +892,10 @@ _readArrayCoerceExpr(void) READ_LOCALS(ArrayCoerceExpr); READ_NODE_FIELD(arg); - READ_OID_FIELD(elemfuncid); + READ_NODE_FIELD(elemexpr); READ_OID_FIELD(resulttype); READ_INT_FIELD(resulttypmod); READ_OID_FIELD(resultcollid); - READ_BOOL_FIELD(isExplicit); READ_ENUM_FIELD(coerceformat, CoercionForm); READ_LOCATION_FIELD(location); diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 0baf9785c9..f76da49044 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -3632,11 +3632,14 @@ cost_qual_eval_walker(Node *node, cost_qual_eval_context *context) else if (IsA(node, ArrayCoerceExpr)) { ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node; - Node *arraynode = (Node *) acoerce->arg; - - if (OidIsValid(acoerce->elemfuncid)) - context->total.per_tuple += get_func_cost(acoerce->elemfuncid) * - cpu_operator_cost * estimate_array_length(arraynode); + QualCost perelemcost; + + cost_qual_eval_node(&perelemcost, (Node *) acoerce->elemexpr, + context->root); + context->total.startup += perelemcost.startup; + if (perelemcost.per_tuple > 0) + context->total.per_tuple += perelemcost.per_tuple * + estimate_array_length((Node *) acoerce->arg); } else if (IsA(node, RowCompareExpr)) { diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c index b0c9e94459..dee4414cec 100644 --- a/src/backend/optimizer/plan/setrefs.c +++ b/src/backend/optimizer/plan/setrefs.c @@ -1395,12 +1395,6 @@ fix_expr_common(PlannerInfo *root, Node *node) record_plan_function_dependency(root, ((ScalarArrayOpExpr *) node)->opfuncid); } - else if (IsA(node, ArrayCoerceExpr)) - { - if (OidIsValid(((ArrayCoerceExpr *) node)->elemfuncid)) - record_plan_function_dependency(root, - ((ArrayCoerceExpr *) node)->elemfuncid); - } else if (IsA(node, Const)) { Const *con = (Const *) node; diff --git a/src/backend/optimizer/prep/preptlist.c b/src/backend/optimizer/prep/preptlist.c index 9d75e8612a..d7db32ebf5 100644 --- a/src/backend/optimizer/prep/preptlist.c +++ b/src/backend/optimizer/prep/preptlist.c @@ -306,9 +306,9 @@ expand_targetlist(List *tlist, int command_type, new_expr = coerce_to_domain(new_expr, InvalidOid, -1, atttype, + COERCION_IMPLICIT, COERCE_IMPLICIT_CAST, -1, - false, false); } else diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index 93add27dbe..7961362280 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -1361,6 +1361,17 @@ contain_nonstrict_functions_walker(Node *node, void *context) return true; if (IsA(node, FieldStore)) return true; + if (IsA(node, ArrayCoerceExpr)) + { + /* + * ArrayCoerceExpr is strict at the array level, regardless of what + * the per-element expression is; so we should ignore elemexpr and + * recurse only into the arg. 
+ */ + return expression_tree_walker((Node *) ((ArrayCoerceExpr *) node)->arg, + contain_nonstrict_functions_walker, + context); + } if (IsA(node, CaseExpr)) return true; if (IsA(node, ArrayExpr)) @@ -1380,14 +1391,11 @@ contain_nonstrict_functions_walker(Node *node, void *context) if (IsA(node, BooleanTest)) return true; - /* - * Check other function-containing nodes; but ArrayCoerceExpr is strict at - * the array level, regardless of elemfunc. - */ - if (!IsA(node, ArrayCoerceExpr) && - check_functions_in_node(node, contain_nonstrict_functions_checker, + /* Check other function-containing nodes */ + if (check_functions_in_node(node, contain_nonstrict_functions_checker, context)) return true; + return expression_tree_walker(node, contain_nonstrict_functions_walker, context); } @@ -1757,7 +1765,7 @@ find_nonnullable_rels_walker(Node *node, bool top_level) } else if (IsA(node, ArrayCoerceExpr)) { - /* ArrayCoerceExpr is strict at the array level */ + /* ArrayCoerceExpr is strict at the array level; ignore elemexpr */ ArrayCoerceExpr *expr = (ArrayCoerceExpr *) node; result = find_nonnullable_rels_walker((Node *) expr->arg, top_level); @@ -1965,7 +1973,7 @@ find_nonnullable_vars_walker(Node *node, bool top_level) } else if (IsA(node, ArrayCoerceExpr)) { - /* ArrayCoerceExpr is strict at the array level */ + /* ArrayCoerceExpr is strict at the array level; ignore elemexpr */ ArrayCoerceExpr *expr = (ArrayCoerceExpr *) node; result = find_nonnullable_vars_walker((Node *) expr->arg, top_level); @@ -3005,32 +3013,38 @@ eval_const_expressions_mutator(Node *node, { ArrayCoerceExpr *expr = (ArrayCoerceExpr *) node; Expr *arg; + Expr *elemexpr; ArrayCoerceExpr *newexpr; /* - * Reduce constants in the ArrayCoerceExpr's argument, then - * build a new ArrayCoerceExpr. + * Reduce constants in the ArrayCoerceExpr's argument and + * per-element expressions, then build a new ArrayCoerceExpr. */ arg = (Expr *) eval_const_expressions_mutator((Node *) expr->arg, context); + elemexpr = (Expr *) eval_const_expressions_mutator((Node *) expr->elemexpr, + context); newexpr = makeNode(ArrayCoerceExpr); newexpr->arg = arg; - newexpr->elemfuncid = expr->elemfuncid; + newexpr->elemexpr = elemexpr; newexpr->resulttype = expr->resulttype; newexpr->resulttypmod = expr->resulttypmod; newexpr->resultcollid = expr->resultcollid; - newexpr->isExplicit = expr->isExplicit; newexpr->coerceformat = expr->coerceformat; newexpr->location = expr->location; /* - * If constant argument and it's a binary-coercible or - * immutable conversion, we can simplify it to a constant. + * If constant argument and per-element expression is + * immutable, we can simplify the whole thing to a constant. + * Exception: although contain_mutable_functions considers + * CoerceToDomain immutable for historical reasons, let's not + * do so here; this ensures coercion to an array-over-domain + * does not apply the domain's constraints until runtime. 
*/ if (arg && IsA(arg, Const) && - (!OidIsValid(newexpr->elemfuncid) || - func_volatile(newexpr->elemfuncid) == PROVOLATILE_IMMUTABLE)) + elemexpr && !IsA(elemexpr, CoerceToDomain) && + !contain_mutable_functions((Node *) elemexpr)) return (Node *) evaluate_expr((Expr *) newexpr, newexpr->resulttype, newexpr->resulttypmod, diff --git a/src/backend/parser/parse_coerce.c b/src/backend/parser/parse_coerce.c index e79ad26e71..53457dc2c8 100644 --- a/src/backend/parser/parse_coerce.c +++ b/src/backend/parser/parse_coerce.c @@ -34,15 +34,16 @@ static Node *coerce_type_typmod(Node *node, Oid targetTypeId, int32 targetTypMod, - CoercionForm cformat, int location, - bool isExplicit, bool hideInputCoercion); + CoercionContext ccontext, CoercionForm cformat, + int location, + bool hideInputCoercion); static void hide_coercion_node(Node *node); static Node *build_coercion_expression(Node *node, CoercionPathType pathtype, Oid funcId, Oid targetTypeId, int32 targetTypMod, - CoercionForm cformat, int location, - bool isExplicit); + CoercionContext ccontext, CoercionForm cformat, + int location); static Node *coerce_record_to_complex(ParseState *pstate, Node *node, Oid targetTypeId, CoercionContext ccontext, @@ -110,8 +111,7 @@ coerce_to_target_type(ParseState *pstate, Node *expr, Oid exprtype, */ result = coerce_type_typmod(result, targettype, targettypmod, - cformat, location, - (cformat != COERCE_IMPLICIT_CAST), + ccontext, cformat, location, (result != expr && !IsA(result, Const))); if (expr != origexpr) @@ -355,7 +355,8 @@ coerce_type(ParseState *pstate, Node *node, result = coerce_to_domain(result, baseTypeId, baseTypeMod, targetTypeId, - cformat, location, false, false); + ccontext, cformat, location, + false); ReleaseSysCache(baseType); @@ -370,10 +371,10 @@ coerce_type(ParseState *pstate, Node *node, * NULL to indicate we should proceed with normal coercion. */ result = pstate->p_coerce_param_hook(pstate, - (Param *) node, - targetTypeId, - targetTypeMod, - location); + (Param *) node, + targetTypeId, + targetTypeMod, + location); if (result) return result; } @@ -417,20 +418,17 @@ coerce_type(ParseState *pstate, Node *node, result = build_coercion_expression(node, pathtype, funcId, baseTypeId, baseTypeMod, - cformat, location, - (cformat != COERCE_IMPLICIT_CAST)); + ccontext, cformat, location); /* * If domain, coerce to the domain type and relabel with domain - * type ID. We can skip the internal length-coercion step if the - * selected coercion function was a type-and-length coercion. + * type ID, hiding the previous coercion node. */ if (targetTypeId != baseTypeId) result = coerce_to_domain(result, baseTypeId, baseTypeMod, targetTypeId, - cformat, location, true, - exprIsLengthCoercion(result, - NULL)); + ccontext, cformat, location, + true); } else { @@ -444,7 +442,8 @@ coerce_type(ParseState *pstate, Node *node, * then we won't need a RelabelType node. */ result = coerce_to_domain(node, InvalidOid, -1, targetTypeId, - cformat, location, false, false); + ccontext, cformat, location, + false); if (result == node) { /* @@ -636,19 +635,17 @@ can_coerce_type(int nargs, Oid *input_typeids, Oid *target_typeids, * 'baseTypeMod': base type typmod of domain, if known (pass -1 if caller * has not bothered to look this up) * 'typeId': target type to coerce to - * 'cformat': coercion format + * 'ccontext': context indicator to control coercions + * 'cformat': coercion display format * 'location': coercion request location * 'hideInputCoercion': if true, hide the input coercion under this one. 
- * 'lengthCoercionDone': if true, caller already accounted for length, - * ie the input is already of baseTypMod as well as baseTypeId. * * If the target type isn't a domain, the given 'arg' is returned as-is. */ Node * coerce_to_domain(Node *arg, Oid baseTypeId, int32 baseTypeMod, Oid typeId, - CoercionForm cformat, int location, - bool hideInputCoercion, - bool lengthCoercionDone) + CoercionContext ccontext, CoercionForm cformat, int location, + bool hideInputCoercion) { CoerceToDomain *result; @@ -677,14 +674,9 @@ coerce_to_domain(Node *arg, Oid baseTypeId, int32 baseTypeMod, Oid typeId, * would be safe to do anyway, without lots of knowledge about what the * base type thinks the typmod means. */ - if (!lengthCoercionDone) - { - if (baseTypeMod >= 0) - arg = coerce_type_typmod(arg, baseTypeId, baseTypeMod, - COERCE_IMPLICIT_CAST, location, - (cformat != COERCE_IMPLICIT_CAST), - false); - } + arg = coerce_type_typmod(arg, baseTypeId, baseTypeMod, + ccontext, COERCE_IMPLICIT_CAST, location, + false); /* * Now build the domain coercion node. This represents run-time checking @@ -714,11 +706,14 @@ coerce_to_domain(Node *arg, Oid baseTypeId, int32 baseTypeMod, Oid typeId, * The caller must have already ensured that the value is of the correct * type, typically by applying coerce_type. * - * cformat determines the display properties of the generated node (if any), - * while isExplicit may affect semantics. If hideInputCoercion is true - * *and* we generate a node, the input node is forced to IMPLICIT display - * form, so that only the typmod coercion node will be visible when - * displaying the expression. + * ccontext may affect semantics, depending on whether the length coercion + * function pays attention to the isExplicit flag it's passed. + * + * cformat determines the display properties of the generated node (if any). + * + * If hideInputCoercion is true *and* we generate a node, the input node is + * forced to IMPLICIT display form, so that only the typmod coercion node will + * be visible when displaying the expression. 
* * NOTE: this does not need to work on domain types, because any typmod * coercion for a domain is considered to be part of the type coercion @@ -726,8 +721,9 @@ coerce_to_domain(Node *arg, Oid baseTypeId, int32 baseTypeMod, Oid typeId, */ static Node * coerce_type_typmod(Node *node, Oid targetTypeId, int32 targetTypMod, - CoercionForm cformat, int location, - bool isExplicit, bool hideInputCoercion) + CoercionContext ccontext, CoercionForm cformat, + int location, + bool hideInputCoercion) { CoercionPathType pathtype; Oid funcId; @@ -749,8 +745,7 @@ coerce_type_typmod(Node *node, Oid targetTypeId, int32 targetTypMod, node = build_coercion_expression(node, pathtype, funcId, targetTypeId, targetTypMod, - cformat, location, - isExplicit); + ccontext, cformat, location); } return node; @@ -799,8 +794,8 @@ build_coercion_expression(Node *node, CoercionPathType pathtype, Oid funcId, Oid targetTypeId, int32 targetTypMod, - CoercionForm cformat, int location, - bool isExplicit) + CoercionContext ccontext, CoercionForm cformat, + int location) { int nargs = 0; @@ -865,7 +860,7 @@ build_coercion_expression(Node *node, -1, InvalidOid, sizeof(bool), - BoolGetDatum(isExplicit), + BoolGetDatum(ccontext == COERCION_EXPLICIT), false, true); @@ -881,19 +876,52 @@ build_coercion_expression(Node *node, { /* We need to build an ArrayCoerceExpr */ ArrayCoerceExpr *acoerce = makeNode(ArrayCoerceExpr); + CaseTestExpr *ctest = makeNode(CaseTestExpr); + Oid sourceBaseTypeId; + int32 sourceBaseTypeMod; + Oid targetElementType; + Node *elemexpr; + + /* + * Look through any domain over the source array type. Note we don't + * expect that the target type is a domain; it must be a plain array. + * (To get to a domain target type, we'll do coerce_to_domain later.) + */ + sourceBaseTypeMod = exprTypmod(node); + sourceBaseTypeId = getBaseTypeAndTypmod(exprType(node), + &sourceBaseTypeMod); + + /* Set up CaseTestExpr representing one element of source array */ + ctest->typeId = get_element_type(sourceBaseTypeId); + Assert(OidIsValid(ctest->typeId)); + ctest->typeMod = sourceBaseTypeMod; + ctest->collation = InvalidOid; /* Assume coercions don't care */ + + /* And coerce it to the target element type */ + targetElementType = get_element_type(targetTypeId); + Assert(OidIsValid(targetElementType)); + + elemexpr = coerce_to_target_type(NULL, + (Node *) ctest, + ctest->typeId, + targetElementType, + targetTypMod, + ccontext, + cformat, + location); + if (elemexpr == NULL) /* shouldn't happen */ + elog(ERROR, "failed to coerce array element type as expected"); acoerce->arg = (Expr *) node; - acoerce->elemfuncid = funcId; + acoerce->elemexpr = (Expr *) elemexpr; acoerce->resulttype = targetTypeId; /* - * Label the output as having a particular typmod only if we are - * really invoking a length-coercion function, ie one with more than - * one argument. + * Label the output as having a particular element typmod only if we + * ended up with a per-element expression that is labeled that way. */ - acoerce->resulttypmod = (nargs >= 2) ? 
targetTypMod : -1; + acoerce->resulttypmod = exprTypmod(elemexpr); /* resultcollid will be set by parse_collate.c */ - acoerce->isExplicit = isExplicit; acoerce->coerceformat = cformat; acoerce->location = location; @@ -2148,8 +2176,7 @@ IsBinaryCoercible(Oid srctype, Oid targettype) * COERCION_PATH_RELABELTYPE: binary-compatible cast, no function needed * *funcid is set to InvalidOid * COERCION_PATH_ARRAYCOERCE: need an ArrayCoerceExpr node - * *funcid is set to the element cast function, or InvalidOid - * if the array elements are binary-compatible + * *funcid is set to InvalidOid * COERCION_PATH_COERCEVIAIO: need a CoerceViaIO node * *funcid is set to InvalidOid * @@ -2235,11 +2262,8 @@ find_coercion_pathway(Oid targetTypeId, Oid sourceTypeId, { /* * If there's no pg_cast entry, perhaps we are dealing with a pair of - * array types. If so, and if the element types have a suitable cast, - * report that we can coerce with an ArrayCoerceExpr. - * - * Note that the source type can be a domain over array, but not the - * target, because ArrayCoerceExpr won't check domain constraints. + * array types. If so, and if their element types have a conversion + * pathway, report that we can coerce with an ArrayCoerceExpr. * * Hack: disallow coercions to oidvector and int2vector, which * otherwise tend to capture coercions that should go to "real" array @@ -2254,7 +2278,7 @@ find_coercion_pathway(Oid targetTypeId, Oid sourceTypeId, Oid sourceElem; if ((targetElem = get_element_type(targetTypeId)) != InvalidOid && - (sourceElem = get_base_element_type(sourceTypeId)) != InvalidOid) + (sourceElem = get_element_type(sourceTypeId)) != InvalidOid) { CoercionPathType elempathtype; Oid elemfuncid; @@ -2263,14 +2287,9 @@ find_coercion_pathway(Oid targetTypeId, Oid sourceTypeId, sourceElem, ccontext, &elemfuncid); - if (elempathtype != COERCION_PATH_NONE && - elempathtype != COERCION_PATH_ARRAYCOERCE) + if (elempathtype != COERCION_PATH_NONE) { - *funcid = elemfuncid; - if (elempathtype == COERCION_PATH_COERCEVIAIO) - result = COERCION_PATH_COERCEVIAIO; - else - result = COERCION_PATH_ARRAYCOERCE; + result = COERCION_PATH_ARRAYCOERCE; } } } @@ -2311,7 +2330,9 @@ find_coercion_pathway(Oid targetTypeId, Oid sourceTypeId, * If the given type is a varlena array type, we do not look for a coercion * function associated directly with the array type, but instead look for * one associated with the element type. An ArrayCoerceExpr node must be - * used to apply such a function. + * used to apply such a function. (Note: currently, it's pointless to + * return the funcid in this case, because it'll just get looked up again + * in the recursive construction of the ArrayCoerceExpr's elemexpr.) 
* * We use the same result enum as find_coercion_pathway, but the only possible * result codes are: diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c index ef52dd5b95..7054d4f77d 100644 --- a/src/backend/rewrite/rewriteHandler.c +++ b/src/backend/rewrite/rewriteHandler.c @@ -875,9 +875,9 @@ rewriteTargetListIU(List *targetList, new_expr = coerce_to_domain(new_expr, InvalidOid, -1, att_tup->atttypid, + COERCION_IMPLICIT, COERCE_IMPLICIT_CAST, -1, - false, false); } } @@ -1271,9 +1271,9 @@ rewriteValuesRTE(RangeTblEntry *rte, Relation target_relation, List *attrnos) new_expr = coerce_to_domain(new_expr, InvalidOid, -1, att_tup->atttypid, + COERCION_IMPLICIT, COERCE_IMPLICIT_CAST, -1, - false, false); } newList = lappend(newList, new_expr); diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c index 5c17213720..c5773efd19 100644 --- a/src/backend/rewrite/rewriteManip.c +++ b/src/backend/rewrite/rewriteManip.c @@ -1429,9 +1429,9 @@ ReplaceVarsFromTargetList_callback(Var *var, var->varcollid), InvalidOid, -1, var->vartype, + COERCION_IMPLICIT, COERCE_IMPLICIT_CAST, -1, - false, false); } elog(ERROR, "could not find replacement targetlist entry for attno %d", diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c index d1f2fe7d95..ca04b13e82 100644 --- a/src/backend/utils/adt/arrayfuncs.c +++ b/src/backend/utils/adt/arrayfuncs.c @@ -3092,21 +3092,18 @@ array_set(ArrayType *array, int nSubscripts, int *indx, /* * array_map() * - * Map an array through an arbitrary function. Return a new array with - * same dimensions and each source element transformed by fn(). Each - * source element is passed as the first argument to fn(); additional - * arguments to be passed to fn() can be specified by the caller. - * The output array can have a different element type than the input. + * Map an array through an arbitrary expression. Return a new array with + * the same dimensions and each source element transformed by the given, + * already-compiled expression. Each source element is placed in the + * innermost_caseval/innermost_casenull fields of the ExprState. * * Parameters are: - * * fcinfo: a function-call data structure pre-constructed by the caller - * to be ready to call the desired function, with everything except the - * first argument position filled in. In particular, flinfo identifies - * the function fn(), and if nargs > 1 then argument positions after the - * first must be preset to the additional values to be passed. The - * first argument position initially holds the input array value. + * * arrayd: Datum representing array argument. + * * exprstate: ExprState representing the per-element transformation. + * * econtext: context for expression evaluation. * * retType: OID of element type of output array. This must be the same as, - * or binary-compatible with, the result type of fn(). + * or binary-compatible with, the result type of the expression. It might + * be different from the input array's element type. * * amstate: workspace for array_map. Must be zeroed by caller before * first call, and not touched after that. * @@ -3116,11 +3113,14 @@ array_set(ArrayType *array, int nSubscripts, int *indx, * * NB: caller must assure that input array is not NULL. NULL elements in * the array are OK however. + * NB: caller should be running in econtext's per-tuple memory context. 
*/ Datum -array_map(FunctionCallInfo fcinfo, Oid retType, ArrayMapState *amstate) +array_map(Datum arrayd, + ExprState *exprstate, ExprContext *econtext, + Oid retType, ArrayMapState *amstate) { - AnyArrayType *v; + AnyArrayType *v = DatumGetAnyArrayP(arrayd); ArrayType *result; Datum *values; bool *nulls; @@ -3141,13 +3141,8 @@ array_map(FunctionCallInfo fcinfo, Oid retType, ArrayMapState *amstate) array_iter iter; ArrayMetaState *inp_extra; ArrayMetaState *ret_extra; - - /* Get input array */ - if (fcinfo->nargs < 1) - elog(ERROR, "invalid nargs: %d", fcinfo->nargs); - if (PG_ARGISNULL(0)) - elog(ERROR, "null input array"); - v = PG_GETARG_ANY_ARRAY_P(0); + Datum *transform_source = exprstate->innermost_caseval; + bool *transform_source_isnull = exprstate->innermost_casenull; inpType = AARR_ELEMTYPE(v); ndim = AARR_NDIM(v); @@ -3158,7 +3153,7 @@ array_map(FunctionCallInfo fcinfo, Oid retType, ArrayMapState *amstate) if (nitems <= 0) { /* Return empty array */ - PG_RETURN_ARRAYTYPE_P(construct_empty_array(retType)); + return PointerGetDatum(construct_empty_array(retType)); } /* @@ -3203,39 +3198,15 @@ array_map(FunctionCallInfo fcinfo, Oid retType, ArrayMapState *amstate) for (i = 0; i < nitems; i++) { - bool callit = true; - /* Get source element, checking for NULL */ - fcinfo->arg[0] = array_iter_next(&iter, &fcinfo->argnull[0], i, - inp_typlen, inp_typbyval, inp_typalign); - - /* - * Apply the given function to source elt and extra args. - */ - if (fcinfo->flinfo->fn_strict) - { - int j; + *transform_source = + array_iter_next(&iter, transform_source_isnull, i, + inp_typlen, inp_typbyval, inp_typalign); - for (j = 0; j < fcinfo->nargs; j++) - { - if (fcinfo->argnull[j]) - { - callit = false; - break; - } - } - } + /* Apply the given expression to source element */ + values[i] = ExecEvalExpr(exprstate, econtext, &nulls[i]); - if (callit) - { - fcinfo->isnull = false; - values[i] = FunctionCallInvoke(fcinfo); - } - else - fcinfo->isnull = true; - - nulls[i] = fcinfo->isnull; - if (fcinfo->isnull) + if (nulls[i]) hasnulls = true; else { @@ -3254,7 +3225,7 @@ array_map(FunctionCallInfo fcinfo, Oid retType, ArrayMapState *amstate) } } - /* Allocate and initialize the result array */ + /* Allocate and fill the result array */ if (hasnulls) { dataoffset = ARR_OVERHEAD_WITHNULLS(ndim, nitems); @@ -3273,18 +3244,18 @@ array_map(FunctionCallInfo fcinfo, Oid retType, ArrayMapState *amstate) memcpy(ARR_DIMS(result), AARR_DIMS(v), ndim * sizeof(int)); memcpy(ARR_LBOUND(result), AARR_LBOUND(v), ndim * sizeof(int)); - /* - * Note: do not risk trying to pfree the results of the called function - */ CopyArrayEls(result, values, nulls, nitems, typlen, typbyval, typalign, false); + /* + * Note: do not risk trying to pfree the results of the called expression + */ pfree(values); pfree(nulls); - PG_RETURN_ARRAYTYPE_P(result); + return PointerGetDatum(result); } /* diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c index db1792bf8d..7361e9d43c 100644 --- a/src/backend/utils/adt/selfuncs.c +++ b/src/backend/utils/adt/selfuncs.c @@ -1816,10 +1816,19 @@ strip_array_coercion(Node *node) { for (;;) { - if (node && IsA(node, ArrayCoerceExpr) && - ((ArrayCoerceExpr *) node)->elemfuncid == InvalidOid) + if (node && IsA(node, ArrayCoerceExpr)) { - node = (Node *) ((ArrayCoerceExpr *) node)->arg; + ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) node; + + /* + * If the per-element expression is just a RelabelType on top of + * CaseTestExpr, then we know it's a binary-compatible 
relabeling. + */ + if (IsA(acoerce->elemexpr, RelabelType) && + IsA(((RelabelType *) acoerce->elemexpr)->arg, CaseTestExpr)) + node = (Node *) acoerce->arg; + else + break; } else if (node && IsA(node, RelabelType)) { diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c index a7b07827e0..919733517b 100644 --- a/src/backend/utils/fmgr/fmgr.c +++ b/src/backend/utils/fmgr/fmgr.c @@ -1941,8 +1941,6 @@ get_call_expr_argtype(Node *expr, int argnum) args = ((DistinctExpr *) expr)->args; else if (IsA(expr, ScalarArrayOpExpr)) args = ((ScalarArrayOpExpr *) expr)->args; - else if (IsA(expr, ArrayCoerceExpr)) - args = list_make1(((ArrayCoerceExpr *) expr)->arg); else if (IsA(expr, NullIfExpr)) args = ((NullIfExpr *) expr)->args; else if (IsA(expr, WindowFunc)) @@ -1956,16 +1954,12 @@ get_call_expr_argtype(Node *expr, int argnum) argtype = exprType((Node *) list_nth(args, argnum)); /* - * special hack for ScalarArrayOpExpr and ArrayCoerceExpr: what the - * underlying function will actually get passed is the element type of the - * array. + * special hack for ScalarArrayOpExpr: what the underlying function will + * actually get passed is the element type of the array. */ if (IsA(expr, ScalarArrayOpExpr) && argnum == 1) argtype = get_base_element_type(argtype); - else if (IsA(expr, ArrayCoerceExpr) && - argnum == 0) - argtype = get_base_element_type(argtype); return argtype; } @@ -2012,8 +2006,6 @@ get_call_expr_arg_stable(Node *expr, int argnum) args = ((DistinctExpr *) expr)->args; else if (IsA(expr, ScalarArrayOpExpr)) args = ((ScalarArrayOpExpr *) expr)->args; - else if (IsA(expr, ArrayCoerceExpr)) - args = list_make1(((ArrayCoerceExpr *) expr)->arg); else if (IsA(expr, NullIfExpr)) args = ((NullIfExpr *) expr)->args; else if (IsA(expr, WindowFunc)) diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 5d57a95d8b..2c382a73cf 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201709191 +#define CATALOG_VERSION_NO 201709301 #endif diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h index 8ee0496e01..78d2247816 100644 --- a/src/include/executor/execExpr.h +++ b/src/include/executor/execExpr.h @@ -385,10 +385,8 @@ typedef struct ExprEvalStep /* for EEOP_ARRAYCOERCE */ struct { - ArrayCoerceExpr *coerceexpr; + ExprState *elemexprstate; /* null if no per-element work */ Oid resultelemtype; /* element type of result array */ - FmgrInfo *elemfunc; /* lookup info for element coercion - * function */ struct ArrayMapState *amstate; /* workspace for array_map */ } arraycoerce; @@ -621,7 +619,8 @@ extern void ExecEvalRowNull(ExprState *state, ExprEvalStep *op, extern void ExecEvalRowNotNull(ExprState *state, ExprEvalStep *op, ExprContext *econtext); extern void ExecEvalArrayExpr(ExprState *state, ExprEvalStep *op); -extern void ExecEvalArrayCoerce(ExprState *state, ExprEvalStep *op); +extern void ExecEvalArrayCoerce(ExprState *state, ExprEvalStep *op, + ExprContext *econtext); extern void ExecEvalRow(ExprState *state, ExprEvalStep *op); extern void ExecEvalMinMax(ExprState *state, ExprEvalStep *op); extern void ExecEvalFieldSelect(ExprState *state, ExprEvalStep *op, diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h index 8c536a8d38..ccb5123e2e 100644 --- a/src/include/nodes/primnodes.h +++ b/src/include/nodes/primnodes.h @@ -820,11 +820,12 @@ typedef struct CoerceViaIO * ArrayCoerceExpr * * ArrayCoerceExpr 
represents a type coercion from one array type to another, - * which is implemented by applying the indicated element-type coercion - * function to each element of the source array. If elemfuncid is InvalidOid - * then the element types are binary-compatible, but the coercion still - * requires some effort (we have to fix the element type ID stored in the - * array header). + * which is implemented by applying the per-element coercion expression + * "elemexpr" to each element of the source array. Within elemexpr, the + * source element is represented by a CaseTestExpr node. Note that even if + * elemexpr is a no-op (that is, just CaseTestExpr + RelabelType), the + * coercion still requires some effort: we have to fix the element type OID + * stored in the array header. * ---------------- */ @@ -832,11 +833,10 @@ typedef struct ArrayCoerceExpr { Expr xpr; Expr *arg; /* input expression (yields an array) */ - Oid elemfuncid; /* OID of element coercion function, or 0 */ + Expr *elemexpr; /* expression representing per-element work */ Oid resulttype; /* output type of coercion (an array type) */ int32 resulttypmod; /* output typmod (also element typmod) */ Oid resultcollid; /* OID of collation, or InvalidOid if none */ - bool isExplicit; /* conversion semantics flag to pass to func */ CoercionForm coerceformat; /* how to display this node */ int location; /* token location, or -1 if unknown */ } ArrayCoerceExpr; diff --git a/src/include/parser/parse_coerce.h b/src/include/parser/parse_coerce.h index 06f65293cb..e560f0c96e 100644 --- a/src/include/parser/parse_coerce.h +++ b/src/include/parser/parse_coerce.h @@ -48,9 +48,8 @@ extern Node *coerce_type(ParseState *pstate, Node *node, CoercionContext ccontext, CoercionForm cformat, int location); extern Node *coerce_to_domain(Node *arg, Oid baseTypeId, int32 baseTypeMod, Oid typeId, - CoercionForm cformat, int location, - bool hideInputCoercion, - bool lengthCoercionDone); + CoercionContext ccontext, CoercionForm cformat, int location, + bool hideInputCoercion); extern Node *coerce_to_boolean(ParseState *pstate, Node *node, const char *constructName); diff --git a/src/include/utils/array.h b/src/include/utils/array.h index d6d3c582b6..cc19879a9a 100644 --- a/src/include/utils/array.h +++ b/src/include/utils/array.h @@ -64,6 +64,10 @@ #include "fmgr.h" #include "utils/expandeddatum.h" +/* avoid including execnodes.h here */ +struct ExprState; +struct ExprContext; + /* * Arrays are varlena objects, so must meet the varlena convention that @@ -360,8 +364,9 @@ extern ArrayType *array_set(ArrayType *array, int nSubscripts, int *indx, Datum dataValue, bool isNull, int arraytyplen, int elmlen, bool elmbyval, char elmalign); -extern Datum array_map(FunctionCallInfo fcinfo, Oid retType, - ArrayMapState *amstate); +extern Datum array_map(Datum arrayd, + struct ExprState *exprstate, struct ExprContext *econtext, + Oid retType, ArrayMapState *amstate); extern void array_bitmap_copy(bits8 *destbitmap, int destoffset, const bits8 *srcbitmap, int srcoffset, diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out index 3acc696863..1e62c57a68 100644 --- a/src/test/regress/expected/domain.out +++ b/src/test/regress/expected/domain.out @@ -310,6 +310,101 @@ Rules: drop table dcomptable; drop type comptype cascade; NOTICE: drop cascades to type dcomptypea +-- Test arrays over domains +create domain posint as int check (value > 0); +create table pitable (f1 posint[]); +insert into pitable values(array[42]); +insert into pitable 
values(array[-1]); -- fail +ERROR: value for domain posint violates check constraint "posint_check" +insert into pitable values('{0}'); -- fail +ERROR: value for domain posint violates check constraint "posint_check" +LINE 1: insert into pitable values('{0}'); + ^ +update pitable set f1[1] = f1[1] + 1; +update pitable set f1[1] = 0; -- fail +ERROR: value for domain posint violates check constraint "posint_check" +select * from pitable; + f1 +------ + {43} +(1 row) + +drop table pitable; +create domain vc4 as varchar(4); +create table vc4table (f1 vc4[]); +insert into vc4table values(array['too long']); -- fail +ERROR: value too long for type character varying(4) +insert into vc4table values(array['too long']::vc4[]); -- cast truncates +select * from vc4table; + f1 +---------- + {"too "} +(1 row) + +drop table vc4table; +drop type vc4; +-- You can sort of fake arrays-of-arrays by putting a domain in between +create domain dposinta as posint[]; +create table dposintatable (f1 dposinta[]); +insert into dposintatable values(array[array[42]]); -- fail +ERROR: column "f1" is of type dposinta[] but expression is of type integer[] +LINE 1: insert into dposintatable values(array[array[42]]); + ^ +HINT: You will need to rewrite or cast the expression. +insert into dposintatable values(array[array[42]::posint[]]); -- still fail +ERROR: column "f1" is of type dposinta[] but expression is of type posint[] +LINE 1: insert into dposintatable values(array[array[42]::posint[]])... + ^ +HINT: You will need to rewrite or cast the expression. +insert into dposintatable values(array[array[42]::dposinta]); -- but this works +select f1, f1[1], (f1[1])[1] from dposintatable; + f1 | f1 | f1 +----------+------+---- + {"{42}"} | {42} | 42 +(1 row) + +select pg_typeof(f1) from dposintatable; + pg_typeof +------------ + dposinta[] +(1 row) + +select pg_typeof(f1[1]) from dposintatable; + pg_typeof +----------- + dposinta +(1 row) + +select pg_typeof(f1[1][1]) from dposintatable; + pg_typeof +----------- + dposinta +(1 row) + +select pg_typeof((f1[1])[1]) from dposintatable; + pg_typeof +----------- + posint +(1 row) + +update dposintatable set f1[2] = array[99]; +select f1, f1[1], (f1[2])[1] from dposintatable; + f1 | f1 | f1 +-----------------+------+---- + {"{42}","{99}"} | {42} | 99 +(1 row) + +-- it'd be nice if you could do something like this, but for now you can't: +update dposintatable set f1[2][1] = array[97]; +ERROR: wrong number of array subscripts +-- maybe someday we can make this syntax work: +update dposintatable set (f1[2])[1] = array[98]; +ERROR: syntax error at or near "[" +LINE 1: update dposintatable set (f1[2])[1] = array[98]; + ^ +drop table dposintatable; +drop domain posint cascade; +NOTICE: drop cascades to type dposinta -- Test not-null restrictions create domain dnotnull varchar(15) NOT NULL; create domain dnull varchar(15); diff --git a/src/test/regress/sql/domain.sql b/src/test/regress/sql/domain.sql index 0fd383e272..8fb3e2086a 100644 --- a/src/test/regress/sql/domain.sql +++ b/src/test/regress/sql/domain.sql @@ -166,6 +166,49 @@ drop table dcomptable; drop type comptype cascade; +-- Test arrays over domains + +create domain posint as int check (value > 0); + +create table pitable (f1 posint[]); +insert into pitable values(array[42]); +insert into pitable values(array[-1]); -- fail +insert into pitable values('{0}'); -- fail +update pitable set f1[1] = f1[1] + 1; +update pitable set f1[1] = 0; -- fail +select * from pitable; +drop table pitable; + +create domain vc4 as varchar(4); +create 
table vc4table (f1 vc4[]); +insert into vc4table values(array['too long']); -- fail +insert into vc4table values(array['too long']::vc4[]); -- cast truncates +select * from vc4table; +drop table vc4table; +drop type vc4; + +-- You can sort of fake arrays-of-arrays by putting a domain in between +create domain dposinta as posint[]; +create table dposintatable (f1 dposinta[]); +insert into dposintatable values(array[array[42]]); -- fail +insert into dposintatable values(array[array[42]::posint[]]); -- still fail +insert into dposintatable values(array[array[42]::dposinta]); -- but this works +select f1, f1[1], (f1[1])[1] from dposintatable; +select pg_typeof(f1) from dposintatable; +select pg_typeof(f1[1]) from dposintatable; +select pg_typeof(f1[1][1]) from dposintatable; +select pg_typeof((f1[1])[1]) from dposintatable; +update dposintatable set f1[2] = array[99]; +select f1, f1[1], (f1[2])[1] from dposintatable; +-- it'd be nice if you could do something like this, but for now you can't: +update dposintatable set f1[2][1] = array[97]; +-- maybe someday we can make this syntax work: +update dposintatable set (f1[2])[1] = array[98]; + +drop table dposintatable; +drop domain posint cascade; + + -- Test not-null restrictions create domain dnotnull varchar(15) NOT NULL; From 2632bcce5e767a2b5901bbef54ae52df061eee72 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 30 Sep 2017 17:05:07 -0400 Subject: [PATCH 0304/1087] Fix pg_dump to assign domain array type OIDs during pg_upgrade. During a binary upgrade, all type OIDs are supposed to be assigned by pg_dump based on their values in the old cluster. But now that domains have arrays, there's nothing to base the arrays' type OIDs on, if we're upgrading from a pre-v11 cluster. Make pg_dump search for an unused type OID to use for this purpose. Per buildfarm. 
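In outline, the new search amounts to the loop below. This is a condensed sketch, not part of the patch itself: the helper name is invented for illustration, while the probe query and the pg_dump routines it calls are the ones used in the hunks that follow.

    /*
     * Sketch: probe pg_type until an OID that is not in use is found.
     * The static variable keeps state across calls, so the same unused
     * OID is never handed out twice.
     */
    static Oid
    choose_unused_type_oid(Archive *fout, PQExpBuffer query)
    {
        static Oid  next_possible_free_oid = FirstNormalObjectId;
        PGresult   *res;
        bool        is_dup;

        do
        {
            ++next_possible_free_oid;
            printfPQExpBuffer(query,
                              "SELECT EXISTS(SELECT 1 "
                              "FROM pg_catalog.pg_type "
                              "WHERE oid = '%u'::pg_catalog.oid);",
                              next_possible_free_oid);
            res = ExecuteSqlQueryForSingleRow(fout, query->data);
            is_dup = (PQgetvalue(res, 0, 0)[0] == 't');
            PQclear(res);
        } while (is_dup);

        return next_possible_free_oid;
    }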
Discussion: https://postgr.es/m/E1dyLlE-0002gT-H5@gemulon.postgresql.org --- src/bin/pg_dump/pg_dump.c | 79 ++++++++++++++++++++++++++++++--------- 1 file changed, 61 insertions(+), 18 deletions(-) diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 75f08cd792..e34c83acd2 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -253,7 +253,9 @@ static void dumpDatabase(Archive *AH); static void dumpEncoding(Archive *AH); static void dumpStdStrings(Archive *AH); static void binary_upgrade_set_type_oids_by_type_oid(Archive *fout, - PQExpBuffer upgrade_buffer, Oid pg_type_oid); + PQExpBuffer upgrade_buffer, + Oid pg_type_oid, + bool force_array_type); static bool binary_upgrade_set_type_oids_by_rel_oid(Archive *fout, PQExpBuffer upgrade_buffer, Oid pg_rel_oid); static void binary_upgrade_set_pg_class_oids(Archive *fout, @@ -3949,10 +3951,11 @@ dumpSubscription(Archive *fout, SubscriptionInfo *subinfo) static void binary_upgrade_set_type_oids_by_type_oid(Archive *fout, PQExpBuffer upgrade_buffer, - Oid pg_type_oid) + Oid pg_type_oid, + bool force_array_type) { PQExpBuffer upgrade_query = createPQExpBuffer(); - PGresult *upgrade_res; + PGresult *res; Oid pg_type_array_oid; appendPQExpBufferStr(upgrade_buffer, "\n-- For binary upgrade, must preserve pg_type oid\n"); @@ -3964,12 +3967,43 @@ binary_upgrade_set_type_oids_by_type_oid(Archive *fout, appendPQExpBuffer(upgrade_query, "SELECT typarray " "FROM pg_catalog.pg_type " - "WHERE pg_type.oid = '%u'::pg_catalog.oid;", + "WHERE oid = '%u'::pg_catalog.oid;", pg_type_oid); - upgrade_res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data); + res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data); + + pg_type_array_oid = atooid(PQgetvalue(res, 0, PQfnumber(res, "typarray"))); - pg_type_array_oid = atooid(PQgetvalue(upgrade_res, 0, PQfnumber(upgrade_res, "typarray"))); + PQclear(res); + + if (!OidIsValid(pg_type_array_oid) && force_array_type) + { + /* + * If the old version didn't assign an array type, but the new version + * does, we must select an unused type OID to assign. This currently + * only happens for domains, when upgrading pre-v11 to v11 and up. + * + * Note: local state here is kind of ugly, but we must have some, + * since we mustn't choose the same unused OID more than once. 
+ */ + static Oid next_possible_free_oid = FirstNormalObjectId; + bool is_dup; + + do + { + ++next_possible_free_oid; + printfPQExpBuffer(upgrade_query, + "SELECT EXISTS(SELECT 1 " + "FROM pg_catalog.pg_type " + "WHERE oid = '%u'::pg_catalog.oid);", + next_possible_free_oid); + res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data); + is_dup = (PQgetvalue(res, 0, 0)[0] == 't'); + PQclear(res); + } while (is_dup); + + pg_type_array_oid = next_possible_free_oid; + } if (OidIsValid(pg_type_array_oid)) { @@ -3980,7 +4014,6 @@ binary_upgrade_set_type_oids_by_type_oid(Archive *fout, pg_type_array_oid); } - PQclear(upgrade_res); destroyPQExpBuffer(upgrade_query); } @@ -4008,7 +4041,7 @@ binary_upgrade_set_type_oids_by_rel_oid(Archive *fout, pg_type_oid = atooid(PQgetvalue(upgrade_res, 0, PQfnumber(upgrade_res, "crel"))); binary_upgrade_set_type_oids_by_type_oid(fout, upgrade_buffer, - pg_type_oid); + pg_type_oid, false); if (!PQgetisnull(upgrade_res, 0, PQfnumber(upgrade_res, "trel"))) { @@ -9840,7 +9873,8 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo) if (dopt->binary_upgrade) binary_upgrade_set_type_oids_by_type_oid(fout, q, - tyinfo->dobj.catId.oid); + tyinfo->dobj.catId.oid, + false); appendPQExpBuffer(q, "CREATE TYPE %s AS ENUM (", qtypname); @@ -9976,8 +10010,9 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) qtypname); if (dopt->binary_upgrade) - binary_upgrade_set_type_oids_by_type_oid(fout, - q, tyinfo->dobj.catId.oid); + binary_upgrade_set_type_oids_by_type_oid(fout, q, + tyinfo->dobj.catId.oid, + false); appendPQExpBuffer(q, "CREATE TYPE %s AS RANGE (", qtypname); @@ -10091,8 +10126,9 @@ dumpUndefinedType(Archive *fout, TypeInfo *tyinfo) qtypname); if (dopt->binary_upgrade) - binary_upgrade_set_type_oids_by_type_oid(fout, - q, tyinfo->dobj.catId.oid); + binary_upgrade_set_type_oids_by_type_oid(fout, q, + tyinfo->dobj.catId.oid, + false); appendPQExpBuffer(q, "CREATE TYPE %s;\n", qtypname); @@ -10296,10 +10332,14 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) appendPQExpBuffer(delq, "%s CASCADE;\n", qtypname); - /* We might already have a shell type, but setting pg_type_oid is harmless */ + /* + * We might already have a shell type, but setting pg_type_oid is + * harmless, and in any case we'd better set the array type OID. + */ if (dopt->binary_upgrade) binary_upgrade_set_type_oids_by_type_oid(fout, q, - tyinfo->dobj.catId.oid); + tyinfo->dobj.catId.oid, + false); appendPQExpBuffer(q, "CREATE TYPE %s (\n" @@ -10490,7 +10530,8 @@ dumpDomain(Archive *fout, TypeInfo *tyinfo) if (dopt->binary_upgrade) binary_upgrade_set_type_oids_by_type_oid(fout, q, - tyinfo->dobj.catId.oid); + tyinfo->dobj.catId.oid, + true); /* force array type */ qtypname = pg_strdup(fmtId(tyinfo->dobj.name)); @@ -10692,7 +10733,8 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) if (dopt->binary_upgrade) { binary_upgrade_set_type_oids_by_type_oid(fout, q, - tyinfo->dobj.catId.oid); + tyinfo->dobj.catId.oid, + false); binary_upgrade_set_pg_class_oids(fout, q, tyinfo->typrelid, false); } @@ -10967,7 +11009,8 @@ dumpShellType(Archive *fout, ShellTypeInfo *stinfo) if (dopt->binary_upgrade) binary_upgrade_set_type_oids_by_type_oid(fout, q, - stinfo->baseType->dobj.catId.oid); + stinfo->baseType->dobj.catId.oid, + false); appendPQExpBuffer(q, "CREATE TYPE %s;\n", fmtId(stinfo->dobj.name)); From 396ef1561878a5d42ea9191f60098b7fbbec6e41 Mon Sep 17 00:00:00 2001 From: Heikki Linnakangas Date: Sun, 1 Oct 2017 09:29:27 +0300 Subject: [PATCH 0305/1087] Fix busy-wait in pgbench, with --rate. 
If --rate was used to throttle pgbench, it failed to sleep when it had nothing to do, leading to a busy-wait with 100% CPU usage. This bug was introduced in the refactoring in v10. Before that, sleep() was called with a timeout, even when there were no file descriptors to wait for. Reported by Jeff Janes, patch by Fabien COELHO. Backpatch to v10. Discussion: https://www.postgresql.org/message-id/CAMkU%3D1x5hoX0pLLKPRnXCy0T8uHoDvXdq%2B7kAM9eoC9_z72ucw%40mail.gmail.com --- src/bin/pgbench/pgbench.c | 26 ++++++++++++++++++-------- 1 file changed, 18 insertions(+), 8 deletions(-) diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index f039413690..5d8a01c72c 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -4578,20 +4578,30 @@ threadRun(void *arg) * or it's time to print a progress report. Update input_mask to show * which client(s) received data. */ - if (min_usec > 0 && maxsock != -1) + if (min_usec > 0) { - int nsocks; /* return from select(2) */ + int nsocks = 0; /* return from select(2) if called */ if (min_usec != PG_INT64_MAX) { - struct timeval timeout; + if (maxsock != -1) + { + struct timeval timeout; - timeout.tv_sec = min_usec / 1000000; - timeout.tv_usec = min_usec % 1000000; - nsocks = select(maxsock + 1, &input_mask, NULL, NULL, &timeout); + timeout.tv_sec = min_usec / 1000000; + timeout.tv_usec = min_usec % 1000000; + nsocks = select(maxsock + 1, &input_mask, NULL, NULL, &timeout); + } + else /* nothing active, simple sleep */ + { + pg_usleep(min_usec); + } } - else + else /* no explicit delay, select without timeout */ + { nsocks = select(maxsock + 1, &input_mask, NULL, NULL, NULL); + } + if (nsocks < 0) { if (errno == EINTR) @@ -4604,7 +4614,7 @@ threadRun(void *arg) goto done; } } - else + else /* min_usec == 0, i.e. something needs to be executed */ { /* If we didn't call select(), don't try to read any data */ FD_ZERO(&input_mask); From 655c938fb0aacde297d4c53daf97ff82a3b90fad Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 1 Oct 2017 08:51:20 -0400 Subject: [PATCH 0306/1087] Add list of acknowledgments to release notes This contains all individuals mentioned in the commit messages during PostgreSQL 10 development. current through babf18579455e85269ad75e1ddb03f34138f77b6 Discussion: https://www.postgresql.org/message-id/flat/54ad0e42-770e-dfe1-123e-bce9361ad452%402ndquadrant.com --- doc/src/sgml/release-10.sgml | 337 +++++++++++++++++++++++++++++++++++ 1 file changed, 337 insertions(+) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 9fd3b2c8ac..5c2e026286 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -3169,4 +3169,341 @@ + + Acknowledgments + + + The following individuals (in alphabetical order) have contributed to this + release as patch authors, committers, reviewers, testers, or reporters of + issues. 
+ + + + Adam Brightwell + Adam Brusselback + Adam Gomaa + Adam Sah + Adrian Klaver + Aidan Van Dyk + Aleksander Alekseev + Alexander Korotkov + Alexander Lakhin + Alexander Sosna + Alexey Bashtanov + Alexey Grishchenko + Alexey Isayko + Álvaro Hernández Tortosa + Álvaro Herrera + Amit Kapila + Amit Khandekar + Amit Langote + Amul Sul + Anastasia Lubennikova + Andreas Joseph Krogh + Andreas Karlsson + Andreas Scherbaum + Andreas Seltenreich + Andres Freund + Andrew Dunstan + Andrew Gierth + Andrew Wheelwright + Andrey Borodin + Andrey Lizenko + Andy Abelisto + Antonin Houska + Ants Aasma + Arjen Nienhuis + Arseny Sher + Artur Zakirov + Ashutosh Bapat + Ashutosh Sharma + Ashwin Agrawal + Atsushi Torikoshi + Ayumi Ishii + Basil Bourque + Beena Emerson + Ben de Graaff + Benedikt Grundmann + Bernd Helmle + Brad DeJong + Brandur Leach + Breen Hagan + Bruce Momjian + Bruno Wolff III + Catalin Iacob + Chapman Flack + Chen Huajun + Choi Doo-Won + Chris Bandy + Chris Richards + Chris Ruprecht + Christian Ullrich + Christoph Berg + Chuanting Wang + Claudio Freire + Clinton Adams + Const Zhang + Constantin Pan + Corey Huinker + Craig Ringer + Cynthia Shang + Dagfinn Ilmari Mannsåker + Daisuke Higuchi + Damian Quiroga + Dan Wood + Daniel Gustafsson + Daniel Vérité + Daniel Westermann + Daniele Varrazzo + Danylo Hlynskyi + Darko Prelec + Dave Cramer + Dave Page + David Christensen + David Fetter + David Johnston + David Rader + David Rowley + David Steele + Dean Rasheed + Denis Smirnov + Denish Patel + Dennis Björklund + Devrim Gündüz + Dilip Kumar + Dilyan Palauzov + Dima Pavlov + Dimitry Ivanov + Dmitriy Sarafannikov + Dmitry Dolgov + Dmitry Fedin + Don Morrison + Egor Rogov + Eiji Seki + Emil Iggland + Emre Hasegeli + Enrique Meneses + Erik Nordström + Erik Rijkers + Erwin Brandstetter + Etsuro Fujita + Eugen Konkov + Eugene Kazakov + Euler Taveira + Fabien Coelho + Fabrízio de Royes Mello + Fakhroutdinov Evgenievich + Feike Steenbergen + Felix Gerzaguet + Filip Jirsák + Fujii Masao + Gabriele Bartolini + Gabrielle Roth + Gao Zengqi + Gerdan Santos + Gianni Ciolli + Gilles Darold + Giuseppe Broccolo + Graham Dutton + Greg Atkins + Greg Burek + Grigory Smolkin + Guillaume Lelarge + Hans Buschmann + Haribabu Kommi + Heikki Linnakangas + Henry Boehlert + Huan Ruan + Huong Dangminh + Ian Barwick + Igor Korot + Ildus Kurbangaliev + Ivan Kartyshov + Jaime Casanova + Jakob Egger + James Parks + Jarred Ward + Jason Li + Jason O'Donnell + Jason Petersen + Jeevan Chalke + Jeevan Ladhe + Jeff Dafoe + Jeff Davis + Jeff Janes + Jelte Fennema + Jeremy Finzel + Jeremy Schneider + Jeroen van der Ham + Jesper Pedersen + Jim Mlodgenski + Jim Nasby + Jinyu Zhang + Joe Conway + Joel Jacobson + John Harvey + Jon Nelson + Jordan Gigov + Josh Berkus + Josh Soref + Julian Markwort + Julien Rouhaud + Junseok Yang + Justin Muise + Justin Pryzby + Kacper Zuk + KaiGai Kohei + Karen Huddleston + Karl Lehenbauer + Karl O. 
Pinc + Keith Fiske + Kevin Grittner + Kim Rose Carlsen + Konstantin Evteev + Konstantin Knizhnik + Kuntal Ghosh + Kurt Kartaltepe + Kyle Conroy + Kyotaro Horiguchi + Laurenz Albe + Leonardo Cecchi + Ludovic Vaugeois-Pepin + Lukas Fittl + Magnus Hagander + Maksim Milyutin + Maksym Sobolyev + Marc Rassbach + Marc-Olaf Jaschke + Marcos Castedo + Marek Cvoren + Mark Dilger + Mark Kirkwood + Mark Pether + Marko Tiikkaja + Markus Winand + Marllius Ribeiro + Marti Raudsepp + Martín Marqués + Masahiko Sawada + Matheus Oliveira + Mathieu Fenniak + Merlin Moncure + Michael Banck + Michael Day + Michael Meskes + Michael Overmeyer + Michael Paquier + Mike Palmiotto + Milos Urbanek + Mithun Cy + Moshe Jacobson + Murtuza Zabuawala + Naoki Okano + Nathan Bossart + Nathan Wagner + Neha Khatri + Neha Sharma + Neil Anderson + Nicolas Baccelli + Nicolas Guini + Nicolas Thauvin + Nikhil Sontakke + Nikita Glukhov + Nikolaus Thiel + Nikolay Nikitin + Nikolay Shaplov + Noah Misch + Noriyoshi Shinoda + Olaf Gawenda + Oleg Bartunov + Oskari Saarenmaa + Otar Shavadze + Paresh More + Paul Jungwirth + Paul Ramsey + Pavan Deolasee + Pavel Golub + Pavel Hanák + Pavel Raiskup + Pavel Stehule + Peng Sun + Peter Eisentraut + Peter Geoghegan + Petr Jelínek + Philippe Beaudoin + Pierre-Emmanuel André + Piotr Stefaniak + Prabhat Sahu + QL Zhuo + Radek Slupik + Rafa de la Torre + Rafia Sabih + Ragnar Ouchterlony + Rahila Syed + Rajkumar Raghuwanshi + Regina Obe + Richard Pistole + Robert Haas + Robins Tharakan + Rod Taylor + Roman Shaposhnik + Rushabh Lathia + Ryan Murphy + Sandeep Thakkar + Scott Milliken + Sean Farrell + Sebastian Luque + Sehrope Sarkuni + Sergey Burladyan + Sergey Koposov + Shay Rojansky + Shinichi Matsuda + Sho Kato + Simon Riggs + Simone Gotti + Spencer Thomason + Stas Kelvich + Stepan Pesternikov + Stephen Frost + Steve Randall + Steve Singer + Steven Fackler + Steven Winfield + Suraj Kharage + Sveinn Sveinsson + Sven R. Kunze + Taiki Kondo + Takayuki Tsunakawa + Takeshi Ideriha + Tatsuo Ishii + Tatsuro Yamada + Teodor Sigaev + Thom Brown + Thomas Kellerer + Thomas Munro + Tim Goodaire + Tobias Bussmann + Tom Dunstan + Tom Lane + Tom van Tilburg + Tomas Vondra + Tomonari Katsumata + Tushar Ahuja + Vaishnavi Prabakaran + Venkata Balaji Nagothi + Vicky Vergara + Victor Wagner + Vik Fearing + Vinayak Pokale + Viren Negi + Vitaly Burovoy + Vladimir Kunshchikov + Vladimir Rusinov + Yi Wen Wong + Yugo Nagata + Zhen Ming Yang + Zhou Digoal + + + From 4a1c0f3dde765c65e0ccb712c899df16986d09ad Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 1 Oct 2017 12:43:46 -0400 Subject: [PATCH 0307/1087] Use a longer connection timeout in pg_isready test. Buildfarm members skink and sungazer have both recently failed this test, with symptoms indicating that the default 3-second timeout isn't quite enough for those very slow systems. There's no reason to be miserly with this timeout, so boost it to 60 seconds. Back-patch to all versions containing this test. That may be overkill, because the failure has only been observed in the v10 branch, but I don't feel like having to revisit this later. 
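Concretely, the fix below just passes pg_isready's --timeout option, so the TAP test now runs the equivalent of:

    pg_isready --timeout=60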
--- src/bin/scripts/t/080_pg_isready.pl | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/bin/scripts/t/080_pg_isready.pl b/src/bin/scripts/t/080_pg_isready.pl index d9830b5b3a..d01804da37 100644 --- a/src/bin/scripts/t/080_pg_isready.pl +++ b/src/bin/scripts/t/080_pg_isready.pl @@ -15,4 +15,5 @@ $node->init; $node->start; -$node->command_ok(['pg_isready'], 'succeeds with server running'); +# use a long timeout for the benefit of very slow buildfarm machines +$node->command_ok([qw(pg_isready --timeout=60)], 'succeeds with server running'); From 5a632a213d43c30940de3328286172c52730a01d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 1 Oct 2017 13:32:26 -0400 Subject: [PATCH 0308/1087] Update v10 release notes, and set the official release date. Last(?) round of changes for 10.0. --- doc/src/sgml/release-10.sgml | 25 +++++++------------------ 1 file changed, 7 insertions(+), 18 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 5c2e026286..fed67e1b23 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -6,7 +6,7 @@ Release date: - 2017-??-?? (current as of 2017-09-17, commit 244b4a37e) + 2017-10-05 @@ -193,6 +193,7 @@ 2016-11-18 [67dc4ccbb] Add pg_sequences view 2017-05-15 [f8dc1985f] Fix ALTER SEQUENCE locking 2017-06-01 [3d79013b9] Make ALTER SEQUENCE, including RESTART, fully transactio +2017-09-29 [5cc5987ce] psql: Update \d sequence display --> Move sequences' metadata fields into a new + + + The output of psql's \d command for a + sequence has been redesigned, too. + @@ -764,23 +770,6 @@ - - Reduce locking required for adding values to enum types (Andrew - Dunstan, Tom Lane) - - - - Previously it was impossible to run ALTER TYPE ... ADD - VALUE in a transaction block unless the enum type was created - in the same block. Now, only references to uncommitted enum - values from other transactions are prohibited. - - - - - From 2e83db3ad2da9b073af9ae12916f0b71cf698e1e Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 1 Oct 2017 15:17:10 -0700 Subject: [PATCH 0309/1087] Allow pg_ctl kill to send SIGKILL. Previously that was disallowed out of an abundance of caution. Providing KILL support however is helpful to make the 013_crash_restart.pl test portable, and there's no actual issue with allowing it. SIGABRT, which has similar consequences except it also dumps core, was already allowed. 
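For illustration, once KILL is accepted the signal can be delivered through pg_ctl like any other; a hypothetical invocation, with an arbitrary PID:

    pg_ctl kill KILL 4711

This is what lets the crash-restart test (adjusted in the following commit) signal backends portably instead of relying on Perl's kill().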
Author: Andres Freund Discussion: https://postgr.es/m/45d42d41-6145-9be1-7261-84acf6d9e344@2ndQuadrant.com --- src/bin/pg_ctl/pg_ctl.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c index 4e02c4cea1..0893d0d26f 100644 --- a/src/bin/pg_ctl/pg_ctl.c +++ b/src/bin/pg_ctl/pg_ctl.c @@ -1903,7 +1903,7 @@ do_help(void) printf(_(" immediate quit without complete shutdown; will lead to recovery on restart\n")); printf(_("\nAllowed signal names for kill:\n")); - printf(" ABRT HUP INT QUIT TERM USR1 USR2\n"); + printf(" ABRT HUP INT KILL QUIT TERM USR1 USR2\n"); #ifdef WIN32 printf(_("\nOptions for register and unregister:\n")); @@ -1961,11 +1961,8 @@ set_sig(char *signame) sig = SIGQUIT; else if (strcmp(signame, "ABRT") == 0) sig = SIGABRT; -#if 0 - /* probably should NOT provide SIGKILL */ else if (strcmp(signame, "KILL") == 0) sig = SIGKILL; -#endif else if (strcmp(signame, "TERM") == 0) sig = SIGTERM; else if (strcmp(signame, "USR1") == 0) From 784905795f8aadc09efe2fdae195279d17250f00 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 1 Oct 2017 15:21:45 -0700 Subject: [PATCH 0310/1087] Try to make crash restart test work on windows. Author: Andres Freund Tested-By: Andrew Dunstan Discussion: https://postgr.es/m/20170930224424.ud5ilchmclbl5y5n@alap3.anarazel.de --- src/test/recovery/t/013_crash_restart.pl | 28 +++++++++--------------- 1 file changed, 10 insertions(+), 18 deletions(-) diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl index ca02054ff0..91a8ef90c1 100644 --- a/src/test/recovery/t/013_crash_restart.pl +++ b/src/test/recovery/t/013_crash_restart.pl @@ -16,16 +16,8 @@ use Config; use Time::HiRes qw(usleep); -if ($Config{osname} eq 'MSWin32') -{ - # some Windows Perls at least don't like IPC::Run's - # start/kill_kill regime. - plan skip_all => "Test fails on Windows perl"; -} -else -{ - plan tests => 18; -} +plan tests => 18; + # To avoid hanging while expecting some specific input from a psql # instance being driven by us, add a timeout high enough that it @@ -106,10 +98,10 @@ $monitor_stderr = ''; # kill once with QUIT - we expect psql to exit, while emitting error message first -my $cnt = kill 'QUIT', $pid; +my $ret = TestLib::system_log('pg_ctl', 'kill', 'QUIT', $pid); # Exactly process should have been alive to be killed -is($cnt, 1, "exactly one process killed with SIGQUIT"); +is($ret, 0, "killed process with SIGQUIT"); # Check that psql sees the killed backend as having been terminated $killme_stdin .= q[ @@ -119,14 +111,14 @@ "psql query died successfully after SIGQUIT"); $killme_stderr = ''; $killme_stdout = ''; -$killme->kill_kill; +$killme->finish; # Wait till server restarts - we should get the WARNING here, but # sometimes the server is unable to send that, if interrupted while # sending. 
ok(pump_until($monitor, \$monitor_stderr, qr/WARNING: terminating connection because of crash of another server process|server closed the connection unexpectedly/m), "psql monitor died successfully after SIGQUIT"); -$monitor->kill_kill; +$monitor->finish; # Wait till server restarts is($node->poll_query_until('postgres', 'SELECT $$restarted after sigquit$$;', 'restarted after sigquit'), @@ -179,8 +171,8 @@ # kill with SIGKILL this time - we expect the backend to exit, without # being able to emit an error error message -$cnt = kill 'KILL', $pid; -is($cnt, 1, "exactly one process killed with KILL"); +$ret = TestLib::system_log('pg_ctl', 'kill', 'KILL', $pid); +is($ret, 0, "killed process with KILL"); # Check that psql sees the server as being terminated. No WARNING, # because signal handlers aren't being run on SIGKILL. @@ -189,14 +181,14 @@ ]; ok(pump_until($killme, \$killme_stderr, qr/server closed the connection unexpectedly/m), "psql query died successfully after SIGKILL"); -$killme->kill_kill; +$killme->finish; # Wait till server restarts - we should get the WARNING here, but # sometimes the server is unable to send that, if interrupted while # sending. ok(pump_until($monitor, \$monitor_stderr, qr/WARNING: terminating connection because of crash of another server process|server closed the connection unexpectedly/m), "psql monitor died successfully after SIGKILL"); -$monitor->kill_kill; +$monitor->finish; # Wait till server restarts is($node->poll_query_until('postgres', 'SELECT 1', '1'), "1", "reconnected after SIGKILL"); From 1f2830f9df9f0196ba541c1e253463afe657cb67 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 1 Oct 2017 15:24:14 -0700 Subject: [PATCH 0311/1087] Remove redundant stdint.h include. Discussion: https://postgr.es/m/31674.1506788226@sss.pgh.pa.us --- src/include/port/pg_bswap.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/src/include/port/pg_bswap.h b/src/include/port/pg_bswap.h index bba0ca5490..0b2dcf52cb 100644 --- a/src/include/port/pg_bswap.h +++ b/src/include/port/pg_bswap.h @@ -21,10 +21,10 @@ #define PG_BSWAP_H -/* In all supported versions msvc provides _byteswap_* functions in stdlib.h */ -#ifdef _MSC_VER -#include -#endif +/* + * In all supported versions msvc provides _byteswap_* functions in stdlib.h, + * already included by c.h. + */ /* implementation of uint16 pg_bswap16(uint16) */ From 0ba99c84e8c7138143059b281063d4cca5a2bfea Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 1 Oct 2017 15:36:14 -0700 Subject: [PATCH 0312/1087] Replace most usages of ntoh[ls] and hton[sl] with pg_bswap.h. All postgres internal usages are replaced, it's just libpq example usages that haven't been converted. External users of libpq can't generally rely on including postgres internal headers. Note that this includes replacing open-coded byte swapping of 64bit integers (using two 32 bit swaps) with a single 64bit swap. Where it looked applicable, I have removed netinet/in.h and arpa/inet.h usage, which previously provided the relevant functionality. It's perfectly possible that I missed other reasons for including those, the buildfarm will tell. 
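For the 64-bit case specifically, the change boils down to the following before/after sketch, distilled from the fe_sendint64() hunks below (variable names follow that function):

    /* Before: two 32-bit swaps, high order half first, since the wire format is MSB-first */
    n32 = htonl((uint32) (i >> 32));
    memcpy(&buf[0], &n32, 4);
    n32 = htonl((uint32) i);
    memcpy(&buf[4], &n32, 4);

    /* After: a single 64-bit swap does the same job */
    uint64 n64 = pg_hton64(i);
    memcpy(buf, &n64, sizeof(n64));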
Author: Andres Freund Discussion: https://postgr.es/m/20170927172019.gheidqy6xvlxb325@alap3.anarazel.de --- contrib/pgcrypto/crypt-des.c | 17 +++++------- contrib/uuid-ossp/uuid-ossp.c | 17 +++++------- src/backend/commands/copy.c | 11 ++++---- src/backend/libpq/auth.c | 18 ++++++------- src/backend/libpq/ifaddr.c | 6 ++--- src/backend/libpq/pqcomm.c | 6 ++--- src/backend/libpq/pqformat.c | 40 ++++++++--------------------- src/backend/postmaster/postmaster.c | 13 +++++----- src/backend/tcop/fastpath.c | 8 +++--- src/bin/pg_basebackup/streamutil.c | 34 +++++------------------- src/bin/pg_dump/parallel.c | 6 +++-- src/bin/pg_rewind/libpq_fetch.c | 29 ++------------------- src/common/scram-common.c | 7 ++--- src/interfaces/libpq/fe-connect.c | 12 ++++----- src/interfaces/libpq/fe-lobj.c | 11 ++++---- src/interfaces/libpq/fe-misc.c | 14 +++++----- src/interfaces/libpq/fe-protocol2.c | 5 ++-- src/interfaces/libpq/fe-protocol3.c | 5 ++-- src/port/getaddrinfo.c | 11 ++++---- src/port/inet_aton.c | 4 ++- 20 files changed, 99 insertions(+), 175 deletions(-) diff --git a/contrib/pgcrypto/crypt-des.c b/contrib/pgcrypto/crypt-des.c index ee3a0f2169..ed07fc4606 100644 --- a/contrib/pgcrypto/crypt-des.c +++ b/contrib/pgcrypto/crypt-des.c @@ -62,13 +62,10 @@ #include "postgres.h" #include "miscadmin.h" +#include "port/pg_bswap.h" #include "px-crypt.h" -/* for ntohl/htonl */ -#include -#include - #define _PASSWORD_EFMT1 '_' static const char _crypt_a64[] = @@ -408,8 +405,8 @@ des_setkey(const char *key) if (!des_initialised) des_init(); - rawkey0 = ntohl(*(const uint32 *) key); - rawkey1 = ntohl(*(const uint32 *) (key + 4)); + rawkey0 = pg_ntoh32(*(const uint32 *) key); + rawkey1 = pg_ntoh32(*(const uint32 *) (key + 4)); if ((rawkey0 | rawkey1) && rawkey0 == old_rawkey0 @@ -634,15 +631,15 @@ des_cipher(const char *in, char *out, long salt, int count) /* copy data to avoid assuming input is word-aligned */ memcpy(buffer, in, sizeof(buffer)); - rawl = ntohl(buffer[0]); - rawr = ntohl(buffer[1]); + rawl = pg_ntoh32(buffer[0]); + rawr = pg_ntoh32(buffer[1]); retval = do_des(rawl, rawr, &l_out, &r_out, count); if (retval) return retval; - buffer[0] = htonl(l_out); - buffer[1] = htonl(r_out); + buffer[0] = pg_hton32(l_out); + buffer[1] = pg_hton32(r_out); /* copy data to avoid assuming output is word-aligned */ memcpy(out, buffer, sizeof(buffer)); diff --git a/contrib/uuid-ossp/uuid-ossp.c b/contrib/uuid-ossp/uuid-ossp.c index 55bc609415..fce4bc9140 100644 --- a/contrib/uuid-ossp/uuid-ossp.c +++ b/contrib/uuid-ossp/uuid-ossp.c @@ -14,13 +14,10 @@ #include "postgres.h" #include "fmgr.h" +#include "port/pg_bswap.h" #include "utils/builtins.h" #include "utils/uuid.h" -/* for ntohl/htonl */ -#include -#include - /* * It's possible that there's more than one uuid.h header file present. * We expect configure to set the HAVE_ symbol for only the one we want. 
@@ -90,16 +87,16 @@ typedef struct #define UUID_TO_NETWORK(uu) \ do { \ - uu.time_low = htonl(uu.time_low); \ - uu.time_mid = htons(uu.time_mid); \ - uu.time_hi_and_version = htons(uu.time_hi_and_version); \ + uu.time_low = pg_hton32(uu.time_low); \ + uu.time_mid = pg_hton16(uu.time_mid); \ + uu.time_hi_and_version = pg_hton16(uu.time_hi_and_version); \ } while (0) #define UUID_TO_LOCAL(uu) \ do { \ - uu.time_low = ntohl(uu.time_low); \ - uu.time_mid = ntohs(uu.time_mid); \ - uu.time_hi_and_version = ntohs(uu.time_hi_and_version); \ + uu.time_low = pg_ntoh32(uu.time_low); \ + uu.time_mid = pg_ntoh16(uu.time_mid); \ + uu.time_hi_and_version = pg_ntoh16(uu.time_hi_and_version); \ } while (0) #define UUID_V3_OR_V5(uu, v) \ diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 7c004ffad8..e87588040f 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -17,8 +17,6 @@ #include #include #include -#include -#include #include "access/heapam.h" #include "access/htup_details.h" @@ -38,6 +36,7 @@ #include "optimizer/planner.h" #include "nodes/makefuncs.h" #include "parser/parse_relation.h" +#include "port/pg_bswap.h" #include "rewrite/rewriteHandler.h" #include "storage/fd.h" #include "tcop/tcopprot.h" @@ -671,7 +670,7 @@ CopySendInt32(CopyState cstate, int32 val) { uint32 buf; - buf = htonl((uint32) val); + buf = pg_hton32((uint32) val); CopySendData(cstate, &buf, sizeof(buf)); } @@ -690,7 +689,7 @@ CopyGetInt32(CopyState cstate, int32 *val) *val = 0; /* suppress compiler warning */ return false; } - *val = (int32) ntohl(buf); + *val = (int32) pg_ntoh32(buf); return true; } @@ -702,7 +701,7 @@ CopySendInt16(CopyState cstate, int16 val) { uint16 buf; - buf = htons((uint16) val); + buf = pg_hton16((uint16) val); CopySendData(cstate, &buf, sizeof(buf)); } @@ -719,7 +718,7 @@ CopyGetInt16(CopyState cstate, int16 *val) *val = 0; /* suppress compiler warning */ return false; } - *val = (int16) ntohs(buf); + *val = (int16) pg_ntoh16(buf); return true; } diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 39a57d4835..480e344eb3 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -18,7 +18,6 @@ #include #include #include -#include #include #ifdef HAVE_SYS_SELECT_H #include @@ -33,6 +32,7 @@ #include "libpq/pqformat.h" #include "libpq/scram.h" #include "miscadmin.h" +#include "port/pg_bswap.h" #include "replication/walsender.h" #include "storage/ipc.h" #include "utils/backend_random.h" @@ -2840,7 +2840,7 @@ PerformRadiusTransaction(char *server, char *secret, char *portstr, char *identi radius_packet *receivepacket = &radius_recv_pack; char *radius_buffer = (char *) &radius_send_pack; char *receive_buffer = (char *) &radius_recv_pack; - int32 service = htonl(RADIUS_AUTHENTICATE_ONLY); + int32 service = pg_hton32(RADIUS_AUTHENTICATE_ONLY); uint8 *cryptvector; int encryptedpasswordlen; uint8 encryptedpassword[RADIUS_MAX_PASSWORD_LENGTH]; @@ -2948,7 +2948,7 @@ PerformRadiusTransaction(char *server, char *secret, char *portstr, char *identi /* Length needs to be in network order on the wire */ packetlength = packet->length; - packet->length = htons(packet->length); + packet->length = pg_hton16(packet->length); sock = socket(serveraddrs[0].ai_family, SOCK_DGRAM, 0); if (sock == PGINVALID_SOCKET) @@ -3074,19 +3074,19 @@ PerformRadiusTransaction(char *server, char *secret, char *portstr, char *identi } #ifdef HAVE_IPV6 - if (remoteaddr.sin6_port != htons(port)) + if (remoteaddr.sin6_port != pg_hton16(port)) #else - if 
(remoteaddr.sin_port != htons(port)) + if (remoteaddr.sin_port != pg_hton16(port)) #endif { #ifdef HAVE_IPV6 ereport(LOG, (errmsg("RADIUS response from %s was sent from incorrect port: %d", - server, ntohs(remoteaddr.sin6_port)))); + server, pg_ntoh16(remoteaddr.sin6_port)))); #else ereport(LOG, (errmsg("RADIUS response from %s was sent from incorrect port: %d", - server, ntohs(remoteaddr.sin_port)))); + server, pg_ntoh16(remoteaddr.sin_port)))); #endif continue; } @@ -3098,11 +3098,11 @@ PerformRadiusTransaction(char *server, char *secret, char *portstr, char *identi continue; } - if (packetlength != ntohs(receivepacket->length)) + if (packetlength != pg_ntoh16(receivepacket->length)) { ereport(LOG, (errmsg("RADIUS response from %s has corrupt length: %d (actual length %d)", - server, ntohs(receivepacket->length), packetlength))); + server, pg_ntoh16(receivepacket->length), packetlength))); continue; } diff --git a/src/backend/libpq/ifaddr.c b/src/backend/libpq/ifaddr.c index 53bf6bcd80..b8c463b101 100644 --- a/src/backend/libpq/ifaddr.c +++ b/src/backend/libpq/ifaddr.c @@ -27,10 +27,10 @@ #ifdef HAVE_NETINET_TCP_H #include #endif -#include #include #include "libpq/ifaddr.h" +#include "port/pg_bswap.h" static int range_sockaddr_AF_INET(const struct sockaddr_in *addr, const struct sockaddr_in *netaddr, @@ -144,7 +144,7 @@ pg_sockaddr_cidr_mask(struct sockaddr_storage *mask, char *numbits, int family) & 0xffffffffUL; else maskl = 0; - mask4.sin_addr.s_addr = htonl(maskl); + mask4.sin_addr.s_addr = pg_hton32(maskl); memcpy(mask, &mask4, sizeof(mask4)); break; } @@ -568,7 +568,7 @@ pg_foreach_ifaddr(PgIfAddrCallback callback, void *cb_data) /* addr 127.0.0.1/8 */ memset(&addr, 0, sizeof(addr)); addr.sin_family = AF_INET; - addr.sin_addr.s_addr = ntohl(0x7f000001); + addr.sin_addr.s_addr = pg_ntoh32(0x7f000001); memset(&mask, 0, sizeof(mask)); pg_sockaddr_cidr_mask(&mask, "8", AF_INET); run_ifaddr_callback(callback, cb_data, diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c index 4452ea4228..754154b83b 100644 --- a/src/backend/libpq/pqcomm.c +++ b/src/backend/libpq/pqcomm.c @@ -81,7 +81,6 @@ #ifdef HAVE_NETINET_TCP_H #include #endif -#include #ifdef HAVE_UTIME_H #include #endif @@ -92,6 +91,7 @@ #include "common/ip.h" #include "libpq/libpq.h" #include "miscadmin.h" +#include "port/pg_bswap.h" #include "storage/ipc.h" #include "utils/guc.h" #include "utils/memutils.h" @@ -1286,7 +1286,7 @@ pq_getmessage(StringInfo s, int maxlen) return EOF; } - len = ntohl(len); + len = pg_ntoh32(len); if (len < 4 || (maxlen > 0 && len > maxlen)) @@ -1569,7 +1569,7 @@ socket_putmessage(char msgtype, const char *s, size_t len) { uint32 n32; - n32 = htonl((uint32) (len + 4)); + n32 = pg_hton32((uint32) (len + 4)); if (internal_putbytes((char *) &n32, 4)) goto fail; } diff --git a/src/backend/libpq/pqformat.c b/src/backend/libpq/pqformat.c index c8cf67c041..f27a04f834 100644 --- a/src/backend/libpq/pqformat.c +++ b/src/backend/libpq/pqformat.c @@ -72,12 +72,11 @@ #include "postgres.h" #include -#include -#include #include "libpq/libpq.h" #include "libpq/pqformat.h" #include "mb/pg_wchar.h" +#include "port/pg_bswap.h" /* -------------------------------- @@ -246,11 +245,11 @@ pq_sendint(StringInfo buf, int i, int b) appendBinaryStringInfo(buf, (char *) &n8, 1); break; case 2: - n16 = htons((uint16) i); + n16 = pg_hton16((uint16) i); appendBinaryStringInfo(buf, (char *) &n16, 2); break; case 4: - n32 = htonl((uint32) i); + n32 = pg_hton32((uint32) i); appendBinaryStringInfo(buf, (char *) &n32, 4); 
break; default: @@ -270,17 +269,9 @@ pq_sendint(StringInfo buf, int i, int b) void pq_sendint64(StringInfo buf, int64 i) { - uint32 n32; - - /* High order half first, since we're doing MSB-first */ - n32 = (uint32) (i >> 32); - n32 = htonl(n32); - appendBinaryStringInfo(buf, (char *) &n32, 4); + uint64 n64 = pg_hton64(i); - /* Now the low order half */ - n32 = (uint32) i; - n32 = htonl(n32); - appendBinaryStringInfo(buf, (char *) &n32, 4); + appendBinaryStringInfo(buf, (char *) &n64, sizeof(n64)); } /* -------------------------------- @@ -304,7 +295,7 @@ pq_sendfloat4(StringInfo buf, float4 f) } swap; swap.f = f; - swap.i = htonl(swap.i); + swap.i = pg_hton32(swap.i); appendBinaryStringInfo(buf, (char *) &swap.i, 4); } @@ -460,11 +451,11 @@ pq_getmsgint(StringInfo msg, int b) break; case 2: pq_copymsgbytes(msg, (char *) &n16, 2); - result = ntohs(n16); + result = pg_ntoh16(n16); break; case 4: pq_copymsgbytes(msg, (char *) &n32, 4); - result = ntohl(n32); + result = pg_ntoh32(n32); break; default: elog(ERROR, "unsupported integer size %d", b); @@ -485,20 +476,11 @@ pq_getmsgint(StringInfo msg, int b) int64 pq_getmsgint64(StringInfo msg) { - int64 result; - uint32 h32; - uint32 l32; + uint64 n64; - pq_copymsgbytes(msg, (char *) &h32, 4); - pq_copymsgbytes(msg, (char *) &l32, 4); - h32 = ntohl(h32); - l32 = ntohl(l32); + pq_copymsgbytes(msg, (char *) &n64, sizeof(n64)); - result = h32; - result <<= 32; - result |= l32; - - return result; + return pg_ntoh64(n64); } /* -------------------------------- diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index 8a2cc2fc2b..2b2b993e2c 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -74,8 +74,6 @@ #include #include #include -#include -#include #include #include @@ -107,6 +105,7 @@ #include "miscadmin.h" #include "pg_getopt.h" #include "pgstat.h" +#include "port/pg_bswap.h" #include "postmaster/autovacuum.h" #include "postmaster/bgworker_internals.h" #include "postmaster/fork_process.h" @@ -1072,7 +1071,7 @@ PostmasterMain(int argc, char *argv[]) "_postgresql._tcp.", NULL, NULL, - htons(PostPortNumber), + pg_hton16(PostPortNumber), 0, NULL, NULL, @@ -1966,7 +1965,7 @@ ProcessStartupPacket(Port *port, bool SSLdone) return STATUS_ERROR; } - len = ntohl(len); + len = pg_ntoh32(len); len -= 4; if (len < (int32) sizeof(ProtocolVersion) || @@ -2002,7 +2001,7 @@ ProcessStartupPacket(Port *port, bool SSLdone) * The first field is either a protocol version number or a special * request code. */ - port->proto = proto = ntohl(*((ProtocolVersion *) buf)); + port->proto = proto = pg_ntoh32(*((ProtocolVersion *) buf)); if (proto == CANCEL_REQUEST_CODE) { @@ -2281,8 +2280,8 @@ processCancelRequest(Port *port, void *pkt) int i; #endif - backendPID = (int) ntohl(canc->backendPID); - cancelAuthCode = (int32) ntohl(canc->cancelAuthCode); + backendPID = (int) pg_ntoh32(canc->backendPID); + cancelAuthCode = (int32) pg_ntoh32(canc->cancelAuthCode); /* * See if we have a matching backend. 
In the EXEC_BACKEND case, we can no diff --git a/src/backend/tcop/fastpath.c b/src/backend/tcop/fastpath.c index 9207d76981..8101ae74e0 100644 --- a/src/backend/tcop/fastpath.c +++ b/src/backend/tcop/fastpath.c @@ -17,9 +17,6 @@ */ #include "postgres.h" -#include -#include - #include "access/htup_details.h" #include "access/xact.h" #include "catalog/objectaccess.h" @@ -28,6 +25,7 @@ #include "libpq/pqformat.h" #include "mb/pg_wchar.h" #include "miscadmin.h" +#include "port/pg_bswap.h" #include "tcop/fastpath.h" #include "tcop/tcopprot.h" #include "utils/acl.h" @@ -92,7 +90,7 @@ GetOldFunctionMessage(StringInfo buf) if (pq_getbytes((char *) &ibuf, 4)) return EOF; appendBinaryStringInfo(buf, (char *) &ibuf, 4); - nargs = ntohl(ibuf); + nargs = pg_ntoh32(ibuf); /* For each argument ... */ while (nargs-- > 0) { @@ -102,7 +100,7 @@ GetOldFunctionMessage(StringInfo buf) if (pq_getbytes((char *) &ibuf, 4)) return EOF; appendBinaryStringInfo(buf, (char *) &ibuf, 4); - argsize = ntohl(ibuf); + argsize = pg_ntoh32(ibuf); if (argsize < -1) { /* FATAL here since no hope of regaining message sync */ diff --git a/src/bin/pg_basebackup/streamutil.c b/src/bin/pg_basebackup/streamutil.c index 81fef8cd51..a57ff8f2c4 100644 --- a/src/bin/pg_basebackup/streamutil.c +++ b/src/bin/pg_basebackup/streamutil.c @@ -17,18 +17,15 @@ #include #include -/* for ntohl/htonl */ -#include -#include - /* local includes */ #include "receivelog.h" #include "streamutil.h" #include "access/xlog_internal.h" -#include "pqexpbuffer.h" #include "common/fe_memutils.h" #include "datatype/timestamp.h" +#include "port/pg_bswap.h" +#include "pqexpbuffer.h" #define ERRCODE_DUPLICATE_OBJECT "42710" @@ -576,17 +573,9 @@ feTimestampDifferenceExceeds(TimestampTz start_time, void fe_sendint64(int64 i, char *buf) { - uint32 n32; + uint64 n64 = pg_hton64(i); - /* High order half first, since we're doing MSB-first */ - n32 = (uint32) (i >> 32); - n32 = htonl(n32); - memcpy(&buf[0], &n32, 4); - - /* Now the low order half */ - n32 = (uint32) i; - n32 = htonl(n32); - memcpy(&buf[4], &n32, 4); + memcpy(buf, &n64, sizeof(n64)); } /* @@ -595,18 +584,9 @@ fe_sendint64(int64 i, char *buf) int64 fe_recvint64(char *buf) { - int64 result; - uint32 h32; - uint32 l32; - - memcpy(&h32, buf, 4); - memcpy(&l32, buf + 4, 4); - h32 = ntohl(h32); - l32 = ntohl(l32); + uint64 n64; - result = h32; - result <<= 32; - result |= l32; + memcpy(&n64, buf, sizeof(n64)); - return result; + return pg_ntoh64(n64); } diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c index 8ad51942ff..8b996f4699 100644 --- a/src/bin/pg_dump/parallel.c +++ b/src/bin/pg_dump/parallel.c @@ -63,7 +63,9 @@ #include "parallel.h" #include "pg_backup_utils.h" + #include "fe_utils/string_utils.h" +#include "port/pg_bswap.h" /* Mnemonic macros for indexing the fd array returned by pipe(2) */ #define PIPE_READ 0 @@ -1764,8 +1766,8 @@ pgpipe(int handles[2]) memset((void *) &serv_addr, 0, sizeof(serv_addr)); serv_addr.sin_family = AF_INET; - serv_addr.sin_port = htons(0); - serv_addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); + serv_addr.sin_port = pg_hton16(0); + serv_addr.sin_addr.s_addr = pg_hton32(INADDR_LOOPBACK); if (bind(s, (SOCKADDR *) &serv_addr, len) == SOCKET_ERROR) { write_msg(modulename, "pgpipe: could not bind: error code %d\n", diff --git a/src/bin/pg_rewind/libpq_fetch.c b/src/bin/pg_rewind/libpq_fetch.c index 0cdff55cab..79bec40b02 100644 --- a/src/bin/pg_rewind/libpq_fetch.c +++ b/src/bin/pg_rewind/libpq_fetch.c @@ -14,10 +14,6 @@ #include #include -/* for ntohl/htonl 
*/ -#include -#include - #include "pg_rewind.h" #include "datapagemap.h" #include "fetch.h" @@ -28,6 +24,7 @@ #include "libpq-fe.h" #include "catalog/catalog.h" #include "catalog/pg_type.h" +#include "port/pg_bswap.h" static PGconn *conn = NULL; @@ -220,28 +217,6 @@ libpqProcessFileList(void) PQclear(res); } -/* - * Converts an int64 from network byte order to native format. - */ -static int64 -pg_recvint64(int64 value) -{ - union - { - int64 i64; - uint32 i32[2]; - } swap; - int64 result; - - swap.i64 = value; - - result = (uint32) ntohl(swap.i32[0]); - result <<= 32; - result |= (uint32) ntohl(swap.i32[1]); - - return result; -} - /*---- * Runs a query, which returns pieces of files from the remote source data * directory, and overwrites the corresponding parts of target files with @@ -318,7 +293,7 @@ receiveFileChunks(const char *sql) /* Read result set to local variables */ memcpy(&chunkoff, PQgetvalue(res, 0, 1), sizeof(int64)); - chunkoff = pg_recvint64(chunkoff); + chunkoff = pg_ntoh64(chunkoff); chunksize = PQgetlength(res, 0, 2); filenamelen = PQgetlength(res, 0, 0); diff --git a/src/common/scram-common.c b/src/common/scram-common.c index e43d035d4d..e54fe1a7c9 100644 --- a/src/common/scram-common.c +++ b/src/common/scram-common.c @@ -19,12 +19,9 @@ #include "postgres_fe.h" #endif -/* for htonl */ -#include -#include - #include "common/base64.h" #include "common/scram-common.h" +#include "port/pg_bswap.h" #define HMAC_IPAD 0x36 #define HMAC_OPAD 0x5C @@ -109,7 +106,7 @@ scram_SaltedPassword(const char *password, uint8 *result) { int password_len = strlen(password); - uint32 one = htonl(1); + uint32 one = pg_hton32(1); int i, j; uint8 Ui[SCRAM_KEY_LEN]; diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c index c580d91135..5f79803607 100644 --- a/src/interfaces/libpq/fe-connect.c +++ b/src/interfaces/libpq/fe-connect.c @@ -47,7 +47,6 @@ #ifdef HAVE_NETINET_TCP_H #include #endif -#include #endif #ifdef ENABLE_THREAD_SAFETY @@ -73,6 +72,7 @@ static int ldapServiceLookup(const char *purl, PQconninfoOption *options, #include "common/ip.h" #include "mb/pg_wchar.h" +#include "port/pg_bswap.h" #ifndef WIN32 @@ -2443,7 +2443,7 @@ PQconnectPoll(PGconn *conn) * shouldn't since we only got here if the socket is * write-ready. */ - pv = htonl(NEGOTIATE_SSL_CODE); + pv = pg_hton32(NEGOTIATE_SSL_CODE); if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK) { appendPQExpBuffer(&conn->errorMessage, @@ -3838,10 +3838,10 @@ internal_cancel(SockAddr *raddr, int be_pid, int be_key, /* Create and send the cancel request packet. 
*/ - crp.packetlen = htonl((uint32) sizeof(crp)); - crp.cp.cancelRequestCode = (MsgType) htonl(CANCEL_REQUEST_CODE); - crp.cp.backendPID = htonl(be_pid); - crp.cp.cancelAuthCode = htonl(be_key); + crp.packetlen = pg_hton32((uint32) sizeof(crp)); + crp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE); + crp.cp.backendPID = pg_hton32(be_pid); + crp.cp.cancelAuthCode = pg_hton32(be_key); retry4: if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp)) diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c index 343e5303d9..2ff5559233 100644 --- a/src/interfaces/libpq/fe-lobj.c +++ b/src/interfaces/libpq/fe-lobj.c @@ -33,12 +33,11 @@ #include #include #include -#include /* for ntohl/htonl */ -#include #include "libpq-fe.h" #include "libpq-int.h" #include "libpq/libpq-fs.h" /* must come after sys/stat.h */ +#include "port/pg_bswap.h" #define LO_BUFSIZE 8192 @@ -1070,11 +1069,11 @@ lo_hton64(pg_int64 host64) /* High order half first, since we're doing MSB-first */ t = (uint32) (host64 >> 32); - swap.i32[0] = htonl(t); + swap.i32[0] = pg_hton32(t); /* Now the low order half */ t = (uint32) host64; - swap.i32[1] = htonl(t); + swap.i32[1] = pg_hton32(t); return swap.i64; } @@ -1095,9 +1094,9 @@ lo_ntoh64(pg_int64 net64) swap.i64 = net64; - result = (uint32) ntohl(swap.i32[0]); + result = (uint32) pg_ntoh32(swap.i32[0]); result <<= 32; - result |= (uint32) ntohl(swap.i32[1]); + result |= (uint32) pg_ntoh32(swap.i32[1]); return result; } diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c index cac6359585..41b1749d07 100644 --- a/src/interfaces/libpq/fe-misc.c +++ b/src/interfaces/libpq/fe-misc.c @@ -33,9 +33,6 @@ #include #include -#include -#include - #ifdef WIN32 #include "win32.h" #else @@ -53,6 +50,7 @@ #include "libpq-fe.h" #include "libpq-int.h" #include "mb/pg_wchar.h" +#include "port/pg_bswap.h" #include "pg_config_paths.h" @@ -278,14 +276,14 @@ pqGetInt(int *result, size_t bytes, PGconn *conn) return EOF; memcpy(&tmp2, conn->inBuffer + conn->inCursor, 2); conn->inCursor += 2; - *result = (int) ntohs(tmp2); + *result = (int) pg_ntoh16(tmp2); break; case 4: if (conn->inCursor + 4 > conn->inEnd) return EOF; memcpy(&tmp4, conn->inBuffer + conn->inCursor, 4); conn->inCursor += 4; - *result = (int) ntohl(tmp4); + *result = (int) pg_ntoh32(tmp4); break; default: pqInternalNotice(&conn->noticeHooks, @@ -314,12 +312,12 @@ pqPutInt(int value, size_t bytes, PGconn *conn) switch (bytes) { case 2: - tmp2 = htons((uint16) value); + tmp2 = pg_hton16((uint16) value); if (pqPutMsgBytes((const char *) &tmp2, 2, conn)) return EOF; break; case 4: - tmp4 = htonl((uint32) value); + tmp4 = pg_hton32((uint32) value); if (pqPutMsgBytes((const char *) &tmp4, 4, conn)) return EOF; break; @@ -597,7 +595,7 @@ pqPutMsgEnd(PGconn *conn) { uint32 msgLen = conn->outMsgEnd - conn->outMsgStart; - msgLen = htonl(msgLen); + msgLen = pg_hton32(msgLen); memcpy(conn->outBuffer + conn->outMsgStart, &msgLen, 4); } diff --git a/src/interfaces/libpq/fe-protocol2.c b/src/interfaces/libpq/fe-protocol2.c index 83f74f3985..1320d18a99 100644 --- a/src/interfaces/libpq/fe-protocol2.c +++ b/src/interfaces/libpq/fe-protocol2.c @@ -19,17 +19,16 @@ #include "libpq-fe.h" #include "libpq-int.h" +#include "port/pg_bswap.h" #ifdef WIN32 #include "win32.h" #else #include -#include #ifdef HAVE_NETINET_TCP_H #include #endif -#include #endif @@ -1609,7 +1608,7 @@ pqBuildStartupPacket2(PGconn *conn, int *packetlen, MemSet(startpacket, 0, sizeof(StartupPacket)); - 
startpacket->protoVersion = htonl(conn->pversion); + startpacket->protoVersion = pg_hton32(conn->pversion); /* strncpy is safe here: postmaster will handle full fields correctly */ strncpy(startpacket->user, conn->pguser, SM_USER); diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c index 7da5fb28fb..21fb8f2f21 100644 --- a/src/interfaces/libpq/fe-protocol3.c +++ b/src/interfaces/libpq/fe-protocol3.c @@ -21,16 +21,15 @@ #include "libpq-int.h" #include "mb/pg_wchar.h" +#include "port/pg_bswap.h" #ifdef WIN32 #include "win32.h" #else #include -#include #ifdef HAVE_NETINET_TCP_H #include #endif -#include #endif @@ -2148,7 +2147,7 @@ build_startup_packet(const PGconn *conn, char *packet, /* Protocol version comes first. */ if (packet) { - ProtocolVersion pv = htonl(conn->pversion); + ProtocolVersion pv = pg_hton32(conn->pversion); memcpy(packet + packet_len, &pv, sizeof(ProtocolVersion)); } diff --git a/src/port/getaddrinfo.c b/src/port/getaddrinfo.c index e5b5702c79..2e0e313c9f 100644 --- a/src/port/getaddrinfo.c +++ b/src/port/getaddrinfo.c @@ -31,6 +31,7 @@ #include "getaddrinfo.h" #include "libpq/pqcomm.h" /* needed for struct sockaddr_storage */ +#include "port/pg_bsawp.h" #ifdef WIN32 @@ -178,7 +179,7 @@ getaddrinfo(const char *node, const char *service, if (node) { if (node[0] == '\0') - sin.sin_addr.s_addr = htonl(INADDR_ANY); + sin.sin_addr.s_addr = pg_hton32(INADDR_ANY); else if (hints.ai_flags & AI_NUMERICHOST) { if (!inet_aton(node, &sin.sin_addr)) @@ -221,13 +222,13 @@ getaddrinfo(const char *node, const char *service, else { if (hints.ai_flags & AI_PASSIVE) - sin.sin_addr.s_addr = htonl(INADDR_ANY); + sin.sin_addr.s_addr = pg_hton32(INADDR_ANY); else - sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK); + sin.sin_addr.s_addr = pg_hton32(INADDR_LOOPBACK); } if (service) - sin.sin_port = htons((unsigned short) atoi(service)); + sin.sin_port = pg_hton16((unsigned short) atoi(service)); #ifdef HAVE_STRUCT_SOCKADDR_STORAGE_SS_LEN sin.sin_len = sizeof(sin); @@ -402,7 +403,7 @@ getnameinfo(const struct sockaddr *sa, int salen, if (sa->sa_family == AF_INET) { ret = snprintf(service, servicelen, "%d", - ntohs(((struct sockaddr_in *) sa)->sin_port)); + pg_ntoh16(((struct sockaddr_in *) sa)->sin_port)); } if (ret == -1 || ret >= servicelen) return EAI_MEMORY; diff --git a/src/port/inet_aton.c b/src/port/inet_aton.c index 68efd4723e..b31d1f025d 100644 --- a/src/port/inet_aton.c +++ b/src/port/inet_aton.c @@ -43,6 +43,8 @@ #include #include +#include "port/pg_swap.h" + /* * Check whether "cp" is a valid ascii representation * of an Internet address and convert to a binary address. @@ -142,6 +144,6 @@ inet_aton(const char *cp, struct in_addr *addr) break; } if (addr) - addr->s_addr = htonl(val); + addr->s_addr = pg_hton32(val); return 1; } From 859759b62f2d2f2f2805e2aa9ebdb167a1b9655c Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 1 Oct 2017 17:41:00 -0700 Subject: [PATCH 0313/1087] Correct include file name in inet_aton fallback. Per buildfarm animal frogmouth. 
Author: Andres Freund --- src/port/inet_aton.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/port/inet_aton.c b/src/port/inet_aton.c index b31d1f025d..adaf18adb3 100644 --- a/src/port/inet_aton.c +++ b/src/port/inet_aton.c @@ -43,7 +43,7 @@ #include #include -#include "port/pg_swap.h" +#include "port/pg_bswap.h" /* * Check whether "cp" is a valid ascii representation From 0c8b3ee94478ca07c86c09d2399a2ce73c2b922b Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 1 Oct 2017 20:05:27 -0700 Subject: [PATCH 0314/1087] Yet another pg_bswap typo in a windows only file. Per buildfarm animal frogmouth. Brown-Paper-Bagged-By: Andres Freund --- src/port/getaddrinfo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/port/getaddrinfo.c b/src/port/getaddrinfo.c index 2e0e313c9f..dbad0293e8 100644 --- a/src/port/getaddrinfo.c +++ b/src/port/getaddrinfo.c @@ -31,7 +31,7 @@ #include "getaddrinfo.h" #include "libpq/pqcomm.h" /* needed for struct sockaddr_storage */ -#include "port/pg_bsawp.h" +#include "port/pg_bswap.h" #ifdef WIN32 From 0703c197adb0bf5fa6c99e8af74b13585bdc9056 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Mon, 2 Oct 2017 10:27:46 +0100 Subject: [PATCH 0315/1087] Grammar typo in security warning about md5 --- doc/src/sgml/client-auth.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index c76d5faf44..78c594bbba 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -959,7 +959,7 @@ omicron bryanh guest1 mechanism. It prevents password sniffing and avoids storing passwords on the server in plain text but provides no protection if an attacker manages to steal the password hash from the server. Also, the MD5 hash - algorithm is nowadays no longer consider secure against determined + algorithm is nowadays no longer considered secure against determined attacks. From f41bd4cb90eb1d93631a346bf71d17dfc4beee50 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 22 Sep 2017 13:51:01 -0400 Subject: [PATCH 0316/1087] Expand collation documentation Document better how to create custom collations and what locale strings ICU accepts. Explain the ICU examples in more detail. Also update the text on the CREATE COLLATION reference page a bit to take ICU more into account. --- doc/src/sgml/charset.sgml | 135 ++++++++++++++++++++----- doc/src/sgml/ref/create_collation.sgml | 28 +++-- 2 files changed, 124 insertions(+), 39 deletions(-) diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index 44e43503a6..63f7de5b43 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -515,7 +515,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; A collation object provided by libc maps to a combination of LC_COLLATE and LC_CTYPE - settings. (As + settings, as accepted by the setlocale() system library call. (As the name would suggest, the main purpose of a collation is to set LC_COLLATE, which controls the sort order. But it is rarely necessary in practice to have an @@ -640,21 +640,19 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; ICU collations - Collations provided by ICU are created with names in BCP 47 language tag + With ICU, it is not sensible to enumerate all possible locale names. ICU + uses a particular naming system for locales, but there are many more ways + to name a locale than there are actually distinct locales. 
+ initdb uses the ICU APIs to extract a set of distinct + locales to populate the initial set of collations. Collations provided by + ICU are created in the SQL environment with names in BCP 47 language tag format, with a private use extension -x-icu appended, to distinguish them from - libc locales. So de-x-icu would be an example name. + libc locales. - With ICU, it is not sensible to enumerate all possible locale names. ICU - uses a particular naming system for locales, but there are many more ways - to name a locale than there are actually distinct locales. (In fact, any - string will be accepted as a locale name.) - See for - information on ICU locale naming. initdb uses the ICU - APIs to extract a set of distinct locales to populate the initial set of - collations. Here are some example collations that might be created: + Here are some example collations that might be created: @@ -695,32 +693,104 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; will draw an error along the lines of collation "de-x-icu" for encoding "WIN874" does not exist. + + + + + Creating New Collation Objects + + + If the standard and predefined collations are not sufficient, users can + create their own collation objects using the SQL + command . + + + + The standard and predefined collations are in the + schema pg_catalog, like all predefined objects. + User-defined collations should be created in user schemas. This also + ensures that they are saved by pg_dump. + + + + libc collations + + + New libc collations can be created like this: + +CREATE COLLATION german (provider = libc, locale = 'de_DE'); + + The exact values that are acceptable for the locale + clause in this command depend on the operating system. On Unix-like + systems, the command locale -a will show a list. + + + + Since the predefined libc collations already include all collations + defined in the operating system when the database instance is + initialized, it is not often necessary to manually create new ones. + Reasons might be if a different naming system is desired (in which case + see also ) or if the operating system has + been upgraded to provide new locale definitions (in which case see + also pg_import_system_collations()). + + + + + ICU collations ICU allows collations to be customized beyond the basic language+country set that is preloaded by initdb. Users are encouraged to define their own collation objects that make use of these facilities to - suit the sorting behavior to their requirements. Here are some examples: + suit the sorting behavior to their requirements. + See + and for + information on ICU locale naming. The set of acceptable names and + attributes depends on the particular ICU version. + + + + Here are some examples: - CREATE COLLATION "de-u-co-phonebk-x-icu" (provider = icu, locale = 'de-u-co-phonebk') + CREATE COLLATION "de-u-co-phonebk-x-icu" (provider = icu, locale = 'de-u-co-phonebk'); + CREATE COLLATION "de-u-co-phonebk-x-icu" (provider = icu, locale = 'de@collation=phonebook'); German collation with phone book collation type + + The first example selects the ICU locale using a language + tag per BCP 47. The second example uses the traditional + ICU-specific locale syntax. The first style is preferred going + forward, but it is not supported by older ICU versions. + + + Note that you can name the collation objects in the SQL environment + anything you want. 
In this example, we follow the naming style that + the predefined collations use, which in turn also follow BCP 47, but + that is not required for user-defined collations. + - CREATE COLLATION "und-u-co-emoji-x-icu" (provider = icu, locale = 'und-u-co-emoji') + CREATE COLLATION "und-u-co-emoji-x-icu" (provider = icu, locale = 'und-u-co-emoji'); + CREATE COLLATION "und-u-co-emoji-x-icu" (provider = icu, locale = '@collation=emoji'); Root collation with Emoji collation type, per Unicode Technical Standard #51 + + Observe how in the traditional ICU locale naming system, the root + locale is selected by an empty string. +
- CREATE COLLATION digitslast (provider = icu, locale = 'en-u-kr-latn-digit') + CREATE COLLATION digitslast (provider = icu, locale = 'en-u-kr-latn-digit'); + CREATE COLLATION digitslast (provider = icu, locale = 'en@colReorder=latn-digit'); Sort digits after Latin letters. (The default is digits before letters.)
@@ -729,7 +799,8 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; - CREATE COLLATION upperfirst (provider = icu, locale = 'en-u-kf-upper') + CREATE COLLATION upperfirst (provider = icu, locale = 'en-u-kf-upper'); + CREATE COLLATION upperfirst (provider = icu, locale = 'en@colCaseFirst=upper'); Sort upper-case letters before lower-case letters. (The default is
@@ -739,7 +810,8 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; - CREATE COLLATION special (provider = icu, locale = 'en-u-kf-upper-kr-latn-digit') + CREATE COLLATION special (provider = icu, locale = 'en-u-kf-upper-kr-latn-digit'); + CREATE COLLATION special (provider = icu, locale = 'en@colCaseFirst=upper;colReorder=latn-digit'); Combines both of the above options.
@@ -748,7 +820,8 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; - CREATE COLLATION numeric (provider = icu, locale = 'en-u-kn-true') + CREATE COLLATION numeric (provider = icu, locale = 'en-u-kn-true'); + CREATE COLLATION numeric (provider = icu, locale = 'en@colNumeric=yes'); Numeric ordering, sorts sequences of digits by their numeric value,
@@ -768,7 +841,8 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; repository. The ICU Locale Explorer can be used to check the details of a particular locale - definition. + definition. The examples using the k* subtags require + at least ICU version 54.
@@ -779,10 +853,21 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; strings that compare equal according to the collation but are not byte-wise equal will be sorted according to their byte values. + + + + By design, ICU will accept almost any string as a locale name and match + it to the closest locale it can provide, using the fallback procedure + described in its documentation. Thus, there will be no direct feedback + if a collation specification is composed using features that the given + ICU installation does not actually support. It is therefore recommended + to create application-level test cases to check that the collation + definitions satisfy one's requirements. + + - - + Copying Collations
@@ -796,13 +881,7 @@ CREATE COLLATION german FROM "de_DE"; CREATE COLLATION french FROM "fr-x-icu"; - - - The standard and predefined collations are in the - schema pg_catalog, like all predefined objects. - User-defined collations should be created in user schemas. This also - ensures that they are saved by pg_dump.
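The fallback matching described above, where ICU accepts almost any locale string without complaint, can be observed directly with ICU's C API. The following is a minimal sketch, not part of the patch, assuming the ICU4C development headers are installed and the program is linked with -licui18n -licuuc; the locale string is deliberately bogus:

#include <stdio.h>
#include <unicode/ucol.h>

int
main(void)
{
	UErrorCode	status = U_ZERO_ERROR;
	UCollator  *coll = ucol_open("xx-no-such-locale", &status);

	/* A fallback match typically leaves U_USING_FALLBACK_WARNING or
	 * U_USING_DEFAULT_WARNING here; both still count as U_SUCCESS(),
	 * so the open "succeeds" with no hard error. */
	printf("ucol_open status: %s\n", u_errorName(status));
	if (coll)
		ucol_close(coll);
	return 0;
}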
- + diff --git a/doc/src/sgml/ref/create_collation.sgml b/doc/src/sgml/ref/create_collation.sgml index 2d3e050545..f88758095f 100644 --- a/doc/src/sgml/ref/create_collation.sgml +++ b/doc/src/sgml/ref/create_collation.sgml @@ -93,10 +93,7 @@ CREATE COLLATION [ IF NOT EXISTS ] name FROM Use the specified operating system locale for - the LC_COLLATE locale category. The locale - must be applicable to the current database encoding. - (See for the precise - rules.) + the LC_COLLATE locale category. @@ -107,10 +104,7 @@ CREATE COLLATION [ IF NOT EXISTS ] name FROM Use the specified operating system locale for - the LC_CTYPE locale category. The locale - must be applicable to the current database encoding. - (See for the precise - rules.) + the LC_CTYPE locale category. @@ -173,8 +167,13 @@ CREATE COLLATION [ IF NOT EXISTS ] name FROM - See for more information about collation - support in PostgreSQL. + See for more information on how to create collations. + + + + When using the libc collation provider, the locale must + be applicable to the current database encoding. + See for the precise rules. @@ -186,7 +185,14 @@ CREATE COLLATION [ IF NOT EXISTS ] name FROM fr_FR.utf8 (assuming the current database encoding is UTF8): -CREATE COLLATION french (LOCALE = 'fr_FR.utf8'); +CREATE COLLATION french (locale = 'fr_FR.utf8'); + + + + + To create a collation using the ICU provider using German phone book sort order: + +CREATE COLLATION german_phonebook (provider = icu, locale = 'de-u-co-phonebk'); From 89e434b59caffeeeb7478653c74ad5d7a50d2e96 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Tue, 3 Oct 2017 14:58:25 +0200 Subject: [PATCH 0317/1087] Fix coding rules violations in walreceiver.c 1. Since commit b1a9bad9e744 we had pstrdup() inside a spinlock-protected critical section; reported by Andreas Seltenreich. Turn those into strlcpy() to stack-allocated variables instead. Backpatch to 9.6. 2. Since commit 9ed551e0a4fd we had a pfree() uselessly inside a spinlock-protected critical section. Tom Lane noticed in code review. Move down. Backpatch to 9.6. 3. Since commit 64233902d22b we had GetCurrentTimestamp() (a kernel call) inside a spinlock-protected critical section. Tom Lane noticed in code review. Move it up. Backpatch to 9.2. 4. Since commit 1bb2558046cc we did elog(PANIC) while holding spinlock. Tom Lane noticed in code review. Release spinlock before dying. Backpatch to 9.2. Discussion: https://postgr.es/m/87h8vhtgj2.fsf@ansel.ydns.eu --- src/backend/replication/walreceiver.c | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c index 3474514adc..57c305d0e5 100644 --- a/src/backend/replication/walreceiver.c +++ b/src/backend/replication/walreceiver.c @@ -196,6 +196,7 @@ WalReceiverMain(void) bool first_stream; WalRcvData *walrcv = WalRcv; TimestampTz last_recv_timestamp; + TimestampTz now; bool ping_sent; char *err; @@ -205,6 +206,8 @@ WalReceiverMain(void) */ Assert(walrcv != NULL); + now = GetCurrentTimestamp(); + /* * Mark walreceiver as running in shared memory. 
* @@ -235,6 +238,7 @@ WalReceiverMain(void) case WALRCV_RESTARTING: default: /* Shouldn't happen */ + SpinLockRelease(&walrcv->mutex); elog(PANIC, "walreceiver still running according to shared memory state"); } /* Advertise our PID so that the startup process can kill us */ @@ -249,7 +253,8 @@ WalReceiverMain(void) startpointTLI = walrcv->receiveStartTLI; /* Initialise to a sanish value */ - walrcv->lastMsgSendTime = walrcv->lastMsgReceiptTime = walrcv->latestWalEndTime = GetCurrentTimestamp(); + walrcv->lastMsgSendTime = + walrcv->lastMsgReceiptTime = walrcv->latestWalEndTime = now; SpinLockRelease(&walrcv->mutex); @@ -308,13 +313,13 @@ WalReceiverMain(void) SpinLockAcquire(&walrcv->mutex); memset(walrcv->conninfo, 0, MAXCONNINFO); if (tmp_conninfo) - { strlcpy((char *) walrcv->conninfo, tmp_conninfo, MAXCONNINFO); - pfree(tmp_conninfo); - } walrcv->ready_to_display = true; SpinLockRelease(&walrcv->mutex); + if (tmp_conninfo) + pfree(tmp_conninfo); + first_stream = true; for (;;) { @@ -1390,8 +1395,8 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS) TimestampTz last_receipt_time; XLogRecPtr latest_end_lsn; TimestampTz latest_end_time; - char *slotname; - char *conninfo; + char slotname[NAMEDATALEN]; + char conninfo[MAXCONNINFO]; /* Take a lock to ensure value consistency */ SpinLockAcquire(&WalRcv->mutex); @@ -1406,8 +1411,8 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS) last_receipt_time = WalRcv->lastMsgReceiptTime; latest_end_lsn = WalRcv->latestWalEnd; latest_end_time = WalRcv->latestWalEndTime; - slotname = pstrdup(WalRcv->slotname); - conninfo = pstrdup(WalRcv->conninfo); + strlcpy(slotname, (char *) WalRcv->slotname, sizeof(slotname)); + strlcpy(conninfo, (char *) WalRcv->conninfo, sizeof(conninfo)); SpinLockRelease(&WalRcv->mutex); /* From 45f9d08684d954b0e514b69f270e763d2785dd53 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 3 Oct 2017 14:00:56 -0400 Subject: [PATCH 0318/1087] Fix race condition with unprotected use of a latch pointer variable. Commit 597a87ccc introduced a latch pointer variable to replace use of a long-lived shared latch in the shared WalRcvData structure. This was not well thought out, because there are now hazards of the pointer variable changing while it's being inspected by another process. This could obviously lead to a core dump in code like if (WalRcv->latch) SetLatch(WalRcv->latch); and there's a more remote risk of a torn read, if we have any platforms where reading/writing a pointer is not atomic. An actual problem would occur only if the walreceiver process exits (gracefully) while the startup process is trying to signal it, but that seems well within the realm of possibility. To fix, treat the pointer variable (not the referenced latch) as being protected by the WalRcv->mutex spinlock. There remains a race condition that we could apply SetLatch to a process latch that no longer belongs to the walreceiver, but I believe that's harmless: at worst it'd cause an extra wakeup of the next process to use that PGPROC structure. Back-patch to v10 where the faulty code was added. 
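Distilled to its essence, the pattern this fix and the previous one converge on is: hold the spinlock only long enough to copy shared state to local storage, and do anything that can allocate, log, or enter the kernel outside the lock. A minimal sketch using the walreceiver declarations from the diffs (the wrapper function name is invented for illustration):

static void
wake_walreceiver_safely(WalRcvData *walrcv)
{
	Latch	   *latch;

	/* fetching the pointer is not necessarily atomic, so use the mutex */
	SpinLockAcquire(&walrcv->mutex);
	latch = walrcv->latch;
	SpinLockRelease(&walrcv->mutex);

	/* the SetLatch() call, a potential kernel entry, happens unlocked */
	if (latch)
		SetLatch(latch);
}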
Discussion: https://postgr.es/m/22735.1507048202@sss.pgh.pa.us --- src/backend/replication/walreceiver.c | 19 +++++++++++++------ src/backend/replication/walreceiverfuncs.c | 7 +++++-- src/include/replication/walreceiver.h | 17 +++++++++-------- 3 files changed, 27 insertions(+), 16 deletions(-) diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c index 57c305d0e5..1bf9be673b 100644 --- a/src/backend/replication/walreceiver.c +++ b/src/backend/replication/walreceiver.c @@ -256,13 +256,14 @@ WalReceiverMain(void) walrcv->lastMsgSendTime = walrcv->lastMsgReceiptTime = walrcv->latestWalEndTime = now; + /* Report the latch to use to awaken this process */ + walrcv->latch = &MyProc->procLatch; + SpinLockRelease(&walrcv->mutex); /* Arrange to clean up at walreceiver exit */ on_shmem_exit(WalRcvDie, 0); - walrcv->latch = &MyProc->procLatch; - /* Properly accept or ignore signals the postmaster might send us */ pqsignal(SIGHUP, WalRcvSigHupHandler); /* set flag to read config file */ pqsignal(SIGINT, SIG_IGN); @@ -777,8 +778,7 @@ WalRcvDie(int code, Datum arg) /* Ensure that all WAL records received are flushed to disk */ XLogWalRcvFlush(true); - walrcv->latch = NULL; - + /* Mark ourselves inactive in shared memory */ SpinLockAcquire(&walrcv->mutex); Assert(walrcv->walRcvState == WALRCV_STREAMING || walrcv->walRcvState == WALRCV_RESTARTING || @@ -789,6 +789,7 @@ WalRcvDie(int code, Datum arg) walrcv->walRcvState = WALRCV_STOPPED; walrcv->pid = 0; walrcv->ready_to_display = false; + walrcv->latch = NULL; SpinLockRelease(&walrcv->mutex); /* Terminate the connection gracefully. */ @@ -1344,9 +1345,15 @@ ProcessWalSndrMessage(XLogRecPtr walEnd, TimestampTz sendTime) void WalRcvForceReply(void) { + Latch *latch; + WalRcv->force_reply = true; - if (WalRcv->latch) - SetLatch(WalRcv->latch); + /* fetching the latch pointer might not be atomic, so use spinlock */ + SpinLockAcquire(&WalRcv->mutex); + latch = WalRcv->latch; + SpinLockRelease(&WalRcv->mutex); + if (latch) + SetLatch(latch); } /* diff --git a/src/backend/replication/walreceiverfuncs.c b/src/backend/replication/walreceiverfuncs.c index 78f8693ece..b1f28d0fc4 100644 --- a/src/backend/replication/walreceiverfuncs.c +++ b/src/backend/replication/walreceiverfuncs.c @@ -226,6 +226,7 @@ RequestXLogStreaming(TimeLineID tli, XLogRecPtr recptr, const char *conninfo, WalRcvData *walrcv = WalRcv; bool launch = false; pg_time_t now = (pg_time_t) time(NULL); + Latch *latch; /* * We always start at the beginning of the segment. That prevents a broken @@ -274,12 +275,14 @@ RequestXLogStreaming(TimeLineID tli, XLogRecPtr recptr, const char *conninfo, walrcv->receiveStart = recptr; walrcv->receiveStartTLI = tli; + latch = walrcv->latch; + SpinLockRelease(&walrcv->mutex); if (launch) SendPostmasterSignal(PMSIGNAL_START_WALRECEIVER); - else if (walrcv->latch) - SetLatch(walrcv->latch); + else if (latch) + SetLatch(latch); } /* diff --git a/src/include/replication/walreceiver.h b/src/include/replication/walreceiver.h index 9a8b2e207e..e58fc49c68 100644 --- a/src/include/replication/walreceiver.h +++ b/src/include/replication/walreceiver.h @@ -117,14 +117,6 @@ typedef struct /* set true once conninfo is ready to display (obfuscated pwds etc) */ bool ready_to_display; - slock_t mutex; /* locks shared variables shown above */ - - /* - * force walreceiver reply? This doesn't need to be locked; memory - * barriers for ordering are sufficient. 
- */ - bool force_reply; - /* * Latch used by startup process to wake up walreceiver after telling it * where to start streaming (after setting receiveStart and @@ -133,6 +125,15 @@ typedef struct * normally mapped to procLatch when walreceiver is running. */ Latch *latch; + + slock_t mutex; /* locks shared variables shown above */ + + /* + * force walreceiver reply? This doesn't need to be locked; memory + * barriers for ordering are sufficient. But we do need atomic fetch and + * store semantics, so use sig_atomic_t. + */ + sig_atomic_t force_reply; /* used as a bool */ } WalRcvData; extern WalRcvData *WalRcv; From 11d8d72c27a64ea4e30adce11cf6c4f3dd3e60db Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 3 Oct 2017 18:53:44 -0400 Subject: [PATCH 0319/1087] Allow multiple tables to be specified in one VACUUM or ANALYZE command. Not much to say about this; does what it says on the tin. However, formerly, if there was a column list then the ANALYZE action was implied; now it must be specified, or you get an error. This is because it would otherwise be a bit unclear what the user meant if some tables have column lists and some don't. Nathan Bossart, reviewed by Michael Paquier and Masahiko Sawada, with some editorialization by me Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com --- doc/src/sgml/ref/analyze.sgml | 14 +- doc/src/sgml/ref/vacuum.sgml | 36 ++-- src/backend/commands/vacuum.c | 236 ++++++++++++++++++--------- src/backend/nodes/copyfuncs.c | 14 ++ src/backend/nodes/equalfuncs.c | 12 ++ src/backend/nodes/makefuncs.c | 15 ++ src/backend/parser/gram.y | 71 +++----- src/backend/postmaster/autovacuum.c | 20 +-- src/include/commands/vacuum.h | 3 +- src/include/nodes/makefuncs.h | 2 + src/include/nodes/nodes.h | 1 + src/include/nodes/parsenodes.h | 22 ++- src/test/regress/expected/vacuum.out | 23 ++- src/test/regress/sql/vacuum.sql | 19 ++- 14 files changed, 329 insertions(+), 159 deletions(-) diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml index 45dee101df..ba42973022 100644 --- a/doc/src/sgml/ref/analyze.sgml +++ b/doc/src/sgml/ref/analyze.sgml @@ -21,7 +21,11 @@ PostgreSQL documentation -ANALYZE [ VERBOSE ] [ table_name [ ( column_name [, ...] ) ] ] +ANALYZE [ VERBOSE ] [ table_and_columns [, ...] ] + +where table_and_columns is: + + table_name [ ( column_name [, ...] ) ] @@ -38,9 +42,11 @@ ANALYZE [ VERBOSE ] [ table_name [ - With no parameter, ANALYZE examines every table in the - current database. With a parameter, ANALYZE examines - only that table. It is further possible to give a list of column names, + Without a table_and_columns + list, ANALYZE processes every table and materialized view + in the current database that the current user has permission to analyze. + With a list, ANALYZE processes only those table(s). + It is further possible to give a list of column names for a table, in which case only the statistics for those columns are collected. diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml index 421c18d117..e712226c61 100644 --- a/doc/src/sgml/ref/vacuum.sgml +++ b/doc/src/sgml/ref/vacuum.sgml @@ -21,9 +21,20 @@ PostgreSQL documentation -VACUUM [ ( { FULL | FREEZE | VERBOSE | ANALYZE | DISABLE_PAGE_SKIPPING } [, ...] ) ] [ table_name [ (column_name [, ...] ) ] ] -VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ table_name ] -VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ table_name [ (column_name [, ...] ) ] ] +VACUUM [ ( option [, ...] ) ] [ table_and_columns [, ...] 
] +VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns [, ...] ] + +where option can be one of: + + FULL + FREEZE + VERBOSE + ANALYZE + DISABLE_PAGE_SKIPPING + +and table_and_columns is: + + table_name [ ( column_name [, ...] ) ] @@ -40,9 +51,10 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ - With no parameter, VACUUM processes every table in the - current database that the current user has permission to vacuum. - With a parameter, VACUUM processes only that table. + Without a table_and_columns + list, VACUUM processes every table and materialized view + in the current database that the current user has permission to vacuum. + With a list, VACUUM processes only those table(s). @@ -141,8 +153,8 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ except when performing an aggressive vacuum, some pages may be skipped in order to avoid waiting for other sessions to finish using them. This option disables all page-skipping behavior, and is intended to - be used only the contents of the visibility map are thought to - be suspect, which should happen only if there is a hardware or software + be used only when the contents of the visibility map are + suspect, which should happen only if there is a hardware or software issue causing database corruption. @@ -152,9 +164,8 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ table_name - The name (optionally schema-qualified) of a specific table to - vacuum. If omitted, all regular tables and materialized views in the - current database are vacuumed. If the specified table is a partitioned + The name (optionally schema-qualified) of a specific table or + materialized view to vacuum. If the specified table is a partitioned table, all of its leaf partitions are vacuumed. @@ -165,7 +176,8 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ The name of a specific column to analyze. Defaults to all columns. - If a column list is specified, ANALYZE is implied. + If a column list is specified, ANALYZE must also be + specified. diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c index d533cef6a6..f439b55ea5 100644 --- a/src/backend/commands/vacuum.c +++ b/src/backend/commands/vacuum.c @@ -37,6 +37,7 @@ #include "commands/cluster.h" #include "commands/vacuum.h" #include "miscadmin.h" +#include "nodes/makefuncs.h" #include "pgstat.h" #include "postmaster/autovacuum.h" #include "storage/bufmgr.h" @@ -67,7 +68,8 @@ static BufferAccessStrategy vac_strategy; /* non-export function prototypes */ -static List *get_rel_oids(Oid relid, const RangeVar *vacrel); +static List *expand_vacuum_rel(VacuumRelation *vrel); +static List *get_all_vacuum_rels(void); static void vac_truncate_clog(TransactionId frozenXID, MultiXactId minMulti, TransactionId lastSaneFrozenXid, @@ -90,9 +92,26 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel) Assert(vacstmt->options & (VACOPT_VACUUM | VACOPT_ANALYZE)); Assert((vacstmt->options & VACOPT_VACUUM) || !(vacstmt->options & (VACOPT_FULL | VACOPT_FREEZE))); - Assert((vacstmt->options & VACOPT_ANALYZE) || vacstmt->va_cols == NIL); Assert(!(vacstmt->options & VACOPT_SKIPTOAST)); + /* + * Make sure VACOPT_ANALYZE is specified if any column lists are present. 
+ */ + if (!(vacstmt->options & VACOPT_ANALYZE)) + { + ListCell *lc; + + foreach(lc, vacstmt->rels) + { + VacuumRelation *vrel = lfirst_node(VacuumRelation, lc); + + if (vrel->va_cols != NIL) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("ANALYZE option must be specified when a column list is provided"))); + } + } + /* * All freeze ages are zero if the FREEZE option is given; otherwise pass * them as -1 which means to use the default values. @@ -119,26 +138,22 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel) params.log_min_duration = -1; /* Now go through the common routine */ - vacuum(vacstmt->options, vacstmt->relation, InvalidOid, ¶ms, - vacstmt->va_cols, NULL, isTopLevel); + vacuum(vacstmt->options, vacstmt->rels, ¶ms, NULL, isTopLevel); } /* - * Primary entry point for VACUUM and ANALYZE commands. + * Internal entry point for VACUUM and ANALYZE commands. * * options is a bitmask of VacuumOption flags, indicating what to do. * - * relid, if not InvalidOid, indicates the relation to process; otherwise, - * if a RangeVar is supplied, that's what to process; otherwise, we process - * all relevant tables in the database. (If both relid and a RangeVar are - * supplied, the relid is what is processed, but we use the RangeVar's name - * to report any open/lock failure.) + * relations, if not NIL, is a list of VacuumRelation to process; otherwise, + * we process all relevant tables in the database. For each VacuumRelation, + * if a valid OID is supplied, the table with that OID is what to process; + * otherwise, the VacuumRelation's RangeVar indicates what to process. * * params contains a set of parameters that can be used to customize the * behavior. * - * va_cols is a list of columns to analyze, or NIL to process them all. - * * bstrategy is normally given as NULL, but in autovacuum it can be passed * in to use the same buffer strategy object across multiple vacuum() calls. * @@ -148,14 +163,14 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel) * memory context that will not disappear at transaction commit. */ void -vacuum(int options, RangeVar *relation, Oid relid, VacuumParams *params, - List *va_cols, BufferAccessStrategy bstrategy, bool isTopLevel) +vacuum(int options, List *relations, VacuumParams *params, + BufferAccessStrategy bstrategy, bool isTopLevel) { + static bool in_vacuum = false; + const char *stmttype; volatile bool in_outer_xact, use_own_xacts; - List *relations; - static bool in_vacuum = false; Assert(params != NULL); @@ -228,10 +243,29 @@ vacuum(int options, RangeVar *relation, Oid relid, VacuumParams *params, vac_strategy = bstrategy; /* - * Build list of relation OID(s) to process, putting it in vac_context for - * safekeeping. + * Build list of relation(s) to process, putting any new data in + * vac_context for safekeeping. */ - relations = get_rel_oids(relid, relation); + if (relations != NIL) + { + List *newrels = NIL; + ListCell *lc; + + foreach(lc, relations) + { + VacuumRelation *vrel = lfirst_node(VacuumRelation, lc); + List *sublist; + MemoryContext old_context; + + sublist = expand_vacuum_rel(vrel); + old_context = MemoryContextSwitchTo(vac_context); + newrels = list_concat(newrels, sublist); + MemoryContextSwitchTo(old_context); + } + relations = newrels; + } + else + relations = get_all_vacuum_rels(); /* * Decide whether we need to start/commit our own transactions. 
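/*
 * A sketch, not part of the patch: with the reworked entry point, a C
 * caller can hand vacuum() several targets at once by building a list of
 * VacuumRelation nodes, much as autovacuum does later in this commit.
 * The table names are invented, and "params" is assumed to be filled in
 * the way ExecVacuum() fills it.
 */
	List	   *rels = NIL;

	rels = lappend(rels, makeVacuumRelation(makeRangeVar(NULL, "vactst", -1),
											InvalidOid, NIL));
	rels = lappend(rels, makeVacuumRelation(makeRangeVar(NULL, "vacparted", -1),
											InvalidOid, NIL));
	vacuum(VACOPT_VACUUM, rels, &params, NULL, true);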
@@ -282,7 +316,7 @@ vacuum(int options, RangeVar *relation, Oid relid, VacuumParams *params, CommitTransactionCommand(); } - /* Turn vacuum cost accounting on or off */ + /* Turn vacuum cost accounting on or off, and set/clear in_vacuum */ PG_TRY(); { ListCell *cur; @@ -299,11 +333,11 @@ vacuum(int options, RangeVar *relation, Oid relid, VacuumParams *params, */ foreach(cur, relations) { - Oid relid = lfirst_oid(cur); + VacuumRelation *vrel = lfirst_node(VacuumRelation, cur); if (options & VACOPT_VACUUM) { - if (!vacuum_rel(relid, relation, options, params)) + if (!vacuum_rel(vrel->oid, vrel->relation, options, params)) continue; } @@ -320,8 +354,8 @@ vacuum(int options, RangeVar *relation, Oid relid, VacuumParams *params, PushActiveSnapshot(GetTransactionSnapshot()); } - analyze_rel(relid, relation, options, params, - va_cols, in_outer_xact, vac_strategy); + analyze_rel(vrel->oid, vrel->relation, options, params, + vrel->va_cols, in_outer_xact, vac_strategy); if (use_own_xacts) { @@ -375,25 +409,33 @@ vacuum(int options, RangeVar *relation, Oid relid, VacuumParams *params, } /* - * Build a list of Oids for each relation to be processed + * Given a VacuumRelation, fill in the table OID if it wasn't specified, + * and optionally add VacuumRelations for partitions of the table. + * + * If a VacuumRelation does not have an OID supplied and is a partitioned + * table, an extra entry will be added to the output for each partition. + * Presently, only autovacuum supplies OIDs when calling vacuum(), and + * it does not want us to expand partitioned tables. * - * The list is built in vac_context so that it will survive across our - * per-relation transactions. + * We take care not to modify the input data structure, but instead build + * new VacuumRelation(s) to return. (But note that they will reference + * unmodified parts of the input, eg column lists.) New data structures + * are made in vac_context. */ static List * -get_rel_oids(Oid relid, const RangeVar *vacrel) +expand_vacuum_rel(VacuumRelation *vrel) { - List *oid_list = NIL; + List *vacrels = NIL; MemoryContext oldcontext; - /* OID supplied by VACUUM's caller? */ - if (OidIsValid(relid)) + /* If caller supplied OID, there's nothing we need do here. */ + if (OidIsValid(vrel->oid)) { oldcontext = MemoryContextSwitchTo(vac_context); - oid_list = lappend_oid(oid_list, relid); + vacrels = lappend(vacrels, vrel); MemoryContextSwitchTo(oldcontext); } - else if (vacrel) + else { /* Process a specific relation, and possibly partitions thereof */ Oid relid; @@ -406,7 +448,16 @@ get_rel_oids(Oid relid, const RangeVar *vacrel) * below, as well as find_all_inheritors's expectation that the caller * holds some lock on the starting relation. */ - relid = RangeVarGetRelid(vacrel, AccessShareLock, false); + relid = RangeVarGetRelid(vrel->relation, AccessShareLock, false); + + /* + * Make a returnable VacuumRelation for this rel. + */ + oldcontext = MemoryContextSwitchTo(vac_context); + vacrels = lappend(vacrels, makeVacuumRelation(vrel->relation, + relid, + vrel->va_cols)); + MemoryContextSwitchTo(oldcontext); /* * To check whether the relation is a partitioned table, fetch its @@ -420,19 +471,36 @@ get_rel_oids(Oid relid, const RangeVar *vacrel) ReleaseSysCache(tuple); /* - * Make relation list entries for this rel and its partitions, if any. - * Note that the list returned by find_all_inheritors() includes the - * passed-in OID at its head. 
There's no point in taking locks on the - * individual partitions yet, and doing so would just add unnecessary - * deadlock risk. + * If it is, make relation list entries for its partitions. Note that + * the list returned by find_all_inheritors() includes the passed-in + * OID, so we have to skip that. There's no point in taking locks on + * the individual partitions yet, and doing so would just add + * unnecessary deadlock risk. */ - oldcontext = MemoryContextSwitchTo(vac_context); if (include_parts) - oid_list = list_concat(oid_list, - find_all_inheritors(relid, NoLock, NULL)); - else - oid_list = lappend_oid(oid_list, relid); - MemoryContextSwitchTo(oldcontext); + { + List *part_oids = find_all_inheritors(relid, NoLock, NULL); + ListCell *part_lc; + + foreach(part_lc, part_oids) + { + Oid part_oid = lfirst_oid(part_lc); + + if (part_oid == relid) + continue; /* ignore original table */ + + /* + * We omit a RangeVar since it wouldn't be appropriate to + * complain about failure to open one of these relations + * later. + */ + oldcontext = MemoryContextSwitchTo(vac_context); + vacrels = lappend(vacrels, makeVacuumRelation(NULL, + part_oid, + vrel->va_cols)); + MemoryContextSwitchTo(oldcontext); + } + } /* * Release lock again. This means that by the time we actually try to @@ -447,45 +515,57 @@ get_rel_oids(Oid relid, const RangeVar *vacrel) */ UnlockRelationOid(relid, AccessShareLock); } - else - { - /* - * Process all plain relations and materialized views listed in - * pg_class - */ - Relation pgclass; - HeapScanDesc scan; - HeapTuple tuple; - pgclass = heap_open(RelationRelationId, AccessShareLock); + return vacrels; +} - scan = heap_beginscan_catalog(pgclass, 0, NULL); +/* + * Construct a list of VacuumRelations for all vacuumable rels in + * the current database. The list is built in vac_context. + */ +static List * +get_all_vacuum_rels(void) +{ + List *vacrels = NIL; + Relation pgclass; + HeapScanDesc scan; + HeapTuple tuple; - while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL) - { - Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple); - - /* - * We include partitioned tables here; depending on which - * operation is to be performed, caller will decide whether to - * process or ignore them. - */ - if (classForm->relkind != RELKIND_RELATION && - classForm->relkind != RELKIND_MATVIEW && - classForm->relkind != RELKIND_PARTITIONED_TABLE) - continue; - - /* Make a relation list entry for this rel */ - oldcontext = MemoryContextSwitchTo(vac_context); - oid_list = lappend_oid(oid_list, HeapTupleGetOid(tuple)); - MemoryContextSwitchTo(oldcontext); - } + pgclass = heap_open(RelationRelationId, AccessShareLock); + + scan = heap_beginscan_catalog(pgclass, 0, NULL); + + while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL) + { + Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple); + MemoryContext oldcontext; - heap_endscan(scan); - heap_close(pgclass, AccessShareLock); + /* + * We include partitioned tables here; depending on which operation is + * to be performed, caller will decide whether to process or ignore + * them. + */ + if (classForm->relkind != RELKIND_RELATION && + classForm->relkind != RELKIND_MATVIEW && + classForm->relkind != RELKIND_PARTITIONED_TABLE) + continue; + + /* + * Build VacuumRelation(s) specifying the table OIDs to be processed. + * We omit a RangeVar since it wouldn't be appropriate to complain + * about failure to open one of these relations later. 
+ */ + oldcontext = MemoryContextSwitchTo(vac_context); + vacrels = lappend(vacrels, makeVacuumRelation(NULL, + HeapTupleGetOid(tuple), + NIL)); + MemoryContextSwitchTo(oldcontext); } - return oid_list; + heap_endscan(scan); + heap_close(pgclass, AccessShareLock); + + return vacrels; } /* diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index b274af26a4..c1a83ca909 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -3766,7 +3766,18 @@ _copyVacuumStmt(const VacuumStmt *from) VacuumStmt *newnode = makeNode(VacuumStmt); COPY_SCALAR_FIELD(options); + COPY_NODE_FIELD(rels); + + return newnode; +} + +static VacuumRelation * +_copyVacuumRelation(const VacuumRelation *from) +{ + VacuumRelation *newnode = makeNode(VacuumRelation); + COPY_NODE_FIELD(relation); + COPY_SCALAR_FIELD(oid); COPY_NODE_FIELD(va_cols); return newnode; @@ -5215,6 +5226,9 @@ copyObjectImpl(const void *from) case T_VacuumStmt: retval = _copyVacuumStmt(from); break; + case T_VacuumRelation: + retval = _copyVacuumRelation(from); + break; case T_ExplainStmt: retval = _copyExplainStmt(from); break; diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 5c839f4c31..7a700018e7 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -1663,7 +1663,16 @@ static bool _equalVacuumStmt(const VacuumStmt *a, const VacuumStmt *b) { COMPARE_SCALAR_FIELD(options); + COMPARE_NODE_FIELD(rels); + + return true; +} + +static bool +_equalVacuumRelation(const VacuumRelation *a, const VacuumRelation *b) +{ COMPARE_NODE_FIELD(relation); + COMPARE_SCALAR_FIELD(oid); COMPARE_NODE_FIELD(va_cols); return true; @@ -3361,6 +3370,9 @@ equal(const void *a, const void *b) case T_VacuumStmt: retval = _equalVacuumStmt(a, b); break; + case T_VacuumRelation: + retval = _equalVacuumRelation(a, b); + break; case T_ExplainStmt: retval = _equalExplainStmt(a, b); break; diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c index 0755039da9..b58eb0f815 100644 --- a/src/backend/nodes/makefuncs.c +++ b/src/backend/nodes/makefuncs.c @@ -611,3 +611,18 @@ makeGroupingSet(GroupingSetKind kind, List *content, int location) n->location = location; return n; } + +/* + * makeVacuumRelation - + * create a VacuumRelation node + */ +VacuumRelation * +makeVacuumRelation(RangeVar *relation, Oid oid, List *va_cols) +{ + VacuumRelation *v = makeNode(VacuumRelation); + + v->relation = relation; + v->oid = oid; + v->va_cols = va_cols; + return v; +} diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index c303818c9b..4c83a63f7d 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -365,6 +365,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); %type DefACLOptionList %type import_qualification_type %type import_qualification +%type vacuum_relation %type stmtblock stmtmulti OptTableElementList TableElementList OptInherit definition @@ -396,6 +397,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); transform_element_list transform_type_list TriggerTransitions TriggerReferencing publication_name_list + vacuum_relation_list opt_vacuum_relation_list %type group_by_list %type group_by_item empty_grouping_set rollup_clause cube_clause @@ -10147,7 +10149,7 @@ cluster_index_specification: * *****************************************************************************/ -VacuumStmt: VACUUM opt_full opt_freeze opt_verbose +VacuumStmt: VACUUM opt_full opt_freeze 
opt_verbose opt_vacuum_relation_list { VacuumStmt *n = makeNode(VacuumStmt); n->options = VACOPT_VACUUM; @@ -10157,22 +10159,7 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose n->options |= VACOPT_FREEZE; if ($4) n->options |= VACOPT_VERBOSE; - n->relation = NULL; - n->va_cols = NIL; - $$ = (Node *)n; - } - | VACUUM opt_full opt_freeze opt_verbose qualified_name - { - VacuumStmt *n = makeNode(VacuumStmt); - n->options = VACOPT_VACUUM; - if ($2) - n->options |= VACOPT_FULL; - if ($3) - n->options |= VACOPT_FREEZE; - if ($4) - n->options |= VACOPT_VERBOSE; - n->relation = $5; - n->va_cols = NIL; + n->rels = $5; $$ = (Node *)n; } | VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt @@ -10187,22 +10174,11 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose n->options |= VACOPT_VERBOSE; $$ = (Node *)n; } - | VACUUM '(' vacuum_option_list ')' - { - VacuumStmt *n = makeNode(VacuumStmt); - n->options = VACOPT_VACUUM | $3; - n->relation = NULL; - n->va_cols = NIL; - $$ = (Node *) n; - } - | VACUUM '(' vacuum_option_list ')' qualified_name opt_name_list + | VACUUM '(' vacuum_option_list ')' opt_vacuum_relation_list { VacuumStmt *n = makeNode(VacuumStmt); n->options = VACOPT_VACUUM | $3; - n->relation = $5; - n->va_cols = $6; - if (n->va_cols != NIL) /* implies analyze */ - n->options |= VACOPT_ANALYZE; + n->rels = $5; $$ = (Node *) n; } ; @@ -10229,25 +10205,13 @@ vacuum_option_elem: } ; -AnalyzeStmt: - analyze_keyword opt_verbose +AnalyzeStmt: analyze_keyword opt_verbose opt_vacuum_relation_list { VacuumStmt *n = makeNode(VacuumStmt); n->options = VACOPT_ANALYZE; if ($2) n->options |= VACOPT_VERBOSE; - n->relation = NULL; - n->va_cols = NIL; - $$ = (Node *)n; - } - | analyze_keyword opt_verbose qualified_name opt_name_list - { - VacuumStmt *n = makeNode(VacuumStmt); - n->options = VACOPT_ANALYZE; - if ($2) - n->options |= VACOPT_VERBOSE; - n->relation = $3; - n->va_cols = $4; + n->rels = $3; $$ = (Node *)n; } ; @@ -10275,6 +10239,25 @@ opt_name_list: | /*EMPTY*/ { $$ = NIL; } ; +vacuum_relation: + qualified_name opt_name_list + { + $$ = (Node *) makeVacuumRelation($1, InvalidOid, $2); + } + ; + +vacuum_relation_list: + vacuum_relation + { $$ = list_make1($1); } + | vacuum_relation_list ',' vacuum_relation + { $$ = lappend($1, $3); } + ; + +opt_vacuum_relation_list: + vacuum_relation_list { $$ = $1; } + | /*EMPTY*/ { $$ = NIL; } + ; + /***************************************************************************** * diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index db6d91ffdf..c04c0b548d 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -79,6 +79,7 @@ #include "lib/ilist.h" #include "libpq/pqsignal.h" #include "miscadmin.h" +#include "nodes/makefuncs.h" #include "pgstat.h" #include "postmaster/autovacuum.h" #include "postmaster/fork_process.h" @@ -3081,20 +3082,19 @@ relation_needs_vacanalyze(Oid relid, static void autovacuum_do_vac_analyze(autovac_table *tab, BufferAccessStrategy bstrategy) { - RangeVar rangevar; - - /* Set up command parameters --- use local variables instead of palloc */ - MemSet(&rangevar, 0, sizeof(rangevar)); - - rangevar.schemaname = tab->at_nspname; - rangevar.relname = tab->at_relname; - rangevar.location = -1; + RangeVar *rangevar; + VacuumRelation *rel; + List *rel_list; /* Let pgstat know what we're doing */ autovac_report_activity(tab); - vacuum(tab->at_vacoptions, &rangevar, tab->at_relid, &tab->at_params, NIL, - bstrategy, true); + /* Set up one VacuumRelation target, 
identified by OID, for vacuum() */ + rangevar = makeRangeVar(tab->at_nspname, tab->at_relname, -1); + rel = makeVacuumRelation(rangevar, tab->at_relid, NIL); + rel_list = list_make1(rel); + + vacuum(tab->at_vacoptions, rel_list, &tab->at_params, bstrategy, true); } /* diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h index a9035112e9..7a7b793ddf 100644 --- a/src/include/commands/vacuum.h +++ b/src/include/commands/vacuum.h @@ -157,8 +157,7 @@ extern int vacuum_multixact_freeze_table_age; /* in commands/vacuum.c */ extern void ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel); -extern void vacuum(int options, RangeVar *relation, Oid relid, - VacuumParams *params, List *va_cols, +extern void vacuum(int options, List *relations, VacuumParams *params, BufferAccessStrategy bstrategy, bool isTopLevel); extern void vac_open_indexes(Relation relation, LOCKMODE lockmode, int *nindexes, Relation **Irel); diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h index 46a79b1817..dd0d2ea07d 100644 --- a/src/include/nodes/makefuncs.h +++ b/src/include/nodes/makefuncs.h @@ -86,4 +86,6 @@ extern DefElem *makeDefElemExtended(char *nameSpace, char *name, Node *arg, extern GroupingSet *makeGroupingSet(GroupingSetKind kind, List *content, int location); +extern VacuumRelation *makeVacuumRelation(RangeVar *relation, Oid oid, List *va_cols); + #endif /* MAKEFUNC_H */ diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h index 27bd4f3363..ffeeb4919b 100644 --- a/src/include/nodes/nodes.h +++ b/src/include/nodes/nodes.h @@ -468,6 +468,7 @@ typedef enum NodeTag T_PartitionBoundSpec, T_PartitionRangeDatum, T_PartitionCmd, + T_VacuumRelation, /* * TAGS FOR REPLICATION GRAMMAR PARSE NODES (replnodes.h) diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index f3e4c69753..50eec730b3 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -1778,8 +1778,8 @@ typedef struct AlterTableCmd /* one subcommand of an ALTER TABLE */ AlterTableType subtype; /* Type of table alteration to apply */ char *name; /* column, constraint, or trigger to act on, * or tablespace */ - int16 num; /* attribute number for columns referenced - * by number */ + int16 num; /* attribute number for columns referenced by + * number */ RoleSpec *newowner; Node *def; /* definition of new column, index, * constraint, or parent table */ @@ -3098,12 +3098,26 @@ typedef enum VacuumOption VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7 /* don't skip any pages */ } VacuumOption; +/* + * Info about a single target table of VACUUM/ANALYZE. + * + * If the OID field is set, it always identifies the table to process. + * Then the relation field can be NULL; if it isn't, it's used only to report + * failure to open/lock the relation. 
+ */ +typedef struct VacuumRelation +{ + NodeTag type; + RangeVar *relation; /* table name to process, or NULL */ + Oid oid; /* table's OID; InvalidOid if not looked up */ + List *va_cols; /* list of column names, or NIL for all */ +} VacuumRelation; + typedef struct VacuumStmt { NodeTag type; int options; /* OR of VacuumOption flags */ - RangeVar *relation; /* single table to process, or NULL */ - List *va_cols; /* list of column names, or NIL for all */ + List *rels; /* list of VacuumRelation, or NIL for all */ } VacuumStmt; /* ---------------------- diff --git a/src/test/regress/expected/vacuum.out b/src/test/regress/expected/vacuum.out index ced53ca9aa..c440c7ea58 100644 --- a/src/test/regress/expected/vacuum.out +++ b/src/test/regress/expected/vacuum.out @@ -80,8 +80,6 @@ CONTEXT: SQL function "do_analyze" statement 1 SQL function "wrap_do_analyze" statement 1 VACUUM FULL vactst; VACUUM (DISABLE_PAGE_SKIPPING) vaccluster; -DROP TABLE vaccluster; -DROP TABLE vactst; -- partitioned table CREATE TABLE vacparted (a int, b char) PARTITION BY LIST (a); CREATE TABLE vacparted1 PARTITION OF vacparted FOR VALUES IN (1); @@ -95,4 +93,25 @@ VACUUM ANALYZE vacparted(a,b,a); ERROR: column "a" of relation "vacparted" appears more than once ANALYZE vacparted(a,b,b); ERROR: column "b" of relation "vacparted" appears more than once +-- multiple tables specified +VACUUM vaccluster, vactst; +VACUUM vacparted, does_not_exist; +ERROR: relation "does_not_exist" does not exist +VACUUM (FREEZE) vacparted, vaccluster, vactst; +VACUUM (FREEZE) does_not_exist, vaccluster; +ERROR: relation "does_not_exist" does not exist +VACUUM ANALYZE vactst, vacparted (a); +VACUUM ANALYZE vactst (does_not_exist), vacparted (b); +ERROR: column "does_not_exist" of relation "vactst" does not exist +VACUUM FULL vacparted, vactst; +VACUUM FULL vactst, vacparted (a, b), vaccluster (i); +ERROR: ANALYZE option must be specified when a column list is provided +ANALYZE vactst, vacparted; +ANALYZE vacparted (b), vactst; +ANALYZE vactst, does_not_exist, vacparted; +ERROR: relation "does_not_exist" does not exist +ANALYZE vactst (i), vacparted (does_not_exist); +ERROR: column "does_not_exist" of relation "vacparted" does not exist +DROP TABLE vaccluster; +DROP TABLE vactst; DROP TABLE vacparted; diff --git a/src/test/regress/sql/vacuum.sql b/src/test/regress/sql/vacuum.sql index 96a848ca95..92eaca2a93 100644 --- a/src/test/regress/sql/vacuum.sql +++ b/src/test/regress/sql/vacuum.sql @@ -62,9 +62,6 @@ VACUUM FULL vactst; VACUUM (DISABLE_PAGE_SKIPPING) vaccluster; -DROP TABLE vaccluster; -DROP TABLE vactst; - -- partitioned table CREATE TABLE vacparted (a int, b char) PARTITION BY LIST (a); CREATE TABLE vacparted1 PARTITION OF vacparted FOR VALUES IN (1); @@ -78,4 +75,20 @@ VACUUM (FREEZE) vacparted; VACUUM ANALYZE vacparted(a,b,a); ANALYZE vacparted(a,b,b); +-- multiple tables specified +VACUUM vaccluster, vactst; +VACUUM vacparted, does_not_exist; +VACUUM (FREEZE) vacparted, vaccluster, vactst; +VACUUM (FREEZE) does_not_exist, vaccluster; +VACUUM ANALYZE vactst, vacparted (a); +VACUUM ANALYZE vactst (does_not_exist), vacparted (b); +VACUUM FULL vacparted, vactst; +VACUUM FULL vactst, vacparted (a, b), vaccluster (i); +ANALYZE vactst, vacparted; +ANALYZE vacparted (b), vactst; +ANALYZE vactst, does_not_exist, vacparted; +ANALYZE vactst (i), vacparted (does_not_exist); + +DROP TABLE vaccluster; +DROP TABLE vactst; DROP TABLE vacparted; From 4736d74479745f0f5a0129fba4628a742034b90e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 4 Oct 
2017 00:45:15 -0400 Subject: [PATCH 0320/1087] Adjust git_changelog for new-style release tags. It wasn't on board with REL_n_n format. --- src/tools/git_changelog | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/src/tools/git_changelog b/src/tools/git_changelog index a66b64467f..2fc1565a6c 100755 --- a/src/tools/git_changelog +++ b/src/tools/git_changelog @@ -102,8 +102,9 @@ my %rel_tags; { my $commit = $1; my $tag = $2; - if ( $tag =~ /^REL\d+_\d+$/ - || $tag =~ /^REL\d+_\d+_\d+$/) + if ($tag =~ /^REL_\d+_\d+$/ + || $tag =~ /^REL\d+_\d+$/ + || $tag =~ /^REL\d+_\d+_\d+$/) { $rel_tags{$commit} = $tag; } @@ -198,6 +199,7 @@ for my $branch (@BRANCHES) $last_tag = $sprout_tags{$commit}; # normalize branch names for making sprout tags + $last_tag =~ s/^(REL_\d+).*/$1_BR/; $last_tag =~ s/^(REL\d+_\d+).*/$1_BR/; } $c->{'last_tag'} = $last_tag; From 18f791ab2b6a01a632653d394e046f3daf193ff6 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 4 Oct 2017 00:11:36 -0700 Subject: [PATCH 0321/1087] Move genbki.pl's find_defined_symbol to Catalog.pm. Will be used in Gen_fmgrtab.pl in a followup commit. --- src/backend/catalog/Catalog.pm | 35 +++++++++++++++++++++++++++++++++- src/backend/catalog/genbki.pl | 34 ++++----------------------------- 2 files changed, 38 insertions(+), 31 deletions(-) diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm index 7abfda3d3a..54f83533b6 100644 --- a/src/backend/catalog/Catalog.pm +++ b/src/backend/catalog/Catalog.pm @@ -19,7 +19,7 @@ use warnings; require Exporter; our @ISA = qw(Exporter); our @EXPORT = (); -our @EXPORT_OK = qw(Catalogs SplitDataLine RenameTempFile); +our @EXPORT_OK = qw(Catalogs SplitDataLine RenameTempFile FindDefinedSymbol); # Call this function with an array of names of header files to parse. # Returns a nested data structure describing the data in the headers. @@ -252,6 +252,39 @@ sub RenameTempFile rename($temp_name, $final_name) || die "rename: $temp_name: $!"; } + +# Find a symbol defined in a particular header file and extract the value. +# +# The include path has to be passed as a reference to an array. +sub FindDefinedSymbol +{ + my ($catalog_header, $include_path, $symbol) = @_; + + for my $path (@$include_path) + { + + # Make sure include path ends in a slash. + if (substr($path, -1) ne '/') + { + $path .= '/'; + } + my $file = $path . $catalog_header; + next if !-f $file; + open(my $find_defined_symbol, '<', $file) || die "$file: $!"; + while (<$find_defined_symbol>) + { + if (/^#define\s+\Q$symbol\E\s+(\S+)/) + { + return $1; + } + } + close $find_defined_symbol; + die "$file: no definition found for $symbol\n"; + } + die "$catalog_header: not found in any include directory\n"; +} + + # verify the number of fields in the passed-in DATA line sub check_natts { diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl index 2eebb061b7..256a9c9c93 100644 --- a/src/backend/catalog/genbki.pl +++ b/src/backend/catalog/genbki.pl @@ -87,9 +87,11 @@ # NB: make sure that the files used here are known to be part of the .bki # file's dependencies by src/backend/catalog/Makefile. 
my $BOOTSTRAP_SUPERUSERID = - find_defined_symbol('pg_authid.h', 'BOOTSTRAP_SUPERUSERID'); + Catalog::FindDefinedSymbol('pg_authid.h', \@include_path, + 'BOOTSTRAP_SUPERUSERID'); my $PG_CATALOG_NAMESPACE = - find_defined_symbol('pg_namespace.h', 'PG_CATALOG_NAMESPACE'); + Catalog::FindDefinedSymbol('pg_namespace.h', \@include_path, + 'PG_CATALOG_NAMESPACE'); # Read all the input header files into internal data structures my $catalogs = Catalog::Catalogs(@input_files); @@ -500,34 +502,6 @@ sub emit_schemapg_row return $row; } -# Find a symbol defined in a particular header file and extract the value. -sub find_defined_symbol -{ - my ($catalog_header, $symbol) = @_; - for my $path (@include_path) - { - - # Make sure include path ends in a slash. - if (substr($path, -1) ne '/') - { - $path .= '/'; - } - my $file = $path . $catalog_header; - next if !-f $file; - open(my $find_defined_symbol, '<', $file) || die "$file: $!"; - while (<$find_defined_symbol>) - { - if (/^#define\s+\Q$symbol\E\s+(\S+)/) - { - return $1; - } - } - close $find_defined_symbol; - die "$file: no definition found for $symbol\n"; - } - die "$catalog_header: not found in any include directory\n"; -} - sub usage { die < Date: Wed, 4 Oct 2017 00:22:38 -0700 Subject: [PATCH 0322/1087] Replace binary search in fmgr_isbuiltin with a lookup array. Turns out we have enough functions that the binary search is quite noticeable in profiles. Thus have Gen_fmgrtab.pl build a new mapping from a builtin function's oid to an index in the existing fmgr_builtins array. That keeps the additional memory usage at a reasonable amount. Author: Andres Freund, with input from Tom Lane Discussion: https://postgr.es/m/20170914065128.a5sk7z4xde5uy3ei@alap3.anarazel.de --- src/backend/utils/Gen_fmgrtab.pl | 79 ++++++++++++++++++++++++++------ src/backend/utils/Makefile | 2 +- src/backend/utils/fmgr/fmgr.c | 29 +++++------- src/include/utils/fmgrtab.h | 11 ++++- 4 files changed, 88 insertions(+), 33 deletions(-) diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl index 17851fe2a4..ee89d50784 100644 --- a/src/backend/utils/Gen_fmgrtab.pl +++ b/src/backend/utils/Gen_fmgrtab.pl @@ -21,6 +21,8 @@ # Collect arguments my $infile; # pg_proc.h my $output_path = ''; +my @include_path; + while (@ARGV) { my $arg = shift @ARGV; @@ -32,6 +34,10 @@ { $output_path = length($arg) > 2 ? substr($arg, 2) : shift @ARGV; } + elsif ($arg =~ /^-I/) + { + push @include_path, length($arg) > 2 ? substr($arg, 2) : shift @ARGV; + } else { usage(); @@ -44,6 +50,13 @@ $output_path .= '/'; } +# Sanity check arguments. +die "No input files.\n" if !$infile; +die "No include path; you must specify -I at least once.\n" if !@include_path; + +my $FirstBootstrapObjectId = + Catalog::FindDefinedSymbol('access/transam.h', \@include_path, 'FirstBootstrapObjectId'); + # Read all the data from the include/catalog files. 
my $catalogs = Catalog::Catalogs($infile);
@@ -176,6 +189,7 @@ #include "postgres.h" +#include "access/transam.h" #include "utils/fmgrtab.h" #include "utils/fmgrprotos.h"
@@ -191,32 +205,71 @@ print $pfh "extern Datum $s->{prosrc}(PG_FUNCTION_ARGS);\n"; } -# Create the fmgr_builtins table +# Create the fmgr_builtins table, collect data for fmgr_builtin_oid_index print $tfh "\nconst FmgrBuiltin fmgr_builtins[] = {\n"; my %bmap; $bmap{'t'} = 'true'; $bmap{'f'} = 'false'; +my @fmgr_builtin_oid_index; +my $fmgr_count = 0; foreach my $s (sort { $a->{oid} <=> $b->{oid} } @fmgr) { print $tfh -" { $s->{oid}, \"$s->{prosrc}\", $s->{nargs}, $bmap{$s->{strict}}, $bmap{$s->{retset}}, $s->{prosrc} },\n"; +" { $s->{oid}, \"$s->{prosrc}\", $s->{nargs}, $bmap{$s->{strict}}, $bmap{$s->{retset}}, $s->{prosrc} }"; + + $fmgr_builtin_oid_index[$s->{oid}] = $fmgr_count++; + + if ($fmgr_count <= $#fmgr) + { + print $tfh ",\n"; + } + else + { + print $tfh "\n"; + } +} +print $tfh "};\n"; + +print $tfh qq| +const int fmgr_nbuiltins = (sizeof(fmgr_builtins) / sizeof(FmgrBuiltin)); +|; + + +# Create the fmgr_builtin_oid_index table. +# +# Note that the array has to be filled up to FirstBootstrapObjectId, +# as we can't rely on zero initialization, since 0 is a valid mapping. +print $tfh qq| +const uint16 fmgr_builtin_oid_index[FirstBootstrapObjectId] = { +|; + +for (my $i = 0; $i < $FirstBootstrapObjectId; $i++) +{ + my $oid = $fmgr_builtin_oid_index[$i]; + + # fmgr_builtin_oid_index is sparse; map nonexistent functions to + # InvalidOidBuiltinMapping + if (not defined $oid) + { + $oid = 'InvalidOidBuiltinMapping'; + } + + if ($i + 1 == $FirstBootstrapObjectId) + { + print $tfh " $oid\n"; + } + else + { + print $tfh " $oid,\n"; + } } +print $tfh "};\n"; + # And add the file footers.
print $ofh "\n#endif /* FMGROIDS_H */\n"; print $pfh "\n#endif /* FMGRPROTOS_H */\n"; -print $tfh -qq| /* dummy entry is easier than getting rid of comma after last real one */ - /* (not that there has ever been anything wrong with *having* a - comma after the last field in an array initializer) */ - { 0, NULL, 0, false, false, NULL } -}; - -/* Note fmgr_nbuiltins excludes the dummy entry */ -const int fmgr_nbuiltins = (sizeof(fmgr_builtins) / sizeof(FmgrBuiltin)) - 1; -|; - close($ofh); close($pfh); close($tfh);
diff --git a/src/backend/utils/Makefile b/src/backend/utils/Makefile index 2e35ca58cc..efb8b53f49 100644 --- a/src/backend/utils/Makefile +++ b/src/backend/utils/Makefile
@@ -25,7 +25,7 @@ fmgrprotos.h: fmgroids.h ; fmgroids.h: fmgrtab.c ; fmgrtab.c: Gen_fmgrtab.pl $(catalogdir)/Catalog.pm $(top_srcdir)/src/include/catalog/pg_proc.h - $(PERL) -I $(catalogdir) $< $(top_srcdir)/src/include/catalog/pg_proc.h + $(PERL) -I $(catalogdir) $< -I $(top_srcdir)/src/include/ $(top_srcdir)/src/include/catalog/pg_proc.h errcodes.h: $(top_srcdir)/src/backend/utils/errcodes.txt generate-errcodes.pl $(PERL) $(srcdir)/generate-errcodes.pl $< > $@
diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c index 919733517b..975c968e3b 100644 --- a/src/backend/utils/fmgr/fmgr.c +++ b/src/backend/utils/fmgr/fmgr.c
@@ -70,26 +70,21 @@ static Datum fmgr_security_definer(PG_FUNCTION_ARGS); static const FmgrBuiltin * fmgr_isbuiltin(Oid id) { - int low = 0; - int high = fmgr_nbuiltins - 1; + uint16 index; + + /* fast lookup only possible if original oid still assigned */ + if (id >= FirstBootstrapObjectId) + return NULL; /* - * Loop invariant: low is the first index that could contain target entry, - * and high is the last index that could contain it. + * Look up function data. If there's a miss in that range it's likely a + * nonexistent function; returning NULL here will trigger an ERROR later. */ - while (low <= high) - { - int i = (high + low) / 2; - const FmgrBuiltin *ptr = &fmgr_builtins[i]; - - if (id == ptr->foid) - return ptr; - else if (id > ptr->foid) - low = i + 1; - else - high = i - 1; - } - return NULL; + index = fmgr_builtin_oid_index[id]; + if (index == InvalidOidBuiltinMapping) + return NULL; + + return &fmgr_builtins[index]; } /*
diff --git a/src/include/utils/fmgrtab.h b/src/include/utils/fmgrtab.h index 6130ef8f9c..515a9d596a 100644 --- a/src/include/utils/fmgrtab.h +++ b/src/include/utils/fmgrtab.h
@@ -13,13 +13,13 @@ #ifndef FMGRTAB_H #define FMGRTAB_H +#include "access/transam.h" #include "fmgr.h" /* * This table stores info about all the built-in functions (ie, functions - * that are compiled into the Postgres executable). The table entries are - * required to appear in Oid order, so that binary search can be used. + * that are compiled into the Postgres executable). */ typedef struct
@@ -36,4 +36,11 @@ extern const FmgrBuiltin fmgr_builtins[]; extern const int fmgr_nbuiltins; /* number of entries in table */ +/* + * Mapping from a builtin function's oid to the index in the fmgr_builtins + * array. + */ +#define InvalidOidBuiltinMapping UINT16_MAX +extern const uint16 fmgr_builtin_oid_index[FirstBootstrapObjectId]; + #endif /* FMGRTAB_H */
From 15334ad19a776f76cbb725e4e9162a7bce1bd4d0 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 4 Oct 2017 09:32:02 -0700 Subject: [PATCH 0323/1087] Attempt to adapt windows build for 212e6f34d55c. Per buildfarm animal baiji.
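The core idea of the fmgr change above can be distilled into a short, self-contained sketch; all names and table entries below are invented stand-ins (PostgreSQL itself generates the index array in Gen_fmgrtab.pl, and, as a following commit shows, MSVC builds need PG_UINT16_MAX rather than UINT16_MAX):

#include <stdint.h>
#include <stdio.h>

#define FIRST_GENERATED_OID 10000		/* stands in for FirstBootstrapObjectId */
#define INVALID_MAPPING		UINT16_MAX	/* cf. InvalidOidBuiltinMapping */

typedef struct
{
	uint32_t	foid;
	const char *name;
} Builtin;

static const Builtin builtins[] = {
	{33, "func_a"}, {44, "func_b"},		/* invented entries */
};

/* sparse OID -> dense array index, built once up front */
static uint16_t oid_index[FIRST_GENERATED_OID];

static const Builtin *
lookup(uint32_t oid)
{
	uint16_t	idx;

	if (oid >= FIRST_GENERATED_OID)
		return NULL;			/* cannot be a generated entry */
	idx = oid_index[oid];
	return (idx == INVALID_MAPPING) ? NULL : &builtins[idx];	/* O(1) */
}

int
main(void)
{
	/* every slot must be pre-filled, because 0 is a valid index */
	for (uint32_t i = 0; i < FIRST_GENERATED_OID; i++)
		oid_index[i] = INVALID_MAPPING;
	for (uint16_t j = 0; j < sizeof(builtins) / sizeof(builtins[0]); j++)
		oid_index[builtins[j].foid] = j;

	printf("oid 44 -> %s\n", lookup(44) ? lookup(44)->name : "miss");
	return 0;
}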
--- src/tools/msvc/Solution.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm index 0925bef139..947d904244 100644 --- a/src/tools/msvc/Solution.pm +++ b/src/tools/msvc/Solution.pm @@ -269,7 +269,7 @@ s{PG_VERSION_STR "[^"]+"}{PG_VERSION_STR "PostgreSQL $self->{strver}$extraver, c print "Generating fmgrtab.c, fmgroids.h, fmgrprotos.h...\n"; chdir('src/backend/utils'); system( -"perl -I ../catalog Gen_fmgrtab.pl ../../../src/include/catalog/pg_proc.h"); +"perl -I ../catalog Gen_fmgrtab.pl -I../../../src/include/ ../../../src/include/catalog/pg_proc.h"); chdir('../../..'); } if (IsNewer( From 9eafa2b5b043b84fb9846bd7a57d15ed1ee220c1 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 4 Oct 2017 10:01:02 -0700 Subject: [PATCH 0324/1087] Msvc doesn't know UINT16_MAX, replace with PG_UINT16_MAX. UINT16_MAX usage is originating from commit 212e6f34d55c. Per buildfarm animal currawong. --- src/include/utils/fmgrtab.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/include/utils/fmgrtab.h b/src/include/utils/fmgrtab.h index 515a9d596a..4d06b02578 100644 --- a/src/include/utils/fmgrtab.h +++ b/src/include/utils/fmgrtab.h @@ -40,7 +40,7 @@ extern const int fmgr_nbuiltins; /* number of entries in table */ * Mapping from a builtin function's oid to the index in the fmgr_builtins * array. */ -#define InvalidOidBuiltinMapping UINT16_MAX +#define InvalidOidBuiltinMapping PG_UINT16_MAX extern const uint16 fmgr_builtin_oid_index[FirstBootstrapObjectId]; #endif /* FMGRTAB_H */ From 582bbcf37fb45ea2e6a851bf9a3c7d7364c7ad32 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 30 Aug 2017 22:16:50 -0400 Subject: [PATCH 0325/1087] Move SPI error reporting out of ri_ReportViolation() These are two completely unrelated code paths, so it doesn't make sense to pack them into one function. Add attribute noreturn to ri_ReportViolation(). Reviewed-by: Michael Paquier --- src/backend/utils/adt/ri_triggers.c | 29 +++++++++++------------------ 1 file changed, 11 insertions(+), 18 deletions(-) diff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c index c2891e6fa1..b63a7775b7 100644 --- a/src/backend/utils/adt/ri_triggers.c +++ b/src/backend/utils/adt/ri_triggers.c @@ -242,7 +242,7 @@ static void ri_ExtractValues(Relation rel, HeapTuple tup, static void ri_ReportViolation(const RI_ConstraintInfo *riinfo, Relation pk_rel, Relation fk_rel, HeapTuple violator, TupleDesc tupdesc, - int queryno, bool spi_err); + int queryno) pg_attribute_noreturn(); /* ---------- @@ -2499,7 +2499,7 @@ RI_Initial_Check(Trigger *trigger, Relation fk_rel, Relation pk_rel) ri_ReportViolation(&fake_riinfo, pk_rel, fk_rel, tuple, tupdesc, - RI_PLAN_CHECK_LOOKUPPK, false); + RI_PLAN_CHECK_LOOKUPPK); } if (SPI_finish() != SPI_OK_FINISH) @@ -3147,11 +3147,13 @@ ri_PerformCheck(const RI_ConstraintInfo *riinfo, elog(ERROR, "SPI_execute_snapshot returned %d", spi_result); if (expect_OK >= 0 && spi_result != expect_OK) - ri_ReportViolation(riinfo, - pk_rel, fk_rel, - new_tuple ? 
new_tuple : old_tuple, - NULL, - qkey->constr_queryno, true); + ereport(ERROR, + (errcode(ERRCODE_INTERNAL_ERROR), + errmsg("referential integrity query on \"%s\" from constraint \"%s\" on \"%s\" gave unexpected result", + RelationGetRelationName(pk_rel), + NameStr(riinfo->conname), + RelationGetRelationName(fk_rel)), + errhint("This is most likely due to a rule having rewritten the query."))); /* XXX wouldn't it be clearer to do this part at the caller? */ if (qkey->constr_queryno != RI_PLAN_CHECK_LOOKUPPK_FROM_PK && @@ -3161,7 +3163,7 @@ ri_PerformCheck(const RI_ConstraintInfo *riinfo, pk_rel, fk_rel, new_tuple ? new_tuple : old_tuple, NULL, - qkey->constr_queryno, false); + qkey->constr_queryno); return SPI_processed != 0; } @@ -3205,7 +3207,7 @@ static void ri_ReportViolation(const RI_ConstraintInfo *riinfo, Relation pk_rel, Relation fk_rel, HeapTuple violator, TupleDesc tupdesc, - int queryno, bool spi_err) + int queryno) { StringInfoData key_names; StringInfoData key_values; @@ -3216,15 +3218,6 @@ ri_ReportViolation(const RI_ConstraintInfo *riinfo, AclResult aclresult; bool has_perm = true; - if (spi_err) - ereport(ERROR, - (errcode(ERRCODE_INTERNAL_ERROR), - errmsg("referential integrity query on \"%s\" from constraint \"%s\" on \"%s\" gave unexpected result", - RelationGetRelationName(pk_rel), - NameStr(riinfo->conname), - RelationGetRelationName(fk_rel)), - errhint("This is most likely due to a rule having rewritten the query."))); - /* * Determine which relation to complain about. If tupdesc wasn't passed * by caller, assume the violator tuple came from there. From 036166f26e00ab3286ef29a6519525d6291fdfd7 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 30 Aug 2017 22:16:50 -0400 Subject: [PATCH 0326/1087] Document and use SPI_result_code_string() A lot of semi-internal code just prints out numeric SPI error codes, which is not very helpful. We already have an API function to convert the codes to a string, so let's make more use of that. Reviewed-by: Michael Paquier --- contrib/spi/refint.c | 6 ++-- contrib/spi/timetravel.c | 2 +- doc/src/sgml/spi.sgml | 53 +++++++++++++++++++++++++++++ src/backend/utils/adt/ri_triggers.c | 10 +++--- src/test/regress/regress.c | 2 +- 5 files changed, 63 insertions(+), 10 deletions(-) diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c index 70def95ac5..2fc894e72a 100644 --- a/contrib/spi/refint.c +++ b/contrib/spi/refint.c @@ -182,7 +182,7 @@ check_primary_key(PG_FUNCTION_ARGS) pplan = SPI_prepare(sql, nkeys, argtypes); if (pplan == NULL) /* internal error */ - elog(ERROR, "check_primary_key: SPI_prepare returned %d", SPI_result); + elog(ERROR, "check_primary_key: SPI_prepare returned %s", SPI_result_code_string(SPI_result)); /* * Remember that SPI_prepare places plan in current memory context - @@ -395,7 +395,7 @@ check_foreign_key(PG_FUNCTION_ARGS) /* this shouldn't happen! SPI_ERROR_NOOUTFUNC ? 
*/ if (oldval == NULL) /* internal error */ - elog(ERROR, "check_foreign_key: SPI_getvalue returned %d", SPI_result); + elog(ERROR, "check_foreign_key: SPI_getvalue returned %s", SPI_result_code_string(SPI_result)); newval = SPI_getvalue(newtuple, tupdesc, fnumber); if (newval == NULL || strcmp(oldval, newval) != 0) isequal = false; @@ -529,7 +529,7 @@ check_foreign_key(PG_FUNCTION_ARGS) pplan = SPI_prepare(sql, nkeys, argtypes); if (pplan == NULL) /* internal error */ - elog(ERROR, "check_foreign_key: SPI_prepare returned %d", SPI_result); + elog(ERROR, "check_foreign_key: SPI_prepare returned %s", SPI_result_code_string(SPI_result)); /* * Remember that SPI_prepare places plan in current memory context diff --git a/contrib/spi/timetravel.c b/contrib/spi/timetravel.c index 816cc549ae..00f661e6b6 100644 --- a/contrib/spi/timetravel.c +++ b/contrib/spi/timetravel.c @@ -341,7 +341,7 @@ timetravel(PG_FUNCTION_ARGS) /* Prepare plan for query */ pplan = SPI_prepare(sql, natts, ctypes); if (pplan == NULL) - elog(ERROR, "timetravel (%s): SPI_prepare returned %d", relname, SPI_result); + elog(ERROR, "timetravel (%s): SPI_prepare returned %s", relname, SPI_result_code_string(SPI_result)); /* * Remember that SPI_prepare places plan in current memory context - diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml index 31535a307d..3594f9dce1 100644 --- a/doc/src/sgml/spi.sgml +++ b/doc/src/sgml/spi.sgml @@ -3546,6 +3546,59 @@ char * SPI_getnspname(Relation rel) + + SPI_result_code_string + + + SPI_result_code_string + 3 + + + + SPI_result_code_string + return error code as string + + + + +const char * SPI_result_code_string(int code); + + + + + Description + + + SPI_result_code_string returns a string representation + of the result code returned by various SPI functions or stored + in SPI_result. + + + + + Arguments + + + + int code + + + result code + + + + + + + + Return Value + + + A string representation of the result code. + + + + diff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c index b63a7775b7..ba0e3ad87d 100644 --- a/src/backend/utils/adt/ri_triggers.c +++ b/src/backend/utils/adt/ri_triggers.c @@ -2435,8 +2435,8 @@ RI_Initial_Check(Trigger *trigger, Relation fk_rel, Relation pk_rel) qplan = SPI_prepare(querybuf.data, 0, NULL); if (qplan == NULL) - elog(ERROR, "SPI_prepare returned %d for %s", - SPI_result, querybuf.data); + elog(ERROR, "SPI_prepare returned %s for %s", + SPI_result_code_string(SPI_result), querybuf.data); /* * Run the plan. For safety we force a current snapshot to be used. (In @@ -2453,7 +2453,7 @@ RI_Initial_Check(Trigger *trigger, Relation fk_rel, Relation pk_rel) /* Check result */ if (spi_result != SPI_OK_SELECT) - elog(ERROR, "SPI_execute_snapshot returned %d", spi_result); + elog(ERROR, "SPI_execute_snapshot returned %s", SPI_result_code_string(spi_result)); /* Did we find a tuple violating the constraint? 
*/ if (SPI_processed > 0) @@ -3016,7 +3016,7 @@ ri_PlanCheck(const char *querystr, int nargs, Oid *argtypes, qplan = SPI_prepare(querystr, nargs, argtypes); if (qplan == NULL) - elog(ERROR, "SPI_prepare returned %d for %s", SPI_result, querystr); + elog(ERROR, "SPI_prepare returned %s for %s", SPI_result_code_string(SPI_result), querystr); /* Restore UID and security context */ SetUserIdAndSecContext(save_userid, save_sec_context); @@ -3144,7 +3144,7 @@ ri_PerformCheck(const RI_ConstraintInfo *riinfo, /* Check result */ if (spi_result < 0) - elog(ERROR, "SPI_execute_snapshot returned %d", spi_result); + elog(ERROR, "SPI_execute_snapshot returned %s", SPI_result_code_string(spi_result)); if (expect_OK >= 0 && spi_result != expect_OK) ereport(ERROR, diff --git a/src/test/regress/regress.c b/src/test/regress/regress.c index 0a123f2b39..0e9e46e667 100644 --- a/src/test/regress/regress.c +++ b/src/test/regress/regress.c @@ -612,7 +612,7 @@ ttdummy(PG_FUNCTION_ARGS) /* Prepare plan for query */ pplan = SPI_prepare(query, natts, ctypes); if (pplan == NULL) - elog(ERROR, "ttdummy (%s): SPI_prepare returned %d", relname, SPI_result); + elog(ERROR, "ttdummy (%s): SPI_prepare returned %s", relname, SPI_result_code_string(SPI_result)); if (SPI_keepplan(pplan)) elog(ERROR, "ttdummy (%s): SPI_keepplan failed", relname); From c097b271e8a14eac5e6189139deca66796b16a59 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 5 Oct 2017 07:58:02 -0400 Subject: [PATCH 0327/1087] Fix more user-visible elog() calls. Michael Paquier discovered that this could be triggered via SQL; give a nicer message instead. Patch by Michael Paquier, reviewed by Masahiko Sawada. Discussion: http://postgr.es/m/CAB7nPqQtPg+LKKtzdKN26judHcvPZ0s1gNigzOT4j8CYuuuBYg@mail.gmail.com --- contrib/test_decoding/expected/replorigin.out | 9 ++++++++- contrib/test_decoding/sql/replorigin.sql | 5 +++++ src/backend/replication/logical/origin.c | 12 ++++++++---- 3 files changed, 21 insertions(+), 5 deletions(-) diff --git a/contrib/test_decoding/expected/replorigin.out b/contrib/test_decoding/expected/replorigin.out index 76d4ea986d..8ea4ddda97 100644 --- a/contrib/test_decoding/expected/replorigin.out +++ b/contrib/test_decoding/expected/replorigin.out @@ -26,7 +26,14 @@ SELECT pg_replication_origin_drop('test_decoding: temp'); (1 row) SELECT pg_replication_origin_drop('test_decoding: temp'); -ERROR: cache lookup failed for replication origin 'test_decoding: temp' +ERROR: replication origin "test_decoding: temp" does not exist +-- various failure checks for undefined slots +select pg_replication_origin_advance('test_decoding: temp', '0/1'); +ERROR: replication origin "test_decoding: temp" does not exist +select pg_replication_origin_session_setup('test_decoding: temp'); +ERROR: replication origin "test_decoding: temp" does not exist +select pg_replication_origin_progress('test_decoding: temp', true); +ERROR: replication origin "test_decoding: temp" does not exist SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding'); ?column? 
---------- diff --git a/contrib/test_decoding/sql/replorigin.sql b/contrib/test_decoding/sql/replorigin.sql index 7870f0ea32..451cd4bc3b 100644 --- a/contrib/test_decoding/sql/replorigin.sql +++ b/contrib/test_decoding/sql/replorigin.sql @@ -13,6 +13,11 @@ SELECT pg_replication_origin_create('test_decoding: temp'); SELECT pg_replication_origin_drop('test_decoding: temp'); SELECT pg_replication_origin_drop('test_decoding: temp'); +-- various failure checks for undefined slots +select pg_replication_origin_advance('test_decoding: temp', '0/1'); +select pg_replication_origin_session_setup('test_decoding: temp'); +select pg_replication_origin_progress('test_decoding: temp', true); + SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding'); -- origin tx diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c index 20d32679e0..55382b4b24 100644 --- a/src/backend/replication/logical/origin.c +++ b/src/backend/replication/logical/origin.c @@ -225,8 +225,10 @@ replorigin_by_name(char *roname, bool missing_ok) ReleaseSysCache(tuple); } else if (!missing_ok) - elog(ERROR, "cache lookup failed for replication origin '%s'", - roname); + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_OBJECT), + errmsg("replication origin \"%s\" does not exist", + roname))); return roident; } @@ -437,8 +439,10 @@ replorigin_by_oid(RepOriginId roident, bool missing_ok, char **roname) *roname = NULL; if (!missing_ok) - elog(ERROR, "cache lookup failed for replication origin with oid %u", - roident); + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_OBJECT), + errmsg("replication origin with OID %u does not exist", + roident))); return false; } From 4b2ba1fe0222b7820a2f4cd52b133baeb91c5a93 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 5 Oct 2017 08:45:24 -0400 Subject: [PATCH 0328/1087] Fix typo. Etsuro Fujita Discussion: http://postgr.es/m/1b2e9ac7-b99a-2769-5e42-afdf62bfa7fa@lab.ntt.co.jp --- src/backend/catalog/partition.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 1ab6dba7ae..9777d40e66 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -1236,7 +1236,7 @@ RelationGetPartitionDispatchInfo(Relation rel, * get_partition_dispatch_recurse * Recursively expand partition tree rooted at rel * - * As the partition tree is expanded in a depth-first manner, we mantain two + * As the partition tree is expanded in a depth-first manner, we maintain two * global lists: of PartitionDispatch objects corresponding to partitioned * tables in *pds and of the leaf partition OIDs in *leaf_part_oids. * From 4d85c2900b113e331925baf308cc7fc75ac4530b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 5 Oct 2017 10:47:47 -0400 Subject: [PATCH 0329/1087] Improve comments in vacuum_rel() and analyze_rel(). Remove obsolete references to get_rel_oids(). Avoid listing specific relkinds in the comments, since we seem unable to keep such things in sync with the code, and it's not all that helpful anyhow. Noted by Michael Paquier, though I rewrote the comments a bit more. 
Discussion: https://postgr.es/m/CAB7nPqTWiN9zwKTaOrsnKiGDChqRt7C1+CiiDk4N4OMn92rs6A@mail.gmail.com --- src/backend/commands/analyze.c | 4 +--- src/backend/commands/vacuum.c | 16 ++++++++-------- 2 files changed, 9 insertions(+), 11 deletions(-) diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c index d432f8208d..760d19142e 100644 --- a/src/backend/commands/analyze.c +++ b/src/backend/commands/analyze.c @@ -207,9 +207,7 @@ analyze_rel(Oid relid, RangeVar *relation, int options, } /* - * Check that it's a plain table, materialized view, or foreign table; we - * used to do this in get_rel_oids() but seems safer to check after we've - * locked the relation. + * Check that it's of an analyzable relkind, and set up appropriately. */ if (onerel->rd_rel->relkind == RELKIND_RELATION || onerel->rd_rel->relkind == RELKIND_MATVIEW) diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c index f439b55ea5..cbd6e9b161 100644 --- a/src/backend/commands/vacuum.c +++ b/src/backend/commands/vacuum.c @@ -1309,6 +1309,9 @@ vac_truncate_clog(TransactionId frozenXID, * do not use it once we've successfully opened the rel, since it might * be stale. * + * Returns true if it's okay to proceed with a requested ANALYZE + * operation on this table. + * * Doing one heap at a time incurs extra overhead, since we need to * check that the heap exists again just before we vacuum it. The * reason that we do this is so that vacuuming can be spread across @@ -1444,9 +1447,7 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) } /* - * Check that it's a vacuumable relation; we used to do this in - * get_rel_oids() but seems safer to check after we've locked the - * relation. + * Check that it's of a vacuumable relkind. */ if (onerel->rd_rel->relkind != RELKIND_RELATION && onerel->rd_rel->relkind != RELKIND_MATVIEW && @@ -1478,17 +1479,16 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) } /* - * Ignore partitioned tables as there is no work to be done. Since we - * release the lock here, it's possible that any partitions added from - * this point on will not get processed, but that seems harmless. + * Silently ignore partitioned tables as there is no work to be done. The + * useful work is on their child partitions, which have been queued up for + * us separately. */ if (onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) { relation_close(onerel, lmode); PopActiveSnapshot(); CommitTransactionCommand(); - - /* It's OK for other commands to look at this table */ + /* It's OK to proceed with ANALYZE on this table */ return true; } From e9baa5e9fa147e00a2466ab2c40eb99c8a700824 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 5 Oct 2017 11:34:38 -0400 Subject: [PATCH 0330/1087] Allow DML commands that create tables to use parallel query. Haribabu Kommi, reviewed by Dilip Kumar and Rafia Sabih. Various cosmetic changes by me to explain why this appears to be safe but allowing inserts in parallel mode in general wouldn't be. Also, I removed the REFRESH MATERIALIZED VIEW case from Haribabu's patch, since I'm not convinced that case is OK, and hacked on the documentation somewhat. 
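The rule this commit relaxes can be stated as a tiny decision function. The
sketch below is illustrative only; IsInParallelMode() and IsParallelWorker()
are the real backend predicates, and everything else here is simplified.

    #include <stdbool.h>
    #include <stdio.h>

    /* old rule: no inserts whenever parallel mode is active at all */
    static bool
    insert_allowed_before(bool in_parallel_mode, bool is_parallel_worker)
    {
        (void) is_parallel_worker;
        return !in_parallel_mode;
    }

    /* new rule: only workers are barred; the leader of a parallel
     * CREATE TABLE AS may fill the new table while workers run the
     * underlying SELECT, since no worker can see that table */
    static bool
    insert_allowed_after(bool in_parallel_mode, bool is_parallel_worker)
    {
        (void) in_parallel_mode;
        return !is_parallel_worker;
    }

    int
    main(void)
    {
        /* leader during parallel CTAS: was rejected, now allowed */
        printf("leader: before=%d after=%d\n",
               insert_allowed_before(true, false),
               insert_allowed_after(true, false));
        /* a worker in the same query must stay read-only under both rules */
        printf("worker: before=%d after=%d\n",
               insert_allowed_before(true, true),
               insert_allowed_after(true, true));
        return 0;
    }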
Discussion: http://postgr.es/m/CAJrrPGdo5bak6qnPWe8Kpi8g_jfQEs-G4SYmG9y+OFaw2-dPvA@mail.gmail.com --- doc/src/sgml/parallel.sgml | 16 +--- src/backend/access/heap/heapam.c | 16 ++-- src/backend/commands/createas.c | 4 +- src/backend/commands/explain.c | 4 +- src/backend/executor/execMain.c | 6 +- src/backend/optimizer/plan/planner.c | 10 +++ src/test/regress/expected/write_parallel.out | 79 ++++++++++++++++++++ src/test/regress/parallel_schedule | 1 + src/test/regress/serial_schedule | 1 + src/test/regress/sql/write_parallel.sql | 42 +++++++++++ 10 files changed, 151 insertions(+), 28 deletions(-) create mode 100644 src/test/regress/expected/write_parallel.out create mode 100644 src/test/regress/sql/write_parallel.sql diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml index 2a25f21eb4..1f5efd9e6d 100644 --- a/doc/src/sgml/parallel.sgml +++ b/doc/src/sgml/parallel.sgml @@ -151,9 +151,10 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; The query writes any data or locks any database rows. If a query contains a data-modifying operation either at the top level or within - a CTE, no parallel plans for that query will be generated. This is a - limitation of the current implementation which could be lifted in a - future release. + a CTE, no parallel plans for that query will be generated. As an + exception, the commands CREATE TABLE, SELECT + INTO, and CREATE MATERIALIZED VIEW which create a new + table and populate it can use a parallel plan. @@ -241,15 +242,6 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; - - - A prepared statement is executed using a CREATE TABLE .. AS - EXECUTE .. statement. This construct converts what otherwise - would have been a read-only operation into a read-write operation, - making it ineligible for parallel query. - - - The transaction isolation level is serializable. This situation diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index c435482cd2..0c0f640f64 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -2598,15 +2598,17 @@ heap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid, CommandId cid, int options) { /* - * For now, parallel operations are required to be strictly read-only. - * Unlike heap_update() and heap_delete(), an insert should never create a - * combo CID, so it might be possible to relax this restriction, but not - * without more thought and testing. - */ - if (IsInParallelMode()) + * Parallel operations are required to be strictly read-only in a parallel + * worker. Parallel inserts are not safe even in the leader in the + * general case, because group locking means that heavyweight locks for + * relation extension or GIN page locks will not conflict between members + * of a lock group, but we don't prohibit that case here because there are + * useful special cases that we can safely allow, such as CREATE TABLE AS. 
+ */ + if (IsParallelWorker()) ereport(ERROR, (errcode(ERRCODE_INVALID_TRANSACTION_STATE), - errmsg("cannot insert tuples during a parallel operation"))); + errmsg("cannot insert tuples in a parallel worker"))); if (relation->rd_rel->relhasoids) { diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c index e60210cb24..4d77411a68 100644 --- a/src/backend/commands/createas.c +++ b/src/backend/commands/createas.c @@ -326,8 +326,8 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString, query = linitial_node(Query, rewritten); Assert(query->commandType == CMD_SELECT); - /* plan the query --- note we disallow parallelism */ - plan = pg_plan_query(query, 0, params); + /* plan the query */ + plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, params); /* * Use a snapshot with an updated command ID to ensure this query sees diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index c1602c59cc..8f7062cd6e 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -396,8 +396,6 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es, * We have to rewrite the contained SELECT and then pass it back to * ExplainOneQuery. It's probably not really necessary to copy the * contained parsetree another time, but let's be safe. - * - * Like ExecCreateTableAs, disallow parallelism in the plan. */ CreateTableAsStmt *ctas = (CreateTableAsStmt *) utilityStmt; List *rewritten; @@ -405,7 +403,7 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es, rewritten = QueryRewrite(castNode(Query, copyObject(ctas->query))); Assert(list_length(rewritten) == 1); ExplainOneQuery(linitial_node(Query, rewritten), - 0, ctas->into, es, + CURSOR_OPT_PARALLEL_OK, ctas->into, es, queryString, params, queryEnv); } else if (IsA(utilityStmt, DeclareCursorStmt)) diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 62fb05efac..384ad70f2d 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -1697,11 +1697,9 @@ ExecutePlan(EState *estate, /* * If the plan might potentially be executed multiple times, we must force - * it to run without parallelism, because we might exit early. Also - * disable parallelism when writing into a relation, because no database - * changes are allowed in parallel mode. + * it to run without parallelism, because we might exit early. */ - if (!execute_once || dest->mydest == DestIntoRel) + if (!execute_once) use_parallel_mode = false; if (use_parallel_mode) diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 7f146d670c..e7ac11e9bb 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -257,6 +257,16 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) * to values that don't permit parallelism, or if parallel-unsafe * functions are present in the query tree. * + * (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE + * MATERIALIZED VIEW to use parallel plans, but this is safe only because + * the command is writing into a completely new table which workers won't + * be able to see. If the workers could see the table, the fact that + * group locking would cause them to ignore the leader's heavyweight + * relation extension lock and GIN page locks would make this unsafe. 
+ * We'll have to fix that somehow if we want to allow parallel inserts in + * general; updates and deletes have additional problems especially around + * combo CIDs.) + * * For now, we don't try to use parallel mode if we're running inside a * parallel worker. We might eventually be able to relax this * restriction, but for now it seems best not to have parallel workers diff --git a/src/test/regress/expected/write_parallel.out b/src/test/regress/expected/write_parallel.out new file mode 100644 index 0000000000..0c4da2591a --- /dev/null +++ b/src/test/regress/expected/write_parallel.out @@ -0,0 +1,79 @@ +-- +-- PARALLEL +-- +-- Serializable isolation would disable parallel query, so explicitly use an +-- arbitrary other level. +begin isolation level repeatable read; +-- encourage use of parallel plans +set parallel_setup_cost=0; +set parallel_tuple_cost=0; +set min_parallel_table_scan_size=0; +set max_parallel_workers_per_gather=4; +-- +-- Test write operations that has an underlying query that is eligble +-- for parallel plans +-- +explain (costs off) create table parallel_write as + select length(stringu1) from tenk1 group by length(stringu1); + QUERY PLAN +--------------------------------------------------- + Finalize HashAggregate + Group Key: (length((stringu1)::text)) + -> Gather + Workers Planned: 4 + -> Partial HashAggregate + Group Key: length((stringu1)::text) + -> Parallel Seq Scan on tenk1 +(7 rows) + +create table parallel_write as + select length(stringu1) from tenk1 group by length(stringu1); +drop table parallel_write; +explain (costs off) select length(stringu1) into parallel_write + from tenk1 group by length(stringu1); + QUERY PLAN +--------------------------------------------------- + Finalize HashAggregate + Group Key: (length((stringu1)::text)) + -> Gather + Workers Planned: 4 + -> Partial HashAggregate + Group Key: length((stringu1)::text) + -> Parallel Seq Scan on tenk1 +(7 rows) + +select length(stringu1) into parallel_write + from tenk1 group by length(stringu1); +drop table parallel_write; +explain (costs off) create materialized view parallel_mat_view as + select length(stringu1) from tenk1 group by length(stringu1); + QUERY PLAN +--------------------------------------------------- + Finalize HashAggregate + Group Key: (length((stringu1)::text)) + -> Gather + Workers Planned: 4 + -> Partial HashAggregate + Group Key: length((stringu1)::text) + -> Parallel Seq Scan on tenk1 +(7 rows) + +create materialized view parallel_mat_view as + select length(stringu1) from tenk1 group by length(stringu1); +drop materialized view parallel_mat_view; +prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1); +explain (costs off) create table parallel_write as execute prep_stmt; + QUERY PLAN +--------------------------------------------------- + Finalize HashAggregate + Group Key: (length((stringu1)::text)) + -> Gather + Workers Planned: 4 + -> Partial HashAggregate + Group Key: length((stringu1)::text) + -> Parallel Seq Scan on tenk1 +(7 rows) + +create table parallel_write as execute prep_stmt; +drop table parallel_write; +rollback; diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index 2fd3f2b1b1..860e8ab795 100644 --- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -96,6 +96,7 @@ test: rules psql_crosstab amutils # run by itself so it can run parallel workers test: select_parallel +test: write_parallel # no relation related tests can be put in this group test: publication 
subscription diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index 76b0de30a7..ef275d0d9a 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -134,6 +134,7 @@ test: stats_ext test: rules test: psql_crosstab test: select_parallel +test: write_parallel test: publication test: subscription test: amutils diff --git a/src/test/regress/sql/write_parallel.sql b/src/test/regress/sql/write_parallel.sql new file mode 100644 index 0000000000..78b479cedf --- /dev/null +++ b/src/test/regress/sql/write_parallel.sql @@ -0,0 +1,42 @@ +-- +-- PARALLEL +-- + +-- Serializable isolation would disable parallel query, so explicitly use an +-- arbitrary other level. +begin isolation level repeatable read; + +-- encourage use of parallel plans +set parallel_setup_cost=0; +set parallel_tuple_cost=0; +set min_parallel_table_scan_size=0; +set max_parallel_workers_per_gather=4; + +-- +-- Test write operations that has an underlying query that is eligble +-- for parallel plans +-- +explain (costs off) create table parallel_write as + select length(stringu1) from tenk1 group by length(stringu1); +create table parallel_write as + select length(stringu1) from tenk1 group by length(stringu1); +drop table parallel_write; + +explain (costs off) select length(stringu1) into parallel_write + from tenk1 group by length(stringu1); +select length(stringu1) into parallel_write + from tenk1 group by length(stringu1); +drop table parallel_write; + +explain (costs off) create materialized view parallel_mat_view as + select length(stringu1) from tenk1 group by length(stringu1); +create materialized view parallel_mat_view as + select length(stringu1) from tenk1 group by length(stringu1); +drop materialized view parallel_mat_view; + +prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1); +explain (costs off) create table parallel_write as execute prep_stmt; +create table parallel_write as execute prep_stmt; +drop table parallel_write; + +rollback; From c31e9d4bafd80da52408af5f87fe874c9ca0c952 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 5 Oct 2017 12:19:40 -0400 Subject: [PATCH 0331/1087] Improve error message when skipping scan of default partition. It seems like a good idea to clearly distinguish between skipping the scan of the new partition itself and skipping the scan of the default partition. 
Amit Langote Discussion: http://postgr.es/m/1f08b844-0078-aa8d-452e-7af3bf77d05f@lab.ntt.co.jp --- src/backend/commands/tablecmds.c | 11 ++++++++--- src/test/regress/expected/alter_table.out | 4 ++-- 2 files changed, 10 insertions(+), 5 deletions(-) diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 563bcda30c..d90c739952 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -13635,9 +13635,14 @@ ValidatePartitionConstraints(List **wqueue, Relation scanrel, */ if (PartConstraintImpliedByRelConstraint(scanrel, partConstraint)) { - ereport(INFO, - (errmsg("partition constraint for table \"%s\" is implied by existing constraints", - RelationGetRelationName(scanrel)))); + if (!validate_default) + ereport(INFO, + (errmsg("partition constraint for table \"%s\" is implied by existing constraints", + RelationGetRelationName(scanrel)))); + else + ereport(INFO, + (errmsg("updated partition constraint for default partition \"%s\" is implied by existing constraints", + RelationGetRelationName(scanrel)))); return; } diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 0478a8ac60..807eb913f6 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3436,7 +3436,7 @@ ALTER TABLE part_7 ATTACH PARTITION part_7_a_null FOR VALUES IN ('a', null); INFO: partition constraint for table "part_7_a_null" is implied by existing constraints ALTER TABLE list_parted2 ATTACH PARTITION part_7 FOR VALUES IN (7); INFO: partition constraint for table "part_7" is implied by existing constraints -INFO: partition constraint for table "list_parted2_def" is implied by existing constraints +INFO: updated partition constraint for default partition "list_parted2_def" is implied by existing constraints -- Same example, but check this time that the constraint correctly detects -- violating rows ALTER TABLE list_parted2 DETACH PARTITION part_7; @@ -3450,7 +3450,7 @@ SELECT tableoid::regclass, a, b FROM part_7 order by a; (2 rows) ALTER TABLE list_parted2 ATTACH PARTITION part_7 FOR VALUES IN (7); -INFO: partition constraint for table "list_parted2_def" is implied by existing constraints +INFO: updated partition constraint for default partition "list_parted2_def" is implied by existing constraints ERROR: partition constraint is violated by some row -- check that leaf partitions of default partition are scanned when -- attaching a partitioned table. From 14f67a8ee282ebc0de78e773fbd597f460ab4a54 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 5 Oct 2017 13:06:46 -0400 Subject: [PATCH 0332/1087] On attach, consider skipping validation of subpartitions individually. If the table attached as a partition is itself partitioned, individual partitions might have constraints strong enough to skip scanning the table even if the table actually attached does not. This is pretty cheap to check, and possibly a big win if it works out. Amit Langote, with test case changes by me. 
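A toy model of the per-partition decision, assuming simple range-style CHECK
constraints: the real test is PartConstraintImpliedByRelConstraint(), which
uses the full predicate-proof machinery rather than the interval containment
used here.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        const char *name;
        int check_lo;	/* existing CHECK: check_lo <= a < check_hi */
        int check_hi;
    } Child;

    /* the child's constraint implies the partition constraint iff its
     * allowed range lies entirely inside the new partition bound */
    static bool
    implied(const Child *c, int part_lo, int part_hi)
    {
        return c->check_lo >= part_lo && c->check_hi <= part_hi;
    }

    int
    main(void)
    {
        Child children[] = {{"subpart_a", 0, 100}, {"subpart_b", 0, 500}};
        int part_lo = 0, part_hi = 250;	/* FOR VALUES FROM (0) TO (250) */

        for (int i = 0; i < 2; i++)
            printf("%s: %s\n", children[i].name,
                   implied(&children[i], part_lo, part_hi)
                   ? "skip validation scan" : "scan required");
        return 0;
    }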
Discussion: http://postgr.es/m/1f08b844-0078-aa8d-452e-7af3bf77d05f@lab.ntt.co.jp --- src/backend/commands/tablecmds.c | 15 +++++++++++++++ src/test/regress/expected/alter_table.out | 14 ++++++++++++++ src/test/regress/sql/alter_table.sql | 14 ++++++++++++++ 3 files changed, 43 insertions(+) diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index d90c739952..2d4dcd7556 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -13683,6 +13683,21 @@ ValidatePartitionConstraints(List **wqueue, Relation scanrel, /* There can never be a whole-row reference here */ if (found_whole_row) elog(ERROR, "unexpected whole-row reference found in partition key"); + + /* Can we skip scanning this part_rel? */ + if (PartConstraintImpliedByRelConstraint(part_rel, my_partconstr)) + { + if (!validate_default) + ereport(INFO, + (errmsg("partition constraint for table \"%s\" is implied by existing constraints", + RelationGetRelationName(part_rel)))); + else + ereport(INFO, + (errmsg("updated partition constraint for default partition \"%s\" is implied by existing constraints", + RelationGetRelationName(part_rel)))); + heap_close(part_rel, NoLock); + continue; + } } /* Grab a work queue entry. */ diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 807eb913f6..cda8ce556c 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3474,6 +3474,20 @@ DETAIL: "part_5" is already a child of "list_parted2". ALTER TABLE list_parted2 ATTACH PARTITION list_parted2 FOR VALUES IN (0); ERROR: circular inheritance not allowed DETAIL: "list_parted2" is already a child of "list_parted2". +-- If the partitioned table being attached does not have a constraint that +-- would allow validation scan to be skipped, but an individual partition +-- does, then the partition's validation scan is skipped. +CREATE TABLE quuux (a int, b text) PARTITION BY LIST (a); +CREATE TABLE quuux_default PARTITION OF quuux DEFAULT PARTITION BY LIST (b); +CREATE TABLE quuux_default1 PARTITION OF quuux_default ( + CONSTRAINT check_1 CHECK (a IS NOT NULL AND a = 1) +) FOR VALUES IN ('b'); +CREATE TABLE quuux1 (a int, b text); +ALTER TABLE quuux ATTACH PARTITION quuux1 FOR VALUES IN (1); -- validate! +CREATE TABLE quuux2 (a int, b text); +ALTER TABLE quuux ATTACH PARTITION quuux2 FOR VALUES IN (2); -- skip validation +INFO: updated partition constraint for default partition "quuux_default1" is implied by existing constraints +DROP TABLE quuux; -- -- DETACH PARTITION -- diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index 37cca72620..d0daacf3e9 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -2285,6 +2285,20 @@ ALTER TABLE list_parted2 ATTACH PARTITION part_2 FOR VALUES IN (2); ALTER TABLE part_5 ATTACH PARTITION list_parted2 FOR VALUES IN ('b'); ALTER TABLE list_parted2 ATTACH PARTITION list_parted2 FOR VALUES IN (0); +-- If the partitioned table being attached does not have a constraint that +-- would allow validation scan to be skipped, but an individual partition +-- does, then the partition's validation scan is skipped. 
+CREATE TABLE quuux (a int, b text) PARTITION BY LIST (a);
+CREATE TABLE quuux_default PARTITION OF quuux DEFAULT PARTITION BY LIST (b);
+CREATE TABLE quuux_default1 PARTITION OF quuux_default (
+	CONSTRAINT check_1 CHECK (a IS NOT NULL AND a = 1)
+) FOR VALUES IN ('b');
+CREATE TABLE quuux1 (a int, b text);
+ALTER TABLE quuux ATTACH PARTITION quuux1 FOR VALUES IN (1); -- validate!
+CREATE TABLE quuux2 (a int, b text);
+ALTER TABLE quuux ATTACH PARTITION quuux2 FOR VALUES IN (2); -- skip validation
+DROP TABLE quuux;
+
 --
 -- DETACH PARTITION
 --

From 6476b26115f3ef25a9cd87880e0ac5ec5f7a05f6 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 5 Oct 2017 13:21:50 -0400
Subject: [PATCH 0333/1087] On CREATE TABLE, consider skipping validation of
 subpartitions.

This is just like commit 14f67a8ee282ebc0de78e773fbd597f460ab4a54, but
for CREATE TABLE rather than ATTACH PARTITION.

Jeevan Ladhe, with test case changes by me.

Discussion: http://postgr.es/m/CAOgcT0MWwG8WBw8frFMtRYHAgDD=tpt6U7WcsO_L2k0KYpm4Jg@mail.gmail.com
---
 src/backend/catalog/partition.c           | 18 ++++++++++++++++++
 src/test/regress/expected/alter_table.out | 12 +++++++++---
 src/test/regress/sql/alter_table.sql      | 11 ++++++++---
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index 9777d40e66..3a8306a055 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -953,7 +953,25 @@ check_default_allows_bound(Relation parent, Relation default_rel,
 
 		/* Lock already taken above. */
 		if (part_relid != RelationGetRelid(default_rel))
+		{
 			part_rel = heap_open(part_relid, NoLock);
+
+			/*
+			 * If the partition constraints on a default partition child imply
+			 * that it will not contain any row that would belong to the new
+			 * partition, we can avoid scanning the child table.
+			 */
+			if (PartConstraintImpliedByRelConstraint(part_rel,
+													 def_part_constraints))
+			{
+				ereport(INFO,
+						(errmsg("partition constraint for table \"%s\" is implied by existing constraints",
+								RelationGetRelationName(part_rel))));
+
+				heap_close(part_rel, NoLock);
+				continue;
+			}
+		}
 		else
 			part_rel = default_rel;
 
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index cda8ce556c..dbe438dcd4 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -3474,9 +3474,10 @@ DETAIL: "part_5" is already a child of "list_parted2".
 ALTER TABLE list_parted2 ATTACH PARTITION list_parted2 FOR VALUES IN (0);
 ERROR: circular inheritance not allowed
 DETAIL: "list_parted2" is already a child of "list_parted2".
--- If the partitioned table being attached does not have a constraint that
--- would allow validation scan to be skipped, but an individual partition
--- does, then the partition's validation scan is skipped.
+-- If a partitioned table being created or an existing table being attached
+-- as a partition does not have a constraint that would allow validation scan
+-- to be skipped, but an individual partition does, then the partition's
+-- validation scan is skipped.
 CREATE TABLE quuux (a int, b text) PARTITION BY LIST (a);
 CREATE TABLE quuux_default PARTITION OF quuux DEFAULT PARTITION BY LIST (b);
 CREATE TABLE quuux_default1 PARTITION OF quuux_default (
@@ -3487,6 +3488,11 @@ ALTER TABLE quuux ATTACH PARTITION quuux1 FOR VALUES IN (1); -- validate!
CREATE TABLE quuux2 (a int, b text);
 ALTER TABLE quuux ATTACH PARTITION quuux2 FOR VALUES IN (2); -- skip validation
 INFO: updated partition constraint for default partition "quuux_default1" is implied by existing constraints
+DROP TABLE quuux1, quuux2;
+-- should validate for quuux1, but not for quuux2
+CREATE TABLE quuux1 PARTITION OF quuux FOR VALUES IN (1);
+CREATE TABLE quuux2 PARTITION OF quuux FOR VALUES IN (2);
+INFO: partition constraint for table "quuux_default1" is implied by existing constraints
 DROP TABLE quuux;
 --
 -- DETACH PARTITION
 --
diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql
index d0daacf3e9..0c8ae2ab97 100644
--- a/src/test/regress/sql/alter_table.sql
+++ b/src/test/regress/sql/alter_table.sql
@@ -2285,9 +2285,10 @@ ALTER TABLE list_parted2 ATTACH PARTITION part_2 FOR VALUES IN (2);
 ALTER TABLE part_5 ATTACH PARTITION list_parted2 FOR VALUES IN ('b');
 ALTER TABLE list_parted2 ATTACH PARTITION list_parted2 FOR VALUES IN (0);
 
--- If the partitioned table being attached does not have a constraint that
--- would allow validation scan to be skipped, but an individual partition
--- does, then the partition's validation scan is skipped.
+-- If a partitioned table being created or an existing table being attached
+-- as a partition does not have a constraint that would allow validation scan
+-- to be skipped, but an individual partition does, then the partition's
+-- validation scan is skipped.
 CREATE TABLE quuux (a int, b text) PARTITION BY LIST (a);
 CREATE TABLE quuux_default PARTITION OF quuux DEFAULT PARTITION BY LIST (b);
 CREATE TABLE quuux_default1 PARTITION OF quuux_default (
@@ -2297,6 +2298,10 @@ CREATE TABLE quuux1 (a int, b text);
 ALTER TABLE quuux ATTACH PARTITION quuux1 FOR VALUES IN (1); -- validate!
 CREATE TABLE quuux2 (a int, b text);
 ALTER TABLE quuux ATTACH PARTITION quuux2 FOR VALUES IN (2); -- skip validation
+DROP TABLE quuux1, quuux2;
+-- should validate for quuux1, but not for quuux2
+CREATE TABLE quuux1 PARTITION OF quuux FOR VALUES IN (1);
+CREATE TABLE quuux2 PARTITION OF quuux FOR VALUES IN (2);
 DROP TABLE quuux;
 
 --

From fe9ba28ee852bb968bc8948d172c6bc0c70c50df Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Thu, 5 Oct 2017 15:05:49 -0400
Subject: [PATCH 0334/1087] Fix typo in README.

s/BeginInternalSubtransaction/BeginInternalSubTransaction/
---
 src/backend/access/transam/README | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/transam/README b/src/backend/access/transam/README
index e7dd19fd7b..ad4083eb6b 100644
--- a/src/backend/access/transam/README
+++ b/src/backend/access/transam/README
@@ -177,13 +177,13 @@ subtransaction level with the same name. So it's a completely new
 subtransaction as far as the internals are concerned.
 
 Other subsystems are allowed to start "internal" subtransactions, which are
-handled by BeginInternalSubtransaction. This is to allow implementing
+handled by BeginInternalSubTransaction. This is to allow implementing
 exception handling, e.g. in PL/pgSQL. ReleaseCurrentSubTransaction and
 RollbackAndReleaseCurrentSubTransaction allows the subsystem to close said
 subtransactions. The main difference between this and the savepoint/release
 path is that we execute the complete state transition immediately in each
 subroutine, rather than deferring some work until CommitTransactionCommand.
-Another difference is that BeginInternalSubtransaction is allowed when no +Another difference is that BeginInternalSubTransaction is allowed when no explicit transaction block has been established, while DefineSavepoint is not. From f49842d1ee31b976c681322f76025d7732e860f3 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 6 Oct 2017 11:11:10 -0400 Subject: [PATCH 0335/1087] Basic partition-wise join functionality. Instead of joining two partitioned tables in their entirety we can, if it is an equi-join on the partition keys, join the matching partitions individually. This involves teaching the planner about "other join" rels, which are related to regular join rels in the same way that other member rels are related to baserels. This can use significantly more CPU time and memory than regular join planning, because there may now be a set of "other" rels not only for every base relation but also for every join relation. In most practical cases, this probably shouldn't be a problem, because (1) it's probably unusual to join many tables each with many partitions using the partition keys for all joins and (2) if you do that scenario then you probably have a big enough machine to handle the increased memory cost of planning and (3) the resulting plan is highly likely to be better, so what you spend in planning you'll make up on the execution side. All the same, for now, turn this feature off by default. Currently, we can only perform joins between two tables whose partitioning schemes are absolutely identical. It would be nice to cope with other scenarios, such as extra partitions on one side or the other with no match on the other side, but that will have to wait for a future patch. Ashutosh Bapat, reviewed and tested by Rajkumar Raghuwanshi, Amit Langote, Rafia Sabih, Thomas Munro, Dilip Kumar, Antonin Houska, Amit Khandekar, and by me. A few final adjustments by me. 
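The key claim, that an equi-join on the partition keys confines every join
partner to the matching partition, can be checked with a toy model. This is
plain C rather than planner code; the 250 boundary mirrors the fprt1/fprt2
tables in the postgres_fdw tests below.

    #include <stdio.h>

    #define NPART 2
    #define BOUND 250	/* partitions [0,250) and [250,500) on the join key */

    static int part_of(int key) { return key < BOUND ? 0 : 1; }

    int
    main(void)
    {
        int r1[] = {0, 148, 150, 250, 400};	/* partitioned on a */
        int r2[] = {0, 150, 249, 250, 400};	/* partitioned on b */
        int n1 = 5, n2 = 5;

        /* run the join partition-by-partition; rows from non-matching
         * partitions can never satisfy the equi-join r1.a = r2.b, so
         * the append of the per-partition joins equals the full join */
        for (int p = 0; p < NPART; p++)
            for (int i = 0; i < n1; i++)
                for (int j = 0; j < n2; j++)
                    if (part_of(r1[i]) == p && r1[i] == r2[j])
                        printf("partition %d: match %d\n", p, r1[i]);
        return 0;
    }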
Discussion: http://postgr.es/m/CAFjFpRfQ8GrQvzp3jA2wnLqrHmaXna-urjm_UY9BqXj=EaDTSA@mail.gmail.com Discussion: http://postgr.es/m/CAFjFpRcitjfrULr5jfuKWRPsGUX0LQ0k8-yG0Qw2+1LBGNpMdw@mail.gmail.com --- .../postgres_fdw/expected/postgres_fdw.out | 120 ++ contrib/postgres_fdw/sql/postgres_fdw.sql | 53 + doc/src/sgml/config.sgml | 20 + doc/src/sgml/fdwhandler.sgml | 20 + src/backend/optimizer/README | 26 + src/backend/optimizer/geqo/geqo_eval.c | 3 + src/backend/optimizer/path/allpaths.c | 268 ++- src/backend/optimizer/path/costsize.c | 1 + src/backend/optimizer/path/joinpath.c | 102 +- src/backend/optimizer/path/joinrels.c | 316 ++- src/backend/optimizer/plan/createplan.c | 35 +- src/backend/optimizer/plan/planner.c | 22 + src/backend/optimizer/plan/setrefs.c | 58 +- src/backend/optimizer/prep/prepunion.c | 95 + src/backend/optimizer/util/pathnode.c | 363 ++++ src/backend/optimizer/util/placeholder.c | 58 + src/backend/optimizer/util/plancat.c | 32 +- src/backend/optimizer/util/relnode.c | 368 +++- src/backend/utils/misc/guc.c | 9 + src/backend/utils/misc/postgresql.conf.sample | 1 + src/include/foreign/fdwapi.h | 6 + src/include/nodes/extensible.h | 3 + src/include/nodes/relation.h | 50 +- src/include/optimizer/cost.h | 1 + src/include/optimizer/pathnode.h | 6 + src/include/optimizer/paths.h | 5 + src/include/optimizer/placeholder.h | 2 + src/include/optimizer/planner.h | 2 + src/include/optimizer/prep.h | 6 + src/test/regress/expected/partition_join.out | 1789 +++++++++++++++++ src/test/regress/expected/sysviews.out | 31 +- src/test/regress/parallel_schedule | 3 +- src/test/regress/serial_schedule | 1 + src/test/regress/sql/partition_join.sql | 354 ++++ 34 files changed, 4089 insertions(+), 140 deletions(-) create mode 100644 src/test/regress/expected/partition_join.out create mode 100644 src/test/regress/sql/partition_join.sql diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index c19b3318c7..4339bbf9df 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -7346,3 +7346,123 @@ AND ftoptions @> array['fetch_size=60000']; (1 row) ROLLBACK; +-- =================================================================== +-- test partition-wise-joins +-- =================================================================== +SET enable_partition_wise_join=on; +CREATE TABLE fprt1 (a int, b int, c varchar) PARTITION BY RANGE(a); +CREATE TABLE fprt1_p1 (LIKE fprt1); +CREATE TABLE fprt1_p2 (LIKE fprt1); +INSERT INTO fprt1_p1 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 249, 2) i; +INSERT INTO fprt1_p2 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(250, 499, 2) i; +CREATE FOREIGN TABLE ftprt1_p1 PARTITION OF fprt1 FOR VALUES FROM (0) TO (250) + SERVER loopback OPTIONS (table_name 'fprt1_p1', use_remote_estimate 'true'); +CREATE FOREIGN TABLE ftprt1_p2 PARTITION OF fprt1 FOR VALUES FROM (250) TO (500) + SERVER loopback OPTIONS (TABLE_NAME 'fprt1_p2'); +ANALYZE fprt1; +ANALYZE fprt1_p1; +ANALYZE fprt1_p2; +CREATE TABLE fprt2 (a int, b int, c varchar) PARTITION BY RANGE(b); +CREATE TABLE fprt2_p1 (LIKE fprt2); +CREATE TABLE fprt2_p2 (LIKE fprt2); +INSERT INTO fprt2_p1 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 249, 3) i; +INSERT INTO fprt2_p2 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(250, 499, 3) i; +CREATE FOREIGN TABLE ftprt2_p1 PARTITION OF fprt2 FOR VALUES FROM (0) TO (250) + SERVER loopback OPTIONS (table_name 'fprt2_p1', 
use_remote_estimate 'true'); +CREATE FOREIGN TABLE ftprt2_p2 PARTITION OF fprt2 FOR VALUES FROM (250) TO (500) + SERVER loopback OPTIONS (table_name 'fprt2_p2', use_remote_estimate 'true'); +ANALYZE fprt2; +ANALYZE fprt2_p1; +ANALYZE fprt2_p2; +-- inner join three tables +EXPLAIN (COSTS OFF) +SELECT t1.a,t2.b,t3.c FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) INNER JOIN fprt1 t3 ON (t2.b = t3.a) WHERE t1.a % 25 =0 ORDER BY 1,2,3; + QUERY PLAN +-------------------------------------------------------------------------------------------------------------------- + Sort + Sort Key: t1.a, t3.c + -> Append + -> Foreign Scan + Relations: ((public.ftprt1_p1 t1) INNER JOIN (public.ftprt2_p1 t2)) INNER JOIN (public.ftprt1_p1 t3) + -> Foreign Scan + Relations: ((public.ftprt1_p2 t1) INNER JOIN (public.ftprt2_p2 t2)) INNER JOIN (public.ftprt1_p2 t3) +(7 rows) + +SELECT t1.a,t2.b,t3.c FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) INNER JOIN fprt1 t3 ON (t2.b = t3.a) WHERE t1.a % 25 =0 ORDER BY 1,2,3; + a | b | c +-----+-----+------ + 0 | 0 | 0000 + 150 | 150 | 0003 + 250 | 250 | 0005 + 400 | 400 | 0008 +(4 rows) + +-- left outer join + nullable clasue +EXPLAIN (COSTS OFF) +SELECT t1.a,t2.b,t2.c FROM fprt1 t1 LEFT JOIN (SELECT * FROM fprt2 WHERE a < 10) t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a < 10 ORDER BY 1,2,3; + QUERY PLAN +----------------------------------------------------------------------------------- + Sort + Sort Key: t1.a, ftprt2_p1.b, ftprt2_p1.c + -> Append + -> Foreign Scan + Relations: (public.ftprt1_p1 t1) LEFT JOIN (public.ftprt2_p1 fprt2) +(5 rows) + +SELECT t1.a,t2.b,t2.c FROM fprt1 t1 LEFT JOIN (SELECT * FROM fprt2 WHERE a < 10) t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a < 10 ORDER BY 1,2,3; + a | b | c +---+---+------ + 0 | 0 | 0000 + 2 | | + 4 | | + 6 | 6 | 0000 + 8 | | +(5 rows) + +-- with whole-row reference +EXPLAIN (COSTS OFF) +SELECT t1,t2 FROM fprt1 t1 JOIN fprt2 t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a % 25 =0 ORDER BY 1,2; + QUERY PLAN +--------------------------------------------------------------------------------- + Sort + Sort Key: ((t1.*)::fprt1), ((t2.*)::fprt2) + -> Append + -> Foreign Scan + Relations: (public.ftprt1_p1 t1) INNER JOIN (public.ftprt2_p1 t2) + -> Foreign Scan + Relations: (public.ftprt1_p2 t1) INNER JOIN (public.ftprt2_p2 t2) +(7 rows) + +SELECT t1,t2 FROM fprt1 t1 JOIN fprt2 t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a % 25 =0 ORDER BY 1,2; + t1 | t2 +----------------+---------------- + (0,0,0000) | (0,0,0000) + (150,150,0003) | (150,150,0003) + (250,250,0005) | (250,250,0005) + (400,400,0008) | (400,400,0008) +(4 rows) + +-- join with lateral reference +EXPLAIN (COSTS OFF) +SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t1.a = t2.b AND t1.b = t2.a) q WHERE t1.a%25 = 0 ORDER BY 1,2; + QUERY PLAN +--------------------------------------------------------------------------------- + Sort + Sort Key: t1.a, t1.b + -> Append + -> Foreign Scan + Relations: (public.ftprt1_p1 t1) INNER JOIN (public.ftprt2_p1 t2) + -> Foreign Scan + Relations: (public.ftprt1_p2 t1) INNER JOIN (public.ftprt2_p2 t2) +(7 rows) + +SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t1.a = t2.b AND t1.b = t2.a) q WHERE t1.a%25 = 0 ORDER BY 1,2; + a | b +-----+----- + 0 | 0 + 150 | 150 + 250 | 250 + 400 | 400 +(4 rows) + +RESET enable_partition_wise_join; diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 5f65d9d966..ddfec7930d 100644 --- 
a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -1764,3 +1764,56 @@ WHERE ftrelid = 'table30000'::regclass AND ftoptions @> array['fetch_size=60000']; ROLLBACK; + +-- =================================================================== +-- test partition-wise-joins +-- =================================================================== +SET enable_partition_wise_join=on; + +CREATE TABLE fprt1 (a int, b int, c varchar) PARTITION BY RANGE(a); +CREATE TABLE fprt1_p1 (LIKE fprt1); +CREATE TABLE fprt1_p2 (LIKE fprt1); +INSERT INTO fprt1_p1 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 249, 2) i; +INSERT INTO fprt1_p2 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(250, 499, 2) i; +CREATE FOREIGN TABLE ftprt1_p1 PARTITION OF fprt1 FOR VALUES FROM (0) TO (250) + SERVER loopback OPTIONS (table_name 'fprt1_p1', use_remote_estimate 'true'); +CREATE FOREIGN TABLE ftprt1_p2 PARTITION OF fprt1 FOR VALUES FROM (250) TO (500) + SERVER loopback OPTIONS (TABLE_NAME 'fprt1_p2'); +ANALYZE fprt1; +ANALYZE fprt1_p1; +ANALYZE fprt1_p2; + +CREATE TABLE fprt2 (a int, b int, c varchar) PARTITION BY RANGE(b); +CREATE TABLE fprt2_p1 (LIKE fprt2); +CREATE TABLE fprt2_p2 (LIKE fprt2); +INSERT INTO fprt2_p1 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 249, 3) i; +INSERT INTO fprt2_p2 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(250, 499, 3) i; +CREATE FOREIGN TABLE ftprt2_p1 PARTITION OF fprt2 FOR VALUES FROM (0) TO (250) + SERVER loopback OPTIONS (table_name 'fprt2_p1', use_remote_estimate 'true'); +CREATE FOREIGN TABLE ftprt2_p2 PARTITION OF fprt2 FOR VALUES FROM (250) TO (500) + SERVER loopback OPTIONS (table_name 'fprt2_p2', use_remote_estimate 'true'); +ANALYZE fprt2; +ANALYZE fprt2_p1; +ANALYZE fprt2_p2; + +-- inner join three tables +EXPLAIN (COSTS OFF) +SELECT t1.a,t2.b,t3.c FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) INNER JOIN fprt1 t3 ON (t2.b = t3.a) WHERE t1.a % 25 =0 ORDER BY 1,2,3; +SELECT t1.a,t2.b,t3.c FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) INNER JOIN fprt1 t3 ON (t2.b = t3.a) WHERE t1.a % 25 =0 ORDER BY 1,2,3; + +-- left outer join + nullable clasue +EXPLAIN (COSTS OFF) +SELECT t1.a,t2.b,t2.c FROM fprt1 t1 LEFT JOIN (SELECT * FROM fprt2 WHERE a < 10) t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a < 10 ORDER BY 1,2,3; +SELECT t1.a,t2.b,t2.c FROM fprt1 t1 LEFT JOIN (SELECT * FROM fprt2 WHERE a < 10) t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a < 10 ORDER BY 1,2,3; + +-- with whole-row reference +EXPLAIN (COSTS OFF) +SELECT t1,t2 FROM fprt1 t1 JOIN fprt2 t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a % 25 =0 ORDER BY 1,2; +SELECT t1,t2 FROM fprt1 t1 JOIN fprt2 t2 ON (t1.a = t2.b and t1.b = t2.a) WHERE t1.a % 25 =0 ORDER BY 1,2; + +-- join with lateral reference +EXPLAIN (COSTS OFF) +SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t1.a = t2.b AND t1.b = t2.a) q WHERE t1.a%25 = 0 ORDER BY 1,2; +SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t1.a = t2.b AND t1.b = t2.a) q WHERE t1.a%25 = 0 ORDER BY 1,2; + +RESET enable_partition_wise_join; diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index c13f60230f..b012a26991 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -3632,6 +3632,26 @@ ANY num_sync ( + enable_partition_wise_join (boolean) + + enable_partition_wise_join configuration parameter + + + + + Enables or disables the query planner's use of partition-wise join, + which allows a join 
between partitioned tables to be performed by
+        joining the matching partitions. Partition-wise join currently applies
+        only when the join conditions include all the partition keys, which
+        must be of the same data type and have exactly matching sets of child
+        partitions. Because partition-wise join planning can use significantly
+        more CPU time and memory during planning, the default is
+        off.
+
+
+
+
 enable_seqscan (boolean)
diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml
index a59e03af98..e63e29fd96 100644
--- a/doc/src/sgml/fdwhandler.sgml
+++ b/doc/src/sgml/fdwhandler.sgml
@@ -1292,6 +1292,26 @@ ShutdownForeignScan(ForeignScanState *node);
 
+
+ FDW Routines For reparameterization of paths
+
+
+
+List *
+ReparameterizeForeignPathByChild(PlannerInfo *root, List *fdw_private,
+                                 RelOptInfo *child_rel);
+
+ This function is called while converting a path parameterized by the
+ top-most parent of the given child relation child_rel to be
+ parameterized by the child relation. The function is used to reparameterize
+ any paths or translate any expression nodes saved in the given
+ fdw_private member of a ForeignPath. The
+ callback may use reparameterize_path_by_child,
+ adjust_appendrel_attrs or
+ adjust_appendrel_attrs_multilevel as required.
+
+
+
diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README
index 62242e8564..031e503761 100644
--- a/src/backend/optimizer/README
+++ b/src/backend/optimizer/README
@@ -1075,3 +1075,29 @@ be desirable to postpone the Gather stage until as near to the top of the
 plan as possible. Expanding the range of cases in which more work can be
 pushed below the Gather (and costing them accurately) is likely to keep us
 busy for a long time to come.
+
+Partition-wise joins
+--------------------
+A join between two similarly partitioned tables can be broken down into joins
+between their matching partitions if there exists an equi-join condition
+between the partition keys of the joining tables. The equi-join between
+partition keys implies that all join partners for a given row in one
+partitioned table must be in the corresponding partition of the other
+partitioned table. Because of this, the join between partitioned tables can
+be broken down into joins between the matching partitions. The resultant
+join is partitioned in the same way as the joining relations, thus allowing
+an N-way join between similarly partitioned tables having an equi-join
+condition between their partition keys to be broken down into N-way joins
+between their matching partitions. This technique of breaking down a join
+between partitioned tables into joins between their partitions is called
+partition-wise join. We will use the term "partitioned relation" for either
+a partitioned table or a join between compatibly partitioned tables.
+
+The partitioning properties of a partitioned relation are stored in its
+RelOptInfo. The information about the data types of the partition keys is
+stored in the PartitionSchemeData structure. The planner maintains a list
+of canonical partition schemes (distinct PartitionSchemeData objects) so
+that the RelOptInfos of any two partitioned relations with the same
+partitioning scheme point to the same PartitionSchemeData object. This
+reduces the memory consumed by PartitionSchemeData objects and makes it
+easy to compare the partition schemes of joining relations.
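The canonical-scheme arrangement described at the end of the README addition
can be modelled in a few lines. This is an illustrative sketch, not the
planner's implementation: the real PartitionSchemeData records the
partitioning strategy, key count, operator families and types, collapsed here
into two ints.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Scheme
    {
        int strategy;	/* list/range, in the real structure */
        int partnatts;	/* number of partition key columns */
        struct Scheme *next;
    } Scheme;

    static Scheme *canonical_schemes = NULL;

    /* return the one shared descriptor for a given scheme, creating it
     * on first use, so scheme equality becomes pointer equality */
    static Scheme *
    find_scheme(int strategy, int partnatts)
    {
        for (Scheme *s = canonical_schemes; s != NULL; s = s->next)
            if (s->strategy == strategy && s->partnatts == partnatts)
                return s;

        Scheme *s = malloc(sizeof(Scheme));
        s->strategy = strategy;
        s->partnatts = partnatts;
        s->next = canonical_schemes;
        canonical_schemes = s;
        return s;
    }

    int
    main(void)
    {
        Scheme *a = find_scheme(1, 2);
        Scheme *b = find_scheme(1, 2);
        Scheme *c = find_scheme(2, 2);
        printf("a == b: %s\n", a == b ? "yes" : "no");	/* yes: shared */
        printf("a == c: %s\n", a == c ? "yes" : "no");	/* no: different */
        return 0;
    }

With this interning, deciding whether two relations are partitioned the same
way is a single pointer comparison rather than a field-by-field match.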
diff --git a/src/backend/optimizer/geqo/geqo_eval.c b/src/backend/optimizer/geqo/geqo_eval.c index b5cab0c351..3cf268cbd3 100644 --- a/src/backend/optimizer/geqo/geqo_eval.c +++ b/src/backend/optimizer/geqo/geqo_eval.c @@ -264,6 +264,9 @@ merge_clump(PlannerInfo *root, List *clumps, Clump *new_clump, bool force) /* Keep searching if join order is not valid */ if (joinrel) { + /* Create paths for partition-wise joins. */ + generate_partition_wise_join_paths(root, joinrel); + /* Create GatherPaths for any useful partial paths for rel */ generate_gather_paths(root, joinrel); diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index a7866a99e0..5535b63803 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -920,12 +920,79 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel, childrel = find_base_rel(root, childRTindex); Assert(childrel->reloptkind == RELOPT_OTHER_MEMBER_REL); + if (rel->part_scheme) + { + AttrNumber attno; + + /* + * We need attr_needed data for building targetlist of a join + * relation representing join between matching partitions for + * partition-wise join. A given attribute of a child will be + * needed in the same highest joinrel where the corresponding + * attribute of parent is needed. Hence it suffices to use the + * same Relids set for parent and child. + */ + for (attno = rel->min_attr; attno <= rel->max_attr; attno++) + { + int index = attno - rel->min_attr; + Relids attr_needed = rel->attr_needed[index]; + + /* System attributes do not need translation. */ + if (attno <= 0) + { + Assert(rel->min_attr == childrel->min_attr); + childrel->attr_needed[index] = attr_needed; + } + else + { + Var *var = list_nth_node(Var, + appinfo->translated_vars, + attno - 1); + int child_index; + + child_index = var->varattno - childrel->min_attr; + childrel->attr_needed[child_index] = attr_needed; + } + } + } + + /* + * Copy/Modify targetlist. Even if this child is deemed empty, we need + * its targetlist in case it falls on nullable side in a child-join + * because of partition-wise join. + * + * NB: the resulting childrel->reltarget->exprs may contain arbitrary + * expressions, which otherwise would not occur in a rel's targetlist. + * Code that might be looking at an appendrel child must cope with + * such. (Normally, a rel's targetlist would only include Vars and + * PlaceHolderVars.) XXX we do not bother to update the cost or width + * fields of childrel->reltarget; not clear if that would be useful. + */ + childrel->reltarget->exprs = (List *) + adjust_appendrel_attrs(root, + (Node *) rel->reltarget->exprs, + 1, &appinfo); + + /* + * We have to make child entries in the EquivalenceClass data + * structures as well. This is needed either if the parent + * participates in some eclass joins (because we will want to consider + * inner-indexscan joins on the individual children) or if the parent + * has useful pathkeys (because we should try to build MergeAppend + * paths that produce those sort orderings). Even if this child is + * deemed dummy, it may fall on nullable side in a child-join, which + * in turn may participate in a MergeAppend, where we will need the + * EquivalenceClass data structures. 
+ */ + if (rel->has_eclass_joins || has_useful_pathkeys(root, rel)) + add_child_rel_equivalences(root, appinfo, rel, childrel); + childrel->has_eclass_joins = rel->has_eclass_joins; + /* - * We have to copy the parent's targetlist and quals to the child, - * with appropriate substitution of variables. However, only the - * baserestrictinfo quals are needed before we can check for - * constraint exclusion; so do that first and then check to see if we - * can disregard this child. + * We have to copy the parent's quals to the child, with appropriate + * substitution of variables. However, only the baserestrictinfo + * quals are needed before we can check for constraint exclusion; so + * do that first and then check to see if we can disregard this child. * * The child rel's targetlist might contain non-Var expressions, which * means that substitution into the quals could produce opportunities @@ -1052,44 +1119,11 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel, continue; } - /* - * CE failed, so finish copying/modifying targetlist and join quals. - * - * NB: the resulting childrel->reltarget->exprs may contain arbitrary - * expressions, which otherwise would not occur in a rel's targetlist. - * Code that might be looking at an appendrel child must cope with - * such. (Normally, a rel's targetlist would only include Vars and - * PlaceHolderVars.) XXX we do not bother to update the cost or width - * fields of childrel->reltarget; not clear if that would be useful. - */ + /* CE failed, so finish copying/modifying join quals. */ childrel->joininfo = (List *) adjust_appendrel_attrs(root, (Node *) rel->joininfo, 1, &appinfo); - childrel->reltarget->exprs = (List *) - adjust_appendrel_attrs(root, - (Node *) rel->reltarget->exprs, - 1, &appinfo); - - /* - * We have to make child entries in the EquivalenceClass data - * structures as well. This is needed either if the parent - * participates in some eclass joins (because we will want to consider - * inner-indexscan joins on the individual children) or if the parent - * has useful pathkeys (because we should try to build MergeAppend - * paths that produce those sort orderings). - */ - if (rel->has_eclass_joins || has_useful_pathkeys(root, rel)) - add_child_rel_equivalences(root, appinfo, rel, childrel); - childrel->has_eclass_joins = rel->has_eclass_joins; - - /* - * Note: we could compute appropriate attr_needed data for the child's - * variables, by transforming the parent's attr_needed through the - * translated_vars mapping. However, currently there's no need - * because attr_needed is only examined for base relations not - * otherrels. So we just leave the child's attr_needed empty. - */ /* * If parallelism is allowable for this query in general, see whether @@ -1262,14 +1296,14 @@ set_append_rel_pathlist(PlannerInfo *root, RelOptInfo *rel, live_childrels = lappend(live_childrels, childrel); } - /* Add paths to the "append" relation. */ + /* Add paths to the append relation. */ add_paths_to_append_rel(root, rel, live_childrels); } /* * add_paths_to_append_rel - * Generate paths for given "append" relation given the set of non-dummy + * Generate paths for the given append relation given the set of non-dummy * child rels. 
* * The function collects all parameterizations and orderings supported by the @@ -1293,30 +1327,44 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte; bool build_partitioned_rels = false; - /* - * A root partition will already have a PartitionedChildRelInfo, and a - * non-root partitioned table doesn't need one, because its Append paths - * will get flattened into the parent anyway. For a subquery RTE, no - * PartitionedChildRelInfo exists; we collect all partitioned_rels - * associated with any child. (This assumes that we don't need to look - * through multiple levels of subquery RTEs; if we ever do, we could - * create a PartitionedChildRelInfo with the accumulated list of - * partitioned_rels which would then be found when populated our parent - * rel with paths. For the present, that appears to be unnecessary.) - */ - rte = planner_rt_fetch(rel->relid, root); - switch (rte->rtekind) + if (IS_SIMPLE_REL(rel)) { - case RTE_RELATION: - if (rte->relkind == RELKIND_PARTITIONED_TABLE) - partitioned_rels = - get_partitioned_child_rels(root, rel->relid); - break; - case RTE_SUBQUERY: - build_partitioned_rels = true; - break; - default: - elog(ERROR, "unexpected rtekind: %d", (int) rte->rtekind); + /* + * A root partition will already have a PartitionedChildRelInfo, and a + * non-root partitioned table doesn't need one, because its Append + * paths will get flattened into the parent anyway. For a subquery + * RTE, no PartitionedChildRelInfo exists; we collect all + * partitioned_rels associated with any child. (This assumes that we + * don't need to look through multiple levels of subquery RTEs; if we + * ever do, we could create a PartitionedChildRelInfo with the + * accumulated list of partitioned_rels which would then be found when + * populating our parent rel with paths. For the present, that appears + * to be unnecessary.) + */ + rte = planner_rt_fetch(rel->relid, root); + switch (rte->rtekind) + { + case RTE_RELATION: + if (rte->relkind == RELKIND_PARTITIONED_TABLE) + partitioned_rels = + get_partitioned_child_rels(root, rel->relid); + break; + case RTE_SUBQUERY: + build_partitioned_rels = true; + break; + default: + elog(ERROR, "unexpected rtekind: %d", (int) rte->rtekind); + } + } + else if (rel->reloptkind == RELOPT_JOINREL && rel->part_scheme) + { + /* + * Associate PartitionedChildRelInfo of the root partitioned tables + * being joined with the root partitioned join (indicated by + * RELOPT_JOINREL). + */ + partitioned_rels = get_partitioned_child_rels_for_join(root, + rel->relids); } /* @@ -2422,16 +2470,22 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) join_search_one_level(root, lev); /* - * Run generate_gather_paths() for each just-processed joinrel. We - * could not do this earlier because both regular and partial paths - * can get added to a particular joinrel at multiple times within - * join_search_one_level. After that, we're done creating paths for - * the joinrel, so run set_cheapest(). + * Run generate_partition_wise_join_paths() and + * generate_gather_paths() for each just-processed joinrel. We could + * not do this earlier because both regular and partial paths can get + * added to a particular joinrel at multiple times within + * join_search_one_level. + * + * After that, we're done creating paths for the joinrel, so run + * set_cheapest(). */ foreach(lc, root->join_rel_level[lev]) { rel = (RelOptInfo *) lfirst(lc); + /* Create paths for partition-wise joins.
*/ + generate_partition_wise_join_paths(root, rel); + /* Create GatherPaths for any useful partial paths for rel */ generate_gather_paths(root, rel); @@ -3179,6 +3233,82 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages) return parallel_workers; } +/* + * generate_partition_wise_join_paths + * Create paths representing partition-wise join for given partitioned + * join relation. + * + * This must not be called until after we are done adding paths for all + * child-joins. Otherwise, add_path might delete a path to which some path + * generated here has a reference. + */ +void +generate_partition_wise_join_paths(PlannerInfo *root, RelOptInfo *rel) +{ + List *live_children = NIL; + int cnt_parts; + int num_parts; + RelOptInfo **part_rels; + + /* Handle only join relations here. */ + if (!IS_JOIN_REL(rel)) + return; + + /* + * If we've already proven this join is empty, we needn't consider any + * more paths for it. + */ + if (IS_DUMMY_REL(rel)) + return; + + /* + * Nothing to do if the relation is not partitioned. An outer join + * relation which had an empty inner relation in every pair will have the + * rest of the partitioning properties set except the child-join + * RelOptInfos. See try_partition_wise_join() for more explanation. + */ + if (rel->nparts <= 0 || rel->part_rels == NULL) + return; + + /* Guard against stack overflow due to overly deep partition hierarchy. */ + check_stack_depth(); + + num_parts = rel->nparts; + part_rels = rel->part_rels; + + /* Collect non-dummy child-joins. */ + for (cnt_parts = 0; cnt_parts < num_parts; cnt_parts++) + { + RelOptInfo *child_rel = part_rels[cnt_parts]; + + /* Add partition-wise join paths for partitioned child-joins. */ + generate_partition_wise_join_paths(root, child_rel); + + /* Dummy children will not be scanned, so ignore those. */ + if (IS_DUMMY_REL(child_rel)) + continue; + + set_cheapest(child_rel); + +#ifdef OPTIMIZER_DEBUG + debug_print_rel(root, child_rel); +#endif + + live_children = lappend(live_children, child_rel); + } + + /* If all child-joins are dummy, the parent join is also dummy. */ + if (!live_children) + { + mark_dummy_rel(rel); + return; + } + + /* Build additional paths for this rel from child-join paths. */ + add_paths_to_append_rel(root, rel, live_children); + list_free(live_children); +} + /***************************************************************************** * DEBUG SUPPORT diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index f76da49044..ce32b8a4b9 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -127,6 +127,7 @@ bool enable_material = true; bool enable_mergejoin = true; bool enable_hashjoin = true; bool enable_gathermerge = true; +bool enable_partition_wise_join = false; typedef struct { diff --git a/src/backend/optimizer/path/joinpath.c b/src/backend/optimizer/path/joinpath.c index 43833ea9c9..310262d87c 100644 --- a/src/backend/optimizer/path/joinpath.c +++ b/src/backend/optimizer/path/joinpath.c @@ -26,9 +26,19 @@ /* Hook for plugins to get control in add_paths_to_joinrel() */ set_join_pathlist_hook_type set_join_pathlist_hook = NULL; -#define PATH_PARAM_BY_REL(path, rel) \ +/* + * Paths parameterized by the parent can be considered to be parameterized by + * any of its children.
+ */ +#define PATH_PARAM_BY_PARENT(path, rel) \ + ((path)->param_info && bms_overlap(PATH_REQ_OUTER(path), \ + (rel)->top_parent_relids)) +#define PATH_PARAM_BY_REL_SELF(path, rel) \ ((path)->param_info && bms_overlap(PATH_REQ_OUTER(path), (rel)->relids)) +#define PATH_PARAM_BY_REL(path, rel) \ + (PATH_PARAM_BY_REL_SELF(path, rel) || PATH_PARAM_BY_PARENT(path, rel)) + static void try_partial_mergejoin_path(PlannerInfo *root, RelOptInfo *joinrel, Path *outer_path, @@ -115,6 +125,19 @@ add_paths_to_joinrel(PlannerInfo *root, JoinPathExtraData extra; bool mergejoin_allowed = true; ListCell *lc; + Relids joinrelids; + + /* + * PlannerInfo doesn't contain the SpecialJoinInfos created for joins + * between child relations, even if there is a SpecialJoinInfo node for + * the join between the topmost parents. So, while calculating Relids set + * representing the restriction, consider relids of topmost parent of + * partitions. + */ + if (joinrel->reloptkind == RELOPT_OTHER_JOINREL) + joinrelids = joinrel->top_parent_relids; + else + joinrelids = joinrel->relids; extra.restrictlist = restrictlist; extra.mergeclause_list = NIL; @@ -211,16 +234,16 @@ add_paths_to_joinrel(PlannerInfo *root, * join has already been proven legal.) If the SJ is relevant, it * presents constraints for joining to anything not in its RHS. */ - if (bms_overlap(joinrel->relids, sjinfo2->min_righthand) && - !bms_overlap(joinrel->relids, sjinfo2->min_lefthand)) + if (bms_overlap(joinrelids, sjinfo2->min_righthand) && + !bms_overlap(joinrelids, sjinfo2->min_lefthand)) extra.param_source_rels = bms_join(extra.param_source_rels, bms_difference(root->all_baserels, sjinfo2->min_righthand)); /* full joins constrain both sides symmetrically */ if (sjinfo2->jointype == JOIN_FULL && - bms_overlap(joinrel->relids, sjinfo2->min_lefthand) && - !bms_overlap(joinrel->relids, sjinfo2->min_righthand)) + bms_overlap(joinrelids, sjinfo2->min_lefthand) && + !bms_overlap(joinrelids, sjinfo2->min_righthand)) extra.param_source_rels = bms_join(extra.param_source_rels, bms_difference(root->all_baserels, sjinfo2->min_lefthand)); @@ -347,11 +370,25 @@ try_nestloop_path(PlannerInfo *root, JoinCostWorkspace workspace; RelOptInfo *innerrel = inner_path->parent; RelOptInfo *outerrel = outer_path->parent; - Relids innerrelids = innerrel->relids; - Relids outerrelids = outerrel->relids; + Relids innerrelids; + Relids outerrelids; Relids inner_paramrels = PATH_REQ_OUTER(inner_path); Relids outer_paramrels = PATH_REQ_OUTER(outer_path); + /* + * Paths are parameterized by top-level parents, so run parameterization + * tests on the parent relids. + */ + if (innerrel->top_parent_relids) + innerrelids = innerrel->top_parent_relids; + else + innerrelids = innerrel->relids; + + if (outerrel->top_parent_relids) + outerrelids = outerrel->top_parent_relids; + else + outerrelids = outerrel->relids; + /* * Check to see if proposed path is still parameterized, and reject if the * parameterization wouldn't be sensible --- unless allow_star_schema_join @@ -387,6 +424,27 @@ try_nestloop_path(PlannerInfo *root, workspace.startup_cost, workspace.total_cost, pathkeys, required_outer)) { + /* + * If the inner path is parameterized, it is parameterized by the + * topmost parent of the outer rel, not the outer rel itself. Fix + * that. + */ + if (PATH_PARAM_BY_PARENT(inner_path, outer_path->parent)) + { + inner_path = reparameterize_path_by_child(root, inner_path, + outer_path->parent); + + /* + * If we could not translate the path, we can't create nest loop + * path. 
+ */ + if (!inner_path) + { + bms_free(required_outer); + return; + } + } + add_path(joinrel, (Path *) create_nestloop_path(root, joinrel, @@ -432,8 +490,20 @@ try_partial_nestloop_path(PlannerInfo *root, if (inner_path->param_info != NULL) { Relids inner_paramrels = inner_path->param_info->ppi_req_outer; + RelOptInfo *outerrel = outer_path->parent; + Relids outerrelids; + + /* + * The inner and outer paths are parameterized, if at all, by the + * top-level parents, not the child relations, so we must use those + * relids for our parameterization tests. + */ + if (outerrel->top_parent_relids) + outerrelids = outerrel->top_parent_relids; + else + outerrelids = outerrel->relids; - if (!bms_is_subset(inner_paramrels, outer_path->parent->relids)) + if (!bms_is_subset(inner_paramrels, outerrelids)) return; } @@ -446,6 +516,22 @@ try_partial_nestloop_path(PlannerInfo *root, if (!add_partial_path_precheck(joinrel, workspace.total_cost, pathkeys)) return; + /* + * If the inner path is parameterized, it is parameterized by the topmost + * parent of the outer rel, not the outer rel itself. Fix that. + */ + if (PATH_PARAM_BY_PARENT(inner_path, outer_path->parent)) + { + inner_path = reparameterize_path_by_child(root, inner_path, + outer_path->parent); + + /* + * If we could not translate the path, we can't create nest loop path. + */ + if (!inner_path) + return; + } + /* Might be good enough to be worth trying, so let's try it. */ add_partial_path(joinrel, (Path *) create_nestloop_path(root, diff --git a/src/backend/optimizer/path/joinrels.c b/src/backend/optimizer/path/joinrels.c index 6ee23509c5..2b868c52de 100644 --- a/src/backend/optimizer/path/joinrels.c +++ b/src/backend/optimizer/path/joinrels.c @@ -14,10 +14,17 @@ */ #include "postgres.h" +#include "miscadmin.h" +#include "catalog/partition.h" +#include "nodes/relation.h" +#include "optimizer/clauses.h" #include "optimizer/joininfo.h" #include "optimizer/pathnode.h" #include "optimizer/paths.h" +#include "optimizer/prep.h" +#include "optimizer/cost.h" #include "utils/memutils.h" +#include "utils/lsyscache.h" static void make_rels_by_clause_joins(PlannerInfo *root, @@ -29,12 +36,17 @@ static void make_rels_by_clauseless_joins(PlannerInfo *root, static bool has_join_restriction(PlannerInfo *root, RelOptInfo *rel); static bool has_legal_joinclause(PlannerInfo *root, RelOptInfo *rel); static bool is_dummy_rel(RelOptInfo *rel); -static void mark_dummy_rel(RelOptInfo *rel); static bool restriction_is_constant_false(List *restrictlist, bool only_pushed_down); static void populate_joinrel_with_paths(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2, RelOptInfo *joinrel, SpecialJoinInfo *sjinfo, List *restrictlist); +static void try_partition_wise_join(PlannerInfo *root, RelOptInfo *rel1, + RelOptInfo *rel2, RelOptInfo *joinrel, + SpecialJoinInfo *parent_sjinfo, + List *parent_restrictlist); +static int match_expr_to_partition_keys(Expr *expr, RelOptInfo *rel, + bool strict_op); /* @@ -892,6 +904,9 @@ populate_joinrel_with_paths(PlannerInfo *root, RelOptInfo *rel1, elog(ERROR, "unrecognized join type: %d", (int) sjinfo->jointype); break; } + + /* Apply partition-wise join technique, if possible. */ + try_partition_wise_join(root, rel1, rel2, joinrel, sjinfo, restrictlist); } @@ -1197,7 +1212,7 @@ is_dummy_rel(RelOptInfo *rel) * is that the best solution is to explicitly make the dummy path in the same * context the given RelOptInfo is in.
*/ -static void +void mark_dummy_rel(RelOptInfo *rel) { MemoryContext oldcontext; @@ -1268,3 +1283,300 @@ restriction_is_constant_false(List *restrictlist, bool only_pushed_down) } return false; } + +/* + * Assess whether the join between the given two partitioned relations can be + * broken down into joins between matching partitions; a technique called + * "partition-wise join" + * + * Partition-wise join is possible when (a) the joining relations have the + * same partitioning scheme, and (b) there exists an equi-join between the + * partition keys of the two relations. + * + * Partition-wise join is planned as follows (details: optimizer/README). + * + * 1. Create the RelOptInfos for joins between matching partitions, i.e. + * child-joins, and add paths to them. + * + * 2. Construct Append or MergeAppend paths across the set of child joins. + * This second phase is implemented by generate_partition_wise_join_paths(). + * + * The RelOptInfo, SpecialJoinInfo and restrictlist for each child join are + * obtained by translating the respective parent join structures. + */ +static void +try_partition_wise_join(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2, + RelOptInfo *joinrel, SpecialJoinInfo *parent_sjinfo, + List *parent_restrictlist) +{ + int nparts; + int cnt_parts; + + /* Guard against stack overflow due to overly deep partition hierarchy. */ + check_stack_depth(); + + /* Nothing to do if the join relation is not partitioned. */ + if (!IS_PARTITIONED_REL(joinrel)) + return; + + /* + * set_rel_pathlist() may not create paths in children of an empty + * partitioned table and so we can not add paths to child-joins. So, deem + * such a join as unpartitioned. When a partitioned relation is deemed + * empty because all its children are empty, a dummy path will be set in + * each of the children. In such a case we could still consider the join + * as partitioned, but it might not help much. + */ + if (IS_DUMMY_REL(rel1) || IS_DUMMY_REL(rel2)) + return; + + /* + * Since this join relation is partitioned, all the base relations + * participating in this join must be partitioned and so are all the + * intermediate join relations. + */ + Assert(IS_PARTITIONED_REL(rel1) && IS_PARTITIONED_REL(rel2)); + Assert(REL_HAS_ALL_PART_PROPS(rel1) && REL_HAS_ALL_PART_PROPS(rel2)); + + /* + * The partition scheme of the join relation should match that of the + * joining relations. + */ + Assert(joinrel->part_scheme == rel1->part_scheme && + joinrel->part_scheme == rel2->part_scheme); + + /* + * Since we allow partition-wise join only when the partition bounds of + * the joining relations exactly match, the partition bounds of the join + * should match those of the joining relations. + */ + Assert(partition_bounds_equal(joinrel->part_scheme->partnatts, + joinrel->part_scheme->parttyplen, + joinrel->part_scheme->parttypbyval, + joinrel->boundinfo, rel1->boundinfo)); + Assert(partition_bounds_equal(joinrel->part_scheme->partnatts, + joinrel->part_scheme->parttyplen, + joinrel->part_scheme->parttypbyval, + joinrel->boundinfo, rel2->boundinfo)); + + nparts = joinrel->nparts; + + /* Allocate space to hold child-join RelOptInfos, if not already done. */ + if (!joinrel->part_rels) + joinrel->part_rels = + (RelOptInfo **) palloc0(sizeof(RelOptInfo *) * nparts); + + /* + * Create child-join relations for this partitioned join, if those don't + * exist. Add paths to child-joins for a pair of child relations + * corresponding to the given pair of parent relations.
+ */ + for (cnt_parts = 0; cnt_parts < nparts; cnt_parts++) + { + RelOptInfo *child_rel1 = rel1->part_rels[cnt_parts]; + RelOptInfo *child_rel2 = rel2->part_rels[cnt_parts]; + SpecialJoinInfo *child_sjinfo; + List *child_restrictlist; + RelOptInfo *child_joinrel; + Relids child_joinrelids; + AppendRelInfo **appinfos; + int nappinfos; + + /* We should never try to join two overlapping sets of rels. */ + Assert(!bms_overlap(child_rel1->relids, child_rel2->relids)); + child_joinrelids = bms_union(child_rel1->relids, child_rel2->relids); + appinfos = find_appinfos_by_relids(root, child_joinrelids, &nappinfos); + + /* + * Construct SpecialJoinInfo from the parent join relation's + * SpecialJoinInfo. + */ + child_sjinfo = build_child_join_sjinfo(root, parent_sjinfo, + child_rel1->relids, + child_rel2->relids); + + /* + * Construct restrictions applicable to the child join from those + * applicable to the parent join. + */ + child_restrictlist = + (List *) adjust_appendrel_attrs(root, + (Node *) parent_restrictlist, + nappinfos, appinfos); + pfree(appinfos); + + child_joinrel = joinrel->part_rels[cnt_parts]; + if (!child_joinrel) + { + child_joinrel = build_child_join_rel(root, child_rel1, child_rel2, + joinrel, child_restrictlist, + child_sjinfo, + child_sjinfo->jointype); + joinrel->part_rels[cnt_parts] = child_joinrel; + } + + Assert(bms_equal(child_joinrel->relids, child_joinrelids)); + + populate_joinrel_with_paths(root, child_rel1, child_rel2, + child_joinrel, child_sjinfo, + child_restrictlist); + } +} + +/* + * Returns true if there exists an equi-join condition for each pair of + * partition keys from the given relations being joined. + */ +bool +have_partkey_equi_join(RelOptInfo *rel1, RelOptInfo *rel2, JoinType jointype, + List *restrictlist) +{ + PartitionScheme part_scheme = rel1->part_scheme; + ListCell *lc; + int cnt_pks; + bool pk_has_clause[PARTITION_MAX_KEYS]; + bool strict_op; + + /* + * This function should be called when the joining relations have the same + * partitioning scheme. + */ + Assert(rel1->part_scheme == rel2->part_scheme); + Assert(part_scheme); + + memset(pk_has_clause, 0, sizeof(pk_has_clause)); + foreach(lc, restrictlist) + { + RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc); + OpExpr *opexpr; + Expr *expr1; + Expr *expr2; + int ipk1; + int ipk2; + + /* If processing an outer join, only use its own join clauses. */ + if (IS_OUTER_JOIN(jointype) && rinfo->is_pushed_down) + continue; + + /* Skip clauses which can not be used for a join. */ + if (!rinfo->can_join) + continue; + + /* Skip clauses which are not equality conditions. */ + if (!rinfo->mergeopfamilies) + continue; + + opexpr = (OpExpr *) rinfo->clause; + Assert(is_opclause(opexpr)); + + /* + * The equi-join between partition keys is strict if the equi-join + * between at least one pair of partition keys uses a strict operator. + * See the explanation about outer join reordering identity 3 in + * optimizer/README. + */ + strict_op = op_strict(opexpr->opno); + + /* Match the operands to the relation. */ + if (bms_is_subset(rinfo->left_relids, rel1->relids) && + bms_is_subset(rinfo->right_relids, rel2->relids)) + { + expr1 = linitial(opexpr->args); + expr2 = lsecond(opexpr->args); + } + else if (bms_is_subset(rinfo->left_relids, rel2->relids) && + bms_is_subset(rinfo->right_relids, rel1->relids)) + { + expr1 = lsecond(opexpr->args); + expr2 = linitial(opexpr->args); + } + else + continue; + + /* + * Only clauses referencing the partition keys are useful for + * partition-wise join.
+ */ + ipk1 = match_expr_to_partition_keys(expr1, rel1, strict_op); + if (ipk1 < 0) + continue; + ipk2 = match_expr_to_partition_keys(expr2, rel2, strict_op); + if (ipk2 < 0) + continue; + + /* + * If the clause refers to keys at different ordinal positions, it can + * not be used for partition-wise join. + */ + if (ipk1 != ipk2) + continue; + + /* + * The clause allows partition-wise join only if it uses the same + * operator family as the one specified by the partition key. + */ + if (!list_member_oid(rinfo->mergeopfamilies, + part_scheme->partopfamily[ipk1])) + continue; + + /* Mark the partition key as having an equi-join clause. */ + pk_has_clause[ipk1] = true; + } + + /* Check whether every partition key has an equi-join condition. */ + for (cnt_pks = 0; cnt_pks < part_scheme->partnatts; cnt_pks++) + { + if (!pk_has_clause[cnt_pks]) + return false; + } + + return true; +} + +/* + * Find the partition key from the given relation matching the given + * expression. If found, return the index of the partition key, else return -1. + */ +static int +match_expr_to_partition_keys(Expr *expr, RelOptInfo *rel, bool strict_op) +{ + int cnt; + + /* This function should be called only for partitioned relations. */ + Assert(rel->part_scheme); + + /* Remove any relabel decorations. */ + while (IsA(expr, RelabelType)) + expr = (Expr *) (castNode(RelabelType, expr))->arg; + + for (cnt = 0; cnt < rel->part_scheme->partnatts; cnt++) + { + ListCell *lc; + + Assert(rel->partexprs); + foreach(lc, rel->partexprs[cnt]) + { + if (equal(lfirst(lc), expr)) + return cnt; + } + + if (!strict_op) + continue; + + /* + * If it's a strict equi-join, a NULL partition key on one side will + * not join a NULL partition key on the other side. So, rows with a + * NULL partition key from a partition on one side can not join with + * those from a non-matching partition on the other side. So, search + * the nullable partition keys as well.
+ */ + Assert(rel->nullable_partexprs); + foreach(lc, rel->nullable_partexprs[cnt]) + { + if (equal(lfirst(lc), expr)) + return cnt; + } + } + + return -1; +} diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index 28216629aa..792ea84a81 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -250,7 +250,8 @@ static Plan *prepare_sort_from_pathkeys(Plan *lefttree, List *pathkeys, static EquivalenceMember *find_ec_member_for_tle(EquivalenceClass *ec, TargetEntry *tle, Relids relids); -static Sort *make_sort_from_pathkeys(Plan *lefttree, List *pathkeys); +static Sort *make_sort_from_pathkeys(Plan *lefttree, List *pathkeys, + Relids relids); static Sort *make_sort_from_groupcols(List *groupcls, AttrNumber *grpColIdx, Plan *lefttree); @@ -1652,7 +1653,7 @@ create_sort_plan(PlannerInfo *root, SortPath *best_path, int flags) subplan = create_plan_recurse(root, best_path->subpath, flags | CP_SMALL_TLIST); - plan = make_sort_from_pathkeys(subplan, best_path->path.pathkeys); + plan = make_sort_from_pathkeys(subplan, best_path->path.pathkeys, NULL); copy_generic_path_info(&plan->plan, (Path *) best_path); @@ -3771,6 +3772,8 @@ create_mergejoin_plan(PlannerInfo *root, ListCell *lc; ListCell *lop; ListCell *lip; + Path *outer_path = best_path->jpath.outerjoinpath; + Path *inner_path = best_path->jpath.innerjoinpath; /* * MergeJoin can project, so we don't have to demand exact tlists from the @@ -3834,8 +3837,10 @@ create_mergejoin_plan(PlannerInfo *root, */ if (best_path->outersortkeys) { + Relids outer_relids = outer_path->parent->relids; Sort *sort = make_sort_from_pathkeys(outer_plan, - best_path->outersortkeys); + best_path->outersortkeys, + outer_relids); label_sort_with_costsize(root, sort, -1.0); outer_plan = (Plan *) sort; @@ -3846,8 +3851,10 @@ create_mergejoin_plan(PlannerInfo *root, if (best_path->innersortkeys) { + Relids inner_relids = inner_path->parent->relids; Sort *sort = make_sort_from_pathkeys(inner_plan, - best_path->innersortkeys); + best_path->innersortkeys, + inner_relids); label_sort_with_costsize(root, sort, -1.0); inner_plan = (Plan *) sort; @@ -5525,8 +5532,9 @@ make_sort(Plan *lefttree, int numCols, * the output parameters *p_numsortkeys etc. * * When looking for matches to an EquivalenceClass's members, we will only - * consider child EC members if they match 'relids'. This protects against - * possible incorrect matches to child expressions that contain no Vars. + * consider child EC members if they belong to given 'relids'. This protects + * against possible incorrect matches to child expressions that contain no + * Vars. * * If reqColIdx isn't NULL then it contains sort key column numbers that * we should match. This is used when making child plans for a MergeAppend; @@ -5681,11 +5689,11 @@ prepare_sort_from_pathkeys(Plan *lefttree, List *pathkeys, continue; /* - * Ignore child members unless they match the rel being + * Ignore child members unless they belong to the rel being * sorted. */ if (em->em_is_child && - !bms_equal(em->em_relids, relids)) + !bms_is_subset(em->em_relids, relids)) continue; sortexpr = em->em_expr; @@ -5769,7 +5777,7 @@ prepare_sort_from_pathkeys(Plan *lefttree, List *pathkeys, * find_ec_member_for_tle * Locate an EquivalenceClass member matching the given TLE, if any * - * Child EC members are ignored unless they match 'relids'. + * Child EC members are ignored unless they belong to given 'relids'. 
*/ static EquivalenceMember * find_ec_member_for_tle(EquivalenceClass *ec, @@ -5797,10 +5805,10 @@ find_ec_member_for_tle(EquivalenceClass *ec, continue; /* - * Ignore child members unless they match the rel being sorted. + * Ignore child members unless they belong to the rel being sorted. */ if (em->em_is_child && - !bms_equal(em->em_relids, relids)) + !bms_is_subset(em->em_relids, relids)) continue; /* Match if same expression (after stripping relabel) */ @@ -5821,9 +5829,10 @@ find_ec_member_for_tle(EquivalenceClass *ec, * * 'lefttree' is the node which yields input tuples * 'pathkeys' is the list of pathkeys by which the result is to be sorted + * 'relids' is the set of relations required by prepare_sort_from_pathkeys() */ static Sort * -make_sort_from_pathkeys(Plan *lefttree, List *pathkeys) +make_sort_from_pathkeys(Plan *lefttree, List *pathkeys, Relids relids) { int numsortkeys; AttrNumber *sortColIdx; @@ -5833,7 +5842,7 @@ make_sort_from_pathkeys(Plan *lefttree, List *pathkeys) /* Compute sort column info, and adjust lefttree as needed */ lefttree = prepare_sort_from_pathkeys(lefttree, pathkeys, - NULL, + relids, NULL, false, &numsortkeys, diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index e7ac11e9bb..ecdd7280eb 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -6150,3 +6150,25 @@ get_partitioned_child_rels(PlannerInfo *root, Index rti) return result; } + +/* + * get_partitioned_child_rels_for_join + * Build and return a list containing the RTI of every partitioned + * relation which is a child of some rel included in the join. + */ +List * +get_partitioned_child_rels_for_join(PlannerInfo *root, Relids join_relids) +{ + List *result = NIL; + ListCell *l; + + foreach(l, root->pcinfo_list) + { + PartitionedChildRelInfo *pc = lfirst(l); + + if (bms_is_member(pc->parent_relid, join_relids)) + result = list_concat(result, list_copy(pc->child_rels)); + } + + return result; +} diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c index dee4414cec..1382b67974 100644 --- a/src/backend/optimizer/plan/setrefs.c +++ b/src/backend/optimizer/plan/setrefs.c @@ -41,6 +41,9 @@ typedef struct int num_vars; /* number of plain Var tlist entries */ bool has_ph_vars; /* are there PlaceHolderVar entries? */ bool has_non_vars; /* are there other entries? */ + bool has_conv_whole_rows; /* are there ConvertRowtypeExpr + * entries encapsulating a whole-row + * Var? 
*/ tlist_vinfo vars[FLEXIBLE_ARRAY_MEMBER]; /* has num_vars entries */ } indexed_tlist; @@ -139,6 +142,7 @@ static List *set_returning_clause_references(PlannerInfo *root, int rtoffset); static bool extract_query_dependencies_walker(Node *node, PlannerInfo *context); +static bool is_converted_whole_row_reference(Node *node); /***************************************************************************** * @@ -1944,6 +1948,7 @@ build_tlist_index(List *tlist) itlist->tlist = tlist; itlist->has_ph_vars = false; itlist->has_non_vars = false; + itlist->has_conv_whole_rows = false; /* Find the Vars and fill in the index array */ vinfo = itlist->vars; @@ -1962,6 +1967,8 @@ } else if (tle->expr && IsA(tle->expr, PlaceHolderVar)) itlist->has_ph_vars = true; + else if (is_converted_whole_row_reference((Node *) tle->expr)) + itlist->has_conv_whole_rows = true; else itlist->has_non_vars = true; } @@ -1977,7 +1984,10 @@ * This is like build_tlist_index, but we only index tlist entries that * are Vars belonging to some rel other than the one specified. We will set * has_ph_vars (allowing PlaceHolderVars to be matched), but not has_non_vars - * (so nothing other than Vars and PlaceHolderVars can be matched). + * (so nothing other than Vars and PlaceHolderVars can be matched). In the + * case of DML, where this function will be used, the RETURNING lists from + * child relations will be appended as for a simple append relation. That does + * not require fixing ConvertRowtypeExpr references, so those are not + * considered here. */ static indexed_tlist * build_tlist_index_other_vars(List *tlist, Index ignore_rel) @@ -1994,6 +2004,7 @@ build_tlist_index_other_vars(List *tlist, Index ignore_rel) itlist->tlist = tlist; itlist->has_ph_vars = false; itlist->has_non_vars = false; + itlist->has_conv_whole_rows = false; /* Find the desired Vars and fill in the index array */ vinfo = itlist->vars; @@ -2197,6 +2208,7 @@ static Node * fix_join_expr_mutator(Node *node, fix_join_expr_context *context) { Var *newvar; + bool converted_whole_row; if (node == NULL) return NULL; @@ -2266,8 +2278,12 @@ } if (IsA(node, Param)) return fix_param_node(context->root, (Param *) node); + /* Try matching more complex expressions too, if tlists have any */ - if (context->outer_itlist && context->outer_itlist->has_non_vars) + converted_whole_row = is_converted_whole_row_reference(node); + if (context->outer_itlist && + (context->outer_itlist->has_non_vars || + (context->outer_itlist->has_conv_whole_rows && converted_whole_row))) { newvar = search_indexed_tlist_for_non_var((Expr *) node, context->outer_itlist, @@ -2275,7 +2291,9 @@ if (newvar) return (Node *) newvar; } - if (context->inner_itlist && context->inner_itlist->has_non_vars) + if (context->inner_itlist && + (context->inner_itlist->has_non_vars || + (context->inner_itlist->has_conv_whole_rows && converted_whole_row))) { newvar = search_indexed_tlist_for_non_var((Expr *) node, context->inner_itlist, @@ -2395,7 +2413,9 @@ fix_upper_expr_mutator(Node *node, fix_upper_expr_context *context) /* If no match, just fall through to process it normally */ } /* Try matching more complex expressions too, if tlist has any */ - if (context->subplan_itlist->has_non_vars) + if (context->subplan_itlist->has_non_vars || + (context->subplan_itlist->has_conv_whole_rows && + is_converted_whole_row_reference(node)))
{ newvar = search_indexed_tlist_for_non_var((Expr *) node, context->subplan_itlist, @@ -2602,3 +2622,33 @@ extract_query_dependencies_walker(Node *node, PlannerInfo *context) return expression_tree_walker(node, extract_query_dependencies_walker, (void *) context); } + +/* + * is_converted_whole_row_reference + * If the given node is a ConvertRowtypeExpr encapsulating a whole-row + * reference as implicit cast, return true. Otherwise return false. + */ +static bool +is_converted_whole_row_reference(Node *node) +{ + ConvertRowtypeExpr *convexpr; + + if (!node || !IsA(node, ConvertRowtypeExpr)) + return false; + + /* Traverse nested ConvertRowtypeExpr's. */ + convexpr = castNode(ConvertRowtypeExpr, node); + while (convexpr->convertformat == COERCE_IMPLICIT_CAST && + IsA(convexpr->arg, ConvertRowtypeExpr)) + convexpr = castNode(ConvertRowtypeExpr, convexpr->arg); + + if (IsA(convexpr->arg, Var)) + { + Var *var = castNode(Var, convexpr->arg); + + if (var->varattno == 0) + return true; + } + + return false; +} diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index 3e0c3de86d..1c84a2cb28 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -2269,6 +2269,59 @@ adjust_child_relids(Relids relids, int nappinfos, AppendRelInfo **appinfos) return relids; } +/* + * Replace any relid present in top_parent_relids with its child in + * child_relids. Members of child_relids can be multiple levels below top + * parent in the partition hierarchy. + */ +Relids +adjust_child_relids_multilevel(PlannerInfo *root, Relids relids, + Relids child_relids, Relids top_parent_relids) +{ + AppendRelInfo **appinfos; + int nappinfos; + Relids parent_relids = NULL; + Relids result; + Relids tmp_result = NULL; + int cnt; + + /* + * If the given relids set doesn't contain any of the top parent relids, + * it will remain unchanged. + */ + if (!bms_overlap(relids, top_parent_relids)) + return relids; + + appinfos = find_appinfos_by_relids(root, child_relids, &nappinfos); + + /* Construct relids set for the immediate parent of the given child. */ + for (cnt = 0; cnt < nappinfos; cnt++) + { + AppendRelInfo *appinfo = appinfos[cnt]; + + parent_relids = bms_add_member(parent_relids, appinfo->parent_relid); + } + + /* Recurse if immediate parent is not the top parent. */ + if (!bms_equal(parent_relids, top_parent_relids)) + { + tmp_result = adjust_child_relids_multilevel(root, relids, + parent_relids, + top_parent_relids); + relids = tmp_result; + } + + result = adjust_child_relids(relids, nappinfos, appinfos); + + /* Free memory consumed by any intermediate result. */ + if (tmp_result) + bms_free(tmp_result); + bms_free(parent_relids); + pfree(appinfos); + + return result; +} + /* * Adjust the targetlist entries of an inherited UPDATE operation * @@ -2408,6 +2461,48 @@ adjust_appendrel_attrs_multilevel(PlannerInfo *root, Node *node, return node; } +/* + * Construct the SpecialJoinInfo for a child-join by translating + * SpecialJoinInfo for the join between parents. left_relids and right_relids + * are the relids of left and right side of the join respectively. 
+ */ +SpecialJoinInfo * +build_child_join_sjinfo(PlannerInfo *root, SpecialJoinInfo *parent_sjinfo, + Relids left_relids, Relids right_relids) +{ + SpecialJoinInfo *sjinfo = makeNode(SpecialJoinInfo); + AppendRelInfo **left_appinfos; + int left_nappinfos; + AppendRelInfo **right_appinfos; + int right_nappinfos; + + memcpy(sjinfo, parent_sjinfo, sizeof(SpecialJoinInfo)); + left_appinfos = find_appinfos_by_relids(root, left_relids, + &left_nappinfos); + right_appinfos = find_appinfos_by_relids(root, right_relids, + &right_nappinfos); + + sjinfo->min_lefthand = adjust_child_relids(sjinfo->min_lefthand, + left_nappinfos, left_appinfos); + sjinfo->min_righthand = adjust_child_relids(sjinfo->min_righthand, + right_nappinfos, + right_appinfos); + sjinfo->syn_lefthand = adjust_child_relids(sjinfo->syn_lefthand, + left_nappinfos, left_appinfos); + sjinfo->syn_righthand = adjust_child_relids(sjinfo->syn_righthand, + right_nappinfos, + right_appinfos); + sjinfo->semi_rhs_exprs = (List *) adjust_appendrel_attrs(root, + (Node *) sjinfo->semi_rhs_exprs, + right_nappinfos, + right_appinfos); + + pfree(left_appinfos); + pfree(right_appinfos); + + return sjinfo; +} + /* * find_appinfos_by_relids * Find AppendRelInfo structures for all relations specified by relids. diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 26567cb7f6..2d491eb0ba 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -18,15 +18,20 @@ #include "miscadmin.h" #include "nodes/nodeFuncs.h" +#include "nodes/extensible.h" #include "optimizer/clauses.h" #include "optimizer/cost.h" #include "optimizer/pathnode.h" #include "optimizer/paths.h" #include "optimizer/planmain.h" +#include "optimizer/prep.h" #include "optimizer/restrictinfo.h" +#include "optimizer/tlist.h" #include "optimizer/var.h" #include "parser/parsetree.h" +#include "foreign/fdwapi.h" #include "utils/lsyscache.h" +#include "utils/memutils.h" #include "utils/selfuncs.h" @@ -46,6 +51,9 @@ typedef enum #define STD_FUZZ_FACTOR 1.01 static List *translate_sub_tlist(List *tlist, int relid); +static List *reparameterize_pathlist_by_child(PlannerInfo *root, + List *pathlist, + RelOptInfo *child_rel); /***************************************************************************** @@ -3429,3 +3437,358 @@ reparameterize_path(PlannerInfo *root, Path *path, } return NULL; } + +/* + * reparameterize_path_by_child + * Given a path parameterized by the parent of the given child relation, + * translate the path to be parameterized by the given child relation. + * + * The function creates a new path of the same type as the given path, but + * parameterized by the given child relation. Most fields from the original + * path can simply be flat-copied, but any expressions must be adjusted to + * refer to the correct varnos, and any paths must be recursively + * reparameterized. Other fields that refer to specific relids also need + * adjustment. + * + * The cost, number of rows, width and parallel path properties depend upon + * path->parent, which does not change during the translation. Hence those + * members are copied as they are. + * + * If the given path can not be reparameterized, the function returns NULL. 
+ */ +Path * +reparameterize_path_by_child(PlannerInfo *root, Path *path, + RelOptInfo *child_rel) +{ + +#define FLAT_COPY_PATH(newnode, node, nodetype) \ + ( (newnode) = makeNode(nodetype), \ + memcpy((newnode), (node), sizeof(nodetype)) ) + +#define ADJUST_CHILD_ATTRS(node) \ + ((node) = \ + (List *) adjust_appendrel_attrs_multilevel(root, (Node *) (node), \ + child_rel->relids, \ + child_rel->top_parent_relids)) + +#define REPARAMETERIZE_CHILD_PATH(path) \ +do { \ + (path) = reparameterize_path_by_child(root, (path), child_rel); \ + if ((path) == NULL) \ + return NULL; \ +} while(0); + +#define REPARAMETERIZE_CHILD_PATH_LIST(pathlist) \ +do { \ + if ((pathlist) != NIL) \ + { \ + (pathlist) = reparameterize_pathlist_by_child(root, (pathlist), \ + child_rel); \ + if ((pathlist) == NIL) \ + return NULL; \ + } \ +} while(0); + + Path *new_path; + ParamPathInfo *new_ppi; + ParamPathInfo *old_ppi; + Relids required_outer; + + /* + * If the path is not parameterized by parent of the given relation, it + * doesn't need reparameterization. + */ + if (!path->param_info || + !bms_overlap(PATH_REQ_OUTER(path), child_rel->top_parent_relids)) + return path; + + /* Reparameterize a copy of given path. */ + switch (nodeTag(path)) + { + case T_Path: + FLAT_COPY_PATH(new_path, path, Path); + break; + + case T_IndexPath: + { + IndexPath *ipath; + + FLAT_COPY_PATH(ipath, path, IndexPath); + ADJUST_CHILD_ATTRS(ipath->indexclauses); + ADJUST_CHILD_ATTRS(ipath->indexquals); + new_path = (Path *) ipath; + } + break; + + case T_BitmapHeapPath: + { + BitmapHeapPath *bhpath; + + FLAT_COPY_PATH(bhpath, path, BitmapHeapPath); + REPARAMETERIZE_CHILD_PATH(bhpath->bitmapqual); + new_path = (Path *) bhpath; + } + break; + + case T_BitmapAndPath: + { + BitmapAndPath *bapath; + + FLAT_COPY_PATH(bapath, path, BitmapAndPath); + REPARAMETERIZE_CHILD_PATH_LIST(bapath->bitmapquals); + new_path = (Path *) bapath; + } + break; + + case T_BitmapOrPath: + { + BitmapOrPath *bopath; + + FLAT_COPY_PATH(bopath, path, BitmapOrPath); + REPARAMETERIZE_CHILD_PATH_LIST(bopath->bitmapquals); + new_path = (Path *) bopath; + } + break; + + case T_TidPath: + { + TidPath *tpath; + + /* + * TidPath contains tidquals, which do not contain any + * external parameters per create_tidscan_path(). So don't + * bother to translate those. + */ + FLAT_COPY_PATH(tpath, path, TidPath); + new_path = (Path *) tpath; + } + break; + + case T_ForeignPath: + { + ForeignPath *fpath; + ReparameterizeForeignPathByChild_function rfpc_func; + + FLAT_COPY_PATH(fpath, path, ForeignPath); + if (fpath->fdw_outerpath) + REPARAMETERIZE_CHILD_PATH(fpath->fdw_outerpath); + + /* Hand over to FDW if needed. 
*/ + rfpc_func = + path->parent->fdwroutine->ReparameterizeForeignPathByChild; + if (rfpc_func) + fpath->fdw_private = rfpc_func(root, fpath->fdw_private, + child_rel); + new_path = (Path *) fpath; + } + break; + + case T_CustomPath: + { + CustomPath *cpath; + + FLAT_COPY_PATH(cpath, path, CustomPath); + REPARAMETERIZE_CHILD_PATH_LIST(cpath->custom_paths); + if (cpath->methods && + cpath->methods->ReparameterizeCustomPathByChild) + cpath->custom_private = + cpath->methods->ReparameterizeCustomPathByChild(root, + cpath->custom_private, + child_rel); + new_path = (Path *) cpath; + } + break; + + case T_NestPath: + { + JoinPath *jpath; + + FLAT_COPY_PATH(jpath, path, NestPath); + + REPARAMETERIZE_CHILD_PATH(jpath->outerjoinpath); + REPARAMETERIZE_CHILD_PATH(jpath->innerjoinpath); + ADJUST_CHILD_ATTRS(jpath->joinrestrictinfo); + new_path = (Path *) jpath; + } + break; + + case T_MergePath: + { + JoinPath *jpath; + MergePath *mpath; + + FLAT_COPY_PATH(mpath, path, MergePath); + + jpath = (JoinPath *) mpath; + REPARAMETERIZE_CHILD_PATH(jpath->outerjoinpath); + REPARAMETERIZE_CHILD_PATH(jpath->innerjoinpath); + ADJUST_CHILD_ATTRS(jpath->joinrestrictinfo); + ADJUST_CHILD_ATTRS(mpath->path_mergeclauses); + new_path = (Path *) mpath; + } + break; + + case T_HashPath: + { + JoinPath *jpath; + HashPath *hpath; + + FLAT_COPY_PATH(hpath, path, HashPath); + + jpath = (JoinPath *) hpath; + REPARAMETERIZE_CHILD_PATH(jpath->outerjoinpath); + REPARAMETERIZE_CHILD_PATH(jpath->innerjoinpath); + ADJUST_CHILD_ATTRS(jpath->joinrestrictinfo); + ADJUST_CHILD_ATTRS(hpath->path_hashclauses); + new_path = (Path *) hpath; + } + break; + + case T_AppendPath: + { + AppendPath *apath; + + FLAT_COPY_PATH(apath, path, AppendPath); + REPARAMETERIZE_CHILD_PATH_LIST(apath->subpaths); + new_path = (Path *) apath; + } + break; + + case T_MergeAppend: + { + MergeAppendPath *mapath; + + FLAT_COPY_PATH(mapath, path, MergeAppendPath); + REPARAMETERIZE_CHILD_PATH_LIST(mapath->subpaths); + new_path = (Path *) mapath; + } + break; + + case T_MaterialPath: + { + MaterialPath *mpath; + + FLAT_COPY_PATH(mpath, path, MaterialPath); + REPARAMETERIZE_CHILD_PATH(mpath->subpath); + new_path = (Path *) mpath; + } + break; + + case T_UniquePath: + { + UniquePath *upath; + + FLAT_COPY_PATH(upath, path, UniquePath); + REPARAMETERIZE_CHILD_PATH(upath->subpath); + ADJUST_CHILD_ATTRS(upath->uniq_exprs); + new_path = (Path *) upath; + } + break; + + case T_GatherPath: + { + GatherPath *gpath; + + FLAT_COPY_PATH(gpath, path, GatherPath); + REPARAMETERIZE_CHILD_PATH(gpath->subpath); + new_path = (Path *) gpath; + } + break; + + case T_GatherMergePath: + { + GatherMergePath *gmpath; + + FLAT_COPY_PATH(gmpath, path, GatherMergePath); + REPARAMETERIZE_CHILD_PATH(gmpath->subpath); + new_path = (Path *) gmpath; + } + break; + + default: + + /* We don't know how to reparameterize this path. */ + return NULL; + } + + /* + * Adjust the parameterization information, which refers to the topmost + * parent. The topmost parent can be multiple levels away from the given + * child, hence use multi-level expression adjustment routines. + */ + old_ppi = new_path->param_info; + required_outer = + adjust_child_relids_multilevel(root, old_ppi->ppi_req_outer, + child_rel->relids, + child_rel->top_parent_relids); + + /* If we already have a PPI for this parameterization, just return it */ + new_ppi = find_param_path_info(new_path->parent, required_outer); + + /* + * If not, build a new one and link it to the list of PPIs. 
For the same + * reason as explained in mark_dummy_rel(), allocate new PPI in the same + * context the given RelOptInfo is in. + */ + if (new_ppi == NULL) + { + MemoryContext oldcontext; + RelOptInfo *rel = path->parent; + + oldcontext = MemoryContextSwitchTo(GetMemoryChunkContext(rel)); + + new_ppi = makeNode(ParamPathInfo); + new_ppi->ppi_req_outer = bms_copy(required_outer); + new_ppi->ppi_rows = old_ppi->ppi_rows; + new_ppi->ppi_clauses = old_ppi->ppi_clauses; + ADJUST_CHILD_ATTRS(new_ppi->ppi_clauses); + rel->ppilist = lappend(rel->ppilist, new_ppi); + + MemoryContextSwitchTo(oldcontext); + } + bms_free(required_outer); + + new_path->param_info = new_ppi; + + /* + * Adjust the path target if the parent of the outer relation is + * referenced in the targetlist. This can happen when only the parent of + * outer relation is laterally referenced in this relation. + */ + if (bms_overlap(path->parent->lateral_relids, + child_rel->top_parent_relids)) + { + new_path->pathtarget = copy_pathtarget(new_path->pathtarget); + ADJUST_CHILD_ATTRS(new_path->pathtarget->exprs); + } + + return new_path; +} + +/* + * reparameterize_pathlist_by_child + * Helper function to reparameterize a list of paths by given child rel. + */ +static List * +reparameterize_pathlist_by_child(PlannerInfo *root, + List *pathlist, + RelOptInfo *child_rel) +{ + ListCell *lc; + List *result = NIL; + + foreach(lc, pathlist) + { + Path *path = reparameterize_path_by_child(root, lfirst(lc), + child_rel); + if (path == NULL) + { + list_free(result); + return NIL; + } + + result = lappend(result, path); + } + + return result; +} diff --git a/src/backend/optimizer/util/placeholder.c b/src/backend/optimizer/util/placeholder.c index 970542dde5..3343521b97 100644 --- a/src/backend/optimizer/util/placeholder.c +++ b/src/backend/optimizer/util/placeholder.c @@ -20,6 +20,7 @@ #include "optimizer/pathnode.h" #include "optimizer/placeholder.h" #include "optimizer/planmain.h" +#include "optimizer/prep.h" #include "optimizer/var.h" #include "utils/lsyscache.h" @@ -414,6 +415,10 @@ add_placeholders_to_joinrel(PlannerInfo *root, RelOptInfo *joinrel, Relids relids = joinrel->relids; ListCell *lc; + /* This function is called only on the parent relations. */ + Assert(!IS_OTHER_REL(joinrel) && !IS_OTHER_REL(outer_rel) && + !IS_OTHER_REL(inner_rel)); + foreach(lc, root->placeholder_list) { PlaceHolderInfo *phinfo = (PlaceHolderInfo *) lfirst(lc); @@ -459,3 +464,56 @@ add_placeholders_to_joinrel(PlannerInfo *root, RelOptInfo *joinrel, } } } + +/* + * add_placeholders_to_child_joinrel + * Translate the PHVs in parent's targetlist and add them to the child's + * targetlist. Also adjust the cost + */ +void +add_placeholders_to_child_joinrel(PlannerInfo *root, RelOptInfo *childrel, + RelOptInfo *parentrel) +{ + ListCell *lc; + AppendRelInfo **appinfos; + int nappinfos; + + Assert(IS_JOIN_REL(childrel) && IS_JOIN_REL(parentrel)); + Assert(IS_OTHER_REL(childrel)); + + /* Nothing to do if no PHVs. */ + if (root->placeholder_list == NIL) + return; + + appinfos = find_appinfos_by_relids(root, childrel->relids, &nappinfos); + foreach(lc, parentrel->reltarget->exprs) + { + PlaceHolderVar *phv = lfirst(lc); + + if (IsA(phv, PlaceHolderVar)) + { + /* + * In case the placeholder Var refers to any of the parent + * relations, translate it to refer to the corresponding child. 
+ */ + if (bms_overlap(phv->phrels, parentrel->relids) && + childrel->reloptkind == RELOPT_OTHER_JOINREL) + { + phv = (PlaceHolderVar *) adjust_appendrel_attrs(root, + (Node *) phv, + nappinfos, + appinfos); + } + + childrel->reltarget->exprs = lappend(childrel->reltarget->exprs, + phv); + } + } + + /* Adjust the cost and width of the child's targetlist. */ + childrel->reltarget->cost.startup = parentrel->reltarget->cost.startup; + childrel->reltarget->cost.per_tuple = parentrel->reltarget->cost.per_tuple; + childrel->reltarget->width = parentrel->reltarget->width; + + pfree(appinfos); +} diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index cac46bedf9..93cc7576a0 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -71,7 +71,8 @@ static List *get_relation_statistics(RelOptInfo *rel, Relation relation); static void set_relation_partition_info(PlannerInfo *root, RelOptInfo *rel, Relation relation); static PartitionScheme find_partition_scheme(PlannerInfo *root, Relation rel); -static List **build_baserel_partition_key_exprs(Relation relation, Index varno); +static void set_baserel_partition_key_exprs(Relation relation, + RelOptInfo *rel); /* * get_relation_info - @@ -1832,7 +1833,7 @@ set_relation_partition_info(PlannerInfo *root, RelOptInfo *rel, Assert(partdesc != NULL && rel->part_scheme != NULL); rel->boundinfo = partdesc->boundinfo; rel->nparts = partdesc->nparts; - rel->partexprs = build_baserel_partition_key_exprs(relation, rel->relid); + set_baserel_partition_key_exprs(relation, rel); } /* @@ -1907,21 +1908,24 @@ find_partition_scheme(PlannerInfo *root, Relation relation) } /* - * build_baserel_partition_key_exprs + * set_baserel_partition_key_exprs * - * Collects partition key expressions for a given base relation. Any single - * column partition keys are converted to Var nodes. All Var nodes are set - * to the given varno. The partition key expressions are returned as an array - * of single element lists to be stored in RelOptInfo of the base relation. + * Builds partition key expressions for the given base relation and sets them + * in the given RelOptInfo. Any single column partition keys are converted to + * Var nodes. All Var nodes are restamped with the relid of the given + * relation. */ -static List ** -build_baserel_partition_key_exprs(Relation relation, Index varno) +static void +set_baserel_partition_key_exprs(Relation relation, + RelOptInfo *rel) { PartitionKey partkey = RelationGetPartitionKey(relation); int partnatts; int cnt; List **partexprs; ListCell *lc; + Index varno = rel->relid; + + Assert(IS_SIMPLE_REL(rel) && rel->relid > 0); /* A partitioned table should have a partition key. */ Assert(partkey != NULL); @@ -1959,5 +1963,13 @@ build_baserel_partition_key_exprs(Relation relation, Index varno) partexprs[cnt] = list_make1(partexpr); } - return partexprs; + rel->partexprs = partexprs; + + /* + * A base relation can not have nullable partition key expressions. We + * still allocate an array of empty expression lists to keep the partition + * key expression handling code simple. See build_joinrel_partition_info() + * and match_expr_to_partition_keys().
+ */ + rel->nullable_partexprs = (List **) palloc0(sizeof(List *) * partnatts); } diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c index 077e89ae43..3bd1063aa8 100644 --- a/src/backend/optimizer/util/relnode.c +++ b/src/backend/optimizer/util/relnode.c @@ -17,12 +17,14 @@ #include #include "miscadmin.h" +#include "catalog/partition.h" #include "optimizer/clauses.h" #include "optimizer/cost.h" #include "optimizer/pathnode.h" #include "optimizer/paths.h" #include "optimizer/placeholder.h" #include "optimizer/plancat.h" +#include "optimizer/prep.h" #include "optimizer/restrictinfo.h" #include "optimizer/tlist.h" #include "utils/hsearch.h" @@ -52,6 +54,9 @@ static List *subbuild_joinrel_joinlist(RelOptInfo *joinrel, static void set_foreign_rel_properties(RelOptInfo *joinrel, RelOptInfo *outer_rel, RelOptInfo *inner_rel); static void add_join_rel(PlannerInfo *root, RelOptInfo *joinrel); +static void build_joinrel_partition_info(RelOptInfo *joinrel, + RelOptInfo *outer_rel, RelOptInfo *inner_rel, + List *restrictlist, JoinType jointype); /* @@ -151,6 +156,7 @@ build_simple_rel(PlannerInfo *root, int relid, RelOptInfo *parent) rel->boundinfo = NULL; rel->part_rels = NULL; rel->partexprs = NULL; + rel->nullable_partexprs = NULL; /* * Pass top parent's relids down the inheritance hierarchy. If the parent @@ -481,6 +487,9 @@ build_join_rel(PlannerInfo *root, RelOptInfo *joinrel; List *restrictlist; + /* This function should be used only for join between parents. */ + Assert(!IS_OTHER_REL(outer_rel) && !IS_OTHER_REL(inner_rel)); + /* * See if we already have a joinrel for this set of base rels. */ @@ -560,6 +569,7 @@ build_join_rel(PlannerInfo *root, joinrel->boundinfo = NULL; joinrel->part_rels = NULL; joinrel->partexprs = NULL; + joinrel->nullable_partexprs = NULL; /* Compute information relevant to the foreign relations. */ set_foreign_rel_properties(joinrel, outer_rel, inner_rel); @@ -605,6 +615,10 @@ build_join_rel(PlannerInfo *root, */ joinrel->has_eclass_joins = has_relevant_eclass_joinclause(root, joinrel); + /* Store the partition information. */ + build_joinrel_partition_info(joinrel, outer_rel, inner_rel, restrictlist, + sjinfo->jointype); + /* * Set estimates of the joinrel's size. */ @@ -650,6 +664,138 @@ build_join_rel(PlannerInfo *root, return joinrel; } +/* + * build_child_join_rel + * Builds RelOptInfo representing join between given two child relations. + * + * 'outer_rel' and 'inner_rel' are the RelOptInfos of child relations being + * joined + * 'parent_joinrel' is the RelOptInfo representing the join between parent + * relations. Some of the members of new RelOptInfo are produced by + * translating corresponding members of this RelOptInfo + * 'sjinfo': child-join context info + * 'restrictlist': list of RestrictInfo nodes that apply to this particular + * pair of joinable relations + * 'join_appinfos': list of AppendRelInfo nodes for base child relations + * involved in this join + */ +RelOptInfo * +build_child_join_rel(PlannerInfo *root, RelOptInfo *outer_rel, + RelOptInfo *inner_rel, RelOptInfo *parent_joinrel, + List *restrictlist, SpecialJoinInfo *sjinfo, + JoinType jointype) +{ + RelOptInfo *joinrel = makeNode(RelOptInfo); + AppendRelInfo **appinfos; + int nappinfos; + + /* Only joins between "other" relations land here. 
+/*
+ * build_child_join_rel
+ *	  Builds a RelOptInfo representing a join between the given two child
+ *	  relations.
+ *
+ * 'outer_rel' and 'inner_rel' are the RelOptInfos of the child relations
+ *		being joined
+ * 'parent_joinrel' is the RelOptInfo representing the join between the
+ *		parent relations. Some members of the new RelOptInfo are produced by
+ *		translating the corresponding members of this RelOptInfo
+ * 'sjinfo': child-join context info
+ * 'restrictlist': list of RestrictInfo nodes that apply to this particular
+ *		pair of joinable relations
+ */
+RelOptInfo *
+build_child_join_rel(PlannerInfo *root, RelOptInfo *outer_rel,
+					 RelOptInfo *inner_rel, RelOptInfo *parent_joinrel,
+					 List *restrictlist, SpecialJoinInfo *sjinfo,
+					 JoinType jointype)
+{
+	RelOptInfo *joinrel = makeNode(RelOptInfo);
+	AppendRelInfo **appinfos;
+	int			nappinfos;
+
+	/* Only joins between "other" relations land here. */
+	Assert(IS_OTHER_REL(outer_rel) && IS_OTHER_REL(inner_rel));
+
+	joinrel->reloptkind = RELOPT_OTHER_JOINREL;
+	joinrel->relids = bms_union(outer_rel->relids, inner_rel->relids);
+	joinrel->rows = 0;
+	/* cheap startup cost is interesting iff not all tuples to be retrieved */
+	joinrel->consider_startup = (root->tuple_fraction > 0);
+	joinrel->consider_param_startup = false;
+	joinrel->consider_parallel = false;
+	joinrel->reltarget = create_empty_pathtarget();
+	joinrel->pathlist = NIL;
+	joinrel->ppilist = NIL;
+	joinrel->partial_pathlist = NIL;
+	joinrel->cheapest_startup_path = NULL;
+	joinrel->cheapest_total_path = NULL;
+	joinrel->cheapest_unique_path = NULL;
+	joinrel->cheapest_parameterized_paths = NIL;
+	joinrel->direct_lateral_relids = NULL;
+	joinrel->lateral_relids = NULL;
+	joinrel->relid = 0;			/* indicates not a baserel */
+	joinrel->rtekind = RTE_JOIN;
+	joinrel->min_attr = 0;
+	joinrel->max_attr = 0;
+	joinrel->attr_needed = NULL;
+	joinrel->attr_widths = NULL;
+	joinrel->lateral_vars = NIL;
+	joinrel->lateral_referencers = NULL;
+	joinrel->indexlist = NIL;
+	joinrel->pages = 0;
+	joinrel->tuples = 0;
+	joinrel->allvisfrac = 0;
+	joinrel->subroot = NULL;
+	joinrel->subplan_params = NIL;
+	joinrel->serverid = InvalidOid;
+	joinrel->userid = InvalidOid;
+	joinrel->useridiscurrent = false;
+	joinrel->fdwroutine = NULL;
+	joinrel->fdw_private = NULL;
+	joinrel->baserestrictinfo = NIL;
+	joinrel->baserestrictcost.startup = 0;
+	joinrel->baserestrictcost.per_tuple = 0;
+	joinrel->joininfo = NIL;
+	joinrel->has_eclass_joins = false;
+	joinrel->top_parent_relids = NULL;
+	joinrel->part_scheme = NULL;
+	joinrel->part_rels = NULL;
+	joinrel->partexprs = NULL;
+	joinrel->nullable_partexprs = NULL;
+
+	joinrel->top_parent_relids = bms_union(outer_rel->top_parent_relids,
+										   inner_rel->top_parent_relids);
+
+	/* Compute information relevant to foreign relations. */
+	set_foreign_rel_properties(joinrel, outer_rel, inner_rel);
+
+	/* Build the targetlist. */
+	build_joinrel_tlist(root, joinrel, outer_rel);
+	build_joinrel_tlist(root, joinrel, inner_rel);
+	/* Add placeholder variables. */
+	add_placeholders_to_child_joinrel(root, joinrel, parent_joinrel);
+
+	/* Construct the joininfo list. */
+	appinfos = find_appinfos_by_relids(root, joinrel->relids, &nappinfos);
+	joinrel->joininfo = (List *) adjust_appendrel_attrs(root,
+														(Node *) parent_joinrel->joininfo,
+														nappinfos,
+														appinfos);
+	pfree(appinfos);
+
+	/*
+	 * Lateral relids referenced in the child join are the same as those
+	 * referenced in the parent relation. Throw away any partial results
+	 * computed while building the targetlist.
+	 */
+	bms_free(joinrel->direct_lateral_relids);
+	bms_free(joinrel->lateral_relids);
+	joinrel->direct_lateral_relids = (Relids) bms_copy(parent_joinrel->direct_lateral_relids);
+	joinrel->lateral_relids = (Relids) bms_copy(parent_joinrel->lateral_relids);
+
+	/*
+	 * If the parent joinrel has pending equivalence classes, so does the
+	 * child.
+	 */
+	joinrel->has_eclass_joins = parent_joinrel->has_eclass_joins;
+
+	/* Is the join between partitions itself partitioned? */
+	build_joinrel_partition_info(joinrel, outer_rel, inner_rel, restrictlist,
+								 jointype);
+
+	/* Child joinrel is parallel safe if parent is parallel safe. */
+	joinrel->consider_parallel = parent_joinrel->consider_parallel;
+
+	/* Set estimates of the child-joinrel's size. */
+	set_joinrel_size_estimates(root, joinrel, outer_rel, inner_rel,
+							   sjinfo, restrictlist);
+
+	/* We build the join only once. */
+	Assert(!find_join_rel(root, joinrel->relids));
+
+	/* Add the relation to the PlannerInfo. */
+	add_join_rel(root, joinrel);
+
+	return joinrel;
+}
+
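The adjust_appendrel_attrs() call above is what makes one parent join clause serve every child join: parent Vars in the clause are substituted with the corresponding child Vars. Continuing the hypothetical pt1/pt2 sketch, the effect is visible in EXPLAIN output:

EXPLAIN (COSTS OFF) SELECT * FROM pt1 t1 JOIN pt2 t2 ON t1.a = t2.a;
-- Each child join carries its own translated copy of the clause, e.g.
-- Hash Cond: (t2.a = t1.a) for the first partition pair and
-- Hash Cond: (t2_1.a = t1_1.a) for the second.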
 /*
  * min_join_parameterization
  *
@@ -705,9 +851,15 @@ static void
 build_joinrel_tlist(PlannerInfo *root, RelOptInfo *joinrel,
 					RelOptInfo *input_rel)
 {
-	Relids		relids = joinrel->relids;
+	Relids		relids;
 	ListCell   *vars;
 
+	/* attrs_needed refers to parent relids and not those of a child. */
+	if (joinrel->top_parent_relids)
+		relids = joinrel->top_parent_relids;
+	else
+		relids = joinrel->relids;
+
 	foreach(vars, input_rel->reltarget->exprs)
 	{
 		Var		   *var = (Var *) lfirst(vars);
@@ -722,24 +874,55 @@ build_joinrel_tlist(PlannerInfo *root, RelOptInfo *joinrel,
 			continue;
 
 		/*
-		 * Otherwise, anything in a baserel or joinrel targetlist ought to be
-		 * a Var.  (More general cases can only appear in appendrel child
-		 * rels, which will never be seen here.)
+		 * Otherwise, anything in a baserel or joinrel targetlist ought to be
+		 * a Var, except that children of a partitioned table may have a
+		 * ConvertRowtypeExpr translating a whole-row Var of the child to the
+		 * parent's rowtype. Children of an inherited table and subquery
+		 * child rels cannot directly participate in a join, so no other
+		 * kinds of nodes should appear here.
 		 */
-		if (!IsA(var, Var))
+		if (IsA(var, Var))
+		{
+			baserel = find_base_rel(root, var->varno);
+			ndx = var->varattno - baserel->min_attr;
+		}
+		else if (IsA(var, ConvertRowtypeExpr))
+		{
+			ConvertRowtypeExpr *child_expr = (ConvertRowtypeExpr *) var;
+			Var		   *childvar = (Var *) child_expr->arg;
+
+			/*
+			 * A child's whole-row references are converted to look like
+			 * those of the parent using ConvertRowtypeExpr. There can be as
+			 * many ConvertRowtypeExpr decorations as there are levels in the
+			 * partition tree. The argument of the deepest ConvertRowtypeExpr
+			 * is expected to be a whole-row reference of the child.
+			 */
+			while (IsA(childvar, ConvertRowtypeExpr))
+			{
+				child_expr = (ConvertRowtypeExpr *) childvar;
+				childvar = (Var *) child_expr->arg;
+			}
+			Assert(IsA(childvar, Var) && childvar->varattno == 0);
+
+			baserel = find_base_rel(root, childvar->varno);
+			ndx = 0 - baserel->min_attr;
+		}
+		else
 			elog(ERROR, "unexpected node type in rel targetlist: %d",
 				 (int) nodeTag(var));
 
-		/* Get the Var's original base rel */
-		baserel = find_base_rel(root, var->varno);
-		/* Is it still needed above this joinrel? */
-		ndx = var->varattno - baserel->min_attr;
+		/* Is the target expression still needed above this joinrel? */
 		if (bms_nonempty_difference(baserel->attr_needed[ndx], relids))
 		{
 			/* Yup, add it to the output */
 			joinrel->reltarget->exprs = lappend(joinrel->reltarget->exprs,
 												var);
-			/* Vars have cost zero, so no need to adjust reltarget->cost */
+
+			/*
+			 * Vars have cost zero, so no need to adjust reltarget->cost.
+			 * Even if it's a ConvertRowtypeExpr, it will be computed only
+			 * for the base relation, costing nothing for the join.
+			 */
 			joinrel->reltarget->width += baserel->attr_widths[ndx];
 		}
 	}
@@ -876,6 +1059,9 @@ subbuild_joinrel_joinlist(RelOptInfo *joinrel,
 {
 	ListCell   *l;
 
+	/* Expected to be called only for joins between parent relations. */
+	Assert(joinrel->reloptkind == RELOPT_JOINREL);
+
 	foreach(l, joininfo_list)
 	{
 		RestrictInfo *rinfo = (RestrictInfo *) lfirst(l);
@@ -1399,3 +1585,165 @@ find_param_path_info(RelOptInfo *rel, Relids required_outer)
 
 	return NULL;
 }
+
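The ConvertRowtypeExpr case above arises for whole-row references: a child table's whole-row Var has the child's row type, so each child join's targetlist wraps it in one ConvertRowtypeExpr per partitioning level to present the parent's row type. A minimal sketch with the hypothetical pt1/pt2 pair:

EXPLAIN (COSTS OFF) SELECT t1, t2 FROM pt1 t1 JOIN pt2 t2 USING (a);
-- The child joins emit ConvertRowtypeExpr-wrapped whole-row Vars; the width
-- estimate is still taken from the underlying base relation, as the code
-- above does for the ndx computed from the child Var.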
+/*
+ * build_joinrel_partition_info
+ *		If the two relations have the same partitioning scheme, their join
+ *		may be partitioned and will follow the same partitioning scheme as
+ *		the joining relations. Set the partition scheme and partition key
+ *		expressions in the join relation.
+ */
+static void
+build_joinrel_partition_info(RelOptInfo *joinrel, RelOptInfo *outer_rel,
+							 RelOptInfo *inner_rel, List *restrictlist,
+							 JoinType jointype)
+{
+	int			partnatts;
+	int			cnt;
+	PartitionScheme part_scheme;
+
+	/* Nothing to do if the partition-wise join technique is disabled. */
+	if (!enable_partition_wise_join)
+	{
+		Assert(!IS_PARTITIONED_REL(joinrel));
+		return;
+	}
+
+	/*
+	 * We can only consider this join as an input to further partition-wise
+	 * joins if (a) the input relations are partitioned, (b) the partition
+	 * schemes match, and (c) we can identify an equi-join between the
+	 * partition keys. Note that if it were possible for
+	 * have_partkey_equi_join to return different answers for the same
+	 * joinrel depending on which join ordering we try first, this logic
+	 * would break. That shouldn't happen, though, because of the way the
+	 * query planner deduces implied equalities and reorders the joins.
+	 * Please see optimizer/README for details.
+	 */
+	if (!IS_PARTITIONED_REL(outer_rel) || !IS_PARTITIONED_REL(inner_rel) ||
+		outer_rel->part_scheme != inner_rel->part_scheme ||
+		!have_partkey_equi_join(outer_rel, inner_rel, jointype, restrictlist))
+	{
+		Assert(!IS_PARTITIONED_REL(joinrel));
+		return;
+	}
+
+	part_scheme = outer_rel->part_scheme;
+
+	Assert(REL_HAS_ALL_PART_PROPS(outer_rel) &&
+		   REL_HAS_ALL_PART_PROPS(inner_rel));
+
+	/*
+	 * For now, our partition matching algorithm can match partitions only
+	 * when the partition bounds of the joining relations are exactly the
+	 * same, so bail out otherwise.
+	 */
+	if (outer_rel->nparts != inner_rel->nparts ||
+		!partition_bounds_equal(part_scheme->partnatts,
+								part_scheme->parttyplen,
+								part_scheme->parttypbyval,
+								outer_rel->boundinfo, inner_rel->boundinfo))
+	{
+		Assert(!IS_PARTITIONED_REL(joinrel));
+		return;
+	}
+
+	/*
+	 * This function will be called only once for each joinrel, hence it
+	 * should not yet have its partition scheme, partition bounds, partition
+	 * key expressions, or array of child relations set.
+	 */
+	Assert(!joinrel->part_scheme && !joinrel->partexprs &&
+		   !joinrel->nullable_partexprs && !joinrel->part_rels &&
+		   !joinrel->boundinfo);
+
+	/*
+	 * The join relation is partitioned using the same partitioning scheme
+	 * as the joining relations and has the same bounds.
+	 */
+	joinrel->part_scheme = part_scheme;
+	joinrel->boundinfo = outer_rel->boundinfo;
+	joinrel->nparts = outer_rel->nparts;
+	partnatts = joinrel->part_scheme->partnatts;
+	joinrel->partexprs = (List **) palloc0(sizeof(List *) * partnatts);
+	joinrel->nullable_partexprs =
+		(List **) palloc0(sizeof(List *) * partnatts);
+
+	/*
+	 * Construct partition keys for the join.
+	 *
+	 * An INNER join between two partitioned relations can be regarded as
+	 * partitioned by either key expression. For example, A INNER JOIN B ON
+	 * A.a = B.b can be regarded as partitioned on A.a or on B.b; they are
+	 * equivalent.
+	 *
+	 * For a SEMI or ANTI join, the result can only be regarded as being
+	 * partitioned in the same manner as the outer side, since the inner
+	 * columns are not retained.
+	 *
+	 * An OUTER join like (A LEFT JOIN B ON A.a = B.b) may produce rows with
+	 * B.b NULL. These rows may not fit the partitioning conditions imposed
+	 * on B.b. Hence, strictly speaking, the join is not partitioned by B.b
+	 * and thus the partition keys of an OUTER join should include partition
+	 * key expressions from the OUTER side only. However, because all
+	 * commonly-used comparison operators are strict, the presence of nulls
+	 * on the outer side doesn't cause any problem; they can't match anything
+	 * at future join levels anyway. Therefore, we track two sets of
+	 * expressions: those that authentically partition the relation
+	 * (partexprs) and those that partition the relation with the exception
+	 * that extra nulls may be present (nullable_partexprs). When the
+	 * comparison operator is strict, the latter is just as good as the
+	 * former.
+	 */
+	for (cnt = 0; cnt < partnatts; cnt++)
+	{
+		List	   *outer_expr;
+		List	   *outer_null_expr;
+		List	   *inner_expr;
+		List	   *inner_null_expr;
+		List	   *partexpr = NIL;
+		List	   *nullable_partexpr = NIL;
+
+		outer_expr = list_copy(outer_rel->partexprs[cnt]);
+		outer_null_expr = list_copy(outer_rel->nullable_partexprs[cnt]);
+		inner_expr = list_copy(inner_rel->partexprs[cnt]);
+		inner_null_expr = list_copy(inner_rel->nullable_partexprs[cnt]);
+
+		switch (jointype)
+		{
+			case JOIN_INNER:
+				partexpr = list_concat(outer_expr, inner_expr);
+				nullable_partexpr = list_concat(outer_null_expr,
+												inner_null_expr);
+				break;
+
+			case JOIN_SEMI:
+			case JOIN_ANTI:
+				partexpr = outer_expr;
+				nullable_partexpr = outer_null_expr;
+				break;
+
+			case JOIN_LEFT:
+				partexpr = outer_expr;
+				nullable_partexpr = list_concat(inner_expr,
+												outer_null_expr);
+				nullable_partexpr = list_concat(nullable_partexpr,
+												inner_null_expr);
+				break;

+			case JOIN_FULL:
+				nullable_partexpr = list_concat(outer_expr,
+												inner_expr);
+				nullable_partexpr = list_concat(nullable_partexpr,
+												outer_null_expr);
+				nullable_partexpr = list_concat(nullable_partexpr,
+												inner_null_expr);
+				break;
+
+			default:
+				elog(ERROR, "unrecognized join type: %d", (int) jointype);
+		}
+
+		joinrel->partexprs[cnt] = partexpr;
+		joinrel->nullable_partexprs[cnt] = nullable_partexpr;
+	}
+}
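As a worked instance of the switch above, take A JOIN B ON A.a = B.b with both sides partitioned on the joined column, so partnatts is 1. An INNER join gives partexprs[0] = {A.a, B.b}; a LEFT join gives partexprs[0] = {A.a} and nullable_partexprs[0] = {B.b}; a FULL join leaves partexprs[0] empty and puts both keys in nullable_partexprs[0]. Because the equality operator is strict, even the FULL join remains usable for further partition-wise joining; a sketch with the hypothetical pt1/pt2 pair:

SET enable_partition_wise_join = on;
EXPLAIN (COSTS OFF) SELECT * FROM pt1 FULL JOIN pt2 USING (a);
-- Expected shape: an Append of per-partition Hash Full Joins; nulls injected
-- on either side cannot match a strict '=' at higher join levels, so the
-- nullable keys are as good as the non-nullable ones there.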
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 8292df00bb..ae22185fbd 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -911,6 +911,15 @@ static struct config_bool ConfigureNamesBool[] =
 		true,
 		NULL, NULL, NULL
 	},
+	{
+		{"enable_partition_wise_join", PGC_USERSET, QUERY_TUNING_METHOD,
+			gettext_noop("Enables partition-wise join."),
+			NULL
+		},
+		&enable_partition_wise_join,
+		false,
+		NULL, NULL, NULL
+	},
 	{
 		{"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index cf4ddcd94a..368b280c8a 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -299,6 +299,7 @@
 #enable_seqscan = on
 #enable_sort = on
 #enable_tidscan = on
+#enable_partition_wise_join = off
 
 # - Planner Cost Constants -
 
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index ef0fbe6f9c..04e43cc5e5 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -158,6 +158,9 @@ typedef void (*ShutdownForeignScan_function) (ForeignScanState *node);
 typedef bool (*IsForeignScanParallelSafe_function) (PlannerInfo *root,
 													RelOptInfo *rel,
 													RangeTblEntry *rte);
+typedef List *(*ReparameterizeForeignPathByChild_function) (PlannerInfo *root,
+															List *fdw_private,
+															RelOptInfo *child_rel);
 
 /*
  * FdwRoutine is the struct returned by a foreign-data wrapper's handler
@@ -230,6 +233,9 @@ typedef struct FdwRoutine
 	ReInitializeDSMForeignScan_function ReInitializeDSMForeignScan;
 	InitializeWorkerForeignScan_function InitializeWorkerForeignScan;
 	ShutdownForeignScan_function 
ShutdownForeignScan; + + /* Support functions for path reparameterization. */ + ReparameterizeForeignPathByChild_function ReparameterizeForeignPathByChild; } FdwRoutine; diff --git a/src/include/nodes/extensible.h b/src/include/nodes/extensible.h index 0654e79c7b..c3436c7a4e 100644 --- a/src/include/nodes/extensible.h +++ b/src/include/nodes/extensible.h @@ -96,6 +96,9 @@ typedef struct CustomPathMethods List *tlist, List *clauses, List *custom_plans); + struct List *(*ReparameterizeCustomPathByChild) (PlannerInfo *root, + List *custom_private, + RelOptInfo *child_rel); } CustomPathMethods; /* diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 48e6012f7f..e085cefb7b 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -391,6 +391,11 @@ typedef struct PartitionSchemeData *PartitionScheme; * handling join alias Vars. Currently this is not needed because all join * alias Vars are expanded to non-aliased form during preprocess_expression. * + * We also have relations representing joins between child relations of + * different partitioned tables. These relations are not added to + * join_rel_level lists as they are not joined directly by the dynamic + * programming algorithm. + * * There is also a RelOptKind for "upper" relations, which are RelOptInfos * that describe post-scan/join processing steps, such as aggregation. * Many of the fields in these RelOptInfos are meaningless, but their Path @@ -525,14 +530,18 @@ typedef struct PartitionSchemeData *PartitionScheme; * boundinfo - Partition bounds * nparts - Number of partitions * part_rels - RelOptInfos for each partition - * partexprs - Partition key expressions + * partexprs, nullable_partexprs - Partition key expressions * * Note: A base relation always has only one set of partition keys, but a join * relation may have as many sets of partition keys as the number of relations - * being joined. partexprs is an array containing part_scheme->partnatts - * elements, each of which is a list of partition key expressions. For a base - * relation each list contains only one expression, but for a join relation - * there can be one per baserel. + * being joined. partexprs and nullable_partexprs are arrays containing + * part_scheme->partnatts elements each. Each of these elements is a list of + * partition key expressions. For a base relation each list in partexprs + * contains only one expression and nullable_partexprs is not populated. For a + * join relation, partexprs and nullable_partexprs contain partition key + * expressions from non-nullable and nullable relations resp. Lists at any + * given position in those arrays together contain as many elements as the + * number of joining relations. *---------- */ typedef enum RelOptKind @@ -540,6 +549,7 @@ typedef enum RelOptKind RELOPT_BASEREL, RELOPT_JOINREL, RELOPT_OTHER_MEMBER_REL, + RELOPT_OTHER_JOINREL, RELOPT_UPPER_REL, RELOPT_DEADREL } RelOptKind; @@ -553,13 +563,17 @@ typedef enum RelOptKind (rel)->reloptkind == RELOPT_OTHER_MEMBER_REL) /* Is the given relation a join relation? */ -#define IS_JOIN_REL(rel) ((rel)->reloptkind == RELOPT_JOINREL) +#define IS_JOIN_REL(rel) \ + ((rel)->reloptkind == RELOPT_JOINREL || \ + (rel)->reloptkind == RELOPT_OTHER_JOINREL) /* Is the given relation an upper relation? */ #define IS_UPPER_REL(rel) ((rel)->reloptkind == RELOPT_UPPER_REL) /* Is the given relation an "other" relation? 
*/ -#define IS_OTHER_REL(rel) ((rel)->reloptkind == RELOPT_OTHER_MEMBER_REL) +#define IS_OTHER_REL(rel) \ + ((rel)->reloptkind == RELOPT_OTHER_MEMBER_REL || \ + (rel)->reloptkind == RELOPT_OTHER_JOINREL) typedef struct RelOptInfo { @@ -645,9 +659,29 @@ typedef struct RelOptInfo struct PartitionBoundInfoData *boundinfo; /* Partition bounds */ struct RelOptInfo **part_rels; /* Array of RelOptInfos of partitions, * stored in the same order of bounds */ - List **partexprs; /* Partition key expressions. */ + List **partexprs; /* Non-nullable partition key expressions. */ + List **nullable_partexprs; /* Nullable partition key expressions. */ } RelOptInfo; +/* + * Is given relation partitioned? + * + * A join between two partitioned relations with same partitioning scheme + * without any matching partitions will not have any partition in it but will + * have partition scheme set. So a relation is deemed to be partitioned if it + * has a partitioning scheme, bounds and positive number of partitions. + */ +#define IS_PARTITIONED_REL(rel) \ + ((rel)->part_scheme && (rel)->boundinfo && (rel)->nparts > 0) + +/* + * Convenience macro to make sure that a partitioned relation has all the + * required members set. + */ +#define REL_HAS_ALL_PART_PROPS(rel) \ + ((rel)->part_scheme && (rel)->boundinfo && (rel)->nparts > 0 && \ + (rel)->part_rels && (rel)->partexprs && (rel)->nullable_partexprs) + /* * IndexOptInfo * Per-index information for planning/optimization diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h index 63feba06e7..306d923a22 100644 --- a/src/include/optimizer/cost.h +++ b/src/include/optimizer/cost.h @@ -67,6 +67,7 @@ extern bool enable_material; extern bool enable_mergejoin; extern bool enable_hashjoin; extern bool enable_gathermerge; +extern bool enable_partition_wise_join; extern int constraint_exclusion; extern double clamp_row_est(double nrows); diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h index e372f8862b..e9ed16ad32 100644 --- a/src/include/optimizer/pathnode.h +++ b/src/include/optimizer/pathnode.h @@ -251,6 +251,8 @@ extern LimitPath *create_limit_path(PlannerInfo *root, RelOptInfo *rel, extern Path *reparameterize_path(PlannerInfo *root, Path *path, Relids required_outer, double loop_count); +extern Path *reparameterize_path_by_child(PlannerInfo *root, Path *path, + RelOptInfo *child_rel); /* * prototypes for relnode.c @@ -290,5 +292,9 @@ extern ParamPathInfo *get_appendrel_parampathinfo(RelOptInfo *appendrel, Relids required_outer); extern ParamPathInfo *find_param_path_info(RelOptInfo *rel, Relids required_outer); +extern RelOptInfo *build_child_join_rel(PlannerInfo *root, + RelOptInfo *outer_rel, RelOptInfo *inner_rel, + RelOptInfo *parent_joinrel, List *restrictlist, + SpecialJoinInfo *sjinfo, JoinType jointype); #endif /* PATHNODE_H */ diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h index 4e06b2e299..a15eee54bb 100644 --- a/src/include/optimizer/paths.h +++ b/src/include/optimizer/paths.h @@ -58,6 +58,8 @@ extern int compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages); extern void create_partial_bitmap_paths(PlannerInfo *root, RelOptInfo *rel, Path *bitmapqual); +extern void generate_partition_wise_join_paths(PlannerInfo *root, + RelOptInfo *rel); #ifdef OPTIMIZER_DEBUG extern void debug_print_rel(PlannerInfo *root, RelOptInfo *rel); @@ -111,6 +113,9 @@ extern bool have_join_order_restriction(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2); extern bool 
have_dangerous_phv(PlannerInfo *root, Relids outer_relids, Relids inner_params); +extern void mark_dummy_rel(RelOptInfo *rel); +extern bool have_partkey_equi_join(RelOptInfo *rel1, RelOptInfo *rel2, + JoinType jointype, List *restrictlist); /* * equivclass.c diff --git a/src/include/optimizer/placeholder.h b/src/include/optimizer/placeholder.h index 5a4d46ba9d..a4a7b79f4d 100644 --- a/src/include/optimizer/placeholder.h +++ b/src/include/optimizer/placeholder.h @@ -28,5 +28,7 @@ extern void fix_placeholder_input_needed_levels(PlannerInfo *root); extern void add_placeholders_to_base_rels(PlannerInfo *root); extern void add_placeholders_to_joinrel(PlannerInfo *root, RelOptInfo *joinrel, RelOptInfo *outer_rel, RelOptInfo *inner_rel); +extern void add_placeholders_to_child_joinrel(PlannerInfo *root, + RelOptInfo *childrel, RelOptInfo *parentrel); #endif /* PLACEHOLDER_H */ diff --git a/src/include/optimizer/planner.h b/src/include/optimizer/planner.h index 2a4cf71e10..2801bfdfbe 100644 --- a/src/include/optimizer/planner.h +++ b/src/include/optimizer/planner.h @@ -58,5 +58,7 @@ extern Expr *preprocess_phv_expression(PlannerInfo *root, Expr *expr); extern bool plan_cluster_use_sort(Oid tableOid, Oid indexOid); extern List *get_partitioned_child_rels(PlannerInfo *root, Index rti); +extern List *get_partitioned_child_rels_for_join(PlannerInfo *root, + Relids join_relids); #endif /* PLANNER_H */ diff --git a/src/include/optimizer/prep.h b/src/include/optimizer/prep.h index 4be0afd566..80fbfd6ea9 100644 --- a/src/include/optimizer/prep.h +++ b/src/include/optimizer/prep.h @@ -62,4 +62,10 @@ extern Node *adjust_appendrel_attrs_multilevel(PlannerInfo *root, Node *node, extern AppendRelInfo **find_appinfos_by_relids(PlannerInfo *root, Relids relids, int *nappinfos); +extern SpecialJoinInfo *build_child_join_sjinfo(PlannerInfo *root, + SpecialJoinInfo *parent_sjinfo, + Relids left_relids, Relids right_relids); +extern Relids adjust_child_relids_multilevel(PlannerInfo *root, Relids relids, + Relids child_relids, Relids top_parent_relids); + #endif /* PREP_H */ diff --git a/src/test/regress/expected/partition_join.out b/src/test/regress/expected/partition_join.out new file mode 100644 index 0000000000..234b8b5381 --- /dev/null +++ b/src/test/regress/expected/partition_join.out @@ -0,0 +1,1789 @@ +-- +-- PARTITION_JOIN +-- Test partition-wise join between partitioned tables +-- +-- Enable partition-wise join, which by default is disabled. 
+SET enable_partition_wise_join to true; +-- +-- partitioned by a single column +-- +CREATE TABLE prt1 (a int, b int, c varchar) PARTITION BY RANGE(a); +CREATE TABLE prt1_p1 PARTITION OF prt1 FOR VALUES FROM (0) TO (250); +CREATE TABLE prt1_p3 PARTITION OF prt1 FOR VALUES FROM (500) TO (600); +CREATE TABLE prt1_p2 PARTITION OF prt1 FOR VALUES FROM (250) TO (500); +INSERT INTO prt1 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 2 = 0; +CREATE INDEX iprt1_p1_a on prt1_p1(a); +CREATE INDEX iprt1_p2_a on prt1_p2(a); +CREATE INDEX iprt1_p3_a on prt1_p3(a); +ANALYZE prt1; +CREATE TABLE prt2 (a int, b int, c varchar) PARTITION BY RANGE(b); +CREATE TABLE prt2_p1 PARTITION OF prt2 FOR VALUES FROM (0) TO (250); +CREATE TABLE prt2_p2 PARTITION OF prt2 FOR VALUES FROM (250) TO (500); +CREATE TABLE prt2_p3 PARTITION OF prt2 FOR VALUES FROM (500) TO (600); +INSERT INTO prt2 SELECT i % 25, i, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 3 = 0; +CREATE INDEX iprt2_p1_b on prt2_p1(b); +CREATE INDEX iprt2_p2_b on prt2_p2(b); +CREATE INDEX iprt2_p3_b on prt2_p3(b); +ANALYZE prt2; +-- inner join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +-------------------------------------------------- + Sort + Sort Key: t1.a + -> Append + -> Hash Join + Hash Cond: (t2.b = t1.a) + -> Seq Scan on prt2_p1 t2 + -> Hash + -> Seq Scan on prt1_p1 t1 + Filter: (b = 0) + -> Hash Join + Hash Cond: (t2_1.b = t1_1.a) + -> Seq Scan on prt2_p2 t2_1 + -> Hash + -> Seq Scan on prt1_p2 t1_1 + Filter: (b = 0) + -> Hash Join + Hash Cond: (t2_2.b = t1_2.a) + -> Seq Scan on prt2_p3 t2_2 + -> Hash + -> Seq Scan on prt1_p3 t1_2 + Filter: (b = 0) +(21 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; + a | c | b | c +-----+------+-----+------ + 0 | 0000 | 0 | 0000 + 150 | 0150 | 150 | 0150 + 300 | 0300 | 300 | 0300 + 450 | 0450 | 450 | 0450 +(4 rows) + +-- left outer join, with whole-row reference +EXPLAIN (COSTS OFF) +SELECT t1, t2 FROM prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +-------------------------------------------------------- + Sort + Sort Key: t1.a, t2.b + -> Result + -> Append + -> Hash Right Join + Hash Cond: (t2.b = t1.a) + -> Seq Scan on prt2_p1 t2 + -> Hash + -> Seq Scan on prt1_p1 t1 + Filter: (b = 0) + -> Hash Right Join + Hash Cond: (t2_1.b = t1_1.a) + -> Seq Scan on prt2_p2 t2_1 + -> Hash + -> Seq Scan on prt1_p2 t1_1 + Filter: (b = 0) + -> Hash Right Join + Hash Cond: (t2_2.b = t1_2.a) + -> Seq Scan on prt2_p3 t2_2 + -> Hash + -> Seq Scan on prt1_p3 t1_2 + Filter: (b = 0) +(22 rows) + +SELECT t1, t2 FROM prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; + t1 | t2 +--------------+-------------- + (0,0,0000) | (0,0,0000) + (50,0,0050) | + (100,0,0100) | + (150,0,0150) | (0,150,0150) + (200,0,0200) | + (250,0,0250) | + (300,0,0300) | (0,300,0300) + (350,0,0350) | + (400,0,0400) | + (450,0,0450) | (0,450,0450) + (500,0,0500) | + (550,0,0550) | +(12 rows) + +-- right outer join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 RIGHT JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +--------------------------------------------------------------------- + Sort + Sort Key: t1.a, t2.b + -> Result + -> Append + -> Hash Right Join + Hash Cond: (t1.a = t2.b) + -> Seq Scan on prt1_p1 t1 + -> Hash + -> Seq Scan on 
prt2_p1 t2 + Filter: (a = 0) + -> Hash Right Join + Hash Cond: (t1_1.a = t2_1.b) + -> Seq Scan on prt1_p2 t1_1 + -> Hash + -> Seq Scan on prt2_p2 t2_1 + Filter: (a = 0) + -> Nested Loop Left Join + -> Seq Scan on prt2_p3 t2_2 + Filter: (a = 0) + -> Index Scan using iprt1_p3_a on prt1_p3 t1_2 + Index Cond: (a = t2_2.b) +(21 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 RIGHT JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; + a | c | b | c +-----+------+-----+------ + 0 | 0000 | 0 | 0000 + 150 | 0150 | 150 | 0150 + 300 | 0300 | 300 | 0300 + 450 | 0450 | 450 | 0450 + | | 75 | 0075 + | | 225 | 0225 + | | 375 | 0375 + | | 525 | 0525 +(8 rows) + +-- full outer join, with placeholder vars +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT 50 phv, * FROM prt1 WHERE prt1.b = 0) t1 FULL JOIN (SELECT 75 phv, * FROM prt2 WHERE prt2.a = 0) t2 ON (t1.a = t2.b) WHERE t1.phv = t1.a OR t2.phv = t2.b ORDER BY t1.a, t2.b; + QUERY PLAN +------------------------------------------------------------------ + Sort + Sort Key: prt1_p1.a, prt2_p1.b + -> Append + -> Hash Full Join + Hash Cond: (prt1_p1.a = prt2_p1.b) + Filter: (((50) = prt1_p1.a) OR ((75) = prt2_p1.b)) + -> Seq Scan on prt1_p1 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_p1 + Filter: (a = 0) + -> Hash Full Join + Hash Cond: (prt1_p2.a = prt2_p2.b) + Filter: (((50) = prt1_p2.a) OR ((75) = prt2_p2.b)) + -> Seq Scan on prt1_p2 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_p2 + Filter: (a = 0) + -> Hash Full Join + Hash Cond: (prt1_p3.a = prt2_p3.b) + Filter: (((50) = prt1_p3.a) OR ((75) = prt2_p3.b)) + -> Seq Scan on prt1_p3 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_p3 + Filter: (a = 0) +(27 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT 50 phv, * FROM prt1 WHERE prt1.b = 0) t1 FULL JOIN (SELECT 75 phv, * FROM prt2 WHERE prt2.a = 0) t2 ON (t1.a = t2.b) WHERE t1.phv = t1.a OR t2.phv = t2.b ORDER BY t1.a, t2.b; + a | c | b | c +----+------+----+------ + 50 | 0050 | | + | | 75 | 0075 +(2 rows) + +-- Join with pruned partitions from joining relations +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.a < 450 AND t2.b > 250 AND t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +----------------------------------------------------------- + Sort + Sort Key: t1.a + -> Append + -> Hash Join + Hash Cond: (t2.b = t1.a) + -> Seq Scan on prt2_p2 t2 + Filter: (b > 250) + -> Hash + -> Seq Scan on prt1_p2 t1 + Filter: ((a < 450) AND (b = 0)) +(10 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.a < 450 AND t2.b > 250 AND t1.b = 0 ORDER BY t1.a, t2.b; + a | c | b | c +-----+------+-----+------ + 300 | 0300 | 300 | 0300 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a < 450) t1 LEFT JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +----------------------------------------------------------- + Sort + Sort Key: prt1_p1.a, b + -> Append + -> Hash Left Join + Hash Cond: (prt1_p1.a = b) + -> Seq Scan on prt1_p1 + Filter: ((a < 450) AND (b = 0)) + -> Hash + -> Result + One-Time Filter: false + -> Hash Right Join + Hash Cond: (prt2_p2.b = prt1_p2.a) + -> Seq Scan on prt2_p2 + Filter: (b > 250) + -> Hash + -> Seq Scan on prt1_p2 + Filter: ((a < 450) AND (b = 0)) +(17 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a < 450) t1 LEFT JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, 
t2.b; + a | c | b | c +-----+------+-----+------ + 0 | 0000 | | + 50 | 0050 | | + 100 | 0100 | | + 150 | 0150 | | + 200 | 0200 | | + 250 | 0250 | | + 300 | 0300 | 300 | 0300 + 350 | 0350 | | + 400 | 0400 | | +(9 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a < 450) t1 FULL JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 OR t2.a = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +------------------------------------------------------------ + Sort + Sort Key: prt1_p1.a, b + -> Append + -> Hash Full Join + Hash Cond: (prt1_p1.a = b) + Filter: ((prt1_p1.b = 0) OR (a = 0)) + -> Seq Scan on prt1_p1 + Filter: (a < 450) + -> Hash + -> Result + One-Time Filter: false + -> Hash Full Join + Hash Cond: (prt1_p2.a = prt2_p2.b) + Filter: ((prt1_p2.b = 0) OR (prt2_p2.a = 0)) + -> Seq Scan on prt1_p2 + Filter: (a < 450) + -> Hash + -> Seq Scan on prt2_p2 + Filter: (b > 250) + -> Hash Full Join + Hash Cond: (prt2_p3.b = a) + Filter: ((b = 0) OR (prt2_p3.a = 0)) + -> Seq Scan on prt2_p3 + Filter: (b > 250) + -> Hash + -> Result + One-Time Filter: false +(27 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a < 450) t1 FULL JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 OR t2.a = 0 ORDER BY t1.a, t2.b; + a | c | b | c +-----+------+-----+------ + 0 | 0000 | | + 50 | 0050 | | + 100 | 0100 | | + 150 | 0150 | | + 200 | 0200 | | + 250 | 0250 | | + 300 | 0300 | 300 | 0300 + 350 | 0350 | | + 400 | 0400 | | + | | 375 | 0375 + | | 450 | 0450 + | | 525 | 0525 +(12 rows) + +-- Semi-join +EXPLAIN (COSTS OFF) +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t2.b FROM prt2 t2 WHERE t2.a = 0) AND t1.b = 0 ORDER BY t1.a; + QUERY PLAN +-------------------------------------------------- + Sort + Sort Key: t1.a + -> Append + -> Hash Semi Join + Hash Cond: (t1.a = t2.b) + -> Seq Scan on prt1_p1 t1 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_p1 t2 + Filter: (a = 0) + -> Hash Semi Join + Hash Cond: (t1_1.a = t2_1.b) + -> Seq Scan on prt1_p2 t1_1 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_p2 t2_1 + Filter: (a = 0) + -> Nested Loop Semi Join + Join Filter: (t1_2.a = t2_2.b) + -> Seq Scan on prt1_p3 t1_2 + Filter: (b = 0) + -> Materialize + -> Seq Scan on prt2_p3 t2_2 + Filter: (a = 0) +(24 rows) + +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t2.b FROM prt2 t2 WHERE t2.a = 0) AND t1.b = 0 ORDER BY t1.a; + a | b | c +-----+---+------ + 0 | 0 | 0000 + 150 | 0 | 0150 + 300 | 0 | 0300 + 450 | 0 | 0450 +(4 rows) + +-- Anti-join with aggregates +EXPLAIN (COSTS OFF) +SELECT sum(t1.a), avg(t1.a), sum(t1.b), avg(t1.b) FROM prt1 t1 WHERE NOT EXISTS (SELECT 1 FROM prt2 t2 WHERE t1.a = t2.b); + QUERY PLAN +-------------------------------------------------- + Aggregate + -> Append + -> Hash Anti Join + Hash Cond: (t1.a = t2.b) + -> Seq Scan on prt1_p1 t1 + -> Hash + -> Seq Scan on prt2_p1 t2 + -> Hash Anti Join + Hash Cond: (t1_1.a = t2_1.b) + -> Seq Scan on prt1_p2 t1_1 + -> Hash + -> Seq Scan on prt2_p2 t2_1 + -> Hash Anti Join + Hash Cond: (t1_2.a = t2_2.b) + -> Seq Scan on prt1_p3 t1_2 + -> Hash + -> Seq Scan on prt2_p3 t2_2 +(17 rows) + +SELECT sum(t1.a), avg(t1.a), sum(t1.b), avg(t1.b) FROM prt1 t1 WHERE NOT EXISTS (SELECT 1 FROM prt2 t2 WHERE t1.a = t2.b); + sum | avg | sum | avg +-------+----------------------+------+--------------------- + 60000 | 300.0000000000000000 | 2400 | 12.0000000000000000 +(1 row) + +-- lateral reference +EXPLAIN (COSTS OFF) +SELECT * FROM prt1 t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t3.a 
AS t3a, least(t1.a,t2.a,t3.b) FROM prt1 t2 JOIN prt2 t3 ON (t2.a = t3.b)) ss + ON t1.a = ss.t2a WHERE t1.b = 0 ORDER BY t1.a; + QUERY PLAN +-------------------------------------------------------------------------------- + Sort + Sort Key: t1.a + -> Result + -> Append + -> Nested Loop Left Join + -> Seq Scan on prt1_p1 t1 + Filter: (b = 0) + -> Nested Loop + -> Index Only Scan using iprt1_p1_a on prt1_p1 t2 + Index Cond: (a = t1.a) + -> Index Scan using iprt2_p1_b on prt2_p1 t3 + Index Cond: (b = t2.a) + -> Nested Loop Left Join + -> Seq Scan on prt1_p2 t1_1 + Filter: (b = 0) + -> Nested Loop + -> Index Only Scan using iprt1_p2_a on prt1_p2 t2_1 + Index Cond: (a = t1_1.a) + -> Index Scan using iprt2_p2_b on prt2_p2 t3_1 + Index Cond: (b = t2_1.a) + -> Nested Loop Left Join + -> Seq Scan on prt1_p3 t1_2 + Filter: (b = 0) + -> Nested Loop + -> Index Only Scan using iprt1_p3_a on prt1_p3 t2_2 + Index Cond: (a = t1_2.a) + -> Index Scan using iprt2_p3_b on prt2_p3 t3_2 + Index Cond: (b = t2_2.a) +(28 rows) + +SELECT * FROM prt1 t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t3.a AS t3a, least(t1.a,t2.a,t3.b) FROM prt1 t2 JOIN prt2 t3 ON (t2.a = t3.b)) ss + ON t1.a = ss.t2a WHERE t1.b = 0 ORDER BY t1.a; + a | b | c | t2a | t3a | least +-----+---+------+-----+-----+------- + 0 | 0 | 0000 | 0 | 0 | 0 + 50 | 0 | 0050 | | | + 100 | 0 | 0100 | | | + 150 | 0 | 0150 | 150 | 0 | 150 + 200 | 0 | 0200 | | | + 250 | 0 | 0250 | | | + 300 | 0 | 0300 | 300 | 0 | 300 + 350 | 0 | 0350 | | | + 400 | 0 | 0400 | | | + 450 | 0 | 0450 | 450 | 0 | 450 + 500 | 0 | 0500 | | | + 550 | 0 | 0550 | | | +(12 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, ss.t2a, ss.t2c FROM prt1 t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t3.a AS t3a, t2.b t2b, t2.c t2c, least(t1.a,t2.a,t3.b) FROM prt1 t2 JOIN prt2 t3 ON (t2.a = t3.b)) ss + ON t1.c = ss.t2c WHERE (t1.b + coalesce(ss.t2b, 0)) = 0 ORDER BY t1.a; + QUERY PLAN +-------------------------------------------------------------- + Sort + Sort Key: t1.a + -> Hash Left Join + Hash Cond: ((t1.c)::text = (t2.c)::text) + Filter: ((t1.b + COALESCE(t2.b, 0)) = 0) + -> Append + -> Seq Scan on prt1_p1 t1 + -> Seq Scan on prt1_p2 t1_1 + -> Seq Scan on prt1_p3 t1_2 + -> Hash + -> Append + -> Hash Join + Hash Cond: (t2.a = t3.b) + -> Seq Scan on prt1_p1 t2 + -> Hash + -> Seq Scan on prt2_p1 t3 + -> Hash Join + Hash Cond: (t2_1.a = t3_1.b) + -> Seq Scan on prt1_p2 t2_1 + -> Hash + -> Seq Scan on prt2_p2 t3_1 + -> Hash Join + Hash Cond: (t2_2.a = t3_2.b) + -> Seq Scan on prt1_p3 t2_2 + -> Hash + -> Seq Scan on prt2_p3 t3_2 +(26 rows) + +SELECT t1.a, ss.t2a, ss.t2c FROM prt1 t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t3.a AS t3a, t2.b t2b, t2.c t2c, least(t1.a,t2.a,t3.a) FROM prt1 t2 JOIN prt2 t3 ON (t2.a = t3.b)) ss + ON t1.c = ss.t2c WHERE (t1.b + coalesce(ss.t2b, 0)) = 0 ORDER BY t1.a; + a | t2a | t2c +-----+-----+------ + 0 | 0 | 0000 + 50 | | + 100 | | + 150 | 150 | 0150 + 200 | | + 250 | | + 300 | 300 | 0300 + 350 | | + 400 | | + 450 | 450 | 0450 + 500 | | + 550 | | +(12 rows) + +-- +-- partitioned by expression +-- +CREATE TABLE prt1_e (a int, b int, c int) PARTITION BY RANGE(((a + b)/2)); +CREATE TABLE prt1_e_p1 PARTITION OF prt1_e FOR VALUES FROM (0) TO (250); +CREATE TABLE prt1_e_p2 PARTITION OF prt1_e FOR VALUES FROM (250) TO (500); +CREATE TABLE prt1_e_p3 PARTITION OF prt1_e FOR VALUES FROM (500) TO (600); +INSERT INTO prt1_e SELECT i, i, i % 25 FROM generate_series(0, 599, 2) i; +CREATE INDEX iprt1_e_p1_ab2 on prt1_e_p1(((a+b)/2)); +CREATE INDEX iprt1_e_p2_ab2 on prt1_e_p2(((a+b)/2)); 
+CREATE INDEX iprt1_e_p3_ab2 on prt1_e_p3(((a+b)/2)); +ANALYZE prt1_e; +CREATE TABLE prt2_e (a int, b int, c int) PARTITION BY RANGE(((b + a)/2)); +CREATE TABLE prt2_e_p1 PARTITION OF prt2_e FOR VALUES FROM (0) TO (250); +CREATE TABLE prt2_e_p2 PARTITION OF prt2_e FOR VALUES FROM (250) TO (500); +CREATE TABLE prt2_e_p3 PARTITION OF prt2_e FOR VALUES FROM (500) TO (600); +INSERT INTO prt2_e SELECT i, i, i % 25 FROM generate_series(0, 599, 3) i; +ANALYZE prt2_e; +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_e t1, prt2_e t2 WHERE (t1.a + t1.b)/2 = (t2.b + t2.a)/2 AND t1.c = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +------------------------------------------------------------------------------ + Sort + Sort Key: t1.a, t2.b + -> Append + -> Hash Join + Hash Cond: (((t2.b + t2.a) / 2) = ((t1.a + t1.b) / 2)) + -> Seq Scan on prt2_e_p1 t2 + -> Hash + -> Seq Scan on prt1_e_p1 t1 + Filter: (c = 0) + -> Hash Join + Hash Cond: (((t2_1.b + t2_1.a) / 2) = ((t1_1.a + t1_1.b) / 2)) + -> Seq Scan on prt2_e_p2 t2_1 + -> Hash + -> Seq Scan on prt1_e_p2 t1_1 + Filter: (c = 0) + -> Hash Join + Hash Cond: (((t2_2.b + t2_2.a) / 2) = ((t1_2.a + t1_2.b) / 2)) + -> Seq Scan on prt2_e_p3 t2_2 + -> Hash + -> Seq Scan on prt1_e_p3 t1_2 + Filter: (c = 0) +(21 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_e t1, prt2_e t2 WHERE (t1.a + t1.b)/2 = (t2.b + t2.a)/2 AND t1.c = 0 ORDER BY t1.a, t2.b; + a | c | b | c +-----+---+-----+--- + 0 | 0 | 0 | 0 + 150 | 0 | 150 | 0 + 300 | 0 | 300 | 0 + 450 | 0 | 450 | 0 +(4 rows) + +-- +-- N-way join +-- +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM prt1 t1, prt2 t2, prt1_e t3 WHERE t1.a = t2.b AND t1.a = (t3.a + t3.b)/2 AND t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +--------------------------------------------------------------------------- + Sort + Sort Key: t1.a + -> Result + -> Append + -> Nested Loop + Join Filter: (t1.a = ((t3.a + t3.b) / 2)) + -> Hash Join + Hash Cond: (t2.b = t1.a) + -> Seq Scan on prt2_p1 t2 + -> Hash + -> Seq Scan on prt1_p1 t1 + Filter: (b = 0) + -> Index Scan using iprt1_e_p1_ab2 on prt1_e_p1 t3 + Index Cond: (((a + b) / 2) = t2.b) + -> Nested Loop + Join Filter: (t1_1.a = ((t3_1.a + t3_1.b) / 2)) + -> Hash Join + Hash Cond: (t2_1.b = t1_1.a) + -> Seq Scan on prt2_p2 t2_1 + -> Hash + -> Seq Scan on prt1_p2 t1_1 + Filter: (b = 0) + -> Index Scan using iprt1_e_p2_ab2 on prt1_e_p2 t3_1 + Index Cond: (((a + b) / 2) = t2_1.b) + -> Nested Loop + Join Filter: (t1_2.a = ((t3_2.a + t3_2.b) / 2)) + -> Hash Join + Hash Cond: (t2_2.b = t1_2.a) + -> Seq Scan on prt2_p3 t2_2 + -> Hash + -> Seq Scan on prt1_p3 t1_2 + Filter: (b = 0) + -> Index Scan using iprt1_e_p3_ab2 on prt1_e_p3 t3_2 + Index Cond: (((a + b) / 2) = t2_2.b) +(34 rows) + +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM prt1 t1, prt2 t2, prt1_e t3 WHERE t1.a = t2.b AND t1.a = (t3.a + t3.b)/2 AND t1.b = 0 ORDER BY t1.a, t2.b; + a | c | b | c | ?column? 
| c +-----+------+-----+------+----------+--- + 0 | 0000 | 0 | 0000 | 0 | 0 + 150 | 0150 | 150 | 0150 | 300 | 0 + 300 | 0300 | 300 | 0300 | 600 | 0 + 450 | 0450 | 450 | 0450 | 900 | 0 +(4 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) LEFT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t1.b = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; + QUERY PLAN +-------------------------------------------------------------------- + Sort + Sort Key: t1.a, t2.b, ((t3.a + t3.b)) + -> Result + -> Append + -> Hash Right Join + Hash Cond: (((t3.a + t3.b) / 2) = t1.a) + -> Seq Scan on prt1_e_p1 t3 + -> Hash + -> Hash Right Join + Hash Cond: (t2.b = t1.a) + -> Seq Scan on prt2_p1 t2 + -> Hash + -> Seq Scan on prt1_p1 t1 + Filter: (b = 0) + -> Hash Right Join + Hash Cond: (((t3_1.a + t3_1.b) / 2) = t1_1.a) + -> Seq Scan on prt1_e_p2 t3_1 + -> Hash + -> Hash Right Join + Hash Cond: (t2_1.b = t1_1.a) + -> Seq Scan on prt2_p2 t2_1 + -> Hash + -> Seq Scan on prt1_p2 t1_1 + Filter: (b = 0) + -> Hash Right Join + Hash Cond: (((t3_2.a + t3_2.b) / 2) = t1_2.a) + -> Seq Scan on prt1_e_p3 t3_2 + -> Hash + -> Hash Right Join + Hash Cond: (t2_2.b = t1_2.a) + -> Seq Scan on prt2_p3 t2_2 + -> Hash + -> Seq Scan on prt1_p3 t1_2 + Filter: (b = 0) +(34 rows) + +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) LEFT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t1.b = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; + a | c | b | c | ?column? | c +-----+------+-----+------+----------+--- + 0 | 0000 | 0 | 0000 | 0 | 0 + 50 | 0050 | | | 100 | 0 + 100 | 0100 | | | 200 | 0 + 150 | 0150 | 150 | 0150 | 300 | 0 + 200 | 0200 | | | 400 | 0 + 250 | 0250 | | | 500 | 0 + 300 | 0300 | 300 | 0300 | 600 | 0 + 350 | 0350 | | | 700 | 0 + 400 | 0400 | | | 800 | 0 + 450 | 0450 | 450 | 0450 | 900 | 0 + 500 | 0500 | | | 1000 | 0 + 550 | 0550 | | | 1100 | 0 +(12 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) RIGHT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t3.c = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; + QUERY PLAN +------------------------------------------------------------------------- + Sort + Sort Key: t1.a, t2.b, ((t3.a + t3.b)) + -> Result + -> Append + -> Nested Loop Left Join + -> Hash Right Join + Hash Cond: (t1.a = ((t3.a + t3.b) / 2)) + -> Seq Scan on prt1_p1 t1 + -> Hash + -> Seq Scan on prt1_e_p1 t3 + Filter: (c = 0) + -> Index Scan using iprt2_p1_b on prt2_p1 t2 + Index Cond: (t1.a = b) + -> Nested Loop Left Join + -> Hash Right Join + Hash Cond: (t1_1.a = ((t3_1.a + t3_1.b) / 2)) + -> Seq Scan on prt1_p2 t1_1 + -> Hash + -> Seq Scan on prt1_e_p2 t3_1 + Filter: (c = 0) + -> Index Scan using iprt2_p2_b on prt2_p2 t2_1 + Index Cond: (t1_1.a = b) + -> Nested Loop Left Join + -> Hash Right Join + Hash Cond: (t1_2.a = ((t3_2.a + t3_2.b) / 2)) + -> Seq Scan on prt1_p3 t1_2 + -> Hash + -> Seq Scan on prt1_e_p3 t3_2 + Filter: (c = 0) + -> Index Scan using iprt2_p3_b on prt2_p3 t2_2 + Index Cond: (t1_2.a = b) +(31 rows) + +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) RIGHT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t3.c = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; + a | c | b | c | ?column? 
| c +-----+------+-----+------+----------+--- + 0 | 0000 | 0 | 0000 | 0 | 0 + 50 | 0050 | | | 100 | 0 + 100 | 0100 | | | 200 | 0 + 150 | 0150 | 150 | 0150 | 300 | 0 + 200 | 0200 | | | 400 | 0 + 250 | 0250 | | | 500 | 0 + 300 | 0300 | 300 | 0300 | 600 | 0 + 350 | 0350 | | | 700 | 0 + 400 | 0400 | | | 800 | 0 + 450 | 0450 | 450 | 0450 | 900 | 0 + 500 | 0500 | | | 1000 | 0 + 550 | 0550 | | | 1100 | 0 +(12 rows) + +-- Cases with non-nullable expressions in subquery results; +-- make sure these go to null as expected +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.phv, t2.b, t2.phv, t3.a + t3.b, t3.phv FROM ((SELECT 50 phv, * FROM prt1 WHERE prt1.b = 0) t1 FULL JOIN (SELECT 75 phv, * FROM prt2 WHERE prt2.a = 0) t2 ON (t1.a = t2.b)) FULL JOIN (SELECT 50 phv, * FROM prt1_e WHERE prt1_e.c = 0) t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t1.a = t1.phv OR t2.b = t2.phv OR (t3.a + t3.b)/2 = t3.phv ORDER BY t1.a, t2.b, t3.a + t3.b; + QUERY PLAN +---------------------------------------------------------------------------------------------------------------------- + Sort + Sort Key: prt1_p1.a, prt2_p1.b, ((prt1_e_p1.a + prt1_e_p1.b)) + -> Result + -> Append + -> Hash Full Join + Hash Cond: (prt1_p1.a = ((prt1_e_p1.a + prt1_e_p1.b) / 2)) + Filter: ((prt1_p1.a = (50)) OR (prt2_p1.b = (75)) OR (((prt1_e_p1.a + prt1_e_p1.b) / 2) = (50))) + -> Hash Full Join + Hash Cond: (prt1_p1.a = prt2_p1.b) + -> Seq Scan on prt1_p1 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_p1 + Filter: (a = 0) + -> Hash + -> Seq Scan on prt1_e_p1 + Filter: (c = 0) + -> Hash Full Join + Hash Cond: (prt1_p2.a = ((prt1_e_p2.a + prt1_e_p2.b) / 2)) + Filter: ((prt1_p2.a = (50)) OR (prt2_p2.b = (75)) OR (((prt1_e_p2.a + prt1_e_p2.b) / 2) = (50))) + -> Hash Full Join + Hash Cond: (prt1_p2.a = prt2_p2.b) + -> Seq Scan on prt1_p2 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_p2 + Filter: (a = 0) + -> Hash + -> Seq Scan on prt1_e_p2 + Filter: (c = 0) + -> Hash Full Join + Hash Cond: (prt1_p3.a = ((prt1_e_p3.a + prt1_e_p3.b) / 2)) + Filter: ((prt1_p3.a = (50)) OR (prt2_p3.b = (75)) OR (((prt1_e_p3.a + prt1_e_p3.b) / 2) = (50))) + -> Hash Full Join + Hash Cond: (prt1_p3.a = prt2_p3.b) + -> Seq Scan on prt1_p3 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_p3 + Filter: (a = 0) + -> Hash + -> Seq Scan on prt1_e_p3 + Filter: (c = 0) +(43 rows) + +SELECT t1.a, t1.phv, t2.b, t2.phv, t3.a + t3.b, t3.phv FROM ((SELECT 50 phv, * FROM prt1 WHERE prt1.b = 0) t1 FULL JOIN (SELECT 75 phv, * FROM prt2 WHERE prt2.a = 0) t2 ON (t1.a = t2.b)) FULL JOIN (SELECT 50 phv, * FROM prt1_e WHERE prt1_e.c = 0) t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t1.a = t1.phv OR t2.b = t2.phv OR (t3.a + t3.b)/2 = t3.phv ORDER BY t1.a, t2.b, t3.a + t3.b; + a | phv | b | phv | ?column? 
| phv +----+-----+----+-----+----------+----- + 50 | 50 | | | 100 | 50 + | | 75 | 75 | | +(2 rows) + +-- Semi-join +EXPLAIN (COSTS OFF) +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1, prt1_e t2 WHERE t1.a = 0 AND t1.b = (t2.a + t2.b)/2) AND t1.b = 0 ORDER BY t1.a; + QUERY PLAN +--------------------------------------------------------------------------------- + Sort + Sort Key: t1.a + -> Append + -> Nested Loop + Join Filter: (t1.a = t1_3.b) + -> HashAggregate + Group Key: t1_3.b + -> Hash Join + Hash Cond: (((t2.a + t2.b) / 2) = t1_3.b) + -> Seq Scan on prt1_e_p1 t2 + -> Hash + -> Seq Scan on prt2_p1 t1_3 + Filter: (a = 0) + -> Index Scan using iprt1_p1_a on prt1_p1 t1 + Index Cond: (a = ((t2.a + t2.b) / 2)) + Filter: (b = 0) + -> Nested Loop + Join Filter: (t1_1.a = t1_4.b) + -> HashAggregate + Group Key: t1_4.b + -> Hash Join + Hash Cond: (((t2_1.a + t2_1.b) / 2) = t1_4.b) + -> Seq Scan on prt1_e_p2 t2_1 + -> Hash + -> Seq Scan on prt2_p2 t1_4 + Filter: (a = 0) + -> Index Scan using iprt1_p2_a on prt1_p2 t1_1 + Index Cond: (a = ((t2_1.a + t2_1.b) / 2)) + Filter: (b = 0) + -> Nested Loop + Join Filter: (t1_2.a = t1_5.b) + -> HashAggregate + Group Key: t1_5.b + -> Nested Loop + -> Seq Scan on prt2_p3 t1_5 + Filter: (a = 0) + -> Index Scan using iprt1_e_p3_ab2 on prt1_e_p3 t2_2 + Index Cond: (((a + b) / 2) = t1_5.b) + -> Index Scan using iprt1_p3_a on prt1_p3 t1_2 + Index Cond: (a = ((t2_2.a + t2_2.b) / 2)) + Filter: (b = 0) +(41 rows) + +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1, prt1_e t2 WHERE t1.a = 0 AND t1.b = (t2.a + t2.b)/2) AND t1.b = 0 ORDER BY t1.a; + a | b | c +-----+---+------ + 0 | 0 | 0000 + 150 | 0 | 0150 + 300 | 0 | 0300 + 450 | 0 | 0450 +(4 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1 WHERE t1.b IN (SELECT (t1.a + t1.b)/2 FROM prt1_e t1 WHERE t1.c = 0)) AND t1.b = 0 ORDER BY t1.a; + QUERY PLAN +------------------------------------------------------------------------------- + Sort + Sort Key: t1.a + -> Append + -> Nested Loop + -> HashAggregate + Group Key: t1_3.b + -> Hash Semi Join + Hash Cond: (t1_3.b = ((t1_6.a + t1_6.b) / 2)) + -> Seq Scan on prt2_p1 t1_3 + -> Hash + -> Seq Scan on prt1_e_p1 t1_6 + Filter: (c = 0) + -> Index Scan using iprt1_p1_a on prt1_p1 t1 + Index Cond: (a = t1_3.b) + Filter: (b = 0) + -> Nested Loop + -> HashAggregate + Group Key: t1_4.b + -> Hash Semi Join + Hash Cond: (t1_4.b = ((t1_7.a + t1_7.b) / 2)) + -> Seq Scan on prt2_p2 t1_4 + -> Hash + -> Seq Scan on prt1_e_p2 t1_7 + Filter: (c = 0) + -> Index Scan using iprt1_p2_a on prt1_p2 t1_1 + Index Cond: (a = t1_4.b) + Filter: (b = 0) + -> Nested Loop + -> Unique + -> Sort + Sort Key: t1_5.b + -> Hash Semi Join + Hash Cond: (t1_5.b = ((t1_8.a + t1_8.b) / 2)) + -> Seq Scan on prt2_p3 t1_5 + -> Hash + -> Seq Scan on prt1_e_p3 t1_8 + Filter: (c = 0) + -> Index Scan using iprt1_p3_a on prt1_p3 t1_2 + Index Cond: (a = t1_5.b) + Filter: (b = 0) +(40 rows) + +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1 WHERE t1.b IN (SELECT (t1.a + t1.b)/2 FROM prt1_e t1 WHERE t1.c = 0)) AND t1.b = 0 ORDER BY t1.a; + a | b | c +-----+---+------ + 0 | 0 | 0000 + 150 | 0 | 0150 + 300 | 0 | 0300 + 450 | 0 | 0450 +(4 rows) + +-- test merge joins +SET enable_hashjoin TO off; +SET enable_nestloop TO off; +EXPLAIN (COSTS OFF) +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1 WHERE t1.b IN (SELECT (t1.a + t1.b)/2 FROM prt1_e t1 WHERE t1.c = 0)) AND t1.b = 0 ORDER BY t1.a; + QUERY PLAN 
+---------------------------------------------------------------- + Merge Append + Sort Key: t1.a + -> Merge Semi Join + Merge Cond: (t1.a = t1_3.b) + -> Sort + Sort Key: t1.a + -> Seq Scan on prt1_p1 t1 + Filter: (b = 0) + -> Merge Semi Join + Merge Cond: (t1_3.b = (((t1_6.a + t1_6.b) / 2))) + -> Sort + Sort Key: t1_3.b + -> Seq Scan on prt2_p1 t1_3 + -> Sort + Sort Key: (((t1_6.a + t1_6.b) / 2)) + -> Seq Scan on prt1_e_p1 t1_6 + Filter: (c = 0) + -> Merge Semi Join + Merge Cond: (t1_1.a = t1_4.b) + -> Sort + Sort Key: t1_1.a + -> Seq Scan on prt1_p2 t1_1 + Filter: (b = 0) + -> Merge Semi Join + Merge Cond: (t1_4.b = (((t1_7.a + t1_7.b) / 2))) + -> Sort + Sort Key: t1_4.b + -> Seq Scan on prt2_p2 t1_4 + -> Sort + Sort Key: (((t1_7.a + t1_7.b) / 2)) + -> Seq Scan on prt1_e_p2 t1_7 + Filter: (c = 0) + -> Merge Semi Join + Merge Cond: (t1_2.a = t1_5.b) + -> Sort + Sort Key: t1_2.a + -> Seq Scan on prt1_p3 t1_2 + Filter: (b = 0) + -> Merge Semi Join + Merge Cond: (t1_5.b = (((t1_8.a + t1_8.b) / 2))) + -> Sort + Sort Key: t1_5.b + -> Seq Scan on prt2_p3 t1_5 + -> Sort + Sort Key: (((t1_8.a + t1_8.b) / 2)) + -> Seq Scan on prt1_e_p3 t1_8 + Filter: (c = 0) +(47 rows) + +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1 WHERE t1.b IN (SELECT (t1.a + t1.b)/2 FROM prt1_e t1 WHERE t1.c = 0)) AND t1.b = 0 ORDER BY t1.a; + a | b | c +-----+---+------ + 0 | 0 | 0000 + 150 | 0 | 0150 + 300 | 0 | 0300 + 450 | 0 | 0450 +(4 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) RIGHT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t3.c = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; + QUERY PLAN +---------------------------------------------------------------------------------- + Sort + Sort Key: t1.a, t2.b, ((t3.a + t3.b)) + -> Result + -> Append + -> Merge Left Join + Merge Cond: (t1.a = t2.b) + -> Sort + Sort Key: t1.a + -> Merge Left Join + Merge Cond: ((((t3.a + t3.b) / 2)) = t1.a) + -> Sort + Sort Key: (((t3.a + t3.b) / 2)) + -> Seq Scan on prt1_e_p1 t3 + Filter: (c = 0) + -> Sort + Sort Key: t1.a + -> Seq Scan on prt1_p1 t1 + -> Sort + Sort Key: t2.b + -> Seq Scan on prt2_p1 t2 + -> Merge Left Join + Merge Cond: (t1_1.a = t2_1.b) + -> Sort + Sort Key: t1_1.a + -> Merge Left Join + Merge Cond: ((((t3_1.a + t3_1.b) / 2)) = t1_1.a) + -> Sort + Sort Key: (((t3_1.a + t3_1.b) / 2)) + -> Seq Scan on prt1_e_p2 t3_1 + Filter: (c = 0) + -> Sort + Sort Key: t1_1.a + -> Seq Scan on prt1_p2 t1_1 + -> Sort + Sort Key: t2_1.b + -> Seq Scan on prt2_p2 t2_1 + -> Merge Left Join + Merge Cond: (t1_2.a = t2_2.b) + -> Sort + Sort Key: t1_2.a + -> Merge Left Join + Merge Cond: ((((t3_2.a + t3_2.b) / 2)) = t1_2.a) + -> Sort + Sort Key: (((t3_2.a + t3_2.b) / 2)) + -> Seq Scan on prt1_e_p3 t3_2 + Filter: (c = 0) + -> Sort + Sort Key: t1_2.a + -> Seq Scan on prt1_p3 t1_2 + -> Sort + Sort Key: t2_2.b + -> Seq Scan on prt2_p3 t2_2 +(52 rows) + +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) RIGHT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t3.c = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; + a | c | b | c | ?column? 
| c +-----+------+-----+------+----------+--- + 0 | 0000 | 0 | 0000 | 0 | 0 + 50 | 0050 | | | 100 | 0 + 100 | 0100 | | | 200 | 0 + 150 | 0150 | 150 | 0150 | 300 | 0 + 200 | 0200 | | | 400 | 0 + 250 | 0250 | | | 500 | 0 + 300 | 0300 | 300 | 0300 | 600 | 0 + 350 | 0350 | | | 700 | 0 + 400 | 0400 | | | 800 | 0 + 450 | 0450 | 450 | 0450 | 900 | 0 + 500 | 0500 | | | 1000 | 0 + 550 | 0550 | | | 1100 | 0 +(12 rows) + +-- MergeAppend on nullable column +EXPLAIN (COSTS OFF) +SELECT t1.a, t2.b FROM (SELECT * FROM prt1 WHERE a < 450) t1 LEFT JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +----------------------------------------------------------- + Sort + Sort Key: prt1_p1.a, b + -> Append + -> Merge Left Join + Merge Cond: (prt1_p1.a = b) + -> Sort + Sort Key: prt1_p1.a + -> Seq Scan on prt1_p1 + Filter: ((a < 450) AND (b = 0)) + -> Sort + Sort Key: b + -> Result + One-Time Filter: false + -> Merge Left Join + Merge Cond: (prt1_p2.a = prt2_p2.b) + -> Sort + Sort Key: prt1_p2.a + -> Seq Scan on prt1_p2 + Filter: ((a < 450) AND (b = 0)) + -> Sort + Sort Key: prt2_p2.b + -> Seq Scan on prt2_p2 + Filter: (b > 250) +(23 rows) + +SELECT t1.a, t2.b FROM (SELECT * FROM prt1 WHERE a < 450) t1 LEFT JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; + a | b +-----+----- + 0 | + 50 | + 100 | + 150 | + 200 | + 250 | + 300 | 300 + 350 | + 400 | +(9 rows) + +RESET enable_hashjoin; +RESET enable_nestloop; +-- +-- partitioned by multiple columns +-- +CREATE TABLE prt1_m (a int, b int, c int) PARTITION BY RANGE(a, ((a + b)/2)); +CREATE TABLE prt1_m_p1 PARTITION OF prt1_m FOR VALUES FROM (0, 0) TO (250, 250); +CREATE TABLE prt1_m_p2 PARTITION OF prt1_m FOR VALUES FROM (250, 250) TO (500, 500); +CREATE TABLE prt1_m_p3 PARTITION OF prt1_m FOR VALUES FROM (500, 500) TO (600, 600); +INSERT INTO prt1_m SELECT i, i, i % 25 FROM generate_series(0, 599, 2) i; +ANALYZE prt1_m; +CREATE TABLE prt2_m (a int, b int, c int) PARTITION BY RANGE(((b + a)/2), b); +CREATE TABLE prt2_m_p1 PARTITION OF prt2_m FOR VALUES FROM (0, 0) TO (250, 250); +CREATE TABLE prt2_m_p2 PARTITION OF prt2_m FOR VALUES FROM (250, 250) TO (500, 500); +CREATE TABLE prt2_m_p3 PARTITION OF prt2_m FOR VALUES FROM (500, 500) TO (600, 600); +INSERT INTO prt2_m SELECT i, i, i % 25 FROM generate_series(0, 599, 3) i; +ANALYZE prt2_m; +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_m WHERE prt1_m.c = 0) t1 FULL JOIN (SELECT * FROM prt2_m WHERE prt2_m.c = 0) t2 ON (t1.a = (t2.b + t2.a)/2 AND t2.b = (t1.a + t1.b)/2) ORDER BY t1.a, t2.b; + QUERY PLAN +------------------------------------------------------------------------------------------------------------------------------------ + Sort + Sort Key: prt1_m_p1.a, prt2_m_p1.b + -> Append + -> Hash Full Join + Hash Cond: ((prt1_m_p1.a = ((prt2_m_p1.b + prt2_m_p1.a) / 2)) AND (((prt1_m_p1.a + prt1_m_p1.b) / 2) = prt2_m_p1.b)) + -> Seq Scan on prt1_m_p1 + Filter: (c = 0) + -> Hash + -> Seq Scan on prt2_m_p1 + Filter: (c = 0) + -> Hash Full Join + Hash Cond: ((prt1_m_p2.a = ((prt2_m_p2.b + prt2_m_p2.a) / 2)) AND (((prt1_m_p2.a + prt1_m_p2.b) / 2) = prt2_m_p2.b)) + -> Seq Scan on prt1_m_p2 + Filter: (c = 0) + -> Hash + -> Seq Scan on prt2_m_p2 + Filter: (c = 0) + -> Hash Full Join + Hash Cond: ((prt1_m_p3.a = ((prt2_m_p3.b + prt2_m_p3.a) / 2)) AND (((prt1_m_p3.a + prt1_m_p3.b) / 2) = prt2_m_p3.b)) + -> Seq Scan on prt1_m_p3 + Filter: (c = 0) + -> Hash + -> Seq Scan on prt2_m_p3 + Filter: (c = 0) 
+(24 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_m WHERE prt1_m.c = 0) t1 FULL JOIN (SELECT * FROM prt2_m WHERE prt2_m.c = 0) t2 ON (t1.a = (t2.b + t2.a)/2 AND t2.b = (t1.a + t1.b)/2) ORDER BY t1.a, t2.b; + a | c | b | c +-----+---+-----+--- + 0 | 0 | 0 | 0 + 50 | 0 | | + 100 | 0 | | + 150 | 0 | 150 | 0 + 200 | 0 | | + 250 | 0 | | + 300 | 0 | 300 | 0 + 350 | 0 | | + 400 | 0 | | + 450 | 0 | 450 | 0 + 500 | 0 | | + 550 | 0 | | + | | 75 | 0 + | | 225 | 0 + | | 375 | 0 + | | 525 | 0 +(16 rows) + +-- +-- tests for list partitioned tables. +-- +CREATE TABLE plt1 (a int, b int, c text) PARTITION BY LIST(c); +CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN ('0000', '0003', '0004', '0010'); +CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN ('0001', '0005', '0002', '0009'); +CREATE TABLE plt1_p3 PARTITION OF plt1 FOR VALUES IN ('0006', '0007', '0008', '0011'); +INSERT INTO plt1 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE plt1; +CREATE TABLE plt2 (a int, b int, c text) PARTITION BY LIST(c); +CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN ('0000', '0003', '0004', '0010'); +CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN ('0001', '0005', '0002', '0009'); +CREATE TABLE plt2_p3 PARTITION OF plt2 FOR VALUES IN ('0006', '0007', '0008', '0011'); +INSERT INTO plt2 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 3) i; +ANALYZE plt2; +-- +-- list partitioned by expression +-- +CREATE TABLE plt1_e (a int, b int, c text) PARTITION BY LIST(ltrim(c, 'A')); +CREATE TABLE plt1_e_p1 PARTITION OF plt1_e FOR VALUES IN ('0000', '0003', '0004', '0010'); +CREATE TABLE plt1_e_p2 PARTITION OF plt1_e FOR VALUES IN ('0001', '0005', '0002', '0009'); +CREATE TABLE plt1_e_p3 PARTITION OF plt1_e FOR VALUES IN ('0006', '0007', '0008', '0011'); +INSERT INTO plt1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE plt1_e; +-- test partition matching with N-way join +EXPLAIN (COSTS OFF) +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; + QUERY PLAN +-------------------------------------------------------------------------------------- + Sort + Sort Key: t1.c, t3.c + -> HashAggregate + Group Key: t1.c, t2.c, t3.c + -> Result + -> Append + -> Hash Join + Hash Cond: (t1.c = t2.c) + -> Seq Scan on plt1_p1 t1 + -> Hash + -> Hash Join + Hash Cond: (t2.c = ltrim(t3.c, 'A'::text)) + -> Seq Scan on plt2_p1 t2 + -> Hash + -> Seq Scan on plt1_e_p1 t3 + -> Hash Join + Hash Cond: (t1_1.c = t2_1.c) + -> Seq Scan on plt1_p2 t1_1 + -> Hash + -> Hash Join + Hash Cond: (t2_1.c = ltrim(t3_1.c, 'A'::text)) + -> Seq Scan on plt2_p2 t2_1 + -> Hash + -> Seq Scan on plt1_e_p2 t3_1 + -> Hash Join + Hash Cond: (t1_2.c = t2_2.c) + -> Seq Scan on plt1_p3 t1_2 + -> Hash + -> Hash Join + Hash Cond: (t2_2.c = ltrim(t3_2.c, 'A'::text)) + -> Seq Scan on plt2_p3 t2_2 + -> Hash + -> Seq Scan on plt1_e_p3 t3_2 +(33 rows) + +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; + avg | avg | avg | c | c | c +----------------------+----------------------+-----------------------+------+------+------- + 24.0000000000000000 | 24.0000000000000000 | 48.0000000000000000 | 0000 | 0000 | A0000 + 74.0000000000000000 | 75.0000000000000000 | 148.0000000000000000 | 
0001 | 0001 | A0001 + 124.0000000000000000 | 124.5000000000000000 | 248.0000000000000000 | 0002 | 0002 | A0002 + 174.0000000000000000 | 174.0000000000000000 | 348.0000000000000000 | 0003 | 0003 | A0003 + 224.0000000000000000 | 225.0000000000000000 | 448.0000000000000000 | 0004 | 0004 | A0004 + 274.0000000000000000 | 274.5000000000000000 | 548.0000000000000000 | 0005 | 0005 | A0005 + 324.0000000000000000 | 324.0000000000000000 | 648.0000000000000000 | 0006 | 0006 | A0006 + 374.0000000000000000 | 375.0000000000000000 | 748.0000000000000000 | 0007 | 0007 | A0007 + 424.0000000000000000 | 424.5000000000000000 | 848.0000000000000000 | 0008 | 0008 | A0008 + 474.0000000000000000 | 474.0000000000000000 | 948.0000000000000000 | 0009 | 0009 | A0009 + 524.0000000000000000 | 525.0000000000000000 | 1048.0000000000000000 | 0010 | 0010 | A0010 + 574.0000000000000000 | 574.5000000000000000 | 1148.0000000000000000 | 0011 | 0011 | A0011 +(12 rows) + +-- joins where one of the relations is proven empty +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.a = 1 AND t1.a = 2; + QUERY PLAN +-------------------------- + Result + One-Time Filter: false +(2 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 LEFT JOIN prt2 t2 ON t1.a = t2.b; + QUERY PLAN +-------------------------- + Result + One-Time Filter: false +(2 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 RIGHT JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +-------------------------------------------- + Sort + Sort Key: a, t2.b + -> Hash Left Join + Hash Cond: (t2.b = a) + -> Append + -> Seq Scan on prt2_p1 t2 + Filter: (a = 0) + -> Seq Scan on prt2_p2 t2_1 + Filter: (a = 0) + -> Seq Scan on prt2_p3 t2_2 + Filter: (a = 0) + -> Hash + -> Result + One-Time Filter: false +(14 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 FULL JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +-------------------------------------------- + Sort + Sort Key: a, t2.b + -> Hash Left Join + Hash Cond: (t2.b = a) + -> Append + -> Seq Scan on prt2_p1 t2 + Filter: (a = 0) + -> Seq Scan on prt2_p2 t2_1 + Filter: (a = 0) + -> Seq Scan on prt2_p3 t2_2 + Filter: (a = 0) + -> Hash + -> Result + One-Time Filter: false +(14 rows) + +-- +-- multiple levels of partitioning +-- +CREATE TABLE prt1_l (a int, b int, c varchar) PARTITION BY RANGE(a); +CREATE TABLE prt1_l_p1 PARTITION OF prt1_l FOR VALUES FROM (0) TO (250); +CREATE TABLE prt1_l_p2 PARTITION OF prt1_l FOR VALUES FROM (250) TO (500) PARTITION BY LIST (c); +CREATE TABLE prt1_l_p2_p1 PARTITION OF prt1_l_p2 FOR VALUES IN ('0000', '0001'); +CREATE TABLE prt1_l_p2_p2 PARTITION OF prt1_l_p2 FOR VALUES IN ('0002', '0003'); +CREATE TABLE prt1_l_p3 PARTITION OF prt1_l FOR VALUES FROM (500) TO (600) PARTITION BY RANGE (b); +CREATE TABLE prt1_l_p3_p1 PARTITION OF prt1_l_p3 FOR VALUES FROM (0) TO (13); +CREATE TABLE prt1_l_p3_p2 PARTITION OF prt1_l_p3 FOR VALUES FROM (13) TO (25); +INSERT INTO prt1_l SELECT i, i % 25, to_char(i % 4, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE prt1_l; +CREATE TABLE prt2_l (a int, b int, c varchar) PARTITION BY RANGE(b); +CREATE TABLE prt2_l_p1 PARTITION OF prt2_l FOR VALUES FROM (0) TO (250); +CREATE TABLE prt2_l_p2 PARTITION OF prt2_l FOR VALUES FROM (250) TO (500) PARTITION BY LIST (c); +CREATE TABLE prt2_l_p2_p1 
PARTITION OF prt2_l_p2 FOR VALUES IN ('0000', '0001'); +CREATE TABLE prt2_l_p2_p2 PARTITION OF prt2_l_p2 FOR VALUES IN ('0002', '0003'); +CREATE TABLE prt2_l_p3 PARTITION OF prt2_l FOR VALUES FROM (500) TO (600) PARTITION BY RANGE (a); +CREATE TABLE prt2_l_p3_p1 PARTITION OF prt2_l_p3 FOR VALUES FROM (0) TO (13); +CREATE TABLE prt2_l_p3_p2 PARTITION OF prt2_l_p3 FOR VALUES FROM (13) TO (25); +INSERT INTO prt2_l SELECT i % 25, i, to_char(i % 4, 'FM0000') FROM generate_series(0, 599, 3) i; +ANALYZE prt2_l; +-- inner join, qual covering only top-level partitions +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1, prt2_l t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +------------------------------------------------------------- + Sort + Sort Key: t1.a + -> Append + -> Hash Join + Hash Cond: (t2.b = t1.a) + -> Seq Scan on prt2_l_p1 t2 + -> Hash + -> Seq Scan on prt1_l_p1 t1 + Filter: (b = 0) + -> Hash Join + Hash Cond: (t2_1.b = t1_1.a) + -> Append + -> Seq Scan on prt2_l_p2_p1 t2_1 + -> Seq Scan on prt2_l_p2_p2 t2_2 + -> Hash + -> Append + -> Seq Scan on prt1_l_p2_p1 t1_1 + Filter: (b = 0) + -> Seq Scan on prt1_l_p2_p2 t1_2 + Filter: (b = 0) + -> Hash Join + Hash Cond: (t2_3.b = t1_3.a) + -> Append + -> Seq Scan on prt2_l_p3_p1 t2_3 + -> Seq Scan on prt2_l_p3_p2 t2_4 + -> Hash + -> Append + -> Seq Scan on prt1_l_p3_p1 t1_3 + Filter: (b = 0) +(29 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1, prt2_l t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; + a | c | b | c +-----+------+-----+------ + 0 | 0000 | 0 | 0000 + 150 | 0002 | 150 | 0002 + 300 | 0000 | 300 | 0000 + 450 | 0002 | 450 | 0002 +(4 rows) + +-- left join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1 LEFT JOIN prt2_l t2 ON t1.a = t2.b AND t1.c = t2.c WHERE t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +------------------------------------------------------------------------------------ + Sort + Sort Key: t1.a, t2.b + -> Append + -> Hash Right Join + Hash Cond: ((t2.b = t1.a) AND ((t2.c)::text = (t1.c)::text)) + -> Seq Scan on prt2_l_p1 t2 + -> Hash + -> Seq Scan on prt1_l_p1 t1 + Filter: (b = 0) + -> Hash Right Join + Hash Cond: ((t2_1.b = t1_1.a) AND ((t2_1.c)::text = (t1_1.c)::text)) + -> Seq Scan on prt2_l_p2_p1 t2_1 + -> Hash + -> Seq Scan on prt1_l_p2_p1 t1_1 + Filter: (b = 0) + -> Hash Right Join + Hash Cond: ((t2_2.b = t1_2.a) AND ((t2_2.c)::text = (t1_2.c)::text)) + -> Seq Scan on prt2_l_p2_p2 t2_2 + -> Hash + -> Seq Scan on prt1_l_p2_p2 t1_2 + Filter: (b = 0) + -> Hash Right Join + Hash Cond: ((t2_3.b = t1_3.a) AND ((t2_3.c)::text = (t1_3.c)::text)) + -> Append + -> Seq Scan on prt2_l_p3_p1 t2_3 + -> Seq Scan on prt2_l_p3_p2 t2_4 + -> Hash + -> Append + -> Seq Scan on prt1_l_p3_p1 t1_3 + Filter: (b = 0) +(30 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1 LEFT JOIN prt2_l t2 ON t1.a = t2.b AND t1.c = t2.c WHERE t1.b = 0 ORDER BY t1.a, t2.b; + a | c | b | c +-----+------+-----+------ + 0 | 0000 | 0 | 0000 + 50 | 0002 | | + 100 | 0000 | | + 150 | 0002 | 150 | 0002 + 200 | 0000 | | + 250 | 0002 | | + 300 | 0000 | 300 | 0000 + 350 | 0002 | | + 400 | 0000 | | + 450 | 0002 | 450 | 0002 + 500 | 0000 | | + 550 | 0002 | | +(12 rows) + +-- right join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1 RIGHT JOIN prt2_l t2 ON t1.a = t2.b AND t1.c = t2.c WHERE t2.a = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +------------------------------------------------------------------------------------------ + Sort + Sort Key: t1.a, t2.b + -> Result + -> Append + -> 
Hash Right Join + Hash Cond: ((t1.a = t2.b) AND ((t1.c)::text = (t2.c)::text)) + -> Seq Scan on prt1_l_p1 t1 + -> Hash + -> Seq Scan on prt2_l_p1 t2 + Filter: (a = 0) + -> Hash Right Join + Hash Cond: ((t1_1.a = t2_1.b) AND ((t1_1.c)::text = (t2_1.c)::text)) + -> Seq Scan on prt1_l_p2_p1 t1_1 + -> Hash + -> Seq Scan on prt2_l_p2_p1 t2_1 + Filter: (a = 0) + -> Hash Right Join + Hash Cond: ((t1_2.a = t2_2.b) AND ((t1_2.c)::text = (t2_2.c)::text)) + -> Seq Scan on prt1_l_p2_p2 t1_2 + -> Hash + -> Seq Scan on prt2_l_p2_p2 t2_2 + Filter: (a = 0) + -> Hash Right Join + Hash Cond: ((t1_3.a = t2_3.b) AND ((t1_3.c)::text = (t2_3.c)::text)) + -> Append + -> Seq Scan on prt1_l_p3_p1 t1_3 + -> Seq Scan on prt1_l_p3_p2 t1_4 + -> Hash + -> Append + -> Seq Scan on prt2_l_p3_p1 t2_3 + Filter: (a = 0) +(31 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1 RIGHT JOIN prt2_l t2 ON t1.a = t2.b AND t1.c = t2.c WHERE t2.a = 0 ORDER BY t1.a, t2.b; + a | c | b | c +-----+------+-----+------ + 0 | 0000 | 0 | 0000 + 150 | 0002 | 150 | 0002 + 300 | 0000 | 300 | 0000 + 450 | 0002 | 450 | 0002 + | | 75 | 0003 + | | 225 | 0001 + | | 375 | 0003 + | | 525 | 0001 +(8 rows) + +-- full join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE prt1_l.b = 0) t1 FULL JOIN (SELECT * FROM prt2_l WHERE prt2_l.a = 0) t2 ON (t1.a = t2.b AND t1.c = t2.c) ORDER BY t1.a, t2.b; + QUERY PLAN +-------------------------------------------------------------------------------------------------------------------- + Sort + Sort Key: prt1_l_p1.a, prt2_l_p1.b + -> Append + -> Hash Full Join + Hash Cond: ((prt1_l_p1.a = prt2_l_p1.b) AND ((prt1_l_p1.c)::text = (prt2_l_p1.c)::text)) + -> Seq Scan on prt1_l_p1 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_l_p1 + Filter: (a = 0) + -> Hash Full Join + Hash Cond: ((prt1_l_p2_p1.a = prt2_l_p2_p1.b) AND ((prt1_l_p2_p1.c)::text = (prt2_l_p2_p1.c)::text)) + -> Seq Scan on prt1_l_p2_p1 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_l_p2_p1 + Filter: (a = 0) + -> Hash Full Join + Hash Cond: ((prt1_l_p2_p2.a = prt2_l_p2_p2.b) AND ((prt1_l_p2_p2.c)::text = (prt2_l_p2_p2.c)::text)) + -> Seq Scan on prt1_l_p2_p2 + Filter: (b = 0) + -> Hash + -> Seq Scan on prt2_l_p2_p2 + Filter: (a = 0) + -> Hash Full Join + Hash Cond: ((prt1_l_p3_p1.a = prt2_l_p3_p1.b) AND ((prt1_l_p3_p1.c)::text = (prt2_l_p3_p1.c)::text)) + -> Append + -> Seq Scan on prt1_l_p3_p1 + Filter: (b = 0) + -> Hash + -> Append + -> Seq Scan on prt2_l_p3_p1 + Filter: (a = 0) +(33 rows) + +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE prt1_l.b = 0) t1 FULL JOIN (SELECT * FROM prt2_l WHERE prt2_l.a = 0) t2 ON (t1.a = t2.b AND t1.c = t2.c) ORDER BY t1.a, t2.b; + a | c | b | c +-----+------+-----+------ + 0 | 0000 | 0 | 0000 + 50 | 0002 | | + 100 | 0000 | | + 150 | 0002 | 150 | 0002 + 200 | 0000 | | + 250 | 0002 | | + 300 | 0000 | 300 | 0000 + 350 | 0002 | | + 400 | 0000 | | + 450 | 0002 | 450 | 0002 + 500 | 0000 | | + 550 | 0002 | | + | | 75 | 0003 + | | 225 | 0001 + | | 375 | 0003 + | | 525 | 0001 +(16 rows) + +-- lateral partition-wise join +EXPLAIN (COSTS OFF) +SELECT * FROM prt1_l t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM prt1_l t2 JOIN prt2_l t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss + ON t1.a = ss.t2a AND t1.c = ss.t2c WHERE t1.b = 0 ORDER BY t1.a; + QUERY PLAN +----------------------------------------------------------------------------------------------------- + Sort + Sort Key: t1.a + -> Result + -> Append + -> Nested Loop 
Left Join + -> Seq Scan on prt1_l_p1 t1 + Filter: (b = 0) + -> Hash Join + Hash Cond: ((t3.b = t2.a) AND ((t3.c)::text = (t2.c)::text)) + -> Seq Scan on prt2_l_p1 t3 + -> Hash + -> Seq Scan on prt1_l_p1 t2 + Filter: ((t1.a = a) AND ((t1.c)::text = (c)::text)) + -> Nested Loop Left Join + -> Seq Scan on prt1_l_p2_p1 t1_1 + Filter: (b = 0) + -> Hash Join + Hash Cond: ((t3_1.b = t2_1.a) AND ((t3_1.c)::text = (t2_1.c)::text)) + -> Seq Scan on prt2_l_p2_p1 t3_1 + -> Hash + -> Seq Scan on prt1_l_p2_p1 t2_1 + Filter: ((t1_1.a = a) AND ((t1_1.c)::text = (c)::text)) + -> Nested Loop Left Join + -> Seq Scan on prt1_l_p2_p2 t1_2 + Filter: (b = 0) + -> Hash Join + Hash Cond: ((t3_2.b = t2_2.a) AND ((t3_2.c)::text = (t2_2.c)::text)) + -> Seq Scan on prt2_l_p2_p2 t3_2 + -> Hash + -> Seq Scan on prt1_l_p2_p2 t2_2 + Filter: ((t1_2.a = a) AND ((t1_2.c)::text = (c)::text)) + -> Nested Loop Left Join + -> Append + -> Seq Scan on prt1_l_p3_p1 t1_3 + Filter: (b = 0) + -> Hash Join + Hash Cond: ((t3_3.b = t2_3.a) AND ((t3_3.c)::text = (t2_3.c)::text)) + -> Append + -> Seq Scan on prt2_l_p3_p1 t3_3 + -> Seq Scan on prt2_l_p3_p2 t3_4 + -> Hash + -> Append + -> Seq Scan on prt1_l_p3_p1 t2_3 + Filter: ((t1_3.a = a) AND ((t1_3.c)::text = (c)::text)) + -> Seq Scan on prt1_l_p3_p2 t2_4 + Filter: ((t1_3.a = a) AND ((t1_3.c)::text = (c)::text)) +(46 rows) + +SELECT * FROM prt1_l t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM prt1_l t2 JOIN prt2_l t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss + ON t1.a = ss.t2a AND t1.c = ss.t2c WHERE t1.b = 0 ORDER BY t1.a; + a | b | c | t2a | t2c | t2b | t3b | least +-----+---+------+-----+------+-----+-----+------- + 0 | 0 | 0000 | 0 | 0000 | 0 | 0 | 0 + 50 | 0 | 0002 | | | | | + 100 | 0 | 0000 | | | | | + 150 | 0 | 0002 | 150 | 0002 | 0 | 150 | 150 + 200 | 0 | 0000 | | | | | + 250 | 0 | 0002 | | | | | + 300 | 0 | 0000 | 300 | 0000 | 0 | 300 | 300 + 350 | 0 | 0002 | | | | | + 400 | 0 | 0000 | | | | | + 450 | 0 | 0002 | 450 | 0002 | 0 | 450 | 450 + 500 | 0 | 0000 | | | | | + 550 | 0 | 0002 | | | | | +(12 rows) + +-- join with one side empty +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE a = 1 AND a = 2) t1 RIGHT JOIN prt2_l t2 ON t1.a = t2.b AND t1.b = t2.a AND t1.c = t2.c; + QUERY PLAN +------------------------------------------------------------------------- + Hash Left Join + Hash Cond: ((t2.b = a) AND (t2.a = b) AND ((t2.c)::text = (c)::text)) + -> Append + -> Seq Scan on prt2_l_p1 t2 + -> Seq Scan on prt2_l_p2_p1 t2_1 + -> Seq Scan on prt2_l_p2_p2 t2_2 + -> Seq Scan on prt2_l_p3_p1 t2_3 + -> Seq Scan on prt2_l_p3_p2 t2_4 + -> Hash + -> Result + One-Time Filter: false +(11 rows) + +-- +-- negative testcases +-- +CREATE TABLE prt1_n (a int, b int, c varchar) PARTITION BY RANGE(c); +CREATE TABLE prt1_n_p1 PARTITION OF prt1_n FOR VALUES FROM ('0000') TO ('0250'); +CREATE TABLE prt1_n_p2 PARTITION OF prt1_n FOR VALUES FROM ('0250') TO ('0500'); +INSERT INTO prt1_n SELECT i, i, to_char(i, 'FM0000') FROM generate_series(0, 499, 2) i; +ANALYZE prt1_n; +CREATE TABLE prt2_n (a int, b int, c text) PARTITION BY LIST(c); +CREATE TABLE prt2_n_p1 PARTITION OF prt2_n FOR VALUES IN ('0000', '0003', '0004', '0010', '0006', '0007'); +CREATE TABLE prt2_n_p2 PARTITION OF prt2_n FOR VALUES IN ('0001', '0005', '0002', '0009', '0008', '0011'); +INSERT INTO prt2_n SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE prt2_n; +CREATE TABLE prt3_n (a int, b int, c text) PARTITION BY 
LIST(c); +CREATE TABLE prt3_n_p1 PARTITION OF prt3_n FOR VALUES IN ('0000', '0004', '0006', '0007'); +CREATE TABLE prt3_n_p2 PARTITION OF prt3_n FOR VALUES IN ('0001', '0002', '0008', '0010'); +CREATE TABLE prt3_n_p3 PARTITION OF prt3_n FOR VALUES IN ('0003', '0005', '0009', '0011'); +INSERT INTO prt2_n SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE prt3_n; +CREATE TABLE prt4_n (a int, b int, c text) PARTITION BY RANGE(a); +CREATE TABLE prt4_n_p1 PARTITION OF prt4_n FOR VALUES FROM (0) TO (300); +CREATE TABLE prt4_n_p2 PARTITION OF prt4_n FOR VALUES FROM (300) TO (500); +CREATE TABLE prt4_n_p3 PARTITION OF prt4_n FOR VALUES FROM (500) TO (600); +INSERT INTO prt4_n SELECT i, i, to_char(i, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE prt4_n; +-- partition-wise join can not be applied if the partition ranges differ +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2 WHERE t1.a = t2.a; + QUERY PLAN +---------------------------------------------- + Hash Join + Hash Cond: (t1.a = t2.a) + -> Append + -> Seq Scan on prt1_p1 t1 + -> Seq Scan on prt1_p2 t1_1 + -> Seq Scan on prt1_p3 t1_2 + -> Hash + -> Append + -> Seq Scan on prt4_n_p1 t2 + -> Seq Scan on prt4_n_p2 t2_1 + -> Seq Scan on prt4_n_p3 t2_2 +(11 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2, prt2 t3 WHERE t1.a = t2.a and t1.a = t3.b; + QUERY PLAN +-------------------------------------------------------- + Hash Join + Hash Cond: (t2.a = t1.a) + -> Append + -> Seq Scan on prt4_n_p1 t2 + -> Seq Scan on prt4_n_p2 t2_1 + -> Seq Scan on prt4_n_p3 t2_2 + -> Hash + -> Append + -> Hash Join + Hash Cond: (t1.a = t3.b) + -> Seq Scan on prt1_p1 t1 + -> Hash + -> Seq Scan on prt2_p1 t3 + -> Hash Join + Hash Cond: (t1_1.a = t3_1.b) + -> Seq Scan on prt1_p2 t1_1 + -> Hash + -> Seq Scan on prt2_p2 t3_1 + -> Hash Join + Hash Cond: (t1_2.a = t3_2.b) + -> Seq Scan on prt1_p3 t1_2 + -> Hash + -> Seq Scan on prt2_p3 t3_2 +(23 rows) + +-- partition-wise join can not be applied if there are no equi-join conditions +-- between partition keys +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 LEFT JOIN prt2 t2 ON (t1.a < t2.b); + QUERY PLAN +--------------------------------------------------------- + Nested Loop Left Join + -> Append + -> Seq Scan on prt1_p1 t1 + -> Seq Scan on prt1_p2 t1_1 + -> Seq Scan on prt1_p3 t1_2 + -> Append + -> Index Scan using iprt2_p1_b on prt2_p1 t2 + Index Cond: (t1.a < b) + -> Index Scan using iprt2_p2_b on prt2_p2 t2_1 + Index Cond: (t1.a < b) + -> Index Scan using iprt2_p3_b on prt2_p3 t2_2 + Index Cond: (t1.a < b) +(12 rows) + +-- equi-join with join condition on partial keys does not qualify for +-- partition-wise join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1, prt2_m t2 WHERE t1.a = (t2.b + t2.a)/2; + QUERY PLAN +---------------------------------------------- + Hash Join + Hash Cond: (((t2.b + t2.a) / 2) = t1.a) + -> Append + -> Seq Scan on prt2_m_p1 t2 + -> Seq Scan on prt2_m_p2 t2_1 + -> Seq Scan on prt2_m_p3 t2_2 + -> Hash + -> Append + -> Seq Scan on prt1_m_p1 t1 + -> Seq Scan on prt1_m_p2 t1_1 + -> Seq Scan on prt1_m_p3 t1_2 +(11 rows) + +-- equi-join between out-of-order partition key columns does not qualify for +-- partition-wise join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.a = t2.b; + QUERY PLAN +---------------------------------------------- + Hash Left Join + Hash Cond: (t1.a = t2.b) + -> Append + -> Seq Scan on prt1_m_p1 
t1 + -> Seq Scan on prt1_m_p2 t1_1 + -> Seq Scan on prt1_m_p3 t1_2 + -> Hash + -> Append + -> Seq Scan on prt2_m_p1 t2 + -> Seq Scan on prt2_m_p2 t2_1 + -> Seq Scan on prt2_m_p3 t2_2 +(11 rows) + +-- equi-join between non-key columns does not qualify for partition-wise join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.c = t2.c; + QUERY PLAN +---------------------------------------------- + Hash Left Join + Hash Cond: (t1.c = t2.c) + -> Append + -> Seq Scan on prt1_m_p1 t1 + -> Seq Scan on prt1_m_p2 t1_1 + -> Seq Scan on prt1_m_p3 t1_2 + -> Hash + -> Append + -> Seq Scan on prt2_m_p1 t2 + -> Seq Scan on prt2_m_p2 t2_1 + -> Seq Scan on prt2_m_p3 t2_2 +(11 rows) + +-- partition-wise join can not be applied between tables with different +-- partition lists +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 LEFT JOIN prt2_n t2 ON (t1.c = t2.c); + QUERY PLAN +---------------------------------------------- + Hash Right Join + Hash Cond: (t2.c = (t1.c)::text) + -> Append + -> Seq Scan on prt2_n_p1 t2 + -> Seq Scan on prt2_n_p2 t2_1 + -> Hash + -> Append + -> Seq Scan on prt1_n_p1 t1 + -> Seq Scan on prt1_n_p2 t1_1 +(9 rows) + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 JOIN prt2_n t2 ON (t1.c = t2.c) JOIN plt1 t3 ON (t1.c = t3.c); + QUERY PLAN +---------------------------------------------------------- + Hash Join + Hash Cond: (t2.c = (t1.c)::text) + -> Append + -> Seq Scan on prt2_n_p1 t2 + -> Seq Scan on prt2_n_p2 t2_1 + -> Hash + -> Hash Join + Hash Cond: (t3.c = (t1.c)::text) + -> Append + -> Seq Scan on plt1_p1 t3 + -> Seq Scan on plt1_p2 t3_1 + -> Seq Scan on plt1_p3 t3_2 + -> Hash + -> Append + -> Seq Scan on prt1_n_p1 t1 + -> Seq Scan on prt1_n_p2 t1_1 +(16 rows) + +-- partition-wise join can not be applied for a join between list and range +-- partitioned table +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 FULL JOIN prt1 t2 ON (t1.c = t2.c); + QUERY PLAN +---------------------------------------------- + Hash Full Join + Hash Cond: ((t2.c)::text = (t1.c)::text) + -> Append + -> Seq Scan on prt1_p1 t2 + -> Seq Scan on prt1_p2 t2_1 + -> Seq Scan on prt1_p3 t2_2 + -> Hash + -> Append + -> Seq Scan on prt1_n_p1 t1 + -> Seq Scan on prt1_n_p2 t1_1 +(10 rows) + diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out index 568b783f5e..cd1f7f301d 100644 --- a/src/test/regress/expected/sysviews.out +++ b/src/test/regress/expected/sysviews.out @@ -70,21 +70,22 @@ select count(*) >= 0 as ok from pg_prepared_xacts; -- This is to record the prevailing planner enable_foo settings during -- a regression test run. 
select name, setting from pg_settings where name like 'enable%'; - name | setting -----------------------+--------- - enable_bitmapscan | on - enable_gathermerge | on - enable_hashagg | on - enable_hashjoin | on - enable_indexonlyscan | on - enable_indexscan | on - enable_material | on - enable_mergejoin | on - enable_nestloop | on - enable_seqscan | on - enable_sort | on - enable_tidscan | on -(12 rows) + name | setting +----------------------------+--------- + enable_bitmapscan | on + enable_gathermerge | on + enable_hashagg | on + enable_hashjoin | on + enable_indexonlyscan | on + enable_indexscan | on + enable_material | on + enable_mergejoin | on + enable_nestloop | on + enable_partition_wise_join | off + enable_seqscan | on + enable_sort | on + enable_tidscan | on +(13 rows) -- Test that the pg_timezone_names and pg_timezone_abbrevs views are -- more-or-less working. We can't test their contents in any great detail diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index 860e8ab795..e1d150b878 100644 --- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -104,7 +104,8 @@ test: publication subscription # ---------- # Another group of parallel tests # ---------- -test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combocid tsearch tsdicts foreign_data window xmlmap functional_deps advisory_lock json jsonb json_encoding indirect_toast equivclass +test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combocid tsearch tsdicts foreign_data window xmlmap functional_deps advisory_lock json jsonb json_encoding indirect_toast equivclass partition_join + # ---------- # Another group of parallel tests # NB: temp.sql does a reconnect which transiently uses 2 connections, diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index ef275d0d9a..ed755f45fa 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -180,3 +180,4 @@ test: with test: xml test: event_trigger test: stats +test: partition_join diff --git a/src/test/regress/sql/partition_join.sql b/src/test/regress/sql/partition_join.sql new file mode 100644 index 0000000000..ca525d9941 --- /dev/null +++ b/src/test/regress/sql/partition_join.sql @@ -0,0 +1,354 @@ +-- +-- PARTITION_JOIN +-- Test partition-wise join between partitioned tables +-- + +-- Enable partition-wise join, which by default is disabled. 
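+-- Plan-shape note (an informal sketch, not a claim about every query below):
+-- with the GUC on, a join between identically partitioned tables, such as
+--   SELECT ... FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b
+-- is planned as an Append of per-partition joins (prt1_p1 with prt2_p1,
+-- prt1_p2 with prt2_p2, and so on) rather than as one join over two
+-- whole-table Appends.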
+SET enable_partition_wise_join to true; + +-- +-- partitioned by a single column +-- +CREATE TABLE prt1 (a int, b int, c varchar) PARTITION BY RANGE(a); +CREATE TABLE prt1_p1 PARTITION OF prt1 FOR VALUES FROM (0) TO (250); +CREATE TABLE prt1_p3 PARTITION OF prt1 FOR VALUES FROM (500) TO (600); +CREATE TABLE prt1_p2 PARTITION OF prt1 FOR VALUES FROM (250) TO (500); +INSERT INTO prt1 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 2 = 0; +CREATE INDEX iprt1_p1_a on prt1_p1(a); +CREATE INDEX iprt1_p2_a on prt1_p2(a); +CREATE INDEX iprt1_p3_a on prt1_p3(a); +ANALYZE prt1; + +CREATE TABLE prt2 (a int, b int, c varchar) PARTITION BY RANGE(b); +CREATE TABLE prt2_p1 PARTITION OF prt2 FOR VALUES FROM (0) TO (250); +CREATE TABLE prt2_p2 PARTITION OF prt2 FOR VALUES FROM (250) TO (500); +CREATE TABLE prt2_p3 PARTITION OF prt2 FOR VALUES FROM (500) TO (600); +INSERT INTO prt2 SELECT i % 25, i, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 3 = 0; +CREATE INDEX iprt2_p1_b on prt2_p1(b); +CREATE INDEX iprt2_p2_b on prt2_p2(b); +CREATE INDEX iprt2_p3_b on prt2_p3(b); +ANALYZE prt2; + +-- inner join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; + +-- left outer join, with whole-row reference +EXPLAIN (COSTS OFF) +SELECT t1, t2 FROM prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; +SELECT t1, t2 FROM prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; + +-- right outer join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 RIGHT JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 RIGHT JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; + +-- full outer join, with placeholder vars +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT 50 phv, * FROM prt1 WHERE prt1.b = 0) t1 FULL JOIN (SELECT 75 phv, * FROM prt2 WHERE prt2.a = 0) t2 ON (t1.a = t2.b) WHERE t1.phv = t1.a OR t2.phv = t2.b ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT 50 phv, * FROM prt1 WHERE prt1.b = 0) t1 FULL JOIN (SELECT 75 phv, * FROM prt2 WHERE prt2.a = 0) t2 ON (t1.a = t2.b) WHERE t1.phv = t1.a OR t2.phv = t2.b ORDER BY t1.a, t2.b; + +-- Join with pruned partitions from joining relations +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.a < 450 AND t2.b > 250 AND t1.b = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.a < 450 AND t2.b > 250 AND t1.b = 0 ORDER BY t1.a, t2.b; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a < 450) t1 LEFT JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a < 450) t1 LEFT JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a < 450) t1 FULL JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 OR t2.a = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a < 450) t1 FULL JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 OR t2.a = 0 ORDER BY t1.a, t2.b; + +-- Semi-join +EXPLAIN 
(COSTS OFF) +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t2.b FROM prt2 t2 WHERE t2.a = 0) AND t1.b = 0 ORDER BY t1.a; +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t2.b FROM prt2 t2 WHERE t2.a = 0) AND t1.b = 0 ORDER BY t1.a; + +-- Anti-join with aggregates +EXPLAIN (COSTS OFF) +SELECT sum(t1.a), avg(t1.a), sum(t1.b), avg(t1.b) FROM prt1 t1 WHERE NOT EXISTS (SELECT 1 FROM prt2 t2 WHERE t1.a = t2.b); +SELECT sum(t1.a), avg(t1.a), sum(t1.b), avg(t1.b) FROM prt1 t1 WHERE NOT EXISTS (SELECT 1 FROM prt2 t2 WHERE t1.a = t2.b); + +-- lateral reference +EXPLAIN (COSTS OFF) +SELECT * FROM prt1 t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t3.a AS t3a, least(t1.a,t2.a,t3.b) FROM prt1 t2 JOIN prt2 t3 ON (t2.a = t3.b)) ss + ON t1.a = ss.t2a WHERE t1.b = 0 ORDER BY t1.a; +SELECT * FROM prt1 t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t3.a AS t3a, least(t1.a,t2.a,t3.b) FROM prt1 t2 JOIN prt2 t3 ON (t2.a = t3.b)) ss + ON t1.a = ss.t2a WHERE t1.b = 0 ORDER BY t1.a; + +EXPLAIN (COSTS OFF) +SELECT t1.a, ss.t2a, ss.t2c FROM prt1 t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t3.a AS t3a, t2.b t2b, t2.c t2c, least(t1.a,t2.a,t3.b) FROM prt1 t2 JOIN prt2 t3 ON (t2.a = t3.b)) ss + ON t1.c = ss.t2c WHERE (t1.b + coalesce(ss.t2b, 0)) = 0 ORDER BY t1.a; +SELECT t1.a, ss.t2a, ss.t2c FROM prt1 t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t3.a AS t3a, t2.b t2b, t2.c t2c, least(t1.a,t2.a,t3.a) FROM prt1 t2 JOIN prt2 t3 ON (t2.a = t3.b)) ss + ON t1.c = ss.t2c WHERE (t1.b + coalesce(ss.t2b, 0)) = 0 ORDER BY t1.a; + +-- +-- partitioned by expression +-- +CREATE TABLE prt1_e (a int, b int, c int) PARTITION BY RANGE(((a + b)/2)); +CREATE TABLE prt1_e_p1 PARTITION OF prt1_e FOR VALUES FROM (0) TO (250); +CREATE TABLE prt1_e_p2 PARTITION OF prt1_e FOR VALUES FROM (250) TO (500); +CREATE TABLE prt1_e_p3 PARTITION OF prt1_e FOR VALUES FROM (500) TO (600); +INSERT INTO prt1_e SELECT i, i, i % 25 FROM generate_series(0, 599, 2) i; +CREATE INDEX iprt1_e_p1_ab2 on prt1_e_p1(((a+b)/2)); +CREATE INDEX iprt1_e_p2_ab2 on prt1_e_p2(((a+b)/2)); +CREATE INDEX iprt1_e_p3_ab2 on prt1_e_p3(((a+b)/2)); +ANALYZE prt1_e; + +CREATE TABLE prt2_e (a int, b int, c int) PARTITION BY RANGE(((b + a)/2)); +CREATE TABLE prt2_e_p1 PARTITION OF prt2_e FOR VALUES FROM (0) TO (250); +CREATE TABLE prt2_e_p2 PARTITION OF prt2_e FOR VALUES FROM (250) TO (500); +CREATE TABLE prt2_e_p3 PARTITION OF prt2_e FOR VALUES FROM (500) TO (600); +INSERT INTO prt2_e SELECT i, i, i % 25 FROM generate_series(0, 599, 3) i; +ANALYZE prt2_e; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_e t1, prt2_e t2 WHERE (t1.a + t1.b)/2 = (t2.b + t2.a)/2 AND t1.c = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_e t1, prt2_e t2 WHERE (t1.a + t1.b)/2 = (t2.b + t2.a)/2 AND t1.c = 0 ORDER BY t1.a, t2.b; + +-- +-- N-way join +-- +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM prt1 t1, prt2 t2, prt1_e t3 WHERE t1.a = t2.b AND t1.a = (t3.a + t3.b)/2 AND t1.b = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM prt1 t1, prt2 t2, prt1_e t3 WHERE t1.a = t2.b AND t1.a = (t3.a + t3.b)/2 AND t1.b = 0 ORDER BY t1.a, t2.b; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) LEFT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t1.b = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) LEFT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t1.b = 0 ORDER BY t1.a, t2.b, 
t3.a + t3.b; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) RIGHT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t3.c = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) RIGHT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t3.c = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; + +-- Cases with non-nullable expressions in subquery results; +-- make sure these go to null as expected +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.phv, t2.b, t2.phv, t3.a + t3.b, t3.phv FROM ((SELECT 50 phv, * FROM prt1 WHERE prt1.b = 0) t1 FULL JOIN (SELECT 75 phv, * FROM prt2 WHERE prt2.a = 0) t2 ON (t1.a = t2.b)) FULL JOIN (SELECT 50 phv, * FROM prt1_e WHERE prt1_e.c = 0) t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t1.a = t1.phv OR t2.b = t2.phv OR (t3.a + t3.b)/2 = t3.phv ORDER BY t1.a, t2.b, t3.a + t3.b; +SELECT t1.a, t1.phv, t2.b, t2.phv, t3.a + t3.b, t3.phv FROM ((SELECT 50 phv, * FROM prt1 WHERE prt1.b = 0) t1 FULL JOIN (SELECT 75 phv, * FROM prt2 WHERE prt2.a = 0) t2 ON (t1.a = t2.b)) FULL JOIN (SELECT 50 phv, * FROM prt1_e WHERE prt1_e.c = 0) t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t1.a = t1.phv OR t2.b = t2.phv OR (t3.a + t3.b)/2 = t3.phv ORDER BY t1.a, t2.b, t3.a + t3.b; + +-- Semi-join +EXPLAIN (COSTS OFF) +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1, prt1_e t2 WHERE t1.a = 0 AND t1.b = (t2.a + t2.b)/2) AND t1.b = 0 ORDER BY t1.a; +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1, prt1_e t2 WHERE t1.a = 0 AND t1.b = (t2.a + t2.b)/2) AND t1.b = 0 ORDER BY t1.a; + +EXPLAIN (COSTS OFF) +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1 WHERE t1.b IN (SELECT (t1.a + t1.b)/2 FROM prt1_e t1 WHERE t1.c = 0)) AND t1.b = 0 ORDER BY t1.a; +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1 WHERE t1.b IN (SELECT (t1.a + t1.b)/2 FROM prt1_e t1 WHERE t1.c = 0)) AND t1.b = 0 ORDER BY t1.a; + +-- test merge joins +SET enable_hashjoin TO off; +SET enable_nestloop TO off; + +EXPLAIN (COSTS OFF) +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1 WHERE t1.b IN (SELECT (t1.a + t1.b)/2 FROM prt1_e t1 WHERE t1.c = 0)) AND t1.b = 0 ORDER BY t1.a; +SELECT t1.* FROM prt1 t1 WHERE t1.a IN (SELECT t1.b FROM prt2 t1 WHERE t1.b IN (SELECT (t1.a + t1.b)/2 FROM prt1_e t1 WHERE t1.c = 0)) AND t1.b = 0 ORDER BY t1.a; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) RIGHT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t3.c = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; +SELECT t1.a, t1.c, t2.b, t2.c, t3.a + t3.b, t3.c FROM (prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b) RIGHT JOIN prt1_e t3 ON (t1.a = (t3.a + t3.b)/2) WHERE t3.c = 0 ORDER BY t1.a, t2.b, t3.a + t3.b; + +-- MergeAppend on nullable column +EXPLAIN (COSTS OFF) +SELECT t1.a, t2.b FROM (SELECT * FROM prt1 WHERE a < 450) t1 LEFT JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t2.b FROM (SELECT * FROM prt1 WHERE a < 450) t1 LEFT JOIN (SELECT * FROM prt2 WHERE b > 250) t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b; + +RESET enable_hashjoin; +RESET enable_nestloop; + +-- +-- partitioned by multiple columns +-- +CREATE TABLE prt1_m (a int, b int, c int) PARTITION BY RANGE(a, ((a + b)/2)); +CREATE TABLE prt1_m_p1 PARTITION OF prt1_m FOR VALUES FROM (0, 0) TO (250, 250); +CREATE TABLE prt1_m_p2 PARTITION OF prt1_m FOR VALUES FROM (250, 250) TO 
(500, 500); +CREATE TABLE prt1_m_p3 PARTITION OF prt1_m FOR VALUES FROM (500, 500) TO (600, 600); +INSERT INTO prt1_m SELECT i, i, i % 25 FROM generate_series(0, 599, 2) i; +ANALYZE prt1_m; + +CREATE TABLE prt2_m (a int, b int, c int) PARTITION BY RANGE(((b + a)/2), b); +CREATE TABLE prt2_m_p1 PARTITION OF prt2_m FOR VALUES FROM (0, 0) TO (250, 250); +CREATE TABLE prt2_m_p2 PARTITION OF prt2_m FOR VALUES FROM (250, 250) TO (500, 500); +CREATE TABLE prt2_m_p3 PARTITION OF prt2_m FOR VALUES FROM (500, 500) TO (600, 600); +INSERT INTO prt2_m SELECT i, i, i % 25 FROM generate_series(0, 599, 3) i; +ANALYZE prt2_m; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_m WHERE prt1_m.c = 0) t1 FULL JOIN (SELECT * FROM prt2_m WHERE prt2_m.c = 0) t2 ON (t1.a = (t2.b + t2.a)/2 AND t2.b = (t1.a + t1.b)/2) ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_m WHERE prt1_m.c = 0) t1 FULL JOIN (SELECT * FROM prt2_m WHERE prt2_m.c = 0) t2 ON (t1.a = (t2.b + t2.a)/2 AND t2.b = (t1.a + t1.b)/2) ORDER BY t1.a, t2.b; + +-- +-- tests for list partitioned tables. +-- +CREATE TABLE plt1 (a int, b int, c text) PARTITION BY LIST(c); +CREATE TABLE plt1_p1 PARTITION OF plt1 FOR VALUES IN ('0000', '0003', '0004', '0010'); +CREATE TABLE plt1_p2 PARTITION OF plt1 FOR VALUES IN ('0001', '0005', '0002', '0009'); +CREATE TABLE plt1_p3 PARTITION OF plt1 FOR VALUES IN ('0006', '0007', '0008', '0011'); +INSERT INTO plt1 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE plt1; + +CREATE TABLE plt2 (a int, b int, c text) PARTITION BY LIST(c); +CREATE TABLE plt2_p1 PARTITION OF plt2 FOR VALUES IN ('0000', '0003', '0004', '0010'); +CREATE TABLE plt2_p2 PARTITION OF plt2 FOR VALUES IN ('0001', '0005', '0002', '0009'); +CREATE TABLE plt2_p3 PARTITION OF plt2 FOR VALUES IN ('0006', '0007', '0008', '0011'); +INSERT INTO plt2 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 3) i; +ANALYZE plt2; + +-- +-- list partitioned by expression +-- +CREATE TABLE plt1_e (a int, b int, c text) PARTITION BY LIST(ltrim(c, 'A')); +CREATE TABLE plt1_e_p1 PARTITION OF plt1_e FOR VALUES IN ('0000', '0003', '0004', '0010'); +CREATE TABLE plt1_e_p2 PARTITION OF plt1_e FOR VALUES IN ('0001', '0005', '0002', '0009'); +CREATE TABLE plt1_e_p3 PARTITION OF plt1_e FOR VALUES IN ('0006', '0007', '0008', '0011'); +INSERT INTO plt1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE plt1_e; + +-- test partition matching with N-way join +EXPLAIN (COSTS OFF) +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; + +-- joins where one of the relations is proven empty +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.a = 1 AND t1.a = 2; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 LEFT JOIN prt2 t2 ON t1.a = t2.b; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 RIGHT JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; + +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 FULL 
JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; + +-- +-- multiple levels of partitioning +-- +CREATE TABLE prt1_l (a int, b int, c varchar) PARTITION BY RANGE(a); +CREATE TABLE prt1_l_p1 PARTITION OF prt1_l FOR VALUES FROM (0) TO (250); +CREATE TABLE prt1_l_p2 PARTITION OF prt1_l FOR VALUES FROM (250) TO (500) PARTITION BY LIST (c); +CREATE TABLE prt1_l_p2_p1 PARTITION OF prt1_l_p2 FOR VALUES IN ('0000', '0001'); +CREATE TABLE prt1_l_p2_p2 PARTITION OF prt1_l_p2 FOR VALUES IN ('0002', '0003'); +CREATE TABLE prt1_l_p3 PARTITION OF prt1_l FOR VALUES FROM (500) TO (600) PARTITION BY RANGE (b); +CREATE TABLE prt1_l_p3_p1 PARTITION OF prt1_l_p3 FOR VALUES FROM (0) TO (13); +CREATE TABLE prt1_l_p3_p2 PARTITION OF prt1_l_p3 FOR VALUES FROM (13) TO (25); +INSERT INTO prt1_l SELECT i, i % 25, to_char(i % 4, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE prt1_l; + +CREATE TABLE prt2_l (a int, b int, c varchar) PARTITION BY RANGE(b); +CREATE TABLE prt2_l_p1 PARTITION OF prt2_l FOR VALUES FROM (0) TO (250); +CREATE TABLE prt2_l_p2 PARTITION OF prt2_l FOR VALUES FROM (250) TO (500) PARTITION BY LIST (c); +CREATE TABLE prt2_l_p2_p1 PARTITION OF prt2_l_p2 FOR VALUES IN ('0000', '0001'); +CREATE TABLE prt2_l_p2_p2 PARTITION OF prt2_l_p2 FOR VALUES IN ('0002', '0003'); +CREATE TABLE prt2_l_p3 PARTITION OF prt2_l FOR VALUES FROM (500) TO (600) PARTITION BY RANGE (a); +CREATE TABLE prt2_l_p3_p1 PARTITION OF prt2_l_p3 FOR VALUES FROM (0) TO (13); +CREATE TABLE prt2_l_p3_p2 PARTITION OF prt2_l_p3 FOR VALUES FROM (13) TO (25); +INSERT INTO prt2_l SELECT i % 25, i, to_char(i % 4, 'FM0000') FROM generate_series(0, 599, 3) i; +ANALYZE prt2_l; + +-- inner join, qual covering only top-level partitions +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1, prt2_l t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1, prt2_l t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; + +-- left join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1 LEFT JOIN prt2_l t2 ON t1.a = t2.b AND t1.c = t2.c WHERE t1.b = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1 LEFT JOIN prt2_l t2 ON t1.a = t2.b AND t1.c = t2.c WHERE t1.b = 0 ORDER BY t1.a, t2.b; + +-- right join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1 RIGHT JOIN prt2_l t2 ON t1.a = t2.b AND t1.c = t2.c WHERE t2.a = 0 ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_l t1 RIGHT JOIN prt2_l t2 ON t1.a = t2.b AND t1.c = t2.c WHERE t2.a = 0 ORDER BY t1.a, t2.b; + +-- full join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE prt1_l.b = 0) t1 FULL JOIN (SELECT * FROM prt2_l WHERE prt2_l.a = 0) t2 ON (t1.a = t2.b AND t1.c = t2.c) ORDER BY t1.a, t2.b; +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE prt1_l.b = 0) t1 FULL JOIN (SELECT * FROM prt2_l WHERE prt2_l.a = 0) t2 ON (t1.a = t2.b AND t1.c = t2.c) ORDER BY t1.a, t2.b; + +-- lateral partition-wise join +EXPLAIN (COSTS OFF) +SELECT * FROM prt1_l t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM prt1_l t2 JOIN prt2_l t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss + ON t1.a = ss.t2a AND t1.c = ss.t2c WHERE t1.b = 0 ORDER BY t1.a; +SELECT * FROM prt1_l t1 LEFT JOIN LATERAL + (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM prt1_l t2 JOIN prt2_l t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss + ON t1.a = ss.t2a AND t1.c = ss.t2c WHERE t1.b = 0 
ORDER BY t1.a; + +-- join with one side empty +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE a = 1 AND a = 2) t1 RIGHT JOIN prt2_l t2 ON t1.a = t2.b AND t1.b = t2.a AND t1.c = t2.c; + +-- +-- negative testcases +-- +CREATE TABLE prt1_n (a int, b int, c varchar) PARTITION BY RANGE(c); +CREATE TABLE prt1_n_p1 PARTITION OF prt1_n FOR VALUES FROM ('0000') TO ('0250'); +CREATE TABLE prt1_n_p2 PARTITION OF prt1_n FOR VALUES FROM ('0250') TO ('0500'); +INSERT INTO prt1_n SELECT i, i, to_char(i, 'FM0000') FROM generate_series(0, 499, 2) i; +ANALYZE prt1_n; + +CREATE TABLE prt2_n (a int, b int, c text) PARTITION BY LIST(c); +CREATE TABLE prt2_n_p1 PARTITION OF prt2_n FOR VALUES IN ('0000', '0003', '0004', '0010', '0006', '0007'); +CREATE TABLE prt2_n_p2 PARTITION OF prt2_n FOR VALUES IN ('0001', '0005', '0002', '0009', '0008', '0011'); +INSERT INTO prt2_n SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE prt2_n; + +CREATE TABLE prt3_n (a int, b int, c text) PARTITION BY LIST(c); +CREATE TABLE prt3_n_p1 PARTITION OF prt3_n FOR VALUES IN ('0000', '0004', '0006', '0007'); +CREATE TABLE prt3_n_p2 PARTITION OF prt3_n FOR VALUES IN ('0001', '0002', '0008', '0010'); +CREATE TABLE prt3_n_p3 PARTITION OF prt3_n FOR VALUES IN ('0003', '0005', '0009', '0011'); +INSERT INTO prt2_n SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE prt3_n; + +CREATE TABLE prt4_n (a int, b int, c text) PARTITION BY RANGE(a); +CREATE TABLE prt4_n_p1 PARTITION OF prt4_n FOR VALUES FROM (0) TO (300); +CREATE TABLE prt4_n_p2 PARTITION OF prt4_n FOR VALUES FROM (300) TO (500); +CREATE TABLE prt4_n_p3 PARTITION OF prt4_n FOR VALUES FROM (500) TO (600); +INSERT INTO prt4_n SELECT i, i, to_char(i, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE prt4_n; + +-- partition-wise join can not be applied if the partition ranges differ +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2 WHERE t1.a = t2.a; +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2, prt2 t3 WHERE t1.a = t2.a and t1.a = t3.b; + +-- partition-wise join can not be applied if there are no equi-join conditions +-- between partition keys +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 LEFT JOIN prt2 t2 ON (t1.a < t2.b); + +-- equi-join with join condition on partial keys does not qualify for +-- partition-wise join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1, prt2_m t2 WHERE t1.a = (t2.b + t2.a)/2; + +-- equi-join between out-of-order partition key columns does not qualify for +-- partition-wise join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.a = t2.b; + +-- equi-join between non-key columns does not qualify for partition-wise join +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.c = t2.c; + +-- partition-wise join can not be applied between tables with different +-- partition lists +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 LEFT JOIN prt2_n t2 ON (t1.c = t2.c); +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 JOIN prt2_n t2 ON (t1.c = t2.c) JOIN plt1 t3 ON (t1.c = t3.c); + +-- partition-wise join can not be applied for a join between list and range +-- partitioned table +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 FULL JOIN prt1 t2 ON (t1.c = t2.c); From a5736bf754c82d8b86674e199e232096c679201d Mon Sep 17 00:00:00 2001 From: Alvaro 
Herrera
Date: Fri, 6 Oct 2017 17:14:42 +0200
Subject: [PATCH 0336/1087] Fix traversal of half-frozen update chains
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When some tuple versions in an update chain are frozen due to them being
older than freeze_min_age, the xmax/xmin trail can become broken. This
breaks HOT (and probably other things). A subsequent VACUUM can break
things in more serious ways, such as leaving orphan heap-only tuples
whose root HOT redirect items were removed. This can be seen because
index creation (or REINDEX) complains like
ERROR: XX000: failed to find parent tuple for heap-only tuple at (0,7) in table "t"

Because of relfrozenxid constraints, we cannot avoid the freezing of the
early tuples, so we must cope with the results: whenever we see an Xmin
of FrozenTransactionId, consider it a match for whatever the previous
Xmax value was.

This problem seems to have appeared in 9.3 with multixact changes,
though strictly speaking it seems unrelated. Since 9.4 we have commit
37484ad2a "Change the way we mark tuples as frozen", so the fix is
simple: just compare the raw Xmin (still stored in the tuple header,
since freezing merely set an infomask bit) to the Xmax. But in 9.3 we
rewrite the Xmin value to FrozenTransactionId, so the original value is
lost and we have nothing to compare the Xmax with. To cope with that
case we need to compare the Xmin with FrozenXid, assume it's a match,
and hope for the best. Sadly, since you can pg_upgrade a 9.3 instance
containing half-frozen pages to newer releases, we need to keep the old
check in newer versions too, which seems a bit brittle; I hope we can
somehow get rid of that.

I didn't optimize the new function for performance. The new coding is
probably a bit slower than before, since there is a function call rather
than a straight comparison, but I'd rather have it work correctly than
be fast but wrong.

This is a followup after 20b655224249 fixed a few related problems.
Apparently, in 9.6 and up there are more ways to get into trouble, but
in 9.3 - 9.5 I cannot reproduce a problem anymore with this patch, so
there must be a separate bug.

Reported-by: Peter Geoghegan
Diagnosed-by: Peter Geoghegan, Michael Paquier, Daniel Wood, Yi Wen Wong, Álvaro
Discussion: https://postgr.es/m/CAH2-Wznm4rCrhFAiwKPWTpEw2bXDtgROZK7jWWGucXeH3D1fmA@mail.gmail.com
---
 src/backend/access/heap/heapam.c    | 52 ++++++++++++++++++++++++++---
 src/backend/access/heap/pruneheap.c |  4 +--
 src/backend/executor/execMain.c     |  6 ++--
 src/include/access/heapam.h         |  3 ++
 4 files changed, 54 insertions(+), 11 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0c0f640f64..52dda41cc4 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -2074,8 +2074,7 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * broken.
 		 */
 		if (TransactionIdIsValid(prev_xmax) &&
-			!TransactionIdEquals(prev_xmax,
-								 HeapTupleHeaderGetXmin(heapTuple->t_data)))
+			!HeapTupleUpdateXmaxMatchesXmin(prev_xmax, heapTuple->t_data))
 			break;

 		/*
@@ -2261,7 +2260,7 @@ heap_get_latest_tid(Relation relation,
 		 * tuple. Check for XMIN match.
*/ if (TransactionIdIsValid(priorXmax) && - !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(tp.t_data))) + !HeapTupleUpdateXmaxMatchesXmin(priorXmax, tp.t_data)) { UnlockReleaseBuffer(buffer); break; @@ -2293,6 +2292,50 @@ heap_get_latest_tid(Relation relation, } /* end of loop */ } +/* + * HeapTupleUpdateXmaxMatchesXmin - verify update chain xmax/xmin lineage + * + * Given the new version of a tuple after some update, verify whether the + * given Xmax (corresponding to the previous version) matches the tuple's + * Xmin, taking into account that the Xmin might have been frozen after the + * update. + */ +bool +HeapTupleUpdateXmaxMatchesXmin(TransactionId xmax, HeapTupleHeader htup) +{ + TransactionId xmin = HeapTupleHeaderGetXmin(htup); + + /* + * If the xmax of the old tuple is identical to the xmin of the new one, + * it's a match. + */ + if (TransactionIdEquals(xmax, xmin)) + return true; + + /* + * If the Xmin that was in effect prior to a freeze matches the Xmax, + * it's good too. + */ + if (HeapTupleHeaderXminFrozen(htup) && + TransactionIdEquals(HeapTupleHeaderGetRawXmin(htup), xmax)) + return true; + + /* + * When a tuple is frozen, the original Xmin is lost, but we know it's a + * committed transaction. So unless the Xmax is InvalidXid, we don't know + * for certain that there is a match, but there may be one; and we must + * return true so that a HOT chain that is half-frozen can be walked + * correctly. + * + * We no longer freeze tuples this way, but we must keep this in order to + * interpret pre-pg_upgrade pages correctly. + */ + if (TransactionIdEquals(xmin, FrozenTransactionId) && + TransactionIdIsValid(xmax)) + return true; + + return false; +} /* * UpdateXmaxHintBits - update tuple hint bits after xmax transaction ends @@ -5712,8 +5755,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid, * end of the chain, we're done, so return success. */ if (TransactionIdIsValid(priorXmax) && - !TransactionIdEquals(HeapTupleHeaderGetXmin(mytup.t_data), - priorXmax)) + !HeapTupleUpdateXmaxMatchesXmin(priorXmax, mytup.t_data)) { result = HeapTupleMayBeUpdated; goto out_locked; diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c index 52231ac417..7753ee7b12 100644 --- a/src/backend/access/heap/pruneheap.c +++ b/src/backend/access/heap/pruneheap.c @@ -473,7 +473,7 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum, * Check the tuple XMIN against prior XMAX, if any */ if (TransactionIdIsValid(priorXmax) && - !TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax)) + !HeapTupleUpdateXmaxMatchesXmin(priorXmax, htup)) break; /* @@ -813,7 +813,7 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets) htup = (HeapTupleHeader) PageGetItem(page, lp); if (TransactionIdIsValid(priorXmax) && - !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup))) + !HeapTupleUpdateXmaxMatchesXmin(priorXmax, htup)) break; /* Remember the root line pointer for this item */ diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 384ad70f2d..8359beb463 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -2594,8 +2594,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode, * atomic, and Xmin never changes in an existing tuple, except to * invalid or frozen, and neither of those can match priorXmax.) 
*/ - if (!TransactionIdEquals(HeapTupleHeaderGetXmin(tuple.t_data), - priorXmax)) + if (!HeapTupleUpdateXmaxMatchesXmin(priorXmax, tuple.t_data)) { ReleaseBuffer(buffer); return NULL; @@ -2742,8 +2741,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode, /* * As above, if xmin isn't what we're expecting, do nothing. */ - if (!TransactionIdEquals(HeapTupleHeaderGetXmin(tuple.t_data), - priorXmax)) + if (!HeapTupleUpdateXmaxMatchesXmin(priorXmax, tuple.t_data)) { ReleaseBuffer(buffer); return NULL; diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h index 4e41024e92..9f4367d704 100644 --- a/src/include/access/heapam.h +++ b/src/include/access/heapam.h @@ -146,6 +146,9 @@ extern void heap_get_latest_tid(Relation relation, Snapshot snapshot, ItemPointer tid); extern void setLastTid(const ItemPointer tid); +extern bool HeapTupleUpdateXmaxMatchesXmin(TransactionId xmax, + HeapTupleHeader htup); + extern BulkInsertState GetBulkInsertState(void); extern void FreeBulkInsertState(BulkInsertState); extern void ReleaseBulkInsertStatePin(BulkInsertState bistate); From 3620569fecc6c2edb1cccfbba39b86c4e7d2faae Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 6 Oct 2017 11:35:49 -0400 Subject: [PATCH 0337/1087] #ifdef out some dead code in psql/mainloop.c. This pg_send_history() call is unreachable, since the block it's in is currently only entered in !cur_cmd_interactive mode. But rather than just delete it, make it #ifdef NOT_USED, in hopes that we'll remember to enable it if we ever change that decision. Per report from David Binderman. Since this is basically cosmetic, I see no great need to back-patch. Discussion: https://postgr.es/m/HE1PR0802MB233122B61F00A15E035C83BE9C710@HE1PR0802MB2331.eurprd08.prod.outlook.com --- src/bin/psql/mainloop.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/src/bin/psql/mainloop.c b/src/bin/psql/mainloop.c index 0162ee8d2f..62cf1f404a 100644 --- a/src/bin/psql/mainloop.c +++ b/src/bin/psql/mainloop.c @@ -456,14 +456,19 @@ MainLoop(FILE *source) } /* while !endoffile/session */ /* - * Process query at the end of file without a semicolon + * If we have a non-semicolon-terminated query at the end of file, we + * process it unless the input source is interactive --- in that case it + * seems better to go ahead and quit. Also skip if this is an error exit. */ if (query_buf->len > 0 && !pset.cur_cmd_interactive && successResult == EXIT_SUCCESS) { /* save query in history */ + /* currently unneeded since we don't use this block if interactive */ +#ifdef NOT_USED if (pset.cur_cmd_interactive) pg_send_history(history_buf); +#endif /* execute query unless we're in an inactive \if branch */ if (conditional_active(cond_stack)) From c01123630db18561039d4eb17f9502bed0e9d109 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 10 Aug 2017 23:33:47 -0400 Subject: [PATCH 0338/1087] Remove coverage details view This is only useful if we name the different tests, which we don't do at the moment. 
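For illustration, the detail view comes from genhtml's --show-details
switch; with the make variables left unexpanded, the coverage-html rule
invokes it roughly as (a sketch of the recipe below, not a verbatim
transcript):

    genhtml --show-details --legend -o coverage --title='PostgreSQL $(VERSION)' \
        --num-spaces=4 --prefix='$(abs_top_srcdir)' lcov_base.info lcov_test.info

The per-test detail pages stay empty unless the tracefiles record
distinct test names (lcov's -t option), which the coverage targets never
set.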
Reviewed-by: Michael Paquier --- src/Makefile.global.in | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/Makefile.global.in b/src/Makefile.global.in index 1a0faf9023..d4fed90405 100644 --- a/src/Makefile.global.in +++ b/src/Makefile.global.in @@ -893,7 +893,7 @@ coverage: $(local_gcda_files:.gcda=.c.gcov) .PHONY: coverage-html coverage-html: coverage-html-stamp -GENHTML_FLAGS = --show-details --legend +GENHTML_FLAGS = --legend GENHTML_TITLE = PostgreSQL $(VERSION) coverage-html-stamp: lcov_base.info lcov_test.info From 52e1b1b0425553250db35101f44090898322fb6f Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 10 Aug 2017 23:33:47 -0400 Subject: [PATCH 0339/1087] Run coverage commands quietly They are very chatty by default, but the output doesn't seem all that useful for normal operation. Reviewed-by: Michael Paquier --- src/Makefile.global.in | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/Makefile.global.in b/src/Makefile.global.in index d4fed90405..9340d60de5 100644 --- a/src/Makefile.global.in +++ b/src/Makefile.global.in @@ -893,7 +893,7 @@ coverage: $(local_gcda_files:.gcda=.c.gcov) .PHONY: coverage-html coverage-html: coverage-html-stamp -GENHTML_FLAGS = --legend +GENHTML_FLAGS = -q --legend GENHTML_TITLE = PostgreSQL $(VERSION) coverage-html-stamp: lcov_base.info lcov_test.info @@ -902,7 +902,7 @@ coverage-html-stamp: lcov_base.info lcov_test.info touch $@ LCOV += --gcov-tool $(GCOV) -LCOVFLAGS = --no-external +LCOVFLAGS = -q --no-external all_gcno_files = $(shell find . -name '*.gcno' -print) From c3d9a66024a93e6d0380bdd1b18cb03a67216b72 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 10 Aug 2017 23:33:47 -0400 Subject: [PATCH 0340/1087] Support coverage on vpath builds A few paths needed to be tweaked so everything looks into the appropriate directories. Reviewed-by: Michael Paquier --- src/Makefile.global.in | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/Makefile.global.in b/src/Makefile.global.in index 9340d60de5..a4209df7c9 100644 --- a/src/Makefile.global.in +++ b/src/Makefile.global.in @@ -898,7 +898,7 @@ GENHTML_TITLE = PostgreSQL $(VERSION) coverage-html-stamp: lcov_base.info lcov_test.info rm -rf coverage - $(GENHTML) $(GENHTML_FLAGS) -o coverage --title='$(GENHTML_TITLE)' --num-spaces=4 --prefix='$(abs_top_srcdir)' $^ + $(GENHTML) $(GENHTML_FLAGS) -o coverage --title='$(GENHTML_TITLE)' --num-spaces=4 $^ touch $@ LCOV += --gcov-tool $(GCOV) @@ -907,12 +907,12 @@ LCOVFLAGS = -q --no-external all_gcno_files = $(shell find . -name '*.gcno' -print) lcov_base.info: $(all_gcno_files) - $(LCOV) $(LCOVFLAGS) -c -i -d . -o $@ + $(LCOV) $(LCOVFLAGS) -c -i -d . -d $(srcdir) -o $@ all_gcda_files = $(shell find . -name '*.gcda' -print) lcov_test.info: $(all_gcda_files) - $(LCOV) $(LCOVFLAGS) -c -d . -o $@ + $(LCOV) $(LCOVFLAGS) -c -d . -d $(srcdir) -o $@ # hook for clean-up From 6b87416c9a4dd305b76e619ecac36e2b968462f8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 6 Oct 2017 12:20:12 -0400 Subject: [PATCH 0341/1087] Fix access-off-end-of-array in clog.c. Sloppy loop coding in set_status_by_pages() resulted in fetching one array element more than it should from the subxids[] array. The odds of this resulting in SIGSEGV are pretty small, but we've certainly seen that happen with similar mistakes elsewhere. While at it, we can get rid of an extra TransactionIdToPage() calculation per loop. Per report from David Binderman. Back-patch to all supported branches, since this code is quite old. 
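To make the hazard easier to see in isolation, here is a reduced, self-contained
sketch of the before/after loop shapes; DemoXid, DEMO_XID_TO_PAGE, and
demo_walk_pages are illustrative stand-ins invented for this note, not the real
clog.c interfaces:

    #include <assert.h>

    /* Illustrative stand-in for mapping a transaction ID to a clog page. */
    typedef unsigned int DemoXid;
    #define DEMO_XID_TO_PAGE(xid) ((int) ((xid) / 1024))

    /*
     * The buggy shape tested the array element before the bound:
     *
     *     while (DEMO_XID_TO_PAGE(xids[i]) == pageno && i < nxids) ...
     *
     * so the final pass fetched xids[nxids], one element past the end.
     * The fixed shape below checks the bound first, and remembers the
     * page of the element that stopped the inner scan so the outer loop
     * need not recompute it.
     */
    void
    demo_walk_pages(const DemoXid *xids, int nxids, int pageno)
    {
        int     i = 0;

        assert(nxids > 0);      /* else the caller's initial pageno is unsafe */

        while (i < nxids)
        {
            int     num_on_page = 0;
            int     nextpageno;

            do
            {
                nextpageno = DEMO_XID_TO_PAGE(xids[i]);
                if (nextpageno != pageno)
                    break;
                num_on_page++;
                i++;
            } while (i < nxids);

            /* ... the real code updates num_on_page entries here ... */
            (void) num_on_page;

            pageno = nextpageno;
        }
    }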
Discussion: https://postgr.es/m/HE1PR0802MB2331CBA919CBFFF0C465EB429C710@HE1PR0802MB2331.eurprd08.prod.outlook.com --- src/backend/access/transam/clog.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c index 9003b22193..a3e2b12435 100644 --- a/src/backend/access/transam/clog.c +++ b/src/backend/access/transam/clog.c @@ -241,21 +241,27 @@ set_status_by_pages(int nsubxids, TransactionId *subxids, int offset = 0; int i = 0; + Assert(nsubxids > 0); /* else the pageno fetch above is unsafe */ + while (i < nsubxids) { int num_on_page = 0; + int nextpageno; - while (TransactionIdToPage(subxids[i]) == pageno && i < nsubxids) + do { + nextpageno = TransactionIdToPage(subxids[i]); + if (nextpageno != pageno) + break; num_on_page++; i++; - } + } while (i < nsubxids); TransactionIdSetPageStatus(InvalidTransactionId, num_on_page, subxids + offset, status, lsn, pageno, false); offset = i; - pageno = TransactionIdToPage(subxids[offset]); + pageno = nextpageno; } } From a1c2c430d33e0945da234b025b78bd265c8bdfb5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 6 Oct 2017 14:28:42 -0400 Subject: [PATCH 0342/1087] Fix intra-query memory leakage in nodeProjectSet.c. Both ExecMakeFunctionResultSet() and evaluation of simple expressions need to be done in the per-tuple memory context, not per-query, else we leak data until end of query. This is a consideration that was missed while refactoring code in the ProjectSet patch (note that in pre-v10, ExecMakeFunctionResult is called in the per-tuple context). Per bug #14843 from Ben M. Diagnosed independently by Andres and myself. Discussion: https://postgr.es/m/20171005230321.28561.15927@wrigleys.postgresql.org --- src/backend/executor/execSRF.c | 2 ++ src/backend/executor/nodeProjectSet.c | 6 ++++++ 2 files changed, 8 insertions(+) diff --git a/src/backend/executor/execSRF.c b/src/backend/executor/execSRF.c index 8bc90a6c7e..1be683db83 100644 --- a/src/backend/executor/execSRF.c +++ b/src/backend/executor/execSRF.c @@ -467,6 +467,8 @@ ExecInitFunctionResultSet(Expr *expr, * function itself. The argument expressions may not contain set-returning * functions (the planner is supposed to have separated evaluation for those). * + * This should be called in a short-lived (per-tuple) context. + * * This is used by nodeProjectSet.c. */ Datum diff --git a/src/backend/executor/nodeProjectSet.c b/src/backend/executor/nodeProjectSet.c index d93462c542..68981296f9 100644 --- a/src/backend/executor/nodeProjectSet.c +++ b/src/backend/executor/nodeProjectSet.c @@ -124,12 +124,16 @@ ExecProjectSRF(ProjectSetState *node, bool continuing) { TupleTableSlot *resultSlot = node->ps.ps_ResultTupleSlot; ExprContext *econtext = node->ps.ps_ExprContext; + MemoryContext oldcontext; bool hassrf PG_USED_FOR_ASSERTS_ONLY; bool hasresult; int argno; ExecClearTuple(resultSlot); + /* Call SRFs, as well as plain expressions, in per-tuple context */ + oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory); + /* * Assume no further tuples are produced unless an ExprMultipleResult is * encountered from a set returning function. 
@@ -176,6 +180,8 @@ ExecProjectSRF(ProjectSetState *node, bool continuing)
 		}
 	}
 
+	MemoryContextSwitchTo(oldcontext);
+
 	/* ProjectSet should not be used if there's no SRFs */
 	Assert(hassrf);
 
From 45866c75507f0757de0da6e90c694a0dbe67d727 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Fri, 6 Oct 2017 15:27:11 -0400
Subject: [PATCH 0343/1087] Copy information from the relcache instead of
 pointing to it.

We have the relations continuously locked, but not open, so relcache
pointers are not guaranteed to be stable.  Per buildfarm member prion.

Ashutosh Bapat.  I fixed a typo.

Discussion: http://postgr.es/m/CAFjFpRcRBqoKLZSNmRsjKr81uEP=ennvqSQaXVCCBTXvJ2rW+Q@mail.gmail.com
---
 src/backend/catalog/partition.c      | 68 ++++++++++++++++++++++++++++
 src/backend/optimizer/util/plancat.c | 35 ++++++++++----
 src/include/catalog/partition.h      |  2 +
 3 files changed, 96 insertions(+), 9 deletions(-)

diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index 3a8306a055..ebda85e4ef 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -701,6 +701,74 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval,
 	return true;
 }
 
+/*
+ * Return a copy of the given PartitionBoundInfo structure. The data types of
+ * the bounds are described by the given partition key specification.
+ */
+extern PartitionBoundInfo
+partition_bounds_copy(PartitionBoundInfo src,
+					  PartitionKey key)
+{
+	PartitionBoundInfo dest;
+	int			i;
+	int			ndatums;
+	int			partnatts;
+	int			num_indexes;
+
+	dest = (PartitionBoundInfo) palloc(sizeof(PartitionBoundInfoData));
+
+	dest->strategy = src->strategy;
+	ndatums = dest->ndatums = src->ndatums;
+	partnatts = key->partnatts;
+
+	/* Range partitioned table has an extra index. */
+	num_indexes = key->strategy == PARTITION_STRATEGY_RANGE ? ndatums + 1 : ndatums;
+
+	/* List partitioned tables have only a single partition key.
*/ + Assert(key->strategy != PARTITION_STRATEGY_LIST || partnatts == 1); + + dest->datums = (Datum **) palloc(sizeof(Datum *) * ndatums); + + if (src->kind != NULL) + { + dest->kind = (PartitionRangeDatumKind **) palloc(ndatums * + sizeof(PartitionRangeDatumKind *)); + for (i = 0; i < ndatums; i++) + { + dest->kind[i] = (PartitionRangeDatumKind *) palloc(partnatts * + sizeof(PartitionRangeDatumKind)); + + memcpy(dest->kind[i], src->kind[i], + sizeof(PartitionRangeDatumKind) * key->partnatts); + } + } + else + dest->kind = NULL; + + for (i = 0; i < ndatums; i++) + { + int j; + dest->datums[i] = (Datum *) palloc(sizeof(Datum) * partnatts); + + for (j = 0; j < partnatts; j++) + { + if (dest->kind == NULL || + dest->kind[i][j] == PARTITION_RANGE_DATUM_VALUE) + dest->datums[i][j] = datumCopy(src->datums[i][j], + key->parttypbyval[j], + key->parttyplen[j]); + } + } + + dest->indexes = (int *) palloc(sizeof(int) * num_indexes); + memcpy(dest->indexes, src->indexes, sizeof(int) * num_indexes); + + dest->null_index = src->null_index; + dest->default_index = src->default_index; + + return dest; +} + /* * check_new_partition_bound * diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index 93cc7576a0..9d35a41e22 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -1825,13 +1825,15 @@ set_relation_partition_info(PlannerInfo *root, RelOptInfo *rel, Relation relation) { PartitionDesc partdesc; + PartitionKey partkey; Assert(relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); partdesc = RelationGetPartitionDesc(relation); + partkey = RelationGetPartitionKey(relation); rel->part_scheme = find_partition_scheme(root, relation); Assert(partdesc != NULL && rel->part_scheme != NULL); - rel->boundinfo = partdesc->boundinfo; + rel->boundinfo = partition_bounds_copy(partdesc->boundinfo, partkey); rel->nparts = partdesc->nparts; set_baserel_partition_key_exprs(relation, rel); } @@ -1888,18 +1890,33 @@ find_partition_scheme(PlannerInfo *root, Relation relation) /* * Did not find matching partition scheme. Create one copying relevant - * information from the relcache. Instead of copying whole arrays, copy - * the pointers in relcache. It's safe to do so since - * RelationClearRelation() wouldn't change it while planner is using it. + * information from the relcache. We need to copy the contents of the array + * since the relcache entry may not survive after we have closed the + * relation. 
*/ part_scheme = (PartitionScheme) palloc0(sizeof(PartitionSchemeData)); part_scheme->strategy = partkey->strategy; part_scheme->partnatts = partkey->partnatts; - part_scheme->partopfamily = partkey->partopfamily; - part_scheme->partopcintype = partkey->partopcintype; - part_scheme->parttypcoll = partkey->parttypcoll; - part_scheme->parttyplen = partkey->parttyplen; - part_scheme->parttypbyval = partkey->parttypbyval; + + part_scheme->partopfamily = (Oid *) palloc(sizeof(Oid) * partnatts); + memcpy(part_scheme->partopfamily, partkey->partopfamily, + sizeof(Oid) * partnatts); + + part_scheme->partopcintype = (Oid *) palloc(sizeof(Oid) * partnatts); + memcpy(part_scheme->partopcintype, partkey->partopcintype, + sizeof(Oid) * partnatts); + + part_scheme->parttypcoll = (Oid *) palloc(sizeof(Oid) * partnatts); + memcpy(part_scheme->parttypcoll, partkey->parttypcoll, + sizeof(Oid) * partnatts); + + part_scheme->parttyplen = (int16 *) palloc(sizeof(int16) * partnatts); + memcpy(part_scheme->parttyplen, partkey->parttyplen, + sizeof(int16) * partnatts); + + part_scheme->parttypbyval = (bool *) palloc(sizeof(bool) * partnatts); + memcpy(part_scheme->parttypbyval, partkey->parttypbyval, + sizeof(bool) * partnatts); /* Add the partitioning scheme to PlannerInfo. */ root->part_schemes = lappend(root->part_schemes, part_scheme); diff --git a/src/include/catalog/partition.h b/src/include/catalog/partition.h index 454a940a23..945ac0239d 100644 --- a/src/include/catalog/partition.h +++ b/src/include/catalog/partition.h @@ -74,6 +74,8 @@ extern void RelationBuildPartitionDesc(Relation relation); extern bool partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval, PartitionBoundInfo b1, PartitionBoundInfo b2); +extern PartitionBoundInfo partition_bounds_copy(PartitionBoundInfo src, + PartitionKey key); extern void check_new_partition_bound(char *relname, Relation parent, PartitionBoundSpec *spec); From 1518d07842dcb412ea6b8bb8172c40da7499b174 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 6 Oct 2017 19:18:58 -0400 Subject: [PATCH 0344/1087] Fix crash when logical decoding is invoked from a PL function. The logical decoding functions do BeginInternalSubTransaction and RollbackAndReleaseCurrentSubTransaction to clean up after themselves. It turns out that AtEOSubXact_SPI has an unrecognized assumption that we always need to cancel the active SPI operation in the SPI context that surrounds the subtransaction (if there is one). That's true when the RollbackAndReleaseCurrentSubTransaction call is coming from the SPI-using function itself, but not when it's happening inside some unrelated function invoked by a SPI query. In practice the affected callers are the various PLs. To fix, record the current subtransaction ID when we begin a SPI operation, and clean up only if that ID is the subtransaction being canceled. Also, remove AtEOSubXact_SPI's assertion that it must have cleaned up the surrounding SPI context's active tuptable. That's proven wrong by the same test case. Also clarify (or, if you prefer, reinterpret) the calling conventions for _SPI_begin_call and _SPI_end_call. The memory context cleanup in the latter means that these have always had the flavor of a matched resource-management pair, but they weren't documented that way before. Per report from Ben Chobot. Back-patch to 9.4 where logical decoding came in. In principle, the SPI changes should go all the way back, since the problem dates back to commit 7ec1c5a86. 
But given the lack of field complaints it seems few people are using internal subtransactions in this way. So I don't feel a need to take any risks in 9.2/9.3. Discussion: https://postgr.es/m/73FBA179-C68C-4540-9473-71E865408B15@silentmedia.com --- .../expected/decoding_into_rel.out | 25 +++++++++++ .../test_decoding/sql/decoding_into_rel.sql | 11 +++++ src/backend/executor/spi.c | 42 ++++++++++++++----- src/include/executor/spi_priv.h | 3 ++ 4 files changed, 70 insertions(+), 11 deletions(-) diff --git a/contrib/test_decoding/expected/decoding_into_rel.out b/contrib/test_decoding/expected/decoding_into_rel.out index be759caa31..8fd3390066 100644 --- a/contrib/test_decoding/expected/decoding_into_rel.out +++ b/contrib/test_decoding/expected/decoding_into_rel.out @@ -59,6 +59,31 @@ SELECT * FROM changeresult; DROP TABLE changeresult; DROP TABLE somechange; +-- check calling logical decoding from pl/pgsql +CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$ +BEGIN + RETURN QUERY + SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1'); +END$$ LANGUAGE plpgsql; +SELECT * FROM slot_changes_wrapper('regression_slot'); + slot_changes_wrapper +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + BEGIN + table public.changeresult: INSERT: data[text]:'BEGIN' + table public.changeresult: INSERT: data[text]:'table public.changeresult: INSERT: data[text]:''BEGIN''' + table public.changeresult: INSERT: data[text]:'table public.changeresult: INSERT: data[text]:''table public.somechange: INSERT: id[integer]:1''' + table public.changeresult: INSERT: data[text]:'table public.changeresult: INSERT: data[text]:''COMMIT''' + table public.changeresult: INSERT: data[text]:'COMMIT' + table public.changeresult: INSERT: data[text]:'BEGIN' + table public.changeresult: INSERT: data[text]:'table public.changeresult: INSERT: data[text]:''BEGIN''' + table public.changeresult: INSERT: data[text]:'table public.changeresult: INSERT: data[text]:''table public.changeresult: INSERT: data[text]:''''BEGIN''''''' + table public.changeresult: INSERT: data[text]:'table public.changeresult: INSERT: data[text]:''table public.changeresult: INSERT: data[text]:''''table public.somechange: INSERT: id[integer]:1''''''' + table public.changeresult: INSERT: data[text]:'table public.changeresult: INSERT: data[text]:''table public.changeresult: INSERT: data[text]:''''COMMIT''''''' + table public.changeresult: INSERT: data[text]:'table public.changeresult: INSERT: data[text]:''COMMIT''' + table public.changeresult: INSERT: data[text]:'COMMIT' + COMMIT +(14 rows) + SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1'); data -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- diff --git a/contrib/test_decoding/sql/decoding_into_rel.sql b/contrib/test_decoding/sql/decoding_into_rel.sql index 54670fd39e..1068cec588 100644 --- a/contrib/test_decoding/sql/decoding_into_rel.sql +++ b/contrib/test_decoding/sql/decoding_into_rel.sql @@ -27,5 +27,16 @@ INSERT INTO changeresult SELECT * FROM changeresult; DROP TABLE changeresult; DROP TABLE somechange; + +-- check calling logical decoding from pl/pgsql 
+CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$ +BEGIN + RETURN QUERY + SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1'); +END$$ LANGUAGE plpgsql; + +SELECT * FROM slot_changes_wrapper('regression_slot'); + SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1'); + SELECT 'stop' FROM pg_drop_replication_slot('regression_slot'); diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c index afe231fca9..40292b86c1 100644 --- a/src/backend/executor/spi.c +++ b/src/backend/executor/spi.c @@ -71,8 +71,8 @@ static void _SPI_cursor_operation(Portal portal, static SPIPlanPtr _SPI_make_plan_non_temp(SPIPlanPtr plan); static SPIPlanPtr _SPI_save_plan(SPIPlanPtr plan); -static int _SPI_begin_call(bool execmem); -static int _SPI_end_call(bool procmem); +static int _SPI_begin_call(bool use_exec); +static int _SPI_end_call(bool use_exec); static MemoryContext _SPI_execmem(void); static MemoryContext _SPI_procmem(void); static bool _SPI_checktuples(void); @@ -118,6 +118,7 @@ SPI_connect(void) _SPI_current->processed = 0; _SPI_current->lastoid = InvalidOid; _SPI_current->tuptable = NULL; + _SPI_current->execSubid = InvalidSubTransactionId; slist_init(&_SPI_current->tuptables); _SPI_current->procCxt = NULL; /* in case we fail to create 'em */ _SPI_current->execCxt = NULL; @@ -149,7 +150,7 @@ SPI_finish(void) { int res; - res = _SPI_begin_call(false); /* live in procedure memory */ + res = _SPI_begin_call(false); /* just check we're connected */ if (res < 0) return res; @@ -268,8 +269,15 @@ AtEOSubXact_SPI(bool isCommit, SubTransactionId mySubid) { slist_mutable_iter siter; - /* free Executor memory the same as _SPI_end_call would do */ - MemoryContextResetAndDeleteChildren(_SPI_current->execCxt); + /* + * Throw away executor state if current executor operation was started + * within current subxact (essentially, force a _SPI_end_call(true)). + */ + if (_SPI_current->execSubid >= mySubid) + { + _SPI_current->execSubid = InvalidSubTransactionId; + MemoryContextResetAndDeleteChildren(_SPI_current->execCxt); + } /* throw away any tuple tables created within current subxact */ slist_foreach_modify(siter, &_SPI_current->tuptables) @@ -293,8 +301,6 @@ AtEOSubXact_SPI(bool isCommit, SubTransactionId mySubid) MemoryContextDelete(tuptable->tuptabcxt); } } - /* in particular we should have gotten rid of any in-progress table */ - Assert(_SPI_current->tuptable == NULL); } } @@ -2446,15 +2452,24 @@ _SPI_procmem(void) /* * _SPI_begin_call: begin a SPI operation within a connected procedure + * + * use_exec is true if we intend to make use of the procedure's execCxt + * during this SPI operation. We'll switch into that context, and arrange + * for it to be cleaned up at _SPI_end_call or if an error occurs. 
*/ static int -_SPI_begin_call(bool execmem) +_SPI_begin_call(bool use_exec) { if (_SPI_current == NULL) return SPI_ERROR_UNCONNECTED; - if (execmem) /* switch to the Executor memory context */ + if (use_exec) + { + /* remember when the Executor operation started */ + _SPI_current->execSubid = GetCurrentSubTransactionId(); + /* switch to the Executor memory context */ _SPI_execmem(); + } return 0; } @@ -2462,14 +2477,19 @@ _SPI_begin_call(bool execmem) /* * _SPI_end_call: end a SPI operation within a connected procedure * + * use_exec must be the same as in the previous _SPI_begin_call + * * Note: this currently has no failure return cases, so callers don't check */ static int -_SPI_end_call(bool procmem) +_SPI_end_call(bool use_exec) { - if (procmem) /* switch to the procedure memory context */ + if (use_exec) { + /* switch to the procedure memory context */ _SPI_procmem(); + /* mark Executor context no longer in use */ + _SPI_current->execSubid = InvalidSubTransactionId; /* and free Executor memory */ MemoryContextResetAndDeleteChildren(_SPI_current->execCxt); } diff --git a/src/include/executor/spi_priv.h b/src/include/executor/spi_priv.h index ba7fb98875..8fae755418 100644 --- a/src/include/executor/spi_priv.h +++ b/src/include/executor/spi_priv.h @@ -26,6 +26,9 @@ typedef struct Oid lastoid; SPITupleTable *tuptable; /* tuptable currently being built */ + /* subtransaction in which current Executor call was started */ + SubTransactionId execSubid; + /* resources of this execution context */ slist_head tuptables; /* list of all live SPITupleTables */ MemoryContext procCxt; /* procedure context */ From 1fdab4d5aa47d8a2bb29ccb1122b0158f6db221f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 7 Oct 2017 13:19:13 -0400 Subject: [PATCH 0345/1087] Clean up sloppy maintenance of regression test schedule files. The partition_join test was added to a parallel group that was already at the maximum of 20 concurrent tests. The hash_func test wasn't added to serial_schedule at all. The identity and partition_join tests were added to serial_schedule with the aid of a dartboard, rather than maintaining consistency with parallel_schedule. There are proposals afoot to make these sorts of errors harder to make, but in the meantime let's fix the ones already in place. 
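To make the convention concrete, a parallel group such as

    test: identity partition_join

in parallel_schedule must be mirrored in serial_schedule as consecutive
single-test lines in the same relative order:

    test: identity
    test: partition_join

which is exactly the layout the diff below establishes; anything else is the
kind of drift being cleaned up here.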
Discussion: https://postgr.es/m/a37e9c57-22d4-1b82-1270-4501cd2e984e@2ndquadrant.com --- src/test/regress/parallel_schedule | 4 ++-- src/test/regress/serial_schedule | 5 +++-- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index e1d150b878..53d4f49197 100644 --- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -104,7 +104,7 @@ test: publication subscription # ---------- # Another group of parallel tests # ---------- -test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combocid tsearch tsdicts foreign_data window xmlmap functional_deps advisory_lock json jsonb json_encoding indirect_toast equivclass partition_join +test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combocid tsearch tsdicts foreign_data window xmlmap functional_deps advisory_lock json jsonb json_encoding indirect_toast equivclass # ---------- # Another group of parallel tests @@ -116,7 +116,7 @@ test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid c # ---------- # Another group of parallel tests # ---------- -test: identity +test: identity partition_join # event triggers cannot run concurrently with any test that runs DDL test: event_trigger diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index ed755f45fa..ed1df5ae24 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -79,6 +79,7 @@ test: updatable_views test: rolenames test: roleattributes test: create_am +test: hash_func test: sanity_check test: errors test: select @@ -171,13 +172,13 @@ test: conversion test: truncate test: alter_table test: sequence -test: identity test: polymorphism test: rowtypes test: returning test: largeobject test: with test: xml +test: identity +test: partition_join test: event_trigger test: stats -test: partition_join From ef73a8162a5fe9c4b2f895bf9fb660f1aabc796c Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 7 Oct 2017 17:20:09 -0400 Subject: [PATCH 0346/1087] Enforce our convention about max number of parallel regression tests. We have a very old rule that parallel_schedule should have no more than twenty tests in any one parallel group, so as to provide a bound on the number of concurrently running processes needed to pass the tests. But people keep forgetting the rule, so let's add a few lines of code to check it. Discussion: https://postgr.es/m/a37e9c57-22d4-1b82-1270-4501cd2e984e@2ndquadrant.com --- src/test/regress/GNUmakefile | 2 +- src/test/regress/pg_regress.c | 25 ++++++++++++++++++++----- src/tools/msvc/vcregress.pl | 2 ++ 3 files changed, 23 insertions(+), 6 deletions(-) diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile index b923ea1420..56cd202078 100644 --- a/src/test/regress/GNUmakefile +++ b/src/test/regress/GNUmakefile @@ -124,7 +124,7 @@ tablespace-setup: ## Run tests ## -REGRESS_OPTS = --dlpath=. $(EXTRA_REGRESS_OPTS) +REGRESS_OPTS = --dlpath=. 
--max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS) check: all tablespace-setup $(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c index abb742b1ed..f859fbc011 100644 --- a/src/test/regress/pg_regress.c +++ b/src/test/regress/pg_regress.c @@ -78,6 +78,7 @@ char *launcher = NULL; static _stringlist *loadlanguage = NULL; static _stringlist *loadextension = NULL; static int max_connections = 0; +static int max_concurrent_tests = 0; static char *encoding = NULL; static _stringlist *schedulelist = NULL; static _stringlist *extra_tests = NULL; @@ -1592,9 +1593,9 @@ run_schedule(const char *schedule, test_function tfunc) FILE *scf; int line_num = 0; - memset(resultfiles, 0, sizeof(_stringlist *) * MAX_PARALLEL_TESTS); - memset(expectfiles, 0, sizeof(_stringlist *) * MAX_PARALLEL_TESTS); - memset(tags, 0, sizeof(_stringlist *) * MAX_PARALLEL_TESTS); + memset(resultfiles, 0, sizeof(resultfiles)); + memset(expectfiles, 0, sizeof(expectfiles)); + memset(tags, 0, sizeof(tags)); scf = fopen(schedule, "r"); if (!scf) @@ -1614,6 +1615,7 @@ run_schedule(const char *schedule, test_function tfunc) line_num++; + /* clear out string lists left over from previous line */ for (i = 0; i < MAX_PARALLEL_TESTS; i++) { if (resultfiles[i] == NULL) @@ -1667,8 +1669,8 @@ run_schedule(const char *schedule, test_function tfunc) if (num_tests >= MAX_PARALLEL_TESTS) { /* can't print scbuf here, it's already been trashed */ - fprintf(stderr, _("too many parallel tests in schedule file \"%s\", line %d\n"), - schedule, line_num); + fprintf(stderr, _("too many parallel tests (more than %d) in schedule file \"%s\" line %d\n"), + MAX_PARALLEL_TESTS, schedule, line_num); exit(2); } tests[num_tests] = c; @@ -1691,6 +1693,13 @@ run_schedule(const char *schedule, test_function tfunc) wait_for_tests(pids, statuses, NULL, 1); /* status line is finished below */ } + else if (max_concurrent_tests > 0 && max_concurrent_tests < num_tests) + { + /* can't print scbuf here, it's already been trashed */ + fprintf(stderr, _("too many parallel tests (more than %d) in schedule file \"%s\" line %d\n"), + max_concurrent_tests, schedule, line_num); + exit(2); + } else if (max_connections > 0 && max_connections < num_tests) { int oldest = 0; @@ -1999,6 +2008,8 @@ help(void) printf(_(" tests; can appear multiple times\n")); printf(_(" --max-connections=N maximum number of concurrent connections\n")); printf(_(" (default is 0, meaning unlimited)\n")); + printf(_(" --max-concurrent-tests=N maximum number of concurrent tests in schedule\n")); + printf(_(" (default is 0, meaning unlimited)\n")); printf(_(" --outputdir=DIR place output files in DIR (default \".\")\n")); printf(_(" --schedule=FILE use test ordering schedule from FILE\n")); printf(_(" (can be used multiple times to concatenate)\n")); @@ -2048,6 +2059,7 @@ regression_main(int argc, char *argv[], init_function ifunc, test_function tfunc {"launcher", required_argument, NULL, 21}, {"load-extension", required_argument, NULL, 22}, {"config-auth", required_argument, NULL, 24}, + {"max-concurrent-tests", required_argument, NULL, 25}, {NULL, 0, NULL, 0} }; @@ -2161,6 +2173,9 @@ regression_main(int argc, char *argv[], init_function ifunc, test_function tfunc case 24: config_auth_datadir = pg_strdup(optarg); break; + case 25: + max_concurrent_tests = atoi(optarg); + break; default: /* getopt_long already emitted a complaint */ fprintf(stderr, _("\nTry \"%s -h\" for more 
information.\n"), diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl index 2904679114..719fe83047 100644 --- a/src/tools/msvc/vcregress.pl +++ b/src/tools/msvc/vcregress.pl @@ -104,6 +104,7 @@ sub installcheck "--dlpath=.", "--bindir=../../../$Config/psql", "--schedule=${schedule}_schedule", + "--max-concurrent-tests=20", "--encoding=SQL_ASCII", "--no-locale"); push(@args, $maxconn) if $maxconn; @@ -122,6 +123,7 @@ sub check "--dlpath=.", "--bindir=", "--schedule=${schedule}_schedule", + "--max-concurrent-tests=20", "--encoding=SQL_ASCII", "--no-locale", "--temp-instance=./tmp_check"); From b11f0d36b224a9673863b4e592f40f179dba3016 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 7 Oct 2017 18:04:25 -0400 Subject: [PATCH 0347/1087] Improve pg_regress's error reporting for schedule-file problems. The previous coding here trashed the line buffer as it scanned it, making it impossible to print the source line in subsequent error messages. With a few save/restore/strdup pushups we can improve that situation. In passing, move the free'ing of the various strings that are collected while processing one set of tests down to the bottom of the loop. That's simpler, less surprising, and should make valgrind less unhappy about the strings that were previously leaked by the last iteration. --- src/test/regress/pg_regress.c | 62 ++++++++++++++++++++--------------- 1 file changed, 36 insertions(+), 26 deletions(-) diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c index f859fbc011..7c628df4b4 100644 --- a/src/test/regress/pg_regress.c +++ b/src/test/regress/pg_regress.c @@ -1593,6 +1593,7 @@ run_schedule(const char *schedule, test_function tfunc) FILE *scf; int line_num = 0; + memset(tests, 0, sizeof(tests)); memset(resultfiles, 0, sizeof(resultfiles)); memset(expectfiles, 0, sizeof(expectfiles)); memset(tags, 0, sizeof(tags)); @@ -1615,16 +1616,6 @@ run_schedule(const char *schedule, test_function tfunc) line_num++; - /* clear out string lists left over from previous line */ - for (i = 0; i < MAX_PARALLEL_TESTS; i++) - { - if (resultfiles[i] == NULL) - break; - free_stringlist(&resultfiles[i]); - free_stringlist(&expectfiles[i]); - free_stringlist(&tags[i]); - } - /* strip trailing whitespace, especially the newline */ i = strlen(scbuf); while (i > 0 && isspace((unsigned char) scbuf[i - 1])) @@ -1657,24 +1648,35 @@ run_schedule(const char *schedule, test_function tfunc) num_tests = 0; inword = false; - for (c = test; *c; c++) + for (c = test;; c++) { - if (isspace((unsigned char) *c)) + if (*c == '\0' || isspace((unsigned char) *c)) { - *c = '\0'; - inword = false; + if (inword) + { + /* Reached end of a test name */ + char sav; + + if (num_tests >= MAX_PARALLEL_TESTS) + { + fprintf(stderr, _("too many parallel tests (more than %d) in schedule file \"%s\" line %d: %s\n"), + MAX_PARALLEL_TESTS, schedule, line_num, scbuf); + exit(2); + } + sav = *c; + *c = '\0'; + tests[num_tests] = pg_strdup(test); + num_tests++; + *c = sav; + inword = false; + } + if (*c == '\0') + break; /* loop exit is here */ } else if (!inword) { - if (num_tests >= MAX_PARALLEL_TESTS) - { - /* can't print scbuf here, it's already been trashed */ - fprintf(stderr, _("too many parallel tests (more than %d) in schedule file \"%s\" line %d\n"), - MAX_PARALLEL_TESTS, schedule, line_num); - exit(2); - } - tests[num_tests] = c; - num_tests++; + /* Start of a test name */ + test = c; inword = true; } } @@ -1695,9 +1697,8 @@ run_schedule(const char *schedule, test_function tfunc) } else if 
(max_concurrent_tests > 0 && max_concurrent_tests < num_tests) { - /* can't print scbuf here, it's already been trashed */ - fprintf(stderr, _("too many parallel tests (more than %d) in schedule file \"%s\" line %d\n"), - max_concurrent_tests, schedule, line_num); + fprintf(stderr, _("too many parallel tests (more than %d) in schedule file \"%s\" line %d: %s\n"), + max_concurrent_tests, schedule, line_num, scbuf); exit(2); } else if (max_connections > 0 && max_connections < num_tests) @@ -1802,6 +1803,15 @@ run_schedule(const char *schedule, test_function tfunc) status_end(); } + + for (i = 0; i < num_tests; i++) + { + pg_free(tests[i]); + tests[i] = NULL; + free_stringlist(&resultfiles[i]); + free_stringlist(&expectfiles[i]); + free_stringlist(&tags[i]); + } } free_stringlist(&ignorelist); From 8ec5429e2f422f4d570d4909507db0d4ca83bbac Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 8 Oct 2017 12:23:32 -0400 Subject: [PATCH 0348/1087] Reduce "X = X" to "X IS NOT NULL", if it's easy to do so. If the operator is a strict btree equality operator, and X isn't volatile, then the clause must yield true for any non-null value of X, or null if X is null. At top level of a WHERE clause, we can ignore the distinction between false and null results, so it's valid to simplify the clause to "X IS NOT NULL". This is a useful improvement mainly because we'll get a far better selectivity estimate in most cases. Because such cases seldom arise in well-written queries, it is unappetizing to expend a lot of planner cycles looking for them ... but it turns out that there's a place we can shoehorn this in practically for free, because equivclass.c already has to detect and reject candidate equivalences of the form X = X. That doesn't catch every place that it would be valid to simplify to X IS NOT NULL, but it catches the typical case. Working harder doesn't seem justified. Patch by me, reviewed by Petr Jelinek Discussion: https://postgr.es/m/CAMjNa7cC4X9YR-vAJS-jSYCajhRDvJQnN7m2sLH1wLh-_Z2bsw@mail.gmail.com --- src/backend/optimizer/path/equivclass.c | 66 +++++++++++++++++++----- src/backend/optimizer/plan/initsplan.c | 5 +- src/include/optimizer/paths.h | 3 +- src/test/regress/expected/equivclass.out | 18 +++++++ src/test/regress/sql/equivclass.sql | 8 +++ 5 files changed, 83 insertions(+), 17 deletions(-) diff --git a/src/backend/optimizer/path/equivclass.c b/src/backend/optimizer/path/equivclass.c index 7997f50c18..a225414c97 100644 --- a/src/backend/optimizer/path/equivclass.c +++ b/src/backend/optimizer/path/equivclass.c @@ -27,6 +27,7 @@ #include "optimizer/paths.h" #include "optimizer/planmain.h" #include "optimizer/prep.h" +#include "optimizer/restrictinfo.h" #include "optimizer/var.h" #include "utils/lsyscache.h" @@ -71,8 +72,14 @@ static bool reconsider_full_join_clause(PlannerInfo *root, * any delay by an outer join, so its two sides can be considered equal * anywhere they are both computable; moreover that equality can be * extended transitively. Record this knowledge in the EquivalenceClass - * data structure. Returns TRUE if successful, FALSE if not (in which - * case caller should treat the clause as ordinary, not an equivalence). + * data structure, if applicable. Returns TRUE if successful, FALSE if not + * (in which case caller should treat the clause as ordinary, not an + * equivalence). + * + * In some cases, although we cannot convert a clause into EquivalenceClass + * knowledge, we can still modify it to a more useful form than the original. 
+ * Then, *p_restrictinfo will be replaced by a new RestrictInfo, which is what + * the caller should use for further processing. * * If below_outer_join is true, then the clause was found below the nullable * side of an outer join, so its sides might validly be both NULL rather than @@ -104,9 +111,11 @@ static bool reconsider_full_join_clause(PlannerInfo *root, * memory context. */ bool -process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo, +process_equivalence(PlannerInfo *root, + RestrictInfo **p_restrictinfo, bool below_outer_join) { + RestrictInfo *restrictinfo = *p_restrictinfo; Expr *clause = restrictinfo->clause; Oid opno, collation, @@ -154,16 +163,45 @@ process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo, collation); /* - * Reject clauses of the form X=X. These are not as redundant as they - * might seem at first glance: assuming the operator is strict, this is - * really an expensive way to write X IS NOT NULL. So we must not risk - * just losing the clause, which would be possible if there is already a - * single-element EquivalenceClass containing X. The case is not common - * enough to be worth contorting the EC machinery for, so just reject the - * clause and let it be processed as a normal restriction clause. + * Clauses of the form X=X cannot be translated into EquivalenceClasses. + * We'd either end up with a single-entry EC, losing the knowledge that + * the clause was present at all, or else make an EC with duplicate + * entries, causing other issues. */ if (equal(item1, item2)) - return false; /* X=X is not a useful equivalence */ + { + /* + * If the operator is strict, then the clause can be treated as just + * "X IS NOT NULL". (Since we know we are considering a top-level + * qual, we can ignore the difference between FALSE and NULL results.) + * It's worth making the conversion because we'll typically get a much + * better selectivity estimate than we would for X=X. + * + * If the operator is not strict, we can't be sure what it will do + * with NULLs, so don't attempt to optimize it. + */ + set_opfuncid((OpExpr *) clause); + if (func_strict(((OpExpr *) clause)->opfuncid)) + { + NullTest *ntest = makeNode(NullTest); + + ntest->arg = item1; + ntest->nulltesttype = IS_NOT_NULL; + ntest->argisrow = false; /* correct even if composite arg */ + ntest->location = -1; + + *p_restrictinfo = + make_restrictinfo((Expr *) ntest, + restrictinfo->is_pushed_down, + restrictinfo->outerjoin_delayed, + restrictinfo->pseudoconstant, + restrictinfo->security_level, + NULL, + restrictinfo->outer_relids, + restrictinfo->nullable_relids); + } + return false; + } /* * If below outer join, check for strictness, else reject. 
@@ -1741,7 +1779,7 @@ reconsider_outer_join_clause(PlannerInfo *root, RestrictInfo *rinfo, bms_copy(inner_relids), bms_copy(inner_nullable_relids), cur_ec->ec_min_security); - if (process_equivalence(root, newrinfo, true)) + if (process_equivalence(root, &newrinfo, true)) match = true; } @@ -1884,7 +1922,7 @@ reconsider_full_join_clause(PlannerInfo *root, RestrictInfo *rinfo) bms_copy(left_relids), bms_copy(left_nullable_relids), cur_ec->ec_min_security); - if (process_equivalence(root, newrinfo, true)) + if (process_equivalence(root, &newrinfo, true)) matchleft = true; } eq_op = select_equality_operator(cur_ec, @@ -1899,7 +1937,7 @@ reconsider_full_join_clause(PlannerInfo *root, RestrictInfo *rinfo) bms_copy(right_relids), bms_copy(right_nullable_relids), cur_ec->ec_min_security); - if (process_equivalence(root, newrinfo, true)) + if (process_equivalence(root, &newrinfo, true)) matchright = true; } } diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c index 9931dddba4..974eb58d83 100644 --- a/src/backend/optimizer/plan/initsplan.c +++ b/src/backend/optimizer/plan/initsplan.c @@ -1964,10 +1964,11 @@ distribute_qual_to_rels(PlannerInfo *root, Node *clause, if (maybe_equivalence) { if (check_equivalence_delay(root, restrictinfo) && - process_equivalence(root, restrictinfo, below_outer_join)) + process_equivalence(root, &restrictinfo, below_outer_join)) return; /* EC rejected it, so set left_ec/right_ec the hard way ... */ - initialize_mergeclause_eclasses(root, restrictinfo); + if (restrictinfo->mergeopfamilies) /* EC might have changed this */ + initialize_mergeclause_eclasses(root, restrictinfo); /* ... and fall through to distribute_restrictinfo_to_rels */ } else if (maybe_outer_join && restrictinfo->can_join) diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h index a15eee54bb..ea886b6501 100644 --- a/src/include/optimizer/paths.h +++ b/src/include/optimizer/paths.h @@ -127,7 +127,8 @@ typedef bool (*ec_matches_callback_type) (PlannerInfo *root, EquivalenceMember *em, void *arg); -extern bool process_equivalence(PlannerInfo *root, RestrictInfo *restrictinfo, +extern bool process_equivalence(PlannerInfo *root, + RestrictInfo **p_restrictinfo, bool below_outer_join); extern Expr *canonicalize_ec_expression(Expr *expr, Oid req_type, Oid req_collation); diff --git a/src/test/regress/expected/equivclass.out b/src/test/regress/expected/equivclass.out index a96b2a1b07..c448d85dec 100644 --- a/src/test/regress/expected/equivclass.out +++ b/src/test/regress/expected/equivclass.out @@ -421,3 +421,21 @@ reset session authorization; revoke select on ec0 from regress_user_ectest; revoke select on ec1 from regress_user_ectest; drop user regress_user_ectest; +-- check that X=X is converted to X IS NOT NULL when appropriate +explain (costs off) + select * from tenk1 where unique1 = unique1 and unique2 = unique2; + QUERY PLAN +------------------------------------------------------------- + Seq Scan on tenk1 + Filter: ((unique1 IS NOT NULL) AND (unique2 IS NOT NULL)) +(2 rows) + +-- this could be converted, but isn't at present +explain (costs off) + select * from tenk1 where unique1 = unique1 or unique2 = unique2; + QUERY PLAN +-------------------------------------------------------- + Seq Scan on tenk1 + Filter: ((unique1 = unique1) OR (unique2 = unique2)) +(2 rows) + diff --git a/src/test/regress/sql/equivclass.sql b/src/test/regress/sql/equivclass.sql index 0e4aa0cd2c..85aa65de39 100644 --- a/src/test/regress/sql/equivclass.sql +++ 
b/src/test/regress/sql/equivclass.sql
@@ -254,3 +254,11 @@ revoke select on ec0 from regress_user_ectest;
 revoke select on ec1 from regress_user_ectest;
 
 drop user regress_user_ectest;
+
+-- check that X=X is converted to X IS NOT NULL when appropriate
+explain (costs off)
+  select * from tenk1 where unique1 = unique1 and unique2 = unique2;
+
+-- this could be converted, but isn't at present
+explain (costs off)
+  select * from tenk1 where unique1 = unique1 or unique2 = unique2;

From 643c27e36ff38f40d256c2a05b51a14ae2b26077 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Sun, 8 Oct 2017 15:25:26 -0400
Subject: [PATCH 0349/1087] Increase distance between flush requests during
 bulk file copies.

copy_file() reads and writes data 64KB at a time (with default BLCKSZ),
and historically has issued a pg_flush_data request after each write.
This turns out to interact really badly with macOS's new APFS file
system: a large file copy takes over 100X longer than it ought to on
APFS, as reported by Brent Dearth.  While that's arguably a macOS bug,
it's not clear whether Apple will do anything about it in the near
future, and in any case experimentation suggests that issuing flushes a
bit less often can be helpful on other platforms too.

Hence, rearrange the logic in copy_file() so that flush requests are
issued once per N writes rather than every time through the loop.  I set
the FLUSH_DISTANCE to 32MB on macOS (any less than that still results in
a noticeable speed degradation on APFS), but 1MB elsewhere.  In limited
testing on Linux and FreeBSD, this seems slightly faster than the
previous code, and certainly no worse.  It helps noticeably on macOS
even with the older HFS filesystem.

A simpler change would have been to just increase the size of the copy
buffer without changing the loop logic, but that seems likely to trash
the processor cache without really helping much.

Back-patch to 9.6 where we introduced msync() as an implementation
option for pg_flush_data().  The problem seems specific to APFS's
mmap/msync support, so I don't think we need to go further back.

Discussion: https://postgr.es/m/CADkxhTNv-j2jw2g8H57deMeAbfRgYBoLmVuXkC=YCFBXRuCOww@mail.gmail.com
---
 src/backend/storage/file/copydir.c | 38 +++++++++++++++++++++++-------
 1 file changed, 30 insertions(+), 8 deletions(-)

diff --git a/src/backend/storage/file/copydir.c b/src/backend/storage/file/copydir.c
index a5e074ead8..eae9f5a1f2 100644
--- a/src/backend/storage/file/copydir.c
+++ b/src/backend/storage/file/copydir.c
@@ -139,10 +139,24 @@ copy_file(char *fromfile, char *tofile)
 	int			dstfd;
 	int			nbytes;
 	off_t		offset;
+	off_t		flush_offset;
 
-	/* Use palloc to ensure we get a maxaligned buffer */
+	/* Size of copy buffer (read and write requests) */
 #define COPY_BUF_SIZE (8 * BLCKSZ)
 
+	/*
+	 * Size of data flush requests.  It seems beneficial on most platforms to
+	 * do this every 1MB or so.  But macOS, at least with early releases of
+	 * APFS, is really unfriendly to small mmap/msync requests, so there we do
+	 * it only every 32MB.
+	 */
+#if defined(__darwin__)
+#define FLUSH_DISTANCE (32 * 1024 * 1024)
+#else
+#define FLUSH_DISTANCE (1024 * 1024)
+#endif
+
+	/* Use palloc to ensure we get a maxaligned buffer */
 	buffer = palloc(COPY_BUF_SIZE);
 
 	/*
 	 * Do the data copying.
 	 */
+	flush_offset = 0;
 	for (offset = 0;; offset += nbytes)
 	{
 		/* If we got a cancel signal during the copy of the file, quit */
 		CHECK_FOR_INTERRUPTS();
 
+		/*
+		 * We fsync the files later, but during the copy, flush them every so
+		 * often to avoid spamming the cache and hopefully get the kernel to
+		 * start writing them out before the fsync comes.
+		 */
+		if (offset - flush_offset >= FLUSH_DISTANCE)
+		{
+			pg_flush_data(dstfd, flush_offset, offset - flush_offset);
+			flush_offset = offset;
+		}
+
 		pgstat_report_wait_start(WAIT_EVENT_COPY_FILE_READ);
 		nbytes = read(srcfd, buffer, COPY_BUF_SIZE);
 		pgstat_report_wait_end();
@@ -190,15 +216,11 @@ copy_file(char *fromfile, char *tofile)
 					 errmsg("could not write to file \"%s\": %m", tofile)));
 		}
 		pgstat_report_wait_end();
-
-		/*
-		 * We fsync the files later but first flush them to avoid spamming the
-		 * cache and hopefully get the kernel to start writing them out before
-		 * the fsync comes.
-		 */
-		pg_flush_data(dstfd, offset, nbytes);
 	}
 
+	if (offset > flush_offset)
+		pg_flush_data(dstfd, flush_offset, offset - flush_offset);
+
 	if (CloseTransientFile(dstfd))
 		ereport(ERROR,
 				(errcode_for_file_access(),

From 84ad4b036d975ad1be0f52251bac3a06463c9811 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Sun, 8 Oct 2017 15:08:25 -0700
Subject: [PATCH 0350/1087] Reduce memory usage of targetlist SRFs.

Previously nodeProjectSet only released memory once per input tuple,
rather than once per returned tuple. If the computation of an
individual returned tuple requires a lot of memory, that can lead to
problems.

Instead change things so that the expression context can be reset once
per output tuple, which requires a new memory context to store SRF
arguments in.

This is a longstanding issue, but was hard to fix before 9.6, due to
the way tSRFs were evaluated. But it's fairly easy to fix now. We could
backpatch this into 10, but given there've been few complaints that
doesn't seem worth the risk so far.

Reported-By: Lucas Fairchild
Author: Andres Freund, per discussion with Tom Lane
Discussion: https://postgr.es/m/4514.1507318623@sss.pgh.pa.us
---
 src/backend/executor/execSRF.c        | 31 +++++++++++++++++++++++---
 src/backend/executor/nodeProjectSet.c | 32 ++++++++++++++++++++++-----
 src/include/executor/executor.h       |  1 +
 src/include/nodes/execnodes.h         |  1 +
 4 files changed, 57 insertions(+), 8 deletions(-)

diff --git a/src/backend/executor/execSRF.c b/src/backend/executor/execSRF.c
index 1be683db83..cce771d4be 100644
--- a/src/backend/executor/execSRF.c
+++ b/src/backend/executor/execSRF.c
@@ -467,13 +467,16 @@ ExecInitFunctionResultSet(Expr *expr,
 * function itself.  The argument expressions may not contain set-returning
 * functions (the planner is supposed to have separated evaluation for those).
 *
- * This should be called in a short-lived (per-tuple) context.
+ * This should be called in a short-lived (per-tuple) context; argContext
+ * needs to live until all rows have been returned (i.e. *isDone set to
+ * ExprEndResult or ExprSingleResult).
 *
 * This is used by nodeProjectSet.c.
*/ Datum ExecMakeFunctionResultSet(SetExprState *fcache, ExprContext *econtext, + MemoryContext argContext, bool *isNull, ExprDoneCond *isDone) { @@ -497,8 +500,21 @@ ExecMakeFunctionResultSet(SetExprState *fcache, */ if (fcache->funcResultStore) { - if (tuplestore_gettupleslot(fcache->funcResultStore, true, false, - fcache->funcResultSlot)) + TupleTableSlot *slot = fcache->funcResultSlot; + MemoryContext oldContext; + bool foundTup; + + /* + * Have to make sure tuple in slot lives long enough, otherwise + * clearing the slot could end up trying to free something already + * freed. + */ + oldContext = MemoryContextSwitchTo(slot->tts_mcxt); + foundTup = tuplestore_gettupleslot(fcache->funcResultStore, true, false, + fcache->funcResultSlot); + MemoryContextSwitchTo(oldContext); + + if (foundTup) { *isDone = ExprMultipleResult; if (fcache->funcReturnsTuple) @@ -526,11 +542,20 @@ ExecMakeFunctionResultSet(SetExprState *fcache, * function manager. We skip the evaluation if it was already done in the * previous call (ie, we are continuing the evaluation of a set-valued * function). Otherwise, collect the current argument values into fcinfo. + * + * The arguments have to live in a context that lives at least until all + * rows from this SRF have been returned, otherwise ValuePerCall SRFs + * would reference freed memory after the first returned row. */ fcinfo = &fcache->fcinfo_data; arguments = fcache->args; if (!fcache->setArgsValid) + { + MemoryContext oldContext = MemoryContextSwitchTo(argContext); + ExecEvalFuncArgs(fcinfo, arguments, econtext); + MemoryContextSwitchTo(oldContext); + } else { /* Reset flag (we may set it again below) */ diff --git a/src/backend/executor/nodeProjectSet.c b/src/backend/executor/nodeProjectSet.c index 68981296f9..30789bcce4 100644 --- a/src/backend/executor/nodeProjectSet.c +++ b/src/backend/executor/nodeProjectSet.c @@ -52,6 +52,13 @@ ExecProjectSet(PlanState *pstate) econtext = node->ps.ps_ExprContext; + /* + * Reset per-tuple context to free expression-evaluation storage allocated + * for a potentially previously returned tuple. Note that the SRF argument + * context has a different lifetime and is reset below. + */ + ResetExprContext(econtext); + /* * Check to see if we're still projecting out tuples from a previous scan * tuple (because there is a function-returning-set in the projection @@ -66,11 +73,13 @@ ExecProjectSet(PlanState *pstate) } /* - * Reset per-tuple memory context to free any expression evaluation - * storage allocated in the previous tuple cycle. Note this can't happen - * until we're done projecting out tuples from a scan tuple. + * Reset argument context to free any expression evaluation storage + * allocated in the previous tuple cycle. Note this can't happen until + * we're done projecting out tuples from a scan tuple, as ValuePerCall + * functions are allowed to reference the arguments for each returned + * tuple. */ - ResetExprContext(econtext); + MemoryContextReset(node->argcontext); /* * Get another input tuple and project SRFs from it. @@ -164,7 +173,8 @@ ExecProjectSRF(ProjectSetState *node, bool continuing) * Evaluate SRF - possibly continuing previously started output. 
 */
 		*result = ExecMakeFunctionResultSet((SetExprState *) elem,
-											econtext, isnull, isdone);
+											econtext, node->argcontext,
+											isnull, isdone);
 
 		if (*isdone != ExprEndResult)
 			hasresult = true;
@@ -291,6 +301,18 @@ ExecInitProjectSet(ProjectSet *node, EState *estate, int eflags)
 		off++;
 	}
 
+	/*
+	 * Create a memory context that ExecMakeFunctionResultSet can use to
+	 * evaluate function arguments in.  We can't use the per-tuple context for
+	 * this because it gets reset too often; but we don't want to leak
+	 * evaluation results into the query-lifespan context either.  We use one
+	 * context for the arguments of all tSRFs, as they have roughly equivalent
+	 * lifetimes.
+	 */
+	state->argcontext = AllocSetContextCreate(CurrentMemoryContext,
+											  "tSRF function arguments",
+											  ALLOCSET_DEFAULT_SIZES);
+
 	return state;
 }
 
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 770881849c..37fd6b2700 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -400,6 +400,7 @@ extern SetExprState *ExecInitFunctionResultSet(Expr *expr,
 				ExprContext *econtext, PlanState *parent);
 extern Datum ExecMakeFunctionResultSet(SetExprState *fcache,
 						  ExprContext *econtext,
+						  MemoryContext argContext,
 						  bool *isNull,
 						  ExprDoneCond *isDone);
 
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index c6d3021c85..c46113444f 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -946,6 +946,7 @@ typedef struct ProjectSetState
 	ExprDoneCond *elemdone;		/* array of per-SRF is-done states */
 	int			nelems;			/* length of elemdone[] array */
 	bool		pending_srf_tuples; /* still evaluating srfs in tlist? */
+	MemoryContext argcontext;	/* context for SRF arguments */
 } ProjectSetState;
 
 /* ----------------

From 71c75ddfbb277362bf62dc5b1645c3903e16bc34 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Sun, 8 Oct 2017 21:51:58 -0400
Subject: [PATCH 0351/1087] Remove unused documentation file
---
 doc/src/sgml/contacts.sgml | 26 --------------------------
 doc/src/sgml/filelist.sgml |  1 -
 2 files changed, 27 deletions(-)
 delete mode 100644 doc/src/sgml/contacts.sgml

diff --git a/doc/src/sgml/contacts.sgml b/doc/src/sgml/contacts.sgml
deleted file mode 100644
index 308eb418a5..0000000000
--- a/doc/src/sgml/contacts.sgml
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
-
-Contacts
-
-
-
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 9050559abd..a72c50eadb 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -156,7 +156,6 @@
 
 
 
-
 
From 8a241792f968ed5be6cf4d41e32c0d264f6c0c65 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Mon, 9 Oct 2017 15:20:42 -0700
Subject: [PATCH 0352/1087] Add pg_strnlen(), a portable implementation of
 strnlen.

As the OS version is likely going to be more optimized, use it when
available (as detected by configure), and fall back to our own
implementation otherwise.
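As a quick illustration of the contract, here is a standalone sketch;
demo_strnlen simply mirrors the fallback implementation added below and is not
itself PostgreSQL code:

    #include <stddef.h>
    #include <stdio.h>

    /* Same semantics as strnlen()/pg_strnlen(): length of str, capped at maxlen. */
    static size_t
    demo_strnlen(const char *str, size_t maxlen)
    {
        const char *p = str;

        while (maxlen-- > 0 && *p)
            p++;
        return p - str;
    }

    int
    main(void)
    {
        char    buf[8] = "abc";                     /* string shorter than its buffer */
        char    unterminated[3] = {'x', 'y', 'z'};  /* no null byte at all */

        /* stops at the terminating null byte: prints 3 */
        printf("%zu\n", demo_strnlen(buf, sizeof(buf)));

        /* never reads past maxlen even without a null byte: prints 3 */
        printf("%zu\n", demo_strnlen(unterminated, sizeof(unterminated)));

        return 0;
    }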
--- configure | 2 +- configure.in | 2 +- src/common/string.c | 20 ++++++++++++++++++++ src/include/common/string.h | 15 +++++++++++++++ src/include/pg_config.h.in | 3 +++ src/include/pg_config.h.win32 | 3 +++ src/port/snprintf.c | 12 ++---------- 7 files changed, 45 insertions(+), 12 deletions(-) diff --git a/configure b/configure index 216447e739..a1283c0500 100755 --- a/configure +++ b/configure @@ -8777,7 +8777,7 @@ fi -for ac_func in strerror_r getpwuid_r gethostbyname_r +for ac_func in strerror_r getpwuid_r gethostbyname_r strnlen do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" diff --git a/configure.in b/configure.in index a2e3d8331a..e1381b4ead 100644 --- a/configure.in +++ b/configure.in @@ -961,7 +961,7 @@ LIBS="$LIBS $PTHREAD_LIBS" AC_CHECK_HEADER(pthread.h, [], [AC_MSG_ERROR([ pthread.h not found; use --disable-thread-safety to disable thread safety])]) -AC_CHECK_FUNCS([strerror_r getpwuid_r gethostbyname_r]) +AC_CHECK_FUNCS([strerror_r getpwuid_r gethostbyname_r strnlen]) # Do test here with the proper thread flags PGAC_FUNC_STRERROR_R_INT diff --git a/src/common/string.c b/src/common/string.c index 159d9ea7b6..901821f3d8 100644 --- a/src/common/string.c +++ b/src/common/string.c @@ -41,3 +41,23 @@ pg_str_endswith(const char *str, const char *end) str += slen - elen; return strcmp(str, end) == 0; } + + +/* + * Portable version of posix' strnlen. + * + * Returns the number of characters before a null-byte in the string pointed + * to by str, unless there's no null-byte before maxlen. In the latter case + * maxlen is returned. + */ +#ifndef HAVE_STRNLEN +size_t +pg_strnlen(const char *str, size_t maxlen) +{ + const char *p = str; + + while (maxlen-- > 0 && *p) + p++; + return p - str; +} +#endif diff --git a/src/include/common/string.h b/src/include/common/string.h index 5f3ea71d61..3d46b80918 100644 --- a/src/include/common/string.h +++ b/src/include/common/string.h @@ -12,4 +12,19 @@ extern bool pg_str_endswith(const char *str, const char *end); +/* + * Portable version of posix' strnlen. + * + * Returns the number of characters before a null-byte in the string pointed + * to by str, unless there's no null-byte before maxlen. In the latter case + * maxlen is returned. + * + * Use the system strnlen if provided, it's likely to be faster. + */ +#ifdef HAVE_STRNLEN +#define pg_strnlen(str, maxlen) strnlen(str, maxlen) +#else +extern size_t pg_strnlen(const char *str, size_t maxlen); +#endif + #endif /* COMMON_STRING_H */ diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index 368a297e6d..d20cc47fde 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -496,6 +496,9 @@ /* Define to 1 if you have the `strlcpy' function. */ #undef HAVE_STRLCPY +/* Define to 1 if you have the `strnlen' function. */ +#undef HAVE_STRNLEN + /* Define to use have a strong random number source */ #undef HAVE_STRONG_RANDOM diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index 3537b6f704..58eef0a538 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -345,6 +345,9 @@ /* Define to 1 if you have the header file. */ #define HAVE_STRING_H 1 +/* Define to 1 if you have the `strnlen' function. 
*/ +#define HAVE_STRNLEN + /* Define to use have a strong random number source */ #define HAVE_STRONG_RANDOM 1 diff --git a/src/port/snprintf.c b/src/port/snprintf.c index 231e5d6bdb..531d2c5ee3 100644 --- a/src/port/snprintf.c +++ b/src/port/snprintf.c @@ -43,6 +43,8 @@ #endif #include +#include "common/string.h" + #ifndef NL_ARGMAX #define NL_ARGMAX 16 #endif @@ -790,16 +792,6 @@ dopr(PrintfTarget *target, const char *format, va_list args) target->failed = true; } -static size_t -pg_strnlen(const char *str, size_t maxlen) -{ - const char *p = str; - - while (maxlen-- > 0 && *p) - p++; - return p - str; -} - static void fmtstr(char *value, int leftjust, int minlen, int maxwidth, int pointflag, PrintfTarget *target) From 82c117cb90e6b6b79f06d61eb1ddf06e94e75b60 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Mon, 9 Oct 2017 15:20:42 -0700 Subject: [PATCH 0353/1087] Fix pnstrdup() to not memcpy() the maximum allowed length. The previous behaviour was dangerous if the length passed wasn't the actual string length but rather the maximum size of the underlying buffer: memcpy() would then copy bytes beyond the string's terminating NUL, reading memory that might not be valid. Capping the copy at pg_strnlen(in, len) copies only the string itself. Author: Andres Freund Discussion: https://postgr.es/m/20161003215524.mwz5p45pcverrkyk@alap3.anarazel.de --- src/backend/utils/mmgr/mcxt.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c index cd696f16bc..64e0408d5a 100644 --- a/src/backend/utils/mmgr/mcxt.c +++ b/src/backend/utils/mmgr/mcxt.c @@ -21,6 +21,7 @@ #include "postgres.h" +#include "common/string.h" #include "miscadmin.h" #include "utils/memdebug.h" #include "utils/memutils.h" @@ -1086,10 +1087,14 @@ pstrdup(const char *in) char * pnstrdup(const char *in, Size len) { - char *out = palloc(len + 1); + char *out; + len = pg_strnlen(in, len); + + out = palloc(len + 1); memcpy(out, in, len); out[len] = '\0'; + return out; } From 44b3230e821e7a0cc4e9438d1c27305d533edacc Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 8 Oct 2017 22:00:57 -0400 Subject: [PATCH 0354/1087] Use lower-case SGML attribute values for DocBook XML compatibility --- doc/src/sgml/ecpg.sgml | 112 +++++------
 doc/src/sgml/func.sgml | 64 +++----
 doc/src/sgml/queries.sgml | 2 +-
 doc/src/sgml/ref/alter_database.sgml | 26 +--
 .../sgml/ref/alter_default_privileges.sgml | 20 +-
 doc/src/sgml/ref/alter_domain.sgml | 52 +++---
 doc/src/sgml/ref/alter_event_trigger.sgml | 14 +-
 doc/src/sgml/ref/alter_extension.sgml | 68 +++----
 .../sgml/ref/alter_foreign_data_wrapper.sgml | 6 +-
 doc/src/sgml/ref/alter_foreign_table.sgml | 100 +++++-----
 doc/src/sgml/ref/alter_function.sgml | 4 +-
 doc/src/sgml/ref/alter_group.sgml | 14 +-
 doc/src/sgml/ref/alter_index.sgml | 38 ++--
 doc/src/sgml/ref/alter_large_object.sgml | 2 +-
 doc/src/sgml/ref/alter_materialized_view.sgml | 42 ++---
 doc/src/sgml/ref/alter_policy.sgml | 2 +-
 doc/src/sgml/ref/alter_publication.sgml | 12 +-
 doc/src/sgml/ref/alter_role.sgml | 26 +--
 doc/src/sgml/ref/alter_rule.sgml | 8 +-
 doc/src/sgml/ref/alter_sequence.sgml | 4 +-
 doc/src/sgml/ref/alter_server.sgml | 10 +-
 doc/src/sgml/ref/alter_statistics.sgml | 4 +-
 doc/src/sgml/ref/alter_subscription.sgml | 16 +-
 doc/src/sgml/ref/alter_system.sgml | 4 +-
 doc/src/sgml/ref/alter_table.sgml | 174 +++++++++---------
 doc/src/sgml/ref/alter_tablespace.sgml | 4 +-
 doc/src/sgml/ref/alter_trigger.sgml | 12 +-
 doc/src/sgml/ref/alter_type.sgml | 46 ++---
 doc/src/sgml/ref/alter_user.sgml | 24 +--
 doc/src/sgml/ref/alter_user_mapping.sgml | 4 +-
 doc/src/sgml/ref/alter_view.sgml | 8 +-
 doc/src/sgml/ref/analyze.sgml | 12 +-
doc/src/sgml/ref/close.sgml | 4 +- doc/src/sgml/ref/cluster.sgml | 8 +- doc/src/sgml/ref/comment.sgml | 80 ++++---- doc/src/sgml/ref/commit_prepared.sgml | 4 +- doc/src/sgml/ref/create_aggregate.sgml | 174 +++++++++--------- doc/src/sgml/ref/create_database.sgml | 2 +- doc/src/sgml/ref/create_domain.sgml | 14 +- doc/src/sgml/ref/create_event_trigger.sgml | 8 +- .../sgml/ref/create_foreign_data_wrapper.sgml | 4 +- doc/src/sgml/ref/create_foreign_table.sgml | 42 ++--- doc/src/sgml/ref/create_group.sgml | 20 +- doc/src/sgml/ref/create_index.sgml | 2 +- .../sgml/ref/create_materialized_view.sgml | 10 +- doc/src/sgml/ref/create_role.sgml | 24 +-- doc/src/sgml/ref/create_schema.sgml | 12 +- doc/src/sgml/ref/create_server.sgml | 4 +- doc/src/sgml/ref/create_statistics.sgml | 16 +- doc/src/sgml/ref/create_subscription.sgml | 6 +- doc/src/sgml/ref/create_table.sgml | 102 +++++----- doc/src/sgml/ref/create_table_as.sgml | 10 +- doc/src/sgml/ref/create_tablespace.sgml | 2 +- doc/src/sgml/ref/create_trigger.sgml | 12 +- doc/src/sgml/ref/create_type.sgml | 2 +- doc/src/sgml/ref/create_user.sgml | 22 +-- doc/src/sgml/ref/create_user_mapping.sgml | 4 +- doc/src/sgml/ref/create_view.sgml | 8 +- doc/src/sgml/ref/delete.sgml | 18 +- doc/src/sgml/ref/do.sgml | 6 +- doc/src/sgml/ref/drop_database.sgml | 4 +- doc/src/sgml/ref/drop_domain.sgml | 4 +- doc/src/sgml/ref/drop_event_trigger.sgml | 4 +- doc/src/sgml/ref/drop_extension.sgml | 4 +- doc/src/sgml/ref/drop_foreign_table.sgml | 4 +- doc/src/sgml/ref/drop_group.sgml | 2 +- doc/src/sgml/ref/drop_index.sgml | 4 +- doc/src/sgml/ref/drop_language.sgml | 4 +- doc/src/sgml/ref/drop_materialized_view.sgml | 4 +- doc/src/sgml/ref/drop_opclass.sgml | 2 +- doc/src/sgml/ref/drop_operator.sgml | 2 +- doc/src/sgml/ref/drop_opfamily.sgml | 2 +- doc/src/sgml/ref/drop_owned.sgml | 4 +- doc/src/sgml/ref/drop_publication.sgml | 2 +- doc/src/sgml/ref/drop_role.sgml | 4 +- doc/src/sgml/ref/drop_rule.sgml | 2 +- doc/src/sgml/ref/drop_schema.sgml | 4 +- doc/src/sgml/ref/drop_sequence.sgml | 4 +- doc/src/sgml/ref/drop_statistics.sgml | 4 +- doc/src/sgml/ref/drop_table.sgml | 4 +- doc/src/sgml/ref/drop_tablespace.sgml | 4 +- doc/src/sgml/ref/drop_trigger.sgml | 6 +- doc/src/sgml/ref/drop_tsconfig.sgml | 2 +- doc/src/sgml/ref/drop_tsdictionary.sgml | 2 +- doc/src/sgml/ref/drop_tsparser.sgml | 2 +- doc/src/sgml/ref/drop_tstemplate.sgml | 2 +- doc/src/sgml/ref/drop_type.sgml | 4 +- doc/src/sgml/ref/drop_user.sgml | 2 +- doc/src/sgml/ref/drop_view.sgml | 4 +- doc/src/sgml/ref/execute.sgml | 6 +- doc/src/sgml/ref/fetch.sgml | 64 +++---- doc/src/sgml/ref/grant.sgml | 50 ++--- doc/src/sgml/ref/import_foreign_schema.sgml | 22 +-- doc/src/sgml/ref/insert.sgml | 86 ++++----- doc/src/sgml/ref/listen.sgml | 8 +- doc/src/sgml/ref/load.sgml | 4 +- doc/src/sgml/ref/lock.sgml | 8 +- doc/src/sgml/ref/move.sgml | 14 +- doc/src/sgml/ref/notify.sgml | 6 +- doc/src/sgml/ref/pg_restore.sgml | 4 +- doc/src/sgml/ref/prepare.sgml | 8 +- doc/src/sgml/ref/prepare_transaction.sgml | 4 +- doc/src/sgml/ref/reassign_owned.sgml | 16 +- .../sgml/ref/refresh_materialized_view.sgml | 4 +- doc/src/sgml/ref/reindex.sgml | 4 +- doc/src/sgml/ref/reset.sgml | 4 +- doc/src/sgml/ref/revoke.sgml | 40 ++-- doc/src/sgml/ref/rollback_prepared.sgml | 4 +- doc/src/sgml/ref/rollback_to.sgml | 2 +- doc/src/sgml/ref/security_label.sgml | 42 ++--- doc/src/sgml/ref/select_into.sgml | 2 +- doc/src/sgml/ref/set.sgml | 8 +- doc/src/sgml/ref/show.sgml | 4 +- doc/src/sgml/ref/truncate.sgml | 4 +- doc/src/sgml/ref/unlisten.sgml | 6 +- 
doc/src/sgml/ref/update.sgml | 34 ++-- doc/src/sgml/ref/vacuum.sgml | 16 +- doc/src/sgml/ref/values.sgml | 6 +- doc/src/sgml/textsearch.sgml | 86 ++++----- doc/src/sgml/unaccent.sgml | 4 +- 120 files changed, 1110 insertions(+), 1110 deletions(-) diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index c88b0c2fb3..716a101838 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -507,7 +507,7 @@ EXEC SQL COMMIT; - EXEC SQL PREPARE TRANSACTION transaction_id + EXEC SQL PREPARE TRANSACTION transaction_id Prepare the current transaction for two-phase commit. @@ -516,7 +516,7 @@ EXEC SQL COMMIT; - EXEC SQL COMMIT PREPARED transaction_id + EXEC SQL COMMIT PREPARED transaction_id Commit a transaction that is in prepared state. @@ -525,7 +525,7 @@ EXEC SQL COMMIT; - EXEC SQL ROLLBACK PREPARED transaction_id + EXEC SQL ROLLBACK PREPARED transaction_id Roll back a transaction that is in prepared state. @@ -6264,7 +6264,7 @@ c++ test_cpp.o test_mod.o -lecpg -o test_cpp -ALLOCATE DESCRIPTOR name +ALLOCATE DESCRIPTOR name @@ -6288,7 +6288,7 @@ ALLOCATE DESCRIPTOR name - name + name A name of SQL descriptor, case sensitive. This can be an SQL @@ -6356,10 +6356,10 @@ DATABASE connection_target - connection_target + connection_target - connection_target + connection_target specifies the target server of the connection on one of several forms. @@ -6416,7 +6416,7 @@ DATABASE connection_target - connection_object + connection_object An optional identifier for the connection, so that it can be @@ -6427,7 +6427,7 @@ DATABASE connection_target - connection_user + connection_user The user name for the database connection. @@ -6553,7 +6553,7 @@ EXEC SQL END DECLARE SECTION; -DEALLOCATE DESCRIPTOR name +DEALLOCATE DESCRIPTOR name @@ -6571,7 +6571,7 @@ DEALLOCATE DESCRIPTOR name - name + name The name of the descriptor which is going to be deallocated. @@ -6619,8 +6619,8 @@ EXEC SQL DEALLOCATE DESCRIPTOR mydesc; -DECLARE cursor_name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ] CURSOR [ { WITH | WITHOUT } HOLD ] FOR prepared_name -DECLARE cursor_name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ] CURSOR [ { WITH | WITHOUT } HOLD ] FOR query +DECLARE cursor_name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ] CURSOR [ { WITH | WITHOUT } HOLD ] FOR prepared_name +DECLARE cursor_name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ] CURSOR [ { WITH | WITHOUT } HOLD ] FOR query @@ -6645,7 +6645,7 @@ DECLARE cursor_name [ BINARY ] [ IN - cursor_name + cursor_name A cursor name, case sensitive. This can be an SQL identifier @@ -6655,7 +6655,7 @@ DECLARE cursor_name [ BINARY ] [ IN - prepared_name + prepared_name The name of a prepared query, either as an SQL identifier or a @@ -6730,9 +6730,9 @@ EXEC SQL DECLARE cur1 CURSOR FOR stmt1; -DESCRIBE [ OUTPUT ] prepared_name USING [ SQL ] DESCRIPTOR descriptor_name -DESCRIBE [ OUTPUT ] prepared_name INTO [ SQL ] DESCRIPTOR descriptor_name -DESCRIBE [ OUTPUT ] prepared_name INTO sqlda_name +DESCRIBE [ OUTPUT ] prepared_name USING [ SQL ] DESCRIPTOR descriptor_name +DESCRIBE [ OUTPUT ] prepared_name INTO [ SQL ] DESCRIPTOR descriptor_name +DESCRIBE [ OUTPUT ] prepared_name INTO sqlda_name @@ -6751,7 +6751,7 @@ DESCRIBE [ OUTPUT ] prepared_name I - prepared_name + prepared_name The name of a prepared statement. This can be an SQL @@ -6761,7 +6761,7 @@ DESCRIBE [ OUTPUT ] prepared_name I - descriptor_name + descriptor_name A descriptor name. It is case sensitive. 
It can be an SQL @@ -6771,7 +6771,7 @@ DESCRIBE [ OUTPUT ] prepared_name I - sqlda_name + sqlda_name The name of an SQLDA variable. @@ -6819,7 +6819,7 @@ EXEC SQL DEALLOCATE DESCRIPTOR mydesc; -DISCONNECT connection_name +DISCONNECT connection_name DISCONNECT [ CURRENT ] DISCONNECT DEFAULT DISCONNECT ALL @@ -6840,7 +6840,7 @@ DISCONNECT ALL - connection_name + connection_name A database connection name established by @@ -6929,7 +6929,7 @@ main(void) -EXECUTE IMMEDIATE string +EXECUTE IMMEDIATE string @@ -6948,7 +6948,7 @@ EXECUTE IMMEDIATE string - string + string A literal C string or a host variable containing the SQL @@ -6990,8 +6990,8 @@ EXEC SQL EXECUTE IMMEDIATE :command; -GET DESCRIPTOR descriptor_name :cvariable = descriptor_header_item [, ... ] -GET DESCRIPTOR descriptor_name VALUE column_number :cvariable = descriptor_item [, ... ] +GET DESCRIPTOR descriptor_name :cvariable = descriptor_header_item [, ... ] +GET DESCRIPTOR descriptor_name VALUE column_number :cvariable = descriptor_item [, ... ] @@ -7022,7 +7022,7 @@ GET DESCRIPTOR descriptor_name VALU - descriptor_name + descriptor_name A descriptor name. @@ -7031,7 +7031,7 @@ GET DESCRIPTOR descriptor_name VALU - descriptor_header_item + descriptor_header_item A token identifying which header information item to retrieve. @@ -7042,7 +7042,7 @@ GET DESCRIPTOR descriptor_name VALU - column_number + column_number The number of the column about which information is to be @@ -7052,7 +7052,7 @@ GET DESCRIPTOR descriptor_name VALU - descriptor_item + descriptor_item A token identifying which item of information about a column @@ -7063,7 +7063,7 @@ GET DESCRIPTOR descriptor_name VALU - cvariable + cvariable A host variable that will receive the data retrieved from the @@ -7178,9 +7178,9 @@ d_data = testdb -OPEN cursor_name -OPEN cursor_name USING value [, ... ] -OPEN cursor_name USING SQL DESCRIPTOR descriptor_name +OPEN cursor_name +OPEN cursor_name USING value [, ... ] +OPEN cursor_name USING SQL DESCRIPTOR descriptor_name @@ -7202,7 +7202,7 @@ OPEN cursor_name USING SQL DESCRIPT - cursor_name + cursor_name The name of the cursor to be opened. This can be an SQL @@ -7212,7 +7212,7 @@ OPEN cursor_name USING SQL DESCRIPT - value + value A value to be bound to a placeholder in the cursor. This can @@ -7223,7 +7223,7 @@ OPEN cursor_name USING SQL DESCRIPT - descriptor_name + descriptor_name The name of a descriptor containing values to be bound to the @@ -7272,7 +7272,7 @@ EXEC SQL OPEN :curname1; -PREPARE name FROM string +PREPARE name FROM string @@ -7293,7 +7293,7 @@ PREPARE name FROM - prepared_name + prepared_name An identifier for the prepared query. @@ -7302,7 +7302,7 @@ PREPARE name FROM - string + string A literal C string or a host variable containing a preparable @@ -7385,7 +7385,7 @@ SET AUTOCOMMIT { = | TO } { ON | OFF } -SET CONNECTION [ TO | = ] connection_name +SET CONNECTION [ TO | = ] connection_name @@ -7404,7 +7404,7 @@ SET CONNECTION [ TO | = ] connection_name - connection_name + connection_name A database connection name established by @@ -7459,8 +7459,8 @@ EXEC SQL SET CONNECTION = con1; -SET DESCRIPTOR descriptor_name descriptor_header_item = value [, ... ] -SET DESCRIPTOR descriptor_name VALUE number descriptor_item = value [, ...] +SET DESCRIPTOR descriptor_name descriptor_header_item = value [, ... ] +SET DESCRIPTOR descriptor_name VALUE number descriptor_item = value [, ...] @@ -7486,7 +7486,7 @@ SET DESCRIPTOR descriptor_name VALU - descriptor_name + descriptor_name A descriptor name. 
@@ -7495,7 +7495,7 @@ SET DESCRIPTOR descriptor_name VALU - descriptor_header_item + descriptor_header_item A token identifying which header information item to set. @@ -7506,7 +7506,7 @@ SET DESCRIPTOR descriptor_name VALU - number + number The number of the descriptor item to set. The count starts at @@ -7516,7 +7516,7 @@ SET DESCRIPTOR descriptor_name VALU - descriptor_item + descriptor_item A token identifying which item of information to set in the @@ -7527,7 +7527,7 @@ SET DESCRIPTOR descriptor_name VALU - value + value A value to store into the descriptor item. This can be an SQL @@ -7575,7 +7575,7 @@ EXEC SQL SET DESCRIPTOR indesc VALUE 2 INDICATOR = :val2null, DATA = :val2; -TYPE type_name IS ctype +TYPE type_name IS ctype @@ -7599,7 +7599,7 @@ TYPE type_name IS - type_name + type_name The name for the new type. It must be a valid C type name. @@ -7608,7 +7608,7 @@ TYPE type_name IS - ctype + ctype A C type specification. @@ -7732,7 +7732,7 @@ VAR varname IS ctype - varname + varname A C variable name. @@ -7741,7 +7741,7 @@ VAR varname IS ctype - ctype + ctype A C type specification. @@ -7779,7 +7779,7 @@ EXEC SQL VAR a IS int; -WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } action +WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } action diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 1839bddceb..b52407822d 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -9491,7 +9491,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple plainto_tsquery - plainto_tsquery( config regconfig , query text) + plainto_tsquery( config regconfig , query text) tsquery produce tsquery ignoring punctuation @@ -9503,7 +9503,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple phraseto_tsquery - phraseto_tsquery( config regconfig , query text) + phraseto_tsquery( config regconfig , query text) tsquery produce tsquery that searches for a phrase, @@ -9516,7 +9516,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple querytree - querytree(query tsquery) + querytree(query tsquery) text get indexable part of a tsquery @@ -9528,10 +9528,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple setweight - setweight(vector tsvector, weight "char") + setweight(vector tsvector, weight "char") tsvector - assign weight to each element of vector + assign weight to each element of vector setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A') 'cat':3A 'fat':2A,4A 'rat':5A @@ -9541,10 +9541,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple setweightsetweight for specific lexeme(s) - setweight(vector tsvector, weight "char", lexemes text[]) + setweight(vector tsvector, weight "char", lexemes text[])tsvector - assign weight to elements of vector that are listed in lexemes + assign weight to elements of vector that are listed in lexemessetweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A', '{cat,rat}')'cat':3A 'fat':2,4 'rat':5A @@ -9565,7 +9565,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple to_tsquery - to_tsquery( config regconfig , query text) + to_tsquery( config regconfig , query text)tsquerynormalize words and convert to tsquery @@ -9577,7 +9577,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple to_tsvector - to_tsvector( config regconfig , document text) + to_tsvector( config regconfig , document text)tsvectorreduce document text to tsvector @@ -9586,7 +9586,7 @@ 
CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - to_tsvector( config regconfig , document json(b)) + to_tsvector( config regconfig , document json(b)) tsvector @@ -9601,20 +9601,20 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_delete - ts_delete(vector tsvector, lexeme text) + ts_delete(vector tsvector, lexeme text) tsvector - remove given lexeme from vector + remove given lexeme from vector ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, 'fat') 'cat':3 'rat':5A - ts_delete(vector tsvector, lexemes text[]) + ts_delete(vector tsvector, lexemes text[]) tsvector - remove any occurrence of lexemes in lexemes from vector + remove any occurrence of lexemes in lexemes from vector ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, ARRAY['fat','rat']) 'cat':3 @@ -9623,10 +9623,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_filter - ts_filter(vector tsvector, weights "char"[]) + ts_filter(vector tsvector, weights "char"[])tsvector - select only elements with given weights from vector + select only elements with given weights from vectorts_filter('fat:2,4 cat:3b rat:5A'::tsvector, '{a,b}')'cat':3B 'rat':5A @@ -9635,7 +9635,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_headline - ts_headline( config regconfig, document text, query tsquery , options text ) + ts_headline( config regconfig, document text, query tsquery , options text )textdisplay a query match @@ -9644,7 +9644,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - ts_headline( config regconfig, document json(b), query tsquery , options text ) + ts_headline( config regconfig, document json(b), query tsquery , options text ) text display a query match @@ -9656,7 +9656,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rank - ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) + ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) float4 rank document for query @@ -9668,7 +9668,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rank_cd - ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) + ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) float4 rank document for query using cover density @@ -9680,7 +9680,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rewrite - ts_rewrite(query tsquery, target tsquery, substitute tsquery) + ts_rewrite(query tsquery, target tsquery, substitute tsquery) tsquery replace target with substitute @@ -9689,7 +9689,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple 'b' & ( 'foo' | 'bar' ) - ts_rewrite(query tsquery, select text) + ts_rewrite(query tsquery, select text) tsquery replace using targets and substitutes from a SELECT command SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases') @@ -9700,7 +9700,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsquery_phrase - tsquery_phrase(query1 tsquery, query2 tsquery) + tsquery_phrase(query1 tsquery, query2 tsquery) tsquery make query that searches for query1 followed @@ -9711,7 +9711,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - tsquery_phrase(query1 tsquery, query2 tsquery, distance integer) + tsquery_phrase(query1 
tsquery, query2 tsquery, distance integer) tsquery make query that searches for query1 followed by @@ -9761,7 +9761,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple unnest for tsvector - unnest(tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text) + unnest(tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text) setof record expand a tsvector to a set of rows @@ -9807,7 +9807,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_debug - ts_debug( config regconfig, document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[]) + ts_debug( config regconfig, document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[]) setof record test a configuration @@ -9819,7 +9819,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_lexize - ts_lexize(dict regdictionary, token text) + ts_lexize(dict regdictionary, token text) text[] test a dictionary @@ -9831,7 +9831,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_parse - ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) + ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) setof record test a parser @@ -9839,7 +9839,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple (1,foo) ... - ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) + ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) setof record test a parser ts_parse(3722, 'foo - bar') @@ -9850,7 +9850,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_token_type - ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) + ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) setof record get token types defined by parser @@ -9858,7 +9858,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple (1,asciiword,"Word, all ASCII") ... - ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) + ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) setof record get token types defined by parser ts_token_type(3722) @@ -9869,7 +9869,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_stat - ts_stat(sqlquery text, weights text, OUT word text, OUT ndoc integer, OUT nentry integer) + ts_stat(sqlquery text, weights text, OUT word text, OUT ndoc integer, OUT nentry integer) setof record get statistics of a tsvector column diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml index 0588da2912..11a4bf4e41 100644 --- a/doc/src/sgml/queries.sgml +++ b/doc/src/sgml/queries.sgml @@ -1854,7 +1854,7 @@ SELECT select_list that can be used in a query without having to actually create and populate a table on-disk. The syntax is -VALUES ( expression [, ...] ) [, ...] +VALUES ( expression [, ...] ) [, ...] Each parenthesized list of expressions generates a row in the table. 
The lists must all have the same number of elements (i.e., the number diff --git a/doc/src/sgml/ref/alter_database.sgml b/doc/src/sgml/ref/alter_database.sgml index 9ab86127af..59639d3729 100644 --- a/doc/src/sgml/ref/alter_database.sgml +++ b/doc/src/sgml/ref/alter_database.sgml @@ -21,24 +21,24 @@ PostgreSQL documentation -ALTER DATABASE name [ [ WITH ] option [ ... ] ] +ALTER DATABASE name [ [ WITH ] option [ ... ] ] -where option can be: +where option can be: - ALLOW_CONNECTIONS allowconn - CONNECTION LIMIT connlimit - IS_TEMPLATE istemplate + ALLOW_CONNECTIONS allowconn + CONNECTION LIMIT connlimit + IS_TEMPLATE istemplate -ALTER DATABASE name RENAME TO new_name +ALTER DATABASE name RENAME TO new_name -ALTER DATABASE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER DATABASE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } -ALTER DATABASE name SET TABLESPACE new_tablespace +ALTER DATABASE name SET TABLESPACE new_tablespace -ALTER DATABASE name SET configuration_parameter { TO | = } { value | DEFAULT } -ALTER DATABASE name SET configuration_parameter FROM CURRENT -ALTER DATABASE name RESET configuration_parameter -ALTER DATABASE name RESET ALL +ALTER DATABASE name SET configuration_parameter { TO | = } { value | DEFAULT } +ALTER DATABASE name SET configuration_parameter FROM CURRENT +ALTER DATABASE name RESET configuration_parameter +ALTER DATABASE name RESET ALL @@ -102,7 +102,7 @@ ALTER DATABASE name RESET ALL - name + name The name of the database whose attributes are to be altered. diff --git a/doc/src/sgml/ref/alter_default_privileges.sgml b/doc/src/sgml/ref/alter_default_privileges.sgml index e3363f868a..09eabda68a 100644 --- a/doc/src/sgml/ref/alter_default_privileges.sgml +++ b/doc/src/sgml/ref/alter_default_privileges.sgml @@ -31,55 +31,55 @@ ALTER DEFAULT PRIVILEGES GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER } [, ...] | ALL [ PRIVILEGES ] } ON TABLES - TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] + TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] GRANT { { USAGE | SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] } ON SEQUENCES - TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] + TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] GRANT { EXECUTE | ALL [ PRIVILEGES ] } ON FUNCTIONS - TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] + TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | ALL [ PRIVILEGES ] } ON TYPES - TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] + TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | CREATE | ALL [ PRIVILEGES ] } ON SCHEMAS - TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] + TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] REVOKE [ GRANT OPTION FOR ] { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER } [, ...] | ALL [ PRIVILEGES ] } ON TABLES - FROM { [ GROUP ] role_name | PUBLIC } [, ...] + FROM { [ GROUP ] role_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ] REVOKE [ GRANT OPTION FOR ] { { USAGE | SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] } ON SEQUENCES - FROM { [ GROUP ] role_name | PUBLIC } [, ...] + FROM { [ GROUP ] role_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ] REVOKE [ GRANT OPTION FOR ] { EXECUTE | ALL [ PRIVILEGES ] } ON FUNCTIONS - FROM { [ GROUP ] role_name | PUBLIC } [, ...] + FROM { [ GROUP ] role_name | PUBLIC } [, ...] 
[ CASCADE | RESTRICT ] REVOKE [ GRANT OPTION FOR ] { USAGE | ALL [ PRIVILEGES ] } ON TYPES - FROM { [ GROUP ] role_name | PUBLIC } [, ...] + FROM { [ GROUP ] role_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ] REVOKE [ GRANT OPTION FOR ] { USAGE | CREATE | ALL [ PRIVILEGES ] } ON SCHEMAS - FROM { [ GROUP ] role_name | PUBLIC } [, ...] + FROM { [ GROUP ] role_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/alter_domain.sgml b/doc/src/sgml/ref/alter_domain.sgml index 95a822aef6..827a1c7d20 100644 --- a/doc/src/sgml/ref/alter_domain.sgml +++ b/doc/src/sgml/ref/alter_domain.sgml @@ -23,24 +23,24 @@ PostgreSQL documentation -ALTER DOMAIN name - { SET DEFAULT expression | DROP DEFAULT } -ALTER DOMAIN name +ALTER DOMAIN name + { SET DEFAULT expression | DROP DEFAULT } +ALTER DOMAIN name { SET | DROP } NOT NULL -ALTER DOMAIN name - ADD domain_constraint [ NOT VALID ] -ALTER DOMAIN name - DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ] -ALTER DOMAIN name - RENAME CONSTRAINT constraint_name TO new_constraint_name -ALTER DOMAIN name - VALIDATE CONSTRAINT constraint_name -ALTER DOMAIN name - OWNER TO { new_owner | CURRENT_USER | SESSION_USER } -ALTER DOMAIN name - RENAME TO new_name -ALTER DOMAIN name - SET SCHEMA new_schema +ALTER DOMAIN name + ADD domain_constraint [ NOT VALID ] +ALTER DOMAIN name + DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ] +ALTER DOMAIN name + RENAME CONSTRAINT constraint_name TO new_constraint_name +ALTER DOMAIN name + VALIDATE CONSTRAINT constraint_name +ALTER DOMAIN name + OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER DOMAIN name + RENAME TO new_name +ALTER DOMAIN name + SET SCHEMA new_schema @@ -76,7 +76,7 @@ ALTER DOMAIN name - ADD domain_constraint [ NOT VALID ] + ADD domain_constraint [ NOT VALID ] This form adds a new constraint to a domain using the same syntax as @@ -171,7 +171,7 @@ ALTER DOMAIN name - name + name The name (possibly schema-qualified) of an existing domain to @@ -181,7 +181,7 @@ ALTER DOMAIN name - domain_constraint + domain_constraint New domain constraint for the domain. @@ -190,7 +190,7 @@ ALTER DOMAIN name - constraint_name + constraint_name Name of an existing constraint to drop or rename. @@ -199,7 +199,7 @@ ALTER DOMAIN name - NOT VALID + NOT VALID Do not verify existing column data for constraint validity. @@ -230,7 +230,7 @@ ALTER DOMAIN name - new_name + new_name The new name for the domain. @@ -239,7 +239,7 @@ ALTER DOMAIN name - new_constraint_name + new_constraint_name The new name for the constraint. @@ -248,7 +248,7 @@ ALTER DOMAIN name - new_owner + new_owner The user name of the new owner of the domain. @@ -257,7 +257,7 @@ ALTER DOMAIN name - new_schema + new_schema The new schema for the domain. 
diff --git a/doc/src/sgml/ref/alter_event_trigger.sgml b/doc/src/sgml/ref/alter_event_trigger.sgml index 9d6c64ad52..38b971fb08 100644 --- a/doc/src/sgml/ref/alter_event_trigger.sgml +++ b/doc/src/sgml/ref/alter_event_trigger.sgml @@ -21,10 +21,10 @@ PostgreSQL documentation -ALTER EVENT TRIGGER name DISABLE -ALTER EVENT TRIGGER name ENABLE [ REPLICA | ALWAYS ] -ALTER EVENT TRIGGER name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } -ALTER EVENT TRIGGER name RENAME TO new_name +ALTER EVENT TRIGGER name DISABLE +ALTER EVENT TRIGGER name ENABLE [ REPLICA | ALWAYS ] +ALTER EVENT TRIGGER name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER EVENT TRIGGER name RENAME TO new_name @@ -46,7 +46,7 @@ ALTER EVENT TRIGGER name RENAME TO - name + name The name of an existing trigger to alter. @@ -55,7 +55,7 @@ ALTER EVENT TRIGGER name RENAME TO - new_owner + new_owner The user name of the new owner of the event trigger. @@ -64,7 +64,7 @@ ALTER EVENT TRIGGER name RENAME TO - new_name + new_name The new name of the event trigger. diff --git a/doc/src/sgml/ref/alter_extension.sgml b/doc/src/sgml/ref/alter_extension.sgml index a7c0927d1c..ae84e98e49 100644 --- a/doc/src/sgml/ref/alter_extension.sgml +++ b/doc/src/sgml/ref/alter_extension.sgml @@ -23,39 +23,39 @@ PostgreSQL documentation -ALTER EXTENSION name UPDATE [ TO new_version ] -ALTER EXTENSION name SET SCHEMA new_schema -ALTER EXTENSION name ADD member_object -ALTER EXTENSION name DROP member_object +ALTER EXTENSION name UPDATE [ TO new_version ] +ALTER EXTENSION name SET SCHEMA new_schema +ALTER EXTENSION name ADD member_object +ALTER EXTENSION name DROP member_object -where member_object is: +where member_object is: - ACCESS METHOD object_name | - AGGREGATE aggregate_name ( aggregate_signature ) | + ACCESS METHOD object_name | + AGGREGATE aggregate_name ( aggregate_signature ) | CAST (source_type AS target_type) | - COLLATION object_name | - CONVERSION object_name | - DOMAIN object_name | - EVENT TRIGGER object_name | - FOREIGN DATA WRAPPER object_name | - FOREIGN TABLE object_name | - FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] | - MATERIALIZED VIEW object_name | - OPERATOR operator_name (left_type, right_type) | - OPERATOR CLASS object_name USING index_method | - OPERATOR FAMILY object_name USING index_method | - [ PROCEDURAL ] LANGUAGE object_name | - SCHEMA object_name | - SEQUENCE object_name | - SERVER object_name | - TABLE object_name | - TEXT SEARCH CONFIGURATION object_name | - TEXT SEARCH DICTIONARY object_name | - TEXT SEARCH PARSER object_name | - TEXT SEARCH TEMPLATE object_name | + COLLATION object_name | + CONVERSION object_name | + DOMAIN object_name | + EVENT TRIGGER object_name | + FOREIGN DATA WRAPPER object_name | + FOREIGN TABLE object_name | + FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [, ...] 
] ) ] | + MATERIALIZED VIEW object_name | + OPERATOR operator_name (left_type, right_type) | + OPERATOR CLASS object_name USING index_method | + OPERATOR FAMILY object_name USING index_method | + [ PROCEDURAL ] LANGUAGE object_name | + SCHEMA object_name | + SEQUENCE object_name | + SERVER object_name | + TABLE object_name | + TEXT SEARCH CONFIGURATION object_name | + TEXT SEARCH DICTIONARY object_name | + TEXT SEARCH PARSER object_name | + TEXT SEARCH TEMPLATE object_name | TRANSFORM FOR type_name LANGUAGE lang_name | - TYPE object_name | - VIEW object_name + TYPE object_name | + VIEW object_name and aggregate_signature is: @@ -96,7 +96,7 @@ ALTER EXTENSION name DROP - ADD member_object + ADD member_object This form adds an existing object to the extension. This is mainly @@ -108,7 +108,7 @@ ALTER EXTENSION name DROP - DROP member_object + DROP member_object This form removes a member object from the extension. This is mainly @@ -136,7 +136,7 @@ ALTER EXTENSION name DROP - name + name The name of an installed extension. @@ -145,7 +145,7 @@ ALTER EXTENSION name DROP - new_version + new_version The desired new version of the extension. This can be written as @@ -157,7 +157,7 @@ ALTER EXTENSION name DROP - new_schema + new_schema The new schema for the extension. diff --git a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml index 3f5fb0f77e..9c5b84fe64 100644 --- a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml @@ -24,7 +24,7 @@ PostgreSQL documentation ALTER FOREIGN DATA WRAPPER name [ HANDLER handler_function | NO HANDLER ] [ VALIDATOR validator_function | NO VALIDATOR ] - [ OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ]) ] + [ OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ]) ] ALTER FOREIGN DATA WRAPPER name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } ALTER FOREIGN DATA WRAPPER name RENAME TO new_name @@ -113,7 +113,7 @@ ALTER FOREIGN DATA WRAPPER name REN - OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) + OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) Change options for the foreign-data @@ -127,7 +127,7 @@ ALTER FOREIGN DATA WRAPPER name REN - new_owner + new_owner The user name of the new owner of the foreign-data wrapper. diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml index b1692842b2..cb4e7044fb 100644 --- a/doc/src/sgml/ref/alter_foreign_table.sgml +++ b/doc/src/sgml/ref/alter_foreign_table.sgml @@ -21,41 +21,41 @@ PostgreSQL documentation -ALTER FOREIGN TABLE [ IF EXISTS ] [ ONLY ] name [ * ] - action [, ... ] -ALTER FOREIGN TABLE [ IF EXISTS ] [ ONLY ] name [ * ] - RENAME [ COLUMN ] column_name TO new_column_name -ALTER FOREIGN TABLE [ IF EXISTS ] name - RENAME TO new_name -ALTER FOREIGN TABLE [ IF EXISTS ] name - SET SCHEMA new_schema - -where action is one of: - - ADD [ COLUMN ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ] - DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ] - ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] - ALTER [ COLUMN ] column_name SET DEFAULT expression - ALTER [ COLUMN ] column_name DROP DEFAULT - ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL - ALTER [ COLUMN ] column_name SET STATISTICS integer - ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] ) - ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... 
] ) - ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN } - ALTER [ COLUMN ] column_name OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ]) - ADD table_constraint [ NOT VALID ] - VALIDATE CONSTRAINT constraint_name - DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ] - DISABLE TRIGGER [ trigger_name | ALL | USER ] - ENABLE TRIGGER [ trigger_name | ALL | USER ] - ENABLE REPLICA TRIGGER trigger_name - ENABLE ALWAYS TRIGGER trigger_name +ALTER FOREIGN TABLE [ IF EXISTS ] [ ONLY ] name [ * ] + action [, ... ] +ALTER FOREIGN TABLE [ IF EXISTS ] [ ONLY ] name [ * ] + RENAME [ COLUMN ] column_name TO new_column_name +ALTER FOREIGN TABLE [ IF EXISTS ] name + RENAME TO new_name +ALTER FOREIGN TABLE [ IF EXISTS ] name + SET SCHEMA new_schema + +where action is one of: + + ADD [ COLUMN ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ] + DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ] + ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] + ALTER [ COLUMN ] column_name SET DEFAULT expression + ALTER [ COLUMN ] column_name DROP DEFAULT + ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL + ALTER [ COLUMN ] column_name SET STATISTICS integer + ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] ) + ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] ) + ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN } + ALTER [ COLUMN ] column_name OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ]) + ADD table_constraint [ NOT VALID ] + VALIDATE CONSTRAINT constraint_name + DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ] + DISABLE TRIGGER [ trigger_name | ALL | USER ] + ENABLE TRIGGER [ trigger_name | ALL | USER ] + ENABLE REPLICA TRIGGER trigger_name + ENABLE ALWAYS TRIGGER trigger_name SET WITH OIDS SET WITHOUT OIDS - INHERIT parent_table - NO INHERIT parent_table - OWNER TO { new_owner | CURRENT_USER | SESSION_USER } - OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ]) + INHERIT parent_table + NO INHERIT parent_table + OWNER TO { new_owner | CURRENT_USER | SESSION_USER } + OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ]) @@ -142,8 +142,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - SET ( attribute_option = value [, ... ] ) - RESET ( attribute_option [, ... ] ) + SET ( attribute_option = value [, ... ] ) + RESET ( attribute_option [, ... ] ) This form sets or resets per-attribute options. @@ -169,7 +169,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - ADD table_constraint [ NOT VALID ] + ADD table_constraint [ NOT VALID ] This form adds a new constraint to a foreign table, using the same @@ -256,7 +256,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - INHERIT parent_table + INHERIT parent_table This form adds the target foreign table as a new child of the specified @@ -268,7 +268,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - NO INHERIT parent_table + NO INHERIT parent_table This form removes the target foreign table from the list of children of @@ -288,7 +288,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) + OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) Change options for the foreign table or one of its columns. 
@@ -358,7 +358,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - name + name The name (possibly schema-qualified) of an existing foreign table to @@ -372,7 +372,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - column_name + column_name Name of a new or existing column. @@ -381,7 +381,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - new_column_name + new_column_name New name for an existing column. @@ -390,7 +390,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - new_name + new_name New name for the table. @@ -399,7 +399,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - data_type + data_type Data type of the new column, or new data type for an existing @@ -409,7 +409,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - table_constraint + table_constraint New table constraint for the foreign table. @@ -418,7 +418,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - constraint_name + constraint_name Name of an existing constraint to drop. @@ -449,7 +449,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - trigger_name + trigger_name Name of a single trigger to disable or enable. @@ -480,7 +480,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - parent_table + parent_table A parent table to associate or de-associate with this foreign table. @@ -489,7 +489,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - new_owner + new_owner The user name of the new owner of the table. @@ -498,7 +498,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - new_schema + new_schema The name of the schema to which the table will be moved. diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml index 168eeb7c52..8d9fec6005 100644 --- a/doc/src/sgml/ref/alter_function.sgml +++ b/doc/src/sgml/ref/alter_function.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] - action [ ... ] [ RESTRICT ] + action [ ... ] [ RESTRICT ] ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] RENAME TO new_name ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] @@ -32,7 +32,7 @@ ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] DEPENDS ON EXTENSION extension_name -where action is one of: +where action is one of: CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT IMMUTABLE | STABLE | VOLATILE | [ NOT ] LEAKPROOF diff --git a/doc/src/sgml/ref/alter_group.sgml b/doc/src/sgml/ref/alter_group.sgml index adf6f7e932..f7136c773a 100644 --- a/doc/src/sgml/ref/alter_group.sgml +++ b/doc/src/sgml/ref/alter_group.sgml @@ -21,16 +21,16 @@ PostgreSQL documentation -ALTER GROUP role_specification ADD USER user_name [, ... ] -ALTER GROUP role_specification DROP USER user_name [, ... ] +ALTER GROUP role_specification ADD USER user_name [, ... ] +ALTER GROUP role_specification DROP USER user_name [, ... ] -where role_specification can be: +where role_specification can be: - role_name + role_name | CURRENT_USER | SESSION_USER -ALTER GROUP group_name RENAME TO new_name +ALTER GROUP group_name RENAME TO new_name @@ -66,7 +66,7 @@ ALTER GROUP group_name RENAME TO - group_name + group_name The name of the group (role) to modify. @@ -75,7 +75,7 @@ ALTER GROUP group_name RENAME TO - user_name + user_name Users (roles) that are to be added to or removed from the group. 
diff --git a/doc/src/sgml/ref/alter_index.sgml b/doc/src/sgml/ref/alter_index.sgml index 7d6553d2db..4c777f8675 100644 --- a/doc/src/sgml/ref/alter_index.sgml +++ b/doc/src/sgml/ref/alter_index.sgml @@ -21,15 +21,15 @@ PostgreSQL documentation -ALTER INDEX [ IF EXISTS ] name RENAME TO new_name -ALTER INDEX [ IF EXISTS ] name SET TABLESPACE tablespace_name -ALTER INDEX name DEPENDS ON EXTENSION extension_name -ALTER INDEX [ IF EXISTS ] name SET ( storage_parameter = value [, ... ] ) -ALTER INDEX [ IF EXISTS ] name RESET ( storage_parameter [, ... ] ) -ALTER INDEX [ IF EXISTS ] name ALTER [ COLUMN ] column_number - SET STATISTICS integer -ALTER INDEX ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ] - SET TABLESPACE new_tablespace [ NOWAIT ] +ALTER INDEX [ IF EXISTS ] name RENAME TO new_name +ALTER INDEX [ IF EXISTS ] name SET TABLESPACE tablespace_name +ALTER INDEX name DEPENDS ON EXTENSION extension_name +ALTER INDEX [ IF EXISTS ] name SET ( storage_parameter = value [, ... ] ) +ALTER INDEX [ IF EXISTS ] name RESET ( storage_parameter [, ... ] ) +ALTER INDEX [ IF EXISTS ] name ALTER [ COLUMN ] column_number + SET STATISTICS integer +ALTER INDEX ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ] + SET TABLESPACE new_tablespace [ NOWAIT ] @@ -86,7 +86,7 @@ ALTER INDEX ALL IN TABLESPACE name - SET ( storage_parameter = value [, ... ] ) + SET ( storage_parameter = value [, ... ] ) This form changes one or more index-method-specific storage parameters @@ -102,7 +102,7 @@ ALTER INDEX ALL IN TABLESPACE name - RESET ( storage_parameter [, ... ] ) + RESET ( storage_parameter [, ... ] ) This form resets one or more index-method-specific storage parameters to @@ -113,7 +113,7 @@ ALTER INDEX ALL IN TABLESPACE name - ALTER [ COLUMN ] column_number SET STATISTICS integer + ALTER [ COLUMN ] column_number SET STATISTICS integer This form sets the per-column statistics-gathering target for @@ -152,7 +152,7 @@ ALTER INDEX ALL IN TABLESPACE name - column_number + column_number The ordinal number refers to the ordinal (left-to-right) position @@ -162,7 +162,7 @@ ALTER INDEX ALL IN TABLESPACE name - name + name The name (possibly schema-qualified) of an existing index to @@ -172,7 +172,7 @@ ALTER INDEX ALL IN TABLESPACE name - new_name + new_name The new name for the index. @@ -181,7 +181,7 @@ ALTER INDEX ALL IN TABLESPACE name - tablespace_name + tablespace_name The tablespace to which the index will be moved. @@ -190,7 +190,7 @@ ALTER INDEX ALL IN TABLESPACE name - extension_name + extension_name The name of the extension that the index is to depend on. @@ -199,7 +199,7 @@ ALTER INDEX ALL IN TABLESPACE name - storage_parameter + storage_parameter The name of an index-method-specific storage parameter. @@ -208,7 +208,7 @@ ALTER INDEX ALL IN TABLESPACE name - value + value The new value for an index-method-specific storage parameter. 
diff --git a/doc/src/sgml/ref/alter_large_object.sgml b/doc/src/sgml/ref/alter_large_object.sgml index 5748d52db1..5e680f7720 100644 --- a/doc/src/sgml/ref/alter_large_object.sgml +++ b/doc/src/sgml/ref/alter_large_object.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -ALTER LARGE OBJECT large_object_oid OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER LARGE OBJECT large_object_oid OWNER TO { new_owner | CURRENT_USER | SESSION_USER } diff --git a/doc/src/sgml/ref/alter_materialized_view.sgml b/doc/src/sgml/ref/alter_materialized_view.sgml index b88f5ac00f..a1cced1581 100644 --- a/doc/src/sgml/ref/alter_materialized_view.sgml +++ b/doc/src/sgml/ref/alter_materialized_view.sgml @@ -21,30 +21,30 @@ PostgreSQL documentation -ALTER MATERIALIZED VIEW [ IF EXISTS ] name - action [, ... ] -ALTER MATERIALIZED VIEW name - DEPENDS ON EXTENSION extension_name -ALTER MATERIALIZED VIEW [ IF EXISTS ] name - RENAME [ COLUMN ] column_name TO new_column_name +ALTER MATERIALIZED VIEW [ IF EXISTS ] name + action [, ... ] +ALTER MATERIALIZED VIEW name + DEPENDS ON EXTENSION extension_name +ALTER MATERIALIZED VIEW [ IF EXISTS ] name + RENAME [ COLUMN ] column_name TO new_column_name ALTER MATERIALIZED VIEW [ IF EXISTS ] name RENAME TO new_name ALTER MATERIALIZED VIEW [ IF EXISTS ] name SET SCHEMA new_schema -ALTER MATERIALIZED VIEW ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ] - SET TABLESPACE new_tablespace [ NOWAIT ] +ALTER MATERIALIZED VIEW ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ] + SET TABLESPACE new_tablespace [ NOWAIT ] -where action is one of: +where action is one of: - ALTER [ COLUMN ] column_name SET STATISTICS integer - ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] ) - ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] ) - ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN } - CLUSTER ON index_name + ALTER [ COLUMN ] column_name SET STATISTICS integer + ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] ) + ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] ) + ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN } + CLUSTER ON index_name SET WITHOUT CLUSTER - SET ( storage_parameter = value [, ... ] ) - RESET ( storage_parameter [, ... ] ) - OWNER TO { new_owner | CURRENT_USER | SESSION_USER } + SET ( storage_parameter = value [, ... ] ) + RESET ( storage_parameter [, ... ] ) + OWNER TO { new_owner | CURRENT_USER | SESSION_USER } @@ -98,7 +98,7 @@ ALTER MATERIALIZED VIEW ALL IN TABLESPACE name - column_name + column_name Name of a new or existing column. @@ -107,7 +107,7 @@ ALTER MATERIALIZED VIEW ALL IN TABLESPACE name - extension_name + extension_name The name of the extension that the materialized view is to depend on. @@ -116,7 +116,7 @@ ALTER MATERIALIZED VIEW ALL IN TABLESPACE name - new_column_name + new_column_name New name for an existing column. @@ -125,7 +125,7 @@ ALTER MATERIALIZED VIEW ALL IN TABLESPACE name - new_owner + new_owner The user name of the new owner of the materialized view. diff --git a/doc/src/sgml/ref/alter_policy.sgml b/doc/src/sgml/ref/alter_policy.sgml index df347d180e..3eb9155602 100644 --- a/doc/src/sgml/ref/alter_policy.sgml +++ b/doc/src/sgml/ref/alter_policy.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -ALTER POLICY name ON table_name RENAME TO new_name +ALTER POLICY name ON table_name RENAME TO new_name ALTER POLICY name ON table_name [ TO { role_name | PUBLIC | CURRENT_USER | SESSION_USER } [, ...] 
] diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml index f064ec5f32..7392e6f3de 100644 --- a/doc/src/sgml/ref/alter_publication.sgml +++ b/doc/src/sgml/ref/alter_publication.sgml @@ -21,12 +21,12 @@ PostgreSQL documentation -ALTER PUBLICATION name ADD TABLE [ ONLY ] table_name [ * ] [, ...] -ALTER PUBLICATION name SET TABLE [ ONLY ] table_name [ * ] [, ...] -ALTER PUBLICATION name DROP TABLE [ ONLY ] table_name [ * ] [, ...] -ALTER PUBLICATION name SET ( publication_parameter [= value] [, ... ] ) -ALTER PUBLICATION name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } -ALTER PUBLICATION name RENAME TO new_name +ALTER PUBLICATION name ADD TABLE [ ONLY ] table_name [ * ] [, ...] +ALTER PUBLICATION name SET TABLE [ ONLY ] table_name [ * ] [, ...] +ALTER PUBLICATION name DROP TABLE [ ONLY ] table_name [ * ] [, ...] +ALTER PUBLICATION name SET ( publication_parameter [= value] [, ... ] ) +ALTER PUBLICATION name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER PUBLICATION name RENAME TO new_name diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml index ccdd5c107c..607b25962f 100644 --- a/doc/src/sgml/ref/alter_role.sgml +++ b/doc/src/sgml/ref/alter_role.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -ALTER ROLE role_specification [ WITH ] option [ ... ] +ALTER ROLE role_specification [ WITH ] option [ ... ] -where option can be: +where option can be: SUPERUSER | NOSUPERUSER | CREATEDB | NOCREATEDB @@ -32,20 +32,20 @@ ALTER ROLE role_specification [ WIT | LOGIN | NOLOGIN | REPLICATION | NOREPLICATION | BYPASSRLS | NOBYPASSRLS - | CONNECTION LIMIT connlimit - | [ ENCRYPTED ] PASSWORD 'password' - | VALID UNTIL 'timestamp' + | CONNECTION LIMIT connlimit + | [ ENCRYPTED ] PASSWORD 'password' + | VALID UNTIL 'timestamp' -ALTER ROLE name RENAME TO new_name +ALTER ROLE name RENAME TO new_name -ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter { TO | = } { value | DEFAULT } -ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter FROM CURRENT -ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] RESET configuration_parameter -ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] RESET ALL +ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter { TO | = } { value | DEFAULT } +ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter FROM CURRENT +ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] RESET configuration_parameter +ALTER ROLE { role_specification | ALL } [ IN DATABASE database_name ] RESET ALL -where role_specification can be: +where role_specification can be: - role_name + role_name | CURRENT_USER | SESSION_USER @@ -125,7 +125,7 @@ ALTER ROLE { role_specification | A - name + name The name of the role whose attributes are to be altered. diff --git a/doc/src/sgml/ref/alter_rule.sgml b/doc/src/sgml/ref/alter_rule.sgml index 993a0ceb83..26791b379b 100644 --- a/doc/src/sgml/ref/alter_rule.sgml +++ b/doc/src/sgml/ref/alter_rule.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -ALTER RULE name ON table_name RENAME TO new_name +ALTER RULE name ON table_name RENAME TO new_name @@ -44,7 +44,7 @@ ALTER RULE name ON - name + name The name of an existing rule to alter. 
@@ -53,7 +53,7 @@ ALTER RULE name ON - table_name + table_name The name (optionally schema-qualified) of the table or view that the @@ -63,7 +63,7 @@ ALTER RULE name ON - new_name + new_name The new name for the rule. diff --git a/doc/src/sgml/ref/alter_sequence.sgml b/doc/src/sgml/ref/alter_sequence.sgml index 190c8d6485..c505935fcc 100644 --- a/doc/src/sgml/ref/alter_sequence.sgml +++ b/doc/src/sgml/ref/alter_sequence.sgml @@ -31,7 +31,7 @@ ALTER SEQUENCE [ IF EXISTS ] name [ RESTART [ [ WITH ] restart ] ] [ CACHE cache ] [ [ NO ] CYCLE ] [ OWNED BY { table_name.column_name | NONE } ] -ALTER SEQUENCE [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER SEQUENCE [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } ALTER SEQUENCE [ IF EXISTS ] name RENAME TO new_name ALTER SEQUENCE [ IF EXISTS ] name SET SCHEMA new_schema @@ -256,7 +256,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S - new_owner + new_owner The user name of the new owner of the sequence. diff --git a/doc/src/sgml/ref/alter_server.sgml b/doc/src/sgml/ref/alter_server.sgml index e6cf511853..7f5def30a4 100644 --- a/doc/src/sgml/ref/alter_server.sgml +++ b/doc/src/sgml/ref/alter_server.sgml @@ -22,9 +22,9 @@ PostgreSQL documentation ALTER SERVER name [ VERSION 'new_version' ] - [ OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) ] -ALTER SERVER name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } -ALTER SERVER name RENAME TO new_name + [ OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) ] +ALTER SERVER name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER SERVER name RENAME TO new_name @@ -71,7 +71,7 @@ ALTER SERVER name RENAME TO - OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) + OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) Change options for the @@ -85,7 +85,7 @@ ALTER SERVER name RENAME TO - new_owner + new_owner The user name of the new owner of the foreign server. diff --git a/doc/src/sgml/ref/alter_statistics.sgml b/doc/src/sgml/ref/alter_statistics.sgml index 4f25669852..db4f2f0d52 100644 --- a/doc/src/sgml/ref/alter_statistics.sgml +++ b/doc/src/sgml/ref/alter_statistics.sgml @@ -23,7 +23,7 @@ PostgreSQL documentation -ALTER STATISTICS name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER STATISTICS name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } ALTER STATISTICS name RENAME TO new_name ALTER STATISTICS name SET SCHEMA new_schema @@ -67,7 +67,7 @@ ALTER STATISTICS name SET SCHEMA - new_owner + new_owner The user name of the new owner of the statistics object. diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml index b1b7765d76..44c0b35069 100644 --- a/doc/src/sgml/ref/alter_subscription.sgml +++ b/doc/src/sgml/ref/alter_subscription.sgml @@ -21,14 +21,14 @@ PostgreSQL documentation -ALTER SUBSCRIPTION name CONNECTION 'conninfo' -ALTER SUBSCRIPTION name SET PUBLICATION publication_name [, ...] [ WITH ( set_publication_option [= value] [, ... ] ) ] -ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option [= value] [, ... ] ) ] -ALTER SUBSCRIPTION name ENABLE -ALTER SUBSCRIPTION name DISABLE -ALTER SUBSCRIPTION name SET ( subscription_parameter [= value] [, ... ] ) -ALTER SUBSCRIPTION name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } -ALTER SUBSCRIPTION name RENAME TO new_name +ALTER SUBSCRIPTION name CONNECTION 'conninfo' +ALTER SUBSCRIPTION name SET PUBLICATION publication_name [, ...] [ WITH ( set_publication_option [= value] [, ... 
] ) ] +ALTER SUBSCRIPTION name REFRESH PUBLICATION [ WITH ( refresh_option [= value] [, ... ] ) ] +ALTER SUBSCRIPTION name ENABLE +ALTER SUBSCRIPTION name DISABLE +ALTER SUBSCRIPTION name SET ( subscription_parameter [= value] [, ... ] ) +ALTER SUBSCRIPTION name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER SUBSCRIPTION name RENAME TO new_name diff --git a/doc/src/sgml/ref/alter_system.sgml b/doc/src/sgml/ref/alter_system.sgml index b234793f3e..e3a4af4041 100644 --- a/doc/src/sgml/ref/alter_system.sgml +++ b/doc/src/sgml/ref/alter_system.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -ALTER SYSTEM SET configuration_parameter { TO | = } { value | 'value' | DEFAULT } +ALTER SYSTEM SET configuration_parameter { TO | = } { value | 'value' | DEFAULT } -ALTER SYSTEM RESET configuration_parameter +ALTER SYSTEM RESET configuration_parameter ALTER SYSTEM RESET ALL diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 0fb385ece7..0559f80549 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -21,74 +21,74 @@ PostgreSQL documentation -ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ] - action [, ... ] -ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ] - RENAME [ COLUMN ] column_name TO new_column_name -ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ] - RENAME CONSTRAINT constraint_name TO new_constraint_name -ALTER TABLE [ IF EXISTS ] name - RENAME TO new_name -ALTER TABLE [ IF EXISTS ] name - SET SCHEMA new_schema -ALTER TABLE ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ] - SET TABLESPACE new_tablespace [ NOWAIT ] -ALTER TABLE [ IF EXISTS ] name - ATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT } -ALTER TABLE [ IF EXISTS ] name - DETACH PARTITION partition_name - -where action is one of: - - ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ] - DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ] - ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ] - ALTER [ COLUMN ] column_name SET DEFAULT expression - ALTER [ COLUMN ] column_name DROP DEFAULT - ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL - ALTER [ COLUMN ] column_name ADD GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] - ALTER [ COLUMN ] column_name { SET GENERATED { ALWAYS | BY DEFAULT } | SET sequence_option | RESTART [ [ WITH ] restart ] } [...] - ALTER [ COLUMN ] column_name DROP IDENTITY [ IF EXISTS ] - ALTER [ COLUMN ] column_name SET STATISTICS integer - ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] ) - ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] ) - ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN } - ADD table_constraint [ NOT VALID ] - ADD table_constraint_using_index - ALTER CONSTRAINT constraint_name [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] - VALIDATE CONSTRAINT constraint_name - DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ] - DISABLE TRIGGER [ trigger_name | ALL | USER ] - ENABLE TRIGGER [ trigger_name | ALL | USER ] - ENABLE REPLICA TRIGGER trigger_name - ENABLE ALWAYS TRIGGER trigger_name - DISABLE RULE rewrite_rule_name - ENABLE RULE rewrite_rule_name - ENABLE REPLICA RULE rewrite_rule_name - ENABLE ALWAYS RULE rewrite_rule_name +ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ] + action [, ... 
] +ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ] + RENAME [ COLUMN ] column_name TO new_column_name +ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ] + RENAME CONSTRAINT constraint_name TO new_constraint_name +ALTER TABLE [ IF EXISTS ] name + RENAME TO new_name +ALTER TABLE [ IF EXISTS ] name + SET SCHEMA new_schema +ALTER TABLE ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ] + SET TABLESPACE new_tablespace [ NOWAIT ] +ALTER TABLE [ IF EXISTS ] name + ATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT } +ALTER TABLE [ IF EXISTS ] name + DETACH PARTITION partition_name + +where action is one of: + + ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ] + DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ] + ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ] + ALTER [ COLUMN ] column_name SET DEFAULT expression + ALTER [ COLUMN ] column_name DROP DEFAULT + ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL + ALTER [ COLUMN ] column_name ADD GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] + ALTER [ COLUMN ] column_name { SET GENERATED { ALWAYS | BY DEFAULT } | SET sequence_option | RESTART [ [ WITH ] restart ] } [...] + ALTER [ COLUMN ] column_name DROP IDENTITY [ IF EXISTS ] + ALTER [ COLUMN ] column_name SET STATISTICS integer + ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] ) + ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] ) + ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN } + ADD table_constraint [ NOT VALID ] + ADD table_constraint_using_index + ALTER CONSTRAINT constraint_name [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] + VALIDATE CONSTRAINT constraint_name + DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ] + DISABLE TRIGGER [ trigger_name | ALL | USER ] + ENABLE TRIGGER [ trigger_name | ALL | USER ] + ENABLE REPLICA TRIGGER trigger_name + ENABLE ALWAYS TRIGGER trigger_name + DISABLE RULE rewrite_rule_name + ENABLE RULE rewrite_rule_name + ENABLE REPLICA RULE rewrite_rule_name + ENABLE ALWAYS RULE rewrite_rule_name DISABLE ROW LEVEL SECURITY ENABLE ROW LEVEL SECURITY FORCE ROW LEVEL SECURITY NO FORCE ROW LEVEL SECURITY - CLUSTER ON index_name + CLUSTER ON index_name SET WITHOUT CLUSTER SET WITH OIDS SET WITHOUT OIDS - SET TABLESPACE new_tablespace + SET TABLESPACE new_tablespace SET { LOGGED | UNLOGGED } - SET ( storage_parameter = value [, ... ] ) - RESET ( storage_parameter [, ... ] ) - INHERIT parent_table - NO INHERIT parent_table - OF type_name + SET ( storage_parameter = value [, ... ] ) + RESET ( storage_parameter [, ... ] ) + INHERIT parent_table + NO INHERIT parent_table + OF type_name NOT OF - OWNER TO { new_owner | CURRENT_USER | SESSION_USER } - REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING } + OWNER TO { new_owner | CURRENT_USER | SESSION_USER } + REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING } -and table_constraint_using_index is: +and table_constraint_using_index is: - [ CONSTRAINT constraint_name ] - { UNIQUE | PRIMARY KEY } USING INDEX index_name + [ CONSTRAINT constraint_name ] + { UNIQUE | PRIMARY KEY } USING INDEX index_name [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] @@ -244,8 +244,8 @@ ALTER TABLE [ IF EXISTS ] name - SET ( attribute_option = value [, ... ] ) - RESET ( attribute_option [, ... 
] ) + SET ( attribute_option = value [, ... ] ) + RESET ( attribute_option [, ... ] ) This form sets or resets per-attribute options. Currently, the only @@ -310,7 +310,7 @@ ALTER TABLE [ IF EXISTS ] name - ADD table_constraint [ NOT VALID ] + ADD table_constraint [ NOT VALID ] This form adds a new constraint to a table using the same syntax as @@ -332,7 +332,7 @@ ALTER TABLE [ IF EXISTS ] name - ADD table_constraint_using_index + ADD table_constraint_using_index This form adds a new PRIMARY KEY or UNIQUE @@ -599,7 +599,7 @@ ALTER TABLE [ IF EXISTS ] name - SET ( storage_parameter = value [, ... ] ) + SET ( storage_parameter = value [, ... ] ) This form changes one or more storage parameters for the table. See @@ -628,7 +628,7 @@ ALTER TABLE [ IF EXISTS ] name While CREATE TABLE allows OIDS to be specified in the WITH (storage_parameter) syntax, + class="parameter">storage_parameter) syntax, ALTER TABLE does not treat OIDS as a storage parameter. Instead use the SET WITH OIDS and SET WITHOUT OIDS forms to change OID status. @@ -638,7 +638,7 @@ ALTER TABLE [ IF EXISTS ] name - RESET ( storage_parameter [, ... ] ) + RESET ( storage_parameter [, ... ] ) This form resets one or more storage parameters to their @@ -649,7 +649,7 @@ ALTER TABLE [ IF EXISTS ] name - INHERIT parent_table + INHERIT parent_table This form adds the target table as a new child of the specified parent @@ -677,7 +677,7 @@ ALTER TABLE [ IF EXISTS ] name - NO INHERIT parent_table + NO INHERIT parent_table This form removes the target table from the list of children of the @@ -689,7 +689,7 @@ ALTER TABLE [ IF EXISTS ] name - OF type_name + OF type_name This form links the table to a composite type as though CREATE @@ -765,7 +765,7 @@ ALTER TABLE [ IF EXISTS ] name - ATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT } + ATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT } This form attaches an existing table (which might itself be partitioned) @@ -777,7 +777,7 @@ ALTER TABLE [ IF EXISTS ] name A partition using FOR VALUES uses same syntax for - partition_bound_spec as + partition_bound_spec as . The partition bound specification must correspond to the partitioning strategy and partition key of the target table. The table to be attached must have all the same columns @@ -828,7 +828,7 @@ ALTER TABLE [ IF EXISTS ] name - DETACH PARTITION partition_name + DETACH PARTITION partition_name This form detaches specified partition of the target table. The detached @@ -886,7 +886,7 @@ ALTER TABLE [ IF EXISTS ] name - name + name The name (optionally schema-qualified) of an existing table to @@ -900,7 +900,7 @@ ALTER TABLE [ IF EXISTS ] name - column_name + column_name Name of a new or existing column. @@ -909,7 +909,7 @@ ALTER TABLE [ IF EXISTS ] name - new_column_name + new_column_name New name for an existing column. @@ -918,7 +918,7 @@ ALTER TABLE [ IF EXISTS ] name - new_name + new_name New name for the table. @@ -927,7 +927,7 @@ ALTER TABLE [ IF EXISTS ] name - data_type + data_type Data type of the new column, or new data type for an existing @@ -937,7 +937,7 @@ ALTER TABLE [ IF EXISTS ] name - table_constraint + table_constraint New table constraint for the table. @@ -946,7 +946,7 @@ ALTER TABLE [ IF EXISTS ] name - constraint_name + constraint_name Name of a new or existing constraint. @@ -977,7 +977,7 @@ ALTER TABLE [ IF EXISTS ] name - trigger_name + trigger_name Name of a single trigger to disable or enable. 
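A few hypothetical statements exercising the ALTER TABLE forms documented above (table, partition, and trigger names invented for illustration; not part of this patch):

    -- Attach and detach a partition, using a range partition bound:
    ALTER TABLE measurement
        ATTACH PARTITION measurement_y2017
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
    ALTER TABLE measurement DETACH PARTITION measurement_y2017;
    -- Set and reset a storage parameter:
    ALTER TABLE orders SET (fillfactor = 70);
    ALTER TABLE orders RESET (fillfactor);
    -- Disable and re-enable a trigger:
    ALTER TABLE orders DISABLE TRIGGER audit_trg;
    ALTER TABLE orders ENABLE TRIGGER audit_trg;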
@@ -1011,7 +1011,7 @@ ALTER TABLE [ IF EXISTS ] name - index_name + index_name The name of an existing index. @@ -1020,7 +1020,7 @@ ALTER TABLE [ IF EXISTS ] name - storage_parameter + storage_parameter The name of a table storage parameter. @@ -1029,7 +1029,7 @@ ALTER TABLE [ IF EXISTS ] name - value + value The new value for a table storage parameter. @@ -1039,7 +1039,7 @@ ALTER TABLE [ IF EXISTS ] name - parent_table + parent_table A parent table to associate or de-associate with this table. @@ -1048,7 +1048,7 @@ ALTER TABLE [ IF EXISTS ] name - new_owner + new_owner The user name of the new owner of the table. @@ -1057,7 +1057,7 @@ ALTER TABLE [ IF EXISTS ] name - new_tablespace + new_tablespace The name of the tablespace to which the table will be moved. @@ -1066,7 +1066,7 @@ ALTER TABLE [ IF EXISTS ] name - new_schema + new_schema The name of the schema to which the table will be moved. @@ -1075,7 +1075,7 @@ ALTER TABLE [ IF EXISTS ] name - partition_name + partition_name The name of the table to attach as a new partition or to detach from this table. @@ -1084,7 +1084,7 @@ ALTER TABLE [ IF EXISTS ] name - partition_bound_spec + partition_bound_spec The partition bound specification for a new partition. Refer to diff --git a/doc/src/sgml/ref/alter_tablespace.sgml b/doc/src/sgml/ref/alter_tablespace.sgml index 2f41105001..4542bd90a2 100644 --- a/doc/src/sgml/ref/alter_tablespace.sgml +++ b/doc/src/sgml/ref/alter_tablespace.sgml @@ -23,8 +23,8 @@ PostgreSQL documentation ALTER TABLESPACE name RENAME TO new_name ALTER TABLESPACE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } -ALTER TABLESPACE name SET ( tablespace_option = value [, ... ] ) -ALTER TABLESPACE name RESET ( tablespace_option [, ... ] ) +ALTER TABLESPACE name SET ( tablespace_option = value [, ... ] ) +ALTER TABLESPACE name RESET ( tablespace_option [, ... ] ) diff --git a/doc/src/sgml/ref/alter_trigger.sgml b/doc/src/sgml/ref/alter_trigger.sgml index 47eef6e5e8..3e48d159e2 100644 --- a/doc/src/sgml/ref/alter_trigger.sgml +++ b/doc/src/sgml/ref/alter_trigger.sgml @@ -21,8 +21,8 @@ PostgreSQL documentation -ALTER TRIGGER name ON table_name RENAME TO new_name -ALTER TRIGGER name ON table_name DEPENDS ON EXTENSION extension_name +ALTER TRIGGER name ON table_name RENAME TO new_name +ALTER TRIGGER name ON table_name DEPENDS ON EXTENSION extension_name @@ -48,7 +48,7 @@ ALTER TRIGGER name ON - name + name The name of an existing trigger to alter. @@ -57,7 +57,7 @@ ALTER TRIGGER name ON - table_name + table_name The name of the table on which this trigger acts. @@ -66,7 +66,7 @@ ALTER TRIGGER name ON - new_name + new_name The new name for the trigger. @@ -75,7 +75,7 @@ ALTER TRIGGER name ON - extension_name + extension_name The name of the extension that the trigger is to depend on. diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml index 4027c1b8f7..6c5201ccb5 100644 --- a/doc/src/sgml/ref/alter_type.sgml +++ b/doc/src/sgml/ref/alter_type.sgml @@ -23,19 +23,19 @@ PostgreSQL documentation -ALTER TYPE name action [, ... 
] -ALTER TYPE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } -ALTER TYPE name RENAME ATTRIBUTE attribute_name TO new_attribute_name [ CASCADE | RESTRICT ] -ALTER TYPE name RENAME TO new_name -ALTER TYPE name SET SCHEMA new_schema -ALTER TYPE name ADD VALUE [ IF NOT EXISTS ] new_enum_value [ { BEFORE | AFTER } neighbor_enum_value ] -ALTER TYPE name RENAME VALUE existing_enum_value TO new_enum_value - -where action is one of: - - ADD ATTRIBUTE attribute_name data_type [ COLLATE collation ] [ CASCADE | RESTRICT ] - DROP ATTRIBUTE [ IF EXISTS ] attribute_name [ CASCADE | RESTRICT ] - ALTER ATTRIBUTE attribute_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ CASCADE | RESTRICT ] +ALTER TYPE name action [, ... ] +ALTER TYPE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER TYPE name RENAME ATTRIBUTE attribute_name TO new_attribute_name [ CASCADE | RESTRICT ] +ALTER TYPE name RENAME TO new_name +ALTER TYPE name SET SCHEMA new_schema +ALTER TYPE name ADD VALUE [ IF NOT EXISTS ] new_enum_value [ { BEFORE | AFTER } neighbor_enum_value ] +ALTER TYPE name RENAME VALUE existing_enum_value TO new_enum_value + +where action is one of: + + ADD ATTRIBUTE attribute_name data_type [ COLLATE collation ] [ CASCADE | RESTRICT ] + DROP ATTRIBUTE [ IF EXISTS ] attribute_name [ CASCADE | RESTRICT ] + ALTER ATTRIBUTE attribute_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ CASCADE | RESTRICT ] @@ -166,7 +166,7 @@ ALTER TYPE name RENAME VALUE - name + name The name (possibly schema-qualified) of an existing type to @@ -176,7 +176,7 @@ ALTER TYPE name RENAME VALUE - new_name + new_name The new name for the type. @@ -185,7 +185,7 @@ ALTER TYPE name RENAME VALUE - new_owner + new_owner The user name of the new owner of the type. @@ -194,7 +194,7 @@ ALTER TYPE name RENAME VALUE - new_schema + new_schema The new schema for the type. @@ -203,7 +203,7 @@ ALTER TYPE name RENAME VALUE - attribute_name + attribute_name The name of the attribute to add, alter, or drop. @@ -212,7 +212,7 @@ ALTER TYPE name RENAME VALUE - new_attribute_name + new_attribute_name The new name of the attribute to be renamed. @@ -221,7 +221,7 @@ ALTER TYPE name RENAME VALUE - data_type + data_type The data type of the attribute to add, or the new type of the @@ -231,7 +231,7 @@ ALTER TYPE name RENAME VALUE - new_enum_value + new_enum_value The new value to be added to an enum type's list of values, @@ -242,7 +242,7 @@ ALTER TYPE name RENAME VALUE - neighbor_enum_value + neighbor_enum_value The existing enum value that the new value should be added immediately @@ -253,7 +253,7 @@ ALTER TYPE name RENAME VALUE - existing_enum_value + existing_enum_value The existing enum value that should be renamed. diff --git a/doc/src/sgml/ref/alter_user.sgml b/doc/src/sgml/ref/alter_user.sgml index c26d264de5..1a240ff430 100644 --- a/doc/src/sgml/ref/alter_user.sgml +++ b/doc/src/sgml/ref/alter_user.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -ALTER USER role_specification [ WITH ] option [ ... ] +ALTER USER role_specification [ WITH ] option [ ... 
] -where option can be: +where option can be: SUPERUSER | NOSUPERUSER | CREATEDB | NOCREATEDB @@ -32,20 +32,20 @@ ALTER USER role_specification [ WIT | LOGIN | NOLOGIN | REPLICATION | NOREPLICATION | BYPASSRLS | NOBYPASSRLS - | CONNECTION LIMIT connlimit - | [ ENCRYPTED ] PASSWORD 'password' - | VALID UNTIL 'timestamp' + | CONNECTION LIMIT connlimit + | [ ENCRYPTED ] PASSWORD 'password' + | VALID UNTIL 'timestamp' -ALTER USER name RENAME TO new_name +ALTER USER name RENAME TO new_name -ALTER USER { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter { TO | = } { value | DEFAULT } -ALTER USER { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter FROM CURRENT -ALTER USER { role_specification | ALL } [ IN DATABASE database_name ] RESET configuration_parameter -ALTER USER { role_specification | ALL } [ IN DATABASE database_name ] RESET ALL +ALTER USER { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter { TO | = } { value | DEFAULT } +ALTER USER { role_specification | ALL } [ IN DATABASE database_name ] SET configuration_parameter FROM CURRENT +ALTER USER { role_specification | ALL } [ IN DATABASE database_name ] RESET configuration_parameter +ALTER USER { role_specification | ALL } [ IN DATABASE database_name ] RESET ALL -where role_specification can be: +where role_specification can be: - role_name + role_name | CURRENT_USER | SESSION_USER diff --git a/doc/src/sgml/ref/alter_user_mapping.sgml b/doc/src/sgml/ref/alter_user_mapping.sgml index 3be54afee5..5cc49210ed 100644 --- a/doc/src/sgml/ref/alter_user_mapping.sgml +++ b/doc/src/sgml/ref/alter_user_mapping.sgml @@ -23,7 +23,7 @@ PostgreSQL documentation ALTER USER MAPPING FOR { user_name | USER | CURRENT_USER | SESSION_USER | PUBLIC } SERVER server_name - OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) + OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) @@ -69,7 +69,7 @@ ALTER USER MAPPING FOR { user_name - OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) + OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) Change options for the user mapping. The new options override diff --git a/doc/src/sgml/ref/alter_view.sgml b/doc/src/sgml/ref/alter_view.sgml index 00f4ecb9b1..788eda5d58 100644 --- a/doc/src/sgml/ref/alter_view.sgml +++ b/doc/src/sgml/ref/alter_view.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -ALTER VIEW [ IF EXISTS ] name ALTER [ COLUMN ] column_name SET DEFAULT expression -ALTER VIEW [ IF EXISTS ] name ALTER [ COLUMN ] column_name DROP DEFAULT -ALTER VIEW [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER VIEW [ IF EXISTS ] name ALTER [ COLUMN ] column_name SET DEFAULT expression +ALTER VIEW [ IF EXISTS ] name ALTER [ COLUMN ] column_name DROP DEFAULT +ALTER VIEW [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } ALTER VIEW [ IF EXISTS ] name RENAME TO new_name ALTER VIEW [ IF EXISTS ] name SET SCHEMA new_schema ALTER VIEW [ IF EXISTS ] name SET ( view_option_name [= view_option_value] [, ... ] ) @@ -90,7 +90,7 @@ ALTER VIEW [ IF EXISTS ] name RESET - new_owner + new_owner The user name of the new owner of the view. diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml index ba42973022..eae7fe92e0 100644 --- a/doc/src/sgml/ref/analyze.sgml +++ b/doc/src/sgml/ref/analyze.sgml @@ -21,11 +21,11 @@ PostgreSQL documentation -ANALYZE [ VERBOSE ] [ table_and_columns [, ...] ] +ANALYZE [ VERBOSE ] [ table_and_columns [, ...] 
] -where table_and_columns is: +where table_and_columns is: - table_name [ ( column_name [, ...] ) ] + table_name [ ( column_name [, ...] ) ] @@ -42,7 +42,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns - Without a table_and_columns + Without a table_and_columns list, ANALYZE processes every table and materialized view in the current database that the current user has permission to analyze. With a list, ANALYZE processes only those table(s). @@ -65,7 +65,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns - table_name + table_name The name (possibly schema-qualified) of a specific table to @@ -79,7 +79,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns - column_name + column_name The name of a specific column to analyze. Defaults to all columns. diff --git a/doc/src/sgml/ref/close.sgml b/doc/src/sgml/ref/close.sgml index aacc667144..aaa2f89a30 100644 --- a/doc/src/sgml/ref/close.sgml +++ b/doc/src/sgml/ref/close.sgml @@ -26,7 +26,7 @@ PostgreSQL documentation -CLOSE { name | ALL } +CLOSE { name | ALL } @@ -57,7 +57,7 @@ CLOSE { name | ALL } - name + name The name of an open cursor to close. diff --git a/doc/src/sgml/ref/cluster.sgml b/doc/src/sgml/ref/cluster.sgml index e6a77095ec..b55734d35c 100644 --- a/doc/src/sgml/ref/cluster.sgml +++ b/doc/src/sgml/ref/cluster.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -CLUSTER [VERBOSE] table_name [ USING index_name ] +CLUSTER [VERBOSE] table_name [ USING index_name ] CLUSTER [VERBOSE] @@ -82,7 +82,7 @@ CLUSTER [VERBOSE] - table_name + table_name The name (possibly schema-qualified) of a table. @@ -91,7 +91,7 @@ CLUSTER [VERBOSE] - index_name + index_name The name of an index. @@ -210,7 +210,7 @@ CLUSTER; The syntax -CLUSTER index_name ON table_name +CLUSTER index_name ON table_name is also supported for compatibility with pre-8.3 PostgreSQL versions. diff --git a/doc/src/sgml/ref/comment.sgml b/doc/src/sgml/ref/comment.sgml index df328117f1..059d6f41d8 100644 --- a/doc/src/sgml/ref/comment.sgml +++ b/doc/src/sgml/ref/comment.sgml @@ -23,48 +23,48 @@ PostgreSQL documentation COMMENT ON { - ACCESS METHOD object_name | - AGGREGATE aggregate_name ( aggregate_signature ) | + ACCESS METHOD object_name | + AGGREGATE aggregate_name ( aggregate_signature ) | CAST (source_type AS target_type) | - COLLATION object_name | - COLUMN relation_name.column_name | - CONSTRAINT constraint_name ON table_name | - CONSTRAINT constraint_name ON DOMAIN domain_name | - CONVERSION object_name | - DATABASE object_name | - DOMAIN object_name | - EXTENSION object_name | - EVENT TRIGGER object_name | - FOREIGN DATA WRAPPER object_name | - FOREIGN TABLE object_name | - FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [, ...] 
] ) ] | - INDEX object_name | - LARGE OBJECT large_object_oid | - MATERIALIZED VIEW object_name | - OPERATOR operator_name (left_type, right_type) | - OPERATOR CLASS object_name USING index_method | - OPERATOR FAMILY object_name USING index_method | - POLICY policy_name ON table_name | - [ PROCEDURAL ] LANGUAGE object_name | - PUBLICATION object_name | - ROLE object_name | - RULE rule_name ON table_name | - SCHEMA object_name | - SEQUENCE object_name | - SERVER object_name | - STATISTICS object_name | - SUBSCRIPTION object_name | - TABLE object_name | - TABLESPACE object_name | - TEXT SEARCH CONFIGURATION object_name | - TEXT SEARCH DICTIONARY object_name | - TEXT SEARCH PARSER object_name | - TEXT SEARCH TEMPLATE object_name | + COLLATION object_name | + COLUMN relation_name.column_name | + CONSTRAINT constraint_name ON table_name | + CONSTRAINT constraint_name ON DOMAIN domain_name | + CONVERSION object_name | + DATABASE object_name | + DOMAIN object_name | + EXTENSION object_name | + EVENT TRIGGER object_name | + FOREIGN DATA WRAPPER object_name | + FOREIGN TABLE object_name | + FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] | + INDEX object_name | + LARGE OBJECT large_object_oid | + MATERIALIZED VIEW object_name | + OPERATOR operator_name (left_type, right_type) | + OPERATOR CLASS object_name USING index_method | + OPERATOR FAMILY object_name USING index_method | + POLICY policy_name ON table_name | + [ PROCEDURAL ] LANGUAGE object_name | + PUBLICATION object_name | + ROLE object_name | + RULE rule_name ON table_name | + SCHEMA object_name | + SEQUENCE object_name | + SERVER object_name | + STATISTICS object_name | + SUBSCRIPTION object_name | + TABLE object_name | + TABLESPACE object_name | + TEXT SEARCH CONFIGURATION object_name | + TEXT SEARCH DICTIONARY object_name | + TEXT SEARCH PARSER object_name | + TEXT SEARCH TEMPLATE object_name | TRANSFORM FOR type_name LANGUAGE lang_name | - TRIGGER trigger_name ON table_name | - TYPE object_name | - VIEW object_name -} IS 'text' + TRIGGER trigger_name ON table_name | + TYPE object_name | + VIEW object_name +} IS 'text' where aggregate_signature is: diff --git a/doc/src/sgml/ref/commit_prepared.sgml b/doc/src/sgml/ref/commit_prepared.sgml index e1988ad318..716aed3ac2 100644 --- a/doc/src/sgml/ref/commit_prepared.sgml +++ b/doc/src/sgml/ref/commit_prepared.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -COMMIT PREPARED transaction_id +COMMIT PREPARED transaction_id @@ -39,7 +39,7 @@ COMMIT PREPARED transaction_id - transaction_id + transaction_id The transaction identifier of the transaction that is to be diff --git a/doc/src/sgml/ref/create_aggregate.sgml b/doc/src/sgml/ref/create_aggregate.sgml index 6a8acfb4f9..c96e4faba7 100644 --- a/doc/src/sgml/ref/create_aggregate.sgml +++ b/doc/src/sgml/ref/create_aggregate.sgml @@ -22,59 +22,59 @@ PostgreSQL documentation CREATE AGGREGATE name ( [ argmode ] [ argname ] arg_data_type [ , ... 
] ) ( - SFUNC = sfunc, - STYPE = state_data_type - [ , SSPACE = state_data_size ] - [ , FINALFUNC = ffunc ] + SFUNC = sfunc, + STYPE = state_data_type + [ , SSPACE = state_data_size ] + [ , FINALFUNC = ffunc ] [ , FINALFUNC_EXTRA ] - [ , COMBINEFUNC = combinefunc ] - [ , SERIALFUNC = serialfunc ] - [ , DESERIALFUNC = deserialfunc ] - [ , INITCOND = initial_condition ] - [ , MSFUNC = msfunc ] - [ , MINVFUNC = minvfunc ] - [ , MSTYPE = mstate_data_type ] - [ , MSSPACE = mstate_data_size ] - [ , MFINALFUNC = mffunc ] + [ , COMBINEFUNC = combinefunc ] + [ , SERIALFUNC = serialfunc ] + [ , DESERIALFUNC = deserialfunc ] + [ , INITCOND = initial_condition ] + [ , MSFUNC = msfunc ] + [ , MINVFUNC = minvfunc ] + [ , MSTYPE = mstate_data_type ] + [ , MSSPACE = mstate_data_size ] + [ , MFINALFUNC = mffunc ] [ , MFINALFUNC_EXTRA ] - [ , MINITCOND = minitial_condition ] - [ , SORTOP = sort_operator ] + [ , MINITCOND = minitial_condition ] + [ , SORTOP = sort_operator ] [ , PARALLEL = { SAFE | RESTRICTED | UNSAFE } ] ) CREATE AGGREGATE name ( [ [ argmode ] [ argname ] arg_data_type [ , ... ] ] ORDER BY [ argmode ] [ argname ] arg_data_type [ , ... ] ) ( - SFUNC = sfunc, - STYPE = state_data_type - [ , SSPACE = state_data_size ] - [ , FINALFUNC = ffunc ] + SFUNC = sfunc, + STYPE = state_data_type + [ , SSPACE = state_data_size ] + [ , FINALFUNC = ffunc ] [ , FINALFUNC_EXTRA ] - [ , INITCOND = initial_condition ] + [ , INITCOND = initial_condition ] [ , PARALLEL = { SAFE | RESTRICTED | UNSAFE } ] [ , HYPOTHETICAL ] ) or the old syntax -CREATE AGGREGATE name ( - BASETYPE = base_type, - SFUNC = sfunc, - STYPE = state_data_type - [ , SSPACE = state_data_size ] - [ , FINALFUNC = ffunc ] +CREATE AGGREGATE name ( + BASETYPE = base_type, + SFUNC = sfunc, + STYPE = state_data_type + [ , SSPACE = state_data_size ] + [ , FINALFUNC = ffunc ] [ , FINALFUNC_EXTRA ] - [ , COMBINEFUNC = combinefunc ] - [ , SERIALFUNC = serialfunc ] - [ , DESERIALFUNC = deserialfunc ] - [ , INITCOND = initial_condition ] - [ , MSFUNC = msfunc ] - [ , MINVFUNC = minvfunc ] - [ , MSTYPE = mstate_data_type ] - [ , MSSPACE = mstate_data_size ] - [ , MFINALFUNC = mffunc ] + [ , COMBINEFUNC = combinefunc ] + [ , SERIALFUNC = serialfunc ] + [ , DESERIALFUNC = deserialfunc ] + [ , INITCOND = initial_condition ] + [ , MSFUNC = msfunc ] + [ , MINVFUNC = minvfunc ] + [ , MSTYPE = mstate_data_type ] + [ , MSSPACE = mstate_data_size ] + [ , MFINALFUNC = mffunc ] [ , MFINALFUNC_EXTRA ] - [ , MINITCOND = minitial_condition ] - [ , SORTOP = sort_operator ] + [ , MINITCOND = minitial_condition ] + [ , SORTOP = sort_operator ] ) @@ -112,19 +112,19 @@ CREATE AGGREGATE name ( A simple aggregate function is made from one or two ordinary functions: a state transition function - sfunc, + sfunc, and an optional final calculation function - ffunc. + ffunc. These are used as follows: -sfunc( internal-state, next-data-values ) ---> next-internal-state -ffunc( internal-state ) ---> aggregate-value +sfunc( internal-state, next-data-values ) ---> next-internal-state +ffunc( internal-state ) ---> aggregate-value PostgreSQL creates a temporary variable - of data type stype + of data type stype to hold the current internal state of the aggregate. At each input row, the aggregate argument value(s) are calculated and the state transition function is invoked with the current state value @@ -155,9 +155,9 @@ CREATE AGGREGATE name ( all-nonnull input values. This is handy for implementing aggregates like max. 
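The sfunc/stype machinery described above can be seen in a minimal sketch; the function and aggregate names are hypothetical, and the behavior mirrors the built-in integer sum. Because the transition function is strict and no initial condition is given, the first nonnull input becomes the initial state, exactly as the preceding paragraph explains:

    CREATE FUNCTION my_int_add(int, int) RETURNS int
        AS 'SELECT $1 + $2' LANGUAGE SQL STRICT;

    CREATE AGGREGATE my_sum (int) (
        SFUNC = my_int_add,   -- state transition: add each input to the state
        STYPE = int           -- internal state has the same type as the input
    );

    SELECT my_sum(x) FROM (VALUES (1), (2), (NULL), (3)) AS t(x);  -- returns 6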
Note that this behavior is only available when - state_data_type + state_data_type is the same as the first - arg_data_type. + arg_data_type. When these types are different, you must supply a nonnull initial condition or use a nonstrict transition function. @@ -224,7 +224,7 @@ CREATE AGGREGATE name ( An aggregate can optionally support partial aggregation, as described in . This requires specifying the COMBINEFUNC parameter. - If the state_data_type + If the state_data_type is internal, it's usually also appropriate to provide the SERIALFUNC and DESERIALFUNC parameters so that parallel aggregation is possible. Note that the aggregate must also be @@ -268,7 +268,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - name + name The name (optionally schema-qualified) of the aggregate function @@ -302,7 +302,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - arg_data_type + arg_data_type An input data type on which this aggregate function operates. @@ -314,7 +314,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - base_type + base_type In the old syntax for CREATE AGGREGATE, the input data type @@ -329,18 +329,18 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - sfunc + sfunc The name of the state transition function to be called for each - input row. For a normal N-argument - aggregate function, the sfunc - must take N+1 arguments, + input row. For a normal N-argument + aggregate function, the sfunc + must take N+1 arguments, the first being of type state_data_type and the rest + class="parameter">state_data_type and the rest matching the declared input data type(s) of the aggregate. The function must return a value of type state_data_type. This function + class="parameter">state_data_type. This function takes the current state value and the current input data value(s), and returns the next state value. @@ -355,7 +355,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - state_data_type + state_data_type The data type for the aggregate's state value. @@ -364,7 +364,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - state_data_size + state_data_size The approximate average size (in bytes) of the aggregate's state value. @@ -380,19 +380,19 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - ffunc + ffunc The name of the final function called to compute the aggregate's result after all input rows have been traversed. For a normal aggregate, this function must take a single argument of type state_data_type. The return + class="parameter">state_data_type. The return data type of the aggregate is defined as the return type of this - function. If ffunc + function. If ffunc is not specified, then the ending state value is used as the aggregate's result, and the return type is state_data_type. + class="parameter">state_data_type. @@ -413,30 +413,30 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - combinefunc + combinefunc - The combinefunc function + The combinefunc function may optionally be specified to allow the aggregate function to support partial aggregation. If provided, - the combinefunc must - combine two state_data_type + the combinefunc must + combine two state_data_type values, each containing the result of aggregation over some subset of the input values, to produce a - new state_data_type that + new state_data_type that represents the result of aggregating over both sets of inputs. 
This function can be thought of as - an sfunc, where instead of + an sfunc, where instead of acting upon an individual input row and adding it to the running aggregate state, it adds another aggregate state to the running state. - The combinefunc must be + The combinefunc must be declared as taking two arguments of - the state_data_type and + the state_data_type and returning a value of - the state_data_type. + the state_data_type. Optionally this function may be strict. In this case the function will not be called when either of the input states are null; the other state will be taken as the correct result. @@ -444,11 +444,11 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; For aggregate functions - whose state_data_type + whose state_data_type is internal, - the combinefunc must not + the combinefunc must not be strict. In this case - the combinefunc must + the combinefunc must ensure that null states are handled correctly and that the state being returned is properly stored in the aggregate memory context. @@ -456,28 +456,28 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - serialfunc + serialfunc An aggregate function - whose state_data_type + whose state_data_type is internal can participate in parallel aggregation only if it - has a serialfunc function, + has a serialfunc function, which must serialize the aggregate state into a bytea value for transmission to another process. This function must take a single argument of type internal and return type bytea. A - corresponding deserialfunc + corresponding deserialfunc is also required. - deserialfunc + deserialfunc Deserialize a previously serialized aggregate state back into - state_data_type. This + state_data_type. This function must take two arguments of types bytea and internal, and produce a result of type internal. (Note: the second, internal argument is unused, but is required @@ -487,19 +487,19 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - initial_condition + initial_condition The initial setting for the state value. This must be a string constant in the form accepted for the data type state_data_type. If not + class="parameter">state_data_type. If not specified, the state value starts out null. 
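Building on that sketch, the COMBINEFUNC and INITCOND parameters described above could be supplied as follows (again hypothetical; for integer sum, two partial states combine by simple addition, so the transition function can double as the combine function):

    CREATE AGGREGATE my_psum (int) (
        SFUNC = my_int_add,
        STYPE = int,
        COMBINEFUNC = my_int_add, -- merges two partial states in parallel plans
        INITCOND = '0',           -- state starts at 0 rather than null
        PARALLEL = SAFE
    );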
- msfunc + msfunc The name of the forward state transition function to be called for each @@ -512,7 +512,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - minvfunc + minvfunc The name of the inverse state transition function to be used in @@ -526,7 +526,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - mstate_data_type + mstate_data_type The data type for the aggregate's state value, when using @@ -536,7 +536,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - mstate_data_size + mstate_data_size The approximate average size (in bytes) of the aggregate's state @@ -547,7 +547,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - mffunc + mffunc The name of the final function called to compute the aggregate's @@ -564,7 +564,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - minitial_condition + minitial_condition The initial setting for the state value, when using moving-aggregate @@ -574,7 +574,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - sort_operator + sort_operator The associated sort operator for a MIN- or diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml index 48386a29f9..8e2a73402f 100644 --- a/doc/src/sgml/ref/create_database.sgml +++ b/doc/src/sgml/ref/create_database.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -CREATE DATABASE name +CREATE DATABASE name [ [ WITH ] [ OWNER [=] user_name ] [ TEMPLATE [=] template ] [ ENCODING [=] encoding ] diff --git a/doc/src/sgml/ref/create_domain.sgml b/doc/src/sgml/ref/create_domain.sgml index 3423bf9a32..85ed57dd08 100644 --- a/doc/src/sgml/ref/create_domain.sgml +++ b/doc/src/sgml/ref/create_domain.sgml @@ -24,12 +24,12 @@ PostgreSQL documentation CREATE DOMAIN name [ AS ] data_type [ COLLATE collation ] [ DEFAULT expression ] - [ constraint [ ... ] ] + [ constraint [ ... ] ] -where constraint is: +where constraint is: -[ CONSTRAINT constraint_name ] -{ NOT NULL | NULL | CHECK (expression) } +[ CONSTRAINT constraint_name ] +{ NOT NULL | NULL | CHECK (expression) } @@ -80,7 +80,7 @@ CREATE DOMAIN name [ AS ] - data_type + data_type The underlying data type of the domain. This can include array @@ -126,7 +126,7 @@ CREATE DOMAIN name [ AS ] - CONSTRAINT constraint_name + CONSTRAINT constraint_name An optional name for a constraint. If not specified, @@ -161,7 +161,7 @@ CREATE DOMAIN name [ AS ] - CHECK (expression) + CHECK (expression) CHECK clauses specify integrity constraints or tests which values of the domain must satisfy. diff --git a/doc/src/sgml/ref/create_event_trigger.sgml b/doc/src/sgml/ref/create_event_trigger.sgml index be18fc36e8..7decfbb983 100644 --- a/doc/src/sgml/ref/create_event_trigger.sgml +++ b/doc/src/sgml/ref/create_event_trigger.sgml @@ -21,10 +21,10 @@ PostgreSQL documentation -CREATE EVENT TRIGGER name - ON event - [ WHEN filter_variable IN (filter_value [, ... ]) [ AND ... ] ] - EXECUTE PROCEDURE function_name() +CREATE EVENT TRIGGER name + ON event + [ WHEN filter_variable IN (filter_value [, ... ]) [ AND ... ] ] + EXECUTE PROCEDURE function_name() diff --git a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml index a3811a3b63..1161e05d1c 100644 --- a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml @@ -24,7 +24,7 @@ PostgreSQL documentation CREATE FOREIGN DATA WRAPPER name [ HANDLER handler_function | NO HANDLER ] [ VALIDATOR validator_function | NO VALIDATOR ] - [ OPTIONS ( option 'value' [, ... 
] ) ] + [ OPTIONS ( option 'value' [, ... ] ) ] @@ -100,7 +100,7 @@ CREATE FOREIGN DATA WRAPPER name - OPTIONS ( option 'value' [, ... ] ) + OPTIONS ( option 'value' [, ... ] ) This clause specifies options for the new foreign-data wrapper. diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml index 065c982082..f514b2d59f 100644 --- a/doc/src/sgml/ref/create_foreign_table.sgml +++ b/doc/src/sgml/ref/create_foreign_table.sgml @@ -18,36 +18,36 @@ -CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [ - { column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ] +CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [ + { column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ] | table_constraint } [, ... ] ] ) [ INHERITS ( parent_table [, ... ] ) ] SERVER server_name -[ OPTIONS ( option 'value' [, ... ] ) ] +[ OPTIONS ( option 'value' [, ... ] ) ] -CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name - PARTITION OF parent_table [ ( - { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ] +CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name + PARTITION OF parent_table [ ( + { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ] | table_constraint } [, ... ] -) ] partition_bound_spec +) ] partition_bound_spec SERVER server_name -[ OPTIONS ( option 'value' [, ... ] ) ] +[ OPTIONS ( option 'value' [, ... ] ) ] -where column_constraint is: +where column_constraint is: -[ CONSTRAINT constraint_name ] +[ CONSTRAINT constraint_name ] { NOT NULL | NULL | - CHECK ( expression ) [ NO INHERIT ] | + CHECK ( expression ) [ NO INHERIT ] | DEFAULT default_expr } -and table_constraint is: +and table_constraint is: -[ CONSTRAINT constraint_name ] -CHECK ( expression ) [ NO INHERIT ] +[ CONSTRAINT constraint_name ] +CHECK ( expression ) [ NO INHERIT ] @@ -107,7 +107,7 @@ CHECK ( expression ) [ NO INHERIT ] - table_name + table_name The name (optionally schema-qualified) of the table to be created. @@ -116,7 +116,7 @@ CHECK ( expression ) [ NO INHERIT ] - column_name + column_name The name of a column to be created in the new table. @@ -125,7 +125,7 @@ CHECK ( expression ) [ NO INHERIT ] - data_type + data_type The data type of the column. This can include array @@ -161,7 +161,7 @@ CHECK ( expression ) [ NO INHERIT ] - CONSTRAINT constraint_name + CONSTRAINT constraint_name An optional name for a column or table constraint. If the @@ -199,7 +199,7 @@ CHECK ( expression ) [ NO INHERIT ] - CHECK ( expression ) [ NO INHERIT ] + CHECK ( expression ) [ NO INHERIT ] The CHECK clause specifies an expression producing a @@ -247,7 +247,7 @@ CHECK ( expression ) [ NO INHERIT ] - server_name + server_name The name of an existing foreign server to use for the foreign table. @@ -258,7 +258,7 @@ CHECK ( expression ) [ NO INHERIT ] - OPTIONS ( option 'value' [, ...] ) + OPTIONS ( option 'value' [, ...] ) Options to be associated with the new foreign table or one of its diff --git a/doc/src/sgml/ref/create_group.sgml b/doc/src/sgml/ref/create_group.sgml index 158617cb93..7896043a11 100644 --- a/doc/src/sgml/ref/create_group.sgml +++ b/doc/src/sgml/ref/create_group.sgml @@ -21,23 +21,23 @@ PostgreSQL documentation -CREATE GROUP name [ [ WITH ] option [ ... ] ] +CREATE GROUP name [ [ WITH ] option [ ... 
] ] -where option can be: +where option can be: SUPERUSER | NOSUPERUSER | CREATEDB | NOCREATEDB | CREATEROLE | NOCREATEROLE | INHERIT | NOINHERIT | LOGIN | NOLOGIN - | [ ENCRYPTED ] PASSWORD 'password' - | VALID UNTIL 'timestamp' - | IN ROLE role_name [, ...] - | IN GROUP role_name [, ...] - | ROLE role_name [, ...] - | ADMIN role_name [, ...] - | USER role_name [, ...] - | SYSID uid + | [ ENCRYPTED ] PASSWORD 'password' + | VALID UNTIL 'timestamp' + | IN ROLE role_name [, ...] + | IN GROUP role_name [, ...] + | ROLE role_name [, ...] + | ADMIN role_name [, ...] + | USER role_name [, ...] + | SYSID uid diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index 83ee7d3f25..a462be790f 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -23,7 +23,7 @@ PostgreSQL documentation CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] name ] ON table_name [ USING method ] ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] ) - [ WITH ( storage_parameter = value [, ... ] ) ] + [ WITH ( storage_parameter = value [, ... ] ) ] [ TABLESPACE tablespace_name ] [ WHERE predicate ] diff --git a/doc/src/sgml/ref/create_materialized_view.sgml b/doc/src/sgml/ref/create_materialized_view.sgml index a8fb84e7a7..577893fe92 100644 --- a/doc/src/sgml/ref/create_materialized_view.sgml +++ b/doc/src/sgml/ref/create_materialized_view.sgml @@ -23,8 +23,8 @@ PostgreSQL documentation CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] table_name [ (column_name [, ...] ) ] - [ WITH ( storage_parameter [= value] [, ... ] ) ] - [ TABLESPACE tablespace_name ] + [ WITH ( storage_parameter [= value] [, ... ] ) ] + [ TABLESPACE tablespace_name ] AS query [ WITH [ NO ] DATA ] @@ -87,7 +87,7 @@ CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] table_name - WITH ( storage_parameter [= value] [, ... ] ) + WITH ( storage_parameter [= value] [, ... ] ) This clause specifies optional storage parameters for the new @@ -102,10 +102,10 @@ CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] table_name - TABLESPACE tablespace_name + TABLESPACE tablespace_name - The tablespace_name is the name + The tablespace_name is the name of the tablespace in which the new materialized view is to be created. If not specified, is consulted. diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml index 36772b678a..41670c4b05 100644 --- a/doc/src/sgml/ref/create_role.sgml +++ b/doc/src/sgml/ref/create_role.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -CREATE ROLE name [ [ WITH ] option [ ... ] ] +CREATE ROLE name [ [ WITH ] option [ ... ] ] -where option can be: +where option can be: SUPERUSER | NOSUPERUSER | CREATEDB | NOCREATEDB @@ -32,15 +32,15 @@ CREATE ROLE name [ [ WITH ] connlimit - | [ ENCRYPTED ] PASSWORD 'password' - | VALID UNTIL 'timestamp' - | IN ROLE role_name [, ...] - | IN GROUP role_name [, ...] - | ROLE role_name [, ...] - | ADMIN role_name [, ...] - | USER role_name [, ...] - | SYSID uid + | CONNECTION LIMIT connlimit + | [ ENCRYPTED ] PASSWORD 'password' + | VALID UNTIL 'timestamp' + | IN ROLE role_name [, ...] + | IN GROUP role_name [, ...] + | ROLE role_name [, ...] + | ADMIN role_name [, ...] + | USER role_name [, ...] 
+ | SYSID uid @@ -453,7 +453,7 @@ CREATE ROLE admin WITH CREATEDB CREATEROLE; The CREATE ROLE statement is in the SQL standard, but the standard only requires the syntax -CREATE ROLE name [ WITH ADMIN role_name ] +CREATE ROLE name [ WITH ADMIN role_name ] Multiple initial administrators, and all the other options of CREATE ROLE, are diff --git a/doc/src/sgml/ref/create_schema.sgml b/doc/src/sgml/ref/create_schema.sgml index 5d29cd768a..ce145f96a0 100644 --- a/doc/src/sgml/ref/create_schema.sgml +++ b/doc/src/sgml/ref/create_schema.sgml @@ -21,14 +21,14 @@ PostgreSQL documentation -CREATE SCHEMA schema_name [ AUTHORIZATION role_specification ] [ schema_element [ ... ] ] -CREATE SCHEMA AUTHORIZATION role_specification [ schema_element [ ... ] ] -CREATE SCHEMA IF NOT EXISTS schema_name [ AUTHORIZATION role_specification ] -CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_specification +CREATE SCHEMA schema_name [ AUTHORIZATION role_specification ] [ schema_element [ ... ] ] +CREATE SCHEMA AUTHORIZATION role_specification [ schema_element [ ... ] ] +CREATE SCHEMA IF NOT EXISTS schema_name [ AUTHORIZATION role_specification ] +CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_specification -where role_specification can be: +where role_specification can be: - user_name + user_name | CURRENT_USER | SESSION_USER diff --git a/doc/src/sgml/ref/create_server.sgml b/doc/src/sgml/ref/create_server.sgml index 7318481487..47b8a6291b 100644 --- a/doc/src/sgml/ref/create_server.sgml +++ b/doc/src/sgml/ref/create_server.sgml @@ -23,7 +23,7 @@ PostgreSQL documentation CREATE SERVER [IF NOT EXISTS] server_name [ TYPE 'server_type' ] [ VERSION 'server_version' ] FOREIGN DATA WRAPPER fdw_name - [ OPTIONS ( option 'value' [, ... ] ) ] + [ OPTIONS ( option 'value' [, ... ] ) ] @@ -105,7 +105,7 @@ CREATE SERVER [IF NOT EXISTS] server_name - OPTIONS ( option 'value' [, ... ] ) + OPTIONS ( option 'value' [, ... ] ) This clause specifies the options for the server. The options diff --git a/doc/src/sgml/ref/create_statistics.sgml b/doc/src/sgml/ref/create_statistics.sgml index ef4e4852bd..0d68ca06b7 100644 --- a/doc/src/sgml/ref/create_statistics.sgml +++ b/doc/src/sgml/ref/create_statistics.sgml @@ -21,10 +21,10 @@ PostgreSQL documentation -CREATE STATISTICS [ IF NOT EXISTS ] statistics_name - [ ( statistics_kind [, ... ] ) ] - ON column_name, column_name [, ...] - FROM table_name +CREATE STATISTICS [ IF NOT EXISTS ] statistics_name + [ ( statistics_kind [, ... ] ) ] + ON column_name, column_name [, ...] + FROM table_name @@ -66,7 +66,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - statistics_name + statistics_name The name (optionally schema-qualified) of the statistics object to be @@ -76,7 +76,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - statistics_kind + statistics_kind A statistics kind to be computed in this statistics object. @@ -93,7 +93,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - column_name + column_name The name of a table column to be covered by the computed statistics. 
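A hypothetical use of the CREATE STATISTICS syntax above, collecting functional-dependency statistics on two correlated columns (object names invented; not part of this patch):

    CREATE STATISTICS stts_zip (dependencies)
        ON city, zip
        FROM zipcodes;
    ANALYZE zipcodes;  -- the extended statistics are computed during ANALYZE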
@@ -103,7 +103,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - table_name + table_name The name (optionally schema-qualified) of the table containing the diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml index de505ea8d3..bae9f839bd 100644 --- a/doc/src/sgml/ref/create_subscription.sgml +++ b/doc/src/sgml/ref/create_subscription.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -CREATE SUBSCRIPTION subscription_name - CONNECTION 'conninfo' - PUBLICATION publication_name [, ...] +CREATE SUBSCRIPTION subscription_name + CONNECTION 'conninfo' + PUBLICATION publication_name [, ...] [ WITH ( subscription_parameter [= value] [, ... ] ) ] diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index 1477288851..d15795857b 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -21,81 +21,81 @@ PostgreSQL documentation -CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [ - { column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ] +CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [ + { column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ] | table_constraint | LIKE source_table [ like_option ... ] } [, ... ] ] ) [ INHERITS ( parent_table [, ... ] ) ] [ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] -[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] +[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ] -[ TABLESPACE tablespace_name ] +[ TABLESPACE tablespace_name ] -CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name - OF type_name [ ( - { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ] +CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name + OF type_name [ ( + { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ] | table_constraint } [, ... ] ) ] [ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] -[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] +[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ] -[ TABLESPACE tablespace_name ] +[ TABLESPACE tablespace_name ] -CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name - PARTITION OF parent_table [ ( - { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ] +CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name + PARTITION OF parent_table [ ( + { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ] | table_constraint } [, ... ] -) ] { FOR VALUES partition_bound_spec | DEFAULT } +) ] { FOR VALUES partition_bound_spec | DEFAULT } [ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] -[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] +[ WITH ( storage_parameter [= value] [, ... 
] ) | WITH OIDS | WITHOUT OIDS ] [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ] -[ TABLESPACE tablespace_name ] +[ TABLESPACE tablespace_name ] -where column_constraint is: +where column_constraint is: -[ CONSTRAINT constraint_name ] +[ CONSTRAINT constraint_name ] { NOT NULL | NULL | - CHECK ( expression ) [ NO INHERIT ] | + CHECK ( expression ) [ NO INHERIT ] | DEFAULT default_expr | GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] | - UNIQUE index_parameters | - PRIMARY KEY index_parameters | - REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] + UNIQUE index_parameters | + PRIMARY KEY index_parameters | + REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] } [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] -and table_constraint is: +and table_constraint is: -[ CONSTRAINT constraint_name ] -{ CHECK ( expression ) [ NO INHERIT ] | - UNIQUE ( column_name [, ... ] ) index_parameters | - PRIMARY KEY ( column_name [, ... ] ) index_parameters | +[ CONSTRAINT constraint_name ] +{ CHECK ( expression ) [ NO INHERIT ] | + UNIQUE ( column_name [, ... ] ) index_parameters | + PRIMARY KEY ( column_name [, ... ] ) index_parameters | EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ] | - FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] + FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] } [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] -and like_option is: +and like_option is: { INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | IDENTITY | INDEXES | STORAGE | COMMENTS | ALL } -and partition_bound_spec is: +and partition_bound_spec is: -IN ( { numeric_literal | string_literal | NULL } [, ...] ) | -FROM ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] ) - TO ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] ) +IN ( { numeric_literal | string_literal | NULL } [, ...] ) | +FROM ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] ) + TO ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] ) -index_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are: +index_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are: -[ WITH ( storage_parameter [= value] [, ... ] ) ] -[ USING INDEX TABLESPACE tablespace_name ] +[ WITH ( storage_parameter [= value] [, ... ] ) ] +[ USING INDEX TABLESPACE tablespace_name ] -exclude_element in an EXCLUDE constraint is: +exclude_element in an EXCLUDE constraint is: { column_name | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] @@ -220,7 +220,7 @@ FROM ( { numeric_literal | - table_name + table_name The name (optionally schema-qualified) of the table to be created. 
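To illustrate the PARTITION BY / PARTITION OF grammar and the partition_bound_spec laid out above, a hypothetical range-partitioned table (names invented):

    CREATE TABLE measurement (
        logdate  date NOT NULL,
        peaktemp int
    ) PARTITION BY RANGE (logdate);

    CREATE TABLE measurement_y2017 PARTITION OF measurement
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');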
@@ -229,7 +229,7 @@ FROM ( { numeric_literal | - OF type_name + OF type_name Creates a typed table, which takes its @@ -250,7 +250,7 @@ FROM ( { numeric_literal | - PARTITION OF parent_table { FOR VALUES partition_bound_spec | DEFAULT } + PARTITION OF parent_table { FOR VALUES partition_bound_spec | DEFAULT } Creates the table as a partition of the specified @@ -260,7 +260,7 @@ FROM ( { numeric_literal | - The partition_bound_spec + The partition_bound_spec must correspond to the partitioning method and partition key of the parent table, and must not overlap with any existing partition of that parent. The form with IN is used for list partitioning, @@ -270,7 +270,7 @@ FROM ( { numeric_literal | Each of the values specified in - the partition_bound_spec is + the partition_bound_spec is a literal, NULL, MINVALUE, or MAXVALUE. Each literal value must be either a numeric constant that is coercible to the corresponding partition key @@ -397,7 +397,7 @@ FROM ( { numeric_literal | - column_name + column_name The name of a column to be created in the new table. @@ -406,7 +406,7 @@ FROM ( { numeric_literal | - data_type + data_type The data type of the column. This can include array @@ -602,7 +602,7 @@ FROM ( { numeric_literal | - CONSTRAINT constraint_name + CONSTRAINT constraint_name An optional name for a column or table constraint. If the @@ -640,7 +640,7 @@ FROM ( { numeric_literal | - CHECK ( expression ) [ NO INHERIT ] + CHECK ( expression ) [ NO INHERIT ] The CHECK clause specifies an expression producing a @@ -730,7 +730,7 @@ FROM ( { numeric_literal | UNIQUE (column constraint) - UNIQUE ( column_name [, ... ] ) (table constraint) + UNIQUE ( column_name [, ... ] ) (table constraint) @@ -757,7 +757,7 @@ FROM ( { numeric_literal | PRIMARY KEY (column constraint) - PRIMARY KEY ( column_name [, ... ] ) (table constraint) + PRIMARY KEY ( column_name [, ... ] ) (table constraint) The PRIMARY KEY constraint specifies that a column or @@ -997,7 +997,7 @@ FROM ( { numeric_literal | - WITH ( storage_parameter [= value] [, ... ] ) + WITH ( storage_parameter [= value] [, ... ] ) This clause specifies optional storage parameters for a table or index; @@ -1092,10 +1092,10 @@ FROM ( { numeric_literal | - TABLESPACE tablespace_name + TABLESPACE tablespace_name - The tablespace_name is the name + The tablespace_name is the name of the tablespace in which the new table is to be created. If not specified, is consulted, or @@ -1105,7 +1105,7 @@ FROM ( { numeric_literal | - USING INDEX TABLESPACE tablespace_name + USING INDEX TABLESPACE tablespace_name This clause allows selection of the tablespace in which the index diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml index 8e4ada794d..0fa28a11fa 100644 --- a/doc/src/sgml/ref/create_table_as.sgml +++ b/doc/src/sgml/ref/create_table_as.sgml @@ -23,9 +23,9 @@ PostgreSQL documentation CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name [ (column_name [, ...] ) ] - [ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] + [ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ] - [ TABLESPACE tablespace_name ] + [ TABLESPACE tablespace_name ] AS query [ WITH [ NO ] DATA ] @@ -121,7 +121,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI - WITH ( storage_parameter [= value] [, ... ] ) + WITH ( storage_parameter [= value] [, ... 
] ) This clause specifies optional storage parameters for the new table; @@ -195,10 +195,10 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI - TABLESPACE tablespace_name + TABLESPACE tablespace_name - The tablespace_name is the name + The tablespace_name is the name of the tablespace in which the new table is to be created. If not specified, is consulted, or diff --git a/doc/src/sgml/ref/create_tablespace.sgml b/doc/src/sgml/ref/create_tablespace.sgml index cf08408f96..2fed29ffaf 100644 --- a/doc/src/sgml/ref/create_tablespace.sgml +++ b/doc/src/sgml/ref/create_tablespace.sgml @@ -24,7 +24,7 @@ PostgreSQL documentation CREATE TABLESPACE tablespace_name [ OWNER { new_owner | CURRENT_USER | SESSION_USER } ] LOCATION 'directory' - [ WITH ( tablespace_option = value [, ... ] ) ] + [ WITH ( tablespace_option = value [, ... ] ) ] diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index 2496250bed..7fc481d9fc 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -26,14 +26,14 @@ PostgreSQL documentation -CREATE [ CONSTRAINT ] TRIGGER name { BEFORE | AFTER | INSTEAD OF } { event [ OR ... ] } - ON table_name +CREATE [ CONSTRAINT ] TRIGGER name { BEFORE | AFTER | INSTEAD OF } { event [ OR ... ] } + ON table_name [ FROM referenced_table_name ] [ NOT DEFERRABLE | [ DEFERRABLE ] [ INITIALLY IMMEDIATE | INITIALLY DEFERRED ] ] - [ REFERENCING { { OLD | NEW } TABLE [ AS ] transition_relation_name } [ ... ] ] + [ REFERENCING { { OLD | NEW } TABLE [ AS ] transition_relation_name } [ ... ] ] [ FOR [ EACH ] { ROW | STATEMENT } ] [ WHEN ( condition ) ] - EXECUTE PROCEDURE function_name ( arguments ) + EXECUTE PROCEDURE function_name ( arguments ) where event can be one of: @@ -283,7 +283,7 @@ UPDATE OF column_name1 [, column_name2 - referenced_table_name + referenced_table_name The (possibly schema-qualified) name of another table referenced by the @@ -333,7 +333,7 @@ UPDATE OF column_name1 [, column_name2 - transition_relation_name + transition_relation_name The (unqualified) name to be used within the trigger for this diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml index 7146c4a27b..312bd050bc 100644 --- a/doc/src/sgml/ref/create_type.sgml +++ b/doc/src/sgml/ref/create_type.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation CREATE TYPE name AS - ( [ attribute_name data_type [ COLLATE collation ] [, ... ] ] ) + ( [ attribute_name data_type [ COLLATE collation ] [, ... ] ] ) CREATE TYPE name AS ENUM ( [ 'label' [, ... ] ] ) diff --git a/doc/src/sgml/ref/create_user.sgml b/doc/src/sgml/ref/create_user.sgml index 8a596eec9f..480b6405e6 100644 --- a/doc/src/sgml/ref/create_user.sgml +++ b/doc/src/sgml/ref/create_user.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -CREATE USER name [ [ WITH ] option [ ... ] ] +CREATE USER name [ [ WITH ] option [ ... ] ] -where option can be: +where option can be: SUPERUSER | NOSUPERUSER | CREATEDB | NOCREATEDB @@ -32,15 +32,15 @@ CREATE USER name [ [ WITH ] connlimit - | [ ENCRYPTED ] PASSWORD 'password' - | VALID UNTIL 'timestamp' - | IN ROLE role_name [, ...] - | IN GROUP role_name [, ...] - | ROLE role_name [, ...] - | ADMIN role_name [, ...] - | USER role_name [, ...] - | SYSID uid + | CONNECTION LIMIT connlimit + | [ ENCRYPTED ] PASSWORD 'password' + | VALID UNTIL 'timestamp' + | IN ROLE role_name [, ...] + | IN GROUP role_name [, ...] + | ROLE role_name [, ...] + | ADMIN role_name [, ...] + | USER role_name [, ...] 
+ | SYSID uid diff --git a/doc/src/sgml/ref/create_user_mapping.sgml b/doc/src/sgml/ref/create_user_mapping.sgml index 1c44679a98..d6f29c9489 100644 --- a/doc/src/sgml/ref/create_user_mapping.sgml +++ b/doc/src/sgml/ref/create_user_mapping.sgml @@ -23,7 +23,7 @@ PostgreSQL documentation CREATE USER MAPPING [IF NOT EXISTS] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name - [ OPTIONS ( option 'value' [ , ... ] ) ] + [ OPTIONS ( option 'value' [ , ... ] ) ] @@ -86,7 +86,7 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na - OPTIONS ( option 'value' [, ... ] ) + OPTIONS ( option 'value' [, ... ] ) This clause specifies the options of the user mapping. The diff --git a/doc/src/sgml/ref/create_view.sgml b/doc/src/sgml/ref/create_view.sgml index 319335051b..695c759312 100644 --- a/doc/src/sgml/ref/create_view.sgml +++ b/doc/src/sgml/ref/create_view.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW name [ ( column_name [, ...] ) ] - [ WITH ( view_option_name [= view_option_value] [, ... ] ) ] - AS query +CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW name [ ( column_name [, ...] ) ] + [ WITH ( view_option_name [= view_option_value] [, ... ] ) ] + AS query [ WITH [ CASCADED | LOCAL ] CHECK OPTION ] @@ -118,7 +118,7 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR - WITH ( view_option_name [= view_option_value] [, ... ] ) + WITH ( view_option_name [= view_option_value] [, ... ] ) This clause specifies optional parameters for a view; the following diff --git a/doc/src/sgml/ref/delete.sgml b/doc/src/sgml/ref/delete.sgml index 20417a1391..8ced7de7be 100644 --- a/doc/src/sgml/ref/delete.sgml +++ b/doc/src/sgml/ref/delete.sgml @@ -22,9 +22,9 @@ PostgreSQL documentation [ WITH [ RECURSIVE ] with_query [, ...] ] -DELETE FROM [ ONLY ] table_name [ * ] [ [ AS ] alias ] - [ USING using_list ] - [ WHERE condition | WHERE CURRENT OF cursor_name ] +DELETE FROM [ ONLY ] table_name [ * ] [ [ AS ] alias ] + [ USING using_list ] + [ WHERE condition | WHERE CURRENT OF cursor_name ] [ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ] @@ -117,7 +117,7 @@ DELETE FROM [ ONLY ] table_name [ * - using_list + using_list A list of table expressions, allowing columns from other tables @@ -126,7 +126,7 @@ DELETE FROM [ ONLY ] table_name [ * linkend="sql-from" endterm="sql-from-title"> of a SELECT statement; for example, an alias for the table name can be specified. Do not repeat the target table - in the using_list, + in the using_list, unless you wish to set up a self-join. @@ -144,7 +144,7 @@ DELETE FROM [ ONLY ] table_name [ * - cursor_name + cursor_name The name of the cursor to use in a WHERE CURRENT OF @@ -161,12 +161,12 @@ DELETE FROM [ ONLY ] table_name [ * - output_expression + output_expression An expression to be computed and returned by the DELETE command after each row is deleted. The expression can use any - column names of the table named by table_name + column names of the table named by table_name or table(s) listed in USING. Write * to return all columns. @@ -174,7 +174,7 @@ DELETE FROM [ ONLY ] table_name [ * - output_name + output_name A name to use for a returned column. 
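For illustration, the using_list and RETURNING parameters of DELETE described above combine as in this sketch; both table names and columns are hypothetical:

    -- delete rows by joining against a second table,
    -- returning the rows that were removed
    DELETE FROM products AS p
        USING producers AS pr
        WHERE p.producer_id = pr.id
          AND pr.defunct
        RETURNING p.name;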
diff --git a/doc/src/sgml/ref/do.sgml b/doc/src/sgml/ref/do.sgml index ed5e588ee7..d4da32c34d 100644 --- a/doc/src/sgml/ref/do.sgml +++ b/doc/src/sgml/ref/do.sgml @@ -25,7 +25,7 @@ PostgreSQL documentation -DO [ LANGUAGE lang_name ] code +DO [ LANGUAGE lang_name ] code @@ -54,7 +54,7 @@ DO [ LANGUAGE lang_name ] - code + code The procedural language code to be executed. This must be specified @@ -65,7 +65,7 @@ DO [ LANGUAGE lang_name ] - lang_name + lang_name The name of the procedural language the code is written in. diff --git a/doc/src/sgml/ref/drop_database.sgml b/doc/src/sgml/ref/drop_database.sgml index 740aa31995..44436ad48d 100644 --- a/doc/src/sgml/ref/drop_database.sgml +++ b/doc/src/sgml/ref/drop_database.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP DATABASE [ IF EXISTS ] name +DROP DATABASE [ IF EXISTS ] name @@ -57,7 +57,7 @@ DROP DATABASE [ IF EXISTS ] name - name + name The name of the database to remove. diff --git a/doc/src/sgml/ref/drop_domain.sgml b/doc/src/sgml/ref/drop_domain.sgml index e14795e6a3..ba546165c2 100644 --- a/doc/src/sgml/ref/drop_domain.sgml +++ b/doc/src/sgml/ref/drop_domain.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP DOMAIN [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP DOMAIN [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -49,7 +49,7 @@ DROP DOMAIN [ IF EXISTS ] name [, . - name + name The name (optionally schema-qualified) of an existing domain. diff --git a/doc/src/sgml/ref/drop_event_trigger.sgml b/doc/src/sgml/ref/drop_event_trigger.sgml index 6e3ee22d7b..583048dc0f 100644 --- a/doc/src/sgml/ref/drop_event_trigger.sgml +++ b/doc/src/sgml/ref/drop_event_trigger.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ] +DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ] @@ -51,7 +51,7 @@ DROP EVENT TRIGGER [ IF EXISTS ] name - name + name The name of the event trigger to remove. diff --git a/doc/src/sgml/ref/drop_extension.sgml b/doc/src/sgml/ref/drop_extension.sgml index 7438a08bb3..ba52922013 100644 --- a/doc/src/sgml/ref/drop_extension.sgml +++ b/doc/src/sgml/ref/drop_extension.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -54,7 +54,7 @@ DROP EXTENSION [ IF EXISTS ] name [ - name + name The name of an installed extension. diff --git a/doc/src/sgml/ref/drop_foreign_table.sgml b/doc/src/sgml/ref/drop_foreign_table.sgml index 5a2b235d4e..173eadadd3 100644 --- a/doc/src/sgml/ref/drop_foreign_table.sgml +++ b/doc/src/sgml/ref/drop_foreign_table.sgml @@ -18,7 +18,7 @@ -DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -46,7 +46,7 @@ DROP FOREIGN TABLE [ IF EXISTS ] name - name + name The name (optionally schema-qualified) of the foreign table to drop. diff --git a/doc/src/sgml/ref/drop_group.sgml b/doc/src/sgml/ref/drop_group.sgml index e601ff4172..5987c5f760 100644 --- a/doc/src/sgml/ref/drop_group.sgml +++ b/doc/src/sgml/ref/drop_group.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP GROUP [ IF EXISTS ] name [, ...] +DROP GROUP [ IF EXISTS ] name [, ...] 
diff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml index 6fe108ded2..4c838fffff 100644 --- a/doc/src/sgml/ref/drop_index.sgml +++ b/doc/src/sgml/ref/drop_index.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -72,7 +72,7 @@ DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] name - name + name The name (optionally schema-qualified) of an index to remove. diff --git a/doc/src/sgml/ref/drop_language.sgml b/doc/src/sgml/ref/drop_language.sgml index f014a74d45..081bd5fe3e 100644 --- a/doc/src/sgml/ref/drop_language.sgml +++ b/doc/src/sgml/ref/drop_language.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP [ PROCEDURAL ] LANGUAGE [ IF EXISTS ] name [ CASCADE | RESTRICT ] +DROP [ PROCEDURAL ] LANGUAGE [ IF EXISTS ] name [ CASCADE | RESTRICT ] @@ -60,7 +60,7 @@ DROP [ PROCEDURAL ] LANGUAGE [ IF EXISTS ] name - name + name The name of an existing procedural language. For backward diff --git a/doc/src/sgml/ref/drop_materialized_view.sgml b/doc/src/sgml/ref/drop_materialized_view.sgml index 36ec33ceb0..a898a1fc0a 100644 --- a/doc/src/sgml/ref/drop_materialized_view.sgml +++ b/doc/src/sgml/ref/drop_materialized_view.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP MATERIALIZED VIEW [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP MATERIALIZED VIEW [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -50,7 +50,7 @@ DROP MATERIALIZED VIEW [ IF EXISTS ] name - name + name The name (optionally schema-qualified) of the materialized view to diff --git a/doc/src/sgml/ref/drop_opclass.sgml b/doc/src/sgml/ref/drop_opclass.sgml index 187a9a4d1f..423a211bca 100644 --- a/doc/src/sgml/ref/drop_opclass.sgml +++ b/doc/src/sgml/ref/drop_opclass.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP OPERATOR CLASS [ IF EXISTS ] name USING index_method [ CASCADE | RESTRICT ] +DROP OPERATOR CLASS [ IF EXISTS ] name USING index_method [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_operator.sgml b/doc/src/sgml/ref/drop_operator.sgml index fc82c3e0e3..5897c99a62 100644 --- a/doc/src/sgml/ref/drop_operator.sgml +++ b/doc/src/sgml/ref/drop_operator.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP OPERATOR [ IF EXISTS ] name ( { left_type | NONE } , { right_type | NONE } ) [, ...] [ CASCADE | RESTRICT ] +DROP OPERATOR [ IF EXISTS ] name ( { left_type | NONE } , { right_type | NONE } ) [, ...] [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_opfamily.sgml b/doc/src/sgml/ref/drop_opfamily.sgml index 53bce22883..a7b90f306c 100644 --- a/doc/src/sgml/ref/drop_opfamily.sgml +++ b/doc/src/sgml/ref/drop_opfamily.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP OPERATOR FAMILY [ IF EXISTS ] name USING index_method [ CASCADE | RESTRICT ] +DROP OPERATOR FAMILY [ IF EXISTS ] name USING index_method [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_owned.sgml b/doc/src/sgml/ref/drop_owned.sgml index 81694b88e5..0426373d2d 100644 --- a/doc/src/sgml/ref/drop_owned.sgml +++ b/doc/src/sgml/ref/drop_owned.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP OWNED BY { name | CURRENT_USER | SESSION_USER } [, ...] [ CASCADE | RESTRICT ] +DROP OWNED BY { name | CURRENT_USER | SESSION_USER } [, ...] 
[ CASCADE | RESTRICT ] @@ -42,7 +42,7 @@ DROP OWNED BY { name | CURRENT_USER - name + name The name of a role whose objects will be dropped, and whose diff --git a/doc/src/sgml/ref/drop_publication.sgml b/doc/src/sgml/ref/drop_publication.sgml index 8e45a43982..bf43db3dac 100644 --- a/doc/src/sgml/ref/drop_publication.sgml +++ b/doc/src/sgml/ref/drop_publication.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP PUBLICATION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP PUBLICATION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_role.sgml b/doc/src/sgml/ref/drop_role.sgml index 75b48f94f9..fcddfeb172 100644 --- a/doc/src/sgml/ref/drop_role.sgml +++ b/doc/src/sgml/ref/drop_role.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP ROLE [ IF EXISTS ] name [, ...] +DROP ROLE [ IF EXISTS ] name [, ...] @@ -68,7 +68,7 @@ DROP ROLE [ IF EXISTS ] name [, ... - name + name The name of the role to remove. diff --git a/doc/src/sgml/ref/drop_rule.sgml b/doc/src/sgml/ref/drop_rule.sgml index d4905a69c9..d3fdf55080 100644 --- a/doc/src/sgml/ref/drop_rule.sgml +++ b/doc/src/sgml/ref/drop_rule.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP RULE [ IF EXISTS ] name ON table_name [ CASCADE | RESTRICT ] +DROP RULE [ IF EXISTS ] name ON table_name [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_schema.sgml b/doc/src/sgml/ref/drop_schema.sgml index 5b1697fff2..fd1fcd7e03 100644 --- a/doc/src/sgml/ref/drop_schema.sgml +++ b/doc/src/sgml/ref/drop_schema.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP SCHEMA [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP SCHEMA [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -54,7 +54,7 @@ DROP SCHEMA [ IF EXISTS ] name [, . - name + name The name of a schema. diff --git a/doc/src/sgml/ref/drop_sequence.sgml b/doc/src/sgml/ref/drop_sequence.sgml index f0e1edc81c..9d827f0cb1 100644 --- a/doc/src/sgml/ref/drop_sequence.sgml +++ b/doc/src/sgml/ref/drop_sequence.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP SEQUENCE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP SEQUENCE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -49,7 +49,7 @@ DROP SEQUENCE [ IF EXISTS ] name [, - name + name The name (optionally schema-qualified) of a sequence. diff --git a/doc/src/sgml/ref/drop_statistics.sgml b/doc/src/sgml/ref/drop_statistics.sgml index 37fc402589..fd2087db6a 100644 --- a/doc/src/sgml/ref/drop_statistics.sgml +++ b/doc/src/sgml/ref/drop_statistics.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP STATISTICS [ IF EXISTS ] name [, ...] +DROP STATISTICS [ IF EXISTS ] name [, ...] @@ -51,7 +51,7 @@ DROP STATISTICS [ IF EXISTS ] name - name + name The name (optionally schema-qualified) of the statistics object to drop. diff --git a/doc/src/sgml/ref/drop_table.sgml b/doc/src/sgml/ref/drop_table.sgml index 94d28b06fb..ae96cf0657 100644 --- a/doc/src/sgml/ref/drop_table.sgml +++ b/doc/src/sgml/ref/drop_table.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -62,7 +62,7 @@ DROP TABLE [ IF EXISTS ] name [, .. - name + name The name (optionally schema-qualified) of the table to drop. 
diff --git a/doc/src/sgml/ref/drop_tablespace.sgml b/doc/src/sgml/ref/drop_tablespace.sgml index d0a05af2e1..ee40cc6b0c 100644 --- a/doc/src/sgml/ref/drop_tablespace.sgml +++ b/doc/src/sgml/ref/drop_tablespace.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP TABLESPACE [ IF EXISTS ] name +DROP TABLESPACE [ IF EXISTS ] name @@ -60,7 +60,7 @@ DROP TABLESPACE [ IF EXISTS ] name - name + name The name of a tablespace. diff --git a/doc/src/sgml/ref/drop_trigger.sgml b/doc/src/sgml/ref/drop_trigger.sgml index d400b8383f..d44bf138a6 100644 --- a/doc/src/sgml/ref/drop_trigger.sgml +++ b/doc/src/sgml/ref/drop_trigger.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP TRIGGER [ IF EXISTS ] name ON table_name [ CASCADE | RESTRICT ] +DROP TRIGGER [ IF EXISTS ] name ON table_name [ CASCADE | RESTRICT ] @@ -51,7 +51,7 @@ DROP TRIGGER [ IF EXISTS ] name ON - name + name The name of the trigger to remove. @@ -60,7 +60,7 @@ DROP TRIGGER [ IF EXISTS ] name ON - table_name + table_name The name (optionally schema-qualified) of the table for which diff --git a/doc/src/sgml/ref/drop_tsconfig.sgml b/doc/src/sgml/ref/drop_tsconfig.sgml index 0096e0092d..e4a1738bae 100644 --- a/doc/src/sgml/ref/drop_tsconfig.sgml +++ b/doc/src/sgml/ref/drop_tsconfig.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP TEXT SEARCH CONFIGURATION [ IF EXISTS ] name [ CASCADE | RESTRICT ] +DROP TEXT SEARCH CONFIGURATION [ IF EXISTS ] name [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_tsdictionary.sgml b/doc/src/sgml/ref/drop_tsdictionary.sgml index 803abf8cba..faa4b3a1e5 100644 --- a/doc/src/sgml/ref/drop_tsdictionary.sgml +++ b/doc/src/sgml/ref/drop_tsdictionary.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP TEXT SEARCH DICTIONARY [ IF EXISTS ] name [ CASCADE | RESTRICT ] +DROP TEXT SEARCH DICTIONARY [ IF EXISTS ] name [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_tsparser.sgml b/doc/src/sgml/ref/drop_tsparser.sgml index fa99720161..bc9dae17a5 100644 --- a/doc/src/sgml/ref/drop_tsparser.sgml +++ b/doc/src/sgml/ref/drop_tsparser.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP TEXT SEARCH PARSER [ IF EXISTS ] name [ CASCADE | RESTRICT ] +DROP TEXT SEARCH PARSER [ IF EXISTS ] name [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_tstemplate.sgml b/doc/src/sgml/ref/drop_tstemplate.sgml index 9d051eb619..98f5523e51 100644 --- a/doc/src/sgml/ref/drop_tstemplate.sgml +++ b/doc/src/sgml/ref/drop_tstemplate.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP TEXT SEARCH TEMPLATE [ IF EXISTS ] name [ CASCADE | RESTRICT ] +DROP TEXT SEARCH TEMPLATE [ IF EXISTS ] name [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/drop_type.sgml b/doc/src/sgml/ref/drop_type.sgml index 2c7b8fe9f6..4ec1b92f32 100644 --- a/doc/src/sgml/ref/drop_type.sgml +++ b/doc/src/sgml/ref/drop_type.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP TYPE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP TYPE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -49,7 +49,7 @@ DROP TYPE [ IF EXISTS ] name [, ... - name + name The name (optionally schema-qualified) of the data type to remove. diff --git a/doc/src/sgml/ref/drop_user.sgml b/doc/src/sgml/ref/drop_user.sgml index 38e5418d07..3cb90522da 100644 --- a/doc/src/sgml/ref/drop_user.sgml +++ b/doc/src/sgml/ref/drop_user.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP USER [ IF EXISTS ] name [, ...] +DROP USER [ IF EXISTS ] name [, ...] 
diff --git a/doc/src/sgml/ref/drop_view.sgml b/doc/src/sgml/ref/drop_view.sgml index 40f2356188..002d2c6dd6 100644 --- a/doc/src/sgml/ref/drop_view.sgml +++ b/doc/src/sgml/ref/drop_view.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -DROP VIEW [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] +DROP VIEW [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ] @@ -49,7 +49,7 @@ DROP VIEW [ IF EXISTS ] name [, ... - name + name The name (optionally schema-qualified) of the view to remove. diff --git a/doc/src/sgml/ref/execute.sgml b/doc/src/sgml/ref/execute.sgml index 76069c019e..6ab5e54fa7 100644 --- a/doc/src/sgml/ref/execute.sgml +++ b/doc/src/sgml/ref/execute.sgml @@ -26,7 +26,7 @@ PostgreSQL documentation -EXECUTE name [ ( parameter [, ...] ) ] +EXECUTE name [ ( parameter [, ...] ) ] @@ -61,7 +61,7 @@ EXECUTE name [ ( - name + name The name of the prepared statement to execute. @@ -70,7 +70,7 @@ EXECUTE name [ ( - parameter + parameter The actual value of a parameter to the prepared statement. This diff --git a/doc/src/sgml/ref/fetch.sgml b/doc/src/sgml/ref/fetch.sgml index 24c8c49156..7651dcd0f8 100644 --- a/doc/src/sgml/ref/fetch.sgml +++ b/doc/src/sgml/ref/fetch.sgml @@ -27,23 +27,23 @@ PostgreSQL documentation -FETCH [ direction [ FROM | IN ] ] cursor_name +FETCH [ direction [ FROM | IN ] ] cursor_name -where direction can be empty or one of: +where direction can be empty or one of: NEXT PRIOR FIRST LAST - ABSOLUTE count - RELATIVE count - count + ABSOLUTE count + RELATIVE count + count ALL FORWARD - FORWARD count + FORWARD count FORWARD ALL BACKWARD - BACKWARD count + BACKWARD count BACKWARD ALL @@ -82,7 +82,7 @@ FETCH [ direction [ FROM | IN ] ] < retrieve the indicated number of rows moving in the forward or backward direction, leaving the cursor positioned on the last-returned row (or after/before all rows, if the count exceeds the number of rows + class="parameter">count exceeds the number of rows available). @@ -109,9 +109,9 @@ FETCH [ direction [ FROM | IN ] ] < - direction + direction - direction defines + direction defines the fetch direction and number of rows to fetch. It can be one of the following: @@ -121,7 +121,7 @@ FETCH [ direction [ FROM | IN ] ] < Fetch the next row. This is the default if direction is omitted. + class="parameter">direction is omitted. @@ -154,17 +154,17 @@ FETCH [ direction [ FROM | IN ] ] < - ABSOLUTE count + ABSOLUTE count Fetch the count'th row of the query, + class="parameter">count'th row of the query, or the abs(count)'th row from + class="parameter">count)'th row from the end if count is negative. Position + class="parameter">count is negative. Position before first row or after last row if count is out of range; in + class="parameter">count is out of range; in particular, ABSOLUTE 0 positions before the first row. @@ -172,14 +172,14 @@ FETCH [ direction [ FROM | IN ] ] < - RELATIVE count + RELATIVE count Fetch the count'th succeeding row, or + class="parameter">count'th succeeding row, or the abs(count)'th prior - row if count is + class="parameter">count)'th prior + row if count is negative. RELATIVE 0 re-fetches the current row, if any. @@ -187,13 +187,13 @@ FETCH [ direction [ FROM | IN ] ] < - count + count Fetch the next count rows (same as + class="parameter">count rows (same as FORWARD count). + class="parameter">count). @@ -217,11 +217,11 @@ FETCH [ direction [ FROM | IN ] ] < - FORWARD count + FORWARD count Fetch the next count rows. + class="parameter">count rows. FORWARD 0 re-fetches the current row. 
@@ -246,11 +246,11 @@ FETCH [ direction [ FROM | IN ] ] < - BACKWARD count + BACKWARD count Fetch the prior count rows (scanning + class="parameter">count rows (scanning backwards). BACKWARD 0 re-fetches the current row. @@ -270,20 +270,20 @@ FETCH [ direction [ FROM | IN ] ] < - count + count - count is a + count is a possibly-signed integer constant, determining the location or number of rows to fetch. For FORWARD and BACKWARD cases, specifying a negative count is equivalent to changing + class="parameter">count is equivalent to changing the sense of FORWARD and BACKWARD. - cursor_name + cursor_name An open cursor's name. @@ -394,7 +394,7 @@ COMMIT WORK; The FETCH forms involving FORWARD and BACKWARD, as well as the forms FETCH count and FETCH + class="parameter">count and FETCH ALL, in which FORWARD is implicit, are PostgreSQL extensions. diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml index c63252ca24..8f385f6bb7 100644 --- a/doc/src/sgml/ref/grant.sgml +++ b/doc/src/sgml/ref/grant.sgml @@ -23,70 +23,70 @@ PostgreSQL documentation GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER } [, ...] | ALL [ PRIVILEGES ] } - ON { [ TABLE ] table_name [, ...] - | ALL TABLES IN SCHEMA schema_name [, ...] } - TO role_specification [, ...] [ WITH GRANT OPTION ] + ON { [ TABLE ] table_name [, ...] + | ALL TABLES IN SCHEMA schema_name [, ...] } + TO role_specification [, ...] [ WITH GRANT OPTION ] -GRANT { { SELECT | INSERT | UPDATE | REFERENCES } ( column_name [, ...] ) - [, ...] | ALL [ PRIVILEGES ] ( column_name [, ...] ) } - ON [ TABLE ] table_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] +GRANT { { SELECT | INSERT | UPDATE | REFERENCES } ( column_name [, ...] ) + [, ...] | ALL [ PRIVILEGES ] ( column_name [, ...] ) } + ON [ TABLE ] table_name [, ...] + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { { USAGE | SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] } - ON { SEQUENCE sequence_name [, ...] - | ALL SEQUENCES IN SCHEMA schema_name [, ...] } - TO role_specification [, ...] [ WITH GRANT OPTION ] + ON { SEQUENCE sequence_name [, ...] + | ALL SEQUENCES IN SCHEMA schema_name [, ...] } + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { { CREATE | CONNECT | TEMPORARY | TEMP } [, ...] | ALL [ PRIVILEGES ] } ON DATABASE database_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | ALL [ PRIVILEGES ] } ON DOMAIN domain_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | ALL [ PRIVILEGES ] } ON FOREIGN DATA WRAPPER fdw_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | ALL [ PRIVILEGES ] } ON FOREIGN SERVER server_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { EXECUTE | ALL [ PRIVILEGES ] } ON { FUNCTION function_name [ ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) ] [, ...] - | ALL FUNCTIONS IN SCHEMA schema_name [, ...] } - TO role_specification [, ...] [ WITH GRANT OPTION ] + | ALL FUNCTIONS IN SCHEMA schema_name [, ...] } + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | ALL [ PRIVILEGES ] } ON LANGUAGE lang_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + TO role_specification [, ...] 
[ WITH GRANT OPTION ] GRANT { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] } - ON LARGE OBJECT loid [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + ON LARGE OBJECT loid [, ...] + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] } ON SCHEMA schema_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { CREATE | ALL [ PRIVILEGES ] } ON TABLESPACE tablespace_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | ALL [ PRIVILEGES ] } ON TYPE type_name [, ...] - TO role_specification [, ...] [ WITH GRANT OPTION ] + TO role_specification [, ...] [ WITH GRANT OPTION ] -where role_specification can be: +where role_specification can be: - [ GROUP ] role_name + [ GROUP ] role_name | PUBLIC | CURRENT_USER | SESSION_USER -GRANT role_name [, ...] TO role_name [, ...] [ WITH ADMIN OPTION ] +GRANT role_name [, ...] TO role_name [, ...] [ WITH ADMIN OPTION ] diff --git a/doc/src/sgml/ref/import_foreign_schema.sgml b/doc/src/sgml/ref/import_foreign_schema.sgml index b73dee9439..f22893f137 100644 --- a/doc/src/sgml/ref/import_foreign_schema.sgml +++ b/doc/src/sgml/ref/import_foreign_schema.sgml @@ -21,11 +21,11 @@ PostgreSQL documentation -IMPORT FOREIGN SCHEMA remote_schema - [ { LIMIT TO | EXCEPT } ( table_name [, ...] ) ] - FROM SERVER server_name - INTO local_schema - [ OPTIONS ( option 'value' [, ... ] ) ] +IMPORT FOREIGN SCHEMA remote_schema + [ { LIMIT TO | EXCEPT } ( table_name [, ...] ) ] + FROM SERVER server_name + INTO local_schema + [ OPTIONS ( option 'value' [, ... ] ) ] @@ -59,7 +59,7 @@ IMPORT FOREIGN SCHEMA remote_schema - remote_schema + remote_schema The remote schema to import from. The specific meaning of a remote schema @@ -69,7 +69,7 @@ IMPORT FOREIGN SCHEMA remote_schema - LIMIT TO ( table_name [, ...] ) + LIMIT TO ( table_name [, ...] ) Import only foreign tables matching one of the given table names. @@ -79,7 +79,7 @@ IMPORT FOREIGN SCHEMA remote_schema - EXCEPT ( table_name [, ...] ) + EXCEPT ( table_name [, ...] ) Exclude specified foreign tables from the import. All tables @@ -90,7 +90,7 @@ IMPORT FOREIGN SCHEMA remote_schema - server_name + server_name The foreign server to import from. @@ -99,7 +99,7 @@ IMPORT FOREIGN SCHEMA remote_schema - local_schema + local_schema The schema in which the imported foreign tables will be created. @@ -108,7 +108,7 @@ IMPORT FOREIGN SCHEMA remote_schema - OPTIONS ( option 'value' [, ...] ) + OPTIONS ( option 'value' [, ...] ) Options to be used during the import. diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml index cc9b94617e..ce037e5902 100644 --- a/doc/src/sgml/ref/insert.sgml +++ b/doc/src/sgml/ref/insert.sgml @@ -22,25 +22,25 @@ PostgreSQL documentation [ WITH [ RECURSIVE ] with_query [, ...] ] -INSERT INTO table_name [ AS alias ] [ ( column_name [, ...] ) ] +INSERT INTO table_name [ AS alias ] [ ( column_name [, ...] ) ] [ OVERRIDING { SYSTEM | USER} VALUE ] - { DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query } + { DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query } [ ON CONFLICT [ conflict_target ] conflict_action ] [ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ] where conflict_target can be one of: - ( { index_column_name | ( index_expression ) } [ COLLATE collation ] [ opclass ] [, ...] 
) [ WHERE index_predicate ] - ON CONSTRAINT constraint_name + ( { index_column_name | ( index_expression ) } [ COLLATE collation ] [ opclass ] [, ...] ) [ WHERE index_predicate ] + ON CONSTRAINT constraint_name and conflict_action is one of: DO NOTHING - DO UPDATE SET { column_name = { expression | DEFAULT } | - ( column_name [, ...] ) = [ ROW ] ( { expression | DEFAULT } [, ...] ) | - ( column_name [, ...] ) = ( sub-SELECT ) + DO UPDATE SET { column_name = { expression | DEFAULT } | + ( column_name [, ...] ) = [ ROW ] ( { expression | DEFAULT } [, ...] ) | + ( column_name [, ...] ) = ( sub-SELECT ) } [, ...] - [ WHERE condition ] + [ WHERE condition ] @@ -93,7 +93,7 @@ INSERT INTO table_name [ AS ON CONFLICT DO UPDATE ... WHERE clause condition was not satisfied, the + class="parameter">condition was not satisfied, the row will not be returned. @@ -119,7 +119,7 @@ INSERT INTO table_name [ AS RETURNING clause requires SELECT privilege on all columns mentioned in RETURNING. If you use the query clause to insert rows from a + class="parameter">query clause to insert rows from a query, you of course need to have SELECT privilege on any table or column used in the query. @@ -160,7 +160,7 @@ INSERT INTO table_name [ AS - table_name + table_name The name (optionally schema-qualified) of an existing table. @@ -173,7 +173,7 @@ INSERT INTO table_name [ AS A substitute name for table_name. When an alias is + class="parameter">table_name. When an alias is provided, it completely hides the actual name of the table. This is particularly useful when ON CONFLICT DO UPDATE targets a table named excluded, since that will otherwise @@ -185,11 +185,11 @@ INSERT INTO table_name [ AS - column_name + column_name The name of a column in the table named by table_name. The column name + class="parameter">table_name. The column name can be qualified with a subfield name or array subscript, if needed. (Inserting into only some fields of a composite column leaves the other fields null.) When referencing a @@ -246,7 +246,7 @@ INSERT INTO table_name [ AS - expression + expression An expression or value to assign to the corresponding column. @@ -265,7 +265,7 @@ INSERT INTO table_name [ AS - query + query A query (SELECT statement) that supplies the @@ -277,14 +277,14 @@ INSERT INTO table_name [ AS - output_expression + output_expression An expression to be computed and returned by the INSERT command after each row is inserted or updated. The expression can use any column names of the table named by table_name. Write + class="parameter">table_name. Write * to return all columns of the inserted or updated row(s). @@ -292,7 +292,7 @@ INSERT INTO table_name [ AS - output_name + output_name A name to use for a returned column. @@ -328,14 +328,14 @@ INSERT INTO table_name [ AS conflict_target can perform unique index inference. When performing inference, it consists of one or more index_column_name columns and/or - index_expression - expressions, and an optional index_predicate. All table_name unique indexes that, + class="parameter">index_column_name columns and/or + index_expression + expressions, and an optional index_predicate. All table_name unique indexes that, without regard to order, contain exactly the conflict_target-specified columns/expressions are inferred (chosen) as arbiter indexes. If - an index_predicate is + an index_predicate is specified, it must, as a further requirement for inference, satisfy arbiter indexes. 
Note that this means a non-partial unique index (a unique index without a predicate) will be inferred @@ -400,42 +400,42 @@ INSERT INTO table_name [ AS - index_column_name + index_column_name The name of a table_name column. Used to + class="parameter">table_name column. Used to infer arbiter indexes. Follows CREATE INDEX format. SELECT privilege on - index_column_name + index_column_name is required. - index_expression + index_expression Similar to index_column_name, but used to + class="parameter">index_column_name, but used to infer expressions on table_name columns appearing + class="parameter">table_name columns appearing within index definitions (not simple columns). Follows CREATE INDEX format. SELECT privilege on any column appearing within index_expression is required. + class="parameter">index_expression is required. - collation + collation When specified, mandates that corresponding index_column_name or - index_expression + class="parameter">index_column_name or + index_expression use a particular collation in order to be matched during inference. Typically this is omitted, as collations usually do not affect whether or not a constraint violation occurs. @@ -445,12 +445,12 @@ INSERT INTO table_name [ AS - opclass + opclass When specified, mandates that corresponding index_column_name or - index_expression + class="parameter">index_column_name or + index_expression use particular operator class in order to be matched during inference. Typically this is omitted, as the equality semantics are often equivalent @@ -463,7 +463,7 @@ INSERT INTO table_name [ AS - index_predicate + index_predicate Used to allow inference of partial unique indexes. Any @@ -471,13 +471,13 @@ INSERT INTO table_name [ AS CREATE INDEX format. SELECT privilege on any column appearing within index_predicate is required. + class="parameter">index_predicate is required. - constraint_name + constraint_name Explicitly specifies an arbiter @@ -488,7 +488,7 @@ INSERT INTO table_name [ AS - condition + condition An expression that returns a value of type @@ -522,7 +522,7 @@ INSERT INTO table_name [ AS It is often preferable to use unique index inference rather than naming a constraint directly using ON CONFLICT ON - CONSTRAINT + CONSTRAINT constraint_name. Inference will continue to work correctly when the underlying index is replaced by another more or less equivalent index in an overlapping way, for example when @@ -753,7 +753,7 @@ INSERT INTO distributors (did, dname) VALUES (10, 'Conrad International') Possible limitations of the query clause are documented under + class="parameter">query clause are documented under . diff --git a/doc/src/sgml/ref/listen.sgml b/doc/src/sgml/ref/listen.sgml index 9cd53b02bb..76215716d6 100644 --- a/doc/src/sgml/ref/listen.sgml +++ b/doc/src/sgml/ref/listen.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -LISTEN channel +LISTEN channel @@ -31,14 +31,14 @@ LISTEN channel LISTEN registers the current session as a listener on the notification channel named channel. + class="parameter">channel. If the current session is already registered as a listener for this notification channel, nothing is done. 
Whenever the command NOTIFY channel is invoked, either + class="parameter">channel is invoked, either by this session or another one connected to the same database, all the sessions currently listening on that notification channel are notified, and each will in turn notify its connected client @@ -77,7 +77,7 @@ LISTEN channel - channel + channel Name of a notification channel (any identifier). diff --git a/doc/src/sgml/ref/load.sgml b/doc/src/sgml/ref/load.sgml index 6e9182fa3b..2be28e6d15 100644 --- a/doc/src/sgml/ref/load.sgml +++ b/doc/src/sgml/ref/load.sgml @@ -20,7 +20,7 @@ doc/src/sgml/ref/load.sgml -LOAD 'filename' +LOAD 'filename' @@ -53,7 +53,7 @@ LOAD 'filename' Non-superusers can only apply LOAD to library files located in $libdir/plugins/ — the specified - filename must begin + filename must begin with exactly that string. (It is the database administrator's responsibility to ensure that only safe libraries are installed there.) diff --git a/doc/src/sgml/ref/lock.sgml b/doc/src/sgml/ref/lock.sgml index b946eab303..f1dbb8e65a 100644 --- a/doc/src/sgml/ref/lock.sgml +++ b/doc/src/sgml/ref/lock.sgml @@ -21,9 +21,9 @@ PostgreSQL documentation -LOCK [ TABLE ] [ ONLY ] name [ * ] [, ...] [ IN lockmode MODE ] [ NOWAIT ] +LOCK [ TABLE ] [ ONLY ] name [ * ] [, ...] [ IN lockmode MODE ] [ NOWAIT ] -where lockmode is one of: +where lockmode is one of: ACCESS SHARE | ROW SHARE | ROW EXCLUSIVE | SHARE UPDATE EXCLUSIVE | SHARE | SHARE ROW EXCLUSIVE | EXCLUSIVE | ACCESS EXCLUSIVE @@ -59,7 +59,7 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] committed data, because SHARE lock mode conflicts with the ROW EXCLUSIVE lock acquired by writers, and your LOCK TABLE name IN SHARE MODE + class="parameter">name IN SHARE MODE statement will wait until any concurrent holders of ROW EXCLUSIVE mode locks commit or roll back. Thus, once you obtain the lock, there are no uncommitted writes outstanding; @@ -107,7 +107,7 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] - name + name The name (optionally schema-qualified) of an existing table to diff --git a/doc/src/sgml/ref/move.sgml b/doc/src/sgml/ref/move.sgml index ed64f23068..6b809b961d 100644 --- a/doc/src/sgml/ref/move.sgml +++ b/doc/src/sgml/ref/move.sgml @@ -27,23 +27,23 @@ PostgreSQL documentation -MOVE [ direction [ FROM | IN ] ] cursor_name +MOVE [ direction [ FROM | IN ] ] cursor_name -where direction can be empty or one of: +where direction can be empty or one of: NEXT PRIOR FIRST LAST - ABSOLUTE count - RELATIVE count - count + ABSOLUTE count + RELATIVE count + count ALL FORWARD - FORWARD count + FORWARD count FORWARD ALL BACKWARD - BACKWARD count + BACKWARD count BACKWARD ALL diff --git a/doc/src/sgml/ref/notify.sgml b/doc/src/sgml/ref/notify.sgml index 3389aa055c..09debd6685 100644 --- a/doc/src/sgml/ref/notify.sgml +++ b/doc/src/sgml/ref/notify.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -NOTIFY channel [ , payload ] +NOTIFY channel [ , payload ] @@ -128,7 +128,7 @@ NOTIFY channel [ , - channel + channel Name of the notification channel to be signaled (any identifier). 
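For illustration, the LISTEN/NOTIFY cycle described above looks like this in practice; the channel name and payload are hypothetical:

    LISTEN orders_channel;                               -- session A registers as a listener
    NOTIFY orders_channel, 'order 1234 shipped';         -- session B signals all listeners
    SELECT pg_notify('orders_channel', 'order 1234');    -- equivalent function form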
@@ -136,7 +136,7 @@ NOTIFY channel [ , - payload + payload The payload string to be communicated along with the diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml index 5180103526..a628e79310 100644 --- a/doc/src/sgml/ref/pg_restore.sgml +++ b/doc/src/sgml/ref/pg_restore.sgml @@ -287,12 +287,12 @@ Restore only those archive elements that are listed in list-file, and restore them in the + class="parameter">list-file, and restore them in the order they appear in the file. Note that if filtering switches such as - list-file is normally created by + list-file is normally created by editing the output of a previous - new_table + new_table The name (optionally schema-qualified) of the table to be created. diff --git a/doc/src/sgml/ref/set.sgml b/doc/src/sgml/ref/set.sgml index 4ebb6a627b..89c0fad195 100644 --- a/doc/src/sgml/ref/set.sgml +++ b/doc/src/sgml/ref/set.sgml @@ -21,8 +21,8 @@ PostgreSQL documentation -SET [ SESSION | LOCAL ] configuration_parameter { TO | = } { value | 'value' | DEFAULT } -SET [ SESSION | LOCAL ] TIME ZONE { timezone | LOCAL | DEFAULT } +SET [ SESSION | LOCAL ] configuration_parameter { TO | = } { value | 'value' | DEFAULT } +SET [ SESSION | LOCAL ] TIME ZONE { timezone | LOCAL | DEFAULT } @@ -118,7 +118,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone - configuration_parameter + configuration_parameter Name of a settable run-time parameter. Available parameters are @@ -128,7 +128,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone - value + value New value of parameter. Values can be specified as string diff --git a/doc/src/sgml/ref/show.sgml b/doc/src/sgml/ref/show.sgml index 46bb239baf..7e198e6df8 100644 --- a/doc/src/sgml/ref/show.sgml +++ b/doc/src/sgml/ref/show.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -SHOW name +SHOW name SHOW ALL @@ -47,7 +47,7 @@ SHOW ALL - name + name The name of a run-time parameter. Available parameters are diff --git a/doc/src/sgml/ref/truncate.sgml b/doc/src/sgml/ref/truncate.sgml index e9c8a03a63..fef3315599 100644 --- a/doc/src/sgml/ref/truncate.sgml +++ b/doc/src/sgml/ref/truncate.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -TRUNCATE [ TABLE ] [ ONLY ] name [ * ] [, ... ] +TRUNCATE [ TABLE ] [ ONLY ] name [ * ] [, ... ] [ RESTART IDENTITY | CONTINUE IDENTITY ] [ CASCADE | RESTRICT ] @@ -44,7 +44,7 @@ TRUNCATE [ TABLE ] [ ONLY ] name [ - name + name The name (optionally schema-qualified) of a table to truncate. diff --git a/doc/src/sgml/ref/unlisten.sgml b/doc/src/sgml/ref/unlisten.sgml index f7c3c47e2f..622e1cf154 100644 --- a/doc/src/sgml/ref/unlisten.sgml +++ b/doc/src/sgml/ref/unlisten.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -UNLISTEN { channel | * } +UNLISTEN { channel | * } @@ -34,7 +34,7 @@ UNLISTEN { channel | * } UNLISTEN cancels any existing registration of the current PostgreSQL session as a listener on the notification channel named channel. The special wildcard + class="parameter">channel. The special wildcard * cancels all listener registrations for the current session. @@ -52,7 +52,7 @@ UNLISTEN { channel | * } - channel + channel Name of a notification channel (any identifier). diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml index 8a1619fb68..9dcbbd0e28 100644 --- a/doc/src/sgml/ref/update.sgml +++ b/doc/src/sgml/ref/update.sgml @@ -22,13 +22,13 @@ PostgreSQL documentation [ WITH [ RECURSIVE ] with_query [, ...] ] -UPDATE [ ONLY ] table_name [ * ] [ [ AS ] alias ] - SET { column_name = { expression | DEFAULT } | - ( column_name [, ...] 
) = [ ROW ] ( { expression | DEFAULT } [, ...] ) | - ( column_name [, ...] ) = ( sub-SELECT ) +UPDATE [ ONLY ] table_name [ * ] [ [ AS ] alias ] + SET { column_name = { expression | DEFAULT } | + ( column_name [, ...] ) = [ ROW ] ( { expression | DEFAULT } [, ...] ) | + ( column_name [, ...] ) = ( sub-SELECT ) } [, ...] - [ FROM from_list ] - [ WHERE condition | WHERE CURRENT OF cursor_name ] + [ FROM from_list ] + [ WHERE condition | WHERE CURRENT OF cursor_name ] [ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ] @@ -88,7 +88,7 @@ UPDATE [ ONLY ] table_name [ * ] [ - table_name + table_name The name (optionally schema-qualified) of the table to update. @@ -115,11 +115,11 @@ UPDATE [ ONLY ] table_name [ * ] [ - column_name + column_name The name of a column in the table named by table_name. + class="parameter">table_name. The column name can be qualified with a subfield name or array subscript, if needed. Do not include the table's name in the specification of a target column — for example, @@ -129,7 +129,7 @@ UPDATE [ ONLY ] table_name [ * ] [ - expression + expression An expression to assign to the column. The expression can use the @@ -149,7 +149,7 @@ UPDATE [ ONLY ] table_name [ * ] [ - sub-SELECT + sub-SELECT A SELECT sub-query that produces as many output columns @@ -164,7 +164,7 @@ UPDATE [ ONLY ] table_name [ * ] [ - from_list + from_list A list of table expressions, allowing columns from other tables @@ -180,7 +180,7 @@ UPDATE [ ONLY ] table_name [ * ] [ - condition + condition An expression that returns a value of type boolean. @@ -191,7 +191,7 @@ UPDATE [ ONLY ] table_name [ * ] [ - cursor_name + cursor_name The name of the cursor to use in a WHERE CURRENT OF @@ -208,12 +208,12 @@ UPDATE [ ONLY ] table_name [ * ] [ - output_expression + output_expression An expression to be computed and returned by the UPDATE command after each row is updated. The expression can use any - column names of the table named by table_name + column names of the table named by table_name or table(s) listed in FROM. Write * to return all columns. @@ -221,7 +221,7 @@ UPDATE [ ONLY ] table_name [ * ] [ - output_name + output_name A name to use for a returned column. diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml index e712226c61..f5bc87e290 100644 --- a/doc/src/sgml/ref/vacuum.sgml +++ b/doc/src/sgml/ref/vacuum.sgml @@ -21,10 +21,10 @@ PostgreSQL documentation -VACUUM [ ( option [, ...] ) ] [ table_and_columns [, ...] ] -VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns [, ...] ] +VACUUM [ ( option [, ...] ) ] [ table_and_columns [, ...] ] +VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns [, ...] ] -where option can be one of: +where option can be one of: FULL FREEZE @@ -32,9 +32,9 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns is: +and table_and_columns is: - table_name [ ( column_name [, ...] ) ] + table_name [ ( column_name [, ...] ) ] @@ -51,7 +51,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns + Without a table_and_columns list, VACUUM processes every table and materialized view in the current database that the current user has permission to vacuum. With a list, VACUUM processes only those table(s). 
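For illustration, the option and table_and_columns grammar of VACUUM shown above allows forms like these; the table and column names are hypothetical:

    VACUUM;                                -- whole database, default options
    VACUUM (VERBOSE, ANALYZE) orders;      -- one table, also updating statistics
    VACUUM ANALYZE orders (order_date);    -- a column list requires ANALYZE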
@@ -161,7 +161,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_name + table_name The name (optionally schema-qualified) of a specific table or @@ -172,7 +172,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ column_name + column_name The name of a specific column to analyze. Defaults to all columns. diff --git a/doc/src/sgml/ref/values.sgml b/doc/src/sgml/ref/values.sgml index 9b0d8fa4a1..9baeade551 100644 --- a/doc/src/sgml/ref/values.sgml +++ b/doc/src/sgml/ref/values.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -VALUES ( expression [, ...] ) [, ...] +VALUES ( expression [, ...] ) [, ...] [ ORDER BY sort_expression [ ASC | DESC | USING operator ] [, ...] ] [ LIMIT { count | ALL } ] [ OFFSET start [ ROW | ROWS ] ] @@ -63,13 +63,13 @@ VALUES ( expression [, ...] ) [, .. - expression + expression A constant or expression to compute and insert at the indicated place in the resulting table (set of rows). In a VALUES list appearing at the top level of an INSERT, an - expression can be replaced + expression can be replaced by DEFAULT to indicate that the destination column's default value should be inserted. DEFAULT cannot be used when VALUES appears in other contexts. diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml index fe630a66b3..d5bde5c6c0 100644 --- a/doc/src/sgml/textsearch.sgml +++ b/doc/src/sgml/textsearch.sgml @@ -701,7 +701,7 @@ LIMIT 10; -to_tsvector( config regconfig, document text) returns tsvector +to_tsvector( config regconfig, document text) returns tsvector @@ -811,7 +811,7 @@ UPDATE tt SET ti = -to_tsquery( config regconfig, querytext text) returns tsquery +to_tsquery( config regconfig, querytext text) returns tsquery @@ -884,7 +884,7 @@ SELECT to_tsquery('''supernovae stars'' & !crab'); -plainto_tsquery( config regconfig, querytext text) returns tsquery +plainto_tsquery( config regconfig, querytext text) returns tsquery @@ -924,7 +924,7 @@ SELECT plainto_tsquery('english', 'The Fat & Rats:C'); -phraseto_tsquery( config regconfig, querytext text) returns tsquery +phraseto_tsquery( config regconfig, querytext text) returns tsquery @@ -994,7 +994,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); ts_rank - ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 + ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 @@ -1011,7 +1011,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); ts_rank_cd - ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 + ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 @@ -1043,7 +1043,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); For both these functions, - the optional weights + the optional weights argument offers the ability to weigh word instances more or less heavily depending on how they are labeled. 
The weight arrays specify how heavily to weigh each category of word, in the order: @@ -1052,7 +1052,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); {D-weight, C-weight, B-weight, A-weight} - If no weights are provided, + If no weights are provided, then these defaults are used: @@ -1199,7 +1199,7 @@ LIMIT 10; -ts_headline( config regconfig, document text, query tsquery , options text ) returns text +ts_headline( config regconfig, document text, query tsquery , options text ) returns text @@ -1388,7 +1388,7 @@ query.', setweight - setweight(vector tsvector, weight "char") returns tsvector + setweight(vector tsvector, weight "char") returns tsvector @@ -1416,7 +1416,7 @@ query.', length(tsvector) - length(vector tsvector) returns integer + length(vector tsvector) returns integer @@ -1433,7 +1433,7 @@ query.', strip - strip(vector tsvector) returns tsvector + strip(vector tsvector) returns tsvector @@ -1546,7 +1546,7 @@ SELECT to_tsquery('fat') <-> to_tsquery('cat | rat'); tsquery_phrase - tsquery_phrase(query1 tsquery, query2 tsquery [, distance integer ]) returns tsquery + tsquery_phrase(query1 tsquery, query2 tsquery [, distance integer ]) returns tsquery @@ -1575,7 +1575,7 @@ SELECT tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10); numnode - numnode(query tsquery) returns integer + numnode(query tsquery) returns integer @@ -1609,7 +1609,7 @@ SELECT numnode('foo & bar'::tsquery); querytree - querytree(query tsquery) returns text + querytree(query tsquery) returns text @@ -1662,16 +1662,16 @@ SELECT querytree(to_tsquery('!defined')); - ts_rewrite (query tsquery, target tsquery, substitute tsquery) returns tsquery + ts_rewrite (query tsquery, target tsquery, substitute tsquery) returns tsquery This form of ts_rewrite simply applies a single - rewrite rule: target - is replaced by substitute + rewrite rule: target + is replaced by substitute wherever it appears in query. For example: + class="parameter">query. For example: SELECT ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'c'::tsquery); @@ -1686,7 +1686,7 @@ SELECT ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'c'::tsquery); - ts_rewrite (query tsquery, select text) returns tsquery + ts_rewrite (query tsquery, select text) returns tsquery @@ -1785,8 +1785,8 @@ SELECT ts_rewrite('a & b'::tsquery, -tsvector_update_trigger(tsvector_column_name, config_name, text_column_name , ... ) -tsvector_update_trigger_column(tsvector_column_name, config_column_name, text_column_name , ... ) +tsvector_update_trigger(tsvector_column_name, config_name, text_column_name , ... ) +tsvector_update_trigger_column(tsvector_column_name, config_column_name, text_column_name , ... 
) @@ -1886,9 +1886,9 @@ CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE -ts_stat(sqlquery text, weights text, - OUT word text, OUT ndoc integer, - OUT nentry integer) returns setof record +ts_stat(sqlquery text, weights text, + OUT word text, OUT ndoc integer, + OUT nentry integer) returns setof record @@ -3177,22 +3177,22 @@ SHOW default_text_search_config; -ts_debug( config regconfig, document text, - OUT alias text, - OUT description text, - OUT token text, - OUT dictionaries regdictionary[], - OUT dictionary regdictionary, - OUT lexemes text[]) +ts_debug( config regconfig, document text, + OUT alias text, + OUT description text, + OUT token text, + OUT dictionaries regdictionary[], + OUT dictionary regdictionary, + OUT lexemes text[]) returns setof record ts_debug displays information about every token of - document as produced by the + document as produced by the parser and processed by the configured dictionaries. It uses the configuration specified by config, + class="parameter">config, or default_text_search_config if that argument is omitted. @@ -3360,10 +3360,10 @@ FROM ts_debug('public.english','The Brightest supernovaes'); -ts_parse(parser_name text, document text, - OUT tokid integer, OUT token text) returns setof record -ts_parse(parser_oid oid, document text, - OUT tokid integer, OUT token text) returns setof record +ts_parse(parser_name text, document text, + OUT tokid integer, OUT token text) returns setof record +ts_parse(parser_oid oid, document text, + OUT tokid integer, OUT token text) returns setof record @@ -3391,10 +3391,10 @@ SELECT * FROM ts_parse('default', '123 - a number'); -ts_token_type(parser_name text, OUT tokid integer, - OUT alias text, OUT description text) returns setof record -ts_token_type(parser_oid oid, OUT tokid integer, - OUT alias text, OUT description text) returns setof record +ts_token_type(parser_name text, OUT tokid integer, + OUT alias text, OUT description text) returns setof record +ts_token_type(parser_oid oid, OUT tokid integer, + OUT alias text, OUT description text) returns setof record @@ -3449,7 +3449,7 @@ SELECT * FROM ts_token_type('default'); -ts_lexize(dict regdictionary, token text) returns text[] +ts_lexize(dict regdictionary, token text) returns text[] diff --git a/doc/src/sgml/unaccent.sgml b/doc/src/sgml/unaccent.sgml index 2b127e6736..d5cf98f6c1 100644 --- a/doc/src/sgml/unaccent.sgml +++ b/doc/src/sgml/unaccent.sgml @@ -174,11 +174,11 @@ mydb=# select ts_headline('fr','Hôtel de la Mer',to_tsquery('fr','Hotels') -unaccent(dictionary, string) returns text +unaccent(dictionary, string) returns text - If the dictionary argument is + If the dictionary argument is omitted, unaccent is assumed. From fa5e119dc71ada8d023deadcb36dbfae328f8902 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 10 Oct 2017 12:51:09 -0400 Subject: [PATCH 0355/1087] Add missing clean step to src/test/modules/brin/Makefile. I noticed the tmp_check subdirectory wasn't getting cleaned up after a check-world run. Apparently pgxs.mk will only do this for you if you've defined REGRESS. The only other src/test/modules Makefile that does not set that is snapshot_too_old, and it does it like this. 
---
 src/test/modules/brin/Makefile | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/test/modules/brin/Makefile b/src/test/modules/brin/Makefile
index 18c5cafd5e..912dca8009 100644
--- a/src/test/modules/brin/Makefile
+++ b/src/test/modules/brin/Makefile
@@ -1,6 +1,9 @@
 # src/test/modules/brin/Makefile

-EXTRA_CLEAN = ./isolation_output
+# Note: because we don't tell the Makefile there are any regression tests,
+# we have to clean those result files explicitly
+EXTRA_CLEAN = $(pg_regress_clean_files) ./isolation_output
+
 EXTRA_INSTALL=contrib/pageinspect

 ISOLATIONCHECKS=summarization-and-inprogress-insertion

From fffd651e83ccbd6191a76be6ec7c6b1b27888fde Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Tue, 10 Oct 2017 14:42:16 -0700
Subject: [PATCH 0356/1087] Rewrite strnlen replacement implementation from
 8a241792f96.

The previous placement of the fallback implementation in libpgcommon
was problematic, because libpgport functions need strnlen
functionality.

Move the replacement into libpgport. Provide strnlen() under its POSIX
name, instead of pg_strnlen(). Fix a stupid configure bug that executed
the test only when compiled with threading support.

Author: Andres Freund
Discussion: https://postgr.es/m/E1e1gR2-0005fB-SI@gemulon.postgresql.org
---
 configure                     | 25 ++++++++++++++++++++++++-
 configure.in                  |  6 +++---
 src/backend/utils/mmgr/mcxt.c |  3 +--
 src/common/string.c           | 20 --------------------
 src/include/common/string.h   | 15 ---------------
 src/include/pg_config.h.in    |  4 ++++
 src/include/pg_config.h.win32 | 10 +++++++---
 src/include/port.h            |  4 ++++
 src/port/snprintf.c           |  4 +---
 src/port/strnlen.c            | 33 +++++++++++++++++++++++++++++++++
 10 files changed, 77 insertions(+), 47 deletions(-)
 create mode 100644 src/port/strnlen.c

diff --git a/configure b/configure
index a1283c0500..b0582657bf 100755
--- a/configure
+++ b/configure
@@ -8777,7 +8777,7 @@ fi

-for ac_func in strerror_r getpwuid_r gethostbyname_r strnlen
+for ac_func in strerror_r getpwuid_r gethostbyname_r
 do :
   as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
 ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
@@ -13161,6 +13161,16 @@ fi
 cat >>confdefs.h <<_ACEOF
 #define HAVE_DECL_STRLCPY $ac_have_decl
 _ACEOF
+ac_fn_c_check_decl "$LINENO" "strnlenfrak" "ac_cv_have_decl_strnlen" "$ac_includes_default"
+if test "x$ac_cv_have_decl_strnlen" = xyes; then :
+  ac_have_decl=1
+else
+  ac_have_decl=0
+fi
+
+cat >>confdefs.h <<_ACEOF
+#define HAVE_DECL_STRNLEN $ac_have_decl
+_ACEOF

 # This is probably only present on macOS, but may as well check always
 ac_fn_c_check_decl "$LINENO" "F_FULLFSYNC" "ac_cv_have_decl_F_FULLFSYNC" "#include <fcntl.h>
@@ -13528,6 +13538,19 @@ esac

 fi

+ac_fn_c_check_func "$LINENO" "strnlenfrak" "ac_cv_func_strnlen"
+if test "x$ac_cv_func_strnlen" = xyes; then :
+  $as_echo "#define HAVE_STRNLEN 1" >>confdefs.h
+
+else
+  case " $LIBOBJS " in
+  *" strnlen.$ac_objext "* ) ;;
+  *) LIBOBJS="$LIBOBJS strnlen.$ac_objext"
+ ;;
+esac
+
+fi
+

 case $host_os in

diff --git a/configure.in b/configure.in
index e1381b4ead..4548db0dd3 100644
--- a/configure.in
+++ b/configure.in
@@ -961,7 +961,7 @@ LIBS="$LIBS $PTHREAD_LIBS"
 AC_CHECK_HEADER(pthread.h, [], [AC_MSG_ERROR([
 pthread.h not found; use --disable-thread-safety to disable thread safety])])

-AC_CHECK_FUNCS([strerror_r getpwuid_r gethostbyname_r strnlen])
+AC_CHECK_FUNCS([strerror_r getpwuid_r gethostbyname_r])

 # Do test here with the proper thread flags
 PGAC_FUNC_STRERROR_R_INT
@@ -1422,7 +1422,7 @@ AC_CHECK_DECLS(posix_fadvise, [], [], [#include <fcntl.h>])
 fi
AC_CHECK_DECLS(fdatasync, [], [], [#include ]) -AC_CHECK_DECLS([strlcat, strlcpy]) +AC_CHECK_DECLS([strlcat, strlcpy, strnlen]) # This is probably only present on macOS, but may as well check always AC_CHECK_DECLS(F_FULLFSYNC, [], [], [#include ]) @@ -1514,7 +1514,7 @@ else AC_CHECK_FUNCS([fpclass fp_class fp_class_d class], [break]) fi -AC_REPLACE_FUNCS([crypt fls getopt getrusage inet_aton mkdtemp random rint srandom strerror strlcat strlcpy]) +AC_REPLACE_FUNCS([crypt fls getopt getrusage inet_aton mkdtemp random rint srandom strerror strlcat strlcpy strnlen]) case $host_os in diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c index 64e0408d5a..c5c311fad3 100644 --- a/src/backend/utils/mmgr/mcxt.c +++ b/src/backend/utils/mmgr/mcxt.c @@ -21,7 +21,6 @@ #include "postgres.h" -#include "common/string.h" #include "miscadmin.h" #include "utils/memdebug.h" #include "utils/memutils.h" @@ -1089,7 +1088,7 @@ pnstrdup(const char *in, Size len) { char *out; - len = pg_strnlen(in, len); + len = strnlen(in, len); out = palloc(len + 1); memcpy(out, in, len); diff --git a/src/common/string.c b/src/common/string.c index 901821f3d8..159d9ea7b6 100644 --- a/src/common/string.c +++ b/src/common/string.c @@ -41,23 +41,3 @@ pg_str_endswith(const char *str, const char *end) str += slen - elen; return strcmp(str, end) == 0; } - - -/* - * Portable version of posix' strnlen. - * - * Returns the number of characters before a null-byte in the string pointed - * to by str, unless there's no null-byte before maxlen. In the latter case - * maxlen is returned. - */ -#ifndef HAVE_STRNLEN -size_t -pg_strnlen(const char *str, size_t maxlen) -{ - const char *p = str; - - while (maxlen-- > 0 && *p) - p++; - return p - str; -} -#endif diff --git a/src/include/common/string.h b/src/include/common/string.h index 3d46b80918..5f3ea71d61 100644 --- a/src/include/common/string.h +++ b/src/include/common/string.h @@ -12,19 +12,4 @@ extern bool pg_str_endswith(const char *str, const char *end); -/* - * Portable version of posix' strnlen. - * - * Returns the number of characters before a null-byte in the string pointed - * to by str, unless there's no null-byte before maxlen. In the latter case - * maxlen is returned. - * - * Use the system strnlen if provided, it's likely to be faster. - */ -#ifdef HAVE_STRNLEN -#define pg_strnlen(str, maxlen) strnlen(str, maxlen) -#else -extern size_t pg_strnlen(const char *str, size_t maxlen); -#endif - #endif /* COMMON_STRING_H */ diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index d20cc47fde..b0298cca19 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -147,6 +147,10 @@ don't. */ #undef HAVE_DECL_STRLCPY +/* Define to 1 if you have the declaration of `strnlen', and to 0 if you + don't. */ +#undef HAVE_DECL_STRNLEN + /* Define to 1 if you have the declaration of `sys_siglist', and to 0 if you don't. */ #undef HAVE_DECL_SYS_SIGLIST diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index 58eef0a538..b76aad0267 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -99,6 +99,10 @@ don't. */ #define HAVE_DECL_SNPRINTF 1 +/* Define to 1 if you have the declaration of `strnlen', and to 0 if you + don't. */ +#define HAVE_DECL_STRNLEN 1 + /* Define to 1 if you have the declaration of `vsnprintf', and to 0 if you don't. */ #define HAVE_DECL_VSNPRINTF 1 @@ -255,6 +259,9 @@ /* Define to 1 if you have the header file. 
*/ /* #undef HAVE_PAM_PAM_APPL_H */ +/* Define to 1 if you have the `strnlen' function. */ +#define HAVE_STRNLEN 1 + /* Define to 1 if you have the `poll' function. */ /* #undef HAVE_POLL */ @@ -345,9 +352,6 @@ /* Define to 1 if you have the header file. */ #define HAVE_STRING_H 1 -/* Define to 1 if you have the `strnlen' function. */ -#define HAVE_STRNLEN - /* Define to use have a strong random number source */ #define HAVE_STRONG_RANDOM 1 diff --git a/src/include/port.h b/src/include/port.h index b1ba645655..17a7710a5e 100644 --- a/src/include/port.h +++ b/src/include/port.h @@ -406,6 +406,10 @@ extern size_t strlcat(char *dst, const char *src, size_t siz); extern size_t strlcpy(char *dst, const char *src, size_t siz); #endif +#if !HAVE_DECL_STRNLEN +extern size_t strnlen(const char *str, size_t maxlen); +#endif + #if !defined(HAVE_RANDOM) extern long random(void); #endif diff --git a/src/port/snprintf.c b/src/port/snprintf.c index 531d2c5ee3..43c17e702e 100644 --- a/src/port/snprintf.c +++ b/src/port/snprintf.c @@ -43,8 +43,6 @@ #endif #include -#include "common/string.h" - #ifndef NL_ARGMAX #define NL_ARGMAX 16 #endif @@ -804,7 +802,7 @@ fmtstr(char *value, int leftjust, int minlen, int maxwidth, * than that. */ if (pointflag) - vallen = pg_strnlen(value, maxwidth); + vallen = strnlen(value, maxwidth); else vallen = strlen(value); diff --git a/src/port/strnlen.c b/src/port/strnlen.c new file mode 100644 index 0000000000..260b883368 --- /dev/null +++ b/src/port/strnlen.c @@ -0,0 +1,33 @@ +/*------------------------------------------------------------------------- + * + * strnlen.c + * Fallback implementation of strnlen(). + * + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/port/strnlen.c + * + *------------------------------------------------------------------------- + */ + +#include "c.h" + +/* + * Implementation of posix' strnlen for systems where it's not available. + * + * Returns the number of characters before a null-byte in the string pointed + * to by str, unless there's no null-byte before maxlen. In the latter case + * maxlen is returned. + */ +size_t +strnlen(const char *str, size_t maxlen) +{ + const char *p = str; + + while (maxlen-- > 0 && *p) + p++; + return p - str; +} From f4128ab466aac639387a5dade6647621c87bbb3f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 10 Oct 2017 19:14:06 -0400 Subject: [PATCH 0357/1087] Regenerate configure script. Not sure how fffd651e83ccbd6191a76be6ec7c6b1b27888fde ended up probing for "strnlenfrak" rather than "strnlen". My autoconf doesn't do that ... 
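For reference, strnlen(), the function these two patches probe for and, if necessary, supply, is simply a bounded strlen(). The standalone sketch below is illustrative only (it is not part of any patch in this series) and demonstrates the semantics that the fallback in src/port/strnlen.c above implements:

    #include <stdio.h>
    #include <string.h>        /* declares strnlen() on POSIX systems */

    int
    main(void)
    {
        char    terminated[8] = "abc";                  /* NUL within bounds */
        char    unterminated[4] = {'x', 'y', 'z', 'w'}; /* no NUL at all */

        /* stops at the NUL byte: prints 3 */
        printf("%zu\n", strnlen(terminated, sizeof(terminated)));

        /* never reads past maxlen: prints 4, without overrunning the array */
        printf("%zu\n", strnlen(unterminated, sizeof(unterminated)));

        return 0;
    }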
--- configure | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/configure b/configure index b0582657bf..98cb8b2881 100755 --- a/configure +++ b/configure @@ -13161,7 +13161,7 @@ fi cat >>confdefs.h <<_ACEOF #define HAVE_DECL_STRLCPY $ac_have_decl _ACEOF -ac_fn_c_check_decl "$LINENO" "strnlenfrak" "ac_cv_have_decl_strnlen" "$ac_includes_default" +ac_fn_c_check_decl "$LINENO" "strnlen" "ac_cv_have_decl_strnlen" "$ac_includes_default" if test "x$ac_cv_have_decl_strnlen" = xyes; then : ac_have_decl=1 else @@ -13538,7 +13538,7 @@ esac fi -ac_fn_c_check_func "$LINENO" "strnlenfrak" "ac_cv_func_strnlen" +ac_fn_c_check_func "$LINENO" "strnlen" "ac_cv_func_strnlen" if test "x$ac_cv_func_strnlen" = xyes; then : $as_echo "#define HAVE_STRNLEN 1" >>confdefs.h From e9e0f78bdeaee6e1e24544fd564cf0907f6a2134 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 11 Oct 2017 09:15:20 -0400 Subject: [PATCH 0358/1087] Fix whitespace --- src/test/regress/expected/partition_join.out | 2 +- src/test/regress/sql/partition_join.sql | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/test/regress/expected/partition_join.out b/src/test/regress/expected/partition_join.out index 234b8b5381..adf6aedfa6 100644 --- a/src/test/regress/expected/partition_join.out +++ b/src/test/regress/expected/partition_join.out @@ -1257,7 +1257,7 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 (14 rows) -- --- multiple levels of partitioning +-- multiple levels of partitioning -- CREATE TABLE prt1_l (a int, b int, c varchar) PARTITION BY RANGE(a); CREATE TABLE prt1_l_p1 PARTITION OF prt1_l FOR VALUES FROM (0) TO (250); diff --git a/src/test/regress/sql/partition_join.sql b/src/test/regress/sql/partition_join.sql index ca525d9941..25abf2dc13 100644 --- a/src/test/regress/sql/partition_join.sql +++ b/src/test/regress/sql/partition_join.sql @@ -230,7 +230,7 @@ EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 FULL JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; -- --- multiple levels of partitioning +-- multiple levels of partitioning -- CREATE TABLE prt1_l (a int, b int, c varchar) PARTITION BY RANGE(a); CREATE TABLE prt1_l_p1 PARTITION OF prt1_l FOR VALUES FROM (0) TO (250); From 46912d9b1504cfaede1b22811039028a75f76ab8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 11 Oct 2017 11:27:57 -0400 Subject: [PATCH 0359/1087] Add port/strnlen support to libpq and ecpg Makefiles. In the wake of fffd651e8, any makefile that pulls in snprintf.c from src/port/ needs to be prepared to pull in strnlen.c as well. Per buildfarm. 
--- src/interfaces/ecpg/compatlib/.gitignore | 1 + src/interfaces/ecpg/compatlib/Makefile | 6 +++--- src/interfaces/ecpg/ecpglib/.gitignore | 1 + src/interfaces/ecpg/ecpglib/Makefile | 7 ++++--- src/interfaces/ecpg/pgtypeslib/.gitignore | 1 + src/interfaces/ecpg/pgtypeslib/Makefile | 7 ++++--- src/interfaces/libpq/.gitignore | 1 + src/interfaces/libpq/Makefile | 6 +++--- 8 files changed, 18 insertions(+), 12 deletions(-) diff --git a/src/interfaces/ecpg/compatlib/.gitignore b/src/interfaces/ecpg/compatlib/.gitignore index 6eb8a0dc06..ad5ba1354c 100644 --- a/src/interfaces/ecpg/compatlib/.gitignore +++ b/src/interfaces/ecpg/compatlib/.gitignore @@ -2,3 +2,4 @@ /blibecpg_compatdll.def /exports.list /snprintf.c +/strnlen.c diff --git a/src/interfaces/ecpg/compatlib/Makefile b/src/interfaces/ecpg/compatlib/Makefile index 04ddcfeab2..9ea5cf4ac4 100644 --- a/src/interfaces/ecpg/compatlib/Makefile +++ b/src/interfaces/ecpg/compatlib/Makefile @@ -31,7 +31,7 @@ SHLIB_EXPORTS = exports.txt # Need to recompile any libpgport object files LIBS := $(filter-out -lpgport, $(LIBS)) -OBJS= informix.o $(filter snprintf.o, $(LIBOBJS)) $(WIN32RES) +OBJS= informix.o $(filter snprintf.o strnlen.o, $(LIBOBJS)) $(WIN32RES) PKG_CONFIG_REQUIRES_PRIVATE = libecpg libpgtypes @@ -48,7 +48,7 @@ submake-pgtypeslib: # Shared library stuff include $(top_srcdir)/src/Makefile.shlib -snprintf.c: % : $(top_srcdir)/src/port/% +snprintf.c strnlen.c: % : $(top_srcdir)/src/port/% rm -f $@ && $(LN_S) $< . install: all installdirs install-lib @@ -58,6 +58,6 @@ installdirs: installdirs-lib uninstall: uninstall-lib clean distclean: clean-lib - rm -f $(OBJS) snprintf.c + rm -f $(OBJS) snprintf.c strnlen.c maintainer-clean: distclean maintainer-clean-lib diff --git a/src/interfaces/ecpg/ecpglib/.gitignore b/src/interfaces/ecpg/ecpglib/.gitignore index 8ef6401dd0..1619e970c2 100644 --- a/src/interfaces/ecpg/ecpglib/.gitignore +++ b/src/interfaces/ecpg/ecpglib/.gitignore @@ -5,6 +5,7 @@ /pgstrcasecmp.c /snprintf.c /strlcpy.c +/strnlen.c /thread.c /win32setlocale.c /isinf.c diff --git a/src/interfaces/ecpg/ecpglib/Makefile b/src/interfaces/ecpg/ecpglib/Makefile index fbb14073ce..0b0a3e2857 100644 --- a/src/interfaces/ecpg/ecpglib/Makefile +++ b/src/interfaces/ecpg/ecpglib/Makefile @@ -27,7 +27,8 @@ LIBS := $(filter-out -lpgport, $(LIBS)) OBJS= execute.o typename.o descriptor.o sqlda.o data.o error.o prepare.o memory.o \ connect.o misc.o path.o pgstrcasecmp.o \ - $(filter snprintf.o strlcpy.o win32setlocale.o isinf.o, $(LIBOBJS)) $(WIN32RES) + $(filter snprintf.o strlcpy.o strnlen.o win32setlocale.o isinf.o, $(LIBOBJS)) \ + $(WIN32RES) # thread.c is needed only for non-WIN32 implementation of path.c ifneq ($(PORTNAME), win32) @@ -55,7 +56,7 @@ include $(top_srcdir)/src/Makefile.shlib # necessarily use the same object files as the backend uses. Instead, # symlink the source files in here and build our own object file. -path.c pgstrcasecmp.c snprintf.c strlcpy.c thread.c win32setlocale.c isinf.c: % : $(top_srcdir)/src/port/% +path.c pgstrcasecmp.c snprintf.c strlcpy.c strnlen.c thread.c win32setlocale.c isinf.c: % : $(top_srcdir)/src/port/% rm -f $@ && $(LN_S) $< . 
misc.o: misc.c $(top_builddir)/src/port/pg_config_paths.h @@ -72,6 +73,6 @@ uninstall: uninstall-lib clean distclean: clean-lib rm -f $(OBJS) - rm -f path.c pgstrcasecmp.c snprintf.c strlcpy.c thread.c win32setlocale.c isinf.c + rm -f path.c pgstrcasecmp.c snprintf.c strlcpy.c strnlen.c thread.c win32setlocale.c isinf.c maintainer-clean: distclean maintainer-clean-lib diff --git a/src/interfaces/ecpg/pgtypeslib/.gitignore b/src/interfaces/ecpg/pgtypeslib/.gitignore index fbcd68d7d3..df46a79972 100644 --- a/src/interfaces/ecpg/pgtypeslib/.gitignore +++ b/src/interfaces/ecpg/pgtypeslib/.gitignore @@ -4,3 +4,4 @@ /pgstrcasecmp.c /rint.c /snprintf.c +/strnlen.c diff --git a/src/interfaces/ecpg/pgtypeslib/Makefile b/src/interfaces/ecpg/pgtypeslib/Makefile index 9fc75661b5..960c26ae55 100644 --- a/src/interfaces/ecpg/pgtypeslib/Makefile +++ b/src/interfaces/ecpg/pgtypeslib/Makefile @@ -31,7 +31,8 @@ SHLIB_EXPORTS = exports.txt OBJS= numeric.o datetime.o common.o dt_common.o timestamp.o interval.o \ pgstrcasecmp.o \ - $(filter rint.o snprintf.o, $(LIBOBJS)) $(WIN32RES) + $(filter rint.o snprintf.o strnlen.o, $(LIBOBJS)) \ + $(WIN32RES) all: all-lib @@ -43,7 +44,7 @@ include $(top_srcdir)/src/Makefile.shlib # necessarily use the same object files as the backend uses. Instead, # symlink the source files in here and build our own object file. -pgstrcasecmp.c rint.c snprintf.c: % : $(top_srcdir)/src/port/% +pgstrcasecmp.c rint.c snprintf.c strnlen.c: % : $(top_srcdir)/src/port/% rm -f $@ && $(LN_S) $< . install: all installdirs install-lib @@ -53,6 +54,6 @@ installdirs: installdirs-lib uninstall: uninstall-lib clean distclean: clean-lib - rm -f $(OBJS) pgstrcasecmp.c rint.c snprintf.c + rm -f $(OBJS) pgstrcasecmp.c rint.c snprintf.c strnlen.c maintainer-clean: distclean maintainer-clean-lib diff --git a/src/interfaces/libpq/.gitignore b/src/interfaces/libpq/.gitignore index 6c02dc7055..5c232ae2d1 100644 --- a/src/interfaces/libpq/.gitignore +++ b/src/interfaces/libpq/.gitignore @@ -18,6 +18,7 @@ /snprintf.c /strerror.c /strlcpy.c +/strnlen.c /thread.c /win32error.c /win32setlocale.c diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile index 87f22d242f..94eb84be03 100644 --- a/src/interfaces/libpq/Makefile +++ b/src/interfaces/libpq/Makefile @@ -38,7 +38,7 @@ OBJS= fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-l OBJS += chklocale.o inet_net_ntop.o noblock.o pgstrcasecmp.o pqsignal.o \ thread.o # libpgport C files that are needed if identified by configure -OBJS += $(filter crypt.o getaddrinfo.o getpeereid.o inet_aton.o open.o system.o snprintf.o strerror.o strlcpy.o win32error.o win32setlocale.o, $(LIBOBJS)) +OBJS += $(filter crypt.o getaddrinfo.o getpeereid.o inet_aton.o open.o system.o snprintf.o strerror.o strlcpy.o strnlen.o win32error.o win32setlocale.o, $(LIBOBJS)) ifeq ($(enable_strong_random), yes) OBJS += pg_strong_random.o @@ -103,7 +103,7 @@ backend_src = $(top_srcdir)/src/backend # the module is needed (see filter hack in OBJS, above). # When you add a file here, remember to add it in the "clean" target below. 
-chklocale.c crypt.c erand48.c getaddrinfo.c getpeereid.c inet_aton.c inet_net_ntop.c noblock.c open.c system.c pgsleep.c pg_strong_random.c pgstrcasecmp.c pqsignal.c snprintf.c strerror.c strlcpy.c thread.c win32error.c win32setlocale.c: % : $(top_srcdir)/src/port/% +chklocale.c crypt.c erand48.c getaddrinfo.c getpeereid.c inet_aton.c inet_net_ntop.c noblock.c open.c system.c pgsleep.c pg_strong_random.c pgstrcasecmp.c pqsignal.c snprintf.c strerror.c strlcpy.c strnlen.c thread.c win32error.c win32setlocale.c: % : $(top_srcdir)/src/port/% rm -f $@ && $(LN_S) $< . ip.c md5.c base64.c scram-common.c sha2.c sha2_openssl.c saslprep.c unicode_norm.c: % : $(top_srcdir)/src/common/% @@ -155,7 +155,7 @@ clean distclean: clean-lib # Might be left over from a Win32 client-only build rm -f pg_config_paths.h # Remove files we (may have) symlinked in from src/port and other places - rm -f chklocale.c crypt.c erand48.c getaddrinfo.c getpeereid.c inet_aton.c inet_net_ntop.c noblock.c open.c system.c pgsleep.c pg_strong_random.c pgstrcasecmp.c pqsignal.c snprintf.c strerror.c strlcpy.c thread.c win32error.c win32setlocale.c + rm -f chklocale.c crypt.c erand48.c getaddrinfo.c getpeereid.c inet_aton.c inet_net_ntop.c noblock.c open.c system.c pgsleep.c pg_strong_random.c pgstrcasecmp.c pqsignal.c snprintf.c strerror.c strlcpy.c strnlen.c thread.c win32error.c win32setlocale.c rm -f ip.c md5.c base64.c scram-common.c sha2.c sha2_openssl.c saslprep.c unicode_norm.c rm -f encnames.c wchar.c From 118e99c3d71efbea85341697a447d84bbfb54f18 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 11 Oct 2017 14:28:33 -0400 Subject: [PATCH 0360/1087] Fix low-probability loss of NOTIFY messages due to XID wraparound. Up to now async.c has used TransactionIdIsInProgress() to detect whether a notify message's source transaction is still running. However, that function has a quick-exit path that reports that XIDs before RecentXmin are no longer running. If a listening backend is doing nothing but listening, and not running any queries, there is nothing that will advance its value of RecentXmin. Once 2 billion transactions elapse, the RecentXmin check causes active transactions to be reported as not running. If they aren't committed yet according to CLOG, async.c decides they aborted and discards their messages. The timing for that is a bit tight but it can happen when multiple backends are sending notifies concurrently. The net symptom therefore is that a sufficiently-long-surviving listen-only backend starts to miss some fraction of NOTIFY traffic, but only under heavy load. The only function that updates RecentXmin is GetSnapshotData(). A brute-force fix would therefore be to take a snapshot before processing incoming notify messages. But that would add cycles, as well as contention for the ProcArrayLock. We can be smarter: having taken the snapshot, let's use that to check for running XIDs, and not call TransactionIdIsInProgress() at all. In this way we reduce the number of ProcArrayLock acquisitions from one per message to one per notify interrupt; that's the same under light load but should be a benefit under heavy load. Light testing says that this change is a wash performance-wise for normal loads. I looked around for other callers of TransactionIdIsInProgress() that might be at similar risk, and didn't find any; all of them are inside transactions that presumably have already taken a snapshot. Problem report and diagnosis by Marko Tiikkaja, patch by me. 
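Condensed to its control flow, the per-message test that this patch installs looks roughly like the sketch below (distilled from the asyncQueueReadAllNotifications() and asyncQueueProcessPageEntries() hunks that follow; not compilable on its own):

    /* once per notify interrupt, not once per message */
    Snapshot    snapshot = RegisterSnapshot(GetLatestSnapshot());

    /* ... then, for each queue entry qe destined for our database ... */
    if (XidInMVCCSnapshot(qe->xid, snapshot))
    {
        /* source transaction still in progress: stop here, retry later */
    }
    else if (TransactionIdDidCommit(qe->xid))
    {
        /* committed, hence visible to the snapshot: deliver the message */
    }
    else
    {
        /* source transaction must have aborted or crashed: discard it */
    }

    /* ... */
    UnregisterSnapshot(snapshot);

This trades one ProcArrayLock acquisition per message (inside TransactionIdIsInProgress()) for one per notify interrupt (inside GetLatestSnapshot()).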
Back-patch to all supported branches, since it's been like this since 9.0. Discussion: https://postgr.es/m/20170926182935.14128.65278@wrigleys.postgresql.org --- src/backend/commands/async.c | 39 +++++++++++++++++++++++++--------- src/backend/utils/time/tqual.c | 8 +++---- src/include/utils/tqual.h | 1 + 3 files changed, 33 insertions(+), 15 deletions(-) diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c index bacc08eb84..a93c81bca2 100644 --- a/src/backend/commands/async.c +++ b/src/backend/commands/async.c @@ -137,7 +137,9 @@ #include "utils/builtins.h" #include "utils/memutils.h" #include "utils/ps_status.h" +#include "utils/snapmgr.h" #include "utils/timestamp.h" +#include "utils/tqual.h" /* @@ -387,7 +389,8 @@ static bool SignalBackends(void); static void asyncQueueReadAllNotifications(void); static bool asyncQueueProcessPageEntries(volatile QueuePosition *current, QueuePosition stop, - char *page_buffer); + char *page_buffer, + Snapshot snapshot); static void asyncQueueAdvanceTail(void); static void ProcessIncomingNotify(void); static bool AsyncExistsPendingNotify(const char *channel, const char *payload); @@ -798,7 +801,7 @@ PreCommit_Notify(void) } } - /* Queue any pending notifies */ + /* Queue any pending notifies (must happen after the above) */ if (pendingNotifies) { ListCell *nextNotify; @@ -987,7 +990,9 @@ Exec_ListenPreCommit(void) * have already committed before we started to LISTEN. * * Note that we are not yet listening on anything, so we won't deliver any - * notification to the frontend. + * notification to the frontend. Also, although our transaction might + * have executed NOTIFY, those message(s) aren't queued yet so we can't + * see them in the queue. * * This will also advance the global tail pointer if possible. */ @@ -1744,6 +1749,7 @@ asyncQueueReadAllNotifications(void) volatile QueuePosition pos; QueuePosition oldpos; QueuePosition head; + Snapshot snapshot; bool advanceTail; /* page_buffer must be adequately aligned, so use a union */ @@ -1767,6 +1773,9 @@ asyncQueueReadAllNotifications(void) return; } + /* Get snapshot we'll use to decide which xacts are still in progress */ + snapshot = RegisterSnapshot(GetLatestSnapshot()); + /*---------- * Note that we deliver everything that we see in the queue and that * matches our _current_ listening state. @@ -1854,7 +1863,8 @@ asyncQueueReadAllNotifications(void) * while sending the notifications to the frontend. 
*/ reachedStop = asyncQueueProcessPageEntries(&pos, head, - page_buffer.buf); + page_buffer.buf, + snapshot); } while (!reachedStop); } PG_CATCH(); @@ -1882,6 +1892,9 @@ asyncQueueReadAllNotifications(void) /* If we were the laziest backend, try to advance the tail pointer */ if (advanceTail) asyncQueueAdvanceTail(); + + /* Done with snapshot */ + UnregisterSnapshot(snapshot); } /* @@ -1903,7 +1916,8 @@ asyncQueueReadAllNotifications(void) static bool asyncQueueProcessPageEntries(volatile QueuePosition *current, QueuePosition stop, - char *page_buffer) + char *page_buffer, + Snapshot snapshot) { bool reachedStop = false; bool reachedEndOfPage; @@ -1928,7 +1942,7 @@ asyncQueueProcessPageEntries(volatile QueuePosition *current, /* Ignore messages destined for other databases */ if (qe->dboid == MyDatabaseId) { - if (TransactionIdIsInProgress(qe->xid)) + if (XidInMVCCSnapshot(qe->xid, snapshot)) { /* * The source transaction is still in progress, so we can't @@ -1939,10 +1953,15 @@ asyncQueueProcessPageEntries(volatile QueuePosition *current, * this advance-then-back-up behavior when dealing with an * uncommitted message.) * - * Note that we must test TransactionIdIsInProgress before we - * test TransactionIdDidCommit, else we might return a message - * from a transaction that is not yet visible to snapshots; - * compare the comments at the head of tqual.c. + * Note that we must test XidInMVCCSnapshot before we test + * TransactionIdDidCommit, else we might return a message from + * a transaction that is not yet visible to snapshots; compare + * the comments at the head of tqual.c. + * + * Also, while our own xact won't be listed in the snapshot, + * we need not check for TransactionIdIsCurrentTransactionId + * because our transaction cannot (yet) have queued any + * messages. */ *current = thisentry; reachedStop = true; diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c index bbac4083c9..b7aab0dd19 100644 --- a/src/backend/utils/time/tqual.c +++ b/src/backend/utils/time/tqual.c @@ -81,8 +81,6 @@ SnapshotData SnapshotSelfData = {HeapTupleSatisfiesSelf}; SnapshotData SnapshotAnyData = {HeapTupleSatisfiesAny}; -/* local functions */ -static bool XidInMVCCSnapshot(TransactionId xid, Snapshot snapshot); /* * SetHintBits() @@ -1479,10 +1477,10 @@ HeapTupleIsSurelyDead(HeapTuple htup, TransactionId OldestXmin) * Note: GetSnapshotData never stores either top xid or subxids of our own * backend into a snapshot, so these xids will not be reported as "running" * by this function. This is OK for current uses, because we always check - * TransactionIdIsCurrentTransactionId first, except for known-committed - * XIDs which could not be ours anyway. + * TransactionIdIsCurrentTransactionId first, except when it's known the + * XID could not be ours anyway. 
*/ -static bool +bool XidInMVCCSnapshot(TransactionId xid, Snapshot snapshot) { uint32 i; diff --git a/src/include/utils/tqual.h b/src/include/utils/tqual.h index 9a3b56e5f0..96eaf01ca0 100644 --- a/src/include/utils/tqual.h +++ b/src/include/utils/tqual.h @@ -78,6 +78,7 @@ extern HTSV_Result HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin, Buffer buffer); extern bool HeapTupleIsSurelyDead(HeapTuple htup, TransactionId OldestXmin); +extern bool XidInMVCCSnapshot(TransactionId xid, Snapshot snapshot); extern void HeapTupleSetHintBits(HeapTupleHeader tuple, Buffer buffer, uint16 infomask, TransactionId xid); From 20d210bf5bb0d5ae37c727d364cfd810c367704a Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 11 Oct 2017 15:55:10 -0400 Subject: [PATCH 0361/1087] Fix mistakes in comments. Masahiko Sawada Discussion: http://postgr.es/m/CAD21AoBsfYsMHD6_SL9iN3n_Foaa+oPbL5jG55DxU1ChaujqwQ@mail.gmail.com --- src/backend/executor/execReplication.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c index 5a75e0211f..c26420ae10 100644 --- a/src/backend/executor/execReplication.c +++ b/src/backend/executor/execReplication.c @@ -448,7 +448,7 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate, CheckCmdReplicaIdentity(rel, CMD_UPDATE); - /* BEFORE ROW INSERT Triggers */ + /* BEFORE ROW UPDATE Triggers */ if (resultRelInfo->ri_TrigDesc && resultRelInfo->ri_TrigDesc->trig_update_before_row) { @@ -509,7 +509,7 @@ ExecSimpleRelationDelete(EState *estate, EPQState *epqstate, CheckCmdReplicaIdentity(rel, CMD_DELETE); - /* BEFORE ROW INSERT Triggers */ + /* BEFORE ROW DELETE Triggers */ if (resultRelInfo->ri_TrigDesc && resultRelInfo->ri_TrigDesc->trig_update_before_row) { From 28605968322b70a7efe1cc89595d1cfc557d80b9 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 11 Oct 2017 16:56:23 -0400 Subject: [PATCH 0362/1087] Doc: fix missing explanation of default object privileges. The GRANT reference page, which lists the default privileges for new objects, failed to mention that USAGE is granted by default for data types and domains. As a lesser sin, it also did not specify anything about the initial privileges for sequences, FDWs, foreign servers, or large objects. Fix that, and add a comment to acldefault() in the probably vain hope of getting people to maintain this list in future. Noted by Laurenz Albe, though I editorialized on the wording a bit. Back-patch to all supported branches, since they all have this behavior. Discussion: https://postgr.es/m/1507620895.4152.1.camel@cybertec.at --- doc/src/sgml/ref/grant.sgml | 20 +++++++++++++++----- src/backend/utils/adt/acl.c | 4 +++- 2 files changed, 18 insertions(+), 6 deletions(-) diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml index 8f385f6bb7..385cfe6a9c 100644 --- a/doc/src/sgml/ref/grant.sgml +++ b/doc/src/sgml/ref/grant.sgml @@ -156,12 +156,22 @@ GRANT role_name [, ...] TO PostgreSQL grants default privileges on some types of objects to PUBLIC. No privileges are granted to - PUBLIC by default on tables, - columns, schemas or tablespaces. For other types, the default privileges + PUBLIC by default on + tables, + table columns, + sequences, + foreign data wrappers, + foreign servers, + large objects, + schemas, + or tablespaces. 
+ For other types of objects, the default privileges granted to PUBLIC are as follows: - CONNECT and CREATE TEMP TABLE for - databases; EXECUTE privilege for functions; and - USAGE privilege for languages. + CONNECT and TEMPORARY (create + temporary tables) privileges for databases; + EXECUTE privilege for functions; and + USAGE privilege for languages and data types + (including domains). The object owner can, of course, REVOKE both default and expressly granted privileges. (For maximum security, issue the REVOKE in the same transaction that diff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c index 0c26e44d82..fa6b792d00 100644 --- a/src/backend/utils/adt/acl.c +++ b/src/backend/utils/adt/acl.c @@ -737,7 +737,9 @@ hash_aclitem_extended(PG_FUNCTION_ARGS) * acldefault() --- create an ACL describing default access permissions * * Change this routine if you want to alter the default access policy for - * newly-created objects (or any object with a NULL acl entry). + * newly-created objects (or any object with a NULL acl entry). When + * you make a change here, don't forget to update the GRANT man page, + * which explains all the default permissions. * * Note that these are the hard-wired "defaults" that are used in the * absence of any pg_default_acl entry. From f676616651c83b14e1d879fbfabdd3ab2dc70bbe Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 12:03:26 -0700 Subject: [PATCH 0363/1087] Prevent idle in transaction session timeout from sometimes being ignored. The previous coding in ProcessInterrupts() could lead to idle_in_transaction_session_timeout being ignored, when statement_timeout occurred earlier. The problem was that ProcessInterrupts() would return before processing the transaction timeout if QueryCancelPending was set while QueryCancelHoldoffCount != 0 - which is the case when reading new commands from the client, i.e. exactly the state in which the idle transaction timeout fires. Fix that by removing the early return. Alternatively the transaction timeout code could have been moved up, but that early return seems like an issue that could hit other cases too. Author: Lukas Fittl Bug: #14821 Discussion: https://www.postgresql.org/message-id/20170921010956.17345.61461%40wrigleys.postgresql.org https://www.postgresql.org/message-id/CAP53PkxQnv3OWJpyNPGJYT62uY=n1=2CF_Lpc6gVOFnc0-gazw@mail.gmail.com Backpatch: 9.6-, where idle_in_transaction_session_timeout was introduced. --- src/backend/tcop/postgres.c | 32 +++++++++++++++----------------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index c807b00b0b..edea6f177b 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -2941,26 +2941,24 @@ ProcessInterrupts(void) " database and repeat your command."))); } - if (QueryCancelPending) + /* + * Don't allow query cancel interrupts while reading input from the + * client, because we might lose sync in the FE/BE protocol. (Die + * interrupts are OK, because we won't read any further messages from + * the client in that case.)
+ * Re-arm InterruptPending so that we process the cancel request + * as soon as we're done reading the message. */ - if (QueryCancelHoldoffCount != 0) - { - /* - * Re-arm InterruptPending so that we process the cancel request - * as soon as we're done reading the message. - */ - InterruptPending = true; - return; - } + InterruptPending = true; + } + else if (QueryCancelPending) + { + bool lock_timeout_occurred; + bool stmt_timeout_occurred; QueryCancelPending = false; From 5fa6b0d102eb8ccd15c4963ee9841baec50df45e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 11 Oct 2017 17:43:50 -0400 Subject: [PATCH 0364/1087] Remove unnecessary PG_TRY overhead for CurrentResourceOwner changes. resowner/README contained advice to use a PG_TRY block to restore the old CurrentResourceOwner value anywhere that that variable is transiently changed. That advice was only inconsistently followed, however, and on reflection it seems like unnecessary overhead. We don't bother with such a convention for transient CurrentMemoryContext changes, on the grounds that any (sub)transaction abort will start out by resetting CurrentMemoryContext to what it wants. But the same is true of CurrentResourceOwner, so there seems no need to treat it differently. Hence, remove PG_TRY blocks that exist only to restore CurrentResourceOwner before re-throwing the error. There are a couple of places that restore it along with some other actions, and I left those alone; the restore is probably unnecessary but no noticeable gain will result from removing it. Discussion: https://postgr.es/m/5236.1507583529@sss.pgh.pa.us --- src/backend/access/transam/xact.c | 16 ++------ src/backend/commands/portalcmds.c | 22 ++++------- src/backend/commands/sequence.c | 16 ++------ src/backend/commands/trigger.c | 36 ++++++------------ src/backend/storage/large_object/inv_api.c | 44 +++++++--------------- src/backend/utils/resowner/README | 4 -- src/backend/utils/resowner/resowner.c | 20 ++-------- 7 files changed, 42 insertions(+), 116 deletions(-) diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index 93dca7a72a..8203388fa8 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -575,18 +575,10 @@ AssignTransactionId(TransactionState s) * ResourceOwner. 
*/ currentOwner = CurrentResourceOwner; - PG_TRY(); - { - CurrentResourceOwner = s->curTransactionOwner; - XactLockTableInsert(s->transactionId); - } - PG_CATCH(); - { - /* Ensure CurrentResourceOwner is restored on error */ - CurrentResourceOwner = currentOwner; - PG_RE_THROW(); - } - PG_END_TRY(); + CurrentResourceOwner = s->curTransactionOwner; + + XactLockTableInsert(s->transactionId); + CurrentResourceOwner = currentOwner; /* diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c index b36473fba4..76d6cf154c 100644 --- a/src/backend/commands/portalcmds.c +++ b/src/backend/commands/portalcmds.c @@ -294,21 +294,13 @@ PortalCleanup(Portal portal) /* We must make the portal's resource owner current */ saveResourceOwner = CurrentResourceOwner; - PG_TRY(); - { - if (portal->resowner) - CurrentResourceOwner = portal->resowner; - ExecutorFinish(queryDesc); - ExecutorEnd(queryDesc); - FreeQueryDesc(queryDesc); - } - PG_CATCH(); - { - /* Ensure CurrentResourceOwner is restored on error */ - CurrentResourceOwner = saveResourceOwner; - PG_RE_THROW(); - } - PG_END_TRY(); + if (portal->resowner) + CurrentResourceOwner = portal->resowner; + + ExecutorFinish(queryDesc); + ExecutorEnd(queryDesc); + FreeQueryDesc(queryDesc); + CurrentResourceOwner = saveResourceOwner; } } diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c index 5c2ce78946..5e1b0fe289 100644 --- a/src/backend/commands/sequence.c +++ b/src/backend/commands/sequence.c @@ -1055,18 +1055,10 @@ lock_and_open_sequence(SeqTable seq) ResourceOwner currentOwner; currentOwner = CurrentResourceOwner; - PG_TRY(); - { - CurrentResourceOwner = TopTransactionResourceOwner; - LockRelationOid(seq->relid, RowExclusiveLock); - } - PG_CATCH(); - { - /* Ensure CurrentResourceOwner is restored on error */ - CurrentResourceOwner = currentOwner; - PG_RE_THROW(); - } - PG_END_TRY(); + CurrentResourceOwner = TopTransactionResourceOwner; + + LockRelationOid(seq->relid, RowExclusiveLock); + CurrentResourceOwner = currentOwner; /* Flag that we have a lock in the current xact */ diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index e75a59d299..8d0345cd64 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -3639,17 +3639,10 @@ GetCurrentFDWTuplestore(void) */ oldcxt = MemoryContextSwitchTo(CurTransactionContext); saveResourceOwner = CurrentResourceOwner; - PG_TRY(); - { - CurrentResourceOwner = CurTransactionResourceOwner; - ret = tuplestore_begin_heap(false, false, work_mem); - } - PG_CATCH(); - { - CurrentResourceOwner = saveResourceOwner; - PG_RE_THROW(); - } - PG_END_TRY(); + CurrentResourceOwner = CurTransactionResourceOwner; + + ret = tuplestore_begin_heap(false, false, work_mem); + CurrentResourceOwner = saveResourceOwner; MemoryContextSwitchTo(oldcxt); @@ -4468,20 +4461,13 @@ MakeTransitionCaptureState(TriggerDesc *trigdesc, Oid relid, CmdType cmdType) /* Now create required tuplestore(s), if we don't have them already. 
*/ oldcxt = MemoryContextSwitchTo(CurTransactionContext); saveResourceOwner = CurrentResourceOwner; - PG_TRY(); - { - CurrentResourceOwner = CurTransactionResourceOwner; - if (need_old && table->old_tuplestore == NULL) - table->old_tuplestore = tuplestore_begin_heap(false, false, work_mem); - if (need_new && table->new_tuplestore == NULL) - table->new_tuplestore = tuplestore_begin_heap(false, false, work_mem); - } - PG_CATCH(); - { - CurrentResourceOwner = saveResourceOwner; - PG_RE_THROW(); - } - PG_END_TRY(); + CurrentResourceOwner = CurTransactionResourceOwner; + + if (need_old && table->old_tuplestore == NULL) + table->old_tuplestore = tuplestore_begin_heap(false, false, work_mem); + if (need_new && table->new_tuplestore == NULL) + table->new_tuplestore = tuplestore_begin_heap(false, false, work_mem); + CurrentResourceOwner = saveResourceOwner; MemoryContextSwitchTo(oldcxt); diff --git a/src/backend/storage/large_object/inv_api.c b/src/backend/storage/large_object/inv_api.c index d55d40e6f8..aa43b46c30 100644 --- a/src/backend/storage/large_object/inv_api.c +++ b/src/backend/storage/large_object/inv_api.c @@ -75,23 +75,14 @@ open_lo_relation(void) /* Arrange for the top xact to own these relation references */ currentOwner = CurrentResourceOwner; - PG_TRY(); - { - CurrentResourceOwner = TopTransactionResourceOwner; + CurrentResourceOwner = TopTransactionResourceOwner; + + /* Use RowExclusiveLock since we might either read or write */ + if (lo_heap_r == NULL) + lo_heap_r = heap_open(LargeObjectRelationId, RowExclusiveLock); + if (lo_index_r == NULL) + lo_index_r = index_open(LargeObjectLOidPNIndexId, RowExclusiveLock); - /* Use RowExclusiveLock since we might either read or write */ - if (lo_heap_r == NULL) - lo_heap_r = heap_open(LargeObjectRelationId, RowExclusiveLock); - if (lo_index_r == NULL) - lo_index_r = index_open(LargeObjectLOidPNIndexId, RowExclusiveLock); - } - PG_CATCH(); - { - /* Ensure CurrentResourceOwner is restored on error */ - CurrentResourceOwner = currentOwner; - PG_RE_THROW(); - } - PG_END_TRY(); CurrentResourceOwner = currentOwner; } @@ -112,22 +103,13 @@ close_lo_relation(bool isCommit) ResourceOwner currentOwner; currentOwner = CurrentResourceOwner; - PG_TRY(); - { - CurrentResourceOwner = TopTransactionResourceOwner; + CurrentResourceOwner = TopTransactionResourceOwner; + + if (lo_index_r) + index_close(lo_index_r, NoLock); + if (lo_heap_r) + heap_close(lo_heap_r, NoLock); - if (lo_index_r) - index_close(lo_index_r, NoLock); - if (lo_heap_r) - heap_close(lo_heap_r, NoLock); - } - PG_CATCH(); - { - /* Ensure CurrentResourceOwner is restored on error */ - CurrentResourceOwner = currentOwner; - PG_RE_THROW(); - } - PG_END_TRY(); CurrentResourceOwner = currentOwner; } lo_heap_r = NULL; diff --git a/src/backend/utils/resowner/README b/src/backend/utils/resowner/README index e7b9ddfa59..2998f6bb36 100644 --- a/src/backend/utils/resowner/README +++ b/src/backend/utils/resowner/README @@ -80,7 +80,3 @@ CurrentResourceOwner must point to the same resource owner that was current when the buffer, lock, or cache reference was acquired. It would be possible to relax this restriction given additional bookkeeping effort, but at present there seems no need. - -Code that transiently changes CurrentResourceOwner should generally use a -PG_TRY construct to ensure that the previous value of CurrentResourceOwner -is restored if control is lost during an error exit. 
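The simplified idiom, shown here as a minimal sketch distilled from the lock_and_open_sequence() hunk above (illustrative only), is a plain save/switch/work/restore sequence:

    ResourceOwner currentOwner = CurrentResourceOwner;

    CurrentResourceOwner = TopTransactionResourceOwner;
    LockRelationOid(seq->relid, RowExclusiveLock);  /* work under the other owner */
    CurrentResourceOwner = currentOwner;

    /*
     * No PG_TRY/PG_CATCH needed: if the work errors out, the ensuing
     * (sub)transaction abort resets CurrentResourceOwner anyway, just as
     * it resets CurrentMemoryContext.
     */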
diff --git a/src/backend/utils/resowner/resowner.c b/src/backend/utils/resowner/resowner.c index bd19fad77e..4c35ccf65e 100644 --- a/src/backend/utils/resowner/resowner.c +++ b/src/backend/utils/resowner/resowner.c @@ -473,21 +473,8 @@ ResourceOwnerRelease(ResourceOwner owner, bool isCommit, bool isTopLevel) { - /* Rather than PG_TRY at every level of recursion, set it up once */ - ResourceOwner save; - - save = CurrentResourceOwner; - PG_TRY(); - { - ResourceOwnerReleaseInternal(owner, phase, isCommit, isTopLevel); - } - PG_CATCH(); - { - CurrentResourceOwner = save; - PG_RE_THROW(); - } - PG_END_TRY(); - CurrentResourceOwner = save; + /* There's not currently any setup needed before recursing */ + ResourceOwnerReleaseInternal(owner, phase, isCommit, isTopLevel); } static void @@ -507,8 +494,7 @@ ResourceOwnerReleaseInternal(ResourceOwner owner, /* * Make CurrentResourceOwner point to me, so that ReleaseBuffer etc don't - * get confused. We needn't PG_TRY here because the outermost level will - * fix it on error abort. + * get confused. */ save = CurrentResourceOwner; CurrentResourceOwner = owner; From 0b974dba2d6b5581ce422ed883209de46f313fb6 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 16:01:52 -0700 Subject: [PATCH 0365/1087] Add configure infrastructure to detect support for C99's restrict. Will be used in later commits improving performance for a few key routines where information about aliasing allows for significantly better code generation. This allows use of the C99 'restrict' keyword without breaking C89 (or, for that matter, C++) compilers. If not supported it's defined to be empty. Author: Andres Freund Discussion: https://postgr.es/m/20170914063418.sckdzgjfrsbekae4@alap3.anarazel.de --- configure | 46 +++++++++++++++++++++++++++++++++++ configure.in | 1 + src/include/pg_config.h.in | 14 +++++++++++ src/include/pg_config.h.win32 | 11 +++++++++ 4 files changed, 72 insertions(+) diff --git a/configure b/configure index 98cb8b2881..910f0fc373 100755 --- a/configure +++ b/configure @@ -11545,6 +11545,52 @@ _ACEOF ;; esac +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for C/C++ restrict keyword" >&5 +$as_echo_n "checking for C/C++ restrict keyword... " >&6; } +if ${ac_cv_c_restrict+:} false; then : + $as_echo_n "(cached) " >&6 +else + ac_cv_c_restrict=no + # The order here caters to the fact that C++ does not require restrict. + for ac_kw in __restrict __restrict__ _Restrict restrict; do + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ +typedef int * int_ptr; + int foo (int_ptr $ac_kw ip) { + return ip[0]; + } +int +main () +{ +int s[1]; + int * $ac_kw t = s; + t[0] = 0; + return foo(t) + ; + return 0; +} +_ACEOF +if ac_fn_c_try_compile "$LINENO"; then : + ac_cv_c_restrict=$ac_kw +fi +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + test "$ac_cv_c_restrict" != no && break + done + +fi +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_restrict" >&5 +$as_echo "$ac_cv_c_restrict" >&6; } + + case $ac_cv_c_restrict in + restrict) ;; + no) $as_echo "#define restrict /**/" >>confdefs.h + ;; + *) cat >>confdefs.h <<_ACEOF +#define restrict $ac_cv_c_restrict +_ACEOF + ;; + esac + { $as_echo "$as_me:${as_lineno-$LINENO}: checking for printf format archetype" >&5 $as_echo_n "checking for printf format archetype...
" >&6; } if ${pgac_cv_printf_archetype+:} false; then : diff --git a/configure.in b/configure.in index 4548db0dd3..ab990d69f4 100644 --- a/configure.in +++ b/configure.in @@ -1299,6 +1299,7 @@ fi m4_defun([AC_PROG_CC_STDC], []) dnl We don't want that. AC_C_BIGENDIAN AC_C_INLINE +AC_C_RESTRICT PGAC_PRINTF_ARCHETYPE AC_C_FLEXIBLE_ARRAY_MEMBER PGAC_C_SIGNED diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index b0298cca19..80ee37dd62 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -923,6 +923,20 @@ if such a type exists, and if the system does not define it. */ #undef intptr_t +/* Define to the equivalent of the C99 'restrict' keyword, or to + nothing if this is not supported. Do not define if restrict is + supported directly. */ +#undef restrict +/* Work around a bug in Sun C++: it does not support _Restrict or + __restrict__, even though the corresponding Sun C compiler ends up with + "#define restrict _Restrict" or "#define restrict __restrict__" in the + previous line. Perhaps some future version of Sun C++ will work with + restrict; if so, hopefully it defines __RESTRICT like Sun C does. */ +#if defined __SUNPRO_CC && !defined __RESTRICT +# define _Restrict +# define __restrict__ +#endif + /* Define to empty if the C compiler does not understand signed types. */ #undef signed diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index b76aad0267..3be1c235aa 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -681,6 +681,17 @@ #define inline __inline #endif +/* Define to the equivalent of the C99 'restrict' keyword, or to + nothing if this is not supported. Do not define if restrict is + supported directly. */ +/* Visual Studio 2008 and upwards */ +#if (_MSC_VER >= 1500) +/* works for C and C++ in msvc */ +#define restrict __restrict +#else +#define restrict +#endif + /* Define to empty if the C compiler does not understand signed types. */ /* #undef signed */ From 70c2d1be2b1e1efa8ef38a92b443fa290a9558dd Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 16:01:52 -0700 Subject: [PATCH 0366/1087] Allow to avoid NUL-byte management for stringinfos and use in format.c. In a lot of the places having appendBinaryStringInfo() maintain a trailing NUL byte wasn't actually meaningful, e.g. when appending an integer which can contain 0 in one of its bytes. Removing this yields some small speedup, but more importantly will be more consistent when providing faster variants of pq_sendint etc. Author: Andres Freund Discussion: https://postgr.es/m/20170914063418.sckdzgjfrsbekae4@alap3.anarazel.de --- src/backend/lib/stringinfo.c | 21 ++++++++++++++++++++- src/backend/libpq/pqformat.c | 18 +++++++++--------- src/include/lib/stringinfo.h | 8 ++++++++ 3 files changed, 37 insertions(+), 10 deletions(-) diff --git a/src/backend/lib/stringinfo.c b/src/backend/lib/stringinfo.c index fd15567144..cb2026c3b2 100644 --- a/src/backend/lib/stringinfo.c +++ b/src/backend/lib/stringinfo.c @@ -202,7 +202,7 @@ appendStringInfoSpaces(StringInfo str, int count) * appendBinaryStringInfo * * Append arbitrary binary data to a StringInfo, allocating more space - * if necessary. + * if necessary. Ensures that a trailing null byte is present. 
*/ void appendBinaryStringInfo(StringInfo str, const char *data, int datalen) @@ -224,6 +224,25 @@ appendBinaryStringInfo(StringInfo str, const char *data, int datalen) str->data[str->len] = '\0'; } +/* + * appendBinaryStringInfoNT + * + * Append arbitrary binary data to a StringInfo, allocating more space + * if necessary. Does not ensure a trailing null-byte exists. + */ +void +appendBinaryStringInfoNT(StringInfo str, const char *data, int datalen) +{ + Assert(str != NULL); + + /* Make more room if needed */ + enlargeStringInfo(str, datalen); + + /* OK, append the data */ + memcpy(str->data + str->len, data, datalen); + str->len += datalen; +} + /* * enlargeStringInfo * diff --git a/src/backend/libpq/pqformat.c b/src/backend/libpq/pqformat.c index f27a04f834..2414d0d8e9 100644 --- a/src/backend/libpq/pqformat.c +++ b/src/backend/libpq/pqformat.c @@ -138,13 +138,13 @@ pq_sendcountedtext(StringInfo buf, const char *str, int slen, { slen = strlen(p); pq_sendint(buf, slen + extra, 4); - appendBinaryStringInfo(buf, p, slen); + appendBinaryStringInfoNT(buf, p, slen); pfree(p); } else { pq_sendint(buf, slen + extra, 4); - appendBinaryStringInfo(buf, str, slen); + appendBinaryStringInfoNT(buf, str, slen); } } @@ -191,11 +191,11 @@ pq_sendstring(StringInfo buf, const char *str) if (p != str) /* actual conversion has been done? */ { slen = strlen(p); - appendBinaryStringInfo(buf, p, slen + 1); + appendBinaryStringInfoNT(buf, p, slen + 1); pfree(p); } else - appendBinaryStringInfo(buf, str, slen + 1); + appendBinaryStringInfoNT(buf, str, slen + 1); } /* -------------------------------- @@ -242,15 +242,15 @@ pq_sendint(StringInfo buf, int i, int b) { case 1: n8 = (unsigned char) i; - appendBinaryStringInfo(buf, (char *) &n8, 1); + appendBinaryStringInfoNT(buf, (char *) &n8, 1); break; case 2: n16 = pg_hton16((uint16) i); - appendBinaryStringInfo(buf, (char *) &n16, 2); + appendBinaryStringInfoNT(buf, (char *) &n16, 2); break; case 4: n32 = pg_hton32((uint32) i); - appendBinaryStringInfo(buf, (char *) &n32, 4); + appendBinaryStringInfoNT(buf, (char *) &n32, 4); break; default: elog(ERROR, "unsupported integer size %d", b); @@ -271,7 +271,7 @@ pq_sendint64(StringInfo buf, int64 i) { uint64 n64 = pg_hton64(i); - appendBinaryStringInfo(buf, (char *) &n64, sizeof(n64)); + appendBinaryStringInfoNT(buf, (char *) &n64, sizeof(n64)); } /* -------------------------------- @@ -297,7 +297,7 @@ pq_sendfloat4(StringInfo buf, float4 f) swap.f = f; swap.i = pg_hton32(swap.i); - appendBinaryStringInfo(buf, (char *) &swap.i, 4); + appendBinaryStringInfoNT(buf, (char *) &swap.i, 4); } /* -------------------------------- diff --git a/src/include/lib/stringinfo.h b/src/include/lib/stringinfo.h index 9694ea3f21..01b845db44 100644 --- a/src/include/lib/stringinfo.h +++ b/src/include/lib/stringinfo.h @@ -143,6 +143,14 @@ extern void appendStringInfoSpaces(StringInfo str, int count); extern void appendBinaryStringInfo(StringInfo str, const char *data, int datalen); +/*------------------------ + * appendBinaryStringInfoNT + * Append arbitrary binary data to a StringInfo, allocating more space + * if necessary. Does not ensure a trailing null-byte exists. + */ +extern void appendBinaryStringInfoNT(StringInfo str, + const char *data, int datalen); + /*------------------------ * enlargeStringInfo * Make sure a StringInfo's buffer can hold at least 'needed' more bytes. 
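To make the behavioral difference concrete, here is a minimal usage sketch (illustrative only, not part of the patch): both calls leave identical payload bytes in the buffer, but only the first guarantees a NUL terminator after them.

    StringInfoData buf;
    uint32      netval = pg_hton32(42);

    initStringInfo(&buf);

    /* guarantees buf.data[buf.len] == '\0' afterwards */
    appendBinaryStringInfo(&buf, (char *) &netval, sizeof(netval));

    /* skips that terminator write; fine for binary payloads, where an
     * embedded zero byte has no meaning as a terminator anyway */
    appendBinaryStringInfoNT(&buf, (char *) &netval, sizeof(netval));

    pfree(buf.data);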
From 1de09ad8eb1fa673ee7899d6dfbb2b49ba204818 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 16:01:52 -0700 Subject: [PATCH 0367/1087] Add more efficient functions to pqformat API. There are three prongs to achieve greater efficiency here: 1) Allow reusing a stringbuffer across pq_beginmessage/endmessage, with the new pq_beginmessage_reuse/endmessage_reuse. This can be beneficial both because it avoids allocating the initial buffer, and because it's more likely to already have a correctly sized buffer. 2) Replace pq_sendint() with pq_sendint$width() inline functions. Previously, unnecessary and poorly predictable branches were needed in pq_sendint(). Additionally the replacement functions are implemented more efficiently. pq_sendint is now deprecated; a separate commit will convert all in-tree callers. 3) Add pq_writeint$width(), pq_writestring(). These rely on sufficient space in the StringInfo's buffer, avoiding individual space checks & potential individual resizing. To allow this to be used for strings, expose mbutils.c's MAX_CONVERSION_GROWTH. Followup commits will make use of these facilities. Author: Andres Freund Discussion: https://postgr.es/m/20170914063418.sckdzgjfrsbekae4@alap3.anarazel.de --- src/backend/libpq/pqformat.c | 88 +++++++---------- src/backend/utils/mb/mbutils.c | 11 --- src/include/libpq/pqformat.h | 168 ++++++++++++++++++++++++++++++++- src/include/mb/pg_wchar.h | 11 +++ 4 files changed, 208 insertions(+), 70 deletions(-) diff --git a/src/backend/libpq/pqformat.c b/src/backend/libpq/pqformat.c index 2414d0d8e9..a5698390ae 100644 --- a/src/backend/libpq/pqformat.c +++ b/src/backend/libpq/pqformat.c @@ -97,13 +97,24 @@ pq_beginmessage(StringInfo buf, char msgtype) } /* -------------------------------- - * pq_sendbyte - append a raw byte to a StringInfo buffer + + * pq_beginmessage_reuse - initialize for sending a message, reuse buffer + * + * This requires the buffer to be allocated in a sufficiently long-lived + * memory context. * -------------------------------- */ void -pq_sendbyte(StringInfo buf, int byt) +pq_beginmessage_reuse(StringInfo buf, char msgtype) { - appendStringInfoCharMacro(buf, byt); + resetStringInfo(buf); + + /* + * We stash the message type into the buffer's cursor field, expecting + * that the pq_sendXXX routines won't touch it. We could alternatively + * make it the first byte of the buffer contents, but this seems easier. + */ + buf->cursor = msgtype; } /* -------------------------------- @@ -113,6 +124,7 @@ pq_sendbyte(StringInfo buf, int byt) void pq_sendbytes(StringInfo buf, const char *data, int datalen) { + /* use variant that maintains a trailing null-byte, out of caution */ appendBinaryStringInfo(buf, data, datalen); } @@ -137,13 +149,13 @@ pq_sendcountedtext(StringInfo buf, const char *str, int slen, if (p != str) /* actual conversion has been done?
*/ { slen = strlen(p); - pq_sendint(buf, slen + extra, 4); + pq_sendint32(buf, slen + extra); appendBinaryStringInfoNT(buf, p, slen); pfree(p); } else { - pq_sendint(buf, slen + extra, 4); + pq_sendint32(buf, slen + extra); appendBinaryStringInfoNT(buf, str, slen); } } @@ -227,53 +239,6 @@ pq_send_ascii_string(StringInfo buf, const char *str) appendStringInfoChar(buf, '\0'); } -/* -------------------------------- - * pq_sendint - append a binary integer to a StringInfo buffer - * -------------------------------- - */ -void -pq_sendint(StringInfo buf, int i, int b) -{ - unsigned char n8; - uint16 n16; - uint32 n32; - - switch (b) - { - case 1: - n8 = (unsigned char) i; - appendBinaryStringInfoNT(buf, (char *) &n8, 1); - break; - case 2: - n16 = pg_hton16((uint16) i); - appendBinaryStringInfoNT(buf, (char *) &n16, 2); - break; - case 4: - n32 = pg_hton32((uint32) i); - appendBinaryStringInfoNT(buf, (char *) &n32, 4); - break; - default: - elog(ERROR, "unsupported integer size %d", b); - break; - } -} - -/* -------------------------------- - * pq_sendint64 - append a binary 8-byte int to a StringInfo buffer - * - * It is tempting to merge this with pq_sendint, but we'd have to make the - * argument int64 for all data widths --- that could be a big performance - * hit on machines where int64 isn't efficient. - * -------------------------------- - */ -void -pq_sendint64(StringInfo buf, int64 i) -{ - uint64 n64 = pg_hton64(i); - - appendBinaryStringInfoNT(buf, (char *) &n64, sizeof(n64)); -} - /* -------------------------------- * pq_sendfloat4 - append a float4 to a StringInfo buffer * @@ -295,9 +260,7 @@ pq_sendfloat4(StringInfo buf, float4 f) } swap; swap.f = f; - swap.i = pg_hton32(swap.i); - - appendBinaryStringInfoNT(buf, (char *) &swap.i, 4); + pq_sendint32(buf, swap.i); } /* -------------------------------- @@ -341,6 +304,21 @@ pq_endmessage(StringInfo buf) buf->data = NULL; } +/* -------------------------------- + * pq_endmessage_reuse - send the completed message to the frontend + * + * The data buffer is *not* freed, allowing to reuse the buffer with + * pg_beginmessage_reuse. + -------------------------------- + */ + +void +pq_endmessage_reuse(StringInfo buf) +{ + /* msgtype was saved in cursor field */ + (void) pq_putmessage(buf->cursor, buf->data, buf->len); +} + /* -------------------------------- * pq_begintypsend - initialize for constructing a bytea result diff --git a/src/backend/utils/mb/mbutils.c b/src/backend/utils/mb/mbutils.c index c4fbe0903b..56f4dc1453 100644 --- a/src/backend/utils/mb/mbutils.c +++ b/src/backend/utils/mb/mbutils.c @@ -41,17 +41,6 @@ #include "utils/memutils.h" #include "utils/syscache.h" -/* - * When converting strings between different encodings, we assume that space - * for converted result is 4-to-1 growth in the worst case. The rate for - * currently supported encoding pairs are within 3 (SJIS JIS X0201 half width - * kanna -> UTF8 is the worst case). So "4" should be enough for the moment. - * - * Note that this is not the same as the maximum character width in any - * particular encoding. 
- */ -#define MAX_CONVERSION_GROWTH 4 - /* * We maintain a simple linked list caching the fmgr lookup info for the * currently selected conversion functions, as well as any that have been diff --git a/src/include/libpq/pqformat.h b/src/include/libpq/pqformat.h index 32112547a0..9a546b4891 100644 --- a/src/include/libpq/pqformat.h +++ b/src/include/libpq/pqformat.h @@ -14,20 +14,180 @@ #define PQFORMAT_H #include "lib/stringinfo.h" +#include "mb/pg_wchar.h" +#include "port/pg_bswap.h" extern void pq_beginmessage(StringInfo buf, char msgtype); -extern void pq_sendbyte(StringInfo buf, int byt); +extern void pq_beginmessage_reuse(StringInfo buf, char msgtype); +extern void pq_endmessage(StringInfo buf); +extern void pq_endmessage_reuse(StringInfo buf); + extern void pq_sendbytes(StringInfo buf, const char *data, int datalen); extern void pq_sendcountedtext(StringInfo buf, const char *str, int slen, bool countincludesself); extern void pq_sendtext(StringInfo buf, const char *str, int slen); extern void pq_sendstring(StringInfo buf, const char *str); extern void pq_send_ascii_string(StringInfo buf, const char *str); -extern void pq_sendint(StringInfo buf, int i, int b); -extern void pq_sendint64(StringInfo buf, int64 i); extern void pq_sendfloat4(StringInfo buf, float4 f); extern void pq_sendfloat8(StringInfo buf, float8 f); -extern void pq_endmessage(StringInfo buf); + +extern void pq_sendfloat4(StringInfo buf, float4 f); +extern void pq_sendfloat8(StringInfo buf, float8 f); + +/* + * Append a int8 to a StringInfo buffer, which already has enough space + * preallocated. + * + * The use of restrict allows the compiler to optimize the code based on the + * assumption that buf, buf->len, buf->data and *buf->data don't + * overlap. Without the annotation buf->len etc cannot be kept in a register + * over subsequent pq_writeint* calls. + */ +static inline void +pq_writeint8(StringInfo restrict buf, int8 i) +{ + int8 ni = i; + + Assert(buf->len + sizeof(i) <= buf->maxlen); + memcpy((char *restrict) (buf->data + buf->len), &ni, sizeof(ni)); + buf->len += sizeof(i); +} + +/* + * Append a int16 to a StringInfo buffer, which already has enough space + * preallocated. + */ +static inline void +pq_writeint16(StringInfo restrict buf, int16 i) +{ + int16 ni = pg_hton16(i); + + Assert(buf->len + sizeof(ni) <= buf->maxlen); + memcpy((char *restrict) (buf->data + buf->len), &ni, sizeof(i)); + buf->len += sizeof(i); +} + +/* + * Append a int32 to a StringInfo buffer, which already has enough space + * preallocated. + */ +static inline void +pq_writeint32(StringInfo restrict buf, int32 i) +{ + int32 ni = pg_hton32(i); + + Assert(buf->len + sizeof(i) <= buf->maxlen); + memcpy((char *restrict) (buf->data + buf->len), &ni, sizeof(i)); + buf->len += sizeof(i); +} + +/* + * Append a int64 to a StringInfo buffer, which already has enough space + * preallocated. + */ +static inline void +pq_writeint64(StringInfo restrict buf, int64 i) +{ + int64 ni = pg_hton64(i); + + Assert(buf->len + sizeof(i) <= buf->maxlen); + memcpy((char *restrict) (buf->data + buf->len), &ni, sizeof(i)); + buf->len += sizeof(i); +} + +/* + * Append a null-terminated text string (with conversion) to a buffer with + * preallocated space. + * + * NB: The pre-allocated space needs to be sufficient for the string after + * converting to client encoding. + * + * NB: passed text string must be null-terminated, and so is the data + * sent to the frontend. 
+ */ +static inline void +pq_writestring(StringInfo restrict buf, const char *restrict str) +{ + int slen = strlen(str); + char *p; + + p = pg_server_to_client(str, slen); + if (p != str) /* actual conversion has been done? */ + slen = strlen(p); + + Assert(buf->len + slen + 1 <= buf->maxlen); + + memcpy(((char *restrict) buf->data + buf->len), p, slen + 1); + buf->len += slen + 1; + + if (p != str) + pfree(p); +} + +/* append a binary int8 to a StringInfo buffer */ +static inline void +pq_sendint8(StringInfo buf, int8 i) +{ + enlargeStringInfo(buf, sizeof(i)); + pq_writeint8(buf, i); +} + +/* append a binary int16 to a StringInfo buffer */ +static inline void +pq_sendint16(StringInfo buf, int16 i) +{ + enlargeStringInfo(buf, sizeof(i)); + pq_writeint16(buf, i); +} + +/* append a binary int32 to a StringInfo buffer */ +static inline void +pq_sendint32(StringInfo buf, int32 i) +{ + enlargeStringInfo(buf, sizeof(i)); + pq_writeint32(buf, i); +} + +/* append a binary int64 to a StringInfo buffer */ +static inline void +pq_sendint64(StringInfo buf, int64 i) +{ + enlargeStringInfo(buf, sizeof(i)); + pq_writeint64(buf, i); +} + +/* append a binary byte to a StringInfo buffer */ +static inline void +pq_sendbyte(StringInfo buf, int8 byt) +{ + pq_sendint8(buf, byt); +} + +/* + * Append a binary integer to a StringInfo buffer + * + * This function is deprecated. + */ +static inline void +pq_sendint(StringInfo buf, int i, int b) +{ + switch (b) + { + case 1: + pq_sendint8(buf, (int8) i); + break; + case 2: + pq_sendint16(buf, (int16) i); + break; + case 4: + pq_sendint32(buf, (int32) i); + break; + default: + elog(ERROR, "unsupported integer size %d", b); + break; + } +} + extern void pq_begintypsend(StringInfo buf); extern bytea *pq_endtypsend(StringInfo buf); diff --git a/src/include/mb/pg_wchar.h b/src/include/mb/pg_wchar.h index d57ef017cb..9227d634f6 100644 --- a/src/include/mb/pg_wchar.h +++ b/src/include/mb/pg_wchar.h @@ -304,6 +304,17 @@ typedef enum pg_enc /* On FE are possible all encodings */ #define PG_VALID_FE_ENCODING(_enc) PG_VALID_ENCODING(_enc) +/* + * When converting strings between different encodings, we assume that space + * for converted result is 4-to-1 growth in the worst case. The rate for + * currently supported encoding pairs are within 3 (SJIS JIS X0201 half width + * kanna -> UTF8 is the worst case). So "4" should be enough for the moment. + * + * Note that this is not the same as the maximum character width in any + * particular encoding. + */ +#define MAX_CONVERSION_GROWTH 4 + /* * Table for mapping an encoding number to official encoding name and * possibly other subsidiary data. Be careful to check encoding number From f2dec34e19d3969ddd616e671fe9a7b968bec812 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 16:26:35 -0700 Subject: [PATCH 0368/1087] Use one stringbuffer for all rows printed in printtup.c. This avoids newly allocating, and then possibly growing, the stringbuffer for every row. For wide rows this can substantially reduce memory allocator overhead, at the price of not immediately reducing memory usage after outputting an especially wide row. 
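To make the pattern concrete, here is a standalone sketch of the reuse idiom (plain C; the Buf type and helper names are invented stand-ins for StringInfo, not PostgreSQL source): the buffer is allocated once up front and merely reset between rows, so the allocation made for the widest row seen so far gets recycled.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Invented stand-in for StringInfo: grows on demand, resettable. */
typedef struct
{
	char	   *data;
	size_t		len;
	size_t		cap;
} Buf;

static void
buf_init(Buf *b)
{
	b->cap = 64;
	b->len = 0;
	b->data = malloc(b->cap);
}

static void
buf_reset(Buf *b)
{
	b->len = 0;					/* keep the allocation for the next row */
}

static void
buf_append(Buf *b, const char *s)
{
	size_t		n = strlen(s);

	while (b->len + n + 1 > b->cap)
	{
		b->cap *= 2;
		b->data = realloc(b->data, b->cap);
	}
	memcpy(b->data + b->len, s, n + 1);
	b->len += n;
}

int
main(void)
{
	Buf			buf;

	buf_init(&buf);				/* once, as in printtup_startup() */
	for (int row = 0; row < 3; row++)
	{
		buf_reset(&buf);		/* per row, as in pq_beginmessage_reuse() */
		buf_append(&buf, "row payload");
		printf("row %d: %zu bytes (capacity %zu)\n", row, buf.len, buf.cap);
	}
	free(buf.data);
	return 0;
}

As the message above notes, the trade-off is that capacity never shrinks: one unusually wide row keeps the buffer large until it is eventually freed.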
Author: Andres Freund Discussion: https://postgr.es/m/20170914063418.sckdzgjfrsbekae4@alap3.anarazel.de --- src/backend/access/common/printtup.c | 46 +++++++++++++++------------- 1 file changed, 25 insertions(+), 21 deletions(-) diff --git a/src/backend/access/common/printtup.c b/src/backend/access/common/printtup.c index 20d20e623e..c00b372b84 100644 --- a/src/backend/access/common/printtup.c +++ b/src/backend/access/common/printtup.c @@ -57,6 +57,7 @@ typedef struct typedef struct { DestReceiver pub; /* publicly-known function pointers */ + StringInfoData buf; /* output buffer */ Portal portal; /* the Portal we are printing from */ bool sendDescrip; /* send RowDescription at startup? */ TupleDesc attrinfo; /* The attr info we are set up for */ @@ -127,6 +128,9 @@ printtup_startup(DestReceiver *self, int operation, TupleDesc typeinfo) DR_printtup *myState = (DR_printtup *) self; Portal portal = myState->portal; + /* create buffer to be used for all messages */ + initStringInfo(&myState->buf); + /* * Create a temporary memory context that we can reset once per row to * recover palloc'd memory. This avoids any problems with leaks inside @@ -302,7 +306,7 @@ printtup(TupleTableSlot *slot, DestReceiver *self) TupleDesc typeinfo = slot->tts_tupleDescriptor; DR_printtup *myState = (DR_printtup *) self; MemoryContext oldcontext; - StringInfoData buf; + StringInfo buf = &myState->buf; int natts = typeinfo->natts; int i; @@ -319,9 +323,9 @@ printtup(TupleTableSlot *slot, DestReceiver *self) /* * Prepare a DataRow message (note buffer is in per-row context) */ - pq_beginmessage(&buf, 'D'); + pq_beginmessage_reuse(buf, 'D'); - pq_sendint(&buf, natts, 2); + pq_sendint(buf, natts, 2); /* * send the attributes of this tuple @@ -333,7 +337,7 @@ printtup(TupleTableSlot *slot, DestReceiver *self) if (slot->tts_isnull[i]) { - pq_sendint(&buf, -1, 4); + pq_sendint(buf, -1, 4); continue; } @@ -354,7 +358,7 @@ printtup(TupleTableSlot *slot, DestReceiver *self) char *outputstr; outputstr = OutputFunctionCall(&thisState->finfo, attr); - pq_sendcountedtext(&buf, outputstr, strlen(outputstr), false); + pq_sendcountedtext(buf, outputstr, strlen(outputstr), false); } else { @@ -362,13 +366,13 @@ printtup(TupleTableSlot *slot, DestReceiver *self) bytea *outputbytes; outputbytes = SendFunctionCall(&thisState->finfo, attr); - pq_sendint(&buf, VARSIZE(outputbytes) - VARHDRSZ, 4); - pq_sendbytes(&buf, VARDATA(outputbytes), + pq_sendint(buf, VARSIZE(outputbytes) - VARHDRSZ, 4); + pq_sendbytes(buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ); } } - pq_endmessage(&buf); + pq_endmessage_reuse(buf); /* Return to caller's context, and flush row's temporary memory */ MemoryContextSwitchTo(oldcontext); @@ -387,7 +391,7 @@ printtup_20(TupleTableSlot *slot, DestReceiver *self) TupleDesc typeinfo = slot->tts_tupleDescriptor; DR_printtup *myState = (DR_printtup *) self; MemoryContext oldcontext; - StringInfoData buf; + StringInfo buf = &myState->buf; int natts = typeinfo->natts; int i, j, @@ -406,7 +410,7 @@ printtup_20(TupleTableSlot *slot, DestReceiver *self) /* * tell the frontend to expect new tuple data (in ASCII style) */ - pq_beginmessage(&buf, 'D'); + pq_beginmessage_reuse(buf, 'D'); /* * send a bitmap of which attributes are not null @@ -420,13 +424,13 @@ printtup_20(TupleTableSlot *slot, DestReceiver *self) k >>= 1; if (k == 0) /* end of byte? 
*/ { - pq_sendint(&buf, j, 1); + pq_sendint(buf, j, 1); j = 0; k = 1 << 7; } } if (k != (1 << 7)) /* flush last partial byte */ - pq_sendint(&buf, j, 1); + pq_sendint(buf, j, 1); /* * send the attributes of this tuple @@ -443,10 +447,10 @@ printtup_20(TupleTableSlot *slot, DestReceiver *self) Assert(thisState->format == 0); outputstr = OutputFunctionCall(&thisState->finfo, attr); - pq_sendcountedtext(&buf, outputstr, strlen(outputstr), true); + pq_sendcountedtext(buf, outputstr, strlen(outputstr), true); } - pq_endmessage(&buf); + pq_endmessage_reuse(buf); /* Return to caller's context, and flush row's temporary memory */ MemoryContextSwitchTo(oldcontext); @@ -572,7 +576,7 @@ printtup_internal_20(TupleTableSlot *slot, DestReceiver *self) TupleDesc typeinfo = slot->tts_tupleDescriptor; DR_printtup *myState = (DR_printtup *) self; MemoryContext oldcontext; - StringInfoData buf; + StringInfo buf = &myState->buf; int natts = typeinfo->natts; int i, j, @@ -591,7 +595,7 @@ printtup_internal_20(TupleTableSlot *slot, DestReceiver *self) /* * tell the frontend to expect new tuple data (in binary style) */ - pq_beginmessage(&buf, 'B'); + pq_beginmessage_reuse(buf, 'B'); /* * send a bitmap of which attributes are not null @@ -605,13 +609,13 @@ printtup_internal_20(TupleTableSlot *slot, DestReceiver *self) k >>= 1; if (k == 0) /* end of byte? */ { - pq_sendint(&buf, j, 1); + pq_sendint(buf, j, 1); j = 0; k = 1 << 7; } } if (k != (1 << 7)) /* flush last partial byte */ - pq_sendint(&buf, j, 1); + pq_sendint(buf, j, 1); /* * send the attributes of this tuple @@ -628,12 +632,12 @@ printtup_internal_20(TupleTableSlot *slot, DestReceiver *self) Assert(thisState->format == 1); outputbytes = SendFunctionCall(&thisState->finfo, attr); - pq_sendint(&buf, VARSIZE(outputbytes) - VARHDRSZ, 4); - pq_sendbytes(&buf, VARDATA(outputbytes), + pq_sendint(buf, VARSIZE(outputbytes) - VARHDRSZ, 4); + pq_sendbytes(buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ); } - pq_endmessage(&buf); + pq_endmessage_reuse(buf); /* Return to caller's context, and flush row's temporary memory */ MemoryContextSwitchTo(oldcontext); From cff440d368690f94fbda1a475277e90ea2263843 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 11 Oct 2017 19:52:46 -0400 Subject: [PATCH 0369/1087] pg_stat_statements: Widen query IDs from 32 bits to 64 bits. This takes advantage of the infrastructure introduced by commit 81c5e46c490e2426db243eada186995da5bb0ba7 to greatly reduce the likelihood that two different queries will end up with the same query ID. It's still possible, of course, but whereas before, the chances of a collision reached 25% at around 50,000 queries, it will now take more than 3 billion queries. Backward incompatibility: Because the type exposed at the SQL level is int8, users may now see negative query IDs in the pg_stat_statements view (and also query IDs greater than 4 billion, which was the old limit). Patch by me, reviewed by Michael Paquier and Peter Geoghegan.
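The quoted figures follow from the usual birthday approximation, P(collision) ~ 1 - exp(-n^2 / 2N) for n queries hashed into N possible IDs. The standalone C program below (illustrative only, not part of the patch; compile with -lm) reproduces both numbers.

#include <math.h>
#include <stdio.h>

/* Birthday approximation: P(collision) among n IDs drawn from 2^bits */
static double
collision_prob(double n, int bits)
{
	double		N = ldexp(1.0, bits);	/* N = 2^bits */

	return 1.0 - exp(-(n * n) / (2.0 * N));
}

int
main(void)
{
	/* 32-bit IDs: roughly a 25% collision chance at 50,000 queries */
	printf("32-bit, n=50000:  p = %.3f\n", collision_prob(50000.0, 32));

	/* 64-bit IDs: n needed for p = 0.25 is sqrt(2 * N * ln(4/3)) */
	printf("64-bit, p=0.25 at n = %.2e queries\n",
		   sqrt(2.0 * ldexp(1.0, 64) * log(4.0 / 3.0)));
	return 0;
}

This prints p = 0.252 for the 32-bit case and about 3.26e+09 for the 64-bit case, matching the "25% around 50,000 queries" and "more than 3 billion queries" claims.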
Discussion: http://postgr.es/m/CA+TgmobG_Kp4cBKFmsznUAaM1GWW6hhRNiZC0KjRMOOeYnz5Yw@mail.gmail.com --- .../pg_stat_statements/pg_stat_statements.c | 76 ++++++------------- src/backend/executor/execParallel.c | 2 +- src/backend/nodes/outfuncs.c | 7 +- src/backend/nodes/readfuncs.c | 11 ++- src/backend/rewrite/rewriteHandler.c | 2 +- src/include/nodes/parsenodes.h | 2 +- src/include/nodes/plannodes.h | 2 +- 7 files changed, 42 insertions(+), 60 deletions(-) diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c index a0e7a46871..b04b4d6ce1 100644 --- a/contrib/pg_stat_statements/pg_stat_statements.c +++ b/contrib/pg_stat_statements/pg_stat_statements.c @@ -21,7 +21,7 @@ * as the collations of Vars and, most notably, the values of constants. * * This jumble is acquired at the end of parse analysis of each query, and - * a 32-bit hash of it is stored into the query's Query.queryId field. + * a 64-bit hash of it is stored into the query's Query.queryId field. * The server then copies this value around, making it available in plan * tree(s) generated from the query. The executor can then use this value * to blame query costs on the proper queryId. @@ -95,7 +95,7 @@ PG_MODULE_MAGIC; #define PGSS_TEXT_FILE PG_STAT_TMP_DIR "/pgss_query_texts.stat" /* Magic number identifying the stats file format */ -static const uint32 PGSS_FILE_HEADER = 0x20140125; +static const uint32 PGSS_FILE_HEADER = 0x20171004; /* PostgreSQL major version number, changes in which invalidate all entries */ static const uint32 PGSS_PG_MAJOR_VERSION = PG_VERSION_NUM / 100; @@ -130,7 +130,7 @@ typedef struct pgssHashKey { Oid userid; /* user OID */ Oid dbid; /* database OID */ - uint32 queryid; /* query identifier */ + uint64 queryid; /* query identifier */ } pgssHashKey; /* @@ -301,10 +301,8 @@ static void pgss_ProcessUtility(PlannedStmt *pstmt, const char *queryString, ProcessUtilityContext context, ParamListInfo params, QueryEnvironment *queryEnv, DestReceiver *dest, char *completionTag); -static uint32 pgss_hash_fn(const void *key, Size keysize); -static int pgss_match_fn(const void *key1, const void *key2, Size keysize); -static uint32 pgss_hash_string(const char *str, int len); -static void pgss_store(const char *query, uint32 queryId, +static uint64 pgss_hash_string(const char *str, int len); +static void pgss_store(const char *query, uint64 queryId, int query_location, int query_len, double total_time, uint64 rows, const BufferUsage *bufusage, @@ -500,12 +498,10 @@ pgss_shmem_startup(void) memset(&info, 0, sizeof(info)); info.keysize = sizeof(pgssHashKey); info.entrysize = sizeof(pgssEntry); - info.hash = pgss_hash_fn; - info.match = pgss_match_fn; pgss_hash = ShmemInitHash("pg_stat_statements hash", pgss_max, pgss_max, &info, - HASH_ELEM | HASH_FUNCTION | HASH_COMPARE); + HASH_ELEM | HASH_BLOBS); LWLockRelease(AddinShmemInitLock); @@ -781,7 +777,7 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query) prev_post_parse_analyze_hook(pstate, query); /* Assert we didn't do this already */ - Assert(query->queryId == 0); + Assert(query->queryId == UINT64CONST(0)); /* Safety check... 
*/ if (!pgss || !pgss_hash) @@ -797,7 +793,7 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query) */ if (query->utilityStmt) { - query->queryId = 0; + query->queryId = UINT64CONST(0); return; } @@ -812,14 +808,15 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query) /* Compute query ID and mark the Query node with it */ JumbleQuery(&jstate, query); - query->queryId = hash_any(jstate.jumble, jstate.jumble_len); + query->queryId = + DatumGetUInt64(hash_any_extended(jstate.jumble, jstate.jumble_len, 0)); /* * If we are unlucky enough to get a hash of zero, use 1 instead, to * prevent confusion with the utility-statement case. */ - if (query->queryId == 0) - query->queryId = 1; + if (query->queryId == UINT64CONST(0)) + query->queryId = UINT64CONST(1); /* * If we were able to identify any ignorable constants, we immediately @@ -855,7 +852,7 @@ pgss_ExecutorStart(QueryDesc *queryDesc, int eflags) * counting of optimizable statements that are directly contained in * utility statements. */ - if (pgss_enabled() && queryDesc->plannedstmt->queryId != 0) + if (pgss_enabled() && queryDesc->plannedstmt->queryId != UINT64CONST(0)) { /* * Set up to track total elapsed time in ExecutorRun. Make sure the @@ -926,9 +923,9 @@ pgss_ExecutorFinish(QueryDesc *queryDesc) static void pgss_ExecutorEnd(QueryDesc *queryDesc) { - uint32 queryId = queryDesc->plannedstmt->queryId; + uint64 queryId = queryDesc->plannedstmt->queryId; - if (queryId != 0 && queryDesc->totaltime && pgss_enabled()) + if (queryId != UINT64CONST(0) && queryDesc->totaltime && pgss_enabled()) { /* * Make sure stats accumulation is done. (Note: it's okay if several @@ -1069,45 +1066,16 @@ pgss_ProcessUtility(PlannedStmt *pstmt, const char *queryString, } } -/* - * Calculate hash value for a key - */ -static uint32 -pgss_hash_fn(const void *key, Size keysize) -{ - const pgssHashKey *k = (const pgssHashKey *) key; - - return hash_uint32((uint32) k->userid) ^ - hash_uint32((uint32) k->dbid) ^ - hash_uint32((uint32) k->queryid); -} - -/* - * Compare two keys - zero means match - */ -static int -pgss_match_fn(const void *key1, const void *key2, Size keysize) -{ - const pgssHashKey *k1 = (const pgssHashKey *) key1; - const pgssHashKey *k2 = (const pgssHashKey *) key2; - - if (k1->userid == k2->userid && - k1->dbid == k2->dbid && - k1->queryid == k2->queryid) - return 0; - else - return 1; -} - /* * Given an arbitrarily long query string, produce a hash for the purposes of * identifying the query, without normalizing constants. Used when hashing * utility statements. */ -static uint32 +static uint64 pgss_hash_string(const char *str, int len) { - return hash_any((const unsigned char *) str, len); + return DatumGetUInt64(hash_any_extended((const unsigned char *) str, + len, 0)); } /* @@ -1121,7 +1089,7 @@ pgss_hash_string(const char *str, int len) * query string. total_time, rows, bufusage are ignored in this case. */ static void -pgss_store(const char *query, uint32 queryId, +pgss_store(const char *query, uint64 queryId, int query_location, int query_len, double total_time, uint64 rows, const BufferUsage *bufusage, @@ -1173,7 +1141,7 @@ pgss_store(const char *query, uint32 queryId, /* * For utility statements, we just hash the query string to get an ID. 
*/ - if (queryId == 0) + if (queryId == UINT64CONST(0)) queryId = pgss_hash_string(query, query_len); /* Set up key for hashtable search */ @@ -2324,8 +2292,10 @@ AppendJumble(pgssJumbleState *jstate, const unsigned char *item, Size size) if (jumble_len >= JUMBLE_SIZE) { - uint32 start_hash = hash_any(jumble, JUMBLE_SIZE); + uint64 start_hash; + start_hash = DatumGetUInt64(hash_any_extended(jumble, + JUMBLE_SIZE, 0)); memcpy(jumble, &start_hash, sizeof(start_hash)); jumble_len = sizeof(start_hash); } diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 5dc26ed17a..1b477baecb 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -162,7 +162,7 @@ ExecSerializePlan(Plan *plan, EState *estate) */ pstmt = makeNode(PlannedStmt); pstmt->commandType = CMD_SELECT; - pstmt->queryId = 0; + pstmt->queryId = UINT64CONST(0); pstmt->hasReturning = false; pstmt->hasModifyingCTE = false; pstmt->canSetTag = true; diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 2532edc94a..43d62062bc 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -54,6 +54,11 @@ static void outChar(StringInfo str, char c); #define WRITE_UINT_FIELD(fldname) \ appendStringInfo(str, " :" CppAsString(fldname) " %u", node->fldname) +/* Write an unsigned integer field (anything written with UINT64_FORMAT) */ +#define WRITE_UINT64_FIELD(fldname) \ + appendStringInfo(str, " :" CppAsString(fldname) " " UINT64_FORMAT, \ + node->fldname) + /* Write an OID field (don't hard-wire assumption that OID is same as uint) */ #define WRITE_OID_FIELD(fldname) \ appendStringInfo(str, " :" CppAsString(fldname) " %u", node->fldname) @@ -260,7 +265,7 @@ _outPlannedStmt(StringInfo str, const PlannedStmt *node) WRITE_NODE_TYPE("PLANNEDSTMT"); WRITE_ENUM_FIELD(commandType, CmdType); - WRITE_UINT_FIELD(queryId); + WRITE_UINT64_FIELD(queryId); WRITE_BOOL_FIELD(hasReturning); WRITE_BOOL_FIELD(hasModifyingCTE); WRITE_BOOL_FIELD(canSetTag); diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 07ba69178c..ccb6a1f4ac 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -33,6 +33,7 @@ #include "nodes/parsenodes.h" #include "nodes/plannodes.h" #include "nodes/readfuncs.h" +#include "utils/builtins.h" /* @@ -70,6 +71,12 @@ token = pg_strtok(&length); /* get field value */ \ local_node->fldname = atoui(token) +/* Read an unsigned integer field (anything written using UINT64_FORMAT) */ +#define READ_UINT64_FIELD(fldname) \ + token = pg_strtok(&length); /* skip :fldname */ \ + token = pg_strtok(&length); /* get field value */ \ + local_node->fldname = pg_strtouint64(token, NULL, 10) + /* Read an long integer field (anything written as ":fldname %ld") */ #define READ_LONG_FIELD(fldname) \ token = pg_strtok(&length); /* skip :fldname */ \ @@ -231,7 +238,7 @@ _readQuery(void) READ_ENUM_FIELD(commandType, CmdType); READ_ENUM_FIELD(querySource, QuerySource); - local_node->queryId = 0; /* not saved in output format */ + local_node->queryId = UINT64CONST(0); /* not saved in output format */ READ_BOOL_FIELD(canSetTag); READ_NODE_FIELD(utilityStmt); READ_INT_FIELD(resultRelation); @@ -1456,7 +1463,7 @@ _readPlannedStmt(void) READ_LOCALS(PlannedStmt); READ_ENUM_FIELD(commandType, CmdType); - READ_UINT_FIELD(queryId); + READ_UINT64_FIELD(queryId); READ_BOOL_FIELD(hasReturning); READ_BOOL_FIELD(hasModifyingCTE); READ_BOOL_FIELD(canSetTag); diff --git a/src/backend/rewrite/rewriteHandler.c 
b/src/backend/rewrite/rewriteHandler.c index 7054d4f77d..7a61af7905 100644 --- a/src/backend/rewrite/rewriteHandler.c +++ b/src/backend/rewrite/rewriteHandler.c @@ -3575,7 +3575,7 @@ RewriteQuery(Query *parsetree, List *rewrite_events) List * QueryRewrite(Query *parsetree) { - uint32 input_query_id = parsetree->queryId; + uint64 input_query_id = parsetree->queryId; List *querylist; List *results; ListCell *l; diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 50eec730b3..732e5d6788 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -111,7 +111,7 @@ typedef struct Query QuerySource querySource; /* where did I come from? */ - uint32 queryId; /* query identifier (can be set by plugins) */ + uint64 queryId; /* query identifier (can be set by plugins) */ bool canSetTag; /* do I set the command result tag? */ diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h index a382331f41..dd74efa9a4 100644 --- a/src/include/nodes/plannodes.h +++ b/src/include/nodes/plannodes.h @@ -44,7 +44,7 @@ typedef struct PlannedStmt CmdType commandType; /* select|insert|update|delete|utility */ - uint32 queryId; /* query identifier (copied from Query) */ + uint64 queryId; /* query identifier (copied from Query) */ bool hasReturning; /* is it insert|update|delete RETURNING? */ From 4c119fbcd49ba882791c7b99a1e934b985468e9f Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 16:49:31 -0700 Subject: [PATCH 0370/1087] Improve performance of SendRowDescriptionMessage. There are three categories of changes leading to better performance: - Splitting the per-attribute part of SendRowDescriptionMessage into a v2 and a v3 version allows avoiding branches for every attribute. - Preallocating the size of the buffer to be big enough for all attributes and then using pq_write* avoids unnecessary buffer size checks & resizing. - Reusing a persistently allocated StringInfo for all SendRowDescriptionMessage() invocations avoids repeated allocations & reallocations. Author: Andres Freund Discussion: https://postgr.es/m/20170914063418.sckdzgjfrsbekae4@alap3.anarazel.de --- src/backend/access/common/printtup.c | 146 ++++++++++++++++++++------- src/backend/tcop/postgres.c | 35 +++++-- src/include/access/printtup.h | 4 +- 3 files changed, 138 insertions(+), 47 deletions(-) diff --git a/src/backend/access/common/printtup.c b/src/backend/access/common/printtup.c index c00b372b84..02cd1beef7 100644 --- a/src/backend/access/common/printtup.c +++ b/src/backend/access/common/printtup.c @@ -32,6 +32,10 @@ static bool printtup_internal_20(TupleTableSlot *slot, DestReceiver *self); static void printtup_shutdown(DestReceiver *self); static void printtup_destroy(DestReceiver *self); +static void SendRowDescriptionCols_2(StringInfo buf, TupleDesc typeinfo, + List *targetlist, int16 *formats); +static void SendRowDescriptionCols_3(StringInfo buf, TupleDesc typeinfo, + List *targetlist, int16 *formats); /* ---------------------------------------------------------------- * printtup / debugtup support @@ -161,7 +165,8 @@ printtup_startup(DestReceiver *self, int operation, TupleDesc typeinfo) * descriptor of the tuples. */ if (myState->sendDescrip) - SendRowDescriptionMessage(typeinfo, + SendRowDescriptionMessage(&myState->buf, + typeinfo, FetchPortalTargetList(portal), portal->formats); @@ -189,61 +194,126 @@ printtup_startup(DestReceiver *self, int operation, TupleDesc typeinfo) * send zeroes for the format codes in that case.
*/ void -SendRowDescriptionMessage(TupleDesc typeinfo, List *targetlist, int16 *formats) +SendRowDescriptionMessage(StringInfo buf, TupleDesc typeinfo, + List *targetlist, int16 *formats) { int natts = typeinfo->natts; int proto = PG_PROTOCOL_MAJOR(FrontendProtocol); + + /* tuple descriptor message type */ + pq_beginmessage_reuse(buf, 'T'); + /* # of attrs in tuples */ + pq_sendint16(buf, natts); + + if (proto >= 3) + SendRowDescriptionCols_3(buf, typeinfo, targetlist, formats); + else + SendRowDescriptionCols_2(buf, typeinfo, targetlist, formats); + + pq_endmessage_reuse(buf); +} + +/* + * Send description for each column when using v3+ protocol + */ +static void +SendRowDescriptionCols_3(StringInfo buf, TupleDesc typeinfo, List *targetlist, int16 *formats) +{ + int natts = typeinfo->natts; int i; - StringInfoData buf; ListCell *tlist_item = list_head(targetlist); - pq_beginmessage(&buf, 'T'); /* tuple descriptor message type */ - pq_sendint(&buf, natts, 2); /* # of attrs in tuples */ + /* + * Preallocate memory for the entire message to be sent. That allows to + * use the significantly faster inline pqformat.h functions and to avoid + * reallocations. + * + * Have to overestimate the size of the column-names, to account for + * character set overhead. + */ + enlargeStringInfo(buf, (NAMEDATALEN * MAX_CONVERSION_GROWTH /* attname */ + + sizeof(Oid) /* resorigtbl */ + + sizeof(AttrNumber) /* resorigcol */ + + sizeof(Oid) /* atttypid */ + + sizeof(int16) /* attlen */ + + sizeof(int32) /* attypmod */ + + sizeof(int16) /* format */ + ) * natts); for (i = 0; i < natts; ++i) { Form_pg_attribute att = TupleDescAttr(typeinfo, i); Oid atttypid = att->atttypid; int32 atttypmod = att->atttypmod; + Oid resorigtbl; + AttrNumber resorigcol; + int16 format; + + /* + * If column is a domain, send the base type and typmod instead. + * Lookup before sending any ints, for efficiency. + */ + atttypid = getBaseTypeAndTypmod(atttypid, &atttypmod); - pq_sendstring(&buf, NameStr(att->attname)); - /* column ID info appears in protocol 3.0 and up */ - if (proto >= 3) + /* Do we have a non-resjunk tlist item? */ + while (tlist_item && + ((TargetEntry *) lfirst(tlist_item))->resjunk) + tlist_item = lnext(tlist_item); + if (tlist_item) { - /* Do we have a non-resjunk tlist item? 
*/ - while (tlist_item && - ((TargetEntry *) lfirst(tlist_item))->resjunk) - tlist_item = lnext(tlist_item); - if (tlist_item) - { - TargetEntry *tle = (TargetEntry *) lfirst(tlist_item); - - pq_sendint(&buf, tle->resorigtbl, 4); - pq_sendint(&buf, tle->resorigcol, 2); - tlist_item = lnext(tlist_item); - } - else - { - /* No info available, so send zeroes */ - pq_sendint(&buf, 0, 4); - pq_sendint(&buf, 0, 2); - } + TargetEntry *tle = (TargetEntry *) lfirst(tlist_item); + + resorigtbl = tle->resorigtbl; + resorigcol = tle->resorigcol; + tlist_item = lnext(tlist_item); } - /* If column is a domain, send the base type and typmod instead */ - atttypid = getBaseTypeAndTypmod(atttypid, &atttypmod); - pq_sendint(&buf, (int) atttypid, sizeof(atttypid)); - pq_sendint(&buf, att->attlen, sizeof(att->attlen)); - pq_sendint(&buf, atttypmod, sizeof(atttypmod)); - /* format info appears in protocol 3.0 and up */ - if (proto >= 3) + else { - if (formats) - pq_sendint(&buf, formats[i], 2); - else - pq_sendint(&buf, 0, 2); + /* No info available, so send zeroes */ + resorigtbl = 0; + resorigcol = 0; } + + if (formats) + format = formats[i]; + else + format = 0; + + pq_writestring(buf, NameStr(att->attname)); + pq_writeint32(buf, resorigtbl); + pq_writeint16(buf, resorigcol); + pq_writeint32(buf, atttypid); + pq_writeint16(buf, att->attlen); + pq_writeint32(buf, atttypmod); + pq_writeint16(buf, format); + } +} + +/* + * Send description for each column when using v2 protocol + */ +static void +SendRowDescriptionCols_2(StringInfo buf, TupleDesc typeinfo, List *targetlist, int16 *formats) +{ + int natts = typeinfo->natts; + int i; + + for (i = 0; i < natts; ++i) + { + Form_pg_attribute att = TupleDescAttr(typeinfo, i); + Oid atttypid = att->atttypid; + int32 atttypmod = att->atttypmod; + + /* If column is a domain, send the base type and typmod instead */ + atttypid = getBaseTypeAndTypmod(atttypid, &atttypmod); + + pq_sendstring(buf, NameStr(att->attname)); + /* column ID only info appears in protocol 3.0 and up */ + pq_sendint32(buf, atttypid); + pq_sendint16(buf, att->attlen); + pq_sendint32(buf, atttypmod); + /* format info only appears in protocol 3.0 and up */ } - pq_endmessage(&buf); } /* diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index edea6f177b..338ce81331 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -165,6 +165,10 @@ static bool RecoveryConflictPending = false; static bool RecoveryConflictRetryable = true; static ProcSignalReason RecoveryConflictReason; +/* reused buffer to pass to SendRowDescriptionMessage() */ +static MemoryContext row_description_context = NULL; +static StringInfoData row_description_buf; + /* ---------------------------------------------------------------- * decls for routines only used in this file * ---------------------------------------------------------------- @@ -2315,7 +2319,6 @@ static void exec_describe_statement_message(const char *stmt_name) { CachedPlanSource *psrc; - StringInfoData buf; int i; /* @@ -2371,16 +2374,17 @@ exec_describe_statement_message(const char *stmt_name) /* * First describe the parameters... 
*/ - pq_beginmessage(&buf, 't'); /* parameter description message type */ - pq_sendint(&buf, psrc->num_params, 2); + pq_beginmessage_reuse(&row_description_buf, 't'); /* parameter description + * message type */ + pq_sendint(&row_description_buf, psrc->num_params, 2); for (i = 0; i < psrc->num_params; i++) { Oid ptype = psrc->param_types[i]; - pq_sendint(&buf, (int) ptype, 4); + pq_sendint(&row_description_buf, (int) ptype, 4); } - pq_endmessage(&buf); + pq_endmessage_reuse(&row_description_buf); /* * Next send RowDescription or NoData to describe the result... @@ -2392,7 +2396,10 @@ exec_describe_statement_message(const char *stmt_name) /* Get the plan's primary targetlist */ tlist = CachedPlanGetTargetList(psrc, NULL); - SendRowDescriptionMessage(psrc->resultDesc, tlist, NULL); + SendRowDescriptionMessage(&row_description_buf, + psrc->resultDesc, + tlist, + NULL); } else pq_putemptymessage('n'); /* NoData */ @@ -2444,7 +2451,8 @@ exec_describe_portal_message(const char *portal_name) return; /* can't actually do anything... */ if (portal->tupDesc) - SendRowDescriptionMessage(portal->tupDesc, + SendRowDescriptionMessage(&row_description_buf, + portal->tupDesc, FetchPortalTargetList(portal), portal->formats); else @@ -3830,6 +3838,19 @@ PostgresMain(int argc, char *argv[], "MessageContext", ALLOCSET_DEFAULT_SIZES); + /* + * Create memory context and buffer used for RowDescription messages. As + * SendRowDescriptionMessage(), via exec_describe_statement_message(), is + * frequently executed for every single statement, we don't want to + * allocate a separate buffer every time. + */ + row_description_context = AllocSetContextCreate(TopMemoryContext, + "RowDescriptionContext", + ALLOCSET_DEFAULT_SIZES); + MemoryContextSwitchTo(row_description_context); + initStringInfo(&row_description_buf); + MemoryContextSwitchTo(TopMemoryContext); + /* * Remember stand-alone backend startup time */ diff --git a/src/include/access/printtup.h b/src/include/access/printtup.h index 641715e416..1b5a003a99 100644 --- a/src/include/access/printtup.h +++ b/src/include/access/printtup.h @@ -20,8 +20,8 @@ extern DestReceiver *printtup_create_DR(CommandDest dest); extern void SetRemoteDestReceiverParams(DestReceiver *self, Portal portal); -extern void SendRowDescriptionMessage(TupleDesc typeinfo, List *targetlist, - int16 *formats); +extern void SendRowDescriptionMessage(StringInfo buf, + TupleDesc typeinfo, List *targetlist, int16 *formats); extern void debugStartup(DestReceiver *self, int operation, TupleDesc typeinfo); From 060b069984a69ff0255ce318f10681c553613bef Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 17:16:16 -0700 Subject: [PATCH 0371/1087] Work around overly strict restrict checks by MSVC. Apparently MSVC requires a * before a restrict in a variable declaration, even if the adorned type already is a pointer, just via typedef. As reported by buildfarm animal woodlouse. Author: Andres Freund Discussion: https://postgr.es/m/20171012001320.4putagiruuehtvb6@alap3.anarazel.de --- src/include/libpq/pqformat.h | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/src/include/libpq/pqformat.h b/src/include/libpq/pqformat.h index 9a546b4891..2329669b08 100644 --- a/src/include/libpq/pqformat.h +++ b/src/include/libpq/pqformat.h @@ -42,9 +42,12 @@ extern void pq_sendfloat8(StringInfo buf, float8 f); * assumption that buf, buf->len, buf->data and *buf->data don't
Without the annotation buf->len etc cannot be kept in a register * over subsequent pq_writeint* calls. + * + * The use of StringInfoData * rather than StringInfo is due to MSVC being + * overly picky and demanding a * before a restrict. */ static inline void -pq_writeint8(StringInfo restrict buf, int8 i) +pq_writeint8(StringInfoData * restrict buf, int8 i) { int8 ni = i; @@ -58,7 +61,7 @@ pq_writeint8(StringInfo restrict buf, int8 i) * preallocated. */ static inline void -pq_writeint16(StringInfo restrict buf, int16 i) +pq_writeint16(StringInfoData * restrict buf, int16 i) { int16 ni = pg_hton16(i); @@ -72,7 +75,7 @@ pq_writeint16(StringInfo restrict buf, int16 i) * preallocated. */ static inline void -pq_writeint32(StringInfo restrict buf, int32 i) +pq_writeint32(StringInfoData * restrict buf, int32 i) { int32 ni = pg_hton32(i); @@ -86,7 +89,7 @@ pq_writeint32(StringInfo restrict buf, int32 i) * preallocated. */ static inline void -pq_writeint64(StringInfo restrict buf, int64 i) +pq_writeint64(StringInfoData * restrict buf, int64 i) { int64 ni = pg_hton64(i); @@ -106,7 +109,7 @@ pq_writeint64(StringInfo restrict buf, int64 i) * sent to the frontend. */ static inline void -pq_writestring(StringInfo restrict buf, const char *restrict str) +pq_writestring(StringInfoData * restrict buf, const char *restrict str) { int slen = strlen(str); char *p; From 36b4b91ba07843406d5a30106facb59d8275c6de Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 19:06:29 -0700 Subject: [PATCH 0372/1087] Temporary attempt at a workaround for further MSVC restrict build failures. It appears some versions of msvc use __declspec(restrict) in stdlib.h and subsidiary headers. Including those after defining 'restrict' to '__restrict' doesn't work. Try to get the buildfarm green to see whether there's further problems, by including stdlib.h just before said define. --- src/include/pg_config.h.win32 | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index 3be1c235aa..81604de7f9 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -687,6 +687,11 @@ /* Visual Studio 2008 and upwards */ #if (_MSC_VER >= 1500) /* works for C and C++ in msvc */ +/* + * Temporary attempt at a workaround for stdlib.h's use of + * declspec(restrict), conflicting with below define. + */ +#include #define restrict __restrict #else #define restrict From 52328727bea4d9f95af9622e4624b9d1492df88e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 11 Oct 2017 22:18:01 -0400 Subject: [PATCH 0373/1087] Prevent sharing transition states between ordered-set aggregates. This ought to work, but the built-in OSAs are not capable of coping, because their final-functions destructively modify their transition state (specifically, the contained tuplesort object). That was fine when those functions were written, but commit 804163bc2 moved the goalposts without telling orderedsetaggs.c. We should fix the built-in OSAs to support this, but it will take a little work, especially if we don't want to sacrifice performance in the normal non-shared-state case. Given that it took a year after 9.6 release for anyone to notice this bug, we should not prioritize sharable-state over nonsharable-state performance. And a proper fix is likely to be more complicated than we'd want to back-patch, too. Therefore, let's just put in this stop-gap patch to prevent nodeAgg.c from choosing to use shared state for OSAs. We can revert it in HEAD when we get a better fix. 
Report from Lukas Eder, diagnosis by me, patch by David Rowley. Back-patch to 9.6 where the problem was introduced. Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com --- src/backend/executor/nodeAgg.c | 10 ++++++++++ src/test/regress/expected/aggregates.out | 19 +++++++++++++++++++ src/test/regress/sql/aggregates.sql | 11 +++++++++++ 3 files changed, 40 insertions(+) diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 0ae5873868..1a1aebe7b0 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -3808,6 +3808,16 @@ find_compatible_pertrans(AggState *aggstate, Aggref *newagg, { ListCell *lc; + /* + * For the moment, never try to share transition states between different + * ordered-set aggregates. This is necessary because the finalfns of the + * built-in OSAs (see orderedsetaggs.c) are destructive of their + * transition states. We should fix them so we can allow this, but not + * losing performance in the normal non-shared case will take some work. + */ + if (AGGKIND_IS_ORDERED_SET(newagg->aggkind)) + return -1; + foreach(lc, transnos) { int transno = lfirst_int(lc); diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out index ce6b841a33..82ede655aa 100644 --- a/src/test/regress/expected/aggregates.out +++ b/src/test/regress/expected/aggregates.out @@ -1860,6 +1860,25 @@ NOTICE: avg_transfn called with 3 2 | 6 (1 row) +-- ideally these would share state, but we have to fix the OSAs first. +select + percentile_cont(0.5) within group (order by a), + percentile_disc(0.5) within group (order by a) +from (values(1::float8),(3),(5),(7)) t(a); + percentile_cont | percentile_disc +-----------------+----------------- + 4 | 3 +(1 row) + +select + rank(4) within group (order by a), + dense_rank(4) within group (order by a) +from (values(1),(3),(5),(7)) t(a); + rank | dense_rank +------+------------ + 3 | 3 +(1 row) + -- test that aggs with the same sfunc and initcond share the same agg state create aggregate my_sum_init(int4) ( diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql index 2eeb3eedbd..77314522eb 100644 --- a/src/test/regress/sql/aggregates.sql +++ b/src/test/regress/sql/aggregates.sql @@ -739,6 +739,17 @@ select my_avg(one) filter (where one > 1),my_sum(one) from (values(1),(3)) t(one -- this should not share the state due to different input columns. select my_avg(one),my_sum(two) from (values(1,2),(3,4)) t(one,two); +-- ideally these would share state, but we have to fix the OSAs first. +select + percentile_cont(0.5) within group (order by a), + percentile_disc(0.5) within group (order by a) +from (values(1::float8),(3),(5),(7)) t(a); + +select + rank(4) within group (order by a), + dense_rank(4) within group (order by a) +from (values(1),(3),(5),(7)) t(a); + -- test that aggs with the same sfunc and initcond share the same agg state create aggregate my_sum_init(int4) ( From 31079a4a8e66e56e48bad94d380fa6224e9ffa0d Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 11 Oct 2017 21:00:46 -0700 Subject: [PATCH 0374/1087] Replace remaining uses of pq_sendint with pq_sendint{8,16,32}. pq_sendint() remains, so extension code doesn't unnecessarily break. 
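The shape of the substitution is easy to see in a standalone sketch (plain C with a toy fixed-size buffer, not the server's pqformat code): the width-specific writers encode the size in the function name, so the old per-call switch on a width argument survives only as a compatibility wrapper.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>			/* htons/htonl; POSIX */

static unsigned char out[64];
static size_t outlen;

/* Width-specific writers in the style of pq_sendint16/pq_sendint32. */
static void
send_int16(uint16_t v)
{
	uint16_t	n = htons(v);

	memcpy(out + outlen, &n, sizeof(n));
	outlen += sizeof(n);
}

static void
send_int32(uint32_t v)
{
	uint32_t	n = htonl(v);

	memcpy(out + outlen, &n, sizeof(n));
	outlen += sizeof(n);
}

/* Old-style generic writer: every call pays a runtime switch. */
static void
send_int(int v, int width)
{
	switch (width)
	{
		case 2:
			send_int16((uint16_t) v);
			break;
		case 4:
			send_int32((uint32_t) v);
			break;
	}
}

int
main(void)
{
	send_int16(3);				/* new style: width is part of the name */
	send_int(42, 4);			/* old style, kept only for compatibility */
	printf("wrote %zu bytes\n", outlen);
	return 0;
}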
Author: Andres Freund Discussion: https://postgr.es/m/20170914063418.sckdzgjfrsbekae4@alap3.anarazel.de --- contrib/hstore/hstore_io.c | 8 +-- src/backend/access/common/printsimple.c | 18 +++--- src/backend/access/common/printtup.c | 16 ++--- src/backend/access/transam/parallel.c | 4 +- src/backend/commands/async.c | 2 +- src/backend/commands/copy.c | 8 +-- src/backend/libpq/auth.c | 2 +- src/backend/replication/basebackup.c | 86 ++++++++++++------------- src/backend/replication/logical/proto.c | 20 +++--- src/backend/replication/walreceiver.c | 8 +-- src/backend/replication/walsender.c | 36 +++++------ src/backend/tcop/fastpath.c | 4 +- src/backend/tcop/postgres.c | 8 +-- src/backend/utils/adt/arrayfuncs.c | 14 ++-- src/backend/utils/adt/date.c | 4 +- src/backend/utils/adt/geo_ops.c | 4 +- src/backend/utils/adt/int.c | 4 +- src/backend/utils/adt/jsonb.c | 2 +- src/backend/utils/adt/nabstime.c | 10 +-- src/backend/utils/adt/numeric.c | 14 ++-- src/backend/utils/adt/oid.c | 2 +- src/backend/utils/adt/rangetypes.c | 4 +- src/backend/utils/adt/rowtypes.c | 8 +-- src/backend/utils/adt/tid.c | 6 +- src/backend/utils/adt/timestamp.c | 4 +- src/backend/utils/adt/tsquery.c | 13 ++-- src/backend/utils/adt/tsvector.c | 6 +- src/backend/utils/adt/txid.c | 2 +- src/backend/utils/adt/varbit.c | 2 +- src/backend/utils/adt/xid.c | 4 +- 30 files changed, 160 insertions(+), 163 deletions(-) diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c index 6363c321c5..d8284012d0 100644 --- a/contrib/hstore/hstore_io.c +++ b/contrib/hstore/hstore_io.c @@ -1207,23 +1207,23 @@ hstore_send(PG_FUNCTION_ARGS) pq_begintypsend(&buf); - pq_sendint(&buf, count, 4); + pq_sendint32(&buf, count); for (i = 0; i < count; i++) { int32 keylen = HSTORE_KEYLEN(entries, i); - pq_sendint(&buf, keylen, 4); + pq_sendint32(&buf, keylen); pq_sendtext(&buf, HSTORE_KEY(entries, base, i), keylen); if (HSTORE_VALISNULL(entries, i)) { - pq_sendint(&buf, -1, 4); + pq_sendint32(&buf, -1); } else { int32 vallen = HSTORE_VALLEN(entries, i); - pq_sendint(&buf, vallen, 4); + pq_sendint32(&buf, vallen); pq_sendtext(&buf, HSTORE_VAL(entries, base, i), vallen); } } diff --git a/src/backend/access/common/printsimple.c b/src/backend/access/common/printsimple.c index b3e9a26b03..872de7c3f4 100644 --- a/src/backend/access/common/printsimple.c +++ b/src/backend/access/common/printsimple.c @@ -34,19 +34,19 @@ printsimple_startup(DestReceiver *self, int operation, TupleDesc tupdesc) int i; pq_beginmessage(&buf, 'T'); /* RowDescription */ - pq_sendint(&buf, tupdesc->natts, 2); + pq_sendint16(&buf, tupdesc->natts); for (i = 0; i < tupdesc->natts; ++i) { Form_pg_attribute attr = TupleDescAttr(tupdesc, i); pq_sendstring(&buf, NameStr(attr->attname)); - pq_sendint(&buf, 0, 4); /* table oid */ - pq_sendint(&buf, 0, 2); /* attnum */ - pq_sendint(&buf, (int) attr->atttypid, 4); - pq_sendint(&buf, attr->attlen, 2); - pq_sendint(&buf, attr->atttypmod, 4); - pq_sendint(&buf, 0, 2); /* format code */ + pq_sendint32(&buf, 0); /* table oid */ + pq_sendint16(&buf, 0); /* attnum */ + pq_sendint32(&buf, (int) attr->atttypid); + pq_sendint16(&buf, attr->attlen); + pq_sendint32(&buf, attr->atttypmod); + pq_sendint16(&buf, 0); /* format code */ } pq_endmessage(&buf); @@ -67,7 +67,7 @@ printsimple(TupleTableSlot *slot, DestReceiver *self) /* Prepare and send message */ pq_beginmessage(&buf, 'D'); - pq_sendint(&buf, tupdesc->natts, 2); + pq_sendint16(&buf, tupdesc->natts); for (i = 0; i < tupdesc->natts; ++i) { @@ -76,7 +76,7 @@ printsimple(TupleTableSlot *slot, 
DestReceiver *self) if (slot->tts_isnull[i]) { - pq_sendint(&buf, -1, 4); + pq_sendint32(&buf, -1); continue; } diff --git a/src/backend/access/common/printtup.c b/src/backend/access/common/printtup.c index 02cd1beef7..fc94e711b2 100644 --- a/src/backend/access/common/printtup.c +++ b/src/backend/access/common/printtup.c @@ -395,7 +395,7 @@ printtup(TupleTableSlot *slot, DestReceiver *self) */ pq_beginmessage_reuse(buf, 'D'); - pq_sendint(buf, natts, 2); + pq_sendint16(buf, natts); /* * send the attributes of this tuple @@ -407,7 +407,7 @@ printtup(TupleTableSlot *slot, DestReceiver *self) if (slot->tts_isnull[i]) { - pq_sendint(buf, -1, 4); + pq_sendint32(buf, -1); continue; } @@ -436,7 +436,7 @@ printtup(TupleTableSlot *slot, DestReceiver *self) bytea *outputbytes; outputbytes = SendFunctionCall(&thisState->finfo, attr); - pq_sendint(buf, VARSIZE(outputbytes) - VARHDRSZ, 4); + pq_sendint32(buf, VARSIZE(outputbytes) - VARHDRSZ); pq_sendbytes(buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ); } @@ -494,13 +494,13 @@ printtup_20(TupleTableSlot *slot, DestReceiver *self) k >>= 1; if (k == 0) /* end of byte? */ { - pq_sendint(buf, j, 1); + pq_sendint8(buf, j); j = 0; k = 1 << 7; } } if (k != (1 << 7)) /* flush last partial byte */ - pq_sendint(buf, j, 1); + pq_sendint8(buf, j); /* * send the attributes of this tuple @@ -679,13 +679,13 @@ printtup_internal_20(TupleTableSlot *slot, DestReceiver *self) k >>= 1; if (k == 0) /* end of byte? */ { - pq_sendint(buf, j, 1); + pq_sendint8(buf, j); j = 0; k = 1 << 7; } } if (k != (1 << 7)) /* flush last partial byte */ - pq_sendint(buf, j, 1); + pq_sendint8(buf, j); /* * send the attributes of this tuple @@ -702,7 +702,7 @@ printtup_internal_20(TupleTableSlot *slot, DestReceiver *self) Assert(thisState->format == 1); outputbytes = SendFunctionCall(&thisState->finfo, attr); - pq_sendint(buf, VARSIZE(outputbytes) - VARHDRSZ, 4); + pq_sendint32(buf, VARSIZE(outputbytes) - VARHDRSZ); pq_sendbytes(buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ); } diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index c6f7b7af0e..d683050733 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -1030,8 +1030,8 @@ ParallelWorkerMain(Datum main_arg) * in this case. 
*/ pq_beginmessage(&msgbuf, 'K'); - pq_sendint(&msgbuf, (int32) MyProcPid, sizeof(int32)); - pq_sendint(&msgbuf, (int32) MyCancelKey, sizeof(int32)); + pq_sendint32(&msgbuf, (int32) MyProcPid); + pq_sendint32(&msgbuf, (int32) MyCancelKey); pq_endmessage(&msgbuf); /* diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c index a93c81bca2..f7de742a56 100644 --- a/src/backend/commands/async.c +++ b/src/backend/commands/async.c @@ -2100,7 +2100,7 @@ NotifyMyFrontEnd(const char *channel, const char *payload, int32 srcPid) StringInfoData buf; pq_beginmessage(&buf, 'A'); - pq_sendint(&buf, srcPid, sizeof(int32)); + pq_sendint32(&buf, srcPid); pq_sendstring(&buf, channel); if (PG_PROTOCOL_MAJOR(FrontendProtocol) >= 3) pq_sendstring(&buf, payload); diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index e87588040f..64550f9c68 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -357,9 +357,9 @@ SendCopyBegin(CopyState cstate) pq_beginmessage(&buf, 'H'); pq_sendbyte(&buf, format); /* overall format */ - pq_sendint(&buf, natts, 2); + pq_sendint16(&buf, natts); for (i = 0; i < natts; i++) - pq_sendint(&buf, format, 2); /* per-column formats */ + pq_sendint16(&buf, format); /* per-column formats */ pq_endmessage(&buf); cstate->copy_dest = COPY_NEW_FE; } @@ -390,9 +390,9 @@ ReceiveCopyBegin(CopyState cstate) pq_beginmessage(&buf, 'G'); pq_sendbyte(&buf, format); /* overall format */ - pq_sendint(&buf, natts, 2); + pq_sendint16(&buf, natts); for (i = 0; i < natts; i++) - pq_sendint(&buf, format, 2); /* per-column formats */ + pq_sendint16(&buf, format); /* per-column formats */ pq_endmessage(&buf); cstate->copy_dest = COPY_NEW_FE; cstate->fe_msgbuf = makeStringInfo(); diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 480e344eb3..3b3a932a7d 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -613,7 +613,7 @@ sendAuthRequest(Port *port, AuthRequest areq, char *extradata, int extralen) CHECK_FOR_INTERRUPTS(); pq_beginmessage(&buf, 'R'); - pq_sendint(&buf, (int32) areq, sizeof(int32)); + pq_sendint32(&buf, (int32) areq); if (extralen > 0) pq_sendbytes(&buf, extradata, extralen); diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c index c3b9bddc8f..75029b0def 100644 --- a/src/backend/replication/basebackup.c +++ b/src/backend/replication/basebackup.c @@ -274,7 +274,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir) /* Send CopyOutResponse message */ pq_beginmessage(&buf, 'H'); pq_sendbyte(&buf, 0); /* overall format */ - pq_sendint(&buf, 0, 2); /* natts */ + pq_sendint16(&buf, 0); /* natts */ pq_endmessage(&buf); if (ti->path == NULL) @@ -722,7 +722,7 @@ send_int8_string(StringInfoData *buf, int64 intval) char is[32]; sprintf(is, INT64_FORMAT, intval); - pq_sendint(buf, strlen(is), 4); + pq_sendint32(buf, strlen(is)); pq_sendbytes(buf, is, strlen(is)); } @@ -734,34 +734,34 @@ SendBackupHeader(List *tablespaces) /* Construct and send the directory information */ pq_beginmessage(&buf, 'T'); /* RowDescription */ - pq_sendint(&buf, 3, 2); /* 3 fields */ + pq_sendint16(&buf, 3); /* 3 fields */ /* First field - spcoid */ pq_sendstring(&buf, "spcoid"); - pq_sendint(&buf, 0, 4); /* table oid */ - pq_sendint(&buf, 0, 2); /* attnum */ - pq_sendint(&buf, OIDOID, 4); /* type oid */ - pq_sendint(&buf, 4, 2); /* typlen */ - pq_sendint(&buf, 0, 4); /* typmod */ - pq_sendint(&buf, 0, 2); /* format code */ + pq_sendint32(&buf, 0); /* table oid */ + pq_sendint16(&buf, 0); 
/* attnum */ + pq_sendint32(&buf, OIDOID); /* type oid */ + pq_sendint16(&buf, 4); /* typlen */ + pq_sendint32(&buf, 0); /* typmod */ + pq_sendint16(&buf, 0); /* format code */ /* Second field - spcpath */ pq_sendstring(&buf, "spclocation"); - pq_sendint(&buf, 0, 4); - pq_sendint(&buf, 0, 2); - pq_sendint(&buf, TEXTOID, 4); - pq_sendint(&buf, -1, 2); - pq_sendint(&buf, 0, 4); - pq_sendint(&buf, 0, 2); + pq_sendint32(&buf, 0); + pq_sendint16(&buf, 0); + pq_sendint32(&buf, TEXTOID); + pq_sendint16(&buf, -1); + pq_sendint32(&buf, 0); + pq_sendint16(&buf, 0); /* Third field - size */ pq_sendstring(&buf, "size"); - pq_sendint(&buf, 0, 4); - pq_sendint(&buf, 0, 2); - pq_sendint(&buf, INT8OID, 4); - pq_sendint(&buf, 8, 2); - pq_sendint(&buf, 0, 4); - pq_sendint(&buf, 0, 2); + pq_sendint32(&buf, 0); + pq_sendint16(&buf, 0); + pq_sendint32(&buf, INT8OID); + pq_sendint16(&buf, 8); + pq_sendint32(&buf, 0); + pq_sendint16(&buf, 0); pq_endmessage(&buf); foreach(lc, tablespaces) @@ -770,28 +770,28 @@ SendBackupHeader(List *tablespaces) /* Send one datarow message */ pq_beginmessage(&buf, 'D'); - pq_sendint(&buf, 3, 2); /* number of columns */ + pq_sendint16(&buf, 3); /* number of columns */ if (ti->path == NULL) { - pq_sendint(&buf, -1, 4); /* Length = -1 ==> NULL */ - pq_sendint(&buf, -1, 4); + pq_sendint32(&buf, -1); /* Length = -1 ==> NULL */ + pq_sendint32(&buf, -1); } else { Size len; len = strlen(ti->oid); - pq_sendint(&buf, len, 4); + pq_sendint32(&buf, len); pq_sendbytes(&buf, ti->oid, len); len = strlen(ti->path); - pq_sendint(&buf, len, 4); + pq_sendint32(&buf, len); pq_sendbytes(&buf, ti->path, len); } if (ti->size >= 0) send_int8_string(&buf, ti->size / 1024); else - pq_sendint(&buf, -1, 4); /* NULL */ + pq_sendint32(&buf, -1); /* NULL */ pq_endmessage(&buf); } @@ -812,42 +812,42 @@ SendXlogRecPtrResult(XLogRecPtr ptr, TimeLineID tli) Size len; pq_beginmessage(&buf, 'T'); /* RowDescription */ - pq_sendint(&buf, 2, 2); /* 2 fields */ + pq_sendint16(&buf, 2); /* 2 fields */ /* Field headers */ pq_sendstring(&buf, "recptr"); - pq_sendint(&buf, 0, 4); /* table oid */ - pq_sendint(&buf, 0, 2); /* attnum */ - pq_sendint(&buf, TEXTOID, 4); /* type oid */ - pq_sendint(&buf, -1, 2); - pq_sendint(&buf, 0, 4); - pq_sendint(&buf, 0, 2); + pq_sendint32(&buf, 0); /* table oid */ + pq_sendint16(&buf, 0); /* attnum */ + pq_sendint32(&buf, TEXTOID); /* type oid */ + pq_sendint16(&buf, -1); + pq_sendint32(&buf, 0); + pq_sendint16(&buf, 0); pq_sendstring(&buf, "tli"); - pq_sendint(&buf, 0, 4); /* table oid */ - pq_sendint(&buf, 0, 2); /* attnum */ + pq_sendint32(&buf, 0); /* table oid */ + pq_sendint16(&buf, 0); /* attnum */ /* * int8 may seem like a surprising data type for this, but in theory int4 * would not be wide enough for this, as TimeLineID is unsigned. 
*/ - pq_sendint(&buf, INT8OID, 4); /* type oid */ - pq_sendint(&buf, -1, 2); - pq_sendint(&buf, 0, 4); - pq_sendint(&buf, 0, 2); + pq_sendint32(&buf, INT8OID); /* type oid */ + pq_sendint16(&buf, -1); + pq_sendint32(&buf, 0); + pq_sendint16(&buf, 0); pq_endmessage(&buf); /* Data row */ pq_beginmessage(&buf, 'D'); - pq_sendint(&buf, 2, 2); /* number of columns */ + pq_sendint16(&buf, 2); /* number of columns */ len = snprintf(str, sizeof(str), "%X/%X", (uint32) (ptr >> 32), (uint32) ptr); - pq_sendint(&buf, len, 4); + pq_sendint32(&buf, len); pq_sendbytes(&buf, str, len); len = snprintf(str, sizeof(str), "%u", tli); - pq_sendint(&buf, len, 4); + pq_sendint32(&buf, len); pq_sendbytes(&buf, str, len); pq_endmessage(&buf); diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c index f19649b113..9b126b2957 100644 --- a/src/backend/replication/logical/proto.c +++ b/src/backend/replication/logical/proto.c @@ -47,7 +47,7 @@ logicalrep_write_begin(StringInfo out, ReorderBufferTXN *txn) /* fixed fields */ pq_sendint64(out, txn->final_lsn); pq_sendint64(out, txn->commit_time); - pq_sendint(out, txn->xid, 4); + pq_sendint32(out, txn->xid); } /* @@ -145,7 +145,7 @@ logicalrep_write_insert(StringInfo out, Relation rel, HeapTuple newtuple) rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX); /* use Oid as relation identifier */ - pq_sendint(out, RelationGetRelid(rel), 4); + pq_sendint32(out, RelationGetRelid(rel)); pq_sendbyte(out, 'N'); /* new tuple follows */ logicalrep_write_tuple(out, rel, newtuple); @@ -189,7 +189,7 @@ logicalrep_write_update(StringInfo out, Relation rel, HeapTuple oldtuple, rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX); /* use Oid as relation identifier */ - pq_sendint(out, RelationGetRelid(rel), 4); + pq_sendint32(out, RelationGetRelid(rel)); if (oldtuple != NULL) { @@ -258,7 +258,7 @@ logicalrep_write_delete(StringInfo out, Relation rel, HeapTuple oldtuple) pq_sendbyte(out, 'D'); /* action DELETE */ /* use Oid as relation identifier */ - pq_sendint(out, RelationGetRelid(rel), 4); + pq_sendint32(out, RelationGetRelid(rel)); if (rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL) pq_sendbyte(out, 'O'); /* old tuple follows */ @@ -303,7 +303,7 @@ logicalrep_write_rel(StringInfo out, Relation rel) pq_sendbyte(out, 'R'); /* sending RELATION */ /* use Oid as relation identifier */ - pq_sendint(out, RelationGetRelid(rel), 4); + pq_sendint32(out, RelationGetRelid(rel)); /* send qualified relation name */ logicalrep_write_namespace(out, RelationGetNamespace(rel)); @@ -360,7 +360,7 @@ logicalrep_write_typ(StringInfo out, Oid typoid) typtup = (Form_pg_type) GETSTRUCT(tup); /* use Oid as relation identifier */ - pq_sendint(out, typoid, 4); + pq_sendint32(out, typoid); /* send qualified type name */ logicalrep_write_namespace(out, typtup->typnamespace); @@ -402,7 +402,7 @@ logicalrep_write_tuple(StringInfo out, Relation rel, HeapTuple tuple) continue; nliveatts++; } - pq_sendint(out, nliveatts, 2); + pq_sendint16(out, nliveatts); /* try to allocate enough memory from the get-go */ enlargeStringInfo(out, tuple->t_len + @@ -522,7 +522,7 @@ logicalrep_write_attrs(StringInfo out, Relation rel) continue; nliveatts++; } - pq_sendint(out, nliveatts, 2); + pq_sendint16(out, nliveatts); /* fetch bitmap of REPLICATION IDENTITY attributes */ replidentfull = (rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL); @@ -551,10 +551,10 @@ logicalrep_write_attrs(StringInfo out, Relation rel) pq_sendstring(out, NameStr(att->attname)); /* attribute type id */ - 
pq_sendint(out, (int) att->atttypid, sizeof(att->atttypid)); + pq_sendint32(out, (int) att->atttypid); /* attribute mode */ - pq_sendint(out, att->atttypmod, sizeof(att->atttypmod)); + pq_sendint32(out, att->atttypmod); } bms_free(idattrs); diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c index 1bf9be673b..fe4e085938 100644 --- a/src/backend/replication/walreceiver.c +++ b/src/backend/replication/walreceiver.c @@ -1272,10 +1272,10 @@ XLogWalRcvSendHSFeedback(bool immed) resetStringInfo(&reply_message); pq_sendbyte(&reply_message, 'h'); pq_sendint64(&reply_message, GetCurrentTimestamp()); - pq_sendint(&reply_message, xmin, 4); - pq_sendint(&reply_message, xmin_epoch, 4); - pq_sendint(&reply_message, catalog_xmin, 4); - pq_sendint(&reply_message, catalog_xmin_epoch, 4); + pq_sendint32(&reply_message, xmin); + pq_sendint32(&reply_message, xmin_epoch); + pq_sendint32(&reply_message, catalog_xmin); + pq_sendint32(&reply_message, catalog_xmin_epoch); walrcv_send(wrconn, reply_message.data, reply_message.len); if (TransactionIdIsValid(xmin) || TransactionIdIsValid(catalog_xmin)) master_has_standby_xmin = true; diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index 6ec4e63161..fa1db748b5 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -444,32 +444,32 @@ SendTimeLineHistory(TimeLineHistoryCmd *cmd) /* Send a RowDescription message */ pq_beginmessage(&buf, 'T'); - pq_sendint(&buf, 2, 2); /* 2 fields */ + pq_sendint16(&buf, 2); /* 2 fields */ /* first field */ pq_sendstring(&buf, "filename"); /* col name */ - pq_sendint(&buf, 0, 4); /* table oid */ - pq_sendint(&buf, 0, 2); /* attnum */ - pq_sendint(&buf, TEXTOID, 4); /* type oid */ - pq_sendint(&buf, -1, 2); /* typlen */ - pq_sendint(&buf, 0, 4); /* typmod */ - pq_sendint(&buf, 0, 2); /* format code */ + pq_sendint32(&buf, 0); /* table oid */ + pq_sendint16(&buf, 0); /* attnum */ + pq_sendint32(&buf, TEXTOID); /* type oid */ + pq_sendint16(&buf, -1); /* typlen */ + pq_sendint32(&buf, 0); /* typmod */ + pq_sendint16(&buf, 0); /* format code */ /* second field */ pq_sendstring(&buf, "content"); /* col name */ - pq_sendint(&buf, 0, 4); /* table oid */ - pq_sendint(&buf, 0, 2); /* attnum */ - pq_sendint(&buf, BYTEAOID, 4); /* type oid */ - pq_sendint(&buf, -1, 2); /* typlen */ - pq_sendint(&buf, 0, 4); /* typmod */ - pq_sendint(&buf, 0, 2); /* format code */ + pq_sendint32(&buf, 0); /* table oid */ + pq_sendint16(&buf, 0); /* attnum */ + pq_sendint32(&buf, BYTEAOID); /* type oid */ + pq_sendint16(&buf, -1); /* typlen */ + pq_sendint32(&buf, 0); /* typmod */ + pq_sendint16(&buf, 0); /* format code */ pq_endmessage(&buf); /* Send a DataRow message */ pq_beginmessage(&buf, 'D'); - pq_sendint(&buf, 2, 2); /* # of columns */ + pq_sendint16(&buf, 2); /* # of columns */ len = strlen(histfname); - pq_sendint(&buf, len, 4); /* col1 len */ + pq_sendint32(&buf, len); /* col1 len */ pq_sendbytes(&buf, histfname, len); fd = OpenTransientFile(path, O_RDONLY | PG_BINARY); @@ -489,7 +489,7 @@ SendTimeLineHistory(TimeLineHistoryCmd *cmd) (errcode_for_file_access(), errmsg("could not seek to beginning of file \"%s\": %m", path))); - pq_sendint(&buf, histfilelen, 4); /* col2 len */ + pq_sendint32(&buf, histfilelen); /* col2 len */ bytesleft = histfilelen; while (bytesleft > 0) @@ -646,7 +646,7 @@ StartReplication(StartReplicationCmd *cmd) /* Send a CopyBothResponse message, and start streaming */ pq_beginmessage(&buf, 'W'); pq_sendbyte(&buf, 0); - 
pq_sendint(&buf, 0, 2); + pq_sendint16(&buf, 0); pq_endmessage(&buf); pq_flush(); @@ -1065,7 +1065,7 @@ StartLogicalReplication(StartReplicationCmd *cmd) /* Send a CopyBothResponse message, and start streaming */ pq_beginmessage(&buf, 'W'); pq_sendbyte(&buf, 0); - pq_sendint(&buf, 0, 2); + pq_sendint16(&buf, 0); pq_endmessage(&buf); pq_flush(); diff --git a/src/backend/tcop/fastpath.c b/src/backend/tcop/fastpath.c index 8101ae74e0..a434f7f857 100644 --- a/src/backend/tcop/fastpath.c +++ b/src/backend/tcop/fastpath.c @@ -143,7 +143,7 @@ SendFunctionResult(Datum retval, bool isnull, Oid rettype, int16 format) if (isnull) { if (newstyle) - pq_sendint(&buf, -1, 4); + pq_sendint32(&buf, -1); } else { @@ -169,7 +169,7 @@ SendFunctionResult(Datum retval, bool isnull, Oid rettype, int16 format) getTypeBinaryOutputInfo(rettype, &typsend, &typisvarlena); outputbytes = OidSendFunctionCall(typsend, retval); - pq_sendint(&buf, VARSIZE(outputbytes) - VARHDRSZ, 4); + pq_sendint32(&buf, VARSIZE(outputbytes) - VARHDRSZ); pq_sendbytes(&buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ); pfree(outputbytes); diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 338ce81331..2c7260e564 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -2376,13 +2376,13 @@ exec_describe_statement_message(const char *stmt_name) */ pq_beginmessage_reuse(&row_description_buf, 't'); /* parameter description * message type */ - pq_sendint(&row_description_buf, psrc->num_params, 2); + pq_sendint16(&row_description_buf, psrc->num_params); for (i = 0; i < psrc->num_params; i++) { Oid ptype = psrc->param_types[i]; - pq_sendint(&row_description_buf, (int) ptype, 4); + pq_sendint32(&row_description_buf, (int) ptype); } pq_endmessage_reuse(&row_description_buf); @@ -3818,8 +3818,8 @@ PostgresMain(int argc, char *argv[], StringInfoData buf; pq_beginmessage(&buf, 'K'); - pq_sendint(&buf, (int32) MyProcPid, sizeof(int32)); - pq_sendint(&buf, (int32) MyCancelKey, sizeof(int32)); + pq_sendint32(&buf, (int32) MyProcPid); + pq_sendint32(&buf, (int32) MyCancelKey); pq_endmessage(&buf); /* Need not flush since ReadyForQuery will do it. */ } diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c index ca04b13e82..b4c31ef65c 100644 --- a/src/backend/utils/adt/arrayfuncs.c +++ b/src/backend/utils/adt/arrayfuncs.c @@ -1590,13 +1590,13 @@ array_send(PG_FUNCTION_ARGS) pq_begintypsend(&buf); /* Send the array header information */ - pq_sendint(&buf, ndim, 4); - pq_sendint(&buf, AARR_HASNULL(v) ? 1 : 0, 4); - pq_sendint(&buf, element_type, sizeof(Oid)); + pq_sendint32(&buf, ndim); + pq_sendint32(&buf, AARR_HASNULL(v) ? 
1 : 0); + pq_sendint32(&buf, element_type); for (i = 0; i < ndim; i++) { - pq_sendint(&buf, dim[i], 4); - pq_sendint(&buf, lb[i], 4); + pq_sendint32(&buf, dim[i]); + pq_sendint32(&buf, lb[i]); } /* Send the array elements using the element's own sendproc */ @@ -1614,14 +1614,14 @@ array_send(PG_FUNCTION_ARGS) if (isnull) { /* -1 length means a NULL */ - pq_sendint(&buf, -1, 4); + pq_sendint32(&buf, -1); } else { bytea *outputbytes; outputbytes = SendFunctionCall(&my_extra->proc, itemvalue); - pq_sendint(&buf, VARSIZE(outputbytes) - VARHDRSZ, 4); + pq_sendint32(&buf, VARSIZE(outputbytes) - VARHDRSZ); pq_sendbytes(&buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ); pfree(outputbytes); diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c index 0992bb3fdd..04e737d080 100644 --- a/src/backend/utils/adt/date.c +++ b/src/backend/utils/adt/date.c @@ -239,7 +239,7 @@ date_send(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, date, sizeof(date)); + pq_sendint32(&buf, date); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } @@ -2049,7 +2049,7 @@ timetz_send(PG_FUNCTION_ARGS) pq_begintypsend(&buf); pq_sendint64(&buf, time->time); - pq_sendint(&buf, time->zone, sizeof(time->zone)); + pq_sendint32(&buf, time->zone); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } diff --git a/src/backend/utils/adt/geo_ops.c b/src/backend/utils/adt/geo_ops.c index 0348855b11..e13389a6cc 100644 --- a/src/backend/utils/adt/geo_ops.c +++ b/src/backend/utils/adt/geo_ops.c @@ -1433,7 +1433,7 @@ path_send(PG_FUNCTION_ARGS) pq_begintypsend(&buf); pq_sendbyte(&buf, path->closed ? 1 : 0); - pq_sendint(&buf, path->npts, sizeof(int32)); + pq_sendint32(&buf, path->npts); for (i = 0; i < path->npts; i++) { pq_sendfloat8(&buf, path->p[i].x); @@ -3514,7 +3514,7 @@ poly_send(PG_FUNCTION_ARGS) int32 i; pq_begintypsend(&buf); - pq_sendint(&buf, poly->npts, sizeof(int32)); + pq_sendint32(&buf, poly->npts); for (i = 0; i < poly->npts; i++) { pq_sendfloat8(&buf, poly->p[i].x); diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c index 96ef25b900..4cd8960b3f 100644 --- a/src/backend/utils/adt/int.c +++ b/src/backend/utils/adt/int.c @@ -99,7 +99,7 @@ int2send(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, arg1, sizeof(int16)); + pq_sendint16(&buf, arg1); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } @@ -304,7 +304,7 @@ int4send(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, arg1, sizeof(int32)); + pq_sendint32(&buf, arg1); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c index 95db895538..771c05120b 100644 --- a/src/backend/utils/adt/jsonb.c +++ b/src/backend/utils/adt/jsonb.c @@ -154,7 +154,7 @@ jsonb_send(PG_FUNCTION_ARGS) (void) JsonbToCString(jtext, &jb->root, VARSIZE(jb)); pq_begintypsend(&buf); - pq_sendint(&buf, version, 1); + pq_sendint8(&buf, version); pq_sendtext(&buf, jtext->data, jtext->len); pfree(jtext->data); pfree(jtext); diff --git a/src/backend/utils/adt/nabstime.c b/src/backend/utils/adt/nabstime.c index 2c5948052d..2bca39a90c 100644 --- a/src/backend/utils/adt/nabstime.c +++ b/src/backend/utils/adt/nabstime.c @@ -315,7 +315,7 @@ abstimesend(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, time, sizeof(time)); + pq_sendint32(&buf, time); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } @@ -674,7 +674,7 @@ reltimesend(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, time, 
sizeof(time)); + pq_sendint32(&buf, time); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } @@ -794,9 +794,9 @@ tintervalsend(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, tinterval->status, sizeof(tinterval->status)); - pq_sendint(&buf, tinterval->data[0], sizeof(tinterval->data[0])); - pq_sendint(&buf, tinterval->data[1], sizeof(tinterval->data[1])); + pq_sendint32(&buf, tinterval->status); + pq_sendint32(&buf, tinterval->data[0]); + pq_sendint32(&buf, tinterval->data[1]); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index 48d95e9050..2cd14f3401 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -876,12 +876,12 @@ numeric_send(PG_FUNCTION_ARGS) pq_begintypsend(&buf); - pq_sendint(&buf, x.ndigits, sizeof(int16)); - pq_sendint(&buf, x.weight, sizeof(int16)); - pq_sendint(&buf, x.sign, sizeof(int16)); - pq_sendint(&buf, x.dscale, sizeof(int16)); + pq_sendint16(&buf, x.ndigits); + pq_sendint16(&buf, x.weight); + pq_sendint16(&buf, x.sign); + pq_sendint16(&buf, x.dscale); for (i = 0; i < x.ndigits; i++) - pq_sendint(&buf, x.digits[i], sizeof(NumericDigit)); + pq_sendint16(&buf, x.digits[i]); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } @@ -3693,7 +3693,7 @@ numeric_avg_serialize(PG_FUNCTION_ARGS) pq_sendbytes(&buf, VARDATA_ANY(sumX), VARSIZE_ANY_EXHDR(sumX)); /* maxScale */ - pq_sendint(&buf, state->maxScale, 4); + pq_sendint32(&buf, state->maxScale); /* maxScaleCount */ pq_sendint64(&buf, state->maxScaleCount); @@ -3815,7 +3815,7 @@ numeric_serialize(PG_FUNCTION_ARGS) pq_sendbytes(&buf, VARDATA_ANY(sumX2), VARSIZE_ANY_EXHDR(sumX2)); /* maxScale */ - pq_sendint(&buf, state->maxScale, 4); + pq_sendint32(&buf, state->maxScale); /* maxScaleCount */ pq_sendint64(&buf, state->maxScaleCount); diff --git a/src/backend/utils/adt/oid.c b/src/backend/utils/adt/oid.c index 7baaa1dd4e..87e87fe54d 100644 --- a/src/backend/utils/adt/oid.c +++ b/src/backend/utils/adt/oid.c @@ -154,7 +154,7 @@ oidsend(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, arg1, sizeof(Oid)); + pq_sendint32(&buf, arg1); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c index d0aa33c010..e79f0dbfca 100644 --- a/src/backend/utils/adt/rangetypes.c +++ b/src/backend/utils/adt/rangetypes.c @@ -272,7 +272,7 @@ range_send(PG_FUNCTION_ARGS) uint32 bound_len = VARSIZE(bound) - VARHDRSZ; char *bound_data = VARDATA(bound); - pq_sendint(buf, bound_len, 4); + pq_sendint32(buf, bound_len); pq_sendbytes(buf, bound_data, bound_len); } @@ -283,7 +283,7 @@ range_send(PG_FUNCTION_ARGS) uint32 bound_len = VARSIZE(bound) - VARHDRSZ; char *bound_data = VARDATA(bound); - pq_sendint(buf, bound_len, 4); + pq_sendint32(buf, bound_len); pq_sendbytes(buf, bound_data, bound_len); } diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c index 98fe00ff39..9b32db5d0a 100644 --- a/src/backend/utils/adt/rowtypes.c +++ b/src/backend/utils/adt/rowtypes.c @@ -718,7 +718,7 @@ record_send(PG_FUNCTION_ARGS) if (!TupleDescAttr(tupdesc, i)->attisdropped) validcols++; } - pq_sendint(&buf, validcols, 4); + pq_sendint32(&buf, validcols); for (i = 0; i < ncolumns; i++) { @@ -732,12 +732,12 @@ record_send(PG_FUNCTION_ARGS) if (att->attisdropped) continue; - pq_sendint(&buf, column_type, sizeof(Oid)); + pq_sendint32(&buf, column_type); if (nulls[i]) { /* emit -1 data length to signify a NULL */ - pq_sendint(&buf, 
-1, 4); + pq_sendint32(&buf, -1); continue; } @@ -756,7 +756,7 @@ record_send(PG_FUNCTION_ARGS) attr = values[i]; outputbytes = SendFunctionCall(&column_info->proc, attr); - pq_sendint(&buf, VARSIZE(outputbytes) - VARHDRSZ, 4); + pq_sendint32(&buf, VARSIZE(outputbytes) - VARHDRSZ); pq_sendbytes(&buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ); } diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c index 083f7d60a7..854097dd58 100644 --- a/src/backend/utils/adt/tid.c +++ b/src/backend/utils/adt/tid.c @@ -149,10 +149,8 @@ tidsend(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, ItemPointerGetBlockNumberNoCheck(itemPtr), - sizeof(BlockNumber)); - pq_sendint(&buf, ItemPointerGetOffsetNumberNoCheck(itemPtr), - sizeof(OffsetNumber)); + pq_sendint32(&buf, ItemPointerGetBlockNumberNoCheck(itemPtr)); + pq_sendint16(&buf, ItemPointerGetOffsetNumberNoCheck(itemPtr)); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c index b11d452fc8..5797aaad34 100644 --- a/src/backend/utils/adt/timestamp.c +++ b/src/backend/utils/adt/timestamp.c @@ -1009,8 +1009,8 @@ interval_send(PG_FUNCTION_ARGS) pq_begintypsend(&buf); pq_sendint64(&buf, interval->time); - pq_sendint(&buf, interval->day, sizeof(interval->day)); - pq_sendint(&buf, interval->month, sizeof(interval->month)); + pq_sendint32(&buf, interval->day); + pq_sendint32(&buf, interval->month); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } diff --git a/src/backend/utils/adt/tsquery.c b/src/backend/utils/adt/tsquery.c index fdb041971e..5cdfe4d732 100644 --- a/src/backend/utils/adt/tsquery.c +++ b/src/backend/utils/adt/tsquery.c @@ -952,23 +952,22 @@ tsquerysend(PG_FUNCTION_ARGS) pq_begintypsend(&buf); - pq_sendint(&buf, query->size, sizeof(uint32)); + pq_sendint32(&buf, query->size); for (i = 0; i < query->size; i++) { - pq_sendint(&buf, item->type, sizeof(item->type)); + pq_sendint8(&buf, item->type); switch (item->type) { case QI_VAL: - pq_sendint(&buf, item->qoperand.weight, sizeof(uint8)); - pq_sendint(&buf, item->qoperand.prefix, sizeof(uint8)); + pq_sendint8(&buf, item->qoperand.weight); + pq_sendint8(&buf, item->qoperand.prefix); pq_sendstring(&buf, GETOPERAND(query) + item->qoperand.distance); break; case QI_OPR: - pq_sendint(&buf, item->qoperator.oper, sizeof(item->qoperator.oper)); + pq_sendint8(&buf, item->qoperator.oper); if (item->qoperator.oper == OP_PHRASE) - pq_sendint(&buf, item->qoperator.distance, - sizeof(item->qoperator.distance)); + pq_sendint16(&buf, item->qoperator.distance); break; default: elog(ERROR, "unrecognized tsquery node type: %d", item->type); diff --git a/src/backend/utils/adt/tsvector.c b/src/backend/utils/adt/tsvector.c index 6f66c1f58c..b0a9217d1e 100644 --- a/src/backend/utils/adt/tsvector.c +++ b/src/backend/utils/adt/tsvector.c @@ -410,7 +410,7 @@ tsvectorsend(PG_FUNCTION_ARGS) pq_begintypsend(&buf); - pq_sendint(&buf, vec->size, sizeof(int32)); + pq_sendint32(&buf, vec->size); for (i = 0; i < vec->size; i++) { uint16 npos; @@ -423,14 +423,14 @@ tsvectorsend(PG_FUNCTION_ARGS) pq_sendbyte(&buf, '\0'); npos = POSDATALEN(vec, weptr); - pq_sendint(&buf, npos, sizeof(uint16)); + pq_sendint16(&buf, npos); if (npos > 0) { WordEntryPos *wepptr = POSDATAPTR(vec, weptr); for (j = 0; j < npos; j++) - pq_sendint(&buf, wepptr[j], sizeof(WordEntryPos)); + pq_sendint16(&buf, wepptr[j]); } weptr++; } diff --git a/src/backend/utils/adt/txid.c b/src/backend/utils/adt/txid.c index 1e38ca2aa5..9d312edf04 100644 
--- a/src/backend/utils/adt/txid.c +++ b/src/backend/utils/adt/txid.c @@ -640,7 +640,7 @@ txid_snapshot_send(PG_FUNCTION_ARGS) uint32 i; pq_begintypsend(&buf); - pq_sendint(&buf, snap->nxip, 4); + pq_sendint32(&buf, snap->nxip); pq_sendint64(&buf, snap->xmin); pq_sendint64(&buf, snap->xmax); for (i = 0; i < snap->nxip; i++) diff --git a/src/backend/utils/adt/varbit.c b/src/backend/utils/adt/varbit.c index 0cf1c6f6d6..478fab9bfc 100644 --- a/src/backend/utils/adt/varbit.c +++ b/src/backend/utils/adt/varbit.c @@ -665,7 +665,7 @@ varbit_send(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, VARBITLEN(s), sizeof(int32)); + pq_sendint32(&buf, VARBITLEN(s)); pq_sendbytes(&buf, (char *) VARBITS(s), VARBITBYTES(s)); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } diff --git a/src/backend/utils/adt/xid.c b/src/backend/utils/adt/xid.c index 2051709fde..67c32ac619 100644 --- a/src/backend/utils/adt/xid.c +++ b/src/backend/utils/adt/xid.c @@ -68,7 +68,7 @@ xidsend(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, arg1, sizeof(arg1)); + pq_sendint32(&buf, arg1); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } @@ -196,7 +196,7 @@ cidsend(PG_FUNCTION_ARGS) StringInfoData buf; pq_begintypsend(&buf); - pq_sendint(&buf, arg1, sizeof(arg1)); + pq_sendint32(&buf, arg1); PG_RETURN_BYTEA_P(pq_endtypsend(&buf)); } From 360fd1a7b2fe779cc9e696b813b12f6a8e83b558 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 12 Oct 2017 10:09:26 -0400 Subject: [PATCH 0375/1087] Fix logical replication to fire BEFORE ROW DELETE triggers. Before, that would fail to happen unless a BEFORE ROW UPDATE trigger was also present. Noted by me while reviewing a patch from Masahiko Sawada, who also wrote this patch. Reviewed by Petr Jelinek. Discussion: http://postgr.es/m/CA+TgmobAZvCxduG8y_mQKBK7nz-vhbdLvjM354KEFozpuzMN5A@mail.gmail.com --- src/backend/executor/execReplication.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c index c26420ae10..fb538c0297 100644 --- a/src/backend/executor/execReplication.c +++ b/src/backend/executor/execReplication.c @@ -511,7 +511,7 @@ ExecSimpleRelationDelete(EState *estate, EPQState *epqstate, /* BEFORE ROW DELETE Triggers */ if (resultRelInfo->ri_TrigDesc && - resultRelInfo->ri_TrigDesc->trig_update_before_row) + resultRelInfo->ri_TrigDesc->trig_delete_before_row) { skip_tuple = !ExecBRDeleteTriggers(estate, epqstate, resultRelInfo, &searchslot->tts_tuple->t_self, From e9ef11ac8bb2acc2d2462fc17ec3291a959589e7 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Thu, 12 Oct 2017 17:23:47 +0200 Subject: [PATCH 0376/1087] Infer functional dependency past RelabelType Vars hidden within a RelabelType would not be detected as compatible with some functional dependency. Repair by properly ignoring the RelabelType. Author: David Rowley Reviewed-by: Tomas Vondra Discussion: https://postgr.es/m/CAKJS1f-y-UEy=rsBXynBOgiW1fKMr_LVoYSGL9QOc36mLEC-ww@mail.gmail.com --- src/backend/statistics/dependencies.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c index 2e7c0ad6ba..9756fb83c0 100644 --- a/src/backend/statistics/dependencies.c +++ b/src/backend/statistics/dependencies.c @@ -792,6 +792,14 @@ dependency_is_compatible_clause(Node *clause, Index relid, AttrNumber *attnum) var = (varonleft) ? 
linitial(expr->args) : lsecond(expr->args); + /* + * We may ignore any RelabelType node above the operand. (There won't + * be more than one, since eval_const_expressions() has been applied + * already.) + */ + if (IsA(var, RelabelType)) + var = (Var *) ((RelabelType *) var)->arg; + /* We only support plain Vars for now */ if (!IsA(var, Var)) return false; From 0a047a1e3ef852884278b1324df73e359972c43a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 12 Oct 2017 11:35:54 -0400 Subject: [PATCH 0377/1087] Doc: fix typo in release notes. Ioseph Kim Discussion: https://postgr.es/m/e7a79f91-8244-5bcb-afcc-96c817e86f4e@postgresql.kr --- doc/src/sgml/release-10.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index fed67e1b23..9ef798183d 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -1946,7 +1946,7 @@ --> Add function txid_current_ifassigned() + linkend="functions-txid-snapshot">txid_current_if_assigned() to return the current transaction ID or NULL if no transaction ID has been assigned (Craig Ringer) From ad4a7ed0996ee044ee7291559deddf9842d8bbf7 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 12 Oct 2017 15:14:22 -0400 Subject: [PATCH 0378/1087] Synchronize error messages. Commits 6476b26115f3ef25a9cd87880e0ac5ec5f7a05f6 and 14f67a8ee282ebc0de78e773fbd597f460ab4a54 didn't use quite the same error message for what is basically the same situation. Amit Langote, pared back a bit by me. Discussion: http://postgr.es/m/54dc76d0-3b5b-ba5a-27dc-fb31a3975b61@lab.ntt.co.jp --- src/backend/catalog/partition.c | 4 ++-- src/test/regress/expected/alter_table.out | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index ebda85e4ef..07fdf66c38 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -988,7 +988,7 @@ check_default_allows_bound(Relation parent, Relation default_rel, if (PartConstraintImpliedByRelConstraint(default_rel, def_part_constraints)) { ereport(INFO, - (errmsg("partition constraint for table \"%s\" is implied by existing constraints", + (errmsg("updated partition constraint for default partition \"%s\" is implied by existing constraints", RelationGetRelationName(default_rel)))); return; } @@ -1033,7 +1033,7 @@ check_default_allows_bound(Relation parent, Relation default_rel, def_part_constraints)) { ereport(INFO, - (errmsg("partition constraint for table \"%s\" is implied by existing constraints", + (errmsg("updated partition constraint for default partition \"%s\" is implied by existing constraints", RelationGetRelationName(part_rel)))); heap_close(part_rel, NoLock); diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index dbe438dcd4..98f4db1f85 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3345,7 +3345,7 @@ INFO: partition constraint for table "part_3_4" is implied by existing constrai -- check if default partition scan skipped ALTER TABLE list_parted2_def ADD CONSTRAINT check_a CHECK (a IN (5, 6)); CREATE TABLE part_55_66 PARTITION OF list_parted2 FOR VALUES IN (55, 66); -INFO: partition constraint for table "list_parted2_def" is implied by existing constraints +INFO: updated partition constraint for default partition "list_parted2_def" is implied by existing constraints -- check validation when attaching range partitions CREATE TABLE 
range_parted ( a int, @@ -3492,7 +3492,7 @@ DROP TABLE quuux1, quuux2; -- should validate for quuux1, but not for quuux2 CREATE TABLE quuux1 PARTITION OF quuux FOR VALUES IN (1); CREATE TABLE quuux2 PARTITION OF quuux FOR VALUES IN (2); -INFO: partition constraint for table "quuux_default1" is implied by existing constraints +INFO: updated partition constraint for default partition "quuux_default1" is implied by existing constraints DROP TABLE quuux; -- -- DETACH PARTITION From 305cf1fd7239e0ffa9ae4ff54a7c66f36432c741 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 12 Oct 2017 15:20:04 -0400 Subject: [PATCH 0379/1087] Fix AggGetAggref() so it won't lie to aggregate final functions. If we merge the transition calculations for two different aggregates, it's reasonable to assume that the transition function should not care which of those Aggref structs it gets from AggGetAggref(). It is not reasonable to make the same assumption about an aggregate final function, however. Commit 804163bc2 broke this, as it will pass whichever Aggref was first associated with the transition state in both cases. This doesn't create an observable bug so far as the core system is concerned, because the only existing uses of AggGetAggref() are in ordered-set aggregates that happen to not pay attention to anything but the input properties of the Aggref; and besides that, we disabled sharing of transition calculations for OSAs yesterday. Nonetheless, if some third-party code were using AggGetAggref() in a normal aggregate, they would be entitled to call this a bug. Hence, back-patch the fix to 9.6 where the problem was introduced. In passing, improve some of the comments about transition state sharing. Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com --- src/backend/executor/nodeAgg.c | 65 ++++++++++++++++++++++------------ src/include/nodes/execnodes.h | 3 +- 2 files changed, 44 insertions(+), 24 deletions(-) diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 1a1aebe7b0..6543ecebd3 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -1528,8 +1528,8 @@ finalize_aggregate(AggState *aggstate, { int numFinalArgs = peragg->numFinalArgs; - /* set up aggstate->curpertrans for AggGetAggref() */ - aggstate->curpertrans = pertrans; + /* set up aggstate->curperagg for AggGetAggref() */ + aggstate->curperagg = peragg; InitFunctionCallInfoData(fcinfo, &peragg->finalfn, numFinalArgs, @@ -1562,7 +1562,7 @@ finalize_aggregate(AggState *aggstate, *resultVal = FunctionCallInvoke(&fcinfo); *resultIsNull = fcinfo.isnull; } - aggstate->curpertrans = NULL; + aggstate->curperagg = NULL; } else { @@ -2712,6 +2712,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aggstate->current_set = 0; aggstate->peragg = NULL; aggstate->pertrans = NULL; + aggstate->curperagg = NULL; aggstate->curpertrans = NULL; aggstate->input_done = false; aggstate->agg_done = false; @@ -3060,27 +3061,29 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) * * Scenarios: * - * 1. An aggregate function appears more than once in query: + * 1. Identical aggregate function calls appear in the query: * * SELECT SUM(x) FROM ... HAVING SUM(x) > 0 * - * Since the aggregates are the identical, we only need to calculate - * the calculate it once. Both aggregates will share the same 'aggno' - * value. + * Since these aggregates are identical, we only need to calculate + * the value once. Both aggregates will share the same 'aggno' value. * * 2. 
Two different aggregate functions appear in the query, but the - * aggregates have the same transition function and initial value, but - * different final function: + * aggregates have the same arguments, transition functions and + * initial values (and, presumably, different final functions): * - * SELECT SUM(x), AVG(x) FROM ... + * SELECT AVG(x), STDDEV(x) FROM ... * * In this case we must create a new peragg for the varying aggregate, - * and need to call the final functions separately, but can share the - * same transition state. + * and we need to call the final functions separately, but we need + * only run the transition function once. (This requires that the + * final functions be nondestructive of the transition state, but + * that's required anyway for other reasons.) * - * For either of these optimizations to be valid, the aggregate's - * arguments must be the same, including any modifiers such as ORDER BY, - * DISTINCT and FILTER, and they mustn't contain any volatile functions. + * For either of these optimizations to be valid, all aggregate properties + * used in the transition phase must be the same, including any modifiers + * such as ORDER BY, DISTINCT and FILTER, and the arguments mustn't + * contain any volatile functions. * ----------------- */ aggno = -1; @@ -3723,7 +3726,7 @@ GetAggInitVal(Datum textInitVal, Oid transtype) * * As a side-effect, this also collects a list of existing per-Trans structs * with matching inputs. If no identical Aggref is found, the list is passed - * later to find_compatible_perstate, to see if we can at least reuse the + * later to find_compatible_pertrans, to see if we can at least reuse the * state value of another aggregate. */ static int @@ -3743,11 +3746,12 @@ find_compatible_peragg(Aggref *newagg, AggState *aggstate, /* * Search through the list of already seen aggregates. If we find an - * existing aggregate with the same aggregate function and input - * parameters as an existing one, then we can re-use that one. While + * existing identical aggregate call, then we can re-use that one. While * searching, we'll also collect a list of Aggrefs with the same input * parameters. If no matching Aggref is found, the caller can potentially - * still re-use the transition state of one of them. + * still re-use the transition state of one of them. (At this stage we + * just compare the parsetrees; whether different aggregates share the + * same transition function will be checked later.) */ for (aggno = 0; aggno <= lastaggno; aggno++) { @@ -3796,7 +3800,7 @@ find_compatible_peragg(Aggref *newagg, AggState *aggstate, * struct * * Searches the list of transnos for a per-Trans struct with the same - * transition state and initial condition. (The inputs have already been + * transition function and initial condition. (The inputs have already been * verified to match.) */ static int @@ -3842,16 +3846,16 @@ find_compatible_pertrans(AggState *aggstate, Aggref *newagg, aggdeserialfn != pertrans->deserialfn_oid) continue; - /* Check that the initial condition matches, too. */ + /* + * Check that the initial condition matches, too. 
+ */ if (initValueIsNull && pertrans->initValueIsNull) return transno; if (!initValueIsNull && !pertrans->initValueIsNull && datumIsEqual(initValue, pertrans->initValue, pertrans->transtypeByVal, pertrans->transtypeLen)) - { return transno; - } } return -1; } @@ -4070,6 +4074,13 @@ AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext) * If the function is being called as an aggregate support function, * return the Aggref node for the aggregate call. Otherwise, return NULL. * + * Aggregates sharing the same inputs and transition functions can get + * merged into a single transition calculation. If the transition function + * calls AggGetAggref, it will get some one of the Aggrefs for which it is + * executing. It must therefore not pay attention to the Aggref fields that + * relate to the final function, as those are indeterminate. But if a final + * function calls AggGetAggref, it will get a precise result. + * * Note that if an aggregate is being used as a window function, this will * return NULL. We could provide a similar function to return the relevant * WindowFunc node in such cases, but it's not needed yet. @@ -4079,8 +4090,16 @@ AggGetAggref(FunctionCallInfo fcinfo) { if (fcinfo->context && IsA(fcinfo->context, AggState)) { + AggStatePerAgg curperagg; AggStatePerTrans curpertrans; + /* check curperagg (valid when in a final function) */ + curperagg = ((AggState *) fcinfo->context)->curperagg; + + if (curperagg) + return curperagg->aggref; + + /* check curpertrans (valid when in a transition function) */ curpertrans = ((AggState *) fcinfo->context)->curpertrans; if (curpertrans) diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index c46113444f..d4ce8d8f49 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1808,7 +1808,8 @@ typedef struct AggState ExprContext **aggcontexts; /* econtexts for long-lived data (per GS) */ ExprContext *tmpcontext; /* econtext for input expressions */ ExprContext *curaggcontext; /* currently active aggcontext */ - AggStatePerTrans curpertrans; /* currently active trans state */ + AggStatePerAgg curperagg; /* currently active aggregate, if any */ + AggStatePerTrans curpertrans; /* currently active trans state, if any */ bool input_done; /* indicates end of input */ bool agg_done; /* indicates completion of Agg scan */ int projected_set; /* The last projected grouping set */ From 60f7c0abef0327648c02795312d1679c66586fbb Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 12 Oct 2017 16:50:53 -0400 Subject: [PATCH 0380/1087] Use ResultRelInfo ** rather than ResultRelInfo * for tuple routing. The previous convention doesn't lend itself to creating ResultRelInfos lazily, as we already do in ExecGetTriggerResultRel. This patch doesn't make anything lazier than before, but the pending patch for UPDATE tuple routing proposes to do so (and there might be other opportunities as well). Amit Khandekar with some adjustments by me. 
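As an illustrative sketch (not part of this patch; the helper name is hypothetical, and backend headers are assumed), an array of ResultRelInfo pointers is what makes lazy creation expressible: an entry can stay NULL until a tuple is first routed to that partition, whereas an array of ResultRelInfo structs has to be materialized up front.

    static ResultRelInfo *
    get_partition_rri(ModifyTableState *mtstate, int leaf_part_index)
    {
        ResultRelInfo **partitions = mtstate->mt_partitions;

        if (partitions[leaf_part_index] == NULL)
        {
            /* First use of this partition: build its ResultRelInfo now. */
            partitions[leaf_part_index] =
                (ResultRelInfo *) palloc0(sizeof(ResultRelInfo));
            /* ... InitResultRelInfo() and index setup would go here ... */
        }
        return partitions[leaf_part_index];
    }
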
Discussion: http://postgr.es/m/CA+TgmoYPVP9Lyf6vUFA5DwxS4c--x6LOj2y36BsJaYtp62eXPQ@mail.gmail.com --- src/backend/commands/copy.c | 10 ++-- src/backend/executor/execMain.c | 13 ++--- src/backend/executor/nodeModifyTable.c | 74 +++++++++++++++----------- src/include/executor/executor.h | 2 +- src/include/nodes/execnodes.h | 2 +- 5 files changed, 57 insertions(+), 44 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 64550f9c68..8006df32a8 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -167,7 +167,7 @@ typedef struct CopyStateData PartitionDispatch *partition_dispatch_info; int num_dispatch; /* Number of entries in the above array */ int num_partitions; /* Number of members in the following arrays */ - ResultRelInfo *partitions; /* Per partition result relation */ + ResultRelInfo **partitions; /* Per partition result relation pointers */ TupleConversionMap **partition_tupconv_maps; TupleTableSlot *partition_tuple_slot; TransitionCaptureState *transition_capture; @@ -2459,7 +2459,7 @@ CopyFrom(CopyState cstate) if (cstate->rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) { PartitionDispatch *partition_dispatch_info; - ResultRelInfo *partitions; + ResultRelInfo **partitions; TupleConversionMap **partition_tupconv_maps; TupleTableSlot *partition_tuple_slot; int num_parted, @@ -2495,7 +2495,7 @@ CopyFrom(CopyState cstate) for (i = 0; i < cstate->num_partitions; ++i) { cstate->transition_tupconv_maps[i] = - convert_tuples_by_name(RelationGetDescr(cstate->partitions[i].ri_RelationDesc), + convert_tuples_by_name(RelationGetDescr(cstate->partitions[i]->ri_RelationDesc), RelationGetDescr(cstate->rel), gettext_noop("could not convert row type")); } @@ -2626,7 +2626,7 @@ CopyFrom(CopyState cstate) * to the selected partition. 
*/ saved_resultRelInfo = resultRelInfo; - resultRelInfo = cstate->partitions + leaf_part_index; + resultRelInfo = cstate->partitions[leaf_part_index]; /* We do not yet have a way to insert into a foreign partition */ if (resultRelInfo->ri_FdwRoutine) @@ -2856,7 +2856,7 @@ CopyFrom(CopyState cstate) } for (i = 0; i < cstate->num_partitions; i++) { - ResultRelInfo *resultRelInfo = cstate->partitions + i; + ResultRelInfo *resultRelInfo = cstate->partitions[i]; ExecCloseIndices(resultRelInfo); heap_close(resultRelInfo->ri_RelationDesc, NoLock); diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 8359beb463..9689429912 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -3242,7 +3242,7 @@ EvalPlanQualEnd(EPQState *epqstate) * Output arguments: * 'pd' receives an array of PartitionDispatch objects with one entry for * every partitioned table in the partition tree - * 'partitions' receives an array of ResultRelInfo objects with one entry for + * 'partitions' receives an array of ResultRelInfo* objects with one entry for * every leaf partition in the partition tree * 'tup_conv_maps' receives an array of TupleConversionMap objects with one * entry for every leaf partition (required to convert input tuple based @@ -3265,7 +3265,7 @@ ExecSetupPartitionTupleRouting(Relation rel, Index resultRTindex, EState *estate, PartitionDispatch **pd, - ResultRelInfo **partitions, + ResultRelInfo ***partitions, TupleConversionMap ***tup_conv_maps, TupleTableSlot **partition_tuple_slot, int *num_parted, int *num_partitions) @@ -3283,8 +3283,8 @@ ExecSetupPartitionTupleRouting(Relation rel, (void) find_all_inheritors(RelationGetRelid(rel), RowExclusiveLock, NULL); *pd = RelationGetPartitionDispatchInfo(rel, num_parted, &leaf_parts); *num_partitions = list_length(leaf_parts); - *partitions = (ResultRelInfo *) palloc(*num_partitions * - sizeof(ResultRelInfo)); + *partitions = (ResultRelInfo **) palloc(*num_partitions * + sizeof(ResultRelInfo *)); *tup_conv_maps = (TupleConversionMap **) palloc0(*num_partitions * sizeof(TupleConversionMap *)); @@ -3296,7 +3296,8 @@ ExecSetupPartitionTupleRouting(Relation rel, */ *partition_tuple_slot = MakeTupleTableSlot(); - leaf_part_rri = *partitions; + leaf_part_rri = (ResultRelInfo *) palloc0(*num_partitions * + sizeof(ResultRelInfo)); i = 0; foreach(cell, leaf_parts) { @@ -3341,7 +3342,7 @@ ExecSetupPartitionTupleRouting(Relation rel, estate->es_leaf_result_relations = lappend(estate->es_leaf_result_relations, leaf_part_rri); - leaf_part_rri++; + (*partitions)[i] = leaf_part_rri++; i++; } } diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 845c409540..0027d21253 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -303,7 +303,7 @@ ExecInsert(ModifyTableState *mtstate, * the selected partition. */ saved_resultRelInfo = resultRelInfo; - resultRelInfo = mtstate->mt_partitions + leaf_part_index; + resultRelInfo = mtstate->mt_partitions[leaf_part_index]; /* We do not yet have a way to insert into a foreign partition */ if (resultRelInfo->ri_FdwRoutine) @@ -1498,25 +1498,11 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) if (mtstate->mt_transition_capture != NULL || mtstate->mt_oc_transition_capture != NULL) { - ResultRelInfo *resultRelInfos; int numResultRelInfos; - /* Find the set of partitions so that we can find their TupleDescs. 
*/ - if (mtstate->mt_partition_dispatch_info != NULL) - { - /* - * For INSERT via partitioned table, so we need TupleDescs based - * on the partition routing table. - */ - resultRelInfos = mtstate->mt_partitions; - numResultRelInfos = mtstate->mt_num_partitions; - } - else - { - /* Otherwise we need the ResultRelInfo for each subplan. */ - resultRelInfos = mtstate->resultRelInfo; - numResultRelInfos = mtstate->mt_nplans; - } + numResultRelInfos = (mtstate->mt_partition_tuple_slot != NULL ? + mtstate->mt_num_partitions : + mtstate->mt_nplans); /* * Build array of conversion maps from each child's TupleDesc to the @@ -1526,12 +1512,36 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) */ mtstate->mt_transition_tupconv_maps = (TupleConversionMap **) palloc0(sizeof(TupleConversionMap *) * numResultRelInfos); - for (i = 0; i < numResultRelInfos; ++i) + + /* Choose the right set of partitions */ + if (mtstate->mt_partition_dispatch_info != NULL) { - mtstate->mt_transition_tupconv_maps[i] = - convert_tuples_by_name(RelationGetDescr(resultRelInfos[i].ri_RelationDesc), - RelationGetDescr(targetRelInfo->ri_RelationDesc), - gettext_noop("could not convert row type")); + /* + * For tuple routing among partitions, we need TupleDescs based + * on the partition routing table. + */ + ResultRelInfo **resultRelInfos = mtstate->mt_partitions; + + for (i = 0; i < numResultRelInfos; ++i) + { + mtstate->mt_transition_tupconv_maps[i] = + convert_tuples_by_name(RelationGetDescr(resultRelInfos[i]->ri_RelationDesc), + RelationGetDescr(targetRelInfo->ri_RelationDesc), + gettext_noop("could not convert row type")); + } + } + else + { + /* Otherwise we need the ResultRelInfo for each subplan. */ + ResultRelInfo *resultRelInfos = mtstate->resultRelInfo; + + for (i = 0; i < numResultRelInfos; ++i) + { + mtstate->mt_transition_tupconv_maps[i] = + convert_tuples_by_name(RelationGetDescr(resultRelInfos[i].ri_RelationDesc), + RelationGetDescr(targetRelInfo->ri_RelationDesc), + gettext_noop("could not convert row type")); + } } /* @@ -1935,7 +1945,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) { PartitionDispatch *partition_dispatch_info; - ResultRelInfo *partitions; + ResultRelInfo **partitions; TupleConversionMap **partition_tupconv_maps; TupleTableSlot *partition_tuple_slot; int num_parted, @@ -2014,14 +2024,16 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) mtstate->mt_nplans == 1); wcoList = linitial(node->withCheckOptionLists); plan = mtstate->mt_plans[0]; - resultRelInfo = mtstate->mt_partitions; for (i = 0; i < mtstate->mt_num_partitions; i++) { - Relation partrel = resultRelInfo->ri_RelationDesc; + Relation partrel; List *mapped_wcoList; List *wcoExprs = NIL; ListCell *ll; + resultRelInfo = mtstate->mt_partitions[i]; + partrel = resultRelInfo->ri_RelationDesc; + /* varno = node->nominalRelation */ mapped_wcoList = map_partition_varattnos(wcoList, node->nominalRelation, @@ -2037,7 +2049,6 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) resultRelInfo->ri_WithCheckOptions = mapped_wcoList; resultRelInfo->ri_WithCheckOptionExprs = wcoExprs; - resultRelInfo++; } } @@ -2088,13 +2099,15 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) * will suffice. This only occurs for the INSERT case; UPDATE/DELETE * are handled above. 
*/ - resultRelInfo = mtstate->mt_partitions; returningList = linitial(node->returningLists); for (i = 0; i < mtstate->mt_num_partitions; i++) { - Relation partrel = resultRelInfo->ri_RelationDesc; + Relation partrel; List *rlist; + resultRelInfo = mtstate->mt_partitions[i]; + partrel = resultRelInfo->ri_RelationDesc; + /* varno = node->nominalRelation */ rlist = map_partition_varattnos(returningList, node->nominalRelation, @@ -2102,7 +2115,6 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) resultRelInfo->ri_projectReturning = ExecBuildProjectionInfo(rlist, econtext, slot, &mtstate->ps, resultRelInfo->ri_RelationDesc->rd_att); - resultRelInfo++; } } else @@ -2376,7 +2388,7 @@ ExecEndModifyTable(ModifyTableState *node) } for (i = 0; i < node->mt_num_partitions; i++) { - ResultRelInfo *resultRelInfo = node->mt_partitions + i; + ResultRelInfo *resultRelInfo = node->mt_partitions[i]; ExecCloseIndices(resultRelInfo); heap_close(resultRelInfo->ri_RelationDesc, NoLock); diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index 37fd6b2700..c4ecf0d50f 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -210,7 +210,7 @@ extern void ExecSetupPartitionTupleRouting(Relation rel, Index resultRTindex, EState *estate, PartitionDispatch **pd, - ResultRelInfo **partitions, + ResultRelInfo ***partitions, TupleConversionMap ***tup_conv_maps, TupleTableSlot **partition_tuple_slot, int *num_parted, int *num_partitions); diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index d4ce8d8f49..01ceeef39c 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -979,7 +979,7 @@ typedef struct ModifyTableState int mt_num_dispatch; /* Number of entries in the above array */ int mt_num_partitions; /* Number of members in the following * arrays */ - ResultRelInfo *mt_partitions; /* Per partition result relation */ + ResultRelInfo **mt_partitions; /* Per partition result relation pointers */ TupleConversionMap **mt_partition_tupconv_maps; /* Per partition tuple conversion map */ TupleTableSlot *mt_partition_tuple_slot; From 1c497fa72df7593d8976653538da3d0ab033207f Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 12 Oct 2017 17:10:48 -0400 Subject: [PATCH 0381/1087] Avoid coercing a whole-row variable that is already coerced. Marginal efficiency and beautification hack. I'm not sure whether this case ever arises currently, but the pending patch for update tuple routing will cause it to arise. Amit Khandekar Discussion: http://postgr.es/m/CAJ3gD9cazfppe7-wwUbabPcQ4_0SubkiPFD1+0r5_DkVNWo5yg@mail.gmail.com --- src/backend/rewrite/rewriteManip.c | 53 +++++++++++++++++++++++------- 1 file changed, 42 insertions(+), 11 deletions(-) diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c index c5773efd19..9290c7f793 100644 --- a/src/backend/rewrite/rewriteManip.c +++ b/src/backend/rewrite/rewriteManip.c @@ -1224,6 +1224,7 @@ typedef struct /* Target type when converting whole-row vars */ Oid to_rowtype; bool *found_whole_row; /* output flag */ + bool coerced_var; /* var is under ConvertRowTypeExpr */ } map_variable_attnos_context; static Node * @@ -1267,22 +1268,29 @@ map_variable_attnos_mutator(Node *node, /* Don't convert unless necessary. */ if (context->to_rowtype != var->vartype) { - ConvertRowtypeExpr *r; - /* Var itself is converted to the requested type. 
*/ newvar->vartype = context->to_rowtype; /* - * And a conversion node on top to convert back to the - * original type. + * If this var is already under a ConvertRowtypeExpr, + * we don't have to add another one. */ - r = makeNode(ConvertRowtypeExpr); - r->arg = (Expr *) newvar; - r->resulttype = var->vartype; - r->convertformat = COERCE_IMPLICIT_CAST; - r->location = -1; - - return (Node *) r; + if (!context->coerced_var) + { + ConvertRowtypeExpr *r; + + /* + * And a conversion node on top to convert back to + * the original type. + */ + r = makeNode(ConvertRowtypeExpr); + r->arg = (Expr *) newvar; + r->resulttype = var->vartype; + r->convertformat = COERCE_IMPLICIT_CAST; + r->location = -1; + + return (Node *) r; + } } } } @@ -1290,6 +1298,28 @@ map_variable_attnos_mutator(Node *node, } /* otherwise fall through to copy the var normally */ } + else if (IsA(node, ConvertRowtypeExpr)) + { + ConvertRowtypeExpr *r = (ConvertRowtypeExpr *) node; + + /* + * If this is coercing a var (which is typical), convert only the var, + * as against adding another ConvertRowtypeExpr over it. + */ + if (IsA(r->arg, Var)) + { + ConvertRowtypeExpr *newnode; + + newnode = (ConvertRowtypeExpr *) palloc(sizeof(ConvertRowtypeExpr)); + *newnode = *r; + context->coerced_var = true; + newnode->arg = (Expr *) map_variable_attnos_mutator((Node *) r->arg, context); + context->coerced_var = false; + + return (Node *) newnode; + } + /* Else fall through the expression tree mutator */ + } else if (IsA(node, Query)) { /* Recurse into RTE subquery or not-yet-planned sublink subquery */ @@ -1321,6 +1351,7 @@ map_variable_attnos(Node *node, context.map_length = map_length; context.to_rowtype = to_rowtype; context.found_whole_row = found_whole_row; + context.coerced_var = false; *found_whole_row = false; From 91d5f1a4a3e8aea2a6488243bac55806160408fb Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 12 Oct 2017 15:25:38 -0700 Subject: [PATCH 0382/1087] Use C99 restrict via pg_restrict, rather than restrict directly. Unfortunately using 'restrict' plainly causes problems with MSVC, which supports restrict only as '__restrict'. Defining 'restrict' to '__restrict' unfortunately causes a conflict with MSVC's usage of __declspec(restrict) in headers. Therefore define pg_restrict to the appropriate keyword instead, and replace existing usages. This replaces the temporary workaround introduced in 36b4b91ba078. Author: Andres Freund Discussion: https://postgr.es/m/2656.1507830907@sss.pgh.pa.us --- configure | 107 +++++++++++++++++++--------------- configure.in | 15 ++++- src/include/libpq/pqformat.h | 24 ++++---- src/include/pg_config.h.in | 4 ++ src/include/pg_config.h.win32 | 22 ++++--- 5 files changed, 101 insertions(+), 71 deletions(-) diff --git a/configure b/configure index 910f0fc373..cdcb3ceb0c 100755 --- a/configure +++ b/configure @@ -11545,52 +11545,6 @@ _ACEOF ;; esac -{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for C/C++ restrict keyword" >&5 -$as_echo_n "checking for C/C++ restrict keyword... " >&6; } -if ${ac_cv_c_restrict+:} false; then : - $as_echo_n "(cached) " >&6 -else - ac_cv_c_restrict=no - # The order here caters to the fact that C++ does not require restrict. - for ac_kw in __restrict __restrict__ _Restrict restrict; do - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. 
*/ -typedef int * int_ptr; - int foo (int_ptr $ac_kw ip) { - return ip[0]; - } -int -main () -{ -int s[1]; - int * $ac_kw t = s; - t[0] = 0; - return foo(t) - ; - return 0; -} -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - ac_cv_c_restrict=$ac_kw -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext - test "$ac_cv_c_restrict" != no && break - done - -fi -{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_restrict" >&5 -$as_echo "$ac_cv_c_restrict" >&6; } - - case $ac_cv_c_restrict in - restrict) ;; - no) $as_echo "#define restrict /**/" >>confdefs.h - ;; - *) cat >>confdefs.h <<_ACEOF -#define restrict $ac_cv_c_restrict -_ACEOF - ;; - esac - { $as_echo "$as_me:${as_lineno-$LINENO}: checking for printf format archetype" >&5 $as_echo_n "checking for printf format archetype... " >&6; } if ${pgac_cv_printf_archetype+:} false; then : @@ -12508,6 +12462,67 @@ $as_echo "#define LOCALE_T_IN_XLOCALE 1" >>confdefs.h fi +# MSVC doesn't cope well with defining restrict to __restrict, the +# spelling it understands, because it conflicts with +# __declspec(restrict). Therefore we define pg_restrict to the +# appropriate definition, which presumably won't conflict. +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for C/C++ restrict keyword" >&5 +$as_echo_n "checking for C/C++ restrict keyword... " >&6; } +if ${ac_cv_c_restrict+:} false; then : + $as_echo_n "(cached) " >&6 +else + ac_cv_c_restrict=no + # The order here caters to the fact that C++ does not require restrict. + for ac_kw in __restrict __restrict__ _Restrict restrict; do + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ +typedef int * int_ptr; + int foo (int_ptr $ac_kw ip) { + return ip[0]; + } +int +main () +{ +int s[1]; + int * $ac_kw t = s; + t[0] = 0; + return foo(t) + ; + return 0; +} +_ACEOF +if ac_fn_c_try_compile "$LINENO"; then : + ac_cv_c_restrict=$ac_kw +fi +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + test "$ac_cv_c_restrict" != no && break + done + +fi +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_restrict" >&5 +$as_echo "$ac_cv_c_restrict" >&6; } + + case $ac_cv_c_restrict in + restrict) ;; + no) $as_echo "#define restrict /**/" >>confdefs.h + ;; + *) cat >>confdefs.h <<_ACEOF +#define restrict $ac_cv_c_restrict +_ACEOF + ;; + esac + +if test "$ac_cv_c_restrict" = "no" ; then + pg_restrict="" +else + pg_restrict="$ac_cv_c_restrict" +fi + +cat >>confdefs.h <<_ACEOF +#define pg_restrict $pg_restrict +_ACEOF + + ac_fn_c_check_type "$LINENO" "struct cmsgcred" "ac_cv_type_struct_cmsgcred" "#include #include #ifdef HAVE_SYS_UCRED_H diff --git a/configure.in b/configure.in index ab990d69f4..32bb7bf940 100644 --- a/configure.in +++ b/configure.in @@ -1299,7 +1299,6 @@ fi m4_defun([AC_PROG_CC_STDC], []) dnl We don't want that. AC_C_BIGENDIAN AC_C_INLINE -AC_C_RESTRICT PGAC_PRINTF_ARCHETYPE AC_C_FLEXIBLE_ARRAY_MEMBER PGAC_C_SIGNED @@ -1326,6 +1325,20 @@ AC_TYPE_LONG_LONG_INT PGAC_TYPE_LOCALE_T +# MSVC doesn't cope well with defining restrict to __restrict, the +# spelling it understands, because it conflicts with +# __declspec(restrict). Therefore we define pg_restrict to the +# appropriate definition, which presumably won't conflict. 
+AC_C_RESTRICT +if test "$ac_cv_c_restrict" = "no" ; then + pg_restrict="" +else + pg_restrict="$ac_cv_c_restrict" +fi +AC_DEFINE_UNQUOTED([pg_restrict], [$pg_restrict], +[Define to keyword to use for C99 restrict support, or to nothing if not +supported]) + AC_CHECK_TYPES([struct cmsgcred], [], [], [#include #include diff --git a/src/include/libpq/pqformat.h b/src/include/libpq/pqformat.h index 2329669b08..35cdee7b76 100644 --- a/src/include/libpq/pqformat.h +++ b/src/include/libpq/pqformat.h @@ -38,8 +38,8 @@ extern void pq_sendfloat8(StringInfo buf, float8 f); * Append a int8 to a StringInfo buffer, which already has enough space * preallocated. * - * The use of restrict allows the compiler to optimize the code based on the - * assumption that buf, buf->len, buf->data and *buf->data don't + * The use of pg_restrict allows the compiler to optimize the code based on + * the assumption that buf, buf->len, buf->data and *buf->data don't * overlap. Without the annotation buf->len etc cannot be kept in a register * over subsequent pq_writeint* calls. * @@ -47,12 +47,12 @@ extern void pq_sendfloat8(StringInfo buf, float8 f); * overly picky and demanding a * before a restrict. */ static inline void -pq_writeint8(StringInfoData * restrict buf, int8 i) +pq_writeint8(StringInfoData *pg_restrict buf, int8 i) { int8 ni = i; Assert(buf->len + sizeof(i) <= buf->maxlen); - memcpy((char *restrict) (buf->data + buf->len), &ni, sizeof(ni)); + memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(ni)); buf->len += sizeof(i); } @@ -61,12 +61,12 @@ pq_writeint8(StringInfoData * restrict buf, int8 i) * preallocated. */ static inline void -pq_writeint16(StringInfoData * restrict buf, int16 i) +pq_writeint16(StringInfoData *pg_restrict buf, int16 i) { int16 ni = pg_hton16(i); Assert(buf->len + sizeof(ni) <= buf->maxlen); - memcpy((char *restrict) (buf->data + buf->len), &ni, sizeof(i)); + memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(i)); buf->len += sizeof(i); } @@ -75,12 +75,12 @@ pq_writeint16(StringInfoData * restrict buf, int16 i) * preallocated. */ static inline void -pq_writeint32(StringInfoData * restrict buf, int32 i) +pq_writeint32(StringInfoData *pg_restrict buf, int32 i) { int32 ni = pg_hton32(i); Assert(buf->len + sizeof(i) <= buf->maxlen); - memcpy((char *restrict) (buf->data + buf->len), &ni, sizeof(i)); + memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(i)); buf->len += sizeof(i); } @@ -89,12 +89,12 @@ pq_writeint32(StringInfoData * restrict buf, int32 i) * preallocated. */ static inline void -pq_writeint64(StringInfoData * restrict buf, int64 i) +pq_writeint64(StringInfoData *pg_restrict buf, int64 i) { int64 ni = pg_hton64(i); Assert(buf->len + sizeof(i) <= buf->maxlen); - memcpy((char *restrict) (buf->data + buf->len), &ni, sizeof(i)); + memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(i)); buf->len += sizeof(i); } @@ -109,7 +109,7 @@ pq_writeint64(StringInfoData * restrict buf, int64 i) * sent to the frontend. 
*/ static inline void -pq_writestring(StringInfoData * restrict buf, const char *restrict str) +pq_writestring(StringInfoData *pg_restrict buf, const char *pg_restrict str) { int slen = strlen(str); char *p; @@ -120,7 +120,7 @@ pq_writestring(StringInfoData * restrict buf, const char *restrict str) Assert(buf->len + slen + 1 <= buf->maxlen); - memcpy(((char *restrict) buf->data + buf->len), p, slen + 1); + memcpy(((char *pg_restrict) buf->data + buf->len), p, slen + 1); buf->len += slen + 1; if (p != str) diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index 80ee37dd62..cfdcc5ac62 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -923,6 +923,10 @@ if such a type exists, and if the system does not define it. */ #undef intptr_t +/* Define to keyword to use for C99 restrict support, or to nothing if not + supported */ +#undef pg_restrict + /* Define to the equivalent of the C99 'restrict' keyword, or to nothing if this is not supported. Do not define if restrict is supported directly. */ diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index 81604de7f9..ab9b941e89 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -681,22 +681,20 @@ #define inline __inline #endif -/* Define to the equivalent of the C99 'restrict' keyword, or to - nothing if this is not supported. Do not define if restrict is - supported directly. */ -/* Visual Studio 2008 and upwards */ +/* Define to keyword to use for C99 restrict support, or to nothing if this is + not supported */ +/* Works for C and C++ in Visual Studio 2008 and upwards */ #if (_MSC_VER >= 1500) -/* works for C and C++ in msvc */ -/* - * Temporary attempt at a workaround for stdlib.h's use of - * declspec(restrict), conflicting with below define. - */ -#include -#define restrict __restrict +#define pg_restrict __restrict #else -#define restrict +#define pg_restrict #endif +/* Define to the equivalent of the C99 'restrict' keyword, or to + nothing if this is not supported. Do not define if restrict is + supported directly. */ +/* not defined, because it'd conflict with __declspec(restrict) */ + /* Define to empty if the C compiler does not understand signed types. */ /* #undef signed */ From 1feff99fe4576d4685c14dff18d1f845a1456f10 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 12 Oct 2017 22:33:15 -0400 Subject: [PATCH 0383/1087] Improve LDAP cleanup code in error paths. After calling ldap_unbind_s() we probably shouldn't try to use the LDAP connection again to call ldap_get_option(), even if it failed. The OpenLDAP man page for ldap_unbind[_s] says "Once it is called, the connection to the LDAP server is closed, and the ld structure is invalid." Otherwise, as a general rule we should probably call ldap_unbind() before returning in all paths to avoid leaking resources. It is unlikely there is any practical leak problem since failure to authenticate currently results in the backend exiting soon afterwards. 
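The pattern applied throughout the error paths can be sketched as follows (simplified, not a verbatim hunk; 'r', 'ldap' and 'ldapversion' are assumed from the surrounding function): emit any diagnostic that needs the handle first, then unbind, then return, so the handle is never used after ldap_unbind().

    if ((r = ldap_set_option(ldap, LDAP_OPT_PROTOCOL_VERSION,
                             &ldapversion)) != LDAP_SUCCESS)
    {
        /* Report while the handle is still valid... */
        ereport(LOG,
                (errmsg("could not set LDAP protocol version: %s",
                        ldap_err2string(r))));
        /* ...then close it, and do not touch 'ldap' again. */
        ldap_unbind(ldap);
        return STATUS_ERROR;
    }
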
Author: Thomas Munro Reviewed-By: Alvaro Herrera, Peter Eisentraut Discussion: https://postgr.es/m/20170914141205.eup4kxzlkagtmfac%40alvherre.pgsql --- src/backend/libpq/auth.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 3b3a932a7d..11ef4a5858 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -2331,9 +2331,9 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) if ((r = ldap_set_option(*ldap, LDAP_OPT_PROTOCOL_VERSION, &ldapversion)) != LDAP_SUCCESS) { - ldap_unbind(*ldap); ereport(LOG, (errmsg("could not set LDAP protocol version: %s", ldap_err2string(r)))); + ldap_unbind(*ldap); return STATUS_ERROR; } @@ -2360,18 +2360,18 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) * should never happen since we import other files from * wldap32, but check anyway */ - ldap_unbind(*ldap); ereport(LOG, (errmsg("could not load wldap32.dll"))); + ldap_unbind(*ldap); return STATUS_ERROR; } _ldap_start_tls_sA = (__ldap_start_tls_sA) GetProcAddress(ldaphandle, "ldap_start_tls_sA"); if (_ldap_start_tls_sA == NULL) { - ldap_unbind(*ldap); ereport(LOG, (errmsg("could not load function _ldap_start_tls_sA in wldap32.dll"), errdetail("LDAP over SSL is not supported on this platform."))); + ldap_unbind(*ldap); return STATUS_ERROR; } @@ -2384,9 +2384,9 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) if ((r = _ldap_start_tls_sA(*ldap, NULL, NULL, NULL, NULL)) != LDAP_SUCCESS) #endif { - ldap_unbind(*ldap); ereport(LOG, (errmsg("could not start LDAP TLS session: %s", ldap_err2string(r)))); + ldap_unbind(*ldap); return STATUS_ERROR; } } @@ -2491,6 +2491,7 @@ CheckLDAPAuth(Port *port) { ereport(LOG, (errmsg("invalid character in user name for LDAP authentication"))); + ldap_unbind(ldap); pfree(passwd); return STATUS_ERROR; } @@ -2508,6 +2509,7 @@ CheckLDAPAuth(Port *port) ereport(LOG, (errmsg("could not perform initial LDAP bind for ldapbinddn \"%s\" on server \"%s\": %s", port->hba->ldapbinddn, port->hba->ldapserver, ldap_err2string(r)))); + ldap_unbind(ldap); pfree(passwd); return STATUS_ERROR; } @@ -2533,6 +2535,7 @@ CheckLDAPAuth(Port *port) ereport(LOG, (errmsg("could not search LDAP for filter \"%s\" on server \"%s\": %s", filter, port->hba->ldapserver, ldap_err2string(r)))); + ldap_unbind(ldap); pfree(passwd); pfree(filter); return STATUS_ERROR; @@ -2554,6 +2557,7 @@ CheckLDAPAuth(Port *port) count, filter, port->hba->ldapserver, count))); + ldap_unbind(ldap); pfree(passwd); pfree(filter); ldap_msgfree(search_message); @@ -2570,6 +2574,7 @@ CheckLDAPAuth(Port *port) ereport(LOG, (errmsg("could not get dn for the first entry matching \"%s\" on server \"%s\": %s", filter, port->hba->ldapserver, ldap_err2string(error)))); + ldap_unbind(ldap); pfree(passwd); pfree(filter); ldap_msgfree(search_message); @@ -2585,12 +2590,9 @@ CheckLDAPAuth(Port *port) r = ldap_unbind_s(ldap); if (r != LDAP_SUCCESS) { - int error; - - (void) ldap_get_option(ldap, LDAP_OPT_ERROR_NUMBER, &error); ereport(LOG, - (errmsg("could not unbind after searching for user \"%s\" on server \"%s\": %s", - fulluser, port->hba->ldapserver, ldap_err2string(error)))); + (errmsg("could not unbind after searching for user \"%s\" on server \"%s\"", + fulluser, port->hba->ldapserver))); pfree(passwd); pfree(fulluser); return STATUS_ERROR; From cf1238cd9763f0a6e3454ddf75ac56ff722f18ee Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 12 Oct 2017 22:33:34 -0400 Subject: [PATCH 0384/1087] Log diagnostic messages if 
errors occur during LDAP auth. Diagnostic messages seem likely to help users diagnose root causes more easily, so let's report them as errdetail. Author: Thomas Munro Reviewed-By: Ashutosh Bapat, Christoph Berg, Alvaro Herrera, Peter Eisentraut Discussion: https://postgr.es/m/CAEepm=2_dA-SYpFdmNVwvKsEBXOUj=K4ooKovHmvj6jnMdt8dw@mail.gmail.com --- src/backend/libpq/auth.c | 48 +++++++++++++++++++++++++++++++------ src/test/ldap/t/001_auth.pl | 11 ++++++++- 2 files changed, 51 insertions(+), 8 deletions(-) diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 11ef4a5858..2728c66a35 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -2305,6 +2305,8 @@ CheckBSDAuth(Port *port, char *user) */ #ifdef USE_LDAP +static int errdetail_for_ldap(LDAP *ldap); + /* * Initialize a connection to the LDAP server, including setting up * TLS if requested. @@ -2332,7 +2334,9 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) if ((r = ldap_set_option(*ldap, LDAP_OPT_PROTOCOL_VERSION, &ldapversion)) != LDAP_SUCCESS) { ereport(LOG, - (errmsg("could not set LDAP protocol version: %s", ldap_err2string(r)))); + (errmsg("could not set LDAP protocol version: %s", + ldap_err2string(r)), + errdetail_for_ldap(*ldap))); ldap_unbind(*ldap); return STATUS_ERROR; } @@ -2385,7 +2389,9 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) #endif { ereport(LOG, - (errmsg("could not start LDAP TLS session: %s", ldap_err2string(r)))); + (errmsg("could not start LDAP TLS session: %s", + ldap_err2string(r)), + errdetail_for_ldap(*ldap))); ldap_unbind(*ldap); return STATUS_ERROR; } @@ -2508,7 +2514,9 @@ CheckLDAPAuth(Port *port) { ereport(LOG, (errmsg("could not perform initial LDAP bind for ldapbinddn \"%s\" on server \"%s\": %s", - port->hba->ldapbinddn, port->hba->ldapserver, ldap_err2string(r)))); + port->hba->ldapbinddn, port->hba->ldapserver, + ldap_err2string(r)), + errdetail_for_ldap(ldap))); ldap_unbind(ldap); pfree(passwd); return STATUS_ERROR; @@ -2534,7 +2542,8 @@ CheckLDAPAuth(Port *port) { ereport(LOG, (errmsg("could not search LDAP for filter \"%s\" on server \"%s\": %s", - filter, port->hba->ldapserver, ldap_err2string(r)))); + filter, port->hba->ldapserver, ldap_err2string(r)), + errdetail_for_ldap(ldap))); ldap_unbind(ldap); pfree(passwd); pfree(filter); @@ -2573,7 +2582,9 @@ CheckLDAPAuth(Port *port) (void) ldap_get_option(ldap, LDAP_OPT_ERROR_NUMBER, &error); ereport(LOG, (errmsg("could not get dn for the first entry matching \"%s\" on server \"%s\": %s", - filter, port->hba->ldapserver, ldap_err2string(error)))); + filter, port->hba->ldapserver, + ldap_err2string(error)), + errdetail_for_ldap(ldap))); ldap_unbind(ldap); pfree(passwd); pfree(filter); @@ -2618,23 +2629,46 @@ CheckLDAPAuth(Port *port) port->hba->ldapsuffix ? port->hba->ldapsuffix : ""); r = ldap_simple_bind_s(ldap, fulluser, passwd); - ldap_unbind(ldap); if (r != LDAP_SUCCESS) { ereport(LOG, (errmsg("LDAP login failed for user \"%s\" on server \"%s\": %s", - fulluser, port->hba->ldapserver, ldap_err2string(r)))); + fulluser, port->hba->ldapserver, ldap_err2string(r)), + errdetail_for_ldap(ldap))); + ldap_unbind(ldap); pfree(passwd); pfree(fulluser); return STATUS_ERROR; } + ldap_unbind(ldap); pfree(passwd); pfree(fulluser); return STATUS_OK; } + +/* + * Add a detail error message text to the current error if one can be + * constructed from the LDAP 'diagnostic message'. 
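+ * Returns 0, like errdetail() itself, so a call to this function can
+ * appear directly in an ereport() auxiliary-argument list.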
+ */ +static int +errdetail_for_ldap(LDAP *ldap) +{ + char *message; + int rc; + + rc = ldap_get_option(ldap, LDAP_OPT_DIAGNOSTIC_MESSAGE, &message); + if (rc == LDAP_SUCCESS && message != NULL) + { + errdetail("LDAP diagnostics: %s", message); + ldap_memfree(message); + } + + return 0; +} + #endif /* USE_LDAP */ diff --git a/src/test/ldap/t/001_auth.pl b/src/test/ldap/t/001_auth.pl index a7cac6210b..38760ece61 100644 --- a/src/test/ldap/t/001_auth.pl +++ b/src/test/ldap/t/001_auth.pl @@ -2,7 +2,7 @@ use warnings; use TestLib; use PostgresNode; -use Test::More tests => 14; +use Test::More tests => 15; my ($slapd, $ldap_bin_dir, $ldap_schema_dir); @@ -175,3 +175,12 @@ sub test_access $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 0, 'combined LDAP URL and search filter'); + +note "diagnostic message"; + +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapprefix="uid=" ldapsuffix=",dc=example,dc=net" ldaptls=1}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 2, 'any attempt fails due to unsupported TLS'); From 7d1b8e7591690fb68cc53553e0f13b537b5455dc Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 12 Oct 2017 23:47:48 -0400 Subject: [PATCH 0385/1087] Attempt to fix LDAP build Apparently, an older spelling of LDAP_OPT_DIAGNOSTIC_MESSAGE is LDAP_OPT_ERROR_STRING, so fall back to that one. --- src/backend/libpq/auth.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 2728c66a35..174ef1c49d 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -141,6 +141,12 @@ ULONG (*__ldap_start_tls_sA) ( #endif static int CheckLDAPAuth(Port *port); + +/* LDAP_OPT_DIAGNOSTIC_MESSAGE is the newer spelling */ +#ifndef LDAP_OPT_DIAGNOSTIC_MESSAGE +#define LDAP_OPT_DIAGNOSTIC_MESSAGE LDAP_OPT_ERROR_STRING +#endif + #endif /* USE_LDAP */ /*---------------------------------------------------------------- From 5229db6c6f92515afcd698cf5d5badc12ffe6bc2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 13 Oct 2017 11:46:05 -0400 Subject: [PATCH 0386/1087] Rely on sizeof(typename) rather than sizeof(variable) in pqformat.h. In each of the pq_writeintN functions, the three uses of sizeof() should surely all be consistent. I started out to make them all sizeof(ni), but on reflection let's make them sizeof(typename) instead. That's more like our usual style elsewhere, and it's just barely possible that the failures buildfarm member hornet has shown since 4c119fbcd went in are caused by the compiler getting confused about sizeof() a parameter that it's optimizing away. In passing, improve a couple of comments. Discussion: https://postgr.es/m/E1e2RML-0002do-Lc@gemulon.postgresql.org --- src/include/libpq/pqformat.h | 44 ++++++++++++++++++------------------ 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/src/include/libpq/pqformat.h b/src/include/libpq/pqformat.h index 35cdee7b76..4de9e6dd21 100644 --- a/src/include/libpq/pqformat.h +++ b/src/include/libpq/pqformat.h @@ -35,13 +35,13 @@ extern void pq_sendfloat4(StringInfo buf, float4 f); extern void pq_sendfloat8(StringInfo buf, float8 f); /* - * Append a int8 to a StringInfo buffer, which already has enough space + * Append an int8 to a StringInfo buffer, which already has enough space * preallocated. 
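 * (Callers are expected to have reserved the space beforehand with
 * enlargeStringInfo(); the pq_sendintN wrappers below do exactly that.)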
* * The use of pg_restrict allows the compiler to optimize the code based on * the assumption that buf, buf->len, buf->data and *buf->data don't * overlap. Without the annotation buf->len etc cannot be kept in a register - * over subsequent pq_writeint* calls. + * over subsequent pq_writeintN calls. * * The use of StringInfoData * rather than StringInfo is due to MSVC being * overly picky and demanding a * before a restrict. @@ -51,13 +51,13 @@ pq_writeint8(StringInfoData *pg_restrict buf, int8 i) { int8 ni = i; - Assert(buf->len + sizeof(i) <= buf->maxlen); - memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(ni)); - buf->len += sizeof(i); + Assert(buf->len + sizeof(int8) <= buf->maxlen); + memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(int8)); + buf->len += sizeof(int8); } /* - * Append a int16 to a StringInfo buffer, which already has enough space + * Append an int16 to a StringInfo buffer, which already has enough space * preallocated. */ static inline void @@ -65,13 +65,13 @@ pq_writeint16(StringInfoData *pg_restrict buf, int16 i) { int16 ni = pg_hton16(i); - Assert(buf->len + sizeof(ni) <= buf->maxlen); - memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(i)); - buf->len += sizeof(i); + Assert(buf->len + sizeof(int16) <= buf->maxlen); + memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(int16)); + buf->len += sizeof(int16); } /* - * Append a int32 to a StringInfo buffer, which already has enough space + * Append an int32 to a StringInfo buffer, which already has enough space * preallocated. */ static inline void @@ -79,13 +79,13 @@ pq_writeint32(StringInfoData *pg_restrict buf, int32 i) { int32 ni = pg_hton32(i); - Assert(buf->len + sizeof(i) <= buf->maxlen); - memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(i)); - buf->len += sizeof(i); + Assert(buf->len + sizeof(int32) <= buf->maxlen); + memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(int32)); + buf->len += sizeof(int32); } /* - * Append a int64 to a StringInfo buffer, which already has enough space + * Append an int64 to a StringInfo buffer, which already has enough space * preallocated. 
*/ static inline void @@ -93,9 +93,9 @@ pq_writeint64(StringInfoData *pg_restrict buf, int64 i) { int64 ni = pg_hton64(i); - Assert(buf->len + sizeof(i) <= buf->maxlen); - memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(i)); - buf->len += sizeof(i); + Assert(buf->len + sizeof(int64) <= buf->maxlen); + memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(int64)); + buf->len += sizeof(int64); } /* @@ -131,7 +131,7 @@ pq_writestring(StringInfoData *pg_restrict buf, const char *pg_restrict str) static inline void pq_sendint8(StringInfo buf, int8 i) { - enlargeStringInfo(buf, sizeof(i)); + enlargeStringInfo(buf, sizeof(int8)); pq_writeint8(buf, i); } @@ -139,7 +139,7 @@ pq_sendint8(StringInfo buf, int8 i) static inline void pq_sendint16(StringInfo buf, int16 i) { - enlargeStringInfo(buf, sizeof(i)); + enlargeStringInfo(buf, sizeof(int16)); pq_writeint16(buf, i); } @@ -147,7 +147,7 @@ pq_sendint16(StringInfo buf, int16 i) static inline void pq_sendint32(StringInfo buf, int32 i) { - enlargeStringInfo(buf, sizeof(i)); + enlargeStringInfo(buf, sizeof(int32)); pq_writeint32(buf, i); } @@ -155,7 +155,7 @@ pq_sendint32(StringInfo buf, int32 i) static inline void pq_sendint64(StringInfo buf, int64 i) { - enlargeStringInfo(buf, sizeof(i)); + enlargeStringInfo(buf, sizeof(int64)); pq_writeint64(buf, i); } @@ -169,7 +169,7 @@ pq_sendbyte(StringInfo buf, int8 byt) /* * Append a binary integer to a StringInfo buffer * - * This function is deprecated. + * This function is deprecated; prefer use of the functions above. */ static inline void pq_sendint(StringInfo buf, int i, int b) From 73937119bfd07a140da4817f5ca949351942ffdc Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 13 Oct 2017 13:43:55 -0400 Subject: [PATCH 0387/1087] Improve implementation of CRE-stack-flattening in map_variable_attnos(). I (tgl) objected to the obscure implementation introduced in commit 1c497fa72. This one seems a bit less action-at-a-distance-y, at the price of repeating a few lines of code. Improve the comments about what the function is doing, too. Amit Khandekar, whacked around a bit more by me Discussion: https://postgr.es/m/CAJ3gD9egYTyHUH0nTMxm8-1m3RvdqEbaTyGC-CUNtYf7tKNDaQ@mail.gmail.com --- src/backend/rewrite/rewriteManip.c | 101 ++++++++++++++++------------- 1 file changed, 55 insertions(+), 46 deletions(-) diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c index 9290c7f793..6579b2446d 100644 --- a/src/backend/rewrite/rewriteManip.c +++ b/src/backend/rewrite/rewriteManip.c @@ -1203,9 +1203,11 @@ replace_rte_variables_mutator(Node *node, * appear in the expression. * * If the expression tree contains a whole-row Var for the target RTE, - * *found_whole_row is returned as TRUE. In addition, if to_rowtype is - * not InvalidOid, we modify the Var's vartype and insert a ConvertRowTypeExpr - * to map back to the orignal rowtype. Callers that don't provide to_rowtype + * *found_whole_row is set to TRUE. In addition, if to_rowtype is + * not InvalidOid, we replace the Var with a Var of that vartype, inserting + * a ConvertRowTypeExpr to map back to the rowtype expected by the expression. + * (Therefore, to_rowtype had better be a child rowtype of the rowtype of the + * RTE we're changing references to.) Callers that don't provide to_rowtype * should report an error if *found_row_type is true; we don't do that here * because we don't know exactly what wording for the error message would * be most appropriate. The caller will be aware of the context. 
@@ -1221,10 +1223,8 @@ typedef struct int sublevels_up; /* (current) nesting depth */ const AttrNumber *attno_map; /* map array for user attnos */ int map_length; /* number of entries in attno_map[] */ - /* Target type when converting whole-row vars */ - Oid to_rowtype; + Oid to_rowtype; /* change whole-row Vars to this type */ bool *found_whole_row; /* output flag */ - bool coerced_var; /* var is under ConvertRowTypeExpr */ } map_variable_attnos_context; static Node * @@ -1244,7 +1244,8 @@ map_variable_attnos_mutator(Node *node, Var *newvar = (Var *) palloc(sizeof(Var)); int attno = var->varattno; - *newvar = *var; + *newvar = *var; /* initially copy all fields of the Var */ + if (attno > 0) { /* user-defined column, replace attno */ @@ -1259,39 +1260,29 @@ map_variable_attnos_mutator(Node *node, /* whole-row variable, warn caller */ *(context->found_whole_row) = true; - /* If the callers expects us to convert the same, do so. */ - if (OidIsValid(context->to_rowtype)) + /* If the caller expects us to convert the Var, do so. */ + if (OidIsValid(context->to_rowtype) && + context->to_rowtype != var->vartype) { - /* No support for RECORDOID. */ + ConvertRowtypeExpr *r; + + /* This certainly won't work for a RECORD variable. */ Assert(var->vartype != RECORDOID); - /* Don't convert unless necessary. */ - if (context->to_rowtype != var->vartype) - { - /* Var itself is converted to the requested type. */ - newvar->vartype = context->to_rowtype; - - /* - * If this var is already under a ConvertRowtypeExpr, - * we don't have to add another one. - */ - if (!context->coerced_var) - { - ConvertRowtypeExpr *r; - - /* - * And a conversion node on top to convert back to - * the original type. - */ - r = makeNode(ConvertRowtypeExpr); - r->arg = (Expr *) newvar; - r->resulttype = var->vartype; - r->convertformat = COERCE_IMPLICIT_CAST; - r->location = -1; - - return (Node *) r; - } - } + /* Var itself is changed to the requested type. */ + newvar->vartype = context->to_rowtype; + + /* + * Add a conversion node on top to convert back to the + * original type expected by the expression. + */ + r = makeNode(ConvertRowtypeExpr); + r->arg = (Expr *) newvar; + r->resulttype = var->vartype; + r->convertformat = COERCE_IMPLICIT_CAST; + r->location = -1; + + return (Node *) r; } } return (Node *) newvar; @@ -1301,24 +1292,43 @@ map_variable_attnos_mutator(Node *node, else if (IsA(node, ConvertRowtypeExpr)) { ConvertRowtypeExpr *r = (ConvertRowtypeExpr *) node; + Var *var = (Var *) r->arg; /* - * If this is coercing a var (which is typical), convert only the var, - * as against adding another ConvertRowtypeExpr over it. + * If this is coercing a whole-row Var that we need to convert, then + * just convert the Var without adding an extra ConvertRowtypeExpr. + * Effectively we're simplifying var::parenttype::grandparenttype into + * just var::grandparenttype. This avoids building stacks of CREs if + * this function is applied repeatedly. */ - if (IsA(r->arg, Var)) + if (IsA(var, Var) && + var->varno == context->target_varno && + var->varlevelsup == context->sublevels_up && + var->varattno == 0 && + OidIsValid(context->to_rowtype) && + context->to_rowtype != var->vartype) { ConvertRowtypeExpr *newnode; + Var *newvar = (Var *) palloc(sizeof(Var)); + + /* whole-row variable, warn caller */ + *(context->found_whole_row) = true; + + *newvar = *var; /* initially copy all fields of the Var */ + + /* This certainly won't work for a RECORD variable. 
*/ + Assert(var->vartype != RECORDOID); + + /* Var itself is changed to the requested type. */ + newvar->vartype = context->to_rowtype; newnode = (ConvertRowtypeExpr *) palloc(sizeof(ConvertRowtypeExpr)); - *newnode = *r; - context->coerced_var = true; - newnode->arg = (Expr *) map_variable_attnos_mutator((Node *) r->arg, context); - context->coerced_var = false; + *newnode = *r; /* initially copy all fields of the CRE */ + newnode->arg = (Expr *) newvar; return (Node *) newnode; } - /* Else fall through the expression tree mutator */ + /* otherwise fall through to process the expression normally */ } else if (IsA(node, Query)) { @@ -1351,7 +1361,6 @@ map_variable_attnos(Node *node, context.map_length = map_length; context.to_rowtype = to_rowtype; context.found_whole_row = found_whole_row; - context.coerced_var = false; *found_whole_row = false; From 6393613b6a1e0feae3d22af608397b252cee5b58 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 13 Oct 2017 14:53:28 -0400 Subject: [PATCH 0388/1087] Fix possible crash with Parallel Bitmap Heap Scan. If a Parallel Bitmap Heap scan's chain of leftmost descendents includes a BitmapOr whose first child is a BitmapAnd, the prior coding would mistakenly create a non-shared TIDBitmap and then try to perform shared iteration. Report by Tomas Vondra. Patch by Dilip Kumar. Discussion: http://postgr.es/m/50e89684-8ad9-dead-8767-c9545bafd3b6@2ndquadrant.com --- src/backend/optimizer/plan/createplan.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index 792ea84a81..c802d61c39 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -4933,7 +4933,11 @@ bitmap_subplan_mark_shared(Plan *plan) bitmap_subplan_mark_shared( linitial(((BitmapAnd *) plan)->bitmapplans)); else if (IsA(plan, BitmapOr)) + { ((BitmapOr *) plan)->isshared = true; + bitmap_subplan_mark_shared( + linitial(((BitmapOr *) plan)->bitmapplans)); + } else if (IsA(plan, BitmapIndexScan)) ((BitmapIndexScan *) plan)->isshared = true; else From d133982d598c7e6208d16cb4fc0b552151796603 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 13 Oct 2017 11:54:59 -0700 Subject: [PATCH 0389/1087] Force "restrict" not to be used when compiling with xlc. Per buildfarm animal Hornet and followup manual testing by Noah Misch, it appears xlc miscompiles code using "restrict" in at least some cases. Allow disabling restrict usage with FORCE_DISABLE_RESTRICT=yes in template files, and do so for aix/xlc. Author: Andres Freund and Tom Lane Discussion: https://postgr.es/m/1820.1507918762@sss.pgh.pa.us --- configure | 6 +++++- configure.in | 6 +++++- src/template/aix | 4 ++++ 3 files changed, 14 insertions(+), 2 deletions(-) diff --git a/configure b/configure index cdcb3ceb0c..4ecd2e1922 100755 --- a/configure +++ b/configure @@ -12466,6 +12466,10 @@ fi # spelling it understands, because it conflicts with # __declspec(restrict). Therefore we define pg_restrict to the # appropriate definition, which presumably won't conflict. +# +# Allow platforms with buggy compilers to force restrict to not be +# used by setting $FORCE_DISABLE_RESTRICT=yes in the relevant +# template. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for C/C++ restrict keyword" >&5 $as_echo_n "checking for C/C++ restrict keyword... 
" >&6; } if ${ac_cv_c_restrict+:} false; then : @@ -12512,7 +12516,7 @@ _ACEOF ;; esac -if test "$ac_cv_c_restrict" = "no" ; then +if test "$ac_cv_c_restrict" = "no" -o "x$FORCE_DISABLE_RESTRICT" = "xyes"; then pg_restrict="" else pg_restrict="$ac_cv_c_restrict" diff --git a/configure.in b/configure.in index 32bb7bf940..cea7fd0755 100644 --- a/configure.in +++ b/configure.in @@ -1329,8 +1329,12 @@ PGAC_TYPE_LOCALE_T # spelling it understands, because it conflicts with # __declspec(restrict). Therefore we define pg_restrict to the # appropriate definition, which presumably won't conflict. +# +# Allow platforms with buggy compilers to force restrict to not be +# used by setting $FORCE_DISABLE_RESTRICT=yes in the relevant +# template. AC_C_RESTRICT -if test "$ac_cv_c_restrict" = "no" ; then +if test "$ac_cv_c_restrict" = "no" -o "x$FORCE_DISABLE_RESTRICT" = "xyes"; then pg_restrict="" else pg_restrict="$ac_cv_c_restrict" diff --git a/src/template/aix b/src/template/aix index b566ff129d..ed832849da 100644 --- a/src/template/aix +++ b/src/template/aix @@ -10,6 +10,10 @@ if test "$GCC" != yes ; then CFLAGS="-O2 -qmaxmem=16384 -qsrcmsg" ;; esac + + # Due to a compiler bug, see 20171013023536.GA492146@rfd.leadboat.com for details, + # force restrict not to be used when compiling with xlc. + FORCE_DISABLE_RESTRICT=yes fi # Native memset() is faster, tested on: From a0247e7a11bb9f5fd55694b594a3906b7bd05881 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 13 Oct 2017 11:44:51 -0700 Subject: [PATCH 0390/1087] Add pg_noinline macro to c.h. Forcing a function not to be inlined can be useful if it's the slow-path of a performance critical function, or should be visible in profiles to allow for proper cost attribution. Author: Andres Freund Discussion: https://postgr.es/m/20170914061207.zxotvyopetm7lrrp@alap3.anarazel.de --- src/include/c.h | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/src/include/c.h b/src/include/c.h index b6a969787a..b39bbd7c71 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -642,6 +642,22 @@ typedef NameData *Name; #define pg_attribute_noreturn() #endif + +/* + * Forcing a function not to be inlined can be useful if it's the slow-path of + * a performance critical function, or should be visible in profiles to allow + * for proper cost attribution. + */ +/* GCC, Sunpro and XLC support noinline via __attribute */ +#if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__) +#define pg_noinline __attribute__((noinline)) +/* msvc via declspec */ +#elif defined(_MSC_VER) +#define pg_noinline __declspec(noinline) +#else +#define pg_noinline +#endif + /* ---------------------------------------------------------------- * Section 6: assertions * ---------------------------------------------------------------- From 141fd1b66ce6e3d10518d66d4008bd368f1505fd Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 13 Oct 2017 13:16:50 -0700 Subject: [PATCH 0391/1087] Improve sys/catcache performance. The following are the individual improvements: 1) Avoidance of FunctionCallInfo based function calls, replaced by more efficient functions with a native C argument interface. 2) Don't extract columns from a cache entry's tuple whenever matching entries - instead store them as a Datum array. This also allows to get rid of having to build dummy tuples for negative & list entries, and of a hack for dealing with cstring vs. text weirdness. 3) Reorder members of catcache.h struct, so imortant entries are more likely to be on one cacheline. 
4) Allowing the compiler to specialize critical SearchCatCache for a
   specific number of attributes makes it possible to unroll loops and
   avoid other nkeys-dependent initialization.
5) Only initializing the ScanKey when necessary, i.e. on catcache
   misses, greatly reduces unnecessary CPU cache misses.
6) Splitting the cache-miss case off from the hash lookup, reducing
   stack allocations etc. in the common case.
7) CatCTup entries and their corresponding heap tuples are allocated
   in one piece.

This results in making cache lookups themselves roughly three times as
fast - full-system benchmarks obviously improve less than that.

I've also evaluated further techniques:
- replace the open-coded hash with simplehash - the list walk right now
  shows up in profiles. Unfortunately it's not easy to do so safely as
  an entry's memory location can change at various times, which doesn't
  work well with the refcounting and cache invalidation.
- Cacheline-aligning CatCTup entries - helps some with performance, but
  the win isn't big and the code for it is ugly, because the tuples have
  to be freed as well.
- add more proper functions, rather than macros for SearchSysCacheCopyN
  etc., but right now they don't show up in profiles.

The reason the macro wrappers for syscache.c/h have to be changed,
rather than just catcache, is that doing otherwise would require
exposing the SysCache array to the outside. That might be a good idea
anyway, but it's for another day.

Author: Andres Freund
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20170914061207.zxotvyopetm7lrrp@alap3.anarazel.de
---
 src/backend/utils/cache/catcache.c | 692 ++++++++++++++++++++---------
 src/backend/utils/cache/syscache.c |  49 +-
 src/include/utils/catcache.h       | 122 +++--
 src/include/utils/syscache.h       |  23 +-
 src/tools/pgindent/typedefs.list   |   2 +
 5 files changed, 608 insertions(+), 280 deletions(-)

diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index e092801025..95a07422b3 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -30,7 +30,9 @@
 #endif
 #include "storage/lmgr.h"
 #include "utils/builtins.h"
+#include "utils/datum.h"
 #include "utils/fmgroids.h"
+#include "utils/hashutils.h"
 #include "utils/inval.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
@@ -72,11 +74,25 @@
 /* Cache management header --- pointer is NULL until created */
 static CatCacheHeader *CacheHdr = NULL;

+static inline HeapTuple SearchCatCacheInternal(CatCache *cache,
+                       int nkeys,
+                       Datum v1, Datum v2,
+                       Datum v3, Datum v4);
+
+static pg_noinline HeapTuple SearchCatCacheMiss(CatCache *cache,
+                   int nkeys,
+                   uint32 hashValue,
+                   Index hashIndex,
+                   Datum v1, Datum v2,
+                   Datum v3, Datum v4);

 static uint32 CatalogCacheComputeHashValue(CatCache *cache, int nkeys,
-                         ScanKey cur_skey);
-static uint32 CatalogCacheComputeTupleHashValue(CatCache *cache,
+                         Datum v1, Datum v2, Datum v3, Datum v4);
+static uint32 CatalogCacheComputeTupleHashValue(CatCache *cache, int nkeys,
                           HeapTuple tuple);
+static inline bool CatalogCacheCompareTuple(const CatCache *cache, int nkeys,
+                     const Datum *cachekeys,
+                     const Datum *searchkeys);

 #ifdef CATCACHE_STATS
 static void CatCachePrintStats(int code, Datum arg);
@@ -85,9 +101,14 @@ static void CatCacheRemoveCTup(CatCache *cache, CatCTup *ct);
 static void CatCacheRemoveCList(CatCache *cache, CatCList *cl);
 static void CatalogCacheInitializeCache(CatCache *cache);
 static CatCTup *CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp,
+                        Datum *arguments,
                         uint32 hashValue, Index hashIndex,
                         bool negative);
-static HeapTuple build_dummy_tuple(CatCache *cache, int nkeys, ScanKey skeys); + +static void CatCacheFreeKeys(TupleDesc tupdesc, int nkeys, int *attnos, + Datum *keys); +static void CatCacheCopyKeys(TupleDesc tupdesc, int nkeys, int *attnos, + Datum *srckeys, Datum *dstkeys); /* @@ -95,45 +116,126 @@ static HeapTuple build_dummy_tuple(CatCache *cache, int nkeys, ScanKey skeys); */ /* - * Look up the hash and equality functions for system types that are used - * as cache key fields. - * - * XXX this should be replaced by catalog lookups, - * but that seems to pose considerable risk of circularity... + * Hash and equality functions for system types that are used as cache key + * fields. In some cases, we just call the regular SQL-callable functions for + * the appropriate data type, but that tends to be a little slow, and the + * speed of these functions is performance-critical. Therefore, for data + * types that frequently occur as catcache keys, we hard-code the logic here. + * Avoiding the overhead of DirectFunctionCallN(...) is a substantial win, and + * in certain cases (like int4) we can adopt a faster hash algorithm as well. */ + +static bool +chareqfast(Datum a, Datum b) +{ + return DatumGetChar(a) == DatumGetChar(b); +} + +static uint32 +charhashfast(Datum datum) +{ + return murmurhash32((int32) DatumGetChar(datum)); +} + +static bool +nameeqfast(Datum a, Datum b) +{ + char *ca = NameStr(*DatumGetName(a)); + char *cb = NameStr(*DatumGetName(b)); + + return strncmp(ca, cb, NAMEDATALEN) == 0; +} + +static uint32 +namehashfast(Datum datum) +{ + char *key = NameStr(*DatumGetName(datum)); + + return hash_any((unsigned char *) key, strlen(key)); +} + +static bool +int2eqfast(Datum a, Datum b) +{ + return DatumGetInt16(a) == DatumGetInt16(b); +} + +static uint32 +int2hashfast(Datum datum) +{ + return murmurhash32((int32) DatumGetInt16(datum)); +} + +static bool +int4eqfast(Datum a, Datum b) +{ + return DatumGetInt32(a) == DatumGetInt32(b); +} + +static uint32 +int4hashfast(Datum datum) +{ + return murmurhash32((int32) DatumGetInt32(datum)); +} + +static bool +texteqfast(Datum a, Datum b) +{ + return DatumGetBool(DirectFunctionCall2(texteq, a, b)); +} + +static uint32 +texthashfast(Datum datum) +{ + return DatumGetInt32(DirectFunctionCall1(hashtext, datum)); +} + +static bool +oidvectoreqfast(Datum a, Datum b) +{ + return DatumGetBool(DirectFunctionCall2(oidvectoreq, a, b)); +} + +static uint32 +oidvectorhashfast(Datum datum) +{ + return DatumGetInt32(DirectFunctionCall1(hashoidvector, datum)); +} + +/* Lookup support functions for a type. 
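+ * (Chooses a raw-Datum hash function and fast equality function for
+ * cache lookups, plus the OID of the SQL-callable equality function
+ * used when the underlying catalog has to be scanned.)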
*/ static void -GetCCHashEqFuncs(Oid keytype, PGFunction *hashfunc, RegProcedure *eqfunc) +GetCCHashEqFuncs(Oid keytype, CCHashFN *hashfunc, RegProcedure *eqfunc, CCFastEqualFN *fasteqfunc) { switch (keytype) { case BOOLOID: - *hashfunc = hashchar; - + *hashfunc = charhashfast; + *fasteqfunc = chareqfast; *eqfunc = F_BOOLEQ; break; case CHAROID: - *hashfunc = hashchar; - + *hashfunc = charhashfast; + *fasteqfunc = chareqfast; *eqfunc = F_CHAREQ; break; case NAMEOID: - *hashfunc = hashname; - + *hashfunc = namehashfast; + *fasteqfunc = nameeqfast; *eqfunc = F_NAMEEQ; break; case INT2OID: - *hashfunc = hashint2; - + *hashfunc = int2hashfast; + *fasteqfunc = int2eqfast; *eqfunc = F_INT2EQ; break; case INT4OID: - *hashfunc = hashint4; - + *hashfunc = int4hashfast; + *fasteqfunc = int4eqfast; *eqfunc = F_INT4EQ; break; case TEXTOID: - *hashfunc = hashtext; - + *hashfunc = texthashfast; + *fasteqfunc = texteqfast; *eqfunc = F_TEXTEQ; break; case OIDOID: @@ -147,13 +249,13 @@ GetCCHashEqFuncs(Oid keytype, PGFunction *hashfunc, RegProcedure *eqfunc) case REGDICTIONARYOID: case REGROLEOID: case REGNAMESPACEOID: - *hashfunc = hashoid; - + *hashfunc = int4hashfast; + *fasteqfunc = int4eqfast; *eqfunc = F_OIDEQ; break; case OIDVECTOROID: - *hashfunc = hashoidvector; - + *hashfunc = oidvectorhashfast; + *fasteqfunc = oidvectoreqfast; *eqfunc = F_OIDVECTOREQ; break; default: @@ -171,10 +273,12 @@ GetCCHashEqFuncs(Oid keytype, PGFunction *hashfunc, RegProcedure *eqfunc) * Compute the hash value associated with a given set of lookup keys */ static uint32 -CatalogCacheComputeHashValue(CatCache *cache, int nkeys, ScanKey cur_skey) +CatalogCacheComputeHashValue(CatCache *cache, int nkeys, + Datum v1, Datum v2, Datum v3, Datum v4) { uint32 hashValue = 0; uint32 oneHash; + CCHashFN *cc_hashfunc = cache->cc_hashfunc; CACHE4_elog(DEBUG2, "CatalogCacheComputeHashValue %s %d %p", cache->cc_relname, @@ -184,30 +288,26 @@ CatalogCacheComputeHashValue(CatCache *cache, int nkeys, ScanKey cur_skey) switch (nkeys) { case 4: - oneHash = - DatumGetUInt32(DirectFunctionCall1(cache->cc_hashfunc[3], - cur_skey[3].sk_argument)); + oneHash = (cc_hashfunc[3]) (v4); + hashValue ^= oneHash << 24; hashValue ^= oneHash >> 8; /* FALLTHROUGH */ case 3: - oneHash = - DatumGetUInt32(DirectFunctionCall1(cache->cc_hashfunc[2], - cur_skey[2].sk_argument)); + oneHash = (cc_hashfunc[2]) (v3); + hashValue ^= oneHash << 16; hashValue ^= oneHash >> 16; /* FALLTHROUGH */ case 2: - oneHash = - DatumGetUInt32(DirectFunctionCall1(cache->cc_hashfunc[1], - cur_skey[1].sk_argument)); + oneHash = (cc_hashfunc[1]) (v2); + hashValue ^= oneHash << 8; hashValue ^= oneHash >> 24; /* FALLTHROUGH */ case 1: - oneHash = - DatumGetUInt32(DirectFunctionCall1(cache->cc_hashfunc[0], - cur_skey[0].sk_argument)); + oneHash = (cc_hashfunc[0]) (v1); + hashValue ^= oneHash; break; default: @@ -224,63 +324,82 @@ CatalogCacheComputeHashValue(CatCache *cache, int nkeys, ScanKey cur_skey) * Compute the hash value associated with a given tuple to be cached */ static uint32 -CatalogCacheComputeTupleHashValue(CatCache *cache, HeapTuple tuple) +CatalogCacheComputeTupleHashValue(CatCache *cache, int nkeys, HeapTuple tuple) { - ScanKeyData cur_skey[CATCACHE_MAXKEYS]; + Datum v1 = 0, + v2 = 0, + v3 = 0, + v4 = 0; bool isNull = false; - - /* Copy pre-initialized overhead data for scankey */ - memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey)); + int *cc_keyno = cache->cc_keyno; + TupleDesc cc_tupdesc = cache->cc_tupdesc; /* Now extract key fields from tuple, insert into scankey 
*/ - switch (cache->cc_nkeys) + switch (nkeys) { case 4: - cur_skey[3].sk_argument = - (cache->cc_key[3] == ObjectIdAttributeNumber) + v4 = (cc_keyno[3] == ObjectIdAttributeNumber) ? ObjectIdGetDatum(HeapTupleGetOid(tuple)) : fastgetattr(tuple, - cache->cc_key[3], - cache->cc_tupdesc, + cc_keyno[3], + cc_tupdesc, &isNull); Assert(!isNull); /* FALLTHROUGH */ case 3: - cur_skey[2].sk_argument = - (cache->cc_key[2] == ObjectIdAttributeNumber) + v3 = (cc_keyno[2] == ObjectIdAttributeNumber) ? ObjectIdGetDatum(HeapTupleGetOid(tuple)) : fastgetattr(tuple, - cache->cc_key[2], - cache->cc_tupdesc, + cc_keyno[2], + cc_tupdesc, &isNull); Assert(!isNull); /* FALLTHROUGH */ case 2: - cur_skey[1].sk_argument = - (cache->cc_key[1] == ObjectIdAttributeNumber) + v2 = (cc_keyno[1] == ObjectIdAttributeNumber) ? ObjectIdGetDatum(HeapTupleGetOid(tuple)) : fastgetattr(tuple, - cache->cc_key[1], - cache->cc_tupdesc, + cc_keyno[1], + cc_tupdesc, &isNull); Assert(!isNull); /* FALLTHROUGH */ case 1: - cur_skey[0].sk_argument = - (cache->cc_key[0] == ObjectIdAttributeNumber) + v1 = (cc_keyno[0] == ObjectIdAttributeNumber) ? ObjectIdGetDatum(HeapTupleGetOid(tuple)) : fastgetattr(tuple, - cache->cc_key[0], - cache->cc_tupdesc, + cc_keyno[0], + cc_tupdesc, &isNull); Assert(!isNull); break; default: - elog(FATAL, "wrong number of hash keys: %d", cache->cc_nkeys); + elog(FATAL, "wrong number of hash keys: %d", nkeys); break; } - return CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cur_skey); + return CatalogCacheComputeHashValue(cache, nkeys, v1, v2, v3, v4); +} + +/* + * CatalogCacheCompareTuple + * + * Compare a tuple to the passed arguments. + */ +static inline bool +CatalogCacheCompareTuple(const CatCache *cache, int nkeys, + const Datum *cachekeys, + const Datum *searchkeys) +{ + const CCFastEqualFN *cc_fastequal = cache->cc_fastequal; + int i; + + for (i = 0; i < nkeys; i++) + { + if (!(cc_fastequal[i]) (cachekeys[i], searchkeys[i])) + return false; + } + return true; } @@ -371,9 +490,14 @@ CatCacheRemoveCTup(CatCache *cache, CatCTup *ct) /* delink from linked list */ dlist_delete(&ct->cache_elem); - /* free associated tuple data */ - if (ct->tuple.t_data != NULL) - pfree(ct->tuple.t_data); + /* + * Free keys when we're dealing with a negative entry, normal entries just + * point into tuple, allocated together with the CatCTup. 
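+ * (CatalogCacheCreateEntry sets this up: a positive entry's tuple is
+ * allocated in the same palloc chunk as its CatCTup, while a negative
+ * entry's by-reference keys are copied by CatCacheCopyKeys into
+ * separate memory.)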
+ */ + if (ct->negative) + CatCacheFreeKeys(cache->cc_tupdesc, cache->cc_nkeys, + cache->cc_keyno, ct->keys); + pfree(ct); --cache->cc_ntup; @@ -414,9 +538,10 @@ CatCacheRemoveCList(CatCache *cache, CatCList *cl) /* delink from linked list */ dlist_delete(&cl->cache_elem); - /* free associated tuple data */ - if (cl->tuple.t_data != NULL) - pfree(cl->tuple.t_data); + /* free associated column data */ + CatCacheFreeKeys(cache->cc_tupdesc, cl->nkeys, + cache->cc_keyno, cl->keys); + pfree(cl); } @@ -660,6 +785,7 @@ InitCatCache(int id, { CatCache *cp; MemoryContext oldcxt; + size_t sz; int i; /* @@ -699,11 +825,12 @@ InitCatCache(int id, } /* - * allocate a new cache structure + * Allocate a new cache structure, aligning to a cacheline boundary * * Note: we rely on zeroing to initialize all the dlist headers correctly */ - cp = (CatCache *) palloc0(sizeof(CatCache)); + sz = sizeof(CatCache) + PG_CACHE_LINE_SIZE; + cp = (CatCache *) CACHELINEALIGN(palloc0(sz)); cp->cc_bucket = palloc0(nbuckets * sizeof(dlist_head)); /* @@ -721,7 +848,7 @@ InitCatCache(int id, cp->cc_nbuckets = nbuckets; cp->cc_nkeys = nkeys; for (i = 0; i < nkeys; ++i) - cp->cc_key[i] = key[i]; + cp->cc_keyno[i] = key[i]; /* * new cache is initialized as far as we can go for now. print some @@ -794,13 +921,13 @@ RehashCatCache(CatCache *cp) #define CatalogCacheInitializeCache_DEBUG2 \ do { \ - if (cache->cc_key[i] > 0) { \ + if (cache->cc_keyno[i] > 0) { \ elog(DEBUG2, "CatalogCacheInitializeCache: load %d/%d w/%d, %u", \ - i+1, cache->cc_nkeys, cache->cc_key[i], \ - TupleDescAttr(tupdesc, cache->cc_key[i] - 1)->atttypid); \ + i+1, cache->cc_nkeys, cache->cc_keyno[i], \ + TupleDescAttr(tupdesc, cache->cc_keyno[i] - 1)->atttypid); \ } else { \ elog(DEBUG2, "CatalogCacheInitializeCache: load %d/%d w/%d", \ - i+1, cache->cc_nkeys, cache->cc_key[i]); \ + i+1, cache->cc_nkeys, cache->cc_keyno[i]); \ } \ } while(0) #else @@ -860,10 +987,10 @@ CatalogCacheInitializeCache(CatCache *cache) CatalogCacheInitializeCache_DEBUG2; - if (cache->cc_key[i] > 0) + if (cache->cc_keyno[i] > 0) { Form_pg_attribute attr = TupleDescAttr(tupdesc, - cache->cc_key[i] - 1); + cache->cc_keyno[i] - 1); keytype = attr->atttypid; /* cache key columns should always be NOT NULL */ @@ -871,16 +998,15 @@ CatalogCacheInitializeCache(CatCache *cache) } else { - if (cache->cc_key[i] != ObjectIdAttributeNumber) + if (cache->cc_keyno[i] != ObjectIdAttributeNumber) elog(FATAL, "only sys attr supported in caches is OID"); keytype = OIDOID; } GetCCHashEqFuncs(keytype, &cache->cc_hashfunc[i], - &eqfunc); - - cache->cc_isname[i] = (keytype == NAMEOID); + &eqfunc, + &cache->cc_fastequal[i]); /* * Do equality-function lookup (we assume this won't need a catalog @@ -891,7 +1017,7 @@ CatalogCacheInitializeCache(CatCache *cache) CacheMemoryContext); /* Initialize sk_attno suitably for HeapKeyTest() and heap scans */ - cache->cc_skey[i].sk_attno = cache->cc_key[i]; + cache->cc_skey[i].sk_attno = cache->cc_keyno[i]; /* Fill in sk_strategy as well --- always standard equality */ cache->cc_skey[i].sk_strategy = BTEqualStrategyNumber; @@ -1020,7 +1146,7 @@ IndexScanOK(CatCache *cache, ScanKey cur_skey) } /* - * SearchCatCache + * SearchCatCacheInternal * * This call searches a system cache for a tuple, opening the relation * if necessary (on the first access to a particular cache). 
@@ -1042,42 +1168,90 @@ SearchCatCache(CatCache *cache, Datum v3, Datum v4) { - ScanKeyData cur_skey[CATCACHE_MAXKEYS]; + return SearchCatCacheInternal(cache, cache->cc_nkeys, v1, v2, v3, v4); +} + + +/* + * SearchCatCacheN() are SearchCatCache() versions for a specific number of + * arguments. The compiler can inline the body and unroll loops, making them a + * bit faster than SearchCatCache(). + */ + +HeapTuple +SearchCatCache1(CatCache *cache, + Datum v1) +{ + return SearchCatCacheInternal(cache, 1, v1, 0, 0, 0); +} + + +HeapTuple +SearchCatCache2(CatCache *cache, + Datum v1, Datum v2) +{ + return SearchCatCacheInternal(cache, 2, v1, v2, 0, 0); +} + + +HeapTuple +SearchCatCache3(CatCache *cache, + Datum v1, Datum v2, Datum v3) +{ + return SearchCatCacheInternal(cache, 3, v1, v2, v3, 0); +} + + +HeapTuple +SearchCatCache4(CatCache *cache, + Datum v1, Datum v2, Datum v3, Datum v4) +{ + return SearchCatCacheInternal(cache, 4, v1, v2, v3, v4); +} + +/* + * Work-horse for SearchCatCache/SearchCatCacheN. + */ +static inline HeapTuple +SearchCatCacheInternal(CatCache *cache, + int nkeys, + Datum v1, + Datum v2, + Datum v3, + Datum v4) +{ + Datum arguments[CATCACHE_MAXKEYS]; uint32 hashValue; Index hashIndex; dlist_iter iter; dlist_head *bucket; CatCTup *ct; - Relation relation; - SysScanDesc scandesc; - HeapTuple ntp; /* Make sure we're in an xact, even if this ends up being a cache hit */ Assert(IsTransactionState()); + Assert(cache->cc_nkeys == nkeys); + /* * one-time startup overhead for each cache */ - if (cache->cc_tupdesc == NULL) + if (unlikely(cache->cc_tupdesc == NULL)) CatalogCacheInitializeCache(cache); #ifdef CATCACHE_STATS cache->cc_searches++; #endif - /* - * initialize the search key information - */ - memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey)); - cur_skey[0].sk_argument = v1; - cur_skey[1].sk_argument = v2; - cur_skey[2].sk_argument = v3; - cur_skey[3].sk_argument = v4; + /* Initialize local parameter array */ + arguments[0] = v1; + arguments[1] = v2; + arguments[2] = v3; + arguments[3] = v4; /* * find the hash bucket in which to look for the tuple */ - hashValue = CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cur_skey); + hashValue = CatalogCacheComputeHashValue(cache, nkeys, v1, v2, v3, v4); hashIndex = HASH_INDEX(hashValue, cache->cc_nbuckets); /* @@ -1089,8 +1263,6 @@ SearchCatCache(CatCache *cache, bucket = &cache->cc_bucket[hashIndex]; dlist_foreach(iter, bucket) { - bool res; - ct = dlist_container(CatCTup, cache_elem, iter.cur); if (ct->dead) @@ -1099,15 +1271,7 @@ SearchCatCache(CatCache *cache, if (ct->hash_value != hashValue) continue; /* quickly skip entry if wrong hash val */ - /* - * see if the cached tuple matches our key. - */ - HeapKeyTest(&ct->tuple, - cache->cc_tupdesc, - cache->cc_nkeys, - cur_skey, - res); - if (!res) + if (!CatalogCacheCompareTuple(cache, nkeys, ct->keys, arguments)) continue; /* @@ -1150,6 +1314,49 @@ SearchCatCache(CatCache *cache, } } + return SearchCatCacheMiss(cache, nkeys, hashValue, hashIndex, v1, v2, v3, v4); +} + +/* + * Search the actual catalogs, rather than the cache. + * + * This is kept separate from SearchCatCacheInternal() to keep the fast-path + * as small as possible. To avoid that effort being undone by a helpful + * compiler, try to explicitly forbid inlining. 
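+ * (pg_noinline, added to c.h, expands to __attribute__((noinline)) on
+ * GCC, Sunpro and XLC, and to __declspec(noinline) under MSVC.)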
+ */ +static pg_noinline HeapTuple +SearchCatCacheMiss(CatCache *cache, + int nkeys, + uint32 hashValue, + Index hashIndex, + Datum v1, + Datum v2, + Datum v3, + Datum v4) +{ + ScanKeyData cur_skey[CATCACHE_MAXKEYS]; + Relation relation; + SysScanDesc scandesc; + HeapTuple ntp; + CatCTup *ct; + Datum arguments[CATCACHE_MAXKEYS]; + + /* Initialize local parameter array */ + arguments[0] = v1; + arguments[1] = v2; + arguments[2] = v3; + arguments[3] = v4; + + /* + * Ok, need to make a lookup in the relation, copy the scankey and fill + * out any per-call fields. + */ + memcpy(cur_skey, cache->cc_skey, sizeof(ScanKeyData) * nkeys); + cur_skey[0].sk_argument = v1; + cur_skey[1].sk_argument = v2; + cur_skey[2].sk_argument = v3; + cur_skey[3].sk_argument = v4; + /* * Tuple was not found in cache, so we have to try to retrieve it directly * from the relation. If found, we will add it to the cache; if not @@ -1171,14 +1378,14 @@ SearchCatCache(CatCache *cache, cache->cc_indexoid, IndexScanOK(cache, cur_skey), NULL, - cache->cc_nkeys, + nkeys, cur_skey); ct = NULL; while (HeapTupleIsValid(ntp = systable_getnext(scandesc))) { - ct = CatalogCacheCreateEntry(cache, ntp, + ct = CatalogCacheCreateEntry(cache, ntp, arguments, hashValue, hashIndex, false); /* immediately set the refcount to 1 */ @@ -1207,11 +1414,9 @@ SearchCatCache(CatCache *cache, if (IsBootstrapProcessingMode()) return NULL; - ntp = build_dummy_tuple(cache, cache->cc_nkeys, cur_skey); - ct = CatalogCacheCreateEntry(cache, ntp, + ct = CatalogCacheCreateEntry(cache, NULL, arguments, hashValue, hashIndex, true); - heap_freetuple(ntp); CACHE4_elog(DEBUG2, "SearchCatCache(%s): Contains %d/%d tuples", cache->cc_relname, cache->cc_ntup, CacheHdr->ch_ntup); @@ -1288,27 +1493,16 @@ GetCatCacheHashValue(CatCache *cache, Datum v3, Datum v4) { - ScanKeyData cur_skey[CATCACHE_MAXKEYS]; - /* * one-time startup overhead for each cache */ if (cache->cc_tupdesc == NULL) CatalogCacheInitializeCache(cache); - /* - * initialize the search key information - */ - memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey)); - cur_skey[0].sk_argument = v1; - cur_skey[1].sk_argument = v2; - cur_skey[2].sk_argument = v3; - cur_skey[3].sk_argument = v4; - /* * calculate the hash value */ - return CatalogCacheComputeHashValue(cache, cache->cc_nkeys, cur_skey); + return CatalogCacheComputeHashValue(cache, cache->cc_nkeys, v1, v2, v3, v4); } @@ -1329,7 +1523,7 @@ SearchCatCacheList(CatCache *cache, Datum v3, Datum v4) { - ScanKeyData cur_skey[CATCACHE_MAXKEYS]; + Datum arguments[CATCACHE_MAXKEYS]; uint32 lHashValue; dlist_iter iter; CatCList *cl; @@ -1354,21 +1548,18 @@ SearchCatCacheList(CatCache *cache, cache->cc_lsearches++; #endif - /* - * initialize the search key information - */ - memcpy(cur_skey, cache->cc_skey, sizeof(cur_skey)); - cur_skey[0].sk_argument = v1; - cur_skey[1].sk_argument = v2; - cur_skey[2].sk_argument = v3; - cur_skey[3].sk_argument = v4; + /* Initialize local parameter array */ + arguments[0] = v1; + arguments[1] = v2; + arguments[2] = v3; + arguments[3] = v4; /* * compute a hash value of the given keys for faster search. We don't * presently divide the CatCList items into buckets, but this still lets * us skip non-matching items quickly most of the time. 
*/ - lHashValue = CatalogCacheComputeHashValue(cache, nkeys, cur_skey); + lHashValue = CatalogCacheComputeHashValue(cache, nkeys, v1, v2, v3, v4); /* * scan the items until we find a match or exhaust our list @@ -1378,8 +1569,6 @@ SearchCatCacheList(CatCache *cache, */ dlist_foreach(iter, &cache->cc_lists) { - bool res; - cl = dlist_container(CatCList, cache_elem, iter.cur); if (cl->dead) @@ -1393,12 +1582,8 @@ SearchCatCacheList(CatCache *cache, */ if (cl->nkeys != nkeys) continue; - HeapKeyTest(&cl->tuple, - cache->cc_tupdesc, - nkeys, - cur_skey, - res); - if (!res) + + if (!CatalogCacheCompareTuple(cache, nkeys, cl->keys, arguments)) continue; /* @@ -1441,9 +1626,20 @@ SearchCatCacheList(CatCache *cache, PG_TRY(); { + ScanKeyData cur_skey[CATCACHE_MAXKEYS]; Relation relation; SysScanDesc scandesc; + /* + * Ok, need to make a lookup in the relation, copy the scankey and + * fill out any per-call fields. + */ + memcpy(cur_skey, cache->cc_skey, sizeof(ScanKeyData) * cache->cc_nkeys); + cur_skey[0].sk_argument = v1; + cur_skey[1].sk_argument = v2; + cur_skey[2].sk_argument = v3; + cur_skey[3].sk_argument = v4; + relation = heap_open(cache->cc_reloid, AccessShareLock); scandesc = systable_beginscan(relation, @@ -1467,7 +1663,7 @@ SearchCatCacheList(CatCache *cache, * See if there's an entry for this tuple already. */ ct = NULL; - hashValue = CatalogCacheComputeTupleHashValue(cache, ntp); + hashValue = CatalogCacheComputeTupleHashValue(cache, cache->cc_nkeys, ntp); hashIndex = HASH_INDEX(hashValue, cache->cc_nbuckets); bucket = &cache->cc_bucket[hashIndex]; @@ -1498,7 +1694,7 @@ SearchCatCacheList(CatCache *cache, if (!found) { /* We didn't find a usable entry, so make a new one */ - ct = CatalogCacheCreateEntry(cache, ntp, + ct = CatalogCacheCreateEntry(cache, ntp, arguments, hashValue, hashIndex, false); } @@ -1513,18 +1709,16 @@ SearchCatCacheList(CatCache *cache, heap_close(relation, AccessShareLock); - /* - * Now we can build the CatCList entry. First we need a dummy tuple - * containing the key values... - */ - ntp = build_dummy_tuple(cache, nkeys, cur_skey); + /* Now we can build the CatCList entry. */ oldcxt = MemoryContextSwitchTo(CacheMemoryContext); nmembers = list_length(ctlist); cl = (CatCList *) palloc(offsetof(CatCList, members) + nmembers * sizeof(CatCTup *)); - heap_copytuple_with_tuple(ntp, &cl->tuple); + + /* Extract key values */ + CatCacheCopyKeys(cache->cc_tupdesc, nkeys, cache->cc_keyno, + arguments, cl->keys); MemoryContextSwitchTo(oldcxt); - heap_freetuple(ntp); /* * We are now past the last thing that could trigger an elog before we @@ -1621,35 +1815,80 @@ ReleaseCatCacheList(CatCList *list) * supplied data into it. The new entry initially has refcount 0. */ static CatCTup * -CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp, - uint32 hashValue, Index hashIndex, bool negative) +CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp, Datum *arguments, + uint32 hashValue, Index hashIndex, + bool negative) { CatCTup *ct; HeapTuple dtp; MemoryContext oldcxt; - /* - * If there are any out-of-line toasted fields in the tuple, expand them - * in-line. This saves cycles during later use of the catcache entry, and - * also protects us against the possibility of the toast tuples being - * freed before we attempt to fetch them, in case of something using a - * slightly stale catcache entry. 
- */ - if (HeapTupleHasExternal(ntp)) - dtp = toast_flatten_tuple(ntp, cache->cc_tupdesc); - else - dtp = ntp; + /* negative entries have no tuple associated */ + if (ntp) + { + int i; - /* - * Allocate CatCTup header in cache memory, and copy the tuple there too. - */ - oldcxt = MemoryContextSwitchTo(CacheMemoryContext); - ct = (CatCTup *) palloc(sizeof(CatCTup)); - heap_copytuple_with_tuple(dtp, &ct->tuple); - MemoryContextSwitchTo(oldcxt); + Assert(!negative); + + /* + * If there are any out-of-line toasted fields in the tuple, expand + * them in-line. This saves cycles during later use of the catcache + * entry, and also protects us against the possibility of the toast + * tuples being freed before we attempt to fetch them, in case of + * something using a slightly stale catcache entry. + */ + if (HeapTupleHasExternal(ntp)) + dtp = toast_flatten_tuple(ntp, cache->cc_tupdesc); + else + dtp = ntp; + + /* Allocate memory for CatCTup and the cached tuple in one go */ + oldcxt = MemoryContextSwitchTo(CacheMemoryContext); - if (dtp != ntp) - heap_freetuple(dtp); + ct = (CatCTup *) palloc(sizeof(CatCTup) + + MAXIMUM_ALIGNOF + dtp->t_len); + ct->tuple.t_len = dtp->t_len; + ct->tuple.t_self = dtp->t_self; + ct->tuple.t_tableOid = dtp->t_tableOid; + ct->tuple.t_data = (HeapTupleHeader) + MAXALIGN(((char *) ct) + sizeof(CatCTup)); + /* copy tuple contents */ + memcpy((char *) ct->tuple.t_data, + (const char *) dtp->t_data, + dtp->t_len); + MemoryContextSwitchTo(oldcxt); + + if (dtp != ntp) + heap_freetuple(dtp); + + /* extract keys - they'll point into the tuple if not by-value */ + for (i = 0; i < cache->cc_nkeys; i++) + { + Datum atp; + bool isnull; + + atp = heap_getattr(&ct->tuple, + cache->cc_keyno[i], + cache->cc_tupdesc, + &isnull); + Assert(!isnull); + ct->keys[i] = atp; + } + } + else + { + Assert(negative); + oldcxt = MemoryContextSwitchTo(CacheMemoryContext); + ct = (CatCTup *) palloc(sizeof(CatCTup)); + + /* + * Store keys - they'll point into separately allocated memory if not + * by-value. + */ + CatCacheCopyKeys(cache->cc_tupdesc, cache->cc_nkeys, cache->cc_keyno, + arguments, ct->keys); + MemoryContextSwitchTo(oldcxt); + } /* * Finish initializing the CatCTup header, and add it to the cache's @@ -1679,71 +1918,80 @@ CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp, } /* - * build_dummy_tuple - * Generate a palloc'd HeapTuple that contains the specified key - * columns, and NULLs for other columns. - * - * This is used to store the keys for negative cache entries and CatCList - * entries, which don't have real tuples associated with them. + * Helper routine that frees keys stored in the keys array. 
*/ -static HeapTuple -build_dummy_tuple(CatCache *cache, int nkeys, ScanKey skeys) +static void +CatCacheFreeKeys(TupleDesc tupdesc, int nkeys, int *attnos, Datum *keys) { - HeapTuple ntp; - TupleDesc tupDesc = cache->cc_tupdesc; - Datum *values; - bool *nulls; - Oid tupOid = InvalidOid; - NameData tempNames[4]; int i; - values = (Datum *) palloc(tupDesc->natts * sizeof(Datum)); - nulls = (bool *) palloc(tupDesc->natts * sizeof(bool)); + for (i = 0; i < nkeys; i++) + { + int attnum = attnos[i]; + Form_pg_attribute att; + + /* only valid system attribute is the oid, which is by value */ + if (attnum == ObjectIdAttributeNumber) + continue; + Assert(attnum > 0); + + att = TupleDescAttr(tupdesc, attnum - 1); + + if (!att->attbyval) + pfree(DatumGetPointer(keys[i])); + } +} + +/* + * Helper routine that copies the keys in the srckeys array into the dstkeys + * one, guaranteeing that the datums are fully allocated in the current memory + * context. + */ +static void +CatCacheCopyKeys(TupleDesc tupdesc, int nkeys, int *attnos, + Datum *srckeys, Datum *dstkeys) +{ + int i; - memset(values, 0, tupDesc->natts * sizeof(Datum)); - memset(nulls, true, tupDesc->natts * sizeof(bool)); + /* + * XXX: memory and lookup performance could possibly be improved by + * storing all keys in one allocation. + */ for (i = 0; i < nkeys; i++) { - int attindex = cache->cc_key[i]; - Datum keyval = skeys[i].sk_argument; + int attnum = attnos[i]; - if (attindex > 0) + if (attnum == ObjectIdAttributeNumber) + { + dstkeys[i] = srckeys[i]; + } + else { + Form_pg_attribute att = TupleDescAttr(tupdesc, attnum - 1); + Datum src = srckeys[i]; + NameData srcname; + /* - * Here we must be careful in case the caller passed a C string - * where a NAME is wanted: convert the given argument to a - * correctly padded NAME. Otherwise the memcpy() done in - * heap_form_tuple could fall off the end of memory. + * Must be careful in case the caller passed a C string where a + * NAME is wanted: convert the given argument to a correctly + * padded NAME. Otherwise the memcpy() done by datumCopy() could + * fall off the end of memory. */ - if (cache->cc_isname[i]) + if (att->atttypid == NAMEOID) { - Name newval = &tempNames[i]; - - namestrcpy(newval, DatumGetCString(keyval)); - keyval = NameGetDatum(newval); + namestrcpy(&srcname, DatumGetCString(src)); + src = NameGetDatum(&srcname); } - values[attindex - 1] = keyval; - nulls[attindex - 1] = false; - } - else - { - Assert(attindex == ObjectIdAttributeNumber); - tupOid = DatumGetObjectId(keyval); + + dstkeys[i] = datumCopy(src, + att->attbyval, + att->attlen); } } - ntp = heap_form_tuple(tupDesc, values, nulls); - if (tupOid != InvalidOid) - HeapTupleSetOid(ntp, tupOid); - - pfree(values); - pfree(nulls); - - return ntp; } - /* * PrepareToInvalidateCacheTuple() * @@ -1820,7 +2068,7 @@ PrepareToInvalidateCacheTuple(Relation relation, if (ccp->cc_tupdesc == NULL) CatalogCacheInitializeCache(ccp); - hashvalue = CatalogCacheComputeTupleHashValue(ccp, tuple); + hashvalue = CatalogCacheComputeTupleHashValue(ccp, ccp->cc_nkeys, tuple); dbid = ccp->cc_relisshared ? 
(Oid) 0 : MyDatabaseId; (*function) (ccp->id, hashvalue, dbid); @@ -1829,7 +2077,7 @@ PrepareToInvalidateCacheTuple(Relation relation, { uint32 newhashvalue; - newhashvalue = CatalogCacheComputeTupleHashValue(ccp, newtuple); + newhashvalue = CatalogCacheComputeTupleHashValue(ccp, ccp->cc_nkeys, newtuple); if (newhashvalue != hashvalue) (*function) (ccp->id, newhashvalue, dbid); diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c index fcbb683a99..888edbb325 100644 --- a/src/backend/utils/cache/syscache.c +++ b/src/backend/utils/cache/syscache.c @@ -1102,13 +1102,56 @@ SearchSysCache(int cacheId, Datum key3, Datum key4) { - if (cacheId < 0 || cacheId >= SysCacheSize || - !PointerIsValid(SysCache[cacheId])) - elog(ERROR, "invalid cache ID: %d", cacheId); + Assert(cacheId >= 0 && cacheId < SysCacheSize && + PointerIsValid(SysCache[cacheId])); return SearchCatCache(SysCache[cacheId], key1, key2, key3, key4); } +HeapTuple +SearchSysCache1(int cacheId, + Datum key1) +{ + Assert(cacheId >= 0 && cacheId < SysCacheSize && + PointerIsValid(SysCache[cacheId])); + Assert(SysCache[cacheId]->cc_nkeys == 1); + + return SearchCatCache1(SysCache[cacheId], key1); +} + +HeapTuple +SearchSysCache2(int cacheId, + Datum key1, Datum key2) +{ + Assert(cacheId >= 0 && cacheId < SysCacheSize && + PointerIsValid(SysCache[cacheId])); + Assert(SysCache[cacheId]->cc_nkeys == 2); + + return SearchCatCache2(SysCache[cacheId], key1, key2); +} + +HeapTuple +SearchSysCache3(int cacheId, + Datum key1, Datum key2, Datum key3) +{ + Assert(cacheId >= 0 && cacheId < SysCacheSize && + PointerIsValid(SysCache[cacheId])); + Assert(SysCache[cacheId]->cc_nkeys == 3); + + return SearchCatCache3(SysCache[cacheId], key1, key2, key3); +} + +HeapTuple +SearchSysCache4(int cacheId, + Datum key1, Datum key2, Datum key3, Datum key4) +{ + Assert(cacheId >= 0 && cacheId < SysCacheSize && + PointerIsValid(SysCache[cacheId])); + Assert(SysCache[cacheId]->cc_nkeys == 4); + + return SearchCatCache4(SysCache[cacheId], key1, key2, key3, key4); +} + /* * ReleaseSysCache * Release previously grabbed reference count on a tuple diff --git a/src/include/utils/catcache.h b/src/include/utils/catcache.h index 200a3022e7..74535eb7c2 100644 --- a/src/include/utils/catcache.h +++ b/src/include/utils/catcache.h @@ -34,25 +34,33 @@ #define CATCACHE_MAXKEYS 4 + +/* function computing a datum's hash */ +typedef uint32 (*CCHashFN) (Datum datum); + +/* function computing equality of two datums */ +typedef bool (*CCFastEqualFN) (Datum a, Datum b); + typedef struct catcache { int id; /* cache identifier --- see syscache.h */ - slist_node cc_next; /* list link */ + int cc_nbuckets; /* # of hash buckets in this cache */ + TupleDesc cc_tupdesc; /* tuple descriptor (copied from reldesc) */ + dlist_head *cc_bucket; /* hash buckets */ + CCHashFN cc_hashfunc[CATCACHE_MAXKEYS]; /* hash function for each key */ + CCFastEqualFN cc_fastequal[CATCACHE_MAXKEYS]; /* fast equal function for + * each key */ + int cc_keyno[CATCACHE_MAXKEYS]; /* AttrNumber of each key */ + dlist_head cc_lists; /* list of CatCList structs */ + int cc_ntup; /* # of tuples currently in this cache */ + int cc_nkeys; /* # of keys (1..CATCACHE_MAXKEYS) */ const char *cc_relname; /* name of relation the tuples come from */ Oid cc_reloid; /* OID of relation the tuples come from */ Oid cc_indexoid; /* OID of index matching cache keys */ bool cc_relisshared; /* is relation shared across databases? 
*/ - TupleDesc cc_tupdesc; /* tuple descriptor (copied from reldesc) */ - int cc_ntup; /* # of tuples currently in this cache */ - int cc_nbuckets; /* # of hash buckets in this cache */ - int cc_nkeys; /* # of keys (1..CATCACHE_MAXKEYS) */ - int cc_key[CATCACHE_MAXKEYS]; /* AttrNumber of each key */ - PGFunction cc_hashfunc[CATCACHE_MAXKEYS]; /* hash function for each key */ + slist_node cc_next; /* list link */ ScanKeyData cc_skey[CATCACHE_MAXKEYS]; /* precomputed key info for heap * scans */ - bool cc_isname[CATCACHE_MAXKEYS]; /* flag "name" key columns */ - dlist_head cc_lists; /* list of CatCList structs */ - dlist_head *cc_bucket; /* hash buckets */ /* * Keep these at the end, so that compiling catcache.c with CATCACHE_STATS @@ -79,7 +87,14 @@ typedef struct catctup { int ct_magic; /* for identifying CatCTup entries */ #define CT_MAGIC 0x57261502 - CatCache *my_cache; /* link to owning catcache */ + + uint32 hash_value; /* hash value for this tuple's keys */ + + /* + * Lookup keys for the entry. By-reference datums point into the tuple for + * positive cache entries, and are separately allocated for negative ones. + */ + Datum keys[CATCACHE_MAXKEYS]; /* * Each tuple in a cache is a member of a dlist that stores the elements @@ -88,15 +103,6 @@ typedef struct catctup */ dlist_node cache_elem; /* list member of per-bucket list */ - /* - * The tuple may also be a member of at most one CatCList. (If a single - * catcache is list-searched with varying numbers of keys, we may have to - * make multiple entries for the same tuple because of this restriction. - * Currently, that's not expected to be common, so we accept the potential - * inefficiency.) - */ - struct catclist *c_list; /* containing CatCList, or NULL if none */ - /* * A tuple marked "dead" must not be returned by subsequent searches. * However, it won't be physically deleted from the cache until its @@ -112,46 +118,63 @@ typedef struct catctup int refcount; /* number of active references */ bool dead; /* dead but not yet removed? */ bool negative; /* negative cache entry? */ - uint32 hash_value; /* hash value for this tuple's keys */ HeapTupleData tuple; /* tuple management header */ + + /* + * The tuple may also be a member of at most one CatCList. (If a single + * catcache is list-searched with varying numbers of keys, we may have to + * make multiple entries for the same tuple because of this restriction. + * Currently, that's not expected to be common, so we accept the potential + * inefficiency.) + */ + struct catclist *c_list; /* containing CatCList, or NULL if none */ + + CatCache *my_cache; /* link to owning catcache */ + /* properly aligned tuple data follows, unless a negative entry */ } CatCTup; +/* + * A CatCList describes the result of a partial search, ie, a search using + * only the first K key columns of an N-key cache. We store the keys used + * into the keys attribute to represent the stored key set. The CatCList + * object contains links to cache entries for all the table rows satisfying + * the partial key. (Note: none of these will be negative cache entries.) + * + * A CatCList is only a member of a per-cache list; we do not currently + * divide them into hash buckets. + * + * A list marked "dead" must not be returned by subsequent searches. + * However, it won't be physically deleted from the cache until its + * refcount goes to zero. (A list should be marked dead if any of its + * member entries are dead.) 
+ *
+ * If "ordered" is true then the member tuples appear in the order of the
+ * cache's underlying index. This will be true in normal operation, but
+ * might not be true during bootstrap or recovery operations. (namespace.c
+ * is able to save some cycles when it is true.)
+ */
 typedef struct catclist
 {
 	int			cl_magic;		/* for identifying CatCList entries */
 #define CL_MAGIC   0x52765103

-	CatCache   *my_cache;		/* link to owning catcache */
+
+	uint32		hash_value;		/* hash value for lookup keys */
+
+	dlist_node	cache_elem;		/* list member of per-catcache list */

 	/*
-	 * A CatCList describes the result of a partial search, ie, a search using
-	 * only the first K key columns of an N-key cache. We form the keys used
-	 * into a tuple (with other attributes NULL) to represent the stored key
-	 * set. The CatCList object contains links to cache entries for all the
-	 * table rows satisfying the partial key. (Note: none of these will be
-	 * negative cache entries.)
-	 *
-	 * A CatCList is only a member of a per-cache list; we do not currently
-	 * divide them into hash buckets.
-	 *
-	 * A list marked "dead" must not be returned by subsequent searches.
-	 * However, it won't be physically deleted from the cache until its
-	 * refcount goes to zero. (A list should be marked dead if any of its
-	 * member entries are dead.)
-	 *
-	 * If "ordered" is true then the member tuples appear in the order of the
-	 * cache's underlying index. This will be true in normal operation, but
-	 * might not be true during bootstrap or recovery operations. (namespace.c
-	 * is able to save some cycles when it is true.)
+	 * Lookup keys for the entry, with the first nkeys elements being valid.
+	 * All by-reference datums are separately allocated.
 	 */
-	dlist_node	cache_elem;		/* list member of per-catcache list */
+	Datum		keys[CATCACHE_MAXKEYS];
+
 	int			refcount;		/* number of active references */
 	bool		dead;			/* dead but not yet removed? */
 	bool		ordered;		/* members listed in index order? */
 	short		nkeys;			/* number of lookup keys specified */
-	uint32		hash_value;		/* hash value for lookup keys */
-	HeapTupleData tuple;		/* header for tuple holding keys */
 	int			n_members;		/* number of member tuples */
+	CatCache   *my_cache;		/* link to owning catcache */
 	CatCTup    *members[FLEXIBLE_ARRAY_MEMBER]; /* members */
 } CatCList;

@@ -174,8 +197,15 @@ extern CatCache *InitCatCache(int id, Oid reloid, Oid indexoid,
 extern void InitCatCachePhase2(CatCache *cache, bool touch_index);

 extern HeapTuple SearchCatCache(CatCache *cache,
-			   Datum v1, Datum v2,
-			   Datum v3, Datum v4);
+			   Datum v1, Datum v2, Datum v3, Datum v4);
+extern HeapTuple SearchCatCache1(CatCache *cache,
+				 Datum v1);
+extern HeapTuple SearchCatCache2(CatCache *cache,
+				 Datum v1, Datum v2);
+extern HeapTuple SearchCatCache3(CatCache *cache,
+				 Datum v1, Datum v2, Datum v3);
+extern HeapTuple SearchCatCache4(CatCache *cache,
+				 Datum v1, Datum v2, Datum v3, Datum v4);

 extern void ReleaseCatCache(HeapTuple tuple);

 extern uint32 GetCatCacheHashValue(CatCache *cache,
diff --git a/src/include/utils/syscache.h b/src/include/utils/syscache.h
index 8a92ea27ac..8a0be41929 100644
--- a/src/include/utils/syscache.h
+++ b/src/include/utils/syscache.h
@@ -117,6 +117,20 @@ extern void InitCatalogCachePhase2(void);
 extern HeapTuple SearchSysCache(int cacheId,
 				Datum key1, Datum key2, Datum key3, Datum key4);
+
+/*
+ * The use of argument-specific numbers is encouraged. They're faster, and
+ * insulate the caller from changes in the maximum number of keys.
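+ *
+ * A typical lookup looks like this (an illustrative sketch of the usual
+ * caller idiom only; PROCOID, funcid, and the error text are not part of
+ * this patch):
+ *
+ *     tup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
+ *     if (!HeapTupleIsValid(tup))
+ *         elog(ERROR, "cache lookup failed for function %u", funcid);
+ *     ... use the tuple, then ...
+ *     ReleaseSysCache(tup);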
+ */ +extern HeapTuple SearchSysCache1(int cacheId, + Datum key1); +extern HeapTuple SearchSysCache2(int cacheId, + Datum key1, Datum key2); +extern HeapTuple SearchSysCache3(int cacheId, + Datum key1, Datum key2, Datum key3); +extern HeapTuple SearchSysCache4(int cacheId, + Datum key1, Datum key2, Datum key3, Datum key4); + extern void ReleaseSysCache(HeapTuple tuple); /* convenience routines */ @@ -156,15 +170,6 @@ extern bool RelationSupportsSysCache(Oid relid); * functions is encouraged, as it insulates the caller from changes in the * maximum number of keys. */ -#define SearchSysCache1(cacheId, key1) \ - SearchSysCache(cacheId, key1, 0, 0, 0) -#define SearchSysCache2(cacheId, key1, key2) \ - SearchSysCache(cacheId, key1, key2, 0, 0) -#define SearchSysCache3(cacheId, key1, key2, key3) \ - SearchSysCache(cacheId, key1, key2, key3, 0) -#define SearchSysCache4(cacheId, key1, key2, key3, key4) \ - SearchSysCache(cacheId, key1, key2, key3, key4) - #define SearchSysCacheCopy1(cacheId, key1) \ SearchSysCacheCopy(cacheId, key1, 0, 0, 0) #define SearchSysCacheCopy2(cacheId, key1, key2) \ diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index 8ce97da2ee..7f0ae978c1 100644 --- a/src/tools/pgindent/typedefs.list +++ b/src/tools/pgindent/typedefs.list @@ -302,6 +302,8 @@ CatCache CatCacheHeader CatalogId CatalogIndexState +CCHashFN +CCFastEqualFN ChangeVarNodes_context CheckPoint CheckPointStmt From b81eba6a650186dc35b6a1fb8bde320d9c29055d Mon Sep 17 00:00:00 2001 From: Joe Conway Date: Fri, 13 Oct 2017 16:06:41 -0700 Subject: [PATCH 0392/1087] Add missing options to pg_regress help() output A few command line options accepted by pg_regress were not being output by help(), including --help itself. Add that one, as well as --version and --bindir, and the corresponding short options for the first two. We could consider this for backpatching, but it did not seem worthwhile and no one else advocated for it, so apply only to master for now. Author: Joe Conway Reviewed-By: Tom Lane Discussion: https://postgr.es/m/dd519469-06d7-2662-83ef-c926f6c4f0f1%40joeconway.com --- src/test/regress/pg_regress.c | 58 +++++++++++++++++++---------------- 1 file changed, 31 insertions(+), 27 deletions(-) diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c index 7c628df4b4..0156b00bfb 100644 --- a/src/test/regress/pg_regress.c +++ b/src/test/regress/pg_regress.c @@ -2004,37 +2004,41 @@ help(void) printf(_("Usage:\n %s [OPTION]... 
[EXTRA-TEST]...\n"), progname); printf(_("\n")); printf(_("Options:\n")); - printf(_(" --config-auth=DATADIR update authentication settings for DATADIR\n")); - printf(_(" --create-role=ROLE create the specified role before testing\n")); - printf(_(" --dbname=DB use database DB (default \"regression\")\n")); - printf(_(" --debug turn on debug mode in programs that are run\n")); - printf(_(" --dlpath=DIR look for dynamic libraries in DIR\n")); - printf(_(" --encoding=ENCODING use ENCODING as the encoding\n")); - printf(_(" --inputdir=DIR take input files from DIR (default \".\")\n")); - printf(_(" --launcher=CMD use CMD as launcher of psql\n")); - printf(_(" --load-extension=EXT load the named extension before running the\n")); - printf(_(" tests; can appear multiple times\n")); - printf(_(" --load-language=LANG load the named language before running the\n")); - printf(_(" tests; can appear multiple times\n")); - printf(_(" --max-connections=N maximum number of concurrent connections\n")); - printf(_(" (default is 0, meaning unlimited)\n")); - printf(_(" --max-concurrent-tests=N maximum number of concurrent tests in schedule\n")); - printf(_(" (default is 0, meaning unlimited)\n")); - printf(_(" --outputdir=DIR place output files in DIR (default \".\")\n")); - printf(_(" --schedule=FILE use test ordering schedule from FILE\n")); - printf(_(" (can be used multiple times to concatenate)\n")); - printf(_(" --temp-instance=DIR create a temporary instance in DIR\n")); - printf(_(" --use-existing use an existing installation\n")); + printf(_(" --bindir=BINPATH use BINPATH for programs that are run;\n")); + printf(_(" if empty, use PATH from the environment\n")); + printf(_(" --config-auth=DATADIR update authentication settings for DATADIR\n")); + printf(_(" --create-role=ROLE create the specified role before testing\n")); + printf(_(" --dbname=DB use database DB (default \"regression\")\n")); + printf(_(" --debug turn on debug mode in programs that are run\n")); + printf(_(" --dlpath=DIR look for dynamic libraries in DIR\n")); + printf(_(" --encoding=ENCODING use ENCODING as the encoding\n")); + printf(_(" -h, --help show this help, then exit\n")); + printf(_(" --inputdir=DIR take input files from DIR (default \".\")\n")); + printf(_(" --launcher=CMD use CMD as launcher of psql\n")); + printf(_(" --load-extension=EXT load the named extension before running the\n")); + printf(_(" tests; can appear multiple times\n")); + printf(_(" --load-language=LANG load the named language before running the\n")); + printf(_(" tests; can appear multiple times\n")); + printf(_(" --max-connections=N maximum number of concurrent connections\n")); + printf(_(" (default is 0, meaning unlimited)\n")); + printf(_(" --max-concurrent-tests=N maximum number of concurrent tests in schedule\n")); + printf(_(" (default is 0, meaning unlimited)\n")); + printf(_(" --outputdir=DIR place output files in DIR (default \".\")\n")); + printf(_(" --schedule=FILE use test ordering schedule from FILE\n")); + printf(_(" (can be used multiple times to concatenate)\n")); + printf(_(" --temp-instance=DIR create a temporary instance in DIR\n")); + printf(_(" --use-existing use an existing installation\n")); + printf(_(" -V, --version output version information, then exit\n")); printf(_("\n")); printf(_("Options for \"temp-instance\" mode:\n")); - printf(_(" --no-locale use C locale\n")); - printf(_(" --port=PORT start postmaster on PORT\n")); - printf(_(" --temp-config=FILE append contents of FILE to temporary config\n")); + printf(_(" 
--no-locale use C locale\n"));
+	printf(_(" --port=PORT start postmaster on PORT\n"));
+	printf(_(" --temp-config=FILE append contents of FILE to temporary config\n"));
 	printf(_("\n"));
 	printf(_("Options for using an existing installation:\n"));
-	printf(_(" --host=HOST use postmaster running on HOST\n"));
-	printf(_(" --port=PORT use postmaster running at PORT\n"));
-	printf(_(" --user=USER connect as USER\n"));
+	printf(_(" --host=HOST use postmaster running on HOST\n"));
+	printf(_(" --port=PORT use postmaster running at PORT\n"));
+	printf(_(" --user=USER connect as USER\n"));
 	printf(_("\n"));
 	printf(_("The exit status is 0 if all tests passed, 1 if some tests failed, and 2\n"));
 	printf(_("if the tests could not be run for some reason.\n"));

From 5f340cb30ce2f0d9f272840b0d977b0a4b854f0b Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Sat, 14 Oct 2017 11:40:54 -0400
Subject: [PATCH 0393/1087] Reinstate genhtml --prefix option for non-vpath builds

In c3d9a66024a93e6d0380bdd1b18cb03a67216b72, the genhtml --prefix option
was removed to get slightly better behavior for vpath builds. genhtml
would then automatically pick a suitable prefix. However, for non-vpath
builds, this makes the coverage output dependent on the length of the
path where the source code happens to be, leading to confusingly
arbitrary results. So put the --prefix option back for non-vpath builds.
---
 src/Makefile.global.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index a4209df7c9..27ec54a417 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -898,7 +898,7 @@ GENHTML_TITLE = PostgreSQL $(VERSION)

 coverage-html-stamp: lcov_base.info lcov_test.info
 	rm -rf coverage
-	$(GENHTML) $(GENHTML_FLAGS) -o coverage --title='$(GENHTML_TITLE)' --num-spaces=4 $^
+	$(GENHTML) $(GENHTML_FLAGS) -o coverage --title='$(GENHTML_TITLE)' --num-spaces=4 $(if $(filter no,$(vpath_build)),--prefix='$(abs_top_srcdir)') $^
 	touch $@

 LCOV += --gcov-tool $(GCOV)

From 4de2d4fba38f4f7aff7f95401eb43a6cd05a6db4 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Sat, 14 Oct 2017 15:21:39 -0400
Subject: [PATCH 0394/1087] Explicitly track whether aggregate final functions modify transition state.

Up to now, there have been hard-wired assumptions that normal aggregates'
final functions never modify their transition states, while ordered-set
aggregates' final functions always do. This has always been a bit
limiting, and in particular it's getting in the way of improving the
built-in ordered-set aggregates to allow merging of transition states.
Therefore, let's introduce catalog and CREATE AGGREGATE infrastructure
that lets the finalfn's behavior be declared explicitly.

There are now three possibilities for the finalfn behavior: it's purely
read-only, it trashes the transition state irrecoverably, or it changes
the state in such a way that no more transfn calls are possible but the
state can still be passed to other, compatible finalfns. There are no
examples of this third case today, but we'll shortly make the built-in
OSAs act like that.

This change allows user-defined aggregates to explicitly disclaim support
for use as window functions, and/or to prevent transition state merging,
if their implementations cannot handle that. While it was previously
possible to handle the window case with a run-time error check, there was
no way to prevent transition state merging, which in retrospect is
something commit 804163bc2 should have provided for. But better late
than never.
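
For illustration, with this commit an aggregate whose final function
consumes its transition state could be declared along these lines (a
hypothetical sketch: the aggregate and support-function names are
invented, not part of this patch):

    CREATE AGGREGATE my_agg(float8) (
        SFUNC = my_agg_transfn,          -- accumulates input into the state
        STYPE = internal,
        FINALFUNC = my_agg_finalfn,      -- destroys the state while reading it
        FINALFUNC_MODIFY = READ_WRITE    -- disclaims window use and merging
    );

Declaring FINALFUNC_MODIFY = SHARABLE instead would still rule out use as
a window function, but would keep transition state merging possible.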
In passing, split out pg_aggregate.c's extern function declarations into a new header file pg_aggregate_fn.h, similarly to what we've done for some other catalog headers, so that pg_aggregate.h itself can be safe for frontend files to include. This lets pg_dump use the symbolic names for relevant constants. Discussion: https://postgr.es/m/4834.1507849699@sss.pgh.pa.us --- doc/src/sgml/catalogs.sgml | 20 + doc/src/sgml/ref/create_aggregate.sgml | 72 +++- doc/src/sgml/xaggr.sgml | 20 +- src/backend/catalog/pg_aggregate.c | 5 + src/backend/commands/aggregatecmds.c | 42 +++ src/backend/executor/nodeAgg.c | 65 ++-- src/backend/executor/nodeWindowAgg.c | 43 ++- src/bin/pg_dump/pg_dump.c | 107 +++++- src/bin/pg_dump/t/002_pg_dump.pl | 4 +- src/include/catalog/catversion.h | 2 +- src/include/catalog/pg_aggregate.h | 346 +++++++++--------- src/include/catalog/pg_aggregate_fn.h | 52 +++ .../regress/expected/create_aggregate.out | 15 +- src/test/regress/expected/opr_sanity.out | 2 + src/test/regress/sql/create_aggregate.sql | 9 +- src/test/regress/sql/opr_sanity.sql | 2 + 16 files changed, 555 insertions(+), 251 deletions(-) create mode 100644 src/include/catalog/pg_aggregate_fn.h diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index 9af77c1f5a..cfec2465d2 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -486,6 +486,26 @@ True to pass extra dummy arguments to aggmfinalfn + + aggfinalmodify + char + + Whether aggfinalfn modifies the + transition state value: + r if it is read-only, + s if the aggtransfn + cannot be applied after the aggfinalfn, or + w if it writes on the value + + + + aggmfinalmodify + char + + Like aggfinalmodify, but for + the aggmfinalfn + + aggsortop oid diff --git a/doc/src/sgml/ref/create_aggregate.sgml b/doc/src/sgml/ref/create_aggregate.sgml index c96e4faba7..4d9c8b0b70 100644 --- a/doc/src/sgml/ref/create_aggregate.sgml +++ b/doc/src/sgml/ref/create_aggregate.sgml @@ -27,6 +27,7 @@ CREATE AGGREGATE name ( [ state_data_size ] [ , FINALFUNC = ffunc ] [ , FINALFUNC_EXTRA ] + [ , FINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } ] [ , COMBINEFUNC = combinefunc ] [ , SERIALFUNC = serialfunc ] [ , DESERIALFUNC = deserialfunc ] @@ -37,6 +38,7 @@ CREATE AGGREGATE name ( [ mstate_data_size ] [ , MFINALFUNC = mffunc ] [ , MFINALFUNC_EXTRA ] + [ , MFINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } ] [ , MINITCOND = minitial_condition ] [ , SORTOP = sort_operator ] [ , PARALLEL = { SAFE | RESTRICTED | UNSAFE } ] @@ -49,6 +51,7 @@ CREATE AGGREGATE name ( [ [ state_data_size ] [ , FINALFUNC = ffunc ] [ , FINALFUNC_EXTRA ] + [ , FINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } ] [ , INITCOND = initial_condition ] [ , PARALLEL = { SAFE | RESTRICTED | UNSAFE } ] [ , HYPOTHETICAL ] @@ -63,6 +66,7 @@ CREATE AGGREGATE name ( [ , SSPACE = state_data_size ] [ , FINALFUNC = ffunc ] [ , FINALFUNC_EXTRA ] + [ , FINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } ] [ , COMBINEFUNC = combinefunc ] [ , SERIALFUNC = serialfunc ] [ , DESERIALFUNC = deserialfunc ] @@ -73,6 +77,7 @@ CREATE AGGREGATE name ( [ , MSSPACE = mstate_data_size ] [ , MFINALFUNC = mffunc ] [ , MFINALFUNC_EXTRA ] + [ , MFINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } ] [ , MINITCOND = minitial_condition ] [ , SORTOP = sort_operator ] ) @@ -197,7 +202,8 @@ CREATE AGGREGATE name ( as described in . 
This requires specifying the MSFUNC, MINVFUNC, and MSTYPE parameters, and optionally - the MSPACE, MFINALFUNC, MFINALFUNC_EXTRA, + the MSPACE, MFINALFUNC, + MFINALFUNC_EXTRA, MFINALFUNC_MODIFY, and MINITCOND parameters. Except for MINVFUNC, these parameters work like the corresponding simple-aggregate parameters without M; they define a separate implementation of the @@ -412,6 +418,21 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; + + FINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } + + + This option specifies whether the final function is a pure function + that does not modify its arguments. READ_ONLY indicates + it does not; the other two values indicate that it may change the + transition state value. See below for more detail. The + default is READ_ONLY, except for ordered-set aggregates, + for which the default is READ_WRITE. + + + + combinefunc @@ -563,6 +584,16 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; + + MFINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } + + + This option is like FINALFUNC_MODIFY, but it describes + the behavior of the moving-aggregate final function. + + + + minitial_condition @@ -587,12 +618,12 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - PARALLEL + PARALLEL = { SAFE | RESTRICTED | UNSAFE } The meanings of PARALLEL SAFE, PARALLEL RESTRICTED, and PARALLEL UNSAFE are the same as - for . An aggregate will not be + in . An aggregate will not be considered for parallelization if it is marked PARALLEL UNSAFE (which is the default!) or PARALLEL RESTRICTED. Note that the parallel-safety markings of the aggregate's support @@ -624,8 +655,8 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - - Notes + + Notes In parameters that specify support function names, you can write @@ -634,6 +665,34 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; of the support functions are determined from other parameters. + + Ordinarily, Postgres functions are expected to be true functions that + do not modify their input values. However, an aggregate transition + function, when used in the context of an aggregate, + is allowed to cheat and modify its transition-state argument in place. + This can provide substantial performance benefits compared to making + a fresh copy of the transition state each time. + + + + Likewise, while an aggregate final function is normally expected not to + modify its input values, sometimes it is impractical to avoid modifying + the transition-state argument. Such behavior must be declared using + the FINALFUNC_MODIFY parameter. The READ_WRITE + value indicates that the final function modifies the transition state in + unspecified ways. This value prevents use of the aggregate as a window + function, and it also prevents merging of transition states for aggregate + calls that share the same input values and transition functions. + The SHARABLE value indicates that the transition function + cannot be applied after the final function, but multiple final-function + calls can be performed on the ending transition state value. This value + prevents use of the aggregate as a window function, but it allows merging + of transition states. (That is, the optimization of interest here is not + applying the same final function repeatedly, but applying different final + functions to the same ending transition state value. This is allowed as + long as none of the final functions are marked READ_WRITE.) 
+ + If an aggregate supports moving-aggregate mode, it will improve calculation efficiency when the aggregate is used as a window function @@ -671,7 +730,8 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Note that whether or not the aggregate supports moving-aggregate mode, PostgreSQL can handle a moving frame end without recalculation; this is done by continuing to add new values - to the aggregate's state. It is assumed that the final function does + to the aggregate's state. This is why use of an aggregate as a window + function requires that the final function be read-only: it must not damage the aggregate's state value, so that the aggregation can be continued even after an aggregate result value has been obtained for one set of frame boundaries. diff --git a/doc/src/sgml/xaggr.sgml b/doc/src/sgml/xaggr.sgml index 79a9f288b2..9e6a6648dc 100644 --- a/doc/src/sgml/xaggr.sgml +++ b/doc/src/sgml/xaggr.sgml @@ -487,6 +487,13 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; C, since their state values aren't definable as any SQL data type. (In the above example, notice that the state value is declared as type internal — this is typical.) + Also, because the final function performs the sort, it is not possible + to continue adding input rows by executing the transition function again + later. This means the final function is not READ_ONLY; + it must be declared in + as READ_WRITE, or as SHARABLE if it's + possible for additional final-function calls to make use of the + already-sorted state. @@ -622,16 +629,15 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; if (AggCheckCallContext(fcinfo, NULL)) - One reason for checking this is that when it is true for a transition - function, the first input + One reason for checking this is that when it is true, the first input must be a temporary state value and can therefore safely be modified in-place rather than allocating a new copy. See int8inc() for an example. - (This is the only - case where it is safe for a function to modify a pass-by-reference input. - In particular, final functions for normal aggregates must not - modify their inputs in any case, because in some cases they will be - re-executed on the same final state value.) + (While aggregate transition functions are always allowed to modify + the transition value in-place, aggregate final functions are generally + discouraged from doing so; if they do so, the behavior must be declared + when creating the aggregate. See + for more detail.) 
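
As a concrete sketch of the behavior documented above (all names here are
invented for illustration; none of this is part of the patch), an
ordered-set aggregate whose final function sorts the state but leaves it
readable can be declared SHARABLE, letting one sorted transition state
serve several final-function calls:

    CREATE AGGREGATE my_percentile(float8 ORDER BY float8) (
        SFUNC = my_pct_transfn,        -- accumulates input rows
        STYPE = internal,
        FINALFUNC = my_pct_finalfn,    -- sorts the state, then reads one result
        FINALFUNC_EXTRA,
        FINALFUNC_MODIFY = SHARABLE    -- further finalfn calls stay legal
    );

    SELECT my_percentile(0.25) WITHIN GROUP (ORDER BY income),
           my_percentile(0.75) WITHIN GROUP (ORDER BY income)
    FROM households;

With SHARABLE, both calls can be computed from a single shared transition
state; under the READ_WRITE default for ordered-set aggregates, two
independent states would have to be kept.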
diff --git a/src/backend/catalog/pg_aggregate.c b/src/backend/catalog/pg_aggregate.c index a9204503d3..ca3fd819b4 100644 --- a/src/backend/catalog/pg_aggregate.c +++ b/src/backend/catalog/pg_aggregate.c @@ -19,6 +19,7 @@ #include "catalog/dependency.h" #include "catalog/indexing.h" #include "catalog/pg_aggregate.h" +#include "catalog/pg_aggregate_fn.h" #include "catalog/pg_language.h" #include "catalog/pg_operator.h" #include "catalog/pg_proc.h" @@ -65,6 +66,8 @@ AggregateCreate(const char *aggName, List *aggmfinalfnName, bool finalfnExtraArgs, bool mfinalfnExtraArgs, + char finalfnModify, + char mfinalfnModify, List *aggsortopName, Oid aggTransType, int32 aggTransSpace, @@ -656,6 +659,8 @@ AggregateCreate(const char *aggName, values[Anum_pg_aggregate_aggmfinalfn - 1] = ObjectIdGetDatum(mfinalfn); values[Anum_pg_aggregate_aggfinalextra - 1] = BoolGetDatum(finalfnExtraArgs); values[Anum_pg_aggregate_aggmfinalextra - 1] = BoolGetDatum(mfinalfnExtraArgs); + values[Anum_pg_aggregate_aggfinalmodify - 1] = CharGetDatum(finalfnModify); + values[Anum_pg_aggregate_aggmfinalmodify - 1] = CharGetDatum(mfinalfnModify); values[Anum_pg_aggregate_aggsortop - 1] = ObjectIdGetDatum(sortop); values[Anum_pg_aggregate_aggtranstype - 1] = ObjectIdGetDatum(aggTransType); values[Anum_pg_aggregate_aggtransspace - 1] = Int32GetDatum(aggTransSpace); diff --git a/src/backend/commands/aggregatecmds.c b/src/backend/commands/aggregatecmds.c index a63539ab21..adc9877e79 100644 --- a/src/backend/commands/aggregatecmds.c +++ b/src/backend/commands/aggregatecmds.c @@ -26,6 +26,7 @@ #include "catalog/dependency.h" #include "catalog/indexing.h" #include "catalog/pg_aggregate.h" +#include "catalog/pg_aggregate_fn.h" #include "catalog/pg_proc.h" #include "catalog/pg_type.h" #include "commands/alter.h" @@ -39,6 +40,9 @@ #include "utils/syscache.h" +static char extractModify(DefElem *defel); + + /* * DefineAggregate * @@ -67,6 +71,8 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List List *mfinalfuncName = NIL; bool finalfuncExtraArgs = false; bool mfinalfuncExtraArgs = false; + char finalfuncModify = 0; + char mfinalfuncModify = 0; List *sortoperatorName = NIL; TypeName *baseType = NULL; TypeName *transType = NULL; @@ -143,6 +149,10 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List finalfuncExtraArgs = defGetBoolean(defel); else if (pg_strcasecmp(defel->defname, "mfinalfunc_extra") == 0) mfinalfuncExtraArgs = defGetBoolean(defel); + else if (pg_strcasecmp(defel->defname, "finalfunc_modify") == 0) + finalfuncModify = extractModify(defel); + else if (pg_strcasecmp(defel->defname, "mfinalfunc_modify") == 0) + mfinalfuncModify = extractModify(defel); else if (pg_strcasecmp(defel->defname, "sortop") == 0) sortoperatorName = defGetQualifiedName(defel); else if (pg_strcasecmp(defel->defname, "basetype") == 0) @@ -235,6 +245,15 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List errmsg("aggregate minitcond must not be specified without mstype"))); } + /* + * Default values for modify flags can only be determined once we know the + * aggKind. + */ + if (finalfuncModify == 0) + finalfuncModify = (aggKind == AGGKIND_NORMAL) ? AGGMODIFY_READ_ONLY : AGGMODIFY_READ_WRITE; + if (mfinalfuncModify == 0) + mfinalfuncModify = (aggKind == AGGKIND_NORMAL) ? AGGMODIFY_READ_ONLY : AGGMODIFY_READ_WRITE; + /* * look up the aggregate's input datatype(s). 
*/ @@ -437,6 +456,8 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List mfinalfuncName, /* final function name */ finalfuncExtraArgs, mfinalfuncExtraArgs, + finalfuncModify, + mfinalfuncModify, sortoperatorName, /* sort operator name */ transTypeId, /* transition data type */ transSpace, /* transition space */ @@ -446,3 +467,24 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List minitval, /* initial condition */ proparallel); /* parallel safe? */ } + +/* + * Convert the string form of [m]finalfunc_modify to the catalog representation + */ +static char +extractModify(DefElem *defel) +{ + char *val = defGetString(defel); + + if (strcmp(val, "read_only") == 0) + return AGGMODIFY_READ_ONLY; + if (strcmp(val, "sharable") == 0) + return AGGMODIFY_SHARABLE; + if (strcmp(val, "read_write") == 0) + return AGGMODIFY_READ_WRITE; + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("parameter \"%s\" must be READ_ONLY, SHARABLE, or READ_WRITE", + defel->defname))); + return 0; /* keep compiler quiet */ +} diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 6543ecebd3..40d8ec9db4 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -248,9 +248,9 @@ typedef struct AggStatePerTransData /* * Link to an Aggref expr this state value is for. * - * There can be multiple Aggref's sharing the same state value, as long as - * the inputs and transition function are identical. This points to the - * first one of them. + * There can be multiple Aggref's sharing the same state value, so long as + * the inputs and transition functions are identical and the final + * functions are not read-write. This points to the first one of them. */ Aggref *aggref; @@ -419,8 +419,8 @@ typedef struct AggStatePerAggData Oid finalfn_oid; /* - * fmgr lookup data for final function --- only valid when finalfn_oid oid - * is not InvalidOid. + * fmgr lookup data for final function --- only valid when finalfn_oid is + * not InvalidOid. */ FmgrInfo finalfn; @@ -439,6 +439,11 @@ typedef struct AggStatePerAggData int16 resulttypeLen; bool resulttypeByVal; + /* + * "sharable" is false if this agg cannot share state values with other + * aggregates because the final function is read-write. + */ + bool sharable; } AggStatePerAggData; /* @@ -572,6 +577,7 @@ static void build_pertrans_for_aggref(AggStatePerTrans pertrans, static int find_compatible_peragg(Aggref *newagg, AggState *aggstate, int lastaggno, List **same_input_transnos); static int find_compatible_pertrans(AggState *aggstate, Aggref *newagg, + bool sharable, Oid aggtransfn, Oid aggtranstype, Oid aggserialfn, Oid aggdeserialfn, Datum initValue, bool initValueIsNull, @@ -3105,6 +3111,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) AclResult aclresult; Oid transfn_oid, finalfn_oid; + bool sharable; Oid serialfn_oid, deserialfn_oid; Expr *finalfnexpr; @@ -3177,6 +3184,15 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) else peragg->finalfn_oid = finalfn_oid = aggform->aggfinalfn; + /* + * If finalfn is marked read-write, we can't share transition states; + * but it is okay to share states for AGGMODIFY_SHARABLE aggs. Also, + * if we're not executing the finalfn here, we can share regardless. 
+ */ + sharable = (aggform->aggfinalmodify != AGGMODIFY_READ_WRITE) || + (finalfn_oid == InvalidOid); + peragg->sharable = sharable; + serialfn_oid = InvalidOid; deserialfn_oid = InvalidOid; @@ -3315,11 +3331,12 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) * 2. Build working state for invoking the transition function, or * look up previously initialized working state, if we can share it. * - * find_compatible_peragg() already collected a list of per-Trans's - * with the same inputs. Check if any of them have the same transition - * function and initial value. + * find_compatible_peragg() already collected a list of sharable + * per-Trans's with the same inputs. Check if any of them have the + * same transition function and initial value. */ existing_transno = find_compatible_pertrans(aggstate, aggref, + sharable, transfn_oid, aggtranstype, serialfn_oid, deserialfn_oid, initValue, initValueIsNull, @@ -3724,10 +3741,10 @@ GetAggInitVal(Datum textInitVal, Oid transtype) * with this one, with the same input parameters. If no compatible aggregate * can be found, returns -1. * - * As a side-effect, this also collects a list of existing per-Trans structs - * with matching inputs. If no identical Aggref is found, the list is passed - * later to find_compatible_pertrans, to see if we can at least reuse the - * state value of another aggregate. + * As a side-effect, this also collects a list of existing, sharable per-Trans + * structs with matching inputs. If no identical Aggref is found, the list is + * passed later to find_compatible_pertrans, to see if we can at least reuse + * the state value of another aggregate. */ static int find_compatible_peragg(Aggref *newagg, AggState *aggstate, @@ -3785,11 +3802,15 @@ find_compatible_peragg(Aggref *newagg, AggState *aggstate, } /* - * Not identical, but it had the same inputs. Return it to the caller, - * in case we can re-use its per-trans state. + * Not identical, but it had the same inputs. If the final function + * permits sharing, return its transno to the caller, in case we can + * re-use its per-trans state. (If there's already sharing going on, + * we might report a transno more than once. find_compatible_pertrans + * is cheap enough that it's not worth spending cycles to avoid that.) */ - *same_input_transnos = lappend_int(*same_input_transnos, - peragg->transno); + if (peragg->sharable) + *same_input_transnos = lappend_int(*same_input_transnos, + peragg->transno); } return -1; @@ -3804,7 +3825,7 @@ find_compatible_peragg(Aggref *newagg, AggState *aggstate, * verified to match.) */ static int -find_compatible_pertrans(AggState *aggstate, Aggref *newagg, +find_compatible_pertrans(AggState *aggstate, Aggref *newagg, bool sharable, Oid aggtransfn, Oid aggtranstype, Oid aggserialfn, Oid aggdeserialfn, Datum initValue, bool initValueIsNull, @@ -3812,14 +3833,8 @@ find_compatible_pertrans(AggState *aggstate, Aggref *newagg, { ListCell *lc; - /* - * For the moment, never try to share transition states between different - * ordered-set aggregates. This is necessary because the finalfns of the - * built-in OSAs (see orderedsetaggs.c) are destructive of their - * transition states. We should fix them so we can allow this, but not - * losing performance in the normal non-shared case will take some work. 
- */ - if (AGGKIND_IS_ORDERED_SET(newagg->aggkind)) + /* If this aggregate can't share transition states, give up */ + if (!sharable) return -1; foreach(lc, transnos) diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index 80be46029f..02868749f6 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -49,6 +49,7 @@ #include "utils/datum.h" #include "utils/lsyscache.h" #include "utils/memutils.h" +#include "utils/regproc.h" #include "utils/syscache.h" #include "windowapi.h" @@ -2096,10 +2097,12 @@ initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc, Oid aggtranstype; AttrNumber initvalAttNo; AclResult aclresult; + bool use_ma_code; Oid transfn_oid, invtransfn_oid, finalfn_oid; bool finalextra; + char finalmodify; Expr *transfnexpr, *invtransfnexpr, *finalfnexpr; @@ -2125,20 +2128,32 @@ initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc, * Figure out whether we want to use the moving-aggregate implementation, * and collect the right set of fields from the pg_attribute entry. * - * If the frame head can't move, we don't need moving-aggregate code. Even - * if we'd like to use it, don't do so if the aggregate's arguments (and - * FILTER clause if any) contain any calls to volatile functions. - * Otherwise, the difference between restarting and not restarting the - * aggregation would be user-visible. + * It's possible that an aggregate would supply a safe moving-aggregate + * implementation and an unsafe normal one, in which case our hand is + * forced. Otherwise, if the frame head can't move, we don't need + * moving-aggregate code. Even if we'd like to use it, don't do so if the + * aggregate's arguments (and FILTER clause if any) contain any calls to + * volatile functions. Otherwise, the difference between restarting and + * not restarting the aggregation would be user-visible. 
*/ - if (OidIsValid(aggform->aggminvtransfn) && - !(winstate->frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING) && - !contain_volatile_functions((Node *) wfunc)) + if (!OidIsValid(aggform->aggminvtransfn)) + use_ma_code = false; /* sine qua non */ + else if (aggform->aggmfinalmodify == AGGMODIFY_READ_ONLY && + aggform->aggfinalmodify != AGGMODIFY_READ_ONLY) + use_ma_code = true; /* decision forced by safety */ + else if (winstate->frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING) + use_ma_code = false; /* non-moving frame head */ + else if (contain_volatile_functions((Node *) wfunc)) + use_ma_code = false; /* avoid possible behavioral change */ + else + use_ma_code = true; /* yes, let's use it */ + if (use_ma_code) { peraggstate->transfn_oid = transfn_oid = aggform->aggmtransfn; peraggstate->invtransfn_oid = invtransfn_oid = aggform->aggminvtransfn; peraggstate->finalfn_oid = finalfn_oid = aggform->aggmfinalfn; finalextra = aggform->aggmfinalextra; + finalmodify = aggform->aggmfinalmodify; aggtranstype = aggform->aggmtranstype; initvalAttNo = Anum_pg_aggregate_aggminitval; } @@ -2148,6 +2163,7 @@ initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc, peraggstate->invtransfn_oid = invtransfn_oid = InvalidOid; peraggstate->finalfn_oid = finalfn_oid = aggform->aggfinalfn; finalextra = aggform->aggfinalextra; + finalmodify = aggform->aggfinalmodify; aggtranstype = aggform->aggtranstype; initvalAttNo = Anum_pg_aggregate_agginitval; } @@ -2198,6 +2214,17 @@ initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc, } } + /* + * If the selected finalfn isn't read-only, we can't run this aggregate as + * a window function. This is a user-facing error, so we take a bit more + * care with the error message than elsewhere in this function. 
+ */ + if (finalmodify != AGGMODIFY_READ_ONLY) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("aggregate function %s does not support use as a window function", + format_procedure(wfunc->winfnoid)))); + /* Detect how many arguments to pass to the finalfn */ if (finalextra) peraggstate->numFinalArgs = numArguments + 1; diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index e34c83acd2..8733426e8a 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -42,6 +42,7 @@ #include "access/attnum.h" #include "access/sysattr.h" #include "access/transam.h" +#include "catalog/pg_aggregate.h" #include "catalog/pg_am.h" #include "catalog/pg_attribute.h" #include "catalog/pg_cast.h" @@ -13433,8 +13434,10 @@ dumpAgg(Archive *fout, AggInfo *agginfo) int i_aggmfinalfn; int i_aggfinalextra; int i_aggmfinalextra; + int i_aggfinalmodify; + int i_aggmfinalmodify; int i_aggsortop; - int i_hypothetical; + int i_aggkind; int i_aggtranstype; int i_aggtransspace; int i_aggmtranstype; @@ -13453,9 +13456,11 @@ dumpAgg(Archive *fout, AggInfo *agginfo) const char *aggmfinalfn; bool aggfinalextra; bool aggmfinalextra; + char aggfinalmodify; + char aggmfinalmodify; const char *aggsortop; char *aggsortconvop; - bool hypothetical; + char aggkind; const char *aggtranstype; const char *aggtransspace; const char *aggmtranstype; @@ -13464,6 +13469,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) const char *aggminitval; bool convertok; const char *proparallel; + char defaultfinalmodify; /* Skip if not to be dumped */ if (!agginfo->aggfn.dobj.dump || dopt->dataOnly) @@ -13479,15 +13485,37 @@ dumpAgg(Archive *fout, AggInfo *agginfo) selectSourceSchema(fout, agginfo->aggfn.dobj.namespace->dobj.name); /* Get aggregate-specific details */ - if (fout->remoteVersion >= 90600) + if (fout->remoteVersion >= 110000) + { + appendPQExpBuffer(query, "SELECT aggtransfn, " + "aggfinalfn, aggtranstype::pg_catalog.regtype, " + "aggcombinefn, aggserialfn, aggdeserialfn, aggmtransfn, " + "aggminvtransfn, aggmfinalfn, aggmtranstype::pg_catalog.regtype, " + "aggfinalextra, aggmfinalextra, " + "aggfinalmodify, aggmfinalmodify, " + "aggsortop::pg_catalog.regoperator, " + "aggkind, " + "aggtransspace, agginitval, " + "aggmtransspace, aggminitval, " + "true AS convertok, " + "pg_catalog.pg_get_function_arguments(p.oid) AS funcargs, " + "pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs, " + "p.proparallel " + "FROM pg_catalog.pg_aggregate a, pg_catalog.pg_proc p " + "WHERE a.aggfnoid = p.oid " + "AND p.oid = '%u'::pg_catalog.oid", + agginfo->aggfn.dobj.catId.oid); + } + else if (fout->remoteVersion >= 90600) { appendPQExpBuffer(query, "SELECT aggtransfn, " "aggfinalfn, aggtranstype::pg_catalog.regtype, " "aggcombinefn, aggserialfn, aggdeserialfn, aggmtransfn, " "aggminvtransfn, aggmfinalfn, aggmtranstype::pg_catalog.regtype, " "aggfinalextra, aggmfinalextra, " + "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " "aggsortop::pg_catalog.regoperator, " - "(aggkind = 'h') AS hypothetical, " + "aggkind, " "aggtransspace, agginitval, " "aggmtransspace, aggminitval, " "true AS convertok, " @@ -13507,8 +13535,9 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "'-' AS aggdeserialfn, aggmtransfn, aggminvtransfn, " "aggmfinalfn, aggmtranstype::pg_catalog.regtype, " "aggfinalextra, aggmfinalextra, " + "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " "aggsortop::pg_catalog.regoperator, " - "(aggkind = 'h') AS hypothetical, " + "aggkind, " "aggtransspace, agginitval, " "aggmtransspace, aggminitval, " 
"true AS convertok, " @@ -13528,8 +13557,9 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "'-' AS aggminvtransfn, '-' AS aggmfinalfn, " "0 AS aggmtranstype, false AS aggfinalextra, " "false AS aggmfinalextra, " + "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " "aggsortop::pg_catalog.regoperator, " - "false AS hypothetical, " + "'n' AS aggkind, " "0 AS aggtransspace, agginitval, " "0 AS aggmtransspace, NULL AS aggminitval, " "true AS convertok, " @@ -13549,8 +13579,9 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "'-' AS aggminvtransfn, '-' AS aggmfinalfn, " "0 AS aggmtranstype, false AS aggfinalextra, " "false AS aggmfinalextra, " + "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " "aggsortop::pg_catalog.regoperator, " - "false AS hypothetical, " + "'n' AS aggkind, " "0 AS aggtransspace, agginitval, " "0 AS aggmtransspace, NULL AS aggminitval, " "true AS convertok " @@ -13567,8 +13598,10 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "'-' AS aggdeserialfn, '-' AS aggmtransfn, " "'-' AS aggminvtransfn, '-' AS aggmfinalfn, " "0 AS aggmtranstype, false AS aggfinalextra, " - "false AS aggmfinalextra, 0 AS aggsortop, " - "false AS hypothetical, " + "false AS aggmfinalextra, " + "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " + "0 AS aggsortop, " + "'n' AS aggkind, " "0 AS aggtransspace, agginitval, " "0 AS aggmtransspace, NULL AS aggminitval, " "true AS convertok " @@ -13590,8 +13623,10 @@ dumpAgg(Archive *fout, AggInfo *agginfo) i_aggmfinalfn = PQfnumber(res, "aggmfinalfn"); i_aggfinalextra = PQfnumber(res, "aggfinalextra"); i_aggmfinalextra = PQfnumber(res, "aggmfinalextra"); + i_aggfinalmodify = PQfnumber(res, "aggfinalmodify"); + i_aggmfinalmodify = PQfnumber(res, "aggmfinalmodify"); i_aggsortop = PQfnumber(res, "aggsortop"); - i_hypothetical = PQfnumber(res, "hypothetical"); + i_aggkind = PQfnumber(res, "aggkind"); i_aggtranstype = PQfnumber(res, "aggtranstype"); i_aggtransspace = PQfnumber(res, "aggtransspace"); i_aggmtranstype = PQfnumber(res, "aggmtranstype"); @@ -13611,8 +13646,10 @@ dumpAgg(Archive *fout, AggInfo *agginfo) aggmfinalfn = PQgetvalue(res, 0, i_aggmfinalfn); aggfinalextra = (PQgetvalue(res, 0, i_aggfinalextra)[0] == 't'); aggmfinalextra = (PQgetvalue(res, 0, i_aggmfinalextra)[0] == 't'); + aggfinalmodify = PQgetvalue(res, 0, i_aggfinalmodify)[0]; + aggmfinalmodify = PQgetvalue(res, 0, i_aggmfinalmodify)[0]; aggsortop = PQgetvalue(res, 0, i_aggsortop); - hypothetical = (PQgetvalue(res, 0, i_hypothetical)[0] == 't'); + aggkind = PQgetvalue(res, 0, i_aggkind)[0]; aggtranstype = PQgetvalue(res, 0, i_aggtranstype); aggtransspace = PQgetvalue(res, 0, i_aggtransspace); aggmtranstype = PQgetvalue(res, 0, i_aggmtranstype); @@ -13656,6 +13693,14 @@ dumpAgg(Archive *fout, AggInfo *agginfo) return; } + /* identify default modify flag for aggkind (must match DefineAggregate) */ + defaultfinalmodify = (aggkind == AGGKIND_NORMAL) ? 
AGGMODIFY_READ_ONLY : AGGMODIFY_READ_WRITE; + /* replace omitted flags for old versions */ + if (aggfinalmodify == '0') + aggfinalmodify = defaultfinalmodify; + if (aggmfinalmodify == '0') + aggmfinalmodify = defaultfinalmodify; + /* regproc and regtype output is already sufficiently quoted */ appendPQExpBuffer(details, " SFUNC = %s,\n STYPE = %s", aggtransfn, aggtranstype); @@ -13678,6 +13723,25 @@ dumpAgg(Archive *fout, AggInfo *agginfo) aggfinalfn); if (aggfinalextra) appendPQExpBufferStr(details, ",\n FINALFUNC_EXTRA"); + if (aggfinalmodify != defaultfinalmodify) + { + switch (aggfinalmodify) + { + case AGGMODIFY_READ_ONLY: + appendPQExpBufferStr(details, ",\n FINALFUNC_MODIFY = READ_ONLY"); + break; + case AGGMODIFY_SHARABLE: + appendPQExpBufferStr(details, ",\n FINALFUNC_MODIFY = SHARABLE"); + break; + case AGGMODIFY_READ_WRITE: + appendPQExpBufferStr(details, ",\n FINALFUNC_MODIFY = READ_WRITE"); + break; + default: + exit_horribly(NULL, "unrecognized aggfinalmodify value for aggregate \"%s\"\n", + agginfo->aggfn.dobj.name); + break; + } + } } if (strcmp(aggcombinefn, "-") != 0) @@ -13715,6 +13779,25 @@ dumpAgg(Archive *fout, AggInfo *agginfo) aggmfinalfn); if (aggmfinalextra) appendPQExpBufferStr(details, ",\n MFINALFUNC_EXTRA"); + if (aggmfinalmodify != defaultfinalmodify) + { + switch (aggmfinalmodify) + { + case AGGMODIFY_READ_ONLY: + appendPQExpBufferStr(details, ",\n MFINALFUNC_MODIFY = READ_ONLY"); + break; + case AGGMODIFY_SHARABLE: + appendPQExpBufferStr(details, ",\n MFINALFUNC_MODIFY = SHARABLE"); + break; + case AGGMODIFY_READ_WRITE: + appendPQExpBufferStr(details, ",\n MFINALFUNC_MODIFY = READ_WRITE"); + break; + default: + exit_horribly(NULL, "unrecognized aggmfinalmodify value for aggregate \"%s\"\n", + agginfo->aggfn.dobj.name); + break; + } + } } aggsortconvop = convertOperatorReference(fout, aggsortop); @@ -13725,7 +13808,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) free(aggsortconvop); } - if (hypothetical) + if (aggkind == AGGKIND_HYPOTHETICAL) appendPQExpBufferStr(details, ",\n HYPOTHETICAL"); if (proparallel != NULL && proparallel[0] != PROPARALLEL_UNSAFE) diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl index c492fbdc24..fa3b56a426 100644 --- a/src/bin/pg_dump/t/002_pg_dump.pl +++ b/src/bin/pg_dump/t/002_pg_dump.pl @@ -2784,6 +2784,7 @@ basetype = int4, stype = _int8, finalfunc = int8_avg, + finalfunc_modify = sharable, initcond1 = \'{0,0}\' );', regexp => qr/^ @@ -2791,7 +2792,8 @@ \n\s+\QSFUNC = int4_avg_accum,\E \n\s+\QSTYPE = bigint[],\E \n\s+\QINITCOND = '{0,0}',\E - \n\s+\QFINALFUNC = int8_avg\E + \n\s+\QFINALFUNC = int8_avg,\E + \n\s+\QFINALFUNC_MODIFY = SHARABLE\E \n\);/xm, like => { binary_upgrade => 1, diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 2c382a73cf..7c1756ae08 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201709301 +#define CATALOG_VERSION_NO 201710141 #endif diff --git a/src/include/catalog/pg_aggregate.h b/src/include/catalog/pg_aggregate.h index 4d5b9bb9a6..5769f6430a 100644 --- a/src/include/catalog/pg_aggregate.h +++ b/src/include/catalog/pg_aggregate.h @@ -20,8 +20,6 @@ #define PG_AGGREGATE_H #include "catalog/genbki.h" -#include "catalog/objectaddress.h" -#include "nodes/pg_list.h" /* ---------------------------------------------------------------- * pg_aggregate definition. 
@@ -41,6 +39,8 @@ * aggmfinalfn final function for moving-aggregate mode (0 if none) * aggfinalextra true to pass extra dummy arguments to aggfinalfn * aggmfinalextra true to pass extra dummy arguments to aggmfinalfn + * aggfinalmodify tells whether aggfinalfn modifies transition state + * aggmfinalmodify tells whether aggmfinalfn modifies transition state * aggsortop associated sort operator (0 if none) * aggtranstype type of aggregate's transition (state) data * aggtransspace estimated size of state data (0 for default estimate) @@ -67,6 +67,8 @@ CATALOG(pg_aggregate,2600) BKI_WITHOUT_OIDS regproc aggmfinalfn; bool aggfinalextra; bool aggmfinalextra; + char aggfinalmodify; + char aggmfinalmodify; Oid aggsortop; Oid aggtranstype; int32 aggtransspace; @@ -91,7 +93,7 @@ typedef FormData_pg_aggregate *Form_pg_aggregate; * ---------------- */ -#define Natts_pg_aggregate 20 +#define Natts_pg_aggregate 22 #define Anum_pg_aggregate_aggfnoid 1 #define Anum_pg_aggregate_aggkind 2 #define Anum_pg_aggregate_aggnumdirectargs 3 @@ -105,13 +107,15 @@ typedef FormData_pg_aggregate *Form_pg_aggregate; #define Anum_pg_aggregate_aggmfinalfn 11 #define Anum_pg_aggregate_aggfinalextra 12 #define Anum_pg_aggregate_aggmfinalextra 13 -#define Anum_pg_aggregate_aggsortop 14 -#define Anum_pg_aggregate_aggtranstype 15 -#define Anum_pg_aggregate_aggtransspace 16 -#define Anum_pg_aggregate_aggmtranstype 17 -#define Anum_pg_aggregate_aggmtransspace 18 -#define Anum_pg_aggregate_agginitval 19 -#define Anum_pg_aggregate_aggminitval 20 +#define Anum_pg_aggregate_aggfinalmodify 14 +#define Anum_pg_aggregate_aggmfinalmodify 15 +#define Anum_pg_aggregate_aggsortop 16 +#define Anum_pg_aggregate_aggtranstype 17 +#define Anum_pg_aggregate_aggtransspace 18 +#define Anum_pg_aggregate_aggmtranstype 19 +#define Anum_pg_aggregate_aggmtransspace 20 +#define Anum_pg_aggregate_agginitval 21 +#define Anum_pg_aggregate_aggminitval 22 /* * Symbolic values for aggkind column. We distinguish normal aggregates @@ -128,6 +132,18 @@ typedef FormData_pg_aggregate *Form_pg_aggregate; /* Use this macro to test for "ordered-set agg including hypothetical case" */ #define AGGKIND_IS_ORDERED_SET(kind) ((kind) != AGGKIND_NORMAL) +/* + * Symbolic values for aggfinalmodify and aggmfinalmodify columns. + * Preferably, finalfns do not modify the transition state value at all, + * but in some cases that would cost too much performance. We distinguish + * "pure read only" and "trashes it arbitrarily" cases, as well as the + * intermediate case where multiple finalfn calls are allowed but the + * transfn cannot be applied anymore after the first finalfn call. 
+ */ +#define AGGMODIFY_READ_ONLY 'r' +#define AGGMODIFY_SHARABLE 's' +#define AGGMODIFY_READ_WRITE 'w' + /* ---------------- * initial contents of pg_aggregate @@ -135,217 +151,183 @@ typedef FormData_pg_aggregate *Form_pg_aggregate; */ /* avg */ -DATA(insert ( 2100 n 0 int8_avg_accum numeric_poly_avg int8_avg_combine int8_avg_serialize int8_avg_deserialize int8_avg_accum int8_avg_accum_inv numeric_poly_avg f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2101 n 0 int4_avg_accum int8_avg int4_avg_combine - - int4_avg_accum int4_avg_accum_inv int8_avg f f 0 1016 0 1016 0 "{0,0}" "{0,0}" )); -DATA(insert ( 2102 n 0 int2_avg_accum int8_avg int4_avg_combine - - int2_avg_accum int2_avg_accum_inv int8_avg f f 0 1016 0 1016 0 "{0,0}" "{0,0}" )); -DATA(insert ( 2103 n 0 numeric_avg_accum numeric_avg numeric_avg_combine numeric_avg_serialize numeric_avg_deserialize numeric_avg_accum numeric_accum_inv numeric_avg f f 0 2281 128 2281 128 _null_ _null_ )); -DATA(insert ( 2104 n 0 float4_accum float8_avg float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2105 n 0 float8_accum float8_avg float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2106 n 0 interval_accum interval_avg interval_combine - - interval_accum interval_accum_inv interval_avg f f 0 1187 0 1187 0 "{0 second,0 second}" "{0 second,0 second}" )); +DATA(insert ( 2100 n 0 int8_avg_accum numeric_poly_avg int8_avg_combine int8_avg_serialize int8_avg_deserialize int8_avg_accum int8_avg_accum_inv numeric_poly_avg f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2101 n 0 int4_avg_accum int8_avg int4_avg_combine - - int4_avg_accum int4_avg_accum_inv int8_avg f f r r 0 1016 0 1016 0 "{0,0}" "{0,0}" )); +DATA(insert ( 2102 n 0 int2_avg_accum int8_avg int4_avg_combine - - int2_avg_accum int2_avg_accum_inv int8_avg f f r r 0 1016 0 1016 0 "{0,0}" "{0,0}" )); +DATA(insert ( 2103 n 0 numeric_avg_accum numeric_avg numeric_avg_combine numeric_avg_serialize numeric_avg_deserialize numeric_avg_accum numeric_accum_inv numeric_avg f f r r 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2104 n 0 float4_accum float8_avg float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2105 n 0 float8_accum float8_avg float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2106 n 0 interval_accum interval_avg interval_combine - - interval_accum interval_accum_inv interval_avg f f r r 0 1187 0 1187 0 "{0 second,0 second}" "{0 second,0 second}" )); /* sum */ -DATA(insert ( 2107 n 0 int8_avg_accum numeric_poly_sum int8_avg_combine int8_avg_serialize int8_avg_deserialize int8_avg_accum int8_avg_accum_inv numeric_poly_sum f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2108 n 0 int4_sum - int8pl - - int4_avg_accum int4_avg_accum_inv int2int4_sum f f 0 20 0 1016 0 _null_ "{0,0}" )); -DATA(insert ( 2109 n 0 int2_sum - int8pl - - int2_avg_accum int2_avg_accum_inv int2int4_sum f f 0 20 0 1016 0 _null_ "{0,0}" )); -DATA(insert ( 2110 n 0 float4pl - float4pl - - - - - f f 0 700 0 0 0 _null_ _null_ )); -DATA(insert ( 2111 n 0 float8pl - float8pl - - - - - f f 0 701 0 0 0 _null_ _null_ )); -DATA(insert ( 2112 n 0 cash_pl - cash_pl - - cash_pl cash_mi - f f 0 790 0 790 0 _null_ _null_ )); -DATA(insert ( 2113 n 0 interval_pl - interval_pl - - interval_pl interval_mi - f f 0 1186 0 1186 0 _null_ _null_ )); -DATA(insert ( 2114 n 0 numeric_avg_accum numeric_sum numeric_avg_combine numeric_avg_serialize numeric_avg_deserialize numeric_avg_accum 
numeric_accum_inv numeric_sum f f 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2107 n 0 int8_avg_accum numeric_poly_sum int8_avg_combine int8_avg_serialize int8_avg_deserialize int8_avg_accum int8_avg_accum_inv numeric_poly_sum f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2108 n 0 int4_sum - int8pl - - int4_avg_accum int4_avg_accum_inv int2int4_sum f f r r 0 20 0 1016 0 _null_ "{0,0}" )); +DATA(insert ( 2109 n 0 int2_sum - int8pl - - int2_avg_accum int2_avg_accum_inv int2int4_sum f f r r 0 20 0 1016 0 _null_ "{0,0}" )); +DATA(insert ( 2110 n 0 float4pl - float4pl - - - - - f f r r 0 700 0 0 0 _null_ _null_ )); +DATA(insert ( 2111 n 0 float8pl - float8pl - - - - - f f r r 0 701 0 0 0 _null_ _null_ )); +DATA(insert ( 2112 n 0 cash_pl - cash_pl - - cash_pl cash_mi - f f r r 0 790 0 790 0 _null_ _null_ )); +DATA(insert ( 2113 n 0 interval_pl - interval_pl - - interval_pl interval_mi - f f r r 0 1186 0 1186 0 _null_ _null_ )); +DATA(insert ( 2114 n 0 numeric_avg_accum numeric_sum numeric_avg_combine numeric_avg_serialize numeric_avg_deserialize numeric_avg_accum numeric_accum_inv numeric_sum f f r r 0 2281 128 2281 128 _null_ _null_ )); /* max */ -DATA(insert ( 2115 n 0 int8larger - int8larger - - - - - f f 413 20 0 0 0 _null_ _null_ )); -DATA(insert ( 2116 n 0 int4larger - int4larger - - - - - f f 521 23 0 0 0 _null_ _null_ )); -DATA(insert ( 2117 n 0 int2larger - int2larger - - - - - f f 520 21 0 0 0 _null_ _null_ )); -DATA(insert ( 2118 n 0 oidlarger - oidlarger - - - - - f f 610 26 0 0 0 _null_ _null_ )); -DATA(insert ( 2119 n 0 float4larger - float4larger - - - - - f f 623 700 0 0 0 _null_ _null_ )); -DATA(insert ( 2120 n 0 float8larger - float8larger - - - - - f f 674 701 0 0 0 _null_ _null_ )); -DATA(insert ( 2121 n 0 int4larger - int4larger - - - - - f f 563 702 0 0 0 _null_ _null_ )); -DATA(insert ( 2122 n 0 date_larger - date_larger - - - - - f f 1097 1082 0 0 0 _null_ _null_ )); -DATA(insert ( 2123 n 0 time_larger - time_larger - - - - - f f 1112 1083 0 0 0 _null_ _null_ )); -DATA(insert ( 2124 n 0 timetz_larger - timetz_larger - - - - - f f 1554 1266 0 0 0 _null_ _null_ )); -DATA(insert ( 2125 n 0 cashlarger - cashlarger - - - - - f f 903 790 0 0 0 _null_ _null_ )); -DATA(insert ( 2126 n 0 timestamp_larger - timestamp_larger - - - - - f f 2064 1114 0 0 0 _null_ _null_ )); -DATA(insert ( 2127 n 0 timestamptz_larger - timestamptz_larger - - - - - f f 1324 1184 0 0 0 _null_ _null_ )); -DATA(insert ( 2128 n 0 interval_larger - interval_larger - - - - - f f 1334 1186 0 0 0 _null_ _null_ )); -DATA(insert ( 2129 n 0 text_larger - text_larger - - - - - f f 666 25 0 0 0 _null_ _null_ )); -DATA(insert ( 2130 n 0 numeric_larger - numeric_larger - - - - - f f 1756 1700 0 0 0 _null_ _null_ )); -DATA(insert ( 2050 n 0 array_larger - array_larger - - - - - f f 1073 2277 0 0 0 _null_ _null_ )); -DATA(insert ( 2244 n 0 bpchar_larger - bpchar_larger - - - - - f f 1060 1042 0 0 0 _null_ _null_ )); -DATA(insert ( 2797 n 0 tidlarger - tidlarger - - - - - f f 2800 27 0 0 0 _null_ _null_ )); -DATA(insert ( 3526 n 0 enum_larger - enum_larger - - - - - f f 3519 3500 0 0 0 _null_ _null_ )); -DATA(insert ( 3564 n 0 network_larger - network_larger - - - - - f f 1205 869 0 0 0 _null_ _null_ )); +DATA(insert ( 2115 n 0 int8larger - int8larger - - - - - f f r r 413 20 0 0 0 _null_ _null_ )); +DATA(insert ( 2116 n 0 int4larger - int4larger - - - - - f f r r 521 23 0 0 0 _null_ _null_ )); +DATA(insert ( 2117 n 0 int2larger - int2larger - - - - - f f r r 520 21 0 0 0 _null_ _null_ )); 
+DATA(insert ( 2118 n 0 oidlarger - oidlarger - - - - - f f r r 610 26 0 0 0 _null_ _null_ )); +DATA(insert ( 2119 n 0 float4larger - float4larger - - - - - f f r r 623 700 0 0 0 _null_ _null_ )); +DATA(insert ( 2120 n 0 float8larger - float8larger - - - - - f f r r 674 701 0 0 0 _null_ _null_ )); +DATA(insert ( 2121 n 0 int4larger - int4larger - - - - - f f r r 563 702 0 0 0 _null_ _null_ )); +DATA(insert ( 2122 n 0 date_larger - date_larger - - - - - f f r r 1097 1082 0 0 0 _null_ _null_ )); +DATA(insert ( 2123 n 0 time_larger - time_larger - - - - - f f r r 1112 1083 0 0 0 _null_ _null_ )); +DATA(insert ( 2124 n 0 timetz_larger - timetz_larger - - - - - f f r r 1554 1266 0 0 0 _null_ _null_ )); +DATA(insert ( 2125 n 0 cashlarger - cashlarger - - - - - f f r r 903 790 0 0 0 _null_ _null_ )); +DATA(insert ( 2126 n 0 timestamp_larger - timestamp_larger - - - - - f f r r 2064 1114 0 0 0 _null_ _null_ )); +DATA(insert ( 2127 n 0 timestamptz_larger - timestamptz_larger - - - - - f f r r 1324 1184 0 0 0 _null_ _null_ )); +DATA(insert ( 2128 n 0 interval_larger - interval_larger - - - - - f f r r 1334 1186 0 0 0 _null_ _null_ )); +DATA(insert ( 2129 n 0 text_larger - text_larger - - - - - f f r r 666 25 0 0 0 _null_ _null_ )); +DATA(insert ( 2130 n 0 numeric_larger - numeric_larger - - - - - f f r r 1756 1700 0 0 0 _null_ _null_ )); +DATA(insert ( 2050 n 0 array_larger - array_larger - - - - - f f r r 1073 2277 0 0 0 _null_ _null_ )); +DATA(insert ( 2244 n 0 bpchar_larger - bpchar_larger - - - - - f f r r 1060 1042 0 0 0 _null_ _null_ )); +DATA(insert ( 2797 n 0 tidlarger - tidlarger - - - - - f f r r 2800 27 0 0 0 _null_ _null_ )); +DATA(insert ( 3526 n 0 enum_larger - enum_larger - - - - - f f r r 3519 3500 0 0 0 _null_ _null_ )); +DATA(insert ( 3564 n 0 network_larger - network_larger - - - - - f f r r 1205 869 0 0 0 _null_ _null_ )); /* min */ -DATA(insert ( 2131 n 0 int8smaller - int8smaller - - - - - f f 412 20 0 0 0 _null_ _null_ )); -DATA(insert ( 2132 n 0 int4smaller - int4smaller - - - - - f f 97 23 0 0 0 _null_ _null_ )); -DATA(insert ( 2133 n 0 int2smaller - int2smaller - - - - - f f 95 21 0 0 0 _null_ _null_ )); -DATA(insert ( 2134 n 0 oidsmaller - oidsmaller - - - - - f f 609 26 0 0 0 _null_ _null_ )); -DATA(insert ( 2135 n 0 float4smaller - float4smaller - - - - - f f 622 700 0 0 0 _null_ _null_ )); -DATA(insert ( 2136 n 0 float8smaller - float8smaller - - - - - f f 672 701 0 0 0 _null_ _null_ )); -DATA(insert ( 2137 n 0 int4smaller - int4smaller - - - - - f f 562 702 0 0 0 _null_ _null_ )); -DATA(insert ( 2138 n 0 date_smaller - date_smaller - - - - - f f 1095 1082 0 0 0 _null_ _null_ )); -DATA(insert ( 2139 n 0 time_smaller - time_smaller - - - - - f f 1110 1083 0 0 0 _null_ _null_ )); -DATA(insert ( 2140 n 0 timetz_smaller - timetz_smaller - - - - - f f 1552 1266 0 0 0 _null_ _null_ )); -DATA(insert ( 2141 n 0 cashsmaller - cashsmaller - - - - - f f 902 790 0 0 0 _null_ _null_ )); -DATA(insert ( 2142 n 0 timestamp_smaller - timestamp_smaller - - - - - f f 2062 1114 0 0 0 _null_ _null_ )); -DATA(insert ( 2143 n 0 timestamptz_smaller - timestamptz_smaller - - - - - f f 1322 1184 0 0 0 _null_ _null_ )); -DATA(insert ( 2144 n 0 interval_smaller - interval_smaller - - - - - f f 1332 1186 0 0 0 _null_ _null_ )); -DATA(insert ( 2145 n 0 text_smaller - text_smaller - - - - - f f 664 25 0 0 0 _null_ _null_ )); -DATA(insert ( 2146 n 0 numeric_smaller - numeric_smaller - - - - - f f 1754 1700 0 0 0 _null_ _null_ )); -DATA(insert ( 2051 n 0 array_smaller - array_smaller - - - - - f f 1072 
2277 0 0 0 _null_ _null_ )); -DATA(insert ( 2245 n 0 bpchar_smaller - bpchar_smaller - - - - - f f 1058 1042 0 0 0 _null_ _null_ )); -DATA(insert ( 2798 n 0 tidsmaller - tidsmaller - - - - - f f 2799 27 0 0 0 _null_ _null_ )); -DATA(insert ( 3527 n 0 enum_smaller - enum_smaller - - - - - f f 3518 3500 0 0 0 _null_ _null_ )); -DATA(insert ( 3565 n 0 network_smaller - network_smaller - - - - - f f 1203 869 0 0 0 _null_ _null_ )); +DATA(insert ( 2131 n 0 int8smaller - int8smaller - - - - - f f r r 412 20 0 0 0 _null_ _null_ )); +DATA(insert ( 2132 n 0 int4smaller - int4smaller - - - - - f f r r 97 23 0 0 0 _null_ _null_ )); +DATA(insert ( 2133 n 0 int2smaller - int2smaller - - - - - f f r r 95 21 0 0 0 _null_ _null_ )); +DATA(insert ( 2134 n 0 oidsmaller - oidsmaller - - - - - f f r r 609 26 0 0 0 _null_ _null_ )); +DATA(insert ( 2135 n 0 float4smaller - float4smaller - - - - - f f r r 622 700 0 0 0 _null_ _null_ )); +DATA(insert ( 2136 n 0 float8smaller - float8smaller - - - - - f f r r 672 701 0 0 0 _null_ _null_ )); +DATA(insert ( 2137 n 0 int4smaller - int4smaller - - - - - f f r r 562 702 0 0 0 _null_ _null_ )); +DATA(insert ( 2138 n 0 date_smaller - date_smaller - - - - - f f r r 1095 1082 0 0 0 _null_ _null_ )); +DATA(insert ( 2139 n 0 time_smaller - time_smaller - - - - - f f r r 1110 1083 0 0 0 _null_ _null_ )); +DATA(insert ( 2140 n 0 timetz_smaller - timetz_smaller - - - - - f f r r 1552 1266 0 0 0 _null_ _null_ )); +DATA(insert ( 2141 n 0 cashsmaller - cashsmaller - - - - - f f r r 902 790 0 0 0 _null_ _null_ )); +DATA(insert ( 2142 n 0 timestamp_smaller - timestamp_smaller - - - - - f f r r 2062 1114 0 0 0 _null_ _null_ )); +DATA(insert ( 2143 n 0 timestamptz_smaller - timestamptz_smaller - - - - - f f r r 1322 1184 0 0 0 _null_ _null_ )); +DATA(insert ( 2144 n 0 interval_smaller - interval_smaller - - - - - f f r r 1332 1186 0 0 0 _null_ _null_ )); +DATA(insert ( 2145 n 0 text_smaller - text_smaller - - - - - f f r r 664 25 0 0 0 _null_ _null_ )); +DATA(insert ( 2146 n 0 numeric_smaller - numeric_smaller - - - - - f f r r 1754 1700 0 0 0 _null_ _null_ )); +DATA(insert ( 2051 n 0 array_smaller - array_smaller - - - - - f f r r 1072 2277 0 0 0 _null_ _null_ )); +DATA(insert ( 2245 n 0 bpchar_smaller - bpchar_smaller - - - - - f f r r 1058 1042 0 0 0 _null_ _null_ )); +DATA(insert ( 2798 n 0 tidsmaller - tidsmaller - - - - - f f r r 2799 27 0 0 0 _null_ _null_ )); +DATA(insert ( 3527 n 0 enum_smaller - enum_smaller - - - - - f f r r 3518 3500 0 0 0 _null_ _null_ )); +DATA(insert ( 3565 n 0 network_smaller - network_smaller - - - - - f f r r 1203 869 0 0 0 _null_ _null_ )); /* count */ -DATA(insert ( 2147 n 0 int8inc_any - int8pl - - int8inc_any int8dec_any - f f 0 20 0 20 0 "0" "0" )); -DATA(insert ( 2803 n 0 int8inc - int8pl - - int8inc int8dec - f f 0 20 0 20 0 "0" "0" )); +DATA(insert ( 2147 n 0 int8inc_any - int8pl - - int8inc_any int8dec_any - f f r r 0 20 0 20 0 "0" "0" )); +DATA(insert ( 2803 n 0 int8inc - int8pl - - int8inc int8dec - f f r r 0 20 0 20 0 "0" "0" )); /* var_pop */ -DATA(insert ( 2718 n 0 int8_accum numeric_var_pop numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_var_pop f f 0 2281 128 2281 128 _null_ _null_ )); -DATA(insert ( 2719 n 0 int4_accum numeric_poly_var_pop numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_var_pop f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2720 n 0 int2_accum numeric_poly_var_pop numeric_poly_combine numeric_poly_serialize 
numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_var_pop f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2721 n 0 float4_accum float8_var_pop float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2722 n 0 float8_accum float8_var_pop float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2723 n 0 numeric_accum numeric_var_pop numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_var_pop f f 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2718 n 0 int8_accum numeric_var_pop numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_var_pop f f r r 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2719 n 0 int4_accum numeric_poly_var_pop numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_var_pop f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2720 n 0 int2_accum numeric_poly_var_pop numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_var_pop f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2721 n 0 float4_accum float8_var_pop float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2722 n 0 float8_accum float8_var_pop float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2723 n 0 numeric_accum numeric_var_pop numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_var_pop f f r r 0 2281 128 2281 128 _null_ _null_ )); /* var_samp */ -DATA(insert ( 2641 n 0 int8_accum numeric_var_samp numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_var_samp f f 0 2281 128 2281 128 _null_ _null_ )); -DATA(insert ( 2642 n 0 int4_accum numeric_poly_var_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_var_samp f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2643 n 0 int2_accum numeric_poly_var_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_var_samp f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2644 n 0 float4_accum float8_var_samp float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2645 n 0 float8_accum float8_var_samp float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2646 n 0 numeric_accum numeric_var_samp numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_var_samp f f 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2641 n 0 int8_accum numeric_var_samp numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_var_samp f f r r 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2642 n 0 int4_accum numeric_poly_var_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_var_samp f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2643 n 0 int2_accum numeric_poly_var_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_var_samp f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2644 n 0 float4_accum float8_var_samp float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2645 n 0 float8_accum float8_var_samp float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert 
( 2646 n 0 numeric_accum numeric_var_samp numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_var_samp f f r r 0 2281 128 2281 128 _null_ _null_ )); /* variance: historical Postgres syntax for var_samp */ -DATA(insert ( 2148 n 0 int8_accum numeric_var_samp numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_var_samp f f 0 2281 128 2281 128 _null_ _null_ )); -DATA(insert ( 2149 n 0 int4_accum numeric_poly_var_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_var_samp f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2150 n 0 int2_accum numeric_poly_var_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_var_samp f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2151 n 0 float4_accum float8_var_samp float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2152 n 0 float8_accum float8_var_samp float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2153 n 0 numeric_accum numeric_var_samp numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_var_samp f f 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2148 n 0 int8_accum numeric_var_samp numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_var_samp f f r r 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2149 n 0 int4_accum numeric_poly_var_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_var_samp f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2150 n 0 int2_accum numeric_poly_var_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_var_samp f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2151 n 0 float4_accum float8_var_samp float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2152 n 0 float8_accum float8_var_samp float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2153 n 0 numeric_accum numeric_var_samp numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_var_samp f f r r 0 2281 128 2281 128 _null_ _null_ )); /* stddev_pop */ -DATA(insert ( 2724 n 0 int8_accum numeric_stddev_pop numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_stddev_pop f f 0 2281 128 2281 128 _null_ _null_ )); -DATA(insert ( 2725 n 0 int4_accum numeric_poly_stddev_pop numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_stddev_pop f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2726 n 0 int2_accum numeric_poly_stddev_pop numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_stddev_pop f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2727 n 0 float4_accum float8_stddev_pop float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2728 n 0 float8_accum float8_stddev_pop float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2729 n 0 numeric_accum numeric_stddev_pop numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_stddev_pop f f 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2724 n 0 int8_accum numeric_stddev_pop numeric_combine numeric_serialize 
numeric_deserialize int8_accum int8_accum_inv numeric_stddev_pop f f r r 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2725 n 0 int4_accum numeric_poly_stddev_pop numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_stddev_pop f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2726 n 0 int2_accum numeric_poly_stddev_pop numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_stddev_pop f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2727 n 0 float4_accum float8_stddev_pop float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2728 n 0 float8_accum float8_stddev_pop float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2729 n 0 numeric_accum numeric_stddev_pop numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_stddev_pop f f r r 0 2281 128 2281 128 _null_ _null_ )); /* stddev_samp */ -DATA(insert ( 2712 n 0 int8_accum numeric_stddev_samp numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_stddev_samp f f 0 2281 128 2281 128 _null_ _null_ )); -DATA(insert ( 2713 n 0 int4_accum numeric_poly_stddev_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_stddev_samp f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2714 n 0 int2_accum numeric_poly_stddev_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_stddev_samp f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2715 n 0 float4_accum float8_stddev_samp float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2716 n 0 float8_accum float8_stddev_samp float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2717 n 0 numeric_accum numeric_stddev_samp numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_stddev_samp f f 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2712 n 0 int8_accum numeric_stddev_samp numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_stddev_samp f f r r 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2713 n 0 int4_accum numeric_poly_stddev_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_stddev_samp f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2714 n 0 int2_accum numeric_poly_stddev_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_stddev_samp f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2715 n 0 float4_accum float8_stddev_samp float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2716 n 0 float8_accum float8_stddev_samp float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2717 n 0 numeric_accum numeric_stddev_samp numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_stddev_samp f f r r 0 2281 128 2281 128 _null_ _null_ )); /* stddev: historical Postgres syntax for stddev_samp */ -DATA(insert ( 2154 n 0 int8_accum numeric_stddev_samp numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_stddev_samp f f 0 2281 128 2281 128 _null_ _null_ )); -DATA(insert ( 2155 n 0 int4_accum numeric_poly_stddev_samp numeric_poly_combine 
numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_stddev_samp f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2156 n 0 int2_accum numeric_poly_stddev_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_stddev_samp f f 0 2281 48 2281 48 _null_ _null_ )); -DATA(insert ( 2157 n 0 float4_accum float8_stddev_samp float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2158 n 0 float8_accum float8_stddev_samp float8_combine - - - - - f f 0 1022 0 0 0 "{0,0,0}" _null_ )); -DATA(insert ( 2159 n 0 numeric_accum numeric_stddev_samp numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_stddev_samp f f 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2154 n 0 int8_accum numeric_stddev_samp numeric_combine numeric_serialize numeric_deserialize int8_accum int8_accum_inv numeric_stddev_samp f f r r 0 2281 128 2281 128 _null_ _null_ )); +DATA(insert ( 2155 n 0 int4_accum numeric_poly_stddev_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int4_accum int4_accum_inv numeric_poly_stddev_samp f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2156 n 0 int2_accum numeric_poly_stddev_samp numeric_poly_combine numeric_poly_serialize numeric_poly_deserialize int2_accum int2_accum_inv numeric_poly_stddev_samp f f r r 0 2281 48 2281 48 _null_ _null_ )); +DATA(insert ( 2157 n 0 float4_accum float8_stddev_samp float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2158 n 0 float8_accum float8_stddev_samp float8_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0}" _null_ )); +DATA(insert ( 2159 n 0 numeric_accum numeric_stddev_samp numeric_combine numeric_serialize numeric_deserialize numeric_accum numeric_accum_inv numeric_stddev_samp f f r r 0 2281 128 2281 128 _null_ _null_ )); /* SQL2003 binary regression aggregates */ -DATA(insert ( 2818 n 0 int8inc_float8_float8 - int8pl - - - - - f f 0 20 0 0 0 "0" _null_ )); -DATA(insert ( 2819 n 0 float8_regr_accum float8_regr_sxx float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2820 n 0 float8_regr_accum float8_regr_syy float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2821 n 0 float8_regr_accum float8_regr_sxy float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2822 n 0 float8_regr_accum float8_regr_avgx float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2823 n 0 float8_regr_accum float8_regr_avgy float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2824 n 0 float8_regr_accum float8_regr_r2 float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2825 n 0 float8_regr_accum float8_regr_slope float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2826 n 0 float8_regr_accum float8_regr_intercept float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2827 n 0 float8_regr_accum float8_covar_pop float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2828 n 0 float8_regr_accum float8_covar_samp float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); -DATA(insert ( 2829 n 0 float8_regr_accum float8_corr float8_regr_combine - - - - - f f 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2818 n 0 int8inc_float8_float8 - 
int8pl - - - - - f f r r 0 20 0 0 0 "0" _null_ )); +DATA(insert ( 2819 n 0 float8_regr_accum float8_regr_sxx float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2820 n 0 float8_regr_accum float8_regr_syy float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2821 n 0 float8_regr_accum float8_regr_sxy float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2822 n 0 float8_regr_accum float8_regr_avgx float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2823 n 0 float8_regr_accum float8_regr_avgy float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2824 n 0 float8_regr_accum float8_regr_r2 float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2825 n 0 float8_regr_accum float8_regr_slope float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2826 n 0 float8_regr_accum float8_regr_intercept float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2827 n 0 float8_regr_accum float8_covar_pop float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2828 n 0 float8_regr_accum float8_covar_samp float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); +DATA(insert ( 2829 n 0 float8_regr_accum float8_corr float8_regr_combine - - - - - f f r r 0 1022 0 0 0 "{0,0,0,0,0,0}" _null_ )); /* boolean-and and boolean-or */ -DATA(insert ( 2517 n 0 booland_statefunc - booland_statefunc - - bool_accum bool_accum_inv bool_alltrue f f 58 16 0 2281 16 _null_ _null_ )); -DATA(insert ( 2518 n 0 boolor_statefunc - boolor_statefunc - - bool_accum bool_accum_inv bool_anytrue f f 59 16 0 2281 16 _null_ _null_ )); -DATA(insert ( 2519 n 0 booland_statefunc - booland_statefunc - - bool_accum bool_accum_inv bool_alltrue f f 58 16 0 2281 16 _null_ _null_ )); +DATA(insert ( 2517 n 0 booland_statefunc - booland_statefunc - - bool_accum bool_accum_inv bool_alltrue f f r r 58 16 0 2281 16 _null_ _null_ )); +DATA(insert ( 2518 n 0 boolor_statefunc - boolor_statefunc - - bool_accum bool_accum_inv bool_anytrue f f r r 59 16 0 2281 16 _null_ _null_ )); +DATA(insert ( 2519 n 0 booland_statefunc - booland_statefunc - - bool_accum bool_accum_inv bool_alltrue f f r r 58 16 0 2281 16 _null_ _null_ )); /* bitwise integer */ -DATA(insert ( 2236 n 0 int2and - int2and - - - - - f f 0 21 0 0 0 _null_ _null_ )); -DATA(insert ( 2237 n 0 int2or - int2or - - - - - f f 0 21 0 0 0 _null_ _null_ )); -DATA(insert ( 2238 n 0 int4and - int4and - - - - - f f 0 23 0 0 0 _null_ _null_ )); -DATA(insert ( 2239 n 0 int4or - int4or - - - - - f f 0 23 0 0 0 _null_ _null_ )); -DATA(insert ( 2240 n 0 int8and - int8and - - - - - f f 0 20 0 0 0 _null_ _null_ )); -DATA(insert ( 2241 n 0 int8or - int8or - - - - - f f 0 20 0 0 0 _null_ _null_ )); -DATA(insert ( 2242 n 0 bitand - bitand - - - - - f f 0 1560 0 0 0 _null_ _null_ )); -DATA(insert ( 2243 n 0 bitor - bitor - - - - - f f 0 1560 0 0 0 _null_ _null_ )); +DATA(insert ( 2236 n 0 int2and - int2and - - - - - f f r r 0 21 0 0 0 _null_ _null_ )); +DATA(insert ( 2237 n 0 int2or - int2or - - - - - f f r r 0 21 0 0 0 _null_ _null_ )); +DATA(insert ( 2238 n 0 int4and - int4and - - - - - f f r r 0 23 0 0 0 _null_ _null_ )); +DATA(insert ( 2239 n 0 int4or - int4or - - - - - f f r r 0 23 0 0 0 _null_ _null_ )); +DATA(insert ( 2240 n 0 int8and - int8and - - - - - f 
f r r 0 20 0 0 0 _null_ _null_ )); +DATA(insert ( 2241 n 0 int8or - int8or - - - - - f f r r 0 20 0 0 0 _null_ _null_ )); +DATA(insert ( 2242 n 0 bitand - bitand - - - - - f f r r 0 1560 0 0 0 _null_ _null_ )); +DATA(insert ( 2243 n 0 bitor - bitor - - - - - f f r r 0 1560 0 0 0 _null_ _null_ )); /* xml */ -DATA(insert ( 2901 n 0 xmlconcat2 - - - - - - - f f 0 142 0 0 0 _null_ _null_ )); +DATA(insert ( 2901 n 0 xmlconcat2 - - - - - - - f f r r 0 142 0 0 0 _null_ _null_ )); /* array */ -DATA(insert ( 2335 n 0 array_agg_transfn array_agg_finalfn - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 4053 n 0 array_agg_array_transfn array_agg_array_finalfn - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 2335 n 0 array_agg_transfn array_agg_finalfn - - - - - - t f r r 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 4053 n 0 array_agg_array_transfn array_agg_array_finalfn - - - - - - t f r r 0 2281 0 0 0 _null_ _null_ )); /* text */ -DATA(insert ( 3538 n 0 string_agg_transfn string_agg_finalfn - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3538 n 0 string_agg_transfn string_agg_finalfn - - - - - - f f r r 0 2281 0 0 0 _null_ _null_ )); /* bytea */ -DATA(insert ( 3545 n 0 bytea_string_agg_transfn bytea_string_agg_finalfn - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3545 n 0 bytea_string_agg_transfn bytea_string_agg_finalfn - - - - - - f f r r 0 2281 0 0 0 _null_ _null_ )); /* json */ -DATA(insert ( 3175 n 0 json_agg_transfn json_agg_finalfn - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3197 n 0 json_object_agg_transfn json_object_agg_finalfn - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3175 n 0 json_agg_transfn json_agg_finalfn - - - - - - f f r r 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3197 n 0 json_object_agg_transfn json_object_agg_finalfn - - - - - - f f r r 0 2281 0 0 0 _null_ _null_ )); /* jsonb */ -DATA(insert ( 3267 n 0 jsonb_agg_transfn jsonb_agg_finalfn - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3270 n 0 jsonb_object_agg_transfn jsonb_object_agg_finalfn - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3267 n 0 jsonb_agg_transfn jsonb_agg_finalfn - - - - - - f f r r 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3270 n 0 jsonb_object_agg_transfn jsonb_object_agg_finalfn - - - - - - f f r r 0 2281 0 0 0 _null_ _null_ )); /* ordered-set and hypothetical-set aggregates */ -DATA(insert ( 3972 o 1 ordered_set_transition percentile_disc_final - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3974 o 1 ordered_set_transition percentile_cont_float8_final - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3976 o 1 ordered_set_transition percentile_cont_interval_final - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3978 o 1 ordered_set_transition percentile_disc_multi_final - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3980 o 1 ordered_set_transition percentile_cont_float8_multi_final - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3982 o 1 ordered_set_transition percentile_cont_interval_multi_final - - - - - - f f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3984 o 0 ordered_set_transition mode_final - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3986 h 1 ordered_set_transition_multi rank_final - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3988 h 1 ordered_set_transition_multi percent_rank_final - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3990 h 1 
ordered_set_transition_multi cume_dist_final - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3992 h 1 ordered_set_transition_multi dense_rank_final - - - - - - t f 0 2281 0 0 0 _null_ _null_ )); - - -/* - * prototypes for functions in pg_aggregate.c - */ -extern ObjectAddress AggregateCreate(const char *aggName, - Oid aggNamespace, - char aggKind, - int numArgs, - int numDirectArgs, - oidvector *parameterTypes, - Datum allParameterTypes, - Datum parameterModes, - Datum parameterNames, - List *parameterDefaults, - Oid variadicArgType, - List *aggtransfnName, - List *aggfinalfnName, - List *aggcombinefnName, - List *aggserialfnName, - List *aggdeserialfnName, - List *aggmtransfnName, - List *aggminvtransfnName, - List *aggmfinalfnName, - bool finalfnExtraArgs, - bool mfinalfnExtraArgs, - List *aggsortopName, - Oid aggTransType, - int32 aggTransSpace, - Oid aggmTransType, - int32 aggmTransSpace, - const char *agginitval, - const char *aggminitval, - char proparallel); +DATA(insert ( 3972 o 1 ordered_set_transition percentile_disc_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3974 o 1 ordered_set_transition percentile_cont_float8_final - - - - - - f f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3976 o 1 ordered_set_transition percentile_cont_interval_final - - - - - - f f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3978 o 1 ordered_set_transition percentile_disc_multi_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3980 o 1 ordered_set_transition percentile_cont_float8_multi_final - - - - - - f f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3982 o 1 ordered_set_transition percentile_cont_interval_multi_final - - - - - - f f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3984 o 0 ordered_set_transition mode_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3986 h 1 ordered_set_transition_multi rank_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3988 h 1 ordered_set_transition_multi percent_rank_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3990 h 1 ordered_set_transition_multi cume_dist_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3992 h 1 ordered_set_transition_multi dense_rank_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); #endif /* PG_AGGREGATE_H */ diff --git a/src/include/catalog/pg_aggregate_fn.h b/src/include/catalog/pg_aggregate_fn.h new file mode 100644 index 0000000000..a323aab2d4 --- /dev/null +++ b/src/include/catalog/pg_aggregate_fn.h @@ -0,0 +1,52 @@ +/*------------------------------------------------------------------------- + * + * pg_aggregate_fn.h + * prototypes for functions in catalog/pg_aggregate.c + * + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/catalog/pg_aggregate_fn.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_AGGREGATE_FN_H +#define PG_AGGREGATE_FN_H + +#include "catalog/objectaddress.h" +#include "nodes/pg_list.h" + +extern ObjectAddress AggregateCreate(const char *aggName, + Oid aggNamespace, + char aggKind, + int numArgs, + int numDirectArgs, + oidvector *parameterTypes, + Datum allParameterTypes, + Datum parameterModes, + Datum parameterNames, + List *parameterDefaults, + Oid variadicArgType, + List *aggtransfnName, + List *aggfinalfnName, + List *aggcombinefnName, + List *aggserialfnName, + List 
*aggdeserialfnName, + List *aggmtransfnName, + List *aggminvtransfnName, + List *aggmfinalfnName, + bool finalfnExtraArgs, + bool mfinalfnExtraArgs, + char finalfnModify, + char mfinalfnModify, + List *aggsortopName, + Oid aggTransType, + int32 aggTransSpace, + Oid aggmTransType, + int32 aggmTransSpace, + const char *agginitval, + const char *aggminitval, + char proparallel); + +#endif /* PG_AGGREGATE_FN_H */ diff --git a/src/test/regress/expected/create_aggregate.out b/src/test/regress/expected/create_aggregate.out index 341ba52b8d..ef65cd54ca 100644 --- a/src/test/regress/expected/create_aggregate.out +++ b/src/test/regress/expected/create_aggregate.out @@ -71,7 +71,8 @@ create aggregate my_percentile_disc(float8 ORDER BY anyelement) ( stype = internal, sfunc = ordered_set_transition, finalfunc = percentile_disc_final, - finalfunc_extra = true + finalfunc_extra = true, + finalfunc_modify = read_write ); create aggregate my_rank(VARIADIC "any" ORDER BY VARIADIC "any") ( stype = internal, @@ -146,15 +147,17 @@ CREATE AGGREGATE myavg (numeric) finalfunc = numeric_avg, serialfunc = numeric_avg_serialize, deserialfunc = numeric_avg_deserialize, - combinefunc = numeric_avg_combine + combinefunc = numeric_avg_combine, + finalfunc_modify = sharable -- just to test a non-default setting ); -- Ensure all these functions made it into the catalog -SELECT aggfnoid,aggtransfn,aggcombinefn,aggtranstype,aggserialfn,aggdeserialfn +SELECT aggfnoid, aggtransfn, aggcombinefn, aggtranstype::regtype, + aggserialfn, aggdeserialfn, aggfinalmodify FROM pg_aggregate WHERE aggfnoid = 'myavg'::REGPROC; - aggfnoid | aggtransfn | aggcombinefn | aggtranstype | aggserialfn | aggdeserialfn -----------+-------------------+---------------------+--------------+-----------------------+------------------------- - myavg | numeric_avg_accum | numeric_avg_combine | 2281 | numeric_avg_serialize | numeric_avg_deserialize + aggfnoid | aggtransfn | aggcombinefn | aggtranstype | aggserialfn | aggdeserialfn | aggfinalmodify +----------+-------------------+---------------------+--------------+-----------------------+-------------------------+---------------- + myavg | numeric_avg_accum | numeric_avg_combine | internal | numeric_avg_serialize | numeric_avg_deserialize | s (1 row) DROP AGGREGATE myavg (numeric); diff --git a/src/test/regress/expected/opr_sanity.out b/src/test/regress/expected/opr_sanity.out index fcf8bd7565..684f7f20a8 100644 --- a/src/test/regress/expected/opr_sanity.out +++ b/src/test/regress/expected/opr_sanity.out @@ -1275,6 +1275,8 @@ WHERE aggfnoid = 0 OR aggtransfn = 0 OR aggkind NOT IN ('n', 'o', 'h') OR aggnumdirectargs < 0 OR (aggkind = 'n' AND aggnumdirectargs > 0) OR + aggfinalmodify NOT IN ('r', 's', 'w') OR + aggmfinalmodify NOT IN ('r', 's', 'w') OR aggtranstype = 0 OR aggtransspace < 0 OR aggmtransspace < 0; ctid | aggfnoid ------+---------- diff --git a/src/test/regress/sql/create_aggregate.sql b/src/test/regress/sql/create_aggregate.sql index ae3a6c0ebe..46e773bfe3 100644 --- a/src/test/regress/sql/create_aggregate.sql +++ b/src/test/regress/sql/create_aggregate.sql @@ -86,7 +86,8 @@ create aggregate my_percentile_disc(float8 ORDER BY anyelement) ( stype = internal, sfunc = ordered_set_transition, finalfunc = percentile_disc_final, - finalfunc_extra = true + finalfunc_extra = true, + finalfunc_modify = read_write ); create aggregate my_rank(VARIADIC "any" ORDER BY VARIADIC "any") ( @@ -161,11 +162,13 @@ CREATE AGGREGATE myavg (numeric) finalfunc = numeric_avg, serialfunc = numeric_avg_serialize, 
deserialfunc = numeric_avg_deserialize, - combinefunc = numeric_avg_combine + combinefunc = numeric_avg_combine, + finalfunc_modify = sharable -- just to test a non-default setting ); -- Ensure all these functions made it into the catalog -SELECT aggfnoid,aggtransfn,aggcombinefn,aggtranstype,aggserialfn,aggdeserialfn +SELECT aggfnoid, aggtransfn, aggcombinefn, aggtranstype::regtype, + aggserialfn, aggdeserialfn, aggfinalmodify FROM pg_aggregate WHERE aggfnoid = 'myavg'::REGPROC; diff --git a/src/test/regress/sql/opr_sanity.sql b/src/test/regress/sql/opr_sanity.sql index 2945966c0e..e8fdf8454d 100644 --- a/src/test/regress/sql/opr_sanity.sql +++ b/src/test/regress/sql/opr_sanity.sql @@ -795,6 +795,8 @@ WHERE aggfnoid = 0 OR aggtransfn = 0 OR aggkind NOT IN ('n', 'o', 'h') OR aggnumdirectargs < 0 OR (aggkind = 'n' AND aggnumdirectargs > 0) OR + aggfinalmodify NOT IN ('r', 's', 'w') OR + aggmfinalmodify NOT IN ('r', 's', 'w') OR aggtranstype = 0 OR aggtransspace < 0 OR aggmtransspace < 0; -- Make sure the matching pg_proc entry is sensible, too. From 82aff8d3369754282114cb0fff92a342b2864e75 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 14 Oct 2017 15:52:00 -0400 Subject: [PATCH 0395/1087] gcc's support for __attribute__((noinline)) hasn't been around forever. Buildfarm member gaur says it wasn't there in 2.95.3. Guess that 3.0 and later have it. --- src/include/c.h | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/src/include/c.h b/src/include/c.h index b39bbd7c71..62df4d5b0c 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -644,12 +644,13 @@ typedef NameData *Name; /* - * Forcing a function not to be inlined can be useful if it's the slow-path of - * a performance critical function, or should be visible in profiles to allow - * for proper cost attribution. + * Forcing a function not to be inlined can be useful if it's the slow path of + * a performance-critical function, or should be visible in profiles to allow + * for proper cost attribution. Note that unlike the pg_attribute_XXX macros + * above, this should be placed before the function's return type and name. */ -/* GCC, Sunpro and XLC support noinline via __attribute */ -#if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__) +/* GCC, Sunpro and XLC support noinline via __attribute__ */ +#if (defined(__GNUC__) && __GNUC__ > 2) || defined(__SUNPRO_C) || defined(__IBMC__) #define pg_noinline __attribute__((noinline)) /* msvc via declspec */ #elif defined(_MSC_VER) From d8794fd7c337a2285f46b23d348c9826afff69eb Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 15 Oct 2017 09:14:08 -0400 Subject: [PATCH 0396/1087] doc: Postgres -> PostgreSQL --- doc/src/sgml/ref/create_aggregate.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/create_aggregate.sgml b/doc/src/sgml/ref/create_aggregate.sgml index 4d9c8b0b70..ee79c90df2 100644 --- a/doc/src/sgml/ref/create_aggregate.sgml +++ b/doc/src/sgml/ref/create_aggregate.sgml @@ -666,7 +666,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - Ordinarily, Postgres functions are expected to be true functions that + Ordinarily, PostgreSQL functions are expected to be true functions that do not modify their input values. However, an aggregate transition function, when used in the context of an aggregate, is allowed to cheat and modify its transition-state argument in place. 
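To make the finalfunc_modify machinery added above concrete, here is a minimal sketch reusing the aggregate definition from the regression-test changes in that patch; the catalog query relies only on the aggfinalmodify column and the 'r'/'s'/'w' values ('w' being AGGMODIFY_READ_WRITE) introduced there, and anything beyond that, such as the exact psql output, is illustrative:

    -- declare that the final function may scribble on its transition state
    create aggregate my_percentile_disc(float8 ORDER BY anyelement) (
        stype = internal,
        sfunc = ordered_set_transition,
        finalfunc = percentile_disc_final,
        finalfunc_extra = true,
        finalfunc_modify = read_write
    );

    -- the setting is recorded as 'w' in the new catalog column
    SELECT aggfnoid, aggfinalmodify
    FROM pg_aggregate
    WHERE aggfnoid = 'my_percentile_disc'::regproc;

The option spellings shown in the tests above are read_write and sharable; a read-write final function keeps the aggregate ineligible for transition-state sharing, which is what the later patches in this series build on.
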
From 5fc438fb256ce83248feaf60e22e0919b76e3c7b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 15 Oct 2017 19:19:18 -0400 Subject: [PATCH 0397/1087] Restore nodeAgg.c's ability to check for improperly-nested aggregates. While poking around in the aggregate logic, I noticed that commit 8ed3f11bb broke the logic in nodeAgg.c that purports to detect nested aggregates, by moving initialization of regular aggregate argument expressions out of the code segment that checks for that. You could argue that this check is unnecessary, but it's not much code so I'm inclined to keep it as a backstop against parser and planner bugs. However, there's certainly zero value in checking only some of the subexpressions. We can make the check complete again, and as a bonus make it a good deal more bulletproof against future mistakes of the same ilk, by moving it out to the outermost level of ExecInitAgg. This means we need to check only once per Agg node not once per aggregate, which also seems like a good thing --- if the check does find something wrong, it's not urgent that we report it before the plan node initialization finishes. Since this requires remembering the original length of the aggs list, I deleted a long-obsolete stanza that changed numaggs from 0 to 1. That's so old it predates our decision that palloc(0) is a valid operation, in (digs...) 2004, see commit 24a1e20f1. In passing improve a few comments. Back-patch to v10, just in case. --- src/backend/executor/nodeAgg.c | 74 +++++++++++++++++----------------- 1 file changed, 38 insertions(+), 36 deletions(-) diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 40d8ec9db4..f62b3b22ba 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -268,7 +268,12 @@ typedef struct AggStatePerTransData */ int numInputs; - /* offset of input columns in AggState->evalslot */ + /* + * At each input row, we evaluate all argument expressions needed for all + * the aggregates in this Agg node in a single ExecProject call. inputoff + * is the starting index of this aggregate's argument expressions in the + * resulting tuple (in AggState->evalslot). + */ int inputoff; /* @@ -2809,10 +2814,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) /* * initialize child expressions * - * We rely on the parser to have checked that no aggs contain other agg - * calls in their arguments. This would make no sense under SQL semantics - * (and it's forbidden by the spec). Because it is true, we don't need to - * worry about evaluating the aggs in any particular order. + * We expect the parser to have checked that no aggs contain other agg + * calls in their arguments (and just to be sure, we verify it again while + * initializing the plan node). This would make no sense under SQL + * semantics, and it's forbidden by the spec. Because it is true, we + * don't need to worry about evaluating the aggs in any particular order. * * Note: execExpr.c finds Aggrefs for us, and adds their AggrefExprState * nodes to aggstate->aggs. Aggrefs in the qual are found here; Aggrefs @@ -2851,17 +2857,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) */ numaggs = aggstate->numaggs; Assert(numaggs == list_length(aggstate->aggs)); - if (numaggs <= 0) - { - /* - * This is not an error condition: we might be using the Agg node just - * to do hash-based grouping. Even in the regular case, - * constant-expression simplification could optimize away all of the - * Aggrefs in the targetlist and qual. 
So keep going, but force local - * copy of numaggs positive so that palloc()s below don't choke. - */ - numaggs = 1; - } /* * For each phase, prepare grouping set data and fmgr lookup data for @@ -3364,19 +3359,19 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) } /* - * Update numaggs to match the number of unique aggregates found. Also set - * numstates to the number of unique aggregate states found. + * Update aggstate->numaggs to be the number of unique aggregates found. + * Also set numstates to the number of unique transition states found. */ aggstate->numaggs = aggno + 1; aggstate->numtrans = transno + 1; /* * Build a single projection computing the aggregate arguments for all - * aggregates at once, that's considerably faster than doing it separately - * for each. + * aggregates at once; if there's more than one, that's considerably + * faster than doing it separately for each. * - * First create a targetlist combining the targetlist of all the - * transitions. + * First create a targetlist combining the targetlists of all the + * per-trans states. */ combined_inputeval = NIL; column_offset = 0; @@ -3385,10 +3380,14 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) AggStatePerTrans pertrans = &pertransstates[transno]; ListCell *arg; + /* + * Mark this per-trans state with its starting column in the combined + * slot. + */ pertrans->inputoff = column_offset; /* - * Adjust resno in a copied target entries, to point into the combined + * Adjust resnos in the copied target entries to match the combined * slot. */ foreach(arg, pertrans->aggref->args) @@ -3405,7 +3404,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) column_offset += list_length(pertrans->aggref->args); } - /* and then create a projection for that targetlist */ + /* Now create a projection for the combined targetlist */ aggstate->evaldesc = ExecTypeFromTL(combined_inputeval, false); aggstate->evalslot = ExecInitExtraTupleSlot(estate); aggstate->evalproj = ExecBuildProjectionInfo(combined_inputeval, @@ -3415,6 +3414,21 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) NULL); ExecSetSlotDescriptor(aggstate->evalslot, aggstate->evaldesc); + /* + * Last, check whether any more aggregates got added onto the node while + * we processed the expressions for the aggregate arguments (including not + * only the regular arguments handled immediately above, but any FILTER + * expressions and direct arguments we might've handled earlier). If so, + * we have nested aggregate functions, which is semantically nonsensical, + * so complain. (This should have been caught by the parser, so we don't + * need to work hard on a helpful error message; but we defend against it + * here anyway, just to be sure.) + */ + if (numaggs != list_length(aggstate->aggs)) + ereport(ERROR, + (errcode(ERRCODE_GROUPING_ERROR), + errmsg("aggregate function calls cannot be nested"))); + return aggstate; } @@ -3444,7 +3458,6 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, List *sortlist; int numSortCols; int numDistinctCols; - int naggs; int i; /* Begin filling in the pertrans data */ @@ -3586,22 +3599,11 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, } /* Initialize the input and FILTER expressions */ - naggs = aggstate->numaggs; pertrans->aggfilter = ExecInitExpr(aggref->aggfilter, (PlanState *) aggstate); pertrans->aggdirectargs = ExecInitExprList(aggref->aggdirectargs, (PlanState *) aggstate); - /* - * Complain if the aggregate's arguments contain any aggregates; nested - * agg functions are semantically nonsensical. 
(This should have been - * caught earlier, but we defend against it here anyway.) - */ - if (naggs != aggstate->numaggs) - ereport(ERROR, - (errcode(ERRCODE_GROUPING_ERROR), - errmsg("aggregate function calls cannot be nested"))); - /* * If we're doing either DISTINCT or ORDER BY for a plain agg, then we * have a list of SortGroupClause nodes; fish out the data in them and From 60a1d96ed7ba0930024af696e1fb209a030b6c42 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 16 Oct 2017 12:22:18 +0200 Subject: [PATCH 0398/1087] Rework DefineIndex relkind check MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Simplify coding using a switch rather than nested if tests. Author: Álvaro Reviewed-by: Robert Haas, Amit Langote, Michaël Paquier Discussion: https://postgr.es/m/20171013163820.pai7djcaxrntaxtn@alvherre.pgsql --- src/backend/commands/indexcmds.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index b61aaac284..3f615b6260 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -375,25 +375,24 @@ DefineIndex(Oid relationId, relationId = RelationGetRelid(rel); namespaceId = RelationGetNamespace(rel); - if (rel->rd_rel->relkind != RELKIND_RELATION && - rel->rd_rel->relkind != RELKIND_MATVIEW) + /* Ensure that it makes sense to index this kind of relation */ + switch (rel->rd_rel->relkind) { - if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE) - - /* - * Custom error message for FOREIGN TABLE since the term is close - * to a regular table and can confuse the user. - */ + case RELKIND_RELATION: + case RELKIND_MATVIEW: + /* OK */ + break; + case RELKIND_FOREIGN_TABLE: ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("cannot create index on foreign table \"%s\"", RelationGetRelationName(rel)))); - else if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + case RELKIND_PARTITIONED_TABLE: ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("cannot create index on partitioned table \"%s\"", RelationGetRelationName(rel)))); - else + default: ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is not a table or materialized view", From c3dfe0fec01469b8a7de327303cad50ba8ed338a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 16 Oct 2017 15:24:36 -0400 Subject: [PATCH 0399/1087] Repair breakage of aggregate FILTER option. An aggregate's input expression(s) are not supposed to be evaluated at all for a row where its FILTER test fails ... but commit 8ed3f11bb overlooked that requirement. Reshuffle so that aggregates having a filter clause evaluate their arguments separately from those without. This still gets the benefit of doing only one ExecProject in the common case of multiple Aggrefs, none of which have filters. While at it, arrange for filter clauses to be included in the common ExecProject evaluation, thus perhaps buying a little bit even when there are filters. Back-patch to v10 where the bug was introduced. 
Discussion: https://postgr.es/m/30065.1508161354@sss.pgh.pa.us --- src/backend/executor/nodeAgg.c | 186 +++++++++++++++-------- src/include/nodes/execnodes.h | 6 +- src/test/regress/expected/aggregates.out | 6 + src/test/regress/sql/aggregates.sql | 2 + 4 files changed, 130 insertions(+), 70 deletions(-) diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index f62b3b22ba..8a6dfd64e8 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -268,14 +268,6 @@ typedef struct AggStatePerTransData */ int numInputs; - /* - * At each input row, we evaluate all argument expressions needed for all - * the aggregates in this Agg node in a single ExecProject call. inputoff - * is the starting index of this aggregate's argument expressions in the - * resulting tuple (in AggState->evalslot). - */ - int inputoff; - /* * Number of aggregated input columns to pass to the transfn. This * includes the ORDER BY columns for ordered-set aggs, but not for plain @@ -283,6 +275,16 @@ typedef struct AggStatePerTransData */ int numTransInputs; + /* + * At each input row, we perform a single ExecProject call to evaluate all + * argument expressions that will certainly be needed at this row; that + * includes this aggregate's filter expression if it has one, or its + * regular argument expressions (including any ORDER BY columns) if it + * doesn't. inputoff is the starting index of this aggregate's required + * expressions in the resulting tuple. + */ + int inputoff; + /* Oid of the state transition or combine function */ Oid transfn_oid; @@ -295,9 +297,8 @@ typedef struct AggStatePerTransData /* Oid of state value's datatype */ Oid aggtranstype; - /* ExprStates of the FILTER and argument expressions. */ - ExprState *aggfilter; /* state of FILTER expression, if any */ - List *aggdirectargs; /* states of direct-argument expressions */ + /* ExprStates for any direct-argument expressions */ + List *aggdirectargs; /* * fmgr lookup data for transition function or combine function. Note in @@ -353,20 +354,21 @@ typedef struct AggStatePerTransData transtypeByVal; /* - * Stuff for evaluation of aggregate inputs in cases where the aggregate - * requires sorted input. The arguments themselves will be evaluated via - * AggState->evalslot/evalproj for all aggregates at once, but we only - * want to sort the relevant columns for individual aggregates. + * Stuff for evaluation of aggregate inputs, when they must be evaluated + * separately because there's a FILTER expression. In such cases we will + * create a sortslot and the result will be stored there, whether or not + * we're actually sorting. */ - TupleDesc sortdesc; /* descriptor of input tuples */ + ProjectionInfo *evalproj; /* projection machinery */ /* * Slots for holding the evaluated input arguments. These are set up - * during ExecInitAgg() and then used for each input row requiring - * processing besides what's done in AggState->evalproj. + * during ExecInitAgg() and then used for each input row requiring either + * FILTER or ORDER BY/DISTINCT processing. 
*/ TupleTableSlot *sortslot; /* current input tuple */ TupleTableSlot *uniqslot; /* used for multi-column DISTINCT */ + TupleDesc sortdesc; /* descriptor of input tuples */ /* * These values are working state that is initialized at the start of an @@ -983,30 +985,36 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, AggStatePerGro int numGroupingSets = Max(aggstate->phase->numsets, 1); int numHashes = aggstate->num_hashes; int numTrans = aggstate->numtrans; - TupleTableSlot *slot = aggstate->evalslot; + TupleTableSlot *combinedslot; - /* compute input for all aggregates */ - if (aggstate->evalproj) - aggstate->evalslot = ExecProject(aggstate->evalproj); + /* compute required inputs for all aggregates */ + combinedslot = ExecProject(aggstate->combinedproj); for (transno = 0; transno < numTrans; transno++) { AggStatePerTrans pertrans = &aggstate->pertrans[transno]; - ExprState *filter = pertrans->aggfilter; int numTransInputs = pertrans->numTransInputs; - int i; int inputoff = pertrans->inputoff; + TupleTableSlot *slot; + int i; /* Skip anything FILTERed out */ - if (filter) + if (pertrans->aggref->aggfilter) { - Datum res; - bool isnull; - - res = ExecEvalExprSwitchContext(filter, aggstate->tmpcontext, - &isnull); - if (isnull || !DatumGetBool(res)) + /* Check the result of the filter expression */ + if (combinedslot->tts_isnull[inputoff] || + !DatumGetBool(combinedslot->tts_values[inputoff])) continue; + + /* Now it's safe to evaluate this agg's arguments */ + slot = ExecProject(pertrans->evalproj); + /* There's no offset needed in this slot, of course */ + inputoff = 0; + } + else + { + /* arguments are already evaluated into combinedslot @ inputoff */ + slot = combinedslot; } if (pertrans->numSortCols > 0) @@ -1040,11 +1048,21 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, AggStatePerGro tuplesort_putdatum(pertrans->sortstates[setno], slot->tts_values[inputoff], slot->tts_isnull[inputoff]); + else if (pertrans->aggref->aggfilter) + { + /* + * When filtering and ordering, we already have a slot + * containing just the argument columns. + */ + Assert(slot == pertrans->sortslot); + tuplesort_puttupleslot(pertrans->sortstates[setno], slot); + } else { /* - * Copy slot contents, starting from inputoff, into sort - * slot. + * Copy argument columns from combined slot, starting at + * inputoff, into sortslot, so that we can store just the + * columns we want. 
*/ ExecClearTuple(pertrans->sortslot); memcpy(pertrans->sortslot->tts_values, @@ -1053,9 +1071,9 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, AggStatePerGro memcpy(pertrans->sortslot->tts_isnull, &slot->tts_isnull[inputoff], pertrans->numInputs * sizeof(bool)); - pertrans->sortslot->tts_nvalid = pertrans->numInputs; ExecStoreVirtualTuple(pertrans->sortslot); - tuplesort_puttupleslot(pertrans->sortstates[setno], pertrans->sortslot); + tuplesort_puttupleslot(pertrans->sortstates[setno], + pertrans->sortslot); } } } @@ -1127,7 +1145,7 @@ combine_aggregates(AggState *aggstate, AggStatePerGroup pergroup) Assert(aggstate->phase->numsets <= 1); /* compute input for all aggregates */ - slot = ExecProject(aggstate->evalproj); + slot = ExecProject(aggstate->combinedproj); for (transno = 0; transno < numTrans; transno++) { @@ -2691,6 +2709,8 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) int phase; int phaseidx; List *combined_inputeval; + TupleDesc combineddesc; + TupleTableSlot *combinedslot; ListCell *l; Bitmapset *all_grouped_cols = NULL; int numGroupingSets = 1; @@ -3366,19 +3386,17 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aggstate->numtrans = transno + 1; /* - * Build a single projection computing the aggregate arguments for all + * Build a single projection computing the required arguments for all * aggregates at once; if there's more than one, that's considerably * faster than doing it separately for each. * - * First create a targetlist combining the targetlists of all the - * per-trans states. + * First create a targetlist representing the values to compute. */ combined_inputeval = NIL; column_offset = 0; for (transno = 0; transno < aggstate->numtrans; transno++) { AggStatePerTrans pertrans = &pertransstates[transno]; - ListCell *arg; /* * Mark this per-trans state with its starting column in the combined @@ -3387,38 +3405,70 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) pertrans->inputoff = column_offset; /* - * Adjust resnos in the copied target entries to match the combined - * slot. + * If the aggregate has a FILTER, we can only evaluate the filter + * expression, not the actual input expressions, during the combined + * eval step --- unless we're ignoring the filter because this node is + * running combinefns not transfns. */ - foreach(arg, pertrans->aggref->args) + if (pertrans->aggref->aggfilter && + !DO_AGGSPLIT_COMBINE(aggstate->aggsplit)) { - TargetEntry *source_tle = lfirst_node(TargetEntry, arg); TargetEntry *tle; - tle = flatCopyTargetEntry(source_tle); - tle->resno += column_offset; - + tle = makeTargetEntry(pertrans->aggref->aggfilter, + column_offset + 1, NULL, false); combined_inputeval = lappend(combined_inputeval, tle); + column_offset++; + + /* + * We'll need separate projection machinery for the real args. + * Arrange to evaluate them into the sortslot previously created. + */ + Assert(pertrans->sortslot); + pertrans->evalproj = ExecBuildProjectionInfo(pertrans->aggref->args, + aggstate->tmpcontext, + pertrans->sortslot, + &aggstate->ss.ps, + NULL); } + else + { + /* + * Add agg's input expressions to combined_inputeval, adjusting + * resnos in the copied target entries to match the combined slot. 
+ */ + ListCell *arg; + + foreach(arg, pertrans->aggref->args) + { + TargetEntry *source_tle = lfirst_node(TargetEntry, arg); + TargetEntry *tle; + + tle = flatCopyTargetEntry(source_tle); + tle->resno += column_offset; - column_offset += list_length(pertrans->aggref->args); + combined_inputeval = lappend(combined_inputeval, tle); + } + + column_offset += list_length(pertrans->aggref->args); + } } /* Now create a projection for the combined targetlist */ - aggstate->evaldesc = ExecTypeFromTL(combined_inputeval, false); - aggstate->evalslot = ExecInitExtraTupleSlot(estate); - aggstate->evalproj = ExecBuildProjectionInfo(combined_inputeval, - aggstate->tmpcontext, - aggstate->evalslot, - &aggstate->ss.ps, - NULL); - ExecSetSlotDescriptor(aggstate->evalslot, aggstate->evaldesc); + combineddesc = ExecTypeFromTL(combined_inputeval, false); + combinedslot = ExecInitExtraTupleSlot(estate); + ExecSetSlotDescriptor(combinedslot, combineddesc); + aggstate->combinedproj = ExecBuildProjectionInfo(combined_inputeval, + aggstate->tmpcontext, + combinedslot, + &aggstate->ss.ps, + NULL); /* * Last, check whether any more aggregates got added onto the node while * we processed the expressions for the aggregate arguments (including not - * only the regular arguments handled immediately above, but any FILTER - * expressions and direct arguments we might've handled earlier). If so, + * only the regular arguments and FILTER expressions handled immediately + * above, but any direct arguments we might've handled earlier). If so, * we have nested aggregate functions, which is semantically nonsensical, * so complain. (This should have been caught by the parser, so we don't * need to work hard on a helpful error message; but we defend against it @@ -3483,6 +3533,8 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, else pertrans->numTransInputs = numArguments; + /* inputoff and evalproj will be set up later, in ExecInitAgg */ + /* * When combining states, we have no use at all for the aggregate * function's transfn. Instead we use the combinefn. In this case, the @@ -3598,9 +3650,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, } - /* Initialize the input and FILTER expressions */ - pertrans->aggfilter = ExecInitExpr(aggref->aggfilter, - (PlanState *) aggstate); + /* Initialize any direct-argument expressions */ pertrans->aggdirectargs = ExecInitExprList(aggref->aggdirectargs, (PlanState *) aggstate); @@ -3634,16 +3684,20 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, pertrans->numSortCols = numSortCols; pertrans->numDistinctCols = numDistinctCols; - if (numSortCols > 0) + /* + * If we have either sorting or filtering to do, create a tupledesc and + * slot corresponding to the aggregated inputs (including sort + * expressions) of the agg. + */ + if (numSortCols > 0 || aggref->aggfilter) { - /* - * Get a tupledesc and slot corresponding to the aggregated inputs - * (including sort expressions) of the agg. 
- */ pertrans->sortdesc = ExecTypeFromTL(aggref->args, false); pertrans->sortslot = ExecInitExtraTupleSlot(estate); ExecSetSlotDescriptor(pertrans->sortslot, pertrans->sortdesc); + } + if (numSortCols > 0) + { /* * We don't implement DISTINCT or ORDER BY aggs in the HASHED case * (yet) diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 01ceeef39c..52d3532580 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1830,10 +1830,8 @@ typedef struct AggState int num_hashes; AggStatePerHash perhash; AggStatePerGroup *hash_pergroup; /* array of per-group pointers */ - /* support for evaluation of agg inputs */ - TupleTableSlot *evalslot; /* slot for agg inputs */ - ProjectionInfo *evalproj; /* projection machinery */ - TupleDesc evaldesc; /* descriptor of input tuples */ + /* support for evaluation of agg input expressions: */ + ProjectionInfo *combinedproj; /* projection machinery */ } AggState; /* ---------------- diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out index 82ede655aa..c4ea86ff05 100644 --- a/src/test/regress/expected/aggregates.out +++ b/src/test/regress/expected/aggregates.out @@ -1388,6 +1388,12 @@ select min(unique1) filter (where unique1 > 100) from tenk1; 101 (1 row) +select sum(1/ten) filter (where ten > 0) from tenk1; + sum +------ + 1000 +(1 row) + select ten, sum(distinct four) filter (where four::text ~ '123') from onek a group by ten; ten | sum diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql index 77314522eb..fefbef89e0 100644 --- a/src/test/regress/sql/aggregates.sql +++ b/src/test/regress/sql/aggregates.sql @@ -524,6 +524,8 @@ drop table bytea_test_table; select min(unique1) filter (where unique1 > 100) from tenk1; +select sum(1/ten) filter (where ten > 0) from tenk1; + select ten, sum(distinct four) filter (where four::text ~ '123') from onek a group by ten; From be0ebb65f51225223421df6e10eb6e87fc8264d7 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 16 Oct 2017 15:51:23 -0400 Subject: [PATCH 0400/1087] Allow the built-in ordered-set aggregates to share transition state. The built-in OSAs all share the same transition function, so they can share transition state as long as the final functions cooperate to not do the sort step more than once. To avoid running the tuplesort object in randomAccess mode unnecessarily, add a bit of infrastructure to nodeAgg.c to let the aggregate functions find out whether the transition state is actually being shared or not. This doesn't work for the hypothetical aggregates, since those inject a hypothetical row that isn't traceable to the shared input state. So they remain marked aggfinalmodify = 'w'. 
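
As a sketch of the effect (the query is taken from the regression tests
updated below), both of these aggregates now feed a single shared sort:

    select percentile_cont(0.5) within group (order by a),
           percentile_disc(0.5) within group (order by a)
    from (values(1::float8),(3),(5),(7)) t(a);

Whichever final function runs first performs tuplesort_performsort();
any later one calls tuplesort_rescan() instead of sorting again, which
is why a shared sort state has to be created in randomAccess mode.
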
Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com --- src/backend/executor/nodeAgg.c | 52 ++++++++- src/backend/utils/adt/orderedsetaggs.c | 133 +++++++++++++---------- src/include/catalog/catversion.h | 2 +- src/include/catalog/pg_aggregate.h | 14 +-- src/include/fmgr.h | 1 + src/test/regress/expected/aggregates.out | 12 +- src/test/regress/sql/aggregates.sql | 8 +- 7 files changed, 149 insertions(+), 73 deletions(-) diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 8a6dfd64e8..82ed5b3e1c 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -254,6 +254,11 @@ typedef struct AggStatePerTransData */ Aggref *aggref; + /* + * Is this state value actually being shared by more than one Aggref? + */ + bool aggshared; + /* * Nominal number of arguments for aggregate function. For plain aggs, * this excludes any ORDER BY expressions. For ordered-set aggs, this @@ -3360,9 +3365,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) { /* * Existing compatible trans found, so just point the 'peragg' to - * the same per-trans struct. + * the same per-trans struct, and mark the trans state as shared. */ pertrans = &pertransstates[existing_transno]; + pertrans->aggshared = true; peragg->transno = existing_transno; } else @@ -3512,6 +3518,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, /* Begin filling in the pertrans data */ pertrans->aggref = aggref; + pertrans->aggshared = false; pertrans->aggCollation = aggref->inputcollid; pertrans->transfn_oid = aggtransfn; pertrans->serialfn_oid = aggserialfn; @@ -4161,17 +4168,18 @@ AggGetAggref(FunctionCallInfo fcinfo) { if (fcinfo->context && IsA(fcinfo->context, AggState)) { + AggState *aggstate = (AggState *) fcinfo->context; AggStatePerAgg curperagg; AggStatePerTrans curpertrans; /* check curperagg (valid when in a final function) */ - curperagg = ((AggState *) fcinfo->context)->curperagg; + curperagg = aggstate->curperagg; if (curperagg) return curperagg->aggref; /* check curpertrans (valid when in a transition function) */ - curpertrans = ((AggState *) fcinfo->context)->curpertrans; + curpertrans = aggstate->curpertrans; if (curpertrans) return curpertrans->aggref; @@ -4201,6 +4209,44 @@ AggGetTempMemoryContext(FunctionCallInfo fcinfo) return NULL; } +/* + * AggStateIsShared - find out whether transition state is shared + * + * If the function is being called as an aggregate support function, + * return TRUE if the aggregate's transition state is shared across + * multiple aggregates, FALSE if it is not. + * + * Returns TRUE if not called as an aggregate support function. + * This is intended as a conservative answer, ie "no you'd better not + * scribble on your input". In particular, will return TRUE if the + * aggregate is being used as a window function, which is a scenario + * in which changing the transition state is a bad idea. We might + * want to refine the behavior for the window case in future. 
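+ *
+ * For instance, orderedsetaggs.c (below in this patch) calls this when
+ * setting up the shared sort state, to decide whether the tuplesort
+ * must be able to rescan:
+ *		qstate->rescan_needed = AggStateIsShared(fcinfo);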
+ */ +bool +AggStateIsShared(FunctionCallInfo fcinfo) +{ + if (fcinfo->context && IsA(fcinfo->context, AggState)) + { + AggState *aggstate = (AggState *) fcinfo->context; + AggStatePerAgg curperagg; + AggStatePerTrans curpertrans; + + /* check curperagg (valid when in a final function) */ + curperagg = aggstate->curperagg; + + if (curperagg) + return aggstate->pertrans[curperagg->transno].aggshared; + + /* check curpertrans (valid when in a transition function) */ + curpertrans = aggstate->curpertrans; + + if (curpertrans) + return curpertrans->aggshared; + } + return true; +} + /* * AggRegisterCallback - register a cleanup callback for an aggregate * diff --git a/src/backend/utils/adt/orderedsetaggs.c b/src/backend/utils/adt/orderedsetaggs.c index 25905a3287..1e323d9444 100644 --- a/src/backend/utils/adt/orderedsetaggs.c +++ b/src/backend/utils/adt/orderedsetaggs.c @@ -40,14 +40,22 @@ * create just once per query because they will not change across groups. * The per-query struct and subsidiary data live in the executor's per-query * memory context, and go away implicitly at ExecutorEnd(). + * + * These structs are set up during the first call of the transition function. + * Because we allow nodeAgg.c to merge ordered-set aggregates (but not + * hypothetical aggregates) with identical inputs and transition functions, + * this info must not depend on the particular aggregate (ie, particular + * final-function), nor on the direct argument(s) of the aggregate. */ typedef struct OSAPerQueryState { - /* Aggref for this aggregate: */ + /* Representative Aggref for this aggregate: */ Aggref *aggref; /* Memory context containing this struct and other per-query data: */ MemoryContext qcontext; + /* Do we expect multiple final-function calls within one group? */ + bool rescan_needed; /* These fields are used only when accumulating tuples: */ @@ -91,6 +99,8 @@ typedef struct OSAPerGroupState Tuplesortstate *sortstate; /* Number of normal rows inserted into sortstate: */ int64 number_of_rows; + /* Have we already done tuplesort_performsort? */ + bool sort_done; } OSAPerGroupState; static void ordered_set_shutdown(Datum arg); @@ -146,6 +156,9 @@ ordered_set_startup(FunctionCallInfo fcinfo, bool use_tuples) qstate->aggref = aggref; qstate->qcontext = qcontext; + /* We need to support rescans if the trans state is shared */ + qstate->rescan_needed = AggStateIsShared(fcinfo); + /* Extract the sort information */ sortlist = aggref->aggorder; numSortCols = list_length(sortlist); @@ -277,15 +290,18 @@ ordered_set_startup(FunctionCallInfo fcinfo, bool use_tuples) qstate->sortOperators, qstate->sortCollations, qstate->sortNullsFirsts, - work_mem, false); + work_mem, + qstate->rescan_needed); else osastate->sortstate = tuplesort_begin_datum(qstate->sortColType, qstate->sortOperator, qstate->sortCollation, qstate->sortNullsFirst, - work_mem, false); + work_mem, + qstate->rescan_needed); osastate->number_of_rows = 0; + osastate->sort_done = false; /* Now register a shutdown callback to clean things up at end of group */ AggRegisterCallback(fcinfo, @@ -306,14 +322,12 @@ ordered_set_startup(FunctionCallInfo fcinfo, bool use_tuples) * group) by ExecutorEnd. But we must take care to release any potential * non-memory resources. * - * This callback is arguably unnecessary, since we don't support use of - * ordered-set aggs in AGG_HASHED mode and there is currently no non-error - * code path in non-hashed modes wherein nodeAgg.c won't call the finalfn - * after calling the transfn one or more times. 
So in principle we could rely - * on the finalfn to delete the tuplestore etc. However, it's possible that - * such a code path might exist in future, and in any case it'd be - * notationally tedious and sometimes require extra data copying to ensure - * we always delete the tuplestore in the finalfn. + * In the case where we're not expecting multiple finalfn calls, we could + * arguably rely on the finalfn to clean up; but it's easier and more testable + * if we just do it the same way in either case. Note that many of the + * finalfns could *not* free the tuplesort object, at least not without extra + * data copying, because what they return is a pointer to a datum inside the + * tuplesort object. */ static void ordered_set_shutdown(Datum arg) @@ -436,8 +450,14 @@ percentile_disc_final(PG_FUNCTION_ARGS) if (osastate->number_of_rows == 0) PG_RETURN_NULL(); - /* Finish the sort */ - tuplesort_performsort(osastate->sortstate); + /* Finish the sort, or rescan if we already did */ + if (!osastate->sort_done) + { + tuplesort_performsort(osastate->sortstate); + osastate->sort_done = true; + } + else + tuplesort_rescan(osastate->sortstate); /*---------- * We need the smallest K such that (K/N) >= percentile. @@ -457,13 +477,6 @@ percentile_disc_final(PG_FUNCTION_ARGS) if (!tuplesort_getdatum(osastate->sortstate, true, &val, &isnull, NULL)) elog(ERROR, "missing row in percentile_disc"); - /* - * Note: we *cannot* clean up the tuplesort object here, because the value - * to be returned is allocated inside its sortcontext. We could use - * datumCopy to copy it out of there, but it doesn't seem worth the - * trouble, since the cleanup callback will clear the tuplesort later. - */ - /* We shouldn't have stored any nulls, but do the right thing anyway */ if (isnull) PG_RETURN_NULL(); @@ -543,8 +556,14 @@ percentile_cont_final_common(FunctionCallInfo fcinfo, Assert(expect_type == osastate->qstate->sortColType); - /* Finish the sort */ - tuplesort_performsort(osastate->sortstate); + /* Finish the sort, or rescan if we already did */ + if (!osastate->sort_done) + { + tuplesort_performsort(osastate->sortstate); + osastate->sort_done = true; + } + else + tuplesort_rescan(osastate->sortstate); first_row = floor(percentile * (osastate->number_of_rows - 1)); second_row = ceil(percentile * (osastate->number_of_rows - 1)); @@ -575,13 +594,6 @@ percentile_cont_final_common(FunctionCallInfo fcinfo, val = lerpfunc(first_val, second_val, proportion); } - /* - * Note: we *cannot* clean up the tuplesort object here, because the value - * to be returned may be allocated inside its sortcontext. We could use - * datumCopy to copy it out of there, but it doesn't seem worth the - * trouble, since the cleanup callback will clear the tuplesort later. - */ - PG_RETURN_DATUM(val); } @@ -779,8 +791,14 @@ percentile_disc_multi_final(PG_FUNCTION_ARGS) */ if (i < num_percentiles) { - /* Finish the sort */ - tuplesort_performsort(osastate->sortstate); + /* Finish the sort, or rescan if we already did */ + if (!osastate->sort_done) + { + tuplesort_performsort(osastate->sortstate); + osastate->sort_done = true; + } + else + tuplesort_rescan(osastate->sortstate); for (; i < num_percentiles; i++) { @@ -804,11 +822,6 @@ percentile_disc_multi_final(PG_FUNCTION_ARGS) } } - /* - * We could clean up the tuplesort object after forming the array, but - * probably not worth the trouble. 
- */ - /* We make the output array the same shape as the input */ PG_RETURN_POINTER(construct_md_array(result_datum, result_isnull, ARR_NDIM(param), @@ -902,8 +915,14 @@ percentile_cont_multi_final_common(FunctionCallInfo fcinfo, */ if (i < num_percentiles) { - /* Finish the sort */ - tuplesort_performsort(osastate->sortstate); + /* Finish the sort, or rescan if we already did */ + if (!osastate->sort_done) + { + tuplesort_performsort(osastate->sortstate); + osastate->sort_done = true; + } + else + tuplesort_rescan(osastate->sortstate); for (; i < num_percentiles; i++) { @@ -962,11 +981,6 @@ percentile_cont_multi_final_common(FunctionCallInfo fcinfo, } } - /* - * We could clean up the tuplesort object after forming the array, but - * probably not worth the trouble. - */ - /* We make the output array the same shape as the input */ PG_RETURN_POINTER(construct_md_array(result_datum, result_isnull, ARR_NDIM(param), @@ -1043,8 +1057,14 @@ mode_final(PG_FUNCTION_ARGS) shouldfree = !(osastate->qstate->typByVal); - /* Finish the sort */ - tuplesort_performsort(osastate->sortstate); + /* Finish the sort, or rescan if we already did */ + if (!osastate->sort_done) + { + tuplesort_performsort(osastate->sortstate); + osastate->sort_done = true; + } + else + tuplesort_rescan(osastate->sortstate); /* Scan tuples and count frequencies */ while (tuplesort_getdatum(osastate->sortstate, true, &val, &isnull, &abbrev_val)) @@ -1097,13 +1117,6 @@ mode_final(PG_FUNCTION_ARGS) if (shouldfree && !last_val_is_mode) pfree(DatumGetPointer(last_val)); - /* - * Note: we *cannot* clean up the tuplesort object here, because the value - * to be returned is allocated inside its sortcontext. We could use - * datumCopy to copy it out of there, but it doesn't seem worth the - * trouble, since the cleanup callback will clear the tuplesort later. 
- */ - if (mode_freq) PG_RETURN_DATUM(mode_val); else @@ -1174,6 +1187,9 @@ hypothetical_rank_common(FunctionCallInfo fcinfo, int flag, hypothetical_check_argtypes(fcinfo, nargs, osastate->qstate->tupdesc); + /* because we need a hypothetical row, we can't share transition state */ + Assert(!osastate->sort_done); + /* insert the hypothetical row into the sort */ slot = osastate->qstate->tupslot; ExecClearTuple(slot); @@ -1190,6 +1206,7 @@ hypothetical_rank_common(FunctionCallInfo fcinfo, int flag, /* finish the sort */ tuplesort_performsort(osastate->sortstate); + osastate->sort_done = true; /* iterate till we find the hypothetical row */ while (tuplesort_gettupleslot(osastate->sortstate, true, true, slot, NULL)) @@ -1207,10 +1224,6 @@ hypothetical_rank_common(FunctionCallInfo fcinfo, int flag, ExecClearTuple(slot); - /* Might as well clean up the tuplesort object immediately */ - tuplesort_end(osastate->sortstate); - osastate->sortstate = NULL; - return rank; } @@ -1329,6 +1342,9 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) /* Get short-term context we can use for execTuplesMatch */ tmpcontext = AggGetTempMemoryContext(fcinfo); + /* because we need a hypothetical row, we can't share transition state */ + Assert(!osastate->sort_done); + /* insert the hypothetical row into the sort */ slot = osastate->qstate->tupslot; ExecClearTuple(slot); @@ -1345,6 +1361,7 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) /* finish the sort */ tuplesort_performsort(osastate->sortstate); + osastate->sort_done = true; /* * We alternate fetching into tupslot and extraslot so that we have the @@ -1391,10 +1408,6 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) ExecDropSingleTupleTableSlot(extraslot); - /* Might as well clean up the tuplesort object immediately */ - tuplesort_end(osastate->sortstate); - osastate->sortstate = NULL; - rank = rank - duplicate_count; PG_RETURN_INT64(rank); diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 7c1756ae08..9a7f5b25a3 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201710141 +#define CATALOG_VERSION_NO 201710161 #endif diff --git a/src/include/catalog/pg_aggregate.h b/src/include/catalog/pg_aggregate.h index 5769f6430a..13f1bce5af 100644 --- a/src/include/catalog/pg_aggregate.h +++ b/src/include/catalog/pg_aggregate.h @@ -318,13 +318,13 @@ DATA(insert ( 3267 n 0 jsonb_agg_transfn jsonb_agg_finalfn - - - - - - DATA(insert ( 3270 n 0 jsonb_object_agg_transfn jsonb_object_agg_finalfn - - - - - - f f r r 0 2281 0 0 0 _null_ _null_ )); /* ordered-set and hypothetical-set aggregates */ -DATA(insert ( 3972 o 1 ordered_set_transition percentile_disc_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3974 o 1 ordered_set_transition percentile_cont_float8_final - - - - - - f f w w 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3976 o 1 ordered_set_transition percentile_cont_interval_final - - - - - - f f w w 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3978 o 1 ordered_set_transition percentile_disc_multi_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3980 o 1 ordered_set_transition percentile_cont_float8_multi_final - - - - - - f f w w 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3982 o 1 ordered_set_transition percentile_cont_interval_multi_final - - - - - - f f w w 0 2281 0 0 0 _null_ _null_ )); -DATA(insert ( 3984 o 0 ordered_set_transition mode_final - - - - - - t f w w 0 2281 0 0 0 
_null_ _null_ )); +DATA(insert ( 3972 o 1 ordered_set_transition percentile_disc_final - - - - - - t f s s 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3974 o 1 ordered_set_transition percentile_cont_float8_final - - - - - - f f s s 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3976 o 1 ordered_set_transition percentile_cont_interval_final - - - - - - f f s s 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3978 o 1 ordered_set_transition percentile_disc_multi_final - - - - - - t f s s 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3980 o 1 ordered_set_transition percentile_cont_float8_multi_final - - - - - - f f s s 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3982 o 1 ordered_set_transition percentile_cont_interval_multi_final - - - - - - f f s s 0 2281 0 0 0 _null_ _null_ )); +DATA(insert ( 3984 o 0 ordered_set_transition mode_final - - - - - - t f s s 0 2281 0 0 0 _null_ _null_ )); DATA(insert ( 3986 h 1 ordered_set_transition_multi rank_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); DATA(insert ( 3988 h 1 ordered_set_transition_multi percent_rank_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); DATA(insert ( 3990 h 1 ordered_set_transition_multi cume_dist_final - - - - - - t f w w 0 2281 0 0 0 _null_ _null_ )); diff --git a/src/include/fmgr.h b/src/include/fmgr.h index b604a5c162..a68ec91c68 100644 --- a/src/include/fmgr.h +++ b/src/include/fmgr.h @@ -698,6 +698,7 @@ extern int AggCheckCallContext(FunctionCallInfo fcinfo, MemoryContext *aggcontext); extern fmAggrefPtr AggGetAggref(FunctionCallInfo fcinfo); extern MemoryContext AggGetTempMemoryContext(FunctionCallInfo fcinfo); +extern bool AggStateIsShared(FunctionCallInfo fcinfo); extern void AggRegisterCallback(FunctionCallInfo fcinfo, fmExprContextCallbackFunction func, Datum arg); diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out index c4ea86ff05..3408cf3333 100644 --- a/src/test/regress/expected/aggregates.out +++ b/src/test/regress/expected/aggregates.out @@ -1866,7 +1866,7 @@ NOTICE: avg_transfn called with 3 2 | 6 (1 row) --- ideally these would share state, but we have to fix the OSAs first. +-- exercise cases where OSAs share state select percentile_cont(0.5) within group (order by a), percentile_disc(0.5) within group (order by a) @@ -1876,6 +1876,16 @@ from (values(1::float8),(3),(5),(7)) t(a); 4 | 3 (1 row) +select + percentile_cont(0.25) within group (order by a), + percentile_disc(0.5) within group (order by a) +from (values(1::float8),(3),(5),(7)) t(a); + percentile_cont | percentile_disc +-----------------+----------------- + 2.5 | 3 +(1 row) + +-- these can't share state currently select rank(4) within group (order by a), dense_rank(4) within group (order by a) diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql index fefbef89e0..55c8528fd5 100644 --- a/src/test/regress/sql/aggregates.sql +++ b/src/test/regress/sql/aggregates.sql @@ -741,12 +741,18 @@ select my_avg(one) filter (where one > 1),my_sum(one) from (values(1),(3)) t(one -- this should not share the state due to different input columns. select my_avg(one),my_sum(two) from (values(1,2),(3,4)) t(one,two); --- ideally these would share state, but we have to fix the OSAs first. 
+-- exercise cases where OSAs share state select percentile_cont(0.5) within group (order by a), percentile_disc(0.5) within group (order by a) from (values(1::float8),(3),(5),(7)) t(a); +select + percentile_cont(0.25) within group (order by a), + percentile_disc(0.5) within group (order by a) +from (values(1::float8),(3),(5),(7)) t(a); + +-- these can't share state currently select rank(4) within group (order by a), dense_rank(4) within group (order by a) From cf5ba7c30c0428f5ff49197ec1e0f052035300d6 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 16 Oct 2017 16:02:51 -0400 Subject: [PATCH 0401/1087] Treat aggregate direct arguments as per-agg data not per-trans data. There is no reason to insist that direct arguments must match before we can merge transition states of two aggregate calls. They're only used during the finalfn call, so we can treat them as like the finalfn itself. This allows, eg, merging of select percentile_cont(0.25) within group (order by a), percentile_disc(0.5) within group (order by a) from ... This didn't matter (and could not have been tested) before we allowed state merging of OSAs otherwise. Discussion: https://postgr.es/m/CAB4ELO5RZhOamuT9Xsf72ozbenDLLXZKSk07FiSVsuJNZB861A@mail.gmail.com --- src/backend/executor/nodeAgg.c | 27 ++++++++++----------------- 1 file changed, 10 insertions(+), 17 deletions(-) diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 82ed5b3e1c..2b118359b5 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -259,13 +259,6 @@ typedef struct AggStatePerTransData */ bool aggshared; - /* - * Nominal number of arguments for aggregate function. For plain aggs, - * this excludes any ORDER BY expressions. For ordered-set aggs, this - * counts both the direct and aggregated (ORDER BY) arguments. - */ - int numArguments; - /* * Number of aggregated input columns. This includes ORDER BY expressions * in both the plain-agg and ordered-set cases. Ordered-set direct args @@ -302,9 +295,6 @@ typedef struct AggStatePerTransData /* Oid of state value's datatype */ Oid aggtranstype; - /* ExprStates for any direct-argument expressions */ - List *aggdirectargs; - /* * fmgr lookup data for transition function or combine function. Note in * particular that the fn_strict flag is kept here. @@ -444,6 +434,9 @@ typedef struct AggStatePerAggData */ int numFinalArgs; + /* ExprStates for any direct-argument expressions */ + List *aggdirectargs; + /* * We need the len and byval info for the agg's result data type in order * to know how to copy/delete values. @@ -1544,7 +1537,7 @@ finalize_aggregate(AggState *aggstate, * for the transition state value. */ i = 1; - foreach(lc, pertrans->aggdirectargs) + foreach(lc, peragg->aggdirectargs) { ExprState *expr = (ExprState *) lfirst(lc); @@ -3313,6 +3306,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) else peragg->numFinalArgs = numDirectArgs + 1; + /* Initialize any direct-argument expressions */ + peragg->aggdirectargs = ExecInitExprList(aggref->aggdirectargs, + (PlanState *) aggstate); + /* * build expression trees using actual argument & result types for the * finalfn, if it exists and is required. 
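
For example, with this change even two calls of the same ordered-set
aggregate that differ only in their direct arguments can share one
transition state and one sort:

    select percentile_disc(0.25) within group (order by a),
           percentile_disc(0.75) within group (order by a)
    from (values(1::float8),(3),(5),(7)) t(a);
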
@@ -3657,10 +3654,6 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,
 	}
 
-
-	/* Initialize any direct-argument expressions */
-	pertrans->aggdirectargs = ExecInitExprList(aggref->aggdirectargs,
-											   (PlanState *) aggstate);
-
 	/*
 	 * If we're doing either DISTINCT or ORDER BY for a plain agg, then we
 	 * have a list of SortGroupClause nodes; fish out the data in them and
@@ -3847,7 +3840,6 @@ find_compatible_peragg(Aggref *newagg, AggState *aggstate,
 		newagg->aggstar != existingRef->aggstar ||
 		newagg->aggvariadic != existingRef->aggvariadic ||
 		newagg->aggkind != existingRef->aggkind ||
-		!equal(newagg->aggdirectargs, existingRef->aggdirectargs) ||
 		!equal(newagg->args, existingRef->args) ||
 		!equal(newagg->aggorder, existingRef->aggorder) ||
 		!equal(newagg->aggdistinct, existingRef->aggdistinct) ||
@@ -3857,7 +3849,8 @@ find_compatible_peragg(Aggref *newagg, AggState *aggstate,
 	/* if it's the same aggregate function then report exact match */
 	if (newagg->aggfnoid == existingRef->aggfnoid &&
 		newagg->aggtype == existingRef->aggtype &&
-		newagg->aggcollid == existingRef->aggcollid)
+		newagg->aggcollid == existingRef->aggcollid &&
+		equal(newagg->aggdirectargs, existingRef->aggdirectargs))
 	{
 		list_free(*same_input_transnos);
 		*same_input_transnos = NIL;

From 421167362242ce1fb46d6d720798787e7cd65aad Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 10 Aug 2017 23:33:47 -0400
Subject: [PATCH 0402/1087] Exclude flex-generated code from coverage testing

Flex generates a lot of functions that are not actually used. In order
to avoid coverage figures being ruined by that, mark up the part of the
.l files where the generated code appears by lcov exclusion markers.
That way, lcov will typically only report on coverage for the .l file,
which is under our control, but not for the .c file.

Reviewed-by: Michael Paquier
---
 contrib/cube/cubescan.l                   | 4 ++++
 contrib/seg/segscan.l                     | 4 ++++
 src/backend/bootstrap/bootscanner.l       | 3 +++
 src/backend/parser/scan.l                 | 5 +++++
 src/backend/replication/repl_scanner.l    | 3 +++
 src/backend/replication/syncrep_scanner.l | 3 +++
 src/backend/utils/misc/guc-file.l         | 4 +++-
 src/bin/pgbench/exprscan.l                | 4 ++++
 src/bin/psql/psqlscanslash.l              | 4 ++++
 src/fe_utils/psqlscan.l                   | 4 ++++
 src/interfaces/ecpg/preproc/pgc.l         | 6 ++++++
 src/test/isolation/specscanner.l          | 4 ++++
 12 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/contrib/cube/cubescan.l b/contrib/cube/cubescan.l
index dada917820..bd400e3684 100644
--- a/contrib/cube/cubescan.l
+++ b/contrib/cube/cubescan.l
@@ -4,6 +4,8 @@
  * contrib/cube/cubescan.l
  */
 
+/* LCOV_EXCL_START */
+
 /* No reason to constrain amount of data slurped */
 #define YY_READ_BUF_SIZE 16777216
 
@@ -56,6 +58,8 @@ NaN          [nN][aA][nN]
 
 %%
 
+/* LCOV_EXCL_STOP */
+
 /* result is not used, but Bison expects this signature */
 void
 yyerror(NDBOX **result, const char *message)
diff --git a/contrib/seg/segscan.l b/contrib/seg/segscan.l
index 6db24fdd1f..5f6595e9eb 100644
--- a/contrib/seg/segscan.l
+++ b/contrib/seg/segscan.l
@@ -3,6 +3,8 @@
  * A scanner for EMP-style numeric ranges
  */
 
+/* LCOV_EXCL_START */
+
 /* No reason to constrain amount of data slurped */
 #define YY_READ_BUF_SIZE 16777216
 
@@ -51,6 +53,8 @@ float        ({integer}|{real})([eE]{integer})?
 
%% +/* LCOV_EXCL_STOP */ + void yyerror(SEG *result, const char *message) { diff --git a/src/backend/bootstrap/bootscanner.l b/src/backend/bootstrap/bootscanner.l index 51c5e5e3cd..5465217bc3 100644 --- a/src/backend/bootstrap/bootscanner.l +++ b/src/backend/bootstrap/bootscanner.l @@ -38,6 +38,7 @@ /* Not needed now that this file is compiled as part of bootparse. */ /* #include "bootparse.h" */ +/* LCOV_EXCL_START */ /* Avoid exit() on fatal scanner errors (a bit ugly -- see yy_fatal_error) */ #undef fprintf @@ -134,6 +135,8 @@ insert { return INSERT_TUPLE; } %% +/* LCOV_EXCL_STOP */ + void yyerror(const char *message) { diff --git a/src/backend/parser/scan.l b/src/backend/parser/scan.l index 634bfa512f..6af2199cdc 100644 --- a/src/backend/parser/scan.l +++ b/src/backend/parser/scan.l @@ -41,6 +41,9 @@ } %{ + +/* LCOV_EXCL_START */ + /* Avoid exit() on fatal scanner errors (a bit ugly -- see yy_fatal_error) */ #undef fprintf #define fprintf(file, fmt, msg) fprintf_to_ereport(fmt, msg) @@ -1011,6 +1014,8 @@ other . %% +/* LCOV_EXCL_STOP */ + /* * Arrange access to yyextra for subroutines of the main yylex() function. * We expect each subroutine to have a yyscanner parameter. Rather than diff --git a/src/backend/replication/repl_scanner.l b/src/backend/replication/repl_scanner.l index 62bb5288c0..568d55ac95 100644 --- a/src/backend/replication/repl_scanner.l +++ b/src/backend/replication/repl_scanner.l @@ -38,6 +38,8 @@ static char *litbufdup(void); static void addlit(char *ytext, int yleng); static void addlitchar(unsigned char ychar); +/* LCOV_EXCL_START */ + %} %option 8bit @@ -186,6 +188,7 @@ WAIT { return K_WAIT; } } %% +/* LCOV_EXCL_STOP */ static void startlit(void) diff --git a/src/backend/replication/syncrep_scanner.l b/src/backend/replication/syncrep_scanner.l index d1d1b26a48..1fbc936aa6 100644 --- a/src/backend/replication/syncrep_scanner.l +++ b/src/backend/replication/syncrep_scanner.l @@ -32,6 +32,8 @@ static YY_BUFFER_STATE scanbufhandle; static StringInfoData xdbuf; +/* LCOV_EXCL_START */ + %} %option 8bit @@ -112,6 +114,7 @@ xdinside [^"]+ . { return JUNK; } %% +/* LCOV_EXCL_STOP */ /* Needs to be here for access to yytext */ void diff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l index f01b814c6e..3de8e791f2 100644 --- a/src/backend/utils/misc/guc-file.l +++ b/src/backend/utils/misc/guc-file.l @@ -57,6 +57,8 @@ static void record_config_file_error(const char *errmsg, static int GUC_flex_fatal(const char *msg); static char *GUC_scanstr(const char *s); +/* LCOV_EXCL_START */ + %} %option 8bit @@ -107,7 +109,7 @@ STRING \'([^'\\\n]|\\.|\'\')*\' %% - +/* LCOV_EXCL_STOP */ /* * Exported function to read and process the configuration file. 
The diff --git a/src/bin/pgbench/exprscan.l b/src/bin/pgbench/exprscan.l index 9bf6d237f5..9f46fb9db8 100644 --- a/src/bin/pgbench/exprscan.l +++ b/src/bin/pgbench/exprscan.l @@ -43,6 +43,8 @@ static bool last_was_newline = false; extern int expr_yyget_column(yyscan_t yyscanner); extern void expr_yyset_column(int column_no, yyscan_t yyscanner); +/* LCOV_EXCL_START */ + %} /* Except for the prefix, these options should match psqlscan.l */ @@ -190,6 +192,8 @@ continuation \\{newline} %% +/* LCOV_EXCL_STOP */ + void expr_yyerror_more(yyscan_t yyscanner, const char *message, const char *more) { diff --git a/src/bin/psql/psqlscanslash.l b/src/bin/psql/psqlscanslash.l index 9a53cb3e02..e3cde04188 100644 --- a/src/bin/psql/psqlscanslash.l +++ b/src/bin/psql/psqlscanslash.l @@ -67,6 +67,8 @@ static void evaluate_backtick(PsqlScanState state); extern int slash_yyget_column(yyscan_t yyscanner); extern void slash_yyset_column(int column_no, yyscan_t yyscanner); +/* LCOV_EXCL_START */ + %} /* Except for the prefix, these options should match psqlscan.l */ @@ -468,6 +470,8 @@ other . %% +/* LCOV_EXCL_STOP */ + /* * Scan the command name of a psql backslash command. This should be called * after psql_scan() returns PSCAN_BACKSLASH. It is assumed that the input diff --git a/src/fe_utils/psqlscan.l b/src/fe_utils/psqlscan.l index 4375142a00..44fcf7ee46 100644 --- a/src/fe_utils/psqlscan.l +++ b/src/fe_utils/psqlscan.l @@ -71,6 +71,8 @@ typedef int YYSTYPE; extern int psql_yyget_column(yyscan_t yyscanner); extern void psql_yyset_column(int column_no, yyscan_t yyscanner); +/* LCOV_EXCL_START */ + %} %option reentrant @@ -899,6 +901,8 @@ other . %% +/* LCOV_EXCL_STOP */ + /* * Create a lexer working state struct. * diff --git a/src/interfaces/ecpg/preproc/pgc.l b/src/interfaces/ecpg/preproc/pgc.l index fc450f30ab..e35843ba4e 100644 --- a/src/interfaces/ecpg/preproc/pgc.l +++ b/src/interfaces/ecpg/preproc/pgc.l @@ -79,6 +79,8 @@ static struct _if_value short else_branch; } stacked_if_value[MAX_NESTED_IF]; +/* LCOV_EXCL_START */ + %} %option 8bit @@ -1249,7 +1251,11 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ } } {other}|\n { mmfatal(PARSE_ERROR, "internal error: unreachable state; please report this to "); } + %% + +/* LCOV_EXCL_STOP */ + void lex_init(void) { diff --git a/src/test/isolation/specscanner.l b/src/test/isolation/specscanner.l index a9528bda6b..9c0532c0c5 100644 --- a/src/test/isolation/specscanner.l +++ b/src/test/isolation/specscanner.l @@ -17,6 +17,8 @@ static int litbufpos = 0; static void addlitchar(char c); +/* LCOV_EXCL_START */ + %} %option 8bit @@ -93,6 +95,8 @@ teardown { return TEARDOWN; } } %% +/* LCOV_EXCL_STOP */ + static void addlitchar(char c) { From 7421f4b89a90e1fa45751dd1cbc4e8d4ca1cba5e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 16 Oct 2017 17:56:42 -0400 Subject: [PATCH 0403/1087] Fix incorrect handling of CTEs and ENRs as DML target relations. setTargetTable threw an error if the proposed target RangeVar's relname matched any visible CTE or ENR. This breaks backwards compatibility in the CTE case, since pre-v10 we never looked for a CTE here at all, so that CTE names did not mask regular tables. It does seem like a good idea to throw an error for the ENR case, though, thus causing ENRs to mask tables for this purpose; ENRs are new in v10 so we're not breaking existing code, and we may someday want to allow them to be the targets of DML. 
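
Concretely, the resulting behavior (taken from the regression tests
updated by this patch) is:

    -- with no table "test", the CTE is not considered as a target,
    -- so this now fails with: relation "test" does not exist
    WITH test AS (SELECT 42) INSERT INTO test VALUES (1);

    -- with a real table of that name, the INSERT targets the table,
    -- because CTEs don't hide tables from data-modifying statements
    create table test (i int);
    with test as (select 42) insert into test select * from test;
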
To fix that, replace use of getRTEForSpecialRelationTypes, which was overkill anyway, with use of scanNameSpaceForENR. A second problem was that the check neglected to verify null schemaname, so that a CTE or ENR could incorrectly be thought to match a qualified RangeVar. That happened because getRTEForSpecialRelationTypes relied on its caller to have checked for null schemaname. Even though the one remaining caller got it right, this is obviously bug-prone, so move the check inside getRTEForSpecialRelationTypes. Also, revert commit 18ce3a4ab's extremely poorly thought out decision to add a NULL return case to parserOpenTable --- without either documenting that or adjusting any of the callers to check for it. The current bug seems to have arisen in part due to working around that bad idea. In passing, remove the one-line shim functions transformCTEReference and transformENRReference --- they don't seem to be adding any clarity or functionality. Per report from Hugo Mercier (via Julien Rouhaud). Back-patch to v10 where the bug was introduced. Thomas Munro, with minor editing by me Discussion: https://postgr.es/m/CAOBaU_YdPVH+PTtiKSSLOiiW3mVDYsnNUekK+XPbHXiP=wrFLA@mail.gmail.com --- src/backend/parser/parse_clause.c | 74 ++++++++++----------------- src/backend/parser/parse_relation.c | 8 +-- src/test/regress/expected/plpgsql.out | 8 +-- src/test/regress/expected/with.out | 18 ++++++- src/test/regress/sql/plpgsql.sql | 4 +- src/test/regress/sql/with.sql | 10 +++- 6 files changed, 59 insertions(+), 63 deletions(-) diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c index 9ff80b8b40..af99e65aa7 100644 --- a/src/backend/parser/parse_clause.c +++ b/src/backend/parser/parse_clause.c @@ -62,9 +62,6 @@ static Node *transformJoinOnClause(ParseState *pstate, JoinExpr *j, static RangeTblEntry *getRTEForSpecialRelationTypes(ParseState *pstate, RangeVar *rv); static RangeTblEntry *transformTableEntry(ParseState *pstate, RangeVar *r); -static RangeTblEntry *transformCTEReference(ParseState *pstate, RangeVar *r, - CommonTableExpr *cte, Index levelsup); -static RangeTblEntry *transformENRReference(ParseState *pstate, RangeVar *r); static RangeTblEntry *transformRangeSubselect(ParseState *pstate, RangeSubselect *r); static RangeTblEntry *transformRangeFunction(ParseState *pstate, @@ -184,9 +181,12 @@ setTargetTable(ParseState *pstate, RangeVar *relation, RangeTblEntry *rte; int rtindex; - /* So far special relations are immutable; so they cannot be targets. */ - rte = getRTEForSpecialRelationTypes(pstate, relation); - if (rte != NULL) + /* + * ENRs hide tables of the same name, so we need to check for them first. + * In contrast, CTEs don't hide tables (for this purpose). 
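+	 * ENRs cannot (yet) be the target of DML, so if the name matches one,
+	 * all we can do here is complain.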
+ */ + if (relation->schemaname == NULL && + scanNameSpaceForENR(pstate, relation->relname)) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("relation \"%s\" cannot be the target of a modifying statement", @@ -430,35 +430,6 @@ transformTableEntry(ParseState *pstate, RangeVar *r) return rte; } -/* - * transformCTEReference --- transform a RangeVar that references a common - * table expression (ie, a sub-SELECT defined in a WITH clause) - */ -static RangeTblEntry * -transformCTEReference(ParseState *pstate, RangeVar *r, - CommonTableExpr *cte, Index levelsup) -{ - RangeTblEntry *rte; - - rte = addRangeTableEntryForCTE(pstate, cte, levelsup, r, true); - - return rte; -} - -/* - * transformENRReference --- transform a RangeVar that references an ephemeral - * named relation - */ -static RangeTblEntry * -transformENRReference(ParseState *pstate, RangeVar *r) -{ - RangeTblEntry *rte; - - rte = addRangeTableEntryForENR(pstate, r, true); - - return rte; -} - /* * transformRangeSubselect --- transform a sub-SELECT appearing in FROM */ @@ -1071,19 +1042,32 @@ transformRangeTableSample(ParseState *pstate, RangeTableSample *rts) return tablesample; } - +/* + * getRTEForSpecialRelationTypes + * + * If given RangeVar refers to a CTE or an EphemeralNamedRelation, + * build and return an appropriate RTE, otherwise return NULL + */ static RangeTblEntry * getRTEForSpecialRelationTypes(ParseState *pstate, RangeVar *rv) { CommonTableExpr *cte; Index levelsup; - RangeTblEntry *rte = NULL; + RangeTblEntry *rte; + + /* + * if it is a qualified name, it can't be a CTE or tuplestore reference + */ + if (rv->schemaname) + return NULL; cte = scanNameSpaceForCTE(pstate, rv->relname, &levelsup); if (cte) - rte = transformCTEReference(pstate, rv, cte, levelsup); - if (!rte && scanNameSpaceForENR(pstate, rv->relname)) - rte = transformENRReference(pstate, rv); + rte = addRangeTableEntryForCTE(pstate, cte, levelsup, rv, true); + else if (scanNameSpaceForENR(pstate, rv->relname)) + rte = addRangeTableEntryForENR(pstate, rv, true); + else + rte = NULL; return rte; } @@ -1119,15 +1103,11 @@ transformFromClauseItem(ParseState *pstate, Node *n, /* Plain relation reference, or perhaps a CTE reference */ RangeVar *rv = (RangeVar *) n; RangeTblRef *rtr; - RangeTblEntry *rte = NULL; + RangeTblEntry *rte; int rtindex; - /* - * if it is an unqualified name, it might be a CTE or tuplestore - * reference - */ - if (!rv->schemaname) - rte = getRTEForSpecialRelationTypes(pstate, rv); + /* Check if it's a CTE or tuplestore reference */ + rte = getRTEForSpecialRelationTypes(pstate, rv); /* if not found above, must be a table reference */ if (!rte) diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index 4c5c684b44..a9273affb2 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -1159,19 +1159,13 @@ parserOpenTable(ParseState *pstate, const RangeVar *relation, int lockmode) relation->schemaname, relation->relname))); else { - /* - * An unqualified name might be a named ephemeral relation. - */ - if (get_visible_ENR_metadata(pstate->p_queryEnv, relation->relname)) - rel = NULL; - /* * An unqualified name might have been meant as a reference to * some not-yet-in-scope CTE. The bare "does not exist" message * has proven remarkably unhelpful for figuring out such problems, * so we take pains to offer a specific hint. 
*/ - else if (isFutureCTE(pstate, relation->relname)) + if (isFutureCTE(pstate, relation->relname)) ereport(ERROR, (errcode(ERRCODE_UNDEFINED_TABLE), errmsg("relation \"%s\" does not exist", diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index 7d3e9225bb..bb3532676b 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -5879,19 +5879,19 @@ CREATE FUNCTION transition_table_level2_bad_usage_func() LANGUAGE plpgsql AS $$ BEGIN - INSERT INTO d VALUES (1000000, 1000000, 'x'); + INSERT INTO dx VALUES (1000000, 1000000, 'x'); RETURN NULL; END; $$; CREATE TRIGGER transition_table_level2_bad_usage_trigger AFTER DELETE ON transition_table_level2 - REFERENCING OLD TABLE AS d + REFERENCING OLD TABLE AS dx FOR EACH STATEMENT EXECUTE PROCEDURE transition_table_level2_bad_usage_func(); DELETE FROM transition_table_level2 WHERE level2_no BETWEEN 301 AND 305; -ERROR: relation "d" cannot be the target of a modifying statement -CONTEXT: SQL statement "INSERT INTO d VALUES (1000000, 1000000, 'x')" +ERROR: relation "dx" cannot be the target of a modifying statement +CONTEXT: SQL statement "INSERT INTO dx VALUES (1000000, 1000000, 'x')" PL/pgSQL function transition_table_level2_bad_usage_func() line 3 at SQL statement DROP TRIGGER transition_table_level2_bad_usage_trigger ON transition_table_level2; diff --git a/src/test/regress/expected/with.out b/src/test/regress/expected/with.out index c32a490580..b4e0a1e83d 100644 --- a/src/test/regress/expected/with.out +++ b/src/test/regress/expected/with.out @@ -2273,5 +2273,19 @@ with ordinality as (select 1 as x) select * from ordinality; (1 row) -- check sane response to attempt to modify CTE relation -WITH d AS (SELECT 42) INSERT INTO d VALUES (1); -ERROR: relation "d" cannot be the target of a modifying statement +WITH test AS (SELECT 42) INSERT INTO test VALUES (1); +ERROR: relation "test" does not exist +LINE 1: WITH test AS (SELECT 42) INSERT INTO test VALUES (1); + ^ +-- check response to attempt to modify table with same name as a CTE (perhaps +-- surprisingly it works, because CTEs don't hide tables from data-modifying +-- statements) +create table test (i int); +with test as (select 42) insert into test select * from test; +select * from test; + i +---- + 42 +(1 row) + +drop table test; diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 6c9399696b..6620ea6172 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -4678,14 +4678,14 @@ CREATE FUNCTION transition_table_level2_bad_usage_func() LANGUAGE plpgsql AS $$ BEGIN - INSERT INTO d VALUES (1000000, 1000000, 'x'); + INSERT INTO dx VALUES (1000000, 1000000, 'x'); RETURN NULL; END; $$; CREATE TRIGGER transition_table_level2_bad_usage_trigger AFTER DELETE ON transition_table_level2 - REFERENCING OLD TABLE AS d + REFERENCING OLD TABLE AS dx FOR EACH STATEMENT EXECUTE PROCEDURE transition_table_level2_bad_usage_func(); diff --git a/src/test/regress/sql/with.sql b/src/test/regress/sql/with.sql index 8ae5184d0f..baf65488a8 100644 --- a/src/test/regress/sql/with.sql +++ b/src/test/regress/sql/with.sql @@ -1030,4 +1030,12 @@ create table foo (with ordinality); -- fail, WITH is a reserved word with ordinality as (select 1 as x) select * from ordinality; -- check sane response to attempt to modify CTE relation -WITH d AS (SELECT 42) INSERT INTO d VALUES (1); +WITH test AS (SELECT 42) INSERT INTO test VALUES (1); + +-- check response to attempt to modify table with 
same name as a CTE (perhaps +-- surprisingly it works, because CTEs don't hide tables from data-modifying +-- statements) +create table test (i int); +with test as (select 42) insert into test select * from test; +select * from test; +drop table test; From 6ecabead4b5993c42745f2802d857b1a79f48bf9 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Tue, 17 Oct 2017 11:45:34 +0200 Subject: [PATCH 0404/1087] REASSIGN OWNED BY doc: s/privileges/membership/ Reported by: David G. Johnston Discussion: https://postgr.es/m/CAKFQuwajWqjqEL9xc1xnnmTyBg32EdAZKJXijzigbosGSs_vag@mail.gmail.com --- doc/src/sgml/ref/reassign_owned.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/reassign_owned.sgml b/doc/src/sgml/ref/reassign_owned.sgml index 7e9819395f..c1751e7f47 100644 --- a/doc/src/sgml/ref/reassign_owned.sgml +++ b/doc/src/sgml/ref/reassign_owned.sgml @@ -77,7 +77,7 @@ REASSIGN OWNED BY { old_role | CURR - REASSIGN OWNED requires privileges on both the + REASSIGN OWNED requires membership on both the source role(s) and the target role. From c29c578908dc0271eeb13a4014e54bff07a29c05 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 8 Oct 2017 21:44:17 -0400 Subject: [PATCH 0405/1087] Don't use SGML empty tags MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit For DocBook XML compatibility, don't use SGML empty tags () anymore, replace by the full tag name. Add a warning option to catch future occurrences. Alexander Lakhin, Jürgen Purtz --- doc/src/sgml/Makefile | 3 +- doc/src/sgml/acronyms.sgml | 18 +- doc/src/sgml/adminpack.sgml | 54 +- doc/src/sgml/advanced.sgml | 110 +- doc/src/sgml/amcheck.sgml | 66 +- doc/src/sgml/arch-dev.sgml | 64 +- doc/src/sgml/array.sgml | 110 +- doc/src/sgml/auth-delay.sgml | 6 +- doc/src/sgml/auto-explain.sgml | 36 +- doc/src/sgml/backup.sgml | 496 +-- doc/src/sgml/bgworker.sgml | 80 +- doc/src/sgml/biblio.sgml | 2 +- doc/src/sgml/bki.sgml | 86 +- doc/src/sgml/bloom.sgml | 24 +- doc/src/sgml/brin.sgml | 78 +- doc/src/sgml/btree-gin.sgml | 18 +- doc/src/sgml/btree-gist.sgml | 32 +- doc/src/sgml/catalogs.sgml | 1012 +++---- doc/src/sgml/charset.sgml | 270 +- doc/src/sgml/citext.sgml | 120 +- doc/src/sgml/client-auth.sgml | 328 +- doc/src/sgml/config.sgml | 2156 ++++++------- doc/src/sgml/contrib-spi.sgml | 80 +- doc/src/sgml/contrib.sgml | 26 +- doc/src/sgml/cube.sgml | 158 +- doc/src/sgml/custom-scan.sgml | 136 +- doc/src/sgml/datatype.sgml | 692 ++--- doc/src/sgml/datetime.sgml | 72 +- doc/src/sgml/dblink.sgml | 270 +- doc/src/sgml/ddl.sgml | 390 +-- doc/src/sgml/dfunc.sgml | 40 +- doc/src/sgml/dict-int.sgml | 18 +- doc/src/sgml/dict-xsyn.sgml | 40 +- doc/src/sgml/diskusage.sgml | 16 +- doc/src/sgml/dml.sgml | 32 +- doc/src/sgml/docguide.sgml | 2 +- doc/src/sgml/earthdistance.sgml | 32 +- doc/src/sgml/ecpg.sgml | 734 ++--- doc/src/sgml/errcodes.sgml | 18 +- doc/src/sgml/event-trigger.sgml | 78 +- doc/src/sgml/extend.sgml | 362 +-- doc/src/sgml/external-projects.sgml | 18 +- doc/src/sgml/fdwhandler.sgml | 880 +++--- doc/src/sgml/file-fdw.sgml | 48 +- doc/src/sgml/func.sgml | 2528 ++++++++-------- doc/src/sgml/fuzzystrmatch.sgml | 26 +- doc/src/sgml/generate-errcodes-table.pl | 4 +- doc/src/sgml/generic-wal.sgml | 42 +- doc/src/sgml/geqo.sgml | 12 +- doc/src/sgml/gin.sgml | 286 +- doc/src/sgml/gist.sgml | 402 +-- doc/src/sgml/high-availability.sgml | 516 ++-- doc/src/sgml/history.sgml | 4 +- doc/src/sgml/hstore.sgml | 152 +- doc/src/sgml/indexam.sgml | 446 +-- doc/src/sgml/indices.sgml | 
224 +-
 doc/src/sgml/info.sgml | 6 +-
 doc/src/sgml/information_schema.sgml | 364 +--
 doc/src/sgml/install-windows.sgml | 58 +-
 doc/src/sgml/installation.sgml | 460 +--
 doc/src/sgml/intagg.sgml | 28 +-
 doc/src/sgml/intarray.sgml | 68 +-
 doc/src/sgml/intro.sgml | 4 +-
 doc/src/sgml/isn.sgml | 28 +-
 doc/src/sgml/json.sgml | 174 +-
 doc/src/sgml/libpq.sgml | 1414 ++++----
 doc/src/sgml/lo.sgml | 34 +-
 doc/src/sgml/lobj.sgml | 128 +-
 doc/src/sgml/logicaldecoding.sgml | 18 +-
 doc/src/sgml/ltree.sgml | 234 +-
 doc/src/sgml/maintenance.sgml | 338 +--
 doc/src/sgml/manage-ag.sgml | 196 +-
 doc/src/sgml/monitoring.sgml | 1440 ++++-----
 doc/src/sgml/mvcc.sgml | 130 +-
 doc/src/sgml/nls.sgml | 24 +-
 doc/src/sgml/notation.sgml | 8 +-
 doc/src/sgml/oid2name.sgml | 60 +-
 doc/src/sgml/pageinspect.sgml | 36 +-
 doc/src/sgml/parallel.sgml | 96 +-
 doc/src/sgml/perform.sgml | 344 +--
 doc/src/sgml/pgbuffercache.sgml | 16 +-
 doc/src/sgml/pgcrypto.sgml | 176 +-
 doc/src/sgml/pgfreespacemap.sgml | 8 +-
 doc/src/sgml/pgprewarm.sgml | 16 +-
 doc/src/sgml/pgrowlocks.sgml | 10 +-
 doc/src/sgml/pgstandby.sgml | 100 +-
 doc/src/sgml/pgstatstatements.sgml | 106 +-
 doc/src/sgml/pgstattuple.sgml | 30 +-
 doc/src/sgml/pgtrgm.sgml | 90 +-
 doc/src/sgml/pgvisibility.sgml | 12 +-
 doc/src/sgml/planstats.sgml | 52 +-
 doc/src/sgml/plhandler.sgml | 50 +-
 doc/src/sgml/plperl.sgml | 148 +-
 doc/src/sgml/plpgsql.sgml | 1204 ++++----
 doc/src/sgml/plpython.sgml | 104 +-
 doc/src/sgml/pltcl.sgml | 210 +-
 doc/src/sgml/postgres-fdw.sgml | 198 +-
 doc/src/sgml/postgres.sgml | 16 +-
 doc/src/sgml/problems.sgml | 20 +-
 doc/src/sgml/protocol.sgml | 474 +--
 doc/src/sgml/queries.sgml | 674 ++---
 doc/src/sgml/query.sgml | 36 +-
 doc/src/sgml/rangetypes.sgml | 66 +-
 doc/src/sgml/recovery-config.sgml | 132 +-
 doc/src/sgml/ref/abort.sgml | 2 +-
 doc/src/sgml/ref/alter_aggregate.sgml | 18 +-
 doc/src/sgml/ref/alter_collation.sgml | 2 +-
 doc/src/sgml/ref/alter_conversion.sgml | 2 +-
 doc/src/sgml/ref/alter_database.sgml | 4 +-
 .../sgml/ref/alter_default_privileges.sgml | 20 +-
 doc/src/sgml/ref/alter_domain.sgml | 20 +-
 doc/src/sgml/ref/alter_extension.sgml | 22 +-
 .../sgml/ref/alter_foreign_data_wrapper.sgml | 18 +-
 doc/src/sgml/ref/alter_foreign_table.sgml | 42 +-
 doc/src/sgml/ref/alter_function.sgml | 28 +-
 doc/src/sgml/ref/alter_group.sgml | 8 +-
 doc/src/sgml/ref/alter_index.sgml | 10 +-
 doc/src/sgml/ref/alter_materialized_view.sgml | 4 +-
 doc/src/sgml/ref/alter_opclass.sgml | 2 +-
 doc/src/sgml/ref/alter_operator.sgml | 2 +-
 doc/src/sgml/ref/alter_opfamily.sgml | 32 +-
 doc/src/sgml/ref/alter_publication.sgml | 8 +-
 doc/src/sgml/ref/alter_role.sgml | 22 +-
 doc/src/sgml/ref/alter_schema.sgml | 2 +-
 doc/src/sgml/ref/alter_sequence.sgml | 34 +-
 doc/src/sgml/ref/alter_server.sgml | 12 +-
 doc/src/sgml/ref/alter_statistics.sgml | 4 +-
 doc/src/sgml/ref/alter_subscription.sgml | 4 +-
 doc/src/sgml/ref/alter_system.sgml | 12 +-
 doc/src/sgml/ref/alter_table.sgml | 140 +-
 doc/src/sgml/ref/alter_tablespace.sgml | 4 +-
 doc/src/sgml/ref/alter_trigger.sgml | 4 +-
 doc/src/sgml/ref/alter_tsconfig.sgml | 20 +-
 doc/src/sgml/ref/alter_tsdictionary.sgml | 8 +-
 doc/src/sgml/ref/alter_tsparser.sgml | 2 +-
 doc/src/sgml/ref/alter_tstemplate.sgml | 2 +-
 doc/src/sgml/ref/alter_type.sgml | 6 +-
 doc/src/sgml/ref/alter_user_mapping.sgml | 14 +-
 doc/src/sgml/ref/alter_view.sgml | 10 +-
 doc/src/sgml/ref/analyze.sgml | 16 +-
 doc/src/sgml/ref/begin.sgml | 4 +-
 doc/src/sgml/ref/close.sgml | 4 +-
 doc/src/sgml/ref/cluster.sgml | 10 +-
 doc/src/sgml/ref/clusterdb.sgml | 62 +-
 doc/src/sgml/ref/comment.sgml | 26 +-
 doc/src/sgml/ref/commit.sgml | 2 +-
 doc/src/sgml/ref/commit_prepared.sgml | 2 +-
 doc/src/sgml/ref/copy.sgml | 226 +-
 doc/src/sgml/ref/create_access_method.sgml | 8 +-
 doc/src/sgml/ref/create_aggregate.sgml | 164 +-
 doc/src/sgml/ref/create_cast.sgml | 80 +-
 doc/src/sgml/ref/create_collation.sgml | 2 +-
 doc/src/sgml/ref/create_conversion.sgml | 6 +-
 doc/src/sgml/ref/create_database.sgml | 54 +-
 doc/src/sgml/ref/create_domain.sgml | 22 +-
 doc/src/sgml/ref/create_event_trigger.sgml | 6 +-
 doc/src/sgml/ref/create_extension.sgml | 38 +-
 .../sgml/ref/create_foreign_data_wrapper.sgml | 12 +-
 doc/src/sgml/ref/create_foreign_table.sgml | 44 +-
 doc/src/sgml/ref/create_function.sgml | 128 +-
 doc/src/sgml/ref/create_index.sgml | 120 +-
 doc/src/sgml/ref/create_language.sgml | 42 +-
 .../sgml/ref/create_materialized_view.sgml | 10 +-
 doc/src/sgml/ref/create_opclass.sgml | 38 +-
 doc/src/sgml/ref/create_operator.sgml | 24 +-
 doc/src/sgml/ref/create_opfamily.sgml | 4 +-
 doc/src/sgml/ref/create_policy.sgml | 20 +-
 doc/src/sgml/ref/create_publication.sgml | 14 +-
 doc/src/sgml/ref/create_role.sgml | 94 +-
 doc/src/sgml/ref/create_rule.sgml | 32 +-
 doc/src/sgml/ref/create_schema.sgml | 32 +-
 doc/src/sgml/ref/create_sequence.sgml | 28 +-
 doc/src/sgml/ref/create_server.sgml | 8 +-
 doc/src/sgml/ref/create_statistics.sgml | 10 +-
 doc/src/sgml/ref/create_subscription.sgml | 4 +-
 doc/src/sgml/ref/create_table.sgml | 294 +-
 doc/src/sgml/ref/create_table_as.sgml | 36 +-
 doc/src/sgml/ref/create_tablespace.sgml | 22 +-
 doc/src/sgml/ref/create_trigger.sgml | 172 +-
 doc/src/sgml/ref/create_tsconfig.sgml | 2 +-
 doc/src/sgml/ref/create_tstemplate.sgml | 2 +-
 doc/src/sgml/ref/create_type.sgml | 98 +-
 doc/src/sgml/ref/create_user.sgml | 4 +-
 doc/src/sgml/ref/create_user_mapping.sgml | 10 +-
 doc/src/sgml/ref/create_view.sgml | 140 +-
 doc/src/sgml/ref/createdb.sgml | 66 +-
 doc/src/sgml/ref/createuser.sgml | 114 +-
 doc/src/sgml/ref/declare.sgml | 56 +-
 doc/src/sgml/ref/delete.sgml | 62 +-
 doc/src/sgml/ref/discard.sgml | 10 +-
 doc/src/sgml/ref/do.sgml | 18 +-
 doc/src/sgml/ref/drop_access_method.sgml | 4 +-
 doc/src/sgml/ref/drop_aggregate.sgml | 10 +-
 doc/src/sgml/ref/drop_collation.sgml | 4 +-
 doc/src/sgml/ref/drop_conversion.sgml | 2 +-
 doc/src/sgml/ref/drop_database.sgml | 2 +-
 doc/src/sgml/ref/drop_domain.sgml | 6 +-
 doc/src/sgml/ref/drop_extension.sgml | 6 +-
 .../sgml/ref/drop_foreign_data_wrapper.sgml | 6 +-
 doc/src/sgml/ref/drop_foreign_table.sgml | 2 +-
 doc/src/sgml/ref/drop_function.sgml | 12 +-
 doc/src/sgml/ref/drop_index.sgml | 12 +-
 doc/src/sgml/ref/drop_language.sgml | 4 +-
 doc/src/sgml/ref/drop_opclass.sgml | 12 +-
 doc/src/sgml/ref/drop_opfamily.sgml | 4 +-
 doc/src/sgml/ref/drop_owned.sgml | 2 +-
 doc/src/sgml/ref/drop_publication.sgml | 2 +-
 doc/src/sgml/ref/drop_role.sgml | 4 +-
 doc/src/sgml/ref/drop_schema.sgml | 2 +-
 doc/src/sgml/ref/drop_sequence.sgml | 2 +-
 doc/src/sgml/ref/drop_server.sgml | 6 +-
 doc/src/sgml/ref/drop_subscription.sgml | 2 +-
 doc/src/sgml/ref/drop_table.sgml | 6 +-
 doc/src/sgml/ref/drop_tablespace.sgml | 6 +-
 doc/src/sgml/ref/drop_tsconfig.sgml | 4 +-
 doc/src/sgml/ref/drop_tsdictionary.sgml | 2 +-
 doc/src/sgml/ref/drop_tsparser.sgml | 2 +-
 doc/src/sgml/ref/drop_tstemplate.sgml | 2 +-
 doc/src/sgml/ref/drop_type.sgml | 4 +-
 doc/src/sgml/ref/drop_user_mapping.sgml | 14 +-
 doc/src/sgml/ref/drop_view.sgml | 2 +-
 doc/src/sgml/ref/dropdb.sgml | 48 +-
 doc/src/sgml/ref/dropuser.sgml | 46 +-
 doc/src/sgml/ref/ecpg-ref.sgml | 6 +-
 doc/src/sgml/ref/end.sgml | 2 +-
 doc/src/sgml/ref/execute.sgml | 4 +-
 doc/src/sgml/ref/explain.sgml | 18 +-
 doc/src/sgml/ref/fetch.sgml | 36 +-
 doc/src/sgml/ref/grant.sgml | 98 +-
 doc/src/sgml/ref/import_foreign_schema.sgml | 14 +-
 doc/src/sgml/ref/initdb.sgml | 42 +-
 doc/src/sgml/ref/insert.sgml | 86 +-
 doc/src/sgml/ref/listen.sgml | 6 +-
 doc/src/sgml/ref/load.sgml | 14 +-
 doc/src/sgml/ref/lock.sgml | 72 +-
 doc/src/sgml/ref/move.sgml | 2 +-
 doc/src/sgml/ref/notify.sgml | 12 +-
 doc/src/sgml/ref/pg_basebackup.sgml | 40 +-
 doc/src/sgml/ref/pg_config-ref.sgml | 100 +-
 doc/src/sgml/ref/pg_controldata.sgml | 8 +-
 doc/src/sgml/ref/pg_ctl-ref.sgml | 50 +-
 doc/src/sgml/ref/pg_dump.sgml | 280 +-
 doc/src/sgml/ref/pg_dumpall.sgml | 102 +-
 doc/src/sgml/ref/pg_isready.sgml | 32 +-
 doc/src/sgml/ref/pg_receivewal.sgml | 34 +-
 doc/src/sgml/ref/pg_recvlogical.sgml | 26 +-
 doc/src/sgml/ref/pg_resetwal.sgml | 56 +-
 doc/src/sgml/ref/pg_restore.sgml | 150 +-
 doc/src/sgml/ref/pg_rewind.sgml | 48 +-
 doc/src/sgml/ref/pg_waldump.sgml | 16 +-
 doc/src/sgml/ref/pgarchivecleanup.sgml | 50 +-
 doc/src/sgml/ref/pgbench.sgml | 568 ++--
 doc/src/sgml/ref/pgtestfsync.sgml | 16 +-
 doc/src/sgml/ref/pgtesttiming.sgml | 10 +-
 doc/src/sgml/ref/pgupgrade.sgml | 232 +-
 doc/src/sgml/ref/postgres-ref.sgml | 76 +-
 doc/src/sgml/ref/postmaster.sgml | 2 +-
 doc/src/sgml/ref/prepare.sgml | 18 +-
 doc/src/sgml/ref/prepare_transaction.sgml | 30 +-
 doc/src/sgml/ref/psql-ref.sgml | 662 ++--
 doc/src/sgml/ref/reassign_owned.sgml | 2 +-
 .../sgml/ref/refresh_materialized_view.sgml | 4 +-
 doc/src/sgml/ref/reindex.sgml | 34 +-
 doc/src/sgml/ref/reindexdb.sgml | 80 +-
 doc/src/sgml/ref/release_savepoint.sgml | 2 +-
 doc/src/sgml/ref/reset.sgml | 10 +-
 doc/src/sgml/ref/revoke.sgml | 48 +-
 doc/src/sgml/ref/rollback.sgml | 2 +-
 doc/src/sgml/ref/rollback_prepared.sgml | 2 +-
 doc/src/sgml/ref/rollback_to.sgml | 28 +-
 doc/src/sgml/ref/savepoint.sgml | 8 +-
 doc/src/sgml/ref/security_label.sgml | 22 +-
 doc/src/sgml/ref/select.sgml | 598 ++--
 doc/src/sgml/ref/set.sgml | 44 +-
 doc/src/sgml/ref/set_constraints.sgml | 14 +-
 doc/src/sgml/ref/set_role.sgml | 36 +-
 doc/src/sgml/ref/set_session_auth.sgml | 16 +-
 doc/src/sgml/ref/set_transaction.sgml | 12 +-
 doc/src/sgml/ref/show.sgml | 2 +-
 doc/src/sgml/ref/start_transaction.sgml | 6 +-
 doc/src/sgml/ref/truncate.sgml | 38 +-
 doc/src/sgml/ref/unlisten.sgml | 2 +-
 doc/src/sgml/ref/update.sgml | 88 +-
 doc/src/sgml/ref/vacuum.sgml | 24 +-
 doc/src/sgml/ref/vacuumdb.sgml | 52 +-
 doc/src/sgml/ref/values.sgml | 60 +-
 doc/src/sgml/regress.sgml | 122 +-
 doc/src/sgml/release-10.sgml | 736 ++---
 doc/src/sgml/release-7.4.sgml | 700 ++---
 doc/src/sgml/release-8.0.sgml | 1266 ++++----
 doc/src/sgml/release-8.1.sgml | 1344 ++++----
 doc/src/sgml/release-8.2.sgml | 1598 +++++-----
 doc/src/sgml/release-8.3.sgml | 1682 +++++-----
 doc/src/sgml/release-8.4.sgml | 2468 +++++++--------
 doc/src/sgml/release-9.0.sgml | 2572 ++++++++--------
 doc/src/sgml/release-9.1.sgml | 2678 ++++++++--------
 doc/src/sgml/release-9.2.sgml | 2694 ++++++++---------
 doc/src/sgml/release-9.3.sgml | 2518 +++++++--------
 doc/src/sgml/release-9.4.sgml | 2142 ++++++-------
 doc/src/sgml/release-9.5.sgml | 1800 +++++------
 doc/src/sgml/release-9.6.sgml | 1516 +++++-----
 doc/src/sgml/release-old.sgml | 314 +-
 doc/src/sgml/release.sgml | 2 +-
 doc/src/sgml/rowtypes.sgml | 130 +-
 doc/src/sgml/rules.sgml | 326 +-
 doc/src/sgml/runtime.sgml | 610 ++--
 doc/src/sgml/seg.sgml | 70 +-
 doc/src/sgml/sepgsql.sgml | 190 +-
 doc/src/sgml/sourcerepo.sgml | 24 +-
 doc/src/sgml/sources.sgml | 170 +-
 doc/src/sgml/spgist.sgml | 468 +--
 doc/src/sgml/spi.sgml | 226
+- doc/src/sgml/sslinfo.sgml | 14 +- doc/src/sgml/start.sgml | 22 +- doc/src/sgml/storage.sgml | 328 +- doc/src/sgml/syntax.sgml | 362 +-- doc/src/sgml/tablefunc.sgml | 168 +- doc/src/sgml/tablesample-method.sgml | 128 +- doc/src/sgml/tcn.sgml | 8 +- doc/src/sgml/test-decoding.sgml | 4 +- doc/src/sgml/textsearch.sgml | 734 ++--- doc/src/sgml/trigger.sgml | 220 +- doc/src/sgml/tsm-system-rows.sgml | 8 +- doc/src/sgml/tsm-system-time.sgml | 8 +- doc/src/sgml/typeconv.sgml | 122 +- doc/src/sgml/unaccent.sgml | 44 +- doc/src/sgml/user-manag.sgml | 138 +- doc/src/sgml/uuid-ossp.sgml | 26 +- doc/src/sgml/vacuumlo.sgml | 42 +- doc/src/sgml/wal.sgml | 138 +- doc/src/sgml/xaggr.sgml | 160 +- doc/src/sgml/xfunc.sgml | 614 ++-- doc/src/sgml/xindex.sgml | 192 +- doc/src/sgml/xml2.sgml | 58 +- doc/src/sgml/xoper.sgml | 142 +- doc/src/sgml/xplang.sgml | 26 +- doc/src/sgml/xtypes.sgml | 68 +- 337 files changed, 31636 insertions(+), 31635 deletions(-) diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile index 164c00bb63..428eb569fc 100644 --- a/doc/src/sgml/Makefile +++ b/doc/src/sgml/Makefile @@ -66,10 +66,11 @@ ALLSGML := $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml) $(GENERATED_SGML) # Enable some extra warnings # -wfully-tagged needed to throw a warning on missing tags # for older tool chains, 2007-08-31 -override SPFLAGS += -wall -wno-unused-param -wno-empty -wfully-tagged +override SPFLAGS += -wall -wno-unused-param -wfully-tagged # Additional warnings for XML compatibility. The conditional is meant # to detect whether we are using OpenSP rather than the ancient # original SP. +override SPFLAGS += -wempty ifneq (,$(filter o%,$(notdir $(OSX)))) override SPFLAGS += -wdata-delim -winstance-ignore-ms -winstance-include-ms -winstance-param-entity endif diff --git a/doc/src/sgml/acronyms.sgml b/doc/src/sgml/acronyms.sgml index 29f85e0846..35514d4d9a 100644 --- a/doc/src/sgml/acronyms.sgml +++ b/doc/src/sgml/acronyms.sgml @@ -4,8 +4,8 @@ Acronyms - This is a list of acronyms commonly used in the PostgreSQL - documentation and in discussions about PostgreSQL. + This is a list of acronyms commonly used in the PostgreSQL + documentation and in discussions about PostgreSQL. @@ -153,7 +153,7 @@ Data Definition Language, SQL commands such as CREATE - TABLE, ALTER USER + TABLE, ALTER USER @@ -164,8 +164,8 @@ Data - Manipulation Language, SQL commands such as INSERT, - UPDATE, DELETE + Manipulation Language, SQL commands such as INSERT, + UPDATE, DELETE @@ -281,7 +281,7 @@ Grand Unified Configuration, - the PostgreSQL subsystem that handles server configuration + the PostgreSQL subsystem that handles server configuration @@ -384,7 +384,7 @@ LSN - Log Sequence Number, see pg_lsn + Log Sequence Number, see pg_lsn and WAL Internals. @@ -486,7 +486,7 @@ PGSQL - PostgreSQL + PostgreSQL @@ -495,7 +495,7 @@ PGXS - PostgreSQL Extension System + PostgreSQL Extension System diff --git a/doc/src/sgml/adminpack.sgml b/doc/src/sgml/adminpack.sgml index fddf90c4a5..b27a4a325d 100644 --- a/doc/src/sgml/adminpack.sgml +++ b/doc/src/sgml/adminpack.sgml @@ -8,8 +8,8 @@ - adminpack provides a number of support functions which - pgAdmin and other administration and management tools can + adminpack provides a number of support functions which + pgAdmin and other administration and management tools can use to provide additional functionality, such as remote management of server log files. Use of all these functions is restricted to superusers. @@ -25,7 +25,7 @@
- <filename>adminpack</> Functions + <filename>adminpack</filename> Functions Name Return Type Description @@ -58,7 +58,7 @@ pg_catalog.pg_logdir_ls() setof record - List the log files in the log_directory directory + List the log files in the log_directory directory @@ -69,9 +69,9 @@ pg_file_write - pg_file_write writes the specified data into - the file named by filename. If append is - false, the file must not already exist. If append is true, + pg_file_write writes the specified data into + the file named by filename. If append is + false, the file must not already exist. If append is true, the file can already exist, and will be appended to if so. Returns the number of bytes written. @@ -80,15 +80,15 @@ pg_file_rename - pg_file_rename renames a file. If archivename - is omitted or NULL, it simply renames oldname - to newname (which must not already exist). - If archivename is provided, it first - renames newname to archivename (which must - not already exist), and then renames oldname - to newname. In event of failure of the second rename step, - it will try to rename archivename back - to newname before reporting the error. + pg_file_rename renames a file. If archivename + is omitted or NULL, it simply renames oldname + to newname (which must not already exist). + If archivename is provided, it first + renames newname to archivename (which must + not already exist), and then renames oldname + to newname. In event of failure of the second rename step, + it will try to rename archivename back + to newname before reporting the error. Returns true on success, false if the source file(s) are not present or not writable; other cases throw errors. @@ -97,19 +97,19 @@ pg_file_unlink - pg_file_unlink removes the specified file. + pg_file_unlink removes the specified file. Returns true on success, false if the specified file is not present - or the unlink() call fails; other cases throw errors. + or the unlink() call fails; other cases throw errors. pg_logdir_ls - pg_logdir_ls returns the start timestamps and path + pg_logdir_ls returns the start timestamps and path names of all the log files in the directory. The parameter must have its - default setting (postgresql-%Y-%m-%d_%H%M%S.log) to use this + default setting (postgresql-%Y-%m-%d_%H%M%S.log) to use this function. @@ -119,12 +119,12 @@ and should not be used in new applications; instead use those shown in and . These functions are - provided in adminpack only for compatibility with old - versions of pgAdmin. + provided in adminpack only for compatibility with old + versions of pgAdmin.
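As a quick illustration of the functions described above, here is a minimal sketch, not part of the patch: the file name is hypothetical, the calls require a superuser, and pg_logdir_ls additionally assumes the default log_filename setting. Since pg_logdir_ls returns a set of generic records, the caller supplies the column definition list, and the column names below are illustrative:

    -- Append a note to a file under the data directory; returns bytes written.
    SELECT pg_catalog.pg_file_write('maintenance_note.txt', 'checked', true);

    -- List start timestamps and paths of the server log files.
    SELECT * FROM pg_catalog.pg_logdir_ls()
        AS logs(start_time timestamp, log_path text);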
- Deprecated <filename>adminpack</> Functions + Deprecated <filename>adminpack</filename> Functions Name Return Type Description @@ -136,22 +136,22 @@ pg_catalog.pg_file_read(filename text, offset bigint, nbytes bigint) text - Alternate name for pg_read_file() + Alternate name for pg_read_file() pg_catalog.pg_file_length(filename text) bigint - Same as size column returned - by pg_stat_file() + Same as size column returned + by pg_stat_file() pg_catalog.pg_logfile_rotate() integer - Alternate name for pg_rotate_logfile(), but note that it + Alternate name for pg_rotate_logfile(), but note that it returns integer 0 or 1 rather than boolean diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml index f47c01987b..bf87df4dcb 100644 --- a/doc/src/sgml/advanced.sgml +++ b/doc/src/sgml/advanced.sgml @@ -145,7 +145,7 @@ DETAIL: Key (city)=(Berkeley) is not present in table "cities". - Transactions are a fundamental concept of all database + Transactions are a fundamental concept of all database systems. The essential point of a transaction is that it bundles multiple steps into a single, all-or-nothing operation. The intermediate states between the steps are not visible to other concurrent transactions, @@ -182,8 +182,8 @@ UPDATE branches SET balance = balance + 100.00 remain a happy customer if she was debited without Bob being credited. We need a guarantee that if something goes wrong partway through the operation, none of the steps executed so far will take effect. Grouping - the updates into a transaction gives us this guarantee. - A transaction is said to be atomic: from the point of + the updates into a transaction gives us this guarantee. + A transaction is said to be atomic: from the point of view of other transactions, it either happens completely or not at all. @@ -216,9 +216,9 @@ UPDATE branches SET balance = balance + 100.00 - In PostgreSQL, a transaction is set up by surrounding + In PostgreSQL, a transaction is set up by surrounding the SQL commands of the transaction with - BEGIN and COMMIT commands. So our banking + BEGIN and COMMIT commands. So our banking transaction would actually look like: @@ -233,23 +233,23 @@ COMMIT; If, partway through the transaction, we decide we do not want to commit (perhaps we just noticed that Alice's balance went negative), - we can issue the command ROLLBACK instead of - COMMIT, and all our updates so far will be canceled. + we can issue the command ROLLBACK instead of + COMMIT, and all our updates so far will be canceled. - PostgreSQL actually treats every SQL statement as being - executed within a transaction. If you do not issue a BEGIN + PostgreSQL actually treats every SQL statement as being + executed within a transaction. If you do not issue a BEGIN command, - then each individual statement has an implicit BEGIN and - (if successful) COMMIT wrapped around it. A group of - statements surrounded by BEGIN and COMMIT - is sometimes called a transaction block. + then each individual statement has an implicit BEGIN and + (if successful) COMMIT wrapped around it. A group of + statements surrounded by BEGIN and COMMIT + is sometimes called a transaction block. - Some client libraries issue BEGIN and COMMIT + Some client libraries issue BEGIN and COMMIT commands automatically, so that you might get the effect of transaction blocks without asking. Check the documentation for the interface you are using. 
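To make the all-or-nothing behavior concrete, here is a minimal sketch reusing the accounts table from the banking example above; once the block is rolled back, nothing inside it takes effect:

    BEGIN;
    UPDATE accounts SET balance = balance - 100.00
        WHERE name = 'Alice';
    -- We change our minds before committing:
    ROLLBACK;
    -- The debit above never happened; Alice's balance is unchanged.
    SELECT balance FROM accounts WHERE name = 'Alice';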
@@ -258,11 +258,11 @@ COMMIT; It's possible to control the statements in a transaction in a more - granular fashion through the use of savepoints. Savepoints + granular fashion through the use of savepoints. Savepoints allow you to selectively discard parts of the transaction, while committing the rest. After defining a savepoint with - SAVEPOINT, you can if needed roll back to the savepoint - with ROLLBACK TO. All the transaction's database changes + SAVEPOINT, you can if needed roll back to the savepoint + with ROLLBACK TO. All the transaction's database changes between defining the savepoint and rolling back to it are discarded, but changes earlier than the savepoint are kept. @@ -308,7 +308,7 @@ COMMIT; This example is, of course, oversimplified, but there's a lot of control possible in a transaction block through the use of savepoints. - Moreover, ROLLBACK TO is the only way to regain control of a + Moreover, ROLLBACK TO is the only way to regain control of a transaction block that was put in aborted state by the system due to an error, short of rolling it back completely and starting again. @@ -325,7 +325,7 @@ COMMIT; - A window function performs a calculation across a set of + A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. However, window functions do not cause rows to become grouped into a single @@ -360,31 +360,31 @@ SELECT depname, empno, salary, avg(salary) OVER (PARTITION BY depname) FROM emps The first three output columns come directly from the table - empsalary, and there is one output row for each row in the + empsalary, and there is one output row for each row in the table. The fourth column represents an average taken across all the table - rows that have the same depname value as the current row. - (This actually is the same function as the non-window avg - aggregate, but the OVER clause causes it to be + rows that have the same depname value as the current row. + (This actually is the same function as the non-window avg + aggregate, but the OVER clause causes it to be treated as a window function and computed across the window frame.) - A window function call always contains an OVER clause + A window function call always contains an OVER clause directly following the window function's name and argument(s). This is what syntactically distinguishes it from a normal function or non-window - aggregate. The OVER clause determines exactly how the + aggregate. The OVER clause determines exactly how the rows of the query are split up for processing by the window function. - The PARTITION BY clause within OVER + The PARTITION BY clause within OVER divides the rows into groups, or partitions, that share the same - values of the PARTITION BY expression(s). For each row, + values of the PARTITION BY expression(s). For each row, the window function is computed across the rows that fall into the same partition as the current row. You can also control the order in which rows are processed by - window functions using ORDER BY within OVER. - (The window ORDER BY does not even have to match the + window functions using ORDER BY within OVER. + (The window ORDER BY does not even have to match the order in which the rows are output.) 
Here is an example: @@ -409,39 +409,39 @@ FROM empsalary; (10 rows) - As shown here, the rank function produces a numerical rank - for each distinct ORDER BY value in the current row's - partition, using the order defined by the ORDER BY clause. - rank needs no explicit parameter, because its behavior - is entirely determined by the OVER clause. + As shown here, the rank function produces a numerical rank + for each distinct ORDER BY value in the current row's + partition, using the order defined by the ORDER BY clause. + rank needs no explicit parameter, because its behavior + is entirely determined by the OVER clause. The rows considered by a window function are those of the virtual - table produced by the query's FROM clause as filtered by its - WHERE, GROUP BY, and HAVING clauses + table produced by the query's FROM clause as filtered by its + WHERE, GROUP BY, and HAVING clauses if any. For example, a row removed because it does not meet the - WHERE condition is not seen by any window function. + WHERE condition is not seen by any window function. A query can contain multiple window functions that slice up the data - in different ways using different OVER clauses, but + in different ways using different OVER clauses, but they all act on the same collection of rows defined by this virtual table. - We already saw that ORDER BY can be omitted if the ordering + We already saw that ORDER BY can be omitted if the ordering of rows is not important. It is also possible to omit PARTITION - BY, in which case there is a single partition containing all rows. + BY, in which case there is a single partition containing all rows. There is another important concept associated with window functions: for each row, there is a set of rows within its partition called its - window frame. Some window functions act only + window frame. Some window functions act only on the rows of the window frame, rather than of the whole partition. - By default, if ORDER BY is supplied then the frame consists of + By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the - ORDER BY clause. When ORDER BY is omitted the + ORDER BY clause. When ORDER BY is omitted the default frame consists of all rows in the partition. @@ -450,7 +450,7 @@ FROM empsalary; for details. - Here is an example using sum: + Here is an example using sum: @@ -474,11 +474,11 @@ SELECT salary, sum(salary) OVER () FROM empsalary; - Above, since there is no ORDER BY in the OVER + Above, since there is no ORDER BY in the OVER clause, the window frame is the same as the partition, which for lack of - PARTITION BY is the whole table; in other words each sum is + PARTITION BY is the whole table; in other words each sum is taken over the whole table and so we get the same result for each output - row. But if we add an ORDER BY clause, we get very different + row. But if we add an ORDER BY clause, we get very different results: @@ -510,8 +510,8 @@ SELECT salary, sum(salary) OVER (ORDER BY salary) FROM empsalary; Window functions are permitted only in the SELECT list - and the ORDER BY clause of the query. They are forbidden - elsewhere, such as in GROUP BY, HAVING + and the ORDER BY clause of the query. They are forbidden + elsewhere, such as in GROUP BY, HAVING and WHERE clauses. This is because they logically execute after the processing of those clauses. 
Also, window functions execute after non-window aggregate functions. This means it is valid to @@ -534,15 +534,15 @@ WHERE pos < 3; The above query only shows the rows from the inner query having - rank less than 3. + rank less than 3. When a query involves multiple window functions, it is possible to write - out each one with a separate OVER clause, but this is + out each one with a separate OVER clause, but this is duplicative and error-prone if the same windowing behavior is wanted for several functions. Instead, each windowing behavior can be named - in a WINDOW clause and then referenced in OVER. + in a WINDOW clause and then referenced in OVER. For example: @@ -623,13 +623,13 @@ CREATE TABLE capitals ( In this case, a row of capitals - inherits all columns (name, - population, and altitude) from its + inherits all columns (name, + population, and altitude) from its parent, cities. The type of the column name is text, a native PostgreSQL type for variable length character strings. State capitals have - an extra column, state, that shows their state. In + an extra column, state, that shows their state. In PostgreSQL, a table can inherit from zero or more other tables. diff --git a/doc/src/sgml/amcheck.sgml b/doc/src/sgml/amcheck.sgml index dd71dbd679..0dd68f0ba1 100644 --- a/doc/src/sgml/amcheck.sgml +++ b/doc/src/sgml/amcheck.sgml @@ -8,19 +8,19 @@ - The amcheck module provides functions that allow you to + The amcheck module provides functions that allow you to verify the logical consistency of the structure of indexes. If the structure appears to be valid, no error is raised. - The functions verify various invariants in the + The functions verify various invariants in the structure of the representation of particular indexes. The correctness of the access method functions behind index scans and other important operations relies on these invariants always holding. For example, certain functions verify, among other things, - that all B-Tree pages have items in logical order (e.g., - for B-Tree indexes on text, index tuples should be in + that all B-Tree pages have items in logical order (e.g., + for B-Tree indexes on text, index tuples should be in collated lexical order). If that particular invariant somehow fails to hold, we can expect binary searches on the affected page to incorrectly guide index scans, resulting in wrong answers to SQL @@ -35,7 +35,7 @@ functions. - amcheck functions may be used only by superusers. + amcheck functions may be used only by superusers. @@ -82,7 +82,7 @@ ORDER BY c.relpages DESC LIMIT 10; (10 rows) This example shows a session that performs verification of every - catalog index in the database test. Details of just + catalog index in the database test. Details of just the 10 largest indexes verified are displayed. Since no error is raised, all indexes tested appear to be logically consistent. Naturally, this query could easily be changed to call @@ -90,10 +90,10 @@ ORDER BY c.relpages DESC LIMIT 10; database where verification is supported. - bt_index_check acquires an AccessShareLock + bt_index_check acquires an AccessShareLock on the target index and the heap relation it belongs to. This lock mode is the same lock mode acquired on relations by simple - SELECT statements. + SELECT statements. bt_index_check does not verify invariants that span child/parent relationships, nor does it verify that the target index is consistent with its heap relation. 
When a @@ -132,13 +132,13 @@ ORDER BY c.relpages DESC LIMIT 10; logical inconsistency or other problem. - A ShareLock is required on the target index by + A ShareLock is required on the target index by bt_index_parent_check (a - ShareLock is also acquired on the heap relation). + ShareLock is also acquired on the heap relation). These locks prevent concurrent data modification from - INSERT, UPDATE, and DELETE + INSERT, UPDATE, and DELETE commands. The locks also prevent the underlying relation from - being concurrently processed by VACUUM, as well as + being concurrently processed by VACUUM, as well as all other utility commands. Note that the function holds locks only while running, not for the entire transaction. @@ -159,13 +159,13 @@ ORDER BY c.relpages DESC LIMIT 10; - Using <filename>amcheck</> effectively + Using <filename>amcheck</filename> effectively - amcheck can be effective at detecting various types of + amcheck can be effective at detecting various types of failure modes that data page - checksums will always fail to catch. These include: + checksums will always fail to catch. These include: @@ -176,13 +176,13 @@ ORDER BY c.relpages DESC LIMIT 10; This includes issues caused by the comparison rules of operating system collations changing. Comparisons of datums of a collatable - type like text must be immutable (just as all + type like text must be immutable (just as all comparisons used for B-Tree index scans must be immutable), which implies that operating system collation rules must never change. Though rare, updates to operating system collation rules can cause these issues. More commonly, an inconsistency in the collation order between a master server and a standby server is - implicated, possibly because the major operating + implicated, possibly because the major operating system version in use is inconsistent. Such inconsistencies will generally only arise on standby servers, and so can generally only be detected on standby servers. @@ -190,25 +190,25 @@ ORDER BY c.relpages DESC LIMIT 10; If a problem like this arises, it may not affect each individual index that is ordered using an affected collation, simply because - indexed values might happen to have the same + indexed values might happen to have the same absolute ordering regardless of the behavioral inconsistency. See and for - further details about how PostgreSQL uses + further details about how PostgreSQL uses operating system locales and collations. Corruption caused by hypothetical undiscovered bugs in the - underlying PostgreSQL access method code or sort + underlying PostgreSQL access method code or sort code. Automatic verification of the structural integrity of indexes plays a role in the general testing of new or proposed - PostgreSQL features that could plausibly allow a + PostgreSQL features that could plausibly allow a logical inconsistency to be introduced. One obvious testing - strategy is to call amcheck functions continuously + strategy is to call amcheck functions continuously when running the standard regression tests. See for details on running the tests. @@ -219,12 +219,12 @@ ORDER BY c.relpages DESC LIMIT 10; simply not be enabled. - Note that amcheck examines a page as represented in some + Note that amcheck examines a page as represented in some shared memory buffer at the time of verification if there is only a shared buffer hit when accessing the block. 
Consequently, - amcheck does not necessarily examine data read from the + amcheck does not necessarily examine data read from the file system at the time of verification. Note that when checksums are - enabled, amcheck may raise an error due to a checksum + enabled, amcheck may raise an error due to a checksum failure when a corrupt block is read into a buffer. @@ -234,7 +234,7 @@ ORDER BY c.relpages DESC LIMIT 10; and operating system. - PostgreSQL does not protect against correctable + PostgreSQL does not protect against correctable memory errors and it is assumed you will operate using RAM that uses industry standard Error Correcting Codes (ECC) or better protection. However, ECC memory is typically only immune to @@ -244,7 +244,7 @@ ORDER BY c.relpages DESC LIMIT 10; - In general, amcheck can only prove the presence of + In general, amcheck can only prove the presence of corruption; it cannot prove its absence. @@ -252,19 +252,19 @@ ORDER BY c.relpages DESC LIMIT 10; Repairing corruption - No error concerning corruption raised by amcheck should - ever be a false positive. In practice, amcheck is more + No error concerning corruption raised by amcheck should + ever be a false positive. In practice, amcheck is more likely to find software bugs than problems with hardware. - amcheck raises errors in the event of conditions that, + amcheck raises errors in the event of conditions that, by definition, should never happen, and so careful analysis of - amcheck errors is often required. + amcheck errors is often required. There is no general method of repairing problems that - amcheck detects. An explanation for the root cause of + amcheck detects. An explanation for the root cause of an invariant violation should be sought. may play a useful role in diagnosing - corruption that amcheck detects. A REINDEX + corruption that amcheck detects. A REINDEX may not be effective in repairing corruption. diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml index c835e87215..5423aadb9c 100644 --- a/doc/src/sgml/arch-dev.sgml +++ b/doc/src/sgml/arch-dev.sgml @@ -118,7 +118,7 @@ PostgreSQL is implemented using a - simple process per user client/server model. In this model + simple process per user client/server model. In this model there is one client process connected to exactly one server process. As we do not know ahead of time how many connections will be made, we have to @@ -137,9 +137,9 @@ The client process can be any program that understands the PostgreSQL protocol described in . Many clients are based on the - C-language library libpq, but several independent + C-language library libpq, but several independent implementations of the protocol exist, such as the Java - JDBC driver. + JDBC driver. @@ -184,8 +184,8 @@ text) for valid syntax. If the syntax is correct a parse tree is built up and handed back; otherwise an error is returned. The parser and lexer are - implemented using the well-known Unix tools bison - and flex. + implemented using the well-known Unix tools bison + and flex. @@ -251,7 +251,7 @@ back by the parser as input and does the semantic interpretation needed to understand which tables, functions, and operators are referenced by the query. The data structure that is built to represent this - information is called the query tree. + information is called the query tree. @@ -259,10 +259,10 @@ system catalog lookups can only be done within a transaction, and we do not wish to start a transaction immediately upon receiving a query string. 
The raw parsing stage is sufficient to identify the transaction - control commands (BEGIN, ROLLBACK, etc), and + control commands (BEGIN, ROLLBACK, etc), and these can then be correctly executed without any further analysis. Once we know that we are dealing with an actual query (such as - SELECT or UPDATE), it is okay to + SELECT or UPDATE), it is okay to start a transaction if we're not already in one. Only then can the transformation process be invoked. @@ -270,10 +270,10 @@ The query tree created by the transformation process is structurally similar to the raw parse tree in most places, but it has many differences - in detail. For example, a FuncCall node in the + in detail. For example, a FuncCall node in the parse tree represents something that looks syntactically like a function - call. This might be transformed to either a FuncExpr - or Aggref node depending on whether the referenced + call. This might be transformed to either a FuncExpr + or Aggref node depending on whether the referenced name turns out to be an ordinary function or an aggregate function. Also, information about the actual data types of columns and expression results is added to the query tree. @@ -354,10 +354,10 @@ The planner's search procedure actually works with data structures - called paths, which are simply cut-down representations of + called paths, which are simply cut-down representations of plans containing only as much information as the planner needs to make its decisions. After the cheapest path is determined, a full-fledged - plan tree is built to pass to the executor. This represents + plan tree is built to pass to the executor. This represents the desired execution plan in sufficient detail for the executor to run it. In the rest of this section we'll ignore the distinction between paths and plans. @@ -378,12 +378,12 @@ relation.attribute OPR constant. If relation.attribute happens to match the key of the B-tree index and OPR is one of the operators listed in - the index's operator class, another plan is created using + the index's operator class, another plan is created using the B-tree index to scan the relation. If there are further indexes present and the restrictions in the query happen to match a key of an index, further plans will be considered. Index scan plans are also generated for indexes that have a sort ordering that can match the - query's ORDER BY clause (if any), or a sort ordering that + query's ORDER BY clause (if any), or a sort ordering that might be useful for merge joining (see below). @@ -462,9 +462,9 @@ the base relations, plus nested-loop, merge, or hash join nodes as needed, plus any auxiliary steps needed, such as sort nodes or aggregate-function calculation nodes. Most of these plan node - types have the additional ability to do selection + types have the additional ability to do selection (discarding rows that do not meet a specified Boolean condition) - and projection (computation of a derived column set + and projection (computation of a derived column set based on given column values, that is, evaluation of scalar expressions where needed). One of the responsibilities of the planner is to attach selection conditions from the @@ -496,7 +496,7 @@ subplan) is, let's say, a Sort node and again recursion is needed to obtain an input row. The child node of the Sort might - be a SeqScan node, representing actual reading of a table. + be a SeqScan node, representing actual reading of a table. 
Execution of this node causes the executor to fetch a row from the table and return it up to the calling node. The Sort node will repeatedly call its child to obtain all the rows to be sorted. @@ -529,24 +529,24 @@ The executor mechanism is used to evaluate all four basic SQL query types: - SELECT, INSERT, UPDATE, and - DELETE. For SELECT, the top-level executor + SELECT, INSERT, UPDATE, and + DELETE. For SELECT, the top-level executor code only needs to send each row returned by the query plan tree off - to the client. For INSERT, each returned row is inserted - into the target table specified for the INSERT. This is - done in a special top-level plan node called ModifyTable. + to the client. For INSERT, each returned row is inserted + into the target table specified for the INSERT. This is + done in a special top-level plan node called ModifyTable. (A simple - INSERT ... VALUES command creates a trivial plan tree - consisting of a single Result node, which computes just one - result row, and ModifyTable above it to perform the insertion. - But INSERT ... SELECT can demand the full power - of the executor mechanism.) For UPDATE, the planner arranges + INSERT ... VALUES command creates a trivial plan tree + consisting of a single Result node, which computes just one + result row, and ModifyTable above it to perform the insertion. + But INSERT ... SELECT can demand the full power + of the executor mechanism.) For UPDATE, the planner arranges that each computed row includes all the updated column values, plus - the TID (tuple ID, or row ID) of the original target row; - this data is fed into a ModifyTable node, which uses the + the TID (tuple ID, or row ID) of the original target row; + this data is fed into a ModifyTable node, which uses the information to create a new updated row and mark the old row deleted. - For DELETE, the only column that is actually returned by the - plan is the TID, and the ModifyTable node simply uses the TID + For DELETE, the only column that is actually returned by the + plan is the TID, and the ModifyTable node simply uses the TID to visit each target row and mark it deleted. diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 88eb4be04d..9187f6e02e 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -32,7 +32,7 @@ CREATE TABLE sal_emp ( ); As shown, an array data type is named by appending square brackets - ([]) to the data type name of the array elements. The + ([]) to the data type name of the array elements. The above command will create a table named sal_emp with a column of type text (name), a @@ -69,7 +69,7 @@ CREATE TABLE tictactoe ( An alternative syntax, which conforms to the SQL standard by using - the keyword ARRAY, can be used for one-dimensional arrays. + the keyword ARRAY, can be used for one-dimensional arrays. pay_by_quarter could have been defined as: @@ -79,7 +79,7 @@ CREATE TABLE tictactoe ( pay_by_quarter integer ARRAY, - As before, however, PostgreSQL does not enforce the + As before, however, PostgreSQL does not enforce the size restriction in any case. @@ -107,8 +107,8 @@ CREATE TABLE tictactoe ( for the type, as recorded in its pg_type entry. Among the standard data types provided in the PostgreSQL distribution, all use a comma - (,), except for type box which uses a semicolon - (;). Each val is + (,), except for type box which uses a semicolon + (;). Each val is either a constant of the array element type, or a subarray. 
An example of an array constant is: @@ -119,10 +119,10 @@ CREATE TABLE tictactoe ( - To set an element of an array constant to NULL, write NULL + To set an element of an array constant to NULL, write NULL for the element value. (Any upper- or lower-case variant of - NULL will do.) If you want an actual string value - NULL, you must put double quotes around it. + NULL will do.) If you want an actual string value + NULL, you must put double quotes around it. @@ -176,7 +176,7 @@ ERROR: multidimensional arrays must have array expressions with matching dimens - The ARRAY constructor syntax can also be used: + The ARRAY constructor syntax can also be used: INSERT INTO sal_emp VALUES ('Bill', @@ -190,7 +190,7 @@ INSERT INTO sal_emp Notice that the array elements are ordinary SQL constants or expressions; for instance, string literals are single quoted, instead of - double quoted as they would be in an array literal. The ARRAY + double quoted as they would be in an array literal. The ARRAY constructor syntax is discussed in more detail in . @@ -222,8 +222,8 @@ SELECT name FROM sal_emp WHERE pay_by_quarter[1] <> pay_by_quarter[2]; The array subscript numbers are written within square brackets. By default PostgreSQL uses a one-based numbering convention for arrays, that is, - an array of n elements starts with array[1] and - ends with array[n]. + an array of n elements starts with array[1] and + ends with array[n]. @@ -259,8 +259,8 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; If any dimension is written as a slice, i.e., contains a colon, then all dimensions are treated as slices. Any dimension that has only a single number (no colon) is treated as being from 1 - to the number specified. For example, [2] is treated as - [1:2], as in this example: + to the number specified. For example, [2] is treated as + [1:2], as in this example: SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill'; @@ -272,7 +272,7 @@ SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill'; To avoid confusion with the non-slice case, it's best to use slice syntax - for all dimensions, e.g., [1:2][1:1], not [2][1:1]. + for all dimensions, e.g., [1:2][1:1], not [2][1:1]. @@ -302,9 +302,9 @@ SELECT schedule[:][1:1] FROM sal_emp WHERE name = 'Bill'; An array subscript expression will return null if either the array itself or any of the subscript expressions are null. Also, null is returned if a subscript is outside the array bounds (this case does not raise an error). - For example, if schedule - currently has the dimensions [1:3][1:2] then referencing - schedule[3][3] yields NULL. Similarly, an array reference + For example, if schedule + currently has the dimensions [1:3][1:2] then referencing + schedule[3][3] yields NULL. Similarly, an array reference with the wrong number of subscripts yields a null rather than an error. @@ -423,16 +423,16 @@ UPDATE sal_emp SET pay_by_quarter[1:2] = '{27000,27000}' A stored array value can be enlarged by assigning to elements not already present. Any positions between those previously present and the newly assigned elements will be filled with nulls. For example, if array - myarray currently has 4 elements, it will have six - elements after an update that assigns to myarray[6]; - myarray[5] will contain null. + myarray currently has 4 elements, it will have six + elements after an update that assigns to myarray[6]; + myarray[5] will contain null. Currently, enlargement in this fashion is only allowed for one-dimensional arrays, not multidimensional arrays. 
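A brief sketch of one-based subscript access, using the sal_emp table defined earlier in this chapter (the results naturally depend on the rows inserted above):

    -- Third-quarter pay for all employees:
    SELECT name, pay_by_quarter[3] FROM sal_emp;

    -- One element of the two-dimensional schedule array:
    SELECT schedule[1][2] FROM sal_emp WHERE name = 'Bill';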
Subscripted assignment allows creation of arrays that do not use one-based - subscripts. For example one might assign to myarray[-2:7] to + subscripts. For example one might assign to myarray[-2:7] to create an array with subscript values from -2 to 7. @@ -457,8 +457,8 @@ SELECT ARRAY[5,6] || ARRAY[[1,2],[3,4]]; The concatenation operator allows a single element to be pushed onto the beginning or end of a one-dimensional array. It also accepts two - N-dimensional arrays, or an N-dimensional - and an N+1-dimensional array. + N-dimensional arrays, or an N-dimensional + and an N+1-dimensional array. @@ -501,10 +501,10 @@ SELECT array_dims(ARRAY[[1,2],[3,4]] || ARRAY[[5,6],[7,8],[9,0]]); - When an N-dimensional array is pushed onto the beginning - or end of an N+1-dimensional array, the result is - analogous to the element-array case above. Each N-dimensional - sub-array is essentially an element of the N+1-dimensional + When an N-dimensional array is pushed onto the beginning + or end of an N+1-dimensional array, the result is + analogous to the element-array case above. Each N-dimensional + sub-array is essentially an element of the N+1-dimensional array's outer dimension. For example: SELECT array_dims(ARRAY[1,2] || ARRAY[[3,4],[5,6]]); @@ -587,9 +587,9 @@ SELECT array_append(ARRAY[1, 2], NULL); -- this might have been meant The heuristic it uses to resolve the constant's type is to assume it's of the same type as the operator's other input — in this case, integer array. So the concatenation operator is presumed to - represent array_cat, not array_append. When + represent array_cat, not array_append. When that's the wrong choice, it could be fixed by casting the constant to the - array's element type; but explicit use of array_append might + array's element type; but explicit use of array_append might be a preferable solution. @@ -633,7 +633,7 @@ SELECT * FROM sal_emp WHERE 10000 = ALL (pay_by_quarter); - Alternatively, the generate_subscripts function can be used. + Alternatively, the generate_subscripts function can be used. For example: @@ -648,7 +648,7 @@ SELECT * FROM - You can also search an array using the && operator, + You can also search an array using the && operator, which checks whether the left operand overlaps with the right operand. For instance: @@ -662,8 +662,8 @@ SELECT * FROM sal_emp WHERE pay_by_quarter && ARRAY[10000]; - You can also search for specific values in an array using the array_position - and array_positions functions. The former returns the subscript of + You can also search for specific values in an array using the array_position + and array_positions functions. The former returns the subscript of the first occurrence of a value in an array; the latter returns an array with the subscripts of all occurrences of the value in the array. For example: @@ -703,13 +703,13 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); The external text representation of an array value consists of items that are interpreted according to the I/O conversion rules for the array's element type, plus decoration that indicates the array structure. - The decoration consists of curly braces ({ and }) + The decoration consists of curly braces ({ and }) around the array value plus delimiter characters between adjacent items. 
- The delimiter character is usually a comma (,) but can be - something else: it is determined by the typdelim setting + The delimiter character is usually a comma (,) but can be + something else: it is determined by the typdelim setting for the array's element type. Among the standard data types provided in the PostgreSQL distribution, all use a comma, - except for type box, which uses a semicolon (;). + except for type box, which uses a semicolon (;). In a multidimensional array, each dimension (row, plane, cube, etc.) gets its own level of curly braces, and delimiters must be written between adjacent curly-braced entities of the same level. @@ -719,7 +719,7 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); The array output routine will put double quotes around element values if they are empty strings, contain curly braces, delimiter characters, double quotes, backslashes, or white space, or match the word - NULL. Double quotes and backslashes + NULL. Double quotes and backslashes embedded in element values will be backslash-escaped. For numeric data types it is safe to assume that double quotes will never appear, but for textual data types one should be prepared to cope with either the presence @@ -731,10 +731,10 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); set to one. To represent arrays with other lower bounds, the array subscript ranges can be specified explicitly before writing the array contents. - This decoration consists of square brackets ([]) + This decoration consists of square brackets ([]) around each array dimension's lower and upper bounds, with - a colon (:) delimiter character in between. The - array dimension decoration is followed by an equal sign (=). + a colon (:) delimiter character in between. The + array dimension decoration is followed by an equal sign (=). For example: SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 @@ -750,23 +750,23 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 - If the value written for an element is NULL (in any case + If the value written for an element is NULL (in any case variant), the element is taken to be NULL. The presence of any quotes or backslashes disables this and allows the literal string value - NULL to be entered. Also, for backward compatibility with - pre-8.2 versions of PostgreSQL, the NULL to be entered. Also, for backward compatibility with + pre-8.2 versions of PostgreSQL, the configuration parameter can be turned - off to suppress recognition of NULL as a NULL. + off to suppress recognition of NULL as a NULL. As shown previously, when writing an array value you can use double - quotes around any individual array element. You must do so + quotes around any individual array element. You must do so if the element value would otherwise confuse the array-value parser. For example, elements containing curly braces, commas (or the data type's delimiter character), double quotes, backslashes, or leading or trailing whitespace must be double-quoted. Empty strings and strings matching the - word NULL must be quoted, too. To put a double quote or + word NULL must be quoted, too. To put a double quote or backslash in a quoted array element value, use escape string syntax and precede it with a backslash. Alternatively, you can avoid quotes and use backslash-escaping to protect all data characters that would otherwise @@ -785,17 +785,17 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 Remember that what you write in an SQL command will first be interpreted as a string literal, and then as an array. 
This doubles the number of - backslashes you need. For example, to insert a text array + backslashes you need. For example, to insert a text array value containing a backslash and a double quote, you'd need to write: INSERT ... VALUES (E'{"\\\\","\\""}'); The escape string processor removes one level of backslashes, so that - what arrives at the array-value parser looks like {"\\","\""}. - In turn, the strings fed to the text data type's input routine - become \ and " respectively. (If we were working + what arrives at the array-value parser looks like {"\\","\""}. + In turn, the strings fed to the text data type's input routine + become \ and " respectively. (If we were working with a data type whose input routine also treated backslashes specially, - bytea for example, we might need as many as eight backslashes + bytea for example, we might need as many as eight backslashes in the command to get one backslash into the stored array element.) Dollar quoting (see ) can be used to avoid the need to double backslashes. @@ -804,10 +804,10 @@ INSERT ... VALUES (E'{"\\\\","\\""}'); - The ARRAY constructor syntax (see + The ARRAY constructor syntax (see ) is often easier to work with than the array-literal syntax when writing array values in SQL - commands. In ARRAY, individual element values are written the + commands. In ARRAY, individual element values are written the same way they would be written when not members of an array. diff --git a/doc/src/sgml/auth-delay.sgml b/doc/src/sgml/auth-delay.sgml index 9a6e3e9bb4..9221d2dfb6 100644 --- a/doc/src/sgml/auth-delay.sgml +++ b/doc/src/sgml/auth-delay.sgml @@ -18,7 +18,7 @@ In order to function, this module must be loaded via - in postgresql.conf. + in postgresql.conf. @@ -29,7 +29,7 @@ auth_delay.milliseconds (int) - auth_delay.milliseconds configuration parameter + auth_delay.milliseconds configuration parameter @@ -42,7 +42,7 @@ - These parameters must be set in postgresql.conf. + These parameters must be set in postgresql.conf. Typical usage might be: diff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml index 38e6f50c80..240098c82f 100644 --- a/doc/src/sgml/auto-explain.sgml +++ b/doc/src/sgml/auto-explain.sgml @@ -24,10 +24,10 @@ LOAD 'auto_explain'; (You must be superuser to do that.) More typical usage is to preload - it into some or all sessions by including auto_explain in + it into some or all sessions by including auto_explain in or in - postgresql.conf. Then you can track unexpectedly slow queries + postgresql.conf. Then you can track unexpectedly slow queries no matter when they happen. Of course there is a price in overhead for that. @@ -47,7 +47,7 @@ LOAD 'auto_explain'; auto_explain.log_min_duration (integer) - auto_explain.log_min_duration configuration parameter + auto_explain.log_min_duration configuration parameter @@ -66,13 +66,13 @@ LOAD 'auto_explain'; auto_explain.log_analyze (boolean) - auto_explain.log_analyze configuration parameter + auto_explain.log_analyze configuration parameter - auto_explain.log_analyze causes EXPLAIN ANALYZE - output, rather than just EXPLAIN output, to be printed + auto_explain.log_analyze causes EXPLAIN ANALYZE + output, rather than just EXPLAIN output, to be printed when an execution plan is logged. This parameter is off by default. Only superusers can change this setting. 
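For instance, a superuser could turn on the two parameters described so far for just the current session; an illustrative sketch, with an arbitrary threshold value:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 250; -- log plans for statements over 250ms
    SET auto_explain.log_analyze = true;     -- include actual times and row counts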
@@ -92,14 +92,14 @@ LOAD 'auto_explain'; auto_explain.log_buffers (boolean) - auto_explain.log_buffers configuration parameter + auto_explain.log_buffers configuration parameter auto_explain.log_buffers controls whether buffer usage statistics are printed when an execution plan is logged; it's - equivalent to the BUFFERS option of EXPLAIN. + equivalent to the BUFFERS option of EXPLAIN. This parameter has no effect unless auto_explain.log_analyze is enabled. This parameter is off by default. @@ -112,14 +112,14 @@ LOAD 'auto_explain'; auto_explain.log_timing (boolean) - auto_explain.log_timing configuration parameter + auto_explain.log_timing configuration parameter auto_explain.log_timing controls whether per-node timing information is printed when an execution plan is logged; it's - equivalent to the TIMING option of EXPLAIN. + equivalent to the TIMING option of EXPLAIN. The overhead of repeatedly reading the system clock can slow down queries significantly on some systems, so it may be useful to set this parameter to off when only actual row counts, and not exact times, are @@ -136,7 +136,7 @@ LOAD 'auto_explain'; auto_explain.log_triggers (boolean) - auto_explain.log_triggers configuration parameter + auto_explain.log_triggers configuration parameter @@ -155,14 +155,14 @@ LOAD 'auto_explain'; auto_explain.log_verbose (boolean) - auto_explain.log_verbose configuration parameter + auto_explain.log_verbose configuration parameter auto_explain.log_verbose controls whether verbose details are printed when an execution plan is logged; it's - equivalent to the VERBOSE option of EXPLAIN. + equivalent to the VERBOSE option of EXPLAIN. This parameter is off by default. Only superusers can change this setting. @@ -173,13 +173,13 @@ LOAD 'auto_explain'; auto_explain.log_format (enum) - auto_explain.log_format configuration parameter + auto_explain.log_format configuration parameter auto_explain.log_format selects the - EXPLAIN output format to be used. + EXPLAIN output format to be used. The allowed values are text, xml, json, and yaml. The default is text. Only superusers can change this setting. @@ -191,7 +191,7 @@ LOAD 'auto_explain'; auto_explain.log_nested_statements (boolean) - auto_explain.log_nested_statements configuration parameter + auto_explain.log_nested_statements configuration parameter @@ -208,7 +208,7 @@ LOAD 'auto_explain'; auto_explain.sample_rate (real) - auto_explain.sample_rate configuration parameter + auto_explain.sample_rate configuration parameter @@ -224,7 +224,7 @@ LOAD 'auto_explain'; In ordinary usage, these parameters are set - in postgresql.conf, although superusers can alter them + in postgresql.conf, although superusers can alter them on-the-fly within their own sessions. Typical usage might be: diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml index bd55e8bb77..dd9c1bff5b 100644 --- a/doc/src/sgml/backup.sgml +++ b/doc/src/sgml/backup.sgml @@ -3,10 +3,10 @@ Backup and Restore - backup + backup - As with everything that contains valuable data, PostgreSQL + As with everything that contains valuable data, PostgreSQL databases should be backed up regularly. While the procedure is essentially simple, it is important to have a clear understanding of the underlying techniques and assumptions. 
@@ -14,9 +14,9 @@ There are three fundamentally different approaches to backing up - PostgreSQL data: + PostgreSQL data: - SQL dump + SQL dump File system level backup Continuous archiving @@ -25,30 +25,30 @@ - <acronym>SQL</> Dump + <acronym>SQL</acronym> Dump The idea behind this dump method is to generate a file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. - PostgreSQL provides the utility program + PostgreSQL provides the utility program for this purpose. The basic usage of this command is: pg_dump dbname > outfile - As you see, pg_dump writes its result to the + As you see, pg_dump writes its result to the standard output. We will see below how this can be useful. - While the above command creates a text file, pg_dump + While the above command creates a text file, pg_dump can create files in other formats that allow for parallelism and more fine-grained control of object restoration. - pg_dump is a regular PostgreSQL + pg_dump is a regular PostgreSQL client application (albeit a particularly clever one). This means that you can perform this backup procedure from any remote host that has - access to the database. But remember that pg_dump + access to the database. But remember that pg_dump does not operate with special permissions. In particular, it must have read access to all tables that you want to back up, so in order to back up the entire database you almost always have to run it as a @@ -60,9 +60,9 @@ pg_dump dbname > - To specify which database server pg_dump should + To specify which database server pg_dump should contact, use the command line options ). psql + supports options similar to pg_dump for specifying the database server to connect to and the user name to use. See the reference page for more information. Non-text file dumps are restored using the dbname < - By default, the psql script will continue to + By default, the psql script will continue to execute after an SQL error is encountered. You might wish to run psql with - the ON_ERROR_STOP variable set to alter that + the ON_ERROR_STOP variable set to alter that behavior and have psql exit with an exit status of 3 if an SQL error occurs: @@ -147,8 +147,8 @@ psql --set ON_ERROR_STOP=on dbname < infile Alternatively, you can specify that the whole dump should be restored as a single transaction, so the restore is either fully completed or fully rolled back. This mode can be specified by - passing the - The ability of pg_dump and psql to + The ability of pg_dump and psql to write to or read from pipes makes it possible to dump a database directly from one server to another, for example: -pg_dump -h host1 dbname | psql -h host2 dbname +pg_dump -h host1 dbname | psql -h host2 dbname - The dumps produced by pg_dump are relative to - template0. This means that any languages, procedures, - etc. added via template1 will also be dumped by - pg_dump. As a result, when restoring, if you are - using a customized template1, you must create the - empty database from template0, as in the example + The dumps produced by pg_dump are relative to + template0. This means that any languages, procedures, + etc. added via template1 will also be dumped by + pg_dump. As a result, when restoring, if you are + using a customized template1, you must create the + empty database from template0, as in the example above. @@ -183,52 +183,52 @@ pg_dump -h host1 dbname | psql -h h see and for more information. 
For more advice on how to load large amounts of data - into PostgreSQL efficiently, refer to PostgreSQL efficiently, refer to . - Using <application>pg_dumpall</> + Using <application>pg_dumpall</application> - pg_dump dumps only a single database at a time, + pg_dump dumps only a single database at a time, and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). To support convenient dumping of the entire contents of a database cluster, the program is provided. - pg_dumpall backs up each database in a given + pg_dumpall backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is: -pg_dumpall > outfile +pg_dumpall > outfile - The resulting dump can be restored with psql: + The resulting dump can be restored with psql: psql -f infile postgres (Actually, you can specify any existing database name to start from, - but if you are loading into an empty cluster then postgres + but if you are loading into an empty cluster then postgres should usually be used.) It is always necessary to have - database superuser access when restoring a pg_dumpall + database superuser access when restoring a pg_dumpall dump, as that is required to restore the role and tablespace information. If you use tablespaces, make sure that the tablespace paths in the dump are appropriate for the new installation. - pg_dumpall works by emitting commands to re-create + pg_dumpall works by emitting commands to re-create roles, tablespaces, and empty databases, then invoking - pg_dump for each database. This means that while + pg_dump for each database. This means that while each database will be internally consistent, the snapshots of different databases are not synchronized. Cluster-wide data can be dumped alone using the - pg_dumpall option. This is necessary to fully backup the cluster if running the - pg_dump command on individual databases. + pg_dump command on individual databases. @@ -237,8 +237,8 @@ psql -f infile postgres Some operating systems have maximum file size limits that cause - problems when creating large pg_dump output files. - Fortunately, pg_dump can write to the standard + problems when creating large pg_dump output files. + Fortunately, pg_dump can write to the standard output, so you can use standard Unix tools to work around this potential problem. There are several possible methods: @@ -268,7 +268,7 @@ cat filename.gz | gunzip | psql - Use <command>split</>. + Use <command>split</command>. The split command allows you to split the output into smaller files that are @@ -288,10 +288,10 @@ cat filename* | psql - Use <application>pg_dump</>'s custom dump format. + Use <application>pg_dump</application>'s custom dump format. If PostgreSQL was built on a system with the - zlib compression library installed, the custom dump + zlib compression library installed, the custom dump format will compress data as it writes it to the output file. This will produce dump file sizes similar to using gzip, but it has the added advantage that tables can be restored selectively. 
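The compression and split techniques described for pg_dump apply equally to pg_dumpall output; a sketch assuming GNU split, with the path and chunk size as arbitrary choices:

pg_dumpall | gzip | split -b 1G - /backup/cluster.sql.gz.
cat /backup/cluster.sql.gz.* | gunzip | psql postgres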
The @@ -301,8 +301,8 @@ cat filename* | psql dbname > filename - A custom-format dump is not a script for psql, but - instead must be restored with pg_restore, for example: + A custom-format dump is not a script for psql, but + instead must be restored with pg_restore, for example: pg_restore -d dbname filename @@ -314,12 +314,12 @@ pg_restore -d dbname - For very large databases, you might need to combine split + For very large databases, you might need to combine split with one of the other two approaches. - Use <application>pg_dump</>'s parallel dump feature. + Use <application>pg_dump</application>'s parallel dump feature. To speed up the dump of a large database, you can use pg_dump's parallel mode. This will dump @@ -344,7 +344,7 @@ pg_dump -j num -F d -f An alternative backup strategy is to directly copy the files that - PostgreSQL uses to store the data in the database; + PostgreSQL uses to store the data in the database; explains where these files are located. You can use whatever method you prefer for doing file system backups; for example: @@ -356,13 +356,13 @@ tar -cf backup.tar /usr/local/pgsql/data There are two restrictions, however, which make this method - impractical, or at least inferior to the pg_dump + impractical, or at least inferior to the pg_dump method: - The database server must be shut down in order to + The database server must be shut down in order to get a usable backup. Half-way measures such as disallowing all connections will not work (in part because tar and similar tools do not take @@ -379,7 +379,7 @@ tar -cf backup.tar /usr/local/pgsql/data If you have dug into the details of the file system layout of the database, you might be tempted to try to back up or restore only certain individual tables or databases from their respective files or - directories. This will not work because the + directories. This will not work because the information contained in these files is not usable without the commit log files, pg_xact/*, which contain the commit status of @@ -399,7 +399,7 @@ tar -cf backup.tar /usr/local/pgsql/data consistent snapshot of the data directory, if the file system supports that functionality (and you are willing to trust that it is implemented correctly). The typical procedure is - to make a frozen snapshot of the volume containing the + to make a frozen snapshot of the volume containing the database, then copy the whole data directory (not just parts, see above) from the snapshot to a backup device, then release the frozen snapshot. This will work even while the database server is running. @@ -419,7 +419,7 @@ tar -cf backup.tar /usr/local/pgsql/data the volumes. For example, if your data files and WAL log are on different disks, or if tablespaces are on different file systems, it might not be possible to use snapshot backup because the snapshots - must be simultaneous. + must be simultaneous. Read your file system documentation very carefully before trusting the consistent-snapshot technique in such situations. @@ -435,13 +435,13 @@ tar -cf backup.tar /usr/local/pgsql/data - Another option is to use rsync to perform a file - system backup. This is done by first running rsync + Another option is to use rsync to perform a file + system backup. This is done by first running rsync while the database server is running, then shutting down the database - server long enough to do an rsync --checksum. - ( @@ -508,7 +508,7 @@ tar -cf backup.tar /usr/local/pgsql/data It is not necessary to replay the WAL entries all the way to the end. 
We could stop the replay at any point and have a consistent snapshot of the database as it was at that time. Thus, - this technique supports point-in-time recovery: it is + this technique supports point-in-time recovery: it is possible to restore the database to its state at any time since your base backup was taken. @@ -517,7 +517,7 @@ tar -cf backup.tar /usr/local/pgsql/data If we continuously feed the series of WAL files to another machine that has been loaded with the same base backup file, we - have a warm standby system: at any point we can bring up + have a warm standby system: at any point we can bring up the second machine and it will have a nearly-current copy of the database. @@ -530,7 +530,7 @@ tar -cf backup.tar /usr/local/pgsql/data pg_dump and pg_dumpall do not produce file-system-level backups and cannot be used as part of a continuous-archiving solution. - Such dumps are logical and do not contain enough + Such dumps are logical and do not contain enough information to be used by WAL replay. @@ -546,10 +546,10 @@ tar -cf backup.tar /usr/local/pgsql/data To recover successfully using continuous archiving (also called - online backup by many database vendors), you need a continuous + online backup by many database vendors), you need a continuous sequence of archived WAL files that extends back at least as far as the start time of your backup. So to get started, you should set up and test - your procedure for archiving WAL files before you take your + your procedure for archiving WAL files before you take your first base backup. Accordingly, we first discuss the mechanics of archiving WAL files. @@ -558,15 +558,15 @@ tar -cf backup.tar /usr/local/pgsql/data Setting Up WAL Archiving - In an abstract sense, a running PostgreSQL system + In an abstract sense, a running PostgreSQL system produces an indefinitely long sequence of WAL records. The system physically divides this sequence into WAL segment - files, which are normally 16MB apiece (although the segment size - can be altered during initdb). The segment + files, which are normally 16MB apiece (although the segment size + can be altered during initdb). The segment files are given numeric names that reflect their position in the abstract WAL sequence. When not using WAL archiving, the system normally creates just a few segment files and then - recycles them by renaming no-longer-needed segment files + recycles them by renaming no-longer-needed segment files to higher segment numbers. It's assumed that segment files whose contents precede the checkpoint-before-last are no longer of interest and can be recycled. @@ -577,33 +577,33 @@ tar -cf backup.tar /usr/local/pgsql/data file once it is filled, and save that data somewhere before the segment file is recycled for reuse. Depending on the application and the available hardware, there could be many different ways of saving - the data somewhere: we could copy the segment files to an NFS-mounted + the data somewhere: we could copy the segment files to an NFS-mounted directory on another machine, write them onto a tape drive (ensuring that you have a way of identifying the original name of each file), or batch them together and burn them onto CDs, or something else entirely. To provide the database administrator with flexibility, - PostgreSQL tries not to make any assumptions about how - the archiving will be done. Instead, PostgreSQL lets + PostgreSQL tries not to make any assumptions about how + the archiving will be done. 
Instead, PostgreSQL lets the administrator specify a shell command to be executed to copy a completed segment file to wherever it needs to go. The command could be - as simple as a cp, or it could invoke a complex shell + as simple as a cp, or it could invoke a complex shell script — it's all up to you. To enable WAL archiving, set the - configuration parameter to replica or higher, - to on, + configuration parameter to replica or higher, + to on, and specify the shell command to use in the configuration parameter. In practice these settings will always be placed in the postgresql.conf file. - In archive_command, - %p is replaced by the path name of the file to - archive, while %f is replaced by only the file name. + In archive_command, + %p is replaced by the path name of the file to + archive, while %f is replaced by only the file name. (The path name is relative to the current working directory, i.e., the cluster's data directory.) - Use %% if you need to embed an actual % + Use %% if you need to embed an actual % character in the command. The simplest useful command is something like: @@ -611,9 +611,9 @@ archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/ser archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows which will copy archivable WAL segments to the directory - /mnt/server/archivedir. (This is an example, not a + /mnt/server/archivedir. (This is an example, not a recommendation, and might not work on all platforms.) After the - %p and %f parameters have been replaced, + %p and %f parameters have been replaced, the actual command executed might look like this: test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065 @@ -623,7 +623,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 The archive command will be executed under the ownership of the same - user that the PostgreSQL server is running as. Since + user that the PostgreSQL server is running as. Since the series of WAL files being archived contains effectively everything in your database, you will want to be sure that the archived data is protected from prying eyes; for example, archive into a directory that @@ -633,9 +633,9 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 It is important that the archive command return zero exit status if and only if it succeeds. Upon getting a zero result, - PostgreSQL will assume that the file has been + PostgreSQL will assume that the file has been successfully archived, and will remove or recycle it. However, a nonzero - status tells PostgreSQL that the file was not archived; + status tells PostgreSQL that the file was not archived; it will try again periodically until it succeeds. @@ -650,14 +650,14 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 It is advisable to test your proposed archive command to ensure that it indeed does not overwrite an existing file, and that it returns - nonzero status in this case. + nonzero status in this case. The example command above for Unix ensures this by including a separate - test step. On some Unix platforms, cp has - switches such as that can be used to do the same thing less verbosely, but you should not rely on these without verifying that - the right exit status is returned. (In particular, GNU cp - will return status zero when @@ -668,10 +668,10 @@ test ! 
-f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 fills, nothing further can be archived until the tape is swapped. You should ensure that any error condition or request to a human operator is reported appropriately so that the situation can be - resolved reasonably quickly. The pg_wal/ directory will + resolved reasonably quickly. The pg_wal/ directory will continue to fill with WAL segment files until the situation is resolved. - (If the file system containing pg_wal/ fills up, - PostgreSQL will do a PANIC shutdown. No committed + (If the file system containing pg_wal/ fills up, + PostgreSQL will do a PANIC shutdown. No committed transactions will be lost, but the database will remain offline until you free some space.) @@ -682,7 +682,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 operation continues even if the archiving process falls a little behind. If archiving falls significantly behind, this will increase the amount of data that would be lost in the event of a disaster. It will also mean that - the pg_wal/ directory will contain large numbers of + the pg_wal/ directory will contain large numbers of not-yet-archived segment files, which could eventually exceed available disk space. You are advised to monitor the archiving process to ensure that it is working as you intend. @@ -692,16 +692,16 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 In writing your archive command, you should assume that the file names to be archived can be up to 64 characters long and can contain any combination of ASCII letters, digits, and dots. It is not necessary to - preserve the original relative path (%p) but it is necessary to - preserve the file name (%f). + preserve the original relative path (%p) but it is necessary to + preserve the file name (%f). Note that although WAL archiving will allow you to restore any - modifications made to the data in your PostgreSQL database, + modifications made to the data in your PostgreSQL database, it will not restore changes made to configuration files (that is, - postgresql.conf, pg_hba.conf and - pg_ident.conf), since those are edited manually rather + postgresql.conf, pg_hba.conf and + pg_ident.conf), since those are edited manually rather than through SQL operations. You might wish to keep the configuration files in a location that will be backed up by your regular file system backup procedures. See @@ -719,32 +719,32 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 to a new WAL segment file at least that often. Note that archived files that are archived early due to a forced switch are still the same length as completely full files. It is therefore unwise to set a very - short archive_timeout — it will bloat your archive - storage. archive_timeout settings of a minute or so are + short archive_timeout — it will bloat your archive + storage. archive_timeout settings of a minute or so are usually reasonable. Also, you can force a segment switch manually with - pg_switch_wal if you want to ensure that a + pg_switch_wal if you want to ensure that a just-finished transaction is archived as soon as possible. Other utility functions related to WAL management are listed in . - When wal_level is minimal some SQL commands + When wal_level is minimal some SQL commands are optimized to avoid WAL logging, as described in . 
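As an aside on the archive monitoring advice above: one way to confirm that archiving keeps up is to force a segment switch and then read the archiver statistics. A sketch using the PostgreSQL 10 names (pg_switch_wal, pg_stat_archiver):

psql -c "SELECT pg_switch_wal();"     # force completion of the current WAL segment
psql -c "SELECT archived_count, last_archived_wal, failed_count FROM pg_stat_archiver;"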
If archiving or streaming replication were turned on during execution of one of these statements, WAL would not contain enough information for archive recovery. (Crash recovery is - unaffected.) For this reason, wal_level can only be changed at - server start. However, archive_command can be changed with a + unaffected.) For this reason, wal_level can only be changed at + server start. However, archive_command can be changed with a configuration file reload. If you wish to temporarily stop archiving, - one way to do it is to set archive_command to the empty - string (''). - This will cause WAL files to accumulate in pg_wal/ until a - working archive_command is re-established. + one way to do it is to set archive_command to the empty + string (''). + This will cause WAL files to accumulate in pg_wal/ until a + working archive_command is re-established. @@ -763,8 +763,8 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 It is not necessary to be concerned about the amount of time it takes to make a base backup. However, if you normally run the - server with full_page_writes disabled, you might notice a drop - in performance while the backup runs since full_page_writes is + server with full_page_writes disabled, you might notice a drop + in performance while the backup runs since full_page_writes is effectively forced on during backup mode. @@ -772,13 +772,13 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 To make use of the backup, you will need to keep all the WAL segment files generated during and after the file system backup. To aid you in doing this, the base backup process - creates a backup history file that is immediately + creates a backup history file that is immediately stored into the WAL archive area. This file is named after the first WAL segment file that you need for the file system backup. For example, if the starting WAL file is - 0000000100001234000055CD the backup history file will be + 0000000100001234000055CD the backup history file will be named something like - 0000000100001234000055CD.007C9330.backup. (The second + 0000000100001234000055CD.007C9330.backup. (The second part of the file name stands for an exact position within the WAL file, and can ordinarily be ignored.) Once you have safely archived the file system backup and the WAL segment files used during the @@ -847,14 +847,14 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 SELECT pg_start_backup('label', false, false); - where label is any string you want to use to uniquely + where label is any string you want to use to uniquely identify this backup operation. The connection - calling pg_start_backup must be maintained until the end of + calling pg_start_backup must be maintained until the end of the backup, or the backup will be automatically aborted. - By default, pg_start_backup can take a long time to finish. + By default, pg_start_backup can take a long time to finish. This is because it performs a checkpoint, and the I/O required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval @@ -862,19 +862,19 @@ SELECT pg_start_backup('label', false, false); ). This is usually what you want, because it minimizes the impact on query processing. If you want to start the backup as soon as - possible, change the second parameter to true, which will + possible, change the second parameter to true, which will issue an immediate checkpoint using as much I/O as available. 
- The third parameter being false tells - pg_start_backup to initiate a non-exclusive base backup. + The third parameter being false tells + pg_start_backup to initiate a non-exclusive base backup. Perform the backup, using any convenient file-system-backup tool - such as tar or cpio (not + such as tar or cpio (not pg_dump or pg_dumpall). It is neither necessary nor desirable to stop normal operation of the database @@ -898,45 +898,45 @@ SELECT * FROM pg_stop_backup(false, true); ready to archive. - The pg_stop_backup will return one row with three + The pg_stop_backup will return one row with three values. The second of these fields should be written to a file named - backup_label in the root directory of the backup. The + backup_label in the root directory of the backup. The third field should be written to a file named - tablespace_map unless the field is empty. These files are + tablespace_map unless the field is empty. These files are vital to the backup working, and must be written without modification. Once the WAL segment files active during the backup are archived, you are - done. The file identified by pg_stop_backup's first return + done. The file identified by pg_stop_backup's first return value is the last segment that is required to form a complete set of - backup files. On a primary, if archive_mode is enabled and the - wait_for_archive parameter is true, - pg_stop_backup does not return until the last segment has + backup files. On a primary, if archive_mode is enabled and the + wait_for_archive parameter is true, + pg_stop_backup does not return until the last segment has been archived. - On a standby, archive_mode must be always in order - for pg_stop_backup to wait. + On a standby, archive_mode must be always in order + for pg_stop_backup to wait. Archiving of these files happens automatically since you have - already configured archive_command. In most cases this + already configured archive_command. In most cases this happens quickly, but you are advised to monitor your archive system to ensure there are no delays. If the archive process has fallen behind because of failures of the archive command, it will keep retrying until the archive succeeds and the backup is complete. If you wish to place a time limit on the execution of - pg_stop_backup, set an appropriate + pg_stop_backup, set an appropriate statement_timeout value, but make note that if - pg_stop_backup terminates because of this your backup + pg_stop_backup terminates because of this your backup may not be valid. If the backup process monitors and ensures that all WAL segment files required for the backup are successfully archived then the - wait_for_archive parameter (which defaults to true) can be set + wait_for_archive parameter (which defaults to true) can be set to false to have - pg_stop_backup return as soon as the stop backup record is - written to the WAL. By default, pg_stop_backup will wait + pg_stop_backup return as soon as the stop backup record is + written to the WAL. By default, pg_stop_backup will wait until all WAL has been archived, which can take some time. This option must be used with caution: if WAL archiving is not monitored correctly then the backup might not include all of the WAL files and will @@ -952,7 +952,7 @@ SELECT * FROM pg_stop_backup(false, true); The process for an exclusive backup is mostly the same as for a non-exclusive one, but it differs in a few key steps. This type of backup can only be taken on a primary and does not allow concurrent backups. 
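As a sketch, the non-exclusive procedure just described can be condensed into one scripted psql session; the \! escape keeps the connection that called pg_start_backup open while the copy runs. The label and paths are assumptions:

psql <<'SQL'
SELECT pg_start_backup('nightly', false, false);
\! tar -cf /backup/base.tar -C /usr/local/pgsql data
SELECT * FROM pg_stop_backup(false, true);
SQL

A production script would also write pg_stop_backup's second and third output columns to backup_label and tablespace_map files inside the backup, as described above.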
- Prior to PostgreSQL 9.6, this + Prior to PostgreSQL 9.6, this was the only low-level method available, but it is now recommended that all users upgrade their scripts to use non-exclusive backups if possible. @@ -971,20 +971,20 @@ SELECT * FROM pg_stop_backup(false, true); SELECT pg_start_backup('label'); - where label is any string you want to use to uniquely + where label is any string you want to use to uniquely identify this backup operation. - pg_start_backup creates a backup label file, - called backup_label, in the cluster directory with + pg_start_backup creates a backup label file, + called backup_label, in the cluster directory with information about your backup, including the start time and label string. - The function also creates a tablespace map file, - called tablespace_map, in the cluster directory with - information about tablespace symbolic links in pg_tblspc/ if + The function also creates a tablespace map file, + called tablespace_map, in the cluster directory with + information about tablespace symbolic links in pg_tblspc/ if one or more such link is present. Both files are critical to the integrity of the backup, should you need to restore from it. - By default, pg_start_backup can take a long time to finish. + By default, pg_start_backup can take a long time to finish. This is because it performs a checkpoint, and the I/O required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval @@ -1002,7 +1002,7 @@ SELECT pg_start_backup('label', true); Perform the backup, using any convenient file-system-backup tool - such as tar or cpio (not + such as tar or cpio (not pg_dump or pg_dumpall). It is neither necessary nor desirable to stop normal operation of the database @@ -1012,7 +1012,7 @@ SELECT pg_start_backup('label', true); Note that if the server crashes during the backup it may not be - possible to restart until the backup_label file has been + possible to restart until the backup_label file has been manually deleted from the PGDATA directory. @@ -1033,22 +1033,22 @@ SELECT pg_stop_backup(); Once the WAL segment files active during the backup are archived, you are - done. The file identified by pg_stop_backup's result is + done. The file identified by pg_stop_backup's result is the last segment that is required to form a complete set of backup files. - If archive_mode is enabled, - pg_stop_backup does not return until the last segment has + If archive_mode is enabled, + pg_stop_backup does not return until the last segment has been archived. Archiving of these files happens automatically since you have - already configured archive_command. In most cases this + already configured archive_command. In most cases this happens quickly, but you are advised to monitor your archive system to ensure there are no delays. If the archive process has fallen behind because of failures of the archive command, it will keep retrying until the archive succeeds and the backup is complete. If you wish to place a time limit on the execution of - pg_stop_backup, set an appropriate + pg_stop_backup, set an appropriate statement_timeout value, but make note that if - pg_stop_backup terminates because of this your backup + pg_stop_backup terminates because of this your backup may not be valid. @@ -1063,21 +1063,21 @@ SELECT pg_stop_backup(); When taking a base backup of an active database, this situation is normal and not an error. However, you need to ensure that you can distinguish complaints of this sort from real errors. 
For example, some versions - of rsync return a separate exit code for - vanished source files, and you can write a driver script to + of rsync return a separate exit code for + vanished source files, and you can write a driver script to accept this exit code as a non-error case. Also, some versions of - GNU tar return an error code indistinguishable from - a fatal error if a file was truncated while tar was - copying it. Fortunately, GNU tar versions 1.16 and + GNU tar return an error code indistinguishable from + a fatal error if a file was truncated while tar was + copying it. Fortunately, GNU tar versions 1.16 and later exit with 1 if a file was changed during the backup, - and 2 for other errors. With GNU tar version 1.23 and + and 2 for other errors. With GNU tar version 1.23 and later, you can use the warning options --warning=no-file-changed --warning=no-file-removed to hide the related warning messages. Be certain that your backup includes all of the files under - the database cluster directory (e.g., /usr/local/pgsql/data). + the database cluster directory (e.g., /usr/local/pgsql/data). If you are using tablespaces that do not reside underneath this directory, be careful to include them as well (and be sure that your backup archives symbolic links as links, otherwise the restore will corrupt @@ -1086,21 +1086,21 @@ SELECT pg_stop_backup(); You should, however, omit from the backup the files within the - cluster's pg_wal/ subdirectory. This + cluster's pg_wal/ subdirectory. This slight adjustment is worthwhile because it reduces the risk of mistakes when restoring. This is easy to arrange if - pg_wal/ is a symbolic link pointing to someplace outside + pg_wal/ is a symbolic link pointing to someplace outside the cluster directory, which is a common setup anyway for performance - reasons. You might also want to exclude postmaster.pid - and postmaster.opts, which record information - about the running postmaster, not about the - postmaster which will eventually use this backup. - (These files can confuse pg_ctl.) + reasons. You might also want to exclude postmaster.pid + and postmaster.opts, which record information + about the running postmaster, not about the + postmaster which will eventually use this backup. + (These files can confuse pg_ctl.) It is often a good idea to also omit from the backup the files - within the cluster's pg_replslot/ directory, so that + within the cluster's pg_replslot/ directory, so that replication slots that exist on the master do not become part of the backup. Otherwise, the subsequent use of the backup to create a standby may result in indefinite retention of WAL files on the standby, and @@ -1114,10 +1114,10 @@ SELECT pg_stop_backup(); - The contents of the directories pg_dynshmem/, - pg_notify/, pg_serial/, - pg_snapshots/, pg_stat_tmp/, - and pg_subtrans/ (but not the directories themselves) can be + The contents of the directories pg_dynshmem/, + pg_notify/, pg_serial/, + pg_snapshots/, pg_stat_tmp/, + and pg_subtrans/ (but not the directories themselves) can be omitted from the backup as they will be initialized on postmaster startup. If is set and is under the data directory then the contents of that directory can also be omitted. 
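Putting the inclusion and exclusion advice above into one command line, a sketch (GNU tar assumed; the cluster path is an assumption):

tar -cf /backup/base.tar \
    --exclude=pg_wal --exclude=pg_replslot \
    --exclude=postmaster.pid --exclude=postmaster.opts \
    -C /usr/local/pgsql data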
@@ -1131,13 +1131,13 @@ SELECT pg_stop_backup(); The backup label - file includes the label string you gave to pg_start_backup, - as well as the time at which pg_start_backup was run, and + file includes the label string you gave to pg_start_backup, + as well as the time at which pg_start_backup was run, and the name of the starting WAL file. In case of confusion it is therefore possible to look inside a backup file and determine exactly which backup session the dump file came from. The tablespace map file includes the symbolic link names as they exist in the directory - pg_tblspc/ and the full path of each symbolic link. + pg_tblspc/ and the full path of each symbolic link. These files are not merely for your information; their presence and contents are critical to the proper operation of the system's recovery process. @@ -1146,7 +1146,7 @@ SELECT pg_stop_backup(); It is also possible to make a backup while the server is stopped. In this case, you obviously cannot use - pg_start_backup or pg_stop_backup, and + pg_start_backup or pg_stop_backup, and you will therefore be left to your own devices to keep track of which backup is which and how far back the associated WAL files go. It is generally better to follow the continuous archiving procedure above. @@ -1173,7 +1173,7 @@ SELECT pg_stop_backup(); location in case you need them later. Note that this precaution will require that you have enough free space on your system to hold two copies of your existing database. If you do not have enough space, - you should at least save the contents of the cluster's pg_wal + you should at least save the contents of the cluster's pg_wal subdirectory, as it might contain logs which were not archived before the system went down. @@ -1188,17 +1188,17 @@ SELECT pg_stop_backup(); Restore the database files from your file system backup. Be sure that they are restored with the right ownership (the database system user, not - root!) and with the right permissions. If you are using + root!) and with the right permissions. If you are using tablespaces, - you should verify that the symbolic links in pg_tblspc/ + you should verify that the symbolic links in pg_tblspc/ were correctly restored. - Remove any files present in pg_wal/; these came from the + Remove any files present in pg_wal/; these came from the file system backup and are therefore probably obsolete rather than current. - If you didn't archive pg_wal/ at all, then recreate + If you didn't archive pg_wal/ at all, then recreate it with proper permissions, being careful to ensure that you re-establish it as a symbolic link if you had it set up that way before. @@ -1207,16 +1207,16 @@ SELECT pg_stop_backup(); If you have unarchived WAL segment files that you saved in step 2, - copy them into pg_wal/. (It is best to copy them, + copy them into pg_wal/. (It is best to copy them, not move them, so you still have the unmodified files if a problem occurs and you have to start over.) - Create a recovery command file recovery.conf in the cluster + Create a recovery command file recovery.conf in the cluster data directory (see ). You might - also want to temporarily modify pg_hba.conf to prevent + also want to temporarily modify pg_hba.conf to prevent ordinary users from connecting until you are sure the recovery was successful. @@ -1227,7 +1227,7 @@ SELECT pg_stop_backup(); recovery be terminated because of an external error, the server can simply be restarted and it will continue recovery. 
Upon completion of the recovery process, the server will rename - recovery.conf to recovery.done (to prevent + recovery.conf to recovery.done (to prevent accidentally re-entering recovery mode later) and then commence normal database operations. @@ -1236,7 +1236,7 @@ SELECT pg_stop_backup(); Inspect the contents of the database to ensure you have recovered to the desired state. If not, return to step 1. If all is well, - allow your users to connect by restoring pg_hba.conf to normal. + allow your users to connect by restoring pg_hba.conf to normal. @@ -1245,32 +1245,32 @@ SELECT pg_stop_backup(); The key part of all this is to set up a recovery configuration file that describes how you want to recover and how far the recovery should - run. You can use recovery.conf.sample (normally - located in the installation's share/ directory) as a + run. You can use recovery.conf.sample (normally + located in the installation's share/ directory) as a prototype. The one thing that you absolutely must specify in - recovery.conf is the restore_command, - which tells PostgreSQL how to retrieve archived - WAL file segments. Like the archive_command, this is - a shell command string. It can contain %f, which is - replaced by the name of the desired log file, and %p, + recovery.conf is the restore_command, + which tells PostgreSQL how to retrieve archived + WAL file segments. Like the archive_command, this is + a shell command string. It can contain %f, which is + replaced by the name of the desired log file, and %p, which is replaced by the path name to copy the log file to. (The path name is relative to the current working directory, i.e., the cluster's data directory.) - Write %% if you need to embed an actual % + Write %% if you need to embed an actual % character in the command. The simplest useful command is something like: restore_command = 'cp /mnt/server/archivedir/%f %p' which will copy previously archived WAL segments from the directory - /mnt/server/archivedir. Of course, you can use something + /mnt/server/archivedir. Of course, you can use something much more complicated, perhaps even a shell script that requests the operator to mount an appropriate tape. It is important that the command return nonzero exit status on failure. - The command will be called requesting files that are not + The command will be called requesting files that are not present in the archive; it must return nonzero when so asked. This is not an error condition. An exception is that if the command was terminated by a signal (other than SIGTERM, which is used as @@ -1282,27 +1282,27 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' Not all of the requested files will be WAL segment files; you should also expect requests for files with a suffix of - .backup or .history. Also be aware that - the base name of the %p path will be different from - %f; do not expect them to be interchangeable. + .backup or .history. Also be aware that + the base name of the %p path will be different from + %f; do not expect them to be interchangeable. WAL segments that cannot be found in the archive will be sought in - pg_wal/; this allows use of recent un-archived segments. + pg_wal/; this allows use of recent un-archived segments. However, segments that are available from the archive will be used in - preference to files in pg_wal/. + preference to files in pg_wal/. 
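Condensed into a sketch, the restore procedure above might look like this; the paths are assumptions, the commands run as the database user, and the base backup is assumed to have been taken with the relative-path tar invocation shown earlier:

pg_ctl stop -D /usr/local/pgsql/data
tar -xf /backup/base.tar -C /usr/local/pgsql
rm -rf /usr/local/pgsql/data/pg_wal/*
cat > /usr/local/pgsql/data/recovery.conf <<'EOF'
restore_command = 'cp /mnt/server/archivedir/%f %p'
EOF
pg_ctl start -D /usr/local/pgsql/data     # server starts in archive recovery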
Normally, recovery will proceed through all available WAL segments, thereby restoring the database to the current point in time (or as close as possible given the available WAL segments). Therefore, a normal - recovery will end with a file not found message, the exact text + recovery will end with a file not found message, the exact text of the error message depending upon your choice of - restore_command. You may also see an error message + restore_command. You may also see an error message at the start of recovery for a file named something like - 00000001.history. This is also normal and does not + 00000001.history. This is also normal and does not indicate a problem in simple recovery situations; see for discussion. @@ -1310,8 +1310,8 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' If you want to recover to some previous point in time (say, right before the junior DBA dropped your main transaction table), just specify the - required stopping point in recovery.conf. You can specify - the stop point, known as the recovery target, either by + required stopping point in recovery.conf. You can specify + the stop point, known as the recovery target, either by date/time, named restore point or by completion of a specific transaction ID. As of this writing only the date/time and named restore point options are very usable, since there are no tools to help you identify with any @@ -1321,7 +1321,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' The stop point must be after the ending time of the base backup, i.e., - the end time of pg_stop_backup. You cannot use a base backup + the end time of pg_stop_backup. You cannot use a base backup to recover to a time when that backup was in progress. (To recover to such a time, you must go back to your previous base backup and roll forward from there.) @@ -1332,14 +1332,14 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' If recovery finds corrupted WAL data, recovery will halt at that point and the server will not start. In such a case the recovery process could be re-run from the beginning, specifying a - recovery target before the point of corruption so that recovery + recovery target before the point of corruption so that recovery can complete normally. If recovery fails for an external reason, such as a system crash or if the WAL archive has become inaccessible, then the recovery can simply be restarted and it will restart almost from where it failed. Recovery restart works much like checkpointing in normal operation: the server periodically forces all its state to disk, and then updates - the pg_control file to indicate that the already-processed + the pg_control file to indicate that the already-processed WAL data need not be scanned again. @@ -1359,7 +1359,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' suppose you dropped a critical table at 5:15PM on Tuesday evening, but didn't realize your mistake until Wednesday noon. Unfazed, you get out your backup, restore to the point-in-time 5:14PM - Tuesday evening, and are up and running. In this history of + Tuesday evening, and are up and running. In this history of the database universe, you never dropped the table. But suppose you later realize this wasn't such a great idea, and would like to return to sometime Wednesday morning in the original history. @@ -1372,8 +1372,8 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' - To deal with this problem, PostgreSQL has a notion - of timelines. 
Whenever an archive recovery completes, + To deal with this problem, PostgreSQL has a notion + of timelines. Whenever an archive recovery completes, a new timeline is created to identify the series of WAL records generated after that recovery. The timeline ID number is part of WAL segment file names so a new timeline does @@ -1384,13 +1384,13 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' and so have to do several point-in-time recoveries by trial and error until you find the best place to branch off from the old history. Without timelines this process would soon generate an unmanageable mess. With - timelines, you can recover to any prior state, including + timelines, you can recover to any prior state, including states in timeline branches that you abandoned earlier. - Every time a new timeline is created, PostgreSQL creates - a timeline history file that shows which timeline it branched + Every time a new timeline is created, PostgreSQL creates + a timeline history file that shows which timeline it branched off from and when. These history files are necessary to allow the system to pick the right WAL segment files when recovering from an archive that contains multiple timelines. Therefore, they are archived into the WAL @@ -1408,7 +1408,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' that was current when the base backup was taken. If you wish to recover into some child timeline (that is, you want to return to some state that was itself generated after a recovery attempt), you need to specify the - target timeline ID in recovery.conf. You cannot recover into + target timeline ID in recovery.conf. You cannot recover into timelines that branched off earlier than the base backup. @@ -1424,18 +1424,18 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' Standalone Hot Backups - It is possible to use PostgreSQL's backup facilities to + It is possible to use PostgreSQL's backup facilities to produce standalone hot backups. These are backups that cannot be used for point-in-time recovery, yet are typically much faster to backup and - restore than pg_dump dumps. (They are also much larger - than pg_dump dumps, so in some cases the speed advantage + restore than pg_dump dumps. (They are also much larger + than pg_dump dumps, so in some cases the speed advantage might be negated.) As with base backups, the easiest way to produce a standalone hot backup is to use the - tool. If you include the -X parameter when calling + tool. If you include the -X parameter when calling it, all the write-ahead log required to use the backup will be included in the backup automatically, and no special action is required to restore the backup. @@ -1445,16 +1445,16 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' If more flexibility in copying the backup files is needed, a lower level process can be used for standalone hot backups as well. To prepare for low level standalone hot backups, make sure - wal_level is set to - replica or higher, archive_mode to - on, and set up an archive_command that performs - archiving only when a switch file exists. For example: + wal_level is set to + replica or higher, archive_mode to + on, and set up an archive_command that performs + archiving only when a switch file exists. For example: archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || (test ! 
-f /var/lib/pgsql/archive/%f && cp %p /var/lib/pgsql/archive/%f)' This command will perform archiving when - /var/lib/pgsql/backup_in_progress exists, and otherwise - silently return zero exit status (allowing PostgreSQL + /var/lib/pgsql/backup_in_progress exists, and otherwise + silently return zero exit status (allowing PostgreSQL to recycle the unwanted WAL file). @@ -1469,11 +1469,11 @@ psql -c "select pg_stop_backup();" rm /var/lib/pgsql/backup_in_progress tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/ - The switch file /var/lib/pgsql/backup_in_progress is + The switch file /var/lib/pgsql/backup_in_progress is created first, enabling archiving of completed WAL files to occur. After the backup the switch file is removed. Archived WAL files are then added to the backup so that both base backup and all required - WAL files are part of the same tar file. + WAL files are part of the same tar file. Please remember to add error handling to your backup scripts. @@ -1488,7 +1488,7 @@ tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/ archive_command = 'gzip < %p > /var/lib/pgsql/archive/%f' - You will then need to use gunzip during recovery: + You will then need to use gunzip during recovery: restore_command = 'gunzip < /mnt/server/archivedir/%f > %p' @@ -1501,7 +1501,7 @@ restore_command = 'gunzip < /mnt/server/archivedir/%f > %p' Many people choose to use scripts to define their archive_command, so that their - postgresql.conf entry looks very simple: + postgresql.conf entry looks very simple: archive_command = 'local_backup_script.sh "%p" "%f"' @@ -1509,7 +1509,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' more than a single command in the archiving process. This allows all complexity to be managed within the script, which can be written in a popular scripting language such as - bash or perl. + bash or perl. @@ -1543,7 +1543,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' When using an archive_command script, it's desirable to enable . - Any messages written to stderr from the script will then + Any messages written to stderr from the script will then appear in the database server log, allowing complex configurations to be diagnosed easily if they fail. @@ -1563,7 +1563,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' If a command is executed while a base backup is being taken, and then - the template database that the CREATE DATABASE copied + the template database that the CREATE DATABASE copied is modified while the base backup is still in progress, it is possible that recovery will cause those modifications to be propagated into the created database as well. This is of course @@ -1602,7 +1602,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' before you do so.) Turning off page snapshots does not prevent use of the logs for PITR operations. An area for future development is to compress archived WAL data by removing - unnecessary page copies even when full_page_writes is + unnecessary page copies even when full_page_writes is on. In the meantime, administrators might wish to reduce the number of page snapshots included in WAL by increasing the checkpoint interval parameters as much as feasible. diff --git a/doc/src/sgml/bgworker.sgml b/doc/src/sgml/bgworker.sgml index ea1b5c0c8e..0b092f6e49 100644 --- a/doc/src/sgml/bgworker.sgml +++ b/doc/src/sgml/bgworker.sgml @@ -11,17 +11,17 @@ PostgreSQL can be extended to run user-supplied code in separate processes. 
Such processes are started, stopped and monitored by postgres, which permits them to have a lifetime closely linked to the server's status. - These processes have the option to attach to PostgreSQL's + These processes have the option to attach to PostgreSQL's shared memory area and to connect to databases internally; they can also run multiple transactions serially, just like a regular client-connected server - process. Also, by linking to libpq they can connect to the + process. Also, by linking to libpq they can connect to the server and behave like a regular client application. There are considerable robustness and security risks in using background - worker processes because, being written in the C language, + worker processes because, being written in the C language, they have unrestricted access to data. Administrators wishing to enable modules that include background worker process should exercise extreme caution. Only carefully audited modules should be permitted to run @@ -31,15 +31,15 @@ Background workers can be initialized at the time that - PostgreSQL is started by including the module name in - shared_preload_libraries. A module wishing to run a background + PostgreSQL is started by including the module name in + shared_preload_libraries. A module wishing to run a background worker can register it by calling RegisterBackgroundWorker(BackgroundWorker *worker) - from its _PG_init(). Background workers can also be started + from its _PG_init(). Background workers can also be started after the system is up and running by calling the function RegisterDynamicBackgroundWorker(BackgroundWorker *worker, BackgroundWorkerHandle **handle). Unlike - RegisterBackgroundWorker, which can only be called from within + RegisterBackgroundWorker, which can only be called from within the postmaster, RegisterDynamicBackgroundWorker must be called from a regular backend. @@ -65,7 +65,7 @@ typedef struct BackgroundWorker - bgw_name and bgw_type are + bgw_name and bgw_type are strings to be used in log messages, process listings and similar contexts. bgw_type should be the same for all background workers of the same type, so that it is possible to group such workers in a @@ -76,7 +76,7 @@ typedef struct BackgroundWorker - bgw_flags is a bitwise-or'd bit mask indicating the + bgw_flags is a bitwise-or'd bit mask indicating the capabilities that the module wants. Possible values are: @@ -114,14 +114,14 @@ typedef struct BackgroundWorker bgw_start_time is the server state during which - postgres should start the process; it can be one of - BgWorkerStart_PostmasterStart (start as soon as - postgres itself has finished its own initialization; processes + postgres should start the process; it can be one of + BgWorkerStart_PostmasterStart (start as soon as + postgres itself has finished its own initialization; processes requesting this are not eligible for database connections), - BgWorkerStart_ConsistentState (start as soon as a consistent state + BgWorkerStart_ConsistentState (start as soon as a consistent state has been reached in a hot standby, allowing processes to connect to databases and run read-only queries), and - BgWorkerStart_RecoveryFinished (start as soon as the system has + BgWorkerStart_RecoveryFinished (start as soon as the system has entered normal read-write state). Note the last two values are equivalent in a server that's not a hot standby. 
Note that this setting only indicates when the processes are to be started; they do not stop when a different state @@ -152,9 +152,9 @@ typedef struct BackgroundWorker - bgw_main_arg is the Datum argument + bgw_main_arg is the Datum argument to the background worker main function. This main function should take a - single argument of type Datum and return void. + single argument of type Datum and return void. bgw_main_arg will be passed as the argument. In addition, the global variable MyBgworkerEntry points to a copy of the BackgroundWorker structure @@ -165,39 +165,39 @@ typedef struct BackgroundWorker On Windows (and anywhere else where EXEC_BACKEND is defined) or in dynamic background workers it is not safe to pass a - Datum by reference, only by value. If an argument is required, it + Datum by reference, only by value. If an argument is required, it is safest to pass an int32 or other small value and use that as an index - into an array allocated in shared memory. If a value like a cstring + into an array allocated in shared memory. If a value like a cstring or text is passed then the pointer won't be valid from the new background worker process. bgw_extra can contain extra data to be passed - to the background worker. Unlike bgw_main_arg, this data + to the background worker. Unlike bgw_main_arg, this data is not passed as an argument to the worker's main function, but it can be accessed via MyBgworkerEntry, as discussed above. bgw_notify_pid is the PID of a PostgreSQL - backend process to which the postmaster should send SIGUSR1 + backend process to which the postmaster should send SIGUSR1 when the process is started or exits. It should be 0 for workers registered at postmaster startup time, or when the backend registering the worker does not wish to wait for the worker to start up. Otherwise, it should be - initialized to MyProcPid. + initialized to MyProcPid. Once running, the process can connect to a database by calling BackgroundWorkerInitializeConnection(char *dbname, char *username) or BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid). This allows the process to run transactions and queries using the - SPI interface. If dbname is NULL or - dboid is InvalidOid, the session is not connected + SPI interface. If dbname is NULL or + dboid is InvalidOid, the session is not connected to any particular database, but shared catalogs can be accessed. - If username is NULL or useroid is - InvalidOid, the process will run as the superuser created - during initdb. + If username is NULL or useroid is + InvalidOid, the process will run as the superuser created + during initdb. A background worker can only call one of these two functions, and only once. It is not possible to switch databases. @@ -207,24 +207,24 @@ typedef struct BackgroundWorker background worker's main function, and must be unblocked by it; this is to allow the process to customize its signal handlers, if necessary. Signals can be unblocked in the new process by calling - BackgroundWorkerUnblockSignals and blocked by calling - BackgroundWorkerBlockSignals. + BackgroundWorkerUnblockSignals and blocked by calling + BackgroundWorkerBlockSignals. If bgw_restart_time for a background worker is - configured as BGW_NEVER_RESTART, or if it exits with an exit - code of 0 or is terminated by TerminateBackgroundWorker, + configured as BGW_NEVER_RESTART, or if it exits with an exit + code of 0 or is terminated by TerminateBackgroundWorker, it will be automatically unregistered by the postmaster on exit. 
Otherwise, it will be restarted after the time period configured via - bgw_restart_time, or immediately if the postmaster reinitializes the cluster due to a backend failure. Backends which need to suspend execution only temporarily should use an interruptible sleep rather than exiting; this can be achieved by calling WaitLatch(). Make sure the - WL_POSTMASTER_DEATH flag is set when calling that function, and verify the return code for a prompt exit in the emergency case that - postgres itself has terminated. @@ -238,29 +238,29 @@ typedef struct BackgroundWorker opaque handle that can subsequently be passed to GetBackgroundWorkerPid(BackgroundWorkerHandle *, pid_t *) or TerminateBackgroundWorker(BackgroundWorkerHandle *). - GetBackgroundWorkerPid can be used to poll the status of the - worker: a return value of BGWH_NOT_YET_STARTED indicates that + GetBackgroundWorkerPid can be used to poll the status of the + worker: a return value of BGWH_NOT_YET_STARTED indicates that the worker has not yet been started by the postmaster; BGWH_STOPPED indicates that it has been started but is no longer running; and BGWH_STARTED indicates that it is currently running. In this last case, the PID will also be returned via the second argument. - TerminateBackgroundWorker causes the postmaster to send - SIGTERM to the worker if it is running, and to unregister it + TerminateBackgroundWorker causes the postmaster to send - SIGTERM to the worker if it is running, and to unregister it as soon as it is not. In some cases, a process which registers a background worker may wish to wait for the worker to start up. This can be accomplished by initializing - bgw_notify_pid to MyProcPid and + bgw_notify_pid to MyProcPid and then passing the BackgroundWorkerHandle * obtained at registration time to the WaitForBackgroundWorkerStartup(BackgroundWorkerHandle *handle, pid_t *) function. This function will block until the postmaster has attempted to start the background worker, or until the postmaster dies. If the background worker - is running, the return value will be BGWH_STARTED, and + is running, the return value will be BGWH_STARTED, and the PID will be written to the provided address. Otherwise, the return value will be BGWH_STOPPED or BGWH_POSTMASTER_DIED. @@ -279,7 +279,7 @@ typedef struct BackgroundWorker - The src/test/modules/worker_spi module + The src/test/modules/worker_spi module contains a working example, which demonstrates some useful techniques. diff --git a/doc/src/sgml/biblio.sgml b/doc/src/sgml/biblio.sgml index 5462bc38e4..d7547e6e92 100644 --- a/doc/src/sgml/biblio.sgml +++ b/doc/src/sgml/biblio.sgml @@ -171,7 +171,7 @@ ssimkovi@ag.or.at Discusses SQL history and syntax, and describes the addition of - INTERSECT and EXCEPT constructs into + INTERSECT and EXCEPT constructs into PostgreSQL. Prepared as a Master's Thesis with the support of O. Univ. Prof. Dr. Georg Gottlob and Univ. Ass. Mag. Katrin Seyr at Vienna University of Technology.
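To see the background-worker interfaces from the section above in action, the worker_spi test module it mentions can be driven from the shell. A sketch that assumes a configured PostgreSQL source tree and a running server:

make -C src/test/modules/worker_spi install
psql -c "CREATE EXTENSION worker_spi;"
psql -c "SELECT worker_spi_launch(1);"                     # registers one dynamic background worker
psql -c "SELECT pid, backend_type FROM pg_stat_activity;"  # the new worker should be listed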
diff --git a/doc/src/sgml/bki.sgml b/doc/src/sgml/bki.sgml index af6d8d1d2a..33378b46ea 100644 --- a/doc/src/sgml/bki.sgml +++ b/doc/src/sgml/bki.sgml @@ -21,7 +21,7 @@ input file used by initdb is created as part of building and installing PostgreSQL by a program named genbki.pl, which reads some - specially formatted C header files in the src/include/catalog/ + specially formatted C header files in the src/include/catalog/ directory of the source tree. The created BKI file is called postgres.bki and is normally installed in the @@ -67,13 +67,13 @@ - create + create tablename tableoid - bootstrap - shared_relation - without_oids - rowtype_oid oid + bootstrap + shared_relation + without_oids + rowtype_oid oid (name1 = type1 FORCE NOT NULL | FORCE NULL , @@ -93,7 +93,7 @@ The following column types are supported directly by - bootstrap.c: bool, + bootstrap.c: bool, bytea, char (1 byte), name, int2, int4, regproc, regclass, @@ -104,31 +104,31 @@ _oid (array), _char (array), _aclitem (array). Although it is possible to create tables containing columns of other types, this cannot be done until - after pg_type has been created and filled with + after pg_type has been created and filled with appropriate entries. (That effectively means that only these column types can be used in bootstrapped tables, but non-bootstrap catalogs can contain any built-in type.) - When bootstrap is specified, + When bootstrap is specified, the table will only be created on disk; nothing is entered into pg_class, pg_attribute, etc, for it. Thus the table will not be accessible by ordinary SQL operations until - such entries are made the hard way (with insert + such entries are made the hard way (with insert commands). This option is used for creating pg_class etc themselves. - The table is created as shared if shared_relation is + The table is created as shared if shared_relation is specified. - It will have OIDs unless without_oids is specified. - The table's row type OID (pg_type OID) can optionally - be specified via the rowtype_oid clause; if not specified, - an OID is automatically generated for it. (The rowtype_oid - clause is useless if bootstrap is specified, but it can be + It will have OIDs unless without_oids is specified. + The table's row type OID (pg_type OID) can optionally + be specified via the rowtype_oid clause; if not specified, + an OID is automatically generated for it. (The rowtype_oid + clause is useless if bootstrap is specified, but it can be provided anyway for documentation.) @@ -136,7 +136,7 @@ - open tablename + open tablename @@ -150,7 +150,7 @@ - close tablename + close tablename @@ -163,7 +163,7 @@ - insert OID = oid_value ( value1 value2 ... ) + insert OID = oid_value ( value1 value2 ... ) @@ -188,14 +188,14 @@ - declare unique - index indexname + declare unique + index indexname indexoid - on tablename - using amname - ( opclass1 + on tablename + using amname + ( opclass1 name1 - , ... ) + , ... ) @@ -220,10 +220,10 @@ - declare toast + declare toast toasttableoid toastindexoid - on tablename + on tablename @@ -234,14 +234,14 @@ toasttableoid and its index is assigned OID toastindexoid. - As with declare index, filling of the index + As with declare index, filling of the index is postponed. - build indices + build indices @@ -257,17 +257,17 @@ Structure of the Bootstrap <acronym>BKI</acronym> File - The open command cannot be used until the tables it uses + The open command cannot be used until the tables it uses exist and have entries for the table that is to be opened. 
- (These minimum tables are pg_class, - pg_attribute, pg_proc, and - pg_type.) To allow those tables themselves to be filled, - create with the bootstrap option implicitly opens + (These minimum tables are pg_class, + pg_attribute, pg_proc, and + pg_type.) To allow those tables themselves to be filled, + create with the bootstrap option implicitly opens the created table for data insertion. - Also, the declare index and declare toast + Also, the declare index and declare toast commands cannot be used until the system catalogs they need have been created and filled in. @@ -278,17 +278,17 @@ - create bootstrap one of the critical tables + create bootstrap one of the critical tables - insert data describing at least the critical tables + insert data describing at least the critical tables - close + close @@ -298,22 +298,22 @@ - create (without bootstrap) a noncritical table + create (without bootstrap) a noncritical table - open + open - insert desired data + insert desired data - close + close @@ -328,7 +328,7 @@ - build indices + build indices diff --git a/doc/src/sgml/bloom.sgml b/doc/src/sgml/bloom.sgml index 396348c523..e13ebf80fd 100644 --- a/doc/src/sgml/bloom.sgml +++ b/doc/src/sgml/bloom.sgml @@ -8,7 +8,7 @@ - bloom provides an index access method based on + bloom provides an index access method based on Bloom filters. @@ -42,29 +42,29 @@ Parameters - A bloom index accepts the following parameters in its - WITH clause: + A bloom index accepts the following parameters in its + WITH clause: - length + length Length of each signature (index entry) in bits. The default - is 80 bits and maximum is 4096. + is 80 bits and maximum is 4096. - col1 — col32 + col1 — col32 Number of bits generated for each index column. Each parameter's name refers to the number of the index column that it controls. The default - is 2 bits and maximum is 4095. Parameters for + is 2 bits and maximum is 4095. Parameters for index columns not actually used are ignored. @@ -87,8 +87,8 @@ CREATE INDEX bloomidx ON tbloom USING bloom (i1,i2,i3) The index is created with a signature length of 80 bits, with attributes i1 and i2 mapped to 2 bits, and attribute i3 mapped to 4 bits. We could - have omitted the length, col1, - and col2 specifications since those have the default values. + have omitted the length, col1, + and col2 specifications since those have the default values. @@ -175,7 +175,7 @@ CREATE INDEX Note the relatively large number of false positives: 2439 rows were selected to be visited in the heap, but none actually matched the query. We could reduce that by specifying a larger signature length. - In this example, creating the index with length=200 + In this example, creating the index with length=200 reduced the number of false positives to 55; but it doubled the index size (to 306 MB) and ended up being slower for this query (125 ms overall). @@ -213,7 +213,7 @@ CREATE INDEX An operator class for bloom indexes requires only a hash function for the indexed data type and an equality operator for searching. This example - shows the operator class definition for the text data type: + shows the operator class definition for the text data type: @@ -230,7 +230,7 @@ DEFAULT FOR TYPE text USING bloom AS - Only operator classes for int4 and text are + Only operator classes for int4 and text are included with the module. 
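For reference, the bloom setup described above might be spelled out as follows; this is a sketch using a three-column variant of the section's tbloom table, and the WITH parameters restate the values discussed (80-bit signatures, 2 bits for i1 and i2, 4 bits for i3):

    CREATE EXTENSION bloom;

    CREATE TABLE tbloom (i1 int, i2 int, i3 int);

    -- signature of 80 bits; 2 bits per value for i1 and i2, 4 bits for i3
    CREATE INDEX bloomidx ON tbloom USING bloom (i1, i2, i3)
           WITH (length = 80, col1 = 2, col2 = 2, col3 = 4);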
diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml index 8dcc29925b..91c01700ed 100644 --- a/doc/src/sgml/brin.sgml +++ b/doc/src/sgml/brin.sgml @@ -16,7 +16,7 @@ BRIN is designed for handling very large tables in which certain columns have some natural correlation with their physical location within the table. - A block range is a group of pages that are physically + A block range is a group of pages that are physically adjacent in the table; for each block range, some summary info is stored by the index. For example, a table storing a store's sale orders might have @@ -29,7 +29,7 @@ BRIN indexes can satisfy queries via regular bitmap index scans, and will return all tuples in all pages within each range if - the summary info stored by the index is consistent with the + the summary info stored by the index is consistent with the query conditions. The query executor is in charge of rechecking these tuples and discarding those that do not match the query conditions — in other words, these @@ -51,9 +51,9 @@ The size of the block range is determined at index creation time by - the pages_per_range storage parameter. The number of index + the pages_per_range storage parameter. The number of index entries will be equal to the size of the relation in pages divided by - the selected value for pages_per_range. Therefore, the smaller + the selected value for pages_per_range. Therefore, the smaller the number, the larger the index becomes (because of the need to store more index entries), but at the same time the summary data stored can be more precise and more data blocks can be skipped during an index scan. @@ -99,9 +99,9 @@ - The minmax + The minmax operator classes store the minimum and the maximum values appearing - in the indexed column within the range. The inclusion + in the indexed column within the range. The inclusion operator classes store a value which includes the values in the indexed column within the range. @@ -162,21 +162,21 @@ - box_inclusion_ops + box_inclusion_ops box - << - &< - && - &> - >> - ~= - @> - <@ - &<| - <<| + << + &< + && + &> + >> + ~= + @> + <@ + &<| + <<| |>> - |&> + |&> @@ -249,11 +249,11 @@ network_inclusion_ops inet - && - >>= + && + >>= <<= = - >> + >> << @@ -346,18 +346,18 @@ - range_inclusion_ops + range_inclusion_ops any range type - << - &< - && - &> - >> - @> - <@ - -|- - = + << + &< + && + &> + >> + @> + <@ + -|- + = < <= = @@ -505,11 +505,11 @@ - BrinOpcInfo *opcInfo(Oid type_oid) + BrinOpcInfo *opcInfo(Oid type_oid) Returns internal information about the indexed columns' summary data. - The return value must point to a palloc'd BrinOpcInfo, + The return value must point to a palloc'd BrinOpcInfo, which has this definition: typedef struct BrinOpcInfo @@ -524,7 +524,7 @@ typedef struct BrinOpcInfo TypeCacheEntry *oi_typcache[FLEXIBLE_ARRAY_MEMBER]; } BrinOpcInfo; - BrinOpcInfo.oi_opaque can be used by the + BrinOpcInfo.oi_opaque can be used by the operator class routines to pass information between support procedures during an index scan. @@ -797,8 +797,8 @@ typedef struct BrinOpcInfo It should accept two arguments with the same data type as the operator class, and return the union of them. The inclusion operator class can store union values with different data types if it is defined with the - STORAGE parameter. The return value of the union - function should match the STORAGE data type. + STORAGE parameter. The return value of the union + function should match the STORAGE data type. 
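As an illustration of the pages_per_range trade-off described above (the table sales and column order_date are hypothetical): a smaller value produces a larger but more selective BRIN index.

    -- summarize every 32 heap pages instead of the default 128
    CREATE INDEX sales_orderdate_brin ON sales
           USING brin (order_date) WITH (pages_per_range = 32);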
@@ -823,11 +823,11 @@ typedef struct BrinOpcInfo on another operator strategy as shown in , or the same operator strategy as themselves. They require the dependency - operator to be defined with the STORAGE data type as the + operator to be defined with the STORAGE data type as the left-hand-side argument and the other supported data type to be the right-hand-side argument of the supported operator. See - float4_minmax_ops as an example of minmax, and - box_inclusion_ops as an example of inclusion. + float4_minmax_ops as an example of minmax, and + box_inclusion_ops as an example of inclusion. diff --git a/doc/src/sgml/btree-gin.sgml b/doc/src/sgml/btree-gin.sgml index 375e7ec4be..e491fa76e7 100644 --- a/doc/src/sgml/btree-gin.sgml +++ b/doc/src/sgml/btree-gin.sgml @@ -8,16 +8,16 @@ - btree_gin provides sample GIN operator classes that + btree_gin provides sample GIN operator classes that implement B-tree equivalent behavior for the data types - int2, int4, int8, float4, - float8, timestamp with time zone, - timestamp without time zone, time with time zone, - time without time zone, date, interval, - oid, money, "char", - varchar, text, bytea, bit, - varbit, macaddr, macaddr8, inet, - cidr, and all enum types. + int2, int4, int8, float4, + float8, timestamp with time zone, + timestamp without time zone, time with time zone, + time without time zone, date, interval, + oid, money, "char", + varchar, text, bytea, bit, + varbit, macaddr, macaddr8, inet, + cidr, and all enum types. diff --git a/doc/src/sgml/btree-gist.sgml b/doc/src/sgml/btree-gist.sgml index f3c639c2f3..dcb939f1fb 100644 --- a/doc/src/sgml/btree-gist.sgml +++ b/doc/src/sgml/btree-gist.sgml @@ -8,16 +8,16 @@ - btree_gist provides GiST index operator classes that + btree_gist provides GiST index operator classes that implement B-tree equivalent behavior for the data types - int2, int4, int8, float4, - float8, numeric, timestamp with time zone, - timestamp without time zone, time with time zone, - time without time zone, date, interval, - oid, money, char, - varchar, text, bytea, bit, - varbit, macaddr, macaddr8, inet, - cidr, uuid, and all enum types. + int2, int4, int8, float4, + float8, numeric, timestamp with time zone, + timestamp without time zone, time with time zone, + time without time zone, date, interval, + oid, money, char, + varchar, text, bytea, bit, + varbit, macaddr, macaddr8, inet, + cidr, uuid, and all enum types. @@ -33,7 +33,7 @@ - In addition to the typical B-tree search operators, btree_gist + In addition to the typical B-tree search operators, btree_gist also provides index support for <> (not equals). This may be useful in combination with an exclusion constraint, @@ -42,14 +42,14 @@ Also, for data types for which there is a natural distance metric, - btree_gist defines a distance operator <->, + btree_gist defines a distance operator <->, and provides GiST index support for nearest-neighbor searches using this operator. Distance operators are provided for - int2, int4, int8, float4, - float8, timestamp with time zone, - timestamp without time zone, - time without time zone, date, interval, - oid, and money. + int2, int4, int8, float4, + float8, timestamp with time zone, + timestamp without time zone, + time without time zone, date, interval, + oid, and money. diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index cfec2465d2..ef60a58631 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -387,7 +387,7 @@
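The btree_gist support for = within GiST enables, for example, an exclusion constraint that mixes scalar equality with a range-overlap test; a sketch (the table and columns are hypothetical, and the module must be installed first):

    CREATE EXTENSION btree_gist;

    CREATE TABLE room_reservation (
        room    text,
        during  tsrange,
        -- no two reservations for the same room may overlap in time
        EXCLUDE USING gist (room WITH =, during WITH &&)
    );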
- <structname>pg_aggregate</> Columns + <structname>pg_aggregate</structname> Columns @@ -410,9 +410,9 @@ charAggregate kind: - n for normal aggregates, - o for ordered-set aggregates, or - h for hypothetical-set aggregates + n for normal aggregates, + o for ordered-set aggregates, or + h for hypothetical-set aggregates @@ -421,7 +421,7 @@ Number of direct (non-aggregated) arguments of an ordered-set or hypothetical-set aggregate, counting a variadic array as one argument. - If equal to pronargs, the aggregate must be variadic + If equal to pronargs, the aggregate must be variadic and the variadic array describes the aggregated arguments as well as the final direct arguments. Always zero for normal aggregates. @@ -592,7 +592,7 @@
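A quick way to see the aggkind and aggnumdirectargs values just described is to query pg_aggregate directly, for instance for the non-normal (ordered-set and hypothetical-set) aggregates:

    SELECT aggfnoid, aggkind, aggnumdirectargs
    FROM pg_aggregate
    WHERE aggkind <> 'n';   -- only 'o' and 'h' entries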
- <structname>pg_am</> Columns + <structname>pg_am</structname> Columns @@ -644,7 +644,7 @@ - Before PostgreSQL 9.6, pg_am + Before PostgreSQL 9.6, pg_am contained many additional columns representing properties of index access methods. That data is now only directly visible at the C code level. However, pg_index_column_has_property() and related @@ -667,8 +667,8 @@ The catalog pg_amop stores information about operators associated with access method operator families. There is one row for each operator that is a member of an operator family. A family - member can be either a search operator or an - ordering operator. An operator + member can be either a search operator or an + ordering operator. An operator can appear in more than one family, but cannot appear in more than one search position nor more than one ordering position within a family. (It is allowed, though unlikely, for an operator to be used for both @@ -676,7 +676,7 @@
- <structname>pg_amop</> Columns + <structname>pg_amop</structname> Columns @@ -728,8 +728,8 @@ amoppurposechar - Operator purpose, either s for search or - o for ordering + Operator purpose, either s for search or + o for ordering @@ -759,26 +759,26 @@
- A search operator entry indicates that an index of this operator + A search operator entry indicates that an index of this operator family can be searched to find all rows satisfying - WHERE - indexed_column - operator - constant. + WHERE + indexed_column + operator + constant. Obviously, such an operator must return boolean, and its left-hand input type must match the index's column data type. - An ordering operator entry indicates that an index of this + An ordering operator entry indicates that an index of this operator family can be scanned to return rows in the order represented by - ORDER BY - indexed_column - operator - constant. + ORDER BY + indexed_column + operator + constant. Such an operator could return any sortable data type, though again its left-hand input type must match the index's column data type. - The exact semantics of the ORDER BY are specified by the + The exact semantics of the ORDER BY are specified by the amopsortfamily column, which must reference a B-tree operator family for the operator's result type. @@ -787,19 +787,19 @@ At present, it's assumed that the sort order for an ordering operator is the default for the referenced operator family, i.e., ASC NULLS - LAST. This might someday be relaxed by adding additional columns + LAST. This might someday be relaxed by adding additional columns to specify sort options explicitly. - An entry's amopmethod must match the - opfmethod of its containing operator family (including - amopmethod here is an intentional denormalization of the + An entry's amopmethod must match the + opfmethod of its containing operator family (including + amopmethod here is an intentional denormalization of the catalog structure for performance reasons). Also, - amoplefttype and amoprighttype must match - the oprleft and oprright fields of the - referenced pg_operator entry. + amoplefttype and amoprighttype must match + the oprleft and oprright fields of the + referenced pg_operator entry. @@ -880,14 +880,14 @@ The usual interpretation of the - amproclefttype and amprocrighttype fields + amproclefttype and amprocrighttype fields is that they identify the left and right input types of the operator(s) that a particular support procedure supports. For some access methods these match the input data type(s) of the support procedure itself, for - others not. There is a notion of default support procedures for - an index, which are those with amproclefttype and - amprocrighttype both equal to the index operator class's - opcintype. + others not. There is a notion of default support procedures for + an index, which are those with amproclefttype and + amprocrighttype both equal to the index operator class's + opcintype. @@ -909,7 +909,7 @@
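For example, the search operators making up the built-in btree integer_ops family can be listed with a join along these lines (a sketch; for btree, strategy numbers 1 through 5 correspond to <, <=, =, >=, >):

    SELECT ao.amopopr::regoperator, ao.amopstrategy
    FROM pg_amop ao
    JOIN pg_opfamily opf ON opf.oid = ao.amopfamily
    JOIN pg_am am ON am.oid = opf.opfmethod
    WHERE am.amname = 'btree'
      AND opf.opfname = 'integer_ops'
      AND ao.amoppurpose = 's'          -- search operators only
    ORDER BY ao.amoplefttype, ao.amoprighttype, ao.amopstrategy;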
- <structname>pg_attrdef</> Columns + <structname>pg_attrdef</structname> Columns @@ -964,7 +964,7 @@ The adsrc field is historical, and is best not used, because it does not track outside changes that might affect the representation of the default value. Reverse-compiling the - adbin field (with pg_get_expr for + adbin field (with pg_get_expr for example) is a better way to display the default value. @@ -993,7 +993,7 @@
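Following the advice above, a hedged example of reverse-compiling stored column defaults with pg_get_expr rather than reading adsrc:

    SELECT c.relname, a.attname,
           pg_get_expr(d.adbin, d.adrelid) AS default_value
    FROM pg_attrdef d
    JOIN pg_class c ON c.oid = d.adrelid
    JOIN pg_attribute a ON a.attrelid = d.adrelid
                       AND a.attnum = d.adnum;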
- <structname>pg_attribute</> Columns + <structname>pg_attribute</structname> Columns @@ -1072,7 +1072,7 @@ Number of dimensions, if the column is an array type; otherwise 0. (Presently, the number of dimensions of an array is not enforced, - so any nonzero value effectively means it's an array.) + so any nonzero value effectively means it's an array.) @@ -1096,7 +1096,7 @@ supplied at table creation time (for example, the maximum length of a varchar column). It is passed to type-specific input functions and length coercion functions. - The value will generally be -1 for types that do not need atttypmod. + The value will generally be -1 for types that do not need atttypmod. @@ -1105,7 +1105,7 @@ bool - A copy of pg_type.typbyval of this column's type + A copy of pg_type.typbyval of this column's type @@ -1114,7 +1114,7 @@ char - Normally a copy of pg_type.typstorage of this + Normally a copy of pg_type.typstorage of this column's type. For TOAST-able data types, this can be altered after column creation to control storage policy. @@ -1125,7 +1125,7 @@ char - A copy of pg_type.typalign of this column's type + A copy of pg_type.typalign of this column's type @@ -1216,7 +1216,7 @@ text[] - Attribute-level options, as keyword=value strings + Attribute-level options, as keyword=value strings @@ -1225,7 +1225,7 @@ text[] - Attribute-level foreign data wrapper options, as keyword=value strings + Attribute-level foreign data wrapper options, as keyword=value strings @@ -1237,9 +1237,9 @@ In a dropped column's pg_attribute entry, atttypid is reset to zero, but attlen and the other fields copied from - pg_type are still valid. This arrangement is needed + pg_type are still valid. This arrangement is needed to cope with the situation where the dropped column's data type was - later dropped, and so there is no pg_type row anymore. + later dropped, and so there is no pg_type row anymore. attlen and the other fields can be used to interpret the contents of a row of the table. @@ -1256,9 +1256,9 @@ The catalog pg_authid contains information about database authorization identifiers (roles). A role subsumes the concepts - of users and groups. A user is essentially just a - role with the rolcanlogin flag set. Any role (with or - without rolcanlogin) can have other roles as members; see + of users and groups. A user is essentially just a + role with the rolcanlogin flag set. Any role (with or + without rolcanlogin) can have other roles as members; see pg_auth_members. @@ -1283,7 +1283,7 @@
- <structname>pg_authid</> Columns + <structname>pg_authid</structname> Columns @@ -1390,20 +1390,20 @@ For an MD5 encrypted password, rolpassword - column will begin with the string md5 followed by a + column will begin with the string md5 followed by a 32-character hexadecimal MD5 hash. The MD5 hash will be of the user's password concatenated to their user name. For example, if user - joe has password xyzzy, PostgreSQL - will store the md5 hash of xyzzyjoe. + joe has password xyzzy, PostgreSQL + will store the md5 hash of xyzzyjoe. If the password is encrypted with SCRAM-SHA-256, it has the format: -SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey> +SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey> - where salt, StoredKey and - ServerKey are in Base64 encoded format. This format is + where salt, StoredKey and + ServerKey are in Base64 encoded format. This format is the same as that specified by RFC 5803. @@ -1435,7 +1435,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
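The MD5 format shown above can be reproduced with the built-in md5() function; for the example user joe with password xyzzy:

    SELECT 'md5' || md5('xyzzy' || 'joe');
    -- reproduces the rolpassword value pg_authid would store for that user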
- <structname>pg_auth_members</> Columns + <structname>pg_auth_members</structname> Columns @@ -1459,7 +1459,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< member oid pg_authid.oid - ID of a role that is a member of roleid + ID of a role that is a member of roleid @@ -1473,8 +1473,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< admin_option bool - True if member can grant membership in - roleid to others + True if member can grant membership in + roleid to others @@ -1501,14 +1501,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< cannot be deduced from some generic rule. For example, casting between a domain and its base type is not explicitly represented in pg_cast. Another important exception is that - automatic I/O conversion casts, those performed using a data - type's own I/O functions to convert to or from text or other + automatic I/O conversion casts, those performed using a data + type's own I/O functions to convert to or from text or other string types, are not explicitly represented in pg_cast.
- <structname>pg_cast</> Columns + <structname>pg_cast</structname> Columns @@ -1558,11 +1558,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< Indicates what contexts the cast can be invoked in. - e means only as an explicit cast (using - CAST or :: syntax). - a means implicitly in assignment + e means only as an explicit cast (using + CAST or :: syntax). + a means implicitly in assignment to a target column, as well as explicitly. - i means implicitly in expressions, as well as the + i means implicitly in expressions, as well as the other cases. @@ -1572,9 +1572,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< Indicates how the cast is performed. - f means that the function specified in the castfunc field is used. - i means that the input/output functions are used. - b means that the types are binary-coercible, thus no conversion is required. + f means that the function specified in the castfunc field is used. + i means that the input/output functions are used. + b means that the types are binary-coercible, thus no conversion is required. @@ -1586,18 +1586,18 @@ SCRAM-SHA-256$<iteration count>:<salt>< always take the cast source type as their first argument type, and return the cast destination type as their result type. A cast function can have up to three arguments. The second argument, - if present, must be type integer; it receives the type + if present, must be type integer; it receives the type modifier associated with the destination type, or -1 if there is none. The third argument, - if present, must be type boolean; it receives true - if the cast is an explicit cast, false otherwise. + if present, must be type boolean; it receives true + if the cast is an explicit cast, false otherwise. It is legitimate to create a pg_cast entry in which the source and target types are the same, if the associated function takes more than one argument. Such entries represent - length coercion functions that coerce values of the type + length coercion functions that coerce values of the type to be legal for a particular type modifier value. @@ -1624,14 +1624,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< table. This includes indexes (but see also pg_index), sequences (but see also pg_sequence), views, materialized - views, composite types, and TOAST tables; see relkind. + views, composite types, and TOAST tables; see relkind. Below, when we mean all of these kinds of objects we speak of relations. Not all columns are meaningful for all relation types.
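The castcontext and castmethod codes described for pg_cast can be inspected directly; for illustration, the casts defined from integer:

    SELECT castsource::regtype, casttarget::regtype,
           castcontext, castmethod
    FROM pg_cast
    WHERE castsource = 'integer'::regtype;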
- <structname>pg_class</> Columns + <structname>pg_class</structname> Columns @@ -1673,7 +1673,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_type.oid The OID of the data type that corresponds to this table's row type, - if any (zero for indexes, which have no pg_type entry) + if any (zero for indexes, which have no pg_type entry) @@ -1706,7 +1706,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< oid Name of the on-disk file of this relation; zero means this - is a mapped relation whose disk file name is determined + is a mapped relation whose disk file name is determined by low-level state @@ -1795,8 +1795,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - p = permanent table, u = unlogged table, - t = temporary table + p = permanent table, u = unlogged table, + t = temporary table @@ -1805,15 +1805,15 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - r = ordinary table, - i = index, - S = sequence, - t = TOAST table, - v = view, - m = materialized view, - c = composite type, - f = foreign table, - p = partitioned table + r = ordinary table, + i = index, + S = sequence, + t = TOAST table, + v = view, + m = materialized view, + c = composite type, + f = foreign table, + p = partitioned table @@ -1834,7 +1834,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< int2 - Number of CHECK constraints on the table; see + Number of CHECK constraints on the table; see pg_constraint catalog @@ -1917,11 +1917,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - Columns used to form replica identity for rows: - d = default (primary key, if any), - n = nothing, - f = all columns - i = index with indisreplident set, or default + Columns used to form replica identity for rows: + d = default (primary key, if any), + n = nothing, + f = all columns + i = index with indisreplident set, or default @@ -1938,9 +1938,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< All transaction IDs before this one have been replaced with a permanent - (frozen) transaction ID in this table. This is used to track + (frozen) transaction ID in this table. This is used to track whether the table needs to be vacuumed in order to prevent transaction - ID wraparound or to allow pg_xact to be shrunk. Zero + ID wraparound or to allow pg_xact to be shrunk. Zero (InvalidTransactionId) if the relation is not a table. @@ -1953,7 +1953,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< All multixact IDs before this one have been replaced by a transaction ID in this table. This is used to track whether the table needs to be vacuumed in order to prevent multixact ID - wraparound or to allow pg_multixact to be shrunk. Zero + wraparound or to allow pg_multixact to be shrunk. Zero (InvalidMultiXactId) if the relation is not a table. @@ -1975,7 +1975,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Access-method-specific options, as keyword=value strings + Access-method-specific options, as keyword=value strings @@ -1993,13 +1993,13 @@ SCRAM-SHA-256$<iteration count>:<salt><
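A common illustrative query over these columns counts the relations in a database by relkind:

    SELECT relkind, count(*)
    FROM pg_class
    GROUP BY relkind
    ORDER BY relkind;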
- Several of the Boolean flags in pg_class are maintained + Several of the Boolean flags in pg_class are maintained lazily: they are guaranteed to be true if that's the correct state, but may not be reset to false immediately when the condition is no longer - true. For example, relhasindex is set by + true. For example, relhasindex is set by CREATE INDEX, but it is never cleared by DROP INDEX. Instead, VACUUM clears - relhasindex if it finds the table has no indexes. This + relhasindex if it finds the table has no indexes. This arrangement avoids race conditions and improves concurrency. @@ -2019,7 +2019,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_collation</> Columns + <structname>pg_collation</structname> Columns @@ -2082,14 +2082,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< collcollate name - LC_COLLATE for this collation object + LC_COLLATE for this collation object collctype name - LC_CTYPE for this collation object + LC_CTYPE for this collation object @@ -2107,27 +2107,27 @@ SCRAM-SHA-256$<iteration count>:<salt><
- Note that the unique key on this catalog is (collname, - collencoding, collnamespace) not just - (collname, collnamespace). + Note that the unique key on this catalog is (collname, + collencoding, collnamespace) not just + (collname, collnamespace). PostgreSQL generally ignores all - collations that do not have collencoding equal to + collations that do not have collencoding equal to either the current database's encoding or -1, and creation of new entries - with the same name as an entry with collencoding = -1 + with the same name as an entry with collencoding = -1 is forbidden. Therefore it is sufficient to use a qualified SQL name - (schema.name) to identify a collation, + (schema.name) to identify a collation, even though this is not unique according to the catalog definition. The reason for defining the catalog this way is that - initdb fills it in at cluster initialization time with + initdb fills it in at cluster initialization time with entries for all locales available on the system, so it must be able to hold entries for all encodings that might ever be used in the cluster. - In the template0 database, it could be useful to create + In the template0 database, it could be useful to create collations whose encoding does not match the database encoding, since they could match the encodings of databases later cloned from - template0. This would currently have to be done manually. + template0. This would currently have to be done manually. @@ -2143,13 +2143,13 @@ SCRAM-SHA-256$<iteration count>:<salt>< key, unique, foreign key, and exclusion constraints on tables. (Column constraints are not treated specially. Every column constraint is equivalent to some table constraint.) - Not-null constraints are represented in the pg_attribute + Not-null constraints are represented in the pg_attribute catalog, not here. User-defined constraint triggers (created with CREATE CONSTRAINT - TRIGGER) also give rise to an entry in this table. + TRIGGER) also give rise to an entry in this table. 
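The three-column key of pg_collation can be observed directly: C and POSIX are the encoding-independent entries (collencoding = -1), while a name such as en_US may legitimately appear once per encoding on a typical system (which locales exist is platform-dependent):

    SELECT collname, collencoding
    FROM pg_collation
    WHERE collname IN ('C', 'POSIX', 'en_US')
    ORDER BY collname, collencoding;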
@@ -2157,7 +2157,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_constraint</> Columns + <structname>pg_constraint</structname> Columns @@ -2198,12 +2198,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - c = check constraint, - f = foreign key constraint, - p = primary key constraint, - u = unique constraint, - t = constraint trigger, - x = exclusion constraint + c = check constraint, + f = foreign key constraint, + p = primary key constraint, + u = unique constraint, + t = constraint trigger, + x = exclusion constraint @@ -2263,11 +2263,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< char Foreign key update action code: - a = no action, - r = restrict, - c = cascade, - n = set null, - d = set default + a = no action, + r = restrict, + c = cascade, + n = set null, + d = set default @@ -2276,11 +2276,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< char Foreign key deletion action code: - a = no action, - r = restrict, - c = cascade, - n = set null, - d = set default + a = no action, + r = restrict, + c = cascade, + n = set null, + d = set default @@ -2289,9 +2289,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< char Foreign key match type: - f = full, - p = partial, - s = simple + f = full, + p = partial, + s = simple @@ -2329,7 +2329,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< conkey int2[] - pg_attribute.attnum + pg_attribute.attnum If a table constraint (including foreign keys, but not constraint triggers), list of the constrained columns @@ -2337,35 +2337,35 @@ SCRAM-SHA-256$<iteration count>:<salt>< confkey int2[] - pg_attribute.attnum + pg_attribute.attnum If a foreign key, list of the referenced columns conpfeqop oid[] - pg_operator.oid + pg_operator.oid If a foreign key, list of the equality operators for PK = FK comparisons conppeqop oid[] - pg_operator.oid + pg_operator.oid If a foreign key, list of the equality operators for PK = PK comparisons conffeqop oid[] - pg_operator.oid + pg_operator.oid If a foreign key, list of the equality operators for FK = FK comparisons conexclop oid[] - pg_operator.oid + pg_operator.oid If an exclusion constraint, list of the per-column exclusion operators @@ -2392,7 +2392,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For other cases, a zero appears in conkey and the associated index must be consulted to discover the expression that is constrained. (conkey thus has the - same contents as pg_index.indkey for the + same contents as pg_index.indkey for the index.) @@ -2400,7 +2400,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< consrc is not updated when referenced objects change; for example, it won't track renaming of columns. Rather than - relying on this field, it's best to use pg_get_constraintdef() + relying on this field, it's best to use pg_get_constraintdef() to extract the definition of a check constraint. @@ -2429,7 +2429,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
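The pg_constraint action and match-type codes above can be read back for all foreign keys with a query like this (conrelid::regclass yields the table name):

    SELECT conname, conrelid::regclass AS table_name,
           confupdtype, confdeltype, confmatchtype
    FROM pg_constraint
    WHERE contype = 'f';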
- <structname>pg_conversion</> Columns + <structname>pg_conversion</structname> Columns @@ -2529,7 +2529,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_database</> Columns + <structname>pg_database</structname> Columns @@ -2592,7 +2592,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If true, then this database can be cloned by - any user with CREATEDB privileges; + any user with CREATEDB privileges; if false, then only superusers or the owner of the database can clone it. @@ -2604,7 +2604,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If false then no one can connect to this database. This is - used to protect the template0 database from being altered. + used to protect the template0 database from being altered. @@ -2634,11 +2634,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< All transaction IDs before this one have been replaced with a permanent - (frozen) transaction ID in this database. This is used to + (frozen) transaction ID in this database. This is used to track whether the database needs to be vacuumed in order to prevent - transaction ID wraparound or to allow pg_xact to be shrunk. + transaction ID wraparound or to allow pg_xact to be shrunk. It is the minimum of the per-table - pg_class.relfrozenxid values. + pg_class.relfrozenxid values. @@ -2650,9 +2650,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< All multixact IDs before this one have been replaced with a transaction ID in this database. This is used to track whether the database needs to be vacuumed in order to prevent - multixact ID wraparound or to allow pg_multixact to be shrunk. + multixact ID wraparound or to allow pg_multixact to be shrunk. It is the minimum of the per-table - pg_class.relminmxid values. + pg_class.relminmxid values. @@ -2663,7 +2663,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< The default tablespace for the database. Within this database, all tables for which - pg_class.reltablespace is zero + pg_class.reltablespace is zero will be stored in this tablespace; in particular, all the non-shared system catalogs will be there. @@ -2707,7 +2707,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
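datfrozenxid is the basis of the usual wraparound-monitoring query; a minimal sketch:

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;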
- <structname>pg_db_role_setting</> Columns + <structname>pg_db_role_setting</structname> Columns @@ -2754,12 +2754,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_default_acl stores initial + The catalog pg_default_acl stores initial privileges to be assigned to newly created objects.
- <structname>pg_default_acl</> Columns + <structname>pg_default_acl</structname> Columns @@ -2800,10 +2800,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< Type of object this entry is for: - r = relation (table, view), - S = sequence, - f = function, - T = type + r = relation (table, view), + S = sequence, + f = function, + T = type @@ -2820,21 +2820,21 @@ SCRAM-SHA-256$<iteration count>:<salt><
- A pg_default_acl entry shows the initial privileges to + A pg_default_acl entry shows the initial privileges to be assigned to an object belonging to the indicated user. There are - currently two types of entry: global entries with - defaclnamespace = 0, and per-schema entries + currently two types of entry: global entries with + defaclnamespace = 0, and per-schema entries that reference a particular schema. If a global entry is present then - it overrides the normal hard-wired default privileges + it overrides the normal hard-wired default privileges for the object type. A per-schema entry, if present, represents privileges - to be added to the global or hard-wired default privileges. + to be added to the global or hard-wired default privileges. Note that when an ACL entry in another catalog is null, it is taken to represent the hard-wired default privileges for its object, - not whatever might be in pg_default_acl - at the moment. pg_default_acl is only consulted during + not whatever might be in pg_default_acl + at the moment. pg_default_acl is only consulted during object creation. @@ -2851,9 +2851,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_depend records the dependency relationships between database objects. This information allows - DROP commands to find which other objects must be dropped - by DROP CASCADE or prevent dropping in the DROP - RESTRICT case. + DROP commands to find which other objects must be dropped + by DROP CASCADE or prevent dropping in the DROP + RESTRICT case. @@ -2863,7 +2863,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_depend</> Columns + <structname>pg_depend</structname> Columns @@ -2896,7 +2896,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a table column, this is the column number (the - objid and classid refer to the + objid and classid refer to the table itself). For all other object types, this column is zero. @@ -2922,7 +2922,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a table column, this is the column number (the - refobjid and refclassid refer + refobjid and refclassid refer to the table itself). For all other object types, this column is zero. @@ -2945,17 +2945,17 @@ SCRAM-SHA-256$<iteration count>:<salt>< In all cases, a pg_depend entry indicates that the referenced object cannot be dropped without also dropping the dependent object. However, there are several subflavors identified by - deptype: + deptype: - DEPENDENCY_NORMAL (n) + DEPENDENCY_NORMAL (n) A normal relationship between separately-created objects. The dependent object can be dropped without affecting the referenced object. The referenced object can only be dropped - by specifying CASCADE, in which case the dependent + by specifying CASCADE, in which case the dependent object is dropped, too. Example: a table column has a normal dependency on its data type. @@ -2963,12 +2963,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< - DEPENDENCY_AUTO (a) + DEPENDENCY_AUTO (a) The dependent object can be dropped separately from the referenced object, and should be automatically dropped - (regardless of RESTRICT or CASCADE + (regardless of RESTRICT or CASCADE mode) if the referenced object is dropped. Example: a named constraint on a table is made autodependent on the table, so that it will go away if the table is dropped. 
@@ -2977,41 +2977,41 @@ SCRAM-SHA-256$<iteration count>:<salt>< - DEPENDENCY_INTERNAL (i) + DEPENDENCY_INTERNAL (i) The dependent object was created as part of creation of the referenced object, and is really just a part of its internal - implementation. A DROP of the dependent object + implementation. A DROP of the dependent object will be disallowed outright (we'll tell the user to issue a - DROP against the referenced object, instead). A - DROP of the referenced object will be propagated + DROP against the referenced object, instead). A + DROP of the referenced object will be propagated through to drop the dependent object whether - CASCADE is specified or not. Example: a trigger + CASCADE is specified or not. Example: a trigger that's created to enforce a foreign-key constraint is made internally dependent on the constraint's - pg_constraint entry. + pg_constraint entry. - DEPENDENCY_EXTENSION (e) + DEPENDENCY_EXTENSION (e) - The dependent object is a member of the extension that is + The dependent object is a member of the extension that is the referenced object (see pg_extension). The dependent object can be dropped only via - DROP EXTENSION on the referenced object. Functionally + DROP EXTENSION on the referenced object. Functionally this dependency type acts the same as an internal dependency, but - it's kept separate for clarity and to simplify pg_dump. + it's kept separate for clarity and to simplify pg_dump. - DEPENDENCY_AUTO_EXTENSION (x) + DEPENDENCY_AUTO_EXTENSION (x) The dependent object is not a member of the extension that is the @@ -3024,7 +3024,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - DEPENDENCY_PIN (p) + DEPENDENCY_PIN (p) There is no dependent object; this type of entry is a signal @@ -3051,7 +3051,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_description stores optional descriptions + The catalog pg_description stores optional descriptions (comments) for each database object. Descriptions can be manipulated with the command and viewed with psql's \d commands. @@ -3066,7 +3066,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
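Dependencies recorded in pg_depend on a given object can be examined with a query along these lines (mytable is a hypothetical table name):

    SELECT classid::regclass AS dependent_catalog,
           objid, objsubid, deptype
    FROM pg_depend
    WHERE refclassid = 'pg_class'::regclass
      AND refobjid = 'mytable'::regclass;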
- <structname>pg_description</> Columns + <structname>pg_description</structname> Columns @@ -3099,7 +3099,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a comment on a table column, this is the column number (the - objoid and classoid refer to + objoid and classoid refer to the table itself). For all other object types, this column is zero. @@ -3133,7 +3133,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
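Comments reach pg_description via COMMENT; a sketch using a hypothetical table:

    COMMENT ON TABLE mytable IS 'primary fact table';

    SELECT description
    FROM pg_description
    WHERE classoid = 'pg_class'::regclass
      AND objoid = 'mytable'::regclass
      AND objsubid = 0;        -- 0 = the table itself, not a column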
- <structname>pg_enum</> Columns + <structname>pg_enum</structname> Columns @@ -3157,7 +3157,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< enumtypid oid pg_type.oid - The OID of the pg_type entry owning this enum value + The OID of the pg_type entry owning this enum value @@ -3191,7 +3191,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< When an enum type is created, its members are assigned sort-order - positions 1..n. But members added later might be given + positions 1..n. But members added later might be given negative or fractional values of enumsortorder. The only requirement on these values is that they be correctly ordered and unique within each enum type. @@ -3212,7 +3212,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
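A minimal sketch showing the pg_enum label and sort-order columns for a scratch enum type:

    CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

    SELECT enumlabel, enumsortorder
    FROM pg_enum
    WHERE enumtypid = 'mood'::regtype
    ORDER BY enumsortorder;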
- <structname>pg_event_trigger</> Columns + <structname>pg_event_trigger</structname> Columns @@ -3260,10 +3260,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< Controls in which modes the event trigger fires. - O = trigger fires in origin and local modes, - D = trigger is disabled, - R = trigger fires in replica mode, - A = trigger fires always. + O = trigger fires in origin and local modes, + D = trigger is disabled, + R = trigger fires in replica mode, + A = trigger fires always. @@ -3296,7 +3296,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_extension</> Columns + <structname>pg_extension</structname> Columns @@ -3355,16 +3355,16 @@ SCRAM-SHA-256$<iteration count>:<salt>< extconfig oid[] pg_class.oid - Array of regclass OIDs for the extension's configuration - table(s), or NULL if none + Array of regclass OIDs for the extension's configuration + table(s), or NULL if none extcondition text[] - Array of WHERE-clause filter conditions for the - extension's configuration table(s), or NULL if none + Array of WHERE-clause filter conditions for the + extension's configuration table(s), or NULL if none @@ -3372,7 +3372,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- Note that unlike most catalogs with a namespace column, + Note that unlike most catalogs with a namespace column, extnamespace is not meant to imply that the extension belongs to that schema. Extension names are never schema-qualified. Rather, extnamespace @@ -3399,7 +3399,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_foreign_data_wrapper</> Columns + <structname>pg_foreign_data_wrapper</structname> Columns @@ -3474,7 +3474,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Foreign-data wrapper specific options, as keyword=value strings + Foreign-data wrapper specific options, as keyword=value strings @@ -3498,7 +3498,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_foreign_server</> Columns + <structname>pg_foreign_server</structname> Columns @@ -3570,7 +3570,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Foreign server specific options, as keyword=value strings + Foreign server specific options, as keyword=value strings @@ -3596,7 +3596,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
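The keyword=value option arrays in pg_foreign_server are plain text arrays; for example, after defining a hypothetical server (this assumes the postgres_fdw extension is installed):

    CREATE SERVER film_server
           FOREIGN DATA WRAPPER postgres_fdw
           OPTIONS (host 'example.org', dbname 'films');

    SELECT srvname, srvoptions FROM pg_foreign_server;
    -- srvoptions comes back as {host=example.org,dbname=films}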
- <structname>pg_foreign_table</> Columns + <structname>pg_foreign_table</structname> Columns @@ -3613,7 +3613,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< ftrelid oid pg_class.oid - OID of the pg_class entry for this foreign table + OID of the pg_class entry for this foreign table @@ -3628,7 +3628,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Foreign table options, as keyword=value strings + Foreign table options, as keyword=value strings @@ -3651,7 +3651,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_index</> Columns + <structname>pg_index</structname> Columns @@ -3668,14 +3668,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< indexrelid oid pg_class.oid - The OID of the pg_class entry for this index + The OID of the pg_class entry for this index indrelid oid pg_class.oid - The OID of the pg_class entry for the table this index is for + The OID of the pg_class entry for the table this index is for @@ -3698,7 +3698,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< bool If true, this index represents the primary key of the table - (indisunique should always be true when this is true) + (indisunique should always be true when this is true) @@ -3714,7 +3714,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If true, the uniqueness check is enforced immediately on insertion - (irrelevant if indisunique is not true) + (irrelevant if indisunique is not true) @@ -3731,7 +3731,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If true, the index is currently valid for queries. False means the index is possibly incomplete: it must still be modified by - INSERT/UPDATE operations, but it cannot safely + INSERT/UPDATE operations, but it cannot safely be used for queries. If it is unique, the uniqueness property is not guaranteed true either. @@ -3742,8 +3742,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< bool - If true, queries must not use the index until the xmin - of this pg_index row is below their TransactionXmin + If true, queries must not use the index until the xmin + of this pg_index row is below their TransactionXmin event horizon, because the table may contain broken HOT chains with incompatible rows that they can see @@ -3755,7 +3755,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< If true, the index is currently ready for inserts. False means the - index must be ignored by INSERT/UPDATE + index must be ignored by INSERT/UPDATE operations. @@ -3775,9 +3775,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< bool - If true this index has been chosen as replica identity + If true this index has been chosen as replica identity using ALTER TABLE ... REPLICA IDENTITY USING INDEX - ... + ... @@ -3836,7 +3836,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Expression trees (in nodeToString() representation) for index attributes that are not simple column references. This is a list with one element for each zero - entry in indkey. Null if all index attributes + entry in indkey. Null if all index attributes are simple references. @@ -3866,14 +3866,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_inherits records information about + The catalog pg_inherits records information about table inheritance hierarchies. There is one entry for each direct parent-child table relationship in the database. (Indirect inheritance can be determined by following chains of entries.)
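A hedged example of reading the pg_index validity flags for the indexes of a hypothetical table mytable:

    SELECT indexrelid::regclass AS index_name,
           indisunique, indisprimary, indisvalid, indisready
    FROM pg_index
    WHERE indrelid = 'mytable'::regclass;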
- <structname>pg_inherits</> Columns + <structname>pg_inherits</structname> Columns @@ -3928,7 +3928,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_init_privs records information about + The catalog pg_init_privs records information about the initial privileges of objects in the system. There is one entry for each object in the database which has a non-default (non-NULL) initial set of privileges. @@ -3936,7 +3936,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Objects can have initial privileges either by having those privileges set - when the system is initialized (by initdb) or when the + when the system is initialized (by initdb) or when the object is created during a CREATE EXTENSION and the extension script sets initial privileges using the GRANT system. Note that the system will automatically handle recording of the @@ -3944,12 +3944,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< only use the GRANT and REVOKE statements in their script to have the privileges recorded. The privtype column indicates if the initial privilege was - set by initdb or during a + set by initdb or during a CREATE EXTENSION command. - Objects which have initial privileges set by initdb will + Objects which have initial privileges set by initdb will have entries where privtype is 'i', while objects which have initial privileges set by CREATE EXTENSION will have entries where @@ -3957,7 +3957,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_init_privs</> Columns + <structname>pg_init_privs</structname> Columns @@ -3990,7 +3990,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a table column, this is the column number (the - objoid and classoid refer to the + objoid and classoid refer to the table itself). For all other object types, this column is zero. @@ -4039,7 +4039,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
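For example, the pg_init_privs entries whose initial privileges came from an extension script rather than from initdb ('e' rather than 'i', as described above) can be listed like this:

    SELECT classoid::regclass AS catalog, objoid, privtype, initprivs
    FROM pg_init_privs
    WHERE privtype = 'e';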
- <structname>pg_language</> Columns + <structname>pg_language</structname> Columns @@ -4116,7 +4116,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_proc.oid This references a function that is responsible for executing - inline anonymous code blocks + inline anonymous code blocks ( blocks). Zero if inline blocks are not supported. @@ -4162,24 +4162,24 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_largeobject holds the data making up large objects. A large object is identified by an OID assigned when it is created. Each large object is broken into - segments or pages small enough to be conveniently stored as rows + segments or pages small enough to be conveniently stored as rows in pg_largeobject. - The amount of data per page is defined to be LOBLKSIZE (which is currently - BLCKSZ/4, or typically 2 kB). + The amount of data per page is defined to be LOBLKSIZE (which is currently + BLCKSZ/4, or typically 2 kB). - Prior to PostgreSQL 9.0, there was no permission structure + Prior to PostgreSQL 9.0, there was no permission structure associated with large objects. As a result, pg_largeobject was publicly readable and could be used to obtain the OIDs (and contents) of all large objects in the system. This is no longer the case; use - pg_largeobject_metadata + pg_largeobject_metadata to obtain a list of large object OIDs.
- <structname>pg_largeobject</> Columns + <structname>pg_largeobject</structname> Columns @@ -4213,7 +4213,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Actual data stored in the large object. - This will never be more than LOBLKSIZE bytes and might be less. + This will never be more than LOBLKSIZE bytes and might be less. @@ -4223,9 +4223,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< Each row of pg_largeobject holds data for one page of a large object, beginning at - byte offset (pageno * LOBLKSIZE) within the object. The implementation + byte offset (pageno * LOBLKSIZE) within the object. The implementation allows sparse storage: pages might be missing, and might be shorter than - LOBLKSIZE bytes even if they are not the last page of the object. + LOBLKSIZE bytes even if they are not the last page of the object. Missing regions within a large object read as zeroes. @@ -4242,11 +4242,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_largeobject_metadata holds metadata associated with large objects. The actual large object data is stored in - pg_largeobject. + pg_largeobject.
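The page layout of pg_largeobject described above can be observed directly (reading this catalog requires superuser privilege, and 12345 is a hypothetical large object OID):

    SELECT loid, pageno, length(data) AS bytes_on_page
    FROM pg_largeobject
    WHERE loid = 12345
    ORDER BY pageno;
    -- gaps in pageno are the sparse regions that read back as zeroes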
- <structname>pg_largeobject_metadata</> Columns + <structname>pg_largeobject_metadata</structname> Columns @@ -4299,14 +4299,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_namespace stores namespaces. + The catalog pg_namespace stores namespaces. A namespace is the structure underlying SQL schemas: each namespace can have a separate collection of relations, types, etc. without name conflicts.
- <structname>pg_namespace</> Columns + <structname>pg_namespace</structname> Columns @@ -4381,7 +4381,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_opclass</> Columns + <structname>pg_opclass</structname> Columns @@ -4447,14 +4447,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< opcdefault bool - True if this operator class is the default for opcintype + True if this operator class is the default for opcintype opckeytype oid pg_type.oid - Type of data stored in index, or zero if same as opcintype + Type of data stored in index, or zero if same as opcintype @@ -4462,11 +4462,11 @@ SCRAM-SHA-256$<iteration count>:<salt><
- An operator class's opcmethod must match the - opfmethod of its containing operator family. + An operator class's opcmethod must match the + opfmethod of its containing operator family. Also, there must be no more than one pg_opclass - row having opcdefault true for any given combination of - opcmethod and opcintype. + row having opcdefault true for any given combination of + opcmethod and opcintype. @@ -4480,13 +4480,13 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_operator stores information about operators. + The catalog pg_operator stores information about operators. See and for more information. - <structname>pg_operator</> Columns + <structname>pg_operator</structname> Columns @@ -4534,8 +4534,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - b = infix (both), l = prefix - (left), r = postfix (right) + b = infix (both), l = prefix + (left), r = postfix (right) @@ -4632,7 +4632,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Each operator family is a collection of operators and associated support routines that implement the semantics specified for a particular index access method. Furthermore, the operators in a family are all - compatible, in a way that is specified by the access method. + compatible, in a way that is specified by the access method. The operator family concept allows cross-data-type operators to be used with indexes and to be reasoned about using knowledge of access method semantics. @@ -4643,7 +4643,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
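The at-most-one-default rule for pg_opclass implies a single opcdefault row per (opcmethod, opcintype) pair; the defaults for btree, for instance, can be listed as follows:

    SELECT c.opcname, c.opcintype::regtype
    FROM pg_opclass c
    JOIN pg_am m ON m.oid = c.opcmethod
    WHERE m.amname = 'btree' AND c.opcdefault;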
- <structname>pg_opfamily</> Columns + <structname>pg_opfamily</structname> Columns @@ -4720,7 +4720,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_partitioned_table</> Columns + <structname>pg_partitioned_table</structname> Columns @@ -4738,7 +4738,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< partrelid oid pg_class.oid - The OID of the pg_class entry for this partitioned table + The OID of the pg_class entry for this partitioned table @@ -4746,8 +4746,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - Partitioning strategy; l = list partitioned table, - r = range partitioned table + Partitioning strategy; l = list partitioned table, + r = range partitioned table @@ -4763,7 +4763,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< oid pg_class.oid - The OID of the pg_class entry for the default partition + The OID of the pg_class entry for the default partition of this partitioned table, or zero if this partitioned table does not have a default partition. @@ -4813,7 +4813,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Expression trees (in nodeToString() representation) for partition key columns that are not simple column references. This is a list with one element for each zero - entry in partattrs. Null if all partition key columns + entry in partattrs. Null if all partition key columns are simple references. @@ -4833,9 +4833,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_pltemplate stores - template information for procedural languages. + template information for procedural languages. A template for a language allows the language to be created in a - particular database by a simple CREATE LANGUAGE command, + particular database by a simple CREATE LANGUAGE command, with no need to specify implementation details. @@ -4848,7 +4848,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
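A minimal sketch tying the pg_partitioned_table strategy codes to the DDL that creates them (the table is hypothetical):

    CREATE TABLE measurements (logdate date, reading numeric)
           PARTITION BY RANGE (logdate);

    SELECT partrelid::regclass, partstrat, partnatts
    FROM pg_partitioned_table;
    -- the new row shows partstrat = 'r' for range partitioning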
- <structname>pg_pltemplate</> Columns + <structname>pg_pltemplate</structname> Columns @@ -4921,7 +4921,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - It is likely that pg_pltemplate will be removed in some + It is likely that pg_pltemplate will be removed in some future release of PostgreSQL, in favor of keeping this knowledge about procedural languages in their respective extension installation scripts. @@ -4944,7 +4944,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< command that it applies to (possibly all commands), the roles that it applies to, the expression to be added as a security-barrier qualification to queries that include the table, and the expression - to be added as a WITH CHECK option for queries that attempt to + to be added as a WITH CHECK option for queries that attempt to add new records to the table. @@ -4982,11 +4982,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< char The command type to which the policy is applied: - r for SELECT, - a for INSERT, - w for UPDATE, - d for DELETE, - or * for all + r for SELECT, + a for INSERT, + w for UPDATE, + d for DELETE, + or * for all @@ -5023,8 +5023,8 @@ SCRAM-SHA-256$<iteration count>:<salt>< - Policies stored in pg_policy are applied only when - pg_class.relrowsecurity is set for + Policies stored in pg_policy are applied only when + pg_class.relrowsecurity is set for their table. @@ -5039,7 +5039,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - The catalog pg_proc stores information about functions (or procedures). + The catalog pg_proc stores information about functions (or procedures). See and for more information. @@ -5051,7 +5051,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
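To see how the polcmd codes read in practice, a query of roughly this shape (a sketch; the CASE arms simply restate the codes documented above) decodes each policy's command type:

SELECT polname,
       polrelid::regclass AS table_name,
       CASE polcmd WHEN 'r' THEN 'SELECT'
                   WHEN 'a' THEN 'INSERT'
                   WHEN 'w' THEN 'UPDATE'
                   WHEN 'd' THEN 'DELETE'
                   ELSE 'ALL'
       END AS command
FROM pg_policy;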
- <structname>pg_proc</> Columns + <structname>pg_proc</structname> Columns @@ -5106,7 +5106,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< float4 Estimated execution cost (in units of - ); if proretset, + ); if proretset, this is cost per row returned @@ -5114,7 +5114,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< prorows float4 - Estimated number of result rows (zero if not proretset) + Estimated number of result rows (zero if not proretset) @@ -5151,7 +5151,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< prosecdef bool - Function is a security definer (i.e., a setuid + Function is a security definer (i.e., a setuid function) @@ -5195,11 +5195,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< provolatile tells whether the function's result depends only on its input arguments, or is affected by outside factors. - It is i for immutable functions, + It is i for immutable functions, which always deliver the same result for the same inputs. - It is s for stable functions, + It is s for stable functions, whose results (for fixed inputs) do not change within a scan. - It is v for volatile functions, + It is v for volatile functions, whose results might change at any time. (Use v also for functions with side-effects, so that calls to them cannot get optimized away.) @@ -5251,7 +5251,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< An array with the data types of the function arguments. This includes only input arguments (including INOUT and - VARIADIC arguments), and thus represents + VARIADIC arguments), and thus represents the call signature of the function. @@ -5266,7 +5266,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< INOUT arguments); however, if all the arguments are IN arguments, this field will be null. Note that subscripting is 1-based, whereas for historical reasons - proargtypes is subscripted from 0. + proargtypes is subscripted from 0. @@ -5276,15 +5276,15 @@ SCRAM-SHA-256$<iteration count>:<salt>< An array with the modes of the function arguments, encoded as - i for IN arguments, - o for OUT arguments, - b for INOUT arguments, - v for VARIADIC arguments, - t for TABLE arguments. + i for IN arguments, + o for OUT arguments, + b for INOUT arguments, + v for VARIADIC arguments, + t for TABLE arguments. If all the arguments are IN arguments, this field will be null. Note that subscripts correspond to positions of - proallargtypes not proargtypes. + proallargtypes not proargtypes. @@ -5297,7 +5297,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Arguments without a name are set to empty strings in the array. If none of the arguments have a name, this field will be null. Note that subscripts correspond to positions of - proallargtypes not proargtypes. + proallargtypes not proargtypes. @@ -5308,9 +5308,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< Expression trees (in nodeToString() representation) for default values. This is a list with - pronargdefaults elements, corresponding to the last - N input arguments (i.e., the last - N proargtypes positions). + pronargdefaults elements, corresponding to the last + N input arguments (i.e., the last + N proargtypes positions). If none of the arguments have defaults, this field will be null. @@ -5525,7 +5525,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
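The zero-based subscripting of proargtypes can be seen directly; for instance, this sketch (the choice of the built-in function length is arbitrary) fetches the type of each variant's first argument:

SELECT proname, proargtypes[0]::regtype AS first_arg_type, provolatile
FROM pg_proc
WHERE proname = 'length';   -- subscript 0 is the first argument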
- <structname>pg_range</> Columns + <structname>pg_range</structname> Columns @@ -5586,10 +5586,10 @@ SCRAM-SHA-256$<iteration count>:<salt><
- rngsubopc (plus rngcollation, if the + rngsubopc (plus rngcollation, if the element type is collatable) determines the sort ordering used by the range - type. rngcanonical is used when the element type is - discrete. rngsubdiff is optional but should be supplied to + type. rngcanonical is used when the element type is + discrete. rngsubdiff is optional but should be supplied to improve performance of GiST indexes on the range type. @@ -5655,7 +5655,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - <structname>pg_rewrite</> Columns + <structname>pg_rewrite</structname> Columns @@ -5694,9 +5694,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< char - Event type that the rule is for: 1 = SELECT, 2 = - UPDATE, 3 = INSERT, 4 = - DELETE + Event type that the rule is for: 1 = SELECT, 2 = + UPDATE, 3 = INSERT, 4 = + DELETE @@ -5707,10 +5707,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< Controls in which modes the rule fires. - O = rule fires in origin and local modes, - D = rule is disabled, - R = rule fires in replica mode, - A = rule fires always. + O = rule fires in origin and local modes, + D = rule is disabled, + R = rule fires in replica mode, + A = rule fires always. @@ -5809,7 +5809,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a security label on a table column, this is the column number (the - objoid and classoid refer to + objoid and classoid refer to the table itself). For all other object types, this column is zero. @@ -5847,7 +5847,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_sequence</> Columns + <structname>pg_sequence</structname> Columns @@ -5864,7 +5864,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< seqrelid oid pg_class.oid - The OID of the pg_class entry for this sequence + The OID of the pg_class entry for this sequence @@ -5949,7 +5949,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_shdepend</> Columns + <structname>pg_shdepend</structname> Columns @@ -5990,7 +5990,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a table column, this is the column number (the - objid and classid refer to the + objid and classid refer to the table itself). For all other object types, this column is zero. @@ -6027,11 +6027,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< In all cases, a pg_shdepend entry indicates that the referenced object cannot be dropped without also dropping the dependent object. However, there are several subflavors identified by - deptype: + deptype: - SHARED_DEPENDENCY_OWNER (o) + SHARED_DEPENDENCY_OWNER (o) The referenced object (which must be a role) is the owner of the @@ -6041,20 +6041,20 @@ SCRAM-SHA-256$<iteration count>:<salt>< - SHARED_DEPENDENCY_ACL (a) + SHARED_DEPENDENCY_ACL (a) The referenced object (which must be a role) is mentioned in the ACL (access control list, i.e., privileges list) of the - dependent object. (A SHARED_DEPENDENCY_ACL entry is + dependent object. (A SHARED_DEPENDENCY_ACL entry is not made for the owner of the object, since the owner will have - a SHARED_DEPENDENCY_OWNER entry anyway.) + a SHARED_DEPENDENCY_OWNER entry anyway.) - SHARED_DEPENDENCY_POLICY (r) + SHARED_DEPENDENCY_POLICY (r) The referenced object (which must be a role) is mentioned as the @@ -6064,7 +6064,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< - SHARED_DEPENDENCY_PIN (p) + SHARED_DEPENDENCY_PIN (p) There is no dependent object; this type of entry is a signal @@ -6111,7 +6111,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
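For instance, the ownership dependencies described above can be listed with something like the following (a sketch; the referenced role is resolved through pg_authid):

SELECT r.rolname AS owner,
       d.classid::regclass AS catalog,
       d.objid
FROM pg_shdepend d
     JOIN pg_authid r ON r.oid = d.refobjid
WHERE d.deptype = 'o';      -- SHARED_DEPENDENCY_OWNER entries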
- <structname>pg_shdescription</> Columns + <structname>pg_shdescription</structname> Columns @@ -6235,16 +6235,16 @@ SCRAM-SHA-256$<iteration count>:<salt>< - Normally there is one entry, with stainherit = - false, for each table column that has been analyzed. + Normally there is one entry, with stainherit = + false, for each table column that has been analyzed. If the table has inheritance children, a second entry with - stainherit = true is also created. This row + stainherit = true is also created. This row represents the column's statistics over the inheritance tree, i.e., statistics for the data you'd see with - SELECT column FROM table*, - whereas the stainherit = false row represents + SELECT column FROM table*, + whereas the stainherit = false row represents the results of - SELECT column FROM ONLY table. + SELECT column FROM ONLY table. @@ -6254,7 +6254,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< references the index. No entry is made for an ordinary non-expression index column, however, since it would be redundant with the entry for the underlying table column. Currently, entries for index expressions - always have stainherit = false. + always have stainherit = false. @@ -6281,7 +6281,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_statistic</> Columns + <structname>pg_statistic</structname> Columns @@ -6339,56 +6339,56 @@ SCRAM-SHA-256$<iteration count>:<salt>< A value less than zero is the negative of a multiplier for the number of rows in the table; for example, a column in which about 80% of the values are nonnull and each nonnull value appears about twice on - average could be represented by stadistinct = -0.4. + average could be represented by stadistinct = -0.4. A zero value means the number of distinct values is unknown. - stakindN + stakindN int2 A code number indicating the kind of statistics stored in the - Nth slot of the + Nth slot of the pg_statistic row. - staopN + staopN oid pg_operator.oid An operator used to derive the statistics stored in the - Nth slot. For example, a + Nth slot. For example, a histogram slot would show the < operator that defines the sort order of the data. - stanumbersN + stanumbersN float4[] Numerical statistics of the appropriate kind for the - Nth slot, or null if the slot + Nth slot, or null if the slot kind does not involve numerical values - stavaluesN + stavaluesN anyarray Column data values of the appropriate kind for the - Nth slot, or null if the slot + Nth slot, or null if the slot kind does not store any data values. Each array's element values are actually of the specific column's data type, or a related type such as an array's element type, so there is no way to define - these columns' type more specifically than anyarray. + these columns' type more specifically than anyarray. @@ -6407,12 +6407,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< The catalog pg_statistic_ext holds extended planner statistics. - Each row in this catalog corresponds to a statistics object + Each row in this catalog corresponds to a statistics object created with .
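As a worked example of the stadistinct convention (a sketch; pg_statistic is normally readable only by superusers, so ordinary users should query pg_stats instead), the negative multiplier form can be turned into an absolute estimate like this:

SELECT s.starelid::regclass AS table_name,
       s.staattnum,
       CASE WHEN s.stadistinct >= 0 THEN s.stadistinct
            ELSE -s.stadistinct * c.reltuples   -- e.g., -0.4 times row count
       END AS est_n_distinct
FROM pg_statistic s
     JOIN pg_class c ON c.oid = s.starelid
WHERE NOT s.stainherit;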
- <structname>pg_statistic_ext</> Columns + <structname>pg_statistic_ext</structname> Columns @@ -6485,7 +6485,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_ndistinct - N-distinct counts, serialized as pg_ndistinct type + N-distinct counts, serialized as pg_ndistinct type @@ -6495,7 +6495,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Functional dependency statistics, serialized - as pg_dependencies type + as pg_dependencies type @@ -6507,7 +6507,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< The stxkind field is filled at creation of the statistics object, indicating which statistic type(s) are desired. The fields after it are initially NULL and are filled only when the - corresponding statistic has been computed by ANALYZE. + corresponding statistic has been computed by ANALYZE. @@ -6677,10 +6677,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< State code: - i = initialize, - d = data is being copied, - s = synchronized, - r = ready (normal replication) + i = initialize, + d = data is being copied, + s = synchronized, + r = ready (normal replication) @@ -6689,7 +6689,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_lsn - End LSN for s and r states. + End LSN for s and r states. @@ -6718,7 +6718,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_tablespace</> Columns + <structname>pg_tablespace</structname> Columns @@ -6769,7 +6769,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - Tablespace-level options, as keyword=value strings + Tablespace-level options, as keyword=value strings @@ -6792,7 +6792,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_transform</> Columns + <structname>pg_transform</structname> Columns @@ -6861,7 +6861,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_trigger</> Columns + <structname>pg_trigger</structname> Columns @@ -6916,10 +6916,10 @@ SCRAM-SHA-256$<iteration count>:<salt>< Controls in which modes the trigger fires. - O = trigger fires in origin and local modes, - D = trigger is disabled, - R = trigger fires in replica mode, - A = trigger fires always. + O = trigger fires in origin and local modes, + D = trigger is disabled, + R = trigger fires in replica mode, + A = trigger fires always. @@ -6928,7 +6928,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< bool True if trigger is internally generated (usually, to enforce - the constraint identified by tgconstraint) + the constraint identified by tgconstraint) @@ -6950,7 +6950,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< tgconstraint oid pg_constraint.oid - The pg_constraint entry associated with the trigger, if any + The pg_constraint entry associated with the trigger, if any @@ -6994,7 +6994,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_node_tree Expression tree (in nodeToString() - representation) for the trigger's WHEN condition, or null + representation) for the trigger's WHEN condition, or null if none @@ -7002,7 +7002,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< tgoldtable name - REFERENCING clause name for OLD TABLE, + REFERENCING clause name for OLD TABLE, or null if none @@ -7010,7 +7010,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< tgnewtable name - REFERENCING clause name for NEW TABLE, + REFERENCING clause name for NEW TABLE, or null if none @@ -7019,18 +7019,18 @@ SCRAM-SHA-256$<iteration count>:<salt>< Currently, column-specific triggering is supported only for - UPDATE events, and so tgattr is relevant + UPDATE events, and so tgattr is relevant only for that event type. tgtype might contain bits for other event types as well, but those are presumed - to be table-wide regardless of what is in tgattr. + to be table-wide regardless of what is in tgattr. - When tgconstraint is nonzero, - tgconstrrelid, tgconstrindid, - tgdeferrable, and tginitdeferred are - largely redundant with the referenced pg_constraint entry. + When tgconstraint is nonzero, + tgconstrrelid, tgconstrindid, + tgdeferrable, and tginitdeferred are + largely redundant with the referenced pg_constraint entry. However, it is possible for a non-deferrable trigger to be associated with a deferrable constraint: foreign key constraints can have some deferrable and some non-deferrable triggers. @@ -7070,7 +7070,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
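User-level triggers and their firing modes can then be inspected with a query along these lines (a sketch; tgenabled carries the O/D/R/A codes listed above):

SELECT tgrelid::regclass AS table_name, tgname, tgenabled
FROM pg_trigger
WHERE NOT tgisinternal;     -- skip internally generated triggers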
- <structname>pg_ts_config</> Columns + <structname>pg_ts_config</structname> Columns @@ -7145,7 +7145,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_ts_config_map</> Columns + <structname>pg_ts_config_map</structname> Columns @@ -7162,7 +7162,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< mapcfg oid pg_ts_config.oid - The OID of the pg_ts_config entry owning this map entry + The OID of the pg_ts_config entry owning this map entry @@ -7177,7 +7177,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< integer Order in which to consult this entry (lower - mapseqnos first) + mapseqnos first) @@ -7206,7 +7206,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< needed; the dictionary itself provides values for the user-settable parameters supported by the template. This division of labor allows dictionaries to be created by unprivileged users. The parameters - are specified by a text string dictinitoption, + are specified by a text string dictinitoption, whose format and meaning vary depending on the template. @@ -7216,7 +7216,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_ts_dict</> Columns + <structname>pg_ts_dict</structname> Columns @@ -7299,7 +7299,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_ts_parser</> Columns + <structname>pg_ts_parser</structname> Columns @@ -7396,7 +7396,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_ts_template</> Columns + <structname>pg_ts_template</structname> Columns @@ -7470,7 +7470,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_type</> Columns + <structname>pg_type</structname> Columns @@ -7521,7 +7521,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< For a fixed-size type, typlen is the number of bytes in the internal representation of the type. But for a variable-length type, typlen is negative. - -1 indicates a varlena type (one that has a length word), + -1 indicates a varlena type (one that has a length word), -2 indicates a null-terminated C string. @@ -7566,7 +7566,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< typcategory is an arbitrary classification of data types that is used by the parser to determine which implicit - casts should be preferred. + casts should be preferred. See . @@ -7711,7 +7711,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< typalign is the alignment required when storing a value of this type. It applies to storage on disk as well as most representations of the value inside - PostgreSQL. + PostgreSQL. When multiple values are stored consecutively, such as in the representation of a complete row on disk, padding is inserted before a datum of this type so that it begins on the @@ -7723,16 +7723,16 @@ SCRAM-SHA-256$<iteration count>:<salt>< Possible values are: - c = char alignment, i.e., no alignment needed. + c = char alignment, i.e., no alignment needed. - s = short alignment (2 bytes on most machines). + s = short alignment (2 bytes on most machines). - i = int alignment (4 bytes on most machines). + i = int alignment (4 bytes on most machines). - d = double alignment (8 bytes on many machines, but by no means all). + d = double alignment (8 bytes on many machines, but by no means all). @@ -7757,24 +7757,24 @@ SCRAM-SHA-256$<iteration count>:<salt>< Possible values are - p: Value must always be stored plain. + p: Value must always be stored plain. - e: Value can be stored in a secondary + e: Value can be stored in a secondary relation (if relation has one, see pg_class.reltoastrelid). - m: Value can be stored compressed inline. + m: Value can be stored compressed inline. - x: Value can be stored compressed inline or stored in secondary storage. + x: Value can be stored compressed inline or stored in secondary storage. - Note that m columns can also be moved out to secondary - storage, but only as a last resort (e and x columns are + Note that m columns can also be moved out to secondary + storage, but only as a last resort (e and x columns are moved first). @@ -7805,9 +7805,9 @@ SCRAM-SHA-256$<iteration count>:<salt>< int4 - Domains use typtypmod to record the typmod + Domains use typtypmod to record the typmod to be applied to their base type (-1 if base type does not use a - typmod). -1 if this type is not a domain. + typmod). -1 if this type is not a domain. @@ -7817,7 +7817,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< typndims is the number of array dimensions - for a domain over an array (that is, typbasetype is + for a domain over an array (that is, typbasetype is an array type). Zero for types other than domains over array types. @@ -7842,7 +7842,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< pg_node_tree - If typdefaultbin is not null, it is the + If typdefaultbin is not null, it is the nodeToString() representation of a default expression for the type. This is only used for domains. @@ -7854,12 +7854,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< text - typdefault is null if the type has no associated - default value. If typdefaultbin is not null, - typdefault must contain a human-readable version of the - default expression represented by typdefaultbin. 
If - typdefaultbin is null and typdefault is - not, then typdefault is the external representation of + typdefault is null if the type has no associated + default value. If typdefaultbin is not null, + typdefault must contain a human-readable version of the + default expression represented by typdefaultbin. If + typdefaultbin is null and typdefault is + not, then typdefault is the external representation of the type's default value, which can be fed to the type's input converter to produce a constant. @@ -7882,13 +7882,13 @@ SCRAM-SHA-256$<iteration count>:<salt>< lists the system-defined values - of typcategory. Any future additions to this list will + of typcategory. Any future additions to this list will also be upper-case ASCII letters. All other ASCII characters are reserved for user-defined categories.
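The interplay of typlen, typalign, typstorage, and typcategory (whose codes are tabulated just below) is easy to see for a few familiar types; a sketch, with output that may vary by platform:

SELECT typname, typlen, typalign, typstorage, typcategory
FROM pg_type
WHERE typname IN ('int4', 'text', 'numeric', 'cstring');
-- int4 is fixed-size; text and numeric are varlena (typlen = -1);
-- cstring is null-terminated (typlen = -2)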
- <structfield>typcategory</> Codes + <structfield>typcategory</structfield> Codes @@ -7957,7 +7957,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< X - unknown type + unknown type @@ -7982,7 +7982,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_user_mapping</> Columns + <structname>pg_user_mapping</structname> Columns @@ -8023,7 +8023,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< text[] - User mapping specific options, as keyword=value strings + User mapping specific options, as keyword=value strings @@ -8241,7 +8241,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_available_extensions</> Columns + <structname>pg_available_extensions</structname> Columns @@ -8303,7 +8303,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_available_extension_versions</> Columns + <structname>pg_available_extension_versions</structname> Columns @@ -8385,11 +8385,11 @@ SCRAM-SHA-256$<iteration count>:<salt>< The view pg_config describes the compile-time configuration parameters of the currently installed - version of PostgreSQL. It is intended, for example, to + version of PostgreSQL. It is intended, for example, to be used by software packages that want to interface to - PostgreSQL to facilitate finding the required header + PostgreSQL to facilitate finding the required header files and libraries. It provides the same basic information as the - PostgreSQL client + PostgreSQL client application. @@ -8399,7 +8399,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_config</> Columns + <structname>pg_config</structname> Columns @@ -8470,15 +8470,15 @@ SCRAM-SHA-256$<iteration count>:<salt>< Cursors are used internally to implement some of the components - of PostgreSQL, such as procedural languages. - Therefore, the pg_cursors view might include cursors + of PostgreSQL, such as procedural languages. + Therefore, the pg_cursors view might include cursors that have not been explicitly created by the user.
- <structname>pg_cursors</> Columns + <structname>pg_cursors</structname> Columns @@ -8526,7 +8526,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< is_scrollable boolean - true if the cursor is scrollable (that is, it + true if the cursor is scrollable (that is, it allows rows to be retrieved in a nonsequential manner); false otherwise @@ -8557,16 +8557,16 @@ SCRAM-SHA-256$<iteration count>:<salt>< The view pg_file_settings provides a summary of the contents of the server's configuration file(s). A row appears in - this view for each name = value entry appearing in the files, + this view for each name = value entry appearing in the files, with annotations indicating whether the value could be applied successfully. Additional row(s) may appear for problems not linked to - a name = value entry, such as syntax errors in the files. + a name = value entry, such as syntax errors in the files. This view is helpful for checking whether planned changes in the configuration files will work, or for diagnosing a previous failure. - Note that this view reports on the current contents of the + Note that this view reports on the current contents of the files, not on what was last applied by the server. (The pg_settings view is usually sufficient to determine that.) @@ -8578,7 +8578,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_file_settings</> Columns + <structname>pg_file_settings</structname> Columns @@ -8604,7 +8604,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< seqno integer - Order in which the entries are processed (1..n) + Order in which the entries are processed (1..n) name @@ -8634,14 +8634,14 @@ SCRAM-SHA-256$<iteration count>:<salt>< If the configuration file contains syntax errors or invalid parameter names, the server will not attempt to apply any settings from it, and - therefore all the applied fields will read as false. + therefore all the applied fields will read as false. In such a case there will be one or more rows with non-null error fields indicating the problem(s). Otherwise, individual settings will be applied if possible. If an individual setting cannot be applied (e.g., invalid value, or the setting cannot be changed after server start) it will have an appropriate message in the error field. Another way that - an entry might have applied = false is that it is + an entry might have applied = false is that it is overridden by a later entry for the same parameter name; this case is not considered an error so nothing appears in the error field. @@ -8666,12 +8666,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< compatibility: it emulates a catalog that existed in PostgreSQL before version 8.1. It shows the names and members of all roles that are marked as not - rolcanlogin, which is an approximation to the set + rolcanlogin, which is an approximation to the set of roles that are being used as groups.
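Problem entries of the kind just described can be pulled out with, for example, this sketch:

SELECT sourcefile, sourceline, name, setting, applied, error
FROM pg_file_settings
WHERE NOT applied OR error IS NOT NULL;   -- rejected or overridden entries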
- <structname>pg_group</> Columns + <structname>pg_group</structname> Columns @@ -8720,7 +8720,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< The view pg_hba_file_rules provides a summary of the contents of the client authentication configuration - file, pg_hba.conf. A row appears in this view for each + file, pg_hba.conf. A row appears in this view for each non-empty, non-comment line in the file, with annotations indicating whether the rule could be applied successfully. @@ -8728,7 +8728,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< This view can be helpful for checking whether planned changes in the authentication configuration file will work, or for diagnosing a previous - failure. Note that this view reports on the current contents + failure. Note that this view reports on the current contents of the file, not on what was last loaded by the server. @@ -8738,7 +8738,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
- <structname>pg_hba_file_rules</> Columns + <structname>pg_hba_file_rules</structname> Columns @@ -8753,7 +8753,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< line_number integer - Line number of this rule in pg_hba.conf + Line number of this rule in pg_hba.conf @@ -8809,7 +8809,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Usually, a row reflecting an incorrect entry will have values for only - the line_number and error fields. + the line_number and error fields. @@ -8831,7 +8831,7 @@ SCRAM-SHA-256$<iteration count>:<salt><
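A quick validity check of pg_hba.conf can therefore look like this (a sketch; only lines with problems are returned):

SELECT line_number, type, database, user_name, error
FROM pg_hba_file_rules
WHERE error IS NOT NULL;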
- <structname>pg_indexes</> Columns + <structname>pg_indexes</structname> Columns @@ -8912,12 +8912,12 @@ SCRAM-SHA-256$<iteration count>:<salt>< in the same way as in pg_description or pg_depend). Also, the right to extend a relation is represented as a separate lockable object. - Also, advisory locks can be taken on numbers that have + Also, advisory locks can be taken on numbers that have user-defined meanings.
- <structname>pg_locks</> Columns + <structname>pg_locks</structname> Columns @@ -8935,15 +8935,15 @@ SCRAM-SHA-256$<iteration count>:<salt>< Type of the lockable object: - relation, - extend, - page, - tuple, - transactionid, - virtualxid, - object, - userlock, or - advisory + relation, + extend, + page, + tuple, + transactionid, + virtualxid, + object, + userlock, or + advisory @@ -9025,7 +9025,7 @@ SCRAM-SHA-256$<iteration count>:<salt>< Column number targeted by the lock (the - classid and objid refer to the + classid and objid refer to the table itself), or zero if the target is some other general database object, or null if the target is not a general database object @@ -9107,23 +9107,23 @@ SCRAM-SHA-256$<iteration count>:<salt>< Advisory locks can be acquired on keys consisting of either a single bigint value or two integer values. A bigint key is displayed with its - high-order half in the classid column, its low-order half - in the objid column, and objsubid equal + high-order half in the classid column, its low-order half + in the objid column, and objsubid equal to 1. The original bigint value can be reassembled with the expression (classid::bigint << 32) | objid::bigint. Integer keys are displayed with the first key in the - classid column, the second key in the objid - column, and objsubid equal to 2. The actual meaning of + classid column, the second key in the objid + column, and objsubid equal to 2. The actual meaning of the keys is up to the user. Advisory locks are local to each database, - so the database column is meaningful for an advisory lock. + so the database column is meaningful for an advisory lock. pg_locks provides a global view of all locks in the database cluster, not only those relevant to the current database. Although its relation column can be joined - against pg_class.oid to identify locked + against pg_class.oid to identify locked relations, this will only work correctly for relations in the current database (those for which the database column is either the current database's OID or zero). @@ -9141,7 +9141,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid; Also, if you are using prepared transactions, the - virtualtransaction column can be joined to the + virtualtransaction column can be joined to the transaction column of the pg_prepared_xacts view to get more information on prepared transactions that hold locks. @@ -9163,7 +9163,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx information about which processes are ahead of which others in lock wait queues, nor information about which processes are parallel workers running on behalf of which other client sessions. It is better to use - the pg_blocking_pids() function + the pg_blocking_pids() function (see ) to identify which process(es) a waiting process is blocked behind. @@ -9172,10 +9172,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The pg_locks view displays data from both the regular lock manager and the predicate lock manager, which are separate systems; in addition, the regular lock manager subdivides its - locks into regular and fast-path locks. + locks into regular and fast-path locks. This data is not guaranteed to be entirely consistent. 
When the view is queried, - data on fast-path locks (with fastpath = true) + data on fast-path locks (with fastpath = true) is gathered from each backend one at a time, without freezing the state of the entire lock manager, so it is possible for locks to be taken or released while information is gathered. Note, however, that these locks are @@ -9218,7 +9218,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
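The advisory-lock key layout described above for pg_locks can be verified within a single session (a sketch; the key value 42 is arbitrary):

SELECT pg_advisory_lock(42);
SELECT locktype, classid, objid, objsubid,
       (classid::bigint << 32) | objid::bigint AS bigint_key
FROM pg_locks
WHERE locktype = 'advisory';   -- shows classid = 0, objid = 42, objsubid = 1
SELECT pg_advisory_unlock(42);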
- <structname>pg_matviews</> Columns + <structname>pg_matviews</structname> Columns @@ -9291,7 +9291,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_policies</> Columns + <structname>pg_policies</structname> Columns @@ -9381,7 +9381,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_prepared_statements</> Columns + <structname>pg_prepared_statements</structname> Columns @@ -9467,7 +9467,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_prepared_xacts</> Columns + <structname>pg_prepared_xacts</structname> Columns @@ -9706,7 +9706,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx slot_typetext - The slot type - physical or logical + The slot type - physical or logical @@ -9787,7 +9787,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The address (LSN) up to which the logical slot's consumer has confirmed receiving data. Data older than this is - not available anymore. NULL for physical slots. + not available anymore. NULL for physical slots. @@ -9817,7 +9817,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_roles</> Columns + <structname>pg_roles</structname> Columns @@ -9900,7 +9900,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx rolpasswordtext - Not the password (always reads as ********) + Not the password (always reads as ********) @@ -9953,7 +9953,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_rules</> Columns + <structname>pg_rules</structname> Columns @@ -9994,9 +9994,9 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- The pg_rules view excludes the ON SELECT rules
+ The pg_rules view excludes the ON SELECT rules
 of views and materialized views; those can be seen in
- pg_views and pg_matviews.
+ pg_views and pg_matviews.



@@ -10011,11 +10011,11 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx

 The view pg_seclabels provides information about
 security labels. It is an easier-to-query version of the
- pg_seclabel catalog.
+ pg_seclabel catalog.


- <structname>pg_seclabels</> Columns
+ <structname>pg_seclabels</structname> Columns



@@ -10045,7 +10045,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx

 For a security label on a table column, this is the column number (the
- objoid and classoid refer to
+ objoid and classoid refer to
 the table itself). For all other object types, this column is zero.


@@ -10105,7 +10105,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_sequences</> Columns + <structname>pg_sequences</structname> Columns @@ -10206,12 +10206,12 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx interface to the and commands. It also provides access to some facts about each parameter that are - not directly available from SHOW, such as minimum and + not directly available from SHOW, such as minimum and maximum values.
- <structname>pg_settings</> Columns
+ <structname>pg_settings</structname> Columns



@@ -10260,8 +10260,8 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 vartype
 text

- Parameter type (bool, enum,
- integer, real, or string)
+ Parameter type (bool, enum,
+ integer, real, or string)



@@ -10306,7 +10306,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 values set from sources other than configuration files, or when
 examined by a user who is neither a superuser nor a member of
 pg_read_all_settings); helpful when using
- include directives in configuration files
+ include directives in configuration files


 sourceline
@@ -10384,7 +10384,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 Changes to these settings can be made in
 postgresql.conf without restarting the server.
 They can also be set for a particular session in the connection request
- packet (for example, via libpq's PGOPTIONS
+ packet (for example, via libpq's PGOPTIONS
 environment variable), but only if the connecting user is a superuser.
 However, these settings never change in a session after it is started.
 If you change them in postgresql.conf, send a
@@ -10402,7 +10402,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 Changes to these settings can be made in
 postgresql.conf without restarting the server.
 They can also be set for a particular session in the connection request
- packet (for example, via libpq's PGOPTIONS
+ packet (for example, via libpq's PGOPTIONS
 environment variable); any user can make such a change for their session.
 However, these settings never change in a session after it is started.
 If you change them in postgresql.conf, send a
@@ -10418,10 +10418,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx

 These settings can be set from postgresql.conf,
- or within a session via the SET command; but only superusers
- can change them via SET. Changes in
+ or within a session via the SET command; but only superusers
+ can change them via SET. Changes in
 postgresql.conf will affect existing sessions
- only if no session-local value has been established with SET.
+ only if no session-local value has been established with SET.



@@ -10431,10 +10431,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx

 These settings can be set from postgresql.conf,
- or within a session via the SET command. Any user is
+ or within a session via the SET command. Any user is
 allowed to change their session-local value. Changes in
 postgresql.conf will affect existing sessions
- only if no session-local value has been established with SET.
+ only if no session-local value has been established with SET.



@@ -10473,7 +10473,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 compatibility: it emulates a catalog that existed in
 PostgreSQL before version 8.1.
 It shows properties of all roles that are marked as
- rolcanlogin in
+ rolcanlogin in
 pg_authid.


@@ -10486,7 +10486,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
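The context classifications discussed above can be surveyed directly; for example, this sketch counts parameters per context:

SELECT context, count(*) AS n_params
FROM pg_settings
GROUP BY context
ORDER BY n_params DESC;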
- <structname>pg_shadow</> Columns + <structname>pg_shadow</structname> Columns @@ -10600,7 +10600,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_stats</> Columns + <structname>pg_stats</structname> Columns @@ -10663,7 +10663,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx If greater than zero, the estimated number of distinct values in the column. If less than zero, the negative of the number of distinct values divided by the number of rows. (The negated form is used when - ANALYZE believes that the number of distinct values is + ANALYZE believes that the number of distinct values is likely to increase as the table grows; the positive form is used when the column seems to have a fixed number of possible values.) For example, -1 indicates a unique column in which the number of distinct @@ -10699,10 +10699,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx A list of values that divide the column's values into groups of approximately equal population. The values in - most_common_vals, if present, are omitted from this + most_common_vals, if present, are omitted from this histogram calculation. (This column is null if the column data type - does not have a < operator or if the - most_common_vals list accounts for the entire + does not have a < operator or if the + most_common_vals list accounts for the entire population.) @@ -10717,7 +10717,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx When the value is near -1 or +1, an index scan on the column will be estimated to be cheaper than when it is near zero, due to reduction of random access to the disk. (This column is null if the column data - type does not have a < operator.) + type does not have a < operator.) @@ -10761,7 +10761,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The maximum number of entries in the array fields can be controlled on a - column-by-column basis using the ALTER TABLE SET STATISTICS + column-by-column basis using the ALTER TABLE SET STATISTICS command, or globally by setting the run-time parameter. @@ -10781,7 +10781,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
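The n_distinct and correlation figures are easiest to read per column; a sketch against a catalog table (assuming it has been analyzed, which is normally the case):

SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'pg_catalog' AND tablename = 'pg_class';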
- <structname>pg_tables</> Columns + <structname>pg_tables</structname> Columns @@ -10862,7 +10862,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_timezone_abbrevs</> Columns + <structname>pg_timezone_abbrevs</structname> Columns @@ -10910,7 +10910,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The view pg_timezone_names provides a list - of time zone names that are recognized by SET TIMEZONE, + of time zone names that are recognized by SET TIMEZONE, along with their associated abbreviations, UTC offsets, and daylight-savings status. (Technically, PostgreSQL does not use UTC because leap @@ -10919,11 +10919,11 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx linkend="view-pg-timezone-abbrevs">pg_timezone_abbrevs, many of these names imply a set of daylight-savings transition date rules. Therefore, the associated information changes across local DST boundaries. The displayed information is computed based on the current - value of CURRENT_TIMESTAMP. + value of CURRENT_TIMESTAMP.
- <structname>pg_timezone_names</> Columns + <structname>pg_timezone_names</structname> Columns @@ -10976,7 +10976,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_user</> Columns + <structname>pg_user</structname> Columns @@ -11032,7 +11032,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx passwd text - Not the password (always reads as ********) + Not the password (always reads as ********) @@ -11069,7 +11069,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_user_mappings</> Columns + <structname>pg_user_mappings</structname> Columns @@ -11126,7 +11126,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx text[] - User mapping specific options, as keyword=value strings + User mapping specific options, as keyword=value strings @@ -11141,12 +11141,12 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx current user is the user being mapped, and owns the server or - holds USAGE privilege on it + holds USAGE privilege on it - current user is the server owner and mapping is for PUBLIC + current user is the server owner and mapping is for PUBLIC @@ -11173,7 +11173,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
- <structname>pg_views</> Columns + <structname>pg_views</structname> Columns diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index 63f7de5b43..3874a3f1ea 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -35,12 +35,12 @@ Locale Support - locale + locale - Locale support refers to an application respecting + Locale support refers to an application respecting cultural preferences regarding alphabets, sorting, number - formatting, etc. PostgreSQL uses the standard ISO + formatting, etc. PostgreSQL uses the standard ISO C and POSIX locale facilities provided by the server operating system. For additional information refer to the documentation of your system. @@ -67,14 +67,14 @@ initdb --locale=sv_SE This example for Unix systems sets the locale to Swedish - (sv) as spoken - in Sweden (SE). Other possibilities might include - en_US (U.S. English) and fr_CA (French + (sv) as spoken + in Sweden (SE). Other possibilities might include + en_US (U.S. English) and fr_CA (French Canadian). If more than one character set can be used for a locale then the specifications can take the form - language_territory.codeset. For example, - fr_BE.UTF-8 represents the French language (fr) as - spoken in Belgium (BE), with a UTF-8 character set + language_territory.codeset. For example, + fr_BE.UTF-8 represents the French language (fr) as + spoken in Belgium (BE), with a UTF-8 character set encoding. @@ -82,9 +82,9 @@ initdb --locale=sv_SE What locales are available on your system under what names depends on what was provided by the operating system vendor and what was installed. On most Unix systems, the command - locale -a will provide a list of available locales. - Windows uses more verbose locale names, such as German_Germany - or Swedish_Sweden.1252, but the principles are the same. + locale -a will provide a list of available locales. + Windows uses more verbose locale names, such as German_Germany + or Swedish_Sweden.1252, but the principles are the same. @@ -97,28 +97,28 @@ initdb --locale=sv_SE - LC_COLLATE - String sort order + LC_COLLATE + String sort order - LC_CTYPE - Character classification (What is a letter? Its upper-case equivalent?) + LC_CTYPE + Character classification (What is a letter? Its upper-case equivalent?) - LC_MESSAGES - Language of messages + LC_MESSAGES + Language of messages - LC_MONETARY - Formatting of currency amounts + LC_MONETARY + Formatting of currency amounts - LC_NUMERIC - Formatting of numbers + LC_NUMERIC + Formatting of numbers - LC_TIME - Formatting of dates and times + LC_TIME + Formatting of dates and times @@ -133,8 +133,8 @@ initdb --locale=sv_SE If you want the system to behave as if it had no locale support, - use the special locale name C, or equivalently - POSIX. + use the special locale name C, or equivalently + POSIX. @@ -192,14 +192,14 @@ initdb --locale=sv_SE settings for the purpose of setting the language of messages. If in doubt, please refer to the documentation of your operating system, in particular the documentation about - gettext. + gettext. To enable messages to be translated to the user's preferred language, NLS must have been selected at build time - (configure --enable-nls). All other locale support is + (configure --enable-nls). All other locale support is built in automatically. 
@@ -213,63 +213,63 @@ initdb --locale=sv_SE



- Sort order in queries using ORDER BY or the standard
+ Sort order in queries using ORDER BY or the standard
 comparison operators on textual data
- ORDER BY and locales
+ ORDER BY and locales



- The upper, lower, and initcap
+ The upper, lower, and initcap
 functions
- upper and locales
- lower and locales
+ upper and locales
+ lower and locales



- Pattern matching operators (LIKE, SIMILAR TO,
+ Pattern matching operators (LIKE, SIMILAR TO,
 and POSIX-style regular expressions); locales
 affect both case insensitive matching and the classification of
 characters by character-class regular expressions
- LIKE and locales
- regular expressions and locales
+ LIKE and locales
+ regular expressions and locales



- The to_char family of functions
- to_char and locales
+ The to_char family of functions
+ to_char and locales



- The ability to use indexes with LIKE clauses
+ The ability to use indexes with LIKE clauses




 The drawback of using locales other than C or
- POSIX in PostgreSQL is its performance
+ POSIX in PostgreSQL is its performance
 impact. It slows character handling and prevents ordinary indexes
- from being used by LIKE. For this reason use locales
+ from being used by LIKE. For this reason use locales
 only if you actually need them.



- As a workaround to allow PostgreSQL to use indexes
- with LIKE clauses under a non-C locale, several custom
+ As a workaround to allow PostgreSQL to use indexes
+ with LIKE clauses under a non-C locale, several custom
 operator classes exist. These allow the creation of an index that
 performs a strict character-by-character comparison, ignoring
 locale comparison rules. Refer to 
 for more information. Another approach is to create indexes using
- the C collation, as discussed in
+ the C collation, as discussed in
 .



@@ -286,20 +286,20 @@ initdb --locale=sv_SE

- Check that PostgreSQL is actually using the locale
- that you think it is. The LC_COLLATE and LC_CTYPE
+ Check that PostgreSQL is actually using the locale
+ that you think it is. The LC_COLLATE and LC_CTYPE
 settings are determined when a database is created, and cannot be
 changed except by creating a new database. Other locale
- settings including LC_MESSAGES and LC_MONETARY
+ settings including LC_MESSAGES and LC_MONETARY
 are initially determined by the environment the server is started
 in, but can be changed on-the-fly. You can check the active locale
- settings using the SHOW command.
+ settings using the SHOW command.



- The directory src/test/locale in the source
+ The directory src/test/locale in the source
 distribution contains a test suite for
- PostgreSQL's locale support.
+ PostgreSQL's locale support.



@@ -313,7 +313,7 @@ initdb --locale=sv_SE
 Maintaining catalogs of message translations requires the on-going
 efforts of many volunteers that want to see
- PostgreSQL speak their preferred language well.
+ PostgreSQL speak their preferred language well.
 If messages in your language are currently not available or not fully
 translated, your assistance would be appreciated. If you want
 to help, refer to or write to the developers'
@@ -326,7 +326,7 @@ initdb --locale=sv_SE
 Collation Support

- collation
+ collation

 The collation feature allows specifying the sort order and character
@@ -370,9 +370,9 @@ initdb --locale=sv_SE
 function or operator call is derived from the arguments, as described
 below.
In addition to comparison operators, collations are taken into account
 by functions that convert between lower and upper case
- letters, such as lower, upper, and
- initcap; by pattern matching operators; and by
- to_char and related functions.
+ letters, such as lower, upper, and
+ initcap; by pattern matching operators; and by
+ to_char and related functions.
@@ -452,7 +452,7 @@ SELECT a < ('foo' COLLATE "fr_FR") FROM test1;
SELECT a < b FROM test1;

 the parser cannot determine which collation to apply, since the
- a and b columns have conflicting
+ a and b columns have conflicting
 implicit collations. Since the < operator
 does need to know which collation to use, this will result in an
 error. The error can be resolved by attaching an explicit collation
@@ -468,7 +468,7 @@ SELECT a COLLATE "de_DE" < b FROM test1;
SELECT a || b FROM test1;

- does not result in an error, because the || operator
+ does not result in an error, because the || operator
 does not care about collations: its result is the same regardless
 of the collation.
@@ -486,8 +486,8 @@ SELECT * FROM test1 ORDER BY a || 'foo';
SELECT * FROM test1 ORDER BY a || b;

- results in an error, because even though the || operator
- doesn't need to know a collation, the ORDER BY clause does.
+ results in an error, because even though the || operator
+ doesn't need to know a collation, the ORDER BY clause does.
 As before, the conflict can be resolved with an explicit collation
 specifier:
@@ -508,7 +508,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR";
 operating system C library. These are the locales that most tools
 provided by the operating system use. Another provider
 is icu, which uses the external
- ICU library. ICU locales can only be
+ ICU library. ICU locales can only be
 used if support for ICU was configured when
 PostgreSQL was built.
@@ -541,14 +541,14 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR";
 Standard Collations

- On all platforms, the collations named default,
- C, and POSIX are available. Additional
+ On all platforms, the collations named default,
+ C, and POSIX are available. Additional
 collations may be available depending on operating system support.
- The default collation selects the LC_COLLATE
+ The default collation selects the LC_COLLATE
 and LC_CTYPE values specified at database creation time.
- The C and POSIX collations both specify
- traditional C behavior, in which only the ASCII letters
- A through Z
+ The C and POSIX collations both specify
+ traditional C behavior, in which only the ASCII letters
+ A through Z
 are treated as letters, and sorting is done strictly by character
 code byte values.



@@ -565,7 +565,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR";
 If the operating system provides support for using multiple locales
- within a single program (newlocale and related functions),
+ within a single program (newlocale and related functions),
 or if support for ICU is configured, then when a database cluster is
 initialized, initdb populates the system
 catalog pg_collation with
@@ -618,8 +618,8 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR";
 within a given database even though it would not be unique globally.
 Use of the stripped collation names is recommended, since it will make one
 less thing you need to change if you decide to change to
- another database encoding.  Note however that the default,
- C, and POSIX collations can be used regardless of
+ another database encoding. 
Note however that the default, + C, and POSIX collations can be used regardless of the database encoding. @@ -630,7 +630,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; - will draw an error even though the C and POSIX + will draw an error even though the C and POSIX collations have identical behaviors. Mixing stripped and non-stripped collation names is therefore not recommended. @@ -691,7 +691,7 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; database encoding is one of these, ICU collation entries in pg_collation are ignored. Attempting to use one will draw an error along the lines of collation "de-x-icu" for - encoding "WIN874" does not exist. + encoding "WIN874" does not exist. @@ -889,30 +889,30 @@ CREATE COLLATION french FROM "fr-x-icu"; Character Set Support - character set + character set The character set support in PostgreSQL allows you to store text in a variety of character sets (also called encodings), including single-byte character sets such as the ISO 8859 series and - multiple-byte character sets such as EUC (Extended Unix + multiple-byte character sets such as EUC (Extended Unix Code), UTF-8, and Mule internal code. All supported character sets can be used transparently by clients, but a few are not supported for use within the server (that is, as a server-side encoding). The default character set is selected while initializing your PostgreSQL database - cluster using initdb. It can be overridden when you + cluster using initdb. It can be overridden when you create a database, so you can have multiple databases each with a different character set. An important restriction, however, is that each database's character set - must be compatible with the database's LC_CTYPE (character - classification) and LC_COLLATE (string sort order) locale - settings. For C or - POSIX locale, any character set is allowed, but for other + must be compatible with the database's LC_CTYPE (character + classification) and LC_COLLATE (string sort order) locale + settings. For C or + POSIX locale, any character set is allowed, but for other libc-provided locales there is only one character set that will work correctly. (On Windows, however, UTF-8 encoding can be used with any locale.) 
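Tying the collation pieces together, a short session might look like this (a sketch; it reuses the test1 table from the examples above and assumes the fr_FR locale is installed, with fr-x-icu as the ICU alternative):

SELECT a < b COLLATE "fr_FR" FROM test1;    -- explicit COLLATE resolves the conflict
SELECT * FROM test1 ORDER BY a COLLATE "C"; -- "C" works in any database encoding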
@@ -954,7 +954,7 @@ CREATE COLLATION french FROM "fr-x-icu"; No No 1-2 - WIN950, Windows950 + WIN950, Windows950 EUC_CN @@ -1017,11 +1017,11 @@ CREATE COLLATION french FROM "fr-x-icu"; No No 1-2 - WIN936, Windows936 + WIN936, Windows936 ISO_8859_5 - ISO 8859-5, ECMA 113 + ISO 8859-5, ECMA 113 Latin/Cyrillic Yes Yes @@ -1030,7 +1030,7 @@ CREATE COLLATION french FROM "fr-x-icu"; ISO_8859_6 - ISO 8859-6, ECMA 114 + ISO 8859-6, ECMA 114 Latin/Arabic Yes Yes @@ -1039,7 +1039,7 @@ CREATE COLLATION french FROM "fr-x-icu"; ISO_8859_7 - ISO 8859-7, ECMA 118 + ISO 8859-7, ECMA 118 Latin/Greek Yes Yes @@ -1048,7 +1048,7 @@ CREATE COLLATION french FROM "fr-x-icu"; ISO_8859_8 - ISO 8859-8, ECMA 121 + ISO 8859-8, ECMA 121 Latin/Hebrew Yes Yes @@ -1057,7 +1057,7 @@ CREATE COLLATION french FROM "fr-x-icu"; JOHAB - JOHAB + JOHAB Korean (Hangul) No No @@ -1071,7 +1071,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - KOI8 + KOI8 KOI8U @@ -1084,57 +1084,57 @@ CREATE COLLATION french FROM "fr-x-icu"; LATIN1 - ISO 8859-1, ECMA 94 + ISO 8859-1, ECMA 94 Western European Yes Yes 1 - ISO88591 + ISO88591 LATIN2 - ISO 8859-2, ECMA 94 + ISO 8859-2, ECMA 94 Central European Yes Yes 1 - ISO88592 + ISO88592 LATIN3 - ISO 8859-3, ECMA 94 + ISO 8859-3, ECMA 94 South European Yes Yes 1 - ISO88593 + ISO88593 LATIN4 - ISO 8859-4, ECMA 94 + ISO 8859-4, ECMA 94 North European Yes Yes 1 - ISO88594 + ISO88594 LATIN5 - ISO 8859-9, ECMA 128 + ISO 8859-9, ECMA 128 Turkish Yes Yes 1 - ISO88599 + ISO88599 LATIN6 - ISO 8859-10, ECMA 144 + ISO 8859-10, ECMA 144 Nordic Yes Yes 1 - ISO885910 + ISO885910 LATIN7 @@ -1143,7 +1143,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ISO885913 + ISO885913 LATIN8 @@ -1152,7 +1152,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ISO885914 + ISO885914 LATIN9 @@ -1161,16 +1161,16 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ISO885915 + ISO885915 LATIN10 - ISO 8859-16, ASRO SR 14111 + ISO 8859-16, ASRO SR 14111 Romanian Yes No 1 - ISO885916 + ISO885916 MULE_INTERNAL @@ -1188,7 +1188,7 @@ CREATE COLLATION french FROM "fr-x-icu"; No No 1-2 - Mskanji, ShiftJIS, WIN932, Windows932 + Mskanji, ShiftJIS, WIN932, Windows932 SHIFT_JIS_2004 @@ -1202,7 +1202,7 @@ CREATE COLLATION french FROM "fr-x-icu"; SQL_ASCII unspecified (see text) - any + any Yes No 1 @@ -1215,16 +1215,16 @@ CREATE COLLATION french FROM "fr-x-icu"; No No 1-2 - WIN949, Windows949 + WIN949, Windows949 UTF8 Unicode, 8-bit - all + all Yes Yes 1-4 - Unicode + Unicode WIN866 @@ -1233,7 +1233,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ALT + ALT WIN874 @@ -1260,7 +1260,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - WIN + WIN WIN1252 @@ -1323,30 +1323,30 @@ CREATE COLLATION french FROM "fr-x-icu"; Yes Yes 1 - ABC, TCVN, TCVN5712, VSCII + ABC, TCVN, TCVN5712, VSCII
- Not all client APIs support all the listed character sets. For example, the - PostgreSQL - JDBC driver does not support MULE_INTERNAL, LATIN6, - LATIN8, and LATIN10. + Not all client APIs support all the listed character sets. For example, the + PostgreSQL + JDBC driver does not support MULE_INTERNAL, LATIN6, + LATIN8, and LATIN10. - The SQL_ASCII setting behaves considerably differently + The SQL_ASCII setting behaves considerably differently from the other settings. When the server character set is - SQL_ASCII, the server interprets byte values 0-127 + SQL_ASCII, the server interprets byte values 0-127 according to the ASCII standard, while byte values 128-255 are taken as uninterpreted characters. No encoding conversion will be done when - the setting is SQL_ASCII. Thus, this setting is not so + the setting is SQL_ASCII. Thus, this setting is not so much a declaration that a specific encoding is in use, as a declaration of ignorance about the encoding. In most cases, if you are working with any non-ASCII data, it is unwise to use the - SQL_ASCII setting because + SQL_ASCII setting because PostgreSQL will be unable to help you by converting or validating non-ASCII characters. @@ -1356,7 +1356,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Setting the Character Set - initdb defines the default character set (encoding) + initdb defines the default character set (encoding) for a PostgreSQL cluster. For example, @@ -1367,8 +1367,8 @@ initdb -E EUC_JP EUC_JP (Extended Unix Code for Japanese). You can use instead of if you prefer longer option strings. - If no option is - given, initdb attempts to determine the appropriate + If no or option is + given, initdb attempts to determine the appropriate encoding to use based on the specified or default locale. @@ -1388,7 +1388,7 @@ createdb -E EUC_KR -T template0 --lc-collate=ko_KR.euckr --lc-ctype=ko_KR.euckr CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE='ko_KR.euckr' TEMPLATE=template0; - Notice that the above commands specify copying the template0 + Notice that the above commands specify copying the template0 database. When copying any other database, the encoding and locale settings cannot be changed from those of the source database, because that might result in corrupt data. For more information see @@ -1420,7 +1420,7 @@ $ psql -l On most modern operating systems, PostgreSQL - can determine which character set is implied by the LC_CTYPE + can determine which character set is implied by the LC_CTYPE setting, and it will enforce that only the matching database encoding is used. On older systems it is your responsibility to ensure that you use the encoding expected by the locale you have selected. A mistake in @@ -1430,9 +1430,9 @@ $ psql -l PostgreSQL will allow superusers to create - databases with SQL_ASCII encoding even when - LC_CTYPE is not C or POSIX. As noted - above, SQL_ASCII does not enforce that the data stored in + databases with SQL_ASCII encoding even when + LC_CTYPE is not C or POSIX. As noted + above, SQL_ASCII does not enforce that the data stored in the database has any particular encoding, and so this choice poses risks of locale-dependent misbehavior. Using this combination of settings is deprecated and may someday be forbidden altogether. @@ -1447,7 +1447,7 @@ $ psql -l PostgreSQL supports automatic character set conversion between server and client for certain character set combinations. The conversion information is stored in the - pg_conversion system catalog. 
PostgreSQL + pg_conversion system catalog. PostgreSQL comes with some predefined conversions, as shown in . You can create a new conversion using the SQL command CREATE CONVERSION. @@ -1763,7 +1763,7 @@ $ psql -l - libpq () has functions to control the client encoding. + libpq () has functions to control the client encoding. @@ -1774,14 +1774,14 @@ $ psql -l Setting the client encoding can be done with this SQL command: -SET CLIENT_ENCODING TO 'value'; +SET CLIENT_ENCODING TO 'value'; Also you can use the standard SQL syntax SET NAMES for this purpose: -SET NAMES 'value'; +SET NAMES 'value'; To query the current client encoding: @@ -1813,7 +1813,7 @@ RESET client_encoding; Using the configuration variable . If the - client_encoding variable is set, that client + client_encoding variable is set, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.) @@ -1832,9 +1832,9 @@ RESET client_encoding; - If the client character set is defined as SQL_ASCII, + If the client character set is defined as SQL_ASCII, encoding conversion is disabled, regardless of the server's character - set. Just as for the server, use of SQL_ASCII is unwise + set. Just as for the server, use of SQL_ASCII is unwise unless you are working with all-ASCII data. diff --git a/doc/src/sgml/citext.sgml b/doc/src/sgml/citext.sgml index 9b4c68f7d4..82251de852 100644 --- a/doc/src/sgml/citext.sgml +++ b/doc/src/sgml/citext.sgml @@ -8,10 +8,10 @@ - The citext module provides a case-insensitive - character string type, citext. Essentially, it internally calls - lower when comparing values. Otherwise, it behaves almost - exactly like text. + The citext module provides a case-insensitive + character string type, citext. Essentially, it internally calls + lower when comparing values. Otherwise, it behaves almost + exactly like text. @@ -19,7 +19,7 @@ The standard approach to doing case-insensitive matches - in PostgreSQL has been to use the lower + in PostgreSQL has been to use the lower function when comparing values, for example @@ -35,19 +35,19 @@ SELECT * FROM tab WHERE lower(col) = LOWER(?); It makes your SQL statements verbose, and you always have to remember to - use lower on both the column and the query value. + use lower on both the column and the query value. It won't use an index, unless you create a functional index using - lower. + lower. - If you declare a column as UNIQUE or PRIMARY - KEY, the implicitly generated index is case-sensitive. So it's + If you declare a column as UNIQUE or PRIMARY + KEY, the implicitly generated index is case-sensitive. So it's useless for case-insensitive searches, and it won't enforce uniqueness case-insensitively. @@ -55,13 +55,13 @@ SELECT * FROM tab WHERE lower(col) = LOWER(?); - The citext data type allows you to eliminate calls - to lower in SQL queries, and allows a primary key to - be case-insensitive. citext is locale-aware, just - like text, which means that the matching of upper case and + The citext data type allows you to eliminate calls + to lower in SQL queries, and allows a primary key to + be case-insensitive. citext is locale-aware, just + like text, which means that the matching of upper case and lower case characters is dependent on the rules of - the database's LC_CTYPE setting. Again, this behavior is - identical to the use of lower in queries. But because it's + the database's LC_CTYPE setting. 
Again, this behavior is + identical to the use of lower in queries. But because it's done transparently by the data type, you don't have to remember to do anything special in your queries. @@ -89,9 +89,9 @@ INSERT INTO users VALUES ( 'Bjørn', md5(random()::text) ); SELECT * FROM users WHERE nick = 'Larry'; - The SELECT statement will return one tuple, even though - the nick column was set to larry and the query - was for Larry. + The SELECT statement will return one tuple, even though + the nick column was set to larry and the query + was for Larry. @@ -99,82 +99,82 @@ SELECT * FROM users WHERE nick = 'Larry'; String Comparison Behavior - citext performs comparisons by converting each string to lower - case (as though lower were called) and then comparing the + citext performs comparisons by converting each string to lower + case (as though lower were called) and then comparing the results normally. Thus, for example, two strings are considered equal - if lower would produce identical results for them. + if lower would produce identical results for them. In order to emulate a case-insensitive collation as closely as possible, - there are citext-specific versions of a number of string-processing + there are citext-specific versions of a number of string-processing operators and functions. So, for example, the regular expression - operators ~ and ~* exhibit the same behavior when - applied to citext: they both match case-insensitively. + operators ~ and ~* exhibit the same behavior when + applied to citext: they both match case-insensitively. The same is true - for !~ and !~*, as well as for the - LIKE operators ~~ and ~~*, and - !~~ and !~~*. If you'd like to match - case-sensitively, you can cast the operator's arguments to text. + for !~ and !~*, as well as for the + LIKE operators ~~ and ~~*, and + !~~ and !~~*. If you'd like to match + case-sensitively, you can cast the operator's arguments to text. Similarly, all of the following functions perform matching - case-insensitively if their arguments are citext: + case-insensitively if their arguments are citext: - regexp_match() + regexp_match() - regexp_matches() + regexp_matches() - regexp_replace() + regexp_replace() - regexp_split_to_array() + regexp_split_to_array() - regexp_split_to_table() + regexp_split_to_table() - replace() + replace() - split_part() + split_part() - strpos() + strpos() - translate() + translate() For the regexp functions, if you want to match case-sensitively, you can - specify the c flag to force a case-sensitive match. Otherwise, - you must cast to text before using one of these functions if + specify the c flag to force a case-sensitive match. Otherwise, + you must cast to text before using one of these functions if you want case-sensitive behavior. @@ -186,13 +186,13 @@ SELECT * FROM users WHERE nick = 'Larry'; - citext's case-folding behavior depends on - the LC_CTYPE setting of your database. How it compares + citext's case-folding behavior depends on + the LC_CTYPE setting of your database. How it compares values is therefore determined when the database is created. It is not truly case-insensitive in the terms defined by the Unicode standard. Effectively, what this means is that, as long as you're happy with your - collation, you should be happy with citext's comparisons. But + collation, you should be happy with citext's comparisons. 
But if you have data in different languages stored in your database, users of one language may find their query results are not as expected if the collation is for another language. @@ -201,38 +201,38 @@ SELECT * FROM users WHERE nick = 'Larry'; - As of PostgreSQL 9.1, you can attach a - COLLATE specification to citext columns or data - values. Currently, citext operators will honor a non-default - COLLATE specification while comparing case-folded strings, + As of PostgreSQL 9.1, you can attach a + COLLATE specification to citext columns or data + values. Currently, citext operators will honor a non-default + COLLATE specification while comparing case-folded strings, but the initial folding to lower case is always done according to the - database's LC_CTYPE setting (that is, as though - COLLATE "default" were given). This may be changed in a - future release so that both steps follow the input COLLATE + database's LC_CTYPE setting (that is, as though + COLLATE "default" were given). This may be changed in a + future release so that both steps follow the input COLLATE specification. - citext is not as efficient as text because the + citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. It is, - however, slightly more efficient than using lower to get + however, slightly more efficient than using lower to get case-insensitive matching. - citext doesn't help much if you need data to compare + citext doesn't help much if you need data to compare case-sensitively in some contexts and case-insensitively in other - contexts. The standard answer is to use the text type and - manually use the lower function when you need to compare + contexts. The standard answer is to use the text type and + manually use the lower function when you need to compare case-insensitively; this works all right if case-insensitive comparison is needed only infrequently. If you need case-insensitive behavior most of the time and case-sensitive infrequently, consider storing the data - as citext and explicitly casting the column to text + as citext and explicitly casting the column to text when you want case-sensitive comparison. In either situation, you will need two indexes if you want both types of searches to be fast. @@ -240,9 +240,9 @@ SELECT * FROM users WHERE nick = 'Larry'; - The schema containing the citext operators must be - in the current search_path (typically public); - if it is not, the normal case-sensitive text operators + The schema containing the citext operators must be + in the current search_path (typically public); + if it is not, the normal case-sensitive text operators will be invoked instead. @@ -257,7 +257,7 @@ SELECT * FROM users WHERE nick = 'Larry'; - Inspired by the original citext module by Donald Fraser. + Inspired by the original citext module by Donald Fraser. diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 78c594bbba..722f3da813 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -21,9 +21,9 @@ As explained in , PostgreSQL actually does privilege - management in terms of roles. In this chapter, we - consistently use database user to mean role with the - LOGIN privilege. + management in terms of roles. In this chapter, we + consistently use database user to mean role with the + LOGIN privilege. 
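To make the role terminology above concrete, here is a minimal sketch; app_user and app_user2 are made-up placeholder names, not anything taken from the surrounding patch:

-- A "database user" in the sense used in this chapter is simply
-- a role that has the LOGIN privilege:
CREATE ROLE app_user LOGIN PASSWORD 'secret';

-- CREATE USER is equivalent shorthand that implies LOGIN:
CREATE USER app_user2 PASSWORD 'secret';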
@@ -66,7 +66,7 @@ which traditionally is named pg_hba.conf and is stored in the database cluster's data directory. - (HBA stands for host-based authentication.) A default + (HBA stands for host-based authentication.) A default pg_hba.conf file is installed when the data directory is initialized by initdb. It is possible to place the authentication configuration file elsewhere, @@ -82,7 +82,7 @@ up of a number of fields which are separated by spaces and/or tabs. Fields can contain white space if the field value is double-quoted. Quoting one of the keywords in a database, user, or address field (e.g., - all or replication) makes the word lose its special + all or replication) makes the word lose its special meaning, and just match a database, user, or host with that name. @@ -92,8 +92,8 @@ and the authentication method to be used for connections matching these parameters. The first record with a matching connection type, client address, requested database, and user name is used to perform - authentication. There is no fall-through or - backup: if one record is chosen and the authentication + authentication. There is no fall-through or + backup: if one record is chosen and the authentication fails, subsequent records are not considered. If no record matches, access is denied. @@ -138,7 +138,7 @@ hostnossl database user the server is started with an appropriate value for the configuration parameter, since the default behavior is to listen for TCP/IP connections - only on the local loopback address localhost. + only on the local loopback address localhost. @@ -169,7 +169,7 @@ hostnossl database user hostnossl - This record type has the opposite behavior of hostssl; + This record type has the opposite behavior of hostssl; it only matches connection attempts made over TCP/IP that do not use SSL. @@ -182,24 +182,24 @@ hostnossl database user Specifies which database name(s) this record matches. The value all specifies that it matches all databases. - The value sameuser specifies that the record + The value sameuser specifies that the record matches if the requested database has the same name as the - requested user. The value samerole specifies that + requested user. The value samerole specifies that the requested user must be a member of the role with the same - name as the requested database. (samegroup is an - obsolete but still accepted spelling of samerole.) + name as the requested database. (samegroup is an + obsolete but still accepted spelling of samerole.) Superusers are not considered to be members of a role for the - purposes of samerole unless they are explicitly + purposes of samerole unless they are explicitly members of the role, directly or indirectly, and not just by virtue of being a superuser. - The value replication specifies that the record + The value replication specifies that the record matches if a physical replication connection is requested (note that replication connections do not specify any particular database). Otherwise, this is the name of a specific PostgreSQL database. Multiple database names can be supplied by separating them with commas. A separate file containing database names can be specified by - preceding the file name with @. + preceding the file name with @. @@ -211,18 +211,18 @@ hostnossl database user Specifies which database user name(s) this record matches. The value all specifies that it matches all users. Otherwise, this is either the name of a specific - database user, or a group name preceded by +. + database user, or a group name preceded by +. 
(Recall that there is no real distinction between users and groups - in PostgreSQL; a + mark really means + in PostgreSQL; a + mark really means match any of the roles that are directly or indirectly members - of this role, while a name without a + mark matches + of this role, while a name without a + mark matches only that specific role.) For this purpose, a superuser is only considered to be a member of a role if they are explicitly a member of the role, directly or indirectly, and not just by virtue of being a superuser. Multiple user names can be supplied by separating them with commas. A separate file containing user names can be specified by preceding the - file name with @. + file name with @. @@ -239,7 +239,7 @@ hostnossl database user An IP address range is specified using standard numeric notation for the range's starting address, then a slash (/) - and a CIDR mask length. The mask + and a CIDR mask length. The mask length indicates the number of high-order bits of the client IP address that must match. Bits to the right of this should be zero in the given IP address. @@ -317,7 +317,7 @@ hostnossl database user This field only applies to host, - hostssl, and hostnossl records. + hostssl, and hostnossl records. @@ -360,17 +360,17 @@ hostnossl database user These two fields can be used as an alternative to the - IP-address/mask-length + IP-address/mask-length notation. Instead of specifying the mask length, the actual mask is specified in a - separate column. For example, 255.0.0.0 represents an IPv4 - CIDR mask length of 8, and 255.255.255.255 represents a + separate column. For example, 255.0.0.0 represents an IPv4 + CIDR mask length of 8, and 255.255.255.255 represents a CIDR mask length of 32. These fields only apply to host, - hostssl, and hostnossl records. + hostssl, and hostnossl records. @@ -385,7 +385,7 @@ hostnossl database user - trust + trust Allow the connection unconditionally. This method @@ -399,12 +399,12 @@ hostnossl database user - reject + reject Reject the connection unconditionally. This is useful for - filtering out certain hosts from a group, for example a - reject line could block a specific host from connecting, + filtering out certain hosts from a group, for example a + reject line could block a specific host from connecting, while a later line allows the remaining hosts in a specific network to connect. @@ -412,7 +412,7 @@ hostnossl database user - scram-sha-256 + scram-sha-256 Perform SCRAM-SHA-256 authentication to verify the user's @@ -422,7 +422,7 @@ hostnossl database user - md5 + md5 Perform SCRAM-SHA-256 or MD5 authentication to verify the @@ -433,7 +433,7 @@ hostnossl database user - password + password Require the client to supply an unencrypted password for @@ -446,7 +446,7 @@ hostnossl database user - gss + gss Use GSSAPI to authenticate the user. This is only @@ -457,7 +457,7 @@ hostnossl database user - sspi + sspi Use SSPI to authenticate the user. This is only @@ -468,7 +468,7 @@ hostnossl database user - ident + ident Obtain the operating system user name of the client @@ -483,7 +483,7 @@ hostnossl database user - peer + peer Obtain the client's operating system user name from the operating @@ -495,17 +495,17 @@ hostnossl database user - ldap + ldap - Authenticate using an LDAP server. See LDAP server. See for details. - radius + radius Authenticate using a RADIUS server. See database
user - cert + cert Authenticate using SSL client certificates. See @@ -525,7 +525,7 @@ hostnossl database user - pam + pam Authenticate using the Pluggable Authentication Modules @@ -536,7 +536,7 @@ hostnossl database user - bsd + bsd Authenticate using the BSD Authentication service provided by the @@ -554,17 +554,17 @@ hostnossl database user auth-options - After the auth-method field, there can be field(s) of - the form name=value that + After the auth-method field, there can be field(s) of + the form name=value that specify options for the authentication method. Details about which options are available for which authentication methods appear below. In addition to the method-specific options listed below, there is one - method-independent authentication option clientcert, which - can be specified in any hostssl record. When set - to 1, this option requires the client to present a valid + method-independent authentication option clientcert, which + can be specified in any hostssl record. When set + to 1, this option requires the client to present a valid (trusted) SSL certificate, in addition to the other requirements of the authentication method. @@ -574,11 +574,11 @@ hostnossl database user - Files included by @ constructs are read as lists of names, + Files included by @ constructs are read as lists of names, which can be separated by either whitespace or commas. Comments are introduced by #, just as in - pg_hba.conf, and nested @ constructs are - allowed. Unless the file name following @ is an absolute + pg_hba.conf, and nested @ constructs are + allowed. Unless the file name following @ is an absolute path, it is taken to be relative to the directory containing the referencing file. @@ -589,10 +589,10 @@ hostnossl database user significant. Typically, earlier records will have tight connection match parameters and weaker authentication methods, while later records will have looser match parameters and stronger authentication - methods. For example, one might wish to use trust + methods. For example, one might wish to use trust authentication for local TCP/IP connections but require a password for remote TCP/IP connections. In this case a record specifying - trust authentication for connections from 127.0.0.1 would + trust authentication for connections from 127.0.0.1 would appear before a record specifying password authentication for a wider range of allowed client IP addresses. @@ -603,7 +603,7 @@ hostnossl database user SIGHUPSIGHUP signal. If you edit the file on an active system, you will need to signal the postmaster - (using pg_ctl reload or kill -HUP) to make it + (using pg_ctl reload or kill -HUP) to make it re-read the file. @@ -618,7 +618,7 @@ hostnossl database user The system view pg_hba_file_rules - can be helpful for pre-testing changes to the pg_hba.conf + can be helpful for pre-testing changes to the pg_hba.conf file, or for diagnosing problems if loading of the file did not have the desired effects. Rows in the view with non-null error fields indicate problems in the @@ -629,9 +629,9 @@ hostnossl database user To connect to a particular database, a user must not only pass the pg_hba.conf checks, but must have the - CONNECT privilege for the database. If you wish to + CONNECT privilege for the database. If you wish to restrict which users can connect to which databases, it's usually - easier to control this by granting/revoking CONNECT privilege + easier to control this by granting/revoking CONNECT privilege than to put the rules in pg_hba.conf entries. 
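Both suggestions above, pre-testing pg_hba.conf edits and controlling database access through privileges, can be exercised from SQL. A hedged sketch; db1 and app_user are placeholder names:

-- Check the on-disk pg_hba.conf for parse errors before reloading:
SELECT line_number, error
FROM pg_hba_file_rules
WHERE error IS NOT NULL;

-- Gate access to one database with the CONNECT privilege instead of
-- adding per-user lines to pg_hba.conf:
REVOKE CONNECT ON DATABASE db1 FROM PUBLIC;
GRANT CONNECT ON DATABASE db1 TO app_user;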
@@ -760,21 +760,21 @@ local db1,db2,@demodbs all md5 User name maps are defined in the ident map file, which by default is named - pg_ident.confpg_ident.conf + pg_ident.confpg_ident.conf and is stored in the cluster's data directory. (It is possible to place the map file elsewhere, however; see the configuration parameter.) The ident map file contains lines of the general form: -map-name system-username database-username +map-name system-username database-username Comments and whitespace are handled in the same way as in - pg_hba.conf. The - map-name is an arbitrary name that will be used to + pg_hba.conf. The + map-name is an arbitrary name that will be used to refer to this mapping in pg_hba.conf. The other two fields specify an operating system user name and a matching - database user name. The same map-name can be + database user name. The same map-name can be used repeatedly to specify multiple user-mappings within a single map. @@ -788,13 +788,13 @@ local db1,db2,@demodbs all md5 user has requested to connect as. - If the system-username field starts with a slash (/), + If the system-username field starts with a slash (/), the remainder of the field is treated as a regular expression. (See for details of - PostgreSQL's regular expression syntax.) The regular + PostgreSQL's regular expression syntax.) The regular expression can include a single capture, or parenthesized subexpression, - which can then be referenced in the database-username - field as \1 (backslash-one). This allows the mapping of + which can then be referenced in the database-username + field as \1 (backslash-one). This allows the mapping of multiple user names in a single line, which is particularly useful for simple syntax substitutions. For example, these entries @@ -802,14 +802,14 @@ mymap /^(.*)@mydomain\.com$ \1 mymap /^(.*)@otherdomain\.com$ guest will remove the domain part for users with system user names that end with - @mydomain.com, and allow any user whose system name ends with - @otherdomain.com to log in as guest. + @mydomain.com, and allow any user whose system name ends with + @otherdomain.com to log in as guest. Keep in mind that by default, a regular expression can match just part of - a string. It's usually wise to use ^ and $, as + a string. It's usually wise to use ^ and $, as shown in the above example, to force the match to be to the entire system user name. @@ -821,28 +821,28 @@ mymap /^(.*)@otherdomain\.com$ guest SIGHUPSIGHUP signal. If you edit the file on an active system, you will need to signal the postmaster - (using pg_ctl reload or kill -HUP) to make it + (using pg_ctl reload or kill -HUP) to make it re-read the file. A pg_ident.conf file that could be used in - conjunction with the pg_hba.conf file in pg_hba.conf file in is shown in . In this example, anyone logged in to a machine on the 192.168 network that does not have the - operating system user name bryanh, ann, or - robert would not be granted access. Unix user - robert would only be allowed access when he tries to - connect as PostgreSQL user bob, not - as robert or anyone else. ann would - only be allowed to connect as ann. User - bryanh would be allowed to connect as either - bryanh or as guest1. + operating system user name bryanh, ann, or + robert would not be granted access. Unix user + robert would only be allowed access when he tries to + connect as PostgreSQL user bob, not + as robert or anyone else. ann would + only be allowed to connect as ann. 
User + bryanh would be allowed to connect as either + bryanh or as guest1. - An Example <filename>pg_ident.conf</> File + An Example <filename>pg_ident.conf</filename> File # MAPNAME SYSTEM-USERNAME PG-USERNAME @@ -866,21 +866,21 @@ omicron bryanh guest1 Trust Authentication - When trust authentication is specified, + When trust authentication is specified, PostgreSQL assumes that anyone who can connect to the server is authorized to access the database with whatever database user name they specify (even superuser names). - Of course, restrictions made in the database and - user columns still apply. + Of course, restrictions made in the database and + user columns still apply. This method should only be used when there is adequate operating-system-level protection on connections to the server. - trust authentication is appropriate and very + trust authentication is appropriate and very convenient for local connections on a single-user workstation. It - is usually not appropriate by itself on a multiuser - machine. However, you might be able to use trust even + is usually not appropriate by itself on a multiuser + machine. However, you might be able to use trust even on a multiuser machine, if you restrict access to the server's Unix-domain socket file using file-system permissions. To do this, set the unix_socket_permissions (and possibly @@ -895,17 +895,17 @@ omicron bryanh guest1 Setting file-system permissions only helps for Unix-socket connections. Local TCP/IP connections are not restricted by file-system permissions. Therefore, if you want to use file-system permissions for local security, - remove the host ... 127.0.0.1 ... line from - pg_hba.conf, or change it to a - non-trust authentication method. + remove the host ... 127.0.0.1 ... line from + pg_hba.conf, or change it to a + non-trust authentication method. - trust authentication is only suitable for TCP/IP connections + trust authentication is only suitable for TCP/IP connections if you trust every user on every machine that is allowed to connect - to the server by the pg_hba.conf lines that specify - trust. It is seldom reasonable to use trust - for any TCP/IP connections other than those from localhost (127.0.0.1). + to the server by the pg_hba.conf lines that specify + trust. It is seldom reasonable to use trust + for any TCP/IP connections other than those from localhost (127.0.0.1). @@ -914,10 +914,10 @@ omicron bryanh guest1 Password Authentication - MD5 + MD5 - SCRAM + SCRAM password @@ -936,7 +936,7 @@ omicron bryanh guest1 scram-sha-256 - The method scram-sha-256 performs SCRAM-SHA-256 + The method scram-sha-256 performs SCRAM-SHA-256 authentication, as described in RFC 7677. It is a challenge-response scheme that prevents password sniffing on @@ -955,7 +955,7 @@ omicron bryanh guest1 md5 - The method md5 uses a custom less secure challenge-response + The method md5 uses a custom less secure challenge-response mechanism. It prevents password sniffing and avoids storing passwords on the server in plain text but provides no protection if an attacker manages to steal the password hash from the server. Also, the MD5 hash @@ -982,10 +982,10 @@ omicron bryanh guest1 password - The method password sends the password in clear-text and is - therefore vulnerable to password sniffing attacks. It should + The method password sends the password in clear-text and is + therefore vulnerable to password sniffing attacks. It should always be avoided if possible. 
If the connection is protected by SSL - encryption then password can be used safely, though. + encryption then password can be used safely, though. (Though SSL certificate authentication might be a better choice if one is depending on using SSL). @@ -996,7 +996,7 @@ omicron bryanh guest1 PostgreSQL database passwords are separate from operating system user passwords. The password for - each database user is stored in the pg_authid system + each database user is stored in the pg_authid system catalog. Passwords can be managed with the SQL commands and , @@ -1060,7 +1060,7 @@ omicron bryanh guest1 - GSSAPI support has to be enabled when PostgreSQL is built; + GSSAPI support has to be enabled when PostgreSQL is built; see for more information. @@ -1068,13 +1068,13 @@ omicron bryanh guest1 When GSSAPI uses Kerberos, it uses a standard principal in the format - servicename/hostname@realm. + servicename/hostname@realm. The PostgreSQL server will accept any principal that is included in the keytab used by the server, but care needs to be taken to specify the correct principal details when - making the connection from the client using the krbsrvname connection parameter. (See + making the connection from the client using the krbsrvname connection parameter. (See also .) The installation default can be changed from the default postgres at build time using - ./configure --with-krb-srvnam=whatever. + ./configure --with-krb-srvnam=whatever. In most environments, this parameter never needs to be changed. Some Kerberos implementations might require a different service name, @@ -1082,31 +1082,31 @@ omicron bryanh guest1 to be in upper case (POSTGRES). - hostname is the fully qualified host name of the + hostname is the fully qualified host name of the server machine. The service principal's realm is the preferred realm of the server machine. - Client principals can be mapped to different PostgreSQL - database user names with pg_ident.conf. For example, - pgusername@realm could be mapped to just pgusername. - Alternatively, you can use the full username@realm principal as - the role name in PostgreSQL without any mapping. + Client principals can be mapped to different PostgreSQL + database user names with pg_ident.conf. For example, + pgusername@realm could be mapped to just pgusername. + Alternatively, you can use the full username@realm principal as + the role name in PostgreSQL without any mapping. - PostgreSQL also supports a parameter to strip the realm from + PostgreSQL also supports a parameter to strip the realm from the principal. This method is supported for backwards compatibility and is strongly discouraged as it is then impossible to distinguish different users with the same user name but coming from different realms. To enable this, - set include_realm to 0. For simple single-realm + set include_realm to 0. For simple single-realm installations, doing that combined with setting the - krb_realm parameter (which checks that the principal's realm + krb_realm parameter (which checks that the principal's realm matches exactly what is in the krb_realm parameter) is still secure; but this is a less capable approach compared to specifying an explicit mapping in - pg_ident.conf. + pg_ident.conf. @@ -1116,8 +1116,8 @@ omicron bryanh guest1 of the key file is specified by the configuration parameter. The default is - /usr/local/pgsql/etc/krb5.keytab (or whatever - directory was specified as sysconfdir at build time). 
+ /usr/local/pgsql/etc/krb5.keytab (or whatever + directory was specified as sysconfdir at build time). For security reasons, it is recommended to use a separate keytab just for the PostgreSQL server rather than opening up permissions on the system keytab file. @@ -1127,17 +1127,17 @@ omicron bryanh guest1 Kerberos documentation for details. The following example is for MIT-compatible Kerberos 5 implementations: -kadmin% ank -randkey postgres/server.my.domain.org -kadmin% ktadd -k krb5.keytab postgres/server.my.domain.org +kadmin% ank -randkey postgres/server.my.domain.org +kadmin% ktadd -k krb5.keytab postgres/server.my.domain.org When connecting to the database make sure you have a ticket for a principal matching the requested database user name. For example, for - database user name fred, principal - fred@EXAMPLE.COM would be able to connect. To also allow - principal fred/users.example.com@EXAMPLE.COM, use a user name + database user name fred, principal + fred@EXAMPLE.COM would be able to connect. To also allow + principal fred/users.example.com@EXAMPLE.COM, use a user name map, as described in . @@ -1155,8 +1155,8 @@ omicron bryanh guest1 in multi-realm environments unless krb_realm is also used. It is recommended to leave include_realm set to the default (1) and to - provide an explicit mapping in pg_ident.conf to convert - principal names to PostgreSQL user names. + provide an explicit mapping in pg_ident.conf to convert + principal names to PostgreSQL user names. @@ -1236,8 +1236,8 @@ omicron bryanh guest1 in multi-realm environments unless krb_realm is also used. It is recommended to leave include_realm set to the default (1) and to - provide an explicit mapping in pg_ident.conf to convert - principal names to PostgreSQL user names. + provide an explicit mapping in pg_ident.conf to convert + principal names to PostgreSQL user names. @@ -1270,9 +1270,9 @@ omicron bryanh guest1 By default, these two names are identical for new user accounts. - Note that libpq uses the SAM-compatible name if no + Note that libpq uses the SAM-compatible name if no explicit user name is specified. If you use - libpq or a driver based on it, you should + libpq or a driver based on it, you should leave this option disabled or explicitly specify user name in the connection string. @@ -1357,8 +1357,8 @@ omicron bryanh guest1 is to answer questions like What user initiated the connection that goes out of your port X and connects to my port Y?. - Since PostgreSQL knows both X and - Y when a physical connection is established, it + Since PostgreSQL knows both X and + Y when a physical connection is established, it can interrogate the ident server on the host of the connecting client and can theoretically determine the operating system user for any given connection. @@ -1386,9 +1386,9 @@ omicron bryanh guest1 Some ident servers have a nonstandard option that causes the returned user name to be encrypted, using a key that only the originating - machine's administrator knows. This option must not be - used when using the ident server with PostgreSQL, - since PostgreSQL does not have any way to decrypt the + machine's administrator knows. This option must not be + used when using the ident server with PostgreSQL, + since PostgreSQL does not have any way to decrypt the returned string to determine the actual user name. 
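Tying together the password-based methods discussed above (scram-sha-256, md5, password): whether a role's stored secret is an MD5 hash or a SCRAM verifier is determined by the password_encryption parameter at the time the password is set. A minimal sketch; app_user is a placeholder role name:

-- Store a SCRAM-SHA-256 verifier rather than an MD5 hash:
SET password_encryption = 'scram-sha-256';
ALTER ROLE app_user PASSWORD 'new-secret';

-- The verifier (never the plaintext password) is kept in pg_authid;
-- reading it requires superuser rights:
SELECT rolname, rolpassword
FROM pg_authid
WHERE rolname = 'app_user';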
@@ -1424,11 +1424,11 @@ omicron bryanh guest1 Peer authentication is only available on operating systems providing - the getpeereid() function, the SO_PEERCRED + the getpeereid() function, the SO_PEERCRED socket parameter, or similar mechanisms. Currently that includes - Linux, - most flavors of BSD including - macOS, + Linux, + most flavors of BSD including + macOS, and Solaris. @@ -1454,23 +1454,23 @@ omicron bryanh guest1 LDAP authentication can operate in two modes. In the first mode, which we will call the simple bind mode, the server will bind to the distinguished name constructed as - prefix username suffix. - Typically, the prefix parameter is used to specify - cn=, or DOMAIN\ in an Active - Directory environment. suffix is used to specify the + prefix username suffix. + Typically, the prefix parameter is used to specify + cn=, or DOMAIN\ in an Active + Directory environment. suffix is used to specify the remaining part of the DN in a non-Active Directory environment. In the second mode, which we will call the search+bind mode, the server first binds to the LDAP directory with - a fixed user name and password, specified with ldapbinddn - and ldapbindpasswd, and performs a search for the user trying + a fixed user name and password, specified with ldapbinddn + and ldapbindpasswd, and performs a search for the user trying to log in to the database. If no user and password is configured, an anonymous bind will be attempted to the directory. The search will be - performed over the subtree at ldapbasedn, and will try to + performed over the subtree at ldapbasedn, and will try to do an exact match of the attribute specified in - ldapsearchattribute. + ldapsearchattribute. Once the user has been found in this search, the server disconnects and re-binds to the directory as this user, using the password specified by the client, to verify that the @@ -1572,7 +1572,7 @@ omicron bryanh guest1 Attribute to match against the user name in the search when doing search+bind authentication. If no attribute is specified, the - uid attribute will be used. + uid attribute will be used. @@ -1719,11 +1719,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse When using RADIUS authentication, an Access Request message will be sent to the configured RADIUS server. This request will be of type Authenticate Only, and include parameters for - user name, password (encrypted) and - NAS Identifier. The request will be encrypted using + user name, password (encrypted) and + NAS Identifier. The request will be encrypted using a secret shared with the server. The RADIUS server will respond to - this server with either Access Accept or - Access Reject. There is no support for RADIUS accounting. + this server with either Access Accept or + Access Reject. There is no support for RADIUS accounting. @@ -1762,8 +1762,8 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse The encryption vector used will only be cryptographically - strong if PostgreSQL is built with support for - OpenSSL. In other cases, the transmission to the + strong if PostgreSQL is built with support for + OpenSSL. In other cases, the transmission to the RADIUS server should only be considered obfuscated, not secured, and external security measures should be applied if necessary. @@ -1777,7 +1777,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse The port number on the RADIUS servers to connect to. 
If no port - is specified, the default port 1812 will be used. + is specified, the default port 1812 will be used. @@ -1786,12 +1786,12 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse radiusidentifiers - The string used as NAS Identifier in the RADIUS + The string used as NAS Identifier in the RADIUS requests. This parameter can be used as a second parameter identifying for example which database user the user is attempting to authenticate as, which can be used for policy matching on the RADIUS server. If no identifier is specified, the default - postgresql will be used. + postgresql will be used. @@ -1836,11 +1836,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse - In a pg_hba.conf record specifying certificate - authentication, the authentication option clientcert is - assumed to be 1, and it cannot be turned off since a client - certificate is necessary for this method. What the cert - method adds to the basic clientcert certificate validity test + In a pg_hba.conf record specifying certificate + authentication, the authentication option clientcert is + assumed to be 1, and it cannot be turned off since a client + certificate is necessary for this method. What the cert + method adds to the basic clientcert certificate validity test is a check that the cn attribute matches the database user name. @@ -1863,7 +1863,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse exist in the database before PAM can be used for authentication. For more information about PAM, please read the - Linux-PAM Page. + Linux-PAM Page. @@ -1896,7 +1896,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse - If PAM is set up to read /etc/shadow, authentication + If PAM is set up to read /etc/shadow, authentication will fail because the PostgreSQL server is started by a non-root user. However, this is not an issue when PAM is configured to use LDAP or other authentication methods. @@ -1922,11 +1922,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse - BSD Authentication in PostgreSQL uses + BSD Authentication in PostgreSQL uses the auth-postgresql login type and authenticates with the postgresql login class if that's defined in login.conf. By default that login class does not - exist, and PostgreSQL will use the default login class. + exist, and PostgreSQL will use the default login class. diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index b012a26991..aeda826d87 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -70,9 +70,9 @@ (typically eight kilobytes), milliseconds, seconds, or minutes. An unadorned numeric value for one of these settings will use the setting's default unit, which can be learned from - pg_settings.unit. + pg_settings.unit. For convenience, settings can be given with a unit specified explicitly, - for example '120 ms' for a time value, and they will be + for example '120 ms' for a time value, and they will be converted to whatever the parameter's actual unit is. Note that the value must be written as a string (with quotes) to use this feature. The unit name is case-sensitive, and there can be whitespace between @@ -105,7 +105,7 @@ Enumerated-type parameters are written in the same way as string parameters, but are restricted to have one of a limited set of values. The values allowable for such a parameter can be found from - pg_settings.enumvals. + pg_settings.enumvals. 
Enum parameter values are case-insensitive. @@ -117,7 +117,7 @@ The most fundamental way to set these parameters is to edit the file - postgresql.confpostgresql.conf, + postgresql.confpostgresql.conf, which is normally kept in the data directory. A default copy is installed when the database cluster directory is initialized. An example of what this file might look like is: @@ -150,8 +150,8 @@ shared_buffers = 128MB SIGHUP The configuration file is reread whenever the main server process - receives a SIGHUP signal; this signal is most easily - sent by running pg_ctl reload from the command line or by + receives a SIGHUP signal; this signal is most easily + sent by running pg_ctl reload from the command line or by calling the SQL function pg_reload_conf(). The main server process also propagates this signal to all currently running server processes, so that existing sessions also adopt the new values @@ -161,26 +161,26 @@ shared_buffers = 128MB can only be set at server start; any changes to their entries in the configuration file will be ignored until the server is restarted. Invalid parameter settings in the configuration file are likewise - ignored (but logged) during SIGHUP processing. + ignored (but logged) during SIGHUP processing. - In addition to postgresql.conf, + In addition to postgresql.conf, a PostgreSQL data directory contains a file - postgresql.auto.confpostgresql.auto.conf, - which has the same format as postgresql.conf but should + postgresql.auto.confpostgresql.auto.conf, + which has the same format as postgresql.conf but should never be edited manually. This file holds settings provided through the command. This file is automatically - read whenever postgresql.conf is, and its settings take - effect in the same way. Settings in postgresql.auto.conf - override those in postgresql.conf. + read whenever postgresql.conf is, and its settings take + effect in the same way. Settings in postgresql.auto.conf + override those in postgresql.conf. The system view pg_file_settings can be helpful for pre-testing changes to the configuration file, or for - diagnosing problems if a SIGHUP signal did not have the + diagnosing problems if a SIGHUP signal did not have the desired effects. @@ -193,7 +193,7 @@ shared_buffers = 128MB commands to establish configuration defaults. The already-mentioned command provides a SQL-accessible means of changing global defaults; it is - functionally equivalent to editing postgresql.conf. + functionally equivalent to editing postgresql.conf. In addition, there are two commands that allow setting of defaults on a per-database or per-role basis: @@ -215,7 +215,7 @@ shared_buffers = 128MB - Values set with ALTER DATABASE and ALTER ROLE + Values set with ALTER DATABASE and ALTER ROLE are applied only when starting a fresh database session. They override values obtained from the configuration files or server command line, and constitute defaults for the rest of the session. @@ -224,7 +224,7 @@ shared_buffers = 128MB - Once a client is connected to the database, PostgreSQL + Once a client is connected to the database, PostgreSQL provides two additional SQL commands (and equivalent functions) to interact with session-local configuration settings: @@ -251,14 +251,14 @@ shared_buffers = 128MB In addition, the system view pg_settings can be + linkend="view-pg-settings">pg_settings can be used to view and change session-local values: - Querying this view is similar to using SHOW ALL but + Querying this view is similar to using SHOW ALL but provides more detail. 
It is also more flexible, since it's possible to specify filter conditions or join against other relations. @@ -267,8 +267,8 @@ shared_buffers = 128MB Using on this view, specifically - updating the setting column, is the equivalent - of issuing SET commands. For example, the equivalent of + updating the setting column, is the equivalent + of issuing SET commands. For example, the equivalent of SET configuration_parameter TO DEFAULT; @@ -289,7 +289,7 @@ UPDATE pg_settings SET setting = reset_val WHERE name = 'configuration_parameter In addition to setting global defaults or attaching overrides at the database or role level, you can pass settings to PostgreSQL via shell facilities. - Both the server and libpq client library + Both the server and libpq client library accept parameter values via the shell. @@ -298,26 +298,26 @@ UPDATE pg_settings SET setting = reset_val WHERE name = 'configuration_parameter During server startup, parameter settings can be passed to the postgres command via the - command-line parameter. For example, postgres -c log_connections=yes -c log_destination='syslog' Settings provided in this way override those set via - postgresql.conf or ALTER SYSTEM, + postgresql.conf or ALTER SYSTEM, so they cannot be changed globally without restarting the server. - When starting a client session via libpq, + When starting a client session via libpq, parameter settings can be specified using the PGOPTIONS environment variable. Settings established in this way constitute defaults for the life of the session, but do not affect other sessions. For historical reasons, the format of PGOPTIONS is similar to that used when launching the postgres - command; specifically, the flag must be specified. For example, env PGOPTIONS="-c geqo=off -c statement_timeout=5min" psql @@ -338,20 +338,20 @@ env PGOPTIONS="-c geqo=off -c statement_timeout=5min" psql Managing Configuration File Contents - PostgreSQL provides several features for breaking - down complex postgresql.conf files into sub-files. + PostgreSQL provides several features for breaking + down complex postgresql.conf files into sub-files. These features are especially useful when managing multiple servers with related, but not identical, configurations. - include + include in configuration file In addition to individual parameter settings, - the postgresql.conf file can contain include - directives, which specify another file to read and process as if + the postgresql.conf file can contain include + directives, which specify another file to read and process as if it were inserted into the configuration file at this point. This feature allows a configuration file to be divided into physically separate parts. Include directives simply look like: @@ -365,23 +365,23 @@ include 'filename' - include_if_exists + include_if_exists in configuration file - There is also an include_if_exists directive, which acts - the same as the include directive, except + There is also an include_if_exists directive, which acts + the same as the include directive, except when the referenced file does not exist or cannot be read. A regular - include will consider this an error condition, but - include_if_exists merely logs a message and continues + include will consider this an error condition, but + include_if_exists merely logs a message and continues processing the referencing configuration file. 
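Whether an include or include_if_exists directive actually picked up a file can be verified without a restart, using the pg_file_settings view mentioned earlier. A minimal sketch:

-- One row per setting per configuration file read, showing which
-- file supplied it, whether it was applied, and any parse error:
SELECT sourcefile, sourceline, name, setting, applied, error
FROM pg_file_settings
ORDER BY sourcefile, sourceline;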
- include_dir + include_dir in configuration file - The postgresql.conf file can also contain + The postgresql.conf file can also contain include_dir directives, which specify an entire directory of configuration files to include. These look like @@ -401,36 +401,36 @@ include_dir 'directory' Include files or directories can be used to logically separate portions of the database configuration, rather than having a single large - postgresql.conf file. Consider a company that has two + postgresql.conf file. Consider a company that has two database servers, each with a different amount of memory. There are likely elements of the configuration both will share, for things such as logging. But memory-related parameters on the server will vary between the two. And there might be server specific customizations, too. One way to manage this situation is to break the custom configuration changes for your site into three files. You could add - this to the end of your postgresql.conf file to include + this to the end of your postgresql.conf file to include them: include 'shared.conf' include 'memory.conf' include 'server.conf' - All systems would have the same shared.conf. Each + All systems would have the same shared.conf. Each server with a particular amount of memory could share the - same memory.conf; you might have one for all servers + same memory.conf; you might have one for all servers with 8GB of RAM, another for those having 16GB. And - finally server.conf could have truly server-specific + finally server.conf could have truly server-specific configuration information in it. Another possibility is to create a configuration file directory and - put this information into files there. For example, a conf.d - directory could be referenced at the end of postgresql.conf: + put this information into files there. For example, a conf.d + directory could be referenced at the end of postgresql.conf: include_dir 'conf.d' - Then you could name the files in the conf.d directory + Then you could name the files in the conf.d directory like this: 00shared.conf @@ -441,8 +441,8 @@ include_dir 'conf.d' files will be loaded. This is important because only the last setting encountered for a particular parameter while the server is reading configuration files will be used. In this example, - something set in conf.d/02server.conf would override a - value set in conf.d/01memory.conf. + something set in conf.d/02server.conf would override a + value set in conf.d/01memory.conf. @@ -483,7 +483,7 @@ include_dir 'conf.d' data_directory (string) - data_directory configuration parameter + data_directory configuration parameter @@ -497,13 +497,13 @@ include_dir 'conf.d' config_file (string) - config_file configuration parameter + config_file configuration parameter Specifies the main server configuration file - (customarily called postgresql.conf). + (customarily called postgresql.conf). This parameter can only be set on the postgres command line. @@ -512,13 +512,13 @@ include_dir 'conf.d' hba_file (string) - hba_file configuration parameter + hba_file configuration parameter Specifies the configuration file for host-based authentication - (customarily called pg_hba.conf). + (customarily called pg_hba.conf). This parameter can only be set at server start. @@ -527,13 +527,13 @@ include_dir 'conf.d' ident_file (string) - ident_file configuration parameter + ident_file configuration parameter Specifies the configuration file for user name mapping - (customarily called pg_ident.conf). + (customarily called pg_ident.conf). 
This parameter can only be set at server start. See also . @@ -543,7 +543,7 @@ include_dir 'conf.d' external_pid_file (string) - external_pid_file configuration parameter + external_pid_file configuration parameter @@ -569,10 +569,10 @@ include_dir 'conf.d' data directory, the postgres command-line option or PGDATA environment variable must point to the directory containing the configuration files, - and the data_directory parameter must be set in + and the data_directory parameter must be set in postgresql.conf (or on the command line) to show where the data directory is actually located. Notice that - data_directory overrides and + data_directory overrides and PGDATA for the location of the data directory, but not for the location of the configuration files. @@ -580,12 +580,12 @@ include_dir 'conf.d' If you wish, you can specify the configuration file names and locations - individually using the parameters config_file, - hba_file and/or ident_file. - config_file can only be specified on the + individually using the parameters config_file, + hba_file and/or ident_file. + config_file can only be specified on the postgres command line, but the others can be set within the main configuration file. If all three parameters plus - data_directory are explicitly set, then it is not necessary + data_directory are explicitly set, then it is not necessary to specify or PGDATA. @@ -607,7 +607,7 @@ include_dir 'conf.d' listen_addresses (string) - listen_addresses configuration parameter + listen_addresses configuration parameter @@ -615,15 +615,15 @@ include_dir 'conf.d' Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications. The value takes the form of a comma-separated list of host names - and/or numeric IP addresses. The special entry * + and/or numeric IP addresses. The special entry * corresponds to all available IP interfaces. The entry - 0.0.0.0 allows listening for all IPv4 addresses and - :: allows listening for all IPv6 addresses. + 0.0.0.0 allows listening for all IPv4 addresses and + :: allows listening for all IPv6 addresses. If the list is empty, the server does not listen on any IP interface at all, in which case only Unix-domain sockets can be used to connect to it. - The default value is localhost, - which allows only local TCP/IP loopback connections to be + The default value is localhost, + which allows only local TCP/IP loopback connections to be made. While client authentication () allows fine-grained control over who can access the server, listen_addresses @@ -638,7 +638,7 @@ include_dir 'conf.d' port (integer) - port configuration parameter + port configuration parameter @@ -653,7 +653,7 @@ include_dir 'conf.d' max_connections (integer) - max_connections configuration parameter + max_connections configuration parameter @@ -661,7 +661,7 @@ include_dir 'conf.d' Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections, but might be less if your kernel settings will not support it (as - determined during initdb). This parameter can + determined during initdb). This parameter can only be set at server start. @@ -678,17 +678,17 @@ include_dir 'conf.d' superuser_reserved_connections (integer) - superuser_reserved_connections configuration parameter + superuser_reserved_connections configuration parameter Determines the number of connection slots that - are reserved for connections by PostgreSQL + are reserved for connections by PostgreSQL superusers. 
At most connections can ever be active simultaneously. Whenever the number of active concurrent connections is at least - max_connections minus + max_connections minus superuser_reserved_connections, new connections will be accepted only for superusers, and no new replication connections will be accepted. @@ -705,7 +705,7 @@ include_dir 'conf.d' unix_socket_directories (string) - unix_socket_directories configuration parameter + unix_socket_directories configuration parameter @@ -726,10 +726,10 @@ include_dir 'conf.d' In addition to the socket file itself, which is named - .s.PGSQL.nnnn where - nnnn is the server's port number, an ordinary file - named .s.PGSQL.nnnn.lock will be - created in each of the unix_socket_directories directories. + .s.PGSQL.nnnn where + nnnn is the server's port number, an ordinary file + named .s.PGSQL.nnnn.lock will be + created in each of the unix_socket_directories directories. Neither file should ever be removed manually. @@ -743,7 +743,7 @@ include_dir 'conf.d' unix_socket_group (string) - unix_socket_group configuration parameter + unix_socket_group configuration parameter @@ -768,7 +768,7 @@ include_dir 'conf.d' unix_socket_permissions (integer) - unix_socket_permissions configuration parameter + unix_socket_permissions configuration parameter @@ -804,7 +804,7 @@ include_dir 'conf.d' This parameter is irrelevant on systems, notably Solaris as of Solaris 10, that ignore socket permissions entirely. There, one can achieve a - similar effect by pointing unix_socket_directories to a + similar effect by pointing unix_socket_directories to a directory having search permission limited to the desired audience. This parameter is also irrelevant on Windows, which does not have Unix-domain sockets. @@ -815,7 +815,7 @@ include_dir 'conf.d' bonjour (boolean) - bonjour configuration parameter + bonjour configuration parameter @@ -830,14 +830,14 @@ include_dir 'conf.d' bonjour_name (string) - bonjour_name configuration parameter + bonjour_name configuration parameter Specifies the Bonjour service name. The computer name is used if this parameter is set to the - empty string '' (which is the default). This parameter is + empty string '' (which is the default). This parameter is ignored if the server was not compiled with Bonjour support. This parameter can only be set at server start. @@ -848,7 +848,7 @@ include_dir 'conf.d' tcp_keepalives_idle (integer) - tcp_keepalives_idle configuration parameter + tcp_keepalives_idle configuration parameter @@ -857,7 +857,7 @@ include_dir 'conf.d' should send a keepalive message to the client. A value of 0 uses the system default. This parameter is supported only on systems that support - TCP_KEEPIDLE or an equivalent socket option, and on + TCP_KEEPIDLE or an equivalent socket option, and on Windows; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -874,7 +874,7 @@ include_dir 'conf.d' tcp_keepalives_interval (integer) - tcp_keepalives_interval configuration parameter + tcp_keepalives_interval configuration parameter @@ -883,7 +883,7 @@ include_dir 'conf.d' that is not acknowledged by the client should be retransmitted. A value of 0 uses the system default. This parameter is supported only on systems that support - TCP_KEEPINTVL or an equivalent socket option, and on + TCP_KEEPINTVL or an equivalent socket option, and on Windows; on other systems, it must be zero. 
In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -900,7 +900,7 @@ include_dir 'conf.d' tcp_keepalives_count (integer) - tcp_keepalives_count configuration parameter + tcp_keepalives_count configuration parameter @@ -909,7 +909,7 @@ include_dir 'conf.d' the server's connection to the client is considered dead. A value of 0 uses the system default. This parameter is supported only on systems that support - TCP_KEEPCNT or an equivalent socket option; + TCP_KEEPCNT or an equivalent socket option; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -930,10 +930,10 @@ include_dir 'conf.d' authentication_timeout (integer) - timeoutclient authentication - client authenticationtimeout during + timeoutclient authentication + client authenticationtimeout during - authentication_timeout configuration parameter + authentication_timeout configuration parameter @@ -943,8 +943,8 @@ include_dir 'conf.d' would-be client has not completed the authentication protocol in this much time, the server closes the connection. This prevents hung clients from occupying a connection indefinitely. - The default is one minute (1m). - This parameter can only be set in the postgresql.conf + The default is one minute (1m). + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -953,16 +953,16 @@ include_dir 'conf.d' ssl (boolean) - ssl configuration parameter + ssl configuration parameter - Enables SSL connections. Please read + Enables SSL connections. Please read before using this. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default is off. + The default is off. @@ -970,7 +970,7 @@ include_dir 'conf.d' ssl_ca_file (string) - ssl_ca_file configuration parameter + ssl_ca_file configuration parameter @@ -978,7 +978,7 @@ include_dir 'conf.d' Specifies the name of the file containing the SSL server certificate authority (CA). Relative paths are relative to the data directory. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is empty, meaning no CA file is loaded, and client certificate verification is not performed. @@ -989,14 +989,14 @@ include_dir 'conf.d' ssl_cert_file (string) - ssl_cert_file configuration parameter + ssl_cert_file configuration parameter Specifies the name of the file containing the SSL server certificate. Relative paths are relative to the data directory. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is server.crt. @@ -1006,7 +1006,7 @@ include_dir 'conf.d' ssl_crl_file (string) - ssl_crl_file configuration parameter + ssl_crl_file configuration parameter @@ -1014,7 +1014,7 @@ include_dir 'conf.d' Specifies the name of the file containing the SSL server certificate revocation list (CRL). Relative paths are relative to the data directory. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is empty, meaning no CRL file is loaded. 
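As an illustration, the SSL file settings above might be combined like this in postgresql.conf (the file names are examples only; relative paths are resolved against the data directory, and the server key file is described just below):

    ssl = on
    ssl_cert_file = 'server.crt'    # server certificate
    ssl_ca_file = 'root.crt'        # CA used to verify client certificates
    ssl_crl_file = 'root.crl'       # optional list of revoked certificates
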
@@ -1024,14 +1024,14 @@ include_dir 'conf.d' ssl_key_file (string) - ssl_key_file configuration parameter + ssl_key_file configuration parameter Specifies the name of the file containing the SSL server private key. Relative paths are relative to the data directory. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is server.key. @@ -1041,19 +1041,19 @@ include_dir 'conf.d' ssl_ciphers (string) - ssl_ciphers configuration parameter + ssl_ciphers configuration parameter - Specifies a list of SSL cipher suites that are allowed to be + Specifies a list of SSL cipher suites that are allowed to be used on secure connections. See - the ciphers manual page - in the OpenSSL package for the syntax of this setting + the ciphers manual page + in the OpenSSL package for the syntax of this setting and a list of supported values. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default value is HIGH:MEDIUM:+3DES:!aNULL. The + The default value is HIGH:MEDIUM:+3DES:!aNULL. The default is usually a reasonable choice unless you have specific security requirements. @@ -1065,7 +1065,7 @@ include_dir 'conf.d' HIGH - Cipher suites that use ciphers from HIGH group (e.g., + Cipher suites that use ciphers from HIGH group (e.g., AES, Camellia, 3DES) @@ -1075,7 +1075,7 @@ include_dir 'conf.d' MEDIUM - Cipher suites that use ciphers from MEDIUM group + Cipher suites that use ciphers from MEDIUM group (e.g., RC4, SEED) @@ -1085,11 +1085,11 @@ include_dir 'conf.d' +3DES - The OpenSSL default order for HIGH is problematic + The OpenSSL default order for HIGH is problematic because it orders 3DES higher than AES128. This is wrong because 3DES offers less security than AES128, and it is also much - slower. +3DES reorders it after all other - HIGH and MEDIUM ciphers. + slower. +3DES reorders it after all other + HIGH and MEDIUM ciphers. @@ -1111,7 +1111,7 @@ include_dir 'conf.d' Available cipher suite details will vary across OpenSSL versions. Use the command openssl ciphers -v 'HIGH:MEDIUM:+3DES:!aNULL' to - see actual details for the currently installed OpenSSL + see actual details for the currently installed OpenSSL version. Note that this list is filtered at run time based on the server key type. @@ -1121,16 +1121,16 @@ include_dir 'conf.d' ssl_prefer_server_ciphers (boolean) - ssl_prefer_server_ciphers configuration parameter + ssl_prefer_server_ciphers configuration parameter Specifies whether to use the server's SSL cipher preferences, rather than the client's. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default is true. + The default is true. @@ -1146,28 +1146,28 @@ include_dir 'conf.d' ssl_ecdh_curve (string) - ssl_ecdh_curve configuration parameter + ssl_ecdh_curve configuration parameter - Specifies the name of the curve to use in ECDH key + Specifies the name of the curve to use in ECDH key exchange. It needs to be supported by all clients that connect. It does not need to be the same curve used by the server's Elliptic Curve key. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default is prime256v1. + The default is prime256v1. 
OpenSSL names for the most common curves are: - prime256v1 (NIST P-256), - secp384r1 (NIST P-384), - secp521r1 (NIST P-521). + prime256v1 (NIST P-256), + secp384r1 (NIST P-384), + secp521r1 (NIST P-521). The full list of available curves can be shown with the command openssl ecparam -list_curves. Not all of them - are usable in TLS though. + are usable in TLS though. @@ -1175,17 +1175,17 @@ include_dir 'conf.d' password_encryption (enum) - password_encryption configuration parameter + password_encryption configuration parameter When a password is specified in or , this parameter determines the algorithm - to use to encrypt the password. The default value is md5, - which stores the password as an MD5 hash (on is also - accepted, as alias for md5). Setting this parameter to - scram-sha-256 will encrypt the password with SCRAM-SHA-256. + to use to encrypt the password. The default value is md5, + which stores the password as an MD5 hash (on is also + accepted, as alias for md5). Setting this parameter to + scram-sha-256 will encrypt the password with SCRAM-SHA-256. Note that older clients might lack support for the SCRAM authentication @@ -1198,7 +1198,7 @@ include_dir 'conf.d' ssl_dh_params_file (string) - ssl_dh_params_file configuration parameter + ssl_dh_params_file configuration parameter @@ -1213,7 +1213,7 @@ include_dir 'conf.d' - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1222,7 +1222,7 @@ include_dir 'conf.d' krb_server_keyfile (string) - krb_server_keyfile configuration parameter + krb_server_keyfile configuration parameter @@ -1230,7 +1230,7 @@ include_dir 'conf.d' Sets the location of the Kerberos server key file. See for details. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -1245,8 +1245,8 @@ include_dir 'conf.d' Sets whether GSSAPI user names should be treated case-insensitively. - The default is off (case sensitive). This parameter can only be - set in the postgresql.conf file or on the server command line. + The default is off (case sensitive). This parameter can only be + set in the postgresql.conf file or on the server command line. @@ -1254,43 +1254,43 @@ include_dir 'conf.d' db_user_namespace (boolean) - db_user_namespace configuration parameter + db_user_namespace configuration parameter This parameter enables per-database user names. It is off by default. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - If this is on, you should create users as username@dbname. - When username is passed by a connecting client, - @ and the database name are appended to the user + If this is on, you should create users as username@dbname. + When username is passed by a connecting client, + @ and the database name are appended to the user name and that database-specific user name is looked up by the server. Note that when you create users with names containing - @ within the SQL environment, you will need to + @ within the SQL environment, you will need to quote the user name. With this parameter enabled, you can still create ordinary global - users. Simply append @ when specifying the user - name in the client, e.g. joe@. The @ + users. Simply append @ when specifying the user + name in the client, e.g. joe@. The @ will be stripped off before the user name is looked up by the server. 
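A minimal sketch of this behavior, with made-up role and database names:

    # With per-database user names enabled, a client connecting to
    # database "mydb" as "joe" is looked up as the role "joe@mydb",
    # while connecting as "joe@" matches the global role "joe".
    db_user_namespace = on
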
- db_user_namespace causes the client's and + db_user_namespace causes the client's and server's user name representation to differ. Authentication checks are always done with the server's user name so authentication methods must be configured for the server's user name, not the client's. Because - md5 uses the user name as salt on both the - client and server, md5 cannot be used with - db_user_namespace. + md5 uses the user name as salt on both the + client and server, md5 cannot be used with + db_user_namespace. @@ -1317,15 +1317,15 @@ include_dir 'conf.d' shared_buffers (integer) - shared_buffers configuration parameter + shared_buffers configuration parameter Sets the amount of memory the database server uses for shared memory buffers. The default is typically 128 megabytes - (128MB), but might be less if your kernel settings will - not support it (as determined during initdb). + (128MB), but might be less if your kernel settings will + not support it (as determined during initdb). This setting must be at least 128 kilobytes. (Non-default values of BLCKSZ change the minimum.) However, settings significantly higher than the minimum are usually needed @@ -1358,7 +1358,7 @@ include_dir 'conf.d' huge_pages (enum) - huge_pages configuration parameter + huge_pages configuration parameter @@ -1392,7 +1392,7 @@ include_dir 'conf.d' temp_buffers (integer) - temp_buffers configuration parameter + temp_buffers configuration parameter @@ -1400,7 +1400,7 @@ include_dir 'conf.d' Sets the maximum number of temporary buffers used by each database session. These are session-local buffers used only for access to temporary tables. The default is eight megabytes - (8MB). The setting can be changed within individual + (8MB). The setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session. @@ -1408,10 +1408,10 @@ include_dir 'conf.d' A session will allocate temporary buffers as needed up to the limit - given by temp_buffers. The cost of setting a large + given by temp_buffers. The cost of setting a large value in sessions that do not actually need many temporary buffers is only a buffer descriptor, or about 64 bytes, per - increment in temp_buffers. However if a buffer is + increment in temp_buffers. However if a buffer is actually used an additional 8192 bytes will be consumed for it (or in general, BLCKSZ bytes). @@ -1421,13 +1421,13 @@ include_dir 'conf.d' max_prepared_transactions (integer) - max_prepared_transactions configuration parameter + max_prepared_transactions configuration parameter Sets the maximum number of transactions that can be in the - prepared state simultaneously (see prepared state simultaneously (see ). Setting this parameter to zero (which is the default) disables the prepared-transaction feature. @@ -1454,14 +1454,14 @@ include_dir 'conf.d' work_mem (integer) - work_mem configuration parameter + work_mem configuration parameter Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The value - defaults to four megabytes (4MB). + defaults to four megabytes (4MB). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary @@ -1469,10 +1469,10 @@ include_dir 'conf.d' concurrently. 
Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are - used for ORDER BY, DISTINCT, and + used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and - hash-based processing of IN subqueries. + hash-based processing of IN subqueries. @@ -1480,15 +1480,15 @@ include_dir 'conf.d' maintenance_work_mem (integer) - maintenance_work_mem configuration parameter + maintenance_work_mem configuration parameter Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE - INDEX, and ALTER TABLE ADD FOREIGN KEY. It defaults - to 64 megabytes (64MB). Since only one of these + INDEX, and ALTER TABLE ADD FOREIGN KEY. It defaults + to 64 megabytes (64MB). Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger @@ -1508,7 +1508,7 @@ include_dir 'conf.d' autovacuum_work_mem (integer) - autovacuum_work_mem configuration parameter + autovacuum_work_mem configuration parameter @@ -1525,26 +1525,26 @@ include_dir 'conf.d' max_stack_depth (integer) - max_stack_depth configuration parameter + max_stack_depth configuration parameter Specifies the maximum safe depth of the server's execution stack. The ideal setting for this parameter is the actual stack size limit - enforced by the kernel (as set by ulimit -s or local + enforced by the kernel (as set by ulimit -s or local equivalent), less a safety margin of a megabyte or so. The safety margin is needed because the stack depth is not checked in every routine in the server, but only in key potentially-recursive routines such as expression evaluation. The default setting is two - megabytes (2MB), which is conservatively small and + megabytes (2MB), which is conservatively small and unlikely to risk crashes. However, it might be too small to allow execution of complex functions. Only superusers can change this setting. - Setting max_stack_depth higher than + Setting max_stack_depth higher than the actual kernel limit will mean that a runaway recursive function can crash an individual backend process. On platforms where PostgreSQL can determine the kernel limit, @@ -1558,25 +1558,25 @@ include_dir 'conf.d' dynamic_shared_memory_type (enum) - dynamic_shared_memory_type configuration parameter + dynamic_shared_memory_type configuration parameter Specifies the dynamic shared memory implementation that the server - should use. Possible values are posix (for POSIX shared - memory allocated using shm_open), sysv - (for System V shared memory allocated via shmget), - windows (for Windows shared memory), mmap + should use. Possible values are posix (for POSIX shared + memory allocated using shm_open), sysv + (for System V shared memory allocated via shmget), + windows (for Windows shared memory), mmap (to simulate shared memory using memory-mapped files stored in the - data directory), and none (to disable this feature). + data directory), and none (to disable this feature). Not all values are supported on all platforms; the first supported option is the default for that platform. 
The use of the - mmap option, which is not the default on any platform, + mmap option, which is not the default on any platform, is generally discouraged because the operating system may write modified pages back to disk repeatedly, increasing system I/O load; however, it may be useful for debugging, when the - pg_dynshmem directory is stored on a RAM disk, or when + pg_dynshmem directory is stored on a RAM disk, or when other shared memory facilities are not available. @@ -1592,7 +1592,7 @@ include_dir 'conf.d' temp_file_limit (integer) - temp_file_limit configuration parameter + temp_file_limit configuration parameter @@ -1601,13 +1601,13 @@ include_dir 'conf.d' for temporary files, such as sort and hash temporary files, or the storage file for a held cursor. A transaction attempting to exceed this limit will be canceled. - The value is specified in kilobytes, and -1 (the + The value is specified in kilobytes, and -1 (the default) means no limit. Only superusers can change this setting. This setting constrains the total space used at any instant by all - temporary files used by a given PostgreSQL process. + temporary files used by a given PostgreSQL process. It should be noted that disk space used for explicit temporary tables, as opposed to temporary files used behind-the-scenes in query execution, does not count against this limit. @@ -1625,7 +1625,7 @@ include_dir 'conf.d' max_files_per_process (integer) - max_files_per_process configuration parameter + max_files_per_process configuration parameter @@ -1637,7 +1637,7 @@ include_dir 'conf.d' allow individual processes to open many more files than the system can actually support if many processes all try to open that many files. If you find yourself seeing Too many open - files failures, try reducing this setting. + files failures, try reducing this setting. This parameter can only be set at server start. @@ -1684,7 +1684,7 @@ include_dir 'conf.d' vacuum_cost_delay (integer) - vacuum_cost_delay configuration parameter + vacuum_cost_delay configuration parameter @@ -1702,7 +1702,7 @@ include_dir 'conf.d' When using cost-based vacuuming, appropriate values for - vacuum_cost_delay are usually quite small, perhaps + vacuum_cost_delay are usually quite small, perhaps 10 or 20 milliseconds. Adjusting vacuum's resource consumption is best done by changing the other vacuum cost parameters. @@ -1712,7 +1712,7 @@ include_dir 'conf.d' vacuum_cost_page_hit (integer) - vacuum_cost_page_hit configuration parameter + vacuum_cost_page_hit configuration parameter @@ -1728,7 +1728,7 @@ include_dir 'conf.d' vacuum_cost_page_miss (integer) - vacuum_cost_page_miss configuration parameter + vacuum_cost_page_miss configuration parameter @@ -1744,7 +1744,7 @@ include_dir 'conf.d' vacuum_cost_page_dirty (integer) - vacuum_cost_page_dirty configuration parameter + vacuum_cost_page_dirty configuration parameter @@ -1760,7 +1760,7 @@ include_dir 'conf.d' vacuum_cost_limit (integer) - vacuum_cost_limit configuration parameter + vacuum_cost_limit configuration parameter @@ -1792,8 +1792,8 @@ include_dir 'conf.d' There is a separate server - process called the background writer, whose function - is to issue writes of dirty (new or modified) shared + process called the background writer, whose function + is to issue writes of dirty (new or modified) shared buffers. It writes shared buffers so server processes handling user queries seldom or never need to wait for a write to occur. 
However, the background writer does cause a net overall @@ -1808,7 +1808,7 @@ include_dir 'conf.d' bgwriter_delay (integer) - bgwriter_delay configuration parameter + bgwriter_delay configuration parameter @@ -1816,16 +1816,16 @@ include_dir 'conf.d' Specifies the delay between activity rounds for the background writer. In each round the writer issues writes for some number of dirty buffers (controllable by the - following parameters). It then sleeps for bgwriter_delay + following parameters). It then sleeps for bgwriter_delay milliseconds, and repeats. When there are no dirty buffers in the buffer pool, though, it goes into a longer sleep regardless of - bgwriter_delay. The default value is 200 - milliseconds (200ms). Note that on many systems, the + bgwriter_delay. The default value is 200 + milliseconds (200ms). Note that on many systems, the effective resolution of sleep delays is 10 milliseconds; setting - bgwriter_delay to a value that is not a multiple of 10 + bgwriter_delay to a value that is not a multiple of 10 might have the same results as setting it to the next higher multiple of 10. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -1833,7 +1833,7 @@ include_dir 'conf.d' bgwriter_lru_maxpages (integer) - bgwriter_lru_maxpages configuration parameter + bgwriter_lru_maxpages configuration parameter @@ -1843,7 +1843,7 @@ include_dir 'conf.d' background writing. (Note that checkpoints, which are managed by a separate, dedicated auxiliary process, are unaffected.) The default value is 100 buffers. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1852,7 +1852,7 @@ include_dir 'conf.d' bgwriter_lru_multiplier (floating point) - bgwriter_lru_multiplier configuration parameter + bgwriter_lru_multiplier configuration parameter @@ -1860,18 +1860,18 @@ include_dir 'conf.d' The number of dirty buffers written in each round is based on the number of new buffers that have been needed by server processes during recent rounds. The average recent need is multiplied by - bgwriter_lru_multiplier to arrive at an estimate of the + bgwriter_lru_multiplier to arrive at an estimate of the number of buffers that will be needed during the next round. Dirty buffers are written until there are that many clean, reusable buffers - available. (However, no more than bgwriter_lru_maxpages + available. (However, no more than bgwriter_lru_maxpages buffers will be written per round.) - Thus, a setting of 1.0 represents a just in time policy + Thus, a setting of 1.0 represents a just in time policy of writing exactly the number of buffers predicted to be needed. Larger values provide some cushion against spikes in demand, while smaller values intentionally leave writes to be done by server processes. The default is 2.0. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1880,7 +1880,7 @@ include_dir 'conf.d' bgwriter_flush_after (integer) - bgwriter_flush_after configuration parameter + bgwriter_flush_after configuration parameter @@ -1897,10 +1897,10 @@ include_dir 'conf.d' cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, and - 2MB. The default is 512kB on Linux, - 0 elsewhere. (If BLCKSZ is not 8kB, + 2MB. 
The default is 512kB on Linux, + 0 elsewhere. (If BLCKSZ is not 8kB, the default and maximum values scale proportionally to it.) - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1923,15 +1923,15 @@ include_dir 'conf.d' effective_io_concurrency (integer) - effective_io_concurrency configuration parameter + effective_io_concurrency configuration parameter Sets the number of concurrent disk I/O operations that - PostgreSQL expects can be executed + PostgreSQL expects can be executed simultaneously. Raising this value will increase the number of I/O - operations that any individual PostgreSQL session + operations that any individual PostgreSQL session attempts to initiate in parallel. The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans. @@ -1951,7 +1951,7 @@ include_dir 'conf.d' - Asynchronous I/O depends on an effective posix_fadvise + Asynchronous I/O depends on an effective posix_fadvise function, which some operating systems lack. If the function is not present then setting this parameter to anything but zero will result in an error. On some operating systems (e.g., Solaris), the function @@ -1970,7 +1970,7 @@ include_dir 'conf.d' max_worker_processes (integer) - max_worker_processes configuration parameter + max_worker_processes configuration parameter @@ -1997,7 +1997,7 @@ include_dir 'conf.d' max_parallel_workers_per_gather (integer) - max_parallel_workers_per_gather configuration parameter + max_parallel_workers_per_gather configuration parameter @@ -2021,7 +2021,7 @@ include_dir 'conf.d' account when choosing a value for this setting, as well as when configuring other settings that control resource utilization, such as . Resource limits such as - work_mem are applied individually to each worker, + work_mem are applied individually to each worker, which means the total utilization may be much higher across all processes than it would normally be for any single process. For example, a parallel query using 4 workers may use up to 5 times @@ -2039,7 +2039,7 @@ include_dir 'conf.d' max_parallel_workers (integer) - max_parallel_workers configuration parameter + max_parallel_workers configuration parameter @@ -2059,7 +2059,7 @@ include_dir 'conf.d' backend_flush_after (integer) - backend_flush_after configuration parameter + backend_flush_after configuration parameter @@ -2076,7 +2076,7 @@ include_dir 'conf.d' than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, - and 2MB. The default is 0, i.e., no + and 2MB. The default is 0, i.e., no forced writeback. (If BLCKSZ is not 8kB, the maximum value scales proportionally to it.) @@ -2086,13 +2086,13 @@ include_dir 'conf.d' old_snapshot_threshold (integer) - old_snapshot_threshold configuration parameter + old_snapshot_threshold configuration parameter Sets the minimum time that a snapshot can be used without risk of a - snapshot too old error occurring when using the snapshot. + snapshot too old error occurring when using the snapshot. This parameter can only be set at server start. @@ -2107,12 +2107,12 @@ include_dir 'conf.d' - A value of -1 disables this feature, and is the default. + A value of -1 disables this feature, and is the default. Useful values for production work probably range from a small number of hours to a few days. 
The setting will be coerced to a granularity - of minutes, and small numbers (such as 0 or - 1min) are only allowed because they may sometimes be - useful for testing. While a setting as high as 60d is + of minutes, and small numbers (such as 0 or + 1min) are only allowed because they may sometimes be + useful for testing. While a setting as high as 60d is allowed, please note that in many workloads extreme bloat or transaction ID wraparound may occur in much shorter time frames. @@ -2120,10 +2120,10 @@ include_dir 'conf.d' When this feature is enabled, freed space at the end of a relation cannot be released to the operating system, since that could remove - information needed to detect the snapshot too old + information needed to detect the snapshot too old condition. All space allocated to a relation remains associated with that relation for reuse only within that relation unless explicitly - freed (for example, with VACUUM FULL). + freed (for example, with VACUUM FULL). @@ -2135,7 +2135,7 @@ include_dir 'conf.d' Some tables cannot safely be vacuumed early, and so will not be affected by this setting, such as system catalogs. For such tables this setting will neither reduce bloat nor create a possibility - of a snapshot too old error on scanning. + of a snapshot too old error on scanning. @@ -2158,45 +2158,45 @@ include_dir 'conf.d' wal_level (enum) - wal_level configuration parameter + wal_level configuration parameter - wal_level determines how much information is written to - the WAL. The default value is replica, which writes enough + wal_level determines how much information is written to + the WAL. The default value is replica, which writes enough data to support WAL archiving and replication, including running - read-only queries on a standby server. minimal removes all + read-only queries on a standby server. minimal removes all logging except the information required to recover from a crash or immediate shutdown. Finally, - logical adds information necessary to support logical + logical adds information necessary to support logical decoding. Each level includes the information logged at all lower levels. This parameter can only be set at server start. - In minimal level, WAL-logging of some bulk + In minimal level, WAL-logging of some bulk operations can be safely skipped, which can make those operations much faster (see ). Operations in which this optimization can be applied include: - CREATE TABLE AS - CREATE INDEX - CLUSTER - COPY into tables that were created or truncated in the same + CREATE TABLE AS + CREATE INDEX + CLUSTER + COPY into tables that were created or truncated in the same transaction But minimal WAL does not contain enough information to reconstruct the - data from a base backup and the WAL logs, so replica or + data from a base backup and the WAL logs, so replica or higher must be used to enable WAL archiving () and streaming replication. - In logical level, the same information is logged as - with replica, plus information needed to allow + In logical level, the same information is logged as + with replica, plus information needed to allow extracting logical change sets from the WAL. Using a level of - logical will increase the WAL volume, particularly if many + logical will increase the WAL volume, particularly if many tables are configured for REPLICA IDENTITY FULL and - many UPDATE and DELETE statements are + many UPDATE and DELETE statements are executed. 
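To make the trade-off between the non-minimal levels concrete, a sketch:

    # Default: enough WAL for archiving, streaming replication, and
    # read-only queries on a standby.
    wal_level = replica

    # Uncomment instead to also support logical decoding, at the cost
    # of extra WAL volume.
    #wal_level = logical
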
@@ -2210,14 +2210,14 @@ include_dir 'conf.d' fsync (boolean) - fsync configuration parameter + fsync configuration parameter - If this parameter is on, the PostgreSQL server + If this parameter is on, the PostgreSQL server will try to make sure that updates are physically written to - disk, by issuing fsync() system calls or various + disk, by issuing fsync() system calls or various equivalent methods (see ). This ensures that the database cluster can recover to a consistent state after an operating system or hardware crash. @@ -2249,7 +2249,7 @@ include_dir 'conf.d' off to on, it is necessary to force all modified buffers in the kernel to durable storage. This can be done while the cluster is shutdown or while fsync is on by running initdb - --sync-only, running sync, unmounting the + --sync-only, running sync, unmounting the file system, or rebooting the server. @@ -2261,7 +2261,7 @@ include_dir 'conf.d' - fsync can only be set in the postgresql.conf + fsync can only be set in the postgresql.conf file or on the server command line. If you turn this parameter off, also consider turning off . @@ -2272,26 +2272,26 @@ include_dir 'conf.d' synchronous_commit (enum) - synchronous_commit configuration parameter + synchronous_commit configuration parameter Specifies whether transaction commit will wait for WAL records - to be written to disk before the command returns a success - indication to the client. Valid values are on, - remote_apply, remote_write, local, - and off. The default, and safe, setting - is on. When off, there can be a delay between + to be written to disk before the command returns a success + indication to the client. Valid values are on, + remote_apply, remote_write, local, + and off. The default, and safe, setting + is on. When off, there can be a delay between when success is reported to the client and when the transaction is really guaranteed to be safe against a server crash. (The maximum delay is three times .) Unlike - , setting this parameter to off + , setting this parameter to off does not create any risk of database inconsistency: an operating system or database crash might result in some recent allegedly-committed transactions being lost, but the database state will be just the same as if those transactions had - been aborted cleanly. So, turning synchronous_commit off + been aborted cleanly. So, turning synchronous_commit off can be a useful alternative when performance is more important than exact certainty about the durability of a transaction. For more discussion see . @@ -2300,32 +2300,32 @@ include_dir 'conf.d' If is non-empty, this parameter also controls whether or not transaction commits will wait for their WAL records to be replicated to the standby server(s). - When set to on, commits will wait until replies + When set to on, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and flushed it to disk. This ensures the transaction will not be lost unless both the primary and all synchronous standbys suffer corruption of their database storage. - When set to remote_apply, commits will wait until replies + When set to remote_apply, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and applied it, so that it has become visible to queries on the standby(s). 
- When set to remote_write, commits will wait until replies + When set to remote_write, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and written it out to their operating system. This setting is sufficient to ensure data preservation even if a standby instance of - PostgreSQL were to crash, but not if the standby + PostgreSQL were to crash, but not if the standby suffers an operating-system-level crash, since the data has not necessarily reached stable storage on the standby. - Finally, the setting local causes commits to wait for + Finally, the setting local causes commits to wait for local flush to disk, but not for replication. This is not usually desirable when synchronous replication is in use, but is provided for completeness. - If synchronous_standby_names is empty, the settings - on, remote_apply, remote_write - and local all provide the same synchronization level: + If synchronous_standby_names is empty, the settings + on, remote_apply, remote_write + and local all provide the same synchronization level: transaction commits only wait for local flush to disk. @@ -2335,7 +2335,7 @@ include_dir 'conf.d' transactions commit synchronously and others asynchronously. For example, to make a single multistatement transaction commit asynchronously when the default is the opposite, issue SET - LOCAL synchronous_commit TO OFF within the transaction. + LOCAL synchronous_commit TO OFF within the transaction. @@ -2343,7 +2343,7 @@ include_dir 'conf.d' wal_sync_method (enum) - wal_sync_method configuration parameter + wal_sync_method configuration parameter @@ -2356,41 +2356,41 @@ include_dir 'conf.d' - open_datasync (write WAL files with open() option O_DSYNC) + open_datasync (write WAL files with open() option O_DSYNC) - fdatasync (call fdatasync() at each commit) + fdatasync (call fdatasync() at each commit) - fsync (call fsync() at each commit) + fsync (call fsync() at each commit) - fsync_writethrough (call fsync() at each commit, forcing write-through of any disk write cache) + fsync_writethrough (call fsync() at each commit, forcing write-through of any disk write cache) - open_sync (write WAL files with open() option O_SYNC) + open_sync (write WAL files with open() option O_SYNC) - The open_* options also use O_DIRECT if available. + The open_* options also use O_DIRECT if available. Not all of these choices are available on all platforms. The default is the first method in the above list that is supported - by the platform, except that fdatasync is the default on + by the platform, except that fdatasync is the default on Linux. The default is not necessarily ideal; it might be necessary to change this setting or other aspects of your system configuration in order to create a crash-safe configuration or achieve optimal performance. These aspects are discussed in . - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2399,12 +2399,12 @@ include_dir 'conf.d' full_page_writes (boolean) - full_page_writes configuration parameter + full_page_writes configuration parameter - When this parameter is on, the PostgreSQL server + When this parameter is on, the PostgreSQL server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint. 
This is needed because @@ -2436,9 +2436,9 @@ include_dir 'conf.d' - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - The default is on. + The default is on. @@ -2446,12 +2446,12 @@ include_dir 'conf.d' wal_log_hints (boolean) - wal_log_hints configuration parameter + wal_log_hints configuration parameter - When this parameter is on, the PostgreSQL + When this parameter is on, the PostgreSQL server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint, even for non-critical modifications of so-called hint bits. @@ -2465,7 +2465,7 @@ include_dir 'conf.d' - This parameter can only be set at server start. The default value is off. + This parameter can only be set at server start. The default value is off. @@ -2473,16 +2473,16 @@ include_dir 'conf.d' wal_compression (boolean) - wal_compression configuration parameter + wal_compression configuration parameter - When this parameter is on, the PostgreSQL + When this parameter is on, the PostgreSQL server compresses a full page image written to WAL when is on or during a base backup. A compressed page image will be decompressed during WAL replay. - The default value is off. + The default value is off. Only superusers can change this setting. @@ -2498,7 +2498,7 @@ include_dir 'conf.d' wal_buffers (integer) - wal_buffers configuration parameter + wal_buffers configuration parameter @@ -2530,24 +2530,24 @@ include_dir 'conf.d' wal_writer_delay (integer) - wal_writer_delay configuration parameter + wal_writer_delay configuration parameter Specifies how often the WAL writer flushes WAL. After flushing WAL it - sleeps for wal_writer_delay milliseconds, unless woken up + sleeps for wal_writer_delay milliseconds, unless woken up by an asynchronously committing transaction. If the last flush - happened less than wal_writer_delay milliseconds ago and - less than wal_writer_flush_after bytes of WAL have been + happened less than wal_writer_delay milliseconds ago and + less than wal_writer_flush_after bytes of WAL have been produced since, then WAL is only written to the operating system, not flushed to disk. - The default value is 200 milliseconds (200ms). Note that + The default value is 200 milliseconds (200ms). Note that on many systems, the effective resolution of sleep delays is 10 - milliseconds; setting wal_writer_delay to a value that is + milliseconds; setting wal_writer_delay to a value that is not a multiple of 10 might have the same results as setting it to the next higher multiple of 10. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -2555,19 +2555,19 @@ include_dir 'conf.d' wal_writer_flush_after (integer) - wal_writer_flush_after configuration parameter + wal_writer_flush_after configuration parameter Specifies how often the WAL writer flushes WAL. If the last flush - happened less than wal_writer_delay milliseconds ago and - less than wal_writer_flush_after bytes of WAL have been + happened less than wal_writer_delay milliseconds ago and + less than wal_writer_flush_after bytes of WAL have been produced since, then WAL is only written to the operating system, not - flushed to disk. If wal_writer_flush_after is set - to 0 then WAL data is flushed immediately. The default is + flushed to disk. If wal_writer_flush_after is set + to 0 then WAL data is flushed immediately. The default is 1MB. 
This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -2575,7 +2575,7 @@ include_dir 'conf.d' commit_delay (integer) - commit_delay configuration parameter + commit_delay configuration parameter @@ -2592,15 +2592,15 @@ include_dir 'conf.d' commit_siblings other transactions are active when a flush is about to be initiated. Also, no delays are performed if fsync is disabled. - The default commit_delay is zero (no delay). + The default commit_delay is zero (no delay). Only superusers can change this setting. - In PostgreSQL releases prior to 9.3, + In PostgreSQL releases prior to 9.3, commit_delay behaved differently and was much less effective: it affected only commits, rather than all WAL flushes, and waited for the entire configured delay even if the WAL flush - was completed sooner. Beginning in PostgreSQL 9.3, + was completed sooner. Beginning in PostgreSQL 9.3, the first process that becomes ready to flush waits for the configured interval, while subsequent processes wait only until the leader completes the flush operation. @@ -2611,13 +2611,13 @@ include_dir 'conf.d' commit_siblings (integer) - commit_siblings configuration parameter + commit_siblings configuration parameter Minimum number of concurrent open transactions to require - before performing the commit_delay delay. A larger + before performing the commit_delay delay. A larger value makes it more probable that at least one other transaction will become ready to commit during the delay interval. The default is five transactions. @@ -2634,17 +2634,17 @@ include_dir 'conf.d' checkpoint_timeout (integer) - checkpoint_timeout configuration parameter + checkpoint_timeout configuration parameter Maximum time between automatic WAL checkpoints, in seconds. The valid range is between 30 seconds and one day. - The default is five minutes (5min). + The default is five minutes (5min). Increasing this parameter can increase the amount of time needed for crash recovery. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2653,14 +2653,14 @@ include_dir 'conf.d' checkpoint_completion_target (floating point) - checkpoint_completion_target configuration parameter + checkpoint_completion_target configuration parameter Specifies the target of checkpoint completion, as a fraction of total time between checkpoints. The default is 0.5. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2669,7 +2669,7 @@ include_dir 'conf.d' checkpoint_flush_after (integer) - checkpoint_flush_after configuration parameter + checkpoint_flush_after configuration parameter @@ -2686,10 +2686,10 @@ include_dir 'conf.d' than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, - and 2MB. The default is 256kB on - Linux, 0 elsewhere. (If BLCKSZ is not + and 2MB. The default is 256kB on + Linux, 0 elsewhere. (If BLCKSZ is not 8kB, the default and maximum values scale proportionally to it.) - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. 
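As an illustration only (suitable values depend heavily on the workload and on max_wal_size, described below), checkpoints could be made less frequent and more spread out like this:

    checkpoint_timeout = 15min           # fewer, larger checkpoints
    checkpoint_completion_target = 0.9   # spread writes across most of the interval
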
@@ -2698,7 +2698,7 @@ include_dir 'conf.d' checkpoint_warning (integer) - checkpoint_warning configuration parameter + checkpoint_warning configuration parameter @@ -2706,11 +2706,11 @@ include_dir 'conf.d' Write a message to the server log if checkpoints caused by the filling of checkpoint segment files happen closer together than this many seconds (which suggests that - max_wal_size ought to be raised). The default is - 30 seconds (30s). Zero disables the warning. + max_wal_size ought to be raised). The default is + 30 seconds (30s). Zero disables the warning. No warnings will be generated if checkpoint_timeout is less than checkpoint_warning. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2719,19 +2719,19 @@ include_dir 'conf.d' max_wal_size (integer) - max_wal_size configuration parameter + max_wal_size configuration parameter Maximum size to let the WAL grow to between automatic WAL checkpoints. This is a soft limit; WAL size can exceed - max_wal_size under special circumstances, like - under heavy load, a failing archive_command, or a high - wal_keep_segments setting. The default is 1 GB. + max_wal_size under special circumstances, like + under heavy load, a failing archive_command, or a high + wal_keep_segments setting. The default is 1 GB. Increasing this parameter can increase the amount of time needed for crash recovery. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2740,7 +2740,7 @@ include_dir 'conf.d' min_wal_size (integer) - min_wal_size configuration parameter + min_wal_size configuration parameter @@ -2750,7 +2750,7 @@ include_dir 'conf.d' This can be used to ensure that enough WAL space is reserved to handle spikes in WAL usage, for example when running large batch jobs. The default is 80 MB. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2765,29 +2765,29 @@ include_dir 'conf.d' archive_mode (enum) - archive_mode configuration parameter + archive_mode configuration parameter - When archive_mode is enabled, completed WAL segments + When archive_mode is enabled, completed WAL segments are sent to archive storage by setting - . In addition to off, - to disable, there are two modes: on, and - always. During normal operation, there is no - difference between the two modes, but when set to always + . In addition to off, + to disable, there are two modes: on, and + always. During normal operation, there is no + difference between the two modes, but when set to always the WAL archiver is enabled also during archive recovery or standby - mode. In always mode, all files restored from the archive + mode. In always mode, all files restored from the archive or streamed with streaming replication will be archived (again). See for details. - archive_mode and archive_command are - separate variables so that archive_command can be + archive_mode and archive_command are + separate variables so that archive_command can be changed without leaving archiving mode. This parameter can only be set at server start. - archive_mode cannot be enabled when - wal_level is set to minimal. + archive_mode cannot be enabled when + wal_level is set to minimal. 
@@ -2795,32 +2795,32 @@ include_dir 'conf.d' archive_command (string) - archive_command configuration parameter + archive_command configuration parameter The local shell command to execute to archive a completed WAL file - segment. Any %p in the string is + segment. Any %p in the string is replaced by the path name of the file to archive, and any - %f is replaced by only the file name. + %f is replaced by only the file name. (The path name is relative to the working directory of the server, i.e., the cluster's data directory.) - Use %% to embed an actual % character in the + Use %% to embed an actual % character in the command. It is important for the command to return a zero exit status only if it succeeds. For more information see . - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. It is ignored unless - archive_mode was enabled at server start. - If archive_command is an empty string (the default) while - archive_mode is enabled, WAL archiving is temporarily + archive_mode was enabled at server start. + If archive_command is an empty string (the default) while + archive_mode is enabled, WAL archiving is temporarily disabled, but the server continues to accumulate WAL segment files in the expectation that a command will soon be provided. Setting - archive_command to a command that does nothing but - return true, e.g. /bin/true (REM on + archive_command to a command that does nothing but + return true, e.g. /bin/true (REM on Windows), effectively disables archiving, but also breaks the chain of WAL files needed for archive recovery, so it should only be used in unusual circumstances. @@ -2831,7 +2831,7 @@ include_dir 'conf.d' archive_timeout (integer) - archive_timeout configuration parameter + archive_timeout configuration parameter @@ -2841,7 +2841,7 @@ include_dir 'conf.d' traffic (or has slack periods where it does so), there could be a long delay between the completion of a transaction and its safe recording in archive storage. To limit how old unarchived - data can be, you can set archive_timeout to force the + data can be, you can set archive_timeout to force the server to switch to a new WAL segment file periodically. When this parameter is greater than zero, the server will switch to a new segment file whenever this many seconds have elapsed since the last @@ -2850,13 +2850,13 @@ include_dir 'conf.d' no database activity). Note that archived files that are closed early due to a forced switch are still the same length as completely full files. Therefore, it is unwise to use a very short - archive_timeout — it will bloat your archive - storage. archive_timeout settings of a minute or so are + archive_timeout — it will bloat your archive + storage. archive_timeout settings of a minute or so are usually reasonable. You should consider using streaming replication, instead of archiving, if you want data to be copied off the master server more quickly than that. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -2871,7 +2871,7 @@ include_dir 'conf.d' These settings control the behavior of the built-in - streaming replication feature (see + streaming replication feature (see ). Servers will be either a Master or a Standby server. Masters can send data, while Standby(s) are always receivers of replicated data. 
When cascading replication @@ -2898,7 +2898,7 @@ include_dir 'conf.d' max_wal_senders (integer) - max_wal_senders configuration parameter + max_wal_senders configuration parameter @@ -2914,8 +2914,8 @@ include_dir 'conf.d' a timeout is reached, so this parameter should be set slightly higher than the maximum number of expected clients so disconnected clients can immediately reconnect. This parameter can only - be set at server start. wal_level must be set to - replica or higher to allow connections from standby + be set at server start. wal_level must be set to + replica or higher to allow connections from standby servers. @@ -2924,7 +2924,7 @@ include_dir 'conf.d' max_replication_slots (integer) - max_replication_slots configuration parameter + max_replication_slots configuration parameter @@ -2944,17 +2944,17 @@ include_dir 'conf.d' wal_keep_segments (integer) - wal_keep_segments configuration parameter + wal_keep_segments configuration parameter Specifies the minimum number of past log file segments kept in the - pg_wal + pg_wal directory, in case a standby server needs to fetch them for streaming replication. Each segment is normally 16 megabytes. If a standby server connected to the sending server falls behind by more than - wal_keep_segments segments, the sending server might remove + wal_keep_segments segments, the sending server might remove a WAL segment still needed by the standby, in which case the replication connection will be terminated. Downstream connections will also eventually fail as a result. (However, the standby @@ -2964,15 +2964,15 @@ include_dir 'conf.d' This sets only the minimum number of segments retained in - pg_wal; the system might need to retain more segments + pg_wal; the system might need to retain more segments for WAL archival or to recover from a checkpoint. If - wal_keep_segments is zero (the default), the system + wal_keep_segments is zero (the default), the system doesn't keep any extra segments for standby purposes, so the number of old WAL segments available to standby servers is a function of the location of the previous checkpoint and status of WAL archiving. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. @@ -2980,7 +2980,7 @@ include_dir 'conf.d' wal_sender_timeout (integer) - wal_sender_timeout configuration parameter + wal_sender_timeout configuration parameter @@ -2990,7 +2990,7 @@ include_dir 'conf.d' the sending server to detect a standby crash or network outage. A value of zero disables the timeout mechanism. This parameter can only be set in - the postgresql.conf file or on the server command line. + the postgresql.conf file or on the server command line. The default value is 60 seconds. @@ -2999,13 +2999,13 @@ include_dir 'conf.d' track_commit_timestamp (boolean) - track_commit_timestamp configuration parameter + track_commit_timestamp configuration parameter Record commit time of transactions. This parameter - can only be set in postgresql.conf file or on the server + can only be set in postgresql.conf file or on the server command line. The default value is off. @@ -3034,13 +3034,13 @@ include_dir 'conf.d' synchronous_standby_names (string) - synchronous_standby_names configuration parameter + synchronous_standby_names configuration parameter Specifies a list of standby servers that can support - synchronous replication, as described in + synchronous replication, as described in . 
There will be one or more active synchronous standbys; transactions waiting for commit will be allowed to proceed after @@ -3050,15 +3050,15 @@ include_dir 'conf.d' that are both currently connected and streaming data in real-time (as shown by a state of streaming in the - pg_stat_replication view). + pg_stat_replication view). Specifying more than one synchronous standby can allow for very high availability and protection against data loss. The name of a standby server for this purpose is the - application_name setting of the standby, as set in the + application_name setting of the standby, as set in the standby's connection information. In case of a physical replication - standby, this should be set in the primary_conninfo + standby, this should be set in the primary_conninfo setting in recovery.conf; the default is walreceiver. For logical replication, this can be set in the connection information of the subscription, and it @@ -3078,54 +3078,54 @@ ANY num_sync ( standby_name is the name of a standby server. - FIRST and ANY specify the method to choose + FIRST and ANY specify the method to choose synchronous standbys from the listed servers. - The keyword FIRST, coupled with + The keyword FIRST, coupled with num_sync, specifies a priority-based synchronous replication and makes transaction commits wait until their WAL records are replicated to num_sync synchronous standbys chosen based on their priorities. For example, a setting of - FIRST 3 (s1, s2, s3, s4) will cause each commit to wait for + FIRST 3 (s1, s2, s3, s4) will cause each commit to wait for replies from three higher-priority standbys chosen from standby servers - s1, s2, s3 and s4. + s1, s2, s3 and s4. The standbys whose names appear earlier in the list are given higher priority and will be considered as synchronous. Other standby servers appearing later in this list represent potential synchronous standbys. If any of the current synchronous standbys disconnects for whatever reason, it will be replaced immediately with the next-highest-priority - standby. The keyword FIRST is optional. + standby. The keyword FIRST is optional. - The keyword ANY, coupled with + The keyword ANY, coupled with num_sync, specifies a quorum-based synchronous replication and makes transaction commits - wait until their WAL records are replicated to at least + wait until their WAL records are replicated to at least num_sync listed standbys. - For example, a setting of ANY 3 (s1, s2, s3, s4) will cause + For example, a setting of ANY 3 (s1, s2, s3, s4) will cause each commit to proceed as soon as at least any three standbys of - s1, s2, s3 and s4 + s1, s2, s3 and s4 reply. - FIRST and ANY are case-insensitive. If these + FIRST and ANY are case-insensitive. If these keywords are used as the name of a standby server, its standby_name must be double-quoted. - The third syntax was used before PostgreSQL + The third syntax was used before PostgreSQL version 9.6 and is still supported. It's the same as the first syntax - with FIRST and + with FIRST and num_sync equal to 1. - For example, FIRST 1 (s1, s2) and s1, s2 have - the same meaning: either s1 or s2 is chosen + For example, FIRST 1 (s1, s2) and s1, s2 have + the same meaning: either s1 or s2 is chosen as a synchronous standby. - The special entry * matches any standby name. + The special entry * matches any standby name. There is no mechanism to enforce uniqueness of standby names. 
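A hedged illustration of the first two syntaxes; the names s1, s2, and s3 stand for hypothetical standby application_name settings:

ALTER SYSTEM SET synchronous_standby_names = 'FIRST 2 (s1, s2, s3)';  -- priority-based: wait for the two highest-priority standbys
-- or, quorum-based: wait for any two of the three listed standbys
ALTER SYSTEM SET synchronous_standby_names = 'ANY 2 (s1, s2, s3)';
SELECT pg_reload_conf();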
In case @@ -3136,7 +3136,7 @@ ANY num_sync ( standby_name should have the form of a valid SQL identifier, unless it - is *. You can use double-quoting if necessary. But note + is *. You can use double-quoting if necessary. But note that standby_names are compared to standby application names case-insensitively, whether double-quoted or not. @@ -3149,10 +3149,10 @@ ANY num_sync ( parameter to - local or off. + local or off. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -3161,13 +3161,13 @@ ANY num_sync ( vacuum_defer_cleanup_age (integer) - vacuum_defer_cleanup_age configuration parameter + vacuum_defer_cleanup_age configuration parameter - Specifies the number of transactions by which VACUUM and - HOT updates will defer cleanup of dead row versions. The + Specifies the number of transactions by which VACUUM and + HOT updates will defer cleanup of dead row versions. The default is zero transactions, meaning that dead row versions can be removed as soon as possible, that is, as soon as they are no longer visible to any open transaction. You may wish to set this to a @@ -3178,16 +3178,16 @@ ANY num_sync ( num_sync ( hot_standby (boolean) - hot_standby configuration parameter + hot_standby configuration parameter @@ -3226,7 +3226,7 @@ ANY num_sync ( max_standby_archive_delay (integer) - max_standby_archive_delay configuration parameter + max_standby_archive_delay configuration parameter @@ -3235,16 +3235,16 @@ ANY num_sync ( . - max_standby_archive_delay applies when WAL data is + max_standby_archive_delay applies when WAL data is being read from WAL archive (and is therefore not current). The default is 30 seconds. Units are milliseconds if not specified. A value of -1 allows the standby to wait forever for conflicting queries to complete. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - Note that max_standby_archive_delay is not the same as the + Note that max_standby_archive_delay is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply any one WAL segment's data. Thus, if one query has resulted in significant delay earlier in the @@ -3257,7 +3257,7 @@ ANY num_sync ( max_standby_streaming_delay (integer) - max_standby_streaming_delay configuration parameter + max_standby_streaming_delay configuration parameter @@ -3266,16 +3266,16 @@ ANY num_sync ( . - max_standby_streaming_delay applies when WAL data is + max_standby_streaming_delay applies when WAL data is being received via streaming replication. The default is 30 seconds. Units are milliseconds if not specified. A value of -1 allows the standby to wait forever for conflicting queries to complete. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - Note that max_standby_streaming_delay is not the same as + Note that max_standby_streaming_delay is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply WAL data once it has been received from the primary server. 
Thus, if one query has @@ -3289,7 +3289,7 @@ ANY num_sync ( wal_receiver_status_interval (integer) - wal_receiver_status_interval configuration parameter + wal_receiver_status_interval configuration parameter @@ -3298,7 +3298,7 @@ ANY num_sync ( - pg_stat_replication view. The standby will report + pg_stat_replication view. The standby will report the last write-ahead log location it has written, the last position it has flushed to disk, and the last position it has applied. This parameter's @@ -3307,7 +3307,7 @@ ANY num_sync ( num_sync ( hot_standby_feedback (boolean) - hot_standby_feedback configuration parameter + hot_standby_feedback configuration parameter @@ -3327,9 +3327,9 @@ ANY num_sync ( ( num_sync ( wal_receiver_timeout (integer) - wal_receiver_timeout configuration parameter + wal_receiver_timeout configuration parameter @@ -3363,7 +3363,7 @@ ANY num_sync ( num_sync ( wal_retrieve_retry_interval (integer) - wal_retrieve_retry_interval configuration parameter + wal_retrieve_retry_interval configuration parameter Specify how long the standby server should wait when WAL data is not available from any sources (streaming replication, - local pg_wal or WAL archive) before retrying to + local pg_wal or WAL archive) before retrying to retrieve WAL data. This parameter can only be set in the - postgresql.conf file or on the server command line. + postgresql.conf file or on the server command line. The default value is 5 seconds. Units are milliseconds if not specified. @@ -3420,7 +3420,7 @@ ANY num_sync ( max_logical_replication_workers (int) - max_logical_replication_workers configuration parameter + max_logical_replication_workers configuration parameter @@ -3441,7 +3441,7 @@ ANY num_sync ( max_sync_workers_per_subscription (integer) - max_sync_workers_per_subscription configuration parameter + max_sync_workers_per_subscription configuration parameter @@ -3478,7 +3478,7 @@ ANY num_sync ( num_sync ( num_sync ( enable_gathermerge (boolean) - enable_gathermerge configuration parameter + enable_gathermerge configuration parameter Enables or disables the query planner's use of gather - merge plan types. The default is on. + merge plan types. The default is on. @@ -3527,13 +3527,13 @@ ANY num_sync ( enable_hashagg (boolean) - enable_hashagg configuration parameter + enable_hashagg configuration parameter Enables or disables the query planner's use of hashed - aggregation plan types. The default is on. + aggregation plan types. The default is on. @@ -3541,13 +3541,13 @@ ANY num_sync ( enable_hashjoin (boolean) - enable_hashjoin configuration parameter + enable_hashjoin configuration parameter Enables or disables the query planner's use of hash-join plan - types. The default is on. + types. The default is on. @@ -3558,13 +3558,13 @@ ANY num_sync ( num_sync ( enable_indexonlyscan (boolean) - enable_indexonlyscan configuration parameter + enable_indexonlyscan configuration parameter Enables or disables the query planner's use of index-only-scan plan types (see ). - The default is on. + The default is on. @@ -3587,7 +3587,7 @@ ANY num_sync ( enable_material (boolean) - enable_material configuration parameter + enable_material configuration parameter @@ -3596,7 +3596,7 @@ ANY num_sync ( num_sync ( enable_mergejoin (boolean) - enable_mergejoin configuration parameter + enable_mergejoin configuration parameter Enables or disables the query planner's use of merge-join plan - types. The default is on. + types. The default is on. 
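The enable_* parameters above are most useful when toggled per session while examining plans. A sketch, assuming hypothetical tables orders and customers:

SET enable_mergejoin = off;  -- discourage merge joins in this session only
EXPLAIN SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id;
RESET enable_mergejoin;  -- restore the default

These settings discourage rather than forbid a plan type, so the planner can still fall back to it when no alternative plan exists.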
@@ -3618,7 +3618,7 @@ ANY num_sync ( enable_nestloop (boolean) - enable_nestloop configuration parameter + enable_nestloop configuration parameter @@ -3627,7 +3627,7 @@ ANY num_sync ( num_sync ( enable_partition_wise_join (boolean) - enable_partition_wise_join configuration parameter + enable_partition_wise_join configuration parameter @@ -3647,7 +3647,7 @@ ANY num_sync ( num_sync ( num_sync ( num_sync ( enable_sort (boolean) - enable_sort configuration parameter + enable_sort configuration parameter @@ -3684,7 +3684,7 @@ ANY num_sync ( num_sync ( enable_tidscan (boolean) - enable_tidscan configuration parameter + enable_tidscan configuration parameter - Enables or disables the query planner's use of TID - scan plan types. The default is on. + Enables or disables the query planner's use of TID + scan plan types. The default is on. @@ -3709,12 +3709,12 @@ ANY num_sync ( num_sync ( seq_page_cost (floating point) - seq_page_cost configuration parameter + seq_page_cost configuration parameter @@ -3752,7 +3752,7 @@ ANY num_sync ( random_page_cost (floating point) - random_page_cost configuration parameter + random_page_cost configuration parameter @@ -3765,7 +3765,7 @@ ANY num_sync ( num_sync ( num_sync ( cpu_tuple_cost (floating point) - cpu_tuple_cost configuration parameter + cpu_tuple_cost configuration parameter @@ -3826,7 +3826,7 @@ ANY num_sync ( cpu_index_tuple_cost (floating point) - cpu_index_tuple_cost configuration parameter + cpu_index_tuple_cost configuration parameter @@ -3841,7 +3841,7 @@ ANY num_sync ( cpu_operator_cost (floating point) - cpu_operator_cost configuration parameter + cpu_operator_cost configuration parameter @@ -3856,7 +3856,7 @@ ANY num_sync ( parallel_setup_cost (floating point) - parallel_setup_cost configuration parameter + parallel_setup_cost configuration parameter @@ -3871,7 +3871,7 @@ ANY num_sync ( parallel_tuple_cost (floating point) - parallel_tuple_cost configuration parameter + parallel_tuple_cost configuration parameter @@ -3886,7 +3886,7 @@ ANY num_sync ( min_parallel_table_scan_size (integer) - min_parallel_table_scan_size configuration parameter + min_parallel_table_scan_size configuration parameter @@ -3896,7 +3896,7 @@ ANY num_sync ( num_sync ( min_parallel_index_scan_size (integer) - min_parallel_index_scan_size configuration parameter + min_parallel_index_scan_size configuration parameter @@ -3913,7 +3913,7 @@ ANY num_sync ( num_sync ( effective_cache_size (integer) - effective_cache_size configuration parameter + effective_cache_size configuration parameter @@ -3942,7 +3942,7 @@ ANY num_sync ( num_sync ( num_sync ( geqo_threshold (integer) - geqo_threshold configuration parameter + geqo_threshold configuration parameter Use genetic query optimization to plan queries with at least - this many FROM items involved. (Note that a - FULL OUTER JOIN construct counts as only one FROM + this many FROM items involved. (Note that a + FULL OUTER JOIN construct counts as only one FROM item.) The default is 12. 
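Stepping back to the cost constants above: they are usually adjusted cluster-wide. The values below assume storage with cheap random reads and a generously sized OS cache, and are illustrative rather than recommendations from this patch:

ALTER SYSTEM SET random_page_cost = 1.1;  -- close to seq_page_cost when random reads are cheap
ALTER SYSTEM SET effective_cache_size = '8GB';  -- a planner estimate only; allocates no memory
SELECT pg_reload_conf();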
For simpler queries it is usually best to use the regular, exhaustive-search planner, but for queries with many tables the exhaustive search takes too long, often @@ -4011,7 +4011,7 @@ ANY num_sync ( geqo_effort (integer) - geqo_effort configuration parameter + geqo_effort configuration parameter @@ -4037,7 +4037,7 @@ ANY num_sync ( geqo_pool_size (integer) - geqo_pool_size configuration parameter + geqo_pool_size configuration parameter @@ -4055,7 +4055,7 @@ ANY num_sync ( geqo_generations (integer) - geqo_generations configuration parameter + geqo_generations configuration parameter @@ -4073,7 +4073,7 @@ ANY num_sync ( geqo_selection_bias (floating point) - geqo_selection_bias configuration parameter + geqo_selection_bias configuration parameter @@ -4088,7 +4088,7 @@ ANY num_sync ( geqo_seed (floating point) - geqo_seed configuration parameter + geqo_seed configuration parameter @@ -4112,17 +4112,17 @@ ANY num_sync ( default_statistics_target (integer) - default_statistics_target configuration parameter + default_statistics_target configuration parameter Sets the default statistics target for table columns without a column-specific target set via ALTER TABLE - SET STATISTICS. Larger values increase the time needed to - do ANALYZE, but might improve the quality of the + SET STATISTICS. Larger values increase the time needed to + do ANALYZE, but might improve the quality of the planner's estimates. The default is 100. For more information - on the use of statistics by the PostgreSQL + on the use of statistics by the PostgreSQL query planner, refer to . @@ -4134,26 +4134,26 @@ ANY num_sync ( cursor_tuple_fraction (floating point) - cursor_tuple_fraction configuration parameter + cursor_tuple_fraction configuration parameter Sets the planner's estimate of the fraction of a cursor's rows that will be retrieved. The default is 0.1. Smaller values of this - setting bias the planner towards using fast start plans + setting bias the planner towards using fast start plans for cursors, which will retrieve the first few rows quickly while perhaps taking a long time to fetch all rows. Larger values put more emphasis on the total estimated time. At the maximum @@ -4209,7 +4209,7 @@ SELECT * FROM parent WHERE key = 2400; from_collapse_limit (integer) - from_collapse_limit configuration parameter + from_collapse_limit configuration parameter @@ -4232,14 +4232,14 @@ SELECT * FROM parent WHERE key = 2400; join_collapse_limit (integer) - join_collapse_limit configuration parameter + join_collapse_limit configuration parameter - The planner will rewrite explicit JOIN - constructs (except FULL JOINs) into lists of - FROM items whenever a list of no more than this many items + The planner will rewrite explicit JOIN + constructs (except FULL JOINs) into lists of + FROM items whenever a list of no more than this many items would result. Smaller values reduce planning time but might yield inferior query plans. @@ -4248,7 +4248,7 @@ SELECT * FROM parent WHERE key = 2400; By default, this variable is set the same as from_collapse_limit, which is appropriate for most uses. Setting it to 1 prevents any reordering of - explicit JOINs. Thus, the explicit join order + explicit JOINs. Thus, the explicit join order specified in the query will be the actual order in which the relations are joined. 
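For example, a session that wants its written join order used verbatim might do the following; the tables a, b, and c are hypothetical:

SET join_collapse_limit = 1;
EXPLAIN SELECT * FROM a JOIN b ON a.id = b.a_id JOIN c ON b.id = c.b_id;  -- joins a to b first, then the result to c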
Because the query planner does not always choose the optimal join order, advanced users can elect to @@ -4268,24 +4268,24 @@ SELECT * FROM parent WHERE key = 2400; force_parallel_mode (enum) - force_parallel_mode configuration parameter + force_parallel_mode configuration parameter Allows the use of parallel queries for testing purposes even in cases where no performance benefit is expected. - The allowed values of force_parallel_mode are - off (use parallel mode only when it is expected to improve - performance), on (force parallel query for all queries - for which it is thought to be safe), and regress (like - on, but with additional behavior changes as explained + The allowed values of force_parallel_mode are + off (use parallel mode only when it is expected to improve + performance), on (force parallel query for all queries + for which it is thought to be safe), and regress (like + on, but with additional behavior changes as explained below). - More specifically, setting this value to on will add - a Gather node to the top of any query plan for which this + More specifically, setting this value to on will add + a Gather node to the top of any query plan for which this appears to be safe, so that the query runs inside of a parallel worker. Even when a parallel worker is not available or cannot be used, operations such as starting a subtransaction that would be prohibited @@ -4297,15 +4297,15 @@ SELECT * FROM parent WHERE key = 2400; - Setting this value to regress has all of the same effects - as setting it to on plus some additional effects that are + Setting this value to regress has all of the same effects + as setting it to on plus some additional effects that are intended to facilitate automated regression testing. Normally, messages from a parallel worker include a context line indicating that, - but a setting of regress suppresses this line so that the + but a setting of regress suppresses this line so that the output is the same as in non-parallel execution. Also, - the Gather nodes added to plans by this setting are hidden - in EXPLAIN output so that the output matches what - would be obtained if this setting were turned off. + the Gather nodes added to plans by this setting are hidden + in EXPLAIN output so that the output matches what + would be obtained if this setting were turned off. @@ -4338,7 +4338,7 @@ SELECT * FROM parent WHERE key = 2400; log_destination (string) - log_destination configuration parameter + log_destination configuration parameter @@ -4351,13 +4351,13 @@ SELECT * FROM parent WHERE key = 2400; parameter to a list of desired log destinations separated by commas. The default is to log to stderr only. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - If csvlog is included in log_destination, + If csvlog is included in log_destination, log entries are output in comma separated - value (CSV) format, which is convenient for + value (CSV) format, which is convenient for loading logs into programs. See for details. must be enabled to generate @@ -4366,7 +4366,7 @@ SELECT * FROM parent WHERE key = 2400; When either stderr or csvlog are included, the file - current_logfiles is created to record the location + current_logfiles is created to record the location of the log file(s) currently in use by the logging collector and the associated logging destination. This provides a convenient way to find the logs currently in use by the instance. 
Here is an example of @@ -4378,10 +4378,10 @@ csvlog log/postgresql.csv current_logfiles is recreated when a new log file is created as an effect of rotation, and - when log_destination is reloaded. It is removed when + when log_destination is reloaded. It is removed when neither stderr nor csvlog are included - in log_destination, and when the logging collector is + in log_destination, and when the logging collector is disabled. @@ -4390,9 +4390,9 @@ csvlog log/postgresql.csv On most Unix systems, you will need to alter the configuration of your system's syslog daemon in order to make use of the syslog option for - log_destination. PostgreSQL + log_destination. PostgreSQL can log to syslog facilities - LOCAL0 through LOCAL7 (see LOCAL0 through LOCAL7 (see ), but the default syslog configuration on most platforms will discard all such messages. You will need to add something like: @@ -4404,7 +4404,7 @@ local0.* /var/log/postgresql On Windows, when you use the eventlog - option for log_destination, you should + option for log_destination, you should register an event source and its library with the operating system so that the Windows Event Viewer can display event log messages cleanly. @@ -4417,27 +4417,27 @@ local0.* /var/log/postgresql logging_collector (boolean) - logging_collector configuration parameter + logging_collector configuration parameter - This parameter enables the logging collector, which + This parameter enables the logging collector, which is a background process that captures log messages - sent to stderr and redirects them into log files. + sent to stderr and redirects them into log files. This approach is often more useful than - logging to syslog, since some types of messages - might not appear in syslog output. (One common + logging to syslog, since some types of messages + might not appear in syslog output. (One common example is dynamic-linker failure messages; another is error messages - produced by scripts such as archive_command.) + produced by scripts such as archive_command.) This parameter can only be set at server start. - It is possible to log to stderr without using the + It is possible to log to stderr without using the logging collector; the log messages will just go to wherever the - server's stderr is directed. However, that method is + server's stderr is directed. However, that method is only suitable for low log volumes, since it provides no convenient way to rotate log files. Also, on some platforms not using the logging collector can result in lost or garbled log output, because @@ -4451,7 +4451,7 @@ local0.* /var/log/postgresql The logging collector is designed to never lose messages. This means that in case of extremely high load, server processes could be blocked while trying to send additional log messages when the - collector has fallen behind. In contrast, syslog + collector has fallen behind. In contrast, syslog prefers to drop messages if it cannot write them, which means it may fail to log some messages in such cases but it will not block the rest of the system. @@ -4464,16 +4464,16 @@ local0.* /var/log/postgresql log_directory (string) - log_directory configuration parameter + log_directory configuration parameter - When logging_collector is enabled, + When logging_collector is enabled, this parameter determines the directory in which log files will be created. It can be specified as an absolute path, or relative to the cluster data directory. 
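Tying together log_destination, logging_collector, and log_directory, a minimal collector-based setup might look like this (the values are illustrative):

ALTER SYSTEM SET log_destination = 'stderr,csvlog';
ALTER SYSTEM SET logging_collector = on;  -- takes effect only at server restart
ALTER SYSTEM SET log_directory = 'log';  -- relative to the data directory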
- This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is log. @@ -4483,7 +4483,7 @@ local0.* /var/log/postgresql log_filename (string) - log_filename configuration parameter + log_filename configuration parameter @@ -4514,14 +4514,14 @@ local0.* /var/log/postgresql longer the case. - If CSV-format output is enabled in log_destination, - .csv will be appended to the timestamped + If CSV-format output is enabled in log_destination, + .csv will be appended to the timestamped log file name to create the file name for CSV-format output. - (If log_filename ends in .log, the suffix is + (If log_filename ends in .log, the suffix is replaced instead.) - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4530,7 +4530,7 @@ local0.* /var/log/postgresql log_file_mode (integer) - log_file_mode configuration parameter + log_file_mode configuration parameter @@ -4545,9 +4545,9 @@ local0.* /var/log/postgresql must start with a 0 (zero).) - The default permissions are 0600, meaning only the + The default permissions are 0600, meaning only the server owner can read or write the log files. The other commonly - useful setting is 0640, allowing members of the owner's + useful setting is 0640, allowing members of the owner's group to read the files. Note however that to make use of such a setting, you'll need to alter to store the files somewhere outside the cluster data directory. In @@ -4555,7 +4555,7 @@ local0.* /var/log/postgresql they might contain sensitive data. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4564,7 +4564,7 @@ local0.* /var/log/postgresql log_rotation_age (integer) - log_rotation_age configuration parameter + log_rotation_age configuration parameter @@ -4574,7 +4574,7 @@ local0.* /var/log/postgresql After this many minutes have elapsed, a new log file will be created. Set to zero to disable time-based creation of new log files. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4583,7 +4583,7 @@ local0.* /var/log/postgresql log_rotation_size (integer) - log_rotation_size configuration parameter + log_rotation_size configuration parameter @@ -4593,7 +4593,7 @@ local0.* /var/log/postgresql After this many kilobytes have been emitted into a log file, a new log file will be created. Set to zero to disable size-based creation of new log files. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4602,7 +4602,7 @@ local0.* /var/log/postgresql log_truncate_on_rotation (boolean) - log_truncate_on_rotation configuration parameter + log_truncate_on_rotation configuration parameter @@ -4617,7 +4617,7 @@ local0.* /var/log/postgresql a log_filename like postgresql-%H.log would result in generating twenty-four hourly log files and then cyclically overwriting them. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4635,7 +4635,7 @@ local0.* /var/log/postgresql log_truncate_on_rotation to on, log_rotation_age to 60, and log_rotation_size to 1000000. 
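Expressed as a sketch, the hourly-rotation combination just described would be:

ALTER SYSTEM SET log_filename = 'postgresql-%H.log';
ALTER SYSTEM SET log_truncate_on_rotation = on;
ALTER SYSTEM SET log_rotation_age = 60;  -- minutes
ALTER SYSTEM SET log_rotation_size = 1000000;  -- kilobytes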
- Including %M in log_filename allows + Including %M in log_filename allows any size-driven rotations that might occur to select a file name different from the hour's initial file name. @@ -4645,21 +4645,21 @@ local0.* /var/log/postgresql syslog_facility (enum) - syslog_facility configuration parameter + syslog_facility configuration parameter - When logging to syslog is enabled, this parameter + When logging to syslog is enabled, this parameter determines the syslog facility to be used. You can choose - from LOCAL0, LOCAL1, - LOCAL2, LOCAL3, LOCAL4, - LOCAL5, LOCAL6, LOCAL7; - the default is LOCAL0. See also the + from LOCAL0, LOCAL1, + LOCAL2, LOCAL3, LOCAL4, + LOCAL5, LOCAL6, LOCAL7; + the default is LOCAL0. See also the documentation of your system's syslog daemon. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4668,17 +4668,17 @@ local0.* /var/log/postgresql syslog_ident (string) - syslog_ident configuration parameter + syslog_ident configuration parameter - When logging to syslog is enabled, this parameter + When logging to syslog is enabled, this parameter determines the program name used to identify PostgreSQL messages in syslog logs. The default is postgres. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4687,7 +4687,7 @@ local0.* /var/log/postgresql syslog_sequence_numbers (boolean) - syslog_sequence_numbers configuration parameter + syslog_sequence_numbers configuration parameter @@ -4706,7 +4706,7 @@ local0.* /var/log/postgresql - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4715,12 +4715,12 @@ local0.* /var/log/postgresql syslog_split_messages (boolean) - syslog_split_messages configuration parameter + syslog_split_messages configuration parameter - When logging to syslog is enabled, this parameter + When logging to syslog is enabled, this parameter determines how messages are delivered to syslog. When on (the default), messages are split by lines, and long lines are split so that they will fit into 1024 bytes, which is a typical size limit for @@ -4739,7 +4739,7 @@ local0.* /var/log/postgresql - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4748,16 +4748,16 @@ local0.* /var/log/postgresql event_source (string) - event_source configuration parameter + event_source configuration parameter - When logging to event log is enabled, this parameter + When logging to event log is enabled, this parameter determines the program name used to identify PostgreSQL messages in the log. The default is PostgreSQL. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -4773,21 +4773,21 @@ local0.* /var/log/postgresql client_min_messages (enum) - client_min_messages configuration parameter + client_min_messages configuration parameter Controls which message levels are sent to the client. - Valid values are DEBUG5, - DEBUG4, DEBUG3, DEBUG2, - DEBUG1, LOG, NOTICE, - WARNING, ERROR, FATAL, - and PANIC. Each level + Valid values are DEBUG5, + DEBUG4, DEBUG3, DEBUG2, + DEBUG1, LOG, NOTICE, + WARNING, ERROR, FATAL, + and PANIC. Each level includes all the levels that follow it. 
The later the level, the fewer messages are sent. The default is - NOTICE. Note that LOG has a different - rank here than in log_min_messages. + NOTICE. Note that LOG has a different + rank here than in log_min_messages. @@ -4795,21 +4795,21 @@ local0.* /var/log/postgresql log_min_messages (enum) - log_min_messages configuration parameter + log_min_messages configuration parameter Controls which message levels are written to the server log. - Valid values are DEBUG5, DEBUG4, - DEBUG3, DEBUG2, DEBUG1, - INFO, NOTICE, WARNING, - ERROR, LOG, FATAL, and - PANIC. Each level includes all the levels that + Valid values are DEBUG5, DEBUG4, + DEBUG3, DEBUG2, DEBUG1, + INFO, NOTICE, WARNING, + ERROR, LOG, FATAL, and + PANIC. Each level includes all the levels that follow it. The later the level, the fewer messages are sent - to the log. The default is WARNING. Note that - LOG has a different rank here than in - client_min_messages. + to the log. The default is WARNING. Note that + LOG has a different rank here than in + client_min_messages. Only superusers can change this setting. @@ -4818,7 +4818,7 @@ local0.* /var/log/postgresql log_min_error_statement (enum) - log_min_error_statement configuration parameter + log_min_error_statement configuration parameter @@ -4846,7 +4846,7 @@ local0.* /var/log/postgresql log_min_duration_statement (integer) - log_min_duration_statement configuration parameter + log_min_duration_statement configuration parameter @@ -4872,9 +4872,9 @@ local0.* /var/log/postgresql When using this option together with , the text of statements that are logged because of - log_statement will not be repeated in the + log_statement will not be repeated in the duration log message. - If you are not using syslog, it is recommended + If you are not using syslog, it is recommended that you log the PID or session ID using so that you can link the statement message to the later @@ -4888,7 +4888,7 @@ local0.* /var/log/postgresql explains the message - severity levels used by PostgreSQL. If logging output + severity levels used by PostgreSQL. If logging output is sent to syslog or Windows' eventlog, the severity levels are translated as shown in the table. @@ -4901,73 +4901,73 @@ local0.* /var/log/postgresql Severity Usage - syslog - eventlog + syslog + eventlog - DEBUG1..DEBUG5 + DEBUG1..DEBUG5 Provides successively-more-detailed information for use by developers. - DEBUG - INFORMATION + DEBUG + INFORMATION - INFO + INFO Provides information implicitly requested by the user, - e.g., output from VACUUM VERBOSE. - INFO - INFORMATION + e.g., output from VACUUM VERBOSE. + INFO + INFORMATION - NOTICE + NOTICE Provides information that might be helpful to users, e.g., notice of truncation of long identifiers. - NOTICE - INFORMATION + NOTICE + INFORMATION - WARNING - Provides warnings of likely problems, e.g., COMMIT + WARNING + Provides warnings of likely problems, e.g., COMMIT outside a transaction block. - NOTICE - WARNING + NOTICE + WARNING - ERROR + ERROR Reports an error that caused the current command to abort. - WARNING - ERROR + WARNING + ERROR - LOG + LOG Reports information of interest to administrators, e.g., checkpoint activity. - INFO - INFORMATION + INFO + INFORMATION - FATAL + FATAL Reports an error that caused the current session to abort. - ERR - ERROR + ERR + ERROR - PANIC + PANIC Reports an error that caused all database sessions to abort. 
- CRIT - ERROR + CRIT + ERROR @@ -4982,15 +4982,15 @@ local0.* /var/log/postgresql application_name (string) - application_name configuration parameter + application_name configuration parameter The application_name can be any string of less than - NAMEDATALEN characters (64 characters in a standard build). + NAMEDATALEN characters (64 characters in a standard build). It is typically set by an application upon connection to the server. - The name will be displayed in the pg_stat_activity view + The name will be displayed in the pg_stat_activity view and included in CSV log entries. It can also be included in regular log entries via the parameter. Only printable ASCII characters may be used in the @@ -5003,17 +5003,17 @@ local0.* /var/log/postgresql debug_print_parse (boolean) - debug_print_parse configuration parameter + debug_print_parse configuration parameter debug_print_rewritten (boolean) - debug_print_rewritten configuration parameter + debug_print_rewritten configuration parameter debug_print_plan (boolean) - debug_print_plan configuration parameter + debug_print_plan configuration parameter @@ -5021,7 +5021,7 @@ local0.* /var/log/postgresql These parameters enable various debugging output to be emitted. When set, they print the resulting parse tree, the query rewriter output, or the execution plan for each executed query. - These messages are emitted at LOG message level, so by + These messages are emitted at LOG message level, so by default they will appear in the server log but will not be sent to the client. You can change that by adjusting and/or @@ -5034,7 +5034,7 @@ local0.* /var/log/postgresql debug_pretty_print (boolean) - debug_pretty_print configuration parameter + debug_pretty_print configuration parameter @@ -5043,7 +5043,7 @@ local0.* /var/log/postgresql produced by debug_print_parse, debug_print_rewritten, or debug_print_plan. This results in more readable - but much longer output than the compact format used when + but much longer output than the compact format used when it is off. It is on by default. @@ -5052,7 +5052,7 @@ local0.* /var/log/postgresql log_checkpoints (boolean) - log_checkpoints configuration parameter + log_checkpoints configuration parameter @@ -5060,7 +5060,7 @@ local0.* /var/log/postgresql Causes checkpoints and restartpoints to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is off. @@ -5069,7 +5069,7 @@ local0.* /var/log/postgresql log_connections (boolean) - log_connections configuration parameter + log_connections configuration parameter @@ -5078,14 +5078,14 @@ local0.* /var/log/postgresql as well as successful completion of client authentication. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. - The default is off. + The default is off. - Some client programs, like psql, attempt + Some client programs, like psql, attempt to connect twice while determining if a password is required, so - duplicate connection received messages do not + duplicate connection received messages do not necessarily indicate a problem. 
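Since application_name, described above, appears in both pg_stat_activity and the log, a client can label its own session; the name nightly_report is hypothetical:

SET application_name = 'nightly_report';
SELECT pid, application_name, state FROM pg_stat_activity;  -- the label is now visible to administrators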
@@ -5095,7 +5095,7 @@ local0.* /var/log/postgresql log_disconnections (boolean) - log_disconnections configuration parameter + log_disconnections configuration parameter @@ -5105,7 +5105,7 @@ local0.* /var/log/postgresql plus the duration of the session. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. - The default is off. + The default is off. @@ -5114,13 +5114,13 @@ local0.* /var/log/postgresql log_duration (boolean) - log_duration configuration parameter + log_duration configuration parameter Causes the duration of every completed statement to be logged. - The default is off. + The default is off. Only superusers can change this setting. @@ -5133,10 +5133,10 @@ local0.* /var/log/postgresql The difference between setting this option and setting to zero is that - exceeding log_min_duration_statement forces the text of + exceeding log_min_duration_statement forces the text of the query to be logged, but this option doesn't. Thus, if - log_duration is on and - log_min_duration_statement has a positive value, all + log_duration is on and + log_min_duration_statement has a positive value, all durations are logged but the query text is included only for statements exceeding the threshold. This behavior can be useful for gathering statistics in high-load installations. @@ -5148,18 +5148,18 @@ local0.* /var/log/postgresql log_error_verbosity (enum) - log_error_verbosity configuration parameter + log_error_verbosity configuration parameter Controls the amount of detail written in the server log for each - message that is logged. Valid values are TERSE, - DEFAULT, and VERBOSE, each adding more - fields to displayed messages. TERSE excludes - the logging of DETAIL, HINT, - QUERY, and CONTEXT error information. - VERBOSE output includes the SQLSTATE error + message that is logged. Valid values are TERSE, + DEFAULT, and VERBOSE, each adding more + fields to displayed messages. TERSE excludes + the logging of DETAIL, HINT, + QUERY, and CONTEXT error information. + VERBOSE output includes the SQLSTATE error code (see also ) and the source code file name, function name, and line number that generated the error. Only superusers can change this setting. @@ -5170,7 +5170,7 @@ local0.* /var/log/postgresql log_hostname (boolean) - log_hostname configuration parameter + log_hostname configuration parameter @@ -5179,7 +5179,7 @@ local0.* /var/log/postgresql connecting host. Turning this parameter on causes logging of the host name as well. Note that depending on your host name resolution setup this might impose a non-negligible performance penalty. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5188,14 +5188,14 @@ local0.* /var/log/postgresql log_line_prefix (string) - log_line_prefix configuration parameter + log_line_prefix configuration parameter - This is a printf-style string that is output at the + This is a printf-style string that is output at the beginning of each log line. - % characters begin escape sequences + % characters begin escape sequences that are replaced with status information as outlined below. Unrecognized escapes are ignored. Other characters are copied straight to the log line. Some escapes are @@ -5207,9 +5207,9 @@ local0.* /var/log/postgresql right with spaces to give it a minimum width, whereas a positive value will pad on the left. Padding can be useful to aid human readability in log files. 
- This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. The default is - '%m [%p] ' which logs a time stamp and the process ID. + '%m [%p] ' which logs a time stamp and the process ID. @@ -5310,19 +5310,19 @@ local0.* /var/log/postgresql
%% - Literal % + Literal % no - The %c escape prints a quasi-unique session identifier, + The %c escape prints a quasi-unique session identifier, consisting of two 4-byte hexadecimal numbers (without leading zeros) separated by a dot. The numbers are the process start time and the - process ID, so %c can also be used as a space saving way + process ID, so %c can also be used as a space saving way of printing those items. For example, to generate the session - identifier from pg_stat_activity, use this query: + identifier from pg_stat_activity, use this query: SELECT to_hex(trunc(EXTRACT(EPOCH FROM backend_start))::integer) || '.' || to_hex(pid) @@ -5333,7 +5333,7 @@ FROM pg_stat_activity; - If you set a nonempty value for log_line_prefix, + If you set a nonempty value for log_line_prefix, you should usually make its last character be a space, to provide visual separation from the rest of the log line. A punctuation character can be used too. @@ -5342,15 +5342,15 @@ FROM pg_stat_activity; - Syslog produces its own + Syslog produces its own time stamp and process ID information, so you probably do not want to - include those escapes if you are logging to syslog. + include those escapes if you are logging to syslog. - The %q escape is useful when including information that is + The %q escape is useful when including information that is only available in session (backend) context like user or database name. For example: @@ -5364,7 +5364,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_lock_waits (boolean) - log_lock_waits configuration parameter + log_lock_waits configuration parameter @@ -5372,7 +5372,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Controls whether a log message is produced when a session waits longer than to acquire a lock. This is useful in determining if lock waits are causing - poor performance. The default is off. + poor performance. The default is off. Only superusers can change this setting. @@ -5381,22 +5381,22 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_statement (enum) - log_statement configuration parameter + log_statement configuration parameter Controls which SQL statements are logged. Valid values are - none (off), ddl, mod, and - all (all statements). ddl logs all data definition - statements, such as CREATE, ALTER, and - DROP statements. mod logs all - ddl statements, plus data-modifying statements - such as INSERT, - UPDATE, DELETE, TRUNCATE, - and COPY FROM. - PREPARE, EXECUTE, and - EXPLAIN ANALYZE statements are also logged if their + none (off), ddl, mod, and + all (all statements). ddl logs all data definition + statements, such as CREATE, ALTER, and + DROP statements. mod logs all + ddl statements, plus data-modifying statements + such as INSERT, + UPDATE, DELETE, TRUNCATE, + and COPY FROM. + PREPARE, EXECUTE, and + EXPLAIN ANALYZE statements are also logged if their contained command is of an appropriate type. For clients using extended query protocol, logging occurs when an Execute message is received, and values of the Bind parameters are included @@ -5404,20 +5404,20 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' - The default is none. Only superusers can change this + The default is none. Only superusers can change this setting. Statements that contain simple syntax errors are not logged - even by the log_statement = all setting, + even by the log_statement = all setting, because the log message is emitted only after basic parsing has been done to determine the statement type. 
In the case of extended query protocol, this setting likewise does not log statements that fail before the Execute phase (i.e., during parse analysis or - planning). Set log_min_error_statement to - ERROR (or lower) to log such statements. + planning). Set log_min_error_statement to + ERROR (or lower) to log such statements. @@ -5426,14 +5426,14 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_replication_commands (boolean) - log_replication_commands configuration parameter + log_replication_commands configuration parameter Causes each replication command to be logged in the server log. See for more information about - replication command. The default value is off. + replication command. The default value is off. Only superusers can change this setting. @@ -5442,7 +5442,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_temp_files (integer) - log_temp_files configuration parameter + log_temp_files configuration parameter @@ -5463,7 +5463,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' log_timezone (string) - log_timezone configuration parameter + log_timezone configuration parameter @@ -5471,11 +5471,11 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Sets the time zone used for timestamps written in the server log. Unlike , this value is cluster-wide, so that all sessions will report timestamps consistently. - The built-in default is GMT, but that is typically - overridden in postgresql.conf; initdb + The built-in default is GMT, but that is typically + overridden in postgresql.conf; initdb will install a setting there corresponding to its system environment. See for more information. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5487,10 +5487,10 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Using CSV-Format Log Output - Including csvlog in the log_destination list + Including csvlog in the log_destination list provides a convenient way to import log files into a database table. This option emits log lines in comma-separated-values - (CSV) format, + (CSV) format, with these columns: time stamp with milliseconds, user name, @@ -5512,10 +5512,10 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' character count of the error position therein, error context, user query that led to the error (if any and enabled by - log_min_error_statement), + log_min_error_statement), character count of the error position therein, location of the error in the PostgreSQL source code - (if log_error_verbosity is set to verbose), + (if log_error_verbosity is set to verbose), and application name. Here is a sample table definition for storing CSV-format log output: @@ -5551,7 +5551,7 @@ CREATE TABLE postgres_log - To import a log file into this table, use the COPY FROM + To import a log file into this table, use the COPY FROM command: @@ -5567,7 +5567,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Set log_filename and - log_rotation_age to provide a consistent, + log_rotation_age to provide a consistent, predictable naming scheme for your log files. This lets you predict what the file name will be and know when an individual log file is complete and therefore ready to be imported. @@ -5584,7 +5584,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - Set log_truncate_on_rotation to on so + Set log_truncate_on_rotation to on so that old log data isn't mixed with the new in the same file. 
@@ -5593,14 +5593,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The table definition above includes a primary key specification. This is useful to protect against accidentally importing the same - information twice. The COPY command commits all of the + information twice. The COPY command commits all of the data it imports at one time, so any error will cause the entire import to fail. If you import a partial log file and later import the file again when it is complete, the primary key violation will cause the import to fail. Wait until the log is complete and closed before importing. This procedure will also protect against accidentally importing a partial line that hasn't been completely - written, which would also cause COPY to fail. + written, which would also cause COPY to fail. @@ -5613,7 +5613,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; These settings control how process titles of server processes are modified. Process titles are typically viewed using programs like - ps or, on Windows, Process Explorer. + ps or, on Windows, Process Explorer. See for details. @@ -5621,18 +5621,18 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; cluster_name (string) - cluster_name configuration parameter + cluster_name configuration parameter Sets the cluster name that appears in the process title for all server processes in this cluster. The name can be any string of less - than NAMEDATALEN characters (64 characters in a standard + than NAMEDATALEN characters (64 characters in a standard build). Only printable ASCII characters may be used in the cluster_name value. Other characters will be replaced with question marks (?). No name is shown - if this parameter is set to the empty string '' (which is + if this parameter is set to the empty string '' (which is the default). This parameter can only be set at server start. @@ -5641,15 +5641,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; update_process_title (boolean) - update_process_title configuration parameter + update_process_title configuration parameter Enables updating of the process title every time a new SQL command is received by the server. - This setting defaults to on on most platforms, but it - defaults to off on Windows due to that platform's larger + This setting defaults to on on most platforms, but it + defaults to off on Windows due to that platform's larger overhead for updating the process title. Only superusers can change this setting. @@ -5678,7 +5678,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_activities (boolean) - track_activities configuration parameter + track_activities configuration parameter @@ -5698,14 +5698,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_activity_query_size (integer) - track_activity_query_size configuration parameter + track_activity_query_size configuration parameter Specifies the number of bytes reserved to track the currently executing command for each active session, for the - pg_stat_activity.query field. + pg_stat_activity.query field. The default value is 1024. This parameter can only be set at server start. 
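Both cluster_name and track_activity_query_size take effect only at server start, so changes made with ALTER SYSTEM wait for a restart; the cluster name below is a hypothetical example:

ALTER SYSTEM SET cluster_name = 'prod_primary';  -- shown in every process title
ALTER SYSTEM SET track_activity_query_size = 4096;  -- bytes reserved per session for pg_stat_activity.query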
@@ -5715,7 +5715,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_counts (boolean) - track_counts configuration parameter + track_counts configuration parameter @@ -5731,7 +5731,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_io_timing (boolean) - track_io_timing configuration parameter + track_io_timing configuration parameter @@ -5743,7 +5743,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; measure the overhead of timing on your system. I/O timing information is displayed in , in the output of - when the BUFFERS option is + when the BUFFERS option is used, and by . Only superusers can change this setting. @@ -5753,7 +5753,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; track_functions (enum) - track_functions configuration parameter + track_functions configuration parameter @@ -5767,7 +5767,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - SQL-language functions that are simple enough to be inlined + SQL-language functions that are simple enough to be inlined into the calling query will not be tracked, regardless of this setting. @@ -5778,7 +5778,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; stats_temp_directory (string) - stats_temp_directory configuration parameter + stats_temp_directory configuration parameter @@ -5788,7 +5788,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; is pg_stat_tmp. Pointing this at a RAM-based file system will decrease physical I/O requirements and can lead to improved performance. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5804,29 +5804,29 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; log_statement_stats (boolean) - log_statement_stats configuration parameter + log_statement_stats configuration parameter log_parser_stats (boolean) - log_parser_stats configuration parameter + log_parser_stats configuration parameter log_planner_stats (boolean) - log_planner_stats configuration parameter + log_planner_stats configuration parameter log_executor_stats (boolean) - log_executor_stats configuration parameter + log_executor_stats configuration parameter For each query, output performance statistics of the respective module to the server log. This is a crude profiling - instrument, similar to the Unix getrusage() operating + instrument, similar to the Unix getrusage() operating system facility. log_statement_stats reports total statement statistics, while the others report per-module statistics. log_statement_stats cannot be enabled together with @@ -5850,7 +5850,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - These settings control the behavior of the autovacuum + These settings control the behavior of the autovacuum feature. Refer to for more information. Note that many of these settings can be overridden on a per-table basis; see autovacuum (boolean) - autovacuum configuration parameter + autovacuum configuration parameter @@ -5871,7 +5871,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum launcher daemon. This is on by default; however, must also be enabled for autovacuum to work. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; however, autovacuuming can be disabled for individual tables by changing table storage parameters. 
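As noted above, autovacuum can be disabled for a single table through its storage parameters; a sketch with a hypothetical table name:

ALTER TABLE measurements SET (autovacuum_enabled = false);  -- autovacuum workers now skip this table
ALTER TABLE measurements RESET (autovacuum_enabled);  -- return to the cluster-wide behavior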
@@ -5887,7 +5887,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; log_autovacuum_min_duration (integer) - log_autovacuum_min_duration configuration parameter + log_autovacuum_min_duration configuration parameter @@ -5902,7 +5902,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; logged if an autovacuum action is skipped due to the existence of a conflicting lock. Enabling this parameter can be helpful in tracking autovacuum activity. This parameter can only be set in - the postgresql.conf file or on the server command line; + the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -5912,7 +5912,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_max_workers (integer) - autovacuum_max_workers configuration parameter + autovacuum_max_workers configuration parameter @@ -5927,17 +5927,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_naptime (integer) - autovacuum_naptime configuration parameter + autovacuum_naptime configuration parameter Specifies the minimum delay between autovacuum runs on any given database. In each round the daemon examines the - database and issues VACUUM and ANALYZE commands + database and issues VACUUM and ANALYZE commands as needed for tables in that database. The delay is measured - in seconds, and the default is one minute (1min). - This parameter can only be set in the postgresql.conf + in seconds, and the default is one minute (1min). + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5946,15 +5946,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_vacuum_threshold (integer) - autovacuum_vacuum_threshold configuration parameter + autovacuum_vacuum_threshold configuration parameter Specifies the minimum number of updated or deleted tuples needed - to trigger a VACUUM in any one table. + to trigger a VACUUM in any one table. The default is 50 tuples. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -5965,15 +5965,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_analyze_threshold (integer) - autovacuum_analyze_threshold configuration parameter + autovacuum_analyze_threshold configuration parameter Specifies the minimum number of inserted, updated or deleted tuples - needed to trigger an ANALYZE in any one table. + needed to trigger an ANALYZE in any one table. The default is 50 tuples. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -5984,16 +5984,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_vacuum_scale_factor (floating point) - autovacuum_vacuum_scale_factor configuration parameter + autovacuum_vacuum_scale_factor configuration parameter Specifies a fraction of the table size to add to autovacuum_vacuum_threshold - when deciding whether to trigger a VACUUM. + when deciding whether to trigger a VACUUM. The default is 0.2 (20% of table size). 
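Together with autovacuum_vacuum_threshold this gives a trigger point of threshold plus scale factor times the number of tuples, so with the defaults a 1,000,000-row table is vacuumed once about 50 + 0.2 * 1,000,000 = 200,050 tuples are dead. A sketch that estimates the trigger for one hypothetical table, assuming the default settings:

SELECT relname, 50 + 0.2 * reltuples AS autovacuum_vacuum_trigger
FROM pg_class
WHERE relname = 'measurements';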
- This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6004,16 +6004,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_analyze_scale_factor (floating point) - autovacuum_analyze_scale_factor configuration parameter + autovacuum_analyze_scale_factor configuration parameter Specifies a fraction of the table size to add to autovacuum_analyze_threshold - when deciding whether to trigger an ANALYZE. + when deciding whether to trigger an ANALYZE. The default is 0.1 (10% of table size). - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6024,14 +6024,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_freeze_max_age (integer) - autovacuum_freeze_max_age configuration parameter + autovacuum_freeze_max_age configuration parameter Specifies the maximum age (in transactions) that a table's - pg_class.relfrozenxid field can - attain before a VACUUM operation is forced + pg_class.relfrozenxid field can + attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. @@ -6039,7 +6039,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Vacuum also allows removal of old files from the - pg_xact subdirectory, which is why the default + pg_xact subdirectory, which is why the default is a relatively low 200 million transactions. This parameter can only be set at server start, but the setting can be reduced for individual tables by @@ -6058,8 +6058,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Specifies the maximum age (in multixacts) that a table's - pg_class.relminmxid field can - attain before a VACUUM operation is forced to + pg_class.relminmxid field can + attain before a VACUUM operation is forced to prevent multixact ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. @@ -6067,7 +6067,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Vacuuming multixacts also allows removal of old files from the - pg_multixact/members and pg_multixact/offsets + pg_multixact/members and pg_multixact/offsets subdirectories, which is why the default is a relatively low 400 million multixacts. This parameter can only be set at server start, but the setting can @@ -6080,16 +6080,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_vacuum_cost_delay (integer) - autovacuum_vacuum_cost_delay configuration parameter + autovacuum_vacuum_cost_delay configuration parameter Specifies the cost delay value that will be used in automatic - VACUUM operations. If -1 is specified, the regular + VACUUM operations. If -1 is specified, the regular value will be used. The default value is 20 milliseconds. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. 
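To make the two vacuum-trigger settings concrete: autovacuum vacuums a table once its dead tuples exceed roughly autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, so with the defaults (50 and 0.2) a table of 100,000 rows is vacuumed after about 50 + 0.2 * 100000 = 20,050 dead tuples. A frequently updated table can be tuned more aggressively through its storage parameters (hot_queue is a hypothetical table name):

    ALTER TABLE hot_queue
        SET (autovacuum_vacuum_scale_factor = 0.01,
             autovacuum_vacuum_threshold = 1000);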
@@ -6100,19 +6100,19 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum_vacuum_cost_limit (integer) - autovacuum_vacuum_cost_limit configuration parameter + autovacuum_vacuum_cost_limit configuration parameter Specifies the cost limit value that will be used in automatic - VACUUM operations. If -1 is specified (which is the + VACUUM operations. If -1 is specified (which is the default), the regular value will be used. Note that the value is distributed proportionally among the running autovacuum workers, if there is more than one, so that the sum of the limits for each worker does not exceed the value of this variable. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6133,9 +6133,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; search_path (string) - search_path configuration parameter + search_path configuration parameter - pathfor schemas + pathfor schemas @@ -6151,32 +6151,32 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The value for search_path must be a comma-separated list of schema names. Any name that is not an existing schema, or is - a schema for which the user does not have USAGE + a schema for which the user does not have USAGE permission, is silently ignored. If one of the list items is the special name $user, then the schema having the name returned by - SESSION_USER is substituted, if there is such a schema - and the user has USAGE permission for it. + SESSION_USER is substituted, if there is such a schema + and the user has USAGE permission for it. (If not, $user is ignored.) - The system catalog schema, pg_catalog, is always + The system catalog schema, pg_catalog, is always searched, whether it is mentioned in the path or not. If it is mentioned in the path then it will be searched in the specified - order. If pg_catalog is not in the path then it will - be searched before searching any of the path items. + order. If pg_catalog is not in the path then it will + be searched before searching any of the path items. Likewise, the current session's temporary-table schema, - pg_temp_nnn, is always searched if it + pg_temp_nnn, is always searched if it exists. It can be explicitly listed in the path by using the - alias pg_temppg_temp. If it is not listed in the path then - it is searched first (even before pg_catalog). However, + alias pg_temppg_temp. If it is not listed in the path then + it is searched first (even before pg_catalog). However, the temporary schema is only searched for relation (table, view, sequence, etc) and data type names. It is never searched for function or operator names. @@ -6193,7 +6193,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The default value for this parameter is "$user", public. This setting supports shared use of a database (where no users - have private schemas, and all share use of public), + have private schemas, and all share use of public), private per-user schemas, and combinations of these. Other effects can be obtained by altering the default search path setting, either globally or per-user. @@ -6202,11 +6202,11 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The current effective value of the search path can be examined via the SQL function - current_schemas + current_schemas (see ). 
This is not quite the same as examining the value of search_path, since - current_schemas shows how the items + current_schemas shows how the items appearing in search_path were resolved. @@ -6219,20 +6219,20 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; row_security (boolean) - row_security configuration parameter + row_security configuration parameter This variable controls whether to raise an error in lieu of applying a - row security policy. When set to on, policies apply - normally. When set to off, queries fail which would - otherwise apply at least one policy. The default is on. - Change to off where limited row visibility could cause - incorrect results; for example, pg_dump makes that + row security policy. When set to on, policies apply + normally. When set to off, queries fail which would + otherwise apply at least one policy. The default is on. + Change to off where limited row visibility could cause + incorrect results; for example, pg_dump makes that change by default. This variable has no effect on roles which bypass every row security policy, to wit, superusers and roles with - the BYPASSRLS attribute. + the BYPASSRLS attribute. @@ -6245,14 +6245,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; default_tablespace (string) - default_tablespace configuration parameter + default_tablespace configuration parameter - tablespacedefault + tablespacedefault This variable specifies the default tablespace in which to create - objects (tables and indexes) when a CREATE command does + objects (tables and indexes) when a CREATE command does not explicitly specify a tablespace. @@ -6260,9 +6260,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The value is either the name of a tablespace, or an empty string to specify using the default tablespace of the current database. If the value does not match the name of any existing tablespace, - PostgreSQL will automatically use the default + PostgreSQL will automatically use the default tablespace of the current database. If a nondefault tablespace - is specified, the user must have CREATE privilege + is specified, the user must have CREATE privilege for it, or creation attempts will fail. @@ -6287,38 +6287,38 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; temp_tablespaces (string) - temp_tablespaces configuration parameter + temp_tablespaces configuration parameter - tablespacetemporary + tablespacetemporary This variable specifies tablespaces in which to create temporary objects (temp tables and indexes on temp tables) when a - CREATE command does not explicitly specify a tablespace. + CREATE command does not explicitly specify a tablespace. Temporary files for purposes such as sorting large data sets are also created in these tablespaces. The value is a list of names of tablespaces. When there is more than - one name in the list, PostgreSQL chooses a random + one name in the list, PostgreSQL chooses a random member of the list each time a temporary object is to be created; except that within a transaction, successively created temporary objects are placed in successive tablespaces from the list. If the selected element of the list is an empty string, - PostgreSQL will automatically use the default + PostgreSQL will automatically use the default tablespace of the current database instead. 
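A minimal sketch of the setting just described (the tablespace name fasttemp and its location are hypothetical):

    CREATE TABLESPACE fasttemp LOCATION '/mnt/ssd/pgtemp';
    SET temp_tablespaces = 'fasttemp';
    -- temporary tables and sort spill files now land in fasttemp
    CREATE TEMP TABLE scratch AS
        SELECT g AS n FROM generate_series(1, 1000000) g;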
- When temp_tablespaces is set interactively, specifying a + When temp_tablespaces is set interactively, specifying a nonexistent tablespace is an error, as is specifying a tablespace for - which the user does not have CREATE privilege. However, + which the user does not have CREATE privilege. However, when using a previously set value, nonexistent tablespaces are ignored, as are tablespaces for which the user lacks - CREATE privilege. In particular, this rule applies when - using a value set in postgresql.conf. + CREATE privilege. In particular, this rule applies when + using a value set in postgresql.conf. @@ -6336,18 +6336,18 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; check_function_bodies (boolean) - check_function_bodies configuration parameter + check_function_bodies configuration parameter - This parameter is normally on. When set to off, it + This parameter is normally on. When set to off, it disables validation of the function body string during . Disabling validation avoids side effects of the validation process and avoids false positives due to problems such as forward references. Set this parameter - to off before loading functions on behalf of other - users; pg_dump does so automatically. + to off before loading functions on behalf of other + users; pg_dump does so automatically. @@ -6359,7 +6359,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; setting default - default_transaction_isolation configuration parameter + default_transaction_isolation configuration parameter @@ -6386,14 +6386,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; setting default - default_transaction_read_only configuration parameter + default_transaction_read_only configuration parameter A read-only SQL transaction cannot alter non-temporary tables. This parameter controls the default read-only status of each new - transaction. The default is off (read/write). + transaction. The default is off (read/write). @@ -6409,12 +6409,12 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; setting default - default_transaction_deferrable configuration parameter + default_transaction_deferrable configuration parameter - When running at the serializable isolation level, + When running at the serializable isolation level, a deferrable read-only SQL transaction may be delayed before it is allowed to proceed. However, once it begins executing it does not incur any of the overhead required to ensure @@ -6427,7 +6427,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; This parameter controls the default deferrable status of each new transaction. It currently has no effect on read-write transactions or those operating at isolation levels lower - than serializable. The default is off. + than serializable. The default is off. @@ -6440,7 +6440,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; session_replication_role (enum) - session_replication_role configuration parameter + session_replication_role configuration parameter @@ -6448,8 +6448,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Controls firing of replication-related triggers and rules for the current session. Setting this variable requires superuser privilege and results in discarding any previously cached - query plans. Possible values are origin (the default), - replica and local. + query plans. Possible values are origin (the default), + replica and local. See for more information. 
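For example, a superuser can suppress ordinary trigger firing for the duration of a bulk load (big_table and the file path are hypothetical):

    BEGIN;
    SET LOCAL session_replication_role = replica;  -- ordinary triggers and rules do not fire
    COPY big_table FROM '/tmp/big_table.copy';
    COMMIT;                                        -- SET LOCAL reverts at transaction end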
@@ -6459,21 +6459,21 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; statement_timeout (integer) - statement_timeout configuration parameter + statement_timeout configuration parameter Abort any statement that takes more than the specified number of milliseconds, starting from the time the command arrives at the server - from the client. If log_min_error_statement is set to - ERROR or lower, the statement that timed out will also be + from the client. If log_min_error_statement is set to + ERROR or lower, the statement that timed out will also be logged. A value of zero (the default) turns this off. - Setting statement_timeout in - postgresql.conf is not recommended because it would + Setting statement_timeout in + postgresql.conf is not recommended because it would affect all sessions. @@ -6482,7 +6482,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; lock_timeout (integer) - lock_timeout configuration parameter + lock_timeout configuration parameter @@ -6491,24 +6491,24 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; milliseconds while attempting to acquire a lock on a table, index, row, or other database object. The time limit applies separately to each lock acquisition attempt. The limit applies both to explicit - locking requests (such as LOCK TABLE, or SELECT - FOR UPDATE without NOWAIT) and to implicitly-acquired - locks. If log_min_error_statement is set to - ERROR or lower, the statement that timed out will be + locking requests (such as LOCK TABLE, or SELECT + FOR UPDATE without NOWAIT) and to implicitly-acquired + locks. If log_min_error_statement is set to + ERROR or lower, the statement that timed out will be logged. A value of zero (the default) turns this off. - Unlike statement_timeout, this timeout can only occur - while waiting for locks. Note that if statement_timeout - is nonzero, it is rather pointless to set lock_timeout to + Unlike statement_timeout, this timeout can only occur + while waiting for locks. Note that if statement_timeout + is nonzero, it is rather pointless to set lock_timeout to the same or larger value, since the statement timeout would always trigger first. - Setting lock_timeout in - postgresql.conf is not recommended because it would + Setting lock_timeout in + postgresql.conf is not recommended because it would affect all sessions. @@ -6517,7 +6517,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; idle_in_transaction_session_timeout (integer) - idle_in_transaction_session_timeout configuration parameter + idle_in_transaction_session_timeout configuration parameter @@ -6537,21 +6537,21 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; vacuum_freeze_table_age (integer) - vacuum_freeze_table_age configuration parameter + vacuum_freeze_table_age configuration parameter - VACUUM performs an aggressive scan if the table's - pg_class.relfrozenxid field has reached + VACUUM performs an aggressive scan if the table's + pg_class.relfrozenxid field has reached the age specified by this setting. An aggressive scan differs from - a regular VACUUM in that it visits every page that might + a regular VACUUM in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million transactions. 
Although users can - set this value anywhere from zero to two billions, VACUUM + set this value anywhere from zero to two billions, VACUUM will silently limit the effective value to 95% of , so that a - periodical manual VACUUM has a chance to run before an + periodical manual VACUUM has a chance to run before an anti-wraparound autovacuum is launched for the table. For more information see . @@ -6562,17 +6562,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; vacuum_freeze_min_age (integer) - vacuum_freeze_min_age configuration parameter + vacuum_freeze_min_age configuration parameter - Specifies the cutoff age (in transactions) that VACUUM + Specifies the cutoff age (in transactions) that VACUUM should use to decide whether to freeze row versions while scanning a table. The default is 50 million transactions. Although users can set this value anywhere from zero to one billion, - VACUUM will silently limit the effective value to half + VACUUM will silently limit the effective value to half the value of , so that there is not an unreasonably short time between forced autovacuums. For more information see vacuum_multixact_freeze_table_age (integer) - vacuum_multixact_freeze_table_age configuration parameter + vacuum_multixact_freeze_table_age configuration parameter - VACUUM performs an aggressive scan if the table's - pg_class.relminmxid field has reached + VACUUM performs an aggressive scan if the table's + pg_class.relminmxid field has reached the age specified by this setting. An aggressive scan differs from - a regular VACUUM in that it visits every page that might + a regular VACUUM in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million multixacts. Although users can set this value anywhere from zero to two billions, - VACUUM will silently limit the effective value to 95% of + VACUUM will silently limit the effective value to 95% of , so that a - periodical manual VACUUM has a chance to run before an + periodical manual VACUUM has a chance to run before an anti-wraparound is launched for the table. For more information see . @@ -6608,17 +6608,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; vacuum_multixact_freeze_min_age (integer) - vacuum_multixact_freeze_min_age configuration parameter + vacuum_multixact_freeze_min_age configuration parameter - Specifies the cutoff age (in multixacts) that VACUUM + Specifies the cutoff age (in multixacts) that VACUUM should use to decide whether to replace multixact IDs with a newer transaction ID or multixact ID while scanning a table. The default is 5 million multixacts. Although users can set this value anywhere from zero to one billion, - VACUUM will silently limit the effective value to half + VACUUM will silently limit the effective value to half the value of , so that there is not an unreasonably short time between forced autovacuums. 
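To see how close tables are to the forced-freeze ages discussed above, the per-table transaction and multixact ages can be queried directly:

    SELECT relname,
           age(relfrozenxid)    AS xid_age,
           mxid_age(relminmxid) AS mxid_age
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY xid_age DESC
    LIMIT 5;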
@@ -6630,7 +6630,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; bytea_output (enum) - bytea_output configuration parameter + bytea_output configuration parameter @@ -6648,7 +6648,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; xmlbinary (enum) - xmlbinary configuration parameter + xmlbinary configuration parameter @@ -6676,10 +6676,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; xmloption (enum) - xmloption configuration parameter + xmloption configuration parameter - SET XML OPTION + SET XML OPTION XML option @@ -6709,16 +6709,16 @@ SET XML OPTION { DOCUMENT | CONTENT }; gin_pending_list_limit (integer) - gin_pending_list_limit configuration parameter + gin_pending_list_limit configuration parameter Sets the maximum size of the GIN pending list which is used - when fastupdate is enabled. If the list grows + when fastupdate is enabled. If the list grows larger than this maximum size, it is cleaned up by moving the entries in it to the main GIN data structure in bulk. - The default is four megabytes (4MB). This setting + The default is four megabytes (4MB). This setting can be overridden for individual GIN indexes by changing index storage parameters. See and @@ -6737,7 +6737,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; DateStyle (string) - DateStyle configuration parameter + DateStyle configuration parameter @@ -6745,16 +6745,16 @@ SET XML OPTION { DOCUMENT | CONTENT }; Sets the display format for date and time values, as well as the rules for interpreting ambiguous date input values. For historical reasons, this variable contains two independent - components: the output format specification (ISO, - Postgres, SQL, or German) + components: the output format specification (ISO, + Postgres, SQL, or German) and the input/output specification for year/month/day ordering - (DMY, MDY, or YMD). These - can be set separately or together. The keywords Euro - and European are synonyms for DMY; the - keywords US, NonEuro, and - NonEuropean are synonyms for MDY. See + (DMY, MDY, or YMD). These + can be set separately or together. The keywords Euro + and European are synonyms for DMY; the + keywords US, NonEuro, and + NonEuropean are synonyms for MDY. See for more information. The - built-in default is ISO, MDY, but + built-in default is ISO, MDY, but initdb will initialize the configuration file with a setting that corresponds to the behavior of the chosen lc_time locale. @@ -6765,28 +6765,28 @@ SET XML OPTION { DOCUMENT | CONTENT }; IntervalStyle (enum) - IntervalStyle configuration parameter + IntervalStyle configuration parameter Sets the display format for interval values. - The value sql_standard will produce + The value sql_standard will produce output matching SQL standard interval literals. - The value postgres (which is the default) will produce - output matching PostgreSQL releases prior to 8.4 + The value postgres (which is the default) will produce + output matching PostgreSQL releases prior to 8.4 when the - parameter was set to ISO. - The value postgres_verbose will produce output - matching PostgreSQL releases prior to 8.4 - when the DateStyle - parameter was set to non-ISO output. - The value iso_8601 will produce output matching the time - interval format with designators defined in section + parameter was set to ISO. + The value postgres_verbose will produce output + matching PostgreSQL releases prior to 8.4 + when the DateStyle + parameter was set to non-ISO output. 
+ The value iso_8601 will produce output matching the time + interval format with designators defined in section 4.4.3.2 of ISO 8601. - The IntervalStyle parameter also affects the + The IntervalStyle parameter also affects the interpretation of ambiguous interval input. See for more information. @@ -6796,15 +6796,15 @@ SET XML OPTION { DOCUMENT | CONTENT }; TimeZone (string) - TimeZone configuration parameter + TimeZone configuration parameter - time zone + time zone Sets the time zone for displaying and interpreting time stamps. - The built-in default is GMT, but that is typically - overridden in postgresql.conf; initdb + The built-in default is GMT, but that is typically + overridden in postgresql.conf; initdb will install a setting there corresponding to its system environment. See for more information. @@ -6814,14 +6814,14 @@ SET XML OPTION { DOCUMENT | CONTENT }; timezone_abbreviations (string) - timezone_abbreviations configuration parameter + timezone_abbreviations configuration parameter - time zone names + time zone names Sets the collection of time zone abbreviations that will be accepted - by the server for datetime input. The default is 'Default', + by the server for datetime input. The default is 'Default', which is a collection that works in most of the world; there are also 'Australia' and 'India', and other collections can be defined for a particular installation. @@ -6840,15 +6840,15 @@ SET XML OPTION { DOCUMENT | CONTENT }; display - extra_float_digits configuration parameter + extra_float_digits configuration parameter This parameter adjusts the number of digits displayed for - floating-point values, including float4, float8, + floating-point values, including float4, float8, and geometric data types. The parameter value is added to the - standard number of digits (FLT_DIG or DBL_DIG + standard number of digits (FLT_DIG or DBL_DIG as appropriate). The value can be set as high as 3, to include partially-significant digits; this is especially useful for dumping float data that needs to be restored exactly. Or it can be set @@ -6861,9 +6861,9 @@ SET XML OPTION { DOCUMENT | CONTENT }; client_encoding (string) - client_encoding configuration parameter + client_encoding configuration parameter - character set + character set @@ -6878,7 +6878,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; lc_messages (string) - lc_messages configuration parameter + lc_messages configuration parameter @@ -6910,7 +6910,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; lc_monetary (string) - lc_monetary configuration parameter + lc_monetary configuration parameter @@ -6929,7 +6929,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; lc_numeric (string) - lc_numeric configuration parameter + lc_numeric configuration parameter @@ -6948,7 +6948,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; lc_time (string) - lc_time configuration parameter + lc_time configuration parameter @@ -6967,7 +6967,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; default_text_search_config (string) - default_text_search_config configuration parameter + default_text_search_config configuration parameter @@ -6976,7 +6976,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; of the text search functions that do not have an explicit argument specifying the configuration. See for further information. 
- The built-in default is pg_catalog.simple, but + The built-in default is pg_catalog.simple, but initdb will initialize the configuration file with a setting that corresponds to the chosen lc_ctype locale, if a configuration @@ -6997,8 +6997,8 @@ SET XML OPTION { DOCUMENT | CONTENT }; server, in order to load additional functionality or achieve performance benefits. For example, a setting of '$libdir/mylib' would cause - mylib.so (or on some platforms, - mylib.sl) to be preloaded from the installation's standard + mylib.so (or on some platforms, + mylib.sl) to be preloaded from the installation's standard library directory. The differences between the settings are when they take effect and what privileges are required to change them. @@ -7007,14 +7007,14 @@ SET XML OPTION { DOCUMENT | CONTENT }; PostgreSQL procedural language libraries can be preloaded in this way, typically by using the syntax '$libdir/plXXX' where - XXX is pgsql, perl, - tcl, or python. + XXX is pgsql, perl, + tcl, or python. Only shared libraries specifically intended to be used with PostgreSQL can be loaded this way. Every PostgreSQL-supported library has - a magic block that is checked to guarantee compatibility. For + a magic block that is checked to guarantee compatibility. For this reason, non-PostgreSQL libraries cannot be loaded in this way. You might be able to use operating-system facilities such as LD_PRELOAD for that. @@ -7029,10 +7029,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; local_preload_libraries (string) - local_preload_libraries configuration parameter + local_preload_libraries configuration parameter - $libdir/plugins + $libdir/plugins @@ -7051,10 +7051,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; This option can be set by any user. Because of that, the libraries that can be loaded are restricted to those appearing in the - plugins subdirectory of the installation's + plugins subdirectory of the installation's standard library directory. (It is the database administrator's - responsibility to ensure that only safe libraries - are installed there.) Entries in local_preload_libraries + responsibility to ensure that only safe libraries + are installed there.) Entries in local_preload_libraries can specify this directory explicitly, for example $libdir/plugins/mylib, or just specify the library name — mylib would have @@ -7064,11 +7064,11 @@ SET XML OPTION { DOCUMENT | CONTENT }; The intent of this feature is to allow unprivileged users to load debugging or performance-measurement libraries into specific sessions - without requiring an explicit LOAD command. To that end, + without requiring an explicit LOAD command. To that end, it would be typical to set this parameter using the PGOPTIONS environment variable on the client or by using - ALTER ROLE SET. + ALTER ROLE SET. @@ -7083,7 +7083,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; session_preload_libraries (string) - session_preload_libraries configuration parameter + session_preload_libraries configuration parameter @@ -7104,10 +7104,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; The intent of this feature is to allow debugging or performance-measurement libraries to be loaded into specific sessions without an explicit - LOAD command being given. For + LOAD command being given. For example, could be enabled for all sessions under a given user name by setting this parameter - with ALTER ROLE SET. Also, this parameter can be changed + with ALTER ROLE SET. 
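For instance, following the auto_explain scenario mentioned above (the role name dev_user is hypothetical):

    ALTER ROLE dev_user SET session_preload_libraries = 'auto_explain';
    -- takes effect in dev_user's new sessions; no server restart required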
Also, this parameter can be changed without restarting the server (but changes only take effect when a new session is started), so it is easier to add new modules this way, even if they should apply to all sessions. @@ -7125,7 +7125,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; shared_preload_libraries (string) - shared_preload_libraries configuration parameter + shared_preload_libraries configuration parameter @@ -7182,9 +7182,9 @@ SET XML OPTION { DOCUMENT | CONTENT }; dynamic_library_path (string) - dynamic_library_path configuration parameter + dynamic_library_path configuration parameter - dynamic loading + dynamic loading @@ -7236,7 +7236,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' gin_fuzzy_search_limit (integer) - gin_fuzzy_search_limit configuration parameter + gin_fuzzy_search_limit configuration parameter @@ -7267,7 +7267,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' deadlock - deadlock_timeout configuration parameter + deadlock_timeout configuration parameter @@ -7280,7 +7280,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' just wait on the lock for a while before checking for a deadlock. Increasing this value reduces the amount of time wasted in needless deadlock checks, but slows down reporting of - real deadlock errors. The default is one second (1s), + real deadlock errors. The default is one second (1s), which is probably about the smallest value you would want in practice. On a heavily loaded server you might want to raise it. Ideally the setting should exceed your typical transaction time, @@ -7302,7 +7302,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_locks_per_transaction (integer) - max_locks_per_transaction configuration parameter + max_locks_per_transaction configuration parameter @@ -7315,7 +7315,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions - fit in the lock table. This is not the number of + fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. The default, 64, has historically proven sufficient, but you might need to raise this value if you have queries that touch many different @@ -7334,7 +7334,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_pred_locks_per_transaction (integer) - max_pred_locks_per_transaction configuration parameter + max_pred_locks_per_transaction configuration parameter @@ -7347,7 +7347,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions - fit in the lock table. This is not the number of + fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. 
The default, 64, has generally been sufficient in testing, but you might need to raise this value if you have clients that touch many different @@ -7360,7 +7360,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_pred_locks_per_relation (integer) - max_pred_locks_per_relation configuration parameter + max_pred_locks_per_relation configuration parameter @@ -7371,8 +7371,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' limit, while negative values mean divided by the absolute value of this setting. The default is -2, which keeps - the behavior from previous versions of PostgreSQL. - This parameter can only be set in the postgresql.conf + the behavior from previous versions of PostgreSQL. + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -7381,7 +7381,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_pred_locks_per_page (integer) - max_pred_locks_per_page configuration parameter + max_pred_locks_per_page configuration parameter @@ -7389,7 +7389,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' This controls how many rows on a single page can be predicate-locked before the lock is promoted to covering the whole page. The default is 2. This parameter can only be set in - the postgresql.conf file or on the server command line. + the postgresql.conf file or on the server command line. @@ -7408,62 +7408,62 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' array_nulls (boolean) - array_nulls configuration parameter + array_nulls configuration parameter This controls whether the array input parser recognizes - unquoted NULL as specifying a null array element. - By default, this is on, allowing array values containing - null values to be entered. However, PostgreSQL versions + unquoted NULL as specifying a null array element. + By default, this is on, allowing array values containing + null values to be entered. However, PostgreSQL versions before 8.2 did not support null values in arrays, and therefore would - treat NULL as specifying a normal array element with - the string value NULL. For backward compatibility with + treat NULL as specifying a normal array element with + the string value NULL. For backward compatibility with applications that require the old behavior, this variable can be - turned off. + turned off. Note that it is possible to create array values containing null values - even when this variable is off. + even when this variable is off. backslash_quote (enum) - stringsbackslash quotes + stringsbackslash quotes - backslash_quote configuration parameter + backslash_quote configuration parameter This controls whether a quote mark can be represented by - \' in a string literal. The preferred, SQL-standard way - to represent a quote mark is by doubling it ('') but - PostgreSQL has historically also accepted - \'. However, use of \' creates security risks + \' in a string literal. The preferred, SQL-standard way + to represent a quote mark is by doubling it ('') but + PostgreSQL has historically also accepted + \'. However, use of \' creates security risks because in some client character set encodings, there are multibyte characters in which the last byte is numerically equivalent to ASCII - \. If client-side code does escaping incorrectly then a + \. If client-side code does escaping incorrectly then a SQL-injection attack is possible. 
This risk can be prevented by making the server reject queries in which a quote mark appears to be escaped by a backslash. - The allowed values of backslash_quote are - on (allow \' always), - off (reject always), and - safe_encoding (allow only if client encoding does not - allow ASCII \ within a multibyte character). - safe_encoding is the default setting. + The allowed values of backslash_quote are + on (allow \' always), + off (reject always), and + safe_encoding (allow only if client encoding does not + allow ASCII \ within a multibyte character). + safe_encoding is the default setting. - Note that in a standard-conforming string literal, \ just - means \ anyway. This parameter only affects the handling of + Note that in a standard-conforming string literal, \ just + means \ anyway. This parameter only affects the handling of non-standard-conforming literals, including - escape string syntax (E'...'). + escape string syntax (E'...'). @@ -7471,7 +7471,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' default_with_oids (boolean) - default_with_oids configuration parameter + default_with_oids configuration parameter @@ -7481,9 +7481,9 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' newly-created tables, if neither WITH OIDS nor WITHOUT OIDS is specified. It also determines whether OIDs will be included in tables created by - SELECT INTO. The parameter is off - by default; in PostgreSQL 8.0 and earlier, it - was on by default. + SELECT INTO. The parameter is off + by default; in PostgreSQL 8.0 and earlier, it + was on by default. @@ -7499,21 +7499,21 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' escape_string_warning (boolean) - stringsescape warning + stringsescape warning - escape_string_warning configuration parameter + escape_string_warning configuration parameter - When on, a warning is issued if a backslash (\) - appears in an ordinary string literal ('...' + When on, a warning is issued if a backslash (\) + appears in an ordinary string literal ('...' syntax) and standard_conforming_strings is off. - The default is on. + The default is on. Applications that wish to use backslash as escape should be - modified to use escape string syntax (E'...'), + modified to use escape string syntax (E'...'), because the default behavior of ordinary strings is now to treat backslash as an ordinary character, per SQL standard. This variable can be enabled to help locate code that needs to be changed. @@ -7524,22 +7524,22 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' lo_compat_privileges (boolean) - lo_compat_privileges configuration parameter + lo_compat_privileges configuration parameter - In PostgreSQL releases prior to 9.0, large objects + In PostgreSQL releases prior to 9.0, large objects did not have access privileges and were, therefore, always readable - and writable by all users. Setting this variable to on + and writable by all users. Setting this variable to on disables the new privilege checks, for compatibility with prior - releases. The default is off. + releases. The default is off. Only superusers can change this setting. Setting this variable does not disable all security checks related to large objects — only those for which the default behavior has - changed in PostgreSQL 9.0. + changed in PostgreSQL 9.0. For example, lo_import() and lo_export() need superuser privileges regardless of this setting. 
@@ -7550,18 +7550,18 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' operator_precedence_warning (boolean) - operator_precedence_warning configuration parameter + operator_precedence_warning configuration parameter When on, the parser will emit a warning for any construct that might - have changed meanings since PostgreSQL 9.4 as a result + have changed meanings since PostgreSQL 9.4 as a result of changes in operator precedence. This is useful for auditing applications to see if precedence changes have broken anything; but it is not meant to be kept turned on in production, since it will warn about some perfectly valid, standard-compliant SQL code. - The default is off. + The default is off. @@ -7573,15 +7573,15 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' quote_all_identifiers (boolean) - quote_all_identifiers configuration parameter + quote_all_identifiers configuration parameter When the database generates SQL, force all identifiers to be quoted, even if they are not (currently) keywords. This will affect the - output of EXPLAIN as well as the results of functions - like pg_get_viewdef. See also the + output of EXPLAIN as well as the results of functions + like pg_get_viewdef. See also the option of and . @@ -7590,22 +7590,22 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' standard_conforming_strings (boolean) - stringsstandard conforming + stringsstandard conforming - standard_conforming_strings configuration parameter + standard_conforming_strings configuration parameter This controls whether ordinary string literals - ('...') treat backslashes literally, as specified in + ('...') treat backslashes literally, as specified in the SQL standard. Beginning in PostgreSQL 9.1, the default is - on (prior releases defaulted to off). + on (prior releases defaulted to off). Applications can check this parameter to determine how string literals will be processed. The presence of this parameter can also be taken as an indication - that the escape string syntax (E'...') is supported. + that the escape string syntax (E'...') is supported. Escape string syntax () should be used if an application desires backslashes to be treated as escape characters. @@ -7616,7 +7616,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' synchronize_seqscans (boolean) - synchronize_seqscans configuration parameter + synchronize_seqscans configuration parameter @@ -7625,13 +7625,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' other, so that concurrent scans read the same block at about the same time and hence share the I/O workload. When this is enabled, a scan might start in the middle of the table and then wrap - around the end to cover all rows, so as to synchronize with the + around the end to cover all rows, so as to synchronize with the activity of scans already in progress. This can result in unpredictable changes in the row ordering returned by queries that - have no ORDER BY clause. Setting this parameter to - off ensures the pre-8.3 behavior in which a sequential + have no ORDER BY clause. Setting this parameter to + off ensures the pre-8.3 behavior in which a sequential scan always starts from the beginning of the table. The default - is on. + is on. 
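A short session illustrating the literal syntaxes these compatibility settings govern:

    SELECT 'O''Reilly';   -- SQL-standard doubled quote; always safe
    SELECT E'O\'Reilly';  -- escape string syntax; backslash is an escape here
    SELECT 'C:\temp';     -- with standard_conforming_strings on, the backslash
                          -- is an ordinary character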
@@ -7645,31 +7645,31 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' transform_null_equals (boolean) - IS NULL + IS NULL - transform_null_equals configuration parameter + transform_null_equals configuration parameter - When on, expressions of the form expr = + When on, expressions of the form expr = NULL (or NULL = - expr) are treated as - expr IS NULL, that is, they - return true if expr evaluates to the null value, + expr) are treated as + expr IS NULL, that is, they + return true if expr evaluates to the null value, and false otherwise. The correct SQL-spec-compliant behavior of - expr = NULL is to always + expr = NULL is to always return null (unknown). Therefore this parameter defaults to - off. + off. However, filtered forms in Microsoft Access generate queries that appear to use - expr = NULL to test for + expr = NULL to test for null values, so if you use that interface to access the database you might want to turn this option on. Since expressions of the - form expr = NULL always + form expr = NULL always return the null value (using the SQL standard interpretation), they are not very useful and do not appear often in normal applications so this option does little harm in practice. But new users are @@ -7678,7 +7678,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' - Note that this option only affects the exact form = NULL, + Note that this option only affects the exact form = NULL, not other comparison operators or other expressions that are computationally equivalent to some expression involving the equals operator (such as IN). @@ -7703,7 +7703,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' exit_on_error (boolean) - exit_on_error configuration parameter + exit_on_error configuration parameter @@ -7718,16 +7718,16 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' restart_after_crash (boolean) - restart_after_crash configuration parameter + restart_after_crash configuration parameter - When set to true, which is the default, PostgreSQL + When set to true, which is the default, PostgreSQL will automatically reinitialize after a backend crash. Leaving this value set to true is normally the best way to maximize the availability of the database. However, in some circumstances, such as when - PostgreSQL is being invoked by clusterware, it may be + PostgreSQL is being invoked by clusterware, it may be useful to disable the restart so that the clusterware can gain control and take any actions it deems appropriate. @@ -7742,10 +7742,10 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Preset Options - The following parameters are read-only, and are determined + The following parameters are read-only, and are determined when PostgreSQL is compiled or when it is installed. As such, they have been excluded from the sample - postgresql.conf file. These options report + postgresql.conf file. These options report various aspects of PostgreSQL behavior that might be of interest to certain applications, particularly administrative front-ends. @@ -7756,13 +7756,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' block_size (integer) - block_size configuration parameter + block_size configuration parameter Reports the size of a disk block. It is determined by the value - of BLCKSZ when building the server. The default + of BLCKSZ when building the server. The default value is 8192 bytes. 
The meaning of some configuration variables (such as ) is influenced by block_size. See data_checksums (boolean) - data_checksums configuration parameter + data_checksums configuration parameter @@ -7788,7 +7788,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' debug_assertions (boolean) - debug_assertions configuration parameter + debug_assertions configuration parameter @@ -7808,13 +7808,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' integer_datetimes (boolean) - integer_datetimes configuration parameter + integer_datetimes configuration parameter - Reports whether PostgreSQL was built with support for - 64-bit-integer dates and times. As of PostgreSQL 10, + Reports whether PostgreSQL was built with support for + 64-bit-integer dates and times. As of PostgreSQL 10, this is always on. @@ -7823,7 +7823,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' lc_collate (string) - lc_collate configuration parameter + lc_collate configuration parameter @@ -7838,7 +7838,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' lc_ctype (string) - lc_ctype configuration parameter + lc_ctype configuration parameter @@ -7855,13 +7855,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_function_args (integer) - max_function_args configuration parameter + max_function_args configuration parameter Reports the maximum number of function arguments. It is determined by - the value of FUNC_MAX_ARGS when building the server. The + the value of FUNC_MAX_ARGS when building the server. The default value is 100 arguments. @@ -7870,14 +7870,14 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_identifier_length (integer) - max_identifier_length configuration parameter + max_identifier_length configuration parameter Reports the maximum identifier length. It is determined as one - less than the value of NAMEDATALEN when building - the server. The default value of NAMEDATALEN is + less than the value of NAMEDATALEN when building + the server. The default value of NAMEDATALEN is 64; therefore the default max_identifier_length is 63 bytes, which can be less than 63 characters when using multibyte encodings. @@ -7888,13 +7888,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' max_index_keys (integer) - max_index_keys configuration parameter + max_index_keys configuration parameter Reports the maximum number of index keys. It is determined by - the value of INDEX_MAX_KEYS when building the server. The + the value of INDEX_MAX_KEYS when building the server. The default value is 32 keys. @@ -7903,16 +7903,16 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' segment_size (integer) - segment_size configuration parameter + segment_size configuration parameter Reports the number of blocks (pages) that can be stored within a file - segment. It is determined by the value of RELSEG_SIZE + segment. It is determined by the value of RELSEG_SIZE when building the server. The maximum size of a segment file in bytes - is equal to segment_size multiplied by - block_size; by default this is 1GB. + is equal to segment_size multiplied by + block_size; by default this is 1GB. 
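Since these parameters are read-only, they are inspected rather than set; for example:

    SHOW block_size;
    SELECT current_setting('max_identifier_length');
    SELECT name, setting FROM pg_settings WHERE context = 'internal';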
@@ -7920,9 +7920,9 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' server_encoding (string) - server_encoding configuration parameter + server_encoding configuration parameter - character set + character set @@ -7937,13 +7937,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' server_version (string) - server_version configuration parameter + server_version configuration parameter Reports the version number of the server. It is determined by the - value of PG_VERSION when building the server. + value of PG_VERSION when building the server. @@ -7951,13 +7951,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' server_version_num (integer) - server_version_num configuration parameter + server_version_num configuration parameter Reports the version number of the server as an integer. It is determined - by the value of PG_VERSION_NUM when building the server. + by the value of PG_VERSION_NUM when building the server. @@ -7965,13 +7965,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' wal_block_size (integer) - wal_block_size configuration parameter + wal_block_size configuration parameter Reports the size of a WAL disk block. It is determined by the value - of XLOG_BLCKSZ when building the server. The default value + of XLOG_BLCKSZ when building the server. The default value is 8192 bytes. @@ -7980,14 +7980,14 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' wal_segment_size (integer) - wal_segment_size configuration parameter + wal_segment_size configuration parameter Reports the number of blocks (pages) in a WAL segment file. The total size of a WAL segment file in bytes is equal to - wal_segment_size multiplied by wal_block_size; + wal_segment_size multiplied by wal_block_size; by default this is 16MB. See for more information. @@ -8010,12 +8010,12 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Custom options have two-part names: an extension name, then a dot, then the parameter name proper, much like qualified names in SQL. An example - is plpgsql.variable_conflict. + is plpgsql.variable_conflict. Because custom options may need to be set in processes that have not - loaded the relevant extension module, PostgreSQL + loaded the relevant extension module, PostgreSQL will accept a setting for any two-part parameter name. Such variables are treated as placeholders and have no function until the module that defines them is loaded. When an extension module is loaded, it will add @@ -8034,7 +8034,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' to assist with recovery of severely damaged databases. There should be no reason to use them on a production database. As such, they have been excluded from the sample - postgresql.conf file. Note that many of these + postgresql.conf file. Note that many of these parameters require special source compilation flags to work at all. @@ -8073,7 +8073,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' post_auth_delay (integer) - post_auth_delay configuration parameter + post_auth_delay configuration parameter @@ -8090,7 +8090,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' pre_auth_delay (integer) - pre_auth_delay configuration parameter + pre_auth_delay configuration parameter @@ -8100,7 +8100,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' authentication procedure. 
This is intended to give developers an opportunity to attach to the server process with a debugger to trace down misbehavior in authentication. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -8109,7 +8109,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' trace_notify (boolean) - trace_notify configuration parameter + trace_notify configuration parameter @@ -8127,7 +8127,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' trace_recovery_messages (enum) - trace_recovery_messages configuration parameter + trace_recovery_messages configuration parameter @@ -8136,15 +8136,15 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' would not be logged. This parameter allows the user to override the normal setting of , but only for specific messages. This is intended for use in debugging Hot Standby. - Valid values are DEBUG5, DEBUG4, - DEBUG3, DEBUG2, DEBUG1, and - LOG. The default, LOG, does not affect + Valid values are DEBUG5, DEBUG4, + DEBUG3, DEBUG2, DEBUG1, and + LOG. The default, LOG, does not affect logging decisions at all. The other values cause recovery-related debug messages of that priority or higher to be logged as though they - had LOG priority; for common settings of - log_min_messages this results in unconditionally sending + had LOG priority; for common settings of + log_min_messages this results in unconditionally sending them to the server log. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -8153,7 +8153,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' trace_sort (boolean) - trace_sort configuration parameter + trace_sort configuration parameter @@ -8169,7 +8169,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' trace_locks (boolean) - trace_locks configuration parameter + trace_locks configuration parameter @@ -8210,7 +8210,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) trace_lwlocks (boolean) - trace_lwlocks configuration parameter + trace_lwlocks configuration parameter @@ -8230,7 +8230,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) trace_userlocks (boolean) - trace_userlocks configuration parameter + trace_userlocks configuration parameter @@ -8249,7 +8249,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) trace_lock_oidmin (integer) - trace_lock_oidmin configuration parameter + trace_lock_oidmin configuration parameter @@ -8268,7 +8268,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) trace_lock_table (integer) - trace_lock_table configuration parameter + trace_lock_table configuration parameter @@ -8286,7 +8286,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) debug_deadlocks (boolean) - debug_deadlocks configuration parameter + debug_deadlocks configuration parameter @@ -8305,7 +8305,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) log_btree_build_stats (boolean) - log_btree_build_stats configuration parameter + log_btree_build_stats configuration parameter @@ -8324,7 +8324,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) wal_consistency_checking (string) - wal_consistency_checking configuration parameter + wal_consistency_checking configuration parameter @@ -8344,10 +8344,10 
@@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) the feature. It can be set to all to check all records, or to a comma-separated list of resource managers to check only records originating from those resource managers. Currently, - the supported resource managers are heap, - heap2, btree, hash, - gin, gist, sequence, - spgist, brin, and generic. Only + the supported resource managers are heap, + heap2, btree, hash, + gin, gist, sequence, + spgist, brin, and generic. Only superusers can change this setting. @@ -8356,7 +8356,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) wal_debug (boolean) - wal_debug configuration parameter + wal_debug configuration parameter @@ -8372,7 +8372,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) ignore_checksum_failure (boolean) - ignore_checksum_failure configuration parameter + ignore_checksum_failure configuration parameter @@ -8381,15 +8381,15 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) Detection of a checksum failure during a read normally causes - PostgreSQL to report an error, aborting the current - transaction. Setting ignore_checksum_failure to on causes + PostgreSQL to report an error, aborting the current + transaction. Setting ignore_checksum_failure to on causes the system to ignore the failure (but still report a warning), and continue processing. This behavior may cause crashes, propagate - or hide corruption, or other serious problems. However, it may allow + or hide corruption, or other serious problems. However, it may allow you to get past the error and retrieve undamaged tuples that might still be present in the table if the block header is still sane. If the header is corrupt an error will be reported even if this option is enabled. The - default setting is off, and it can only be changed by a superuser. + default setting is off, and it can only be changed by a superuser. @@ -8397,16 +8397,16 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) zero_damaged_pages (boolean) - zero_damaged_pages configuration parameter + zero_damaged_pages configuration parameter Detection of a damaged page header normally causes - PostgreSQL to report an error, aborting the current - transaction. Setting zero_damaged_pages to on causes + PostgreSQL to report an error, aborting the current + transaction. Setting zero_damaged_pages to on causes the system to instead report a warning, zero out the damaged - page in memory, and continue processing. This behavior will destroy data, + page in memory, and continue processing. This behavior will destroy data, namely all the rows on the damaged page. However, it does allow you to get past the error and retrieve rows from any undamaged pages that might be present in the table. It is useful for recovering data if @@ -8415,7 +8415,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) data from the damaged pages of a table. Zeroed-out pages are not forced to disk so it is recommended to recreate the table or the index before turning this parameter off again. The - default setting is off, and it can only be changed + default setting is off, and it can only be changed by a superuser. 
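To make the interplay of these two recovery parameters concrete, here is a
minimal sketch of a salvage session; damaged_table and the output path are
placeholders, and both settings can only be changed by a superuser:

-- Salvage whatever rows are still readable; pages zeroed by
-- zero_damaged_pages lose their rows for good.
SET ignore_checksum_failure = on;
SET zero_damaged_pages = on;
COPY damaged_table TO '/tmp/salvaged_rows.copy';
SET zero_damaged_pages = off;
SET ignore_checksum_failure = off;

Copying the surviving rows out to a file, rather than repairing in place,
avoids doing further writes against the damaged table.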
@@ -8447,15 +8447,15 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) - shared_buffers = x + shared_buffers = x - log_min_messages = DEBUGx + log_min_messages = DEBUGx - datestyle = euro + datestyle = euro @@ -8464,69 +8464,69 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) , - enable_bitmapscan = off, - enable_hashjoin = off, - enable_indexscan = off, - enable_mergejoin = off, - enable_nestloop = off, - enable_indexonlyscan = off, - enable_seqscan = off, - enable_tidscan = off + enable_bitmapscan = off, + enable_hashjoin = off, + enable_indexscan = off, + enable_mergejoin = off, + enable_nestloop = off, + enable_indexonlyscan = off, + enable_seqscan = off, + enable_tidscan = off - fsync = off + fsync = off - listen_addresses = x + listen_addresses = x - listen_addresses = '*' + listen_addresses = '*' - unix_socket_directories = x + unix_socket_directories = x - ssl = on + ssl = on - max_connections = x + max_connections = x - allow_system_table_mods = on + allow_system_table_mods = on - port = x + port = x - ignore_system_indexes = on + ignore_system_indexes = on - log_statement_stats = on + log_statement_stats = on - work_mem = x + work_mem = x , , - log_parser_stats = on, - log_planner_stats = on, - log_executor_stats = on + log_parser_stats = on, + log_planner_stats = on, + log_executor_stats = on - post_auth_delay = x + post_auth_delay = x diff --git a/doc/src/sgml/contrib-spi.sgml b/doc/src/sgml/contrib-spi.sgml index 3287c18d27..32c7105cf6 100644 --- a/doc/src/sgml/contrib-spi.sgml +++ b/doc/src/sgml/contrib-spi.sgml @@ -9,7 +9,7 @@ - The spi module provides several workable examples + The spi module provides several workable examples of using SPI and triggers. While these functions are of some value in their own right, they are even more useful as examples to modify for your own purposes. The functions are general enough to be used @@ -26,15 +26,15 @@ refint — Functions for Implementing Referential Integrity - check_primary_key() and - check_foreign_key() are used to check foreign key constraints. + check_primary_key() and + check_foreign_key() are used to check foreign key constraints. (This functionality is long since superseded by the built-in foreign key mechanism, of course, but the module is still useful as an example.) - check_primary_key() checks the referencing table. - To use, create a BEFORE INSERT OR UPDATE trigger using this + check_primary_key() checks the referencing table. + To use, create a BEFORE INSERT OR UPDATE trigger using this function on a table referencing another table. Specify as the trigger arguments: the referencing table's column name(s) which form the foreign key, the referenced table name, and the column names in the referenced table @@ -43,14 +43,14 @@ - check_foreign_key() checks the referenced table. - To use, create a BEFORE DELETE OR UPDATE trigger using this + check_foreign_key() checks the referenced table. + To use, create a BEFORE DELETE OR UPDATE trigger using this function on a table referenced by other table(s). 
Specify as the trigger arguments: the number of referencing tables for which the function has to perform checking, the action if a referencing key is found - (cascade — to delete the referencing row, - restrict — to abort transaction if referencing keys - exist, setnull — to set referencing key fields to null), + (cascade — to delete the referencing row, + restrict — to abort transaction if referencing keys + exist, setnull — to set referencing key fields to null), the triggered table's column names which form the primary/unique key, then the referencing table name and column names (repeated for as many referencing tables as were specified by first argument). Note that the @@ -59,7 +59,7 @@ - There are examples in refint.example. + There are examples in refint.example. @@ -67,10 +67,10 @@ timetravel — Functions for Implementing Time Travel - Long ago, PostgreSQL had a built-in time travel feature + Long ago, PostgreSQL had a built-in time travel feature that kept the insert and delete times for each tuple. This can be emulated using these functions. To use these functions, - you must add to a table two columns of abstime type to store + you must add to a table two columns of abstime type to store the date when a tuple was inserted (start_date) and changed/deleted (stop_date): @@ -89,7 +89,7 @@ CREATE TABLE mytab ( When a new row is inserted, start_date should normally be set to - current time, and stop_date to infinity. The trigger + current time, and stop_date to infinity. The trigger will automatically substitute these values if the inserted data contains nulls in these columns. Generally, inserting explicit non-null data in these columns should only be done when re-loading @@ -97,7 +97,7 @@ CREATE TABLE mytab ( - Tuples with stop_date equal to infinity are valid + Tuples with stop_date equal to infinity are valid now, and can be modified. Tuples with a finite stop_date cannot be modified anymore — the trigger will prevent it. (If you need to do that, you can turn off time travel as shown below.) @@ -107,7 +107,7 @@ CREATE TABLE mytab ( For a modifiable row, on update only the stop_date in the tuple being updated will be changed (to current time) and a new tuple with the modified data will be inserted. Start_date in this new tuple will be set to current - time and stop_date to infinity. + time and stop_date to infinity. @@ -117,29 +117,29 @@ CREATE TABLE mytab ( To query for tuples valid now, include - stop_date = 'infinity' in the query's WHERE condition. + stop_date = 'infinity' in the query's WHERE condition. (You might wish to incorporate that in a view.) Similarly, you can query for tuples valid at any past time with suitable conditions on start_date and stop_date. - timetravel() is the general trigger function that supports - this behavior. Create a BEFORE INSERT OR UPDATE OR DELETE + timetravel() is the general trigger function that supports + this behavior. Create a BEFORE INSERT OR UPDATE OR DELETE trigger using this function on each time-traveled table. Specify two trigger arguments: the actual names of the start_date and stop_date columns. Optionally, you can specify one to three more arguments, which must refer - to columns of type text. The trigger will store the name of + to columns of type text. The trigger will store the name of the current user into the first of these columns during INSERT, the second column during UPDATE, and the third during DELETE. 
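For example, a minimal sketch of attaching the trigger to the mytab table
shown above; the trigger name is arbitrary:

CREATE TRIGGER mytab_timetravel
    BEFORE INSERT OR UPDATE OR DELETE ON mytab
    FOR EACH ROW
    EXECUTE PROCEDURE timetravel('start_date', 'stop_date');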
- set_timetravel() allows you to turn time-travel on or off for + set_timetravel() allows you to turn time-travel on or off for a table. - set_timetravel('mytab', 1) will turn TT ON for table mytab. - set_timetravel('mytab', 0) will turn TT OFF for table mytab. + set_timetravel('mytab', 1) will turn TT ON for table mytab. + set_timetravel('mytab', 0) will turn TT OFF for table mytab. In both cases the old status is reported. While TT is off, you can modify the start_date and stop_date columns freely. Note that the on/off status is local to the current database session — fresh sessions will @@ -147,12 +147,12 @@ CREATE TABLE mytab ( - get_timetravel() returns the TT state for a table without + get_timetravel() returns the TT state for a table without changing it. - There is an example in timetravel.example. + There is an example in timetravel.example. @@ -160,17 +160,17 @@ CREATE TABLE mytab ( autoinc — Functions for Autoincrementing Fields - autoinc() is a trigger that stores the next value of + autoinc() is a trigger that stores the next value of a sequence into an integer field. This has some overlap with the - built-in serial column feature, but it is not the same: - autoinc() will override attempts to substitute a + built-in serial column feature, but it is not the same: + autoinc() will override attempts to substitute a different field value during inserts, and optionally it can be used to increment the field during updates, too. - To use, create a BEFORE INSERT (or optionally BEFORE - INSERT OR UPDATE) trigger using this function. Specify two + To use, create a BEFORE INSERT (or optionally BEFORE + INSERT OR UPDATE) trigger using this function. Specify two trigger arguments: the name of the integer column to be modified, and the name of the sequence object that will supply values. (Actually, you can specify any number of pairs of such names, if @@ -178,7 +178,7 @@ CREATE TABLE mytab ( - There is an example in autoinc.example. + There is an example in autoinc.example. @@ -187,19 +187,19 @@ CREATE TABLE mytab ( insert_username — Functions for Tracking Who Changed a Table - insert_username() is a trigger that stores the current + insert_username() is a trigger that stores the current user's name into a text field. This can be useful for tracking who last modified a particular row within a table. - To use, create a BEFORE INSERT and/or UPDATE + To use, create a BEFORE INSERT and/or UPDATE trigger using this function. Specify a single trigger argument: the name of the text column to be modified. - There is an example in insert_username.example. + There is an example in insert_username.example. @@ -208,21 +208,21 @@ CREATE TABLE mytab ( moddatetime — Functions for Tracking Last Modification Time - moddatetime() is a trigger that stores the current - time into a timestamp field. This can be useful for tracking + moddatetime() is a trigger that stores the current + time into a timestamp field. This can be useful for tracking the last modification time of a particular row within a table. - To use, create a BEFORE UPDATE + To use, create a BEFORE UPDATE trigger using this function. Specify a single trigger argument: the name of the column to be modified. - The column must be of type timestamp or timestamp with - time zone. + The column must be of type timestamp or timestamp with + time zone. - There is an example in moddatetime.example. + There is an example in moddatetime.example. 
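To illustrate the moddatetime setup just described, a minimal sketch; the
orders table and its mtime column are invented for the example, and the
tracked column must be of type timestamp or timestamp with time zone:

CREATE TABLE orders (
    id     integer,
    note   text,
    mtime  timestamp
);

CREATE TRIGGER orders_moddatetime
    BEFORE UPDATE ON orders
    FOR EACH ROW
    EXECUTE PROCEDURE moddatetime('mtime');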
diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml index f32b8a81a2..7dd203e9cd 100644 --- a/doc/src/sgml/contrib.sgml +++ b/doc/src/sgml/contrib.sgml @@ -6,7 +6,7 @@ This appendix and the next one contain information regarding the modules that can be found in the contrib directory of the - PostgreSQL distribution. + PostgreSQL distribution. These include porting tools, analysis utilities, and plug-in features that are not part of the core PostgreSQL system, mainly because they address a limited audience or are too experimental @@ -41,54 +41,54 @@ make installcheck - once you have a PostgreSQL server running. + once you have a PostgreSQL server running. - If you are using a pre-packaged version of PostgreSQL, + If you are using a pre-packaged version of PostgreSQL, these modules are typically made available as a separate subpackage, - such as postgresql-contrib. + such as postgresql-contrib. Many modules supply new user-defined functions, operators, or types. To make use of one of these modules, after you have installed the code you need to register the new SQL objects in the database system. - In PostgreSQL 9.1 and later, this is done by executing + In PostgreSQL 9.1 and later, this is done by executing a command. In a fresh database, you can simply do -CREATE EXTENSION module_name; +CREATE EXTENSION module_name; This command must be run by a database superuser. This registers the new SQL objects in the current database only, so you need to run this command in each database that you want the module's facilities to be available in. Alternatively, run it in - database template1 so that the extension will be copied into + database template1 so that the extension will be copied into subsequently-created databases by default. Many modules allow you to install their objects in a schema of your choice. To do that, add SCHEMA - schema_name to the CREATE EXTENSION + schema_name to the CREATE EXTENSION command. By default, the objects will be placed in your current creation - target schema, typically public. + target schema, typically public. If your database was brought forward by dump and reload from a pre-9.1 - version of PostgreSQL, and you had been using the pre-9.1 + version of PostgreSQL, and you had been using the pre-9.1 version of the module in it, you should instead do -CREATE EXTENSION module_name FROM unpackaged; +CREATE EXTENSION module_name FROM unpackaged; This will update the pre-9.1 objects of the module into a proper - extension object. Future updates to the module will be + extension object. Future updates to the module will be managed by . For more information about extension updates, see . @@ -163,7 +163,7 @@ pages. This appendix and the previous one contain information regarding the modules that can be found in the contrib directory of the - PostgreSQL distribution. See for + PostgreSQL distribution. See for more information about the contrib section in general and server extensions and plug-ins found in contrib specifically. diff --git a/doc/src/sgml/cube.sgml b/doc/src/sgml/cube.sgml index 1ffc40f1a5..46d8e4eb8f 100644 --- a/doc/src/sgml/cube.sgml +++ b/doc/src/sgml/cube.sgml @@ -8,7 +8,7 @@ - This module implements a data type cube for + This module implements a data type cube for representing multidimensional cubes. @@ -17,8 +17,8 @@ shows the valid external - representations for the cube - type. x, y, etc. denote + representations for the cube + type. x, y, etc. denote floating-point numbers. 
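Before experimenting with the representations listed in the table below, the
module must be registered in the current database, following the
extension-loading steps described above; a minimal sketch:

CREATE EXTENSION cube;
SELECT '(1,2),(3,4)'::cube;   -- a 2-D cube given by two opposite corners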
@@ -34,43 +34,43 @@ - x + x A one-dimensional point (or, zero-length one-dimensional interval) - (x) + (x) Same as above - x1,x2,...,xn + x1,x2,...,xn A point in n-dimensional space, represented internally as a zero-volume cube - (x1,x2,...,xn) + (x1,x2,...,xn) Same as above - (x),(y) - A one-dimensional interval starting at x and ending at y or vice versa; the + (x),(y) + A one-dimensional interval starting at x and ending at y or vice versa; the order does not matter - [(x),(y)] + [(x),(y)] Same as above - (x1,...,xn),(y1,...,yn) + (x1,...,xn),(y1,...,yn) An n-dimensional cube represented by a pair of its diagonally opposite corners - [(x1,...,xn),(y1,...,yn)] + [(x1,...,xn),(y1,...,yn)] Same as above @@ -79,17 +79,17 @@ It does not matter which order the opposite corners of a cube are - entered in. The cube functions + entered in. The cube functions automatically swap values if needed to create a uniform - lower left — upper right internal representation. - When the corners coincide, cube stores only one corner - along with an is point flag to avoid wasting space. + lower left — upper right internal representation. + When the corners coincide, cube stores only one corner + along with an is point flag to avoid wasting space. White space is ignored on input, so - [(x),(y)] is the same as - [ ( x ), ( y ) ]. + [(x),(y)] is the same as + [ ( x ), ( y ) ]. @@ -107,7 +107,7 @@ shows the operators provided for - type cube. + type cube. @@ -123,91 +123,91 @@ - a = b - boolean + a = b + boolean The cubes a and b are identical. - a && b - boolean + a && b + boolean The cubes a and b overlap. - a @> b - boolean + a @> b + boolean The cube a contains the cube b. - a <@ b - boolean + a <@ b + boolean The cube a is contained in the cube b. - a < b - boolean + a < b + boolean The cube a is less than the cube b. - a <= b - boolean + a <= b + boolean The cube a is less than or equal to the cube b. - a > b - boolean + a > b + boolean The cube a is greater than the cube b. - a >= b - boolean + a >= b + boolean The cube a is greater than or equal to the cube b. - a <> b - boolean + a <> b + boolean The cube a is not equal to the cube b. - a -> n - float8 - Get n-th coordinate of cube (counting from 1). + a -> n + float8 + Get n-th coordinate of cube (counting from 1). - a ~> n - float8 + a ~> n + float8 - Get n-th coordinate in normalized cube + Get n-th coordinate in normalized cube representation, in which the coordinates have been rearranged into - the form lower left — upper right; that is, the + the form lower left — upper right; that is, the smaller endpoint along each dimension appears first. - a <-> b - float8 + a <-> b + float8 Euclidean distance between a and b. - a <#> b - float8 + a <#> b + float8 Taxicab (L-1 metric) distance between a and b. - a <=> b - float8 + a <=> b + float8 Chebyshev (L-inf metric) distance between a and b. @@ -216,35 +216,35 @@
- (Before PostgreSQL 8.2, the containment operators @> and <@ were - respectively called @ and ~. These names are still available, but are + (Before PostgreSQL 8.2, the containment operators @> and <@ were + respectively called @ and ~. These names are still available, but are deprecated and will eventually be retired. Notice that the old names are reversed from the convention formerly followed by the core geometric data types!) - The scalar ordering operators (<, >=, etc) + The scalar ordering operators (<, >=, etc) do not make a lot of sense for any practical purpose but sorting. These operators first compare the first coordinates, and if those are equal, compare the second coordinates, etc. They exist mainly to support the - b-tree index operator class for cube, which can be useful for - example if you would like a UNIQUE constraint on a cube column. + b-tree index operator class for cube, which can be useful for + example if you would like a UNIQUE constraint on a cube column. - The cube module also provides a GiST index operator class for - cube values. - A cube GiST index can be used to search for values using the - =, &&, @>, and - <@ operators in WHERE clauses. + The cube module also provides a GiST index operator class for + cube values. + A cube GiST index can be used to search for values using the + =, &&, @>, and + <@ operators in WHERE clauses. - In addition, a cube GiST index can be used to find nearest + In addition, a cube GiST index can be used to find nearest neighbors using the metric operators - <->, <#>, and - <=> in ORDER BY clauses. + <->, <#>, and + <=> in ORDER BY clauses. For example, the nearest neighbor of the 3-D point (0.5, 0.5, 0.5) could be found efficiently with: @@ -253,7 +253,7 @@ SELECT c FROM test ORDER BY c <-> cube(array[0.5,0.5,0.5]) LIMIT 1; - The ~> operator can also be used in this way to + The ~> operator can also be used in this way to efficiently retrieve the first few values sorted by a selected coordinate. For example, to get the first few cubes ordered by the first coordinate (lower left corner) ascending one could use the following query: @@ -365,7 +365,7 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; cube_ll_coord(cube, integer) float8 - Returns the n-th coordinate value for the lower + Returns the n-th coordinate value for the lower left corner of the cube. @@ -376,7 +376,7 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; cube_ur_coord(cube, integer) float8 - Returns the n-th coordinate value for the + Returns the n-th coordinate value for the upper right corner of the cube. @@ -412,9 +412,9 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; desired. - cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[2]) == '(3),(7)' + cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[2]) == '(3),(7)' cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]) == - '(5,3,1,1),(8,7,6,6)' + '(5,3,1,1),(8,7,6,6)' @@ -440,24 +440,24 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; cube_enlarge(c cube, r double, n integer) cube Increases the size of the cube by the specified - radius r in at least n dimensions. + radius r in at least n dimensions. If the radius is negative the cube is shrunk instead. - All defined dimensions are changed by the radius r. - Lower-left coordinates are decreased by r and - upper-right coordinates are increased by r. If a + All defined dimensions are changed by the radius r. + Lower-left coordinates are decreased by r and + upper-right coordinates are increased by r. 
If a lower-left coordinate is increased to more than the corresponding
- upper-right coordinate (this can only happen when r
+ < 0) then both coordinates are set to their average.
- If n is greater than the number of defined dimensions
- and the cube is being enlarged (r > 0), then extra
- dimensions are added to make n altogether;
+ If n is greater than the number of defined dimensions
+ and the cube is being enlarged (r > 0), then extra
+ dimensions are added to make n altogether;
0 is used as the initial value for the extra coordinates. This function is useful for creating bounding boxes around a point for searching for nearby points.
cube_enlarge('(1,2),(3,4)', 0.5, 3) ==
- '(0.5,1.5,-0.5),(3.5,4.5,0.5)'
+ '(0.5,1.5,-0.5),(3.5,4.5,0.5)'
@@ -523,13 +523,13 @@ t
Notes
- For examples of usage, see the regression test sql/cube.sql.
+ For examples of usage, see the regression test sql/cube.sql.
To make it harder for people to break things, there is a limit of 100 on the number of dimensions of cubes. This is set
- in cubedata.h if you need something bigger.
+ in cubedata.h if you need something bigger.
diff --git a/doc/src/sgml/custom-scan.sgml b/doc/src/sgml/custom-scan.sgml
index 9d1ca7bfe1..a46641674f 100644
--- a/doc/src/sgml/custom-scan.sgml
+++ b/doc/src/sgml/custom-scan.sgml
@@ -9,9 +9,9 @@
- PostgreSQL supports a set of experimental facilities which
+ PostgreSQL supports a set of experimental facilities which
are intended to allow extension modules to add new scan types to the system.
- Unlike a foreign data wrapper, which is only
+ Unlike a foreign data wrapper, which is only
responsible for knowing how to scan its own foreign tables, a custom scan provider can provide an alternative method of scanning any relation in the system. Typically, the motivation for writing a custom scan provider will
@@ -51,9 +51,9 @@ extern PGDLLIMPORT set_rel_pathlist_hook_type set_rel_pathlist_hook;
Although this hook function can be used to examine, modify, or remove paths generated by the core system, a custom scan provider will typically
- confine itself to generating CustomPath objects and adding
- them to rel using add_path. The custom scan
- provider is responsible for initializing the CustomPath
+ confine itself to generating CustomPath objects and adding
+ them to rel using add_path. The custom scan
+ provider is responsible for initializing the CustomPath
object, which is declared like this:
typedef struct CustomPath
{
    Path        path;
    uint32      flags;
    List       *custom_paths;
    List       *custom_private;
    const CustomPathMethods *methods;
} CustomPath;
@@ -68,22 +68,22 @@ typedef struct CustomPath
- path must be initialized as for any other path, including
+ path must be initialized as for any other path, including
the row-count estimate, start and total cost, and sort ordering provided
- by this path. flags is a bit mask, which should include
- CUSTOMPATH_SUPPORT_BACKWARD_SCAN if the custom path can support
- a backward scan and CUSTOMPATH_SUPPORT_MARK_RESTORE if it
+ by this path. flags is a bit mask, which should include
+ CUSTOMPATH_SUPPORT_BACKWARD_SCAN if the custom path can support
+ a backward scan and CUSTOMPATH_SUPPORT_MARK_RESTORE if it
can support mark and restore. Both capabilities are optional.
- An optional custom_paths is a list of Path
+ An optional custom_paths is a list of Path
nodes used by this custom-path node; these will be transformed into
- Plan nodes by planner.
- custom_private can be used to store the custom path's
+ Plan nodes by planner.
+ custom_private can be used to store the custom path's
private data.
Private data should be stored in a form that can be handled - by nodeToString, so that debugging routines that attempt to - print the custom path will work as designed. methods must + by nodeToString, so that debugging routines that attempt to + print the custom path will work as designed. methods must point to a (usually statically allocated) object implementing the required custom path methods, of which there is currently only one. The - LibraryName and SymbolName fields must also + LibraryName and SymbolName fields must also be initialized so that the dynamic loader can resolve them to locate the method table. @@ -93,7 +93,7 @@ typedef struct CustomPath relations, such a path must produce the same output as would normally be produced by the join it replaces. To do this, the join provider should set the following hook, and then within the hook function, - create CustomPath path(s) for the join relation. + create CustomPath path(s) for the join relation. typedef void (*set_join_pathlist_hook_type) (PlannerInfo *root, RelOptInfo *joinrel, @@ -122,7 +122,7 @@ Plan *(*PlanCustomPath) (PlannerInfo *root, List *custom_plans); Convert a custom path to a finished plan. The return value will generally - be a CustomScan object, which the callback must allocate and + be a CustomScan object, which the callback must allocate and initialize. See for more details. @@ -150,45 +150,45 @@ typedef struct CustomScan - scan must be initialized as for any other scan, including + scan must be initialized as for any other scan, including estimated costs, target lists, qualifications, and so on. - flags is a bit mask with the same meaning as in - CustomPath. - custom_plans can be used to store child - Plan nodes. - custom_exprs should be used to + flags is a bit mask with the same meaning as in + CustomPath. + custom_plans can be used to store child + Plan nodes. + custom_exprs should be used to store expression trees that will need to be fixed up by - setrefs.c and subselect.c, while - custom_private should be used to store other private data + setrefs.c and subselect.c, while + custom_private should be used to store other private data that is only used by the custom scan provider itself. - custom_scan_tlist can be NIL when scanning a base + custom_scan_tlist can be NIL when scanning a base relation, indicating that the custom scan returns scan tuples that match the base relation's row type. Otherwise it is a target list describing - the actual scan tuples. custom_scan_tlist must be + the actual scan tuples. custom_scan_tlist must be provided for joins, and could be provided for scans if the custom scan provider can compute some non-Var expressions. - custom_relids is set by the core code to the set of + custom_relids is set by the core code to the set of relations (range table indexes) that this scan node handles; except when this scan is replacing a join, it will have only one member. - methods must point to a (usually statically allocated) + methods must point to a (usually statically allocated) object implementing the required custom scan methods, which are further detailed below. - When a CustomScan scans a single relation, - scan.scanrelid must be the range table index of the table - to be scanned. When it replaces a join, scan.scanrelid + When a CustomScan scans a single relation, + scan.scanrelid must be the range table index of the table + to be scanned. When it replaces a join, scan.scanrelid should be zero. 
- Plan trees must be able to be duplicated using copyObject, - so all the data stored within the custom fields must consist of + Plan trees must be able to be duplicated using copyObject, + so all the data stored within the custom fields must consist of nodes that that function can handle. Furthermore, custom scan providers cannot substitute a larger structure that embeds - a CustomScan for the structure itself, as would be possible - for a CustomPath or CustomScanState. + a CustomScan for the structure itself, as would be possible + for a CustomPath or CustomScanState. @@ -197,14 +197,14 @@ typedef struct CustomScan Node *(*CreateCustomScanState) (CustomScan *cscan); - Allocate a CustomScanState for this - CustomScan. The actual allocation will often be larger than - required for an ordinary CustomScanState, because many + Allocate a CustomScanState for this + CustomScan. The actual allocation will often be larger than + required for an ordinary CustomScanState, because many providers will wish to embed that as the first field of a larger structure. - The value returned must have the node tag and methods + The value returned must have the node tag and methods set appropriately, but other fields should be left as zeroes at this - stage; after ExecInitCustomScan performs basic initialization, - the BeginCustomScan callback will be invoked to give the + stage; after ExecInitCustomScan performs basic initialization, + the BeginCustomScan callback will be invoked to give the custom scan provider a chance to do whatever else is needed.
@@ -214,8 +214,8 @@ Node *(*CreateCustomScanState) (CustomScan *cscan); Executing Custom Scans - When a CustomScan is executed, its execution state is - represented by a CustomScanState, which is declared as + When a CustomScan is executed, its execution state is + represented by a CustomScanState, which is declared as follows: typedef struct CustomScanState @@ -228,15 +228,15 @@ typedef struct CustomScanState - ss is initialized as for any other scan state, + ss is initialized as for any other scan state, except that if the scan is for a join rather than a base relation, - ss.ss_currentRelation is left NULL. - flags is a bit mask with the same meaning as in - CustomPath and CustomScan. - methods must point to a (usually statically allocated) + ss.ss_currentRelation is left NULL. + flags is a bit mask with the same meaning as in + CustomPath and CustomScan. + methods must point to a (usually statically allocated) object implementing the required custom scan state methods, which are - further detailed below. Typically, a CustomScanState, which - need not support copyObject, will actually be a larger + further detailed below. Typically, a CustomScanState, which + need not support copyObject, will actually be a larger structure embedding the above as its first member. @@ -249,8 +249,8 @@ void (*BeginCustomScan) (CustomScanState *node, EState *estate, int eflags);
- Complete initialization of the supplied CustomScanState. - Standard fields have been initialized by ExecInitCustomScan, + Complete initialization of the supplied CustomScanState. + Standard fields have been initialized by ExecInitCustomScan, but any private fields should be initialized here. @@ -259,16 +259,16 @@ void (*BeginCustomScan) (CustomScanState *node, TupleTableSlot *(*ExecCustomScan) (CustomScanState *node); Fetch the next scan tuple. If any tuples remain, it should fill - ps_ResultTupleSlot with the next tuple in the current scan + ps_ResultTupleSlot with the next tuple in the current scan direction, and then return the tuple slot. If not, - NULL or an empty slot should be returned. + NULL or an empty slot should be returned. void (*EndCustomScan) (CustomScanState *node); - Clean up any private data associated with the CustomScanState. + Clean up any private data associated with the CustomScanState. This method is required, but it does not need to do anything if there is no associated data or it will be cleaned up automatically. @@ -286,9 +286,9 @@ void (*ReScanCustomScan) (CustomScanState *node); void (*MarkPosCustomScan) (CustomScanState *node); Save the current scan position so that it can subsequently be restored - by the RestrPosCustomScan callback. This callback is + by the RestrPosCustomScan callback. This callback is optional, and need only be supplied if the - CUSTOMPATH_SUPPORT_MARK_RESTORE flag is set. + CUSTOMPATH_SUPPORT_MARK_RESTORE flag is set. @@ -296,9 +296,9 @@ void (*MarkPosCustomScan) (CustomScanState *node); void (*RestrPosCustomScan) (CustomScanState *node); Restore the previous scan position as saved by the - MarkPosCustomScan callback. This callback is optional, + MarkPosCustomScan callback. This callback is optional, and need only be supplied if the - CUSTOMPATH_SUPPORT_MARK_RESTORE flag is set. + CUSTOMPATH_SUPPORT_MARK_RESTORE flag is set. @@ -320,8 +320,8 @@ void (*InitializeDSMCustomScan) (CustomScanState *node, void *coordinate); Initialize the dynamic shared memory that will be required for parallel - operation. coordinate points to a shared memory area of - size equal to the return value of EstimateDSMCustomScan. + operation. coordinate points to a shared memory area of + size equal to the return value of EstimateDSMCustomScan. This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. @@ -337,9 +337,9 @@ void (*ReInitializeDSMCustomScan) (CustomScanState *node, This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. Recommended practice is that this callback reset only shared state, - while the ReScanCustomScan callback resets only local + while the ReScanCustomScan callback resets only local state. Currently, this callback will be called - before ReScanCustomScan, but it's best not to rely on + before ReScanCustomScan, but it's best not to rely on that ordering. @@ -350,7 +350,7 @@ void (*InitializeWorkerCustomScan) (CustomScanState *node, void *coordinate); Initialize a parallel worker's local state based on the shared state - set up by the leader during InitializeDSMCustomScan. + set up by the leader during InitializeDSMCustomScan. This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. @@ -361,7 +361,7 @@ void (*ShutdownCustomScan) (CustomScanState *node); Release resources when it is anticipated the node will not be executed to completion. 
This is not called in all cases; sometimes, - EndCustomScan may be called without this function having + EndCustomScan may be called without this function having been called first. Since the DSM segment used by parallel query is destroyed just after this callback is invoked, custom scan providers that wish to take some action before the DSM segment goes away should implement @@ -374,9 +374,9 @@ void (*ExplainCustomScan) (CustomScanState *node, List *ancestors, ExplainState *es); - Output additional information for EXPLAIN of a custom-scan + Output additional information for EXPLAIN of a custom-scan plan node. This callback is optional. Common data stored in the - ScanState, such as the target list and scan relation, will + ScanState, such as the target list and scan relation, will be shown even without this callback, but the callback allows the display of additional, private state. diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index 512756df4a..6a15f9030c 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -79,7 +79,7 @@ bytea - binary data (byte array) + binary data (byte array) @@ -354,45 +354,45 @@ - smallint + smallint 2 bytes small-range integer -32768 to +32767 - integer + integer 4 bytes typical choice for integer -2147483648 to +2147483647 - bigint + bigint 8 bytes large-range integer -9223372036854775808 to +9223372036854775807 - decimal + decimal variable user-specified precision, exact up to 131072 digits before the decimal point; up to 16383 digits after the decimal point - numeric + numeric variable user-specified precision, exact up to 131072 digits before the decimal point; up to 16383 digits after the decimal point - real + real 4 bytes variable-precision, inexact 6 decimal digits precision - double precision + double precision 8 bytes variable-precision, inexact 15 decimal digits precision @@ -406,7 +406,7 @@ - serial + serial 4 bytes autoincrementing integer 1 to 2147483647 @@ -574,9 +574,9 @@ NUMERIC Numeric values are physically stored without any extra leading or trailing zeroes. Thus, the declared precision and scale of a column - are maximums, not fixed allocations. (In this sense the numeric - type is more akin to varchar(n) - than to char(n).) The actual storage + are maximums, not fixed allocations. (In this sense the numeric + type is more akin to varchar(n) + than to char(n).) The actual storage requirement is two bytes for each group of four decimal digits, plus three to eight bytes overhead. @@ -593,22 +593,22 @@ NUMERIC In addition to ordinary numeric values, the numeric - type allows the special value NaN, meaning - not-a-number. Any operation on NaN - yields another NaN. When writing this value + type allows the special value NaN, meaning + not-a-number. Any operation on NaN + yields another NaN. When writing this value as a constant in an SQL command, you must put quotes around it, - for example UPDATE table SET x = 'NaN'. On input, - the string NaN is recognized in a case-insensitive manner. + for example UPDATE table SET x = 'NaN'. On input, + the string NaN is recognized in a case-insensitive manner. - In most implementations of the not-a-number concept, - NaN is not considered equal to any other numeric - value (including NaN). In order to allow - numeric values to be sorted and used in tree-based - indexes, PostgreSQL treats NaN - values as equal, and greater than all non-NaN + In most implementations of the not-a-number concept, + NaN is not considered equal to any other numeric + value (including NaN). 
In order to allow + numeric values to be sorted and used in tree-based + indexes, PostgreSQL treats NaN + values as equal, and greater than all non-NaN values. @@ -756,18 +756,18 @@ FROM generate_series(-3.5, 3.5, 1) as x; floating-point arithmetic does not follow IEEE 754, these values will probably not work as expected.) When writing these values as constants in an SQL command, you must put quotes around them, - for example UPDATE table SET x = '-Infinity'. On input, + for example UPDATE table SET x = '-Infinity'. On input, these strings are recognized in a case-insensitive manner. - IEEE754 specifies that NaN should not compare equal - to any other floating-point value (including NaN). + IEEE754 specifies that NaN should not compare equal + to any other floating-point value (including NaN). In order to allow floating-point values to be sorted and used - in tree-based indexes, PostgreSQL treats - NaN values as equal, and greater than all - non-NaN values. + in tree-based indexes, PostgreSQL treats + NaN values as equal, and greater than all + non-NaN values. @@ -776,7 +776,7 @@ FROM generate_series(-3.5, 3.5, 1) as x; notations float and float(p) for specifying inexact numeric types. Here, p specifies - the minimum acceptable precision in binary digits. + the minimum acceptable precision in binary digits. PostgreSQL accepts float(1) to float(24) as selecting the real type, while @@ -870,12 +870,12 @@ ALTER SEQUENCE tablename_ Thus, we have created an integer column and arranged for its default - values to be assigned from a sequence generator. A NOT NULL + values to be assigned from a sequence generator. A NOT NULL constraint is applied to ensure that a null value cannot be inserted. (In most cases you would also want to attach a - UNIQUE or PRIMARY KEY constraint to prevent + UNIQUE or PRIMARY KEY constraint to prevent duplicate values from being inserted by accident, but this is - not automatic.) Lastly, the sequence is marked as owned by + not automatic.) Lastly, the sequence is marked as owned by the column, so that it will be dropped if the column or table is dropped. @@ -908,7 +908,7 @@ ALTER SEQUENCE tablename_bigserial and serial8 work the same way, except that they create a bigint column. bigserial should be used if you anticipate - the use of more than 231 identifiers over the + the use of more than 231 identifiers over the lifetime of the table. The type names smallserial and serial2 also work the same way, except that they create a smallint column. @@ -962,9 +962,9 @@ ALTER SEQUENCE tablename_ Since the output of this data type is locale-sensitive, it might not - work to load money data into a database that has a different - setting of lc_monetary. To avoid problems, before - restoring a dump into a new database make sure lc_monetary has + work to load money data into a database that has a different + setting of lc_monetary. To avoid problems, before + restoring a dump into a new database make sure lc_monetary has the same or equivalent value as in the database that was dumped. @@ -994,7 +994,7 @@ SELECT '52093.89'::money::numeric::float8; Division of a money value by an integer value is performed with truncation of the fractional part towards zero. To get a rounded result, divide by a floating-point value, or cast the money - value to numeric before dividing and back to money + value to numeric before dividing and back to money afterwards. (The latter is preferable to avoid risking precision loss.) 
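A brief sketch of the two division strategies just described; the displayed
results assume the default lc_monetary formatting:

SELECT '100.00'::money / 7;                    -- integer division truncates: $14.28
SELECT ('100.00'::money::numeric / 7)::money;  -- dividing as numeric rounds: $14.29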
When a money value is divided by another money value, the result is double precision (i.e., a pure number, @@ -1047,11 +1047,11 @@ SELECT '52093.89'::money::numeric::float8; - character varying(n), varchar(n) + character varying(n), varchar(n) variable-length with limit - character(n), char(n) + character(n), char(n) fixed-length, blank padded @@ -1070,10 +1070,10 @@ SELECT '52093.89'::money::numeric::float8; SQL defines two primary character types: - character varying(n) and - character(n), where n + character varying(n) and + character(n), where n is a positive integer. Both of these types can store strings up to - n characters (not bytes) in length. An attempt to store a + n characters (not bytes) in length. An attempt to store a longer string into a column of these types will result in an error, unless the excess characters are all spaces, in which case the string will be truncated to the maximum length. (This somewhat @@ -1087,22 +1087,22 @@ SELECT '52093.89'::money::numeric::float8; If one explicitly casts a value to character - varying(n) or - character(n), then an over-length - value will be truncated to n characters without + varying(n) or + character(n), then an over-length + value will be truncated to n characters without raising an error. (This too is required by the SQL standard.) - The notations varchar(n) and - char(n) are aliases for character - varying(n) and - character(n), respectively. + The notations varchar(n) and + char(n) are aliases for character + varying(n) and + character(n), respectively. character without length specifier is equivalent to character(1). If character varying is used without length specifier, the type accepts strings of any size. The - latter is a PostgreSQL extension. + latter is a PostgreSQL extension. @@ -1115,19 +1115,19 @@ SELECT '52093.89'::money::numeric::float8; Values of type character are physically padded - with spaces to the specified width n, and are + with spaces to the specified width n, and are stored and displayed that way. However, trailing spaces are treated as semantically insignificant and disregarded when comparing two values of type character. In collations where whitespace is significant, this behavior can produce unexpected results; for example SELECT 'a '::CHAR(2) collate "C" < - E'a\n'::CHAR(2) returns true, even though C + E'a\n'::CHAR(2) returns true, even though C locale would consider a space to be greater than a newline. Trailing spaces are removed when converting a character value to one of the other string types. Note that trailing spaces - are semantically significant in + are semantically significant in character varying and text values, and - when using pattern matching, that is LIKE and + when using pattern matching, that is LIKE and regular expressions. @@ -1140,7 +1140,7 @@ SELECT '52093.89'::money::numeric::float8; stored in background tables so that they do not interfere with rapid access to shorter column values. In any case, the longest possible character string that can be stored is about 1 GB. (The - maximum value that will be allowed for n in the data + maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different. 
If you desire to @@ -1155,10 +1155,10 @@ SELECT '52093.89'::money::numeric::float8; apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While - character(n) has performance + character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact - character(n) is usually the slowest of + character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead. @@ -1220,7 +1220,7 @@ SELECT b, char_length(b) FROM test2; in the internal system catalogs and is not intended for use by the general user. Its length is currently defined as 64 bytes (63 usable characters plus terminator) but should be referenced using the constant - NAMEDATALEN in C source code. + NAMEDATALEN in C source code. The length is set at compile time (and is therefore adjustable for special uses); the default maximum length might change in a future release. The type "char" @@ -1304,7 +1304,7 @@ SELECT b, char_length(b) FROM test2; Second, operations on binary strings process the actual bytes, whereas the processing of character strings depends on locale settings. In short, binary strings are appropriate for storing data that the - programmer thinks of as raw bytes, whereas character + programmer thinks of as raw bytes, whereas character strings are appropriate for storing text. @@ -1328,10 +1328,10 @@ SELECT b, char_length(b) FROM test2; - <type>bytea</> Hex Format + <type>bytea</type> Hex Format - The hex format encodes binary data as 2 hexadecimal digits + The hex format encodes binary data as 2 hexadecimal digits per byte, most significant nibble first. The entire string is preceded by the sequence \x (to distinguish it from the escape format). In some contexts, the initial backslash may @@ -1355,7 +1355,7 @@ SELECT E'\\xDEADBEEF'; - <type>bytea</> Escape Format + <type>bytea</type> Escape Format The escape format is the traditional @@ -1390,7 +1390,7 @@ SELECT E'\\xDEADBEEF'; - <type>bytea</> Literal Escaped Octets + <type>bytea</type> Literal Escaped Octets @@ -1430,7 +1430,7 @@ SELECT E'\\xDEADBEEF'; 0 to 31 and 127 to 255 non-printable octets - E'\\xxx' (octal value) + E'\\xxx' (octal value) SELECT E'\\001'::bytea; \001 @@ -1481,7 +1481,7 @@ SELECT E'\\xDEADBEEF';
- <type>bytea</> Output Escaped Octets + <type>bytea</type> Output Escaped Octets @@ -1506,7 +1506,7 @@ SELECT E'\\xDEADBEEF'; 0 to 31 and 127 to 255 non-printable octets - \xxx (octal value) + \xxx (octal value) SELECT E'\\001'::bytea; \001 @@ -1524,7 +1524,7 @@ SELECT E'\\xDEADBEEF';
- Depending on the front end to PostgreSQL you use, + Depending on the front end to PostgreSQL you use, you might have additional work to do in terms of escaping and unescaping bytea strings. For example, you might also have to escape line feeds and carriage returns if your interface @@ -1685,7 +1685,7 @@ MINUTE TO SECOND Note that if both fields and p are specified, the - fields must include SECOND, + fields must include SECOND, since the precision applies only to the seconds. @@ -1717,9 +1717,9 @@ MINUTE TO SECOND For some formats, ordering of day, month, and year in date input is ambiguous and there is support for specifying the expected ordering of these fields. Set the parameter - to MDY to select month-day-year interpretation, - DMY to select day-month-year interpretation, or - YMD to select year-month-day interpretation. + to MDY to select month-day-year interpretation, + DMY to select day-month-year interpretation, or + YMD to select year-month-day interpretation. @@ -1784,19 +1784,19 @@ MINUTE TO SECOND 1/8/1999 - January 8 in MDY mode; - August 1 in DMY mode + January 8 in MDY mode; + August 1 in DMY mode 1/18/1999 - January 18 in MDY mode; + January 18 in MDY mode; rejected in other modes 01/02/03 - January 2, 2003 in MDY mode; - February 1, 2003 in DMY mode; - February 3, 2001 in YMD mode + January 2, 2003 in MDY mode; + February 1, 2003 in DMY mode; + February 3, 2001 in YMD mode @@ -1813,15 +1813,15 @@ MINUTE TO SECOND 99-Jan-08 - January 8 in YMD mode, else error + January 8 in YMD mode, else error 08-Jan-99 - January 8, except error in YMD mode + January 8, except error in YMD mode Jan-08-99 - January 8, except error in YMD mode + January 8, except error in YMD mode 19990108 @@ -2070,20 +2070,20 @@ January 8 04:05:06 1999 PST For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, - GMT). An input value that has an explicit + GMT). An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's parameter, and is converted to UTC using the - offset for the timezone zone. + offset for the timezone zone. When a timestamp with time zone value is output, it is always converted from UTC to the - current timezone zone, and displayed as local time in that + current timezone zone, and displayed as local time in that zone. To see the time in another time zone, either change - timezone or use the AT TIME ZONE construct + timezone or use the AT TIME ZONE construct (see ). @@ -2091,8 +2091,8 @@ January 8 04:05:06 1999 PST Conversions between timestamp without time zone and timestamp with time zone normally assume that the timestamp without time zone value should be taken or given - as timezone local time. A different time zone can - be specified for the conversion using AT TIME ZONE. + as timezone local time. A different time zone can + be specified for the conversion using AT TIME ZONE. @@ -2117,7 +2117,7 @@ January 8 04:05:06 1999 PST are specially represented inside the system and will be displayed unchanged; but the others are simply notational shorthands that will be converted to ordinary date/time values when read. - (In particular, now and related strings are converted + (In particular, now and related strings are converted to a specific time value as soon as they are read.) 
All of these values need to be enclosed in single quotes when used as constants in SQL commands. @@ -2187,7 +2187,7 @@ January 8 04:05:06 1999 PST LOCALTIMESTAMP. The latter four accept an optional subsecond precision specification. (See .) Note that these are - SQL functions and are not recognized in data input strings. + SQL functions and are not recognized in data input strings. @@ -2211,8 +2211,8 @@ January 8 04:05:06 1999 PST The output format of the date/time types can be set to one of the four styles ISO 8601, - SQL (Ingres), traditional POSTGRES - (Unix date format), or + SQL (Ingres), traditional POSTGRES + (Unix date format), or German. The default is the ISO format. (The SQL standard requires the use of the ISO 8601 @@ -2222,7 +2222,7 @@ January 8 04:05:06 1999 PST output style. The output of the date and time types is generally only the date or time part in accordance with the given examples. However, the - POSTGRES style outputs date-only values in + POSTGRES style outputs date-only values in ISO format. @@ -2263,9 +2263,9 @@ January 8 04:05:06 1999 PST - ISO 8601 specifies the use of uppercase letter T to separate - the date and time. PostgreSQL accepts that format on - input, but on output it uses a space rather than T, as shown + ISO 8601 specifies the use of uppercase letter T to separate + the date and time. PostgreSQL accepts that format on + input, but on output it uses a space rather than T, as shown above. This is for readability and for consistency with RFC 3339 as well as some other database systems. @@ -2292,17 +2292,17 @@ January 8 04:05:06 1999 PST - SQL, DMY + SQL, DMY day/month/year 17/12/1997 15:37:16.00 CET - SQL, MDY + SQL, MDY month/day/year 12/17/1997 07:37:16.00 PST - Postgres, DMY + Postgres, DMY day/month/year Wed 17 Dec 07:37:16 1997 PST @@ -2368,7 +2368,7 @@ January 8 04:05:06 1999 PST The default time zone is specified as a constant numeric offset - from UTC. It is therefore impossible to adapt to + from UTC. It is therefore impossible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries. @@ -2380,7 +2380,7 @@ January 8 04:05:06 1999 PST To address these difficulties, we recommend using date/time types that contain both date and time when using time zones. We - do not recommend using the type time with + do not recommend using the type time with time zone (though it is supported by PostgreSQL for legacy applications and for compliance with the SQL standard). @@ -2401,7 +2401,7 @@ January 8 04:05:06 1999 PST - A full time zone name, for example America/New_York. + A full time zone name, for example America/New_York. The recognized time zone names are listed in the pg_timezone_names view (see ). @@ -2412,16 +2412,16 @@ January 8 04:05:06 1999 PST - A time zone abbreviation, for example PST. Such a + A time zone abbreviation, for example PST. Such a specification merely defines a particular offset from UTC, in contrast to full time zone names which can imply a set of daylight savings transition-date rules as well. The recognized abbreviations - are listed in the pg_timezone_abbrevs view (see pg_timezone_abbrevs view (see ). You cannot set the configuration parameters or to a time zone abbreviation, but you can use abbreviations in - date/time input values and with the AT TIME ZONE + date/time input values and with the AT TIME ZONE operator. 
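A short session sketch of these rules; America/New_York is a full IANA zone
name, while EDT is an abbreviation that merely pins down a UTC offset:

SET TIME ZONE 'America/New_York';
SELECT TIMESTAMP WITH TIME ZONE '2014-06-04 12:00 EDT';  -- abbreviation fixes the offset on input
SELECT TIMESTAMP '2014-06-04 12:00' AT TIME ZONE 'UTC';  -- read a zone-less timestamp as UTC time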
@@ -2429,25 +2429,25 @@ January 8 04:05:06 1999 PST In addition to the timezone names and abbreviations, PostgreSQL will accept POSIX-style time zone - specifications of the form STDoffset or - STDoffsetDST, where - STD is a zone abbreviation, offset is a - numeric offset in hours west from UTC, and DST is an + specifications of the form STDoffset or + STDoffsetDST, where + STD is a zone abbreviation, offset is a + numeric offset in hours west from UTC, and DST is an optional daylight-savings zone abbreviation, assumed to stand for one - hour ahead of the given offset. For example, if EST5EDT + hour ahead of the given offset. For example, if EST5EDT were not already a recognized zone name, it would be accepted and would be functionally equivalent to United States East Coast time. In this syntax, a zone abbreviation can be a string of letters, or an - arbitrary string surrounded by angle brackets (<>). + arbitrary string surrounded by angle brackets (<>). When a daylight-savings zone abbreviation is present, it is assumed to be used according to the same daylight-savings transition rules used in the - IANA time zone database's posixrules entry. + IANA time zone database's posixrules entry. In a standard PostgreSQL installation, - posixrules is the same as US/Eastern, so + posixrules is the same as US/Eastern, so that POSIX-style time zone specifications follow USA daylight-savings rules. If needed, you can adjust this behavior by replacing the - posixrules file. + posixrules file. @@ -2456,10 +2456,10 @@ January 8 04:05:06 1999 PST and full names: abbreviations represent a specific offset from UTC, whereas many of the full names imply a local daylight-savings time rule, and so have two possible UTC offsets. As an example, - 2014-06-04 12:00 America/New_York represents noon local + 2014-06-04 12:00 America/New_York represents noon local time in New York, which for this particular date was Eastern Daylight - Time (UTC-4). So 2014-06-04 12:00 EDT specifies that - same time instant. But 2014-06-04 12:00 EST specifies + Time (UTC-4). So 2014-06-04 12:00 EDT specifies that + same time instant. But 2014-06-04 12:00 EST specifies noon Eastern Standard Time (UTC-5), regardless of whether daylight savings was nominally in effect on that date. @@ -2467,10 +2467,10 @@ January 8 04:05:06 1999 PST To complicate matters, some jurisdictions have used the same timezone abbreviation to mean different UTC offsets at different times; for - example, in Moscow MSK has meant UTC+3 in some years and - UTC+4 in others. PostgreSQL interprets such + example, in Moscow MSK has meant UTC+3 in some years and + UTC+4 in others. PostgreSQL interprets such abbreviations according to whatever they meant (or had most recently - meant) on the specified date; but, as with the EST example + meant) on the specified date; but, as with the EST example above, this is not necessarily the same as local civil time on that date. @@ -2478,18 +2478,18 @@ January 8 04:05:06 1999 PST One should be wary that the POSIX-style time zone feature can lead to silently accepting bogus input, since there is no check on the reasonableness of the zone abbreviations. For example, SET - TIMEZONE TO FOOBAR0 will work, leaving the system effectively using + TIMEZONE TO FOOBAR0 will work, leaving the system effectively using a rather peculiar abbreviation for UTC. Another issue to keep in mind is that in POSIX time zone names, - positive offsets are used for locations west of Greenwich. + positive offsets are used for locations west of Greenwich. 
Everywhere else, PostgreSQL follows the - ISO-8601 convention that positive timezone offsets are east + ISO-8601 convention that positive timezone offsets are east of Greenwich. In all cases, timezone names and abbreviations are recognized - case-insensitively. (This is a change from PostgreSQL + case-insensitively. (This is a change from PostgreSQL versions prior to 8.2, which were case-sensitive in some contexts but not others.) @@ -2497,14 +2497,14 @@ January 8 04:05:06 1999 PST Neither timezone names nor abbreviations are hard-wired into the server; they are obtained from configuration files stored under - .../share/timezone/ and .../share/timezonesets/ + .../share/timezone/ and .../share/timezonesets/ of the installation directory (see ). The configuration parameter can - be set in the file postgresql.conf, or in any of the + be set in the file postgresql.conf, or in any of the other standard ways described in . There are also some special ways to set it: @@ -2513,7 +2513,7 @@ January 8 04:05:06 1999 PST The SQL command SET TIME ZONE sets the time zone for the session. This is an alternative spelling - of SET TIMEZONE TO with a more SQL-spec-compatible syntax. + of SET TIMEZONE TO with a more SQL-spec-compatible syntax. @@ -2541,52 +2541,52 @@ January 8 04:05:06 1999 PST verbose syntax: -@ quantity unit quantity unit... direction +@ quantity unit quantity unit... direction - where quantity is a number (possibly signed); - unit is microsecond, + where quantity is a number (possibly signed); + unit is microsecond, millisecond, second, minute, hour, day, week, month, year, decade, century, millennium, or abbreviations or plurals of these units; - direction can be ago or - empty. The at sign (@) is optional noise. The amounts + direction can be ago or + empty. The at sign (@) is optional noise. The amounts of the different units are implicitly added with appropriate sign accounting. ago negates all the fields. This syntax is also used for interval output, if is set to - postgres_verbose. + postgres_verbose. Quantities of days, hours, minutes, and seconds can be specified without - explicit unit markings. For example, '1 12:59:10' is read - the same as '1 day 12 hours 59 min 10 sec'. Also, + explicit unit markings. For example, '1 12:59:10' is read + the same as '1 day 12 hours 59 min 10 sec'. Also, a combination of years and months can be specified with a dash; - for example '200-10' is read the same as '200 years - 10 months'. (These shorter forms are in fact the only ones allowed + for example '200-10' is read the same as '200 years + 10 months'. (These shorter forms are in fact the only ones allowed by the SQL standard, and are used for output when - IntervalStyle is set to sql_standard.) + IntervalStyle is set to sql_standard.) Interval values can also be written as ISO 8601 time intervals, using - either the format with designators of the standard's section - 4.4.3.2 or the alternative format of section 4.4.3.3. The + either the format with designators of the standard's section + 4.4.3.2 or the alternative format of section 4.4.3.3. The format with designators looks like this: -P quantity unit quantity unit ... T quantity unit ... +P quantity unit quantity unit ... T quantity unit ... - The string must start with a P, and may include a - T that introduces the time-of-day units. The + The string must start with a P, and may include a + T that introduces the time-of-day units. The available unit abbreviations are given in . 
Units may be omitted, and may be specified in any order, but units smaller than - a day must appear after T. In particular, the meaning of - M depends on whether it is before or after - T. + a day must appear after T. In particular, the meaning of + M depends on whether it is before or after + T. @@ -2634,51 +2634,51 @@ P quantity unit quantity In the alternative format: -P years-months-days T hours:minutes:seconds +P years-months-days T hours:minutes:seconds the string must begin with P, and a - T separates the date and time parts of the interval. + T separates the date and time parts of the interval. The values are given as numbers similar to ISO 8601 dates. - When writing an interval constant with a fields + When writing an interval constant with a fields specification, or when assigning a string to an interval column that was - defined with a fields specification, the interpretation of - unmarked quantities depends on the fields. For - example INTERVAL '1' YEAR is read as 1 year, whereas - INTERVAL '1' means 1 second. Also, field values - to the right of the least significant field allowed by the - fields specification are silently discarded. For - example, writing INTERVAL '1 day 2:03:04' HOUR TO MINUTE + defined with a fields specification, the interpretation of + unmarked quantities depends on the fields. For + example INTERVAL '1' YEAR is read as 1 year, whereas + INTERVAL '1' means 1 second. Also, field values + to the right of the least significant field allowed by the + fields specification are silently discarded. For + example, writing INTERVAL '1 day 2:03:04' HOUR TO MINUTE results in dropping the seconds field, but not the day field. - According to the SQL standard all fields of an interval + According to the SQL standard all fields of an interval value must have the same sign, so a leading negative sign applies to all fields; for example the negative sign in the interval literal - '-1 2:03:04' applies to both the days and hour/minute/second - parts. PostgreSQL allows the fields to have different + '-1 2:03:04' applies to both the days and hour/minute/second + parts. PostgreSQL allows the fields to have different signs, and traditionally treats each field in the textual representation as independently signed, so that the hour/minute/second part is - considered positive in this example. If IntervalStyle is + considered positive in this example. If IntervalStyle is set to sql_standard then a leading sign is considered to apply to all fields (but only if no additional signs appear). - Otherwise the traditional PostgreSQL interpretation is + Otherwise the traditional PostgreSQL interpretation is used. To avoid ambiguity, it's recommended to attach an explicit sign to each field if any field is negative. - Internally interval values are stored as months, days, + Internally interval values are stored as months, days, and seconds. This is done because the number of days in a month varies, and a day can have 23 or 25 hours if a daylight savings time adjustment is involved. The months and days fields are integers while the seconds field can store fractions. Because intervals are - usually created from constant strings or timestamp subtraction, + usually created from constant strings or timestamp subtraction, this storage method works well in most cases. Functions - justify_days and justify_hours are + justify_days and justify_hours are available for adjusting days and hours that overflow their normal ranges. 
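The justify functions mentioned above can be sketched as follows, assuming the default IntervalStyle (postgres):

SELECT justify_hours(interval '27 hours');   -- 1 day 03:00:00
SELECT justify_days(interval '35 days');     -- 1 mon 5 days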
@@ -2686,18 +2686,18 @@ P years-months-days < In the verbose input format, and in some fields of the more compact input formats, field values can have fractional parts; for example - '1.5 week' or '01:02:03.45'. Such input is + '1.5 week' or '01:02:03.45'. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. - For example, '1.5 month' becomes 1 month and 15 days. + For example, '1.5 month' becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. shows some examples - of valid interval input. + of valid interval input.
@@ -2724,11 +2724,11 @@ P years-months-days < P1Y2M3DT4H5M6S - ISO 8601 format with designators: same meaning as above + ISO 8601 format with designators: same meaning as above P0001-02-03T04:05:06 - ISO 8601 alternative format: same meaning as above + ISO 8601 alternative format: same meaning as above @@ -2747,16 +2747,16 @@ P years-months-days < The output format of the interval type can be set to one of the - four styles sql_standard, postgres, - postgres_verbose, or iso_8601, + four styles sql_standard, postgres, + postgres_verbose, or iso_8601, using the command SET intervalstyle. - The default is the postgres format. + The default is the postgres format. shows examples of each output style. - The sql_standard style produces output that conforms to + The sql_standard style produces output that conforms to the SQL standard's specification for interval literal strings, if the interval value meets the standard's restrictions (either year-month only or day-time only, with no mixing of positive @@ -2766,20 +2766,20 @@ P years-months-days < - The output of the postgres style matches the output of - PostgreSQL releases prior to 8.4 when the - parameter was set to ISO. + The output of the postgres style matches the output of + PostgreSQL releases prior to 8.4 when the + parameter was set to ISO. - The output of the postgres_verbose style matches the output of - PostgreSQL releases prior to 8.4 when the - DateStyle parameter was set to non-ISO output. + The output of the postgres_verbose style matches the output of + PostgreSQL releases prior to 8.4 when the + DateStyle parameter was set to non-ISO output. - The output of the iso_8601 style matches the format - with designators described in section 4.4.3.2 of the + The output of the iso_8601 style matches the format + with designators described in section 4.4.3.2 of the ISO 8601 standard. @@ -2796,25 +2796,25 @@ P years-months-days < - sql_standard + sql_standard 1-2 3 4:05:06 -1-2 +3 -4:05:06 - postgres + postgres 1 year 2 mons 3 days 04:05:06 -1 year -2 mons +3 days -04:05:06 - postgres_verbose + postgres_verbose @ 1 year 2 mons @ 3 days 4 hours 5 mins 6 secs @ 1 year 2 mons -3 days 4 hours 5 mins 6 secs ago - iso_8601 + iso_8601 P1Y2M P3DT4H5M6S P-1Y-2M3DT-4H-5M-6S @@ -3178,7 +3178,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays x , y - where x and y are the respective + where x and y are the respective coordinates, as floating-point numbers. @@ -3196,8 +3196,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays Lines are represented by the linear - equation Ax + By + C = 0, - where A and B are not both zero. Values + equation Ax + By + C = 0, + where A and B are not both zero. Values of type line are input and output in the following form: { A, B, C } @@ -3324,8 +3324,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays where the points are the end points of the line segments - comprising the path. Square brackets ([]) indicate - an open path, while parentheses (()) indicate a + comprising the path. Square brackets ([]) indicate + an open path, while parentheses (()) indicate a closed path. When the outermost parentheses are omitted, as in the third through fifth syntaxes, a closed path is assumed. @@ -3388,7 +3388,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays where - (x,y) + (x,y) is the center point and r is the radius of the circle. 
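The bracket convention for paths and the circle syntax just described can be exercised directly; in this minimal sketch, each literal is echoed back in its canonical form:

SELECT path '[(0,0),(1,1),(2,0)]' AS open_path,
       path '((0,0),(1,1),(2,0))' AS closed_path,
       circle '<(0,0),5>' AS circ;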
@@ -3409,7 +3409,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays
- PostgreSQL offers data types to store IPv4, IPv6, and MAC
+ PostgreSQL offers data types to store IPv4, IPv6, and MAC
 addresses, as shown in . It is better
 to use these types instead of plain text types to store network addresses, because
@@ -3503,7 +3503,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays
- <type>cidr</>
+ <type>cidr</type>
 cidr
@@ -3514,11 +3514,11 @@ SELECT person.name, holidays.num_weeks FROM person, holidays
 Input and output formats follow Classless Internet Domain Routing
 conventions. The format for specifying networks is
- address/y where address is the network represented as an
+ address/y where address is the network represented as an
 IPv4 or IPv6 address, and
- y is the number of bits in the netmask. If
- y is omitted, it is calculated
+ y is the number of bits in the netmask. If
+ y is omitted, it is calculated
 using assumptions from the older classful network numbering system, except
 it will be at least large enough to include all of the octets
 written in the input. It is an error to specify a network address
@@ -3530,7 +3530,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays
- <type>cidr</> Type Input Examples + <type>cidr</type> Type Input Examples @@ -3639,8 +3639,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays If you do not like the output format for inet or - cidr values, try the functions host, - text, and abbrev. + cidr values, try the functions host, + text, and abbrev. @@ -3658,24 +3658,24 @@ SELECT person.name, holidays.num_weeks FROM person, holidays - The macaddr type stores MAC addresses, known for example + The macaddr type stores MAC addresses, known for example from Ethernet card hardware addresses (although MAC addresses are used for other purposes as well). Input is accepted in the following formats: - '08:00:2b:01:02:03' - '08-00-2b-01-02-03' - '08002b:010203' - '08002b-010203' - '0800.2b01.0203' - '0800-2b01-0203' - '08002b010203' + '08:00:2b:01:02:03' + '08-00-2b-01-02-03' + '08002b:010203' + '08002b-010203' + '0800.2b01.0203' + '0800-2b01-0203' + '08002b010203' These examples would all specify the same address. Upper and lower case is accepted for the digits - a through f. Output is always in the + a through f. Output is always in the first of the forms shown. @@ -3708,7 +3708,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays - The macaddr8 type stores MAC addresses in EUI-64 + The macaddr8 type stores MAC addresses in EUI-64 format, known for example from Ethernet card hardware addresses (although MAC addresses are used for other purposes as well). This type can accept both 6 and 8 byte length MAC addresses @@ -3718,31 +3718,31 @@ SELECT person.name, holidays.num_weeks FROM person, holidays Note that IPv6 uses a modified EUI-64 format where the 7th bit should be set to one after the conversion from EUI-48. The - function macaddr8_set7bit is provided to make this + function macaddr8_set7bit is provided to make this change. Generally speaking, any input which is comprised of pairs of hex digits (on byte boundaries), optionally separated consistently by - one of ':', '-' or '.', is + one of ':', '-' or '.', is accepted. The number of hex digits must be either 16 (8 bytes) or 12 (6 bytes). Leading and trailing whitespace is ignored. The following are examples of input formats that are accepted: - '08:00:2b:01:02:03:04:05' - '08-00-2b-01-02-03-04-05' - '08002b:0102030405' - '08002b-0102030405' - '0800.2b01.0203.0405' - '0800-2b01-0203-0405' - '08002b01:02030405' - '08002b0102030405' + '08:00:2b:01:02:03:04:05' + '08-00-2b-01-02-03-04-05' + '08002b:0102030405' + '08002b-0102030405' + '0800.2b01.0203.0405' + '0800-2b01-0203-0405' + '08002b01:02030405' + '08002b0102030405' These examples would all specify the same address. Upper and lower case is accepted for the digits - a through f. Output is always in the + a through f. Output is always in the first of the forms shown. The last six input formats that are mentioned above are not part @@ -3750,7 +3750,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays To convert a traditional 48 bit MAC address in EUI-48 format to modified EUI-64 format to be included as the host portion of an - IPv6 address, use macaddr8_set7bit as shown: + IPv6 address, use macaddr8_set7bit as shown: SELECT macaddr8_set7bit('08:00:2b:01:02:03'); @@ -3798,12 +3798,12 @@ SELECT macaddr8_set7bit('08:00:2b:01:02:03'); If one explicitly casts a bit-string value to - bit(n), it will be truncated or - zero-padded on the right to be exactly n bits, + bit(n), it will be truncated or + zero-padded on the right to be exactly n bits, without raising an error. 
Similarly, if one explicitly casts a bit-string value to - bit varying(n), it will be truncated - on the right if it is more than n bits. + bit varying(n), it will be truncated + on the right if it is more than n bits. @@ -3860,8 +3860,8 @@ SELECT * FROM test; PostgreSQL provides two data types that are designed to support full text search, which is the activity of - searching through a collection of natural-language documents - to locate those that best match a query. + searching through a collection of natural-language documents + to locate those that best match a query. The tsvector type represents a document in a form optimized for text search; the tsquery type similarly represents a text query. @@ -3879,8 +3879,8 @@ SELECT * FROM test; A tsvector value is a sorted list of distinct - lexemes, which are words that have been - normalized to merge different variants of the same word + lexemes, which are words that have been + normalized to merge different variants of the same word (see for details). Sorting and duplicate-elimination are done automatically during input, as shown in this example: @@ -3913,7 +3913,7 @@ SELECT $$the lexeme 'Joe''s' contains a quote$$::tsvector; 'Joe''s' 'a' 'contains' 'lexeme' 'quote' 'the' - Optionally, integer positions + Optionally, integer positions can be attached to lexemes: @@ -3932,7 +3932,7 @@ SELECT 'a:1 fat:2 cat:3 sat:4 on:5 a:6 mat:7 and:8 ate:9 a:10 fat:11 rat:12'::ts Lexemes that have positions can further be labeled with a - weight, which can be A, + weight, which can be A, B, C, or D. D is the default and hence is not shown on output: @@ -3965,7 +3965,7 @@ SELECT 'The Fat Rats'::tsvector; For most English-text-searching applications the above words would be considered non-normalized, but tsvector doesn't care. Raw document text should usually be passed through - to_tsvector to normalize the words appropriately + to_tsvector to normalize the words appropriately for searching: @@ -3991,17 +3991,17 @@ SELECT to_tsvector('english', 'The Fat Rats'); A tsquery value stores lexemes that are to be searched for, and can combine them using the Boolean operators & (AND), | (OR), and - ! (NOT), as well as the phrase search operator - <-> (FOLLOWED BY). There is also a variant - <N> of the FOLLOWED BY - operator, where N is an integer constant that + ! (NOT), as well as the phrase search operator + <-> (FOLLOWED BY). There is also a variant + <N> of the FOLLOWED BY + operator, where N is an integer constant that specifies the distance between the two lexemes being searched - for. <-> is equivalent to <1>. + for. <-> is equivalent to <1>. Parentheses can be used to enforce grouping of these operators. - In the absence of parentheses, ! (NOT) binds most tightly, + In the absence of parentheses, ! (NOT) binds most tightly, <-> (FOLLOWED BY) next most tightly, then & (AND), with | (OR) binding the least tightly. @@ -4031,7 +4031,7 @@ SELECT 'fat & rat & ! cat'::tsquery; Optionally, lexemes in a tsquery can be labeled with one or more weight letters, which restricts them to match only - tsvector lexemes with one of those weights: + tsvector lexemes with one of those weights: SELECT 'fat:ab & cat'::tsquery; @@ -4042,7 +4042,7 @@ SELECT 'fat:ab & cat'::tsquery; - Also, lexemes in a tsquery can be labeled with * + Also, lexemes in a tsquery can be labeled with * to specify prefix matching: SELECT 'super:*'::tsquery; @@ -4050,15 +4050,15 @@ SELECT 'super:*'::tsquery; ----------- 'super':* - This query will match any word in a tsvector that begins - with super. 
+ This query will match any word in a tsvector that begins + with super. Quoting rules for lexemes are the same as described previously for - lexemes in tsvector; and, as with tsvector, + lexemes in tsvector; and, as with tsvector, any required normalization of words must be done before converting - to the tsquery type. The to_tsquery + to the tsquery type. The to_tsquery function is convenient for performing such normalization: @@ -4068,7 +4068,7 @@ SELECT to_tsquery('Fat:ab & Cats'); 'fat':AB & 'cat' - Note that to_tsquery will process prefixes in the same way + Note that to_tsquery will process prefixes in the same way as other words, which means this comparison returns true: @@ -4077,14 +4077,14 @@ SELECT to_tsvector( 'postgraduate' ) @@ to_tsquery( 'postgres:*' ); ---------- t - because postgres gets stemmed to postgr: + because postgres gets stemmed to postgr: SELECT to_tsvector( 'postgraduate' ), to_tsquery( 'postgres:*' ); to_tsvector | to_tsquery ---------------+------------ 'postgradu':1 | 'postgr':* - which will match the stemmed form of postgraduate. + which will match the stemmed form of postgraduate. @@ -4150,7 +4150,7 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11 - <acronym>XML</> Type + <acronym>XML</acronym> Type XML @@ -4163,7 +4163,7 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11 functions to perform type-safe operations on it; see . Use of this data type requires the installation to have been built with configure - --with-libxml. + --with-libxml. @@ -4311,7 +4311,7 @@ SET xmloption TO { DOCUMENT | CONTENT }; Some XML-related functions may not work at all on non-ASCII data when the server encoding is not UTF-8. This is known to be an - issue for xmltable() and xpath() in particular. + issue for xmltable() and xpath() in particular. @@ -4421,17 +4421,17 @@ SET xmloption TO { DOCUMENT | CONTENT }; system tables. OIDs are not added to user-created tables, unless WITH OIDS is specified when the table is created, or the - configuration variable is enabled. Type oid represents + configuration variable is enabled. Type oid represents an object identifier. There are also several alias types for - oid: regproc, regprocedure, - regoper, regoperator, regclass, - regtype, regrole, regnamespace, - regconfig, and regdictionary. + oid: regproc, regprocedure, + regoper, regoperator, regclass, + regtype, regrole, regnamespace, + regconfig, and regdictionary. shows an overview. - The oid type is currently implemented as an unsigned + The oid type is currently implemented as an unsigned four-byte integer. Therefore, it is not large enough to provide database-wide uniqueness in large databases, or even in large individual tables. So, using a user-created table's OID column as @@ -4440,7 +4440,7 @@ SET xmloption TO { DOCUMENT | CONTENT }; - The oid type itself has few operations beyond comparison. + The oid type itself has few operations beyond comparison. It can be cast to integer, however, and then manipulated using the standard integer operators. (Beware of possible signed-versus-unsigned confusion if you do this.) @@ -4450,10 +4450,10 @@ SET xmloption TO { DOCUMENT | CONTENT }; The OID alias types have no operations of their own except for specialized input and output routines. These routines are able to accept and display symbolic names for system objects, rather than - the raw numeric value that type oid would use. The alias + the raw numeric value that type oid would use. The alias types allow simplified lookup of OID values for objects. 
For example, - to examine the pg_attribute rows related to a table - mytable, one could write: + to examine the pg_attribute rows related to a table + mytable, one could write: SELECT * FROM pg_attribute WHERE attrelid = 'mytable'::regclass; @@ -4465,11 +4465,11 @@ SELECT * FROM pg_attribute While that doesn't look all that bad by itself, it's still oversimplified. A far more complicated sub-select would be needed to select the right OID if there are multiple tables named - mytable in different schemas. - The regclass input converter handles the table lookup according - to the schema path setting, and so it does the right thing + mytable in different schemas. + The regclass input converter handles the table lookup according + to the schema path setting, and so it does the right thing automatically. Similarly, casting a table's OID to - regclass is handy for symbolic display of a numeric OID. + regclass is handy for symbolic display of a numeric OID.
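A sketch combining both directions of the conversion (mytable is the hypothetical table from the example above; the attnum > 0 condition simply excludes system columns):

SELECT attrelid::regclass, attname
  FROM pg_attribute
 WHERE attrelid = 'mytable'::regclass AND attnum > 0;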
@@ -4487,80 +4487,80 @@ SELECT * FROM pg_attribute - oid + oid any numeric object identifier - 564182 + 564182 - regproc - pg_proc + regproc + pg_proc function name - sum + sum - regprocedure - pg_proc + regprocedure + pg_proc function with argument types - sum(int4) + sum(int4) - regoper - pg_operator + regoper + pg_operator operator name - + + + - regoperator - pg_operator + regoperator + pg_operator operator with argument types - *(integer,integer) or -(NONE,integer) + *(integer,integer) or -(NONE,integer) - regclass - pg_class + regclass + pg_class relation name - pg_type + pg_type - regtype - pg_type + regtype + pg_type data type name - integer + integer - regrole - pg_authid + regrole + pg_authid role name - smithee + smithee - regnamespace - pg_namespace + regnamespace + pg_namespace namespace name - pg_catalog + pg_catalog - regconfig - pg_ts_config + regconfig + pg_ts_config text search configuration - english + english - regdictionary - pg_ts_dict + regdictionary + pg_ts_dict text search dictionary - simple + simple @@ -4571,11 +4571,11 @@ SELECT * FROM pg_attribute schema-qualified names, and will display schema-qualified names on output if the object would not be found in the current search path without being qualified. - The regproc and regoper alias types will only + The regproc and regoper alias types will only accept input names that are unique (not overloaded), so they are - of limited use; for most uses regprocedure or - regoperator are more appropriate. For regoperator, - unary operators are identified by writing NONE for the unused + of limited use; for most uses regprocedure or + regoperator are more appropriate. For regoperator, + unary operators are identified by writing NONE for the unused operand. @@ -4585,12 +4585,12 @@ SELECT * FROM pg_attribute constant of one of these types appears in a stored expression (such as a column default expression or view), it creates a dependency on the referenced object. For example, if a column has a default - expression nextval('my_seq'::regclass), + expression nextval('my_seq'::regclass), PostgreSQL understands that the default expression depends on the sequence - my_seq; the system will not let the sequence be dropped + my_seq; the system will not let the sequence be dropped without first removing the default expression. - regrole is the only exception for the property. Constants of this + regrole is the only exception for the property. Constants of this type are not allowed in such expressions. @@ -4603,21 +4603,21 @@ SELECT * FROM pg_attribute - Another identifier type used by the system is xid, or transaction - (abbreviated xact) identifier. This is the data type of the system columns - xmin and xmax. Transaction identifiers are 32-bit quantities. + Another identifier type used by the system is xid, or transaction + (abbreviated xact) identifier. This is the data type of the system columns + xmin and xmax. Transaction identifiers are 32-bit quantities. - A third identifier type used by the system is cid, or + A third identifier type used by the system is cid, or command identifier. This is the data type of the system columns - cmin and cmax. Command identifiers are also 32-bit quantities. + cmin and cmax. Command identifiers are also 32-bit quantities. - A final identifier type used by the system is tid, or tuple + A final identifier type used by the system is tid, or tuple identifier (row identifier). This is the data type of the system column - ctid. A tuple ID is a pair + ctid. 
A tuple ID is a pair (block number, tuple index within block) that identifies the physical location of the row within its table. @@ -4646,7 +4646,7 @@ SELECT * FROM pg_attribute Internally, an LSN is a 64-bit integer, representing a byte position in the write-ahead log stream. It is printed as two hexadecimal numbers of up to 8 digits each, separated by a slash; for example, - 16/B374D848. The pg_lsn type supports the + 16/B374D848. The pg_lsn type supports the standard comparison operators, like = and >. Two LSNs can be subtracted using the - operator; the result is the number of bytes separating @@ -4736,7 +4736,7 @@ SELECT * FROM pg_attribute The PostgreSQL type system contains a number of special-purpose entries that are collectively called - pseudo-types. A pseudo-type cannot be used as a + pseudo-types. A pseudo-type cannot be used as a column data type, but it can be used to declare a function's argument or result type. Each of the available pseudo-types is useful in situations where a function's behavior does not @@ -4758,106 +4758,106 @@ SELECT * FROM pg_attribute - any + any Indicates that a function accepts any input data type. - anyelement + anyelement Indicates that a function accepts any data type (see ). - anyarray + anyarray Indicates that a function accepts any array data type (see ). - anynonarray + anynonarray Indicates that a function accepts any non-array data type (see ). - anyenum + anyenum Indicates that a function accepts any enum data type (see and ). - anyrange + anyrange Indicates that a function accepts any range data type (see and ). - cstring + cstring Indicates that a function accepts or returns a null-terminated C string. - internal + internal Indicates that a function accepts or returns a server-internal data type. - language_handler - A procedural language call handler is declared to return language_handler. + language_handler + A procedural language call handler is declared to return language_handler. - fdw_handler - A foreign-data wrapper handler is declared to return fdw_handler. + fdw_handler + A foreign-data wrapper handler is declared to return fdw_handler. - index_am_handler - An index access method handler is declared to return index_am_handler. + index_am_handler + An index access method handler is declared to return index_am_handler. - tsm_handler - A tablesample method handler is declared to return tsm_handler. + tsm_handler + A tablesample method handler is declared to return tsm_handler. - record + record Identifies a function taking or returning an unspecified row type. - trigger - A trigger function is declared to return trigger. + trigger + A trigger function is declared to return trigger. - event_trigger - An event trigger function is declared to return event_trigger. + event_trigger + An event trigger function is declared to return event_trigger. - pg_ddl_command + pg_ddl_command Identifies a representation of DDL commands that is available to event triggers. - void + void Indicates that a function returns no value. - unknown + unknown Identifies a not-yet-resolved type, e.g. of an undecorated string literal. - opaque + opaque An obsolete type name that formerly served many of the above purposes. @@ -4876,24 +4876,24 @@ SELECT * FROM pg_attribute Functions coded in procedural languages can use pseudo-types only as allowed by their implementation languages. 
At present most procedural languages forbid use of a pseudo-type as an argument type, and allow - only void and record as a result type (plus - trigger or event_trigger when the function is used + only void and record as a result type (plus + trigger or event_trigger when the function is used as a trigger or event trigger). Some also - support polymorphic functions using the types anyelement, - anyarray, anynonarray, anyenum, and - anyrange. + support polymorphic functions using the types anyelement, + anyarray, anynonarray, anyenum, and + anyrange. - The internal pseudo-type is used to declare functions + The internal pseudo-type is used to declare functions that are meant only to be called internally by the database system, and not by direct invocation in an SQL - query. If a function has at least one internal-type + query. If a function has at least one internal-type argument then it cannot be called from SQL. To preserve the type safety of this restriction it is important to follow this coding rule: do not create any function that is - declared to return internal unless it has at least one - internal argument. + declared to return internal unless it has at least one + internal argument. diff --git a/doc/src/sgml/datetime.sgml b/doc/src/sgml/datetime.sgml index ef9139f9e3..a533bbf8d2 100644 --- a/doc/src/sgml/datetime.sgml +++ b/doc/src/sgml/datetime.sgml @@ -37,18 +37,18 @@ - If the numeric token contains a colon (:), this is + If the numeric token contains a colon (:), this is a time string. Include all subsequent digits and colons. - If the numeric token contains a dash (-), slash - (/), or two or more dots (.), this is + If the numeric token contains a dash (-), slash + (/), or two or more dots (.), this is a date string which might have a text month. If a date token has already been seen, it is instead interpreted as a time zone - name (e.g., America/New_York). + name (e.g., America/New_York). @@ -63,8 +63,8 @@ - If the token starts with a plus (+) or minus - (-), then it is either a numeric time zone or a special + If the token starts with a plus (+) or minus + (-), then it is either a numeric time zone or a special field. @@ -114,7 +114,7 @@ and if no other date fields have been previously read, then interpret as a concatenated date (e.g., 19990118 or 990118). - The interpretation is YYYYMMDD or YYMMDD. + The interpretation is YYYYMMDD or YYMMDD. @@ -128,7 +128,7 @@ If four or six digits and a year has already been read, then - interpret as a time (HHMM or HHMMSS). + interpret as a time (HHMM or HHMMSS). @@ -143,7 +143,7 @@ Otherwise the date field ordering is assumed to follow the - DateStyle setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. + DateStyle setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. Throw an error if a month or day field is found to be out of range. @@ -167,7 +167,7 @@ Gregorian years AD 1-99 can be entered by using 4 digits with leading - zeros (e.g., 0099 is AD 99). + zeros (e.g., 0099 is AD 99). @@ -317,7 +317,7 @@ Ignored - JULIAN, JD, J + JULIAN, JD, J Next field is Julian Date @@ -354,23 +354,23 @@ can be altered by any database user, the possible values for it are under the control of the database administrator — they are in fact names of configuration files stored in - .../share/timezonesets/ of the installation directory. + .../share/timezonesets/ of the installation directory. By adding or altering files in that directory, the administrator can set local policy for timezone abbreviations. 
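To inspect the abbreviation set that is currently active, one can query the view mentioned earlier; a minimal sketch:

SHOW timezone_abbreviations;
-- Default, in an unmodified installation

SELECT abbrev, utc_offset, is_dst
  FROM pg_timezone_abbrevs
 WHERE abbrev = 'EST';
-- EST | -05:00:00 | f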
- timezone_abbreviations can be set to any file name - found in .../share/timezonesets/, if the file's name + timezone_abbreviations can be set to any file name + found in .../share/timezonesets/, if the file's name is entirely alphabetic. (The prohibition against non-alphabetic - characters in timezone_abbreviations prevents reading + characters in timezone_abbreviations prevents reading files outside the intended directory, as well as reading editor backup files and other extraneous files.) A timezone abbreviation file can contain blank lines and comments - beginning with #. Non-comment lines must have one of + beginning with #. Non-comment lines must have one of these formats: @@ -388,12 +388,12 @@ the equivalent offset in seconds from UTC, positive being east from Greenwich and negative being west. For example, -18000 would be five hours west of Greenwich, or North American east coast standard time. - D indicates that the zone name represents local + D indicates that the zone name represents local daylight-savings time rather than standard time. - Alternatively, a time_zone_name can be given, referencing + Alternatively, a time_zone_name can be given, referencing a zone name defined in the IANA timezone database. The zone's definition is consulted to see whether the abbreviation is or has been in use in that zone, and if so, the appropriate meaning is used — that is, @@ -417,34 +417,34 @@ - The @INCLUDE syntax allows inclusion of another file in the - .../share/timezonesets/ directory. Inclusion can be nested, + The @INCLUDE syntax allows inclusion of another file in the + .../share/timezonesets/ directory. Inclusion can be nested, to a limited depth. - The @OVERRIDE syntax indicates that subsequent entries in the + The @OVERRIDE syntax indicates that subsequent entries in the file can override previous entries (typically, entries obtained from included files). Without this, conflicting definitions of the same timezone abbreviation are considered an error. - In an unmodified installation, the file Default contains + In an unmodified installation, the file Default contains all the non-conflicting time zone abbreviations for most of the world. - Additional files Australia and India are + Additional files Australia and India are provided for those regions: these files first include the - Default file and then add or modify abbreviations as needed. + Default file and then add or modify abbreviations as needed. For reference purposes, a standard installation also contains files - Africa.txt, America.txt, etc, containing + Africa.txt, America.txt, etc, containing information about every time zone abbreviation known to be in use according to the IANA timezone database. The zone name definitions found in these files can be copied and pasted into a custom configuration file as needed. Note that these files cannot be directly - referenced as timezone_abbreviations settings, because of + referenced as timezone_abbreviations settings, because of the dot embedded in their names. @@ -460,16 +460,16 @@ Time zone abbreviations defined in the configuration file override non-timezone meanings built into PostgreSQL. - For example, the Australia configuration file defines - SAT (for South Australian Standard Time). When this - file is active, SAT will not be recognized as an abbreviation + For example, the Australia configuration file defines + SAT (for South Australian Standard Time). When this + file is active, SAT will not be recognized as an abbreviation for Saturday. 
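The SAT example can be demonstrated directly. A sketch (the displayed offsets depend on the session's TimeZone setting; 2017-06-10 is chosen because it falls on a Saturday):

SET timezone_abbreviations TO 'Default';
SELECT timestamptz '2017-06-10 07:00 SAT';   -- SAT is read as the day of the week

SET timezone_abbreviations TO 'Australia';
SELECT timestamptz '2017-06-10 07:00 SAT';   -- SAT is read as a zone abbreviation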
- If you modify files in .../share/timezonesets/, + If you modify files in .../share/timezonesets/, it is up to you to make backups — a normal database dump will not include this directory. @@ -492,10 +492,10 @@ datetime literal, the datetime values are constrained by the natural rules for dates and times according to the Gregorian calendar. - PostgreSQL follows the SQL + PostgreSQL follows the SQL standard's lead by counting dates exclusively in the Gregorian calendar, even for years before that calendar was in use. - This rule is known as the proleptic Gregorian calendar. + This rule is known as the proleptic Gregorian calendar. @@ -569,7 +569,7 @@ $ cal 9 1752 dominions, not other places. Since it would be difficult and confusing to try to track the actual calendars that were in use in various places at various times, - PostgreSQL does not try, but rather follows the Gregorian + PostgreSQL does not try, but rather follows the Gregorian calendar rules for all dates, even though this method is not historically accurate. @@ -597,7 +597,7 @@ $ cal 9 1752 and probably takes its name from Scaliger's father, the Italian scholar Julius Caesar Scaliger (1484-1558). In the Julian Date system, each day has a sequential number, starting - from JD 0 (which is sometimes called the Julian Date). + from JD 0 (which is sometimes called the Julian Date). JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or 24 November 4714 BC in the Gregorian calendar. Julian Date counting is most often used by astronomers for labeling their nightly observations, @@ -607,10 +607,10 @@ $ cal 9 1752 - Although PostgreSQL supports Julian Date notation for + Although PostgreSQL supports Julian Date notation for input and output of dates (and also uses Julian dates for some internal datetime calculations), it does not observe the nicety of having dates - run from noon to noon. PostgreSQL treats a Julian Date + run from noon to noon. PostgreSQL treats a Julian Date as running from midnight to midnight. diff --git a/doc/src/sgml/dblink.sgml b/doc/src/sgml/dblink.sgml index f19c6b19f5..1f17d3ad2d 100644 --- a/doc/src/sgml/dblink.sgml +++ b/doc/src/sgml/dblink.sgml @@ -8,8 +8,8 @@ - dblink is a module that supports connections to - other PostgreSQL databases from within a database + dblink is a module that supports connections to + other PostgreSQL databases from within a database session. @@ -44,9 +44,9 @@ dblink_connect(text connname, text connstr) returns text Description - dblink_connect() establishes a connection to a remote - PostgreSQL database. The server and database to - be contacted are identified through a standard libpq + dblink_connect() establishes a connection to a remote + PostgreSQL database. The server and database to + be contacted are identified through a standard libpq connection string. Optionally, a name can be assigned to the connection. Multiple named connections can be open at once, but only one unnamed connection is permitted at a time. The connection @@ -81,9 +81,9 @@ dblink_connect(text connname, text connstr) returns text connstr - libpq-style connection info string, for example + libpq-style connection info string, for example hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres - password=mypasswd. + password=mypasswd. For details see . Alternatively, the name of a foreign server. 
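A minimal connection sketch; the connection name myconn and the connection string are placeholders, and any libpq connection string works here:

SELECT dblink_connect('myconn', 'dbname=postgres');
-- OK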
@@ -96,7 +96,7 @@ dblink_connect(text connname, text connstr) returns text Return Value - Returns status, which is always OK (since any error + Returns status, which is always OK (since any error causes the function to throw an error instead of returning). @@ -105,15 +105,15 @@ dblink_connect(text connname, text connstr) returns text Notes - Only superusers may use dblink_connect to create + Only superusers may use dblink_connect to create non-password-authenticated connections. If non-superusers need this - capability, use dblink_connect_u instead. + capability, use dblink_connect_u instead. It is unwise to choose connection names that contain equal signs, as this opens a risk of confusion with connection info strings - in other dblink functions. + in other dblink functions. @@ -208,8 +208,8 @@ dblink_connect_u(text connname, text connstr) returns text Description - dblink_connect_u() is identical to - dblink_connect(), except that it will allow non-superusers + dblink_connect_u() is identical to + dblink_connect(), except that it will allow non-superusers to connect using any authentication method. @@ -217,24 +217,24 @@ dblink_connect_u(text connname, text connstr) returns text If the remote server selects an authentication method that does not involve a password, then impersonation and subsequent escalation of privileges can occur, because the session will appear to have - originated from the user as which the local PostgreSQL + originated from the user as which the local PostgreSQL server runs. Also, even if the remote server does demand a password, it is possible for the password to be supplied from the server - environment, such as a ~/.pgpass file belonging to the + environment, such as a ~/.pgpass file belonging to the server's user. This opens not only a risk of impersonation, but the possibility of exposing a password to an untrustworthy remote server. - Therefore, dblink_connect_u() is initially - installed with all privileges revoked from PUBLIC, + Therefore, dblink_connect_u() is initially + installed with all privileges revoked from PUBLIC, making it un-callable except by superusers. In some situations - it may be appropriate to grant EXECUTE permission for - dblink_connect_u() to specific users who are considered + it may be appropriate to grant EXECUTE permission for + dblink_connect_u() to specific users who are considered trustworthy, but this should be done with care. It is also recommended - that any ~/.pgpass file belonging to the server's user - not contain any records specifying a wildcard host name. + that any ~/.pgpass file belonging to the server's user + not contain any records specifying a wildcard host name. - For further details see dblink_connect(). + For further details see dblink_connect(). @@ -265,8 +265,8 @@ dblink_disconnect(text connname) returns text Description - dblink_disconnect() closes a connection previously opened - by dblink_connect(). The form with no arguments closes + dblink_disconnect() closes a connection previously opened + by dblink_connect(). The form with no arguments closes an unnamed connection. @@ -290,7 +290,7 @@ dblink_disconnect(text connname) returns text Return Value - Returns status, which is always OK (since any error + Returns status, which is always OK (since any error causes the function to throw an error instead of returning). 
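Closing the named connection opened above is symmetric; a sketch:

SELECT dblink_disconnect('myconn');
-- OK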
@@ -341,15 +341,15 @@ dblink(text sql [, bool fail_on_error]) returns setof record
 Description
- dblink executes a query (usually a SELECT,
+ dblink executes a query (usually a SELECT,
 but it can be any SQL statement that returns rows) in a remote
 database.
- When two text arguments are given, the first one is first
+ When two text arguments are given, the first one is first
 looked up as a persistent connection's name; if found, the command
 is executed on that connection. If not found, the first argument
- is treated as a connection info string as for dblink_connect,
+ is treated as a connection info string as for dblink_connect,
 and the indicated connection is made just for the duration of this command.
@@ -373,7 +373,7 @@ dblink(text sql [, bool fail_on_error]) returns setof record
 A connection info string, as previously described for
- dblink_connect.
+ dblink_connect.
@@ -383,7 +383,7 @@ dblink(text sql [, bool fail_on_error]) returns setof record
 The SQL query that you wish to execute in the remote database,
- for example select * from foo.
+ for example select * from foo.
@@ -407,11 +407,11 @@ dblink(text sql [, bool fail_on_error]) returns setof record
 The function returns the row(s) produced by the query. Since
- dblink can be used with any query, it is declared
- to return record, rather than specifying any particular
+ dblink can be used with any query, it is declared
+ to return record, rather than specifying any particular
 set of columns. This means that you must specify the expected
 set of columns in the calling query — otherwise
- PostgreSQL would not know what to expect.
+ PostgreSQL would not know what to expect.
 Here is an example:

SELECT *
    FROM dblink('dbname=mydb', 'select proname, prosrc from pg_proc')
      AS t1(proname name, prosrc text)
    WHERE proname LIKE 'bytea%';

- The alias part of the FROM clause must
+ The alias part of the FROM clause must
 specify the column names and types that the function will return.
 (Specifying column names in an alias is actually standard SQL
- syntax, but specifying column types is a PostgreSQL
+ syntax, but specifying column types is a PostgreSQL
 extension.) This allows the system to understand what
- * should expand to, and what proname
- in the WHERE clause refers to, in advance of trying
+ * should expand to, and what proname
+ in the WHERE clause refers to, in advance of trying
 to execute the function. At run time, an error will be thrown
 if the actual query result from the remote database does not
- have the same number of columns shown in the FROM clause.
- The column names need not match, however, and dblink
+ have the same number of columns shown in the FROM clause.
+ The column names need not match, however, and dblink
 does not insist on exact type matches either. It will succeed
 so long as the returned data strings are valid input for the
- column type declared in the FROM clause.
+ column type declared in the FROM clause.
@@ -442,7 +442,7 @@ SELECT *
 Notes
- A convenient way to use dblink with predetermined
+ A convenient way to use dblink with predetermined
 queries is to create a view. This allows the column type information
 to be buried in the view, instead of having to spell it out in every
 query. For example,
@@ -559,15 +559,15 @@ dblink_exec(text sql [, bool fail_on_error]) returns text
 Description
- dblink_exec executes a command (that is, any SQL statement
+ dblink_exec executes a command (that is, any SQL statement
 that doesn't return rows) in a remote database.
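A sketch of a dblink_exec call on the named connection opened earlier; it assumes a table foo with matching columns exists in the remote database:

SELECT dblink_exec('myconn', 'insert into foo values (21, ''z'', ''{"a0","b0","c0"}'')');
-- INSERT 0 1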
- When two text arguments are given, the first one is first + When two text arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument - is treated as a connection info string as for dblink_connect, + is treated as a connection info string as for dblink_connect, and the indicated connection is made just for the duration of this command. @@ -591,7 +591,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text A connection info string, as previously described for - dblink_connect. + dblink_connect. @@ -602,7 +602,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text The SQL command that you wish to execute in the remote database, for example - insert into foo values(0,'a','{"a0","b0","c0"}'). + insert into foo values(0,'a','{"a0","b0","c0"}'). @@ -614,7 +614,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to ERROR. + and the function's return value is set to ERROR. @@ -625,7 +625,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text Return Value - Returns status, either the command's status string or ERROR. + Returns status, either the command's status string or ERROR. @@ -695,9 +695,9 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret Description - dblink_open() opens a cursor in a remote database. + dblink_open() opens a cursor in a remote database. The cursor can subsequently be manipulated with - dblink_fetch() and dblink_close(). + dblink_fetch() and dblink_close(). @@ -728,8 +728,8 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret sql - The SELECT statement that you wish to execute in the remote - database, for example select * from pg_class. + The SELECT statement that you wish to execute in the remote + database, for example select * from pg_class. @@ -741,7 +741,7 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to ERROR. + and the function's return value is set to ERROR. @@ -752,7 +752,7 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret Return Value - Returns status, either OK or ERROR. + Returns status, either OK or ERROR. @@ -761,16 +761,16 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret Since a cursor can only persist within a transaction, - dblink_open starts an explicit transaction block - (BEGIN) on the remote side, if the remote side was + dblink_open starts an explicit transaction block + (BEGIN) on the remote side, if the remote side was not already within a transaction. This transaction will be - closed again when the matching dblink_close is + closed again when the matching dblink_close is executed. Note that if - you use dblink_exec to change data between - dblink_open and dblink_close, - and then an error occurs or you use dblink_disconnect before - dblink_close, your change will be - lost because the transaction will be aborted. 
+ you use dblink_exec to change data between + dblink_open and dblink_close, + and then an error occurs or you use dblink_disconnect before + dblink_close, your change will be + lost because the transaction will be aborted. @@ -819,8 +819,8 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) Description - dblink_fetch fetches rows from a cursor previously - established by dblink_open. + dblink_fetch fetches rows from a cursor previously + established by dblink_open. @@ -851,7 +851,7 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) howmany - The maximum number of rows to retrieve. The next howmany + The maximum number of rows to retrieve. The next howmany rows are fetched, starting at the current cursor position, moving forward. Once the cursor has reached its end, no more rows are produced. @@ -878,7 +878,7 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) The function returns the row(s) fetched from the cursor. To use this function, you will need to specify the expected set of columns, - as previously discussed for dblink. + as previously discussed for dblink. @@ -887,11 +887,11 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) On a mismatch between the number of return columns specified in the - FROM clause, and the actual number of columns returned by the + FROM clause, and the actual number of columns returned by the remote cursor, an error will be thrown. In this event, the remote cursor is still advanced by as many rows as it would have been if the error had not occurred. The same is true for any other error occurring in the local - query after the remote FETCH has been done. + query after the remote FETCH has been done. @@ -972,8 +972,8 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text Description - dblink_close closes a cursor previously opened with - dblink_open. + dblink_close closes a cursor previously opened with + dblink_open. @@ -1007,7 +1007,7 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to ERROR. + and the function's return value is set to ERROR. @@ -1018,7 +1018,7 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text Return Value - Returns status, either OK or ERROR. + Returns status, either OK or ERROR. @@ -1026,9 +1026,9 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text Notes - If dblink_open started an explicit transaction block, + If dblink_open started an explicit transaction block, and this is the last remaining open cursor in this connection, - dblink_close will issue the matching COMMIT. + dblink_close will issue the matching COMMIT. @@ -1082,8 +1082,8 @@ dblink_get_connections() returns text[] Description - dblink_get_connections returns an array of the names - of all open named dblink connections. + dblink_get_connections returns an array of the names + of all open named dblink connections. @@ -1127,7 +1127,7 @@ dblink_error_message(text connname) returns text Description - dblink_error_message fetches the most recent remote + dblink_error_message fetches the most recent remote error message for a given connection. 
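Putting the cursor functions above together, a round trip might look like this (a sketch assuming the named connection myconn is open):

SELECT dblink_open('myconn', 'curs', 'select proname, prosrc from pg_proc');
-- OK

SELECT * FROM dblink_fetch('myconn', 'curs', 5) AS (proname name, prosrc text);
-- up to five rows, starting at the current cursor position

SELECT dblink_close('myconn', 'curs');
-- OK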
@@ -1190,7 +1190,7 @@ dblink_send_query(text connname, text sql) returns int Description - dblink_send_query sends a query to be executed + dblink_send_query sends a query to be executed asynchronously, that is, without immediately waiting for the result. There must not be an async query already in progress on the connection. @@ -1198,10 +1198,10 @@ dblink_send_query(text connname, text sql) returns int After successfully dispatching an async query, completion status - can be checked with dblink_is_busy, and the results - are ultimately collected with dblink_get_result. + can be checked with dblink_is_busy, and the results + are ultimately collected with dblink_get_result. It is also possible to attempt to cancel an active async query - using dblink_cancel_query. + using dblink_cancel_query. @@ -1223,7 +1223,7 @@ dblink_send_query(text connname, text sql) returns int The SQL statement that you wish to execute in the remote database, - for example select * from pg_class. + for example select * from pg_class. @@ -1272,7 +1272,7 @@ dblink_is_busy(text connname) returns int Description - dblink_is_busy tests whether an async query is in progress. + dblink_is_busy tests whether an async query is in progress. @@ -1297,7 +1297,7 @@ dblink_is_busy(text connname) returns int Returns 1 if connection is busy, 0 if it is not busy. If this function returns 0, it is guaranteed that - dblink_get_result will not block. + dblink_get_result will not block. @@ -1336,10 +1336,10 @@ dblink_get_notify(text connname) returns setof (notify_name text, be_pid int, ex Description - dblink_get_notify retrieves notifications on either + dblink_get_notify retrieves notifications on either the unnamed connection, or on a named connection if specified. - To receive notifications via dblink, LISTEN must - first be issued, using dblink_exec. + To receive notifications via dblink, LISTEN must + first be issued, using dblink_exec. For details see and . @@ -1417,9 +1417,9 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record Description - dblink_get_result collects the results of an - asynchronous query previously sent with dblink_send_query. - If the query is not already completed, dblink_get_result + dblink_get_result collects the results of an + asynchronous query previously sent with dblink_send_query. + If the query is not already completed, dblink_get_result will wait until it is. @@ -1458,14 +1458,14 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record For an async query (that is, a SQL statement returning rows), the function returns the row(s) produced by the query. To use this function, you will need to specify the expected set of columns, - as previously discussed for dblink. + as previously discussed for dblink. For an async command (that is, a SQL statement not returning rows), the function returns a single row with a single text column containing the command's status string. It is still necessary to specify that - the result will have a single text column in the calling FROM + the result will have a single text column in the calling FROM clause. @@ -1474,22 +1474,22 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record Notes - This function must be called if - dblink_send_query returned 1. + This function must be called if + dblink_send_query returned 1. It must be called once for each query sent, and one additional time to obtain an empty set result, before the connection can be used again. 
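The complete asynchronous cycle can be sketched as follows, again assuming an open connection myconn; note the second dblink_get_result call, which collects the terminating empty set:

SELECT dblink_send_query('myconn', 'select proname from pg_proc');
-- 1 (successfully dispatched)

SELECT dblink_is_busy('myconn');
-- 0 once the result is ready

SELECT * FROM dblink_get_result('myconn') AS t(proname name);
SELECT * FROM dblink_get_result('myconn') AS t(proname name);  -- empty set; the connection is usable again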
- When using dblink_send_query and - dblink_get_result, dblink fetches the entire + When using dblink_send_query and + dblink_get_result, dblink fetches the entire remote query result before returning any of it to the local query processor. If the query returns a large number of rows, this can result in transient memory bloat in the local session. It may be better to open - such a query as a cursor with dblink_open and then fetch a + such a query as a cursor with dblink_open and then fetch a manageable number of rows at a time. Alternatively, use plain - dblink(), which avoids memory bloat by spooling large result + dblink(), which avoids memory bloat by spooling large result sets to disk. @@ -1581,13 +1581,13 @@ dblink_cancel_query(text connname) returns text Description - dblink_cancel_query attempts to cancel any query that + dblink_cancel_query attempts to cancel any query that is in progress on the named connection. Note that this is not certain to succeed (since, for example, the remote query might already have finished). A cancel request simply improves the odds that the query will fail soon. You must still complete the normal query protocol, for example by calling - dblink_get_result. + dblink_get_result. @@ -1610,7 +1610,7 @@ dblink_cancel_query(text connname) returns text Return Value - Returns OK if the cancel request has been sent, or + Returns OK if the cancel request has been sent, or the text of an error message on failure. @@ -1651,7 +1651,7 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results Description - dblink_get_pkey provides information about the primary + dblink_get_pkey provides information about the primary key of a relation in the local database. This is sometimes useful in generating queries to be sent to remote databases. @@ -1665,10 +1665,10 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results relname - Name of a local relation, for example foo or - myschema.mytab. Include double quotes if the + Name of a local relation, for example foo or + myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for - example "FooBar"; without quotes, the string + example "FooBar"; without quotes, the string will be folded to lower case. @@ -1687,7 +1687,7 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results CREATE TYPE dblink_pkey_results AS (position int, colname text); - The position column simply runs from 1 to N; + The position column simply runs from 1 to N; it is the number of the field within the primary key, not the number within the table's columns. @@ -1748,10 +1748,10 @@ dblink_build_sql_insert(text relname, Description - dblink_build_sql_insert can be useful in doing selective + dblink_build_sql_insert can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL - INSERT command that will duplicate that row, but with + INSERT command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for the last two arguments.) @@ -1766,10 +1766,10 @@ dblink_build_sql_insert(text relname, relname - Name of a local relation, for example foo or - myschema.mytab. Include double quotes if the + Name of a local relation, for example foo or + myschema.mytab. 
Include double quotes if the name is mixed-case or contains special characters, for - example "FooBar"; without quotes, the string + example "FooBar"; without quotes, the string will be folded to lower case. @@ -1780,7 +1780,7 @@ dblink_build_sql_insert(text relname, Attribute numbers (1-based) of the primary key fields, - for example 1 2. + for example 1 2. @@ -1811,7 +1811,7 @@ dblink_build_sql_insert(text relname, Values of the primary key fields to be placed in the resulting - INSERT command. Each field is represented in text form. + INSERT command. Each field is represented in text form. @@ -1828,10 +1828,10 @@ dblink_build_sql_insert(text relname, Notes - As of PostgreSQL 9.0, the attribute numbers in + As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in - SELECT * FROM relname. Previous versions interpreted the + SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. @@ -1881,9 +1881,9 @@ dblink_build_sql_delete(text relname, Description - dblink_build_sql_delete can be useful in doing selective + dblink_build_sql_delete can be useful in doing selective replication of a local table to a remote database. It builds a SQL - DELETE command that will delete the row with the given + DELETE command that will delete the row with the given primary key values. @@ -1896,10 +1896,10 @@ dblink_build_sql_delete(text relname, relname - Name of a local relation, for example foo or - myschema.mytab. Include double quotes if the + Name of a local relation, for example foo or + myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for - example "FooBar"; without quotes, the string + example "FooBar"; without quotes, the string will be folded to lower case. @@ -1910,7 +1910,7 @@ dblink_build_sql_delete(text relname, Attribute numbers (1-based) of the primary key fields, - for example 1 2. + for example 1 2. @@ -1929,7 +1929,7 @@ dblink_build_sql_delete(text relname, Values of the primary key fields to be used in the resulting - DELETE command. Each field is represented in text form. + DELETE command. Each field is represented in text form. @@ -1946,10 +1946,10 @@ dblink_build_sql_delete(text relname, Notes - As of PostgreSQL 9.0, the attribute numbers in + As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in - SELECT * FROM relname. Previous versions interpreted the + SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. @@ -2000,15 +2000,15 @@ dblink_build_sql_update(text relname, Description - dblink_build_sql_update can be useful in doing selective + dblink_build_sql_update can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL - UPDATE command that will duplicate that row, but with + UPDATE command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for - the last two arguments.) 
The UPDATE command always assigns + the last two arguments.) The UPDATE command always assigns all fields of the row — the main difference between this and - dblink_build_sql_insert is that it's assumed that + dblink_build_sql_insert is that it's assumed that the target row already exists in the remote table. @@ -2021,10 +2021,10 @@ dblink_build_sql_update(text relname, relname - Name of a local relation, for example foo or - myschema.mytab. Include double quotes if the + Name of a local relation, for example foo or + myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for - example "FooBar"; without quotes, the string + example "FooBar"; without quotes, the string will be folded to lower case. @@ -2035,7 +2035,7 @@ dblink_build_sql_update(text relname, Attribute numbers (1-based) of the primary key fields, - for example 1 2. + for example 1 2. @@ -2066,7 +2066,7 @@ dblink_build_sql_update(text relname, Values of the primary key fields to be placed in the resulting - UPDATE command. Each field is represented in text form. + UPDATE command. Each field is represented in text form. @@ -2083,10 +2083,10 @@ dblink_build_sql_update(text relname, Notes - As of PostgreSQL 9.0, the attribute numbers in + As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in - SELECT * FROM relname. Previous versions interpreted the + SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index b05a9c2150..817db92af2 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -149,7 +149,7 @@ DROP TABLE products; Nevertheless, it is common in SQL script files to unconditionally try to drop each table before creating it, ignoring any error messages, so that the script works whether or not the table exists. - (If you like, you can use the DROP TABLE IF EXISTS variant + (If you like, you can use the DROP TABLE IF EXISTS variant to avoid the error messages, but this is not standard SQL.) @@ -207,9 +207,9 @@ CREATE TABLE products ( The default value can be an expression, which will be evaluated whenever the default value is inserted (not when the table is created). A common example - is for a timestamp column to have a default of CURRENT_TIMESTAMP, + is for a timestamp column to have a default of CURRENT_TIMESTAMP, so that it gets set to the time of row insertion. Another common - example is generating a serial number for each row. + example is generating a serial number for each row. In PostgreSQL this is typically done by something like: @@ -218,8 +218,8 @@ CREATE TABLE products ( ... ); - where the nextval() function supplies successive values - from a sequence object (see nextval() function supplies successive values + from a sequence object (see ). This arrangement is sufficiently common that there's a special shorthand for it: @@ -228,7 +228,7 @@ CREATE TABLE products ( ... ); - The SERIAL shorthand is discussed further in SERIAL shorthand is discussed further in . 
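
As a sketch combining these pieces (the table and column names here are hypothetical, not taken from the reference material), an expression default and the SERIAL shorthand can be declared together:

CREATE TABLE orders (
    order_no   serial,                               -- shorthand for a sequence-backed default
    ordered_at timestamp DEFAULT CURRENT_TIMESTAMP,  -- evaluated at each insertion
    note       text DEFAULT 'none'
);

INSERT INTO orders (note) VALUES ('first order');    -- order_no and ordered_at are filled in automatically
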
@@ -385,7 +385,7 @@ CREATE TABLE products ( CHECK (price > 0), discounted_price numeric, CHECK (discounted_price > 0), - CONSTRAINT valid_discount CHECK (price > discounted_price) + CONSTRAINT valid_discount CHECK (price > discounted_price) ); @@ -623,7 +623,7 @@ CREATE TABLE example ( Adding a primary key will automatically create a unique B-tree index on the column or group of columns listed in the primary key, and will - force the column(s) to be marked NOT NULL. + force the column(s) to be marked NOT NULL. @@ -828,7 +828,7 @@ CREATE TABLE order_items ( (The essential difference between these two choices is that NO ACTION allows the check to be deferred until later in the transaction, whereas RESTRICT does not.) - CASCADE specifies that when a referenced row is deleted, + CASCADE specifies that when a referenced row is deleted, row(s) referencing it should be automatically deleted as well. There are two other options: SET NULL and SET DEFAULT. @@ -845,19 +845,19 @@ CREATE TABLE order_items ( Analogous to ON DELETE there is also ON UPDATE which is invoked when a referenced column is changed (updated). The possible actions are the same. - In this case, CASCADE means that the updated values of the + In this case, CASCADE means that the updated values of the referenced column(s) should be copied into the referencing row(s). Normally, a referencing row need not satisfy the foreign key constraint - if any of its referencing columns are null. If MATCH FULL + if any of its referencing columns are null. If MATCH FULL is added to the foreign key declaration, a referencing row escapes satisfying the constraint only if all its referencing columns are null (so a mix of null and non-null values is guaranteed to fail a - MATCH FULL constraint). If you don't want referencing rows + MATCH FULL constraint). If you don't want referencing rows to be able to avoid satisfying the foreign key constraint, declare the - referencing column(s) as NOT NULL. + referencing column(s) as NOT NULL. @@ -909,7 +909,7 @@ CREATE TABLE circles ( See also CREATE - TABLE ... CONSTRAINT ... EXCLUDE for details. + TABLE ... CONSTRAINT ... EXCLUDE for details. @@ -923,7 +923,7 @@ CREATE TABLE circles ( System Columns - Every table has several system columns that are + Every table has several system columns that are implicitly defined by the system. Therefore, these names cannot be used as names of user-defined columns. (Note that these restrictions are separate from whether the name is a key word or @@ -939,7 +939,7 @@ CREATE TABLE circles ( - oid + oid @@ -957,7 +957,7 @@ CREATE TABLE circles ( - tableoid + tableoid tableoid @@ -976,7 +976,7 @@ CREATE TABLE circles ( - xmin + xmin xmin @@ -992,7 +992,7 @@ CREATE TABLE circles ( - cmin + cmin cmin @@ -1006,7 +1006,7 @@ CREATE TABLE circles ( - xmax + xmax xmax @@ -1023,7 +1023,7 @@ CREATE TABLE circles ( - cmax + cmax cmax @@ -1036,7 +1036,7 @@ CREATE TABLE circles ( - ctid + ctid ctid @@ -1047,7 +1047,7 @@ CREATE TABLE circles ( although the ctid can be used to locate the row version very quickly, a row's ctid will change if it is - updated or moved by VACUUM FULL. Therefore + updated or moved by VACUUM FULL. Therefore ctid is useless as a long-term row identifier. The OID, or even better a user-defined serial number, should be used to identify logical rows. @@ -1074,7 +1074,7 @@ CREATE TABLE circles ( a unique constraint (or unique index) exists, the system takes care not to generate an OID matching an already-existing row. 
(Of course, this is only possible if the table contains fewer - than 232 (4 billion) rows, and in practice the + than 232 (4 billion) rows, and in practice the table size had better be much less than that, or performance might suffer.) @@ -1082,7 +1082,7 @@ CREATE TABLE circles ( OIDs should never be assumed to be unique across tables; use - the combination of tableoid and row OID if you + the combination of tableoid and row OID if you need a database-wide identifier. @@ -1090,7 +1090,7 @@ CREATE TABLE circles ( Of course, the tables in question must be created WITH OIDS. As of PostgreSQL 8.1, - WITHOUT OIDS is the default. + WITHOUT OIDS is the default. @@ -1107,7 +1107,7 @@ CREATE TABLE circles ( Command identifiers are also 32-bit quantities. This creates a hard limit - of 232 (4 billion) SQL commands + of 232 (4 billion) SQL commands within a single transaction. In practice this limit is not a problem — note that the limit is on the number of SQL commands, not the number of rows processed. @@ -1186,7 +1186,7 @@ CREATE TABLE circles ( ALTER TABLE products ADD COLUMN description text; The new column is initially filled with whatever default - value is given (null if you don't specify a DEFAULT clause). + value is given (null if you don't specify a DEFAULT clause). @@ -1196,9 +1196,9 @@ ALTER TABLE products ADD COLUMN description text; ALTER TABLE products ADD COLUMN description text CHECK (description <> ''); In fact all the options that can be applied to a column description - in CREATE TABLE can be used here. Keep in mind however + in CREATE TABLE can be used here. Keep in mind however that the default value must satisfy the given constraints, or the - ADD will fail. Alternatively, you can add + ADD will fail. Alternatively, you can add constraints later (see below) after you've filled in the new column correctly. @@ -1210,7 +1210,7 @@ ALTER TABLE products ADD COLUMN description text CHECK (description <> '') specified, PostgreSQL is able to avoid the physical update. So if you intend to fill the column with mostly nondefault values, it's best to add the column with no default, - insert the correct values using UPDATE, and then add any + insert the correct values using UPDATE, and then add any desired default as described below. @@ -1234,7 +1234,7 @@ ALTER TABLE products DROP COLUMN description; foreign key constraint of another table, PostgreSQL will not silently drop that constraint. You can authorize dropping everything that depends on - the column by adding CASCADE: + the column by adding CASCADE: ALTER TABLE products DROP COLUMN description CASCADE; @@ -1290,13 +1290,13 @@ ALTER TABLE products ALTER COLUMN product_no SET NOT NULL; ALTER TABLE products DROP CONSTRAINT some_name; - (If you are dealing with a generated constraint name like $2, + (If you are dealing with a generated constraint name like $2, don't forget that you'll need to double-quote it to make it a valid identifier.) - As with dropping a column, you need to add CASCADE if you + As with dropping a column, you need to add CASCADE if you want to drop a constraint that something else depends on. An example is that a foreign key constraint depends on a unique or primary key constraint on the referenced column(s). @@ -1326,7 +1326,7 @@ ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL; ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77; Note that this doesn't affect any existing rows in the table, it - just changes the default for future INSERT commands. 
+ just changes the default for future INSERT commands. @@ -1356,12 +1356,12 @@ ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2); This will succeed only if each existing entry in the column can be converted to the new type by an implicit cast. If a more complex - conversion is needed, you can add a USING clause that + conversion is needed, you can add a USING clause that specifies how to compute the new values from the old. - PostgreSQL will attempt to convert the column's + PostgreSQL will attempt to convert the column's default value (if any) to the new type, as well as any constraints that involve the column. But these conversions might fail, or might produce surprising results. It's often best to drop any constraints @@ -1437,11 +1437,11 @@ ALTER TABLE products RENAME TO items; - There are different kinds of privileges: SELECT, - INSERT, UPDATE, DELETE, - TRUNCATE, REFERENCES, TRIGGER, - CREATE, CONNECT, TEMPORARY, - EXECUTE, and USAGE. + There are different kinds of privileges: SELECT, + INSERT, UPDATE, DELETE, + TRUNCATE, REFERENCES, TRIGGER, + CREATE, CONNECT, TEMPORARY, + EXECUTE, and USAGE. The privileges applicable to a particular object vary depending on the object's type (table, function, etc). For complete information on the different types of privileges @@ -1480,7 +1480,7 @@ GRANT UPDATE ON accounts TO joe; The special role name PUBLIC can be used to grant a privilege to every role on the system. Also, - group roles can be set up to help manage privileges when + group roles can be set up to help manage privileges when there are many users of a database — for details see . @@ -1492,7 +1492,7 @@ GRANT UPDATE ON accounts TO joe; REVOKE ALL ON accounts FROM PUBLIC; The special privileges of the object owner (i.e., the right to do - DROP, GRANT, REVOKE, etc.) + DROP, GRANT, REVOKE, etc.) are always implicit in being the owner, and cannot be granted or revoked. But the object owner can choose to revoke their own ordinary privileges, for example to make a @@ -1502,7 +1502,7 @@ REVOKE ALL ON accounts FROM PUBLIC; Ordinarily, only the object's owner (or a superuser) can grant or revoke privileges on an object. However, it is possible to grant a - privilege with grant option, which gives the recipient + privilege with grant option, which gives the recipient the right to grant it in turn to others. If the grant option is subsequently revoked then all who received the privilege from that recipient (directly or through a chain of grants) will lose the @@ -1525,10 +1525,10 @@ REVOKE ALL ON accounts FROM PUBLIC; In addition to the SQL-standard privilege system available through , - tables can have row security policies that restrict, + tables can have row security policies that restrict, on a per-user basis, which rows can be returned by normal queries or inserted, updated, or deleted by data modification commands. - This feature is also known as Row-Level Security. + This feature is also known as Row-Level Security. By default, tables do not have any policies, so that if a user has access privileges to a table according to the SQL privilege system, all rows within it are equally available for querying or updating. @@ -1537,20 +1537,20 @@ REVOKE ALL ON accounts FROM PUBLIC; When row security is enabled on a table (with ALTER TABLE ... ENABLE ROW LEVEL - SECURITY), all normal access to the table for selecting rows or + SECURITY), all normal access to the table for selecting rows or modifying rows must be allowed by a row security policy. 
(However, the table's owner is typically not subject to row security policies.) If no policy exists for the table, a default-deny policy is used, meaning that no rows are visible or can be modified. Operations that apply to the - whole table, such as TRUNCATE and REFERENCES, + whole table, such as TRUNCATE and REFERENCES, are not subject to row security. Row security policies can be specific to commands, or to roles, or to both. A policy can be specified to apply to ALL - commands, or to SELECT, INSERT, UPDATE, - or DELETE. Multiple roles can be assigned to a given + commands, or to SELECT, INSERT, UPDATE, + or DELETE. Multiple roles can be assigned to a given policy, and normal role membership and inheritance rules apply. @@ -1562,7 +1562,7 @@ REVOKE ALL ON accounts FROM PUBLIC; rule are leakproof functions, which are guaranteed to not leak information; the optimizer may choose to apply such functions ahead of the row-security check.) Rows for which the expression does - not return true will not be processed. Separate expressions + not return true will not be processed. Separate expressions may be specified to provide independent control over the rows which are visible and the rows which are allowed to be modified. Policy expressions are run as part of the query and with the privileges of the @@ -1571,11 +1571,11 @@ REVOKE ALL ON accounts FROM PUBLIC; - Superusers and roles with the BYPASSRLS attribute always + Superusers and roles with the BYPASSRLS attribute always bypass the row security system when accessing a table. Table owners normally bypass row security as well, though a table owner can choose to be subject to row security with ALTER - TABLE ... FORCE ROW LEVEL SECURITY. + TABLE ... FORCE ROW LEVEL SECURITY. @@ -1609,8 +1609,8 @@ REVOKE ALL ON accounts FROM PUBLIC; As a simple example, here is how to create a policy on - the account relation to allow only members of - the managers role to access rows, and only rows of their + the account relation to allow only members of + the managers role to access rows, and only rows of their accounts: @@ -1627,7 +1627,7 @@ CREATE POLICY account_managers ON accounts TO managers If no role is specified, or the special user name PUBLIC is used, then the policy applies to all users on the system. To allow all users to access their own row in - a users table, a simple policy can be used: + a users table, a simple policy can be used: @@ -1637,9 +1637,9 @@ CREATE POLICY user_policy ON users To use a different policy for rows that are being added to the table - compared to those rows that are visible, the WITH CHECK + compared to those rows that are visible, the WITH CHECK clause can be used. This policy would allow all users to view all rows - in the users table, but only modify their own: + in the users table, but only modify their own: @@ -1649,7 +1649,7 @@ CREATE POLICY user_policy ON users - Row security can also be disabled with the ALTER TABLE + Row security can also be disabled with the ALTER TABLE command. Disabling row security does not remove any policies that are defined on the table; they are simply ignored. Then all rows in the table are visible and modifiable, subject to the standard SQL privileges @@ -1658,7 +1658,7 @@ CREATE POLICY user_policy ON users Below is a larger example of how this feature can be used in production - environments. The table passwd emulates a Unix password + environments. 
The table passwd emulates a Unix password file: @@ -1820,7 +1820,7 @@ UPDATE 0 Referential integrity checks, such as unique or primary key constraints and foreign key references, always bypass row security to ensure that data integrity is maintained. Care must be taken when developing - schemas and row level policies to avoid covert channel leaks of + schemas and row level policies to avoid covert channel leaks of information through such referential integrity checks. @@ -1830,7 +1830,7 @@ UPDATE 0 disastrous if row security silently caused some rows to be omitted from the backup. In such a situation, you can set the configuration parameter - to off. This does not in itself bypass row security; + to off. This does not in itself bypass row security; what it does is throw an error if any query's results would get filtered by a policy. The reason for the error can then be investigated and fixed. @@ -1842,7 +1842,7 @@ UPDATE 0 best-performing case; when possible, it's best to design row security applications to work this way. If it is necessary to consult other rows or other tables to make a policy decision, that can be accomplished using - sub-SELECTs, or functions that contain SELECTs, + sub-SELECTs, or functions that contain SELECTs, in the policy expressions. Be aware however that such accesses can create race conditions that could allow information leakage if care is not taken. As an example, consider the following table design: @@ -1896,8 +1896,8 @@ GRANT ALL ON information TO public; - Now suppose that alice wishes to change the slightly - secret information, but decides that mallory should not + Now suppose that alice wishes to change the slightly + secret information, but decides that mallory should not be trusted with the new content of that row, so she does: @@ -1909,36 +1909,36 @@ COMMIT; - That looks safe; there is no window wherein mallory should be - able to see the secret from mallory string. However, there is - a race condition here. If mallory is concurrently doing, + That looks safe; there is no window wherein mallory should be + able to see the secret from mallory string. However, there is + a race condition here. If mallory is concurrently doing, say, SELECT * FROM information WHERE group_id = 2 FOR UPDATE; - and her transaction is in READ COMMITTED mode, it is possible - for her to see secret from mallory. That happens if her - transaction reaches the information row just - after alice's does. It blocks waiting - for alice's transaction to commit, then fetches the updated - row contents thanks to the FOR UPDATE clause. However, it - does not fetch an updated row for the - implicit SELECT from users, because that - sub-SELECT did not have FOR UPDATE; instead - the users row is read with the snapshot taken at the start + and her transaction is in READ COMMITTED mode, it is possible + for her to see secret from mallory. That happens if her + transaction reaches the information row just + after alice's does. It blocks waiting + for alice's transaction to commit, then fetches the updated + row contents thanks to the FOR UPDATE clause. However, it + does not fetch an updated row for the + implicit SELECT from users, because that + sub-SELECT did not have FOR UPDATE; instead + the users row is read with the snapshot taken at the start of the query. Therefore, the policy expression tests the old value - of mallory's privilege level and allows her to see the + of mallory's privilege level and allows her to see the updated row. There are several ways around this problem. 
One simple answer is to use - SELECT ... FOR SHARE in sub-SELECTs in row - security policies. However, that requires granting UPDATE - privilege on the referenced table (here users) to the + SELECT ... FOR SHARE in sub-SELECTs in row + security policies. However, that requires granting UPDATE + privilege on the referenced table (here users) to the affected users, which might be undesirable. (But another row security policy could be applied to prevent them from actually exercising that - privilege; or the sub-SELECT could be embedded into a security + privilege; or the sub-SELECT could be embedded into a security definer function.) Also, heavy concurrent use of row share locks on the referenced table could pose a performance problem, especially if updates of it are frequent. Another solution, practical if updates of the @@ -1977,19 +1977,19 @@ SELECT * FROM information WHERE group_id = 2 FOR UPDATE; Users of a cluster do not necessarily have the privilege to access every database in the cluster. Sharing of user names means that there - cannot be different users named, say, joe in two databases + cannot be different users named, say, joe in two databases in the same cluster; but the system can be configured to allow - joe access to only some of the databases. + joe access to only some of the databases. - A database contains one or more named schemas, which + A database contains one or more named schemas, which in turn contain tables. Schemas also contain other kinds of named objects, including data types, functions, and operators. The same object name can be used in different schemas without conflict; for - example, both schema1 and myschema can - contain tables named mytable. Unlike databases, + example, both schema1 and myschema can + contain tables named mytable. Unlike databases, schemas are not rigidly separated: a user can access objects in any of the schemas in the database they are connected to, if they have privileges to do so. @@ -2053,10 +2053,10 @@ CREATE SCHEMA myschema; To create or access objects in a schema, write a - qualified name consisting of the schema name and + qualified name consisting of the schema name and table name separated by a dot: -schema.table +schema.table This works anywhere a table name is expected, including the table modification commands and the data access commands discussed in @@ -2068,10 +2068,10 @@ CREATE SCHEMA myschema; Actually, the even more general syntax -database.schema.table +database.schema.table can be used too, but at present this is just for pro - forma compliance with the SQL standard. If you write a database name, + forma compliance with the SQL standard. If you write a database name, it must be the same as the database you are connected to. @@ -2116,7 +2116,7 @@ CREATE SCHEMA schema_name AUTHORIZATION - Schema names beginning with pg_ are reserved for + Schema names beginning with pg_ are reserved for system purposes and cannot be created by users. @@ -2163,9 +2163,9 @@ CREATE TABLE public.products ( ... ); Qualified names are tedious to write, and it's often best not to wire a particular schema name into applications anyway. Therefore - tables are often referred to by unqualified names, + tables are often referred to by unqualified names, which consist of just the table name. The system determines which table - is meant by following a search path, which is a list + is meant by following a search path, which is a list of schemas to look in. The first matching table in the search path is taken to be the one wanted. 
If there is no match in the search path, an error is reported, even if matching table names exist @@ -2180,7 +2180,7 @@ CREATE TABLE public.products ( ... ); The first schema named in the search path is called the current schema. Aside from being the first schema searched, it is also the schema in - which new tables will be created if the CREATE TABLE + which new tables will be created if the CREATE TABLE command does not specify a schema name. @@ -2253,7 +2253,7 @@ SET search_path TO myschema; need to write a qualified operator name in an expression, there is a special provision: you must write -OPERATOR(schema.operator) +OPERATOR(schema.operator) This is needed to avoid syntactic ambiguity. An example is: @@ -2310,28 +2310,28 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; - In addition to public and user-created schemas, each - database contains a pg_catalog schema, which contains + In addition to public and user-created schemas, each + database contains a pg_catalog schema, which contains the system tables and all the built-in data types, functions, and - operators. pg_catalog is always effectively part of + operators. pg_catalog is always effectively part of the search path. If it is not named explicitly in the path then - it is implicitly searched before searching the path's + it is implicitly searched before searching the path's schemas. This ensures that built-in names will always be findable. However, you can explicitly place - pg_catalog at the end of your search path if you + pg_catalog at the end of your search path if you prefer to have user-defined names override built-in names. - Since system table names begin with pg_, it is best to + Since system table names begin with pg_, it is best to avoid such names to ensure that you won't suffer a conflict if some future version defines a system table named the same as your table. (With the default search path, an unqualified reference to your table name would then be resolved as the system table instead.) System tables will continue to follow the convention of having - names beginning with pg_, so that they will not + names beginning with pg_, so that they will not conflict with unqualified user-table names so long as users avoid - the pg_ prefix. + the pg_ prefix. @@ -2397,15 +2397,15 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; implements only the basic schema support specified in the standard. Therefore, many users consider qualified names to really consist of - user_name.table_name. + user_name.table_name. This is how PostgreSQL will effectively behave if you create a per-user schema for every user. - Also, there is no concept of a public schema in the + Also, there is no concept of a public schema in the SQL standard. For maximum conformance to the standard, you should - not use (perhaps even remove) the public schema. + not use (perhaps even remove) the public schema. @@ -2461,9 +2461,9 @@ CREATE TABLE capitals ( ) INHERITS (cities); - In this case, the capitals table inherits - all the columns of its parent table, cities. State - capitals also have an extra column, state, that shows + In this case, the capitals table inherits + all the columns of its parent table, cities. State + capitals also have an extra column, state, that shows their state. 
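
Assuming the cities/capitals definitions above, queries can include or exclude the inherited rows. A brief sketch (column names follow the surrounding example; the ONLY keyword restricts a query to the named table alone):

-- rows from cities and all of its descendants, capitals included
SELECT name, altitude FROM cities WHERE altitude > 500;

-- rows from the parent table alone
SELECT name, altitude FROM ONLY cities WHERE altitude > 500;
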
@@ -2521,7 +2521,7 @@ SELECT name, altitude - You can also write the table name with a trailing * + You can also write the table name with a trailing * to explicitly specify that descendant tables are included: @@ -2530,7 +2530,7 @@ SELECT name, altitude WHERE altitude > 500; - Writing * is not necessary, since this behavior is always + Writing * is not necessary, since this behavior is always the default. However, this syntax is still supported for compatibility with older releases where the default could be changed. @@ -2559,7 +2559,7 @@ WHERE c.altitude > 500; (If you try to reproduce this example, you will probably get different numeric OIDs.) By doing a join with - pg_class you can see the actual table names: + pg_class you can see the actual table names: SELECT p.relname, c.name, c.altitude @@ -2579,7 +2579,7 @@ WHERE c.altitude > 500 AND c.tableoid = p.oid; - Another way to get the same effect is to use the regclass + Another way to get the same effect is to use the regclass alias type, which will print the table OID symbolically: @@ -2603,15 +2603,15 @@ VALUES ('Albany', NULL, NULL, 'NY'); INSERT always inserts into exactly the table specified. In some cases it is possible to redirect the insertion using a rule (see ). However that does not - help for the above case because the cities table - does not contain the column state, and so the + help for the above case because the cities table + does not contain the column state, and so the command will be rejected before the rule can be applied. All check constraints and not-null constraints on a parent table are automatically inherited by its children, unless explicitly specified - otherwise with NO INHERIT clauses. Other types of constraints + otherwise with NO INHERIT clauses. Other types of constraints (unique, primary key, and foreign key constraints) are not inherited. @@ -2620,7 +2620,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); the union of the columns defined by the parent tables. Any columns declared in the child table's definition are added to these. If the same column name appears in multiple parent tables, or in both a parent - table and the child's definition, then these columns are merged + table and the child's definition, then these columns are merged so that there is only one such column in the child table. To be merged, columns must have the same data types, else an error is raised. Inheritable check constraints and not-null constraints are merged in a @@ -2632,7 +2632,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); Table inheritance is typically established when the child table is - created, using the INHERITS clause of the + created, using the INHERITS clause of the statement. Alternatively, a table which is already defined in a compatible way can @@ -2642,7 +2642,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); the same names and types as the columns of the parent. It must also include check constraints with the same names and check expressions as those of the parent. Similarly an inheritance link can be removed from a child using the - NO INHERIT variant of ALTER TABLE. + NO INHERIT variant of ALTER TABLE. Dynamically adding and removing inheritance links like this can be useful when the inheritance relationship is being used for table partitioning (see ). @@ -2680,10 +2680,10 @@ VALUES ('Albany', NULL, NULL, 'NY'); Inherited queries perform access permission checks on the parent table - only. Thus, for example, granting UPDATE permission on - the cities table implies permission to update rows in + only. 
Thus, for example, granting UPDATE permission on + the cities table implies permission to update rows in the capitals table as well, when they are - accessed through cities. This preserves the appearance + accessed through cities. This preserves the appearance that the data is (also) in the parent table. But the capitals table could not be updated directly without an additional grant. In a similar way, the parent table's row @@ -2732,33 +2732,33 @@ VALUES ('Albany', NULL, NULL, 'NY'); - If we declared cities.name to be - UNIQUE or a PRIMARY KEY, this would not stop the - capitals table from having rows with names duplicating - rows in cities. And those duplicate rows would by - default show up in queries from cities. In fact, by - default capitals would have no unique constraint at all, + If we declared cities.name to be + UNIQUE or a PRIMARY KEY, this would not stop the + capitals table from having rows with names duplicating + rows in cities. And those duplicate rows would by + default show up in queries from cities. In fact, by + default capitals would have no unique constraint at all, and so could contain multiple rows with the same name. - You could add a unique constraint to capitals, but this - would not prevent duplication compared to cities. + You could add a unique constraint to capitals, but this + would not prevent duplication compared to cities. Similarly, if we were to specify that - cities.name REFERENCES some + cities.name REFERENCES some other table, this constraint would not automatically propagate to - capitals. In this case you could work around it by - manually adding the same REFERENCES constraint to - capitals. + capitals. In this case you could work around it by + manually adding the same REFERENCES constraint to + capitals. Specifying that another table's column REFERENCES - cities(name) would allow the other table to contain city names, but + cities(name) would allow the other table to contain city names, but not capital names. There is no good workaround for this case. @@ -2825,10 +2825,10 @@ VALUES ('Albany', NULL, NULL, 'NY'); Bulk loads and deletes can be accomplished by adding or removing partitions, if that requirement is planned into the partitioning design. - Doing ALTER TABLE DETACH PARTITION or dropping an individual - partition using DROP TABLE is far faster than a bulk + Doing ALTER TABLE DETACH PARTITION or dropping an individual + partition using DROP TABLE is far faster than a bulk operation. These commands also entirely avoid the - VACUUM overhead caused by a bulk DELETE. + VACUUM overhead caused by a bulk DELETE. @@ -2921,7 +2921,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); containing data as a partition of a partitioned table, or remove a partition from a partitioned table turning it into a standalone table; see to learn more about the - ATTACH PARTITION and DETACH PARTITION + ATTACH PARTITION and DETACH PARTITION sub-commands. @@ -2968,9 +2968,9 @@ VALUES ('Albany', NULL, NULL, 'NY'); Partitions cannot have columns that are not present in the parent. It is neither possible to specify columns when creating partitions with - CREATE TABLE nor is it possible to add columns to - partitions after-the-fact using ALTER TABLE. Tables may be - added as a partition with ALTER TABLE ... ATTACH PARTITION + CREATE TABLE nor is it possible to add columns to + partitions after-the-fact using ALTER TABLE. Tables may be + added as a partition with ALTER TABLE ... ATTACH PARTITION only if their columns exactly match the parent, including any oid column. 
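
Taken together, these operations might look like the following minimal sketch; the table definition and partition bounds are illustrative only:

CREATE TABLE measurement (
    city_id  int  NOT NULL,
    logdate  date NOT NULL,
    peaktemp int
) PARTITION BY RANGE (logdate);

CREATE TABLE measurement_y2006m02 PARTITION OF measurement
    FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');

-- detach an existing partition, then attach it again
ALTER TABLE measurement DETACH PARTITION measurement_y2006m02;
ALTER TABLE measurement ATTACH PARTITION measurement_y2006m02
    FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');
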
@@ -3049,7 +3049,7 @@ CREATE TABLE measurement ( accessing the partitioned table will have to scan fewer partitions if the conditions involve some or all of these columns. For example, consider a table range partitioned using columns - lastname and firstname (in that order) + lastname and firstname (in that order) as the partition key. @@ -3067,7 +3067,7 @@ CREATE TABLE measurement ( Partitions thus created are in every way normal - PostgreSQL + PostgreSQL tables (or, possibly, foreign tables). It is possible to specify a tablespace and storage parameters for each partition separately. @@ -3111,12 +3111,12 @@ CREATE TABLE measurement_y2006m02 PARTITION OF measurement PARTITION BY RANGE (peaktemp); - After creating partitions of measurement_y2006m02, - any data inserted into measurement that is mapped to - measurement_y2006m02 (or data that is directly inserted - into measurement_y2006m02, provided it satisfies its + After creating partitions of measurement_y2006m02, + any data inserted into measurement that is mapped to + measurement_y2006m02 (or data that is directly inserted + into measurement_y2006m02, provided it satisfies its partition constraint) will be further redirected to one of its - partitions based on the peaktemp column. The partition + partitions based on the peaktemp column. The partition key specified may overlap with the parent's partition key, although care should be taken when specifying the bounds of a sub-partition such that the set of data it accepts constitutes a subset of what @@ -3147,7 +3147,7 @@ CREATE INDEX ON measurement_y2008m01 (logdate); Ensure that the - configuration parameter is not disabled in postgresql.conf. + configuration parameter is not disabled in postgresql.conf. If it is, queries will not be optimized as desired. @@ -3197,7 +3197,7 @@ ALTER TABLE measurement DETACH PARTITION measurement_y2006m02; This allows further operations to be performed on the data before it is dropped. For example, this is often a useful time to back up - the data using COPY, pg_dump, or + the data using COPY, pg_dump, or similar tools. It might also be a useful time to aggregate data into smaller formats, perform other data manipulations, or run reports. @@ -3236,14 +3236,14 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - Before running the ATTACH PARTITION command, it is - recommended to create a CHECK constraint on the table to + Before running the ATTACH PARTITION command, it is + recommended to create a CHECK constraint on the table to be attached describing the desired partition constraint. That way, the system will be able to skip the scan to validate the implicit partition constraint. Without such a constraint, the table will be scanned to validate the partition constraint while holding an ACCESS EXCLUSIVE lock on the parent table. - One may then drop the constraint after ATTACH PARTITION + One may then drop the constraint after ATTACH PARTITION is finished, because it is no longer necessary. @@ -3285,7 +3285,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - An UPDATE that causes a row to move from one partition to + An UPDATE that causes a row to move from one partition to another fails, because the new value of the row fails to satisfy the implicit partition constraint of the original partition. @@ -3376,7 +3376,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 the master table. Normally, these tables will not add any columns to the set inherited from the master. 
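
For example, a child table in this inheritance-based scheme might be declared as in the following sketch (names and bounds mirror the surrounding measurement example; the master here is assumed to be a plain, non-partitioned table):

CREATE TABLE measurement_y2008m01 (
    CHECK ( logdate >= DATE '2008-01-01' AND logdate < DATE '2008-02-01' )
) INHERITS (measurement);
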
Just as with declarative partitioning, these partitions are in every way normal - PostgreSQL tables (or foreign tables). + PostgreSQL tables (or foreign tables). @@ -3460,7 +3460,7 @@ CREATE INDEX measurement_y2008m01_logdate ON measurement_y2008m01 (logdate); We want our application to be able to say INSERT INTO - measurement ... and have the data be redirected into the + measurement ... and have the data be redirected into the appropriate partition table. We can arrange that by attaching a suitable trigger function to the master table. If data will be added only to the latest partition, we can @@ -3567,9 +3567,9 @@ DO INSTEAD - Be aware that COPY ignores rules. If you want to - use COPY to insert data, you'll need to copy into the - correct partition table rather than into the master. COPY + Be aware that COPY ignores rules. If you want to + use COPY to insert data, you'll need to copy into the + correct partition table rather than into the master. COPY does fire triggers, so you can use it normally if you use the trigger approach. @@ -3585,7 +3585,7 @@ DO INSTEAD Ensure that the configuration parameter is not disabled in - postgresql.conf. + postgresql.conf. If it is, queries will not be optimized as desired. @@ -3666,8 +3666,8 @@ ALTER TABLE measurement_y2008m02 INHERIT measurement; The schemes shown here assume that the partition key column(s) of a row never change, or at least do not change enough to require - it to move to another partition. An UPDATE that attempts - to do that will fail because of the CHECK constraints. + it to move to another partition. An UPDATE that attempts + to do that will fail because of the CHECK constraints. If you need to handle such cases, you can put suitable update triggers on the partition tables, but it makes management of the structure much more complicated. @@ -3688,8 +3688,8 @@ ANALYZE measurement; - INSERT statements with ON CONFLICT - clauses are unlikely to work as expected, as the ON CONFLICT + INSERT statements with ON CONFLICT + clauses are unlikely to work as expected, as the ON CONFLICT action is only taken in case of unique violations on the specified target relation, not its child relations. @@ -3717,7 +3717,7 @@ ANALYZE measurement; - Constraint exclusion is a query optimization technique + Constraint exclusion is a query optimization technique that improves performance for partitioned tables defined in the fashion described above (both declaratively partitioned tables and those implemented using inheritance). As an example: @@ -3728,17 +3728,17 @@ SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; Without constraint exclusion, the above query would scan each of - the partitions of the measurement table. With constraint + the partitions of the measurement table. With constraint exclusion enabled, the planner will examine the constraints of each partition and try to prove that the partition need not be scanned because it could not contain any rows meeting the query's - WHERE clause. When the planner can prove this, it + WHERE clause. When the planner can prove this, it excludes the partition from the query plan. - You can use the EXPLAIN command to show the difference - between a plan with constraint_exclusion on and a plan + You can use the EXPLAIN command to show the difference + between a plan with constraint_exclusion on and a plan with it off. 
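
One way to see the difference, as a sketch against the running measurement example, is to run the same query under each setting:

SET constraint_exclusion = off;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';

SET constraint_exclusion = partition;
EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';
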
A typical unoptimized plan for this type of table setup is: @@ -3783,7 +3783,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; - Note that constraint exclusion is driven only by CHECK + Note that constraint exclusion is driven only by CHECK constraints, not by the presence of indexes. Therefore it isn't necessary to define indexes on the key columns. Whether an index needs to be created for a given partition depends on whether you @@ -3795,11 +3795,11 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; The default (and recommended) setting of is actually neither - on nor off, but an intermediate setting - called partition, which causes the technique to be + on nor off, but an intermediate setting + called partition, which causes the technique to be applied only to queries that are likely to be working on partitioned - tables. The on setting causes the planner to examine - CHECK constraints in all queries, even simple ones that + tables. The on setting causes the planner to examine + CHECK constraints in all queries, even simple ones that are unlikely to benefit. @@ -3810,7 +3810,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; - Constraint exclusion only works when the query's WHERE + Constraint exclusion only works when the query's WHERE clause contains constants (or externally supplied parameters). For example, a comparison against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized, since the @@ -3867,7 +3867,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; PostgreSQL implements portions of the SQL/MED specification, allowing you to access data that resides outside PostgreSQL using regular SQL queries. Such data is referred to as - foreign data. (Note that this usage is not to be confused + foreign data. (Note that this usage is not to be confused with foreign keys, which are a type of constraint within the database.) @@ -3876,7 +3876,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; foreign data wrapper. A foreign data wrapper is a library that can communicate with an external data source, hiding the details of connecting to the data source and obtaining data from it. - There are some foreign data wrappers available as contrib + There are some foreign data wrappers available as contrib modules; see . Other kinds of foreign data wrappers might be found as third party products. If none of the existing foreign data wrappers suit your needs, you can write your own; see - To access foreign data, you need to create a foreign server + To access foreign data, you need to create a foreign server object, which defines how to connect to a particular external data source according to the set of options used by its supporting foreign data wrapper. Then you need to create one or more foreign @@ -3899,7 +3899,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; Accessing remote data may require authenticating to the external data source. This information can be provided by a - user mapping, which can provide additional data + user mapping, which can provide additional data such as user names and passwords based on the current PostgreSQL role. @@ -4002,13 +4002,13 @@ DROP TABLE products CASCADE; that depend on them, recursively. In this case, it doesn't remove the orders table, it only removes the foreign key constraint. It stops there because nothing depends on the foreign key constraint. 
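
As a sketch of that behavior, using the products and orders tables from the example above:

DROP TABLE products;          -- fails; the error's DETAIL line lists the dependent constraint
DROP TABLE products CASCADE;  -- drops products along with the foreign key constraint on orders
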
- (If you want to check what DROP ... CASCADE will do, - run DROP without CASCADE and read the - DETAIL output.) + (If you want to check what DROP ... CASCADE will do, + run DROP without CASCADE and read the + DETAIL output.) - Almost all DROP commands in PostgreSQL support + Almost all DROP commands in PostgreSQL support specifying CASCADE. Of course, the nature of the possible dependencies varies with the type of the object. You can also write RESTRICT instead of @@ -4020,7 +4020,7 @@ DROP TABLE products CASCADE; According to the SQL standard, specifying either RESTRICT or CASCADE is - required in a DROP command. No database system actually + required in a DROP command. No database system actually enforces that rule, but whether the default behavior is RESTRICT or CASCADE varies across systems. @@ -4028,18 +4028,18 @@ DROP TABLE products CASCADE; - If a DROP command lists multiple + If a DROP command lists multiple objects, CASCADE is only required when there are dependencies outside the specified group. For example, when saying DROP TABLE tab1, tab2 the existence of a foreign - key referencing tab1 from tab2 would not mean + key referencing tab1 from tab2 would not mean that CASCADE is needed to succeed. For user-defined functions, PostgreSQL tracks dependencies associated with a function's externally-visible properties, - such as its argument and result types, but not dependencies + such as its argument and result types, but not dependencies that could only be known by examining the function body. As an example, consider this situation: @@ -4056,11 +4056,11 @@ CREATE FUNCTION get_color_note (rainbow) RETURNS text AS (See for an explanation of SQL-language functions.) PostgreSQL will be aware that - the get_color_note function depends on the rainbow + the get_color_note function depends on the rainbow type: dropping the type would force dropping the function, because its - argument type would no longer be defined. But PostgreSQL - will not consider get_color_note to depend on - the my_colors table, and so will not drop the function if + argument type would no longer be defined. But PostgreSQL + will not consider get_color_note to depend on + the my_colors table, and so will not drop the function if the table is dropped. While there are disadvantages to this approach, there are also benefits. The function is still valid in some sense if the table is missing, though executing it would cause an error; creating a new diff --git a/doc/src/sgml/dfunc.sgml b/doc/src/sgml/dfunc.sgml index 23af270e32..7ef996b51f 100644 --- a/doc/src/sgml/dfunc.sgml +++ b/doc/src/sgml/dfunc.sgml @@ -9,7 +9,7 @@ C, they must be compiled and linked in a special way to produce a file that can be dynamically loaded by the server. To be precise, a shared library needs to be - created.shared library + created.shared library @@ -30,7 +30,7 @@ executables: first the source files are compiled into object files, then the object files are linked together. The object files need to be created as position-independent code - (PIC),PIC which + (PIC),PIC which conceptually means that they can be placed at an arbitrary location in memory when they are loaded by the executable. (Object files intended for executables are usually not compiled that way.) The @@ -57,8 +57,8 @@ - FreeBSD - FreeBSDshared library + FreeBSD + FreeBSDshared library @@ -70,15 +70,15 @@ gcc -fPIC -c foo.c gcc -shared -o foo.so foo.o This is applicable as of version 3.0 of - FreeBSD. + FreeBSD. 
- HP-UX - HP-UXshared library + HP-UX + HP-UXshared library @@ -97,7 +97,7 @@ gcc -fPIC -c foo.c ld -b -o foo.sl foo.o - HP-UX uses the extension + HP-UX uses the extension .sl for shared libraries, unlike most other systems. @@ -106,8 +106,8 @@ ld -b -o foo.sl foo.o - Linux - Linuxshared library + Linux + Linuxshared library @@ -125,8 +125,8 @@ cc -shared -o foo.so foo.o - macOS - macOSshared library + macOS + macOSshared library @@ -141,8 +141,8 @@ cc -bundle -flat_namespace -undefined suppress -o foo.so foo.o - NetBSD - NetBSDshared library + NetBSD + NetBSDshared library @@ -161,8 +161,8 @@ gcc -shared -o foo.so foo.o - OpenBSD - OpenBSDshared library + OpenBSD + OpenBSDshared library @@ -179,17 +179,17 @@ ld -Bshareable -o foo.so foo.o - Solaris - Solarisshared library + Solaris + Solarisshared library The compiler flag to create PIC is with the Sun compiler and - with GCC. To + with GCC. To link shared libraries, the compiler option is with either compiler or alternatively - with GCC. + with GCC. cc -KPIC -c foo.c cc -G -o foo.so foo.o diff --git a/doc/src/sgml/dict-int.sgml b/doc/src/sgml/dict-int.sgml index d49f3e2a3a..04cf14a73d 100644 --- a/doc/src/sgml/dict-int.sgml +++ b/doc/src/sgml/dict-int.sgml @@ -8,7 +8,7 @@ - dict_int is an example of an add-on dictionary template + dict_int is an example of an add-on dictionary template for full-text search. The motivation for this example dictionary is to control the indexing of integers (signed and unsigned), allowing such numbers to be indexed while preventing excessive growth in the number of @@ -25,17 +25,17 @@ - The maxlen parameter specifies the maximum number of + The maxlen parameter specifies the maximum number of digits allowed in an integer word. The default value is 6. - The rejectlong parameter specifies whether an overlength - integer should be truncated or ignored. If rejectlong is - false (the default), the dictionary returns the first - maxlen digits of the integer. If rejectlong is - true, the dictionary treats an overlength integer as a stop + The rejectlong parameter specifies whether an overlength + integer should be truncated or ignored. If rejectlong is + false (the default), the dictionary returns the first + maxlen digits of the integer. If rejectlong is + true, the dictionary treats an overlength integer as a stop word, so that it will not be indexed. Note that this also means that such an integer cannot be searched for. @@ -47,8 +47,8 @@ Usage - Installing the dict_int extension creates a text search - template intdict_template and a dictionary intdict + Installing the dict_int extension creates a text search + template intdict_template and a dictionary intdict based on it, with the default parameters. You can alter the parameters, for example diff --git a/doc/src/sgml/dict-xsyn.sgml b/doc/src/sgml/dict-xsyn.sgml index 42362ffbc8..bf4965c36f 100644 --- a/doc/src/sgml/dict-xsyn.sgml +++ b/doc/src/sgml/dict-xsyn.sgml @@ -8,7 +8,7 @@ - dict_xsyn (Extended Synonym Dictionary) is an example of an + dict_xsyn (Extended Synonym Dictionary) is an example of an add-on dictionary template for full-text search. This dictionary type replaces words with groups of their synonyms, and so makes it possible to search for a word using any of its synonyms. @@ -18,41 +18,41 @@ Configuration - A dict_xsyn dictionary accepts the following options: + A dict_xsyn dictionary accepts the following options: - matchorig controls whether the original word is accepted by - the dictionary. Default is true. 
+ matchorig controls whether the original word is accepted by + the dictionary. Default is true. - matchsynonyms controls whether the synonyms are - accepted by the dictionary. Default is false. + matchsynonyms controls whether the synonyms are + accepted by the dictionary. Default is false. - keeporig controls whether the original word is included in - the dictionary's output. Default is true. + keeporig controls whether the original word is included in + the dictionary's output. Default is true. - keepsynonyms controls whether the synonyms are included in - the dictionary's output. Default is true. + keepsynonyms controls whether the synonyms are included in + the dictionary's output. Default is true. - rules is the base name of the file containing the list of + rules is the base name of the file containing the list of synonyms. This file must be stored in - $SHAREDIR/tsearch_data/ (where $SHAREDIR means - the PostgreSQL installation's shared-data directory). - Its name must end in .rules (which is not to be included in - the rules parameter). + $SHAREDIR/tsearch_data/ (where $SHAREDIR means + the PostgreSQL installation's shared-data directory). + Its name must end in .rules (which is not to be included in + the rules parameter). @@ -71,15 +71,15 @@ word syn1 syn2 syn3 - The sharp (#) sign is a comment delimiter. It may appear at + The sharp (#) sign is a comment delimiter. It may appear at any position in a line. The rest of the line will be skipped. - Look at xsyn_sample.rules, which is installed in - $SHAREDIR/tsearch_data/, for an example. + Look at xsyn_sample.rules, which is installed in + $SHAREDIR/tsearch_data/, for an example. @@ -87,8 +87,8 @@ word syn1 syn2 syn3 Usage - Installing the dict_xsyn extension creates a text search - template xsyn_template and a dictionary xsyn + Installing the dict_xsyn extension creates a text search + template xsyn_template and a dictionary xsyn based on it, with default parameters. You can alter the parameters, for example diff --git a/doc/src/sgml/diskusage.sgml b/doc/src/sgml/diskusage.sgml index 461deb9dba..ba23084354 100644 --- a/doc/src/sgml/diskusage.sgml +++ b/doc/src/sgml/diskusage.sgml @@ -5,7 +5,7 @@ This chapter discusses how to monitor the disk usage of a - PostgreSQL database system. + PostgreSQL database system. @@ -18,10 +18,10 @@ Each table has a primary heap disk file where most of the data is stored. If the table has any columns with potentially-wide values, - there also might be a TOAST file associated with the table, + there also might be a TOAST file associated with the table, which is used to store values too wide to fit comfortably in the main table (see ). There will be one valid index - on the TOAST table, if present. There also might be indexes + on the TOAST table, if present. There also might be indexes associated with the base table. Each table and index is stored in a separate disk file — possibly more than one file, if the file would exceed one gigabyte. Naming conventions for these files are described @@ -39,7 +39,7 @@ - Using psql on a recently vacuumed or analyzed database, + Using psql on a recently vacuumed or analyzed database, you can issue queries to see the disk usage of any table: SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'customer'; @@ -49,14 +49,14 @@ SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'custom base/16384/16806 | 60 (1 row) - Each page is typically 8 kilobytes. 
(Remember, relpages - is only updated by VACUUM, ANALYZE, and - a few DDL commands such as CREATE INDEX.) The file path name + Each page is typically 8 kilobytes. (Remember, relpages + is only updated by VACUUM, ANALYZE, and + a few DDL commands such as CREATE INDEX.) The file path name is of interest if you want to examine the table's disk file directly. - To show the space used by TOAST tables, use a query + To show the space used by TOAST tables, use a query like the following: SELECT relname, relpages diff --git a/doc/src/sgml/dml.sgml b/doc/src/sgml/dml.sgml index 071cdb610f..bc016d3cae 100644 --- a/doc/src/sgml/dml.sgml +++ b/doc/src/sgml/dml.sgml @@ -285,42 +285,42 @@ DELETE FROM products; Sometimes it is useful to obtain data from modified rows while they are - being manipulated. The INSERT, UPDATE, - and DELETE commands all have an - optional RETURNING clause that supports this. Use - of RETURNING avoids performing an extra database query to + being manipulated. The INSERT, UPDATE, + and DELETE commands all have an + optional RETURNING clause that supports this. Use + of RETURNING avoids performing an extra database query to collect the data, and is especially valuable when it would otherwise be difficult to identify the modified rows reliably. - The allowed contents of a RETURNING clause are the same as - a SELECT command's output list + The allowed contents of a RETURNING clause are the same as + a SELECT command's output list (see ). It can contain column names of the command's target table, or value expressions using those - columns. A common shorthand is RETURNING *, which selects + columns. A common shorthand is RETURNING *, which selects all columns of the target table in order. - In an INSERT, the data available to RETURNING is + In an INSERT, the data available to RETURNING is the row as it was inserted. This is not so useful in trivial inserts, since it would just repeat the data provided by the client. But it can be very handy when relying on computed default values. For example, - when using a serial - column to provide unique identifiers, RETURNING can return + when using a serial + column to provide unique identifiers, RETURNING can return the ID assigned to a new row: CREATE TABLE users (firstname text, lastname text, id serial primary key); INSERT INTO users (firstname, lastname) VALUES ('Joe', 'Cool') RETURNING id; - The RETURNING clause is also very useful - with INSERT ... SELECT. + The RETURNING clause is also very useful + with INSERT ... SELECT. - In an UPDATE, the data available to RETURNING is + In an UPDATE, the data available to RETURNING is the new content of the modified row. For example: UPDATE products SET price = price * 1.10 @@ -330,7 +330,7 @@ UPDATE products SET price = price * 1.10 - In a DELETE, the data available to RETURNING is + In a DELETE, the data available to RETURNING is the content of the deleted row. For example: DELETE FROM products @@ -341,9 +341,9 @@ DELETE FROM products If there are triggers () on the target table, - the data available to RETURNING is the row as modified by + the data available to RETURNING is the row as modified by the triggers. Thus, inspecting columns computed by triggers is another - common use-case for RETURNING. + common use-case for RETURNING. diff --git a/doc/src/sgml/docguide.sgml b/doc/src/sgml/docguide.sgml index ff58a17335..3a5b88ca1c 100644 --- a/doc/src/sgml/docguide.sgml +++ b/doc/src/sgml/docguide.sgml @@ -449,7 +449,7 @@ checking for fop... 
fop To produce HTML documentation with the stylesheet used on postgresql.org instead of the + url="https://www.postgresql.org/docs/current">postgresql.org instead of the default simple style use: doc/src/sgml$ make STYLE=website html diff --git a/doc/src/sgml/earthdistance.sgml b/doc/src/sgml/earthdistance.sgml index 6dedc4a5f4..1bdcf64629 100644 --- a/doc/src/sgml/earthdistance.sgml +++ b/doc/src/sgml/earthdistance.sgml @@ -8,18 +8,18 @@ - The earthdistance module provides two different approaches to + The earthdistance module provides two different approaches to calculating great circle distances on the surface of the Earth. The one - described first depends on the cube module (which - must be installed before earthdistance can be - installed). The second one is based on the built-in point data type, + described first depends on the cube module (which + must be installed before earthdistance can be + installed). The second one is based on the built-in point data type, using longitude and latitude for the coordinates. In this module, the Earth is assumed to be perfectly spherical. (If that's too inaccurate for you, you might want to look at the - PostGIS + PostGIS project.) @@ -29,13 +29,13 @@ Data is stored in cubes that are points (both corners are the same) using 3 coordinates representing the x, y, and z distance from the center of the - Earth. A domain earth over cube is provided, which + Earth. A domain earth over cube is provided, which includes constraint checks that the value meets these restrictions and is reasonably close to the actual surface of the Earth. - The radius of the Earth is obtained from the earth() + The radius of the Earth is obtained from the earth() function. It is given in meters. But by changing this one function you can change the module to use some other units, or to use a different value of the radius that you feel is more appropriate. @@ -43,8 +43,8 @@ This package has applications to astronomical databases as well. - Astronomers will probably want to change earth() to return a - radius of 180/pi() so that distances are in degrees. + Astronomers will probably want to change earth() to return a + radius of 180/pi() so that distances are in degrees. @@ -123,11 +123,11 @@ earth_box(earth, float8)earth_box cube Returns a box suitable for an indexed search using the cube - @> + @> operator for points within a given great circle distance of a location. Some points in this box are further than the specified great circle distance from the location, so a second check using - earth_distance should be included in the query. + earth_distance should be included in the query. @@ -141,7 +141,7 @@ The second part of the module relies on representing Earth locations as - values of type point, in which the first component is taken to + values of type point, in which the first component is taken to represent longitude in degrees, and the second component is taken to represent latitude in degrees. Points are taken as (longitude, latitude) and not vice versa because longitude is closer to the intuitive idea of @@ -165,7 +165,7 @@ - point <@> point + point <@> point float8 Gives the distance in statute miles between two points on the Earth's surface. @@ -176,15 +176,15 @@
- Note that unlike the cube-based part of the module, units - are hardwired here: changing the earth() function will + Note that unlike the cube-based part of the module, units + are hardwired here: changing the earth() function will not affect the results of this operator. One disadvantage of the longitude/latitude representation is that you need to be careful about the edge conditions near the poles - and near +/- 180 degrees of longitude. The cube-based + and near +/- 180 degrees of longitude. The cube-based representation avoids these discontinuities. diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index 716a101838..0f9ff3a8eb 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -46,7 +46,7 @@ correctness. Third, embedded SQL in C is specified in the SQL standard and supported by many other SQL database systems. The - PostgreSQL implementation is designed to match this + PostgreSQL implementation is designed to match this standard as much as possible, and it is usually possible to port embedded SQL programs written for other SQL databases to PostgreSQL with relative @@ -97,19 +97,19 @@ EXEC SQL CONNECT TO target AS - dbname@hostname:port + dbname@hostname:port - tcp:postgresql://hostname:port/dbname?options + tcp:postgresql://hostname:port/dbname?options - unix:postgresql://hostname:port/dbname?options + unix:postgresql://hostname:port/dbname?options @@ -475,7 +475,7 @@ EXEC SQL COMMIT; In the default mode, statements are committed only when EXEC SQL COMMIT is issued. The embedded SQL interface also supports autocommit of transactions (similar to - psql's default behavior) via the + psql's default behavior) via the command-line option to ecpg (see ) or via the EXEC SQL SET AUTOCOMMIT TO ON statement. In autocommit mode, each command is @@ -507,7 +507,7 @@ EXEC SQL COMMIT; - EXEC SQL PREPARE TRANSACTION transaction_id + EXEC SQL PREPARE TRANSACTION transaction_id Prepare the current transaction for two-phase commit. @@ -516,7 +516,7 @@ EXEC SQL COMMIT; - EXEC SQL COMMIT PREPARED transaction_id + EXEC SQL COMMIT PREPARED transaction_id Commit a transaction that is in prepared state. @@ -525,7 +525,7 @@ EXEC SQL COMMIT; - EXEC SQL ROLLBACK PREPARED transaction_id + EXEC SQL ROLLBACK PREPARED transaction_id Roll back a transaction that is in prepared state. @@ -720,7 +720,7 @@ EXEC SQL int i = 4; The definition of a structure or union also must be listed inside - a DECLARE section. Otherwise the preprocessor cannot + a DECLARE section. Otherwise the preprocessor cannot handle these types since it does not know the definition.
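The two-phase commit statements above map directly onto server-side SQL commands of the same names. A minimal sketch in plain SQL, assuming a hypothetical accounts table and a server started with max_prepared_transactions set above zero (neither of which comes from this patch):

BEGIN;
UPDATE accounts SET balance = balance - 100.00 WHERE name = 'alice';  -- hypothetical table
PREPARE TRANSACTION 'xfer_42';   -- dissociates the transaction and stores it on disk
-- later, possibly from a different session or after a server restart:
COMMIT PREPARED 'xfer_42';       -- finally commits the prepared transaction

ROLLBACK PREPARED 'xfer_42' would abandon the prepared transaction instead.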
@@ -890,8 +890,8 @@ do - character(n), varchar(n), text - char[n+1], VARCHAR[n+1]declared in ecpglib.h + character(n), varchar(n), text + char[n+1], VARCHAR[n+1]declared in ecpglib.h @@ -955,7 +955,7 @@ EXEC SQL END DECLARE SECTION; The other way is using the VARCHAR type, which is a special type provided by ECPG. The definition on an array of type VARCHAR is converted into a - named struct for every variable. A declaration like: + named struct for every variable. A declaration like: VARCHAR var[180]; @@ -994,10 +994,10 @@ struct varchar_var { int len; char arr[180]; } var; ECPG contains some special types that help you to interact easily with some special data types from the PostgreSQL server. In particular, it has implemented support for the - numeric, decimal, date, timestamp, - and interval types. These data types cannot usefully be + numeric, decimal, date, timestamp, + and interval types. These data types cannot usefully be mapped to primitive host variable types (such - as int, long long int, + as int, long long int, or char[]), because they have a complex internal structure. Applications deal with these types by declaring host variables in special types and accessing them using functions in @@ -1942,10 +1942,10 @@ free(out); The numeric type offers to do calculations with arbitrary precision. See for the equivalent type in the - PostgreSQL server. Because of the arbitrary precision this + PostgreSQL server. Because of the arbitrary precision this variable needs to be able to expand and shrink dynamically. That's why you can only create numeric variables on the heap, by means of the - PGTYPESnumeric_new and PGTYPESnumeric_free + PGTYPESnumeric_new and PGTYPESnumeric_free functions. The decimal type, which is similar but limited in precision, can be created on the stack as well as on the heap. @@ -2092,17 +2092,17 @@ int PGTYPESnumeric_cmp(numeric *var1, numeric *var2) - 1, if var1 is bigger than var2 + 1, if var1 is bigger than var2 - -1, if var1 is smaller than var2 + -1, if var1 is smaller than var2 - 0, if var1 and var2 are equal + 0, if var1 and var2 are equal @@ -2119,7 +2119,7 @@ int PGTYPESnumeric_cmp(numeric *var1, numeric *var2) int PGTYPESnumeric_from_int(signed int int_val, numeric *var); This function accepts a variable of type signed int and stores it - in the numeric variable var. Upon success, 0 is returned and + in the numeric variable var. Upon success, 0 is returned and -1 in case of a failure. @@ -2134,7 +2134,7 @@ int PGTYPESnumeric_from_int(signed int int_val, numeric *var); int PGTYPESnumeric_from_long(signed long int long_val, numeric *var); This function accepts a variable of type signed long int and stores it - in the numeric variable var. Upon success, 0 is returned and + in the numeric variable var. Upon success, 0 is returned and -1 in case of a failure. @@ -2149,7 +2149,7 @@ int PGTYPESnumeric_from_long(signed long int long_val, numeric *var); int PGTYPESnumeric_copy(numeric *src, numeric *dst); This function copies over the value of the variable that - src points to into the variable that dst + src points to into the variable that dst points to. It returns 0 on success and -1 if an error occurs. @@ -2164,7 +2164,7 @@ int PGTYPESnumeric_copy(numeric *src, numeric *dst); int PGTYPESnumeric_from_double(double d, numeric *dst); This function accepts a variable of type double and stores the result - in the variable that dst points to. It returns 0 on success + in the variable that dst points to. It returns 0 on success and -1 if an error occurs. 
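A note on why heap allocation via PGTYPESnumeric_new is required: the server-side numeric type described above holds values that no primitive C type can represent. A one-query sketch of that precision (an illustration of mine, not taken from the patch):

SELECT 12345678901234567890123456789012345678::numeric * 10;
Result: 123456789012345678901234567890123456780

The product is exact, well beyond the range of a C long and the roughly 15 to 17 significant decimal digits of a double, which is why the to_int, to_long, and to_double conversions that follow must be able to report overflow.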
@@ -2179,10 +2179,10 @@ int PGTYPESnumeric_from_double(double d, numeric *dst); int PGTYPESnumeric_to_double(numeric *nv, double *dp) The function converts the numeric value from the variable that - nv points to into the double variable that dp points + nv points to into the double variable that dp points to. It returns 0 on success and -1 if an error occurs, including - overflow. On overflow, the global variable errno will be set - to PGTYPES_NUM_OVERFLOW additionally. + overflow. On overflow, the global variable errno will be set + to PGTYPES_NUM_OVERFLOW additionally. @@ -2196,10 +2196,10 @@ int PGTYPESnumeric_to_double(numeric *nv, double *dp) int PGTYPESnumeric_to_int(numeric *nv, int *ip); The function converts the numeric value from the variable that - nv points to into the integer variable that ip + nv points to into the integer variable that ip points to. It returns 0 on success and -1 if an error occurs, including - overflow. On overflow, the global variable errno will be set - to PGTYPES_NUM_OVERFLOW additionally. + overflow. On overflow, the global variable errno will be set + to PGTYPES_NUM_OVERFLOW additionally. @@ -2213,10 +2213,10 @@ int PGTYPESnumeric_to_int(numeric *nv, int *ip); int PGTYPESnumeric_to_long(numeric *nv, long *lp); The function converts the numeric value from the variable that - nv points to into the long integer variable that - lp points to. It returns 0 on success and -1 if an error + nv points to into the long integer variable that + lp points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable - errno will be set to PGTYPES_NUM_OVERFLOW + errno will be set to PGTYPES_NUM_OVERFLOW additionally. @@ -2231,10 +2231,10 @@ int PGTYPESnumeric_to_long(numeric *nv, long *lp); int PGTYPESnumeric_to_decimal(numeric *src, decimal *dst); The function converts the numeric value from the variable that - src points to into the decimal variable that - dst points to. It returns 0 on success and -1 if an error + src points to into the decimal variable that + dst points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable - errno will be set to PGTYPES_NUM_OVERFLOW + errno will be set to PGTYPES_NUM_OVERFLOW additionally. @@ -2249,8 +2249,8 @@ int PGTYPESnumeric_to_decimal(numeric *src, decimal *dst); int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst); The function converts the decimal value from the variable that - src points to into the numeric variable that - dst points to. It returns 0 on success and -1 if an error + src points to into the numeric variable that + dst points to. It returns 0 on success and -1 if an error occurs. Since the decimal type is implemented as a limited version of the numeric type, overflow cannot occur with this conversion. @@ -2265,7 +2265,7 @@ int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst); The date type in C enables your programs to deal with data of the SQL type date. See for the equivalent type in the - PostgreSQL server. + PostgreSQL server. The following functions can be used to work with the date type: @@ -2292,8 +2292,8 @@ date PGTYPESdate_from_timestamp(timestamp dt); date PGTYPESdate_from_asc(char *str, char **endptr); - The function receives a C char* string str and a pointer to - a C char* string endptr. At the moment ECPG always parses + The function receives a C char* string str and a pointer to + a C char* string endptr. 
At the moment ECPG always parses the complete string and so it currently does not support to store the address of the first invalid character in *endptr. You can safely set endptr to NULL. @@ -2397,9 +2397,9 @@ date PGTYPESdate_from_asc(char *str, char **endptr); char *PGTYPESdate_to_asc(date dDate); - The function receives the date dDate as its only parameter. - It will output the date in the form 1999-01-18, i.e., in the - YYYY-MM-DD format. + The function receives the date dDate as its only parameter. + It will output the date in the form 1999-01-18, i.e., in the + YYYY-MM-DD format. @@ -2414,11 +2414,11 @@ char *PGTYPESdate_to_asc(date dDate); void PGTYPESdate_julmdy(date d, int *mdy); - The function receives the date d and a pointer to an array - of 3 integer values mdy. The variable name indicates - the sequential order: mdy[0] will be set to contain the - number of the month, mdy[1] will be set to the value of the - day and mdy[2] will contain the year. + The function receives the date d and a pointer to an array + of 3 integer values mdy. The variable name indicates + the sequential order: mdy[0] will be set to contain the + number of the month, mdy[1] will be set to the value of the + day and mdy[2] will contain the year. @@ -2432,7 +2432,7 @@ void PGTYPESdate_julmdy(date d, int *mdy); void PGTYPESdate_mdyjul(int *mdy, date *jdate); - The function receives the array of the 3 integers (mdy) as + The function receives the array of the 3 integers (mdy) as its first argument and as its second argument a pointer to a variable of type date that should hold the result of the operation. @@ -2447,7 +2447,7 @@ void PGTYPESdate_mdyjul(int *mdy, date *jdate); int PGTYPESdate_dayofweek(date d); - The function receives the date variable d as its only + The function receives the date variable d as its only argument and returns an integer that indicates the day of the week for this date. @@ -2499,7 +2499,7 @@ int PGTYPESdate_dayofweek(date d); void PGTYPESdate_today(date *d); - The function receives a pointer to a date variable (d) + The function receives a pointer to a date variable (d) that it sets to the current date. @@ -2514,9 +2514,9 @@ void PGTYPESdate_today(date *d); int PGTYPESdate_fmt_asc(date dDate, char *fmtstring, char *outbuf); - The function receives the date to convert (dDate), the - format mask (fmtstring) and the string that will hold the - textual representation of the date (outbuf). + The function receives the date to convert (dDate), the + format mask (fmtstring) and the string that will hold the + textual representation of the date (outbuf). On success, 0 is returned and a negative value if an error occurred. @@ -2637,9 +2637,9 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); The function receives a pointer to the date value that should hold the - result of the operation (d), the format mask to use for - parsing the date (fmt) and the C char* string containing - the textual representation of the date (str). The textual + result of the operation (d), the format mask to use for + parsing the date (fmt) and the C char* string containing + the textual representation of the date (str). The textual representation is expected to match the format mask. However you do not need to have a 1:1 mapping of the string to the format mask. 
The function only analyzes the sequential order and looks for the literals @@ -2742,7 +2742,7 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); The timestamp type in C enables your programs to deal with data of the SQL type timestamp. See for the equivalent - type in the PostgreSQL server. + type in the PostgreSQL server. The following functions can be used to work with the timestamp type: @@ -2756,8 +2756,8 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); - The function receives the string to parse (str) and a - pointer to a C char* (endptr). + The function receives the string to parse (str) and a + pointer to a C char* (endptr). At the moment ECPG always parses the complete string and so it currently does not support to store the address of the first invalid character in *endptr. @@ -2765,15 +2765,15 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); The function returns the parsed timestamp on success. On error, - PGTYPESInvalidTimestamp is returned and errno is - set to PGTYPES_TS_BAD_TIMESTAMP. See for important notes on this value. + PGTYPESInvalidTimestamp is returned and errno is + set to PGTYPES_TS_BAD_TIMESTAMP. See for important notes on this value. In general, the input string can contain any combination of an allowed date specification, a whitespace character and an allowed time specification. Note that time zones are not supported by ECPG. It can parse them but does not apply any calculation as the - PostgreSQL server does for example. Timezone + PostgreSQL server does for example. Timezone specifiers are silently discarded. @@ -2819,7 +2819,7 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); char *PGTYPEStimestamp_to_asc(timestamp tstamp); - The function receives the timestamp tstamp as + The function receives the timestamp tstamp as its only argument and returns an allocated string that contains the textual representation of the timestamp. @@ -2835,7 +2835,7 @@ char *PGTYPEStimestamp_to_asc(timestamp tstamp); void PGTYPEStimestamp_current(timestamp *ts); The function retrieves the current timestamp and saves it into the - timestamp variable that ts points to. + timestamp variable that ts points to. @@ -2849,8 +2849,8 @@ void PGTYPEStimestamp_current(timestamp *ts); int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmtstr); The function receives a pointer to the timestamp to convert as its - first argument (ts), a pointer to the output buffer - (output), the maximal length that has been allocated for + first argument (ts), a pointer to the output buffer + (output), the maximal length that has been allocated for the output buffer (str_len) and the format mask to use for the conversion (fmtstr). @@ -2861,7 +2861,7 @@ int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmt You can use the following format specifiers for the format mask. The format specifiers are the same ones that are used in the - strftime function in libc. Any + strftime function in libc. Any non-format specifier will be copied into the output buffer. + ("). The text matching the portion of the pattern between these markers is returned. 
- Some examples, with #" delimiting the return string: + Some examples, with #" delimiting the return string: substring('foobar' from '%#"o_b#"%' for '#') oob substring('foobar' from '#"o_b#"%' for '#') NULL @@ -4191,7 +4191,7 @@ substring('foobar' from '#"o_b#"%' for '#') NULL POSIX regular expressions provide a more powerful means for pattern matching than the LIKE and - SIMILAR TO operators. + SIMILAR TO operators. Many Unix tools such as egrep, sed, or awk use a pattern matching language that is similar to the one described here. @@ -4228,7 +4228,7 @@ substring('foobar' from '#"o_b#"%' for '#') NULL - The substring function with two parameters, + The substring function with two parameters, substring(string from pattern), provides extraction of a substring @@ -4253,30 +4253,30 @@ substring('foobar' from 'o(.)b') o - The regexp_replace function provides substitution of + The regexp_replace function provides substitution of new text for substrings that match POSIX regular expression patterns. It has the syntax - regexp_replace(source, - pattern, replacement - , flags ). - The source string is returned unchanged if - there is no match to the pattern. If there is a - match, the source string is returned with the - replacement string substituted for the matching - substring. The replacement string can contain - \n, where n is 1 + regexp_replace(source, + pattern, replacement + , flags ). + The source string is returned unchanged if + there is no match to the pattern. If there is a + match, the source string is returned with the + replacement string substituted for the matching + substring. The replacement string can contain + \n, where n is 1 through 9, to indicate that the source substring matching the - n'th parenthesized subexpression of the pattern should be - inserted, and it can contain \& to indicate that the + n'th parenthesized subexpression of the pattern should be + inserted, and it can contain \& to indicate that the substring matching the entire pattern should be inserted. Write - \\ if you need to put a literal backslash in the replacement + \\ if you need to put a literal backslash in the replacement text. - The flags parameter is an optional text + The flags parameter is an optional text string containing zero or more single-letter flags that change the - function's behavior. Flag i specifies case-insensitive - matching, while flag g specifies replacement of each matching + function's behavior. Flag i specifies case-insensitive + matching, while flag g specifies replacement of each matching substring rather than only the first one. Supported flags (though - not g) are + not g) are described in . @@ -4293,22 +4293,22 @@ regexp_replace('foobarbaz', 'b(..)', E'X\\1Y', 'g') - The regexp_match function returns a text array of + The regexp_match function returns a text array of captured substring(s) resulting from the first match of a POSIX regular expression pattern to a string. It has the syntax - regexp_match(string, - pattern , flags ). - If there is no match, the result is NULL. - If a match is found, and the pattern contains no + regexp_match(string, + pattern , flags ). + If there is no match, the result is NULL. + If a match is found, and the pattern contains no parenthesized subexpressions, then the result is a single-element text array containing the substring matching the whole pattern. 
- If a match is found, and the pattern contains + If a match is found, and the pattern contains parenthesized subexpressions, then the result is a text array - whose n'th element is the substring matching - the n'th parenthesized subexpression of - the pattern (not counting non-capturing + whose n'th element is the substring matching + the n'th parenthesized subexpression of + the pattern (not counting non-capturing parentheses; see below for details). - The flags parameter is an optional text string + The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Supported flags are described in . @@ -4330,7 +4330,7 @@ SELECT regexp_match('foobarbequebaz', '(bar)(beque)'); (1 row) In the common case where you just want the whole matching substring - or NULL for no match, write something like + or NULL for no match, write something like SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1]; regexp_match @@ -4341,20 +4341,20 @@ SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1]; - The regexp_matches function returns a set of text arrays + The regexp_matches function returns a set of text arrays of captured substring(s) resulting from matching a POSIX regular expression pattern to a string. It has the same syntax as regexp_match. This function returns no rows if there is no match, one row if there is - a match and the g flag is not given, or N - rows if there are N matches and the g flag + a match and the g flag is not given, or N + rows if there are N matches and the g flag is given. Each returned row is a text array containing the whole matched substring or the substrings matching parenthesized - subexpressions of the pattern, just as described above + subexpressions of the pattern, just as described above for regexp_match. - regexp_matches accepts all the flags shown + regexp_matches accepts all the flags shown in , plus - the g flag which commands it to return all matches, not + the g flag which commands it to return all matches, not just the first one. @@ -4377,46 +4377,46 @@ SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g'); - In most cases regexp_matches() should be used with - the g flag, since if you only want the first match, it's - easier and more efficient to use regexp_match(). - However, regexp_match() only exists - in PostgreSQL version 10 and up. When working in older - versions, a common trick is to place a regexp_matches() + In most cases regexp_matches() should be used with + the g flag, since if you only want the first match, it's + easier and more efficient to use regexp_match(). + However, regexp_match() only exists + in PostgreSQL version 10 and up. When working in older + versions, a common trick is to place a regexp_matches() call in a sub-select, for example: SELECT col1, (SELECT regexp_matches(col2, '(bar)(beque)')) FROM tab; - This produces a text array if there's a match, or NULL if - not, the same as regexp_match() would do. Without the + This produces a text array if there's a match, or NULL if + not, the same as regexp_match() would do. Without the sub-select, this query would produce no output at all for table rows without a match, which is typically not the desired behavior. - The regexp_split_to_table function splits a string using a POSIX + The regexp_split_to_table function splits a string using a POSIX regular expression pattern as a delimiter. It has the syntax - regexp_split_to_table(string, pattern - , flags ). 
- If there is no match to the pattern, the function returns the - string. If there is at least one match, for each match it returns + regexp_split_to_table(string, pattern + , flags ). + If there is no match to the pattern, the function returns the + string. If there is at least one match, for each match it returns the text from the end of the last match (or the beginning of the string) to the beginning of the match. When there are no more matches, it returns the text from the end of the last match to the end of the string. - The flags parameter is an optional text string containing + The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. regexp_split_to_table supports the flags described in . - The regexp_split_to_array function behaves the same as - regexp_split_to_table, except that regexp_split_to_array - returns its result as an array of text. It has the syntax - regexp_split_to_array(string, pattern - , flags ). - The parameters are the same as for regexp_split_to_table. + The regexp_split_to_array function behaves the same as + regexp_split_to_table, except that regexp_split_to_array + returns its result as an array of text. It has the syntax + regexp_split_to_array(string, pattern + , flags ). + The parameters are the same as for regexp_split_to_table. @@ -4471,8 +4471,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; zero-length matches that occur at the start or end of the string or immediately after a previous match. This is contrary to the strict definition of regexp matching that is implemented by - regexp_match and - regexp_matches, but is usually the most convenient behavior + regexp_match and + regexp_matches, but is usually the most convenient behavior in practice. Other software systems such as Perl use similar definitions. @@ -4491,16 +4491,16 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Regular expressions (REs), as defined in POSIX 1003.2, come in two forms: - extended REs or EREs + extended REs or EREs (roughly those of egrep), and - basic REs or BREs + basic REs or BREs (roughly those of ed). PostgreSQL supports both forms, and also implements some extensions that are not in the POSIX standard, but have become widely used due to their availability in programming languages such as Perl and Tcl. REs using these non-POSIX extensions are called - advanced REs or AREs + advanced REs or AREs in this documentation. AREs are almost an exact superset of EREs, but BREs have several notational incompatibilities (as well as being much more limited). @@ -4510,9 +4510,9 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - PostgreSQL always initially presumes that a regular + PostgreSQL always initially presumes that a regular expression follows the ARE rules. However, the more limited ERE or - BRE rules can be chosen by prepending an embedded option + BRE rules can be chosen by prepending an embedded option to the RE pattern, as described in . This can be useful for compatibility with applications that expect exactly the POSIX 1003.2 rules. @@ -4527,15 +4527,15 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - A branch is zero or more quantified atoms or - constraints, concatenated. + A branch is zero or more quantified atoms or + constraints, concatenated. It matches a match for the first, followed by a match for the second, etc; an empty branch matches the empty string. 
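A hedged psql illustration of branches, before moving on to quantified atoms (my example, not the patch's):

SELECT regexp_match('cat and dog', 'dog|cat');
Result: {cat}

The pattern has two branches; since a POSIX RE matches the candidate substring starting earliest in the string, the second branch wins here despite being listed last.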
- A quantified atom is an atom possibly followed - by a single quantifier. + A quantified atom is an atom possibly followed + by a single quantifier. Without a quantifier, it matches a match for the atom. With a quantifier, it can match some number of matches of the atom. An atom can be any of the possibilities @@ -4545,7 +4545,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - A constraint matches an empty string, but matches only when + A constraint matches an empty string, but matches only when specific conditions are met. A constraint can be used where an atom could be used, except it cannot be followed by a quantifier. The simple constraints are shown in @@ -4567,57 +4567,57 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - (re) - (where re is any regular expression) + (re) + (where re is any regular expression) matches a match for - re, with the match noted for possible reporting + re, with the match noted for possible reporting - (?:re) + (?:re) as above, but the match is not noted for reporting - (a non-capturing set of parentheses) + (a non-capturing set of parentheses) (AREs only) - . + . matches any single character - [chars] - a bracket expression, - matching any one of the chars (see + [chars] + a bracket expression, + matching any one of the chars (see for more detail) - \k - (where k is a non-alphanumeric character) + \k + (where k is a non-alphanumeric character) matches that character taken as an ordinary character, - e.g., \\ matches a backslash character + e.g., \\ matches a backslash character - \c - where c is alphanumeric + \c + where c is alphanumeric (possibly followed by other characters) - is an escape, see - (AREs only; in EREs and BREs, this matches c) + is an escape, see + (AREs only; in EREs and BREs, this matches c) - { + { when followed by a character other than a digit, - matches the left-brace character {; + matches the left-brace character {; when followed by a digit, it is the beginning of a - bound (see below) + bound (see below) - x - where x is a single character with no other + x + where x is a single character with no other significance, matches that character @@ -4625,7 +4625,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - An RE cannot end with a backslash (\). + An RE cannot end with a backslash (\). @@ -4649,82 +4649,82 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - * + * a sequence of 0 or more matches of the atom - + + + a sequence of 1 or more matches of the atom - ? + ? a sequence of 0 or 1 matches of the atom - {m} - a sequence of exactly m matches of the atom + {m} + a sequence of exactly m matches of the atom - {m,} - a sequence of m or more matches of the atom + {m,} + a sequence of m or more matches of the atom - {m,n} - a sequence of m through n - (inclusive) matches of the atom; m cannot exceed - n + {m,n} + a sequence of m through n + (inclusive) matches of the atom; m cannot exceed + n - *? - non-greedy version of * + *? + non-greedy version of * - +? - non-greedy version of + + +? + non-greedy version of + - ?? - non-greedy version of ? + ?? + non-greedy version of ? - {m}? - non-greedy version of {m} + {m}? + non-greedy version of {m} - {m,}? - non-greedy version of {m,} + {m,}? + non-greedy version of {m,} - {m,n}? - non-greedy version of {m,n} + {m,n}? + non-greedy version of {m,n} - The forms using {...} - are known as bounds. 
- The numbers m and n within a bound are + The forms using {...} + are known as bounds. + The numbers m and n within a bound are unsigned decimal integers with permissible values from 0 to 255 inclusive. - Non-greedy quantifiers (available in AREs only) match the - same possibilities as their corresponding normal (greedy) + Non-greedy quantifiers (available in AREs only) match the + same possibilities as their corresponding normal (greedy) counterparts, but prefer the smallest number rather than the largest number of matches. See for more detail. @@ -4733,7 +4733,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; A quantifier cannot immediately follow another quantifier, e.g., - ** is invalid. + ** is invalid. A quantifier cannot begin an expression or subexpression or follow ^ or |. @@ -4753,40 +4753,40 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - ^ + ^ matches at the beginning of the string - $ + $ matches at the end of the string - (?=re) - positive lookahead matches at any point - where a substring matching re begins + (?=re) + positive lookahead matches at any point + where a substring matching re begins (AREs only) - (?!re) - negative lookahead matches at any point - where no substring matching re begins + (?!re) + negative lookahead matches at any point + where no substring matching re begins (AREs only) - (?<=re) - positive lookbehind matches at any point - where a substring matching re ends + (?<=re) + positive lookbehind matches at any point + where a substring matching re ends (AREs only) - (?<!re) - negative lookbehind matches at any point - where no substring matching re ends + (?<!re) + negative lookbehind matches at any point + where no substring matching re ends (AREs only) @@ -4795,7 +4795,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Lookahead and lookbehind constraints cannot contain back - references (see ), + references (see ), and all parentheses within them are considered non-capturing. @@ -4808,7 +4808,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; characters enclosed in []. It normally matches any single character from the list (but see below). If the list begins with ^, it matches any single character - not from the rest of the list. + not from the rest of the list. If two characters in the list are separated by -, this is shorthand for the full range of characters between those two @@ -4853,7 +4853,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - PostgreSQL currently does not support multi-character collating + PostgreSQL currently does not support multi-character collating elements. This information describes possible future behavior. @@ -4861,7 +4861,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Within a bracket expression, a collating element enclosed in [= and =] is an equivalence - class, standing for the sequences of characters of all collating + class, standing for the sequences of characters of all collating elements equivalent to that one, including itself. (If there are no other equivalent collating elements, the treatment is as if the enclosing delimiters were [. and @@ -4896,7 +4896,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; matching empty strings at the beginning and end of a word respectively. 
A word is defined as a sequence of word characters that is neither preceded nor followed by word - characters. A word character is an alnum character (as + characters. A word character is an alnum character (as defined by ctype3) or an underscore. This is an extension, compatible with but not @@ -4911,44 +4911,44 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Regular Expression Escapes - Escapes are special sequences beginning with \ + Escapes are special sequences beginning with \ followed by an alphanumeric character. Escapes come in several varieties: character entry, class shorthands, constraint escapes, and back references. - A \ followed by an alphanumeric character but not constituting + A \ followed by an alphanumeric character but not constituting a valid escape is illegal in AREs. In EREs, there are no escapes: outside a bracket expression, - a \ followed by an alphanumeric character merely stands for + a \ followed by an alphanumeric character merely stands for that character as an ordinary character, and inside a bracket expression, - \ is an ordinary character. + \ is an ordinary character. (The latter is the one actual incompatibility between EREs and AREs.) - Character-entry escapes exist to make it easier to specify + Character-entry escapes exist to make it easier to specify non-printing and other inconvenient characters in REs. They are shown in . - Class-shorthand escapes provide shorthands for certain + Class-shorthand escapes provide shorthands for certain commonly-used character classes. They are shown in . - A constraint escape is a constraint, + A constraint escape is a constraint, matching the empty string if specific conditions are met, written as an escape. They are shown in . - A back reference (\n) matches the + A back reference (\n) matches the same string matched by the previous parenthesized subexpression specified - by the number n + by the number n (see ). For example, - ([bc])\1 matches bb or cc - but not bc or cb. + ([bc])\1 matches bb or cc + but not bc or cb. The subexpression must entirely precede the back reference in the RE. Subexpressions are numbered in the order of their leading parentheses. Non-capturing parentheses do not define subexpressions. 
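The ([bc])\1 example can be verified directly in psql (assuming standard_conforming_strings is on, its default, so the backslash reaches the regex engine intact):

SELECT 'bb' ~ '([bc])\1';
Result: true
SELECT 'bc' ~ '([bc])\1';
Result: false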
@@ -4967,122 +4967,122 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - \a + \a alert (bell) character, as in C - \b + \b backspace, as in C - \B - synonym for backslash (\) to help reduce the need for backslash + \B + synonym for backslash (\) to help reduce the need for backslash doubling - \cX - (where X is any character) the character whose + \cX + (where X is any character) the character whose low-order 5 bits are the same as those of - X, and whose other bits are all zero + X, and whose other bits are all zero - \e + \e the character whose collating-sequence name - is ESC, - or failing that, the character with octal value 033 + is ESC, + or failing that, the character with octal value 033 - \f + \f form feed, as in C - \n + \n newline, as in C - \r + \r carriage return, as in C - \t + \t horizontal tab, as in C - \uwxyz - (where wxyz is exactly four hexadecimal digits) + \uwxyz + (where wxyz is exactly four hexadecimal digits) the character whose hexadecimal value is - 0xwxyz + 0xwxyz - \Ustuvwxyz - (where stuvwxyz is exactly eight hexadecimal + \Ustuvwxyz + (where stuvwxyz is exactly eight hexadecimal digits) the character whose hexadecimal value is - 0xstuvwxyz + 0xstuvwxyz - \v + \v vertical tab, as in C - \xhhh - (where hhh is any sequence of hexadecimal + \xhhh + (where hhh is any sequence of hexadecimal digits) the character whose hexadecimal value is - 0xhhh + 0xhhh (a single character no matter how many hexadecimal digits are used) - \0 - the character whose value is 0 (the null byte) + \0 + the character whose value is 0 (the null byte) - \xy - (where xy is exactly two octal digits, - and is not a back reference) + \xy + (where xy is exactly two octal digits, + and is not a back reference) the character whose octal value is - 0xy + 0xy - \xyz - (where xyz is exactly three octal digits, - and is not a back reference) + \xyz + (where xyz is exactly three octal digits, + and is not a back reference) the character whose octal value is - 0xyz + 0xyz - Hexadecimal digits are 0-9, - a-f, and A-F. - Octal digits are 0-7. + Hexadecimal digits are 0-9, + a-f, and A-F. + Octal digits are 0-7. Numeric character-entry escapes specifying values outside the ASCII range (0-127) have meanings dependent on the database encoding. When the encoding is UTF-8, escape values are equivalent to Unicode code points, - for example \u1234 means the character U+1234. + for example \u1234 means the character U+1234. For other multibyte encodings, character-entry escapes usually just specify the concatenation of the byte values for the character. If the escape value does not correspond to any legal character in the database @@ -5091,8 +5091,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; The character-entry escapes are always taken as ordinary characters. - For example, \135 is ] in ASCII, but - \135 does not terminate a bracket expression. + For example, \135 is ] in ASCII, but + \135 does not terminate a bracket expression. @@ -5108,34 +5108,34 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - \d - [[:digit:]] + \d + [[:digit:]] - \s - [[:space:]] + \s + [[:space:]] - \w - [[:alnum:]_] + \w + [[:alnum:]_] (note underscore is included) - \D - [^[:digit:]] + \D + [^[:digit:]] - \S - [^[:space:]] + \S + [^[:space:]] - \W - [^[:alnum:]_] + \W + [^[:alnum:]_] (note underscore is included) @@ -5143,13 +5143,13 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo;
- Within bracket expressions, \d, \s, - and \w lose their outer brackets, - and \D, \S, and \W are illegal. - (So, for example, [a-c\d] is equivalent to - [a-c[:digit:]]. - Also, [a-c\D], which is equivalent to - [a-c^[:digit:]], is illegal.) + Within bracket expressions, \d, \s, + and \w lose their outer brackets, + and \D, \S, and \W are illegal. + (So, for example, [a-c\d] is equivalent to + [a-c[:digit:]]. + Also, [a-c\D], which is equivalent to + [a-c^[:digit:]], is illegal.) @@ -5165,38 +5165,38 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - \A + \A matches only at the beginning of the string (see for how this differs from - ^) + ^) - \m + \m matches only at the beginning of a word - \M + \M matches only at the end of a word - \y + \y matches only at the beginning or end of a word - \Y + \Y matches only at a point that is not the beginning or end of a word - \Z + \Z matches only at the end of the string (see for how this differs from - $) + $) @@ -5204,7 +5204,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; A word is defined as in the specification of - [[:<:]] and [[:>:]] above. + [[:<:]] and [[:>:]] above. Constraint escapes are illegal within bracket expressions. @@ -5221,18 +5221,18 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - \m - (where m is a nonzero digit) - a back reference to the m'th subexpression + \m + (where m is a nonzero digit) + a back reference to the m'th subexpression - \mnn - (where m is a nonzero digit, and - nn is some more digits, and the decimal value - mnn is not greater than the number of closing capturing + \mnn + (where m is a nonzero digit, and + nn is some more digits, and the decimal value + mnn is not greater than the number of closing capturing parentheses seen so far) - a back reference to the mnn'th subexpression + a back reference to the mnn'th subexpression @@ -5263,29 +5263,29 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - An RE can begin with one of two special director prefixes. - If an RE begins with ***:, + An RE can begin with one of two special director prefixes. + If an RE begins with ***:, the rest of the RE is taken as an ARE. (This normally has no effect in - PostgreSQL, since REs are assumed to be AREs; + PostgreSQL, since REs are assumed to be AREs; but it does have an effect if ERE or BRE mode had been specified by - the flags parameter to a regex function.) - If an RE begins with ***=, + the flags parameter to a regex function.) + If an RE begins with ***=, the rest of the RE is taken to be a literal string, with all characters considered ordinary characters. - An ARE can begin with embedded options: - a sequence (?xyz) - (where xyz is one or more alphabetic characters) + An ARE can begin with embedded options: + a sequence (?xyz) + (where xyz is one or more alphabetic characters) specifies options affecting the rest of the RE. These options override any previously determined options — in particular, they can override the case-sensitivity behavior implied by - a regex operator, or the flags parameter to a regex + a regex operator, or the flags parameter to a regex function. The available option letters are shown in . - Note that these same option letters are used in the flags + Note that these same option letters are used in the flags parameters of regex functions. 
@@ -5302,67 +5302,67 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - b + b rest of RE is a BRE - c + c case-sensitive matching (overrides operator type) - e + e rest of RE is an ERE - i + i case-insensitive matching (see ) (overrides operator type) - m - historical synonym for n + m + historical synonym for n - n + n newline-sensitive matching (see ) - p + p partial newline-sensitive matching (see ) - q - rest of RE is a literal (quoted) string, all ordinary + q + rest of RE is a literal (quoted) string, all ordinary characters - s + s non-newline-sensitive matching (default) - t + t tight syntax (default; see below) - w - inverse partial newline-sensitive (weird) matching + w + inverse partial newline-sensitive (weird) matching (see ) - x + x expanded syntax (see below) @@ -5370,18 +5370,18 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo;
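Two of these option letters in action, as a hedged psql sketch (not part of the patch):

SELECT 'FOO' ~ '(?i)foo';
Result: true (i requests case-insensitive matching)
SELECT 'abc' ~ '(?q)a.c';
Result: false (q makes the rest of the RE a literal string, so the dot is no longer a wildcard)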
- Embedded options take effect at the ) terminating the sequence. + Embedded options take effect at the ) terminating the sequence. They can appear only at the start of an ARE (after the - ***: director if any). + ***: director if any). - In addition to the usual (tight) RE syntax, in which all - characters are significant, there is an expanded syntax, - available by specifying the embedded x option. + In addition to the usual (tight) RE syntax, in which all + characters are significant, there is an expanded syntax, + available by specifying the embedded x option. In the expanded syntax, white-space characters in the RE are ignored, as are - all characters between a # + all characters between a # and the following newline (or the end of the RE). This permits paragraphing and commenting a complex RE. There are three exceptions to that basic rule: @@ -5389,41 +5389,41 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - a white-space character or # preceded by \ is + a white-space character or # preceded by \ is retained - white space or # within a bracket expression is retained + white space or # within a bracket expression is retained white space and comments cannot appear within multi-character symbols, - such as (?: + such as (?: For this purpose, white-space characters are blank, tab, newline, and - any character that belongs to the space character class. + any character that belongs to the space character class. Finally, in an ARE, outside bracket expressions, the sequence - (?#ttt) - (where ttt is any text not containing a )) + (?#ttt) + (where ttt is any text not containing a )) is a comment, completely ignored. Again, this is not allowed between the characters of - multi-character symbols, like (?:. + multi-character symbols, like (?:. Such comments are more a historical artifact than a useful facility, and their use is deprecated; use the expanded syntax instead. - None of these metasyntax extensions is available if - an initial ***= director + None of these metasyntax extensions is available if + an initial ***= director has specified that the user's input be treated as a literal string rather than as an RE. @@ -5437,8 +5437,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; string, the RE matches the one starting earliest in the string. If the RE could match more than one substring starting at that point, either the longest possible match or the shortest possible match will - be taken, depending on whether the RE is greedy or - non-greedy. + be taken, depending on whether the RE is greedy or + non-greedy.
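The expanded syntax is easiest to see in a concrete query; a minimal sketch (mine, not the patch's):

SELECT 'quick' ~ '(?x) qu ick   # whitespace and this comment are ignored';
Result: true

With the x option the pattern above is effectively just quick.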
@@ -5458,39 +5458,39 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; A quantified atom with a fixed-repetition quantifier - ({m} + ({m} or - {m}?) + {m}?) has the same greediness (possibly none) as the atom itself. A quantified atom with other normal quantifiers (including - {m,n} - with m equal to n) + {m,n} + with m equal to n) is greedy (prefers longest match). A quantified atom with a non-greedy quantifier (including - {m,n}? - with m equal to n) + {m,n}? + with m equal to n) is non-greedy (prefers shortest match). A branch — that is, an RE that has no top-level - | operator — has the same greediness as the first + | operator — has the same greediness as the first quantified atom in it that has a greediness attribute. An RE consisting of two or more branches connected by the - | operator is always greedy. + | operator is always greedy.
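These greediness rules can be checked with the two-argument substring function described earlier; a brief hedged illustration (not from the patch):

SELECT substring('abbbc' from 'ab*');
Result: abbb
SELECT substring('abbbc' from 'ab*?');
Result: a

In the first query the RE inherits greediness from b* and takes the longest match; in the second, the non-greedy b*? makes the whole RE prefer the shortest one.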
@@ -5501,7 +5501,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; quantified atoms, but with branches and entire REs that contain quantified atoms. What that means is that the matching is done in such a way that the branch, or whole RE, matches the longest or shortest possible - substring as a whole. Once the length of the entire match + substring as a whole. Once the length of the entire match is determined, the part of it that matches any particular subexpression is determined on the basis of the greediness attribute of that subexpression, with subexpressions starting earlier in the RE taking @@ -5516,16 +5516,16 @@ SELECT SUBSTRING('XY1234Z', 'Y*([0-9]{1,3})'); SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); Result: 1 - In the first case, the RE as a whole is greedy because Y* - is greedy. It can match beginning at the Y, and it matches - the longest possible string starting there, i.e., Y123. - The output is the parenthesized part of that, or 123. - In the second case, the RE as a whole is non-greedy because Y*? - is non-greedy. It can match beginning at the Y, and it matches - the shortest possible string starting there, i.e., Y1. - The subexpression [0-9]{1,3} is greedy but it cannot change + In the first case, the RE as a whole is greedy because Y* + is greedy. It can match beginning at the Y, and it matches + the longest possible string starting there, i.e., Y123. + The output is the parenthesized part of that, or 123. + In the second case, the RE as a whole is non-greedy because Y*? + is non-greedy. It can match beginning at the Y, and it matches + the shortest possible string starting there, i.e., Y1. + The subexpression [0-9]{1,3} is greedy but it cannot change the decision as to the overall match length; so it is forced to match - just 1. + just 1. @@ -5533,11 +5533,11 @@ SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); the total match length is either as long as possible or as short as possible, according to the attribute assigned to the whole RE. The attributes assigned to the subexpressions only affect how much of that - match they are allowed to eat relative to each other. + match they are allowed to eat relative to each other. - The quantifiers {1,1} and {1,1}? + The quantifiers {1,1} and {1,1}? can be used to force greediness or non-greediness, respectively, on a subexpression or a whole RE. This is useful when you need the whole RE to have a greediness attribute @@ -5549,8 +5549,8 @@ SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); SELECT regexp_match('abc01234xyz', '(.*)(\d+)(.*)'); Result: {abc0123,4,xyz} - That didn't work: the first .* is greedy so - it eats as much as it can, leaving the \d+ to + That didn't work: the first .* is greedy so + it eats as much as it can, leaving the \d+ to match at the last possible place, the last digit. We might try to fix that by making it non-greedy: @@ -5573,14 +5573,14 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); match lengths are measured in characters, not collating elements. An empty string is considered longer than no match at all. 
For example: - bb* - matches the three middle characters of abbbc; - (week|wee)(night|knights) - matches all ten characters of weeknights; - when (.*).* - is matched against abc the parenthesized subexpression + bb* + matches the three middle characters of abbbc; + (week|wee)(night|knights) + matches all ten characters of weeknights; + when (.*).* + is matched against abc the parenthesized subexpression matches all three characters; and when - (a*)* is matched against bc + (a*)* is matched against bc both the whole RE and the parenthesized subexpression match an empty string. @@ -5592,38 +5592,38 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); When an alphabetic that exists in multiple cases appears as an ordinary character outside a bracket expression, it is effectively transformed into a bracket expression containing both cases, - e.g., x becomes [xX]. + e.g., x becomes [xX]. When it appears inside a bracket expression, all case counterparts of it are added to the bracket expression, e.g., - [x] becomes [xX] - and [^x] becomes [^xX]. + [x] becomes [xX] + and [^x] becomes [^xX]. - If newline-sensitive matching is specified, . - and bracket expressions using ^ + If newline-sensitive matching is specified, . + and bracket expressions using ^ will never match the newline character (so that matches will never cross newlines unless the RE explicitly arranges it) - and ^ and $ + and ^ and $ will match the empty string after and before a newline respectively, in addition to matching at beginning and end of string respectively. - But the ARE escapes \A and \Z - continue to match beginning or end of string only. + But the ARE escapes \A and \Z + continue to match beginning or end of string only. If partial newline-sensitive matching is specified, - this affects . and bracket expressions - as with newline-sensitive matching, but not ^ - and $. + this affects . and bracket expressions + as with newline-sensitive matching, but not ^ + and $. If inverse partial newline-sensitive matching is specified, - this affects ^ and $ - as with newline-sensitive matching, but not . + this affects ^ and $ + as with newline-sensitive matching, but not . and bracket expressions. This isn't very useful but is provided for symmetry. @@ -5642,18 +5642,18 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); The only feature of AREs that is actually incompatible with - POSIX EREs is that \ does not lose its special + POSIX EREs is that \ does not lose its special significance inside bracket expressions. All other ARE features use syntax which is illegal or has undefined or unspecified effects in POSIX EREs; - the *** syntax of directors likewise is outside the POSIX + the *** syntax of directors likewise is outside the POSIX syntax for both BREs and EREs. Many of the ARE extensions are borrowed from Perl, but some have been changed to clean them up, and a few Perl extensions are not present. 
- Incompatibilities of note include \b, \B, + Incompatibilities of note include \b, \B, the lack of special treatment for a trailing newline, the addition of complemented bracket expressions to the things affected by newline-sensitive matching, @@ -5664,12 +5664,12 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Two significant incompatibilities exist between AREs and the ERE syntax - recognized by pre-7.4 releases of PostgreSQL: + recognized by pre-7.4 releases of PostgreSQL: - In AREs, \ followed by an alphanumeric character is either + In AREs, \ followed by an alphanumeric character is either an escape or an error, while in previous releases, it was just another way of writing the alphanumeric. This should not be much of a problem because there was no reason to @@ -5678,9 +5678,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); - In AREs, \ remains a special character within - [], so a literal \ within a bracket - expression must be written \\. + In AREs, \ remains a special character within + [], so a literal \ within a bracket + expression must be written \\. @@ -5692,27 +5692,27 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); BREs differ from EREs in several respects. - In BREs, |, +, and ? + In BREs, |, +, and ? are ordinary characters and there is no equivalent for their functionality. The delimiters for bounds are - \{ and \}, - with { and } + \{ and \}, + with { and } by themselves ordinary characters. The parentheses for nested subexpressions are - \( and \), - with ( and ) by themselves ordinary characters. - ^ is an ordinary character except at the beginning of the + \( and \), + with ( and ) by themselves ordinary characters. + ^ is an ordinary character except at the beginning of the RE or the beginning of a parenthesized subexpression, - $ is an ordinary character except at the end of the + $ is an ordinary character except at the end of the RE or the end of a parenthesized subexpression, - and * is an ordinary character if it appears at the beginning + and * is an ordinary character if it appears at the beginning of the RE or the beginning of a parenthesized subexpression - (after a possible leading ^). + (after a possible leading ^). Finally, single-digit back references are available, and - \< and \> + \< and \> are synonyms for - [[:<:]] and [[:>:]] + [[:<:]] and [[:>:]] respectively; no other escapes are available in BREs. @@ -5839,13 +5839,13 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); exist to handle input formats that cannot be converted by simple casting. For most standard date/time formats, simply casting the source string to the required data type works, and is much easier. - Similarly, to_number is unnecessary for standard numeric + Similarly, to_number is unnecessary for standard numeric representations. - In a to_char output template string, there are certain + In a to_char output template string, there are certain patterns that are recognized and replaced with appropriately-formatted data based on the given value. Any text that is not a template pattern is simply copied verbatim. Similarly, in an input template string (for the @@ -6022,11 +6022,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}');
D - day of the week, Sunday (1) to Saturday (7) + day of the week, Sunday (1) to Saturday (7) ID - ISO 8601 day of the week, Monday (1) to Sunday (7) + ISO 8601 day of the week, Monday (1) to Sunday (7) W @@ -6063,17 +6063,17 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); TZ upper case time-zone abbreviation - (only supported in to_char) + (only supported in to_char) tz lower case time-zone abbreviation - (only supported in to_char) + (only supported in to_char) OF time-zone offset from UTC - (only supported in to_char) + (only supported in to_char) @@ -6107,12 +6107,12 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); TH suffix upper case ordinal number suffix - DDTH, e.g., 12TH + DDTH, e.g., 12TH th suffix lower case ordinal number suffix - DDth, e.g., 12th + DDth, e.g., 12th FX prefix @@ -6153,7 +6153,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); TM does not include trailing blanks. - to_timestamp and to_date ignore + to_timestamp and to_date ignore the TM modifier. @@ -6179,9 +6179,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); even if it contains pattern key words. For example, in '"Hello Year "YYYY', the YYYY will be replaced by the year data, but the single Y in Year - will not be. In to_date, to_number, - and to_timestamp, double-quoted strings skip the number of - input characters contained in the string, e.g. "XX" + will not be. In to_date, to_number, + and to_timestamp, double-quoted strings skip the number of + input characters contained in the string, e.g. "XX" skips two input characters. @@ -6198,9 +6198,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); In to_timestamp and to_date, if the year format specification is less than four digits, e.g. - YYY, and the supplied year is less than four digits, + YYY, and the supplied year is less than four digits, the year will be adjusted to be nearest to the year 2020, e.g. - 95 becomes 1995. + 95 becomes 1995. @@ -6269,7 +6269,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Attempting to enter a date using a mixture of ISO 8601 week-numbering fields and Gregorian date fields is nonsensical, and will cause an error. In the context of an ISO 8601 week-numbering year, the - concept of a month or day of month has no + concept of a month or day of month has no meaning. In the context of a Gregorian year, the ISO week has no meaning. @@ -6278,8 +6278,8 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); While to_date will reject a mixture of Gregorian and ISO week-numbering date fields, to_char will not, since output format - specifications like YYYY-MM-DD (IYYY-IDDD) can be - useful. But avoid writing something like IYYY-MM-DD; + specifications like YYYY-MM-DD (IYYY-IDDD) can be + useful. But avoid writing something like IYYY-MM-DD; that would yield surprising results near the start of the year. (See for more information.) @@ -6323,11 +6323,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); - to_char(interval) formats HH and - HH12 as shown on a 12-hour clock, for example zero hours - and 36 hours both output as 12, while HH24 + to_char(interval) formats HH and + HH12 as shown on a 12-hour clock, for example zero hours + and 36 hours both output as 12, while HH24 outputs the full hour value, which can exceed 23 in - an interval value. + an interval value. 
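Putting the template patterns and modifiers above together (an illustrative sketch; the displayed offset of the to_timestamp result depends on the session's TimeZone setting):

SELECT to_char(TIMESTAMP '2001-02-16 20:38:40', 'Day, DD HH12:MI:SS');
Result: Friday   , 16 08:38:40

SELECT to_timestamp('05 Dec 2000', 'DD Mon YYYY');
Result: 2000-12-05 00:00:00-05

Note the blanks after Friday: without the FM modifier, Day is padded to nine characters.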
@@ -6423,19 +6423,19 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); - 0 specifies a digit position that will always be printed, - even if it contains a leading/trailing zero. 9 also + 0 specifies a digit position that will always be printed, + even if it contains a leading/trailing zero. 9 also specifies a digit position, but if it is a leading zero then it will be replaced by a space, while if it is a trailing zero and fill mode - is specified then it will be deleted. (For to_number(), + is specified then it will be deleted. (For to_number(), these two pattern characters are equivalent.) - The pattern characters S, L, D, - and G represent the sign, currency symbol, decimal point, + The pattern characters S, L, D, + and G represent the sign, currency symbol, decimal point, and thousands separator characters defined by the current locale (see and ). The pattern characters period @@ -6447,9 +6447,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); If no explicit provision is made for a sign - in to_char()'s pattern, one column will be reserved for + in to_char()'s pattern, one column will be reserved for the sign, and it will be anchored to (appear just left of) the - number. If S appears just left of some 9's, + number. If S appears just left of some 9's, it will likewise be anchored to the number. @@ -6742,7 +6742,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); inputs actually come in two variants: one that takes time with time zone or timestamp with time zone, and one that takes time without time zone or timestamp without time zone. For brevity, these variants are not shown separately. Also, the - + and * operators come in commutative pairs (for + + and * operators come in commutative pairs (for example both date + integer and integer + date); we show only one of each such pair. @@ -6899,7 +6899,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); age(timestamp, timestamp) interval - Subtract arguments, producing a symbolic result that + Subtract arguments, producing a symbolic result that uses years and months, rather than just days age(timestamp '2001-04-10', timestamp '1957-06-13') 43 years 9 mons 27 days @@ -7109,7 +7109,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); justify_interval(interval) interval - Adjust interval using justify_days and justify_hours, with additional sign adjustments + Adjust interval using justify_days and justify_hours, with additional sign adjustments justify_interval(interval '1 mon -1 hour') 29 days 23:00:00 @@ -7302,7 +7302,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); text Current date and time - (like clock_timestamp, but as a text string); + (like clock_timestamp, but as a text string); see @@ -7344,7 +7344,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); OVERLAPS - In addition to these functions, the SQL OVERLAPS operator is + In addition to these functions, the SQL OVERLAPS operator is supported: (start1, end1) OVERLAPS (start2, end2) @@ -7355,11 +7355,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); can be specified as pairs of dates, times, or time stamps; or as a date, time, or time stamp followed by an interval. When a pair of values is provided, either the start or the end can be written - first; OVERLAPS automatically takes the earlier value + first; OVERLAPS automatically takes the earlier value of the pair as the start. 
Each time period is considered to - represent the half-open interval start <= - time < end, unless - start and end are equal in which case it + represent the half-open interval start <= + time < end, unless + start and end are equal in which case it represents that single time instant. This means for instance that two time periods with only an endpoint in common do not overlap. @@ -7398,31 +7398,31 @@ SELECT (DATE '2001-10-30', DATE '2001-10-30') OVERLAPS - Note there can be ambiguity in the months field returned by - age because different months have different numbers of - days. PostgreSQL's approach uses the month from the + Note there can be ambiguity in the months field returned by + age because different months have different numbers of + days. PostgreSQL's approach uses the month from the earlier of the two dates when calculating partial months. For example, - age('2004-06-01', '2004-04-30') uses April to yield - 1 mon 1 day, while using May would yield 1 mon 2 - days because May has 31 days, while April has only 30. + age('2004-06-01', '2004-04-30') uses April to yield + 1 mon 1 day, while using May would yield 1 mon 2 + days because May has 31 days, while April has only 30. Subtraction of dates and timestamps can also be complex. One conceptually simple way to perform subtraction is to convert each value to a number - of seconds using EXTRACT(EPOCH FROM ...), then subtract the + of seconds using EXTRACT(EPOCH FROM ...), then subtract the results; this produces the - number of seconds between the two values. This will adjust + number of seconds between the two values. This will adjust for the number of days in each month, timezone changes, and daylight saving time adjustments. Subtraction of date or timestamp - values with the - operator + values with the - operator returns the number of days (24-hours) and hours/minutes/seconds - between the values, making the same adjustments. The age + between the values, making the same adjustments. The age function returns years, months, days, and hours/minutes/seconds, performing field-by-field subtraction and then adjusting for negative field values. The following queries illustrate the differences in these approaches. The sample results were produced with timezone - = 'US/Eastern'; there is a daylight saving time change between the + = 'US/Eastern'; there is a daylight saving time change between the two dates used: @@ -7534,8 +7534,8 @@ SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40'); dow - The day of the week as Sunday (0) to - Saturday (6) + The day of the week as Sunday (0) to + Saturday (6) @@ -7587,7 +7587,7 @@ SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours'); You can convert an epoch value back to a time stamp - with to_timestamp: + with to_timestamp: SELECT to_timestamp(982384720.12); @@ -7614,8 +7614,8 @@ SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40'); isodow - The day of the week as Monday (1) to - Sunday (7) + The day of the week as Monday (1) to + Sunday (7) @@ -7623,8 +7623,8 @@ SELECT EXTRACT(ISODOW FROM TIMESTAMP '2001-02-18 20:38:40'); Result: 7 - This is identical to dow except for Sunday. This - matches the ISO 8601 day of the week numbering. + This is identical to dow except for Sunday. This + matches the ISO 8601 day of the week numbering. 
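Pairing this with the dow field makes the difference visible, since the timestamp above falls on a Sunday (a quick illustration):

SELECT EXTRACT(DOW FROM TIMESTAMP '2001-02-18 20:38:40');
Result: 0

whereas isodow yields 7 for the same timestamp, as shown above.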
@@ -7819,11 +7819,11 @@ SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); In the ISO week-numbering system, it is possible for early-January dates to be part of the 52nd or 53rd week of the previous year, and for late-December dates to be part of the first week of the next year. - For example, 2005-01-01 is part of the 53rd week of year - 2004, and 2006-01-01 is part of the 52nd week of year - 2005, while 2012-12-31 is part of the first week of 2013. - It's recommended to use the isoyear field together with - week to get consistent results. + For example, 2005-01-01 is part of the 53rd week of year + 2004, and 2006-01-01 is part of the 52nd week of year + 2005, while 2012-12-31 is part of the first week of 2013. + It's recommended to use the isoyear field together with + week to get consistent results. @@ -7837,8 +7837,8 @@ SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40'); year - The year field. Keep in mind there is no 0 AD, so subtracting - BC years from AD years should be done with care. + The year field. Keep in mind there is no 0 AD, so subtracting + BC years from AD years should be done with care. @@ -7853,11 +7853,11 @@ SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); - When the input value is +/-Infinity, extract returns - +/-Infinity for monotonically-increasing fields (epoch, - julian, year, isoyear, - decade, century, and millennium). - For other fields, NULL is returned. PostgreSQL + When the input value is +/-Infinity, extract returns + +/-Infinity for monotonically-increasing fields (epoch, + julian, year, isoyear, + decade, century, and millennium). + For other fields, NULL is returned. PostgreSQL versions before 9.6 returned zero for all cases of infinite input. @@ -7908,13 +7908,13 @@ SELECT date_part('hour', INTERVAL '4 hours 3 minutes'); date_trunc('field', source) source is a value expression of type - timestamp or interval. + timestamp or interval. (Values of type date and time are cast automatically to timestamp or - interval, respectively.) + interval, respectively.) field selects to which precision to truncate the input value. The return value is of type - timestamp or interval + timestamp or interval with all fields that are less significant than the selected one set to zero (or one, for day and month). @@ -7983,34 +7983,34 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); - timestamp without time zone AT TIME ZONE zone + timestamp without time zone AT TIME ZONE zone timestamp with time zone - Treat given time stamp without time zone as located in the specified time zone + Treat given time stamp without time zone as located in the specified time zone - timestamp with time zone AT TIME ZONE zone + timestamp with time zone AT TIME ZONE zone timestamp without time zone - Convert given time stamp with time zone to the new time + Convert given time stamp with time zone to the new time zone, with no time zone designation - time with time zone AT TIME ZONE zone + time with time zone AT TIME ZONE zone time with time zone - Convert given time with time zone to the new time zone + Convert given time with time zone to the new time zone - In these expressions, the desired time zone zone can be + In these expressions, the desired time zone zone can be specified either as a text string (e.g., 'PST') or as an interval (e.g., INTERVAL '-08:00'). 
In the text case, a time zone name can be specified in any of the ways @@ -8018,7 +8018,7 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); - Examples (assuming the local time zone is PST8PDT): + Examples (assuming the local time zone is PST8PDT): SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST'; Result: 2001-02-16 19:38:40-08 @@ -8032,10 +8032,10 @@ SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST'; - The function timezone(zone, - timestamp) is equivalent to the SQL-conforming construct - timestamp AT TIME ZONE - zone. + The function timezone(zone, + timestamp) is equivalent to the SQL-conforming construct + timestamp AT TIME ZONE + zone. @@ -8140,23 +8140,23 @@ now() - transaction_timestamp() is equivalent to + transaction_timestamp() is equivalent to CURRENT_TIMESTAMP, but is named to clearly reflect what it returns. - statement_timestamp() returns the start time of the current + statement_timestamp() returns the start time of the current statement (more specifically, the time of receipt of the latest command message from the client). - statement_timestamp() and transaction_timestamp() + statement_timestamp() and transaction_timestamp() return the same value during the first command of a transaction, but might differ during subsequent commands. - clock_timestamp() returns the actual current time, and + clock_timestamp() returns the actual current time, and therefore its value changes even within a single SQL command. - timeofday() is a historical + timeofday() is a historical PostgreSQL function. Like - clock_timestamp(), it returns the actual current time, - but as a formatted text string rather than a timestamp - with time zone value. - now() is a traditional PostgreSQL + clock_timestamp(), it returns the actual current time, + but as a formatted text string rather than a timestamp + with time zone value. + now() is a traditional PostgreSQL equivalent to transaction_timestamp(). @@ -8174,7 +8174,7 @@ SELECT TIMESTAMP 'now'; -- incorrect for use with DEFAULT - You do not want to use the third form when specifying a DEFAULT + You do not want to use the third form when specifying a DEFAULT clause while creating a table. The system will convert now to a timestamp as soon as the constant is parsed, so that when the default value is needed, @@ -8210,16 +8210,16 @@ SELECT TIMESTAMP 'now'; -- incorrect for use with DEFAULT process: pg_sleep(seconds) -pg_sleep_for(interval) -pg_sleep_until(timestamp with time zone) +pg_sleep_for(interval) +pg_sleep_until(timestamp with time zone) pg_sleep makes the current session's process sleep until seconds seconds have elapsed. seconds is a value of type - double precision, so fractional-second delays can be specified. + double precision, so fractional-second delays can be specified. pg_sleep_for is a convenience function for larger - sleep times specified as an interval. + sleep times specified as an interval. pg_sleep_until is a convenience function for when a specific wake-up time is desired. For example: @@ -8341,7 +8341,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - Notice that except for the two-argument form of enum_range, + Notice that except for the two-argument form of enum_range, these functions disregard the specific value passed to them; they care only about its declared data type. Either null or a specific value of the type can be passed, with the same result. 
It is more common to @@ -8365,13 +8365,13 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - Note that the same as operator, ~=, represents + Note that the same as operator, ~=, represents the usual notion of equality for the point, box, polygon, and circle types. - Some of these types also have an = operator, but - = compares - for equal areas only. The other scalar comparison operators - (<= and so on) likewise compare areas for these types. + Some of these types also have an = operator, but + = compares + for equal areas only. The other scalar comparison operators + (<= and so on) likewise compare areas for these types. @@ -8548,8 +8548,8 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple Before PostgreSQL 8.2, the containment - operators @> and <@ were respectively - called ~ and @. These names are still + operators @> and <@ were respectively + called ~ and @. These names are still available, but are deprecated and will eventually be removed. @@ -8604,67 +8604,67 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - area(object) + area(object) double precision area area(box '((0,0),(1,1))') - center(object) + center(object) point center center(box '((0,0),(1,2))') - diameter(circle) + diameter(circle) double precision diameter of circle diameter(circle '((0,0),2.0)') - height(box) + height(box) double precision vertical size of box height(box '((0,0),(1,1))') - isclosed(path) + isclosed(path) boolean a closed path? isclosed(path '((0,0),(1,1),(2,0))') - isopen(path) + isopen(path) boolean an open path? isopen(path '[(0,0),(1,1),(2,0)]') - length(object) + length(object) double precision length length(path '((-1,0),(1,0))') - npoints(path) + npoints(path) int number of points npoints(path '[(0,0),(1,1),(2,0)]') - npoints(polygon) + npoints(polygon) int number of points npoints(polygon '((1,1),(0,0))') - pclose(path) + pclose(path) path convert path to closed pclose(path '[(0,0),(1,1),(2,0)]') - popen(path) + popen(path) path convert path to open popen(path '((0,0),(1,1),(2,0))') @@ -8676,7 +8676,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple radius(circle '((0,0),2.0)') - width(box) + width(box) double precision horizontal size of box width(box '((0,0),(1,1))') @@ -8859,13 +8859,13 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - It is possible to access the two component numbers of a point + It is possible to access the two component numbers of a point as though the point were an array with indexes 0 and 1. For example, if - t.p is a point column then - SELECT p[0] FROM t retrieves the X coordinate and - UPDATE t SET p[1] = ... changes the Y coordinate. - In the same way, a value of type box or lseg can be treated - as an array of two point values. + t.p is a point column then + SELECT p[0] FROM t retrieves the X coordinate and + UPDATE t SET p[1] = ... changes the Y coordinate. + In the same way, a value of type box or lseg can be treated + as an array of two point values. @@ -9188,19 +9188,19 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - Any cidr value can be cast to inet implicitly + Any cidr value can be cast to inet implicitly or explicitly; therefore, the functions shown above as operating on - inet also work on cidr values. (Where there are - separate functions for inet and cidr, it is because + inet also work on cidr values. 
(Where there are + separate functions for inet and cidr, it is because the behavior should be different for the two cases.) - Also, it is permitted to cast an inet value to cidr. + Also, it is permitted to cast an inet value to cidr. When this is done, any bits to the right of the netmask are silently zeroed - to create a valid cidr value. + to create a valid cidr value. In addition, - you can cast a text value to inet or cidr + you can cast a text value to inet or cidr using normal casting syntax: for example, - inet(expression) or - colname::cidr. + inet(expression) or + colname::cidr. @@ -9345,64 +9345,64 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple @@ - boolean - tsvector matches tsquery ? + boolean + tsvector matches tsquery ? to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat') t @@@ - boolean - deprecated synonym for @@ + boolean + deprecated synonym for @@ to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat') t || - tsvector - concatenate tsvectors + tsvector + concatenate tsvectors 'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector 'a':1 'b':2,5 'c':3 'd':4 && - tsquery - AND tsquerys together + tsquery + AND tsquerys together 'fat | rat'::tsquery && 'cat'::tsquery ( 'fat' | 'rat' ) & 'cat' || - tsquery - OR tsquerys together + tsquery + OR tsquerys together 'fat | rat'::tsquery || 'cat'::tsquery ( 'fat' | 'rat' ) | 'cat' !! - tsquery - negate a tsquery + tsquery + negate a tsquery !! 'cat'::tsquery !'cat' <-> - tsquery - tsquery followed by tsquery + tsquery + tsquery followed by tsquery to_tsquery('fat') <-> to_tsquery('rat') 'fat' <-> 'rat' @> - boolean - tsquery contains another ? + boolean + tsquery contains another ? 'cat'::tsquery @> 'cat & rat'::tsquery f <@ - boolean - tsquery is contained in ? + boolean + tsquery is contained in ? 'cat'::tsquery <@ 'cat & rat'::tsquery t @@ -9412,15 +9412,15 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - The tsquery containment operators consider only the lexemes + The tsquery containment operators consider only the lexemes listed in the two queries, ignoring the combining operators. In addition to the operators shown in the table, the ordinary B-tree - comparison operators (=, <, etc) are defined - for types tsvector and tsquery. These are not very + comparison operators (=, <, etc) are defined + for types tsvector and tsquery. These are not very useful for text searching but allow, for example, unique indexes to be built on columns of these types. 
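For example, combining the match and phrase operators (an illustrative sketch; the one-argument to_tsquery calls assume default_text_search_config = 'english'):

SELECT to_tsvector('english', 'The quick brown foxes jumped') @@ to_tsquery('english', 'fox & jump');
Result: t

SELECT to_tsvector('english', 'The quick brown fox') @@ (to_tsquery('quick') <-> to_tsquery('brown'));
Result: t

The first query matches because to_tsvector stems foxes and jumped to the lexemes fox and jump; the second requires quick to be immediately followed by brown.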
@@ -9443,7 +9443,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple array_to_tsvector - array_to_tsvector(text[]) + array_to_tsvector(text[]) tsvector convert array of lexemes to tsvector @@ -9467,10 +9467,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple length - length(tsvector) + length(tsvector) integer - number of lexemes in tsvector + number of lexemes in tsvector length('fat:2,4 cat:3 rat:5A'::tsvector) 3 @@ -9479,10 +9479,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple numnode - numnode(tsquery) + numnode(tsquery) integer - number of lexemes plus operators in tsquery + number of lexemes plus operators in tsquery numnode('(fat & rat) | cat'::tsquery) 5 @@ -9491,10 +9491,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple plainto_tsquery - plainto_tsquery( config regconfig , query text) + plainto_tsquery( config regconfig , query text) tsquery - produce tsquery ignoring punctuation + produce tsquery ignoring punctuation plainto_tsquery('english', 'The Fat Rats') 'fat' & 'rat' @@ -9503,10 +9503,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple phraseto_tsquery - phraseto_tsquery( config regconfig , query text) + phraseto_tsquery( config regconfig , query text) tsquery - produce tsquery that searches for a phrase, + produce tsquery that searches for a phrase, ignoring punctuation phraseto_tsquery('english', 'The Fat Rats') 'fat' <-> 'rat' @@ -9516,10 +9516,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple querytree - querytree(query tsquery) + querytree(query tsquery) text - get indexable part of a tsquery + get indexable part of a tsquery querytree('foo & ! 
bar'::tsquery) 'foo' @@ -9528,7 +9528,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple setweight - setweight(vector tsvector, weight "char") + setweight(vector tsvector, weight "char") tsvector assign weight to each element of vector @@ -9541,7 +9541,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple setweight setweight for specific lexeme(s) - setweight(vector tsvector, weight "char", lexemes text[]) + setweight(vector tsvector, weight "char", lexemes text[]) tsvector assign weight to elements of vector that are listed in lexemes @@ -9553,10 +9553,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple strip - strip(tsvector) + strip(tsvector) tsvector - remove positions and weights from tsvector + remove positions and weights from tsvector strip('fat:2,4 cat:3 rat:5A'::tsvector) 'cat' 'fat' 'rat' @@ -9565,10 +9565,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple to_tsquery - to_tsquery( config regconfig , query text) + to_tsquery( config regconfig , query text) tsquery - normalize words and convert to tsquery + normalize words and convert to tsquery to_tsquery('english', 'The & Fat & Rats') 'fat' & 'rat' @@ -9577,21 +9577,21 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple to_tsvector - to_tsvector( config regconfig , document text) + to_tsvector( config regconfig , document text) tsvector - reduce document text to tsvector + reduce document text to tsvector to_tsvector('english', 'The Fat Rats') 'fat':2 'rat':3 - to_tsvector( config regconfig , document json(b)) + to_tsvector( config regconfig , document json(b)) tsvector - reduce each string value in the document to a tsvector, and then - concatenate those in document order to produce a single tsvector + reduce each string value in the document to a tsvector, and then + concatenate those in document order to produce a single tsvector to_tsvector('english', '{"a": "The Fat Rats"}'::json) 'fat':2 'rat':3 @@ -9601,7 +9601,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_delete - ts_delete(vector tsvector, lexeme text) + ts_delete(vector tsvector, lexeme text) tsvector remove given lexeme from vector @@ -9611,7 +9611,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - ts_delete(vector tsvector, lexemes text[]) + ts_delete(vector tsvector, lexemes text[]) tsvector remove any occurrence of lexemes in lexemes from vector @@ -9623,7 +9623,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_filter - ts_filter(vector tsvector, weights "char"[]) + ts_filter(vector tsvector, weights "char"[]) tsvector select only elements with given weights from vector @@ -9635,7 +9635,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_headline - ts_headline( config regconfig, document text, query tsquery , options text ) + ts_headline( config regconfig, document text, query tsquery , options text ) text display a query match @@ -9644,7 +9644,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - ts_headline( config regconfig, document json(b), query tsquery , options text ) + ts_headline( config regconfig, document json(b), query tsquery , options text ) text display a query match @@ -9656,7 +9656,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rank - ts_rank( weights 
float4[], vector tsvector, query tsquery , normalization integer ) + ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) float4 rank document for query @@ -9668,7 +9668,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rank_cd - ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) + ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) float4 rank document for query using cover density @@ -9680,18 +9680,18 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_rewrite - ts_rewrite(query tsquery, target tsquery, substitute tsquery) + ts_rewrite(query tsquery, target tsquery, substitute tsquery) tsquery - replace target with substitute + replace target with substitute within query ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo|bar'::tsquery) 'b' & ( 'foo' | 'bar' ) - ts_rewrite(query tsquery, select text) + ts_rewrite(query tsquery, select text) tsquery - replace using targets and substitutes from a SELECT command + replace using targets and substitutes from a SELECT command SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases') 'b' & ( 'foo' | 'bar' ) @@ -9700,22 +9700,22 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsquery_phrase - tsquery_phrase(query1 tsquery, query2 tsquery) + tsquery_phrase(query1 tsquery, query2 tsquery) tsquery - make query that searches for query1 followed - by query2 (same as <-> + make query that searches for query1 followed + by query2 (same as <-> operator) tsquery_phrase(to_tsquery('fat'), to_tsquery('cat')) 'fat' <-> 'cat' - tsquery_phrase(query1 tsquery, query2 tsquery, distance integer) + tsquery_phrase(query1 tsquery, query2 tsquery, distance integer) tsquery - make query that searches for query1 followed by - query2 at distance distance + make query that searches for query1 followed by + query2 at distance distance tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10) 'fat' <10> 'cat' @@ -9724,10 +9724,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsvector_to_array - tsvector_to_array(tsvector) + tsvector_to_array(tsvector) text[] - convert tsvector to array of lexemes + convert tsvector to array of lexemes tsvector_to_array('fat:2,4 cat:3 rat:5A'::tsvector) {cat,fat,rat} @@ -9739,7 +9739,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsvector_update_trigger() trigger - trigger function for automatic tsvector column update + trigger function for automatic tsvector column update CREATE TRIGGER ... tsvector_update_trigger(tsvcol, 'pg_catalog.swedish', title, body) @@ -9751,7 +9751,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple tsvector_update_trigger_column() trigger - trigger function for automatic tsvector column update + trigger function for automatic tsvector column update CREATE TRIGGER ... 
tsvector_update_trigger_column(tsvcol, configcol, title, body) @@ -9761,7 +9761,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple unnest for tsvector - unnest(tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text) + unnest(tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text) setof record expand a tsvector to a set of rows @@ -9774,7 +9774,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - All the text search functions that accept an optional regconfig + All the text search functions that accept an optional regconfig argument will use the configuration specified by when that argument is omitted. @@ -9807,7 +9807,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_debug - ts_debug( config regconfig, document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[]) + ts_debug( config regconfig, document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[]) setof record test a configuration @@ -9819,7 +9819,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_lexize - ts_lexize(dict regdictionary, token text) + ts_lexize(dict regdictionary, token text) text[] test a dictionary @@ -9831,7 +9831,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_parse - ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) + ts_parse(parser_name text, document text, OUT tokid integer, OUT token text) setof record test a parser @@ -9839,7 +9839,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple (1,foo) ... - ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) + ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text) setof record test a parser ts_parse(3722, 'foo - bar') @@ -9850,7 +9850,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_token_type - ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) + ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text) setof record get token types defined by parser @@ -9858,7 +9858,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple (1,asciiword,"Word, all ASCII") ... - ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) + ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text) setof record get token types defined by parser ts_token_type(3722) @@ -9869,10 +9869,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple ts_stat - ts_stat(sqlquery text, weights text, OUT word text, OUT ndoc integer, OUT nentry integer) + ts_stat(sqlquery text, weights text, OUT word text, OUT ndoc integer, OUT nentry integer) setof record - get statistics of a tsvector column + get statistics of a tsvector column ts_stat('SELECT vector from apod') (foo,10,15) ... @@ -9894,7 +9894,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple and xmlserialize for converting to and from type xml are not repeated here. Use of most of these functions requires the installation to have been built - with configure --with-libxml. + with configure --with-libxml. 
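As a minimal illustration of producing xml content with one of the functions covered in this section (assuming, as noted above, a build with libxml support):

SELECT xmlconcat('<abc/>', '<bar>foo</bar>');
Result: <abc/><bar>foo</bar>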
@@ -10246,7 +10246,7 @@ SELECT xmlagg(x) FROM test; - To determine the order of the concatenation, an ORDER BY + To determine the order of the concatenation, an ORDER BY clause may be added to the aggregate call as described in . For example: @@ -10365,18 +10365,18 @@ SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF 'Tor - These functions check whether a text string is well-formed XML, + These functions check whether a text string is well-formed XML, returning a Boolean result. xml_is_well_formed_document checks for a well-formed document, while xml_is_well_formed_content checks for well-formed content. xml_is_well_formed does the former if the configuration - parameter is set to DOCUMENT, or the latter if it is set to - CONTENT. This means that + parameter is set to DOCUMENT, or the latter if it is set to + CONTENT. This means that xml_is_well_formed is useful for seeing whether - a simple cast to type xml will succeed, whereas the other two + a simple cast to type xml will succeed, whereas the other two functions are useful for seeing whether the corresponding variants of - XMLPARSE will succeed. + XMLPARSE will succeed. @@ -10446,7 +10446,7 @@ SELECT xml_is_well_formed_document(' The optional third argument of the function is an array of namespace - mappings. This array should be a two-dimensional text array with + mappings. This array should be a two-dimensional text array with the length of the second axis being equal to 2 (i.e., it should be an array of arrays, each of which consists of exactly 2 elements). The first element of each array entry is the namespace name (alias), the second the namespace URI. It is not required that aliases provided in this array be the same as those being used in the XML document itself (in other words, both in the XML document and in the xpath - function context, aliases are local). + function context, aliases are local). @@ -10514,7 +10514,7 @@ SELECT xpath('//mydefns:b/text()', 'testxpath
function. Instead of returning the individual XML values that satisfy the XPath, this function returns a Boolean indicating whether the query was satisfied or not. This - function is equivalent to the standard XMLEXISTS predicate, + function is equivalent to the standard XMLEXISTS predicate, except that it also offers support for a namespace mapping argument. @@ -10560,21 +10560,21 @@ SELECT xpath_exists('/my:a/text()', 'test - The optional XMLNAMESPACES clause is a comma-separated + The optional XMLNAMESPACES clause is a comma-separated list of namespaces. It specifies the XML namespaces used in the document and their aliases. A default namespace specification is not currently supported. - The required row_expression argument is an XPath + The required row_expression argument is an XPath expression that is evaluated against the supplied XML document to obtain an ordered sequence of XML nodes. This sequence is what - xmltable transforms into output rows. + xmltable transforms into output rows. - document_expression provides the XML document to + document_expression provides the XML document to operate on. The BY REF clauses have no effect in PostgreSQL, but are allowed for SQL conformance and compatibility with other @@ -10586,9 +10586,9 @@ SELECT xpath_exists('/my:a/text()', 'test The mandatory COLUMNS clause specifies the list of columns in the output table. - If the COLUMNS clause is omitted, the rows in the result - set contain a single column of type xml containing the - data matched by row_expression. + If the COLUMNS clause is omitted, the rows in the result + set contain a single column of type xml containing the + data matched by row_expression. If COLUMNS is specified, each entry describes a single column. See the syntax summary above for the format. @@ -10604,10 +10604,10 @@ SELECT xpath_exists('/my:a/text()', 'test - The column_expression for a column is an XPath expression + The column_expression for a column is an XPath expression that is evaluated for each row, relative to the result of the - row_expression, to find the value of the column. - If no column_expression is given, then the column name + row_expression, to find the value of the column. + If no column_expression is given, then the column name is used as an implicit path. @@ -10615,55 +10615,55 @@ SELECT xpath_exists('/my:a/text()', 'testNULL). - Any xsi:nil attributes are ignored. + empty string (not NULL). + Any xsi:nil attributes are ignored. - The text body of the XML matched by the column_expression + The text body of the XML matched by the column_expression is used as the column value. Multiple text() nodes within an element are concatenated in order. Any child elements, processing instructions, and comments are ignored, but the text contents of child elements are concatenated to the result. - Note that the whitespace-only text() node between two non-text - elements is preserved, and that leading whitespace on a text() + Note that the whitespace-only text() node between two non-text + elements is preserved, and that leading whitespace on a text() node is not flattened. If the path expression does not match for a given row but - default_expression is specified, the value resulting + default_expression is specified, the value resulting from evaluating that expression is used. - If no DEFAULT clause is given for the column, - the field will be set to NULL. - It is possible for a default_expression to reference + If no DEFAULT clause is given for the column, + the field will be set to NULL. 
+ It is possible for a default_expression to reference the value of output columns that appear prior to it in the column list, so the default of one column may be based on the value of another column. - Columns may be marked NOT NULL. If the - column_expression for a NOT NULL column - does not match anything and there is no DEFAULT or the - default_expression also evaluates to null, an error + Columns may be marked NOT NULL. If the + column_expression for a NOT NULL column + does not match anything and there is no DEFAULT or the + default_expression also evaluates to null, an error is reported. - Unlike regular PostgreSQL functions, column_expression - and default_expression are not evaluated to a simple + Unlike regular PostgreSQL functions, column_expression + and default_expression are not evaluated to a simple value before calling the function. - column_expression is normally evaluated - exactly once per input row, and default_expression + column_expression is normally evaluated + exactly once per input row, and default_expression is evaluated each time a default is needed for a field. If the expression qualifies as stable or immutable the repeat evaluation may be skipped. - Effectively xmltable behaves more like a subquery than a + Effectively xmltable behaves more like a subquery than a function call. This means that you can usefully use volatile functions like - nextval in default_expression, and - column_expression may depend on other parts of the + nextval in default_expression, and + column_expression may depend on other parts of the XML document. @@ -11029,7 +11029,7 @@ table2-mapping - <type>json</> and <type>jsonb</> Operators + <type>json</type> and <type>jsonb</type> Operators @@ -11059,14 +11059,14 @@ table2-mapping ->> int - Get JSON array element as text + Get JSON array element as text '[1,2,3]'::json->>2 3 ->> text - Get JSON object field as text + Get JSON object field as text '{"a":1,"b":2}'::json->>'b' 2 @@ -11080,7 +11080,7 @@ table2-mapping #>> text[] - Get JSON object at specified path as text + Get JSON object at specified path as text '{"a":[1,2,3],"b":[4,5,6]}'::json#>>'{a,2}' 3 @@ -11095,7 +11095,7 @@ table2-mapping The field/element/path extraction operators return the same type as their left-hand input (either json or jsonb), except for those specified as - returning text, which coerce the value to text. + returning text, which coerce the value to text. The field/element/path extraction operators return NULL, rather than failing, if the JSON input does not have the right structure to match the request; for example if no such element exists. The @@ -11115,14 +11115,14 @@ table2-mapping Some further operators also exist only for jsonb, as shown in . Many of these operators can be indexed by - jsonb operator classes. For a full description of - jsonb containment and existence semantics, see jsonb operator classes. For a full description of + jsonb containment and existence semantics, see . describes how these operators can be used to effectively index - jsonb data. + jsonb data.
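A small sketch combining the path operator from the table above with the jsonb containment operator from the table that follows:

SELECT '{"a": {"b": ["x", "y"]}}'::jsonb #> '{a,b,1}';
Result: "y"

SELECT '{"a": 1, "b": 2}'::jsonb @> '{"b": 2}'::jsonb;
Result: t

The #> operator walks the path a, b, element 1 and returns the result as jsonb; @> tests containment as described above.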
- Additional <type>jsonb</> Operators + Additional <type>jsonb</type> Operators @@ -11211,7 +11211,7 @@ table2-mapping - The || operator concatenates the elements at the top level of + The || operator concatenates the elements at the top level of each of its operands. It does not operate recursively. For example, if both operands are objects with a common key field name, the value of the field in the result will just be the value from the right hand operand. @@ -11221,8 +11221,8 @@ table2-mapping shows the functions that are available for creating json and jsonb values. - (There are no equivalent functions for jsonb, of the row_to_json - and array_to_json functions. However, the to_jsonb + (There are no equivalent functions for jsonb, of the row_to_json + and array_to_json functions. However, the to_jsonb function supplies much the same functionality as these functions would.) @@ -11274,14 +11274,14 @@ table2-mapping to_jsonb(anyelement) - Returns the value as json or jsonb. + Returns the value as json or jsonb. Arrays and composites are converted (recursively) to arrays and objects; otherwise, if there is a cast from the type to json, the cast function will be used to perform the conversion; otherwise, a scalar value is produced. For any scalar type other than a number, a Boolean, or a null value, the text representation will be used, in such a fashion that it is a - valid json or jsonb value. + valid json or jsonb value. to_json('Fred said "Hi."'::text) "Fred said \"Hi.\"" @@ -11343,8 +11343,8 @@ table2-mapping such that each inner array has exactly two elements, which are taken as a key/value pair. - json_object('{a, 1, b, "def", c, 3.5}') - json_object('{{a, 1},{b, "def"},{c, 3.5}}') + json_object('{a, 1, b, "def", c, 3.5}') + json_object('{{a, 1},{b, "def"},{c, 3.5}}') {"a": "1", "b": "def", "c": "3.5"} @@ -11352,7 +11352,7 @@ table2-mapping jsonb_object(keys text[], values text[]) - This form of json_object takes keys and values pairwise from two separate + This form of json_object takes keys and values pairwise from two separate arrays. In all other respects it is identical to the one-argument form. json_object('{a, b}', '{1,2}') @@ -11364,9 +11364,9 @@ table2-mapping - array_to_json and row_to_json have the same - behavior as to_json except for offering a pretty-printing - option. The behavior described for to_json likewise applies + array_to_json and row_to_json have the same + behavior as to_json except for offering a pretty-printing + option. The behavior described for to_json likewise applies to each individual value converted by the other JSON creation functions. @@ -11530,7 +11530,7 @@ table2-mapping setof key text, value text Expands the outermost JSON object into a set of key/value pairs. The - returned values will be of type text. + returned values will be of type text. select * from json_each_text('{"a":"foo", "b":"bar"}') @@ -11562,7 +11562,7 @@ table2-mapping text Returns JSON value pointed to by path_elems - as text + as text (equivalent to #>> operator). json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6') @@ -11593,7 +11593,7 @@ table2-mapping anyelement Expands the object in from_json to a row - whose columns match the record type defined by base + whose columns match the record type defined by base (see note below). 
select * from json_populate_record(null::myrowtype, '{"a": 1, "b": ["2", "a b"], "c": {"d": 4, "e": "a b c"}}') @@ -11613,7 +11613,7 @@ table2-mapping Expands the outermost array of objects in from_json to a set of rows whose - columns match the record type defined by base (see + columns match the record type defined by base (see note below). select * from json_populate_recordset(null::myrowtype, '[{"a":1,"b":2},{"a":3,"b":4}]') @@ -11653,7 +11653,7 @@ table2-mapping setof text - Expands a JSON array to a set of text values. + Expands a JSON array to a set of text values. select * from json_array_elements_text('["foo", "bar"]') @@ -11673,8 +11673,8 @@ table2-mapping Returns the type of the outermost JSON value as a text string. Possible types are - object, array, string, number, - boolean, and null. + object, array, string, number, + boolean, and null. json_typeof('-123.4') number @@ -11686,8 +11686,8 @@ table2-mapping record Builds an arbitrary record from a JSON object (see note below). As - with all functions returning record, the caller must - explicitly define the structure of the record with an AS + with all functions returning record, the caller must + explicitly define the structure of the record with an AS clause. select * from json_to_record('{"a":1,"b":[1,2,3],"c":[1,2,3],"e":"bar","r": {"a": 123, "b": "a b c"}}') as x(a int, b text, c int[], d text, r myrowtype) @@ -11706,9 +11706,9 @@ table2-mapping setof record Builds an arbitrary set of records from a JSON array of objects (see - note below). As with all functions returning record, the + note below). As with all functions returning record, the caller must explicitly define the structure of the record with - an AS clause. + an AS clause. select * from json_to_recordset('[{"a":1,"b":"foo"},{"a":"2","c":"bar"}]') as x(a int, b text); @@ -11743,7 +11743,7 @@ table2-mapping replaced by new_value, or with new_value added if create_missing is true ( default is - true) and the item + true) and the item designated by path does not exist. As with the path orientated operators, negative integers that appear in path count from the end @@ -11770,7 +11770,7 @@ table2-mapping path is in a JSONB array, new_value will be inserted before target or after if insert_after is true (default is - false). If target section + false). If target section designated by path is in JSONB object, new_value will be inserted only if target does not exist. As with the path @@ -11820,17 +11820,17 @@ table2-mapping Many of these functions and operators will convert Unicode escapes in JSON strings to the appropriate single character. This is a non-issue - if the input is type jsonb, because the conversion was already - done; but for json input, this may result in throwing an error, + if the input is type jsonb, because the conversion was already + done; but for json input, this may result in throwing an error, as noted in . - In json_populate_record, json_populate_recordset, - json_to_record and json_to_recordset, - type coercion from the JSON is best effort and may not result + In json_populate_record, json_populate_recordset, + json_to_record and json_to_recordset, + type coercion from the JSON is best effort and may not result in desired values for some types. JSON keys are matched to identical column names in the target row type. 
JSON fields that do not appear in the target row type will be omitted from the output, and @@ -11840,18 +11840,18 @@ table2-mapping - All the items of the path parameter of jsonb_set - as well as jsonb_insert except the last item must be present - in the target. If create_missing is false, all - items of the path parameter of jsonb_set must be - present. If these conditions are not met the target is + All the items of the path parameter of jsonb_set + as well as jsonb_insert except the last item must be present + in the target. If create_missing is false, all + items of the path parameter of jsonb_set must be + present. If these conditions are not met the target is returned unchanged. If the last path item is an object key, it will be created if it is absent and given the new value. If the last path item is an array index, if it is positive the item to set is found by counting from - the left, and if negative by counting from the right - -1 + the left, and if negative by counting from the right - -1 designates the rightmost element, and so on. If the item is out of the range -array_length .. array_length -1, and create_missing is true, the new value is added at the beginning @@ -11862,20 +11862,20 @@ table2-mapping - The json_typeof function's null return value + The json_typeof function's null return value should not be confused with a SQL NULL. While - calling json_typeof('null'::json) will - return null, calling json_typeof(NULL::json) + calling json_typeof('null'::json) will + return null, calling json_typeof(NULL::json) will return a SQL NULL. - If the argument to json_strip_nulls contains duplicate + If the argument to json_strip_nulls contains duplicate field names in any object, the result could be semantically somewhat different, depending on the order in which they occur. This is not an - issue for jsonb_strip_nulls since jsonb values never have + issue for jsonb_strip_nulls since jsonb values never have duplicate object field names. @@ -11886,7 +11886,7 @@ table2-mapping values as JSON, and the aggregate function json_object_agg which aggregates pairs of values into a JSON object, and their jsonb equivalents, - jsonb_agg and jsonb_object_agg. + jsonb_agg and jsonb_object_agg. @@ -11963,52 +11963,52 @@ table2-mapping The sequence to be operated on by a sequence function is specified by - a regclass argument, which is simply the OID of the sequence in the - pg_class system catalog. You do not have to look up the - OID by hand, however, since the regclass data type's input + a regclass argument, which is simply the OID of the sequence in the + pg_class system catalog. You do not have to look up the + OID by hand, however, since the regclass data type's input converter will do the work for you. Just write the sequence name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string will be converted to lower case unless it contains double quotes around the sequence name. 
Thus: -nextval('foo') operates on sequence foo -nextval('FOO') operates on sequence foo -nextval('"Foo"') operates on sequence Foo +nextval('foo') operates on sequence foo +nextval('FOO') operates on sequence foo +nextval('"Foo"') operates on sequence Foo The sequence name can be schema-qualified if necessary: -nextval('myschema.foo') operates on myschema.foo +nextval('myschema.foo') operates on myschema.foo nextval('"myschema".foo') same as above -nextval('foo') searches search path for foo +nextval('foo') searches search path for foo See for more information about - regclass. + regclass. Before PostgreSQL 8.1, the arguments of the - sequence functions were of type text, not regclass, and + sequence functions were of type text, not regclass, and the above-described conversion from a text string to an OID value would happen at run time during each call. For backward compatibility, this facility still exists, but internally it is now handled as an implicit - coercion from text to regclass before the function is + coercion from text to regclass before the function is invoked. When you write the argument of a sequence function as an unadorned - literal string, it becomes a constant of type regclass. + literal string, it becomes a constant of type regclass. Since this is really just an OID, it will track the originally identified sequence despite later renaming, schema reassignment, - etc. This early binding behavior is usually desirable for + etc. This early binding behavior is usually desirable for sequence references in column defaults and views. But sometimes you might - want late binding where the sequence reference is resolved + want late binding where the sequence reference is resolved at run time. To get late-binding behavior, force the constant to be - stored as a text constant instead of regclass: + stored as a text constant instead of regclass: -nextval('foo'::text) foo is looked up at runtime +nextval('foo'::text) foo is looked up at runtime Note that late binding was the only behavior supported in PostgreSQL releases before 8.1, so you @@ -12051,14 +12051,14 @@ nextval('foo'::text) foo is looked up at rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends - up not using the value. For example an INSERT with - an ON CONFLICT clause will compute the to-be-inserted + up not using the value. For example an INSERT with + an ON CONFLICT clause will compute the to-be-inserted tuple, including doing any required nextval calls, before detecting any conflict that would cause it to follow - the ON CONFLICT rule instead. Such cases will leave + the ON CONFLICT rule instead. Such cases will leave unused holes in the sequence of assigned values. - Thus, PostgreSQL sequence objects cannot - be used to obtain gapless sequences. + Thus, PostgreSQL sequence objects cannot + be used to obtain gapless sequences. @@ -12094,7 +12094,7 @@ nextval('foo'::text) foo is looked up at Return the value most recently returned by - nextval in the current session. This function is + nextval in the current session. 
This function is identical to currval, except that instead of taking the sequence name as an argument it refers to whichever sequence nextval was most recently applied to @@ -12119,20 +12119,20 @@ nextval('foo'::text) foo is looked up at specified value and sets its is_called field to true, meaning that the next nextval will advance the sequence before - returning a value. The value reported by currval is + returning a value. The value reported by currval is also set to the specified value. In the three-parameter form, is_called can be set to either true - or false. true has the same effect as + or false. true has the same effect as the two-parameter form. If it is set to false, the next nextval will return exactly the specified value, and sequence advancement commences with the following nextval. Furthermore, the value reported by - currval is not changed in this case. For example, + currval is not changed in this case. For example, -SELECT setval('foo', 42); Next nextval will return 43 +SELECT setval('foo', 42); Next nextval will return 43 SELECT setval('foo', 42, true); Same as above -SELECT setval('foo', 42, false); Next nextval will return 42 +SELECT setval('foo', 42, false); Next nextval will return 42 The result returned by setval is just the value of its @@ -12183,7 +12183,7 @@ SELECT setval('foo', 42, false); Next nextval wi - <literal>CASE</> + <literal>CASE</literal> The SQL CASE expression is a @@ -12206,7 +12206,7 @@ END condition's result is not true, any subsequent WHEN clauses are examined in the same manner. If no WHEN condition yields true, the value of the - CASE expression is the result of the + CASE expression is the result of the ELSE clause. If the ELSE clause is omitted and no condition is true, the result is null. @@ -12245,7 +12245,7 @@ SELECT a, - There is a simple form of CASE expression + There is a simple form of CASE expression that is a variant of the general form above: @@ -12299,7 +12299,7 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; situations in which subexpressions of an expression are evaluated at different times, so that the principle that CASE evaluates only necessary subexpressions is not ironclad. For - example a constant 1/0 subexpression will usually result in + example a constant 1/0 subexpression will usually result in a division-by-zero failure at planning time, even if it's within a CASE arm that would never be entered at run time. @@ -12307,7 +12307,7 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; - <literal>COALESCE</> + <literal>COALESCE</literal> COALESCE @@ -12333,8 +12333,8 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; SELECT COALESCE(description, short_description, '(none)') ... - This returns description if it is not null, otherwise - short_description if it is not null, otherwise (none). + This returns description if it is not null, otherwise + short_description if it is not null, otherwise (none). @@ -12342,13 +12342,13 @@ SELECT COALESCE(description, short_description, '(none)') ... evaluates the arguments that are needed to determine the result; that is, arguments to the right of the first non-null argument are not evaluated. This SQL-standard function provides capabilities similar - to NVL and IFNULL, which are used in some other + to NVL and IFNULL, which are used in some other database systems. - <literal>NULLIF</> + <literal>NULLIF</literal> NULLIF @@ -12369,7 +12369,7 @@ SELECT NULLIF(value, '(none)') ... 
- In this example, if value is (none), + In this example, if value is (none), null is returned, otherwise the value of value is returned. @@ -12394,7 +12394,7 @@ SELECT NULLIF(value, '(none)') ... - The GREATEST and LEAST functions select the + The GREATEST and LEAST functions select the largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which will be the type of the result @@ -12404,7 +12404,7 @@ SELECT NULLIF(value, '(none)') ... - Note that GREATEST and LEAST are not in + Note that GREATEST and LEAST are not in the SQL standard, but are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all are NULL. @@ -12534,7 +12534,7 @@ SELECT NULLIF(value, '(none)') ... If the contents of two arrays are equal but the dimensionality is different, the first difference in the dimensionality information determines the sort order. (This is a change from versions of - PostgreSQL prior to 8.2: older versions would claim + PostgreSQL prior to 8.2: older versions would claim that two arrays with the same contents were equal, even if the number of dimensions or subscript ranges were different.) @@ -12833,7 +12833,7 @@ NULL baz(3 rows)
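As a brief aside on the array comparison rules described earlier in this section, both results below are plain booleans (the dimensionality behavior assumes version 8.2 or later):

SELECT ARRAY[1,2,3] < ARRAY[1,2,4];   -- true: elements are compared pairwise, left to right
SELECT ARRAY[1,2] = ARRAY[[1,2]];     -- false: equal contents but different dimensionality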
- In array_position and array_positions, + In array_position and array_positions, each array element is compared to the searched value using IS NOT DISTINCT FROM semantics. @@ -12868,8 +12868,8 @@ NULL baz(3 rows) - There are two differences in the behavior of string_to_array - from pre-9.1 versions of PostgreSQL. + There are two differences in the behavior of string_to_array + from pre-9.1 versions of PostgreSQL. First, it will return an empty (zero-element) array rather than NULL when the input string is of zero length. Second, if the delimiter string is NULL, the function splits the input into individual characters, rather @@ -13198,7 +13198,7 @@ NULL baz(3 rows) - The lower and upper functions return null + The lower and upper functions return null if the range is empty or the requested bound is infinite. The lower_inc, upper_inc, lower_inf, and upper_inf @@ -13550,7 +13550,7 @@ NULL baz(3 rows) smallint, int, bigint, real, double precision, numeric, - interval, or money + interval, or money bigint for smallint or @@ -13647,7 +13647,7 @@ SELECT count(*) FROM sometable; aggregate functions, produce meaningfully different result values depending on the order of the input values. This ordering is unspecified by default, but can be controlled by writing an - ORDER BY clause within the aggregate call, as shown in + ORDER BY clause within the aggregate call, as shown in . Alternatively, supplying the input values from a sorted subquery will usually work. For example: @@ -14082,9 +14082,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; shows some - aggregate functions that use the ordered-set aggregate + aggregate functions that use the ordered-set aggregate syntax. These functions are sometimes referred to as inverse - distribution functions. + distribution functions. @@ -14249,7 +14249,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; window function of the same name defined in . In each case, the aggregate result is the value that the associated window function would have - returned for the hypothetical row constructed from + returned for the hypothetical row constructed from args, if such a row had been added to the sorted group of rows computed from the sorted_args. @@ -14280,10 +14280,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; rank(args) WITHIN GROUP (ORDER BY sorted_args) - VARIADIC "any" + VARIADIC "any" - VARIADIC "any" + VARIADIC "any" bigint @@ -14303,10 +14303,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; dense_rank(args) WITHIN GROUP (ORDER BY sorted_args) - VARIADIC "any" + VARIADIC "any" - VARIADIC "any" + VARIADIC "any" bigint @@ -14326,10 +14326,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; percent_rank(args) WITHIN GROUP (ORDER BY sorted_args) - VARIADIC "any" + VARIADIC "any" - VARIADIC "any" + VARIADIC "any" double precision @@ -14349,10 +14349,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; cume_dist(args) WITHIN GROUP (ORDER BY sorted_args) - VARIADIC "any" + VARIADIC "any" - VARIADIC "any" + VARIADIC "any" double precision @@ -14360,7 +14360,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; No relative rank of the hypothetical row, ranging from - 1/N to 1 + 1/N to 1 @@ -14374,7 +14374,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; the aggregated arguments given in sorted_args. 
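For instance, the direct-argument rule plays out as follows against a hypothetical table exams(score numeric); the single direct argument matches the single ORDER BY column in number and type:

SELECT rank(75.0) WITHIN GROUP (ORDER BY score) FROM exams;
-- the rank that 75.0 would receive if it were added to the sorted set of scores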
Unlike most built-in aggregates, these aggregates are not strict, that is they do not drop input rows containing nulls. Null values sort according - to the rule specified in the ORDER BY clause. + to the rule specified in the ORDER BY clause. @@ -14413,14 +14413,14 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; Grouping operations are used in conjunction with grouping sets (see ) to distinguish result rows. The - arguments to the GROUPING operation are not actually evaluated, - but they must match exactly expressions given in the GROUP BY + arguments to the GROUPING operation are not actually evaluated, + but they must match exactly expressions given in the GROUP BY clause of the associated query level. Bits are assigned with the rightmost argument being the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the result row, and 1 if it is not. For example: -=> SELECT * FROM items_sold; +=> SELECT * FROM items_sold; make | model | sales -------+-------+------- Foo | GT | 10 @@ -14429,7 +14429,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; Bar | Sport | 5 (4 rows) -=> SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model); +=> SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model); make | model | grouping | sum -------+-------+----------+----- Foo | GT | 0 | 10 @@ -14464,8 +14464,8 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; The built-in window functions are listed in . Note that these functions - must be invoked using window function syntax, i.e., an - OVER clause is required. + must be invoked using window function syntax, i.e., an + OVER clause is required. @@ -14474,7 +14474,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; aggregate (i.e., not ordered-set or hypothetical-set aggregates) can be used as a window function; see for a list of the built-in aggregates. - Aggregate functions act as window functions only when an OVER + Aggregate functions act as window functions only when an OVER clause follows the call; otherwise they act as non-window aggregates and return a single row for the entire set. 
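A minimal sketch of this distinction, assuming a hypothetical table sales(region text, amount numeric):

SELECT region, amount,
       sum(amount) OVER (PARTITION BY region) AS region_total  -- window: one output row per input row
FROM sales;

SELECT region, sum(amount)                                     -- plain aggregate: one row per group
FROM sales
GROUP BY region;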
@@ -14515,7 +14515,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; bigint - rank of the current row with gaps; same as row_number of its first peer + rank of the current row with gaps; same as row_number of its first peer @@ -14541,7 +14541,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; double precision - relative rank of the current row: (rank - 1) / (total partition rows - 1) + relative rank of the current row: (rank - 1) / (total partition rows - 1) @@ -14562,7 +14562,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; ntile - ntile(num_buckets integer) + ntile(num_buckets integer) integer @@ -14577,9 +14577,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; lag - lag(value anyelement - [, offset integer - [, default anyelement ]]) + lag(value anyelement + [, offset integer + [, default anyelement ]]) @@ -14606,9 +14606,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; lead - lead(value anyelement - [, offset integer - [, default anyelement ]]) + lead(value anyelement + [, offset integer + [, default anyelement ]]) @@ -14634,7 +14634,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; first_value - first_value(value any) + first_value(value any) same type as value @@ -14650,7 +14650,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; last_value - last_value(value any) + last_value(value any) same type as value @@ -14667,7 +14667,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; nth_value - nth_value(value any, nth integer) + nth_value(value any, nth integer) @@ -14686,22 +14686,22 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; All of the functions listed in depend on the sort ordering - specified by the ORDER BY clause of the associated window + specified by the ORDER BY clause of the associated window definition. Rows that are not distinct when considering only the - ORDER BY columns are said to be peers. - The four ranking functions (including cume_dist) are + ORDER BY columns are said to be peers. + The four ranking functions (including cume_dist) are defined so that they give the same answer for all peer rows. - Note that first_value, last_value, and - nth_value consider only the rows within the window - frame, which by default contains the rows from the start of the + Note that first_value, last_value, and + nth_value consider only the rows within the window + frame, which by default contains the rows from the start of the partition through the last peer of the current row. This is - likely to give unhelpful results for last_value and - sometimes also nth_value. You can redefine the frame by - adding a suitable frame specification (RANGE or - ROWS) to the OVER clause. + likely to give unhelpful results for last_value and + sometimes also nth_value. You can redefine the frame by + adding a suitable frame specification (RANGE or + ROWS) to the OVER clause. See for more information about frame specifications. @@ -14709,34 +14709,34 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; When an aggregate function is used as a window function, it aggregates over the rows within the current row's window frame. 
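The frame caveat for last_value can be made concrete with a sketch over a hypothetical table t(x int) holding distinct values:

SELECT x,
       last_value(x) OVER (ORDER BY x) AS dflt,   -- default frame stops at the current row's peers, so this is just x
       last_value(x) OVER (ORDER BY x
                           ROWS BETWEEN UNBOUNDED PRECEDING
                                    AND UNBOUNDED FOLLOWING) AS whole  -- widened frame: the partition's last value
FROM t;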
- An aggregate used with ORDER BY and the default window frame - definition produces a running sum type of behavior, which may or + An aggregate used with ORDER BY and the default window frame + definition produces a running sum type of behavior, which may or may not be what's wanted. To obtain - aggregation over the whole partition, omit ORDER BY or use - ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. + aggregation over the whole partition, omit ORDER BY or use + ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. Other frame specifications can be used to obtain other effects. - The SQL standard defines a RESPECT NULLS or - IGNORE NULLS option for lead, lag, - first_value, last_value, and - nth_value. This is not implemented in + The SQL standard defines a RESPECT NULLS or + IGNORE NULLS option for lead, lag, + first_value, last_value, and + nth_value. This is not implemented in PostgreSQL: the behavior is always the - same as the standard's default, namely RESPECT NULLS. - Likewise, the standard's FROM FIRST or FROM LAST - option for nth_value is not implemented: only the - default FROM FIRST behavior is supported. (You can achieve - the result of FROM LAST by reversing the ORDER BY + same as the standard's default, namely RESPECT NULLS. + Likewise, the standard's FROM FIRST or FROM LAST + option for nth_value is not implemented: only the + default FROM FIRST behavior is supported. (You can achieve + the result of FROM LAST by reversing the ORDER BY ordering.) - cume_dist computes the fraction of partition rows that + cume_dist computes the fraction of partition rows that are less than or equal to the current row and its peers, while - percent_rank computes the fraction of partition rows that + percent_rank computes the fraction of partition rows that are less than the current row, assuming the current row does not exist in the partition. @@ -14789,12 +14789,12 @@ EXISTS (subquery) - The argument of EXISTS is an arbitrary SELECT statement, + The argument of EXISTS is an arbitrary SELECT statement, or subquery. The subquery is evaluated to determine whether it returns any rows. If it returns at least one row, the result of EXISTS is - true; if the subquery returns no rows, the result of EXISTS - is false. + true; if the subquery returns no rows, the result of EXISTS + is false. @@ -14814,15 +14814,15 @@ EXISTS (subquery) Since the result depends only on whether any rows are returned, and not on the contents of those rows, the output list of the subquery is normally unimportant. A common coding convention is - to write all EXISTS tests in the form + to write all EXISTS tests in the form EXISTS(SELECT 1 WHERE ...). There are exceptions to this rule however, such as subqueries that use INTERSECT. - This simple example is like an inner join on col2, but - it produces at most one output row for each tab1 row, - even if there are several matching tab2 rows: + This simple example is like an inner join on col2, but + it produces at most one output row for each tab1 row, + even if there are several matching tab2 rows: SELECT col1 FROM tab1 @@ -14842,8 +14842,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. - The result of IN is true if any equal subquery row is found. - The result is false if no equal row is found (including the + The result of IN is true if any equal subquery row is found. 
+ The result is false if no equal row is found (including the case where the subquery returns no rows). @@ -14871,8 +14871,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. - The result of IN is true if any equal subquery row is found. - The result is false if no equal row is found (including the + The result of IN is true if any equal subquery row is found. + The result is false if no equal row is found (including the case where the subquery returns no rows). @@ -14898,9 +14898,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. - The result of NOT IN is true if only unequal subquery rows + The result of NOT IN is true if only unequal subquery rows are found (including the case where the subquery returns no rows). - The result is false if any equal row is found. + The result is false if any equal row is found. @@ -14927,9 +14927,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. - The result of NOT IN is true if only unequal subquery rows + The result of NOT IN is true if only unequal subquery rows are found (including the case where the subquery returns no rows). - The result is false if any equal row is found. + The result is false if any equal row is found. @@ -14957,8 +14957,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. - The result of ANY is true if any true result is obtained. - The result is false if no true result is found (including the + The result of ANY is true if any true result is obtained. + The result is false if no true result is found (including the case where the subquery returns no rows). @@ -14981,8 +14981,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); -row_constructor operator ANY (subquery) -row_constructor operator SOME (subquery) +row_constructor operator ANY (subquery) +row_constructor operator SOME (subquery) @@ -14993,9 +14993,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator. - The result of ANY is true if the comparison + The result of ANY is true if the comparison returns true for any subquery row. - The result is false if the comparison returns false for every + The result is false if the comparison returns false for every subquery row (including the case where the subquery returns no rows). The result is NULL if the comparison does not return true for any row, @@ -15021,9 +15021,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. - The result of ALL is true if all rows yield true + The result of ALL is true if all rows yield true (including the case where the subquery returns no rows). 
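A short sketch of both forms against hypothetical tables products and rival_products, each with a price column; note that ALL is vacuously true when the subquery returns no rows:

SELECT name FROM products
  WHERE price < ALL (SELECT price FROM rival_products);  -- cheaper than every rival (true if there are none)

SELECT name FROM products
  WHERE price < ANY (SELECT price FROM rival_products);  -- cheaper than at least one rival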
- The result is false if any false result is found. + The result is false if any false result is found. The result is NULL if the comparison does not return false for any row, and it returns NULL for at least one row. @@ -15049,10 +15049,10 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator. - The result of ALL is true if the comparison + The result of ALL is true if the comparison returns true for all subquery rows (including the case where the subquery returns no rows). - The result is false if the comparison returns false for any + The result is false if the comparison returns false for any subquery row. The result is NULL if the comparison does not return false for any subquery row, and it returns NULL for at least one row. @@ -15165,7 +15165,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The right-hand side is a parenthesized list - of scalar expressions. The result is true if the left-hand expression's + of scalar expressions. The result is true if the left-hand expression's result is equal to any of the right-hand expressions. This is a shorthand notation for @@ -15243,8 +15243,8 @@ AND is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. - The result of ANY is true if any true result is obtained. - The result is false if no true result is found (including the + The result of ANY is true if any true result is obtained. + The result is false if no true result is found (including the case where the array has zero elements). @@ -15279,9 +15279,9 @@ AND is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. - The result of ALL is true if all comparisons yield true + The result of ALL is true if all comparisons yield true (including the case where the array has zero elements). - The result is false if any false result is found. + The result is false if any false result is found. @@ -15310,12 +15310,12 @@ AND The two row values must have the same number of fields. Each side is evaluated and they are compared row-wise. Row constructor comparisons are allowed when the operator is - =, - <>, - <, - <=, - > or - >=. + =, + <>, + <, + <=, + > or + >=. Every row element must be of a type which has a default B-tree operator class or the attempted comparison may generate an error. @@ -15328,7 +15328,7 @@ AND - The = and <> cases work slightly differently + The = and <> cases work slightly differently from the others. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; @@ -15336,13 +15336,13 @@ AND - For the <, <=, > and - >= cases, the row elements are compared left-to-right, + For the <, <=, > and + >= cases, the row elements are compared left-to-right, stopping as soon as an unequal or null pair of elements is found. If either of this pair of elements is null, the result of the row comparison is unknown (null); otherwise comparison of this pair of elements determines the result. For example, - ROW(1,2,NULL) < ROW(1,3,0) + ROW(1,2,NULL) < ROW(1,3,0) yields true, not null, because the third pair of elements are not considered. @@ -15350,13 +15350,13 @@ AND Prior to PostgreSQL 8.2, the - <, <=, > and >= + <, <=, > and >= cases were not handled per SQL specification. 
A comparison like - ROW(a,b) < ROW(c,d) + ROW(a,b) < ROW(c,d) was implemented as - a < c AND b < d + a < c AND b < d whereas the correct behavior is equivalent to - a < c OR (a = c AND b < d). + a < c OR (a = c AND b < d). @@ -15409,15 +15409,15 @@ AND Each side is evaluated and they are compared row-wise. Composite type comparisons are allowed when the operator is - =, - <>, - <, - <=, - > or - >=, + =, + <>, + <, + <=, + > or + >=, or has semantics similar to one of these. (To be specific, an operator can be a row comparison operator if it is a member of a B-tree operator - class, or is the negator of the = member of a B-tree operator + class, or is the negator of the = member of a B-tree operator class.) The default behavior of the above operators is the same as for IS [ NOT ] DISTINCT FROM for row constructors (see ). @@ -15427,12 +15427,12 @@ AND To support matching of rows which include elements without a default B-tree operator class, the following operators are defined for composite type comparison: - *=, - *<>, - *<, - *<=, - *>, and - *>=. + *=, + *<>, + *<, + *<=, + *>, and + *>=. These operators compare the internal binary representation of the two rows. Two rows might have a different binary representation even though comparisons of the two rows with the equality operator is true. @@ -15501,7 +15501,7 @@ AND - generate_series(start, stop, step interval) + generate_series(start, stop, step interval) timestamp or timestamp with time zone setof timestamp or setof timestamp with time zone (same as argument type) @@ -15616,7 +15616,7 @@ SELECT * FROM generate_series('2008-03-01 00:00'::timestamp, - generate_subscripts is a convenience function that generates + generate_subscripts is a convenience function that generates the set of valid subscripts for the specified dimension of the given array. Zero rows are returned for arrays that do not have the requested dimension, @@ -15681,7 +15681,7 @@ SELECT * FROM unnest2(ARRAY[[1,2],[3,4]]); by WITH ORDINALITY, a bigint column is appended to the output which starts from 1 and increments by 1 for each row of the function's output. This is most useful in the case of set returning - functions such as unnest(). + functions such as unnest(). -- set returning function WITH ORDINALITY @@ -15825,7 +15825,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); - pg_current_logfile(text) + pg_current_logfile(text) text Primary log file name, or log in the requested format, currently in use by the logging collector @@ -15870,7 +15870,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); pg_trigger_depth() int - current nesting level of PostgreSQL triggers + current nesting level of PostgreSQL triggers (0 if not called, directly or indirectly, from inside a trigger) @@ -15889,7 +15889,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); version() text - PostgreSQL version information. See also for a machine-readable version. + PostgreSQL version information. See also for a machine-readable version. @@ -15979,7 +15979,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); current_role and user are synonyms for current_user. (The SQL standard draws a distinction between current_role - and current_user, but PostgreSQL + and current_user, but PostgreSQL does not, since it unifies users and roles into a single kind of entity.) @@ -15990,7 +15990,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); other named objects that are created without specifying a target schema. 
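These session-information functions can be probed directly; the results naturally vary per session:

SELECT current_user, session_user;  -- usually identical; SET ROLE and SECURITY DEFINER functions change current_user
SELECT current_schema;              -- first existing schema in the search path
SELECT version();                   -- human-readable server version string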
current_schemas(boolean) returns an array of the names of all schemas presently in the search path. The Boolean option determines whether or not - implicitly included system schemas such as pg_catalog are included in the + implicitly included system schemas such as pg_catalog are included in the returned search path. @@ -15998,7 +15998,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); The search path can be altered at run time. The command is: -SET search_path TO schema , schema, ... +SET search_path TO schema , schema, ... @@ -16043,7 +16043,7 @@ SET search_path TO schema , schema, .. waiting for a lock that would conflict with the blocked process's lock request and is ahead of it in the wait queue (soft block). When using parallel queries the result always lists client-visible process IDs (that - is, pg_backend_pid results) even if the actual lock is held + is, pg_backend_pid results) even if the actual lock is held or awaited by a child worker process. As a result of that, there may be duplicated PIDs in the result. Also note that when a prepared transaction holds a conflicting lock, it will be represented by a zero process ID in @@ -16095,15 +16095,15 @@ SET search_path TO schema , schema, .. is NULL. When multiple log files exist, each in a different format, pg_current_logfile called without arguments returns the path of the file having the first format - found in the ordered list: stderr, csvlog. + found in the ordered list: stderr, csvlog. NULL is returned when no log file has any of these formats. To request a specific file format supply, as text, - either csvlog or stderr as the value of the + either csvlog or stderr as the value of the optional parameter. The return value is NULL when the log format requested is not a configured . The pg_current_logfiles reflects the contents of the - current_logfiles file. + current_logfiles file. @@ -16460,7 +16460,7 @@ SET search_path TO schema , schema, .. has_table_privilege checks whether a user can access a table in a particular way. The user can be specified by name, by OID (pg_authid.oid), - public to indicate the PUBLIC pseudo-role, or if the argument is + public to indicate the PUBLIC pseudo-role, or if the argument is omitted current_user is assumed. The table can be specified by name or by OID. (Thus, there are actually six variants of @@ -16470,12 +16470,12 @@ SET search_path TO schema , schema, .. The desired access privilege type is specified by a text string, which must evaluate to one of the values SELECT, INSERT, - UPDATE, DELETE, TRUNCATE, + UPDATE, DELETE, TRUNCATE, REFERENCES, or TRIGGER. Optionally, - WITH GRANT OPTION can be added to a privilege type to test + WITH GRANT OPTION can be added to a privilege type to test whether the privilege is held with grant option. Also, multiple privilege types can be listed separated by commas, in which case the result will - be true if any of the listed privileges is held. + be true if any of the listed privileges is held. (Case of the privilege string is not significant, and extra whitespace is allowed between but not within privilege names.) Some examples: @@ -16499,7 +16499,7 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') has_any_column_privilege checks whether a user can access any column of a table in a particular way. 
Its argument possibilities - are analogous to has_table_privilege, + are analogous to has_table_privilege, except that the desired access privilege type must evaluate to some combination of SELECT, @@ -16508,8 +16508,8 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') REFERENCES. Note that having any of these privileges at the table level implicitly grants it for each column of the table, so has_any_column_privilege will always return - true if has_table_privilege does for the same - arguments. But has_any_column_privilege also succeeds if + true if has_table_privilege does for the same + arguments. But has_any_column_privilege also succeeds if there is a column-level grant of the privilege for at least one column. @@ -16547,7 +16547,7 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') Its argument possibilities are analogous to has_table_privilege. When specifying a function by a text string rather than by OID, - the allowed input is the same as for the regprocedure data type + the allowed input is the same as for the regprocedure data type (see ). The desired access privilege type must evaluate to EXECUTE. @@ -16609,7 +16609,7 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); Its argument possibilities are analogous to has_table_privilege. When specifying a type by a text string rather than by OID, - the allowed input is the same as for the regtype data type + the allowed input is the same as for the regtype data type (see ). The desired access privilege type must evaluate to USAGE. @@ -16620,14 +16620,14 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); can access a role in a particular way. Its argument possibilities are analogous to has_table_privilege, - except that public is not allowed as a user name. + except that public is not allowed as a user name. The desired access privilege type must evaluate to some combination of MEMBER or USAGE. MEMBER denotes direct or indirect membership in - the role (that is, the right to do SET ROLE), while + the role (that is, the right to do SET ROLE), while USAGE denotes whether the privileges of the role - are immediately available without doing SET ROLE. + are immediately available without doing SET ROLE. @@ -16639,7 +16639,7 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); shows functions that - determine whether a certain object is visible in the + determine whether a certain object is visible in the current schema search path. For example, a table is said to be visible if its containing schema is in the search path and no table of the same @@ -16793,16 +16793,16 @@ SELECT relname FROM pg_class WHERE pg_table_is_visible(oid); pg_type_is_visible can also be used with domains. For functions and operators, an object in the search path is visible if there is no object of the same name - and argument data type(s) earlier in the path. For operator + and argument data type(s) earlier in the path. For operator classes, both name and associated index access method are considered. All these functions require object OIDs to identify the object to be checked. 
If you want to test an object by name, it is convenient to use - the OID alias types (regclass, regtype, - regprocedure, regoperator, regconfig, - or regdictionary), + the OID alias types (regclass, regtype, + regprocedure, regoperator, regconfig, + or regdictionary), for example: SELECT pg_type_is_visible('myschema.widget'::regtype); @@ -16949,7 +16949,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); - format_type(type_oid, typemod) + format_type(type_oid, typemod) text get SQL name of a data type @@ -16959,18 +16959,18 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); get definition of a constraint - pg_get_constraintdef(constraint_oid, pretty_bool) + pg_get_constraintdef(constraint_oid, pretty_bool) text get definition of a constraint - pg_get_expr(pg_node_tree, relation_oid) + pg_get_expr(pg_node_tree, relation_oid) text decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter - pg_get_expr(pg_node_tree, relation_oid, pretty_bool) + pg_get_expr(pg_node_tree, relation_oid, pretty_bool) text decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter @@ -16993,19 +16993,19 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_function_result(func_oid) text - get RETURNS clause for function + get RETURNS clause for function pg_get_indexdef(index_oid) text - get CREATE INDEX command for index + get CREATE INDEX command for index - pg_get_indexdef(index_oid, column_no, pretty_bool) + pg_get_indexdef(index_oid, column_no, pretty_bool) text - get CREATE INDEX command for index, + get CREATE INDEX command for index, or definition of just one index column when - column_no is not zero + column_no is not zero pg_get_keywords() @@ -17015,12 +17015,12 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_ruledef(rule_oid) text - get CREATE RULE command for rule + get CREATE RULE command for rule - pg_get_ruledef(rule_oid, pretty_bool) + pg_get_ruledef(rule_oid, pretty_bool) text - get CREATE RULE command for rule + get CREATE RULE command for rule pg_get_serial_sequence(table_name, column_name) @@ -17030,17 +17030,17 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_statisticsobjdef(statobj_oid) text - get CREATE STATISTICS command for extended statistics object + get CREATE STATISTICS command for extended statistics object pg_get_triggerdef(trigger_oid) text - get CREATE [ CONSTRAINT ] TRIGGER command for trigger + get CREATE [ CONSTRAINT ] TRIGGER command for trigger - pg_get_triggerdef(trigger_oid, pretty_bool) + pg_get_triggerdef(trigger_oid, pretty_bool) text - get CREATE [ CONSTRAINT ] TRIGGER command for trigger + get CREATE [ CONSTRAINT ] TRIGGER command for trigger pg_get_userbyid(role_oid) @@ -17053,7 +17053,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); get underlying SELECT command for view or materialized view (deprecated) - pg_get_viewdef(view_name, pretty_bool) + pg_get_viewdef(view_name, pretty_bool) text get underlying SELECT command for view or materialized view (deprecated) @@ -17063,29 +17063,29 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); get underlying SELECT command for view or materialized view - pg_get_viewdef(view_oid, pretty_bool) + pg_get_viewdef(view_oid, pretty_bool) text get underlying SELECT command for view or materialized view - pg_get_viewdef(view_oid, wrap_column_int) + pg_get_viewdef(view_oid, wrap_column_int) text get underlying SELECT command 
for view or materialized view; lines with fields are wrapped to specified number of columns, pretty-printing is implied - pg_index_column_has_property(index_oid, column_no, prop_name) + pg_index_column_has_property(index_oid, column_no, prop_name) boolean test whether an index column has a specified property - pg_index_has_property(index_oid, prop_name) + pg_index_has_property(index_oid, prop_name) boolean test whether an index has a specified property - pg_indexam_has_property(am_oid, prop_name) + pg_indexam_has_property(am_oid, prop_name) boolean test whether an index access method has a specified property @@ -17166,11 +17166,11 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_keywords returns a set of records describing - the SQL keywords recognized by the server. The word column - contains the keyword. The catcode column contains a - category code: U for unreserved, C for column name, - T for type or function name, or R for reserved. - The catdesc column contains a possibly-localized string + the SQL keywords recognized by the server. The word column + contains the keyword. The catcode column contains a + category code: U for unreserved, C for column name, + T for type or function name, or R for reserved. + The catdesc column contains a possibly-localized string describing the category. @@ -17187,26 +17187,26 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); catalogs. If the expression might contain Vars, specify the OID of the relation they refer to as the second parameter; if no Vars are expected, zero is sufficient. pg_get_viewdef reconstructs the - SELECT query that defines a view. Most of these functions come - in two variants, one of which can optionally pretty-print the + SELECT query that defines a view. Most of these functions come + in two variants, one of which can optionally pretty-print the result. The pretty-printed format is more readable, but the default format is more likely to be interpreted the same way by future versions of - PostgreSQL; avoid using pretty-printed output for dump - purposes. Passing false for the pretty-print parameter yields + PostgreSQL; avoid using pretty-printed output for dump + purposes. Passing false for the pretty-print parameter yields the same result as the variant that does not have the parameter at all. - pg_get_functiondef returns a complete - CREATE OR REPLACE FUNCTION statement for a function. + pg_get_functiondef returns a complete + CREATE OR REPLACE FUNCTION statement for a function. pg_get_function_arguments returns the argument list of a function, in the form it would need to appear in within - CREATE FUNCTION. + CREATE FUNCTION. pg_get_function_result similarly returns the - appropriate RETURNS clause for the function. + appropriate RETURNS clause for the function. pg_get_function_identity_arguments returns the argument list necessary to identify a function, in the form it - would need to appear in within ALTER FUNCTION, for + would need to appear in within ALTER FUNCTION, for instance. This form omits default values. @@ -17219,10 +17219,10 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); (serial, smallserial, bigserial), it is the sequence created for that serial column definition. In the latter case, this association can be modified or removed with ALTER - SEQUENCE OWNED BY. (The function probably should have been called + SEQUENCE OWNED BY. 
(The function probably should have been called pg_get_owned_sequence; its current name reflects the - fact that it has typically been used with serial - or bigserial columns.) The first input parameter is a table name + fact that it has typically been used with serial + or bigserial columns.) The first input parameter is a table name with optional schema, and the second parameter is a column name. Because the first parameter is potentially a schema and table, it is not treated as a double-quoted identifier, meaning it is lower cased by default, while the @@ -17290,8 +17290,8 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); distance_orderable - Can the column be scanned in order by a distance - operator, for example ORDER BY col <-> constant ? + Can the column be scanned in order by a distance + operator, for example ORDER BY col <-> constant ? @@ -17301,14 +17301,14 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); search_array - Does the column natively support col = ANY(array) + Does the column natively support col = ANY(array) searches? search_nulls - Does the column support IS NULL and - IS NOT NULL searches? + Does the column support IS NULL and + IS NOT NULL searches? @@ -17324,7 +17324,7 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); clusterable - Can the index be used in a CLUSTER command? + Can the index be used in a CLUSTER command? @@ -17355,9 +17355,9 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); can_order - Does the access method support ASC, - DESC and related keywords in - CREATE INDEX? + Does the access method support ASC, + DESC and related keywords in + CREATE INDEX? @@ -17382,9 +17382,9 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); pg_options_to_table returns the set of storage option name/value pairs - (option_name/option_value) when passed - pg_class.reloptions or - pg_attribute.attoptions. + (option_name/option_value) when passed + pg_class.reloptions or + pg_attribute.attoptions. @@ -17394,14 +17394,14 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); empty and cannot be dropped. To display the specific objects populating the tablespace, you will need to connect to the databases identified by pg_tablespace_databases and query their - pg_class catalogs. + pg_class catalogs. pg_typeof returns the OID of the data type of the value that is passed to it. This can be helpful for troubleshooting or dynamically constructing SQL queries. The function is declared as - returning regtype, which is an OID alias type (see + returning regtype, which is an OID alias type (see ); this means that it is the same as an OID for comparison purposes but displays as a type name. For example: @@ -17447,10 +17447,10 @@ SELECT collation for ('foo' COLLATE "de_DE"); to_regoperator, to_regtype, to_regnamespace, and to_regrole functions translate relation, function, operator, type, schema, and role - names (given as text) to objects of - type regclass, regproc, regprocedure, - regoper, regoperator, regtype, - regnamespace, and regrole + names (given as text) to objects of + type regclass, regproc, regprocedure, + regoper, regoperator, regtype, + regnamespace, and regrole respectively. 
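For example (a sketch; as described just below, a name that does not resolve yields NULL rather than an error):

SELECT to_regclass('pg_class');      -- regclass result: pg_class
SELECT to_regtype('no_such_type');   -- NULL, where 'no_such_type'::regtype would raise an error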
These functions differ from a cast from text in that they don't accept a numeric OID, and that they return null rather than throwing an error if the name is not found (or, for @@ -17493,18 +17493,18 @@ SELECT collation for ('foo' COLLATE "de_DE"); get description of a database object - pg_identify_object(catalog_id oid, object_id oid, object_sub_id integer) - type text, schema text, name text, identity text + pg_identify_object(catalog_id oid, object_id oid, object_sub_id integer) + type text, schema text, name text, identity text get identity of a database object - pg_identify_object_as_address(catalog_id oid, object_id oid, object_sub_id integer) - type text, name text[], args text[] + pg_identify_object_as_address(catalog_id oid, object_id oid, object_sub_id integer) + type text, name text[], args text[] get external representation of a database object's address - pg_get_object_address(type text, name text[], args text[]) - catalog_id oid, object_id oid, object_sub_id int32 + pg_get_object_address(type text, name text[], args text[]) + catalog_id oid, object_id oid, object_sub_id int32 get address of a database object, from its external representation @@ -17525,13 +17525,13 @@ SELECT collation for ('foo' COLLATE "de_DE"); to uniquely identify the database object specified by catalog OID, object OID and a (possibly zero) sub-object ID. This information is intended to be machine-readable, and is never translated. - type identifies the type of database object; - schema is the schema name that the object belongs in, or - NULL for object types that do not belong to schemas; - name is the name of the object, quoted if necessary, only + type identifies the type of database object; + schema is the schema name that the object belongs in, or + NULL for object types that do not belong to schemas; + name is the name of the object, quoted if necessary, only present if it can be used (alongside schema name, if pertinent) as a unique - identifier of the object, otherwise NULL; - identity is the complete object identity, with the precise format + identifier of the object, otherwise NULL; + identity is the complete object identity, with the precise format depending on object type, and each part within the format being schema-qualified and quoted as necessary. @@ -17542,10 +17542,10 @@ SELECT collation for ('foo' COLLATE "de_DE"); catalog OID, object OID and a (possibly zero) sub-object ID. The returned information is independent of the current server, that is, it could be used to identify an identically named object in another server. - type identifies the type of database object; - name and args are text arrays that together + type identifies the type of database object; + name and args are text arrays that together form a reference to the object. These three columns can be passed to - pg_get_object_address to obtain the internal address + pg_get_object_address to obtain the internal address of the object. This function is the inverse of pg_get_object_address. @@ -17554,13 +17554,13 @@ SELECT collation for ('foo' COLLATE "de_DE"); pg_get_object_address returns a row containing enough information to uniquely identify the database object specified by its type and object name and argument arrays. The returned values are the - ones that would be used in system catalogs such as pg_depend + ones that would be used in system catalogs such as pg_depend and can be passed to other system functions such as - pg_identify_object or pg_describe_object. 
- catalog_id is the OID of the system catalog containing the + pg_identify_object or pg_describe_object. + catalog_id is the OID of the system catalog containing the object; - object_id is the OID of the object itself, and - object_sub_id is the object sub-ID, or zero if none. + object_id is the OID of the object itself, and + object_sub_id is the object sub-ID, or zero if none. This function is the inverse of pg_identify_object_as_address. @@ -17739,9 +17739,9 @@ SELECT collation for ('foo' COLLATE "de_DE");
- The internal transaction ID type (xid) is 32 bits wide and + The internal transaction ID type (xid) is 32 bits wide and wraps around every 4 billion transactions. However, these functions - export a 64-bit format that is extended with an epoch counter + export a 64-bit format that is extended with an epoch counter so it will not wrap around during the life of an installation. The data type used by these functions, txid_snapshot, stores information about transaction ID @@ -17782,9 +17782,9 @@ SELECT collation for ('foo' COLLATE "de_DE"); xip_list Active txids at the time of the snapshot. The list - includes only those active txids between xmin - and xmax; there might be active txids higher - than xmax. A txid that is xmin <= txid < + includes only those active txids between xmin + and xmax; there might be active txids higher + than xmax. A txid that is xmin <= txid < xmax and not in this list was already completed at the time of the snapshot, and thus either visible or dead according to its commit status. The list does not @@ -17797,27 +17797,27 @@ SELECT collation for ('foo' COLLATE "de_DE"); - txid_snapshot's textual representation is - xmin:xmax:xip_list. + txid_snapshot's textual representation is + xmin:xmax:xip_list. For example 10:20:10,14,15 means xmin=10, xmax=20, xip_list=10, 14, 15. - txid_status(bigint) reports the commit status of a recent + txid_status(bigint) reports the commit status of a recent transaction. Applications may use it to determine whether a transaction committed or aborted when the application and database server become disconnected while a COMMIT is in progress. The status of a transaction will be reported as either - in progress, - committed, or aborted, provided that the + in progress, + committed, or aborted, provided that the transaction is recent enough that the system retains the commit status of that transaction. If is old enough that no references to that transaction survive in the system and the commit status information has been discarded, this function will return NULL. Note that prepared - transactions are reported as in progress; applications must + transactions are reported as in progress; applications must check pg_prepared_xacts if they + linkend="view-pg-prepared-xacts">pg_prepared_xacts if they need to determine whether the txid is a prepared transaction. @@ -17852,7 +17852,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); pg_last_committed_xact pg_last_committed_xact() - xid xid, timestamp timestamp with time zone + xid xid, timestamp timestamp with time zone get transaction ID and commit timestamp of latest committed transaction @@ -17861,7 +17861,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); The functions shown in - print information initialized during initdb, such + print information initialized during initdb, such as the catalog version. They also show information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. They provide most of the same @@ -17927,12 +17927,12 @@ SELECT collation for ('foo' COLLATE "de_DE"); - pg_control_checkpoint returns a record, shown in + pg_control_checkpoint returns a record, shown in - <function>pg_control_checkpoint</> Columns + <function>pg_control_checkpoint</function> Columns @@ -18043,12 +18043,12 @@ SELECT collation for ('foo' COLLATE "de_DE");
- pg_control_system returns a record, shown in + pg_control_system returns a record, shown in - <function>pg_control_system</> Columns + <function>pg_control_system</function> Columns @@ -18084,12 +18084,12 @@ SELECT collation for ('foo' COLLATE "de_DE");
- pg_control_init returns a record, shown in + pg_control_init returns a record, shown in - <function>pg_control_init</> Columns + <function>pg_control_init</function> Columns @@ -18165,12 +18165,12 @@ SELECT collation for ('foo' COLLATE "de_DE");
- pg_control_recovery returns a record, shown in + pg_control_recovery returns a record, shown in - <function>pg_control_recovery</> Columns + <function>pg_control_recovery</function> Columns @@ -18217,7 +18217,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); The functions described in this section are used to control and - monitor a PostgreSQL installation. + monitor a PostgreSQL installation. @@ -18357,7 +18357,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_cancel_backend(pid int) + pg_cancel_backend(pid int) boolean Cancel a backend's current query. This is also allowed if the @@ -18382,7 +18382,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_terminate_backend(pid int) + pg_terminate_backend(pid int) boolean Terminate a backend. This is also allowed if the calling role @@ -18401,28 +18401,28 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_cancel_backend and pg_terminate_backend - send signals (SIGINT or SIGTERM + pg_cancel_backend and pg_terminate_backend + send signals (SIGINT or SIGTERM respectively) to backend processes identified by process ID. The process ID of an active backend can be found from the pid column of the pg_stat_activity view, or by listing the postgres processes on the server (using - ps on Unix or the Task - Manager on Windows). + ps on Unix or the Task + Manager on Windows). The role of an active backend can be found from the usename column of the pg_stat_activity view. - pg_reload_conf sends a SIGHUP signal + pg_reload_conf sends a SIGHUP signal to the server, causing configuration files to be reloaded by all server processes. - pg_rotate_logfile signals the log-file manager to switch + pg_rotate_logfile signals the log-file manager to switch to a new output file immediately. This works only when the built-in log collector is running, since otherwise there is no log-file manager subprocess. 
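 As a sketch of how these functions are typically combined with
 pg_stat_activity (the five-minute cutoff is an arbitrary illustration,
 not a recommendation):

SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND query_start < now() - interval '5 minutes'
  AND pid <> pg_backend_pid();  -- don't cancel our own session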
@@ -18492,7 +18492,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_create_restore_point(name text) + pg_create_restore_point(name text) pg_lsn Create a named point for performing restore (restricted to superusers by default, but other users can be granted EXECUTE to run the function) @@ -18520,7 +18520,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_start_backup(label text , fast boolean , exclusive boolean ) + pg_start_backup(label text , fast boolean , exclusive boolean ) pg_lsn Prepare for performing on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the function) @@ -18534,7 +18534,7 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_stop_backup(exclusive boolean , wait_for_archive boolean ) + pg_stop_backup(exclusive boolean , wait_for_archive boolean ) setof record Finish performing exclusive or non-exclusive on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the function) @@ -18562,23 +18562,23 @@ SELECT set_config('log_statement_stats', 'off', false); - pg_walfile_name(lsn pg_lsn) + pg_walfile_name(lsn pg_lsn) text Convert write-ahead log location to file name - pg_walfile_name_offset(lsn pg_lsn) + pg_walfile_name_offset(lsn pg_lsn) - text, integer + text, integer Convert write-ahead log location to file name and decimal byte offset within file - pg_wal_lsn_diff(lsn pg_lsn, lsn pg_lsn) + pg_wal_lsn_diff(lsn pg_lsn, lsn pg_lsn) - numeric + numeric Calculate the difference between two write-ahead log locations @@ -18586,17 +18586,17 @@ SELECT set_config('log_statement_stats', 'off', false);
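 For example, a restore point might be created immediately before a risky
 data load (the label is illustrative); the returned location can be
 recorded alongside the base backup that would be restored first:

SELECT pg_create_restore_point('before_bulk_load');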
- pg_start_backup accepts an arbitrary user-defined label for + pg_start_backup accepts an arbitrary user-defined label for the backup. (Typically this would be the name under which the backup dump file will be stored.) When used in exclusive mode, the function writes a - backup label file (backup_label) and, if there are any links - in the pg_tblspc/ directory, a tablespace map file - (tablespace_map) into the database cluster's data directory, + backup label file (backup_label) and, if there are any links + in the pg_tblspc/ directory, a tablespace map file + (tablespace_map) into the database cluster's data directory, performs a checkpoint, and then returns the backup's starting write-ahead log location as text. The user can ignore this result value, but it is provided in case it is useful. When used in non-exclusive mode, the contents of these files are instead returned by the - pg_stop_backup function, and should be written to the backup + pg_stop_backup function, and should be written to the backup by the caller. @@ -18606,29 +18606,29 @@ postgres=# select pg_start_backup('label_goes_here'); 0/D4445B8 (1 row) - There is an optional second parameter of type boolean. If true, - it specifies executing pg_start_backup as quickly as + There is an optional second parameter of type boolean. If true, + it specifies executing pg_start_backup as quickly as possible. This forces an immediate checkpoint which will cause a spike in I/O operations, slowing any concurrently executing queries. - In an exclusive backup, pg_stop_backup removes the label file - and, if it exists, the tablespace_map file created by - pg_start_backup. In a non-exclusive backup, the contents of - the backup_label and tablespace_map are returned + In an exclusive backup, pg_stop_backup removes the label file + and, if it exists, the tablespace_map file created by + pg_start_backup. In a non-exclusive backup, the contents of + the backup_label and tablespace_map are returned in the result of the function, and should be written to files in the backup (and not in the data directory). There is an optional second - parameter of type boolean. If false, the pg_stop_backup + parameter of type boolean. If false, the pg_stop_backup will return immediately after the backup is completed without waiting for WAL to be archived. This behavior is only useful for backup software which independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing and make the backup - useless. When this parameter is set to true, pg_stop_backup + useless. When this parameter is set to true, pg_stop_backup will wait for WAL to be archived when archiving is enabled; on the standby, - this means that it will wait only when archive_mode = always. + this means that it will wait only when archive_mode = always. If write activity on the primary is low, it may be useful to run - pg_switch_wal on the primary in order to trigger + pg_switch_wal on the primary in order to trigger an immediate segment switch. @@ -18636,7 +18636,7 @@ postgres=# select pg_start_backup('label_goes_here'); When executed on a primary, the function also creates a backup history file in the write-ahead log archive area. The history file includes the label given to - pg_start_backup, the starting and ending write-ahead log locations for + pg_start_backup, the starting and ending write-ahead log locations for the backup, and the starting and ending times of the backup. 
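 A minimal non-exclusive backup sketch (the label and the copy tooling are
 illustrative; the session issuing these calls must stay connected for the
 whole duration of the copy):

SELECT pg_start_backup('nightly_base_backup', false, false);
-- copy the data directory with an external tool while this session stays open
SELECT * FROM pg_stop_backup(false);
-- write the returned labelfile and spcmapfile contents into the backup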
The return value is the backup's ending write-ahead log location (which again can be ignored). After recording the ending location, the current @@ -18646,16 +18646,16 @@ postgres=# select pg_start_backup('label_goes_here');
- pg_switch_wal moves to the next write-ahead log file, allowing the + pg_switch_wal moves to the next write-ahead log file, allowing the current file to be archived (assuming you are using continuous archiving). The return value is the ending write-ahead log location + 1 within the just-completed write-ahead log file. If there has been no write-ahead log activity since the last write-ahead log switch, - pg_switch_wal does nothing and returns the start location + pg_switch_wal does nothing and returns the start location of the write-ahead log file currently in use. - pg_create_restore_point creates a named write-ahead log + pg_create_restore_point creates a named write-ahead log record that can be used as recovery target, and returns the corresponding write-ahead log location. The given name can then be used with to specify the point up to which @@ -18665,11 +18665,11 @@ postgres=# select pg_start_backup('label_goes_here'); - pg_current_wal_lsn displays the current write-ahead log write + pg_current_wal_lsn displays the current write-ahead log write location in the same format used by the above functions. Similarly, - pg_current_wal_insert_lsn displays the current write-ahead log - insertion location and pg_current_wal_flush_lsn displays the - current write-ahead log flush location. The insertion location is the logical + pg_current_wal_insert_lsn displays the current write-ahead log + insertion location and pg_current_wal_flush_lsn displays the + current write-ahead log flush location. The insertion location is the logical end of the write-ahead log at any instant, while the write location is the end of what has actually been written out from the server's internal buffers and flush location is the location guaranteed to be written to durable storage. The write @@ -18681,7 +18681,7 @@ postgres=# select pg_start_backup('label_goes_here'); - You can use pg_walfile_name_offset to extract the + You can use pg_walfile_name_offset to extract the corresponding write-ahead log file name and byte offset from the results of any of the above functions. For example: @@ -18691,7 +18691,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); 00000001000000000000000D | 4039624 (1 row) - Similarly, pg_walfile_name extracts just the write-ahead log file name. + Similarly, pg_walfile_name extracts just the write-ahead log file name. When the given write-ahead log location is exactly at a write-ahead log file boundary, both these functions return the name of the preceding write-ahead log file. This is usually the desired behavior for managing write-ahead log archiving @@ -18700,7 +18700,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_wal_lsn_diff calculates the difference in bytes + pg_wal_lsn_diff calculates the difference in bytes between two write-ahead log locations. It can be used with pg_stat_replication or some functions shown in to get the replication lag. @@ -18878,21 +18878,21 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - PostgreSQL allows database sessions to synchronize their - snapshots. A snapshot determines which data is visible to the + PostgreSQL allows database sessions to synchronize their + snapshots. A snapshot determines which data is visible to the transaction that is using the snapshot. Synchronized snapshots are necessary when two or more sessions need to see identical content in the database. 
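 For instance, when run on the primary, the following reports an approximate
 replay lag in bytes for each connected standby (a sketch; it assumes the
 pg_stat_replication view described elsewhere in this documentation):

SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;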
If two sessions just start their transactions independently, there is always a possibility that some third transaction commits - between the executions of the two START TRANSACTION commands, + between the executions of the two START TRANSACTION commands, so that one session sees the effects of that transaction and the other does not. - To solve this problem, PostgreSQL allows a transaction to - export the snapshot it is using. As long as the exporting - transaction remains open, other transactions can import its + To solve this problem, PostgreSQL allows a transaction to + export the snapshot it is using. As long as the exporting + transaction remains open, other transactions can import its snapshot, and thereby be guaranteed that they see exactly the same view of the database that the first transaction sees. But note that any database changes made by any one of these transactions remain invisible @@ -18902,7 +18902,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - Snapshots are exported with the pg_export_snapshot function, + Snapshots are exported with the pg_export_snapshot function, shown in , and imported with the command. @@ -18928,13 +18928,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - The function pg_export_snapshot saves the current snapshot - and returns a text string identifying the snapshot. This string + The function pg_export_snapshot saves the current snapshot + and returns a text string identifying the snapshot. This string must be passed (outside the database) to clients that want to import the snapshot. The snapshot is available for import only until the end of the transaction that exported it. A transaction can export more than one snapshot, if needed. Note that doing so is only useful in READ - COMMITTED transactions, since in REPEATABLE READ and + COMMITTED transactions, since in REPEATABLE READ and higher isolation levels, transactions use the same snapshot throughout their lifetime. Once a transaction has exported any snapshots, it cannot be prepared with . @@ -18989,7 +18989,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_create_physical_replication_slot - pg_create_physical_replication_slot(slot_name name , immediately_reserve boolean, temporary boolean) + pg_create_physical_replication_slot(slot_name name , immediately_reserve boolean, temporary boolean) (slot_name name, lsn pg_lsn) @@ -18997,13 +18997,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Creates a new physical replication slot named slot_name. The optional second parameter, - when true, specifies that the LSN for this + when true, specifies that the LSN for this replication slot be reserved immediately; otherwise - the LSN is reserved on first connection from a streaming + the LSN is reserved on first connection from a streaming replication client. Streaming changes from a physical slot is only possible with the streaming-replication protocol — see . The optional third - parameter, temporary, when set to true, specifies that + parameter, temporary, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by current session. Temporary slots are also released upon any error. This function corresponds @@ -19024,7 +19024,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Drops the physical or logical replication slot named slot_name. Same as replication protocol - command DROP_REPLICATION_SLOT. 
For logical slots, this must
+ be called when connected to the same database the slot was created on.
@@ -19034,7 +19034,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 pg_create_logical_replication_slot
- pg_create_logical_replication_slot(slot_name name, plugin name , temporary boolean)
+ pg_create_logical_replication_slot(slot_name name, plugin name , temporary boolean)
 (slot_name name, lsn pg_lsn)
 Creates a new logical (decoding) replication slot named
 slot_name using the output plugin
 plugin. The optional third
- parameter, temporary, when set to true, specifies that
+ parameter, temporary, when set to true, specifies that
 the slot should not be permanently stored to disk and is only meant
 for use by the current session. Temporary slots are also
 released upon any error. A call to this function has the same
@@ -19065,9 +19065,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 Returns changes in the slot slot_name, starting
 from the point at which changes were last consumed. If
- upto_lsn and upto_nchanges are NULL,
+ upto_lsn and upto_nchanges are NULL,
 logical decoding will continue until end of WAL. If
- upto_lsn is non-NULL, decoding will include only
+ upto_lsn is non-NULL, decoding will include only
 those transactions which commit prior to the specified LSN. If
 upto_nchanges is non-NULL, decoding will
 stop when the number of rows produced by decoding exceeds
@@ -19155,7 +19155,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 pg_replication_origin_drop(node_name text)
- void
+ void
 Delete a previously created replication origin, including any
@@ -19187,7 +19187,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 pg_replication_origin_session_setup(node_name text)
- void
+ void
 Mark the current session as replaying from the given
@@ -19205,7 +19205,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 pg_replication_origin_session_reset()
- void
+ void
 Cancel the effects
@@ -19254,7 +19254,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 pg_replication_origin_xact_setup(origin_lsn pg_lsn, origin_timestamp timestamptz)
- void
+ void
 Mark the current transaction as replaying a transaction that has
@@ -19273,7 +19273,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 pg_replication_origin_xact_reset()
- void
+ void
 Cancel the effects of
@@ -19289,7 +19289,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 pg_replication_origin_advance(node_name text, lsn pg_lsn)
- void
+ void
 Set replication progress for the given node to the given
@@ -19446,7 +19446,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 bigint
 Disk space used by the specified fork ('main',
- 'fsm', 'vm', or 'init')
+ 'fsm', 'vm', or 'init')
 of the specified table or index
@@ -19519,7 +19519,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 bigint
 Total disk space used by the specified table,
- including all indexes and TOAST data
+ including all indexes and TOAST data
@@ -19527,48 +19527,48 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
- pg_column_size shows the space used to store any individual
+ pg_column_size shows the space used to store any individual
 data value.
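 A quick illustration of these size functions (my_table is a placeholder
 for any existing table):

SELECT pg_column_size(42::int)             AS int_bytes,
       pg_column_size('hello'::text)       AS text_bytes,
       pg_total_relation_size('my_table')  AS table_total_bytes;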
- pg_total_relation_size accepts the OID or name of a + pg_total_relation_size accepts the OID or name of a table or toast table, and returns the total on-disk space used for that table, including all associated indexes. This function is equivalent to pg_table_size - + pg_indexes_size. + + pg_indexes_size. - pg_table_size accepts the OID or name of a table and + pg_table_size accepts the OID or name of a table and returns the disk space needed for that table, exclusive of indexes. (TOAST space, free space map, and visibility map are included.) - pg_indexes_size accepts the OID or name of a table and + pg_indexes_size accepts the OID or name of a table and returns the total disk space used by all the indexes attached to that table. - pg_database_size and pg_tablespace_size + pg_database_size and pg_tablespace_size accept the OID or name of a database or tablespace, and return the total disk space used therein. To use pg_database_size, - you must have CONNECT permission on the specified database - (which is granted by default), or be a member of the pg_read_all_stats - role. To use pg_tablespace_size, you must have - CREATE permission on the specified tablespace, or be a member - of the pg_read_all_stats role unless it is the default tablespace for + you must have CONNECT permission on the specified database + (which is granted by default), or be a member of the pg_read_all_stats + role. To use pg_tablespace_size, you must have + CREATE permission on the specified tablespace, or be a member + of the pg_read_all_stats role unless it is the default tablespace for the current database. - pg_relation_size accepts the OID or name of a table, index + pg_relation_size accepts the OID or name of a table, index or toast table, and returns the on-disk size in bytes of one fork of that relation. (Note that for most purposes it is more convenient to - use the higher-level functions pg_total_relation_size - or pg_table_size, which sum the sizes of all forks.) + use the higher-level functions pg_total_relation_size + or pg_table_size, which sum the sizes of all forks.) With one argument, it returns the size of the main data fork of the relation. The second argument can be provided to specify which fork to examine: @@ -19601,13 +19601,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_size_pretty can be used to format the result of one of + pg_size_pretty can be used to format the result of one of the other functions in a human-readable way, using bytes, kB, MB, GB or TB as appropriate. - pg_size_bytes can be used to get the size in bytes from a + pg_size_bytes can be used to get the size in bytes from a string in human-readable format. The input may have units of bytes, kB, MB, GB or TB, and is parsed case-insensitively. If no units are specified, bytes are assumed. @@ -19616,17 +19616,17 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The units kB, MB, GB and TB used by the functions - pg_size_pretty and pg_size_bytes are defined + pg_size_pretty and pg_size_bytes are defined using powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is - 10242 = 1048576 bytes, and so on. + 10242 = 1048576 bytes, and so on. The functions above that operate on tables or indexes accept a - regclass argument, which is simply the OID of the table or index - in the pg_class system catalog. 
You do not have to look up - the OID by hand, however, since the regclass data type's input + regclass argument, which is simply the OID of the table or index + in the pg_class system catalog. You do not have to look up + the OID by hand, however, since the regclass data type's input converter will do the work for you. Just write the table name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string @@ -19695,28 +19695,28 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_relation_filenode accepts the OID or name of a table, - index, sequence, or toast table, and returns the filenode number + pg_relation_filenode accepts the OID or name of a table, + index, sequence, or toast table, and returns the filenode number currently assigned to it. The filenode is the base component of the file name(s) used for the relation (see for more information). For most tables the result is the same as - pg_class.relfilenode, but for certain - system catalogs relfilenode is zero and this function must + pg_class.relfilenode, but for certain + system catalogs relfilenode is zero and this function must be used to get the correct value. The function returns NULL if passed a relation that does not have storage, such as a view. - pg_relation_filepath is similar to - pg_relation_filenode, but it returns the entire file path name - (relative to the database cluster's data directory PGDATA) of + pg_relation_filepath is similar to + pg_relation_filenode, but it returns the entire file path name + (relative to the database cluster's data directory PGDATA) of the relation. - pg_filenode_relation is the reverse of - pg_relation_filenode. Given a tablespace OID and - a filenode, it returns the associated relation's OID. For a table + pg_filenode_relation is the reverse of + pg_relation_filenode. Given a tablespace OID and + a filenode, it returns the associated relation's OID. For a table in the database's default tablespace, the tablespace can be specified as 0. @@ -19736,7 +19736,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_collation_actual_version - pg_collation_actual_version(oid) + pg_collation_actual_version(oid) text Return actual version of collation from operating system @@ -19744,7 +19744,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_import_system_collations - pg_import_system_collations(schema regnamespace) + pg_import_system_collations(schema regnamespace) integer Import operating system collations @@ -19763,7 +19763,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_import_system_collations adds collations to the system + pg_import_system_collations adds collations to the system catalog pg_collation based on all the locales it finds in the operating system. 
This is what initdb uses; @@ -19818,28 +19818,28 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - brin_summarize_new_values(index regclass) + brin_summarize_new_values(index regclass) integer summarize page ranges not already summarized - brin_summarize_range(index regclass, blockNumber bigint) + brin_summarize_range(index regclass, blockNumber bigint) integer summarize the page range covering the given block, if not already summarized - brin_desummarize_range(index regclass, blockNumber bigint) + brin_desummarize_range(index regclass, blockNumber bigint) integer de-summarize the page range covering the given block, if summarized - gin_clean_pending_list(index regclass) + gin_clean_pending_list(index regclass) bigint move GIN pending list entries into main index structure @@ -19849,25 +19849,25 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - brin_summarize_new_values accepts the OID or name of a + brin_summarize_new_values accepts the OID or name of a BRIN index and inspects the index to find page ranges in the base table that are not currently summarized by the index; for any such range it creates a new summary index tuple by scanning the table pages. It returns the number of new page range summaries that were inserted - into the index. brin_summarize_range does the same, except + into the index. brin_summarize_range does the same, except it only summarizes the range that covers the given block number. - gin_clean_pending_list accepts the OID or name of + gin_clean_pending_list accepts the OID or name of a GIN index and cleans up the pending list of the specified index by moving entries in it to the main GIN data structure in bulk. It returns the number of pages removed from the pending list. Note that if the argument is a GIN index built with - the fastupdate option disabled, no cleanup happens and the + the fastupdate option disabled, no cleanup happens and the return value is 0, because the index doesn't have a pending list. Please see and - for details of the pending list and fastupdate option. + for details of the pending list and fastupdate option. @@ -19879,9 +19879,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The functions shown in provide native access to files on the machine hosting the server. Only files within the - database cluster directory and the log_directory can be + database cluster directory and the log_directory can be accessed. Use a relative path for files in the cluster directory, - and a path matching the log_directory configuration setting + and a path matching the log_directory configuration setting for log files. Use of these functions is restricted to superusers except where stated otherwise. @@ -19897,7 +19897,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - pg_ls_dir(dirname text [, missing_ok boolean, include_dot_dirs boolean]) + pg_ls_dir(dirname text [, missing_ok boolean, include_dot_dirs boolean]) setof text @@ -19911,7 +19911,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); setof record List the name, size, and last modification time of files in the log - directory. Access is granted to members of the pg_monitor + directory. Access is granted to members of the pg_monitor role and may be granted to other non-superuser roles. @@ -19922,13 +19922,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); setof record List the name, size, and last modification time of files in the WAL - directory. 
Access is granted to members of the pg_monitor
+ directory. Access is granted to members of the pg_monitor
 role and may be granted to other non-superuser roles.
- pg_read_file(filename text [, offset bigint, length bigint [, missing_ok boolean] ])
+ pg_read_file(filename text [, offset bigint, length bigint [, missing_ok boolean] ])
 text
@@ -19937,7 +19937,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
- pg_read_binary_file(filename text [, offset bigint, length bigint [, missing_ok boolean] ])
+ pg_read_binary_file(filename text [, offset bigint, length bigint [, missing_ok boolean] ])
 bytea
@@ -19946,7 +19946,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
- pg_stat_file(filename text[, missing_ok boolean])
+ pg_stat_file(filename text[, missing_ok boolean])
 record
@@ -19958,23 +19958,23 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
- Some of these functions take an optional missing_ok parameter,
+ Some of these functions take an optional missing_ok parameter,
 which specifies the behavior when the file or directory does
 not exist. If true, the function
 returns NULL (except
- pg_ls_dir, which returns an empty result set). If
- false, an error is raised. The default is false.
+ pg_ls_dir, which returns an empty result set). If
+ false, an error is raised. The default is false.
 pg_ls_dir
- pg_ls_dir returns the names of all files (and directories
+ pg_ls_dir returns the names of all files (and directories
 and other special files) in the specified directory. The
- include_dot_dirs indicates whether . and .. are
+ include_dot_dirs indicates whether . and .. are
 included in the result set. The default is to exclude them
- (false), but including them can be useful when
- missing_ok is true, to distinguish an
+ (false), but including them can be useful when
+ missing_ok is true, to distinguish an
 empty directory from a non-existent directory.
 pg_ls_logdir
- pg_ls_logdir returns the name, size, and last modified time
+ pg_ls_logdir returns the name, size, and last modified time
 (mtime) of each file in the log directory. By default, only superusers
- and members of the pg_monitor role can use this function.
+ and members of the pg_monitor role can use this function.
 Access may be granted to others using GRANT.
 pg_ls_waldir
- pg_ls_waldir returns the name, size, and last modified time
+ pg_ls_waldir returns the name, size, and last modified time
 (mtime) of each file in the write-ahead log (WAL) directory. By
- default only superusers and members of the pg_monitor role
+ default only superusers and members of the pg_monitor role
 can use this function. Access may be granted to others using
 GRANT.
 pg_read_file
- pg_read_file returns part of a text file, starting
- at the given offset, returning at most length
- bytes (less if the end of file is reached first). If offset
+ pg_read_file returns part of a text file, starting
+ at the given offset, returning at most length
+ bytes (less if the end of file is reached first). If offset
 is negative, it is relative to the end of the file.
- If offset and length are omitted, the entire
+ If offset and length are omitted, the entire
 file is returned.
The bytes read from the file are interpreted as a string in the server encoding; an error is thrown if they are not valid in that encoding. @@ -20017,10 +20017,10 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_read_binary_file - pg_read_binary_file is similar to - pg_read_file, except that the result is a bytea value; + pg_read_binary_file is similar to + pg_read_file, except that the result is a bytea value; accordingly, no encoding checks are performed. - In combination with the convert_from function, this function + In combination with the convert_from function, this function can be used to read a file in a specified encoding: SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8'); @@ -20031,7 +20031,7 @@ SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8'); pg_stat_file - pg_stat_file returns a record containing the file + pg_stat_file returns a record containing the file size, last accessed time stamp, last modified time stamp, last file status change time stamp (Unix platforms only), file creation time stamp (Windows only), and a boolean @@ -20064,42 +20064,42 @@ SELECT (pg_stat_file('filename')).modification; - pg_advisory_lock(key bigint) + pg_advisory_lock(key bigint) void Obtain exclusive session level advisory lock - pg_advisory_lock(key1 int, key2 int) + pg_advisory_lock(key1 int, key2 int) void Obtain exclusive session level advisory lock - pg_advisory_lock_shared(key bigint) + pg_advisory_lock_shared(key bigint) void Obtain shared session level advisory lock - pg_advisory_lock_shared(key1 int, key2 int) + pg_advisory_lock_shared(key1 int, key2 int) void Obtain shared session level advisory lock - pg_advisory_unlock(key bigint) + pg_advisory_unlock(key bigint) boolean Release an exclusive session level advisory lock - pg_advisory_unlock(key1 int, key2 int) + pg_advisory_unlock(key1 int, key2 int) boolean Release an exclusive session level advisory lock @@ -20113,98 +20113,98 @@ SELECT (pg_stat_file('filename')).modification; - pg_advisory_unlock_shared(key bigint) + pg_advisory_unlock_shared(key bigint) boolean Release a shared session level advisory lock - pg_advisory_unlock_shared(key1 int, key2 int) + pg_advisory_unlock_shared(key1 int, key2 int) boolean Release a shared session level advisory lock - pg_advisory_xact_lock(key bigint) + pg_advisory_xact_lock(key bigint) void Obtain exclusive transaction level advisory lock - pg_advisory_xact_lock(key1 int, key2 int) + pg_advisory_xact_lock(key1 int, key2 int) void Obtain exclusive transaction level advisory lock - pg_advisory_xact_lock_shared(key bigint) + pg_advisory_xact_lock_shared(key bigint) void Obtain shared transaction level advisory lock - pg_advisory_xact_lock_shared(key1 int, key2 int) + pg_advisory_xact_lock_shared(key1 int, key2 int) void Obtain shared transaction level advisory lock - pg_try_advisory_lock(key bigint) + pg_try_advisory_lock(key bigint) boolean Obtain exclusive session level advisory lock if available - pg_try_advisory_lock(key1 int, key2 int) + pg_try_advisory_lock(key1 int, key2 int) boolean Obtain exclusive session level advisory lock if available - pg_try_advisory_lock_shared(key bigint) + pg_try_advisory_lock_shared(key bigint) boolean Obtain shared session level advisory lock if available - pg_try_advisory_lock_shared(key1 int, key2 int) + pg_try_advisory_lock_shared(key1 int, key2 int) boolean Obtain shared session level advisory lock if available - pg_try_advisory_xact_lock(key bigint) + pg_try_advisory_xact_lock(key bigint) boolean 
Obtain exclusive transaction level advisory lock if available - pg_try_advisory_xact_lock(key1 int, key2 int) + pg_try_advisory_xact_lock(key1 int, key2 int) boolean Obtain exclusive transaction level advisory lock if available - pg_try_advisory_xact_lock_shared(key bigint) + pg_try_advisory_xact_lock_shared(key bigint) boolean Obtain shared transaction level advisory lock if available - pg_try_advisory_xact_lock_shared(key1 int, key2 int) + pg_try_advisory_xact_lock_shared(key1 int, key2 int) boolean Obtain shared transaction level advisory lock if available @@ -20217,7 +20217,7 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_lock - pg_advisory_lock locks an application-defined resource, + pg_advisory_lock locks an application-defined resource, which can be identified either by a single 64-bit key value or two 32-bit key values (note that these two key spaces do not overlap). If another session already holds a lock on the same resource identifier, @@ -20231,8 +20231,8 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_lock_shared - pg_advisory_lock_shared works the same as - pg_advisory_lock, + pg_advisory_lock_shared works the same as + pg_advisory_lock, except the lock can be shared with other sessions requesting shared locks. Only would-be exclusive lockers are locked out. @@ -20241,10 +20241,10 @@ SELECT (pg_stat_file('filename')).modification; pg_try_advisory_lock - pg_try_advisory_lock is similar to - pg_advisory_lock, except the function will not wait for the + pg_try_advisory_lock is similar to + pg_advisory_lock, except the function will not wait for the lock to become available. It will either obtain the lock immediately and - return true, or return false if the lock cannot be + return true, or return false if the lock cannot be acquired immediately. @@ -20252,8 +20252,8 @@ SELECT (pg_stat_file('filename')).modification; pg_try_advisory_lock_shared - pg_try_advisory_lock_shared works the same as - pg_try_advisory_lock, except it attempts to acquire + pg_try_advisory_lock_shared works the same as + pg_try_advisory_lock, except it attempts to acquire a shared rather than an exclusive lock. @@ -20261,10 +20261,10 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_unlock - pg_advisory_unlock will release a previously-acquired + pg_advisory_unlock will release a previously-acquired exclusive session level advisory lock. It - returns true if the lock is successfully released. - If the lock was not held, it will return false, + returns true if the lock is successfully released. + If the lock was not held, it will return false, and in addition, an SQL warning will be reported by the server. @@ -20272,8 +20272,8 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_unlock_shared - pg_advisory_unlock_shared works the same as - pg_advisory_unlock, + pg_advisory_unlock_shared works the same as + pg_advisory_unlock, except it releases a shared session level advisory lock. @@ -20281,7 +20281,7 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_unlock_all - pg_advisory_unlock_all will release all session level advisory + pg_advisory_unlock_all will release all session level advisory locks held by the current session. (This function is implicitly invoked at session end, even if the client disconnects ungracefully.) 
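 For example (the key 42 is an arbitrary application-defined identifier):

SELECT pg_advisory_lock(42);    -- blocks until the lock is granted
-- ... perform the work the lock protects ...
SELECT pg_advisory_unlock(42);  -- returns true if the lock was held and released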
@@ -20290,8 +20290,8 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_xact_lock - pg_advisory_xact_lock works the same as - pg_advisory_lock, except the lock is automatically released + pg_advisory_xact_lock works the same as + pg_advisory_lock, except the lock is automatically released at the end of the current transaction and cannot be released explicitly. @@ -20299,8 +20299,8 @@ SELECT (pg_stat_file('filename')).modification; pg_advisory_xact_lock_shared - pg_advisory_xact_lock_shared works the same as - pg_advisory_lock_shared, except the lock is automatically released + pg_advisory_xact_lock_shared works the same as + pg_advisory_lock_shared, except the lock is automatically released at the end of the current transaction and cannot be released explicitly. @@ -20308,8 +20308,8 @@ SELECT (pg_stat_file('filename')).modification; pg_try_advisory_xact_lock - pg_try_advisory_xact_lock works the same as - pg_try_advisory_lock, except the lock, if acquired, + pg_try_advisory_xact_lock works the same as + pg_try_advisory_lock, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly. @@ -20318,8 +20318,8 @@ SELECT (pg_stat_file('filename')).modification; pg_try_advisory_xact_lock_shared - pg_try_advisory_xact_lock_shared works the same as - pg_try_advisory_lock_shared, except the lock, if acquired, + pg_try_advisory_xact_lock_shared works the same as + pg_try_advisory_lock_shared, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly. @@ -20336,8 +20336,8 @@ SELECT (pg_stat_file('filename')).modification; - Currently PostgreSQL provides one built in trigger - function, suppress_redundant_updates_trigger, + Currently PostgreSQL provides one built in trigger + function, suppress_redundant_updates_trigger, which will prevent any update that does not actually change the data in the row from taking place, in contrast to the normal behavior which always performs the update @@ -20354,7 +20354,7 @@ SELECT (pg_stat_file('filename')).modification; However, detecting such situations in client code is not always easy, or even possible, and writing expressions to detect them can be error-prone. An alternative is to use - suppress_redundant_updates_trigger, which will skip + suppress_redundant_updates_trigger, which will skip updates that don't change the data. You should use this with care, however. The trigger takes a small but non-trivial time for each record, so if most of the records affected by an update are actually changed, @@ -20362,7 +20362,7 @@ SELECT (pg_stat_file('filename')).modification; - The suppress_redundant_updates_trigger function can be + The suppress_redundant_updates_trigger function can be added to a table like this: CREATE TRIGGER z_min_update @@ -20384,7 +20384,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); Event Trigger Functions - PostgreSQL provides these helper functions + PostgreSQL provides these helper functions to retrieve information from event triggers. @@ -20401,12 +20401,12 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); - pg_event_trigger_ddl_commands returns a list of + pg_event_trigger_ddl_commands returns a list of DDL commands executed by each user action, when invoked in a function attached to a - ddl_command_end event trigger. If called in any other + ddl_command_end event trigger. If called in any other context, an error is raised. 
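 As a sketch of the usual pattern (function and trigger names are
 illustrative; the columns used here are described below):

CREATE FUNCTION log_ddl_commands() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
    LOOP
        RAISE NOTICE 'caught % on %', obj.command_tag, obj.object_identity;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER log_ddl ON ddl_command_end
   EXECUTE PROCEDURE log_ddl_commands();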
- pg_event_trigger_ddl_commands returns one row for each + pg_event_trigger_ddl_commands returns one row for each base command executed; some commands that are a single SQL sentence may return more than one row. This function returns the following columns: @@ -20451,7 +20451,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); schema_name text - Name of the schema the object belongs in, if any; otherwise NULL. + Name of the schema the object belongs in, if any; otherwise NULL. No quoting is applied. @@ -20492,11 +20492,11 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); - pg_event_trigger_dropped_objects returns a list of all objects - dropped by the command in whose sql_drop event it is called. + pg_event_trigger_dropped_objects returns a list of all objects + dropped by the command in whose sql_drop event it is called. If called in any other context, - pg_event_trigger_dropped_objects raises an error. - pg_event_trigger_dropped_objects returns the following columns: + pg_event_trigger_dropped_objects raises an error. + pg_event_trigger_dropped_objects returns the following columns: @@ -20553,7 +20553,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); schema_name text - Name of the schema the object belonged in, if any; otherwise NULL. + Name of the schema the object belonged in, if any; otherwise NULL. No quoting is applied. @@ -20562,7 +20562,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); text Name of the object, if the combination of schema and name can be - used as a unique identifier for the object; otherwise NULL. + used as a unique identifier for the object; otherwise NULL. No quoting is applied, and name is never schema-qualified. @@ -20598,7 +20598,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); - The pg_event_trigger_dropped_objects function can be used + The pg_event_trigger_dropped_objects function can be used in an event trigger like this: CREATE FUNCTION test_event_trigger_for_drops() @@ -20631,7 +20631,7 @@ CREATE EVENT TRIGGER test_event_trigger_for_drops The functions shown in provide information about a table for which a - table_rewrite event has just been called. + table_rewrite event has just been called. If called in any other context, an error is raised. @@ -20668,7 +20668,7 @@ CREATE EVENT TRIGGER test_event_trigger_for_drops - The pg_event_trigger_table_rewrite_oid function can be used + The pg_event_trigger_table_rewrite_oid function can be used in an event trigger like this: CREATE FUNCTION test_event_trigger_table_rewrite_oid() diff --git a/doc/src/sgml/fuzzystrmatch.sgml b/doc/src/sgml/fuzzystrmatch.sgml index ff5bc08fea..373ac4891d 100644 --- a/doc/src/sgml/fuzzystrmatch.sgml +++ b/doc/src/sgml/fuzzystrmatch.sgml @@ -8,14 +8,14 @@ - The fuzzystrmatch module provides several + The fuzzystrmatch module provides several functions to determine similarities and distance between strings. - At present, the soundex, metaphone, - dmetaphone, and dmetaphone_alt functions do + At present, the soundex, metaphone, + dmetaphone, and dmetaphone_alt functions do not work well with multibyte encodings (such as UTF-8). @@ -31,7 +31,7 @@ - The fuzzystrmatch module provides two functions + The fuzzystrmatch module provides two functions for working with Soundex codes: @@ -49,12 +49,12 @@ difference(text, text) returns int - The soundex function converts a string to its Soundex code. 
- The difference function converts two strings to their Soundex + The soundex function converts a string to its Soundex code. + The difference function converts two strings to their Soundex codes and then reports the number of matching code positions. Since Soundex codes have four characters, the result ranges from zero to four, with zero being no match and four being an exact match. (Thus, the - function is misnamed — similarity would have been + function is misnamed — similarity would have been a better name.) @@ -115,10 +115,10 @@ levenshtein_less_equal(text source, text target, int max_d) returns int levenshtein_less_equal is an accelerated version of the Levenshtein function for use when only small distances are of interest. - If the actual distance is less than or equal to max_d, + If the actual distance is less than or equal to max_d, then levenshtein_less_equal returns the correct - distance; otherwise it returns some value greater than max_d. - If max_d is negative then the behavior is the same as + distance; otherwise it returns some value greater than max_d. + If max_d is negative then the behavior is the same as levenshtein. @@ -198,9 +198,9 @@ test=# SELECT metaphone('GUMBO', 4); Double Metaphone - The Double Metaphone system computes two sounds like strings - for a given input string — a primary and an - alternate. In most cases they are the same, but for non-English + The Double Metaphone system computes two sounds like strings + for a given input string — a primary and an + alternate. In most cases they are the same, but for non-English names especially they can be a bit different, depending on pronunciation. These functions compute the primary and alternate codes: diff --git a/doc/src/sgml/generate-errcodes-table.pl b/doc/src/sgml/generate-errcodes-table.pl index 01fc6166bf..e655703b5b 100644 --- a/doc/src/sgml/generate-errcodes-table.pl +++ b/doc/src/sgml/generate-errcodes-table.pl @@ -30,12 +30,12 @@ s/-/—/; # Wrap PostgreSQL in - s/PostgreSQL/PostgreSQL<\/>/g; + s/PostgreSQL/PostgreSQL<\/productname>/g; print "\n\n"; print "\n"; print ""; - print "$_\n"; + print "$_\n"; print "\n"; next; diff --git a/doc/src/sgml/generic-wal.sgml b/doc/src/sgml/generic-wal.sgml index dfa78c5ca2..7a0284994c 100644 --- a/doc/src/sgml/generic-wal.sgml +++ b/doc/src/sgml/generic-wal.sgml @@ -13,8 +13,8 @@ The API for constructing generic WAL records is defined in - access/generic_xlog.h and implemented - in access/transam/generic_xlog.c. + access/generic_xlog.h and implemented + in access/transam/generic_xlog.c. @@ -24,24 +24,24 @@ - state = GenericXLogStart(relation) — start + state = GenericXLogStart(relation) — start construction of a generic WAL record for the given relation. - page = GenericXLogRegisterBuffer(state, buffer, flags) + page = GenericXLogRegisterBuffer(state, buffer, flags) — register a buffer to be modified within the current generic WAL record. This function returns a pointer to a temporary copy of the buffer's page, where modifications should be made. (Do not modify the buffer's contents directly.) The third argument is a bit mask of flags applicable to the operation. Currently the only such flag is - GENERIC_XLOG_FULL_IMAGE, which indicates that a full-page + GENERIC_XLOG_FULL_IMAGE, which indicates that a full-page image rather than a delta update should be included in the WAL record. Typically this flag would be set if the page is new or has been rewritten completely. 
- GenericXLogRegisterBuffer can be repeated if the + GenericXLogRegisterBuffer can be repeated if the WAL-logged action needs to modify multiple pages. @@ -54,7 +54,7 @@ - GenericXLogFinish(state) — apply the changes to + GenericXLogFinish(state) — apply the changes to the buffers and emit the generic WAL record. @@ -63,7 +63,7 @@ WAL record construction can be canceled between any of the above steps by - calling GenericXLogAbort(state). This will discard all + calling GenericXLogAbort(state). This will discard all changes to the page image copies. @@ -75,13 +75,13 @@ No direct modifications of buffers are allowed! All modifications must - be done in copies acquired from GenericXLogRegisterBuffer(). + be done in copies acquired from GenericXLogRegisterBuffer(). In other words, code that makes generic WAL records should never call - BufferGetPage() for itself. However, it remains the + BufferGetPage() for itself. However, it remains the caller's responsibility to pin/unpin and lock/unlock the buffers at appropriate times. Exclusive lock must be held on each target buffer - from before GenericXLogRegisterBuffer() until after - GenericXLogFinish(). + from before GenericXLogRegisterBuffer() until after + GenericXLogFinish(). @@ -97,7 +97,7 @@ The maximum number of buffers that can be registered for a generic WAL - record is MAX_GENERIC_XLOG_PAGES. An error will be thrown + record is MAX_GENERIC_XLOG_PAGES. An error will be thrown if this limit is exceeded. @@ -106,26 +106,26 @@ Generic WAL assumes that the pages to be modified have standard layout, and in particular that there is no useful data between - pd_lower and pd_upper. + pd_lower and pd_upper. Since you are modifying copies of buffer - pages, GenericXLogStart() does not start a critical + pages, GenericXLogStart() does not start a critical section. Thus, you can safely do memory allocation, error throwing, - etc. between GenericXLogStart() and - GenericXLogFinish(). The only actual critical section is - present inside GenericXLogFinish(). There is no need to - worry about calling GenericXLogAbort() during an error + etc. between GenericXLogStart() and + GenericXLogFinish(). The only actual critical section is + present inside GenericXLogFinish(). There is no need to + worry about calling GenericXLogAbort() during an error exit, either. - GenericXLogFinish() takes care of marking buffers dirty + GenericXLogFinish() takes care of marking buffers dirty and setting their LSNs. You do not need to do this explicitly. @@ -148,7 +148,7 @@ - If GENERIC_XLOG_FULL_IMAGE is not specified for a + If GENERIC_XLOG_FULL_IMAGE is not specified for a registered buffer, the generic WAL record contains a delta between the old and the new page images. This delta is based on byte-by-byte comparison. This is not very compact for the case of moving data diff --git a/doc/src/sgml/geqo.sgml b/doc/src/sgml/geqo.sgml index e0f8adcd6e..99ee3ebca0 100644 --- a/doc/src/sgml/geqo.sgml +++ b/doc/src/sgml/geqo.sgml @@ -88,7 +88,7 @@ - According to the comp.ai.genetic FAQ it cannot be stressed too + According to the comp.ai.genetic FAQ it cannot be stressed too strongly that a GA is not a pure random search for a solution to a problem. A GA uses stochastic processes, but the result is distinctly non-random (better than random). @@ -222,7 +222,7 @@ are considered; and all the initially-determined relation scan plans are available. The estimated cost is the cheapest of these possibilities.) 
Join sequences with lower estimated cost are considered - more fit than those with higher cost. The genetic algorithm + more fit than those with higher cost. The genetic algorithm discards the least fit candidates. Then new candidates are generated by combining genes of more-fit candidates — that is, by using randomly-chosen portions of known low-cost join sequences to create @@ -235,20 +235,20 @@ This process is inherently nondeterministic, because of the randomized choices made during both the initial population selection and subsequent - mutation of the best candidates. To avoid surprising changes + mutation of the best candidates. To avoid surprising changes of the selected plan, each run of the GEQO algorithm restarts its random number generator with the current - parameter setting. As long as geqo_seed and the other + parameter setting. As long as geqo_seed and the other GEQO parameters are kept fixed, the same plan will be generated for a given query (and other planner inputs such as statistics). To experiment - with different search paths, try changing geqo_seed. + with different search paths, try changing geqo_seed. Future Implementation Tasks for - <productname>PostgreSQL</> <acronym>GEQO</acronym> + PostgreSQL GEQO Work is still needed to improve the genetic algorithm parameter diff --git a/doc/src/sgml/gin.sgml b/doc/src/sgml/gin.sgml index 7c2321ec3c..873627a210 100644 --- a/doc/src/sgml/gin.sgml +++ b/doc/src/sgml/gin.sgml @@ -21,15 +21,15 @@ - We use the word item to refer to a composite value that - is to be indexed, and the word key to refer to an element + We use the word item to refer to a composite value that + is to be indexed, and the word key to refer to an element value. GIN always stores and searches for keys, not item values per se. A GIN index stores a set of (key, posting list) pairs, - where a posting list is a set of row IDs in which the key + where a posting list is a set of row IDs in which the key occurs. The same row ID can appear in multiple posting lists, since an item can contain more than one key. Each key value is stored only once, so a GIN index is very compact for cases @@ -66,7 +66,7 @@ Built-in Operator Classes - The core PostgreSQL distribution + The core PostgreSQL distribution includes the GIN operator classes shown in . (Some of the optional modules described in @@ -85,38 +85,38 @@ - array_ops - anyarray + array_ops + anyarray - && - <@ - = - @> + && + <@ + = + @> - jsonb_ops - jsonb + jsonb_ops + jsonb - ? - ?& - ?| - @> + ? + ?& + ?| + @> - jsonb_path_ops - jsonb + jsonb_path_ops + jsonb - @> + @> - tsvector_ops - tsvector + tsvector_ops + tsvector - @@ - @@@ + @@ + @@@ @@ -124,8 +124,8 @@ - Of the two operator classes for type jsonb, jsonb_ops - is the default. jsonb_path_ops supports fewer operators but + Of the two operator classes for type jsonb, jsonb_ops + is the default. jsonb_path_ops supports fewer operators but offers better performance for those operators. See for details. @@ -157,15 +157,15 @@ Datum *extractValue(Datum itemValue, int32 *nkeys, - bool **nullFlags) + bool **nullFlags) Returns a palloc'd array of keys given an item to be indexed. The - number of returned keys must be stored into *nkeys. + number of returned keys must be stored into *nkeys. If any of the keys can be null, also palloc an array of - *nkeys bool fields, store its address at - *nullFlags, and set these null flags as needed. 
- *nullFlags can be left NULL (its initial value) + *nkeys bool fields, store its address at + *nullFlags, and set these null flags as needed. + *nullFlags can be left NULL (its initial value) if all keys are non-null. The return value can be NULL if the item contains no keys. @@ -175,40 +175,40 @@ Datum *extractQuery(Datum query, int32 *nkeys, StrategyNumber n, bool **pmatch, Pointer **extra_data, - bool **nullFlags, int32 *searchMode) + bool **nullFlags, int32 *searchMode) Returns a palloc'd array of keys given a value to be queried; that is, - query is the value on the right-hand side of an + query is the value on the right-hand side of an indexable operator whose left-hand side is the indexed column. - n is the strategy number of the operator within the + n is the strategy number of the operator within the operator class (see ). - Often, extractQuery will need - to consult n to determine the data type of - query and the method it should use to extract key values. - The number of returned keys must be stored into *nkeys. + Often, extractQuery will need + to consult n to determine the data type of + query and the method it should use to extract key values. + The number of returned keys must be stored into *nkeys. If any of the keys can be null, also palloc an array of - *nkeys bool fields, store its address at - *nullFlags, and set these null flags as needed. - *nullFlags can be left NULL (its initial value) + *nkeys bool fields, store its address at + *nullFlags, and set these null flags as needed. + *nullFlags can be left NULL (its initial value) if all keys are non-null. - The return value can be NULL if the query contains no keys. + The return value can be NULL if the query contains no keys. - searchMode is an output argument that allows - extractQuery to specify details about how the search + searchMode is an output argument that allows + extractQuery to specify details about how the search will be done. - If *searchMode is set to - GIN_SEARCH_MODE_DEFAULT (which is the value it is + If *searchMode is set to + GIN_SEARCH_MODE_DEFAULT (which is the value it is initialized to before call), only items that match at least one of the returned keys are considered candidate matches. - If *searchMode is set to - GIN_SEARCH_MODE_INCLUDE_EMPTY, then in addition to items + If *searchMode is set to + GIN_SEARCH_MODE_INCLUDE_EMPTY, then in addition to items containing at least one matching key, items that contain no keys at all are considered candidate matches. (This mode is useful for implementing is-subset-of operators, for example.) - If *searchMode is set to GIN_SEARCH_MODE_ALL, + If *searchMode is set to GIN_SEARCH_MODE_ALL, then all non-null items in the index are considered candidate matches, whether they match any of the returned keys or not. (This mode is much slower than the other two choices, since it requires @@ -217,33 +217,33 @@ in most cases is probably not a good candidate for a GIN operator class.) The symbols to use for setting this mode are defined in - access/gin.h. + access/gin.h. - pmatch is an output argument for use when partial match - is supported. To use it, extractQuery must allocate - an array of *nkeys booleans and store its address at - *pmatch. Each element of the array should be set to TRUE + pmatch is an output argument for use when partial match + is supported. To use it, extractQuery must allocate + an array of *nkeys booleans and store its address at + *pmatch. 
Each element of the array should be set to TRUE if the corresponding key requires partial match, FALSE if not. - If *pmatch is set to NULL then GIN assumes partial match + If *pmatch is set to NULL then GIN assumes partial match is not required. The variable is initialized to NULL before call, so this argument can simply be ignored by operator classes that do not support partial match. - extra_data is an output argument that allows - extractQuery to pass additional data to the - consistent and comparePartial methods. - To use it, extractQuery must allocate - an array of *nkeys pointers and store its address at - *extra_data, then store whatever it wants to into the + extra_data is an output argument that allows + extractQuery to pass additional data to the + consistent and comparePartial methods. + To use it, extractQuery must allocate + an array of *nkeys pointers and store its address at + *extra_data, then store whatever it wants to into the individual pointers. The variable is initialized to NULL before call, so this argument can simply be ignored by operator classes that - do not require extra data. If *extra_data is set, the - whole array is passed to the consistent method, and - the appropriate element to the comparePartial method. + do not require extra data. If *extra_data is set, the + whole array is passed to the consistent method, and + the appropriate element to the comparePartial method. @@ -251,10 +251,10 @@ An operator class must also provide a function to check if an indexed item - matches the query. It comes in two flavors, a boolean consistent - function, and a ternary triConsistent function. - triConsistent covers the functionality of both, so providing - triConsistent alone is sufficient. However, if the boolean + matches the query. It comes in two flavors, a boolean consistent + function, and a ternary triConsistent function. + triConsistent covers the functionality of both, so providing + triConsistent alone is sufficient. However, if the boolean variant is significantly cheaper to calculate, it can be advantageous to provide both. If only the boolean variant is provided, some optimizations that depend on refuting index items before fetching all the keys are @@ -264,48 +264,48 @@ bool consistent(bool check[], StrategyNumber n, Datum query, int32 nkeys, Pointer extra_data[], bool *recheck, - Datum queryKeys[], bool nullFlags[]) + Datum queryKeys[], bool nullFlags[]) Returns TRUE if an indexed item satisfies the query operator with - strategy number n (or might satisfy it, if the recheck + strategy number n (or might satisfy it, if the recheck indication is returned). This function does not have direct access to the indexed item's value, since GIN does not store items explicitly. Rather, what is available is knowledge about which key values extracted from the query appear in a given - indexed item. The check array has length - nkeys, which is the same as the number of keys previously - returned by extractQuery for this query datum. + indexed item. The check array has length + nkeys, which is the same as the number of keys previously + returned by extractQuery for this query datum. Each element of the - check array is TRUE if the indexed item contains the + check array is TRUE if the indexed item contains the corresponding query key, i.e., if (check[i] == TRUE) the i-th key of the - extractQuery result array is present in the indexed item. 
- The original query datum is - passed in case the consistent method needs to consult it, - and so are the queryKeys[] and nullFlags[] - arrays previously returned by extractQuery. - extra_data is the extra-data array returned by - extractQuery, or NULL if none. + extractQuery result array is present in the indexed item. + The original query datum is + passed in case the consistent method needs to consult it, + and so are the queryKeys[] and nullFlags[] + arrays previously returned by extractQuery. + extra_data is the extra-data array returned by + extractQuery, or NULL if none. - When extractQuery returns a null key in - queryKeys[], the corresponding check[] element + When extractQuery returns a null key in + queryKeys[], the corresponding check[] element is TRUE if the indexed item contains a null key; that is, the - semantics of check[] are like IS NOT DISTINCT - FROM. The consistent function can examine the - corresponding nullFlags[] element if it needs to tell + semantics of check[] are like IS NOT DISTINCT + FROM. The consistent function can examine the + corresponding nullFlags[] element if it needs to tell the difference between a regular value match and a null match. - On success, *recheck should be set to TRUE if the heap + On success, *recheck should be set to TRUE if the heap tuple needs to be rechecked against the query operator, or FALSE if the index test is exact. That is, a FALSE return value guarantees that the heap tuple does not match the query; a TRUE return value with - *recheck set to FALSE guarantees that the heap tuple does + *recheck set to FALSE guarantees that the heap tuple does match the query; and a TRUE return value with - *recheck set to TRUE means that the heap tuple might match + *recheck set to TRUE means that the heap tuple might match the query, so it needs to be fetched and rechecked by evaluating the query operator directly against the originally indexed item. @@ -315,30 +315,30 @@ GinTernaryValue triConsistent(GinTernaryValue check[], StrategyNumber n, Datum query, int32 nkeys, Pointer extra_data[], - Datum queryKeys[], bool nullFlags[]) + Datum queryKeys[], bool nullFlags[]) - triConsistent is similar to consistent, - but instead of booleans in the check vector, there are + triConsistent is similar to consistent, + but instead of booleans in the check vector, there are three possible values for each - key: GIN_TRUE, GIN_FALSE and - GIN_MAYBE. GIN_FALSE and GIN_TRUE + key: GIN_TRUE, GIN_FALSE and + GIN_MAYBE. GIN_FALSE and GIN_TRUE have the same meaning as regular boolean values, while - GIN_MAYBE means that the presence of that key is not known. - When GIN_MAYBE values are present, the function should only - return GIN_TRUE if the item certainly matches whether or + GIN_MAYBE means that the presence of that key is not known. + When GIN_MAYBE values are present, the function should only + return GIN_TRUE if the item certainly matches whether or not the index item contains the corresponding query keys. Likewise, the - function must return GIN_FALSE only if the item certainly - does not match, whether or not it contains the GIN_MAYBE - keys. If the result depends on the GIN_MAYBE entries, i.e., + function must return GIN_FALSE only if the item certainly + does not match, whether or not it contains the GIN_MAYBE + keys. If the result depends on the GIN_MAYBE entries, i.e., the match cannot be confirmed or refuted based on the known query keys, - the function must return GIN_MAYBE. + the function must return GIN_MAYBE. 
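 To make the ternary semantics concrete, here is a minimal sketch of a
 triConsistent function for a hypothetical single-strategy
 "overlaps" operator class, modeled on the built-in array-overlap
 behavior. Only the signature and the GIN_* constants come from the API
 described above; the function name and the overlap semantics are
 illustrative.

#include "postgres.h"
#include "fmgr.h"
#include "access/gin.h"

PG_FUNCTION_INFO_V1(my_tri_consistent);

Datum
my_tri_consistent(PG_FUNCTION_ARGS)
{
    GinTernaryValue *check = (GinTernaryValue *) PG_GETARG_POINTER(0);
    /* StrategyNumber n = PG_GETARG_UINT16(1);  -- unused: one strategy */
    /* Datum query = PG_GETARG_DATUM(2);        -- unused in this sketch */
    int32       nkeys = PG_GETARG_INT32(3);
    GinTernaryValue result = GIN_FALSE;
    int32       i;

    /*
     * Overlap semantics: the item matches if it contains at least one
     * query key.  A certainly-present key yields GIN_TRUE; if no key is
     * certain but some might be present, the answer is GIN_MAYBE.
     */
    for (i = 0; i < nkeys; i++)
    {
        if (check[i] == GIN_TRUE)
        {
            result = GIN_TRUE;
            break;
        }
        if (check[i] == GIN_MAYBE)
            result = GIN_MAYBE;
    }

    PG_RETURN_GIN_TERNARY_VALUE(result);
}

 Note how the sketch returns GIN_MAYBE only when the outcome actually
 depends on the unknown entries, as the rules above require.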
- When there are no GIN_MAYBE values in the check - vector, a GIN_MAYBE return value is the equivalent of - setting the recheck flag in the - boolean consistent function. + When there are no GIN_MAYBE values in the check + vector, a GIN_MAYBE return value is the equivalent of + setting the recheck flag in the + boolean consistent function. @@ -352,7 +352,7 @@ - int compare(Datum a, Datum b) + int compare(Datum a, Datum b) Compares two keys (not indexed items!) and returns an integer less than @@ -364,13 +364,13 @@ - Alternatively, if the operator class does not provide a compare + Alternatively, if the operator class does not provide a compare method, GIN will look up the default btree operator class for the index key data type, and use its comparison function. It is recommended to specify the comparison function in a GIN operator class that is meant for just one data type, as looking up the btree operator class costs a few cycles. However, polymorphic GIN operator classes (such - as array_ops) typically cannot specify a single comparison + as array_ops) typically cannot specify a single comparison function. @@ -381,7 +381,7 @@ int comparePartial(Datum partial_key, Datum key, StrategyNumber n, - Pointer extra_data) + Pointer extra_data)
Compare a partial-match query key to an index key. Returns an integer @@ -389,11 +389,11 @@ does not match the query, but the index scan should continue; zero means that the index key does match the query; greater than zero indicates that the index scan should stop because no more matches - are possible. The strategy number n of the operator + are possible. The strategy number n of the operator that generated the partial match query is provided, in case its semantics are needed to determine when to end the scan. Also, - extra_data is the corresponding element of the extra-data - array made by extractQuery, or NULL if none. + extra_data is the corresponding element of the extra-data + array made by extractQuery, or NULL if none. Null keys are never passed to this function. @@ -402,25 +402,25 @@ - To support partial match queries, an operator class must - provide the comparePartial method, and its - extractQuery method must set the pmatch + To support partial match queries, an operator class must + provide the comparePartial method, and its + extractQuery method must set the pmatch parameter when a partial-match query is encountered. See for details. - The actual data types of the various Datum values mentioned + The actual data types of the various Datum values mentioned above vary depending on the operator class. The item values passed to - extractValue are always of the operator class's input type, and - all key values must be of the class's STORAGE type. The type of - the query argument passed to extractQuery, - consistent and triConsistent is whatever is the + extractValue are always of the operator class's input type, and + all key values must be of the class's STORAGE type. The type of + the query argument passed to extractQuery, + consistent and triConsistent is whatever is the right-hand input type of the class member operator identified by the strategy number. This need not be the same as the indexed type, so long as key values of the correct type can be extracted from it. However, it is recommended that the SQL declarations of these three support functions use - the opclass's indexed data type for the query argument, even + the opclass's indexed data type for the query argument, even though the actual type might be something else depending on the operator. @@ -434,8 +434,8 @@ constructed over keys, where each key is an element of one or more indexed items (a member of an array, for example) and where each tuple in a leaf page contains either a pointer to a B-tree of heap pointers (a - posting tree), or a simple list of heap pointers (a posting - list) when the list is small enough to fit into a single index tuple along + posting tree), or a simple list of heap pointers (a posting + list) when the list is small enough to fit into a single index tuple along with the key value. @@ -443,7 +443,7 @@ As of PostgreSQL 9.1, null key values can be included in the index. Also, placeholder nulls are included in the index for indexed items that are null or contain no keys according to - extractValue. This allows searches that should find empty + extractValue. This allows searches that should find empty items to do so. @@ -461,7 +461,7 @@ intrinsic nature of inverted indexes: inserting or updating one heap row can cause many inserts into the index (one for each key extracted from the indexed item). 
As of PostgreSQL 8.4, - GIN is capable of postponing much of this work by inserting + GIN is capable of postponing much of this work by inserting new tuples into a temporary, unsorted list of pending entries. When the table is vacuumed or autoanalyzed, or when gin_clean_pending_list function is called, or if the @@ -479,7 +479,7 @@ of pending entries in addition to searching the regular index, and so a large list of pending entries will slow searches significantly. Another disadvantage is that, while most updates are fast, an update - that causes the pending list to become too large will incur an + that causes the pending list to become too large will incur an immediate cleanup cycle and thus be much slower than other updates. Proper use of autovacuum can minimize both of these problems. @@ -497,15 +497,15 @@ Partial Match Algorithm - GIN can support partial match queries, in which the query + GIN can support partial match queries, in which the query does not determine an exact match for one or more keys, but the possible matches fall within a reasonably narrow range of key values (within the - key sorting order determined by the compare support method). - The extractQuery method, instead of returning a key value + key sorting order determined by the compare support method). + The extractQuery method, instead of returning a key value to be matched exactly, returns a key value that is the lower bound of - the range to be searched, and sets the pmatch flag true. - The key range is then scanned using the comparePartial - method. comparePartial must return zero for a matching + the range to be searched, and sets the pmatch flag true. + The key range is then scanned using the comparePartial + method. comparePartial must return zero for a matching index key, less than zero for a non-match that is still within the range to be searched, or greater than zero if the index key is past the range that could match. @@ -542,7 +542,7 @@ Build time for a GIN index is very sensitive to - the maintenance_work_mem setting; it doesn't pay to + the maintenance_work_mem setting; it doesn't pay to skimp on work memory during index creation. @@ -553,18 +553,18 @@ During a series of insertions into an existing GIN - index that has fastupdate enabled, the system will clean up + index that has fastupdate enabled, the system will clean up the pending-entry list whenever the list grows larger than - gin_pending_list_limit. To avoid fluctuations in observed + gin_pending_list_limit. To avoid fluctuations in observed response time, it's desirable to have pending-list cleanup occur in the background (i.e., via autovacuum). Foreground cleanup operations - can be avoided by increasing gin_pending_list_limit + can be avoided by increasing gin_pending_list_limit or making autovacuum more aggressive. However, enlarging the threshold of the cleanup operation means that if a foreground cleanup does occur, it will take even longer. - gin_pending_list_limit can be overridden for individual + gin_pending_list_limit can be overridden for individual GIN indexes by changing storage parameters, and which allows each GIN index to have its own cleanup threshold. For example, it's possible to increase the threshold only for the GIN @@ -616,7 +616,7 @@ GIN assumes that indexable operators are strict. 
This - means that extractValue will not be called at all on a null + means that extractValue will not be called at all on a null item value (instead, a placeholder index entry is created automatically), and extractQuery will not be called on a null query value either (instead, the query is presumed to be unsatisfiable). Note @@ -629,36 +629,36 @@ Examples - The core PostgreSQL distribution + The core PostgreSQL distribution includes the GIN operator classes previously shown in . - The following contrib modules also contain + The following contrib modules also contain GIN operator classes: - btree_gin + btree_gin B-tree equivalent functionality for several data types - hstore + hstore Module for storing (key, value) pairs - intarray + intarray Enhanced support for int[] - pg_trgm + pg_trgm Text similarity using trigram matching diff --git a/doc/src/sgml/gist.sgml b/doc/src/sgml/gist.sgml index 1648eb3672..4e4470d439 100644 --- a/doc/src/sgml/gist.sgml +++ b/doc/src/sgml/gist.sgml @@ -44,7 +44,7 @@ Built-in Operator Classes - The core PostgreSQL distribution + The core PostgreSQL distribution includes the GiST operator classes shown in . (Some of the optional modules described in @@ -64,142 +64,142 @@ - box_ops - box + box_ops + box - && - &> - &< - &<| - >> - << - <<| - <@ - @> - @ - |&> - |>> - ~ - ~= + && + &> + &< + &<| + >> + << + <<| + <@ + @> + @ + |&> + |>> + ~ + ~= - circle_ops - circle + circle_ops + circle - && - &> - &< - &<| - >> - << - <<| - <@ - @> - @ - |&> - |>> - ~ - ~= + && + &> + &< + &<| + >> + << + <<| + <@ + @> + @ + |&> + |>> + ~ + ~= - <-> + <-> - inet_ops - inet, cidr + inet_ops + inet, cidr - && - >> - >>= - > - >= - <> - << - <<= - < - <= - = + && + >> + >>= + > + >= + <> + << + <<= + < + <= + = - point_ops - point + point_ops + point - >> - >^ - << - <@ - <@ - <@ - <^ - ~= + >> + >^ + << + <@ + <@ + <@ + <^ + ~= - <-> + <-> - poly_ops - polygon + poly_ops + polygon - && - &> - &< - &<| - >> - << - <<| - <@ - @> - @ - |&> - |>> - ~ - ~= + && + &> + &< + &<| + >> + << + <<| + <@ + @> + @ + |&> + |>> + ~ + ~= - <-> + <-> - range_ops + range_ops any range type - && - &> - &< - >> - << - <@ - -|- - = - @> - @> + && + &> + &< + >> + << + <@ + -|- + = + @> + @> - tsquery_ops - tsquery + tsquery_ops + tsquery - <@ - @> + <@ + @> - tsvector_ops - tsvector + tsvector_ops + tsvector - @@ + @@ @@ -209,9 +209,9 @@ - For historical reasons, the inet_ops operator class is - not the default class for types inet and cidr. - To use it, mention the class name in CREATE INDEX, + For historical reasons, the inet_ops operator class is + not the default class for types inet and cidr. + To use it, mention the class name in CREATE INDEX, for example CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); @@ -270,53 +270,53 @@ CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); There are five methods that an index operator class for GiST must provide, and four that are optional. Correctness of the index is ensured - by proper implementation of the same, consistent - and union methods, while efficiency (size and speed) of the - index will depend on the penalty and picksplit + by proper implementation of the same, consistent + and union methods, while efficiency (size and speed) of the + index will depend on the penalty and picksplit methods. 
- Two optional methods are compress and - decompress, which allow an index to have internal tree data of + Two optional methods are compress and + decompress, which allow an index to have internal tree data of a different type than the data it indexes. The leaves are to be of the indexed data type, while the other tree nodes can be of any C struct (but - you still have to follow PostgreSQL data type rules here, - see about varlena for variable sized data). If the tree's - internal data type exists at the SQL level, the STORAGE option - of the CREATE OPERATOR CLASS command can be used. - The optional eighth method is distance, which is needed + you still have to follow PostgreSQL data type rules here, + see about varlena for variable sized data). If the tree's + internal data type exists at the SQL level, the STORAGE option + of the CREATE OPERATOR CLASS command can be used. + The optional eighth method is distance, which is needed if the operator class wishes to support ordered scans (nearest-neighbor - searches). The optional ninth method fetch is needed if the + searches). The optional ninth method fetch is needed if the operator class wishes to support index-only scans, except when the - compress method is omitted. + compress method is omitted. - consistent + consistent - Given an index entry p and a query value q, + Given an index entry p and a query value q, this function determines whether the index entry is - consistent with the query; that is, could the predicate - indexed_column - indexable_operator q be true for + consistent with the query; that is, could the predicate + indexed_column + indexable_operator q be true for any row represented by the index entry? For a leaf index entry this is equivalent to testing the indexable condition, while for an internal tree node this determines whether it is necessary to scan the subtree of the index represented by the tree node. When the result is - true, a recheck flag must also be returned. + true, a recheck flag must also be returned. This indicates whether the predicate is certainly true or only possibly - true. If recheck = false then the index has - tested the predicate condition exactly, whereas if recheck - = true the row is only a candidate match. In that case the + true. If recheck = false then the index has + tested the predicate condition exactly, whereas if recheck + = true the row is only a candidate match. In that case the system will automatically evaluate the - indexable_operator against the actual row value to see + indexable_operator against the actual row value to see if it is really a match. This convention allows GiST to support both lossless and lossy index structures. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_consistent(internal, data_type, smallint, oid, internal) @@ -356,23 +356,23 @@ my_consistent(PG_FUNCTION_ARGS) } - Here, key is an element in the index and query - the value being looked up in the index. The StrategyNumber + Here, key is an element in the index and query + the value being looked up in the index. The StrategyNumber parameter indicates which operator of your operator class is being applied — it matches one of the operator numbers in the - CREATE OPERATOR CLASS command. + CREATE OPERATOR CLASS command. 
Depending on which operators you have included in the class, the data - type of query could vary with the operator, since it will + type of query could vary with the operator, since it will be whatever type is on the righthand side of the operator, which might be different from the indexed data type appearing on the lefthand side. (The above code skeleton assumes that only one type is possible; if - not, fetching the query argument value would have to depend + not, fetching the query argument value would have to depend on the operator.) It is recommended that the SQL declaration of - the consistent function use the opclass's indexed data - type for the query argument, even though the actual type + the consistent function use the opclass's indexed data + type for the query argument, even though the actual type might be something else depending on the operator. @@ -380,7 +380,7 @@ my_consistent(PG_FUNCTION_ARGS) - union + union This method consolidates information in the tree. Given a set of @@ -389,7 +389,7 @@ my_consistent(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_union(internal, internal) @@ -439,44 +439,44 @@ my_union(PG_FUNCTION_ARGS) As you can see, in this skeleton we're dealing with a data type - where union(X, Y, Z) = union(union(X, Y), Z). It's easy + where union(X, Y, Z) = union(union(X, Y), Z). It's easy enough to support data types where this is not the case, by implementing the proper union algorithm in this - GiST support method. + GiST support method. - The result of the union function must be a value of the + The result of the union function must be a value of the index's storage type, whatever that is (it might or might not be - different from the indexed column's type). The union - function should return a pointer to newly palloc()ed + different from the indexed column's type). The union + function should return a pointer to newly palloc()ed memory. You can't just return the input value as-is, even if there is no type change. - As shown above, the union function's - first internal argument is actually - a GistEntryVector pointer. The second argument is a + As shown above, the union function's + first internal argument is actually + a GistEntryVector pointer. The second argument is a pointer to an integer variable, which can be ignored. (It used to be - required that the union function store the size of its + required that the union function store the size of its result value into that variable, but this is no longer necessary.) - compress + compress Converts a data item into a format suitable for physical storage in an index page. - If the compress method is omitted, data items are stored + If the compress method is omitted, data items are stored in the index without modification. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_compress(internal) @@ -519,7 +519,7 @@ my_compress(PG_FUNCTION_ARGS) - You have to adapt compressed_data_type to the specific + You have to adapt compressed_data_type to the specific type you're converting to in order to compress your leaf nodes, of course. @@ -527,24 +527,24 @@ my_compress(PG_FUNCTION_ARGS) - decompress + decompress Converts the stored representation of a data item into a format that can be manipulated by the other GiST methods in the operator class. 
- If the decompress method is omitted, it is assumed that + If the decompress method is omitted, it is assumed that the other GiST methods can work directly on the stored data format. - (decompress is not necessarily the reverse of + (decompress is not necessarily the reverse of the compress method; in particular, if compress is lossy then it's impossible - for decompress to exactly reconstruct the original - data. decompress is not necessarily equivalent - to fetch, either, since the other GiST methods might not + for decompress to exactly reconstruct the original + data. decompress is not necessarily equivalent + to fetch, either, since the other GiST methods might not require full reconstruction of the data.) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_decompress(internal) @@ -573,7 +573,7 @@ my_decompress(PG_FUNCTION_ARGS) - penalty + penalty Returns a value indicating the cost of inserting the new @@ -584,7 +584,7 @@ my_decompress(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_penalty(internal, internal, internal) @@ -612,15 +612,15 @@ my_penalty(PG_FUNCTION_ARGS) } - For historical reasons, the penalty function doesn't - just return a float result; instead it has to store the value + For historical reasons, the penalty function doesn't + just return a float result; instead it has to store the value at the location indicated by the third argument. The return value per se is ignored, though it's conventional to pass back the address of that argument. - The penalty function is crucial to good performance of + The penalty function is crucial to good performance of the index. It'll get used at insertion time to determine which branch to follow when choosing where to add the new entry in the tree. At query time, the more balanced the index, the quicker the lookup. @@ -629,7 +629,7 @@ my_penalty(PG_FUNCTION_ARGS) - picksplit + picksplit When an index page split is necessary, this function decides which @@ -638,7 +638,7 @@ my_penalty(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_picksplit(internal, internal) @@ -725,33 +725,33 @@ my_picksplit(PG_FUNCTION_ARGS) } - Notice that the picksplit function's result is delivered - by modifying the passed-in v structure. The return + Notice that the picksplit function's result is delivered + by modifying the passed-in v structure. The return value per se is ignored, though it's conventional to pass back the - address of v. + address of v. - Like penalty, the picksplit function + Like penalty, the picksplit function is crucial to good performance of the index. Designing suitable - penalty and picksplit implementations + penalty and picksplit implementations is where the challenge of implementing well-performing - GiST indexes lies. + GiST indexes lies. - same + same Returns true if two index entries are identical, false otherwise. - (An index entry is a value of the index's storage type, + (An index entry is a value of the index's storage type, not necessarily the original indexed column's type.) 
- The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_same(storage_type, storage_type, internal) @@ -777,7 +777,7 @@ my_same(PG_FUNCTION_ARGS) } - For historical reasons, the same function doesn't + For historical reasons, the same function doesn't just return a Boolean result; instead it has to store the flag at the location indicated by the third argument. The return value per se is ignored, though it's conventional to pass back the @@ -787,15 +787,15 @@ my_same(PG_FUNCTION_ARGS) - distance + distance - Given an index entry p and a query value q, + Given an index entry p and a query value q, this function determines the index entry's - distance from the query value. This function must be + distance from the query value. This function must be supplied if the operator class contains any ordering operators. A query using the ordering operator will be implemented by returning - index entries with the smallest distance values first, + index entries with the smallest distance values first, so the results must be consistent with the operator's semantics. For a leaf index entry the result just represents the distance to the index entry; for an internal tree node, the result must be the @@ -803,7 +803,7 @@ my_same(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_distance(internal, data_type, smallint, oid, internal) @@ -836,8 +836,8 @@ my_distance(PG_FUNCTION_ARGS) } - The arguments to the distance function are identical to - the arguments of the consistent function. + The arguments to the distance function are identical to + the arguments of the consistent function. @@ -847,31 +847,31 @@ my_distance(PG_FUNCTION_ARGS) geometric applications. For an internal tree node, the distance returned must not be greater than the distance to any of the child nodes. If the returned distance is not exact, the function must set - *recheck to true. (This is not necessary for internal tree + *recheck to true. (This is not necessary for internal tree nodes; for them, the calculation is always assumed to be inexact.) In this case the executor will calculate the accurate distance after fetching the tuple from the heap, and reorder the tuples if necessary. - If the distance function returns *recheck = true for any + If the distance function returns *recheck = true for any leaf node, the original ordering operator's return type must - be float8 or float4, and the distance function's + be float8 or float4, and the distance function's result values must be comparable to those of the original ordering operator, since the executor will sort using both distance function results and recalculated ordering-operator results. Otherwise, the - distance function's result values can be any finite float8 + distance function's result values can be any finite float8 values, so long as the relative order of the result values matches the order returned by the ordering operator. (Infinity and minus infinity are used internally to handle cases such as nulls, so it is not - recommended that distance functions return these values.) + recommended that distance functions return these values.) 
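 To see how the distance support is used in practice, here is a
 nearest-neighbor query against the built-in point_ops class, whose
 <-> ordering operator appears in the table above; the places table is
 invented for the example:

CREATE TABLE places (name text, location point);
CREATE INDEX ON places USING gist (location);

SELECT name
FROM places
ORDER BY location <-> point '(101, 456)'
LIMIT 10;

 Because GiST returns index entries in increasing order of the distance
 function's results, the LIMIT can be satisfied without scanning and
 sorting the whole table.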
- fetch + fetch Converts the compressed index representation of a data item into the @@ -880,7 +880,7 @@ my_distance(PG_FUNCTION_ARGS) - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE OR REPLACE FUNCTION my_fetch(internal) @@ -889,14 +889,14 @@ AS 'MODULE_PATHNAME' LANGUAGE C STRICT; - The argument is a pointer to a GISTENTRY struct. On - entry, its key field contains a non-NULL leaf datum in - compressed form. The return value is another GISTENTRY - struct, whose key field contains the same datum in its + The argument is a pointer to a GISTENTRY struct. On + entry, its key field contains a non-NULL leaf datum in + compressed form. The return value is another GISTENTRY + struct, whose key field contains the same datum in its original, uncompressed form. If the opclass's compress function does - nothing for leaf entries, the fetch method can return the + nothing for leaf entries, the fetch method can return the argument as-is. Or, if the opclass does not have a compress function, - the fetch method can be omitted as well, since it would + the fetch method can be omitted as well, since it would necessarily be a no-op. @@ -933,7 +933,7 @@ my_fetch(PG_FUNCTION_ARGS) If the compress method is lossy for leaf entries, the operator class cannot support index-only scans, and must not define - a fetch function. + a fetch function. @@ -942,15 +942,15 @@ my_fetch(PG_FUNCTION_ARGS) All the GiST support methods are normally called in short-lived memory - contexts; that is, CurrentMemoryContext will get reset after + contexts; that is, CurrentMemoryContext will get reset after each tuple is processed. It is therefore not very important to worry about pfree'ing everything you palloc. However, in some cases it's useful for a support method to cache data across repeated calls. To do that, allocate - the longer-lived data in fcinfo->flinfo->fn_mcxt, and - keep a pointer to it in fcinfo->flinfo->fn_extra. Such + the longer-lived data in fcinfo->flinfo->fn_mcxt, and + keep a pointer to it in fcinfo->flinfo->fn_extra. Such data will survive for the life of the index operation (e.g., a single GiST index scan, index build, or index tuple insertion). Be careful to pfree - the previous value when replacing a fn_extra value, or the leak + the previous value when replacing a fn_extra value, or the leak will accumulate for the duration of the operation. @@ -974,7 +974,7 @@ my_fetch(PG_FUNCTION_ARGS) - However, buffering index build needs to call the penalty + However, buffering index build needs to call the penalty function more often, which consumes some extra CPU resources. Also, the buffers used in the buffering build need temporary disk space, up to the size of the resulting index. Buffering can also influence the quality @@ -1002,57 +1002,57 @@ my_fetch(PG_FUNCTION_ARGS) The PostgreSQL source distribution includes several examples of index methods implemented using GiST. The core system currently provides text search - support (indexing for tsvector and tsquery) as well as + support (indexing for tsvector and tsquery) as well as R-Tree equivalent functionality for some of the built-in geometric data types - (see src/backend/access/gist/gistproc.c). The following - contrib modules also contain GiST + (see src/backend/access/gist/gistproc.c). 
The following + contrib modules also contain GiST operator classes: - btree_gist + btree_gist B-tree equivalent functionality for several data types - cube + cube Indexing for multidimensional cubes - hstore + hstore Module for storing (key, value) pairs - intarray + intarray RD-Tree for one-dimensional array of int4 values - ltree + ltree Indexing for tree-like structures - pg_trgm + pg_trgm Text similarity using trigram matching - seg + seg Indexing for float ranges diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml index 6c54fbd40d..086d6abb30 100644 --- a/doc/src/sgml/high-availability.sgml +++ b/doc/src/sgml/high-availability.sgml @@ -3,12 +3,12 @@ High Availability, Load Balancing, and Replication - high availability - failover - replication - load balancing - clustering - data partitioning + high availability + failover + replication + load balancing + clustering + data partitioning Database servers can work together to allow a second server to @@ -38,12 +38,12 @@ Some solutions deal with synchronization by allowing only one server to modify the data. Servers that can modify data are - called read/write, master or primary servers. - Servers that track changes in the master are called standby - or secondary servers. A standby server that cannot be connected + called read/write, master or primary servers. + Servers that track changes in the master are called standby + or secondary servers. A standby server that cannot be connected to until it is promoted to a master server is called a warm - standby server, and one that can accept connections and serves read-only - queries is called a hot standby server. + standby server, and one that can accept connections and serves read-only + queries is called a hot standby server. @@ -99,7 +99,7 @@ Shared hardware functionality is common in network storage devices. Using a network file system is also possible, though care must be - taken that the file system has full POSIX behavior (see POSIX behavior (see ). One significant limitation of this method is that if the shared disk array fails or becomes corrupt, the primary and standby servers are both nonfunctional. Another issue is @@ -121,7 +121,7 @@ the mirroring must be done in a way that ensures the standby server has a consistent copy of the file system — specifically, writes to the standby must be done in the same order as those on the master. - DRBD is a popular file system replication solution + DRBD is a popular file system replication solution for Linux. @@ -143,7 +143,7 @@ protocol to make nodes agree on a serializable transactional order. Warm and hot standby servers can be kept current by reading a - stream of write-ahead log (WAL) + stream of write-ahead log (WAL) records. If the main server fails, the standby contains almost all of the data of the main server, and can be quickly made the new master database server. This can be synchronous or @@ -189,7 +189,7 @@ protocol to make nodes agree on a serializable transactional order. - Slony-I is an example of this type of replication, with per-table + Slony-I is an example of this type of replication, with per-table granularity, and support for multiple standby servers. Because it updates the standby server asynchronously (in batches), there is possible data loss during fail over. @@ -212,7 +212,7 @@ protocol to make nodes agree on a serializable transactional order. 
If queries are simply broadcast unmodified, functions like - random(), CURRENT_TIMESTAMP, and + random(), CURRENT_TIMESTAMP, and sequences can have different values on different servers. This is because each server operates independently, and because SQL queries are broadcast (and not actual modified rows). If @@ -226,7 +226,7 @@ protocol to make nodes agree on a serializable transactional order. transactions either commit or abort on all servers, perhaps using two-phase commit ( and ). - Pgpool-II and Continuent Tungsten + Pgpool-II and Continuent Tungsten are examples of this type of replication. @@ -266,12 +266,12 @@ protocol to make nodes agree on a serializable transactional order. there is no need to partition workloads between master and standby servers, and because the data changes are sent from one server to another, there is no problem with non-deterministic - functions like random(). + functions like random(). - PostgreSQL does not offer this type of replication, - though PostgreSQL two-phase commit (PostgreSQL does not offer this type of replication, + though PostgreSQL two-phase commit ( and ) can be used to implement this in application code or middleware. @@ -284,8 +284,8 @@ protocol to make nodes agree on a serializable transactional order. - Because PostgreSQL is open source and easily - extended, a number of companies have taken PostgreSQL + Because PostgreSQL is open source and easily + extended, a number of companies have taken PostgreSQL and created commercial closed-source solutions with unique failover, replication, and load balancing capabilities. @@ -475,9 +475,9 @@ protocol to make nodes agree on a serializable transactional order. concurrently on a single query. It is usually accomplished by splitting the data among servers and having each server execute its part of the query and return results to a central server where they - are combined and returned to the user. Pgpool-II + are combined and returned to the user. Pgpool-II has this capability. Also, this can be implemented using the - PL/Proxy tool set. + PL/Proxy tool set. @@ -494,10 +494,10 @@ protocol to make nodes agree on a serializable transactional order. Continuous archiving can be used to create a high - availability (HA) cluster configuration with one or more - standby servers ready to take over operations if the + availability (HA) cluster configuration with one or more + standby servers ready to take over operations if the primary server fails. This capability is widely referred to as - warm standby or log shipping. + warm standby or log shipping. @@ -513,7 +513,7 @@ protocol to make nodes agree on a serializable transactional order. Directly moving WAL records from one database server to another - is typically described as log shipping. PostgreSQL + is typically described as log shipping. PostgreSQL implements file-based log shipping by transferring WAL records one file (WAL segment) at a time. WAL files (16MB) can be shipped easily and cheaply over any distance, whether it be to an @@ -597,7 +597,7 @@ protocol to make nodes agree on a serializable transactional order. In general, log shipping between servers running different major - PostgreSQL release + PostgreSQL release levels is not possible. It is the policy of the PostgreSQL Global Development Group not to make changes to disk formats during minor release upgrades, so it is likely that running different minor release levels @@ -621,32 +621,32 @@ protocol to make nodes agree on a serializable transactional order. 
(see ) or directly from the master over a TCP connection (streaming replication). The standby server will also attempt to restore any WAL found in the standby cluster's - pg_wal directory. That typically happens after a server + pg_wal directory. That typically happens after a server restart, when the standby replays again WAL that was streamed from the master before the restart, but you can also manually copy files to - pg_wal at any time to have them replayed. + pg_wal at any time to have them replayed. At startup, the standby begins by restoring all WAL available in the - archive location, calling restore_command. Once it - reaches the end of WAL available there and restore_command - fails, it tries to restore any WAL available in the pg_wal directory. + archive location, calling restore_command. Once it + reaches the end of WAL available there and restore_command + fails, it tries to restore any WAL available in the pg_wal directory. If that fails, and streaming replication has been configured, the standby tries to connect to the primary server and start streaming WAL - from the last valid record found in archive or pg_wal. If that fails + from the last valid record found in archive or pg_wal. If that fails or streaming replication is not configured, or if the connection is later disconnected, the standby goes back to step 1 and tries to restore the file from the archive again. This loop of retries from the - archive, pg_wal, and via streaming replication goes on until the server + archive, pg_wal, and via streaming replication goes on until the server is stopped or failover is triggered by a trigger file. Standby mode is exited and the server switches to normal operation - when pg_ctl promote is run or a trigger file is found - (trigger_file). Before failover, - any WAL immediately available in the archive or in pg_wal will be + when pg_ctl promote is run or a trigger file is found + (trigger_file). Before failover, + any WAL immediately available in the archive or in pg_wal will be restored, but no attempt is made to connect to the master. @@ -667,8 +667,8 @@ protocol to make nodes agree on a serializable transactional order. If you want to use streaming replication, set up authentication on the primary server to allow replication connections from the standby server(s); that is, create a role and provide a suitable entry or - entries in pg_hba.conf with the database field set to - replication. Also ensure max_wal_senders is set + entries in pg_hba.conf with the database field set to + replication. Also ensure max_wal_senders is set to a sufficiently large value in the configuration file of the primary server. If replication slots will be used, ensure that max_replication_slots is set sufficiently @@ -687,19 +687,19 @@ protocol to make nodes agree on a serializable transactional order. To set up the standby server, restore the base backup taken from primary server (see ). Create a recovery - command file recovery.conf in the standby's cluster data - directory, and turn on standby_mode. Set - restore_command to a simple command to copy files from + command file recovery.conf in the standby's cluster data + directory, and turn on standby_mode. Set + restore_command to a simple command to copy files from the WAL archive. 
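 For example, when the archive is reachable as a directory on the
 standby, the same style of command shown in the continuous-archiving
 documentation suffices (the path is a placeholder):

restore_command = 'cp /mnt/server/archivedir/%f %p'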
If you plan to have multiple standby servers for high - availability purposes, set recovery_target_timeline to - latest, to make the standby server follow the timeline change + availability purposes, set recovery_target_timeline to + latest, to make the standby server follow the timeline change that occurs at failover to another standby. Do not use pg_standby or similar tools with the built-in standby mode - described here. restore_command should return immediately + described here. restore_command should return immediately if the file does not exist; the server will retry the command again if necessary. See for using tools like pg_standby. @@ -708,11 +708,11 @@ protocol to make nodes agree on a serializable transactional order. If you want to use streaming replication, fill in - primary_conninfo with a libpq connection string, including + primary_conninfo with a libpq connection string, including the host name (or IP address) and any additional details needed to connect to the primary server. If the primary needs a password for authentication, the password needs to be specified in - primary_conninfo as well. + primary_conninfo as well. @@ -726,8 +726,8 @@ protocol to make nodes agree on a serializable transactional order. If you're using a WAL archive, its size can be minimized using the parameter to remove files that are no longer required by the standby server. - The pg_archivecleanup utility is designed specifically to - be used with archive_cleanup_command in typical single-standby + The pg_archivecleanup utility is designed specifically to + be used with archive_cleanup_command in typical single-standby configurations, see . Note however, that if you're using the archive for backup purposes, you need to retain files needed to recover from at least the latest base @@ -735,7 +735,7 @@ protocol to make nodes agree on a serializable transactional order. - A simple example of a recovery.conf is: + A simple example of a recovery.conf is: standby_mode = 'on' primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' @@ -746,7 +746,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' You can have any number of standby servers, but if you use streaming - replication, make sure you set max_wal_senders high enough in + replication, make sure you set max_wal_senders high enough in the primary to allow them to be connected simultaneously. @@ -773,7 +773,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' changes becoming visible in the standby. This delay is however much smaller than with file-based log shipping, typically under one second assuming the standby is powerful enough to keep up with the load. With - streaming replication, archive_timeout is not required to + streaming replication, archive_timeout is not required to reduce the data loss window. @@ -782,7 +782,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' archiving, the server might recycle old WAL segments before the standby has received them. If this occurs, the standby will need to be reinitialized from a new base backup. You can avoid this by setting - wal_keep_segments to a value large enough to ensure that + wal_keep_segments to a value large enough to ensure that WAL segments are not recycled too early, or by configuring a replication slot for the standby. 
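 As a sketch, the relevant line in the primary's postgresql.conf could
 be the following; the figure is illustrative, and at the default 16MB
 segment size it retains about 2GB of WAL:

wal_keep_segments = 128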
If you set up a WAL archive that's accessible from the standby, these solutions are not required, since the standby can @@ -793,11 +793,11 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' To use streaming replication, set up a file-based log-shipping standby server as described in . The step that turns a file-based log-shipping standby into streaming replication - standby is setting primary_conninfo setting in the - recovery.conf file to point to the primary server. Set + standby is setting primary_conninfo setting in the + recovery.conf file to point to the primary server. Set and authentication options - (see pg_hba.conf) on the primary so that the standby server - can connect to the replication pseudo-database on the primary + (see pg_hba.conf) on the primary so that the standby server + can connect to the replication pseudo-database on the primary server (see ). @@ -815,7 +815,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' - When the standby is started and primary_conninfo is set + When the standby is started and primary_conninfo is set correctly, the standby will connect to the primary after replaying all WAL files available in the archive. If the connection is established successfully, you will see a walreceiver process in the standby, and @@ -829,20 +829,20 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' so that only trusted users can read the WAL stream, because it is easy to extract privileged information from it. Standby servers must authenticate to the primary as a superuser or an account that has the - REPLICATION privilege. It is recommended to create a - dedicated user account with REPLICATION and LOGIN - privileges for replication. While REPLICATION privilege gives + REPLICATION privilege. It is recommended to create a + dedicated user account with REPLICATION and LOGIN + privileges for replication. While REPLICATION privilege gives very high permissions, it does not allow the user to modify any data on - the primary system, which the SUPERUSER privilege does. + the primary system, which the SUPERUSER privilege does. Client authentication for replication is controlled by a - pg_hba.conf record specifying replication in the - database field. For example, if the standby is running on - host IP 192.168.1.100 and the account name for replication - is foo, the administrator can add the following line to the - pg_hba.conf file on the primary: + pg_hba.conf record specifying replication in the + database field. For example, if the standby is running on + host IP 192.168.1.100 and the account name for replication + is foo, the administrator can add the following line to the + pg_hba.conf file on the primary: # Allow the user "foo" from host 192.168.1.100 to connect to the primary @@ -854,14 +854,14 @@ host replication foo 192.168.1.100/32 md5 The host name and port number of the primary, connection user name, - and password are specified in the recovery.conf file. - The password can also be set in the ~/.pgpass file on the - standby (specify replication in the database + and password are specified in the recovery.conf file. + The password can also be set in the ~/.pgpass file on the + standby (specify replication in the database field). 
- For example, if the primary is running on host IP 192.168.1.50, + For example, if the primary is running on host IP 192.168.1.50, port 5432, the account name for replication is - foo, and the password is foopass, the administrator - can add the following line to the recovery.conf file on the + foo, and the password is foopass, the administrator + can add the following line to the recovery.conf file on the standby: @@ -880,22 +880,22 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' standby. You can calculate this lag by comparing the current WAL write location on the primary with the last WAL location received by the standby. These locations can be retrieved using - pg_current_wal_lsn on the primary and - pg_last_wal_receive_lsn on the standby, + pg_current_wal_lsn on the primary and + pg_last_wal_receive_lsn on the standby, respectively (see and for details). The last WAL receive location in the standby is also displayed in the process status of the WAL receiver process, displayed using the - ps command (see for details). + ps command (see for details). You can retrieve a list of WAL sender processes via the - pg_stat_replication view. Large differences between - pg_current_wal_lsn and the view's sent_lsn field + pg_stat_replication view. Large differences between + pg_current_wal_lsn and the view's sent_lsn field might indicate that the master server is under heavy load, while - differences between sent_lsn and - pg_last_wal_receive_lsn on the standby might indicate + differences between sent_lsn and + pg_last_wal_receive_lsn on the standby might indicate network delay, or that the standby is under heavy load. @@ -911,7 +911,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' Replication slots provide an automated way to ensure that the master does not remove WAL segments until they have been received by all standbys, and that the master does not remove rows which could cause a - recovery conflict even when the + recovery conflict even when the standby is disconnected. @@ -922,7 +922,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' However, these methods often result in retaining more WAL segments than required, whereas replication slots retain only the number of segments known to be needed. An advantage of these methods is that they bound - the space requirement for pg_wal; there is currently no way + the space requirement for pg_wal; there is currently no way to do this using replication slots. @@ -966,8 +966,8 @@ postgres=# SELECT * FROM pg_replication_slots; node_a_slot | physical | | | f | | | (1 row) - To configure the standby to use this slot, primary_slot_name - should be configured in the standby's recovery.conf. + To configure the standby to use this slot, primary_slot_name + should be configured in the standby's recovery.conf. Here is a simple example: standby_mode = 'on' @@ -1022,7 +1022,7 @@ primary_slot_name = 'node_a_slot' If an upstream standby server is promoted to become new master, downstream servers will continue to stream from the new master if - recovery_target_timeline is set to 'latest'. + recovery_target_timeline is set to 'latest'. @@ -1031,7 +1031,7 @@ primary_slot_name = 'node_a_slot' and , and configure host-based authentication). - You will also need to set primary_conninfo in the downstream + You will also need to set primary_conninfo in the downstream standby to point to the cascading standby. 
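 A cascading standby's recovery.conf therefore looks just like any other
 standby's, except that primary_conninfo names the upstream standby
 rather than the master; for instance (the host address is invented for
 the example):

standby_mode = 'on'
primary_conninfo = 'host=192.168.1.51 port=5432 user=foo password=foopass'
recovery_target_timeline = 'latest'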
@@ -1044,7 +1044,7 @@ primary_slot_name = 'node_a_slot' - PostgreSQL streaming replication is asynchronous by + PostgreSQL streaming replication is asynchronous by default. If the primary server crashes then some transactions that were committed may not have been replicated to the standby server, causing data loss. The amount @@ -1058,8 +1058,8 @@ primary_slot_name = 'node_a_slot' standby servers. This extends that standard level of durability offered by a transaction commit. This level of protection is referred to as 2-safe replication in computer science theory, and group-1-safe - (group-safe and 1-safe) when synchronous_commit is set to - remote_write. + (group-safe and 1-safe) when synchronous_commit is set to + remote_write. @@ -1104,14 +1104,14 @@ primary_slot_name = 'node_a_slot' Once streaming replication has been configured, configuring synchronous replication requires only one additional configuration step: must be set to - a non-empty value. synchronous_commit must also be set to - on, but since this is the default value, typically no change is + a non-empty value. synchronous_commit must also be set to + on, but since this is the default value, typically no change is required. (See and .) This configuration will cause each commit to wait for confirmation that the standby has written the commit record to durable storage. - synchronous_commit can be set by individual + synchronous_commit can be set by individual users, so it can be configured in the configuration file, for particular users or databases, or dynamically by applications, in order to control the durability guarantee on a per-transaction basis. @@ -1121,12 +1121,12 @@ primary_slot_name = 'node_a_slot' After a commit record has been written to disk on the primary, the WAL record is then sent to the standby. The standby sends reply messages each time a new batch of WAL data is written to disk, unless - wal_receiver_status_interval is set to zero on the standby. - In the case that synchronous_commit is set to - remote_apply, the standby sends reply messages when the commit + wal_receiver_status_interval is set to zero on the standby. + In the case that synchronous_commit is set to + remote_apply, the standby sends reply messages when the commit record is replayed, making the transaction visible. If the standby is chosen as a synchronous standby, according to the setting - of synchronous_standby_names on the primary, the reply + of synchronous_standby_names on the primary, the reply messages from that standby will be considered along with those from other synchronous standbys to decide when to release transactions waiting for confirmation that the commit record has been received. These parameters @@ -1138,13 +1138,13 @@ primary_slot_name = 'node_a_slot' - Setting synchronous_commit to remote_write will + Setting synchronous_commit to remote_write will cause each commit to wait for confirmation that the standby has received the commit record and written it out to its own operating system, but not for the data to be flushed to disk on the standby. This - setting provides a weaker guarantee of durability than on + setting provides a weaker guarantee of durability than on does: the standby could lose the data in the event of an operating system - crash, though not a PostgreSQL crash. + crash, though not a PostgreSQL crash. However, it's a useful setting in practice because it can decrease the response time for the transaction. 
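 For instance, a transaction that prefers response time over the
 stronger guarantee can opt down by itself; this is a sketch, and
 audit_log is a hypothetical table:

BEGIN;
SET LOCAL synchronous_commit = remote_write;  -- applies to this transaction only
INSERT INTO audit_log (note) VALUES ('cache refresh');
COMMIT;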
Data loss could only occur if both the primary and the standby crash and @@ -1152,7 +1152,7 @@ primary_slot_name = 'node_a_slot' - Setting synchronous_commit to remote_apply will + Setting synchronous_commit to remote_apply will cause each commit to wait until the current synchronous standbys report that they have replayed the transaction, making it visible to user queries. In simple cases, this allows for load balancing with causal @@ -1176,12 +1176,12 @@ primary_slot_name = 'node_a_slot' transactions will wait until all the standby servers which are considered as synchronous confirm receipt of their data. The number of synchronous standbys that transactions must wait for replies from is specified in - synchronous_standby_names. This parameter also specifies - a list of standby names and the method (FIRST and - ANY) to choose synchronous standbys from the listed ones. + synchronous_standby_names. This parameter also specifies + a list of standby names and the method (FIRST and + ANY) to choose synchronous standbys from the listed ones. - The method FIRST specifies a priority-based synchronous + The method FIRST specifies a priority-based synchronous replication and makes transaction commits wait until their WAL records are replicated to the requested number of synchronous standbys chosen based on their priorities. The standbys whose names appear earlier in the list are @@ -1192,36 +1192,36 @@ primary_slot_name = 'node_a_slot' next-highest-priority standby. - An example of synchronous_standby_names for + An example of synchronous_standby_names for a priority-based multiple synchronous standbys is: synchronous_standby_names = 'FIRST 2 (s1, s2, s3)' - In this example, if four standby servers s1, s2, - s3 and s4 are running, the two standbys - s1 and s2 will be chosen as synchronous standbys + In this example, if four standby servers s1, s2, + s3 and s4 are running, the two standbys + s1 and s2 will be chosen as synchronous standbys because their names appear early in the list of standby names. - s3 is a potential synchronous standby and will take over - the role of synchronous standby when either of s1 or - s2 fails. s4 is an asynchronous standby since + s3 is a potential synchronous standby and will take over + the role of synchronous standby when either of s1 or + s2 fails. s4 is an asynchronous standby since its name is not in the list. - The method ANY specifies a quorum-based synchronous + The method ANY specifies a quorum-based synchronous replication and makes transaction commits wait until their WAL records - are replicated to at least the requested number of + are replicated to at least the requested number of synchronous standbys in the list. - An example of synchronous_standby_names for + An example of synchronous_standby_names for a quorum-based multiple synchronous standbys is: synchronous_standby_names = 'ANY 2 (s1, s2, s3)' - In this example, if four standby servers s1, s2, - s3 and s4 are running, transaction commits will - wait for replies from at least any two standbys of s1, - s2 and s3. s4 is an asynchronous + In this example, if four standby servers s1, s2, + s3 and s4 are running, transaction commits will + wait for replies from at least any two standbys of s1, + s2 and s3. s4 is an asynchronous standby since its name is not in the list. @@ -1243,7 +1243,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' - PostgreSQL allows the application developer + PostgreSQL allows the application developer to specify the durability level required via replication. 
This can be specified for the system overall, though it can also be specified for specific users or connections, or even individual transactions. @@ -1275,10 +1275,10 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' Planning for High Availability - synchronous_standby_names specifies the number and + synchronous_standby_names specifies the number and names of synchronous standbys that transaction commits made when - synchronous_commit is set to on, - remote_apply or remote_write will wait for + synchronous_commit is set to on, + remote_apply or remote_write will wait for responses from. Such transaction commits may never be completed if any one of synchronous standbys should crash. @@ -1286,7 +1286,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' The best solution for high availability is to ensure you keep as many synchronous standbys as requested. This can be achieved by naming multiple - potential synchronous standbys using synchronous_standby_names. + potential synchronous standbys using synchronous_standby_names. @@ -1305,14 +1305,14 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' When a standby first attaches to the primary, it will not yet be properly - synchronized. This is described as catchup mode. Once + synchronized. This is described as catchup mode. Once the lag between standby and primary reaches zero for the first time - we move to real-time streaming state. + we move to real-time streaming state. The catch-up duration may be long immediately after the standby has been created. If the standby is shut down, then the catch-up period will increase according to the length of time the standby has been down. The standby is only able to become a synchronous standby - once it has reached streaming state. + once it has reached streaming state. This state can be viewed using the pg_stat_replication view. @@ -1334,7 +1334,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If you really cannot keep as many synchronous standbys as requested then you should decrease the number of synchronous standbys that transaction commits must wait for responses from - in synchronous_standby_names (or disable it) and + in synchronous_standby_names (or disable it) and reload the configuration file on the primary server. @@ -1347,7 +1347,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If you need to re-create a standby server while transactions are waiting, make sure that the commands pg_start_backup() and pg_stop_backup() are run in a session with - synchronous_commit = off, otherwise those + synchronous_commit = off, otherwise those requests will wait forever for the standby to appear. @@ -1381,7 +1381,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' - If archive_mode is set to on, the + If archive_mode is set to on, the archiver is not enabled during recovery or standby mode. If the standby server is promoted, it will start archiving after the promotion, but will not archive any WAL it did not generate itself. To get a complete @@ -1415,7 +1415,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If the primary server fails and the standby server becomes the new primary, and then the old primary restarts, you must have a mechanism for informing the old primary that it is no longer the primary. 
This is - sometimes known as STONITH (Shoot The Other Node In The Head), which is + sometimes known as STONITH (Shoot The Other Node In The Head), which is necessary to avoid situations where both systems think they are the primary, which will lead to confusion and ultimately data loss. @@ -1466,10 +1466,10 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' To trigger failover of a log-shipping standby server, - run pg_ctl promote or create a trigger - file with the file name and path specified by the trigger_file - setting in recovery.conf. If you're planning to use - pg_ctl promote to fail over, trigger_file is + run pg_ctl promote or create a trigger + file with the file name and path specified by the trigger_file + setting in recovery.conf. If you're planning to use + pg_ctl promote to fail over, trigger_file is not required. If you're setting up the reporting servers that are only used to offload read-only queries from the primary, not for high availability purposes, you don't need to promote it. @@ -1481,9 +1481,9 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' An alternative to the built-in standby mode described in the previous - sections is to use a restore_command that polls the archive location. + sections is to use a restore_command that polls the archive location. This was the only option available in versions 8.4 and below. In this - setup, set standby_mode off, because you are implementing + setup, set standby_mode off, because you are implementing the polling required for standby operation yourself. See the module for a reference implementation of this. @@ -1494,7 +1494,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' time, so if you use the standby server for queries (see Hot Standby), there is a delay between an action in the master and when the action becomes visible in the standby, corresponding the time it takes - to fill up the WAL file. archive_timeout can be used to make that delay + to fill up the WAL file. archive_timeout can be used to make that delay shorter. Also note that you can't combine streaming replication with this method. @@ -1511,25 +1511,25 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' The magic that makes the two loosely coupled servers work together is - simply a restore_command used on the standby that, + simply a restore_command used on the standby that, when asked for the next WAL file, waits for it to become available from - the primary. The restore_command is specified in the - recovery.conf file on the standby server. Normal recovery + the primary. The restore_command is specified in the + recovery.conf file on the standby server. Normal recovery processing would request a file from the WAL archive, reporting failure if the file was unavailable. For standby processing it is normal for the next WAL file to be unavailable, so the standby must wait for - it to appear. For files ending in .backup or - .history there is no need to wait, and a non-zero return - code must be returned. A waiting restore_command can be + it to appear. For files ending in .backup or + .history there is no need to wait, and a non-zero return + code must be returned. A waiting restore_command can be written as a custom script that loops after polling for the existence of the next WAL file. There must also be some way to trigger failover, which - should interrupt the restore_command, break the loop and + should interrupt the restore_command, break the loop and return a file-not-found error to the standby server. 
This ends recovery and the standby will then come up as a normal server. - Pseudocode for a suitable restore_command is: + Pseudocode for a suitable restore_command is: triggered = false; while (!NextWALFileReady() && !triggered) @@ -1544,7 +1544,7 @@ if (!triggered) - A working example of a waiting restore_command is provided + A working example of a waiting restore_command is provided in the module. It should be used as a reference on how to correctly implement the logic described above. It can also be extended as needed to support specific @@ -1553,14 +1553,14 @@ if (!triggered) The method for triggering failover is an important part of planning - and design. One potential option is the restore_command + and design. One potential option is the restore_command command. It is executed once for each WAL file, but the process - running the restore_command is created and dies for + running the restore_command is created and dies for each file, so there is no daemon or server process, and signals or a signal handler cannot be used. Therefore, the - restore_command is not suitable to trigger failover. + restore_command is not suitable to trigger failover. It is possible to use a simple timeout facility, especially if - used in conjunction with a known archive_timeout + used in conjunction with a known archive_timeout setting on the primary. However, this is somewhat error prone since a network problem or busy primary server might be sufficient to initiate failover. A notification mechanism such as the explicit @@ -1579,7 +1579,7 @@ if (!triggered) Set up primary and standby systems as nearly identical as possible, including two identical copies of - PostgreSQL at the same release level. + PostgreSQL at the same release level. @@ -1602,8 +1602,8 @@ if (!triggered) Begin recovery on the standby server from the local WAL - archive, using a recovery.conf that specifies a - restore_command that waits as described + archive, using a recovery.conf that specifies a + restore_command that waits as described previously (see ). @@ -1637,7 +1637,7 @@ if (!triggered) - An external program can call the pg_walfile_name_offset() + An external program can call the pg_walfile_name_offset() function (see ) to find out the file name and the exact byte offset within it of the current end of WAL. It can then access the WAL file directly @@ -1646,17 +1646,17 @@ if (!triggered) loss is the polling cycle time of the copying program, which can be very small, and there is no wasted bandwidth from forcing partially-used segment files to be archived. Note that the standby servers' - restore_command scripts can only deal with whole WAL files, + restore_command scripts can only deal with whole WAL files, so the incrementally copied data is not ordinarily made available to the standby servers. It is of use only when the primary dies — then the last partial WAL file is fed to the standby before allowing it to come up. The correct implementation of this process requires - cooperation of the restore_command script with the data + cooperation of the restore_command script with the data copying program. - Starting with PostgreSQL version 9.0, you can use + Starting with PostgreSQL version 9.0, you can use streaming replication (see ) to achieve the same benefits with less effort. 
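    [Editorial note: to make the pg_walfile_name_offset() usage above
    concrete, this is a minimal sketch of the polling query such a copying
    program might issue on the primary; acting on the result is left to the
    external program:]

-- Name of the current WAL segment file, and the byte offset of the
-- end of WAL within it, at the moment the query runs:
SELECT file_name, file_offset
  FROM pg_walfile_name_offset(pg_current_wal_lsn());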
@@ -1716,17 +1716,17 @@ if (!triggered) - Query access - SELECT, COPY TO + Query access - SELECT, COPY TO - Cursor commands - DECLARE, FETCH, CLOSE + Cursor commands - DECLARE, FETCH, CLOSE - Parameters - SHOW, SET, RESET + Parameters - SHOW, SET, RESET @@ -1735,17 +1735,17 @@ if (!triggered) - BEGIN, END, ABORT, START TRANSACTION + BEGIN, END, ABORT, START TRANSACTION - SAVEPOINT, RELEASE, ROLLBACK TO SAVEPOINT + SAVEPOINT, RELEASE, ROLLBACK TO SAVEPOINT - EXCEPTION blocks and other internal subtransactions + EXCEPTION blocks and other internal subtransactions @@ -1753,19 +1753,19 @@ if (!triggered) - LOCK TABLE, though only when explicitly in one of these modes: - ACCESS SHARE, ROW SHARE or ROW EXCLUSIVE. + LOCK TABLE, though only when explicitly in one of these modes: + ACCESS SHARE, ROW SHARE or ROW EXCLUSIVE. - Plans and resources - PREPARE, EXECUTE, - DEALLOCATE, DISCARD + Plans and resources - PREPARE, EXECUTE, + DEALLOCATE, DISCARD - Plugins and extensions - LOAD + Plugins and extensions - LOAD @@ -1779,9 +1779,9 @@ if (!triggered) - Data Manipulation Language (DML) - INSERT, - UPDATE, DELETE, COPY FROM, - TRUNCATE. + Data Manipulation Language (DML) - INSERT, + UPDATE, DELETE, COPY FROM, + TRUNCATE. Note that there are no allowed actions that result in a trigger being executed during recovery. This restriction applies even to temporary tables, because table rows cannot be read or written without @@ -1791,31 +1791,31 @@ if (!triggered) - Data Definition Language (DDL) - CREATE, - DROP, ALTER, COMMENT. + Data Definition Language (DDL) - CREATE, + DROP, ALTER, COMMENT. This restriction applies even to temporary tables, because carrying out these operations would require updating the system catalog tables. - SELECT ... FOR SHARE | UPDATE, because row locks cannot be + SELECT ... FOR SHARE | UPDATE, because row locks cannot be taken without updating the underlying data files. - Rules on SELECT statements that generate DML commands. + Rules on SELECT statements that generate DML commands. - LOCK that explicitly requests a mode higher than ROW EXCLUSIVE MODE. + LOCK that explicitly requests a mode higher than ROW EXCLUSIVE MODE. - LOCK in short default form, since it requests ACCESS EXCLUSIVE MODE. + LOCK in short default form, since it requests ACCESS EXCLUSIVE MODE. @@ -1824,19 +1824,19 @@ if (!triggered) - BEGIN READ WRITE, - START TRANSACTION READ WRITE + BEGIN READ WRITE, + START TRANSACTION READ WRITE - SET TRANSACTION READ WRITE, - SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE + SET TRANSACTION READ WRITE, + SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE - SET transaction_read_only = off + SET transaction_read_only = off @@ -1844,35 +1844,35 @@ if (!triggered) - Two-phase commit commands - PREPARE TRANSACTION, - COMMIT PREPARED, ROLLBACK PREPARED + Two-phase commit commands - PREPARE TRANSACTION, + COMMIT PREPARED, ROLLBACK PREPARED because even read-only transactions need to write WAL in the prepare phase (the first phase of two phase commit). - Sequence updates - nextval(), setval() + Sequence updates - nextval(), setval() - LISTEN, UNLISTEN, NOTIFY + LISTEN, UNLISTEN, NOTIFY - In normal operation, read-only transactions are allowed to - use LISTEN, UNLISTEN, and - NOTIFY, so Hot Standby sessions operate under slightly tighter + In normal operation, read-only transactions are allowed to + use LISTEN, UNLISTEN, and + NOTIFY, so Hot Standby sessions operate under slightly tighter restrictions than ordinary read-only sessions. 
It is possible that some of these restrictions might be loosened in a future release. - During hot standby, the parameter transaction_read_only is always + During hot standby, the parameter transaction_read_only is always true and may not be changed. But as long as no attempt is made to modify the database, connections during hot standby will act much like any other database connection. If failover or switchover occurs, the database will @@ -1884,7 +1884,7 @@ if (!triggered) Users will be able to tell whether their session is read-only by - issuing SHOW transaction_read_only. In addition, a set of + issuing SHOW transaction_read_only. In addition, a set of functions () allow users to access information about the standby server. These allow you to write programs that are aware of the current state of the database. These @@ -1907,7 +1907,7 @@ if (!triggered) There are also additional types of conflict that can occur with Hot Standby. - These conflicts are hard conflicts in the sense that queries + These conflicts are hard conflicts in the sense that queries might need to be canceled and, in some cases, sessions disconnected to resolve them. The user is provided with several ways to handle these conflicts. Conflict cases include: @@ -1916,7 +1916,7 @@ if (!triggered) Access Exclusive locks taken on the primary server, including both - explicit LOCK commands and various DDL + explicit LOCK commands and various DDL actions, conflict with table accesses in standby queries. @@ -1935,7 +1935,7 @@ if (!triggered) Application of a vacuum cleanup record from WAL conflicts with - standby transactions whose snapshots can still see any of + standby transactions whose snapshots can still see any of the rows to be removed. @@ -1962,18 +1962,18 @@ if (!triggered) An example of the problem situation is an administrator on the primary - server running DROP TABLE on a table that is currently being + server running DROP TABLE on a table that is currently being queried on the standby server. Clearly the standby query cannot continue - if the DROP TABLE is applied on the standby. If this situation - occurred on the primary, the DROP TABLE would wait until the - other query had finished. But when DROP TABLE is run on the + if the DROP TABLE is applied on the standby. If this situation + occurred on the primary, the DROP TABLE would wait until the + other query had finished. But when DROP TABLE is run on the primary, the primary doesn't have information about what queries are running on the standby, so it will not wait for any such standby queries. The WAL change records come through to the standby while the standby query is still running, causing a conflict. The standby server must either delay application of the WAL records (and everything after them, too) or else cancel the conflicting query so that the DROP - TABLE can be applied. + TABLE can be applied. @@ -1986,7 +1986,7 @@ if (!triggered) once it has taken longer than the relevant delay setting to apply any newly-received WAL data. There are two parameters so that different delay values can be specified for the case of reading WAL data from an archive - (i.e., initial recovery from a base backup or catching up a + (i.e., initial recovery from a base backup or catching up a standby server that has fallen far behind) versus reading WAL data via streaming replication. 
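    [Editorial note: as a rough sketch of how the conflicts described here
    can be observed in practice, the per-database conflict counters, kept in
    the pg_stat_database_conflicts view discussed further below, can be
    queried on the standby:]

-- Cumulative counts of queries canceled on this standby, by cause:
SELECT datname, confl_tablespace, confl_lock, confl_snapshot,
       confl_bufferpin, confl_deadlock
  FROM pg_stat_database_conflicts;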
@@ -2003,10 +2003,10 @@ if (!triggered) - Once the delay specified by max_standby_archive_delay or - max_standby_streaming_delay has been exceeded, conflicting + Once the delay specified by max_standby_archive_delay or + max_standby_streaming_delay has been exceeded, conflicting queries will be canceled. This usually results just in a cancellation - error, although in the case of replaying a DROP DATABASE + error, although in the case of replaying a DROP DATABASE the entire conflicting session will be terminated. Also, if the conflict is over a lock held by an idle transaction, the conflicting session is terminated (this behavior might change in the future). @@ -2030,7 +2030,7 @@ if (!triggered) The most common reason for conflict between standby queries and WAL replay - is early cleanup. Normally, PostgreSQL allows + is early cleanup. Normally, PostgreSQL allows cleanup of old row versions when there are no transactions that need to see them to ensure correct visibility of data according to MVCC rules. However, this rule can only be applied for transactions executing on the @@ -2041,7 +2041,7 @@ if (!triggered) Experienced users should note that both row version cleanup and row version freezing will potentially conflict with standby queries. Running a manual - VACUUM FREEZE is likely to cause conflicts even on tables with + VACUUM FREEZE is likely to cause conflicts even on tables with no updated or deleted rows. @@ -2049,15 +2049,15 @@ if (!triggered) Users should be clear that tables that are regularly and heavily updated on the primary server will quickly cause cancellation of longer running queries on the standby. In such cases the setting of a finite value for - max_standby_archive_delay or - max_standby_streaming_delay can be considered similar to - setting statement_timeout. + max_standby_archive_delay or + max_standby_streaming_delay can be considered similar to + setting statement_timeout. Remedial possibilities exist if the number of standby-query cancellations is found to be unacceptable. The first option is to set the parameter - hot_standby_feedback, which prevents VACUUM from + hot_standby_feedback, which prevents VACUUM from removing recently-dead rows and so cleanup conflicts do not occur. If you do this, you should note that this will delay cleanup of dead rows on the primary, @@ -2067,11 +2067,11 @@ if (!triggered) off-loading execution onto the standby. If standby servers connect and disconnect frequently, you might want to make adjustments to handle the period when - hot_standby_feedback feedback is not being provided. - For example, consider increasing max_standby_archive_delay + hot_standby_feedback feedback is not being provided. + For example, consider increasing max_standby_archive_delay so that queries are not rapidly canceled by conflicts in WAL archive files during disconnected periods. You should also consider increasing - max_standby_streaming_delay to avoid rapid cancellations + max_standby_streaming_delay to avoid rapid cancellations by newly-arrived streaming WAL entries after reconnection. @@ -2080,16 +2080,16 @@ if (!triggered) on the primary server, so that dead rows will not be cleaned up as quickly as they normally would be. This will allow more time for queries to execute before they are canceled on the standby, without having to set - a high max_standby_streaming_delay. However it is + a high max_standby_streaming_delay. 
However it is difficult to guarantee any specific execution-time window with this - approach, since vacuum_defer_cleanup_age is measured in + approach, since vacuum_defer_cleanup_age is measured in transactions executed on the primary server. The number of query cancels and the reason for them can be viewed using - the pg_stat_database_conflicts system view on the standby - server. The pg_stat_database system view also contains + the pg_stat_database_conflicts system view on the standby + server. The pg_stat_database system view also contains summary information. @@ -2098,8 +2098,8 @@ if (!triggered) Administrator's Overview - If hot_standby is on in postgresql.conf - (the default value) and there is a recovery.conf + If hot_standby is on in postgresql.conf + (the default value) and there is a recovery.conf file present, the server will run in Hot Standby mode. However, it may take some time for Hot Standby connections to be allowed, because the server will not accept connections until it has completed @@ -2120,8 +2120,8 @@ LOG: database system is ready to accept read only connections Consistency information is recorded once per checkpoint on the primary. It is not possible to enable hot standby when reading WAL - written during a period when wal_level was not set to - replica or logical on the primary. Reaching + written during a period when wal_level was not set to + replica or logical on the primary. Reaching a consistent state can also be delayed in the presence of both of these conditions: @@ -2140,7 +2140,7 @@ LOG: database system is ready to accept read only connections If you are running file-based log shipping ("warm standby"), you might need to wait until the next WAL file arrives, which could be as long as the - archive_timeout setting on the primary. + archive_timeout setting on the primary. @@ -2155,22 +2155,22 @@ LOG: database system is ready to accept read only connections - max_connections + max_connections - max_prepared_transactions + max_prepared_transactions - max_locks_per_transaction + max_locks_per_transaction - max_worker_processes + max_worker_processes @@ -2209,19 +2209,19 @@ LOG: database system is ready to accept read only connections - Data Definition Language (DDL) - e.g. CREATE INDEX + Data Definition Language (DDL) - e.g. CREATE INDEX - Privilege and Ownership - GRANT, REVOKE, - REASSIGN + Privilege and Ownership - GRANT, REVOKE, + REASSIGN - Maintenance commands - ANALYZE, VACUUM, - CLUSTER, REINDEX + Maintenance commands - ANALYZE, VACUUM, + CLUSTER, REINDEX @@ -2241,14 +2241,14 @@ LOG: database system is ready to accept read only connections - pg_cancel_backend() - and pg_terminate_backend() will work on user backends, + pg_cancel_backend() + and pg_terminate_backend() will work on user backends, but not the Startup process, which performs recovery. pg_stat_activity does not show recovering transactions as active. As a result, pg_prepared_xacts is always empty during recovery. If you wish to resolve in-doubt prepared transactions, view - pg_prepared_xacts on the primary and issue commands to + pg_prepared_xacts on the primary and issue commands to resolve transactions there or resolve them after the end of recovery. @@ -2256,17 +2256,17 @@ LOG: database system is ready to accept read only connections pg_locks will show locks held by backends, as normal. pg_locks also shows a virtual transaction managed by the Startup process that owns all - AccessExclusiveLocks held by transactions being replayed by recovery. 
+ AccessExclusiveLocks held by transactions being replayed by recovery. Note that the Startup process does not acquire locks to - make database changes, and thus locks other than AccessExclusiveLocks + make database changes, and thus locks other than AccessExclusiveLocks do not show in pg_locks for the Startup process; they are just presumed to exist. - The Nagios plugin check_pgsql will + The Nagios plugin check_pgsql will work, because the simple information it checks for exists. - The check_postgres monitoring script will also work, + The check_postgres monitoring script will also work, though some reported values could give different or confusing results. For example, last vacuum time will not be maintained, since no vacuum occurs on the standby. Vacuums running on the primary @@ -2275,11 +2275,11 @@ LOG: database system is ready to accept read only connections WAL file control commands will not work during recovery, - e.g. pg_start_backup, pg_switch_wal etc. + e.g. pg_start_backup, pg_switch_wal etc. - Dynamically loadable modules work, including pg_stat_statements. + Dynamically loadable modules work, including pg_stat_statements. @@ -2292,8 +2292,8 @@ LOG: database system is ready to accept read only connections - Trigger-based replication systems such as Slony, - Londiste and Bucardo won't run on the + Trigger-based replication systems such as Slony, + Londiste and Bucardo won't run on the standby at all, though they will run happily on the primary server as long as the changes are not sent to standby servers to be applied. WAL replay is not trigger-based so you cannot relay from the @@ -2302,7 +2302,7 @@ LOG: database system is ready to accept read only connections - New OIDs cannot be assigned, though some UUID generators may still + New OIDs cannot be assigned, though some UUID generators may still work as long as they do not rely on writing new status to the database. @@ -2314,32 +2314,32 @@ LOG: database system is ready to accept read only connections - DROP TABLESPACE can only succeed if the tablespace is empty. + DROP TABLESPACE can only succeed if the tablespace is empty. Some standby users may be actively using the tablespace via their - temp_tablespaces parameter. If there are temporary files in the + temp_tablespaces parameter. If there are temporary files in the tablespace, all active queries are canceled to ensure that temporary files are removed, so the tablespace can be removed and WAL replay can continue. - Running DROP DATABASE or ALTER DATABASE ... SET - TABLESPACE on the primary + Running DROP DATABASE or ALTER DATABASE ... SET + TABLESPACE on the primary will generate a WAL entry that will cause all users connected to that database on the standby to be forcibly disconnected. This action occurs immediately, whatever the setting of - max_standby_streaming_delay. Note that - ALTER DATABASE ... RENAME does not disconnect users, which + max_standby_streaming_delay. Note that + ALTER DATABASE ... RENAME does not disconnect users, which in most cases will go unnoticed, though might in some cases cause a program confusion if it depends in some way upon database name. - In normal (non-recovery) mode, if you issue DROP USER or DROP ROLE + In normal (non-recovery) mode, if you issue DROP USER or DROP ROLE for a role with login capability while that user is still connected then nothing happens to the connected user - they remain connected. The user cannot reconnect however. 
This behavior applies in recovery also, so a - DROP USER on the primary does not disconnect that user on the standby. + DROP USER on the primary does not disconnect that user on the standby. @@ -2361,7 +2361,7 @@ LOG: database system is ready to accept read only connections restartpoints (similar to checkpoints on the primary) and normal block cleaning activities. This can include updates of the hint bit information stored on the standby server. - The CHECKPOINT command is accepted during recovery, + The CHECKPOINT command is accepted during recovery, though it performs a restartpoint rather than a new checkpoint. @@ -2427,15 +2427,15 @@ LOG: database system is ready to accept read only connections - At the end of recovery, AccessExclusiveLocks held by prepared transactions + At the end of recovery, AccessExclusiveLocks held by prepared transactions will require twice the normal number of lock table entries. If you plan on running either a large number of concurrent prepared transactions - that normally take AccessExclusiveLocks, or you plan on having one - large transaction that takes many AccessExclusiveLocks, you are - advised to select a larger value of max_locks_per_transaction, + that normally take AccessExclusiveLocks, or you plan on having one + large transaction that takes many AccessExclusiveLocks, you are + advised to select a larger value of max_locks_per_transaction, perhaps as much as twice the value of the parameter on the primary server. You need not consider this at all if - your setting of max_prepared_transactions is 0. + your setting of max_prepared_transactions is 0. diff --git a/doc/src/sgml/history.sgml b/doc/src/sgml/history.sgml index a7f4b701ea..d1535469f9 100644 --- a/doc/src/sgml/history.sgml +++ b/doc/src/sgml/history.sgml @@ -132,7 +132,7 @@ (psql) was provided for interactive SQL queries, which used GNU Readline. This largely superseded - the old monitor program. + the old monitor program. @@ -215,7 +215,7 @@ - Details about what has happened in PostgreSQL since + Details about what has happened in PostgreSQL since then can be found in . diff --git a/doc/src/sgml/hstore.sgml b/doc/src/sgml/hstore.sgml index db5d4409a6..0264e4e532 100644 --- a/doc/src/sgml/hstore.sgml +++ b/doc/src/sgml/hstore.sgml @@ -8,21 +8,21 @@ - This module implements the hstore data type for storing sets of - key/value pairs within a single PostgreSQL value. + This module implements the hstore data type for storing sets of + key/value pairs within a single PostgreSQL value. This can be useful in various scenarios, such as rows with many attributes that are rarely examined, or semi-structured data. Keys and values are simply text strings. - <type>hstore</> External Representation + <type>hstore</type> External Representation - The text representation of an hstore, used for input and output, - includes zero or more key => - value pairs separated by commas. Some examples: + The text representation of an hstore, used for input and output, + includes zero or more key => + value pairs separated by commas. Some examples: k => v @@ -31,15 +31,15 @@ foo => bar, baz => whatever The order of the pairs is not significant (and may not be reproduced on - output). Whitespace between pairs or around the => sign is + output). Whitespace between pairs or around the => sign is ignored. Double-quote keys and values that include whitespace, commas, - =s or >s. To include a double quote or a + =s or >s. To include a double quote or a backslash in a key or value, escape it with a backslash. 
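    [Editorial note: a minimal sketch of the quoting rules just described,
    assuming the extension has been installed with CREATE EXTENSION hstore
    and that standard_conforming_strings is on, so backslashes in the SQL
    literal pass through to the hstore parser unchanged:]

SELECT 'k => v'::hstore;                       -- "k"=>"v"
SELECT '"a key" => "x, with comma"'::hstore;   -- quotes protect spaces/commas
SELECT 'q => "she said \"hi\""'::hstore;       -- backslash escapes a quote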
- Each key in an hstore is unique. If you declare an hstore - with duplicate keys, only one will be stored in the hstore and + Each key in an hstore is unique. If you declare an hstore + with duplicate keys, only one will be stored in the hstore and there is no guarantee as to which will be kept: @@ -51,24 +51,24 @@ SELECT 'a=>1,a=>2'::hstore; - A value (but not a key) can be an SQL NULL. For example: + A value (but not a key) can be an SQL NULL. For example: key => NULL - The NULL keyword is case-insensitive. Double-quote the - NULL to treat it as the ordinary string NULL. + The NULL keyword is case-insensitive. Double-quote the + NULL to treat it as the ordinary string NULL. - Keep in mind that the hstore text format, when used for input, - applies before any required quoting or escaping. If you are - passing an hstore literal via a parameter, then no additional + Keep in mind that the hstore text format, when used for input, + applies before any required quoting or escaping. If you are + passing an hstore literal via a parameter, then no additional processing is needed. But if you're passing it as a quoted literal constant, then any single-quote characters and (depending on the setting of - the standard_conforming_strings configuration parameter) + the standard_conforming_strings configuration parameter) backslash characters need to be escaped correctly. See for more on the handling of string constants. @@ -83,7 +83,7 @@ key => NULL - <type>hstore</> Operators and Functions + <type>hstore</type> Operators and Functions The operators provided by the hstore module are @@ -92,7 +92,7 @@ key => NULL - <type>hstore</> Operators + <type>hstore</type> Operators @@ -106,99 +106,99 @@ key => NULL - hstore -> text - get value for key (NULL if not present) + hstore -> text + get value for key (NULL if not present) 'a=>x, b=>y'::hstore -> 'a' x - hstore -> text[] - get values for keys (NULL if not present) + hstore -> text[] + get values for keys (NULL if not present) 'a=>x, b=>y, c=>z'::hstore -> ARRAY['c','a'] {"z","x"} - hstore || hstore - concatenate hstores + hstore || hstore + concatenate hstores 'a=>b, c=>d'::hstore || 'c=>x, d=>q'::hstore "a"=>"b", "c"=>"x", "d"=>"q" - hstore ? text - does hstore contain key? + hstore ? text + does hstore contain key? 'a=>1'::hstore ? 'a' t - hstore ?& text[] - does hstore contain all specified keys? + hstore ?& text[] + does hstore contain all specified keys? 'a=>1,b=>2'::hstore ?& ARRAY['a','b'] t - hstore ?| text[] - does hstore contain any of the specified keys? + hstore ?| text[] + does hstore contain any of the specified keys? 'a=>1,b=>2'::hstore ?| ARRAY['b','c'] t - hstore @> hstore + hstore @> hstore does left operand contain right? 'a=>b, b=>1, c=>NULL'::hstore @> 'b=>1' t - hstore <@ hstore + hstore <@ hstore is left operand contained in right? 
'a=>c'::hstore <@ 'a=>b, b=>1, c=>NULL' f - hstore - text + hstore - text delete key from left operand 'a=>1, b=>2, c=>3'::hstore - 'b'::text "a"=>"1", "c"=>"3" - hstore - text[] + hstore - text[] delete keys from left operand 'a=>1, b=>2, c=>3'::hstore - ARRAY['a','b'] "c"=>"3" - hstore - hstore + hstore - hstore delete matching pairs from left operand 'a=>1, b=>2, c=>3'::hstore - 'a=>4, b=>2'::hstore "a"=>"1", "c"=>"3" - record #= hstore - replace fields in record with matching values from hstore + record #= hstore + replace fields in record with matching values from hstore see Examples section - %% hstore - convert hstore to array of alternating keys and values + %% hstore + convert hstore to array of alternating keys and values %% 'a=>foo, b=>bar'::hstore {a,foo,b,bar} - %# hstore - convert hstore to two-dimensional key/value array + %# hstore + convert hstore to two-dimensional key/value array %# 'a=>foo, b=>bar'::hstore {{a,foo},{b,bar}} @@ -209,8 +209,8 @@ key => NULL - Prior to PostgreSQL 8.2, the containment operators @> - and <@ were called @ and ~, + Prior to PostgreSQL 8.2, the containment operators @> + and <@ were called @ and ~, respectively. These names are still available, but are deprecated and will eventually be removed. Notice that the old names are reversed from the convention formerly followed by the core geometric data types! @@ -218,7 +218,7 @@ key => NULL
- <type>hstore</> Functions + <type>hstore</type> Functions @@ -235,7 +235,7 @@ key => NULL hstore(record)hstore hstore - construct an hstore from a record or row + construct an hstore from a record or row hstore(ROW(1,2)) f1=>1,f2=>2 @@ -243,7 +243,7 @@ key => NULL hstore(text[]) hstore - construct an hstore from an array, which may be either + construct an hstore from an array, which may be either a key/value array, or a two-dimensional array hstore(ARRAY['a','1','b','2']) || hstore(ARRAY[['c','3'],['d','4']]) a=>1, b=>2, c=>3, d=>4 @@ -252,7 +252,7 @@ key => NULL hstore(text[], text[]) hstore - construct an hstore from separate key and value arrays + construct an hstore from separate key and value arrays hstore(ARRAY['a','b'], ARRAY['1','2']) "a"=>"1","b"=>"2" @@ -260,7 +260,7 @@ key => NULL hstore(text, text) hstore - make single-item hstore + make single-item hstore hstore('a', 'b') "a"=>"b" @@ -268,7 +268,7 @@ key => NULL akeys(hstore)akeys text[] - get hstore's keys as an array + get hstore's keys as an array akeys('a=>1,b=>2') {a,b} @@ -276,7 +276,7 @@ key => NULL skeys(hstore)skeys setof text - get hstore's keys as a set + get hstore's keys as a set skeys('a=>1,b=>2') @@ -288,7 +288,7 @@ b avals(hstore)avals text[] - get hstore's values as an array + get hstore's values as an array avals('a=>1,b=>2') {1,2} @@ -296,7 +296,7 @@ b svals(hstore)svals setof text - get hstore's values as a set + get hstore's values as a set svals('a=>1,b=>2') @@ -308,7 +308,7 @@ b hstore_to_array(hstore)hstore_to_array text[] - get hstore's keys and values as an array of alternating + get hstore's keys and values as an array of alternating keys and values hstore_to_array('a=>1,b=>2') {a,1,b,2} @@ -317,7 +317,7 @@ b hstore_to_matrix(hstore)hstore_to_matrix text[] - get hstore's keys and values as a two-dimensional array + get hstore's keys and values as a two-dimensional array hstore_to_matrix('a=>1,b=>2') {{a,1},{b,2}} @@ -359,7 +359,7 @@ b slice(hstore, text[])slice hstore - extract a subset of an hstore + extract a subset of an hstore slice('a=>1,b=>2,c=>3'::hstore, ARRAY['b','c','x']) "b"=>"2", "c"=>"3" @@ -367,7 +367,7 @@ b each(hstore)each setof(key text, value text) - get hstore's keys and values as a set + get hstore's keys and values as a set select * from each('a=>1,b=>2') @@ -381,7 +381,7 @@ b exist(hstore,text)exist boolean - does hstore contain key? + does hstore contain key? exist('a=>1','a') t @@ -389,7 +389,7 @@ b defined(hstore,text)defined boolean - does hstore contain non-NULL value for key? + does hstore contain non-NULL value for key? defined('a=>NULL','a') f @@ -421,7 +421,7 @@ b populate_record(record,hstore)populate_record record - replace fields in record with matching values from hstore + replace fields in record with matching values from hstore see Examples section @@ -442,7 +442,7 @@ b The function populate_record is actually declared - with anyelement, not record, as its first argument, + with anyelement, not record, as its first argument, but it will reject non-record types with a run-time error. @@ -452,8 +452,8 @@ b Indexes - hstore has GiST and GIN index support for the @>, - ?, ?& and ?| operators. For example: + hstore has GiST and GIN index support for the @>, + ?, ?& and ?| operators. For example: CREATE INDEX hidx ON testhstore USING GIST (h); @@ -462,12 +462,12 @@ CREATE INDEX hidx ON testhstore USING GIN (h); - hstore also supports btree or hash indexes for - the = operator. 
This allows hstore columns to be - declared UNIQUE, or to be used in GROUP BY, - ORDER BY or DISTINCT expressions. The sort ordering - for hstore values is not particularly useful, but these indexes - may be useful for equivalence lookups. Create indexes for = + hstore also supports btree or hash indexes for + the = operator. This allows hstore columns to be + declared UNIQUE, or to be used in GROUP BY, + ORDER BY or DISTINCT expressions. The sort ordering + for hstore values is not particularly useful, but these indexes + may be useful for equivalence lookups. Create indexes for = comparisons as follows: @@ -495,7 +495,7 @@ UPDATE tab SET h = delete(h, 'k1'); - Convert a record to an hstore: + Convert a record to an hstore: CREATE TABLE test (col1 integer, col2 text, col3 text); INSERT INTO test VALUES (123, 'foo', 'bar'); @@ -509,7 +509,7 @@ SELECT hstore(t) FROM test AS t; - Convert an hstore to a predefined record type: + Convert an hstore to a predefined record type: CREATE TABLE test (col1 integer, col2 text, col3 text); @@ -523,7 +523,7 @@ SELECT * FROM populate_record(null::test, - Modify an existing record using the values from an hstore: + Modify an existing record using the values from an hstore: CREATE TABLE test (col1 integer, col2 text, col3 text); INSERT INTO test VALUES (123, 'foo', 'bar'); @@ -541,7 +541,7 @@ SELECT (r).* FROM (SELECT t #= '"col3"=>"baz"' AS r FROM test t) s; Statistics - The hstore type, because of its intrinsic liberality, could + The hstore type, because of its intrinsic liberality, could contain a lot of different keys. Checking for valid keys is the task of the application. The following examples demonstrate several techniques for checking keys and obtaining statistics. @@ -588,7 +588,7 @@ SELECT key, count(*) FROM Compatibility - As of PostgreSQL 9.0, hstore uses a different internal + As of PostgreSQL 9.0, hstore uses a different internal representation than previous versions. This presents no obstacle for dump/restore upgrades since the text representation (used in the dump) is unchanged. @@ -599,7 +599,7 @@ SELECT key, count(*) FROM having the new code recognize old-format data. This will entail a slight performance penalty when processing data that has not yet been modified by the new code. It is possible to force an upgrade of all values in a table - column by doing an UPDATE statement as follows: + column by doing an UPDATE statement as follows: UPDATE tablename SET hstorecol = hstorecol || ''; @@ -610,7 +610,7 @@ UPDATE tablename SET hstorecol = hstorecol || ''; ALTER TABLE tablename ALTER hstorecol TYPE hstore USING hstorecol || ''; - The ALTER TABLE method requires an exclusive lock on the table, + The ALTER TABLE method requires an exclusive lock on the table, but does not result in bloating the table with old row versions. diff --git a/doc/src/sgml/indexam.sgml b/doc/src/sgml/indexam.sgml index aa3d371d2e..b06ffcdbff 100644 --- a/doc/src/sgml/indexam.sgml +++ b/doc/src/sgml/indexam.sgml @@ -6,17 +6,17 @@ This chapter defines the interface between the core PostgreSQL system and index access - methods, which manage individual index types. The core system + methods, which manage individual index types. The core system knows nothing about indexes beyond what is specified here, so it is possible to develop entirely new index types by writing add-on code. 
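    [Editorial note: as a sketch of how such add-on code is exposed at the
    SQL level; my_am_handler, my_extension, myam, mytable and mycolumn are
    hypothetical names for illustration, not a real module. The handler
    declaration matches the requirements described below:]

-- Hypothetical example; the shared library, handler, and object names
-- are assumptions for illustration only.
CREATE FUNCTION my_am_handler(internal) RETURNS index_am_handler
    AS 'my_extension', 'my_am_handler' LANGUAGE C STRICT;
CREATE ACCESS METHOD myam TYPE INDEX HANDLER my_am_handler;
CREATE INDEX ON mytable USING myam (mycolumn);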
All indexes in PostgreSQL are what are known - technically as secondary indexes; that is, the index is + technically as secondary indexes; that is, the index is physically separate from the table file that it describes. Each index - is stored as its own physical relation and so is described - by an entry in the pg_class catalog. The contents of an + is stored as its own physical relation and so is described + by an entry in the pg_class catalog. The contents of an index are entirely under the control of its index access method. In practice, all index access methods divide indexes into standard-size pages so that they can use the regular storage manager and buffer manager @@ -28,7 +28,7 @@ An index is effectively a mapping from some data key values to - tuple identifiers, or TIDs, of row versions + tuple identifiers, or TIDs, of row versions (tuples) in the index's parent table. A TID consists of a block number and an item number within that block (see ). This is sufficient @@ -50,7 +50,7 @@ Each index access method is described by a row in the pg_am system catalog. The pg_am entry - specifies a name and a handler function for the access + specifies a name and a handler function for the access method. These entries can be created and deleted using the and SQL commands. @@ -58,14 +58,14 @@ An index access method handler function must be declared to accept a - single argument of type internal and to return the - pseudo-type index_am_handler. The argument is a dummy value that + single argument of type internal and to return the + pseudo-type index_am_handler. The argument is a dummy value that simply serves to prevent handler functions from being called directly from SQL commands. The result of the function must be a palloc'd struct of type IndexAmRoutine, which contains everything that the core code needs to know to make use of the index access method. The IndexAmRoutine struct, also called the access - method's API struct, includes fields specifying assorted + method's API struct, includes fields specifying assorted fixed properties of the access method, such as whether it can support multicolumn indexes. More importantly, it contains pointers to support functions for the access method, which do all of the real work to access @@ -144,8 +144,8 @@ typedef struct IndexAmRoutine To be useful, an index access method must also have one or more - operator families and - operator classes defined in + operator families and + operator classes defined in pg_opfamily, pg_opclass, pg_amop, and @@ -170,12 +170,12 @@ typedef struct IndexAmRoutine key values come from (it is always handed precomputed key values) but it will be very interested in the operator class information in pg_index. Both of these catalog entries can be - accessed as part of the Relation data structure that is + accessed as part of the Relation data structure that is passed to all operations on the index. - Some of the flag fields of IndexAmRoutine have nonobvious + Some of the flag fields of IndexAmRoutine have nonobvious implications. The requirements of amcanunique are discussed in . The amcanmulticol flag asserts that the @@ -185,7 +185,7 @@ typedef struct IndexAmRoutine When amcanmulticol is false, amoptionalkey essentially says whether the access method supports full-index scans without any restriction clause. 
- Access methods that support multiple index columns must + Access methods that support multiple index columns must support scans that omit restrictions on any or all of the columns after the first; however they are permitted to require some restriction to appear for the first index column, and this is signaled by setting @@ -201,17 +201,17 @@ typedef struct IndexAmRoutine indexes that have amoptionalkey true must index nulls, since the planner might decide to use such an index with no scan keys at all. A related restriction is that an index - access method that supports multiple index columns must + access method that supports multiple index columns must support indexing null values in columns after the first, because the planner will assume the index can be used for queries that do not restrict these columns. For example, consider an index on (a,b) and a query with WHERE a = 4. The system will assume the index can be used to scan for rows with a = 4, which is wrong if the - index omits rows where b is null. + index omits rows where b is null. It is, however, OK to omit rows where the first indexed column is null. An index access method that does index nulls may also set amsearchnulls, indicating that it supports - IS NULL and IS NOT NULL clauses as search + IS NULL and IS NOT NULL clauses as search conditions. @@ -235,8 +235,8 @@ ambuild (Relation heapRelation, Build a new index. The index relation has been physically created, but is empty. It must be filled in with whatever fixed data the access method requires, plus entries for all tuples already existing - in the table. Ordinarily the ambuild function will call - IndexBuildHeapScan() to scan the table for existing tuples + in the table. Ordinarily the ambuild function will call + IndexBuildHeapScan() to scan the table for existing tuples and compute the keys that need to be inserted into the index. The function must return a palloc'd struct containing statistics about the new index. @@ -264,22 +264,22 @@ aminsert (Relation indexRelation, IndexUniqueCheck checkUnique, IndexInfo *indexInfo); - Insert a new tuple into an existing index. The values and - isnull arrays give the key values to be indexed, and - heap_tid is the TID to be indexed. + Insert a new tuple into an existing index. The values and + isnull arrays give the key values to be indexed, and + heap_tid is the TID to be indexed. If the access method supports unique indexes (its - amcanunique flag is true) then - checkUnique indicates the type of uniqueness check to + amcanunique flag is true) then + checkUnique indicates the type of uniqueness check to perform. This varies depending on whether the unique constraint is deferrable; see for details. - Normally the access method only needs the heapRelation + Normally the access method only needs the heapRelation parameter when performing uniqueness checking (since then it will have to look into the heap to verify tuple liveness). The function's Boolean result value is significant only when - checkUnique is UNIQUE_CHECK_PARTIAL. + checkUnique is UNIQUE_CHECK_PARTIAL. In this case a TRUE result means the new entry is known unique, whereas FALSE means it might be non-unique (and a deferred uniqueness check must be scheduled). For other cases a constant FALSE result is recommended. @@ -287,7 +287,7 @@ aminsert (Relation indexRelation, Some indexes might not index all tuples. If the tuple is not to be - indexed, aminsert should just return without doing anything. + indexed, aminsert should just return without doing anything. 
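    [Editorial note: in plain SQL terms, and using the built-in btree access
    method with illustrative table and index names, the entry points just
    described are reached as follows:]

CREATE TABLE t (k integer, v text);
CREATE INDEX t_k_idx ON t (k);    -- CREATE INDEX drives ambuild over existing tuples
INSERT INTO t VALUES (1, 'one');  -- each INSERT drives aminsert for the new entry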
@@ -306,26 +306,26 @@ ambulkdelete (IndexVacuumInfo *info, IndexBulkDeleteCallback callback, void *callback_state); - Delete tuple(s) from the index. This is a bulk delete operation + Delete tuple(s) from the index. This is a bulk delete operation that is intended to be implemented by scanning the whole index and checking each entry to see if it should be deleted. - The passed-in callback function must be called, in the style - callback(TID, callback_state) returns bool, + The passed-in callback function must be called, in the style + callback(TID, callback_state) returns bool, to determine whether any particular index entry, as identified by its referenced TID, is to be deleted. Must return either NULL or a palloc'd struct containing statistics about the effects of the deletion operation. It is OK to return NULL if no information needs to be passed on to - amvacuumcleanup. + amvacuumcleanup. - Because of limited maintenance_work_mem, - ambulkdelete might need to be called more than once when many - tuples are to be deleted. The stats argument is the result + Because of limited maintenance_work_mem, + ambulkdelete might need to be called more than once when many + tuples are to be deleted. The stats argument is the result of the previous call for this index (it is NULL for the first call within a - VACUUM operation). This allows the AM to accumulate statistics - across the whole operation. Typically, ambulkdelete will - modify and return the same struct if the passed stats is not + VACUUM operation). This allows the AM to accumulate statistics + across the whole operation. Typically, ambulkdelete will + modify and return the same struct if the passed stats is not null. @@ -336,14 +336,14 @@ amvacuumcleanup (IndexVacuumInfo *info, IndexBulkDeleteResult *stats); Clean up after a VACUUM operation (zero or more - ambulkdelete calls). This does not have to do anything + ambulkdelete calls). This does not have to do anything beyond returning index statistics, but it might perform bulk cleanup - such as reclaiming empty index pages. stats is whatever the - last ambulkdelete call returned, or NULL if - ambulkdelete was not called because no tuples needed to be + such as reclaiming empty index pages. stats is whatever the + last ambulkdelete call returned, or NULL if + ambulkdelete was not called because no tuples needed to be deleted. If the result is not NULL it must be a palloc'd struct. - The statistics it contains will be used to update pg_class, - and will be reported by VACUUM if VERBOSE is given. + The statistics it contains will be used to update pg_class, + and will be reported by VACUUM if VERBOSE is given. It is OK to return NULL if the index was not changed at all during the VACUUM operation, but otherwise correct stats should be returned. @@ -351,8 +351,8 @@ amvacuumcleanup (IndexVacuumInfo *info, As of PostgreSQL 8.4, - amvacuumcleanup will also be called at completion of an - ANALYZE operation. In this case stats is always + amvacuumcleanup will also be called at completion of an + ANALYZE operation. In this case stats is always NULL and any return value will be ignored. This case can be distinguished by checking info->analyze_only. 
It is recommended that the access method do nothing except post-insert cleanup in such a @@ -365,12 +365,12 @@ bool amcanreturn (Relation indexRelation, int attno); Check whether the index can support index-only scans on + linkend="indexes-index-only-scans">index-only scans on the given column, by returning the indexed column values for an index entry in the form of an IndexTuple. The attribute number is 1-based, i.e. the first column's attno is 1. Returns TRUE if supported, else FALSE. If the access method does not support index-only scans at all, - the amcanreturn field in its IndexAmRoutine + the amcanreturn field in its IndexAmRoutine struct can be set to NULL. @@ -397,18 +397,18 @@ amoptions (ArrayType *reloptions, Parse and validate the reloptions array for an index. This is called only when a non-null reloptions array exists for the index. - reloptions is a text array containing entries of the - form name=value. - The function should construct a bytea value, which will be copied - into the rd_options field of the index's relcache entry. - The data contents of the bytea value are open for the access + reloptions is a text array containing entries of the + form name=value. + The function should construct a bytea value, which will be copied + into the rd_options field of the index's relcache entry. + The data contents of the bytea value are open for the access method to define; most of the standard access methods use struct - StdRdOptions. - When validate is true, the function should report a suitable + StdRdOptions. + When validate is true, the function should report a suitable error message if any of the options are unrecognized or have invalid - values; when validate is false, invalid entries should be - silently ignored. (validate is false when loading options - already stored in pg_catalog; an invalid entry could only + values; when validate is false, invalid entries should be + silently ignored. (validate is false when loading options + already stored in pg_catalog; an invalid entry could only be found if the access method has changed its rules for options, and in that case ignoring obsolete entries is appropriate.) It is OK to return NULL if default behavior is wanted. @@ -421,44 +421,44 @@ amproperty (Oid index_oid, int attno, IndexAMProperty prop, const char *propname, bool *res, bool *isnull); - The amproperty method allows index access methods to override + The amproperty method allows index access methods to override the default behavior of pg_index_column_has_property and related functions. If the access method does not have any special behavior for index property - inquiries, the amproperty field in - its IndexAmRoutine struct can be set to NULL. - Otherwise, the amproperty method will be called with - index_oid and attno both zero for + inquiries, the amproperty field in + its IndexAmRoutine struct can be set to NULL. + Otherwise, the amproperty method will be called with + index_oid and attno both zero for pg_indexam_has_property calls, - or with index_oid valid and attno zero for + or with index_oid valid and attno zero for pg_index_has_property calls, - or with index_oid valid and attno greater than + or with index_oid valid and attno greater than zero for pg_index_column_has_property calls. - prop is an enum value identifying the property being tested, - while propname is the original property name string. + prop is an enum value identifying the property being tested, + while propname is the original property name string. 
If the core code does not recognize the property name - then prop is AMPROP_UNKNOWN. + then prop is AMPROP_UNKNOWN. Access methods can define custom property names by - checking propname for a match (use pg_strcasecmp + checking propname for a match (use pg_strcasecmp to match, for consistency with the core code); for names known to the core - code, it's better to inspect prop. - If the amproperty method returns true then - it has determined the property test result: it must set *res - to the boolean value to return, or set *isnull - to true to return a NULL. (Both of the referenced variables - are initialized to false before the call.) - If the amproperty method returns false then + code, it's better to inspect prop. + If the amproperty method returns true then + it has determined the property test result: it must set *res + to the boolean value to return, or set *isnull + to true to return a NULL. (Both of the referenced variables + are initialized to false before the call.) + If the amproperty method returns false then the core code will proceed with its normal logic for determining the property test result. Access methods that support ordering operators should - implement AMPROP_DISTANCE_ORDERABLE property testing, as the + implement AMPROP_DISTANCE_ORDERABLE property testing, as the core code does not know how to do that and will return NULL. It may - also be advantageous to implement AMPROP_RETURNABLE testing, + also be advantageous to implement AMPROP_RETURNABLE testing, if that can be done more cheaply than by opening the index and calling - amcanreturn, which is the core code's default behavior. + amcanreturn, which is the core code's default behavior. The default behavior should be satisfactory for all other standard properties. @@ -471,18 +471,18 @@ amvalidate (Oid opclassoid); Validate the catalog entries for the specified operator class, so far as the access method can reasonably do that. For example, this might include testing that all required support functions are provided. - The amvalidate function must return false if the opclass is - invalid. Problems should be reported with ereport messages. + The amvalidate function must return false if the opclass is + invalid. Problems should be reported with ereport messages. The purpose of an index, of course, is to support scans for tuples matching - an indexable WHERE condition, often called a - qualifier or scan key. The semantics of + an indexable WHERE condition, often called a + qualifier or scan key. The semantics of index scanning are described more fully in , - below. An index access method can support plain index scans, - bitmap index scans, or both. The scan-related functions that an + below. An index access method can support plain index scans, + bitmap index scans, or both. The scan-related functions that an index access method must or may provide are: @@ -493,17 +493,17 @@ ambeginscan (Relation indexRelation, int nkeys, int norderbys); - Prepare for an index scan. The nkeys and norderbys + Prepare for an index scan. The nkeys and norderbys parameters indicate the number of quals and ordering operators that will be used in the scan; these may be useful for space allocation purposes. Note that the actual values of the scan keys aren't provided yet. The result must be a palloc'd struct. For implementation reasons the index access method - must create this struct by calling - RelationGetIndexScan(). 
In most cases - ambeginscan does little beyond making that call and perhaps + must create this struct by calling + RelationGetIndexScan(). In most cases + ambeginscan does little beyond making that call and perhaps acquiring locks; - the interesting parts of index-scan startup are in amrescan. + the interesting parts of index-scan startup are in amrescan. @@ -516,10 +516,10 @@ amrescan (IndexScanDesc scan, int norderbys); Start or restart an index scan, possibly with new scan keys. (To restart - using previously-passed keys, NULL is passed for keys and/or - orderbys.) Note that it is not allowed for + using previously-passed keys, NULL is passed for keys and/or + orderbys.) Note that it is not allowed for the number of keys or order-by operators to be larger than - what was passed to ambeginscan. In practice the restart + what was passed to ambeginscan. In practice the restart feature is used when a new outer tuple is selected by a nested-loop join and so a new key comparison value is needed, but the scan key structure remains the same. @@ -534,42 +534,42 @@ amgettuple (IndexScanDesc scan, Fetch the next tuple in the given scan, moving in the given direction (forward or backward in the index). Returns TRUE if a tuple was obtained, FALSE if no matching tuples remain. In the TRUE case the tuple - TID is stored into the scan structure. Note that - success means only that the index contains an entry that matches + TID is stored into the scan structure. Note that + success means only that the index contains an entry that matches the scan keys, not that the tuple necessarily still exists in the heap or - will pass the caller's snapshot test. On success, amgettuple - must also set scan->xs_recheck to TRUE or FALSE. + will pass the caller's snapshot test. On success, amgettuple + must also set scan->xs_recheck to TRUE or FALSE. FALSE means it is certain that the index entry matches the scan keys. TRUE means this is not certain, and the conditions represented by the scan keys must be rechecked against the heap tuple after fetching it. - This provision supports lossy index operators. + This provision supports lossy index operators. Note that rechecking will extend only to the scan conditions; a partial - index predicate (if any) is never rechecked by amgettuple + index predicate (if any) is never rechecked by amgettuple callers. If the index supports index-only scans (i.e., amcanreturn returns TRUE for it), - then on success the AM must also check scan->xs_want_itup, + then on success the AM must also check scan->xs_want_itup, and if that is true it must return the originally indexed data for the index entry. The data can be returned in the form of an - IndexTuple pointer stored at scan->xs_itup, - with tuple descriptor scan->xs_itupdesc; or in the form of - a HeapTuple pointer stored at scan->xs_hitup, - with tuple descriptor scan->xs_hitupdesc. (The latter + IndexTuple pointer stored at scan->xs_itup, + with tuple descriptor scan->xs_itupdesc; or in the form of + a HeapTuple pointer stored at scan->xs_hitup, + with tuple descriptor scan->xs_hitupdesc. (The latter format should be used when reconstructing data that might possibly not fit - into an IndexTuple.) In either case, + into an IndexTuple.) In either case, management of the data referenced by the pointer is the access method's responsibility. The data must remain good at least until the next - amgettuple, amrescan, or amendscan + amgettuple, amrescan, or amendscan call for the scan. 
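To make these conventions concrete, here is a minimal sketch of an amgettuple success path. It is not the implementation of any real access method: MyScanOpaque, my_find_next_match, and my_form_index_tuple are hypothetical AM-internal helpers, and the field names assume the PostgreSQL 10-era IndexScanDesc layout, in which the matching heap TID is reported through scan->xs_ctup.t_self.

#include "postgres.h"
#include "access/itup.h"		/* IndexTuple */
#include "access/relscan.h"		/* IndexScanDesc and its fields */

/* Hypothetical AM-internal scan state, for illustration only */
typedef struct MyScanOpaque
{
	ItemPointerData cur_heap_tid;	/* TID of current match */
	bool		cur_match_is_lossy; /* does it need rechecking? */
	/* ... other AM-specific scan state ... */
} MyScanOpaque;

static bool my_find_next_match(MyScanOpaque *so, ScanDirection dir);	/* hypothetical */
static IndexTuple my_form_index_tuple(MyScanOpaque *so);				/* hypothetical */

static bool
my_gettuple(IndexScanDesc scan, ScanDirection dir)
{
	MyScanOpaque *so = (MyScanOpaque *) scan->opaque;

	if (!my_find_next_match(so, dir))
		return false;			/* no more entries match the scan keys */

	/* Report the heap TID of the matching index entry. */
	scan->xs_ctup.t_self = so->cur_heap_tid;

	/* If the match is not certain, the caller must recheck the quals. */
	scan->xs_recheck = so->cur_match_is_lossy;

	/* For index-only scans, also return the originally indexed data. */
	if (scan->xs_want_itup)
		scan->xs_itup = my_form_index_tuple(so);	/* must stay valid until
													 * the next amgettuple,
													 * amrescan, or amendscan
													 * call */

	return true;
}

(scan->xs_itupdesc would typically be filled in once, in ambeginscan, since the tuple descriptor does not change from call to call.)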
- The amgettuple function need only be provided if the access
- method supports plain index scans. If it doesn't, the
- amgettuple field in its IndexAmRoutine
+ The amgettuple function need only be provided if the access
+ method supports plain index scans. If it doesn't, the
+ amgettuple field in its IndexAmRoutine
struct must be set to NULL.

@@ -583,24 +583,24 @@ amgetbitmap (IndexScanDesc scan,
TIDBitmap (that is, OR the set of tuple IDs into whatever set is already
in the bitmap). The number of tuples fetched is returned (this might be
just an approximate count; for instance, some AMs do not detect duplicates).
- While inserting tuple IDs into the bitmap, amgetbitmap can
+ While inserting tuple IDs into the bitmap, amgetbitmap can
indicate that rechecking of the scan conditions is required for specific
- tuple IDs. This is analogous to the xs_recheck output parameter
- of amgettuple. Note: in the current implementation, support
+ tuple IDs. This is analogous to the xs_recheck output parameter
+ of amgettuple. Note: in the current implementation, support
for this feature is conflated with support for lossy storage of the bitmap
itself, and therefore callers recheck both the scan conditions and the
partial index predicate (if any) for recheckable tuples. That might not
always be true, however.
- amgetbitmap and
- amgettuple cannot be used in the same index scan; there
- are other restrictions too when using amgetbitmap, as explained
+ amgetbitmap and
+ amgettuple cannot be used in the same index scan; there
+ are other restrictions too when using amgetbitmap, as explained
in .

- The amgetbitmap function need only be provided if the access
- method supports bitmap index scans. If it doesn't, the
- amgetbitmap field in its IndexAmRoutine
+ The amgetbitmap function need only be provided if the access
+ method supports bitmap index scans. If it doesn't, the
+ amgetbitmap field in its IndexAmRoutine
struct must be set to NULL.

@@ -609,7 +609,7 @@ amgetbitmap (IndexScanDesc scan,
void
amendscan (IndexScanDesc scan);

- End a scan and release resources. The scan struct itself
+ End a scan and release resources. The scan struct itself
should not be freed, but any locks or pins taken internally by the access
method must be released.

@@ -624,9 +624,9 @@ ammarkpos (IndexScanDesc scan);

- The ammarkpos function need only be provided if the access
+ The ammarkpos function need only be provided if the access
method supports ordered scans. If it doesn't,
- the ammarkpos field in its IndexAmRoutine
+ the ammarkpos field in its IndexAmRoutine
struct may be set to NULL.

@@ -639,15 +639,15 @@ amrestrpos (IndexScanDesc scan);

- The amrestrpos function need only be provided if the access
+ The amrestrpos function need only be provided if the access
method supports ordered scans. If it doesn't,
- the amrestrpos field in its IndexAmRoutine
+ the amrestrpos field in its IndexAmRoutine
struct may be set to NULL.

In addition to supporting ordinary index scans, some types of index
- may wish to support parallel index scans, which allow
+ may wish to support parallel index scans, which allow
multiple backends to cooperate in performing an index scan. The index
access method should arrange things so that each cooperating process
returns a subset of the tuples that would be returned by
@@ -668,7 +668,7 @@ amestimateparallelscan (void);
Estimate and return the number of bytes of dynamic shared memory which the access
method will need to perform a parallel scan.
(This number is in addition to, not in lieu of, the amount of space needed for - AM-independent data in ParallelIndexScanDescData.) + AM-independent data in ParallelIndexScanDescData.) @@ -683,9 +683,9 @@ void aminitparallelscan (void *target); This function will be called to initialize dynamic shared memory at the - beginning of a parallel scan. target will point to at least + beginning of a parallel scan. target will point to at least the number of bytes previously returned by - amestimateparallelscan, and this function may use that + amestimateparallelscan, and this function may use that amount of space to store whatever data it wishes. @@ -702,7 +702,7 @@ amparallelrescan (IndexScanDesc scan); This function, if implemented, will be called when a parallel index scan must be restarted. It should reset any shared state set up by - aminitparallelscan such that the scan will be restarted from + aminitparallelscan such that the scan will be restarted from the beginning. @@ -714,16 +714,16 @@ amparallelrescan (IndexScanDesc scan); In an index scan, the index access method is responsible for regurgitating the TIDs of all the tuples it has been told about that match the - scan keys. The access method is not involved in + scan keys. The access method is not involved in actually fetching those tuples from the index's parent table, nor in determining whether they pass the scan's time qualification test or other conditions. - A scan key is the internal representation of a WHERE clause of - the form index_key operator - constant, where the index key is one of the columns of the + A scan key is the internal representation of a WHERE clause of + the form index_key operator + constant, where the index key is one of the columns of the index and the operator is one of the members of the operator family associated with that index column. An index scan has zero or more scan keys, which are implicitly ANDed — the returned tuples are expected @@ -731,7 +731,7 @@ amparallelrescan (IndexScanDesc scan); - The access method can report that the index is lossy, or + The access method can report that the index is lossy, or requires rechecks, for a particular query. This implies that the index scan will return all the entries that pass the scan key, plus possibly additional entries that do not. The core system's index-scan machinery @@ -743,16 +743,16 @@ amparallelrescan (IndexScanDesc scan); Note that it is entirely up to the access method to ensure that it correctly finds all and only the entries passing all the given scan keys. - Also, the core system will simply hand off all the WHERE + Also, the core system will simply hand off all the WHERE clauses that match the index keys and operator families, without any semantic analysis to determine whether they are redundant or contradictory. As an example, given - WHERE x > 4 AND x > 14 where x is a b-tree - indexed column, it is left to the b-tree amrescan function + WHERE x > 4 AND x > 14 where x is a b-tree + indexed column, it is left to the b-tree amrescan function to realize that the first scan key is redundant and can be discarded. - The extent of preprocessing needed during amrescan will + The extent of preprocessing needed during amrescan will depend on the extent to which the index access method needs to reduce - the scan keys to a normalized form. + the scan keys to a normalized form. 
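As an illustration of such normalization, here is a hedged sketch of one preprocessing step in the spirit of b-tree's _bt_preprocess_keys (src/backend/access/nbtree/nbtutils.c): given two scan keys on the same attribute with the same "greater than" strategy, only the stricter one needs to be kept. The sketch assumes both comparison constants have the index column's type, so the keys' own operator function can compare them; a real implementation must also cope with NULL arguments, cross-type operators, and mutually contradictory keys.

#include "postgres.h"
#include "access/skey.h"		/* ScanKey, ScanKeyData */
#include "access/stratnum.h"	/* BTGreaterStrategyNumber */
#include "fmgr.h"				/* FunctionCall2Coll */

static ScanKey
pick_stricter_gt_key(ScanKey a, ScanKey b)
{
	Assert(a->sk_attno == b->sk_attno);
	Assert(a->sk_strategy == BTGreaterStrategyNumber &&
		   b->sk_strategy == BTGreaterStrategyNumber);

	/* "a" is the stricter key iff a's constant > b's constant */
	if (DatumGetBool(FunctionCall2Coll(&a->sk_func, a->sk_collation,
									   a->sk_argument, b->sk_argument)))
		return a;
	return b;
}

Applied to WHERE x > 4 AND x > 14, this keeps only the x > 14 key.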
@@ -765,7 +765,7 @@ amparallelrescan (IndexScanDesc scan); Access methods that always return entries in the natural ordering of their data (such as btree) should set - amcanorder to true. + amcanorder to true. Currently, such access methods must use btree-compatible strategy numbers for their equality and ordering operators. @@ -773,11 +773,11 @@ amparallelrescan (IndexScanDesc scan); Access methods that support ordering operators should set - amcanorderbyop to true. + amcanorderbyop to true. This indicates that the index is capable of returning entries in - an order satisfying ORDER BY index_key - operator constant. Scan modifiers - of that form can be passed to amrescan as described + an order satisfying ORDER BY index_key + operator constant. Scan modifiers + of that form can be passed to amrescan as described previously. @@ -785,29 +785,29 @@ amparallelrescan (IndexScanDesc scan); - The amgettuple function has a direction argument, - which can be either ForwardScanDirection (the normal case) - or BackwardScanDirection. If the first call after - amrescan specifies BackwardScanDirection, then the + The amgettuple function has a direction argument, + which can be either ForwardScanDirection (the normal case) + or BackwardScanDirection. If the first call after + amrescan specifies BackwardScanDirection, then the set of matching index entries is to be scanned back-to-front rather than in - the normal front-to-back direction, so amgettuple must return + the normal front-to-back direction, so amgettuple must return the last matching tuple in the index, rather than the first one as it normally would. (This will only occur for access - methods that set amcanorder to true.) After the - first call, amgettuple must be prepared to advance the scan in + methods that set amcanorder to true.) After the + first call, amgettuple must be prepared to advance the scan in either direction from the most recently returned entry. (But if - amcanbackward is false, all subsequent + amcanbackward is false, all subsequent calls will have the same direction as the first one.) - Access methods that support ordered scans must support marking a + Access methods that support ordered scans must support marking a position in a scan and later returning to the marked position. The same position might be restored multiple times. However, only one position need - be remembered per scan; a new ammarkpos call overrides the + be remembered per scan; a new ammarkpos call overrides the previously marked position. An access method that does not support ordered - scans need not provide ammarkpos and amrestrpos - functions in IndexAmRoutine; set those pointers to NULL + scans need not provide ammarkpos and amrestrpos + functions in IndexAmRoutine; set those pointers to NULL instead. @@ -835,29 +835,29 @@ amparallelrescan (IndexScanDesc scan); - Instead of using amgettuple, an index scan can be done with - amgetbitmap to fetch all tuples in one call. This can be - noticeably more efficient than amgettuple because it allows + Instead of using amgettuple, an index scan can be done with + amgetbitmap to fetch all tuples in one call. This can be + noticeably more efficient than amgettuple because it allows avoiding lock/unlock cycles within the access method. In principle - amgetbitmap should have the same effects as repeated - amgettuple calls, but we impose several restrictions to - simplify matters. 
First of all, amgetbitmap returns all + amgetbitmap should have the same effects as repeated + amgettuple calls, but we impose several restrictions to + simplify matters. First of all, amgetbitmap returns all tuples at once and marking or restoring scan positions isn't supported. Secondly, the tuples are returned in a bitmap which doesn't - have any specific ordering, which is why amgetbitmap doesn't - take a direction argument. (Ordering operators will never be + have any specific ordering, which is why amgetbitmap doesn't + take a direction argument. (Ordering operators will never be supplied for such a scan, either.) Also, there is no provision for index-only scans with - amgetbitmap, since there is no way to return the contents of + amgetbitmap, since there is no way to return the contents of index tuples. - Finally, amgetbitmap + Finally, amgetbitmap does not guarantee any locking of the returned tuples, with implications spelled out in . Note that it is permitted for an access method to implement only - amgetbitmap and not amgettuple, or vice versa, + amgetbitmap and not amgettuple, or vice versa, if its internal implementation is unsuited to one API or the other. @@ -870,26 +870,26 @@ amparallelrescan (IndexScanDesc scan); Index access methods must handle concurrent updates of the index by multiple processes. The core PostgreSQL system obtains - AccessShareLock on the index during an index scan, and - RowExclusiveLock when updating the index (including plain - VACUUM). Since these lock types do not conflict, the access + AccessShareLock on the index during an index scan, and + RowExclusiveLock when updating the index (including plain + VACUUM). Since these lock types do not conflict, the access method is responsible for handling any fine-grained locking it might need. An exclusive lock on the index as a whole will be taken only during index - creation, destruction, or REINDEX. + creation, destruction, or REINDEX. Building an index type that supports concurrent updates usually requires extensive and subtle analysis of the required behavior. For the b-tree and hash index types, you can read about the design decisions involved in - src/backend/access/nbtree/README and - src/backend/access/hash/README. + src/backend/access/nbtree/README and + src/backend/access/hash/README. Aside from the index's own internal consistency requirements, concurrent updates create issues about consistency between the parent table (the - heap) and the index. Because + heap) and the index. Because PostgreSQL separates accesses and updates of the heap from those of the index, there are windows in which the index might be inconsistent with the heap. We handle this problem @@ -906,7 +906,7 @@ amparallelrescan (IndexScanDesc scan); - When a heap entry is to be deleted (by VACUUM), all its + When a heap entry is to be deleted (by VACUUM), all its index entries must be removed first. @@ -914,7 +914,7 @@ amparallelrescan (IndexScanDesc scan); An index scan must maintain a pin on the index page holding the item last returned by - amgettuple, and ambulkdelete cannot delete + amgettuple, and ambulkdelete cannot delete entries from pages that are pinned by other backends. The need for this rule is explained below. 
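The rationale for the third rule appears just below. As a sketch of what honoring it looks like from the deletion side, an ambulkdelete implementation can process each page roughly as follows; the pattern is modeled on (but simplified from) what b-tree's vacuum code does. LockBufferForCleanup waits until no other backend holds a pin on the buffer, so a reader still in flight from an entry on this page to the heap blocks the deletion until it is done. The page-scanning details are elided.

#include "postgres.h"
#include "access/genam.h"		/* IndexBulkDeleteCallback */
#include "storage/bufmgr.h"		/* ReadBuffer, LockBufferForCleanup */
#include "utils/rel.h"			/* Relation */

static void
my_vacuum_page(Relation index, BlockNumber blkno,
			   IndexBulkDeleteCallback callback, void *callback_state)
{
	Buffer		buf = ReadBuffer(index, blkno);

	/* Wait for sole pin on the page, then lock it exclusively. */
	LockBufferForCleanup(buf);

	/*
	 * ... examine each entry on the page, calling
	 * callback(&entry_tid, callback_state) and removing the entries for
	 * which it returns true ...
	 */

	UnlockReleaseBuffer(buf);
}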
@@ -922,33 +922,33 @@ amparallelrescan (IndexScanDesc scan); Without the third rule, it is possible for an index reader to - see an index entry just before it is removed by VACUUM, and + see an index entry just before it is removed by VACUUM, and then to arrive at the corresponding heap entry after that was removed by - VACUUM. + VACUUM. This creates no serious problems if that item number is still unused when the reader reaches it, since an empty - item slot will be ignored by heap_fetch(). But what if a + item slot will be ignored by heap_fetch(). But what if a third backend has already re-used the item slot for something else? When using an MVCC-compliant snapshot, there is no problem because the new occupant of the slot is certain to be too new to pass the snapshot test. However, with a non-MVCC-compliant snapshot (such as - SnapshotAny), it would be possible to accept and return + SnapshotAny), it would be possible to accept and return a row that does not in fact match the scan keys. We could defend against this scenario by requiring the scan keys to be rechecked against the heap row in all cases, but that is too expensive. Instead, we use a pin on an index page as a proxy to indicate that the reader - might still be in flight from the index entry to the matching - heap entry. Making ambulkdelete block on such a pin ensures - that VACUUM cannot delete the heap entry before the reader + might still be in flight from the index entry to the matching + heap entry. Making ambulkdelete block on such a pin ensures + that VACUUM cannot delete the heap entry before the reader is done with it. This solution costs little in run time, and adds blocking overhead only in the rare cases where there actually is a conflict. - This solution requires that index scans be synchronous: we have + This solution requires that index scans be synchronous: we have to fetch each heap tuple immediately after scanning the corresponding index entry. This is expensive for a number of reasons. An - asynchronous scan in which we collect many TIDs from the index, + asynchronous scan in which we collect many TIDs from the index, and only visit the heap tuples sometime later, requires much less index locking overhead and can allow a more efficient heap access pattern. Per the above analysis, we must use the synchronous approach for @@ -957,13 +957,13 @@ amparallelrescan (IndexScanDesc scan); - In an amgetbitmap index scan, the access method does not + In an amgetbitmap index scan, the access method does not keep an index pin on any of the returned tuples. Therefore it is only safe to use such scans with MVCC-compliant snapshots. - When the ampredlocks flag is not set, any scan using that + When the ampredlocks flag is not set, any scan using that index access method within a serializable transaction will acquire a nonblocking predicate lock on the full index. This will generate a read-write conflict with the insert of any tuple into that index by a @@ -982,9 +982,9 @@ amparallelrescan (IndexScanDesc scan); PostgreSQL enforces SQL uniqueness constraints - using unique indexes, which are indexes that disallow + using unique indexes, which are indexes that disallow multiple entries with identical keys. An access method that supports this - feature sets amcanunique true. + feature sets amcanunique true. (At present, only b-tree supports it.) @@ -1032,7 +1032,7 @@ amparallelrescan (IndexScanDesc scan); no violation should be reported. 
(This case cannot occur during the ordinary scenario of inserting a row that's just been created by the current transaction. It can happen during - CREATE UNIQUE INDEX CONCURRENTLY, however.) + CREATE UNIQUE INDEX CONCURRENTLY, however.) @@ -1057,32 +1057,32 @@ amparallelrescan (IndexScanDesc scan); are done. Otherwise, we schedule a recheck to occur when it is time to enforce the constraint. If, at the time of the recheck, both the inserted tuple and some other tuple with the same key are live, then the error - must be reported. (Note that for this purpose, live actually - means any tuple in the index entry's HOT chain is live.) - To implement this, the aminsert function is passed a - checkUnique parameter having one of the following values: + must be reported. (Note that for this purpose, live actually + means any tuple in the index entry's HOT chain is live.) + To implement this, the aminsert function is passed a + checkUnique parameter having one of the following values: - UNIQUE_CHECK_NO indicates that no uniqueness checking + UNIQUE_CHECK_NO indicates that no uniqueness checking should be done (this is not a unique index). - UNIQUE_CHECK_YES indicates that this is a non-deferrable + UNIQUE_CHECK_YES indicates that this is a non-deferrable unique index, and the uniqueness check must be done immediately, as described above. - UNIQUE_CHECK_PARTIAL indicates that the unique + UNIQUE_CHECK_PARTIAL indicates that the unique constraint is deferrable. PostgreSQL will use this mode to insert each row's index entry. The access method must allow duplicate entries into the index, and report any - potential duplicates by returning FALSE from aminsert. + potential duplicates by returning FALSE from aminsert. For each row for which FALSE is returned, a deferred recheck will be scheduled. @@ -1098,21 +1098,21 @@ amparallelrescan (IndexScanDesc scan); - UNIQUE_CHECK_EXISTING indicates that this is a deferred + UNIQUE_CHECK_EXISTING indicates that this is a deferred recheck of a row that was reported as a potential uniqueness violation. - Although this is implemented by calling aminsert, the - access method must not insert a new index entry in this + Although this is implemented by calling aminsert, the + access method must not insert a new index entry in this case. The index entry is already present. Rather, the access method must check to see if there is another live index entry. If so, and if the target row is also still live, report error. - It is recommended that in a UNIQUE_CHECK_EXISTING call, + It is recommended that in a UNIQUE_CHECK_EXISTING call, the access method further verify that the target row actually does have an existing entry in the index, and report error if not. This is a good idea because the index tuple values passed to - aminsert will have been recomputed. If the index + aminsert will have been recomputed. If the index definition involves functions that are not really immutable, we might be checking the wrong area of the index. Checking that the target row is found in the recheck verifies that we are scanning @@ -1128,20 +1128,20 @@ amparallelrescan (IndexScanDesc scan); Index Cost Estimation Functions - The amcostestimate function is given information describing + The amcostestimate function is given information describing a possible index scan, including lists of WHERE and ORDER BY clauses that have been determined to be usable with the index. 
It must return estimates of the cost of accessing the index and the selectivity of the WHERE clauses (that is, the fraction of parent-table rows that will be retrieved during the index scan). For simple cases, nearly all the work of the cost estimator can be done by calling standard routines - in the optimizer; the point of having an amcostestimate function is + in the optimizer; the point of having an amcostestimate function is to allow index access methods to provide index-type-specific knowledge, in case it is possible to improve on the standard estimates. - Each amcostestimate function must have the signature: + Each amcostestimate function must have the signature: void @@ -1158,7 +1158,7 @@ amcostestimate (PlannerInfo *root, - root + root The planner's information about the query being processed. @@ -1167,7 +1167,7 @@ amcostestimate (PlannerInfo *root, - path + path The index access path being considered. All fields except cost and @@ -1177,14 +1177,14 @@ amcostestimate (PlannerInfo *root, - loop_count + loop_count The number of repetitions of the index scan that should be factored into the cost estimates. This will typically be greater than one when considering a parameterized scan for use in the inside of a nestloop join. Note that the cost estimates should still be for just one scan; - a larger loop_count means that it may be appropriate + a larger loop_count means that it may be appropriate to allow for some caching effects across multiple scans. @@ -1197,7 +1197,7 @@ amcostestimate (PlannerInfo *root, - *indexStartupCost + *indexStartupCost Set to cost of index start-up processing @@ -1206,7 +1206,7 @@ amcostestimate (PlannerInfo *root, - *indexTotalCost + *indexTotalCost Set to total cost of index processing @@ -1215,7 +1215,7 @@ amcostestimate (PlannerInfo *root, - *indexSelectivity + *indexSelectivity Set to index selectivity @@ -1224,7 +1224,7 @@ amcostestimate (PlannerInfo *root, - *indexCorrelation + *indexCorrelation Set to correlation coefficient between index scan order and @@ -1244,17 +1244,17 @@ amcostestimate (PlannerInfo *root, The index access costs should be computed using the parameters used by src/backend/optimizer/path/costsize.c: a sequential - disk block fetch has cost seq_page_cost, a nonsequential fetch - has cost random_page_cost, and the cost of processing one index - row should usually be taken as cpu_index_tuple_cost. In - addition, an appropriate multiple of cpu_operator_cost should + disk block fetch has cost seq_page_cost, a nonsequential fetch + has cost random_page_cost, and the cost of processing one index + row should usually be taken as cpu_index_tuple_cost. In + addition, an appropriate multiple of cpu_operator_cost should be charged for any comparison operators invoked during index processing (especially evaluation of the indexquals themselves). The access costs should include all disk and CPU costs associated with - scanning the index itself, but not the costs of retrieving or + scanning the index itself, but not the costs of retrieving or processing the parent-table rows that are identified by the index. @@ -1266,21 +1266,21 @@ amcostestimate (PlannerInfo *root, - The indexSelectivity should be set to the estimated fraction of the parent + The indexSelectivity should be set to the estimated fraction of the parent table rows that will be retrieved during the index scan. In the case of a lossy query, this will typically be higher than the fraction of rows that actually pass the given qual conditions. 
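To show how these inputs and outputs fit together, here is a deliberately trivial sketch of an amcostestimate implementation, restricted to the parameters discussed in the surrounding text. my_estimate_selectivity is a hypothetical helper; a real estimator would normally derive the selectivity with clauselist_selectivity() and the support routines in src/backend/utils/adt/selfuncs.c. The indexCorrelation output, described just below, is pessimistically reported as zero.

#include "postgres.h"
#include <math.h>
#include "nodes/relation.h"		/* PlannerInfo, IndexPath, IndexOptInfo (PostgreSQL 10 era) */
#include "optimizer/cost.h"		/* random_page_cost, cpu_index_tuple_cost, ... */

static Selectivity my_estimate_selectivity(PlannerInfo *root, IndexPath *path);	/* hypothetical */

static void
my_costestimate(PlannerInfo *root, IndexPath *path, double loop_count,
				Cost *indexStartupCost, Cost *indexTotalCost,
				Selectivity *indexSelectivity, double *indexCorrelation)
{
	IndexOptInfo *index = path->indexinfo;
	Selectivity sel = my_estimate_selectivity(root, path);
	double		tuples_fetched = sel * index->tuples;
	double		pages_fetched = ceil(sel * index->pages);

	*indexSelectivity = sel;
	*indexStartupCost = 0;

	/* one nonsequential fetch per page, plus per-tuple CPU charges */
	*indexTotalCost = pages_fetched * random_page_cost +
		tuples_fetched * (cpu_index_tuple_cost + cpu_operator_cost);

	/* make no claim about agreement with the heap ordering */
	*indexCorrelation = 0.0;
}

Note that loop_count is ignored here; per the description above, a less naive estimator would use it to discount repeated page fetches for caching effects, as costsize.c's index_pages_fetched() does.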
- The indexCorrelation should be set to the correlation (ranging between + The indexCorrelation should be set to the correlation (ranging between -1.0 and 1.0) between the index order and the table order. This is used to adjust the estimate for the cost of fetching rows from the parent table. - When loop_count is greater than one, the returned numbers + When loop_count is greater than one, the returned numbers should be averages expected for any one scan of the index. @@ -1307,17 +1307,17 @@ amcostestimate (PlannerInfo *root, Estimate the number of index rows that will be visited during the - scan. For many index types this is the same as indexSelectivity times + scan. For many index types this is the same as indexSelectivity times the number of rows in the index, but it might be more. (Note that the index's size in pages and rows is available from the - path->indexinfo struct.) + path->indexinfo struct.) Estimate the number of index pages that will be retrieved during the scan. - This might be just indexSelectivity times the index's size in pages. + This might be just indexSelectivity times the index's size in pages. diff --git a/doc/src/sgml/indices.sgml b/doc/src/sgml/indices.sgml index e40750e8ec..4cdd387b7b 100644 --- a/doc/src/sgml/indices.sgml +++ b/doc/src/sgml/indices.sgml @@ -147,14 +147,14 @@ CREATE INDEX test1_id_index ON test1 (id); Constructs equivalent to combinations of these operators, such as - BETWEEN and IN, can also be implemented with - a B-tree index search. Also, an IS NULL or IS NOT - NULL condition on an index column can be used with a B-tree index. + BETWEEN and IN, can also be implemented with + a B-tree index search. Also, an IS NULL or IS NOT + NULL condition on an index column can be used with a B-tree index. The optimizer can also use a B-tree index for queries involving the - pattern matching operators LIKE and ~ + pattern matching operators LIKE and ~ if the pattern is a constant and is anchored to the beginning of the string — for example, col LIKE 'foo%' or col ~ '^foo', but not @@ -206,7 +206,7 @@ CREATE INDEX name ON table within which many different indexing strategies can be implemented. Accordingly, the particular operators with which a GiST index can be used vary depending on the indexing strategy (the operator - class). As an example, the standard distribution of + class). As an example, the standard distribution of PostgreSQL includes GiST operator classes for several two-dimensional geometric data types, which support indexed queries using these operators: @@ -231,12 +231,12 @@ CREATE INDEX name ON table The GiST operator classes included in the standard distribution are documented in . Many other GiST operator - classes are available in the contrib collection or as separate + classes are available in the contrib collection or as separate projects. For more information see . - GiST indexes are also capable of optimizing nearest-neighbor + GiST indexes are also capable of optimizing nearest-neighbor searches, such as point '(101,456)' LIMIT 10; @@ -245,7 +245,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; which finds the ten places closest to a given target point. The ability to do this is again dependent on the particular operator class being used. In , operators that can be - used in this way are listed in the column Ordering Operators. + used in this way are listed in the column Ordering Operators. 
@@ -290,7 +290,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; GIN index - GIN indexes are inverted indexes which are appropriate for + GIN indexes are inverted indexes which are appropriate for data values that contain multiple component values, such as arrays. An inverted index contains a separate entry for each component value, and can efficiently handle queries that test for the presence of specific @@ -318,7 +318,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; The GIN operator classes included in the standard distribution are documented in . Many other GIN operator - classes are available in the contrib collection or as separate + classes are available in the contrib collection or as separate projects. For more information see . @@ -407,13 +407,13 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); are checked in the index, so they save visits to the table proper, but they do not reduce the portion of the index that has to be scanned. For example, given an index on (a, b, c) and a - query condition WHERE a = 5 AND b >= 42 AND c < 77, + query condition WHERE a = 5 AND b >= 42 AND c < 77, the index would have to be scanned from the first entry with - a = 5 and b = 42 up through the last entry with - a = 5. Index entries with c >= 77 would be + a = 5 and b = 42 up through the last entry with + a = 5. Index entries with c >= 77 would be skipped, but they'd still have to be scanned through. This index could in principle be used for queries that have constraints - on b and/or c with no constraint on a + on b and/or c with no constraint on a — but the entire index would have to be scanned, so in most cases the planner would prefer a sequential table scan over using the index. @@ -462,17 +462,17 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); - Indexes and <literal>ORDER BY</> + Indexes and <literal>ORDER BY</literal> index - and ORDER BY + and ORDER BY In addition to simply finding the rows to be returned by a query, an index may be able to deliver them in a specific sorted order. - This allows a query's ORDER BY specification to be honored + This allows a query's ORDER BY specification to be honored without a separate sorting step. Of the index types currently supported by PostgreSQL, only B-tree can produce sorted output — the other index types return @@ -480,7 +480,7 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); - The planner will consider satisfying an ORDER BY specification + The planner will consider satisfying an ORDER BY specification either by scanning an available index that matches the specification, or by scanning the table in physical order and doing an explicit sort. For a query that requires scanning a large fraction of the @@ -488,50 +488,50 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); because it requires less disk I/O due to following a sequential access pattern. Indexes are more useful when only a few rows need be fetched. An important - special case is ORDER BY in combination with - LIMIT n: an explicit sort will have to process - all the data to identify the first n rows, but if there is - an index matching the ORDER BY, the first n + special case is ORDER BY in combination with + LIMIT n: an explicit sort will have to process + all the data to identify the first n rows, but if there is + an index matching the ORDER BY, the first n rows can be retrieved directly, without scanning the remainder at all. By default, B-tree indexes store their entries in ascending order with nulls last. 
This means that a forward scan of an index on - column x produces output satisfying ORDER BY x - (or more verbosely, ORDER BY x ASC NULLS LAST). The + column x produces output satisfying ORDER BY x + (or more verbosely, ORDER BY x ASC NULLS LAST). The index can also be scanned backward, producing output satisfying - ORDER BY x DESC - (or more verbosely, ORDER BY x DESC NULLS FIRST, since - NULLS FIRST is the default for ORDER BY DESC). + ORDER BY x DESC + (or more verbosely, ORDER BY x DESC NULLS FIRST, since + NULLS FIRST is the default for ORDER BY DESC). You can adjust the ordering of a B-tree index by including the - options ASC, DESC, NULLS FIRST, - and/or NULLS LAST when creating the index; for example: + options ASC, DESC, NULLS FIRST, + and/or NULLS LAST when creating the index; for example: CREATE INDEX test2_info_nulls_low ON test2 (info NULLS FIRST); CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); An index stored in ascending order with nulls first can satisfy - either ORDER BY x ASC NULLS FIRST or - ORDER BY x DESC NULLS LAST depending on which direction + either ORDER BY x ASC NULLS FIRST or + ORDER BY x DESC NULLS LAST depending on which direction it is scanned in. You might wonder why bother providing all four options, when two options together with the possibility of backward scan would cover - all the variants of ORDER BY. In single-column indexes + all the variants of ORDER BY. In single-column indexes the options are indeed redundant, but in multicolumn indexes they can be - useful. Consider a two-column index on (x, y): this can - satisfy ORDER BY x, y if we scan forward, or - ORDER BY x DESC, y DESC if we scan backward. + useful. Consider a two-column index on (x, y): this can + satisfy ORDER BY x, y if we scan forward, or + ORDER BY x DESC, y DESC if we scan backward. But it might be that the application frequently needs to use - ORDER BY x ASC, y DESC. There is no way to get that + ORDER BY x ASC, y DESC. There is no way to get that ordering from a plain index, but it is possible if the index is defined - as (x ASC, y DESC) or (x DESC, y ASC). + as (x ASC, y DESC) or (x DESC, y ASC). @@ -559,38 +559,38 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); A single index scan can only use query clauses that use the index's columns with operators of its operator class and are joined with - AND. For example, given an index on (a, b) - a query condition like WHERE a = 5 AND b = 6 could - use the index, but a query like WHERE a = 5 OR b = 6 could not + AND. For example, given an index on (a, b) + a query condition like WHERE a = 5 AND b = 6 could + use the index, but a query like WHERE a = 5 OR b = 6 could not directly use the index. Fortunately, - PostgreSQL has the ability to combine multiple indexes + PostgreSQL has the ability to combine multiple indexes (including multiple uses of the same index) to handle cases that cannot - be implemented by single index scans. The system can form AND - and OR conditions across several index scans. For example, - a query like WHERE x = 42 OR x = 47 OR x = 53 OR x = 99 - could be broken down into four separate scans of an index on x, + be implemented by single index scans. The system can form AND + and OR conditions across several index scans. For example, + a query like WHERE x = 42 OR x = 47 OR x = 53 OR x = 99 + could be broken down into four separate scans of an index on x, each scan using one of the query clauses. The results of these scans are then ORed together to produce the result. 
Another example is that if we - have separate indexes on x and y, one possible - implementation of a query like WHERE x = 5 AND y = 6 is to + have separate indexes on x and y, one possible + implementation of a query like WHERE x = 5 AND y = 6 is to use each index with the appropriate query clause and then AND together the index results to identify the result rows. To combine multiple indexes, the system scans each needed index and - prepares a bitmap in memory giving the locations of + prepares a bitmap in memory giving the locations of table rows that are reported as matching that index's conditions. The bitmaps are then ANDed and ORed together as needed by the query. Finally, the actual table rows are visited and returned. The table rows are visited in physical order, because that is how the bitmap is laid out; this means that any ordering of the original indexes is lost, and so a separate sort step will be needed if the query has an ORDER - BY clause. For this reason, and because each additional index scan + BY clause. For this reason, and because each additional index scan adds extra time, the planner will sometimes choose to use a simple index scan even though additional indexes are available that could have been used as well. @@ -603,19 +603,19 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); indexes are best, but sometimes it's better to create separate indexes and rely on the index-combination feature. For example, if your workload includes a mix of queries that sometimes involve only column - x, sometimes only column y, and sometimes both + x, sometimes only column y, and sometimes both columns, you might choose to create two separate indexes on - x and y, relying on index combination to + x and y, relying on index combination to process the queries that use both columns. You could also create a - multicolumn index on (x, y). This index would typically be + multicolumn index on (x, y). This index would typically be more efficient than index combination for queries involving both columns, but as discussed in , it - would be almost useless for queries involving only y, so it + would be almost useless for queries involving only y, so it should not be the only index. A combination of the multicolumn index - and a separate index on y would serve reasonably well. For - queries involving only x, the multicolumn index could be + and a separate index on y would serve reasonably well. For + queries involving only x, the multicolumn index could be used, though it would be larger and hence slower than an index on - x alone. The last alternative is to create all three + x alone. The last alternative is to create all three indexes, but this is probably only reasonable if the table is searched much more often than it is updated and all three types of query are common. If one of the types of query is much less common than the @@ -698,9 +698,9 @@ CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1)); - If we were to declare this index UNIQUE, it would prevent - creation of rows whose col1 values differ only in case, - as well as rows whose col1 values are actually identical. + If we were to declare this index UNIQUE, it would prevent + creation of rows whose col1 values differ only in case, + as well as rows whose col1 values are actually identical. Thus, indexes on expressions can be used to enforce constraints that are not definable as simple unique constraints. 
@@ -717,7 +717,7 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name)); - The syntax of the CREATE INDEX command normally requires + The syntax of the CREATE INDEX command normally requires writing parentheses around index expressions, as shown in the second example. The parentheses can be omitted when the expression is just a function call, as in the first example. @@ -727,9 +727,9 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name)); Index expressions are relatively expensive to maintain, because the derived expression(s) must be computed for each row upon insertion and whenever it is updated. However, the index expressions are - not recomputed during an indexed search, since they are + not recomputed during an indexed search, since they are already stored in the index. In both examples above, the system - sees the query as just WHERE indexedcolumn = 'constant' + sees the query as just WHERE indexedcolumn = 'constant' and so the speed of the search is equivalent to any other simple index query. Thus, indexes on expressions are useful when retrieval speed is more important than insertion and update speed. @@ -856,12 +856,12 @@ CREATE INDEX orders_unbilled_index ON orders (order_nr) SELECT * FROM orders WHERE billed is not true AND order_nr < 10000; However, the index can also be used in queries that do not involve - order_nr at all, e.g.: + order_nr at all, e.g.: SELECT * FROM orders WHERE billed is not true AND amount > 5000.00; This is not as efficient as a partial index on the - amount column would be, since the system has to + amount column would be, since the system has to scan the entire index. Yet, if there are relatively few unbilled orders, using this partial index just to find the unbilled orders could be a win. @@ -886,7 +886,7 @@ SELECT * FROM orders WHERE order_nr = 3501; predicate must match the conditions used in the queries that are supposed to benefit from the index. To be precise, a partial index can be used in a query only if the system can recognize that - the WHERE condition of the query mathematically implies + the WHERE condition of the query mathematically implies the predicate of the index. PostgreSQL does not have a sophisticated theorem prover that can recognize mathematically equivalent @@ -896,7 +896,7 @@ SELECT * FROM orders WHERE order_nr = 3501; The system can recognize simple inequality implications, for example x < 1 implies x < 2; otherwise the predicate condition must exactly match part of the query's - WHERE condition + WHERE condition or the index will not be recognized as usable. Matching takes place at query planning time, not at run time. As a result, parameterized query clauses do not work with a partial index. For @@ -919,9 +919,9 @@ SELECT * FROM orders WHERE order_nr = 3501; Suppose that we have a table describing test outcomes. We wish - to ensure that there is only one successful entry for + to ensure that there is only one successful entry for a given subject and target combination, but there might be any number of - unsuccessful entries. Here is one way to do it: + unsuccessful entries. Here is one way to do it: CREATE TABLE tests ( subject text, @@ -944,7 +944,7 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) distributions might cause the system to use an index when it really should not. In that case the index can be set up so that it is not available for the offending query. 
Normally, - PostgreSQL makes reasonable choices about index + PostgreSQL makes reasonable choices about index usage (e.g., it avoids them when retrieving common values, so the earlier example really only saves index size, it is not required to avoid index usage), and grossly incorrect plan choices are cause @@ -956,7 +956,7 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) know at least as much as the query planner knows, in particular you know when an index might be profitable. Forming this knowledge requires experience and understanding of how indexes in - PostgreSQL work. In most cases, the advantage of a + PostgreSQL work. In most cases, the advantage of a partial index over a regular index will be minimal. @@ -998,8 +998,8 @@ CREATE INDEX name ON table the proper class when making an index. The operator class determines the basic sort ordering (which can then be modified by adding sort options COLLATE, - ASC/DESC and/or - NULLS FIRST/NULLS LAST). + ASC/DESC and/or + NULLS FIRST/NULLS LAST). @@ -1025,8 +1025,8 @@ CREATE INDEX name ON table CREATE INDEX test_index ON test_table (col varchar_pattern_ops); Note that you should also create an index with the default operator - class if you want queries involving ordinary <, - <=, >, or >= comparisons + class if you want queries involving ordinary <, + <=, >, or >= comparisons to use an index. Such queries cannot use the xxx_pattern_ops operator classes. (Ordinary equality comparisons can use these @@ -1057,7 +1057,7 @@ SELECT am.amname AS index_method, An operator class is actually just a subset of a larger structure called an - operator family. In cases where several data types have + operator family. In cases where several data types have similar behaviors, it is frequently useful to define cross-data-type operators and allow these to work with indexes. To do this, the operator classes for each of the types must be grouped into the same operator @@ -1147,13 +1147,13 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); - All indexes in PostgreSQL are secondary + All indexes in PostgreSQL are secondary indexes, meaning that each index is stored separately from the table's - main data area (which is called the table's heap - in PostgreSQL terminology). This means that in an + main data area (which is called the table's heap + in PostgreSQL terminology). This means that in an ordinary index scan, each row retrieval requires fetching data from both the index and the heap. Furthermore, while the index entries that match a - given indexable WHERE condition are usually close together in + given indexable WHERE condition are usually close together in the index, the table rows they reference might be anywhere in the heap. The heap-access portion of an index scan thus involves a lot of random access into the heap, which can be slow, particularly on traditional @@ -1163,8 +1163,8 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); - To solve this performance problem, PostgreSQL - supports index-only scans, which can answer queries from an + To solve this performance problem, PostgreSQL + supports index-only scans, which can answer queries from an index alone without any heap access. The basic idea is to return values directly out of each index entry instead of consulting the associated heap entry. 
There are two fundamental restrictions on when this method can be @@ -1187,8 +1187,8 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); The query must reference only columns stored in the index. For - example, given an index on columns x and y of a - table that also has a column z, these queries could use + example, given an index on columns x and y of a + table that also has a column z, these queries could use index-only scans: SELECT x, y FROM tab WHERE x = 'key'; @@ -1210,17 +1210,17 @@ SELECT x FROM tab WHERE x = 'key' AND z < 42; If these two fundamental requirements are met, then all the data values required by the query are available from the index, so an index-only scan is physically possible. But there is an additional requirement for any - table scan in PostgreSQL: it must verify that each - retrieved row be visible to the query's MVCC snapshot, as + table scan in PostgreSQL: it must verify that each + retrieved row be visible to the query's MVCC snapshot, as discussed in . Visibility information is not stored in index entries, only in heap entries; so at first glance it would seem that every row retrieval would require a heap access anyway. And this is indeed the case, if the table row has been modified recently. However, for seldom-changing data there is a way around this - problem. PostgreSQL tracks, for each page in a table's + problem. PostgreSQL tracks, for each page in a table's heap, whether all rows stored in that page are old enough to be visible to all current and future transactions. This information is stored in a bit - in the table's visibility map. An index-only scan, after + in the table's visibility map. An index-only scan, after finding a candidate index entry, checks the visibility map bit for the corresponding heap page. If it's set, the row is known visible and so the data can be returned with no further work. If it's not set, the heap @@ -1243,48 +1243,48 @@ SELECT x FROM tab WHERE x = 'key' AND z < 42; To make effective use of the index-only scan feature, you might choose to create indexes in which only the leading columns are meant to - match WHERE clauses, while the trailing columns - hold payload data to be returned by a query. For example, if + match WHERE clauses, while the trailing columns + hold payload data to be returned by a query. For example, if you commonly run queries like SELECT y FROM tab WHERE x = 'key'; the traditional approach to speeding up such queries would be to create an - index on x only. However, an index on (x, y) + index on x only. However, an index on (x, y) would offer the possibility of implementing this query as an index-only scan. As previously discussed, such an index would be larger and hence - more expensive than an index on x alone, so this is attractive + more expensive than an index on x alone, so this is attractive only if the table is known to be mostly static. Note it's important that - the index be declared on (x, y) not (y, x), as for + the index be declared on (x, y) not (y, x), as for most index types (particularly B-trees) searches that do not constrain the leading index columns are not very efficient. In principle, index-only scans can be used with expression indexes. - For example, given an index on f(x) where x is a + For example, given an index on f(x) where x is a table column, it should be possible to execute SELECT f(x) FROM tab WHERE f(x) < 1; - as an index-only scan; and this is very attractive if f() is - an expensive-to-compute function. 
However, PostgreSQL's + as an index-only scan; and this is very attractive if f() is + an expensive-to-compute function. However, PostgreSQL's planner is currently not very smart about such cases. It considers a query to be potentially executable by index-only scan only when - all columns needed by the query are available from the index. - In this example, x is not needed except in the - context f(x), but the planner does not notice that and + all columns needed by the query are available from the index. + In this example, x is not needed except in the + context f(x), but the planner does not notice that and concludes that an index-only scan is not possible. If an index-only scan seems sufficiently worthwhile, this can be worked around by declaring the - index to be on (f(x), x), where the second column is not + index to be on (f(x), x), where the second column is not expected to be used in practice but is just there to convince the planner that an index-only scan is possible. An additional caveat, if the goal is - to avoid recalculating f(x), is that the planner won't - necessarily match uses of f(x) that aren't in - indexable WHERE clauses to the index column. It will usually + to avoid recalculating f(x), is that the planner won't + necessarily match uses of f(x) that aren't in + indexable WHERE clauses to the index column. It will usually get this right in simple queries such as shown above, but not in queries that involve joins. These deficiencies may be remedied in future versions - of PostgreSQL. + of PostgreSQL. @@ -1299,13 +1299,13 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) SELECT target FROM tests WHERE subject = 'some-subject' AND success; - But there's a problem: the WHERE clause refers - to success which is not available as a result column of the + But there's a problem: the WHERE clause refers + to success which is not available as a result column of the index. Nonetheless, an index-only scan is possible because the plan does - not need to recheck that part of the WHERE clause at run time: - all entries found in the index necessarily have success = true + not need to recheck that part of the WHERE clause at run time: + all entries found in the index necessarily have success = true so this need not be explicitly checked in the - plan. PostgreSQL versions 9.6 and later will recognize + plan. PostgreSQL versions 9.6 and later will recognize such cases and allow index-only scans to be generated, but older versions will not. @@ -1321,7 +1321,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; - Although indexes in PostgreSQL do not need + Although indexes in PostgreSQL do not need maintenance or tuning, it is still important to check which indexes are actually used by the real-life query workload. Examining index usage for an individual query is done with the @@ -1388,8 +1388,8 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; their use. There are run-time parameters that can turn off various plan types (see ). For instance, turning off sequential scans - (enable_seqscan) and nested-loop joins - (enable_nestloop), which are the most basic plans, + (enable_seqscan) and nested-loop joins + (enable_nestloop), which are the most basic plans, will force the system to use a different plan. 
If the system still chooses a sequential scan or nested-loop join then there is probably a more fundamental reason why the index is not being @@ -1428,7 +1428,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; If you do not succeed in adjusting the costs to be more appropriate, then you might have to resort to forcing index usage explicitly. You might also want to contact the - PostgreSQL developers to examine the issue. + PostgreSQL developers to examine the issue. diff --git a/doc/src/sgml/info.sgml b/doc/src/sgml/info.sgml index 233ba0e668..6b9f1b5d81 100644 --- a/doc/src/sgml/info.sgml +++ b/doc/src/sgml/info.sgml @@ -15,9 +15,9 @@ The PostgreSQL wiki contains the project's FAQ + url="https://wiki.postgresql.org/wiki/Frequently_Asked_Questions">FAQ (Frequently Asked Questions) list, TODO list, and + url="https://wiki.postgresql.org/wiki/Todo">TODO list, and detailed information about many more topics. @@ -42,7 +42,7 @@ The mailing lists are a good place to have your questions answered, to share experiences with other users, and to contact - the developers. Consult the PostgreSQL web site + the developers. Consult the PostgreSQL web site for details. diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml index e07ff35bca..58c54254d7 100644 --- a/doc/src/sgml/information_schema.sgml +++ b/doc/src/sgml/information_schema.sgml @@ -35,12 +35,12 @@ This problem can appear when querying information schema views such - as check_constraint_routine_usage, - check_constraints, domain_constraints, and - referential_constraints. Some other views have similar + as check_constraint_routine_usage, + check_constraints, domain_constraints, and + referential_constraints. Some other views have similar issues but contain the table name to help distinguish duplicate - rows, e.g., constraint_column_usage, - constraint_table_usage, table_constraints. + rows, e.g., constraint_column_usage, + constraint_table_usage, table_constraints. @@ -384,19 +384,19 @@ character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -535,25 +535,25 @@ scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -572,7 +572,7 @@ is_derived_reference_attribute yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -1256,7 +1256,7 @@ The view columns contains information about all table columns (or view columns) in the database. System columns - (oid, etc.) are not included. Only those columns are + (oid, etc.) are not included. 
Only those columns are shown that the current user has access to (by way of being the owner or having some privilege). @@ -1441,19 +1441,19 @@ character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -1540,25 +1540,25 @@ scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -1577,7 +1577,7 @@ is_self_referencing yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -1648,13 +1648,13 @@ is_generated character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL generation_expression character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -2152,19 +2152,19 @@ character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -2300,25 +2300,25 @@ scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -2442,31 +2442,31 @@ ORDER BY c.ordinal_position; character_maximum_length cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL character_octet_length cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in 
PostgreSQL + Applies to a feature not available in PostgreSQL @@ -2501,37 +2501,37 @@ ORDER BY c.ordinal_position; numeric_precision cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL numeric_precision_radix cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL numeric_scale cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL datetime_precision cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL interval_type character_data - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL interval_precision cardinal_number - Always null, since this information is not applied to array element data types in PostgreSQL + Always null, since this information is not applied to array element data types in PostgreSQL @@ -2569,25 +2569,25 @@ ORDER BY c.ordinal_position; scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -3160,13 +3160,13 @@ ORDER BY c.ordinal_position; is_result yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL as_locator yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -3191,85 +3191,85 @@ ORDER BY c.ordinal_position; character_maximum_length cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL character_octet_length cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_catalog sql_identifier - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL collation_schema 
sql_identifier - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL collation_name sql_identifier - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL numeric_precision cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL numeric_precision_radix cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL numeric_scale cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL datetime_precision cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL interval_type character_data - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL interval_precision cardinal_number - Always null, since this information is not applied to parameter data types in PostgreSQL + Always null, since this information is not applied to parameter data types in PostgreSQL @@ -3301,25 +3301,25 @@ ORDER BY c.ordinal_position; scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -4045,37 +4045,37 @@ ORDER BY c.ordinal_position; module_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL module_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL module_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL udt_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL udt_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL udt_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4094,85 +4094,85 @@ ORDER BY c.ordinal_position; character_maximum_length cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL character_octet_length cardinal_number - Always null, since this information is not applied to return 
data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_catalog sql_identifier - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL collation_schema sql_identifier - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL collation_name sql_identifier - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL numeric_precision cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL numeric_precision_radix cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL numeric_scale cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL datetime_precision cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL interval_type character_data - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL interval_precision cardinal_number - Always null, since this information is not applied to return data types in PostgreSQL + Always null, since this information is not applied to return data types in PostgreSQL @@ -4204,25 +4204,25 @@ ORDER BY c.ordinal_position; scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL maximum_cardinality cardinal_number - Always null, because arrays always have unlimited maximum cardinality in PostgreSQL + Always null, because arrays always have unlimited maximum cardinality in PostgreSQL @@ -4283,7 +4283,7 @@ ORDER BY c.ordinal_position; character_data Always GENERAL (The SQL standard defines - other parameter styles, which are not available in PostgreSQL.) + other parameter styles, which are not available in PostgreSQL.) @@ -4294,7 +4294,7 @@ ORDER BY c.ordinal_position; If the function is declared immutable (called deterministic in the SQL standard), then YES, else NO. (You cannot query the other volatility - levels available in PostgreSQL through the information schema.) 
+ levels available in PostgreSQL through the information schema.) @@ -4304,7 +4304,7 @@ ORDER BY c.ordinal_position; Always MODIFIES, meaning that the function possibly modifies SQL data. This information is not useful for - PostgreSQL. + PostgreSQL. @@ -4321,7 +4321,7 @@ ORDER BY c.ordinal_position; sql_path character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4330,26 +4330,26 @@ ORDER BY c.ordinal_position; Always YES (The opposite would be a method of a user-defined type, which is a feature not available in - PostgreSQL.) + PostgreSQL.) max_dynamic_result_sets cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL is_user_defined_cast yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL is_implicitly_invocable yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4366,43 +4366,43 @@ ORDER BY c.ordinal_position; to_sql_specific_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL to_sql_specific_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL to_sql_specific_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL as_locator yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL created time_stamp - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL last_altered time_stamp - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL new_savepoint_level yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4411,152 +4411,152 @@ ORDER BY c.ordinal_position; Currently always NO. The alternative YES applies to a feature not available in - PostgreSQL. + PostgreSQL. 
result_cast_from_data_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_as_locator yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_max_length cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_octet_length character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_char_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_collation_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_collation_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_collation_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_numeric_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_numeric_precision_radix cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_numeric_scale cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_datetime_precision character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_interval_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_interval_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_type_udt_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_type_udt_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_type_udt_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_scope_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_scope_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_scope_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_maximum_cardinality cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL result_cast_dtd_identifier sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4606,25 +4606,25 @@ ORDER BY c.ordinal_position; 
default_character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL default_character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL default_character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL sql_path character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -4808,7 +4808,7 @@ ORDER BY c.ordinal_position; yes_or_no YES if the feature is fully supported by the - current version of PostgreSQL, NO if not + current version of PostgreSQL, NO if not @@ -4816,7 +4816,7 @@ ORDER BY c.ordinal_position; is_verified_bycharacter_data - Always null, since the PostgreSQL development group does not + Always null, since the PostgreSQL development group does not perform formal testing of feature conformance @@ -4982,7 +4982,7 @@ ORDER BY c.ordinal_position; character_data The programming language, if the binding style is - EMBEDDED, else null. PostgreSQL only + EMBEDDED, else null. PostgreSQL only supports the language C. @@ -5031,7 +5031,7 @@ ORDER BY c.ordinal_position; yes_or_no YES if the package is fully supported by the - current version of PostgreSQL, NO if not + current version of PostgreSQL, NO if not @@ -5039,7 +5039,7 @@ ORDER BY c.ordinal_position; is_verified_bycharacter_data - Always null, since the PostgreSQL development group does not + Always null, since the PostgreSQL development group does not perform formal testing of feature conformance @@ -5093,7 +5093,7 @@ ORDER BY c.ordinal_position; yes_or_no YES if the part is fully supported by the - current version of PostgreSQL, + current version of PostgreSQL, NO if not @@ -5102,7 +5102,7 @@ ORDER BY c.ordinal_position; is_verified_bycharacter_data - Always null, since the PostgreSQL development group does not + Always null, since the PostgreSQL development group does not perform formal testing of feature conformance @@ -5182,7 +5182,7 @@ ORDER BY c.ordinal_position; The table sql_sizing_profiles contains information about the sql_sizing values that are - required by various profiles of the SQL standard. PostgreSQL does + required by various profiles of the SQL standard. PostgreSQL does not track any SQL profiles, so this table is empty. 
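[Editor's note: as a quick way to inspect the conformance views described in this chapter — a hedged example; the column names follow the sql_features definition given above:

SELECT feature_id, feature_name, is_supported
FROM information_schema.sql_features
ORDER BY feature_id
LIMIT 10;]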
@@ -5465,13 +5465,13 @@ ORDER BY c.ordinal_position; self_referencing_column_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL reference_generation character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -5806,31 +5806,31 @@ ORDER BY c.ordinal_position; action_reference_old_table sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL action_reference_new_table sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL action_reference_old_row sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL action_reference_new_row sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL created time_stamp - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -5864,7 +5864,7 @@ ORDER BY c.ordinal_position; - Prior to PostgreSQL 9.1, this view's columns + Prior to PostgreSQL 9.1, this view's columns action_timing, action_reference_old_table, action_reference_new_table, @@ -6113,151 +6113,151 @@ ORDER BY c.ordinal_position; is_instantiable yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL is_final yes_or_no - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_form character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_category character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_routine_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_routine_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ordering_routine_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL reference_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL data_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_maximum_length cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_octet_length cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL character_set_name sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_catalog sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_schema sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL collation_name sql_identifier - 
Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL numeric_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL numeric_precision_radix cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL numeric_scale cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL datetime_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL interval_type character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL interval_precision cardinal_number - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL source_dtd_identifier sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL ref_dtd_identifier sql_identifier - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -6660,7 +6660,7 @@ ORDER BY c.ordinal_position; check_option character_data - Applies to a feature not available in PostgreSQL + Applies to a feature not available in PostgreSQL @@ -6686,8 +6686,8 @@ ORDER BY c.ordinal_position; is_trigger_updatable yes_or_no - YES if the view has an INSTEAD OF - UPDATE trigger defined on it, NO if not + YES if the view has an INSTEAD OF + UPDATE trigger defined on it, NO if not @@ -6695,8 +6695,8 @@ ORDER BY c.ordinal_position; is_trigger_deletable yes_or_no - YES if the view has an INSTEAD OF - DELETE trigger defined on it, NO if not + YES if the view has an INSTEAD OF + DELETE trigger defined on it, NO if not @@ -6704,8 +6704,8 @@ ORDER BY c.ordinal_position; is_trigger_insertable_intoyes_or_no - YES if the view has an INSTEAD OF - INSERT trigger defined on it, NO if not + YES if the view has an INSTEAD OF + INSERT trigger defined on it, NO if not diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml index 696c620b18..029e1dbc28 100644 --- a/doc/src/sgml/install-windows.sgml +++ b/doc/src/sgml/install-windows.sgml @@ -84,13 +84,13 @@ Microsoft Windows SDK version 6.0a to 8.1 or Visual Studio 2008 and above. Compilation is supported down to Windows XP and - Windows Server 2003 when building with - Visual Studio 2005 to + Windows Server 2003 when building with + Visual Studio 2005 to Visual Studio 2013. Building with Visual Studio 2015 is supported down to - Windows Vista and Windows Server 2008. + Windows Vista and Windows Server 2008. Building with Visual Studio 2017 is supported - down to Windows 7 SP1 and Windows Server 2008 R2 SP1. + down to Windows 7 SP1 and Windows Server 2008 R2 SP1. @@ -163,7 +163,7 @@ $ENV{MSBFLAGS}="/m"; Microsoft Windows SDK it is recommended that you upgrade to the latest version (currently version 7.1), available for download from - . + . You must always include the @@ -182,7 +182,7 @@ $ENV{MSBFLAGS}="/m"; ActiveState Perl is required to run the build generation scripts. MinGW or Cygwin Perl will not work. It must also be present in the PATH. Binaries can be downloaded from - + (Note: version 5.8.3 or later is required, the free Standard Distribution is sufficient). 
@@ -219,7 +219,7 @@ $ENV{MSBFLAGS}="/m"; Both Bison and Flex are included in the msys tool suite, available - from as part of the + from as part of the MinGW compiler suite. @@ -259,7 +259,7 @@ $ENV{MSBFLAGS}="/m"; Diff Diff is required to run the regression tests, and can be downloaded - from . + from . @@ -267,7 +267,7 @@ $ENV{MSBFLAGS}="/m"; Gettext Gettext is required to build with NLS support, and can be downloaded - from . Note that binaries, + from . Note that binaries, dependencies and developer files are all needed. @@ -277,7 +277,7 @@ $ENV{MSBFLAGS}="/m"; Required for GSSAPI authentication support. MIT Kerberos can be downloaded from - . + . @@ -286,8 +286,8 @@ $ENV{MSBFLAGS}="/m"; libxslt Required for XML support. Binaries can be downloaded from - or source from - . Note that libxml2 requires iconv, + or source from + . Note that libxml2 requires iconv, which is available from the same download location. @@ -296,8 +296,8 @@ $ENV{MSBFLAGS}="/m"; openssl Required for SSL support. Binaries can be downloaded from - - or source from . + + or source from . @@ -306,7 +306,7 @@ $ENV{MSBFLAGS}="/m"; Required for UUID-OSSP support (contrib only). Source can be downloaded from - . + . @@ -314,7 +314,7 @@ $ENV{MSBFLAGS}="/m"; Python Required for building PL/Python. Binaries can - be downloaded from . + be downloaded from . @@ -323,7 +323,7 @@ $ENV{MSBFLAGS}="/m"; Required for compression support in pg_dump and pg_restore. Binaries can be downloaded - from . + from . @@ -347,8 +347,8 @@ $ENV{MSBFLAGS}="/m"; - To use a server-side third party library such as python or - openssl, this library must also be + To use a server-side third party library such as python or + openssl, this library must also be 64-bit. There is no support for loading a 32-bit library in a 64-bit server. Several of the third party libraries that PostgreSQL supports may only be available in 32-bit versions, in which case they cannot be used with @@ -462,20 +462,20 @@ $ENV{CONFIG}="Debug"; Running the regression tests on client programs, with - vcregress bincheck, or on recovery tests, with - vcregress recoverycheck, requires an additional Perl module + vcregress bincheck, or on recovery tests, with + vcregress recoverycheck, requires an additional Perl module to be installed: IPC::Run - As of this writing, IPC::Run is not included in the + As of this writing, IPC::Run is not included in the ActiveState Perl installation, nor in the ActiveState Perl Package Manager (PPM) library. To install, download the - IPC-Run-<version>.tar.gz source archive from CPAN, - at , and - uncompress. Edit the buildenv.pl file, and add a PERL5LIB - variable to point to the lib subdirectory from the + IPC-Run-<version>.tar.gz source archive from CPAN, + at , and + uncompress. Edit the buildenv.pl file, and add a PERL5LIB + variable to point to the lib subdirectory from the extracted archive. For example: $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; @@ -498,7 +498,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; OpenJade 1.3.1-2 Download from - + and uncompress in the subdirectory openjade-1.3.1. @@ -507,7 +507,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; DocBook DTD 4.2 Download from - + and uncompress in the subdirectory docbook. @@ -516,7 +516,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; ISO character entities Download from - and + and uncompress in the subdirectory docbook. 
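[Editor's note: the per-environment settings shown in this section all live in buildenv.pl; collected here into one hypothetical file — the IPC-Run path is the example path from the text and will differ per machine:

# buildenv.pl -- example build customizations collected from this section
$ENV{MSBFLAGS}="/m";                                      # parallelize msbuild
$ENV{CONFIG}="Debug";                                     # build debug binaries
$ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib';   # make IPC::Run visible]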
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index f4e4fc7c5e..f8e1d60356 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -52,17 +52,17 @@ su - postgres In general, a modern Unix-compatible platform should be able to run - PostgreSQL. + PostgreSQL. The platforms that had received specific testing at the time of release are listed in - below. In the doc subdirectory of the distribution - there are several platform-specific FAQ documents you + below. In the doc subdirectory of the distribution + there are several platform-specific FAQ documents you might wish to consult if you are having trouble. The following software packages are required for building - PostgreSQL: + PostgreSQL: @@ -71,9 +71,9 @@ su - postgres make - GNU make version 3.80 or newer is required; other - make programs or older GNU make versions will not work. - (GNU make is sometimes installed under + GNU make version 3.80 or newer is required; other + make programs or older GNU make versions will not work. + (GNU make is sometimes installed under the name gmake.) To test for GNU make enter: @@ -84,19 +84,19 @@ su - postgres - You need an ISO/ANSI C compiler (at least + You need an ISO/ANSI C compiler (at least C89-compliant). Recent - versions of GCC are recommended, but - PostgreSQL is known to build using a wide variety + versions of GCC are recommended, but + PostgreSQL is known to build using a wide variety of compilers from different vendors. - tar is required to unpack the source + tar is required to unpack the source distribution, in addition to either - gzip or bzip2. + gzip or bzip2. @@ -109,23 +109,23 @@ su - postgres libedit - The GNU Readline library is used by + The GNU Readline library is used by default. It allows psql (the PostgreSQL command line SQL interpreter) to remember each command you type, and allows you to use arrow keys to recall and edit previous commands. This is very helpful and is strongly recommended. If you don't want to use it then you must specify the option to - configure. As an alternative, you can often use the + configure. As an alternative, you can often use the BSD-licensed libedit library, originally developed on NetBSD. The libedit library is GNU Readline-compatible and is used if libreadline is not found, or if is used as an - option to configure. If you are using a package-based + option to configure. If you are using a package-based Linux distribution, be aware that you need both the - readline and readline-devel packages, if + readline and readline-devel packages, if those are separate in your distribution. @@ -140,8 +140,8 @@ su - postgres used by default. If you don't want to use it then you must specify the option to configure. Using this option disables - support for compressed archives in pg_dump and - pg_restore. + support for compressed archives in pg_dump and + pg_restore. @@ -179,14 +179,14 @@ su - postgres If you intend to make more than incidental use of PL/Perl, you should ensure that the Perl installation was built with the - usemultiplicity option enabled (perl -V + usemultiplicity option enabled (perl -V will show whether this is the case). - To build the PL/Python server programming + To build the PL/Python server programming language, you need a Python installation with the header files and the distutils module. The minimum @@ -209,15 +209,15 @@ su - postgres find a shared libpython. 
That might mean that you either have to install additional packages or rebuild (part of) your Python installation to provide this shared - library. When building from source, run Python's - configure with the --enable-shared flag. + library. When building from source, run Python's + configure with the --enable-shared flag. To build the PL/Tcl - procedural language, you of course need a Tcl + procedural language, you of course need a Tcl installation. The minimum required version is Tcl 8.4. @@ -228,13 +228,13 @@ su - postgres To enable Native Language Support (NLS), that is, the ability to display a program's messages in a language other than English, you need an implementation of the - Gettext API. Some operating + Gettext API. Some operating systems have this built-in (e.g., Linux, NetBSD, - Solaris), for other systems you + class="osname">Linux, NetBSD, + Solaris), for other systems you can download an add-on package from . - If you are using the Gettext implementation in + If you are using the Gettext implementation in the GNU C library then you will additionally need the GNU Gettext package for some utility programs. For any of the other implementations you will @@ -244,7 +244,7 @@ su - postgres - You need OpenSSL, if you want to support + You need OpenSSL, if you want to support encrypted client connections. The minimum required version is 0.9.8. @@ -252,8 +252,8 @@ su - postgres - You need Kerberos, OpenLDAP, - and/or PAM, if you want to support authentication + You need Kerberos, OpenLDAP, + and/or PAM, if you want to support authentication using those services. @@ -289,12 +289,12 @@ su - postgres yacc - GNU Flex and Bison + GNU Flex and Bison are needed to build from a Git checkout, or if you changed the actual scanner and parser definition files. If you need them, be sure - to get Flex 2.5.31 or later and - Bison 1.875 or later. Other lex - and yacc programs cannot be used. + to get Flex 2.5.31 or later and + Bison 1.875 or later. Other lex + and yacc programs cannot be used. @@ -303,10 +303,10 @@ su - postgres perl - Perl 5.8.3 or later is needed to build from a Git checkout, + Perl 5.8.3 or later is needed to build from a Git checkout, or if you changed the input files for any of the build steps that use Perl scripts. If building on Windows you will need - Perl in any case. Perl is + Perl in any case. Perl is also required to run some test suites. @@ -316,7 +316,7 @@ su - postgres If you need to get a GNU package, you can find it at your local GNU mirror site (see + url="http://www.gnu.org/order/ftp.html"> for a list) or at . @@ -337,7 +337,7 @@ su - postgres Getting The Source - The PostgreSQL &version; sources can be obtained from the + The PostgreSQL &version; sources can be obtained from the download section of our website: . You should get a file named postgresql-&version;.tar.gz @@ -351,7 +351,7 @@ su - postgres have the .bz2 file.) This will create a directory postgresql-&version; under the current directory - with the PostgreSQL sources. + with the PostgreSQL sources. Change into that directory for the rest of the installation procedure. @@ -377,7 +377,7 @@ su - postgres The first step of the installation procedure is to configure the source tree for your system and choose the options you would like. - This is done by running the configure script. For a + This is done by running the configure script. 
For a default installation simply enter: ./configure @@ -403,7 +403,7 @@ su - postgres The default configuration will build the server and utilities, as well as all client applications and interfaces that require only a C compiler. All files will be installed under - /usr/local/pgsql by default. + /usr/local/pgsql by default. @@ -413,14 +413,14 @@ su - postgres - + - Install all files under the directory PREFIX + Install all files under the directory PREFIX instead of /usr/local/pgsql. The actual files will be installed into various subdirectories; no files will ever be installed directly into the - PREFIX directory. + PREFIX directory. @@ -428,13 +428,13 @@ su - postgres individual subdirectories with the following options. However, if you leave these with their defaults, the installation will be relocatable, meaning you can move the directory after - installation. (The man and doc + installation. (The man and doc locations are not affected by this.) For relocatable installs, you might want to use - configure's --disable-rpath + configure's --disable-rpath option. Also, you will need to tell the operating system how to find the shared libraries. @@ -442,15 +442,15 @@ su - postgres - + You can install architecture-dependent files under a - different prefix, EXEC-PREFIX, than what - PREFIX was set to. This can be useful to + different prefix, EXEC-PREFIX, than what + PREFIX was set to. This can be useful to share architecture-independent files between hosts. If you - omit this, then EXEC-PREFIX is set equal to - PREFIX and both architecture-dependent and + omit this, then EXEC-PREFIX is set equal to + PREFIX and both architecture-dependent and independent files will be installed under the same tree, which is probably what you want. @@ -458,114 +458,114 @@ su - postgres - + Specifies the directory for executable programs. The default - is EXEC-PREFIX/bin, which - normally means /usr/local/pgsql/bin. + is EXEC-PREFIX/bin, which + normally means /usr/local/pgsql/bin. - + Sets the directory for various configuration files, - PREFIX/etc by default. + PREFIX/etc by default. - + Sets the location to install libraries and dynamically loadable modules. The default is - EXEC-PREFIX/lib. + EXEC-PREFIX/lib. - + Sets the directory for installing C and C++ header files. The - default is PREFIX/include. + default is PREFIX/include. - + Sets the root directory for various types of read-only data files. This only sets the default for some of the following options. The default is - PREFIX/share. + PREFIX/share. - + Sets the directory for read-only data files used by the installed programs. The default is - DATAROOTDIR. Note that this has + DATAROOTDIR. Note that this has nothing to do with where your database files will be placed. - + Sets the directory for installing locale data, in particular message translation catalog files. The default is - DATAROOTDIR/locale. + DATAROOTDIR/locale. - + - The man pages that come with PostgreSQL will be installed under + The man pages that come with PostgreSQL will be installed under this directory, in their respective - manx subdirectories. - The default is DATAROOTDIR/man. + manx subdirectories. + The default is DATAROOTDIR/man. - + Sets the root directory for installing documentation files, - except man pages. This only sets the default for + except man pages. This only sets the default for the following options. The default value for this option is - DATAROOTDIR/doc/postgresql. + DATAROOTDIR/doc/postgresql. 
- + The HTML-formatted documentation for PostgreSQL will be installed under this directory. The default is - DATAROOTDIR. + DATAROOTDIR. @@ -574,15 +574,15 @@ su - postgres Care has been taken to make it possible to install - PostgreSQL into shared installation locations + PostgreSQL into shared installation locations (such as /usr/local/include) without interfering with the namespace of the rest of the system. First, the string /postgresql is automatically appended to datadir, sysconfdir, and docdir, unless the fully expanded directory name already contains the - string postgres or - pgsql. For example, if you choose + string postgres or + pgsql. For example, if you choose /usr/local as prefix, the documentation will be installed in /usr/local/doc/postgresql, but if the prefix is /opt/postgres, then it @@ -602,10 +602,10 @@ su - postgres - + - Append STRING to the PostgreSQL version number. You + Append STRING to the PostgreSQL version number. You can use this, for example, to mark binaries built from unreleased Git snapshots or containing custom patches with an extra version string such as a git describe identifier or a @@ -615,35 +615,35 @@ su - postgres - + - DIRECTORIES is a colon-separated list of + DIRECTORIES is a colon-separated list of directories that will be added to the list the compiler searches for header files. If you have optional packages - (such as GNU Readline) installed in a non-standard + (such as GNU Readline) installed in a non-standard location, you have to use this option and probably also the corresponding - option. - Example: --with-includes=/opt/gnu/include:/usr/sup/include. + Example: --with-includes=/opt/gnu/include:/usr/sup/include. - + - DIRECTORIES is a colon-separated list of + DIRECTORIES is a colon-separated list of directories to search for libraries. You will probably have to use this option (and the corresponding - option) if you have packages installed in non-standard locations. - Example: --with-libraries=/opt/gnu/lib:/usr/sup/lib. + Example: --with-libraries=/opt/gnu/lib:/usr/sup/lib. @@ -657,7 +657,7 @@ su - postgres language other than English. LANGUAGES is an optional space-separated list of codes of the languages that you want supported, for - example --enable-nls='de fr'. (The intersection + example --enable-nls='de fr'. (The intersection between your list and the set of actually provided translations will be computed automatically.) If you do not specify a list, then all available translations are @@ -666,22 +666,22 @@ su - postgres To use this option, you will need an implementation of the - Gettext API; see above. + Gettext API; see above. - + - Set NUMBER as the default port number for + Set NUMBER as the default port number for server and clients. The default is 5432. The port can always be changed later on, but if you specify it here then both server and clients will have the same default compiled in, which can be very convenient. Usually the only good reason to select a non-default value is if you intend to run multiple - PostgreSQL servers on the same machine. + PostgreSQL servers on the same machine. @@ -690,7 +690,7 @@ su - postgres - Build the PL/Perl server-side language. + Build the PL/Perl server-side language. @@ -699,7 +699,7 @@ su - postgres - Build the PL/Python server-side language. + Build the PL/Python server-side language. @@ -708,7 +708,7 @@ su - postgres - Build the PL/Tcl server-side language. + Build the PL/Tcl server-side language. @@ -734,10 +734,10 @@ su - postgres Build with support for GSSAPI authentication. 
On many systems, the GSSAPI (usually a part of the Kerberos installation) system is not installed in a location - that is searched by default (e.g., /usr/include, - /usr/lib), so you must use the options -
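[Editor's note: tying several of the configure options above together — a purely illustrative invocation; the prefix, port, and search paths are placeholders, not recommendations:

./configure --prefix=/opt/postgres --with-pgport=5433 \
    --with-perl --with-openssl \
    --with-includes=/opt/gnu/include --with-libraries=/opt/gnu/lib]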
- <filename>intarray</> Functions + <filename>intarray</filename> Functions @@ -59,7 +59,7 @@ sort(int[], text dir)sort int[] - sort array — dir must be asc or desc + sort array — dir must be asc or desc sort('{1,2,3}'::int[], 'desc') {3,2,1} @@ -99,7 +99,7 @@ idx(int[], int item)idx int - index of first element matching item (0 if none) + index of first element matching item (0 if none) idx(array[11,22,33,22,11], 22) 2 @@ -107,7 +107,7 @@ subarray(int[], int start, int len)subarray int[] - portion of array starting at position start, len elements + portion of array starting at position start, len elements subarray('{1,2,3,2,1}'::int[], 2, 3) {2,3,2} @@ -115,7 +115,7 @@ subarray(int[], int start) int[] - portion of array starting at position start + portion of array starting at position start subarray('{1,2,3,2,1}'::int[], 2) {2,3,2,1} @@ -133,7 +133,7 @@
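[Editor's note: the functions above compose naturally; a hedged one-liner using the documented idx and subarray signatures (requires the intarray extension):

-- position of the first 22, then the tail of the array starting there
SELECT idx(a, 22) AS pos,
       subarray(a, idx(a, 22)) AS tail
FROM (VALUES ('{11,22,33,22,11}'::int[])) AS t(a);
-- pos = 2, tail = {22,33,22,11}]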
- <filename>intarray</> Operators + <filename>intarray</filename> Operators @@ -148,17 +148,17 @@ int[] && int[] boolean - overlap — true if arrays have at least one common element + overlap — true if arrays have at least one common element int[] @> int[] boolean - contains — true if left array contains right array + contains — true if left array contains right array int[] <@ int[] boolean - contained — true if left array is contained in right array + contained — true if left array is contained in right array # int[] @@ -168,7 +168,7 @@ int[] # int int - index (same as idx function) + index (same as idx function) int[] + int @@ -208,28 +208,28 @@ int[] @@ query_int boolean - true if array satisfies query (see below) + true if array satisfies query (see below) query_int ~~ int[] boolean - true if array satisfies query (commutator of @@) + true if array satisfies query (commutator of @@)
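[Editor's note: a few of the table's operators in action — a short hedged snippet, again assuming the intarray extension is installed:

SELECT ARRAY[1,2,3] && ARRAY[3,4,5];   -- t: at least one common element
SELECT ARRAY[1,2,3] @> ARRAY[2,3];     -- t: left array contains right array
SELECT ARRAY[11,22,33] # 22;           -- 2: index of first match, like idx()
SELECT # ARRAY[11,22,33];              -- 3: number of elements]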
- (Before PostgreSQL 8.2, the containment operators @> and - <@ were respectively called @ and ~. + (Before PostgreSQL 8.2, the containment operators @> and + <@ were respectively called @ and ~. These names are still available, but are deprecated and will eventually be retired. Notice that the old names are reversed from the convention formerly followed by the core geometric data types!) - The operators &&, @> and - <@ are equivalent to PostgreSQL's built-in + The operators &&, @> and + <@ are equivalent to PostgreSQL's built-in operators of the same names, except that they work only on integer arrays that do not contain nulls, while the built-in operators work for any array type. This restriction makes them faster than the built-in operators @@ -237,14 +237,14 @@ - The @@ and ~~ operators test whether an array - satisfies a query, which is expressed as a value of a - specialized data type query_int. A query + The @@ and ~~ operators test whether an array + satisfies a query, which is expressed as a value of a + specialized data type query_int. A query consists of integer values that are checked against the elements of - the array, possibly combined using the operators & - (AND), | (OR), and ! (NOT). Parentheses + the array, possibly combined using the operators & + (AND), | (OR), and ! (NOT). Parentheses can be used as needed. For example, - the query 1&(2|3) matches arrays that contain 1 + the query 1&(2|3) matches arrays that contain 1 and also contain either 2 or 3.
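[Editor's note: a quick demonstration of the query_int syntax just described, with the document's own example query 1&(2|3):

SELECT ARRAY[1,2,4] @@ '1&(2|3)'::query_int;   -- t: contains 1, and contains 2
SELECT ARRAY[1,4,5] @@ '1&(2|3)'::query_int;   -- f: contains 1, but neither 2 nor 3]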
@@ -253,16 +253,16 @@ Index Support - intarray provides index support for the - &&, @>, <@, - and @@ operators, as well as regular array equality. + intarray provides index support for the + &&, @>, <@, + and @@ operators, as well as regular array equality. Two GiST index operator classes are provided: - gist__int_ops (used by default) is suitable for + gist__int_ops (used by default) is suitable for small- to medium-size data sets, while - gist__intbig_ops uses a larger signature and is more + gist__intbig_ops uses a larger signature and is more suitable for indexing large data sets (i.e., columns containing a large number of distinct array values). The implementation uses an RD-tree data structure with @@ -271,7 +271,7 @@ There is also a non-default GIN operator class - gin__int_ops supporting the same operators. + gin__int_ops supporting the same operators. @@ -284,7 +284,7 @@ Example --- a message can be in one or more sections +-- a message can be in one or more sections CREATE TABLE message (mid INT PRIMARY KEY, sections INT[], ...); -- create specialized index @@ -305,9 +305,9 @@ SELECT message.mid FROM message WHERE message.sections @@ '1&2'::query_int; Benchmark - The source directory contrib/intarray/bench contains a + The source directory contrib/intarray/bench contains a benchmark test suite, which can be run against an installed - PostgreSQL server. (It also requires DBD::Pg + PostgreSQL server. (It also requires DBD::Pg to be installed.) To run: @@ -320,7 +320,7 @@ psql -c "CREATE EXTENSION intarray" TEST - The bench.pl script has numerous options, which + The bench.pl script has numerous options, which are displayed when it is run without any arguments. diff --git a/doc/src/sgml/intro.sgml b/doc/src/sgml/intro.sgml index f0dba6f56f..2fb19725f0 100644 --- a/doc/src/sgml/intro.sgml +++ b/doc/src/sgml/intro.sgml @@ -32,7 +32,7 @@ documents the SQL query language environment, including data types and functions, as well as user-level performance tuning. Every - PostgreSQL user should read this. + PostgreSQL user should read this. @@ -75,7 +75,7 @@ contains assorted information that might be of - use to PostgreSQL developers. + use to PostgreSQL developers. diff --git a/doc/src/sgml/isn.sgml b/doc/src/sgml/isn.sgml index c1da702df6..329b7b2c54 100644 --- a/doc/src/sgml/isn.sgml +++ b/doc/src/sgml/isn.sgml @@ -123,7 +123,7 @@ UPC numbers are a subset of the EAN13 numbers (they are basically - EAN13 without the first 0 digit). + EAN13 without the first 0 digit). All UPC, ISBN, ISMN and ISSN numbers can be represented as EAN13 @@ -139,7 +139,7 @@ - The ISBN, ISMN, and ISSN types will display the + The ISBN, ISMN, and ISSN types will display the short version of the number (ISxN 10) whenever it's possible, and will show ISxN 13 format for numbers that do not fit in the short version. The EAN13, ISBN13, ISMN13 and @@ -152,7 +152,7 @@ Casts - The isn module provides the following pairs of type casts: + The isn module provides the following pairs of type casts: @@ -209,7 +209,7 @@ - When casting from EAN13 to another type, there is a run-time + When casting from EAN13 to another type, there is a run-time check that the value is within the domain of the other type, and an error is thrown if not. The other casts are simply relabelings that will always succeed. @@ -220,15 +220,15 @@ Functions and Operators - The isn module provides the standard comparison operators, + The isn module provides the standard comparison operators, plus B-tree and hash indexing support for all these data types. 
In addition there are several specialized functions; shown in . In this table, - isn means any one of the module's data types. + isn means any one of the module's data types. - <filename>isn</> Functions + <filename>isn</filename> Functions @@ -285,21 +285,21 @@ When you insert invalid numbers in a table using the weak mode, the number will be inserted with the corrected check digit, but it will be displayed - with an exclamation mark (!) at the end, for example - 0-11-000322-5!. This invalid marker can be checked with - the is_valid function and cleared with the - make_valid function. + with an exclamation mark (!) at the end, for example + 0-11-000322-5!. This invalid marker can be checked with + the is_valid function and cleared with the + make_valid function. You can also force the insertion of invalid numbers even when not in the - weak mode, by appending the ! character at the end of the + weak mode, by appending the ! character at the end of the number. Another special feature is that during input, you can write - ? in place of the check digit, and the correct check digit + ? in place of the check digit, and the correct check digit will be inserted automatically. @@ -384,7 +384,7 @@ SELECT isbn13(id) FROM test; This module was inspired by Garrett A. Wollman's - isbn_issn code. + isbn_issn code. diff --git a/doc/src/sgml/json.sgml b/doc/src/sgml/json.sgml index 7dfdf96764..05ecef2ffc 100644 --- a/doc/src/sgml/json.sgml +++ b/doc/src/sgml/json.sgml @@ -1,7 +1,7 @@ - <acronym>JSON</> Types + <acronym>JSON</acronym> Types JSON @@ -22,25 +22,25 @@ - There are two JSON data types: json and jsonb. - They accept almost identical sets of values as + There are two JSON data types: json and jsonb. + They accept almost identical sets of values as input. The major practical difference is one of efficiency. The - json data type stores an exact copy of the input text, + json data type stores an exact copy of the input text, which processing functions must reparse on each execution; while - jsonb data is stored in a decomposed binary format that + jsonb data is stored in a decomposed binary format that makes it slightly slower to input due to added conversion overhead, but significantly faster to process, since no reparsing - is needed. jsonb also supports indexing, which can be a + is needed. jsonb also supports indexing, which can be a significant advantage. - Because the json type stores an exact copy of the input text, it + Because the json type stores an exact copy of the input text, it will preserve semantically-insignificant white space between tokens, as well as the order of keys within JSON objects. Also, if a JSON object within the value contains the same key more than once, all the key/value pairs are kept. (The processing functions consider the last value as the - operative one.) By contrast, jsonb does not preserve white + operative one.) By contrast, jsonb does not preserve white space, does not preserve the order of object keys, and does not keep duplicate object keys. If duplicate keys are specified in the input, only the last value is kept. @@ -48,7 +48,7 @@ In general, most applications should prefer to store JSON data as - jsonb, unless there are quite specialized needs, such as + jsonb, unless there are quite specialized needs, such as legacy assumptions about ordering of object keys. @@ -64,15 +64,15 @@ RFC 7159 permits JSON strings to contain Unicode escape sequences - denoted by \uXXXX. 
In the input - function for the json type, Unicode escapes are allowed + denoted by \uXXXX. In the input + function for the json type, Unicode escapes are allowed regardless of the database encoding, and are checked only for syntactic - correctness (that is, that four hex digits follow \u). - However, the input function for jsonb is stricter: it disallows - Unicode escapes for non-ASCII characters (those above U+007F) - unless the database encoding is UTF8. The jsonb type also - rejects \u0000 (because that cannot be represented in - PostgreSQL's text type), and it insists + correctness (that is, that four hex digits follow \u). + However, the input function for jsonb is stricter: it disallows + Unicode escapes for non-ASCII characters (those above U+007F) + unless the database encoding is UTF8. The jsonb type also + rejects \u0000 (because that cannot be represented in + PostgreSQL's text type), and it insists that any use of Unicode surrogate pairs to designate characters outside the Unicode Basic Multilingual Plane be correct. Valid Unicode escapes are converted to the equivalent ASCII or UTF8 character for storage; @@ -84,8 +84,8 @@ Many of the JSON processing functions described in will convert Unicode escapes to regular characters, and will therefore throw the same types of errors - just described even if their input is of type json - not jsonb. The fact that the json input function does + just described even if their input is of type json + not jsonb. The fact that the json input function does not make these checks may be considered a historical artifact, although it does allow for simple storage (without processing) of JSON Unicode escapes in a non-UTF8 database encoding. In general, it is best to @@ -95,22 +95,22 @@ - When converting textual JSON input into jsonb, the primitive - types described by RFC 7159 are effectively mapped onto + When converting textual JSON input into jsonb, the primitive + types described by RFC 7159 are effectively mapped onto native PostgreSQL types, as shown in . Therefore, there are some minor additional constraints on what constitutes valid jsonb data that do not apply to the json type, nor to JSON in the abstract, corresponding to limits on what can be represented by the underlying data type. - Notably, jsonb will reject numbers that are outside the - range of the PostgreSQL numeric data - type, while json will not. Such implementation-defined - restrictions are permitted by RFC 7159. However, in + Notably, jsonb will reject numbers that are outside the + range of the PostgreSQL numeric data + type, while json will not. Such implementation-defined + restrictions are permitted by RFC 7159. However, in practice such problems are far more likely to occur in other - implementations, as it is common to represent JSON's number + implementations, as it is common to represent JSON's number primitive type as IEEE 754 double precision floating point - (which RFC 7159 explicitly anticipates and allows for). + (which RFC 7159 explicitly anticipates and allows for). When using JSON as an interchange format with such systems, the danger of losing numeric precision compared to data originally stored by PostgreSQL should be considered. 
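A minimal sketch of that difference from C, assuming conn is an established connection; the exponent below is chosen only on the assumption that it exceeds numeric's range on the server:

#include <stdio.h>
#include <libpq-fe.h>

/* json keeps the literal text, so only the syntax check runs; jsonb
 * converts the number to numeric and rejects it as out of range. */
static void numeric_range_demo(PGconn *conn)
{
    PGresult *as_json  = PQexec(conn, "SELECT '1e200000'::json");
    PGresult *as_jsonb = PQexec(conn, "SELECT '1e200000'::jsonb");

    printf("json:  %s\n", PQresultStatus(as_json) == PGRES_TUPLES_OK
                          ? "accepted" : "rejected");
    printf("jsonb: %s\n", PQresultStatus(as_jsonb) == PGRES_TUPLES_OK
                          ? "accepted" : "rejected");

    PQclear(as_json);
    PQclear(as_jsonb);
}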
@@ -134,23 +134,23 @@ - string - text - \u0000 is disallowed, as are non-ASCII Unicode + string + text + \u0000 is disallowed, as are non-ASCII Unicode escapes if database encoding is not UTF8 - number - numeric + number + numeric NaN and infinity values are disallowed - boolean - boolean + boolean + boolean Only lowercase true and false spellings are accepted - null + null (none) SQL NULL is a different concept @@ -162,10 +162,10 @@ JSON Input and Output Syntax The input/output syntax for the JSON data types is as specified in - RFC 7159. + RFC 7159. - The following are all valid json (or jsonb) expressions: + The following are all valid json (or jsonb) expressions: -- Simple scalar/primitive value -- Primitive values can be numbers, quoted strings, true, false, or null @@ -185,8 +185,8 @@ SELECT '{"foo": [true, "bar"], "tags": {"a": 1, "b": null}}'::json; As previously stated, when a JSON value is input and then printed without - any additional processing, json outputs the same text that was - input, while jsonb does not preserve semantically-insignificant + any additional processing, json outputs the same text that was + input, while jsonb does not preserve semantically-insignificant details such as whitespace. For example, note the differences here: SELECT '{"bar": "baz", "balance": 7.77, "active":false}'::json; @@ -202,9 +202,9 @@ SELECT '{"bar": "baz", "balance": 7.77, "active":false}'::jsonb; (1 row) One semantically-insignificant detail worth noting is that - in jsonb, numbers will be printed according to the behavior of the - underlying numeric type. In practice this means that numbers - entered with E notation will be printed without it, for + in jsonb, numbers will be printed according to the behavior of the + underlying numeric type. In practice this means that numbers + entered with E notation will be printed without it, for example: SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; @@ -213,7 +213,7 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; {"reading": 1.230e-5} | {"reading": 0.00001230} (1 row) - However, jsonb will preserve trailing fractional zeroes, as seen + However, jsonb will preserve trailing fractional zeroes, as seen in this example, even though those are semantically insignificant for purposes such as equality checks. @@ -231,7 +231,7 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; have a somewhat fixed structure. The structure is typically unenforced (though enforcing some business rules declaratively is possible), but having a predictable structure makes it easier to write - queries that usefully summarize a set of documents (datums) + queries that usefully summarize a set of documents (datums) in a table. @@ -249,7 +249,7 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; - <type>jsonb</> Containment and Existence + <type>jsonb</type> Containment and Existence jsonb containment @@ -259,10 +259,10 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb; existence - Testing containment is an important capability of - jsonb. There is no parallel set of facilities for the - json type. Containment tests whether - one jsonb document has contained within it another one. + Testing containment is an important capability of + jsonb. There is no parallel set of facilities for the + json type. Containment tests whether + one jsonb document has contained within it another one. 
These examples return true except as noted: @@ -282,7 +282,7 @@ SELECT '[1, 2, 3]'::jsonb @> '[1, 2, 2]'::jsonb; -- within the object on the left side: SELECT '{"product": "PostgreSQL", "version": 9.4, "jsonb": true}'::jsonb @> '{"version": 9.4}'::jsonb; --- The array on the right side is not considered contained within the +-- The array on the right side is not considered contained within the -- array on the left, even though a similar array is nested within it: SELECT '[1, 2, [1, 3]]'::jsonb @> '[1, 3]'::jsonb; -- yields false @@ -319,10 +319,10 @@ SELECT '"bar"'::jsonb @> '["bar"]'::jsonb; -- yields false - jsonb also has an existence operator, which is + jsonb also has an existence operator, which is a variation on the theme of containment: it tests whether a string - (given as a text value) appears as an object key or array - element at the top level of the jsonb value. + (given as a text value) appears as an object key or array + element at the top level of the jsonb value. These examples return true except as noted: @@ -353,11 +353,11 @@ SELECT '"foo"'::jsonb ? 'foo'; Because JSON containment is nested, an appropriate query can skip explicit selection of sub-objects. As an example, suppose that we have - a doc column containing objects at the top level, with - most objects containing tags fields that contain arrays of + a doc column containing objects at the top level, with + most objects containing tags fields that contain arrays of sub-objects. This query finds entries in which sub-objects containing - both "term":"paris" and "term":"food" appear, - while ignoring any such keys outside the tags array: + both "term":"paris" and "term":"food" appear, + while ignoring any such keys outside the tags array: SELECT doc->'site_name' FROM websites WHERE doc @> '{"tags":[{"term":"paris"}, {"term":"food"}]}'; @@ -385,7 +385,7 @@ SELECT doc->'site_name' FROM websites - <type>jsonb</> Indexing + <type>jsonb</type> Indexing jsonb indexes on @@ -394,23 +394,23 @@ SELECT doc->'site_name' FROM websites GIN indexes can be used to efficiently search for keys or key/value pairs occurring within a large number of - jsonb documents (datums). - Two GIN operator classes are provided, offering different + jsonb documents (datums). + Two GIN operator classes are provided, offering different performance and flexibility trade-offs. - The default GIN operator class for jsonb supports queries with - top-level key-exists operators ?, ?& - and ?| operators and path/value-exists operator - @>. + The default GIN operator class for jsonb supports queries with + top-level key-exists operators ?, ?& + and ?| operators and path/value-exists operator + @>. (For details of the semantics that these operators implement, see .) An example of creating an index with this operator class is: CREATE INDEX idxgin ON api USING GIN (jdoc); - The non-default GIN operator class jsonb_path_ops - supports indexing the @> operator only. + The non-default GIN operator class jsonb_path_ops + supports indexing the @> operator only. An example of creating an index with this operator class is: CREATE INDEX idxginp ON api USING GIN (jdoc jsonb_path_ops); @@ -438,8 +438,8 @@ CREATE INDEX idxginp ON api USING GIN (jdoc jsonb_path_ops); ] } - We store these documents in a table named api, - in a jsonb column named jdoc. + We store these documents in a table named api, + in a jsonb column named jdoc. 
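A sketch of the corresponding setup from C: the table, column, and index names come from the surrounding examples, the id column is an assumption, and conn is assumed to be an established connection:

#include <stdio.h>
#include <libpq-fe.h>

/* Create the api table and the default (jsonb_ops) GIN index used
 * in the surrounding examples. */
static int setup_api_table(PGconn *conn)
{
    const char *stmts[] = {
        "CREATE TABLE api (id serial PRIMARY KEY, jdoc jsonb)",
        "CREATE INDEX idxgin ON api USING GIN (jdoc)",
    };

    for (size_t i = 0; i < sizeof(stmts) / sizeof(stmts[0]); i++)
    {
        PGresult *res = PQexec(conn, stmts[i]);

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "setup failed: %s", PQerrorMessage(conn));
            PQclear(res);
            return 0;
        }
        PQclear(res);
    }
    return 1;
}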
If a GIN index is created on this column, queries like the following can make use of the index: @@ -447,23 +447,23 @@ CREATE INDEX idxginp ON api USING GIN (jdoc jsonb_path_ops); SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company": "Magnafone"}'; However, the index could not be used for queries like the - following, because though the operator ? is indexable, - it is not applied directly to the indexed column jdoc: + following, because though the operator ? is indexable, + it is not applied directly to the indexed column jdoc: -- Find documents in which the key "tags" contains key or array element "qui" SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc -> 'tags' ? 'qui'; Still, with appropriate use of expression indexes, the above query can use an index. If querying for particular items within - the "tags" key is common, defining an index like this + the "tags" key is common, defining an index like this may be worthwhile: CREATE INDEX idxgintags ON api USING GIN ((jdoc -> 'tags')); - Now, the WHERE clause jdoc -> 'tags' ? 'qui' + Now, the WHERE clause jdoc -> 'tags' ? 'qui' will be recognized as an application of the indexable - operator ? to the indexed - expression jdoc -> 'tags'. + operator ? to the indexed + expression jdoc -> 'tags'. (More information on expression indexes can be found in .) @@ -473,11 +473,11 @@ CREATE INDEX idxgintags ON api USING GIN ((jdoc -> 'tags')); -- Find documents in which the key "tags" contains array element "qui" SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qui"]}'; - A simple GIN index on the jdoc column can support this + A simple GIN index on the jdoc column can support this query. But note that such an index will store copies of every key and - value in the jdoc column, whereas the expression index + value in the jdoc column, whereas the expression index of the previous example stores only data found under - the tags key. While the simple-index approach is far more + the tags key. While the simple-index approach is far more flexible (since it supports queries about any key), targeted expression indexes are likely to be smaller and faster to search than a simple index. @@ -485,7 +485,7 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qu Although the jsonb_path_ops operator class supports - only queries with the @> operator, it has notable + only queries with the @> operator, it has notable performance advantages over the default operator class jsonb_ops. A jsonb_path_ops index is usually much smaller than a jsonb_ops @@ -503,7 +503,7 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qu data. - For this purpose, the term value includes array elements, + For this purpose, the term value includes array elements, though JSON terminology sometimes considers array elements distinct from values within objects. @@ -511,13 +511,13 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qu Basically, each jsonb_path_ops index item is a hash of the value and the key(s) leading to it; for example to index {"foo": {"bar": "baz"}}, a single index item would - be created incorporating all three of foo, bar, - and baz into the hash value. Thus a containment query + be created incorporating all three of foo, bar, + and baz into the hash value. 
Thus a containment query looking for this structure would result in an extremely specific index - search; but there is no way at all to find out whether foo + search; but there is no way at all to find out whether foo appears as a key. On the other hand, a jsonb_ops - index would create three index items representing foo, - bar, and baz separately; then to do the + index would create three index items representing foo, + bar, and baz separately; then to do the containment query, it would look for rows containing all three of these items. While GIN indexes can perform such an AND search fairly efficiently, it will still be less specific and slower than the @@ -531,15 +531,15 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"tags": ["qu that it produces no index entries for JSON structures not containing any values, such as {"a": {}}. If a search for documents containing such a structure is requested, it will require a - full-index scan, which is quite slow. jsonb_path_ops is + full-index scan, which is quite slow. jsonb_path_ops is therefore ill-suited for applications that often perform such searches. - jsonb also supports btree and hash + jsonb also supports btree and hash indexes. These are usually useful only if it's important to check equality of complete JSON documents. - The btree ordering for jsonb datums is seldom + The btree ordering for jsonb datums is seldom of great interest, but for completeness it is: Object > Array > Boolean > Number > String > Null diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 0aedd837dc..a7e2653371 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -13,23 +13,23 @@ libpq is the C - application programmer's interface to PostgreSQL. - libpq is a set of library functions that allow - client programs to pass queries to the PostgreSQL + application programmer's interface to PostgreSQL. + libpq is a set of library functions that allow + client programs to pass queries to the PostgreSQL backend server and to receive the results of these queries. - libpq is also the underlying engine for several - other PostgreSQL application interfaces, including - those written for C++, Perl, Python, Tcl and ECPG. - So some aspects of libpq's behavior will be + libpq is also the underlying engine for several + other PostgreSQL application interfaces, including + those written for C++, Perl, Python, Tcl and ECPG. + So some aspects of libpq's behavior will be important to you if you use one of those packages. In particular, , and describe behavior that is visible to the user of any application - that uses libpq. + that uses libpq. @@ -42,7 +42,7 @@ Client programs that use libpq must include the header file - libpq-fe.hlibpq-fe.h + libpq-fe.hlibpq-fe.h and must link with the libpq library. @@ -55,13 +55,13 @@ application program can have several backend connections open at one time. (One reason to do that is to access more than one database.) Each connection is represented by a - PGconnPGconn object, which - is obtained from the function PQconnectdb, - PQconnectdbParams, or - PQsetdbLogin. Note that these functions will always + PGconnPGconn object, which + is obtained from the function PQconnectdb, + PQconnectdbParams, or + PQsetdbLogin. Note that these functions will always return a non-null object pointer, unless perhaps there is too - little memory even to allocate the PGconn object. - The PQstatus function should be called to check + little memory even to allocate the PGconn object. 
+ The PQstatus function should be called to check the return value for a successful connection before queries are sent via the connection object. @@ -70,7 +70,7 @@ On Unix, forking a process with open libpq connections can lead to unpredictable results because the parent and child processes share the same sockets and operating system resources. For this reason, - such usage is not recommended, though doing an exec from + such usage is not recommended, though doing an exec from the child process to load a new executable is safe. @@ -79,20 +79,20 @@ On Windows, there is a way to improve performance if a single database connection is repeatedly started and shutdown. Internally, - libpq calls WSAStartup() and WSACleanup() for connection startup - and shutdown, respectively. WSAStartup() increments an internal - Windows library reference count which is decremented by WSACleanup(). - When the reference count is just one, calling WSACleanup() frees + libpq calls WSAStartup() and WSACleanup() for connection startup + and shutdown, respectively. WSAStartup() increments an internal + Windows library reference count which is decremented by WSACleanup(). + When the reference count is just one, calling WSACleanup() frees all resources and all DLLs are unloaded. This is an expensive operation. To avoid this, an application can manually call - WSAStartup() so resources will not be freed when the last database + WSAStartup() so resources will not be freed when the last database connection is closed. - PQconnectdbParamsPQconnectdbParams + PQconnectdbParamsPQconnectdbParams Makes a new connection to the database server. @@ -109,9 +109,9 @@ PGconn *PQconnectdbParams(const char * const *keywords, from two NULL-terminated arrays. The first, keywords, is defined as an array of strings, each one being a key word. The second, values, gives the value - for each key word. Unlike PQsetdbLogin below, the parameter + for each key word. Unlike PQsetdbLogin below, the parameter set can be extended without changing the function signature, so use of - this function (or its nonblocking analogs PQconnectStartParams + this function (or its nonblocking analogs PQconnectStartParams and PQconnectPoll) is preferred for new application programming. @@ -157,7 +157,7 @@ PGconn *PQconnectdbParams(const char * const *keywords, - PQconnectdbPQconnectdb + PQconnectdbPQconnectdb Makes a new connection to the database server. @@ -184,7 +184,7 @@ PGconn *PQconnectdb(const char *conninfo); - PQsetdbLoginPQsetdbLogin + PQsetdbLoginPQsetdbLogin Makes a new connection to the database server. @@ -211,13 +211,13 @@ PGconn *PQsetdbLogin(const char *pghost, an = sign or has a valid connection URI prefix, it is taken as a conninfo string in exactly the same way as if it had been passed to PQconnectdb, and the remaining - parameters are then applied as specified for PQconnectdbParams. + parameters are then applied as specified for PQconnectdbParams. - PQsetdbPQsetdb + PQsetdbPQsetdb Makes a new connection to the database server. @@ -232,16 +232,16 @@ PGconn *PQsetdb(char *pghost, This is a macro that calls PQsetdbLogin with null pointers - for the login and pwd parameters. It is provided + for the login and pwd parameters. It is provided for backward compatibility with very old programs. 
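A minimal, self-contained sketch of the synchronous connection sequence using PQconnectdbParams, the form recommended above for new code; the host and database names are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int main(void)
{
    /* Placeholder parameters; extend the arrays as needed. */
    const char *keywords[] = { "host", "dbname", NULL };
    const char *values[]   = { "localhost", "mydb", NULL };

    PGconn *conn = PQconnectdbParams(keywords, values, 0);

    /* A null pointer is returned only on out-of-memory, so the
     * status check below is what actually detects failure. */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);         /* still needed to free the PGconn */
        return EXIT_FAILURE;
    }

    printf("connected to database %s\n", PQdb(conn));
    PQfinish(conn);
    return EXIT_SUCCESS;
}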
- PQconnectStartParamsPQconnectStartParams - PQconnectStartPQconnectStart - PQconnectPollPQconnectPoll + PQconnectStartParamsPQconnectStartParams + PQconnectStartPQconnectStart + PQconnectPollPQconnectPoll nonblocking connection @@ -263,7 +263,7 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); that your application's thread of execution is not blocked on remote I/O whilst doing so. The point of this approach is that the waits for I/O to complete can occur in the application's main loop, rather than down inside - PQconnectdbParams or PQconnectdb, and so the + PQconnectdbParams or PQconnectdb, and so the application can manage this operation in parallel with other activities. @@ -287,7 +287,7 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); - The hostaddr and host parameters are used appropriately to ensure that + The hostaddr and host parameters are used appropriately to ensure that name and reverse name queries are not made. See the documentation of these parameters in for details. @@ -310,27 +310,27 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); - Note: use of PQconnectStartParams is analogous to - PQconnectStart shown below. + Note: use of PQconnectStartParams is analogous to + PQconnectStart shown below. - To begin a nonblocking connection request, call conn = PQconnectStart("connection_info_string"). - If conn is null, then libpq has been unable to allocate a new PGconn - structure. Otherwise, a valid PGconn pointer is returned (though not yet + To begin a nonblocking connection request, call conn = PQconnectStart("connection_info_string"). + If conn is null, then libpq has been unable to allocate a new PGconn + structure. Otherwise, a valid PGconn pointer is returned (though not yet representing a valid connection to the database). On return from PQconnectStart, call status = PQstatus(conn). If status equals CONNECTION_BAD, PQconnectStart has failed. - If PQconnectStart succeeds, the next stage is to poll - libpq so that it can proceed with the connection sequence. + If PQconnectStart succeeds, the next stage is to poll + libpq so that it can proceed with the connection sequence. Use PQsocket(conn) to obtain the descriptor of the socket underlying the database connection. Loop thus: If PQconnectPoll(conn) last returned PGRES_POLLING_READING, wait until the socket is ready to - read (as indicated by select(), poll(), or + read (as indicated by select(), poll(), or similar system function). Then call PQconnectPoll(conn) again. Conversely, if PQconnectPoll(conn) last returned @@ -348,10 +348,10 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); At any time during connection, the status of the connection can be - checked by calling PQstatus. If this call returns CONNECTION_BAD, then the - connection procedure has failed; if the call returns CONNECTION_OK, then the + checked by calling PQstatus. If this call returns CONNECTION_BAD, then the + connection procedure has failed; if the call returns CONNECTION_OK, then the connection is ready. Both of these states are equally detectable - from the return value of PQconnectPoll, described above. Other states might also occur + from the return value of PQconnectPoll, described above. Other states might also occur during (and only during) an asynchronous connection procedure. These indicate the current stage of the connection procedure and might be useful to provide feedback to the user for example. 
These statuses are: @@ -472,7 +472,7 @@ switch(PQstatus(conn)) - PQconndefaultsPQconndefaults + PQconndefaultsPQconndefaults Returns the default connection options. @@ -501,7 +501,7 @@ typedef struct all possible PQconnectdb options and their current default values. The return value points to an array of PQconninfoOption structures, which ends - with an entry having a null keyword pointer. The + with an entry having a null keyword pointer. The null pointer is returned if memory could not be allocated. Note that the current default values (val fields) will depend on environment variables and other context. A @@ -519,7 +519,7 @@ typedef struct - PQconninfoPQconninfo + PQconninfoPQconninfo Returns the connection options used by a live connection. @@ -533,7 +533,7 @@ PQconninfoOption *PQconninfo(PGconn *conn); all possible PQconnectdb options and the values that were used to connect to the server. The return value points to an array of PQconninfoOption - structures, which ends with an entry having a null keyword + structures, which ends with an entry having a null keyword pointer. All notes above for PQconndefaults also apply to the result of PQconninfo. @@ -543,7 +543,7 @@ PQconninfoOption *PQconninfo(PGconn *conn); - PQconninfoParsePQconninfoParse + PQconninfoParsePQconninfoParse Returns parsed connection options from the provided connection string. @@ -555,12 +555,12 @@ PQconninfoOption *PQconninfoParse(const char *conninfo, char **errmsg); Parses a connection string and returns the resulting options as an - array; or returns NULL if there is a problem with the connection + array; or returns NULL if there is a problem with the connection string. This function can be used to extract the PQconnectdb options in the provided connection string. The return value points to an array of PQconninfoOption structures, which ends - with an entry having a null keyword pointer. + with an entry having a null keyword pointer. @@ -571,10 +571,10 @@ PQconninfoOption *PQconninfoParse(const char *conninfo, char **errmsg); - If errmsg is not NULL, then *errmsg is set - to NULL on success, else to a malloc'd error string explaining - the problem. (It is also possible for *errmsg to be - set to NULL and the function to return NULL; + If errmsg is not NULL, then *errmsg is set + to NULL on success, else to a malloc'd error string explaining + the problem. (It is also possible for *errmsg to be + set to NULL and the function to return NULL; this indicates an out-of-memory condition.) @@ -582,15 +582,15 @@ PQconninfoOption *PQconninfoParse(const char *conninfo, char **errmsg); After processing the options array, free it by passing it to PQconninfoFree. If this is not done, some memory is leaked for each call to PQconninfoParse. - Conversely, if an error occurs and errmsg is not NULL, - be sure to free the error string using PQfreemem. + Conversely, if an error occurs and errmsg is not NULL, + be sure to free the error string using PQfreemem. - PQfinishPQfinish + PQfinishPQfinish Closes the connection to the server. Also frees @@ -604,14 +604,14 @@ void PQfinish(PGconn *conn); Note that even if the server connection attempt fails (as indicated by PQstatus), the application should call PQfinish to free the memory used by the PGconn object. - The PGconn pointer must not be used again after + The PGconn pointer must not be used again after PQfinish has been called. - PQresetPQreset + PQresetPQreset Resets the communication channel to the server. 
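A sketch of the nonblocking sequence described a little earlier — PQconnectStart followed by a PQconnectPoll loop; a real application would fold the wait into its own event loop rather than blocking in select():

#include <stdio.h>
#include <sys/select.h>      /* POSIX; Windows would use Winsock select() */
#include <libpq-fe.h>

static PGconn *connect_async(const char *conninfo)
{
    PGconn *conn = PQconnectStart(conninfo);

    if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
        return conn;            /* allocation failure or immediate failure */

    /* Per the text above, start as if PQconnectPoll had just
     * returned PGRES_POLLING_WRITING. */
    PostgresPollingStatusType st = PGRES_POLLING_WRITING;

    while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
    {
        int    sock = PQsocket(conn);
        fd_set rfds, wfds;

        if (sock < 0)
            break;

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        if (st == PGRES_POLLING_READING)
            FD_SET(sock, &rfds);
        else
            FD_SET(sock, &wfds);

        /* In a real program this wait lives in the main event loop. */
        if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
            break;

        st = PQconnectPoll(conn);
    }

    return conn;                /* caller checks PQstatus() */
}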
@@ -631,8 +631,8 @@ void PQreset(PGconn *conn); - PQresetStartPQresetStart - PQresetPollPQresetPoll + PQresetStartPQresetStart + PQresetPollPQresetPoll Reset the communication channel to the server, in a nonblocking manner. @@ -650,8 +650,8 @@ PostgresPollingStatusType PQresetPoll(PGconn *conn); parameters previously used. This can be useful for error recovery if a working connection is lost. They differ from PQreset (above) in that they act in a nonblocking manner. These functions suffer from the same - restrictions as PQconnectStartParams, PQconnectStart - and PQconnectPoll. + restrictions as PQconnectStartParams, PQconnectStart + and PQconnectPoll. @@ -665,12 +665,12 @@ PostgresPollingStatusType PQresetPoll(PGconn *conn); - PQpingParamsPQpingParams + PQpingParamsPQpingParams PQpingParams reports the status of the server. It accepts connection parameters identical to those of - PQconnectdbParams, described above. It is not + PQconnectdbParams, described above. It is not necessary to supply correct user name, password, or database name values to obtain the server status; however, if incorrect values are provided, the server will log a failed connection attempt. @@ -734,12 +734,12 @@ PGPing PQpingParams(const char * const *keywords, - PQpingPQping + PQpingPQping PQping reports the status of the server. It accepts connection parameters identical to those of - PQconnectdb, described above. It is not + PQconnectdb, described above. It is not necessary to supply correct user name, password, or database name values to obtain the server status; however, if incorrect values are provided, the server will log a failed connection attempt. @@ -750,7 +750,7 @@ PGPing PQping(const char *conninfo); - The return values are the same as for PQpingParams. + The return values are the same as for PQpingParams. @@ -771,7 +771,7 @@ PGPing PQping(const char *conninfo); - Several libpq functions parse a user-specified string to obtain + Several libpq functions parse a user-specified string to obtain connection parameters. There are two accepted formats for these strings: plain keyword = value strings and URIs. URIs generally follow @@ -840,8 +840,8 @@ postgresql:///mydb?host=localhost&port=5433 Percent-encoding may be used to include symbols with special meaning in any - of the URI parts, e.g. replace = with - %3D. + of the URI parts, e.g. replace = with + %3D. @@ -895,18 +895,18 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname It is possible to specify multiple hosts to connect to, so that they are - tried in the given order. In the Keyword/Value format, the host, - hostaddr, and port options accept a comma-separated + tried in the given order. In the Keyword/Value format, the host, + hostaddr, and port options accept a comma-separated list of values. The same number of elements must be given in each option, such - that e.g. the first hostaddr corresponds to the first host name, - the second hostaddr corresponds to the second host name, and so + that e.g. the first hostaddr corresponds to the first host name, + the second hostaddr corresponds to the second host name, and so forth. As an exception, if only one port is specified, it applies to all the hosts. - In the connection URI format, you can list multiple host:port pairs - separated by commas, in the host component of the URI. In either + In the connection URI format, you can list multiple host:port pairs + separated by commas, in the host component of the URI. In either format, a single hostname can also translate to multiple network addresses. 
A common example of this is a host that has both an IPv4 and an IPv6 address. @@ -939,17 +939,17 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname host - Name of host to connect to.host name + Name of host to connect to.host name If a host name begins with a slash, it specifies Unix-domain communication rather than TCP/IP communication; the value is the name of the directory in which the socket file is stored. If multiple host names are specified, each will be tried in turn in the order given. The default behavior when host is not specified is to connect to a Unix-domain - socketUnix domain socket in + socketUnix domain socket in /tmp (or whatever socket directory was specified - when PostgreSQL was built). On machines without - Unix-domain sockets, the default is to connect to localhost. + when PostgreSQL was built). On machines without + Unix-domain sockets, the default is to connect to localhost. A comma-separated list of host names is also accepted, in which case @@ -964,53 +964,53 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Numeric IP address of host to connect to. This should be in the - standard IPv4 address format, e.g., 172.28.40.9. If + standard IPv4 address format, e.g., 172.28.40.9. If your machine supports IPv6, you can also use those addresses. TCP/IP communication is always used when a nonempty string is specified for this parameter. - Using hostaddr instead of host allows the + Using hostaddr instead of host allows the application to avoid a host name look-up, which might be important in applications with time constraints. However, a host name is required for GSSAPI or SSPI authentication - methods, as well as for verify-full SSL + methods, as well as for verify-full SSL certificate verification. The following rules are used: - If host is specified without hostaddr, + If host is specified without hostaddr, a host name lookup occurs. - If hostaddr is specified without host, - the value for hostaddr gives the server network address. + If hostaddr is specified without host, + the value for hostaddr gives the server network address. The connection attempt will fail if the authentication method requires a host name. - If both host and hostaddr are specified, - the value for hostaddr gives the server network address. - The value for host is ignored unless the + If both host and hostaddr are specified, + the value for hostaddr gives the server network address. + The value for host is ignored unless the authentication method requires it, in which case it will be used as the host name. - Note that authentication is likely to fail if host - is not the name of the server at network address hostaddr. - Also, note that host rather than hostaddr + Note that authentication is likely to fail if host + is not the name of the server at network address hostaddr. + Also, note that host rather than hostaddr is used to identify the connection in a password file (see ). - A comma-separated list of hostaddrs is also accepted, in + A comma-separated list of hostaddrs is also accepted, in which case each host in the list is tried in order. See for details. @@ -1018,7 +1018,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Without either a host name or host address, libpq will connect using a local Unix-domain socket; or on machines without Unix-domain - sockets, it will attempt to connect to localhost. + sockets, it will attempt to connect to localhost. 
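A sketch combining the host, hostaddr, and port rules above; all names and addresses are placeholders:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Two hosts tried in order; the paired hostaddr values avoid DNS
     * lookups, and the single port applies to both hosts. */
    PGconn *conn = PQconnectdb(
        "host=pg1.example.com,pg2.example.com "
        "hostaddr=172.28.40.9,172.28.40.10 "
        "port=5432 dbname=mydb");

    if (PQstatus(conn) == CONNECTION_OK)
        printf("connected via %s\n", PQhost(conn));
    else
        fprintf(stderr, "no host accepted the connection: %s",
                PQerrorMessage(conn));

    PQfinish(conn);
    return 0;
}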
@@ -1029,9 +1029,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Port number to connect to at the server host, or socket file name extension for Unix-domain - connections.port + connections.port If multiple hosts were given in the host or - hostaddr parameters, this parameter may specify a list + hostaddr parameters, this parameter may specify a list of ports of equal length, or it may specify a single port number to be used for all hosts. @@ -1077,7 +1077,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Specifies the name of the file used to store passwords (see ). Defaults to ~/.pgpass, or - %APPDATA%\postgresql\pgpass.conf on Microsoft Windows. + %APPDATA%\postgresql\pgpass.conf on Microsoft Windows. (No error is reported if this file does not exist.) @@ -1091,7 +1091,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname string). Zero or not specified means wait indefinitely. It is not recommended to use a timeout of less than 2 seconds. This timeout applies separately to each connection attempt. - For example, if you specify two hosts and connect_timeout + For example, if you specify two hosts and connect_timeout is 5, each host will time out if no connection is made within 5 seconds, so the total time spent waiting for a connection might be up to 10 seconds. @@ -1119,11 +1119,11 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Specifies command-line options to send to the server at connection - start. For example, setting this to -c geqo=off sets the - session's value of the geqo parameter to - off. Spaces within this string are considered to + start. For example, setting this to -c geqo=off sets the + session's value of the geqo parameter to + off. Spaces within this string are considered to separate command-line arguments, unless escaped with a backslash - (\); write \\ to represent a literal + (\); write \\ to represent a literal backslash. For a detailed discussion of the available options, consult . @@ -1147,7 +1147,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Specifies a fallback value for the configuration parameter. This value will be used if no value has been given for - application_name via a connection parameter or the + application_name via a connection parameter or the PGAPPNAME environment variable. Specifying a fallback name is useful in generic utility programs that wish to set a default application name but allow it to be @@ -1176,7 +1176,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname send a keepalive message to the server. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. - It is only supported on systems where TCP_KEEPIDLE or + It is only supported on systems where TCP_KEEPIDLE or an equivalent socket option is available, and on Windows; on other systems, it has no effect. @@ -1191,7 +1191,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname that is not acknowledged by the server should be retransmitted. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. - It is only supported on systems where TCP_KEEPINTVL or + It is only supported on systems where TCP_KEEPINTVL or an equivalent socket option is available, and on Windows; on other systems, it has no effect. @@ -1206,7 +1206,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname client's connection to the server is considered dead. A value of zero uses the system default. 
This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. - It is only supported on systems where TCP_KEEPCNT or + It is only supported on systems where TCP_KEEPCNT or an equivalent socket option is available; on other systems, it has no effect. @@ -1227,7 +1227,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname This option determines whether or with what priority a secure - SSL TCP/IP connection will be negotiated with the + SSL TCP/IP connection will be negotiated with the server. There are six modes: @@ -1235,7 +1235,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname disable - only try a non-SSL connection + only try a non-SSL connection @@ -1244,8 +1244,8 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname allow - first try a non-SSL connection; if that - fails, try an SSL connection + first try a non-SSL connection; if that + fails, try an SSL connection @@ -1254,8 +1254,8 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname prefer (default) - first try an SSL connection; if that fails, - try a non-SSL connection + first try an SSL connection; if that fails, + try a non-SSL connection @@ -1264,7 +1264,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname require - only try an SSL connection. If a root CA + only try an SSL connection. If a root CA file is present, verify the certificate in the same way as if verify-ca was specified @@ -1275,9 +1275,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname verify-ca - only try an SSL connection, and verify that + only try an SSL connection, and verify that the server certificate is issued by a trusted - certificate authority (CA) + certificate authority (CA) @@ -1286,9 +1286,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname verify-full - only try an SSL connection, verify that the + only try an SSL connection, verify that the server certificate is issued by a - trusted CA and that the requested server host name + trusted CA and that the requested server host name matches that in the certificate @@ -1300,16 +1300,16 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname - sslmode is ignored for Unix domain socket + sslmode is ignored for Unix domain socket communication. - If PostgreSQL is compiled without SSL support, - using options require, verify-ca, or - verify-full will cause an error, while - options allow and prefer will be - accepted but libpq will not actually attempt - an SSL - connection.SSLwith libpq + If PostgreSQL is compiled without SSL support, + using options require, verify-ca, or + verify-full will cause an error, while + options allow and prefer will be + accepted but libpq will not actually attempt + an SSL + connection.SSLwith libpq @@ -1318,20 +1318,20 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname requiressl - This option is deprecated in favor of the sslmode + This option is deprecated in favor of the sslmode setting. If set to 1, an SSL connection to the server - is required (this is equivalent to sslmode - require). libpq will then refuse + is required (this is equivalent to sslmode + require). libpq will then refuse to connect if the server does not accept an SSL connection. If set to 0 (default), - libpq will negotiate the connection type with - the server (equivalent to sslmode - prefer). This option is only available if - PostgreSQL is compiled with SSL support. + libpq will negotiate the connection type with + the server (equivalent to sslmode + prefer). This option is only available if + PostgreSQL is compiled with SSL support. 
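A sketch of the preferred spelling in URI form, using sslmode rather than the deprecated requiressl; the URI and certificate path are placeholders:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb(
        "postgresql://alice@db.example.com/mydb"
        "?sslmode=verify-full&sslrootcert=/etc/ssl/pg/root.crt");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    else
        printf("SSL in use: %s\n", PQsslInUse(conn) ? "yes" : "no");

    PQfinish(conn);
    return 0;
}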
@@ -1343,9 +1343,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname If set to 1 (default), data sent over SSL connections will be compressed. If set to 0, compression will be disabled (this requires - OpenSSL 1.0.0 or later). + OpenSSL 1.0.0 or later). This parameter is ignored if a connection without SSL is made, - or if the version of OpenSSL used does not support + or if the version of OpenSSL used does not support it. @@ -1363,7 +1363,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname This parameter specifies the file name of the client SSL certificate, replacing the default - ~/.postgresql/postgresql.crt. + ~/.postgresql/postgresql.crt. This parameter is ignored if an SSL connection is not made. @@ -1376,9 +1376,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname This parameter specifies the location for the secret key used for the client certificate. It can either specify a file name that will be used instead of the default - ~/.postgresql/postgresql.key, or it can specify a key - obtained from an external engine (engines are - OpenSSL loadable modules). An external engine + ~/.postgresql/postgresql.key, or it can specify a key + obtained from an external engine (engines are + OpenSSL loadable modules). An external engine specification should consist of a colon-separated engine name and an engine-specific key identifier. This parameter is ignored if an SSL connection is not made. @@ -1391,10 +1391,10 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname This parameter specifies the name of a file containing SSL - certificate authority (CA) certificate(s). + certificate authority (CA) certificate(s). If the file exists, the server's certificate will be verified to be signed by one of these authorities. The default is - ~/.postgresql/root.crt. + ~/.postgresql/root.crt. @@ -1407,7 +1407,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname revocation list (CRL). Certificates listed in this file, if it exists, will be rejected while attempting to authenticate the server's certificate. The default is - ~/.postgresql/root.crl. + ~/.postgresql/root.crl. @@ -1429,7 +1429,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname any user could start a server listening there. Use this parameter to ensure that you are connected to a server run by a trusted user.) This option is only supported on platforms for which the - peer authentication method is implemented; see + peer authentication method is implemented; see . @@ -1478,11 +1478,11 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname connection in which read-write transactions are accepted by default is considered acceptable. The query SHOW transaction_read_only will be sent upon any - successful connection; if it returns on, the connection + successful connection; if it returns on, the connection will be closed. If multiple hosts were specified in the connection string, any remaining servers will be tried just as if the connection attempt had failed. The default value of this parameter, - any, regards all connections as acceptable. + any, regards all connections as acceptable. @@ -1501,13 +1501,13 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname - libpq-fe.h - libpq-int.h + libpq-fe.h + libpq-int.h libpq application programmers should be careful to maintain the PGconn abstraction. Use the accessor functions described below to get at the contents of PGconn. Reference to internal PGconn fields using - libpq-int.h is not recommended because they are subject to change + libpq-int.h is not recommended because they are subject to change in the future. 
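For example, a connection report that stays within the abstraction, using only the documented accessors and never libpq-int.h; conn is assumed to be an established connection:

#include <stdio.h>
#include <libpq-fe.h>

static void report_connection(const PGconn *conn)
{
    printf("database: %s\n", PQdb(conn));
    printf("user:     %s\n", PQuser(conn));
    printf("host:     %s\n", PQhost(conn));  /* name, address, or socket dir */
    printf("port:     %s\n", PQport(conn));
    printf("options:  %s\n", PQoptions(conn));
}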
@@ -1515,10 +1515,10 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname The following functions return parameter values established at connection. These values are fixed for the life of the connection. If a multi-host - connection string is used, the values of PQhost, - PQport, and PQpass can change if a new connection - is established using the same PGconn object. Other values - are fixed for the lifetime of the PGconn object. + connection string is used, the values of PQhost, + PQport, and PQpass can change if a new connection + is established using the same PGconn object. Other values + are fixed for the lifetime of the PGconn object. @@ -1589,7 +1589,7 @@ char *PQpass(const PGconn *conn); This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning - with /.) + with /.) char *PQhost(const PGconn *conn); @@ -1660,7 +1660,7 @@ char *PQoptions(const PGconn *conn); The following functions return status data that can change as operations - are executed on the PGconn object. + are executed on the PGconn object. @@ -1695,8 +1695,8 @@ ConnStatusType PQstatus(const PGconn *conn); - See the entry for PQconnectStartParams, PQconnectStart - and PQconnectPoll with regards to other status codes that + See the entry for PQconnectStartParams, PQconnectStart + and PQconnectPoll with regards to other status codes that might be returned. @@ -1747,62 +1747,62 @@ const char *PQparameterStatus(const PGconn *conn, const char *paramName); Certain parameter values are reported by the server automatically at connection startup or whenever their values change. - PQparameterStatus can be used to interrogate these settings. + PQparameterStatus can be used to interrogate these settings. It returns the current value of a parameter if known, or NULL if the parameter is not known. Parameters reported as of the current release include - server_version, - server_encoding, - client_encoding, - application_name, - is_superuser, - session_authorization, - DateStyle, - IntervalStyle, - TimeZone, - integer_datetimes, and - standard_conforming_strings. - (server_encoding, TimeZone, and - integer_datetimes were not reported by releases before 8.0; - standard_conforming_strings was not reported by releases + server_version, + server_encoding, + client_encoding, + application_name, + is_superuser, + session_authorization, + DateStyle, + IntervalStyle, + TimeZone, + integer_datetimes, and + standard_conforming_strings. + (server_encoding, TimeZone, and + integer_datetimes were not reported by releases before 8.0; + standard_conforming_strings was not reported by releases before 8.1; - IntervalStyle was not reported by releases before 8.4; - application_name was not reported by releases before 9.0.) + IntervalStyle was not reported by releases before 8.4; + application_name was not reported by releases before 9.0.) Note that - server_version, - server_encoding and - integer_datetimes + server_version, + server_encoding and + integer_datetimes cannot change after startup. Pre-3.0-protocol servers do not report parameter settings, but - libpq includes logic to obtain values for - server_version and client_encoding anyway. - Applications are encouraged to use PQparameterStatus - rather than ad hoc code to determine these values. + libpq includes logic to obtain values for + server_version and client_encoding anyway. 
+ Applications are encouraged to use PQparameterStatus + rather than ad hoc code to determine these values. (Beware however that on a pre-3.0 connection, changing - client_encoding via SET after connection - startup will not be reflected by PQparameterStatus.) - For server_version, see also - PQserverVersion, which returns the information in a + client_encoding via SET after connection + startup will not be reflected by PQparameterStatus.) + For server_version, see also + PQserverVersion, which returns the information in a numeric form that is much easier to compare against. - If no value for standard_conforming_strings is reported, - applications can assume it is off, that is, backslashes + If no value for standard_conforming_strings is reported, + applications can assume it is off, that is, backslashes are treated as escapes in string literals. Also, the presence of this parameter can be taken as an indication that the escape string - syntax (E'...') is accepted. + syntax (E'...') is accepted. - Although the returned pointer is declared const, it in fact - points to mutable storage associated with the PGconn structure. + Although the returned pointer is declared const, it in fact + points to mutable storage associated with the PGconn structure. It is unwise to assume the pointer will remain valid across queries. @@ -1829,7 +1829,7 @@ int PQprotocolVersion(const PGconn *conn); not change after connection startup is complete, but it could theoretically change during a connection reset. The 3.0 protocol will normally be used when communicating with - PostgreSQL 7.4 or later servers; pre-7.4 servers + PostgreSQL 7.4 or later servers; pre-7.4 servers support only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.) @@ -1862,17 +1862,17 @@ int PQserverVersion(const PGconn *conn); - Prior to major version 10, PostgreSQL used + Prior to major version 10, PostgreSQL used three-part version numbers in which the first two parts together represented the major version. For those - versions, PQserverVersion uses two digits for each + versions, PQserverVersion uses two digits for each part; for example version 9.1.5 will be returned as 90105, and version 9.2.0 will be returned as 90200. Therefore, for purposes of determining feature compatibility, - applications should divide the result of PQserverVersion + applications should divide the result of PQserverVersion by 100 not 10000 to determine a logical major version number. In all release series, only the last two digits differ between minor releases (bug-fix releases). @@ -1890,7 +1890,7 @@ int PQserverVersion(const PGconn *conn); - error message Returns the error message + error message Returns the error message most recently generated by an operation on the connection. @@ -1900,22 +1900,22 @@ char *PQerrorMessage(const PGconn *conn); - Nearly all libpq functions will set a message for + Nearly all libpq functions will set a message for PQerrorMessage if they fail. Note that by libpq convention, a nonempty PQerrorMessage result can consist of multiple lines, and will include a trailing newline. The caller should not free the result directly. It will be freed when the associated - PGconn handle is passed to + PGconn handle is passed to PQfinish. The result string should not be expected to remain the same across operations on the - PGconn structure. + PGconn structure. 
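A sketch of interrogating these settings the recommended way, assuming conn is an established connection:

#include <stdio.h>
#include <libpq-fe.h>

static void report_server(const PGconn *conn)
{
    const char *enc = PQparameterStatus(conn, "server_encoding");
    const char *scs = PQparameterStatus(conn, "standard_conforming_strings");

    printf("server_encoding: %s\n", enc ? enc : "(not reported)");
    /* Per the text above: if unreported, assume "off". */
    printf("standard_conforming_strings: %s\n", scs ? scs : "off");

    /* The numeric form is easier to compare; 90105 is release 9.1.5. */
    printf("server version: %d\n", PQserverVersion(conn));
}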
- PQsocketPQsocket + PQsocketPQsocket Obtains the file descriptor number of the connection socket to @@ -1933,13 +1933,13 @@ int PQsocket(const PGconn *conn); - PQbackendPIDPQbackendPID + PQbackendPIDPQbackendPID Returns the process ID (PID) - PID - determining PID of server process - in libpq + PID + determining PID of server process + in libpq of the backend process handling this connection. @@ -1960,7 +1960,7 @@ int PQbackendPID(const PGconn *conn); - PQconnectionNeedsPasswordPQconnectionNeedsPassword + PQconnectionNeedsPasswordPQconnectionNeedsPassword Returns true (1) if the connection authentication method @@ -1980,7 +1980,7 @@ int PQconnectionNeedsPassword(const PGconn *conn); - PQconnectionUsedPasswordPQconnectionUsedPassword + PQconnectionUsedPasswordPQconnectionUsedPassword Returns true (1) if the connection authentication method @@ -2006,7 +2006,7 @@ int PQconnectionUsedPassword(const PGconn *conn); - PQsslInUsePQsslInUse + PQsslInUsePQsslInUse Returns true (1) if the connection uses SSL, false (0) if not. @@ -2020,7 +2020,7 @@ int PQsslInUse(const PGconn *conn); - PQsslAttributePQsslAttribute + PQsslAttributePQsslAttribute Returns SSL-related information about the connection. @@ -2093,7 +2093,7 @@ const char *PQsslAttribute(const PGconn *conn, const char *attribute_name); - PQsslAttributeNamesPQsslAttributeNames + PQsslAttributeNamesPQsslAttributeNames Return an array of SSL attribute names available. The array is terminated by a NULL pointer. @@ -2105,7 +2105,7 @@ const char * const * PQsslAttributeNames(const PGconn *conn); - PQsslStructPQsslStruct + PQsslStructPQsslStruct Return a pointer to an SSL-implementation-specific object describing @@ -2139,17 +2139,17 @@ void *PQsslStruct(const PGconn *conn, const char *struct_name); This structure can be used to verify encryption levels, check server - certificates, and more. Refer to the OpenSSL + certificates, and more. Refer to the OpenSSL documentation for information about this structure. - PQgetsslPQgetssl + PQgetsslPQgetssl - SSLin libpq + SSLin libpq Returns the SSL structure used in the connection, or null if SSL is not in use. @@ -2163,8 +2163,8 @@ void *PQgetssl(const PGconn *conn); not be used in new applications, because the returned struct is specific to OpenSSL and will not be available if another SSL implementation is used. To check if a connection uses SSL, call - PQsslInUse instead, and for more details about the - connection, use PQsslAttribute. + PQsslInUse instead, and for more details about the + connection, use PQsslAttribute. @@ -2209,7 +2209,7 @@ PGresult *PQexec(PGconn *conn, const char *command); Returns a PGresult pointer or possibly a null pointer. A non-null pointer will generally be returned except in out-of-memory conditions or serious errors such as inability to send - the command to the server. The PQresultStatus function + the command to the server. The PQresultStatus function should be called to check the return value for any errors (including the value of a null pointer, in which case it will return PGRES_FATAL_ERROR). Use @@ -2222,7 +2222,7 @@ PGresult *PQexec(PGconn *conn, const char *command); The command string can include multiple SQL commands (separated by semicolons). Multiple queries sent in a single - PQexec call are processed in a single transaction, unless + PQexec call are processed in a single transaction, unless there are explicit BEGIN/COMMIT commands included in the query string to divide it into multiple transactions. 
(See @@ -2263,10 +2263,10 @@ PGresult *PQexecParams(PGconn *conn, - PQexecParams is like PQexec, but offers additional + PQexecParams is like PQexec, but offers additional functionality: parameter values can be specified separately from the command string proper, and query results can be requested in either text or binary - format. PQexecParams is supported only in protocol 3.0 and later + format. PQexecParams is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. @@ -2289,8 +2289,8 @@ PGresult *PQexecParams(PGconn *conn, The SQL command string to be executed. If parameters are used, - they are referred to in the command string as $1, - $2, etc. + they are referred to in the command string as $1, + $2, etc. @@ -2300,9 +2300,9 @@ PGresult *PQexecParams(PGconn *conn, The number of parameters supplied; it is the length of the arrays - paramTypes[], paramValues[], - paramLengths[], and paramFormats[]. (The - array pointers can be NULL when nParams + paramTypes[], paramValues[], + paramLengths[], and paramFormats[]. (The + array pointers can be NULL when nParams is zero.) @@ -2313,7 +2313,7 @@ PGresult *PQexecParams(PGconn *conn, Specifies, by OID, the data types to be assigned to the - parameter symbols. If paramTypes is + parameter symbols. If paramTypes is NULL, or any particular element in the array is zero, the server infers a data type for the parameter symbol in the same way it would do for an untyped literal string. @@ -2359,11 +2359,11 @@ PGresult *PQexecParams(PGconn *conn, Values passed in binary format require knowledge of the internal representation expected by the backend. For example, integers must be passed in network byte - order. Passing numeric values requires + order. Passing numeric values requires knowledge of the server storage format, as implemented in - src/backend/utils/adt/numeric.c::numeric_send() and - src/backend/utils/adt/numeric.c::numeric_recv(). + src/backend/utils/adt/numeric.c::numeric_send() and + src/backend/utils/adt/numeric.c::numeric_recv(). @@ -2387,14 +2387,14 @@ PGresult *PQexecParams(PGconn *conn, - The primary advantage of PQexecParams over - PQexec is that parameter values can be separated from the + The primary advantage of PQexecParams over + PQexec is that parameter values can be separated from the command string, thus avoiding the need for tedious and error-prone quoting and escaping. - Unlike PQexec, PQexecParams allows at most + Unlike PQexec, PQexecParams allows at most one SQL command in the given string. (There can be semicolons in it, but not more than one nonempty command.) This is a limitation of the underlying protocol, but has some usefulness as an extra defense against @@ -2412,8 +2412,8 @@ PGresult *PQexecParams(PGconn *conn, SELECT * FROM mytable WHERE x = $1::bigint; - This forces parameter $1 to be treated as bigint, whereas - by default it would be assigned the same type as x. Forcing the + This forces parameter $1 to be treated as bigint, whereas + by default it would be assigned the same type as x. Forcing the parameter type decision, either this way or by specifying a numeric type OID, is strongly recommended when sending parameter values in binary format, because binary format has less redundancy than text format and so there is less chance @@ -2444,40 +2444,40 @@ PGresult *PQprepare(PGconn *conn, - PQprepare creates a prepared statement for later - execution with PQexecPrepared. This feature allows + PQprepare creates a prepared statement for later + execution with PQexecPrepared. 
This feature allows commands to be executed repeatedly without being parsed and planned each time; see for details. - PQprepare is supported only in protocol 3.0 and later + PQprepare is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. The function creates a prepared statement named - stmtName from the query string, which - must contain a single SQL command. stmtName can be - "" to create an unnamed statement, in which case any + stmtName from the query string, which + must contain a single SQL command. stmtName can be + "" to create an unnamed statement, in which case any pre-existing unnamed statement is automatically replaced; otherwise it is an error if the statement name is already defined in the current session. If any parameters are used, they are referred - to in the query as $1, $2, etc. - nParams is the number of parameters for which types - are pre-specified in the array paramTypes[]. (The + to in the query as $1, $2, etc. + nParams is the number of parameters for which types + are pre-specified in the array paramTypes[]. (The array pointer can be NULL when - nParams is zero.) paramTypes[] + nParams is zero.) paramTypes[] specifies, by OID, the data types to be assigned to the parameter - symbols. If paramTypes is NULL, + symbols. If paramTypes is NULL, or any particular element in the array is zero, the server assigns a data type to the parameter symbol in the same way it would do for an untyped literal string. Also, the query can use parameter - symbols with numbers higher than nParams; data types + symbols with numbers higher than nParams; data types will be inferred for these symbols as well. (See PQdescribePrepared for a means to find out what data types were inferred.) - As with PQexec, the result is normally a + As with PQexec, the result is normally a PGresult object whose contents indicate server-side success or failure. A null result indicates out-of-memory or inability to send the command at all. Use @@ -2488,9 +2488,9 @@ PGresult *PQprepare(PGconn *conn, - Prepared statements for use with PQexecPrepared can also + Prepared statements for use with PQexecPrepared can also be created by executing SQL - statements. Also, although there is no libpq + statements. Also, although there is no libpq function for deleting a prepared statement, the SQL statement can be used for that purpose. @@ -2522,21 +2522,21 @@ PGresult *PQexecPrepared(PGconn *conn, - PQexecPrepared is like PQexecParams, + PQexecPrepared is like PQexecParams, but the command to be executed is specified by naming a previously-prepared statement, instead of giving a query string. This feature allows commands that will be used repeatedly to be parsed and planned just once, rather than each time they are executed. The statement must have been prepared previously in - the current session. PQexecPrepared is supported + the current session. PQexecPrepared is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. - The parameters are identical to PQexecParams, except that the + The parameters are identical to PQexecParams, except that the name of a prepared statement is given instead of a query string, and the - paramTypes[] parameter is not present (it is not needed since + paramTypes[] parameter is not present (it is not needed since the prepared statement's parameter types were determined when it was created). 
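A minimal sketch of the prepare-then-execute cycle just described; the statement name fetch_by_x and table mytable are placeholders, and the $1::bigint cast echoes the typing advice given earlier:

#include <stdio.h>
#include <libpq-fe.h>

/* conn is assumed to be an established connection; "mytable" and the
 * statement name "fetch_by_x" are placeholders. */
static void
demo_prepared(PGconn *conn)
{
    const char *paramValues[1] = {"42"};    /* text-format parameter */
    PGresult   *res;

    /* nParams = 0 here: the $1::bigint cast lets the server infer the type */
    res = PQprepare(conn, "fetch_by_x",
                    "SELECT * FROM mytable WHERE x = $1::bigint",
                    0, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    res = PQexecPrepared(conn, "fetch_by_x",
                         1, paramValues,
                         NULL, NULL,    /* lengths/formats: not needed for text */
                         0);            /* request text-format results */
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "execute failed: %s", PQerrorMessage(conn));
    PQclear(res);
}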
@@ -2560,20 +2560,20 @@ PGresult *PQdescribePrepared(PGconn *conn, const char *stmtName); - PQdescribePrepared allows an application to obtain + PQdescribePrepared allows an application to obtain information about a previously prepared statement. - PQdescribePrepared is supported only in protocol 3.0 + PQdescribePrepared is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. - stmtName can be "" or NULL to reference + stmtName can be "" or NULL to reference the unnamed statement, otherwise it must be the name of an existing - prepared statement. On success, a PGresult with + prepared statement. On success, a PGresult with status PGRES_COMMAND_OK is returned. The functions PQnparams and PQparamtype can be applied to this - PGresult to obtain information about the parameters + PGresult to obtain information about the parameters of the prepared statement, and the functions PQnfields, PQfname, PQftype, etc provide information about the @@ -2600,23 +2600,23 @@ PGresult *PQdescribePortal(PGconn *conn, const char *portalName); - PQdescribePortal allows an application to obtain + PQdescribePortal allows an application to obtain information about a previously created portal. - (libpq does not provide any direct access to + (libpq does not provide any direct access to portals, but you can use this function to inspect the properties - of a cursor created with a DECLARE CURSOR SQL command.) - PQdescribePortal is supported only in protocol 3.0 + of a cursor created with a DECLARE CURSOR SQL command.) + PQdescribePortal is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. - portalName can be "" or NULL to reference + portalName can be "" or NULL to reference the unnamed portal, otherwise it must be the name of an existing - portal. On success, a PGresult with status + portal. On success, a PGresult with status PGRES_COMMAND_OK is returned. The functions PQnfields, PQfname, PQftype, etc can be applied to the - PGresult to obtain information about the result + PGresult to obtain information about the result columns (if any) of the portal. @@ -2625,7 +2625,7 @@ PGresult *PQdescribePortal(PGconn *conn, const char *portalName); - The PGresultPGresult + The PGresultPGresult structure encapsulates the result returned by the server. libpq application programmers should be careful to maintain the PGresult abstraction. @@ -2678,7 +2678,7 @@ ExecStatusType PQresultStatus(const PGresult *res); Successful completion of a command returning data (such as - a SELECT or SHOW). + a SELECT or SHOW). @@ -2743,7 +2743,7 @@ ExecStatusType PQresultStatus(const PGresult *res); PGRES_SINGLE_TUPLE - The PGresult contains a single result tuple + The PGresult contains a single result tuple from the current command. This status occurs only when single-row mode has been selected for the query (see ). @@ -2786,7 +2786,7 @@ ExecStatusType PQresultStatus(const PGresult *res); Converts the enumerated type returned by - PQresultStatus into a string constant describing the + PQresultStatus into a string constant describing the status code. The caller should not free the result. @@ -2813,7 +2813,7 @@ char *PQresultErrorMessage(const PGresult *res); If there was an error, the returned string will include a trailing newline. The caller should not free the result directly. It will - be freed when the associated PGresult handle is + be freed when the associated PGresult handle is passed to PQclear. 
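The status and error functions above are commonly wrapped in a small helper along these lines (a sketch; which statuses count as "success" is application policy, not libpq's):

#include <stdio.h>
#include <libpq-fe.h>

/* Returns 1 if res denotes success; otherwise prints the status name
 * (via PQresStatus) and the error text, then returns 0. */
static int
check_result(PGconn *conn, PGresult *res)
{
    ExecStatusType st = PQresultStatus(res);    /* a NULL res yields PGRES_FATAL_ERROR */

    if (st == PGRES_COMMAND_OK || st == PGRES_TUPLES_OK)
        return 1;

    fprintf(stderr, "%s: %s", PQresStatus(st),
            res ? PQresultErrorMessage(res) : PQerrorMessage(conn));
    return 0;
}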
@@ -2845,7 +2845,7 @@ char *PQresultErrorMessage(const PGresult *res); Returns a reformatted version of the error message associated with - a PGresult object. + a PGresult object. char *PQresultVerboseErrorMessage(const PGresult *res, PGVerbosity verbosity, @@ -2857,17 +2857,17 @@ char *PQresultVerboseErrorMessage(const PGresult *res, by computing the message that would have been produced by PQresultErrorMessage if the specified verbosity settings had been in effect for the connection when the - given PGresult was generated. If - the PGresult is not an error result, - PGresult is not an error result is reported instead. + given PGresult was generated. If + the PGresult is not an error result, + PGresult is not an error result is reported instead. The returned string includes a trailing newline. Unlike most other functions for extracting data from - a PGresult, the result of this function is a freshly + a PGresult, the result of this function is a freshly allocated string. The caller must free it - using PQfreemem() when the string is no longer needed. + using PQfreemem() when the string is no longer needed. @@ -2877,20 +2877,20 @@ char *PQresultVerboseErrorMessage(const PGresult *res, - PQresultErrorFieldPQresultErrorField + PQresultErrorFieldPQresultErrorField Returns an individual field of an error report. char *PQresultErrorField(const PGresult *res, int fieldcode); - fieldcode is an error field identifier; see the symbols + fieldcode is an error field identifier; see the symbols listed below. NULL is returned if the PGresult is not an error or warning result, or does not include the specified field. Field values will normally not include a trailing newline. The caller should not free the result directly. It will be freed when the - associated PGresult handle is passed to + associated PGresult handle is passed to PQclear. @@ -2898,29 +2898,29 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); The following field codes are available: - PG_DIAG_SEVERITY + PG_DIAG_SEVERITY - The severity; the field contents are ERROR, - FATAL, or PANIC (in an error message), - or WARNING, NOTICE, DEBUG, - INFO, or LOG (in a notice message), or + The severity; the field contents are ERROR, + FATAL, or PANIC (in an error message), + or WARNING, NOTICE, DEBUG, + INFO, or LOG (in a notice message), or a localized translation of one of these. Always present. - PG_DIAG_SEVERITY_NONLOCALIZED + PG_DIAG_SEVERITY_NONLOCALIZED - The severity; the field contents are ERROR, - FATAL, or PANIC (in an error message), - or WARNING, NOTICE, DEBUG, - INFO, or LOG (in a notice message). - This is identical to the PG_DIAG_SEVERITY field except + The severity; the field contents are ERROR, + FATAL, or PANIC (in an error message), + or WARNING, NOTICE, DEBUG, + INFO, or LOG (in a notice message). + This is identical to the PG_DIAG_SEVERITY field except that the contents are never localized. This is present only in - reports generated by PostgreSQL versions 9.6 + reports generated by PostgreSQL versions 9.6 and later. @@ -2928,7 +2928,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SQLSTATE + PG_DIAG_SQLSTATE error codes libpq @@ -2948,7 +2948,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_MESSAGE_PRIMARY + PG_DIAG_MESSAGE_PRIMARY The primary human-readable error message (typically one line). 
@@ -2958,7 +2958,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_MESSAGE_DETAIL + PG_DIAG_MESSAGE_DETAIL Detail: an optional secondary error message carrying more @@ -2968,7 +2968,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_MESSAGE_HINT + PG_DIAG_MESSAGE_HINT Hint: an optional suggestion what to do about the problem. @@ -2980,7 +2980,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_STATEMENT_POSITION + PG_DIAG_STATEMENT_POSITION A string containing a decimal integer indicating an error cursor @@ -2992,21 +2992,21 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_INTERNAL_POSITION + PG_DIAG_INTERNAL_POSITION This is defined the same as the - PG_DIAG_STATEMENT_POSITION field, but it is used + PG_DIAG_STATEMENT_POSITION field, but it is used when the cursor position refers to an internally generated command rather than the one submitted by the client. The - PG_DIAG_INTERNAL_QUERY field will always appear when + PG_DIAG_INTERNAL_QUERY field will always appear when this field appears. - PG_DIAG_INTERNAL_QUERY + PG_DIAG_INTERNAL_QUERY The text of a failed internally-generated command. This could @@ -3016,7 +3016,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_CONTEXT + PG_DIAG_CONTEXT An indication of the context in which the error occurred. @@ -3028,7 +3028,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SCHEMA_NAME + PG_DIAG_SCHEMA_NAME If the error was associated with a specific database object, @@ -3038,7 +3038,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_TABLE_NAME + PG_DIAG_TABLE_NAME If the error was associated with a specific table, the name of the @@ -3049,7 +3049,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_COLUMN_NAME + PG_DIAG_COLUMN_NAME If the error was associated with a specific table column, the name @@ -3060,7 +3060,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_DATATYPE_NAME + PG_DIAG_DATATYPE_NAME If the error was associated with a specific data type, the name of @@ -3071,7 +3071,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_CONSTRAINT_NAME + PG_DIAG_CONSTRAINT_NAME If the error was associated with a specific constraint, the name @@ -3084,7 +3084,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SOURCE_FILE + PG_DIAG_SOURCE_FILE The file name of the source-code location where the error was @@ -3094,7 +3094,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SOURCE_LINE + PG_DIAG_SOURCE_LINE The line number of the source-code location where the error @@ -3104,7 +3104,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PG_DIAG_SOURCE_FUNCTION + PG_DIAG_SOURCE_FUNCTION The name of the source-code function reporting the error. @@ -3151,7 +3151,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); - PQclearPQclear + PQclearPQclear Frees the storage associated with a @@ -3184,7 +3184,7 @@ void PQclear(PGresult *res); These functions are used to extract information from a PGresult object that represents a successful query result (that is, one that has status - PGRES_TUPLES_OK or PGRES_SINGLE_TUPLE). + PGRES_TUPLES_OK or PGRES_SINGLE_TUPLE). 
They can also be used to extract information from a successful Describe operation: a Describe's result has all the same column information that actual execution of the query @@ -3204,8 +3204,8 @@ void PQclear(PGresult *res); Returns the number of rows (tuples) in the query result. - (Note that PGresult objects are limited to no more - than INT_MAX rows, so an int result is + (Note that PGresult objects are limited to no more + than INT_MAX rows, so an int result is sufficient.) @@ -3249,7 +3249,7 @@ int PQnfields(const PGresult *res); Returns the column name associated with the given column number. Column numbers start at 0. The caller should not free the result directly. It will be freed when the associated - PGresult handle is passed to + PGresult handle is passed to PQclear. char *PQfname(const PGresult *res, @@ -3323,7 +3323,7 @@ Oid PQftable(const PGresult *res, - InvalidOid is returned if the column number is out of range, + InvalidOid is returned if the column number is out of range, or if the specified column is not a simple reference to a table column, or when using pre-3.0 protocol. You can query the system table pg_class to determine @@ -3442,7 +3442,7 @@ int PQfmod(const PGresult *res, The interpretation of modifier values is type-specific; they typically indicate precision or size limits. The value -1 is - used to indicate no information available. Most data + used to indicate no information available. Most data types do not use modifiers, in which case the value is always -1. @@ -3468,7 +3468,7 @@ int PQfsize(const PGresult *res, - PQfsize returns the space allocated for this column + PQfsize returns the space allocated for this column in a database row, in other words the size of the server's internal representation of the data type. (Accordingly, it is not really very useful to clients.) A negative value indicates @@ -3487,7 +3487,7 @@ int PQfsize(const PGresult *res, - Returns 1 if the PGresult contains binary data + Returns 1 if the PGresult contains binary data and 0 if it contains text data. int PQbinaryTuples(const PGresult *res); @@ -3496,10 +3496,10 @@ int PQbinaryTuples(const PGresult *res); This function is deprecated (except for its use in connection with - COPY), because it is possible for a single - PGresult to contain text data in some columns and - binary data in others. PQfformat is preferred. - PQbinaryTuples returns 1 only if all columns of the + COPY), because it is possible for a single + PGresult to contain text data in some columns and + binary data in others. PQfformat is preferred. + PQbinaryTuples returns 1 only if all columns of the result are binary (format 1). @@ -3518,7 +3518,7 @@ int PQbinaryTuples(const PGresult *res); Returns a single field value of one row of a PGresult. Row and column numbers start at 0. The caller should not free the result directly. It will - be freed when the associated PGresult handle is + be freed when the associated PGresult handle is passed to PQclear. char *PQgetvalue(const PGresult *res, @@ -3532,7 +3532,7 @@ char *PQgetvalue(const PGresult *res, PQgetvalue is a null-terminated character string representation of the field value. For data in binary format, the value is in the binary representation determined by - the data type's typsend and typreceive + the data type's typsend and typreceive functions. (The value is actually followed by a zero byte in this case too, but that is not ordinarily useful, since the value is likely to contain embedded nulls.) 
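A sketch of how the row and column accessors combine to walk a text-format result (assuming a PGRES_TUPLES_OK result obtained elsewhere):

#include <stdio.h>
#include <libpq-fe.h>

/* Prints a text-format result as name = value lines; assumes res has
 * status PGRES_TUPLES_OK. */
static void
dump_result(const PGresult *res)
{
    int ntuples = PQntuples(res);
    int nfields = PQnfields(res);
    int i, j;

    for (i = 0; i < ntuples; i++)
        for (j = 0; j < nfields; j++)
            printf("%s = %s\n", PQfname(res, j), PQgetvalue(res, i, j));
}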
@@ -3540,7 +3540,7 @@ char *PQgetvalue(const PGresult *res, An empty string is returned if the field value is null. See - PQgetisnull to distinguish null values from + PQgetisnull to distinguish null values from empty-string values. @@ -3609,8 +3609,8 @@ int PQgetlength(const PGresult *res, This is the actual data length for the particular data value, that is, the size of the object pointed to by PQgetvalue. For text data format this is - the same as strlen(). For binary format this is - essential information. Note that one should not + the same as strlen(). For binary format this is + essential information. Note that one should not rely on PQfsize to obtain the actual data length. @@ -3635,7 +3635,7 @@ int PQnparams(const PGresult *res); This function is only useful when inspecting the result of - PQdescribePrepared. For other types of queries it + PQdescribePrepared. For other types of queries it will return zero. @@ -3660,7 +3660,7 @@ Oid PQparamtype(const PGresult *res, int param_number); This function is only useful when inspecting the result of - PQdescribePrepared. For other types of queries it + PQdescribePrepared. For other types of queries it will return zero. @@ -3738,7 +3738,7 @@ char *PQcmdStatus(PGresult *res); Commonly this is just the name of the command, but it might include additional data such as the number of rows processed. The caller should not free the result directly. It will be freed when the - associated PGresult handle is passed to + associated PGresult handle is passed to PQclear. @@ -3762,17 +3762,17 @@ char *PQcmdTuples(PGresult *res); This function returns a string containing the number of rows - affected by the SQL statement that generated the - PGresult. This function can only be used following - the execution of a SELECT, CREATE TABLE AS, - INSERT, UPDATE, DELETE, - MOVE, FETCH, or COPY statement, - or an EXECUTE of a prepared query that contains an - INSERT, UPDATE, or DELETE statement. - If the command that generated the PGresult was anything - else, PQcmdTuples returns an empty string. The caller + affected by the SQL statement that generated the + PGresult. This function can only be used following + the execution of a SELECT, CREATE TABLE AS, + INSERT, UPDATE, DELETE, + MOVE, FETCH, or COPY statement, + or an EXECUTE of a prepared query that contains an + INSERT, UPDATE, or DELETE statement. + If the command that generated the PGresult was anything + else, PQcmdTuples returns an empty string. The caller should not free the return value directly. It will be freed when - the associated PGresult handle is passed to + the associated PGresult handle is passed to PQclear. @@ -3788,14 +3788,14 @@ char *PQcmdTuples(PGresult *res); - Returns the OIDOIDin libpq - of the inserted row, if the SQL command was an - INSERT that inserted exactly one row into a table that - has OIDs, or a EXECUTE of a prepared query containing - a suitable INSERT statement. Otherwise, this function + Returns the OIDOIDin libpq + of the inserted row, if the SQL command was an + INSERT that inserted exactly one row into a table that + has OIDs, or a EXECUTE of a prepared query containing + a suitable INSERT statement. Otherwise, this function returns InvalidOid. This function will also return InvalidOid if the table affected by the - INSERT statement does not contain OIDs. + INSERT statement does not contain OIDs. Oid PQoidValue(const PGresult *res); @@ -3858,19 +3858,19 @@ char *PQescapeLiteral(PGconn *conn, const char *str, size_t length); values as literal constants in SQL commands. 
Certain characters (such as quotes and backslashes) must be escaped to prevent them from being interpreted specially by the SQL parser. - PQescapeLiteral performs this operation. + PQescapeLiteral performs this operation. - PQescapeLiteral returns an escaped version of the + PQescapeLiteral returns an escaped version of the str parameter in memory allocated with - malloc(). This memory should be freed using - PQfreemem() when the result is no longer needed. + malloc(). This memory should be freed using + PQfreemem() when the result is no longer needed. A terminating zero byte is not required, and should not be - counted in length. (If a terminating zero byte is found - before length bytes are processed, - PQescapeLiteral stops at the zero; the behavior is - thus rather like strncpy.) The + counted in length. (If a terminating zero byte is found + before length bytes are processed, + PQescapeLiteral stops at the zero; the behavior is + thus rather like strncpy.) The return string has all special characters replaced so that they can be properly processed by the PostgreSQL string literal parser. A terminating zero byte is also added. The @@ -3879,8 +3879,8 @@ char *PQescapeLiteral(PGconn *conn, const char *str, size_t length); - On error, PQescapeLiteral returns NULL and a suitable - message is stored in the conn object. + On error, PQescapeLiteral returns NULL and a suitable + message is stored in the conn object. @@ -3888,14 +3888,14 @@ char *PQescapeLiteral(PGconn *conn, const char *str, size_t length); It is especially important to do proper escaping when handling strings that were received from an untrustworthy source. Otherwise there is a security risk: you are vulnerable to - SQL injection attacks wherein unwanted SQL commands are + SQL injection attacks wherein unwanted SQL commands are fed to your database. Note that it is not necessary nor correct to do escaping when a data - value is passed as a separate parameter in PQexecParams or + value is passed as a separate parameter in PQexecParams or its sibling routines. @@ -3926,15 +3926,15 @@ char *PQescapeIdentifier(PGconn *conn, const char *str, size_t length); - PQescapeIdentifier returns a version of the + PQescapeIdentifier returns a version of the str parameter escaped as an SQL identifier - in memory allocated with malloc(). This memory must be - freed using PQfreemem() when the result is no longer + in memory allocated with malloc(). This memory must be + freed using PQfreemem() when the result is no longer needed. A terminating zero byte is not required, and should not be - counted in length. (If a terminating zero byte is found - before length bytes are processed, - PQescapeIdentifier stops at the zero; the behavior is - thus rather like strncpy.) The + counted in length. (If a terminating zero byte is found + before length bytes are processed, + PQescapeIdentifier stops at the zero; the behavior is + thus rather like strncpy.) The return string has all special characters replaced so that it will be properly processed as an SQL identifier. A terminating zero byte is also added. The return string will also be surrounded by double @@ -3942,8 +3942,8 @@ char *PQescapeIdentifier(PGconn *conn, const char *str, size_t length); - On error, PQescapeIdentifier returns NULL and a suitable - message is stored in the conn object. + On error, PQescapeIdentifier returns NULL and a suitable + message is stored in the conn object. 
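A sketch combining the two escaping functions to build a query from untrusted input; the table and value arguments, and the fixed 64-byte allowance for the constant SQL text, are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

/* Builds "SELECT * FROM <table> WHERE name = <value>" with both parts
 * escaped; table and value may come from untrusted input. The caller
 * frees the returned string with free(). */
static char *
build_query(PGconn *conn, const char *table, const char *value)
{
    char *ident = PQescapeIdentifier(conn, table, strlen(table));
    char *lit = PQescapeLiteral(conn, value, strlen(value));
    char *query = NULL;

    if (ident && lit)
    {
        size_t len = strlen(ident) + strlen(lit) + 64;  /* 64: room for the fixed SQL text */

        query = malloc(len);
        if (query)
            snprintf(query, len, "SELECT * FROM %s WHERE name = %s", ident, lit);
    }
    else
        fprintf(stderr, "escaping failed: %s", PQerrorMessage(conn));

    if (ident)
        PQfreemem(ident);
    if (lit)
        PQfreemem(lit);
    return query;
}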
@@ -3974,39 +3974,39 @@ size_t PQescapeStringConn(PGconn *conn, - PQescapeStringConn escapes string literals, much like - PQescapeLiteral. Unlike PQescapeLiteral, + PQescapeStringConn escapes string literals, much like + PQescapeLiteral. Unlike PQescapeLiteral, the caller is responsible for providing an appropriately sized buffer. - Furthermore, PQescapeStringConn does not generate the - single quotes that must surround PostgreSQL string + Furthermore, PQescapeStringConn does not generate the + single quotes that must surround PostgreSQL string literals; they should be provided in the SQL command that the - result is inserted into. The parameter from points to + result is inserted into. The parameter from points to the first character of the string that is to be escaped, and the - length parameter gives the number of bytes in this + length parameter gives the number of bytes in this string. A terminating zero byte is not required, and should not be - counted in length. (If a terminating zero byte is found - before length bytes are processed, - PQescapeStringConn stops at the zero; the behavior is - thus rather like strncpy.) to shall point + counted in length. (If a terminating zero byte is found + before length bytes are processed, + PQescapeStringConn stops at the zero; the behavior is + thus rather like strncpy.) to shall point to a buffer that is able to hold at least one more byte than twice - the value of length, otherwise the behavior is undefined. - Behavior is likewise undefined if the to and - from strings overlap. + the value of length, otherwise the behavior is undefined. + Behavior is likewise undefined if the to and + from strings overlap. - If the error parameter is not NULL, then - *error is set to zero on success, nonzero on error. + If the error parameter is not NULL, then + *error is set to zero on success, nonzero on error. Presently the only possible error conditions involve invalid multibyte encoding in the source string. The output string is still generated on error, but it can be expected that the server will reject it as malformed. On error, a suitable message is stored in the - conn object, whether or not error is NULL. + conn object, whether or not error is NULL. - PQescapeStringConn returns the number of bytes written - to to, not including the terminating zero byte. + PQescapeStringConn returns the number of bytes written + to to, not including the terminating zero byte. @@ -4021,30 +4021,30 @@ size_t PQescapeStringConn(PGconn *conn, - PQescapeString is an older, deprecated version of - PQescapeStringConn. + PQescapeString is an older, deprecated version of + PQescapeStringConn. size_t PQescapeString (char *to, const char *from, size_t length); - The only difference from PQescapeStringConn is that - PQescapeString does not take PGconn - or error parameters. + The only difference from PQescapeStringConn is that + PQescapeString does not take PGconn + or error parameters. Because of this, it cannot adjust its behavior depending on the connection properties (such as character encoding) and therefore - it might give the wrong results. Also, it has no way + it might give the wrong results. Also, it has no way to report error conditions. - PQescapeString can be used safely in - client programs that work with only one PostgreSQL + PQescapeString can be used safely in + client programs that work with only one PostgreSQL connection at a time (in this case it can find out what it needs to - know behind the scenes). 
In other contexts it is a security + know behind the scenes). In other contexts it is a security hazard and should be avoided in favor of - PQescapeStringConn. + PQescapeStringConn. @@ -4090,10 +4090,10 @@ unsigned char *PQescapeByteaConn(PGconn *conn, - PQescapeByteaConn returns an escaped version of the + PQescapeByteaConn returns an escaped version of the from parameter binary string in memory - allocated with malloc(). This memory should be freed using - PQfreemem() when the result is no longer needed. The + allocated with malloc(). This memory should be freed using + PQfreemem() when the result is no longer needed. The return string has all special characters replaced so that they can be properly processed by the PostgreSQL string literal parser, and the bytea input function. A @@ -4104,7 +4104,7 @@ unsigned char *PQescapeByteaConn(PGconn *conn, On error, a null pointer is returned, and a suitable error message - is stored in the conn object. Currently, the only + is stored in the conn object. Currently, the only possible error is insufficient memory for the result string. @@ -4120,8 +4120,8 @@ unsigned char *PQescapeByteaConn(PGconn *conn, - PQescapeBytea is an older, deprecated version of - PQescapeByteaConn. + PQescapeBytea is an older, deprecated version of + PQescapeByteaConn. unsigned char *PQescapeBytea(const unsigned char *from, size_t from_length, @@ -4130,15 +4130,15 @@ unsigned char *PQescapeBytea(const unsigned char *from, - The only difference from PQescapeByteaConn is that - PQescapeBytea does not take a PGconn - parameter. Because of this, PQescapeBytea can + The only difference from PQescapeByteaConn is that + PQescapeBytea does not take a PGconn + parameter. Because of this, PQescapeBytea can only be used safely in client programs that use a single - PostgreSQL connection at a time (in this case + PostgreSQL connection at a time (in this case it can find out what it needs to know behind the - scenes). It might give the wrong results if + scenes). It might give the wrong results if used in programs that use multiple database connections (use - PQescapeByteaConn in such cases). + PQescapeByteaConn in such cases). @@ -4169,17 +4169,17 @@ unsigned char *PQunescapeBytea(const unsigned char *from, size_t *to_length); to a bytea column. PQunescapeBytea converts this string representation into its binary representation. It returns a pointer to a buffer allocated with - malloc(), or NULL on error, and puts the size of + malloc(), or NULL on error, and puts the size of the buffer in to_length. The result must be - freed using PQfreemem when it is no longer needed. + freed using PQfreemem when it is no longer needed. This conversion is not exactly the inverse of PQescapeBytea, because the string is not expected - to be escaped when received from PQgetvalue. + to be escaped when received from PQgetvalue. In particular this means there is no need for string quoting considerations, - and so no need for a PGconn parameter. + and so no need for a PGconn parameter. @@ -4273,7 +4273,7 @@ unsigned char *PQunescapeBytea(const unsigned char *from, size_t *to_length); Submits a command to the server without waiting for the result(s). 1 is returned if the command was successfully dispatched and 0 if - not (in which case, use PQerrorMessage to get more + not (in which case, use PQerrorMessage to get more information about the failure). 
int PQsendQuery(PGconn *conn, const char *command); @@ -4323,7 +4323,7 @@ int PQsendQueryParams(PGconn *conn, - PQsendPrepare + PQsendPrepare PQsendPrepare @@ -4341,7 +4341,7 @@ int PQsendPrepare(PGconn *conn, const Oid *paramTypes); - This is an asynchronous version of PQprepare: it + This is an asynchronous version of PQprepare: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to determine whether the server successfully created the prepared @@ -4388,7 +4388,7 @@ int PQsendQueryPrepared(PGconn *conn, - PQsendDescribePrepared + PQsendDescribePrepared PQsendDescribePrepared @@ -4402,7 +4402,7 @@ int PQsendQueryPrepared(PGconn *conn, int PQsendDescribePrepared(PGconn *conn, const char *stmtName); - This is an asynchronous version of PQdescribePrepared: + This is an asynchronous version of PQdescribePrepared: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to obtain the results. The function's parameters are handled @@ -4415,7 +4415,7 @@ int PQsendDescribePrepared(PGconn *conn, const char *stmtName); - PQsendDescribePortal + PQsendDescribePortal PQsendDescribePortal @@ -4429,7 +4429,7 @@ int PQsendDescribePrepared(PGconn *conn, const char *stmtName); int PQsendDescribePortal(PGconn *conn, const char *portalName); - This is an asynchronous version of PQdescribePortal: + This is an asynchronous version of PQdescribePortal: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to obtain the results. The function's parameters are handled @@ -4472,7 +4472,7 @@ PGresult *PQgetResult(PGconn *conn); PQgetResult will just return a null pointer at once.) Each non-null result from PQgetResult should be processed using the - same PGresult accessor functions previously + same PGresult accessor functions previously described. Don't forget to free each result object with PQclear when done with it. Note that PQgetResult will block only if a command is @@ -4484,7 +4484,7 @@ PGresult *PQgetResult(PGconn *conn); Even when PQresultStatus indicates a fatal error, PQgetResult should be called until it - returns a null pointer, to allow libpq to + returns a null pointer, to allow libpq to process the error information completely. @@ -4589,7 +4589,7 @@ int PQisBusy(PGconn *conn); A typical application using these functions will have a main loop that - uses select() or poll() to wait for + uses select() or poll() to wait for all the conditions that it must respond to. One of the conditions will be input available from the server, which in terms of select() means readable data on the file @@ -4599,7 +4599,7 @@ int PQisBusy(PGconn *conn); call PQisBusy, followed by PQgetResult if PQisBusy returns false (0). It can also call PQnotifies - to detect NOTIFY messages (see NOTIFY messages (see ). @@ -4737,12 +4737,12 @@ int PQflush(PGconn *conn); - Ordinarily, libpq collects a SQL command's + Ordinarily, libpq collects a SQL command's entire result and returns it to the application as a single PGresult. This can be unworkable for commands that return a large number of rows. For such cases, applications can use PQsendQuery and PQgetResult in - single-row mode. In this mode, the result row(s) are + single-row mode. In this mode, the result row(s) are returned to the application one at a time, as they are received from the server. 
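A sketch of single-row retrieval using PQsetSingleRowMode, described just below; the query is a placeholder and error handling is reduced to a minimum:

#include <stdio.h>
#include <libpq-fe.h>

/* Streams a large result one row at a time instead of materializing it. */
static void
stream_rows(PGconn *conn)
{
    PGresult *res;

    if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
    {
        fprintf(stderr, "dispatch failed: %s", PQerrorMessage(conn));
        return;
    }
    /* Must be called right after PQsendQuery, before any PQgetResult. */
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "could not activate single-row mode\n");

    while ((res = PQgetResult(conn)) != NULL)   /* blocks between rows */
    {
        switch (PQresultStatus(res))
        {
            case PGRES_SINGLE_TUPLE:
                printf("row: %s\n", PQgetvalue(res, 0, 0));
                break;
            case PGRES_TUPLES_OK:
                break;          /* zero-row result marking the end of the set */
            default:
                fprintf(stderr, "%s", PQresultErrorMessage(res));
                break;
        }
        PQclear(res);
    }
}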
@@ -4807,7 +4807,7 @@ int PQsetSingleRowMode(PGconn *conn); While processing a query, the server may return some rows and then encounter an error, causing the query to be aborted. Ordinarily, - libpq discards any such rows and reports only the + libpq discards any such rows and reports only the error. But in single-row mode, those rows will have already been returned to the application. Hence, the application will see some PGRES_SINGLE_TUPLE PGresult @@ -4853,10 +4853,10 @@ PGcancel *PQgetCancel(PGconn *conn); PQgetCancel creates a - PGcancelPGcancel object - given a PGconn connection object. It will return - NULL if the given conn is NULL or an invalid - connection. The PGcancel object is an opaque + PGcancelPGcancel object + given a PGconn connection object. It will return + NULL if the given conn is NULL or an invalid + connection. The PGcancel object is an opaque structure that is not meant to be accessed directly by the application; it can only be passed to PQcancel or PQfreeCancel. @@ -4905,9 +4905,9 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize); The return value is 1 if the cancel request was successfully - dispatched and 0 if not. If not, errbuf is filled - with an explanatory error message. errbuf - must be a char array of size errbufsize (the + dispatched and 0 if not. If not, errbuf is filled + with an explanatory error message. errbuf + must be a char array of size errbufsize (the recommended size is 256 bytes). @@ -4922,11 +4922,11 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize); PQcancel can safely be invoked from a signal - handler, if the errbuf is a local variable in the - signal handler. The PGcancel object is read-only + handler, if the errbuf is a local variable in the + signal handler. The PGcancel object is read-only as far as PQcancel is concerned, so it can also be invoked from a thread that is separate from the one - manipulating the PGconn object. + manipulating the PGconn object. @@ -4953,12 +4953,12 @@ int PQrequestCancel(PGconn *conn); Requests that the server abandon processing of the current command. It operates directly on the - PGconn object, and in case of failure stores the - error message in the PGconn object (whence it can + PGconn object, and in case of failure stores the + error message in the PGconn object (whence it can be retrieved by PQerrorMessage). Although the functionality is the same, this approach creates hazards for multiple-thread programs and signal handlers, since it is possible - that overwriting the PGconn's error message will + that overwriting the PGconn's error message will mess up the operation currently in progress on the connection. @@ -4991,7 +4991,7 @@ int PQrequestCancel(PGconn *conn); - The function PQfnPQfn + The function PQfnPQfn requests execution of a server function via the fast-path interface: PGresult *PQfn(PGconn *conn, @@ -5016,19 +5016,19 @@ typedef struct - The fnid argument is the OID of the function to be - executed. args and nargs define the + The fnid argument is the OID of the function to be + executed. args and nargs define the parameters to be passed to the function; they must match the declared - function argument list. When the isint field of a - parameter structure is true, the u.integer value is sent + function argument list. When the isint field of a + parameter structure is true, the u.integer value is sent to the server as an integer of the indicated length (this must be - 2 or 4 bytes); proper byte-swapping occurs. 
When isint - is false, the indicated number of bytes at *u.ptr are + 2 or 4 bytes); proper byte-swapping occurs. When isint + is false, the indicated number of bytes at *u.ptr are sent with no processing; the data must be in the format expected by the server for binary transmission of the function's argument data - type. (The declaration of u.ptr as being of - type int * is historical; it would be better to consider - it void *.) + type. (The declaration of u.ptr as being of + type int * is historical; it would be better to consider + it void *.) result_buf points to the buffer in which to place the function's return value. The caller must have allocated sufficient space to store the return value. (There is no check!) The actual result @@ -5036,14 +5036,14 @@ typedef struct result_len. If a 2- or 4-byte integer result is expected, set result_is_int to 1, otherwise set it to 0. Setting result_is_int to 1 causes - libpq to byte-swap the value if necessary, so that it + libpq to byte-swap the value if necessary, so that it is delivered as a proper int value for the client machine; - note that a 4-byte integer is delivered into *result_buf + note that a 4-byte integer is delivered into *result_buf for either allowed result size. - When result_is_int is 0, the binary-format byte string + When result_is_int is 0, the binary-format byte string sent by the server is returned unmodified. (In this case it's better to consider result_buf as being of - type void *.) + type void *.) @@ -5077,7 +5077,7 @@ typedef struct can stop listening with the UNLISTEN command). All sessions listening on a particular channel will be notified asynchronously when a NOTIFY command with that - channel name is executed by any session. A payload string can + channel name is executed by any session. A payload string can be passed to communicate additional data to the listeners. @@ -5087,14 +5087,14 @@ typedef struct and NOTIFY commands as ordinary SQL commands. The arrival of NOTIFY messages can subsequently be detected by calling - PQnotifies.PQnotifies + PQnotifies.PQnotifies The function PQnotifies returns the next notification from a list of unhandled notification messages received from the server. It returns a null pointer if there are no pending notifications. Once a - notification is returned from PQnotifies, it is considered + notification is returned from PQnotifies, it is considered handled and will be removed from the list of notifications. @@ -5128,14 +5128,14 @@ typedef struct pgNotify server; it just returns messages previously absorbed by another libpq function. In prior releases of libpq, the only way to ensure timely receipt - of NOTIFY messages was to constantly submit commands, even + of NOTIFY messages was to constantly submit commands, even empty ones, and then check PQnotifies after each PQexec. While this still works, it is deprecated as a waste of processing power. - A better way to check for NOTIFY messages when you have no + A better way to check for NOTIFY messages when you have no useful commands to execute is to call PQconsumeInput, then check PQnotifies. You can use @@ -5173,12 +5173,12 @@ typedef struct pgNotify The overall process is that the application first issues the SQL COPY command via PQexec or one of the equivalent functions. The response to this (if there is no - error in the command) will be a PGresult object bearing + error in the command) will be a PGresult object bearing a status code of PGRES_COPY_OUT or PGRES_COPY_IN (depending on the specified copy direction). 
The application should then use the functions of this section to receive or transmit data rows. When the data transfer is - complete, another PGresult object is returned to indicate + complete, another PGresult object is returned to indicate success or failure of the transfer. Its status will be PGRES_COMMAND_OK for success or PGRES_FATAL_ERROR if some problem was encountered. @@ -5192,8 +5192,8 @@ typedef struct pgNotify If a COPY command is issued via PQexec in a string that could contain additional commands, the application must continue fetching results via - PQgetResult after completing the COPY - sequence. Only when PQgetResult returns + PQgetResult after completing the COPY + sequence. Only when PQgetResult returns NULL is it certain that the PQexec command string is done and it is safe to issue more commands. @@ -5206,7 +5206,7 @@ typedef struct pgNotify - A PGresult object bearing one of these status values + A PGresult object bearing one of these status values carries some additional data about the COPY operation that is starting. This additional data is available using functions that are also used in connection with query results: @@ -5262,7 +5262,7 @@ typedef struct pgNotify each column of the copy operation. The per-column format codes will always be zero when the overall copy format is textual, but the binary format can support both text and binary columns. - (However, as of the current implementation of COPY, + (However, as of the current implementation of COPY, only binary columns appear in a binary copy; so the per-column formats always match the overall format at present.) @@ -5283,8 +5283,8 @@ typedef struct pgNotify These functions are used to send data during COPY FROM - STDIN. They will fail if called when the connection is not in - COPY_IN state. + STDIN. They will fail if called when the connection is not in + COPY_IN state. @@ -5298,7 +5298,7 @@ typedef struct pgNotify - Sends data to the server during COPY_IN state. + Sends data to the server during COPY_IN state. int PQputCopyData(PGconn *conn, const char *buffer, @@ -5308,7 +5308,7 @@ int PQputCopyData(PGconn *conn, Transmits the COPY data in the specified - buffer, of length nbytes, to the server. + buffer, of length nbytes, to the server. The result is 1 if the data was queued, zero if it was not queued because of full buffers (this will only happen in nonblocking mode), or -1 if an error occurred. @@ -5322,7 +5322,7 @@ int PQputCopyData(PGconn *conn, into buffer loads of any convenient size. Buffer-load boundaries have no semantic significance when sending. The contents of the data stream must match the data format expected by the - COPY command; see for details. + COPY command; see for details. @@ -5337,7 +5337,7 @@ int PQputCopyData(PGconn *conn, - Sends end-of-data indication to the server during COPY_IN state. + Sends end-of-data indication to the server during COPY_IN state. int PQputCopyEnd(PGconn *conn, const char *errormsg); @@ -5345,14 +5345,14 @@ int PQputCopyEnd(PGconn *conn, - Ends the COPY_IN operation successfully if - errormsg is NULL. If - errormsg is not NULL then the - COPY is forced to fail, with the string pointed to by - errormsg used as the error message. (One should not + Ends the COPY_IN operation successfully if + errormsg is NULL. If + errormsg is not NULL then the + COPY is forced to fail, with the string pointed to by + errormsg used as the error message. 
(One should not assume that this exact error message will come back from the server, however, as the server might have already failed the - COPY for its own reasons. Also note that the option + COPY for its own reasons. Also note that the option to force failure does not work when using pre-3.0-protocol connections.) @@ -5362,19 +5362,19 @@ int PQputCopyEnd(PGconn *conn, nonblocking mode, this may only indicate that the termination message was successfully queued. (In nonblocking mode, to be certain that the data has been sent, you should next wait for - write-ready and call PQflush, repeating until it + write-ready and call PQflush, repeating until it returns zero.) Zero indicates that the function could not queue the termination message because of full buffers; this will only happen in nonblocking mode. (In this case, wait for - write-ready and try the PQputCopyEnd call + write-ready and try the PQputCopyEnd call again.) If a hard error occurs, -1 is returned; you can use PQerrorMessage to retrieve details. - After successfully calling PQputCopyEnd, call - PQgetResult to obtain the final result status of the - COPY command. One can wait for this result to be + After successfully calling PQputCopyEnd, call + PQgetResult to obtain the final result status of the + COPY command. One can wait for this result to be available in the usual way. Then return to normal operation. @@ -5388,8 +5388,8 @@ int PQputCopyEnd(PGconn *conn, These functions are used to receive data during COPY TO - STDOUT. They will fail if called when the connection is not in - COPY_OUT state. + STDOUT. They will fail if called when the connection is not in + COPY_OUT state. @@ -5403,7 +5403,7 @@ int PQputCopyEnd(PGconn *conn, - Receives data from the server during COPY_OUT state. + Receives data from the server during COPY_OUT state. int PQgetCopyData(PGconn *conn, char **buffer, @@ -5416,11 +5416,11 @@ int PQgetCopyData(PGconn *conn, COPY. Data is always returned one data row at a time; if only a partial row is available, it is not returned. Successful return of a data row involves allocating a chunk of - memory to hold the data. The buffer parameter must - be non-NULL. *buffer is set to + memory to hold the data. The buffer parameter must + be non-NULL. *buffer is set to point to the allocated memory, or to NULL in cases where no buffer is returned. A non-NULL result - buffer should be freed using PQfreemem when no longer + buffer should be freed using PQfreemem when no longer needed. @@ -5431,26 +5431,26 @@ int PQgetCopyData(PGconn *conn, probably only useful for textual COPY. A result of zero indicates that the COPY is still in progress, but no row is yet available (this is only possible when - async is true). A result of -1 indicates that the + async is true). A result of -1 indicates that the COPY is done. A result of -2 indicates that an - error occurred (consult PQerrorMessage for the reason). + error occurred (consult PQerrorMessage for the reason). - When async is true (not zero), - PQgetCopyData will not block waiting for input; it + When async is true (not zero), + PQgetCopyData will not block waiting for input; it will return zero if the COPY is still in progress but no complete row is available. (In this case wait for read-ready - and then call PQconsumeInput before calling - PQgetCopyData again.) When async is - false (zero), PQgetCopyData will block until data is + and then call PQconsumeInput before calling + PQgetCopyData again.) 
When async is + false (zero), PQgetCopyData will block until data is available or the operation completes. - After PQgetCopyData returns -1, call - PQgetResult to obtain the final result status of the - COPY command. One can wait for this result to be + After PQgetCopyData returns -1, call + PQgetResult to obtain the final result status of the + COPY command. One can wait for this result to be available in the usual way. Then return to normal operation. @@ -5463,7 +5463,7 @@ int PQgetCopyData(PGconn *conn, Obsolete Functions for <command>COPY</command> - These functions represent older methods of handling COPY. + These functions represent older methods of handling COPY. Although they still work, they are deprecated due to poor error handling, inconvenient methods of detecting end-of-data, and lack of support for binary or nonblocking transfers. @@ -5481,7 +5481,7 @@ int PQgetCopyData(PGconn *conn, Reads a newline-terminated line of characters (transmitted - by the server) into a buffer string of size length. + by the server) into a buffer string of size length. int PQgetline(PGconn *conn, char *buffer, @@ -5490,7 +5490,7 @@ int PQgetline(PGconn *conn, - This function copies up to length-1 characters into + This function copies up to length-1 characters into the buffer and converts the terminating newline into a zero byte. PQgetline returns EOF at the end of input, 0 if the entire line has been read, and 1 if the @@ -5501,7 +5501,7 @@ int PQgetline(PGconn *conn, of the two characters \., which indicates that the server has finished sending the results of the COPY command. If the application might receive - lines that are more than length-1 characters long, + lines that are more than length-1 characters long, care is needed to be sure it recognizes the \. line correctly (and does not, for example, mistake the end of a long data line for a terminator line). @@ -5545,7 +5545,7 @@ int PQgetlineAsync(PGconn *conn, On each call, PQgetlineAsync will return data if a - complete data row is available in libpq's input buffer. + complete data row is available in libpq's input buffer. Otherwise, no data is returned until the rest of the row arrives. The function returns -1 if the end-of-copy-data marker has been recognized, or 0 if no data is available, or a positive number giving the number of @@ -5559,7 +5559,7 @@ int PQgetlineAsync(PGconn *conn, the caller is too small to hold a row sent by the server, then a partial data row will be returned. With textual data this can be detected by testing whether the last returned byte is \n or not. (In a binary - COPY, actual parsing of the COPY data format will be needed to make the + COPY, actual parsing of the COPY data format will be needed to make the equivalent determination.) The returned string is not null-terminated. (If you want to add a terminating null, be sure to pass a bufsize one smaller @@ -5600,7 +5600,7 @@ int PQputline(PGconn *conn, Before PostgreSQL protocol 3.0, it was necessary for the application to explicitly send the two characters \. as a final line to indicate to the server that it had - finished sending COPY data. While this still works, it is deprecated and the + finished sending COPY data. While this still works, it is deprecated and the special meaning of \. can be expected to be removed in a future release. It is sufficient to call PQendcopy after having sent the actual data. 
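Pulling the modern (non-deprecated) COPY functions together, a sketch that loads two rows into a hypothetical table copytest using text-format COPY in blocking mode:

#include <stdio.h>
#include <libpq-fe.h>

/* Loads two rows into the hypothetical table copytest. */
static int
copy_rows(PGconn *conn)
{
    PGresult *res = PQexec(conn, "COPY copytest FROM STDIN");

    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return 0;
    }
    PQclear(res);

    /* Buffer-load boundaries carry no meaning; one row per call is
     * merely convenient. Lengths count all bytes, including \t and \n. */
    if (PQputCopyData(conn, "1\tone\n", 6) != 1 ||
        PQputCopyData(conn, "2\ttwo\n", 6) != 1 ||
        PQputCopyEnd(conn, NULL) != 1)
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        return 0;
    }

    /* Fetch the COPY command's own final result; real code would then
     * keep calling PQgetResult until it returns NULL. */
    res = PQgetResult(conn);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "COPY did not finish: %s", PQerrorMessage(conn));
    PQclear(res);
    return 1;
}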
@@ -5696,7 +5696,7 @@ int PQendcopy(PGconn *conn); Control Functions - These functions control miscellaneous details of libpq's + These functions control miscellaneous details of libpq's behavior. @@ -5747,7 +5747,7 @@ int PQsetClientEncoding(PGconn *conn, const char *encoding is the encoding you want to use. If the function successfully sets the encoding, it returns 0, otherwise -1. The current encoding for this connection can be - determined by using PQclientEncoding. + determined by using PQclientEncoding. @@ -5763,7 +5763,7 @@ int PQsetClientEncoding(PGconn *conn, const char * Determines the verbosity of messages returned by - PQerrorMessage and PQresultErrorMessage. + PQerrorMessage and PQresultErrorMessage. typedef enum { @@ -5775,15 +5775,15 @@ typedef enum PGVerbosity PQsetErrorVerbosity(PGconn *conn, PGVerbosity verbosity); - PQsetErrorVerbosity sets the verbosity mode, returning - the connection's previous setting. In TERSE mode, + PQsetErrorVerbosity sets the verbosity mode, returning + the connection's previous setting. In TERSE mode, returned messages include severity, primary text, and position only; this will normally fit on a single line. The default mode produces messages that include the above plus any detail, hint, or context - fields (these might span multiple lines). The VERBOSE + fields (these might span multiple lines). The VERBOSE mode includes all available fields. Changing the verbosity does not affect the messages available from already-existing - PGresult objects, only subsequently-created ones. + PGresult objects, only subsequently-created ones. (But see PQresultVerboseErrorMessage if you want to print a previous error with a different verbosity.) @@ -5800,9 +5800,9 @@ PGVerbosity PQsetErrorVerbosity(PGconn *conn, PGVerbosity verbosity); - Determines the handling of CONTEXT fields in messages - returned by PQerrorMessage - and PQresultErrorMessage. + Determines the handling of CONTEXT fields in messages + returned by PQerrorMessage + and PQresultErrorMessage. typedef enum { @@ -5814,17 +5814,17 @@ typedef enum PGContextVisibility PQsetErrorContextVisibility(PGconn *conn, PGContextVisibility show_context); - PQsetErrorContextVisibility sets the context display mode, + PQsetErrorContextVisibility sets the context display mode, returning the connection's previous setting. This mode controls whether the CONTEXT field is included in messages - (unless the verbosity setting is TERSE, in which - case CONTEXT is never shown). The NEVER mode - never includes CONTEXT, while ALWAYS always - includes it if available. In ERRORS mode (the - default), CONTEXT fields are included only for error + (unless the verbosity setting is TERSE, in which + case CONTEXT is never shown). The NEVER mode + never includes CONTEXT, while ALWAYS always + includes it if available. In ERRORS mode (the + default), CONTEXT fields are included only for error messages, not for notices and warnings. Changing this mode does not affect the messages available from - already-existing PGresult objects, only + already-existing PGresult objects, only subsequently-created ones. (But see PQresultVerboseErrorMessage if you want to print a previous error with a different display mode.) 
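A short sketch of saving and restoring these two settings around a block of commands whose errors should stay on a single line:

#include <libpq-fe.h>

/* Switches to one-line error reports, then restores the previous behavior. */
static void
with_terse_errors(PGconn *conn)
{
    PGVerbosity         old_verbosity;
    PGContextVisibility old_context;

    old_verbosity = PQsetErrorVerbosity(conn, PQERRORS_TERSE);
    old_context = PQsetErrorContextVisibility(conn, PQSHOW_CONTEXT_NEVER);

    /* ... run commands whose errors should stay terse ... */

    PQsetErrorVerbosity(conn, old_verbosity);
    PQsetErrorContextVisibility(conn, old_context);
}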
@@ -5850,9 +5850,9 @@ void PQtrace(PGconn *conn, FILE *stream); - On Windows, if the libpq library and an application are + On Windows, if the libpq library and an application are compiled with different flags, this function call will crash the - application because the internal representation of the FILE + application because the internal representation of the FILE pointers differ. Specifically, multithreaded/single-threaded, release/debug, and static/dynamic flags should be the same for the library and all applications using that library. @@ -5901,25 +5901,25 @@ void PQuntrace(PGconn *conn); - Frees memory allocated by libpq. + Frees memory allocated by libpq. void PQfreemem(void *ptr); - Frees memory allocated by libpq, particularly + Frees memory allocated by libpq, particularly PQescapeByteaConn, PQescapeBytea, PQunescapeBytea, and PQnotifies. It is particularly important that this function, rather than - free(), be used on Microsoft Windows. This is because + free(), be used on Microsoft Windows. This is because allocating memory in a DLL and releasing it in the application works only if multithreaded/single-threaded, release/debug, and static/dynamic flags are the same for the DLL and the application. On non-Microsoft Windows platforms, this function is the same as the standard library - function free(). + function free(). @@ -5935,7 +5935,7 @@ void PQfreemem(void *ptr); Frees the data structures allocated by - PQconndefaults or PQconninfoParse. + PQconndefaults or PQconninfoParse. void PQconninfoFree(PQconninfoOption *connOptions); @@ -5958,44 +5958,44 @@ void PQconninfoFree(PQconninfoOption *connOptions); - Prepares the encrypted form of a PostgreSQL password. + Prepares the encrypted form of a PostgreSQL password. char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm); This function is intended to be used by client applications that wish to send commands like ALTER USER joe PASSWORD - 'pwd'. It is good practice not to send the original cleartext + 'pwd'. It is good practice not to send the original cleartext password in such a command, because it might be exposed in command logs, activity displays, and so on. Instead, use this function to convert the password to encrypted form before it is sent. - The passwd and user arguments + The passwd and user arguments are the cleartext password, and the SQL name of the user it is for. - algorithm specifies the encryption algorithm + algorithm specifies the encryption algorithm to use to encrypt the password. Currently supported algorithms are - md5 and scram-sha-256 (on and - off are also accepted as aliases for md5, for + md5 and scram-sha-256 (on and + off are also accepted as aliases for md5, for compatibility with older server versions). Note that support for - scram-sha-256 was introduced in PostgreSQL + scram-sha-256 was introduced in PostgreSQL version 10, and will not work correctly with older server versions. If - algorithm is NULL, this function will query + algorithm is NULL, this function will query the server for the current value of the setting. That can block, and will fail if the current transaction is aborted, or if the connection is busy executing another query. If you wish to use the default algorithm for the server but want to avoid blocking, query - password_encryption yourself before calling - PQencryptPasswordConn, and pass that value as the - algorithm. + password_encryption yourself before calling + PQencryptPasswordConn, and pass that value as the + algorithm. 
- The return value is a string allocated by malloc. + The return value is a string allocated by malloc. The caller can assume the string doesn't contain any special characters - that would require escaping. Use PQfreemem to free the - result when done with it. On error, returns NULL, and + that would require escaping. Use PQfreemem to free the + result when done with it. On error, returns NULL, and a suitable message is stored in the connection object. @@ -6012,14 +6012,14 @@ char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, - Prepares the md5-encrypted form of a PostgreSQL password. + Prepares the md5-encrypted form of a PostgreSQL password. char *PQencryptPassword(const char *passwd, const char *user); - PQencryptPassword is an older, deprecated version of - PQencryptPasswordConn. The difference is that - PQencryptPassword does not - require a connection object, and md5 is always used as the + PQencryptPassword is an older, deprecated version of + PQencryptPasswordConn. The difference is that + PQencryptPassword does not + require a connection object, and md5 is always used as the encryption algorithm. @@ -6042,18 +6042,18 @@ PGresult *PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status); - This is libpq's internal function to allocate and + This is libpq's internal function to allocate and initialize an empty PGresult object. This - function returns NULL if memory could not be allocated. It is + function returns NULL if memory could not be allocated. It is exported because some applications find it useful to generate result objects (particularly objects with error status) themselves. If - conn is not null and status + conn is not null and status indicates an error, the current error message of the specified connection is copied into the PGresult. Also, if conn is not null, any event procedures registered in the connection are copied into the PGresult. (They do not get - PGEVT_RESULTCREATE calls, but see + PGEVT_RESULTCREATE calls, but see PQfireResultCreateEvents.) Note that PQclear should eventually be called on the object, just as with a PGresult @@ -6082,14 +6082,14 @@ int PQfireResultCreateEvents(PGconn *conn, PGresult *res); - The conn argument is passed through to event procedures - but not used directly. It can be NULL if the event + The conn argument is passed through to event procedures + but not used directly. It can be NULL if the event procedures won't use it. Event procedures that have already received a - PGEVT_RESULTCREATE or PGEVT_RESULTCOPY event + PGEVT_RESULTCREATE or PGEVT_RESULTCOPY event for this object are not fired again. @@ -6115,7 +6115,7 @@ int PQfireResultCreateEvents(PGconn *conn, PGresult *res); Makes a copy of a PGresult object. The copy is not linked to the source result in any way and PQclear must be called when the copy is no longer - needed. If the function fails, NULL is returned. + needed. If the function fails, NULL is returned. PGresult *PQcopyResult(const PGresult *src, int flags); @@ -6159,7 +6159,7 @@ int PQsetResultAttrs(PGresult *res, int numAttributes, PGresAttDesc *attDescs); The provided attDescs are copied into the result. - If the attDescs pointer is NULL or + If the attDescs pointer is NULL or numAttributes is less than one, the request is ignored and the function succeeds. If res already contains attributes, the function will fail. If the function @@ -6193,7 +6193,7 @@ int PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len); field of any existing tuple can be modified in any order.
If a value at field_num already exists, it will be overwritten. If len is -1 or - value is NULL, the field value + value is NULL, the field value will be set to an SQL null value. The value is copied into the result's private storage, thus is no longer needed after the function @@ -6222,9 +6222,9 @@ void *PQresultAlloc(PGresult *res, size_t nBytes); Any memory allocated with this function will be freed when res is cleared. If the function fails, - the return value is NULL. The result is + the return value is NULL. The result is guaranteed to be adequately aligned for any type of data, - just as for malloc. + just as for malloc. @@ -6240,7 +6240,7 @@ void *PQresultAlloc(PGresult *res, size_t nBytes); - Return the version of libpq that is being used. + Return the version of libpq that is being used. int PQlibVersion(void); @@ -6251,7 +6251,7 @@ int PQlibVersion(void); run time, whether specific functionality is available in the currently loaded version of libpq. The function can be used, for example, to determine which connection options are available in - PQconnectdb. + PQconnectdb. @@ -6262,17 +6262,17 @@ int PQlibVersion(void); - Prior to major version 10, PostgreSQL used + Prior to major version 10, PostgreSQL used three-part version numbers in which the first two parts together represented the major version. For those - versions, PQlibVersion uses two digits for each + versions, PQlibVersion uses two digits for each part; for example version 9.1.5 will be returned as 90105, and version 9.2.0 will be returned as 90200. Therefore, for purposes of determining feature compatibility, - applications should divide the result of PQlibVersion + applications should divide the result of PQlibVersion by 100 not 10000 to determine a logical major version number. In all release series, only the last two digits differ between minor releases (bug-fix releases). @@ -6280,7 +6280,7 @@ int PQlibVersion(void); - This function appeared in PostgreSQL version 9.1, so + This function appeared in PostgreSQL version 9.1, so it cannot be used to detect required functionality in earlier versions, since calling it will create a link dependency on version 9.1 or later. @@ -6322,12 +6322,12 @@ int PQlibVersion(void); The function PQsetNoticeReceiver - notice receiver - PQsetNoticeReceiver sets or + notice receiver + PQsetNoticeReceiver sets or examines the current notice receiver for a connection object. Similarly, PQsetNoticeProcessor - notice processor - PQsetNoticeProcessor sets or + notice processor + PQsetNoticeProcessor sets or examines the current notice processor. @@ -6358,9 +6358,9 @@ PQsetNoticeProcessor(PGconn *conn, receiver function is called. It is passed the message in the form of a PGRES_NONFATAL_ERROR PGresult. (This allows the receiver to extract - individual fields using PQresultErrorField, or obtain a - complete preformatted message using PQresultErrorMessage - or PQresultVerboseErrorMessage.) The same + individual fields using PQresultErrorField, or obtain a + complete preformatted message using PQresultErrorMessage + or PQresultVerboseErrorMessage.) The same void pointer passed to PQsetNoticeReceiver is also passed. (This pointer can be used to access application-specific state if needed.) @@ -6368,7 +6368,7 @@ PQsetNoticeProcessor(PGconn *conn, The default notice receiver simply extracts the message (using - PQresultErrorMessage) and passes it to the notice + PQresultErrorMessage) and passes it to the notice processor. 
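A sketch of a custom notice receiver in this style; the function name and output format are illustrative only:

    /* Print severity, SQLSTATE, and message text for each notice. */
    static void
    my_notice_receiver(void *arg, const PGresult *res)
    {
        const char *severity = PQresultErrorField(res, PG_DIAG_SEVERITY);
        const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

        fprintf(stderr, "%s (%s): %s",
                severity ? severity : "NOTICE",
                sqlstate ? sqlstate : "?????",
                PQresultErrorMessage(res));
    }

    ...
    PQsetNoticeReceiver(conn, my_notice_receiver, NULL);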
@@ -6394,10 +6394,10 @@ defaultNoticeProcessor(void *arg, const char *message) Once you have set a notice receiver or processor, you should expect that that function could be called as long as either the - PGconn object or PGresult objects made - from it exist. At creation of a PGresult, the - PGconn's current notice handling pointers are copied - into the PGresult for possible use by functions like + PGconn object or PGresult objects made + from it exist. At creation of a PGresult, the + PGconn's current notice handling pointers are copied + into the PGresult for possible use by functions like PQgetvalue. @@ -6419,21 +6419,21 @@ defaultNoticeProcessor(void *arg, const char *message) Each registered event handler is associated with two pieces of data, - known to libpq only as opaque void * - pointers. There is a passthrough pointer that is provided + known to libpq only as opaque void * + pointers. There is a passthrough pointer that is provided by the application when the event handler is registered with a - PGconn. The passthrough pointer never changes for the - life of the PGconn and all PGresults + PGconn. The passthrough pointer never changes for the + life of the PGconn and all PGresults generated from it; so if used, it must point to long-lived data. - In addition there is an instance data pointer, which starts - out NULL in every PGconn and PGresult. + In addition there is an instance data pointer, which starts + out NULL in every PGconn and PGresult. This pointer can be manipulated using the PQinstanceData, PQsetInstanceData, PQresultInstanceData and PQsetResultInstanceData functions. Note that - unlike the passthrough pointer, instance data of a PGconn - is not automatically inherited by PGresults created from + unlike the passthrough pointer, instance data of a PGconn + is not automatically inherited by PGresults created from it. libpq does not know what passthrough and instance data pointers point to (if anything) and will never attempt to free them — that is the responsibility of the event handler. @@ -6443,7 +6443,7 @@ defaultNoticeProcessor(void *arg, const char *message) Event Types - The enum PGEventId names the types of events handled by + The enum PGEventId names the types of events handled by the event system. All its values have names beginning with PGEVT. For each event type, there is a corresponding event info structure that carries the parameters passed to the event @@ -6507,8 +6507,8 @@ typedef struct PGconn was just reset, all event data remains unchanged. This event should be used to reset/reload/requery any associated instanceData. Note that even if the - event procedure fails to process PGEVT_CONNRESET, it will - still receive a PGEVT_CONNDESTROY event when the connection + event procedure fails to process PGEVT_CONNRESET, it will + still receive a PGEVT_CONNDESTROY event when the connection is closed. @@ -6568,7 +6568,7 @@ typedef struct instanceData that needs to be associated with the result. If the event procedure fails, the result will be cleared and the failure will be propagated. The event procedure must not try to - PQclear the result object for itself. When returning a + PQclear the result object for itself. When returning a failure code, all cleanup must be performed as no PGEVT_RESULTDESTROY event will be sent. @@ -6675,7 +6675,7 @@ int eventproc(PGEventId evtId, void *evtInfo, void *passThrough) A particular event procedure can be registered only once in any - PGconn. This is because the address of the procedure + PGconn. 
This is because the address of the procedure is used as a lookup key to identify the associated instance data. @@ -6684,9 +6684,9 @@ int eventproc(PGEventId evtId, void *evtInfo, void *passThrough) On Windows, functions can have two different addresses: one visible from outside a DLL and another visible from inside the DLL. One should be careful that only one of these addresses is used with - libpq's event-procedure functions, else confusion will + libpq's event-procedure functions, else confusion will result. The simplest rule for writing code that will work is to - ensure that event procedures are declared static. If the + ensure that event procedures are declared static. If the procedure's address must be available outside its own source file, expose a separate function to return the address. @@ -6720,7 +6720,7 @@ int PQregisterEventProc(PGconn *conn, PGEventProc proc, An event procedure must be registered once on each - PGconn you want to receive events about. There is no + PGconn you want to receive events about. There is no limit, other than memory, on the number of event procedures that can be registered with a connection. The function returns a non-zero value if it succeeds and zero if it fails. @@ -6731,11 +6731,11 @@ int PQregisterEventProc(PGconn *conn, PGEventProc proc, event is fired. Its memory address is also used to lookup instanceData. The name argument is used to refer to the event procedure in error messages. - This value cannot be NULL or a zero-length string. The name string is - copied into the PGconn, so what is passed need not be + This value cannot be NULL or a zero-length string. The name string is + copied into the PGconn, so what is passed need not be long-lived. The passThrough pointer is passed to the proc whenever an event occurs. This - argument can be NULL. + argument can be NULL. @@ -6749,11 +6749,11 @@ int PQregisterEventProc(PGconn *conn, PGEventProc proc, - Sets the connection conn's instanceData - for procedure proc to data. This + Sets the connection conn's instanceData + for procedure proc to data. This returns non-zero for success and zero for failure. (Failure is - only possible if proc has not been properly - registered in conn.) + only possible if proc has not been properly + registered in conn.) int PQsetInstanceData(PGconn *conn, PGEventProc proc, void *data); @@ -6772,8 +6772,8 @@ int PQsetInstanceData(PGconn *conn, PGEventProc proc, void *data); Returns the - connection conn's instanceData - associated with procedure proc, + connection conn's instanceData + associated with procedure proc, or NULL if there is none. @@ -6792,10 +6792,10 @@ void *PQinstanceData(const PGconn *conn, PGEventProc proc); - Sets the result's instanceData - for proc to data. This returns + Sets the result's instanceData + for proc to data. This returns non-zero for success and zero for failure. (Failure is only - possible if proc has not been properly registered + possible if proc has not been properly registered in the result.) @@ -6814,7 +6814,7 @@ int PQresultSetInstanceData(PGresult *res, PGEventProc proc, void *data); - Returns the result's instanceData associated with proc, or NULL + Returns the result's instanceData associated with proc, or NULL if there is none. 
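To make the instanceData mechanics concrete, here is a skeleton event procedure; mydata, get_mydata, and free_mydata stand in for hypothetical application code, not libpq functions:

    static int
    my_event_proc(PGEventId evtId, void *evtInfo, void *passThrough)
    {
        switch (evtId)
        {
            case PGEVT_REGISTER:
            {
                PGEventRegister *e = (PGEventRegister *) evtInfo;

                /* attach per-connection state (hypothetical helper) */
                PQsetInstanceData(e->conn, my_event_proc, get_mydata(e->conn));
                break;
            }
            case PGEVT_CONNDESTROY:
            {
                PGEventConnDestroy *e = (PGEventConnDestroy *) evtInfo;

                /* libpq never frees instance data; the handler must */
                free_mydata(PQinstanceData(e->conn, my_event_proc));
                break;
            }
            default:
                break;
        }
        return 1;                      /* nonzero reports success */
    }

    ...
    PQregisterEventProc(conn, my_event_proc, "my_event_proc", NULL);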
@@ -6992,8 +6992,8 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) The following environment variables can be used to select default connection parameter values, which will be used by - PQconnectdb, PQsetdbLogin and - PQsetdb if no value is directly specified by the calling + PQconnectdb, PQsetdbLogin and + PQsetdb if no value is directly specified by the calling code. These are useful to avoid hard-coding database connection information into simple client applications, for example. @@ -7060,7 +7060,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via - ps; instead consider using a password file + ps; instead consider using a password file (see ). @@ -7092,7 +7092,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSERVICEFILE specifies the name of the per-user connection service file. If not set, it defaults - to ~/.pg_service.conf + to ~/.pg_service.conf (see ). @@ -7309,7 +7309,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSYSCONFDIR PGSYSCONFDIR sets the directory containing the - pg_service.conf file and in a future version + pg_service.conf file and in a future version possibly other system-wide configuration files. @@ -7320,7 +7320,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGLOCALEDIR PGLOCALEDIR sets the directory containing the - locale files for message localization. + locale files for message localization. @@ -7344,8 +7344,8 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) contain passwords to be used if the connection requires a password (and no password has been specified otherwise). On Microsoft Windows the file is named - %APPDATA%\postgresql\pgpass.conf (where - %APPDATA% refers to the Application Data subdirectory in + %APPDATA%\postgresql\pgpass.conf (where + %APPDATA% refers to the Application Data subdirectory in the user's profile). Alternatively, a password file can be specified using the connection parameter @@ -7358,19 +7358,19 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) hostname:port:database:username:password (You can add a reminder comment to the file by copying the line above and - preceding it with #.) + preceding it with #.) Each of the first four fields can be a literal value, or *, which matches anything. The password field from the first line that matches the current connection parameters will be used. (Therefore, put more-specific entries first when you are using wildcards.) If an entry needs to contain : or \, escape this character with \. - A host name of localhost matches both TCP (host name - localhost) and Unix domain socket (pghost empty + A host name of localhost matches both TCP (host name + localhost) and Unix domain socket (pghost empty or the default socket directory) connections coming from the local - machine. In a standby server, a database name of replication + machine. In a standby server, a database name of replication matches streaming replication connections made to the master server. - The database field is of limited usefulness because + The database field is of limited usefulness because users have the same password for all databases in the same cluster. 
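For example, a hypothetical password-file entry giving the user alice one password for every database on a particular server:

    # reminder: staging server, all databases
    db.example.com:5432:*:alice:sekrit

The * in the database field matches any database name, per the wildcard rules above.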
@@ -7526,17 +7526,17 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - PostgreSQL has native support for using SSL + PostgreSQL has native support for using SSL connections to encrypt client/server communications for increased security. See for details about the server-side - SSL functionality. + SSL functionality. libpq reads the system-wide OpenSSL configuration file. By default, this file is named openssl.cnf and is located in the - directory reported by openssl version -d. This default + directory reported by openssl version -d. This default can be overridden by setting environment variable OPENSSL_CONF to the name of the desired configuration file. @@ -7546,43 +7546,43 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) Client Verification of Server Certificates - By default, PostgreSQL will not perform any verification of + By default, PostgreSQL will not perform any verification of the server certificate. This means that it is possible to spoof the server identity (for example by modifying a DNS record or by taking over the server IP address) without the client knowing. In order to prevent spoofing, - SSL certificate verification must be used. + SSL certificate verification must be used. - If the parameter sslmode is set to verify-ca, + If the parameter sslmode is set to verify-ca, libpq will verify that the server is trustworthy by checking the certificate chain up to a trusted certificate authority - (CA). If sslmode is set to verify-full, - libpq will also verify that the server host name matches its + (CA). If sslmode is set to verify-full, + libpq will also verify that the server host name matches its certificate. The SSL connection will fail if the server certificate cannot - be verified. verify-full is recommended in most + be verified. verify-full is recommended in most security-sensitive environments. - In verify-full mode, the host name is matched against the + In verify-full mode, the host name is matched against the certificate's Subject Alternative Name attribute(s), or against the Common Name attribute if no Subject Alternative Name of type dNSName is present. If the certificate's name attribute starts with an asterisk - (*), the asterisk will be treated as - a wildcard, which will match all characters except a dot - (.). This means the certificate will not match subdomains. + (*), the asterisk will be treated as + a wildcard, which will match all characters except a dot + (.). This means the certificate will not match subdomains. If the connection is made using an IP address instead of a host name, the IP address will be matched (without doing any DNS lookups). To allow server certificate verification, the certificate(s) of one or more - trusted CAs must be - placed in the file ~/.postgresql/root.crt in the user's home - directory. If intermediate CAs appear in + trusted CAs must be + placed in the file ~/.postgresql/root.crt in the user's home + directory. If intermediate CAs appear in root.crt, the file must also contain certificate - chains to their root CAs. (On Microsoft Windows the file is named + chains to their root CAs. (On Microsoft Windows the file is named %APPDATA%\postgresql\root.crt.) @@ -7596,8 +7596,8 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) The location of the root certificate file and the CRL can be changed by setting - the connection parameters sslrootcert and sslcrl - or the environment variables PGSSLROOTCERT and PGSSLCRL. 
+ the connection parameters sslrootcert and sslcrl + or the environment variables PGSSLROOTCERT and PGSSLCRL. @@ -7619,10 +7619,10 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) If the server requests a trusted client certificate, libpq will send the certificate stored in - file ~/.postgresql/postgresql.crt in the user's home + file ~/.postgresql/postgresql.crt in the user's home directory. The certificate must be signed by one of the certificate authorities (CA) trusted by the server. A matching - private key file ~/.postgresql/postgresql.key must also + private key file ~/.postgresql/postgresql.key must also be present. The private key file must not allow any access to world or group; achieve this by the command chmod 0600 ~/.postgresql/postgresql.key. @@ -7631,23 +7631,23 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) %APPDATA%\postgresql\postgresql.key, and there is no special permissions check since the directory is presumed secure. The location of the certificate and key files can be overridden by the - connection parameters sslcert and sslkey or the - environment variables PGSSLCERT and PGSSLKEY. + connection parameters sslcert and sslkey or the + environment variables PGSSLCERT and PGSSLKEY. In some cases, the client certificate might be signed by an - intermediate certificate authority, rather than one that is + intermediate certificate authority, rather than one that is directly trusted by the server. To use such a certificate, append the - certificate of the signing authority to the postgresql.crt + certificate of the signing authority to the postgresql.crt file, then its parent authority's certificate, and so on up to a certificate - authority, root or intermediate, that is trusted by + authority, root or intermediate, that is trusted by the server, i.e. signed by a certificate in the server's root CA file (). - Note that the client's ~/.postgresql/root.crt lists the top-level CAs + Note that the client's ~/.postgresql/root.crt lists the top-level CAs that are considered trusted for signing server certificates. In principle it need not list the CA that signed the client's certificate, though in most cases that CA would also be trusted for server certificates. @@ -7659,7 +7659,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) Protection Provided in Different Modes - The different values for the sslmode parameter provide different + The different values for the sslmode parameter provide different levels of protection. SSL can provide protection against three types of attacks: @@ -7669,23 +7669,23 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) If a third party can examine the network traffic between the client and the server, it can read both connection information (including - the user name and password) and the data that is passed. SSL + the user name and password) and the data that is passed. SSL uses encryption to prevent this. - Man in the middle (MITM) + Man in the middle (MITM) If a third party can modify the data while passing between the client and server, it can pretend to be the server and therefore see and - modify data even if it is encrypted. The third party can then + modify data even if it is encrypted. The third party can then forward the connection information and data to the original server, making it impossible to detect this attack. 
Common vectors to do this include DNS poisoning and address hijacking, whereby the client is directed to a different server than intended. There are also several other - attack methods that can accomplish this. SSL uses certificate + attack methods that can accomplish this. SSL uses certificate verification to prevent this, by authenticating the server to the client. @@ -7696,7 +7696,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) If a third party can pretend to be an authorized client, it can simply access data it should not have access to. Typically this can - happen through insecure password management. SSL uses + happen through insecure password management. SSL uses client certificates to prevent this, by making sure that only holders of valid certificates can access the server. @@ -7707,15 +7707,15 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) For a connection to be known secure, SSL usage must be configured - on both the client and the server before the connection + on both the client and the server before the connection is made. If it is only configured on the server, the client may end up sending sensitive information (e.g. passwords) before it knows that the server requires high security. In libpq, secure connections can be ensured - by setting the sslmode parameter to verify-full or - verify-ca, and providing the system with a root certificate to - verify against. This is analogous to using an https - URL for encrypted web browsing. + by setting the sslmode parameter to verify-full or + verify-ca, and providing the system with a root certificate to + verify against. This is analogous to using an https + URL for encrypted web browsing. @@ -7726,10 +7726,10 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - All SSL options carry overhead in the form of encryption and + All SSL options carry overhead in the form of encryption and key-exchange, so there is a trade-off that has to be made between performance and security. - illustrates the risks the different sslmode values + illustrates the risks the different sslmode values protect against, and what statement they make about security and overhead. @@ -7738,16 +7738,16 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - sslmode + sslmode Eavesdropping protection - MITM protection + MITM protection Statement - disable + disable No No I don't care about security, and I don't want to pay the overhead @@ -7756,7 +7756,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - allow + allow Maybe No I don't care about security, but I will pay the overhead of @@ -7765,7 +7765,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - prefer + prefer Maybe No I don't care about encryption, but I wish to pay the overhead of @@ -7774,7 +7774,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - require + require Yes No I want my data to be encrypted, and I accept the overhead. I trust @@ -7783,16 +7783,16 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - verify-ca + verify-ca Yes - Depends on CA-policy + Depends on CA-policy I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server that I trust. - verify-full + verify-full Yes Yes I want my data encrypted, and I accept the overhead. 
I want to be sure that I connect to a server I trust, and that it's the one I specify. @@ -7806,17 +7806,17 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*)
- The difference between verify-ca and verify-full - depends on the policy of the root CA. If a public - CA is used, verify-ca allows connections to a server - that somebody else may have registered with the CA. - In this case, verify-full should always be used. If - a local CA is used, or even a self-signed certificate, using - verify-ca often provides enough protection. + The difference between verify-ca and verify-full + depends on the policy of the root CA. If a public + CA is used, verify-ca allows connections to a server + that somebody else may have registered with the CA. + In this case, verify-full should always be used. If + a local CA is used, or even a self-signed certificate, using + verify-ca often provides enough protection. - The default value for sslmode is prefer. As is shown + The default value for sslmode is prefer. As is shown in the table, this makes no sense from a security point of view, and it only promises performance overhead if possible. It is only provided as the default for backward compatibility, and is not recommended in secure deployments. @@ -7846,27 +7846,27 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - ~/.postgresql/postgresql.crt + ~/.postgresql/postgresql.crt client certificate requested by server - ~/.postgresql/postgresql.key + ~/.postgresql/postgresql.key client private key proves client certificate sent by owner; does not indicate certificate owner is trustworthy - ~/.postgresql/root.crt + ~/.postgresql/root.crt trusted certificate authorities checks that server certificate is signed by a trusted certificate authority - ~/.postgresql/root.crl + ~/.postgresql/root.crl certificates revoked by certificate authorities server certificate must not be on this list @@ -7880,11 +7880,11 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) SSL Library Initialization - If your application initializes libssl and/or - libcrypto libraries and libpq - is built with SSL support, you should call - PQinitOpenSSL to tell libpq - that the libssl and/or libcrypto libraries + If your application initializes libssl and/or + libcrypto libraries and libpq + is built with SSL support, you should call + PQinitOpenSSL to tell libpq + that the libssl and/or libcrypto libraries have been initialized by your application, so that libpq will not also initialize those libraries. @@ -7912,18 +7912,18 @@ void PQinitOpenSSL(int do_ssl, int do_crypto); - When do_ssl is non-zero, libpq - will initialize the OpenSSL library before first - opening a database connection. When do_crypto is - non-zero, the libcrypto library will be initialized. By - default (if PQinitOpenSSL is not called), both libraries + When do_ssl is non-zero, libpq + will initialize the OpenSSL library before first + opening a database connection. When do_crypto is + non-zero, the libcrypto library will be initialized. By + default (if PQinitOpenSSL is not called), both libraries are initialized. When SSL support is not compiled in, this function is present but does nothing. - If your application uses and initializes either OpenSSL - or its underlying libcrypto library, you must + If your application uses and initializes either OpenSSL + or its underlying libcrypto library, you must call this function with zeroes for the appropriate parameter(s) before first opening a database connection. Also be sure that you have done that initialization before opening a database connection. 
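A sketch of the resulting call sequence for an application that performs its own OpenSSL setup (OpenSSL 1.0-style initialization calls shown for illustration):

    #include <openssl/ssl.h>
    #include <libpq-fe.h>

    /* application initializes both libssl and libcrypto itself ... */
    SSL_library_init();
    SSL_load_error_strings();

    /* ... so tell libpq to initialize neither of them */
    PQinitOpenSSL(0, 0);

    /* only now open the first database connection */
    PGconn *conn = PQconnectdb("dbname=postgres");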
@@ -7949,15 +7949,15 @@ void PQinitSSL(int do_ssl); This function is equivalent to - PQinitOpenSSL(do_ssl, do_ssl). + PQinitOpenSSL(do_ssl, do_ssl). It is sufficient for applications that initialize both or neither - of OpenSSL and libcrypto. + of OpenSSL and libcrypto. - PQinitSSL has been present since - PostgreSQL 8.0, while PQinitOpenSSL - was added in PostgreSQL 8.4, so PQinitSSL + PQinitSSL has been present since + PostgreSQL 8.0, while PQinitOpenSSL + was added in PostgreSQL 8.4, so PQinitSSL might be preferable for applications that need to work with older versions of libpq. @@ -7984,8 +7984,8 @@ void PQinitSSL(int do_ssl); options when you compile your application code. Refer to your system's documentation for information about how to build thread-enabled applications, or look in - src/Makefile.global for PTHREAD_CFLAGS - and PTHREAD_LIBS. This function allows the querying of + src/Makefile.global for PTHREAD_CFLAGS + and PTHREAD_LIBS. This function allows the querying of libpq's thread-safe status: @@ -8017,18 +8017,18 @@ int PQisthreadsafe(); One thread restriction is that no two threads attempt to manipulate - the same PGconn object at the same time. In particular, + the same PGconn object at the same time. In particular, you cannot issue concurrent commands from different threads through the same connection object. (If you need to run concurrent commands, use multiple connections.) - PGresult objects are normally read-only after creation, + PGresult objects are normally read-only after creation, and so can be passed around freely between threads. However, if you use - any of the PGresult-modifying functions described in + any of the PGresult-modifying functions described in or , it's up - to you to avoid concurrent operations on the same PGresult, + to you to avoid concurrent operations on the same PGresult, too. @@ -8045,14 +8045,14 @@ int PQisthreadsafe(); If you are using Kerberos inside your application (in addition to inside libpq), you will need to do locking around Kerberos calls because Kerberos functions are not thread-safe. See - function PQregisterThreadLock in the + function PQregisterThreadLock in the libpq source code for a way to do cooperative locking between libpq and your application. If you experience problems with threaded applications, run the program - in src/tools/thread to see if your platform has + in src/tools/thread to see if your platform has thread-unsafe functions. This program is run by configure, but for binary distributions your library might not match the library used to build the binaries. @@ -8095,7 +8095,7 @@ foo.c:95: `PGRES_TUPLES_OK' undeclared (first use in this function) - Point your compiler to the directory where the PostgreSQL header + Point your compiler to the directory where the PostgreSQL header files were installed, by supplying the -Idirectory option to your compiler. (In some cases the compiler will look into @@ -8116,8 +8116,8 @@ CPPFLAGS += -I/usr/local/pgsql/include If there is any chance that your program might be compiled by other users then you should not hardcode the directory location like that. 
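For instance, a program that intends to start worker threads might check this once at startup (a sketch):

    if (!PQisthreadsafe())
    {
        fprintf(stderr, "libpq was built without --enable-thread-safety; "
                "refusing to start worker threads\n");
        exit(1);
    }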
Instead, you can run the utility - pg_configpg_configwith libpq to find out where the header + pg_configpg_configwith libpq to find out where the header files are on the local system: $ pg_config --includedir diff --git a/doc/src/sgml/lo.sgml b/doc/src/sgml/lo.sgml index 9c318f1c98..8d8ee82722 100644 --- a/doc/src/sgml/lo.sgml +++ b/doc/src/sgml/lo.sgml @@ -8,9 +8,9 @@ - The lo module provides support for managing Large Objects - (also called LOs or BLOBs). This includes a data type lo - and a trigger lo_manage. + The lo module provides support for managing Large Objects + (also called LOs or BLOBs). This includes a data type lo + and a trigger lo_manage. @@ -24,7 +24,7 @@ - As PostgreSQL stands, this doesn't occur. Large objects + As PostgreSQL stands, this doesn't occur. Large objects are treated as objects in their own right; a table entry can reference a large object by OID, but there can be multiple table entries referencing the same large object OID, so the system doesn't delete the large object @@ -32,30 +32,30 @@ - Now this is fine for PostgreSQL-specific applications, but + Now this is fine for PostgreSQL-specific applications, but standard code using JDBC or ODBC won't delete the objects, resulting in orphan objects — objects that are not referenced by anything, and simply occupy disk space. - The lo module allows fixing this by attaching a trigger + The lo module allows fixing this by attaching a trigger to tables that contain LO reference columns. The trigger essentially just - does a lo_unlink whenever you delete or modify a value + does a lo_unlink whenever you delete or modify a value referencing a large object. When you use this trigger, you are assuming that there is only one database reference to any large object that is referenced in a trigger-controlled column! - The module also provides a data type lo, which is really just - a domain of the oid type. This is useful for differentiating + The module also provides a data type lo, which is really just + a domain of the oid type. This is useful for differentiating database columns that hold large object references from those that are - OIDs of other things. You don't have to use the lo type to + OIDs of other things. You don't have to use the lo type to use the trigger, but it may be convenient to use it to keep track of which columns in your database represent large objects that you are managing with the trigger. It is also rumored that the ODBC driver gets confused if you - don't use lo for BLOB columns. + don't use lo for BLOB columns. @@ -75,11 +75,11 @@ CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image For each column that will contain unique references to large objects, - create a BEFORE UPDATE OR DELETE trigger, and give the column + create a BEFORE UPDATE OR DELETE trigger, and give the column name as the sole trigger argument. You can also restrict the trigger to only execute on updates to the column by using BEFORE UPDATE OF column_name. - If you need multiple lo + If you need multiple lo columns in the same table, create a separate trigger for each one, remembering to give a different name to each trigger on the same table. @@ -93,18 +93,18 @@ CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image Dropping a table will still orphan any objects it contains, as the trigger is not executed. You can avoid this by preceding the DROP - TABLE with DELETE FROM table. + TABLE with DELETE FROM table. - TRUNCATE has the same hazard. + TRUNCATE has the same hazard. 
If you already have, or suspect you have, orphaned large objects, see the module to help - you clean them up. It's a good idea to run vacuumlo - occasionally as a back-stop to the lo_manage trigger. + you clean them up. It's a good idea to run vacuumlo + occasionally as a back-stop to the lo_manage trigger. diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index 7757e1e441..2e930ac240 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -3,11 +3,11 @@ Large Objects - large object - BLOBlarge object + large object + BLOBlarge object - PostgreSQL has a large object + PostgreSQL has a large object facility, which provides stream-style access to user data that is stored in a special large-object structure. Streaming access is useful when working with data values that are too large to manipulate @@ -76,12 +76,12 @@ of 1000000 bytes worth of storage; only of chunks covering the range of data bytes actually written. A read operation will, however, read out zeroes for any unallocated locations preceding the last existing chunk. - This corresponds to the common behavior of sparsely allocated + This corresponds to the common behavior of sparsely allocated files in Unix file systems. - As of PostgreSQL 9.0, large objects have an owner + As of PostgreSQL 9.0, large objects have an owner and a set of access permissions, which can be managed using and . @@ -101,7 +101,7 @@ This section describes the facilities that - PostgreSQL's libpq + PostgreSQL's libpq client interface library provides for accessing large objects. The PostgreSQL large object interface is modeled after the Unix file-system interface, with @@ -121,7 +121,7 @@ If an error occurs while executing any one of these functions, the function will return an otherwise-impossible value, typically 0 or -1. A message describing the error is stored in the connection object and - can be retrieved with PQerrorMessage. + can be retrieved with PQerrorMessage. @@ -134,7 +134,7 @@ Creating a Large Object - lo_creat + lo_creat The function Oid lo_creat(PGconn *conn, int mode); @@ -147,7 +147,7 @@ Oid lo_creat(PGconn *conn, int mode); ignored as of PostgreSQL 8.1; however, for backward compatibility with earlier releases it is best to set it to INV_READ, INV_WRITE, - or INV_READ | INV_WRITE. + or INV_READ | INV_WRITE. (These symbolic constants are defined in the header file libpq/libpq-fs.h.) @@ -160,7 +160,7 @@ inv_oid = lo_creat(conn, INV_READ|INV_WRITE); - lo_create + lo_create The function Oid lo_create(PGconn *conn, Oid lobjId); @@ -169,14 +169,14 @@ Oid lo_create(PGconn *conn, Oid lobjId); specified by lobjId; if so, failure occurs if that OID is already in use for some large object. If lobjId - is InvalidOid (zero) then lo_create assigns an unused - OID (this is the same behavior as lo_creat). + is InvalidOid (zero) then lo_create assigns an unused + OID (this is the same behavior as lo_creat). The return value is the OID that was assigned to the new large object, or InvalidOid (zero) on failure. - lo_create is new as of PostgreSQL + lo_create is new as of PostgreSQL 8.1; if this function is run against an older server version, it will fail and return InvalidOid. 
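A sketch of typical usage, with a hypothetical file name; wrapping the call in a transaction block matches the usual large-object idiom:

    PGresult *res = PQexec(conn, "BEGIN");
    PQclear(res);

    Oid lobj = lo_import(conn, "/tmp/picture.dat");   /* hypothetical file */
    if (lobj == InvalidOid)
        fprintf(stderr, "lo_import failed: %s", PQerrorMessage(conn));

    res = PQexec(conn, "COMMIT");
    PQclear(res);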
@@ -193,7 +193,7 @@ inv_oid = lo_create(conn, desired_oid); Importing a Large Object - lo_import + lo_import To import an operating system file as a large object, call Oid lo_import(PGconn *conn, const char *filename); @@ -209,7 +209,7 @@ Oid lo_import(PGconn *conn, const char *filename); - lo_import_with_oid + lo_import_with_oid The function Oid lo_import_with_oid(PGconn *conn, const char *filename, Oid lobjId); @@ -218,14 +218,14 @@ Oid lo_import_with_oid(PGconn *conn, const char *filename, Oid lobjId); specified by lobjId; if so, failure occurs if that OID is already in use for some large object. If lobjId - is InvalidOid (zero) then lo_import_with_oid assigns an unused - OID (this is the same behavior as lo_import). + is InvalidOid (zero) then lo_import_with_oid assigns an unused + OID (this is the same behavior as lo_import). The return value is the OID that was assigned to the new large object, or InvalidOid (zero) on failure. - lo_import_with_oid is new as of PostgreSQL + lo_import_with_oid is new as of PostgreSQL 8.4 and uses lo_create internally which is new in 8.1; if this function is run against 8.0 or before, it will fail and return InvalidOid. @@ -235,7 +235,7 @@ Oid lo_import_with_oid(PGconn *conn, const char *filename, Oid lobjId); Exporting a Large Object - lo_export + lo_export To export a large object into an operating system file, call @@ -253,14 +253,14 @@ int lo_export(PGconn *conn, Oid lobjId, const char *filename); Opening an Existing Large Object - lo_open + lo_open To open an existing large object for reading or writing, call int lo_open(PGconn *conn, Oid lobjId, int mode); The lobjId argument specifies the OID of the large object to open. The mode bits control whether the - object is opened for reading (INV_READ), writing + object is opened for reading (INV_READ), writing (INV_WRITE), or both. (These symbolic constants are defined in the header file libpq/libpq-fs.h.) @@ -277,19 +277,19 @@ int lo_open(PGconn *conn, Oid lobjId, int mode); The server currently does not distinguish between modes - INV_WRITE and INV_READ | + INV_WRITE and INV_READ | INV_WRITE: you are allowed to read from the descriptor in either case. However there is a significant difference between - these modes and INV_READ alone: with INV_READ + these modes and INV_READ alone: with INV_READ you cannot write on the descriptor, and the data read from it will reflect the contents of the large object at the time of the transaction - snapshot that was active when lo_open was executed, + snapshot that was active when lo_open was executed, regardless of later writes by this or other transactions. Reading from a descriptor opened with INV_WRITE returns data that reflects all writes of other committed transactions as well as writes of the current transaction. This is similar to the behavior - of REPEATABLE READ versus READ COMMITTED transaction - modes for ordinary SQL SELECT commands. + of REPEATABLE READ versus READ COMMITTED transaction + modes for ordinary SQL SELECT commands. @@ -304,14 +304,14 @@ inv_fd = lo_open(conn, inv_oid, INV_READ|INV_WRITE); Writing Data to a Large Object - lo_write + lo_write The function int lo_write(PGconn *conn, int fd, const char *buf, size_t len); writes len bytes from buf (which must be of size len) to large object - descriptor fd. The fd argument must + descriptor fd. The fd argument must have been returned by a previous lo_open. 
The number of bytes actually written is returned (in the current implementation, this will always equal len unless @@ -320,8 +320,8 @@ int lo_write(PGconn *conn, int fd, const char *buf, size_t len); Although the len parameter is declared as - size_t, this function will reject length values larger than - INT_MAX. In practice, it's best to transfer data in chunks + size_t, this function will reject length values larger than + INT_MAX. In practice, it's best to transfer data in chunks of at most a few megabytes anyway. @@ -330,7 +330,7 @@ int lo_write(PGconn *conn, int fd, const char *buf, size_t len); Reading Data from a Large Object - lo_read + lo_read The function int lo_read(PGconn *conn, int fd, char *buf, size_t len); @@ -347,8 +347,8 @@ int lo_read(PGconn *conn, int fd, char *buf, size_t len); Although the len parameter is declared as - size_t, this function will reject length values larger than - INT_MAX. In practice, it's best to transfer data in chunks + size_t, this function will reject length values larger than + INT_MAX. In practice, it's best to transfer data in chunks of at most a few megabytes anyway. @@ -357,7 +357,7 @@ int lo_read(PGconn *conn, int fd, char *buf, size_t len); Seeking in a Large Object - lo_lseek + lo_lseek To change the current read or write location associated with a large object descriptor, call int lo_lseek(PGconn *conn, int fd, int offset, int whence); @@ -365,16 +365,16 @@ int lo_lseek(PGconn *conn, int fd, int offset, int whence); This function moves the current location pointer for the large object descriptor identified by - fd to the new location specified by - offset. The valid values for whence - are SEEK_SET (seek from object start), - SEEK_CUR (seek from current position), and - SEEK_END (seek from object end). The return value is + fd to the new location specified by + offset. The valid values for whence + are SEEK_SET (seek from object start), + SEEK_CUR (seek from current position), and + SEEK_END (seek from object end). The return value is the new location pointer, or -1 on error. - lo_lseek64 + lo_lseek64 When dealing with large objects that might exceed 2GB in size, instead use pg_int64 lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence); This function has the same behavior as lo_lseek, but it can accept an - offset larger than 2GB and/or deliver a result larger + offset larger than 2GB and/or deliver a result larger than 2GB. Note that lo_lseek will fail if the new location pointer would be greater than 2GB. - lo_lseek64 is new as of PostgreSQL + lo_lseek64 is new as of PostgreSQL 9.3. If this function is run against an older server version, it will fail and return -1. @@ -400,7 +400,7 @@ pg_int64 lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence); Obtaining the Seek Position of a Large Object - lo_tell + lo_tell To obtain the current read or write location of a large object descriptor, call int lo_tell(PGconn *conn, int fd); @@ -410,7 +410,7 @@ int lo_tell(PGconn *conn, int fd); - lo_tell64 + lo_tell64 When dealing with large objects that might exceed 2GB in size, instead use pg_int64 lo_tell64(PGconn *conn, int fd); @@ -424,7 +424,7 @@ pg_int64 lo_tell64(PGconn *conn, int fd); - lo_tell64 is new as of PostgreSQL + lo_tell64 is new as of PostgreSQL 9.3. If this function is run against an older server version, it will fail and return -1. @@ -434,15 +434,15 @@ pg_int64 lo_tell64(PGconn *conn, int fd); Truncating a Large Object - lo_truncate + lo_truncate To truncate a large object to a given length, call int lo_truncate(PGconn *conn, int fd, size_t len); This function truncates the large object - descriptor fd to length len. The
fd argument must have been returned by a - previous lo_open. If len is + previous lo_open. If len is greater than the large object's current length, the large object is extended to the specified length with null bytes ('\0'). On success, lo_truncate returns @@ -456,12 +456,12 @@ int lo_truncate(PGconn *conn, int fd, size_t len); Although the len parameter is declared as - size_t, lo_truncate will reject length - values larger than INT_MAX. + size_t, lo_truncate will reject length + values larger than INT_MAX. - lo_truncate64 + lo_truncate64 When dealing with large objects that might exceed 2GB in size, instead use int lo_truncate64(PGconn *conn, int fd, pg_int64 len); This function has the same behavior as lo_truncate, but it can accept a - len value exceeding 2GB. + len value exceeding 2GB. - lo_truncate is new as of PostgreSQL + lo_truncate is new as of PostgreSQL 8.3; if this function is run against an older server version, it will fail and return -1. - lo_truncate64 is new as of PostgreSQL + lo_truncate64 is new as of PostgreSQL 9.3; if this function is run against an older server version, it will fail and return -1. @@ -489,12 +489,12 @@ int lo_truncate64(PGconn *conn, int fd, pg_int64 len); Closing a Large Object Descriptor - lo_close + lo_close A large object descriptor can be closed by calling int lo_close(PGconn *conn, int fd); - where fd is a + where fd is a large object descriptor returned by lo_open. On success, lo_close returns zero. On error, the return value is -1. @@ -510,7 +510,7 @@ int lo_close(PGconn *conn, int fd); Removing a Large Object - lo_unlink + lo_unlink To remove a large object from the database, call int lo_unlink(PGconn *conn, Oid lobjId); @@ -554,7 +554,7 @@ int lo_unlink(PGconn *conn, Oid lobjId); oid Create a large object and store data there, returning its OID. - Pass 0 to have the system choose an OID. + Pass 0 to have the system choose an OID. lo_from_bytea(0, E'\\xffffff00') 24528 @@ -599,11 +599,11 @@ int lo_unlink(PGconn *conn, Oid lobjId); client-side functions described earlier; indeed, for the most part the client-side functions are simply interfaces to the equivalent server-side functions. The ones just as convenient to call via SQL commands are - lo_creat, + lo_creat, lo_create, - lo_unlink, - lo_import, and - lo_export. + lo_unlink, + lo_import, and + lo_export. Here are examples of their use: @@ -645,7 +645,7 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image lo_write is also available via server-side calls, but the names of the server-side functions differ from the client side interfaces in that they do not contain underscores. You must call - these functions as loread and lowrite. + these functions as loread and lowrite. @@ -656,7 +656,7 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image is a sample program which shows how the large object interface - in libpq can be used. Parts of the program are + in libpq can be used. Parts of the program are commented out but are left in the source for the reader's benefit. This program can also be found in src/test/examples/testlo.c in the source distribution.
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml index 35ac5abbe5..c02f6e9765 100644 --- a/doc/src/sgml/logicaldecoding.sgml +++ b/doc/src/sgml/logicaldecoding.sgml @@ -156,13 +156,13 @@ postgres=# SELECT pg_drop_replication_slot('regression_slot'); $ pg_recvlogical -d postgres --slot test --create-slot $ pg_recvlogical -d postgres --slot test --start -f - -ControlZ +ControlZ $ psql -d postgres -c "INSERT INTO data(data) VALUES('4');" $ fg BEGIN 693 table public.data: INSERT: id[integer]:4 data[text]:'4' COMMIT 693 -ControlC +ControlC $ pg_recvlogical -d postgres --slot test --drop-slot @@ -286,7 +286,7 @@ $ pg_recvlogical -d postgres --slot test --drop-slot Creation of a snapshot is not always possible. In particular, it will fail when connected to a hot standby. Applications that do not require - snapshot export may suppress it with the NOEXPORT_SNAPSHOT + snapshot export may suppress it with the NOEXPORT_SNAPSHOT option. @@ -303,7 +303,7 @@ $ pg_recvlogical -d postgres --slot test --drop-slot
- DROP_REPLICATION_SLOT slot_name WAIT + DROP_REPLICATION_SLOT slot_name WAIT @@ -426,12 +426,12 @@ CREATE TABLE another_catalog_table(data text) WITH (user_catalog_table = true); data in a data type that can contain arbitrary data (e.g., bytea) is cumbersome. If the output plugin only outputs textual data in the server's encoding, it can declare that by - setting OutputPluginOptions.output_type - to OUTPUT_PLUGIN_TEXTUAL_OUTPUT instead - of OUTPUT_PLUGIN_BINARY_OUTPUT in + setting OutputPluginOptions.output_type + to OUTPUT_PLUGIN_TEXTUAL_OUTPUT instead + of OUTPUT_PLUGIN_BINARY_OUTPUT in the startup - callback. In that case, all the data has to be in the server's encoding - so that a text datum can contain it. This is checked in assertion-enabled + callback. In that case, all the data has to be in the server's encoding + so that a text datum can contain it. This is checked in assertion-enabled builds. diff --git a/doc/src/sgml/ltree.sgml b/doc/src/sgml/ltree.sgml index fccfd320f5..602d9403f7 100644 --- a/doc/src/sgml/ltree.sgml +++ b/doc/src/sgml/ltree.sgml @@ -8,7 +8,7 @@ - This module implements a data type ltree for representing + This module implements a data type ltree for representing labels of data stored in a hierarchical tree-like structure. Extensive facilities for searching through label trees are provided. @@ -19,17 +19,17 @@ A label is a sequence of alphanumeric characters and underscores (for example, in C locale the characters - A-Za-z0-9_ are allowed). Labels must be less than 256 bytes + A-Za-z0-9_ are allowed). Labels must be less than 256 bytes long. - Examples: 42, Personal_Services + Examples: 42, Personal_Services A label path is a sequence of zero or more - labels separated by dots, for example L1.L2.L3, representing + labels separated by dots, for example L1.L2.L3, representing a path from the root of a hierarchical tree to a particular node. The length of a label path must be less than 65kB, but keeping it under 2kB is preferable. In practice this is not a major limitation; for example, @@ -42,7 +42,7 @@ - The ltree module provides several data types: + The ltree module provides several data types: @@ -55,13 +55,13 @@ lquery represents a regular-expression-like pattern - for matching ltree values. A simple word matches that - label within a path. A star symbol (*) matches zero + for matching ltree values. A simple word matches that + label within a path. A star symbol (*) matches zero or more labels. 
For example: -foo Match the exact label path foo -*.foo.* Match any label path containing the label foo -*.foo Match any label path whose last label is foo +foo Match the exact label path foo +*.foo.* Match any label path containing the label foo +*.foo Match any label path whose last label is foo @@ -69,34 +69,34 @@ foo Match the exact label path foo -*{n} Match exactly n labels -*{n,} Match at least n labels -*{n,m} Match at least n but not more than m labels -*{,m} Match at most m labels — same as *{0,m} +*{n} Match exactly n labels +*{n,} Match at least n labels +*{n,m} Match at least n but not more than m labels +*{,m} Match at most m labels — same as *{0,m} There are several modifiers that can be put at the end of a non-star - label in lquery to make it match more than just the exact match: + label in lquery to make it match more than just the exact match: -@ Match case-insensitively, for example a@ matches A -* Match any label with this prefix, for example foo* matches foobar +@ Match case-insensitively, for example a@ matches A +* Match any label with this prefix, for example foo* matches foobar % Match initial underscore-separated words - The behavior of % is a bit complicated. It tries to match + The behavior of % is a bit complicated. It tries to match words rather than the entire label. For example - foo_bar% matches foo_bar_baz but not - foo_barbaz. If combined with *, prefix + foo_bar% matches foo_bar_baz but not + foo_barbaz. If combined with *, prefix matching applies to each word separately, for example - foo_bar%* matches foo1_bar2_baz but - not foo1_br2_baz. + foo_bar%* matches foo1_bar2_baz but + not foo1_br2_baz. Also, you can write several possibly-modified labels separated with - | (OR) to match any of those labels, and you can put - ! (NOT) at the start to match any label that doesn't + | (OR) to match any of those labels, and you can put + ! (NOT) at the start to match any label that doesn't match any of the alternatives. @@ -141,14 +141,14 @@ a. b. c. d. e. ltxtquery represents a full-text-search-like - pattern for matching ltree values. An + pattern for matching ltree values. An ltxtquery value contains words, possibly with the - modifiers @, *, % at the end; - the modifiers have the same meanings as in lquery. - Words can be combined with & (AND), - | (OR), ! (NOT), and parentheses. + modifiers @, *, % at the end; + the modifiers have the same meanings as in lquery. + Words can be combined with & (AND), + | (OR), ! (NOT), and parentheses. The key difference from - lquery is that ltxtquery matches words without + lquery is that ltxtquery matches words without regard to their position in the label path. @@ -161,7 +161,7 @@ Europe & Russia*@ & !Transportation any label beginning with Russia (case-insensitive), but not paths containing the label Transportation. The location of these words within the path is not important. - Also, when % is used, the word can be matched to any + Also, when % is used, the word can be matched to any underscore-separated word within a label, regardless of position. @@ -169,8 +169,8 @@ Europe & Russia*@ & !Transportation - Note: ltxtquery allows whitespace between symbols, but - ltree and lquery do not. + Note: ltxtquery allows whitespace between symbols, but + ltree and lquery do not. @@ -178,16 +178,16 @@ Europe & Russia*@ & !Transportation Operators and Functions - Type ltree has the usual comparison operators - =, <>, - <, >, <=, >=. + Type ltree has the usual comparison operators + =, <>, + <, >, <=, >=. 
Comparison sorts in the order of a tree traversal, with the children of a node sorted by label text. In addition, the specialized operators shown in are available. - <type>ltree</> Operators + <type>ltree</type> Operators @@ -200,153 +200,153 @@ Europe & Russia*@ & !Transportation - ltree @> ltree + ltree @> ltree boolean is left argument an ancestor of right (or equal)? - ltree <@ ltree + ltree <@ ltree boolean is left argument a descendant of right (or equal)? - ltree ~ lquery + ltree ~ lquery boolean - does ltree match lquery? + does ltree match lquery? - lquery ~ ltree + lquery ~ ltree boolean - does ltree match lquery? + does ltree match lquery? - ltree ? lquery[] + ltree ? lquery[] boolean - does ltree match any lquery in array? + does ltree match any lquery in array? - lquery[] ? ltree + lquery[] ? ltree boolean - does ltree match any lquery in array? + does ltree match any lquery in array? - ltree @ ltxtquery + ltree @ ltxtquery boolean - does ltree match ltxtquery? + does ltree match ltxtquery? - ltxtquery @ ltree + ltxtquery @ ltree boolean - does ltree match ltxtquery? + does ltree match ltxtquery? - ltree || ltree + ltree || ltree ltree - concatenate ltree paths + concatenate ltree paths - ltree || text + ltree || text ltree - convert text to ltree and concatenate + convert text to ltree and concatenate - text || ltree + text || ltree ltree - convert text to ltree and concatenate + convert text to ltree and concatenate - ltree[] @> ltree + ltree[] @> ltree boolean - does array contain an ancestor of ltree? + does array contain an ancestor of ltree? - ltree <@ ltree[] + ltree <@ ltree[] boolean - does array contain an ancestor of ltree? + does array contain an ancestor of ltree? - ltree[] <@ ltree + ltree[] <@ ltree boolean - does array contain a descendant of ltree? + does array contain a descendant of ltree? - ltree @> ltree[] + ltree @> ltree[] boolean - does array contain a descendant of ltree? + does array contain a descendant of ltree? - ltree[] ~ lquery + ltree[] ~ lquery boolean - does array contain any path matching lquery? + does array contain any path matching lquery? - lquery ~ ltree[] + lquery ~ ltree[] boolean - does array contain any path matching lquery? + does array contain any path matching lquery? - ltree[] ? lquery[] + ltree[] ? lquery[] boolean - does ltree array contain any path matching any lquery? + does ltree array contain any path matching any lquery? - lquery[] ? ltree[] + lquery[] ? ltree[] boolean - does ltree array contain any path matching any lquery? + does ltree array contain any path matching any lquery? - ltree[] @ ltxtquery + ltree[] @ ltxtquery boolean - does array contain any path matching ltxtquery? + does array contain any path matching ltxtquery? - ltxtquery @ ltree[] + ltxtquery @ ltree[] boolean - does array contain any path matching ltxtquery? + does array contain any path matching ltxtquery? 
- ltree[] ?@> ltree + ltree[] ?@> ltree ltree - first array entry that is an ancestor of ltree; NULL if none + first array entry that is an ancestor of ltree; NULL if none - ltree[] ?<@ ltree + ltree[] ?<@ ltree ltree - first array entry that is a descendant of ltree; NULL if none + first array entry that is a descendant of ltree; NULL if none - ltree[] ?~ lquery + ltree[] ?~ lquery ltree - first array entry that matches lquery; NULL if none + first array entry that matches lquery; NULL if none - ltree[] ?@ ltxtquery + ltree[] ?@ ltxtquery ltree - first array entry that matches ltxtquery; NULL if none + first array entry that matches ltxtquery; NULL if none @@ -356,7 +356,7 @@ Europe & Russia*@ & !Transportation The operators <@, @>, @ and ~ have analogues - ^<@, ^@>, ^@, + ^<@, ^@>, ^@, ^~, which are the same except they do not use indexes. These are useful only for testing purposes. @@ -366,7 +366,7 @@ Europe & Russia*@ & !Transportation
- <type>ltree</> Functions + <type>ltree</type> Functions @@ -383,8 +383,8 @@ Europe & Russia*@ & !Transportation subltree(ltree, int start, int end)subltree ltree - subpath of ltree from position start to - position end-1 (counting from 0) + subpath of ltree from position start to + position end-1 (counting from 0) subltree('Top.Child1.Child2',1,2) Child1 @@ -392,10 +392,10 @@ Europe & Russia*@ & !Transportation subpath(ltree, int offset, int len)subpath ltree - subpath of ltree starting at position - offset, length len. - If offset is negative, subpath starts that far from the - end of the path. If len is negative, leaves that many + subpath of ltree starting at position + offset, length len. + If offset is negative, subpath starts that far from the + end of the path. If len is negative, leaves that many labels off the end of the path. subpath('Top.Child1.Child2',0,2) Top.Child1 @@ -404,9 +404,9 @@ Europe & Russia*@ & !Transportation subpath(ltree, int offset) ltree - subpath of ltree starting at position - offset, extending to end of path. - If offset is negative, subpath starts that far from the + subpath of ltree starting at position + offset, extending to end of path. + If offset is negative, subpath starts that far from the end of the path. subpath('Top.Child1.Child2',1) Child1.Child2 @@ -423,8 +423,8 @@ Europe & Russia*@ & !Transportation index(ltree a, ltree b)index integer - position of first occurrence of b in - a; -1 if not found + position of first occurrence of b in + a; -1 if not found index('0.1.2.3.5.4.5.6.8.5.6.8','5.6') 6 @@ -432,9 +432,9 @@ Europe & Russia*@ & !Transportation index(ltree a, ltree b, int offset) integer - position of first occurrence of b in - a, searching starting at offset; - negative offset means start -offset + position of first occurrence of b in + a, searching starting at offset; + negative offset means start -offset labels from the end of the path index('0.1.2.3.5.4.5.6.8.5.6.8','5.6',-4) 9 @@ -443,7 +443,7 @@ Europe & Russia*@ & !Transportation text2ltree(text)text2ltree ltree - cast text to ltree + cast text to ltree @@ -451,7 +451,7 @@ Europe & Russia*@ & !Transportation ltree2text(ltree)ltree2text text - cast ltree to text + cast ltree to text @@ -481,25 +481,25 @@ Europe & Russia*@ & !Transportation Indexes - ltree supports several types of indexes that can speed + ltree supports several types of indexes that can speed up the indicated operators: - B-tree index over ltree: - <, <=, =, - >=, > + B-tree index over ltree: + <, <=, =, + >=, > - GiST index over ltree: - <, <=, =, - >=, >, - @>, <@, - @, ~, ? + GiST index over ltree: + <, <=, =, + >=, >, + @>, <@, + @, ~, ? Example of creating such an index: @@ -510,9 +510,9 @@ CREATE INDEX path_gist_idx ON test USING GIST (path); - GiST index over ltree[]: - ltree[] <@ ltree, ltree @> ltree[], - @, ~, ? + GiST index over ltree[]: + ltree[] <@ ltree, ltree @> ltree[], + @, ~, ? 
Example of creating such an index: @@ -532,7 +532,7 @@ CREATE INDEX path_gist_idx ON test USING GIST (array_path); This example uses the following data (also available in file - contrib/ltree/ltreetest.sql in the source distribution): + contrib/ltree/ltreetest.sql in the source distribution): @@ -555,7 +555,7 @@ CREATE INDEX path_idx ON test USING BTREE (path); - Now, we have a table test populated with data describing + Now, we have a table test populated with data describing the hierarchy shown below: diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml index 616aece6c0..1952bc9178 100644 --- a/doc/src/sgml/maintenance.sgml +++ b/doc/src/sgml/maintenance.sgml @@ -12,12 +12,12 @@ - PostgreSQL, like any database software, requires that certain tasks + PostgreSQL, like any database software, requires that certain tasks be performed regularly to achieve optimum performance. The tasks discussed here are required, but they are repetitive in nature and can easily be automated using standard tools such as cron scripts or - Windows' Task Scheduler. It is the database + Windows' Task Scheduler. It is the database administrator's responsibility to set up appropriate scripts, and to check that they execute successfully. @@ -32,7 +32,7 @@ - The other main category of maintenance task is periodic vacuuming + The other main category of maintenance task is periodic vacuuming of the database. This activity is discussed in . Closely related to this is updating the statistics that will be used by the query planner, as discussed in @@ -46,9 +46,9 @@ check_postgres + url="http://bucardo.org/wiki/Check_postgres">check_postgres is available for monitoring database health and reporting unusual - conditions. check_postgres integrates with + conditions. check_postgres integrates with Nagios and MRTG, but can be run standalone too. @@ -68,15 +68,15 @@ PostgreSQL databases require periodic - maintenance known as vacuuming. For many installations, it + maintenance known as vacuuming. For many installations, it is sufficient to let vacuuming be performed by the autovacuum - daemon, which is described in . You might + daemon, which is described in . You might need to adjust the autovacuuming parameters described there to obtain best results for your situation. Some database administrators will want to supplement or replace the daemon's activities with manually-managed - VACUUM commands, which typically are executed according to a + VACUUM commands, which typically are executed according to a schedule by cron or Task - Scheduler scripts. To set up manually-managed vacuuming properly, + Scheduler scripts. To set up manually-managed vacuuming properly, it is essential to understand the issues discussed in the next few subsections. Administrators who rely on autovacuuming may still wish to skim this material to help them understand and adjust autovacuuming. @@ -109,30 +109,30 @@ To protect against loss of very old data due to - transaction ID wraparound or - multixact ID wraparound. + transaction ID wraparound or + multixact ID wraparound. - Each of these reasons dictates performing VACUUM operations + Each of these reasons dictates performing VACUUM operations of varying frequency and scope, as explained in the following subsections. - There are two variants of VACUUM: standard VACUUM - and VACUUM FULL. VACUUM FULL can reclaim more + There are two variants of VACUUM: standard VACUUM + and VACUUM FULL. VACUUM FULL can reclaim more disk space but runs much more slowly. 
Also, - the standard form of VACUUM can run in parallel with production + the standard form of VACUUM can run in parallel with production database operations. (Commands such as SELECT, INSERT, UPDATE, and DELETE will continue to function normally, though you will not be able to modify the definition of a table with commands such as ALTER TABLE while it is being vacuumed.) - VACUUM FULL requires exclusive lock on the table it is + VACUUM FULL requires exclusive lock on the table it is working on, and therefore cannot be done in parallel with other use of the table. Generally, therefore, - administrators should strive to use standard VACUUM and - avoid VACUUM FULL. + administrators should strive to use standard VACUUM and + avoid VACUUM FULL. @@ -153,15 +153,15 @@ In PostgreSQL, an - UPDATE or DELETE of a row does not + UPDATE or DELETE of a row does not immediately remove the old version of the row. This approach is necessary to gain the benefits of multiversion - concurrency control (MVCC, see ): the row version + concurrency control (MVCC, see ): the row version must not be deleted while it is still potentially visible to other transactions. But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it occupies must then be reclaimed for reuse by new rows, to avoid unbounded growth of disk - space requirements. This is done by running VACUUM. + space requirements. This is done by running VACUUM. @@ -170,7 +170,7 @@ future reuse. However, it will not return the space to the operating system, except in the special case where one or more pages at the end of a table become entirely free and an exclusive table lock can be - easily obtained. In contrast, VACUUM FULL actively compacts + easily obtained. In contrast, VACUUM FULL actively compacts tables by writing a complete new version of the table file with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until @@ -178,18 +178,18 @@ - The usual goal of routine vacuuming is to do standard VACUUMs - often enough to avoid needing VACUUM FULL. The + The usual goal of routine vacuuming is to do standard VACUUMs + often enough to avoid needing VACUUM FULL. The autovacuum daemon attempts to work this way, and in fact will - never issue VACUUM FULL. In this approach, the idea + never issue VACUUM FULL. In this approach, the idea is not to keep tables at their minimum size, but to maintain steady-state usage of disk space: each table occupies space equivalent to its minimum size plus however much space gets used up between vacuumings. - Although VACUUM FULL can be used to shrink a table back + Although VACUUM FULL can be used to shrink a table back to its minimum size and return the disk space to the operating system, there is not much point in this if the table will just grow again in the - future. Thus, moderately-frequent standard VACUUM runs are a - better approach than infrequent VACUUM FULL runs for + future. Thus, moderately-frequent standard VACUUM runs are a + better approach than infrequent VACUUM FULL runs for maintaining heavily-updated tables. @@ -198,20 +198,20 @@ doing all the work at night when load is low. The difficulty with doing vacuuming according to a fixed schedule is that if a table has an unexpected spike in update activity, it may - get bloated to the point that VACUUM FULL is really necessary + get bloated to the point that VACUUM FULL is really necessary to reclaim space. 
Using the autovacuum daemon alleviates this problem, since the daemon schedules vacuuming dynamically in response to update activity. It is unwise to disable the daemon completely unless you have an extremely predictable workload. One possible compromise is to set the daemon's parameters so that it will only react to unusually heavy update activity, thus keeping things from getting out of hand, - while scheduled VACUUMs are expected to do the bulk of the + while scheduled VACUUMs are expected to do the bulk of the work when the load is typical. For those not using autovacuum, a typical approach is to schedule a - database-wide VACUUM once a day during a low-usage period, + database-wide VACUUM once a day during a low-usage period, supplemented by more frequent vacuuming of heavily-updated tables as necessary. (Some installations with extremely high update rates vacuum their busiest tables as often as once every few minutes.) If you have @@ -222,11 +222,11 @@ - Plain VACUUM may not be satisfactory when + Plain VACUUM may not be satisfactory when a table contains large numbers of dead row versions as a result of massive update or delete activity. If you have such a table and you need to reclaim the excess disk space it occupies, you will need - to use VACUUM FULL, or alternatively + to use VACUUM FULL, or alternatively or one of the table-rewriting variants of . @@ -271,19 +271,19 @@ generate good plans for queries. These statistics are gathered by the command, which can be invoked by itself or - as an optional step in VACUUM. It is important to have + as an optional step in VACUUM. It is important to have reasonably accurate statistics, otherwise poor choices of plans might degrade database performance. The autovacuum daemon, if enabled, will automatically issue - ANALYZE commands whenever the content of a table has + ANALYZE commands whenever the content of a table has changed sufficiently. However, administrators might prefer to rely - on manually-scheduled ANALYZE operations, particularly + on manually-scheduled ANALYZE operations, particularly if it is known that update activity on a table will not affect the - statistics of interesting columns. The daemon schedules - ANALYZE strictly as a function of the number of rows + statistics of interesting columns. The daemon schedules + ANALYZE strictly as a function of the number of rows inserted or updated; it has no knowledge of whether that will lead to meaningful statistical changes. @@ -305,24 +305,24 @@ - It is possible to run ANALYZE on specific tables and even + It is possible to run ANALYZE on specific tables and even just specific columns of a table, so the flexibility exists to update some statistics more frequently than others if your application requires it. In practice, however, it is usually best to just analyze the entire - database, because it is a fast operation. ANALYZE uses a + database, because it is a fast operation. ANALYZE uses a statistically random sampling of the rows of a table rather than reading every single row. - Although per-column tweaking of ANALYZE frequency might not be + Although per-column tweaking of ANALYZE frequency might not be very productive, you might find it worthwhile to do per-column adjustment of the level of detail of the statistics collected by - ANALYZE. Columns that are heavily used in WHERE + ANALYZE. Columns that are heavily used in WHERE clauses and have highly irregular data distributions might require a finer-grain data histogram than other columns. 
See ALTER TABLE - SET STATISTICS, or change the database-wide default using the , or change the database-wide default using the configuration parameter. @@ -337,11 +337,11 @@ - The autovacuum daemon does not issue ANALYZE commands for + The autovacuum daemon does not issue ANALYZE commands for foreign tables, since it has no means of determining how often that might be useful. If your queries require statistics on foreign tables for proper planning, it's a good idea to run manually-managed - ANALYZE commands on those tables on a suitable schedule. + ANALYZE commands on those tables on a suitable schedule. @@ -350,7 +350,7 @@ Updating The Visibility Map - Vacuum maintains a visibility map for each + Vacuum maintains a visibility map for each table to keep track of which pages contain only tuples that are known to be visible to all active transactions (and all future transactions, until the page is again modified). This has two purposes. First, vacuum @@ -366,7 +366,7 @@ matching index entry, to check whether it should be seen by the current transaction. An index-only - scan, on the other hand, checks the visibility map first. + scan, on the other hand, checks the visibility map first. If it's known that all tuples on the page are visible, the heap fetch can be skipped. This is most useful on large data sets where the visibility map can prevent disk accesses. @@ -391,13 +391,13 @@ PostgreSQL's MVCC transaction semantics - depend on being able to compare transaction ID (XID) + depend on being able to compare transaction ID (XID) numbers: a row version with an insertion XID greater than the current - transaction's XID is in the future and should not be visible + transaction's XID is in the future and should not be visible to the current transaction. But since transaction IDs have limited size (32 bits), a cluster that runs for a long time (more than 4 billion transactions) would suffer transaction ID - wraparound: the XID counter wraps around to zero, and all of a sudden + wraparound: the XID counter wraps around to zero, and all of a sudden transactions that were in the past appear to be in the future — which means their output becomes invisible. In short, catastrophic data loss. (Actually the data is still there, but that's cold comfort if you cannot @@ -407,47 +407,47 @@ The reason that periodic vacuuming solves the problem is that - VACUUM will mark rows as frozen, indicating that + VACUUM will mark rows as frozen, indicating that they were inserted by a transaction that committed sufficiently far in the past that the effects of the inserting transaction are certain to be visible to all current and future transactions. Normal XIDs are - compared using modulo-2^32 arithmetic. This means + compared using modulo-2^32 arithmetic. This means that for every normal XID, there are two billion XIDs that are - older and two billion that are newer; another + older and two billion that are newer; another way to say it is that the normal XID space is circular with no endpoint. Therefore, once a row version has been created with a particular - normal XID, the row version will appear to be in the past for + normal XID, the row version will appear to be in the past for the next two billion transactions, no matter which normal XID we are talking about. If the row version still exists after more than two billion transactions, it will suddenly appear to be in the future.
To - prevent this, PostgreSQL reserves a special XID, - FrozenTransactionId, which does not follow the normal XID + prevent this, PostgreSQL reserves a special XID, + FrozenTransactionId, which does not follow the normal XID comparison rules and is always considered older than every normal XID. Frozen row versions are treated as if the inserting XID were - FrozenTransactionId, so that they will appear to be - in the past to all normal transactions regardless of wraparound + FrozenTransactionId, so that they will appear to be + in the past to all normal transactions regardless of wraparound issues, and so such row versions will be valid until deleted, no matter how long that is. - In PostgreSQL versions before 9.4, freezing was + In PostgreSQL versions before 9.4, freezing was implemented by actually replacing a row's insertion XID - with FrozenTransactionId, which was visible in the - row's xmin system column. Newer versions just set a flag - bit, preserving the row's original xmin for possible - forensic use. However, rows with xmin equal - to FrozenTransactionId (2) may still be found - in databases pg_upgrade'd from pre-9.4 versions. + with FrozenTransactionId, which was visible in the + row's xmin system column. Newer versions just set a flag + bit, preserving the row's original xmin for possible + forensic use. However, rows with xmin equal + to FrozenTransactionId (2) may still be found + in databases pg_upgrade'd from pre-9.4 versions. - Also, system catalogs may contain rows with xmin equal - to BootstrapTransactionId (1), indicating that they were - inserted during the first phase of initdb. - Like FrozenTransactionId, this special XID is treated as + Also, system catalogs may contain rows with xmin equal + to BootstrapTransactionId (1), indicating that they were + inserted during the first phase of initdb. + Like FrozenTransactionId, this special XID is treated as older than every normal XID. @@ -463,26 +463,26 @@ - VACUUM uses the visibility map + VACUUM uses the visibility map to determine which pages of a table must be scanned. Normally, it will skip pages that don't have any dead row versions even if those pages might still have row versions with old XID values. Therefore, normal - VACUUMs won't always freeze every old row version in the table. - Periodically, VACUUM will perform an aggressive - vacuum, skipping only those pages which contain neither dead rows nor + VACUUMs won't always freeze every old row version in the table. + Periodically, VACUUM will perform an aggressive + vacuum, skipping only those pages which contain neither dead rows nor any unfrozen XID or MXID values. - controls when VACUUM does that: all-visible but not all-frozen + controls when VACUUM does that: all-visible but not all-frozen pages are scanned if the number of transactions that have passed since the - last such scan is greater than vacuum_freeze_table_age minus - vacuum_freeze_min_age. Setting - vacuum_freeze_table_age to 0 forces VACUUM to + last such scan is greater than vacuum_freeze_table_age minus + vacuum_freeze_min_age. Setting + vacuum_freeze_table_age to 0 forces VACUUM to use this more aggressive strategy for all scans. The maximum time that a table can go unvacuumed is two billion - transactions minus the vacuum_freeze_min_age value at + transactions minus the vacuum_freeze_min_age value at the time of the last aggressive vacuum. If it were to go unvacuumed for longer than that, data loss could result. 
To ensure that this does not happen, @@ -495,29 +495,29 @@ This implies that if a table is not otherwise vacuumed, autovacuum will be invoked on it approximately once every - autovacuum_freeze_max_age minus - vacuum_freeze_min_age transactions. + autovacuum_freeze_max_age minus + vacuum_freeze_min_age transactions. For tables that are regularly vacuumed for space reclamation purposes, this is of little importance. However, for static tables (including tables that receive inserts, but no updates or deletes), there is no need to vacuum for space reclamation, so it can be useful to try to maximize the interval between forced autovacuums on very large static tables. Obviously one can do this either by - increasing autovacuum_freeze_max_age or decreasing - vacuum_freeze_min_age. + increasing autovacuum_freeze_max_age or decreasing + vacuum_freeze_min_age. - The effective maximum for vacuum_freeze_table_age is 0.95 * - autovacuum_freeze_max_age; a setting higher than that will be + The effective maximum for vacuum_freeze_table_age is 0.95 * + autovacuum_freeze_max_age; a setting higher than that will be capped to the maximum. A value higher than - autovacuum_freeze_max_age wouldn't make sense because an + autovacuum_freeze_max_age wouldn't make sense because an anti-wraparound autovacuum would be triggered at that point anyway, and the 0.95 multiplier leaves some breathing room to run a manual - VACUUM before that happens. As a rule of thumb, - vacuum_freeze_table_age should be set to a value somewhat - below autovacuum_freeze_max_age, leaving enough gap so that - a regularly scheduled VACUUM or an autovacuum triggered by + VACUUM before that happens. As a rule of thumb, + vacuum_freeze_table_age should be set to a value somewhat + below autovacuum_freeze_max_age, leaving enough gap so that + a regularly scheduled VACUUM or an autovacuum triggered by normal delete and update activity is run in that window. Setting it too close could lead to anti-wraparound autovacuums, even though the table was recently vacuumed to reclaim space, whereas lower values lead to more @@ -525,29 +525,29 @@ - The sole disadvantage of increasing autovacuum_freeze_max_age - (and vacuum_freeze_table_age along with it) is that - the pg_xact and pg_commit_ts + The sole disadvantage of increasing autovacuum_freeze_max_age + (and vacuum_freeze_table_age along with it) is that + the pg_xact and pg_commit_ts subdirectories of the database cluster will take more space, because it - must store the commit status and (if track_commit_timestamp is + must store the commit status and (if track_commit_timestamp is enabled) timestamp of all transactions back to - the autovacuum_freeze_max_age horizon. The commit status uses + the autovacuum_freeze_max_age horizon. The commit status uses two bits per transaction, so if - autovacuum_freeze_max_age is set to its maximum allowed value - of two billion, pg_xact can be expected to grow to about half + autovacuum_freeze_max_age is set to its maximum allowed value + of two billion, pg_xact can be expected to grow to about half a gigabyte and pg_commit_ts to about 20GB. If this is trivial compared to your total database size, - setting autovacuum_freeze_max_age to its maximum allowed value + setting autovacuum_freeze_max_age to its maximum allowed value is recommended. Otherwise, set it depending on what you are willing to - allow for pg_xact and pg_commit_ts storage. + allow for pg_xact and pg_commit_ts storage. 
(The default, 200 million transactions, translates to about 50MB - of pg_xact storage and about 2GB of pg_commit_ts + of pg_xact storage and about 2GB of pg_commit_ts storage.) - One disadvantage of decreasing vacuum_freeze_min_age is that - it might cause VACUUM to do useless work: freezing a row + One disadvantage of decreasing vacuum_freeze_min_age is that + it might cause VACUUM to do useless work: freezing a row version is a waste of time if the row is modified soon thereafter (causing it to acquire a new XID). So the setting should be large enough that rows are not frozen until they are unlikely to change @@ -556,18 +556,18 @@ To track the age of the oldest unfrozen XIDs in a database, - VACUUM stores XID - statistics in the system tables pg_class and - pg_database. In particular, - the relfrozenxid column of a table's - pg_class row contains the freeze cutoff XID that was used - by the last aggressive VACUUM for that table. All rows + VACUUM stores XID + statistics in the system tables pg_class and + pg_database. In particular, + the relfrozenxid column of a table's + pg_class row contains the freeze cutoff XID that was used + by the last aggressive VACUUM for that table. All rows inserted by transactions with XIDs older than this cutoff XID are guaranteed to have been frozen. Similarly, - the datfrozenxid column of a database's - pg_database row is a lower bound on the unfrozen XIDs + the datfrozenxid column of a database's + pg_database row is a lower bound on the unfrozen XIDs appearing in that database — it is just the minimum of the - per-table relfrozenxid values within the database. + per-table relfrozenxid values within the database. A convenient way to examine this information is to execute queries such as: @@ -581,27 +581,27 @@ WHERE c.relkind IN ('r', 'm'); SELECT datname, age(datfrozenxid) FROM pg_database; - The age column measures the number of transactions from the + The age column measures the number of transactions from the cutoff XID to the current transaction's XID. - VACUUM normally only scans pages that have been modified - since the last vacuum, but relfrozenxid can only be + VACUUM normally only scans pages that have been modified + since the last vacuum, but relfrozenxid can only be advanced when every page of the table that might contain unfrozen XIDs is scanned. This happens when - relfrozenxid is more than - vacuum_freeze_table_age transactions old, when - VACUUM's FREEZE option is used, or when all + relfrozenxid is more than + vacuum_freeze_table_age transactions old, when + VACUUM's FREEZE option is used, or when all pages that are not already all-frozen happen to - require vacuuming to remove dead row versions. When VACUUM + require vacuuming to remove dead row versions. When VACUUM scans every page in the table that is not already all-frozen, it should - set age(relfrozenxid) to a value just a little more than the - vacuum_freeze_min_age setting + set age(relfrozenxid) to a value just a little more than the + vacuum_freeze_min_age setting that was used (more by the number of transactions started since the - VACUUM started). If no relfrozenxid-advancing - VACUUM is issued on the table until - autovacuum_freeze_max_age is reached, an autovacuum will soon + VACUUM started). If no relfrozenxid-advancing + VACUUM is issued on the table until + autovacuum_freeze_max_age is reached, an autovacuum will soon be forced for the table. 
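Building on the age() queries shown in this hunk, one can also rank tables by how close they are to the forced-autovacuum threshold. The following sketch is illustrative only (per-table storage parameters can override the global setting, and the threshold arithmetic is an assumption for demonstration); it reads autovacuum_freeze_max_age with current_setting().

SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::int
           - age(c.relfrozenxid) AS xids_until_forced_autovacuum
FROM pg_class c
WHERE c.relkind IN ('r', 'm')
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;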
@@ -616,10 +616,10 @@ WARNING: database "mydb" must be vacuumed within 177009986 transactions HINT: To avoid a database shutdown, execute a database-wide VACUUM in "mydb". - (A manual VACUUM should fix the problem, as suggested by the - hint; but note that the VACUUM must be performed by a + (A manual VACUUM should fix the problem, as suggested by the + hint; but note that the VACUUM must be performed by a superuser, else it will fail to process system catalogs and thus not - be able to advance the database's datfrozenxid.) + be able to advance the database's datfrozenxid.) If these warnings are ignored, the system will shut down and refuse to start any new transactions once there are fewer than 1 million transactions left @@ -632,10 +632,10 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. The 1-million-transaction safety margin exists to let the administrator recover without data loss, by manually executing the - required VACUUM commands. However, since the system will not + required VACUUM commands. However, since the system will not execute commands once it has gone into the safety shutdown mode, the only way to do this is to stop the server and start the server in single-user - mode to execute VACUUM. The shutdown mode is not enforced + mode to execute VACUUM. The shutdown mode is not enforced in single-user mode. See the reference page for details about using single-user mode. @@ -653,15 +653,15 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. - Multixact IDs are used to support row locking by + Multixact IDs are used to support row locking by multiple transactions. Since there is only limited space in a tuple header to store lock information, that information is encoded as - a multiple transaction ID, or multixact ID for short, + a multiple transaction ID, or multixact ID for short, whenever there is more than one transaction concurrently locking a row. Information about which transaction IDs are included in any particular multixact ID is stored separately in - the pg_multixact subdirectory, and only the multixact ID - appears in the xmax field in the tuple header. + the pg_multixact subdirectory, and only the multixact ID + appears in the xmax field in the tuple header. Like transaction IDs, multixact IDs are implemented as a 32-bit counter and corresponding storage, all of which requires careful aging management, storage cleanup, and wraparound handling. @@ -671,23 +671,23 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. - Whenever VACUUM scans any part of a table, it will replace + Whenever VACUUM scans any part of a table, it will replace any multixact ID it encounters which is older than by a different value, which can be the zero value, a single transaction ID, or a newer multixact ID. For each table, - pg_class.relminmxid stores the oldest + pg_class.relminmxid stores the oldest possible multixact ID still appearing in any tuple of that table. If this value is older than , an aggressive vacuum is forced. As discussed in the previous section, an aggressive vacuum means that only those pages which are known to be all-frozen will - be skipped. mxid_age() can be used on - pg_class.relminmxid to find its age. + be skipped. mxid_age() can be used on + pg_class.relminmxid to find its age. - Aggressive VACUUM scans, regardless of + Aggressive VACUUM scans, regardless of what causes them, enable advancing the value for that table. 
Eventually, as all tables in all databases are scanned and their oldest multixact values are advanced, on-disk storage for older @@ -729,21 +729,21 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. - The autovacuum daemon actually consists of multiple processes. + The autovacuum daemon actually consists of multiple processes. There is a persistent daemon process, called the autovacuum launcher, which is in charge of starting autovacuum worker processes for all databases. The launcher will distribute the work across time, attempting to start one worker within each database every - seconds. (Therefore, if the installation has N databases, + seconds. (Therefore, if the installation has N databases, a new worker will be launched every - autovacuum_naptime/N seconds.) + autovacuum_naptime/N seconds.) A maximum of worker processes are allowed to run at the same time. If there are more than - autovacuum_max_workers databases to be processed, + autovacuum_max_workers databases to be processed, the next database will be processed as soon as the first worker finishes. Each worker process will check each table within its database and - execute VACUUM and/or ANALYZE as needed. + execute VACUUM and/or ANALYZE as needed. can be set to monitor autovacuum workers' activity. @@ -761,7 +761,7 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. - Tables whose relfrozenxid value is more than + Tables whose relfrozenxid value is more than transactions old are always vacuumed (this also applies to those tables whose freeze max age has been modified via storage parameters; see below). Otherwise, if the @@ -781,10 +781,10 @@ vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuple collector; it is a semi-accurate count updated by each UPDATE and DELETE operation. (It is only semi-accurate because some information might be lost under heavy - load.) If the relfrozenxid value of the table is more - than vacuum_freeze_table_age transactions old, an aggressive + load.) If the relfrozenxid value of the table is more + than vacuum_freeze_table_age transactions old, an aggressive vacuum is performed to freeze old tuples and advance - relfrozenxid; otherwise, only pages that have been modified + relfrozenxid; otherwise, only pages that have been modified since the last vacuum are scanned. @@ -821,8 +821,8 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu balanced among all the running workers, so that the total I/O impact on the system is the same regardless of the number of workers actually running. However, any workers processing tables whose - per-table autovacuum_vacuum_cost_delay or - autovacuum_vacuum_cost_limit storage parameters have been set + per-table autovacuum_vacuum_cost_delay or + autovacuum_vacuum_cost_limit storage parameters have been set are not considered in the balancing algorithm. @@ -872,7 +872,7 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu But since the command requires an exclusive table lock, it is often preferable to execute an index rebuild with a sequence of creation and replacement steps. Index types that support - with the CONCURRENTLY + with the CONCURRENTLY option can instead be recreated that way. 
If that is successful and the resulting index is valid, the original index can then be replaced by the newly built one using a combination of @@ -896,17 +896,17 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu It is a good idea to save the database server's log output - somewhere, rather than just discarding it via /dev/null. + somewhere, rather than just discarding it via /dev/null. The log output is invaluable when diagnosing problems. However, the log output tends to be voluminous (especially at higher debug levels) so you won't want to save it - indefinitely. You need to rotate the log files so that + indefinitely. You need to rotate the log files so that new log files are started and old ones removed after a reasonable period of time. - If you simply direct the stderr of + If you simply direct the stderr of postgres into a file, you will have log output, but the only way to truncate the log file is to stop and restart @@ -917,13 +917,13 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu A better approach is to send the server's - stderr output to some type of log rotation program. + stderr output to some type of log rotation program. There is a built-in log rotation facility, which you can use by - setting the configuration parameter logging_collector to - true in postgresql.conf. The control + setting the configuration parameter logging_collector to + true in postgresql.conf. The control parameters for this program are described in . You can also use this approach - to capture the log data in machine readable CSV + to capture the log data in machine readable CSV (comma-separated values) format. @@ -934,10 +934,10 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu tool included in the Apache distribution can be used with PostgreSQL. To do this, just pipe the server's - stderr output to the desired program. + stderr output to the desired program. If you start the server with - pg_ctl, then stderr - is already redirected to stdout, so you just need a + pg_ctl, then stderr + is already redirected to stdout, so you just need a pipe command, for example: @@ -947,12 +947,12 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400 Another production-grade approach to managing log output is to - send it to syslog and let - syslog deal with file rotation. To do this, set the - configuration parameter log_destination to syslog - (to log to syslog only) in - postgresql.conf. Then you can send a SIGHUP - signal to the syslog daemon whenever you want to force it + send it to syslog and let + syslog deal with file rotation. To do this, set the + configuration parameter log_destination to syslog + (to log to syslog only) in + postgresql.conf. Then you can send a SIGHUP + signal to the syslog daemon whenever you want to force it to start writing a new log file. If you want to automate log rotation, the logrotate program can be configured to work with log files from @@ -960,12 +960,12 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400 - On many systems, however, syslog is not very reliable, + On many systems, however, syslog is not very reliable, particularly with large log messages; it might truncate or drop messages - just when you need them the most. Also, on Linux, - syslog will flush each message to disk, yielding poor - performance. (You can use a - at the start of the file name - in the syslog configuration file to disable syncing.) + just when you need them the most. 
Also, on Linux, + syslog will flush each message to disk, yielding poor + performance. (You can use a - at the start of the file name + in the syslog configuration file to disable syncing.) diff --git a/doc/src/sgml/manage-ag.sgml b/doc/src/sgml/manage-ag.sgml index fe1a6355c4..f005538220 100644 --- a/doc/src/sgml/manage-ag.sgml +++ b/doc/src/sgml/manage-ag.sgml @@ -3,7 +3,7 @@ Managing Databases - database + database Every instance of a running PostgreSQL @@ -26,7 +26,7 @@ (database objects). Generally, every database object (tables, functions, etc.) belongs to one and only one database. (However there are a few system catalogs, for example - pg_database, that belong to a whole cluster and + pg_database, that belong to a whole cluster and are accessible from each database within the cluster.) More accurately, a database is a collection of schemas and the schemas contain the tables, functions, etc. So the full hierarchy is: @@ -41,7 +41,7 @@ connection. However, an application is not restricted in the number of connections it opens to the same or other databases. Databases are physically separated and access control is managed at the - connection level. If one PostgreSQL server + connection level. If one PostgreSQL server instance is to house projects or users that should be separate and for the most part unaware of each other, it is therefore recommended to put them into separate databases. If the projects @@ -53,23 +53,23 @@ - Databases are created with the CREATE DATABASE command + Databases are created with the CREATE DATABASE command (see ) and destroyed with the - DROP DATABASE command + DROP DATABASE command (see ). To determine the set of existing databases, examine the - pg_database system catalog, for example + pg_database system catalog, for example SELECT datname FROM pg_database; - The program's \l meta-command - and - The SQL standard calls databases catalogs, but there + The SQL standard calls databases catalogs, but there is no difference in practice. @@ -78,10 +78,10 @@ SELECT datname FROM pg_database; Creating a Database - CREATE DATABASE + CREATE DATABASE - In order to create a database, the PostgreSQL + In order to create a database, the PostgreSQL server must be up and running (see ). @@ -90,9 +90,9 @@ SELECT datname FROM pg_database; Databases are created with the SQL command : -CREATE DATABASE name; +CREATE DATABASE name; - where name follows the usual rules for + where name follows the usual rules for SQL identifiers. The current role automatically becomes the owner of the new database. It is the privilege of the owner of a database to remove it later (which also removes all @@ -107,25 +107,25 @@ CREATE DATABASE name; Since you need to be connected to the database server in order to execute the CREATE DATABASE command, the - question remains how the first database at any given + question remains how the first database at any given site can be created. The first database is always created by the - initdb command when the data storage area is + initdb command when the data storage area is initialized. (See .) This database is called - postgres.postgres So to - create the first ordinary database you can connect to - postgres. + postgres.postgres So to + create the first ordinary database you can connect to + postgres. A second database, - template1,template1 + template1,template1 is also created during database cluster initialization. Whenever a new database is created within the cluster, template1 is essentially cloned. 
- This means that any changes you make in template1 are + This means that any changes you make in template1 are propagated to all subsequently created databases. Because of this, - avoid creating objects in template1 unless you want them + avoid creating objects in template1 unless you want them propagated to every newly created database. More details appear in . @@ -133,17 +133,17 @@ CREATE DATABASE name; As a convenience, there is a program you can execute from the shell to create new databases, - createdb.createdb + createdb.createdb createdb dbname - createdb does no magic. It connects to the postgres - database and issues the CREATE DATABASE command, + createdb does no magic. It connects to the postgres + database and issues the CREATE DATABASE command, exactly as described above. The reference page contains the invocation - details. Note that createdb without any arguments will create + details. Note that createdb without any arguments will create a database with the current user name. @@ -160,11 +160,11 @@ createdb dbname configure and manage it themselves. To achieve that, use one of the following commands: -CREATE DATABASE dbname OWNER rolename; +CREATE DATABASE dbname OWNER rolename; from the SQL environment, or: -createdb -O rolename dbname +createdb -O rolename dbname from the shell. Only the superuser is allowed to create a database for @@ -176,55 +176,55 @@ createdb -O rolename dbname Template Databases - CREATE DATABASE actually works by copying an existing + CREATE DATABASE actually works by copying an existing database. By default, it copies the standard system database named - template1.template1 Thus that - database is the template from which new databases are - made. If you add objects to template1, these objects + template1.template1 Thus that + database is the template from which new databases are + made. If you add objects to template1, these objects will be copied into subsequently created user databases. This behavior allows site-local modifications to the standard set of objects in databases. For example, if you install the procedural - language PL/Perl in template1, it will + language PL/Perl in template1, it will automatically be available in user databases without any extra action being taken when those databases are created. There is a second standard system database named - template0.template0 This + template0.template0 This database contains the same data as the initial contents of - template1, that is, only the standard objects + template1, that is, only the standard objects predefined by your version of - PostgreSQL. template0 + PostgreSQL. template0 should never be changed after the database cluster has been initialized. By instructing - CREATE DATABASE to copy template0 instead - of template1, you can create a virgin user + CREATE DATABASE to copy template0 instead + of template1, you can create a virgin user database that contains none of the site-local additions in - template1. This is particularly handy when restoring a - pg_dump dump: the dump script should be restored in a + template1. This is particularly handy when restoring a + pg_dump dump: the dump script should be restored in a virgin database to ensure that one recreates the correct contents of the dumped database, without conflicting with objects that - might have been added to template1 later on. + might have been added to template1 later on. 
- Another common reason for copying template0 instead - of template1 is that new encoding and locale settings - can be specified when copying template0, whereas a copy - of template1 must use the same settings it does. - This is because template1 might contain encoding-specific - or locale-specific data, while template0 is known not to. + Another common reason for copying template0 instead + of template1 is that new encoding and locale settings + can be specified when copying template0, whereas a copy + of template1 must use the same settings it does. + This is because template1 might contain encoding-specific + or locale-specific data, while template0 is known not to. To create a database by copying template0, use: -CREATE DATABASE dbname TEMPLATE template0; +CREATE DATABASE dbname TEMPLATE template0; from the SQL environment, or: -createdb -T template0 dbname +createdb -T template0 dbname from the shell. @@ -232,49 +232,49 @@ createdb -T template0 dbname It is possible to create additional template databases, and indeed one can copy any database in a cluster by specifying its name - as the template for CREATE DATABASE. It is important to + as the template for CREATE DATABASE. It is important to understand, however, that this is not (yet) intended as a general-purpose COPY DATABASE facility. The principal limitation is that no other sessions can be connected to the source database while it is being copied. CREATE - DATABASE will fail if any other connection exists when it starts; + DATABASE will fail if any other connection exists when it starts; during the copy operation, new connections to the source database are prevented. - Two useful flags exist in pg_databasepg_database for each + Two useful flags exist in pg_databasepg_database for each database: the columns datistemplate and datallowconn. datistemplate can be set to indicate that a database is intended as a template for - CREATE DATABASE. If this flag is set, the database can be - cloned by any user with CREATEDB privileges; if it is not set, + CREATE DATABASE. If this flag is set, the database can be + cloned by any user with CREATEDB privileges; if it is not set, only superusers and the owner of the database can clone it. If datallowconn is false, then no new connections to that database will be allowed (but existing sessions are not terminated simply by setting the flag false). The template0 - database is normally marked datallowconn = false to prevent its modification. + database is normally marked datallowconn = false to prevent its modification. Both template0 and template1 - should always be marked with datistemplate = true. + should always be marked with datistemplate = true. - template1 and template0 do not have any special - status beyond the fact that the name template1 is the default - source database name for CREATE DATABASE. - For example, one could drop template1 and recreate it from - template0 without any ill effects. This course of action + template1 and template0 do not have any special + status beyond the fact that the name template1 is the default + source database name for CREATE DATABASE. + For example, one could drop template1 and recreate it from + template0 without any ill effects. This course of action might be advisable if one has carelessly added a bunch of junk in - template1. (To delete template1, - it must have pg_database.datistemplate = false.) + template1. (To delete template1, + it must have pg_database.datistemplate = false.) 
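To make the recovery procedure described in this hunk concrete, here is a sketch of dropping and recreating template1 from template0. It must be run as a superuser, and no other sessions may be connected to template1 while it is dropped; the direct catalog UPDATE mirrors the pg_database.datistemplate requirement mentioned above.

UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 TEMPLATE template0;
UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';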
- The postgres database is also created when a database + The postgres database is also created when a database cluster is initialized. This database is meant as a default database for users and applications to connect to. It is simply a copy of - template1 and can be dropped and recreated if necessary. + template1 and can be dropped and recreated if necessary. @@ -284,7 +284,7 @@ createdb -T template0 dbname Recall from that the - PostgreSQL server provides a large number of + PostgreSQL server provides a large number of run-time configuration variables. You can set database-specific default values for many of these settings. @@ -305,8 +305,8 @@ ALTER DATABASE mydb SET geqo TO off; session started. Note that users can still alter this setting during their sessions; it will only be the default. To undo any such setting, use - ALTER DATABASE dbname RESET - varname. + ALTER DATABASE dbname RESET + varname. @@ -315,9 +315,9 @@ ALTER DATABASE mydb SET geqo TO off; Databases are destroyed with the command - :DROP DATABASE + :DROP DATABASE -DROP DATABASE name; +DROP DATABASE name; Only the owner of the database, or a superuser, can drop a database. Dropping a database removes all objects @@ -329,19 +329,19 @@ DROP DATABASE name; You cannot execute the DROP DATABASE command while connected to the victim database. You can, however, be - connected to any other database, including the template1 + connected to any other database, including the template1 database. - template1 would be the only option for dropping the last user database of a + template1 would be the only option for dropping the last user database of a given cluster. For convenience, there is also a shell program to drop - databases, :dropdb + databases, :dropdb dropdb dbname - (Unlike createdb, it is not the default action to drop + (Unlike createdb, it is not the default action to drop the database with the current user name.) @@ -354,7 +354,7 @@ dropdb dbname - Tablespaces in PostgreSQL allow database administrators to + Tablespaces in PostgreSQL allow database administrators to define locations in the file system where the files representing database objects can be stored. Once created, a tablespace can be referred to by name when creating database objects. @@ -362,7 +362,7 @@ dropdb dbname By using tablespaces, an administrator can control the disk layout - of a PostgreSQL installation. This is useful in at + of a PostgreSQL installation. This is useful in at least two ways. First, if the partition or volume on which the cluster was initialized runs out of space and cannot be extended, a tablespace can be created on a different partition and used @@ -397,12 +397,12 @@ dropdb dbname To define a tablespace, use the - command, for example:CREATE TABLESPACE: + command, for example:CREATE TABLESPACE: CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/data'; The location must be an existing, empty directory that is owned by - the PostgreSQL operating system user. All objects subsequently + the PostgreSQL operating system user. All objects subsequently created within the tablespace will be stored in files underneath this directory. The location must not be on removable or transient storage, as the cluster might fail to function if the tablespace is missing @@ -414,7 +414,7 @@ CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/data'; There is usually not much point in making more than one tablespace per logical file system, since you cannot control the location of individual files within a logical file system. 
However, - PostgreSQL does not enforce any such limitation, and + PostgreSQL does not enforce any such limitation, and indeed it is not directly aware of the file system boundaries on your system. It just stores files in the directories you tell it to use. @@ -423,15 +423,15 @@ CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/data'; Creation of the tablespace itself must be done as a database superuser, but after that you can allow ordinary database users to use it. - To do that, grant them the CREATE privilege on it. + To do that, grant them the CREATE privilege on it. Tables, indexes, and entire databases can be assigned to - particular tablespaces. To do so, a user with the CREATE + particular tablespaces. To do so, a user with the CREATE privilege on a given tablespace must pass the tablespace name as a parameter to the relevant command. For example, the following creates - a table in the tablespace space1: + a table in the tablespace space1: CREATE TABLE foo(i int) TABLESPACE space1; @@ -443,9 +443,9 @@ CREATE TABLE foo(i int) TABLESPACE space1; SET default_tablespace = space1; CREATE TABLE foo(i int); - When default_tablespace is set to anything but an empty - string, it supplies an implicit TABLESPACE clause for - CREATE TABLE and CREATE INDEX commands that + When default_tablespace is set to anything but an empty + string, it supplies an implicit TABLESPACE clause for + CREATE TABLE and CREATE INDEX commands that do not have an explicit one. @@ -463,9 +463,9 @@ CREATE TABLE foo(i int); The tablespace associated with a database is used to store the system catalogs of that database. Furthermore, it is the default tablespace used for tables, indexes, and temporary files created within the database, - if no TABLESPACE clause is given and no other selection is - specified by default_tablespace or - temp_tablespaces (as appropriate). + if no TABLESPACE clause is given and no other selection is + specified by default_tablespace or + temp_tablespaces (as appropriate). If a database is created without specifying a tablespace for it, it uses the same tablespace as the template database it is copied from. @@ -473,12 +473,12 @@ CREATE TABLE foo(i int); Two tablespaces are automatically created when the database cluster is initialized. The - pg_global tablespace is used for shared system catalogs. The - pg_default tablespace is the default tablespace of the - template1 and template0 databases (and, therefore, + pg_global tablespace is used for shared system catalogs. The + pg_default tablespace is the default tablespace of the + template1 and template0 databases (and, therefore, will be the default tablespace for other databases as well, unless - overridden by a TABLESPACE clause in CREATE - DATABASE). + overridden by a TABLESPACE clause in CREATE + DATABASE). @@ -501,25 +501,25 @@ CREATE TABLE foo(i int); SELECT spcname FROM pg_tablespace; - The program's \db meta-command + The program's \db meta-command is also useful for listing the existing tablespaces. - PostgreSQL makes use of symbolic links + PostgreSQL makes use of symbolic links to simplify the implementation of tablespaces. This - means that tablespaces can be used only on systems + means that tablespaces can be used only on systems that support symbolic links. - The directory $PGDATA/pg_tblspc contains symbolic links that + The directory $PGDATA/pg_tblspc contains symbolic links that point to each of the non-built-in tablespaces defined in the cluster. 
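      As an illustration, the physical location of each tablespace can be
      asked of the server itself; this is a minimal sketch using the
      built-in pg_tablespace_location() function (the built-in
      tablespaces report an empty path, since they live inside the data
      directory):

SELECT spcname, pg_tablespace_location(oid) AS location
FROM pg_tablespace;

      The non-empty paths reported here are the targets of the symbolic
      links described next.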
Although not recommended, it is possible to adjust the tablespace layout by hand by redefining these links. Under no circumstances perform this operation while the server is running. Note that in PostgreSQL 9.1 - and earlier you will also need to update the pg_tablespace - catalog with the new locations. (If you do not, pg_dump will + and earlier you will also need to update the pg_tablespace + catalog with the new locations. (If you do not, pg_dump will continue to output the old tablespace locations.) diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index 18fb9c2aa6..6f8203355e 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -24,11 +24,11 @@ analyzing performance. Most of this chapter is devoted to describing PostgreSQL's statistics collector, but one should not neglect regular Unix monitoring programs such as - ps, top, iostat, and vmstat. + ps, top, iostat, and vmstat. Also, once one has identified a poorly-performing query, further investigation might be needed using PostgreSQL's command. - discusses EXPLAIN + discusses EXPLAIN and other methods for understanding the behavior of an individual query. @@ -43,7 +43,7 @@ On most Unix platforms, PostgreSQL modifies its - command title as reported by ps, so that individual server + command title as reported by ps, so that individual server processes can readily be identified. A sample display is @@ -59,29 +59,29 @@ postgres 15606 0.0 0.0 58772 3052 ? Ss 18:07 0:00 postgres: tgl postgres 15610 0.0 0.0 58772 3056 ? Ss 18:07 0:00 postgres: tgl regression [local] idle in transaction - (The appropriate invocation of ps varies across different + (The appropriate invocation of ps varies across different platforms, as do the details of what is shown. This example is from a recent Linux system.) The first process listed here is the master server process. The command arguments shown for it are the same ones used when it was launched. The next five processes are background worker processes automatically launched by the - master process. (The stats collector process will not be present + master process. (The stats collector process will not be present if you have set the system not to start the statistics collector; likewise - the autovacuum launcher process can be disabled.) + the autovacuum launcher process can be disabled.) Each of the remaining processes is a server process handling one client connection. Each such process sets its command line display in the form -postgres: user database host activity +postgres: user database host activity The user, database, and (client) host items remain the same for the life of the client connection, but the activity indicator changes. - The activity can be idle (i.e., waiting for a client command), - idle in transaction (waiting for client inside a BEGIN block), - or a command type name such as SELECT. Also, - waiting is appended if the server process is presently waiting + The activity can be idle (i.e., waiting for a client command), + idle in transaction (waiting for client inside a BEGIN block), + or a command type name such as SELECT. Also, + waiting is appended if the server process is presently waiting on a lock held by another session. In the above example we can infer that process 15606 is waiting for process 15610 to complete its transaction and thereby release some lock. 
(Process 15610 must be the blocker, because @@ -93,7 +93,7 @@ postgres: user database host If has been configured the - cluster name will also be shown in ps output: + cluster name will also be shown in ps output: $ psql -c 'SHOW cluster_name' cluster_name @@ -122,8 +122,8 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser flags, not just one. In addition, your original invocation of the postgres command must have a shorter ps status display than that provided by each - server process. If you fail to do all three things, the ps - output for each server process will be the original postgres + server process. If you fail to do all three things, the ps + output for each server process will be the original postgres command line. @@ -137,7 +137,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - PostgreSQL's statistics collector + PostgreSQL's statistics collector is a subsystem that supports collection and reporting of information about server activity. Presently, the collector can count accesses to tables and indexes in both disk-block and individual-row terms. It also tracks @@ -161,7 +161,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Since collection of statistics adds some overhead to query execution, the system can be configured to collect or not collect information. This is controlled by configuration parameters that are normally set in - postgresql.conf. (See for + postgresql.conf. (See for details about setting configuration parameters.) @@ -186,13 +186,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - Normally these parameters are set in postgresql.conf so + Normally these parameters are set in postgresql.conf so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with - SET.) + SET.) @@ -201,7 +201,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser These files are stored in the directory named by the parameter, pg_stat_tmp by default. - For better performance, stats_temp_directory can be + For better performance, stats_temp_directory can be pointed at a RAM-based file system, decreasing physical I/O requirements. When the server shuts down cleanly, a permanent copy of the statistics data is stored in the pg_stat subdirectory, so that @@ -261,10 +261,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser A transaction can also see its own statistics (as yet untransmitted to the - collector) in the views pg_stat_xact_all_tables, - pg_stat_xact_sys_tables, - pg_stat_xact_user_tables, and - pg_stat_xact_user_functions. These numbers do not act as + collector) in the views pg_stat_xact_all_tables, + pg_stat_xact_sys_tables, + pg_stat_xact_user_tables, and + pg_stat_xact_user_functions. These numbers do not act as stated above; instead they update continuously throughout the transaction. @@ -293,7 +293,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_replicationpg_stat_replication + pg_stat_replicationpg_stat_replication One row per WAL sender process, showing statistics about replication to that sender's connected standby server. See for details. @@ -301,7 +301,7 @@ postgres 27093 0.0 0.0 30096 2752 ? 
Ss 11:34 0:00 postgres: ser - pg_stat_wal_receiverpg_stat_wal_receiver + pg_stat_wal_receiverpg_stat_wal_receiver Only one row, showing statistics about the WAL receiver from that receiver's connected server. See for details. @@ -309,7 +309,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_subscriptionpg_stat_subscription + pg_stat_subscriptionpg_stat_subscription At least one row per subscription, showing information about the subscription workers. See for details. @@ -317,7 +317,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_sslpg_stat_ssl + pg_stat_sslpg_stat_ssl One row per connection (regular and replication), showing information about SSL used on this connection. See for details. @@ -325,9 +325,9 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_progress_vacuumpg_stat_progress_vacuum + pg_stat_progress_vacuumpg_stat_progress_vacuum One row for each backend (including autovacuum worker processes) running - VACUUM, showing current progress. + VACUUM, showing current progress. See . @@ -349,7 +349,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_archiverpg_stat_archiver + pg_stat_archiverpg_stat_archiver One row only, showing statistics about the WAL archiver process's activity. See for details. @@ -357,7 +357,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_bgwriterpg_stat_bgwriter + pg_stat_bgwriterpg_stat_bgwriter One row only, showing statistics about the background writer process's activity. See for details. @@ -365,14 +365,14 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_databasepg_stat_database + pg_stat_databasepg_stat_database One row per database, showing database-wide statistics. See for details. - pg_stat_database_conflictspg_stat_database_conflicts + pg_stat_database_conflictspg_stat_database_conflicts One row per database, showing database-wide statistics about query cancels due to conflict with recovery on standby servers. @@ -381,7 +381,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_all_tablespg_stat_all_tables + pg_stat_all_tablespg_stat_all_tables One row for each table in the current database, showing statistics about accesses to that specific table. @@ -390,40 +390,40 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_sys_tablespg_stat_sys_tables - Same as pg_stat_all_tables, except that only + pg_stat_sys_tablespg_stat_sys_tables + Same as pg_stat_all_tables, except that only system tables are shown. - pg_stat_user_tablespg_stat_user_tables - Same as pg_stat_all_tables, except that only user + pg_stat_user_tablespg_stat_user_tables + Same as pg_stat_all_tables, except that only user tables are shown. - pg_stat_xact_all_tablespg_stat_xact_all_tables - Similar to pg_stat_all_tables, but counts actions - taken so far within the current transaction (which are not - yet included in pg_stat_all_tables and related views). + pg_stat_xact_all_tablespg_stat_xact_all_tables + Similar to pg_stat_all_tables, but counts actions + taken so far within the current transaction (which are not + yet included in pg_stat_all_tables and related views). The columns for numbers of live and dead rows and vacuum and analyze actions are not present in this view. 
- pg_stat_xact_sys_tablespg_stat_xact_sys_tables - Same as pg_stat_xact_all_tables, except that only + pg_stat_xact_sys_tablespg_stat_xact_sys_tables + Same as pg_stat_xact_all_tables, except that only system tables are shown. - pg_stat_xact_user_tablespg_stat_xact_user_tables - Same as pg_stat_xact_all_tables, except that only + pg_stat_xact_user_tablespg_stat_xact_user_tables + Same as pg_stat_xact_all_tables, except that only user tables are shown. - pg_stat_all_indexespg_stat_all_indexes + pg_stat_all_indexespg_stat_all_indexes One row for each index in the current database, showing statistics about accesses to that specific index. @@ -432,19 +432,19 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_sys_indexespg_stat_sys_indexes - Same as pg_stat_all_indexes, except that only + pg_stat_sys_indexespg_stat_sys_indexes + Same as pg_stat_all_indexes, except that only indexes on system tables are shown. - pg_stat_user_indexespg_stat_user_indexes - Same as pg_stat_all_indexes, except that only + pg_stat_user_indexespg_stat_user_indexes + Same as pg_stat_all_indexes, except that only indexes on user tables are shown. - pg_statio_all_tablespg_statio_all_tables + pg_statio_all_tablespg_statio_all_tables One row for each table in the current database, showing statistics about I/O on that specific table. @@ -453,19 +453,19 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_statio_sys_tablespg_statio_sys_tables - Same as pg_statio_all_tables, except that only + pg_statio_sys_tablespg_statio_sys_tables + Same as pg_statio_all_tables, except that only system tables are shown. - pg_statio_user_tablespg_statio_user_tables - Same as pg_statio_all_tables, except that only + pg_statio_user_tablespg_statio_user_tables + Same as pg_statio_all_tables, except that only user tables are shown. - pg_statio_all_indexespg_statio_all_indexes + pg_statio_all_indexespg_statio_all_indexes One row for each index in the current database, showing statistics about I/O on that specific index. @@ -474,19 +474,19 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_statio_sys_indexespg_statio_sys_indexes - Same as pg_statio_all_indexes, except that only + pg_statio_sys_indexespg_statio_sys_indexes + Same as pg_statio_all_indexes, except that only indexes on system tables are shown. - pg_statio_user_indexespg_statio_user_indexes - Same as pg_statio_all_indexes, except that only + pg_statio_user_indexespg_statio_user_indexes + Same as pg_statio_all_indexes, except that only indexes on user tables are shown. - pg_statio_all_sequencespg_statio_all_sequences + pg_statio_all_sequencespg_statio_all_sequences One row for each sequence in the current database, showing statistics about I/O on that specific sequence. @@ -495,20 +495,20 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_statio_sys_sequencespg_statio_sys_sequences - Same as pg_statio_all_sequences, except that only + pg_statio_sys_sequencespg_statio_sys_sequences + Same as pg_statio_all_sequences, except that only system sequences are shown. (Presently, no system sequences are defined, so this view is always empty.) - pg_statio_user_sequencespg_statio_user_sequences - Same as pg_statio_all_sequences, except that only + pg_statio_user_sequencespg_statio_user_sequences + Same as pg_statio_all_sequences, except that only user sequences are shown. 
- pg_stat_user_functionspg_stat_user_functions + pg_stat_user_functionspg_stat_user_functions One row for each tracked function, showing statistics about executions of that function. See @@ -517,10 +517,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - pg_stat_xact_user_functionspg_stat_xact_user_functions - Similar to pg_stat_user_functions, but counts only - calls during the current transaction (which are not - yet included in pg_stat_user_functions). + pg_stat_xact_user_functionspg_stat_xact_user_functions + Similar to pg_stat_user_functions, but counts only + calls during the current transaction (which are not + yet included in pg_stat_user_functions). @@ -533,18 +533,18 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - The pg_statio_ views are primarily useful to + The pg_statio_ views are primarily useful to determine the effectiveness of the buffer cache. When the number of actual disk reads is much smaller than the number of buffer hits, then the cache is satisfying most read requests without invoking a kernel call. However, these statistics do not give the - entire story: due to the way in which PostgreSQL + entire story: due to the way in which PostgreSQL handles disk I/O, data that is not in the - PostgreSQL buffer cache might still reside in the + PostgreSQL buffer cache might still reside in the kernel's I/O cache, and might therefore still be fetched without requiring a physical read. Users interested in obtaining more - detailed information on PostgreSQL I/O behavior are - advised to use the PostgreSQL statistics collector + detailed information on PostgreSQL I/O behavior are + advised to use the PostgreSQL statistics collector in combination with operating system utilities that allow insight into the kernel's handling of I/O. @@ -564,39 +564,39 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - datid - oid + datid + oid OID of the database this backend is connected to - datname - name + datname + name Name of the database this backend is connected to - pid - integer + pid + integer Process ID of this backend - usesysid - oid + usesysid + oid OID of the user logged into this backend - usename - name + usename + name Name of the user logged into this backend - application_name - text + application_name + text Name of the application that is connected to this backend - client_addr - inet + client_addr + inet IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an @@ -604,78 +604,78 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - client_hostname - text + client_hostname + text Host name of the connected client, as reported by a - reverse DNS lookup of client_addr. This field will + reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when is enabled. - client_port - integer + client_port + integer TCP port number that the client is using for communication - with this backend, or -1 if a Unix socket is used + with this backend, or -1 if a Unix socket is used - backend_start - timestamp with time zone + backend_start + timestamp with time zone Time when this process was started. For client backends, this is the time the client connected to the server. - xact_start - timestamp with time zone + xact_start + timestamp with time zone Time when this process' current transaction was started, or null if no transaction is active. 
If the current query is the first of its transaction, this column is equal to the
-       query_start column.
+       query_start column.
      
     
     
-      query_start
-      timestamp with time zone
+      query_start
+      timestamp with time zone
      Time when the currently active query was started, or if
-       state is not active, when the last query
+       state is not active, when the last query
       was started
      
     
     
-      state_change
-      timestamp with time zone
-      Time when the state was last changed
+      state_change
+      timestamp with time zone
+      Time when the state was last changed
     
     
-      wait_event_type
-      text
+      wait_event_type
+      text
      The type of event for which the backend is waiting, if any;
       otherwise NULL. Possible values are:
       
        
-         LWLock: The backend is waiting for a lightweight lock.
+         LWLock: The backend is waiting for a lightweight lock.
          Each such lock protects a particular data structure in shared memory.
-          wait_event will contain a name identifying the purpose
+          wait_event will contain a name identifying the purpose
          of the lightweight lock.  (Some locks have specific names; others
          are part of a group of locks each with a similar purpose.)
        
        
-         Lock: The backend is waiting for a heavyweight lock.
+         Lock: The backend is waiting for a heavyweight lock.
          Heavyweight locks, also known as lock manager locks or simply locks,
          primarily protect SQL-visible objects such as tables.  However,
          they are also used to ensure mutual exclusion for certain internal
-          operations such as relation extension.  wait_event will
+          operations such as relation extension.  wait_event will
          identify the type of lock awaited.
        
        
-         BufferPin: The server process is waiting to access
+         BufferPin: The server process is waiting to access
          a data buffer during a period when no other process can be
          examining that buffer.  Buffer pin waits can be protracted if
          another process holds an open cursor which last read data from the
@@ -684,94 +684,94 @@ postgres  27093  0.0  0.0  30096  2752 ?        Ss   11:34   0:00 postgres: ser
        
        
-         Activity: The server process is idle.  This is used by
+         Activity: The server process is idle.  This is used by
          system processes waiting for activity in their main processing loop.
-          wait_event will identify the specific wait point.
+          wait_event will identify the specific wait point.
        
        
-         Extension: The server process is waiting for activity
+         Extension: The server process is waiting for activity
          in an extension module.  This category is useful for modules to
          track custom waiting points.
        
        
-         Client: The server process is waiting for some activity
+         Client: The server process is waiting for some activity
          on a socket from user applications; the server expects
          something to happen that is independent of its internal processes.
-          wait_event will identify the specific wait point.
+          wait_event will identify the specific wait point.
        
        
-         IPC: The server process is waiting for some activity
-          from another process in the server.  wait_event will
+         IPC: The server process is waiting for some activity
+          from another process in the server.  wait_event will
          identify the specific wait point.
        
        
-         Timeout: The server process is waiting for a timeout
-          to expire.  wait_event will identify the specific wait
+         Timeout: The server process is waiting for a timeout
+          to expire.  wait_event will identify the specific wait
          point.
        
        
-         IO: The server process is waiting for an IO to complete.
-          wait_event will identify the specific wait point.
+         IO: The server process is waiting for an IO to complete.
+          wait_event will identify the specific wait point.
- wait_event - text + wait_event + text Wait event name if backend is currently waiting, otherwise NULL. See for details. - state - text + state + text Current overall state of this backend. Possible values are: - active: The backend is executing a query. + active: The backend is executing a query. - idle: The backend is waiting for a new client command. + idle: The backend is waiting for a new client command. - idle in transaction: The backend is in a transaction, + idle in transaction: The backend is in a transaction, but is not currently executing a query. - idle in transaction (aborted): This state is similar to - idle in transaction, except one of the statements in + idle in transaction (aborted): This state is similar to + idle in transaction, except one of the statements in the transaction caused an error. - fastpath function call: The backend is executing a + fastpath function call: The backend is executing a fast-path function. - disabled: This state is reported if disabled: This state is reported if is disabled in this backend. @@ -786,13 +786,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser backend_xmin xid - The current backend's xmin horizon. + The current backend's xmin horizon. - query - text + query + text Text of this backend's most recent query. If - state is active this field shows the + state is active this field shows the currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 characters; this value can be changed via the parameter @@ -803,11 +803,11 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser backend_type text Type of current backend. Possible types are - autovacuum launcher, autovacuum worker, - background worker, background writer, - client backend, checkpointer, - startup, walreceiver, - walsender and walwriter. + autovacuum launcher, autovacuum worker, + background worker, background writer, + client backend, checkpointer, + startup, walreceiver, + walsender and walwriter. @@ -822,10 +822,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - The wait_event and state columns are - independent. If a backend is in the active state, - it may or may not be waiting on some event. If the state - is active and wait_event is non-null, it + The wait_event and state columns are + independent. If a backend is in the active state, + it may or may not be waiting on some event. If the state + is active and wait_event is non-null, it means that a query is being executed, but is being blocked somewhere in the system. @@ -845,767 +845,767 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - LWLock - ShmemIndexLock + LWLock + ShmemIndexLock Waiting to find or allocate space in shared memory. - OidGenLock + OidGenLock Waiting to allocate or assign an OID. - XidGenLock + XidGenLock Waiting to allocate or assign a transaction id. - ProcArrayLock + ProcArrayLock Waiting to get a snapshot or clearing a transaction id at transaction end. - SInvalReadLock + SInvalReadLock Waiting to retrieve or remove messages from shared invalidation queue. - SInvalWriteLock + SInvalWriteLock Waiting to add a message in shared invalidation queue. - WALBufMappingLock + WALBufMappingLock Waiting to replace a page in WAL buffers. - WALWriteLock + WALWriteLock Waiting for WAL buffers to be written to disk. - ControlFileLock + ControlFileLock Waiting to read or update the control file or creation of a new WAL file. 
- CheckpointLock + CheckpointLock Waiting to perform checkpoint. - CLogControlLock + CLogControlLock Waiting to read or update transaction status. - SubtransControlLock + SubtransControlLock Waiting to read or update sub-transaction information. - MultiXactGenLock + MultiXactGenLock Waiting to read or update shared multixact state. - MultiXactOffsetControlLock + MultiXactOffsetControlLock Waiting to read or update multixact offset mappings. - MultiXactMemberControlLock + MultiXactMemberControlLock Waiting to read or update multixact member mappings. - RelCacheInitLock + RelCacheInitLock Waiting to read or write relation cache initialization file. - CheckpointerCommLock + CheckpointerCommLock Waiting to manage fsync requests. - TwoPhaseStateLock + TwoPhaseStateLock Waiting to read or update the state of prepared transactions. - TablespaceCreateLock + TablespaceCreateLock Waiting to create or drop the tablespace. - BtreeVacuumLock + BtreeVacuumLock Waiting to read or update vacuum-related information for a B-tree index. - AddinShmemInitLock + AddinShmemInitLock Waiting to manage space allocation in shared memory. - AutovacuumLock + AutovacuumLock Autovacuum worker or launcher waiting to update or read the current state of autovacuum workers. - AutovacuumScheduleLock + AutovacuumScheduleLock Waiting to ensure that the table it has selected for a vacuum still needs vacuuming. - SyncScanLock + SyncScanLock Waiting to get the start location of a scan on a table for synchronized scans. - RelationMappingLock + RelationMappingLock Waiting to update the relation map file used to store catalog to filenode mapping. - AsyncCtlLock + AsyncCtlLock Waiting to read or update shared notification state. - AsyncQueueLock + AsyncQueueLock Waiting to read or update notification messages. - SerializableXactHashLock + SerializableXactHashLock Waiting to retrieve or store information about serializable transactions. - SerializableFinishedListLock + SerializableFinishedListLock Waiting to access the list of finished serializable transactions. - SerializablePredicateLockListLock + SerializablePredicateLockListLock Waiting to perform an operation on a list of locks held by serializable transactions. - OldSerXidLock + OldSerXidLock Waiting to read or record conflicting serializable transactions. - SyncRepLock + SyncRepLock Waiting to read or update information about synchronous replicas. - BackgroundWorkerLock + BackgroundWorkerLock Waiting to read or update background worker state. - DynamicSharedMemoryControlLock + DynamicSharedMemoryControlLock Waiting to read or update dynamic shared memory state. - AutoFileLock - Waiting to update the postgresql.auto.conf file. + AutoFileLock + Waiting to update the postgresql.auto.conf file. - ReplicationSlotAllocationLock + ReplicationSlotAllocationLock Waiting to allocate or free a replication slot. - ReplicationSlotControlLock + ReplicationSlotControlLock Waiting to read or update replication slot state. - CommitTsControlLock + CommitTsControlLock Waiting to read or update transaction commit timestamps. - CommitTsLock + CommitTsLock Waiting to read or update the last value set for the transaction timestamp. - ReplicationOriginLock + ReplicationOriginLock Waiting to setup, drop or use replication origin. - MultiXactTruncationLock + MultiXactTruncationLock Waiting to read or truncate multixact information. - OldSnapshotTimeMapLock + OldSnapshotTimeMapLock Waiting to read or update old snapshot control information. 
-         BackendRandomLock
+         BackendRandomLock
         Waiting to generate a random number.
        
        
-         LogicalRepWorkerLock
+         LogicalRepWorkerLock
         Waiting for action on logical replication worker to finish.
        
        
-         CLogTruncationLock
+         CLogTruncationLock
         Waiting to truncate the write-ahead log or waiting for write-ahead
          log truncation to finish.
        
        
-         clog
+         clog
         Waiting for I/O on a clog (transaction status) buffer.
        
        
-         commit_timestamp
+         commit_timestamp
         Waiting for I/O on commit timestamp buffer.
        
        
-         subtrans
+         subtrans
         Waiting for I/O on a subtransaction buffer.
        
        
-         multixact_offset
+         multixact_offset
         Waiting for I/O on a multixact offset buffer.
        
        
-         multixact_member
+         multixact_member
         Waiting for I/O on a multixact_member buffer.
        
        
-         async
+         async
         Waiting for I/O on an async (notify) buffer.
        
        
-         oldserxid
+         oldserxid
         Waiting for I/O on an oldserxid buffer.
        
        
-         wal_insert
+         wal_insert
         Waiting to insert WAL into a memory buffer.
        
        
-         buffer_content
+         buffer_content
         Waiting to read or write a data page in memory.
        
        
-         buffer_io
+         buffer_io
         Waiting for I/O on a data page.
        
        
-         replication_origin
+         replication_origin
         Waiting to read or update the replication progress.
        
        
-         replication_slot_io
+         replication_slot_io
         Waiting for I/O on a replication slot.
        
        
-         proc
+         proc
         Waiting to read or update the fast-path lock information.
        
        
-         buffer_mapping
+         buffer_mapping
         Waiting to associate a data block with a buffer in the buffer pool.
        
        
-         lock_manager
+         lock_manager
         Waiting to add or examine locks for backends, or waiting to
          join or exit a locking group (used by parallel query).
        
        
-         predicate_lock_manager
+         predicate_lock_manager
         Waiting to add or examine predicate lock information.
        
        
-         parallel_query_dsa
+         parallel_query_dsa
         Waiting for parallel query dynamic shared memory allocation lock.
        
        
-         tbm
+         tbm
         Waiting for TBM shared iterator lock.
        
        
-        Lock
-         relation
+        Lock
+         relation
         Waiting to acquire a lock on a relation.
        
        
-         extend
+         extend
         Waiting to extend a relation.
        
        
-         page
+         page
         Waiting to acquire a lock on a page of a relation.
        
        
-         tuple
+         tuple
         Waiting to acquire a lock on a tuple.
        
        
-         transactionid
+         transactionid
         Waiting for a transaction to finish.
        
        
-         virtualxid
+         virtualxid
         Waiting to acquire a virtual xid lock.
        
        
-         speculative token
+         speculative token
         Waiting to acquire a speculative insertion lock.
        
        
-         object
+         object
         Waiting to acquire a lock on a non-relation database object.
        
        
-         userlock
+         userlock
         Waiting to acquire a user lock.
        
        
-         advisory
+         advisory
         Waiting to acquire an advisory user lock.
        
        
-        BufferPin
-         BufferPin
+        BufferPin
+         BufferPin
         Waiting to acquire a pin on a buffer.
        
        
-        Activity
-         ArchiverMain
+        Activity
+         ArchiverMain
         Waiting in main loop of the archiver process.
        
        
-         AutoVacuumMain
+         AutoVacuumMain
         Waiting in main loop of autovacuum launcher process.
        
        
-         BgWriterHibernate
+         BgWriterHibernate
         Waiting in background writer process, hibernating.
        
        
-         BgWriterMain
+         BgWriterMain
         Waiting in main loop of background writer process.
        
        
-         CheckpointerMain
+         CheckpointerMain
         Waiting in main loop of checkpointer process.
        
        
-         LogicalLauncherMain
+         LogicalLauncherMain
         Waiting in main loop of logical launcher process.
        
        
-         LogicalApplyMain
+         LogicalApplyMain
         Waiting in main loop of logical apply process.
        
        
-         PgStatMain
+         PgStatMain
         Waiting in main loop of the statistics collector process.
        
        
-         RecoveryWalAll
+         RecoveryWalAll
         Waiting for WAL from any kind of source (local, archive or stream)
          at recovery.
        
        
-         RecoveryWalStream
+         RecoveryWalStream
         Waiting for WAL from a stream at recovery.
        
        
-         SysLoggerMain
+         SysLoggerMain
         Waiting in main loop of syslogger process.
-         WalReceiverMain
+         WalReceiverMain
         Waiting in main loop of WAL receiver process.
        
        
-         WalSenderMain
+         WalSenderMain
         Waiting in main loop of WAL sender process.
        
        
-         WalWriterMain
+         WalWriterMain
         Waiting in main loop of WAL writer process.
        
        
-        Client
-         ClientRead
+        Client
+         ClientRead
         Waiting to read data from the client.
        
        
-         ClientWrite
+         ClientWrite
         Waiting to write data to the client.
        
        
-         LibPQWalReceiverConnect
+         LibPQWalReceiverConnect
         Waiting in WAL receiver to establish connection to remote server.
        
        
-         LibPQWalReceiverReceive
+         LibPQWalReceiverReceive
         Waiting in WAL receiver to receive data from remote server.
        
        
-         SSLOpenServer
+         SSLOpenServer
         Waiting for SSL while attempting connection.
        
        
-         WalReceiverWaitStart
+         WalReceiverWaitStart
         Waiting for startup process to send initial data for streaming
          replication.
        
        
-         WalSenderWaitForWAL
+         WalSenderWaitForWAL
         Waiting for WAL to be flushed in WAL sender process.
        
        
-         WalSenderWriteData
+         WalSenderWriteData
         Waiting for any activity when processing replies from WAL receiver
          in WAL sender process.
        
        
-        Extension
-         Extension
+        Extension
+         Extension
         Waiting in an extension.
        
        
-        IPC
-         BgWorkerShutdown
+        IPC
+         BgWorkerShutdown
         Waiting for background worker to shut down.
        
        
-         BgWorkerStartup
+         BgWorkerStartup
         Waiting for background worker to start up.
        
        
-         BtreePage
+         BtreePage
         Waiting for the page number needed to continue a parallel B-tree
          scan to become available.
        
        
-         ExecuteGather
-         Waiting for activity from child process when executing Gather node.
+         ExecuteGather
+         Waiting for activity from child process when executing Gather node.
        
        
-         LogicalSyncData
+         LogicalSyncData
         Waiting for logical replication remote server to send data for
          initial table synchronization.
        
        
-         LogicalSyncStateChange
+         LogicalSyncStateChange
         Waiting for logical replication remote server to change state.
        
        
-         MessageQueueInternal
+         MessageQueueInternal
         Waiting for another process to be attached to a shared message queue.
        
        
-         MessageQueuePutMessage
+         MessageQueuePutMessage
         Waiting to write a protocol message to a shared message queue.
        
        
-         MessageQueueReceive
+         MessageQueueReceive
         Waiting to receive bytes from a shared message queue.
        
        
-         MessageQueueSend
+         MessageQueueSend
         Waiting to send bytes to a shared message queue.
        
        
-         ParallelFinish
+         ParallelFinish
         Waiting for parallel workers to finish computing.
        
        
-         ParallelBitmapScan
+         ParallelBitmapScan
         Waiting for parallel bitmap scan to become initialized.
        
        
-         ProcArrayGroupUpdate
+         ProcArrayGroupUpdate
         Waiting for group leader to clear transaction id at transaction end.
        
        
-         ClogGroupUpdate
+         ClogGroupUpdate
         Waiting for group leader to update transaction status at
          transaction end.
        
        
-         ReplicationOriginDrop
+         ReplicationOriginDrop
         Waiting for a replication origin to become inactive to be dropped.
        
        
-         ReplicationSlotDrop
+         ReplicationSlotDrop
         Waiting for a replication slot to become inactive to be dropped.
        
        
-         SafeSnapshot
-         Waiting for a snapshot for a READ ONLY DEFERRABLE transaction.
+         SafeSnapshot
+         Waiting for a snapshot for a READ ONLY DEFERRABLE transaction.
        
        
-         SyncRep
+         SyncRep
         Waiting for confirmation from remote server during synchronous
          replication.
        
        
-        Timeout
-         BaseBackupThrottle
+        Timeout
+         BaseBackupThrottle
         Waiting during base backup when throttling activity.
        
        
-         PgSleep
-         Waiting in process that called pg_sleep.
+         PgSleep
+         Waiting in process that called pg_sleep.
        
        
-         RecoveryApplyDelay
+         RecoveryApplyDelay
         Waiting to apply WAL at recovery because it is delayed.
        
        
-        IO
-         BufFileRead
+        IO
+         BufFileRead
         Waiting for a read from a buffered file.
- BufFileWrite + BufFileWrite Waiting for a write to a buffered file. - ControlFileRead + ControlFileRead Waiting for a read from the control file. - ControlFileSync + ControlFileSync Waiting for the control file to reach stable storage. - ControlFileSyncUpdate + ControlFileSyncUpdate Waiting for an update to the control file to reach stable storage. - ControlFileWrite + ControlFileWrite Waiting for a write to the control file. - ControlFileWriteUpdate + ControlFileWriteUpdate Waiting for a write to update the control file. - CopyFileRead + CopyFileRead Waiting for a read during a file copy operation. - CopyFileWrite + CopyFileWrite Waiting for a write during a file copy operation. - DataFileExtend + DataFileExtend Waiting for a relation data file to be extended. - DataFileFlush + DataFileFlush Waiting for a relation data file to reach stable storage. - DataFileImmediateSync + DataFileImmediateSync Waiting for an immediate synchronization of a relation data file to stable storage. - DataFilePrefetch + DataFilePrefetch Waiting for an asynchronous prefetch from a relation data file. - DataFileRead + DataFileRead Waiting for a read from a relation data file. - DataFileSync + DataFileSync Waiting for changes to a relation data file to reach stable storage. - DataFileTruncate + DataFileTruncate Waiting for a relation data file to be truncated. - DataFileWrite + DataFileWrite Waiting for a write to a relation data file. - DSMFillZeroWrite + DSMFillZeroWrite Waiting to write zero bytes to a dynamic shared memory backing file. - LockFileAddToDataDirRead + LockFileAddToDataDirRead Waiting for a read while adding a line to the data directory lock file. - LockFileAddToDataDirSync + LockFileAddToDataDirSync Waiting for data to reach stable storage while adding a line to the data directory lock file. - LockFileAddToDataDirWrite + LockFileAddToDataDirWrite Waiting for a write while adding a line to the data directory lock file. - LockFileCreateRead + LockFileCreateRead Waiting to read while creating the data directory lock file. - LockFileCreateSync + LockFileCreateSync Waiting for data to reach stable storage while creating the data directory lock file. - LockFileCreateWrite + LockFileCreateWrite Waiting for a write while creating the data directory lock file. - LockFileReCheckDataDirRead + LockFileReCheckDataDirRead Waiting for a read during recheck of the data directory lock file. - LogicalRewriteCheckpointSync + LogicalRewriteCheckpointSync Waiting for logical rewrite mappings to reach stable storage during a checkpoint. - LogicalRewriteMappingSync + LogicalRewriteMappingSync Waiting for mapping data to reach stable storage during a logical rewrite. - LogicalRewriteMappingWrite + LogicalRewriteMappingWrite Waiting for a write of mapping data during a logical rewrite. - LogicalRewriteSync + LogicalRewriteSync Waiting for logical rewrite mappings to reach stable storage. - LogicalRewriteWrite + LogicalRewriteWrite Waiting for a write of logical rewrite mappings. - RelationMapRead + RelationMapRead Waiting for a read of the relation map file. - RelationMapSync + RelationMapSync Waiting for the relation map file to reach stable storage. - RelationMapWrite + RelationMapWrite Waiting for a write to the relation map file. - ReorderBufferRead + ReorderBufferRead Waiting for a read during reorder buffer management. - ReorderBufferWrite + ReorderBufferWrite Waiting for a write during reorder buffer management. 
- ReorderLogicalMappingRead + ReorderLogicalMappingRead Waiting for a read of a logical mapping during reorder buffer management. - ReplicationSlotRead + ReplicationSlotRead Waiting for a read from a replication slot control file. - ReplicationSlotRestoreSync + ReplicationSlotRestoreSync Waiting for a replication slot control file to reach stable storage while restoring it to memory. - ReplicationSlotSync + ReplicationSlotSync Waiting for a replication slot control file to reach stable storage. - ReplicationSlotWrite + ReplicationSlotWrite Waiting for a write to a replication slot control file. - SLRUFlushSync + SLRUFlushSync Waiting for SLRU data to reach stable storage during a checkpoint or database shutdown. - SLRURead + SLRURead Waiting for a read of an SLRU page. - SLRUSync + SLRUSync Waiting for SLRU data to reach stable storage following a page write. - SLRUWrite + SLRUWrite Waiting for a write of an SLRU page. - SnapbuildRead + SnapbuildRead Waiting for a read of a serialized historical catalog snapshot. - SnapbuildSync + SnapbuildSync Waiting for a serialized historical catalog snapshot to reach stable storage. - SnapbuildWrite + SnapbuildWrite Waiting for a write of a serialized historical catalog snapshot. - TimelineHistoryFileSync + TimelineHistoryFileSync Waiting for a timeline history file received via streaming replication to reach stable storage. - TimelineHistoryFileWrite + TimelineHistoryFileWrite Waiting for a write of a timeline history file received via streaming replication. - TimelineHistoryRead + TimelineHistoryRead Waiting for a read of a timeline history file. - TimelineHistorySync + TimelineHistorySync Waiting for a newly created timeline history file to reach stable storage. - TimelineHistoryWrite + TimelineHistoryWrite Waiting for a write of a newly created timeline history file. - TwophaseFileRead + TwophaseFileRead Waiting for a read of a two phase state file. - TwophaseFileSync + TwophaseFileSync Waiting for a two phase state file to reach stable storage. - TwophaseFileWrite + TwophaseFileWrite Waiting for a write of a two phase state file. - WALBootstrapSync + WALBootstrapSync Waiting for WAL to reach stable storage during bootstrapping. - WALBootstrapWrite + WALBootstrapWrite Waiting for a write of a WAL page during bootstrapping. - WALCopyRead + WALCopyRead Waiting for a read when creating a new WAL segment by copying an existing one. - WALCopySync + WALCopySync Waiting a new WAL segment created by copying an existing one to reach stable storage. - WALCopyWrite + WALCopyWrite Waiting for a write when creating a new WAL segment by copying an existing one. - WALInitSync + WALInitSync Waiting for a newly initialized WAL file to reach stable storage. - WALInitWrite + WALInitWrite Waiting for a write while initializing a new WAL file. - WALRead + WALRead Waiting for a read from a WAL file. - WALSenderTimelineHistoryRead + WALSenderTimelineHistoryRead Waiting for a read from a timeline history file during walsender timeline command. - WALSyncMethodAssign + WALSyncMethodAssign Waiting for data to reach stable storage while assigning WAL sync method. - WALWrite + WALWrite Waiting for a write to a WAL file. @@ -1615,10 +1615,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser For tranches registered by extensions, the name is specified by extension - and this will be displayed as wait_event. It is quite + and this will be displayed as wait_event. 
It is quite possible that user has registered the tranche in one of the backends (by having allocation in dynamic shared memory) in which case other backends - won't have that information, so we display extension for such + won't have that information, so we display extension for such cases. @@ -1649,53 +1649,53 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - pid - integer + pid + integer Process ID of a WAL sender process - usesysid - oid + usesysid + oid OID of the user logged into this WAL sender process - usename - name + usename + name Name of the user logged into this WAL sender process - application_name - text + application_name + text Name of the application that is connected to this WAL sender - client_addr - inet + client_addr + inet IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine. - client_hostname - text + client_hostname + text Host name of the connected client, as reported by a - reverse DNS lookup of client_addr. This field will + reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when is enabled. - client_port - integer + client_port + integer TCP port number that the client is using for communication - with this WAL sender, or -1 if a Unix socket is used + with this WAL sender, or -1 if a Unix socket is used - backend_start - timestamp with time zone + backend_start + timestamp with time zone Time when this process was started, i.e., when the client connected to this WAL sender @@ -1703,71 +1703,71 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i backend_xmin xid - This standby's xmin horizon reported + This standby's xmin horizon reported by . - state - text + state + text Current WAL sender state. Possible values are: - startup: This WAL sender is starting up. + startup: This WAL sender is starting up. - catchup: This WAL sender's connected standby is + catchup: This WAL sender's connected standby is catching up with the primary. - streaming: This WAL sender is streaming changes + streaming: This WAL sender is streaming changes after its connected standby server has caught up with the primary. - backup: This WAL sender is sending a backup. + backup: This WAL sender is sending a backup. - stopping: This WAL sender is stopping. + stopping: This WAL sender is stopping. - sent_lsn - pg_lsn + sent_lsn + pg_lsn Last write-ahead log location sent on this connection - write_lsn - pg_lsn + write_lsn + pg_lsn Last write-ahead log location written to disk by this standby server - flush_lsn - pg_lsn + flush_lsn + pg_lsn Last write-ahead log location flushed to disk by this standby server - replay_lsn - pg_lsn + replay_lsn + pg_lsn Last write-ahead log location replayed into the database on this standby server - write_lag - interval + write_lag + interval Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that @@ -1776,8 +1776,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i server was configured as a synchronous standby. - flush_lag - interval + flush_lag + interval Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). 
This can be used to gauge the delay that @@ -1786,8 +1786,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i server was configured as a synchronous standby. - replay_lag - interval + replay_lag + interval Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. This can be used to gauge the delay that @@ -1796,38 +1796,38 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i server was configured as a synchronous standby. - sync_priority - integer + sync_priority + integer Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication. - sync_state - text + sync_state + text Synchronous state of this standby server. Possible values are: - async: This standby server is asynchronous. + async: This standby server is asynchronous. - potential: This standby server is now asynchronous, + potential: This standby server is now asynchronous, but can potentially become synchronous if one of current synchronous ones fails. - sync: This standby server is synchronous. + sync: This standby server is synchronous. - quorum: This standby server is considered as a candidate + quorum: This standby server is considered as a candidate for quorum standbys. @@ -1897,69 +1897,69 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - pid - integer + pid + integer Process ID of the WAL receiver process - status - text + status + text Activity status of the WAL receiver process - receive_start_lsn - pg_lsn + receive_start_lsn + pg_lsn First write-ahead log location used when WAL receiver is started - receive_start_tli - integer + receive_start_tli + integer First timeline number used when WAL receiver is started - received_lsn - pg_lsn + received_lsn + pg_lsn Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started - received_tli - integer + received_tli + integer Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started - last_msg_send_time - timestamp with time zone + last_msg_send_time + timestamp with time zone Send time of last message received from origin WAL sender - last_msg_receipt_time - timestamp with time zone + last_msg_receipt_time + timestamp with time zone Receipt time of last message received from origin WAL sender - latest_end_lsn - pg_lsn + latest_end_lsn + pg_lsn Last write-ahead log location reported to origin WAL sender - latest_end_time - timestamp with time zone + latest_end_time + timestamp with time zone Time of last write-ahead log location reported to origin WAL sender - slot_name - text + slot_name + text Replication slot name used by this WAL receiver - conninfo - text + conninfo + text Connection string used by this WAL receiver, with security-sensitive fields obfuscated. 
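      As an illustration of how these views are consulted in practice,
      the following minimal sketch (using only the
      pg_stat_replication columns described above) reports each
      connected standby together with the three lag measurements:

SELECT application_name, state, sync_state,
       write_lag, flush_lag, replay_lag
FROM pg_stat_replication;

      A standby whose replay_lag is consistently much larger than its
      flush_lag is receiving and flushing WAL promptly but falling
      behind on applying it.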
@@ -1988,52 +1988,52 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - subid - oid + subid + oid OID of the subscription - subname - text + subname + text Name of the subscription - pid - integer + pid + integer Process ID of the subscription worker process - relid - Oid + relid + Oid OID of the relation that the worker is synchronizing; null for the main apply worker - received_lsn - pg_lsn + received_lsn + pg_lsn Last write-ahead log location received, the initial value of this field being 0 - last_msg_send_time - timestamp with time zone + last_msg_send_time + timestamp with time zone Send time of last message received from origin WAL sender - last_msg_receipt_time - timestamp with time zone + last_msg_receipt_time + timestamp with time zone Receipt time of last message received from origin WAL sender - latest_end_lsn - pg_lsn + latest_end_lsn + pg_lsn Last write-ahead log location reported to origin WAL sender - latest_end_time - timestamp with time zone + latest_end_time + timestamp with time zone Time of last write-ahead log location reported to origin WAL sender @@ -2061,42 +2061,42 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - pid - integer + pid + integer Process ID of a backend or WAL sender process - ssl - boolean + ssl + boolean True if SSL is used on this connection - version - text + version + text Version of SSL in use, or NULL if SSL is not in use on this connection - cipher - text + cipher + text Name of SSL cipher in use, or NULL if SSL is not in use on this connection - bits - integer + bits + integer Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection - compression - boolean + compression + boolean True if SSL compression is in use, false if not, or NULL if SSL is not in use on this connection - clientdn - text + clientdn + text Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. 
This field is truncated if the @@ -2132,37 +2132,37 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - archived_count + archived_count bigint Number of WAL files that have been successfully archived - last_archived_wal + last_archived_wal text Name of the last WAL file successfully archived - last_archived_time + last_archived_time timestamp with time zone Time of the last successful archive operation - failed_count + failed_count bigint Number of failed attempts for archiving WAL files - last_failed_wal + last_failed_wal text Name of the WAL file of the last failed archival operation - last_failed_time + last_failed_time timestamp with time zone Time of the last failed archival operation - stats_reset + stats_reset timestamp with time zone Time at which these statistics were last reset @@ -2189,17 +2189,17 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - checkpoints_timed + checkpoints_timed bigint Number of scheduled checkpoints that have been performed - checkpoints_req + checkpoints_req bigint Number of requested checkpoints that have been performed - checkpoint_write_time + checkpoint_write_time double precision Total amount of time that has been spent in the portion of @@ -2207,7 +2207,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - checkpoint_sync_time + checkpoint_sync_time double precision Total amount of time that has been spent in the portion of @@ -2216,40 +2216,40 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - buffers_checkpoint + buffers_checkpoint bigint Number of buffers written during checkpoints - buffers_clean + buffers_clean bigint Number of buffers written by the background writer - maxwritten_clean + maxwritten_clean bigint Number of times the background writer stopped a cleaning scan because it had written too many buffers - buffers_backend + buffers_backend bigint Number of buffers written directly by a backend - buffers_backend_fsync + buffers_backend_fsync bigint Number of times a backend had to execute its own - fsync call (normally the background writer handles those + fsync call (normally the background writer handles those even when the backend does its own write) - buffers_alloc + buffers_alloc bigint Number of buffers allocated - stats_reset + stats_reset timestamp with time zone Time at which these statistics were last reset @@ -2275,84 +2275,84 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - datid - oid + datid + oid OID of a database - datname - name + datname + name Name of this database - numbackends - integer + numbackends + integer Number of backends currently connected to this database. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset. 
- xact_commit - bigint + xact_commit + bigint Number of transactions in this database that have been committed - xact_rollback - bigint + xact_rollback + bigint Number of transactions in this database that have been rolled back - blks_read - bigint + blks_read + bigint Number of disk blocks read in this database - blks_hit - bigint + blks_hit + bigint Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache) - tup_returned - bigint + tup_returned + bigint Number of rows returned by queries in this database - tup_fetched - bigint + tup_fetched + bigint Number of rows fetched by queries in this database - tup_inserted - bigint + tup_inserted + bigint Number of rows inserted by queries in this database - tup_updated - bigint + tup_updated + bigint Number of rows updated by queries in this database - tup_deleted - bigint + tup_deleted + bigint Number of rows deleted by queries in this database - conflicts - bigint + conflicts + bigint Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see for details.) - temp_files - bigint + temp_files + bigint Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the @@ -2360,8 +2360,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - temp_bytes - bigint + temp_bytes + bigint Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and @@ -2369,25 +2369,25 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - deadlocks - bigint + deadlocks + bigint Number of deadlocks detected in this database - blk_read_time - double precision + blk_read_time + double precision Time spent reading data file blocks by backends in this database, in milliseconds - blk_write_time - double precision + blk_write_time + double precision Time spent writing data file blocks by backends in this database, in milliseconds - stats_reset - timestamp with time zone + stats_reset + timestamp with time zone Time at which these statistics were last reset @@ -2412,42 +2412,42 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - datid - oid + datid + oid OID of a database - datname - name + datname + name Name of this database - confl_tablespace - bigint + confl_tablespace + bigint Number of queries in this database that have been canceled due to dropped tablespaces - confl_lock - bigint + confl_lock + bigint Number of queries in this database that have been canceled due to lock timeouts - confl_snapshot - bigint + confl_snapshot + bigint Number of queries in this database that have been canceled due to old snapshots - confl_bufferpin - bigint + confl_bufferpin + bigint Number of queries in this database that have been canceled due to pinned buffers - confl_deadlock - bigint + confl_deadlock + bigint Number of queries in this database that have been canceled due to deadlocks @@ -2476,119 +2476,119 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of a table - schemaname - name + schemaname + name Name of the schema that this table is in - relname - name + relname + name Name of this 
table - seq_scan - bigint + seq_scan + bigint Number of sequential scans initiated on this table - seq_tup_read - bigint + seq_tup_read + bigint Number of live rows fetched by sequential scans - idx_scan - bigint + idx_scan + bigint Number of index scans initiated on this table - idx_tup_fetch - bigint + idx_tup_fetch + bigint Number of live rows fetched by index scans - n_tup_ins - bigint + n_tup_ins + bigint Number of rows inserted - n_tup_upd - bigint + n_tup_upd + bigint Number of rows updated (includes HOT updated rows) - n_tup_del - bigint + n_tup_del + bigint Number of rows deleted - n_tup_hot_upd - bigint + n_tup_hot_upd + bigint Number of rows HOT updated (i.e., with no separate index update required) - n_live_tup - bigint + n_live_tup + bigint Estimated number of live rows - n_dead_tup - bigint + n_dead_tup + bigint Estimated number of dead rows - n_mod_since_analyze - bigint + n_mod_since_analyze + bigint Estimated number of rows modified since this table was last analyzed - last_vacuum - timestamp with time zone + last_vacuum + timestamp with time zone Last time at which this table was manually vacuumed - (not counting VACUUM FULL) + (not counting VACUUM FULL) - last_autovacuum - timestamp with time zone + last_autovacuum + timestamp with time zone Last time at which this table was vacuumed by the autovacuum daemon - last_analyze - timestamp with time zone + last_analyze + timestamp with time zone Last time at which this table was manually analyzed - last_autoanalyze - timestamp with time zone + last_autoanalyze + timestamp with time zone Last time at which this table was analyzed by the autovacuum daemon - vacuum_count - bigint + vacuum_count + bigint Number of times this table has been manually vacuumed - (not counting VACUUM FULL) + (not counting VACUUM FULL) - autovacuum_count - bigint + autovacuum_count + bigint Number of times this table has been vacuumed by the autovacuum daemon - analyze_count - bigint + analyze_count + bigint Number of times this table has been manually analyzed - autoanalyze_count - bigint + autoanalyze_count + bigint Number of times this table has been analyzed by the autovacuum daemon @@ -2619,43 +2619,43 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of the table for this index - indexrelid - oid + indexrelid + oid OID of this index - schemaname - name + schemaname + name Name of the schema this index is in - relname - name + relname + name Name of the table for this index - indexrelname - name + indexrelname + name Name of this index - idx_scan - bigint + idx_scan + bigint Number of index scans initiated on this index - idx_tup_read - bigint + idx_tup_read + bigint Number of index entries returned by scans on this index - idx_tup_fetch - bigint + idx_tup_fetch + bigint Number of live table rows fetched by simple index scans using this index @@ -2674,17 +2674,17 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - Indexes can be used by simple index scans, bitmap index scans, + Indexes can be used by simple index scans, bitmap index scans, and the optimizer. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. 
Therefore, a bitmap scan increments the - pg_stat_all_indexes.idx_tup_read + pg_stat_all_indexes.idx_tup_read count(s) for the index(es) it uses, and it increments the - pg_stat_all_tables.idx_tup_fetch + pg_stat_all_tables.idx_tup_fetch count for the table, but it does not affect - pg_stat_all_indexes.idx_tup_fetch. + pg_stat_all_indexes.idx_tup_fetch. The optimizer also accesses indexes to check for supplied constants whose values are outside the recorded range of the optimizer statistics because the optimizer statistics might be stale. @@ -2692,10 +2692,10 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - The idx_tup_read and idx_tup_fetch counts + The idx_tup_read and idx_tup_fetch counts can be different even without any use of bitmap scans, - because idx_tup_read counts - index entries retrieved from the index while idx_tup_fetch + because idx_tup_read counts + index entries retrieved from the index while idx_tup_fetch counts live rows fetched from the table. The latter will be less if any dead or not-yet-committed rows are fetched using the index, or if any heap fetches are avoided by means of an index-only scan. @@ -2715,58 +2715,58 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of a table - schemaname - name + schemaname + name Name of the schema that this table is in - relname - name + relname + name Name of this table - heap_blks_read - bigint + heap_blks_read + bigint Number of disk blocks read from this table - heap_blks_hit - bigint + heap_blks_hit + bigint Number of buffer hits in this table - idx_blks_read - bigint + idx_blks_read + bigint Number of disk blocks read from all indexes on this table - idx_blks_hit - bigint + idx_blks_hit + bigint Number of buffer hits in all indexes on this table - toast_blks_read - bigint + toast_blks_read + bigint Number of disk blocks read from this table's TOAST table (if any) - toast_blks_hit - bigint + toast_blks_hit + bigint Number of buffer hits in this table's TOAST table (if any) - tidx_blks_read - bigint + tidx_blks_read + bigint Number of disk blocks read from this table's TOAST table indexes (if any) - tidx_blks_hit - bigint + tidx_blks_hit + bigint Number of buffer hits in this table's TOAST table indexes (if any) @@ -2796,38 +2796,38 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of the table for this index - indexrelid - oid + indexrelid + oid OID of this index - schemaname - name + schemaname + name Name of the schema this index is in - relname - name + relname + name Name of the table for this index - indexrelname - name + indexrelname + name Name of this index - idx_blks_read - bigint + idx_blks_read + bigint Number of disk blocks read from this index - idx_blks_hit - bigint + idx_blks_hit + bigint Number of buffer hits in this index @@ -2857,28 +2857,28 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - relid - oid + relid + oid OID of a sequence - schemaname - name + schemaname + name Name of the schema this sequence is in - relname - name + relname + name Name of this sequence - blks_read - bigint + blks_read + bigint Number of disk blocks read from this sequence - blks_hit - bigint + blks_hit + bigint Number of buffer hits in this sequence @@ -2904,34 +2904,34 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i - funcid - oid + funcid + oid OID of a function - schemaname - name + 
schemaname + name Name of the schema this function is in - funcname - name + funcname + name Name of this function - calls - bigint + calls + bigint Number of times this function has been called - total_time - double precision + total_time + double precision Total time spent in this function and all other functions called by it, in milliseconds - self_time - double precision + self_time + double precision Total time spent in this function itself, not including other functions called by it, in milliseconds @@ -2956,7 +2956,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i queries that use the same underlying statistics access functions used by the standard views shown above. For details such as the functions' names, consult the definitions of the standard views. (For example, in - psql you could issue \d+ pg_stat_activity.) + psql you could issue \d+ pg_stat_activity.) The access functions for per-database statistics take a database OID as an argument to identify which database to report on. The per-table and per-index functions take a table or index OID. @@ -3037,10 +3037,10 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i Reset some cluster-wide statistics counters to zero, depending on the argument (requires superuser privileges by default, but EXECUTE for this function can be granted to others). - Calling pg_stat_reset_shared('bgwriter') will zero all the - counters shown in the pg_stat_bgwriter view. - Calling pg_stat_reset_shared('archiver') will zero all the - counters shown in the pg_stat_archiver view. + Calling pg_stat_reset_shared('bgwriter') will zero all the + counters shown in the pg_stat_bgwriter view. + Calling pg_stat_reset_shared('archiver') will zero all the + counters shown in the pg_stat_archiver view. @@ -3069,7 +3069,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i pg_stat_get_activity, the underlying function of - the pg_stat_activity view, returns a set of records + the pg_stat_activity view, returns a set of records containing all the available information about each backend process. Sometimes it may be more convenient to obtain just a subset of this information. In such cases, an older set of per-backend statistics @@ -3079,7 +3079,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i to the number of currently active backends. The function pg_stat_get_backend_idset provides a convenient way to generate one row for each active backend for - invoking these functions. For example, to show the PIDs and + invoking these functions. For example, to show the PIDs and current queries of all backends: @@ -3113,7 +3113,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, pg_stat_get_backend_activity(integer) text - Text of this backend's most recent query + Text of this backend's most recent query @@ -3240,9 +3240,9 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, Progress Reporting - PostgreSQL has the ability to report the progress of + PostgreSQL has the ability to report the progress of certain commands during command execution. Currently, the only command - which supports progress reporting is VACUUM. This may be + which supports progress reporting is VACUUM. This may be expanded in the future. 
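 As a minimal sketch of what this looks like in practice (assuming a vacuum is currently running somewhere in the cluster), the view described in the next subsection can be queried like any other; the percentage column below is derived for illustration only and is not a column of the view:

SELECT pid, relid::regclass AS vacuumed_table, phase,
       round(100.0 * heap_blks_scanned / nullif(heap_blks_total, 0), 1)
           AS pct_scanned   -- derived here, not part of the view
FROM pg_stat_progress_vacuum;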
@@ -3250,13 +3250,13 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, VACUUM Progress Reporting - Whenever VACUUM is running, the + Whenever VACUUM is running, the pg_stat_progress_vacuum view will contain one row for each backend (including autovacuum worker processes) that is currently vacuuming. The tables below describe the information that will be reported and provide information about how to interpret it. - Progress reporting is not currently supported for VACUUM FULL - and backends running VACUUM FULL will not be listed in this + Progress reporting is not currently supported for VACUUM FULL + and backends running VACUUM FULL will not be listed in this view. @@ -3273,73 +3273,73 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, - pid - integer + pid + integer Process ID of backend. - datid - oid + datid + oid OID of the database to which this backend is connected. - datname - name + datname + name Name of the database to which this backend is connected. - relid - oid + relid + oid OID of the table being vacuumed. - phase - text + phase + text Current processing phase of vacuum. See . - heap_blks_total - bigint + heap_blks_total + bigint Total number of heap blocks in the table. This number is reported as of the beginning of the scan; blocks added later will not be (and - need not be) visited by this VACUUM. + need not be) visited by this VACUUM. - heap_blks_scanned - bigint + heap_blks_scanned + bigint Number of heap blocks scanned. Because the - visibility map is used to optimize scans, + visibility map is used to optimize scans, some blocks will be skipped without inspection; skipped blocks are included in this total, so that this number will eventually become - equal to heap_blks_total when the vacuum is complete. - This counter only advances when the phase is scanning heap. + equal to heap_blks_total when the vacuum is complete. + This counter only advances when the phase is scanning heap. - heap_blks_vacuumed - bigint + heap_blks_vacuumed + bigint Number of heap blocks vacuumed. Unless the table has no indexes, this - counter only advances when the phase is vacuuming heap. + counter only advances when the phase is vacuuming heap. Blocks that contain no dead tuples are skipped, so the counter may sometimes skip forward in large increments. - index_vacuum_count - bigint + index_vacuum_count + bigint Number of completed index vacuum cycles. - max_dead_tuples - bigint + max_dead_tuples + bigint Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on @@ -3347,8 +3347,8 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, - num_dead_tuples - bigint + num_dead_tuples + bigint Number of dead tuples collected since the last index vacuum cycle. @@ -3371,23 +3371,23 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, initializing - VACUUM is preparing to begin scanning the heap. This + VACUUM is preparing to begin scanning the heap. This phase is expected to be very brief. scanning heap - VACUUM is currently scanning the heap. It will prune and + VACUUM is currently scanning the heap. It will prune and defragment each page if required, and possibly perform freezing - activity. The heap_blks_scanned column can be used + activity. The heap_blks_scanned column can be used to monitor the progress of the scan. vacuuming indexes - VACUUM is currently vacuuming the indexes. If a table has + VACUUM is currently vacuuming the indexes. 
If a table has any indexes, this will happen at least once per vacuum, after the heap has been completely scanned. It may happen multiple times per vacuum if is insufficient to @@ -3397,10 +3397,10 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, vacuuming heap - VACUUM is currently vacuuming the heap. Vacuuming the heap + VACUUM is currently vacuuming the heap. Vacuuming the heap is distinct from scanning the heap, and occurs after each instance of - vacuuming indexes. If heap_blks_scanned is less than - heap_blks_total, the system will return to scanning + vacuuming indexes. If heap_blks_scanned is less than + heap_blks_total, the system will return to scanning the heap after this phase is completed; otherwise, it will begin cleaning up indexes after this phase is completed. @@ -3408,7 +3408,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, cleaning up indexes - VACUUM is currently cleaning up indexes. This occurs after + VACUUM is currently cleaning up indexes. This occurs after the heap has been completely scanned and all vacuuming of the indexes and the heap has been completed. @@ -3416,7 +3416,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, truncating heap - VACUUM is currently truncating the heap so as to return + VACUUM is currently truncating the heap so as to return empty pages at the end of the relation to the operating system. This occurs after cleaning up indexes. @@ -3424,10 +3424,10 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, performing final cleanup - VACUUM is performing final cleanup. During this phase, - VACUUM will vacuum the free space map, update statistics - in pg_class, and report statistics to the statistics - collector. When this phase is completed, VACUUM will end. + VACUUM is performing final cleanup. During this phase, + VACUUM will vacuum the free space map, update statistics + in pg_class, and report statistics to the statistics + collector. When this phase is completed, VACUUM will end. @@ -3467,7 +3467,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, SystemTap project for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic tracing utilities is theoretically possible by changing the definitions for - the macros in src/include/utils/probes.h. + the macros in src/include/utils/probes.h. @@ -3477,7 +3477,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, By default, probes are not available, so you will need to explicitly tell the configure script to make the probes available in PostgreSQL. To include DTrace support - specify to configure. See for further information. @@ -3490,7 +3490,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, as shown in ; shows the types used in the probes. More probes can certainly be - added to enhance PostgreSQL's observability. + added to enhance PostgreSQL's observability.
@@ -3584,7 +3584,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, statement-status(const char *)Probe that fires anytime the server process updates its - pg_stat_activity.status. + pg_stat_activity.status. arg0 is the new status string. @@ -3978,7 +3978,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, The example below shows a DTrace script for analyzing transaction counts in the system, as an alternative to snapshotting - pg_stat_database before and after a performance test: + pg_stat_database before and after a performance test: #!/usr/sbin/dtrace -qs @@ -4050,15 +4050,15 @@ Total time (ns) 2312105013 - Add the probe definitions to src/backend/utils/probes.d + Add the probe definitions to src/backend/utils/probes.d - Include pg_trace.h if it is not already present in the + Include pg_trace.h if it is not already present in the module(s) containing the probe points, and insert - TRACE_POSTGRESQL probe macros at the desired locations + TRACE_POSTGRESQL probe macros at the desired locations in the source code @@ -4081,30 +4081,30 @@ Total time (ns) 2312105013 - Decide that the probe will be named transaction-start and + Decide that the probe will be named transaction-start and requires a parameter of type LocalTransactionId - Add the probe definition to src/backend/utils/probes.d: + Add the probe definition to src/backend/utils/probes.d: probe transaction__start(LocalTransactionId); Note the use of the double underline in the probe name. In a DTrace script using the probe, the double underline needs to be replaced with a - hyphen, so transaction-start is the name to document for + hyphen, so transaction-start is the name to document for users. - At compile time, transaction__start is converted to a macro - called TRACE_POSTGRESQL_TRANSACTION_START (notice the + At compile time, transaction__start is converted to a macro + called TRACE_POSTGRESQL_TRANSACTION_START (notice the underscores are single here), which is available by including - pg_trace.h. Add the macro call to the appropriate location + pg_trace.h. Add the macro call to the appropriate location in the source code. In this case, it looks like the following: @@ -4148,9 +4148,9 @@ TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId); On most platforms, if PostgreSQL is - built with , the arguments to a trace macro will be evaluated whenever control passes through the - macro, even if no tracing is being done. This is + macro, even if no tracing is being done. This is usually not worth worrying about if you are just reporting the values of a few local variables. But beware of putting expensive function calls into the arguments. If you need to do that, @@ -4162,7 +4162,7 @@ if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED()) TRACE_POSTGRESQL_TRANSACTION_START(some_function(...)); - Each trace macro has a corresponding ENABLED macro. + Each trace macro has a corresponding ENABLED macro. diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml index dda0170886..75cb39359f 100644 --- a/doc/src/sgml/mvcc.sgml +++ b/doc/src/sgml/mvcc.sgml @@ -279,7 +279,7 @@ The table also shows that PostgreSQL's Repeatable Read implementation does not allow phantom reads. Stricter behavior is permitted by the SQL standard: the four isolation levels only define which phenomena - must not happen, not which phenomena must happen. + must not happen, not which phenomena must happen. The behavior of the available isolation levels is detailed in the following subsections. 
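 For orientation, a brief illustrative sketch of how a level is selected for an individual transaction:

BEGIN ISOLATION LEVEL REPEATABLE READ;
-- all statements in this transaction see one snapshot, per the rules below
SELECT count(*) FROM pg_class;
COMMIT;

 SET TRANSACTION ISOLATION LEVEL SERIALIZABLE works equivalently, provided it is issued before the first query or data-modification statement of the transaction.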
@@ -317,7 +317,7 @@ Read Committed is the default isolation level in PostgreSQL. When a transaction uses this isolation level, a SELECT query - (without a FOR UPDATE/SHARE clause) sees only data + (without a FOR UPDATE/SHARE clause) sees only data committed before the query began; it never sees either uncommitted data or changes committed during query execution by concurrent transactions. In effect, a SELECT query sees @@ -345,7 +345,7 @@ updating the originally found row. If the first updater commits, the second updater will ignore the row if the first updater deleted it, otherwise it will attempt to apply its operation to the updated version of - the row. The search condition of the command (the WHERE clause) is + the row. The search condition of the command (the WHERE clause) is re-evaluated to see if the updated version of the row still matches the search condition. If so, the second updater proceeds with its operation using the updated version of the row. In the case of @@ -355,19 +355,19 @@ - INSERT with an ON CONFLICT DO UPDATE clause + INSERT with an ON CONFLICT DO UPDATE clause behaves similarly. In Read Committed mode, each row proposed for insertion will either insert or update. Unless there are unrelated errors, one of those two outcomes is guaranteed. If a conflict originates in another transaction whose effects are not yet visible to the INSERT , the UPDATE clause will affect that row, - even though possibly no version of that row is + even though possibly no version of that row is conventionally visible to the command. INSERT with an ON CONFLICT DO - NOTHING clause may have insertion not proceed for a row due to + NOTHING clause may have insertion not proceed for a row due to the outcome of another transaction whose effects are not visible to the INSERT snapshot. Again, this is only the case in Read Committed mode. @@ -416,10 +416,10 @@ COMMIT; The DELETE will have no effect even though there is a website.hits = 10 row before and after the UPDATE. This occurs because the - pre-update row value 9 is skipped, and when the + pre-update row value 9 is skipped, and when the UPDATE completes and DELETE - obtains a lock, the new row value is no longer 10 but - 11, which no longer matches the criteria. + obtains a lock, the new row value is no longer 10 but + 11, which no longer matches the criteria. @@ -427,7 +427,7 @@ COMMIT; that includes all transactions committed up to that instant, subsequent commands in the same transaction will see the effects of the committed concurrent transaction in any case. The point - at issue above is whether or not a single command + at issue above is whether or not a single command sees an absolutely consistent view of the database. @@ -472,9 +472,9 @@ COMMIT; This level is different from Read Committed in that a query in a repeatable read transaction sees a snapshot as of the start of the first non-transaction-control statement in the - transaction, not as of the start + transaction, not as of the start of the current statement within the transaction. Thus, successive - SELECT commands within a single + SELECT commands within a single transaction see the same data, i.e., they do not see changes made by other transactions that committed after their own transaction started. 
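 A two-session sketch makes this concrete (accounts here is a hypothetical table with acctnum and balance columns; session 2 runs in autocommit):

-- session 1
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT sum(balance) FROM accounts;   -- the snapshot is taken by this first query

-- session 2
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 11111;

-- session 1
SELECT sum(balance) FROM accounts;   -- same answer as before, despite the committed update
COMMIT;
SELECT sum(balance) FROM accounts;   -- now reflects session 2's change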
@@ -587,7 +587,7 @@ ERROR: could not serialize access due to concurrent update As an example, - consider a table mytab, initially containing: + consider a table mytab, initially containing: class | value -------+------- @@ -600,14 +600,14 @@ ERROR: could not serialize access due to concurrent update SELECT SUM(value) FROM mytab WHERE class = 1; - and then inserts the result (30) as the value in a - new row with class = 2. Concurrently, serializable + and then inserts the result (30) as the value in a + new row with class = 2. Concurrently, serializable transaction B computes: SELECT SUM(value) FROM mytab WHERE class = 2; and obtains the result 300, which it inserts in a new row with - class = 1. Then both transactions try to commit. + class = 1. Then both transactions try to commit. If either transaction were running at the Repeatable Read isolation level, both would be allowed to commit; but since there is no serial order of execution consistent with the result, using Serializable transactions will allow one @@ -639,11 +639,11 @@ ERROR: could not serialize access due to read/write dependencies among transact To guarantee true serializability PostgreSQL - uses predicate locking, which means that it keeps locks + uses predicate locking, which means that it keeps locks which allow it to determine when a write would have had an impact on the result of a previous read from a concurrent transaction, had it run first. In PostgreSQL these locks do not - cause any blocking and therefore can not play any part in + cause any blocking and therefore can not play any part in causing a deadlock. They are used to identify and flag dependencies among concurrent Serializable transactions which in certain combinations can lead to serialization anomalies. In contrast, a Read Committed or @@ -659,20 +659,20 @@ ERROR: could not serialize access due to read/write dependencies among transact other database systems, are based on data actually accessed by a transaction. These will show up in the pg_locks - system view with a mode of SIReadLock. The + system view with a mode of SIReadLock. The particular locks acquired during execution of a query will depend on the plan used by the query, and multiple finer-grained locks (e.g., tuple locks) may be combined into fewer coarser-grained locks (e.g., page locks) during the course of the transaction to prevent exhaustion of the memory used to - track the locks. A READ ONLY transaction may be able to + track the locks. A READ ONLY transaction may be able to release its SIRead locks before completion, if it detects that no conflicts can still occur which could lead to a serialization anomaly. - In fact, READ ONLY transactions will often be able to + In fact, READ ONLY transactions will often be able to establish that fact at startup and avoid taking any predicate locks. - If you explicitly request a SERIALIZABLE READ ONLY DEFERRABLE + If you explicitly request a SERIALIZABLE READ ONLY DEFERRABLE transaction, it will block until it can establish this fact. (This is - the only case where Serializable transactions block but + the only case where Serializable transactions block but Repeatable Read transactions don't.) On the other hand, SIRead locks often need to be kept past transaction commit, until overlapping read write transactions complete. @@ -695,13 +695,13 @@ ERROR: could not serialize access due to read/write dependencies among transact anomalies. 
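 As an illustrative aside, those predicate locks can be observed while the owning transactions are still open, for example:

SELECT locktype, relation::regclass AS relation, page, tuple
FROM pg_locks
WHERE mode = 'SIReadLock';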
The monitoring of read/write dependencies has a cost, as does the restart of transactions which are terminated with a serialization failure, but balanced against the cost and blocking involved in use of - explicit locks and SELECT FOR UPDATE or SELECT FOR - SHARE, Serializable transactions are the best performance choice + explicit locks and SELECT FOR UPDATE or SELECT FOR + SHARE, Serializable transactions are the best performance choice for some environments. - While PostgreSQL's Serializable transaction isolation + While PostgreSQL's Serializable transaction isolation level only allows concurrent transactions to commit if it can prove there is a serial order of execution that would produce the same effect, it doesn't always prevent errors from being raised that would not occur in @@ -709,7 +709,7 @@ ERROR: could not serialize access due to read/write dependencies among transact constraint violations caused by conflicts with overlapping Serializable transactions even after explicitly checking that the key isn't present before attempting to insert it. This can be avoided by making sure - that all Serializable transactions that insert potentially + that all Serializable transactions that insert potentially conflicting keys explicitly check if they can do so first. For example, imagine an application that asks the user for a new key and then checks that it doesn't exist already by trying to select it first, or generates @@ -727,7 +727,7 @@ ERROR: could not serialize access due to read/write dependencies among transact - Declare transactions as READ ONLY when possible. + Declare transactions as READ ONLY when possible. @@ -754,8 +754,8 @@ ERROR: could not serialize access due to read/write dependencies among transact - Eliminate explicit locks, SELECT FOR UPDATE, and - SELECT FOR SHARE where no longer needed due to the + Eliminate explicit locks, SELECT FOR UPDATE, and + SELECT FOR SHARE where no longer needed due to the protections automatically provided by Serializable transactions. @@ -801,7 +801,7 @@ ERROR: could not serialize access due to read/write dependencies among transact most PostgreSQL commands automatically acquire locks of appropriate modes to ensure that referenced tables are not dropped or modified in incompatible ways while the - command executes. (For example, TRUNCATE cannot safely be + command executes. (For example, TRUNCATE cannot safely be executed concurrently with other operations on the same table, so it obtains an exclusive lock on the table to enforce that.) @@ -860,7 +860,7 @@ ERROR: could not serialize access due to read/write dependencies among transact The SELECT command acquires a lock of this mode on - referenced tables. In general, any query that only reads a table + referenced tables. In general, any query that only reads a table and does not modify it will acquire this lock mode. @@ -904,7 +904,7 @@ ERROR: could not serialize access due to read/write dependencies among transact acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any - command that modifies data in a table. + command that modifies data in a table. @@ -920,13 +920,13 @@ ERROR: could not serialize access due to read/write dependencies among transact EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode protects a table against - concurrent schema changes and VACUUM runs. + concurrent schema changes and VACUUM runs. 
Acquired by VACUUM (without ), - ANALYZE, CREATE INDEX CONCURRENTLY, - CREATE STATISTICS and + ANALYZE, CREATE INDEX CONCURRENTLY, + CREATE STATISTICS and ALTER TABLE VALIDATE and other ALTER TABLE variants (for full details see ). @@ -1016,12 +1016,12 @@ ERROR: could not serialize access due to read/write dependencies among transact - Acquired by the DROP TABLE, + Acquired by the DROP TABLE, TRUNCATE, REINDEX, CLUSTER, VACUUM FULL, and REFRESH MATERIALIZED VIEW (without ) - commands. Many forms of ALTER TABLE also acquire + commands. Many forms of ALTER TABLE also acquire a lock at this level. This is also the default lock mode for LOCK TABLE statements that do not specify a mode explicitly. @@ -1042,9 +1042,9 @@ ERROR: could not serialize access due to read/write dependencies among transact Once acquired, a lock is normally held till end of transaction. But if a lock is acquired after establishing a savepoint, the lock is released immediately if the savepoint is rolled back to. This is consistent with - the principle that ROLLBACK cancels all effects of the + the principle that ROLLBACK cancels all effects of the commands since the savepoint. The same holds for locks acquired within a - PL/pgSQL exception block: an error escape from the block + PL/pgSQL exception block: an error escape from the block releases locks acquired within it. @@ -1204,17 +1204,17 @@ ERROR: could not serialize access due to read/write dependencies among transact concurrent transaction that has run any of those commands on the same row, and will then lock and return the updated row (or no row, if the - row was deleted). Within a REPEATABLE READ or - SERIALIZABLE transaction, + row was deleted). Within a REPEATABLE READ or + SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see . - The FOR UPDATE lock mode - is also acquired by any DELETE on a row, and also by an - UPDATE that modifies the values on certain columns. Currently, - the set of columns considered for the UPDATE case are those that + The FOR UPDATE lock mode + is also acquired by any DELETE on a row, and also by an + UPDATE that modifies the values on certain columns. Currently, + the set of columns considered for the UPDATE case are those that have a unique index on them that can be used in a foreign key (so partial indexes and expressional indexes are not considered), but this may change in the future. @@ -1228,11 +1228,11 @@ ERROR: could not serialize access due to read/write dependencies among transact - Behaves similarly to FOR UPDATE, except that the lock + Behaves similarly to FOR UPDATE, except that the lock acquired is weaker: this lock will not block - SELECT FOR KEY SHARE commands that attempt to acquire + SELECT FOR KEY SHARE commands that attempt to acquire a lock on the same rows. This lock mode is also acquired by any - UPDATE that does not acquire a FOR UPDATE lock. + UPDATE that does not acquire a FOR UPDATE lock. @@ -1243,12 +1243,12 @@ ERROR: could not serialize access due to read/write dependencies among transact - Behaves similarly to FOR NO KEY UPDATE, except that it + Behaves similarly to FOR NO KEY UPDATE, except that it acquires a shared lock rather than exclusive lock on each retrieved row. 
A shared lock blocks other transactions from performing UPDATE, DELETE, SELECT FOR UPDATE or - SELECT FOR NO KEY UPDATE on these rows, but it does not + SELECT FOR NO KEY UPDATE on these rows, but it does not prevent them from performing SELECT FOR SHARE or SELECT FOR KEY SHARE. @@ -1262,13 +1262,13 @@ ERROR: could not serialize access due to read/write dependencies among transact Behaves similarly to FOR SHARE, except that the - lock is weaker: SELECT FOR UPDATE is blocked, but not - SELECT FOR NO KEY UPDATE. A key-shared lock blocks + lock is weaker: SELECT FOR UPDATE is blocked, but not + SELECT FOR NO KEY UPDATE. A key-shared lock blocks other transactions from performing DELETE or any UPDATE that changes the key values, but not - other UPDATE, and neither does it prevent - SELECT FOR NO KEY UPDATE, SELECT FOR SHARE, - or SELECT FOR KEY SHARE. + other UPDATE, and neither does it prevent + SELECT FOR NO KEY UPDATE, SELECT FOR SHARE, + or SELECT FOR KEY SHARE. @@ -1357,7 +1357,7 @@ ERROR: could not serialize access due to read/write dependencies among transact The use of explicit locking can increase the likelihood of - deadlocks, wherein two (or more) transactions each + deadlocks, wherein two (or more) transactions each hold locks that the other wants. For example, if transaction 1 acquires an exclusive lock on table A and then tries to acquire an exclusive lock on table B, while transaction 2 has already @@ -1447,12 +1447,12 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222; PostgreSQL provides a means for creating locks that have application-defined meanings. These are - called advisory locks, because the system does not + called advisory locks, because the system does not enforce their use — it is up to the application to use them correctly. Advisory locks can be useful for locking strategies that are an awkward fit for the MVCC model. For example, a common use of advisory locks is to emulate pessimistic - locking strategies typical of so-called flat file data + locking strategies typical of so-called flat file data management systems. While a flag stored in a table could be used for the same purpose, advisory locks are faster, avoid table bloat, and are automatically @@ -1506,7 +1506,7 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222; In certain cases using advisory locking methods, especially in queries - involving explicit ordering and LIMIT clauses, care must be + involving explicit ordering and LIMIT clauses, care must be taken to control the locks acquired because of the order in which SQL expressions are evaluated. For example: @@ -1518,7 +1518,7 @@ SELECT pg_advisory_lock(q.id) FROM ) q; -- ok In the above queries, the second form is dangerous because the - LIMIT is not guaranteed to be applied before the locking + LIMIT is not guaranteed to be applied before the locking function is executed. This might cause some locks to be acquired that the application was not expecting, and hence would fail to release (until it ends the session). @@ -1590,7 +1590,7 @@ SELECT pg_advisory_lock(q.id) FROM for application programmers if the application software goes through a framework which automatically retries transactions which are rolled back with a serialization failure. It may be a good idea to set - default_transaction_isolation to serializable. + default_transaction_isolation to serializable. 
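 A minimal sketch of doing so (the setting can equally be placed in postgresql.conf):

SET default_transaction_isolation = 'serializable';
SHOW transaction_isolation;   -- one way for applications to verify the level in effect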
It would also be wise to take some action to ensure that no other transaction isolation level is used, either inadvertently or to subvert integrity checks, through checks of the transaction isolation @@ -1660,7 +1660,7 @@ SELECT pg_advisory_lock(q.id) FROM includes some but not all post-transaction-start changes. In such cases a careful person might wish to lock all tables needed for the check, in order to get an indisputable picture of current reality. A - SHARE mode (or higher) lock guarantees that there are no + SHARE mode (or higher) lock guarantees that there are no uncommitted changes in the locked table, other than those of the current transaction. @@ -1675,8 +1675,8 @@ SELECT pg_advisory_lock(q.id) FROM transaction predates obtaining the lock, it might predate some now-committed changes in the table. A repeatable read transaction's snapshot is actually frozen at the start of its first query or data-modification command - (SELECT, INSERT, - UPDATE, or DELETE), so + (SELECT, INSERT, + UPDATE, or DELETE), so it is possible to obtain locks explicitly before the snapshot is frozen. diff --git a/doc/src/sgml/nls.sgml b/doc/src/sgml/nls.sgml index 1d331473af..f312b5bfb5 100644 --- a/doc/src/sgml/nls.sgml +++ b/doc/src/sgml/nls.sgml @@ -7,12 +7,12 @@ For the Translator - PostgreSQL + PostgreSQL programs (server and client) can issue their messages in your favorite language — if the messages have been translated. Creating and maintaining translated message sets needs the help of people who speak their own language well and want to contribute to - the PostgreSQL effort. You do not have to be a + the PostgreSQL effort. You do not have to be a programmer at all to do this. This section explains how to help. @@ -170,8 +170,8 @@ make init-po This will create a file progname.pot. (.pot to distinguish it from PO files that - are in production. The T stands for - template.) + are in production. The T stands for + template.) Copy this file to language.po and edit it. To make it known that the new language is available, @@ -234,7 +234,7 @@ make update-po - If the original is a printf format string, the translation + If the original is a printf format string, the translation also needs to be. The translation also needs to have the same format specifiers in the same order. Sometimes the natural rules of the language make this impossible or at least awkward. @@ -301,7 +301,7 @@ msgstr "Die Datei %2$s hat %1$u Zeichen." This section describes how to implement native language support in a program or library that is part of the - PostgreSQL distribution. + PostgreSQL distribution. Currently, it only applies to C programs. @@ -447,7 +447,7 @@ fprintf(stderr, gettext("panic level %d\n"), lvl); printf("Files were %s.\n", flag ? "copied" : "removed"); The word order within the sentence might be different in other - languages. Also, even if you remember to call gettext() on + languages. Also, even if you remember to call gettext() on each fragment, the fragments might not translate well separately. It's better to duplicate a little code so that each message to be translated is a coherent whole. Only numbers, file names, and @@ -481,7 +481,7 @@ printf("number of copied files: %d", n); If you really want to construct a properly pluralized message, there is support for this, but it's a bit awkward. 
When generating - a primary or detail error message in ereport(), you can + a primary or detail error message in ereport(), you can write something like this: errmsg_plural("copied %d file", @@ -496,17 +496,17 @@ errmsg_plural("copied %d file", are formatted per the format string as usual. (Normally, the pluralization control value will also be one of the values to be formatted, so it has to be written twice.) In English it only - matters whether n is 1 or not 1, but in other + matters whether n is 1 or not 1, but in other languages there can be many different plural forms. The translator sees the two English forms as a group and has the opportunity to supply multiple substitute strings, with the appropriate one being - selected based on the run-time value of n. + selected based on the run-time value of n. If you need to pluralize a message that isn't going directly to an - errmsg or errdetail report, you have to use - the underlying function ngettext. See the gettext + errmsg or errdetail report, you have to use + the underlying function ngettext. See the gettext documentation. diff --git a/doc/src/sgml/notation.sgml b/doc/src/sgml/notation.sgml index 2f350a329d..bd1e8f629a 100644 --- a/doc/src/sgml/notation.sgml +++ b/doc/src/sgml/notation.sgml @@ -7,17 +7,17 @@ The following conventions are used in the synopsis of a command: brackets ([ and ]) indicate optional parts. (In the synopsis of a Tcl command, question marks - (?) are used instead, as is usual in Tcl.) Braces + (?) are used instead, as is usual in Tcl.) Braces ({ and }) and vertical lines (|) indicate that you must choose one - alternative. Dots (...) mean that the preceding element + alternative. Dots (...) mean that the preceding element can be repeated. Where it enhances the clarity, SQL commands are preceded by the - prompt =>, and shell commands are preceded by the - prompt $. Normally, prompts are not shown, though. + prompt =>, and shell commands are preceded by the + prompt $. Normally, prompts are not shown, though. diff --git a/doc/src/sgml/oid2name.sgml b/doc/src/sgml/oid2name.sgml index 97b170a23f..4ab2cf1a85 100644 --- a/doc/src/sgml/oid2name.sgml +++ b/doc/src/sgml/oid2name.sgml @@ -27,7 +27,7 @@ Description - oid2name is a utility program that helps administrators to + oid2name is a utility program that helps administrators to examine the file structure used by PostgreSQL. To make use of it, you need to be familiar with the database file structure, which is described in . @@ -35,7 +35,7 @@ - The name oid2name is historical, and is actually rather + The name oid2name is historical, and is actually rather misleading, since most of the time when you use it, you will really be concerned with tables' filenode numbers (which are the file names visible in the database directories). Be sure you understand the @@ -60,8 +60,8 @@ - filenode - show info for table with filenode filenode + filenode + show info for table with filenode filenode @@ -70,8 +70,8 @@ - oid - show info for table with OID oid + oid + show info for table with OID oid @@ -93,13 +93,13 @@ - tablename_pattern - show info for table(s) matching tablename_pattern + tablename_pattern + show info for table(s) matching tablename_pattern - - + + Print the oid2name version and exit. 
@@ -115,8 +115,8 @@ - - + + Show help about oid2name command line @@ -133,27 +133,27 @@ - database + database database to connect to - host + host database server's host - port + port database server's port - username + username user name to connect as - password + password password (deprecated — putting this on the command line is a security hazard) @@ -163,27 +163,27 @@ To display specific tables, select which tables to show by - using - If you don't give any of , or , + but do give , it will list all tables in the database + named by . In this mode, the and + options control what gets listed. - If you don't give either, it will show a listing of database + OIDs. Alternatively you can give to get a tablespace listing. @@ -192,7 +192,7 @@ Notes - oid2name requires a running database server with + oid2name requires a running database server with non-corrupt system catalogs. It is therefore of only limited use for recovering from catastrophic database corruption situations. diff --git a/doc/src/sgml/pageinspect.sgml b/doc/src/sgml/pageinspect.sgml index e46f5ca6bc..23570af4bf 100644 --- a/doc/src/sgml/pageinspect.sgml +++ b/doc/src/sgml/pageinspect.sgml @@ -8,7 +8,7 @@ - The pageinspect module provides functions that allow you to + The pageinspect module provides functions that allow you to inspect the contents of database pages at a low level, which is useful for debugging purposes. All of these functions may be used only by superusers. @@ -28,7 +28,7 @@ get_raw_page reads the specified block of the named - relation and returns a copy as a bytea value. This allows a + relation and returns a copy as a bytea value. This allows a single time-consistent copy of the block to be obtained. fork should be 'main' for the main data fork, 'fsm' for the free space map, @@ -63,7 +63,7 @@ page_header shows fields that are common to all - PostgreSQL heap and index pages. + PostgreSQL heap and index pages. @@ -76,8 +76,8 @@ test=# SELECT * FROM page_header(get_raw_page('pg_class', 0)); 0/24A1B50 | 0 | 1 | 232 | 368 | 8192 | 8192 | 4 | 0 The returned columns correspond to the fields in the - PageHeaderData struct. - See src/include/storage/bufpage.h for details. + PageHeaderData struct. + See src/include/storage/bufpage.h for details. @@ -147,8 +147,8 @@ test=# SELECT page_checksum(get_raw_page('pg_class', 0), 0); test=# SELECT * FROM heap_page_items(get_raw_page('pg_class', 0)); - See src/include/storage/itemid.h and - src/include/access/htup_details.h for explanations of the fields + See src/include/storage/itemid.h and + src/include/access/htup_details.h for explanations of the fields returned. @@ -221,7 +221,7 @@ test=# SELECT * FROM heap_page_item_attrs(get_raw_page('pg_class', 0), 'pg_class next slot to be returned from the page, is also printed. - See src/backend/storage/freespace/README for more + See src/backend/storage/freespace/README for more information on the structure of an FSM page. @@ -315,21 +315,21 @@ test=# SELECT * FROM bt_page_items('pg_cast_oid_index', 1); 7 | (0,7) | 12 | f | f | 29 27 00 00 8 | (0,8) | 12 | f | f | 2a 27 00 00 - In a B-tree leaf page, ctid points to a heap tuple. - In an internal page, the block number part of ctid + In a B-tree leaf page, ctid points to a heap tuple. + In an internal page, the block number part of ctid points to another page in the index itself, while the offset part (the second number) is ignored and is usually 1. 
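 As a small sketch, a leaf item's ctid can be followed back to the heap with an ordinary TID lookup (using the pg_cast example above; actual item values vary between installations):

SELECT * FROM pg_cast WHERE ctid = '(0,1)';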
Note that the first item on any non-rightmost page (any page with - a non-zero value in the btpo_next field) is the - page's high key, meaning its data + a non-zero value in the btpo_next field) is the + page's high key, meaning its data serves as an upper bound on all items appearing on the page, while - its ctid field is meaningless. Also, on non-leaf + its ctid field is meaningless. Also, on non-leaf pages, the first real data item (the first item that is not a high key) is a minus infinity item, with no actual value - in its data field. Such an item does have a valid - downlink in its ctid field, however. + in its data field. Such an item does have a valid + downlink in its ctid field, however. @@ -345,7 +345,7 @@ test=# SELECT * FROM bt_page_items('pg_cast_oid_index', 1); It is also possible to pass a page to bt_page_items - as a bytea value. A page image obtained + as a bytea value. A page image obtained with get_raw_page should be passed as argument. So the last example could also be rewritten like this: @@ -470,8 +470,8 @@ test=# SELECT * FROM brin_page_items(get_raw_page('brinidx', 5), 139 | 8 | 2 | f | f | f | {177 .. 264} The returned columns correspond to the fields in the - BrinMemTuple and BrinValues structs. - See src/include/access/brin_tuple.h for details. + BrinMemTuple and BrinValues structs. + See src/include/access/brin_tuple.h for details. diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml index 1f5efd9e6d..6aac506942 100644 --- a/doc/src/sgml/parallel.sgml +++ b/doc/src/sgml/parallel.sgml @@ -8,7 +8,7 @@ - PostgreSQL can devise query plans which can leverage + PostgreSQL can devise query plans which can leverage multiple CPUs in order to answer queries faster. This feature is known as parallel query. Many queries cannot benefit from parallel query, either due to limitations of the current implementation or because there is no @@ -47,18 +47,18 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; In all cases, the Gather or Gather Merge node will have exactly one child plan, which is the portion of the plan that will be executed in - parallel. If the Gather or Gather Merge node is + parallel. If the Gather or Gather Merge node is at the very top of the plan tree, then the entire query will execute in parallel. If it is somewhere else in the plan tree, then only the portion of the plan below it will run in parallel. In the example above, the query accesses only one table, so there is only one plan node other than - the Gather node itself; since that plan node is a child of the - Gather node, it will run in parallel. + the Gather node itself; since that plan node is a child of the + Gather node, it will run in parallel. - Using EXPLAIN, you can see the number of - workers chosen by the planner. When the Gather node is reached + Using EXPLAIN, you can see the number of + workers chosen by the planner. When the Gather node is reached during query execution, the process which is implementing the user's session will request a number of background worker processes equal to the number @@ -72,7 +72,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; no workers at all. The optimal plan may depend on the number of workers that are available, so this can result in poor query performance. 
If this occurrence is frequent, consider increasing - max_worker_processes and max_parallel_workers + max_worker_processes and max_parallel_workers so that more workers can be run simultaneously or alternatively reducing max_parallel_workers_per_gather so that the planner requests fewer workers. @@ -96,10 +96,10 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; When the node at the top of the parallel portion of the plan is - Gather Merge rather than Gather, it indicates that + Gather Merge rather than Gather, it indicates that each process executing the parallel portion of the plan is producing tuples in sorted order, and that the leader is performing an - order-preserving merge. In contrast, Gather reads tuples + order-preserving merge. In contrast, Gather reads tuples from the workers in whatever order is convenient, destroying any sort order that may have existed. @@ -128,7 +128,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; must be set to a - value other than none. Parallel query requires dynamic + value other than none. Parallel query requires dynamic shared memory in order to pass data between cooperating processes. @@ -152,8 +152,8 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; The query writes any data or locks any database rows. If a query contains a data-modifying operation either at the top level or within a CTE, no parallel plans for that query will be generated. As an - exception, the commands CREATE TABLE, SELECT - INTO, and CREATE MATERIALIZED VIEW which create a new + exception, the commands CREATE TABLE, SELECT + INTO, and CREATE MATERIALIZED VIEW which create a new table and populate it can use a parallel plan. @@ -205,8 +205,8 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Even when parallel query plan is generated for a particular query, there are several circumstances under which it will be impossible to execute that plan in parallel at execution time. If this occurs, the leader - will execute the portion of the plan below the Gather - node entirely by itself, almost as if the Gather node were + will execute the portion of the plan below the Gather + node entirely by itself, almost as if the Gather node were not present. This will happen if any of the following conditions are met: @@ -264,7 +264,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; copy of the output result set, so the query would not run any faster than normal but would produce incorrect results. Instead, the parallel portion of the plan must be what is known internally to the query - optimizer as a partial plan; that is, it must be constructed + optimizer as a partial plan; that is, it must be constructed so that each process which executes the plan will generate only a subset of the output rows in such a way that each required output row is guaranteed to be generated by exactly one of the cooperating processes. @@ -281,14 +281,14 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; - In a parallel sequential scan, the table's blocks will + In a parallel sequential scan, the table's blocks will be divided among the cooperating processes. Blocks are handed out one at a time, so that access to the table remains sequential. - In a parallel bitmap heap scan, one process is chosen + In a parallel bitmap heap scan, one process is chosen as the leader. That process performs a scan of one or more indexes and builds a bitmap indicating which table blocks need to be visited. 
These blocks are then divided among the cooperating processes as in @@ -298,8 +298,8 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; - In a parallel index scan or parallel index-only - scan, the cooperating processes take turns reading data from the + In a parallel index scan or parallel index-only + scan, the cooperating processes take turns reading data from the index. Currently, parallel index scans are supported only for btree indexes. Each process will claim a single index block and will scan and return all tuples referenced by that block; other process can @@ -345,25 +345,25 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Parallel Aggregation - PostgreSQL supports parallel aggregation by aggregating in + PostgreSQL supports parallel aggregation by aggregating in two stages. First, each process participating in the parallel portion of the query performs an aggregation step, producing a partial result for each group of which that process is aware. This is reflected in the plan - as a Partial Aggregate node. Second, the partial results are - transferred to the leader via Gather or Gather - Merge. Finally, the leader re-aggregates the results across all + as a Partial Aggregate node. Second, the partial results are + transferred to the leader via Gather or Gather + Merge. Finally, the leader re-aggregates the results across all workers in order to produce the final result. This is reflected in the - plan as a Finalize Aggregate node. + plan as a Finalize Aggregate node. - Because the Finalize Aggregate node runs on the leader + Because the Finalize Aggregate node runs on the leader process, queries which produce a relatively large number of groups in comparison to the number of input rows will appear less favorable to the query planner. For example, in the worst-case scenario the number of - groups seen by the Finalize Aggregate node could be as many as + groups seen by the Finalize Aggregate node could be as many as the number of input rows which were seen by all worker processes in the - Partial Aggregate stage. For such cases, there is clearly + Partial Aggregate stage. For such cases, there is clearly going to be no performance benefit to using parallel aggregation. The query planner takes this into account during the planning process and is unlikely to choose parallel aggregate in this scenario. @@ -371,14 +371,14 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Parallel aggregation is not supported in all situations. Each aggregate - must be safe for parallelism and must + must be safe for parallelism and must have a combine function. If the aggregate has a transition state of type - internal, it must have serialization and deserialization + internal, it must have serialization and deserialization functions. See for more details. Parallel aggregation is not supported if any aggregate function call - contains DISTINCT or ORDER BY clause and is also + contains DISTINCT or ORDER BY clause and is also not supported for ordered set aggregates or when the query involves - GROUPING SETS. It can only be used when all joins involved in + GROUPING SETS. It can only be used when all joins involved in the query are also part of the parallel portion of the plan. @@ -417,13 +417,13 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; The planner classifies operations involved in a query as either - parallel safe, parallel restricted, - or parallel unsafe. 
A parallel safe operation is one which + parallel safe, parallel restricted, + or parallel unsafe. A parallel safe operation is one which does not conflict with the use of parallel query. A parallel restricted operation is one which cannot be performed in a parallel worker, but which can be performed in the leader while parallel query is in use. Therefore, - parallel restricted operations can never occur below a Gather - or Gather Merge node, but can occur elsewhere in a plan which + parallel restricted operations can never occur below a Gather + or Gather Merge node, but can occur elsewhere in a plan which contains such a node. A parallel unsafe operation is one which cannot be performed while parallel query is in use, not even in the leader. When a query contains anything which is parallel unsafe, parallel query @@ -450,13 +450,13 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Scans of foreign tables, unless the foreign data wrapper has - an IsForeignScanParallelSafe API which indicates otherwise. + an IsForeignScanParallelSafe API which indicates otherwise. - Access to an InitPlan or correlated SubPlan. + Access to an InitPlan or correlated SubPlan. @@ -475,23 +475,23 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; be parallel unsafe unless otherwise marked. When using or , markings can be set by specifying - PARALLEL SAFE, PARALLEL RESTRICTED, or - PARALLEL UNSAFE as appropriate. When using + PARALLEL SAFE, PARALLEL RESTRICTED, or + PARALLEL UNSAFE as appropriate. When using , the - PARALLEL option can be specified with SAFE, - RESTRICTED, or UNSAFE as the corresponding value. + PARALLEL option can be specified with SAFE, + RESTRICTED, or UNSAFE as the corresponding value. - Functions and aggregates must be marked PARALLEL UNSAFE if + Functions and aggregates must be marked PARALLEL UNSAFE if they write to the database, access sequences, change the transaction state even temporarily (e.g. a PL/pgSQL function which establishes an - EXCEPTION block to catch errors), or make persistent changes to + EXCEPTION block to catch errors), or make persistent changes to settings. Similarly, functions must be marked PARALLEL - RESTRICTED if they access temporary tables, client connection state, + RESTRICTED if they access temporary tables, client connection state, cursors, prepared statements, or miscellaneous backend-local state which the system cannot synchronize across workers. For example, - setseed and random are parallel restricted for + setseed and random are parallel restricted for this last reason. @@ -503,7 +503,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; mislabeled, since there is no way for the system to protect itself against arbitrary C code, but in most likely cases the result will be no worse than for any other function. If in doubt, it is probably best to label functions - as UNSAFE. + as UNSAFE. @@ -519,13 +519,13 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Note that the query planner does not consider deferring the evaluation of parallel-restricted functions or aggregates involved in the query in - order to obtain a superior plan. So, for example, if a WHERE + order to obtain a superior plan. So, for example, if a WHERE clause applied to a particular table is parallel restricted, the query planner will not consider performing a scan of that table in the parallel portion of a plan. 
In some cases, it would be possible (and perhaps even efficient) to include the scan of that table in the parallel portion of the query and defer the evaluation of the - WHERE clause so that it happens above the Gather + WHERE clause so that it happens above the Gather node. However, the planner does not do this. diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml index d3b47bc5a5..6a5182d85b 100644 --- a/doc/src/sgml/perform.sgml +++ b/doc/src/sgml/perform.sgml @@ -30,7 +30,7 @@ plan for each query it receives. Choosing the right plan to match the query structure and the properties of the data is absolutely critical for good performance, so the system includes - a complex planner that tries to choose good plans. + a complex planner that tries to choose good plans. You can use the command to see what query plan the planner creates for any query. Plan-reading is an art that requires some experience to master, @@ -39,17 +39,17 @@ Examples in this section are drawn from the regression test database - after doing a VACUUM ANALYZE, using 9.3 development sources. + after doing a VACUUM ANALYZE, using 9.3 development sources. You should be able to get similar results if you try the examples yourself, but your estimated costs and row counts might vary slightly - because ANALYZE's statistics are random samples rather + because ANALYZE's statistics are random samples rather than exact, and because costs are inherently somewhat platform-dependent. - The examples use EXPLAIN's default text output + The examples use EXPLAIN's default text output format, which is compact and convenient for humans to read. - If you want to feed EXPLAIN's output to a program for further + If you want to feed EXPLAIN's output to a program for further analysis, you should use one of its machine-readable output formats (XML, JSON, or YAML) instead. @@ -58,12 +58,12 @@ <command>EXPLAIN</command> Basics - The structure of a query plan is a tree of plan nodes. + The structure of a query plan is a tree of plan nodes. Nodes at the bottom level of the tree are scan nodes: they return raw rows from a table. There are different types of scan nodes for different table access methods: sequential scans, index scans, and bitmap index - scans. There are also non-table row sources, such as VALUES - clauses and set-returning functions in FROM, which have their + scans. There are also non-table row sources, such as VALUES + clauses and set-returning functions in FROM, which have their own scan node types. If the query requires joining, aggregation, sorting, or other operations on the raw rows, then there will be additional nodes @@ -93,7 +93,7 @@ EXPLAIN SELECT * FROM tenk1; - Since this query has no WHERE clause, it must scan all the + Since this query has no WHERE clause, it must scan all the rows of the table, so the planner has chosen to use a simple sequential scan plan. The numbers that are quoted in parentheses are (left to right): @@ -111,7 +111,7 @@ EXPLAIN SELECT * FROM tenk1; Estimated total cost. This is stated on the assumption that the plan node is run to completion, i.e., all available rows are retrieved. In practice a node's parent node might stop short of reading all - available rows (see the LIMIT example below). + available rows (see the LIMIT example below). @@ -135,7 +135,7 @@ EXPLAIN SELECT * FROM tenk1; cost parameters (see ). 
Traditional practice is to measure the costs in units of disk page fetches; that is, is conventionally - set to 1.0 and the other cost parameters are set relative + set to 1.0 and the other cost parameters are set relative to that. The examples in this section are run with the default cost parameters. @@ -152,11 +152,11 @@ EXPLAIN SELECT * FROM tenk1; - The rows value is a little tricky because it is + The rows value is a little tricky because it is not the number of rows processed or scanned by the plan node, but rather the number emitted by the node. This is often less than the number scanned, as a result of filtering by any - WHERE-clause conditions that are being applied at the node. + WHERE-clause conditions that are being applied at the node. Ideally the top-level rows estimate will approximate the number of rows actually returned, updated, or deleted by the query. @@ -184,12 +184,12 @@ SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1'; pages and 10000 rows. The estimated cost is computed as (disk pages read * ) + (rows scanned * ). By default, - seq_page_cost is 1.0 and cpu_tuple_cost is 0.01, + seq_page_cost is 1.0 and cpu_tuple_cost is 0.01, so the estimated cost is (358 * 1.0) + (10000 * 0.01) = 458. - Now let's modify the query to add a WHERE condition: + Now let's modify the query to add a WHERE condition: EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 7000; @@ -200,21 +200,21 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 7000; Filter: (unique1 < 7000) - Notice that the EXPLAIN output shows the WHERE - clause being applied as a filter condition attached to the Seq + Notice that the EXPLAIN output shows the WHERE + clause being applied as a filter condition attached to the Seq Scan plan node. This means that the plan node checks the condition for each row it scans, and outputs only the ones that pass the condition. The estimate of output rows has been reduced because of the - WHERE clause. + WHERE clause. However, the scan will still have to visit all 10000 rows, so the cost hasn't decreased; in fact it has gone up a bit (by 10000 * , to be exact) to reflect the extra CPU - time spent checking the WHERE condition. + time spent checking the WHERE condition. - The actual number of rows this query would select is 7000, but the rows + The actual number of rows this query would select is 7000, but the rows estimate is only approximate. If you try to duplicate this experiment, you will probably get a slightly different estimate; moreover, it can change after each ANALYZE command, because the @@ -245,12 +245,12 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100; scan. (The reason for using two plan levels is that the upper plan node sorts the row locations identified by the index into physical order before reading them, to minimize the cost of separate fetches. - The bitmap mentioned in the node names is the mechanism that + The bitmap mentioned in the node names is the mechanism that does the sorting.) - Now let's add another condition to the WHERE clause: + Now let's add another condition to the WHERE clause: EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND stringu1 = 'xxx'; @@ -266,15 +266,15 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND stringu1 = 'xxx'; The added condition stringu1 = 'xxx' reduces the output row count estimate, but not the cost because we still have to visit - the same set of rows. Notice that the stringu1 clause + the same set of rows. 
Notice that the stringu1 clause cannot be applied as an index condition, since this index is only on - the unique1 column. Instead it is applied as a filter on + the unique1 column. Instead it is applied as a filter on the rows retrieved by the index. Thus the cost has actually gone up slightly to reflect this extra checking. - In some cases the planner will prefer a simple index scan plan: + In some cases the planner will prefer a simple index scan plan: EXPLAIN SELECT * FROM tenk1 WHERE unique1 = 42; @@ -289,14 +289,14 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 = 42; makes them even more expensive to read, but there are so few that the extra cost of sorting the row locations is not worth it. You'll most often see this plan type for queries that fetch just a single row. It's - also often used for queries that have an ORDER BY condition + also often used for queries that have an ORDER BY condition that matches the index order, because then no extra sorting step is needed - to satisfy the ORDER BY. + to satisfy the ORDER BY. If there are separate indexes on several of the columns referenced - in WHERE, the planner might choose to use an AND or OR + in WHERE, the planner might choose to use an AND or OR combination of the indexes: @@ -320,7 +320,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000; - Here is an example showing the effects of LIMIT: + Here is an example showing the effects of LIMIT: EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2; @@ -335,7 +335,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2 - This is the same query as above, but we added a LIMIT so that + This is the same query as above, but we added a LIMIT so that not all the rows need be retrieved, and the planner changed its mind about what to do. Notice that the total cost and row count of the Index Scan node are shown as if it were run to completion. However, the Limit node @@ -370,23 +370,23 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2; In this plan, we have a nested-loop join node with two table scans as inputs, or children. The indentation of the node summary lines reflects - the plan tree structure. The join's first, or outer, child + the plan tree structure. The join's first, or outer, child is a bitmap scan similar to those we saw before. Its cost and row count - are the same as we'd get from SELECT ... WHERE unique1 < 10 + are the same as we'd get from SELECT ... WHERE unique1 < 10 because we are - applying the WHERE clause unique1 < 10 + applying the WHERE clause unique1 < 10 at that node. The t1.unique2 = t2.unique2 clause is not relevant yet, so it doesn't affect the row count of the outer scan. The nested-loop join node will run its second, - or inner child once for each row obtained from the outer child. + or inner child once for each row obtained from the outer child. Column values from the current outer row can be plugged into the inner - scan; here, the t1.unique2 value from the outer row is available, + scan; here, the t1.unique2 value from the outer row is available, so we get a plan and costs similar to what we saw above for a simple - SELECT ... WHERE t2.unique2 = constant case. + SELECT ... WHERE t2.unique2 = constant case. (The estimated cost is actually a bit lower than what was seen above, as a result of caching that's expected to occur during the repeated - index scans on t2.) The + index scans on t2.) 
The costs of the loop node are then set on the basis of the cost of the outer scan, plus one repetition of the inner scan for each outer row (10 * 7.87, here), plus a little CPU time for join processing. @@ -395,7 +395,7 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2; In this example the join's output row count is the same as the product of the two scans' row counts, but that's not true in all cases because - there can be additional WHERE clauses that mention both tables + there can be additional WHERE clauses that mention both tables and so can only be applied at the join point, not to either input scan. Here's an example: @@ -418,15 +418,15 @@ WHERE t1.unique1 < 10 AND t2.unique2 < 10 AND t1.hundred < t2.hundred; The condition t1.hundred < t2.hundred can't be - tested in the tenk2_unique2 index, so it's applied at the + tested in the tenk2_unique2 index, so it's applied at the join node. This reduces the estimated output row count of the join node, but does not change either input scan. - Notice that here the planner has chosen to materialize the inner + Notice that here the planner has chosen to materialize the inner relation of the join, by putting a Materialize plan node atop it. This - means that the t2 index scan will be done just once, even + means that the t2 index scan will be done just once, even though the nested-loop join node needs to read that data ten times, once for each row from the outer relation. The Materialize node saves the data in memory as it's read, and then returns the data from memory on each @@ -435,8 +435,8 @@ WHERE t1.unique1 < 10 AND t2.unique2 < 10 AND t1.hundred < t2.hundred; When dealing with outer joins, you might see join plan nodes with both - Join Filter and plain Filter conditions attached. - Join Filter conditions come from the outer join's ON clause, + Join Filter and plain Filter conditions attached. + Join Filter conditions come from the outer join's ON clause, so a row that fails the Join Filter condition could still get emitted as a null-extended row. But a plain Filter condition is applied after the outer-join rules and so acts to remove rows unconditionally. In an inner @@ -470,7 +470,7 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; table are entered into an in-memory hash table, after which the other table is scanned and the hash table is probed for matches to each row. Again note how the indentation reflects the plan structure: the bitmap - scan on tenk1 is the input to the Hash node, which constructs + scan on tenk1 is the input to the Hash node, which constructs the hash table. That's then returned to the Hash Join node, which reads rows from its outer child plan and searches the hash table for each one. @@ -497,9 +497,9 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; Merge join requires its input data to be sorted on the join keys. In this - plan the tenk1 data is sorted by using an index scan to visit + plan the tenk1 data is sorted by using an index scan to visit the rows in the correct order, but a sequential scan and sort is preferred - for onek, because there are many more rows to be visited in + for onek, because there are many more rows to be visited in that table. (Sequential-scan-and-sort frequently beats an index scan for sorting many rows, because of the nonsequential disk access required by the index scan.) @@ -512,7 +512,7 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; (This is a crude tool, but useful. See also .) 
For example, if we're unconvinced that sequential-scan-and-sort is the best way to - deal with table onek in the previous example, we could try + deal with table onek in the previous example, we could try SET enable_sort = off; @@ -530,10 +530,10 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; -> Index Scan using onek_unique2 on onek t2 (cost=0.28..224.79 rows=1000 width=244) - which shows that the planner thinks that sorting onek by + which shows that the planner thinks that sorting onek by index-scanning is about 12% more expensive than sequential-scan-and-sort. Of course, the next question is whether it's right about that. - We can investigate that using EXPLAIN ANALYZE, as discussed + We can investigate that using EXPLAIN ANALYZE, as discussed below. @@ -544,8 +544,8 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; It is possible to check the accuracy of the planner's estimates - by using EXPLAIN's ANALYZE option. With this - option, EXPLAIN actually executes the query, and then displays + by using EXPLAIN's ANALYZE option. With this + option, EXPLAIN actually executes the query, and then displays the true row counts and true run time accumulated within each plan node, along with the same estimates that a plain EXPLAIN shows. For example, we might get a result like this: @@ -569,7 +569,7 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2; Note that the actual time values are in milliseconds of - real time, whereas the cost estimates are expressed in + real time, whereas the cost estimates are expressed in arbitrary units; so they are unlikely to match up. The thing that's usually most important to look for is whether the estimated row counts are reasonably close to reality. In this example @@ -580,17 +580,17 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2; In some query plans, it is possible for a subplan node to be executed more than once. For example, the inner index scan will be executed once per outer row in the above nested-loop plan. In such cases, the - loops value reports the + loops value reports the total number of executions of the node, and the actual time and rows values shown are averages per-execution. This is done to make the numbers comparable with the way that the cost estimates are shown. Multiply by - the loops value to get the total time actually spent in + the loops value to get the total time actually spent in the node. In the above example, we spent a total of 0.220 milliseconds - executing the index scans on tenk2. + executing the index scans on tenk2. - In some cases EXPLAIN ANALYZE shows additional execution + In some cases EXPLAIN ANALYZE shows additional execution statistics beyond the plan node execution times and row counts. For example, Sort and Hash nodes provide extra information: @@ -642,13 +642,13 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE ten < 7; These counts can be particularly valuable for filter conditions applied at - join nodes. The Rows Removed line only appears when at least + join nodes. The Rows Removed line only appears when at least one scanned row, or potential join pair in the case of a join node, is rejected by the filter condition. - A case similar to filter conditions occurs with lossy + A case similar to filter conditions occurs with lossy index scans. 
For example, consider this search for polygons containing a specific point: @@ -685,14 +685,14 @@ EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @> polygon '(0.5,2.0)'; Here we can see that the index returned one candidate row, which was then rejected by a recheck of the index condition. This happens because a - GiST index is lossy for polygon containment tests: it actually + GiST index is lossy for polygon containment tests: it actually returns the rows with polygons that overlap the target, and then we have to do the exact containment test on those rows. - EXPLAIN has a BUFFERS option that can be used with - ANALYZE to get even more run time statistics: + EXPLAIN has a BUFFERS option that can be used with + ANALYZE to get even more run time statistics: EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000; @@ -714,7 +714,7 @@ EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1 WHERE unique1 < 100 AND unique Execution time: 0.423 ms - The numbers provided by BUFFERS help to identify which parts + The numbers provided by BUFFERS help to identify which parts of the query are the most I/O-intensive. @@ -722,7 +722,7 @@ EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1 WHERE unique1 < 100 AND unique Keep in mind that because EXPLAIN ANALYZE actually runs the query, any side-effects will happen as usual, even though whatever results the query might output are discarded in favor of - printing the EXPLAIN data. If you want to analyze a + printing the EXPLAIN data. If you want to analyze a data-modifying query without changing your tables, you can roll the command back afterwards, for example: @@ -746,8 +746,8 @@ ROLLBACK; - As seen in this example, when the query is an INSERT, - UPDATE, or DELETE command, the actual work of + As seen in this example, when the query is an INSERT, + UPDATE, or DELETE command, the actual work of applying the table changes is done by a top-level Insert, Update, or Delete plan node. The plan nodes underneath this node perform the work of locating the old rows and/or computing the new data. @@ -762,7 +762,7 @@ ROLLBACK; - When an UPDATE or DELETE command affects an + When an UPDATE or DELETE command affects an inheritance hierarchy, the output might look like this: @@ -789,7 +789,7 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; scanning subplans, one per table. For clarity, the Update node is annotated to show the specific target tables that will be updated, in the same order as the corresponding subplans. (These annotations are new as - of PostgreSQL 9.5; in prior versions the reader had to + of PostgreSQL 9.5; in prior versions the reader had to intuit the target tables by inspecting the subplans.) @@ -804,12 +804,12 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; ANALYZE includes executor start-up and shut-down time, as well as the time to run any triggers that are fired, but it does not include parsing, rewriting, or planning time. - Time spent executing BEFORE triggers, if any, is included in + Time spent executing BEFORE triggers, if any, is included in the time for the related Insert, Update, or Delete node; but time - spent executing AFTER triggers is not counted there because - AFTER triggers are fired after completion of the whole plan. + spent executing AFTER triggers is not counted there because + AFTER triggers are fired after completion of the whole plan. The total time spent in each trigger - (either BEFORE or AFTER) is also shown separately. + (either BEFORE or AFTER) is also shown separately. 
Note that deferred constraint triggers will not be executed until end of transaction and are thus not considered at all by EXPLAIN ANALYZE. @@ -827,13 +827,13 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; network transmission costs and I/O conversion costs are not included. Second, the measurement overhead added by EXPLAIN ANALYZE can be significant, especially on machines with slow - gettimeofday() operating-system calls. You can use the + gettimeofday() operating-system calls. You can use the tool to measure the overhead of timing on your system. - EXPLAIN results should not be extrapolated to situations + EXPLAIN results should not be extrapolated to situations much different from the one you are actually testing; for example, results on a toy-sized table cannot be assumed to apply to large tables. The planner's cost estimates are not linear and so it might choose @@ -843,14 +843,14 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; The planner realizes that it's going to take one disk page read to process the table in any case, so there's no value in expending additional page reads to look at an index. (We saw this happening in the - polygon_tbl example above.) + polygon_tbl example above.) There are cases in which the actual and estimated values won't match up well, but nothing is really wrong. One such case occurs when - plan node execution is stopped short by a LIMIT or similar - effect. For example, in the LIMIT query we used before, + plan node execution is stopped short by a LIMIT or similar + effect. For example, in the LIMIT query we used before, EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2; @@ -880,10 +880,10 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 and the next key value in the one input is greater than the last key value of the other input; in such a case there can be no more matches and so no need to scan the rest of the first input. This results in not reading all - of one child, with results like those mentioned for LIMIT. + of one child, with results like those mentioned for LIMIT. Also, if the outer (first) child contains rows with duplicate key values, the inner (second) child is backed up and rescanned for the portion of its - rows matching that key value. EXPLAIN ANALYZE counts these + rows matching that key value. EXPLAIN ANALYZE counts these repeated emissions of the same inner rows as if they were real additional rows. When there are many outer duplicates, the reported actual row count for the inner child plan node can be significantly larger than the number @@ -948,9 +948,9 @@ WHERE relname LIKE 'tenk1%'; For efficiency reasons, reltuples and relpages are not updated on-the-fly, and so they usually contain somewhat out-of-date values. - They are updated by VACUUM, ANALYZE, and a - few DDL commands such as CREATE INDEX. A VACUUM - or ANALYZE operation that does not scan the entire table + They are updated by VACUUM, ANALYZE, and a + few DDL commands such as CREATE INDEX. A VACUUM + or ANALYZE operation that does not scan the entire table (which is commonly the case) will incrementally update the reltuples count on the basis of the part of the table it did scan, resulting in an approximate value. @@ -966,16 +966,16 @@ WHERE relname LIKE 'tenk1%'; Most queries retrieve only a fraction of the rows in a table, due - to WHERE clauses that restrict the rows to be + to WHERE clauses that restrict the rows to be examined. 
The planner thus needs to make an estimate of the - selectivity of WHERE clauses, that is, + selectivity of WHERE clauses, that is, the fraction of rows that match each condition in the - WHERE clause. The information used for this task is + WHERE clause. The information used for this task is stored in the pg_statistic system catalog. Entries in pg_statistic - are updated by the ANALYZE and VACUUM - ANALYZE commands, and are always approximate even when freshly + are updated by the ANALYZE and VACUUM + ANALYZE commands, and are always approximate even when freshly updated. @@ -1020,17 +1020,17 @@ WHERE tablename = 'road'; Note that two rows are displayed for the same column, one corresponding to the complete inheritance hierarchy starting at the - road table (inherited=t), + road table (inherited=t), and another one including only the road table itself - (inherited=f). + (inherited=f). The amount of information stored in pg_statistic - by ANALYZE, in particular the maximum number of entries in the - most_common_vals and histogram_bounds + by ANALYZE, in particular the maximum number of entries in the + most_common_vals and histogram_bounds arrays for each column, can be set on a - column-by-column basis using the ALTER TABLE SET STATISTICS + column-by-column basis using the ALTER TABLE SET STATISTICS command, or globally by setting the configuration variable. The default limit is presently 100 entries. Raising the limit @@ -1072,7 +1072,7 @@ WHERE tablename = 'road'; an assumption that does not hold when column values are correlated. Regular statistics, because of their per-individual-column nature, cannot capture any knowledge about cross-column correlation. - However, PostgreSQL has the ability to compute + However, PostgreSQL has the ability to compute multivariate statistics, which can capture such information. @@ -1081,7 +1081,7 @@ WHERE tablename = 'road'; Because the number of possible column combinations is very large, it's impractical to compute multivariate statistics automatically. Instead, extended statistics objects, more often - called just statistics objects, can be created to instruct + called just statistics objects, can be created to instruct the server to obtain statistics across interesting sets of columns. @@ -1116,12 +1116,12 @@ WHERE tablename = 'road'; The simplest kind of extended statistics tracks functional - dependencies, a concept used in definitions of database normal forms. - We say that column b is functionally dependent on - column a if knowledge of the value of - a is sufficient to determine the value - of b, that is there are no two rows having the same value - of a but different values of b. + dependencies, a concept used in definitions of database normal forms. + We say that column b is functionally dependent on + column a if knowledge of the value of + a is sufficient to determine the value + of b, that is there are no two rows having the same value + of a but different values of b. In a fully normalized database, functional dependencies should exist only on primary keys and superkeys. However, in practice many data sets are not fully normalized for various reasons; intentional @@ -1142,15 +1142,15 @@ WHERE tablename = 'road'; - To inform the planner about functional dependencies, ANALYZE + To inform the planner about functional dependencies, ANALYZE can collect measurements of cross-column dependency. 
Assessing the degree of dependency between all sets of columns would be prohibitively expensive, so data collection is limited to those groups of columns appearing together in a statistics object defined with - the dependencies option. It is advisable to create - dependencies statistics only for column groups that are + the dependencies option. It is advisable to create + dependencies statistics only for column groups that are strongly correlated, to avoid unnecessary overhead in both - ANALYZE and later query planning. + ANALYZE and later query planning. @@ -1189,7 +1189,7 @@ SELECT stxname, stxkeys, stxdependencies simple equality conditions that compare columns to constant values. They are not used to improve estimates for equality conditions comparing two columns or comparing a column to an expression, nor for - range clauses, LIKE or any other type of condition. + range clauses, LIKE or any other type of condition. @@ -1200,7 +1200,7 @@ SELECT stxname, stxkeys, stxdependencies SELECT * FROM zipcodes WHERE city = 'San Francisco' AND zip = '94105'; - the planner will disregard the city clause as not + the planner will disregard the city clause as not changing the selectivity, which is correct. However, it will make the same assumption about @@ -1233,11 +1233,11 @@ SELECT * FROM zipcodes WHERE city = 'San Francisco' AND zip = '90210'; - To improve such estimates, ANALYZE can collect n-distinct + To improve such estimates, ANALYZE can collect n-distinct statistics for groups of columns. As before, it's impractical to do this for every possible column grouping, so data is collected only for those groups of columns appearing together in a statistics object - defined with the ndistinct option. Data will be collected + defined with the ndistinct option. Data will be collected for each possible combination of two or more columns from the set of listed columns. @@ -1267,17 +1267,17 @@ nd | {"1, 2": 33178, "1, 5": 33178, "2, 5": 27435, "1, 2, 5": 33178} - It's advisable to create ndistinct statistics objects only + It's advisable to create ndistinct statistics objects only on combinations of columns that are actually used for grouping, and for which misestimation of the number of groups is resulting in bad - plans. Otherwise, the ANALYZE cycles are just wasted. + plans. Otherwise, the ANALYZE cycles are just wasted. - Controlling the Planner with Explicit <literal>JOIN</> Clauses + Controlling the Planner with Explicit <literal>JOIN</literal> Clauses join @@ -1286,7 +1286,7 @@ nd | {"1, 2": 33178, "1, 5": 33178, "2, 5": 27435, "1, 2, 5": 33178} It is possible - to control the query planner to some extent by using the explicit JOIN + to control the query planner to some extent by using the explicit JOIN syntax. To see why this matters, we first need some background. @@ -1297,13 +1297,13 @@ SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; the planner is free to join the given tables in any order. For example, it could generate a query plan that joins A to B, using - the WHERE condition a.id = b.id, and then - joins C to this joined table, using the other WHERE + the WHERE condition a.id = b.id, and then + joins C to this joined table, using the other WHERE condition. Or it could join B to C and then join A to that result. Or it could join A to C and then join them with B — but that would be inefficient, since the full Cartesian product of A and C would have to be formed, there being no applicable condition in the - WHERE clause to allow optimization of the join. 
(All + WHERE clause to allow optimization of the join. (All joins in the PostgreSQL executor happen between two input tables, so it's necessary to build up the result in one or another of these fashions.) The important point is that @@ -1347,30 +1347,30 @@ SELECT * FROM a LEFT JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); SELECT * FROM a LEFT JOIN b ON (a.bid = b.id) LEFT JOIN c ON (a.cid = c.id); it is valid to join A to either B or C first. Currently, only - FULL JOIN completely constrains the join order. Most - practical cases involving LEFT JOIN or RIGHT JOIN + FULL JOIN completely constrains the join order. Most + practical cases involving LEFT JOIN or RIGHT JOIN can be rearranged to some extent. - Explicit inner join syntax (INNER JOIN, CROSS - JOIN, or unadorned JOIN) is semantically the same as - listing the input relations in FROM, so it does not + Explicit inner join syntax (INNER JOIN, CROSS + JOIN, or unadorned JOIN) is semantically the same as + listing the input relations in FROM, so it does not constrain the join order. - Even though most kinds of JOIN don't completely constrain + Even though most kinds of JOIN don't completely constrain the join order, it is possible to instruct the PostgreSQL query planner to treat all - JOIN clauses as constraining the join order anyway. + JOIN clauses as constraining the join order anyway. For example, these three queries are logically equivalent: SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id; SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); - But if we tell the planner to honor the JOIN order, + But if we tell the planner to honor the JOIN order, the second and third take less time to plan than the first. This effect is not worth worrying about for only three tables, but it can be a lifesaver with many tables. @@ -1378,19 +1378,19 @@ SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); To force the planner to follow the join order laid out by explicit - JOINs, + JOINs, set the run-time parameter to 1. (Other possible values are discussed below.) You do not need to constrain the join order completely in order to - cut search time, because it's OK to use JOIN operators - within items of a plain FROM list. For example, consider: + cut search time, because it's OK to use JOIN operators + within items of a plain FROM list. For example, consider: SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...; - With join_collapse_limit = 1, this + With join_collapse_limit = 1, this forces the planner to join A to B before joining them to other tables, but doesn't constrain its choices otherwise. In this example, the number of possible join orders is reduced by a factor of 5. @@ -1400,7 +1400,7 @@ SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...; Constraining the planner's search in this way is a useful technique both for reducing planning time and for directing the planner to a good query plan. If the planner chooses a bad join order by default, - you can force it to choose a better order via JOIN syntax + you can force it to choose a better order via JOIN syntax — assuming that you know of a better order, that is. Experimentation is recommended. 
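To make this concrete, here is a minimal sketch reusing the tables a, b, and c from the queries above (the chosen plan will of course depend on your data):

SET join_collapse_limit = 1;
-- With the limit set to 1, the planner honors the explicit JOIN
-- nesting: b is joined to c first, then a is joined to that result.
EXPLAIN SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);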
@@ -1415,22 +1415,22 @@ FROM x, y, WHERE somethingelse; This situation might arise from use of a view that contains a join; - the view's SELECT rule will be inserted in place of the view + the view's SELECT rule will be inserted in place of the view reference, yielding a query much like the above. Normally, the planner will try to collapse the subquery into the parent, yielding: SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; This usually results in a better plan than planning the subquery - separately. (For example, the outer WHERE conditions might be such that + separately. (For example, the outer WHERE conditions might be such that joining X to A first eliminates many rows of A, thus avoiding the need to form the full logical output of the subquery.) But at the same time, we have increased the planning time; here, we have a five-way join problem replacing two separate three-way join problems. Because of the exponential growth of the number of possibilities, this makes a big difference. The planner tries to avoid getting stuck in huge join search - problems by not collapsing a subquery if more than from_collapse_limit - FROM items would result in the parent + problems by not collapsing a subquery if more than from_collapse_limit + FROM items would result in the parent query. You can trade off planning time against quality of plan by adjusting this run-time parameter up or down. @@ -1439,11 +1439,11 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; and are similarly named because they do almost the same thing: one controls - when the planner will flatten out subqueries, and the + when the planner will flatten out subqueries, and the other controls when it will flatten out explicit joins. Typically - you would either set join_collapse_limit equal to - from_collapse_limit (so that explicit joins and subqueries - act similarly) or set join_collapse_limit to 1 (if you want + you would either set join_collapse_limit equal to + from_collapse_limit (so that explicit joins and subqueries + act similarly) or set join_collapse_limit to 1 (if you want to control join order with explicit joins). But you might set them differently if you are trying to fine-tune the trade-off between planning time and run time. @@ -1468,7 +1468,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - When using multiple INSERTs, turn off autocommit and just do + When using multiple INSERTs, turn off autocommit and just do one commit at the end. (In plain SQL, this means issuing BEGIN at the start and COMMIT at the end. Some client libraries might @@ -1505,14 +1505,14 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; EXECUTE as many times as required. This avoids some of the overhead of repeatedly parsing and planning INSERT. Different interfaces provide this facility - in different ways; look for prepared statements in the interface + in different ways; look for prepared statements in the interface documentation. Note that loading a large number of rows using COPY is almost always faster than using - INSERT, even if PREPARE is used and + INSERT, even if PREPARE is used and multiple insertions are batched into a single transaction. @@ -1523,7 +1523,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; needs to be written, because in case of an error, the files containing the newly loaded data will be removed anyway. However, this consideration only applies when - is minimal as all commands + is minimal as all commands must write WAL otherwise. 
@@ -1557,7 +1557,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Just as with indexes, a foreign key constraint can be checked - in bulk more efficiently than row-by-row. So it might be + in bulk more efficiently than row-by-row. So it might be useful to drop foreign key constraints, load data, and re-create the constraints. Again, there is a trade-off between data load speed and loss of error checking while the constraint is missing. @@ -1570,7 +1570,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; the row's foreign key constraint). Loading many millions of rows can cause the trigger event queue to overflow available memory, leading to intolerable swapping or even outright failure of the command. Therefore - it may be necessary, not just desirable, to drop and re-apply + it may be necessary, not just desirable, to drop and re-apply foreign keys when loading large amounts of data. If temporarily removing the constraint isn't acceptable, the only other recourse may be to split up the load operation into smaller transactions. @@ -1584,8 +1584,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Temporarily increasing the configuration variable when loading large amounts of data can lead to improved performance. This will help to speed up CREATE - INDEX commands and ALTER TABLE ADD FOREIGN KEY commands. - It won't do much for COPY itself, so this advice is + INDEX commands and ALTER TABLE ADD FOREIGN KEY commands. + It won't do much for COPY itself, so this advice is only useful when you are using one or both of the above techniques. @@ -1617,8 +1617,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; new base backup after the load has completed than to process a large amount of incremental WAL data. To prevent incremental WAL logging while loading, disable archiving and streaming replication, by setting - to minimal, - to off, and + to minimal, + to off, and to zero. But note that changing these settings requires a server restart. @@ -1628,8 +1628,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; process the WAL data, doing this will actually make certain commands faster, because they are designed not to write WAL at all if wal_level - is minimal. (They can guarantee crash safety more cheaply - by doing an fsync at the end than by writing WAL.) + is minimal. (They can guarantee crash safety more cheaply + by doing an fsync at the end than by writing WAL.) This applies to the following commands: @@ -1683,21 +1683,21 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - Some Notes About <application>pg_dump</> + Some Notes About <application>pg_dump</application> - Dump scripts generated by pg_dump automatically apply + Dump scripts generated by pg_dump automatically apply several, but not all, of the above guidelines. To reload a - pg_dump dump as quickly as possible, you need to + pg_dump dump as quickly as possible, you need to do a few extra things manually. (Note that these points apply while - restoring a dump, not while creating it. + restoring a dump, not while creating it. The same points apply whether loading a text dump with - psql or using pg_restore to load - from a pg_dump archive file.) + psql or using pg_restore to load + from a pg_dump archive file.) - By default, pg_dump uses COPY, and when + By default, pg_dump uses COPY, and when it is generating a complete schema-and-data dump, it is careful to load data before creating indexes and foreign keys. 
So in this case several guidelines are handled automatically. What is left @@ -1713,10 +1713,10 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; If using WAL archiving or streaming replication, consider disabling - them during the restore. To do that, set archive_mode - to off, - wal_level to minimal, and - max_wal_senders to zero before loading the dump. + them during the restore. To do that, set archive_mode + to off, + wal_level to minimal, and + max_wal_senders to zero before loading the dump. Afterwards, set them back to the right values and take a fresh base backup. @@ -1724,49 +1724,49 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Experiment with the parallel dump and restore modes of both - pg_dump and pg_restore and find the + pg_dump and pg_restore and find the optimal number of concurrent jobs to use. Dumping and restoring in - parallel by means of the option should give you a significantly higher performance over the serial mode. Consider whether the whole dump should be restored as a single - transaction. To do that, pass the If multiple CPUs are available in the database server, consider using - pg_restore's option. This allows concurrent data loading and index creation. - Run ANALYZE afterwards. + Run ANALYZE afterwards. - A data-only dump will still use COPY, but it does not + A data-only dump will still use COPY, but it does not drop or recreate indexes, and it does not normally touch foreign keys. You can get the effect of disabling foreign keys by using - the option — but realize that that eliminates, rather than just postpones, foreign key validation, and so it is possible to insert bad data if you use it. @@ -1778,7 +1778,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; while loading the data, but don't bother increasing maintenance_work_mem; rather, you'd do that while manually recreating indexes and foreign keys afterwards. - And don't forget to ANALYZE when you're done; see + And don't forget to ANALYZE when you're done; see and for more information. @@ -1808,7 +1808,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Place the database cluster's data directory in a memory-backed - file system (i.e. RAM disk). This eliminates all + file system (i.e. RAM disk). This eliminates all database disk I/O, but limits data storage to the amount of available memory (and perhaps swap). @@ -1826,7 +1826,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Turn off ; there might be no need to force WAL writes to disk on every commit. This setting does risk transaction loss (though not data - corruption) in case of a crash of the database. + corruption) in case of a crash of the database. @@ -1842,7 +1842,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Increase and ; this reduces the frequency of checkpoints, but increases the storage requirements of - /pg_wal. + /pg_wal. diff --git a/doc/src/sgml/pgbuffercache.sgml b/doc/src/sgml/pgbuffercache.sgml index 4e53009ae0..18ac781d0d 100644 --- a/doc/src/sgml/pgbuffercache.sgml +++ b/doc/src/sgml/pgbuffercache.sgml @@ -37,7 +37,7 @@
- <structname>pg_buffercache</> Columns + <structname>pg_buffercache</structname> Columns @@ -54,7 +54,7 @@ bufferidinteger - ID, in the range 1..shared_buffers + ID, in the range 1..shared_buffers @@ -83,7 +83,7 @@ smallint Fork number within the relation; see - include/common/relpath.h + include/common/relpath.h @@ -120,22 +120,22 @@ There is one row for each buffer in the shared cache. Unused buffers are - shown with all fields null except bufferid. Shared system + shown with all fields null except bufferid. Shared system catalogs are shown as belonging to database zero. Because the cache is shared by all the databases, there will normally be pages from relations not belonging to the current database. This means - that there may not be matching join rows in pg_class for + that there may not be matching join rows in pg_class for some rows, or that there could even be incorrect joins. If you are - trying to join against pg_class, it's a good idea to - restrict the join to rows having reldatabase equal to + trying to join against pg_class, it's a good idea to + restrict the join to rows having reldatabase equal to the current database's OID or zero. - When the pg_buffercache view is accessed, internal buffer + When the pg_buffercache view is accessed, internal buffer manager locks are taken for long enough to copy all the buffer state data that the view will display. This ensures that the view produces a consistent set of results, while not diff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml index 34d8621958..80595e193b 100644 --- a/doc/src/sgml/pgcrypto.sgml +++ b/doc/src/sgml/pgcrypto.sgml @@ -13,8 +13,8 @@ - The pgcrypto module provides cryptographic functions for - PostgreSQL. + The pgcrypto module provides cryptographic functions for + PostgreSQL. @@ -33,19 +33,19 @@ digest(data bytea, type text) returns bytea - Computes a binary hash of the given data. - type is the algorithm to use. + Computes a binary hash of the given data. + type is the algorithm to use. Standard algorithms are md5, sha1, sha224, sha256, sha384 and sha512. - If pgcrypto was built with + If pgcrypto was built with OpenSSL, more algorithms are available, as detailed in . If you want the digest as a hexadecimal string, use - encode() on the result. For example: + encode() on the result. For example: CREATE OR REPLACE FUNCTION sha1(bytea) returns text AS $$ SELECT encode(digest($1, 'sha1'), 'hex') @@ -67,12 +67,12 @@ hmac(data bytea, key text, type text) returns bytea - Calculates hashed MAC for data with key key. - type is the same as in digest(). + Calculates hashed MAC for data with key key. + type is the same as in digest(). - This is similar to digest() but the hash can only be + This is similar to digest() but the hash can only be recalculated knowing the key. This prevents the scenario of someone altering data and also changing the hash to match. @@ -88,14 +88,14 @@ hmac(data bytea, key text, type text) returns bytea Password Hashing Functions - The functions crypt() and gen_salt() + The functions crypt() and gen_salt() are specifically designed for hashing passwords. - crypt() does the hashing and gen_salt() + crypt() does the hashing and gen_salt() prepares algorithm parameters for it. 
- The algorithms in crypt() differ from the usual + The algorithms in crypt() differ from the usual MD5 or SHA1 hashing algorithms in the following respects: @@ -108,7 +108,7 @@ hmac(data bytea, key text, type text) returns bytea - They use a random value, called the salt, so that users + They use a random value, called the salt, so that users having the same password will have different encrypted passwords. This is also an additional defense against reversing the algorithm. @@ -134,7 +134,7 @@ hmac(data bytea, key text, type text) returns bytea
- Supported Algorithms for <function>crypt()</> + Supported Algorithms for <function>crypt()</function> @@ -148,7 +148,7 @@ hmac(data bytea, key text, type text) returns bytea - bf + bf 72 yes 128 @@ -156,7 +156,7 @@ hmac(data bytea, key text, type text) returns bytea Blowfish-based, variant 2a - md5 + md5 unlimited no 48 @@ -164,7 +164,7 @@ hmac(data bytea, key text, type text) returns bytea MD5-based crypt - xdes + xdes 8 yes 24 @@ -172,7 +172,7 @@ hmac(data bytea, key text, type text) returns bytea Extended DES - des + des 8 no 12 @@ -184,7 +184,7 @@ hmac(data bytea, key text, type text) returns bytea
- <function>crypt()</> + <function>crypt()</function> crypt @@ -195,10 +195,10 @@ crypt(password text, salt text) returns text - Calculates a crypt(3)-style hash of password. + Calculates a crypt(3)-style hash of password. When storing a new password, you need to use - gen_salt() to generate a new salt value. - To check a password, pass the stored hash value as salt, + gen_salt() to generate a new salt value. + To check a password, pass the stored hash value as salt, and test whether the result matches the stored value. @@ -212,12 +212,12 @@ UPDATE ... SET pswhash = crypt('new password', gen_salt('md5')); SELECT (pswhash = crypt('entered password', pswhash)) AS pswmatch FROM ... ; - This returns true if the entered password is correct. + This returns true if the entered password is correct. - <function>gen_salt()</> + <function>gen_salt()</function> gen_salt @@ -228,30 +228,30 @@ gen_salt(type text [, iter_count integer ]) returns text - Generates a new random salt string for use in crypt(). - The salt string also tells crypt() which algorithm to use. + Generates a new random salt string for use in crypt(). + The salt string also tells crypt() which algorithm to use. - The type parameter specifies the hashing algorithm. + The type parameter specifies the hashing algorithm. The accepted types are: des, xdes, md5 and bf. - The iter_count parameter lets the user specify the iteration + The iter_count parameter lets the user specify the iteration count, for algorithms that have one. The higher the count, the more time it takes to hash the password and therefore the more time to break it. Although with too high a count the time to calculate a hash may be several years - — which is somewhat impractical. If the iter_count + — which is somewhat impractical. If the iter_count parameter is omitted, the default iteration count is used. - Allowed values for iter_count depend on the algorithm and + Allowed values for iter_count depend on the algorithm and are shown in . - Iteration Counts for <function>crypt()</> + Iteration Counts for <function>crypt()</function> @@ -263,13 +263,13 @@ gen_salt(type text [, iter_count integer ]) returns text - xdes + xdes 725 1 16777215 - bf + bf 6 4 31 @@ -310,63 +310,63 @@ gen_salt(type text [, iter_count integer ]) returns text Algorithm Hashes/sec - For [a-z] - For [A-Za-z0-9] - Duration relative to md5 hash + For [a-z] + For [A-Za-z0-9] + Duration relative to md5 hash - crypt-bf/8 + crypt-bf/8 1792 4 years 3927 years 100k - crypt-bf/7 + crypt-bf/7 3648 2 years 1929 years 50k - crypt-bf/6 + crypt-bf/6 7168 1 year 982 years 25k - crypt-bf/5 + crypt-bf/5 13504 188 days 521 years 12.5k - crypt-md5 + crypt-md5 171584 15 days 41 years 1k - crypt-des + crypt-des 23221568 157.5 minutes 108 days 7 - sha1 + sha1 37774272 90 minutes 68 days 4 - md5 (hash) + md5 (hash) 150085504 22.5 minutes 17 days @@ -388,18 +388,18 @@ gen_salt(type text [, iter_count integer ]) returns text - crypt-des and crypt-md5 algorithm numbers are - taken from John the Ripper v1.6.38 -test output. + crypt-des and crypt-md5 algorithm numbers are + taken from John the Ripper v1.6.38 -test output. - md5 hash numbers are from mdcrack 1.2. + md5 hash numbers are from mdcrack 1.2. - sha1 numbers are from lcrack-20031130-beta. + sha1 numbers are from lcrack-20031130-beta. @@ -407,10 +407,10 @@ gen_salt(type text [, iter_count integer ]) returns text crypt-bf numbers are taken using a simple program that loops over 1000 8-character passwords. 
That way I can show the speed with different numbers of iterations. For reference: john - -test shows 13506 loops/sec for crypt-bf/5. + -test shows 13506 loops/sec for crypt-bf/5. (The very small difference in results is in accordance with the fact that the - crypt-bf implementation in pgcrypto + crypt-bf implementation in pgcrypto is the same one used in John the Ripper.) @@ -436,7 +436,7 @@ gen_salt(type text [, iter_count integer ]) returns text - An encrypted PGP message consists of 2 parts, or packets: + An encrypted PGP message consists of 2 parts, or packets: @@ -459,7 +459,7 @@ gen_salt(type text [, iter_count integer ]) returns text The given password is hashed using a String2Key (S2K) algorithm. This is - rather similar to crypt() algorithms — purposefully + rather similar to crypt() algorithms — purposefully slow and with random salt — but it produces a full-length binary key. @@ -540,8 +540,8 @@ pgp_sym_encrypt(data text, psw text [, options text ]) returns bytea pgp_sym_encrypt_bytea(data bytea, psw text [, options text ]) returns bytea - Encrypt data with a symmetric PGP key psw. - The options parameter can contain option settings, + Encrypt data with a symmetric PGP key psw. + The options parameter can contain option settings, as described below. @@ -565,12 +565,12 @@ pgp_sym_decrypt_bytea(msg bytea, psw text [, options text ]) returns bytea Decrypt a symmetric-key-encrypted PGP message. - Decrypting bytea data with pgp_sym_decrypt is disallowed. + Decrypting bytea data with pgp_sym_decrypt is disallowed. This is to avoid outputting invalid character data. Decrypting - originally textual data with pgp_sym_decrypt_bytea is fine. + originally textual data with pgp_sym_decrypt_bytea is fine. - The options parameter can contain option settings, + The options parameter can contain option settings, as described below. @@ -591,11 +591,11 @@ pgp_pub_encrypt(data text, key bytea [, options text ]) returns bytea pgp_pub_encrypt_bytea(data bytea, key bytea [, options text ]) returns bytea - Encrypt data with a public PGP key key. + Encrypt data with a public PGP key key. Giving this function a secret key will produce an error. - The options parameter can contain option settings, + The options parameter can contain option settings, as described below. @@ -616,19 +616,19 @@ pgp_pub_decrypt(msg bytea, key bytea [, psw text [, options text ]]) returns tex pgp_pub_decrypt_bytea(msg bytea, key bytea [, psw text [, options text ]]) returns bytea - Decrypt a public-key-encrypted message. key must be the + Decrypt a public-key-encrypted message. key must be the secret key corresponding to the public key that was used to encrypt. If the secret key is password-protected, you must give the password in - psw. If there is no password, but you want to specify + psw. If there is no password, but you want to specify options, you need to give an empty password. - Decrypting bytea data with pgp_pub_decrypt is disallowed. + Decrypting bytea data with pgp_pub_decrypt is disallowed. This is to avoid outputting invalid character data. Decrypting - originally textual data with pgp_pub_decrypt_bytea is fine. + originally textual data with pgp_pub_decrypt_bytea is fine. - The options parameter can contain option settings, + The options parameter can contain option settings, as described below. @@ -644,7 +644,7 @@ pgp_pub_decrypt_bytea(msg bytea, key bytea [, psw text [, options text ]]) retur pgp_key_id(bytea) returns text - pgp_key_id extracts the key ID of a PGP public or secret key. 
+ pgp_key_id extracts the key ID of a PGP public or secret key. Or it gives the key ID that was used for encrypting the data, if given an encrypted message. @@ -654,7 +654,7 @@ pgp_key_id(bytea) returns text - SYMKEY + SYMKEY The message is encrypted with a symmetric key. @@ -662,12 +662,12 @@ pgp_key_id(bytea) returns text - ANYKEY + ANYKEY The message is public-key encrypted, but the key ID has been removed. That means you will need to try all your secret keys on it to see - which one decrypts it. pgcrypto itself does not produce + which one decrypts it. pgcrypto itself does not produce such messages. @@ -675,7 +675,7 @@ pgp_key_id(bytea) returns text Note that different keys may have the same ID. This is rare but a normal event. The client application should then try to decrypt with each one, - to see which fits — like handling ANYKEY. + to see which fits — like handling ANYKEY. @@ -700,8 +700,8 @@ dearmor(data text) returns bytea - If the keys and values arrays are specified, - an armor header is added to the armored format for each + If the keys and values arrays are specified, + an armor header is added to the armored format for each key/value pair. Both arrays must be single-dimensional, and they must be of the same length. The keys and values cannot contain any non-ASCII characters. @@ -719,8 +719,8 @@ dearmor(data text) returns bytea pgp_armor_headers(data text, key out text, value out text) returns setof record - pgp_armor_headers() extracts the armor headers from - data. The return value is a set of rows with two columns, + pgp_armor_headers() extracts the armor headers from + data. The return value is a set of rows with two columns, key and value. If the keys or values contain any non-ASCII characters, they are treated as UTF-8. @@ -924,7 +924,7 @@ gpg --gen-key - The preferred key type is DSA and Elgamal. + The preferred key type is DSA and Elgamal. For RSA encryption you must create either DSA or RSA sign-only key @@ -950,7 +950,7 @@ gpg -a --export-secret-keys KEYID > secret.key - You need to use dearmor() on these keys before giving them to + You need to use dearmor() on these keys before giving them to the PGP functions. Or if you can handle binary data, you can drop -a from the command. @@ -982,7 +982,7 @@ gpg -a --export-secret-keys KEYID > secret.key No support for several subkeys. This may seem like a problem, as this is common practice. On the other hand, you should not use your regular - GPG/PGP keys with pgcrypto, but create new ones, + GPG/PGP keys with pgcrypto, but create new ones, as the usage scenario is rather different. @@ -1056,15 +1056,15 @@ decrypt_iv(data bytea, key bytea, iv bytea, type text) returns bytea type string is: -algorithm - mode /pad: padding +algorithm - mode /pad: padding - where algorithm is one of: + where algorithm is one of: bf — Blowfish aes — AES (Rijndael-128) - and mode is one of: + and mode is one of: @@ -1078,7 +1078,7 @@ decrypt_iv(data bytea, key bytea, iv bytea, type text) returns bytea - and padding is one of: + and padding is one of: @@ -1100,8 +1100,8 @@ encrypt(data, 'fooz', 'bf-cbc/pad:pkcs') - In encrypt_iv and decrypt_iv, the - iv parameter is the initial value for the CBC mode; + In encrypt_iv and decrypt_iv, the + iv parameter is the initial value for the CBC mode; it is ignored for ECB. It is clipped or padded with zeroes if not exactly block size. It defaults to all zeroes in the functions without this parameter. 
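For reference, a minimal round trip through these functions might look like the sketch below. The pass phrase, key, and IV are placeholders, and cipher-algo=aes256 is only one plausible option setting:

-- symmetric PGP: encrypt, then decrypt with the same pass phrase
SELECT pgp_sym_decrypt(
         pgp_sym_encrypt('secret message', 'mypass', 'cipher-algo=aes256'),
         'mypass');

-- raw AES in CBC mode with an explicit 16-byte key and IV;
-- decrypt_iv returns bytea, so convert the result back to text
SELECT convert_from(
         decrypt_iv(
           encrypt_iv('secret message', '0123456789abcdef', 'fedcba9876543210',
                      'aes-cbc/pad:pkcs'),
           '0123456789abcdef', 'fedcba9876543210', 'aes-cbc/pad:pkcs'),
         'UTF8');

Both queries should return the original string, 'secret message'.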
@@ -1119,7 +1119,7 @@ encrypt(data, 'fooz', 'bf-cbc/pad:pkcs') gen_random_bytes(count integer) returns bytea - Returns count cryptographically strong random bytes. + Returns count cryptographically strong random bytes. At most 1024 bytes can be extracted at a time. This is to avoid draining the randomness generator pool. @@ -1143,7 +1143,7 @@ gen_random_uuid() returns uuid Configuration - pgcrypto configures itself according to the findings of the + pgcrypto configures itself according to the findings of the main PostgreSQL configure script. The options that affect it are --with-zlib and --with-openssl. @@ -1253,9 +1253,9 @@ gen_random_uuid() returns uuid Security Limitations - All pgcrypto functions run inside the database server. + All pgcrypto functions run inside the database server. That means that all - the data and passwords move between pgcrypto and client + the data and passwords move between pgcrypto and client applications in clear text. Thus you must: @@ -1276,7 +1276,7 @@ gen_random_uuid() returns uuid The implementation does not resist side-channel attacks. For example, the time required for - a pgcrypto decryption function to complete varies among + a pgcrypto decryption function to complete varies among ciphertexts of a given size. @@ -1342,7 +1342,7 @@ gen_random_uuid() returns uuid - Jean-Luc Cooke Fortuna-based /dev/random driver for Linux. + Jean-Luc Cooke Fortuna-based /dev/random driver for Linux. diff --git a/doc/src/sgml/pgfreespacemap.sgml b/doc/src/sgml/pgfreespacemap.sgml index 43e154a2f3..0122d278e3 100644 --- a/doc/src/sgml/pgfreespacemap.sgml +++ b/doc/src/sgml/pgfreespacemap.sgml @@ -8,7 +8,7 @@ - The pg_freespacemap module provides a means for examining the + The pg_freespacemap module provides a means for examining the free space map (FSM). It provides a function called pg_freespace, or two overloaded functions, to be precise. The functions show the value recorded in the free space map for @@ -36,7 +36,7 @@ Returns the amount of free space on the page of the relation, specified - by blkno, according to the FSM. + by blkno, according to the FSM. @@ -50,7 +50,7 @@ Displays the amount of free space on each page of the relation, - according to the FSM. A set of (blkno bigint, avail int2) + according to the FSM. A set of (blkno bigint, avail int2) tuples is returned, one tuple for each page in the relation. @@ -59,7 +59,7 @@ The values stored in the free space map are not exact. They're rounded - to precision of 1/256th of BLCKSZ (32 bytes with default BLCKSZ), and + to precision of 1/256th of BLCKSZ (32 bytes with default BLCKSZ), and they're not kept fully up-to-date as tuples are inserted and updated. diff --git a/doc/src/sgml/pgprewarm.sgml b/doc/src/sgml/pgprewarm.sgml index c6b94a8b72..e0a6d0503f 100644 --- a/doc/src/sgml/pgprewarm.sgml +++ b/doc/src/sgml/pgprewarm.sgml @@ -11,11 +11,11 @@ The pg_prewarm module provides a convenient way to load relation data into either the operating system buffer cache or the PostgreSQL buffer cache. Prewarming - can be performed manually using the pg_prewarm function, - or can be performed automatically by including pg_prewarm in + can be performed manually using the pg_prewarm function, + or can be performed automatically by including pg_prewarm in . 
In the latter case, the system will run a background worker which periodically records the contents - of shared buffers in a file called autoprewarm.blocks and + of shared buffers in a file called autoprewarm.blocks and will, using 2 background workers, reload those same blocks after a restart. @@ -77,10 +77,10 @@ autoprewarm_dump_now() RETURNS int8 - Update autoprewarm.blocks immediately. This may be useful + Update autoprewarm.blocks immediately. This may be useful if the autoprewarm worker is not running but you anticipate running it after the next restart. The return value is the number of records written - to autoprewarm.blocks. + to autoprewarm.blocks. @@ -92,7 +92,7 @@ autoprewarm_dump_now() RETURNS int8 pg_prewarm.autoprewarm (boolean) - pg_prewarm.autoprewarm configuration parameter + pg_prewarm.autoprewarm configuration parameter @@ -109,12 +109,12 @@ autoprewarm_dump_now() RETURNS int8 pg_prewarm.autoprewarm_interval (int) - pg_prewarm.autoprewarm_interval configuration parameter + pg_prewarm.autoprewarm_interval configuration parameter - This is the interval between updates to autoprewarm.blocks. + This is the interval between updates to autoprewarm.blocks. The default is 300 seconds. If set to 0, the file will not be dumped at regular intervals, but only when the server is shut down. diff --git a/doc/src/sgml/pgrowlocks.sgml b/doc/src/sgml/pgrowlocks.sgml index 65d532e081..57dcf6beb2 100644 --- a/doc/src/sgml/pgrowlocks.sgml +++ b/doc/src/sgml/pgrowlocks.sgml @@ -37,7 +37,7 @@ pgrowlocks(text) returns setof record
- <function>pgrowlocks</> Output Columns + <function>pgrowlocks</function> Output Columns @@ -73,9 +73,9 @@ pgrowlocks(text) returns setof record lock_typetext[]Lock mode of lockers (more than one if multitransaction), - an array of Key Share, Share, - For No Key Update, No Key Update, - For Update, Update. + an array of Key Share, Share, + For No Key Update, No Key Update, + For Update, Update. @@ -89,7 +89,7 @@ pgrowlocks(text) returns setof record
- pgrowlocks takes AccessShareLock for the + pgrowlocks takes AccessShareLock for the target table and reads each row one by one to collect the row locking information. This is not very speedy for a large table. Note that: diff --git a/doc/src/sgml/pgstandby.sgml b/doc/src/sgml/pgstandby.sgml index bf4edea9f1..7feba8cdd6 100644 --- a/doc/src/sgml/pgstandby.sgml +++ b/doc/src/sgml/pgstandby.sgml @@ -31,14 +31,14 @@ Description - pg_standby supports creation of a warm standby + pg_standby supports creation of a warm standby database server. It is designed to be a production-ready program, as well as a customizable template should you require specific modifications. - pg_standby is designed to be a waiting - restore_command, which is needed to turn a standard + pg_standby is designed to be a waiting + restore_command, which is needed to turn a standard archive recovery into a warm standby operation. Other configuration is required as well, all of which is described in the main server manual (see ). @@ -46,33 +46,33 @@ To configure a standby - server to use pg_standby, put this into its + server to use pg_standby, put this into its recovery.conf configuration file: -restore_command = 'pg_standby archiveDir %f %p %r' +restore_command = 'pg_standby archiveDir %f %p %r' - where archiveDir is the directory from which WAL segment + where archiveDir is the directory from which WAL segment files should be restored. - If restartwalfile is specified, normally by using the + If restartwalfile is specified, normally by using the %r macro, then all WAL files logically preceding this - file will be removed from archivelocation. This minimizes + file will be removed from archivelocation. This minimizes the number of files that need to be retained, while preserving crash-restart capability. Use of this parameter is appropriate if the - archivelocation is a transient staging area for this - particular standby server, but not when the - archivelocation is intended as a long-term WAL archive area. + archivelocation is a transient staging area for this + particular standby server, but not when the + archivelocation is intended as a long-term WAL archive area. pg_standby assumes that - archivelocation is a directory readable by the - server-owning user. If restartwalfile (or -k) + archivelocation is a directory readable by the + server-owning user. If restartwalfile (or -k) is specified, - the archivelocation directory must be writable too. + the archivelocation directory must be writable too. - There are two ways to fail over to a warm standby database server + There are two ways to fail over to a warm standby database server when the master server fails: @@ -85,7 +85,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' the standby server has fallen behind, but if there is a lot of unapplied WAL it can be a long time before the standby server becomes ready. To trigger a smart failover, create a trigger file containing - the word smart, or just create it and leave it empty. + the word smart, or just create it and leave it empty.
@@ -96,8 +96,8 @@ restore_command = 'pg_standby archiveDir %f %p %r' In fast failover, the server is brought up immediately. Any WAL files in the archive that have not yet been applied will be ignored, and all transactions in those files are lost. To trigger a fast failover, - create a trigger file and write the word fast into it. - pg_standby can also be configured to execute a fast + create a trigger file and write the word fast into it. + pg_standby can also be configured to execute a fast failover automatically if no new WAL file appears within a defined interval. @@ -120,7 +120,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' - Use cp or copy command to restore WAL files + Use cp or copy command to restore WAL files from archive. This is the only supported behavior so this option is useless. @@ -130,7 +130,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' - Print lots of debug logging output on stderr. + Print lots of debug logging output on stderr. @@ -147,8 +147,8 @@ restore_command = 'pg_standby archiveDir %f %p %r' restartwalfile is specified, since that specification method is more accurate in determining the correct archive cut-off point. - Use of this parameter is deprecated as of - PostgreSQL 8.3; it is safer and more efficient to + Use of this parameter is deprecated as of + PostgreSQL 8.3; it is safer and more efficient to specify a restartwalfile parameter. A too small setting could result in removal of files that are still needed for a restart of the standby server, while a too large setting wastes @@ -158,12 +158,12 @@ restore_command = 'pg_standby archiveDir %f %p %r' - maxretries + maxretries Set the maximum number of times to retry the copy command if it fails (default 3). After each failure, we wait for - sleeptime * num_retries + sleeptime * num_retries so that the wait time increases progressively. So by default, we will wait 5 secs, 10 secs, then 15 secs before reporting the failure back to the standby server. This will be @@ -174,7 +174,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' - sleeptime + sleeptime Set the number of seconds (up to 60, default 5) to sleep between @@ -186,21 +186,21 @@ restore_command = 'pg_standby archiveDir %f %p %r' - triggerfile + triggerfile Specify a trigger file whose presence should cause failover. It is recommended that you use a structured file name to avoid confusion as to which server is being triggered when multiple servers exist on the same system; for example - /tmp/pgsql.trigger.5432. + /tmp/pgsql.trigger.5432. - - + + Print the pg_standby version and exit. @@ -209,7 +209,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' - maxwaittime + maxwaittime Set the maximum number of seconds to wait for the next WAL file, @@ -222,8 +222,8 @@ restore_command = 'pg_standby archiveDir %f %p %r' - - + + Show help about pg_standby command line @@ -241,18 +241,18 @@ restore_command = 'pg_standby archiveDir %f %p %r' pg_standby is designed to work with - PostgreSQL 8.2 and later. + PostgreSQL 8.2 and later. - PostgreSQL 8.3 provides the %r macro, + PostgreSQL 8.3 provides the %r macro, which is designed to let pg_standby know the - last file it needs to keep. With PostgreSQL 8.2, the + last file it needs to keep. With PostgreSQL 8.2, the -k option must be used if archive cleanup is required. This option remains available in 8.3, but its use is deprecated. - PostgreSQL 8.4 provides the - recovery_end_command option. Without this option + PostgreSQL 8.4 provides the + recovery_end_command option. 
Without this option a leftover trigger file can be hazardous. @@ -276,13 +276,13 @@ restore_command = 'pg_standby -d -s 2 -t /tmp/pgsql.trigger.5442 .../archive %f recovery_end_command = 'rm -f /tmp/pgsql.trigger.5442' where the archive directory is physically located on the standby server, - so that the archive_command is accessing it across NFS, - but the files are local to the standby (enabling use of ln). + so that the archive_command is accessing it across NFS, + but the files are local to the standby (enabling use of ln). This will: - produce debugging output in standby.log + produce debugging output in standby.log @@ -293,7 +293,7 @@ recovery_end_command = 'rm -f /tmp/pgsql.trigger.5442' stop waiting only when a trigger file called - /tmp/pgsql.trigger.5442 appears, + /tmp/pgsql.trigger.5442 appears, and perform failover according to its content @@ -320,18 +320,18 @@ restore_command = 'pg_standby -d -s 5 -t C:\pgsql.trigger.5442 ...\archive %f %p recovery_end_command = 'del C:\pgsql.trigger.5442' Note that backslashes need to be doubled in the - archive_command, but not in the - restore_command or recovery_end_command. + archive_command, but not in the + restore_command or recovery_end_command. This will: - use the copy command to restore WAL files from archive + use the copy command to restore WAL files from archive - produce debugging output in standby.log + produce debugging output in standby.log @@ -342,7 +342,7 @@ recovery_end_command = 'del C:\pgsql.trigger.5442' stop waiting only when a trigger file called - C:\pgsql.trigger.5442 appears, + C:\pgsql.trigger.5442 appears, and perform failover according to its content @@ -360,16 +360,16 @@ recovery_end_command = 'del C:\pgsql.trigger.5442' - The copy command on Windows sets the final file size + The copy command on Windows sets the final file size before the file is completely copied, which would ordinarily confuse pg_standby. Therefore - pg_standby waits sleeptime - seconds once it sees the proper file size. GNUWin32's cp + pg_standby waits sleeptime + seconds once it sees the proper file size. GNUWin32's cp sets the file size only after the file copy is complete. - Since the Windows example uses copy at both ends, either + Since the Windows example uses copy at both ends, either or both servers might be accessing the archive directory across the network. diff --git a/doc/src/sgml/pgstatstatements.sgml b/doc/src/sgml/pgstatstatements.sgml index f9dd43e891..4b15a268cd 100644 --- a/doc/src/sgml/pgstatstatements.sgml +++ b/doc/src/sgml/pgstatstatements.sgml @@ -13,20 +13,20 @@ - The module must be loaded by adding pg_stat_statements to + The module must be loaded by adding pg_stat_statements to in - postgresql.conf, because it requires additional shared memory. + postgresql.conf, because it requires additional shared memory. This means that a server restart is needed to add or remove the module. When pg_stat_statements is loaded, it tracks statistics across all databases of the server. To access and manipulate - these statistics, the module provides a view, pg_stat_statements, - and the utility functions pg_stat_statements_reset and - pg_stat_statements. These are not available globally but + these statistics, the module provides a view, pg_stat_statements, + and the utility functions pg_stat_statements_reset and + pg_stat_statements. These are not available globally but can be enabled for a specific database with - CREATE EXTENSION pg_stat_statements. + CREATE EXTENSION pg_stat_statements. 
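Concretely, enabling the module and querying the view might look like this sketch (the column list assumes the stock view definition of this release):

# postgresql.conf (a server restart is required)
shared_preload_libraries = 'pg_stat_statements'

-- then, in each database where the statistics are wanted
CREATE EXTENSION pg_stat_statements;

-- the five statements with the highest cumulative execution time
SELECT query, calls, total_time, rows
  FROM pg_stat_statements
 ORDER BY total_time DESC
 LIMIT 5;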
@@ -34,7 +34,7 @@ The statistics gathered by the module are made available via a - view named pg_stat_statements. This view + view named pg_stat_statements. This view contains one row for each distinct database ID, user ID and query ID (up to the maximum number of distinct statements that the module can track). The columns of the view are shown in @@ -42,7 +42,7 @@ - <structname>pg_stat_statements</> Columns + <structname>pg_stat_statements</structname> Columns @@ -234,9 +234,9 @@ - Plannable queries (that is, SELECT, INSERT, - UPDATE, and DELETE) are combined into a single - pg_stat_statements entry whenever they have identical query + Plannable queries (that is, SELECT, INSERT, + UPDATE, and DELETE) are combined into a single + pg_stat_statements entry whenever they have identical query structures according to an internal hash calculation. Typically, two queries will be considered the same for this purpose if they are semantically equivalent except for the values of literal constants @@ -247,16 +247,16 @@ When a constant's value has been ignored for purposes of matching the query to other queries, the constant is replaced by a parameter symbol, such - as $1, in the pg_stat_statements + as $1, in the pg_stat_statements display. The rest of the query text is that of the first query that had the - particular queryid hash value associated with the - pg_stat_statements entry. + particular queryid hash value associated with the + pg_stat_statements entry. In some cases, queries with visibly different texts might get merged into a - single pg_stat_statements entry. Normally this will happen + single pg_stat_statements entry. Normally this will happen only for semantically equivalent queries, but there is a small chance of hash collisions causing unrelated queries to be merged into one entry. (This cannot happen for queries belonging to different users or databases, @@ -264,41 +264,41 @@ - Since the queryid hash value is computed on the + Since the queryid hash value is computed on the post-parse-analysis representation of the queries, the opposite is also possible: queries with identical texts might appear as separate entries, if they have different meanings as a result of - factors such as different search_path settings. + factors such as different search_path settings. - Consumers of pg_stat_statements may wish to use - queryid (perhaps in combination with - dbid and userid) as a more stable + Consumers of pg_stat_statements may wish to use + queryid (perhaps in combination with + dbid and userid) as a more stable and reliable identifier for each entry than its query text. However, it is important to understand that there are only limited - guarantees around the stability of the queryid hash + guarantees around the stability of the queryid hash value. Since the identifier is derived from the post-parse-analysis tree, its value is a function of, among other things, the internal object identifiers appearing in this representation. This has some counterintuitive implications. For example, - pg_stat_statements will consider two apparently-identical + pg_stat_statements will consider two apparently-identical queries to be distinct, if they reference a table that was dropped and recreated between the executions of the two queries. The hashing process is also sensitive to differences in machine architecture and other facets of the platform. - Furthermore, it is not safe to assume that queryid - will be stable across major versions of PostgreSQL. 
+ Furthermore, it is not safe to assume that queryid + will be stable across major versions of PostgreSQL. - As a rule of thumb, queryid values can be assumed to be + As a rule of thumb, queryid values can be assumed to be stable and comparable only so long as the underlying server version and catalog metadata details stay exactly the same. Two servers participating in replication based on physical WAL replay can be expected - to have identical queryid values for the same query. + to have identical queryid values for the same query. However, logical replication schemes do not promise to keep replicas - identical in all relevant details, so queryid will + identical in all relevant details, so queryid will not be a useful identifier for accumulating costs across a set of logical replicas. If in doubt, direct testing is recommended. @@ -306,13 +306,13 @@ The parameter symbols used to replace constants in representative query texts start from the next number after the - highest $n parameter in the original query - text, or $1 if there was none. It's worth noting that in + highest $n parameter in the original query + text, or $1 if there was none. It's worth noting that in some cases there may be hidden parameter symbols that affect this - numbering. For example, PL/pgSQL uses hidden parameter + numbering. For example, PL/pgSQL uses hidden parameter symbols to insert values of function local variables into queries, so that - a PL/pgSQL statement like SELECT i + 1 INTO j - would have representative text like SELECT i + $2. + a PL/pgSQL statement like SELECT i + 1 INTO j + would have representative text like SELECT i + $2. @@ -320,11 +320,11 @@ not consume shared memory. Therefore, even very lengthy query texts can be stored successfully. However, if many long query texts are accumulated, the external file might grow unmanageably large. As a - recovery method if that happens, pg_stat_statements may + recovery method if that happens, pg_stat_statements may choose to discard the query texts, whereupon all existing entries in - the pg_stat_statements view will show - null query fields, though the statistics associated with - each queryid are preserved. If this happens, consider + the pg_stat_statements view will show + null query fields, though the statistics associated with + each queryid are preserved. If this happens, consider reducing pg_stat_statements.max to prevent recurrences. @@ -345,7 +345,7 @@ pg_stat_statements_reset discards all statistics - gathered so far by pg_stat_statements. + gathered so far by pg_stat_statements. By default, this function can only be executed by superusers. @@ -363,17 +363,17 @@ The pg_stat_statements view is defined in - terms of a function also named pg_stat_statements. + terms of a function also named pg_stat_statements. It is possible for clients to call the pg_stat_statements function directly, and by specifying showtext := false have query text be omitted (that is, the OUT argument that corresponds - to the view's query column will return nulls). This + to the view's query column will return nulls). This feature is intended to support external tools that might wish to avoid the overhead of repeatedly retrieving query texts of indeterminate length. Such tools can instead cache the first query text observed for each entry themselves, since that is - all pg_stat_statements itself does, and then retrieve + all pg_stat_statements itself does, and then retrieve query texts only as needed. 
Since the server stores query texts in a file, this approach may reduce physical I/O for repeated examination of the pg_stat_statements data. @@ -396,7 +396,7 @@ pg_stat_statements.max is the maximum number of statements tracked by the module (i.e., the maximum number of rows - in the pg_stat_statements view). If more distinct + in the pg_stat_statements view). If more distinct statements than that are observed, information about the least-executed statements is discarded. The default value is 5000. @@ -414,11 +414,11 @@ pg_stat_statements.track controls which statements are counted by the module. - Specify top to track top-level statements (those issued - directly by clients), all to also track nested statements - (such as statements invoked within functions), or none to + Specify top to track top-level statements (those issued + directly by clients), all to also track nested statements + (such as statements invoked within functions), or none to disable statement statistics collection. - The default value is top. + The default value is top. Only superusers can change this setting. @@ -433,9 +433,9 @@ pg_stat_statements.track_utility controls whether utility commands are tracked by the module. Utility commands are - all those other than SELECT, INSERT, - UPDATE and DELETE. - The default value is on. + all those other than SELECT, INSERT, + UPDATE and DELETE. + The default value is on. Only superusers can change this setting. @@ -450,10 +450,10 @@ pg_stat_statements.save specifies whether to save statement statistics across server shutdowns. - If it is off then statistics are not saved at + If it is off then statistics are not saved at shutdown nor reloaded at server start. - The default value is on. - This parameter can only be set in the postgresql.conf + The default value is on. + This parameter can only be set in the postgresql.conf file or on the server command line. @@ -464,11 +464,11 @@ The module requires additional shared memory proportional to pg_stat_statements.max. Note that this memory is consumed whenever the module is loaded, even if - pg_stat_statements.track is set to none. + pg_stat_statements.track is set to none. - These parameters must be set in postgresql.conf. + These parameters must be set in postgresql.conf. Typical usage might be: diff --git a/doc/src/sgml/pgstattuple.sgml b/doc/src/sgml/pgstattuple.sgml index a7c67ae645..611df9d0bf 100644 --- a/doc/src/sgml/pgstattuple.sgml +++ b/doc/src/sgml/pgstattuple.sgml @@ -30,13 +30,13 @@ pgstattuple - pgstattuple(regclass) returns record + pgstattuple(regclass) returns record pgstattuple returns a relation's physical length, - percentage of dead tuples, and other info. This may help users + percentage of dead tuples, and other info. This may help users to determine whether vacuum is necessary or not. The argument is the target relation's name (optionally schema-qualified) or OID. For example: @@ -135,15 +135,15 @@ free_percent | 1.95 - pgstattuple judges a tuple is dead if - HeapTupleSatisfiesDirty returns false. + pgstattuple judges a tuple is dead if + HeapTupleSatisfiesDirty returns false. 
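For example, a quick bloat check on a single table might select only the size and dead-tuple columns; pg_catalog.pg_proc is an arbitrary target here, and the column names follow the output shown above:

SELECT table_len, dead_tuple_percent, free_percent
  FROM pgstattuple('pg_catalog.pg_proc');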
- pgstattuple(text) returns record + pgstattuple(text) returns record @@ -161,7 +161,7 @@ free_percent | 1.95 pgstatindex - pgstatindex(regclass) returns record + pgstatindex(regclass) returns record @@ -225,7 +225,7 @@ leaf_fragmentation | 0 internal_pages bigint - Number of internal (upper-level) pages + Number of internal (upper-level) pages @@ -264,14 +264,14 @@ leaf_fragmentation | 0 - The reported index_size will normally correspond to one more + The reported index_size will normally correspond to one more page than is accounted for by internal_pages + leaf_pages + empty_pages + deleted_pages, because it also includes the index's metapage. - As with pgstattuple, the results are accumulated + As with pgstattuple, the results are accumulated page-by-page, and should not be expected to represent an instantaneous snapshot of the whole index. @@ -280,7 +280,7 @@ leaf_fragmentation | 0 - pgstatindex(text) returns record + pgstatindex(text) returns record @@ -298,7 +298,7 @@ leaf_fragmentation | 0 pgstatginindex - pgstatginindex(regclass) returns record + pgstatginindex(regclass) returns record @@ -358,7 +358,7 @@ pending_tuples | 0 pgstathashindex - pgstathashindex(regclass) returns record + pgstathashindex(regclass) returns record @@ -453,7 +453,7 @@ free_percent | 61.8005949100872 pg_relpages - pg_relpages(regclass) returns bigint + pg_relpages(regclass) returns bigint @@ -466,7 +466,7 @@ free_percent | 61.8005949100872 - pg_relpages(text) returns bigint + pg_relpages(text) returns bigint @@ -484,7 +484,7 @@ free_percent | 61.8005949100872 pgstattuple_approx - pgstattuple_approx(regclass) returns record + pgstattuple_approx(regclass) returns record diff --git a/doc/src/sgml/pgtrgm.sgml b/doc/src/sgml/pgtrgm.sgml index 775a7b8be7..7903dc3d82 100644 --- a/doc/src/sgml/pgtrgm.sgml +++ b/doc/src/sgml/pgtrgm.sgml @@ -111,7 +111,7 @@ show_limit()show_limit real - Returns the current similarity threshold used by the % + Returns the current similarity threshold used by the % operator. This sets the minimum similarity between two words for them to be considered similar enough to be misspellings of each other, for example @@ -122,7 +122,7 @@ set_limit(real)set_limit real - Sets the current similarity threshold that is used by the % + Sets the current similarity threshold that is used by the % operator. The threshold must be between 0 and 1 (default is 0.3). Returns the same value passed in (deprecated). @@ -144,56 +144,56 @@ - text % text + text % text boolean - Returns true if its arguments have a similarity that is + Returns true if its arguments have a similarity that is greater than the current similarity threshold set by - pg_trgm.similarity_threshold. + pg_trgm.similarity_threshold. - text <% text + text <% text boolean - Returns true if its first argument has the similar word in + Returns true if its first argument has the similar word in the second argument and they have a similarity that is greater than the current word similarity threshold set by - pg_trgm.word_similarity_threshold parameter. + pg_trgm.word_similarity_threshold parameter. - text %> text + text %> text boolean - Commutator of the <% operator. + Commutator of the <% operator. - text <-> text + text <-> text real - Returns the distance between the arguments, that is - one minus the similarity() value. + Returns the distance between the arguments, that is + one minus the similarity() value. - text <<-> text + text <<-> text real - Returns the distance between the arguments, that is - one minus the word_similarity() value. 
+ Returns the distance between the arguments, that is
+ one minus the word_similarity() value.
- text <->> text
+ text <->> text
real
- Commutator of the <<-> operator.
+ Commutator of the <<-> operator.
@@ -207,31 +207,31 @@
- pg_trgm.similarity_threshold (real)
+ pg_trgm.similarity_threshold (real)
- pg_trgm.similarity_threshold configuration parameter
+ pg_trgm.similarity_threshold configuration parameter
- Sets the current similarity threshold that is used by the %
+ Sets the current similarity threshold that is used by the %
operator. The threshold must be between 0 and 1 (default is 0.3).
- pg_trgm.word_similarity_threshold (real)
+ pg_trgm.word_similarity_threshold (real)
- pg_trgm.word_similarity_threshold configuration parameter
+ pg_trgm.word_similarity_threshold configuration parameter
Sets the current word similarity threshold that is used by
- <% and %> operators. The threshold
+ <% and %> operators. The threshold
must be between 0 and 1 (default is 0.6).
@@ -247,8 +247,8 @@
operator classes that allow you to create an index over a text column for
the purpose of very fast similarity searches. These index types support
the above-described similarity operators, and additionally support
- trigram-based index searches for LIKE, ILIKE,
- ~ and ~* queries. (These indexes do not
+ trigram-based index searches for LIKE, ILIKE,
+ ~ and ~* queries. (These indexes do not
support equality nor simple comparison operators, so you may need a
regular B-tree index too.)
@@ -267,16 +267,16 @@ CREATE INDEX trgm_idx ON test_trgm USING GIN (t gin_trgm_ops);
- At this point, you will have an index on the t column that
+ At this point, you will have an index on the t column that
you can use for similarity searching. A typical query is
-SELECT t, similarity(t, 'word') AS sml
+SELECT t, similarity(t, 'word') AS sml
FROM test_trgm
- WHERE t % 'word'
+ WHERE t % 'word'
ORDER BY sml DESC, t;
This will return all values in the text column that are sufficiently
- similar to word, sorted from best match to worst. The
+ similar to word, sorted from best match to worst. The
index will be used to make this a fast operation even over very large data
sets.
@@ -284,7 +284,7 @@
A variant of the above query is
-SELECT t, t <-> 'word' AS dist
+SELECT t, t <-> 'word' AS dist
FROM test_trgm
ORDER BY dist LIMIT 10;
@@ -294,16 +294,16 @@ SELECT t, word_similarity('word', t) AS sml
- You can also use an index on the t column for word
+ You can also use an index on the t column for word
similarity. For example:
-SELECT t, word_similarity('word', t) AS sml
+SELECT t, word_similarity('word', t) AS sml
FROM test_trgm
- WHERE 'word' <% t
+ WHERE 'word' <% t
ORDER BY sml DESC, t;
This will return all values in the text column that have a word
- which is sufficiently similar to word, sorted from best
+ which is sufficiently similar to word, sorted from best
match to worst. The index will be used to make this a fast operation
even over very large data sets.
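If the defaults let through too many weak matches, both thresholds can be raised for the current session before running the queries above; 0.5 and 0.8 below are arbitrary illustrative values:

-- demand closer matches for % and <% (defaults are 0.3 and 0.6)
SET pg_trgm.similarity_threshold = 0.5;
SET pg_trgm.word_similarity_threshold = 0.8;

SELECT t, word_similarity('word', t) AS sml
  FROM test_trgm
 WHERE 'word' <% t
 ORDER BY sml DESC, t;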
@@ -311,7 +311,7 @@ SELECT t, word_similarity('word', t) AS sml A variant of the above query is -SELECT t, 'word' <<-> t AS dist +SELECT t, 'word' <<-> t AS dist FROM test_trgm ORDER BY dist LIMIT 10; @@ -321,8 +321,8 @@ SELECT t, 'word' <<-> t AS dist - Beginning in PostgreSQL 9.1, these index types also support - index searches for LIKE and ILIKE, for example + Beginning in PostgreSQL 9.1, these index types also support + index searches for LIKE and ILIKE, for example SELECT * FROM test_trgm WHERE t LIKE '%foo%bar'; @@ -333,9 +333,9 @@ SELECT * FROM test_trgm WHERE t LIKE '%foo%bar'; - Beginning in PostgreSQL 9.3, these index types also support + Beginning in PostgreSQL 9.3, these index types also support index searches for regular-expression matches - (~ and ~* operators), for example + (~ and ~* operators), for example SELECT * FROM test_trgm WHERE t ~ '(foo|bar)'; @@ -347,7 +347,7 @@ SELECT * FROM test_trgm WHERE t ~ '(foo|bar)'; - For both LIKE and regular-expression searches, keep in mind + For both LIKE and regular-expression searches, keep in mind that a pattern with no extractable trigrams will degenerate to a full-index scan. @@ -377,9 +377,9 @@ CREATE TABLE words AS SELECT word FROM ts_stat('SELECT to_tsvector(''simple'', bodytext) FROM documents'); - where documents is a table that has a text field - bodytext that we wish to search. The reason for using - the simple configuration with the to_tsvector + where documents is a table that has a text field + bodytext that we wish to search. The reason for using + the simple configuration with the to_tsvector function, instead of using a language-specific configuration, is that we want a list of the original (unstemmed) words. @@ -399,7 +399,7 @@ CREATE INDEX words_idx ON words USING GIN (word gin_trgm_ops); - Since the words table has been generated as a separate, + Since the words table has been generated as a separate, static table, it will need to be periodically regenerated so that it remains reasonably up-to-date with the document collection. Keeping it exactly current is usually unnecessary. diff --git a/doc/src/sgml/pgvisibility.sgml b/doc/src/sgml/pgvisibility.sgml index d466a3bce8..75336946a6 100644 --- a/doc/src/sgml/pgvisibility.sgml +++ b/doc/src/sgml/pgvisibility.sgml @@ -8,7 +8,7 @@ - The pg_visibility module provides a means for examining the + The pg_visibility module provides a means for examining the visibility map (VM) and page-level visibility information of a table. It also provides functions to check the integrity of a visibility map and to force it to be rebuilt. @@ -28,13 +28,13 @@ These two bits will normally agree, but the page's all-visible bit can sometimes be set while the visibility map bit is clear after a crash recovery. The reported values can also disagree because of a change that - occurs after pg_visibility examines the visibility map and + occurs after pg_visibility examines the visibility map and before it examines the data page. Any event that causes data corruption can also cause these bits to disagree. - Functions that display information about PD_ALL_VISIBLE bits + Functions that display information about PD_ALL_VISIBLE bits are much more costly than those that only consult the visibility map, because they must read the relation's data blocks rather than only the (much smaller) visibility map. 
Functions that check the relation's @@ -61,7 +61,7 @@ Returns the all-visible and all-frozen bits in the visibility map for the given block of the given relation, plus the - PD_ALL_VISIBLE bit of that block. + PD_ALL_VISIBLE bit of that block. @@ -82,7 +82,7 @@ Returns the all-visible and all-frozen bits in the visibility map for - each block of the given relation, plus the PD_ALL_VISIBLE + each block of the given relation, plus the PD_ALL_VISIBLE bit of each block. @@ -130,7 +130,7 @@ Truncates the visibility map for the given relation. This function is useful if you believe that the visibility map for the relation is - corrupt and wish to force rebuilding it. The first VACUUM + corrupt and wish to force rebuilding it. The first VACUUM executed on the given relation after this function is executed will scan every page in the relation and rebuild the visibility map. (Until that is done, queries will treat the visibility map as containing all zeroes.) diff --git a/doc/src/sgml/planstats.sgml b/doc/src/sgml/planstats.sgml index 838fcda6d2..ee081308a9 100644 --- a/doc/src/sgml/planstats.sgml +++ b/doc/src/sgml/planstats.sgml @@ -28,13 +28,13 @@ - The examples shown below use tables in the PostgreSQL + The examples shown below use tables in the PostgreSQL regression test database. The outputs shown are taken from version 8.3. The behavior of earlier (or later) versions might vary. - Note also that since ANALYZE uses random sampling + Note also that since ANALYZE uses random sampling while producing statistics, the results will change slightly after - any new ANALYZE. + any new ANALYZE. @@ -61,8 +61,8 @@ SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1'; 358 | 10000 - These numbers are current as of the last VACUUM or - ANALYZE on the table. The planner then fetches the + These numbers are current as of the last VACUUM or + ANALYZE on the table. The planner then fetches the actual current number of pages in the table (this is a cheap operation, not requiring a table scan). If that is different from relpages then @@ -150,7 +150,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE stringu1 = 'CRAAAA'; and looks up the selectivity function for =, which is eqsel. For equality estimation the histogram is not useful; instead the list of most - common values (MCVs) is used to determine the + common values (MCVs) is used to determine the selectivity. Let's have a look at the MCVs, with some additional columns that will be useful later: @@ -165,7 +165,7 @@ most_common_freqs | {0.00333333,0.003,0.003,0.003,0.003,0.003,0.003,0.003,0.003, - Since CRAAAA appears in the list of MCVs, the selectivity is + Since CRAAAA appears in the list of MCVs, the selectivity is merely the corresponding entry in the list of most common frequencies (MCFs): @@ -225,18 +225,18 @@ rows = 10000 * 0.0014559 - The previous example with unique1 < 1000 was an + The previous example with unique1 < 1000 was an oversimplification of what scalarltsel really does; now that we have seen an example of the use of MCVs, we can fill in some more detail. The example was correct as far as it went, because since - unique1 is a unique column it has no MCVs (obviously, no + unique1 is a unique column it has no MCVs (obviously, no value is any more common than any other value). For a non-unique column, there will normally be both a histogram and an MCV list, and the histogram does not include the portion of the column - population represented by the MCVs. We do things this way because + population represented by the MCVs. 
We do things this way because it allows more precise estimation. In this situation scalarltsel directly applies the condition (e.g., - < 1000) to each value of the MCV list, and adds up the + < 1000) to each value of the MCV list, and adds up the frequencies of the MCVs for which the condition is true. This gives an exact estimate of the selectivity within the portion of the table that is MCVs. The histogram is then used in the same way as above @@ -253,7 +253,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE stringu1 < 'IAAAAA'; Filter: (stringu1 < 'IAAAAA'::name) - We already saw the MCV information for stringu1, + We already saw the MCV information for stringu1, and here is its histogram: @@ -266,7 +266,7 @@ WHERE tablename='tenk1' AND attname='stringu1'; Checking the MCV list, we find that the condition stringu1 < - 'IAAAAA' is satisfied by the first six entries and not the last four, + 'IAAAAA' is satisfied by the first six entries and not the last four, so the selectivity within the MCV part of the population is @@ -279,11 +279,11 @@ selectivity = sum(relevant mvfs) population represented by MCVs is 0.03033333, and therefore the fraction represented by the histogram is 0.96966667 (again, there are no nulls, else we'd have to exclude them here). We can see - that the value IAAAAA falls nearly at the end of the + that the value IAAAAA falls nearly at the end of the third histogram bucket. Using some rather cheesy assumptions about the frequency of different characters, the planner arrives at the estimate 0.298387 for the portion of the histogram population - that is less than IAAAAA. We then combine the estimates + that is less than IAAAAA. We then combine the estimates for the MCV and non-MCV populations: @@ -372,7 +372,7 @@ rows = 10000 * 0.005035 = 50 (rounding off) - The restriction for the join is t2.unique2 = t1.unique2. + The restriction for the join is t2.unique2 = t1.unique2. The operator is just our familiar =, however the selectivity function is obtained from the oprjoin column of @@ -424,12 +424,12 @@ rows = (outer_cardinality * inner_cardinality) * selectivity - Notice that we showed inner_cardinality as 10000, that is, - the unmodified size of tenk2. It might appear from - inspection of the EXPLAIN output that the estimate of + Notice that we showed inner_cardinality as 10000, that is, + the unmodified size of tenk2. It might appear from + inspection of the EXPLAIN output that the estimate of join rows comes from 50 * 1, that is, the number of outer rows times the estimated number of rows obtained by each inner index scan on - tenk2. But this is not the case: the join relation size + tenk2. But this is not the case: the join relation size is estimated before any particular join plan has been considered. If everything is working well then the two ways of estimating the join size will produce about the same answer, but due to round-off error and @@ -438,7 +438,7 @@ rows = (outer_cardinality * inner_cardinality) * selectivity For those interested in further details, estimation of the size of - a table (before any WHERE clauses) is done in + a table (before any WHERE clauses) is done in src/backend/optimizer/util/plancat.c. The generic logic for clause selectivities is in src/backend/optimizer/path/clausesel.c. 
The @@ -485,8 +485,8 @@ SELECT relpages, reltuples FROM pg_class WHERE relname = 't'; - The following example shows the result of estimating a WHERE - condition on the a column: + The following example shows the result of estimating a WHERE + condition on the a column: EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t WHERE a = 1; @@ -501,9 +501,9 @@ EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t WHERE a = 1; of this clause to be 1%. By comparing this estimate and the actual number of rows, we see that the estimate is very accurate (in fact exact, as the table is very small). Changing the - WHERE condition to use the b column, an + WHERE condition to use the b column, an identical plan is generated. But observe what happens if we apply the same - condition on both columns, combining them with AND: + condition on both columns, combining them with AND: EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t WHERE a = 1 AND b = 1; @@ -524,7 +524,7 @@ EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t WHERE a = 1 AND b = 1; This problem can be fixed by creating a statistics object that - directs ANALYZE to calculate functional-dependency + directs ANALYZE to calculate functional-dependency multivariate statistics on the two columns: diff --git a/doc/src/sgml/plhandler.sgml b/doc/src/sgml/plhandler.sgml index 2573e67743..95e7dc9fc0 100644 --- a/doc/src/sgml/plhandler.sgml +++ b/doc/src/sgml/plhandler.sgml @@ -35,7 +35,7 @@ The call handler is called in the same way as any other function: It receives a pointer to a - FunctionCallInfoData struct containing + FunctionCallInfoData struct containing argument values and information about the called function, and it is expected to return a Datum result (and possibly set the isnull field of the @@ -54,7 +54,7 @@ It's up to the call handler to fetch the entry of the function from the pg_proc system catalog and to analyze the argument - and return types of the called function. The AS clause from the + and return types of the called function. The AS clause from the CREATE FUNCTION command for the function will be found in the prosrc column of the pg_proc row. This is commonly source @@ -68,9 +68,9 @@ A call handler can avoid repeated lookups of information about the called function by using the flinfo->fn_extra field. This will - initially be NULL, but can be set by the call handler to point at + initially be NULL, but can be set by the call handler to point at information about the called function. On subsequent calls, if - flinfo->fn_extra is already non-NULL + flinfo->fn_extra is already non-NULL then it can be used and the information lookup step skipped. The call handler must make sure that flinfo->fn_extra is made to point at @@ -90,7 +90,7 @@ are passed in the usual way, but the FunctionCallInfoData's context field points at a - TriggerData structure, rather than being NULL + TriggerData structure, rather than being NULL as it is in a plain function call. A language handler should provide mechanisms for procedural-language functions to get at the trigger information. @@ -170,21 +170,21 @@ CREATE LANGUAGE plsample If a validator is provided by a procedural language, it must be declared as a function taking a single parameter of type - oid. The validator's result is ignored, so it is customarily - declared to return void. The validator will be called at - the end of a CREATE FUNCTION command that has created + oid. The validator's result is ignored, so it is customarily + declared to return void. 
The validator will be called at + the end of a CREATE FUNCTION command that has created or updated a function written in the procedural language. - The passed-in OID is the OID of the function's pg_proc + The passed-in OID is the OID of the function's pg_proc row. The validator must fetch this row in the usual way, and do whatever checking is appropriate. - First, call CheckFunctionValidatorAccess() to diagnose + First, call CheckFunctionValidatorAccess() to diagnose explicit calls to the validator that the user could not achieve through - CREATE FUNCTION. Typical checks then include verifying + CREATE FUNCTION. Typical checks then include verifying that the function's argument and result types are supported by the language, and that the function's body is syntactically correct in the language. If the validator finds the function to be okay, it should just return. If it finds an error, it should report that - via the normal ereport() error reporting mechanism. + via the normal ereport() error reporting mechanism. Throwing an error will force a transaction rollback and thus prevent the incorrect function definition from being committed. @@ -195,40 +195,40 @@ CREATE LANGUAGE plsample any expensive or context-sensitive checking should be skipped. If the language provides for code execution at compilation time, the validator must suppress checks that would induce such execution. In particular, - this parameter is turned off by pg_dump so that it can + this parameter is turned off by pg_dump so that it can load procedural language functions without worrying about side effects or dependencies of the function bodies on other database objects. (Because of this requirement, the call handler should avoid assuming that the validator has fully checked the function. The point of having a validator is not to let the call handler omit checks, but to notify the user immediately if there are obvious errors in a - CREATE FUNCTION command.) + CREATE FUNCTION command.) While the choice of exactly what to check is mostly left to the discretion of the validator function, note that the core - CREATE FUNCTION code only executes SET clauses - attached to a function when check_function_bodies is on. + CREATE FUNCTION code only executes SET clauses + attached to a function when check_function_bodies is on. Therefore, checks whose results might be affected by GUC parameters - definitely should be skipped when check_function_bodies is + definitely should be skipped when check_function_bodies is off, to avoid false failures when reloading a dump. If an inline handler is provided by a procedural language, it must be declared as a function taking a single parameter of type - internal. The inline handler's result is ignored, so it is - customarily declared to return void. The inline handler - will be called when a DO statement is executed specifying + internal. The inline handler's result is ignored, so it is + customarily declared to return void. The inline handler + will be called when a DO statement is executed specifying the procedural language. The parameter actually passed is a pointer - to an InlineCodeBlock struct, which contains information - about the DO statement's parameters, in particular the + to an InlineCodeBlock struct, which contains information + about the DO statement's parameters, in particular the text of the anonymous code block to be executed. The inline handler should execute this code and return. 
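Putting the pieces together, a language that provides all three support functions would be registered along these lines; this is a sketch reusing the hypothetical plsample shared library from the example above:

CREATE FUNCTION plsample_call_handler() RETURNS language_handler
    AS 'plsample' LANGUAGE C;
CREATE FUNCTION plsample_inline_handler(internal) RETURNS void
    AS 'plsample' LANGUAGE C;
CREATE FUNCTION plsample_validator(oid) RETURNS void
    AS 'plsample' LANGUAGE C;

CREATE LANGUAGE plsample
    HANDLER plsample_call_handler
    INLINE plsample_inline_handler
    VALIDATOR plsample_validator;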
It's recommended that you wrap all these function declarations, - as well as the CREATE LANGUAGE command itself, into - an extension so that a simple CREATE EXTENSION + as well as the CREATE LANGUAGE command itself, into + an extension so that a simple CREATE EXTENSION command is sufficient to install the language. See for information about writing extensions. @@ -237,7 +237,7 @@ CREATE LANGUAGE plsample The procedural languages included in the standard distribution are good references when trying to write your own language handler. - Look into the src/pl subdirectory of the source tree. + Look into the src/pl subdirectory of the source tree. The reference page also has some useful details. diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml index 37a3557d61..dfffa4077f 100644 --- a/doc/src/sgml/plperl.sgml +++ b/doc/src/sgml/plperl.sgml @@ -27,12 +27,12 @@ To install PL/Perl in a particular database, use - CREATE EXTENSION plperl. + CREATE EXTENSION plperl. - If a language is installed into template1, all subsequently + If a language is installed into template1, all subsequently created databases will have the language installed automatically. @@ -90,8 +90,8 @@ $$ LANGUAGE plperl; subroutines which you call via a coderef. For more information, see the entries for Variable "%s" will not stay shared and Variable "%s" is not available in the - perldiag man page, or - search the Internet for perl nested named subroutine. + perldiag man page, or + search the Internet for perl nested named subroutine. @@ -100,16 +100,16 @@ $$ LANGUAGE plperl; the function body to be written as a string constant. It is usually most convenient to use dollar quoting (see ) for the string constant. - If you choose to use escape string syntax E'', - you must double any single quote marks (') and backslashes - (\) used in the body of the function + If you choose to use escape string syntax E'', + you must double any single quote marks (') and backslashes + (\) used in the body of the function (see ). Arguments and results are handled as in any other Perl subroutine: arguments are passed in @_, and a result value - is returned with return or as the last expression + is returned with return or as the last expression evaluated in the function. @@ -134,12 +134,12 @@ $$ LANGUAGE plperl; - If an SQL null valuenull valuein PL/Perl is passed to a function, - the argument value will appear as undefined in Perl. The + If an SQL null valuenull valuein PL/Perl is passed to a function, + the argument value will appear as undefined in Perl. The above function definition will not behave very nicely with null inputs (in fact, it will act as though they are zeroes). We could - add STRICT to the function definition to make + add STRICT to the function definition to make PostgreSQL do something more reasonable: if a null value is passed, the function will not be called at all, but will just return a null result automatically. Alternatively, @@ -174,14 +174,14 @@ $$ LANGUAGE plperl; other cases the argument will need to be converted into a form that is more usable in Perl. For example, the decode_bytea function can be used to convert an argument of - type bytea into unescaped binary. + type bytea into unescaped binary. Similarly, values passed back to PostgreSQL must be in the external text representation format. For example, the encode_bytea function can be used to - escape binary data for a return value of type bytea. + escape binary data for a return value of type bytea. 
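As an illustration of these helpers, a trusted PL/Perl function can unescape its bytea argument, operate on the raw bytes, and escape the result again; reverse_bytes is a hypothetical example:

CREATE FUNCTION reverse_bytes(bytea) RETURNS bytea AS $$
    my $raw = decode_bytea($_[0]);             # unescape to raw binary
    return encode_bytea(scalar reverse $raw);  # escape for the bytea result
$$ LANGUAGE plperl STRICT;

SELECT reverse_bytes('\x0102ff'::bytea);  -- yields \xff0201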
@@ -330,10 +330,10 @@ SELECT * FROM perl_set(); - If you wish to use the strict pragma with your code you - have a few options. For temporary global use you can SET + If you wish to use the strict pragma with your code you + have a few options. For temporary global use you can SET plperl.use_strict to true. - This will affect subsequent compilations of PL/Perl + This will affect subsequent compilations of PL/Perl functions, but not functions already compiled in the current session. For permanent global use you can set plperl.use_strict to true in the postgresql.conf file. @@ -348,7 +348,7 @@ use strict; - The feature pragma is also available to use if your Perl is version 5.10.0 or higher. + The feature pragma is also available to use if your Perl is version 5.10.0 or higher. @@ -380,7 +380,7 @@ use strict; - spi_exec_query(query [, max-rows]) + spi_exec_query(query [, max-rows]) spi_exec_query in PL/Perl @@ -524,13 +524,13 @@ SELECT * from lotsa_md5(500); - Normally, spi_fetchrow should be repeated until it + Normally, spi_fetchrow should be repeated until it returns undef, indicating that there are no more rows to read. The cursor returned by spi_query is automatically freed when - spi_fetchrow returns undef. + spi_fetchrow returns undef. If you do not wish to read all the rows, instead call - spi_cursor_close to free the cursor. + spi_cursor_close to free the cursor. Failure to do so will result in memory leaks. @@ -675,13 +675,13 @@ SELECT release_hosts_query(); Emit a log or error message. Possible levels are - DEBUG, LOG, INFO, - NOTICE, WARNING, and ERROR. - ERROR + DEBUG, LOG, INFO, + NOTICE, WARNING, and ERROR. + ERROR raises an error condition; if this is not trapped by the surrounding Perl code, the error propagates out to the calling query, causing the current transaction or subtransaction to be aborted. This - is effectively the same as the Perl die command. + is effectively the same as the Perl die command. The other levels only generate messages of different priority levels. Whether messages of a particular priority are reported to the client, @@ -706,8 +706,8 @@ SELECT release_hosts_query(); Return the given string suitably quoted to be used as a string literal in an SQL statement string. Embedded single-quotes and backslashes are properly doubled. - Note that quote_literal returns undef on undef input; if the argument - might be undef, quote_nullable is often more suitable. + Note that quote_literal returns undef on undef input; if the argument + might be undef, quote_nullable is often more suitable. @@ -849,7 +849,7 @@ SELECT release_hosts_query(); Returns a true value if the content of the given string looks like a number, according to Perl, returns false otherwise. Returns undef if the argument is undef. Leading and trailing space is - ignored. Inf and Infinity are regarded as numbers. + ignored. Inf and Infinity are regarded as numbers. @@ -865,8 +865,8 @@ SELECT release_hosts_query(); Returns a true value if the given argument may be treated as an - array reference, that is, if ref of the argument is ARRAY or - PostgreSQL::InServer::ARRAY. Returns false otherwise. + array reference, that is, if ref of the argument is ARRAY or + PostgreSQL::InServer::ARRAY. Returns false otherwise. @@ -941,11 +941,11 @@ $$ LANGUAGE plperl; PL/Perl functions will share the same value of %_SHARED if and only if they are executed by the same SQL role. 
In an application wherein a single session executes code under multiple SQL roles (via - SECURITY DEFINER functions, use of SET ROLE, etc) + SECURITY DEFINER functions, use of SET ROLE, etc) you may need to take explicit steps to ensure that PL/Perl functions can share data via %_SHARED. To do that, make sure that functions that should communicate are owned by the same user, and mark - them SECURITY DEFINER. You must of course take care that + them SECURITY DEFINER. You must of course take care that such functions can't be used to do anything unintended. @@ -959,8 +959,8 @@ $$ LANGUAGE plperl; - Normally, PL/Perl is installed as a trusted programming - language named plperl. In this setup, certain Perl + Normally, PL/Perl is installed as a trusted programming + language named plperl. In this setup, certain Perl operations are disabled to preserve security. In general, the operations that are restricted are those that interact with the environment. This includes file handle operations, @@ -993,15 +993,15 @@ $$ LANGUAGE plperl; Sometimes it is desirable to write Perl functions that are not restricted. For example, one might want a Perl function that sends mail. To handle these cases, PL/Perl can also be installed as an - untrusted language (usually called - PL/PerlUPL/PerlU). + untrusted language (usually called + PL/PerlUPL/PerlU). In this case the full Perl language is available. When installing the language, the language name plperlu will select the untrusted PL/Perl variant. - The writer of a PL/PerlU function must take care that the function + The writer of a PL/PerlU function must take care that the function cannot be used to do anything unwanted, since it will be able to do anything that could be done by a user logged in as the database administrator. Note that the database system allows only database @@ -1010,25 +1010,25 @@ $$ LANGUAGE plperl; If the above function was created by a superuser using the language - plperlu, execution would succeed. + plperlu, execution would succeed. In the same way, anonymous code blocks written in Perl can use restricted operations if the language is specified as - plperlu rather than plperl, but the caller + plperlu rather than plperl, but the caller must be a superuser. - While PL/Perl functions run in a separate Perl - interpreter for each SQL role, all PL/PerlU functions + While PL/Perl functions run in a separate Perl + interpreter for each SQL role, all PL/PerlU functions executed in a given session run in a single Perl interpreter (which is - not any of the ones used for PL/Perl functions). - This allows PL/PerlU functions to share data freely, - but no communication can occur between PL/Perl and - PL/PerlU functions. + not any of the ones used for PL/Perl functions). + This allows PL/PerlU functions to share data freely, + but no communication can occur between PL/Perl and + PL/PerlU functions. @@ -1036,14 +1036,14 @@ $$ LANGUAGE plperl; Perl cannot support multiple interpreters within one process unless it was built with the appropriate flags, namely either - usemultiplicity or useithreads. - (usemultiplicity is preferred unless you actually need + usemultiplicity or useithreads. + (usemultiplicity is preferred unless you actually need to use threads. For more details, see the - perlembed man page.) - If PL/Perl is used with a copy of Perl that was not built + perlembed man page.) 
+ If PL/Perl is used with a copy of Perl that was not built this way, then it is only possible to have one Perl interpreter per session, and so any one session can only execute either - PL/PerlU functions, or PL/Perl functions + PL/PerlU functions, or PL/Perl functions that are all called by the same SQL role. @@ -1056,7 +1056,7 @@ $$ LANGUAGE plperl; PL/Perl can be used to write trigger functions. In a trigger function, the hash reference $_TD contains information about the - current trigger event. $_TD is a global variable, + current trigger event. $_TD is a global variable, which gets a separate local value for each invocation of the trigger. The fields of the $_TD hash reference are: @@ -1092,8 +1092,8 @@ $$ LANGUAGE plperl; $_TD->{event} - Trigger event: INSERT, UPDATE, - DELETE, TRUNCATE, or UNKNOWN + Trigger event: INSERT, UPDATE, + DELETE, TRUNCATE, or UNKNOWN @@ -1244,7 +1244,7 @@ CREATE TRIGGER test_valid_id_trig PL/Perl can be used to write event trigger functions. In an event trigger function, the hash reference $_TD contains information - about the current trigger event. $_TD is a global variable, + about the current trigger event. $_TD is a global variable, which gets a separate local value for each invocation of the trigger. The fields of the $_TD hash reference are: @@ -1295,7 +1295,7 @@ CREATE EVENT TRIGGER perl_a_snitch Configuration - This section lists configuration parameters that affect PL/Perl. + This section lists configuration parameters that affect PL/Perl. @@ -1304,14 +1304,14 @@ CREATE EVENT TRIGGER perl_a_snitch plperl.on_init (string) - plperl.on_init configuration parameter + plperl.on_init configuration parameter Specifies Perl code to be executed when a Perl interpreter is first - initialized, before it is specialized for use by plperl or - plperlu. + initialized, before it is specialized for use by plperl or + plperlu. The SPI functions are not available when this code is executed. If the code fails with an error it will abort the initialization of the interpreter and propagate out to the calling query, causing the @@ -1319,7 +1319,7 @@ CREATE EVENT TRIGGER perl_a_snitch The Perl code is limited to a single string. Longer code can be placed - into a module and loaded by the on_init string. + into a module and loaded by the on_init string. Examples: plperl.on_init = 'require "plperlinit.pl"' @@ -1327,8 +1327,8 @@ plperl.on_init = 'use lib "/my/app"; use MyApp::PgInit;' - Any modules loaded by plperl.on_init, either directly or - indirectly, will be available for use by plperl. This may + Any modules loaded by plperl.on_init, either directly or + indirectly, will be available for use by plperl. This may create a security risk. To see what modules have been loaded you can use: DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; @@ -1339,14 +1339,14 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; included in , in which case extra consideration should be given to the risk of destabilizing the postmaster. The principal reason for making use of this feature - is that Perl modules loaded by plperl.on_init need be + is that Perl modules loaded by plperl.on_init need be loaded only at postmaster start, and will be instantly available without loading overhead in individual database sessions. However, keep in mind that the overhead is avoided only for the first Perl interpreter used by a database session — either PL/PerlU, or PL/Perl for the first SQL role that calls a PL/Perl function. 
Any additional Perl interpreters created in a database session will have - to execute plperl.on_init afresh. Also, on Windows there + to execute plperl.on_init afresh. Also, on Windows there will be no savings whatsoever from preloading, since the Perl interpreter created in the postmaster process does not propagate to child processes. @@ -1361,27 +1361,27 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; plperl.on_plperl_init (string) - plperl.on_plperl_init configuration parameter + plperl.on_plperl_init configuration parameter plperl.on_plperlu_init (string) - plperl.on_plperlu_init configuration parameter + plperl.on_plperlu_init configuration parameter These parameters specify Perl code to be executed when a Perl - interpreter is specialized for plperl or - plperlu respectively. This will happen when a PL/Perl or + interpreter is specialized for plperl or + plperlu respectively. This will happen when a PL/Perl or PL/PerlU function is first executed in a database session, or when an additional interpreter has to be created because the other language is called or a PL/Perl function is called by a new SQL role. This - follows any initialization done by plperl.on_init. + follows any initialization done by plperl.on_init. The SPI functions are not available when this code is executed. - The Perl code in plperl.on_plperl_init is executed after - locking down the interpreter, and thus it can only perform + The Perl code in plperl.on_plperl_init is executed after + locking down the interpreter, and thus it can only perform trusted operations. @@ -1404,13 +1404,13 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; plperl.use_strict (boolean) - plperl.use_strict configuration parameter + plperl.use_strict configuration parameter When set true subsequent compilations of PL/Perl functions will have - the strict pragma enabled. This parameter does not affect + the strict pragma enabled. This parameter does not affect functions already compiled in the current session. @@ -1459,7 +1459,7 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; When a session ends normally, not due to a fatal error, any - END blocks that have been defined are executed. + END blocks that have been defined are executed. Currently no other actions are performed. Specifically, file handles are not automatically flushed and objects are not automatically destroyed. diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index d18b48c40c..7323c2f67d 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -13,7 +13,7 @@ PL/pgSQL is a loadable procedural language for the PostgreSQL database - system. The design goals of PL/pgSQL were to create + system. The design goals of PL/pgSQL were to create a loadable procedural language that @@ -59,7 +59,7 @@ - In PostgreSQL 9.0 and later, + In PostgreSQL 9.0 and later, PL/pgSQL is installed by default. However it is still a loadable module, so especially security-conscious administrators could choose to remove it. @@ -69,7 +69,7 @@ Advantages of Using <application>PL/pgSQL</application> - SQL is the language PostgreSQL + SQL is the language PostgreSQL and most other relational databases use as query language. It's portable and easy to learn. But every SQL statement must be executed individually by the database server. @@ -123,49 +123,49 @@ and they can return a result of any of these types. They can also accept or return any composite type (row type) specified by name. 
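For instance, a minimal sketch of a function taking a composite type by table name (the emp table and double_salary function are assumed for illustration):

    CREATE TABLE emp (name text, salary numeric);

    CREATE FUNCTION double_salary(e emp) RETURNS numeric AS $$
    BEGIN
        RETURN e.salary * 2;   -- fields of the row argument use dot notation
    END;
    $$ LANGUAGE plpgsql;

    SELECT double_salary(ROW('Bill', 4200)::emp);  -- 8400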
It is also possible to declare a PL/pgSQL - function as returning record, which means that the result + function as returning record, which means that the result is a row type whose columns are determined by specification in the calling query, as discussed in . - PL/pgSQL functions can be declared to accept a variable - number of arguments by using the VARIADIC marker. This + PL/pgSQL functions can be declared to accept a variable + number of arguments by using the VARIADIC marker. This works exactly the same way as for SQL functions, as discussed in . - PL/pgSQL functions can also be declared to accept + PL/pgSQL functions can also be declared to accept and return the polymorphic types anyelement, anyarray, anynonarray, - anyenum, and anyrange. The actual + anyenum, and anyrange. The actual data types handled by a polymorphic function can vary from call to call, as discussed in . An example is shown in . - PL/pgSQL functions can also be declared to return - a set (or table) of any data type that can be returned as + PL/pgSQL functions can also be declared to return + a set (or table) of any data type that can be returned as a single instance. Such a function generates its output by executing - RETURN NEXT for each desired element of the result - set, or by using RETURN QUERY to output the result of + RETURN NEXT for each desired element of the result + set, or by using RETURN QUERY to output the result of evaluating a query. - Finally, a PL/pgSQL function can be declared to return - void if it has no useful return value. + Finally, a PL/pgSQL function can be declared to return + void if it has no useful return value. - PL/pgSQL functions can also be declared with output + PL/pgSQL functions can also be declared with output parameters in place of an explicit specification of the return type. This does not add any fundamental capability to the language, but it is often convenient, especially for returning multiple values. - The RETURNS TABLE notation can also be used in place - of RETURNS SETOF. + The RETURNS TABLE notation can also be used in place + of RETURNS SETOF. @@ -185,11 +185,11 @@ Such a command would normally look like, say, CREATE FUNCTION somefunc(integer, text) RETURNS integer -AS 'function body text' +AS 'function body text' LANGUAGE plpgsql; The function body is simply a string literal so far as CREATE - FUNCTION is concerned. It is often helpful to use dollar quoting + FUNCTION is concerned. It is often helpful to use dollar quoting (see ) to write the function body, rather than the normal single quote syntax. Without dollar quoting, any single quotes or backslashes in the function body must be escaped by @@ -200,7 +200,7 @@ LANGUAGE plpgsql; PL/pgSQL is a block-structured language. The complete text of a function body must be a - block. A block is defined as: + block. A block is defined as: <<label>> @@ -223,16 +223,16 @@ END label ; A common mistake is to write a semicolon immediately after - BEGIN. This is incorrect and will result in a syntax error. + BEGIN. This is incorrect and will result in a syntax error. A label is only needed if you want to identify the block for use - in an EXIT statement, or to qualify the names of the + in an EXIT statement, or to qualify the names of the variables declared in the block. If a label is given after - END, it must match the label at the block's beginning. + END, it must match the label at the block's beginning. 
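A small sketch of the labeling rules just described (names illustrative):

    CREATE FUNCTION label_demo() RETURNS integer AS $$
    <<outerblock>>
    DECLARE
        quantity integer := 30;
    BEGIN
        DECLARE
            quantity integer := 80;   -- masks the outer variable in this subblock
        BEGIN
            -- the block label lets us still reach the masked variable
            RETURN outerblock.quantity + quantity;   -- 30 + 80
        END;
    END outerblock;   -- must match the label at the block's beginning
    $$ LANGUAGE plpgsql;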
@@ -242,7 +242,7 @@ END label ; - Comments work the same way in PL/pgSQL code as in + Comments work the same way in PL/pgSQL code as in ordinary SQL. A double dash (--) starts a comment that extends to the end of the line. A /* starts a block comment that extends to the matching occurrence of @@ -251,7 +251,7 @@ END label ; Any statement in the statement section of a block - can be a subblock. Subblocks can be used for + can be a subblock. Subblocks can be used for logical grouping or to localize variables to a small group of statements. Variables declared in a subblock mask any similarly-named variables of outer blocks for the duration @@ -285,8 +285,8 @@ $$ LANGUAGE plpgsql; - There is actually a hidden outer block surrounding the body - of any PL/pgSQL function. This block provides the + There is actually a hidden outer block surrounding the body + of any PL/pgSQL function. This block provides the declarations of the function's parameters (if any), as well as some special variables such as FOUND (see ). The outer block is @@ -297,15 +297,15 @@ $$ LANGUAGE plpgsql; It is important not to confuse the use of - BEGIN/END for grouping statements in - PL/pgSQL with the similarly-named SQL commands + BEGIN/END for grouping statements in + PL/pgSQL with the similarly-named SQL commands for transaction - control. PL/pgSQL's BEGIN/END + control. PL/pgSQL's BEGIN/END are only for grouping; they do not start or end a transaction. Functions and trigger procedures are always executed within a transaction established by an outer query — they cannot start or commit that transaction, since there would be no context for them to execute in. - However, a block containing an EXCEPTION clause effectively + However, a block containing an EXCEPTION clause effectively forms a subtransaction that can be rolled back without affecting the outer transaction. For more about that see . @@ -318,15 +318,15 @@ $$ LANGUAGE plpgsql; All variables used in a block must be declared in the declarations section of the block. - (The only exceptions are that the loop variable of a FOR loop + (The only exceptions are that the loop variable of a FOR loop iterating over a range of integer values is automatically declared as an - integer variable, and likewise the loop variable of a FOR loop + integer variable, and likewise the loop variable of a FOR loop iterating over a cursor's result is automatically declared as a record variable.) - PL/pgSQL variables can have any SQL data type, such as + PL/pgSQL variables can have any SQL data type, such as integer, varchar, and char. @@ -348,21 +348,21 @@ arow RECORD; name CONSTANT type COLLATE collation_name NOT NULL { DEFAULT | := | = } expression ; - The DEFAULT clause, if given, specifies the initial value assigned - to the variable when the block is entered. If the DEFAULT clause + The DEFAULT clause, if given, specifies the initial value assigned + to the variable when the block is entered. If the DEFAULT clause is not given then the variable is initialized to the SQL null value. - The CONSTANT option prevents the variable from being + The CONSTANT option prevents the variable from being assigned to after initialization, so that its value will remain constant for the duration of the block. - The COLLATE option specifies a collation to use for the + The COLLATE option specifies a collation to use for the variable (see ). - If NOT NULL + If NOT NULL is specified, an assignment of a null value results in a run-time - error. All variables declared as NOT NULL + error. 
All variables declared as NOT NULL must have a nonnull default value specified. - Equal (=) can be used instead of PL/SQL-compliant - :=. + Equal (=) can be used instead of PL/SQL-compliant + :=. @@ -428,9 +428,9 @@ $$ LANGUAGE plpgsql; These two examples are not perfectly equivalent. In the first case, - subtotal could be referenced as - sales_tax.subtotal, but in the second case it could not. - (Had we attached a label to the inner block, subtotal could + subtotal could be referenced as + sales_tax.subtotal, but in the second case it could not. + (Had we attached a label to the inner block, subtotal could be qualified with that label, instead.) @@ -474,7 +474,7 @@ END; $$ LANGUAGE plpgsql; - Notice that we omitted RETURNS real — we could have + Notice that we omitted RETURNS real — we could have included it, but it would be redundant. @@ -493,13 +493,13 @@ $$ LANGUAGE plpgsql; As discussed in , this effectively creates an anonymous record type for the function's - results. If a RETURNS clause is given, it must say - RETURNS record. + results. If a RETURNS clause is given, it must say + RETURNS record. Another way to declare a PL/pgSQL function - is with RETURNS TABLE, for example: + is with RETURNS TABLE, for example: CREATE FUNCTION extended_sales(p_itemno int) @@ -511,9 +511,9 @@ END; $$ LANGUAGE plpgsql; - This is exactly equivalent to declaring one or more OUT + This is exactly equivalent to declaring one or more OUT parameters and specifying RETURNS SETOF - sometype. + sometype. @@ -530,7 +530,7 @@ $$ LANGUAGE plpgsql; the function, so it can be used to hold the return value if desired, though that is not required. $0 can also be given an alias. For example, this function works on any data type - that has a + operator: + that has a + operator: CREATE FUNCTION add_three_values(v1 anyelement, v2 anyelement, v3 anyelement) @@ -564,14 +564,14 @@ $$ LANGUAGE plpgsql; - <literal>ALIAS</> + <literal>ALIAS</literal> -newname ALIAS FOR oldname; +newname ALIAS FOR oldname; - The ALIAS syntax is more general than is suggested in the + The ALIAS syntax is more general than is suggested in the previous section: you can declare an alias for any variable, not just function parameters. The main practical use for this is to assign a different name for variables with predetermined names, such as @@ -589,7 +589,7 @@ DECLARE - Since ALIAS creates two different ways to name the same + Since ALIAS creates two different ways to name the same object, unrestricted use can be confusing. It's best to use it only for the purpose of overriding predetermined names. @@ -608,7 +608,7 @@ DECLARE database values. For example, let's say you have a column named user_id in your users table. To declare a variable with the same data type as - users.user_id you write: + users.user_id you write: user_id users.user_id%TYPE; @@ -618,7 +618,7 @@ user_id users.user_id%TYPE; By using %TYPE you don't need to know the data type of the structure you are referencing, and most importantly, if the data type of the referenced item changes in the future (for - instance: you change the type of user_id + instance: you change the type of user_id from integer to real), you might not need to change your function definition. @@ -642,9 +642,9 @@ user_id users.user_id%TYPE; - A variable of a composite type is called a row - variable (or row-type variable). Such a variable - can hold a whole row of a SELECT or FOR + A variable of a composite type is called a row + variable (or row-type variable). 
Such a variable + can hold a whole row of a SELECT or FOR query result, so long as that query's column set matches the declared type of the variable. The individual fields of the row value @@ -658,7 +658,7 @@ user_id users.user_id%TYPE; table_name%ROWTYPE notation; or it can be declared by giving a composite type's name. (Since every table has an associated composite type of the same name, - it actually does not matter in PostgreSQL whether you + it actually does not matter in PostgreSQL whether you write %ROWTYPE or not. But the form with %ROWTYPE is more portable.) @@ -666,7 +666,7 @@ user_id users.user_id%TYPE; Parameters to a function can be composite types (complete table rows). In that case, the - corresponding identifier $n will be a row variable, and fields can + corresponding identifier $n will be a row variable, and fields can be selected from it, for example $1.user_id. @@ -675,12 +675,12 @@ user_id users.user_id%TYPE; row-type variable, not the OID or other system columns (because the row could be from a view). The fields of the row type inherit the table's field size or precision for data types such as - char(n). + char(n). - Here is an example of using composite types. table1 - and table2 are existing tables having at least the + Here is an example of using composite types. table1 + and table2 are existing tables having at least the mentioned fields: @@ -708,7 +708,7 @@ SELECT merge_fields(t.*) FROM table1 t WHERE ... ; Record variables are similar to row-type variables, but they have no predefined structure. They take on the actual row structure of the - row they are assigned during a SELECT or FOR command. The substructure + row they are assigned during a SELECT or FOR command. The substructure of a record variable can change each time it is assigned to. A consequence of this is that until a record variable is first assigned to, it has no substructure, and any attempt to access a @@ -716,13 +716,13 @@ SELECT merge_fields(t.*) FROM table1 t WHERE ... ; - Note that RECORD is not a true data type, only a placeholder. + Note that RECORD is not a true data type, only a placeholder. One should also realize that when a PL/pgSQL - function is declared to return type record, this is not quite the + function is declared to return type record, this is not quite the same concept as a record variable, even though such a function might use a record variable to hold its result. In both cases the actual row structure is unknown when the function is written, but for a function - returning record the actual structure is determined when the + returning record the actual structure is determined when the calling query is parsed, whereas a record variable can change its row structure on-the-fly. @@ -732,8 +732,8 @@ SELECT merge_fields(t.*) FROM table1 t WHERE ... ; Collation of <application>PL/pgSQL</application> Variables - collation - in PL/pgSQL + collation + in PL/pgSQL @@ -758,9 +758,9 @@ SELECT less_than(text_field_1, text_field_2) FROM table1; SELECT less_than(text_field_1, text_field_2 COLLATE "C") FROM table1; - The first use of less_than will use the common collation - of text_field_1 and text_field_2 for - the comparison, while the second use will use C collation. + The first use of less_than will use the common collation + of text_field_1 and text_field_2 for + the comparison, while the second use will use C collation. 
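A minimal sketch of forcing a collation inside a function, per the rules above (the function name is illustrative; the "C" collation is always available):

    CREATE FUNCTION less_than_c(a text, b text) RETURNS boolean AS $$
    BEGIN
        -- the explicit COLLATE clause overrides whatever collation
        -- the arguments carried in from the calling query
        RETURN a < b COLLATE "C";
    END;
    $$ LANGUAGE plpgsql;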
@@ -790,7 +790,7 @@ $$ LANGUAGE plpgsql; A local variable of a collatable data type can have a different collation - associated with it by including the COLLATE option in its + associated with it by including the COLLATE option in its declaration, for example @@ -803,7 +803,7 @@ DECLARE - Also, of course explicit COLLATE clauses can be written inside + Also, of course explicit COLLATE clauses can be written inside a function if it is desired to force a particular collation to be used in a particular operation. For example, @@ -838,7 +838,7 @@ IF expression THEN ... SELECT expression - to the main SQL engine. While forming the SELECT command, + to the main SQL engine. While forming the SELECT command, any occurrences of PL/pgSQL variable names are replaced by parameters, as discussed in detail in . @@ -846,17 +846,17 @@ SELECT expression be prepared just once and then reused for subsequent evaluations with different values of the variables. Thus, what really happens on first use of an expression is essentially a - PREPARE command. For example, if we have declared - two integer variables x and y, and we write + PREPARE command. For example, if we have declared + two integer variables x and y, and we write IF x < y THEN ... what happens behind the scenes is equivalent to -PREPARE statement_name(integer, integer) AS SELECT $1 < $2; +PREPARE statement_name(integer, integer) AS SELECT $1 < $2; - and then this prepared statement is EXECUTEd for each - execution of the IF statement, with the current values + and then this prepared statement is EXECUTEd for each + execution of the IF statement, with the current values of the PL/pgSQL variables supplied as parameter values. Normally these details are not important to a PL/pgSQL user, but @@ -888,20 +888,20 @@ PREPARE statement_name(integer, integer) AS SELECT $1 < $2; variable { := | = } expression; As explained previously, the expression in such a statement is evaluated - by means of an SQL SELECT command sent to the main + by means of an SQL SELECT command sent to the main database engine. The expression must yield a single value (possibly a row value, if the variable is a row or record variable). The target variable can be a simple variable (optionally qualified with a block name), a field of a row or record variable, or an element of an array - that is a simple variable or field. Equal (=) can be - used instead of PL/SQL-compliant :=. + that is a simple variable or field. Equal (=) can be + used instead of PL/SQL-compliant :=. If the expression's result data type doesn't match the variable's data type, the value will be coerced as though by an assignment cast (see ). If no assignment cast is known - for the pair of data types involved, the PL/pgSQL + for the pair of data types involved, the PL/pgSQL interpreter will attempt to convert the result value textually, that is by applying the result type's output function followed by the variable type's input function. Note that this could result in run-time errors @@ -923,7 +923,7 @@ my_record.user_id := 20; For any SQL command that does not return rows, for example - INSERT without a RETURNING clause, you can + INSERT without a RETURNING clause, you can execute the command within a PL/pgSQL function just by writing the command. 
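For example, a sketch of running a row-less command directly (the event_log table is hypothetical):

    CREATE FUNCTION log_event(msg text) RETURNS void AS $$
    BEGIN
        -- an SQL command returning no rows is simply written as a statement;
        -- the PL/pgSQL variable msg is passed as a query parameter
        INSERT INTO event_log (logged_at, message) VALUES (now(), msg);
    END;
    $$ LANGUAGE plpgsql;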
@@ -944,7 +944,7 @@ my_record.user_id := 20; - Sometimes it is useful to evaluate an expression or SELECT + Sometimes it is useful to evaluate an expression or SELECT query but discard the result, for example when calling a function that has side-effects but no useful result value. To do this in PL/pgSQL, use the @@ -956,9 +956,9 @@ PERFORM query; This executes query and discards the result. Write the query the same - way you would write an SQL SELECT command, but replace the - initial keyword SELECT with PERFORM. - For WITH queries, use PERFORM and then + way you would write an SQL SELECT command, but replace the + initial keyword SELECT with PERFORM. + For WITH queries, use PERFORM and then place the query in parentheses. (In this case, the query can only return one row.) PL/pgSQL variables will be @@ -976,7 +976,7 @@ PERFORM query; present the only accepted way to do it is PERFORM. A SQL command that can return rows, such as SELECT, will be rejected as an error - unless it has an INTO clause as discussed in the + unless it has an INTO clause as discussed in the next section. @@ -1006,7 +1006,7 @@ PERFORM create_mv('cs_session_page_requests_mv', my_query); The result of a SQL command yielding a single row (possibly of multiple columns) can be assigned to a record variable, row-type variable, or list of scalar variables. This is done by writing the base SQL command and - adding an INTO clause. For example, + adding an INTO clause. For example, SELECT select_expressions INTO STRICT target FROM ...; @@ -1021,21 +1021,21 @@ DELETE ... RETURNING expressions INTO STRIC PL/pgSQL variables will be substituted into the rest of the query, and the plan is cached, just as described above for commands that do not return rows. - This works for SELECT, - INSERT/UPDATE/DELETE with - RETURNING, and utility commands that return row-set - results (such as EXPLAIN). - Except for the INTO clause, the SQL command is the same + This works for SELECT, + INSERT/UPDATE/DELETE with + RETURNING, and utility commands that return row-set + results (such as EXPLAIN). + Except for the INTO clause, the SQL command is the same as it would be written outside PL/pgSQL. - Note that this interpretation of SELECT with INTO - is quite different from PostgreSQL's regular - SELECT INTO command, wherein the INTO + Note that this interpretation of SELECT with INTO + is quite different from PostgreSQL's regular + SELECT INTO command, wherein the INTO target is a newly created table. If you want to create a table from a - SELECT result inside a + SELECT result inside a PL/pgSQL function, use the syntax CREATE TABLE ... AS SELECT. @@ -1050,21 +1050,21 @@ DELETE ... RETURNING expressions INTO STRIC - The INTO clause can appear almost anywhere in the SQL + The INTO clause can appear almost anywhere in the SQL command. Customarily it is written either just before or just after the list of select_expressions in a - SELECT command, or at the end of the command for other + SELECT command, or at the end of the command for other command types. It is recommended that you follow this convention in case the PL/pgSQL parser becomes stricter in future versions. - If STRICT is not specified in the INTO + If STRICT is not specified in the INTO clause, then target will be set to the first row returned by the query, or to nulls if the query returned no rows. - (Note that the first row is not - well-defined unless you've used ORDER BY.) Any result rows + (Note that the first row is not + well-defined unless you've used ORDER BY.) 
Any result rows after the first row are discarded. You can check the special FOUND variable (see ) to @@ -1079,7 +1079,7 @@ END IF; If the STRICT option is specified, the query must return exactly one row or a run-time error will be reported, either - NO_DATA_FOUND (no rows) or TOO_MANY_ROWS + NO_DATA_FOUND (no rows) or TOO_MANY_ROWS (more than one row). You can use an exception block if you wish to catch the error, for example: @@ -1093,28 +1093,28 @@ BEGIN RAISE EXCEPTION 'employee % not unique', myname; END; - Successful execution of a command with STRICT + Successful execution of a command with STRICT always sets FOUND to true. - For INSERT/UPDATE/DELETE with - RETURNING, PL/pgSQL reports + For INSERT/UPDATE/DELETE with + RETURNING, PL/pgSQL reports an error for more than one returned row, even when STRICT is not specified. This is because there - is no option such as ORDER BY with which to determine + is no option such as ORDER BY with which to determine which affected row should be returned. - If print_strict_params is enabled for the function, + If print_strict_params is enabled for the function, then when an error is thrown because the requirements - of STRICT are not met, the DETAIL part of + of STRICT are not met, the DETAIL part of the error message will include information about the parameters passed to the query. - You can change the print_strict_params + You can change the print_strict_params setting for all functions by setting - plpgsql.print_strict_params, though only subsequent + plpgsql.print_strict_params, though only subsequent function compilations will be affected. You can also enable it on a per-function basis by using a compiler option, for example: @@ -1140,7 +1140,7 @@ CONTEXT: PL/pgSQL function get_userid(text) line 6 at SQL statement - The STRICT option matches the behavior of + The STRICT option matches the behavior of Oracle PL/SQL's SELECT INTO and related statements. @@ -1174,12 +1174,12 @@ EXECUTE command-string INT command to be executed. The optional target is a record variable, a row variable, or a comma-separated list of simple variables and record/row fields, into which the results of - the command will be stored. The optional USING expressions + the command will be stored. The optional USING expressions supply values to be inserted into the command. - No substitution of PL/pgSQL variables is done on the + No substitution of PL/pgSQL variables is done on the computed command string. Any required variable values must be inserted in the command string as it is constructed; or you can use parameters as described below. @@ -1207,14 +1207,14 @@ EXECUTE command-string INT - If the STRICT option is given, an error is reported + If the STRICT option is given, an error is reported unless the query produces exactly one row. The command string can use parameter values, which are referenced - in the command as $1, $2, etc. - These symbols refer to values supplied in the USING + in the command as $1, $2, etc. + These symbols refer to values supplied in the USING clause. 
This method is often preferable to inserting data values into the command string as text: it avoids run-time overhead of converting the values to text and back, and it is much less prone @@ -1240,7 +1240,7 @@ EXECUTE 'SELECT count(*) FROM ' INTO c USING checked_user, checked_date; - A cleaner approach is to use format()'s %I + A cleaner approach is to use format()'s %I specification for table or column names (strings separated by a newline are concatenated): @@ -1250,32 +1250,32 @@ EXECUTE format('SELECT count(*) FROM %I ' USING checked_user, checked_date; Another restriction on parameter symbols is that they only work in - SELECT, INSERT, UPDATE, and - DELETE commands. In other statement + SELECT, INSERT, UPDATE, and + DELETE commands. In other statement types (generically called utility statements), you must insert values textually even if they are just data values. - An EXECUTE with a simple constant command string and some - USING parameters, as in the first example above, is + An EXECUTE with a simple constant command string and some + USING parameters, as in the first example above, is functionally equivalent to just writing the command directly in PL/pgSQL and allowing replacement of PL/pgSQL variables to happen automatically. - The important difference is that EXECUTE will re-plan + The important difference is that EXECUTE will re-plan the command on each execution, generating a plan that is specific to the current parameter values; whereas PL/pgSQL may otherwise create a generic plan and cache it for re-use. In situations where the best plan depends strongly on the parameter values, it can be helpful to use - EXECUTE to positively ensure that a generic plan is not + EXECUTE to positively ensure that a generic plan is not selected. SELECT INTO is not currently supported within - EXECUTE; instead, execute a plain SELECT - command and specify INTO as part of the EXECUTE + EXECUTE; instead, execute a plain SELECT + command and specify INTO as part of the EXECUTE itself. @@ -1287,7 +1287,7 @@ EXECUTE format('SELECT count(*) FROM %I ' statement supported by the PostgreSQL server. The server's EXECUTE statement cannot be used directly within - PL/pgSQL functions (and is not needed). + PL/pgSQL functions (and is not needed). @@ -1326,7 +1326,7 @@ EXECUTE format('SELECT count(*) FROM %I ' Dynamic values require careful handling since they might contain quote characters. - An example using format() (this assumes that you are + An example using format() (this assumes that you are dollar quoting the function body so quote marks need not be doubled): EXECUTE format('UPDATE tbl SET %I = $1 ' @@ -1351,7 +1351,7 @@ EXECUTE 'UPDATE tbl SET ' or table identifiers should be passed through quote_ident before insertion in a dynamic query. Expressions containing values that should be literal strings in the - constructed command should be passed through quote_literal. + constructed command should be passed through quote_literal. These functions take the appropriate steps to return the input text enclosed in double or single quotes respectively, with any embedded special characters properly escaped. @@ -1360,12 +1360,12 @@ EXECUTE 'UPDATE tbl SET ' Because quote_literal is labeled STRICT, it will always return null when called with a - null argument. In the above example, if newvalue or - keyvalue were null, the entire dynamic query string would + null argument. In the above example, if newvalue or + keyvalue were null, the entire dynamic query string would become null, leading to an error from EXECUTE. 
- You can avoid this problem by using the quote_nullable - function, which works the same as quote_literal except that - when called with a null argument it returns the string NULL. + You can avoid this problem by using the quote_nullable + function, which works the same as quote_literal except that + when called with a null argument it returns the string NULL. For example, EXECUTE 'UPDATE tbl SET ' @@ -1376,26 +1376,26 @@ EXECUTE 'UPDATE tbl SET ' || quote_nullable(keyvalue); If you are dealing with values that might be null, you should usually - use quote_nullable in place of quote_literal. + use quote_nullable in place of quote_literal. As always, care must be taken to ensure that null values in a query do - not deliver unintended results. For example the WHERE clause + not deliver unintended results. For example the WHERE clause 'WHERE key = ' || quote_nullable(keyvalue) - will never succeed if keyvalue is null, because the - result of using the equality operator = with a null operand + will never succeed if keyvalue is null, because the + result of using the equality operator = with a null operand is always null. If you wish null to work like an ordinary key value, you would need to rewrite the above as 'WHERE key IS NOT DISTINCT FROM ' || quote_nullable(keyvalue) - (At present, IS NOT DISTINCT FROM is handled much less - efficiently than =, so don't do this unless you must. + (At present, IS NOT DISTINCT FROM is handled much less + efficiently than =, so don't do this unless you must. See for - more information on nulls and IS DISTINCT.) + more information on nulls and IS DISTINCT.) @@ -1409,12 +1409,12 @@ EXECUTE 'UPDATE tbl SET ' || '$$ WHERE key = ' || quote_literal(keyvalue); - because it would break if the contents of newvalue - happened to contain $$. The same objection would + because it would break if the contents of newvalue + happened to contain $$. The same objection would apply to any other dollar-quoting delimiter you might pick. So, to safely quote text that is not known in advance, you - must use quote_literal, - quote_nullable, or quote_ident, as appropriate. + must use quote_literal, + quote_nullable, or quote_ident, as appropriate. @@ -1425,8 +1425,8 @@ EXECUTE 'UPDATE tbl SET ' EXECUTE format('UPDATE tbl SET %I = %L ' 'WHERE key = %L', colname, newvalue, keyvalue); - %I is equivalent to quote_ident, and - %L is equivalent to quote_nullable. + %I is equivalent to quote_ident, and + %L is equivalent to quote_nullable. The format function can be used in conjunction with the USING clause: @@ -1435,7 +1435,7 @@ EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname) This form is better because the variables are handled in their native data type format, rather than unconditionally converting them to - text and quoting them via %L. It is also more efficient. + text and quoting them via %L. It is also more efficient. @@ -1443,7 +1443,7 @@ EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname) A much larger example of a dynamic command and EXECUTE can be seen in , which builds and executes a - CREATE FUNCTION command to define a new function. + CREATE FUNCTION command to define a new function. @@ -1460,14 +1460,14 @@ GET CURRENT DIAGNOSTICS variable This command allows retrieval of system status indicators. - CURRENT is a noise word (but see also GET STACKED + CURRENT is a noise word (but see also GET STACKED DIAGNOSTICS in ). 
Each item is a key word identifying a status value to be assigned to the specified variable (which should be of the right data type to receive it). The currently available status items are shown in . Colon-equal - (:=) can be used instead of the SQL-standard = + (:=) can be used instead of the SQL-standard = token. An example: GET DIAGNOSTICS integer_var = ROW_COUNT; @@ -1487,13 +1487,13 @@ GET DIAGNOSTICS integer_var = ROW_COUNT; ROW_COUNT - bigint + bigint the number of rows processed by the most recent SQL command RESULT_OID - oid + oid the OID of the last row inserted by the most recent SQL command (only useful after an INSERT command into a table having @@ -1501,7 +1501,7 @@ GET DIAGNOSTICS integer_var = ROW_COUNT; PG_CONTEXT - text + text line(s) of text describing the current call stack (see ) @@ -1526,33 +1526,33 @@ GET DIAGNOSTICS integer_var = ROW_COUNT; - A PERFORM statement sets FOUND + A PERFORM statement sets FOUND true if it produces (and discards) one or more rows, false if no row is produced. - UPDATE, INSERT, and DELETE + UPDATE, INSERT, and DELETE statements set FOUND true if at least one row is affected, false if no row is affected. - A FETCH statement sets FOUND + A FETCH statement sets FOUND true if it returns a row, false if no row is returned. - A MOVE statement sets FOUND + A MOVE statement sets FOUND true if it successfully repositions the cursor, false otherwise. - A FOR or FOREACH statement sets + A FOR or FOREACH statement sets FOUND true if it iterates one or more times, else false. FOUND is set this way when the @@ -1625,7 +1625,7 @@ END; In Oracle's PL/SQL, empty statement lists are not allowed, and so - NULL statements are required for situations + NULL statements are required for situations such as this. PL/pgSQL allows you to just write nothing, instead. @@ -1639,9 +1639,9 @@ END; Control structures are probably the most useful (and - important) part of PL/pgSQL. With - PL/pgSQL's control structures, - you can manipulate PostgreSQL data in a very + important) part of PL/pgSQL. With + PL/pgSQL's control structures, + you can manipulate PostgreSQL data in a very flexible and powerful way. @@ -1655,7 +1655,7 @@ END; - <command>RETURN</> + <command>RETURN</command> RETURN expression; @@ -1665,7 +1665,7 @@ RETURN expression; RETURN with an expression terminates the function and returns the value of expression to the caller. This form - is used for PL/pgSQL functions that do + is used for PL/pgSQL functions that do not return a set. @@ -1716,7 +1716,7 @@ RETURN (1, 2, 'three'::text); -- must cast columns to correct types - <command>RETURN NEXT</> and <command>RETURN QUERY</command> + <command>RETURN NEXT</command> and <command>RETURN QUERY</command> RETURN NEXT in PL/pgSQL @@ -1733,8 +1733,8 @@ RETURN QUERY EXECUTE command-string < - When a PL/pgSQL function is declared to return - SETOF sometype, the procedure + When a PL/pgSQL function is declared to return + SETOF sometype, the procedure to follow is slightly different. In that case, the individual items to return are specified by a sequence of RETURN NEXT or RETURN QUERY commands, and @@ -1755,7 +1755,7 @@ RETURN QUERY EXECUTE command-string < QUERY do not actually return from the function — they simply append zero or more rows to the function's result set. Execution then continues with the next statement in the - PL/pgSQL function. As successive + PL/pgSQL function. As successive RETURN NEXT or RETURN QUERY commands are executed, the result set is built up. 
A final RETURN, which should have no @@ -1767,8 +1767,8 @@ RETURN QUERY EXECUTE command-string < RETURN QUERY has a variant RETURN QUERY EXECUTE, which specifies the query to be executed dynamically. Parameter expressions can - be inserted into the computed query string via USING, - in just the same way as in the EXECUTE command. + be inserted into the computed query string via USING, + in just the same way as in the EXECUTE command. @@ -1778,9 +1778,9 @@ RETURN QUERY EXECUTE command-string < variable(s) will be saved for eventual return as a row of the result. Note that you must declare the function as returning SETOF record when there are multiple output - parameters, or SETOF sometype + parameters, or SETOF sometype when there is just one output parameter of type - sometype, in order to create a set-returning + sometype, in order to create a set-returning function with output parameters. @@ -1848,11 +1848,11 @@ SELECT * FROM get_available_flightid(CURRENT_DATE); The current implementation of RETURN NEXT and RETURN QUERY stores the entire result set before returning from the function, as discussed above. That - means that if a PL/pgSQL function produces a + means that if a PL/pgSQL function produces a very large result set, performance might be poor: data will be written to disk to avoid memory exhaustion, but the function itself will not return until the entire result set has been - generated. A future version of PL/pgSQL might + generated. A future version of PL/pgSQL might allow users to define set-returning functions that do not have this limitation. Currently, the point at which data begins being written to disk is controlled by the @@ -1869,34 +1869,34 @@ SELECT * FROM get_available_flightid(CURRENT_DATE); Conditionals - IF and CASE statements let you execute + IF and CASE statements let you execute alternative commands based on certain conditions. - PL/pgSQL has three forms of IF: + PL/pgSQL has three forms of IF: - IF ... THEN ... END IF + IF ... THEN ... END IF - IF ... THEN ... ELSE ... END IF + IF ... THEN ... ELSE ... END IF - IF ... THEN ... ELSIF ... THEN ... ELSE ... END IF + IF ... THEN ... ELSIF ... THEN ... ELSE ... END IF - and two forms of CASE: + and two forms of CASE: - CASE ... WHEN ... THEN ... ELSE ... END CASE + CASE ... WHEN ... THEN ... ELSE ... END CASE - CASE WHEN ... THEN ... ELSE ... END CASE + CASE WHEN ... THEN ... ELSE ... END CASE - <literal>IF-THEN</> + <literal>IF-THEN</literal> IF boolean-expression THEN @@ -1923,7 +1923,7 @@ END IF; - <literal>IF-THEN-ELSE</> + <literal>IF-THEN-ELSE</literal> IF boolean-expression THEN @@ -1964,7 +1964,7 @@ END IF; - <literal>IF-THEN-ELSIF</> + <literal>IF-THEN-ELSIF</literal> IF boolean-expression THEN @@ -1983,15 +1983,15 @@ END IF; Sometimes there are more than just two alternatives. - IF-THEN-ELSIF provides a convenient + IF-THEN-ELSIF provides a convenient method of checking several alternatives in turn. - The IF conditions are tested successively + The IF conditions are tested successively until the first one that is true is found. Then the associated statement(s) are executed, after which control - passes to the next statement after END IF. - (Any subsequent IF conditions are not - tested.) If none of the IF conditions is true, - then the ELSE block (if any) is executed. + passes to the next statement after END IF. + (Any subsequent IF conditions are not + tested.) If none of the IF conditions is true, + then the ELSE block (if any) is executed. 
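A short sketch of the cascade just described (variable names illustrative):

    IF balance > 0 THEN
        status := 'in credit';
    ELSIF balance = 0 THEN
        status := 'empty';
    ELSE
        -- reached only when no prior condition was true
        status := 'overdrawn';
    END IF;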
@@ -2012,8 +2012,8 @@ END IF; - The key word ELSIF can also be spelled - ELSEIF. + The key word ELSIF can also be spelled + ELSEIF. @@ -2033,14 +2033,14 @@ END IF; - However, this method requires writing a matching END IF - for each IF, so it is much more cumbersome than - using ELSIF when there are many alternatives. + However, this method requires writing a matching END IF + for each IF, so it is much more cumbersome than + using ELSIF when there are many alternatives. - Simple <literal>CASE</> + Simple <literal>CASE</literal> CASE search-expression @@ -2055,16 +2055,16 @@ END CASE; - The simple form of CASE provides conditional execution - based on equality of operands. The search-expression + The simple form of CASE provides conditional execution + based on equality of operands. The search-expression is evaluated (once) and successively compared to each - expression in the WHEN clauses. + expression in the WHEN clauses. If a match is found, then the corresponding statements are executed, and then control - passes to the next statement after END CASE. (Subsequent - WHEN expressions are not evaluated.) If no match is - found, the ELSE statements are - executed; but if ELSE is not present, then a + passes to the next statement after END CASE. (Subsequent + WHEN expressions are not evaluated.) If no match is + found, the ELSE statements are + executed; but if ELSE is not present, then a CASE_NOT_FOUND exception is raised. @@ -2083,7 +2083,7 @@ END CASE; - Searched <literal>CASE</> + Searched <literal>CASE</literal> CASE @@ -2098,16 +2098,16 @@ END CASE; - The searched form of CASE provides conditional execution - based on truth of Boolean expressions. Each WHEN clause's + The searched form of CASE provides conditional execution + based on truth of Boolean expressions. Each WHEN clause's boolean-expression is evaluated in turn, - until one is found that yields true. Then the + until one is found that yields true. Then the corresponding statements are executed, and - then control passes to the next statement after END CASE. - (Subsequent WHEN expressions are not evaluated.) - If no true result is found, the ELSE + then control passes to the next statement after END CASE. + (Subsequent WHEN expressions are not evaluated.) + If no true result is found, the ELSE statements are executed; - but if ELSE is not present, then a + but if ELSE is not present, then a CASE_NOT_FOUND exception is raised. @@ -2125,9 +2125,9 @@ END CASE; - This form of CASE is entirely equivalent to - IF-THEN-ELSIF, except for the rule that reaching - an omitted ELSE clause results in an error rather + This form of CASE is entirely equivalent to + IF-THEN-ELSIF, except for the rule that reaching + an omitted ELSE clause results in an error rather than doing nothing. @@ -2143,14 +2143,14 @@ END CASE; - With the LOOP, EXIT, - CONTINUE, WHILE, FOR, - and FOREACH statements, you can arrange for your - PL/pgSQL function to repeat a series of commands. + With the LOOP, EXIT, + CONTINUE, WHILE, FOR, + and FOREACH statements, you can arrange for your + PL/pgSQL function to repeat a series of commands. - <literal>LOOP</> + <literal>LOOP</literal> <<label>> @@ -2160,17 +2160,17 @@ END LOOP label ; - LOOP defines an unconditional loop that is repeated - indefinitely until terminated by an EXIT or + LOOP defines an unconditional loop that is repeated + indefinitely until terminated by an EXIT or RETURN statement. 
The optional - label can be used by EXIT + label can be used by EXIT and CONTINUE statements within nested loops to specify which loop those statements refer to. - <literal>EXIT</> + <literal>EXIT</literal> EXIT @@ -2184,21 +2184,21 @@ EXIT label WHEN If no label is given, the innermost loop is terminated and the statement following END - LOOP is executed next. If label + LOOP is executed next. If label is given, it must be the label of the current or some outer level of nested loop or block. Then the named loop or block is terminated and control continues with the statement after the - loop's/block's corresponding END. + loop's/block's corresponding END. - If WHEN is specified, the loop exit occurs only if - boolean-expression is true. Otherwise, control passes - to the statement after EXIT. + If WHEN is specified, the loop exit occurs only if + boolean-expression is true. Otherwise, control passes + to the statement after EXIT. - EXIT can be used with all types of loops; it is + EXIT can be used with all types of loops; it is not limited to use with unconditional loops. @@ -2242,7 +2242,7 @@ END; - <literal>CONTINUE</> + <literal>CONTINUE</literal> CONTINUE @@ -2254,25 +2254,25 @@ CONTINUE label WHEN - If no label is given, the next iteration of + If no label is given, the next iteration of the innermost loop is begun. That is, all statements remaining in the loop body are skipped, and control returns to the loop control expression (if any) to determine whether another loop iteration is needed. - If label is present, it + If label is present, it specifies the label of the loop whose execution will be continued. - If WHEN is specified, the next iteration of the - loop is begun only if boolean-expression is + If WHEN is specified, the next iteration of the + loop is begun only if boolean-expression is true. Otherwise, control passes to the statement after - CONTINUE. + CONTINUE. - CONTINUE can be used with all types of loops; it + CONTINUE can be used with all types of loops; it is not limited to use with unconditional loops. @@ -2291,7 +2291,7 @@ END LOOP; - <literal>WHILE</> + <literal>WHILE</literal> WHILE @@ -2306,7 +2306,7 @@ END LOOP label ; - The WHILE statement repeats a + The WHILE statement repeats a sequence of statements so long as the boolean-expression evaluates to true. The expression is checked just before @@ -2328,7 +2328,7 @@ END LOOP; - <literal>FOR</> (Integer Variant) + <literal>FOR</literal> (Integer Variant) <<label>> @@ -2338,22 +2338,22 @@ END LOOP label ; - This form of FOR creates a loop that iterates over a range + This form of FOR creates a loop that iterates over a range of integer values. The variable name is automatically defined as type - integer and exists only inside the loop (any existing + integer and exists only inside the loop (any existing definition of the variable name is ignored within the loop). The two expressions giving the lower and upper bound of the range are evaluated once when entering - the loop. If the BY clause isn't specified the iteration - step is 1, otherwise it's the value specified in the BY + the loop. If the BY clause isn't specified the iteration + step is 1, otherwise it's the value specified in the BY clause, which again is evaluated once on loop entry. - If REVERSE is specified then the step value is + If REVERSE is specified then the step value is subtracted, rather than added, after each iteration. 
- Some examples of integer FOR loops: + Some examples of integer FOR loops: FOR i IN 1..10 LOOP -- i will take on the values 1,2,3,4,5,6,7,8,9,10 within the loop @@ -2371,13 +2371,13 @@ END LOOP; If the lower bound is greater than the upper bound (or less than, - in the REVERSE case), the loop body is not + in the REVERSE case), the loop body is not executed at all. No error is raised. If a label is attached to the - FOR loop then the integer loop variable can be + FOR loop then the integer loop variable can be referenced with a qualified name, using that label. @@ -2388,7 +2388,7 @@ END LOOP; Looping Through Query Results - Using a different type of FOR loop, you can iterate through + Using a different type of FOR loop, you can iterate through the results of a query and manipulate that data accordingly. The syntax is: @@ -2424,28 +2424,28 @@ END; $$ LANGUAGE plpgsql; - If the loop is terminated by an EXIT statement, the last + If the loop is terminated by an EXIT statement, the last assigned row value is still accessible after the loop. - The query used in this type of FOR + The query used in this type of FOR statement can be any SQL command that returns rows to the caller: - SELECT is the most common case, - but you can also use INSERT, UPDATE, or - DELETE with a RETURNING clause. Some utility - commands such as EXPLAIN will work too. + SELECT is the most common case, + but you can also use INSERT, UPDATE, or + DELETE with a RETURNING clause. Some utility + commands such as EXPLAIN will work too. - PL/pgSQL variables are substituted into the query text, + PL/pgSQL variables are substituted into the query text, and the query plan is cached for possible re-use, as discussed in detail in and . - The FOR-IN-EXECUTE statement is another way to iterate over + The FOR-IN-EXECUTE statement is another way to iterate over rows: <<label>> @@ -2455,11 +2455,11 @@ END LOOP label ; This is like the previous form, except that the source query is specified as a string expression, which is evaluated and replanned - on each entry to the FOR loop. This allows the programmer to + on each entry to the FOR loop. This allows the programmer to choose the speed of a preplanned query or the flexibility of a dynamic query, just as with a plain EXECUTE statement. As with EXECUTE, parameter values can be inserted - into the dynamic command via USING. + into the dynamic command via USING. @@ -2473,13 +2473,13 @@ END LOOP label ; Looping Through Arrays - The FOREACH loop is much like a FOR loop, + The FOREACH loop is much like a FOR loop, but instead of iterating through the rows returned by a SQL query, it iterates through the elements of an array value. - (In general, FOREACH is meant for looping through + (In general, FOREACH is meant for looping through components of a composite-valued expression; variants for looping through composites besides arrays may be added in future.) - The FOREACH statement to loop over an array is: + The FOREACH statement to loop over an array is: <<label>> @@ -2490,7 +2490,7 @@ END LOOP label ; - Without SLICE, or if SLICE 0 is specified, + Without SLICE, or if SLICE 0 is specified, the loop iterates through individual elements of the array produced by evaluating the expression. The target variable is assigned each @@ -2522,13 +2522,13 @@ $$ LANGUAGE plpgsql; - With a positive SLICE value, FOREACH + With a positive SLICE value, FOREACH iterates through slices of the array rather than single elements. 
- The SLICE value must be an integer constant not larger + The SLICE value must be an integer constant not larger than the number of dimensions of the array. The target variable must be an array, and it receives successive slices of the array value, where each slice - is of the number of dimensions specified by SLICE. + is of the number of dimensions specified by SLICE. Here is an example of iterating through one-dimensional slices: @@ -2562,12 +2562,12 @@ NOTICE: row = {10,11,12} - By default, any error occurring in a PL/pgSQL + By default, any error occurring in a PL/pgSQL function aborts execution of the function, and indeed of the surrounding transaction as well. You can trap errors and recover - from them by using a BEGIN block with an - EXCEPTION clause. The syntax is an extension of the - normal syntax for a BEGIN block: + from them by using a BEGIN block with an + EXCEPTION clause. The syntax is an extension of the + normal syntax for a BEGIN block: <<label>> @@ -2588,18 +2588,18 @@ END; If no error occurs, this form of block simply executes all the statements, and then control passes - to the next statement after END. But if an error + to the next statement after END. But if an error occurs within the statements, further processing of the statements is - abandoned, and control passes to the EXCEPTION list. + abandoned, and control passes to the EXCEPTION list. The list is searched for the first condition matching the error that occurred. If a match is found, the corresponding handler_statements are executed, and then control passes to the next statement after - END. If no match is found, the error propagates out - as though the EXCEPTION clause were not there at all: + END. If no match is found, the error propagates out + as though the EXCEPTION clause were not there at all: the error can be caught by an enclosing block with - EXCEPTION, or if there is none it aborts processing + EXCEPTION, or if there is none it aborts processing of the function. @@ -2607,12 +2607,12 @@ END; The condition names can be any of those shown in . A category name matches any error within its category. The special - condition name OTHERS matches every error type except - QUERY_CANCELED and ASSERT_FAILURE. + condition name OTHERS matches every error type except + QUERY_CANCELED and ASSERT_FAILURE. (It is possible, but often unwise, to trap those two error types by name.) Condition names are not case-sensitive. Also, an error condition can be specified - by SQLSTATE code; for example these are equivalent: + by SQLSTATE code; for example these are equivalent: WHEN division_by_zero THEN ... WHEN SQLSTATE '22012' THEN ... @@ -2622,13 +2622,13 @@ WHEN SQLSTATE '22012' THEN ... If a new error occurs within the selected handler_statements, it cannot be caught - by this EXCEPTION clause, but is propagated out. - A surrounding EXCEPTION clause could catch it. + by this EXCEPTION clause, but is propagated out. + A surrounding EXCEPTION clause could catch it. - When an error is caught by an EXCEPTION clause, - the local variables of the PL/pgSQL function + When an error is caught by an EXCEPTION clause, + the local variables of the PL/pgSQL function remain as they were when the error occurred, but all changes to persistent database state within the block are rolled back. As an example, consider this fragment: @@ -2646,32 +2646,32 @@ EXCEPTION END; - When control reaches the assignment to y, it will - fail with a division_by_zero error. This will be caught by - the EXCEPTION clause. 
The value returned in the - RETURN statement will be the incremented value of - x, but the effects of the UPDATE command will - have been rolled back. The INSERT command preceding the + When control reaches the assignment to y, it will + fail with a division_by_zero error. This will be caught by + the EXCEPTION clause. The value returned in the + RETURN statement will be the incremented value of + x, but the effects of the UPDATE command will + have been rolled back. The INSERT command preceding the block is not rolled back, however, so the end result is that the database - contains Tom Jones not Joe Jones. + contains Tom Jones not Joe Jones. - A block containing an EXCEPTION clause is significantly + A block containing an EXCEPTION clause is significantly more expensive to enter and exit than a block without one. Therefore, - don't use EXCEPTION without need. + don't use EXCEPTION without need. - Exceptions with <command>UPDATE</>/<command>INSERT</> + Exceptions with <command>UPDATE</command>/<command>INSERT</command> This example uses exception handling to perform either - UPDATE or INSERT, as appropriate. It is - recommended that applications use INSERT with - ON CONFLICT DO UPDATE rather than actually using + UPDATE or INSERT, as appropriate. It is + recommended that applications use INSERT with + ON CONFLICT DO UPDATE rather than actually using this pattern. This example serves primarily to illustrate use of PL/pgSQL control flow structures: @@ -2705,8 +2705,8 @@ SELECT merge_db(1, 'david'); SELECT merge_db(1, 'dennis'); - This coding assumes the unique_violation error is caused by - the INSERT, and not by, say, an INSERT in a + This coding assumes the unique_violation error is caused by + the INSERT, and not by, say, an INSERT in a trigger function on the table. It might also misbehave if there is more than one unique index on the table, since it will retry the operation regardless of which index caused the error. @@ -2722,7 +2722,7 @@ SELECT merge_db(1, 'dennis'); Exception handlers frequently need to identify the specific error that occurred. There are two ways to get information about the current - exception in PL/pgSQL: special variables and the + exception in PL/pgSQL: special variables and the GET STACKED DIAGNOSTICS command. @@ -2764,52 +2764,52 @@ GET STACKED DIAGNOSTICS variable { = | := } RETURNED_SQLSTATE - text + text the SQLSTATE error code of the exception COLUMN_NAME - text + text the name of the column related to exception CONSTRAINT_NAME - text + text the name of the constraint related to exception PG_DATATYPE_NAME - text + text the name of the data type related to exception MESSAGE_TEXT - text + text the text of the exception's primary message TABLE_NAME - text + text the name of the table related to exception SCHEMA_NAME - text + text the name of the schema related to exception PG_EXCEPTION_DETAIL - text + text the text of the exception's detail message, if any PG_EXCEPTION_HINT - text + text the text of the exception's hint message, if any PG_EXCEPTION_CONTEXT - text + text line(s) of text describing the call stack at the time of the exception (see ) @@ -2850,9 +2850,9 @@ END; in , retrieves information about current execution state (whereas the GET STACKED DIAGNOSTICS command discussed above reports information about - the execution state as of a previous error). Its PG_CONTEXT + the execution state as of a previous error). Its PG_CONTEXT status item is useful for identifying the current execution - location. 
PG_CONTEXT returns a text string with line(s) + location. PG_CONTEXT returns a text string with line(s) of text describing the call stack. The first line refers to the current function and currently executing GET DIAGNOSTICS command. The second and any subsequent lines refer to calling functions @@ -2907,11 +2907,11 @@ CONTEXT: PL/pgSQL function outer_func() line 3 at RETURN Rather than executing a whole query at once, it is possible to set - up a cursor that encapsulates the query, and then read + up a cursor that encapsulates the query, and then read the query result a few rows at a time. One reason for doing this is to avoid memory overrun when the result contains a large number of - rows. (However, PL/pgSQL users do not normally need - to worry about that, since FOR loops automatically use a cursor + rows. (However, PL/pgSQL users do not normally need + to worry about that, since FOR loops automatically use a cursor internally to avoid memory problems.) A more interesting usage is to return a reference to a cursor that a function has created, allowing the caller to read the rows. This provides an efficient way to return @@ -2922,19 +2922,19 @@ CONTEXT: PL/pgSQL function outer_func() line 3 at RETURN Declaring Cursor Variables - All access to cursors in PL/pgSQL goes through + All access to cursors in PL/pgSQL goes through cursor variables, which are always of the special data type - refcursor. One way to create a cursor variable - is just to declare it as a variable of type refcursor. + refcursor. One way to create a cursor variable + is just to declare it as a variable of type refcursor. Another way is to use the cursor declaration syntax, which in general is: name NO SCROLL CURSOR ( arguments ) FOR query; - (FOR can be replaced by IS for + (FOR can be replaced by IS for Oracle compatibility.) - If SCROLL is specified, the cursor will be capable of - scrolling backward; if NO SCROLL is specified, backward + If SCROLL is specified, the cursor will be capable of + scrolling backward; if NO SCROLL is specified, backward fetches will be rejected; if neither specification appears, it is query-dependent whether backward fetches will be allowed. arguments, if specified, is a @@ -2952,13 +2952,13 @@ DECLARE curs2 CURSOR FOR SELECT * FROM tenk1; curs3 CURSOR (key integer) FOR SELECT * FROM tenk1 WHERE unique1 = key; - All three of these variables have the data type refcursor, + All three of these variables have the data type refcursor, but the first can be used with any query, while the second has - a fully specified query already bound to it, and the last - has a parameterized query bound to it. (key will be + a fully specified query already bound to it, and the last + has a parameterized query bound to it. (key will be replaced by an integer parameter value when the cursor is opened.) - The variable curs1 - is said to be unbound since it is not bound to + The variable curs1 + is said to be unbound since it is not bound to any particular query. @@ -2968,16 +2968,16 @@ DECLARE Before a cursor can be used to retrieve rows, it must be - opened. (This is the equivalent action to the SQL - command DECLARE CURSOR.) PL/pgSQL has - three forms of the OPEN statement, two of which use unbound + opened. (This is the equivalent action to the SQL + command DECLARE CURSOR.) PL/pgSQL has + three forms of the OPEN statement, two of which use unbound cursor variables while the third uses a bound cursor variable. 
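    As a preview, the three forms look like this in use (a sketch, not a
    complete function: curs1 and curs3 are the variables declared above,
    while foo, mykey, tabname, and keyvalue are assumed names standing in
    for a table and for PL/pgSQL variables in scope; each form is
    described in detail below):

OPEN curs1 FOR SELECT * FROM foo WHERE key = mykey;
OPEN curs1 FOR EXECUTE format('SELECT * FROM %I WHERE col1 = $1', tabname)
    USING keyvalue;
OPEN curs3(42);
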
Bound cursor variables can also be used without explicitly opening the cursor, - via the FOR statement described in + via the FOR statement described in . @@ -2993,18 +2993,18 @@ OPEN unbound_cursorvar NO refcursor variable). The query must be a + refcursor variable). The query must be a SELECT, or something else that returns rows - (such as EXPLAIN). The query + (such as EXPLAIN). The query is treated in the same way as other SQL commands in - PL/pgSQL: PL/pgSQL + PL/pgSQL: PL/pgSQL variable names are substituted, and the query plan is cached for - possible reuse. When a PL/pgSQL + possible reuse. When a PL/pgSQL variable is substituted into the cursor query, the value that is - substituted is the one it has at the time of the OPEN; + substituted is the one it has at the time of the OPEN; subsequent changes to the variable will not affect the cursor's behavior. - The SCROLL and NO SCROLL + The SCROLL and NO SCROLL options have the same meanings as for a bound cursor. @@ -3028,16 +3028,16 @@ OPEN unbound_cursorvar NO refcursor variable). The query is specified as a string + refcursor variable). The query is specified as a string expression, in the same way as in the EXECUTE command. As usual, this gives flexibility so the query plan can vary from one run to the next (see ), and it also means that variable substitution is not done on the command string. As with EXECUTE, parameter values can be inserted into the dynamic command via - format() and USING. - The SCROLL and - NO SCROLL options have the same meanings as for a bound + format() and USING. + The SCROLL and + NO SCROLL options have the same meanings as for a bound cursor. @@ -3047,8 +3047,8 @@ OPEN unbound_cursorvar NO In this example, the table name is inserted into the query via - format(). The comparison value for col1 - is inserted via a USING parameter, so it needs + format(). The comparison value for col1 + is inserted via a USING parameter, so it needs no quoting. @@ -3071,8 +3071,8 @@ OPEN bound_cursorvar ( The query plan for a bound cursor is always considered cacheable; there is no equivalent of EXECUTE in this case. - Notice that SCROLL and NO SCROLL cannot be - specified in OPEN, as the cursor's scrolling + Notice that SCROLL and NO SCROLL cannot be + specified in OPEN, as the cursor's scrolling behavior was already determined. @@ -3098,13 +3098,13 @@ OPEN curs3(key := 42); Because variable substitution is done on a bound cursor's query, there are really two ways to pass values into the cursor: either - with an explicit argument to OPEN, or implicitly by - referencing a PL/pgSQL variable in the query. + with an explicit argument to OPEN, or implicitly by + referencing a PL/pgSQL variable in the query. However, only variables declared before the bound cursor was declared will be substituted into it. In either case the value to - be passed is determined at the time of the OPEN. + be passed is determined at the time of the OPEN. For example, another way to get the same effect as the - curs3 example above is + curs3 example above is DECLARE key integer; @@ -3127,22 +3127,22 @@ BEGIN These manipulations need not occur in the same function that - opened the cursor to begin with. You can return a refcursor + opened the cursor to begin with. You can return a refcursor value out of a function and let the caller operate on the cursor. - (Internally, a refcursor value is simply the string name + (Internally, a refcursor value is simply the string name of a so-called portal containing the active query for the cursor. 
This name - can be passed around, assigned to other refcursor variables, + can be passed around, assigned to other refcursor variables, and so on, without disturbing the portal.) All portals are implicitly closed at transaction end. Therefore - a refcursor value is usable to reference an open cursor + a refcursor value is usable to reference an open cursor only until the end of the transaction. - <literal>FETCH</> + <literal>FETCH</literal> FETCH direction { FROM | IN } cursor INTO target; @@ -3163,23 +3163,23 @@ FETCH direction { FROM | IN } variants allowed in the SQL command except the ones that can fetch more than one row; namely, it can be - NEXT, - PRIOR, - FIRST, - LAST, - ABSOLUTE count, - RELATIVE count, - FORWARD, or - BACKWARD. + NEXT, + PRIOR, + FIRST, + LAST, + ABSOLUTE count, + RELATIVE count, + FORWARD, or + BACKWARD. Omitting direction is the same - as specifying NEXT. + as specifying NEXT. direction values that require moving backward are likely to fail unless the cursor was declared or opened - with the SCROLL option. + with the SCROLL option. - cursor must be the name of a refcursor + cursor must be the name of a refcursor variable that references an open cursor portal. @@ -3195,7 +3195,7 @@ FETCH RELATIVE -2 FROM curs4 INTO x; - <literal>MOVE</> + <literal>MOVE</literal> MOVE direction { FROM | IN } cursor; @@ -3214,20 +3214,20 @@ MOVE direction { FROM | IN } < The direction clause can be any of the variants allowed in the SQL command, namely - NEXT, - PRIOR, - FIRST, - LAST, - ABSOLUTE count, - RELATIVE count, - ALL, - FORWARD count | ALL , or - BACKWARD count | ALL . + NEXT, + PRIOR, + FIRST, + LAST, + ABSOLUTE count, + RELATIVE count, + ALL, + FORWARD count | ALL , or + BACKWARD count | ALL . Omitting direction is the same - as specifying NEXT. + as specifying NEXT. direction values that require moving backward are likely to fail unless the cursor was declared or opened - with the SCROLL option. + with the SCROLL option. @@ -3242,7 +3242,7 @@ MOVE FORWARD 2 FROM curs4; - <literal>UPDATE/DELETE WHERE CURRENT OF</> + <literal>UPDATE/DELETE WHERE CURRENT OF</literal> UPDATE table SET ... WHERE CURRENT OF cursor; @@ -3253,7 +3253,7 @@ DELETE FROM table WHERE CURRENT OF curso When a cursor is positioned on a table row, that row can be updated or deleted using the cursor to identify the row. There are restrictions on what the cursor's query can be (in particular, - no grouping) and it's best to use FOR UPDATE in the + no grouping) and it's best to use FOR UPDATE in the cursor. For more information see the reference page. @@ -3268,7 +3268,7 @@ UPDATE foo SET dataval = myval WHERE CURRENT OF curs1; - <literal>CLOSE</> + <literal>CLOSE</literal> CLOSE cursor; @@ -3292,7 +3292,7 @@ CLOSE curs1; Returning Cursors - PL/pgSQL functions can return cursors to the + PL/pgSQL functions can return cursors to the caller. This is useful to return multiple rows or columns, especially with very large result sets. To do this, the function opens the cursor and returns the cursor name to the caller (or simply @@ -3305,13 +3305,13 @@ CLOSE curs1; The portal name used for a cursor can be specified by the programmer or automatically generated. To specify a portal name, - simply assign a string to the refcursor variable before - opening it. The string value of the refcursor variable - will be used by OPEN as the name of the underlying portal. 
- However, if the refcursor variable is null, - OPEN automatically generates a name that does not + simply assign a string to the refcursor variable before + opening it. The string value of the refcursor variable + will be used by OPEN as the name of the underlying portal. + However, if the refcursor variable is null, + OPEN automatically generates a name that does not conflict with any existing portal, and assigns it to the - refcursor variable. + refcursor variable. @@ -3405,7 +3405,7 @@ COMMIT; Looping Through a Cursor's Result - There is a variant of the FOR statement that allows + There is a variant of the FOR statement that allows iterating through the rows returned by a cursor. The syntax is: @@ -3416,18 +3416,18 @@ END LOOP label ; The cursor variable must have been bound to some query when it was - declared, and it cannot be open already. The - FOR statement automatically opens the cursor, and it closes + declared, and it cannot be open already. The + FOR statement automatically opens the cursor, and it closes the cursor again when the loop exits. A list of actual argument value expressions must appear if and only if the cursor was declared to take arguments. These values will be substituted in the query, in just - the same way as during an OPEN (see OPEN (see ). The variable recordvar is automatically - defined as type record and exists only inside the loop (any + defined as type record and exists only inside the loop (any existing definition of the variable name is ignored within the loop). Each row returned by the cursor is successively assigned to this record variable and the loop body is executed. @@ -3458,8 +3458,8 @@ END LOOP label ; RAISE level 'format' , expression , ... USING option = expression , ... ; -RAISE level condition_name USING option = expression , ... ; -RAISE level SQLSTATE 'sqlstate' USING option = expression , ... ; +RAISE level condition_name USING option = expression , ... ; +RAISE level SQLSTATE 'sqlstate' USING option = expression , ... ; RAISE level USING option = expression , ... ; RAISE ; @@ -3491,13 +3491,13 @@ RAISE ; Inside the format string, % is replaced by the string representation of the next optional argument's value. Write %% to emit a literal %. - The number of arguments must match the number of % + The number of arguments must match the number of % placeholders in the format string, or an error is raised during the compilation of the function. - In this example, the value of v_job_id will replace the + In this example, the value of v_job_id will replace the % in the string: RAISE NOTICE 'Calling cs_create_job(%)', v_job_id; @@ -3506,7 +3506,7 @@ RAISE NOTICE 'Calling cs_create_job(%)', v_job_id; You can attach additional information to the error report by writing - USING followed by USING followed by option = expression items. Each expression can be any @@ -3518,8 +3518,8 @@ RAISE NOTICE 'Calling cs_create_job(%)', v_job_id; MESSAGE Sets the error message text. This option can't be used in the - form of RAISE that includes a format string - before USING. + form of RAISE that includes a format string + before USING. 
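     For context, a typical RAISE attaches such information with a USING
     clause (a minimal sketch; user_id is assumed to be a variable in
     scope):

RAISE EXCEPTION 'Nonexistent ID --> %', user_id
      USING HINT = 'Please check your user ID';
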
@@ -3577,13 +3577,13 @@ RAISE 'Duplicate user ID: %', user_id USING ERRCODE = '23505'; - There is a second RAISE syntax in which the main argument + There is a second RAISE syntax in which the main argument is the condition name or SQLSTATE to be reported, for example: RAISE division_by_zero; RAISE SQLSTATE '22012'; - In this syntax, USING can be used to supply a custom + In this syntax, USING can be used to supply a custom error message, detail, or hint. Another way to do the earlier example is @@ -3592,25 +3592,25 @@ RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id; - Still another variant is to write RAISE USING or RAISE - level USING and put - everything else into the USING list. + Still another variant is to write RAISE USING or RAISE + level USING and put + everything else into the USING list. - The last variant of RAISE has no parameters at all. - This form can only be used inside a BEGIN block's - EXCEPTION clause; + The last variant of RAISE has no parameters at all. + This form can only be used inside a BEGIN block's + EXCEPTION clause; it causes the error currently being handled to be re-thrown. - Before PostgreSQL 9.1, RAISE without + Before PostgreSQL 9.1, RAISE without parameters was interpreted as re-throwing the error from the block - containing the active exception handler. Thus an EXCEPTION + containing the active exception handler. Thus an EXCEPTION clause nested within that handler could not catch it, even if the - RAISE was within the nested EXCEPTION clause's + RAISE was within the nested EXCEPTION clause's block. This was deemed surprising as well as being incompatible with Oracle's PL/SQL. @@ -3619,7 +3619,7 @@ RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id; If no condition name nor SQLSTATE is specified in a RAISE EXCEPTION command, the default is to use - RAISE_EXCEPTION (P0001). If no message + RAISE_EXCEPTION (P0001). If no message text is specified, the default is to use the condition name or SQLSTATE as message text. @@ -3629,7 +3629,7 @@ RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id; When specifying an error code by SQLSTATE code, you are not limited to the predefined error codes, but can select any error code consisting of five digits and/or upper-case ASCII - letters, other than 00000. It is recommended that + letters, other than 00000. It is recommended that you avoid throwing error codes that end in three zeroes, because these are category codes and can only be trapped by trapping the whole category. @@ -3652,7 +3652,7 @@ RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id; - plpgsql.check_asserts configuration parameter + plpgsql.check_asserts configuration parameter @@ -3667,7 +3667,7 @@ ASSERT condition , condition is a Boolean expression that is expected to always evaluate to true; if it does, the ASSERT statement does nothing further. If the - result is false or null, then an ASSERT_FAILURE exception + result is false or null, then an ASSERT_FAILURE exception is raised. (If an error occurs while evaluating the condition, it is reported as a normal error.) @@ -3676,7 +3676,7 @@ ASSERT condition , If the optional message is provided, it is an expression whose result (if not null) replaces the - default error message text assertion failed, should + default error message text assertion failed, should the condition fail. The message expression is not evaluated in the normal case where the assertion succeeds. 
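     A minimal sketch of an assertion with a custom message, assuming a
     numeric variable new_balance is in scope:

ASSERT new_balance >= 0, 'balance went negative: ' || new_balance;
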
@@ -3684,15 +3684,15 @@ ASSERT condition , Testing of assertions can be enabled or disabled via the configuration - parameter plpgsql.check_asserts, which takes a Boolean - value; the default is on. If this parameter - is off then ASSERT statements do nothing. + parameter plpgsql.check_asserts, which takes a Boolean + value; the default is on. If this parameter + is off then ASSERT statements do nothing. Note that ASSERT is meant for detecting program bugs, not for reporting ordinary error conditions. Use - the RAISE statement, described above, for that. + the RAISE statement, described above, for that. @@ -3710,11 +3710,11 @@ ASSERT condition , PL/pgSQL can be used to define trigger procedures on data changes or database events. - A trigger procedure is created with the CREATE FUNCTION + A trigger procedure is created with the CREATE FUNCTION command, declaring it as a function with no arguments and a return type of - trigger (for data change triggers) or - event_trigger (for database event triggers). - Special local variables named PG_something are + trigger (for data change triggers) or + event_trigger (for database event triggers). + Special local variables named PG_something are automatically defined to describe the condition that triggered the call. @@ -3722,11 +3722,11 @@ ASSERT condition , Triggers on Data Changes - A data change trigger is declared as a - function with no arguments and a return type of trigger. + A data change trigger is declared as a + function with no arguments and a return type of trigger. Note that the function must be declared with no arguments even if it - expects to receive some arguments specified in CREATE TRIGGER - — such arguments are passed via TG_ARGV, as described + expects to receive some arguments specified in CREATE TRIGGER + — such arguments are passed via TG_ARGV, as described below. @@ -3741,7 +3741,7 @@ ASSERT condition , Data type RECORD; variable holding the new - database row for INSERT/UPDATE operations in row-level + database row for INSERT/UPDATE operations in row-level triggers. This variable is unassigned in statement-level triggers and for DELETE operations. @@ -3753,7 +3753,7 @@ ASSERT condition , Data type RECORD; variable holding the old - database row for UPDATE/DELETE operations in row-level + database row for UPDATE/DELETE operations in row-level triggers. This variable is unassigned in statement-level triggers and for INSERT operations. @@ -3798,7 +3798,7 @@ ASSERT condition , Data type text; a string of INSERT, UPDATE, - DELETE, or TRUNCATE + DELETE, or TRUNCATE telling for which operation the trigger was fired. @@ -3820,7 +3820,7 @@ ASSERT condition , Data type name; the name of the table that caused the trigger invocation. This is now deprecated, and could disappear in a future - release. Use TG_TABLE_NAME instead. + release. Use TG_TABLE_NAME instead. @@ -3862,7 +3862,7 @@ ASSERT condition , text; the arguments from the CREATE TRIGGER statement. The index counts from 0. Invalid - indexes (less than 0 or greater than or equal to tg_nargs) + indexes (less than 0 or greater than or equal to tg_nargs) result in a null value. @@ -3877,20 +3877,20 @@ ASSERT condition , - Row-level triggers fired BEFORE can return null to signal the + Row-level triggers fired BEFORE can return null to signal the trigger manager to skip the rest of the operation for this row (i.e., subsequent triggers are not fired, and the - INSERT/UPDATE/DELETE does not occur + INSERT/UPDATE/DELETE does not occur for this row). 
If a nonnull value is returned then the operation proceeds with that row value. Returning a row value different from the original value - of NEW alters the row that will be inserted or + of NEW alters the row that will be inserted or updated. Thus, if the trigger function wants the triggering action to succeed normally without altering the row value, NEW (or a value equal thereto) has to be returned. To alter the row to be stored, it is possible to - replace single values directly in NEW and return the - modified NEW, or to build a complete new record/row to + replace single values directly in NEW and return the + modified NEW, or to build a complete new record/row to return. In the case of a before-trigger on DELETE, the returned value has no direct effect, but it has to be nonnull to allow the trigger action to @@ -3901,28 +3901,28 @@ ASSERT condition , - INSTEAD OF triggers (which are always row-level triggers, + INSTEAD OF triggers (which are always row-level triggers, and may only be used on views) can return null to signal that they did not perform any updates, and that the rest of the operation for this row should be skipped (i.e., subsequent triggers are not fired, and the row is not counted in the rows-affected status for the surrounding - INSERT/UPDATE/DELETE). + INSERT/UPDATE/DELETE). Otherwise a nonnull value should be returned, to signal that the trigger performed the requested operation. For - INSERT and UPDATE operations, the return value - should be NEW, which the trigger function may modify to - support INSERT RETURNING and UPDATE RETURNING + INSERT and UPDATE operations, the return value + should be NEW, which the trigger function may modify to + support INSERT RETURNING and UPDATE RETURNING (this will also affect the row value passed to any subsequent triggers, - or passed to a special EXCLUDED alias reference within - an INSERT statement with an ON CONFLICT DO - UPDATE clause). For DELETE operations, the return - value should be OLD. + or passed to a special EXCLUDED alias reference within + an INSERT statement with an ON CONFLICT DO + UPDATE clause). For DELETE operations, the return + value should be OLD. The return value of a row-level trigger fired AFTER or a statement-level trigger - fired BEFORE or AFTER is + fired BEFORE or AFTER is always ignored; it might as well be null. However, any of these types of triggers might still abort the entire operation by raising an error. @@ -4267,9 +4267,9 @@ SELECT * FROM sales_summary_bytime; - AFTER triggers can also make use of transition - tables to inspect the entire set of rows changed by the triggering - statement. The CREATE TRIGGER command assigns names to one + AFTER triggers can also make use of transition + tables to inspect the entire set of rows changed by the triggering + statement. The CREATE TRIGGER command assigns names to one or both transition tables, and then the function can refer to those names as though they were read-only temporary tables. shows an example. @@ -4286,10 +4286,10 @@ SELECT * FROM sales_summary_bytime; table. This can be significantly faster than the row-trigger approach when the invoking statement has modified many rows. Notice that we must make a separate trigger declaration for each kind of event, since the - REFERENCING clauses must be different for each case. But + REFERENCING clauses must be different for each case. But this does not stop us from using a single trigger function if we choose. 
(In practice, it might be better to use three separate functions and - avoid the run-time tests on TG_OP.) + avoid the run-time tests on TG_OP.) @@ -4348,10 +4348,10 @@ CREATE TRIGGER emp_audit_del PL/pgSQL can be used to define - event triggers. - PostgreSQL requires that a procedure that + event triggers. + PostgreSQL requires that a procedure that is to be called as an event trigger must be declared as a function with - no arguments and a return type of event_trigger. + no arguments and a return type of event_trigger. @@ -4410,29 +4410,29 @@ CREATE EVENT TRIGGER snitch ON ddl_command_start EXECUTE PROCEDURE snitch(); - <application>PL/pgSQL</> Under the Hood + <application>PL/pgSQL</application> Under the Hood This section discusses some implementation details that are - frequently important for PL/pgSQL users to know. + frequently important for PL/pgSQL users to know. Variable Substitution - SQL statements and expressions within a PL/pgSQL function + SQL statements and expressions within a PL/pgSQL function can refer to variables and parameters of the function. Behind the scenes, - PL/pgSQL substitutes query parameters for such references. + PL/pgSQL substitutes query parameters for such references. Parameters will only be substituted in places where a parameter or column reference is syntactically allowed. As an extreme case, consider this example of poor programming style: INSERT INTO foo (foo) VALUES (foo); - The first occurrence of foo must syntactically be a table + The first occurrence of foo must syntactically be a table name, so it will not be substituted, even if the function has a variable - named foo. The second occurrence must be the name of a + named foo. The second occurrence must be the name of a column of the table, so it will not be substituted either. Only the third occurrence is a candidate to be a reference to the function's variable. @@ -4453,18 +4453,18 @@ INSERT INTO foo (foo) VALUES (foo); INSERT INTO dest (col) SELECT foo + bar FROM src; - Here, dest and src must be table names, and - col must be a column of dest, but foo - and bar might reasonably be either variables of the function - or columns of src. + Here, dest and src must be table names, and + col must be a column of dest, but foo + and bar might reasonably be either variables of the function + or columns of src. - By default, PL/pgSQL will report an error if a name + By default, PL/pgSQL will report an error if a name in a SQL statement could refer to either a variable or a table column. You can fix such a problem by renaming the variable or column, or by qualifying the ambiguous reference, or by telling - PL/pgSQL which interpretation to prefer. + PL/pgSQL which interpretation to prefer. @@ -4473,13 +4473,13 @@ INSERT INTO dest (col) SELECT foo + bar FROM src; different naming convention for PL/pgSQL variables than you use for column names. For example, if you consistently name function variables - v_something while none of your - column names start with v_, no conflicts will occur. + v_something while none of your + column names start with v_, no conflicts will occur. Alternatively you can qualify ambiguous references to make them clear. - In the above example, src.foo would be an unambiguous reference + In the above example, src.foo would be an unambiguous reference to the table column. To create an unambiguous reference to a variable, declare it in a labeled block and use the block's label (see ). 
For example, @@ -4491,37 +4491,37 @@ BEGIN foo := ...; INSERT INTO dest (col) SELECT block.foo + bar FROM src; - Here block.foo means the variable even if there is a column - foo in src. Function parameters, as well as - special variables such as FOUND, can be qualified by the + Here block.foo means the variable even if there is a column + foo in src. Function parameters, as well as + special variables such as FOUND, can be qualified by the function's name, because they are implicitly declared in an outer block labeled with the function's name. Sometimes it is impractical to fix all the ambiguous references in a - large body of PL/pgSQL code. In such cases you can - specify that PL/pgSQL should resolve ambiguous references - as the variable (which is compatible with PL/pgSQL's + large body of PL/pgSQL code. In such cases you can + specify that PL/pgSQL should resolve ambiguous references + as the variable (which is compatible with PL/pgSQL's behavior before PostgreSQL 9.0), or as the table column (which is compatible with some other systems such as Oracle). - plpgsql.variable_conflict configuration parameter + plpgsql.variable_conflict configuration parameter To change this behavior on a system-wide basis, set the configuration - parameter plpgsql.variable_conflict to one of - error, use_variable, or - use_column (where error is the factory default). + parameter plpgsql.variable_conflict to one of + error, use_variable, or + use_column (where error is the factory default). This parameter affects subsequent compilations - of statements in PL/pgSQL functions, but not statements + of statements in PL/pgSQL functions, but not statements already compiled in the current session. Because changing this setting - can cause unexpected changes in the behavior of PL/pgSQL + can cause unexpected changes in the behavior of PL/pgSQL functions, it can only be changed by a superuser. @@ -4535,7 +4535,7 @@ BEGIN #variable_conflict use_column These commands affect only the function they are written in, and override - the setting of plpgsql.variable_conflict. An example is + the setting of plpgsql.variable_conflict. An example is CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$ #variable_conflict use_variable @@ -4547,15 +4547,15 @@ CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$ END; $$ LANGUAGE plpgsql; - In the UPDATE command, curtime, comment, - and id will refer to the function's variable and parameters - whether or not users has columns of those names. Notice - that we had to qualify the reference to users.id in the - WHERE clause to make it refer to the table column. - But we did not have to qualify the reference to comment - as a target in the UPDATE list, because syntactically - that must be a column of users. We could write the same - function without depending on the variable_conflict setting + In the UPDATE command, curtime, comment, + and id will refer to the function's variable and parameters + whether or not users has columns of those names. Notice + that we had to qualify the reference to users.id in the + WHERE clause to make it refer to the table column. + But we did not have to qualify the reference to comment + as a target in the UPDATE list, because syntactically + that must be a column of users. 
We could write the same + function without depending on the variable_conflict setting in this way: CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$ @@ -4572,19 +4572,19 @@ $$ LANGUAGE plpgsql; Variable substitution does not happen in the command string given - to EXECUTE or one of its variants. If you need to + to EXECUTE or one of its variants. If you need to insert a varying value into such a command, do so as part of - constructing the string value, or use USING, as illustrated in + constructing the string value, or use USING, as illustrated in . - Variable substitution currently works only in SELECT, - INSERT, UPDATE, and DELETE commands, + Variable substitution currently works only in SELECT, + INSERT, UPDATE, and DELETE commands, because the main SQL engine allows query parameters only in these commands. To use a non-constant name or value in other statement types (generically called utility statements), you must construct - the utility statement as a string and EXECUTE it. + the utility statement as a string and EXECUTE it. @@ -4593,22 +4593,22 @@ $$ LANGUAGE plpgsql; Plan Caching - The PL/pgSQL interpreter parses the function's source + The PL/pgSQL interpreter parses the function's source text and produces an internal binary instruction tree the first time the function is called (within each session). The instruction tree fully translates the - PL/pgSQL statement structure, but individual + PL/pgSQL statement structure, but individual SQL expressions and SQL commands used in the function are not translated immediately. - preparing a query - in PL/pgSQL + preparing a query + in PL/pgSQL As each expression and SQL command is first - executed in the function, the PL/pgSQL interpreter + executed in the function, the PL/pgSQL interpreter parses and analyzes the command to create a prepared statement, using the SPI manager's SPI_prepare function. @@ -4624,17 +4624,17 @@ $$ LANGUAGE plpgsql; - PL/pgSQL (or more precisely, the SPI manager) can + PL/pgSQL (or more precisely, the SPI manager) can furthermore attempt to cache the execution plan associated with any particular prepared statement. If a cached plan is not used, then a fresh execution plan is generated on each visit to the statement, - and the current parameter values (that is, PL/pgSQL + and the current parameter values (that is, PL/pgSQL variable values) can be used to optimize the selected plan. If the statement has no parameters, or is executed many times, the SPI manager - will consider creating a generic plan that is not dependent + will consider creating a generic plan that is not dependent on specific parameter values, and caching that for re-use. Typically this will happen only if the execution plan is not very sensitive to - the values of the PL/pgSQL variables referenced in it. + the values of the PL/pgSQL variables referenced in it. If it is, generating a plan each time is a net win. See for more information about the behavior of prepared statements. @@ -4670,7 +4670,7 @@ $$ LANGUAGE plpgsql; for each trigger function and table combination, not just for each function. This alleviates some of the problems with varying data types; for instance, a trigger function will be able to work - successfully with a column named key even if it happens + successfully with a column named key even if it happens to have different types in different tables. @@ -4720,8 +4720,8 @@ $$ LANGUAGE plpgsql; INSERT is analyzed, and then used in all invocations of logfunc1 during the lifetime of the session. 
Needless to say, this isn't what the programmer - wanted. A better idea is to use the now() or - current_timestamp function. + wanted. A better idea is to use the now() or + current_timestamp function. @@ -4737,7 +4737,7 @@ $$ LANGUAGE plpgsql; functions for the conversion. So, the computed time stamp is updated on each execution as the programmer expects. Even though this happens to work as expected, it's not terribly efficient, so - use of the now() function would still be a better idea. + use of the now() function would still be a better idea. @@ -4749,12 +4749,12 @@ $$ LANGUAGE plpgsql; One good way to develop in - PL/pgSQL is to use the text editor of your + PL/pgSQL is to use the text editor of your choice to create your functions, and in another window, use psql to load and test those functions. If you are doing it this way, it is a good idea to write the function using CREATE OR - REPLACE FUNCTION. That way you can just reload the file to update + REPLACE FUNCTION. That way you can just reload the file to update the function definition. For example: CREATE OR REPLACE FUNCTION testfunc(integer) RETURNS integer AS $$ @@ -4773,10 +4773,10 @@ $$ LANGUAGE plpgsql; - Another good way to develop in PL/pgSQL is with a + Another good way to develop in PL/pgSQL is with a GUI database access tool that facilitates development in a procedural language. One example of such a tool is - pgAdmin, although others exist. These tools often + pgAdmin, although others exist. These tools often provide convenient features such as escaping single quotes and making it easier to recreate and debug functions. @@ -4785,7 +4785,7 @@ $$ LANGUAGE plpgsql; Handling of Quotation Marks - The code of a PL/pgSQL function is specified in + The code of a PL/pgSQL function is specified in CREATE FUNCTION as a string literal. If you write the string literal in the ordinary way with surrounding single quotes, then any single quotes inside the function body @@ -4795,7 +4795,7 @@ $$ LANGUAGE plpgsql; the code can become downright incomprehensible, because you can easily find yourself needing half a dozen or more adjacent quote marks. It's recommended that you instead write the function body as a - dollar-quoted string literal (see dollar-quoted string literal (see ). In the dollar-quoting approach, you never double any quote marks, but instead take care to choose a different dollar-quoting delimiter for each level of @@ -4807,9 +4807,9 @@ CREATE OR REPLACE FUNCTION testfunc(integer) RETURNS integer AS $PROC$ $PROC$ LANGUAGE plpgsql; Within this, you might use quote marks for simple literal strings in - SQL commands and $$ to delimit fragments of SQL commands + SQL commands and $$ to delimit fragments of SQL commands that you are assembling as strings. If you need to quote text that - includes $$, you could use $Q$, and so on. + includes $$, you could use $Q$, and so on. @@ -4830,7 +4830,7 @@ CREATE FUNCTION foo() RETURNS integer AS ' ' LANGUAGE plpgsql; Anywhere within a single-quoted function body, quote marks - must appear in pairs. + must appear in pairs. @@ -4849,7 +4849,7 @@ SELECT * FROM users WHERE f_name=''foobar''; a_output := 'Blah'; SELECT * FROM users WHERE f_name='foobar'; - which is exactly what the PL/pgSQL parser would see + which is exactly what the PL/pgSQL parser would see in either case. 
@@ -4873,7 +4873,7 @@ a_output := a_output || '' AND name LIKE ''''foobar'''' AND xyz'' a_output := a_output || $$ AND name LIKE 'foobar' AND xyz$$ being careful that any dollar-quote delimiters around this are not - just $$. + just $$. @@ -4942,20 +4942,20 @@ a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$ To aid the user in finding instances of simple but common problems before - they cause harm, PL/pgSQL provides additional - checks. When enabled, depending on the configuration, they - can be used to emit either a WARNING or an ERROR + they cause harm, PL/pgSQL provides additional + checks. When enabled, depending on the configuration, they + can be used to emit either a WARNING or an ERROR during the compilation of a function. A function which has received - a WARNING can be executed without producing further messages, + a WARNING can be executed without producing further messages, so you are advised to test in a separate development environment. These additional checks are enabled through the configuration variables - plpgsql.extra_warnings for warnings and - plpgsql.extra_errors for errors. Both can be set either to - a comma-separated list of checks, "none" or "all". - The default is "none". Currently the list of available checks + plpgsql.extra_warnings for warnings and + plpgsql.extra_errors for errors. Both can be set either to + a comma-separated list of checks, "none" or "all". + The default is "none". Currently the list of available checks includes only one: @@ -4968,8 +4968,8 @@ a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$ - The following example shows the effect of plpgsql.extra_warnings - set to shadowed_variables: + The following example shows the effect of plpgsql.extra_warnings + set to shadowed_variables: SET plpgsql.extra_warnings TO 'shadowed_variables'; @@ -5006,10 +5006,10 @@ CREATE FUNCTION This section explains differences between - PostgreSQL's PL/pgSQL + PostgreSQL's PL/pgSQL language and Oracle's PL/SQL language, to help developers who port applications from - Oracle to PostgreSQL. + Oracle to PostgreSQL. @@ -5017,7 +5017,7 @@ CREATE FUNCTION aspects. It is a block-structured, imperative language, and all variables have to be declared. Assignments, loops, conditionals are similar. The main differences you should keep in mind when - porting from PL/SQL to + porting from PL/SQL to PL/pgSQL are: @@ -5025,21 +5025,21 @@ CREATE FUNCTION If a name used in a SQL command could be either a column name of a table or a reference to a variable of the function, - PL/SQL treats it as a column name. This corresponds - to PL/pgSQL's - plpgsql.variable_conflict = use_column + PL/SQL treats it as a column name. This corresponds + to PL/pgSQL's + plpgsql.variable_conflict = use_column behavior, which is not the default, as explained in . It's often best to avoid such ambiguities in the first place, but if you have to port a large amount of code that depends on - this behavior, setting variable_conflict may be the + this behavior, setting variable_conflict may be the best solution. - In PostgreSQL the function body must be written as + In PostgreSQL the function body must be written as a string literal. Therefore you need to use dollar quoting or escape single quotes in the function body. (See .) @@ -5049,10 +5049,10 @@ CREATE FUNCTION Data type names often need translation. 
For example, in Oracle string - values are commonly declared as being of type varchar2, which + values are commonly declared as being of type varchar2, which is a non-SQL-standard type. In PostgreSQL, - use type varchar or text instead. Similarly, replace - type number with numeric, or use some other numeric + use type varchar or text instead. Similarly, replace + type number with numeric, or use some other numeric data type if there's a more appropriate one. @@ -5074,9 +5074,9 @@ CREATE FUNCTION - Integer FOR loops with REVERSE work - differently: PL/SQL counts down from the second - number to the first, while PL/pgSQL counts down + Integer FOR loops with REVERSE work + differently: PL/SQL counts down from the second + number to the first, while PL/pgSQL counts down from the first number to the second, requiring the loop bounds to be swapped when porting. This incompatibility is unfortunate but is unlikely to be changed. (See - FOR loops over queries (other than cursors) also work + FOR loops over queries (other than cursors) also work differently: the target variable(s) must have been declared, - whereas PL/SQL always declares them implicitly. + whereas PL/SQL always declares them implicitly. An advantage of this is that the variable values are still accessible after the loop exits. @@ -5109,14 +5109,14 @@ CREATE FUNCTION shows how to port a simple - function from PL/SQL to PL/pgSQL. + function from PL/SQL to PL/pgSQL. - Porting a Simple Function from <application>PL/SQL</> to <application>PL/pgSQL</> + Porting a Simple Function from <application>PL/SQL</application> to <application>PL/pgSQL</application> - Here is an Oracle PL/SQL function: + Here is an Oracle PL/SQL function: CREATE OR REPLACE FUNCTION cs_fmt_browser_version(v_name varchar2, v_version varchar2) @@ -5134,14 +5134,14 @@ show errors; Let's go through this function and see the differences compared to - PL/pgSQL: + PL/pgSQL: - The type name varchar2 has to be changed to varchar - or text. In the examples in this section, we'll - use varchar, but text is often a better choice if + The type name varchar2 has to be changed to varchar + or text. In the examples in this section, we'll + use varchar, but text is often a better choice if you do not need specific string length limits. @@ -5152,17 +5152,17 @@ show errors; prototype (not the function body) becomes RETURNS in PostgreSQL. - Also, IS becomes AS, and you need to - add a LANGUAGE clause because PL/pgSQL + Also, IS becomes AS, and you need to + add a LANGUAGE clause because PL/pgSQL is not the only possible function language. - In PostgreSQL, the function body is considered + In PostgreSQL, the function body is considered to be a string literal, so you need to use quote marks or dollar - quotes around it. This substitutes for the terminating / + quotes around it. This substitutes for the terminating / in the Oracle approach. @@ -5170,7 +5170,7 @@ show errors; The show errors command does not exist in - PostgreSQL, and is not needed since errors are + PostgreSQL, and is not needed since errors are reported automatically. 
@@ -5179,7 +5179,7 @@ show errors; This is how this function would look when ported to - PostgreSQL: + PostgreSQL: CREATE OR REPLACE FUNCTION cs_fmt_browser_version(v_name varchar, @@ -5203,7 +5203,7 @@ $$ LANGUAGE plpgsql; - Porting a Function that Creates Another Function from <application>PL/SQL</> to <application>PL/pgSQL</> + Porting a Function that Creates Another Function from <application>PL/SQL</application> to <application>PL/pgSQL</application> The following procedure grabs rows from a @@ -5242,7 +5242,7 @@ show errors; - Here is how this function would end up in PostgreSQL: + Here is how this function would end up in PostgreSQL: CREATE OR REPLACE FUNCTION cs_update_referrer_type_proc() RETURNS void AS $func$ DECLARE @@ -5277,24 +5277,24 @@ END; $func$ LANGUAGE plpgsql; Notice how the body of the function is built separately and passed - through quote_literal to double any quote marks in it. This + through quote_literal to double any quote marks in it. This technique is needed because we cannot safely use dollar quoting for defining the new function: we do not know for sure what strings will - be interpolated from the referrer_key.key_string field. - (We are assuming here that referrer_key.kind can be - trusted to always be host, domain, or - url, but referrer_key.key_string might be + be interpolated from the referrer_key.key_string field. + (We are assuming here that referrer_key.kind can be + trusted to always be host, domain, or + url, but referrer_key.key_string might be anything, in particular it might contain dollar signs.) This function is actually an improvement on the Oracle original, because it will - not generate broken code when referrer_key.key_string or - referrer_key.referrer_type contain quote marks. + not generate broken code when referrer_key.key_string or + referrer_key.referrer_type contain quote marks. shows how to port a function - with OUT parameters and string manipulation. - PostgreSQL does not have a built-in + with OUT parameters and string manipulation. + PostgreSQL does not have a built-in instr function, but you can create one using a combination of other functions. In there is a @@ -5305,8 +5305,8 @@ $func$ LANGUAGE plpgsql; Porting a Procedure With String Manipulation and - <literal>OUT</> Parameters from <application>PL/SQL</> to - <application>PL/pgSQL</> + OUT Parameters from PL/SQL to + PL/pgSQL The following Oracle PL/SQL procedure is used @@ -5357,7 +5357,7 @@ show errors; - Here is a possible translation into PL/pgSQL: + Here is a possible translation into PL/pgSQL: CREATE OR REPLACE FUNCTION cs_parse_url( v_url IN VARCHAR, @@ -5411,7 +5411,7 @@ SELECT * FROM cs_parse_url('http://foobar.com/query.cgi?baz'); - Porting a Procedure from <application>PL/SQL</> to <application>PL/pgSQL</> + Porting a Procedure from <application>PL/SQL</application> to <application>PL/pgSQL</application> The Oracle version: @@ -5447,20 +5447,20 @@ show errors - Procedures like this can easily be converted into PostgreSQL + Procedures like this can easily be converted into PostgreSQL functions returning void. This procedure in particular is interesting because it can teach us some things: - There is no PRAGMA statement in PostgreSQL. + There is no PRAGMA statement in PostgreSQL. - If you do a LOCK TABLE in PL/pgSQL, + If you do a LOCK TABLE in PL/pgSQL, the lock will not be released until the calling transaction is finished. @@ -5468,9 +5468,9 @@ show errors - You cannot issue COMMIT in a + You cannot issue COMMIT in a PL/pgSQL function. 
The function is - running within some outer transaction and so COMMIT + running within some outer transaction and so COMMIT would imply terminating the function's execution. However, in this particular case it is not necessary anyway, because the lock obtained by the LOCK TABLE will be released when @@ -5481,7 +5481,7 @@ show errors - This is how we could port this procedure to PL/pgSQL: + This is how we could port this procedure to PL/pgSQL: CREATE OR REPLACE FUNCTION cs_create_job(v_job_id integer) RETURNS void AS $$ @@ -5512,15 +5512,15 @@ $$ LANGUAGE plpgsql; - The syntax of RAISE is considerably different from - Oracle's statement, although the basic case RAISE + The syntax of RAISE is considerably different from + Oracle's statement, although the basic case RAISE exception_name works similarly. - The exception names supported by PL/pgSQL are + The exception names supported by PL/pgSQL are different from Oracle's. The set of built-in exception names is much larger (see ). There is not currently a way to declare user-defined exception names, @@ -5530,7 +5530,7 @@ $$ LANGUAGE plpgsql; The main functional difference between this procedure and the - Oracle equivalent is that the exclusive lock on the cs_jobs + Oracle equivalent is that the exclusive lock on the cs_jobs table will be held until the calling transaction completes. Also, if the caller later aborts (for example due to an error), the effects of this procedure will be rolled back. @@ -5543,7 +5543,7 @@ $$ LANGUAGE plpgsql; This section explains a few other things to watch for when porting - Oracle PL/SQL functions to + Oracle PL/SQL functions to PostgreSQL. @@ -5551,9 +5551,9 @@ $$ LANGUAGE plpgsql; Implicit Rollback after Exceptions - In PL/pgSQL, when an exception is caught by an - EXCEPTION clause, all database changes since the block's - BEGIN are automatically rolled back. That is, the behavior + In PL/pgSQL, when an exception is caught by an + EXCEPTION clause, all database changes since the block's + BEGIN are automatically rolled back. That is, the behavior is equivalent to what you'd get in Oracle with: @@ -5571,10 +5571,10 @@ END; If you are translating an Oracle procedure that uses - SAVEPOINT and ROLLBACK TO in this style, - your task is easy: just omit the SAVEPOINT and - ROLLBACK TO. If you have a procedure that uses - SAVEPOINT and ROLLBACK TO in a different way + SAVEPOINT and ROLLBACK TO in this style, + your task is easy: just omit the SAVEPOINT and + ROLLBACK TO. If you have a procedure that uses + SAVEPOINT and ROLLBACK TO in a different way then some actual thought will be required. @@ -5583,9 +5583,9 @@ END; <command>EXECUTE</command> - The PL/pgSQL version of + The PL/pgSQL version of EXECUTE works similarly to the - PL/SQL version, but you have to remember to use + PL/SQL version, but you have to remember to use quote_literal and quote_ident as described in . Constructs of the @@ -5598,8 +5598,8 @@ END; Optimizing <application>PL/pgSQL</application> Functions - PostgreSQL gives you two function creation - modifiers to optimize execution: volatility (whether + PostgreSQL gives you two function creation + modifiers to optimize execution: volatility (whether the function always returns the same result when given the same arguments) and strictness (whether the function returns null if any argument is null). 
Consult the - instr function + instr function diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml index 777a7ef780..043225fc47 100644 --- a/doc/src/sgml/plpython.sgml +++ b/doc/src/sgml/plpython.sgml @@ -3,8 +3,8 @@ PL/Python - Python Procedural Language - PL/Python - Python + PL/Python + Python The PL/Python procedural language allows @@ -14,22 +14,22 @@ To install PL/Python in a particular database, use - CREATE EXTENSION plpythonu (but + CREATE EXTENSION plpythonu (but see also ). - If a language is installed into template1, all subsequently + If a language is installed into template1, all subsequently created databases will have the language installed automatically. - PL/Python is only available as an untrusted language, meaning + PL/Python is only available as an untrusted language, meaning it does not offer any way of restricting what users can do in it and - is therefore named plpythonu. A trusted - variant plpython might become available in the future + is therefore named plpythonu. A trusted + variant plpython might become available in the future if a secure execution mechanism is developed in Python. The writer of a function in untrusted PL/Python must take care that the function cannot be used to do anything unwanted, since it will be @@ -383,8 +383,8 @@ $$ LANGUAGE plpythonu; For all other PostgreSQL return types, the return value is converted to a string using the Python built-in str, and the result is passed to the input function of the PostgreSQL data type. - (If the Python value is a float, it is converted using - the repr built-in instead of str, to + (If the Python value is a float, it is converted using + the repr built-in instead of str, to avoid loss of precision.) @@ -756,8 +756,8 @@ SELECT * FROM multiout_simple_setof(3); data between function calls. This variable is private static data. The global dictionary GD is public data, available to all Python functions within a session. Use with - care.global data - in PL/Python + care.global data + in PL/Python @@ -800,38 +800,38 @@ $$ LANGUAGE plpythonu; TD contains trigger-related values: - TD["event"] + TD["event"] contains the event as a string: - INSERT, UPDATE, - DELETE, or TRUNCATE. + INSERT, UPDATE, + DELETE, or TRUNCATE. - TD["when"] + TD["when"] - contains one of BEFORE, AFTER, or - INSTEAD OF. + contains one of BEFORE, AFTER, or + INSTEAD OF. - TD["level"] + TD["level"] - contains ROW or STATEMENT. + contains ROW or STATEMENT. - TD["new"] - TD["old"] + TD["new"] + TD["old"] For a row-level trigger, one or both of these fields contain @@ -841,7 +841,7 @@ $$ LANGUAGE plpythonu; - TD["name"] + TD["name"] contains the trigger name. @@ -850,7 +850,7 @@ $$ LANGUAGE plpythonu; - TD["table_name"] + TD["table_name"] contains the name of the table on which the trigger occurred. @@ -859,7 +859,7 @@ $$ LANGUAGE plpythonu; - TD["table_schema"] + TD["table_schema"] contains the schema of the table on which the trigger occurred. @@ -868,7 +868,7 @@ $$ LANGUAGE plpythonu; - TD["relid"] + TD["relid"] contains the OID of the table on which the trigger occurred. @@ -877,12 +877,12 @@ $$ LANGUAGE plpythonu; - TD["args"] + TD["args"] - If the CREATE TRIGGER command - included arguments, they are available in TD["args"][0] to - TD["args"][n-1]. + If the CREATE TRIGGER command + included arguments, they are available in TD["args"][0] to + TD["args"][n-1]. 
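A row-level BEFORE trigger can combine these fields: inspect TD["when"] and TD["level"], adjust the incoming row through TD["new"], and report the change through its return value (the return conventions are described just below). A minimal sketch, in which the table mytab and the column modified_by are hypothetical:

<programlisting>
CREATE FUNCTION mytab_stamp() RETURNS trigger AS $$
    # TD is supplied automatically by PL/Python
    if TD["when"] == "BEFORE" and TD["level"] == "ROW":
        TD["new"]["modified_by"] = TD["name"]   # record which trigger touched the row
        return "MODIFY"
    return "OK"
$$ LANGUAGE plpythonu;

CREATE TRIGGER mytab_stamp BEFORE INSERT OR UPDATE ON mytab
    FOR EACH ROW EXECUTE PROCEDURE mytab_stamp();
</programlisting>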
@@ -890,14 +890,14 @@ $$ LANGUAGE plpythonu; - If TD["when"] is BEFORE or - INSTEAD OF and - TD["level"] is ROW, you can + If TD["when"] is BEFORE or + INSTEAD OF and + TD["level"] is ROW, you can return None or "OK" from the Python function to indicate the row is unmodified, - "SKIP" to abort the event, or if TD["event"] - is INSERT or UPDATE you can return - "MODIFY" to indicate you've modified the new row. + "SKIP" to abort the event, or if TD["event"] + is INSERT or UPDATE you can return + "MODIFY" to indicate you've modified the new row. Otherwise the return value is ignored. @@ -1023,7 +1023,7 @@ foo = rv[i]["my_column"] plpy.execute(plan [, arguments [, max-rows]]) - preparing a queryin PL/Python + preparing a queryin PL/Python plpy.prepare prepares the execution plan for a query. It is called with a query string and a list of parameter types, if you have parameter references in the query. For example: @@ -1371,22 +1371,22 @@ $$ LANGUAGE plpythonu; The plpy module also provides the functions - plpy.debug(msg, **kwargs) - plpy.log(msg, **kwargs) - plpy.info(msg, **kwargs) - plpy.notice(msg, **kwargs) - plpy.warning(msg, **kwargs) - plpy.error(msg, **kwargs) - plpy.fatal(msg, **kwargs) + plpy.debug(msg, **kwargs) + plpy.log(msg, **kwargs) + plpy.info(msg, **kwargs) + plpy.notice(msg, **kwargs) + plpy.warning(msg, **kwargs) + plpy.error(msg, **kwargs) + plpy.fatal(msg, **kwargs) - elogin PL/Python + elogin PL/Python plpy.error and plpy.fatal actually raise a Python exception which, if uncaught, propagates out to the calling query, causing the current transaction or subtransaction to - be aborted. raise plpy.Error(msg) and - raise plpy.Fatal(msg) are - equivalent to calling plpy.error(msg) and - plpy.fatal(msg), respectively but + be aborted. raise plpy.Error(msg) and + raise plpy.Fatal(msg) are + equivalent to calling plpy.error(msg) and + plpy.fatal(msg), respectively but the raise form does not allow passing keyword arguments. The other functions only generate messages of different priority levels. Whether messages of a particular priority are reported to the client, @@ -1397,7 +1397,7 @@ $$ LANGUAGE plpythonu; - The msg argument is given as a positional argument. For + The msg argument is given as a positional argument. For backward compatibility, more than one positional argument can be given. In that case, the string representation of the tuple of positional arguments becomes the message reported to the client. @@ -1438,9 +1438,9 @@ PL/Python function "raise_custom_exception" Another set of utility functions are - plpy.quote_literal(string), - plpy.quote_nullable(string), and - plpy.quote_ident(string). They + plpy.quote_literal(string), + plpy.quote_nullable(string), and + plpy.quote_ident(string). They are equivalent to the built-in quoting functions described in . They are useful when constructing ad-hoc queries. A PL/Python equivalent of dynamic SQL from elog(). PL/Tcl + SPI and to raise messages via elog(). PL/Tcl provides no way to access internals of the database server or to gain OS-level access under the permissions of the PostgreSQL server process, as a C @@ -50,23 +50,23 @@ Sometimes it is desirable to write Tcl functions that are not restricted to safe Tcl. For example, one might want a Tcl function that sends - email. To handle these cases, there is a variant of PL/Tcl called PL/TclU + email. To handle these cases, there is a variant of PL/Tcl called PL/TclU (for untrusted Tcl). This is exactly the same language except that a full - Tcl interpreter is used. 
If PL/TclU is used, it must be + Tcl interpreter is used. If PL/TclU is used, it must be installed as an untrusted procedural language so that only - database superusers can create functions in it. The writer of a PL/TclU + database superusers can create functions in it. The writer of a PL/TclU function must take care that the function cannot be used to do anything unwanted, since it will be able to do anything that could be done by a user logged in as the database administrator. - The shared object code for the PL/Tcl and - PL/TclU call handlers is automatically built and + The shared object code for the PL/Tcl and + PL/TclU call handlers is automatically built and installed in the PostgreSQL library directory if Tcl support is specified in the configuration step of - the installation procedure. To install PL/Tcl - and/or PL/TclU in a particular database, use the - CREATE EXTENSION command, for example + the installation procedure. To install PL/Tcl + and/or PL/TclU in a particular database, use the + CREATE EXTENSION command, for example CREATE EXTENSION pltcl or CREATE EXTENSION pltclu. @@ -78,7 +78,7 @@ PL/Tcl Functions and Arguments - To create a function in the PL/Tcl language, use + To create a function in the PL/Tcl language, use the standard syntax: @@ -87,8 +87,8 @@ CREATE FUNCTION funcname (argument-types $$ LANGUAGE pltcl; - PL/TclU is the same, except that the language has to be specified as - pltclu. + PL/TclU is the same, except that the language has to be specified as + pltclu. @@ -111,7 +111,7 @@ CREATE FUNCTION tcl_max(integer, integer) RETURNS integer AS $$ $$ LANGUAGE pltcl STRICT; - Note the clause STRICT, which saves us from + Note the clause STRICT, which saves us from having to think about null input values: if a null value is passed, the function will not be called at all, but will just return a null result automatically. @@ -122,7 +122,7 @@ $$ LANGUAGE pltcl STRICT; if the actual value of an argument is null, the corresponding $n variable will be set to an empty string. To detect whether a particular argument is null, use the function - argisnull. For example, suppose that we wanted tcl_max + argisnull. For example, suppose that we wanted tcl_max with one null and one nonnull argument to return the nonnull argument, rather than null: @@ -188,7 +188,7 @@ $$ LANGUAGE pltcl; The result list can be made from an array representation of the - desired tuple with the array get Tcl command. For example: + desired tuple with the array get Tcl command. For example: CREATE FUNCTION raise_pay(employee, delta int) RETURNS employee AS $$ @@ -233,8 +233,8 @@ $$ LANGUAGE pltcl; The argument values supplied to a PL/Tcl function's code are simply the input arguments converted to text form (just as if they had been - displayed by a SELECT statement). Conversely, the - return and return_next commands will accept + displayed by a SELECT statement). Conversely, the + return and return_next commands will accept any string that is acceptable input format for the function's declared result type, or for the specified column of a composite result type. @@ -262,14 +262,14 @@ $$ LANGUAGE pltcl; role in a separate Tcl interpreter for that role. This prevents accidental or malicious interference by one user with the behavior of another user's PL/Tcl functions. Each such interpreter will have its own - values for any global Tcl variables. Thus, two PL/Tcl + values for any global Tcl variables. 
Thus, two PL/Tcl functions will share the same global variables if and only if they are executed by the same SQL role. In an application wherein a single session executes code under multiple SQL roles (via SECURITY - DEFINER functions, use of SET ROLE, etc) you may need to + DEFINER functions, use of SET ROLE, etc) you may need to take explicit steps to ensure that PL/Tcl functions can share data. To do that, make sure that functions that should communicate are owned by - the same user, and mark them SECURITY DEFINER. You must of + the same user, and mark them SECURITY DEFINER. You must of course take care that such functions can't be used to do anything unintended. @@ -286,19 +286,19 @@ $$ LANGUAGE pltcl; To help protect PL/Tcl functions from unintentionally interfering with each other, a global - array is made available to each function via the upvar + array is made available to each function via the upvar command. The global name of this variable is the function's internal - name, and the local name is GD. It is recommended that - GD be used + name, and the local name is GD. It is recommended that + GD be used for persistent private data of a function. Use regular Tcl global variables only for values that you specifically intend to be shared among - multiple functions. (Note that the GD arrays are only + multiple functions. (Note that the GD arrays are only global within a particular interpreter, so they do not bypass the security restrictions mentioned above.) - An example of using GD appears in the + An example of using GD appears in the spi_execp example below. @@ -320,28 +320,28 @@ $$ LANGUAGE pltcl; causes an error to be raised. Otherwise, the return value of spi_exec is the number of rows processed (selected, inserted, updated, or deleted) by the command, or zero if the command is a utility - statement. In addition, if the command is a SELECT statement, the + statement. In addition, if the command is a SELECT statement, the values of the selected columns are placed in Tcl variables as described below. - The optional -count value tells + The optional -count value tells spi_exec the maximum number of rows to process in the command. The effect of this is comparable to - setting up a query as a cursor and then saying FETCH n. + setting up a query as a cursor and then saying FETCH n. - If the command is a SELECT statement, the values of the + If the command is a SELECT statement, the values of the result columns are placed into Tcl variables named after the columns. - If the -array option is given, the column values are + If the -array option is given, the column values are instead stored into elements of the named associative array, with the column names used as array indexes. In addition, the current row number within the result (counting from zero) is stored into the array - element named .tupno, unless that name is + element named .tupno, unless that name is in use as a column name in the result. - If the command is a SELECT statement and no loop-body + If the command is a SELECT statement and no loop-body script is given, then only the first row of results are stored into Tcl variables or array elements; remaining rows, if any, are ignored. No storing occurs if the query returns no rows. (This case can be @@ -350,14 +350,14 @@ $$ LANGUAGE pltcl; spi_exec "SELECT count(*) AS cnt FROM pg_proc" - will set the Tcl variable $cnt to the number of rows in - the pg_proc system catalog. + will set the Tcl variable $cnt to the number of rows in + the pg_proc system catalog. 
- If the optional loop-body argument is given, it is + If the optional loop-body argument is given, it is a piece of Tcl script that is executed once for each row in the - query result. (loop-body is ignored if the given - command is not a SELECT.) + query result. (loop-body is ignored if the given + command is not a SELECT.) The values of the current row's columns are stored into Tcl variables or array elements before each iteration. For example: @@ -366,14 +366,14 @@ spi_exec -array C "SELECT * FROM pg_class" { elog DEBUG "have table $C(relname)" } - will print a log message for every row of pg_class. This + will print a log message for every row of pg_class. This feature works similarly to other Tcl looping constructs; in - particular continue and break work in the + particular continue and break work in the usual way inside the loop body. If a column of a query result is null, the target - variable for it is unset rather than being set. + variable for it is unset rather than being set. @@ -384,8 +384,8 @@ spi_exec -array C "SELECT * FROM pg_class" { Prepares and saves a query plan for later execution. The saved plan will be retained for the life of the current - session.preparing a query - in PL/Tcl + session.preparing a query + in PL/Tcl The query can use parameters, that is, placeholders for @@ -405,29 +405,29 @@ spi_exec -array C "SELECT * FROM pg_class" { - spi_execp -count n -array name -nulls string queryid value-list loop-body + spi_execp -count n -array name -nulls string queryid value-list loop-body - Executes a query previously prepared with spi_prepare. + Executes a query previously prepared with spi_prepare. queryid is the ID returned by - spi_prepare. If the query references parameters, + spi_prepare. If the query references parameters, a value-list must be supplied. This is a Tcl list of actual values for the parameters. The list must be the same length as the parameter type list previously given to - spi_prepare. Omit value-list + spi_prepare. Omit value-list if the query has no parameters. - The optional value for -nulls is a string of spaces and - 'n' characters telling spi_execp + The optional value for -nulls is a string of spaces and + 'n' characters telling spi_execp which of the parameters are null values. If given, it must have exactly the same length as the value-list. If it is not given, all the parameter values are nonnull. Except for the way in which the query and its parameters are specified, - spi_execp works just like spi_exec. - The -count, -array, and + spi_execp works just like spi_exec. + The -count, -array, and loop-body options are the same, and so is the result value. @@ -448,9 +448,9 @@ $$ LANGUAGE pltcl; We need backslashes inside the query string given to - spi_prepare to ensure that the - $n markers will be passed - through to spi_prepare as-is, and not replaced by Tcl + spi_prepare to ensure that the + $n markers will be passed + through to spi_prepare as-is, and not replaced by Tcl variable substitution. @@ -459,7 +459,7 @@ $$ LANGUAGE pltcl; - spi_lastoid + spi_lastoid spi_lastoid in PL/Tcl @@ -468,8 +468,8 @@ $$ LANGUAGE pltcl; Returns the OID of the row inserted by the last - spi_exec or spi_execp, if the - command was a single-row INSERT and the modified + spi_exec or spi_execp, if the + command was a single-row INSERT and the modified table contained OIDs. (If not, you get zero.) 
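Taken together, a sketch of spi_exec and spi_lastoid in one function might look like this; the table t is hypothetical and must be created WITH OIDS, since spi_lastoid is only meaningful for tables that have OIDs:

<programlisting>
CREATE TABLE t (f1 integer) WITH OIDS;

CREATE FUNCTION insert_and_report(integer) RETURNS oid AS $$
    # $1 is the first function argument; Tcl substitutes it into the query string
    spi_exec "INSERT INTO t VALUES ($1)"
    return [spi_lastoid]
$$ LANGUAGE pltcl;
</programlisting>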
@@ -490,7 +490,7 @@ $$ LANGUAGE pltcl; - quote string + quote string Doubles all occurrences of single quote and backslash characters @@ -504,7 +504,7 @@ $$ LANGUAGE pltcl; "SELECT '$val' AS ret" - where the Tcl variable val actually contains + where the Tcl variable val actually contains doesn't. This would result in the final command string: @@ -536,7 +536,7 @@ SELECT 'doesn''t' AS ret - elog level msg + elog level msg elog in PL/Tcl @@ -545,14 +545,14 @@ SELECT 'doesn''t' AS ret Emits a log or error message. Possible levels are - DEBUG, LOG, INFO, - NOTICE, WARNING, ERROR, and - FATAL. ERROR + DEBUG, LOG, INFO, + NOTICE, WARNING, ERROR, and + FATAL. ERROR raises an error condition; if this is not trapped by the surrounding Tcl code, the error propagates out to the calling query, causing the current transaction or subtransaction to be aborted. This - is effectively the same as the Tcl error command. - FATAL aborts the transaction and causes the current + is effectively the same as the Tcl error command. + FATAL aborts the transaction and causes the current session to shut down. (There is probably no good reason to use this error level in PL/Tcl functions, but it's provided for completeness.) The other levels only generate messages of different @@ -585,7 +585,7 @@ SELECT 'doesn''t' AS ret Trigger procedures can be written in PL/Tcl. PostgreSQL requires that a procedure that is to be called as a trigger must be declared as a function with no arguments - and a return type of trigger. + and a return type of trigger. The information from the trigger manager is passed to the procedure body @@ -637,8 +637,8 @@ SELECT 'doesn''t' AS ret A Tcl list of the table column names, prefixed with an empty list - element. So looking up a column name in the list with Tcl's - lsearch command returns the element's number starting + element. So looking up a column name in the list with Tcl's + lsearch command returns the element's number starting with 1 for the first column, the same way the columns are customarily numbered in PostgreSQL. (Empty list elements also appear in the positions of columns that have been @@ -652,8 +652,8 @@ SELECT 'doesn''t' AS ret $TG_when - The string BEFORE, AFTER, or - INSTEAD OF, depending on the type of trigger event. + The string BEFORE, AFTER, or + INSTEAD OF, depending on the type of trigger event. @@ -662,7 +662,7 @@ SELECT 'doesn''t' AS ret $TG_level - The string ROW or STATEMENT depending on the + The string ROW or STATEMENT depending on the type of trigger event. @@ -672,8 +672,8 @@ SELECT 'doesn''t' AS ret $TG_op - The string INSERT, UPDATE, - DELETE, or TRUNCATE depending on the type of + The string INSERT, UPDATE, + DELETE, or TRUNCATE depending on the type of trigger event. @@ -684,8 +684,8 @@ SELECT 'doesn''t' AS ret An associative array containing the values of the new table - row for INSERT or UPDATE actions, or - empty for DELETE. The array is indexed by column + row for INSERT or UPDATE actions, or + empty for DELETE. The array is indexed by column name. Columns that are null will not appear in the array. This is not set for statement-level triggers. @@ -697,8 +697,8 @@ SELECT 'doesn''t' AS ret An associative array containing the values of the old table - row for UPDATE or DELETE actions, or - empty for INSERT. The array is indexed by column + row for UPDATE or DELETE actions, or + empty for INSERT. The array is indexed by column name. Columns that are null will not appear in the array. This is not set for statement-level triggers. 
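The trigger-state variables and elog combine naturally. For instance, a statement-level trigger that merely reports what fired it might look like the following sketch (mytab is a hypothetical table; as the rules below explain, the return value of a statement-level trigger is ignored, so returning OK is simply conventional):

<programlisting>
CREATE FUNCTION note_stmt() RETURNS trigger AS $$
    elog NOTICE "trigger $TG_name: $TG_when $TG_op ($TG_level)"
    return OK
$$ LANGUAGE pltcl;

CREATE TRIGGER mytab_note AFTER INSERT OR UPDATE OR DELETE ON mytab
    FOR EACH STATEMENT EXECUTE PROCEDURE note_stmt();
</programlisting>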
@@ -721,32 +721,32 @@ SELECT 'doesn''t' AS ret The return value from a trigger procedure can be one of the strings - OK or SKIP, or a list of column name/value pairs. - If the return value is OK, - the operation (INSERT/UPDATE/DELETE) + OK or SKIP, or a list of column name/value pairs. + If the return value is OK, + the operation (INSERT/UPDATE/DELETE) that fired the trigger will proceed - normally. SKIP tells the trigger manager to silently suppress + normally. SKIP tells the trigger manager to silently suppress the operation for this row. If a list is returned, it tells PL/Tcl to return a modified row to the trigger manager; the contents of the modified row are specified by the column names and values in the list. Any columns not mentioned in the list are set to null. Returning a modified row is only meaningful - for row-level BEFORE INSERT or UPDATE + for row-level BEFORE INSERT or UPDATE triggers, for which the modified row will be inserted instead of the one - given in $NEW; or for row-level INSTEAD OF - INSERT or UPDATE triggers where the returned row - is used as the source data for INSERT RETURNING or - UPDATE RETURNING clauses. - In row-level BEFORE DELETE or INSTEAD - OF DELETE triggers, returning a modified row has the same - effect as returning OK, that is the operation proceeds. + given in $NEW; or for row-level INSTEAD OF + INSERT or UPDATE triggers where the returned row + is used as the source data for INSERT RETURNING or + UPDATE RETURNING clauses. + In row-level BEFORE DELETE or INSTEAD + OF DELETE triggers, returning a modified row has the same + effect as returning OK, that is the operation proceeds. The trigger return value is ignored for all other types of triggers. The result list can be made from an array representation of the - modified tuple with the array get Tcl command. + modified tuple with the array get Tcl command. @@ -797,7 +797,7 @@ CREATE TRIGGER trig_mytab_modcount BEFORE INSERT OR UPDATE ON mytab Event trigger procedures can be written in PL/Tcl. PostgreSQL requires that a procedure that is to be called as an event trigger must be declared as a function with no - arguments and a return type of event_trigger. + arguments and a return type of event_trigger. The information from the trigger manager is passed to the procedure body @@ -885,17 +885,17 @@ CREATE EVENT TRIGGER tcl_a_snitch ON ddl_command_start EXECUTE PROCEDURE tclsnit word is POSTGRES, the second word is the PostgreSQL version number, and additional words are field name/value pairs providing detailed information about the error. - Fields SQLSTATE, condition, - and message are always supplied + Fields SQLSTATE, condition, + and message are always supplied (the first two represent the error code and condition name as shown in ). Fields that may be present include - detail, hint, context, - schema, table, column, - datatype, constraint, - statement, cursor_position, - filename, lineno, and - funcname. + detail, hint, context, + schema, table, column, + datatype, constraint, + statement, cursor_position, + filename, lineno, and + funcname. @@ -1006,7 +1006,7 @@ $$ LANGUAGE pltcl; This section lists configuration parameters that - affect PL/Tcl. + affect PL/Tcl. @@ -1015,7 +1015,7 @@ $$ LANGUAGE pltcl; pltcl.start_proc (string) - pltcl.start_proc configuration parameter + pltcl.start_proc configuration parameter @@ -1031,8 +1031,8 @@ $$ LANGUAGE pltcl; - The referenced function must be written in the pltcl - language, and must not be marked SECURITY DEFINER. 
+ The referenced function must be written in the pltcl + language, and must not be marked SECURITY DEFINER. (These restrictions ensure that it runs in the interpreter it's supposed to initialize.) The current user must have permission to call it, too. @@ -1060,14 +1060,14 @@ $$ LANGUAGE pltcl; pltclu.start_proc (string) - pltclu.start_proc configuration parameter + pltclu.start_proc configuration parameter This parameter is exactly like pltcl.start_proc, except that it applies to PL/TclU. The referenced function must - be written in the pltclu language. + be written in the pltclu language. @@ -1084,7 +1084,7 @@ $$ LANGUAGE pltcl; differ. Tcl, however, requires all procedure names to be distinct. PL/Tcl deals with this by making the internal Tcl procedure names contain the object - ID of the function from the system table pg_proc as part of their name. Thus, + ID of the function from the system table pg_proc as part of their name. Thus, PostgreSQL functions with the same name and different argument types will be different Tcl procedures, too. This is not normally a concern for a PL/Tcl programmer, but it might be visible diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml index d83fc9e52b..265effbe48 100644 --- a/doc/src/sgml/postgres-fdw.sgml +++ b/doc/src/sgml/postgres-fdw.sgml @@ -8,7 +8,7 @@ - The postgres_fdw module provides the foreign-data wrapper + The postgres_fdw module provides the foreign-data wrapper postgres_fdw, which can be used to access data stored in external PostgreSQL servers. @@ -16,17 +16,17 @@ The functionality provided by this module overlaps substantially with the functionality of the older module. - But postgres_fdw provides more transparent and + But postgres_fdw provides more transparent and standards-compliant syntax for accessing remote tables, and can give better performance in many cases. - To prepare for remote access using postgres_fdw: + To prepare for remote access using postgres_fdw: - Install the postgres_fdw extension using postgres_fdw extension using . @@ -61,17 +61,17 @@ - Now you need only SELECT from a foreign table to access + Now you need only SELECT from a foreign table to access the data stored in its underlying remote table. You can also modify - the remote table using INSERT, UPDATE, or - DELETE. (Of course, the remote user you have specified + the remote table using INSERT, UPDATE, or + DELETE. (Of course, the remote user you have specified in your user mapping must have privileges to do these things.) - Note that postgres_fdw currently lacks support for + Note that postgres_fdw currently lacks support for INSERT statements with an ON CONFLICT DO - UPDATE clause. However, the ON CONFLICT DO NOTHING + UPDATE clause. However, the ON CONFLICT DO NOTHING clause is supported, provided a unique index inference specification is omitted. @@ -79,10 +79,10 @@ It is generally recommended that the columns of a foreign table be declared with exactly the same data types, and collations if applicable, as the - referenced columns of the remote table. Although postgres_fdw + referenced columns of the remote table. Although postgres_fdw is currently rather forgiving about performing data type conversions at need, surprising semantic anomalies may arise when types or collations do - not match, due to the remote server interpreting WHERE clauses + not match, due to the remote server interpreting WHERE clauses slightly differently from the local server. 
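The ON CONFLICT restriction mentioned above is easy to see in a sketch; here ft stands for any postgres_fdw foreign table with columns id and val (the names are hypothetical):

<programlisting>
INSERT INTO ft VALUES (1, 'one')
    ON CONFLICT DO NOTHING;          -- accepted: no inference specification

INSERT INTO ft VALUES (1, 'one')
    ON CONFLICT (id) DO UPDATE
    SET val = EXCLUDED.val;          -- rejected by postgres_fdw
</programlisting>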
@@ -99,8 +99,8 @@ Connection Options - A foreign server using the postgres_fdw foreign data wrapper - can have the same options that libpq accepts in + A foreign server using the postgres_fdw foreign data wrapper + can have the same options that libpq accepts in connection strings, as described in , except that these options are not allowed: @@ -113,14 +113,14 @@ - client_encoding (this is automatically set from the local + client_encoding (this is automatically set from the local server encoding) - fallback_application_name (always set to - postgres_fdw) + fallback_application_name (always set to + postgres_fdw) @@ -186,14 +186,14 @@ Cost Estimation Options - postgres_fdw retrieves remote data by executing queries + postgres_fdw retrieves remote data by executing queries against remote servers, so ideally the estimated cost of scanning a foreign table should be whatever it costs to be done on the remote server, plus some overhead for communication. The most reliable way to get such an estimate is to ask the remote server and then add something for overhead — but for simple queries, it may not be worth the cost of an additional remote query to get a cost estimate. - So postgres_fdw provides the following options to control + So postgres_fdw provides the following options to control how cost estimation is done: @@ -204,7 +204,7 @@ This option, which can be specified for a foreign table or a foreign - server, controls whether postgres_fdw issues remote + server, controls whether postgres_fdw issues remote EXPLAIN commands to obtain cost estimates. A setting for a foreign table overrides any setting for its server, but only for that table. @@ -245,11 +245,11 @@ When use_remote_estimate is true, - postgres_fdw obtains row count and cost estimates from the + postgres_fdw obtains row count and cost estimates from the remote server and then adds fdw_startup_cost and fdw_tuple_cost to the cost estimates. When use_remote_estimate is false, - postgres_fdw performs local row count and cost estimation + postgres_fdw performs local row count and cost estimation and then adds fdw_startup_cost and fdw_tuple_cost to the cost estimates. This local estimation is unlikely to be very accurate unless local copies of the @@ -268,12 +268,12 @@ Remote Execution Options - By default, only WHERE clauses using built-in operators and + By default, only WHERE clauses using built-in operators and functions will be considered for execution on the remote server. Clauses involving non-built-in functions are checked locally after rows are fetched. If such functions are available on the remote server and can be relied on to produce the same results as they do locally, performance can - be improved by sending such WHERE clauses for remote + be improved by sending such WHERE clauses for remote execution. This behavior can be controlled using the following option: @@ -284,7 +284,7 @@ This option is a comma-separated list of names - of PostgreSQL extensions that are installed, in + of PostgreSQL extensions that are installed, in compatible versions, on both the local and remote servers. Functions and operators that are immutable and belong to a listed extension will be considered shippable to the remote server. @@ -293,7 +293,7 @@ When using the extensions option, it is the - user's responsibility that the listed extensions exist and behave + user's responsibility that the listed extensions exist and behave identically on both the local and remote servers. Otherwise, remote queries may fail or behave unexpectedly. 
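As a sketch of how these server options are attached in practice (reusing the foreign_server name from the example later in this section; the cost value and extension list are illustrative assumptions, not recommendations):

<programlisting>
ALTER SERVER foreign_server OPTIONS (
    ADD use_remote_estimate 'true',
    ADD fdw_startup_cost '110',
    ADD extensions 'cube,seg'   -- must behave identically on both servers
);
</programlisting>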
@@ -304,11 +304,11 @@ fetch_size - This option specifies the number of rows postgres_fdw + This option specifies the number of rows postgres_fdw should get in each fetch operation. It can be specified for a foreign table or a foreign server. The option specified on a table overrides an option specified for the server. - The default is 100. + The default is 100. @@ -321,7 +321,7 @@ Updatability Options - By default all foreign tables using postgres_fdw are assumed + By default all foreign tables using postgres_fdw are assumed to be updatable. This may be overridden using the following option: @@ -331,20 +331,20 @@ updatable - This option controls whether postgres_fdw allows foreign - tables to be modified using INSERT, UPDATE and - DELETE commands. It can be specified for a foreign table + This option controls whether postgres_fdw allows foreign + tables to be modified using INSERT, UPDATE and + DELETE commands. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. - The default is true. + The default is true. Of course, if the remote table is not in fact updatable, an error would occur anyway. Use of this option primarily allows the error to be thrown locally without querying the remote server. Note however - that the information_schema views will report a - postgres_fdw foreign table to be updatable (or not) + that the information_schema views will report a + postgres_fdw foreign table to be updatable (or not) according to the setting of this option, without any check of the remote server. @@ -358,7 +358,7 @@ Importing Options - postgres_fdw is able to import foreign table definitions + postgres_fdw is able to import foreign table definitions using . This command creates foreign table definitions on the local server that match tables or views present on the remote server. If the remote tables to be imported @@ -368,7 +368,7 @@ Importing behavior can be customized with the following options - (given in the IMPORT FOREIGN SCHEMA command): + (given in the IMPORT FOREIGN SCHEMA command): @@ -376,9 +376,9 @@ import_collate - This option controls whether column COLLATE options + This option controls whether column COLLATE options are included in the definitions of foreign tables imported - from a foreign server. The default is true. You might + from a foreign server. The default is true. You might need to turn this off if the remote server has a different set of collation names than the local server does, which is likely to be the case if it's running on a different operating system. @@ -389,13 +389,13 @@ import_default - This option controls whether column DEFAULT expressions + This option controls whether column DEFAULT expressions are included in the definitions of foreign tables imported - from a foreign server. The default is false. If you + from a foreign server. The default is false. If you enable this option, be wary of defaults that might get computed differently on the local server than they would be on the remote - server; nextval() is a common source of problems. - The IMPORT will fail altogether if an imported default + server; nextval() is a common source of problems. + The IMPORT will fail altogether if an imported default expression uses a function or operator that does not exist locally. @@ -404,25 +404,25 @@ import_not_null - This option controls whether column NOT NULL + This option controls whether column NOT NULL constraints are included in the definitions of foreign tables imported - from a foreign server. 
The default is true. + from a foreign server. The default is true. - Note that constraints other than NOT NULL will never be - imported from the remote tables. Although PostgreSQL - does support CHECK constraints on foreign tables, there is no + Note that constraints other than NOT NULL will never be + imported from the remote tables. Although PostgreSQL + does support CHECK constraints on foreign tables, there is no provision for importing them automatically, because of the risk that a constraint expression could evaluate differently on the local and remote - servers. Any such inconsistency in the behavior of a CHECK + servers. Any such inconsistency in the behavior of a CHECK constraint could lead to hard-to-detect errors in query optimization. - So if you wish to import CHECK constraints, you must do so + So if you wish to import CHECK constraints, you must do so manually, and you should verify the semantics of each one carefully. - For more detail about the treatment of CHECK constraints on + For more detail about the treatment of CHECK constraints on foreign tables, see . @@ -464,18 +464,18 @@ - The remote transaction uses SERIALIZABLE - isolation level when the local transaction has SERIALIZABLE - isolation level; otherwise it uses REPEATABLE READ + The remote transaction uses SERIALIZABLE + isolation level when the local transaction has SERIALIZABLE + isolation level; otherwise it uses REPEATABLE READ isolation level. This choice ensures that if a query performs multiple table scans on the remote server, it will get snapshot-consistent results for all the scans. A consequence is that successive queries within a single transaction will see the same data from the remote server, even if concurrent updates are occurring on the remote server due to other activities. That behavior would be expected anyway if the local - transaction uses SERIALIZABLE or REPEATABLE READ + transaction uses SERIALIZABLE or REPEATABLE READ isolation level, but it might be surprising for a READ - COMMITTED local transaction. A future + COMMITTED local transaction. A future PostgreSQL release might modify these rules. @@ -484,42 +484,42 @@ Remote Query Optimization - postgres_fdw attempts to optimize remote queries to reduce + postgres_fdw attempts to optimize remote queries to reduce the amount of data transferred from foreign servers. This is done by - sending query WHERE clauses to the remote server for + sending query WHERE clauses to the remote server for execution, and by not retrieving table columns that are not needed for the current query. To reduce the risk of misexecution of queries, - WHERE clauses are not sent to the remote server unless they use + WHERE clauses are not sent to the remote server unless they use only data types, operators, and functions that are built-in or belong to an - extension that's listed in the foreign server's extensions + extension that's listed in the foreign server's extensions option. Operators and functions in such clauses must - be IMMUTABLE as well. - For an UPDATE or DELETE query, - postgres_fdw attempts to optimize the query execution by + be IMMUTABLE as well. + For an UPDATE or DELETE query, + postgres_fdw attempts to optimize the query execution by sending the whole query to the remote server if there are no query - WHERE clauses that cannot be sent to the remote server, - no local joins for the query, no row-level local BEFORE or - AFTER triggers on the target table, and no - CHECK OPTION constraints from parent views. 
- In UPDATE, + WHERE clauses that cannot be sent to the remote server, + no local joins for the query, no row-level local BEFORE or + AFTER triggers on the target table, and no + CHECK OPTION constraints from parent views. + In UPDATE, expressions to assign to target columns must use only built-in data types, - IMMUTABLE operators, or IMMUTABLE functions, + IMMUTABLE operators, or IMMUTABLE functions, to reduce the risk of misexecution of the query. - When postgres_fdw encounters a join between foreign tables on + When postgres_fdw encounters a join between foreign tables on the same foreign server, it sends the entire join to the foreign server, unless for some reason it believes that it will be more efficient to fetch rows from each table individually, or unless the table references involved - are subject to different user mappings. While sending the JOIN + are subject to different user mappings. While sending the JOIN clauses, it takes the same precautions as mentioned above for the - WHERE clauses. + WHERE clauses. The query that is actually sent to the remote server for execution can - be examined using EXPLAIN VERBOSE. + be examined using EXPLAIN VERBOSE. @@ -527,55 +527,55 @@ Remote Query Execution Environment - In the remote sessions opened by postgres_fdw, + In the remote sessions opened by postgres_fdw, the parameter is set to - just pg_catalog, so that only built-in objects are visible + just pg_catalog, so that only built-in objects are visible without schema qualification. This is not an issue for queries - generated by postgres_fdw itself, because it always + generated by postgres_fdw itself, because it always supplies such qualification. However, this can pose a hazard for functions that are executed on the remote server via triggers or rules on remote tables. For example, if a remote table is actually a view, any functions used in that view will be executed with the restricted search path. It is recommended to schema-qualify all names in such - functions, or else attach SET search_path options + functions, or else attach SET search_path options (see ) to such functions to establish their expected search path environment. - postgres_fdw likewise establishes remote session settings + postgres_fdw likewise establishes remote session settings for various parameters: - is set to UTC + is set to UTC - is set to ISO + is set to ISO - is set to postgres + is set to postgres - is set to 3 for remote - servers 9.0 and newer and is set to 2 for older versions + is set to 3 for remote + servers 9.0 and newer and is set to 2 for older versions - These are less likely to be problematic than search_path, but - can be handled with function SET options if the need arises. + These are less likely to be problematic than search_path, but + can be handled with function SET options if the need arises. - It is not recommended that you override this behavior by + It is not recommended that you override this behavior by changing the session-level settings of these parameters; that is likely - to cause postgres_fdw to malfunction. + to cause postgres_fdw to malfunction. @@ -583,19 +583,19 @@ Cross-Version Compatibility - postgres_fdw can be used with remote servers dating back - to PostgreSQL 8.3. Read-only capability is available - back to 8.1. A limitation however is that postgres_fdw + postgres_fdw can be used with remote servers dating back + to PostgreSQL 8.3. Read-only capability is available + back to 8.1. 
A limitation however is that postgres_fdw generally assumes that immutable built-in functions and operators are safe to send to the remote server for execution, if they appear in a - WHERE clause for a foreign table. Thus, a built-in + WHERE clause for a foreign table. Thus, a built-in function that was added since the remote server's release might be sent - to it for execution, resulting in function does not exist or + to it for execution, resulting in function does not exist or a similar error. This type of failure can be worked around by rewriting the query, for example by embedding the foreign table - reference in a sub-SELECT with OFFSET 0 as an + reference in a sub-SELECT with OFFSET 0 as an optimization fence, and placing the problematic function or operator - outside the sub-SELECT. + outside the sub-SELECT. @@ -604,7 +604,7 @@ Here is an example of creating a foreign table with - postgres_fdw. First install the extension: + postgres_fdw. First install the extension: @@ -613,7 +613,7 @@ CREATE EXTENSION postgres_fdw; Then create a foreign server using . - In this example we wish to connect to a PostgreSQL server + In this example we wish to connect to a PostgreSQL server on host 192.83.123.89 listening on port 5432. The database to which the connection is made is named foreign_db on the remote server: @@ -640,9 +640,9 @@ CREATE USER MAPPING FOR local_user Now it is possible to create a foreign table with . In this example we - wish to access the table named some_schema.some_table + wish to access the table named some_schema.some_table on the remote server. The local name for it will - be foreign_table: + be foreign_table: CREATE FOREIGN TABLE foreign_table ( @@ -654,8 +654,8 @@ CREATE FOREIGN TABLE foreign_table ( It's essential that the data types and other properties of the columns - declared in CREATE FOREIGN TABLE match the actual remote table. - Column names must match as well, unless you attach column_name + declared in CREATE FOREIGN TABLE match the actual remote table. + Column names must match as well, unless you attach column_name options to the individual columns to show how they are named in the remote table. In many cases, use of is diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml index 8a3bfc9b0d..f8a6c48a57 100644 --- a/doc/src/sgml/postgres.sgml +++ b/doc/src/sgml/postgres.sgml @@ -85,11 +85,11 @@ Readers of this part should know how to connect to a - PostgreSQL database and issue + PostgreSQL database and issue SQL commands. Readers that are unfamiliar with these issues are encouraged to read first. SQL commands are typically entered - using the PostgreSQL interactive terminal + using the PostgreSQL interactive terminal psql, but other programs that have similar functionality can be used as well. @@ -116,10 +116,10 @@ This part covers topics that are of interest to a - PostgreSQL database administrator. This includes + PostgreSQL database administrator. This includes installation of the software, set up and configuration of the server, management of users and databases, and maintenance tasks. - Anyone who runs a PostgreSQL server, even for + Anyone who runs a PostgreSQL server, even for personal use, but especially in production, should be familiar with the topics covered in this part. @@ -139,7 +139,7 @@ up their own server can begin their exploration with this part. The rest of this part is about tuning and management; that material assumes that the reader is familiar with the general use of - the PostgreSQL database system. 
Readers are + the PostgreSQL database system. Readers are encouraged to look at and for additional information. @@ -171,7 +171,7 @@ This part describes the client programming interfaces distributed - with PostgreSQL. Each of these chapters can be + with PostgreSQL. Each of these chapters can be read independently. Note that there are many other programming interfaces for client programs that are distributed separately and contain their own documentation ( @@ -197,7 +197,7 @@ This part is about extending the server functionality with user-defined functions, data types, triggers, etc. These are advanced topics which should probably be approached only after all - the other user documentation about PostgreSQL has + the other user documentation about PostgreSQL has been understood. Later chapters in this part describe the server-side programming languages available in the PostgreSQL distribution as well as @@ -234,7 +234,7 @@ This part contains assorted information that might be of use to - PostgreSQL developers. + PostgreSQL developers. diff --git a/doc/src/sgml/problems.sgml b/doc/src/sgml/problems.sgml index 6bf74bb399..edceec3381 100644 --- a/doc/src/sgml/problems.sgml +++ b/doc/src/sgml/problems.sgml @@ -145,7 +145,7 @@ - If your application uses some other client interface, such as PHP, then + If your application uses some other client interface, such as PHP, then please try to isolate the offending queries. We will probably not set up a web server to reproduce your problem. In any case remember to provide the exact input files; do not guess that the problem happens for @@ -167,10 +167,10 @@ If you are reporting an error message, please obtain the most verbose - form of the message. In psql, say \set - VERBOSITY verbose beforehand. If you are extracting the message + form of the message. In psql, say \set + VERBOSITY verbose beforehand. If you are extracting the message from the server log, set the run-time parameter - to verbose so that all + to verbose so that all details are logged. @@ -236,9 +236,9 @@ If your version is older than &version; we will almost certainly tell you to upgrade. There are many bug fixes and improvements in each new release, so it is quite possible that a bug you have - encountered in an older release of PostgreSQL + encountered in an older release of PostgreSQL has already been fixed. We can only provide limited support for - sites using older releases of PostgreSQL; if you + sites using older releases of PostgreSQL; if you require more than we can provide, consider acquiring a commercial support contract. @@ -283,8 +283,8 @@ are specifically talking about the backend process, mention that, do not just say PostgreSQL crashes. A crash of a single backend process is quite different from crash of the parent - postgres process; please don't say the server - crashed when you mean a single backend process went down, nor vice versa. + postgres process; please don't say the server + crashed when you mean a single backend process went down, nor vice versa. Also, client programs such as the interactive frontend psql are completely separate from the backend. Please try to be specific about whether the problem is on the client or server side. @@ -356,10 +356,10 @@ subscribed to a list to be allowed to post on it. (You need not be subscribed to use the bug-report web form, however.) If you would like to send mail but do not want to receive list traffic, - you can subscribe and set your subscription option to nomail. 
+ you can subscribe and set your subscription option to nomail. For more information send mail to majordomo@postgresql.org - with the single word help in the body of the message. + with the single word help in the body of the message. diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 526e8011de..15108baf71 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -30,12 +30,12 @@ In order to serve multiple clients efficiently, the server launches - a new backend process for each client. + a new backend process for each client. In the current implementation, a new child process is created immediately after an incoming connection is detected. This is transparent to the protocol, however. For purposes of the - protocol, the terms backend and server are - interchangeable; likewise frontend and client + protocol, the terms backend and server are + interchangeable; likewise frontend and client are interchangeable. @@ -56,7 +56,7 @@ During normal operation, the frontend sends queries and other commands to the backend, and the backend sends back query results - and other responses. There are a few cases (such as NOTIFY) + and other responses. There are a few cases (such as NOTIFY) wherein the backend will send unsolicited messages, but for the most part this portion of a session is driven by frontend requests. @@ -71,9 +71,9 @@ Within normal operation, SQL commands can be executed through either of - two sub-protocols. In the simple query protocol, the frontend + two sub-protocols. In the simple query protocol, the frontend just sends a textual query string, which is parsed and immediately - executed by the backend. In the extended query protocol, + executed by the backend. In the extended query protocol, processing of queries is separated into multiple steps: parsing, binding of parameter values, and execution. This offers flexibility and performance benefits, at the cost of extra complexity. @@ -81,7 +81,7 @@ Normal operation has additional sub-protocols for special operations - such as COPY. + such as COPY. @@ -123,24 +123,24 @@ In the extended-query protocol, execution of SQL commands is divided into multiple steps. The state retained between steps is represented - by two types of objects: prepared statements and - portals. A prepared statement represents the result of + by two types of objects: prepared statements and + portals. A prepared statement represents the result of parsing and semantic analysis of a textual query string. A prepared statement is not in itself ready to execute, because it might - lack specific values for parameters. A portal represents + lack specific values for parameters. A portal represents a ready-to-execute or already-partially-executed statement, with any - missing parameter values filled in. (For SELECT statements, + missing parameter values filled in. (For SELECT statements, a portal is equivalent to an open cursor, but we choose to use a different - term since cursors don't handle non-SELECT statements.) + term since cursors don't handle non-SELECT statements.) - The overall execution cycle consists of a parse step, + The overall execution cycle consists of a parse step, which creates a prepared statement from a textual query string; a - bind step, which creates a portal given a prepared + bind step, which creates a portal given a prepared statement and values for any needed parameters; and an - execute step that runs a portal's query. 
In the case of - a query that returns rows (SELECT, SHOW, etc), + execute step that runs a portal's query. In the case of + a query that returns rows (SELECT, SHOW, etc), the execute step can be told to fetch only a limited number of rows, so that multiple execute steps might be needed to complete the operation. @@ -151,7 +151,7 @@ (but note that these exist only within a session, and are never shared across sessions). Existing prepared statements and portals are referenced by names assigned when they were created. In addition, - an unnamed prepared statement and portal exist. Although these + an unnamed prepared statement and portal exist. Although these behave largely the same as named objects, operations on them are optimized for the case of executing a query only once and then discarding it, whereas operations on named objects are optimized on the expectation @@ -164,10 +164,10 @@ Data of a particular data type might be transmitted in any of several - different formats. As of PostgreSQL 7.4 - the only supported formats are text and binary, + different formats. As of PostgreSQL 7.4 + the only supported formats are text and binary, but the protocol makes provision for future extensions. The desired - format for any value is specified by a format code. + format for any value is specified by a format code. Clients can specify a format code for each transmitted parameter value and for each column of a query result. Text has format code zero, binary has format code one, and all other format codes are reserved @@ -300,8 +300,8 @@ password, the server responds with an AuthenticationOk, otherwise it responds with an ErrorResponse. The actual PasswordMessage can be computed in SQL as concat('md5', - md5(concat(md5(concat(password, username)), random-salt))). - (Keep in mind the md5() function returns its + md5(concat(md5(concat(password, username)), random-salt))). + (Keep in mind the md5() function returns its result as a hex string.) @@ -624,11 +624,11 @@ - The response to a SELECT query (or other queries that - return row sets, such as EXPLAIN or SHOW) + The response to a SELECT query (or other queries that + return row sets, such as EXPLAIN or SHOW) normally consists of RowDescription, zero or more DataRow messages, and then CommandComplete. - COPY to or from the frontend invokes special protocol + COPY to or from the frontend invokes special protocol as described in . All other query types normally produce only a CommandComplete message. @@ -657,8 +657,8 @@ In simple Query mode, the format of retrieved values is always text, - except when the given command is a FETCH from a cursor - declared with the BINARY option. In that case, the + except when the given command is a FETCH from a cursor + declared with the BINARY option. In that case, the retrieved values are in binary format. The format codes given in the RowDescription message tell which format is being used. @@ -689,10 +689,10 @@ INSERT INTO mytable VALUES(1); SELECT 1/0; INSERT INTO mytable VALUES(2); - then the divide-by-zero failure in the SELECT will force - rollback of the first INSERT. Furthermore, because + then the divide-by-zero failure in the SELECT will force + rollback of the first INSERT. Furthermore, because execution of the message is abandoned at the first error, the second - INSERT is never attempted at all. + INSERT is never attempted at all. @@ -704,17 +704,17 @@ COMMIT; INSERT INTO mytable VALUES(2); SELECT 1/0; - then the first INSERT is committed by the - explicit COMMIT command. 
The second INSERT - and the SELECT are still treated as a single transaction, + then the first INSERT is committed by the + explicit COMMIT command. The second INSERT + and the SELECT are still treated as a single transaction, so that the divide-by-zero failure will roll back the - second INSERT, but not the first one. + second INSERT, but not the first one. This behavior is implemented by running the statements in a multi-statement Query message in an implicit transaction - block unless there is some explicit transaction block for them to + block unless there is some explicit transaction block for them to run in. The main difference between an implicit transaction block and a regular one is that an implicit block is closed automatically at the end of the Query message, either by an implicit commit if there was no @@ -725,27 +725,27 @@ SELECT 1/0; If the session is already in a transaction block, as a result of - a BEGIN in some previous message, then the Query message + a BEGIN in some previous message, then the Query message simply continues that transaction block, whether the message contains one statement or several. However, if the Query message contains - a COMMIT or ROLLBACK closing the existing + a COMMIT or ROLLBACK closing the existing transaction block, then any following statements are executed in an implicit transaction block. - Conversely, if a BEGIN appears in a multi-statement Query + Conversely, if a BEGIN appears in a multi-statement Query message, then it starts a regular transaction block that will only be - terminated by an explicit COMMIT or ROLLBACK, + terminated by an explicit COMMIT or ROLLBACK, whether that appears in this Query message or a later one. - If the BEGIN follows some statements that were executed as + If the BEGIN follows some statements that were executed as an implicit transaction block, those statements are not immediately committed; in effect, they are retroactively included into the new regular transaction block. - A COMMIT or ROLLBACK appearing in an implicit + A COMMIT or ROLLBACK appearing in an implicit transaction block is executed as normal, closing the implicit block; - however, a warning will be issued since a COMMIT - or ROLLBACK without a previous BEGIN might + however, a warning will be issued since a COMMIT + or ROLLBACK without a previous BEGIN might represent a mistake. If more statements follow, a new implicit transaction block will be started for them. @@ -766,8 +766,8 @@ SELECT 1/0; ROLLBACK; in a single Query message, the session will be left inside a failed - regular transaction block, since the ROLLBACK is not - reached after the divide-by-zero error. Another ROLLBACK + regular transaction block, since the ROLLBACK is not + reached after the divide-by-zero error. Another ROLLBACK will be needed to restore the session to a usable state. @@ -789,7 +789,7 @@ INSERT INTO mytable VALUES(2); SELCT 1/0; then none of the statements would get run, resulting in the visible - difference that the first INSERT is not committed. + difference that the first INSERT is not committed. Errors detected at semantic analysis or later, such as a misspelled table or column name, do not have this effect. @@ -824,17 +824,17 @@ SELCT 1/0; A parameter data type can be left unspecified by setting it to zero, or by making the array of parameter type OIDs shorter than the - number of parameter symbols ($n) + number of parameter symbols ($n) used in the query string. 
Another special case is that a parameter's - type can be specified as void (that is, the OID of the - void pseudo-type). This is meant to allow parameter symbols + type can be specified as void (that is, the OID of the + void pseudo-type). This is meant to allow parameter symbols to be used for function parameters that are actually OUT parameters. - Ordinarily there is no context in which a void parameter + Ordinarily there is no context in which a void parameter could be used, but if such a parameter symbol appears in a function's parameter list, it is effectively ignored. For example, a function - call such as foo($1,$2,$3,$4) could match a function with - two IN and two OUT arguments, if $3 and $4 - are specified as having type void. + call such as foo($1,$2,$3,$4) could match a function with + two IN and two OUT arguments, if $3 and $4 + are specified as having type void. @@ -858,7 +858,7 @@ SELCT 1/0; statements must be explicitly closed before they can be redefined by another Parse message, but this is not required for the unnamed statement. Named prepared statements can also be created and accessed at the SQL - command level, using PREPARE and EXECUTE. + command level, using PREPARE and EXECUTE. @@ -869,7 +869,7 @@ SELCT 1/0; the values to use for any parameter placeholders present in the prepared statement. The supplied parameter set must match those needed by the prepared statement. - (If you declared any void parameters in the Parse message, + (If you declared any void parameters in the Parse message, pass NULL values for them in the Bind message.) Bind also specifies the format to use for any data returned by the query; the format can be specified overall, or per-column. @@ -880,7 +880,7 @@ SELCT 1/0; The choice between text and binary output is determined by the format codes given in Bind, regardless of the SQL command involved. The - BINARY attribute in cursor declarations is irrelevant when + BINARY attribute in cursor declarations is irrelevant when using extended query protocol. @@ -904,14 +904,14 @@ SELCT 1/0; portals must be explicitly closed before they can be redefined by another Bind message, but this is not required for the unnamed portal. Named portals can also be created and accessed at the SQL - command level, using DECLARE CURSOR and FETCH. + command level, using DECLARE CURSOR and FETCH. Once a portal exists, it can be executed using an Execute message. The Execute message specifies the portal name (empty string denotes the unnamed portal) and - a maximum result-row count (zero meaning fetch all rows). + a maximum result-row count (zero meaning fetch all rows). The result-row count is only meaningful for portals containing commands that return row sets; in other cases the command is always executed to completion, and the row count is ignored. @@ -938,7 +938,7 @@ SELCT 1/0; At completion of each series of extended-query messages, the frontend should issue a Sync message. This parameterless message causes the backend to close the current transaction if it's not inside a - BEGIN/COMMIT transaction block (close + BEGIN/COMMIT transaction block (close meaning to commit if no error, or roll back if error). Then a ReadyForQuery response is issued. The purpose of Sync is to provide a resynchronization point for error recovery. When an error is detected @@ -946,13 +946,13 @@ SELCT 1/0; ErrorResponse, then reads and discards messages until a Sync is reached, then issues ReadyForQuery and returns to normal message processing. 
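   For example, a complete extended-query cycle for one parameterized
   statement typically proceeds as follows (a sketch of the logical message
   flow, not a byte-exact trace; the backend's responses may be buffered and
   only flushed to the client once the Sync arrives):

Parse              ->  ParseComplete
Bind               ->  BindComplete
Describe (portal)  ->  RowDescription
Execute            ->  DataRow ..., CommandComplete
Sync               ->  ReadyForQuery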
(But note that no skipping occurs if an error is detected - while processing Sync — this ensures that there is one + while processing Sync — this ensures that there is one and only one ReadyForQuery sent for each Sync.) - Sync does not cause a transaction block opened with BEGIN + Sync does not cause a transaction block opened with BEGIN to be closed. It is possible to detect this situation since the ReadyForQuery message includes transaction status information. @@ -1039,7 +1039,7 @@ SELCT 1/0; The Function Call sub-protocol is a legacy feature that is probably best avoided in new code. Similar results can be accomplished by setting up - a prepared statement that does SELECT function($1, ...). + a prepared statement that does SELECT function($1, ...). The Function Call cycle can then be replaced with Bind/Execute. @@ -1107,7 +1107,7 @@ SELCT 1/0; COPY Operations - The COPY command allows high-speed bulk data transfer + The COPY command allows high-speed bulk data transfer to or from the server. Copy-in and copy-out operations each switch the connection into a distinct sub-protocol, which lasts until the operation is completed. @@ -1115,16 +1115,16 @@ SELCT 1/0; Copy-in mode (data transfer to the server) is initiated when the - backend executes a COPY FROM STDIN SQL statement. The backend + backend executes a COPY FROM STDIN SQL statement. The backend sends a CopyInResponse message to the frontend. The frontend should then send zero or more CopyData messages, forming a stream of input data. (The message boundaries are not required to have anything to do with row boundaries, although that is often a reasonable choice.) The frontend can terminate the copy-in mode by sending either a CopyDone message (allowing successful termination) or a CopyFail message (which - will cause the COPY SQL statement to fail with an + will cause the COPY SQL statement to fail with an error). The backend then reverts to the command-processing mode it was - in before the COPY started, which will be either simple or + in before the COPY started, which will be either simple or extended query protocol. It will next send either CommandComplete (if successful) or ErrorResponse (if not). @@ -1132,10 +1132,10 @@ SELCT 1/0; In the event of a backend-detected error during copy-in mode (including receipt of a CopyFail message), the backend will issue an ErrorResponse - message. If the COPY command was issued via an extended-query + message. If the COPY command was issued via an extended-query message, the backend will now discard frontend messages until a Sync message is received, then it will issue ReadyForQuery and return to normal - processing. If the COPY command was issued in a simple + processing. If the COPY command was issued in a simple Query message, the rest of that message is discarded and ReadyForQuery is issued. In either case, any subsequent CopyData, CopyDone, or CopyFail messages issued by the frontend will simply be dropped. @@ -1147,16 +1147,16 @@ SELCT 1/0; that will abort the copy-in state as described above. (The exception for Flush and Sync is for the convenience of client libraries that always send Flush or Sync after an Execute message, without checking whether - the command to be executed is a COPY FROM STDIN.) + the command to be executed is a COPY FROM STDIN.) Copy-out mode (data transfer from the server) is initiated when the - backend executes a COPY TO STDOUT SQL statement. The backend + backend executes a COPY TO STDOUT SQL statement. 
The backend sends a CopyOutResponse message to the frontend, followed by zero or more CopyData messages (always one per row), followed by CopyDone. The backend then reverts to the command-processing mode it was - in before the COPY started, and sends CommandComplete. + in before the COPY started, and sends CommandComplete. The frontend cannot abort the transfer (except by closing the connection or issuing a Cancel request), but it can discard unwanted CopyData and CopyDone messages. @@ -1179,7 +1179,7 @@ SELCT 1/0; There is another Copy-related mode called copy-both, which allows - high-speed bulk data transfer to and from the server. + high-speed bulk data transfer to and from the server. Copy-both mode is initiated when a backend in walsender mode executes a START_REPLICATION statement. The backend sends a CopyBothResponse message to the frontend. Both @@ -1204,7 +1204,7 @@ SELCT 1/0; The CopyInResponse, CopyOutResponse and CopyBothResponse messages include fields that inform the frontend of the number of columns per row and the format codes being used for each column. (As of - the present implementation, all columns in a given COPY + the present implementation, all columns in a given COPY operation will use the same format, but the message design does not assume this.) @@ -1226,7 +1226,7 @@ SELCT 1/0; It is possible for NoticeResponse messages to be generated due to outside activity; for example, if the database administrator commands - a fast database shutdown, the backend will send a NoticeResponse + a fast database shutdown, the backend will send a NoticeResponse indicating this fact before closing the connection. Accordingly, frontends should always be prepared to accept and display NoticeResponse messages, even when the connection is nominally idle. @@ -1236,7 +1236,7 @@ SELCT 1/0; ParameterStatus messages will be generated whenever the active value changes for any of the parameters the backend believes the frontend should know about. Most commonly this occurs in response - to a SET SQL command executed by the frontend, and + to a SET SQL command executed by the frontend, and this case is effectively synchronous — but it is also possible for parameter status changes to occur because the administrator changed a configuration file and then sent the @@ -1249,27 +1249,27 @@ SELCT 1/0; At present there is a hard-wired set of parameters for which ParameterStatus will be generated: they are - server_version, - server_encoding, - client_encoding, - application_name, - is_superuser, - session_authorization, - DateStyle, - IntervalStyle, - TimeZone, - integer_datetimes, and - standard_conforming_strings. - (server_encoding, TimeZone, and - integer_datetimes were not reported by releases before 8.0; - standard_conforming_strings was not reported by releases + server_version, + server_encoding, + client_encoding, + application_name, + is_superuser, + session_authorization, + DateStyle, + IntervalStyle, + TimeZone, + integer_datetimes, and + standard_conforming_strings. + (server_encoding, TimeZone, and + integer_datetimes were not reported by releases before 8.0; + standard_conforming_strings was not reported by releases before 8.1; - IntervalStyle was not reported by releases before 8.4; - application_name was not reported by releases before 9.0.) + IntervalStyle was not reported by releases before 8.4; + application_name was not reported by releases before 9.0.) 
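   For example, changing one of the reported parameters with SET causes a
   ParameterStatus message carrying the new value to be sent, effectively
   synchronously as noted above (the values here are illustrative only):

SET application_name = 'my_frontend';
SET TimeZone = 'UTC';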
Note that - server_version, - server_encoding and - integer_datetimes + server_version, + server_encoding and + integer_datetimes are pseudo-parameters that cannot change after startup. This set might change in the future, or even become configurable. Accordingly, a frontend should simply ignore ParameterStatus for @@ -1394,7 +1394,7 @@ SELCT 1/0; frontend disconnects while a non-SELECT query is being processed, the backend will probably finish the query before noticing the disconnection. If the query is outside any - transaction block (BEGIN ... COMMIT + transaction block (BEGIN ... COMMIT sequence) then its results might be committed before the disconnection is recognized. @@ -1404,7 +1404,7 @@ SELCT 1/0; <acronym>SSL</acronym> Session Encryption - If PostgreSQL was built with + If PostgreSQL was built with SSL support, frontend/backend communications can be encrypted using SSL. This provides communication security in environments where attackers might be @@ -1417,17 +1417,17 @@ SELCT 1/0; To initiate an SSL-encrypted connection, the frontend initially sends an SSLRequest message rather than a StartupMessage. The server then responds with a single byte - containing S or N, indicating that it is + containing S or N, indicating that it is willing or unwilling to perform SSL, respectively. The frontend might close the connection at this point if it is dissatisfied with the response. To continue after - S, perform an SSL startup handshake + S, perform an SSL startup handshake (not described here, part of the SSL specification) with the server. If this is successful, continue with sending the usual StartupMessage. In this case the StartupMessage and all subsequent data will be SSL-encrypted. To continue after - N, send the usual StartupMessage and proceed without + N, send the usual StartupMessage and proceed without encryption. @@ -1435,7 +1435,7 @@ SELCT 1/0; The frontend should also be prepared to handle an ErrorMessage response to SSLRequest from the server. This would only occur if the server predates the addition of SSL support - to PostgreSQL. (Such servers are now very ancient, + to PostgreSQL. (Such servers are now very ancient, and likely do not exist in the wild anymore.) In this case the connection must be closed, but the frontend might choose to open a fresh connection @@ -1460,8 +1460,8 @@ SELCT 1/0; SASL Authentication -SASL is a framework for authentication in connection-oriented -protocols. At the moment, PostgreSQL implements only one SASL +SASL is a framework for authentication in connection-oriented +protocols. At the moment, PostgreSQL implements only one SASL authentication mechanism, SCRAM-SHA-256, but more might be added in the future. The below steps illustrate how SASL authentication is performed in general, while the next subsection gives more details on SCRAM-SHA-256. @@ -1518,24 +1518,24 @@ ErrorMessage. SCRAM-SHA-256 authentication - SCRAM-SHA-256 (called just SCRAM from now on) is + SCRAM-SHA-256 (called just SCRAM from now on) is the only implemented SASL mechanism, at the moment. It is described in detail in RFC 7677 and RFC 5802. When SCRAM-SHA-256 is used in PostgreSQL, the server will ignore the user name -that the client sends in the client-first-message. The user name +that the client sends in the client-first-message. The user name that was already sent in the startup message is used instead. 
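   To use SCRAM-SHA-256, the server must have a SCRAM verifier stored for the
   role being authenticated. A sketch, assuming a hypothetical role alice and
   a scram-sha-256 entry for that role in pg_hba.conf:

SET password_encryption = 'scram-sha-256';
CREATE ROLE alice LOGIN PASSWORD 'correct horse battery staple';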
-PostgreSQL supports multiple character encodings, while SCRAM +PostgreSQL supports multiple character encodings, while SCRAM dictates UTF-8 to be used for the user name, so it might be impossible to represent the PostgreSQL user name in UTF-8. The SCRAM specification dictates that the password is also in UTF-8, and is -processed with the SASLprep algorithm. -PostgreSQL, however, does not require UTF-8 to be used for +processed with the SASLprep algorithm. +PostgreSQL, however, does not require UTF-8 to be used for the password. When a user's password is set, it is processed with SASLprep as if it was in UTF-8, regardless of the actual encoding used. However, if it is not a legal UTF-8 byte sequence, or it contains UTF-8 byte sequences @@ -1547,7 +1547,7 @@ the password is in. -Channel binding has not been implemented yet. +Channel binding has not been implemented yet. @@ -1561,27 +1561,27 @@ the password is in. The client responds by sending a SASLInitialResponse message, which - indicates the chosen mechanism, SCRAM-SHA-256. In the Initial + indicates the chosen mechanism, SCRAM-SHA-256. In the Initial Client response field, the message contains the SCRAM - client-first-message. + client-first-message. Server sends an AuthenticationSASLContinue message, with a SCRAM - server-first message as the content. + server-first message as the content. Client sends a SASLResponse message, with SCRAM - client-final-message as the content. + client-final-message as the content. Server sends an AuthenticationSASLFinal message, with the SCRAM - server-final-message, followed immediately by + server-final-message, followed immediately by an AuthenticationOk message. @@ -1594,14 +1594,14 @@ the password is in. To initiate streaming replication, the frontend sends the -replication parameter in the startup message. A Boolean value -of true tells the backend to go into walsender mode, wherein a +replication parameter in the startup message. A Boolean value +of true tells the backend to go into walsender mode, wherein a small set of replication commands can be issued instead of SQL statements. Only the simple query protocol can be used in walsender mode. Replication commands are logged in the server log when is enabled. -Passing database as the value instructs walsender to connect to -the database specified in the dbname parameter, which will allow +Passing database as the value instructs walsender to connect to +the database specified in the dbname parameter, which will allow the connection to be used for logical replication from that database. @@ -1697,7 +1697,7 @@ The commands accepted in walsender mode are: - name + name The name of a run-time parameter. Available parameters are documented @@ -1728,7 +1728,7 @@ The commands accepted in walsender mode are: - File name of the timeline history file, e.g., 00000002.history. + File name of the timeline history file, e.g., 00000002.history. @@ -1750,7 +1750,7 @@ The commands accepted in walsender mode are: - CREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [ RESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT | NOEXPORT_SNAPSHOT | USE_SNAPSHOT ] } + CREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [ RESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT | NOEXPORT_SNAPSHOT | USE_SNAPSHOT ] } CREATE_REPLICATION_SLOT @@ -1761,7 +1761,7 @@ The commands accepted in walsender mode are: - slot_name + slot_name The name of the slot to create. 
Must be a valid replication slot @@ -1771,7 +1771,7 @@ The commands accepted in walsender mode are: - output_plugin + output_plugin The name of the output plugin used for logical decoding @@ -1781,7 +1781,7 @@ The commands accepted in walsender mode are: - TEMPORARY + TEMPORARY Specify that this replication slot is a temporary one. Temporary @@ -1792,30 +1792,30 @@ The commands accepted in walsender mode are: - RESERVE_WAL + RESERVE_WAL - Specify that this physical replication slot reserves WAL - immediately. Otherwise, WAL is only reserved upon + Specify that this physical replication slot reserves WAL + immediately. Otherwise, WAL is only reserved upon connection from a streaming replication client. - EXPORT_SNAPSHOT - NOEXPORT_SNAPSHOT - USE_SNAPSHOT + EXPORT_SNAPSHOT + NOEXPORT_SNAPSHOT + USE_SNAPSHOT Decides what to do with the snapshot created during logical slot - initialization. EXPORT_SNAPSHOT, which is the default, + initialization. EXPORT_SNAPSHOT, which is the default, will export the snapshot for use in other sessions. This option can't - be used inside a transaction. USE_SNAPSHOT will use the + be used inside a transaction. USE_SNAPSHOT will use the snapshot for the current transaction executing the command. This option must be used in a transaction, and CREATE_REPLICATION_SLOT must be the first command - run in that transaction. Finally, NOEXPORT_SNAPSHOT will + run in that transaction. Finally, NOEXPORT_SNAPSHOT will just use the snapshot for logical decoding as normal but won't do anything else with it. @@ -1875,15 +1875,15 @@ The commands accepted in walsender mode are: - START_REPLICATION [ SLOT slot_name ] [ PHYSICAL ] XXX/XXX [ TIMELINE tli ] + START_REPLICATION [ SLOT slot_name ] [ PHYSICAL ] XXX/XXX [ TIMELINE tli ] START_REPLICATION Instructs server to start streaming WAL, starting at - WAL location XXX/XXX. + WAL location XXX/XXX. If TIMELINE option is specified, - streaming starts on timeline tli; + streaming starts on timeline tli; otherwise, the server's current timeline is selected. The server can reply with an error, for example if the requested section of WAL has already been recycled. On success, server responds with a CopyBothResponse @@ -1892,9 +1892,9 @@ The commands accepted in walsender mode are: If a slot's name is provided - via slot_name, it will be updated + via slot_name, it will be updated as replication progresses so that the server knows which WAL segments, - and if hot_standby_feedback is on which transactions, + and if hot_standby_feedback is on which transactions, are still needed by the standby. @@ -2228,11 +2228,11 @@ The commands accepted in walsender mode are: - START_REPLICATION SLOT slot_name LOGICAL XXX/XXX [ ( option_name [ option_value ] [, ...] ) ] + START_REPLICATION SLOT slot_name LOGICAL XXX/XXX [ ( option_name [ option_value ] [, ...] ) ] Instructs server to start streaming WAL for logical replication, starting - at WAL location XXX/XXX. The server can + at WAL location XXX/XXX. The server can reply with an error, for example if the requested section of WAL has already been recycled. On success, server responds with a CopyBothResponse message, and then starts to stream WAL to the frontend. @@ -2250,7 +2250,7 @@ The commands accepted in walsender mode are: - SLOT slot_name + SLOT slot_name The name of the slot to stream changes from. This parameter is required, @@ -2261,7 +2261,7 @@ The commands accepted in walsender mode are: - XXX/XXX + XXX/XXX The WAL location to begin streaming at. 
@@ -2269,7 +2269,7 @@ The commands accepted in walsender mode are: - option_name + option_name The name of an option passed to the slot's logical decoding plugin. @@ -2277,7 +2277,7 @@ The commands accepted in walsender mode are: - option_value + option_value Optional value, in the form of a string constant, associated with the @@ -2291,7 +2291,7 @@ The commands accepted in walsender mode are: - DROP_REPLICATION_SLOT slot_name WAIT + DROP_REPLICATION_SLOT slot_name WAIT DROP_REPLICATION_SLOT @@ -2302,7 +2302,7 @@ The commands accepted in walsender mode are: - slot_name + slot_name The name of the slot to drop. @@ -2348,7 +2348,7 @@ The commands accepted in walsender mode are: - PROGRESS + PROGRESS Request information required to generate a progress report. This will @@ -2365,7 +2365,7 @@ The commands accepted in walsender mode are: - FAST + FAST Request a fast checkpoint. @@ -2399,7 +2399,7 @@ The commands accepted in walsender mode are: - MAX_RATE rate + MAX_RATE rate Limit (throttle) the maximum amount of data transferred from server @@ -2420,7 +2420,7 @@ The commands accepted in walsender mode are: pg_tblspc in a file named tablespace_map. The tablespace map file includes each symbolic link name as it exists in the directory - pg_tblspc/ and the full path of that symbolic link. + pg_tblspc/ and the full path of that symbolic link. @@ -2473,9 +2473,9 @@ The commands accepted in walsender mode are: After the second regular result set, one or more CopyResponse results will be sent, one for the main data directory and one for each additional tablespace other - than pg_default and pg_global. The data in + than pg_default and pg_global. The data in the CopyResponse results will be a tar format (following the - ustar interchange format specified in the POSIX 1003.1-2008 + ustar interchange format specified in the POSIX 1003.1-2008 standard) dump of the tablespace contents, except that the two trailing blocks of zeroes specified in the standard are omitted. After the tar data is complete, a final ordinary result set will be sent, @@ -2486,29 +2486,29 @@ The commands accepted in walsender mode are: The tar archive for the data directory and each tablespace will contain all files in the directories, regardless of whether they are - PostgreSQL files or other files added to the same + PostgreSQL files or other files added to the same directory. The only excluded files are: - postmaster.pid + postmaster.pid - postmaster.opts + postmaster.opts Various temporary files and directories created during the operation of the PostgreSQL server, such as any file or directory beginning - with pgsql_tmp. + with pgsql_tmp. - pg_wal, including subdirectories. If the backup is run + pg_wal, including subdirectories. If the backup is run with WAL files included, a synthesized version of pg_wal will be included, but it will only contain the files necessary for the backup to work, not the rest of the contents. @@ -2516,10 +2516,10 @@ The commands accepted in walsender mode are: - pg_dynshmem, pg_notify, - pg_replslot, pg_serial, - pg_snapshots, pg_stat_tmp, and - pg_subtrans are copied as empty directories (even if + pg_dynshmem, pg_notify, + pg_replslot, pg_serial, + pg_snapshots, pg_stat_tmp, and + pg_subtrans are copied as empty directories (even if they are symbolic links). 
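   Putting the replication commands described above together, a minimal
   physical-replication session on a connection opened with replication=true
   in the startup message might look like this (the slot name and WAL
   location are placeholders):

CREATE_REPLICATION_SLOT myslot TEMPORARY PHYSICAL RESERVE_WAL;
START_REPLICATION SLOT myslot PHYSICAL 0/1000000;

   After the second command the connection switches into copy-both mode and
   WAL data begins streaming to the client.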
@@ -2549,7 +2549,7 @@ The commands accepted in walsender mode are: This section describes the logical replication protocol, which is the message flow started by the START_REPLICATION - SLOT slot_name + SLOT slot_name LOGICAL replication command. @@ -3419,7 +3419,7 @@ Bind (F) The number of parameter format codes that follow - (denoted C below). + (denoted C below). This can be zero to indicate that there are no parameters or that the parameters all use the default format (text); or one, in which case the specified format code is applied @@ -3430,7 +3430,7 @@ Bind (F) - Int16[C] + Int16[C] @@ -3488,7 +3488,7 @@ Bind (F) The number of result-column format codes that follow - (denoted R below). + (denoted R below). This can be zero to indicate that there are no result columns or that the result columns should all use the default format (text); @@ -3500,7 +3500,7 @@ Bind (F) - Int16[R] + Int16[R] @@ -3575,7 +3575,7 @@ CancelRequest (F) The cancel request code. The value is chosen to contain - 1234 in the most significant 16 bits, and 5678 in the + 1234 in the most significant 16 bits, and 5678 in the least significant 16 bits. (To avoid confusion, this code must not be the same as any protocol version number.) @@ -3642,8 +3642,8 @@ Close (F) - 'S' to close a prepared statement; or - 'P' to close a portal. + 'S' to close a prepared statement; or + 'P' to close a portal. @@ -3977,13 +3977,13 @@ CopyInResponse (B) The number of columns in the data to be copied - (denoted N below). + (denoted N below). - Int16[N] + Int16[N] @@ -4050,13 +4050,13 @@ CopyOutResponse (B) The number of columns in the data to be copied - (denoted N below). + (denoted N below). - Int16[N] + Int16[N] @@ -4123,13 +4123,13 @@ CopyBothResponse (B) The number of columns in the data to be copied - (denoted N below). + (denoted N below). - Int16[N] + Int16[N] @@ -4252,8 +4252,8 @@ Describe (F) - 'S' to describe a prepared statement; or - 'P' to describe a portal. + 'S' to describe a prepared statement; or + 'P' to describe a portal. @@ -4424,7 +4424,7 @@ Execute (F) Maximum number of rows to return, if portal contains a query that returns rows (ignored otherwise). Zero - denotes no limit. + denotes no limit. @@ -4514,7 +4514,7 @@ FunctionCall (F) The number of argument format codes that follow - (denoted C below). + (denoted C below). This can be zero to indicate that there are no arguments or that the arguments all use the default format (text); or one, in which case the specified format code is applied @@ -4525,7 +4525,7 @@ FunctionCall (F) - Int16[C] + Int16[C] @@ -4855,7 +4855,7 @@ NotificationResponse (B) - The payload string passed from the notifying process. + The payload string passed from the notifying process. @@ -5261,9 +5261,9 @@ ReadyForQuery (B) Current backend transaction status indicator. - Possible values are 'I' if idle (not in - a transaction block); 'T' if in a transaction - block; or 'E' if in a failed transaction + Possible values are 'I' if idle (not in + a transaction block); 'T' if in a transaction + block; or 'E' if in a failed transaction block (queries will be rejected until block is ended). @@ -5364,7 +5364,7 @@ RowDescription (B) - The data type size (see pg_type.typlen). + The data type size (see pg_type.typlen). Note that negative values denote variable-width types. @@ -5375,7 +5375,7 @@ RowDescription (B) - The type modifier (see pg_attribute.atttypmod). + The type modifier (see pg_attribute.atttypmod). The meaning of the modifier is type-specific. 
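   As a quick cross-check of the special request codes described here, the
   32-bit values can be computed directly (SSLRequest, covered below, uses
   5679 in the least significant 16 bits):

SELECT 1234 * 65536 + 5678 AS cancel_request_code,
       1234 * 65536 + 5679 AS ssl_request_code;

 cancel_request_code | ssl_request_code
---------------------+------------------
            80877102 |         80877103
(1 row)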
@@ -5539,7 +5539,7 @@ SSLRequest (F) The SSL request code. The value is chosen to contain - 1234 in the most significant 16 bits, and 5679 in the + 1234 in the most significant 16 bits, and 5679 in the least significant 16 bits. (To avoid confusion, this code must not be the same as any protocol version number.) @@ -5588,7 +5588,7 @@ StartupMessage (F) parameter name and value strings. A zero byte is required as a terminator after the last name/value pair. Parameters can appear in any - order. user is required, others are optional. + order. user is required, others are optional. Each parameter is specified as: @@ -5602,7 +5602,7 @@ StartupMessage (F) - user + user @@ -5613,7 +5613,7 @@ StartupMessage (F) - database + database @@ -5623,7 +5623,7 @@ StartupMessage (F) - options + options @@ -5631,23 +5631,23 @@ StartupMessage (F) deprecated in favor of setting individual run-time parameters.) Spaces within this string are considered to separate arguments, unless escaped with - a backslash (\); write \\ to + a backslash (\); write \\ to represent a literal backslash. - replication + replication Used to connect in streaming replication mode, where a small set of replication commands can be issued instead of SQL statements. Value can be - true, false, or - database, and the default is - false. See + true, false, or + database, and the default is + false. See for details. @@ -5768,15 +5768,15 @@ message. -S +S Severity: the field contents are - ERROR, FATAL, or - PANIC (in an error message), or - WARNING, NOTICE, DEBUG, - INFO, or LOG (in a notice message), + ERROR, FATAL, or + PANIC (in an error message), or + WARNING, NOTICE, DEBUG, + INFO, or LOG (in a notice message), or a localized translation of one of these. Always present. @@ -5784,18 +5784,18 @@ message. -V +V Severity: the field contents are - ERROR, FATAL, or - PANIC (in an error message), or - WARNING, NOTICE, DEBUG, - INFO, or LOG (in a notice message). - This is identical to the S field except + ERROR, FATAL, or + PANIC (in an error message), or + WARNING, NOTICE, DEBUG, + INFO, or LOG (in a notice message). + This is identical to the S field except that the contents are never localized. This is present only in - messages generated by PostgreSQL versions 9.6 + messages generated by PostgreSQL versions 9.6 and later. @@ -5803,7 +5803,7 @@ message. -C +C @@ -5815,7 +5815,7 @@ message. -M +M @@ -5828,7 +5828,7 @@ message. -D +D @@ -5840,7 +5840,7 @@ message. -H +H @@ -5854,7 +5854,7 @@ message. -P +P @@ -5868,21 +5868,21 @@ message. -p +p - Internal position: this is defined the same as the P + Internal position: this is defined the same as the P field, but it is used when the cursor position refers to an internally generated command rather than the one submitted by the client. - The q field will always appear when this field appears. + The q field will always appear when this field appears. -q +q @@ -5894,7 +5894,7 @@ message. -W +W @@ -5908,7 +5908,7 @@ message. -s +s @@ -5920,7 +5920,7 @@ message. -t +t @@ -5933,7 +5933,7 @@ message. -c +c @@ -5946,7 +5946,7 @@ message. -d +d @@ -5959,7 +5959,7 @@ message. -n +n @@ -5974,7 +5974,7 @@ message. -F +F @@ -5986,7 +5986,7 @@ message. -L +L @@ -5998,7 +5998,7 @@ message. -R +R @@ -6738,8 +6738,8 @@ developers trying to update existing client libraries to protocol 3.0. The initial startup packet uses a flexible list-of-strings format instead of a fixed format. Notice that session default values for run-time parameters can now be specified directly in the startup packet. 
(Actually, -you could do that before using the options field, but given the -limited width of options and the lack of any way to quote +you could do that before using the options field, but given the +limited width of options and the lack of any way to quote whitespace in the values, it wasn't a very safe technique.) @@ -6750,7 +6750,7 @@ PasswordMessage now has a type byte. -ErrorResponse and NoticeResponse ('E' and 'N') +ErrorResponse and NoticeResponse ('E' and 'N') messages now contain multiple fields, from which the client code can assemble an error message of the desired level of verbosity. Note that individual fields will typically not end with a newline, whereas the single @@ -6758,7 +6758,7 @@ string sent in the older protocol always did. -The ReadyForQuery ('Z') message includes a transaction status +The ReadyForQuery ('Z') message includes a transaction status indicator. @@ -6771,7 +6771,7 @@ directly tied to the server's internal representation. -There is a new extended query sub-protocol, which adds the frontend +There is a new extended query sub-protocol, which adds the frontend message types Parse, Bind, Execute, Describe, Close, Flush, and Sync, and the backend message types ParseComplete, BindComplete, PortalSuspended, ParameterDescription, NoData, and CloseComplete. Existing clients do not @@ -6782,7 +6782,7 @@ might allow improvements in performance or functionality. COPY data is now encapsulated into CopyData and CopyDone messages. There is a well-defined way to recover from errors during COPY. The special -\. last line is not needed anymore, and is not sent +\. last line is not needed anymore, and is not sent during COPY OUT. (It is still recognized as a terminator during COPY IN, but its use is deprecated and will eventually be removed.) Binary COPY is supported. @@ -6800,31 +6800,31 @@ server data representations. -The backend sends ParameterStatus ('S') messages during connection +The backend sends ParameterStatus ('S') messages during connection startup for all parameters it considers interesting to the client library. Subsequently, a ParameterStatus message is sent whenever the active value changes for any of these parameters. -The RowDescription ('T') message carries new table OID and column +The RowDescription ('T') message carries new table OID and column number fields for each column of the described row. It also shows the format code for each column. -The CursorResponse ('P') message is no longer generated by +The CursorResponse ('P') message is no longer generated by the backend. -The NotificationResponse ('A') message has an additional string -field, which can carry a payload string passed +The NotificationResponse ('A') message has an additional string +field, which can carry a payload string passed from the NOTIFY event sender. -The EmptyQueryResponse ('I') message used to include an empty +The EmptyQueryResponse ('I') message used to include an empty string parameter; this has been removed. diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml index 11a4bf4e41..c2c1aaa208 100644 --- a/doc/src/sgml/queries.sgml +++ b/doc/src/sgml/queries.sgml @@ -31,7 +31,7 @@ WITH with_queries SELECT select_list FROM table_expression sort_specification The following sections describe the details of the select list, the - table expression, and the sort specification. WITH + table expression, and the sort specification. WITH queries are treated last since they are an advanced feature. @@ -51,13 +51,13 @@ SELECT * FROM table1; expression happens to provide. 
A select list can also select a subset of the available columns or make calculations using the columns. For example, if - table1 has columns named a, - b, and c (and perhaps others) you can make + table1 has columns named a, + b, and c (and perhaps others) you can make the following query: SELECT a, b + c FROM table1; - (assuming that b and c are of a numerical + (assuming that b and c are of a numerical data type). See for more details. @@ -89,19 +89,19 @@ SELECT random(); A table expression computes a table. The - table expression contains a FROM clause that is - optionally followed by WHERE, GROUP BY, and - HAVING clauses. Trivial table expressions simply refer + table expression contains a FROM clause that is + optionally followed by WHERE, GROUP BY, and + HAVING clauses. Trivial table expressions simply refer to a table on disk, a so-called base table, but more complex expressions can be used to modify or combine base tables in various ways. - The optional WHERE, GROUP BY, and - HAVING clauses in the table expression specify a + The optional WHERE, GROUP BY, and + HAVING clauses in the table expression specify a pipeline of successive transformations performed on the table - derived in the FROM clause. All these transformations + derived in the FROM clause. All these transformations produce a virtual table that provides the rows that are passed to the select list to compute the output rows of the query. @@ -118,14 +118,14 @@ FROM table_reference , table_r A table reference can be a table name (possibly schema-qualified), - or a derived table such as a subquery, a JOIN construct, or + or a derived table such as a subquery, a JOIN construct, or complex combinations of these. If more than one table reference is - listed in the FROM clause, the tables are cross-joined + listed in the FROM clause, the tables are cross-joined (that is, the Cartesian product of their rows is formed; see below). - The result of the FROM list is an intermediate virtual + The result of the FROM list is an intermediate virtual table that can then be subject to - transformations by the WHERE, GROUP BY, - and HAVING clauses and is finally the result of the + transformations by the WHERE, GROUP BY, + and HAVING clauses and is finally the result of the overall table expression. @@ -137,14 +137,14 @@ FROM table_reference , table_r When a table reference names a table that is the parent of a table inheritance hierarchy, the table reference produces rows of not only that table but all of its descendant tables, unless the - key word ONLY precedes the table name. However, the + key word ONLY precedes the table name. However, the reference produces only the columns that appear in the named table — any columns added in subtables are ignored. - Instead of writing ONLY before the table name, you can write - * after the table name to explicitly specify that descendant + Instead of writing ONLY before the table name, you can write + * after the table name to explicitly specify that descendant tables are included. There is no real reason to use this syntax any more, because searching descendant tables is now always the default behavior. However, it is supported for compatibility with older releases. @@ -168,8 +168,8 @@ FROM table_reference , table_r Joins of all types can be chained together, or nested: either or both T1 and T2 can be joined tables. Parentheses - can be used around JOIN clauses to control the join - order. In the absence of parentheses, JOIN clauses + can be used around JOIN clauses to control the join + order. 
In the absence of parentheses, JOIN clauses nest left-to-right. @@ -215,7 +215,7 @@ FROM table_reference , table_r This latter equivalence does not hold exactly when more than two - tables appear, because JOIN binds more tightly than + tables appear, because JOIN binds more tightly than comma. For example FROM T1 CROSS JOIN T2 INNER JOIN T3 @@ -262,8 +262,8 @@ FROM table_reference , table_r The join condition is specified in the - ON or USING clause, or implicitly by - the word NATURAL. The join condition determines + ON or USING clause, or implicitly by + the word NATURAL. The join condition determines which rows from the two source tables are considered to match, as explained in detail below. @@ -273,7 +273,7 @@ FROM table_reference , table_r - INNER JOIN + INNER JOIN @@ -284,7 +284,7 @@ FROM table_reference , table_r - LEFT OUTER JOIN + LEFT OUTER JOIN join left @@ -307,7 +307,7 @@ FROM table_reference , table_r - RIGHT OUTER JOIN + RIGHT OUTER JOIN join right @@ -330,7 +330,7 @@ FROM table_reference , table_r - FULL OUTER JOIN + FULL OUTER JOIN @@ -347,35 +347,35 @@ FROM table_reference , table_r - The ON clause is the most general kind of join + The ON clause is the most general kind of join condition: it takes a Boolean value expression of the same - kind as is used in a WHERE clause. A pair of rows - from T1 and T2 match if the - ON expression evaluates to true. + kind as is used in a WHERE clause. A pair of rows + from T1 and T2 match if the + ON expression evaluates to true. - The USING clause is a shorthand that allows you to take + The USING clause is a shorthand that allows you to take advantage of the specific situation where both sides of the join use the same name for the joining column(s). It takes a comma-separated list of the shared column names and forms a join condition that includes an equality comparison - for each one. For example, joining T1 - and T2 with USING (a, b) produces - the join condition ON T1.a - = T2.a AND T1.b - = T2.b. + for each one. For example, joining T1 + and T2 with USING (a, b) produces + the join condition ON T1.a + = T2.a AND T1.b + = T2.b. - Furthermore, the output of JOIN USING suppresses + Furthermore, the output of JOIN USING suppresses redundant columns: there is no need to print both of the matched columns, since they must have equal values. While JOIN - ON produces all columns from T1 followed by all - columns from T2, JOIN USING produces one + ON produces all columns from T1 followed by all + columns from T2, JOIN USING produces one output column for each of the listed column pairs (in the listed - order), followed by any remaining columns from T1, - followed by any remaining columns from T2. + order), followed by any remaining columns from T1, + followed by any remaining columns from T2. @@ -386,10 +386,10 @@ FROM table_reference , table_r natural join - Finally, NATURAL is a shorthand form of - USING: it forms a USING list + Finally, NATURAL is a shorthand form of + USING: it forms a USING list consisting of all column names that appear in both - input tables. As with USING, these columns appear + input tables. As with USING, these columns appear only once in the output table. If there are no common column names, NATURAL JOIN behaves like JOIN ... ON TRUE, producing a cross-product join. @@ -399,7 +399,7 @@ FROM table_reference , table_r USING is reasonably safe from column changes in the joined relations since only the listed columns - are combined. NATURAL is considerably more risky since + are combined. 
NATURAL is considerably more risky since any schema changes to either relation that cause a new matching column name to be present will cause the join to combine that new column as well. @@ -428,7 +428,7 @@ FROM table_reference , table_r then we get the following results for the various joins: -=> SELECT * FROM t1 CROSS JOIN t2; +=> SELECT * FROM t1 CROSS JOIN t2; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -442,28 +442,28 @@ FROM table_reference , table_r 3 | c | 5 | zzz (9 rows) -=> SELECT * FROM t1 INNER JOIN t2 ON t1.num = t2.num; +=> SELECT * FROM t1 INNER JOIN t2 ON t1.num = t2.num; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx 3 | c | 3 | yyy (2 rows) -=> SELECT * FROM t1 INNER JOIN t2 USING (num); +=> SELECT * FROM t1 INNER JOIN t2 USING (num); num | name | value -----+------+------- 1 | a | xxx 3 | c | yyy (2 rows) -=> SELECT * FROM t1 NATURAL INNER JOIN t2; +=> SELECT * FROM t1 NATURAL INNER JOIN t2; num | name | value -----+------+------- 1 | a | xxx 3 | c | yyy (2 rows) -=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num; +=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -471,7 +471,7 @@ FROM table_reference , table_r 3 | c | 3 | yyy (3 rows) -=> SELECT * FROM t1 LEFT JOIN t2 USING (num); +=> SELECT * FROM t1 LEFT JOIN t2 USING (num); num | name | value -----+------+------- 1 | a | xxx @@ -479,7 +479,7 @@ FROM table_reference , table_r 3 | c | yyy (3 rows) -=> SELECT * FROM t1 RIGHT JOIN t2 ON t1.num = t2.num; +=> SELECT * FROM t1 RIGHT JOIN t2 ON t1.num = t2.num; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -487,7 +487,7 @@ FROM table_reference , table_r | | 5 | zzz (3 rows) -=> SELECT * FROM t1 FULL JOIN t2 ON t1.num = t2.num; +=> SELECT * FROM t1 FULL JOIN t2 ON t1.num = t2.num; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -499,12 +499,12 @@ FROM table_reference , table_r - The join condition specified with ON can also contain + The join condition specified with ON can also contain conditions that do not relate directly to the join. This can prove useful for some queries but needs to be thought out carefully. For example: -=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num AND t2.value = 'xxx'; +=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num AND t2.value = 'xxx'; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx @@ -512,19 +512,19 @@ FROM table_reference , table_r 3 | c | | (3 rows) - Notice that placing the restriction in the WHERE clause + Notice that placing the restriction in the WHERE clause produces a different result: -=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num WHERE t2.value = 'xxx'; +=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num WHERE t2.value = 'xxx'; num | name | num | value -----+------+-----+------- 1 | a | 1 | xxx (1 row) - This is because a restriction placed in the ON - clause is processed before the join, while - a restriction placed in the WHERE clause is processed - after the join. + This is because a restriction placed in the ON + clause is processed before the join, while + a restriction placed in the WHERE clause is processed + after the join. That does not matter with inner joins, but it matters a lot with outer joins. @@ -595,7 +595,7 @@ SELECT * FROM people AS mother JOIN people AS child ON mother.id = child.mother_ Parentheses are used to resolve ambiguities. 
In the following example, the first statement assigns the alias b to the second - instance of my_table, but the second statement assigns the + instance of my_table, but the second statement assigns the alias to the result of the join: SELECT * FROM my_table AS a CROSS JOIN my_table AS b ... @@ -615,9 +615,9 @@ FROM table_reference AS - When an alias is applied to the output of a JOIN + When an alias is applied to the output of a JOIN clause, the alias hides the original - name(s) within the JOIN. For example: + name(s) within the JOIN. For example: SELECT a.* FROM my_table AS a JOIN your_table AS b ON ... @@ -625,8 +625,8 @@ SELECT a.* FROM my_table AS a JOIN your_table AS b ON ... SELECT a.* FROM (my_table AS a JOIN your_table AS b ON ...) AS c - is not valid; the table alias a is not visible - outside the alias c. + is not valid; the table alias a is not visible + outside the alias c. @@ -655,13 +655,13 @@ FROM (SELECT * FROM table1) AS alias_name - A subquery can also be a VALUES list: + A subquery can also be a VALUES list: FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow')) AS names(first, last) Again, a table alias is required. Assigning alias names to the columns - of the VALUES list is optional, but is good practice. + of the VALUES list is optional, but is good practice. For more information see . @@ -669,25 +669,25 @@ FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow')) Table Functions - table function + table function - function - in the FROM clause + function + in the FROM clause Table functions are functions that produce a set of rows, made up of either base data types (scalar types) or composite data types (table rows). They are used like a table, view, or subquery in - the FROM clause of a query. Columns returned by table - functions can be included in SELECT, - JOIN, or WHERE clauses in the same manner + the FROM clause of a query. Columns returned by table + functions can be included in SELECT, + JOIN, or WHERE clauses in the same manner as columns of a table, view, or subquery. - Table functions may also be combined using the ROWS FROM + Table functions may also be combined using the ROWS FROM syntax, with the results returned in parallel columns; the number of result rows in this case is that of the largest function result, with smaller results padded with null values to match. @@ -704,7 +704,7 @@ ROWS FROM( function_call , ... function result columns. This column numbers the rows of the function result set, starting from 1. (This is a generalization of the SQL-standard syntax for UNNEST ... WITH ORDINALITY.) - By default, the ordinal column is called ordinality, but + By default, the ordinal column is called ordinality, but a different column name can be assigned to it using an AS clause. @@ -723,7 +723,7 @@ UNNEST( array_expression , ... If no table_alias is specified, the function - name is used as the table name; in the case of a ROWS FROM() + name is used as the table name; in the case of a ROWS FROM() construct, the first function's name is used. @@ -762,7 +762,7 @@ SELECT * FROM vw_getfoo; In some cases it is useful to define table functions that can return different column sets depending on how they are invoked. To support this, the table function can be declared as returning - the pseudo-type record. When such a function is used in + the pseudo-type record. When such a function is used in a query, the expected row structure must be specified in the query itself, so that the system can know how to parse and plan the query. 
This syntax looks like: @@ -775,16 +775,16 @@ ROWS FROM( ... function_call AS (column_ - When not using the ROWS FROM() syntax, + When not using the ROWS FROM() syntax, the column_definition list replaces the column - alias list that could otherwise be attached to the FROM + alias list that could otherwise be attached to the FROM item; the names in the column definitions serve as column aliases. - When using the ROWS FROM() syntax, + When using the ROWS FROM() syntax, a column_definition list can be attached to each member function separately; or if there is only one member function - and no WITH ORDINALITY clause, + and no WITH ORDINALITY clause, a column_definition list can be written in - place of a column alias list following ROWS FROM(). + place of a column alias list following ROWS FROM(). @@ -798,49 +798,49 @@ SELECT * The function (part of the module) executes a remote query. It is declared to return - record since it might be used for any kind of query. + record since it might be used for any kind of query. The actual column set must be specified in the calling query so - that the parser knows, for example, what * should + that the parser knows, for example, what * should expand to. - <literal>LATERAL</> Subqueries + <literal>LATERAL</literal> Subqueries - LATERAL - in the FROM clause + LATERAL + in the FROM clause - Subqueries appearing in FROM can be - preceded by the key word LATERAL. This allows them to - reference columns provided by preceding FROM items. + Subqueries appearing in FROM can be + preceded by the key word LATERAL. This allows them to + reference columns provided by preceding FROM items. (Without LATERAL, each subquery is evaluated independently and so cannot cross-reference any other - FROM item.) + FROM item.) - Table functions appearing in FROM can also be - preceded by the key word LATERAL, but for functions the + Table functions appearing in FROM can also be + preceded by the key word LATERAL, but for functions the key word is optional; the function's arguments can contain references - to columns provided by preceding FROM items in any case. + to columns provided by preceding FROM items in any case. A LATERAL item can appear at top level in the - FROM list, or within a JOIN tree. In the latter + FROM list, or within a JOIN tree. In the latter case it can also refer to any items that are on the left-hand side of a - JOIN that it is on the right-hand side of. + JOIN that it is on the right-hand side of. - When a FROM item contains LATERAL + When a FROM item contains LATERAL cross-references, evaluation proceeds as follows: for each row of the - FROM item providing the cross-referenced column(s), or - set of rows of multiple FROM items providing the + FROM item providing the cross-referenced column(s), or + set of rows of multiple FROM items providing the columns, the LATERAL item is evaluated using that row or row set's values of the columns. The resulting row(s) are joined as usual with the rows they were computed from. This is @@ -860,7 +860,7 @@ SELECT * FROM foo, bar WHERE bar.id = foo.bar_id; LATERAL is primarily useful when the cross-referenced column is necessary for computing the row(s) to be joined. A common application is providing an argument value for a set-returning function. 
- For example, supposing that vertices(polygon) returns the + For example, supposing that vertices(polygon) returns the set of vertices of a polygon, we could identify close-together vertices of polygons stored in a table with: @@ -878,15 +878,15 @@ FROM polygons p1 CROSS JOIN LATERAL vertices(p1.poly) v1, WHERE (v1 <-> v2) < 10 AND p1.id != p2.id; or in several other equivalent formulations. (As already mentioned, - the LATERAL key word is unnecessary in this example, but + the LATERAL key word is unnecessary in this example, but we use it for clarity.) - It is often particularly handy to LEFT JOIN to a + It is often particularly handy to LEFT JOIN to a LATERAL subquery, so that source rows will appear in the result even if the LATERAL subquery produces no - rows for them. For example, if get_product_names() returns + rows for them. For example, if get_product_names() returns the names of products made by a manufacturer, but some manufacturers in our table currently produce no products, we could find out which ones those are like this: @@ -918,20 +918,20 @@ WHERE search_condition - After the processing of the FROM clause is done, each + After the processing of the FROM clause is done, each row of the derived virtual table is checked against the search condition. If the result of the condition is true, the row is kept in the output table, otherwise (i.e., if the result is false or null) it is discarded. The search condition typically references at least one column of the table generated in the - FROM clause; this is not required, but otherwise the - WHERE clause will be fairly useless. + FROM clause; this is not required, but otherwise the + WHERE clause will be fairly useless. The join condition of an inner join can be written either in - the WHERE clause or in the JOIN clause. + the WHERE clause or in the JOIN clause. For example, these table expressions are equivalent: FROM a, b WHERE a.id = b.id AND b.val > 5 @@ -945,13 +945,13 @@ FROM a INNER JOIN b ON (a.id = b.id) WHERE b.val > 5 FROM a NATURAL JOIN b WHERE b.val > 5 Which one of these you use is mainly a matter of style. The - JOIN syntax in the FROM clause is + JOIN syntax in the FROM clause is probably not as portable to other SQL database management systems, even though it is in the SQL standard. For outer joins there is no choice: they must be done in - the FROM clause. The ON or USING - clause of an outer join is not equivalent to a - WHERE condition, because it results in the addition + the FROM clause. The ON or USING + clause of an outer join is not equivalent to a + WHERE condition, because it results in the addition of rows (for unmatched input rows) as well as the removal of rows in the final result. @@ -973,14 +973,14 @@ SELECT ... FROM fdt WHERE c1 BETWEEN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10) SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1) fdt is the table derived in the - FROM clause. Rows that do not meet the search - condition of the WHERE clause are eliminated from + FROM clause. Rows that do not meet the search + condition of the WHERE clause are eliminated from fdt. Notice the use of scalar subqueries as value expressions. Just like any other query, the subqueries can employ complex table expressions. Notice also how fdt is referenced in the subqueries. - Qualifying c1 as fdt.c1 is only necessary - if c1 is also the name of a column in the derived + Qualifying c1 as fdt.c1 is only necessary + if c1 is also the name of a column in the derived input table of the subquery. 
But qualifying the column name adds clarity even when it is not needed. This example shows how the column naming scope of an outer query extends into its inner queries. @@ -1000,9 +1000,9 @@ SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1) - After passing the WHERE filter, the derived input - table might be subject to grouping, using the GROUP BY - clause, and elimination of group rows using the HAVING + After passing the WHERE filter, the derived input + table might be subject to grouping, using the GROUP BY + clause, and elimination of group rows using the HAVING clause. @@ -1023,7 +1023,7 @@ SELECT select_list eliminate redundancy in the output and/or compute aggregates that apply to these groups. For instance: -=> SELECT * FROM test1; +=> SELECT * FROM test1; x | y ---+--- a | 3 @@ -1032,7 +1032,7 @@ SELECT select_list a | 1 (4 rows) -=> SELECT x FROM test1 GROUP BY x; +=> SELECT x FROM test1 GROUP BY x; x --- a @@ -1045,17 +1045,17 @@ SELECT select_list In the second query, we could not have written SELECT * FROM test1 GROUP BY x, because there is no single value - for the column y that could be associated with each + for the column y that could be associated with each group. The grouped-by columns can be referenced in the select list since they have a single value in each group. In general, if a table is grouped, columns that are not - listed in GROUP BY cannot be referenced except in aggregate + listed in GROUP BY cannot be referenced except in aggregate expressions. An example with aggregate expressions is: -=> SELECT x, sum(y) FROM test1 GROUP BY x; +=> SELECT x, sum(y) FROM test1 GROUP BY x; x | sum ---+----- a | 4 @@ -1073,7 +1073,7 @@ SELECT select_list Grouping without aggregate expressions effectively calculates the set of distinct values in a column. This can also be achieved - using the DISTINCT clause (see DISTINCT clause (see ). @@ -1088,10 +1088,10 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales In this example, the columns product_id, p.name, and p.price must be - in the GROUP BY clause since they are referenced in + in the GROUP BY clause since they are referenced in the query select list (but see below). The column - s.units does not have to be in the GROUP - BY list since it is only used in an aggregate expression + s.units does not have to be in the GROUP + BY list since it is only used in an aggregate expression (sum(...)), which represents the sales of a product. For each product, the query returns a summary row about all sales of the product. @@ -1110,9 +1110,9 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales - In strict SQL, GROUP BY can only group by columns of + In strict SQL, GROUP BY can only group by columns of the source table but PostgreSQL extends - this to also allow GROUP BY to group by columns in the + this to also allow GROUP BY to group by columns in the select list. Grouping by value expressions instead of simple column names is also allowed. @@ -1125,12 +1125,12 @@ SELECT product_id, p.name, (sum(s.units) * p.price) AS sales If a table has been grouped using GROUP BY, but only certain groups are of interest, the HAVING clause can be used, much like a - WHERE clause, to eliminate groups from the result. + WHERE clause, to eliminate groups from the result. The syntax is: SELECT select_list FROM ... WHERE ... GROUP BY ... 
HAVING boolean_expression - Expressions in the HAVING clause can refer both to + Expressions in the HAVING clause can refer both to grouped expressions and to ungrouped expressions (which necessarily involve an aggregate function). @@ -1138,14 +1138,14 @@ SELECT select_list FROM ... WHERE ... Example: -=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING sum(y) > 3; +=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING sum(y) > 3; x | sum ---+----- a | 4 b | 5 (2 rows) -=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING x < 'c'; +=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING x < 'c'; x | sum ---+----- a | 4 @@ -1163,26 +1163,26 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit GROUP BY product_id, p.name, p.price, p.cost HAVING sum(p.price * s.units) > 5000; - In the example above, the WHERE clause is selecting + In the example above, the WHERE clause is selecting rows by a column that is not grouped (the expression is only true for - sales during the last four weeks), while the HAVING + sales during the last four weeks), while the HAVING clause restricts the output to groups with total gross sales over 5000. Note that the aggregate expressions do not necessarily need to be the same in all parts of the query. - If a query contains aggregate function calls, but no GROUP BY + If a query contains aggregate function calls, but no GROUP BY clause, grouping still occurs: the result is a single group row (or perhaps no rows at all, if the single row is then eliminated by - HAVING). - The same is true if it contains a HAVING clause, even - without any aggregate function calls or GROUP BY clause. + HAVING). + The same is true if it contains a HAVING clause, even + without any aggregate function calls or GROUP BY clause. - <literal>GROUPING SETS</>, <literal>CUBE</>, and <literal>ROLLUP</> + <literal>GROUPING SETS</literal>, <literal>CUBE</literal>, and <literal>ROLLUP</literal> GROUPING SETS @@ -1196,13 +1196,13 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit More complex grouping operations than those described above are possible - using the concept of grouping sets. The data selected by - the FROM and WHERE clauses is grouped separately + using the concept of grouping sets. The data selected by + the FROM and WHERE clauses is grouped separately by each specified grouping set, aggregates computed for each group just as - for simple GROUP BY clauses, and then the results returned. + for simple GROUP BY clauses, and then the results returned. For example: -=> SELECT * FROM items_sold; +=> SELECT * FROM items_sold; brand | size | sales -------+------+------- Foo | L | 10 @@ -1211,7 +1211,7 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit Bar | L | 5 (4 rows) -=> SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ()); +=> SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ()); brand | size | sum -------+------+----- Foo | | 30 @@ -1224,12 +1224,12 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit - Each sublist of GROUPING SETS may specify zero or more columns + Each sublist of GROUPING SETS may specify zero or more columns or expressions and is interpreted the same way as though it were directly - in the GROUP BY clause. An empty grouping set means that all + in the GROUP BY clause. 
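For instance, a sublist that names two columns groups by both of them at once. A minimal sketch against the items_sold table shown above:

SELECT brand, size, sum(sales)
FROM items_sold
GROUP BY GROUPING SETS ((brand, size), ());
-- one row per (brand, size) combination, plus a single grand-total row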
An empty grouping set means that all rows are aggregated down to a single group (which is output even if no input rows were present), as described above for the case of aggregate - functions with no GROUP BY clause. + functions with no GROUP BY clause. @@ -1243,16 +1243,16 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit A shorthand notation is provided for specifying two common types of grouping set. A clause of the form -ROLLUP ( e1, e2, e3, ... ) +ROLLUP ( e1, e2, e3, ... ) represents the given list of expressions and all prefixes of the list including the empty list; thus it is equivalent to GROUPING SETS ( - ( e1, e2, e3, ... ), + ( e1, e2, e3, ... ), ... - ( e1, e2 ), - ( e1 ), + ( e1, e2 ), + ( e1 ), ( ) ) @@ -1263,7 +1263,7 @@ GROUPING SETS ( A clause of the form -CUBE ( e1, e2, ... ) +CUBE ( e1, e2, ... ) represents the given list and all of its possible subsets (i.e. the power set). Thus @@ -1286,7 +1286,7 @@ GROUPING SETS ( - The individual elements of a CUBE or ROLLUP + The individual elements of a CUBE or ROLLUP clause may be either individual expressions, or sublists of elements in parentheses. In the latter case, the sublists are treated as single units for the purposes of generating the individual grouping sets. @@ -1319,15 +1319,15 @@ GROUPING SETS ( - The CUBE and ROLLUP constructs can be used either - directly in the GROUP BY clause, or nested inside a - GROUPING SETS clause. If one GROUPING SETS clause + The CUBE and ROLLUP constructs can be used either + directly in the GROUP BY clause, or nested inside a + GROUPING SETS clause. If one GROUPING SETS clause is nested inside another, the effect is the same as if all the elements of the inner clause had been written directly in the outer clause. - If multiple grouping items are specified in a single GROUP BY + If multiple grouping items are specified in a single GROUP BY clause, then the final list of grouping sets is the cross product of the individual items. For example: @@ -1346,12 +1346,12 @@ GROUP BY GROUPING SETS ( - The construct (a, b) is normally recognized in expressions as + The construct (a, b) is normally recognized in expressions as a row constructor. - Within the GROUP BY clause, this does not apply at the top - levels of expressions, and (a, b) is parsed as a list of - expressions as described above. If for some reason you need - a row constructor in a grouping expression, use ROW(a, b). + Within the GROUP BY clause, this does not apply at the top + levels of expressions, and (a, b) is parsed as a list of + expressions as described above. If for some reason you need + a row constructor in a grouping expression, use ROW(a, b). @@ -1361,7 +1361,7 @@ GROUP BY GROUPING SETS ( window function - order of execution + order of execution @@ -1369,32 +1369,32 @@ GROUP BY GROUPING SETS ( , and ), these functions are evaluated - after any grouping, aggregation, and HAVING filtering is + after any grouping, aggregation, and HAVING filtering is performed. That is, if the query uses any aggregates, GROUP - BY, or HAVING, then the rows seen by the window functions + BY, or HAVING, then the rows seen by the window functions are the group rows instead of the original table rows from - FROM/WHERE. + FROM/WHERE. When multiple window functions are used, all the window functions having - syntactically equivalent PARTITION BY and ORDER BY + syntactically equivalent PARTITION BY and ORDER BY clauses in their window definitions are guaranteed to be evaluated in a single pass over the data. 
Therefore they will see the same sort ordering, - even if the ORDER BY does not uniquely determine an ordering. + even if the ORDER BY does not uniquely determine an ordering. However, no guarantees are made about the evaluation of functions having - different PARTITION BY or ORDER BY specifications. + different PARTITION BY or ORDER BY specifications. (In such cases a sort step is typically required between the passes of window function evaluations, and the sort is not guaranteed to preserve - ordering of rows that its ORDER BY sees as equivalent.) + ordering of rows that its ORDER BY sees as equivalent.) Currently, window functions always require presorted data, and so the query output will be ordered according to one or another of the window - functions' PARTITION BY/ORDER BY clauses. + functions' PARTITION BY/ORDER BY clauses. It is not recommended to rely on this, however. Use an explicit - top-level ORDER BY clause if you want to be sure the + top-level ORDER BY clause if you want to be sure the results are sorted in a particular way. @@ -1435,13 +1435,13 @@ GROUP BY GROUPING SETS ( SELECT a, b, c FROM ... - The columns names a, b, and c + The columns names a, b, and c are either the actual names of the columns of tables referenced - in the FROM clause, or the aliases given to them as + in the FROM clause, or the aliases given to them as explained in . The name space available in the select list is the same as in the - WHERE clause, unless grouping is used, in which case - it is the same as in the HAVING clause. + WHERE clause, unless grouping is used, in which case + it is the same as in the HAVING clause. @@ -1456,7 +1456,7 @@ SELECT tbl1.a, tbl2.a, tbl1.b FROM ... SELECT tbl1.*, tbl2.a FROM ... See for more about - the table_name.* notation. + the table_name.* notation. @@ -1465,7 +1465,7 @@ SELECT tbl1.*, tbl2.a FROM ... value expression is evaluated once for each result row, with the row's values substituted for any column references. But the expressions in the select list do not have to reference any - columns in the table expression of the FROM clause; + columns in the table expression of the FROM clause; they can be constant arithmetic expressions, for instance. @@ -1480,7 +1480,7 @@ SELECT tbl1.*, tbl2.a FROM ... The entries in the select list can be assigned names for subsequent - processing, such as for use in an ORDER BY clause + processing, such as for use in an ORDER BY clause or for display by the client application. For example: SELECT a AS value, b + c AS sum FROM ... @@ -1488,7 +1488,7 @@ SELECT a AS value, b + c AS sum FROM ... - If no output column name is specified using AS, + If no output column name is specified using AS, the system assigns a default column name. For simple column references, this is the name of the referenced column. For function calls, this is the name of the function. For complex expressions, @@ -1496,12 +1496,12 @@ SELECT a AS value, b + c AS sum FROM ... - The AS keyword is optional, but only if the new column + The AS keyword is optional, but only if the new column name does not match any PostgreSQL keyword (see ). To avoid an accidental match to a keyword, you can double-quote the column name. For example, - VALUE is a keyword, so this does not work: + VALUE is a keyword, so this does not work: SELECT a value, b + c AS sum FROM ... @@ -1517,7 +1517,7 @@ SELECT a "value", b + c AS sum FROM ... The naming of output columns here is different from that done in - the FROM clause (see FROM clause (see ). 
It is possible to rename the same column twice, but the name assigned in the select list is the one that will be passed on. @@ -1544,13 +1544,13 @@ SELECT a "value", b + c AS sum FROM ... SELECT DISTINCT select_list ... - (Instead of DISTINCT the key word ALL + (Instead of DISTINCT the key word ALL can be used to specify the default behavior of retaining all rows.) - null value - in DISTINCT + null value + in DISTINCT @@ -1571,16 +1571,16 @@ SELECT DISTINCT ON (expression , first row of a set is unpredictable unless the query is sorted on enough columns to guarantee a unique ordering - of the rows arriving at the DISTINCT filter. - (DISTINCT ON processing occurs after ORDER - BY sorting.) + of the rows arriving at the DISTINCT filter. + (DISTINCT ON processing occurs after ORDER + BY sorting.) - The DISTINCT ON clause is not part of the SQL standard + The DISTINCT ON clause is not part of the SQL standard and is sometimes considered bad style because of the potentially indeterminate nature of its results. With judicious use of - GROUP BY and subqueries in FROM, this + GROUP BY and subqueries in FROM, this construct can be avoided, but it is often the most convenient alternative. @@ -1635,27 +1635,27 @@ SELECT DISTINCT ON (expression , - UNION effectively appends the result of + UNION effectively appends the result of query2 to the result of query1 (although there is no guarantee that this is the order in which the rows are actually returned). Furthermore, it eliminates duplicate rows from its result, in the same - way as DISTINCT, unless UNION ALL is used. + way as DISTINCT, unless UNION ALL is used. - INTERSECT returns all rows that are both in the result + INTERSECT returns all rows that are both in the result of query1 and in the result of query2. Duplicate rows are eliminated - unless INTERSECT ALL is used. + unless INTERSECT ALL is used. - EXCEPT returns all rows that are in the result of + EXCEPT returns all rows that are in the result of query1 but not in the result of query2. (This is sometimes called the - difference between two queries.) Again, duplicates - are eliminated unless EXCEPT ALL is used. + difference between two queries.) Again, duplicates + are eliminated unless EXCEPT ALL is used. @@ -1690,7 +1690,7 @@ SELECT DISTINCT ON (expression , - The ORDER BY clause specifies the sort order: + The ORDER BY clause specifies the sort order: SELECT select_list FROM table_expression @@ -1705,17 +1705,17 @@ SELECT a, b FROM table1 ORDER BY a + b, c; When more than one expression is specified, the later values are used to sort rows that are equal according to the earlier values. Each expression can be followed by an optional - ASC or DESC keyword to set the sort direction to - ascending or descending. ASC order is the default. + ASC or DESC keyword to set the sort direction to + ascending or descending. ASC order is the default. Ascending order puts smaller values first, where smaller is defined in terms of the < operator. Similarly, descending order is determined with the > operator. - Actually, PostgreSQL uses the default B-tree - operator class for the expression's data type to determine the sort - ordering for ASC and DESC. Conventionally, + Actually, PostgreSQL uses the default B-tree + operator class for the expression's data type to determine the sort + ordering for ASC and DESC. 
Conventionally, data types will be set up so that the < and > operators correspond to this sort ordering, but a user-defined data type's designer could choose to do something @@ -1725,22 +1725,22 @@ SELECT a, b FROM table1 ORDER BY a + b, c; - The NULLS FIRST and NULLS LAST options can be + The NULLS FIRST and NULLS LAST options can be used to determine whether nulls appear before or after non-null values in the sort ordering. By default, null values sort as if larger than any - non-null value; that is, NULLS FIRST is the default for - DESC order, and NULLS LAST otherwise. + non-null value; that is, NULLS FIRST is the default for + DESC order, and NULLS LAST otherwise. Note that the ordering options are considered independently for each - sort column. For example ORDER BY x, y DESC means - ORDER BY x ASC, y DESC, which is not the same as - ORDER BY x DESC, y DESC. + sort column. For example ORDER BY x, y DESC means + ORDER BY x ASC, y DESC, which is not the same as + ORDER BY x DESC, y DESC. - A sort_expression can also be the column label or number + A sort_expression can also be the column label or number of an output column, as in: SELECT a + b AS sum, c FROM table1 ORDER BY sum; @@ -1748,21 +1748,21 @@ SELECT a, max(b) FROM table1 GROUP BY a ORDER BY 1; both of which sort by the first output column. Note that an output column name has to stand alone, that is, it cannot be used in an expression - — for example, this is not correct: + — for example, this is not correct: SELECT a + b AS sum, c FROM table1 ORDER BY sum + c; -- wrong This restriction is made to reduce ambiguity. There is still - ambiguity if an ORDER BY item is a simple name that + ambiguity if an ORDER BY item is a simple name that could match either an output column name or a column from the table expression. The output column is used in such cases. This would - only cause confusion if you use AS to rename an output + only cause confusion if you use AS to rename an output column to match some other table column's name. - ORDER BY can be applied to the result of a - UNION, INTERSECT, or EXCEPT + ORDER BY can be applied to the result of a + UNION, INTERSECT, or EXCEPT combination, but in this case it is only permitted to sort by output column names or numbers, not by expressions. @@ -1781,7 +1781,7 @@ SELECT a + b AS sum, c FROM table1 ORDER BY sum + c; -- wrong - LIMIT and OFFSET allow you to retrieve just + LIMIT and OFFSET allow you to retrieve just a portion of the rows that are generated by the rest of the query: SELECT select_list @@ -1794,49 +1794,49 @@ SELECT select_list If a limit count is given, no more than that many rows will be returned (but possibly fewer, if the query itself yields fewer rows). - LIMIT ALL is the same as omitting the LIMIT - clause, as is LIMIT with a NULL argument. + LIMIT ALL is the same as omitting the LIMIT + clause, as is LIMIT with a NULL argument. - OFFSET says to skip that many rows before beginning to - return rows. OFFSET 0 is the same as omitting the - OFFSET clause, as is OFFSET with a NULL argument. + OFFSET says to skip that many rows before beginning to + return rows. OFFSET 0 is the same as omitting the + OFFSET clause, as is OFFSET with a NULL argument. - If both OFFSET - and LIMIT appear, then OFFSET rows are - skipped before starting to count the LIMIT rows that + If both OFFSET + and LIMIT appear, then OFFSET rows are + skipped before starting to count the LIMIT rows that are returned. 
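For instance, a minimal paging sketch, assuming the products table from the earlier examples:

SELECT product_id, name
FROM products
ORDER BY product_id
LIMIT 10 OFFSET 10;  -- skip the first ten rows, return rows 11 through 20

The ORDER BY here is what makes the page boundaries stable, as discussed next.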
- When using LIMIT, it is important to use an - ORDER BY clause that constrains the result rows into a + When using LIMIT, it is important to use an + ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The - ordering is unknown, unless you specified ORDER BY. + ordering is unknown, unless you specified ORDER BY. - The query optimizer takes LIMIT into account when + The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give - for LIMIT and OFFSET. Thus, using - different LIMIT/OFFSET values to select + for LIMIT and OFFSET. Thus, using + different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable - result ordering with ORDER BY. This is not a bug; it + result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless - ORDER BY is used to constrain the order. + ORDER BY is used to constrain the order. - The rows skipped by an OFFSET clause still have to be - computed inside the server; therefore a large OFFSET + The rows skipped by an OFFSET clause still have to be + computed inside the server; therefore a large OFFSET might be inefficient. @@ -1850,7 +1850,7 @@ SELECT select_list - VALUES provides a way to generate a constant table + VALUES provides a way to generate a constant table that can be used in a query without having to actually create and populate a table on-disk. The syntax is @@ -1860,7 +1860,7 @@ VALUES ( expression [, ...] ) [, .. The lists must all have the same number of elements (i.e., the number of columns in the table), and corresponding entries in each list must have compatible data types. The actual data type assigned to each column - of the result is determined using the same rules as for UNION + of the result is determined using the same rules as for UNION (see ). @@ -1881,8 +1881,8 @@ SELECT 3, 'three'; By default, PostgreSQL assigns the names - column1, column2, etc. to the columns of a - VALUES table. The column names are not specified by the + column1, column2, etc. to the columns of a + VALUES table. The column names are not specified by the SQL standard and different database systems do it differently, so it's usually better to override the default names with a table alias list, like this: @@ -1898,16 +1898,16 @@ SELECT 3, 'three'; - Syntactically, VALUES followed by expression lists is + Syntactically, VALUES followed by expression lists is treated as equivalent to: SELECT select_list FROM table_expression - and can appear anywhere a SELECT can. For example, you can - use it as part of a UNION, or attach a - sort_specification (ORDER BY, - LIMIT, and/or OFFSET) to it. VALUES - is most commonly used as the data source in an INSERT command, + and can appear anywhere a SELECT can. For example, you can + use it as part of a UNION, or attach a + sort_specification (ORDER BY, + LIMIT, and/or OFFSET) to it. VALUES + is most commonly used as the data source in an INSERT command, and next most commonly as a subquery. 
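A minimal sketch of that most common use, again assuming the products table (product_id, name, price) from the earlier examples:

INSERT INTO products (product_id, name, price)
VALUES (1, 'Cheese', 9.99),
       (2, 'Bread', 1.99);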
@@ -1932,22 +1932,22 @@ SELECT select_list FROM table_expression - WITH provides a way to write auxiliary statements for use in a + WITH provides a way to write auxiliary statements for use in a larger query. These statements, which are often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query. Each auxiliary statement - in a WITH clause can be a SELECT, - INSERT, UPDATE, or DELETE; and the - WITH clause itself is attached to a primary statement that can - also be a SELECT, INSERT, UPDATE, or - DELETE. + in a WITH clause can be a SELECT, + INSERT, UPDATE, or DELETE; and the + WITH clause itself is attached to a primary statement that can + also be a SELECT, INSERT, UPDATE, or + DELETE. - <command>SELECT</> in <literal>WITH</> + <command>SELECT</command> in <literal>WITH</literal> - The basic value of SELECT in WITH is to + The basic value of SELECT in WITH is to break down complicated queries into simpler parts. An example is: @@ -1970,21 +1970,21 @@ GROUP BY region, product; which displays per-product sales totals in only the top sales regions. - The WITH clause defines two auxiliary statements named - regional_sales and top_regions, - where the output of regional_sales is used in - top_regions and the output of top_regions - is used in the primary SELECT query. - This example could have been written without WITH, + The WITH clause defines two auxiliary statements named + regional_sales and top_regions, + where the output of regional_sales is used in + top_regions and the output of top_regions + is used in the primary SELECT query. + This example could have been written without WITH, but we'd have needed two levels of nested sub-SELECTs. It's a bit easier to follow this way. - The optional RECURSIVE modifier changes WITH + The optional RECURSIVE modifier changes WITH from a mere syntactic convenience into a feature that accomplishes things not otherwise possible in standard SQL. Using - RECURSIVE, a WITH query can refer to its own + RECURSIVE, a WITH query can refer to its own output. A very simple example is this query to sum the integers from 1 through 100: @@ -1997,10 +1997,10 @@ WITH RECURSIVE t(n) AS ( SELECT sum(n) FROM t; - The general form of a recursive WITH query is always a - non-recursive term, then UNION (or - UNION ALL), then a - recursive term, where only the recursive term can contain + The general form of a recursive WITH query is always a + non-recursive term, then UNION (or + UNION ALL), then a + recursive term, where only the recursive term can contain a reference to the query's own output. Such a query is executed as follows: @@ -2010,10 +2010,10 @@ SELECT sum(n) FROM t; - Evaluate the non-recursive term. For UNION (but not - UNION ALL), discard duplicate rows. Include all remaining + Evaluate the non-recursive term. For UNION (but not + UNION ALL), discard duplicate rows. Include all remaining rows in the result of the recursive query, and also place them in a - temporary working table. + temporary working table. @@ -2026,10 +2026,10 @@ SELECT sum(n) FROM t; Evaluate the recursive term, substituting the current contents of the working table for the recursive self-reference. - For UNION (but not UNION ALL), discard + For UNION (but not UNION ALL), discard duplicate rows and rows that duplicate any previous result row. Include all remaining rows in the result of the recursive query, and - also place them in a temporary intermediate table. + also place them in a temporary intermediate table. 
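That duplicate check is what lets some otherwise-unbounded recursions terminate. A minimal sketch (the modulo step is purely illustrative):

WITH RECURSIVE t(n) AS (
    SELECT 1
  UNION                      -- with UNION ALL this would loop forever
    SELECT n % 3 + 1 FROM t  -- cycles through 1, 2, 3
)
SELECT * FROM t;             -- returns 1, 2, 3, then stops

Once every candidate row duplicates a previous result row, the working table becomes empty and evaluation ends.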
@@ -2046,7 +2046,7 @@ SELECT sum(n) FROM t; Strictly speaking, this process is iteration not recursion, but - RECURSIVE is the terminology chosen by the SQL standards + RECURSIVE is the terminology chosen by the SQL standards committee. @@ -2054,7 +2054,7 @@ SELECT sum(n) FROM t; In the example above, the working table has just a single row in each step, and it takes on the values from 1 through 100 in successive steps. In - the 100th step, there is no output because of the WHERE + the 100th step, there is no output because of the WHERE clause, and so the query terminates. @@ -2082,14 +2082,14 @@ GROUP BY sub_part When working with recursive queries it is important to be sure that the recursive part of the query will eventually return no tuples, or else the query will loop indefinitely. Sometimes, using - UNION instead of UNION ALL can accomplish this + UNION instead of UNION ALL can accomplish this by discarding rows that duplicate previous output rows. However, often a cycle does not involve output rows that are completely duplicate: it may be necessary to check just one or a few fields to see if the same point has been reached before. The standard method for handling such situations is to compute an array of the already-visited values. For example, consider - the following query that searches a table graph using a - link field: + the following query that searches a table graph using a + link field: WITH RECURSIVE search_graph(id, link, data, depth) AS ( @@ -2103,12 +2103,12 @@ WITH RECURSIVE search_graph(id, link, data, depth) AS ( SELECT * FROM search_graph; - This query will loop if the link relationships contain - cycles. Because we require a depth output, just changing - UNION ALL to UNION would not eliminate the looping. + This query will loop if the link relationships contain + cycles. Because we require a depth output, just changing + UNION ALL to UNION would not eliminate the looping. Instead we need to recognize whether we have reached the same row again while following a particular path of links. We add two columns - path and cycle to the loop-prone query: + path and cycle to the loop-prone query: WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS ( @@ -2127,13 +2127,13 @@ SELECT * FROM search_graph; Aside from preventing cycles, the array value is often useful in its own - right as representing the path taken to reach any particular row. + right as representing the path taken to reach any particular row. In the general case where more than one field needs to be checked to recognize a cycle, use an array of rows. For example, if we needed to - compare fields f1 and f2: + compare fields f1 and f2: WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS ( @@ -2154,7 +2154,7 @@ SELECT * FROM search_graph; - Omit the ROW() syntax in the common case where only one field + Omit the ROW() syntax in the common case where only one field needs to be checked to recognize a cycle. This allows a simple array rather than a composite-type array to be used, gaining efficiency. @@ -2164,16 +2164,16 @@ SELECT * FROM search_graph; The recursive query evaluation algorithm produces its output in breadth-first search order. You can display the results in depth-first - search order by making the outer query ORDER BY a - path column constructed in this way. + search order by making the outer query ORDER BY a + path column constructed in this way. 
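With the search_graph example above, that amounts to:

SELECT * FROM search_graph ORDER BY path;

Since arrays compare element by element, rows sharing a path prefix sort together.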
A helpful trick for testing queries - when you are not certain if they might loop is to place a LIMIT + when you are not certain if they might loop is to place a LIMIT in the parent query. For example, this query would loop forever without - the LIMIT: + the LIMIT: WITH RECURSIVE t(n) AS ( @@ -2185,26 +2185,26 @@ SELECT n FROM t LIMIT 100; This works because PostgreSQL's implementation - evaluates only as many rows of a WITH query as are actually + evaluates only as many rows of a WITH query as are actually fetched by the parent query. Using this trick in production is not recommended, because other systems might work differently. Also, it usually won't work if you make the outer query sort the recursive query's results or join them to some other table, because in such cases the - outer query will usually try to fetch all of the WITH query's + outer query will usually try to fetch all of the WITH query's output anyway. - A useful property of WITH queries is that they are evaluated + A useful property of WITH queries is that they are evaluated only once per execution of the parent query, even if they are referred to - more than once by the parent query or sibling WITH queries. + more than once by the parent query or sibling WITH queries. Thus, expensive calculations that are needed in multiple places can be - placed within a WITH query to avoid redundant work. Another + placed within a WITH query to avoid redundant work. Another possible application is to prevent unwanted multiple evaluations of functions with side-effects. However, the other side of this coin is that the optimizer is less able to - push restrictions from the parent query down into a WITH query - than an ordinary subquery. The WITH query will generally be + push restrictions from the parent query down into a WITH query + than an ordinary subquery. The WITH query will generally be evaluated as written, without suppression of rows that the parent query might discard afterwards. (But, as mentioned above, evaluation might stop early if the reference(s) to the query demand only a limited number of @@ -2212,20 +2212,20 @@ SELECT n FROM t LIMIT 100; - The examples above only show WITH being used with - SELECT, but it can be attached in the same way to - INSERT, UPDATE, or DELETE. + The examples above only show WITH being used with + SELECT, but it can be attached in the same way to + INSERT, UPDATE, or DELETE. In each case it effectively provides temporary table(s) that can be referred to in the main command. - Data-Modifying Statements in <literal>WITH</> + Data-Modifying Statements in <literal>WITH</literal> - You can use data-modifying statements (INSERT, - UPDATE, or DELETE) in WITH. This + You can use data-modifying statements (INSERT, + UPDATE, or DELETE) in WITH. This allows you to perform several different operations in the same query. An example is: @@ -2241,32 +2241,32 @@ INSERT INTO products_log SELECT * FROM moved_rows; - This query effectively moves rows from products to - products_log. The DELETE in WITH - deletes the specified rows from products, returning their - contents by means of its RETURNING clause; and then the + This query effectively moves rows from products to + products_log. The DELETE in WITH + deletes the specified rows from products, returning their + contents by means of its RETURNING clause; and then the primary query reads that output and inserts it into - products_log. + products_log. 
- A fine point of the above example is that the WITH clause is - attached to the INSERT, not the sub-SELECT within - the INSERT. This is necessary because data-modifying - statements are only allowed in WITH clauses that are attached - to the top-level statement. However, normal WITH visibility - rules apply, so it is possible to refer to the WITH - statement's output from the sub-SELECT. + A fine point of the above example is that the WITH clause is + attached to the INSERT, not the sub-SELECT within + the INSERT. This is necessary because data-modifying + statements are only allowed in WITH clauses that are attached + to the top-level statement. However, normal WITH visibility + rules apply, so it is possible to refer to the WITH + statement's output from the sub-SELECT. - Data-modifying statements in WITH usually have - RETURNING clauses (see ), + Data-modifying statements in WITH usually have + RETURNING clauses (see ), as shown in the example above. - It is the output of the RETURNING clause, not the + It is the output of the RETURNING clause, not the target table of the data-modifying statement, that forms the temporary table that can be referred to by the rest of the query. If a - data-modifying statement in WITH lacks a RETURNING + data-modifying statement in WITH lacks a RETURNING clause, then it forms no temporary table and cannot be referred to in the rest of the query. Such a statement will be executed nonetheless. A not-particularly-useful example is: @@ -2278,15 +2278,15 @@ WITH t AS ( DELETE FROM bar; - This example would remove all rows from tables foo and - bar. The number of affected rows reported to the client - would only include rows removed from bar. + This example would remove all rows from tables foo and + bar. The number of affected rows reported to the client + would only include rows removed from bar. Recursive self-references in data-modifying statements are not allowed. In some cases it is possible to work around this limitation by - referring to the output of a recursive WITH, for example: + referring to the output of a recursive WITH, for example: WITH RECURSIVE included_parts(sub_part, part) AS ( @@ -2304,24 +2304,24 @@ DELETE FROM parts - Data-modifying statements in WITH are executed exactly once, + Data-modifying statements in WITH are executed exactly once, and always to completion, independently of whether the primary query reads all (or indeed any) of their output. Notice that this is different - from the rule for SELECT in WITH: as stated in the - previous section, execution of a SELECT is carried only as far + from the rule for SELECT in WITH: as stated in the + previous section, execution of a SELECT is carried only as far as the primary query demands its output. - The sub-statements in WITH are executed concurrently with + The sub-statements in WITH are executed concurrently with each other and with the main query. Therefore, when using data-modifying - statements in WITH, the order in which the specified updates + statements in WITH, the order in which the specified updates actually happen is unpredictable. All the statements are executed with - the same snapshot (see ), so they - cannot see one another's effects on the target tables. This + the same snapshot (see ), so they + cannot see one another's effects on the target tables. 
This alleviates the effects of the unpredictability of the actual order of row - updates, and means that RETURNING data is the only way to - communicate changes between different WITH sub-statements and + updates, and means that RETURNING data is the only way to + communicate changes between different WITH sub-statements and the main query. An example of this is that in @@ -2332,8 +2332,8 @@ WITH t AS ( SELECT * FROM products; - the outer SELECT would return the original prices before the - action of the UPDATE, while in + the outer SELECT would return the original prices before the + action of the UPDATE, while in WITH t AS ( @@ -2343,7 +2343,7 @@ WITH t AS ( SELECT * FROM t; - the outer SELECT would return the updated data. + the outer SELECT would return the updated data. @@ -2353,15 +2353,15 @@ SELECT * FROM t; applies to deleting a row that was already updated in the same statement: only the update is performed. Therefore you should generally avoid trying to modify a single row twice in a single statement. In particular avoid - writing WITH sub-statements that could affect the same rows + writing WITH sub-statements that could affect the same rows changed by the main statement or a sibling sub-statement. The effects of such a statement will not be predictable. At present, any table used as the target of a data-modifying statement in - WITH must not have a conditional rule, nor an ALSO - rule, nor an INSTEAD rule that expands to multiple statements. + WITH must not have a conditional rule, nor an ALSO + rule, nor an INSTEAD rule that expands to multiple statements. diff --git a/doc/src/sgml/query.sgml b/doc/src/sgml/query.sgml index 98434925df..fc60febcbd 100644 --- a/doc/src/sgml/query.sgml +++ b/doc/src/sgml/query.sgml @@ -29,7 +29,7 @@ in the directory src/tutorial/. (Binary distributions of PostgreSQL might not compile these files.) To use those - files, first change to that directory and run make: + files, first change to that directory and run make: $ cd ..../src/tutorial @@ -50,7 +50,7 @@ The \i command reads in commands from the - specified file. psql's -s option puts you in + specified file. psql's -s option puts you in single step mode which pauses before sending each statement to the server. The commands used in this section are in the file basics.sql. @@ -155,8 +155,8 @@ CREATE TABLE weather ( PostgreSQL supports the standard SQL types int, smallint, real, double - precision, char(N), - varchar(N), date, + precision, char(N), + varchar(N), date, time, timestamp, and interval, as well as other types of general utility and a rich set of geometric types. @@ -211,7 +211,7 @@ INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27'); Note that all data types use rather obvious input formats. Constants that are not simple numeric values usually must be - surrounded by single quotes ('), as in the example. + surrounded by single quotes ('), as in the example. The date type is actually quite flexible in what it accepts, but for this tutorial we will stick to the unambiguous @@ -336,8 +336,8 @@ SELECT city, (temp_hi+temp_lo)/2 AS temp_avg, date FROM weather; - A query can be qualified by adding a WHERE - clause that specifies which rows are wanted. The WHERE + A query can be qualified by adding a WHERE + clause that specifies which rows are wanted. The WHERE clause contains a Boolean (truth value) expression, and only rows for which the Boolean expression is true are returned. 
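For example, the following sketch retrieves the rainy San Francisco entries from the weather table created above:

SELECT * FROM weather
    WHERE city = 'San Francisco' AND prcp > 0.0;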
The usual Boolean operators (AND, @@ -446,9 +446,9 @@ SELECT DISTINCT city of the same or different tables at one time is called a join query. As an example, say you wish to list all the weather records together with the location of the - associated city. To do that, we need to compare the city - column of each row of the weather table with the - name column of all rows in the cities + associated city. To do that, we need to compare the city + column of each row of the weather table with the + name column of all rows in the cities table, and select the pairs of rows where these values match. @@ -483,7 +483,7 @@ SELECT * There is no result row for the city of Hayward. This is because there is no matching entry in the cities table for Hayward, so the join - ignores the unmatched rows in the weather table. We will see + ignores the unmatched rows in the weather table. We will see shortly how this can be fixed. @@ -520,7 +520,7 @@ SELECT city, temp_lo, temp_hi, prcp, date, location Since the columns all had different names, the parser automatically found which table they belong to. If there were duplicate column names in the two tables you'd need to - qualify the column names to show which one you + qualify the column names to show which one you meant, as in: @@ -599,7 +599,7 @@ SELECT * self join. As an example, suppose we wish to find all the weather records that are in the temperature range of other weather records. So we need to compare the - temp_lo and temp_hi columns of + temp_lo and temp_hi columns of each weather row to the temp_lo and temp_hi columns of all other @@ -620,8 +620,8 @@ SELECT W1.city, W1.temp_lo AS low, W1.temp_hi AS high, (2 rows) - Here we have relabeled the weather table as W1 and - W2 to be able to distinguish the left and right side + Here we have relabeled the weather table as W1 and + W2 to be able to distinguish the left and right side of the join. You can also use these kinds of aliases in other queries to save some typing, e.g.: @@ -644,7 +644,7 @@ SELECT * Like most other relational database products, PostgreSQL supports - aggregate functions. + aggregate functions. An aggregate function computes a single result from multiple input rows. For example, there are aggregates to compute the count, sum, @@ -747,7 +747,7 @@ SELECT city, max(temp_lo) which gives us the same results for only the cities that have all - temp_lo values below 40. Finally, if we only care about + temp_lo values below 40. Finally, if we only care about cities whose names begin with S, we might do: @@ -871,7 +871,7 @@ DELETE FROM tablename; Without a qualification, DELETE will - remove all rows from the given table, leaving it + remove all rows from the given table, leaving it empty. The system will not request confirmation before doing this! diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml index 9557c16a4d..b585fd3d2a 100644 --- a/doc/src/sgml/rangetypes.sgml +++ b/doc/src/sgml/rangetypes.sgml @@ -9,7 +9,7 @@ Range types are data types representing a range of values of some - element type (called the range's subtype). + element type (called the range's subtype). For instance, ranges of timestamp might be used to represent the ranges of time that a meeting room is reserved. In this case the data type @@ -148,12 +148,12 @@ SELECT isempty(numrange(1, 5)); - Also, some element types have a notion of infinity, but that + Also, some element types have a notion of infinity, but that is just another value so far as the range type mechanisms are concerned. 
- For example, in timestamp ranges, [today,] means the same - thing as [today,). But [today,infinity] means - something different from [today,infinity) — the latter - excludes the special timestamp value infinity. + For example, in timestamp ranges, [today,] means the same + thing as [today,). But [today,infinity] means + something different from [today,infinity) — the latter + excludes the special timestamp value infinity. @@ -284,25 +284,25 @@ SELECT numrange(NULL, 2.2); no valid values between them. This contrasts with continuous ranges, where it's always (or almost always) possible to identify other element values between two given values. For example, a range over the - numeric type is continuous, as is a range over timestamp. - (Even though timestamp has limited precision, and so could + numeric type is continuous, as is a range over timestamp. + (Even though timestamp has limited precision, and so could theoretically be treated as discrete, it's better to consider it continuous since the step size is normally not of interest.) Another way to think about a discrete range type is that there is a clear - idea of a next or previous value for each element value. + idea of a next or previous value for each element value. Knowing that, it is possible to convert between inclusive and exclusive representations of a range's bounds, by choosing the next or previous element value instead of the one originally given. - For example, in an integer range type [4,8] and - (3,9) denote the same set of values; but this would not be so + For example, in an integer range type [4,8] and + (3,9) denote the same set of values; but this would not be so for a range over numeric. - A discrete range type should have a canonicalization + A discrete range type should have a canonicalization function that is aware of the desired step size for the element type. The canonicalization function is charged with converting equivalent values of the range type to have identical representations, in particular @@ -352,8 +352,8 @@ SELECT '[1.234, 5.678]'::floatrange; If the subtype is considered to have discrete rather than continuous - values, the CREATE TYPE command should specify a - canonical function. + values, the CREATE TYPE command should specify a + canonical function. The canonicalization function takes an input range value, and must return an equivalent range value that may have different bounds and formatting. The canonical output for two ranges that represent the same set of values, @@ -364,7 +364,7 @@ SELECT '[1.234, 5.678]'::floatrange; formatting. In addition to adjusting the inclusive/exclusive bounds format, a canonicalization function might round off boundary values, in case the desired step size is larger than what the subtype is capable of - storing. For instance, a range type over timestamp could be + storing. For instance, a range type over timestamp could be defined to have a step size of an hour, in which case the canonicalization function would need to round off bounds that weren't a multiple of an hour, or perhaps throw an error instead. @@ -372,25 +372,25 @@ SELECT '[1.234, 5.678]'::floatrange; In addition, any range type that is meant to be used with GiST or SP-GiST - indexes should define a subtype difference, or subtype_diff, - function. (The index will still work without subtype_diff, + indexes should define a subtype difference, or subtype_diff, + function. 
(The index will still work without subtype_diff, but it is likely to be considerably less efficient than if a difference function is provided.) The subtype difference function takes two input values of the subtype, and returns their difference - (i.e., X minus Y) represented as - a float8 value. In our example above, the - function float8mi that underlies the regular float8 + (i.e., X minus Y) represented as + a float8 value. In our example above, the + function float8mi that underlies the regular float8 minus operator can be used; but for any other subtype, some type conversion would be necessary. Some creative thought about how to represent differences as numbers might be needed, too. To the greatest - extent possible, the subtype_diff function should agree with + extent possible, the subtype_diff function should agree with the sort ordering implied by the selected operator class and collation; that is, its result should be positive whenever its first argument is greater than its second according to the sort ordering. - A less-oversimplified example of a subtype_diff function is: + A less-oversimplified example of a subtype_diff function is: @@ -426,15 +426,15 @@ SELECT '[11:10, 23:00]'::timerange; CREATE INDEX reservation_idx ON reservation USING GIST (during); A GiST or SP-GiST index can accelerate queries involving these range operators: - =, - &&, - <@, - @>, - <<, - >>, - -|-, - &<, and - &> + =, + &&, + <@, + @>, + <<, + >>, + -|-, + &<, and + &> (see for more information). @@ -442,7 +442,7 @@ CREATE INDEX reservation_idx ON reservation USING GIST (during); In addition, B-tree and hash indexes can be created for table columns of range types. For these index types, basically the only useful range operation is equality. There is a B-tree sort ordering defined for range - values, with corresponding < and > operators, + values, with corresponding < and > operators, but the ordering is rather arbitrary and not usually useful in the real world. Range types' B-tree and hash support is primarily meant to allow sorting and hashing internally in queries, rather than creation of @@ -491,7 +491,7 @@ with existing key (during)=(["2010-01-01 11:30:00","2010-01-01 15:00:00")). - You can use the btree_gist + You can use the btree_gist extension to define exclusion constraints on plain scalar data types, which can then be combined with range exclusions for maximum flexibility. For example, after btree_gist is installed, the following diff --git a/doc/src/sgml/recovery-config.sgml b/doc/src/sgml/recovery-config.sgml index 0a5d086248..4e1aa74c1f 100644 --- a/doc/src/sgml/recovery-config.sgml +++ b/doc/src/sgml/recovery-config.sgml @@ -11,23 +11,23 @@ This chapter describes the settings available in the - recovery.confrecovery.conf + recovery.confrecovery.conf file. They apply only for the duration of the recovery. They must be reset for any subsequent recovery you wish to perform. They cannot be changed once recovery has begun. - Settings in recovery.conf are specified in the format - name = 'value'. One parameter is specified per line. + Settings in recovery.conf are specified in the format + name = 'value'. One parameter is specified per line. Hash marks (#) designate the rest of the line as a comment. To embed a single quote in a parameter - value, write two quotes (''). + value, write two quotes (''). - A sample file, share/recovery.conf.sample, - is provided in the installation's share/ directory. + A sample file, share/recovery.conf.sample, + is provided in the installation's share/ directory. 
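Put together, a recovery.conf in this format might look like the following sketch; both parameters are described later in this chapter, and the values are illustrative only:

# fetch archived WAL segments from the archive directory
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
# stop replay at a named restore point; a quote inside a value is doubled
recovery_target_name = 'before_friday''s_release'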
@@ -38,7 +38,7 @@ restore_command (string) - restore_command recovery parameter + restore_command recovery parameter @@ -46,25 +46,25 @@ The local shell command to execute to retrieve an archived segment of the WAL file series. This parameter is required for archive recovery, but optional for streaming replication. - Any %f in the string is + Any %f in the string is replaced by the name of the file to retrieve from the archive, - and any %p is replaced by the copy destination path name + and any %p is replaced by the copy destination path name on the server. (The path name is relative to the current working directory, i.e., the cluster's data directory.) - Any %r is replaced by the name of the file containing the + Any %r is replaced by the name of the file containing the last valid restart point. That is the earliest file that must be kept to allow a restore to be restartable, so this information can be used to truncate the archive to just the minimum required to support - restarting from the current restore. %r is typically only + restarting from the current restore. %r is typically only used by warm-standby configurations (see ). - Write %% to embed an actual % character. + Write %% to embed an actual % character. It is important for the command to return a zero exit status - only if it succeeds. The command will be asked for file + only if it succeeds. The command will be asked for file names that are not present in the archive; it must return nonzero when so asked. Examples: @@ -82,33 +82,33 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows archive_cleanup_command (string) - archive_cleanup_command recovery parameter + archive_cleanup_command recovery parameter This optional parameter specifies a shell command that will be executed at every restartpoint. The purpose of - archive_cleanup_command is to provide a mechanism for + archive_cleanup_command is to provide a mechanism for cleaning up old archived WAL files that are no longer needed by the standby server. - Any %r is replaced by the name of the file containing the + Any %r is replaced by the name of the file containing the last valid restart point. - That is the earliest file that must be kept to allow a - restore to be restartable, and so all files earlier than %r + That is the earliest file that must be kept to allow a + restore to be restartable, and so all files earlier than %r may be safely removed. This information can be used to truncate the archive to just the minimum required to support restart from the current restore. The module - is often used in archive_cleanup_command for + is often used in archive_cleanup_command for single-standby configurations, for example: archive_cleanup_command = 'pg_archivecleanup /mnt/server/archivedir %r' Note however that if multiple standby servers are restoring from the same archive directory, you will need to ensure that you do not delete WAL files until they are no longer needed by any of the servers. - archive_cleanup_command would typically be used in a + archive_cleanup_command would typically be used in a warm-standby configuration (see ). - Write %% to embed an actual % character in the + Write %% to embed an actual % character in the command. @@ -123,16 +123,16 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_end_command (string) - recovery_end_command recovery parameter + recovery_end_command recovery parameter This parameter specifies a shell command that will be executed once only at the end of recovery. 
This parameter is optional. The purpose of the - recovery_end_command is to provide a mechanism for cleanup + recovery_end_command is to provide a mechanism for cleanup following replication or recovery. - Any %r is replaced by the name of the file containing the + Any %r is replaced by the name of the file containing the last valid restart point, like in . @@ -156,9 +156,9 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows By default, recovery will recover to the end of the WAL log. The following parameters can be used to specify an earlier stopping point. - At most one of recovery_target, - recovery_target_lsn, recovery_target_name, - recovery_target_time, or recovery_target_xid + At most one of recovery_target, + recovery_target_lsn, recovery_target_name, + recovery_target_time, or recovery_target_xid can be used; if more than one of these is specified in the configuration file, the last entry will be used. @@ -167,7 +167,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target = 'immediate' - recovery_target recovery parameter + recovery_target recovery parameter @@ -178,7 +178,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows ended. - Technically, this is a string parameter, but 'immediate' + Technically, this is a string parameter, but 'immediate' is currently the only allowed value. @@ -187,13 +187,13 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target_name (string) - recovery_target_name recovery parameter + recovery_target_name recovery parameter This parameter specifies the named restore point (created with - pg_create_restore_point()) to which recovery will proceed. + pg_create_restore_point()) to which recovery will proceed. @@ -201,7 +201,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target_time (timestamp) - recovery_target_time recovery parameter + recovery_target_time recovery parameter @@ -217,7 +217,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target_xid (string) - recovery_target_xid recovery parameter + recovery_target_xid recovery parameter @@ -237,7 +237,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_target_lsn (pg_lsn) - recovery_target_lsn recovery parameter + recovery_target_lsn recovery parameter @@ -246,7 +246,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows to which recovery will proceed. The precise stopping point is also influenced by . This parameter is parsed using the system data type - pg_lsn. + pg_lsn. @@ -262,7 +262,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows xreflabel="recovery_target_inclusive"> recovery_target_inclusive (boolean) - recovery_target_inclusive recovery parameter + recovery_target_inclusive recovery parameter @@ -274,7 +274,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows or is specified. This setting controls whether transactions having exactly the target commit time or ID, respectively, will - be included in the recovery. Default is true. + be included in the recovery. Default is true. @@ -283,14 +283,14 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows xreflabel="recovery_target_timeline"> recovery_target_timeline (string) - recovery_target_timeline recovery parameter + recovery_target_timeline recovery parameter Specifies recovering into a particular timeline. 
The default is to recover along the same timeline that was current when the - base backup was taken. Setting this to latest recovers + base backup was taken. Setting this to latest recovers to the latest timeline found in the archive, which is useful in a standby server. Other than that you only need to set this parameter in complex re-recovery situations, where you need to return to @@ -304,24 +304,24 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows xreflabel="recovery_target_action"> recovery_target_action (enum) - recovery_target_action recovery parameter + recovery_target_action recovery parameter Specifies what action the server should take once the recovery target is - reached. The default is pause, which means recovery will - be paused. promote means the recovery process will finish + reached. The default is pause, which means recovery will + be paused. promote means the recovery process will finish and the server will start to accept connections. - Finally shutdown will stop the server after reaching the + Finally shutdown will stop the server after reaching the recovery target. - The intended use of the pause setting is to allow queries + The intended use of the pause setting is to allow queries to be executed against the database to check if this recovery target is the most desirable point for recovery. The paused state can be resumed by - using pg_wal_replay_resume() (see + using pg_wal_replay_resume() (see ), which then causes recovery to end. If this recovery target is not the desired stopping point, then shut down the server, change the @@ -329,22 +329,22 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows continue recovery. - The shutdown setting is useful to have the instance ready + The shutdown setting is useful to have the instance ready at the exact replay point desired. The instance will still be able to replay more WAL records (and in fact will have to replay WAL records since the last checkpoint next time it is started). - Note that because recovery.conf will not be renamed when - recovery_target_action is set to shutdown, + Note that because recovery.conf will not be renamed when + recovery_target_action is set to shutdown, any subsequent start will end with immediate shutdown unless the - configuration is changed or the recovery.conf file is + configuration is changed or the recovery.conf file is removed manually. This setting has no effect if no recovery target is set. If is not enabled, a setting of - pause will act the same as shutdown. + pause will act the same as shutdown. @@ -360,25 +360,25 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows standby_mode (boolean) - standby_mode recovery parameter + standby_mode recovery parameter - Specifies whether to start the PostgreSQL server as - a standby. If this parameter is on, the server will + Specifies whether to start the PostgreSQL server as + a standby. If this parameter is on, the server will not stop recovery when the end of archived WAL is reached, but will keep trying to continue recovery by fetching new WAL segments - using restore_command + using restore_command and/or by connecting to the primary server as specified by the - primary_conninfo setting. + primary_conninfo setting. 
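A minimal warm-standby sketch combining this setting with the restore_command described earlier and the primary_conninfo setting described next (host and credentials are placeholders):

standby_mode = 'on'
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'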
primary_conninfo (string) - primary_conninfo recovery parameter + primary_conninfo recovery parameter @@ -401,20 +401,20 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows A password needs to be provided too, if the primary demands password authentication. It can be provided in the primary_conninfo string, or in a separate - ~/.pgpass file on the standby server (use - replication as the database name). + ~/.pgpass file on the standby server (use + replication as the database name). Do not specify a database name in the primary_conninfo string. - This setting has no effect if standby_mode is off. + This setting has no effect if standby_mode is off. primary_slot_name (string) - primary_slot_name recovery parameter + primary_slot_name recovery parameter @@ -423,7 +423,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows connecting to the primary via streaming replication to control resource removal on the upstream node (see ). - This setting has no effect if primary_conninfo is not + This setting has no effect if primary_conninfo is not set. @@ -431,15 +431,15 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows trigger_file (string) - trigger_file recovery parameter + trigger_file recovery parameter Specifies a trigger file whose presence ends recovery in the standby. Even if this value is not set, you can still promote - the standby using pg_ctl promote. - This setting has no effect if standby_mode is off. + the standby using pg_ctl promote. + This setting has no effect if standby_mode is off. @@ -447,7 +447,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_min_apply_delay (integer) - recovery_min_apply_delay recovery parameter + recovery_min_apply_delay recovery parameter @@ -488,7 +488,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows This parameter is intended for use with streaming replication deployments; however, if the parameter is specified it will be honored in all cases. - hot_standby_feedback will be delayed by use of this feature + hot_standby_feedback will be delayed by use of this feature which could lead to bloat on the master; use both together with care. diff --git a/doc/src/sgml/ref/abort.sgml b/doc/src/sgml/ref/abort.sgml index ed9332c395..285d0d4ac6 100644 --- a/doc/src/sgml/ref/abort.sgml +++ b/doc/src/sgml/ref/abort.sgml @@ -63,7 +63,7 @@ ABORT [ WORK | TRANSACTION ] - Issuing ABORT outside of a transaction block + Issuing ABORT outside of a transaction block emits a warning and otherwise has no effect. diff --git a/doc/src/sgml/ref/alter_aggregate.sgml b/doc/src/sgml/ref/alter_aggregate.sgml index 7b7616ca01..43f0a1609b 100644 --- a/doc/src/sgml/ref/alter_aggregate.sgml +++ b/doc/src/sgml/ref/alter_aggregate.sgml @@ -43,7 +43,7 @@ ALTER AGGREGATE name ( aggregate_signatu - You must own the aggregate function to use ALTER AGGREGATE. + You must own the aggregate function to use ALTER AGGREGATE. To change the schema of an aggregate function, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new @@ -73,8 +73,8 @@ ALTER AGGREGATE name ( aggregate_signatu - The mode of an argument: IN or VARIADIC. - If omitted, the default is IN. + The mode of an argument: IN or VARIADIC. + If omitted, the default is IN. @@ -97,10 +97,10 @@ ALTER AGGREGATE name ( aggregate_signatu An input data type on which the aggregate function operates. 
- To reference a zero-argument aggregate function, write * + To reference a zero-argument aggregate function, write * in place of the list of argument specifications. To reference an ordered-set aggregate function, write - ORDER BY between the direct and aggregated argument + ORDER BY between the direct and aggregated argument specifications. @@ -140,13 +140,13 @@ ALTER AGGREGATE name ( aggregate_signatu The recommended syntax for referencing an ordered-set aggregate - is to write ORDER BY between the direct and aggregated + is to write ORDER BY between the direct and aggregated argument specifications, in the same style as in . However, it will also work to - omit ORDER BY and just run the direct and aggregated + omit ORDER BY and just run the direct and aggregated argument specifications into a single list. In this abbreviated form, - if VARIADIC "any" was used in both the direct and - aggregated argument lists, write VARIADIC "any" only once. + if VARIADIC "any" was used in both the direct and + aggregated argument lists, write VARIADIC "any" only once. diff --git a/doc/src/sgml/ref/alter_collation.sgml b/doc/src/sgml/ref/alter_collation.sgml index 30e8c756a1..9d77ee5c2c 100644 --- a/doc/src/sgml/ref/alter_collation.sgml +++ b/doc/src/sgml/ref/alter_collation.sgml @@ -38,7 +38,7 @@ ALTER COLLATION name SET SCHEMA new_sche - You must own the collation to use ALTER COLLATION. + You must own the collation to use ALTER COLLATION. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the collation's schema. (These restrictions enforce that altering the diff --git a/doc/src/sgml/ref/alter_conversion.sgml b/doc/src/sgml/ref/alter_conversion.sgml index 3514720d03..83fcbbd5a5 100644 --- a/doc/src/sgml/ref/alter_conversion.sgml +++ b/doc/src/sgml/ref/alter_conversion.sgml @@ -36,7 +36,7 @@ ALTER CONVERSION name SET SCHEMA new_sch - You must own the conversion to use ALTER CONVERSION. + You must own the conversion to use ALTER CONVERSION. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the conversion's schema. (These restrictions enforce that altering the diff --git a/doc/src/sgml/ref/alter_database.sgml b/doc/src/sgml/ref/alter_database.sgml index 59639d3729..35e4123cad 100644 --- a/doc/src/sgml/ref/alter_database.sgml +++ b/doc/src/sgml/ref/alter_database.sgml @@ -89,7 +89,7 @@ ALTER DATABASE name RESET ALL database. Whenever a new session is subsequently started in that database, the specified value becomes the session default value. The database-specific default overrides whatever setting is present - in postgresql.conf or has been received from the + in postgresql.conf or has been received from the postgres command line. Only the database owner or a superuser can change the session defaults for a database. Certain variables cannot be set this way, or can only be @@ -183,7 +183,7 @@ ALTER DATABASE name RESET ALL database-specific setting is removed, so the system-wide default setting will be inherited in new sessions. Use RESET ALL to clear all database-specific settings. - SET FROM CURRENT saves the session's current value of + SET FROM CURRENT saves the session's current value of the parameter as the database-specific value. 
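As a sketch of the per-database session defaults described just above (the database and schema names are hypothetical):

    ALTER DATABASE sales SET search_path = myschema, public;  -- default for new sessions in sales
    ALTER DATABASE sales RESET search_path;                   -- revert to the system-wide setting
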
diff --git a/doc/src/sgml/ref/alter_default_privileges.sgml b/doc/src/sgml/ref/alter_default_privileges.sgml index 09eabda68a..6c34f2446a 100644 --- a/doc/src/sgml/ref/alter_default_privileges.sgml +++ b/doc/src/sgml/ref/alter_default_privileges.sgml @@ -88,7 +88,7 @@ REVOKE [ GRANT OPTION FOR ] Description - ALTER DEFAULT PRIVILEGES allows you to set the privileges + ALTER DEFAULT PRIVILEGES allows you to set the privileges that will be applied to objects created in the future. (It does not affect privileges assigned to already-existing objects.) Currently, only the privileges for schemas, tables (including views and foreign @@ -109,9 +109,9 @@ REVOKE [ GRANT OPTION FOR ] As explained under , the default privileges for any object type normally grant all grantable permissions to the object owner, and may grant some privileges to - PUBLIC as well. However, this behavior can be changed by + PUBLIC as well. However, this behavior can be changed by altering the global default privileges with - ALTER DEFAULT PRIVILEGES. + ALTER DEFAULT PRIVILEGES. @@ -123,7 +123,7 @@ REVOKE [ GRANT OPTION FOR ] The name of an existing role of which the current role is a member. - If FOR ROLE is omitted, the current role is assumed. + If FOR ROLE is omitted, the current role is assumed. @@ -134,9 +134,9 @@ REVOKE [ GRANT OPTION FOR ] The name of an existing schema. If specified, the default privileges are altered for objects later created in that schema. - If IN SCHEMA is omitted, the global default privileges + If IN SCHEMA is omitted, the global default privileges are altered. - IN SCHEMA is not allowed when using ON SCHEMAS + IN SCHEMA is not allowed when using ON SCHEMAS as schemas can't be nested. @@ -148,7 +148,7 @@ REVOKE [ GRANT OPTION FOR ] The name of an existing role to grant or revoke privileges for. This parameter, and all the other parameters in - abbreviated_grant_or_revoke, + abbreviated_grant_or_revoke, act as described under or , @@ -175,7 +175,7 @@ REVOKE [ GRANT OPTION FOR ] If you wish to drop a role for which the default privileges have been altered, it is necessary to reverse the changes in its default privileges - or use DROP OWNED BY to get rid of the default privileges entry + or use DROP OWNED BY to get rid of the default privileges entry for the role. @@ -186,7 +186,7 @@ REVOKE [ GRANT OPTION FOR ] Grant SELECT privilege to everyone for all tables (and views) you subsequently create in schema myschema, and allow - role webuser to INSERT into them too: + role webuser to INSERT into them too: ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO PUBLIC; @@ -206,7 +206,7 @@ ALTER DEFAULT PRIVILEGES IN SCHEMA myschema REVOKE INSERT ON TABLES FROM webuser Remove the public EXECUTE permission that is normally granted on functions, - for all functions subsequently created by role admin: + for all functions subsequently created by role admin: ALTER DEFAULT PRIVILEGES FOR ROLE admin REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC; diff --git a/doc/src/sgml/ref/alter_domain.sgml b/doc/src/sgml/ref/alter_domain.sgml index 827a1c7d20..96a7db95ec 100644 --- a/doc/src/sgml/ref/alter_domain.sgml +++ b/doc/src/sgml/ref/alter_domain.sgml @@ -69,7 +69,7 @@ ALTER DOMAIN name These forms change whether a domain is marked to allow NULL - values or to reject NULL values. You can only SET NOT NULL + values or to reject NULL values. You can only SET NOT NULL when the columns using the domain contain no null values. @@ -88,7 +88,7 @@ ALTER DOMAIN name valid using ALTER DOMAIN ... VALIDATE CONSTRAINT. 
Newly inserted or updated rows are always checked against all constraints, even those marked NOT VALID. - NOT VALID is only accepted for CHECK constraints. + NOT VALID is only accepted for CHECK constraints. @@ -118,7 +118,7 @@ ALTER DOMAIN name This form validates a constraint previously added as - NOT VALID, that is, verify that all data in columns using the + NOT VALID, that is, verify that all data in columns using the domain satisfy the specified constraint. @@ -154,7 +154,7 @@ ALTER DOMAIN name - You must own the domain to use ALTER DOMAIN. + You must own the domain to use ALTER DOMAIN. To change the schema of a domain, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new @@ -273,8 +273,8 @@ ALTER DOMAIN name Notes - Currently, ALTER DOMAIN ADD CONSTRAINT, ALTER - DOMAIN VALIDATE CONSTRAINT, and ALTER DOMAIN SET NOT NULL + Currently, ALTER DOMAIN ADD CONSTRAINT, ALTER + DOMAIN VALIDATE CONSTRAINT, and ALTER DOMAIN SET NOT NULL will fail if the validated named domain or any derived domain is used within a composite-type column of any table in the database. They should eventually be improved to be @@ -330,10 +330,10 @@ ALTER DOMAIN zipcode SET SCHEMA customers; ALTER DOMAIN conforms to the SQL - standard, except for the OWNER, RENAME, SET SCHEMA, and - VALIDATE CONSTRAINT variants, which are - PostgreSQL extensions. The NOT VALID - clause of the ADD CONSTRAINT variant is also a + standard, except for the OWNER, RENAME, SET SCHEMA, and + VALIDATE CONSTRAINT variants, which are + PostgreSQL extensions. The NOT VALID + clause of the ADD CONSTRAINT variant is also a PostgreSQL extension. diff --git a/doc/src/sgml/ref/alter_extension.sgml b/doc/src/sgml/ref/alter_extension.sgml index ae84e98e49..c6c831fa30 100644 --- a/doc/src/sgml/ref/alter_extension.sgml +++ b/doc/src/sgml/ref/alter_extension.sgml @@ -89,7 +89,7 @@ ALTER EXTENSION name DROP This form moves the extension's objects into another schema. The - extension has to be relocatable for this command to + extension has to be relocatable for this command to succeed. @@ -125,7 +125,7 @@ ALTER EXTENSION name DROP You must own the extension to use ALTER EXTENSION. - The ADD/DROP forms require ownership of the + The ADD/DROP forms require ownership of the added/dropped object as well. @@ -150,7 +150,7 @@ ALTER EXTENSION name DROP The desired new version of the extension. This can be written as either an identifier or a string literal. If not specified, - ALTER EXTENSION UPDATE attempts to update to whatever is + ALTER EXTENSION UPDATE attempts to update to whatever is shown as the default version in the extension's control file. @@ -205,14 +205,14 @@ ALTER EXTENSION name DROP The mode of a function or aggregate - argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. + argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that ALTER EXTENSION does not actually pay - any attention to OUT arguments, since only the input + any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. + So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. @@ -246,7 +246,7 @@ ALTER EXTENSION name DROP The data type(s) of the operator's arguments (optionally - schema-qualified). Write NONE for the missing argument + schema-qualified). Write NONE for the missing argument of a prefix or postfix operator. 
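For instance, the update and relocation forms described above might be used as follows (the target version and schema are hypothetical for any given installation):

    ALTER EXTENSION hstore UPDATE TO '1.4';    -- omit TO to take the control file's default version
    ALTER EXTENSION hstore SET SCHEMA utils;   -- succeeds only if the extension is relocatable
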
@@ -314,7 +314,7 @@ ALTER EXTENSION hstore ADD FUNCTION populate_record(anyelement, hstore); Compatibility - ALTER EXTENSION is a PostgreSQL + ALTER EXTENSION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml index 9c5b84fe64..1c0a26de6b 100644 --- a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml @@ -93,11 +93,11 @@ ALTER FOREIGN DATA WRAPPER name REN Note that it is possible that pre-existing options of the foreign-data wrapper, or of dependent servers, user mappings, or foreign tables, are - invalid according to the new validator. PostgreSQL does + invalid according to the new validator. PostgreSQL does not check for this. It is up to the user to make sure that these options are correct before using the modified foreign-data wrapper. However, any options specified in this ALTER FOREIGN DATA - WRAPPER command will be checked using the new validator. + WRAPPER command will be checked using the new validator. @@ -117,8 +117,8 @@ ALTER FOREIGN DATA WRAPPER name REN Change options for the foreign-data - wrapper. ADD, SET, and DROP - specify the action to be performed. ADD is assumed + wrapper. ADD, SET, and DROP + specify the action to be performed. ADD is assumed if no operation is explicitly specified. Option names must be unique; names and values are also validated using the foreign data wrapper's validator function, if any. @@ -150,16 +150,16 @@ ALTER FOREIGN DATA WRAPPER name REN Examples - Change a foreign-data wrapper dbi, add - option foo, drop bar: + Change a foreign-data wrapper dbi, add + option foo, drop bar: ALTER FOREIGN DATA WRAPPER dbi OPTIONS (ADD foo '1', DROP 'bar'); - Change the foreign-data wrapper dbi validator - to bob.myvalidator: + Change the foreign-data wrapper dbi validator + to bob.myvalidator: ALTER FOREIGN DATA WRAPPER dbi VALIDATOR bob.myvalidator; @@ -171,7 +171,7 @@ ALTER FOREIGN DATA WRAPPER dbi VALIDATOR bob.myvalidator; ALTER FOREIGN DATA WRAPPER conforms to ISO/IEC 9075-9 (SQL/MED), except that the HANDLER, - VALIDATOR, OWNER TO, and RENAME + VALIDATOR, OWNER TO, and RENAME clauses are extensions. diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml index cb4e7044fb..44d981a5bd 100644 --- a/doc/src/sgml/ref/alter_foreign_table.sgml +++ b/doc/src/sgml/ref/alter_foreign_table.sgml @@ -85,7 +85,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form drops a column from a foreign table. - You will need to say CASCADE if + You will need to say CASCADE if anything outside the table depends on the column; for example, views. If IF EXISTS is specified and the column @@ -101,7 +101,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form changes the type of a column of a foreign table. Again, this has no effect on any underlying storage: this action simply - changes the type that PostgreSQL believes the column to + changes the type that PostgreSQL believes the column to have. @@ -113,7 +113,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name These forms set or remove the default value for a column. Default values only apply in subsequent INSERT - or UPDATE commands; they do not cause rows already in the + or UPDATE commands; they do not cause rows already in the table to change. @@ -174,7 +174,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form adds a new constraint to a foreign table, using the same syntax as . - Currently only CHECK constraints are supported. 
+ Currently only CHECK constraints are supported. @@ -183,7 +183,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name.) - If the constraint is marked NOT VALID, then it isn't + If the constraint is marked NOT VALID, then it isn't assumed to hold, but is only recorded for possible future use. @@ -235,9 +235,9 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - Note that this is not equivalent to ADD COLUMN oid oid; + Note that this is not equivalent to ADD COLUMN oid oid; that would add a normal column that happened to be named - oid, not a system column. + oid, not a system column. @@ -292,8 +292,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name Change options for the foreign table or one of its columns. - ADD, SET, and DROP - specify the action to be performed. ADD is assumed + ADD, SET, and DROP + specify the action to be performed. ADD is assumed if no operation is explicitly specified. Duplicate option names are not allowed (although it's OK for a table option and a column option to have the same name). Option names and values are also validated using the @@ -325,7 +325,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - All the actions except RENAME and SET SCHEMA + All the actions except RENAME and SET SCHEMA can be combined into a list of multiple alterations to apply in parallel. For example, it is possible to add several columns and/or alter the type of several @@ -333,13 +333,13 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - If the command is written as ALTER FOREIGN TABLE IF EXISTS ... + If the command is written as ALTER FOREIGN TABLE IF EXISTS ... and the foreign table does not exist, no error is thrown. A notice is issued in this case. - You must own the table to use ALTER FOREIGN TABLE. + You must own the table to use ALTER FOREIGN TABLE. To change the schema of a foreign table, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new @@ -362,10 +362,10 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name The name (possibly schema-qualified) of an existing foreign table to - alter. If ONLY is specified before the table name, only - that table is altered. If ONLY is not specified, the table + alter. If ONLY is specified before the table name, only + that table is altered. If ONLY is not specified, the table and all its descendant tables (if any) are altered. Optionally, - * can be specified after the table name to explicitly + * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -518,9 +518,9 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name Consistency with the foreign server is not checked when a column is added or removed with ADD COLUMN or - DROP COLUMN, a NOT NULL - or CHECK constraint is added, or a column type is changed - with SET DATA TYPE. It is the user's responsibility to ensure + DROP COLUMN, a NOT NULL + or CHECK constraint is added, or a column type is changed + with SET DATA TYPE. It is the user's responsibility to ensure that the table definition matches the remote side. @@ -552,16 +552,16 @@ ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'v Compatibility - The forms ADD, DROP, + The forms ADD, DROP, and SET DATA TYPE conform with the SQL standard. The other forms are PostgreSQL extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single - ALTER FOREIGN TABLE command is an extension. + ALTER FOREIGN TABLE command is an extension. 
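A sketch of several manipulations combined into a single ALTER FOREIGN TABLE, as permitted by the extension just mentioned (the table and columns are hypothetical):

    ALTER FOREIGN TABLE remote_orders
        ADD COLUMN note text,
        ALTER COLUMN status SET NOT NULL;
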
- ALTER FOREIGN TABLE DROP COLUMN can be used to drop the only + ALTER FOREIGN TABLE DROP COLUMN can be used to drop the only column of a foreign table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column foreign tables. diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml index 8d9fec6005..cdecf631b1 100644 --- a/doc/src/sgml/ref/alter_function.sgml +++ b/doc/src/sgml/ref/alter_function.sgml @@ -56,8 +56,8 @@ ALTER FUNCTION name [ ( [ [ - The mode of an argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. + The mode of an argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that ALTER FUNCTION does not actually pay - any attention to OUT arguments, since only the input + any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. + So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. @@ -260,8 +260,8 @@ ALTER FUNCTION name [ ( [ [ group_name RENAME TO The first two variants add users to a group or remove them from a group. - (Any role can play the part of either a user or a - group for this purpose.) These variants are effectively + (Any role can play the part of either a user or a + group for this purpose.) These variants are effectively equivalent to granting or revoking membership in the role named as the - group; so the preferred way to do this is to use + group; so the preferred way to do this is to use or . @@ -79,7 +79,7 @@ ALTER GROUP group_name RENAME TO Users (roles) that are to be added to or removed from the group. - The users must already exist; ALTER GROUP does not + The users must already exist; ALTER GROUP does not create or drop users. diff --git a/doc/src/sgml/ref/alter_index.sgml b/doc/src/sgml/ref/alter_index.sgml index 4c777f8675..30e399e62c 100644 --- a/doc/src/sgml/ref/alter_index.sgml +++ b/doc/src/sgml/ref/alter_index.sgml @@ -106,7 +106,7 @@ ALTER INDEX ALL IN TABLESPACE name This form resets one or more index-method-specific storage parameters to - their defaults. As with SET, a REINDEX + their defaults. As with SET, a REINDEX might be needed to update the index entirely. @@ -226,12 +226,12 @@ ALTER INDEX ALL IN TABLESPACE name These operations are also possible using . - ALTER INDEX is in fact just an alias for the forms - of ALTER TABLE that apply to indexes. + ALTER INDEX is in fact just an alias for the forms + of ALTER TABLE that apply to indexes. - There was formerly an ALTER INDEX OWNER variant, but + There was formerly an ALTER INDEX OWNER variant, but this is now ignored (with a warning). An index cannot have an owner different from its table's owner. Changing the table's owner automatically changes the index as well. @@ -280,7 +280,7 @@ ALTER INDEX coord_idx ALTER COLUMN 3 SET STATISTICS 1000; Compatibility - ALTER INDEX is a PostgreSQL + ALTER INDEX is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/alter_materialized_view.sgml b/doc/src/sgml/ref/alter_materialized_view.sgml index a1cced1581..eaea819744 100644 --- a/doc/src/sgml/ref/alter_materialized_view.sgml +++ b/doc/src/sgml/ref/alter_materialized_view.sgml @@ -58,8 +58,8 @@ ALTER MATERIALIZED VIEW ALL IN TABLESPACE name You must own the materialized view to use ALTER MATERIALIZED - VIEW. To change a materialized view's schema, you must also have - CREATE privilege on the new schema. + VIEW. 
To change a materialized view's schema, you must also have + CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the materialized view's schema. (These restrictions enforce that altering diff --git a/doc/src/sgml/ref/alter_opclass.sgml b/doc/src/sgml/ref/alter_opclass.sgml index 58de603aa4..834f3e4231 100644 --- a/doc/src/sgml/ref/alter_opclass.sgml +++ b/doc/src/sgml/ref/alter_opclass.sgml @@ -41,7 +41,7 @@ ALTER OPERATOR CLASS name USING When operators and support functions are added to a family with ALTER OPERATOR FAMILY, they are not part of any - specific operator class within the family, but are just loose + specific operator class within the family, but are just loose within the family. This indicates that these operators and functions are compatible with the family's semantics, but are not required for correct functioning of any specific index. (Operators and functions @@ -74,7 +74,7 @@ ALTER OPERATOR FAMILY name USING op_type - In an OPERATOR clause, - the operand data type(s) of the operator, or NONE to + In an OPERATOR clause, + the operand data type(s) of the operator, or NONE to signify a left-unary or right-unary operator. Unlike the comparable - syntax in CREATE OPERATOR CLASS, the operand data types + syntax in CREATE OPERATOR CLASS, the operand data types must always be specified. - In an ADD FUNCTION clause, the operand data type(s) the + In an ADD FUNCTION clause, the operand data type(s) the function is intended to support, if different from the input data type(s) of the function. For B-tree comparison functions and hash functions it is not necessary to specify name USING - If neither FOR SEARCH nor FOR ORDER BY is - specified, FOR SEARCH is the default. + If neither FOR SEARCH nor FOR ORDER BY is + specified, FOR SEARCH is the default. @@ -240,7 +240,7 @@ ALTER OPERATOR FAMILY name USING Notes - Notice that the DROP syntax only specifies the slot + Notice that the DROP syntax only specifies the slot in the operator family, by strategy or support number and input data type(s). The name of the operator or function occupying the slot is not - mentioned. Also, for DROP FUNCTION the type(s) to specify + mentioned. Also, for DROP FUNCTION the type(s) to specify are the input data type(s) the function is intended to support; for GiST, SP-GiST and GIN indexes this might have nothing to do with the actual input argument types of the function. @@ -274,9 +274,9 @@ ALTER OPERATOR FAMILY name USING The following example command adds cross-data-type operators and support functions to an operator family that already contains B-tree - operator classes for data types int4 and int2. + operator classes for data types int4 and int2. diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml index 7392e6f3de..801404e0cf 100644 --- a/doc/src/sgml/ref/alter_publication.sgml +++ b/doc/src/sgml/ref/alter_publication.sgml @@ -87,10 +87,10 @@ ALTER PUBLICATION name RENAME TO table_name - Name of an existing table. If ONLY is specified before the - table name, only that table is affected. If ONLY is not + Name of an existing table. If ONLY is specified before the + table name, only that table is affected. If ONLY is not specified, the table and all its descendant tables (if any) are - affected. Optionally, * can be specified after the table + affected. 
Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -147,7 +147,7 @@ ALTER PUBLICATION mypublication ADD TABLE users, departments; Compatibility - ALTER PUBLICATION is a PostgreSQL + ALTER PUBLICATION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml index 607b25962f..e30ca10454 100644 --- a/doc/src/sgml/ref/alter_role.sgml +++ b/doc/src/sgml/ref/alter_role.sgml @@ -69,7 +69,7 @@ ALTER ROLE { role_specification | A for that.) Attributes not mentioned in the command retain their previous settings. Database superusers can change any of these settings for any role. - Roles having CREATEROLE privilege can change any of these + Roles having CREATEROLE privilege can change any of these settings, but only for non-superuser and non-replication roles. Ordinary roles can only change their own password. @@ -77,13 +77,13 @@ ALTER ROLE { role_specification | A The second variant changes the name of the role. Database superusers can rename any role. - Roles having CREATEROLE privilege can rename non-superuser + Roles having CREATEROLE privilege can rename non-superuser roles. The current session user cannot be renamed. (Connect as a different user if you need to do that.) - Because MD5-encrypted passwords use the role name as + Because MD5-encrypted passwords use the role name as cryptographic salt, renaming a role clears its password if the - password is MD5-encrypted. + password is MD5-encrypted. @@ -100,7 +100,7 @@ ALTER ROLE { role_specification | A Whenever the role subsequently starts a new session, the specified value becomes the session default, overriding whatever setting is present in - postgresql.conf or has been received from the postgres + postgresql.conf or has been received from the postgres command line. This only happens at login time; executing or does not cause new @@ -112,7 +112,7 @@ ALTER ROLE { role_specification | A Superusers can change anyone's session defaults. Roles having - CREATEROLE privilege can change defaults for non-superuser + CREATEROLE privilege can change defaults for non-superuser roles. Ordinary roles can only set defaults for themselves. Certain configuration variables cannot be set this way, or can only be set if a superuser issues the command. Only superusers can change a setting @@ -155,8 +155,8 @@ ALTER ROLE { role_specification | A SUPERUSER NOSUPERUSER - CREATEDB - NOCREATEDB + CREATEDB + NOCREATEDB CREATEROLE NOCREATEROLE INHERIT @@ -168,7 +168,7 @@ ALTER ROLE { role_specification | A BYPASSRLS NOBYPASSRLS CONNECTION LIMIT connlimit - [ ENCRYPTED ] PASSWORD password + [ ENCRYPTED ] PASSWORD password VALID UNTIL 'timestamp' @@ -209,7 +209,7 @@ ALTER ROLE { role_specification | A role-specific variable setting is removed, so the role will inherit the system-wide default setting in new sessions. Use RESET ALL to clear all role-specific settings. - SET FROM CURRENT saves the session's current value of + SET FROM CURRENT saves the session's current value of the parameter as the role-specific value. If IN DATABASE is specified, the configuration parameter is set or removed for the given role and database only. 
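A sketch of a role- and database-specific default as described above (the role, database, and value are hypothetical):

    ALTER ROLE webuser IN DATABASE sales SET work_mem = '32MB';
    ALTER ROLE webuser IN DATABASE sales RESET work_mem;  -- back to the database-independent default
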
@@ -288,7 +288,7 @@ ALTER ROLE davide WITH PASSWORD NULL; Change a password expiration date, specifying that the password should expire at midday on 4th May 2015 using - the time zone which is one hour ahead of UTC: + the time zone which is one hour ahead of UTC: ALTER ROLE chris VALID UNTIL 'May 4 12:00:00 2015 +1'; diff --git a/doc/src/sgml/ref/alter_schema.sgml b/doc/src/sgml/ref/alter_schema.sgml index dbc5c2d45f..2ca406b914 100644 --- a/doc/src/sgml/ref/alter_schema.sgml +++ b/doc/src/sgml/ref/alter_schema.sgml @@ -34,7 +34,7 @@ ALTER SCHEMA name OWNER TO { new_owner - You must own the schema to use ALTER SCHEMA. + You must own the schema to use ALTER SCHEMA. To rename a schema you must also have the CREATE privilege for the database. To alter the owner, you must also be a direct or diff --git a/doc/src/sgml/ref/alter_sequence.sgml b/doc/src/sgml/ref/alter_sequence.sgml index c505935fcc..9b8ad36522 100644 --- a/doc/src/sgml/ref/alter_sequence.sgml +++ b/doc/src/sgml/ref/alter_sequence.sgml @@ -47,8 +47,8 @@ ALTER SEQUENCE [ IF EXISTS ] name S - You must own the sequence to use ALTER SEQUENCE. - To change a sequence's schema, you must also have CREATE + You must own the sequence to use ALTER SEQUENCE. + To change a sequence's schema, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on @@ -159,8 +159,8 @@ ALTER SEQUENCE [ IF EXISTS ] name S The optional clause START WITH start changes the recorded start value of the sequence. This has no effect on the - current sequence value; it simply sets the value - that future ALTER SEQUENCE RESTART commands will use. + current sequence value; it simply sets the value + that future ALTER SEQUENCE RESTART commands will use. @@ -172,13 +172,13 @@ ALTER SEQUENCE [ IF EXISTS ] name S The optional clause RESTART [ WITH restart ] changes the current value of the sequence. This is similar to calling the - setval function with is_called = - false: the specified value will be returned by the - next call of nextval. - Writing RESTART with no restart value is equivalent to supplying - the start value that was recorded by CREATE SEQUENCE - or last set by ALTER SEQUENCE START WITH. + setval function with is_called = + false: the specified value will be returned by the + next call of nextval. + Writing RESTART with no restart value is equivalent to supplying + the start value that was recorded by CREATE SEQUENCE + or last set by ALTER SEQUENCE START WITH. @@ -186,7 +186,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S a RESTART operation on a sequence is transactional and blocks concurrent transactions from obtaining numbers from the same sequence. If that's not the desired mode of - operation, setval should be used. + operation, setval should be used. @@ -250,7 +250,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S table must have the same owner and be in the same schema as the sequence. Specifying OWNED BY NONE removes any existing - association, making the sequence free-standing. + association, making the sequence free-standing. @@ -291,7 +291,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S ALTER SEQUENCE will not immediately affect - nextval results in backends, + nextval results in backends, other than the current one, that have preallocated (cached) sequence values. They will use up all cached values prior to noticing the changed sequence generation parameters. 
The current backend will be affected @@ -299,7 +299,7 @@ ALTER SEQUENCE [ IF EXISTS ] name S - ALTER SEQUENCE does not affect the currval + ALTER SEQUENCE does not affect the currval status for the sequence. (Before PostgreSQL 8.3, it sometimes did.) @@ -332,8 +332,8 @@ ALTER SEQUENCE serial RESTART WITH 105; ALTER SEQUENCE conforms to the SQL - standard, except for the AS, START WITH, - OWNED BY, OWNER TO, RENAME TO, and + standard, except for the AS, START WITH, + OWNED BY, OWNER TO, RENAME TO, and SET SCHEMA clauses, which are PostgreSQL extensions. diff --git a/doc/src/sgml/ref/alter_server.sgml b/doc/src/sgml/ref/alter_server.sgml index 7f5def30a4..05e11f5ef2 100644 --- a/doc/src/sgml/ref/alter_server.sgml +++ b/doc/src/sgml/ref/alter_server.sgml @@ -42,7 +42,7 @@ ALTER SERVER name RENAME TO USAGE privilege on the server's foreign-data + have USAGE privilege on the server's foreign-data wrapper. (Note that superusers satisfy all these criteria automatically.) @@ -75,8 +75,8 @@ ALTER SERVER name RENAME TO Change options for the - server. ADD, SET, and DROP - specify the action to be performed. ADD is assumed + server. ADD, SET, and DROP + specify the action to be performed. ADD is assumed if no operation is explicitly specified. Option names must be unique; names and values are also validated using the server's foreign-data wrapper library. @@ -108,15 +108,15 @@ ALTER SERVER name RENAME TO Examples - Alter server foo, add connection options: + Alter server foo, add connection options: ALTER SERVER foo OPTIONS (host 'foo', dbname 'foodb'); - Alter server foo, change version, - change host option: + Alter server foo, change version, + change host option: ALTER SERVER foo VERSION '8.4' OPTIONS (SET host 'baz'); diff --git a/doc/src/sgml/ref/alter_statistics.sgml b/doc/src/sgml/ref/alter_statistics.sgml index db4f2f0d52..87acb879b0 100644 --- a/doc/src/sgml/ref/alter_statistics.sgml +++ b/doc/src/sgml/ref/alter_statistics.sgml @@ -39,9 +39,9 @@ ALTER STATISTICS name SET SCHEMA - You must own the statistics object to use ALTER STATISTICS. + You must own the statistics object to use ALTER STATISTICS. To change a statistics object's schema, you must also - have CREATE privilege on the new schema. + have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the statistics object's schema. (These restrictions enforce that altering diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml index 44c0b35069..b76a21f654 100644 --- a/doc/src/sgml/ref/alter_subscription.sgml +++ b/doc/src/sgml/ref/alter_subscription.sgml @@ -42,7 +42,7 @@ ALTER SUBSCRIPTION name RENAME TO < - You must own the subscription to use ALTER SUBSCRIPTION. + You must own the subscription to use ALTER SUBSCRIPTION. To alter the owner, you must also be a direct or indirect member of the new owning role. The new owner has to be a superuser. (Currently, all subscription owners must be superusers, so the owner checks @@ -211,7 +211,7 @@ ALTER SUBSCRIPTION mysub DISABLE; Compatibility - ALTER SUBSCRIPTION is a PostgreSQL + ALTER SUBSCRIPTION is a PostgreSQL extension. 
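Tying together the ALTER SEQUENCE notes above, the transactional and non-transactional ways to reposition the example sequence might be contrasted like this:

    ALTER SEQUENCE serial RESTART WITH 105;  -- transactional; blocks concurrent callers of nextval
    SELECT setval('serial', 105, false);     -- immediate; the next nextval('serial') returns 105
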
diff --git a/doc/src/sgml/ref/alter_system.sgml b/doc/src/sgml/ref/alter_system.sgml index e3a4af4041..b8ef117b7d 100644 --- a/doc/src/sgml/ref/alter_system.sgml +++ b/doc/src/sgml/ref/alter_system.sgml @@ -50,8 +50,8 @@ ALTER SYSTEM RESET ALL the next server configuration reload, or after the next server restart in the case of parameters that can only be changed at server start. A server configuration reload can be commanded by calling the SQL - function pg_reload_conf(), running pg_ctl reload, - or sending a SIGHUP signal to the main server process. + function pg_reload_conf(), running pg_ctl reload, + or sending a SIGHUP signal to the main server process. @@ -95,8 +95,8 @@ ALTER SYSTEM RESET ALL This command can't be used to set , - nor parameters that are not allowed in postgresql.conf - (e.g., preset options). + nor parameters that are not allowed in postgresql.conf + (e.g., preset options). @@ -108,7 +108,7 @@ ALTER SYSTEM RESET ALL Examples - Set the wal_level: + Set the wal_level: ALTER SYSTEM SET wal_level = replica; @@ -116,7 +116,7 @@ ALTER SYSTEM SET wal_level = replica; Undo that, restoring whatever setting was effective - in postgresql.conf: + in postgresql.conf: ALTER SYSTEM RESET wal_level; diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 0559f80549..68393d70b4 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -126,7 +126,7 @@ ALTER TABLE [ IF EXISTS ] name Multivariate statistics referencing the dropped column will also be removed if the removal of the column would cause the statistics to contain data for only a single column. - You will need to say CASCADE if anything outside the table + You will need to say CASCADE if anything outside the table depends on the column, for example, foreign key references or views. If IF EXISTS is specified and the column does not exist, no error is thrown. In this case a notice @@ -162,7 +162,7 @@ ALTER TABLE [ IF EXISTS ] name These forms set or remove the default value for a column. Default values only apply in subsequent INSERT - or UPDATE commands; they do not cause rows already in the + or UPDATE commands; they do not cause rows already in the table to change. @@ -174,7 +174,7 @@ ALTER TABLE [ IF EXISTS ] name These forms change whether a column is marked to allow null values or to reject null values. You can only use SET - NOT NULL when the column contains no null values. + NOT NULL when the column contains no null values. @@ -182,7 +182,7 @@ ALTER TABLE [ IF EXISTS ] name on a column if it is marked NOT NULL in the parent table. To drop the NOT NULL constraint from all the partitions, perform DROP NOT NULL on the parent - table. Even if there is no NOT NULL constraint on the + table. Even if there is no NOT NULL constraint on the parent, such a constraint can still be added to individual partitions, if desired; that is, the children can disallow nulls even if the parent allows them, but not the other way around. @@ -249,17 +249,17 @@ ALTER TABLE [ IF EXISTS ] name This form sets or resets per-attribute options. Currently, the only - defined per-attribute options are n_distinct and - n_distinct_inherited, which override the + defined per-attribute options are n_distinct and + n_distinct_inherited, which override the number-of-distinct-values estimates made by subsequent - operations. n_distinct affects the statistics for the table - itself, while n_distinct_inherited affects the statistics + operations. 
n_distinct affects the statistics for the table + itself, while n_distinct_inherited affects the statistics gathered for the table plus its inheritance children. When set to a - positive value, ANALYZE will assume that the column contains + positive value, ANALYZE will assume that the column contains exactly the specified number of distinct nonnull values. When set to a negative value, which must be greater - than or equal to -1, ANALYZE will assume that the number of + than or equal to -1, ANALYZE will assume that the number of distinct nonnull values in the column is linear in the size of the table; the exact count is to be computed by multiplying the estimated table size by the absolute value of the given number. For example, @@ -290,7 +290,7 @@ ALTER TABLE [ IF EXISTS ] name This form sets the storage mode for a column. This controls whether this - column is held inline or in a secondary TOAST table, and + column is held inline or in a secondary TOAST table, and whether the data should be compressed or not. PLAIN must be used for fixed-length values such as integer and is @@ -302,7 +302,7 @@ ALTER TABLE [ IF EXISTS ] name Use of EXTERNAL will make substring operations on very large text and bytea values run faster, at the penalty of increased storage space. Note that - SET STORAGE doesn't itself change anything in the table, + SET STORAGE doesn't itself change anything in the table, it just sets the strategy to be pursued during future table updates. See for more information. @@ -335,7 +335,7 @@ ALTER TABLE [ IF EXISTS ] name ADD table_constraint_using_index - This form adds a new PRIMARY KEY or UNIQUE + This form adds a new PRIMARY KEY or UNIQUE constraint to a table based on an existing unique index. All the columns of the index will be included in the constraint. @@ -344,14 +344,14 @@ ALTER TABLE [ IF EXISTS ] name The index cannot have expression columns nor be a partial index. Also, it must be a b-tree index with default sort ordering. These restrictions ensure that the index is equivalent to one that would be - built by a regular ADD PRIMARY KEY or ADD UNIQUE + built by a regular ADD PRIMARY KEY or ADD UNIQUE command. - If PRIMARY KEY is specified, and the index's columns are not - already marked NOT NULL, then this command will attempt to - do ALTER COLUMN SET NOT NULL against each such column. + If PRIMARY KEY is specified, and the index's columns are not + already marked NOT NULL, then this command will attempt to + do ALTER COLUMN SET NOT NULL against each such column. That requires a full table scan to verify the column(s) contain no nulls. In all other cases, this is a fast operation. @@ -363,9 +363,9 @@ ALTER TABLE [ IF EXISTS ] name - After this command is executed, the index is owned by the + After this command is executed, the index is owned by the constraint, in the same way as if the index had been built by - a regular ADD PRIMARY KEY or ADD UNIQUE + a regular ADD PRIMARY KEY or ADD UNIQUE command. In particular, dropping the constraint will make the index disappear too. @@ -375,7 +375,7 @@ ALTER TABLE [ IF EXISTS ] name Adding a constraint using an existing index can be helpful in situations where a new constraint needs to be added without blocking table updates for a long time. To do that, create the index using - CREATE INDEX CONCURRENTLY, and then install it as an + CREATE INDEX CONCURRENTLY, and then install it as an official constraint using this syntax. See the example below. @@ -447,9 +447,9 @@ ALTER TABLE [ IF EXISTS ] name triggers are not executed. 
The trigger firing mechanism is also affected by the configuration variable . Simply enabled - triggers will fire when the replication role is origin - (the default) or local. Triggers configured as ENABLE - REPLICA will only fire if the session is in replica + triggers will fire when the replication role is origin + (the default) or local. Triggers configured as ENABLE + REPLICA will only fire if the session is in replica mode, and triggers configured as ENABLE ALWAYS will fire regardless of the current replication mode. @@ -542,9 +542,9 @@ ALTER TABLE [ IF EXISTS ] name - Note that this is not equivalent to ADD COLUMN oid oid; + Note that this is not equivalent to ADD COLUMN oid oid; that would add a normal column that happened to be named - oid, not a system column. + oid, not a system column. @@ -609,8 +609,8 @@ ALTER TABLE [ IF EXISTS ] name will not be modified immediately by this command; depending on the parameter you might need to rewrite the table to get the desired effects. That can be done with VACUUM - FULL, or one of the forms - of ALTER TABLE that forces a table rewrite. + FULL, or one of the forms + of ALTER TABLE that forces a table rewrite. For planner related parameters, changes will take effect from the next time the table is locked so currently executing queries will not be affected. @@ -620,18 +620,18 @@ ALTER TABLE [ IF EXISTS ] name SHARE UPDATE EXCLUSIVE lock will be taken for fillfactor and autovacuum storage parameters, as well as the following planner related parameters: - effective_io_concurrency, parallel_workers, seq_page_cost, - random_page_cost, n_distinct and n_distinct_inherited. + effective_io_concurrency, parallel_workers, seq_page_cost, + random_page_cost, n_distinct and n_distinct_inherited. - While CREATE TABLE allows OIDS to be specified + While CREATE TABLE allows OIDS to be specified in the WITH (storage_parameter) syntax, - ALTER TABLE does not treat OIDS as a - storage parameter. Instead use the SET WITH OIDS - and SET WITHOUT OIDS forms to change OID status. + class="parameter">storage_parameter) syntax, + ALTER TABLE does not treat OIDS as a + storage parameter. Instead use the SET WITH OIDS + and SET WITHOUT OIDS forms to change OID status. @@ -642,7 +642,7 @@ ALTER TABLE [ IF EXISTS ] name This form resets one or more storage parameters to their - defaults. As with SET, a table rewrite might be + defaults. As with SET, a table rewrite might be needed to update the table entirely. @@ -693,11 +693,11 @@ ALTER TABLE [ IF EXISTS ] name This form links the table to a composite type as though CREATE - TABLE OF had formed it. The table's list of column names and types + TABLE OF had formed it. The table's list of column names and types must precisely match that of the composite type; the presence of - an oid system column is permitted to differ. The table must + an oid system column is permitted to differ. The table must not inherit from any other table. These restrictions ensure - that CREATE TABLE OF would permit an equivalent table + that CREATE TABLE OF would permit an equivalent table definition. @@ -728,13 +728,13 @@ ALTER TABLE [ IF EXISTS ] name This form changes the information which is written to the write-ahead log to identify rows which are updated or deleted. This option has no effect - except when logical replication is in use. DEFAULT + except when logical replication is in use. DEFAULT (the default for non-system tables) records the - old values of the columns of the primary key, if any. 
USING INDEX + old values of the columns of the primary key, if any. USING INDEX records the old values of the columns covered by the named index, which must be unique, not partial, not deferrable, and include only columns marked - NOT NULL. FULL records the old values of all columns - in the row. NOTHING records no information about the old row. + NOT NULL. FULL records the old values of all columns + in the row. NOTHING records no information about the old row. (This is the default for system tables.) In all cases, no old values are logged unless at least one of the columns that would be logged differs between the old and new versions of the row. @@ -853,7 +853,7 @@ ALTER TABLE [ IF EXISTS ] name - You must own the table to use ALTER TABLE. + You must own the table to use ALTER TABLE. To change the schema or tablespace of a table, you must also have CREATE privilege on the new schema or tablespace. To add the table as a new child of a parent table, you must own the parent @@ -890,10 +890,10 @@ ALTER TABLE [ IF EXISTS ] name The name (optionally schema-qualified) of an existing table to - alter. If ONLY is specified before the table name, only - that table is altered. If ONLY is not specified, the table + alter. If ONLY is specified before the table name, only + that table is altered. If ONLY is not specified, the table and all its descendant tables (if any) are altered. Optionally, - * can be specified after the table name to explicitly + * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -1106,28 +1106,28 @@ ALTER TABLE [ IF EXISTS ] name When a column is added with ADD COLUMN, all existing rows in the table are initialized with the column's default value - (NULL if no DEFAULT clause is specified). - If there is no DEFAULT clause, this is merely a metadata + (NULL if no DEFAULT clause is specified). + If there is no DEFAULT clause, this is merely a metadata change and does not require any immediate update of the table's data; the added NULL values are supplied on readout, instead. - Adding a column with a DEFAULT clause or changing the type of + Adding a column with a DEFAULT clause or changing the type of an existing column will require the entire table and its indexes to be rewritten. As an exception when changing the type of an existing column, - if the USING clause does not change the column + if the USING clause does not change the column contents and the old type is either binary coercible to the new type or an unconstrained domain over the new type, a table rewrite is not needed; but any indexes on the affected columns must still be rebuilt. Adding or - removing a system oid column also requires rewriting the entire + removing a system oid column also requires rewriting the entire table. Table and/or index rebuilds may take a significant amount of time for a large table; and will temporarily require as much as double the disk space. - Adding a CHECK or NOT NULL constraint requires + Adding a CHECK or NOT NULL constraint requires scanning the table to verify that existing rows meet the constraint, but does not require a table rewrite. @@ -1139,7 +1139,7 @@ ALTER TABLE [ IF EXISTS ] name The main reason for providing the option to specify multiple changes - in a single ALTER TABLE is that multiple table scans or + in a single ALTER TABLE is that multiple table scans or rewrites can thereby be combined into a single pass over the table. 
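For example, two alterations that would each scan or rewrite the table can be combined so only one pass over the table is needed (the table and columns are hypothetical):

    ALTER TABLE distributors
        ALTER COLUMN address TYPE varchar(80),
        ALTER COLUMN name SET NOT NULL;
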
@@ -1151,37 +1151,37 @@ ALTER TABLE [ IF EXISTS ] name reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated. (These statements do - not apply when dropping the system oid column; that is done + not apply when dropping the system oid column; that is done with an immediate rewrite.) To force immediate reclamation of space occupied by a dropped column, - you can execute one of the forms of ALTER TABLE that + you can execute one of the forms of ALTER TABLE that performs a rewrite of the whole table. This results in reconstructing each row with the dropped column replaced by a null value. - The rewriting forms of ALTER TABLE are not MVCC-safe. + The rewriting forms of ALTER TABLE are not MVCC-safe. After a table rewrite, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the rewrite occurred. See for more details. - The USING option of SET DATA TYPE can actually + The USING option of SET DATA TYPE can actually specify any expression involving the old values of the row; that is, it can refer to other columns as well as the one being converted. This allows - very general conversions to be done with the SET DATA TYPE + very general conversions to be done with the SET DATA TYPE syntax. Because of this flexibility, the USING expression is not applied to the column's default value (if any); the result might not be a constant expression as required for a default. This means that when there is no implicit or assignment cast from old to - new type, SET DATA TYPE might fail to convert the default even + new type, SET DATA TYPE might fail to convert the default even though a USING clause is supplied. In such cases, - drop the default with DROP DEFAULT, perform the ALTER - TYPE, and then use SET DEFAULT to add a suitable new + drop the default with DROP DEFAULT, perform the ALTER + TYPE, and then use SET DEFAULT to add a suitable new default. Similar considerations apply to indexes and constraints involving the column. @@ -1216,11 +1216,11 @@ ALTER TABLE [ IF EXISTS ] name The actions for identity columns (ADD GENERATED, SET etc., DROP IDENTITY), as well as the actions - TRIGGER, CLUSTER, OWNER, - and TABLESPACE never recurse to descendant tables; - that is, they always act as though ONLY were specified. - Adding a constraint recurses only for CHECK constraints - that are not marked NO INHERIT. + TRIGGER, CLUSTER, OWNER, + and TABLESPACE never recurse to descendant tables; + that is, they always act as though ONLY were specified. + Adding a constraint recurses only for CHECK constraints + that are not marked NO INHERIT. @@ -1434,17 +1434,17 @@ ALTER TABLE measurement The forms ADD (without USING INDEX), - DROP [COLUMN], DROP IDENTITY, RESTART, - SET DEFAULT, SET DATA TYPE (without USING), + DROP [COLUMN], DROP IDENTITY, RESTART, + SET DEFAULT, SET DATA TYPE (without USING), SET GENERATED, and SET sequence_option conform with the SQL standard. The other forms are PostgreSQL extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single - ALTER TABLE command is an extension. + ALTER TABLE command is an extension. - ALTER TABLE DROP COLUMN can be used to drop the only + ALTER TABLE DROP COLUMN can be used to drop the only column of a table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column tables. 
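A sketch of the drop-default/convert/re-add sequence recommended above, assuming a hypothetical text column being converted to integer:

    ALTER TABLE mytable ALTER COLUMN code DROP DEFAULT;
    ALTER TABLE mytable ALTER COLUMN code TYPE integer USING (code::integer);
    ALTER TABLE mytable ALTER COLUMN code SET DEFAULT 0;
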
diff --git a/doc/src/sgml/ref/alter_tablespace.sgml b/doc/src/sgml/ref/alter_tablespace.sgml index 4542bd90a2..def554bfb3 100644 --- a/doc/src/sgml/ref/alter_tablespace.sgml +++ b/doc/src/sgml/ref/alter_tablespace.sgml @@ -83,8 +83,8 @@ ALTER TABLESPACE name RESET ( name ON The ability to temporarily enable or disable a trigger is provided by , not by - ALTER TRIGGER, because ALTER TRIGGER has no + ALTER TRIGGER, because ALTER TRIGGER has no convenient way to express the option of enabling or disabling all of a table's triggers at once. @@ -117,7 +117,7 @@ ALTER TRIGGER emp_stamp ON emp DEPENDS ON EXTENSION emplib; Compatibility - ALTER TRIGGER is a PostgreSQL + ALTER TRIGGER is a PostgreSQL extension of the SQL standard. diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml index 72a719b862..b44aac9bf5 100644 --- a/doc/src/sgml/ref/alter_tsconfig.sgml +++ b/doc/src/sgml/ref/alter_tsconfig.sgml @@ -49,7 +49,7 @@ ALTER TEXT SEARCH CONFIGURATION name SET SCHEMA You must be the owner of the configuration to use - ALTER TEXT SEARCH CONFIGURATION. + ALTER TEXT SEARCH CONFIGURATION. @@ -136,20 +136,20 @@ ALTER TEXT SEARCH CONFIGURATION name SET SCHEMA - The ADD MAPPING FOR form installs a list of dictionaries to be + The ADD MAPPING FOR form installs a list of dictionaries to be consulted for the specified token type(s); it is an error if there is already a mapping for any of the token types. - The ALTER MAPPING FOR form does the same, but first removing + The ALTER MAPPING FOR form does the same, but first removing any existing mapping for those token types. - The ALTER MAPPING REPLACE forms substitute ALTER MAPPING REPLACE forms substitute new_dictionary for old_dictionary anywhere the latter appears. - This is done for only the specified token types when FOR + This is done for only the specified token types when FOR appears, or for all mappings of the configuration when it doesn't. - The DROP MAPPING form removes all dictionaries for the + The DROP MAPPING form removes all dictionaries for the specified token type(s), causing tokens of those types to be ignored by the text search configuration. It is an error if there is no mapping - for the token types, unless IF EXISTS appears. + for the token types, unless IF EXISTS appears. @@ -158,9 +158,9 @@ ALTER TEXT SEARCH CONFIGURATION name SET SCHEMA Examples - The following example replaces the english dictionary - with the swedish dictionary anywhere that english - is used within my_config. + The following example replaces the english dictionary + with the swedish dictionary anywhere that english + is used within my_config. diff --git a/doc/src/sgml/ref/alter_tsdictionary.sgml b/doc/src/sgml/ref/alter_tsdictionary.sgml index 7cecabea83..16d76687ab 100644 --- a/doc/src/sgml/ref/alter_tsdictionary.sgml +++ b/doc/src/sgml/ref/alter_tsdictionary.sgml @@ -41,7 +41,7 @@ ALTER TEXT SEARCH DICTIONARY name SET SCHEMA You must be the owner of the dictionary to use - ALTER TEXT SEARCH DICTIONARY. + ALTER TEXT SEARCH DICTIONARY. @@ -126,7 +126,7 @@ ALTER TEXT SEARCH DICTIONARY my_dict ( StopWords = newrussian ); - The following example command changes the language option to dutch, + The following example command changes the language option to dutch, and removes the stopword option entirely. 
@@ -135,7 +135,7 @@ ALTER TEXT SEARCH DICTIONARY my_dict ( language = dutch, StopWords ); - The following example command updates the dictionary's + The following example command updates the dictionary's definition without actually changing anything. @@ -144,7 +144,7 @@ ALTER TEXT SEARCH DICTIONARY my_dict ( dummy ); (The reason this works is that the option removal code doesn't complain if there is no such option.) This trick is useful when changing - configuration files for the dictionary: the ALTER will + configuration files for the dictionary: the ALTER will force existing database sessions to re-read the configuration files, which otherwise they would never do if they had read them earlier. diff --git a/doc/src/sgml/ref/alter_tsparser.sgml b/doc/src/sgml/ref/alter_tsparser.sgml index e2b6060a17..737a507565 100644 --- a/doc/src/sgml/ref/alter_tsparser.sgml +++ b/doc/src/sgml/ref/alter_tsparser.sgml @@ -36,7 +36,7 @@ ALTER TEXT SEARCH PARSER name SET SCHEMA - You must be a superuser to use ALTER TEXT SEARCH PARSER. + You must be a superuser to use ALTER TEXT SEARCH PARSER. diff --git a/doc/src/sgml/ref/alter_tstemplate.sgml b/doc/src/sgml/ref/alter_tstemplate.sgml index e7ae91c0a0..d9a753017b 100644 --- a/doc/src/sgml/ref/alter_tstemplate.sgml +++ b/doc/src/sgml/ref/alter_tstemplate.sgml @@ -36,7 +36,7 @@ ALTER TEXT SEARCH TEMPLATE name SET SCHEMA - You must be a superuser to use ALTER TEXT SEARCH TEMPLATE. + You must be a superuser to use ALTER TEXT SEARCH TEMPLATE. diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml index 6c5201ccb5..75be3187f1 100644 --- a/doc/src/sgml/ref/alter_type.sgml +++ b/doc/src/sgml/ref/alter_type.sgml @@ -147,7 +147,7 @@ ALTER TYPE name RENAME VALUE - You must own the type to use ALTER TYPE. + You must own the type to use ALTER TYPE. To change the schema of a type, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new @@ -290,7 +290,7 @@ ALTER TYPE name RENAME VALUE Notes - ALTER TYPE ... ADD VALUE (the form that adds a new value to an + ALTER TYPE ... ADD VALUE (the form that adds a new value to an enum type) cannot be executed inside a transaction block. @@ -301,7 +301,7 @@ ALTER TYPE name RENAME VALUE wrapped - around since the original creation of the enum type). The slowdown is + around since the original creation of the enum type). The slowdown is usually insignificant; but if it matters, optimal performance can be regained by dropping and recreating the enum type, or by dumping and reloading the database. diff --git a/doc/src/sgml/ref/alter_user_mapping.sgml b/doc/src/sgml/ref/alter_user_mapping.sgml index 5cc49210ed..18271d5199 100644 --- a/doc/src/sgml/ref/alter_user_mapping.sgml +++ b/doc/src/sgml/ref/alter_user_mapping.sgml @@ -38,7 +38,7 @@ ALTER USER MAPPING FOR { user_name The owner of a foreign server can alter user mappings for that server for any user. Also, a user can alter a user mapping for - their own user name if USAGE privilege on the server has + their own user name if USAGE privilege on the server has been granted to the user. @@ -51,9 +51,9 @@ ALTER USER MAPPING FOR { user_name user_name - User name of the mapping. CURRENT_USER - and USER match the name of the current - user. PUBLIC is used to match all present and future + User name of the mapping. CURRENT_USER + and USER match the name of the current + user. PUBLIC is used to match all present and future user names in the system. 
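To illustrate the user_name forms just described, a minimal sketch assuming a hypothetical foreign server named foo: the mapping for the invoking role can be altered without naming the role explicitly.

ALTER USER MAPPING FOR CURRENT_USER SERVER foo OPTIONS (SET password 'secret');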
@@ -74,8 +74,8 @@ ALTER USER MAPPING FOR { user_name Change options for the user mapping. The new options override any previously specified - options. ADD, SET, and DROP - specify the action to be performed. ADD is assumed + options. ADD, SET, and DROP + specify the action to be performed. ADD is assumed if no operation is explicitly specified. Option names must be unique; options are also validated by the server's foreign-data wrapper. @@ -89,7 +89,7 @@ ALTER USER MAPPING FOR { user_name Examples - Change the password for user mapping bob, server foo: + Change the password for user mapping bob, server foo: ALTER USER MAPPING FOR bob SERVER foo OPTIONS (SET password 'public'); diff --git a/doc/src/sgml/ref/alter_view.sgml b/doc/src/sgml/ref/alter_view.sgml index 788eda5d58..e7180b4409 100644 --- a/doc/src/sgml/ref/alter_view.sgml +++ b/doc/src/sgml/ref/alter_view.sgml @@ -37,12 +37,12 @@ ALTER VIEW [ IF EXISTS ] name RESET ALTER VIEW changes various auxiliary properties of a view. (If you want to modify the view's defining query, - use CREATE OR REPLACE VIEW.) + use CREATE OR REPLACE VIEW.) - You must own the view to use ALTER VIEW. - To change a view's schema, you must also have CREATE + You must own the view to use ALTER VIEW. + To change a view's schema, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on @@ -81,7 +81,7 @@ ALTER VIEW [ IF EXISTS ] name RESET These forms set or remove the default value for a column. A view column's default value is substituted into any - INSERT or UPDATE command whose target is the + INSERT or UPDATE command whose target is the view, before applying any rules or triggers for the view. The view's default will therefore take precedence over any default values from underlying relations. @@ -185,7 +185,7 @@ INSERT INTO a_view(id) VALUES(2); -- ts will receive the current time Compatibility - ALTER VIEW is a PostgreSQL + ALTER VIEW is a PostgreSQL extension of the SQL standard. diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml index eae7fe92e0..12f2f09337 100644 --- a/doc/src/sgml/ref/analyze.sgml +++ b/doc/src/sgml/ref/analyze.sgml @@ -35,7 +35,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns ANALYZE collects statistics about the contents of tables in the database, and stores the results in the pg_statistic + linkend="catalog-pg-statistic">pg_statistic system catalog. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries. @@ -93,7 +93,7 @@ ANALYZE [ VERBOSE ] [ table_and_columnsOutputs - When VERBOSE is specified, ANALYZE emits + When VERBOSE is specified, ANALYZE emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well. @@ -104,8 +104,8 @@ ANALYZE [ VERBOSE ] [ table_and_columns Foreign tables are analyzed only when explicitly selected. Not all - foreign data wrappers support ANALYZE. If the table's - wrapper does not support ANALYZE, the command prints a + foreign data wrappers support ANALYZE. If the table's + wrapper does not support ANALYZE, the command prints a warning and does nothing. @@ -172,8 +172,8 @@ ANALYZE [ VERBOSE ] [ table_and_columnspg_statistic. In particular, setting the statistics target to zero disables collection of statistics for that column. 
It might be useful to do that for columns that are - never used as part of the WHERE, GROUP BY, - or ORDER BY clauses of queries, since the planner will + never used as part of the WHERE, GROUP BY, + or ORDER BY clauses of queries, since the planner will have no use for statistics on such columns. @@ -191,7 +191,7 @@ ANALYZE [ VERBOSE ] [ table_and_columnsALTER TABLE ... ALTER COLUMN ... SET (n_distinct = ...) + ALTER TABLE ... ALTER COLUMN ... SET (n_distinct = ...) (see ). @@ -210,7 +210,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns If any of the child tables are foreign tables whose foreign data wrappers - do not support ANALYZE, those child tables are ignored while + do not support ANALYZE, those child tables are ignored while gathering inheritance statistics. diff --git a/doc/src/sgml/ref/begin.sgml b/doc/src/sgml/ref/begin.sgml index c04f1c8064..fd6f073d18 100644 --- a/doc/src/sgml/ref/begin.sgml +++ b/doc/src/sgml/ref/begin.sgml @@ -91,7 +91,7 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_mode has the same functionality - as BEGIN. + as BEGIN. @@ -101,7 +101,7 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_mode - Issuing BEGIN when already inside a transaction block will + Issuing BEGIN when already inside a transaction block will provoke a warning message. The state of the transaction is not affected. To nest transactions within a transaction block, use savepoints (see ). diff --git a/doc/src/sgml/ref/close.sgml b/doc/src/sgml/ref/close.sgml index aaa2f89a30..4d71c45797 100644 --- a/doc/src/sgml/ref/close.sgml +++ b/doc/src/sgml/ref/close.sgml @@ -90,7 +90,7 @@ CLOSE { name | ALL } You can see all available cursors by querying the pg_cursors system view. + linkend="view-pg-cursors">pg_cursors system view. @@ -115,7 +115,7 @@ CLOSE liahona; CLOSE is fully conforming with the SQL - standard. CLOSE ALL is a PostgreSQL + standard. CLOSE ALL is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/cluster.sgml b/doc/src/sgml/ref/cluster.sgml index b55734d35c..5c5db75077 100644 --- a/doc/src/sgml/ref/cluster.sgml +++ b/doc/src/sgml/ref/cluster.sgml @@ -128,7 +128,7 @@ CLUSTER [VERBOSE] - CLUSTER can re-sort the table using either an index scan + CLUSTER can re-sort the table using either an index scan on the specified index, or (if the index is a b-tree) a sequential scan followed by sorting. It will attempt to choose the method that will be faster, based on planner cost parameters and available statistical @@ -148,13 +148,13 @@ CLUSTER [VERBOSE] as double the table size, plus the index sizes. This method is often faster than the index scan method, but if the disk space requirement is intolerable, you can disable this choice by temporarily setting to off. + linkend="guc-enable-sort"> to off. It is advisable to set to a reasonably large value (but not more than the amount of RAM you can - dedicate to the CLUSTER operation) before clustering. + dedicate to the CLUSTER operation) before clustering. @@ -168,7 +168,7 @@ CLUSTER [VERBOSE] Because CLUSTER remembers which indexes are clustered, one can cluster the tables one wants clustered manually the first time, then set up a periodic maintenance script that executes - CLUSTER without any parameters, so that the desired tables + CLUSTER without any parameters, so that the desired tables are periodically reclustered. @@ -212,7 +212,7 @@ CLUSTER; CLUSTER index_name ON table_name - is also supported for compatibility with pre-8.3 PostgreSQL + is also supported for compatibility with pre-8.3 PostgreSQL versions. 
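As a sketch of the memory advice in the CLUSTER notes above (the emp table and emp_ind index are hypothetical names), maintenance_work_mem can be raised for the session before a one-off clustering of a large table:

SET maintenance_work_mem = '1GB';
CLUSTER emp USING emp_ind;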
diff --git a/doc/src/sgml/ref/clusterdb.sgml b/doc/src/sgml/ref/clusterdb.sgml
index 67582fd6e6..081bbc5f7a 100644
--- a/doc/src/sgml/ref/clusterdb.sgml
+++ b/doc/src/sgml/ref/clusterdb.sgml
@@ -76,8 +76,8 @@ PostgreSQL documentation
- -a
- --all
+ -a
+ --all
 Cluster all databases.
- -d dbname
- --dbname=dbname
+ -d dbname
+ --dbname=dbname
 Specifies the name of the database to be clustered.
- -e
- --echo
+ -e
+ --echo
 Echo the commands that clusterdb generates
 and sends to the server.
- -q
- --quiet
+ -q
+ --quiet
 Do not display progress messages.
- -t table
- --table=table
+ -t table
+ --table=table
 Cluster table only.
 Multiple tables can be clustered by writing multiple -t switches.
- -v
- --verbose
+ -v
+ --verbose
 Print detailed information during processing.
- -V
- --version
+ -V
+ --version
 Print the clusterdb version and exit.
- -?
- --help
+ -?
+ --help
 Show help about clusterdb command line
 arguments, and exit.
- -h host
- --host=host
+ -h host
+ --host=host
 Specifies the host name of the machine on which the server is
 running. If the value begins with a slash, it is used as the
 directory for the Unix domain socket.
- -p port
- --port=port
+ -p port
+ --port=port
 Specifies the TCP port or local Unix domain socket file
 extension on which the server is listening for connections.
- -U username
- --username=username
+ -U username
+ --username=username
 User name to connect as.
- -w
- --no-password
+ -w
+ --no-password
 Never issue a password prompt. If the server requires
 password authentication and a password is not available by other
 means such as a .pgpass file, the connection attempt will fail.
 This option can be useful in batch jobs and scripts where no user is
 present to enter a password.
- -W
- --password
+ -W
+ --password
 Force clusterdb to prompt for a
 password before connecting to a database.
 This option is never essential, since
 clusterdb will automatically prompt
 for a password if the server demands password authentication.
 However, clusterdb will waste a connection
 attempt finding out that the server wants a password.
 In some cases it is worth typing -W to avoid the extra
 connection attempt.
- --maintenance-db=dbname
+ --maintenance-db=dbname
 Specifies the name of the database to connect to discover what other
- This utility, like most other PostgreSQL utilities,
- also uses the environment variables supported by libpq
+ This utility, like most other PostgreSQL utilities,
+ also uses the environment variables supported by libpq
 (see ).
diff --git a/doc/src/sgml/ref/comment.sgml b/doc/src/sgml/ref/comment.sgml
index 059d6f41d8..ab2e09d521 100644
--- a/doc/src/sgml/ref/comment.sgml
+++ b/doc/src/sgml/ref/comment.sgml
@@ -83,16 +83,16 @@ COMMENT ON
 Only one comment string is stored for each object, so to modify a comment,
- issue a new COMMENT command for the same object. To remove a
+ issue a new COMMENT command for the same object. To remove a
 comment, write NULL in place of the text string.
 Comments are automatically dropped when their object is dropped.
 For most kinds of object, only the object's owner can set the comment.
- Roles don't have owners, so the rule for COMMENT ON ROLE is
+ Roles don't have owners, so the rule for COMMENT ON ROLE is
 that you must be superuser to comment on a superuser role, or have the
- CREATEROLE privilege to comment on non-superuser roles.
+ CREATEROLE privilege to comment on non-superuser roles.
 Likewise, access methods don't have owners either; you must be
 superuser to comment on an access method.
 Of course, a superuser can comment on anything.
@@ -103,8 +103,8 @@ COMMENT ON
 \d family of commands.
 Other user interfaces to retrieve comments can be built atop the same
 built-in functions that psql uses, namely
- obj_description, col_description,
- and shobj_description
+ obj_description, col_description,
+ and shobj_description
 (see ).
@@ -171,14 +171,14 @@ COMMENT ON
 The mode of a function or aggregate
+ argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that COMMENT does not actually pay - any attention to OUT arguments, since only the input + any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. + So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. @@ -219,7 +219,7 @@ COMMENT ON The data type(s) of the operator's arguments (optionally - schema-qualified). Write NONE for the missing argument + schema-qualified). Write NONE for the missing argument of a prefix or postfix operator. @@ -258,7 +258,7 @@ COMMENT ON text - The new comment, written as a string literal; or NULL + The new comment, written as a string literal; or NULL to drop the comment. diff --git a/doc/src/sgml/ref/commit.sgml b/doc/src/sgml/ref/commit.sgml index e93c216849..8e3f53957e 100644 --- a/doc/src/sgml/ref/commit.sgml +++ b/doc/src/sgml/ref/commit.sgml @@ -60,7 +60,7 @@ COMMIT [ WORK | TRANSACTION ] - Issuing COMMIT when not inside a transaction does + Issuing COMMIT when not inside a transaction does no harm, but it will provoke a warning message. diff --git a/doc/src/sgml/ref/commit_prepared.sgml b/doc/src/sgml/ref/commit_prepared.sgml index 716aed3ac2..35bbf85af7 100644 --- a/doc/src/sgml/ref/commit_prepared.sgml +++ b/doc/src/sgml/ref/commit_prepared.sgml @@ -75,7 +75,7 @@ COMMIT PREPARED transaction_id Examples Commit the transaction identified by the transaction - identifier foobar: + identifier foobar: COMMIT PREPARED 'foobar'; diff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml index 732efe69e6..8f0974b256 100644 --- a/doc/src/sgml/ref/copy.sgml +++ b/doc/src/sgml/ref/copy.sgml @@ -54,10 +54,10 @@ COPY { table_name [ ( COPY moves data between PostgreSQL tables and standard file-system files. COPY TO copies the contents of a table - to a file, while COPY FROM copies - data from a file to a table (appending the data to + to a file, while COPY FROM copies + data from a file to a table (appending the data to whatever is in the table already). COPY TO - can also copy the results of a SELECT query. + can also copy the results of a SELECT query. @@ -118,10 +118,10 @@ COPY { table_name [ ( - For INSERT, UPDATE and - DELETE queries a RETURNING clause must be provided, + For INSERT, UPDATE and + DELETE queries a RETURNING clause must be provided, and the target relation must not have a conditional rule, nor - an ALSO rule, nor an INSTEAD rule + an ALSO rule, nor an INSTEAD rule that expands to multiple statements. @@ -133,7 +133,7 @@ COPY { table_name [ ( The path name of the input or output file. An input file name can be an absolute or relative path, but an output file name must be an absolute - path. Windows users might need to use an E'' string and + path. Windows users might need to use an E'' string and double any backslashes used in the path name. @@ -144,7 +144,7 @@ COPY { table_name [ ( A command to execute. In COPY FROM, the input is - read from standard output of the command, and in COPY TO, + read from standard output of the command, and in COPY TO, the output is written to the standard input of the command. @@ -181,9 +181,9 @@ COPY { table_name [ ( Specifies whether the selected option should be turned on or off. - You can write TRUE, ON, or + You can write TRUE, ON, or 1 to enable the option, and FALSE, - OFF, or 0 to disable it. The + OFF, or 0 to disable it. 
The boolean value can also be omitted, in which case TRUE is assumed. @@ -195,10 +195,10 @@ COPY { table_name [ ( Selects the data format to be read or written: - text, - csv (Comma Separated Values), - or binary. - The default is text. + text, + csv (Comma Separated Values), + or binary. + The default is text. @@ -220,7 +220,7 @@ COPY { table_name [ ( Requests copying the data with rows already frozen, just as they - would be after running the VACUUM FREEZE command. + would be after running the VACUUM FREEZE command. This is intended as a performance option for initial data loading. Rows will be frozen only if the table being loaded has been created or truncated in the current subtransaction, there are no cursors @@ -241,9 +241,9 @@ COPY { table_name [ ( Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, - a comma in CSV format. + a comma in CSV format. This must be a single one-byte character. - This option is not allowed when using binary format. + This option is not allowed when using binary format. @@ -254,10 +254,10 @@ COPY { table_name [ ( Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty - string in CSV format. You might prefer an + string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. - This option is not allowed when using binary format. + This option is not allowed when using binary format. @@ -279,7 +279,7 @@ COPY { table_name [ ( CSV format. + This option is allowed only when using CSV format. @@ -291,7 +291,7 @@ COPY { table_name [ ( CSV format. + This option is allowed only when using CSV format. @@ -301,59 +301,59 @@ COPY { table_name [ ( Specifies the character that should appear before a - data character that matches the QUOTE value. - The default is the same as the QUOTE value (so that + data character that matches the QUOTE value. + The default is the same as the QUOTE value (so that the quoting character is doubled if it appears in the data). This must be a single one-byte character. - This option is allowed only when using CSV format. + This option is allowed only when using CSV format. - FORCE_QUOTE + FORCE_QUOTE Forces quoting to be - used for all non-NULL values in each specified column. - NULL output is never quoted. If * is specified, - non-NULL values will be quoted in all columns. - This option is allowed only in COPY TO, and only when - using CSV format. + used for all non-NULL values in each specified column. + NULL output is never quoted. If * is specified, + non-NULL values will be quoted in all columns. + This option is allowed only in COPY TO, and only when + using CSV format. - FORCE_NOT_NULL + FORCE_NOT_NULL Do not match the specified columns' values against the null string. In the default case where the null string is empty, this means that empty values will be read as zero-length strings rather than nulls, even when they are not quoted. - This option is allowed only in COPY FROM, and only when - using CSV format. + This option is allowed only in COPY FROM, and only when + using CSV format. - FORCE_NULL + FORCE_NULL Match the specified columns' values against the null string, even if it has been quoted, and if a match is found set the value to - NULL. In the default case where the null string is empty, + NULL. In the default case where the null string is empty, this converts a quoted empty string into NULL. 
- This option is allowed only in COPY FROM, and only when - using CSV format. + This option is allowed only in COPY FROM, and only when + using CSV format. - ENCODING + ENCODING Specifies that the file is encoded in the table_name [ ( Outputs - On successful completion, a COPY command returns a command + On successful completion, a COPY command returns a command tag of the form COPY count @@ -382,10 +382,10 @@ COPY count - psql will print this command tag only if the command - was not COPY ... TO STDOUT, or the - equivalent psql meta-command - \copy ... to stdout. This is to prevent confusing the + psql will print this command tag only if the command + was not COPY ... TO STDOUT, or the + equivalent psql meta-command + \copy ... to stdout. This is to prevent confusing the command tag with the data that was just printed. @@ -403,16 +403,16 @@ COPY count COPY FROM can be used with plain tables and with views - that have INSTEAD OF INSERT triggers. + that have INSTEAD OF INSERT triggers. COPY only deals with the specific table named; it does not copy data to or from child tables. Thus for example - COPY table TO + COPY table TO shows the same data as SELECT * FROM ONLY table. But COPY - (SELECT * FROM table) TO ... + class="parameter">table. But COPY + (SELECT * FROM table) TO ... can be used to dump all of the data in an inheritance hierarchy. @@ -427,7 +427,7 @@ COPY count If row-level security is enabled for the table, the relevant SELECT policies will apply to COPY - table TO statements. + table TO statements. Currently, COPY FROM is not supported for tables with row-level security. Use equivalent INSERT statements instead. @@ -491,10 +491,10 @@ COPY count DateStyle. To ensure portability to other PostgreSQL installations that might use non-default DateStyle settings, - DateStyle should be set to ISO before - using COPY TO. It is also a good idea to avoid dumping + DateStyle should be set to ISO before + using COPY TO. It is also a good idea to avoid dumping data with IntervalStyle set to - sql_standard, because negative interval values might be + sql_standard, because negative interval values might be misinterpreted by a server that has a different setting for IntervalStyle. @@ -519,7 +519,7 @@ COPY count - FORCE_NULL and FORCE_NOT_NULL can be used + FORCE_NULL and FORCE_NOT_NULL can be used simultaneously on the same column. This results in converting quoted null strings to null values and unquoted null strings to empty strings. @@ -533,7 +533,7 @@ COPY count Text Format - When the text format is used, + When the text format is used, the data read or written is a text file with one line per table row. Columns in a row are separated by the delimiter character. The column values themselves are strings generated by the @@ -548,17 +548,17 @@ COPY count End of data can be represented by a single line containing just - backslash-period (\.). An end-of-data marker is + backslash-period (\.). An end-of-data marker is not necessary when reading from a file, since the end of file serves perfectly well; it is needed only when copying data to or from client applications using pre-3.0 client protocol. - Backslash characters (\) can be used in the + Backslash characters (\) can be used in the COPY data to quote data characters that might otherwise be taken as row or column delimiters. 
In particular, the - following characters must be preceded by a backslash if + following characters must be preceded by a backslash if they appear as part of a column value: backslash itself, newline, carriage return, and the current delimiter character. @@ -587,37 +587,37 @@ COPY count - \b + \b Backspace (ASCII 8) - \f + \f Form feed (ASCII 12) - \n + \n Newline (ASCII 10) - \r + \r Carriage return (ASCII 13) - \t + \t Tab (ASCII 9) - \v + \v Vertical tab (ASCII 11) - \digits + \digits Backslash followed by one to three octal digits specifies the character with that numeric code - \xdigits - Backslash x followed by one or two hex digits specifies + \xdigits + Backslash x followed by one or two hex digits specifies the character with that numeric code @@ -633,15 +633,15 @@ COPY count Any other backslashed character that is not mentioned in the above table will be taken to represent itself. However, beware of adding backslashes unnecessarily, since that might accidentally produce a string matching the - end-of-data marker (\.) or the null string (\N by + end-of-data marker (\.) or the null string (\N by default). These strings will be recognized before any other backslash processing is done. It is strongly recommended that applications generating COPY data convert - data newlines and carriage returns to the \n and - \r sequences respectively. At present it is + data newlines and carriage returns to the \n and + \r sequences respectively. At present it is possible to represent a data carriage return by a backslash and carriage return, and to represent a data newline by a backslash and newline. However, these representations might not be accepted in future releases. @@ -652,10 +652,10 @@ COPY count COPY TO will terminate each row with a Unix-style - newline (\n). Servers running on Microsoft Windows instead - output carriage return/newline (\r\n), but only for - COPY to a server file; for consistency across platforms, - COPY TO STDOUT always sends \n + newline (\n). Servers running on Microsoft Windows instead + output carriage return/newline (\r\n), but only for + COPY to a server file; for consistency across platforms, + COPY TO STDOUT always sends \n regardless of server platform. COPY FROM can handle lines ending with newlines, carriage returns, or carriage return/newlines. To reduce the risk of @@ -670,62 +670,62 @@ COPY count This format option is used for importing and exporting the Comma - Separated Value (CSV) file format used by many other + Separated Value (CSV) file format used by many other programs, such as spreadsheets. Instead of the escaping rules used by PostgreSQL's standard text format, it produces and recognizes the common CSV escaping mechanism. - The values in each record are separated by the DELIMITER + The values in each record are separated by the DELIMITER character. If the value contains the delimiter character, the - QUOTE character, the NULL string, a carriage + QUOTE character, the NULL string, a carriage return, or line feed character, then the whole value is prefixed and - suffixed by the QUOTE character, and any occurrence - within the value of a QUOTE character or the - ESCAPE character is preceded by the escape character. - You can also use FORCE_QUOTE to force quotes when outputting - non-NULL values in specific columns. + suffixed by the QUOTE character, and any occurrence + within the value of a QUOTE character or the + ESCAPE character is preceded by the escape character. 
+ You can also use FORCE_QUOTE to force quotes when outputting + non-NULL values in specific columns. - The CSV format has no standard way to distinguish a - NULL value from an empty string. - PostgreSQL's COPY handles this by quoting. - A NULL is output as the NULL parameter string - and is not quoted, while a non-NULL value matching the - NULL parameter string is quoted. For example, with the - default settings, a NULL is written as an unquoted empty + The CSV format has no standard way to distinguish a + NULL value from an empty string. + PostgreSQL's COPY handles this by quoting. + A NULL is output as the NULL parameter string + and is not quoted, while a non-NULL value matching the + NULL parameter string is quoted. For example, with the + default settings, a NULL is written as an unquoted empty string, while an empty string data value is written with double quotes - (""). Reading values follows similar rules. You can - use FORCE_NOT_NULL to prevent NULL input + (""). Reading values follows similar rules. You can + use FORCE_NOT_NULL to prevent NULL input comparisons for specific columns. You can also use - FORCE_NULL to convert quoted null string data values to - NULL. + FORCE_NULL to convert quoted null string data values to + NULL. - Because backslash is not a special character in the CSV - format, \., the end-of-data marker, could also appear - as a data value. To avoid any misinterpretation, a \. + Because backslash is not a special character in the CSV + format, \., the end-of-data marker, could also appear + as a data value. To avoid any misinterpretation, a \. data value appearing as a lone entry on a line is automatically quoted on output, and on input, if quoted, is not interpreted as the end-of-data marker. If you are loading a file created by another application that has a single unquoted column and might have a - value of \., you might need to quote that value in the + value of \., you might need to quote that value in the input file. - In CSV format, all characters are significant. A quoted value + In CSV format, all characters are significant. A quoted value surrounded by white space, or any characters other than - DELIMITER, will include those characters. This can cause - errors if you import data from a system that pads CSV + DELIMITER, will include those characters. This can cause + errors if you import data from a system that pads CSV lines with white space out to some fixed width. If such a situation - arises you might need to preprocess the CSV file to remove + arises you might need to preprocess the CSV file to remove the trailing white space, before importing the data into - PostgreSQL. + PostgreSQL. @@ -743,7 +743,7 @@ COPY count Many programs produce strange and occasionally perverse CSV files, so the file format is more a convention than a standard. Thus you might encounter some files that cannot be imported using this - mechanism, and COPY might produce files that other + mechanism, and COPY might produce files that other programs cannot process. @@ -756,17 +756,17 @@ COPY count The binary format option causes all data to be stored/read as binary format rather than as text. It is - somewhat faster than the text and CSV formats, + somewhat faster than the text and CSV formats, but a binary-format file is less portable across machine architectures and PostgreSQL versions. 
Also, the binary format is very data type specific; for example - it will not work to output binary data from a smallint column - and read it into an integer column, even though that would work + it will not work to output binary data from a smallint column + and read it into an integer column, even though that would work fine in text format. - The binary file format consists + The binary file format consists of a file header, zero or more tuples containing the row data, and a file trailer. Headers and data are in network byte order. @@ -790,7 +790,7 @@ COPY count Signature -11-byte sequence PGCOPY\n\377\r\n\0 — note that the zero byte +11-byte sequence PGCOPY\n\377\r\n\0 — note that the zero byte is a required part of the signature. (The signature is designed to allow easy identification of files that have been munged by a non-8-bit-clean transfer. This signature will be changed by end-of-line-translation @@ -804,7 +804,7 @@ filters, dropped zero bytes, dropped high bits, or parity changes.) 32-bit integer bit mask to denote important aspects of the file format. Bits -are numbered from 0 (LSB) to 31 (MSB). Note that +are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues; a reader @@ -880,7 +880,7 @@ to be specified. To determine the appropriate binary format for the actual tuple data you should consult the PostgreSQL source, in -particular the *send and *recv functions for +particular the *send and *recv functions for each column's data type (typically these functions are found in the src/backend/utils/adt/ directory of the source distribution). @@ -924,7 +924,7 @@ COPY country TO STDOUT (DELIMITER '|'); - To copy data from a file into the country table: + To copy data from a file into the country table: COPY country FROM '/usr1/proj/bray/sql/country_data'; @@ -986,7 +986,7 @@ ZW ZIMBABWE - The following syntax was used before PostgreSQL + The following syntax was used before PostgreSQL version 9.0 and is still supported: @@ -1015,13 +1015,13 @@ COPY { table_name [ ( column_name [, ...] | * } ] ] ] - Note that in this syntax, BINARY and CSV are - treated as independent keywords, not as arguments of a FORMAT + Note that in this syntax, BINARY and CSV are + treated as independent keywords, not as arguments of a FORMAT option. - The following syntax was used before PostgreSQL + The following syntax was used before PostgreSQL version 7.3 and is still supported: diff --git a/doc/src/sgml/ref/create_access_method.sgml b/doc/src/sgml/ref/create_access_method.sgml index 891926dba5..1bb1a79bd2 100644 --- a/doc/src/sgml/ref/create_access_method.sgml +++ b/doc/src/sgml/ref/create_access_method.sgml @@ -73,7 +73,7 @@ CREATE ACCESS METHOD name handler_function is the name (possibly schema-qualified) of a previously registered function that represents the access method. The handler function must be - declared to take a single argument of type internal, + declared to take a single argument of type internal, and its return type depends on the type of access method; for INDEX access methods, it must be index_am_handler. 
The C-level API that the handler @@ -89,8 +89,8 @@ CREATE ACCESS METHOD name Examples - Create an index access method heptree with - handler function heptree_handler: + Create an index access method heptree with + handler function heptree_handler: CREATE ACCESS METHOD heptree TYPE INDEX HANDLER heptree_handler; @@ -101,7 +101,7 @@ CREATE ACCESS METHOD heptree TYPE INDEX HANDLER heptree_handler; CREATE ACCESS METHOD is a - PostgreSQL extension. + PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_aggregate.sgml b/doc/src/sgml/ref/create_aggregate.sgml index ee79c90df2..3de30fa580 100644 --- a/doc/src/sgml/ref/create_aggregate.sgml +++ b/doc/src/sgml/ref/create_aggregate.sgml @@ -98,7 +98,7 @@ CREATE AGGREGATE name ( If a schema name is given (for example, CREATE AGGREGATE - myschema.myagg ...) then the aggregate function is created in the + myschema.myagg ...) then the aggregate function is created in the specified schema. Otherwise it is created in the current schema. @@ -191,57 +191,57 @@ CREATE AGGREGATE name ( is polymorphic and the state value's data type would be inadequate to pin down the result type. These extra parameters are always passed as NULL (and so the final function must not be strict when - the FINALFUNC_EXTRA option is used), but nonetheless they + the FINALFUNC_EXTRA option is used), but nonetheless they are valid parameters. The final function could for example make use - of get_fn_expr_argtype to identify the actual argument type + of get_fn_expr_argtype to identify the actual argument type in the current call. - An aggregate can optionally support moving-aggregate mode, + An aggregate can optionally support moving-aggregate mode, as described in . This requires - specifying the MSFUNC, MINVFUNC, - and MSTYPE parameters, and optionally - the MSPACE, MFINALFUNC, - MFINALFUNC_EXTRA, MFINALFUNC_MODIFY, - and MINITCOND parameters. Except for MINVFUNC, + specifying the MSFUNC, MINVFUNC, + and MSTYPE parameters, and optionally + the MSPACE, MFINALFUNC, + MFINALFUNC_EXTRA, MFINALFUNC_MODIFY, + and MINITCOND parameters. Except for MINVFUNC, these parameters work like the corresponding simple-aggregate parameters - without M; they define a separate implementation of the + without M; they define a separate implementation of the aggregate that includes an inverse transition function. The syntax with ORDER BY in the parameter list creates a special type of aggregate called an ordered-set - aggregate; or if HYPOTHETICAL is specified, then + aggregate; or if HYPOTHETICAL is specified, then a hypothetical-set aggregate is created. These aggregates operate over groups of sorted values in order-dependent ways, so that specification of an input sort order is an essential part of a - call. Also, they can have direct arguments, which are + call. Also, they can have direct arguments, which are arguments that are evaluated only once per aggregation rather than once per input row. Hypothetical-set aggregates are a subclass of ordered-set aggregates in which some of the direct arguments are required to match, in number and data types, the aggregated argument columns. This allows the values of those direct arguments to be added to the collection of - aggregate-input rows as an additional hypothetical row. + aggregate-input rows as an additional hypothetical row. - An aggregate can optionally support partial aggregation, + An aggregate can optionally support partial aggregation, as described in . - This requires specifying the COMBINEFUNC parameter. 
+ This requires specifying the COMBINEFUNC parameter. If the state_data_type - is internal, it's usually also appropriate to provide the - SERIALFUNC and DESERIALFUNC parameters so that + is internal, it's usually also appropriate to provide the + SERIALFUNC and DESERIALFUNC parameters so that parallel aggregation is possible. Note that the aggregate must also be - marked PARALLEL SAFE to enable parallel aggregation. + marked PARALLEL SAFE to enable parallel aggregation. - Aggregates that behave like MIN or MAX can + Aggregates that behave like MIN or MAX can sometimes be optimized by looking into an index instead of scanning every input row. If this aggregate can be so optimized, indicate it by - specifying a sort operator. The basic requirement is that + specifying a sort operator. The basic requirement is that the aggregate must yield the first element in the sort ordering induced by the operator; in other words: @@ -253,9 +253,9 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Further assumptions are that the aggregate ignores null inputs, and that it delivers a null result if and only if there were no non-null inputs. - Ordinarily, a data type's < operator is the proper sort - operator for MIN, and > is the proper sort - operator for MAX. Note that the optimization will never + Ordinarily, a data type's < operator is the proper sort + operator for MIN, and > is the proper sort + operator for MAX. Note that the optimization will never actually take effect unless the specified operator is the less than or greater than strategy member of a B-tree index operator class. @@ -288,10 +288,10 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - The mode of an argument: IN or VARIADIC. - (Aggregate functions do not support OUT arguments.) - If omitted, the default is IN. Only the last argument - can be marked VARIADIC. + The mode of an argument: IN or VARIADIC. + (Aggregate functions do not support OUT arguments.) + If omitted, the default is IN. Only the last argument + can be marked VARIADIC. @@ -312,7 +312,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; An input data type on which this aggregate function operates. - To create a zero-argument aggregate function, write * + To create a zero-argument aggregate function, write * in place of the list of argument specifications. (An example of such an aggregate is count(*).) @@ -323,12 +323,12 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; base_type - In the old syntax for CREATE AGGREGATE, the input data type - is specified by a basetype parameter rather than being + In the old syntax for CREATE AGGREGATE, the input data type + is specified by a basetype parameter rather than being written next to the aggregate name. Note that this syntax allows only one input parameter. To define a zero-argument aggregate function - with this syntax, specify the basetype as - "ANY" (not *). + with this syntax, specify the basetype as + "ANY" (not *). Ordered-set aggregates cannot be defined with the old syntax. @@ -339,9 +339,9 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The name of the state transition function to be called for each - input row. For a normal N-argument - aggregate function, the sfunc - must take N+1 arguments, + input row. For a normal N-argument + aggregate function, the sfunc + must take N+1 arguments, the first being of type state_data_type and the rest matching the declared input data type(s) of the aggregate. 
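To make the sfunc/stype machinery described above concrete, here is a minimal sketch along the lines of the well-known average-aggregate example: it assembles a float8 average from the built-in float8_accum transition function and float8_avg final function. The aggregate name my_avg is hypothetical; the state is a three-element array holding count, sum, and sum of squares, hence the initial condition.

CREATE AGGREGATE my_avg (float8)
(
    sfunc = float8_accum,
    stype = float8[],
    finalfunc = float8_avg,
    initcond = '{0,0,0}'
);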
@@ -375,7 +375,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The approximate average size (in bytes) of the aggregate's state value. If this parameter is omitted or is zero, a default estimate is used - based on the state_data_type. + based on the state_data_type. The planner uses this value to estimate the memory required for a grouped aggregate query. The planner will consider using hash aggregation for such a query only if the hash table is estimated to fit @@ -408,7 +408,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - If FINALFUNC_EXTRA is specified, then in addition to the + If FINALFUNC_EXTRA is specified, then in addition to the final state value and any direct arguments, the final function receives extra NULL values corresponding to the aggregate's regular (aggregated) arguments. This is mainly useful to allow correct @@ -419,16 +419,16 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - FINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } + FINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } This option specifies whether the final function is a pure function - that does not modify its arguments. READ_ONLY indicates + that does not modify its arguments. READ_ONLY indicates it does not; the other two values indicate that it may change the transition state value. See below for more detail. The - default is READ_ONLY, except for ordered-set aggregates, - for which the default is READ_WRITE. + default is READ_ONLY, except for ordered-set aggregates, + for which the default is READ_WRITE. @@ -482,11 +482,11 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; An aggregate function whose state_data_type - is internal can participate in parallel aggregation only if it + is internal can participate in parallel aggregation only if it has a serialfunc function, - which must serialize the aggregate state into a bytea value for + which must serialize the aggregate state into a bytea value for transmission to another process. This function must take a single - argument of type internal and return type bytea. A + argument of type internal and return type bytea. A corresponding deserialfunc is also required. @@ -499,9 +499,9 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Deserialize a previously serialized aggregate state back into state_data_type. This - function must take two arguments of types bytea - and internal, and produce a result of type internal. - (Note: the second, internal argument is unused, but is required + function must take two arguments of types bytea + and internal, and produce a result of type internal. + (Note: the second, internal argument is unused, but is required for type safety reasons.) @@ -526,8 +526,8 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The name of the forward state transition function to be called for each input row in moving-aggregate mode. This is exactly like the regular transition function, except that its first argument and result are of - type mstate_data_type, which might be different - from state_data_type. + type mstate_data_type, which might be different + from state_data_type. @@ -538,7 +538,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The name of the inverse state transition function to be used in moving-aggregate mode. This function has the same argument and - result types as msfunc, but it is used to remove + result types as msfunc, but it is used to remove a value from the current aggregate state, rather than add a value to it. 
The inverse transition function must have the same strictness attribute as the forward state transition function. @@ -562,7 +562,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The approximate average size (in bytes) of the aggregate's state value, when using moving-aggregate mode. This works the same as - state_data_size. + state_data_size. @@ -573,22 +573,22 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The name of the final function called to compute the aggregate's result after all input rows have been traversed, when using - moving-aggregate mode. This works the same as ffunc, + moving-aggregate mode. This works the same as ffunc, except that its first argument's type - is mstate_data_type and extra dummy arguments are - specified by writing MFINALFUNC_EXTRA. - The aggregate result type determined by mffunc - or mstate_data_type must match that determined by the + is mstate_data_type and extra dummy arguments are + specified by writing MFINALFUNC_EXTRA. + The aggregate result type determined by mffunc + or mstate_data_type must match that determined by the aggregate's regular implementation. - MFINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } + MFINALFUNC_MODIFY = { READ_ONLY | SHARABLE | READ_WRITE } - This option is like FINALFUNC_MODIFY, but it describes + This option is like FINALFUNC_MODIFY, but it describes the behavior of the moving-aggregate final function. @@ -599,7 +599,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The initial setting for the state value, when using moving-aggregate - mode. This works the same as initial_condition. + mode. This works the same as initial_condition. @@ -608,8 +608,8 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; sort_operator - The associated sort operator for a MIN- or - MAX-like aggregate. + The associated sort operator for a MIN- or + MAX-like aggregate. This is just an operator name (possibly schema-qualified). The operator is assumed to have the same input data types as the aggregate (which must be a single-argument normal aggregate). @@ -618,14 +618,14 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - PARALLEL = { SAFE | RESTRICTED | UNSAFE } + PARALLEL = { SAFE | RESTRICTED | UNSAFE } - The meanings of PARALLEL SAFE, PARALLEL - RESTRICTED, and PARALLEL UNSAFE are the same as + The meanings of PARALLEL SAFE, PARALLEL + RESTRICTED, and PARALLEL UNSAFE are the same as in . An aggregate will not be considered for parallelization if it is marked PARALLEL - UNSAFE (which is the default!) or PARALLEL RESTRICTED. + UNSAFE (which is the default!) or PARALLEL RESTRICTED. Note that the parallel-safety markings of the aggregate's support functions are not consulted by the planner, only the marking of the aggregate itself. @@ -640,7 +640,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; For ordered-set aggregates only, this flag specifies that the aggregate arguments are to be processed according to the requirements for hypothetical-set aggregates: that is, the last few direct arguments must - match the data types of the aggregated (WITHIN GROUP) + match the data types of the aggregated (WITHIN GROUP) arguments. The HYPOTHETICAL flag has no effect on run-time behavior, only on parse-time resolution of the data types and collations of the aggregate's arguments. @@ -660,7 +660,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; In parameters that specify support function names, you can write - a schema name if needed, for example SFUNC = public.sum. 
+ a schema name if needed, for example SFUNC = public.sum. Do not write argument types there, however — the argument types of the support functions are determined from other parameters. @@ -668,7 +668,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Ordinarily, PostgreSQL functions are expected to be true functions that do not modify their input values. However, an aggregate transition - function, when used in the context of an aggregate, + function, when used in the context of an aggregate, is allowed to cheat and modify its transition-state argument in place. This can provide substantial performance benefits compared to making a fresh copy of the transition state each time. @@ -678,26 +678,26 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Likewise, while an aggregate final function is normally expected not to modify its input values, sometimes it is impractical to avoid modifying the transition-state argument. Such behavior must be declared using - the FINALFUNC_MODIFY parameter. The READ_WRITE + the FINALFUNC_MODIFY parameter. The READ_WRITE value indicates that the final function modifies the transition state in unspecified ways. This value prevents use of the aggregate as a window function, and it also prevents merging of transition states for aggregate calls that share the same input values and transition functions. - The SHARABLE value indicates that the transition function + The SHARABLE value indicates that the transition function cannot be applied after the final function, but multiple final-function calls can be performed on the ending transition state value. This value prevents use of the aggregate as a window function, but it allows merging of transition states. (That is, the optimization of interest here is not applying the same final function repeatedly, but applying different final functions to the same ending transition state value. This is allowed as - long as none of the final functions are marked READ_WRITE.) + long as none of the final functions are marked READ_WRITE.) If an aggregate supports moving-aggregate mode, it will improve calculation efficiency when the aggregate is used as a window function for a window with moving frame start (that is, a frame start mode other - than UNBOUNDED PRECEDING). Conceptually, the forward + than UNBOUNDED PRECEDING). Conceptually, the forward transition function adds input values to the aggregate's state when they enter the window frame from the bottom, and the inverse transition function removes them again when they leave the frame at the top. So, @@ -738,20 +738,20 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; - The syntax for ordered-set aggregates allows VARIADIC + The syntax for ordered-set aggregates allows VARIADIC to be specified for both the last direct parameter and the last - aggregated (WITHIN GROUP) parameter. However, the - current implementation restricts use of VARIADIC + aggregated (WITHIN GROUP) parameter. However, the + current implementation restricts use of VARIADIC in two ways. First, ordered-set aggregates can only use - VARIADIC "any", not other variadic array types. - Second, if the last direct parameter is VARIADIC "any", + VARIADIC "any", not other variadic array types. + Second, if the last direct parameter is VARIADIC "any", then there can be only one aggregated parameter and it must also - be VARIADIC "any". (In the representation used in the + be VARIADIC "any". 
(In the representation used in the system catalogs, these two parameters are merged into a single - VARIADIC "any" item, since pg_proc cannot - represent functions with more than one VARIADIC parameter.) + VARIADIC "any" item, since pg_proc cannot + represent functions with more than one VARIADIC parameter.) If the aggregate is a hypothetical-set aggregate, the direct arguments - that match the VARIADIC "any" parameter are the hypothetical + that match the VARIADIC "any" parameter are the hypothetical ones; any preceding parameters represent additional direct arguments that are not constrained to match the aggregated arguments. @@ -764,7 +764,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Partial (including parallel) aggregation is currently not supported for ordered-set aggregates. Also, it will never be used for aggregate calls - that include DISTINCT or ORDER BY clauses, since + that include DISTINCT or ORDER BY clauses, since those semantics cannot be supported during partial aggregation. diff --git a/doc/src/sgml/ref/create_cast.sgml b/doc/src/sgml/ref/create_cast.sgml index a7d13edc22..89af1e5051 100644 --- a/doc/src/sgml/ref/create_cast.sgml +++ b/doc/src/sgml/ref/create_cast.sgml @@ -44,7 +44,7 @@ SELECT CAST(42 AS float8); converts the integer constant 42 to type float8 by invoking a previously specified function, in this case - float8(int4). (If no suitable cast has been defined, the + float8(int4). (If no suitable cast has been defined, the conversion fails.) @@ -64,7 +64,7 @@ SELECT CAST(42 AS float8); - You can define a cast as an I/O conversion cast by using + You can define a cast as an I/O conversion cast by using the WITH INOUT syntax. An I/O conversion cast is performed by invoking the output function of the source data type, and passing the resulting string to the input function of the target data type. @@ -75,14 +75,14 @@ SELECT CAST(42 AS float8); By default, a cast can be invoked only by an explicit cast request, - that is an explicit CAST(x AS - typename) or - x::typename + that is an explicit CAST(x AS + typename) or + x::typename construct. - If the cast is marked AS ASSIGNMENT then it can be invoked + If the cast is marked AS ASSIGNMENT then it can be invoked implicitly when assigning a value to a column of the target data type. For example, supposing that foo.f1 is a column of type text, then: @@ -90,13 +90,13 @@ SELECT CAST(42 AS float8); INSERT INTO foo (f1) VALUES (42); will be allowed if the cast from type integer to type - text is marked AS ASSIGNMENT, otherwise not. + text is marked AS ASSIGNMENT, otherwise not. (We generally use the term assignment cast to describe this kind of cast.) - If the cast is marked AS IMPLICIT then it can be invoked + If the cast is marked AS IMPLICIT then it can be invoked implicitly in any context, whether assignment or internally in an expression. (We generally use the term implicit cast to describe this kind of cast.) @@ -104,12 +104,12 @@ INSERT INTO foo (f1) VALUES (42); SELECT 2 + 4.0; - The parser initially marks the constants as being of type integer - and numeric respectively. There is no integer - + numeric operator in the system catalogs, - but there is a numeric + numeric operator. - The query will therefore succeed if a cast from integer to - numeric is available and is marked AS IMPLICIT — + The parser initially marks the constants as being of type integer + and numeric respectively. There is no integer + + numeric operator in the system catalogs, + but there is a numeric + numeric operator. 
+ The query will therefore succeed if a cast from integer to + numeric is available and is marked AS IMPLICIT — which in fact it is. The parser will apply the implicit cast and resolve the query as if it had been written @@ -118,17 +118,17 @@ SELECT CAST ( 2 AS numeric ) + 4.0; - Now, the catalogs also provide a cast from numeric to - integer. If that cast were marked AS IMPLICIT — + Now, the catalogs also provide a cast from numeric to + integer. If that cast were marked AS IMPLICIT — which it is not — then the parser would be faced with choosing between the above interpretation and the alternative of casting the - numeric constant to integer and applying the - integer + integer operator. Lacking any + numeric constant to integer and applying the + integer + integer operator. Lacking any knowledge of which choice to prefer, it would give up and declare the query ambiguous. The fact that only one of the two casts is implicit is the way in which we teach the parser to prefer resolution - of a mixed numeric-and-integer expression as - numeric; there is no built-in knowledge about that. + of a mixed numeric-and-integer expression as + numeric; there is no built-in knowledge about that. @@ -142,8 +142,8 @@ SELECT CAST ( 2 AS numeric ) + 4.0; general type category. For example, the cast from int2 to int4 can reasonably be implicit, but the cast from float8 to int4 should probably be - assignment-only. Cross-type-category casts, such as text - to int4, are best made explicit-only. + assignment-only. Cross-type-category casts, such as text + to int4, are best made explicit-only. @@ -151,8 +151,8 @@ SELECT CAST ( 2 AS numeric ) + 4.0; Sometimes it is necessary for usability or standards-compliance reasons to provide multiple implicit casts among a set of types, resulting in ambiguity that cannot be avoided as above. The parser has a fallback - heuristic based on type categories and preferred - types that can help to provide desired behavior in such cases. See + heuristic based on type categories and preferred + types that can help to provide desired behavior in such cases. See for more information. @@ -255,11 +255,11 @@ SELECT CAST ( 2 AS numeric ) + 4.0; Cast implementation functions can have one to three arguments. The first argument type must be identical to or binary-coercible from the cast's source type. The second argument, - if present, must be type integer; it receives the type - modifier associated with the destination type, or -1 + if present, must be type integer; it receives the type + modifier associated with the destination type, or -1 if there is none. The third argument, - if present, must be type boolean; it receives true - if the cast is an explicit cast, false otherwise. + if present, must be type boolean; it receives true + if the cast is an explicit cast, false otherwise. (Bizarrely, the SQL standard demands different behaviors for explicit and implicit casts in some cases. This argument is supplied for functions that must implement such casts. It is not recommended that you design @@ -316,9 +316,9 @@ SELECT CAST ( 2 AS numeric ) + 4.0; It is normally not necessary to create casts between user-defined types - and the standard string types (text, varchar, and - char(n), as well as user-defined types that - are defined to be in the string category). PostgreSQL + and the standard string types (text, varchar, and + char(n), as well as user-defined types that + are defined to be in the string category). PostgreSQL provides automatic I/O conversion casts for that. 
The automatic casts to string types are treated as
 assignment casts, while the automatic casts from string types are
@@ -338,11 +338,11 @@ SELECT CAST ( 2 AS numeric ) + 4.0;
 convention of naming cast implementation functions after the target
 data type. Many users are used to being able to cast data types
 using a function-style notation, that is
- typename(x). This notation is in fact
+ typename(x). This notation is in fact
 nothing more nor less than a call of the cast implementation function; it
 is not specially treated as a cast. If your conversion functions are not
 named to support this convention then you will have surprised users.
- Since PostgreSQL allows overloading of the same function
+ Since PostgreSQL allows overloading of the same function
 name with different argument types, there is no difficulty in having
 multiple conversion functions from different types that all use the
 target type's name.
@@ -353,14 +353,14 @@ SELECT CAST ( 2 AS numeric ) + 4.0;
 Actually the preceding paragraph is an oversimplification: there
 are two cases in which a function-call construct will be treated as a
 cast request without having matched it to an actual function.
- If a function call name(x) does not
- exactly match any existing function, but name is the name
- of a data type and pg_cast provides a binary-coercible cast
- to this type from the type of x, then the call will be
+ If a function call name(x) does not
+ exactly match any existing function, but name is the name
+ of a data type and pg_cast provides a binary-coercible cast
+ to this type from the type of x, then the call will be
 construed as a binary-coercible cast. This exception is made so that
 binary-coercible casts can be invoked using functional syntax, even
 though they lack any function. Likewise, if there is no
- pg_cast entry but the cast would be to or from a string
+ pg_cast entry but the cast would be to or from a string
 type, the call will be construed as an I/O conversion cast. This
 exception allows I/O conversion casts to be invoked using functional
 syntax.
@@ -372,7 +372,7 @@ SELECT CAST ( 2 AS numeric ) + 4.0;
 There is also an exception to the exception: I/O conversion casts from
 composite types to string types cannot be invoked using functional
 syntax, but must be written in explicit cast syntax (either
- CAST or :: notation). This exception was added
+ CAST or :: notation). This exception was added
 because after the introduction of automatically-provided I/O conversion
 casts, it was found too easy to accidentally invoke such a cast when a
 function or column reference was intended.
@@ -402,7 +402,7 @@ CREATE CAST (bigint AS int4) WITH FUNCTION int4(bigint) AS ASSIGNMENT;
 SQL standard, except that SQL does not make provisions for
 binary-coercible types or extra arguments to implementation functions.
- AS IMPLICIT is a PostgreSQL
+ AS IMPLICIT is a PostgreSQL
 extension, too.
diff --git a/doc/src/sgml/ref/create_collation.sgml b/doc/src/sgml/ref/create_collation.sgml
index f88758095f..d4e99e925f 100644
--- a/doc/src/sgml/ref/create_collation.sgml
+++ b/doc/src/sgml/ref/create_collation.sgml
@@ -116,7 +116,7 @@ CREATE COLLATION [ IF NOT EXISTS ] name FROM
 Specifies the provider to use for locale services associated with
 this collation. Possible values
- are: icu,
+ are: icu,
 libc. libc is the default. The available choices depend on
 the operating system and build options.
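For illustration, a collation under each provider described above might be created as follows; the locale names are examples only, and which ones are accepted depends on the operating system and, for the second statement, on the server having been built with ICU support:

CREATE COLLATION german (provider = libc, locale = 'de_DE.utf8');
CREATE COLLATION german_phonebook (provider = icu, locale = 'de-u-co-phonebk');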
diff --git a/doc/src/sgml/ref/create_conversion.sgml b/doc/src/sgml/ref/create_conversion.sgml index d2e2c010ef..03e0315eef 100644 --- a/doc/src/sgml/ref/create_conversion.sgml +++ b/doc/src/sgml/ref/create_conversion.sgml @@ -29,7 +29,7 @@ CREATE [ DEFAULT ] CONVERSION name CREATE CONVERSION defines a new conversion between character set encodings. Also, conversions that - are marked DEFAULT can be used for automatic encoding + are marked DEFAULT can be used for automatic encoding conversion between client and server. For this purpose, two conversions, from encoding A to B and from encoding B to A, must be defined. @@ -51,7 +51,7 @@ CREATE [ DEFAULT ] CONVERSION name - The DEFAULT clause indicates that this conversion + The DEFAULT clause indicates that this conversion is the default for this particular source to destination encoding. There should be only one default encoding in a schema for the encoding pair. @@ -137,7 +137,7 @@ conv_proc( To create a conversion from encoding UTF8 to - LATIN1 using myfunc: + LATIN1 using myfunc: CREATE CONVERSION myconv FOR 'UTF8' TO 'LATIN1' FROM myfunc; diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml index 8e2a73402f..8adfa3a37b 100644 --- a/doc/src/sgml/ref/create_database.sgml +++ b/doc/src/sgml/ref/create_database.sgml @@ -44,21 +44,21 @@ CREATE DATABASE name To create a database, you must be a superuser or have the special - CREATEDB privilege. + CREATEDB privilege. See . By default, the new database will be created by cloning the standard - system database template1. A different template can be + system database template1. A different template can be specified by writing TEMPLATE name. In particular, - by writing TEMPLATE template0, you can create a virgin + by writing TEMPLATE template0, you can create a virgin database containing only the standard objects predefined by your version of PostgreSQL. This is useful if you wish to avoid copying any installation-local objects that might have been added to - template1. + template1. @@ -115,7 +115,7 @@ CREATE DATABASE name lc_collate - Collation order (LC_COLLATE) to use in the new database. + Collation order (LC_COLLATE) to use in the new database. This affects the sort order applied to strings, e.g. in queries with ORDER BY, as well as the order used in indexes on text columns. The default is to use the collation order of the template database. @@ -127,7 +127,7 @@ CREATE DATABASE name lc_ctype - Character classification (LC_CTYPE) to use in the new + Character classification (LC_CTYPE) to use in the new database. This affects the categorization of characters, e.g. lower, upper and digit. The default is to use the character classification of the template database. See below for additional restrictions. @@ -155,7 +155,7 @@ CREATE DATABASE name If false then no one can connect to this database. The default is true, allowing connections (except as restricted by other mechanisms, - such as GRANT/REVOKE CONNECT). + such as GRANT/REVOKE CONNECT). @@ -192,12 +192,12 @@ CREATE DATABASE name Notes - CREATE DATABASE cannot be executed inside a transaction + CREATE DATABASE cannot be executed inside a transaction block. - Errors along the line of could not initialize database directory + Errors along the line of could not initialize database directory are most likely related to insufficient permissions on the data directory, a full disk, or other file system problems. 
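As a sketch of the connection-related options mentioned above (the database names are placeholders), a database can be created closed to ordinary connections, or with a cap on concurrent sessions:

CREATE DATABASE staging ALLOW_CONNECTIONS = false;   -- no one can connect
CREATE DATABASE reporting CONNECTION LIMIT = 25;     -- at most 25 sessions

As discussed below, the connection limit is enforced only approximately and does not apply to superusers.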
@@ -218,26 +218,26 @@ CREATE DATABASE name - Although it is possible to copy a database other than template1 + Although it is possible to copy a database other than template1 by specifying its name as the template, this is not (yet) intended as a general-purpose COPY DATABASE facility. The principal limitation is that no other sessions can be connected to the template database while it is being copied. CREATE - DATABASE will fail if any other connection exists when it starts; + DATABASE will fail if any other connection exists when it starts; otherwise, new connections to the template database are locked out - until CREATE DATABASE completes. + until CREATE DATABASE completes. See for more information. The character set encoding specified for the new database must be - compatible with the chosen locale settings (LC_COLLATE and - LC_CTYPE). If the locale is C (or equivalently - POSIX), then all encodings are allowed, but for other + compatible with the chosen locale settings (LC_COLLATE and + LC_CTYPE). If the locale is C (or equivalently + POSIX), then all encodings are allowed, but for other locale settings there is only one encoding that will work properly. (On Windows, however, UTF-8 encoding can be used with any locale.) - CREATE DATABASE will allow superusers to specify - SQL_ASCII encoding regardless of the locale settings, + CREATE DATABASE will allow superusers to specify + SQL_ASCII encoding regardless of the locale settings, but this choice is deprecated and may result in misbehavior of character-string functions if data that is not encoding-compatible with the locale is stored in the database. @@ -245,19 +245,19 @@ CREATE DATABASE name The encoding and locale settings must match those of the template database, - except when template0 is used as template. This is because + except when template0 is used as template. This is because other databases might contain data that does not match the specified encoding, or might contain indexes whose sort ordering is affected by - LC_COLLATE and LC_CTYPE. Copying such data would + LC_COLLATE and LC_CTYPE. Copying such data would result in a database that is corrupt according to the new settings. template0, however, is known to not contain any data or indexes that would be affected. - The CONNECTION LIMIT option is only enforced approximately; + The CONNECTION LIMIT option is only enforced approximately; if two new sessions start at about the same time when just one - connection slot remains for the database, it is possible that + connection slot remains for the database, it is possible that both will fail. Also, the limit is not enforced against superusers or background worker processes. @@ -275,8 +275,8 @@ CREATE DATABASE lusiadas; - To create a database sales owned by user salesapp - with a default tablespace of salesspace: + To create a database sales owned by user salesapp + with a default tablespace of salesspace: CREATE DATABASE sales OWNER salesapp TABLESPACE salesspace; @@ -284,19 +284,19 @@ CREATE DATABASE sales OWNER salesapp TABLESPACE salesspace; - To create a database music with a different locale: + To create a database music with a different locale: CREATE DATABASE music LC_COLLATE 'sv_SE.utf8' LC_CTYPE 'sv_SE.utf8' TEMPLATE template0; - In this example, the TEMPLATE template0 clause is required if - the specified locale is different from the one in template1. + In this example, the TEMPLATE template0 clause is required if + the specified locale is different from the one in template1. 
(If it is not, then specifying the locale explicitly is redundant.) - To create a database music2 with a different locale and a + To create a database music2 with a different locale and a different character set encoding: CREATE DATABASE music2 diff --git a/doc/src/sgml/ref/create_domain.sgml b/doc/src/sgml/ref/create_domain.sgml index 85ed57dd08..705ff55c49 100644 --- a/doc/src/sgml/ref/create_domain.sgml +++ b/doc/src/sgml/ref/create_domain.sgml @@ -45,7 +45,7 @@ CREATE DOMAIN name [ AS ] If a schema name is given (for example, CREATE DOMAIN - myschema.mydomain ...) then the domain is created in the + myschema.mydomain ...) then the domain is created in the specified schema. Otherwise it is created in the current schema. The domain name must be unique among the types and domains existing in its schema. @@ -95,7 +95,7 @@ CREATE DOMAIN name [ AS ] An optional collation for the domain. If no collation is specified, the underlying data type's default collation is used. - The underlying type must be collatable if COLLATE + The underlying type must be collatable if COLLATE is specified. @@ -106,7 +106,7 @@ CREATE DOMAIN name [ AS ] - The DEFAULT clause specifies a default value for + The DEFAULT clause specifies a default value for columns of the domain data type. The value is any variable-free expression (but subqueries are not allowed). The data type of the default expression must match the data @@ -136,7 +136,7 @@ CREATE DOMAIN name [ AS ] - NOT NULL + NOT NULL Values of this domain are prevented from being null @@ -146,7 +146,7 @@ CREATE DOMAIN name [ AS ] - NULL + NULL Values of this domain are allowed to be null. This is the default. @@ -163,10 +163,10 @@ CREATE DOMAIN name [ AS ] CHECK (expression) - CHECK clauses specify integrity constraints or tests + CHECK clauses specify integrity constraints or tests which values of the domain must satisfy. Each constraint must be an expression - producing a Boolean result. It should use the key word VALUE + producing a Boolean result. It should use the key word VALUE to refer to the value being tested. Expressions evaluating to TRUE or UNKNOWN succeed. If the expression produces a FALSE result, an error is reported and the value is not allowed to be converted @@ -175,13 +175,13 @@ CREATE DOMAIN name [ AS ] Currently, CHECK expressions cannot contain - subqueries nor refer to variables other than VALUE. + subqueries nor refer to variables other than VALUE. When a domain has multiple CHECK constraints, they will be tested in alphabetical order by name. - (PostgreSQL versions before 9.5 did not honor any + (PostgreSQL versions before 9.5 did not honor any particular firing order for CHECK constraints.) @@ -193,7 +193,7 @@ CREATE DOMAIN name [ AS ] Notes - Domain constraints, particularly NOT NULL, are checked when + Domain constraints, particularly NOT NULL, are checked when converting a value to the domain type. It is possible for a column that is nominally of the domain type to read as null despite there being such a constraint. For example, this can happen in an outer-join query, if @@ -211,7 +211,7 @@ INSERT INTO tab (domcol) VALUES ((SELECT domcol FROM tab WHERE false)); It is very difficult to avoid such problems, because of SQL's general assumption that a null value is a valid value of every data type. 
Best practice therefore is to design a domain's constraints so that a null value is allowed, - and then to apply column NOT NULL constraints to columns of + and then to apply column NOT NULL constraints to columns of the domain type as needed, rather than directly to the domain type. diff --git a/doc/src/sgml/ref/create_event_trigger.sgml b/doc/src/sgml/ref/create_event_trigger.sgml index 7decfbb983..9652f02412 100644 --- a/doc/src/sgml/ref/create_event_trigger.sgml +++ b/doc/src/sgml/ref/create_event_trigger.sgml @@ -33,7 +33,7 @@ CREATE EVENT TRIGGER name CREATE EVENT TRIGGER creates a new event trigger. - Whenever the designated event occurs and the WHEN condition + Whenever the designated event occurs and the WHEN condition associated with the trigger, if any, is satisfied, the trigger function will be executed. For a general introduction to event triggers, see . The user who creates an event trigger @@ -85,8 +85,8 @@ CREATE EVENT TRIGGER name A list of values for the associated filter_variable - for which the trigger should fire. For TAG, this means a - list of command tags (e.g. 'DROP FUNCTION'). + for which the trigger should fire. For TAG, this means a + list of command tags (e.g. 'DROP FUNCTION'). diff --git a/doc/src/sgml/ref/create_extension.sgml b/doc/src/sgml/ref/create_extension.sgml index 14e910115a..a3a7892812 100644 --- a/doc/src/sgml/ref/create_extension.sgml +++ b/doc/src/sgml/ref/create_extension.sgml @@ -39,7 +39,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name Loading an extension essentially amounts to running the extension's script - file. The script will typically create new SQL objects such as + file. The script will typically create new SQL objects such as functions, data types, operators and index support methods. CREATE EXTENSION additionally records the identities of all the created objects, so that they can be dropped again if @@ -62,7 +62,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if an extension with the same name already @@ -97,17 +97,17 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name - If the extension specifies a schema parameter in its + If the extension specifies a schema parameter in its control file, then that schema cannot be overridden with - a SCHEMA clause. Normally, an error will be raised if - a SCHEMA clause is given and it conflicts with the - extension's schema parameter. However, if - the CASCADE clause is also given, + a SCHEMA clause. Normally, an error will be raised if + a SCHEMA clause is given and it conflicts with the + extension's schema parameter. However, if + the CASCADE clause is also given, then schema_name is ignored when it conflicts. The given schema_name will be used for installation of any needed extensions that do not - specify schema in their control files. + specify schema in their control files. @@ -134,13 +134,13 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name old_version - FROM old_version + FROM old_version must be specified when, and only when, you are attempting to install - an extension that replaces an old style module that is just + an extension that replaces an old style module that is just a collection of objects not packaged into an extension. This option - causes CREATE EXTENSION to run an alternative installation + causes CREATE EXTENSION to run an alternative installation script that absorbs the existing objects into the extension, instead - of creating new objects. Be careful that SCHEMA specifies + of creating new objects. 
Be careful that SCHEMA specifies the schema containing these pre-existing objects. @@ -150,7 +150,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name extension's author, and might vary if there is more than one version of the old-style module that can be upgraded into an extension. For the standard additional modules supplied with pre-9.1 - PostgreSQL, use unpackaged + PostgreSQL, use unpackaged for old_version when updating a module to extension style. @@ -158,12 +158,12 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name - CASCADE + CASCADE Automatically install any extensions that this extension depends on that are not already installed. Their dependencies are likewise - automatically installed, recursively. The SCHEMA clause, + automatically installed, recursively. The SCHEMA clause, if given, applies to all extensions that get installed this way. Other options of the statement are not applied to automatically-installed extensions; in particular, their default @@ -178,7 +178,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name Notes - Before you can use CREATE EXTENSION to load an extension + Before you can use CREATE EXTENSION to load an extension into a database, the extension's supporting files must be installed. Information about installing the extensions supplied with PostgreSQL can be found in @@ -211,13 +211,13 @@ CREATE EXTENSION hstore; - Update a pre-9.1 installation of hstore into + Update a pre-9.1 installation of hstore into extension style: CREATE EXTENSION hstore SCHEMA public FROM unpackaged; Be careful to specify the schema in which you installed the existing - hstore objects. + hstore objects. @@ -225,7 +225,7 @@ CREATE EXTENSION hstore SCHEMA public FROM unpackaged; Compatibility - CREATE EXTENSION is a PostgreSQL + CREATE EXTENSION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml index 1161e05d1c..87403a55e3 100644 --- a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml @@ -117,7 +117,7 @@ CREATE FOREIGN DATA WRAPPER name Notes - PostgreSQL's foreign-data functionality is still under + PostgreSQL's foreign-data functionality is still under active development. Optimization of queries is primitive (and mostly left to the wrapper, too). Thus, there is considerable room for future performance improvements. @@ -128,22 +128,22 @@ CREATE FOREIGN DATA WRAPPER name Examples - Create a useless foreign-data wrapper dummy: + Create a useless foreign-data wrapper dummy: CREATE FOREIGN DATA WRAPPER dummy; - Create a foreign-data wrapper file with - handler function file_fdw_handler: + Create a foreign-data wrapper file with + handler function file_fdw_handler: CREATE FOREIGN DATA WRAPPER file HANDLER file_fdw_handler; - Create a foreign-data wrapper mywrapper with some + Create a foreign-data wrapper mywrapper with some options: CREATE FOREIGN DATA WRAPPER mywrapper @@ -159,7 +159,7 @@ CREATE FOREIGN DATA WRAPPER mywrapper 9075-9 (SQL/MED), with the exception that the HANDLER and VALIDATOR clauses are extensions and the standard clauses LIBRARY and LANGUAGE - are not implemented in PostgreSQL. + are not implemented in PostgreSQL. 
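As an illustration of the CASCADE option of CREATE EXTENSION described above: installing earthdistance, whose control file declares a dependency on cube, pulls the dependency in automatically (assuming the files for both extensions are installed):

CREATE EXTENSION earthdistance CASCADE;  -- also installs cube

Without CASCADE, the same command would fail with an error reporting the missing required extension.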
diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml index f514b2d59f..47705fd187 100644 --- a/doc/src/sgml/ref/create_foreign_table.sgml +++ b/doc/src/sgml/ref/create_foreign_table.sgml @@ -62,7 +62,7 @@ CHECK ( expression ) [ NO INHERIT ] If a schema name is given (for example, CREATE FOREIGN TABLE - myschema.mytable ...) then the table is created in the specified + myschema.mytable ...) then the table is created in the specified schema. Otherwise it is created in the current schema. The name of the foreign table must be distinct from the name of any other foreign table, table, sequence, index, @@ -95,7 +95,7 @@ CHECK ( expression ) [ NO INHERIT ] - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a relation with the same name already exists. @@ -140,7 +140,7 @@ CHECK ( expression ) [ NO INHERIT ] COLLATE collation - The COLLATE clause assigns a collation to + The COLLATE clause assigns a collation to the column (which must be of a collatable data type). If not specified, the column data type's default collation is used. @@ -151,7 +151,7 @@ CHECK ( expression ) [ NO INHERIT ] INHERITS ( parent_table [, ... ] ) - The optional INHERITS clause specifies a list of + The optional INHERITS clause specifies a list of tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables. See the similar form of @@ -166,7 +166,7 @@ CHECK ( expression ) [ NO INHERIT ] An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, - so constraint names like col must be positive can be used + so constraint names like col must be positive can be used to communicate helpful constraint information to client applications. (Double-quotes are needed to specify constraint names that contain spaces.) If a constraint name is not specified, the system generates a name. @@ -175,7 +175,7 @@ CHECK ( expression ) [ NO INHERIT ] - NOT NULL + NOT NULL The column is not allowed to contain null values. @@ -184,7 +184,7 @@ CHECK ( expression ) [ NO INHERIT ] - NULL + NULL The column is allowed to contain null values. This is the default. @@ -202,7 +202,7 @@ CHECK ( expression ) [ NO INHERIT ] CHECK ( expression ) [ NO INHERIT ] - The CHECK clause specifies an expression producing a + The CHECK clause specifies an expression producing a Boolean result which each row in the foreign table is expected to satisfy; that is, the expression should produce TRUE or UNKNOWN, never FALSE, for all rows in the foreign table. @@ -219,7 +219,7 @@ CHECK ( expression ) [ NO INHERIT ] - A constraint marked with NO INHERIT will not propagate to + A constraint marked with NO INHERIT will not propagate to child tables. @@ -230,7 +230,7 @@ CHECK ( expression ) [ NO INHERIT ] default_expr - The DEFAULT clause assigns a default data value for + The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). 
The @@ -279,9 +279,9 @@ CHECK ( expression ) [ NO INHERIT ] Notes - Constraints on foreign tables (such as CHECK - or NOT NULL clauses) are not enforced by the - core PostgreSQL system, and most foreign data wrappers + Constraints on foreign tables (such as CHECK + or NOT NULL clauses) are not enforced by the + core PostgreSQL system, and most foreign data wrappers do not attempt to enforce them either; that is, the constraint is simply assumed to hold true. There would be little point in such enforcement since it would only apply to rows inserted or updated via @@ -300,7 +300,7 @@ CHECK ( expression ) [ NO INHERIT ] - Although PostgreSQL does not attempt to enforce + Although PostgreSQL does not attempt to enforce constraints on foreign tables, it does assume that they are correct for purposes of query optimization. If there are rows visible in the foreign table that do not satisfy a declared constraint, queries on @@ -314,8 +314,8 @@ CHECK ( expression ) [ NO INHERIT ] Examples - Create foreign table films, which will be accessed through - the server film_server: + Create foreign table films, which will be accessed through + the server film_server: CREATE FOREIGN TABLE films ( @@ -330,9 +330,9 @@ SERVER film_server; - Create foreign table measurement_y2016m07, which will be - accessed through the server server_07, as a partition - of the range partitioned table measurement: + Create foreign table measurement_y2016m07, which will be + accessed through the server server_07, as a partition + of the range partitioned table measurement: CREATE FOREIGN TABLE measurement_y2016m07 @@ -348,10 +348,10 @@ CREATE FOREIGN TABLE measurement_y2016m07 The CREATE FOREIGN TABLE command largely conforms to the SQL standard; however, much as with - CREATE TABLE, - NULL constraints and zero-column foreign tables are permitted. + CREATE TABLE, + NULL constraints and zero-column foreign tables are permitted. The ability to specify column default values is also - a PostgreSQL extension. Table inheritance, in the form + a PostgreSQL extension. Table inheritance, in the form defined by PostgreSQL, is nonstandard. diff --git a/doc/src/sgml/ref/create_function.sgml b/doc/src/sgml/ref/create_function.sgml index 072e033687..97cb9b7fc8 100644 --- a/doc/src/sgml/ref/create_function.sgml +++ b/doc/src/sgml/ref/create_function.sgml @@ -58,7 +58,7 @@ CREATE [ OR REPLACE ] FUNCTION The name of the new function must not match any existing function with the same input argument types in the same schema. However, functions of different argument types can share a name (this is - called overloading). + called overloading). @@ -68,13 +68,13 @@ CREATE [ OR REPLACE ] FUNCTION tried, you would actually be creating a new, distinct function). Also, CREATE OR REPLACE FUNCTION will not let you change the return type of an existing function. To do that, - you must drop and recreate the function. (When using OUT + you must drop and recreate the function. (When using OUT parameters, that means you cannot change the types of any - OUT parameters except by dropping the function.) + OUT parameters except by dropping the function.) - When CREATE OR REPLACE FUNCTION is used to replace an + When CREATE OR REPLACE FUNCTION is used to replace an existing function, the ownership and permissions of the function do not change. All other function properties are assigned the values specified or implied in the command. You must own the function @@ -87,7 +87,7 @@ CREATE [ OR REPLACE ] FUNCTION triggers, etc. that refer to the old function. 
Use CREATE OR REPLACE FUNCTION to change a function definition without breaking objects that refer to the function. - Also, ALTER FUNCTION can be used to change most of the + Also, ALTER FUNCTION can be used to change most of the auxiliary properties of an existing function. @@ -121,12 +121,12 @@ CREATE [ OR REPLACE ] FUNCTION - The mode of an argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. - Only OUT arguments can follow a VARIADIC one. - Also, OUT and INOUT arguments cannot be used - together with the RETURNS TABLE notation. + The mode of an argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. + Only OUT arguments can follow a VARIADIC one. + Also, OUT and INOUT arguments cannot be used + together with the RETURNS TABLE notation. @@ -160,7 +160,7 @@ CREATE [ OR REPLACE ] FUNCTION Depending on the implementation language it might also be allowed - to specify pseudo-types such as cstring. + to specify pseudo-types such as cstring. Pseudo-types indicate that the actual argument type is either incompletely specified, or outside the set of ordinary SQL data types. @@ -183,7 +183,7 @@ CREATE [ OR REPLACE ] FUNCTION An expression to be used as default value if the parameter is not specified. The expression has to be coercible to the argument type of the parameter. - Only input (including INOUT) parameters can have a default + Only input (including INOUT) parameters can have a default value. All input parameters following a parameter with a default value must have default values as well. @@ -199,15 +199,15 @@ CREATE [ OR REPLACE ] FUNCTION can be a base, composite, or domain type, or can reference the type of a table column. Depending on the implementation language it might also be allowed - to specify pseudo-types such as cstring. + to specify pseudo-types such as cstring. If the function is not supposed to return a value, specify - void as the return type. + void as the return type. - When there are OUT or INOUT parameters, - the RETURNS clause can be omitted. If present, it + When there are OUT or INOUT parameters, + the RETURNS clause can be omitted. If present, it must agree with the result type implied by the output parameters: - RECORD if there are multiple output parameters, or + RECORD if there are multiple output parameters, or the same type as the single output parameter. @@ -229,10 +229,10 @@ CREATE [ OR REPLACE ] FUNCTION - The name of an output column in the RETURNS TABLE + The name of an output column in the RETURNS TABLE syntax. This is effectively another way of declaring a named - OUT parameter, except that RETURNS TABLE - also implies RETURNS SETOF. + OUT parameter, except that RETURNS TABLE + also implies RETURNS SETOF. @@ -242,7 +242,7 @@ CREATE [ OR REPLACE ] FUNCTION - The data type of an output column in the RETURNS TABLE + The data type of an output column in the RETURNS TABLE syntax. @@ -284,9 +284,9 @@ CREATE [ OR REPLACE ] FUNCTION WINDOW indicates that the function is a - window function rather than a plain function. + window function rather than a plain function. This is currently only useful for functions written in C. - The WINDOW attribute cannot be changed when + The WINDOW attribute cannot be changed when replacing an existing function definition. @@ -321,20 +321,20 @@ CREATE [ OR REPLACE ] FUNCTION result could change across SQL statements. This is the appropriate selection for functions whose results depend on database lookups, parameter variables (such as the current time zone), etc. 
(It is - inappropriate for AFTER triggers that wish to + inappropriate for AFTER triggers that wish to query rows modified by the current command.) Also note - that the current_timestamp family of functions qualify + that the current_timestamp family of functions qualify as stable, since their values do not change within a transaction. VOLATILE indicates that the function value can change even within a single table scan, so no optimizations can be made. Relatively few database functions are volatile in this sense; - some examples are random(), currval(), - timeofday(). But note that any function that has + some examples are random(), currval(), + timeofday(). But note that any function that has side-effects must be classified volatile, even if its result is quite predictable, to prevent calls from being optimized away; an example is - setval(). + setval(). @@ -430,11 +430,11 @@ CREATE [ OR REPLACE ] FUNCTION Functions should be labeled parallel unsafe if they modify any database state, or if they make changes to the transaction such as using sub-transactions, or if they access sequences or attempt to make - persistent changes to settings (e.g. setval). They should + persistent changes to settings (e.g. setval). They should be labeled as parallel restricted if they access temporary tables, client connection state, cursors, prepared statements, or miscellaneous backend-local state which the system cannot synchronize in parallel mode - (e.g. setseed cannot be executed other than by the group + (e.g. setseed cannot be executed other than by the group leader because a change made by another process would not be reflected in the leader). In general, if a function is labeled as being safe when it is restricted or unsafe, or if it is labeled as being restricted when @@ -443,7 +443,7 @@ CREATE [ OR REPLACE ] FUNCTION exhibit totally undefined behavior if mislabeled, since there is no way for the system to protect itself against arbitrary C code, but in most likely cases the result will be no worse than for any other function. - If in doubt, functions should be labeled as UNSAFE, which is + If in doubt, functions should be labeled as UNSAFE, which is the default. @@ -483,23 +483,23 @@ CREATE [ OR REPLACE ] FUNCTION value - The SET clause causes the specified configuration + The SET clause causes the specified configuration parameter to be set to the specified value when the function is entered, and then restored to its prior value when the function exits. - SET FROM CURRENT saves the value of the parameter that - is current when CREATE FUNCTION is executed as the value + SET FROM CURRENT saves the value of the parameter that + is current when CREATE FUNCTION is executed as the value to be applied when the function is entered. - If a SET clause is attached to a function, then - the effects of a SET LOCAL command executed inside the + If a SET clause is attached to a function, then + the effects of a SET LOCAL command executed inside the function for the same variable are restricted to the function: the configuration parameter's prior value is still restored at function exit. However, an ordinary - SET command (without LOCAL) overrides the - SET clause, much as it would do for a previous SET - LOCAL command: the effects of such a command will persist after + SET command (without LOCAL) overrides the + SET clause, much as it would do for a previous SET + LOCAL command: the effects of such a command will persist after function exit, unless the current transaction is rolled back. 
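A minimal sketch of the SET clause behavior just described (the function name and setting value are illustrative only):

CREATE FUNCTION report_work_mem() RETURNS text
    AS $$ SELECT current_setting('work_mem') $$
    LANGUAGE SQL
    SET work_mem = '64MB';  -- applies only while the function runs

While a call of this function is executing, work_mem reads as 64MB; the prior value is restored automatically when the function returns.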
@@ -570,7 +570,7 @@ CREATE [ OR REPLACE ] FUNCTION
- isStrict
+ isStrict
 Equivalent to STRICT or RETURNS NULL ON NULL INPUT.
@@ -579,7 +579,7 @@ CREATE [ OR REPLACE ] FUNCTION
- isCachable
+ isCachable
 isCachable is an obsolete equivalent of
 IMMUTABLE; it's still accepted for
@@ -619,7 +619,7 @@ CREATE [ OR REPLACE ] FUNCTION
 Two functions are considered the same if they have the same names and
- input argument types, ignoring any OUT
+ input argument types, ignoring any OUT
 parameters. Thus for example these declarations conflict:
CREATE FUNCTION foo(int) ...
CREATE FUNCTION foo(int, out text) ...
@@ -635,7 +635,7 @@ CREATE FUNCTION foo(int, out text) ...
CREATE FUNCTION foo(int) ...
CREATE FUNCTION foo(int, int default 42) ...
- A call foo(10) will fail due to the ambiguity about which
+ A call foo(10) will fail due to the ambiguity about which
 function should be called.
@@ -648,16 +648,16 @@ CREATE FUNCTION foo(int, int default 42) ...
 The full SQL type syntax is allowed for declaring a function's
 arguments and return value. However, parenthesized type modifiers
 (e.g., the precision field for
- type numeric) are discarded by CREATE FUNCTION.
+ type numeric) are discarded by CREATE FUNCTION.
 Thus for example
- CREATE FUNCTION foo (varchar(10)) ...
+ CREATE FUNCTION foo (varchar(10)) ...
 is exactly the same as
- CREATE FUNCTION foo (varchar) ....
+ CREATE FUNCTION foo (varchar) ....
 When replacing an existing function with CREATE OR REPLACE
- FUNCTION, there are restrictions on changing parameter names.
+ FUNCTION, there are restrictions on changing parameter names.
 You cannot change the name already assigned to any input parameter
 (although you can add names to parameters that had none before).
 If there is more than one output parameter, you cannot change the
@@ -668,9 +668,9 @@ CREATE FUNCTION foo(int, int default 42) ...
- If a function is declared STRICT with a VARIADIC
+ If a function is declared STRICT with a VARIADIC
 argument, the strictness check tests that the variadic array as
- a whole is non-null. The function will still be called if the
+ a whole is non-null. The function will still be called if the
 array has null elements.
@@ -723,7 +723,7 @@ CREATE FUNCTION dup(int) RETURNS dup_result
SELECT * FROM dup(42);
- Another way to return multiple columns is to use a TABLE
+ Another way to return multiple columns is to use a TABLE
 function:
CREATE FUNCTION dup(int) RETURNS TABLE(f1 int, f2 text)
@@ -732,8 +732,8 @@ SELECT * FROM dup(42);
- However, a TABLE function is different from the
- preceding examples, because it actually returns a set
+ However, a TABLE function is different from the
+ preceding examples, because it actually returns a set
 of records, not just one record.
@@ -742,8 +742,8 @@ SELECT * FROM dup(42);
 Writing <literal>SECURITY DEFINER</literal> Functions Safely
- search_path configuration parameter
- use in securing functions
+ search_path configuration parameter
+ use in securing functions
@@ -758,7 +758,7 @@ SELECT * FROM dup(42);
 temporary-table schema, which is searched first by default, and
 is normally writable by anyone. A secure arrangement can be obtained by
 forcing the temporary schema to be searched last. To do this,
- write pg_temp as the last entry in search_path.
+ write pg_temp as the last entry in search_path.
 This function illustrates safe usage:
@@ -778,27 +778,27 @@ $$ LANGUAGE plpgsql SET search_path = admin, pg_temp;
- This function's intention is to access a table admin.pwds.
- But without the SET clause, or with a SET clause - mentioning only admin, the function could be subverted by - creating a temporary table named pwds. + This function's intention is to access a table admin.pwds. + But without the SET clause, or with a SET clause + mentioning only admin, the function could be subverted by + creating a temporary table named pwds. Before PostgreSQL version 8.3, the - SET clause was not available, and so older functions may + SET clause was not available, and so older functions may contain rather complicated logic to save, set, and restore - search_path. The SET clause is far easier + search_path. The SET clause is far easier to use for this purpose. Another point to keep in mind is that by default, execute privilege - is granted to PUBLIC for newly created functions + is granted to PUBLIC for newly created functions (see for more information). Frequently you will wish to restrict use of a security definer function to only some users. To do that, you must revoke - the default PUBLIC privileges and then grant execute + the default PUBLIC privileges and then grant execute privilege selectively. To avoid having a window where the new function is accessible to all, create it and set the privileges within a single transaction. For example: diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index a462be790f..bb2601dc8c 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -51,8 +51,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] upper(col) would allow the clause - WHERE upper(col) = 'JIM' to use an index. + upper(col) would allow the clause + WHERE upper(col) = 'JIM' to use an index. @@ -85,7 +85,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] All functions and operators used in an index definition must be - immutable, that is, their results must depend only on + immutable, that is, their results must depend only on their arguments and never on any outside influence (such as the contents of another table or the current time). This restriction ensures that the behavior of the index is well-defined. To use a @@ -115,7 +115,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] CONCURRENTLY - When this option is used, PostgreSQL will build the + When this option is used, PostgreSQL will build the index without taking any locks that prevent concurrent inserts, updates, or deletes on the table; whereas a standard index build locks out writes (but not reads) on the table until it's done. @@ -144,7 +144,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] The name of the index to be created. No schema name can be included here; the index is always created in the same schema as its parent - table. If the name is omitted, PostgreSQL chooses a + table. If the name is omitted, PostgreSQL chooses a suitable name based on the parent table's name and the indexed column name(s). @@ -166,8 +166,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] The name of the index method to be used. Choices are btree, hash, - gist, spgist, gin, and - brin. + gist, spgist, gin, and + brin. The default method is btree. @@ -217,7 +217,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - ASC + ASC Specifies ascending sort order (which is the default). @@ -226,7 +226,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - DESC + DESC Specifies descending sort order. 
@@ -235,21 +235,21 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - NULLS FIRST + NULLS FIRST Specifies that nulls sort before non-nulls. This is the default - when DESC is specified. + when DESC is specified. - NULLS LAST + NULLS LAST Specifies that nulls sort after non-nulls. This is the default - when DESC is not specified. + when DESC is not specified. @@ -292,15 +292,15 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] Index Storage Parameters - The optional WITH clause specifies storage - parameters for the index. Each index method has its own set of allowed + The optional WITH clause specifies storage + parameters for the index. Each index method has its own set of allowed storage parameters. The B-tree, hash, GiST and SP-GiST index methods all accept this parameter: - fillfactor + fillfactor The fillfactor for an index is a percentage that determines how full @@ -327,14 +327,14 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - buffering + buffering Determines whether the buffering build technique described in is used to build the index. With - OFF it is disabled, with ON it is enabled, and - with AUTO it is initially disabled, but turned on - on-the-fly once the index size reaches . The default is AUTO. + OFF it is disabled, with ON it is enabled, and + with AUTO it is initially disabled, but turned on + on-the-fly once the index size reaches . The default is AUTO. @@ -346,23 +346,23 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - fastupdate + fastupdate This setting controls usage of the fast update technique described in . It is a Boolean parameter: - ON enables fast update, OFF disables it. - (Alternative spellings of ON and OFF are + ON enables fast update, OFF disables it. + (Alternative spellings of ON and OFF are allowed as described in .) The - default is ON. + default is ON. - Turning fastupdate off via ALTER INDEX prevents + Turning fastupdate off via ALTER INDEX prevents future insertions from going into the list of pending index entries, but does not in itself flush previous entries. You might want to - VACUUM the table or call gin_clean_pending_list + VACUUM the table or call gin_clean_pending_list function afterward to ensure the pending list is emptied. @@ -371,7 +371,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - gin_pending_list_limit + gin_pending_list_limit Custom parameter. @@ -382,23 +382,23 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - BRIN indexes accept different parameters: + BRIN indexes accept different parameters: - pages_per_range + pages_per_range Defines the number of table blocks that make up one block range for - each entry of a BRIN index (see - for more details). The default is 128. + each entry of a BRIN index (see + for more details). The default is 128. - autosummarize + autosummarize Defines whether a summarization run is invoked for the previous page @@ -419,7 +419,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] Creating an index can interfere with regular operation of a database. - Normally PostgreSQL locks the table to be indexed against + Normally PostgreSQL locks the table to be indexed against writes and performs the entire index build with a single scan of the table. 
Other transactions can still read the table, but if they try to insert, update, or delete rows in the table they will block until the @@ -430,11 +430,11 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - PostgreSQL supports building indexes without locking + PostgreSQL supports building indexes without locking out writes. This method is invoked by specifying the - CONCURRENTLY option of CREATE INDEX. + CONCURRENTLY option of CREATE INDEX. When this option is used, - PostgreSQL must perform two scans of the table, and in + PostgreSQL must perform two scans of the table, and in addition it must wait for all existing transactions that could potentially modify or use the index to terminate. Thus this method requires more total work than a standard index build and takes @@ -452,7 +452,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] ) predating the second scan to terminate. Then finally the index can be marked ready for use, - and the CREATE INDEX command terminates. + and the CREATE INDEX command terminates. Even then, however, the index may not be immediately usable for queries: in the worst case, it cannot be used as long as transactions exist that predate the start of the index build. @@ -460,11 +460,11 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] If a problem arises while scanning the table, such as a deadlock or a - uniqueness violation in a unique index, the CREATE INDEX - command will fail but leave behind an invalid index. This index + uniqueness violation in a unique index, the CREATE INDEX + command will fail but leave behind an invalid index. This index will be ignored for querying purposes because it might be incomplete; - however it will still consume update overhead. The psql - \d command will report such an index as INVALID: + however it will still consume update overhead. The psql + \d command will report such an index as INVALID: postgres=# \d tab @@ -478,8 +478,8 @@ Indexes: The recommended recovery method in such cases is to drop the index and try again to perform - CREATE INDEX CONCURRENTLY. (Another possibility is to rebuild - the index with REINDEX. However, since REINDEX + CREATE INDEX CONCURRENTLY. (Another possibility is to rebuild + the index with REINDEX. However, since REINDEX does not support concurrent builds, this option is unlikely to seem attractive.) @@ -490,7 +490,7 @@ Indexes: when the second table scan begins. This means that constraint violations could be reported in other queries prior to the index becoming available for use, or even in cases where the index build eventually fails. Also, - if a failure does occur in the second scan, the invalid index + if a failure does occur in the second scan, the invalid index continues to enforce its uniqueness constraint afterwards. @@ -505,8 +505,8 @@ Indexes: same table to occur in parallel, but only one concurrent index build can occur on a table at a time. In both cases, no other types of schema modification on the table are allowed meanwhile. Another difference - is that a regular CREATE INDEX command can be performed within - a transaction block, but CREATE INDEX CONCURRENTLY cannot. + is that a regular CREATE INDEX command can be performed within + a transaction block, but CREATE INDEX CONCURRENTLY cannot. 
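The recovery sequence recommended above can be sketched as follows, using hypothetical names for the index, table, and column:

DROP INDEX tab_col_idx;
CREATE INDEX CONCURRENTLY tab_col_idx ON tab (col);

Remember that the second command cannot be run inside a transaction block.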
@@ -547,17 +547,17 @@ Indexes: For index methods that support ordered scans (currently, only B-tree), - the optional clauses ASC, DESC, NULLS - FIRST, and/or NULLS LAST can be specified to modify + the optional clauses ASC, DESC, NULLS + FIRST, and/or NULLS LAST can be specified to modify the sort ordering of the index. Since an ordered index can be scanned either forward or backward, it is not normally useful to create a - single-column DESC index — that sort ordering is already + single-column DESC index — that sort ordering is already available with a regular index. The value of these options is that multicolumn indexes can be created that match the sort ordering requested by a mixed-ordering query, such as SELECT ... ORDER BY x ASC, y - DESC. The NULLS options are useful if you need to support - nulls sort low behavior, rather than the default nulls - sort high, in queries that depend on indexes to avoid sorting steps. + DESC. The NULLS options are useful if you need to support + nulls sort low behavior, rather than the default nulls + sort high, in queries that depend on indexes to avoid sorting steps. @@ -577,8 +577,8 @@ Indexes: Prior releases of PostgreSQL also had an R-tree index method. This method has been removed because it had no significant advantages over the GiST method. - If USING rtree is specified, CREATE INDEX - will interpret it as USING gist, to simplify conversion + If USING rtree is specified, CREATE INDEX + will interpret it as USING gist, to simplify conversion of old databases to GiST. @@ -595,13 +595,13 @@ CREATE UNIQUE INDEX title_idx ON films (title); - To create an index on the expression lower(title), + To create an index on the expression lower(title), allowing efficient case-insensitive searches: CREATE INDEX ON films ((lower(title))); (In this example we have chosen to omit the index name, so the system - will choose a name, typically films_lower_idx.) + will choose a name, typically films_lower_idx.) @@ -626,16 +626,16 @@ CREATE UNIQUE INDEX title_idx ON films (title) WITH (fillfactor = 70); - To create a GIN index with fast updates disabled: + To create a GIN index with fast updates disabled: CREATE INDEX gin_idx ON documents_table USING GIN (locations) WITH (fastupdate = off); - To create an index on the column code in the table - films and have the index reside in the tablespace - indexspace: + To create an index on the column code in the table + films and have the index reside in the tablespace + indexspace: CREATE INDEX code_idx ON films (code) TABLESPACE indexspace; diff --git a/doc/src/sgml/ref/create_language.sgml b/doc/src/sgml/ref/create_language.sgml index 75165b677f..20d56a766f 100644 --- a/doc/src/sgml/ref/create_language.sgml +++ b/doc/src/sgml/ref/create_language.sgml @@ -40,14 +40,14 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE not CREATE LANGUAGE. Direct use of CREATE LANGUAGE should now be confined to - extension installation scripts. If you have a bare + extension installation scripts. If you have a bare language in your database, perhaps as a result of an upgrade, you can convert it to an extension using - CREATE EXTENSION langname FROM + CREATE EXTENSION langname FROM unpackaged. @@ -67,11 +67,11 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE pg_pltemplate catalog and is marked - as allowed to be created by database owners (tmpldbacreate + as allowed to be created by database owners (tmpldbacreate is true). 
The default is that trusted languages can be created by database owners, but this can be adjusted by superusers by modifying the contents of pg_pltemplate. @@ -101,9 +101,9 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE internal, which will be the DO command's + type internal, which will be the DO command's internal representation, and it will typically return - void. The return value of the handler is ignored. + void. The return value of the handler is ignored. @@ -204,7 +204,7 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE - The TRUSTED option and the support function name(s) are + The TRUSTED option and the support function name(s) are ignored if the server has an entry for the specified language - name in pg_pltemplate. + name in pg_pltemplate. @@ -243,7 +243,7 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE pg_pltemplate. But when there is an entry, + in pg_pltemplate. But when there is an entry, the functions need not already exist; they will be automatically defined if not present in the database. - (This might result in CREATE LANGUAGE failing, if the + (This might result in CREATE LANGUAGE failing, if the shared library that implements the language is not available in the installation.) @@ -269,11 +269,11 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE name [ DEFAUL CREATE OPERATOR CLASS creates a new operator class. An operator class defines how a particular data type can be used with an index. The operator class specifies that certain operators will fill - particular roles or strategies for this data type and this + particular roles or strategies for this data type and this index method. The operator class also specifies the support procedures to be used by the index method when the operator class is selected for an @@ -69,8 +69,8 @@ CREATE OPERATOR CLASS name [ DEFAUL Related operator classes can be grouped into operator - families. To add a new operator class to an existing family, - specify the FAMILY option in CREATE OPERATOR + families. To add a new operator class to an existing family, + specify the FAMILY option in CREATE OPERATOR CLASS. Without this option, the new class is placed into a family named the same as the new class (creating that family if it doesn't already exist). @@ -96,7 +96,7 @@ CREATE OPERATOR CLASS name [ DEFAUL - DEFAULT + DEFAULT If present, the operator class will become the default @@ -159,15 +159,15 @@ CREATE OPERATOR CLASS name [ DEFAUL op_type - In an OPERATOR clause, - the operand data type(s) of the operator, or NONE to + In an OPERATOR clause, + the operand data type(s) of the operator, or NONE to signify a left-unary or right-unary operator. The operand data types can be omitted in the normal case where they are the same as the operator class's data type. - In a FUNCTION clause, the operand data type(s) the + In a FUNCTION clause, the operand data type(s) the function is intended to support, if different from the input data type(s) of the function (for B-tree comparison functions and hash functions) @@ -175,7 +175,7 @@ CREATE OPERATOR CLASS name [ DEFAUL functions in GiST, SP-GiST, GIN and BRIN operator classes). These defaults are correct, and so op_type need not be specified in - FUNCTION clauses, except for the case of a B-tree sort + FUNCTION clauses, except for the case of a B-tree sort support function that is meant to support cross-data-type comparisons. 
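To make the pieces described above concrete, here is a skeletal B-tree operator class for a hypothetical data type mytype; the five comparison operators and the support function are assumed to have been created already:

CREATE OPERATOR CLASS mytype_ops
    DEFAULT FOR TYPE mytype USING btree AS
        OPERATOR 1 < ,     -- less than
        OPERATOR 2 <= ,    -- less than or equal
        OPERATOR 3 = ,     -- equal
        OPERATOR 4 >= ,    -- greater than or equal
        OPERATOR 5 > ,     -- greater than
        FUNCTION 1 mytype_cmp(mytype, mytype);

Strategy numbers 1 through 5 are the roles B-tree expects for less-than through greater-than, and support function 1 is the required comparison function.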
@@ -191,8 +191,8 @@ CREATE OPERATOR CLASS name [ DEFAUL - If neither FOR SEARCH nor FOR ORDER BY is - specified, FOR SEARCH is the default. + If neither FOR SEARCH nor FOR ORDER BY is + specified, FOR SEARCH is the default. @@ -233,11 +233,11 @@ CREATE OPERATOR CLASS name [ DEFAUL The data type actually stored in the index. Normally this is the same as the column data type, but some index methods (currently GiST, GIN and BRIN) allow it to be different. The - STORAGE clause must be omitted unless the index + STORAGE clause must be omitted unless the index method allows a different type to be used. - If the column data_type is specified - as anyarray, the storage_type - can be declared as anyelement to indicate that the index + If the column data_type is specified + as anyarray, the storage_type + can be declared as anyelement to indicate that the index entries are members of the element type belonging to the actual array type that each particular index is created for. @@ -246,7 +246,7 @@ CREATE OPERATOR CLASS name [ DEFAUL - The OPERATOR, FUNCTION, and STORAGE + The OPERATOR, FUNCTION, and STORAGE clauses can appear in any order. @@ -269,9 +269,9 @@ CREATE OPERATOR CLASS name [ DEFAUL - Before PostgreSQL 8.4, the OPERATOR - clause could include a RECHECK option. This is no longer - supported because whether an index operator is lossy is now + Before PostgreSQL 8.4, the OPERATOR + clause could include a RECHECK option. This is no longer + supported because whether an index operator is lossy is now determined on-the-fly at run time. This allows efficient handling of cases where an operator might or might not be lossy. @@ -282,7 +282,7 @@ CREATE OPERATOR CLASS name [ DEFAUL The following example command defines a GiST index operator class - for the data type _int4 (array of int4). See the + for the data type _int4 (array of int4). See the module for the complete example. diff --git a/doc/src/sgml/ref/create_operator.sgml b/doc/src/sgml/ref/create_operator.sgml index 818e3a2315..11c38fd38b 100644 --- a/doc/src/sgml/ref/create_operator.sgml +++ b/doc/src/sgml/ref/create_operator.sgml @@ -43,7 +43,7 @@ CREATE OPERATOR name ( - The operator name is a sequence of up to NAMEDATALEN-1 + The operator name is a sequence of up to NAMEDATALEN-1 (63 by default) characters from the following list: + - * / < > = ~ ! @ # % ^ & | ` ? @@ -72,7 +72,7 @@ CREATE OPERATOR name ( - The use of => as an operator name is deprecated. It may + The use of => as an operator name is deprecated. It may be disallowed altogether in a future release. @@ -86,10 +86,10 @@ CREATE OPERATOR name ( - At least one of LEFTARG and RIGHTARG must be defined. For + At least one of LEFTARG and RIGHTARG must be defined. For binary operators, both must be defined. For right unary - operators, only LEFTARG should be defined, while for left - unary operators only RIGHTARG should be defined. + operators, only LEFTARG should be defined, while for left + unary operators only RIGHTARG should be defined. @@ -122,11 +122,11 @@ CREATE OPERATOR name ( The name of the operator to be defined. See above for allowable characters. The name can be schema-qualified, for example - CREATE OPERATOR myschema.+ (...). If not, then + CREATE OPERATOR myschema.+ (...). If not, then the operator is created in the current schema. Two operators in the same schema can have the same name if they operate on different data types. This is called - overloading. + overloading. 
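As a sketch of a binary operator definition (the underlying function area_equal_procedure is hypothetical and must exist first):

CREATE OPERATOR === (
    LEFTARG = box,
    RIGHTARG = box,
    PROCEDURE = area_equal_procedure,  -- implements the operator
    COMMUTATOR = ===
);

Since both operands have the same type here, the operator can sensibly be declared as its own commutator.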
@@ -218,7 +218,7 @@ CREATE OPERATOR name ( To give a schema-qualified operator name in com_op or the other optional - arguments, use the OPERATOR() syntax, for example: + arguments, use the OPERATOR() syntax, for example: COMMUTATOR = OPERATOR(myschema.===) , @@ -233,18 +233,18 @@ COMMUTATOR = OPERATOR(myschema.===) , It is not possible to specify an operator's lexical precedence in - CREATE OPERATOR, because the parser's precedence behavior + CREATE OPERATOR, because the parser's precedence behavior is hard-wired. See for precedence details. - The obsolete options SORT1, SORT2, - LTCMP, and GTCMP were formerly used to + The obsolete options SORT1, SORT2, + LTCMP, and GTCMP were formerly used to specify the names of sort operators associated with a merge-joinable operator. This is no longer necessary, since information about associated operators is found by looking at B-tree operator families instead. If one of these options is given, it is ignored except - for implicitly setting MERGES true. + for implicitly setting MERGES true. diff --git a/doc/src/sgml/ref/create_opfamily.sgml b/doc/src/sgml/ref/create_opfamily.sgml index c4bcf0863e..ca5261b7a0 100644 --- a/doc/src/sgml/ref/create_opfamily.sgml +++ b/doc/src/sgml/ref/create_opfamily.sgml @@ -35,7 +35,7 @@ CREATE OPERATOR FAMILY name USING < compatible with these operator classes but not essential for the functioning of any individual index. (Operators and functions that are essential to indexes should be grouped within the relevant operator - class, rather than being loose in the operator family. + class, rather than being loose in the operator family. Typically, single-data-type operators are bound to operator classes, while cross-data-type operators can be loose in an operator family containing operator classes for both data types.) @@ -45,7 +45,7 @@ CREATE OPERATOR FAMILY name USING < The new operator family is initially empty. It should be populated by issuing subsequent CREATE OPERATOR CLASS commands to add contained operator classes, and optionally - ALTER OPERATOR FAMILY commands to add loose + ALTER OPERATOR FAMILY commands to add loose operators and their corresponding support functions. diff --git a/doc/src/sgml/ref/create_policy.sgml b/doc/src/sgml/ref/create_policy.sgml index 70df22c059..1bcf2de429 100644 --- a/doc/src/sgml/ref/create_policy.sgml +++ b/doc/src/sgml/ref/create_policy.sgml @@ -88,7 +88,7 @@ CREATE POLICY name ON If row-level security is enabled for a table, but no applicable policies - exist, a default deny policy is assumed, so that no rows will + exist, a default deny policy is assumed, so that no rows will be visible or updatable. @@ -188,9 +188,9 @@ CREATE POLICY name ON SELECT), and will not be - available for modification (in an UPDATE - or DELETE). Such rows are silently suppressed; no error + visible to the user (in a SELECT), and will not be + available for modification (in an UPDATE + or DELETE). Such rows are silently suppressed; no error is reported. 
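A minimal sketch of the default-deny behavior just described (table, column, and policy names are illustrative):

CREATE TABLE accounts (manager text, balance numeric);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
-- With row-level security enabled but no policy defined, users other
-- than the table owner see no rows at all (default deny).
CREATE POLICY manager_only ON accounts
    USING (manager = current_user);
-- Rows whose manager column matches the current user are now visible.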
@@ -223,7 +223,7 @@ CREATE POLICY name ON - ALL + ALL Using ALL for a policy means that it will apply @@ -254,7 +254,7 @@ CREATE POLICY name ON - SELECT + SELECT Using SELECT for a policy means that it will apply @@ -274,7 +274,7 @@ CREATE POLICY name ON - INSERT + INSERT Using INSERT for a policy means that it will apply @@ -295,7 +295,7 @@ CREATE POLICY name ON - UPDATE + UPDATE Using UPDATE for a policy means that it will apply @@ -347,14 +347,14 @@ CREATE POLICY name ON UPDATE command, if the existing row does not pass the USING expressions, an error will be thrown (the - UPDATE path will never be silently + UPDATE path will never be silently avoided). - DELETE + DELETE Using DELETE for a policy means that it will apply diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml index 62a5fd432e..b997d387e7 100644 --- a/doc/src/sgml/ref/create_publication.sgml +++ b/doc/src/sgml/ref/create_publication.sgml @@ -64,10 +64,10 @@ CREATE PUBLICATION name Specifies a list of tables to add to the publication. If - ONLY is specified before the table name, only - that table is added to the publication. If ONLY is not + ONLY is specified before the table name, only + that table is added to the publication. If ONLY is not specified, the table and all its descendant tables (if any) are added. - Optionally, * can be specified after the table name to + Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -138,7 +138,7 @@ CREATE PUBLICATION name To create a publication, the invoking user must have the - CREATE privilege for the current database. + CREATE privilege for the current database. (Of course, superusers bypass this check.) @@ -151,12 +151,12 @@ CREATE PUBLICATION name The tables added to a publication that publishes UPDATE and/or DELETE operations must have - REPLICA IDENTITY defined. Otherwise those operations will be + REPLICA IDENTITY defined. Otherwise those operations will be disallowed on those tables. - For an INSERT ... ON CONFLICT command, the publication will + For an INSERT ... ON CONFLICT command, the publication will publish the operation that actually results from the command. So depending on the outcome, it may be published as either INSERT or UPDATE, or it may not be published at all. @@ -203,7 +203,7 @@ CREATE PUBLICATION insert_only FOR TABLE mydata Compatibility - CREATE PUBLICATION is a PostgreSQL + CREATE PUBLICATION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml index 41670c4b05..4a4061a237 100644 --- a/doc/src/sgml/ref/create_role.sgml +++ b/doc/src/sgml/ref/create_role.sgml @@ -51,11 +51,11 @@ CREATE ROLE name [ [ WITH ] CREATE ROLE adds a new role to a PostgreSQL database cluster. A role is an entity that can own database objects and have database privileges; - a role can be considered a user, a group, or both + a role can be considered a user, a group, or both depending on how it is used. Refer to and for information about managing - users and authentication. You must have CREATEROLE + users and authentication. You must have CREATEROLE privilege or be a database superuser to use this command. @@ -83,7 +83,7 @@ CREATE ROLE name [ [ WITH ] NOSUPERUSER - These clauses determine whether the new role is a superuser, + These clauses determine whether the new role is a superuser, who can override all access restrictions within the database. Superuser status is dangerous and should be used only when really needed.
You must yourself be a superuser to create a new superuser. @@ -94,8 +94,8 @@ CREATE ROLE name [ [ WITH ] - CREATEDB - NOCREATEDB + CREATEDB + NOCREATEDB These clauses define a role's ability to create databases. If @@ -128,13 +128,13 @@ CREATE ROLE name [ [ WITH ] NOINHERIT - These clauses determine whether a role inherits the + These clauses determine whether a role inherits the privileges of roles it is a member of. A role with the INHERIT attribute can automatically use whatever database privileges have been granted to all roles it is directly or indirectly a member of. Without INHERIT, membership in another role - only grants the ability to SET ROLE to that other role; + only grants the ability to SET ROLE to that other role; the privileges of the other role are only available after having done so. If not specified, @@ -156,7 +156,7 @@ CREATE ROLE name [ [ WITH ] NOLOGIN is the default, except when - CREATE ROLE is invoked through its alternative spelling + CREATE ROLE is invoked through its alternative spelling . @@ -172,7 +172,7 @@ CREATE ROLE name [ [ WITH ] REPLICATION attribute is a very + A role having the REPLICATION attribute is a very highly privileged role, and should only be used on roles actually used for replication. If not specified, NOREPLICATION is the default. @@ -210,7 +210,7 @@ CREATE ROLE name [ [ WITH ] - [ ENCRYPTED ] PASSWORD password + [ ENCRYPTED ] PASSWORD password Sets the role's password. (A password is only of use for @@ -225,7 +225,7 @@ CREATE ROLE name [ [ WITH ] Specifying an empty string will also set the password to null, - but that was not the case before PostgreSQL + but that was not the case before PostgreSQL version 10. In earlier versions, an empty string could be used, or not, depending on the authentication method and the exact version, and libpq would refuse to use it in any case. @@ -235,12 +235,12 @@ CREATE ROLE name [ [ WITH ] The password is always stored encrypted in the system catalogs. The - ENCRYPTED keyword has no effect, but is accepted for + ENCRYPTED keyword has no effect, but is accepted for backwards compatibility. The method of encryption is determined by the configuration parameter . If the presented password string is already in MD5-encrypted or SCRAM-encrypted format, then it is stored as-is regardless of - password_encryption (since the system cannot decrypt + password_encryption (since the system cannot decrypt the specified encrypted password string, to encrypt it in a different format). This allows reloading of encrypted passwords during dump/restore. @@ -260,61 +260,61 @@ CREATE ROLE name [ [ WITH ] - IN ROLE role_name + IN ROLE role_name The IN ROLE clause lists one or more existing roles to which the new role will be immediately added as a new member. (Note that there is no option to add the new role as an - administrator; use a separate GRANT command to do that.) + administrator; use a separate GRANT command to do that.) - IN GROUP role_name + IN GROUP role_name IN GROUP is an obsolete spelling of - IN ROLE. + IN ROLE. - ROLE role_name + ROLE role_name The ROLE clause lists one or more existing roles which are automatically added as members of the new role. - (This in effect makes the new role a group.) + (This in effect makes the new role a group.) - ADMIN role_name + ADMIN role_name The ADMIN clause is like ROLE, but the named roles are added to the new role WITH ADMIN - OPTION, giving them the right to grant membership in this role + OPTION, giving them the right to grant membership in this role to others. 
- USER role_name + USER role_name The USER clause is an obsolete spelling of - the ROLE clause. + the ROLE clause. - SYSID uid + SYSID uid The SYSID clause is ignored, but is accepted @@ -332,8 +332,8 @@ CREATE ROLE name [ [ WITH ] to change the attributes of a role, and to remove a role. All the attributes - specified by CREATE ROLE can be modified by later - ALTER ROLE commands. + specified by CREATE ROLE can be modified by later + ALTER ROLE commands. @@ -344,42 +344,42 @@ CREATE ROLE name [ [ WITH ] - The VALID UNTIL clause defines an expiration time for a - password only, not for the role per se. In + The VALID UNTIL clause defines an expiration time for a + password only, not for the role per se. In particular, the expiration time is not enforced when logging in using a non-password-based authentication method. - The INHERIT attribute governs inheritance of grantable + The INHERIT attribute governs inheritance of grantable privileges (that is, access privileges for database objects and role memberships). It does not apply to the special role attributes set by - CREATE ROLE and ALTER ROLE. For example, being - a member of a role with CREATEDB privilege does not immediately - grant the ability to create databases, even if INHERIT is set; + CREATE ROLE and ALTER ROLE. For example, being + a member of a role with CREATEDB privilege does not immediately + grant the ability to create databases, even if INHERIT is set; it would be necessary to become that role via before creating a database. - The INHERIT attribute is the default for reasons of backwards + The INHERIT attribute is the default for reasons of backwards compatibility: in prior releases of PostgreSQL, users always had access to all privileges of groups they were members of. - However, NOINHERIT provides a closer match to the semantics + However, NOINHERIT provides a closer match to the semantics specified in the SQL standard. - Be careful with the CREATEROLE privilege. There is no concept of - inheritance for the privileges of a CREATEROLE-role. That + Be careful with the CREATEROLE privilege. There is no concept of + inheritance for the privileges of a CREATEROLE-role. That means that even if a role does not have a certain privilege but is allowed to create other roles, it can easily create another role with different privileges than its own (except for creating roles with superuser - privileges). For example, if the role user has the - CREATEROLE privilege but not the CREATEDB privilege, - nonetheless it can create a new role with the CREATEDB - privilege. Therefore, regard roles that have the CREATEROLE + privileges). For example, if the role user has the + CREATEROLE privilege but not the CREATEDB privilege, + nonetheless it can create a new role with the CREATEDB + privilege. Therefore, regard roles that have the CREATEROLE privilege as almost-superuser-roles. @@ -391,9 +391,9 @@ CREATE ROLE name [ [ WITH ] - The CONNECTION LIMIT option is only enforced approximately; + The CONNECTION LIMIT option is only enforced approximately; if two new sessions start at about the same time when just one - connection slot remains for the role, it is possible that + connection slot remains for the role, it is possible that both will fail. Also, the limit is never enforced for superusers. @@ -425,8 +425,8 @@ CREATE ROLE jonathan LOGIN; CREATE USER davide WITH PASSWORD 'jw8s0F4'; - (CREATE USER is the same as CREATE ROLE except - that it implies LOGIN.) + (CREATE USER is the same as CREATE ROLE except + that it implies LOGIN.) 
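A short sketch of the INHERIT note above (role names are illustrative): membership in a role carrying the CREATEDB attribute does not confer that attribute, because role attributes are never inherited; the member must first become the role with SET ROLE.

CREATE ROLE dbcreator NOLOGIN CREATEDB;
CREATE ROLE alice LOGIN IN ROLE dbcreator;
-- Connected as alice, CREATE DATABASE is refused until:
SET ROLE dbcreator;
CREATE DATABASE sandbox;
RESET ROLE;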
@@ -453,7 +453,7 @@ CREATE ROLE admin WITH CREATEDB CREATEROLE; The CREATE ROLE statement is in the SQL standard, but the standard only requires the syntax -CREATE ROLE name [ WITH ADMIN role_name ] +CREATE ROLE name [ WITH ADMIN role_name ] Multiple initial administrators, and all the other options of CREATE ROLE, are @@ -471,8 +471,8 @@ CREATE ROLE name [ WITH ADMIN The behavior specified by the SQL standard is most closely approximated - by giving users the NOINHERIT attribute, while roles are - given the INHERIT attribute. + by giving users the NOINHERIT attribute, while roles are + given the INHERIT attribute. diff --git a/doc/src/sgml/ref/create_rule.sgml b/doc/src/sgml/ref/create_rule.sgml index 53fdf56621..c772c38399 100644 --- a/doc/src/sgml/ref/create_rule.sgml +++ b/doc/src/sgml/ref/create_rule.sgml @@ -76,13 +76,13 @@ CREATE [ OR REPLACE ] RULE name AS ON DELETE rules (or any subset of those that's sufficient for your purposes) to replace update actions on the view with appropriate updates on other tables. If you want to support - INSERT RETURNING and so on, then be sure to put a suitable - RETURNING clause into each of these rules. + INSERT RETURNING and so on, then be sure to put a suitable + RETURNING clause into each of these rules. There is a catch if you try to use conditional rules for complex view - updates: there must be an unconditional + updates: there must be an unconditional INSTEAD rule for each action you wish to allow on the view. If the rule is conditional, or is not INSTEAD, then the system will still reject @@ -95,7 +95,7 @@ CREATE [ OR REPLACE ] RULE name AS Then make the conditional rules non-INSTEAD; in the cases where they are applied, they add to the default INSTEAD NOTHING action. (This method does not - currently work to support RETURNING queries, however.) + currently work to support RETURNING queries, however.) @@ -108,7 +108,7 @@ CREATE [ OR REPLACE ] RULE name AS - Another alternative worth considering is to use INSTEAD OF + Another alternative worth considering is to use INSTEAD OF triggers (see ) in place of rules. @@ -161,7 +161,7 @@ CREATE [ OR REPLACE ] RULE name AS Any SQL conditional expression (returning boolean). The condition expression cannot refer - to any tables except NEW and OLD, and + to any tables except NEW and OLD, and cannot contain aggregate functions. @@ -171,7 +171,7 @@ CREATE [ OR REPLACE ] RULE name AS INSTEAD indicates that the commands should be - executed instead of the original command. + executed instead of the original command. @@ -227,19 +227,19 @@ CREATE [ OR REPLACE ] RULE name AS In a rule for INSERT, UPDATE, or - DELETE on a view, you can add a RETURNING + DELETE on a view, you can add a RETURNING clause that emits the view's columns. This clause will be used to compute - the outputs if the rule is triggered by an INSERT RETURNING, - UPDATE RETURNING, or DELETE RETURNING command + the outputs if the rule is triggered by an INSERT RETURNING, + UPDATE RETURNING, or DELETE RETURNING command respectively. When the rule is triggered by a command without - RETURNING, the rule's RETURNING clause will be + RETURNING, the rule's RETURNING clause will be ignored. The current implementation allows only unconditional - INSTEAD rules to contain RETURNING; furthermore - there can be at most one RETURNING clause among all the rules + INSTEAD rules to contain RETURNING; furthermore + there can be at most one RETURNING clause among all the rules for the same event. 
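As a sketch of an unconditional INSTEAD rule carrying a RETURNING clause (table, view, and rule names are illustrative), which is what makes INSERT ... RETURNING work on the view:

CREATE TABLE t (id int, val text);
CREATE VIEW v AS SELECT id, val FROM t;

CREATE RULE v_ins AS ON INSERT TO v
    DO INSTEAD
    INSERT INTO t VALUES (NEW.id, NEW.val)
    RETURNING t.id, t.val;

INSERT INTO v VALUES (1, 'x') RETURNING id;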
(This ensures that there is only one candidate - RETURNING clause to be used to compute the results.) - RETURNING queries on the view will be rejected if - there is no RETURNING clause in any available rule. + RETURNING clause to be used to compute the results.) + RETURNING queries on the view will be rejected if + there is no RETURNING clause in any available rule. diff --git a/doc/src/sgml/ref/create_schema.sgml b/doc/src/sgml/ref/create_schema.sgml index ce145f96a0..ce3530c048 100644 --- a/doc/src/sgml/ref/create_schema.sgml +++ b/doc/src/sgml/ref/create_schema.sgml @@ -48,9 +48,9 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp A schema is essentially a namespace: it contains named objects (tables, data types, functions, and operators) whose names can duplicate those of other objects existing in other - schemas. Named objects are accessed either by qualifying + schemas. Named objects are accessed either by qualifying their names with the schema name as a prefix, or by setting a search - path that includes the desired schema(s). A CREATE command + path that includes the desired schema(s). A CREATE command specifying an unqualified object name creates the object in the current schema (the one at the front of the search path, which can be determined with the function current_schema). @@ -60,7 +60,7 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp Optionally, CREATE SCHEMA can include subcommands to create objects within the new schema. The subcommands are treated essentially the same as separate commands issued after creating the - schema, except that if the AUTHORIZATION clause is used, + schema, except that if the AUTHORIZATION clause is used, all the created objects will be owned by that user. @@ -100,10 +100,10 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp An SQL statement defining an object to be created within the schema. Currently, only CREATE - TABLE, CREATE VIEW, CREATE - INDEX, CREATE SEQUENCE, CREATE - TRIGGER and GRANT are accepted as clauses - within CREATE SCHEMA. Other kinds of objects may + TABLE, CREATE VIEW, CREATE + INDEX, CREATE SEQUENCE, CREATE + TRIGGER and GRANT are accepted as clauses + within CREATE SCHEMA. Other kinds of objects may be created in separate commands after the schema is created. @@ -114,7 +114,7 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp Do nothing (except issuing a notice) if a schema with the same name - already exists. schema_element + already exists. schema_element subcommands cannot be included when this option is used. @@ -127,7 +127,7 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_sp To create a schema, the invoking user must have the - CREATE privilege for the current database. + CREATE privilege for the current database. (Of course, superusers bypass this check.) @@ -143,17 +143,17 @@ CREATE SCHEMA myschema; - Create a schema for user joe; the schema will also be - named joe: + Create a schema for user joe; the schema will also be + named joe: CREATE SCHEMA AUTHORIZATION joe; - Create a schema named test that will be owned by user - joe, unless there already is a schema named test. - (It does not matter whether joe owns the pre-existing schema.) + Create a schema named test that will be owned by user + joe, unless there already is a schema named test. + (It does not matter whether joe owns the pre-existing schema.) 
CREATE SCHEMA IF NOT EXISTS test AUTHORIZATION joe; @@ -185,7 +185,7 @@ CREATE VIEW hollywood.winners AS Compatibility - The SQL standard allows a DEFAULT CHARACTER SET clause + The SQL standard allows a DEFAULT CHARACTER SET clause in CREATE SCHEMA, as well as more subcommand types than are presently accepted by PostgreSQL. @@ -205,7 +205,7 @@ CREATE VIEW hollywood.winners AS all objects within it. PostgreSQL allows schemas to contain objects owned by users other than the schema owner. This can happen only if the schema owner grants the - CREATE privilege on their schema to someone else, or a + CREATE privilege on their schema to someone else, or a superuser chooses to create objects in it. diff --git a/doc/src/sgml/ref/create_sequence.sgml b/doc/src/sgml/ref/create_sequence.sgml index 2af8c8d23e..9248b1d459 100644 --- a/doc/src/sgml/ref/create_sequence.sgml +++ b/doc/src/sgml/ref/create_sequence.sgml @@ -67,10 +67,10 @@ SELECT * FROM name; to examine the parameters and current state of a sequence. In particular, - the last_value field of the sequence shows the last value + the last_value field of the sequence shows the last value allocated by any session. (Of course, this value might be obsolete by the time it's printed, if other sessions are actively doing - nextval calls.) + nextval calls.) @@ -250,14 +250,14 @@ SELECT * FROM name; - Sequences are based on bigint arithmetic, so the range + Sequences are based on bigint arithmetic, so the range cannot exceed the range of an eight-byte integer (-9223372036854775808 to 9223372036854775807). - Because nextval and setval calls are never - rolled back, sequence objects cannot be used if gapless + Because nextval and setval calls are never + rolled back, sequence objects cannot be used if gapless assignment of sequence numbers is needed. It is possible to build gapless assignment by using exclusive locking of a table containing a counter; but this solution is much more expensive than sequence @@ -271,9 +271,9 @@ SELECT * FROM name; used for a sequence object that will be used concurrently by multiple sessions. Each session will allocate and cache successive sequence values during one access to the sequence object and - increase the sequence object's last_value accordingly. + increase the sequence object's last_value accordingly. Then, the next cache-1 - uses of nextval within that session simply return the + uses of nextval within that session simply return the preallocated values without touching the sequence object. So, any numbers allocated but not used within a session will be lost when that session ends, resulting in holes in the @@ -290,18 +290,18 @@ SELECT * FROM name; 11..20 and return nextval=11 before session A has generated nextval=2. Thus, with a cache setting of one - it is safe to assume that nextval values are generated + it is safe to assume that nextval values are generated sequentially; with a cache setting greater than one you - should only assume that the nextval values are all + should only assume that the nextval values are all distinct, not that they are generated purely sequentially. Also, - last_value will reflect the latest value reserved by + last_value will reflect the latest value reserved by any session, whether or not it has yet been returned by - nextval. + nextval. - Another consideration is that a setval executed on + Another consideration is that a setval executed on such a sequence will not be noticed by other sessions until they have used up any preallocated values they have cached. 
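A compact sketch of the caching behavior just described (the sequence name is illustrative):

CREATE SEQUENCE order_seq CACHE 10;
-- Session A: SELECT nextval('order_seq');  returns 1, preallocating 1..10
-- Session B: SELECT nextval('order_seq');  returns 11, preallocating 11..20
-- Values are always distinct but not sequential across sessions, and
-- last_value reflects the latest value reserved by any session (20 here).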
@@ -365,14 +365,14 @@ END; - Obtaining the next value is done using the nextval() + Obtaining the next value is done using the nextval() function instead of the standard's NEXT VALUE FOR expression. - The OWNED BY clause is a PostgreSQL + The OWNED BY clause is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_server.sgml b/doc/src/sgml/ref/create_server.sgml index 47b8a6291b..e14ce43bf9 100644 --- a/doc/src/sgml/ref/create_server.sgml +++ b/doc/src/sgml/ref/create_server.sgml @@ -47,7 +47,7 @@ CREATE SERVER [IF NOT EXISTS] server_name - Creating a server requires USAGE privilege on the + Creating a server requires USAGE privilege on the foreign-data wrapper being used. @@ -57,7 +57,7 @@ CREATE SERVER [IF NOT EXISTS] server_name - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a server with the same name already exists. @@ -135,8 +135,8 @@ CREATE SERVER [IF NOT EXISTS] server_nameExamples - Create a server myserver that uses the - foreign-data wrapper postgres_fdw: + Create a server myserver that uses the + foreign-data wrapper postgres_fdw: CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'foo', dbname 'foodb', port '5432'); diff --git a/doc/src/sgml/ref/create_statistics.sgml b/doc/src/sgml/ref/create_statistics.sgml index 0d68ca06b7..066af8a4b4 100644 --- a/doc/src/sgml/ref/create_statistics.sgml +++ b/doc/src/sgml/ref/create_statistics.sgml @@ -41,7 +41,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na If a schema name is given (for example, CREATE STATISTICS - myschema.mystat ...) then the statistics object is created in the + myschema.mystat ...) then the statistics object is created in the specified schema. Otherwise it is created in the current schema. The name of the statistics object must be distinct from the name of any other statistics object in the same schema. @@ -54,7 +54,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a statistics object with the same name already @@ -129,7 +129,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na Examples - Create table t1 with two functionally dependent columns, i.e. + Create table t1 with two functionally dependent columns, i.e. knowledge of a value in the first column is sufficient for determining the value in the other column. Then functional dependency statistics are built on those columns: @@ -157,10 +157,10 @@ EXPLAIN ANALYZE SELECT * FROM t1 WHERE (a = 1) AND (b = 0); Without functional-dependency statistics, the planner would assume - that the two WHERE conditions are independent, and would + that the two WHERE conditions are independent, and would multiply their selectivities together to arrive at a much-too-small row count estimate. - With such statistics, the planner recognizes that the WHERE + With such statistics, the planner recognizes that the WHERE conditions are redundant and does not underestimate the rowcount. diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml index bae9f839bd..cd51b7fcac 100644 --- a/doc/src/sgml/ref/create_subscription.sgml +++ b/doc/src/sgml/ref/create_subscription.sgml @@ -201,7 +201,7 @@ CREATE SUBSCRIPTION subscription_namefalse, the tables are not subscribed, and so after you enable the subscription nothing will be replicated. It is required to run - ALTER SUBSCRIPTION ... REFRESH PUBLICATION in order + ALTER SUBSCRIPTION ... REFRESH PUBLICATION in order for tables to be subscribed. 
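A rough sketch of that workflow (subscription, publication, and connection values are illustrative; with connect = false a replication slot must be created on the publisher by hand before the subscription is enabled):

CREATE SUBSCRIPTION mysub
    CONNECTION 'host=192.168.1.50 port=5432 dbname=src'
    PUBLICATION mypub
    WITH (connect = false);
-- Later, once the slot exists on the publisher:
ALTER SUBSCRIPTION mysub ENABLE;
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;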
@@ -272,7 +272,7 @@ CREATE SUBSCRIPTION mysub Compatibility - CREATE SUBSCRIPTION is a PostgreSQL + CREATE SUBSCRIPTION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index d15795857b..2db2e9fc44 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -113,7 +113,7 @@ FROM ( { numeric_literal | If a schema name is given (for example, CREATE TABLE - myschema.mytable ...) then the table is created in the specified + myschema.mytable ...) then the table is created in the specified schema. Otherwise it is created in the current schema. Temporary tables exist in a special schema, so a schema name cannot be given when creating a temporary table. The name of the table must be @@ -158,7 +158,7 @@ FROM ( { numeric_literal | - TEMPORARY or TEMP + TEMPORARY or TEMP If specified, the table is created as a temporary table. @@ -177,13 +177,13 @@ FROM ( { numeric_literal | ANALYZE on the temporary table after it is populated. + ANALYZE on the temporary table after it is populated. Optionally, GLOBAL or LOCAL - can be written before TEMPORARY or TEMP. - This presently makes no difference in PostgreSQL + can be written before TEMPORARY or TEMP. + This presently makes no difference in PostgreSQL and is deprecated; see . @@ -192,7 +192,7 @@ FROM ( { numeric_literal | - UNLOGGED + UNLOGGED If specified, the table is created as an unlogged table. Data written @@ -208,7 +208,7 @@ FROM ( { numeric_literal | - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a relation with the same name already exists. @@ -263,14 +263,14 @@ FROM ( { numeric_literal | partition_bound_spec must correspond to the partitioning method and partition key of the parent table, and must not overlap with any existing partition of that - parent. The form with IN is used for list partitioning, - while the form with FROM and TO is used for + parent. The form with IN is used for list partitioning, + while the form with FROM and TO is used for range partitioning. Each of the values specified in - the partition_bound_spec is + the partition_bound_spec is a literal, NULL, MINVALUE, or MAXVALUE. Each literal value must be either a numeric constant that is coercible to the corresponding partition key @@ -294,52 +294,52 @@ FROM ( { numeric_literal | TO list are not. Note that this statement must be understood according to the rules of row-wise comparison (). - For example, given PARTITION BY RANGE (x,y), a partition + For example, given PARTITION BY RANGE (x,y), a partition bound FROM (1, 2) TO (3, 4) - allows x=1 with any y>=2, - x=2 with any non-null y, - and x=3 with any y<4. + allows x=1 with any y>=2, + x=2 with any non-null y, + and x=3 with any y<4. - The special values MINVALUE and MAXVALUE + The special values MINVALUE and MAXVALUE may be used when creating a range partition to indicate that there is no lower or upper bound on the column's value. For example, a - partition defined using FROM (MINVALUE) TO (10) allows + partition defined using FROM (MINVALUE) TO (10) allows any values less than 10, and a partition defined using - FROM (10) TO (MAXVALUE) allows any values greater than + FROM (10) TO (MAXVALUE) allows any values greater than or equal to 10. When creating a range partition involving more than one column, it - can also make sense to use MAXVALUE as part of the lower - bound, and MINVALUE as part of the upper bound. 
For + can also make sense to use MAXVALUE as part of the lower + bound, and MINVALUE as part of the upper bound. For example, a partition defined using - FROM (0, MAXVALUE) TO (10, MAXVALUE) allows any rows + FROM (0, MAXVALUE) TO (10, MAXVALUE) allows any rows where the first partition key column is greater than 0 and less than or equal to 10. Similarly, a partition defined using - FROM ('a', MINVALUE) TO ('b', MINVALUE) allows any rows + FROM ('a', MINVALUE) TO ('b', MINVALUE) allows any rows where the first partition key column starts with "a". - Note that if MINVALUE or MAXVALUE is used for + Note that if MINVALUE or MAXVALUE is used for one column of a partitioning bound, the same value must be used for all - subsequent columns. For example, (10, MINVALUE, 0) is not - a valid bound; you should write (10, MINVALUE, MINVALUE). + subsequent columns. For example, (10, MINVALUE, 0) is not + a valid bound; you should write (10, MINVALUE, MINVALUE). - Also note that some element types, such as timestamp, + Also note that some element types, such as timestamp, have a notion of "infinity", which is just another value that can - be stored. This is different from MINVALUE and - MAXVALUE, which are not real values that can be stored, + be stored. This is different from MINVALUE and + MAXVALUE, which are not real values that can be stored, but rather they are ways of saying that the value is unbounded. - MAXVALUE can be thought of as being greater than any - other value, including "infinity" and MINVALUE as being + MAXVALUE can be thought of as being greater than any + other value, including "infinity" and MINVALUE as being less than any other value, including "minus infinity". Thus the range - FROM ('infinity') TO (MAXVALUE) is not an empty range; it + FROM ('infinity') TO (MAXVALUE) is not an empty range; it allows precisely one value to be stored — "infinity". @@ -370,9 +370,9 @@ FROM ( { numeric_literal | CHECK constraints will be inherited + to all partitions. CHECK constraints will be inherited automatically by every partition, but an individual partition may specify - additional CHECK constraints; additional constraints with + additional CHECK constraints; additional constraints with the same name and condition as in the parent will be merged with the parent constraint. Defaults may be specified separately for each partition. @@ -421,7 +421,7 @@ FROM ( { numeric_literal | COLLATE collation - The COLLATE clause assigns a collation to + The COLLATE clause assigns a collation to the column (which must be of a collatable data type). If not specified, the column data type's default collation is used. @@ -432,13 +432,13 @@ FROM ( { numeric_literal | INHERITS ( parent_table [, ... ] ) - The optional INHERITS clause specifies a list of + The optional INHERITS clause specifies a list of tables from which the new table automatically inherits all columns. Parent tables can be plain tables or foreign tables. - Use of INHERITS creates a persistent relationship + Use of INHERITS creates a persistent relationship between the new child table and its parent table(s). 
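For instance, a minimal inheritance sketch (mirroring the cities/capitals example used elsewhere in the documentation):

CREATE TABLE cities (name text, population int);
CREATE TABLE capitals (state char(2)) INHERITS (cities);
-- capitals has columns name, population, and state; its rows are also
-- returned by SELECT * FROM cities unless ONLY cities is scanned.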
Schema modifications to the parent(s) normally propagate to children as well, and by default the data of the child table is included in @@ -462,19 +462,19 @@ FROM ( { numeric_literal | - CHECK constraints are merged in essentially the same way as + CHECK constraints are merged in essentially the same way as columns: if multiple parent tables and/or the new table definition - contain identically-named CHECK constraints, these + contain identically-named CHECK constraints, these constraints must all have the same check expression, or an error will be reported. Constraints having the same name and expression will - be merged into one copy. A constraint marked NO INHERIT in a - parent will not be considered. Notice that an unnamed CHECK + be merged into one copy. A constraint marked NO INHERIT in a + parent will not be considered. Notice that an unnamed CHECK constraint in the new table will never be merged, since a unique name will always be chosen for it. - Column STORAGE settings are also copied from parent tables. + Column STORAGE settings are also copied from parent tables. @@ -504,7 +504,7 @@ FROM ( { numeric_literal | A partitioned table is divided into sub-tables (called partitions), - which are created using separate CREATE TABLE commands. + which are created using separate CREATE TABLE commands. The partitioned table is itself empty. A data row inserted into the table is routed to a partition based on the value of columns or expressions in the partition key. If no existing partition matches @@ -542,7 +542,7 @@ FROM ( { numeric_literal | nextval, may create a functional linkage between + such as nextval, may create a functional linkage between the original and new tables. @@ -559,8 +559,8 @@ FROM ( { numeric_literal | - Indexes, PRIMARY KEY, UNIQUE, - and EXCLUDE constraints on the original table will be + Indexes, PRIMARY KEY, UNIQUE, + and EXCLUDE constraints on the original table will be created on the new table only if INCLUDING INDEXES is specified. Names for the new indexes and constraints are chosen according to the default rules, regardless of how the originals @@ -568,11 +568,11 @@ FROM ( { numeric_literal | - STORAGE settings for the copied column definitions will be + STORAGE settings for the copied column definitions will be copied only if INCLUDING STORAGE is specified. The - default behavior is to exclude STORAGE settings, resulting + default behavior is to exclude STORAGE settings, resulting in the copied columns in the new table having type-specific default - settings. For more on STORAGE settings, see + settings. For more on STORAGE settings, see . @@ -587,7 +587,7 @@ FROM ( { numeric_literal | Note that unlike INHERITS, columns and - constraints copied by LIKE are not merged with similarly + constraints copied by LIKE are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another LIKE clause, an error is signaled. @@ -607,7 +607,7 @@ FROM ( { numeric_literal | An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, - so constraint names like col must be positive can be used + so constraint names like col must be positive can be used to communicate helpful constraint information to client applications. (Double-quotes are needed to specify constraint names that contain spaces.) If a constraint name is not specified, the system generates a name. 
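For example (table and constraint names illustrative), a named check constraint surfaces its name in error messages:

CREATE TABLE products (
    product_no integer,
    price numeric CONSTRAINT "price must be positive" CHECK (price > 0)
);
-- An INSERT with price <= 0 is rejected, and the error message
-- reports the constraint "price must be positive" by name.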
@@ -616,7 +616,7 @@ FROM ( { numeric_literal | - NOT NULL + NOT NULL The column is not allowed to contain null values. @@ -625,7 +625,7 @@ FROM ( { numeric_literal | - NULL + NULL The column is allowed to contain null values. This is the default. @@ -643,7 +643,7 @@ FROM ( { numeric_literal | CHECK ( expression ) [ NO INHERIT ] - The CHECK clause specifies an expression producing a + The CHECK clause specifies an expression producing a Boolean result which new or updated rows must satisfy for an insert or update operation to succeed. Expressions evaluating to TRUE or UNKNOWN succeed. Should any row of an insert or @@ -662,15 +662,15 @@ FROM ( { numeric_literal | - A constraint marked with NO INHERIT will not propagate to + A constraint marked with NO INHERIT will not propagate to child tables. When a table has multiple CHECK constraints, they will be tested for each row in alphabetical order by name, - after checking NOT NULL constraints. - (PostgreSQL versions before 9.5 did not honor any + after checking NOT NULL constraints. + (PostgreSQL versions before 9.5 did not honor any particular firing order for CHECK constraints.) @@ -681,7 +681,7 @@ FROM ( { numeric_literal | default_expr - The DEFAULT clause assigns a default data value for + The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). The @@ -729,8 +729,8 @@ FROM ( { numeric_literal | - UNIQUE (column constraint) - UNIQUE ( column_name [, ... ] ) (table constraint) + UNIQUE (column constraint) + UNIQUE ( column_name [, ... ] ) (table constraint) @@ -756,11 +756,11 @@ FROM ( { numeric_literal | - PRIMARY KEY (column constraint) - PRIMARY KEY ( column_name [, ... ] ) (table constraint) + PRIMARY KEY (column constraint) + PRIMARY KEY ( column_name [, ... ] ) (table constraint) - The PRIMARY KEY constraint specifies that a column or + The PRIMARY KEY constraint specifies that a column or columns of a table can contain only unique (non-duplicate), nonnull values. Only one primary key can be specified for a table, whether as a column constraint or a table constraint. @@ -775,7 +775,7 @@ FROM ( { numeric_literal | PRIMARY KEY enforces the same data constraints as - a combination of UNIQUE and NOT NULL, but + a combination of UNIQUE and NOT NULL, but identifying a set of columns as the primary key also provides metadata about the design of the schema, since a primary key implies that other tables can rely on this set of columns as a unique identifier for rows. @@ -787,19 +787,19 @@ FROM ( { numeric_literal | EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ] - The EXCLUDE clause defines an exclusion + The EXCLUDE clause defines an exclusion constraint, which guarantees that if any two rows are compared on the specified column(s) or expression(s) using the specified operator(s), not all of these - comparisons will return TRUE. If all of the + comparisons will return TRUE. If all of the specified operators test for equality, this is equivalent to a - UNIQUE constraint, although an ordinary unique constraint + UNIQUE constraint, although an ordinary unique constraint will be faster. However, exclusion constraints can specify constraints that are more general than simple equality. 
For example, you can specify a constraint that no two rows in the table contain overlapping circles (see ) by using the - && operator. + && operator. @@ -807,7 +807,7 @@ FROM ( { numeric_literal | ) for the index access - method index_method. + method index_method. The operators are required to be commutative. Each exclude_element can optionally specify an operator class and/or ordering options; @@ -816,17 +816,17 @@ FROM ( { numeric_literal | - The access method must support amgettuple (see ); at present this means GIN + The access method must support amgettuple (see ); at present this means GIN cannot be used. Although it's allowed, there is little point in using B-tree or hash indexes with an exclusion constraint, because this does nothing that an ordinary unique constraint doesn't do better. - So in practice the access method will always be GiST or - SP-GiST. + So in practice the access method will always be GiST or + SP-GiST. - The predicate allows you to specify an + The predicate allows you to specify an exclusion constraint on a subset of the table; internally this creates a partial index. Note that parentheses are required around the predicate. @@ -853,7 +853,7 @@ FROM ( { numeric_literal | reftable is used. The referenced columns must be the columns of a non-deferrable unique or primary key constraint in the referenced table. The user - must have REFERENCES permission on the referenced table + must have REFERENCES permission on the referenced table (either the whole table, or the specific referenced columns). Note that foreign key constraints cannot be defined between temporary tables and permanent tables. @@ -863,16 +863,16 @@ FROM ( { numeric_literal | MATCH - FULL, MATCH PARTIAL, and MATCH + FULL, MATCH PARTIAL, and MATCH SIMPLE (which is the default). MATCH - FULL will not allow one column of a multicolumn foreign key + FULL will not allow one column of a multicolumn foreign key to be null unless all foreign key columns are null; if they are all null, the row is not required to have a match in the referenced table. MATCH SIMPLE allows any of the foreign key columns to be null; if any of them are null, the row is not required to have a match in the referenced table. - MATCH PARTIAL is not yet implemented. - (Of course, NOT NULL constraints can be applied to the + MATCH PARTIAL is not yet implemented. + (Of course, NOT NULL constraints can be applied to the referencing column(s) to prevent these cases from arising.) @@ -969,13 +969,13 @@ FROM ( { numeric_literal | command). NOT DEFERRABLE is the default. - Currently, only UNIQUE, PRIMARY KEY, - EXCLUDE, and - REFERENCES (foreign key) constraints accept this - clause. NOT NULL and CHECK constraints are not + Currently, only UNIQUE, PRIMARY KEY, + EXCLUDE, and + REFERENCES (foreign key) constraints accept this + clause. NOT NULL and CHECK constraints are not deferrable. Note that deferrable constraints cannot be used as conflict arbitrators in an INSERT statement that - includes an ON CONFLICT DO UPDATE clause. + includes an ON CONFLICT DO UPDATE clause. @@ -1003,16 +1003,16 @@ FROM ( { numeric_literal | for more - information. The WITH clause for a - table can also include OIDS=TRUE (or just OIDS) + information. The WITH clause for a + table can also include OIDS=TRUE (or just OIDS) to specify that rows of the new table should have OIDs (object identifiers) assigned to them, or - OIDS=FALSE to specify that the rows should not have OIDs. 
- If OIDS is not specified, the default setting depends upon + OIDS=FALSE to specify that the rows should not have OIDs. + If OIDS is not specified, the default setting depends upon the configuration parameter. (If the new table inherits from any tables that have OIDs, then - OIDS=TRUE is forced even if the command says - OIDS=FALSE.) + OIDS=TRUE is forced even if the command says + OIDS=FALSE.) @@ -1035,14 +1035,14 @@ FROM ( { numeric_literal | - WITH OIDS - WITHOUT OIDS + WITH OIDS + WITHOUT OIDS - These are obsolescent syntaxes equivalent to WITH (OIDS) - and WITH (OIDS=FALSE), respectively. If you wish to give - both an OIDS setting and storage parameters, you must use - the WITH ( ... ) syntax; see above. + These are obsolescent syntaxes equivalent to WITH (OIDS) + and WITH (OIDS=FALSE), respectively. If you wish to give + both an OIDS setting and storage parameters, you must use + the WITH ( ... ) syntax; see above. @@ -1110,7 +1110,7 @@ FROM ( { numeric_literal | This clause allows selection of the tablespace in which the index associated with a UNIQUE, PRIMARY - KEY, or EXCLUDE constraint will be created. + KEY, or EXCLUDE constraint will be created. If not specified, is consulted, or if the table is temporary. @@ -1128,16 +1128,16 @@ FROM ( { numeric_literal | - The WITH clause can specify storage parameters + The WITH clause can specify storage parameters for tables, and for indexes associated with a UNIQUE, - PRIMARY KEY, or EXCLUDE constraint. + PRIMARY KEY, or EXCLUDE constraint. Storage parameters for indexes are documented in . The storage parameters currently available for tables are listed below. For many of these parameters, as shown, there is an additional parameter with the same name prefixed with toast., which controls the behavior of the - table's secondary TOAST table, if any + table's secondary TOAST table, if any (see for more information about TOAST). If a table parameter value is set and the equivalent toast. parameter is not, the TOAST table @@ -1149,14 +1149,14 @@ FROM ( { numeric_literal | - fillfactor (integer) + fillfactor (integer) The fillfactor for a table is a percentage between 10 and 100. 100 (complete packing) is the default. When a smaller fillfactor - is specified, INSERT operations pack table pages only + is specified, INSERT operations pack table pages only to the indicated percentage; the remaining space on each page is - reserved for updating rows on that page. This gives UPDATE + reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page as the original, which is more efficient than placing it on a different page. For a table whose entries are never updated, complete packing is the @@ -1167,7 +1167,7 @@ FROM ( { numeric_literal | - parallel_workers (integer) + parallel_workers (integer) This sets the number of workers that should be used to assist a parallel @@ -1180,12 +1180,12 @@ FROM ( { numeric_literal | - autovacuum_enabled, toast.autovacuum_enabled (boolean) + autovacuum_enabled, toast.autovacuum_enabled (boolean) Enables or disables the autovacuum daemon for a particular table. - If true, the autovacuum daemon will perform automatic VACUUM - and/or ANALYZE operations on this table following the rules + If true, the autovacuum daemon will perform automatic VACUUM + and/or ANALYZE operations on this table following the rules discussed in . If false, this table will not be autovacuumed, except to prevent transaction ID wraparound. 
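A brief sketch combining a heap storage parameter with the toast. variants described here (the table name is illustrative):

CREATE TABLE audit_log (payload jsonb)
WITH (fillfactor = 90,
      autovacuum_enabled = false,
      toast.autovacuum_enabled = false);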
See for @@ -1194,14 +1194,14 @@ FROM ( { numeric_literal | parameter is false; setting individual tables' storage parameters does not override that. Therefore there is seldom much point in explicitly - setting this storage parameter to true, only - to false. + setting this storage parameter to true, only + to false. - autovacuum_vacuum_threshold, toast.autovacuum_vacuum_threshold (integer) + autovacuum_vacuum_threshold, toast.autovacuum_vacuum_threshold (integer) Per-table value for @@ -1211,7 +1211,7 @@ FROM ( { numeric_literal | - autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor (float4) + autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor (float4) Per-table value for @@ -1221,7 +1221,7 @@ FROM ( { numeric_literal | - autovacuum_analyze_threshold (integer) + autovacuum_analyze_threshold (integer) Per-table value for @@ -1231,7 +1231,7 @@ FROM ( { numeric_literal | - autovacuum_analyze_scale_factor (float4) + autovacuum_analyze_scale_factor (float4) Per-table value for @@ -1241,7 +1241,7 @@ FROM ( { numeric_literal | - autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay (integer) + autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay (integer) Per-table value for @@ -1251,7 +1251,7 @@ FROM ( { numeric_literal | - autovacuum_vacuum_cost_limit, toast.autovacuum_vacuum_cost_limit (integer) + autovacuum_vacuum_cost_limit, toast.autovacuum_vacuum_cost_limit (integer) Per-table value for @@ -1261,12 +1261,12 @@ FROM ( { numeric_literal | - autovacuum_freeze_min_age, toast.autovacuum_freeze_min_age (integer) + autovacuum_freeze_min_age, toast.autovacuum_freeze_min_age (integer) Per-table value for parameter. Note that autovacuum will ignore - per-table autovacuum_freeze_min_age parameters that are + per-table autovacuum_freeze_min_age parameters that are larger than half the system-wide setting. @@ -1274,12 +1274,12 @@ FROM ( { numeric_literal | - autovacuum_freeze_max_age, toast.autovacuum_freeze_max_age (integer) + autovacuum_freeze_max_age, toast.autovacuum_freeze_max_age (integer) Per-table value for parameter. Note that autovacuum will ignore - per-table autovacuum_freeze_max_age parameters that are + per-table autovacuum_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller). @@ -1301,7 +1301,7 @@ FROM ( { numeric_literal | Per-table value for parameter. Note that autovacuum will ignore - per-table autovacuum_multixact_freeze_min_age parameters + per-table autovacuum_multixact_freeze_min_age parameters that are larger than half the system-wide setting. @@ -1316,7 +1316,7 @@ FROM ( { numeric_literal | parameter. Note that autovacuum will ignore - per-table autovacuum_multixact_freeze_max_age parameters + per-table autovacuum_multixact_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller). @@ -1369,11 +1369,11 @@ FROM ( { numeric_literal | oid column of that table, to ensure that + on the oid column of that table, to ensure that OIDs in the table will indeed uniquely identify rows even after counter wraparound. Avoid assuming that OIDs are unique across tables; if you need a database-wide unique identifier, use the - combination of tableoid and row OID for the + combination of tableoid and row OID for the purpose. 
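Sketching that caveat (table and index names are illustrative, and WITH (OIDS) is obsolescent syntax):

CREATE TABLE legacy_items (val text) WITH (OIDS);
CREATE UNIQUE INDEX legacy_items_oid_idx ON legacy_items (oid);
-- The unique index guarantees oid uniqueness within this table even
-- after OID counter wraparound; combine tableoid with oid when a
-- database-wide identifier is needed.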
@@ -1411,8 +1411,8 @@ FROM ( { numeric_literal | Examples - Create table films and table - distributors: + Create table films and table + distributors: CREATE TABLE films ( @@ -1484,7 +1484,7 @@ CREATE TABLE distributors ( Define a primary key table constraint for the table - films: + films: CREATE TABLE films ( @@ -1501,7 +1501,7 @@ CREATE TABLE films ( Define a primary key constraint for table - distributors. The following two examples are + distributors. The following two examples are equivalent, the first using the table constraint syntax, the second the column constraint syntax: @@ -1537,7 +1537,7 @@ CREATE TABLE distributors ( - Define two NOT NULL column constraints on the table + Define two NOT NULL column constraints on the table distributors, one of which is explicitly given a name: @@ -1585,7 +1585,7 @@ WITH (fillfactor=70); - Create table circles with an exclusion + Create table circles with an exclusion constraint that prevents any two circles from overlapping: @@ -1597,7 +1597,7 @@ CREATE TABLE circles ( - Create table cinemas in tablespace diskvol1: + Create table cinemas in tablespace diskvol1: CREATE TABLE cinemas ( @@ -1761,8 +1761,8 @@ CREATE TABLE cities_partdef The ON COMMIT clause for temporary tables also resembles the SQL standard, but has some differences. - If the ON COMMIT clause is omitted, SQL specifies that the - default behavior is ON COMMIT DELETE ROWS. However, the + If the ON COMMIT clause is omitted, SQL specifies that the + default behavior is ON COMMIT DELETE ROWS. However, the default behavior in PostgreSQL is ON COMMIT PRESERVE ROWS. The ON COMMIT DROP option does not exist in SQL. @@ -1773,15 +1773,15 @@ CREATE TABLE cities_partdef Non-deferred Uniqueness Constraints - When a UNIQUE or PRIMARY KEY constraint is + When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL checks for uniqueness immediately whenever a row is inserted or modified. The SQL standard says that uniqueness should be enforced only at the end of the statement; this makes a difference when, for example, a single command updates multiple key values. To obtain standard-compliant behavior, declare the constraint as - DEFERRABLE but not deferred (i.e., INITIALLY - IMMEDIATE). Be aware that this can be significantly slower than + DEFERRABLE but not deferred (i.e., INITIALLY + IMMEDIATE). Be aware that this can be significantly slower than immediate uniqueness checking. @@ -1790,8 +1790,8 @@ CREATE TABLE cities_partdef Column Check Constraints - The SQL standard says that CHECK column constraints - can only refer to the column they apply to; only CHECK + The SQL standard says that CHECK column constraints + can only refer to the column they apply to; only CHECK table constraints can refer to multiple columns. PostgreSQL does not enforce this restriction; it treats column and table check constraints alike. @@ -1802,7 +1802,7 @@ CREATE TABLE cities_partdef <literal>EXCLUDE</literal> Constraint - The EXCLUDE constraint type is a + The EXCLUDE constraint type is a PostgreSQL extension. @@ -1811,7 +1811,7 @@ CREATE TABLE cities_partdef <literal>NULL</literal> <quote>Constraint</quote> - The NULL constraint (actually a + The NULL constraint (actually a non-constraint) is a PostgreSQL extension to the SQL standard that is included for compatibility with some other database systems (and for symmetry with the NOT @@ -1838,11 +1838,11 @@ CREATE TABLE cities_partdef PostgreSQL allows a table of no columns - to be created (for example, CREATE TABLE foo();). 
This + to be created (for example, CREATE TABLE foo();). This is an extension from the SQL standard, which does not allow zero-column tables. Zero-column tables are not in themselves very useful, but disallowing them creates odd special cases for ALTER TABLE - DROP COLUMN, so it seems cleaner to ignore this spec restriction. + DROP COLUMN, so it seems cleaner to ignore this spec restriction. @@ -1861,10 +1861,10 @@ CREATE TABLE cities_partdef - <literal>LIKE</> Clause + <literal>LIKE</literal> Clause - While a LIKE clause exists in the SQL standard, many of the + While a LIKE clause exists in the SQL standard, many of the options that PostgreSQL accepts for it are not in the standard, and some of the standard's options are not implemented by PostgreSQL. @@ -1872,10 +1872,10 @@ CREATE TABLE cities_partdef - <literal>WITH</> Clause + <literal>WITH</literal> Clause - The WITH clause is a PostgreSQL + The WITH clause is a PostgreSQL extension; neither storage parameters nor OIDs are in the standard. @@ -1904,19 +1904,19 @@ CREATE TABLE cities_partdef - <literal>PARTITION BY</> Clause + <literal>PARTITION BY</literal> Clause - The PARTITION BY clause is a + The PARTITION BY clause is a PostgreSQL extension. - <literal>PARTITION OF</> Clause + <literal>PARTITION OF</literal> Clause - The PARTITION OF clause is a + The PARTITION OF clause is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml index 0fa28a11fa..8198442a97 100644 --- a/doc/src/sgml/ref/create_table_as.sgml +++ b/doc/src/sgml/ref/create_table_as.sgml @@ -71,7 +71,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI - TEMPORARY or TEMP + TEMPORARY or TEMP If specified, the table is created as a temporary table. @@ -81,7 +81,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI - UNLOGGED + UNLOGGED If specified, the table is created as an unlogged table. @@ -91,7 +91,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a relation with the same name already exists. @@ -127,25 +127,25 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI This clause specifies optional storage parameters for the new table; see for more - information. The WITH clause - can also include OIDS=TRUE (or just OIDS) + information. The WITH clause + can also include OIDS=TRUE (or just OIDS) to specify that rows of the new table should have OIDs (object identifiers) assigned to them, or - OIDS=FALSE to specify that the rows should not have OIDs. + OIDS=FALSE to specify that the rows should not have OIDs. See for more information. - WITH OIDS - WITHOUT OIDS + WITH OIDS + WITHOUT OIDS - These are obsolescent syntaxes equivalent to WITH (OIDS) - and WITH (OIDS=FALSE), respectively. If you wish to give - both an OIDS setting and storage parameters, you must use - the WITH ( ... ) syntax; see above. + These are obsolescent syntaxes equivalent to WITH (OIDS) + and WITH (OIDS=FALSE), respectively. If you wish to give + both an OIDS setting and storage parameters, you must use + the WITH ( ... ) syntax; see above. @@ -214,14 +214,14 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI A , TABLE, or command, or an command that runs a - prepared SELECT, TABLE, or - VALUES query. + prepared SELECT, TABLE, or + VALUES query. 
- WITH [ NO ] DATA + WITH [ NO ] DATA This clause specifies whether or not the data produced by the query @@ -241,7 +241,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI This command is functionally similar to , but it is preferred since it is less likely to be confused with other uses of - the SELECT INTO syntax. Furthermore, CREATE + the SELECT INTO syntax. Furthermore, CREATE TABLE AS offers a superset of the functionality offered by SELECT INTO. @@ -315,7 +315,7 @@ CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS - PostgreSQL handles temporary tables in a way + PostgreSQL handles temporary tables in a way rather different from the standard; see for details. @@ -324,7 +324,7 @@ CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS - The WITH clause is a PostgreSQL + The WITH clause is a PostgreSQL extension; neither storage parameters nor OIDs are in the standard. diff --git a/doc/src/sgml/ref/create_tablespace.sgml b/doc/src/sgml/ref/create_tablespace.sgml index 2fed29ffaf..4d95cac9e5 100644 --- a/doc/src/sgml/ref/create_tablespace.sgml +++ b/doc/src/sgml/ref/create_tablespace.sgml @@ -45,9 +45,9 @@ CREATE TABLESPACE tablespace_name A user with appropriate privileges can pass - tablespace_name to - CREATE DATABASE, CREATE TABLE, - CREATE INDEX or ADD CONSTRAINT to have the data + tablespace_name to + CREATE DATABASE, CREATE TABLE, + CREATE INDEX or ADD CONSTRAINT to have the data files for these objects stored within the specified tablespace. @@ -93,7 +93,7 @@ CREATE TABLESPACE tablespace_name The directory that will be used for the tablespace. The directory should be empty and must be owned by the - PostgreSQL system user. The directory must be + PostgreSQL system user. The directory must be specified by an absolute path name. @@ -104,8 +104,8 @@ CREATE TABLESPACE tablespace_name A tablespace parameter to be set or reset. Currently, the only - available parameters are seq_page_cost, - random_page_cost and effective_io_concurrency. + available parameters are seq_page_cost, + random_page_cost and effective_io_concurrency. Setting either value for a particular tablespace will override the planner's usual estimate of the cost of reading pages from tables in that tablespace, as established by the configuration parameters of the @@ -128,7 +128,7 @@ CREATE TABLESPACE tablespace_name - CREATE TABLESPACE cannot be executed inside a transaction + CREATE TABLESPACE cannot be executed inside a transaction block. @@ -137,15 +137,15 @@ CREATE TABLESPACE tablespace_name Examples - Create a tablespace dbspace at /data/dbs: + Create a tablespace dbspace at /data/dbs: CREATE TABLESPACE dbspace LOCATION '/data/dbs'; - Create a tablespace indexspace at /data/indexes - owned by user genevieve: + Create a tablespace indexspace at /data/indexes + owned by user genevieve: CREATE TABLESPACE indexspace OWNER genevieve LOCATION '/data/indexes'; @@ -155,7 +155,7 @@ CREATE TABLESPACE indexspace OWNER genevieve LOCATION '/data/indexes'; Compatibility - CREATE TABLESPACE is a PostgreSQL + CREATE TABLESPACE is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index 7fc481d9fc..6726e3c766 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -86,10 +86,10 @@ CREATE [ CONSTRAINT ] TRIGGER name - Triggers that are specified to fire INSTEAD OF the trigger - event must be marked FOR EACH ROW, and can only be defined - on views. 
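For example, a row-level INSTEAD OF trigger on a view might look like this (the view and function names are hypothetical):

CREATE TRIGGER route_insert
    INSTEAD OF INSERT ON my_view
    FOR EACH ROW
    EXECUTE PROCEDURE insert_into_base();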
BEFORE and AFTER triggers on a view - must be marked as FOR EACH STATEMENT. + Triggers that are specified to fire INSTEAD OF the trigger + event must be marked FOR EACH ROW, and can only be defined + on views. BEFORE and AFTER triggers on a view + must be marked as FOR EACH STATEMENT. @@ -115,35 +115,35 @@ CREATE [ CONSTRAINT ] TRIGGER name - BEFORE - INSERT/UPDATE/DELETE + BEFORE + INSERT/UPDATE/DELETE Tables and foreign tables Tables, views, and foreign tables - TRUNCATE + TRUNCATE Tables - AFTER - INSERT/UPDATE/DELETE + AFTER + INSERT/UPDATE/DELETE Tables and foreign tables Tables, views, and foreign tables - TRUNCATE + TRUNCATE Tables - INSTEAD OF - INSERT/UPDATE/DELETE + INSTEAD OF + INSERT/UPDATE/DELETE Views - TRUNCATE + TRUNCATE @@ -152,11 +152,11 @@ CREATE [ CONSTRAINT ] TRIGGER name - Also, a trigger definition can specify a Boolean WHEN + Also, a trigger definition can specify a Boolean WHEN condition, which will be tested to see whether the trigger should - be fired. In row-level triggers the WHEN condition can + be fired. In row-level triggers the WHEN condition can examine the old and/or new values of columns of the row. Statement-level - triggers can also have WHEN conditions, although the feature + triggers can also have WHEN conditions, although the feature is not so useful for them since the condition cannot refer to any values in the table. @@ -167,36 +167,36 @@ CREATE [ CONSTRAINT ] TRIGGER name - When the CONSTRAINT option is specified, this command creates a - constraint trigger. This is the same as a regular trigger + When the CONSTRAINT option is specified, this command creates a + constraint trigger. This is the same as a regular trigger except that the timing of the trigger firing can be adjusted using . - Constraint triggers must be AFTER ROW triggers on plain + Constraint triggers must be AFTER ROW triggers on plain tables (not foreign tables). They can be fired either at the end of the statement causing the triggering event, or at the end of the containing transaction; in the latter case they - are said to be deferred. A pending deferred-trigger firing + are said to be deferred. A pending deferred-trigger firing can also be forced to happen immediately by using SET - CONSTRAINTS. Constraint triggers are expected to raise an exception + CONSTRAINTS. Constraint triggers are expected to raise an exception when the constraints they implement are violated. - The REFERENCING option enables collection - of transition relations, which are row sets that include all + The REFERENCING option enables collection + of transition relations, which are row sets that include all of the rows inserted, deleted, or modified by the current SQL statement. This feature lets the trigger see a global view of what the statement did, not just one row at a time. This option is only allowed for - an AFTER trigger that is not a constraint trigger; also, if - the trigger is an UPDATE trigger, it must not specify + an AFTER trigger that is not a constraint trigger; also, if + the trigger is an UPDATE trigger, it must not specify a column_name list. - OLD TABLE may only be specified once, and only for a trigger - that can fire on UPDATE or DELETE; it creates a - transition relation containing the before-images of all rows + OLD TABLE may only be specified once, and only for a trigger + that can fire on UPDATE or DELETE; it creates a + transition relation containing the before-images of all rows updated or deleted by the statement. 
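For example, an OLD TABLE transition relation might be requested like this (the trigger and function names are illustrative; accounts is borrowed from the examples below):

CREATE TRIGGER audit_deletes
    AFTER DELETE ON accounts
    REFERENCING OLD TABLE AS deleted_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE log_deleted_rows();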
- Similarly, NEW TABLE may only be specified once, and only for - a trigger that can fire on UPDATE or INSERT; - it creates a transition relation containing the after-images + Similarly, NEW TABLE may only be specified once, and only for + a trigger that can fire on UPDATE or INSERT; + it creates a transition relation containing the after-images of all rows updated or inserted by the statement. @@ -225,7 +225,7 @@ CREATE [ CONSTRAINT ] TRIGGER name The name cannot be schema-qualified — the trigger inherits the schema of its table. For a constraint trigger, this is also the name to use when modifying the trigger's behavior using - SET CONSTRAINTS. + SET CONSTRAINTS. @@ -238,7 +238,7 @@ CREATE [ CONSTRAINT ] TRIGGER name Determines whether the function is called before, after, or instead of the event. A constraint trigger can only be specified as - AFTER. + AFTER. @@ -261,11 +261,11 @@ CREATE [ CONSTRAINT ] TRIGGER name UPDATE OF column_name1 [, column_name2 ... ] The trigger will only fire if at least one of the listed columns - is mentioned as a target of the UPDATE command. + is mentioned as a target of the UPDATE command. - INSTEAD OF UPDATE events do not allow a list of columns. + INSTEAD OF UPDATE events do not allow a list of columns. A column list cannot be specified when requesting transition relations, either. @@ -352,7 +352,7 @@ UPDATE OF column_name1 [, column_name2FOR EACH STATEMENT is the default. Constraint triggers can only - be specified FOR EACH ROW. + be specified FOR EACH ROW. @@ -362,20 +362,20 @@ UPDATE OF column_name1 [, column_name2 A Boolean expression that determines whether the trigger function - will actually be executed. If WHEN is specified, the + will actually be executed. If WHEN is specified, the function will only be called if the condition returns true. - In FOR EACH ROW triggers, the WHEN + class="parameter">condition returns true. + In FOR EACH ROW triggers, the WHEN condition can refer to columns of the old and/or new row values by writing OLD.column_name or NEW.column_name respectively. - Of course, INSERT triggers cannot refer to OLD - and DELETE triggers cannot refer to NEW. + Of course, INSERT triggers cannot refer to OLD + and DELETE triggers cannot refer to NEW. - INSTEAD OF triggers do not support WHEN + INSTEAD OF triggers do not support WHEN conditions. @@ -385,7 +385,7 @@ UPDATE OF column_name1 [, column_name2 - Note that for constraint triggers, evaluation of the WHEN + Note that for constraint triggers, evaluation of the WHEN condition is not deferred, but occurs immediately after the row update operation is performed. If the condition does not evaluate to true then the trigger is not queued for deferred execution. @@ -398,7 +398,7 @@ UPDATE OF column_name1 [, column_name2 A user-supplied function that is declared as taking no arguments - and returning type trigger, which is executed when + and returning type trigger, which is executed when the trigger fires. @@ -438,32 +438,32 @@ UPDATE OF column_name1 [, column_name2 A column-specific trigger (one defined using the UPDATE OF column_name syntax) will fire when any - of its columns are listed as targets in the UPDATE - command's SET list. It is possible for a column's value + of its columns are listed as targets in the UPDATE + command's SET list. It is possible for a column's value to change even when the trigger is not fired, because changes made to the - row's contents by BEFORE UPDATE triggers are not considered. - Conversely, a command such as UPDATE ... SET x = x ... 
- will fire a trigger on column x, even though the column's + row's contents by BEFORE UPDATE triggers are not considered. + Conversely, a command such as UPDATE ... SET x = x ... + will fire a trigger on column x, even though the column's value did not change. - In a BEFORE trigger, the WHEN condition is + In a BEFORE trigger, the WHEN condition is evaluated just before the function is or would be executed, so using - WHEN is not materially different from testing the same + WHEN is not materially different from testing the same condition at the beginning of the trigger function. Note in particular - that the NEW row seen by the condition is the current value, - as possibly modified by earlier triggers. Also, a BEFORE - trigger's WHEN condition is not allowed to examine the - system columns of the NEW row (such as oid), + that the NEW row seen by the condition is the current value, + as possibly modified by earlier triggers. Also, a BEFORE + trigger's WHEN condition is not allowed to examine the + system columns of the NEW row (such as oid), because those won't have been set yet. - In an AFTER trigger, the WHEN condition is + In an AFTER trigger, the WHEN condition is evaluated just after the row update occurs, and it determines whether an event is queued to fire the trigger at the end of statement. So when an - AFTER trigger's WHEN condition does not return + AFTER trigger's WHEN condition does not return true, it is not necessary to queue an event nor to re-fetch the row at end of statement. This can result in significant speedups in statements that modify many rows, if the trigger only needs to be fired for a few of the @@ -473,7 +473,7 @@ UPDATE OF column_name1 [, column_name2 In some cases it is possible for a single SQL command to fire more than one kind of trigger. For instance an INSERT with - an ON CONFLICT DO UPDATE clause may cause both insert and + an ON CONFLICT DO UPDATE clause may cause both insert and update operations, so it will fire both kinds of triggers as needed. The transition relations supplied to triggers are specific to their event type; thus an INSERT trigger @@ -483,14 +483,14 @@ UPDATE OF column_name1 [, column_name2 Row updates or deletions caused by foreign-key enforcement actions, such - as ON UPDATE CASCADE or ON DELETE SET NULL, are + as ON UPDATE CASCADE or ON DELETE SET NULL, are treated as part of the SQL command that caused them (note that such actions are never deferred). Relevant triggers on the affected table will be fired, so that this provides another way in which a SQL command might fire triggers not directly matching its type. In simple cases, triggers that request transition relations will see all changes caused in their table by a single original SQL command as a single transition relation. - However, there are cases in which the presence of an AFTER ROW + However, there are cases in which the presence of an AFTER ROW trigger that requests transition relations will cause the foreign-key enforcement actions triggered by a single SQL command to be split into multiple steps, each with its own transition relation(s). In such cases, @@ -516,10 +516,10 @@ UPDATE OF column_name1 [, column_name2 In PostgreSQL versions before 7.3, it was necessary to declare trigger functions as returning the placeholder - type opaque, rather than trigger. To support loading - of old dump files, CREATE TRIGGER will accept a function - declared as returning opaque, but it will issue a notice and - change the function's declared return type to trigger. 
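A minimal PL/pgSQL sketch of a modern trigger function with the required trigger return type, such as the examples below assume already exists (the body is illustrative):

CREATE FUNCTION check_account_update() RETURNS trigger AS $$
BEGIN
    IF NEW.balance < 0 THEN
        RAISE EXCEPTION 'balance may not go negative';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;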
+ type opaque, rather than trigger. To support loading + of old dump files, CREATE TRIGGER will accept a function + declared as returning opaque, but it will issue a notice and + change the function's declared return type to trigger. @@ -527,8 +527,8 @@ UPDATE OF column_name1 [, column_name2Examples - Execute the function check_account_update whenever - a row of the table accounts is about to be updated: + Execute the function check_account_update whenever + a row of the table accounts is about to be updated: CREATE TRIGGER check_update @@ -537,8 +537,8 @@ CREATE TRIGGER check_update EXECUTE PROCEDURE check_account_update(); - The same, but only execute the function if column balance - is specified as a target in the UPDATE command: + The same, but only execute the function if column balance + is specified as a target in the UPDATE command: CREATE TRIGGER check_update @@ -547,7 +547,7 @@ CREATE TRIGGER check_update EXECUTE PROCEDURE check_account_update(); - This form only executes the function if column balance + This form only executes the function if column balance has in fact changed value: @@ -558,7 +558,7 @@ CREATE TRIGGER check_update EXECUTE PROCEDURE check_account_update(); - Call a function to log updates of accounts, but only if + Call a function to log updates of accounts, but only if something changed: @@ -569,7 +569,7 @@ CREATE TRIGGER log_update EXECUTE PROCEDURE log_account_update(); - Execute the function view_insert_row for each row to insert + Execute the function view_insert_row for each row to insert rows into the tables underlying a view: @@ -579,8 +579,8 @@ CREATE TRIGGER view_insert EXECUTE PROCEDURE view_insert_row(); - Execute the function check_transfer_balances_to_zero for each - statement to confirm that the transfer rows offset to a net of + Execute the function check_transfer_balances_to_zero for each + statement to confirm that the transfer rows offset to a net of zero: @@ -591,7 +591,7 @@ CREATE TRIGGER transfer_insert EXECUTE PROCEDURE check_transfer_balances_to_zero(); - Execute the function check_matching_pairs for each row to + Execute the function check_matching_pairs for each row to confirm that changes are made to matching pairs at the same time (by the same statement): @@ -624,27 +624,27 @@ CREATE TRIGGER paired_items_update The CREATE TRIGGER statement in PostgreSQL implements a subset of the - SQL standard. The following functionalities are currently + SQL standard. The following functionalities are currently missing: - While transition table names for AFTER triggers are - specified using the REFERENCING clause in the standard way, - the row variables used in FOR EACH ROW triggers may not be - specified in a REFERENCING clause. They are available in a + While transition table names for AFTER triggers are + specified using the REFERENCING clause in the standard way, + the row variables used in FOR EACH ROW triggers may not be + specified in a REFERENCING clause. They are available in a manner that is dependent on the language in which the trigger function is written, but is fixed for any one language. Some languages - effectively behave as though there is a REFERENCING clause - containing OLD ROW AS OLD NEW ROW AS NEW. + effectively behave as though there is a REFERENCING clause + containing OLD ROW AS OLD NEW ROW AS NEW. 
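One plausible sketch of the check_transfer_balances_to_zero function from the examples above, assuming the transfer_insert trigger declares REFERENCING NEW TABLE AS inserted and that transfer has an amount column:

CREATE FUNCTION check_transfer_balances_to_zero() RETURNS trigger AS $$
BEGIN
    -- The transition table is visible as an ordinary relation here.
    IF (SELECT coalesce(sum(amount), 0) FROM inserted) <> 0 THEN
        RAISE EXCEPTION 'transfer rows must net to zero';
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;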
The standard allows transition tables to be used with - column-specific UPDATE triggers, but then the set of rows + column-specific UPDATE triggers, but then the set of rows that should be visible in the transition tables depends on the trigger's column list. This is not currently implemented by PostgreSQL. @@ -673,7 +673,7 @@ CREATE TRIGGER paired_items_update SQL specifies that BEFORE DELETE triggers on cascaded - deletes fire after the cascaded DELETE completes. + deletes fire after the cascaded DELETE completes. The PostgreSQL behavior is for BEFORE DELETE to always fire before the delete action, even a cascading one. This is considered more consistent. There is also nonstandard @@ -685,19 +685,19 @@ CREATE TRIGGER paired_items_update The ability to specify multiple actions for a single trigger using - OR is a PostgreSQL extension of + OR is a PostgreSQL extension of the SQL standard. The ability to fire triggers for TRUNCATE is a - PostgreSQL extension of the SQL standard, as is the + PostgreSQL extension of the SQL standard, as is the ability to define statement-level triggers on views. CREATE CONSTRAINT TRIGGER is a - PostgreSQL extension of the SQL + PostgreSQL extension of the SQL standard. diff --git a/doc/src/sgml/ref/create_tsconfig.sgml b/doc/src/sgml/ref/create_tsconfig.sgml index 63321520df..d1792e5d29 100644 --- a/doc/src/sgml/ref/create_tsconfig.sgml +++ b/doc/src/sgml/ref/create_tsconfig.sgml @@ -99,7 +99,7 @@ CREATE TEXT SEARCH CONFIGURATION nameNotes - The PARSER and COPY options are mutually + The PARSER and COPY options are mutually exclusive, because when an existing configuration is copied, its parser selection is copied too. diff --git a/doc/src/sgml/ref/create_tstemplate.sgml b/doc/src/sgml/ref/create_tstemplate.sgml index 360ad41f35..e10f18b28b 100644 --- a/doc/src/sgml/ref/create_tstemplate.sgml +++ b/doc/src/sgml/ref/create_tstemplate.sgml @@ -49,7 +49,7 @@ CREATE TEXT SEARCH TEMPLATE name ( TEMPLATE. This restriction is made because an erroneous text search template definition could confuse or even crash the server. The reason for separating templates from dictionaries is that a template - encapsulates the unsafe aspects of defining a dictionary. + encapsulates the unsafe aspects of defining a dictionary. The parameters that can be set when defining a dictionary are safe for unprivileged users to set, and so creating a dictionary need not be a privileged operation. diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml index 312bd050bc..02ca27b281 100644 --- a/doc/src/sgml/ref/create_type.sgml +++ b/doc/src/sgml/ref/create_type.sgml @@ -81,8 +81,8 @@ CREATE TYPE name There are five forms of CREATE TYPE, as shown in the syntax synopsis above. They respectively create a composite - type, an enum type, a range type, a - base type, or a shell type. The first four + type, an enum type, a range type, a + base type, or a shell type. The first four of these are discussed in turn below. A shell type is simply a placeholder for a type to be defined later; it is created by issuing CREATE TYPE with no parameters except for the type name. Shell types @@ -154,7 +154,7 @@ CREATE TYPE name declared. To do this, you must first create a shell type, which is a placeholder type that has no properties except a name and an owner. This is done by issuing the command CREATE TYPE - name, with no additional parameters. Then + name, with no additional parameters. 
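Concretely, that first step is nothing more than (the type name is hypothetical):

CREATE TYPE myrange;    -- a shell: just a name and an owner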
Then the function can be declared using the shell type as argument and result, and finally the range type can be declared using the same name. This automatically replaces the shell type entry with a valid range type. @@ -211,7 +211,7 @@ CREATE TYPE name The first argument is the input text as a C string, the second argument is the type's own OID (except for array types, which instead receive their element type's OID), - and the third is the typmod of the destination column, if known + and the third is the typmod of the destination column, if known (-1 will be passed if not). The input function must return a value of the data type itself. Usually, an input function should be declared STRICT; if it is not, @@ -264,12 +264,12 @@ CREATE TYPE name You should at this point be wondering how the input and output functions can be declared to have results or arguments of the new type, when they have to be created before the new type can be created. The answer is that - the type should first be defined as a shell type, which is a + the type should first be defined as a shell type, which is a placeholder type that has no properties except a name and an owner. This is done by issuing the command CREATE TYPE - name, with no additional parameters. Then the + name, with no additional parameters. Then the C I/O functions can be defined referencing the shell type. Finally, - CREATE TYPE with a full definition replaces the shell entry + CREATE TYPE with a full definition replaces the shell entry with a complete, valid type definition, after which the new type can be used normally. @@ -279,23 +279,23 @@ CREATE TYPE name type_modifier_input_function and type_modifier_output_function are needed if the type supports modifiers, that is optional constraints - attached to a type declaration, such as char(5) or - numeric(30,2). PostgreSQL allows + attached to a type declaration, such as char(5) or + numeric(30,2). PostgreSQL allows user-defined types to take one or more simple constants or identifiers as modifiers. However, this information must be capable of being packed into a single non-negative integer value for storage in the system catalogs. The type_modifier_input_function - is passed the declared modifier(s) in the form of a cstring + is passed the declared modifier(s) in the form of a cstring array. It must check the values for validity (throwing an error if they are wrong), and if they are correct, return a single non-negative - integer value that will be stored as the column typmod. + integer value that will be stored as the column typmod. Type modifiers will be rejected if the type does not have a type_modifier_input_function. The type_modifier_output_function converts the internal integer typmod value back to the correct form for - user display. It must return a cstring value that is the exact - string to append to the type name; for example numeric's - function might return (30,2). + user display. It must return a cstring value that is the exact + string to append to the type name; for example numeric's + function might return (30,2). It is allowed to omit the type_modifier_output_function, in which case the default display format is just the stored typmod integer @@ -305,14 +305,14 @@ CREATE TYPE name The optional analyze_function performs type-specific statistics collection for columns of the data type. 
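Putting the shell-type workflow for base types together, a sketch along the lines of the classic complex-number example; 'filename' stands in for a real shared library containing the C functions:

CREATE TYPE complex;    -- shell type

CREATE FUNCTION complex_in(cstring) RETURNS complex
    AS 'filename', 'complex_in' LANGUAGE C IMMUTABLE STRICT;

CREATE FUNCTION complex_out(complex) RETURNS cstring
    AS 'filename', 'complex_out' LANGUAGE C IMMUTABLE STRICT;

CREATE TYPE complex (
    internallength = 16,    -- two float8 fields
    input = complex_in,
    output = complex_out,
    alignment = double
);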
- By default, ANALYZE will attempt to gather statistics using - the type's equals and less-than operators, if there + By default, ANALYZE will attempt to gather statistics using + the type's equals and less-than operators, if there is a default b-tree operator class for the type. For non-scalar types this behavior is likely to be unsuitable, so it can be overridden by specifying a custom analysis function. The analysis function must be - declared to take a single argument of type internal, and return - a boolean result. The detailed API for analysis functions appears - in src/include/commands/vacuum.h. + declared to take a single argument of type internal, and return + a boolean result. The detailed API for analysis functions appears + in src/include/commands/vacuum.h. @@ -327,7 +327,7 @@ CREATE TYPE name positive integer, or variable-length, indicated by setting internallength to VARIABLE. (Internally, this is represented - by setting typlen to -1.) The internal representation of all + by setting typlen to -1.) The internal representation of all variable-length types must start with a 4-byte integer giving the total length of this value of the type. (Note that the length field is often encoded, as described in ; it's unwise @@ -338,7 +338,7 @@ CREATE TYPE name The optional flag PASSEDBYVALUE indicates that values of this data type are passed by value, rather than by reference. Types passed by value must be fixed-length, and their internal - representation cannot be larger than the size of the Datum type + representation cannot be larger than the size of the Datum type (4 bytes on some machines, 8 bytes on others). @@ -347,7 +347,7 @@ CREATE TYPE name specifies the storage alignment required for the data type. The allowed values equate to alignment on 1, 2, 4, or 8 byte boundaries. Note that variable-length types must have an alignment of at least - 4, since they necessarily contain an int4 as their first component. + 4, since they necessarily contain an int4 as their first component. @@ -372,12 +372,12 @@ CREATE TYPE name All storage values other than plain imply that the functions of the data type - can handle values that have been toasted, as described + can handle values that have been toasted, as described in and . The specific other value given merely determines the default TOAST storage strategy for columns of a toastable data type; users can pick other strategies for individual columns using ALTER TABLE - SET STORAGE. + SET STORAGE. @@ -389,9 +389,9 @@ CREATE TYPE name alignment, and storage are copied from the named type. (It is possible, though usually undesirable, to override - some of these values by specifying them along with the LIKE + some of these values by specifying them along with the LIKE clause.) Specifying representation this way is especially useful when - the low-level implementation of the new type piggybacks on an + the low-level implementation of the new type piggybacks on an existing type in some fashion. @@ -400,7 +400,7 @@ CREATE TYPE name preferred parameters can be used to help control which implicit cast will be applied in ambiguous situations. Each data type belongs to a category named by a single ASCII - character, and each type is either preferred or not within its + character, and each type is either preferred or not within its category. The parser will prefer casting to preferred types (but only from other types within the same category) when this rule is helpful in resolving overloaded functions or operators. 
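For instance, the per-column TOAST strategy override mentioned above could look like this (the table and column names are illustrative):

ALTER TABLE blobs ALTER COLUMN payload SET STORAGE EXTERNAL;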
For more details see name other types, it is sufficient to leave these settings at the defaults. However, for a group of related types that have implicit casts, it is often helpful to mark them all as belonging to a category and select one or two - of the most general types as being preferred within the category. + of the most general types as being preferred within the category. The category parameter is especially useful when adding a user-defined type to an existing built-in category, such as the numeric or string types. However, it is also @@ -426,7 +426,7 @@ CREATE TYPE name To indicate that a type is an array, specify the type of the array - elements using the ELEMENT key word. For example, to + elements using the ELEMENT key word. For example, to define an array of 4-byte integers (int4), specify ELEMENT = int4. More details about array types appear below. @@ -465,26 +465,26 @@ CREATE TYPE name so generated collides with an existing type name, the process is repeated until a non-colliding name is found.) This implicitly-created array type is variable length and uses the - built-in input and output functions array_in and - array_out. The array type tracks any changes in its + built-in input and output functions array_in and + array_out. The array type tracks any changes in its element type's owner or schema, and is dropped if the element type is. - You might reasonably ask why there is an option, if the system makes the correct array type automatically. - The only case where it's useful to use is when you are making a fixed-length type that happens to be internally an array of a number of identical things, and you want to allow these things to be accessed directly by subscripting, in addition to whatever operations you plan - to provide for the type as a whole. For example, type point + to provide for the type as a whole. For example, type point is represented as just two floating-point numbers, which can be accessed - using point[0] and point[1]. + using point[0] and point[1]. Note that this facility only works for fixed-length types whose internal form is exactly a sequence of identical fixed-length fields. A subscriptable variable-length type must have the generalized internal representation - used by array_in and array_out. + used by array_in and array_out. For historical reasons (i.e., this is clearly wrong but it's far too late to change it), subscripting of fixed-length array types starts from zero, rather than from one as for variable-length arrays. @@ -697,7 +697,7 @@ CREATE TYPE name alignment, and storage are copied from that type, unless overridden by explicit - specification elsewhere in this CREATE TYPE command. + specification elsewhere in this CREATE TYPE command. @@ -707,7 +707,7 @@ CREATE TYPE name The category code (a single ASCII character) for this type. - The default is 'U' for user-defined type. + The default is 'U' for user-defined type. Other standard category codes can be found in . You may also choose other ASCII characters in order to create custom categories. @@ -779,7 +779,7 @@ CREATE TYPE name This is usually not an issue for the sorts of functions that are useful in a type definition. But you might want to think twice before designing a type - in a way that would require secret information to be used + in a way that would require secret information to be used while converting it to or from external form. @@ -792,7 +792,7 @@ CREATE TYPE name this in case of maximum-length names or collisions with user type names that begin with underscore. 
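The point subscripting described above can be seen directly:

SELECT p[0] AS x, p[1] AS y
FROM (SELECT point '(3,4)' AS p) AS s;
-- yields x = 3, y = 4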
Writing code that depends on this convention is therefore deprecated. Instead, use - pg_type.typarray to locate the array type + pg_type.typarray to locate the array type associated with a given type. @@ -807,7 +807,7 @@ CREATE TYPE name Before PostgreSQL version 8.2, the shell-type creation syntax - CREATE TYPE name did not exist. + CREATE TYPE name did not exist. The way to create a new base type was to create its input function first. In this approach, PostgreSQL will first see the name of the new data type as the return type of the input function. @@ -824,10 +824,10 @@ CREATE TYPE name In PostgreSQL versions before 7.3, it was customary to avoid creating a shell type at all, by replacing the functions' forward references to the type name with the placeholder - pseudo-type opaque. The cstring arguments and - results also had to be declared as opaque before 7.3. To - support loading of old dump files, CREATE TYPE will - accept I/O functions declared using opaque, but it will issue + pseudo-type opaque. The cstring arguments and + results also had to be declared as opaque before 7.3. To + support loading of old dump files, CREATE TYPE will + accept I/O functions declared using opaque, but it will issue a notice and change the function declarations to use the correct types. @@ -894,7 +894,7 @@ CREATE TABLE myboxes ( If the internal structure of box were an array of four - float4 elements, we might instead use: + float4 elements, we might instead use: CREATE TYPE box ( INTERNALLENGTH = 16, @@ -933,11 +933,11 @@ CREATE TABLE big_objs ( The first form of the CREATE TYPE command, which - creates a composite type, conforms to the SQL standard. + creates a composite type, conforms to the SQL standard. The other forms are PostgreSQL extensions. The CREATE TYPE statement in - the SQL standard also defines other forms that are not - implemented in PostgreSQL. + the SQL standard also defines other forms that are not + implemented in PostgreSQL. diff --git a/doc/src/sgml/ref/create_user.sgml b/doc/src/sgml/ref/create_user.sgml index 480b6405e6..500169da98 100644 --- a/doc/src/sgml/ref/create_user.sgml +++ b/doc/src/sgml/ref/create_user.sgml @@ -51,8 +51,8 @@ CREATE USER name [ [ WITH ] CREATE USER is now an alias for . The only difference is that when the command is spelled - CREATE USER, LOGIN is assumed - by default, whereas NOLOGIN is assumed when + CREATE USER, LOGIN is assumed + by default, whereas NOLOGIN is assumed when the command is spelled CREATE ROLE. diff --git a/doc/src/sgml/ref/create_user_mapping.sgml b/doc/src/sgml/ref/create_user_mapping.sgml index d6f29c9489..10182e1426 100644 --- a/doc/src/sgml/ref/create_user_mapping.sgml +++ b/doc/src/sgml/ref/create_user_mapping.sgml @@ -41,7 +41,7 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na The owner of a foreign server can create user mappings for that server for any user. Also, a user can create a user mapping for - their own user name if USAGE privilege on the server has + their own user name if USAGE privilege on the server has been granted to the user. @@ -51,7 +51,7 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na - IF NOT EXISTS + IF NOT EXISTS Do not throw an error if a mapping of the given user to the given foreign @@ -67,8 +67,8 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na The name of an existing user that is mapped to foreign server. - CURRENT_USER and USER match the name of - the current user. When PUBLIC is specified, a + CURRENT_USER and USER match the name of + the current user. 
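Building on this page's bob and foo example, a mapping for whoever is currently connected might look like this (the option values are illustrative):

CREATE USER MAPPING FOR CURRENT_USER
    SERVER foo
    OPTIONS (user 'remote_user', password 'secret');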
When PUBLIC is specified, a so-called public mapping is created that is used when no user-specific mapping is applicable. @@ -103,7 +103,7 @@ CREATE USER MAPPING [IF NOT EXISTS] FOR { user_na Examples - Create a user mapping for user bob, server foo: + Create a user mapping for user bob, server foo: CREATE USER MAPPING FOR bob SERVER foo OPTIONS (user 'bob', password 'secret'); diff --git a/doc/src/sgml/ref/create_view.sgml b/doc/src/sgml/ref/create_view.sgml index 695c759312..c0dd022495 100644 --- a/doc/src/sgml/ref/create_view.sgml +++ b/doc/src/sgml/ref/create_view.sgml @@ -48,7 +48,7 @@ CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW If a schema name is given (for example, CREATE VIEW - myschema.myview ...) then the view is created in the specified + myschema.myview ...) then the view is created in the specified schema. Otherwise it is created in the current schema. Temporary views exist in a special schema, so a schema name cannot be given when creating a temporary view. The name of the view must be @@ -62,7 +62,7 @@ CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW - TEMPORARY or TEMP + TEMPORARY or TEMP If specified, the view is created as a temporary view. @@ -82,16 +82,16 @@ CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW - RECURSIVE + RECURSIVE Creates a recursive view. The syntax -CREATE RECURSIVE VIEW [ schema . ] view_name (column_names) AS SELECT ...; +CREATE RECURSIVE VIEW [ schema . ] view_name (column_names) AS SELECT ...; is equivalent to -CREATE VIEW [ schema . ] view_name AS WITH RECURSIVE view_name (column_names) AS (SELECT ...) SELECT column_names FROM view_name; +CREATE VIEW [ schema . ] view_name AS WITH RECURSIVE view_name (column_names) AS (SELECT ...) SELECT column_names FROM view_name; A view column name list must be specified for a recursive view. @@ -129,9 +129,9 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR check_option (string) - This parameter may be either local or - cascaded, and is equivalent to specifying - WITH [ CASCADED | LOCAL ] CHECK OPTION (see below). + This parameter may be either local or + cascaded, and is equivalent to specifying + WITH [ CASCADED | LOCAL ] CHECK OPTION (see below). This option can be changed on existing views using . @@ -175,12 +175,12 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR This option controls the behavior of automatically updatable views. When - this option is specified, INSERT and UPDATE + this option is specified, INSERT and UPDATE commands on the view will be checked to ensure that new rows satisfy the view-defining condition (that is, the new rows are checked to ensure that they are visible through the view). If they are not, the update will be - rejected. If the CHECK OPTION is not specified, - INSERT and UPDATE commands on the view are + rejected. If the CHECK OPTION is not specified, + INSERT and UPDATE commands on the view are allowed to create rows that are not visible through the view. The following check options are supported: @@ -191,7 +191,7 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR New rows are only checked against the conditions defined directly in the view itself. Any conditions defined on underlying base views are - not checked (unless they also specify the CHECK OPTION). + not checked (unless they also specify the CHECK OPTION). @@ -201,9 +201,9 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR New rows are checked against the conditions of the view and all - underlying base views. 
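A sketch of the check option at work, assuming a hypothetical users table with an active column:

CREATE VIEW active_users AS
    SELECT * FROM users WHERE active
    WITH CHECK OPTION;    -- CASCADED is assumed

-- Rejected: the new row would not be visible through the view.
INSERT INTO active_users (name, active) VALUES ('mallory', false);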
If the CHECK OPTION is specified, - and neither LOCAL nor CASCADED is specified, - then CASCADED is assumed. + underlying base views. If the CHECK OPTION is specified, + and neither LOCAL nor CASCADED is specified, + then CASCADED is assumed. @@ -211,26 +211,26 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR - The CHECK OPTION may not be used with RECURSIVE + The CHECK OPTION may not be used with RECURSIVE views. - Note that the CHECK OPTION is only supported on views that - are automatically updatable, and do not have INSTEAD OF - triggers or INSTEAD rules. If an automatically updatable - view is defined on top of a base view that has INSTEAD OF - triggers, then the LOCAL CHECK OPTION may be used to check + Note that the CHECK OPTION is only supported on views that + are automatically updatable, and do not have INSTEAD OF + triggers or INSTEAD rules. If an automatically updatable + view is defined on top of a base view that has INSTEAD OF + triggers, then the LOCAL CHECK OPTION may be used to check the conditions on the automatically updatable view, but the conditions - on the base view with INSTEAD OF triggers will not be + on the base view with INSTEAD OF triggers will not be checked (a cascaded check option will not cascade down to a trigger-updatable view, and any check options defined directly on a trigger-updatable view will be ignored). If the view or any of its base - relations has an INSTEAD rule that causes the - INSERT or UPDATE command to be rewritten, then + relations has an INSTEAD rule that causes the + INSERT or UPDATE command to be rewritten, then all check options will be ignored in the rewritten query, including any checks from automatically updatable views defined on top of the relation - with the INSTEAD rule. + with the INSTEAD rule. @@ -251,8 +251,8 @@ CREATE VIEW [ schema . ] view_name AS WITH RECUR CREATE VIEW vista AS SELECT 'Hello World'; - is bad form because the column name defaults to ?column?; - also, the column data type defaults to text, which might not + is bad form because the column name defaults to ?column?; + also, the column data type defaults to text, which might not be what you wanted. Better style for a string literal in a view's result is something like: @@ -271,7 +271,7 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; - When CREATE OR REPLACE VIEW is used on an + When CREATE OR REPLACE VIEW is used on an existing view, only the view's defining SELECT rule is changed. Other view properties, including ownership, permissions, and non-SELECT rules, remain unchanged. You must own the view @@ -287,30 +287,30 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; Simple views are automatically updatable: the system will allow - INSERT, UPDATE and DELETE statements + INSERT, UPDATE and DELETE statements to be used on the view in the same way as on a regular table. A view is automatically updatable if it satisfies all of the following conditions: - The view must have exactly one entry in its FROM list, + The view must have exactly one entry in its FROM list, which must be a table or another updatable view. - The view definition must not contain WITH, - DISTINCT, GROUP BY, HAVING, - LIMIT, or OFFSET clauses at the top level. + The view definition must not contain WITH, + DISTINCT, GROUP BY, HAVING, + LIMIT, or OFFSET clauses at the top level. - The view definition must not contain set operations (UNION, - INTERSECT or EXCEPT) at the top level. 
+ The view definition must not contain set operations (UNION, + INTERSECT or EXCEPT) at the top level. @@ -327,42 +327,42 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; An automatically updatable view may contain a mix of updatable and non-updatable columns. A column is updatable if it is a simple reference to an updatable column of the underlying base relation; otherwise the - column is read-only, and an error will be raised if an INSERT - or UPDATE statement attempts to assign a value to it. + column is read-only, and an error will be raised if an INSERT + or UPDATE statement attempts to assign a value to it. If the view is automatically updatable the system will convert any - INSERT, UPDATE or DELETE statement + INSERT, UPDATE or DELETE statement on the view into the corresponding statement on the underlying base - relation. INSERT statements that have an ON - CONFLICT UPDATE clause are fully supported. + relation. INSERT statements that have an ON + CONFLICT UPDATE clause are fully supported. - If an automatically updatable view contains a WHERE + If an automatically updatable view contains a WHERE condition, the condition restricts which rows of the base relation are - available to be modified by UPDATE and DELETE - statements on the view. However, an UPDATE is allowed to - change a row so that it no longer satisfies the WHERE + available to be modified by UPDATE and DELETE + statements on the view. However, an UPDATE is allowed to + change a row so that it no longer satisfies the WHERE condition, and thus is no longer visible through the view. Similarly, - an INSERT command can potentially insert base-relation rows - that do not satisfy the WHERE condition and thus are not - visible through the view (ON CONFLICT UPDATE may + an INSERT command can potentially insert base-relation rows + that do not satisfy the WHERE condition and thus are not + visible through the view (ON CONFLICT UPDATE may similarly affect an existing row not visible through the view). - The CHECK OPTION may be used to prevent - INSERT and UPDATE commands from creating + The CHECK OPTION may be used to prevent + INSERT and UPDATE commands from creating such rows that are not visible through the view. If an automatically updatable view is marked with the - security_barrier property then all the view's WHERE + security_barrier property then all the view's WHERE conditions (and any conditions using operators which are marked as LEAKPROOF) will always be evaluated before any conditions that a user of the view has added. See for full details. Note that, due to this, rows which are not ultimately returned (because they do not - pass the user's WHERE conditions) may still end up being locked. + pass the user's WHERE conditions) may still end up being locked. EXPLAIN can be used to see which conditions are applied at the relation level (and therefore do not lock rows) and which are not. @@ -372,7 +372,7 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; A more complex view that does not satisfy all these conditions is read-only by default: the system will not allow an insert, update, or delete on the view. You can get the effect of an updatable view by - creating INSTEAD OF triggers on the view, which must + creating INSTEAD OF triggers on the view, which must convert attempted inserts, etc. on the view into appropriate actions on other tables. For more information see . 
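For the simple automatically updatable case described earlier, ordinary DML is issued against the view directly; for instance, using the comedies view defined in the examples below (the film titles are illustrative):

UPDATE comedies SET title = 'Bananas II' WHERE title = 'Bananas';
DELETE FROM comedies WHERE title = 'Sleeper';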
Another possibility is to create rules @@ -404,13 +404,13 @@ CREATE VIEW comedies AS WHERE kind = 'Comedy'; This will create a view containing the columns that are in the - film table at the time of view creation. Though - * was used to create the view, columns added later to + film table at the time of view creation. Though + * was used to create the view, columns added later to the table will not be part of the view. - Create a view with LOCAL CHECK OPTION: + Create a view with LOCAL CHECK OPTION: CREATE VIEW universal_comedies AS @@ -419,16 +419,16 @@ CREATE VIEW universal_comedies AS WHERE classification = 'U' WITH LOCAL CHECK OPTION; - This will create a view based on the comedies view, showing - only films with kind = 'Comedy' and - classification = 'U'. Any attempt to INSERT or - UPDATE a row in the view will be rejected if the new row - doesn't have classification = 'U', but the film - kind will not be checked. + This will create a view based on the comedies view, showing + only films with kind = 'Comedy' and + classification = 'U'. Any attempt to INSERT or + UPDATE a row in the view will be rejected if the new row + doesn't have classification = 'U', but the film + kind will not be checked. - Create a view with CASCADED CHECK OPTION: + Create a view with CASCADED CHECK OPTION: CREATE VIEW pg_comedies AS @@ -437,8 +437,8 @@ CREATE VIEW pg_comedies AS WHERE classification = 'PG' WITH CASCADED CHECK OPTION; - This will create a view that checks both the kind and - classification of new rows. + This will create a view that checks both the kind and + classification of new rows. @@ -454,10 +454,10 @@ CREATE VIEW comedies AS FROM films f WHERE f.kind = 'Comedy'; - This view will support INSERT, UPDATE and - DELETE. All the columns from the films table will - be updatable, whereas the computed columns country and - avg_rating will be read-only. + This view will support INSERT, UPDATE and + DELETE. All the columns from the films table will + be updatable, whereas the computed columns country and + avg_rating will be read-only. @@ -469,7 +469,7 @@ UNION ALL SELECT n+1 FROM nums_1_100 WHERE n < 100; Notice that although the recursive view's name is schema-qualified in this - CREATE, its internal self-reference is not schema-qualified. + CREATE, its internal self-reference is not schema-qualified. This is because the implicitly-created CTE's name cannot be schema-qualified. @@ -482,7 +482,7 @@ UNION ALL CREATE OR REPLACE VIEW is a PostgreSQL language extension. So is the concept of a temporary view. - The WITH ( ... ) clause is an extension as well. + The WITH ( ... ) clause is an extension as well. diff --git a/doc/src/sgml/ref/createdb.sgml b/doc/src/sgml/ref/createdb.sgml index 9fc4c16a81..0112d3a848 100644 --- a/doc/src/sgml/ref/createdb.sgml +++ b/doc/src/sgml/ref/createdb.sgml @@ -86,8 +86,8 @@ PostgreSQL documentation - - + + Specifies the default tablespace for the database. (This name @@ -97,8 +97,8 @@ PostgreSQL documentation - - + + Echo the commands that createdb generates @@ -108,8 +108,8 @@ PostgreSQL documentation - - + + Specifies the character encoding scheme to be used in this @@ -121,8 +121,8 @@ PostgreSQL documentation - - + + Specifies the locale to be used in this database. This is equivalent @@ -132,7 +132,7 @@ PostgreSQL documentation - + Specifies the LC_COLLATE setting to be used in this database. @@ -141,7 +141,7 @@ PostgreSQL documentation - + Specifies the LC_CTYPE setting to be used in this database. 
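Hypothetically, the two locale-related switches just described reduce to SQL along these lines (the locale names are illustrative, and template0 is required when they differ from the template's locale):

CREATE DATABASE demo
    LC_COLLATE 'fr_FR.utf8'
    LC_CTYPE 'fr_FR.utf8'
    TEMPLATE template0;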
@@ -150,8 +150,8 @@ PostgreSQL documentation - - + + Specifies the database user who will own the new database. @@ -161,8 +161,8 @@ PostgreSQL documentation - - + + Specifies the template database from which to build this @@ -172,8 +172,8 @@ PostgreSQL documentation - - + + Print the createdb version and exit. @@ -182,8 +182,8 @@ PostgreSQL documentation - - + + Show help about createdb command line @@ -209,8 +209,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -221,8 +221,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or the local Unix domain socket file @@ -232,8 +232,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -242,8 +242,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -257,8 +257,8 @@ PostgreSQL documentation - - + + Force createdb to prompt for a @@ -271,14 +271,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, createdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. - + Specifies the name of the database to connect to when creating the @@ -325,8 +325,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -362,7 +362,7 @@ PostgreSQL documentation To create the database demo using the - server on host eden, port 5000, using the + server on host eden, port 5000, using the template0 template database, here is the command-line command and the underlying SQL command: diff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml index fda77976ff..788ee81daf 100644 --- a/doc/src/sgml/ref/createuser.sgml +++ b/doc/src/sgml/ref/createuser.sgml @@ -34,15 +34,15 @@ PostgreSQL documentation createuser creates a new PostgreSQL user (or more precisely, a role). - Only superusers and users with CREATEROLE privilege can create + Only superusers and users with CREATEROLE privilege can create new users, so createuser must be invoked by someone who can connect as a superuser or a user with - CREATEROLE privilege. + CREATEROLE privilege. If you wish to create a new superuser, you must connect as a - superuser, not merely with CREATEROLE privilege. + superuser, not merely with CREATEROLE privilege. Being a superuser implies the ability to bypass all access permission checks within the database, so superuserdom should not be granted lightly. @@ -61,7 +61,7 @@ PostgreSQL documentation Options - createuser accepts the following command-line arguments: + createuser accepts the following command-line arguments: @@ -77,8 +77,8 @@ PostgreSQL documentation - - + + Set a maximum number of connections for the new user. @@ -88,8 +88,8 @@ PostgreSQL documentation - - + + The new user will be allowed to create databases. @@ -98,8 +98,8 @@ PostgreSQL documentation - - + + The new user will not be allowed to create databases. This is the @@ -109,8 +109,8 @@ PostgreSQL documentation - - + + Echo the commands that createuser generates @@ -120,8 +120,8 @@ PostgreSQL documentation - - + + This option is obsolete but still accepted for backward @@ -131,21 +131,21 @@ PostgreSQL documentation - - + + Indicates role to which this role will be added immediately as a new member. 
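The SQL effect of the option just described is immediate role membership; a rough equivalent, with hypothetical role names:

CREATE ROLE report_readers;
CREATE ROLE auditor LOGIN;
GRANT report_readers TO auditor;    -- auditor joins report_readers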
Multiple roles to which this role will be added as a member can be specified by writing multiple - switches. - - + + The new role will automatically inherit privileges of roles @@ -156,8 +156,8 @@ PostgreSQL documentation - - + + The new role will not automatically inherit privileges of roles @@ -167,7 +167,7 @@ PostgreSQL documentation - + Prompt for the user name if none is specified on the command line, and @@ -181,8 +181,8 @@ PostgreSQL documentation - - + + The new user will be allowed to log in (that is, the user name @@ -193,8 +193,8 @@ PostgreSQL documentation - - + + The new user will not be allowed to log in. @@ -205,8 +205,8 @@ PostgreSQL documentation - - + + If given, createuser will issue a prompt for @@ -217,19 +217,19 @@ PostgreSQL documentation - - + + The new user will be allowed to create new roles (that is, - this user will have CREATEROLE privilege). + this user will have CREATEROLE privilege). - - + + The new user will not be allowed to create new roles. This is the @@ -239,8 +239,8 @@ PostgreSQL documentation - - + + The new user will be a superuser. @@ -249,8 +249,8 @@ PostgreSQL documentation - - + + The new user will not be a superuser. This is the default. @@ -259,8 +259,8 @@ PostgreSQL documentation - - + + Print the createuser version and exit. @@ -269,7 +269,7 @@ PostgreSQL documentation - + The new user will have the REPLICATION privilege, @@ -280,7 +280,7 @@ PostgreSQL documentation - + The new user will not have the REPLICATION @@ -291,8 +291,8 @@ PostgreSQL documentation - - + + Show help about createuser command line @@ -310,8 +310,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -323,8 +323,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -335,8 +335,8 @@ PostgreSQL documentation - - + + User name to connect as (not the user name to create). @@ -345,8 +345,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -360,8 +360,8 @@ PostgreSQL documentation - - + + Force createuser to prompt for a @@ -375,7 +375,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, createuser will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -403,8 +403,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -451,7 +451,7 @@ PostgreSQL documentation To create the same user joe using the - server on host eden, port 5000, with attributes explicitly specified, + server on host eden, port 5000, with attributes explicitly specified, taking a look at the underlying command: $ createuser -h eden -p 5000 -S -D -R -e joe diff --git a/doc/src/sgml/ref/declare.sgml b/doc/src/sgml/ref/declare.sgml index 5cb85cc568..8eae0354af 100644 --- a/doc/src/sgml/ref/declare.sgml +++ b/doc/src/sgml/ref/declare.sgml @@ -45,7 +45,7 @@ DECLARE name [ BINARY ] [ INSENSITI This page describes usage of cursors at the SQL command level. - If you are trying to use cursors inside a PL/pgSQL + If you are trying to use cursors inside a PL/pgSQL function, the rules are different — see . @@ -144,13 +144,13 @@ DECLARE name [ BINARY ] [ INSENSITI Normal cursors return data in text format, the same as a - SELECT would produce. 
The BINARY option + SELECT would produce. The BINARY option specifies that the cursor should return data in binary format. This reduces conversion effort for both the server and client, at the cost of more programmer effort to deal with platform-dependent binary data formats. As an example, if a query returns a value of one from an integer column, - you would get a string of 1 with a default cursor, + you would get a string of 1 with a default cursor, whereas with a binary cursor you would get a 4-byte field containing the internal representation of the value (in big-endian byte order). @@ -165,8 +165,8 @@ DECLARE name [ BINARY ] [ INSENSITI - When the client application uses the extended query protocol - to issue a FETCH command, the Bind protocol message + When the client application uses the extended query protocol + to issue a FETCH command, the Bind protocol message specifies whether data is to be retrieved in text or binary format. This choice overrides the way that the cursor is defined. The concept of a binary cursor as such is thus obsolete when using extended query @@ -177,7 +177,7 @@ DECLARE name [ BINARY ] [ INSENSITI Unless WITH HOLD is specified, the cursor created by this command can only be used within the current - transaction. Thus, DECLARE without WITH + transaction. Thus, DECLARE without WITH HOLD is useless outside a transaction block: the cursor would survive only to the completion of the statement. Therefore PostgreSQL reports an error if such a @@ -204,25 +204,25 @@ DECLARE name [ BINARY ] [ INSENSITI WITH HOLD may not be specified when the query - includes FOR UPDATE or FOR SHARE. + includes FOR UPDATE or FOR SHARE. - The SCROLL option should be specified when defining a + The SCROLL option should be specified when defining a cursor that will be used to fetch backwards. This is required by the SQL standard. However, for compatibility with earlier versions, PostgreSQL will allow - backward fetches without SCROLL, if the cursor's query + backward fetches without SCROLL, if the cursor's query plan is simple enough that no extra overhead is needed to support it. However, application developers are advised not to rely on using backward fetches from a cursor that has not been created - with SCROLL. If NO SCROLL is + with SCROLL. If NO SCROLL is specified, then backward fetches are disallowed in any case. Backward fetches are also disallowed when the query - includes FOR UPDATE or FOR SHARE; therefore + includes FOR UPDATE or FOR SHARE; therefore SCROLL may not be specified in this case. @@ -241,42 +241,42 @@ DECLARE name [ BINARY ] [ INSENSITI - If the cursor's query includes FOR UPDATE or FOR - SHARE, then returned rows are locked at the time they are first + If the cursor's query includes FOR UPDATE or FOR + SHARE, then returned rows are locked at the time they are first fetched, in the same way as for a regular command with these options. In addition, the returned rows will be the most up-to-date versions; therefore these options provide the equivalent of what the SQL standard - calls a sensitive cursor. (Specifying INSENSITIVE - together with FOR UPDATE or FOR SHARE is an error.) + calls a sensitive cursor. (Specifying INSENSITIVE + together with FOR UPDATE or FOR SHARE is an error.) - It is generally recommended to use FOR UPDATE if the cursor - is intended to be used with UPDATE ... WHERE CURRENT OF or - DELETE ... WHERE CURRENT OF. Using FOR UPDATE + It is generally recommended to use FOR UPDATE if the cursor + is intended to be used with UPDATE ... 
WHERE CURRENT OF or + DELETE ... WHERE CURRENT OF. Using FOR UPDATE prevents other sessions from changing the rows between the time they are - fetched and the time they are updated. Without FOR UPDATE, - a subsequent WHERE CURRENT OF command will have no effect if + fetched and the time they are updated. Without FOR UPDATE, + a subsequent WHERE CURRENT OF command will have no effect if the row was changed since the cursor was created. - Another reason to use FOR UPDATE is that without it, a - subsequent WHERE CURRENT OF might fail if the cursor query + Another reason to use FOR UPDATE is that without it, a + subsequent WHERE CURRENT OF might fail if the cursor query does not meet the SQL standard's rules for being simply - updatable (in particular, the cursor must reference just one table - and not use grouping or ORDER BY). Cursors + updatable (in particular, the cursor must reference just one table + and not use grouping or ORDER BY). Cursors that are not simply updatable might work, or might not, depending on plan choice details; so in the worst case, an application might work in testing and then fail in production. - The main reason not to use FOR UPDATE with WHERE - CURRENT OF is if you need the cursor to be scrollable, or to be + The main reason not to use FOR UPDATE with WHERE + CURRENT OF is if you need the cursor to be scrollable, or to be insensitive to the subsequent updates (that is, continue to show the old data). If this is a requirement, pay close heed to the caveats shown above. @@ -321,13 +321,13 @@ DECLARE liahona CURSOR FOR SELECT * FROM films; The SQL standard says that it is implementation-dependent whether cursors are sensitive to concurrent updates of the underlying data by default. In PostgreSQL, cursors are insensitive by default, - and can be made sensitive by specifying FOR UPDATE. Other + and can be made sensitive by specifying FOR UPDATE. Other products may work differently. The SQL standard allows cursors only in embedded - SQL and in modules. PostgreSQL + SQL and in modules. PostgreSQL permits cursors to be used interactively. diff --git a/doc/src/sgml/ref/delete.sgml b/doc/src/sgml/ref/delete.sgml index 8ced7de7be..570e9aa710 100644 --- a/doc/src/sgml/ref/delete.sgml +++ b/doc/src/sgml/ref/delete.sgml @@ -55,12 +55,12 @@ DELETE FROM [ ONLY ] table_name [ * - The optional RETURNING clause causes DELETE + The optional RETURNING clause causes DELETE to compute and return value(s) based on each row actually deleted. Any expression using the table's columns, and/or columns of other tables mentioned in USING, can be computed. - The syntax of the RETURNING list is identical to that of the - output list of SELECT. + The syntax of the RETURNING list is identical to that of the + output list of SELECT. @@ -81,7 +81,7 @@ DELETE FROM [ ONLY ] table_name [ * The WITH clause allows you to specify one or more - subqueries that can be referenced by name in the DELETE + subqueries that can be referenced by name in the DELETE query. See and for details. @@ -93,11 +93,11 @@ DELETE FROM [ ONLY ] table_name [ * The name (optionally schema-qualified) of the table to delete rows - from. If ONLY is specified before the table name, + from. If ONLY is specified before the table name, matching rows are deleted from the named table only. If - ONLY is not specified, matching rows are also deleted + ONLY is not specified, matching rows are also deleted from any tables inheriting from the named table. 
Optionally, - * can be specified after the table name to explicitly + * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -109,9 +109,9 @@ DELETE FROM [ ONLY ] table_name [ * A substitute name for the target table. When an alias is provided, it completely hides the actual name of the table. For - example, given DELETE FROM foo AS f, the remainder + example, given DELETE FROM foo AS f, the remainder of the DELETE statement must refer to this - table as f not foo. + table as f not foo. @@ -121,7 +121,7 @@ DELETE FROM [ ONLY ] table_name [ * A list of table expressions, allowing columns from other tables - to appear in the WHERE condition. This is similar + to appear in the WHERE condition. This is similar to the list of tables that can be specified in the of a SELECT statement; for example, an alias for @@ -137,7 +137,7 @@ DELETE FROM [ ONLY ] table_name [ * An expression that returns a value of type boolean. - Only rows for which this expression returns true + Only rows for which this expression returns true will be deleted. @@ -147,15 +147,15 @@ DELETE FROM [ ONLY ] table_name [ * cursor_name - The name of the cursor to use in a WHERE CURRENT OF + The name of the cursor to use in a WHERE CURRENT OF condition. The row to be deleted is the one most recently fetched from this cursor. The cursor must be a non-grouping - query on the DELETE's target table. - Note that WHERE CURRENT OF cannot be + query on the DELETE's target table. + Note that WHERE CURRENT OF cannot be specified together with a Boolean condition. See for more information about using cursors with - WHERE CURRENT OF. + WHERE CURRENT OF. @@ -164,11 +164,11 @@ DELETE FROM [ ONLY ] table_name [ * output_expression - An expression to be computed and returned by the DELETE + An expression to be computed and returned by the DELETE command after each row is deleted. The expression can use any column names of the table named by table_name - or table(s) listed in USING. - Write * to return all columns. + or table(s) listed in USING. + Write * to return all columns. @@ -188,7 +188,7 @@ DELETE FROM [ ONLY ] table_name [ * Outputs - On successful completion, a DELETE command returns a command + On successful completion, a DELETE command returns a command tag of the form DELETE count @@ -197,16 +197,16 @@ DELETE count of rows deleted. Note that the number may be less than the number of rows that matched the condition when deletes were - suppressed by a BEFORE DELETE trigger. If BEFORE DELETE trigger. If count is 0, no rows were deleted by the query (this is not considered an error). - If the DELETE command contains a RETURNING - clause, the result will be similar to that of a SELECT + If the DELETE command contains a RETURNING + clause, the result will be similar to that of a SELECT statement containing the columns and values defined in the - RETURNING list, computed over the row(s) deleted by the + RETURNING list, computed over the row(s) deleted by the command. @@ -216,16 +216,16 @@ DELETE count PostgreSQL lets you reference columns of - other tables in the WHERE condition by specifying the + other tables in the WHERE condition by specifying the other tables in the USING clause. For example, to delete all films produced by a given producer, one can do: DELETE FROM films USING producers WHERE producer_id = producers.id AND producers.name = 'foo'; - What is essentially happening here is a join between films - and producers, with all successfully joined - films rows being marked for deletion. 
+ What is essentially happening here is a join between films + and producers, with all successfully joined + films rows being marked for deletion. This syntax is not standard. A more standard way to do it is: DELETE FROM films @@ -261,8 +261,8 @@ DELETE FROM tasks WHERE status = 'DONE' RETURNING *; - Delete the row of tasks on which the cursor - c_tasks is currently positioned: + Delete the row of tasks on which the cursor + c_tasks is currently positioned: DELETE FROM tasks WHERE CURRENT OF c_tasks; @@ -273,9 +273,9 @@ DELETE FROM tasks WHERE CURRENT OF c_tasks; This command conforms to the SQL standard, except - that the USING and RETURNING clauses + that the USING and RETURNING clauses are PostgreSQL extensions, as is the ability - to use WITH with DELETE. + to use WITH with DELETE. diff --git a/doc/src/sgml/ref/discard.sgml b/doc/src/sgml/ref/discard.sgml index e859bf7bab..f432e70430 100644 --- a/doc/src/sgml/ref/discard.sgml +++ b/doc/src/sgml/ref/discard.sgml @@ -29,10 +29,10 @@ DISCARD { ALL | PLANS | SEQUENCES | TEMPORARY | TEMP } Description - DISCARD releases internal resources associated with a + DISCARD releases internal resources associated with a database session. This command is useful for partially or fully resetting the session's state. There are several subcommands to - release different types of resources; the DISCARD ALL + release different types of resources; the DISCARD ALL variant subsumes all the others, and also resets additional state. @@ -57,9 +57,9 @@ DISCARD { ALL | PLANS | SEQUENCES | TEMPORARY | TEMP } Discards all cached sequence-related state, - including currval()/lastval() + including currval()/lastval() information and any preallocated sequence values that have not - yet been returned by nextval(). + yet been returned by nextval(). (See for a description of preallocated sequence values.) @@ -104,7 +104,7 @@ DISCARD TEMP; Notes - DISCARD ALL cannot be executed inside a transaction block. + DISCARD ALL cannot be executed inside a transaction block. diff --git a/doc/src/sgml/ref/do.sgml b/doc/src/sgml/ref/do.sgml index d4da32c34d..5d2e9b1b8c 100644 --- a/doc/src/sgml/ref/do.sgml +++ b/doc/src/sgml/ref/do.sgml @@ -39,12 +39,12 @@ DO [ LANGUAGE lang_name ] The code block is treated as though it were the body of a function - with no parameters, returning void. It is parsed and + with no parameters, returning void. It is parsed and executed a single time. - The optional LANGUAGE clause can be written either + The optional LANGUAGE clause can be written either before or after the code block. @@ -58,7 +58,7 @@ DO [ LANGUAGE lang_name ] The procedural language code to be executed. This must be specified - as a string literal, just as in CREATE FUNCTION. + as a string literal, just as in CREATE FUNCTION. Use of a dollar-quoted literal is recommended. @@ -69,7 +69,7 @@ DO [ LANGUAGE lang_name ] The name of the procedural language the code is written in. - If omitted, the default is plpgsql. + If omitted, the default is plpgsql. @@ -81,12 +81,12 @@ DO [ LANGUAGE lang_name ] The procedural language to be used must already have been installed - into the current database by means of CREATE LANGUAGE. - plpgsql is installed by default, but other languages are not. + into the current database by means of CREATE LANGUAGE. + plpgsql is installed by default, but other languages are not. - The user must have USAGE privilege for the procedural + The user must have USAGE privilege for the procedural language, or must be a superuser if the language is untrusted. 
This is the same privilege requirement as for creating a function in the language. @@ -96,8 +96,8 @@ DO [ LANGUAGE lang_name ] Examples - Grant all privileges on all views in schema public to - role webuser: + Grant all privileges on all views in schema public to + role webuser: DO $$DECLARE r record; BEGIN diff --git a/doc/src/sgml/ref/drop_access_method.sgml b/doc/src/sgml/ref/drop_access_method.sgml index 8aa9197fe4..aa5d2505c7 100644 --- a/doc/src/sgml/ref/drop_access_method.sgml +++ b/doc/src/sgml/ref/drop_access_method.sgml @@ -85,7 +85,7 @@ DROP ACCESS METHOD [ IF EXISTS ] nameExamples - Drop the access method heptree: + Drop the access method heptree: DROP ACCESS METHOD heptree; @@ -96,7 +96,7 @@ DROP ACCESS METHOD heptree; DROP ACCESS METHOD is a - PostgreSQL extension. + PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_aggregate.sgml b/doc/src/sgml/ref/drop_aggregate.sgml index dde1ea2444..ac29e7a419 100644 --- a/doc/src/sgml/ref/drop_aggregate.sgml +++ b/doc/src/sgml/ref/drop_aggregate.sgml @@ -70,8 +70,8 @@ DROP AGGREGATE [ IF EXISTS ] name ( aggr - The mode of an argument: IN or VARIADIC. - If omitted, the default is IN. + The mode of an argument: IN or VARIADIC. + If omitted, the default is IN. @@ -94,10 +94,10 @@ DROP AGGREGATE [ IF EXISTS ] name ( aggr An input data type on which the aggregate function operates. - To reference a zero-argument aggregate function, write * + To reference a zero-argument aggregate function, write * in place of the list of argument specifications. To reference an ordered-set aggregate function, write - ORDER BY between the direct and aggregated argument + ORDER BY between the direct and aggregated argument specifications. @@ -148,7 +148,7 @@ DROP AGGREGATE myavg(integer); - To remove the hypothetical-set aggregate function myrank, + To remove the hypothetical-set aggregate function myrank, which takes an arbitrary list of ordering columns and a matching list of direct arguments: diff --git a/doc/src/sgml/ref/drop_collation.sgml b/doc/src/sgml/ref/drop_collation.sgml index 2177d8e5d6..23f8e88fc9 100644 --- a/doc/src/sgml/ref/drop_collation.sgml +++ b/doc/src/sgml/ref/drop_collation.sgml @@ -83,7 +83,7 @@ DROP COLLATION [ IF EXISTS ] name [ CASCADE | RESTRIC Examples - To drop the collation named german: + To drop the collation named german: DROP COLLATION german; @@ -95,7 +95,7 @@ DROP COLLATION german; The DROP COLLATION command conforms to the SQL standard, apart from the IF - EXISTS option, which is a PostgreSQL extension. + EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_conversion.sgml b/doc/src/sgml/ref/drop_conversion.sgml index 1a33b3dcc5..9d56ec51a5 100644 --- a/doc/src/sgml/ref/drop_conversion.sgml +++ b/doc/src/sgml/ref/drop_conversion.sgml @@ -74,7 +74,7 @@ DROP CONVERSION [ IF EXISTS ] name [ CASCADE | RESTRI Examples - To drop the conversion named myname: + To drop the conversion named myname: DROP CONVERSION myname; diff --git a/doc/src/sgml/ref/drop_database.sgml b/doc/src/sgml/ref/drop_database.sgml index 44436ad48d..7e5fbe7396 100644 --- a/doc/src/sgml/ref/drop_database.sgml +++ b/doc/src/sgml/ref/drop_database.sgml @@ -71,7 +71,7 @@ DROP DATABASE [ IF EXISTS ] name Notes - DROP DATABASE cannot be executed inside a transaction + DROP DATABASE cannot be executed inside a transaction block. 
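The transaction-block restriction just noted is easy to trip over from scripts; the following is a minimal, hedged sketch (the database name demo_db is hypothetical):

BEGIN;
DROP DATABASE demo_db;            -- rejected: DROP DATABASE cannot run inside a transaction block
ROLLBACK;

DROP DATABASE IF EXISTS demo_db;  -- fine as a standalone, autocommitted statement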
diff --git a/doc/src/sgml/ref/drop_domain.sgml b/doc/src/sgml/ref/drop_domain.sgml index ba546165c2..b1dac01e65 100644 --- a/doc/src/sgml/ref/drop_domain.sgml +++ b/doc/src/sgml/ref/drop_domain.sgml @@ -58,7 +58,7 @@ DROP DOMAIN [ IF EXISTS ] name [, . - CASCADE + CASCADE Automatically drop objects that depend on the domain (such as @@ -70,7 +70,7 @@ DROP DOMAIN [ IF EXISTS ] name [, . - RESTRICT + RESTRICT Refuse to drop the domain if any objects depend on it. This is @@ -97,7 +97,7 @@ DROP DOMAIN box; This command conforms to the SQL standard, except for the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_extension.sgml b/doc/src/sgml/ref/drop_extension.sgml index ba52922013..f75308a20d 100644 --- a/doc/src/sgml/ref/drop_extension.sgml +++ b/doc/src/sgml/ref/drop_extension.sgml @@ -79,7 +79,7 @@ DROP EXTENSION [ IF EXISTS ] name [ Refuse to drop the extension if any objects depend on it (other than its own member objects and other extensions listed in the same - DROP command). This is the default. + DROP command). This is the default. @@ -97,7 +97,7 @@ DROP EXTENSION hstore; This command will fail if any of hstore's objects are in use in the database, for example if any tables have columns - of the hstore type. Add the CASCADE option to + of the hstore type. Add the CASCADE option to forcibly remove those dependent objects as well. @@ -106,7 +106,7 @@ DROP EXTENSION hstore; Compatibility - DROP EXTENSION is a PostgreSQL + DROP EXTENSION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml b/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml index 702cc021db..a3c73a0d46 100644 --- a/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml @@ -86,7 +86,7 @@ DROP FOREIGN DATA WRAPPER [ IF EXISTS ] nameExamples - Drop the foreign-data wrapper dbi: + Drop the foreign-data wrapper dbi: DROP FOREIGN DATA WRAPPER dbi; @@ -97,8 +97,8 @@ DROP FOREIGN DATA WRAPPER dbi; DROP FOREIGN DATA WRAPPER conforms to ISO/IEC - 9075-9 (SQL/MED). The IF EXISTS clause is - a PostgreSQL extension. + 9075-9 (SQL/MED). The IF EXISTS clause is + a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_foreign_table.sgml b/doc/src/sgml/ref/drop_foreign_table.sgml index 173eadadd3..456d55d112 100644 --- a/doc/src/sgml/ref/drop_foreign_table.sgml +++ b/doc/src/sgml/ref/drop_foreign_table.sgml @@ -95,7 +95,7 @@ DROP FOREIGN TABLE films, distributors; This command conforms to the ISO/IEC 9075-9 (SQL/MED), except that the standard only allows one foreign table to be dropped per command, and apart - from the IF EXISTS option, which is a PostgreSQL + from the IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_function.sgml b/doc/src/sgml/ref/drop_function.sgml index 0aa984528d..9c9adb9a46 100644 --- a/doc/src/sgml/ref/drop_function.sgml +++ b/doc/src/sgml/ref/drop_function.sgml @@ -67,14 +67,14 @@ DROP FUNCTION [ IF EXISTS ] name [ - The mode of an argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. + The mode of an argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that DROP FUNCTION does not actually pay - any attention to OUT arguments, since only the input + any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. 
+ So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. diff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml index 4c838fffff..de36c135d1 100644 --- a/doc/src/sgml/ref/drop_index.sgml +++ b/doc/src/sgml/ref/drop_index.sgml @@ -44,19 +44,19 @@ DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] name Drop the index without locking out concurrent selects, inserts, updates, - and deletes on the index's table. A normal DROP INDEX + and deletes on the index's table. A normal DROP INDEX acquires exclusive lock on the table, blocking other accesses until the index drop can be completed. With this option, the command instead waits until conflicting transactions have completed. There are several caveats to be aware of when using this option. - Only one index name can be specified, and the CASCADE option - is not supported. (Thus, an index that supports a UNIQUE or - PRIMARY KEY constraint cannot be dropped this way.) - Also, regular DROP INDEX commands can be + Only one index name can be specified, and the CASCADE option + is not supported. (Thus, an index that supports a UNIQUE or + PRIMARY KEY constraint cannot be dropped this way.) + Also, regular DROP INDEX commands can be performed within a transaction block, but - DROP INDEX CONCURRENTLY cannot. + DROP INDEX CONCURRENTLY cannot. diff --git a/doc/src/sgml/ref/drop_language.sgml b/doc/src/sgml/ref/drop_language.sgml index 081bd5fe3e..524d758370 100644 --- a/doc/src/sgml/ref/drop_language.sgml +++ b/doc/src/sgml/ref/drop_language.sgml @@ -31,13 +31,13 @@ DROP [ PROCEDURAL ] LANGUAGE [ IF EXISTS ] name DROP LANGUAGE removes the definition of a previously registered procedural language. You must be a superuser - or the owner of the language to use DROP LANGUAGE. + or the owner of the language to use DROP LANGUAGE. As of PostgreSQL 9.1, most procedural - languages have been made into extensions, and should + languages have been made into extensions, and should therefore be removed with not DROP LANGUAGE. diff --git a/doc/src/sgml/ref/drop_opclass.sgml b/doc/src/sgml/ref/drop_opclass.sgml index 423a211bca..83af6d7e48 100644 --- a/doc/src/sgml/ref/drop_opclass.sgml +++ b/doc/src/sgml/ref/drop_opclass.sgml @@ -37,7 +37,7 @@ DROP OPERATOR CLASS [ IF EXISTS ] nameDROP OPERATOR CLASS does not drop any of the operators or functions referenced by the class. If there are any indexes depending on the operator class, you will need to specify - CASCADE for the drop to complete. + CASCADE for the drop to complete. @@ -101,13 +101,13 @@ DROP OPERATOR CLASS [ IF EXISTS ] nameNotes - DROP OPERATOR CLASS will not drop the operator family + DROP OPERATOR CLASS will not drop the operator family containing the class, even if there is nothing else left in the family (in particular, in the case where the family was implicitly - created by CREATE OPERATOR CLASS). An empty operator + created by CREATE OPERATOR CLASS). An empty operator family is harmless, but for the sake of tidiness you might wish to - remove the family with DROP OPERATOR FAMILY; or perhaps - better, use DROP OPERATOR FAMILY in the first place. + remove the family with DROP OPERATOR FAMILY; or perhaps + better, use DROP OPERATOR FAMILY in the first place. @@ -122,7 +122,7 @@ DROP OPERATOR CLASS widget_ops USING btree; This command will not succeed if there are any existing indexes - that use the operator class. Add CASCADE to drop + that use the operator class. Add CASCADE to drop such indexes along with the operator class. 
diff --git a/doc/src/sgml/ref/drop_opfamily.sgml b/doc/src/sgml/ref/drop_opfamily.sgml index a7b90f306c..b825978aee 100644 --- a/doc/src/sgml/ref/drop_opfamily.sgml +++ b/doc/src/sgml/ref/drop_opfamily.sgml @@ -38,7 +38,7 @@ DROP OPERATOR FAMILY [ IF EXISTS ] nameCASCADE for the drop to complete. + CASCADE for the drop to complete. @@ -109,7 +109,7 @@ DROP OPERATOR FAMILY float_ops USING btree; This command will not succeed if there are any existing indexes - that use operator classes within the family. Add CASCADE to + that use operator classes within the family. Add CASCADE to drop such indexes along with the operator family. diff --git a/doc/src/sgml/ref/drop_owned.sgml b/doc/src/sgml/ref/drop_owned.sgml index 0426373d2d..8b4b3644e6 100644 --- a/doc/src/sgml/ref/drop_owned.sgml +++ b/doc/src/sgml/ref/drop_owned.sgml @@ -92,7 +92,7 @@ DROP OWNED BY { name | CURRENT_USER The command is an alternative that reassigns the ownership of all the database objects owned by one or - more roles. However, REASSIGN OWNED does not deal with + more roles. However, REASSIGN OWNED does not deal with privileges for other objects. diff --git a/doc/src/sgml/ref/drop_publication.sgml b/doc/src/sgml/ref/drop_publication.sgml index bf43db3dac..1c129c0444 100644 --- a/doc/src/sgml/ref/drop_publication.sgml +++ b/doc/src/sgml/ref/drop_publication.sgml @@ -89,7 +89,7 @@ DROP PUBLICATION mypublication; Compatibility - DROP PUBLICATION is a PostgreSQL + DROP PUBLICATION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_role.sgml b/doc/src/sgml/ref/drop_role.sgml index fcddfeb172..3c1bbaba6f 100644 --- a/doc/src/sgml/ref/drop_role.sgml +++ b/doc/src/sgml/ref/drop_role.sgml @@ -31,7 +31,7 @@ DROP ROLE [ IF EXISTS ] name [, ... DROP ROLE removes the specified role(s). To drop a superuser role, you must be a superuser yourself; - to drop non-superuser roles, you must have CREATEROLE + to drop non-superuser roles, you must have CREATEROLE privilege. @@ -47,7 +47,7 @@ DROP ROLE [ IF EXISTS ] name [, ... However, it is not necessary to remove role memberships involving - the role; DROP ROLE automatically revokes any memberships + the role; DROP ROLE automatically revokes any memberships of the target role in other roles, and of other roles in the target role. The other roles are not dropped nor otherwise affected. diff --git a/doc/src/sgml/ref/drop_schema.sgml b/doc/src/sgml/ref/drop_schema.sgml index fd1fcd7e03..bb3af1e186 100644 --- a/doc/src/sgml/ref/drop_schema.sgml +++ b/doc/src/sgml/ref/drop_schema.sgml @@ -114,7 +114,7 @@ DROP SCHEMA mystuff CASCADE; DROP SCHEMA is fully conforming with the SQL standard, except that the standard only allows one schema to be dropped per command, and apart from the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_sequence.sgml b/doc/src/sgml/ref/drop_sequence.sgml index 9d827f0cb1..5027129b38 100644 --- a/doc/src/sgml/ref/drop_sequence.sgml +++ b/doc/src/sgml/ref/drop_sequence.sgml @@ -98,7 +98,7 @@ DROP SEQUENCE serial; DROP SEQUENCE conforms to the SQL standard, except that the standard only allows one sequence to be dropped per command, and apart from the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. 
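The division of labor between DROP OWNED and REASSIGN OWNED described in the hunks above is perhaps clearest as a hedged sketch for retiring a role; the role name doomed_role is hypothetical:

REASSIGN OWNED BY doomed_role TO postgres;  -- transfer ownership of its objects (run in each database that has any)
DROP OWNED BY doomed_role;                  -- revoke the privileges that REASSIGN OWNED does not touch
DROP ROLE doomed_role;                      -- now succeeds, since nothing depends on the role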
diff --git a/doc/src/sgml/ref/drop_server.sgml b/doc/src/sgml/ref/drop_server.sgml index 42acdd41dc..8ef0e014e4 100644 --- a/doc/src/sgml/ref/drop_server.sgml +++ b/doc/src/sgml/ref/drop_server.sgml @@ -86,7 +86,7 @@ DROP SERVER [ IF EXISTS ] name [, . Examples - Drop a server foo if it exists: + Drop a server foo if it exists: DROP SERVER IF EXISTS foo; @@ -97,8 +97,8 @@ DROP SERVER IF EXISTS foo; DROP SERVER conforms to ISO/IEC 9075-9 - (SQL/MED). The IF EXISTS clause is - a PostgreSQL extension. + (SQL/MED). The IF EXISTS clause is + a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_subscription.sgml b/doc/src/sgml/ref/drop_subscription.sgml index f5734e6f30..58b1489475 100644 --- a/doc/src/sgml/ref/drop_subscription.sgml +++ b/doc/src/sgml/ref/drop_subscription.sgml @@ -114,7 +114,7 @@ DROP SUBSCRIPTION mysub; Compatibility - DROP SUBSCRIPTION is a PostgreSQL + DROP SUBSCRIPTION is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_table.sgml b/doc/src/sgml/ref/drop_table.sgml index ae96cf0657..cea7e00351 100644 --- a/doc/src/sgml/ref/drop_table.sgml +++ b/doc/src/sgml/ref/drop_table.sgml @@ -40,8 +40,8 @@ DROP TABLE [ IF EXISTS ] name [, .. DROP TABLE always removes any indexes, rules, triggers, and constraints that exist for the target table. However, to drop a table that is referenced by a view or a foreign-key - constraint of another table, CASCADE must be - specified. (CASCADE will remove a dependent view entirely, + constraint of another table, CASCADE must be + specified. (CASCADE will remove a dependent view entirely, but in the foreign-key case it will only remove the foreign-key constraint, not the other table entirely.) @@ -112,7 +112,7 @@ DROP TABLE films, distributors; This command conforms to the SQL standard, except that the standard only allows one table to be dropped per command, and apart from the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_tablespace.sgml b/doc/src/sgml/ref/drop_tablespace.sgml index ee40cc6b0c..4343035ebb 100644 --- a/doc/src/sgml/ref/drop_tablespace.sgml +++ b/doc/src/sgml/ref/drop_tablespace.sgml @@ -39,7 +39,7 @@ DROP TABLESPACE [ IF EXISTS ] name in the tablespace even if no objects in the current database are using the tablespace. Also, if the tablespace is listed in the setting of any active session, the - DROP might fail due to temporary files residing in the + DROP might fail due to temporary files residing in the tablespace. @@ -74,7 +74,7 @@ DROP TABLESPACE [ IF EXISTS ] name Notes - DROP TABLESPACE cannot be executed inside a transaction block. + DROP TABLESPACE cannot be executed inside a transaction block. @@ -93,7 +93,7 @@ DROP TABLESPACE mystuff; Compatibility - DROP TABLESPACE is a PostgreSQL + DROP TABLESPACE is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/drop_tsconfig.sgml b/doc/src/sgml/ref/drop_tsconfig.sgml index e4a1738bae..cc053beceb 100644 --- a/doc/src/sgml/ref/drop_tsconfig.sgml +++ b/doc/src/sgml/ref/drop_tsconfig.sgml @@ -94,8 +94,8 @@ DROP TEXT SEARCH CONFIGURATION my_english; This command will not succeed if there are any existing indexes - that reference the configuration in to_tsvector calls. - Add CASCADE to + that reference the configuration in to_tsvector calls. + Add CASCADE to drop such indexes along with the text search configuration. 
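The to_tsvector dependency mentioned above arises only when the index expression names the configuration explicitly; a hedged sketch (the table docs and index docs_idx are hypothetical):

CREATE INDEX docs_idx ON docs USING gin (to_tsvector('my_english', body));
DROP TEXT SEARCH CONFIGURATION my_english;          -- fails: docs_idx references the configuration
DROP TEXT SEARCH CONFIGURATION my_english CASCADE;  -- drops docs_idx along with it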
diff --git a/doc/src/sgml/ref/drop_tsdictionary.sgml b/doc/src/sgml/ref/drop_tsdictionary.sgml index faa4b3a1e5..66af10fb0f 100644 --- a/doc/src/sgml/ref/drop_tsdictionary.sgml +++ b/doc/src/sgml/ref/drop_tsdictionary.sgml @@ -94,7 +94,7 @@ DROP TEXT SEARCH DICTIONARY english; This command will not succeed if there are any existing text search - configurations that use the dictionary. Add CASCADE to + configurations that use the dictionary. Add CASCADE to drop such configurations along with the dictionary. diff --git a/doc/src/sgml/ref/drop_tsparser.sgml b/doc/src/sgml/ref/drop_tsparser.sgml index bc9dae17a5..3fa9467ebd 100644 --- a/doc/src/sgml/ref/drop_tsparser.sgml +++ b/doc/src/sgml/ref/drop_tsparser.sgml @@ -92,7 +92,7 @@ DROP TEXT SEARCH PARSER my_parser; This command will not succeed if there are any existing text search - configurations that use the parser. Add CASCADE to + configurations that use the parser. Add CASCADE to drop such configurations along with the parser. diff --git a/doc/src/sgml/ref/drop_tstemplate.sgml b/doc/src/sgml/ref/drop_tstemplate.sgml index 98f5523e51..ad83275457 100644 --- a/doc/src/sgml/ref/drop_tstemplate.sgml +++ b/doc/src/sgml/ref/drop_tstemplate.sgml @@ -93,7 +93,7 @@ DROP TEXT SEARCH TEMPLATE thesaurus; This command will not succeed if there are any existing text search - dictionaries that use the template. Add CASCADE to + dictionaries that use the template. Add CASCADE to drop such dictionaries along with the template. diff --git a/doc/src/sgml/ref/drop_type.sgml b/doc/src/sgml/ref/drop_type.sgml index 4ec1b92f32..92ac2729ca 100644 --- a/doc/src/sgml/ref/drop_type.sgml +++ b/doc/src/sgml/ref/drop_type.sgml @@ -96,8 +96,8 @@ DROP TYPE box; This command is similar to the corresponding command in the SQL - standard, apart from the IF EXISTS - option, which is a PostgreSQL extension. + standard, apart from the IF EXISTS + option, which is a PostgreSQL extension. But note that much of the CREATE TYPE command and the data type extension mechanisms in PostgreSQL differ from the SQL standard. diff --git a/doc/src/sgml/ref/drop_user_mapping.sgml b/doc/src/sgml/ref/drop_user_mapping.sgml index eb4c320293..27284acae4 100644 --- a/doc/src/sgml/ref/drop_user_mapping.sgml +++ b/doc/src/sgml/ref/drop_user_mapping.sgml @@ -36,7 +36,7 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_name The owner of a foreign server can drop user mappings for that server for any user. Also, a user can drop a user mapping for their own - user name if USAGE privilege on the server has been + user name if USAGE privilege on the server has been granted to the user. @@ -59,9 +59,9 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_nameuser_name - User name of the mapping. CURRENT_USER - and USER match the name of the current - user. PUBLIC is used to match all present and + User name of the mapping. CURRENT_USER + and USER match the name of the current + user. PUBLIC is used to match all present and future user names in the system. @@ -82,7 +82,7 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_nameExamples - Drop a user mapping bob, server foo if it exists: + Drop a user mapping bob, server foo if it exists: DROP USER MAPPING IF EXISTS FOR bob SERVER foo; @@ -93,8 +93,8 @@ DROP USER MAPPING IF EXISTS FOR bob SERVER foo; DROP USER MAPPING conforms to ISO/IEC 9075-9 - (SQL/MED). The IF EXISTS clause is - a PostgreSQL extension. + (SQL/MED). The IF EXISTS clause is + a PostgreSQL extension. 
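The special user_name values listed above can be exercised against the same server foo as in the example; a short, hedged sketch:

DROP USER MAPPING IF EXISTS FOR CURRENT_USER SERVER foo;  -- the mapping for the current session user
DROP USER MAPPING IF EXISTS FOR PUBLIC SERVER foo;        -- the catch-all mapping for all users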
diff --git a/doc/src/sgml/ref/drop_view.sgml b/doc/src/sgml/ref/drop_view.sgml index 002d2c6dd6..a33b33335b 100644 --- a/doc/src/sgml/ref/drop_view.sgml +++ b/doc/src/sgml/ref/drop_view.sgml @@ -97,7 +97,7 @@ DROP VIEW kinds; This command conforms to the SQL standard, except that the standard only allows one view to be dropped per command, and apart from the - IF EXISTS option, which is a PostgreSQL + IF EXISTS option, which is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/dropdb.sgml b/doc/src/sgml/ref/dropdb.sgml index 16c49e7928..9dd44be882 100644 --- a/doc/src/sgml/ref/dropdb.sgml +++ b/doc/src/sgml/ref/dropdb.sgml @@ -53,7 +53,7 @@ PostgreSQL documentation Options - dropdb accepts the following command-line arguments: + dropdb accepts the following command-line arguments: @@ -66,8 +66,8 @@ PostgreSQL documentation - - + + Echo the commands that dropdb generates @@ -77,8 +77,8 @@ PostgreSQL documentation - - + + Issues a verification prompt before doing anything destructive. @@ -87,8 +87,8 @@ PostgreSQL documentation - - + + Print the dropdb version and exit. @@ -97,7 +97,7 @@ PostgreSQL documentation - + Do not throw an error if the database does not exist. A notice is issued @@ -107,8 +107,8 @@ PostgreSQL documentation - - + + Show help about dropdb command line @@ -127,8 +127,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -140,8 +140,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -152,8 +152,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -162,8 +162,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -177,8 +177,8 @@ PostgreSQL documentation - - + + Force dropdb to prompt for a @@ -191,14 +191,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, dropdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. - + Specifies the name of the database to connect to in order to drop the @@ -231,8 +231,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/dropuser.sgml b/doc/src/sgml/ref/dropuser.sgml index d7ad61b3d6..1387b7dc2d 100644 --- a/doc/src/sgml/ref/dropuser.sgml +++ b/doc/src/sgml/ref/dropuser.sgml @@ -35,7 +35,7 @@ PostgreSQL documentation dropuser removes an existing PostgreSQL user. - Only superusers and users with the CREATEROLE privilege can + Only superusers and users with the CREATEROLE privilege can remove PostgreSQL users. (To remove a superuser, you must yourself be a superuser.) @@ -70,8 +70,8 @@ PostgreSQL documentation - - + + Echo the commands that dropuser generates @@ -81,8 +81,8 @@ PostgreSQL documentation - - + + Prompt for confirmation before actually removing the user, and prompt @@ -92,8 +92,8 @@ PostgreSQL documentation - - + + Print the dropuser version and exit. @@ -102,7 +102,7 @@ PostgreSQL documentation - + Do not throw an error if the user does not exist. 
A notice is @@ -112,8 +112,8 @@ PostgreSQL documentation - - + + Show help about dropuser command line @@ -131,8 +131,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -144,8 +144,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -156,8 +156,8 @@ PostgreSQL documentation - - + + User name to connect as (not the user name to drop). @@ -166,8 +166,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -181,8 +181,8 @@ PostgreSQL documentation - - + + Force dropuser to prompt for a @@ -195,7 +195,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, dropuser will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -223,8 +223,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/ecpg-ref.sgml b/doc/src/sgml/ref/ecpg-ref.sgml index 8bfb47c4d7..a9eaff815d 100644 --- a/doc/src/sgml/ref/ecpg-ref.sgml +++ b/doc/src/sgml/ref/ecpg-ref.sgml @@ -220,9 +220,9 @@ PostgreSQL documentation When compiling the preprocessed C code files, the compiler needs to - be able to find the ECPG header files in the - PostgreSQL include directory. Therefore, you might - have to use the option when invoking the compiler (e.g., -I/usr/local/pgsql/include). diff --git a/doc/src/sgml/ref/end.sgml b/doc/src/sgml/ref/end.sgml index 10e414515b..1f74118efd 100644 --- a/doc/src/sgml/ref/end.sgml +++ b/doc/src/sgml/ref/end.sgml @@ -62,7 +62,7 @@ END [ WORK | TRANSACTION ] - Issuing END when not inside a transaction does + Issuing END when not inside a transaction does no harm, but it will provoke a warning message. diff --git a/doc/src/sgml/ref/execute.sgml b/doc/src/sgml/ref/execute.sgml index 6ab5e54fa7..6ac413d808 100644 --- a/doc/src/sgml/ref/execute.sgml +++ b/doc/src/sgml/ref/execute.sgml @@ -87,12 +87,12 @@ EXECUTE name [ ( Outputs The command tag returned by EXECUTE - is that of the prepared statement, and not EXECUTE. + is that of the prepared statement, and not EXECUTE. - Examples</> + <title>Examples Examples are given in the section of the hit means that a read was avoided because the block was + A hit means that a read was avoided because the block was found already in cache when needed. Shared blocks contain data from regular tables and indexes; local blocks contain data from temporary tables and indexes; while temp blocks contain short-term working data used in sorts, hashes, Materialize plan nodes, and similar cases. - The number of blocks dirtied indicates the number of + The number of blocks dirtied indicates the number of previously unmodified blocks that were changed by this query; while the - number of blocks written indicates the number of + number of blocks written indicates the number of previously-dirtied blocks evicted from cache by this backend during query processing. The number of blocks shown for an @@ -229,9 +229,9 @@ ROLLBACK; Specifies whether the selected option should be turned on or off. - You can write TRUE, ON, or + You can write TRUE, ON, or 1 to enable the option, and FALSE, - OFF, or 0 to disable it. The + OFF, or 0 to disable it. 
The boolean value can also be omitted, in which case TRUE is assumed. @@ -242,10 +242,10 @@ ROLLBACK; statement - Any SELECT, INSERT, UPDATE, - DELETE, VALUES, EXECUTE, - DECLARE, CREATE TABLE AS, or - CREATE MATERIALIZED VIEW AS statement, whose execution + Any SELECT, INSERT, UPDATE, + DELETE, VALUES, EXECUTE, + DECLARE, CREATE TABLE AS, or + CREATE MATERIALIZED VIEW AS statement, whose execution plan you wish to see. diff --git a/doc/src/sgml/ref/fetch.sgml b/doc/src/sgml/ref/fetch.sgml index 7651dcd0f8..fb79a1ac61 100644 --- a/doc/src/sgml/ref/fetch.sgml +++ b/doc/src/sgml/ref/fetch.sgml @@ -57,20 +57,20 @@ FETCH [ direction [ FROM | IN ] ] < A cursor has an associated position, which is used by - FETCH. The cursor position can be before the first row of the + FETCH. The cursor position can be before the first row of the query result, on any particular row of the result, or after the last row of the result. When created, a cursor is positioned before the first row. After fetching some rows, the cursor is positioned on the row most recently - retrieved. If FETCH runs off the end of the available rows + retrieved. If FETCH runs off the end of the available rows then the cursor is left positioned after the last row, or before the first - row if fetching backward. FETCH ALL or FETCH BACKWARD - ALL will always leave the cursor positioned after the last row or before + row if fetching backward. FETCH ALL or FETCH BACKWARD + ALL will always leave the cursor positioned after the last row or before the first row. - The forms NEXT, PRIOR, FIRST, - LAST, ABSOLUTE, RELATIVE fetch + The forms NEXT, PRIOR, FIRST, + LAST, ABSOLUTE, RELATIVE fetch a single row after moving the cursor appropriately. If there is no such row, an empty result is returned, and the cursor is left positioned before the first row or after the last row as @@ -78,7 +78,7 @@ FETCH [ direction [ FROM | IN ] ] < - The forms using FORWARD and BACKWARD + The forms using FORWARD and BACKWARD retrieve the indicated number of rows moving in the forward or backward direction, leaving the cursor positioned on the last-returned row (or after/before all rows, if the direction [ FROM | IN ] ] < - RELATIVE 0, FORWARD 0, and - BACKWARD 0 all request fetching the current row without + RELATIVE 0, FORWARD 0, and + BACKWARD 0 all request fetching the current row without moving the cursor, that is, re-fetching the most recently fetched row. This will succeed unless the cursor is positioned before the first row or after the last row; in which case, no row is returned. @@ -97,7 +97,7 @@ FETCH [ direction [ FROM | IN ] ] < This page describes usage of cursors at the SQL command level. - If you are trying to use cursors inside a PL/pgSQL + If you are trying to use cursors inside a PL/pgSQL function, the rules are different — see . @@ -274,10 +274,10 @@ FETCH [ direction [ FROM | IN ] ] < count is a possibly-signed integer constant, determining the location or - number of rows to fetch. For FORWARD and - BACKWARD cases, specifying a negative FORWARD and + BACKWARD cases, specifying a negative count is equivalent to changing - the sense of FORWARD and BACKWARD. + the sense of FORWARD and BACKWARD. 
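A hedged sketch of these direction forms, reusing the liahona cursor over films from the DECLARE examples (the cursor is declared with SCROLL so the backward forms are guaranteed to work):

BEGIN;
DECLARE liahona SCROLL CURSOR FOR SELECT * FROM films;
FETCH FORWARD 5 FROM liahona;   -- rows 1 through 5; cursor rests on the fifth row
FETCH PRIOR FROM liahona;       -- back up one row
FETCH RELATIVE 0 FROM liahona;  -- re-fetch the current row without moving
FETCH BACKWARD -2 FROM liahona; -- negative count: equivalent to FETCH FORWARD 2
COMMIT;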
@@ -297,7 +297,7 @@ FETCH [ direction [ FROM | IN ] ] < Outputs - On successful completion, a FETCH command returns a command + On successful completion, a FETCH command returns a command tag of the form FETCH count @@ -315,8 +315,8 @@ FETCH count The cursor should be declared with the SCROLL - option if one intends to use any variants of FETCH - other than FETCH NEXT or FETCH FORWARD with + option if one intends to use any variants of FETCH + other than FETCH NEXT or FETCH FORWARD with a positive count. For simple queries PostgreSQL will allow backwards fetch from cursors not declared with SCROLL, but this @@ -400,8 +400,8 @@ COMMIT WORK; - The SQL standard allows only FROM preceding the cursor - name; the option to use IN, or to leave them out altogether, is + The SQL standard allows only FROM preceding the cursor + name; the option to use IN, or to leave them out altogether, is an extension. diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml index 385cfe6a9c..fd9fe03a6a 100644 --- a/doc/src/sgml/ref/grant.sgml +++ b/doc/src/sgml/ref/grant.sgml @@ -116,7 +116,7 @@ GRANT role_name [, ...] TO ALL - TABLES is considered to include views and foreign tables). + TABLES is considered to include views and foreign tables). @@ -174,7 +174,7 @@ GRANT role_name [, ...] TO REVOKE both default and expressly granted privileges. (For maximum - security, issue the REVOKE in the same transaction that + security, issue the REVOKE in the same transaction that creates the object; then there is no window in which another user can use the object.) Also, these initial default privilege settings can be changed using the @@ -211,7 +211,7 @@ GRANT role_name [, ...] TO Allows of a new row into the specified table. If specific columns are listed, - only those columns may be assigned to in the INSERT + only those columns may be assigned to in the INSERT command (other columns will therefore receive default values). Also allows FROM. @@ -224,8 +224,8 @@ GRANT role_name [, ...] TO Allows of any column, or the specific columns listed, of the specified table. - (In practice, any nontrivial UPDATE command will require - SELECT privilege as well, since it must reference table + (In practice, any nontrivial UPDATE command will require + SELECT privilege as well, since it must reference table columns to determine which rows to update, and/or to compute new values for columns.) SELECT ... FOR UPDATE @@ -246,8 +246,8 @@ GRANT role_name [, ...] TO Allows of a row from the specified table. - (In practice, any nontrivial DELETE command will require - SELECT privilege as well, since it must reference table + (In practice, any nontrivial DELETE command will require + SELECT privilege as well, since it must reference table columns to determine which rows to delete.) @@ -292,7 +292,7 @@ GRANT role_name [, ...] TO For schemas, allows new objects to be created within the schema. - To rename an existing object, you must own the object and + To rename an existing object, you must own the object and have this privilege for the containing schema. @@ -310,7 +310,7 @@ GRANT role_name [, ...] TO Allows the user to connect to the specified database. This privilege is checked at connection startup (in addition to checking - any restrictions imposed by pg_hba.conf). + any restrictions imposed by pg_hba.conf). @@ -348,7 +348,7 @@ GRANT role_name [, ...] TO For schemas, allows access to objects contained in the specified schema (assuming that the objects' own privilege requirements are - also met). 
Essentially this allows the grantee to look up + also met). Essentially this allows the grantee to look up objects within the schema. Without this permission, it is still possible to see the object names, e.g. by querying the system tables. Also, after revoking this permission, existing backends might have @@ -416,14 +416,14 @@ GRANT role_name [, ...] TO on itself, but it may grant or revoke membership in itself from a database session where the session user matches the role. Database superusers can grant or revoke membership in any role - to anyone. Roles having CREATEROLE privilege can grant + to anyone. Roles having CREATEROLE privilege can grant or revoke membership in any role that is not a superuser. Unlike the case with privileges, membership in a role cannot be granted - to PUBLIC. Note also that this form of the command does not - allow the noise word GROUP. + to PUBLIC. Note also that this form of the command does not + allow the noise word GROUP. @@ -440,13 +440,13 @@ GRANT role_name [, ...] TO Since PostgreSQL 8.1, the concepts of users and groups have been unified into a single kind of entity called a role. - It is therefore no longer necessary to use the keyword GROUP - to identify whether a grantee is a user or a group. GROUP + It is therefore no longer necessary to use the keyword GROUP + to identify whether a grantee is a user or a group. GROUP is still allowed in the command, but it is a noise word. - A user may perform SELECT, INSERT, etc. on a + A user may perform SELECT, INSERT, etc. on a column if they hold that privilege for either the specific column or its whole table. Granting the privilege at the table level and then revoking it for one column will not do what one might wish: the @@ -454,12 +454,12 @@ GRANT role_name [, ...] TO - When a non-owner of an object attempts to GRANT privileges + When a non-owner of an object attempts to GRANT privileges on the object, the command will fail outright if the user has no privileges whatsoever on the object. As long as some privilege is available, the command will proceed, but it will grant only those privileges for which the user has grant options. The GRANT ALL - PRIVILEGES forms will issue a warning message if no grant options are + PRIVILEGES forms will issue a warning message if no grant options are held, while the other forms will issue a warning if grant options for any of the privileges specifically named in the command are not held. (In principle these statements apply to the object owner as well, but @@ -470,13 +470,13 @@ GRANT role_name [, ...] TO It should be noted that database superusers can access all objects regardless of object privilege settings. This - is comparable to the rights of root in a Unix system. - As with root, it's unwise to operate as a superuser + is comparable to the rights of root in a Unix system. + As with root, it's unwise to operate as a superuser except when absolutely necessary. - If a superuser chooses to issue a GRANT or REVOKE + If a superuser chooses to issue a GRANT or REVOKE command, the command is performed as though it were issued by the owner of the affected object. In particular, privileges granted via such a command will appear to have been granted by the object owner. @@ -485,32 +485,32 @@ GRANT role_name [, ...] 
TO - GRANT and REVOKE can also be done by a role + GRANT and REVOKE can also be done by a role that is not the owner of the affected object, but is a member of the role that owns the object, or is a member of a role that holds privileges WITH GRANT OPTION on the object. In this case the privileges will be recorded as having been granted by the role that actually owns the object or holds the privileges WITH GRANT OPTION. For example, if table - t1 is owned by role g1, of which role - u1 is a member, then u1 can grant privileges - on t1 to u2, but those privileges will appear - to have been granted directly by g1. Any other member - of role g1 could revoke them later. + t1 is owned by role g1, of which role + u1 is a member, then u1 can grant privileges + on t1 to u2, but those privileges will appear + to have been granted directly by g1. Any other member + of role g1 could revoke them later. - If the role executing GRANT holds the required privileges + If the role executing GRANT holds the required privileges indirectly via more than one role membership path, it is unspecified which containing role will be recorded as having done the grant. In such - cases it is best practice to use SET ROLE to become the - specific role you want to do the GRANT as. + cases it is best practice to use SET ROLE to become the + specific role you want to do the GRANT as. Granting permission on a table does not automatically extend permissions to any sequences used by the table, including - sequences tied to SERIAL columns. Permissions on + sequences tied to SERIAL columns. Permissions on sequences must be set separately. @@ -551,8 +551,8 @@ rolename=xxxx -- privileges granted to a role /yyyy -- role that granted this privilege - The above example display would be seen by user miriam after - creating table mytable and doing: + The above example display would be seen by user miriam after + creating table mytable and doing: GRANT SELECT ON mytable TO PUBLIC; @@ -562,31 +562,31 @@ GRANT SELECT (col1), UPDATE (col1) ON mytable TO miriam_rw; - For non-table objects there are other \d commands + For non-table objects there are other \d commands that can display their privileges. - If the Access privileges column is empty for a given object, + If the Access privileges column is empty for a given object, it means the object has default privileges (that is, its privileges column is null). Default privileges always include all privileges for the owner, - and can include some privileges for PUBLIC depending on the - object type, as explained above. The first GRANT or - REVOKE on an object + and can include some privileges for PUBLIC depending on the + object type, as explained above. The first GRANT or + REVOKE on an object will instantiate the default privileges (producing, for example, - {miriam=arwdDxt/miriam}) and then modify them per the + {miriam=arwdDxt/miriam}) and then modify them per the specified request. Similarly, entries are shown in Column access - privileges only for columns with nondefault privileges. - (Note: for this purpose, default privileges always means the + privileges only for columns with nondefault privileges. + (Note: for this purpose, default privileges always means the built-in default privileges for the object's type. An object whose - privileges have been affected by an ALTER DEFAULT PRIVILEGES + privileges have been affected by an ALTER DEFAULT PRIVILEGES command will always be shown with an explicit privilege entry that - includes the effects of the ALTER.) 
+ includes the effects of the ALTER.) Notice that the owner's implicit grant options are not marked in the - access privileges display. A * will appear only when + access privileges display. A * will appear only when grant options have been explicitly granted to someone. @@ -617,7 +617,7 @@ GRANT ALL PRIVILEGES ON kinds TO manuel; - Grant membership in role admins to user joe: + Grant membership in role admins to user joe: GRANT admins TO joe; @@ -637,14 +637,14 @@ GRANT admins TO joe; PostgreSQL allows an object owner to revoke their own ordinary privileges: for example, a table owner can make the table - read-only to themselves by revoking their own INSERT, - UPDATE, DELETE, and TRUNCATE + read-only to themselves by revoking their own INSERT, + UPDATE, DELETE, and TRUNCATE privileges. This is not possible according to the SQL standard. The reason is that PostgreSQL treats the owner's privileges as having been granted by the owner to themselves; therefore they can revoke them too. In the SQL standard, the owner's privileges are - granted by an assumed entity _SYSTEM. Not being - _SYSTEM, the owner cannot revoke these rights. + granted by an assumed entity _SYSTEM. Not being + _SYSTEM, the owner cannot revoke these rights. diff --git a/doc/src/sgml/ref/import_foreign_schema.sgml b/doc/src/sgml/ref/import_foreign_schema.sgml index f22893f137..9bc83f1c6a 100644 --- a/doc/src/sgml/ref/import_foreign_schema.sgml +++ b/doc/src/sgml/ref/import_foreign_schema.sgml @@ -124,9 +124,9 @@ IMPORT FOREIGN SCHEMA remote_schema Examples - Import table definitions from a remote schema foreign_films - on server film_server, creating the foreign tables in - local schema films: + Import table definitions from a remote schema foreign_films + on server film_server, creating the foreign tables in + local schema films: IMPORT FOREIGN SCHEMA foreign_films @@ -135,8 +135,8 @@ IMPORT FOREIGN SCHEMA foreign_films - As above, but import only the two tables actors and - directors (if they exist): + As above, but import only the two tables actors and + directors (if they exist): IMPORT FOREIGN SCHEMA foreign_films LIMIT TO (actors, directors) @@ -149,8 +149,8 @@ IMPORT FOREIGN SCHEMA foreign_films LIMIT TO (actors, directors) The IMPORT FOREIGN SCHEMA command conforms to the - SQL standard, except that the OPTIONS - clause is a PostgreSQL extension. + SQL standard, except that the OPTIONS + clause is a PostgreSQL extension. diff --git a/doc/src/sgml/ref/initdb.sgml b/doc/src/sgml/ref/initdb.sgml index 732fecab8e..6696d4d05a 100644 --- a/doc/src/sgml/ref/initdb.sgml +++ b/doc/src/sgml/ref/initdb.sgml @@ -79,8 +79,8 @@ PostgreSQL documentation initdb initializes the database cluster's default locale and character set encoding. The character set encoding, - collation order (LC_COLLATE) and character set classes - (LC_CTYPE, e.g. upper, lower, digit) can be set separately + collation order (LC_COLLATE) and character set classes + (LC_CTYPE, e.g. upper, lower, digit) can be set separately for a database when it is created. initdb determines those settings for the template1 database, which will serve as the default for all other databases. @@ -89,7 +89,7 @@ PostgreSQL documentation To alter the default collation order or character set classes, use the and options. - Collation orders other than C or POSIX also have + Collation orders other than C or POSIX also have a performance penalty. For these reasons it is important to choose the right locale when running initdb. 
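Because the collation-related settings are fixed when the cluster is initialized, it can be useful to confirm them from SQL afterwards; a hedged sketch:

SHOW lc_collate;  -- collation order chosen when the cluster (or database) was created
SHOW lc_ctype;    -- character classification, likewise fixed at creation time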
@@ -98,8 +98,8 @@ PostgreSQL documentation The remaining locale categories can be changed later when the server is started. You can also use to set the default for all locale categories, including collation order and - character set classes. All server locale values (lc_*) can - be displayed via SHOW ALL. + character set classes. All server locale values (lc_*) can + be displayed via SHOW ALL. More details can be found in . @@ -121,7 +121,7 @@ PostgreSQL documentation This option specifies the default authentication method for local - users used in pg_hba.conf (host + users used in pg_hba.conf (host and local lines). initdb will prepopulate pg_hba.conf entries using the specified authentication method for non-replication as well as @@ -129,8 +129,8 @@ PostgreSQL documentation - Do not use trust unless you trust all local users on your - system. trust is the default for ease of installation. + Do not use trust unless you trust all local users on your + system. trust is the default for ease of installation. @@ -140,7 +140,7 @@ PostgreSQL documentation This option specifies the authentication method for local users via - TCP/IP connections used in pg_hba.conf + TCP/IP connections used in pg_hba.conf (host lines). @@ -151,7 +151,7 @@ PostgreSQL documentation This option specifies the authentication method for local users via - Unix-domain socket connections used in pg_hba.conf + Unix-domain socket connections used in pg_hba.conf (local lines). @@ -255,7 +255,7 @@ PostgreSQL documentation - + Makes initdb read the database superuser's password @@ -270,14 +270,14 @@ PostgreSQL documentation Safely write all database files to disk and exit. This does not - perform any of the normal initdb operations. + perform any of the normal initdb operations. - - + + Sets the default text search configuration. @@ -319,7 +319,7 @@ PostgreSQL documentation - Set the WAL segment size, in megabytes. This is + Set the WAL segment size, in megabytes. This is the size of each individual file in the WAL log. It may be useful to adjust this size to control the granularity of WAL log shipping. This option can only be set during initialization, and cannot be @@ -395,8 +395,8 @@ PostgreSQL documentation - - + + Print the initdb version and exit. @@ -405,8 +405,8 @@ PostgreSQL documentation - - + + Show help about initdb command line @@ -449,8 +449,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml index ce037e5902..7f44ec31d1 100644 --- a/doc/src/sgml/ref/insert.sgml +++ b/doc/src/sgml/ref/insert.sgml @@ -56,10 +56,10 @@ INSERT INTO table_name [ AS The target column names can be listed in any order. If no list of column names is given at all, the default is all the columns of the - table in their declared order; or the first N column - names, if there are only N columns supplied by the - VALUES clause or query. The values - supplied by the VALUES clause or query are + table in their declared order; or the first N column + names, if there are only N columns supplied by the + VALUES clause or query. The values + supplied by the VALUES clause or query are associated with the explicit or implicit column list left-to-right. 
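A hedged sketch of the column-list matching rules just described, using a cut-down films table (hypothetical; the page's full example table has more columns):

CREATE TABLE films (code char(5), title text, did integer DEFAULT 0);
INSERT INTO films VALUES ('T_601', 'Yojimbo', 106);      -- no column list: values taken in declared order
INSERT INTO films (did, title) VALUES (106, 'Yojimbo');  -- explicit list: any order; omitted code becomes null
INSERT INTO films VALUES ('T_602', 'Bananas');           -- first N columns; did falls back to its default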
@@ -75,21 +75,21 @@ INSERT INTO table_name [ AS - ON CONFLICT can be used to specify an alternative + ON CONFLICT can be used to specify an alternative action to raising a unique constraint or exclusion constraint violation error. (See below.) - The optional RETURNING clause causes INSERT + The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted - (or updated, if an ON CONFLICT DO UPDATE clause was + (or updated, if an ON CONFLICT DO UPDATE clause was used). This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number. However, any expression using the table's columns is allowed. The syntax of - the RETURNING list is identical to that of the output - list of SELECT. Only rows that were successfully + the RETURNING list is identical to that of the output + list of SELECT. Only rows that were successfully inserted or updated will be returned. For example, if a row was locked but not updated because an ON CONFLICT DO UPDATE ... WHERE clause table_name [ AS You must have INSERT privilege on a table in - order to insert into it. If ON CONFLICT DO UPDATE is + order to insert into it. If ON CONFLICT DO UPDATE is present, UPDATE privilege on the table is also required. @@ -107,17 +107,17 @@ INSERT INTO table_name [ AS If a column list is specified, you only need INSERT privilege on the listed columns. - Similarly, when ON CONFLICT DO UPDATE is specified, you - only need UPDATE privilege on the column(s) that are - listed to be updated. However, ON CONFLICT DO UPDATE - also requires SELECT privilege on any column whose - values are read in the ON CONFLICT DO UPDATE - expressions or condition. + Similarly, when ON CONFLICT DO UPDATE is specified, you + only need UPDATE privilege on the column(s) that are + listed to be updated. However, ON CONFLICT DO UPDATE + also requires SELECT privilege on any column whose + values are read in the ON CONFLICT DO UPDATE + expressions or condition. - Use of the RETURNING clause requires SELECT - privilege on all columns mentioned in RETURNING. + Use of the RETURNING clause requires SELECT + privilege on all columns mentioned in RETURNING. If you use the query clause to insert rows from a query, you of course need to have SELECT privilege on @@ -144,7 +144,7 @@ INSERT INTO table_name [ AS The WITH clause allows you to specify one or more - subqueries that can be referenced by name in the INSERT + subqueries that can be referenced by name in the INSERT query. See and for details. @@ -175,8 +175,8 @@ INSERT INTO table_name [ AS table_name. When an alias is provided, it completely hides the actual name of the table. - This is particularly useful when ON CONFLICT DO UPDATE - targets a table named excluded, since that will otherwise + This is particularly useful when ON CONFLICT DO UPDATE + targets a table named excluded, since that will otherwise be taken as the name of the special table representing rows proposed for insertion. @@ -193,11 +193,11 @@ INSERT INTO table_name [ AS ON CONFLICT DO UPDATE, do not include + column with ON CONFLICT DO UPDATE, do not include the table's name in the specification of a target column. For example, INSERT INTO table_name ... ON CONFLICT DO UPDATE - SET table_name.col = 1 is invalid (this follows the general - behavior for UPDATE). + SET table_name.col = 1 is invalid (this follows the general + behavior for UPDATE). 
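To illustrate the restriction just described, a minimal sketch reusing the distributors table from the examples further down this page (a unique index on did is assumed); the first statement fails, the second is the correct spelling:

-- invalid: the SET target must be a bare column name
INSERT INTO distributors AS d (did, dname) VALUES (7, 'Redline GmbH')
  ON CONFLICT (did) DO UPDATE SET d.dname = EXCLUDED.dname;

-- valid
INSERT INTO distributors AS d (did, dname) VALUES (7, 'Redline GmbH')
  ON CONFLICT (did) DO UPDATE SET dname = EXCLUDED.dname;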
@@ -281,11 +281,11 @@ INSERT INTO table_name [ AS An expression to be computed and returned by the - INSERT command after each row is inserted or + INSERT command after each row is inserted or updated. The expression can use any column names of the table named by table_name. Write - * to return all columns of the inserted or updated + * to return all columns of the inserted or updated row(s). @@ -386,7 +386,7 @@ INSERT INTO table_name [ AS have access to the existing row using the table's name (or an alias), and to rows proposed for insertion using the special excluded table. - SELECT privilege is required on any column in the + SELECT privilege is required on any column in the target table where corresponding excluded columns are read. @@ -406,7 +406,7 @@ INSERT INTO table_name [ AS table_name column. Used to infer arbiter indexes. Follows CREATE - INDEX format. SELECT privilege on + INDEX format. SELECT privilege on index_column_name is required. @@ -422,7 +422,7 @@ INSERT INTO table_name [ AS table_name columns appearing within index definitions (not simple columns). Follows - CREATE INDEX format. SELECT + CREATE INDEX format. SELECT privilege on any column appearing within index_expression is required. @@ -469,7 +469,7 @@ INSERT INTO table_name [ AS CREATE - INDEX format. SELECT privilege on any + INDEX format. SELECT privilege on any column appearing within index_predicate is required. @@ -494,7 +494,7 @@ INSERT INTO table_name [ AS boolean. Only rows for which this expression returns true will be updated, although all - rows will be locked when the ON CONFLICT DO UPDATE + rows will be locked when the ON CONFLICT DO UPDATE action is taken. Note that condition is evaluated last, after a conflict has been identified as a candidate to update. @@ -510,7 +510,7 @@ INSERT INTO table_name [ AS - INSERT with an ON CONFLICT DO UPDATE + INSERT with an ON CONFLICT DO UPDATE clause is a deterministic statement. This means that the command will not be allowed to affect any single existing row more than once; a cardinality violation error will be raised @@ -538,7 +538,7 @@ INSERT INTO table_name [ AS Outputs - On successful completion, an INSERT command returns a command + On successful completion, an INSERT command returns a command tag of the form INSERT oid count @@ -554,10 +554,10 @@ INSERT oid count - If the INSERT command contains a RETURNING - clause, the result will be similar to that of a SELECT + If the INSERT command contains a RETURNING + clause, the result will be similar to that of a SELECT statement containing the columns and values defined in the - RETURNING list, computed over the row(s) inserted or + RETURNING list, computed over the row(s) inserted or updated by the command. @@ -616,7 +616,7 @@ INSERT INTO films DEFAULT VALUES; - To insert multiple rows using the multirow VALUES syntax: + To insert multiple rows using the multirow VALUES syntax: INSERT INTO films (code, title, did, date_prod, kind) VALUES @@ -675,7 +675,7 @@ INSERT INTO employees_log SELECT *, current_timestamp FROM upd; Insert or update new distributors as appropriate. Assumes a unique index has been defined that constrains values appearing in the did column. Note that the special - excluded table is used to reference values originally + excluded table is used to reference values originally proposed for insertion: INSERT INTO distributors (did, dname) @@ -697,7 +697,7 @@ INSERT INTO distributors (did, dname) VALUES (7, 'Redline GmbH') Insert or update new distributors as appropriate. 
Example assumes a unique index has been defined that constrains values appearing in - the did column. WHERE clause is + the did column. WHERE clause is used to limit the rows actually updated (any existing row not updated will still be locked, though): @@ -734,13 +734,13 @@ INSERT INTO distributors (did, dname) VALUES (10, 'Conrad International') INSERT conforms to the SQL standard, except that - the RETURNING clause is a + the RETURNING clause is a PostgreSQL extension, as is the ability - to use WITH with INSERT, and the ability to - specify an alternative action with ON CONFLICT. + to use WITH with INSERT, and the ability to + specify an alternative action with ON CONFLICT. Also, the case in which a column name list is omitted, but not all the columns are - filled from the VALUES clause or query, + filled from the VALUES clause or query, is disallowed by the standard. diff --git a/doc/src/sgml/ref/listen.sgml b/doc/src/sgml/ref/listen.sgml index 76215716d6..6527562717 100644 --- a/doc/src/sgml/ref/listen.sgml +++ b/doc/src/sgml/ref/listen.sgml @@ -54,12 +54,12 @@ LISTEN channel The method a client application must use to detect notification events depends on which PostgreSQL application programming interface it - uses. With the libpq library, the application issues + uses. With the libpq library, the application issues LISTEN as an ordinary SQL command, and then must periodically call the function PQnotifies to find out whether any notification events have been received. Other interfaces such as - libpgtcl provide higher-level methods for handling notify events; indeed, - with libpgtcl the application programmer should not even issue + libpgtcl provide higher-level methods for handling notify events; indeed, + with libpgtcl the application programmer should not even issue LISTEN or UNLISTEN directly. See the documentation for the interface you are using for more details. diff --git a/doc/src/sgml/ref/load.sgml b/doc/src/sgml/ref/load.sgml index 2be28e6d15..b9e3fe8b25 100644 --- a/doc/src/sgml/ref/load.sgml +++ b/doc/src/sgml/ref/load.sgml @@ -28,12 +28,12 @@ LOAD 'filename' Description - This command loads a shared library file into the PostgreSQL + This command loads a shared library file into the PostgreSQL server's address space. If the file has been loaded already, the command does nothing. Shared library files that contain C functions are automatically loaded whenever one of their functions is called. - Therefore, an explicit LOAD is usually only needed to - load a library that modifies the server's behavior through hooks + Therefore, an explicit LOAD is usually only needed to + load a library that modifies the server's behavior through hooks rather than providing a set of functions. @@ -47,15 +47,15 @@ LOAD 'filename' - $libdir/plugins + $libdir/plugins - Non-superusers can only apply LOAD to library files - located in $libdir/plugins/ — the specified + Non-superusers can only apply LOAD to library files + located in $libdir/plugins/ — the specified filename must begin with exactly that string. (It is the database administrator's - responsibility to ensure that only safe libraries + responsibility to ensure that only safe libraries are installed there.) diff --git a/doc/src/sgml/ref/lock.sgml b/doc/src/sgml/ref/lock.sgml index f1dbb8e65a..6d68ec6c53 100644 --- a/doc/src/sgml/ref/lock.sgml +++ b/doc/src/sgml/ref/lock.sgml @@ -51,13 +51,13 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] restrictive lock mode possible. LOCK TABLE provides for cases when you might need more restrictive locking. 
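A minimal sketch of the pattern this paragraph introduces, using the films table that also appears in the examples later on this page:

BEGIN;
LOCK TABLE films IN SHARE MODE;
-- reads here see a stable view of committed data
SELECT count(*) FROM films;
COMMIT;

The discussion that follows explains when SHARE mode is sufficient and when a stronger mode such as SHARE ROW EXCLUSIVE is needed.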
For example, suppose an application runs a transaction at the - READ COMMITTED isolation level and needs to ensure that + READ COMMITTED isolation level and needs to ensure that data in a table remains stable for the duration of the transaction. - To achieve this you could obtain SHARE lock mode over the + To achieve this you could obtain SHARE lock mode over the table before querying. This will prevent concurrent data changes and ensure subsequent reads of the table see a stable view of - committed data, because SHARE lock mode conflicts with - the ROW EXCLUSIVE lock acquired by writers, and your + committed data, because SHARE lock mode conflicts with + the ROW EXCLUSIVE lock acquired by writers, and your LOCK TABLE name IN SHARE MODE statement will wait until any concurrent holders of ROW @@ -68,28 +68,28 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] To achieve a similar effect when running a transaction at the - REPEATABLE READ or SERIALIZABLE - isolation level, you have to execute the LOCK TABLE statement - before executing any SELECT or data modification statement. - A REPEATABLE READ or SERIALIZABLE transaction's + REPEATABLE READ or SERIALIZABLE + isolation level, you have to execute the LOCK TABLE statement + before executing any SELECT or data modification statement. + A REPEATABLE READ or SERIALIZABLE transaction's view of data will be frozen when its first - SELECT or data modification statement begins. A LOCK - TABLE later in the transaction will still prevent concurrent writes + SELECT or data modification statement begins. A LOCK + TABLE later in the transaction will still prevent concurrent writes — but it won't ensure that what the transaction reads corresponds to the latest committed values. If a transaction of this sort is going to change the data in the - table, then it should use SHARE ROW EXCLUSIVE lock mode - instead of SHARE mode. This ensures that only one + table, then it should use SHARE ROW EXCLUSIVE lock mode + instead of SHARE mode. This ensures that only one transaction of this type runs at a time. Without this, a deadlock - is possible: two transactions might both acquire SHARE - mode, and then be unable to also acquire ROW EXCLUSIVE + is possible: two transactions might both acquire SHARE + mode, and then be unable to also acquire ROW EXCLUSIVE mode to actually perform their updates. (Note that a transaction's own locks never conflict, so a transaction can acquire ROW - EXCLUSIVE mode when it holds SHARE mode — but not - if anyone else holds SHARE mode.) To avoid deadlocks, + EXCLUSIVE mode when it holds SHARE mode — but not + if anyone else holds SHARE mode.) To avoid deadlocks, make sure all transactions acquire locks on the same objects in the same order, and if multiple lock modes are involved for a single object, then transactions should always acquire the most @@ -111,16 +111,16 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] The name (optionally schema-qualified) of an existing table to - lock. If ONLY is specified before the table name, only that - table is locked. If ONLY is not specified, the table and all - its descendant tables (if any) are locked. Optionally, * + lock. If ONLY is specified before the table name, only that + table is locked. If ONLY is not specified, the table and all + its descendant tables (if any) are locked. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. - The command LOCK TABLE a, b; is equivalent to - LOCK TABLE a; LOCK TABLE b;. 
The tables are locked + The command LOCK TABLE a, b; is equivalent to + LOCK TABLE a; LOCK TABLE b;. The tables are locked one-by-one in the order specified in the LOCK TABLE command. @@ -160,18 +160,18 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] Notes - LOCK TABLE ... IN ACCESS SHARE MODE requires SELECT + LOCK TABLE ... IN ACCESS SHARE MODE requires SELECT privileges on the target table. LOCK TABLE ... IN ROW EXCLUSIVE - MODE requires INSERT, UPDATE, DELETE, - or TRUNCATE privileges on the target table. All other forms of - LOCK require table-level UPDATE, DELETE, - or TRUNCATE privileges. + MODE requires INSERT, UPDATE, DELETE, + or TRUNCATE privileges on the target table. All other forms of + LOCK require table-level UPDATE, DELETE, + or TRUNCATE privileges. - LOCK TABLE is useless outside a transaction block: the lock + LOCK TABLE is useless outside a transaction block: the lock would remain held only to the completion of the statement. Therefore - PostgreSQL reports an error if LOCK + PostgreSQL reports an error if LOCK is used outside a transaction block. Use and @@ -181,13 +181,13 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] - LOCK TABLE only deals with table-level locks, and so - the mode names involving ROW are all misnomers. These + LOCK TABLE only deals with table-level locks, and so + the mode names involving ROW are all misnomers. These mode names should generally be read as indicating the intention of the user to acquire row-level locks within the locked table. Also, - ROW EXCLUSIVE mode is a shareable table lock. Keep in + ROW EXCLUSIVE mode is a shareable table lock. Keep in mind that all the lock modes have identical semantics so far as - LOCK TABLE is concerned, differing only in the rules + LOCK TABLE is concerned, differing only in the rules about which modes conflict with which. For information on how to acquire an actual row-level lock, see and the name [ * ] Examples - Obtain a SHARE lock on a primary key table when going to perform + Obtain a SHARE lock on a primary key table when going to perform inserts into a foreign key table: @@ -216,7 +216,7 @@ COMMIT WORK; - Take a SHARE ROW EXCLUSIVE lock on a primary key table when going to perform + Take a SHARE ROW EXCLUSIVE lock on a primary key table when going to perform a delete operation: @@ -240,8 +240,8 @@ COMMIT WORK; - Except for ACCESS SHARE, ACCESS EXCLUSIVE, - and SHARE UPDATE EXCLUSIVE lock modes, the + Except for ACCESS SHARE, ACCESS EXCLUSIVE, + and SHARE UPDATE EXCLUSIVE lock modes, the PostgreSQL lock modes and the LOCK TABLE syntax are compatible with those present in Oracle. diff --git a/doc/src/sgml/ref/move.sgml b/doc/src/sgml/ref/move.sgml index 6b809b961d..4bf7896858 100644 --- a/doc/src/sgml/ref/move.sgml +++ b/doc/src/sgml/ref/move.sgml @@ -69,7 +69,7 @@ MOVE [ direction [ FROM | IN ] ] Outputs - On successful completion, a MOVE command returns a command + On successful completion, a MOVE command returns a command tag of the form MOVE count diff --git a/doc/src/sgml/ref/notify.sgml b/doc/src/sgml/ref/notify.sgml index 09debd6685..4376b9fdd7 100644 --- a/doc/src/sgml/ref/notify.sgml +++ b/doc/src/sgml/ref/notify.sgml @@ -30,9 +30,9 @@ NOTIFY channel [ , The NOTIFY command sends a notification event together - with an optional payload string to each client application that + with an optional payload string to each client application that has previously executed - LISTEN channel + LISTEN channel for the specified channel name in the current database. Notifications are visible to all users. 
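A minimal sketch of this flow (the channel name and payload are arbitrary): in one session,

LISTEN virtual;

and in another,

NOTIFY virtual, 'hello';

The listening session then receives an asynchronous notification on channel "virtual" with payload "hello", along with the notifying backend's PID; a libpq application picks this up by polling PQnotifies, as described under LISTEN above.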
@@ -49,7 +49,7 @@ NOTIFY channel [ , The information passed to the client for a notification event includes the notification channel - name, the notifying session's server process PID, and the + name, the notifying session's server process PID, and the payload string, which is an empty string if it has not been specified. @@ -115,9 +115,9 @@ NOTIFY channel [ , PID (supplied in the + session's server process PID (supplied in the notification event message) is the same as one's own session's - PID (available from libpq). When they + PID (available from libpq). When they are the same, the notification event is one's own work bouncing back, and can be ignored. @@ -139,7 +139,7 @@ NOTIFY channel [ , payload - The payload string to be communicated along with the + The payload string to be communicated along with the notification. This must be specified as a simple string literal. In the default configuration it must be shorter than 8000 bytes. (If binary data or large amounts of information need to be communicated, diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml index f790c56003..1944c185cb 100644 --- a/doc/src/sgml/ref/pg_basebackup.sgml +++ b/doc/src/sgml/ref/pg_basebackup.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation pg_basebackup - option + option @@ -69,7 +69,7 @@ PostgreSQL documentation pg_basebackup can make a base backup from not only the master but also the standby. To take a backup from the standby, set up the standby so that it can accept replication connections (that is, set - max_wal_senders and , + max_wal_senders and , and configure host-based authentication). You will also need to enable on the master. @@ -85,7 +85,7 @@ PostgreSQL documentation - If you are using -X none, there is no guarantee that all + If you are using -X none, there is no guarantee that all WAL files required for the backup are archived at the end of backup. @@ -97,9 +97,9 @@ PostgreSQL documentation All WAL records required for the backup must contain sufficient full-page writes, - which requires you to enable full_page_writes on the master and - not to use a tool like pg_compresslog as - archive_command to remove full-page writes from WAL files. + which requires you to enable full_page_writes on the master and + not to use a tool like pg_compresslog as + archive_command to remove full-page writes from WAL files. @@ -193,8 +193,8 @@ PostgreSQL documentation The maximum transfer rate of data transferred from the server. Values are - in kilobytes per second. Use a suffix of M to indicate megabytes - per second. A suffix of k is also accepted, and has no effect. + in kilobytes per second. Use a suffix of M to indicate megabytes + per second. A suffix of k is also accepted, and has no effect. Valid values are between 32 kilobytes per second and 1024 megabytes per second. @@ -534,7 +534,7 @@ PostgreSQL documentation string. See for more information. - The option is called --dbname for consistency with other + The option is called --dbname for consistency with other client applications, but because pg_basebackup doesn't connect to any particular database in the cluster, database name in the connection string will be ignored. @@ -594,8 +594,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -623,7 +623,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_basebackup will waste a connection attempt finding out that the server wants a password. 
- In some cases it is worth typing to avoid the extra connection attempt. @@ -636,8 +636,8 @@ PostgreSQL documentation - - + + Print the pg_basebackup version and exit. @@ -646,8 +646,8 @@ PostgreSQL documentation - - + + Show help about pg_basebackup command line @@ -665,8 +665,8 @@ PostgreSQL documentation Environment - This utility, like most other PostgreSQL utilities, - uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + uses the environment variables supported by libpq (see ). @@ -709,8 +709,8 @@ PostgreSQL documentation tar file before starting the PostgreSQL server. If there are additional tablespaces, the tar files for them need to be unpacked in the correct locations. In this case the symbolic links for those tablespaces will be created by the server - according to the contents of the tablespace_map file that is - included in the base.tar file. + according to the contents of the tablespace_map file that is + included in the base.tar file. diff --git a/doc/src/sgml/ref/pg_config-ref.sgml b/doc/src/sgml/ref/pg_config-ref.sgml index 0210f6389d..b819f3f345 100644 --- a/doc/src/sgml/ref/pg_config-ref.sgml +++ b/doc/src/sgml/ref/pg_config-ref.sgml @@ -13,7 +13,7 @@ pg_config - retrieve information about the installed version of PostgreSQL + retrieve information about the installed version of PostgreSQL @@ -24,12 +24,12 @@ - Description</> + <title>Description - The pg_config utility prints configuration parameters - of the currently installed version of PostgreSQL. It is + The pg_config utility prints configuration parameters + of the currently installed version of PostgreSQL. It is intended, for example, to be used by software packages that want to interface - to PostgreSQL to facilitate finding the required header files + to PostgreSQL to facilitate finding the required header files and libraries. @@ -39,22 +39,22 @@ Options - To use pg_config, supply one or more of the following + To use pg_config, supply one or more of the following options: - + Print the location of user executables. Use this, for example, to find - the psql program. This is normally also the location - where the pg_config program resides. + the psql program. This is normally also the location + where the pg_config program resides. - + Print the location of documentation files. @@ -63,7 +63,7 @@ - + Print the location of HTML documentation files. @@ -72,7 +72,7 @@ - + Print the location of C header files of the client interfaces. @@ -81,7 +81,7 @@ - + Print the location of other C header files. @@ -90,7 +90,7 @@ - + Print the location of C header files for server programming. @@ -99,7 +99,7 @@ - + Print the location of object code libraries. @@ -108,7 +108,7 @@ - + Print the location of dynamically loadable modules, or where @@ -120,18 +120,18 @@ - + Print the location of locale support files. (This will be an empty string if locale support was not configured when - PostgreSQL was built.) + PostgreSQL was built.) - + Print the location of manual pages. @@ -140,7 +140,7 @@ - + Print the location of architecture-independent support files. @@ -149,7 +149,7 @@ - + Print the location of system-wide configuration files. @@ -158,7 +158,7 @@ - + Print the location of extension makefiles. @@ -167,11 +167,11 @@ - + - Print the options that were given to the configure - script when PostgreSQL was configured for building. + Print the options that were given to the configure + script when PostgreSQL was configured for building. 
This can be used to reproduce the identical configuration, or to find out with what options a binary package was built. (Note however that binary packages often contain vendor-specific custom @@ -181,102 +181,102 @@ - + Print the value of the CC variable that was used for building - PostgreSQL. This shows the C compiler used. + PostgreSQL. This shows the C compiler used. - + Print the value of the CPPFLAGS variable that was used for building - PostgreSQL. This shows C compiler switches needed - at preprocessing time (typically, -I switches). + PostgreSQL. This shows C compiler switches needed + at preprocessing time (typically, -I switches). - + Print the value of the CFLAGS variable that was used for building - PostgreSQL. This shows C compiler switches. + PostgreSQL. This shows C compiler switches. - + Print the value of the CFLAGS_SL variable that was used for building - PostgreSQL. This shows extra C compiler switches + PostgreSQL. This shows extra C compiler switches used for building shared libraries. - + Print the value of the LDFLAGS variable that was used for building - PostgreSQL. This shows linker switches. + PostgreSQL. This shows linker switches. - + Print the value of the LDFLAGS_EX variable that was used for building - PostgreSQL. This shows linker switches + PostgreSQL. This shows linker switches used for building executables only. - + Print the value of the LDFLAGS_SL variable that was used for building - PostgreSQL. This shows linker switches + PostgreSQL. This shows linker switches used for building shared libraries only. - + Print the value of the LIBS variable that was used for building - PostgreSQL. This normally contains -l - switches for external libraries linked into PostgreSQL. + PostgreSQL. This normally contains -l + switches for external libraries linked into PostgreSQL. - + - Print the version of PostgreSQL. + Print the version of PostgreSQL. - - + + Show help about pg_config command line @@ -303,9 +303,9 @@ , , , , , , - and were added in PostgreSQL 8.1. - The option was added in PostgreSQL 8.4. - The option was added in PostgreSQL 9.0. + and were added in PostgreSQL 8.1. + The option was added in PostgreSQL 8.4. + The option was added in PostgreSQL 9.0. diff --git a/doc/src/sgml/ref/pg_controldata.sgml b/doc/src/sgml/ref/pg_controldata.sgml index 4a360d61fd..4d4feacb93 100644 --- a/doc/src/sgml/ref/pg_controldata.sgml +++ b/doc/src/sgml/ref/pg_controldata.sgml @@ -31,7 +31,7 @@ PostgreSQL documentation Description pg_controldata prints information initialized during - initdb, such as the catalog version. + initdb, such as the catalog version. It also shows information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. @@ -41,10 +41,10 @@ PostgreSQL documentation This utility can only be run by the user who initialized the cluster because it requires read access to the data directory. You can specify the data directory on the command line, or use - the environment variable PGDATA. This utility supports the options - and , which print the pg_controldata version and exit. It also - supports options and , which output the supported arguments. diff --git a/doc/src/sgml/ref/pg_ctl-ref.sgml b/doc/src/sgml/ref/pg_ctl-ref.sgml index 12fa011c4e..3bcf0a2e9f 100644 --- a/doc/src/sgml/ref/pg_ctl-ref.sgml +++ b/doc/src/sgml/ref/pg_ctl-ref.sgml @@ -159,13 +159,13 @@ PostgreSQL documentation mode launches a new server. 
The server is started in the background, and its standard input is attached - to /dev/null (or nul on Windows). + to /dev/null (or nul on Windows). On Unix-like systems, by default, the server's standard output and standard error are sent to pg_ctl's standard output (not standard error). The standard output of pg_ctl should then be redirected to a file or piped to another process such as a log rotating program - like rotatelogs; otherwise postgres + like rotatelogs; otherwise postgres will write its output to the controlling terminal (from the background) and will not leave the shell's process group. On Windows, by default the server's standard output and standard error @@ -203,7 +203,7 @@ PostgreSQL documentation mode simply sends the - postgres server process a SIGHUP + postgres server process a SIGHUP signal, causing it to reread its configuration files (postgresql.conf, pg_hba.conf, etc.). This allows changing @@ -228,14 +228,14 @@ PostgreSQL documentation mode sends a signal to a specified process. - This is primarily valuable on Microsoft Windows - which does not have a built-in kill command. Use - --help to see a list of supported signal names. + This is primarily valuable on Microsoft Windows + which does not have a built-in kill command. Use + --help to see a list of supported signal names. - mode registers the PostgreSQL - server as a system service on Microsoft Windows. + mode registers the PostgreSQL + server as a system service on Microsoft Windows. The option allows selection of service start type, either auto (start service automatically on system startup) or demand (start service on demand). @@ -243,7 +243,7 @@ PostgreSQL documentation mode unregisters a system service - on Microsoft Windows. This undoes the effects of the + on Microsoft Windows. This undoes the effects of the command. @@ -286,7 +286,7 @@ PostgreSQL documentation Append the server log output to filename. If the file does not - exist, it is created. The umask is set to 077, + exist, it is created. The umask is set to 077, so access to the log file is disallowed to other users by default. @@ -313,11 +313,11 @@ PostgreSQL documentation Specifies options to be passed directly to the postgres command. - can be specified multiple times, with all the given options being passed through. - The options should usually be surrounded by single or + The options should usually be surrounded by single or double quotes to ensure that they are passed through as a group. @@ -330,11 +330,11 @@ PostgreSQL documentation Specifies options to be passed directly to the initdb command. - can be specified multiple times, with all the given options being passed through. - The options should usually be surrounded by single or + The options should usually be surrounded by single or double quotes to ensure that they are passed through as a group. @@ -377,15 +377,15 @@ PostgreSQL documentation Specifies the maximum number of seconds to wait when waiting for an operation to complete (see option ). Defaults to - the value of the PGCTLTIMEOUT environment variable or, if + the value of the PGCTLTIMEOUT environment variable or, if not set, to 60 seconds. - - + + Print the pg_ctl version and exit. @@ -446,8 +446,8 @@ PostgreSQL documentation - - + + Show help about pg_ctl command line @@ -507,7 +507,7 @@ PostgreSQL documentation - Start type of the system service. start-type can + Start type of the system service. start-type can be auto, or demand, or the first letter of one of these two. If this option is omitted, auto is the default. 
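A minimal sketch of the day-to-day modes described above (the data directory and log file paths are assumptions):

$ pg_ctl start -D /usr/local/pgsql/data -l logfile
$ pg_ctl reload -D /usr/local/pgsql/data
$ pg_ctl stop -D /usr/local/pgsql/data -m fast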
@@ -559,14 +559,14 @@ PostgreSQL documentation Most pg_ctl modes require knowing the data directory - location; therefore, the option is required unless PGDATA is set. - pg_ctl, like most other PostgreSQL + pg_ctl, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + also uses the environment variables supported by libpq (see ). @@ -661,8 +661,8 @@ PostgreSQL documentation - But if diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml index 7ccbee4855..79a9ee0983 100644 --- a/doc/src/sgml/ref/pg_dump.sgml +++ b/doc/src/sgml/ref/pg_dump.sgml @@ -116,8 +116,8 @@ PostgreSQL documentation - - + + Dump only the data, not the schema (data definitions). @@ -126,19 +126,19 @@ PostgreSQL documentation This option is similar to, but for historical reasons not identical - to, specifying . - - + + Include large objects in the dump. This is the default behavior - except when , , or + is specified. The switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that blobs are considered data and therefore will be included when @@ -148,17 +148,17 @@ PostgreSQL documentation - - + + Exclude large objects in the dump. - When both and are given, the behavior is to output large objects, when data is being dumped, see the - documentation. @@ -170,7 +170,7 @@ PostgreSQL documentation Output commands to clean (drop) database objects prior to outputting the commands for creating them. - (Unless is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.) @@ -184,8 +184,8 @@ PostgreSQL documentation - - + + Begin the output with a command to create the @@ -242,8 +242,8 @@ PostgreSQL documentation - p - plain + p + plain Output a plain-text SQL script file (the default). @@ -252,8 +252,8 @@ PostgreSQL documentation - c - custom + c + custom Output a custom-format archive suitable for input into @@ -267,8 +267,8 @@ PostgreSQL documentation - d - directory + d + directory Output a directory-format archive suitable for input into @@ -286,8 +286,8 @@ PostgreSQL documentation - t - tar + t + tar Output a tar-format archive suitable for input @@ -305,8 +305,8 @@ PostgreSQL documentation - - + + Run the dump in parallel by dumping njobs @@ -315,13 +315,13 @@ PostgreSQL documentation directory output format because this is the only output format where multiple processes can write their data at the same time. - pg_dump will open njobs + pg_dump will open njobs + 1 connections to the database, so make sure your setting is high enough to accommodate all connections. Requesting exclusive locks on database objects while running a parallel dump could - cause the dump to fail. The reason is that the pg_dump master process + cause the dump to fail. The reason is that the pg_dump master process requests shared locks on the objects that the worker processes are going to dump later in order to make sure that nobody deletes them and makes them go away while the dump is running. @@ -330,10 +330,10 @@ PostgreSQL documentation released. Consequently any other access to the table will not be granted either and will queue after the exclusive lock request. This includes the worker process trying to dump the table. Without any precautions this would be a classic deadlock situation. - To detect this conflict, the pg_dump worker process requests another - shared lock using the NOWAIT option. 
If the worker process is not granted + To detect this conflict, the pg_dump worker process requests another + shared lock using the NOWAIT option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime - and there is no way to continue with the dump, so pg_dump has no choice + and there is no way to continue with the dump, so pg_dump has no choice but to abort the dump. @@ -371,10 +371,10 @@ PostgreSQL documentation schema itself, and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be - selected by writing multiple switches. Also, the schema parameter is interpreted as a pattern according to the same rules used by - psql's \d commands (see psql's \d commands (see ), so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern @@ -384,7 +384,7 @@ PostgreSQL documentation - When is specified, pg_dump makes no attempt to dump any other database objects that the selected schema(s) might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully @@ -394,9 +394,9 @@ PostgreSQL documentation - Non-schema objects such as blobs are not dumped when is specified. You can add blobs back to the dump with the - switch. @@ -410,29 +410,29 @@ PostgreSQL documentation Do not dump any schemas matching the schema pattern. The pattern is - interpreted according to the same rules as for . + can be given more than once to exclude schemas matching any of several patterns. - When both and are given, the behavior + is to dump just the schemas that match at least one + switch but no switches. If appears + without , then schemas matching are excluded from what is otherwise a normal dump. - - + + Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application references - the OID + the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used. @@ -440,21 +440,21 @@ PostgreSQL documentation - + Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues - ALTER OWNER or + ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give - that user ownership of all the objects, specify . @@ -484,18 +484,18 @@ PostgreSQL documentation Dump only the object definitions (schema), not data. - This option is the inverse of . It is similar to, but for historical reasons not identical to, specifying - . - (Do not confuse this with the To exclude table data for only a subset of tables in the database, - see . @@ -506,7 +506,7 @@ PostgreSQL documentation Specify the superuser user name to use when disabling triggers. - This is relevant only if is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.) @@ -520,12 +520,12 @@ PostgreSQL documentation Dump only tables with names matching table. - For this purpose, table includes views, materialized views, + For this purpose, table includes views, materialized views, sequences, and foreign tables. Multiple tables - can be selected by writing multiple switches. 
Also, the table parameter is interpreted as a pattern according to the same rules used by - psql's \d commands (see psql's \d commands (see ), so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern @@ -534,15 +534,15 @@ PostgreSQL documentation - The and switches have no effect when + is used, because tables selected by will be dumped regardless of those switches, and non-table objects will not be dumped. - When is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully @@ -552,14 +552,14 @@ PostgreSQL documentation - The behavior of the switch is not entirely upward compatible with pre-8.2 PostgreSQL - versions. Formerly, writing -t tab would dump all - tables named tab, but now it just dumps whichever one + versions. Formerly, writing -t tab would dump all + tables named tab, but now it just dumps whichever one is visible in your default search path. To get the old behavior - you can write -t '*.tab'. Also, you must write something - like -t sch.tab to select a table in a particular schema, - rather than the old locution of -n sch -t tab. + you can write -t '*.tab'. Also, you must write something + like -t sch.tab to select a table in a particular schema, + rather than the old locution of -n sch -t tab. @@ -572,24 +572,24 @@ PostgreSQL documentation Do not dump any tables matching the table pattern. The pattern is - interpreted according to the same rules as for . + can be given more than once to exclude tables matching any of several patterns. - When both and are given, the behavior + is to dump just the tables that match at least one + switch but no switches. If appears + without , then tables matching are excluded from what is otherwise a normal dump. - - + + Specifies verbose mode. This will cause @@ -601,8 +601,8 @@ PostgreSQL documentation - - + + Print the pg_dump version and exit. @@ -611,9 +611,9 @@ PostgreSQL documentation - - - + + + Prevent dumping of access privileges (grant/revoke commands). @@ -632,7 +632,7 @@ PostgreSQL documentation at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been - fed through gzip; but the default is not to compress. + fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all. @@ -670,7 +670,7 @@ PostgreSQL documentation - + This option disables the use of dollar quoting for function bodies, @@ -680,7 +680,7 @@ PostgreSQL documentation - + This option is relevant only when creating a data-only dump. @@ -692,9 +692,9 @@ PostgreSQL documentation - Presently, the commands emitted for must be done as superuser. So, you should also specify - a superuser name with , or preferably be careful to start the resulting script as a superuser. @@ -707,7 +707,7 @@ PostgreSQL documentation - + This option is relevant only when dumping the contents of a table @@ -734,14 +734,14 @@ PostgreSQL documentation Do not dump data for any tables matching the table pattern. The pattern is - interpreted according to the same rules as for . + can be given more than once to exclude tables matching any of several patterns. This option is useful when you need the definition of a particular table even though you do not need the data in it. 
- To exclude data for all tables in the database, see . @@ -752,7 +752,7 @@ PostgreSQL documentation Use conditional commands (i.e. add an IF EXISTS clause) when cleaning database objects. This option is not valid - unless is also specified. @@ -782,9 +782,9 @@ PostgreSQL documentation Do not wait forever to acquire shared table locks at the beginning of the dump. Instead fail if unable to lock a table within the specified - timeout. The timeout may be + timeout. The timeout may be specified in any of the formats accepted by SET - statement_timeout. (Allowed formats vary depending on the server + statement_timeout. (Allowed formats vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions.) @@ -833,10 +833,10 @@ PostgreSQL documentation - + - This option allows running pg_dump -j against a pre-9.2 + This option allows running pg_dump -j against a pre-9.2 server, see the documentation of the parameter for more details. @@ -873,25 +873,25 @@ PostgreSQL documentation - + Force quoting of all identifiers. This option is recommended when - dumping a database from a server whose PostgreSQL - major version is different from pg_dump's, or when + dumping a database from a server whose PostgreSQL + major version is different from pg_dump's, or when the output is intended to be loaded into a server of a different - major version. By default, pg_dump quotes only + major version. By default, pg_dump quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets - of reserved words. Using prevents such issues, at the price of a harder-to-read dump script. - + When dumping a COPY or INSERT statement for a partitioned table, @@ -910,7 +910,7 @@ PostgreSQL documentation Only dump the named section. The section name can be - , , or . This option can be specified more than once to select multiple sections. The default is to dump all sections. @@ -981,7 +981,7 @@ PostgreSQL documentation - + Require that each schema @@ -1003,23 +1003,23 @@ PostgreSQL documentation - + - Output SQL-standard SET SESSION AUTHORIZATION commands - instead of ALTER OWNER commands to determine object + Output SQL-standard SET SESSION AUTHORIZATION commands + instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore - properly. Also, a dump using SET SESSION AUTHORIZATION + properly. Also, a dump using SET SESSION AUTHORIZATION will certainly require superuser privileges to restore correctly, - whereas ALTER OWNER requires lesser privileges. + whereas ALTER OWNER requires lesser privileges. - - + + Show help about pg_dump command line @@ -1036,8 +1036,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to connect to. This is @@ -1093,8 +1093,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -1122,7 +1122,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_dump will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -1133,11 +1133,11 @@ PostgreSQL documentation Specifies a role name to be used to create the dump. 
- This option causes pg_dump to issue a - SET ROLE rolename + This option causes pg_dump to issue a + SET ROLE rolename command after connecting to the database. It is useful when the - authenticated user (specified by - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -1192,7 +1192,7 @@ PostgreSQL documentation The database activity of pg_dump is normally collected by the statistics collector. If this is - undesirable, you can set parameter track_counts + undesirable, you can set parameter track_counts to false via PGOPTIONS or the ALTER USER command. @@ -1204,11 +1204,11 @@ PostgreSQL documentation Notes - If your database cluster has any local additions to the template1 database, + If your database cluster has any local additions to the template1 database, be careful to restore the output of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database - without any local additions, copy from template0 not template1, + without any local additions, copy from template0 not template1, for example: CREATE DATABASE foo WITH TEMPLATE template0; @@ -1216,7 +1216,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; - When a data-only dump is chosen and the option is used, pg_dump emits commands to disable triggers on user tables before inserting the data, and then commands to re-enable them after the data has been @@ -1232,30 +1232,30 @@ CREATE DATABASE foo WITH TEMPLATE template0; to ensure optimal performance; see and for more information. The dump file also does not - contain any ALTER DATABASE ... SET commands; + contain any ALTER DATABASE ... SET commands; these settings are dumped by , along with database users and other installation-wide settings. Because pg_dump is used to transfer data - to newer versions of PostgreSQL, the output of + to newer versions of PostgreSQL, the output of pg_dump can be expected to load into - PostgreSQL server versions newer than - pg_dump's version. pg_dump can also - dump from PostgreSQL servers older than its own version. + PostgreSQL server versions newer than + pg_dump's version. pg_dump can also + dump from PostgreSQL servers older than its own version. (Currently, servers back to version 8.0 are supported.) - However, pg_dump cannot dump from - PostgreSQL servers newer than its own major version; + However, pg_dump cannot dump from + PostgreSQL servers newer than its own major version; it will refuse to even try, rather than risk making an invalid dump. - Also, it is not guaranteed that pg_dump's output can + Also, it is not guaranteed that pg_dump's output can be loaded into a server of an older major version — not even if the dump was taken from a server of that version. Loading a dump file into an older server may require manual editing of the dump file to remove syntax not understood by the older server. Use of the option is recommended in cross-version cases, as it can prevent problems arising from varying - reserved-word lists in different PostgreSQL versions. + reserved-word lists in different PostgreSQL versions. 
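A minimal sketch of the cross-version advice above, dumping the mydb database from the earlier examples with all identifiers quoted (the long form of the option discussed above is --quote-all-identifiers):

$ pg_dump --quote-all-identifiers mydb > db.sql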
@@ -1276,7 +1276,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; Examples - To dump a database called mydb into a SQL-script file: + To dump a database called mydb into a SQL-script file: $ pg_dump mydb > db.sql @@ -1284,7 +1284,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; To reload such a script into a (freshly created) database named - newdb: + newdb: $ psql -d newdb -f db.sql @@ -1318,7 +1318,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; To reload an archive file into a (freshly created) database named - newdb: + newdb: $ pg_restore -d newdb db.dump @@ -1326,7 +1326,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; - To dump a single table named mytab: + To dump a single table named mytab: $ pg_dump -t mytab mydb > db.sql @@ -1334,8 +1334,8 @@ CREATE DATABASE foo WITH TEMPLATE template0; - To dump all tables whose names start with emp in the - detroit schema, except for the table named + To dump all tables whose names start with emp in the + detroit schema, except for the table named employee_log: @@ -1344,9 +1344,9 @@ CREATE DATABASE foo WITH TEMPLATE template0; - To dump all schemas whose names start with east or - west and end in gsm, excluding any schemas whose - names contain the word test: + To dump all schemas whose names start with east or + west and end in gsm, excluding any schemas whose + names contain the word test: $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql @@ -1371,7 +1371,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; - To specify an upper-case or mixed-case name in and related switches, you need to double-quote the name; else it will be folded to lower case (see ). But diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml index 1dba702ad9..0a64c3548e 100644 --- a/doc/src/sgml/ref/pg_dumpall.sgml +++ b/doc/src/sgml/ref/pg_dumpall.sgml @@ -32,7 +32,7 @@ PostgreSQL documentation pg_dumpall is a utility for writing out - (dumping) all PostgreSQL databases + (dumping) all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to to restore the databases. It does this by @@ -63,7 +63,7 @@ PostgreSQL documentation times to the PostgreSQL server (once per database). If you use password authentication it will ask for a password each time. It is convenient to have a - ~/.pgpass file in such cases. See ~/.pgpass file in such cases. See for more information. @@ -78,8 +78,8 @@ PostgreSQL documentation - - + + Dump only the data, not the schema (data definitions). @@ -93,7 +93,7 @@ PostgreSQL documentation Include SQL commands to clean (drop) databases before - recreating them. DROP commands for roles and + recreating them. DROP commands for roles and tablespaces are added as well. @@ -134,13 +134,13 @@ PostgreSQL documentation - - + + Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application references - the OID + the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used. @@ -148,21 +148,21 @@ PostgreSQL documentation - + Do not output commands to set ownership of objects to match the original database. By default, pg_dumpall issues - ALTER OWNER or + ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created schema elements. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). 
To make a script that can be restored by any user, but will give - that user ownership of all the objects, specify . @@ -193,7 +193,7 @@ PostgreSQL documentation Specify the superuser user name to use when disabling triggers. - This is relevant only if is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.) @@ -211,21 +211,21 @@ PostgreSQL documentation - - + + Specifies verbose mode. This will cause pg_dumpall to output start/stop times to the dump file, and progress messages to standard error. - It will also enable verbose output in pg_dump. + It will also enable verbose output in pg_dump. - - + + Print the pg_dumpall version and exit. @@ -234,9 +234,9 @@ PostgreSQL documentation - - - + + + Prevent dumping of access privileges (grant/revoke commands). @@ -273,7 +273,7 @@ PostgreSQL documentation - + This option disables the use of dollar quoting for function bodies, @@ -283,7 +283,7 @@ PostgreSQL documentation - + This option is relevant only when creating a data-only dump. @@ -295,9 +295,9 @@ PostgreSQL documentation - Presently, the commands emitted for must be done as superuser. So, you should also specify - a superuser name with , or preferably be careful to start the resulting script as a superuser. @@ -309,7 +309,7 @@ PostgreSQL documentation Use conditional commands (i.e. add an IF EXISTS clause) to clean databases and other objects. This option is not valid - unless is also specified. @@ -335,9 +335,9 @@ PostgreSQL documentation Do not wait forever to acquire shared table locks at the beginning of the dump. Instead, fail if unable to lock a table within the specified - timeout. The timeout may be + timeout. The timeout may be specified in any of the formats accepted by SET - statement_timeout. Allowed values vary depending on the server + statement_timeout. Allowed values vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions since 7.3. This option is ignored when dumping from a pre-7.3 server. @@ -426,25 +426,25 @@ PostgreSQL documentation - + Force quoting of all identifiers. This option is recommended when - dumping a database from a server whose PostgreSQL - major version is different from pg_dumpall's, or when + dumping a database from a server whose PostgreSQL + major version is different from pg_dumpall's, or when the output is intended to be loaded into a server of a different - major version. By default, pg_dumpall quotes only + major version. By default, pg_dumpall quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets - of reserved words. Using prevents such issues, at the price of a harder-to-read dump script. - + When dumping a COPY or INSERT statement for a partitioned table, @@ -459,11 +459,11 @@ PostgreSQL documentation - + - Output SQL-standard SET SESSION AUTHORIZATION commands - instead of ALTER OWNER commands to determine object + Output SQL-standard SET SESSION AUTHORIZATION commands + instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards compatible, but depending on the history of the objects in the dump, might not restore properly. @@ -472,8 +472,8 @@ PostgreSQL documentation - - + + Show help about pg_dumpall command line @@ -498,7 +498,7 @@ PostgreSQL documentation string. See for more information. 
- The option is called --dbname for consistency with other + The option is called --dbname for consistency with other client applications, but because pg_dumpall needs to connect to many databases, database name in the connection string will be ignored. Use -l option to specify @@ -559,8 +559,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -588,14 +588,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_dumpall will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. Note that the password prompt will occur again for each database to be dumped. Usually, it's better to set up a - ~/.pgpass file than to rely on manual password entry. + ~/.pgpass file than to rely on manual password entry. @@ -605,11 +605,11 @@ PostgreSQL documentation Specifies a role name to be used to create the dump. - This option causes pg_dumpall to issue a - SET ROLE rolename + This option causes pg_dumpall to issue a + SET ROLE rolename command after connecting to the database. It is useful when the - authenticated user (specified by - Once restored, it is wise to run ANALYZE on each + Once restored, it is wise to run ANALYZE on each database so the optimizer has useful statistics. You - can also run vacuumdb -a -z to analyze all + can also run vacuumdb -a -z to analyze all databases. diff --git a/doc/src/sgml/ref/pg_isready.sgml b/doc/src/sgml/ref/pg_isready.sgml index 2ee79a0bbe..f140c82079 100644 --- a/doc/src/sgml/ref/pg_isready.sgml +++ b/doc/src/sgml/ref/pg_isready.sgml @@ -43,8 +43,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to connect to. @@ -61,8 +61,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the @@ -74,8 +74,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or the local Unix-domain @@ -98,8 +98,8 @@ PostgreSQL documentation - - + + The maximum number of seconds to wait when attempting connection before @@ -110,8 +110,8 @@ PostgreSQL documentation - - + + Connect to the database as the user - - + + Print the pg_isready version and exit. @@ -131,8 +131,8 @@ PostgreSQL documentation - - + + Show help about pg_isready command line @@ -159,9 +159,9 @@ PostgreSQL documentation Environment - pg_isready, like most other PostgreSQL + pg_isready, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + also uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/pg_receivewal.sgml b/doc/src/sgml/ref/pg_receivewal.sgml index f0513dad2a..5395fde6d6 100644 --- a/doc/src/sgml/ref/pg_receivewal.sgml +++ b/doc/src/sgml/ref/pg_receivewal.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation pg_receivewal - option + option @@ -49,9 +49,9 @@ PostgreSQL documentation - Unlike the WAL receiver of a PostgreSQL standby server, pg_receivewal + Unlike the WAL receiver of a PostgreSQL standby server, pg_receivewal by default flushes WAL data only when a WAL file is closed. - The option must be specified to flush WAL data in real time. @@ -77,7 +77,7 @@ PostgreSQL documentation In the absence of fatal errors, pg_receivewal will run until terminated by the SIGINT signal - (ControlC). + (ControlC). @@ -108,7 +108,7 @@ PostgreSQL documentation - If there is a record with LSN exactly equal to lsn, + If there is a record with LSN exactly equal to lsn, the record will be processed. 
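A minimal sketch of running pg_receivewal (host name, slot name, and target directory are assumptions; the replication slot and flush behavior are described in the entries that follow):

$ pg_receivewal -h backuphost -D /mnt/server/archivedir --slot=archiver --synchronous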
@@ -156,7 +156,7 @@ PostgreSQL documentation Require pg_receivewal to use an existing replication slot (see ). - When this option is used, pg_receivewal will report + When this option is used, pg_receivewal will report a flush position to the server, indicating when each segment has been synchronized to disk so that the server can remove that segment if it is not otherwise needed. @@ -181,7 +181,7 @@ PostgreSQL documentation Flush the WAL data to disk immediately after it has been received. Also send a status packet back to the server immediately after flushing, - regardless of --status-interval. + regardless of --status-interval. @@ -230,7 +230,7 @@ PostgreSQL documentation string. See for more information. - The option is called --dbname for consistency with other + The option is called --dbname for consistency with other client applications, but because pg_receivewal doesn't connect to any particular database in the cluster, database name in the connection string will be ignored. @@ -276,8 +276,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -305,7 +305,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_receivewal will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -345,8 +345,8 @@ PostgreSQL documentation - - + + Print the pg_receivewal version and exit. @@ -355,8 +355,8 @@ PostgreSQL documentation - - + + Show help about pg_receivewal command line @@ -386,8 +386,8 @@ PostgreSQL documentation Environment - This utility, like most other PostgreSQL utilities, - uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/pg_recvlogical.sgml b/doc/src/sgml/ref/pg_recvlogical.sgml index 9c7bb1907b..5add6113f3 100644 --- a/doc/src/sgml/ref/pg_recvlogical.sgml +++ b/doc/src/sgml/ref/pg_recvlogical.sgml @@ -40,11 +40,11 @@ PostgreSQL documentation - pg_recvlogical has no equivalent to the logical decoding + pg_recvlogical has no equivalent to the logical decoding SQL interface's peek and get modes. It sends replay confirmations for data lazily as it receives it and on clean exit. To examine pending data on a slot without consuming it, use - pg_logical_slot_peek_changes. + pg_logical_slot_peek_changes. @@ -125,7 +125,7 @@ PostgreSQL documentation - If there's a record with LSN exactly equal to lsn, + If there's a record with LSN exactly equal to lsn, the record will be output. @@ -145,7 +145,7 @@ PostgreSQL documentation Write received and decoded transaction data into this - file. Use - for stdout. + file. Use - for stdout. @@ -257,8 +257,8 @@ PostgreSQL documentation - - + + Enables verbose mode. @@ -353,7 +353,7 @@ PostgreSQL documentation for a password if the server demands password authentication. However, pg_recvlogical will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -366,8 +366,8 @@ PostgreSQL documentation - - + + Print the pg_recvlogical version and exit. 
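For reference, a typical lifecycle of the tool under the options just described might be sketched as (slot and database names are illustrative):

pg_recvlogical -d postgres --slot=test_slot --create-slot
pg_recvlogical -d postgres --slot=test_slot --start -f -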
@@ -376,8 +376,8 @@ - -? - --help + -? + --help Show help about pg_recvlogical command line arguments, and exit. @@ -393,8 +393,8 @@ Environment - This utility, like most other PostgreSQL utilities, - uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + uses the environment variables supported by libpq (see ). diff --git a/doc/src/sgml/ref/pg_resetwal.sgml b/doc/src/sgml/ref/pg_resetwal.sgml index defaf170dc..c8e5790a8e 100644 --- a/doc/src/sgml/ref/pg_resetwal.sgml +++ b/doc/src/sgml/ref/pg_resetwal.sgml @@ -34,7 +34,7 @@ PostgreSQL documentation pg_resetwal clears the write-ahead log (WAL) and optionally resets some other control information stored in the - pg_control file. This function is sometimes needed + pg_control file. This function is sometimes needed if these files have become corrupted. It should be used only as a last resort, when the server will not start due to such corruption. @@ -43,7 +43,7 @@ PostgreSQL documentation After running this command, it should be possible to start the server, but bear in mind that the database might contain inconsistent data due to partially-committed transactions. You should immediately dump your data, - run initdb, and reload. After reload, check for + run initdb, and reload. After reload, check for inconsistencies and repair as needed. @@ -52,21 +52,21 @@ PostgreSQL documentation it requires read/write access to the data directory. For safety reasons, you must specify the data directory on the command line. pg_resetwal does not use the environment variable - PGDATA. + PGDATA. If pg_resetwal complains that it cannot determine - valid data for pg_control, you can force it to proceed anyway - by specifying the -f (force) option. In this case plausible values will be substituted for the missing data. Most of the fields can be expected to match, but manual assistance might be needed for the next OID, next transaction ID and epoch, next multitransaction ID and offset, and WAL starting address fields. These fields can be set using the options discussed below. If you are not able to determine correct values for all - these fields, -f can still be used, but the recovered database must be treated with even more suspicion than - usual: an immediate dump and reload is imperative. Do not + usual: an immediate dump and reload is imperative. Do not execute any data-modifying operations in the database before you dump, as any such action is likely to make the corruption worse. @@ -81,7 +81,7 @@ PostgreSQL documentation Force pg_resetwal to proceed even if it cannot determine - valid data for pg_control, as explained above. + valid data for pg_control, as explained above. @@ -90,9 +90,9 @@ PostgreSQL documentation - The -n (no operation) option instructs pg_resetwal to print the values reconstructed from - pg_control and values about to be changed, and then exit + pg_control and values about to be changed, and then exit without modifying anything. This is mainly a debugging tool, but can be useful as a sanity check before allowing pg_resetwal to proceed for real. @@ -116,7 +116,7 @@ PostgreSQL documentation The following options are only needed when pg_resetwal is unable to determine appropriate values - by reading pg_control. Safe values can be determined as + by reading pg_control. Safe values can be determined as described below. For values that take numeric arguments, hexadecimal values can be specified by using the prefix 0x.
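A cautious workflow, with a hypothetical data directory, is to dry-run first and only then apply any manually determined values; the -x value here anticipates the pg_xact example given below:

pg_resetwal -n /usr/local/pgsql/data
pg_resetwal -x 0x1200000 /usr/local/pgsql/data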
@@ -134,7 +134,7 @@ PostgreSQL documentation A safe value for the oldest transaction ID for which the commit time can be retrieved (first part) can be determined by looking for the numerically smallest file name in the directory - pg_commit_ts under the data directory. Conversely, a safe + pg_commit_ts under the data directory. Conversely, a safe value for the newest transaction ID for which the commit time can be retrieved (second part) can be determined by looking for the numerically greatest file name in the same directory. The file names are in @@ -155,8 +155,8 @@ PostgreSQL documentation except in the field that is set by pg_resetwal, so any value will work so far as the database itself is concerned. You might need to adjust this value to ensure that replication - systems such as Slony-I and - Skytools work correctly — + systems such as Slony-I and + Skytools work correctly — if so, an appropriate value should be obtainable from the state of the downstream replicated database. @@ -173,22 +173,22 @@ PostgreSQL documentation The WAL starting address should be larger than any WAL segment file name currently existing in - the directory pg_wal under the data directory. + the directory pg_wal under the data directory. These names are also in hexadecimal and have three parts. The first - part is the timeline ID and should usually be kept the same. - For example, if 00000001000000320000004A is the - largest entry in pg_wal, use -l 00000001000000320000004B or higher. + part is the timeline ID and should usually be kept the same. + For example, if 00000001000000320000004A is the + largest entry in pg_wal, use -l 00000001000000320000004B or higher. pg_resetwal itself looks at the files in - pg_wal and chooses a default setting beyond the last existing file name. Therefore, manual adjustment of - @@ -204,10 +204,10 @@ PostgreSQL documentation A safe value for the next multitransaction ID (first part) can be determined by looking for the numerically largest file name in the - directory pg_multixact/offsets under the data directory, + directory pg_multixact/offsets under the data directory, adding one, and then multiplying by 65536 (0x10000). Conversely, a safe value for the oldest multitransaction ID (second part of - ) can be determined by looking for the numerically smallest file name in the same directory and multiplying by 65536. The file names are in hexadecimal, so the easiest way to do this is to specify the option value in hexadecimal and append four zeroes. @@ -239,7 +239,7 @@ PostgreSQL documentation A safe value can be determined by looking for the numerically largest - file name in the directory pg_multixact/members under the + file name in the directory pg_multixact/members under the data directory, adding one, and then multiplying by 52352 (0xCC80). The file names are in hexadecimal. There is no simple recipe such as the ones for other options of appending zeroes. @@ -256,12 +256,12 @@ PostgreSQL documentation A safe value can be determined by looking for the numerically largest - file name in the directory pg_xact under the data directory, + file name in the directory pg_xact under the data directory, adding one, and then multiplying by 1048576 (0x100000). Note that the file names are in hexadecimal. It is usually easiest to specify the option value in - hexadecimal too. For example, if 0011 is the largest entry - in pg_xact, -x 0x1200000 will work (five + hexadecimal too. 
For example, if 0011 is the largest entry + in pg_xact, -x 0x1200000 will work (five trailing zeroes provide the proper multiplier). diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml index a628e79310..ed535f6f89 100644 --- a/doc/src/sgml/ref/pg_restore.sgml +++ b/doc/src/sgml/ref/pg_restore.sgml @@ -98,7 +98,7 @@ This option is similar to, but for historical reasons not identical - to, specifying . @@ -109,7 +109,7 @@ Clean (drop) database objects before recreating them. - (Unless is used, this might generate some harmless error messages, if any objects were not present in the destination database.) @@ -128,8 +128,8 @@ When this option is used, the database named with - is used only to issue the initial DROP DATABASE and - CREATE DATABASE commands. All data is restored into the + is used only to issue the initial DROP DATABASE and + CREATE DATABASE commands. All data is restored into the database name that appears in the archive. @@ -183,8 +183,8 @@ - c - custom + c + custom The archive is in the custom format of @@ -194,8 +194,8 @@ - d - directory + d + directory The archive is a directory archive. @@ -204,8 +204,8 @@ - t - tar + t + tar The archive is a tar archive. @@ -222,7 +222,7 @@ Restore definition of named index only. Multiple indexes - may be specified with multiple switches. @@ -233,7 +233,7 @@ Run the most time-consuming parts - of pg_restore — those which load data, + of pg_restore — those which load data, create indexes, or create constraints — using multiple concurrent jobs. This option can dramatically reduce the time to restore a large database to a server running on a @@ -275,8 +275,8 @@ List the table of contents of the archive. The output of this operation can be used as input to the option. Note that - if filtering switches such as or are + used with , they will restrict the items listed. @@ -289,11 +289,11 @@ Restore only those archive elements that are listed in list-file, and restore them in the order they appear in the file. Note that - if filtering switches such as or are + used with , they will further restrict the items restored. - list-file is normally created by - editing the output of a previous - This option is the inverse of . It is similar to, but for historical reasons not identical to, specifying - . - (Do not confuse this with the @@ -401,7 +401,7 @@ Specify the superuser user name to use when disabling triggers. - This is relevant only if is used. @@ -412,16 +412,16 @@ Restore definition and/or data of only the named table. - For this purpose, table includes views, materialized views, + For this purpose, table includes views, materialized views, sequences, and foreign tables. Multiple tables - can be selected by writing multiple switches. This option can be combined with the option to specify table(s) in a particular schema. - When is specified, pg_restore + When is specified, pg_restore makes no attempt to restore any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that a specific-table restore into a clean database will @@ -433,14 +433,14 @@ This flag does not behave identically to the flag of pg_dump. There is not currently - any provision for wild-card matching in pg_restore, - nor can you include a schema name within its . - In versions prior to PostgreSQL 9.6, this flag + In versions prior to PostgreSQL 9.6, this flag matched only tables, not any other type of relation. @@ -453,7 +453,7 @@ Restore named trigger only. 
Multiple triggers may be specified with - multiple switches. @@ -469,8 +469,8 @@ - - + + Print the pg_restore version and exit. @@ -495,16 +495,16 @@ Execute the restore as a single transaction (that is, wrap the - emitted commands in BEGIN/COMMIT). This + emitted commands in BEGIN/COMMIT). This ensures that either all the commands complete successfully, or no changes are applied. This option implies - . - + This option is relevant only when performing a data-only restore. @@ -517,16 +517,16 @@ Presently, the commands emitted for - must be done as superuser. So you + should also specify a superuser name with or, preferably, run pg_restore as a - PostgreSQL superuser. + PostgreSQL superuser. - + This option is relevant only when restoring the contents of a table @@ -554,7 +554,7 @@ Use conditional commands (i.e. add an IF EXISTS clause) when cleaning database objects. This option is not valid - unless is also specified. @@ -568,8 +568,8 @@ With this option, data for such a table is skipped. This behavior is useful if the target database already contains the desired table contents. For example, - auxiliary tables for PostgreSQL extensions - such as PostGIS might already be loaded in + auxiliary tables for PostgreSQL extensions + such as PostGIS might already be loaded in the target database; specifying this option prevents duplicate or obsolete data from being loaded into them. @@ -627,7 +627,7 @@ Only restore the named section. The section name can be - , , or . This option can be specified more than once to select multiple sections. The default is to restore all sections. @@ -642,7 +642,7 @@ - + Require that each schema @@ -657,8 +657,8 @@ - Output SQL-standard SET SESSION AUTHORIZATION commands - instead of ALTER OWNER commands to determine object + Output SQL-standard SET SESSION AUTHORIZATION commands + instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly. @@ -667,8 +667,8 @@ - - + + Show help about pg_restore command line @@ -723,8 +723,8 @@ - - + + Never issue a password prompt. If the server requires @@ -752,7 +752,7 @@ for a password if the server demands password authentication. However, pg_restore will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -763,11 +763,11 @@ Specifies a role name to be used to perform the restore. - This option causes pg_restore to issue a - SET ROLE rolename + This option causes pg_restore to issue a + SET ROLE rolename command after connecting to the database. It is useful when the - authenticated user (specified by @@ -192,9 +192,9 @@ PostgreSQL documentation Environment - When option is used, pg_rewind also uses the environment variables - supported by libpq (see ). + supported by libpq (see ). @@ -224,7 +224,7 @@ PostgreSQL documentation Copy all those changed blocks from the source cluster to the target cluster, either using direct file system access - () or SQL (). @@ -237,9 +237,9 @@ PostgreSQL documentation Apply the WAL from the source cluster, starting from the checkpoint - created at failover. (Strictly speaking, pg_rewind + created at failover. (Strictly speaking, pg_rewind doesn't apply the WAL, it just creates a backup label file that - makes PostgreSQL start by replaying all WAL from + makes PostgreSQL start by replaying all WAL from that checkpoint forward.) 
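A sketch of an invocation that carries out these steps using the SQL-connection variant, with hypothetical paths and connection string:

pg_rewind --target-pgdata=/var/lib/postgresql/data --source-server="host=new-primary user=postgres"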
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml index cff88a4c1e..0b39726e30 100644 --- a/doc/src/sgml/ref/pg_waldump.sgml +++ b/doc/src/sgml/ref/pg_waldump.sgml @@ -133,7 +133,7 @@ PostgreSQL documentation Only display records generated by the specified resource manager. - If list is passed as name, print a list of valid resource manager + If list is passed as name, print a list of valid resource manager names, and exit. @@ -156,15 +156,15 @@ PostgreSQL documentation Timeline from which to read log records. The default is to use the - value in startseg, if that is specified; otherwise, the + value in startseg, if that is specified; otherwise, the default is 1. - - + + Print the pg_waldump version and exit. @@ -195,8 +195,8 @@ PostgreSQL documentation - - + + Show help about pg_waldump command line @@ -220,8 +220,8 @@ PostgreSQL documentation - pg_waldump cannot read WAL files with suffix - .partial. If those files need to be read, .partial + pg_waldump cannot read WAL files with suffix + .partial. If those files need to be read, .partial suffix needs to be removed from the file name. diff --git a/doc/src/sgml/ref/pgarchivecleanup.sgml b/doc/src/sgml/ref/pgarchivecleanup.sgml index abe01bef4f..65ba3df928 100644 --- a/doc/src/sgml/ref/pgarchivecleanup.sgml +++ b/doc/src/sgml/ref/pgarchivecleanup.sgml @@ -29,44 +29,44 @@ Description - pg_archivecleanup is designed to be used as an + pg_archivecleanup is designed to be used as an archive_cleanup_command to clean up WAL file archives when running as a standby server (see ). - pg_archivecleanup can also be used as a standalone program to + pg_archivecleanup can also be used as a standalone program to clean WAL file archives. To configure a standby - server to use pg_archivecleanup, put this into its + server to use pg_archivecleanup, put this into its recovery.conf configuration file: -archive_cleanup_command = 'pg_archivecleanup archivelocation %r' +archive_cleanup_command = 'pg_archivecleanup archivelocation %r' - where archivelocation is the directory from which WAL segment + where archivelocation is the directory from which WAL segment files should be removed. When used within , all WAL files - logically preceding the value of the %r argument will be removed - from archivelocation. This minimizes the number of files + logically preceding the value of the %r argument will be removed + from archivelocation. This minimizes the number of files that need to be retained, while preserving crash-restart capability. Use of - this parameter is appropriate if the archivelocation is a + this parameter is appropriate if the archivelocation is a transient staging area for this particular standby server, but - not when the archivelocation is intended as a + not when the archivelocation is intended as a long-term WAL archive area, or when multiple standby servers are recovering from the same archive location. When used as a standalone program all WAL files logically preceding the - oldestkeptwalfile will be removed from archivelocation. - In this mode, if you specify a .partial or .backup + oldestkeptwalfile will be removed from archivelocation. + In this mode, if you specify a .partial or .backup file name, then only the file prefix will be used as the - oldestkeptwalfile. This treatment of .backup + oldestkeptwalfile. This treatment of .backup file name allows you to remove all WAL files archived prior to a specific base backup without error. 
For example, the following example will remove all files older than - WAL file name 000000010000003700000010: + WAL file name 000000010000003700000010: pg_archivecleanup -d archive 000000010000003700000010.00000020.backup @@ -77,7 +77,7 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" pg_archivecleanup assumes that - archivelocation is a directory readable and writable by the + archivelocation is a directory readable and writable by the server-owning user. @@ -94,7 +94,7 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" - Print lots of debug logging output on stderr. + Print lots of debug logging output on stderr. @@ -103,14 +103,14 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" - Print the names of the files that would have been removed on stdout (performs a dry run). + Print the names of the files that would have been removed on stdout (performs a dry run). - - + + Print the pg_archivecleanup version and exit. @@ -119,7 +119,7 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" - extension + extension Provide an extension @@ -134,8 +134,8 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" - - + + Show help about pg_archivecleanup command line @@ -152,8 +152,8 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" pg_archivecleanup is designed to work with - PostgreSQL 8.0 and later when used as a standalone utility, - or with PostgreSQL 9.0 and later when used as an + PostgreSQL 8.0 and later when used as a standalone utility, + or with PostgreSQL 9.0 and later when used as an archive cleanup command. @@ -172,14 +172,14 @@ pg_archivecleanup: removing file "archive/00000001000000370000000E" archive_cleanup_command = 'pg_archivecleanup -d /mnt/standby/archive %r 2>>cleanup.log' where the archive directory is physically located on the standby server, - so that the archive_command is accessing it across NFS, + so that the archive_command is accessing it across NFS, but the files are local to the standby. This will: - produce debugging output in cleanup.log + produce debugging output in cleanup.log diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index f5db8d18d3..e509e6c7f6 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -34,12 +34,12 @@ Description pgbench is a simple program for running benchmark - tests on PostgreSQL. It runs the same sequence of SQL + tests on PostgreSQL. It runs the same sequence of SQL commands over and over, possibly in multiple concurrent database sessions, and then calculates the average transaction rate (transactions per second). By default, pgbench tests a scenario that is - loosely based on TPC-B, involving five SELECT, - UPDATE, and INSERT commands per transaction. + loosely based on TPC-B, involving five SELECT, + UPDATE, and INSERT commands per transaction. However, it is easy to test other cases by writing your own transaction script files. @@ -63,7 +63,7 @@ tps = 85.296346 (excluding connections establishing) settings. The next line reports the number of transactions completed and intended (the latter being just the product of number of clients and number of transactions per client); these will be equal unless the run - failed before completion. (In mode, only the actual number of transactions is printed.) The last two lines report the number of transactions per second, figured with and without counting the time to start database sessions. 
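For instance, initializing a hypothetical test database at fifty times the default size:

pgbench -i -s 50 mydb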
@@ -71,27 +71,27 @@ tps = 85.296346 (excluding connections establishing) The default TPC-B-like transaction test requires specific tables to be - set up beforehand. pgbench should be invoked with - the (initialize) option to create and populate these tables. (When you are testing a custom script, you don't need this step, but will instead need to do whatever setup your test needs.) Initialization looks like: -pgbench -i other-options dbname +pgbench -i other-options dbname - where dbname is the name of the already-created - database to test in. (You may also need , + , and/or options to specify how to connect to the database server.) - pgbench -i creates four tables pgbench_accounts, - pgbench_branches, pgbench_history, and - pgbench_tellers, + pgbench -i creates four tables pgbench_accounts, + pgbench_branches, pgbench_history, and + pgbench_tellers, destroying any existing tables of these names. Be very careful to use another database if you have tables having these names! @@ -99,7 +99,7 @@ pgbench -i other-options dbn - At the default scale factor of 1, the tables initially + At the default scale factor of 1, the tables initially contain this many rows: table # of rows @@ -110,22 +110,22 @@ pgbench_accounts 100000 pgbench_history 0 You can (and, for most purposes, probably should) increase the number - of rows by using the (scale factor) option. The + (fillfactor) option might also be used at this point. Once you have done the necessary setup, you can run your benchmark - with a command that doesn't include , that is -pgbench options dbname +pgbench options dbname In nearly all cases, you'll need some options to make a useful test. - The most important options are (number of clients), + (number of transactions), (time limit), + and (specify a custom script file). See below for a full list. @@ -159,13 +159,13 @@ pgbench options dbname - fillfactor - fillfactor + fillfactor + fillfactor - Create the pgbench_accounts, - pgbench_tellers and - pgbench_branches tables with the given fillfactor. + Create the pgbench_accounts, + pgbench_tellers and + pgbench_branches tables with the given fillfactor. Default is 100. @@ -194,13 +194,13 @@ pgbench options dbname - scale_factor - scale_factor + scale_factor + scale_factor Multiply the number of rows generated by the scale factor. - For example, -s 100 will create 10,000,000 rows - in the pgbench_accounts table. Default is 1. + For example, -s 100 will create 10,000,000 rows + in the pgbench_accounts table. Default is 1. When the scale is 20,000 or larger, the columns used to hold account identifiers (aid columns) will switch to using larger integers (bigint), @@ -262,17 +262,17 @@ pgbench options dbname - - + scriptname[@weight] + =scriptname[@weight] Add the specified built-in script to the list of executed scripts. - An optional integer weight after @ allows to adjust the + An optional integer weight after @ allows to adjust the probability of drawing the script. If not specified, it is set to 1. - Available built-in scripts are: tpcb-like, - simple-update and select-only. + Available built-in scripts are: tpcb-like, + simple-update and select-only. Unambiguous prefixes of built-in names are accepted. - With special name list, show the list of built-in scripts + With special name list, show the list of built-in scripts and exit immediately. 
@@ -280,8 +280,8 @@ pgbench options dbname - clients - clients + clients + clients Number of clients simulated, that is, number of concurrent database @@ -313,24 +313,24 @@ pgbench options dbname - varname=value - varname=value + varname=value + varname=value Define a variable for use by a custom script (see below). - Multiple options are allowed. - - + filename[@weight] + filename[@weight] - Add a transaction script read from filename to + Add a transaction script read from filename to the list of executed scripts. - An optional integer weight after @ allows to adjust the + An optional integer weight after @ allows to adjust the probability of drawing the test. See below for details. @@ -338,8 +338,8 @@ pgbench options dbname - threads - threads + threads + threads Number of worker threads within pgbench. @@ -362,38 +362,38 @@ pgbench options dbname - limit - limit + limit + limit - Transaction which last more than limit milliseconds - are counted and reported separately, as late. + Transaction which last more than limit milliseconds + are counted and reported separately, as late. - When throttling is used ( - querymode - querymode + querymode + querymode Protocol to use for submitting queries to the server: - simple: use simple query protocol. + simple: use simple query protocol. - extended: use extended query protocol. + extended: use extended query protocol. - prepared: use extended query protocol with prepared statements. + prepared: use extended query protocol with prepared statements. The default is simple query protocol. (See @@ -408,11 +408,11 @@ pgbench options dbname Perform no vacuuming before running the test. - This option is necessary + This option is necessary if you are running a custom test scenario that does not include - the standard tables pgbench_accounts, - pgbench_branches, pgbench_history, and - pgbench_tellers. + the standard tables pgbench_accounts, + pgbench_branches, pgbench_history, and + pgbench_tellers. @@ -423,20 +423,20 @@ pgbench options dbname Run built-in simple-update script. - Shorthand for . - sec - sec + sec + sec - Show progress report every sec seconds. The report + Show progress report every sec seconds. The report includes the time since the beginning of the run, the tps since the last report, and the transaction latency average and standard - deviation since the last report. Under throttling (), the latency is computed with respect to the transaction scheduled start time, not the actual transaction beginning time, thus it also includes the average schedule lag time. @@ -457,8 +457,8 @@ pgbench options dbname - rate - rate + rate + rate Execute transactions targeting the specified rate instead of running @@ -487,7 +487,7 @@ pgbench options dbname - If is used together with , a transaction can lag behind so much that it is already over the latency limit when the previous transaction ends, because the latency is calculated from the scheduled start time. Such transactions are @@ -508,15 +508,15 @@ pgbench options dbname - scale_factor - scale_factor + scale_factor + scale_factor - Report the specified scale factor in pgbench's + Report the specified scale factor in pgbench's output. With the built-in tests, this is not necessary; the correct scale factor will be detected by counting the number of - rows in the pgbench_branches table. - However, when testing only custom benchmarks ( option), the scale factor will be reported as 1 unless this option is used. @@ -528,14 +528,14 @@ pgbench options dbname Run built-in select-only script. 
- Shorthand for . - transactions - transactions + transactions + transactions Number of transactions each client runs. Default is 10. @@ -544,8 +544,8 @@ pgbench options dbname - seconds - seconds + seconds + seconds Run the test for this many seconds, rather than a fixed number of @@ -561,15 +561,15 @@ pgbench options dbname Vacuum all four standard tables before running the test. - With neither - + Length of aggregation interval (in seconds). May be used only @@ -580,11 +580,11 @@ pgbench options dbname - + Set the filename prefix for the log files created by - @@ -593,7 +593,7 @@ pgbench options dbname - When showing progress (option ), use a timestamp (Unix epoch) instead of the number of seconds since the beginning of the run. The unit is in seconds, with millisecond precision after the dot. @@ -603,7 +603,7 @@ pgbench options dbname - + Sampling rate, used when writing data into the log, to reduce the @@ -635,8 +635,8 @@ pgbench options dbname - hostname - hostname + hostname + hostname The database server's host name @@ -645,8 +645,8 @@ pgbench options dbname - port - port + port + port The database server's port number @@ -655,8 +655,8 @@ pgbench options dbname - login - login + login + login The user name to connect as @@ -665,8 +665,8 @@ pgbench options dbname - - + + Print the pgbench version and exit. @@ -675,8 +675,8 @@ pgbench options dbname - - + + Show help about pgbench command line @@ -694,23 +694,23 @@ pgbench options dbname Notes - What is the <quote>Transaction</> Actually Performed in <application>pgbench</application>? + What is the <quote>Transaction</quote> Actually Performed in <application>pgbench</application>? - pgbench executes test scripts chosen randomly + pgbench executes test scripts chosen randomly from a specified list. - They include built-in scripts with and + user-provided custom scripts with . Each script may be given a relative weight specified after a - @ so as to change its drawing probability. - The default weight is 1. - Scripts with a weight of 0 are ignored. + @ so as to change its drawing probability. + The default weight is 1. + Scripts with a weight of 0 are ignored. - The default built-in transaction script (also invoked with @@ -726,15 +726,15 @@ pgbench options dbname - If you select the simple-update built-in (also ), steps 4 and 5 aren't included in the transaction. This will avoid update contention on these tables, but it makes the test case even less like TPC-B. - If you select the select-only built-in (also @@ -745,26 +745,26 @@ pgbench options dbname pgbench has support for running custom benchmark scenarios by replacing the default transaction script (described above) with a transaction script read from a file - ( option). In this case a transaction + ( option). In this case a transaction counts as one execution of a script file. A script file contains one or more SQL commands terminated by semicolons. Empty lines and lines beginning with - -- are ignored. Script files can also contain - meta commands, which are interpreted by pgbench + -- are ignored. Script files can also contain + meta commands, which are interpreted by pgbench itself, as described below. - Before PostgreSQL 9.6, SQL commands in script files + Before PostgreSQL 9.6, SQL commands in script files were terminated by newlines, and so they could not be continued across - lines. Now a semicolon is required to separate consecutive + lines. 
Now a semicolon is required to separate consecutive SQL commands (though a SQL command does not need one if it is followed by a meta command). If you need to create a script file that works with - both old and new versions of pgbench, be sure to write + both old and new versions of pgbench, be sure to write each SQL command on a single line ending with a semicolon. @@ -773,15 +773,15 @@ pgbench options dbname There is a simple variable-substitution facility for script files. Variable names must consist of letters (including non-Latin letters), digits, and underscores. - Variables can be set by the command-line option, explained above, or by the meta commands explained below. - In addition to any variables preset by command-line options, there are a few variables that are preset automatically, listed in . A value specified for these - variables using takes precedence over the automatic presets. Once set, a variable's value can be inserted into a SQL command by writing - :variablename. When running more than + :variablename. When running more than one client session, each session has its own set of variables. @@ -810,7 +810,7 @@ pgbench options dbname
- Script file meta commands begin with a backslash (\) and + Script file meta commands begin with a backslash (\) and normally extend to the end of the line, although they can be continued to additional lines by writing backslash-return. Arguments to a meta command are separated by white space. @@ -820,20 +820,20 @@ pgbench options dbname - \set varname expression + \set varname expression - Sets variable varname to a value calculated - from expression. - The expression may contain integer constants such as 5432, - double constants such as 3.14159, - references to variables :variablename, - unary operators (+, -) and binary operators - (+, -, *, /, - %) with their usual precedence and associativity, - function calls, and + Sets variable varname to a value calculated + from expression. + The expression may contain integer constants such as 5432, + double constants such as 3.14159, + references to variables :variablename, + unary operators (+, -) and binary operators + (+, -, *, /, + %) with their usual precedence and associativity, + function calls, and parentheses. @@ -849,16 +849,16 @@ pgbench options dbname - \sleep number [ us | ms | s ] + \sleep number [ us | ms | s ] Causes script execution to sleep for the specified duration in - microseconds (us), milliseconds (ms) or seconds - (s). If the unit is omitted then seconds are the default. - number can be either an integer constant or a - :variablename reference to a variable + microseconds (us), milliseconds (ms) or seconds + (s). If the unit is omitted then seconds are the default. + number can be either an integer constant or a + :variablename reference to a variable having an integer value. @@ -872,22 +872,22 @@ pgbench options dbname - \setshell varname command [ argument ... ] + \setshell varname command [ argument ... ] - Sets variable varname to the result of the shell command - command with the given argument(s). + Sets variable varname to the result of the shell command + command with the given argument(s). The command must return an integer value through its standard output. - command and each argument can be either - a text constant or a :variablename reference - to a variable. If you want to use an argument starting + command and each argument can be either + a text constant or a :variablename reference + to a variable. If you want to use an argument starting with a colon, write an additional colon at the beginning of - argument. + argument. @@ -900,7 +900,7 @@ pgbench options dbname - \shell command [ argument ... ] + \shell command [ argument ... ] @@ -924,7 +924,7 @@ pgbench options dbname The functions listed in are built - into pgbench and may be used in expressions appearing in + into pgbench and may be used in expressions appearing in \set. @@ -943,123 +943,123 @@ pgbench options dbname - abs(a) - same as a - absolute value - abs(-17) - 17 + abs(a) + same as a + absolute value + abs(-17) + 17 - debug(a) - same as a - print a to stderr, - and return a - debug(5432.1) - 5432.1 + debug(a) + same as a + print a to stderr, + and return a + debug(5432.1) + 5432.1 - double(i) - double - cast to double - double(5432) - 5432.0 + double(i) + double + cast to double + double(5432) + 5432.0 - greatest(a [, ... ] ) - double if any a is double, else integer - largest value among arguments - greatest(5, 4, 3, 2) - 5 + greatest(a [, ... 
] ) + double if any a is double, else integer + largest value among arguments + greatest(5, 4, 3, 2) + 5 - int(x) - integer - cast to int - int(5.4 + 3.8) - 9 + int(x) + integer + cast to int + int(5.4 + 3.8) + 9 - least(a [, ... ] ) - double if any a is double, else integer - smallest value among arguments - least(5, 4, 3, 2.1) - 2.1 + least(a [, ... ] ) + double if any a is double, else integer + smallest value among arguments + least(5, 4, 3, 2.1) + 2.1 - pi() - double - value of the constant PI - pi() - 3.14159265358979323846 + pi() + double + value of the constant PI + pi() + 3.14159265358979323846 - random(lb, ub) - integer - uniformly-distributed random integer in [lb, ub] - random(1, 10) - an integer between 1 and 10 + random(lb, ub) + integer + uniformly-distributed random integer in [lb, ub] + random(1, 10) + an integer between 1 and 10 - random_exponential(lb, ub, parameter) - integer - exponentially-distributed random integer in [lb, ub], - see below - random_exponential(1, 10, 3.0) - an integer between 1 and 10 + random_exponential(lb, ub, parameter) + integer + exponentially-distributed random integer in [lb, ub], + see below + random_exponential(1, 10, 3.0) + an integer between 1 and 10 - random_gaussian(lb, ub, parameter) - integer - Gaussian-distributed random integer in [lb, ub], - see below - random_gaussian(1, 10, 2.5) - an integer between 1 and 10 + random_gaussian(lb, ub, parameter) + integer + Gaussian-distributed random integer in [lb, ub], + see below + random_gaussian(1, 10, 2.5) + an integer between 1 and 10 - sqrt(x) - double - square root - sqrt(2.0) - 1.414213562 + sqrt(x) + double + square root + sqrt(2.0) + 1.414213562 - The random function generates values using a uniform + The random function generates values using a uniform distribution, that is all the values are drawn within the specified - range with equal probability. The random_exponential and - random_gaussian functions require an additional double + range with equal probability. The random_exponential and + random_gaussian functions require an additional double parameter which determines the precise shape of the distribution. - For an exponential distribution, parameter + For an exponential distribution, parameter controls the distribution by truncating a quickly-decreasing - exponential distribution at parameter, and then + exponential distribution at parameter, and then projecting onto integers between the bounds. To be precise, with f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter)) - Then value i between min and - max inclusive is drawn with probability: - f(i) - f(i + 1). + Then value i between min and + max inclusive is drawn with probability: + f(i) - f(i + 1). - Intuitively, the larger the parameter, the more - frequently values close to min are accessed, and the - less frequently values close to max are accessed. - The closer to 0 parameter is, the flatter (more + Intuitively, the larger the parameter, the more + frequently values close to min are accessed, and the + less frequently values close to max are accessed. + The closer to 0 parameter is, the flatter (more uniform) the access distribution. A crude approximation of the distribution is that the most frequent 1% - values in the range, close to min, are drawn - parameter% of the time. - The parameter value must be strictly positive. + values in the range, close to min, are drawn + parameter% of the time. + The parameter value must be strictly positive. 
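As a usage sketch, these generators are invoked from \set meta commands, for example (variable names illustrative; the first line matches the built-in tpcb-like script):

\set aid random(1, 100000 * :scale)
\set bid random_exponential(1, 1000, 5.0)
\set tid random_gaussian(1, 10, 2.5)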
@@ -1067,32 +1067,32 @@ f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter)) For a Gaussian distribution, the interval is mapped onto a standard normal distribution (the classical bell-shaped Gaussian curve) truncated - at -parameter on the left and +parameter + at -parameter on the left and +parameter on the right. Values in the middle of the interval are more likely to be drawn. - To be precise, if PHI(x) is the cumulative distribution - function of the standard normal distribution, with mean mu - defined as (max + min) / 2.0, with + To be precise, if PHI(x) is the cumulative distribution + function of the standard normal distribution, with mean mu + defined as (max + min) / 2.0, with f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) / (2.0 * PHI(parameter) - 1) - then value i between min and - max inclusive is drawn with probability: - f(i + 0.5) - f(i - 0.5). - Intuitively, the larger the parameter, the more + then value i between min and + max inclusive is drawn with probability: + f(i + 0.5) - f(i - 0.5). + Intuitively, the larger the parameter, the more frequently values close to the middle of the interval are drawn, and the - less frequently values close to the min and - max bounds. About 67% of values are drawn from the - middle 1.0 / parameter, that is a relative - 0.5 / parameter around the mean, and 95% in the middle - 2.0 / parameter, that is a relative - 1.0 / parameter around the mean; for instance, if - parameter is 4.0, 67% of values are drawn from the + less frequently values close to the min and + max bounds. About 67% of values are drawn from the + middle 1.0 / parameter, that is a relative + 0.5 / parameter around the mean, and 95% in the middle + 2.0 / parameter, that is a relative + 1.0 / parameter around the mean; for instance, if + parameter is 4.0, 67% of values are drawn from the middle quarter (1.0 / 4.0) of the interval (i.e. from - 3.0 / 8.0 to 5.0 / 8.0) and 95% from - the middle half (2.0 / 4.0) of the interval (second and third - quartiles). The minimum parameter is 2.0 for performance + 3.0 / 8.0 to 5.0 / 8.0) and 95% from + the middle half (2.0 / 4.0) of the interval (second and third + quartiles). The minimum parameter is 2.0 for performance of the Box-Muller transform. @@ -1128,21 +1128,21 @@ END; Per-Transaction Logging - With the option (but without the option), - pgbench writes information about each transaction + pgbench writes information about each transaction to a log file. The log file will be named - prefix.nnn, - where prefix defaults to pgbench_log, and - nnn is the PID of the + prefix.nnn, + where prefix defaults to pgbench_log, and + nnn is the PID of the pgbench process. - The prefix can be changed by using the option. + If the option is 2 or higher, so that there are multiple worker threads, each will have its own log file. The first worker will use the same name for its log file as in the standard single worker case. The additional log files for the other workers will be named - prefix.nnn.mmm, - where mmm is a sequential number for each worker starting + prefix.nnn.mmm, + where mmm is a sequential number for each worker starting with 1. 
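For example, a hypothetical two-worker run such as:

pgbench -c 4 -j 2 -T 30 -l mydb

would write pgbench_log.nnn for the first worker and pgbench_log.nnn.1 for the second, where nnn is the PID of the pgbench process.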
@@ -1150,27 +1150,27 @@ END; The format of the log is: -client_id transaction_no time script_no time_epoch time_us schedule_lag +client_id transaction_no time script_no time_epoch time_us schedule_lag where - client_id indicates which client session ran the transaction, - transaction_no counts how many transactions have been + client_id indicates which client session ran the transaction, + transaction_no counts how many transactions have been run by that session, - time is the total elapsed transaction time in microseconds, - script_no identifies which script file was used (useful when - multiple scripts were specified with @@ -1182,9 +1182,9 @@ END; 0 202 2038 0 1175850569 2663 - Another example with --rate=100 - and --latency-limit=5 (note the additional - schedule_lag column): + Another example with --rate=100 + and --latency-limit=5 (note the additional + schedule_lag column): 0 81 4621 0 1412881037 912698 3005 0 82 6173 0 1412881037 914578 4304 @@ -1201,7 +1201,7 @@ END; When running a long test on hardware that can handle a lot of transactions, - the log files can become very large. The option can be used to log only a random sample of transactions. @@ -1214,30 +1214,30 @@ END; format is used for the log files: -interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency sum_lag sum_lag_2 min_lag max_lag skipped +interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency sum_lag sum_lag_2 min_lag max_lag skipped where - interval_start is the start of the interval (as a Unix + interval_start is the start of the interval (as a Unix epoch time stamp), - num_transactions is the number of transactions + num_transactions is the number of transactions within the interval, sum_latency is the sum of the transaction latencies within the interval, sum_latency_2 is the sum of squares of the transaction latencies within the interval, - min_latency is the minimum latency within the interval, + min_latency is the minimum latency within the interval, and - max_latency is the maximum latency within the interval. + max_latency is the maximum latency within the interval. The next fields, - sum_lag, sum_lag_2, min_lag, - and max_lag, are only present if the option is used. They provide statistics about the time each transaction had to wait for the previous one to finish, i.e. the difference between each transaction's scheduled start time and the time it actually started. - The very last field, skipped, - is only present if the option is used, too. It counts the number of transactions skipped because they would have started too late. Each transaction is counted in the interval when it was committed. @@ -1265,7 +1265,7 @@ END; Per-Statement Latencies - With the @@ -79,8 +79,8 @@ - - + + Print the pg_test_fsync version and exit. @@ -89,8 +89,8 @@ - - + + Show help about pg_test_fsync command line diff --git a/doc/src/sgml/ref/pgtesttiming.sgml b/doc/src/sgml/ref/pgtesttiming.sgml index c659101361..966546747e 100644 --- a/doc/src/sgml/ref/pgtesttiming.sgml +++ b/doc/src/sgml/ref/pgtesttiming.sgml @@ -27,7 +27,7 @@ Description - pg_test_timing is a tool to measure the timing overhead + pg_test_timing is a tool to measure the timing overhead on your system and confirm that the system time never moves backwards. Systems that are slow to collect timing data can give less accurate EXPLAIN ANALYZE results. @@ -57,8 +57,8 @@ - - + + Print the pg_test_timing version and exit. 
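A minimal sketch of invoking the tool just described, overriding the measurement duration to ten seconds:

pg_test_timing -d 10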
@@ -67,8 +67,8 @@ - -? - --help + -? + --help Show help about pg_test_timing command line arguments, and exit. diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index c3df343571..8785a3ded2 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -35,38 +35,38 @@ Description - pg_upgrade (formerly called pg_migrator) allows data - stored in PostgreSQL data files to be upgraded to a later PostgreSQL + pg_upgrade (formerly called pg_migrator) allows data + stored in PostgreSQL data files to be upgraded to a later PostgreSQL major version without the data dump/reload typically required for major version upgrades, e.g. from 9.6.3 to the current major release - of PostgreSQL. It is not required for minor version upgrades, e.g. from + of PostgreSQL. It is not required for minor version upgrades, e.g. from 9.6.2 to 9.6.3. Major PostgreSQL releases regularly add new features that often change the layout of the system tables, but the internal data storage - format rarely changes. pg_upgrade uses this fact + format rarely changes. pg_upgrade uses this fact to perform rapid upgrades by creating new system tables and simply reusing the old user data files. If a future major release ever changes the data storage format in a way that makes the old data - format unreadable, pg_upgrade will not be usable + format unreadable, pg_upgrade will not be usable for such upgrades. (The community will attempt to avoid such situations.) - pg_upgrade does its best to + pg_upgrade does its best to make sure the old and new clusters are binary-compatible, e.g. by checking for compatible compile-time settings, including 32/64-bit binaries. It is important that any external modules are also binary compatible, though this cannot - be checked by pg_upgrade. + be checked by pg_upgrade. pg_upgrade supports upgrades from 8.4.X and later to the current - major release of PostgreSQL, including snapshot and beta releases. + major release of PostgreSQL, including snapshot and beta releases. @@ -79,17 +79,17 @@ - -b bindir - --old-bindir=bindir + -b bindir + --old-bindir=bindir the old PostgreSQL executable directory; - environment variable PGBINOLD + environment variable PGBINOLD - -B bindir - --new-bindir=bindir + -B bindir + --new-bindir=bindir the new PostgreSQL executable directory; - environment variable PGBINNEW + environment variable PGBINNEW
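A typical invocation combining these executable-directory switches with the data-directory switches described just below (paths hypothetical, following the pgsql.old renaming convention used later on this page):

pg_upgrade -b /usr/local/pgsql.old/bin -B /usr/local/pgsql/bin \
           -d /usr/local/pgsql.old/data -D /usr/local/pgsql/data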
@@ -99,17 +99,17 @@ - datadir - datadir + datadir + datadir the old cluster data directory; environment - variable PGDATAOLD + variable PGDATAOLD - datadir - datadir + datadir + datadir the new cluster data directory; environment - variable PGDATANEW + variable PGDATANEW @@ -143,17 +143,17 @@ - port - port + port + port the old cluster port number; environment - variable PGPORTOLD + variable PGPORTOLD - port - port + port + port the new cluster port number; environment - variable PGPORTNEW + variable PGPORTNEW @@ -164,10 +164,10 @@ - username - username + username + username cluster's install user name; environment - variable PGUSER + variable PGUSER @@ -207,17 +207,17 @@ If you are using a version-specific installation directory, e.g. - /opt/PostgreSQL/&majorversion;, you do not need to move the old cluster. The + /opt/PostgreSQL/&majorversion;, you do not need to move the old cluster. The graphical installers all use version-specific installation directories. If your installation directory is not version-specific, e.g. - /usr/local/pgsql, it is necessary to move the current PostgreSQL install - directory so it does not interfere with the new PostgreSQL installation. - Once the current PostgreSQL server is shut down, it is safe to rename the + /usr/local/pgsql, it is necessary to move the current PostgreSQL install + directory so it does not interfere with the new PostgreSQL installation. + Once the current PostgreSQL server is shut down, it is safe to rename the PostgreSQL installation directory; assuming the old directory is - /usr/local/pgsql, you can do: + /usr/local/pgsql, you can do: mv /usr/local/pgsql /usr/local/pgsql.old @@ -230,8 +230,8 @@ mv /usr/local/pgsql /usr/local/pgsql.old For source installs, build the new version - Build the new PostgreSQL source with configure flags that are compatible - with the old cluster. pg_upgrade will check pg_controldata to make + Build the new PostgreSQL source with configure flags that are compatible + with the old cluster. pg_upgrade will check pg_controldata to make sure all settings are compatible before starting the upgrade. @@ -241,7 +241,7 @@ mv /usr/local/pgsql /usr/local/pgsql.old Install the new server's binaries and support - files. pg_upgrade is included in a default installation. + files. pg_upgrade is included in a default installation. @@ -273,7 +273,7 @@ make prefix=/usr/local/pgsql.new install into the new cluster, e.g. pgcrypto.so, whether they are from contrib or some other source. Do not install the schema definitions, e.g. - CREATE EXTENSION pgcrypto, because these will be upgraded + CREATE EXTENSION pgcrypto, because these will be upgraded from the old cluster. Also, any custom full text search files (dictionary, synonym, thesaurus, stop words) must also be copied to the new cluster. @@ -284,9 +284,9 @@ make prefix=/usr/local/pgsql.new install Adjust authentication - pg_upgrade will connect to the old and new servers several - times, so you might want to set authentication to peer - in pg_hba.conf or use a ~/.pgpass file + pg_upgrade will connect to the old and new servers several + times, so you might want to set authentication to peer + in pg_hba.conf or use a ~/.pgpass file (see ). @@ -322,23 +322,23 @@ NET STOP postgresql-&majorversion; If you are upgrading standby servers using methods outlined in section , verify that the old standby - servers are caught up by running pg_controldata + servers are caught up by running pg_controldata against the old primary and standby clusters. 
Verify that the - Latest checkpoint location values match in all clusters. + Latest checkpoint location values match in all clusters. (There will be a mismatch if old standby servers were shut down - before the old primary.) Also, change wal_level to - replica in the postgresql.conf file on the + before the old primary.) Also, change wal_level to + replica in the postgresql.conf file on the new primary cluster. - Run <application>pg_upgrade</> + Run <application>pg_upgrade</application> - Always run the pg_upgrade binary of the new server, not the old one. - pg_upgrade requires the specification of the old and new cluster's - data and executable (bin) directories. You can also specify + Always run the pg_upgrade binary of the new server, not the old one. + pg_upgrade requires the specification of the old and new cluster's + data and executable (bin) directories. You can also specify user and port values, and whether you want the data files linked instead of the default copy behavior. @@ -349,13 +349,13 @@ NET STOP postgresql-&majorversion; your old cluster once you start the new cluster after the upgrade. Link mode also requires that the old and new cluster data directories be in the - same file system. (Tablespaces and pg_wal can be on - different file systems.) See pg_upgrade --help for a full + same file system. (Tablespaces and pg_wal can be on + different file systems.) See pg_upgrade --help for a full list of options. - The option allows multiple CPU cores to be used for copying/linking of files and to dump and reload database schemas in parallel; a good place to start is the maximum of the number of CPU cores and tablespaces. This option can dramatically reduce the @@ -365,14 +365,14 @@ NET STOP postgresql-&majorversion; For Windows users, you must be logged into an administrative account, and - then start a shell as the postgres user and set the proper path: + then start a shell as the postgres user and set the proper path: RUNAS /USER:postgres "CMD.EXE" SET PATH=%PATH%;C:\Program Files\PostgreSQL\&majorversion;\bin; - and then run pg_upgrade with quoted directories, e.g.: + and then run pg_upgrade with quoted directories, e.g.: pg_upgrade.exe @@ -382,19 +382,19 @@ pg_upgrade.exe --new-bindir "C:/Program Files/PostgreSQL/&majorversion;/bin" - Once started, pg_upgrade will verify the two clusters are compatible - and then do the upgrade. You can use pg_upgrade --check + Once started, pg_upgrade will verify the two clusters are compatible + and then do the upgrade. You can use pg_upgrade --check to perform only the checks, even if the old server is still - running. pg_upgrade --check will also outline any + running. pg_upgrade --check will also outline any manual adjustments you will need to make after the upgrade. If you - are going to be using link mode, you should use the option with to enable link-mode-specific checks. - pg_upgrade requires write permission in the current directory. + pg_upgrade requires write permission in the current directory. Obviously, no one should be accessing the clusters during the - upgrade. pg_upgrade defaults to running servers + upgrade. pg_upgrade defaults to running servers on port 50432 to avoid unintended client connections. 
You can use the same port number for both clusters when doing an upgrade because the old and new clusters will not be running at the @@ -403,7 +403,7 @@ pg_upgrade.exe - If an error occurs while restoring the database schema, pg_upgrade will + If an error occurs while restoring the database schema, pg_upgrade will exit and you will have to revert to the old cluster as outlined in below. To try pg_upgrade again, you will need to modify the old cluster so the pg_upgrade schema restore succeeds. If the problem is a @@ -420,16 +420,16 @@ pg_upgrade.exe If you used link mode and have Streaming Replication (see ) or Log-Shipping (see ) standby servers, you can follow these steps to - quickly upgrade them. You will not be running pg_upgrade on - the standby servers, but rather rsync on the primary. + quickly upgrade them. You will not be running pg_upgrade on + the standby servers, but rather rsync on the primary. Do not start any servers yet. - If you did not use link mode, do not have or do not - want to use rsync, or want an easier solution, skip + If you did not use link mode, do not have or do not + want to use rsync, or want an easier solution, skip the instructions in this section and simply recreate the standby - servers once pg_upgrade completes and the new primary + servers once pg_upgrade completes and the new primary is running. @@ -445,11 +445,11 @@ pg_upgrade.exe - Make sure the new standby data directories do <emphasis>not</> exist + Make sure the new standby data directories do <emphasis>not</emphasis> exist - Make sure the new standby data directories do not - exist or are empty. If initdb was run, delete + Make sure the new standby data directories do not + exist or are empty. If initdb was run, delete the standby servers' new data directories. @@ -477,32 +477,32 @@ pg_upgrade.exe Save any configuration files from the old standbys' data - directories you need to keep, e.g. postgresql.conf, - recovery.conf, because these will be overwritten or + directories you need to keep, e.g. postgresql.conf, + recovery.conf, because these will be overwritten or removed in the next step. - Run <application>rsync</> + Run <application>rsync</application> When using link mode, standby servers can be quickly upgraded using - rsync. To accomplish this, from a directory on + rsync. To accomplish this, from a directory on the primary server that is above the old and new database cluster - directories, run this on the primary for each standby + directories, run this on the primary for each standby server: rsync --archive --delete --hard-links --size-only --no-inc-recursive old_pgdata new_pgdata remote_dir - where What this does is to record the links created by - pg_upgrade's link mode that connect files in the + pg_upgrade's link mode that connect files in the old and new clusters on the primary server. It then finds matching files in the standby's old cluster and creates links for them in the standby's new cluster. Files that were not linked on the primary are copied from the primary to the standby. (They are usually small.) This provides rapid standby upgrades. Unfortunately, - rsync needlessly copies files associated with + rsync needlessly copies files associated with temporary and unlogged tables because these files don't normally exist on standby servers. 
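For instance, with the old cluster in /usr/local/pgsql.old and the new one in /usr/local/pgsql (hypothetical paths; note that the two directory names must differ so the trees land side by side on the standby), the command run on the primary could be:

rsync --archive --delete --hard-links --size-only --no-inc-recursive \
    /usr/local/pgsql.old /usr/local/pgsql standby.example.com:/usr/local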
If you have tablespaces, you will need to run a similar - rsync command for each tablespace directory, e.g.: + rsync command for each tablespace directory, e.g.: rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tblsp/PG_9.5_201510051 \ /vol1/pg_tblsp/PG_9.6_201608131 standby.example.com:/vol1/pg_tblsp - If you have relocated pg_wal outside the data - directories, rsync must be run on those directories + If you have relocated pg_wal outside the data + directories, rsync must be run on those directories too. @@ -551,7 +551,7 @@ rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tb Configure the servers for log shipping. (You do not need to run - pg_start_backup() and pg_stop_backup() + pg_start_backup() and pg_stop_backup() or take a file system backup as the standbys are still synchronized with the primary.) @@ -562,12 +562,12 @@ rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tb - Restore <filename>pg_hba.conf</> + Restore <filename>pg_hba.conf</filename> - If you modified pg_hba.conf, restore its original settings. + If you modified pg_hba.conf, restore its original settings. It might also be necessary to adjust other configuration files in the new - cluster to match the old cluster, e.g. postgresql.conf. + cluster to match the old cluster, e.g. postgresql.conf. @@ -576,7 +576,7 @@ rsync --archive --delete --hard-links --size-only --no-inc-recursive /vol1/pg_tb The new server can now be safely started, and then any - rsync'ed standby servers. + rsync'ed standby servers. @@ -612,7 +612,7 @@ psql --username=postgres --file=script.sql postgres Statistics - Because optimizer statistics are not transferred by pg_upgrade, you will + Because optimizer statistics are not transferred by pg_upgrade, you will be instructed to run a command to regenerate that information at the end of the upgrade. You might need to set connection parameters to match your new cluster. @@ -628,7 +628,7 @@ psql --username=postgres --file=script.sql postgres pg_upgrade completes. (Automatic deletion is not possible if you have user-defined tablespaces inside the old data directory.) You can also delete the old installation directories - (e.g. bin, share). + (e.g. bin, share). @@ -643,7 +643,7 @@ psql --username=postgres --file=script.sql postgres If you ran pg_upgrade - with , no modifications were made to the old cluster and you can re-use it anytime. @@ -651,7 +651,7 @@ psql --username=postgres --file=script.sql postgres If you ran pg_upgrade - with , the data files are shared between the old and new cluster. If you started the new cluster, the new server has written to those shared files and it is unsafe to use the old cluster. @@ -660,13 +660,13 @@ psql --username=postgres --file=script.sql postgres - If you ran pg_upgrade without - or did not start the new server, the old cluster was not modified except that, if linking - started, a .old suffix was appended to - $PGDATA/global/pg_control. To reuse the old - cluster, possibly remove the .old suffix from - $PGDATA/global/pg_control; you can then restart the + started, a .old suffix was appended to + $PGDATA/global/pg_control. To reuse the old + cluster, possibly remove the .old suffix from + $PGDATA/global/pg_control; you can then restart the old cluster. 
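For example, a minimal sketch, assuming PGDATA points at the old cluster's data directory:

mv "$PGDATA/global/pg_control.old" "$PGDATA/global/pg_control"
pg_ctl -D "$PGDATA" start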
@@ -681,16 +681,16 @@ psql --username=postgres --file=script.sql postgres Notes - pg_upgrade does not support upgrading of databases - containing these reg* OID-referencing system data types: - regproc, regprocedure, regoper, - regoperator, regconfig, and - regdictionary. (regtype can be upgraded.) + pg_upgrade does not support upgrading of databases + containing these reg* OID-referencing system data types: + regproc, regprocedure, regoper, + regoperator, regconfig, and + regdictionary. (regtype can be upgraded.) All failure, rebuild, and reindex cases will be reported by - pg_upgrade if they affect your installation; + pg_upgrade if they affect your installation; post-upgrade scripts to rebuild tables and indexes will be generated automatically. If you are trying to automate the upgrade of many clusters, you should find that clusters with identical database @@ -705,17 +705,17 @@ psql --username=postgres --file=script.sql postgres - If you are upgrading a pre-PostgreSQL 9.2 cluster + If you are upgrading a pre-PostgreSQL 9.2 cluster that uses a configuration-file-only directory, you must pass the - real data directory location to pg_upgrade, and + real data directory location to pg_upgrade, and pass the configuration directory location to the server, e.g. - -d /real-data-directory -o '-D /configuration-directory'. + -d /real-data-directory -o '-D /configuration-directory'. If using a pre-9.1 old server that is using a non-default Unix-domain socket directory or a default that differs from the default of the - new cluster, set PGHOST to point to the old server's socket + new cluster, set PGHOST to point to the old server's socket location. (This is not relevant on Windows.) @@ -723,13 +723,13 @@ psql --username=postgres --file=script.sql postgres If you want to use link mode and you do not want your old cluster to be modified when the new cluster is started, make a copy of the old cluster and upgrade that in link mode. To make a valid copy - of the old cluster, use rsync to create a dirty + of the old cluster, use rsync to create a dirty copy of the old cluster while the server is running, then shut down - the old server and run rsync --checksum again to update the - copy with any changes to make it consistent. ( @@ -122,7 +122,7 @@ PostgreSQL documentation supported by PostgreSQL are described in . Most of the other command line options are in fact short forms of such a - parameter assignment. can appear multiple times to set multiple parameters. @@ -133,9 +133,9 @@ PostgreSQL documentation Prints the value of the named run-time parameter, and exits. - (See the option above for details.) This can be used on a running server, and returns values from - postgresql.conf, modified by any parameters + postgresql.conf, modified by any parameters supplied in this invocation. It does not reflect parameters supplied when the cluster was started. @@ -157,7 +157,7 @@ PostgreSQL documentation debugging output is written to the server log. Values are from 1 to 5. It is also possible to pass -d 0 for a specific session, which will prevent the - server log level of the parent postgres process from being + server log level of the parent postgres process from being propagated to this session. @@ -179,7 +179,7 @@ PostgreSQL documentation Sets the default date style to European, that is - DMY ordering of input date fields. This also causes + DMY ordering of input date fields. This also causes the day to be printed before the month in certain date output formats. See for more information. 
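As an illustration of the -C option described above (the data directory path is hypothetical), the following prints the configured value of shared_buffers and might produce output like the line shown:

postgres -C shared_buffers -D /usr/local/pgsql/data
128MB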
@@ -206,7 +206,7 @@ PostgreSQL documentation Specifies the IP host name or address on which postgres is to listen for TCP/IP connections from client applications. The value can also be a - comma-separated list of addresses, or * to specify + comma-separated list of addresses, or * to specify listening on all available interfaces. An empty value specifies not listening on any IP addresses, in which case only Unix-domain sockets can be used to connect to the @@ -225,13 +225,13 @@ PostgreSQL documentation Allows remote clients to connect via TCP/IP (Internet domain) connections. Without this option, only local connections are accepted. This option is equivalent to setting - listen_addresses to * in - postgresql.conf or via . This option is deprecated since it does not allow access to the full functionality of . - It's usually better to set listen_addresses directly. + It's usually better to set listen_addresses directly. @@ -291,11 +291,11 @@ PostgreSQL documentation - Spaces within extra-options are + Spaces within extra-options are considered to separate arguments, unless escaped with a backslash - (\); write \\ to represent a literal + (\); write \\ to represent a literal backslash. Multiple arguments can also be specified via multiple - uses of . @@ -340,15 +340,15 @@ PostgreSQL documentation Specifies the amount of memory to be used by internal sorts and hashes before resorting to temporary disk files. See the description of the - work_mem configuration parameter in work_mem configuration parameter in . - - + + Print the postgres version and exit. @@ -361,7 +361,7 @@ PostgreSQL documentation Sets a named run-time parameter; a shorter form of - . @@ -371,15 +371,15 @@ PostgreSQL documentation This option dumps out the server's internal configuration variables, - descriptions, and defaults in tab-delimited COPY format. + descriptions, and defaults in tab-delimited COPY format. It is designed primarily for use by administration tools. - - + + Show help about postgres command line @@ -643,13 +643,13 @@ PostgreSQL documentation Diagnostics - A failure message mentioning semget or - shmget probably indicates you need to configure your + A failure message mentioning semget or + shmget probably indicates you need to configure your kernel to provide adequate shared memory and semaphores. For more discussion see . You might be able to postpone reconfiguring your kernel by decreasing to reduce the shared memory - consumption of PostgreSQL, and/or by reducing + consumption of PostgreSQL, and/or by reducing to reduce the semaphore consumption. @@ -725,7 +725,7 @@ PostgreSQL documentation To cancel a running query, send the SIGINT signal to the process running that command. To terminate a backend process cleanly, send SIGTERM to that process. See - also pg_cancel_backend and pg_terminate_backend + also pg_cancel_backend and pg_terminate_backend in for the SQL-callable equivalents of these two actions. @@ -745,9 +745,9 @@ PostgreSQL documentation Bugs - The @@ -759,17 +759,17 @@ PostgreSQL documentation To start a single-user mode server, use a command like -postgres --single -D /usr/local/pgsql/data other-options my_database +postgres --single -D /usr/local/pgsql/data other-options my_database - Provide the correct path to the database directory with Normally, the single-user mode server treats newline as the command entry terminator; there is no intelligence about semicolons, - as there is in psql. To continue a command + as there is in psql. 
To continue a command across multiple lines, you must type backslash just before each newline except the last one. The backslash and adjacent newline are both dropped from the input command. Note that this will happen even @@ -777,7 +777,7 @@ PostgreSQL documentation - But if you use the command line switch, a single newline does not terminate command entry; instead, the sequence semicolon-newline-newline does. That is, type a semicolon immediately followed by a completely empty line. Backslash-newline is not @@ -794,10 +794,10 @@ PostgreSQL documentation To quit the session, type EOF - (ControlD, usually). + (ControlD, usually). If you've entered any text since the last command entry terminator, then EOF will be taken as a command entry terminator, - and another EOF will be needed to exit. + and another EOF will be needed to exit. @@ -826,7 +826,7 @@ PostgreSQL documentation $ postgres -p 1234 - To connect to this server using psql, specify this port with the -p option: + To connect to this server using psql, specify this port with the -p option: $ psql -p 1234 @@ -844,11 +844,11 @@ PostgreSQL documentation $ postgres --work-mem=1234 Either form overrides whatever setting might exist for - work_mem in postgresql.conf. Notice that + work_mem in postgresql.conf. Notice that underscores in parameter names can be written as either underscore or dash on the command line. Except for short-term experiments, it's probably better practice to edit the setting in - postgresql.conf than to rely on a command-line switch + postgresql.conf than to rely on a command-line switch to set a parameter. diff --git a/doc/src/sgml/ref/postmaster.sgml b/doc/src/sgml/ref/postmaster.sgml index 0a58a63331..ec11ec65f5 100644 --- a/doc/src/sgml/ref/postmaster.sgml +++ b/doc/src/sgml/ref/postmaster.sgml @@ -22,7 +22,7 @@ PostgreSQL documentation postmaster - option + option diff --git a/doc/src/sgml/ref/prepare.sgml b/doc/src/sgml/ref/prepare.sgml index f4e1d54349..bcf188f4b9 100644 --- a/doc/src/sgml/ref/prepare.sgml +++ b/doc/src/sgml/ref/prepare.sgml @@ -48,7 +48,7 @@ PREPARE name [ ( $1, $2, etc. A corresponding list of + $1, $2, etc. A corresponding list of parameter data types can optionally be specified. When a parameter's data type is not specified or is declared as unknown, the type is inferred from the context @@ -115,8 +115,8 @@ PREPARE name [ ( statement - Any SELECT, INSERT, UPDATE, - DELETE, or VALUES statement. + Any SELECT, INSERT, UPDATE, + DELETE, or VALUES statement. @@ -155,9 +155,9 @@ PREPARE name [ ( To examine the query plan PostgreSQL is using for a prepared statement, use , e.g. - EXPLAIN EXECUTE. + EXPLAIN EXECUTE. If a generic plan is in use, it will contain parameter symbols - $n, while a custom plan will have the + $n, while a custom plan will have the supplied parameter values substituted into it. The row estimates in the generic plan reflect the selectivity computed for the parameters. @@ -172,13 +172,13 @@ PREPARE name [ ( Although the main point of a prepared statement is to avoid repeated parse - analysis and planning of the statement, PostgreSQL will + analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes since the previous use of the prepared statement. Also, if the value of changes from one use to the next, the statement will be re-parsed using the new - search_path. (This latter behavior is new as of + search_path. 
(This latter behavior is new as of PostgreSQL 9.3.) These rules make use of a prepared statement semantically almost equivalent to re-submitting the same query text over and over, but with a performance benefit if no object @@ -186,7 +186,7 @@ PREPARE name [ ( search_path, no automatic re-parse will occur + earlier in the search_path, no automatic re-parse will occur since no object used in the statement changed. However, if some other change forces a re-parse, the new table will be referenced in subsequent uses. @@ -222,7 +222,7 @@ EXECUTE usrrptplan(1, current_date); Note that the data type of the second parameter is not specified, - so it is inferred from the context in which $2 is used. + so it is inferred from the context in which $2 is used. diff --git a/doc/src/sgml/ref/prepare_transaction.sgml b/doc/src/sgml/ref/prepare_transaction.sgml index 9a2e38e98c..4f78e6b131 100644 --- a/doc/src/sgml/ref/prepare_transaction.sgml +++ b/doc/src/sgml/ref/prepare_transaction.sgml @@ -47,7 +47,7 @@ PREPARE TRANSACTION transaction_id From the point of view of the issuing session, PREPARE - TRANSACTION is not unlike a ROLLBACK command: + TRANSACTION is not unlike a ROLLBACK command: after executing it, there is no active current transaction, and the effects of the prepared transaction are no longer visible. (The effects will become visible again if the transaction is committed.) @@ -55,7 +55,7 @@ PREPARE TRANSACTION transaction_id If the PREPARE TRANSACTION command fails for any - reason, it becomes a ROLLBACK: the current transaction + reason, it becomes a ROLLBACK: the current transaction is canceled. @@ -69,7 +69,7 @@ PREPARE TRANSACTION transaction_id An arbitrary identifier that later identifies this transaction for - COMMIT PREPARED or ROLLBACK PREPARED. + COMMIT PREPARED or ROLLBACK PREPARED. The identifier must be written as a string literal, and must be less than 200 bytes long. It must not be the same as the identifier used for any currently prepared transaction. @@ -83,12 +83,12 @@ PREPARE TRANSACTION transaction_id Notes - PREPARE TRANSACTION is not intended for use in applications + PREPARE TRANSACTION is not intended for use in applications or interactive sessions. Its purpose is to allow an external transaction manager to perform atomic global transactions across multiple databases or other transactional resources. Unless you're writing a transaction manager, you probably shouldn't be using PREPARE - TRANSACTION. + TRANSACTION. @@ -97,22 +97,22 @@ PREPARE TRANSACTION transaction_id - It is not currently allowed to PREPARE a transaction that + It is not currently allowed to PREPARE a transaction that has executed any operations involving temporary tables, - created any cursors WITH HOLD, or executed - LISTEN or UNLISTEN. + created any cursors WITH HOLD, or executed + LISTEN or UNLISTEN. Those features are too tightly tied to the current session to be useful in a transaction to be prepared. - If the transaction modified any run-time parameters with SET - (without the LOCAL option), - those effects persist after PREPARE TRANSACTION, and will not + If the transaction modified any run-time parameters with SET + (without the LOCAL option), + those effects persist after PREPARE TRANSACTION, and will not be affected by any later COMMIT PREPARED or ROLLBACK PREPARED. Thus, in this one respect - PREPARE TRANSACTION acts more like COMMIT than - ROLLBACK. + PREPARE TRANSACTION acts more like COMMIT than + ROLLBACK. 
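As a sketch of the intended two-phase flow (the accounts table and the transaction identifier are hypothetical, and the server must have max_prepared_transactions set to a nonzero value):

psql -d mydb <<'SQL'
BEGIN;
UPDATE accounts SET balance = balance - 100.00 WHERE name = 'alice';
PREPARE TRANSACTION 'xfer-0001';
SQL

# Later, from any session, an external transaction manager would issue:
psql -d mydb -c "COMMIT PREPARED 'xfer-0001';"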
@@ -124,7 +124,7 @@ PREPARE TRANSACTION transaction_id It is unwise to leave transactions in the prepared state for a long time. - This will interfere with the ability of VACUUM to reclaim + This will interfere with the ability of VACUUM to reclaim storage, and in extreme cases could cause the database to shut down to prevent transaction ID wraparound (see ). Keep in mind also that the transaction @@ -149,7 +149,7 @@ PREPARE TRANSACTION transaction_id Examples Prepare the current transaction for two-phase commit, using - foobar as the transaction identifier: + foobar as the transaction identifier: PREPARE TRANSACTION 'foobar'; diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index e7a3e17c67..8cbe0569cf 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -50,8 +50,8 @@ PostgreSQL documentation - - + + Print all nonempty input lines to standard output as they are read. @@ -63,8 +63,8 @@ PostgreSQL documentation - - + + Switches to unaligned output mode. (The default output mode is @@ -75,8 +75,8 @@ PostgreSQL documentation - - + + Print failed SQL commands to standard error output. This is @@ -87,8 +87,8 @@ PostgreSQL documentation - - + + Specifies that psql is to execute the given @@ -116,14 +116,14 @@ psql -c '\x' -c 'SELECT * FROM foo;' echo '\x \\ SELECT * FROM foo;' | psql - (\\ is the separator meta-command.) + (\\ is the separator meta-command.) Each SQL command string passed to is sent to the server as a single request. Because of this, the server executes it as a single transaction even if the string contains multiple SQL commands, - unless there are explicit BEGIN/COMMIT + unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. (See for more details about how the server handles multi-query strings.) @@ -152,8 +152,8 @@ EOF - - + + Specifies the name of the database to connect to. This is @@ -173,8 +173,8 @@ EOF - - + + Copy all SQL commands sent to the server to standard output as well. @@ -186,21 +186,21 @@ EOF - - + + Echo the actual queries generated by \d and other backslash commands. You can use this to study psql's internal operations. This is equivalent to - setting the variable ECHO_HIDDEN to on. + setting the variable ECHO_HIDDEN to on. - - + + Read commands from the @@ -219,7 +219,7 @@ EOF If filename is - (hyphen), then standard input is read until an EOF indication - or \q meta-command. This can be used to intersperse + or \q meta-command. This can be used to intersperse interactive input with input from files. Note however that Readline is not used in this case (much as if had been specified). @@ -241,8 +241,8 @@ EOF - - + + Use separator as the @@ -253,8 +253,8 @@ EOF - - + + Specifies the host name of the machine on which the @@ -266,8 +266,8 @@ EOF - - + + Turn on HTML tabular output. This is @@ -278,8 +278,8 @@ EOF - - + + List all available databases, then exit. Other non-connection @@ -290,8 +290,8 @@ EOF - - + + Write all query output into file - - + + Do not use Readline for line editing and do @@ -314,8 +314,8 @@ EOF - - + + Put all query output into file - - + + Specifies the TCP port or the local Unix-domain @@ -340,8 +340,8 @@ EOF - - + + Specifies printing options, in the style of @@ -354,8 +354,8 @@ EOF - - + + Specifies that psql should do its work @@ -363,14 +363,14 @@ EOF informational output. If this option is used, none of this happens. This is useful with the option. This is equivalent to setting the variable QUIET - to on. + to on. 
- - + + Use separator as the @@ -381,8 +381,8 @@ EOF - - + + Run in single-step mode. That means the user is prompted before @@ -393,8 +393,8 @@ EOF - - + + Runs in single-line mode where a newline terminates an SQL command, as a @@ -413,8 +413,8 @@ EOF - - + + Turn off printing of column names and result row count footers, @@ -425,8 +425,8 @@ EOF - - + + Specifies options to be placed within the @@ -437,8 +437,8 @@ EOF - - + + Connect to the database as the user - - - + + + Perform a variable assignment, like the \set @@ -466,8 +466,8 @@ EOF - - + + Print the psql version and exit. @@ -476,8 +476,8 @@ EOF - - + + Never issue a password prompt. If the server requires password @@ -496,8 +496,8 @@ EOF - - + + Force psql to prompt for a @@ -509,7 +509,7 @@ EOF will automatically prompt for a password if the server demands password authentication. However, psql will waste a connection attempt finding out that the server wants a - password. In some cases it is worth typing to avoid the extra connection attempt. @@ -522,8 +522,8 @@ EOF - - + + Turn on the expanded table formatting mode. This is equivalent to @@ -533,8 +533,8 @@ EOF - - + + Do not read the start-up file (neither the system-wide @@ -574,8 +574,8 @@ EOF This option can only be used in combination with one or more and/or options. It causes - psql to issue a BEGIN command - before the first such option and a COMMIT command after + psql to issue a BEGIN command + before the first such option and a COMMIT command after the last one, thereby wrapping all the commands into a single transaction. This ensures that either all the commands complete successfully, or no changes are applied. @@ -583,8 +583,8 @@ EOF If the commands themselves - contain BEGIN, COMMIT, - or ROLLBACK, this option will not have the desired + contain BEGIN, COMMIT, + or ROLLBACK, this option will not have the desired effects. Also, if an individual command cannot be executed inside a transaction block, specifying this option will cause the whole transaction to fail. @@ -593,17 +593,17 @@ EOF - - + + Show help about psql and exit. The optional - topic parameter (defaulting + topic parameter (defaulting to options) selects which part of psql is - explained: commands describes psql's - backslash commands; options describes the command-line - options that can be passed to psql; - and variables shows help about psql configuration + explained: commands describes psql's + backslash commands; options describes the command-line + options that can be passed to psql; + and variables shows help about psql configuration variables. @@ -644,8 +644,8 @@ EOF not belong to any option it will be interpreted as the database name (or the user name, if the database name is already given). Not all of these options are required; there are useful defaults. If you omit the host - name, psql will connect via a Unix-domain socket - to a server on the local host, or via TCP/IP to localhost on + name, psql will connect via a Unix-domain socket + to a server on the local host, or via TCP/IP to localhost on machines that don't have Unix-domain sockets. The default port number is determined at compile time. Since the database server uses the same default, you will not have @@ -663,7 +663,7 @@ EOF PGPORT and/or PGUSER to appropriate values. (For additional environment variables, see .) It is also convenient to have a - ~/.pgpass file to avoid regularly having to type in + ~/.pgpass file to avoid regularly having to type in passwords. See for more information. 
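For example (host, port, user, and database names are hypothetical):

export PGHOST=db.example.com PGPORT=5433 PGUSER=alice
psql sales

# ~/.pgpass holds one connection per line in the format
# hostname:port:database:username:password and must not be
# readable by group or others:
echo 'db.example.com:5433:sales:alice:s3cret' >> ~/.pgpass
chmod 600 ~/.pgpass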
@@ -777,13 +777,13 @@ testdb=> If an unquoted colon (:) followed by a - psql variable name appears within an argument, it is + psql variable name appears within an argument, it is replaced by the variable's value, as described in . - The forms :'variable_name' and - :"variable_name" described there + The forms :'variable_name' and + :"variable_name" described there work as well. - The :{?variable_name} syntax allows + The :{?variable_name} syntax allows testing whether a variable is defined. It is substituted by TRUE or FALSE. Escaping the colon with a backslash protects it from substitution. @@ -795,15 +795,15 @@ testdb=> shell. The output of the command (with any trailing newline removed) replaces the backquoted text. Within the text enclosed in backquotes, no special quoting or other processing occurs, except that appearances - of :variable_name where - variable_name is a psql variable name + of :variable_name where + variable_name is a psql variable name are replaced by the variable's value. Also, appearances of - :'variable_name' are replaced by the + :'variable_name' are replaced by the variable's value suitably quoted to become a single shell command argument. (The latter form is almost always preferable, unless you are very sure of what is in the variable.) Because carriage return and line feed characters cannot be safely quoted on all platforms, the - :'variable_name' form prints an + :'variable_name' form prints an error message and does not substitute the variable value when such characters appear in the value. @@ -812,13 +812,13 @@ testdb=> Some commands take an SQL identifier (such as a table name) as argument. These arguments follow the syntax rules of SQL: Unquoted letters are forced to - lowercase, while double quotes (") protect letters + lowercase, while double quotes (") protect letters from case conversion and allow incorporation of whitespace into the identifier. Within double quotes, paired double quotes reduce to a single double quote in the resulting name. For example, - FOO"BAR"BAZ is interpreted as fooBARbaz, - and "A weird"" name" becomes A weird" - name. + FOO"BAR"BAZ is interpreted as fooBARbaz, + and "A weird"" name" becomes A weird" + name. @@ -834,7 +834,7 @@ testdb=> - Many of the meta-commands act on the current query buffer. + Many of the meta-commands act on the current query buffer. This is simply a buffer holding whatever SQL command text has been typed but not yet sent to the server for execution. This will include previous input lines as well as any text appearing before the meta-command on the @@ -861,9 +861,9 @@ testdb=> \c or \connect [ -reuse-previous=on|off ] [ dbname [ username ] [ host ] [ port ] | conninfo ] - Establishes a new connection to a PostgreSQL + Establishes a new connection to a PostgreSQL server. The connection parameters to use can be specified either - using a positional syntax, or using conninfo connection + using a positional syntax, or using conninfo connection strings as detailed in . @@ -871,8 +871,8 @@ testdb=> Where the command omits database name, user, host, or port, the new connection can reuse values from the previous connection. By default, values from the previous connection are reused except when processing - a conninfo string. Passing a first argument - of -reuse-previous=on + a conninfo string. Passing a first argument + of -reuse-previous=on or -reuse-previous=off overrides that default. When the command neither specifies nor reuses a particular parameter, the libpq default is used. 
Specifying any @@ -969,7 +969,7 @@ testdb=> - When program is specified, + When program is specified, command is executed by psql and the data passed from or to command is @@ -980,17 +980,17 @@ testdb=> - For \copy ... from stdin, data rows are read from the same + For \copy ... from stdin, data rows are read from the same source that issued the command, continuing until \. - is read or the stream reaches EOF. This option is useful + is read or the stream reaches EOF. This option is useful for populating tables in-line within a SQL script file. - For \copy ... to stdout, output is sent to the same place - as psql command output, and - the COPY count command status is + For \copy ... to stdout, output is sent to the same place + as psql command output, and + the COPY count command status is not printed (since it might be confused with a data row). To read/write psql's standard input or - output regardless of the current command source or \o - option, write from pstdin or to pstdout. + output regardless of the current command source or \o + option, write from pstdin or to pstdout. @@ -998,9 +998,9 @@ testdb=> SQL command. All options other than the data source/destination are as specified for . - Because of this, special parsing rules apply to the \copy + Because of this, special parsing rules apply to the \copy meta-command. Unlike most other meta-commands, the entire remainder - of the line is always taken to be the arguments of \copy, + of the line is always taken to be the arguments of \copy, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -1040,7 +1040,7 @@ testdb=> Executes the current query buffer (like \g) and shows the results in a crosstab grid. The query must return at least three columns. - The output column identified by colV + The output column identified by colV becomes a vertical header and the output column identified by colH becomes a horizontal header. @@ -1068,7 +1068,7 @@ testdb=> The vertical header, displayed as the leftmost column, contains the - values found in column colV, in the + values found in column colV, in the same order as in the query results, but with duplicates removed. @@ -1077,11 +1077,11 @@ testdb=> found in column colH, with duplicates removed. By default, these appear in the same order as in the query results. But if the - optional sortcolH argument is given, + optional sortcolH argument is given, it identifies a column whose values must be integer numbers, and the values from colH will appear in the horizontal header sorted according to the - corresponding sortcolH values. + corresponding sortcolH values. @@ -1094,7 +1094,7 @@ testdb=> the value of colH is x and the value of colV - is y. If there is no such row, the cell is empty. If + is y. If there is no such row, the cell is empty. If there are multiple such rows, an error is reported. @@ -1115,13 +1115,13 @@ testdb=> Associated indexes, constraints, rules, and triggers are also shown. For foreign tables, the associated foreign server is shown as well. - (Matching the pattern is defined in + (Matching the pattern is defined in below.) - For some types of relation, \d shows additional information + For some types of relation, \d shows additional information for each column: column values for sequences, indexed expressions for indexes, and foreign data wrapper options for foreign tables. @@ -1237,9 +1237,9 @@ testdb=> \dd[S] [ pattern ] - Shows the descriptions of objects of type constraint, - operator class, operator family, - rule, and trigger. 
All + Shows the descriptions of objects of type constraint, + operator class, operator family, + rule, and trigger. All other comments may be viewed by the respective backslash commands for those object types. @@ -1318,7 +1318,7 @@ testdb=> respectively. You can specify any or all of these letters, in any order, to obtain a listing of objects - of these types. For example, \dit lists indexes + of these types. For example, \dit lists indexes and tables. If + is appended to the command name, each object is listed with its physical size on disk and its associated description, if any. @@ -1408,11 +1408,11 @@ testdb=> Lists functions, together with their result data types, argument data - types, and function types, which are classified as agg - (aggregate), normal, trigger, or window. + types, and function types, which are classified as agg + (aggregate), normal, trigger, or window. To display only functions - of specific type(s), add the corresponding letters a, - n, t, or w to the command. + of specific type(s), add the corresponding letters a, + n, t, or w to the command. If pattern is specified, only functions whose names match the pattern are shown. @@ -1429,7 +1429,7 @@ testdb=> To look up functions taking arguments or returning values of a specific data type, use your pager's search capability to scroll through the - \df output. + \df output. @@ -1497,8 +1497,8 @@ testdb=> Lists database roles. - (Since the concepts of users and groups have been - unified into roles, this command is now equivalent to + (Since the concepts of users and groups have been + unified into roles, this command is now equivalent to \du.) By default, only user-created roles are shown; supply the S modifier to include system roles. @@ -1624,7 +1624,7 @@ testdb=> role-pattern and database-pattern are used to select specific roles and databases to list, respectively. If omitted, or if - * is specified, all settings are listed, including those + * is specified, all settings are listed, including those not role-specific or database-specific, respectively. @@ -1674,7 +1674,7 @@ testdb=> specified, only types whose names match the pattern are listed. If + is appended to the command name, each type is listed with its internal name and size, its allowed values - if it is an enum type, and its associated permissions. + if it is an enum type, and its associated permissions. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects. @@ -1687,8 +1687,8 @@ testdb=> Lists database roles. - (Since the concepts of users and groups have been - unified into roles, this command is now equivalent to + (Since the concepts of users and groups have been + unified into roles, this command is now equivalent to \dg.) By default, only user-created roles are shown; supply the S modifier to include system roles. @@ -1730,7 +1730,7 @@ testdb=> - \e or \edit filename line_number + \e or \edit filename line_number @@ -1750,8 +1750,8 @@ testdb=> whole buffer as a single line. Any complete queries are immediately executed; that is, if the query buffer contains or ends with a semicolon, everything up to that point is executed. Whatever remains - will wait in the query buffer; type semicolon or \g to - send it, or \r to cancel it by clearing the query buffer. + will wait in the query buffer; type semicolon or \g to + send it, or \r to cancel it by clearing the query buffer. 
Treating the buffer as a single line primarily affects meta-commands: whatever is in the buffer after a meta-command will be taken as argument(s) to the meta-command, even if it spans multiple lines. @@ -1803,27 +1803,27 @@ Tue Oct 26 21:40:57 CEST 1999 - \ef function_description line_number + \ef function_description line_number This command fetches and edits the definition of the named function, - in the form of a CREATE OR REPLACE FUNCTION command. - Editing is done in the same way as for \edit. + in the form of a CREATE OR REPLACE FUNCTION command. + Editing is done in the same way as for \edit. After the editor exits, the updated command waits in the query buffer; - type semicolon or \g to send it, or \r + type semicolon or \g to send it, or \r to cancel. The target function can be specified by name alone, or by name - and arguments, for example foo(integer, text). + and arguments, for example foo(integer, text). The argument types must be given if there is more than one function of the same name. - If no function is specified, a blank CREATE FUNCTION + If no function is specified, a blank CREATE FUNCTION template is presented for editing. @@ -1836,7 +1836,7 @@ Tue Oct 26 21:40:57 CEST 1999 Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \ef, and neither + always taken to be the argument(s) of \ef, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -1871,28 +1871,28 @@ Tue Oct 26 21:40:57 CEST 1999 Repeats the most recent server error message at maximum verbosity, as though VERBOSITY were set - to verbose and SHOW_CONTEXT were - set to always. + to verbose and SHOW_CONTEXT were + set to always. - \ev view_name line_number + \ev view_name line_number This command fetches and edits the definition of the named view, - in the form of a CREATE OR REPLACE VIEW command. - Editing is done in the same way as for \edit. + in the form of a CREATE OR REPLACE VIEW command. + Editing is done in the same way as for \edit. After the editor exits, the updated command waits in the query buffer; - type semicolon or \g to send it, or \r + type semicolon or \g to send it, or \r to cancel. - If no view is specified, a blank CREATE VIEW + If no view is specified, a blank CREATE VIEW template is presented for editing. @@ -1903,7 +1903,7 @@ Tue Oct 26 21:40:57 CEST 1999 Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \ev, and neither + always taken to be the argument(s) of \ev, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -1944,7 +1944,7 @@ Tue Oct 26 21:40:57 CEST 1999 alternative to the \o command. - If the argument begins with |, then the entire remainder + If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are @@ -1982,13 +1982,13 @@ Tue Oct 26 21:40:57 CEST 1999 Sends the current query buffer to the server, then treats each column of each row of the query's output (if any) as a SQL statement to be executed. 
For example, to create an index on each - column of my_table: + column of my_table: -=> SELECT format('create index on my_table(%I)', attname) --> FROM pg_attribute --> WHERE attrelid = 'my_table'::regclass AND attnum > 0 --> ORDER BY attnum --> \gexec +=> SELECT format('create index on my_table(%I)', attname) +-> FROM pg_attribute +-> WHERE attrelid = 'my_table'::regclass AND attnum > 0 +-> ORDER BY attnum +-> \gexec CREATE INDEX CREATE INDEX CREATE INDEX @@ -2001,14 +2001,14 @@ CREATE INDEX are returned, and left-to-right within each row if there is more than one column. NULL fields are ignored. The generated queries are sent literally to the server for processing, so they cannot be - psql meta-commands nor contain psql + psql meta-commands nor contain psql variable references. If any individual query fails, execution of the remaining queries continues unless ON_ERROR_STOP is set. Execution of each query is subject to ECHO processing. (Setting ECHO to all or queries is often advisable when - using \gexec.) Query logging, single-step mode, + using \gexec.) Query logging, single-step mode, timing, and other query execution features apply to each generated query as well. @@ -2026,7 +2026,7 @@ CREATE INDEX Sends the current query buffer to the server and stores the - query's output into psql variables (see psql variables (see ). The query to be executed must return exactly one row. Each column of the row is stored into a separate variable, named the same as the @@ -2092,7 +2092,7 @@ hello 10 Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \help, and neither + always taken to be the argument(s) of \help, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -2133,7 +2133,7 @@ hello 10 If filename is - (hyphen), then standard input is read until an EOF indication - or \q meta-command. This can be used to intersperse + or \q meta-command. This can be used to intersperse interactive input with input from files. Note that Readline behavior will be used only if it is active at the outermost level. @@ -2208,7 +2208,7 @@ hello 10 the same source file. If EOF is reached on the main input file or an \include-ed file before all local \if-blocks have been closed, - then psql will raise an error. + then psql will raise an error. Here is an example: @@ -2241,7 +2241,7 @@ SELECT \ir or \include_relative filename - The \ir command is similar to \i, but resolves + The \ir command is similar to \i, but resolves relative file names differently. When executing in interactive mode, the two commands behave identically. However, when invoked from a script, \ir interprets file names relative to the @@ -2366,7 +2366,7 @@ lo_import 152801 - If the argument begins with |, then the entire remainder + If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are @@ -2409,7 +2409,7 @@ lo_import 152801 Changes the password of the specified user (by default, the current user). This command prompts for the new password, encrypts it, and - sends it to the server as an ALTER ROLE command. This + sends it to the server as an ALTER ROLE command. This makes sure that the new password does not appear in cleartext in the command history, the server log, or elsewhere. @@ -2421,16 +2421,16 @@ lo_import 152801 Prompts the user to supply text, which is assigned to the variable - name. + name. 
An optional prompt string, text, can be specified. (For multiword + class="parameter">text, can be specified. (For multiword prompts, surround the text with single quotes.) - By default, \prompt uses the terminal for input and - output. However, if the @@ -2484,16 +2484,16 @@ lo_import 152801 columns - Sets the target width for the wrapped format, and also + Sets the target width for the wrapped format, and also the width limit for determining whether output is wide enough to require the pager or switch to the vertical display in expanded auto mode. Zero (the default) causes the target width to be controlled by the - environment variable COLUMNS, or the detected screen width - if COLUMNS is not set. - In addition, if columns is zero then the - wrapped format only affects screen output. - If columns is nonzero then file and pipe output is + environment variable COLUMNS, or the detected screen width + if COLUMNS is not set. + In addition, if columns is zero then the + wrapped format only affects screen output. + If columns is nonzero then file and pipe output is wrapped to that width as well. @@ -2552,7 +2552,7 @@ lo_import 152801 If value is specified it must be either on or off which will enable or disable display of the table footer - (the (n rows) count). + (the (n rows) count). If value is omitted the command toggles footer display on or off. @@ -2573,7 +2573,7 @@ lo_import 152801 is enough.) - unaligned format writes all columns of a row on one + unaligned format writes all columns of a row on one line, separated by the currently active field separator. This is useful for creating output that might be intended to be read in by other programs (for example, tab-separated or comma-separated @@ -2584,18 +2584,18 @@ lo_import 152801 nicely formatted text output; this is the default. - wrapped format is like aligned but wraps + wrapped format is like aligned but wraps wide data values across lines to make the output fit in the target column width. The target width is determined as described under - the columns option. Note that psql will + the columns option. Note that psql will not attempt to wrap column header titles; therefore, - wrapped format behaves the same as aligned + wrapped format behaves the same as aligned if the total width needed for column headers exceeds the target. - The html, asciidoc, latex, - latex-longtable, and troff-ms + The html, asciidoc, latex, + latex-longtable, and troff-ms formats put out tables that are intended to be included in documents using the respective mark-up language. They are not complete documents! This might not be @@ -2603,7 +2603,7 @@ lo_import 152801 LaTeX you must have a complete document wrapper. latex-longtable also requires the LaTeX - longtable and booktabs packages. + longtable and booktabs packages. @@ -2617,9 +2617,9 @@ lo_import 152801 or unicode. Unique abbreviations are allowed. (That would mean one letter is enough.) - The default setting is ascii. - This option only affects the aligned and - wrapped output formats. + The default setting is ascii. + This option only affects the aligned and + wrapped output formats. ascii style uses plain ASCII @@ -2627,17 +2627,17 @@ lo_import 152801 a + symbol in the right-hand margin. When the wrapped format wraps data from one line to the next without a newline character, a dot - (.) is shown in the right-hand margin of the first line, + (.) is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line. 
- old-ascii style uses plain ASCII + old-ascii style uses plain ASCII characters, using the formatting style used in PostgreSQL 8.4 and earlier. Newlines in data are shown using a : symbol in place of the left-hand column separator. When the data is wrapped from one line - to the next without a newline character, a ; + to the next without a newline character, a ; symbol is used in place of the left-hand column separator. @@ -2650,7 +2650,7 @@ lo_import 152801 - When the border setting is greater than zero, + When the border setting is greater than zero, the linestyle option also determines the characters with which the border lines are drawn. Plain ASCII characters work everywhere, but @@ -2689,7 +2689,7 @@ lo_import 152801 pager - Controls use of a pager program for query and psql + Controls use of a pager program for query and psql help output. If the environment variable PSQL_PAGER or PAGER is set, the output is piped to the specified program. Otherwise a platform-dependent default program @@ -2697,13 +2697,13 @@ lo_import 152801 - When the pager option is off, the pager - program is not used. When the pager option is - on, the pager is used when appropriate, i.e., when the + When the pager option is off, the pager + program is not used. When the pager option is + on, the pager is used when appropriate, i.e., when the output is to a terminal and will not fit on the screen. - The pager option can also be set to always, + The pager option can also be set to always, which causes the pager to be used for all terminal output regardless - of whether it fits on the screen. \pset pager + of whether it fits on the screen. \pset pager without a value toggles pager use on and off. @@ -2714,7 +2714,7 @@ lo_import 152801 pager_min_lines - If pager_min_lines is set to a number greater than the + If pager_min_lines is set to a number greater than the page height, the pager program will not be called unless there are at least this many lines of output to show. The default setting is 0. @@ -2760,7 +2760,7 @@ lo_import 152801 In latex-longtable format, this controls the proportional width of each column containing a left-aligned data type. It is specified as a whitespace-separated list of values, - e.g. '0.2 0.2 0.6'. Unspecified output columns + e.g. '0.2 0.2 0.6'. Unspecified output columns use the last specified value. @@ -2902,7 +2902,7 @@ lo_import 152801 - Sets the psql variable psql variable name to value, or if more than one value is given, to the concatenation of all of them. If only one @@ -2910,8 +2910,8 @@ lo_import 152801 unset a variable, use the \unset command. - \set without any arguments displays the names and values - of all currently-set psql variables. + \set without any arguments displays the names and values + of all currently-set psql variables. @@ -2958,19 +2958,19 @@ testdb=> \setenv LESS -imx4F - \sf[+] function_description + \sf[+] function_description This command fetches and shows the definition of the named function, - in the form of a CREATE OR REPLACE FUNCTION command. + in the form of a CREATE OR REPLACE FUNCTION command. The definition is printed to the current query output channel, as set by \o. The target function can be specified by name alone, or by name - and arguments, for example foo(integer, text). + and arguments, for example foo(integer, text). The argument types must be given if there is more than one function of the same name. 
@@ -2983,7 +2983,7 @@ testdb=> \setenv LESS -imx4F Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \sf, and neither + always taken to be the argument(s) of \sf, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -2992,12 +2992,12 @@ testdb=> \setenv LESS -imx4F - \sv[+] view_name + \sv[+] view_name This command fetches and shows the definition of the named view, - in the form of a CREATE OR REPLACE VIEW command. + in the form of a CREATE OR REPLACE VIEW command. The definition is printed to the current query output channel, as set by \o. @@ -3009,7 +3009,7 @@ testdb=> \setenv LESS -imx4F Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \sv, and neither + always taken to be the argument(s) of \sv, and neither variable interpolation nor backquote expansion are performed in the arguments. @@ -3062,13 +3062,13 @@ testdb=> \setenv LESS -imx4F - Unsets (deletes) the psql variable psql variable name. Most variables that control psql's behavior - cannot be unset; instead, an \unset command is interpreted + cannot be unset; instead, an \unset command is interpreted as setting them to their default values. See , below. @@ -3079,7 +3079,7 @@ testdb=> \setenv LESS -imx4F \w or \write filename - \w or \write |command + \w or \write |command Writes the current query buffer to the file \setenv LESS -imx4F - If the argument begins with |, then the entire remainder + If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are @@ -3105,10 +3105,10 @@ testdb=> \setenv LESS -imx4F \watch [ seconds ] - Repeatedly execute the current query buffer (as \g does) + Repeatedly execute the current query buffer (as \g does) until interrupted or the query fails. Wait the specified number of seconds (default 2) between executions. Each query result is - displayed with a header that includes the \pset title + displayed with a header that includes the \pset title string (if any), the time as of query start, and the delay interval. @@ -3153,14 +3153,14 @@ testdb=> \setenv LESS -imx4F \! [ command ] - With no argument, escapes to a sub-shell; psql + With no argument, escapes to a sub-shell; psql resumes when the sub-shell exits. With an argument, executes the shell command command. Unlike most other meta-commands, the entire remainder of the line is - always taken to be the argument(s) of \!, and neither + always taken to be the argument(s) of \!, and neither variable interpolation nor backquote expansion are performed in the arguments. The rest of the line is simply passed literally to the shell. @@ -3170,16 +3170,16 @@ testdb=> \setenv LESS -imx4F - \? [ topic ] + \? [ topic ] Shows help information. The optional - topic parameter - (defaulting to commands) selects which part of psql is - explained: commands describes psql's - backslash commands; options describes the command-line - options that can be passed to psql; - and variables shows help about psql configuration + topic parameter + (defaulting to commands) selects which part of psql is + explained: commands describes psql's + backslash commands; options describes the command-line + options that can be passed to psql; + and variables shows help about psql configuration variables. 
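As a sketch of the \watch meta-command described above, ending a query with the meta-command instead of a semicolon executes whatever is in the query buffer every five seconds until cancelled (the query itself is just an example):

testdb=> SELECT count(*) AS backends FROM pg_stat_activity \watch 5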
@@ -3196,7 +3196,7 @@ testdb=> \setenv LESS -imx4F - Normally, psql will dispatch a SQL command to the + Normally, psql will dispatch a SQL command to the server as soon as it reaches the command-ending semicolon, even if more input remains on the current line. Thus for example entering @@ -3205,7 +3205,7 @@ select 1; select 2; select 3; will result in the three SQL commands being individually sent to the server, with each one's results being displayed before continuing to the next command. However, a semicolon entered - as \; will not trigger command processing, so that the + as \; will not trigger command processing, so that the command before it and the one after are effectively combined and sent to the server in one request. So for example @@ -3214,14 +3214,14 @@ select 1\; select 2\; select 3; results in sending the three SQL commands to the server in a single request, when the non-backslashed semicolon is reached. The server executes such a request as a single transaction, - unless there are explicit BEGIN/COMMIT + unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. (See for more details about how the server handles multi-query strings.) psql prints only the last query result it receives for each request; in this example, although all - three SELECTs are indeed executed, psql - only prints the 3. + three SELECTs are indeed executed, psql + only prints the 3. @@ -3238,54 +3238,54 @@ select 1\; select 2\; select 3; - The various \d commands accept a \d commands accept a pattern parameter to specify the object name(s) to be displayed. In the simplest case, a pattern is just the exact name of the object. The characters within a pattern are normally folded to lower case, just as in SQL names; - for example, \dt FOO will display the table named - foo. As in SQL names, placing double quotes around + for example, \dt FOO will display the table named + foo. As in SQL names, placing double quotes around a pattern stops folding to lower case. Should you need to include an actual double quote character in a pattern, write it as a pair of double quotes within a double-quote sequence; again this is in accord with the rules for SQL quoted identifiers. For example, - \dt "FOO""BAR" will display the table named - FOO"BAR (not foo"bar). Unlike the normal + \dt "FOO""BAR" will display the table named + FOO"BAR (not foo"bar). Unlike the normal rules for SQL names, you can put double quotes around just part - of a pattern, for instance \dt FOO"FOO"BAR will display - the table named fooFOObar. + of a pattern, for instance \dt FOO"FOO"BAR will display + the table named fooFOObar. Whenever the pattern parameter - is omitted completely, the \d commands display all objects + is omitted completely, the \d commands display all objects that are visible in the current schema search path — this is - equivalent to using * as the pattern. - (An object is said to be visible if its + equivalent to using * as the pattern. + (An object is said to be visible if its containing schema is in the search path and no object of the same kind and name appears earlier in the search path. This is equivalent to the statement that the object can be referenced by name without explicit schema qualification.) To see all objects in the database regardless of visibility, - use *.* as the pattern. + use *.* as the pattern. - Within a pattern, * matches any sequence of characters - (including no characters) and ? matches any single character. 
+ Within a pattern, * matches any sequence of characters + (including no characters) and ? matches any single character. (This notation is comparable to Unix shell file name patterns.) - For example, \dt int* displays tables whose names - begin with int. But within double quotes, * - and ? lose these special meanings and are just matched + For example, \dt int* displays tables whose names + begin with int. But within double quotes, * + and ? lose these special meanings and are just matched literally. - A pattern that contains a dot (.) is interpreted as a schema + A pattern that contains a dot (.) is interpreted as a schema name pattern followed by an object name pattern. For example, - \dt foo*.*bar* displays all tables whose table name - includes bar that are in schemas whose schema name - starts with foo. When no dot appears, then the pattern + \dt foo*.*bar* displays all tables whose table name + includes bar that are in schemas whose schema name + starts with foo. When no dot appears, then the pattern matches only objects that are visible in the current schema search path. Again, a dot within double quotes loses its special meaning and is matched literally. @@ -3293,28 +3293,28 @@ select 1\; select 2\; select 3; Advanced users can use regular-expression notations such as character - classes, for example [0-9] to match any digit. All regular + classes, for example [0-9] to match any digit. All regular expression special characters work as specified in - , except for . which - is taken as a separator as mentioned above, * which is - translated to the regular-expression notation .*, - ? which is translated to ., and - $ which is matched literally. You can emulate + , except for . which + is taken as a separator as mentioned above, * which is + translated to the regular-expression notation .*, + ? which is translated to ., and + $ which is matched literally. You can emulate these pattern characters at need by writing - ? for ., + ? for ., (R+|) for R*, or (R|) for R?. - $ is not needed as a regular-expression character since + $ is not needed as a regular-expression character since the pattern must match the whole name, unlike the usual - interpretation of regular expressions (in other words, $ - is automatically appended to your pattern). Write * at the + interpretation of regular expressions (in other words, $ + is automatically appended to your pattern). Write * at the beginning and/or end if you don't wish the pattern to be anchored. Note that within double quotes, all regular expression special characters lose their special meanings and are matched literally. Also, the regular expression special characters are matched literally in operator name - patterns (i.e., the argument of \do). + patterns (i.e., the argument of \do). @@ -3387,14 +3387,14 @@ bar Variables that control psql's behavior - generally cannot be unset or set to invalid values. An \unset + generally cannot be unset or set to invalid values. An \unset command is allowed but is interpreted as setting the variable to its - default value. A \set command without a second argument is - interpreted as setting the variable to on, for control + default value. A \set command without a second argument is + interpreted as setting the variable to on, for control variables that accept that value, and is rejected for others. Also, - control variables that accept the values on - and off will also accept other common spellings of Boolean - values, such as true and false. 
+ control variables that accept the values on + and off will also accept other common spellings of Boolean + values, such as true and false. @@ -3412,23 +3412,23 @@ bar - When on (the default), each SQL command is automatically + When on (the default), each SQL command is automatically committed upon successful completion. To postpone commit in this - mode, you must enter a BEGIN or START - TRANSACTION SQL command. When off or unset, SQL + mode, you must enter a BEGIN or START + TRANSACTION SQL command. When off or unset, SQL commands are not committed until you explicitly issue - COMMIT or END. The autocommit-off - mode works by issuing an implicit BEGIN for you, just + COMMIT or END. The autocommit-off + mode works by issuing an implicit BEGIN for you, just before any command that is not already in a transaction block and - is not itself a BEGIN or other transaction-control + is not itself a BEGIN or other transaction-control command, nor a command that cannot be executed inside a transaction - block (such as VACUUM). + block (such as VACUUM). In autocommit-off mode, you must explicitly abandon any failed - transaction by entering ABORT or ROLLBACK. + transaction by entering ABORT or ROLLBACK. Also keep in mind that if you exit the session without committing, your work will be lost. @@ -3436,7 +3436,7 @@ bar - The autocommit-on mode is PostgreSQL's traditional + The autocommit-on mode is PostgreSQL's traditional behavior, but autocommit-off is closer to the SQL spec. If you prefer autocommit-off, you might wish to set it in the system-wide psqlrc file or your @@ -3496,7 +3496,7 @@ bar ECHO_HIDDEN - When this variable is set to on and a backslash command + When this variable is set to on and a backslash command queries the database, the query is first shown. This feature helps you to study PostgreSQL internals and provide @@ -3504,7 +3504,7 @@ bar on program start-up, use the switch .) If you set this variable to the value noexec, the queries are just shown but are not actually sent to the server and executed. - The default value is off. + The default value is off. @@ -3516,7 +3516,7 @@ bar The current client character set encoding. This is set every time you connect to a database (including program start-up), and when you change the encoding - with \encoding, but it can be changed or unset. + with \encoding, but it can be changed or unset. @@ -3525,8 +3525,8 @@ bar ERROR - true if the last SQL query failed, false if - it succeeded. See also SQLSTATE. + true if the last SQL query failed, false if + it succeeded. See also SQLSTATE. @@ -3550,7 +3550,7 @@ bar Although you can use any output format with this feature, - the default aligned format tends to look bad + the default aligned format tends to look bad because each group of FETCH_COUNT rows will be formatted separately, leading to varying column widths across the row groups. The other output formats work better. @@ -3637,11 +3637,11 @@ bar IGNOREEOF - If set to 1 or less, sending an EOF character (usually - ControlD) + If set to 1 or less, sending an EOF character (usually + ControlD) to an interactive session of psql will terminate the application. If set to a larger numeric value, - that many consecutive EOF characters must be typed to + that many consecutive EOF characters must be typed to make an interactive session terminate. If the variable is set to a non-numeric value, it is interpreted as 10. The default is 0. 
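As a sketch of how these control variables are adjusted in practice (the values shown are arbitrary examples, not recommended defaults):

    \set AUTOCOMMIT off
    \set ECHO_HIDDEN on
    \set FETCH_COUNT 100
    \set IGNOREEOF 5

With AUTOCOMMIT off, the next SQL command opens an implicit transaction that must be closed with COMMIT or ROLLBACK; with ECHO_HIDDEN on, the query behind a backslash command such as \d is shown before its result.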
@@ -3673,8 +3673,8 @@ bar The primary error message and associated SQLSTATE code for the most - recent failed query in the current psql session, or - an empty string and 00000 if no error has occurred in + recent failed query in the current psql session, or + an empty string and 00000 if no error has occurred in the current session. @@ -3690,14 +3690,14 @@ bar - When set to on, if a statement in a transaction block + When set to on, if a statement in a transaction block generates an error, the error is ignored and the transaction - continues. When set to interactive, such errors are only + continues. When set to interactive, such errors are only ignored in interactive sessions, and not when reading script - files. When set to off (the default), a statement in a + files. When set to off (the default), a statement in a transaction block that generates an error aborts the entire transaction. The error rollback mode works by issuing an - implicit SAVEPOINT for you, just before each command + implicit SAVEPOINT for you, just before each command that is in a transaction block, and then rolling back to the savepoint if the command fails. @@ -3709,7 +3709,7 @@ bar By default, command processing continues after an error. When this - variable is set to on, processing will instead stop + variable is set to on, processing will instead stop immediately. In interactive mode, psql will return to the command prompt; otherwise, psql will exit, returning @@ -3752,7 +3752,7 @@ bar QUIET - Setting this variable to on is equivalent to the command + Setting this variable to on is equivalent to the command line option . It is probably not too useful in interactive mode. @@ -3775,9 +3775,9 @@ bar The server's version number as a string, for - example 9.6.2, 10.1 or 11beta1, + example 9.6.2, 10.1 or 11beta1, and in numeric form, for - example 90602 or 100001. + example 90602 or 100001. These are set every time you connect to a database (including program start-up), but can be changed or unset. @@ -3789,13 +3789,13 @@ bar This variable can be set to the - values never, errors, or always - to control whether CONTEXT fields are displayed in - messages from the server. The default is errors (meaning + values never, errors, or always + to control whether CONTEXT fields are displayed in + messages from the server. The default is errors (meaning that context will be shown in error messages, but not in notice or warning messages). This setting has no effect - when VERBOSITY is set to terse. - (See also \errverbose, for use when you want a verbose + when VERBOSITY is set to terse. + (See also \errverbose, for use when you want a verbose version of the error you just got.) @@ -3805,7 +3805,7 @@ bar SINGLELINE - Setting this variable to on is equivalent to the command + Setting this variable to on is equivalent to the command line option . @@ -3815,7 +3815,7 @@ bar SINGLESTEP - Setting this variable to on is equivalent to the command + Setting this variable to on is equivalent to the command line option . @@ -3826,7 +3826,7 @@ bar The error code (see ) associated - with the last SQL query's failure, or 00000 if it + with the last SQL query's failure, or 00000 if it succeeded. @@ -3847,10 +3847,10 @@ bar VERBOSITY - This variable can be set to the values default, - verbose, or terse to control the verbosity + This variable can be set to the values default, + verbose, or terse to control the verbosity of error reports. 
- (See also \errverbose, for use when you want a verbose + (See also \errverbose, for use when you want a verbose version of the error you just got.) @@ -3863,10 +3863,10 @@ bar These variables are set at program start-up to reflect - psql's version, respectively as a verbose string, - a short string (e.g., 9.6.2, 10.1, - or 11beta1), and a number (e.g., 90602 - or 100001). They can be changed or unset. + psql's version, respectively as a verbose string, + a short string (e.g., 9.6.2, 10.1, + or 11beta1), and a number (e.g., 90602 + or 100001). They can be changed or unset. @@ -3916,7 +3916,7 @@ testdb=> SELECT * FROM :"foo"; Variable interpolation will not be performed within quoted SQL literals and identifiers. Therefore, a - construction such as ':foo' doesn't work to produce a quoted + construction such as ':foo' doesn't work to produce a quoted literal from a variable's value (and it would be unsafe if it did work, since it wouldn't correctly handle quotes embedded in the value). @@ -3943,7 +3943,7 @@ testdb=> INSERT INTO my_table VALUES (:'content'); - The :{?name} special syntax returns TRUE + The :{?name} special syntax returns TRUE or FALSE depending on whether the variable exists or not, and is thus always substituted, unless the colon is backslash-escaped. @@ -4086,8 +4086,8 @@ testdb=> INSERT INTO my_table VALUES (:'content'); Transaction status: an empty string when not in a transaction - block, or * when in a transaction block, or - ! when in a failed transaction block, or ? + block, or * when in a transaction block, or + ! when in a failed transaction block, or ? when the transaction state is indeterminate (for example, because there is no connection). @@ -4098,7 +4098,7 @@ testdb=> INSERT INTO my_table VALUES (:'content'); %l - The line number inside the current statement, starting from 1. + The line number inside the current statement, starting from 1. @@ -4186,7 +4186,7 @@ testdb=> \set PROMPT1 '%[%033[1;33;40m%]%n@%/%R%[%033[0m%]%# ' supported, although the completion logic makes no claim to be an SQL parser. The queries generated by tab-completion can also interfere with other SQL commands, e.g. SET - TRANSACTION ISOLATION LEVEL. + TRANSACTION ISOLATION LEVEL. If for some reason you do not like the tab completion, you can turn it off by putting this in a file named .inputrc in your home directory: @@ -4214,8 +4214,8 @@ $endif - If \pset columns is zero, controls the - width for the wrapped format and width for determining + If \pset columns is zero, controls the + width for the wrapped format and width for determining if wide output requires the pager or should be switched to the vertical format in expanded auto mode. @@ -4261,8 +4261,8 @@ $endif \ev is used with a line number argument, this variable specifies the command-line argument used to pass the starting line number to - the user's editor. For editors such as Emacs or - vi, this is a plus sign. Include a trailing + the user's editor. For editors such as Emacs or + vi, this is a plus sign. Include a trailing space in the value of the variable if there needs to be space between the option name and the line number. Examples: @@ -4304,8 +4304,8 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' pager-related options of the \pset command. These variables are examined in the order listed; the first that is set is used. - If none of them is set, the default is to use more on most - platforms, but less on Cygwin. + If none of them is set, the default is to use more on most + platforms, but less on Cygwin. 
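A plausible shell setup tying these environment variables together might be (a sketch; the editor and pager choices are arbitrary examples):

    PSQL_EDITOR=vi
    PSQL_EDITOR_LINENUMBER_ARG='+'
    PAGER='less -S'
    export PSQL_EDITOR PSQL_EDITOR_LINENUMBER_ARG PAGER
    psql testdb

With these settings, \ef myfunc 20 would start vi positioned at line 20 of the function body (myfunc is a hypothetical function name), and wide results would be paged through less with long lines chopped rather than wrapped.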
@@ -4344,8 +4344,8 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -4371,9 +4371,9 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' The system-wide startup file is named psqlrc and is - sought in the installation's system configuration directory, + sought in the installation's system configuration directory, which is most reliably identified by running pg_config - --sysconfdir. By default this directory will be ../etc/ + --sysconfdir. By default this directory will be ../etc/ relative to the directory containing the PostgreSQL executables. The name of this directory can be set explicitly via the PGSYSCONFDIR @@ -4410,7 +4410,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' The location of the history file can be set explicitly via - the HISTFILE psql variable or + the HISTFILE psql variable or the PSQL_HISTORY environment variable. @@ -4426,10 +4426,10 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' psql works best with servers of the same or an older major version. Backslash commands are particularly likely - to fail if the server is of a newer version than psql - itself. However, backslash commands of the \d family should + to fail if the server is of a newer version than psql + itself. However, backslash commands of the \d family should work with servers of versions back to 7.4, though not necessarily with - servers newer than psql itself. The general + servers newer than psql itself. The general functionality of running SQL commands and displaying query results should also work with servers of a newer major version, but this cannot be guaranteed in all cases. @@ -4449,7 +4449,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' Before PostgreSQL 9.6, the option implied - (); this is no longer the case. @@ -4471,7 +4471,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' psql is built as a console - application. Since the Windows console windows use a different + application. Since the Windows console windows use a different encoding than the rest of the system, you must take special care when using 8-bit characters within psql. If psql detects a problematic @@ -4490,7 +4490,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' - Set the console font to Lucida Console, because the + Set the console font to Lucida Console, because the raster font does not work with the ANSI code page. diff --git a/doc/src/sgml/ref/reassign_owned.sgml b/doc/src/sgml/ref/reassign_owned.sgml index c1751e7f47..2bbd6b8f07 100644 --- a/doc/src/sgml/ref/reassign_owned.sgml +++ b/doc/src/sgml/ref/reassign_owned.sgml @@ -88,7 +88,7 @@ REASSIGN OWNED BY { old_role | CURR The REASSIGN OWNED command does not affect any - privileges granted to the old_roles for + privileges granted to the old_roles for objects that are not owned by them. Use DROP OWNED to revoke such privileges. diff --git a/doc/src/sgml/ref/refresh_materialized_view.sgml b/doc/src/sgml/ref/refresh_materialized_view.sgml index e56e542eb5..0135d15cec 100644 --- a/doc/src/sgml/ref/refresh_materialized_view.sgml +++ b/doc/src/sgml/ref/refresh_materialized_view.sgml @@ -94,9 +94,9 @@ REFRESH MATERIALIZED VIEW [ CONCURRENTLY ] name While the default index for future - operations is retained, REFRESH MATERIALIZED VIEW does not + operations is retained, REFRESH MATERIALIZED VIEW does not order the generated rows based on this property. 
If you want the data - to be ordered upon generation, you must use an ORDER BY + to be ordered upon generation, you must use an ORDER BY clause in the backing query. diff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml index 09fc61d15b..3dc2608f76 100644 --- a/doc/src/sgml/ref/reindex.sgml +++ b/doc/src/sgml/ref/reindex.sgml @@ -46,7 +46,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } - An index has become bloated, that is it contains many + An index has become bloated, that is it contains many empty or nearly-empty pages. This can occur with B-tree indexes in PostgreSQL under certain uncommon access patterns. REINDEX provides a way to reduce @@ -65,12 +65,12 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } - An index build with the CONCURRENTLY option failed, leaving - an invalid index. Such indexes are useless but it can be - convenient to use REINDEX to rebuild them. Note that - REINDEX will not perform a concurrent build. To build the + An index build with the CONCURRENTLY option failed, leaving + an invalid index. Such indexes are useless but it can be + convenient to use REINDEX to rebuild them. Note that + REINDEX will not perform a concurrent build. To build the index without interfering with production you should drop the index and - reissue the CREATE INDEX CONCURRENTLY command. + reissue the CREATE INDEX CONCURRENTLY command. @@ -95,7 +95,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } Recreate all indexes of the specified table. If the table has a - secondary TOAST table, that is reindexed as well. + secondary TOAST table, that is reindexed as well. @@ -105,7 +105,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } Recreate all indexes of the specified schema. If a table of this - schema has a secondary TOAST table, that is reindexed as + schema has a secondary TOAST table, that is reindexed as well. Indexes on shared system catalogs are also processed. This form of REINDEX cannot be executed inside a transaction block. @@ -144,7 +144,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } The name of the specific index, table, or database to be reindexed. Index and table names can be schema-qualified. - Presently, REINDEX DATABASE and REINDEX SYSTEM + Presently, REINDEX DATABASE and REINDEX SYSTEM can only reindex the current database, so their parameter must match the current database's name. @@ -186,10 +186,10 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } PostgreSQL
server with the option included on its command line. - Then, REINDEX DATABASE, REINDEX SYSTEM, - REINDEX TABLE, or REINDEX INDEX can be + Then, REINDEX DATABASE, REINDEX SYSTEM, + REINDEX TABLE, or REINDEX INDEX can be issued, depending on how much you want to reconstruct. If in - doubt, use REINDEX SYSTEM to select + doubt, use REINDEX SYSTEM to select reconstruction of all system indexes in the database. Then quit the single-user server session and restart the regular server. See the reference page for more @@ -201,8 +201,8 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } -P included in its command line options. The method for doing this varies across clients, but in all - libpq-based clients, it is possible to set - the PGOPTIONS environment variable to -P + libpq-based clients, it is possible to set + the PGOPTIONS environment variable to -P before starting the client. Note that while this method does not require locking out other clients, it might still be wise to prevent other users from connecting to the damaged database until repairs @@ -212,12 +212,12 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } REINDEX is similar to a drop and recreate of the index in that the index contents are rebuilt from scratch. However, the locking - considerations are rather different. REINDEX locks out writes + considerations are rather different. REINDEX locks out writes but not reads of the index's parent table. It also takes an exclusive lock on the specific index being processed, which will block reads that attempt - to use that index. In contrast, DROP INDEX momentarily takes + to use that index. In contrast, DROP INDEX momentarily takes an exclusive lock on the parent table, blocking both writes and reads. The - subsequent CREATE INDEX locks out writes but not reads; since + subsequent CREATE INDEX locks out writes but not reads; since the index is not there, no read will attempt to use it, meaning that there will be no blocking but reads might be forced into expensive sequential scans. diff --git a/doc/src/sgml/ref/reindexdb.sgml b/doc/src/sgml/ref/reindexdb.sgml index e4721d8113..627be6a0ad 100644 --- a/doc/src/sgml/ref/reindexdb.sgml +++ b/doc/src/sgml/ref/reindexdb.sgml @@ -109,8 +109,8 @@ PostgreSQL documentation - - + + Reindex all databases. @@ -119,8 +119,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to be reindexed. @@ -134,8 +134,8 @@ PostgreSQL documentation - - + + Echo the commands that reindexdb generates @@ -145,20 +145,20 @@ PostgreSQL documentation - - + + Recreate index only. Multiple indexes can be recreated by writing multiple - switches. - - + + Do not display progress messages. @@ -167,8 +167,8 @@ PostgreSQL documentation - - + + Reindex database's system catalogs. @@ -177,32 +177,32 @@ PostgreSQL documentation - - + + Reindex schema only. Multiple schemas can be reindexed by writing multiple - switches. - - + + Reindex table only. Multiple tables can be reindexed by writing multiple - switches. - - + + Print detailed information during processing. @@ -211,8 +211,8 @@ PostgreSQL documentation - - + + Print the reindexdb version and exit. 
@@ -221,8 +221,8 @@ PostgreSQL documentation - - + + Show help about reindexdb command line @@ -241,8 +241,8 @@ PostgreSQL documentation - - + + Specifies the host name of the machine on which the server is @@ -253,8 +253,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -265,8 +265,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -275,8 +275,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -290,8 +290,8 @@ PostgreSQL documentation - - + + Force reindexdb to prompt for a @@ -304,14 +304,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, reindexdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. - + Specifies the name of the database to connect to discover what other @@ -345,8 +345,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -376,7 +376,7 @@ PostgreSQL documentation reindexdb might need to connect several times to the PostgreSQL server, asking for a password each time. It is convenient to have a - ~/.pgpass file in such cases. See ~/.pgpass file in such cases. See for more information. diff --git a/doc/src/sgml/ref/release_savepoint.sgml b/doc/src/sgml/ref/release_savepoint.sgml index b331b7226b..2e8dcc0746 100644 --- a/doc/src/sgml/ref/release_savepoint.sgml +++ b/doc/src/sgml/ref/release_savepoint.sgml @@ -109,7 +109,7 @@ COMMIT; Compatibility - This command conforms to the SQL standard. The standard + This command conforms to the SQL standard. The standard specifies that the key word SAVEPOINT is mandatory, but PostgreSQL allows it to be omitted. diff --git a/doc/src/sgml/ref/reset.sgml b/doc/src/sgml/ref/reset.sgml index 98c3207831..b434ad10c2 100644 --- a/doc/src/sgml/ref/reset.sgml +++ b/doc/src/sgml/ref/reset.sgml @@ -42,19 +42,19 @@ SET configuration_parameter TO DEFA The default value is defined as the value that the parameter would - have had, if no SET had ever been issued for it in the + have had, if no SET had ever been issued for it in the current session. The actual source of this value might be a compiled-in default, the configuration file, command-line options, or per-database or per-user default settings. This is subtly different from defining it as the value that the parameter had at session - start, because if the value came from the configuration file, it + start, because if the value came from the configuration file, it will be reset to whatever is specified by the configuration file now. See for details. - The transactional behavior of RESET is the same as - SET: its effects will be undone by transaction rollback. + The transactional behavior of RESET is the same as + SET: its effects will be undone by transaction rollback. 
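A minimal sketch of this behavior (timezone and the value 'UTC' are arbitrary example choices):

    BEGIN;
    SET timezone TO 'UTC';
    ROLLBACK;        -- undoes the SET above
    SHOW timezone;   -- reports the pre-transaction value again
    RESET timezone;  -- restores the session default

The final RESET restores whatever default currently applies, which, as noted above, may come from the configuration file rather than the compiled-in setting.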
@@ -88,7 +88,7 @@ SET configuration_parameter TO DEFA Examples - Set the timezone configuration variable to its default value: + Set the timezone configuration variable to its default value: RESET timezone; diff --git a/doc/src/sgml/ref/revoke.sgml b/doc/src/sgml/ref/revoke.sgml index 91f69af9ee..c893666e83 100644 --- a/doc/src/sgml/ref/revoke.sgml +++ b/doc/src/sgml/ref/revoke.sgml @@ -130,13 +130,13 @@ REVOKE [ ADMIN OPTION FOR ] Note that any particular role will have the sum of privileges granted directly to it, privileges granted to any role it is presently a member of, and privileges granted to - PUBLIC. Thus, for example, revoking SELECT privilege + PUBLIC. Thus, for example, revoking SELECT privilege from PUBLIC does not necessarily mean that all roles - have lost SELECT privilege on the object: those who have it granted + have lost SELECT privilege on the object: those who have it granted directly or via another role will still have it. Similarly, revoking - SELECT from a user might not prevent that user from using - SELECT if PUBLIC or another membership - role still has SELECT rights. + SELECT from a user might not prevent that user from using + SELECT if PUBLIC or another membership + role still has SELECT rights. @@ -167,10 +167,10 @@ REVOKE [ ADMIN OPTION FOR ] - When revoking membership in a role, GRANT OPTION is instead - called ADMIN OPTION, but the behavior is similar. + When revoking membership in a role, GRANT OPTION is instead + called ADMIN OPTION, but the behavior is similar. Note also that this form of the command does not - allow the noise word GROUP. + allow the noise word GROUP. @@ -181,7 +181,7 @@ REVOKE [ ADMIN OPTION FOR ] Use 's \dp command to display the privileges granted on existing tables and columns. See for information about the - format. For non-table objects there are other \d commands + format. For non-table objects there are other \d commands that can display their privileges. @@ -198,12 +198,12 @@ REVOKE [ ADMIN OPTION FOR ] - When a non-owner of an object attempts to REVOKE privileges + When a non-owner of an object attempts to REVOKE privileges on the object, the command will fail outright if the user has no privileges whatsoever on the object. As long as some privilege is available, the command will proceed, but it will revoke only those privileges for which the user has grant options. The REVOKE ALL - PRIVILEGES forms will issue a warning message if no grant options are + PRIVILEGES forms will issue a warning message if no grant options are held, while the other forms will issue a warning if grant options for any of the privileges specifically named in the command are not held. (In principle these statements apply to the object owner as well, but @@ -212,7 +212,7 @@ REVOKE [ ADMIN OPTION FOR ] - If a superuser chooses to issue a GRANT or REVOKE + If a superuser chooses to issue a GRANT or REVOKE command, the command is performed as though it were issued by the owner of the affected object. Since all privileges ultimately come from the object owner (possibly indirectly via chains of grant options), @@ -221,26 +221,26 @@ REVOKE [ ADMIN OPTION FOR ] - REVOKE can also be done by a role + REVOKE can also be done by a role that is not the owner of the affected object, but is a member of the role that owns the object, or is a member of a role that holds privileges WITH GRANT OPTION on the object. 
In this case the command is performed as though it were issued by the containing role that actually owns the object or holds the privileges WITH GRANT OPTION. For example, if table - t1 is owned by role g1, of which role - u1 is a member, then u1 can revoke privileges - on t1 that are recorded as being granted by g1. - This would include grants made by u1 as well as by other - members of role g1. + t1 is owned by role g1, of which role + u1 is a member, then u1 can revoke privileges + on t1 that are recorded as being granted by g1. + This would include grants made by u1 as well as by other + members of role g1. - If the role executing REVOKE holds privileges + If the role executing REVOKE holds privileges indirectly via more than one role membership path, it is unspecified which containing role will be used to perform the command. In such cases - it is best practice to use SET ROLE to become the specific - role you want to do the REVOKE as. Failure to do so might + it is best practice to use SET ROLE to become the specific + role you want to do the REVOKE as. Failure to do so might lead to revoking privileges other than the ones you intended, or not revoking anything at all. @@ -267,11 +267,11 @@ REVOKE ALL PRIVILEGES ON kinds FROM manuel; Note that this actually means revoke all privileges that I - granted. + granted. - Revoke membership in role admins from user joe: + Revoke membership in role admins from user joe: REVOKE admins FROM joe; @@ -285,7 +285,7 @@ REVOKE admins FROM joe; The compatibility notes of the command apply analogously to REVOKE. The keyword RESTRICT or CASCADE - is required according to the standard, but PostgreSQL + is required according to the standard, but PostgreSQL assumes RESTRICT by default. diff --git a/doc/src/sgml/ref/rollback.sgml b/doc/src/sgml/ref/rollback.sgml index b0b1e8d0e3..1a0e5a0ebc 100644 --- a/doc/src/sgml/ref/rollback.sgml +++ b/doc/src/sgml/ref/rollback.sgml @@ -59,7 +59,7 @@ ROLLBACK [ WORK | TRANSACTION ] - Issuing ROLLBACK outside of a transaction + Issuing ROLLBACK outside of a transaction block emits a warning and otherwise has no effect. diff --git a/doc/src/sgml/ref/rollback_prepared.sgml b/doc/src/sgml/ref/rollback_prepared.sgml index a0ffc65083..6c44049a89 100644 --- a/doc/src/sgml/ref/rollback_prepared.sgml +++ b/doc/src/sgml/ref/rollback_prepared.sgml @@ -75,7 +75,7 @@ ROLLBACK PREPARED transaction_id Examples Roll back the transaction identified by the transaction - identifier foobar: + identifier foobar: ROLLBACK PREPARED 'foobar'; diff --git a/doc/src/sgml/ref/rollback_to.sgml b/doc/src/sgml/ref/rollback_to.sgml index e8072d8974..f1da804f67 100644 --- a/doc/src/sgml/ref/rollback_to.sgml +++ b/doc/src/sgml/ref/rollback_to.sgml @@ -40,7 +40,7 @@ ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_name - ROLLBACK TO SAVEPOINT implicitly destroys all savepoints that + ROLLBACK TO SAVEPOINT implicitly destroys all savepoints that were established after the named savepoint. @@ -50,7 +50,7 @@ ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_name - savepoint_name + savepoint_name The savepoint to roll back to. @@ -77,17 +77,17 @@ ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_nameFETCH or MOVE command inside a + affected by a FETCH or MOVE command inside a savepoint that is later rolled back, the cursor remains at the - position that FETCH left it pointing to (that is, the cursor - motion caused by FETCH is not rolled back). 
+ position that FETCH left it pointing to (that is, the cursor + motion caused by FETCH is not rolled back). Closing a cursor is not undone by rolling back, either. However, other side-effects caused by the cursor's query (such as - side-effects of volatile functions called by the query) are + side-effects of volatile functions called by the query) are rolled back if they occur during a savepoint that is later rolled back. A cursor whose execution causes a transaction to abort is put in a cannot-execute state, so while the transaction can be restored using - ROLLBACK TO SAVEPOINT, the cursor can no longer be used. + ROLLBACK TO SAVEPOINT, the cursor can no longer be used. @@ -133,13 +133,13 @@ COMMIT; Compatibility - The SQL standard specifies that the key word - SAVEPOINT is mandatory, but PostgreSQL - and Oracle allow it to be omitted. SQL allows - only WORK, not TRANSACTION, as a noise word - after ROLLBACK. Also, SQL has an optional clause - AND [ NO ] CHAIN which is not currently supported by - PostgreSQL. Otherwise, this command conforms to + The SQL standard specifies that the key word + SAVEPOINT is mandatory, but PostgreSQL + and Oracle allow it to be omitted. SQL allows + only WORK, not TRANSACTION, as a noise word + after ROLLBACK. Also, SQL has an optional clause + AND [ NO ] CHAIN which is not currently supported by + PostgreSQL. Otherwise, this command conforms to the SQL standard. diff --git a/doc/src/sgml/ref/savepoint.sgml b/doc/src/sgml/ref/savepoint.sgml index 5b944a2561..6d40f4da42 100644 --- a/doc/src/sgml/ref/savepoint.sgml +++ b/doc/src/sgml/ref/savepoint.sgml @@ -114,11 +114,11 @@ COMMIT; SQL requires a savepoint to be destroyed automatically when another savepoint with the same name is established. In - PostgreSQL, the old savepoint is kept, though only the more + PostgreSQL, the old savepoint is kept, though only the more recent one will be used when rolling back or releasing. (Releasing the - newer savepoint with RELEASE SAVEPOINT will cause the older one - to again become accessible to ROLLBACK TO SAVEPOINT and - RELEASE SAVEPOINT.) Otherwise, SAVEPOINT is + newer savepoint with RELEASE SAVEPOINT will cause the older one + to again become accessible to ROLLBACK TO SAVEPOINT and + RELEASE SAVEPOINT.) Otherwise, SAVEPOINT is fully SQL conforming. diff --git a/doc/src/sgml/ref/security_label.sgml b/doc/src/sgml/ref/security_label.sgml index 971b928a02..999f9c80cd 100644 --- a/doc/src/sgml/ref/security_label.sgml +++ b/doc/src/sgml/ref/security_label.sgml @@ -60,12 +60,12 @@ SECURITY LABEL [ FOR provider ] ON object. An arbitrary number of security labels, one per label provider, can be associated with a given database object. Label providers are loadable modules which register themselves by using the function - register_label_provider. + register_label_provider. - register_label_provider is not an SQL function; it can + register_label_provider is not an SQL function; it can only be called from C code loaded into the backend. @@ -74,11 +74,11 @@ SECURITY LABEL [ FOR provider ] ON The label provider determines whether a given label is valid and whether it is permissible to assign that label to a given object. The meaning of a given label is likewise at the discretion of the label provider. - PostgreSQL places no restrictions on whether or how a + PostgreSQL places no restrictions on whether or how a label provider must interpret security labels; it merely provides a mechanism for storing them. 
In practice, this facility is intended to allow integration with label-based mandatory access control (MAC) systems such as - SE-Linux. Such systems make all access control decisions + SE-Linux. Such systems make all access control decisions based on object labels, rather than traditional discretionary access control (DAC) concepts such as users and groups. @@ -120,14 +120,14 @@ SECURITY LABEL [ FOR provider ] ON The mode of a function or aggregate - argument: IN, OUT, - INOUT, or VARIADIC. - If omitted, the default is IN. + argument: IN, OUT, + INOUT, or VARIADIC. + If omitted, the default is IN. Note that SECURITY LABEL does not actually - pay any attention to OUT arguments, since only the input + pay any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. - So it is sufficient to list the IN, INOUT, - and VARIADIC arguments. + So it is sufficient to list the IN, INOUT, + and VARIADIC arguments. @@ -178,7 +178,7 @@ SECURITY LABEL [ FOR provider ] ON label - The new security label, written as a string literal; or NULL + The new security label, written as a string literal; or NULL to drop the security label. diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml index 57f11e66fb..7355e790f6 100644 --- a/doc/src/sgml/ref/select.sgml +++ b/doc/src/sgml/ref/select.sgml @@ -163,10 +163,10 @@ TABLE [ ONLY ] table_name [ * ] operator returns the rows that are in the first result set but not in the second. In all three cases, duplicate rows are eliminated unless ALL is specified. The noise - word DISTINCT can be added to explicitly specify - eliminating duplicate rows. Notice that DISTINCT is + word DISTINCT can be added to explicitly specify + eliminating duplicate rows. Notice that DISTINCT is the default behavior here, even though ALL is - the default for SELECT itself. (See + the default for SELECT itself. (See , , and below.) @@ -194,7 +194,7 @@ TABLE [ ONLY ] table_name [ * ] - If FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE + If FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE or FOR KEY SHARE is specified, the SELECT statement locks the selected rows @@ -207,7 +207,7 @@ TABLE [ ONLY ] table_name [ * ] You must have SELECT privilege on each column used - in a SELECT command. The use of FOR NO KEY UPDATE, + in a SELECT command. The use of FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE or FOR KEY SHARE requires UPDATE privilege as well (for at least one column @@ -226,15 +226,15 @@ TABLE [ ONLY ] table_name [ * ] subqueries that can be referenced by name in the primary query. The subqueries effectively act as temporary tables or views for the duration of the primary query. - Each subquery can be a SELECT, TABLE, VALUES, + Each subquery can be a SELECT, TABLE, VALUES, INSERT, UPDATE or DELETE statement. When writing a data-modifying statement (INSERT, UPDATE or DELETE) in - WITH, it is usual to include a RETURNING clause. - It is the output of RETURNING, not the underlying + WITH, it is usual to include a RETURNING clause. + It is the output of RETURNING, not the underlying table that the statement modifies, that forms the temporary table that is - read by the primary query. If RETURNING is omitted, the + read by the primary query. If RETURNING is omitted, the statement is still executed, but it produces no output so it cannot be referenced as a table by the primary query. 
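For illustration, a data-modifying WITH whose RETURNING output feeds the primary query might look like this (products and products_log are hypothetical tables with compatible column lists):

    WITH moved_rows AS (
        DELETE FROM products
        WHERE price < 10
        RETURNING *        -- these rows, not the table, are what the INSERT reads
    )
    INSERT INTO products_log
    SELECT * FROM moved_rows;

If RETURNING were omitted, the DELETE would still execute, but moved_rows could no longer be referenced as a table by the INSERT.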
@@ -254,7 +254,7 @@ TABLE [ ONLY ] table_name [ * ] non_recursive_term UNION [ ALL | DISTINCT ] recursive_term where the recursive self-reference must appear on the right-hand - side of the UNION. Only one recursive self-reference + side of the UNION. Only one recursive self-reference is permitted per query. Recursive data-modifying statements are not supported, but you can use the results of a recursive SELECT query in @@ -285,7 +285,7 @@ TABLE [ ONLY ] table_name [ * ] The primary query and the WITH queries are all (notionally) executed at the same time. This implies that the effects of a data-modifying statement in WITH cannot be seen from - other parts of the query, other than by reading its RETURNING + other parts of the query, other than by reading its RETURNING output. If two such data-modifying statements attempt to modify the same row, the results are unspecified. @@ -303,7 +303,7 @@ TABLE [ ONLY ] table_name [ * ] tables for the SELECT. If multiple sources are specified, the result is the Cartesian product (cross join) of all the sources. But usually qualification conditions are added (via - WHERE) to restrict the returned rows to a small subset of the + WHERE) to restrict the returned rows to a small subset of the Cartesian product. @@ -317,10 +317,10 @@ TABLE [ ONLY ] table_name [ * ] The name (optionally schema-qualified) of an existing table or view. - If ONLY is specified before the table name, only that - table is scanned. If ONLY is not specified, the table + If ONLY is specified before the table name, only that + table is scanned. If ONLY is not specified, the table and all its descendant tables (if any) are scanned. Optionally, - * can be specified after the table name to explicitly + * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -330,14 +330,14 @@ TABLE [ ONLY ] table_name [ * ] alias - A substitute name for the FROM item containing the + A substitute name for the FROM item containing the alias. An alias is used for brevity or to eliminate ambiguity for self-joins (where the same table is scanned multiple times). When an alias is provided, it completely hides the actual name of the table or function; for example given - FROM foo AS f, the remainder of the - SELECT must refer to this FROM - item as f not foo. If an alias is + FROM foo AS f, the remainder of the + SELECT must refer to this FROM + item as f not foo. If an alias is written, a column alias list can also be written to provide substitute names for one or more columns of the table. @@ -348,12 +348,12 @@ TABLE [ ONLY ] table_name [ * ] TABLESAMPLE sampling_method ( argument [, ...] ) [ REPEATABLE ( seed ) ] - A TABLESAMPLE clause after - a table_name indicates that the + A TABLESAMPLE clause after + a table_name indicates that the specified sampling_method should be used to retrieve a subset of the rows in that table. This sampling precedes the application of any other filters such - as WHERE clauses. + as WHERE clauses. The standard PostgreSQL distribution includes two sampling methods, BERNOULLI and SYSTEM, and other sampling methods can be @@ -361,11 +361,11 @@ TABLE [ ONLY ] table_name [ * ] - The BERNOULLI and SYSTEM sampling methods - each accept a single argument + The BERNOULLI and SYSTEM sampling methods + each accept a single argument which is the fraction of the table to sample, expressed as a percentage between 0 and 100. This argument can be - any real-valued expression. (Other sampling methods might + any real-valued expression. 
(Other sampling methods might accept more or different arguments.) These two methods each return a randomly-chosen sample of the table that will contain approximately the specified percentage of the table's rows. @@ -383,10 +383,10 @@ TABLE [ ONLY ] table_name [ * ] The optional REPEATABLE clause specifies - a seed number or expression to use + a seed number or expression to use for generating random numbers within the sampling method. The seed value can be any non-null floating-point value. Two queries that - specify the same seed and argument + specify the same seed and argument values will select the same sample of the table, if the table has not been changed meanwhile. But different seed values will usually produce different samples. @@ -420,9 +420,9 @@ TABLE [ ONLY ] table_name [ * ] with_query_name - A WITH query is referenced by writing its name, + A WITH query is referenced by writing its name, just as though the query's name were a table name. (In fact, - the WITH query hides any real table of the same name + the WITH query hides any real table of the same name for the purposes of the primary query. If necessary, you can refer to a real table of the same name by schema-qualifying the table's name.) @@ -456,8 +456,8 @@ TABLE [ ONLY ] table_name [ * ] Multiple function calls can be combined into a - single FROM-clause item by surrounding them - with ROWS FROM( ... ). The output of such an item is the + single FROM-clause item by surrounding them + with ROWS FROM( ... ). The output of such an item is the concatenation of the first row from each function, then the second row from each function, etc. If some of the functions produce fewer rows than others, null values are substituted for the missing data, so @@ -467,28 +467,28 @@ TABLE [ ONLY ] table_name [ * ] If the function has been defined as returning the - record data type, then an alias or the key word - AS must be present, followed by a column + record data type, then an alias or the key word + AS must be present, followed by a column definition list in the form ( column_name data_type , ... - ). The column definition list must match the + ). The column definition list must match the actual number and types of columns returned by the function. - When using the ROWS FROM( ... ) syntax, if one of the + When using the ROWS FROM( ... ) syntax, if one of the functions requires a column definition list, it's preferred to put the column definition list after the function call inside - ROWS FROM( ... ). A column definition list can be placed - after the ROWS FROM( ... ) construct only if there's just - a single function and no WITH ORDINALITY clause. + ROWS FROM( ... ). A column definition list can be placed + after the ROWS FROM( ... ) construct only if there's just + a single function and no WITH ORDINALITY clause. To use ORDINALITY together with a column definition - list, you must use the ROWS FROM( ... ) syntax and put the - column definition list inside ROWS FROM( ... ). + list, you must use the ROWS FROM( ... ) syntax and put the + column definition list inside ROWS FROM( ... ). @@ -516,9 +516,9 @@ TABLE [ ONLY ] table_name [ * ] - For the INNER and OUTER join types, a + For the INNER and OUTER join types, a join condition must be specified, namely exactly one of - NATURAL, ON NATURAL, ON join_condition, or USING (join_column [, ...]). 
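To make the three condition styles concrete (emp and dept are hypothetical tables whose only common column is deptno):

    SELECT * FROM emp INNER JOIN dept ON emp.deptno = dept.deptno;
    SELECT * FROM emp INNER JOIN dept USING (deptno);
    SELECT * FROM emp NATURAL INNER JOIN dept;

All three join the same rows; as described below, the USING and NATURAL forms additionally emit just one copy of the matched deptno column, whereas the ON form keeps both.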
@@ -527,46 +527,46 @@ TABLE [ ONLY ] table_name [ * ] - A JOIN clause combines two FROM - items, which for convenience we will refer to as tables, - though in reality they can be any type of FROM item. + A JOIN clause combines two FROM + items, which for convenience we will refer to as tables, + though in reality they can be any type of FROM item. Use parentheses if necessary to determine the order of nesting. In the absence of parentheses, JOINs nest left-to-right. In any case JOIN binds more - tightly than the commas separating FROM-list items. + tightly than the commas separating FROM-list items. - CROSS JOIN and INNER JOIN + CROSS JOIN and INNER JOIN produce a simple Cartesian product, the same result as you get from - listing the two tables at the top level of FROM, + listing the two tables at the top level of FROM, but restricted by the join condition (if any). - CROSS JOIN is equivalent to INNER JOIN ON - (TRUE), that is, no rows are removed by qualification. + CROSS JOIN is equivalent to INNER JOIN ON + (TRUE), that is, no rows are removed by qualification. These join types are just a notational convenience, since they - do nothing you couldn't do with plain FROM and - WHERE. + do nothing you couldn't do with plain FROM and + WHERE. - LEFT OUTER JOIN returns all rows in the qualified + LEFT OUTER JOIN returns all rows in the qualified Cartesian product (i.e., all combined rows that pass its join condition), plus one copy of each row in the left-hand table for which there was no right-hand row that passed the join condition. This left-hand row is extended to the full width of the joined table by inserting null values for the - right-hand columns. Note that only the JOIN + right-hand columns. Note that only the JOIN clause's own condition is considered while deciding which rows have matches. Outer conditions are applied afterwards. - Conversely, RIGHT OUTER JOIN returns all the + Conversely, RIGHT OUTER JOIN returns all the joined rows, plus one row for each unmatched right-hand row (extended with nulls on the left). This is just a notational convenience, since you could convert it to a LEFT - OUTER JOIN by switching the left and right tables. + OUTER JOIN by switching the left and right tables. - FULL OUTER JOIN returns all the joined rows, plus + FULL OUTER JOIN returns all the joined rows, plus one row for each unmatched left-hand row (extended with nulls on the right), plus one row for each unmatched right-hand row (extended with nulls on the left). @@ -593,7 +593,7 @@ TABLE [ ONLY ] table_name [ * ] A clause of the form USING ( a, b, ... ) is shorthand for ON left_table.a = right_table.a AND left_table.b = right_table.b .... Also, - USING implies that only one of each pair of + USING implies that only one of each pair of equivalent columns will be included in the join output, not both. @@ -605,10 +605,10 @@ TABLE [ ONLY ] table_name [ * ] NATURAL is shorthand for a - USING list that mentions all columns in the two + USING list that mentions all columns in the two tables that have matching names. If there are no common column names, NATURAL is equivalent - to ON TRUE. + to ON TRUE. @@ -618,32 +618,32 @@ TABLE [ ONLY ] table_name [ * ] The LATERAL key word can precede a - sub-SELECT FROM item. This allows the - sub-SELECT to refer to columns of FROM - items that appear before it in the FROM list. (Without + sub-SELECT FROM item. This allows the + sub-SELECT to refer to columns of FROM + items that appear before it in the FROM list. 
(Without LATERAL, each sub-SELECT is evaluated independently and so cannot cross-reference any other - FROM item.) + FROM item.) LATERAL can also precede a function-call - FROM item, but in this case it is a noise word, because - the function expression can refer to earlier FROM items + FROM item, but in this case it is a noise word, because + the function expression can refer to earlier FROM items in any case. A LATERAL item can appear at top level in the - FROM list, or within a JOIN tree. In the + FROM list, or within a JOIN tree. In the latter case it can also refer to any items that are on the left-hand - side of a JOIN that it is on the right-hand side of. + side of a JOIN that it is on the right-hand side of. - When a FROM item contains LATERAL + When a FROM item contains LATERAL cross-references, evaluation proceeds as follows: for each row of the - FROM item providing the cross-referenced column(s), or - set of rows of multiple FROM items providing the + FROM item providing the cross-referenced column(s), or + set of rows of multiple FROM items providing the columns, the LATERAL item is evaluated using that row or row set's values of the columns. The resulting row(s) are joined as usual with the rows they were computed from. This is @@ -651,14 +651,14 @@ TABLE [ ONLY ] table_name [ * ] - The column source table(s) must be INNER or - LEFT joined to the LATERAL item, else + The column source table(s) must be INNER or + LEFT joined to the LATERAL item, else there would not be a well-defined set of rows from which to compute each set of rows for the LATERAL item. Thus, - although a construct such as X RIGHT JOIN - LATERAL Y is syntactically valid, it is - not actually allowed for Y to reference - X. + although a construct such as X RIGHT JOIN + LATERAL Y is syntactically valid, it is + not actually allowed for Y to reference + X. @@ -707,13 +707,13 @@ GROUP BY grouping_element [, ...] - If any of GROUPING SETS, ROLLUP or - CUBE are present as grouping elements, then the - GROUP BY clause as a whole defines some number of - independent grouping sets. The effect of this is - equivalent to constructing a UNION ALL between + If any of GROUPING SETS, ROLLUP or + CUBE are present as grouping elements, then the + GROUP BY clause as a whole defines some number of + independent grouping sets. The effect of this is + equivalent to constructing a UNION ALL between subqueries with the individual grouping sets as their - GROUP BY clauses. For further details on the handling + GROUP BY clauses. For further details on the handling of grouping sets see . @@ -744,15 +744,15 @@ GROUP BY grouping_element [, ...] Keep in mind that all aggregate functions are evaluated before - evaluating any scalar expressions in the HAVING - clause or SELECT list. This means that, for example, - a CASE expression cannot be used to skip evaluation of + evaluating any scalar expressions in the HAVING + clause or SELECT list. This means that, for example, + a CASE expression cannot be used to skip evaluation of an aggregate function; see . - Currently, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE and FOR KEY SHARE cannot be + Currently, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE and FOR KEY SHARE cannot be specified with GROUP BY. @@ -784,9 +784,9 @@ HAVING condition The presence of HAVING turns a query into a grouped - query even if there is no GROUP BY clause. This is the + query even if there is no GROUP BY clause. This is the same as what happens when the query contains aggregate functions but - no GROUP BY clause. 
All the selected rows are considered to + no GROUP BY clause. All the selected rows are considered to form a single group, and the SELECT list and HAVING clause can only reference table columns from within aggregate functions. Such a query will emit a single row if the @@ -794,8 +794,8 @@ HAVING condition - Currently, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE and FOR KEY SHARE cannot be + Currently, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE and FOR KEY SHARE cannot be specified with HAVING. @@ -809,7 +809,7 @@ HAVING condition WINDOW window_name AS ( window_definition ) [, ...] where window_name is - a name that can be referenced from OVER clauses or + a name that can be referenced from OVER clauses or subsequent window definitions, and window_definition is @@ -822,29 +822,29 @@ WINDOW window_name AS ( If an existing_window_name - is specified it must refer to an earlier entry in the WINDOW + is specified it must refer to an earlier entry in the WINDOW list; the new window copies its partitioning clause from that entry, as well as its ordering clause if any. In this case the new window cannot - specify its own PARTITION BY clause, and it can specify - ORDER BY only if the copied window does not have one. + specify its own PARTITION BY clause, and it can specify + ORDER BY only if the copied window does not have one. The new window always uses its own frame clause; the copied window must not specify a frame clause. - The elements of the PARTITION BY list are interpreted in + The elements of the PARTITION BY list are interpreted in much the same fashion as elements of a , except that they are always simple expressions and never the name or number of an output column. Another difference is that these expressions can contain aggregate - function calls, which are not allowed in a regular GROUP BY + function calls, which are not allowed in a regular GROUP BY clause. They are allowed here because windowing occurs after grouping and aggregation. - Similarly, the elements of the ORDER BY list are interpreted + Similarly, the elements of the ORDER BY list are interpreted in much the same fashion as elements of an , except that the expressions are always taken as simple expressions and never the name @@ -852,18 +852,18 @@ WINDOW window_name AS ( - The optional frame_clause defines - the window frame for window functions that depend on the + The optional frame_clause defines + the window frame for window functions that depend on the frame (not all do). The window frame is a set of related rows for - each row of the query (called the current row). - The frame_clause can be one of + each row of the query (called the current row). + The frame_clause can be one of -{ RANGE | ROWS } frame_start -{ RANGE | ROWS } BETWEEN frame_start AND frame_end +{ RANGE | ROWS } frame_start +{ RANGE | ROWS } BETWEEN frame_start AND frame_end - where frame_start and frame_end can be + where frame_start and frame_end can be one of @@ -874,34 +874,34 @@ CURRENT ROW UNBOUNDED FOLLOWING - If frame_end is omitted it defaults to CURRENT - ROW. Restrictions are that - frame_start cannot be UNBOUNDED FOLLOWING, - frame_end cannot be UNBOUNDED PRECEDING, - and the frame_end choice cannot appear earlier in the - above list than the frame_start choice — for example - RANGE BETWEEN CURRENT ROW AND value + If frame_end is omitted it defaults to CURRENT + ROW. 
Restrictions are that + frame_start cannot be UNBOUNDED FOLLOWING, + frame_end cannot be UNBOUNDED PRECEDING, + and the frame_end choice cannot appear earlier in the + above list than the frame_start choice — for example + RANGE BETWEEN CURRENT ROW AND value PRECEDING is not allowed. - The default framing option is RANGE UNBOUNDED PRECEDING, + The default framing option is RANGE UNBOUNDED PRECEDING, which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND - CURRENT ROW; it sets the frame to be all rows from the partition start + CURRENT ROW; it sets the frame to be all rows from the partition start up through the current row's last peer (a row that ORDER - BY considers equivalent to the current row, or all rows if there - is no ORDER BY). - In general, UNBOUNDED PRECEDING means that the frame + BY considers equivalent to the current row, or all rows if there + is no ORDER BY). + In general, UNBOUNDED PRECEDING means that the frame starts with the first row of the partition, and similarly - UNBOUNDED FOLLOWING means that the frame ends with the last - row of the partition (regardless of RANGE or ROWS - mode). In ROWS mode, CURRENT ROW + UNBOUNDED FOLLOWING means that the frame ends with the last + row of the partition (regardless of RANGE or ROWS + mode). In ROWS mode, CURRENT ROW means that the frame starts or ends with the current row; but in - RANGE mode it means that the frame starts or ends with - the current row's first or last peer in the ORDER BY ordering. - The value PRECEDING and - value FOLLOWING cases are currently only - allowed in ROWS mode. They indicate that the frame starts + RANGE mode it means that the frame starts or ends with + the current row's first or last peer in the ORDER BY ordering. + The value PRECEDING and + value FOLLOWING cases are currently only + allowed in ROWS mode. They indicate that the frame starts or ends with the row that many rows before or after the current row. value must be an integer expression not containing any variables, aggregate functions, or window functions. @@ -910,32 +910,32 @@ UNBOUNDED FOLLOWING - Beware that the ROWS options can produce unpredictable - results if the ORDER BY ordering does not order the rows - uniquely. The RANGE options are designed to ensure that - rows that are peers in the ORDER BY ordering are treated + Beware that the ROWS options can produce unpredictable + results if the ORDER BY ordering does not order the rows + uniquely. The RANGE options are designed to ensure that + rows that are peers in the ORDER BY ordering are treated alike; all peer rows will be in the same frame. The purpose of a WINDOW clause is to specify the - behavior of window functions appearing in the query's + behavior of window functions appearing in the query's or . These functions can reference the WINDOW clause entries by name - in their OVER clauses. A WINDOW clause + in their OVER clauses. A WINDOW clause entry does not have to be referenced anywhere, however; if it is not used in the query it is simply ignored. It is possible to use window functions without any WINDOW clause at all, since a window function call can specify its window definition directly in - its OVER clause. However, the WINDOW + its OVER clause. However, the WINDOW clause saves typing when the same window definition is needed for more than one window function. - Currently, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE and FOR KEY SHARE cannot be + Currently, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE and FOR KEY SHARE cannot be specified with WINDOW. 
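A minimal sketch of the reusable named window described above, assuming a hypothetical empsalary table with depname, empno, and salary columns:

SELECT depname, empno, salary,
       rank() OVER w,
       avg(salary) OVER w
  FROM empsalary
WINDOW w AS (PARTITION BY depname ORDER BY salary DESC);

Both window function calls share the single definition of w; if the partitioning or ordering ever needs to change, it is edited in one place and the two calls cannot drift apart.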
@@ -952,20 +952,20 @@ UNBOUNDED FOLLOWING The SELECT list (between the key words - SELECT and FROM) specifies expressions + SELECT and FROM) specifies expressions that form the output rows of the SELECT statement. The expressions can (and usually do) refer to columns - computed in the FROM clause. + computed in the FROM clause. Just as in a table, every output column of a SELECT has a name. In a simple SELECT this name is just - used to label the column for display, but when the SELECT + used to label the column for display, but when the SELECT is a sub-query of a larger query, the name is seen by the larger query as the column name of the virtual table produced by the sub-query. To specify the name to use for an output column, write - AS output_name + AS output_name after the column's expression. (You can omit AS, but only if the desired output name does not match any PostgreSQL keyword (see An output column's name can be used to refer to the column's value in - ORDER BY and GROUP BY clauses, but not in the - WHERE or HAVING clauses; there you must write + ORDER BY and GROUP BY clauses, but not in the + WHERE or HAVING clauses; there you must write out the expression instead. @@ -993,7 +993,7 @@ UNBOUNDED FOLLOWING rows. Also, you can write table_name.* as a shorthand for the columns coming from just that table. In these - cases it is not possible to specify new names with AS; + cases it is not possible to specify new names with AS; the output column names will be the same as the table columns' names. @@ -1008,11 +1008,11 @@ UNBOUNDED FOLLOWING contains any volatile or expensive functions. With that behavior, the order of function evaluations is more intuitive and there will not be evaluations corresponding to rows that never appear in the output. - PostgreSQL will effectively evaluate output expressions + PostgreSQL will effectively evaluate output expressions after sorting and limiting, so long as those expressions are not referenced in DISTINCT, ORDER BY or GROUP BY. (As a counterexample, SELECT - f(x) FROM tab ORDER BY 1 clearly must evaluate f(x) + f(x) FROM tab ORDER BY 1 clearly must evaluate f(x) before sorting.) Output expressions that contain set-returning functions are effectively evaluated after sorting and before limiting, so that LIMIT will act to cut off the output from a @@ -1021,7 +1021,7 @@ UNBOUNDED FOLLOWING - PostgreSQL versions before 9.6 did not provide any + PostgreSQL versions before 9.6 did not provide any guarantees about the timing of evaluation of output expressions versus sorting and limiting; it depended on the form of the chosen query plan. @@ -1032,9 +1032,9 @@ UNBOUNDED FOLLOWING <literal>DISTINCT</literal> Clause - If SELECT DISTINCT is specified, all duplicate rows are + If SELECT DISTINCT is specified, all duplicate rows are removed from the result set (one row is kept from each group of - duplicates). SELECT ALL specifies the opposite: all rows are + duplicates). SELECT ALL specifies the opposite: all rows are kept; that is the default. @@ -1044,9 +1044,9 @@ UNBOUNDED FOLLOWING keeps only the first row of each set of rows where the given expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for - ORDER BY (see above). Note that the first + ORDER BY (see above). Note that the first row of each set is unpredictable unless ORDER - BY is used to ensure that the desired row appears first. For + BY is used to ensure that the desired row appears first. 
For example: SELECT DISTINCT ON (location) location, time, report @@ -1054,21 +1054,21 @@ SELECT DISTINCT ON (location) location, time, report ORDER BY location, time DESC; retrieves the most recent weather report for each location. But - if we had not used ORDER BY to force descending order + if we had not used ORDER BY to force descending order of time values for each location, we'd have gotten a report from an unpredictable time for each location. - The DISTINCT ON expression(s) must match the leftmost - ORDER BY expression(s). The ORDER BY clause + The DISTINCT ON expression(s) must match the leftmost + ORDER BY expression(s). The ORDER BY clause will normally contain additional expression(s) that determine the - desired precedence of rows within each DISTINCT ON group. + desired precedence of rows within each DISTINCT ON group. - Currently, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE and FOR KEY SHARE cannot be + Currently, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE and FOR KEY SHARE cannot be specified with DISTINCT. @@ -1082,9 +1082,9 @@ SELECT DISTINCT ON (location) location, time, report select_statement UNION [ ALL | DISTINCT ] select_statement select_statement is any SELECT statement without an ORDER - BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, + BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE, or FOR KEY SHARE clause. - (ORDER BY and LIMIT can be attached to a + (ORDER BY and LIMIT can be attached to a subexpression if it is enclosed in parentheses. Without parentheses, these clauses will be taken to apply to the result of the UNION, not to its right-hand input @@ -1103,26 +1103,26 @@ SELECT DISTINCT ON (location) location, time, report - The result of UNION does not contain any duplicate - rows unless the ALL option is specified. - ALL prevents elimination of duplicates. (Therefore, - UNION ALL is usually significantly quicker than - UNION; use ALL when you can.) - DISTINCT can be written to explicitly specify the + The result of UNION does not contain any duplicate + rows unless the ALL option is specified. + ALL prevents elimination of duplicates. (Therefore, + UNION ALL is usually significantly quicker than + UNION; use ALL when you can.) + DISTINCT can be written to explicitly specify the default behavior of eliminating duplicate rows. - Multiple UNION operators in the same + Multiple UNION operators in the same SELECT statement are evaluated left to right, unless otherwise indicated by parentheses. - Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and - FOR KEY SHARE cannot be - specified either for a UNION result or for any input of a - UNION. + Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and + FOR KEY SHARE cannot be + specified either for a UNION result or for any input of a + UNION. @@ -1135,8 +1135,8 @@ SELECT DISTINCT ON (location) location, time, report select_statement INTERSECT [ ALL | DISTINCT ] select_statement select_statement is any SELECT statement without an ORDER - BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE, or FOR KEY SHARE clause. + BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE, or FOR KEY SHARE clause. @@ -1148,11 +1148,11 @@ SELECT DISTINCT ON (location) location, time, report The result of INTERSECT does not contain any - duplicate rows unless the ALL option is specified. - With ALL, a row that has m duplicates in the - left table and n duplicates in the right table will appear - min(m,n) times in the result set. - DISTINCT can be written to explicitly specify the + duplicate rows unless the ALL option is specified. 
+ With ALL, a row that has m duplicates in the + left table and n duplicates in the right table will appear + min(m,n) times in the result set. + DISTINCT can be written to explicitly specify the default behavior of eliminating duplicate rows. @@ -1167,10 +1167,10 @@ SELECT DISTINCT ON (location) location, time, report - Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and - FOR KEY SHARE cannot be - specified either for an INTERSECT result or for any input of - an INTERSECT. + Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and + FOR KEY SHARE cannot be + specified either for an INTERSECT result or for any input of + an INTERSECT. @@ -1183,8 +1183,8 @@ SELECT DISTINCT ON (location) location, time, report select_statement EXCEPT [ ALL | DISTINCT ] select_statement select_statement is any SELECT statement without an ORDER - BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, - FOR SHARE, or FOR KEY SHARE clause. + BY, LIMIT, FOR NO KEY UPDATE, FOR UPDATE, + FOR SHARE, or FOR KEY SHARE clause. @@ -1195,26 +1195,26 @@ SELECT DISTINCT ON (location) location, time, report The result of EXCEPT does not contain any - duplicate rows unless the ALL option is specified. - With ALL, a row that has m duplicates in the - left table and n duplicates in the right table will appear - max(m-n,0) times in the result set. - DISTINCT can be written to explicitly specify the + duplicate rows unless the ALL option is specified. + With ALL, a row that has m duplicates in the + left table and n duplicates in the right table will appear + max(m-n,0) times in the result set. + DISTINCT can be written to explicitly specify the default behavior of eliminating duplicate rows. Multiple EXCEPT operators in the same SELECT statement are evaluated left to right, - unless parentheses dictate otherwise. EXCEPT binds at - the same level as UNION. + unless parentheses dictate otherwise. EXCEPT binds at + the same level as UNION. - Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and - FOR KEY SHARE cannot be - specified either for an EXCEPT result or for any input of - an EXCEPT. + Currently, FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE and + FOR KEY SHARE cannot be + specified either for an EXCEPT result or for any input of + an EXCEPT. @@ -1247,7 +1247,7 @@ ORDER BY expression [ ASC | DESC | ordering on the basis of a column that does not have a unique name. This is never absolutely necessary because it is always possible to assign a name to an output column using the - AS clause. + AS clause. @@ -1258,59 +1258,59 @@ ORDER BY expression [ ASC | DESC | SELECT name FROM distributors ORDER BY code; - A limitation of this feature is that an ORDER BY - clause applying to the result of a UNION, - INTERSECT, or EXCEPT clause can only + A limitation of this feature is that an ORDER BY + clause applying to the result of a UNION, + INTERSECT, or EXCEPT clause can only specify an output column name or number, not an expression. - If an ORDER BY expression is a simple name that + If an ORDER BY expression is a simple name that matches both an output column name and an input column name, - ORDER BY will interpret it as the output column name. - This is the opposite of the choice that GROUP BY will + ORDER BY will interpret it as the output column name. + This is the opposite of the choice that GROUP BY will make in the same situation. This inconsistency is made to be compatible with the SQL standard. - Optionally one can add the key word ASC (ascending) or - DESC (descending) after any expression in the - ORDER BY clause. 
If not specified, ASC is + Optionally one can add the key word ASC (ascending) or + DESC (descending) after any expression in the + ORDER BY clause. If not specified, ASC is assumed by default. Alternatively, a specific ordering operator - name can be specified in the USING clause. + name can be specified in the USING clause. An ordering operator must be a less-than or greater-than member of some B-tree operator family. - ASC is usually equivalent to USING < and - DESC is usually equivalent to USING >. + ASC is usually equivalent to USING < and + DESC is usually equivalent to USING >. (But the creator of a user-defined data type can define exactly what the default sort ordering is, and it might correspond to operators with other names.) - If NULLS LAST is specified, null values sort after all - non-null values; if NULLS FIRST is specified, null values + If NULLS LAST is specified, null values sort after all + non-null values; if NULLS FIRST is specified, null values sort before all non-null values. If neither is specified, the default - behavior is NULLS LAST when ASC is specified - or implied, and NULLS FIRST when DESC is specified + behavior is NULLS LAST when ASC is specified + or implied, and NULLS FIRST when DESC is specified (thus, the default is to act as though nulls are larger than non-nulls). - When USING is specified, the default nulls ordering depends + When USING is specified, the default nulls ordering depends on whether the operator is a less-than or greater-than operator. Note that ordering options apply only to the expression they follow; - for example ORDER BY x, y DESC does not mean - the same thing as ORDER BY x DESC, y DESC. + for example ORDER BY x, y DESC does not mean + the same thing as ORDER BY x DESC, y DESC. Character-string data is sorted according to the collation that applies to the column being sorted. That can be overridden at need by including - a COLLATE clause in the + a COLLATE clause in the expression, for example - ORDER BY mycolumn COLLATE "en_US". + ORDER BY mycolumn COLLATE "en_US". For more information see and . @@ -1337,60 +1337,60 @@ OFFSET start If the count expression - evaluates to NULL, it is treated as LIMIT ALL, i.e., no + evaluates to NULL, it is treated as LIMIT ALL, i.e., no limit. If start evaluates - to NULL, it is treated the same as OFFSET 0. + to NULL, it is treated the same as OFFSET 0. SQL:2008 introduced a different syntax to achieve the same result, - which PostgreSQL also supports. It is: + which PostgreSQL also supports. It is: OFFSET start { ROW | ROWS } FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY In this syntax, to write anything except a simple integer constant for - start or start or count, you must write parentheses around it. - If count is - omitted in a FETCH clause, it defaults to 1. + If count is + omitted in a FETCH clause, it defaults to 1. ROW and ROWS as well as FIRST and NEXT are noise words that don't influence the effects of these clauses. According to the standard, the OFFSET clause must come before the FETCH clause if both are present; but - PostgreSQL is laxer and allows either order. + PostgreSQL is laxer and allows either order. - When using LIMIT, it is a good idea to use an - ORDER BY clause that constrains the result rows into a + When using LIMIT, it is a good idea to use an + ORDER BY clause that constrains the result rows into a unique order. 
Otherwise you will get an unpredictable subset of the query's rows — you might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? You - don't know what ordering unless you specify ORDER BY. + don't know what ordering unless you specify ORDER BY. - The query planner takes LIMIT into account when + The query planner takes LIMIT into account when generating a query plan, so you are very likely to get different plans (yielding different row orders) depending on what you use - for LIMIT and OFFSET. Thus, using - different LIMIT/OFFSET values to select + for LIMIT and OFFSET. Thus, using + different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable - result ordering with ORDER BY. This is not a bug; it + result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless - ORDER BY is used to constrain the order. + ORDER BY is used to constrain the order. - It is even possible for repeated executions of the same LIMIT + It is even possible for repeated executions of the same LIMIT query to return different subsets of the rows of a table, if there - is not an ORDER BY to enforce selection of a deterministic + is not an ORDER BY to enforce selection of a deterministic subset. Again, this is not a bug; determinism of the results is simply not guaranteed in such a case. @@ -1400,9 +1400,9 @@ FETCH { FIRST | NEXT } [ count ] { The Locking Clause - FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE - and FOR KEY SHARE - are locking clauses; they affect how SELECT + FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE + and FOR KEY SHARE + are locking clauses; they affect how SELECT locks rows as they are obtained from the table. @@ -1410,10 +1410,10 @@ FETCH { FIRST | NEXT } [ count ] { The locking clause has the general form -FOR lock_strength [ OF table_name [, ...] ] [ NOWAIT | SKIP LOCKED ] +FOR lock_strength [ OF table_name [, ...] ] [ NOWAIT | SKIP LOCKED ] - where lock_strength can be one of + where lock_strength can be one of UPDATE @@ -1430,20 +1430,20 @@ KEY SHARE To prevent the operation from waiting for other transactions to commit, - use either the NOWAIT or SKIP LOCKED - option. With NOWAIT, the statement reports an error, rather + use either the NOWAIT or SKIP LOCKED + option. With NOWAIT, the statement reports an error, rather than waiting, if a selected row cannot be locked immediately. With SKIP LOCKED, any selected rows that cannot be immediately locked are skipped. Skipping locked rows provides an inconsistent view of the data, so this is not suitable for general purpose work, but can be used to avoid lock contention with multiple consumers accessing a queue-like table. - Note that NOWAIT and SKIP LOCKED apply only + Note that NOWAIT and SKIP LOCKED apply only to the row-level lock(s) — the required ROW SHARE table-level lock is still taken in the ordinary way (see ). You can use - with the NOWAIT option first, + with the NOWAIT option first, if you need to acquire the table-level lock without waiting. @@ -1457,9 +1457,9 @@ KEY SHARE applied to a view or sub-query, it affects all tables used in the view or sub-query. However, these clauses - do not apply to WITH queries referenced by the primary query. - If you want row locking to occur within a WITH query, specify - a locking clause within the WITH query. + do not apply to WITH queries referenced by the primary query. 
+ If you want row locking to occur within a WITH query, specify + a locking clause within the WITH query. @@ -1469,7 +1469,7 @@ KEY SHARE implicitly affected) by more than one locking clause, then it is processed as if it was only specified by the strongest one. Similarly, a table is processed - as NOWAIT if that is specified in any of the clauses + as NOWAIT if that is specified in any of the clauses affecting it. Otherwise, it is processed as SKIP LOCKED if that is specified in any of the clauses affecting it. @@ -1483,16 +1483,16 @@ KEY SHARE When a locking clause - appears at the top level of a SELECT query, the rows that + appears at the top level of a SELECT query, the rows that are locked are exactly those that are returned by the query; in the case of a join query, the rows locked are those that contribute to returned join rows. In addition, rows that satisfied the query conditions as of the query snapshot will be locked, although they will not be returned if they were updated after the snapshot and no longer satisfy the query conditions. If a - LIMIT is used, locking stops + LIMIT is used, locking stops once enough rows have been returned to satisfy the limit (but note that - rows skipped over by OFFSET will get locked). Similarly, + rows skipped over by OFFSET will get locked). Similarly, if a locking clause is used in a cursor's query, only rows actually fetched or stepped past by the cursor will be locked. @@ -1500,7 +1500,7 @@ KEY SHARE When a locking clause - appears in a sub-SELECT, the rows locked are those + appears in a sub-SELECT, the rows locked are those returned to the outer query by the sub-query. This might involve fewer rows than inspection of the sub-query alone would suggest, since conditions from the outer query might be used to optimize @@ -1508,7 +1508,7 @@ KEY SHARE SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss WHERE col1 = 5; - will lock only rows having col1 = 5, even though that + will lock only rows having col1 = 5, even though that condition is not textually within the sub-query. @@ -1522,18 +1522,18 @@ SAVEPOINT s; UPDATE mytable SET ... WHERE key = 1; ROLLBACK TO s; - would fail to preserve the FOR UPDATE lock after the - ROLLBACK TO. This has been fixed in release 9.3. + would fail to preserve the FOR UPDATE lock after the + ROLLBACK TO. This has been fixed in release 9.3. - It is possible for a SELECT command running at the READ + It is possible for a SELECT command running at the READ COMMITTED transaction isolation level and using ORDER BY and a locking clause to return rows out of - order. This is because ORDER BY is applied first. + order. This is because ORDER BY is applied first. The command sorts the result, but might then block trying to obtain a lock - on one or more of the rows. Once the SELECT unblocks, some + on one or more of the rows. Once the SELECT unblocks, some of the ordering column values might have been modified, leading to those rows appearing to be out of order (though they are in order in terms of the original column values). This can be worked around at need by @@ -1542,11 +1542,11 @@ ROLLBACK TO s; SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss ORDER BY column1; - Note that this will result in locking all rows of mytable, - whereas FOR UPDATE at the top level would lock only the + Note that this will result in locking all rows of mytable, + whereas FOR UPDATE at the top level would lock only the actually returned rows. 
This can make for a significant performance - difference, particularly if the ORDER BY is combined with - LIMIT or other restrictions. So this technique is recommended + difference, particularly if the ORDER BY is combined with + LIMIT or other restrictions. So this technique is recommended only if concurrent updates of the ordering columns are expected and a strictly sorted result is required. @@ -1573,11 +1573,11 @@ TABLE name SELECT * FROM name It can be used as a top-level command or as a space-saving syntax - variant in parts of complex queries. Only the WITH, - UNION, INTERSECT, EXCEPT, - ORDER BY, LIMIT, OFFSET, - FETCH and FOR locking clauses can be used - with TABLE; the WHERE clause and any form of + variant in parts of complex queries. Only the WITH, + UNION, INTERSECT, EXCEPT, + ORDER BY, LIMIT, OFFSET, + FETCH and FOR locking clauses can be used + with TABLE; the WHERE clause and any form of aggregation cannot be used. @@ -1702,7 +1702,7 @@ SELECT actors.name - This example shows how to use a function in the FROM + This example shows how to use a function in the FROM clause, both with and without a column definition list: @@ -1744,7 +1744,7 @@ SELECT * FROM unnest(ARRAY['a','b','c','d','e','f']) WITH ORDINALITY; - This example shows how to use a simple WITH clause: + This example shows how to use a simple WITH clause: WITH t AS ( @@ -1764,7 +1764,7 @@ SELECT * FROM t 0.0735620250925422 - Notice that the WITH query was evaluated only once, + Notice that the WITH query was evaluated only once, so that we got two sets of the same three random values. @@ -1796,9 +1796,9 @@ SELECT distance, employee_name FROM employee_recursive; - This example uses LATERAL to apply a set-returning function - get_product_names() for each row of the - manufacturers table: + This example uses LATERAL to apply a set-returning function + get_product_names() for each row of the + manufacturers table: SELECT m.name AS mname, pname @@ -1866,7 +1866,7 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; This is not valid syntax according to the SQL standard. PostgreSQL allows it to be consistent with allowing zero-column tables. - However, an empty list is not allowed when DISTINCT is used. + However, an empty list is not allowed when DISTINCT is used. @@ -1874,19 +1874,19 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; Omitting the <literal>AS</literal> Key Word - In the SQL standard, the optional key word AS can be + In the SQL standard, the optional key word AS can be omitted before an output column name whenever the new column name is a valid column name (that is, not the same as any reserved keyword). PostgreSQL is slightly more - restrictive: AS is required if the new column name + restrictive: AS is required if the new column name matches any keyword at all, reserved or not. Recommended practice is - to use AS or double-quote output column names, to prevent + to use AS or double-quote output column names, to prevent any possible conflict against future keyword additions. In FROM items, both the standard and - PostgreSQL allow AS to + PostgreSQL allow AS to be omitted before an alias that is an unreserved keyword. But this is impractical for output column names, because of syntactic ambiguities. @@ -1899,12 +1899,12 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; The SQL standard requires parentheses around the table name when writing ONLY, for example SELECT * FROM ONLY - (tab1), ONLY (tab2) WHERE .... PostgreSQL + (tab1), ONLY (tab2) WHERE .... 
PostgreSQL considers these parentheses to be optional. - PostgreSQL allows a trailing * to be written to + PostgreSQL allows a trailing * to be written to explicitly specify the non-ONLY behavior of including child tables. The standard does not allow this. @@ -1919,9 +1919,9 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; <literal>TABLESAMPLE</literal> Clause Restrictions - The TABLESAMPLE clause is currently accepted only on + The TABLESAMPLE clause is currently accepted only on regular tables and materialized views. According to the SQL standard - it should be possible to apply it to any FROM item. + it should be possible to apply it to any FROM item. @@ -1930,16 +1930,16 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; PostgreSQL allows a function call to be - written directly as a member of the FROM list. In the SQL + written directly as a member of the FROM list. In the SQL standard it would be necessary to wrap such a function call in a sub-SELECT; that is, the syntax - FROM func(...) alias + FROM func(...) alias is approximately equivalent to - FROM LATERAL (SELECT func(...)) alias. - Note that LATERAL is considered to be implicit; this is - because the standard requires LATERAL semantics for an - UNNEST() item in FROM. - PostgreSQL treats UNNEST() the + FROM LATERAL (SELECT func(...)) alias. + Note that LATERAL is considered to be implicit; this is + because the standard requires LATERAL semantics for an + UNNEST() item in FROM. + PostgreSQL treats UNNEST() the same as other set-returning functions. @@ -1974,8 +1974,8 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; PostgreSQL recognizes functional dependency - (allowing columns to be omitted from GROUP BY) only when - a table's primary key is included in the GROUP BY list. + (allowing columns to be omitted from GROUP BY) only when + a table's primary key is included in the GROUP BY list. The SQL standard specifies additional conditions that should be recognized. @@ -1986,7 +1986,7 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; The SQL standard provides additional options for the window - frame_clause. + frame_clause. PostgreSQL currently supports only the options listed above. @@ -2011,26 +2011,26 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; - <literal>FOR NO KEY UPDATE</>, <literal>FOR UPDATE</>, <literal>FOR SHARE</>, <literal>FOR KEY SHARE</> + <literal>FOR NO KEY UPDATE</literal>, <literal>FOR UPDATE</literal>, <literal>FOR SHARE</literal>, <literal>FOR KEY SHARE</literal> - Although FOR UPDATE appears in the SQL standard, the - standard allows it only as an option of DECLARE CURSOR. - PostgreSQL allows it in any SELECT - query as well as in sub-SELECTs, but this is an extension. - The FOR NO KEY UPDATE, FOR SHARE and - FOR KEY SHARE variants, as well as the NOWAIT + Although FOR UPDATE appears in the SQL standard, the + standard allows it only as an option of DECLARE CURSOR. + PostgreSQL allows it in any SELECT + query as well as in sub-SELECTs, but this is an extension. + The FOR NO KEY UPDATE, FOR SHARE and + FOR KEY SHARE variants, as well as the NOWAIT and SKIP LOCKED options, do not appear in the standard. - Data-Modifying Statements in <literal>WITH</> + Data-Modifying Statements in <literal>WITH</literal> - PostgreSQL allows INSERT, - UPDATE, and DELETE to be used as WITH + PostgreSQL allows INSERT, + UPDATE, and DELETE to be used as WITH queries. This is not found in the SQL standard. 
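A minimal sketch of a data-modifying WITH query, assuming hypothetical old_orders and archived_orders tables with matching columns:

WITH moved AS (
    DELETE FROM old_orders
     WHERE order_date < '2017-01-01'
    RETURNING *
)
INSERT INTO archived_orders
SELECT * FROM moved;

The DELETE and the INSERT run in the same statement, so the rows are moved between the two tables atomically.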
@@ -2044,7 +2044,7 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; - ROWS FROM( ... ) is an extension of the SQL standard. + ROWS FROM( ... ) is an extension of the SQL standard. diff --git a/doc/src/sgml/ref/set.sgml b/doc/src/sgml/ref/set.sgml index 89c0fad195..8c44d0e156 100644 --- a/doc/src/sgml/ref/set.sgml +++ b/doc/src/sgml/ref/set.sgml @@ -66,15 +66,15 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone If SET LOCAL is used within a function that has a - SET option for the same variable (see + SET option for the same variable (see ), the effects of the SET LOCAL command disappear at function exit; that is, the value in effect when the function was called is restored anyway. This allows SET LOCAL to be used for dynamic or repeated changes of a parameter within a function, while still - having the convenience of using the SET option to save and - restore the caller's value. However, a regular SET command - overrides any surrounding function's SET option; its effects + having the convenience of using the SET option to save and + restore the caller's value. However, a regular SET command + overrides any surrounding function's SET option; its effects will persist unless rolled back. @@ -94,22 +94,22 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone - SESSION + SESSION Specifies that the command takes effect for the current session. - (This is the default if neither SESSION nor - LOCAL appears.) + (This is the default if neither SESSION nor + LOCAL appears.) - LOCAL + LOCAL Specifies that the command takes effect for only the current - transaction. After COMMIT or ROLLBACK, + transaction. After COMMIT or ROLLBACK, the session-level setting takes effect again. Issuing this outside of a transaction block emits a warning and otherwise has no effect. @@ -136,7 +136,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezoneDEFAULT can be written to specify resetting the parameter to its default value (that is, whatever - value it would have had if no SET had been executed + value it would have had if no SET had been executed in the current session). @@ -153,8 +153,8 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone SCHEMA - SET SCHEMA 'value' is an alias for - SET search_path TO value. Only one + SET SCHEMA 'value' is an alias for + SET search_path TO value. Only one schema can be specified using this syntax. @@ -163,8 +163,8 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone NAMES - SET NAMES value is an alias for - SET client_encoding TO value. + SET NAMES value is an alias for + SET client_encoding TO value. @@ -176,7 +176,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezonerandom
). Allowed values are floating-point numbers between -1 and 1, which are then - multiplied by 231-1. + multiplied by 231-1. @@ -191,8 +191,8 @@ SELECT setseed(value); TIME ZONE - SET TIME ZONE value is an alias - for SET timezone TO value. The + SET TIME ZONE value is an alias + for SET timezone TO value. The syntax SET TIME ZONE allows special syntax for the time zone specification. Here are examples of valid values: @@ -238,7 +238,7 @@ SELECT setseed(value); Set the time zone to your local time zone (that is, the - server's default value of timezone). + server's default value of timezone). @@ -248,8 +248,8 @@ SELECT setseed(value); Timezone settings given as numbers or intervals are internally translated to POSIX timezone syntax. For example, after - SET TIME ZONE -7, SHOW TIME ZONE would - report <-07>+07. + SET TIME ZONE -7, SHOW TIME ZONE would + report <-07>+07. @@ -270,7 +270,7 @@ SELECT setseed(value); functionality; see . Also, it is possible to UPDATE the pg_settings - system view to perform the equivalent of SET. + system view to perform the equivalent of SET. @@ -286,7 +286,7 @@ SET search_path TO my_schema, public; Set the style of date to traditional - POSTGRES with day before month + POSTGRES with day before month input convention: SET datestyle TO postgres, dmy; diff --git a/doc/src/sgml/ref/set_constraints.sgml b/doc/src/sgml/ref/set_constraints.sgml index 7c31871b0b..237a0a3988 100644 --- a/doc/src/sgml/ref/set_constraints.sgml +++ b/doc/src/sgml/ref/set_constraints.sgml @@ -67,18 +67,18 @@ SET CONSTRAINTS { ALL | name [, ... - Currently, only UNIQUE, PRIMARY KEY, - REFERENCES (foreign key), and EXCLUDE + Currently, only UNIQUE, PRIMARY KEY, + REFERENCES (foreign key), and EXCLUDE constraints are affected by this setting. - NOT NULL and CHECK constraints are + NOT NULL and CHECK constraints are always checked immediately when a row is inserted or modified - (not at the end of the statement). + (not at the end of the statement). Uniqueness and exclusion constraints that have not been declared - DEFERRABLE are also checked immediately. + DEFERRABLE are also checked immediately. - The firing of triggers that are declared as constraint triggers + The firing of triggers that are declared as constraint triggers is also controlled by this setting — they fire at the same time that the associated constraint should be checked. @@ -111,7 +111,7 @@ SET CONSTRAINTS { ALL | name [, ... This command complies with the behavior defined in the SQL standard, except for the limitation that, in PostgreSQL, it does not apply to - NOT NULL and CHECK constraints. + NOT NULL and CHECK constraints. Also, PostgreSQL checks non-deferrable uniqueness constraints immediately, not at end of statement as the standard would suggest. diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml index a97ceabcff..eac4b3405a 100644 --- a/doc/src/sgml/ref/set_role.sgml +++ b/doc/src/sgml/ref/set_role.sgml @@ -35,7 +35,7 @@ RESET ROLE identifier of the current SQL session to be role_name. The role name can be written as either an identifier or a string literal. - After SET ROLE, permissions checking for SQL commands + After SET ROLE, permissions checking for SQL commands is carried out as though the named role were the one that had logged in originally. @@ -47,13 +47,13 @@ RESET ROLE - The SESSION and LOCAL modifiers act the same + The SESSION and LOCAL modifiers act the same as for the regular command. 
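A minimal sketch of the SESSION/LOCAL distinction applied to SET ROLE, assuming a hypothetical reporting_role that the session user is a member of:

BEGIN;
SET LOCAL ROLE reporting_role;
SELECT current_user;    -- reporting_role
COMMIT;
SELECT current_user;    -- the original session user again

With SET LOCAL, the role change evaporates at COMMIT or ROLLBACK; a plain SET ROLE would persist for the rest of the session.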
- The NONE and RESET forms reset the current + The NONE and RESET forms reset the current user identifier to be the current session user identifier. These forms can be executed by any user. @@ -64,41 +64,41 @@ RESET ROLE Using this command, it is possible to either add privileges or restrict - one's privileges. If the session user role has the INHERITS + one's privileges. If the session user role has the INHERITS attribute, then it automatically has all the privileges of every role that - it could SET ROLE to; in this case SET ROLE + it could SET ROLE to; in this case SET ROLE effectively drops all the privileges assigned directly to the session user and to the other roles it is a member of, leaving only the privileges available to the named role. On the other hand, if the session user role - has the NOINHERITS attribute, SET ROLE drops the + has the NOINHERITS attribute, SET ROLE drops the privileges assigned directly to the session user and instead acquires the privileges available to the named role. - In particular, when a superuser chooses to SET ROLE to a + In particular, when a superuser chooses to SET ROLE to a non-superuser role, they lose their superuser privileges. - SET ROLE has effects comparable to + SET ROLE has effects comparable to , but the privilege checks involved are quite different. Also, - SET SESSION AUTHORIZATION determines which roles are - allowable for later SET ROLE commands, whereas changing - roles with SET ROLE does not change the set of roles - allowed to a later SET ROLE. + SET SESSION AUTHORIZATION determines which roles are + allowable for later SET ROLE commands, whereas changing + roles with SET ROLE does not change the set of roles + allowed to a later SET ROLE. - SET ROLE does not process session variables as specified by + SET ROLE does not process session variables as specified by the role's settings; this only happens during login. - SET ROLE cannot be used within a - SECURITY DEFINER function. + SET ROLE cannot be used within a + SECURITY DEFINER function. @@ -127,14 +127,14 @@ SELECT SESSION_USER, CURRENT_USER; PostgreSQL - allows identifier syntax ("rolename"), while + allows identifier syntax ("rolename"), while the SQL standard requires the role name to be written as a string literal. SQL does not allow this command during a transaction; PostgreSQL does not make this restriction because there is no reason to. - The SESSION and LOCAL modifiers are a + The SESSION and LOCAL modifiers are a PostgreSQL extension, as is the - RESET syntax. + RESET syntax. diff --git a/doc/src/sgml/ref/set_session_auth.sgml b/doc/src/sgml/ref/set_session_auth.sgml index 96d279aaf9..a8aee6f632 100644 --- a/doc/src/sgml/ref/set_session_auth.sgml +++ b/doc/src/sgml/ref/set_session_auth.sgml @@ -39,7 +39,7 @@ RESET SESSION AUTHORIZATION The session user identifier is initially set to be the (possibly authenticated) user name provided by the client. The current user identifier is normally equal to the session user identifier, but - might change temporarily in the context of SECURITY DEFINER + might change temporarily in the context of SECURITY DEFINER functions and similar mechanisms; it can also be changed by . The current user identifier is relevant for permission checking. @@ -53,13 +53,13 @@ RESET SESSION AUTHORIZATION - The SESSION and LOCAL modifiers act the same + The SESSION and LOCAL modifiers act the same as for the regular command. 
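To make the difference between the two identifiers concrete, here is a sketch of a hypothetical superuser session; the role names paul and webuser are made up, and paul is assumed to be a member of webuser:

SET SESSION AUTHORIZATION 'paul';    -- session user and current user both become paul
SET ROLE 'webuser';                  -- current user becomes webuser; session user stays paul
SELECT session_user, current_user;   -- paul | webuser
RESET SESSION AUTHORIZATION;         -- both identifiers revert to the originally authenticated user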
- The DEFAULT and RESET forms reset the session + The DEFAULT and RESET forms reset the session and current user identifiers to be the originally authenticated user name. These forms can be executed by any user. @@ -69,8 +69,8 @@ RESET SESSION AUTHORIZATION Notes - SET SESSION AUTHORIZATION cannot be used within a - SECURITY DEFINER function. + SET SESSION AUTHORIZATION cannot be used within a + SECURITY DEFINER function. @@ -101,13 +101,13 @@ SELECT SESSION_USER, CURRENT_USER; The SQL standard allows some other expressions to appear in place of the literal user_name, but these options are not important in practice. PostgreSQL - allows identifier syntax ("username"), which SQL + allows identifier syntax ("username"), which SQL does not. SQL does not allow this command during a transaction; PostgreSQL does not make this restriction because there is no reason to. - The SESSION and LOCAL modifiers are a + The SESSION and LOCAL modifiers are a PostgreSQL extension, as is the - RESET syntax. + RESET syntax. diff --git a/doc/src/sgml/ref/set_transaction.sgml b/doc/src/sgml/ref/set_transaction.sgml index 188d2ed92e..f5631372f5 100644 --- a/doc/src/sgml/ref/set_transaction.sgml +++ b/doc/src/sgml/ref/set_transaction.sgml @@ -153,14 +153,14 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa The SET TRANSACTION SNAPSHOT command allows a new - transaction to run with the same snapshot as an existing + transaction to run with the same snapshot as an existing transaction. The pre-existing transaction must have exported its snapshot with the pg_export_snapshot function (see ). That function returns a snapshot identifier, which must be given to SET TRANSACTION SNAPSHOT to specify which snapshot is to be imported. The identifier must be written as a string literal in this command, for example - '000003A1-1'. + '000003A1-1'. SET TRANSACTION SNAPSHOT can only be executed at the start of a transaction, before the first query or data-modification statement (SELECT, @@ -169,7 +169,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa COPY) of the transaction. Furthermore, the transaction must already be set to SERIALIZABLE or REPEATABLE READ isolation level (otherwise, the snapshot - would be discarded immediately, since READ COMMITTED mode takes + would be discarded immediately, since READ COMMITTED mode takes a new snapshot for each command). If the importing transaction uses SERIALIZABLE isolation level, then the transaction that exported the snapshot must also use that isolation level. Also, a @@ -203,9 +203,9 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa , and . (In fact SET SESSION CHARACTERISTICS is just a - verbose equivalent for setting these variables with SET.) + verbose equivalent for setting these variables with SET.) This means the defaults can be set in the configuration file, via - ALTER DATABASE, etc. Consult + ALTER DATABASE, etc. Consult for more information. @@ -243,7 +243,7 @@ SET TRANSACTION SNAPSHOT '00000003-0000001B-1'; These commands are defined in the SQL standard, except for the DEFERRABLE transaction mode - and the SET TRANSACTION SNAPSHOT form, which are + and the SET TRANSACTION SNAPSHOT form, which are PostgreSQL extensions. 
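The snapshot-import workflow described above spans two sessions; a minimal sketch, where the snapshot identifier is merely an example of the kind of value pg_export_snapshot returns:

-- session 1
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();    -- suppose it returns '00000003-0000001B-1'

-- session 2, before running any query
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';

Both transactions now see exactly the same data; this is the mechanism parallel pg_dump uses to keep its worker sessions consistent.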
diff --git a/doc/src/sgml/ref/show.sgml b/doc/src/sgml/ref/show.sgml index 7e198e6df8..2a2b2fbb9f 100644 --- a/doc/src/sgml/ref/show.sgml +++ b/doc/src/sgml/ref/show.sgml @@ -35,7 +35,7 @@ SHOW ALL SET statement, by editing the postgresql.conf configuration file, through the PGOPTIONS environmental variable (when using - libpq or a libpq-based + libpq or a libpq-based application), or through command-line flags when starting the postgres server. See for details. diff --git a/doc/src/sgml/ref/start_transaction.sgml b/doc/src/sgml/ref/start_transaction.sgml index 60926f5dfe..8dcf6318d2 100644 --- a/doc/src/sgml/ref/start_transaction.sgml +++ b/doc/src/sgml/ref/start_transaction.sgml @@ -55,12 +55,12 @@ START TRANSACTION [ transaction_modeCompatibility - In the standard, it is not necessary to issue START TRANSACTION + In the standard, it is not necessary to issue START TRANSACTION to start a transaction block: any SQL command implicitly begins a block. PostgreSQL's behavior can be seen as implicitly issuing a COMMIT after each command that does not - follow START TRANSACTION (or BEGIN), - and it is therefore often called autocommit. + follow START TRANSACTION (or BEGIN), + and it is therefore often called autocommit. Other relational database systems might offer an autocommit feature as a convenience. diff --git a/doc/src/sgml/ref/truncate.sgml b/doc/src/sgml/ref/truncate.sgml index fef3315599..80abe67525 100644 --- a/doc/src/sgml/ref/truncate.sgml +++ b/doc/src/sgml/ref/truncate.sgml @@ -48,9 +48,9 @@ TRUNCATE [ TABLE ] [ ONLY ] name [ The name (optionally schema-qualified) of a table to truncate. - If ONLY is specified before the table name, only that table - is truncated. If ONLY is not specified, the table and all - its descendant tables (if any) are truncated. Optionally, * + If ONLY is specified before the table name, only that table + is truncated. If ONLY is not specified, the table and all + its descendant tables (if any) are truncated. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -108,29 +108,29 @@ TRUNCATE [ TABLE ] [ ONLY ] name [ - TRUNCATE acquires an ACCESS EXCLUSIVE lock on each + TRUNCATE acquires an ACCESS EXCLUSIVE lock on each table it operates on, which blocks all other concurrent operations - on the table. When RESTART IDENTITY is specified, any + on the table. When RESTART IDENTITY is specified, any sequences that are to be restarted are likewise locked exclusively. If concurrent access to a table is required, then - the DELETE command should be used instead. + the DELETE command should be used instead. - TRUNCATE cannot be used on a table that has foreign-key + TRUNCATE cannot be used on a table that has foreign-key references from other tables, unless all such tables are also truncated in the same command. Checking validity in such cases would require table - scans, and the whole point is not to do one. The CASCADE + scans, and the whole point is not to do one. The CASCADE option can be used to automatically include all dependent tables — but be very careful when using this option, or else you might lose data you did not intend to! - TRUNCATE will not fire any ON DELETE + TRUNCATE will not fire any ON DELETE triggers that might exist for the tables. But it will fire ON TRUNCATE triggers. 
- If ON TRUNCATE triggers are defined for any of + If ON TRUNCATE triggers are defined for any of the tables, then all BEFORE TRUNCATE triggers are fired before any truncation happens, and all AFTER TRUNCATE triggers are fired after the last truncation is @@ -141,36 +141,36 @@ TRUNCATE [ TABLE ] [ ONLY ] name [ - TRUNCATE is not MVCC-safe. After truncation, the table will + TRUNCATE is not MVCC-safe. After truncation, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the truncation occurred. See for more details. - TRUNCATE is transaction-safe with respect to the data + TRUNCATE is transaction-safe with respect to the data in the tables: the truncation will be safely rolled back if the surrounding transaction does not commit. - When RESTART IDENTITY is specified, the implied - ALTER SEQUENCE RESTART operations are also done + When RESTART IDENTITY is specified, the implied + ALTER SEQUENCE RESTART operations are also done transactionally; that is, they will be rolled back if the surrounding transaction does not commit. This is unlike the normal behavior of - ALTER SEQUENCE RESTART. Be aware that if any additional + ALTER SEQUENCE RESTART. Be aware that if any additional sequence operations are done on the restarted sequences before the transaction rolls back, the effects of these operations on the sequences - will be rolled back, but not their effects on currval(); - that is, after the transaction currval() will continue to + will be rolled back, but not their effects on currval(); + that is, after the transaction currval() will continue to reflect the last sequence value obtained inside the failed transaction, even though the sequence itself may no longer be consistent with that. - This is similar to the usual behavior of currval() after + This is similar to the usual behavior of currval() after a failed transaction. - TRUNCATE is not currently supported for foreign tables. + TRUNCATE is not currently supported for foreign tables. This implies that if a specified table has any descendant tables that are foreign, the command will fail. diff --git a/doc/src/sgml/ref/unlisten.sgml b/doc/src/sgml/ref/unlisten.sgml index 622e1cf154..1ea9aa3a0b 100644 --- a/doc/src/sgml/ref/unlisten.sgml +++ b/doc/src/sgml/ref/unlisten.sgml @@ -104,7 +104,7 @@ Asynchronous notification "virtual" received from server process with PID 8448. - Once UNLISTEN has been executed, further NOTIFY + Once UNLISTEN has been executed, further NOTIFY messages will be ignored: diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml index 9dcbbd0e28..1ede52384f 100644 --- a/doc/src/sgml/ref/update.sgml +++ b/doc/src/sgml/ref/update.sgml @@ -52,13 +52,13 @@ UPDATE [ ONLY ] table_name [ * ] [ - The optional RETURNING clause causes UPDATE + The optional RETURNING clause causes UPDATE to compute and return value(s) based on each row actually updated. Any expression using the table's columns, and/or columns of other tables mentioned in FROM, can be computed. The new (post-update) values of the table's columns are used. - The syntax of the RETURNING list is identical to that of the - output list of SELECT. + The syntax of the RETURNING list is identical to that of the + output list of SELECT. @@ -80,7 +80,7 @@ UPDATE [ ONLY ] table_name [ * ] [ The WITH clause allows you to specify one or more - subqueries that can be referenced by name in the UPDATE + subqueries that can be referenced by name in the UPDATE query. See and for details. 
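A minimal sketch of the RETURNING clause described above, assuming a hypothetical products table:

UPDATE products
   SET price = price * 1.10
 WHERE category = 'perishable'
RETURNING id, price AS new_price;

The command updates the rows and, in the same round trip, returns the post-update id and price of each row it touched.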
@@ -92,10 +92,10 @@ UPDATE [ ONLY ] table_name [ * ] [ The name (optionally schema-qualified) of the table to update. - If ONLY is specified before the table name, matching rows - are updated in the named table only. If ONLY is not + If ONLY is specified before the table name, matching rows + are updated in the named table only. If ONLY is not specified, matching rows are also updated in any tables inheriting from - the named table. Optionally, * can be specified after the + the named table. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. @@ -107,9 +107,9 @@ UPDATE [ ONLY ] table_name [ * ] [ A substitute name for the target table. When an alias is provided, it completely hides the actual name of the table. For - example, given UPDATE foo AS f, the remainder of the + example, given UPDATE foo AS f, the remainder of the UPDATE statement must refer to this table as - f not foo. + f not foo. @@ -123,7 +123,7 @@ UPDATE [ ONLY ] table_name [ * ] [ The column name can be qualified with a subfield name or array subscript, if needed. Do not include the table's name in the specification of a target column — for example, - UPDATE table_name SET table_name.col = 1 is invalid. + UPDATE table_name SET table_name.col = 1 is invalid. @@ -152,7 +152,7 @@ UPDATE [ ONLY ] table_name [ * ] [ sub-SELECT - A SELECT sub-query that produces as many output columns + A SELECT sub-query that produces as many output columns as are listed in the parenthesized column list preceding it. The sub-query must yield no more than one row when executed. If it yields one row, its column values are assigned to the target columns; @@ -168,13 +168,13 @@ UPDATE [ ONLY ] table_name [ * ] [ A list of table expressions, allowing columns from other tables - to appear in the WHERE condition and the update + to appear in the WHERE condition and the update expressions. This is similar to the list of tables that can be specified in the of a SELECT statement. Note that the target table must not appear in the - from_list, unless you intend a self-join (in which - case it must appear with an alias in the from_list). + from_list, unless you intend a self-join (in which + case it must appear with an alias in the from_list). @@ -184,7 +184,7 @@ UPDATE [ ONLY ] table_name [ * ] [ An expression that returns a value of type boolean. - Only rows for which this expression returns true + Only rows for which this expression returns true will be updated. @@ -194,15 +194,15 @@ UPDATE [ ONLY ] table_name [ * ] [ cursor_name - The name of the cursor to use in a WHERE CURRENT OF + The name of the cursor to use in a WHERE CURRENT OF condition. The row to be updated is the one most recently fetched from this cursor. The cursor must be a non-grouping - query on the UPDATE's target table. - Note that WHERE CURRENT OF cannot be + query on the UPDATE's target table. + Note that WHERE CURRENT OF cannot be specified together with a Boolean condition. See for more information about using cursors with - WHERE CURRENT OF. + WHERE CURRENT OF. @@ -211,11 +211,11 @@ UPDATE [ ONLY ] table_name [ * ] [ output_expression - An expression to be computed and returned by the UPDATE + An expression to be computed and returned by the UPDATE command after each row is updated. The expression can use any column names of the table named by table_name - or table(s) listed in FROM. - Write * to return all columns. + or table(s) listed in FROM. + Write * to return all columns. 
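The self-join case mentioned under from_list above looks like this in practice, assuming a hypothetical parts table with parent_id and price columns:

UPDATE parts
   SET price = parent.price
  FROM parts parent
 WHERE parts.parent_id = parent.id;

The alias parent is what makes the self-join legal; writing FROM parts without an alias draws a "table name specified more than once" error.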
@@ -235,7 +235,7 @@ UPDATE [ ONLY ] table_name [ * ] [ Outputs - On successful completion, an UPDATE command returns a command + On successful completion, an UPDATE command returns a command tag of the form UPDATE count @@ -244,16 +244,16 @@ UPDATE count of rows updated, including matched rows whose values did not change. Note that the number may be less than the number of rows that matched the condition when - updates were suppressed by a BEFORE UPDATE trigger. If + updates were suppressed by a BEFORE UPDATE trigger. If count is 0, no rows were updated by the query (this is not considered an error). - If the UPDATE command contains a RETURNING - clause, the result will be similar to that of a SELECT + If the UPDATE command contains a RETURNING + clause, the result will be similar to that of a SELECT statement containing the columns and values defined in the - RETURNING list, computed over the row(s) updated by the + RETURNING list, computed over the row(s) updated by the command. @@ -262,11 +262,11 @@ UPDATE count Notes - When a FROM clause is present, what essentially happens + When a FROM clause is present, what essentially happens is that the target table is joined to the tables mentioned in the from_list, and each output row of the join represents an update operation for the target table. When using - FROM you should ensure that the join + FROM you should ensure that the join produces at most one output row for each row to be modified. In other words, a target row shouldn't join to more than one row from the other table(s). If it does, then only one of the join rows @@ -293,8 +293,8 @@ UPDATE count Examples - Change the word Drama to Dramatic in the - column kind of the table films: + Change the word Drama to Dramatic in the + column kind of the table films: UPDATE films SET kind = 'Dramatic' WHERE kind = 'Drama'; @@ -364,10 +364,10 @@ UPDATE accounts SET contact_first_name = first_name, FROM salesmen WHERE salesmen.id = accounts.sales_id; However, the second query may give unexpected results - if salesmen.id is not a unique key, whereas + if salesmen.id is not a unique key, whereas the first query is guaranteed to raise an error if there are multiple - id matches. Also, if there is no match for a particular - accounts.sales_id entry, the first query + id matches. Also, if there is no match for a particular + accounts.sales_id entry, the first query will set the corresponding name fields to NULL, whereas the second query will not update that row at all. @@ -400,9 +400,9 @@ COMMIT; - Change the kind column of the table + Change the kind column of the table films in the row on which the cursor - c_films is currently positioned: + c_films is currently positioned: UPDATE films SET kind = 'Dramatic' WHERE CURRENT OF c_films; @@ -413,16 +413,16 @@ UPDATE films SET kind = 'Dramatic' WHERE CURRENT OF c_films; This command conforms to the SQL standard, except - that the FROM and RETURNING clauses + that the FROM and RETURNING clauses are PostgreSQL extensions, as is the ability - to use WITH with UPDATE. + to use WITH with UPDATE. - Some other database systems offer a FROM option in which - the target table is supposed to be listed again within FROM. + Some other database systems offer a FROM option in which + the target table is supposed to be listed again within FROM. That is not how PostgreSQL interprets - FROM. Be careful when porting applications that use this + FROM. Be careful when porting applications that use this extension. 
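The porting warning above can be made concrete with hypothetical tables a and b. Where another system might repeat the target table, writing something like UPDATE a SET val = b.val FROM a JOIN b ON a.id = b.id, the PostgreSQL form mentions the target only once:

UPDATE a
   SET val = b.val
  FROM b
 WHERE a.id = b.id;

Keeping the extra mention of a in FROM would be interpreted by PostgreSQL as an additional, independent scan of the table, that is, a self-join that is almost never what the original query meant.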
@@ -431,9 +431,9 @@ UPDATE films SET kind = 'Dramatic' WHERE CURRENT OF c_films; target column names can be any row-valued expression yielding the correct number of columns. PostgreSQL only allows the source value to be a row - constructor or a sub-SELECT. An individual column's - updated value can be specified as DEFAULT in the - row-constructor case, but not inside a sub-SELECT. + constructor or a sub-SELECT. An individual column's + updated value can be specified as DEFAULT in the + row-constructor case, but not inside a sub-SELECT. diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml index f5bc87e290..2e205668c1 100644 --- a/doc/src/sgml/ref/vacuum.sgml +++ b/doc/src/sgml/ref/vacuum.sgml @@ -66,7 +66,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ and parameters set to zero. Aggressive freezing is always performed when the - table is rewritten, so this option is redundant when FULL + table is rewritten, so this option is redundant when FULL is specified. @@ -145,8 +145,8 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ visibility map. Pages where + Normally, VACUUM will skip pages based on the visibility map. Pages where all tuples are known to be frozen can always be skipped, and those where all tuples are known to be visible to all transactions may be skipped except when performing an aggressive vacuum. Furthermore, @@ -176,7 +176,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ Outputs - When VERBOSE is specified, VACUUM emits + When VERBOSE is specified, VACUUM emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well. @@ -202,19 +202,19 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ for details. @@ -247,7 +247,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ . diff --git a/doc/src/sgml/ref/vacuumdb.sgml b/doc/src/sgml/ref/vacuumdb.sgml index 4f6fa0d708..277c231687 100644 --- a/doc/src/sgml/ref/vacuumdb.sgml +++ b/doc/src/sgml/ref/vacuumdb.sgml @@ -88,8 +88,8 @@ PostgreSQL documentation - - + + Specifies the name of the database to be cleaned or analyzed. @@ -103,8 +103,8 @@ PostgreSQL documentation - - + + Echo the commands that vacuumdb generates @@ -158,8 +158,8 @@ PostgreSQL documentation - - + + Do not display progress messages. @@ -176,7 +176,7 @@ PostgreSQL documentation Column names can be specified only in conjunction with the or options. Multiple tables can be vacuumed by writing multiple - switches. @@ -198,8 +198,8 @@ PostgreSQL documentation - - + + Print the vacuumdb version and exit. @@ -248,8 +248,8 @@ PostgreSQL documentation - - + + Show help about vacuumdb command line @@ -266,8 +266,8 @@ PostgreSQL documentation the following command-line arguments for connection parameters: - - + + Specifies the host name of the machine on which the server @@ -278,8 +278,8 @@ PostgreSQL documentation - - + + Specifies the TCP port or local Unix domain socket file @@ -290,8 +290,8 @@ PostgreSQL documentation - - + + User name to connect as. @@ -300,8 +300,8 @@ PostgreSQL documentation - - + + Never issue a password prompt. If the server requires @@ -315,8 +315,8 @@ PostgreSQL documentation - - + + Force vacuumdb to prompt for a @@ -329,14 +329,14 @@ PostgreSQL documentation for a password if the server demands password authentication. However, vacuumdb will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. 
- + Specifies the name of the database to connect to discover what other @@ -370,8 +370,8 @@ PostgreSQL documentation - This utility, like most other PostgreSQL utilities, - also uses the environment variables supported by libpq + This utility, like most other PostgreSQL utilities, + also uses the environment variables supported by libpq (see ). @@ -401,7 +401,7 @@ PostgreSQL documentation vacuumdb might need to connect several times to the PostgreSQL server, asking for a password each time. It is convenient to have a - ~/.pgpass file in such cases. See ~/.pgpass file in such cases. See for more information. diff --git a/doc/src/sgml/ref/values.sgml b/doc/src/sgml/ref/values.sgml index 9baeade551..75a594725b 100644 --- a/doc/src/sgml/ref/values.sgml +++ b/doc/src/sgml/ref/values.sgml @@ -35,7 +35,7 @@ VALUES ( expression [, ...] ) [, .. VALUES computes a row value or set of row values specified by value expressions. It is most commonly used to generate - a constant table within a larger command, but it can be + a constant table within a larger command, but it can be used on its own. @@ -43,18 +43,18 @@ VALUES ( expression [, ...] ) [, .. When more than one row is specified, all the rows must have the same number of elements. The data types of the resulting table's columns are determined by combining the explicit or inferred types of the expressions - appearing in that column, using the same rules as for UNION + appearing in that column, using the same rules as for UNION (see ). - Within larger commands, VALUES is syntactically allowed - anywhere that SELECT is. Because it is treated like a - SELECT by the grammar, it is possible to use - the ORDER BY, LIMIT (or + Within larger commands, VALUES is syntactically allowed + anywhere that SELECT is. Because it is treated like a + SELECT by the grammar, it is possible to use + the ORDER BY, LIMIT (or equivalently FETCH FIRST), - and OFFSET clauses with a - VALUES command. + and OFFSET clauses with a + VALUES command. @@ -67,12 +67,12 @@ VALUES ( expression [, ...] ) [, .. A constant or expression to compute and insert at the indicated place - in the resulting table (set of rows). In a VALUES list - appearing at the top level of an INSERT, an + in the resulting table (set of rows). In a VALUES list + appearing at the top level of an INSERT, an expression can be replaced by DEFAULT to indicate that the destination column's default value should be inserted. DEFAULT cannot - be used when VALUES appears in other contexts. + be used when VALUES appears in other contexts. @@ -83,7 +83,7 @@ VALUES ( expression [, ...] ) [, .. An expression or integer constant indicating how to sort the result rows. This expression can refer to the columns of the - VALUES result as column1, column2, + VALUES result as column1, column2, etc. For more details see . @@ -127,11 +127,11 @@ VALUES ( expression [, ...] ) [, .. Notes - VALUES lists with very large numbers of rows should be avoided, + VALUES lists with very large numbers of rows should be avoided, as you might encounter out-of-memory failures or poor performance. - VALUES appearing within INSERT is a special case - (because the desired column types are known from the INSERT's - target table, and need not be inferred by scanning the VALUES + VALUES appearing within INSERT is a special case + (because the desired column types are known from the INSERT's + target table, and need not be inferred by scanning the VALUES list), so it can handle larger lists than are practical in other contexts. 
@@ -140,7 +140,7 @@ VALUES ( expression [, ...] ) [, .. Examples - A bare VALUES command: + A bare VALUES command: VALUES (1, 'one'), (2, 'two'), (3, 'three'); @@ -160,8 +160,8 @@ SELECT 3, 'three'; - More usually, VALUES is used within a larger SQL command. - The most common use is in INSERT: + More usually, VALUES is used within a larger SQL command. + The most common use is in INSERT: INSERT INTO films (code, title, did, date_prod, kind) @@ -170,7 +170,7 @@ INSERT INTO films (code, title, did, date_prod, kind) - In the context of INSERT, entries of a VALUES list + In the context of INSERT, entries of a VALUES list can be DEFAULT to indicate that the column default should be used here instead of specifying a value: @@ -182,8 +182,8 @@ INSERT INTO films VALUES - VALUES can also be used where a sub-SELECT might - be written, for example in a FROM clause: + VALUES can also be used where a sub-SELECT might + be written, for example in a FROM clause: SELECT f.* @@ -195,17 +195,17 @@ UPDATE employees SET salary = salary * v.increase WHERE employees.depno = v.depno AND employees.sales >= v.target; - Note that an AS clause is required when VALUES - is used in a FROM clause, just as is true for - SELECT. It is not required that the AS clause + Note that an AS clause is required when VALUES + is used in a FROM clause, just as is true for + SELECT. It is not required that the AS clause specify names for all the columns, but it's good practice to do so. - (The default column names for VALUES are column1, - column2, etc in PostgreSQL, but + (The default column names for VALUES are column1, + column2, etc in PostgreSQL, but these names might be different in other database systems.) - When VALUES is used in INSERT, the values are all + When VALUES is used in INSERT, the values are all automatically coerced to the data type of the corresponding destination column. When it's used in other contexts, it might be necessary to specify the correct data type. If the entries are all quoted literal constants, @@ -218,9 +218,9 @@ WHERE ip_address IN (VALUES('192.168.0.1'::inet), ('192.168.0.10'), ('192.168.1. - For simple IN tests, it's better to rely on the + For simple IN tests, it's better to rely on the list-of-scalars - form of IN than to write a VALUES + form of IN than to write a VALUES query as shown above. The list of scalars method requires less writing and is often more efficient. diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml index 14747e5f3b..e83edf96ec 100644 --- a/doc/src/sgml/regress.sgml +++ b/doc/src/sgml/regress.sgml @@ -53,7 +53,7 @@ make check or otherwise a note about which tests failed. See below before assuming that a - failure represents a serious problem. + failure represents a serious problem. @@ -66,12 +66,12 @@ make check If you have configured PostgreSQL to install into a location where an older PostgreSQL - installation already exists, and you perform make check + installation already exists, and you perform make check before installing the new version, you might find that the tests fail because the new programs try to use the already-installed shared libraries. (Typical symptoms are complaints about undefined symbols.) If you wish to run the tests before overwriting the old installation, - you'll need to build with configure --disable-rpath. + you'll need to build with configure --disable-rpath. It is not recommended that you use this option for the final installation, however. 
@@ -80,12 +80,12 @@ make check The parallel regression test starts quite a few processes under your user ID. Presently, the maximum concurrency is twenty parallel test scripts, which means forty processes: there's a server process and a - psql process for each test script. + psql process for each test script. So if your system enforces a per-user limit on the number of processes, make sure this limit is at least fifty or so, else you might get random-seeming failures in the parallel test. If you are not in a position to raise the limit, you can cut down the degree of parallelism - by setting the MAX_CONNECTIONS parameter. For example: + by setting the MAX_CONNECTIONS parameter. For example: make MAX_CONNECTIONS=10 check @@ -110,14 +110,14 @@ make installcheck-parallel The tests will expect to contact the server at the local host and the default port number, unless directed otherwise by PGHOST and PGPORT environment variables. The tests will be run in a - database named regression; any existing database by this name + database named regression; any existing database by this name will be dropped. The tests will also transiently create some cluster-wide objects, such as roles and tablespaces. These objects will have names beginning with - regress_. Beware of using installcheck + regress_. Beware of using installcheck mode in installations that have any actual users or tablespaces named that way. @@ -127,9 +127,9 @@ make installcheck-parallel Additional Test Suites - The make check and make installcheck commands - run only the core regression tests, which test built-in - functionality of the PostgreSQL server. The source + The make check and make installcheck commands + run only the core regression tests, which test built-in + functionality of the PostgreSQL server. The source distribution also contains additional test suites, most of them having to do with add-on functionality such as optional procedural languages. @@ -144,18 +144,18 @@ make installcheck-world These commands run the tests using temporary servers or an already-installed server, respectively, just as previously explained - for make check and make installcheck. Other + for make check and make installcheck. Other considerations are the same as previously explained for each method. - Note that make check-world builds a separate temporary + Note that make check-world builds a separate temporary installation tree for each tested module, so it requires a great deal - more time and disk space than make installcheck-world. + more time and disk space than make installcheck-world. Alternatively, you can run individual test suites by typing - make check or make installcheck in the appropriate + make check or make installcheck in the appropriate subdirectory of the build tree. Keep in mind that make - installcheck assumes you've installed the relevant module(s), not + installcheck assumes you've installed the relevant module(s), not only the core server. @@ -167,27 +167,27 @@ make installcheck-world Regression tests for optional procedural languages (other than - PL/pgSQL, which is tested by the core tests). - These are located under src/pl. + PL/pgSQL, which is tested by the core tests). + These are located under src/pl. - Regression tests for contrib modules, - located under contrib. - Not all contrib modules have tests. + Regression tests for contrib modules, + located under contrib. + Not all contrib modules have tests. Regression tests for the ECPG interface library, - located in src/interfaces/ecpg/test. 
+ located in src/interfaces/ecpg/test. Tests stressing behavior of concurrent sessions, - located in src/test/isolation. + located in src/test/isolation. @@ -199,11 +199,11 @@ make installcheck-world - When using installcheck mode, these tests will destroy any - existing databases named pl_regression, - contrib_regression, isolation_regression, - ecpg1_regression, or ecpg2_regression, as well as - regression. + When using installcheck mode, these tests will destroy any + existing databases named pl_regression, + contrib_regression, isolation_regression, + ecpg1_regression, or ecpg2_regression, as well as + regression. @@ -272,7 +272,7 @@ make check EXTRA_TESTS=numeric_big make check EXTRA_TESTS='collate.icu.utf8 collate.linux.utf8' LANG=en_US.utf8 - The collate.linux.utf8 test works only on Linux/glibc + The collate.linux.utf8 test works only on Linux/glibc platforms. The collate.icu.utf8 test only works when support for ICU was built. Both tests will only succeed when run in a database that uses UTF-8 encoding. @@ -294,7 +294,7 @@ make check EXTRA_TESTS='collate.icu.utf8 collate.linux.utf8' LANG=en_US.utf8 To run the Hot Standby tests, first create a database - called regression on the primary: + called regression on the primary: psql -h primary -c "CREATE DATABASE regression" @@ -311,7 +311,7 @@ psql -h primary -f src/test/regress/sql/hs_primary_setup.sql regression Now arrange for the default database connection to be to the standby server under test (for example, by setting the PGHOST and PGPORT environment variables). - Finally, run make standbycheck in the regression directory: + Finally, run make standbycheck in the regression directory: cd src/test/regress make standbycheck @@ -355,7 +355,7 @@ make standbycheck src/test/regress/regression.diffs. (When running a test suite other than the core tests, these files of course appear in the relevant subdirectory, - not src/test/regress.) + not src/test/regress.) @@ -367,7 +367,7 @@ make standbycheck - If for some reason a particular platform generates a failure + If for some reason a particular platform generates a failure for a given test, but inspection of the output convinces you that the result is valid, you can add a new comparison file to silence the failure report in future test runs. See @@ -457,8 +457,8 @@ make check NO_LOCALE=1 Some of the tests involve computing 64-bit floating-point numbers (double precision) from table columns. Differences in results involving mathematical functions of double - precision columns have been observed. The float8 and - geometry tests are particularly prone to small differences + precision columns have been observed. The float8 and + geometry tests are particularly prone to small differences across platforms, or even with different compiler optimization settings. Human eyeball comparison is needed to determine the real significance of these differences which are usually 10 places to @@ -466,8 +466,8 @@ make check NO_LOCALE=1 - Some systems display minus zero as -0, while others - just show 0. + Some systems display minus zero as -0, while others + just show 0. @@ -485,23 +485,23 @@ make check NO_LOCALE=1 You might see differences in which the same rows are output in a different order than what appears in the expected file. In most cases this is not, strictly speaking, a bug. 
Most of the regression test -scripts are not so pedantic as to use an ORDER BY for every single -SELECT, and so their result row orderings are not well-defined +scripts are not so pedantic as to use an ORDER BY for every single +SELECT, and so their result row orderings are not well-defined according to the SQL specification. In practice, since we are looking at the same queries being executed on the same data by the same software, we usually get the same result ordering on all platforms, -so the lack of ORDER BY is not a problem. Some queries do exhibit +so the lack of ORDER BY is not a problem. Some queries do exhibit cross-platform ordering differences, however. When testing against an already-installed server, ordering differences can also be caused by non-C locale settings or non-default parameter settings, such as custom values -of work_mem or the planner cost parameters. +of work_mem or the planner cost parameters. Therefore, if you see an ordering difference, it's not something to -worry about, unless the query does have an ORDER BY that your +worry about, unless the query does have an ORDER BY that your result is violating. However, please report it anyway, so that we can add an -ORDER BY to that particular query to eliminate the bogus +ORDER BY to that particular query to eliminate the bogus failure in future releases. @@ -519,18 +519,18 @@ exclusion of those that don't. If the errors test results in a server crash - at the select infinite_recurse() command, it means that + at the select infinite_recurse() command, it means that the platform's limit on process stack size is smaller than the parameter indicates. This can be fixed by running the server under a higher stack size limit (4MB is recommended with the default value of - max_stack_depth). If you are unable to do that, an - alternative is to reduce the value of max_stack_depth. + max_stack_depth). If you are unable to do that, an + alternative is to reduce the value of max_stack_depth. - On platforms supporting getrlimit(), the server should - automatically choose a safe value of max_stack_depth; + On platforms supporting getrlimit(), the server should + automatically choose a safe value of max_stack_depth; so unless you've manually overridden this setting, a failure of this kind is a reportable bug. @@ -559,7 +559,7 @@ diff results/random.out expected/random.out parameter settings could cause the tests to fail. For example, changing parameters such as enable_seqscan or enable_indexscan could cause plan changes that would - affect the results of tests that use EXPLAIN. + affect the results of tests that use EXPLAIN. @@ -570,7 +570,7 @@ diff results/random.out expected/random.out Since some of the tests inherently produce environment-dependent - results, we have provided ways to specify alternate expected + results, we have provided ways to specify alternate expected result files. Each regression test can have several comparison files showing possible results on different platforms. There are two independent mechanisms for determining which comparison file is used @@ -597,7 +597,7 @@ testname:output:platformpattern=comparisonfilename standard regression tests, this is always out. The value corresponds to the file extension of the output file. The platform pattern is a pattern in the style of the Unix - tool expr (that is, a regular expression with an implicit + tool expr (that is, a regular expression with an implicit ^ anchor at the start). It is matched against the platform name as printed by config.guess. 
The comparison file name is the base name of the substitute result @@ -607,7 +607,7 @@ testname:output:platformpattern=comparisonfilename For example: some systems interpret very small floating-point values as zero, rather than reporting an underflow error. This causes a - few differences in the float8 regression test. + few differences in the float8 regression test. Therefore, we provide a variant comparison file, float8-small-is-zero.out, which includes the results to be expected on these systems. To silence the bogus @@ -619,30 +619,30 @@ float8:out:i.86-.*-openbsd=float8-small-is-zero.out which will trigger on any machine where the output of config.guess matches i.86-.*-openbsd. Other lines - in resultmap select the variant comparison file for other + in resultmap select the variant comparison file for other platforms where it's appropriate. The second selection mechanism for variant comparison files is - much more automatic: it simply uses the best match among + much more automatic: it simply uses the best match among several supplied comparison files. The regression test driver script considers both the standard comparison file for a test, - testname.out, and variant files named - testname_digit.out - (where the digit is any single digit - 0-9). If any such file is an exact match, + testname.out, and variant files named + testname_digit.out + (where the digit is any single digit + 0-9). If any such file is an exact match, the test is considered to pass; otherwise, the one that generates the shortest diff is used to create the failure report. (If resultmap includes an entry for the particular - test, then the base testname is the substitute + test, then the base testname is the substitute name given in resultmap.) For example, for the char test, the comparison file char.out contains results that are expected - in the C and POSIX locales, while + in the C and POSIX locales, while the file char_1.out contains results sorted as they appear in many other locales. @@ -652,7 +652,7 @@ float8:out:i.86-.*-openbsd=float8-small-is-zero.out results, but it can be used in any situation where the test results cannot be predicted easily from the platform name alone. A limitation of this mechanism is that the test driver cannot tell which variant is - actually correct for the current environment; it will just pick + actually correct for the current environment; it will just pick the variant that seems to work best. Therefore it is safest to use this mechanism only for variant results that you are willing to consider equally valid in all contexts. @@ -668,7 +668,7 @@ float8:out:i.86-.*-openbsd=float8-small-is-zero.out under src/bin, use the Perl TAP tools and are run using the Perl testing program prove. 
You can pass command-line options to prove by setting - the make variable PROVE_FLAGS, for example: + the make variable PROVE_FLAGS, for example: make -C src/bin check PROVE_FLAGS='--timer' diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 9ef798183d..116f7224da 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -13,7 +13,7 @@ Overview - Major enhancements in PostgreSQL 10 include: + Major enhancements in PostgreSQL 10 include: @@ -58,14 +58,14 @@ 2017-08-04 [620b49a16] hash: Increase the number of possible overflow bitmaps b --> - Hash indexes must be rebuilt after pg_upgrade-ing - from any previous major PostgreSQL version (Mithun + Hash indexes must be rebuilt after pg_upgrade-ing + from any previous major PostgreSQL version (Mithun Cy, Robert Haas, Amit Kapila) Major hash index improvements necessitated this requirement. - pg_upgrade will create a script to assist with this. + pg_upgrade will create a script to assist with this. @@ -75,9 +75,9 @@ 2017-03-17 [88e66d193] Rename "pg_clog" directory to "pg_xact". --> - Rename write-ahead log directory pg_xlog - to pg_wal, and rename transaction - status directory pg_clog to pg_xact + Rename write-ahead log directory pg_xlog + to pg_wal, and rename transaction + status directory pg_clog to pg_xact (Michael Paquier) @@ -98,17 +98,17 @@ 2017-02-15 [0dfa89ba2] Replace reference to "xlog-method" with "wal-method" in --> - Rename SQL functions, tools, and options that reference - xlog to wal (Robert Haas) + Rename SQL functions, tools, and options that reference + xlog to wal (Robert Haas) - For example, pg_switch_xlog() becomes - pg_switch_wal(), pg_receivexlog - becomes pg_receivewal, and @@ -118,8 +118,8 @@ 2017-05-11 [d10c626de] Rename WAL-related functions and views to use "lsn" not --> - Rename WAL-related functions and views to use lsn - instead of location (David Rowley) + Rename WAL-related functions and views to use lsn + instead of location (David Rowley) @@ -136,20 +136,20 @@ --> Change the implementation of set-returning functions appearing in - a query's SELECT list (Andres Freund) + a query's SELECT list (Andres Freund) Set-returning functions are now evaluated before evaluation of scalar - expressions in the SELECT list, much as though they had - been placed in a LATERAL FROM-clause item. This allows + expressions in the SELECT list, much as though they had + been placed in a LATERAL FROM-clause item. This allows saner semantics for cases where multiple set-returning functions are present. If they return different numbers of rows, the shorter results are extended to match the longest result by adding nulls. Previously the results were cycled until they all terminated at the same time, producing a number of rows equal to the least common multiple of the functions' periods. In addition, set-returning functions are now - disallowed within CASE and COALESCE constructs. + disallowed within CASE and COALESCE constructs. For more information see . @@ -160,8 +160,8 @@ 2017-08-04 [c30f1770a] Apply ALTER ... SET NOT NULL recursively in ALTER ... AD --> - When ALTER TABLE ... ADD PRIMARY KEY marks - columns NOT NULL, that change now propagates to + When ALTER TABLE ... 
ADD PRIMARY KEY marks + columns NOT NULL, that change now propagates to inheritance child tables as well (Michael Paquier) @@ -179,9 +179,9 @@ Cases involving writable CTEs updating the same table updated by the containing statement, or by another writable CTE, fired BEFORE - STATEMENT or AFTER STATEMENT triggers more than once. + STATEMENT or AFTER STATEMENT triggers more than once. Also, if there were statement-level triggers on a table affected by a - foreign key enforcement action (such as ON DELETE CASCADE), + foreign key enforcement action (such as ON DELETE CASCADE), they could fire more than once per outer SQL statement. This is contrary to the SQL standard, so change it. @@ -197,20 +197,20 @@ --> Move sequences' metadata fields into a new pg_sequence + linkend="catalog-pg-sequence">pg_sequence system catalog (Peter Eisentraut) A sequence relation now stores only the fields that can be modified - by nextval(), that - is last_value, log_cnt, - and is_called. Other sequence properties, such as + by nextval(), that + is last_value, log_cnt, + and is_called. Other sequence properties, such as the starting value and increment, are kept in a corresponding row of - the pg_sequence catalog. - ALTER SEQUENCE updates are now fully transactional, + the pg_sequence catalog. + ALTER SEQUENCE updates are now fully transactional, implying that the sequence is locked until commit. - The nextval() and setval() functions + The nextval() and setval() functions remain nontransactional. @@ -218,14 +218,14 @@ The main incompatibility introduced by this change is that selecting from a sequence relation now returns only the three fields named above. To obtain the sequence's other properties, applications must - look into pg_sequence. The new system - view pg_sequences + look into pg_sequence. The new system + view pg_sequences can also be used for this purpose; it provides column names that are more compatible with existing code. - The output of psql's \d command for a + The output of psql's \d command for a sequence has been redesigned, too. @@ -235,17 +235,17 @@ 2017-01-04 [9a4d51077] Make wal streaming the default mode for pg_basebackup --> - Make stream the - WAL needed to restore the backup by default (Magnus + Make stream the + WAL needed to restore the backup by default (Magnus Hagander) - This changes pg_basebackup's - @@ -275,13 +275,13 @@ 2017-01-14 [05cd12ed5] pg_ctl: Change default to wait for all actions --> - Make all actions wait + Make all actions wait for completion by default (Peter Eisentraut) - Previously some pg_ctl actions didn't wait for - completion, and required the use of to do so. @@ -291,7 +291,7 @@ --> Change the default value of the - server parameter from pg_log to log + server parameter from pg_log to log (Andreas Karlsson) @@ -307,7 +307,7 @@ This replaces the hardcoded, undocumented file - name dh1024.pem. Note that dh1024.pem is + name dh1024.pem. Note that dh1024.pem is no longer examined by default; you must set this option if you want to use custom DH parameters. @@ -345,14 +345,14 @@ The server parameter - no longer supports off or plain. - The UNENCRYPTED option is no longer supported in - CREATE/ALTER USER ... PASSSWORD. Similarly, the - @@ -367,7 +367,7 @@ - These replace min_parallel_relation_size, which was + These replace min_parallel_relation_size, which was found to be too generic. @@ -394,14 +394,14 @@ 2016-12-23 [e13486eba] Remove sql_inheritance GUC. 
--> - Remove sql_inheritance server parameter (Robert Haas) + Remove sql_inheritance server parameter (Robert Haas) Changing this setting from the default value caused queries referencing - parent tables to not include child tables. The SQL + parent tables to not include child tables. The SQL standard requires them to be included, however, and this has been the - default since PostgreSQL 7.1. + default since PostgreSQL 7.1. @@ -420,10 +420,10 @@ This feature requires a backwards-incompatible change to the handling of arrays of composite types in PL/Python. Previously, you could return an array of composite values by writing, e.g., [[col1, - col2], [col1, col2]]; but now that is interpreted as a + col2], [col1, col2]]; but now that is interpreted as a two-dimensional array. Composite types in arrays must now be written as Python tuples, not lists, to resolve the ambiguity; that is, - write [(col1, col2), (col1, col2)] instead. + write [(col1, col2), (col1, col2)] instead. @@ -432,7 +432,7 @@ 2017-02-27 [817f2a586] Remove PL/Tcl's "module" facility. --> - Remove PL/Tcl's module auto-loading facility (Tom Lane) + Remove PL/Tcl's module auto-loading facility (Tom Lane) @@ -448,13 +448,13 @@ 2016-10-12 [64f3524e2] Remove pg_dump/pg_dumpall support for dumping from pre-8 --> - Remove pg_dump/pg_dumpall support + Remove pg_dump/pg_dumpall support for dumping from pre-8.0 servers (Tom Lane) Users needing to dump from pre-8.0 servers will need to use dump - programs from PostgreSQL 9.6 or earlier. The + programs from PostgreSQL 9.6 or earlier. The resulting output should still load successfully into newer servers. @@ -468,9 +468,9 @@ - This removes configure's option. Floating-point timestamps have few advantages and have not - been the default since PostgreSQL 8.3. + been the default since PostgreSQL 8.3. @@ -484,7 +484,7 @@ This protocol hasn't had client support - since PostgreSQL 6.3. + since PostgreSQL 6.3. @@ -493,12 +493,12 @@ 2017-02-13 [7ada2d31f] Remove contrib/tsearch2. --> - Remove contrib/tsearch2 module (Robert Haas) + Remove contrib/tsearch2 module (Robert Haas) This module provided compatibility with the version of full text - search that shipped in pre-8.3 PostgreSQL releases. + search that shipped in pre-8.3 PostgreSQL releases. @@ -507,14 +507,14 @@ 2017-03-23 [50c956add] Remove createlang and droplang --> - Remove createlang and droplang + Remove createlang and droplang command-line applications (Peter Eisentraut) - These had been deprecated since PostgreSQL 9.1. - Instead, use CREATE EXTENSION and DROP - EXTENSION directly. + These had been deprecated since PostgreSQL 9.1. + Instead, use CREATE EXTENSION and DROP + EXTENSION directly. @@ -686,8 +686,8 @@ 2016-08-23 [77e290682] Create an SP-GiST opclass for inet/cidr. --> - Add SP-GiST index support for INET and - CIDR data types (Emre Hasegeli) + Add SP-GiST index support for INET and + CIDR data types (Emre Hasegeli) @@ -696,14 +696,14 @@ 2017-04-01 [7526e1022] BRIN auto-summarization --> - Add option to allow BRIN index summarization to happen + Add option to allow BRIN index summarization to happen more aggressively (Álvaro Herrera) A new CREATE - INDEX option enables auto-summarization of the - previous BRIN page range when a new page + INDEX option enables auto-summarization of the + previous BRIN page range when a new page range is created. 
@@ -713,18 +713,18 @@ 2017-04-01 [c655899ba] BRIN de-summarization --> - Add functions to remove and re-add BRIN - summarization for BRIN index ranges (Álvaro + Add functions to remove and re-add BRIN + summarization for BRIN index ranges (Álvaro Herrera) - The new SQL function brin_summarize_range() - updates BRIN index summarization for a specified - range and brin_desummarize_range() removes it. + The new SQL function brin_summarize_range() + updates BRIN index summarization for a specified + range and brin_desummarize_range() removes it. This is helpful to update summarization of a range that is now - smaller due to UPDATEs and DELETEs. + smaller due to UPDATEs and DELETEs. @@ -733,7 +733,7 @@ 2017-04-06 [7e534adcd] Fix BRIN cost estimation --> - Improve accuracy in determining if a BRIN index scan + Improve accuracy in determining if a BRIN index scan is beneficial (David Rowley, Emre Hasegeli) @@ -743,7 +743,7 @@ 2016-09-09 [b1328d78f] Invent PageIndexTupleOverwrite, and teach BRIN and GiST --> - Allow faster GiST inserts and updates by reusing + Allow faster GiST inserts and updates by reusing index space more efficiently (Andrey Borodin) @@ -753,7 +753,7 @@ 2017-03-23 [218f51584] Reduce page locking in GIN vacuum --> - Reduce page locking during vacuuming of GIN indexes + Reduce page locking during vacuuming of GIN indexes (Andrey Borodin) @@ -825,9 +825,9 @@ New commands are CREATE STATISTICS, - ALTER STATISTICS, and - DROP STATISTICS. + linkend="SQL-CREATESTATISTICS">CREATE STATISTICS, + ALTER STATISTICS, and + DROP STATISTICS. This feature is helpful in estimating query memory usage and when combining the statistics from individual columns. @@ -864,9 +864,9 @@ --> Speed up aggregate functions that calculate a running sum - using numeric-type arithmetic, including some variants - of SUM(), AVG(), - and STDDEV() (Heikki Linnakangas) + using numeric-type arithmetic, including some variants + of SUM(), AVG(), + and STDDEV() (Heikki Linnakangas) @@ -950,14 +950,14 @@ --> Allow explicit control - over EXPLAIN's display + over EXPLAIN's display of planning and execution time (Ashutosh Bapat) By default planning and execution time are displayed by - EXPLAIN ANALYZE and are not displayed in other cases. - The new EXPLAIN option SUMMARY allows + EXPLAIN ANALYZE and are not displayed in other cases. + The new EXPLAIN option SUMMARY allows explicit control of this. @@ -971,8 +971,8 @@ - New roles pg_monitor, pg_read_all_settings, - pg_read_all_stats, and pg_stat_scan_tables + New roles pg_monitor, pg_read_all_settings, + pg_read_all_stats, and pg_stat_scan_tables allow simplified permission configuration. @@ -984,7 +984,7 @@ Properly update the statistics collector during REFRESH MATERIALIZED - VIEW (Jim Mlodgenski) + VIEW (Jim Mlodgenski) @@ -1015,14 +1015,14 @@ 2017-03-16 [befd73c50] Add pg_ls_logdir() and pg_ls_waldir() functions. --> - Add functions to return the log and WAL directory + Add functions to return the log and WAL directory contents (Dave Page) The new functions - are pg_ls_logdir() - and pg_ls_waldir() + are pg_ls_logdir() + and pg_ls_waldir() and can be executed by non-superusers with the proper permissions. @@ -1034,7 +1034,7 @@ --> Add function pg_current_logfile() + linkend="functions-info-session-table">pg_current_logfile() to read logging collector's current stderr and csvlog output file names (Gilles Darold) @@ -1066,7 +1066,7 @@ - These are now DEBUG1-level messages. + These are now DEBUG1-level messages. 
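As a quick sketch of the new directory-listing and log-file functions noted above (assuming a role with the required permissions, and a running logging collector for the second call):

-- list the WAL directory contents and the collector's current log file name
SELECT * FROM pg_ls_waldir();
SELECT pg_current_logfile();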
@@ -1091,7 +1091,7 @@ - <link linkend="pg-stat-activity-view"><structname>pg_stat_activity</></link> + <link linkend="pg-stat-activity-view"><structname>pg_stat_activity</structname></link> @@ -1101,7 +1101,7 @@ 2017-03-18 [249cf070e] Create and use wait events for read, write, and fsync op --> - Add pg_stat_activity reporting of low-level wait + Add pg_stat_activity reporting of low-level wait states (Michael Paquier, Robert Haas, Rushabh Lathia) @@ -1119,13 +1119,13 @@ --> Show auxiliary processes, background workers, and walsender - processes in pg_stat_activity (Kuntal Ghosh, + processes in pg_stat_activity (Kuntal Ghosh, Michael Paquier) This simplifies monitoring. A new - column backend_type identifies the process type. + column backend_type identifies the process type. @@ -1134,7 +1134,7 @@ 2017-02-22 [4c728f382] Pass the source text for a parallel query to the workers --> - Allow pg_stat_activity to show the SQL query + Allow pg_stat_activity to show the SQL query being executed by parallel workers (Rafia Sabih) @@ -1145,9 +1145,9 @@ --> Rename - pg_stat_activity.wait_event_type - values LWLockTranche and - LWLockNamed to LWLock (Robert Haas) + pg_stat_activity.wait_event_type + values LWLockTranche and + LWLockNamed to LWLock (Robert Haas) @@ -1161,7 +1161,7 @@ - <acronym>Authentication</> + <acronym>Authentication</acronym> @@ -1173,13 +1173,13 @@ 2017-04-18 [c727f120f] Rename "scram" to "scram-sha-256" in pg_hba.conf and pas --> - Add SCRAM-SHA-256 + Add SCRAM-SHA-256 support for password negotiation and storage (Michael Paquier, Heikki Linnakangas) - This provides better security than the existing md5 + This provides better security than the existing md5 negotiation and storage method. @@ -1190,7 +1190,7 @@ --> Change the server parameter - from boolean to enum (Michael Paquier) + from boolean to enum (Michael Paquier) @@ -1204,8 +1204,8 @@ --> Add view pg_hba_file_rules - to display the contents of pg_hba.conf (Haribabu + linkend="view-pg-hba-file-rules">pg_hba_file_rules + to display the contents of pg_hba.conf (Haribabu Kommi) @@ -1219,11 +1219,11 @@ 2017-03-22 [6b76f1bb5] Support multiple RADIUS servers --> - Support multiple RADIUS servers (Magnus Hagander) + Support multiple RADIUS servers (Magnus Hagander) - All the RADIUS related parameters are now plural and + All the RADIUS related parameters are now plural and support a comma-separated list of servers. @@ -1244,16 +1244,16 @@ 2017-01-04 [6667d9a6d] Re-allow SSL passphrase prompt at server start, but not --> - Allow SSL configuration to be updated during + Allow SSL configuration to be updated during configuration reload (Andreas Karlsson, Tom Lane) - This allows SSL to be reconfigured without a server - restart, by using pg_ctl reload, SELECT - pg_reload_conf(), or sending a SIGHUP signal. - However, reloading the SSL configuration does not work - if the server's SSL key requires a passphrase, as there + This allows SSL to be reconfigured without a server + restart, by using pg_ctl reload, SELECT + pg_reload_conf(), or sending a SIGHUP signal. + However, reloading the SSL configuration does not work + if the server's SSL key requires a passphrase, as there is no way to re-prompt for the passphrase. The original configuration will apply for the life of the postmaster in that case. @@ -1297,7 +1297,7 @@ - <link linkend="wal">Write-Ahead Log</> (<acronym>WAL</>) + <link linkend="wal">Write-Ahead Log</link> (<acronym>WAL</acronym>) @@ -1306,7 +1306,7 @@ 2016-12-22 [6ef2eba3f] Skip checkpoints, archiving on idle systems. 
--> - Prevent unnecessary checkpoints and WAL archiving on + Prevent unnecessary checkpoints and WAL archiving on otherwise-idle systems (Michael Paquier) @@ -1318,7 +1318,7 @@ --> Add server parameter - to add details to WAL that can be sanity-checked on + to add details to WAL that can be sanity-checked on the standby (Kuntal Ghosh, Robert Haas) @@ -1332,14 +1332,14 @@ 2017-04-05 [00b6b6feb] Allow -\-with-wal-segsize=n up to n=1024MB --> - Increase the maximum configurable WAL segment size + Increase the maximum configurable WAL segment size to one gigabyte (Beena Emerson) - A larger WAL segment size allows for fewer + A larger WAL segment size allows for fewer invocations and fewer - WAL files to manage. + WAL files to manage. @@ -1364,13 +1364,13 @@ --> Add the ability to logically - replicate tables to standby servers (Petr Jelinek) + replicate tables to standby servers (Petr Jelinek) Logical replication allows more flexibility than physical replication does, including replication between different major - versions of PostgreSQL and selective + versions of PostgreSQL and selective replication. @@ -1387,8 +1387,8 @@ Previously the server always waited for the active standbys that - appeared first in synchronous_standby_names. The new - synchronous_standby_names keyword ANY allows + appeared first in synchronous_standby_names. The new + synchronous_standby_names keyword ANY allows waiting for any number of standbys irrespective of their ordering. This is known as quorum commit. @@ -1419,14 +1419,14 @@ --> Enable replication from localhost connections by default in - pg_hba.conf + pg_hba.conf (Michael Paquier) - Previously pg_hba.conf's replication connection + Previously pg_hba.conf's replication connection lines were commented out by default. This is particularly useful for - . + . @@ -1436,13 +1436,13 @@ --> Add columns to pg_stat_replication + linkend="monitoring-stats-views-table">pg_stat_replication to report replication delay times (Thomas Munro) - The new columns are write_lag, - flush_lag, and replay_lag. + The new columns are write_lag, + flush_lag, and replay_lag. @@ -1452,8 +1452,8 @@ --> Allow specification of the recovery stopping point by Log Sequence - Number (LSN) in - recovery.conf + Number (LSN) in + recovery.conf (Michael Paquier) @@ -1470,12 +1470,12 @@ --> Allow users to disable pg_stop_backup()'s - waiting for all WAL to be archived (David Steele) + linkend="functions-admin">pg_stop_backup()'s + waiting for all WAL to be archived (David Steele) - An optional second argument to pg_stop_backup() + An optional second argument to pg_stop_backup() controls that behavior. @@ -1486,7 +1486,7 @@ --> Allow creation of temporary replication slots + linkend="functions-replication-table">temporary replication slots (Petr Jelinek) @@ -1530,8 +1530,8 @@ --> Add XMLTABLE - function that converts XML-formatted data into a row set + linkend="functions-xml-processing-xmltable">XMLTABLE + function that converts XML-formatted data into a row set (Pavel Stehule, Álvaro Herrera) @@ -1542,17 +1542,17 @@ --> Allow standard row constructor syntax in UPDATE ... SET - (column_list) = row_constructor + (column_list) = row_constructor (Tom Lane) - The row_constructor can now begin with the - keyword ROW; previously that had to be omitted. Also, - an occurrence of table_name.* - within the row_constructor is now expanded into + The row_constructor can now begin with the + keyword ROW; previously that had to be omitted. 
Also, + an occurrence of table_name.* + within the row_constructor is now expanded into multiple columns, as in other uses - of row_constructors. + of row_constructors. @@ -1562,13 +1562,13 @@ --> Fix regular expressions' character class handling for large character - codes, particularly Unicode characters above U+7FF + codes, particularly Unicode characters above U+7FF (Tom Lane) Previously, such characters were never recognized as belonging to - locale-dependent character classes such as [[:alpha:]]. + locale-dependent character classes such as [[:alpha:]]. @@ -1587,7 +1587,7 @@ --> Add table partitioning - syntax that automatically creates partition constraints and + syntax that automatically creates partition constraints and handles routing of tuple insertions and updates (Amit Langote) @@ -1603,7 +1603,7 @@ 2017-03-31 [597027163] Add transition table support to plpgsql. --> - Add AFTER trigger + Add AFTER trigger transition tables to record changed rows (Kevin Grittner, Thomas Munro) @@ -1620,7 +1620,7 @@ --> Allow restrictive row-level - security policies (Stephen Frost) + security policies (Stephen Frost) @@ -1636,16 +1636,16 @@ --> When creating a foreign-key constraint, check - for REFERENCES permission on only the referenced table + for REFERENCES permission on only the referenced table (Tom Lane) - Previously REFERENCES permission on the referencing + Previously REFERENCES permission on the referencing table was also required. This appears to have stemmed from a misreading of the SQL standard. Since creating a foreign key (or any other type of) constraint requires ownership privilege on the - constrained table, additionally requiring REFERENCES + constrained table, additionally requiring REFERENCES permission seems rather pointless. @@ -1656,11 +1656,11 @@ --> Allow default - permissions on schemas (Matheus Oliveira) + permissions on schemas (Matheus Oliveira) - This is done using the ALTER DEFAULT PRIVILEGES command. + This is done using the ALTER DEFAULT PRIVILEGES command. @@ -1670,7 +1670,7 @@ --> Add CREATE SEQUENCE - AS command to create a sequence matching an integer data type + AS command to create a sequence matching an integer data type (Peter Eisentraut) @@ -1685,13 +1685,13 @@ 2016-11-10 [279c439c7] Support "COPY view FROM" for views with INSTEAD OF INSER --> - Allow COPY view - FROM source on views with INSTEAD - INSERT triggers (Haribabu Kommi) + Allow COPY view + FROM source on views with INSTEAD + INSERT triggers (Haribabu Kommi) - The triggers are fed the data rows read by COPY. + The triggers are fed the data rows read by COPY. @@ -1701,14 +1701,14 @@ --> Allow the specification of a function name without arguments in - DDL commands, if it is unique (Peter Eisentraut) + DDL commands, if it is unique (Peter Eisentraut) For example, allow DROP - FUNCTION on a function name without arguments if there + FUNCTION on a function name without arguments if there is only one function with that name. This behavior is required by the - SQL standard. + SQL standard. 
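A minimal sketch of the arguments-optional form just described, with a hypothetical function name:

-- allowed when exactly one function named compute_total exists
DROP FUNCTION compute_total;
-- the fully specified equivalent, required when the name is ambiguous
DROP FUNCTION compute_total(integer);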
@@ -1718,7 +1718,7 @@ --> Allow multiple functions, operators, and aggregates to be dropped - with a single DROP command (Peter Eisentraut) + with a single DROP command (Peter Eisentraut) @@ -1728,10 +1728,10 @@ 2017-03-20 [b6fb534f1] Add IF NOT EXISTS for CREATE SERVER and CREATE USER MAPP --> - Support IF NOT EXISTS - in CREATE SERVER, - CREATE USER MAPPING, - and CREATE COLLATION + Support IF NOT EXISTS + in CREATE SERVER, + CREATE USER MAPPING, + and CREATE COLLATION (Anastasia Lubennikova, Peter Eisentraut) @@ -1742,7 +1742,7 @@ 2017-03-03 [9eb344faf] Allow vacuums to report oldestxmin --> - Make VACUUM VERBOSE report + Make VACUUM VERBOSE report the number of skipped frozen pages and oldest xmin (Masahiko Sawada, Simon Riggs) @@ -1758,7 +1758,7 @@ 2017-01-23 [7e26e02ee] Prefetch blocks during lazy vacuum's truncation scan --> - Improve speed of VACUUM's removal of trailing empty + Improve speed of VACUUM's removal of trailing empty heap pages (Claudio Freire, Álvaro Herrera) @@ -1777,13 +1777,13 @@ 2017-03-31 [e306df7f9] Full Text Search support for JSON and JSONB --> - Add full text search support for JSON and JSONB + Add full text search support for JSON and JSONB (Dmitry Dolgov) - The functions ts_headline() and - to_tsvector() can now be used on these data types. + The functions ts_headline() and + to_tsvector() can now be used on these data types. @@ -1792,15 +1792,15 @@ 2017-03-15 [c7a9fa399] Add support for EUI-64 MAC addresses as macaddr8 --> - Add support for EUI-64 MAC addresses, as a - new data type macaddr8 + Add support for EUI-64 MAC addresses, as a + new data type macaddr8 (Haribabu Kommi) This complements the existing support - for EUI-48 MAC addresses - (type macaddr). + for EUI-48 MAC addresses + (type macaddr). @@ -1809,13 +1809,13 @@ 2017-04-06 [321732705] Identity columns --> - Add identity columns for + Add identity columns for assigning a numeric value to columns on insert (Peter Eisentraut) - These are similar to SERIAL columns, but are - SQL standard compliant. + These are similar to SERIAL columns, but are + SQL standard compliant. @@ -1824,13 +1824,13 @@ 2016-09-07 [0ab9c56d0] Support renaming an existing value of an enum type. --> - Allow ENUM values to be + Allow ENUM values to be renamed (Dagfinn Ilmari Mannsåker) This uses the syntax ALTER - TYPE ... RENAME VALUE. + TYPE ... RENAME VALUE. @@ -1840,14 +1840,14 @@ --> Properly treat array pseudotypes - (anyarray) as arrays in to_json() - and to_jsonb() (Andrew Dunstan) + (anyarray) as arrays in to_json() + and to_jsonb() (Andrew Dunstan) - Previously columns declared as anyarray (particularly those - in the pg_stats view) were converted to JSON + Previously columns declared as anyarray (particularly those + in the pg_stats view) were converted to JSON strings rather than arrays. @@ -1858,16 +1858,16 @@ --> Add operators for multiplication and division - of money values - with int8 values (Peter Eisentraut) + of money values + with int8 values (Peter Eisentraut) - Previously such cases would result in converting the int8 - values to float8 and then using - the money-and-float8 operators. The new behavior + Previously such cases would result in converting the int8 + values to float8 and then using + the money-and-float8 operators. The new behavior avoids possible precision loss. But note that division - of money by int8 now truncates the quotient, like + of money by int8 now truncates the quotient, like other integer-division cases, while the previous behavior would have rounded. 
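A hedged example of the truncating division just described (the displayed values assume default lc_monetary formatting):

SELECT '8.00'::money / 3::int8;
-- now yields $2.66 (truncated); earlier releases went through float8 and rounded to $2.67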
@@ -1878,7 +1878,7 @@ 2016-09-14 [656df624c] Add overflow checks to money type input function --> - Check for overflow in the money type's input function + Check for overflow in the money type's input function (Peter Eisentraut) @@ -1898,12 +1898,12 @@ --> Add simplified regexp_match() + linkend="functions-posix-regexp">regexp_match() function (Emre Hasegeli) - This is similar to regexp_matches(), but it only + This is similar to regexp_matches(), but it only returns results from the first match so it does not need to return a set, making it easier to use for simple cases. @@ -1914,8 +1914,8 @@ 2017-01-18 [d00ca333c] Implement array version of jsonb_delete and operator --> - Add a version of jsonb's delete operator that takes + Add a version of jsonb's delete operator that takes an array of keys to delete (Magnus Hagander) @@ -1925,7 +1925,7 @@ 2017-04-06 [cf35346e8] Make json_populate_record and friends operate recursivel --> - Make json_populate_record() + Make json_populate_record() and related functions process JSON arrays and objects recursively (Nikita Glukhov) @@ -1935,7 +1935,7 @@ properly converted from JSON arrays, and composite-type fields are properly converted from JSON objects. Previously, such cases would fail because the text representation of the JSON value would be fed - to array_in() or record_in(), and its + to array_in() or record_in(), and its syntax would not match what those input functions expect. @@ -1946,14 +1946,14 @@ --> Add function txid_current_if_assigned() - to return the current transaction ID or NULL if no + linkend="functions-txid-snapshot">txid_current_if_assigned() + to return the current transaction ID or NULL if no transaction ID has been assigned (Craig Ringer) This is different from txid_current(), + linkend="functions-txid-snapshot">txid_current(), which always returns a transaction ID, assigning one if necessary. Unlike that function, this function can be run on standby servers. @@ -1965,7 +1965,7 @@ --> Add function txid_status() + linkend="functions-txid-snapshot">txid_status() to check if a transaction was committed (Craig Ringer) @@ -1982,8 +1982,8 @@ --> Allow make_date() - to interpret negative years as BC years (Álvaro + linkend="functions-formatting-table">make_date() + to interpret negative years as BC years (Álvaro Herrera) @@ -1993,14 +1993,14 @@ 2016-09-28 [d3cd36a13] Make to_timestamp() and to_date() range-check fields of --> - Make to_timestamp() and to_date() reject + Make to_timestamp() and to_date() reject out-of-range input fields (Artur Zakirov) For example, - previously to_date('2009-06-40','YYYY-MM-DD') was - accepted and returned 2009-07-10. It will now generate + previously to_date('2009-06-40','YYYY-MM-DD') was + accepted and returned 2009-07-10. It will now generate an error. 
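An illustrative call of make_date()'s new BC interpretation mentioned above (the displayed result assumes the default DateStyle and is stated as an assumption, not taken from the release notes):

SELECT make_date(-44, 3, 15);
-- presumably: 0044-03-15 BC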
@@ -2019,7 +2019,7 @@ 2017-03-27 [70ec3f1f8] PL/Python: Add cursor and execute methods to plan object --> - Allow PL/Python's cursor() and execute() + Allow PL/Python's cursor() and execute() functions to be called as methods of their plan-object arguments (Peter Eisentraut) @@ -2034,7 +2034,7 @@ 2016-12-13 [55caaaeba] Improve handling of array elements as getdiag_targets an --> - Allow PL/pgSQL's GET DIAGNOSTICS statement to retrieve + Allow PL/pgSQL's GET DIAGNOSTICS statement to retrieve values into array elements (Tom Lane) @@ -2047,7 +2047,7 @@ - <link linkend="pltcl">PL/Tcl</> + <link linkend="pltcl">PL/Tcl</link> @@ -2104,7 +2104,7 @@ --> Allow specification of multiple - host names or addresses in libpq connection strings and URIs + host names or addresses in libpq connection strings and URIs (Robert Haas, Heikki Linnakangas) @@ -2119,7 +2119,7 @@ --> Allow libpq connection strings and URIs to request a read/write host, + linkend="libpq-connect-target-session-attrs">read/write host, that is a master server rather than a standby server (Victor Wagner, Mithun Cy) @@ -2127,7 +2127,7 @@ This is useful when multiple host names are specified. It is controlled by libpq connection parameter - . @@ -2136,7 +2136,7 @@ 2017-01-24 [ba005f193] Allow password file name to be specified as a libpq conn --> - Allow the password file name + Allow the password file name to be specified as a libpq connection parameter (Julian Markwort) @@ -2151,17 +2151,17 @@ --> Add function PQencryptPasswordConn() + linkend="libpq-pqencryptpasswordconn">PQencryptPasswordConn() to allow creation of more types of encrypted passwords on the client side (Michael Paquier, Heikki Linnakangas) - Previously only MD5-encrypted passwords could be created + Previously only MD5-encrypted passwords could be created using PQencryptPassword(). + linkend="libpq-pqencryptpassword">PQencryptPassword(). This new function can also create SCRAM-SHA-256-encrypted + linkend="auth-pg-hba-conf">SCRAM-SHA-256-encrypted passwords. @@ -2171,13 +2171,13 @@ 2016-08-16 [a7b5573d6] Remove separate version numbering for ecpg preprocessor. --> - Change ecpg preprocessor version from 4.12 to 10 + Change ecpg preprocessor version from 4.12 to 10 (Tom Lane) - Henceforth the ecpg version will match - the PostgreSQL distribution version number. + Henceforth the ecpg version will match + the PostgreSQL distribution version number. @@ -2200,14 +2200,14 @@ 2017-04-02 [68dba97a4] Document psql's behavior of recalling the previously exe --> - Add conditional branch support to psql (Corey + Add conditional branch support to psql (Corey Huinker) - This feature adds psql - meta-commands \if, \elif, \else, - and \endif. This is primarily helpful for scripting. + This feature adds psql + meta-commands \if, \elif, \else, + and \endif. This is primarily helpful for scripting. @@ -2216,8 +2216,8 @@ 2017-03-07 [b2678efd4] psql: Add \gx command --> - Add psql \gx meta-command to execute - (\g) a query in expanded mode (\x) + Add psql \gx meta-command to execute + (\g) a query in expanded mode (\x) (Christoph Berg) @@ -2227,12 +2227,12 @@ 2017-04-01 [f833c847b] Allow psql variable substitution to occur in backtick co --> - Expand psql variable references in + Expand psql variable references in backtick-executed strings (Tom Lane) - This is particularly useful in the new psql + This is particularly useful in the new psql conditional branch commands. 
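A minimal psql scripting sketch combining variable substitution with the new conditional meta-commands described above; the films table is borrowed from earlier examples and the variable name is hypothetical:

SELECT count(*) > 0 AS have_films FROM films \gset
\if :have_films
    \echo 'films table is populated'
\else
    \echo 'films table is empty'
\endif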
@@ -2244,23 +2244,23 @@ 2017-02-02 [fd6cd6980] Clean up psql's behavior for a few more control variable --> - Prevent psql's special variables from being set to + Prevent psql's special variables from being set to invalid values (Daniel Vérité, Tom Lane) - Previously, setting one of psql's special variables + Previously, setting one of psql's special variables to an invalid value silently resulted in the default behavior. - \set on a special variable now fails if the proposed - new value is invalid. As a special exception, \set + \set on a special variable now fails if the proposed + new value is invalid. As a special exception, \set with an empty or omitted new value, on a boolean-valued special variable, still has the effect of setting the variable - to on; but now it actually acquires that value rather - than an empty string. \unset on a special variable now + to on; but now it actually acquires that value rather + than an empty string. \unset on a special variable now explicitly sets the variable to its default value, which is also the value it acquires at startup. In sum, a control variable now always has a displayable value that reflects - what psql is actually doing. + what psql is actually doing. @@ -2269,7 +2269,7 @@ 2017-09-06 [a6c678f01] Add psql variables showing server version and psql versi --> - Add variables showing server version and psql version + Add variables showing server version and psql version (Fabien Coelho) @@ -2279,14 +2279,14 @@ 2016-11-03 [a0f357e57] psql: Split up "Modifiers" column in \d and \dD --> - Improve psql's \d (display relation) - and \dD (display domain) commands to show collation, + Improve psql's \d (display relation) + and \dD (display domain) commands to show collation, nullable, and default properties in separate columns (Peter Eisentraut) - Previously they were shown in a single Modifiers column. + Previously they were shown in a single Modifiers column. @@ -2295,7 +2295,7 @@ 2017-07-27 [77cb4a1d6] Standardize describe.c's behavior for no-matching-object --> - Make the various \d commands handle no-matching-object + Make the various \d commands handle no-matching-object cases more consistently (Daniel Gustafsson) @@ -2319,7 +2319,7 @@ 2017-03-16 [d7d77f382] psql: Add completion for \help DROP|ALTER --> - Improve psql's tab completion (Jeff Janes, + Improve psql's tab completion (Jeff Janes, Ian Barwick, Andreas Karlsson, Sehrope Sarkuni, Thomas Munro, Kevin Grittner, Dagfinn Ilmari Mannsåker) @@ -2339,7 +2339,7 @@ 2016-11-09 [41124a91e] pgbench: Allow the transaction log file prefix to be cha --> - Add pgbench option to control the log file prefix (Masahiko Sawada) @@ -2349,7 +2349,7 @@ 2017-01-20 [cdc2a7047] Allow backslash line continuations in pgbench's meta com --> - Allow pgbench's meta-commands to span multiple + Allow pgbench's meta-commands to span multiple lines (Fabien Coelho) @@ -2364,7 +2364,7 @@ 2017-08-11 [796818442] Remove pgbench's restriction on placement of -M switch. --> - Remove restriction on placement of option relative to other command line options (Tom Lane) @@ -2386,8 +2386,8 @@ --> Add pg_receivewal - option / to specify compression (Michael Paquier) @@ -2398,12 +2398,12 @@ --> Add pg_recvlogical option - to specify the ending position (Craig Ringer) - This complements the existing option. 
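Returning to the special-variable validation change described earlier in this list, a short psql sketch of the new behavior:

\set ECHO_HIDDEN bogus    -- now fails instead of silently acting as the default
\set AUTOCOMMIT           -- empty value on a boolean variable means on
\unset AUTOCOMMIT         -- now explicitly restores the built-in default value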
@@ -2412,9 +2412,9 @@ 2016-10-19 [5d58c07a4] initdb pg_basebackup: Rename -\-noxxx options to -\-no-x --> - Rename initdb - options and to be spelled + and (Vik Fearing, Peter Eisentraut) @@ -2426,9 +2426,9 @@ - <link linkend="APP-PGDUMP"><application>pg_dump</></>, - <link linkend="APP-PG-DUMPALL"><application>pg_dumpall</></>, - <link linkend="APP-PGRESTORE"><application>pg_restore</></> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link>, + <link linkend="APP-PG-DUMPALL"><application>pg_dumpall</application></link>, + <link linkend="APP-PGRESTORE"><application>pg_restore</application></link> @@ -2437,11 +2437,11 @@ 2016-09-20 [46b55e7f8] pg_restore: Add -N option to exclude schemas --> - Allow pg_restore to exclude schemas (Michael Banck) + Allow pg_restore to exclude schemas (Michael Banck) - This adds a new / option. @@ -2450,8 +2450,8 @@ 2016-11-29 [4fafa579b] Add -\-no-blobs option to pg_dump --> - Add @@ -2464,13 +2464,13 @@ 2017-03-07 [9a83d56b3] Allow pg_dumpall to dump roles w/o user passwords --> - Add pg_dumpall option - to omit role passwords (Robins Tharakan, Simon Riggs) - This allows use of pg_dumpall by non-superusers; + This allows use of pg_dumpall by non-superusers; without this option, it fails due to inability to read passwords. @@ -2490,15 +2490,15 @@ 2017-03-22 [96a7128b7] Sync pg_dump and pg_dumpall output --> - Issue fsync() on the output files generated by - pg_dump and - pg_dumpall (Michael Paquier) + Issue fsync() on the output files generated by + pg_dump and + pg_dumpall (Michael Paquier) This provides more security that the output is safely stored on disk before the program exits. This can be disabled with - the new option. @@ -2518,12 +2518,12 @@ 2016-12-21 [ecbdc4c55] Forbid invalid combination of options in pg_basebackup. --> - Allow pg_basebackup to stream write-ahead log in + Allow pg_basebackup to stream write-ahead log in tar mode (Magnus Hagander) - The WAL will be stored in a separate tar file from + The WAL will be stored in a separate tar file from the base backup. @@ -2533,13 +2533,13 @@ 2017-01-16 [e7b020f78] Make pg_basebackup use temporary replication slots --> - Make pg_basebackup use temporary replication slots + Make pg_basebackup use temporary replication slots (Magnus Hagander) Temporary replication slots will be used by default when - pg_basebackup uses WAL streaming with default + pg_basebackup uses WAL streaming with default options. 
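A minimal SQL illustration of the temporary-slot mechanism that pg_basebackup now relies on (the slot name is arbitrary; the third argument marks the slot as temporary, so it is dropped automatically at session end):

-- requires superuser or REPLICATION privilege
SELECT pg_create_physical_replication_slot('tmp_slot', true, true);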
@@ -2550,8 +2550,8 @@ --> Be more careful about fsync'ing in all required places - in pg_basebackup and - pg_receivewal (Michael Paquier) + in pg_basebackup and + pg_receivewal (Michael Paquier) @@ -2561,7 +2561,7 @@ 2016-10-19 [5d58c07a4] initdb pg_basebackup: Rename -\-noxxx options to -\-no-x --> - Add pg_basebackup option to disable fsync (Michael Paquier) @@ -2571,7 +2571,7 @@ 2016-09-28 [6ad8ac602] Exclude additional directories in pg_basebackup --> - Improve pg_basebackup's handling of which + Improve pg_basebackup's handling of which directories to skip (David Steele) @@ -2581,7 +2581,7 @@ - <application><xref linkend="app-pg-ctl"></> + <application><xref linkend="app-pg-ctl"></application> @@ -2590,7 +2590,7 @@ 2016-09-21 [e7010ce47] pg_ctl: Add wait option to promote action --> - Add wait option for 's + Add wait option for 's promote operation (Peter Eisentraut) @@ -2600,8 +2600,8 @@ 2016-10-19 [0be22457d] pg_ctl: Add long options for -w and -W --> - Add long options for pg_ctl wait () + and no-wait () (Vik Fearing) @@ -2610,8 +2610,8 @@ 2016-10-19 [caf936b09] pg_ctl: Add long option for -o --> - Add long option for pg_ctl server options - () (Peter Eisentraut) @@ -2620,14 +2620,14 @@ 2017-06-28 [f13ea95f9] Change pg_ctl to detect server-ready by watching status --> - Make pg_ctl start --wait detect server-ready by - watching postmaster.pid, not by attempting connections + Make pg_ctl start --wait detect server-ready by + watching postmaster.pid, not by attempting connections (Tom Lane) The postmaster has been changed to report its ready-for-connections - status in postmaster.pid, and pg_ctl + status in postmaster.pid, and pg_ctl now examines that file to detect whether startup is complete. This is more efficient and reliable than the old method, and it eliminates postmaster log entries about rejected connection @@ -2640,12 +2640,12 @@ 2017-06-26 [c61559ec3] Reduce pg_ctl's reaction time when waiting for postmaste --> - Reduce pg_ctl's reaction time when waiting for + Reduce pg_ctl's reaction time when waiting for postmaster start/stop (Tom Lane) - pg_ctl now probes ten times per second when waiting + pg_ctl now probes ten times per second when waiting for a postmaster state change, rather than once per second. @@ -2655,14 +2655,14 @@ 2017-07-05 [1bac5f552] pg_ctl: Make failure to complete operation a nonzero exi --> - Ensure that pg_ctl exits with nonzero status if an + Ensure that pg_ctl exits with nonzero status if an operation being waited for does not complete within the timeout (Peter Eisentraut) - The start and promote operations now return - exit status 1, not 0, in such cases. The stop operation + The start and promote operations now return + exit status 1, not 0, in such cases. The stop operation has always done that. @@ -2687,14 +2687,14 @@ - Release numbers will now have two parts (e.g., 10.1) - rather than three (e.g., 9.6.3). + Release numbers will now have two parts (e.g., 10.1) + rather than three (e.g., 9.6.3). Major versions will now increase just the first number, and minor releases will increase just the second number. Release branches will be referred to by single numbers - (e.g., 10 rather than 9.6). + (e.g., 10 rather than 9.6). This change is intended to reduce user confusion about what is a - major or minor release of PostgreSQL. + major or minor release of PostgreSQL. @@ -2708,12 +2708,12 @@ 2017-06-21 [81f056c72] Remove entab and associated detritus. 
--> - Improve behavior of pgindent + Improve behavior of pgindent (Piotr Stefaniak, Tom Lane) - We have switched to a new version of pg_bsd_indent + We have switched to a new version of pg_bsd_indent based on recent improvements made by the FreeBSD project. This fixes numerous small bugs that led to odd C code formatting decisions. Most notably, lines within parentheses (such as in a @@ -2728,14 +2728,14 @@ 2017-03-23 [eccfef81e] ICU support --> - Allow the ICU library to + Allow the ICU library to optionally be used for collation support (Peter Eisentraut) - The ICU library has versioning that allows detection + The ICU library has versioning that allows detection of collation changes between versions. It is enabled via configure - option . The default still uses the operating system's native collation library. @@ -2746,14 +2746,14 @@ --> Automatically mark all PG_FUNCTION_INFO_V1 functions - as DLLEXPORT-ed on - Windows (Laurenz Albe) + linkend="xfunc-c">PG_FUNCTION_INFO_V1 functions + as DLLEXPORT-ed on + Windows (Laurenz Albe) - If third-party code is using extern function - declarations, they should also add DLLEXPORT markers + If third-party code is using extern function + declarations, they should also add DLLEXPORT markers to those declarations. @@ -2763,10 +2763,10 @@ 2016-11-08 [1833f1a1c] Simplify code by getting rid of SPI_push, SPI_pop, SPI_r --> - Remove SPI functions SPI_push(), - SPI_pop(), SPI_push_conditional(), - SPI_pop_conditional(), - and SPI_restore_connection() as unnecessary (Tom Lane) + Remove SPI functions SPI_push(), + SPI_pop(), SPI_push_conditional(), + SPI_pop_conditional(), + and SPI_restore_connection() as unnecessary (Tom Lane) @@ -2776,9 +2776,9 @@ - A side effect of this change is that SPI_palloc() and + A side effect of this change is that SPI_palloc() and allied functions now require an active SPI connection; they do not - degenerate to simple palloc() if there is none. That + degenerate to simple palloc() if there is none. That previous behavior was not very useful and posed risks of unexpected memory leaks. @@ -2811,9 +2811,9 @@ 2016-10-09 [ecb0d20a9] Use unnamed POSIX semaphores, if available, on Linux and --> - Use POSIX semaphores rather than SysV semaphores - on Linux and FreeBSD (Tom Lane) + Use POSIX semaphores rather than SysV semaphores + on Linux and FreeBSD (Tom Lane) @@ -2835,7 +2835,7 @@ 2017-03-10 [f8f1430ae] Enable 64 bit atomics on ARM64. --> - Enable 64-bit atomic operations on ARM64 (Roman + Enable 64-bit atomic operations on ARM64 (Roman Shaposhnik) @@ -2845,13 +2845,13 @@ 2017-01-02 [1d63f7d2d] Use clock_gettime(), if available, in instr_time measure --> - Switch to using clock_gettime(), if available, for + Switch to using clock_gettime(), if available, for duration measurements (Tom Lane) - gettimeofday() is still used - if clock_gettime() is not available. + gettimeofday() is still used + if clock_gettime() is not available. 
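A short sketch of the new ICU collation support, assuming an ICU-enabled build (the collation name is arbitrary):

CREATE COLLATION german_phonebook (provider = icu, locale = 'de-u-co-phonebk');
-- phone-book ordering treats ö like oe
SELECT 'Göbel' < 'Goldmann' COLLATE german_phonebook;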
@@ -2868,9 +2868,9 @@ If no strong random number generator can be - found, configure will fail unless - the @@ -2880,7 +2880,7 @@ 2017-08-15 [d7ab908fb] Distinguish wait-for-connection from wait-for-write-read --> - Allow WaitLatchOrSocket() to wait for socket + Allow WaitLatchOrSocket() to wait for socket connection on Windows (Andres Freund) @@ -2890,7 +2890,7 @@ 2017-04-06 [3f902354b] Clean up after insufficiently-researched optimization of --> - tupconvert.c functions no longer convert tuples just to + tupconvert.c functions no longer convert tuples just to embed a different composite-type OID in them (Ashutosh Bapat, Tom Lane) @@ -2906,8 +2906,8 @@ 2016-10-11 [2b860f52e] Remove "sco" and "unixware" ports. --> - Remove SCO and Unixware ports (Tom Lane) + Remove SCO and Unixware ports (Tom Lane) @@ -2918,7 +2918,7 @@ --> Overhaul documentation build - process (Alexander Lakhin) + process (Alexander Lakhin) @@ -2927,13 +2927,13 @@ 2017-04-06 [510074f9f] Remove use of Jade and DSSSL --> - Use XSLT to build the PostgreSQL + Use XSLT to build the PostgreSQL documentation (Peter Eisentraut) - Previously Jade, DSSSL, and - JadeTex were used. + Previously Jade, DSSSL, and + JadeTex were used. @@ -2942,7 +2942,7 @@ 2016-11-15 [e36ddab11] Build HTML documentation using XSLT stylesheets by defau --> - Build HTML documentation using XSLT + Build HTML documentation using XSLT stylesheets by default (Peter Eisentraut) @@ -2961,7 +2961,7 @@ 2016-09-29 [8e91e12bc] Allow contrib/file_fdw to read from a program, like COPY --> - Allow file_fdw to read + Allow file_fdw to read from program output as well as files (Corey Huinker, Adam Gomaa) @@ -2971,7 +2971,7 @@ 2016-10-21 [7012b132d] postgres_fdw: Push down aggregates to remote servers. --> - In postgres_fdw, + In postgres_fdw, push aggregate functions to the remote server, when possible (Jeevan Chalke, Ashutosh Bapat) @@ -2988,7 +2988,7 @@ 2017-04-24 [332bec1e6] postgres_fdw: Fix join push down with extensions --> - In postgres_fdw, push joins to the remote server in + In postgres_fdw, push joins to the remote server in more cases (David Rowley, Ashutosh Bapat, Etsuro Fujita) @@ -2998,12 +2998,12 @@ 2016-08-26 [ae025a159] Support OID system column in postgres_fdw. --> - Properly support OID columns in - postgres_fdw tables (Etsuro Fujita) + Properly support OID columns in + postgres_fdw tables (Etsuro Fujita) - Previously OID columns always returned zeros. + Previously OID columns always returned zeros. @@ -3012,8 +3012,8 @@ 2017-03-21 [f7946a92b] Add btree_gist support for enum types. --> - Allow btree_gist - and btree_gin to + Allow btree_gist + and btree_gin to index enum types (Andrew Dunstan) @@ -3027,8 +3027,8 @@ 2016-11-29 [11da83a0e] Add uuid to the set of types supported by contrib/btree_ --> - Add indexing support to btree_gist for the - UUID data type (Paul Jungwirth) + Add indexing support to btree_gist for the + UUID data type (Paul Jungwirth) @@ -3037,7 +3037,7 @@ 2017-03-09 [3717dc149] Add amcheck extension to contrib. --> - Add amcheck which can + Add amcheck which can check the validity of B-tree indexes (Peter Geoghegan) @@ -3047,10 +3047,10 @@ 2017-03-27 [a6f22e835] Show ignored constants as "$N" rather than "?" in pg_sta --> - Show ignored constants as $N rather than ? + Show ignored constants as $N rather than ? 
in pg_stat_statements + linkend="pgstatstatements">pg_stat_statements (Lukas Fittl) @@ -3060,13 +3060,13 @@ 2016-09-27 [f31a931fa] Improve contrib/cube's handling of zero-D cubes, infinit --> - Improve cube's handling + Improve cube's handling of zero-dimensional cubes (Tom Lane) - This also improves handling of infinite and - NaN values. + This also improves handling of infinite and + NaN values. @@ -3076,7 +3076,7 @@ --> Allow pg_buffercache to run + linkend="pgbuffercache">pg_buffercache to run with fewer locks (Ivan Kartyshov) @@ -3090,8 +3090,8 @@ 2017-02-03 [e759854a0] pgstattuple: Add pgstathashindex. --> - Add pgstattuple - function pgstathashindex() to view hash index + Add pgstattuple + function pgstathashindex() to view hash index statistics (Ashutosh Sharma) @@ -3101,8 +3101,8 @@ 2016-09-29 [fd321a1df] Remove superuser checks in pgstattuple --> - Use GRANT permissions to - control pgstattuple function usage (Stephen Frost) + Use GRANT permissions to + control pgstattuple function usage (Stephen Frost) @@ -3115,7 +3115,7 @@ 2016-10-28 [d4b5d4cad] pgstattuple: Don't take heavyweight locks when examining --> - Reduce locking when pgstattuple examines hash + Reduce locking when pgstattuple examines hash indexes (Amit Kapila) @@ -3125,8 +3125,8 @@ 2017-03-17 [fef2bcdcb] pageinspect: Add page_checksum function --> - Add pageinspect - function page_checksum() to show a page's checksum + Add pageinspect + function page_checksum() to show a page's checksum (Tomas Vondra) @@ -3136,8 +3136,8 @@ 2017-04-04 [193f5f9e9] pageinspect: Add bt_page_items function with bytea argum --> - Add pageinspect - function bt_page_items() to print page items from a + Add pageinspect + function bt_page_items() to print page items from a page image (Tomas Vondra) @@ -3147,7 +3147,7 @@ 2017-02-02 [08bf6e529] pageinspect: Support hash indexes. --> - Add hash index support to pageinspect (Jesper + Add hash index support to pageinspect (Jesper Pedersen, Ashutosh Sharma) diff --git a/doc/src/sgml/release-7.4.sgml b/doc/src/sgml/release-7.4.sgml index bc4f4e18d0..bdbfe8e006 100644 --- a/doc/src/sgml/release-7.4.sgml +++ b/doc/src/sgml/release-7.4.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 7.4.X series. Users are encouraged to update to a newer release branch soon. @@ -47,7 +47,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. 
Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -76,7 +76,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -97,7 +97,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -111,7 +111,7 @@ - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -119,7 +119,7 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) @@ -150,7 +150,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 7.4.X release series in July 2010. Users are encouraged to update to a newer release branch soon. @@ -173,19 +173,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -194,19 +194,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -219,10 +219,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. 
Now, the ALTER will only remove the parameters that the user has permission to change. @@ -230,7 +230,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -242,7 +242,7 @@ - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -255,7 +255,7 @@ - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -263,7 +263,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -294,7 +294,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 7.4.X release series in July 2010. Users are encouraged to update to a newer release branch soon. @@ -317,7 +317,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -332,8 +332,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -351,17 +351,17 @@ - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -369,7 +369,7 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) @@ -381,14 +381,14 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) @@ -460,14 +460,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. 
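For reference, the renegotiation parameter added above can be adjusted per session in releases of that era (a sketch; the parameter was removed again in much later major releases):

SET ssl_renegotiation_limit = 0;   -- 0 disables session key renegotiation
SHOW ssl_renegotiation_limit;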
@@ -486,7 +486,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -498,7 +498,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -507,7 +507,7 @@ - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -537,8 +537,8 @@ A dump/restore is not required for those running 7.4.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 7.4.26. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 7.4.26. Also, if you are upgrading from a version earlier than 7.4.11, see . @@ -552,14 +552,14 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) @@ -573,21 +573,21 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -604,7 +604,7 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. Japan (Itagaki Takahiro) @@ -612,7 +612,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -631,8 +631,8 @@ - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -687,7 +687,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. 
The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -698,7 +698,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -711,14 +711,14 @@ - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -760,13 +760,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -781,30 +781,30 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) - Fix uninitialized variables in contrib/tsearch2's - get_covers() function (Teodor) + Fix uninitialized variables in contrib/tsearch2's + get_covers() function (Teodor) - Fix bug in to_char()'s handling of TH + Fix bug in to_char()'s handling of TH format codes (Andreas Scherbaum) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) @@ -852,7 +852,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -868,14 +868,14 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -889,7 +889,7 @@ - Fix ecpg's parsing of CREATE USER (Michael) + Fix ecpg's parsing of CREATE USER (Michael) @@ -944,27 +944,27 @@ Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) @@ -1006,18 +1006,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. 
Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -1061,7 +1061,7 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) @@ -1076,7 +1076,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -1084,36 +1084,36 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). - This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). + This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix DatumGetBool macro to not fail with gcc + Fix DatumGetBool macro to not fail with gcc 4.3 (Tom) - This problem affects old style (V0) C functions that + This problem affects old style (V0) C functions that return boolean. The fix is already in 8.3, but the need to back-patch it was not realized at the time. @@ -1121,21 +1121,21 @@ - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -1144,8 +1144,8 @@ - Fix display of constant expressions in ORDER BY - and GROUP BY (Tom) + Fix display of constant expressions in ORDER BY + and GROUP BY (Tom) @@ -1157,7 +1157,7 @@ - Fix libpq to handle NOTICE messages correctly + Fix libpq to handle NOTICE messages correctly during COPY OUT (Tom) @@ -1207,7 +1207,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -1218,18 +1218,18 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) 
But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) @@ -1249,13 +1249,13 @@ - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 7.4.18 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) @@ -1263,13 +1263,13 @@ Fix planner failure in some cases of WHERE false AND var IN - (SELECT ...) (Tom) + (SELECT ...) (Tom) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) @@ -1282,42 +1282,42 @@ - ecpg parser fixes (Michael) + ecpg parser fixes (Michael) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Fix tsvector and tsquery output routines to + Fix tsvector and tsquery output routines to escape backslashes correctly (Teodor, Bruce) - Fix crash of to_tsvector() on huge input strings (Teodor) + Fix crash of to_tsvector() on huge input strings (Teodor) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. @@ -1360,40 +1360,40 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... 
DEFAULT NULL work properly (Tom) - Fix excessive logging of SSL error messages (Tom) + Fix excessive logging of SSL error messages (Tom) - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) - Prevent CLUSTER from failing + Prevent CLUSTER from failing due to attempting to process temporary tables of other sessions (Alvaro) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) @@ -1437,28 +1437,28 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. + See CREATE FUNCTION for more information. - /contrib/tsearch2 crash fixes (Teodor) + /contrib/tsearch2 crash fixes (Teodor) - Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -1529,7 +1529,7 @@ - Fix for rare Assert() crash triggered by UNION (Tom) + Fix for rare Assert() crash triggered by UNION (Tom) @@ -1577,7 +1577,7 @@ - Improve handling of getaddrinfo() on AIX (Tom) + Improve handling of getaddrinfo() on AIX (Tom) @@ -1588,8 +1588,8 @@ - Fix failed to re-find parent key errors in - VACUUM (Tom) + Fix failed to re-find parent key errors in + VACUUM (Tom) @@ -1601,20 +1601,20 @@ - Fix error when constructing an ARRAY[] made up of multiple + Fix error when constructing an ARRAY[] made up of multiple empty elements (Tom) - to_number() and to_char(numeric) - are now STABLE, not IMMUTABLE, for - new initdb installs (Tom) + to_number() and to_char(numeric) + are now STABLE, not IMMUTABLE, for + new initdb installs (Tom) - This is because lc_numeric can potentially + This is because lc_numeric can potentially change the output of these functions. @@ -1625,7 +1625,7 @@ - This improves psql \d performance also. + This improves psql \d performance also. @@ -1665,12 +1665,12 @@ Fix core dump when an untyped literal is taken as ANYARRAY -Fix string_to_array() to handle overlapping +Fix string_to_array() to handle overlapping matches for the separator string -For example, string_to_array('123xx456xxx789', 'xx'). +For example, string_to_array('123xx456xxx789', 'xx'). Fix corner cases in pattern matching for - psql's \d commands + psql's \d commands Fix index-corrupting bugs in /contrib/ltree (Teodor) Fix backslash escaping in /contrib/dbmirror @@ -1712,9 +1712,9 @@ ANYARRAY into SQL commands, you should examine them as soon as possible to ensure that they are using recommended escaping techniques. In most cases, applications should be using subroutines provided by - libraries or drivers (such as libpq's - PQescapeStringConn()) to perform string escaping, - rather than relying on ad hoc code to do it. + libraries or drivers (such as libpq's + PQescapeStringConn()) to perform string escaping, + rather than relying on ad hoc code to do it. 
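To complete the string_to_array() example given above, the corrected overlap handling consumes separator matches left to right:

SELECT string_to_array('123xx456xxx789', 'xx');
-- result: {123,456,x789}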
@@ -1724,48 +1724,48 @@ ANYARRAY Change the server to reject invalidly-encoded multibyte characters in all cases (Tatsuo, Tom) -While PostgreSQL has been moving in this direction for +While PostgreSQL has been moving in this direction for some time, the checks are now applied uniformly to all encodings and all textual input, and are now always errors not merely warnings. This change defends against SQL-injection attacks of the type described in CVE-2006-2313. -Reject unsafe uses of \' in string literals +Reject unsafe uses of \' in string literals As a server-side defense against SQL-injection attacks of the type -described in CVE-2006-2314, the server now only accepts '' and not -\' as a representation of ASCII single quote in SQL string -literals. By default, \' is rejected only when -client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, +described in CVE-2006-2314, the server now only accepts '' and not +\' as a representation of ASCII single quote in SQL string +literals. By default, \' is rejected only when +client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, GB18030, or UHC), which is the scenario in which SQL injection is possible. -A new configuration parameter backslash_quote is available to +A new configuration parameter backslash_quote is available to adjust this behavior when needed. Note that full security against CVE-2006-2314 might require client-side changes; the purpose of -backslash_quote is in part to make it obvious that insecure +backslash_quote is in part to make it obvious that insecure clients are insecure. -Modify libpq's string-escaping routines to be +Modify libpq's string-escaping routines to be aware of encoding considerations and -standard_conforming_strings -This fixes libpq-using applications for the security +standard_conforming_strings +This fixes libpq-using applications for the security issues described in CVE-2006-2313 and CVE-2006-2314, and also future-proofs them against the planned changeover to SQL-standard string literal syntax. -Applications that use multiple PostgreSQL connections -concurrently should migrate to PQescapeStringConn() and -PQescapeByteaConn() to ensure that escaping is done correctly +Applications that use multiple PostgreSQL connections +concurrently should migrate to PQescapeStringConn() and +PQescapeByteaConn() to ensure that escaping is done correctly for the settings in use in each database connection. Applications that -do string escaping by hand should be modified to rely on library +do string escaping by hand should be modified to rely on library routines instead. Fix some incorrect encoding conversion functions -win1251_to_iso, alt_to_iso, -euc_tw_to_big5, euc_tw_to_mic, -mic_to_euc_tw were all broken to varying +win1251_to_iso, alt_to_iso, +euc_tw_to_big5, euc_tw_to_mic, +mic_to_euc_tw were all broken to varying extents. 
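A brief sketch of the quoting rules described above ('' is always accepted; \' is governed by the new parameter):

SET backslash_quote = safe_encoding;   -- the default setting
SELECT 'It''s safe' AS literal;        -- standard-conforming quote doubling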
-Clean up stray remaining uses of \' in strings +Clean up stray remaining uses of \' in strings (Bruce, Jan) Fix bug that sometimes caused OR'd index scans to @@ -1774,8 +1774,8 @@ miss rows they should have returned Fix WAL replay for case where a btree index has been truncated -Fix SIMILAR TO for patterns involving -| (Tom) +Fix SIMILAR TO for patterns involving +| (Tom) Fix server to use custom DH SSL parameters correctly (Michael Fuhr) @@ -1818,7 +1818,7 @@ Fuhr) Fix potential crash in SET -SESSION AUTHORIZATION (CVE-2006-0553) +SESSION AUTHORIZATION (CVE-2006-0553) An unprivileged user could crash the server process, resulting in momentary denial of service to other users, if the server has been compiled with Asserts enabled (which is not the default). @@ -1833,18 +1833,18 @@ created in 7.4.9 and 7.3.11 releases. Fix race condition that could lead to file already -exists errors during pg_clog file creation +exists errors during pg_clog file creation (Tom) -Properly check DOMAIN constraints for -UNKNOWN parameters in prepared statements +Properly check DOMAIN constraints for +UNKNOWN parameters in prepared statements (Neil) Fix to allow restoring dumps that have cross-schema references to custom operators (Tom) -Portability fix for testing presence of finite -and isinf during configure (Tom) +Portability fix for testing presence of finite +and isinf during configure (Tom) @@ -1872,9 +1872,9 @@ and isinf during configure (Tom) A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.8, see . - Also, you might need to REINDEX indexes on textual + Also, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or - plperl issues described below. + plperl issues described below. @@ -1888,28 +1888,28 @@ outside a transaction or in a failed transaction (Tom) Fix character string comparison for locales that consider different character combinations as equal, such as Hungarian (Tom) -This might require REINDEX to fix existing indexes on +This might require REINDEX to fix existing indexes on textual columns. Set locale environment variables during postmaster startup -to ensure that plperl won't change the locale later -This fixes a problem that occurred if the postmaster was +to ensure that plperl won't change the locale later +This fixes a problem that occurred if the postmaster was started with environment variables specifying a different locale than what -initdb had been told. Under these conditions, any use of -plperl was likely to lead to corrupt indexes. You might need -REINDEX to fix existing indexes on +initdb had been told. Under these conditions, any use of +plperl was likely to lead to corrupt indexes. You might need +REINDEX to fix existing indexes on textual columns if this has happened to you. Fix longstanding bug in strpos() and regular expression handling in certain rarely used Asian multi-byte character sets (Tatsuo) -Fix bug in /contrib/pgcrypto gen_salt, +Fix bug in /contrib/pgcrypto gen_salt, which caused it not to use all available salt space for MD5 and XDES algorithms (Marko Kreen, Solar Designer) Salts for Blowfish and standard DES are unaffected. -Fix /contrib/dblink to throw an error, +Fix /contrib/dblink to throw an error, rather than crashing, when the number of columns specified is different from what's actually returned by the query (Joe) @@ -1956,15 +1956,15 @@ corruption. 
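The SIMILAR TO alternation fix noted above concerns patterns such as these (a sketch; recall that SIMILAR TO must match the entire string):

SELECT 'abc' SIMILAR TO 'abc|def';        -- true
SELECT 'xyz' SIMILAR TO '%(abc|xyz)%';    -- true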
Prevent failure if client sends Bind protocol message when current transaction is already aborted -/contrib/ltree fixes (Teodor) +/contrib/ltree fixes (Teodor) AIX and HPUX compile fixes (Tom) Fix longstanding planning error for outer joins This bug sometimes caused a bogus error RIGHT JOIN is -only supported with merge-joinable join conditions. +only supported with merge-joinable join conditions
. -Prevent core dump in pg_autovacuum when a +Prevent core dump in pg_autovacuum when a table has been dropped @@ -1999,41 +1999,41 @@ table has been dropped Changes -Fix error that allowed VACUUM to remove -ctid chains too soon, and add more checking in code that follows -ctid links +Fix error that allowed VACUUM to remove +ctid chains too soon, and add more checking in code that follows +ctid links This fixes a long-standing problem that could cause crashes in very rare circumstances. -Fix CHAR() to properly pad spaces to the specified +Fix CHAR() to properly pad spaces to the specified length when using a multiple-byte character set (Yoshiyuki Asaba) -In prior releases, the padding of CHAR() was incorrect +In prior releases, the padding of CHAR() was incorrect because it only padded to the specified number of bytes without considering how many characters were stored. Fix the sense of the test for read-only transaction -in COPY -The code formerly prohibited COPY TO, where it should -prohibit COPY FROM. +in COPY +The code formerly prohibited COPY TO, where it should +prohibit COPY FROM. Fix planning problem with outer-join ON clauses that reference only the inner-side relation -Further fixes for x FULL JOIN y ON true corner +Further fixes for x FULL JOIN y ON true corner cases -Make array_in and array_recv more +Make array_in and array_recv more paranoid about validating their OID parameter Fix missing rows in queries like UPDATE a=... WHERE -a... with GiST index on column a +a... with GiST index on column a Improve robustness of datetime parsing Improve checking for partially-written WAL pages Improve robustness of signal handling when SSL is enabled -Don't try to open more than max_files_per_process +Don't try to open more than max_files_per_process files during postmaster startup Various memory leakage fixes Various portability improvements -Fix PL/pgSQL to handle var := var correctly when +Fix PL/pgSQL to handle var := var correctly when the variable is of pass-by-reference type -Update contrib/tsearch2 to use current Snowball +Update contrib/tsearch2 to use current Snowball code @@ -2077,10 +2077,10 @@ code - The lesser problem is that the contrib/tsearch2 module + The lesser problem is that the contrib/tsearch2 module creates several functions that are misdeclared to return - internal when they do not accept internal arguments. - This breaks type safety for all functions using internal + internal when they do not accept internal arguments. + This breaks type safety for all functions using internal arguments. @@ -2106,7 +2106,7 @@ WHERE pronamespace = 11 AND pronargs = 5 COMMIT; - Next, if you have installed contrib/tsearch2, do: + Next, if you have installed contrib/tsearch2, do: BEGIN; @@ -2124,22 +2124,22 @@ COMMIT; If this command fails with a message like function - "dex_init(text)" does not exist, then either tsearch2 + "dex_init(text)" does not exist, then either tsearch2 is not installed in this database, or you already did the update. - The above procedures must be carried out in each database - of an installation, including template1, and ideally - including template0 as well. If you do not fix the + The above procedures must be carried out in each database + of an installation, including template1, and ideally + including template0 as well. If you do not fix the template databases then any subsequently created databases will contain - the same errors. template1 can be fixed in the same way - as any other database, but fixing template0 requires + the same errors. 
template1 can be fixed in the same way + as any other database, but fixing template0 requires additional steps. First, from any database issue: UPDATE pg_database SET datallowconn = true WHERE datname = 'template0'; - Next connect to template0 and perform the above repair + Next connect to template0 and perform the above repair procedures. Finally, do: -- re-freeze template0: @@ -2156,8 +2156,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Change encoding function signature to prevent misuse -Change contrib/tsearch2 to avoid unsafe use of -INTERNAL function results +Change contrib/tsearch2 to avoid unsafe use of +INTERNAL function results Repair ancient race condition that allowed a transaction to be seen as committed for some purposes (eg SELECT FOR UPDATE) slightly sooner than for other purposes @@ -2169,56 +2169,56 @@ VACUUM freshly-inserted data, although the scenario seems of very low probability. There are no known cases of it having caused more than an Assert failure. -Fix comparisons of TIME WITH TIME ZONE values +Fix comparisons of TIME WITH TIME ZONE values The comparison code was wrong in the case where the ---enable-integer-datetimes configuration switch had been used. -NOTE: if you have an index on a TIME WITH TIME ZONE column, -it will need to be REINDEXed after installing this update, because +--enable-integer-datetimes configuration switch had been used. +NOTE: if you have an index on a TIME WITH TIME ZONE column, +it will need to be REINDEXed after installing this update, because the fix corrects the sort order of column values. -Fix EXTRACT(EPOCH) for -TIME WITH TIME ZONE values +Fix EXTRACT(EPOCH) for +TIME WITH TIME ZONE values Fix mis-display of negative fractional seconds in -INTERVAL values +INTERVAL values This error only occurred when the ---enable-integer-datetimes configuration switch had been used. +--enable-integer-datetimes configuration switch had been used. Ensure operations done during backend shutdown are counted by statistics collector -This is expected to resolve reports of pg_autovacuum +This is expected to resolve reports of pg_autovacuum not vacuuming the system catalogs often enough — it was not being told about catalog deletions caused by temporary table removal during backend exit. Additional buffer overrun checks in plpgsql (Neil) -Fix pg_dump to dump trigger names containing % +Fix pg_dump to dump trigger names containing % correctly (Neil) -Fix contrib/pgcrypto for newer OpenSSL builds +Fix contrib/pgcrypto for newer OpenSSL builds (Marko Kreen) Still more 64-bit fixes for -contrib/intagg +contrib/intagg Prevent incorrect optimization of functions returning -RECORD -Prevent to_char(interval) from dumping core for +RECORD +Prevent to_char(interval) from dumping core for month-related formats -Prevent crash on COALESCE(NULL,NULL) -Fix array_map to call PL functions correctly -Fix permission checking in ALTER DATABASE RENAME -Fix ALTER LANGUAGE RENAME -Make RemoveFromWaitQueue clean up after itself +Prevent crash on COALESCE(NULL,NULL) +Fix array_map to call PL functions correctly +Fix permission checking in ALTER DATABASE RENAME +Fix ALTER LANGUAGE RENAME +Make RemoveFromWaitQueue clean up after itself This fixes a lock management error that would only be visible if a transaction was kicked out of a wait for a lock (typically by query cancel) and then the holder of the lock released it within a very narrow window. Fix problem with untyped parameter appearing in -INSERT ... 
SELECT -Fix CLUSTER failure after -ALTER TABLE SET WITHOUT OIDS +INSERT ... SELECT +Fix CLUSTER failure after +ALTER TABLE SET WITHOUT OIDS @@ -2251,11 +2251,11 @@ holder of the lock released it within a very narrow window. Changes -Disallow LOAD to non-superusers +Disallow LOAD to non-superusers On platforms that will automatically execute initialization functions of a shared library (this includes at least Windows and ELF-based Unixen), -LOAD can be used to make the server execute arbitrary code. +LOAD can be used to make the server execute arbitrary code. Thanks to NGS Software for reporting this. Check that creator of an aggregate function has the right to execute the specified transition functions @@ -2314,7 +2314,7 @@ GMT Repair possible failure to update hint bits on disk Under rare circumstances this oversight could lead to -could not access transaction status failures, which qualifies +could not access transaction status failures, which qualifies it as a potential-data-loss bug. Ensure that hashed outer join does not miss tuples @@ -2322,11 +2322,11 @@ it as a potential-data-loss bug. Very large left joins using a hash join plan could fail to output unmatched left-side rows given just the right data distribution. -Disallow running pg_ctl as root +Disallow running pg_ctl as root This is to guard against any possible security issues. -Avoid using temp files in /tmp in make_oidjoins_check +Avoid using temp files in /tmp in make_oidjoins_check This has been reported as a security issue, though it's hardly worthy of concern since there is no reason for non-developers to use this script anyway. @@ -2343,13 +2343,13 @@ This could lead to misbehavior in some of the system-statistics views. Fix small memory leak in postmaster Fix expected both swapped tables to have TOAST -tables bug +tables
bug This could arise in cases such as CLUSTER after ALTER TABLE DROP COLUMN. -Prevent pg_ctl restart from adding -D multiple times +Prevent pg_ctl restart from adding -D multiple times Fix problem with NULL values in GiST indexes -:: is no longer interpreted as a variable in an +:: is no longer interpreted as a variable in an ECPG prepare statement @@ -2435,8 +2435,8 @@ aggregate plan Fix hashed crosstab for zero-rows case (Joe) Force cache update after renaming a column in a foreign key Pretty-print UNION queries correctly -Make psql handle \r\n newlines properly in COPY IN -pg_dump handled ACLs with grant options incorrectly +Make psql handle \r\n newlines properly in COPY IN +pg_dump handled ACLs with grant options incorrectly Fix thread support for macOS and Solaris Updated JDBC driver (build 215) with various fixes ECPG fixes @@ -2492,7 +2492,7 @@ large tables, unsigned oids, stability, temp tables, and debug mode Select-list aliases within the sub-select will now take precedence over names from outer query levels. -Do not generate NATURAL CROSS JOIN when decompiling rules (Tom) +Do not generate NATURAL CROSS JOIN when decompiling rules (Tom) Add checks for invalid field length in binary COPY (Tom) This fixes a difficult-to-exploit security hole. @@ -2531,29 +2531,29 @@ names from outer query levels. - The more severe of the two errors is that data type anyarray + The more severe of the two errors is that data type anyarray has the wrong alignment label; this is a problem because the - pg_statistic system catalog uses anyarray + pg_statistic system catalog uses anyarray columns. The mislabeling can cause planner misestimations and even - crashes when planning queries that involve WHERE clauses on - double-aligned columns (such as float8 and timestamp). + crashes when planning queries that involve WHERE clauses on + double-aligned columns (such as float8 and timestamp). It is strongly recommended that all installations repair this error, either by initdb or by following the manual repair procedure given below. - The lesser error is that the system view pg_settings + The lesser error is that the system view pg_settings ought to be marked as having public update access, to allow - UPDATE pg_settings to be used as a substitute for - SET. This can also be fixed either by initdb or manually, + UPDATE pg_settings to be used as a substitute for + SET. This can also be fixed either by initdb or manually, but it is not necessary to fix unless you want to use UPDATE - pg_settings. + pg_settings. If you wish not to do an initdb, the following procedure will work - for fixing pg_statistic. As the database superuser, + for fixing pg_statistic. As the database superuser, do: @@ -2573,28 +2573,28 @@ ANALYZE; This can be done in a live database, but beware that all backends running in the altered database must be restarted before it is safe to - repopulate pg_statistic. + repopulate pg_statistic. - To repair the pg_settings error, simply do: + To repair the pg_settings error, simply do: GRANT SELECT, UPDATE ON pg_settings TO PUBLIC; - The above procedures must be carried out in each database - of an installation, including template1, and ideally - including template0 as well. If you do not fix the + The above procedures must be carried out in each database + of an installation, including template1, and ideally + including template0 as well. If you do not fix the template databases then any subsequently created databases will contain - the same errors. 
template1 can be fixed in the same way - as any other database, but fixing template0 requires + the same errors. template1 can be fixed in the same way + as any other database, but fixing template0 requires additional steps. First, from any database issue: UPDATE pg_database SET datallowconn = true WHERE datname = 'template0'; - Next connect to template0 and perform the above repair + Next connect to template0 and perform the above repair procedures. Finally, do: -- re-freeze template0: @@ -2614,28 +2614,28 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; -Fix pg_statistic alignment bug that could crash optimizer +Fix pg_statistic alignment bug that could crash optimizer See above for details about this problem. -Allow non-super users to update pg_settings +Allow non-super users to update pg_settings Fix several optimizer bugs, most of which led to -variable not found in subplan target lists errors +variable not found in subplan target lists errors Avoid out-of-memory failure during startup of large multiple index scan Fix multibyte problem that could lead to out of -memory error during COPY IN -Fix problems with SELECT INTO / CREATE -TABLE AS from tables without OIDs -Fix problems with alter_table regression test +memory error during COPY IN +Fix problems with SELECT INTO / CREATE +TABLE AS from tables without OIDs +Fix problems with alter_table regression test during parallel testing Fix problems with hitting open file limit, especially on macOS (Tom) Partial fix for Turkish-locale issues initdb will succeed now in Turkish locale, but there are still some -inconveniences associated with the i/I problem. +inconveniences associated with the i/I problem. Make pg_dump set client encoding on restore Other minor pg_dump fixes Allow ecpg to again use C keywords as column names (Michael) -Added ecpg WHENEVER NOT_FOUND to -SELECT/INSERT/UPDATE/DELETE (Michael) +Added ecpg WHENEVER NOT_FOUND to +SELECT/INSERT/UPDATE/DELETE (Michael) Fix ecpg crash for queries calling set-returning functions (Michael) Various other ecpg fixes (Michael) Fixes for Borland compiler @@ -2810,7 +2810,7 @@ DROP SCHEMA information_schema CASCADE; without sorting, by accumulating results into a hash table with one entry per group. It will still use the sort technique, however, if the hash table is estimated to be too - large to fit in sort_mem. + large to fit in sort_mem. @@ -3125,16 +3125,16 @@ DROP SCHEMA information_schema CASCADE; Trailing spaces are now trimmed when converting from type - char(n) to - varchar(n) or text. + char(n) to + varchar(n) or text. This is what most people always expected to happen anyway. - The data type float(p) now - measures p in binary digits, not decimal + The data type float(p) now + measures p in binary digits, not decimal digits. The new behavior follows the SQL standard. @@ -3143,11 +3143,11 @@ DROP SCHEMA information_schema CASCADE; Ambiguous date values now must match the ordering specified by the datestyle setting. In prior releases, a - date specification of 10/20/03 was interpreted as a - date in October even if datestyle specified that + date specification of 10/20/03 was interpreted as a + date in October even if datestyle specified that the day should be first. 7.4 will throw an error if a date specification is invalid for the current setting of - datestyle. + datestyle. 
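As a sketch of the stricter datestyle behavior described above (session-level settings shown; exact error wording varies by release):

    SET datestyle TO 'SQL, DMY';
    SELECT '20/10/03'::date;   -- accepted: day 20, month 10, year 2003
    SELECT '10/20/03'::date;   -- rejected from 7.4 on: 20 is not a valid month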
@@ -3167,28 +3167,28 @@ DROP SCHEMA information_schema CASCADE; no longer work as expected in column default expressions; they now cause the time of the table creation to be the default, not the time of the insertion. Functions such as - now(), current_timestamp, or + now(), current_timestamp, or current_date should be used instead. In previous releases, there was special code so that strings such as 'now' were interpreted at - INSERT time and not at table creation time, but + INSERT time and not at table creation time, but this work around didn't cover all cases. Release 7.4 now requires that defaults be defined properly using functions such - as now() or current_timestamp. These + as now() or current_timestamp. These will work in all situations. - The dollar sign ($) is no longer allowed in + The dollar sign ($) is no longer allowed in operator names. It can instead be a non-first character in identifiers. This was done to improve compatibility with other database systems, and to avoid syntax problems when parameter - placeholders ($n) are written + placeholders ($n) are written adjacent to operators. @@ -3333,14 +3333,14 @@ DROP SCHEMA information_schema CASCADE; - Allow IN/NOT IN to be handled via hash + Allow IN/NOT IN to be handled via hash tables (Tom) - Improve NOT IN (subquery) + Improve NOT IN (subquery) performance (Tom) @@ -3490,19 +3490,19 @@ DROP SCHEMA information_schema CASCADE; - Rename server parameter server_min_messages to log_min_messages (Bruce) + Rename server parameter server_min_messages to log_min_messages (Bruce) This was done so most parameters that control the server logs - begin with log_. + begin with log_. - Rename show_*_stats to log_*_stats (Bruce) - Rename show_source_port to log_source_port (Bruce) - Rename hostname_lookup to log_hostname (Bruce) + Rename show_*_stats to log_*_stats (Bruce) + Rename show_source_port to log_source_port (Bruce) + Rename hostname_lookup to log_hostname (Bruce) - Add checkpoint_warning to warn of excessive checkpointing (Bruce) + Add checkpoint_warning to warn of excessive checkpointing (Bruce) In prior releases, it was difficult to determine if checkpoint was happening too frequently. This feature adds a warning to the @@ -3514,8 +3514,8 @@ DROP SCHEMA information_schema CASCADE; - Change debug server log messages to output as DEBUG - rather than LOG (Bruce) + Change debug server log messages to output as DEBUG + rather than LOG (Bruce) @@ -3529,8 +3529,8 @@ DROP SCHEMA information_schema CASCADE; - log_min_messages/client_min_messages now - controls debug_* output (Bruce) + log_min_messages/client_min_messages now + controls debug_* output (Bruce) This centralizes client debug information so all debug output @@ -3589,15 +3589,15 @@ DROP SCHEMA information_schema CASCADE; Add new columns in pg_settings: - context, type, source, - min_val, max_val (Joe) + context, type, source, + min_val, max_val (Joe) - Make default shared_buffers 1000 and - max_connections 100, if possible (Tom) + Make default shared_buffers 1000 and + max_connections 100, if possible (Tom) Prior versions defaulted to 64 shared buffers so PostgreSQL @@ -3612,7 +3612,7 @@ DROP SCHEMA information_schema CASCADE; New pg_hba.conf record type - hostnossl to prevent SSL connections (Jon + hostnossl to prevent SSL connections (Jon Jensen) @@ -3675,7 +3675,7 @@ DROP SCHEMA information_schema CASCADE; Add option to prevent auto-addition of tables referenced in query (Nigel J. 
Andrews) By default, tables mentioned in the query are automatically - added to the FROM clause if they are not already + added to the FROM clause if they are not already there. This is compatible with historic POSTGRES behavior but is contrary to the SQL standard. This option allows selecting @@ -3692,9 +3692,9 @@ DROP SCHEMA information_schema CASCADE; - Allow expressions to be used in LIMIT/OFFSET (Tom) + Allow expressions to be used in LIMIT/OFFSET (Tom) - In prior releases, LIMIT/OFFSET could + In prior releases, LIMIT/OFFSET could only use constants, not expressions. @@ -3780,7 +3780,7 @@ DROP SCHEMA information_schema CASCADE; Improve automatic type casting for domains (Rod, Tom) Allow dollar signs in identifiers, except as first character (Tom) - Disallow dollar signs in operator names, so x=$1 works (Tom) + Disallow dollar signs in operator names, so x=$1 works (Tom) @@ -3863,9 +3863,9 @@ DROP SCHEMA information_schema CASCADE; - Implement SQL-compatible options FIRST, - LAST, ABSOLUTE n, - RELATIVE n for + Implement SQL-compatible options FIRST, + LAST, ABSOLUTE n, + RELATIVE n for FETCH and MOVE (Tom) @@ -3888,18 +3888,18 @@ DROP SCHEMA information_schema CASCADE; Prevent CLUSTER on partial indexes (Tom) - Allow DOS and Mac line-endings in COPY files (Bruce) + Allow DOS and Mac line-endings in COPY files (Bruce) Disallow literal carriage return as a data value, - backslash-carriage-return and \r are still allowed + backslash-carriage-return and \r are still allowed (Bruce) - COPY changes (binary, \.) (Tom) + COPY changes (binary, \.) (Tom) @@ -3965,7 +3965,7 @@ DROP SCHEMA information_schema CASCADE; - Improve reliability of LISTEN/NOTIFY (Tom) + Improve reliability of LISTEN/NOTIFY (Tom) @@ -3976,8 +3976,8 @@ DROP SCHEMA information_schema CASCADE; requirement of a standalone session, which was necessary in previous releases. The only tables that now require a standalone session for reindexing are the global system tables - pg_database, pg_shadow, and - pg_group. + pg_database, pg_shadow, and + pg_group. @@ -4003,14 +4003,14 @@ DROP SCHEMA information_schema CASCADE; - Remove rarely used functions oidrand, - oidsrand, and userfntest functions + Remove rarely used functions oidrand, + oidsrand, and userfntest functions (Neil) - Add md5() function to main server, already in contrib/pgcrypto (Joe) + Add md5() function to main server, already in contrib/pgcrypto (Joe) An MD5 function was frequently requested. 
For more complex encryption capabilities, use @@ -4067,8 +4067,8 @@ DROP SCHEMA information_schema CASCADE; Allow WHERE qualification - expr op ANY/SOME/ALL - (array_expr) (Joe) + expr op ANY/SOME/ALL + (array_expr) (Joe) This allows arrays to behave like a list of values, for purposes @@ -4079,10 +4079,10 @@ DROP SCHEMA information_schema CASCADE; - New array functions array_append, - array_cat, array_lower, - array_prepend, array_to_string, - array_upper, string_to_array (Joe) + New array functions array_append, + array_cat, array_lower, + array_prepend, array_to_string, + array_upper, string_to_array (Joe) @@ -4107,14 +4107,14 @@ DROP SCHEMA information_schema CASCADE; Trim trailing spaces when char is cast to - varchar or text (Tom) + varchar or text (Tom) - Make float(p) measure the precision - p in binary digits, not decimal digits + Make float(p) measure the precision + p in binary digits, not decimal digits (Tom) @@ -4164,9 +4164,9 @@ DROP SCHEMA information_schema CASCADE; - Add new datestyle values MDY, - DMY, and YMD to set input field order; - honor US and European for backward + Add new datestyle values MDY, + DMY, and YMD to set input field order; + honor US and European for backward compatibility (Tom) @@ -4182,10 +4182,10 @@ DROP SCHEMA information_schema CASCADE; - Treat NaN as larger than any other value in min()/max() (Tom) + Treat NaN as larger than any other value in min()/max() (Tom) NaN was already sorted after ordinary numeric values for most - purposes, but min() and max() didn't + purposes, but min() and max() didn't get this right. @@ -4203,7 +4203,7 @@ DROP SCHEMA information_schema CASCADE; - Allow time to be specified as 040506 or 0405 (Tom) + Allow time to be specified as 040506 or 0405 (Tom) @@ -4275,7 +4275,7 @@ DROP SCHEMA information_schema CASCADE; - Add new parameter $0 in PL/pgSQL representing the + Add new parameter $0 in PL/pgSQL representing the function's actual return type (Joe) @@ -4310,12 +4310,12 @@ DROP SCHEMA information_schema CASCADE; Improve tab completion (Rod, Ross Reedstrom, Ian Barwick) - Reorder \? help into groupings (Harald Armin Massa, Bruce) + Reorder \? help into groupings (Harald Armin Massa, Bruce) Add backslash commands for listing schemas, casts, and conversions (Christopher) - \encoding now changes based on the server parameter + \encoding now changes based on the server parameter client_encoding (Tom) @@ -4328,7 +4328,7 @@ DROP SCHEMA information_schema CASCADE; Save editor buffer into readline history (Ross) - When \e is used to edit a query, the result is saved + When \e is used to edit a query, the result is saved in the readline history for retrieval using the up arrow. 
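Returning to the array support noted earlier in these notes, a brief sketch of the new ANY/ALL qualification over arrays and the new array functions (values are illustrative only):

    SELECT 3 = ANY (ARRAY[1,2,3]);              -- true: matches an element
    SELECT 10 > ALL (ARRAY[1,2,3]);             -- true: exceeds every element
    SELECT array_to_string(ARRAY[1,2,3], ',');  -- '1,2,3'
    SELECT string_to_array('1,2,3', ',');       -- {1,2,3}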
@@ -4373,14 +4373,14 @@ DROP SCHEMA information_schema CASCADE; - Have pg_dumpall use GRANT/REVOKE to dump database-level privileges (Tom) + Have pg_dumpall use GRANT/REVOKE to dump database-level privileges (Tom) - Allow pg_dumpall to support the options , + , of pg_dump (Tom) @@ -4565,7 +4565,7 @@ DROP SCHEMA information_schema CASCADE; Allow libpq to compile with Borland C++ compiler (Lester Godwin, Karl Waclawek) Use our own version of getopt_long() if needed (Peter) Convert administration scripts to C (Peter) - Bison >= 1.85 is now required to build the PostgreSQL grammar, if building from CVS + Bison >= 1.85 is now required to build the PostgreSQL grammar, if building from CVS Merge documentation into one book (Peter) Add Windows compatibility functions (Bruce) Allow client interfaces to compile under MinGW (Bruce) @@ -4605,16 +4605,16 @@ DROP SCHEMA information_schema CASCADE; Update btree_gist (Oleg) New tsearch2 full-text search module (Oleg, Teodor) Add hash-based crosstab function to tablefuncs (Joe) - Add serial column to order connectby() siblings in tablefuncs (Nabil Sayegh,Joe) + Add serial column to order connectby() siblings in tablefuncs (Nabil Sayegh,Joe) Add named persistent connections to dblink (Shridhar Daithanka) New pg_autovacuum allows automatic VACUUM (Matthew T. O'Connor) - Make pgbench honor environment variables PGHOST, PGPORT, PGUSER (Tatsuo) + Make pgbench honor environment variables PGHOST, PGPORT, PGUSER (Tatsuo) Improve intarray (Teodor Sigaev) Improve pgstattuple (Rod) Fix bug in metaphone() in fuzzystrmatch Improve adddepend (Rod) Update spi/timetravel (Böjthe Zoltán) - Fix dbase + Fix dbase option and improve non-ASCII handling (Thomas Behr, Márcio Smiderle) Remove array module because features now included by default (Joe) diff --git a/doc/src/sgml/release-8.0.sgml b/doc/src/sgml/release-8.0.sgml index 0f43e24b1d..46ca87e93a 100644 --- a/doc/src/sgml/release-8.0.sgml +++ b/doc/src/sgml/release-8.0.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.0.X series. Users are encouraged to update to a newer release branch soon. @@ -47,7 +47,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -76,7 +76,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -104,7 +104,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -130,7 +130,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -138,28 +138,28 @@ Fix possible data corruption in ALTER TABLE ... 
SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -167,13 +167,13 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) @@ -187,7 +187,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -220,7 +220,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.0.X release series in July 2010. Users are encouraged to update to a newer release branch soon. @@ -243,19 +243,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -264,19 +264,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -289,10 +289,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... 
RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. @@ -300,7 +300,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -312,7 +312,7 @@ - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -325,14 +325,14 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -340,7 +340,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -353,7 +353,7 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. @@ -380,7 +380,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.0.X release series in July 2010. Users are encouraged to update to a newer release branch soon. @@ -403,7 +403,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -432,8 +432,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -459,7 +459,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -467,17 +467,17 @@ - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. 
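A hypothetical pg_hba.conf fragment illustrating the quoting rule above (databases, addresses, and the role name are placeholders):

    # a quoted @ is a literal role name, not a file inclusion:
    host  all  "@ops"            0.0.0.0/0  md5
    # to include a file whose path contains spaces, quote only the path:
    host  all  @"/path to/file"  0.0.0.0/0  md5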
@@ -485,7 +485,7 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) @@ -499,7 +499,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -511,28 +511,28 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. @@ -604,14 +604,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. @@ -630,7 +630,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -649,7 +649,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -664,20 +664,20 @@ - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -685,7 +685,7 @@ - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -716,8 +716,8 @@ A dump/restore is not required for those running 8.0.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 8.0.22. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 8.0.22. Also, if you are upgrading from a version earlier than 8.0.6, see . 
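A sketch of locating the indexes affected by the hash-function change before reindexing (this catalog query lists all hash indexes; narrowing to interval columns, and the actual index name, are local details):

    SELECT c.relname
    FROM pg_class c JOIN pg_am a ON a.oid = c.relam
    WHERE c.relkind = 'i' AND a.amname = 'hash';
    REINDEX INDEX my_interval_hash_idx;  -- hypothetical name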
@@ -731,14 +731,14 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) @@ -752,32 +752,32 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -794,7 +794,7 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. Japan (Itagaki Takahiro) @@ -802,7 +802,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -821,22 +821,22 @@ - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -849,7 +849,7 @@ - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Jordan, Pakistan, Argentina/San_Luis, Cuba, Jordan (historical correction only), Mauritius, Morocco, Palestine, Syria, Tunisia. @@ -900,7 +900,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. 
The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -911,7 +911,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -924,14 +924,14 @@ - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -973,13 +973,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -994,30 +994,30 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) - Fix uninitialized variables in contrib/tsearch2's - get_covers() function (Teodor) + Fix uninitialized variables in contrib/tsearch2's + get_covers() function (Teodor) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) - Update time zone data files to tzdata release 2009a (for + Update time zone data files to tzdata release 2009a (for Kathmandu and historical DST corrections in Switzerland, Cuba) @@ -1065,7 +1065,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -1095,14 +1095,14 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -1116,19 +1116,19 @@ - Fix ecpg's parsing of CREATE USER (Michael) + Fix ecpg's parsing of CREATE USER (Michael) - Fix recent breakage of pg_ctl restart (Tom) + Fix recent breakage of pg_ctl restart (Tom) - Update time zone data files to tzdata release 2008i (for + Update time zone data files to tzdata release 2008i (for DST law changes in Argentina, Brazil, Mauritius, Syria) @@ -1176,19 +1176,19 @@ This responds to reports that the counters could overflow in sufficiently long transactions, leading to unexpected lock is - already held errors. + already held errors. Add checks in executor startup to ensure that the tuples produced by an - INSERT or UPDATE will match the target table's + INSERT or UPDATE will match the target table's current rowtype (Tom) - ALTER COLUMN TYPE, followed by re-use of a previously + ALTER COLUMN TYPE, followed by re-use of a previously cached plan, could produce this type of situation. The check protects against data corruption and/or crashes that could ensue. 
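A sketch of the stale-plan hazard the new executor check guards against (table and statement names are hypothetical; later releases replan automatically instead of erroring):

    PREPARE ins (integer) AS INSERT INTO t (id) VALUES ($1);
    ALTER TABLE t ALTER COLUMN id TYPE bigint;  -- rowtype changes under the plan
    EXECUTE ins (42);  -- the check now rejects the mismatched cached plan
                       -- rather than risking corrupt tuples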
@@ -1210,21 +1210,21 @@ Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. @@ -1247,21 +1247,21 @@ - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) - Fix pg_ctl to properly preserve postmaster - command-line arguments across a restart (Bruce) + Fix pg_ctl to properly preserve postmaster + command-line arguments across a restart (Bruce) - Update time zone data files to tzdata release 2008f (for + Update time zone data files to tzdata release 2008f (for DST law changes in Argentina, Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and Paraguay) @@ -1304,18 +1304,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -1358,7 +1358,7 @@ - Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new + Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new column is correctly checked to see if it's been initialized to all non-nulls (Brendan Jurd) @@ -1370,8 +1370,8 @@ - Fix possible CREATE TABLE failure when inheriting the - same constraint from multiple parent relations that + Fix possible CREATE TABLE failure when inheriting the + same constraint from multiple parent relations that inherited that constraint from a common ancestor (Tom) @@ -1379,7 +1379,7 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) @@ -1394,7 +1394,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -1402,24 +1402,24 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). - This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). 
+ This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). - Update time zone data files to tzdata release 2008c (for + Update time zone data files to tzdata release 2008c (for DST law changes in Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis, and Chile) @@ -1427,34 +1427,34 @@ - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix core dump in contrib/xml2's - xpath_table() function when the input query returns a + Fix core dump in contrib/xml2's + xpath_table() function when the input query returns a NULL value (Tom) - Fix contrib/xml2's makefile to not override - CFLAGS (Tom) + Fix contrib/xml2's makefile to not override + CFLAGS (Tom) - Fix DatumGetBool macro to not fail with gcc + Fix DatumGetBool macro to not fail with gcc 4.3 (Tom) - This problem affects old style (V0) C functions that + This problem affects old style (V0) C functions that return boolean. The fix is already in 8.3, but the need to back-patch it was not realized at the time. @@ -1462,21 +1462,21 @@ - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -1502,19 +1502,19 @@ - Fix unrecognized node type error in some variants of - ALTER OWNER (Tom) + Fix unrecognized node type error in some variants of + ALTER OWNER (Tom) - Fix pg_ctl to correctly extract the postmaster's port + Fix pg_ctl to correctly extract the postmaster's port number from command-line options (Itagaki Takahiro, Tom) - Previously, pg_ctl start -w could try to contact the + Previously, pg_ctl start -w could try to contact the postmaster on the wrong port, leading to bogus reports of startup failure. @@ -1522,20 +1522,20 @@ - Use - This is known to be necessary when building PostgreSQL - with gcc 4.3 or later. + This is known to be necessary when building PostgreSQL + with gcc 4.3 or later. - Fix display of constant expressions in ORDER BY - and GROUP BY (Tom) + Fix display of constant expressions in ORDER BY + and GROUP BY (Tom) @@ -1547,7 +1547,7 @@ - Fix libpq to handle NOTICE messages correctly + Fix libpq to handle NOTICE messages correctly during COPY OUT (Tom) @@ -1579,8 +1579,8 @@ - This is the last 8.0.X release for which the PostgreSQL - community will produce binary packages for Windows. + This is the last 8.0.X release for which the PostgreSQL + community will produce binary packages for Windows. Windows users are encouraged to move to 8.2.X or later, since there are Windows-specific fixes in 8.2.X that are impractical to back-port. 
8.0.X will continue to @@ -1606,7 +1606,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -1617,18 +1617,18 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) @@ -1648,20 +1648,20 @@ - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 8.0.14 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) - Update time zone data files to tzdata release 2007k + Update time zone data files to tzdata release 2007k (in particular, recent Argentina changes) (Tom) @@ -1669,14 +1669,14 @@ Fix planner failure in some cases of WHERE false AND var IN - (SELECT ...) (Tom) + (SELECT ...) (Tom) Preserve the tablespace of indexes that are - rebuilt by ALTER TABLE ... ALTER COLUMN TYPE (Tom) + rebuilt by ALTER TABLE ... ALTER COLUMN TYPE (Tom) @@ -1695,27 +1695,27 @@ - Make VACUUM not use all of maintenance_work_mem + Make VACUUM not use all of maintenance_work_mem when the table is too small for it to be useful (Alvaro) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) - Fix PL/Perl to cope when platform's Perl defines type bool - as int rather than char (Tom) + Fix PL/Perl to cope when platform's Perl defines type bool + as int rather than char (Tom) While this could theoretically happen anywhere, no standard build of - Perl did things this way ... until macOS 10.5. + Perl did things this way ... until macOS 10.5. 
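As an illustration of the trojan-horse index scenario closed above (all names are hypothetical; the function body stands in for arbitrary attacker code):

    CREATE FUNCTION snoop(t text) RETURNS text AS
    $$BEGIN /* attacker-controlled code would run here */ RETURN t; END$$
    LANGUAGE plpgsql IMMUTABLE;
    CREATE INDEX victim_idx ON victim_table (snoop(col));
    -- before the fix, a superuser's routine VACUUM FULL would execute
    -- snoop() with superuser rights; now it runs as the table owner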
@@ -1727,49 +1727,49 @@ - Fix pg_dump to correctly handle inheritance child tables + Fix pg_dump to correctly handle inheritance child tables that have default expressions different from their parent's (Tom) - ecpg parser fixes (Michael) + ecpg parser fixes (Michael) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Fix tsvector and tsquery output routines to + Fix tsvector and tsquery output routines to escape backslashes correctly (Teodor, Bruce) - Fix crash of to_tsvector() on huge input strings (Teodor) + Fix crash of to_tsvector() on huge input strings (Teodor) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. @@ -1812,20 +1812,20 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) - Fix excessive logging of SSL error messages (Tom) + Fix excessive logging of SSL error messages (Tom) @@ -1838,7 +1838,7 @@ - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) @@ -1851,7 +1851,7 @@ - Prevent CLUSTER from failing + Prevent CLUSTER from failing due to attempting to process temporary tables of other sessions (Alvaro) @@ -1870,14 +1870,14 @@ - Suppress timezone name (%Z) in log timestamps on Windows + Suppress timezone name (%Z) in log timestamps on Windows because of possible encoding mismatches (Tom) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) @@ -1921,28 +1921,28 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. + See CREATE FUNCTION for more information. 
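A sketch of the secure setting this change enables: a SECURITY DEFINER function can begin by pinning its search path, naming the temporary schema explicitly and last (the admin schema is a placeholder):

    SET search_path = admin, pg_catalog, pg_temp;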
- /contrib/tsearch2 crash fixes (Teodor) + /contrib/tsearch2 crash fixes (Teodor) - Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -2061,7 +2061,7 @@ - Fix for rare Assert() crash triggered by UNION (Tom) + Fix for rare Assert() crash triggered by UNION (Tom) @@ -2109,7 +2109,7 @@ - Improve handling of getaddrinfo() on AIX (Tom) + Improve handling of getaddrinfo() on AIX (Tom) @@ -2120,15 +2120,15 @@ - Fix failed to re-find parent key errors in - VACUUM (Tom) + Fix failed to re-find parent key errors in + VACUUM (Tom) Fix race condition for truncation of a large relation across a - gigabyte boundary by VACUUM (Tom) + gigabyte boundary by VACUUM (Tom) @@ -2146,7 +2146,7 @@ - Fix error when constructing an ARRAY[] made up of multiple + Fix error when constructing an ARRAY[] made up of multiple empty elements (Tom) @@ -2159,13 +2159,13 @@ - to_number() and to_char(numeric) - are now STABLE, not IMMUTABLE, for - new initdb installs (Tom) + to_number() and to_char(numeric) + are now STABLE, not IMMUTABLE, for + new initdb installs (Tom) - This is because lc_numeric can potentially + This is because lc_numeric can potentially change the output of these functions. @@ -2176,7 +2176,7 @@ - This improves psql \d performance also. + This improves psql \d performance also. @@ -2225,28 +2225,28 @@ Changes -Fix crash when referencing NEW row +Fix crash when referencing NEW row values in rule WHERE expressions (Tom) Fix core dump when an untyped literal is taken as ANYARRAY Fix mishandling of AFTER triggers when query contains a SQL function returning multiple rows (Tom) -Fix ALTER TABLE ... TYPE to recheck -NOT NULL for USING clause (Tom) -Fix string_to_array() to handle overlapping +Fix ALTER TABLE ... TYPE to recheck +NOT NULL for USING clause (Tom) +Fix string_to_array() to handle overlapping matches for the separator string -For example, string_to_array('123xx456xxx789', 'xx'). +For example, string_to_array('123xx456xxx789', 'xx'). Fix corner cases in pattern matching for - psql's \d commands + psql's \d commands Fix index-corrupting bugs in /contrib/ltree (Teodor) -Numerous robustness fixes in ecpg (Joachim +Numerous robustness fixes in ecpg (Joachim Wieland) Fix backslash escaping in /contrib/dbmirror Fix instability of statistics collection on Win32 (Tom, Andrew) -Fixes for AIX and -Intel compilers (Tom) +Fixes for AIX and +Intel compilers (Tom) @@ -2283,9 +2283,9 @@ Wieland) into SQL commands, you should examine them as soon as possible to ensure that they are using recommended escaping techniques. In most cases, applications should be using subroutines provided by - libraries or drivers (such as libpq's - PQescapeStringConn()) to perform string escaping, - rather than relying on ad hoc code to do it. + libraries or drivers (such as libpq's + PQescapeStringConn()) to perform string escaping, + rather than relying on ad hoc code to do it. @@ -2295,48 +2295,48 @@ Wieland) Change the server to reject invalidly-encoded multibyte characters in all cases (Tatsuo, Tom) -While PostgreSQL has been moving in this direction for +While PostgreSQL has been moving in this direction for some time, the checks are now applied uniformly to all encodings and all textual input, and are now always errors not merely warnings. This change defends against SQL-injection attacks of the type described in CVE-2006-2313. 
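A sketch of the uniform rejection of invalid encoding, demonstrated here with convert_from() on a modern server (0xC3 alone is an incomplete UTF-8 sequence):

    SELECT convert_from('\xc3'::bytea, 'UTF8');
    -- ERROR:  invalid byte sequence for encoding "UTF8": 0xc3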
-Reject unsafe uses of \' in string literals +Reject unsafe uses of \' in string literals As a server-side defense against SQL-injection attacks of the type -described in CVE-2006-2314, the server now only accepts '' and not -\' as a representation of ASCII single quote in SQL string -literals. By default, \' is rejected only when -client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, +described in CVE-2006-2314, the server now only accepts '' and not +\' as a representation of ASCII single quote in SQL string +literals. By default, \' is rejected only when +client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, GB18030, or UHC), which is the scenario in which SQL injection is possible. -A new configuration parameter backslash_quote is available to +A new configuration parameter backslash_quote is available to adjust this behavior when needed. Note that full security against CVE-2006-2314 might require client-side changes; the purpose of -backslash_quote is in part to make it obvious that insecure +backslash_quote is in part to make it obvious that insecure clients are insecure. -Modify libpq's string-escaping routines to be +Modify libpq's string-escaping routines to be aware of encoding considerations and -standard_conforming_strings -This fixes libpq-using applications for the security +standard_conforming_strings +This fixes libpq-using applications for the security issues described in CVE-2006-2313 and CVE-2006-2314, and also future-proofs them against the planned changeover to SQL-standard string literal syntax. -Applications that use multiple PostgreSQL connections -concurrently should migrate to PQescapeStringConn() and -PQescapeByteaConn() to ensure that escaping is done correctly +Applications that use multiple PostgreSQL connections +concurrently should migrate to PQescapeStringConn() and +PQescapeByteaConn() to ensure that escaping is done correctly for the settings in use in each database connection. Applications that -do string escaping by hand should be modified to rely on library +do string escaping by hand should be modified to rely on library routines instead. Fix some incorrect encoding conversion functions -win1251_to_iso, alt_to_iso, -euc_tw_to_big5, euc_tw_to_mic, -mic_to_euc_tw were all broken to varying +win1251_to_iso, alt_to_iso, +euc_tw_to_big5, euc_tw_to_mic, +mic_to_euc_tw were all broken to varying extents. -Clean up stray remaining uses of \' in strings +Clean up stray remaining uses of \' in strings (Bruce, Jan) Fix bug that sometimes caused OR'd index scans to @@ -2345,10 +2345,10 @@ miss rows they should have returned Fix WAL replay for case where a btree index has been truncated -Fix SIMILAR TO for patterns involving -| (Tom) +Fix SIMILAR TO for patterns involving +| (Tom) -Fix SELECT INTO and CREATE TABLE AS to +Fix SELECT INTO and CREATE TABLE AS to create tables in the default tablespace, not the base directory (Kris Jurka) @@ -2396,7 +2396,7 @@ Fuhr) Fix potential crash in SET -SESSION AUTHORIZATION (CVE-2006-0553) +SESSION AUTHORIZATION (CVE-2006-0553) An unprivileged user could crash the server process, resulting in momentary denial of service to other users, if the server has been compiled with Asserts enabled (which is not the default). @@ -2411,44 +2411,44 @@ created in 8.0.4, 7.4.9, and 7.3.11 releases. 
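To illustrate the quoting rules described above (a sketch against the defaults of that era; later releases also change standard_conforming_strings, which alters the second example's failure mode):

    SELECT 'it''s safe';       -- doubled quote: always accepted
    SELECT 'it\'s risky';      -- \' is rejected under client-only encodings
    SET backslash_quote = on;  -- explicit opt-in where the risk is understood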
Fix race condition that could lead to file already -exists errors during pg_clog and pg_subtrans file creation +exists errors during pg_clog and pg_subtrans file creation (Tom) Fix cases that could lead to crashes if a cache-invalidation message arrives at just the wrong time (Tom) -Properly check DOMAIN constraints for -UNKNOWN parameters in prepared statements +Properly check DOMAIN constraints for +UNKNOWN parameters in prepared statements (Neil) -Ensure ALTER COLUMN TYPE will process -FOREIGN KEY, UNIQUE, and PRIMARY KEY +Ensure ALTER COLUMN TYPE will process +FOREIGN KEY, UNIQUE, and PRIMARY KEY constraints in the proper order (Nakano Yoshihisa) Fixes to allow restoring dumps that have cross-schema references to custom operators or operator classes (Tom) -Allow pg_restore to continue properly after a -COPY failure; formerly it tried to treat the remaining -COPY data as SQL commands (Stephen Frost) +Allow pg_restore to continue properly after a +COPY failure; formerly it tried to treat the remaining +COPY data as SQL commands (Stephen Frost) -Fix pg_ctl unregister crash +Fix pg_ctl unregister crash when the data directory is not specified (Magnus) -Fix ecpg crash on AMD64 and PPC +Fix ecpg crash on AMD64 and PPC (Neil) Recover properly if error occurs during argument passing -in PL/Python (Neil) +in PL/Python (Neil) -Fix PL/Perl's handling of locales on +Fix PL/Perl's handling of locales on Win32 to match the backend (Andrew) -Fix crash when log_min_messages is set to -DEBUG3 or above in postgresql.conf on Win32 +Fix crash when log_min_messages is set to +DEBUG3 or above in postgresql.conf on Win32 (Bruce) -Fix pgxs -L library path +Fix pgxs -L library path specification for Win32, Cygwin, macOS, AIX (Bruce) Check that SID is enabled while checking for Win32 admin @@ -2457,8 +2457,8 @@ privileges (Magnus) Properly reject out-of-range date inputs (Kris Jurka) -Portability fix for testing presence of finite -and isinf during configure (Tom) +Portability fix for testing presence of finite +and isinf during configure (Tom) @@ -2486,9 +2486,9 @@ and isinf during configure (Tom) A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.3, see . - Also, you might need to REINDEX indexes on textual + Also, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or - plperl issues described below. + plperl issues described below. @@ -2501,7 +2501,7 @@ and isinf during configure (Tom) than exit if there is no more room in ShmemBackendArray (Magnus) The previous behavior could lead to a denial-of-service situation if too many connection requests arrive close together. This applies -only to the Windows port. +only to the Windows port. Fix bug introduced in 8.0 that could allow ReadBuffer to return an already-used page as new, potentially causing loss of @@ -2512,16 +2512,16 @@ outside a transaction or in a failed transaction (Tom) Fix character string comparison for locales that consider different character combinations as equal, such as Hungarian (Tom) -This might require REINDEX to fix existing indexes on +This might require REINDEX to fix existing indexes on textual columns. 
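A sketch of the follow-up repair for affected textual indexes (object names are hypothetical):

    REINDEX TABLE customers;  -- rebuild the indexes on one affected table
    REINDEX DATABASE mydb;    -- or rebuild every index in the database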
Set locale environment variables during postmaster startup -to ensure that plperl won't change the locale later -This fixes a problem that occurred if the postmaster was +to ensure that plperl won't change the locale later +This fixes a problem that occurred if the postmaster was started with environment variables specifying a different locale than what -initdb had been told. Under these conditions, any use of -plperl was likely to lead to corrupt indexes. You might need -REINDEX to fix existing indexes on +initdb had been told. Under these conditions, any use of +plperl was likely to lead to corrupt indexes. You might need +REINDEX to fix existing indexes on textual columns if this has happened to you. Allow more flexible relocation of installation @@ -2533,15 +2533,15 @@ directory paths were the same except for the last component. handling in certain rarely used Asian multi-byte character sets (Tatsuo) -Various fixes for functions returning RECORDs +Various fixes for functions returning RECORDs (Tom) -Fix bug in /contrib/pgcrypto gen_salt, +Fix bug in /contrib/pgcrypto gen_salt, which caused it not to use all available salt space for MD5 and XDES algorithms (Marko Kreen, Solar Designer) Salts for Blowfish and standard DES are unaffected. -Fix /contrib/dblink to throw an error, +Fix /contrib/dblink to throw an error, rather than crashing, when the number of columns specified is different from what's actually returned by the query (Joe) @@ -2597,35 +2597,35 @@ later VACUUM commands. Prevent failure if client sends Bind protocol message when current transaction is already aborted -/contrib/ltree fixes (Teodor) +/contrib/ltree fixes (Teodor) AIX and HPUX compile fixes (Tom) Retry file reads and writes after Windows NO_SYSTEM_RESOURCES error (Qingqing Zhou) -Fix intermittent failure when log_line_prefix -includes %i +Fix intermittent failure when log_line_prefix +includes %i -Fix psql performance issue with long scripts +Fix psql performance issue with long scripts on Windows (Merlin Moncure) -Fix missing updates of pg_group flat +Fix missing updates of pg_group flat file Fix longstanding planning error for outer joins This bug sometimes caused a bogus error RIGHT JOIN is -only supported with merge-joinable join conditions. +only supported with merge-joinable join conditions. Postpone timezone initialization until after -postmaster.pid is created +postmaster.pid is created This avoids confusing startup scripts that expect the pid file to appear quickly. -Prevent core dump in pg_autovacuum when a +Prevent core dump in pg_autovacuum when a table has been dropped -Fix problems with whole-row references (foo.*) +Fix problems with whole-row references (foo.*) to subquery results @@ -2660,69 +2660,69 @@ to subquery results Changes -Fix error that allowed VACUUM to remove -ctid chains too soon, and add more checking in code that follows -ctid links +Fix error that allowed VACUUM to remove +ctid chains too soon, and add more checking in code that follows +ctid links This fixes a long-standing problem that could cause crashes in very rare circumstances. -Fix CHAR() to properly pad spaces to the specified +Fix CHAR() to properly pad spaces to the specified length when using a multiple-byte character set (Yoshiyuki Asaba) -In prior releases, the padding of CHAR() was incorrect +In prior releases, the padding of CHAR() was incorrect because it only padded to the specified number of bytes without considering how many characters were stored. 
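A sketch of following the REINDEX advice above after a locale-related corruption event, assuming a hypothetical table "messages" that has indexes on textual columns. REINDEX TABLE must be run by the table's owner or a superuser.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    /*
     * REINDEX TABLE rebuilds every index on the table, which is the
     * simplest way to follow the advice above when you are not sure
     * which individual indexes cover text, varchar, or char columns.
     */
    res = PQexec(conn, "REINDEX TABLE messages");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "REINDEX failed: %s", PQerrorMessage(conn));
    PQclear(res);
    PQfinish(conn);
    return 0;
}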
Force a checkpoint before committing CREATE -DATABASE -This should fix recent reports of index is not a btree +DATABASE +This should fix recent reports of index is not a btree failures when a crash occurs shortly after CREATE -DATABASE. +DATABASE. Fix the sense of the test for read-only transaction -in COPY -The code formerly prohibited COPY TO, where it should -prohibit COPY FROM. +in COPY +The code formerly prohibited COPY TO, where it should +prohibit COPY FROM. -Handle consecutive embedded newlines in COPY +Handle consecutive embedded newlines in COPY CSV-mode input -Fix date_trunc(week) for dates near year +Fix date_trunc(week) for dates near year end Fix planning problem with outer-join ON clauses that reference only the inner-side relation -Further fixes for x FULL JOIN y ON true corner +Further fixes for x FULL JOIN y ON true corner cases Fix overenthusiastic optimization of x IN (SELECT -DISTINCT ...) and related cases -Fix mis-planning of queries with small LIMIT -values due to poorly thought out fuzzy cost +DISTINCT ...) and related cases +Fix mis-planning of queries with small LIMIT +values due to poorly thought out fuzzy cost comparison -Make array_in and array_recv more +Make array_in and array_recv more paranoid about validating their OID parameter Fix missing rows in queries like UPDATE a=... WHERE -a... with GiST index on column a +a... with GiST index on column a Improve robustness of datetime parsing Improve checking for partially-written WAL pages Improve robustness of signal handling when SSL is enabled Improve MIPS and M68K spinlock code -Don't try to open more than max_files_per_process +Don't try to open more than max_files_per_process files during postmaster startup Various memory leakage fixes Various portability improvements Update timezone data files Improve handling of DLL load failures on Windows Improve random-number generation on Windows -Make psql -f filename return a nonzero exit code +Make psql -f filename return a nonzero exit code when opening the file fails -Change pg_dump to handle inherited check +Change pg_dump to handle inherited check constraints more reliably -Fix password prompting in pg_restore on +Fix password prompting in pg_restore on Windows -Fix PL/pgSQL to handle var := var correctly when +Fix PL/pgSQL to handle var := var correctly when the variable is of pass-by-reference type -Fix PL/Perl %_SHARED so it's actually +Fix PL/Perl %_SHARED so it's actually shared -Fix contrib/pg_autovacuum to allow sleep +Fix contrib/pg_autovacuum to allow sleep intervals over 2000 sec -Update contrib/tsearch2 to use current Snowball +Update contrib/tsearch2 to use current Snowball code @@ -2766,10 +2766,10 @@ code - The lesser problem is that the contrib/tsearch2 module + The lesser problem is that the contrib/tsearch2 module creates several functions that are improperly declared to return - internal when they do not accept internal arguments. - This breaks type safety for all functions using internal + internal when they do not accept internal arguments. + This breaks type safety for all functions using internal arguments. 
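A diagnostic sketch for the type-safety hazard described above: it lists functions declared to return internal together with their argument types, so that any that do not also take an internal argument can be spotted. This is an illustrative query of my own construction, not the official repair procedure; oidvectortypes() is a standard catalog support function.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    const char *sql =
        "SELECT p.proname, "
        "       pg_catalog.oidvectortypes(p.proargtypes) AS args "
        "FROM pg_catalog.pg_proc p "
        "WHERE p.prorettype = 'internal'::regtype "
        "ORDER BY p.oid";
    PGconn   *conn = PQconnectdb("dbname=postgres");
    PGresult *res;
    int       i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    res = PQexec(conn, sql);
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        /* Rows whose argument list lacks "internal" are suspect. */
        for (i = 0; i < PQntuples(res); i++)
            printf("%s(%s)\n",
                   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    }
    else
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    PQclear(res);
    PQfinish(conn);
    return 0;
}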
@@ -2794,10 +2794,10 @@ code Change encoding function signature to prevent misuse -Change contrib/tsearch2 to avoid unsafe use of -INTERNAL function results +Change contrib/tsearch2 to avoid unsafe use of +INTERNAL function results Guard against incorrect second parameter to -record_out +record_out Repair ancient race condition that allowed a transaction to be seen as committed for some purposes (eg SELECT FOR UPDATE) slightly sooner than for other purposes @@ -2809,36 +2809,36 @@ VACUUM freshly-inserted data, although the scenario seems of very low probability. There are no known cases of it having caused more than an Assert failure. -Fix comparisons of TIME WITH TIME ZONE values +Fix comparisons of TIME WITH TIME ZONE values The comparison code was wrong in the case where the ---enable-integer-datetimes configuration switch had been used. -NOTE: if you have an index on a TIME WITH TIME ZONE column, -it will need to be REINDEXed after installing this update, because +--enable-integer-datetimes configuration switch had been used. +NOTE: if you have an index on a TIME WITH TIME ZONE column, +it will need to be REINDEXed after installing this update, because the fix corrects the sort order of column values. -Fix EXTRACT(EPOCH) for -TIME WITH TIME ZONE values +Fix EXTRACT(EPOCH) for +TIME WITH TIME ZONE values Fix mis-display of negative fractional seconds in -INTERVAL values +INTERVAL values This error only occurred when the ---enable-integer-datetimes configuration switch had been used. +--enable-integer-datetimes configuration switch had been used. -Fix pg_dump to dump trigger names containing % +Fix pg_dump to dump trigger names containing % correctly (Neil) Still more 64-bit fixes for -contrib/intagg +contrib/intagg Prevent incorrect optimization of functions returning -RECORD -Prevent crash on COALESCE(NULL,NULL) +RECORD +Prevent crash on COALESCE(NULL,NULL) Fix Borland makefile for libpq -Fix contrib/btree_gist for timetz type +Fix contrib/btree_gist for timetz type (Teodor) -Make pg_ctl check the PID found in -postmaster.pid to see if it is still a live +Make pg_ctl check the PID found in +postmaster.pid to see if it is still a live process -Fix pg_dump/pg_restore problems caused +Fix pg_dump/pg_restore problems caused by addition of dump timestamps Fix interaction between materializing holdable cursors and firing deferred triggers during transaction commit @@ -2883,51 +2883,51 @@ data types libraries (Bruce) This should have been done in 8.0.0. It is required so 7.4.X versions -of PostgreSQL client applications, like psql, +of PostgreSQL client applications, like psql, can be used on the same machine as 8.0.X applications. This might require re-linking user applications that use these libraries. -Add Windows-only wal_sync_method setting of - +Add Windows-only wal_sync_method setting of + (Magnus, Bruce) This setting causes PostgreSQL to write through any disk-drive write cache when writing to WAL. -This behavior was formerly called , but was +renamed because it acts quite differently from on other platforms. -Enable the wal_sync_method setting of - -Formerly the array would remain NULL, but now it becomes a +Formerly the array would remain NULL, but now it becomes a single-element array. The main SQL engine was changed to handle -UPDATE of a null array value this way in 8.0, but the similar +UPDATE of a null array value this way in 8.0, but the similar case in plpgsql was overlooked. 
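A small sketch of the null-array update semantics noted just above, run against a throwaway temporary table: assigning to an element of a NULL array now yields a one-element array instead of leaving the value NULL.

#include <stdio.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("%s  =>  %s\n", sql, PQgetvalue(res, 0, 0));
    else if (PQresultStatus(res) != PGRES_COMMAND_OK &&
             PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    run(conn, "CREATE TEMP TABLE t (arr int[])");
    run(conn, "INSERT INTO t VALUES (NULL)");
    /* Updating an element of the NULL array... */
    run(conn, "UPDATE t SET arr[1] = 42");
    /* ...now produces a single-element array: expected {42}. */
    run(conn, "SELECT arr FROM t");
    PQfinish(conn);
    return 0;
}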
-Convert \r\n and \r to \n +Convert \r\n and \r to \n in plpython function bodies (Michael Fuhr) This prevents syntax errors when plpython code is written on a Windows or @@ -2935,72 +2935,72 @@ in plpython function bodies (Michael Fuhr) Allow SPI cursors to handle utility commands that return rows, -such as EXPLAIN (Tom) -Fix CLUSTER failure after ALTER TABLE -SET WITHOUT OIDS (Tom) -Reduce memory usage of ALTER TABLE ADD COLUMN +such as EXPLAIN (Tom) +Fix CLUSTER failure after ALTER TABLE +SET WITHOUT OIDS (Tom) +Reduce memory usage of ALTER TABLE ADD COLUMN (Neil) -Fix ALTER LANGUAGE RENAME (Tom) -Document the Windows-only register and -unregister options of pg_ctl (Magnus) +Fix ALTER LANGUAGE RENAME (Tom) +Document the Windows-only register and +unregister options of pg_ctl (Magnus) Ensure operations done during backend shutdown are counted by statistics collector -This is expected to resolve reports of pg_autovacuum +This is expected to resolve reports of pg_autovacuum not vacuuming the system catalogs often enough — it was not being told about catalog deletions caused by temporary table removal during backend exit. Change the Windows default for configuration parameter -log_destination to +log_destination to (Magnus) By default, a server running on Windows will now send log output to the Windows event logger rather than standard error. Make Kerberos authentication work on Windows (Magnus) -Allow ALTER DATABASE RENAME by superusers +Allow ALTER DATABASE RENAME by superusers who aren't flagged as having CREATEDB privilege (Tom) -Modify WAL log entries for CREATE and -DROP DATABASE to not specify absolute paths (Tom) +Modify WAL log entries for CREATE and +DROP DATABASE to not specify absolute paths (Tom) This allows point-in-time recovery on a different machine with possibly -different database location. Note that CREATE TABLESPACE still +different database location. Note that CREATE TABLESPACE still poses a hazard in such situations. Fix crash from a backend exiting with an open transaction that created a table and opened a cursor on it (Tom) -Fix array_map() so it can call PL functions +Fix array_map() so it can call PL functions (Tom) -Several contrib/tsearch2 and -contrib/btree_gist fixes (Teodor) +Several contrib/tsearch2 and +contrib/btree_gist fixes (Teodor) -Fix crash of some contrib/pgcrypto +Fix crash of some contrib/pgcrypto functions on some platforms (Marko Kreen) -Fix contrib/intagg for 64-bit platforms +Fix contrib/intagg for 64-bit platforms (Tom) -Fix ecpg bugs in parsing of CREATE statement +Fix ecpg bugs in parsing of CREATE statement (Michael) Work around gcc bug on powerpc and amd64 causing problems in ecpg (Christof Petig) -Do not use locale-aware versions of upper(), -lower(), and initcap() when the locale is -C (Bruce) +Do not use locale-aware versions of upper(), +lower(), and initcap() when the locale is +C (Bruce) This allows these functions to work on platforms that generate errors - for non-7-bit data when the locale is C. + for non-7-bit data when the locale is C. 
-Fix quote_ident() to quote names that match keywords (Tom) -Fix to_date() to behave reasonably when -CC and YY fields are both used (Karel) -Prevent to_char(interval) from failing +Fix quote_ident() to quote names that match keywords (Tom) +Fix to_date() to behave reasonably when +CC and YY fields are both used (Karel) +Prevent to_char(interval) from failing when given a zero-month interval (Tom) -Fix wrong week returned by date_trunc('week') +Fix wrong week returned by date_trunc('week') (Bruce) -date_trunc('week') +date_trunc('week') returned the wrong year for the first few days of January in some years. -Use the correct default mask length for class D -addresses in INET data types (Tom) +Use the correct default mask length for class D +addresses in INET data types (Tom) @@ -3033,11 +3033,11 @@ addresses in INET data types (Tom) Changes -Disallow LOAD to non-superusers +Disallow LOAD to non-superusers On platforms that will automatically execute initialization functions of a shared library (this includes at least Windows and ELF-based Unixen), -LOAD can be used to make the server execute arbitrary code. +LOAD can be used to make the server execute arbitrary code. Thanks to NGS Software for reporting this. Check that creator of an aggregate function has the right to execute the specified transition functions @@ -3050,7 +3050,7 @@ contrib/intagg Jurka) Avoid buffer overrun when plpgsql cursor declaration has too many parameters (Neil) -Make ALTER TABLE ADD COLUMN enforce domain +Make ALTER TABLE ADD COLUMN enforce domain constraints in all cases Fix planning error for FULL and RIGHT outer joins @@ -3059,7 +3059,7 @@ left input. This could not only deliver mis-sorted output to the user, but in case of nested merge joins could give outright wrong answers. Improve planning of grouped aggregate queries -ROLLBACK TO savepoint +ROLLBACK TO savepoint closes cursors created since the savepoint Fix inadequate backend stack size on Windows Avoid SHGetSpecialFolderPath() on Windows @@ -3099,17 +3099,17 @@ typedefs (Michael) This is the first PostgreSQL release - to run natively on Microsoft Windows as - a server. It can run as a Windows service. This + to run natively on Microsoft Windows as + a server. It can run as a Windows service. This release supports NT-based Windows releases like - Windows 2000 SP4, Windows XP, and - Windows 2003. Older releases like - Windows 95, Windows 98, and - Windows ME are not supported because these operating + Windows 2000 SP4, Windows XP, and + Windows 2003. Older releases like + Windows 95, Windows 98, and + Windows ME are not supported because these operating systems do not have the infrastructure to support PostgreSQL. A separate installer project has been created to ease installation on - Windows — see Windows — see . @@ -3123,7 +3123,7 @@ typedefs (Michael) Previous releases required the Unix emulation toolkit - Cygwin in order to run the server on Windows + Cygwin in order to run the server on Windows operating systems. PostgreSQL has supported native clients on Windows for many years. @@ -3174,7 +3174,7 @@ typedefs (Michael) Tablespaces allow administrators to select different file systems for storage of individual tables, indexes, and databases. This improves performance and control over disk space - usage. Prior releases used initlocation and + usage. Prior releases used initlocation and manual symlink management for such tasks. 
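A sketch of the tablespace feature described above. The tablespace name, table definition, and directory path are hypothetical; CREATE TABLESPACE requires superuser rights, and the directory must already exist and be owned by the server's operating-system user.

#include <stdio.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres user=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    /* Put a tablespace on a separate (e.g. faster) file system. */
    run(conn, "CREATE TABLESPACE fastspace LOCATION '/mnt/ssd/pgdata'");
    /* Individual tables, indexes, and databases can then be placed
     * there explicitly. */
    run(conn, "CREATE TABLE hot_data (id int PRIMARY KEY, payload text) "
              "TABLESPACE fastspace");
    PQfinish(conn);
    return 0;
}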
@@ -3216,7 +3216,7 @@ typedefs (Michael) - A new version of the plperl server-side language now + A new version of the plperl server-side language now supports a persistent shared storage area, triggers, returning records and arrays of records, and SPI calls to access the database. @@ -3257,7 +3257,7 @@ typedefs (Michael) - In serialization mode, volatile functions now see the results of concurrent transactions committed up to the beginning of each statement within the function, rather than up to the beginning of the interactive command that called the function. @@ -3266,18 +3266,18 @@ typedefs (Michael) - Functions declared or always use the snapshot of the calling query, and therefore do not see the effects of actions taken after the calling query starts, whether in their own transaction or other transactions. Such a function must be read-only, too, meaning that it cannot use any SQL commands other than - SELECT. + SELECT. - Nondeferred triggers are now fired immediately after completion of the triggering query, rather than upon finishing the current interactive command. This makes a difference when the triggering query occurred within a function: @@ -3288,19 +3288,19 @@ typedefs (Michael) - Server configuration parameters virtual_host and - tcpip_socket have been replaced with a more general - parameter listen_addresses. Also, the server now listens on - localhost by default, which eliminates the need for the - -i postmaster switch in many scenarios. + Server configuration parameters virtual_host and + tcpip_socket have been replaced with a more general + parameter listen_addresses. Also, the server now listens on + localhost by default, which eliminates the need for the + -i postmaster switch in many scenarios. - Server configuration parameters SortMem and - VacuumMem have been renamed to work_mem - and maintenance_work_mem to better reflect their + Server configuration parameters SortMem and + VacuumMem have been renamed to work_mem + and maintenance_work_mem to better reflect their use. The original names are still supported in SET and SHOW. @@ -3308,34 +3308,34 @@ typedefs (Michael) - Server configuration parameters log_pid, - log_timestamp, and log_source_port have been - replaced with a more general parameter log_line_prefix. + Server configuration parameters log_pid, + log_timestamp, and log_source_port have been + replaced with a more general parameter log_line_prefix. - Server configuration parameter syslog has been - replaced with a more logical log_destination variable to + Server configuration parameter syslog has been + replaced with a more logical log_destination variable to control the log output destination. - Server configuration parameter log_statement has been + Server configuration parameter log_statement has been changed so it can selectively log just database modification or data definition statements. Server configuration parameter - log_duration now prints only when log_statement + log_duration now prints only when log_statement prints the query. - Server configuration parameter max_expr_depth parameter has - been replaced with max_stack_depth which measures the + Server configuration parameter max_expr_depth parameter has + been replaced with max_stack_depth which measures the physical stack size rather than the expression nesting depth. This helps prevent session termination due to stack overflow caused by recursive functions. @@ -3344,14 +3344,14 @@ typedefs (Michael) - The length() function no longer counts trailing spaces in - CHAR(n) values. 
+ The length() function no longer counts trailing spaces in + CHAR(n) values. - Casting an integer to BIT(N) selects the rightmost N bits of the + Casting an integer to BIT(N) selects the rightmost N bits of the integer, not the leftmost N bits as before. @@ -3369,7 +3369,7 @@ typedefs (Michael) Syntax checking of array input values has been tightened up considerably. Junk that was previously allowed in odd places with odd results now causes an error. Empty-string element values - must now be written as "", rather than writing nothing. + must now be written as "", rather than writing nothing. Also changed behavior with respect to whitespace surrounding array elements: trailing whitespace is now ignored, for symmetry with leading whitespace (which has always been ignored). @@ -3386,14 +3386,14 @@ typedefs (Michael) The arithmetic operators associated with the single-byte - "char" data type have been removed. + "char" data type have been removed. - The extract() function (also called - date_part) now returns the proper year for BC dates. + The extract() function (also called + date_part) now returns the proper year for BC dates. It previously returned one less than the correct year. The function now also returns the proper values for millennium and century. @@ -3402,9 +3402,9 @@ typedefs (Michael) - CIDR values now must have their nonmasked bits be zero. + CIDR values now must have their nonmasked bits be zero. For example, we no longer allow - 204.248.199.1/31 as a CIDR value. Such + 204.248.199.1/31 as a CIDR value. Such values should never have been accepted by PostgreSQL and will now be rejected. @@ -3419,11 +3419,11 @@ typedefs (Michael) - psql's \copy command now reads or - writes to the query's stdin/stdout, rather than - psql's stdin/stdout. The previous + psql's \copy command now reads or + writes to the query's stdin/stdout, rather than + psql's stdin/stdout. The previous behavior can be accessed via new - / parameters. @@ -3449,14 +3449,14 @@ typedefs (Michael) one supplied by the operating system. This will provide consistent behavior across all platforms. In most cases, there should be little noticeable difference in time zone behavior, except that - the time zone names used by SET/SHOW - TimeZone might be different from what your platform provides. + the time zone names used by SET/SHOW + TimeZone might be different from what your platform provides. - Configure's threading option no longer requires + Configure's threading option no longer requires users to run tests or edit configuration files; threading options are now detected automatically. @@ -3465,7 +3465,7 @@ typedefs (Michael) Now that tablespaces have been implemented, - initlocation has been removed. + initlocation has been removed. @@ -3495,7 +3495,7 @@ typedefs (Michael) - The 8.1 release will remove the to_char() function + The 8.1 release will remove the to_char() function for intervals. @@ -3513,12 +3513,12 @@ typedefs (Michael) By default, tables in PostgreSQL 8.0 - and earlier are created with OIDs. In the next release, + and earlier are created with OIDs. In the next release, this will not be the case: to create a table - that contains OIDs, the clause must be specified or the default_with_oids configuration parameter must be set. Users are encouraged to - explicitly specify if their tables require OIDs for compatibility with future releases of PostgreSQL. @@ -3581,7 +3581,7 @@ typedefs (Michael) hurt performance. 
The new code uses a background writer to trickle disk writes at a steady pace so checkpoints have far fewer dirty pages to write to disk. Also, the new code does not issue a global - sync() call, but instead fsync()s just + sync() call, but instead fsync()s just the files written since the last checkpoint. This should improve performance and minimize degradation during checkpoints. @@ -3629,13 +3629,13 @@ typedefs (Michael) - Improved index usage with OR clauses (Tom) + Improved index usage with OR clauses (Tom) This allows the optimizer to use indexes in statements with many OR clauses that would not have been indexed in the past. It can also use multi-column indexes where the first column is specified and the second - column is part of an OR clause. + column is part of an OR clause. @@ -3645,7 +3645,7 @@ typedefs (Michael) The server is now smarter about using partial indexes in queries - involving complex clauses. @@ -3754,7 +3754,7 @@ typedefs (Michael) It is now possible to log server messages conveniently without - relying on either syslog or an external log + relying on either syslog or an external log rotation program. @@ -3762,56 +3762,56 @@ typedefs (Michael) Add new read-only server configuration parameters to show server - compile-time settings: block_size, - integer_datetimes, max_function_args, - max_identifier_length, max_index_keys (Joe) + compile-time settings: block_size, + integer_datetimes, max_function_args, + max_identifier_length, max_index_keys (Joe) - Make quoting of sameuser, samegroup, and - all remove special meaning of these terms in - pg_hba.conf (Andrew) + Make quoting of sameuser, samegroup, and + all remove special meaning of these terms in + pg_hba.conf (Andrew) - Use clearer IPv6 name ::1/128 for - localhost in default pg_hba.conf (Andrew) + Use clearer IPv6 name ::1/128 for + localhost in default pg_hba.conf (Andrew) - Use CIDR format in pg_hba.conf examples (Andrew) + Use CIDR format in pg_hba.conf examples (Andrew) - Rename server configuration parameters SortMem and - VacuumMem to work_mem and - maintenance_work_mem (Old names still supported) (Tom) + Rename server configuration parameters SortMem and + VacuumMem to work_mem and + maintenance_work_mem (Old names still supported) (Tom) This change was made to clarify that bulk operations such as index and - foreign key creation use maintenance_work_mem, while - work_mem is for workspaces used during query execution. + foreign key creation use maintenance_work_mem, while + work_mem is for workspaces used during query execution. Allow logging of session disconnections using server configuration - log_disconnections (Andrew) + log_disconnections (Andrew) - Add new server configuration parameter log_line_prefix to + Add new server configuration parameter log_line_prefix to allow control of information emitted in each log line (Andrew) @@ -3822,21 +3822,21 @@ typedefs (Michael) - Remove server configuration parameters log_pid, - log_timestamp, log_source_port; functionality - superseded by log_line_prefix (Andrew) + Remove server configuration parameters log_pid, + log_timestamp, log_source_port; functionality + superseded by log_line_prefix (Andrew) - Replace the virtual_host and tcpip_socket - parameters with a unified listen_addresses parameter + Replace the virtual_host and tcpip_socket + parameters with a unified listen_addresses parameter (Andrew, Tom) - virtual_host could only specify a single IP address to - listen on. 
listen_addresses allows multiple addresses + virtual_host could only specify a single IP address to + listen on. listen_addresses allows multiple addresses to be specified. @@ -3844,10 +3844,10 @@ typedefs (Michael) Listen on localhost by default, which eliminates the need for the - postmaster switch in many scenarios (Andrew) - Listening on localhost (127.0.0.1) opens no new + Listening on localhost (127.0.0.1) opens no new security holes but allows configurations like Windows and JDBC, which do not support local sockets, to work without special adjustments. @@ -3856,17 +3856,17 @@ typedefs (Michael) - Remove syslog server configuration parameter, and add more - logical log_destination variable to control log output + Remove syslog server configuration parameter, and add more + logical log_destination variable to control log output location (Magnus) - Change server configuration parameter log_statement to take - values all, mod, ddl, or - none to select which queries are logged (Bruce) + Change server configuration parameter log_statement to take + values all, mod, ddl, or + none to select which queries are logged (Bruce) This allows administrators to log only data definition changes or @@ -3877,12 +3877,12 @@ typedefs (Michael) Some logging-related configuration parameters could formerly be adjusted - by ordinary users, but only in the more verbose direction. + by ordinary users, but only in the more verbose direction. They are now treated more strictly: only superusers can set them. - However, a superuser can use ALTER USER to provide per-user + However, a superuser can use ALTER USER to provide per-user settings of these values for non-superusers. Also, it is now possible for superusers to set values of superuser-only configuration parameters - via PGOPTIONS. + via PGOPTIONS. @@ -3921,8 +3921,8 @@ typedefs (Michael) It is now useful to issue DECLARE CURSOR in a - Parse message with parameters. The parameter values - sent at Bind time will be substituted into the + Parse message with parameters. The parameter values + sent at Bind time will be substituted into the execution of the cursor's query. @@ -3942,7 +3942,7 @@ typedefs (Michael) - Make log_duration print only when log_statement + Make log_duration print only when log_statement prints the query (Ed L.) @@ -4007,10 +4007,10 @@ typedefs (Michael) - Make CASE val WHEN compval1 THEN ... evaluate val only once (Tom) + Make CASE val WHEN compval1 THEN ... evaluate val only once (Tom) - no longer evaluates the tested expression multiple times. This has benefits when the expression is complex or is volatile. @@ -4018,20 +4018,20 @@ typedefs (Michael) - Test before computing target list of an aggregate query (Tom) Fixes improper failure of cases such as SELECT SUM(win)/SUM(lose) - ... GROUP BY ... HAVING SUM(lose) > 0. This should work but formerly + ... GROUP BY ... HAVING SUM(lose) > 0. This should work but formerly could fail with divide-by-zero. - Replace max_expr_depth parameter with - max_stack_depth parameter, measured in kilobytes of stack + Replace max_expr_depth parameter with + max_stack_depth parameter, measured in kilobytes of stack size (Tom) @@ -4054,7 +4054,7 @@ typedefs (Michael) - Allow / to be used as the operator in row and subselect comparisons (Fabien Coelho) @@ -4065,8 +4065,8 @@ typedefs (Michael) identifiers and keywords (Tom) - This solves the Turkish problem with mangling of words - containing I and i. Folding of characters + This solves the Turkish problem with mangling of words + containing I and i. 
Folding of characters outside the 7-bit-ASCII set is still locale-aware. @@ -4094,7 +4094,7 @@ typedefs (Michael) - Avoid emitting in rule listings (Tom) Such a clause makes no logical sense, but in some cases the rule @@ -4112,36 +4112,36 @@ typedefs (Michael) - Add COMMENT ON for casts, conversions, languages, + Add COMMENT ON for casts, conversions, languages, operator classes, and large objects (Christopher) - Add new server configuration parameter default_with_oids to - control whether tables are created with OIDs by default (Neil) + Add new server configuration parameter default_with_oids to + control whether tables are created with OIDs by default (Neil) This allows administrators to control whether CREATE - TABLE commands create tables with or without OID + TABLE commands create tables with or without OID columns by default. (Note: the current factory default setting for - default_with_oids is TRUE, but the default - will become FALSE in future releases.) + default_with_oids is TRUE, but the default + will become FALSE in future releases.) - Add / clause to CREATE TABLE AS (Neil) - Allow ALTER TABLE DROP COLUMN to drop an OID - column (ALTER TABLE SET WITHOUT OIDS still works) + Allow ALTER TABLE DROP COLUMN to drop an OID + column (ALTER TABLE SET WITHOUT OIDS still works) (Tom) @@ -4154,11 +4154,11 @@ typedefs (Michael) - Allow ALTER ... ADD COLUMN with defaults and - constraints; works per SQL spec (Rod) - It is now possible for to create a column that is not initially filled with NULLs, but with a specified default value. @@ -4166,7 +4166,7 @@ typedefs (Michael) - Add ALTER COLUMN TYPE to change column's type (Rod) + Add ALTER COLUMN TYPE to change column's type (Rod) It is now possible to alter a column's data type without dropping @@ -4176,14 +4176,14 @@ typedefs (Michael) - Allow multiple ALTER actions in a single ALTER + Allow multiple ALTER actions in a single ALTER TABLE command (Rod) - This is particularly useful for ALTER commands that - rewrite the table (which include @@ -4213,13 +4213,13 @@ typedefs (Michael) Allow temporary object creation to be limited to functions (Sean Chittenden) - Add (Christopher) Prior to this release, there was no way to clear an auto-cluster @@ -4229,8 +4229,8 @@ typedefs (Michael) - Constraint/Index/SERIAL names are now - table_column_type + Constraint/Index/SERIAL names are now + table_column_type with numbers appended to guarantee uniqueness within the schema (Tom) @@ -4242,11 +4242,11 @@ typedefs (Michael) - Add pg_get_serial_sequence() to return a - SERIAL column's sequence name (Christopher) + Add pg_get_serial_sequence() to return a + SERIAL column's sequence name (Christopher) - This allows automated scripts to reliably find the SERIAL + This allows automated scripts to reliably find the SERIAL sequence name. @@ -4259,14 +4259,14 @@ typedefs (Michael) - New ALTER INDEX command to allow moving of indexes + New ALTER INDEX command to allow moving of indexes between tablespaces (Gavin) - Make ALTER TABLE OWNER change dependent sequence + Make ALTER TABLE OWNER change dependent sequence ownership too (Alvaro) @@ -4289,18 +4289,18 @@ typedefs (Michael) - Add keyword to CREATE RULE (Fabien Coelho) - This allows to be added to rule creation to contrast it with + rules. 
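A compact sketch combining three of the ALTER TABLE improvements listed above: retyping an existing column (with an explicit conversion expression) and adding a column with a default and a NOT NULL constraint, both as actions of a single ALTER TABLE command. The table and column names are hypothetical.

#include <stdio.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    run(conn, "CREATE TEMP TABLE orders (qty text)");
    /* Two actions in one command, so the table-rewriting work
     * happens only once. */
    run(conn,
        "ALTER TABLE orders "
        "  ALTER COLUMN qty TYPE integer USING qty::integer, "
        "  ADD COLUMN note text NOT NULL DEFAULT ''");
    PQfinish(conn);
    return 0;
}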
- Add option to LOCK (Tatsuo) This allows the LOCK command to fail if it @@ -4336,7 +4336,7 @@ typedefs (Michael) In 7.3 and 7.4, a long-running B-tree index build could block concurrent - CHECKPOINTs from completing, thereby causing WAL bloat because the + CHECKPOINTs from completing, thereby causing WAL bloat because the WAL log could not be recycled. @@ -4384,11 +4384,11 @@ typedefs (Michael) - New pg_ctl option for Windows (Andrew) - Windows does not have a kill command to send signals to - backends so this capability was added to pg_ctl. + Windows does not have a kill command to send signals to + backends so this capability was added to pg_ctl. @@ -4400,7 +4400,7 @@ typedefs (Michael) - Add option to initdb so the initial password can be set by GUI tools (Magnus) @@ -4415,7 +4415,7 @@ typedefs (Michael) - Add @@ -4443,7 +4443,7 @@ typedefs (Michael) Reject nonrectangular array values as erroneous (Joe) - Formerly, array_in would silently build a + Formerly, array_in would silently build a surprising result. @@ -4457,13 +4457,13 @@ typedefs (Michael) The arithmetic operators associated with the single-byte - "char" data type have been removed. + "char" data type have been removed. Formerly, the parser would select these operators in many situations - where an unable to select an operator error would be more - appropriate, such as null * null. If you actually want - to do arithmetic on a "char" column, you can cast it to + where an unable to select an operator error would be more + appropriate, such as null * null. If you actually want + to do arithmetic on a "char" column, you can cast it to integer explicitly. @@ -4474,7 +4474,7 @@ typedefs (Michael) Junk that was previously allowed in odd places with odd results - now causes an ERROR, for example, non-whitespace + now causes an ERROR, for example, non-whitespace after the closing right brace. @@ -4482,7 +4482,7 @@ typedefs (Michael) Empty-string array element values must now be written as - "", rather than writing nothing (Joe) + "", rather than writing nothing (Joe) Formerly, both ways of writing an empty-string element value were @@ -4512,13 +4512,13 @@ typedefs (Michael) - Accept YYYY-monthname-DD as a date string (Tom) + Accept YYYY-monthname-DD as a date string (Tom) - Make netmask and hostmask functions + Make netmask and hostmask functions return maximum-length mask length (Tom) @@ -4535,27 +4535,27 @@ typedefs (Michael) - to_char/to_date() date conversion + to_char/to_date() date conversion improvements (Kurt Roeckx, Fabien Coelho) - Make length() disregard trailing spaces in - CHAR(n) (Gavin) + Make length() disregard trailing spaces in + CHAR(n) (Gavin) This change was made to improve consistency: trailing spaces are - semantically insignificant in CHAR(n) data, so they - should not be counted by length(). + semantically insignificant in CHAR(n) data, so they + should not be counted by length(). Warn about empty string being passed to - OID/float4/float8 data types (Neil) + OID/float4/float8 data types (Neil) 8.1 will throw an error instead. 
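A short sketch of the tightened array-literal rules described above: an empty-string element must now be spelled "", whitespace around elements is ignored, and writing nothing for an element is a syntax error.

#include <stdio.h>
#include <libpq-fe.h>

static void
show(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("%s  =>  %s\n", sql, PQgetvalue(res, 0, 0));
    else
        printf("%s  =>  %s", sql, PQerrorMessage(conn));
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    /* Accepted: "" denotes an empty string; surrounding whitespace
     * is ignored.  Expected result: {"",foo} */
    show(conn, "SELECT '{ \"\" , foo }'::text[]");
    /* Rejected: an element written as nothing is now an error. */
    show(conn, "SELECT '{foo,}'::text[]");
    PQfinish(conn);
    return 0;
}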
@@ -4565,7 +4565,7 @@ typedefs (Michael) Allow leading or trailing whitespace in - int2/int4/int8/float4/float8 + int2/int4/int8/float4/float8 input routines (Neil) @@ -4573,7 +4573,7 @@ typedefs (Michael) - Better support for IEEE Infinity and NaN + Better support for IEEE Infinity and NaN values in float4/float8 (Neil) @@ -4584,27 +4584,27 @@ typedefs (Michael) - Add - Fix to_char for 1 BC - (previously it returned 1 AD) (Bruce) + Fix to_char for 1 BC + (previously it returned 1 AD) (Bruce) - Fix date_part(year) for BC dates (previously it + Fix date_part(year) for BC dates (previously it returned one less than the correct year) (Bruce) - Fix date_part() to return the proper millennium and + Fix date_part() to return the proper millennium and century (Fabien Coelho) @@ -4616,44 +4616,44 @@ typedefs (Michael) - Add ceiling() as an alias for ceil(), - and power() as an alias for pow() for + Add ceiling() as an alias for ceil(), + and power() as an alias for pow() for standards compliance (Neil) - Change ln(), log(), - power(), and sqrt() to emit the correct - SQLSTATE error codes for certain error conditions, as + Change ln(), log(), + power(), and sqrt() to emit the correct + SQLSTATE error codes for certain error conditions, as specified by SQL:2003 (Neil) - Add width_bucket() function as defined by SQL:2003 (Neil) + Add width_bucket() function as defined by SQL:2003 (Neil) - Add generate_series() functions to simplify working + Add generate_series() functions to simplify working with numeric sets (Joe) - Fix upper/lower/initcap() functions to work with + Fix upper/lower/initcap() functions to work with multibyte encodings (Tom) - Add boolean and bitwise integer / aggregates (Fabien Coelho) @@ -4679,17 +4679,17 @@ typedefs (Michael) - Add interval plus datetime operators (Tom) + Add interval plus datetime operators (Tom) - The reverse ordering, datetime plus interval, + The reverse ordering, datetime plus interval, was already supported, but both are required by the SQL standard. - Casting an integer to BIT(N) selects the rightmost N bits + Casting an integer to BIT(N) selects the rightmost N bits of the integer (Tom) @@ -4702,7 +4702,7 @@ typedefs (Michael) - Require CIDR values to have all nonmasked bits be zero + Require CIDR values to have all nonmasked bits be zero (Kevin Brintnall) @@ -4717,7 +4717,7 @@ typedefs (Michael) - In READ COMMITTED serialization mode, volatile functions + In READ COMMITTED serialization mode, volatile functions now see the results of concurrent transactions committed up to the beginning of each statement within the function, rather than up to the beginning of the interactive command that called the function. @@ -4726,20 +4726,20 @@ typedefs (Michael) - Functions declared STABLE or IMMUTABLE always + Functions declared STABLE or IMMUTABLE always use the snapshot of the calling query, and therefore do not see the effects of actions taken after the calling query starts, whether in their own transaction or other transactions. Such a function must be read-only, too, meaning that it cannot use any SQL commands other than - SELECT. There is a considerable performance gain from - declaring a function STABLE or IMMUTABLE - rather than VOLATILE. + SELECT. There is a considerable performance gain from + declaring a function STABLE or IMMUTABLE + rather than VOLATILE. - Nondeferred triggers are now fired immediately after completion of the triggering query, rather than upon finishing the current interactive command. 
This makes a difference when the triggering query occurred within a function: the trigger @@ -4801,8 +4801,8 @@ typedefs (Michael) Improve parsing of PL/pgSQL FOR loops (Tom) - Parsing is now driven by presence of ".." rather than - data type of variable. This makes no difference for correct functions, but should result in more understandable error messages when a mistake is made. @@ -4818,18 +4818,18 @@ typedefs (Michael) In PL/Tcl, SPI commands are now run in subtransactions. If an error occurs, the subtransaction is cleaned up and the error is reported - as an ordinary Tcl error, which can be trapped with catch. + as an ordinary Tcl error, which can be trapped with catch. Formerly, it was not possible to catch such errors. - Accept ELSEIF in PL/pgSQL (Neil) + Accept ELSEIF in PL/pgSQL (Neil) - Previously PL/pgSQL only allowed ELSIF, but many people - are accustomed to spelling this keyword ELSEIF. + Previously PL/pgSQL only allowed ELSIF, but many people + are accustomed to spelling this keyword ELSEIF. @@ -4838,47 +4838,47 @@ typedefs (Michael) - <application>psql</> Changes + <application>psql</application> Changes - Improve psql information display about database + Improve psql information display about database objects (Christopher) - Allow psql to display group membership in - \du and \dg (Markus Bertheau) + Allow psql to display group membership in + \du and \dg (Markus Bertheau) - Prevent psql \dn from showing + Prevent psql \dn from showing temporary schemas (Bruce) - Allow psql to handle tilde user expansion for file + Allow psql to handle tilde user expansion for file names (Zach Irmen) - Allow psql to display fancy prompts, including - color, via readline (Reece Hart, Chet Ramey) + Allow psql to display fancy prompts, including + color, via readline (Reece Hart, Chet Ramey) - Make psql \copy match COPY command syntax + Make psql \copy match COPY command syntax fully (Tom) @@ -4891,55 +4891,55 @@ typedefs (Michael) - Add CLUSTER information to psql - \d display + Add CLUSTER information to psql + \d display (Bruce) - Change psql \copy stdin/stdout to read + Change psql \copy stdin/stdout to read from command input/output (Bruce) - Add - Add global psql configuration file, psqlrc.sample + Add global psql configuration file, psqlrc.sample (Bruce) - This allows a central file where global psql startup commands can + This allows a central file where global psql startup commands can be stored. 
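The new set-oriented functions added in this release and listed above, generate_series() and width_bucket(), compose naturally; a minimal sketch sorting ten integers into five equal-width buckets over [0, 10):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");
    PGresult *res;
    int       i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    /* generate_series() produces the rows; width_bucket() assigns
     * each value to one of 5 buckets spanning 0 to 10. */
    res = PQexec(conn,
                 "SELECT i, width_bucket(i, 0, 10, 5) AS bucket "
                 "FROM generate_series(0, 9) AS s(i)");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (i = 0; i < PQntuples(res); i++)
            printf("i = %s  bucket = %s\n",
                   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    else
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    PQclear(res);
    PQfinish(conn);
    return 0;
}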
- Have psql \d+ indicate if the table - has an OID column (Neil) + Have psql \d+ indicate if the table + has an OID column (Neil) - On Windows, use binary mode in psql when reading files so control-Z + On Windows, use binary mode in psql when reading files so control-Z is not seen as end-of-file - Have \dn+ show permissions and description for schemas (Dennis + Have \dn+ show permissions and description for schemas (Dennis Björklund) @@ -4961,13 +4961,13 @@ typedefs (Michael) - <application>pg_dump</> Changes + <application>pg_dump</application> Changes Use dependency information to improve the reliability of - pg_dump (Tom) + pg_dump (Tom) This should solve the longstanding problems with related objects @@ -4977,7 +4977,7 @@ typedefs (Michael) - Have pg_dump output objects in alphabetical order if possible (Tom) + Have pg_dump output objects in alphabetical order if possible (Tom) This should make it easier to identify changes between @@ -4987,12 +4987,12 @@ typedefs (Michael) - Allow pg_restore to ignore some SQL errors (Fabien Coelho) + Allow pg_restore to ignore some SQL errors (Fabien Coelho) - This makes pg_restore's behavior similar to the - results of feeding a pg_dump output script to - psql. In most cases, ignoring errors and plowing + This makes pg_restore's behavior similar to the + results of feeding a pg_dump output script to + psql. In most cases, ignoring errors and plowing ahead is the most useful thing to do. Also added was a pg_restore option to give the old behavior of exiting on an error. @@ -5000,36 +5000,36 @@ typedefs (Michael) - pg_restore display now includes objects' schema names - New begin/end markers in pg_dump text output (Bruce) + New begin/end markers in pg_dump text output (Bruce) Add start/stop times for - pg_dump/pg_dumpall in verbose mode + pg_dump/pg_dumpall in verbose mode (Bruce) - Allow most pg_dump options in - pg_dumpall (Christopher) + Allow most pg_dump options in + pg_dumpall (Christopher) - Have pg_dump use ALTER OWNER rather - than SET SESSION AUTHORIZATION by default + Have pg_dump use ALTER OWNER rather + than SET SESSION AUTHORIZATION by default (Christopher) @@ -5044,42 +5044,42 @@ typedefs (Michael) - Make libpq's handling thread-safe (Bruce) - Add PQmbdsplen() which returns the display length + Add PQmbdsplen() which returns the display length of a character (Tatsuo) - Add thread locking to SSL and - Kerberos connections (Manfred Spraul) + Add thread locking to SSL and + Kerberos connections (Manfred Spraul) - Allow PQoidValue(), PQcmdTuples(), and - PQoidStatus() to work on EXECUTE + Allow PQoidValue(), PQcmdTuples(), and + PQoidStatus() to work on EXECUTE commands (Neil) - Add PQserverVersion() to provide more convenient + Add PQserverVersion() to provide more convenient access to the server version number (Greg Sabino Mullane) - Add PQprepare/PQsendPrepared() functions to support + Add PQprepare/PQsendPrepared() functions to support preparing statements without necessarily specifying the data types of their parameters (Abhijit Menon-Sen) @@ -5087,7 +5087,7 @@ typedefs (Michael) - Many ECPG improvements, including SET DESCRIPTOR (Michael) + Many ECPG improvements, including SET DESCRIPTOR (Michael) @@ -5127,7 +5127,7 @@ typedefs (Michael) Directory paths for installed files (such as the - /share directory) are now computed relative to the + /share directory) are now computed relative to the actual location of the executables, so that an installation tree can be moved to another place without reconfiguring and rebuilding. 
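A sketch of two of the new libpq entry points mentioned above: PQserverVersion() for convenient version checks, and PQprepare() paired with PQexecPrepared() to prepare a statement without declaring its parameter types up front (the statement name and query here are illustrative).

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");
    PGresult   *res;
    const char *params[1];

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Returns an integer such as 80004, meaning 8.0.4. */
    printf("server version: %d\n", PQserverVersion(conn));

    /* Passing NULL for paramTypes leaves parameter type resolution
     * to the server; here the cast pins $1 to interval. */
    res = PQprepare(conn, "add_interval",
                    "SELECT now() + $1::interval", 1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    params[0] = "1 day";
    res = PQexecPrepared(conn, "add_interval", 1, params,
                         NULL, NULL, 0);
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("now() + 1 day = %s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
    PQfinish(conn);
    return 0;
}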
@@ -5136,31 +5136,31 @@ typedefs (Michael) - Use to choose installation location of documentation; also + allow (Peter) - Add to prevent installation of documentation (Peter) - Upgrade to DocBook V4.2 SGML (Peter) + Upgrade to DocBook V4.2 SGML (Peter) - New PostgreSQL CVS tag (Marc) + New PostgreSQL CVS tag (Marc) This was done to make it easier for organizations to manage their own copies of the PostgreSQL - CVS repository. File version stamps from the master + CVS repository. File version stamps from the master repository will not get munged by checking into or out of a copied repository. @@ -5186,7 +5186,7 @@ typedefs (Michael) - Add inlined test-and-set code on PA-RISC for gcc + Add inlined test-and-set code on PA-RISC for gcc (ViSolve, Tom) @@ -5200,7 +5200,7 @@ typedefs (Michael) Clean up spinlock assembly code to avoid warnings from newer - gcc releases (Tom) + gcc releases (Tom) @@ -5230,7 +5230,7 @@ typedefs (Michael) - New fsync() test program (Bruce) + New fsync() test program (Bruce) @@ -5268,7 +5268,7 @@ typedefs (Michael) - Use Olson's public domain timezone library (Magnus) + Use Olson's public domain timezone library (Magnus) @@ -5285,7 +5285,7 @@ typedefs (Michael) - psql now uses a flex-generated + psql now uses a flex-generated lexical analyzer to process command strings @@ -5322,7 +5322,7 @@ typedefs (Michael) - New pgevent for Windows logging + New pgevent for Windows logging @@ -5342,19 +5342,19 @@ typedefs (Michael) - Overhaul of contrib/dblink (Joe) + Overhaul of contrib/dblink (Joe) - contrib/dbmirror improvements (Steven Singer) + contrib/dbmirror improvements (Steven Singer) - New contrib/xml2 (John Gray, Torchbox) + New contrib/xml2 (John Gray, Torchbox) @@ -5366,51 +5366,51 @@ typedefs (Michael) - New version of contrib/btree_gist (Teodor) + New version of contrib/btree_gist (Teodor) - New contrib/trgm, trigram matching for + New contrib/trgm, trigram matching for PostgreSQL (Teodor) - Many contrib/tsearch2 improvements (Teodor) + Many contrib/tsearch2 improvements (Teodor) - Add double metaphone to contrib/fuzzystrmatch (Andrew) + Add double metaphone to contrib/fuzzystrmatch (Andrew) - Allow contrib/pg_autovacuum to run as a Windows service (Dave Page) + Allow contrib/pg_autovacuum to run as a Windows service (Dave Page) - Add functions to contrib/dbsize (Andreas Pflug) + Add functions to contrib/dbsize (Andreas Pflug) - Removed contrib/pg_logger: obsoleted by integrated logging + Removed contrib/pg_logger: obsoleted by integrated logging subprocess - Removed contrib/rserv: obsoleted by various separate projects + Removed contrib/rserv: obsoleted by various separate projects diff --git a/doc/src/sgml/release-8.1.sgml b/doc/src/sgml/release-8.1.sgml index d48bccd17d..6827afd7e0 100644 --- a/doc/src/sgml/release-8.1.sgml +++ b/doc/src/sgml/release-8.1.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.1.X series. Users are encouraged to update to a newer release branch soon. @@ -40,17 +40,17 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. 
This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. @@ -63,19 +63,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -91,7 +91,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -101,7 +101,7 @@ - Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -113,14 +113,14 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -132,7 +132,7 @@ - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -143,11 +143,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. @@ -166,29 +166,29 @@ - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -196,20 +196,20 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -235,7 +235,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.1.X release series in November 2010. Users are encouraged to update to a newer release branch soon. 
@@ -266,7 +266,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -295,7 +295,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -337,7 +337,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -363,7 +363,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -371,28 +371,28 @@ Fix possible data corruption in ALTER TABLE ... SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -400,13 +400,13 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) @@ -420,7 +420,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -470,19 +470,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. 
@@ -491,19 +491,19 @@

Prevent PL/Tcl from executing untrustworthy code from pltcl_modules (Tom)

PL/Tcl's feature for autoloading Tcl code from a database table could be
exploited for trojan-horse attacks, because there was no restriction on who
could create or insert into that table. This change disables the feature
unless pltcl_modules is owned by a superuser. (However, the permissions on the
table are not checked, so installations that really need a less-than-secure
modules table can still grant suitable privileges to trusted non-superusers.)
Also, prevent loading code into the unrestricted "normal" Tcl interpreter
unless we are really going to execute a pltclu function. (CVE-2010-1170)

@@ -516,10 +516,10 @@

Previously, if an unprivileged user ran ALTER USER ... RESET ALL for himself,
or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all
special parameter settings for the user or database, even ones that are only
supposed to be changeable by a superuser. Now, the ALTER will only remove the
parameters that the user has permission to change. (See the sketch at the end
of this section.)

@@ -527,7 +527,7 @@

Avoid possible crash during backend shutdown if shutdown occurs when a CONTEXT
addition would be made to log entries (Tom)

@@ -539,7 +539,7 @@

Update PL/Perl's ppport.h for modern Perl versions (Andrew)

@@ -552,14 +552,14 @@

Prevent infinite recursion in psql when expanding a variable that refers to
itself (Tom)

Ensure that contrib/pgstattuple functions respond to cancel interrupts
promptly (Tatsuhito Kasahara)

@@ -567,7 +567,7 @@

Make server startup deal properly with the case that shmget() returns EINVAL
for an existing shared memory segment (Tom)

@@ -580,7 +580,7 @@

Update time zone data files to tzdata release 2010j for DST law changes in
Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan,
Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan.
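A hedged sketch of the RESET ALL change above (the database name and settings
are invented for the example):

    -- As a superuser: a setting an ordinary owner may not change.
    ALTER DATABASE mydb SET log_statement = 'all';

    -- As the non-superuser owner of mydb:
    ALTER DATABASE mydb SET work_mem = '32MB';
    ALTER DATABASE mydb RESET ALL;  -- now removes only work_mem;
                                    -- log_statement is left alone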
@@ -624,7 +624,7 @@

Add new configuration parameter ssl_renegotiation_limit to control how often
we do session key renegotiation for an SSL connection (Magnus)

@@ -653,8 +653,8 @@

Make substring() for bit types treat any negative length as meaning "all the
rest of the string" (Tom) (See the example below.)

@@ -680,7 +680,7 @@

Fix the STOP WAL LOCATION entry in backup history files to report the next WAL
segment's name when the end location is exactly at a segment boundary (Itagaki
Takahiro)

@@ -700,17 +700,17 @@

When reading pg_hba.conf and related files, do not treat @something as a file
inclusion request if the @ appears inside quote marks; also, never treat @ by
itself as a file inclusion request (Tom)

This prevents erratic behavior if a role or database name starts with @. If
you need to include a file whose path name contains spaces, you can still do
so, but you must write @"/path to/file" rather than putting the quotes around
the whole construct.

@@ -718,14 +718,14 @@

Prevent infinite loop on some platforms if a directory is named as an
inclusion target in pg_hba.conf and related files (Tom)

Fix psql's numericlocale option to not format strings it shouldn't in latex
and troff output formats (Heikki)

@@ -739,7 +739,7 @@

Add volatile markings in PL/Python to avoid possible compiler-specific
misbehavior (Zdenek Kotala)

@@ -751,28 +751,28 @@

The only known symptom of this oversight is that the Tcl clock command
misbehaves if using Tcl 8.5 or later.

Prevent crash in contrib/dblink when too many key columns are specified to a
dblink_build_sql_* function (Rushabh Lathia, Joe Conway)

Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom)

Update time zone data files to tzdata release 2010e for DST law changes in
Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa.

@@ -844,14 +844,14 @@

Prevent signals from interrupting VACUUM at unsafe times (Alvaro)

This fix prevents a PANIC if a VACUUM FULL is canceled after it's already
committed its tuple movements, as well as transient errors if a plain VACUUM
is interrupted after having truncated the table.
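To illustrate the bit-string substring() entry above, a quick sketch (the
values are invented; the expected results follow from the rule stated in the
note):

    SELECT substring(B'110010' from 3);         -- B'0010'
    SELECT substring(B'110010' from 3 for -1);  -- now also B'0010': a negative
                                                -- length means the rest of the string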
@@ -870,7 +870,7 @@

Fix very rare crash in inet/cidr comparisons (Chris Mikkelson)

@@ -896,7 +896,7 @@

The previous code is known to fail with the combination of the Linux pam_krb5
PAM module with Microsoft Active Directory as the domain controller. It might
have problems elsewhere too, since it was making unjustified assumptions about
what arguments the PAM stack would pass to it.

@@ -906,14 +906,14 @@

Fix processing of ownership dependencies during CREATE OR REPLACE FUNCTION
(Tom)

Ensure that Perl arrays are properly converted to PostgreSQL arrays when
returned by a set-returning PL/Perl function (Andrew Dunstan, Abhijit
Menon-Sen)

@@ -930,20 +930,20 @@

Ensure psql's flex module is compiled with the correct system header
definitions (Tom)

This fixes build failures on platforms where --enable-largefile causes
incompatible changes in the generated code.

Make the postmaster ignore any application_name parameter in connection
request packets, to improve compatibility with future libpq versions (Tom)

@@ -951,7 +951,7 @@

Update time zone data files to tzdata release 2009s for DST law changes in
Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine,
Samoa, Syria; also historical corrections for Hong Kong.

@@ -982,8 +982,8 @@

A dump/restore is not required for those running 8.1.X. However, if you have
any hash indexes on interval columns, you must REINDEX them after updating to
8.1.18. Also, if you are upgrading from a version earlier than 8.1.15, see the
release notes for 8.1.15.

@@ -997,14 +997,14 @@

Disallow RESET ROLE and RESET SESSION AUTHORIZATION inside security-definer
functions (Tom, Heikki)

This covers a case that was missed in the previous patch that disallowed SET
ROLE and SET SESSION AUTHORIZATION inside security-definer functions. (See
CVE-2007-6600)

@@ -1018,32 +1018,32 @@

Fix hash calculation for data type interval (Tom)

This corrects wrong results for hash joins on interval values. It also changes
the contents of hash indexes on interval columns. If you have any such
indexes, you must REINDEX them after updating.

Treat to_char(..., 'TH') as an uppercase ordinal suffix with 'HH'/'HH12'
(Heikki)

It was previously handled as 'th' (lowercase).
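A one-line sketch of the 'TH' change just described (the timestamp is
invented):

    SELECT to_char(timestamp '2010-01-01 05:00:00', 'HH12TH');
    -- now yields '05TH'; before the fix it yielded '05th'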
Fix overflow for INTERVAL 'x ms' when x is more than 2 million and integer
datetimes are in use (Alex Hunsaker)

@@ -1060,7 +1060,7 @@

Fix money data type to work in locales where currency amounts have no
fractional digits, e.g. Japan (Itagaki Takahiro)

@@ -1068,7 +1068,7 @@

Properly round datetime input like 00:12:57.9999999999999999999999999999 (Tom)
(See the example below.)

@@ -1087,22 +1087,22 @@

Fix pg_ctl to not go into an infinite loop if postgresql.conf is empty (Jeff
Davis)

Fix contrib/xml2's xslt_process() to properly handle the maximum number of
parameters (twenty) (Tom)

Improve robustness of libpq's code to recover from errors during COPY FROM
STDIN (Tom)

@@ -1115,7 +1115,7 @@

Update time zone data files to tzdata release 2009l for DST law changes in
Bangladesh, Egypt, Jordan, Pakistan, Argentina/San_Luis, Cuba, Jordan
(historical correction only), Mauritius, Morocco, Palestine, Syria, Tunisia.

@@ -1166,7 +1166,7 @@

This change extends fixes made in the last two minor releases for related
failure scenarios. The previous fixes were narrowly tailored for the original
problem reports, but we have now recognized that any error thrown by an
encoding conversion function could potentially lead to infinite recursion
while trying to report the error. The solution therefore is to disable
translation and encoding conversion and report the plain-ASCII form of any
error message,

@@ -1177,7 +1177,7 @@

Disallow CREATE CONVERSION with the wrong encodings for the specified
conversion function (Heikki)

@@ -1190,20 +1190,20 @@

Fix core dump when to_char() is given format codes that are inappropriate for
the type of the data argument (Tom)

Fix decompilation of CASE WHEN with an implicit coercion (Tom)

This mistake could lead to Assert failures in an Assert-enabled build, or an
unexpected CASE WHEN clause error message in other cases, when trying to
examine or dump a view.

@@ -1214,15 +1214,15 @@

If CLUSTER or a rewriting variant of ALTER TABLE were executed by someone
other than the table owner, the pg_type entry for the table's TOAST table
would end up marked as owned by that someone. This caused no immediate
problems, since the permissions on the TOAST rowtype aren't examined by any
ordinary database operation. However, it could lead to unexpected failures if
one later tried to drop the role that issued the command (in 8.1 or 8.2), or
"owner of data type appears to be invalid" warnings from pg_dump after having
done so (in 8.3).
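The rounding entry above can be checked directly (the literal is the one from
the note):

    SELECT time '00:12:57.9999999999999999999999999999';
    -- now rounds cleanly to 00:12:58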
@@ -1240,7 +1240,7 @@

Add MUST (Mauritius Island Summer Time) to the default list of known timezone
abbreviations (Xavier Bugaud)

@@ -1294,13 +1294,13 @@

Improve handling of URLs in headline() function (Teodor)

Improve handling of overlength headlines in headline() function (Teodor)

@@ -1315,7 +1315,7 @@

Avoid unnecessary locking of small tables in VACUUM (Heikki)

@@ -1337,30 +1337,30 @@

Fix uninitialized variables in contrib/tsearch2's get_covers() function
(Teodor)

Fix configure script to properly report failure when unable to obtain linkage
information for PL/Perl (Andrew)

Make all documentation reference pgsql-bugs and/or pgsql-hackers as
appropriate, instead of the now-decommissioned pgsql-ports and pgsql-patches
mailing lists (Tom)

Update time zone data files to tzdata release 2009a (for Kathmandu and
historical DST corrections in Switzerland, Cuba)

@@ -1391,7 +1391,7 @@

A dump/restore is not required for those running 8.1.X. However, if you are
upgrading from a version earlier than 8.1.2, see the release notes for 8.1.2.
Also, if you were running a previous 8.1.X release, it is recommended to
REINDEX all GiST indexes after the upgrade.

@@ -1405,13 +1405,13 @@

Fix GiST index corruption due to marking the wrong index entry dead after a
deletion (Teodor)

This would result in index searches failing to find rows they should have
found. Corrupted indexes can be fixed with REINDEX.

@@ -1423,7 +1423,7 @@

We have addressed similar issues before, but it would still fail if the
"character has no equivalent" message itself couldn't be converted. The fix is
to disable localization and send the plain ASCII error message when we detect
such a situation.

@@ -1438,13 +1438,13 @@

Fix mis-expansion of rule queries when a sub-SELECT appears in a function call
in FROM, a multi-row VALUES list, or a RETURNING list (Tom)

The usual symptom of this problem is an "unrecognized node type" error.
@@ -1458,9 +1458,9 @@

Prevent possible collision of relfilenode numbers when moving a table to
another tablespace with ALTER SET TABLESPACE (Heikki)

@@ -1479,14 +1479,14 @@

Fix improper display of fractional seconds in interval values when using a
non-ISO datestyle in an --enable-integer-datetimes build (Ron Mayer)

Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple
and tuple descriptor have different numbers of columns (Tom)

@@ -1500,19 +1500,19 @@

Fix ecpg's parsing of CREATE ROLE (Michael)

Fix recent breakage of pg_ctl restart (Tom)

Update time zone data files to tzdata release 2008i (for DST law changes in
Argentina, Brazil, Mauritius, Syria)

@@ -1560,7 +1560,7 @@

This responds to reports that the counters could overflow in sufficiently long
transactions, leading to unexpected "lock is already held" errors.

@@ -1573,12 +1573,12 @@

Add checks in executor startup to ensure that the tuples produced by an INSERT
or UPDATE will match the target table's current rowtype (Tom)

ALTER COLUMN TYPE, followed by re-use of a previously cached plan, could
produce this type of situation. The check protects against data corruption
and/or crashes that could ensue.

@@ -1586,18 +1586,18 @@

Fix AT TIME ZONE to first try to interpret its timezone argument as a timezone
abbreviation, and only try it as a full timezone name if that fails, rather
than the other way around as formerly (Tom)

The timestamp input functions have always resolved ambiguous zone names in
this order. Making AT TIME ZONE do so as well improves consistency, and fixes
a compatibility bug introduced in 8.1: in ambiguous cases we now behave the
same as 8.0 and before did, since in the older versions AT TIME ZONE accepted
only abbreviations.

@@ -1617,7 +1617,7 @@

Fix bug in backwards scanning of a cursor on a SELECT DISTINCT ON query (Tom)

@@ -1635,21 +1635,21 @@

Fix planner to estimate that GROUP BY expressions yielding boolean results
always result in two groups, regardless of the expressions' contents (Tom)

This is very substantially more accurate than the regular GROUP BY estimate
for certain boolean tests like col IS NULL.
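A short sketch of the boolean GROUP BY estimate just described (table and
column names are invented):

    -- A boolean test like col IS NULL can produce at most two groups,
    -- which is what the planner now assumes:
    SELECT col IS NULL AS col_is_null, count(*)
    FROM tab
    GROUP BY col IS NULL;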
Fix PL/pgSQL to not fail when a FOR loop's target variable is a record
containing composite-type fields (Tom)

@@ -1673,21 +1673,21 @@

Improve pg_dump and pg_restore's error reporting after failure to send a SQL
command (Tom)

Fix pg_ctl to properly preserve postmaster command-line arguments across a
restart (Bruce)

Update time zone data files to tzdata release 2008f (for DST law changes in
Argentina, Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and
Paraguay)

@@ -1730,18 +1730,18 @@

Make pg_get_ruledef() parenthesize negative constants (Tom)

Before this fix, a negative constant in a view or rule might be dumped as,
say, -42::integer, which is subtly incorrect: it should be (-42)::integer due
to operator precedence rules. Usually this would make little difference, but
it could interact with another recent patch to cause PostgreSQL to reject what
had been a valid SELECT DISTINCT view query. Since this could result in
pg_dump output failing to reload, it is being treated as a high-priority fix.
The only released versions in which dump output is actually incorrect are
8.3.1 and 8.2.7.

@@ -1749,13 +1749,13 @@

Make ALTER AGGREGATE ... OWNER TO update pg_shdepend (Tom)

This oversight could lead to problems if the aggregate was later involved in a
DROP OWNED or REASSIGN OWNED operation.

@@ -1797,7 +1797,7 @@

Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new column is correctly
checked to see if it's been initialized to all non-nulls (Brendan Jurd)

@@ -1809,8 +1809,8 @@

Fix possible CREATE TABLE failure when inheriting the same constraint from
multiple parent relations that inherited that constraint from a common
ancestor (Tom)

@@ -1818,7 +1818,7 @@

Fix conversions between ISO-8859-5 and other encodings to handle Cyrillic Yo
characters (e and E with two dots) (Sergey Burladyan)

@@ -1833,7 +1833,7 @@

This could lead to failures in which two apparently identical literal values
were not seen as equal, resulting in the parser complaining about unmatched
ORDER BY and DISTINCT expressions.

@@ -1841,24 +1841,24 @@

Fix a corner case in regular-expression substring matching
(substring(string from pattern)) (Tom)

The problem occurs when there is a match to the pattern overall but the user
has specified a parenthesized subexpression and that subexpression hasn't got
a match. An example is substring('foo' from 'foo(bar)?').
This should return NULL, since (bar) isn't matched, but it was mistakenly
returning the whole-pattern match instead (ie, foo).

Update time zone data files to tzdata release 2008c (for DST law changes in
Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, Argentina/San_Luis, and
Chile)

@@ -1866,34 +1866,34 @@

Fix incorrect result from ecpg's PGTYPEStimestamp_sub() function (Michael)

Fix core dump in contrib/xml2's xpath_table() function when the input query
returns a NULL value (Tom)

Fix contrib/xml2's makefile to not override CFLAGS (Tom)

Fix DatumGetBool macro to not fail with gcc 4.3 (Tom)

This problem affects old style (V0) C functions that return boolean. The fix
is already in 8.3, but the need to back-patch it was not realized at the time.

@@ -1901,21 +1901,21 @@

Fix longstanding LISTEN/NOTIFY race condition (Tom)

In rare cases a session that had just executed a LISTEN might not get a
notification, even though one would be expected because the concurrent
transaction executing NOTIFY was observed to commit later.

A side effect of the fix is that a transaction that has executed a
not-yet-committed LISTEN command will not see any row in pg_listener for the
LISTEN, should it choose to look; formerly it would have. This behavior was
never documented one way or the other, but it is possible that some
applications depend on the old behavior.

@@ -1924,14 +1924,14 @@

Disallow LISTEN and UNLISTEN within a prepared transaction (Tom)

This was formerly allowed but trying to do it had various unpleasant
consequences, notably that the originating backend could not exit as long as
an UNLISTEN remained uncommitted.

@@ -1954,19 +1954,19 @@

Fix "unrecognized node type" error in some variants of ALTER OWNER (Tom)

Fix pg_ctl to correctly extract the postmaster's port number from command-line
options (Itagaki Takahiro, Tom)

Previously, pg_ctl start -w could try to contact the postmaster on the wrong
port, leading to bogus reports of startup failure.

@@ -1974,20 +1974,20 @@

Use -fwrapv to defend against possible misoptimization in recent gcc versions
(Tom)

This is known to be necessary when building PostgreSQL with gcc 4.3 or later.

Fix display of constant expressions in ORDER BY and GROUP BY (Tom)

@@ -1999,7 +1999,7 @@

Fix libpq to handle NOTICE messages correctly during COPY OUT (Tom)

@@ -2031,8 +2031,8 @@
This is the last 8.1.X release for which the PostgreSQL community will produce
binary packages for Windows. Windows users are encouraged to move to 8.2.X or
later, since there are Windows-specific fixes in 8.2.X that are impractical to
back-port. 8.1.X will continue to

@@ -2058,7 +2058,7 @@

Prevent functions in indexes from executing with the privileges of the user
running VACUUM, ANALYZE, etc (Tom)

@@ -2069,18 +2069,18 @@

(Note that triggers, defaults, check constraints, etc. pose the same type of
risk.) But functions in indexes pose extra danger because they will be
executed by routine maintenance operations such as VACUUM FULL, which are
commonly performed automatically under a superuser account. For example, a
nefarious user can execute code with superuser privileges by setting up a
trojan-horse index definition and waiting for the next routine vacuum. The fix
arranges for standard maintenance operations (including VACUUM, ANALYZE,
REINDEX, and CLUSTER) to execute as the table owner rather than the calling
user, using the same privilege-switching mechanism already used for SECURITY
DEFINER functions. To prevent bypassing this security measure, execution of
SET SESSION AUTHORIZATION and SET ROLE is now forbidden within a SECURITY
DEFINER context. (CVE-2007-6600)

@@ -2100,20 +2100,20 @@

Require non-superusers who use /contrib/dblink to use only password
authentication, as a security measure (Joe)

The fix that appeared for this in 8.1.10 was incomplete, as it plugged the
hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278)

Update time zone data files to tzdata release 2007k (in particular, recent
Argentina changes) (Tom)

@@ -2128,14 +2128,14 @@

Fix planner failure in some cases of WHERE false AND var IN (SELECT ...) (Tom)

Preserve the tablespace of indexes that are rebuilt by ALTER TABLE ... ALTER
COLUMN TYPE (Tom)

@@ -2154,21 +2154,21 @@

Make VACUUM not use all of maintenance_work_mem when the table is too small
for it to be useful (Alvaro)

Fix potential crash in translate() when using a multibyte database encoding
(Tom)

Fix overflow in extract(epoch from interval) for intervals exceeding 68 years
(Tom) (See the sketch below.)

@@ -2182,13 +2182,13 @@

Fix PL/Perl to cope when platform's Perl defines type bool as int rather than
char (Tom)

While this could theoretically happen anywhere, no standard build of Perl did
things this way ... until macOS 10.5.
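A quick sketch of the extract(epoch ...) entry above (the interval value is
invented):

    SELECT extract(epoch from interval '100 years');
    -- intervals beyond about 68 years (2^31 seconds) formerly overflowed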
@@ -2200,64 +2200,64 @@

Fix pg_dump to correctly handle inheritance child tables that have default
expressions different from their parent's (Tom)

Fix libpq crash when PGPASSFILE refers to a file that is not a plain file
(Martin Pitt)

ecpg parser fixes (Michael)

Make contrib/pgcrypto defend against OpenSSL libraries that fail on keys
longer than 128 bits; which is the case at least on some Solaris versions
(Marko Kreen)

Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own
right, rather than crashing (Joe)

Fix tsvector and tsquery output routines to escape backslashes correctly
(Teodor, Bruce)

Fix crash of to_tsvector() on huge input strings (Teodor)

Require a specific version of Autoconf to be used when re-generating the
configure script (Peter)

This affects developers and packagers only. The change was made to prevent
accidental use of untested combinations of Autoconf and PostgreSQL versions.
You can remove the version check if you really want to use a different
Autoconf version, but it's your responsibility whether the result works or
not.

@@ -2300,20 +2300,20 @@

Prevent index corruption when a transaction inserts rows and then aborts close
to the end of a concurrent VACUUM on the same table (Tom)

Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom)

Allow the interval data type to accept input consisting only of milliseconds
or microseconds (Neil)

@@ -2326,7 +2326,7 @@

Fix excessive logging of SSL error messages (Tom)

@@ -2339,7 +2339,7 @@

Fix crash when log_min_error_statement logging runs out of memory (Tom)

@@ -2352,7 +2352,7 @@

Prevent REINDEX and CLUSTER from failing due to attempting to process
temporary tables of other sessions (Alvaro)

@@ -2371,14 +2371,14 @@

Suppress timezone name (%Z) in log timestamps on Windows because of possible
encoding mismatches (Tom)

Require non-superusers who use /contrib/dblink to use only password
authentication, as a security measure (Joe)

@@ -2422,35 +2422,35 @@

Support explicit placement of the temporary-table schema within search_path,
and disable searching it for functions and operators (Tom)

This is needed to allow a security-definer function to set a truly secure
value of search_path. Without it, an unprivileged SQL user can use temporary
objects to execute code with the privileges of the security-definer function
(CVE-2007-2138). See CREATE FUNCTION for more information.
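As a hedged sketch of the search_path advice above (the schema, table, and
function names are invented, and this is only one way to apply the technique):
place pg_temp last, so temporary objects cannot capture references inside a
SECURITY DEFINER function.

    CREATE FUNCTION admin.check_balance(acct integer) RETURNS numeric AS $$
    BEGIN
        -- Pin a trustworthy search path for the rest of the transaction,
        -- with pg_temp last so temp objects cannot shadow trusted ones.
        PERFORM set_config('search_path', 'admin, pg_temp', true);
        RETURN (SELECT balance FROM accounts WHERE id = acct);
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;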
/contrib/tsearch2 crash fixes (Teodor)

Require COMMIT PREPARED to be executed in the same database as the transaction
was prepared in (Heikki)

Fix potential-data-corruption bug in how VACUUM FULL handles UPDATE chains
(Tom, Pavan Deolasee)

@@ -2576,7 +2576,7 @@

Improve VACUUM performance for databases with many tables (Tom)

@@ -2593,7 +2593,7 @@

Fix for rare Assert() crash triggered by UNION (Tom)

@@ -2606,7 +2606,7 @@

Fix bogus "permission denied" failures occurring on Windows due to attempts to
fsync already-deleted files (Magnus, Tom)

@@ -2655,7 +2655,7 @@

Improve handling of getaddrinfo() on AIX (Tom)

@@ -2666,21 +2666,21 @@

Fix pg_restore to handle a tar-format backup that contains large objects
(blobs) with comments (Tom)

Fix "failed to re-find parent key" errors in VACUUM (Tom)

Clean out pg_internal.init cache files during server restart (Simon)

@@ -2693,7 +2693,7 @@

Fix race condition for truncation of a large relation across a gigabyte
boundary by VACUUM (Tom)

@@ -2717,7 +2717,7 @@

Fix error when constructing an ARRAY[] made up of multiple empty elements
(Tom)

@@ -2736,13 +2736,13 @@

to_number() and to_char(numeric) are now STABLE, not IMMUTABLE, for new initdb
installs (Tom)

This is because lc_numeric can potentially change the output of these
functions.

@@ -2753,7 +2753,7 @@

This improves psql \d performance also.

@@ -2802,7 +2802,7 @@ Changes

Disallow aggregate functions in UPDATE commands, except within sub-SELECTs
(Tom) The behavior of such an aggregate was unpredictable, and in 8.1.X could
cause a crash, so it has been disabled. The SQL standard does not allow

@@ -2810,25 +2810,25 @@ this either.

Fix core dump when an untyped literal is taken as ANYARRAY

Fix core dump in duration logging for extended query protocol when a COMMIT or
ROLLBACK is executed

Fix mishandling of AFTER triggers when query contains a SQL function returning
multiple rows (Tom)

Fix ALTER TABLE ... TYPE to recheck NOT NULL for USING clause (Tom)

Fix string_to_array() to handle overlapping matches for the separator string

For example, string_to_array('123xx456xxx789', 'xx').
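Running the note's own example shows the corrected, non-overlapping matching:

    SELECT string_to_array('123xx456xxx789', 'xx');
    -- expected result with left-to-right, non-overlapping separator
    -- matching: {123,456,x789}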
Fix to_timestamp() for AM/PM formats (Bruce)

Fix autovacuum's calculation that decides whether ANALYZE is needed (Alvaro)

Fix corner cases in pattern matching for psql's \d commands

Fix index-corrupting bugs in /contrib/ltree (Teodor)

Numerous robustness fixes in ecpg (Joachim Wieland)

Fix backslash escaping in /contrib/dbmirror

Minor fixes in /contrib/dblink and /contrib/tsearch2

@@ -2836,14 +2836,14 @@ Wieland)

Efficiency improvements in hash tables and bitmap index scans (Tom)

Fix instability of statistics collection on Windows (Tom, Andrew)

Fix statement_timeout to use the proper units on Win32 (Bruce) In previous
Win32 8.1.X versions, the delay was off by a factor of 100.

Fixes for MSVC and Borland C++ compilers (Hiroshi Saito)

Fixes for AIX and Intel compilers (Tom)

Fix rare bug in continuous archiving (Tom)

@@ -2881,9 +2881,9 @@ compilers (Hiroshi Saito)

into SQL commands, you should examine them as soon as possible to ensure that
they are using recommended escaping techniques. In most cases, applications
should be using subroutines provided by libraries or drivers (such as libpq's
PQescapeStringConn()) to perform string escaping, rather than relying on ad
hoc code to do it.

@@ -2893,61 +2893,61 @@

Change the server to reject invalidly-encoded multibyte characters in all
cases (Tatsuo, Tom)

While PostgreSQL has been moving in this direction for some time, the checks
are now applied uniformly to all encodings and all textual input, and are now
always errors not merely warnings. This change defends against SQL-injection
attacks of the type described in CVE-2006-2313.

Reject unsafe uses of \' in string literals

As a server-side defense against SQL-injection attacks of the type described
in CVE-2006-2314, the server now only accepts '' and not \' as a
representation of ASCII single quote in SQL string literals. By default, \' is
rejected only when client_encoding is set to a client-only encoding (SJIS,
BIG5, GBK, GB18030, or UHC), which is the scenario in which SQL injection is
possible. A new configuration parameter backslash_quote is available to adjust
this behavior when needed. Note that full security against CVE-2006-2314 might
require client-side changes; the purpose of backslash_quote is in part to make
it obvious that insecure clients are insecure. (See the sketch just below.)
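A minimal sketch of the quoting rule above (the example strings are invented):

    SELECT 'It''s fine';   -- standard-conforming: a doubled single quote
    SELECT 'It\'s risky';  -- now rejected by default when client_encoding is
                           -- one of the unsafe client-only encodings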
Modify libpq's string-escaping routines to be aware of encoding considerations
and standard_conforming_strings

This fixes libpq-using applications for the security issues described in
CVE-2006-2313 and CVE-2006-2314, and also future-proofs them against the
planned changeover to SQL-standard string literal syntax. Applications that
use multiple PostgreSQL connections concurrently should migrate to
PQescapeStringConn() and PQescapeByteaConn() to ensure that escaping is done
correctly for the settings in use in each database connection. Applications
that do string escaping by hand should be modified to rely on library routines
instead.

Fix weak key selection in pgcrypto (Marko Kreen)

Errors in fortuna PRNG reseeding logic could cause a predictable session key
to be selected by pgp_sym_encrypt() in some cases. This only affects
non-OpenSSL-using builds.

Fix some incorrect encoding conversion functions

win1251_to_iso, win866_to_iso, euc_tw_to_big5, euc_tw_to_mic, mic_to_euc_tw
were all broken to varying extents.

Clean up stray remaining uses of \' in strings (Bruce, Jan)

Make autovacuum visible in pg_stat_activity (Alvaro)

Disable full_page_writes (Tom)

In certain cases, having full_page_writes off would cause crash recovery to
fail. A proper fix will appear in 8.2; for now it's just disabled.

@@ -2965,10 +2965,10 @@ same transaction

Fix WAL replay for case where a B-Tree index has been truncated

Fix SIMILAR TO for patterns involving | (Tom)

Fix SELECT INTO and CREATE TABLE AS to create tables in the default
tablespace, not the base directory (Kris Jurka)

@@ -2986,18 +2986,18 @@ Fuhr)

Fix problem with password prompting on some Win32 systems (Robert Kinberg)

Improve pg_dump's handling of default values for domains

Fix pg_dumpall to handle identically-named users and groups reasonably (only
possible when dumping from a pre-8.1 server) (Tom) The user and group will be
merged into a single role with LOGIN permission. Formerly the merged role
wouldn't have LOGIN permission, making it unusable as a user.

Fix pg_restore -n to work as documented (Tom)

@@ -3035,14 +3035,14 @@ documented (Tom)

Fix bug that allowed any logged-in user to SET ROLE to any other database user
id (CVE-2006-0553)

Due to inadequate validity checking, a user could exploit the special case
that SET ROLE normally uses to restore the previous role setting after an
error.
This allowed ordinary users to acquire superuser status, for example. The
escalation-of-privilege risk exists only in 8.1.0-8.1.2. However, in all
releases back to 7.3 there is a related bug in SET SESSION AUTHORIZATION that
allows unprivileged users to crash the server, if it has been compiled with
Asserts enabled (which is not the default). Thanks to Akio Ishida for
reporting this problem.

@@ -3055,55 +3055,55 @@ created in 8.0.4, 7.4.9, and 7.3.11 releases.

Fix race condition that could lead to "file already exists" errors during
pg_clog and pg_subtrans file creation (Tom)

Fix cases that could lead to crashes if a cache-invalidation message arrives
at just the wrong time (Tom)

Properly check DOMAIN constraints for UNKNOWN parameters in prepared
statements (Neil)

Ensure ALTER COLUMN TYPE will process FOREIGN KEY, UNIQUE, and PRIMARY KEY
constraints in the proper order (Nakano Yoshihisa)

Fixes to allow restoring dumps that have cross-schema references to custom
operators or operator classes (Tom)

Allow pg_restore to continue properly after a COPY failure; formerly it tried
to treat the remaining COPY data as SQL commands (Stephen Frost)

Fix pg_ctl unregister crash when the data directory is not specified (Magnus)

Fix libpq PQprint HTML tags (Christoph Zwerschke)

Fix ecpg crash on AMD64 and PPC (Neil)

Allow SETOF and %TYPE to be used together in function result type declarations
(see the sketch below)

Recover properly if error occurs during argument passing in PL/Python (Neil)

Fix memory leak in plperl_return_next (Neil)

Fix PL/Perl's handling of locales on Win32 to match the backend (Andrew)

Various optimizer fixes (Tom)

Fix crash when log_min_messages is set to DEBUG3 or above in postgresql.conf
on Win32 (Bruce)

Fix pgxs -L library path specification for Win32, Cygwin, macOS, AIX (Bruce)

Check that SID is enabled while checking for Win32 admin

@@ -3112,13 +3112,13 @@ privileges (Magnus)

Properly reject out-of-range date inputs (Kris Jurka)

Portability fix for testing presence of finite and isinf during configure
(Tom)

Improve speed of COPY IN via libpq, by avoiding a kernel call per data line
(Alon Goldshuv)

Improve speed of /contrib/tsearch2 index creation (Tom)

@@ -3145,9 +3145,9 @@ creation (Tom)

A dump/restore is not required for those running 8.1.X. However, you might
need to REINDEX indexes on textual columns after updating, if you are affected
by the locale or plperl issues described below.
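To illustrate the SETOF/%TYPE entry above, a hedged sketch (the table and
function names are invented):

    CREATE TABLE orders (customer_id integer, amount numeric);

    -- Declare the result type via %TYPE, combined with SETOF:
    CREATE FUNCTION customer_ids() RETURNS SETOF orders.customer_id%TYPE
        AS 'SELECT customer_id FROM orders' LANGUAGE sql;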
@@ -3160,7 +3160,7 @@ creation (Tom)

than exit if there is no more room in ShmemBackendArray (Magnus) The previous
behavior could lead to a denial-of-service situation if too many connection
requests arrive close together. This applies only to the Windows port.

Fix bug introduced in 8.0 that could allow ReadBuffer to return an
already-used page as new, potentially causing loss of

@@ -3171,16 +3171,16 @@ outside a transaction or in a failed transaction (Tom)

Fix character string comparison for locales that consider different character
combinations as equal, such as Hungarian (Tom) This might require REINDEX to
fix existing indexes on textual columns.

Set locale environment variables during postmaster startup to ensure that
plperl won't change the locale later

This fixes a problem that occurred if the postmaster was started with
environment variables specifying a different locale than what initdb had been
told. Under these conditions, any use of plperl was likely to lead to corrupt
indexes. You might need REINDEX to fix existing indexes on textual columns if
this has happened to you.

Allow more flexible relocation of installation

@@ -3189,7 +3189,7 @@ directories (Tom)

directory paths were the same except for the last component.

Prevent crashes caused by the use of ISO-8859-5 and ISO-8859-9 encodings
(Tatsuo)

Fix longstanding bug in strpos() and regular expression

@@ -3197,22 +3197,22 @@ handling in certain rarely used Asian multi-byte character sets (Tatsuo)

Fix bug where COPY CSV mode considered any \. to terminate the copy data The
new code requires \. to appear alone on a line, as per documentation.

Make COPY CSV mode quote a literal data value of \. to ensure it cannot be
interpreted as the end-of-data marker (Bruce)

Various fixes for functions returning RECORDs (Tom)

Fix processing of postgresql.conf so a final line with no newline is processed
properly (Tom)

Fix bug in /contrib/pgcrypto gen_salt, which caused it not to use all
available salt space for MD5 and XDES algorithms (Marko Kreen, Solar Designer)
Salts for Blowfish and standard DES are unaffected.

@@ -3220,7 +3220,7 @@ XDES algorithms (Marko Kreen, Solar Designer)

Fix autovacuum crash when processing expression indexes

Fix /contrib/dblink to throw an error, rather than crashing, when the number
of columns specified is different from what's actually returned by the query
(Joe) (See the sketch below.)

@@ -3262,7 +3262,7 @@ what's actually returned by the query (Joe)

involving sub-selects flattened by the optimizer (Tom)

Fix update failures in scenarios involving CHECK constraints, toasted columns,
and indexes (Tom)

Fix bgwriter problems after recovering from errors (Tom)

@@ -3276,7 +3276,7 @@ later VACUUM commands.
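A sketch of the dblink entry above (the connection string and query are
invented): a column list that does not match the query's result now raises an
error instead of crashing.

    SELECT * FROM dblink('dbname=otherdb', 'SELECT 1')
        AS t(a integer, b integer);
    -- errors cleanly: the query returns 1 column but 2 were specified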
Prevent failure if client sends Bind protocol message when current transaction
is already aborted

/contrib/tsearch2 and /contrib/ltree fixes (Teodor)

Fix problems with translated error messages in

@@ -3285,17 +3285,17 @@

unexpected truncation of output strings and wrong display of the smallest
possible bigint value (Andrew, Tom)

These problems only appeared on platforms that were using our port/snprintf.c
code, which includes BSD variants if --enable-nls was given, and perhaps
others. In addition, a different form of the translated-error-message problem
could appear on Windows depending on which version of libintl was used.

Re-allow AM/PM, HH, HH12, and D format specifiers for to_char(time) and
to_char(interval). (to_char(interval) should probably use HH24.) (Bruce)

AIX, HPUX, and MSVC compile fixes (Tom, Hiroshi Saito)

@@ -3305,7 +3305,7 @@ Saito)

Retry file reads and writes after Windows NO_SYSTEM_RESOURCES error (Qingqing
Zhou)

Prevent autovacuum from crashing during ANALYZE of expression index (Alvaro)

Fix problems with ON COMMIT DELETE ROWS temp tables

@@ -3315,7 +3315,7 @@ DISTINCT query

Add 8.1.0 release note item on how to migrate invalid UTF-8 byte sequences
(Paul Lindner)

@@ -3365,13 +3365,13 @@ DISTINCT query

In previous releases, only a single index could be used to do lookups on a
table. With this feature, if a query has WHERE tab.col1 = 4 and tab.col2 = 9,
and there is no multicolumn index on col1 and col2, but there is an index on
col1 and another on col2, it is possible to search both indexes and combine
the results in memory, then do heap fetches for only the rows matching both
the col1 and col2 restrictions. This is very useful in environments that have
a lot of unstructured queries where it is impossible to create indexes that
match all possible access conditions. Bitmap scans are useful even with a
single index,

@@ -3394,9 +3394,9 @@ DISTINCT query

their transactions (none failed), all transactions can be committed. Even if a
machine crashes after a prepare, the prepared transaction can be committed
after the machine is restarted. New syntax includes PREPARE TRANSACTION and
COMMIT/ROLLBACK PREPARED. A new system view pg_prepared_xacts has also been
added.

@@ -3445,12 +3445,12 @@ DISTINCT query

Once a user logs into a role, she obtains capabilities of the login role plus
any inherited roles, and can use SET ROLE to switch to other roles she is a
member of. This feature is a generalization of the SQL standard's concept of
roles.

This change also replaces pg_shadow and pg_group by new role-capable catalogs
pg_authid and pg_auth_members.
The old tables are redefined as read-only views on the new role tables.

@@ -3458,15 +3458,15 @@

Automatically use indexes for MIN() and MAX() (Tom)

In previous releases, the only way to use an index for MIN() or MAX() was to
rewrite the query as SELECT col FROM tab ORDER BY col LIMIT 1. Index usage now
happens automatically.

@@ -3474,7 +3474,7 @@

Move /contrib/pg_autovacuum into the main server (Alvaro)

@@ -3483,21 +3483,21 @@

Integrating autovacuum into the server allows it to be automatically started
and stopped in sync with the database server, and allows autovacuum to be
configured from postgresql.conf.

Add shared row level locks using SELECT ... FOR SHARE (Alvaro)

While PostgreSQL's MVCC locking allows SELECT to never be blocked by writers
and therefore does not need shared row locks for typical operations, shared
locks are useful for applications that require shared row locking. In
particular this reduces the locking requirements

@@ -3516,7 +3516,7 @@

This extension of the dependency mechanism prevents roles from being dropped
while there are still database objects they own. Formerly it was possible to
accidentally "orphan" objects by deleting their owner. While this could be
recovered from, it was messy and unpleasant.

@@ -3537,7 +3537,7 @@

This allows for a basic type of table partitioning. If child tables store
separate key ranges and this is enforced using appropriate CHECK constraints,
the optimizer will skip child table accesses when the constraint guarantees no
matching rows exist in the child table. (See the sketch below.)

@@ -3556,9 +3556,9 @@

The 8.0 release announced that the to_char() function for intervals would be
removed in 8.1. However, since no better API has been suggested,
to_char(interval) has been enhanced in 8.1 and will remain in the server.

@@ -3570,21 +3570,21 @@

add_missing_from is now false by default (Neil)

By default, we now generate an error if a table is used in a query without a
FROM reference. The old behavior is still available, but the parameter must be
set to 'true' to obtain it.

It might be necessary to set add_missing_from to true in order to load an
existing dump file, if the dump contains any views or rules created using the
implicit-FROM syntax. This should be a one-time annoyance, because PostgreSQL
8.1 will convert such views and rules to standard explicit-FROM syntax.
Subsequent dumps will therefore not have the problem.
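A hedged sketch of the constraint-exclusion feature above (the table names and
date ranges are invented; constraint_exclusion must be enabled for the planner
to use the CHECK constraints):

    CREATE TABLE measurements (logdate date, value numeric);

    CREATE TABLE measurements_2005 (
        CHECK (logdate >= '2005-01-01' AND logdate < '2006-01-01')
    ) INHERITS (measurements);

    CREATE TABLE measurements_2006 (
        CHECK (logdate >= '2006-01-01' AND logdate < '2007-01-01')
    ) INHERITS (measurements);

    SET constraint_exclusion = on;

    -- Only the parent and measurements_2006 are scanned; the 2005 child
    -- is skipped because its CHECK constraint excludes the key value.
    SELECT * FROM measurements WHERE logdate = '2006-06-15';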
@@ -3604,29 +3604,29 @@

default_with_oids is now false by default (Neil)

With this option set to false, user-created tables no longer have an OID
column unless WITH OIDS is specified in CREATE TABLE. Though OIDs have existed
in all releases of PostgreSQL, their use is limited because they are only four
bytes long and the counter is shared across all installed databases. The
preferred way of uniquely identifying rows is via sequences and the SERIAL
type, which have been supported since PostgreSQL 6.4.

Add E'' syntax so eventually ordinary strings can treat backslashes literally
(Bruce)

Currently PostgreSQL processes a backslash in a string literal as introducing
a special escape sequence, e.g. \n or \010. While this allows easy entry of
special values, it is nonstandard and makes porting of applications from other
databases more difficult. For this reason, the

@@ -3634,8 +3634,8 @@ DISTINCT query

remove the special meaning of backslashes in strings. For backward
compatibility and for users who want special backslash processing, a new
string syntax has been created. This new string syntax is formed by writing an
E immediately preceding the single quote that starts the string, e.g. E'hi\n'.
While this release does not change the handling of backslashes in strings, it
does add new configuration parameters to help users migrate applications for
future releases:

@@ -3644,14 +3644,14 @@

standard_conforming_strings — does this release treat backslashes literally in
ordinary strings?

escape_string_warning — warn about backslashes in ordinary (non-E) strings

@@ -3659,36 +3659,36 @@

The standard_conforming_strings value is read-only. Applications can retrieve
the value to know how backslashes are processed. (Presence of the parameter
can also be taken as an indication that E'' string syntax is supported.) In a
future release, standard_conforming_strings will be true, meaning backslashes
will be treated literally in non-E strings. To prepare for this change, use
E'' strings in places that need special backslash processing, and turn on
escape_string_warning to find additional strings that need to be converted to
use E''. Also, use two single-quotes ('') to embed a literal single-quote in a
string, rather than the PostgreSQL-supported syntax of backslash single-quote
(\'). The former is standards-conforming and does not require the use of the
E'' string syntax.
You can also use the - $$ string syntax, which does not treat backslashes + E'' string syntax. You can also use the + $$ string syntax, which does not treat backslashes specially. - Make REINDEX DATABASE reindex all indexes in the + Make REINDEX DATABASE reindex all indexes in the database (Tom) - Formerly, REINDEX DATABASE reindexed only + Formerly, REINDEX DATABASE reindexed only system tables. This new behavior seems more intuitive. A new - command REINDEX SYSTEM provides the old functionality + command REINDEX SYSTEM provides the old functionality of reindexing just the system tables. @@ -3698,13 +3698,13 @@ DISTINCT query Read-only large object descriptors now obey MVCC snapshot semantics - When a large object is opened with INV_READ (and not - INV_WRITE), the data read from the descriptor will now - reflect a snapshot of the large object's state at the + When a large object is opened with INV_READ (and not + INV_WRITE), the data read from the descriptor will now + reflect a snapshot of the large object's state at the time of the transaction snapshot in use by the query that called - lo_open(). To obtain the old behavior of always - returning the latest committed data, include INV_WRITE - in the mode flags for lo_open(). + lo_open(). To obtain the old behavior of always + returning the latest committed data, include INV_WRITE + in the mode flags for lo_open(). @@ -3713,28 +3713,28 @@ DISTINCT query Add proper dependencies for arguments of sequence functions (Tom) - In previous releases, sequence names passed to nextval(), - currval(), and setval() were stored as + In previous releases, sequence names passed to nextval(), + currval(), and setval() were stored as simple text strings, meaning that renaming or dropping a - sequence used in a DEFAULT clause made the clause + sequence used in a DEFAULT clause made the clause invalid. This release stores all newly-created sequence function arguments as internal OIDs, allowing them to track sequence renaming, and adding dependency information that prevents - improper sequence removal. It also makes such DEFAULT + improper sequence removal. It also makes such DEFAULT clauses immune to schema renaming and search path changes. Some applications might rely on the old behavior of run-time lookup for sequence names. This can still be done by - explicitly casting the argument to text, for example - nextval('myseq'::text). + explicitly casting the argument to text, for example + nextval('myseq'::text). Pre-8.1 database dumps loaded into 8.1 will use the old text-based representation and therefore will not have the features of OID-stored arguments. However, it is possible to update a - database containing text-based DEFAULT clauses. - First, save this query into a file, such as fixseq.sql: + database containing text-based DEFAULT clauses. + First, save this query into a file, such as fixseq.sql: SELECT 'ALTER TABLE ' || pg_catalog.quote_ident(n.nspname) || '.' || @@ -3754,11 +3754,11 @@ WHERE n.oid = c.relnamespace AND d.adsrc ~ $$val\(\('[^']*'::text\)::regclass$$; Next, run the query against a database to find what - adjustments are required, like this for database db1: + adjustments are required, like this for database db1: psql -t -f fixseq.sql db1 - This will show the ALTER TABLE commands needed to + This will show the ALTER TABLE commands needed to convert the database to the newer OID-based representation. 
If the commands look reasonable, run this to update the database: @@ -3771,51 +3771,51 @@ psql -t -f fixseq.sql db1 | psql -e db1 In psql, treat unquoted - \{digit}+ sequences as octal (Bruce) + \{digit}+ sequences as octal (Bruce) - In previous releases, \{digit}+ sequences were - treated as decimal, and only \0{digit}+ were treated + In previous releases, \{digit}+ sequences were + treated as decimal, and only \0{digit}+ were treated as octal. This change was made for consistency. - Remove grammar productions for prefix and postfix % - and ^ operators + Remove grammar productions for prefix and postfix % + and ^ operators (Tom) These have never been documented and complicated the use of the - modulus operator (%) with negative numbers. + modulus operator (%) with negative numbers. - Make &< and &> for polygons + Make &< and &> for polygons consistent with the box "over" operators (Tom) - CREATE LANGUAGE can ignore the provided arguments - in favor of information from pg_pltemplate + CREATE LANGUAGE can ignore the provided arguments + in favor of information from pg_pltemplate (Tom) - A new system catalog pg_pltemplate has been defined + A new system catalog pg_pltemplate has been defined to carry information about the preferred definitions of procedural languages (such as whether they have validator functions). When an entry exists in this catalog for the language being created, - CREATE LANGUAGE will ignore all its parameters except the + CREATE LANGUAGE will ignore all its parameters except the language name and instead use the catalog information. This measure was taken because of increasing problems with obsolete language definitions being loaded by old dump files. As of 8.1, - pg_dump will dump procedural language definitions as - just CREATE LANGUAGE name, relying + pg_dump will dump procedural language definitions as + just CREATE LANGUAGE name, relying on a template entry to exist at load time. We expect this will be a more future-proof representation. @@ -3835,11 +3835,11 @@ psql -t -f fixseq.sql db1 | psql -e db1 sequences to be entered into the database, and this release properly accepts only valid UTF-8 sequences. One way to correct a dumpfile is to run the command iconv -c -f UTF-8 -t - UTF-8 -o cleanfile.sql dumpfile.sql. The -c option + UTF-8 -o cleanfile.sql dumpfile.sql. The -c option removes invalid character sequences. A diff of the two files will - show the sequences that are invalid. iconv reads the + show the sequences that are invalid. iconv reads the entire input file into memory so it might be necessary to use - split to break up the dump into multiple smaller + split to break up the dump into multiple smaller files for processing. @@ -3908,17 +3908,17 @@ psql -t -f fixseq.sql db1 | psql -e db1 For example, this allows an index on columns a,b,c to be used in - a query with WHERE a = 4 and c = 10. + a query with WHERE a = 4 and c = 10. - Skip WAL logging for CREATE TABLE AS / - SELECT INTO (Simon) + Skip WAL logging for CREATE TABLE AS / + SELECT INTO (Simon) - Since a crash during CREATE TABLE AS would cause the + Since a crash during CREATE TABLE AS would cause the table to be dropped during recovery, there is no reason to WAL log as the table is loaded. (Logging still happens if WAL archiving is enabled, however.) 
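A sketch of the multicolumn-index improvement mentioned above, under hypothetical names; the planner can now use the index even though the middle column has no constraint:

CREATE TABLE t (a int, b int, c int);
CREATE INDEX t_abc_idx ON t (a, b, c);

-- 8.1 can apply t_abc_idx here despite the gap on column b:
EXPLAIN SELECT * FROM t WHERE a = 4 AND c = 10;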
@@ -3933,7 +3933,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add configuration parameter full_page_writes to + Add configuration parameter full_page_writes to control writing full pages to WAL (Bruce) @@ -3948,22 +3948,22 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Use O_DIRECT if available when using - O_SYNC for wal_sync_method + Use O_DIRECT if available when using + O_SYNC for wal_sync_method (Itagaki Takahiro) - O_DIRECT causes disk writes to bypass the kernel + O_DIRECT causes disk writes to bypass the kernel cache, and for WAL writes, this improves performance. - Improve COPY FROM performance (Alon Goldshuv) + Improve COPY FROM performance (Alon Goldshuv) - This was accomplished by reading COPY input in + This was accomplished by reading COPY input in larger chunks, rather than character by character. @@ -4005,14 +4005,14 @@ psql -t -f fixseq.sql db1 | psql -e db1 Add warning about the need to increase - max_fsm_relations and max_fsm_pages - during VACUUM (Ron Mayer) + max_fsm_relations and max_fsm_pages + during VACUUM (Ron Mayer) - Add temp_buffers configuration parameter to allow + Add temp_buffers configuration parameter to allow users to determine the size of the local buffer area for temporary table access (Tom) @@ -4021,13 +4021,13 @@ psql -t -f fixseq.sql db1 | psql -e db1 Add session start time and client IP address to - pg_stat_activity (Magnus) + pg_stat_activity (Magnus) - Adjust pg_stat views for bitmap scans (Tom) + Adjust pg_stat views for bitmap scans (Tom) The meanings of some of the fields have changed slightly. @@ -4036,27 +4036,27 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Enhance pg_locks view (Tom) + Enhance pg_locks view (Tom) - Log queries for client-side PREPARE and - EXECUTE (Simon) + Log queries for client-side PREPARE and + EXECUTE (Simon) Allow Kerberos name and user name case sensitivity to be - specified in postgresql.conf (Magnus) + specified in postgresql.conf (Magnus) - Add configuration parameter krb_server_hostname so + Add configuration parameter krb_server_hostname so that the server host name can be specified as part of service principal (Todd Kover) @@ -4069,8 +4069,8 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add log_line_prefix options for millisecond - timestamps (%m) and remote host (%h) (Ed + Add log_line_prefix options for millisecond + timestamps (%m) and remote host (%h) (Ed L.) @@ -4086,12 +4086,12 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Remove old *.backup files when we do - pg_stop_backup() (Bruce) + Remove old *.backup files when we do + pg_stop_backup() (Bruce) - This prevents a large number of *.backup files from - existing in pg_xlog/. + This prevents a large number of *.backup files from + existing in pg_xlog/. @@ -4112,7 +4112,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 Add per-user and per-database connection limits (Petr Jelinek) - Using ALTER USER and ALTER DATABASE, + Using ALTER USER and ALTER DATABASE, limits can now be enforced on the maximum number of sessions that can concurrently connect as a specific user or to a specific database. Setting the limit to zero disables user or database connections. 
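The per-user and per-database connection limits just described look roughly like this (role and database names are hypothetical):

ALTER USER alice CONNECTION LIMIT 10;      -- at most 10 concurrent sessions as alice
ALTER DATABASE appdb CONNECTION LIMIT 50;  -- at most 50 concurrent sessions in appdb
ALTER USER alice CONNECTION LIMIT 0;       -- zero disables connections entirely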
@@ -4128,7 +4128,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - New system catalog pg_pltemplate allows overriding + New system catalog pg_pltemplate allows overriding obsolete procedural-language definitions in dump files (Tom) @@ -4149,63 +4149,63 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Fix HAVING without any aggregate functions or - GROUP BY so that the query returns a single group (Tom) + Fix HAVING without any aggregate functions or + GROUP BY so that the query returns a single group (Tom) - Previously, such a case would treat the HAVING - clause the same as a WHERE clause. This was not per spec. + Previously, such a case would treat the HAVING + clause the same as a WHERE clause. This was not per spec. - Add USING clause to allow additional tables to be - specified to DELETE (Euler Taveira de Oliveira, Neil) + Add USING clause to allow additional tables to be + specified to DELETE (Euler Taveira de Oliveira, Neil) In prior releases, there was no clear method for specifying - additional tables to be used for joins in a DELETE - statement. UPDATE already has a FROM + additional tables to be used for joins in a DELETE + statement. UPDATE already has a FROM clause for this purpose. - Add support for \x hex escapes in backend and ecpg + Add support for \x hex escapes in backend and ecpg strings (Bruce) - This is just like the standard C \x escape syntax. + This is just like the standard C \x escape syntax. Octal escapes were already supported. - Add BETWEEN SYMMETRIC query syntax (Pavel Stehule) + Add BETWEEN SYMMETRIC query syntax (Pavel Stehule) - This feature allows BETWEEN comparisons without + This feature allows BETWEEN comparisons without requiring the first value to be less than the second. For - example, 2 BETWEEN [ASYMMETRIC] 3 AND 1 returns - false, while 2 BETWEEN SYMMETRIC 3 AND 1 returns - true. BETWEEN ASYMMETRIC was already supported. + example, 2 BETWEEN [ASYMMETRIC] 3 AND 1 returns + false, while 2 BETWEEN SYMMETRIC 3 AND 1 returns + true. BETWEEN ASYMMETRIC was already supported. - Add NOWAIT option to SELECT ... FOR - UPDATE/SHARE (Hans-Juergen Schoenig) + Add NOWAIT option to SELECT ... FOR + UPDATE/SHARE (Hans-Juergen Schoenig) - While the statement_timeout configuration + While the statement_timeout configuration parameter allows a query taking more than a certain amount of - time to be canceled, the NOWAIT option allows a + time to be canceled, the NOWAIT option allows a query to be canceled as soon as a SELECT ... FOR - UPDATE/SHARE command cannot immediately acquire a row lock. + UPDATE/SHARE command cannot immediately acquire a row lock. @@ -4233,7 +4233,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Allow limited ALTER OWNER commands to be performed + Allow limited ALTER OWNER commands to be performed by the object owner (Stephen Frost) @@ -4248,7 +4248,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add ALTER object SET SCHEMA capability + Add ALTER object SET SCHEMA capability for some object types (tables, functions, types) (Bernd Helmle) @@ -4273,54 +4273,54 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Allow TRUNCATE to truncate multiple tables in a + Allow TRUNCATE to truncate multiple tables in a single command (Alvaro) Because of referential integrity checks, it is not allowed to truncate a table that is part of a referential integrity - constraint. Using this new functionality, TRUNCATE + constraint. 
Using this new functionality, TRUNCATE can be used to truncate such tables, if both tables involved in a referential integrity constraint are truncated in a single - TRUNCATE command. + TRUNCATE command. Properly process carriage returns and line feeds in - COPY CSV mode (Andrew) + COPY CSV mode (Andrew) In release 8.0, carriage returns and line feeds in CSV - COPY TO were processed in an inconsistent manner. (This was + COPY TO were processed in an inconsistent manner. (This was documented on the TODO list.) - Add COPY WITH CSV HEADER to allow a header line as - the first line in COPY (Andrew) + Add COPY WITH CSV HEADER to allow a header line as + the first line in COPY (Andrew) - This allows handling of the common CSV usage of + This allows handling of the common CSV usage of placing the column names on the first line of the data file. For - COPY TO, the first line contains the column names, - and for COPY FROM, the first line is ignored. + COPY TO, the first line contains the column names, + and for COPY FROM, the first line is ignored. On Windows, display better sub-second precision in - EXPLAIN ANALYZE (Magnus) + EXPLAIN ANALYZE (Magnus) - Add trigger duration display to EXPLAIN ANALYZE + Add trigger duration display to EXPLAIN ANALYZE (Tom) @@ -4332,7 +4332,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add support for \x hex escapes in COPY + Add support for \x hex escapes in COPY (Sergey Ten) @@ -4342,11 +4342,11 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Make SHOW ALL include variable descriptions + Make SHOW ALL include variable descriptions (Matthias Schmidt) - SHOW varname still only displays the variable's + SHOW varname still only displays the variable's value and does not include the description. @@ -4354,27 +4354,27 @@ psql -t -f fixseq.sql db1 | psql -e db1 Make initdb create a new standard - database called postgres, and convert utilities to - use postgres rather than template1 for + database called postgres, and convert utilities to + use postgres rather than template1 for standard lookups (Dave) - In prior releases, template1 was used both as a + In prior releases, template1 was used both as a default connection for utilities like createuser, and as a template for - new databases. This caused CREATE DATABASE to + new databases. This caused CREATE DATABASE to sometimes fail, because a new database cannot be created if anyone else is in the template database. With this change, the - default connection database is now postgres, + default connection database is now postgres, meaning it is much less likely someone will be using - template1 during CREATE DATABASE. + template1 during CREATE DATABASE. Create new reindexdb command-line - utility by moving /contrib/reindexdb into the + utility by moving /contrib/reindexdb into the server (Euler Taveira de Oliveira) @@ -4389,38 +4389,38 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add MAX() and MIN() aggregates for + Add MAX() and MIN() aggregates for array types (Koju Iijima) - Fix to_date() and to_timestamp() to - behave reasonably when CC and YY fields + Fix to_date() and to_timestamp() to + behave reasonably when CC and YY fields are both used (Karel Zak) - If the format specification contains CC and a year - specification is YYY or longer, ignore the - CC. If the year specification is YY or - shorter, interpret CC as the previous century. + If the format specification contains CC and a year + specification is YYY or longer, ignore the + CC. If the year specification is YY or + shorter, interpret CC as the previous century. 
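A short sketch of the COPY WITH CSV HEADER behavior noted above (table name and file path are illustrative):

COPY mytab TO   '/tmp/mytab.csv' WITH CSV HEADER;  -- first output line holds the column names
COPY mytab FROM '/tmp/mytab.csv' WITH CSV HEADER;  -- first input line is ignored on load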
- Add md5(bytea) (Abhijit Menon-Sen) + Add md5(bytea) (Abhijit Menon-Sen) - md5(text) already existed. + md5(text) already existed. - Add support for numeric ^ numeric based on - power(numeric, numeric) + Add support for numeric ^ numeric based on + power(numeric, numeric) The function already existed, but there was no operator assigned @@ -4430,7 +4430,7 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Fix NUMERIC modulus by properly truncating the quotient + Fix NUMERIC modulus by properly truncating the quotient during computation (Bruce) @@ -4441,29 +4441,29 @@ psql -t -f fixseq.sql db1 | psql -e db1 - Add a function lastval() (Dennis Björklund) + Add a function lastval() (Dennis Björklund) - lastval() is a simplified version of - currval(). It automatically determines the proper - sequence name based on the most recent nextval() or - setval() call performed by the current session. + lastval() is a simplified version of + currval(). It automatically determines the proper + sequence name based on the most recent nextval() or + setval() call performed by the current session. - Add to_timestamp(DOUBLE PRECISION) (Michael Glaesemann) + Add to_timestamp(DOUBLE PRECISION) (Michael Glaesemann) Converts Unix seconds since 1970 to a TIMESTAMP WITH - TIMEZONE. + TIMEZONE. - Add pg_postmaster_start_time() function (Euler + Add pg_postmaster_start_time() function (Euler Taveira de Oliveira, Matthias Schmidt) @@ -4471,11 +4471,11 @@ psql -t -f fixseq.sql db1 | psql -e db1 Allow the full use of time zone names in AT TIME - ZONE, not just the short list previously available (Magnus) + ZONE, not just the short list previously available (Magnus) Previously, only a predefined list of time zone names were - supported by AT TIME ZONE. Now any supported time + supported by AT TIME ZONE. Now any supported time zone name can be used, e.g.: SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; @@ -4488,7 +4488,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add GREATEST() and LEAST() variadic + Add GREATEST() and LEAST() variadic functions (Pavel Stehule) @@ -4499,7 +4499,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add pg_column_size() (Mark Kirkwood) + Add pg_column_size() (Mark Kirkwood) This returns storage size of a column, which might be compressed. @@ -4508,7 +4508,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add regexp_replace() (Atsushi Ogawa) + Add regexp_replace() (Atsushi Ogawa) This allows regular expression replacement, like sed. An optional @@ -4523,8 +4523,8 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; Previous versions sometimes returned unjustified results, like - '4 months'::interval / 5 returning '1 mon - -6 days'. + '4 months'::interval / 5 returning '1 mon + -6 days'. @@ -4534,24 +4534,24 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; This fixes some cases in which the seconds field would be shown as - 60 instead of incrementing the higher-order fields. + 60 instead of incrementing the higher-order fields. - Add a separate day field to type interval so a one day + Add a separate day field to type interval so a one day interval can be distinguished from a 24 hour interval (Michael Glaesemann) Days that contain a daylight saving time adjustment are not 24 hours long, but typically 23 or 25 hours. This change creates a - conceptual distinction between intervals of so many days - and intervals of so many hours. 
Adding - 1 day to a timestamp now gives the same local time on + conceptual distinction between intervals of so many days + and intervals of so many hours. Adding + 1 day to a timestamp now gives the same local time on the next day even if a daylight saving time adjustment occurs - between, whereas adding 24 hours will give a different + between, whereas adding 24 hours will give a different local time when this happens. For example, under US DST rules: '2005-04-03 00:00:00-05' + '1 day' = '2005-04-04 00:00:00-04' @@ -4562,7 +4562,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add justify_days() and justify_hours() + Add justify_days() and justify_hours() (Michael Glaesemann) @@ -4574,7 +4574,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Move /contrib/dbsize into the backend, and rename + Move /contrib/dbsize into the backend, and rename some of the functions (Dave Page, Andreas Pflug) @@ -4582,38 +4582,38 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - pg_tablespace_size() + pg_tablespace_size() - pg_database_size() + pg_database_size() - pg_relation_size() + pg_relation_size() - pg_total_relation_size() + pg_total_relation_size() - pg_size_pretty() + pg_size_pretty() - pg_total_relation_size() includes indexes and TOAST + pg_total_relation_size() includes indexes and TOAST tables. @@ -4628,19 +4628,19 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - pg_stat_file() + pg_stat_file() - pg_read_file() + pg_read_file() - pg_ls_dir() + pg_ls_dir() @@ -4650,21 +4650,21 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add pg_reload_conf() to force reloading of the + Add pg_reload_conf() to force reloading of the configuration files (Dave Page, Andreas Pflug) - Add pg_rotate_logfile() to force rotation of the + Add pg_rotate_logfile() to force rotation of the server log file (Dave Page, Andreas Pflug) - Change pg_stat_* views to include TOAST tables (Tom) + Change pg_stat_* views to include TOAST tables (Tom) @@ -4686,25 +4686,25 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - UNICODE is now UTF8 + UNICODE is now UTF8 - ALT is now WIN866 + ALT is now WIN866 - WIN is now WIN1251 + WIN is now WIN1251 - TCVN is now WIN1258 + TCVN is now WIN1258 @@ -4718,17 +4718,17 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add support for WIN1252 encoding (Roland Volkmann) + Add support for WIN1252 encoding (Roland Volkmann) - Add support for four-byte UTF8 characters (John + Add support for four-byte UTF8 characters (John Hansen) - Previously only one, two, and three-byte UTF8 characters + Previously only one, two, and three-byte UTF8 characters were supported. This is particularly important for support for some Chinese character sets. 
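The renamed dbsize functions listed above combine naturally with pg_size_pretty(); for example (the table name is illustrative):

SELECT pg_size_pretty(pg_database_size(current_database()));
SELECT pg_size_pretty(pg_total_relation_size('mytab'));  -- includes indexes and TOAST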
@@ -4736,8 +4736,8 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow direct conversion between EUC_JP and - SJIS to improve performance (Atsushi Ogawa) + Allow direct conversion between EUC_JP and + SJIS to improve performance (Atsushi Ogawa) @@ -4761,14 +4761,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Fix ALTER LANGUAGE RENAME (Sergey Yatskevich) + Fix ALTER LANGUAGE RENAME (Sergey Yatskevich) Allow function characteristics, like strictness and volatility, - to be modified via ALTER FUNCTION (Neil) + to be modified via ALTER FUNCTION (Neil) @@ -4780,14 +4780,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow SQL and PL/pgSQL functions to use OUT and - INOUT parameters (Tom) + Allow SQL and PL/pgSQL functions to use OUT and + INOUT parameters (Tom) - OUT is an alternate way for a function to return - values. Instead of using RETURN, values can be - returned by assigning to parameters declared as OUT or - INOUT. This is notationally simpler in some cases, + OUT is an alternate way for a function to return + values. Instead of using RETURN, values can be + returned by assigning to parameters declared as OUT or + INOUT. This is notationally simpler in some cases, particularly so when multiple values need to be returned. While returning multiple values from a function was possible in previous releases, this greatly simplifies the @@ -4798,7 +4798,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Move language handler functions into the pg_catalog schema + Move language handler functions into the pg_catalog schema This makes it easier to drop the public schema if desired. @@ -4831,7 +4831,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Check function syntax at CREATE FUNCTION time, + Check function syntax at CREATE FUNCTION time, rather than at runtime (Neil) @@ -4842,19 +4842,19 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow OPEN to open non-SELECT queries - like EXPLAIN and SHOW (Tom) + Allow OPEN to open non-SELECT queries + like EXPLAIN and SHOW (Tom) - No longer require functions to issue a RETURN + No longer require functions to issue a RETURN statement (Tom) - This is a byproduct of the newly added OUT and - INOUT functionality. RETURN can + This is a byproduct of the newly added OUT and + INOUT functionality. RETURN can be omitted when it is not needed to provide the function's return value. 
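A minimal sketch of the OUT-parameter style described above (function and parameter names are hypothetical); note that, per the companion item, no RETURN statement is needed:

CREATE FUNCTION sum_and_product(x int, y int, OUT s int, OUT p int) AS $$
BEGIN
    s := x + y;   -- assigning to an OUT parameter supplies the return value
    p := x * y;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM sum_and_product(3, 4);  -- one row: (7, 12)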
@@ -4862,21 +4862,21 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add support for an optional INTO clause to - PL/pgSQL's EXECUTE statement (Pavel Stehule, Neil) + Add support for an optional INTO clause to + PL/pgSQL's EXECUTE statement (Pavel Stehule, Neil) - Make CREATE TABLE AS set ROW_COUNT (Tom) + Make CREATE TABLE AS set ROW_COUNT (Tom) - Define SQLSTATE and SQLERRM to return - the SQLSTATE and error message of the current + Define SQLSTATE and SQLERRM to return + the SQLSTATE and error message of the current exception (Pavel Stehule, Neil) @@ -4886,14 +4886,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow the parameters to the RAISE statement to be + Allow the parameters to the RAISE statement to be expressions (Pavel Stehule, Neil) - Add a loop CONTINUE statement (Pavel Stehule, Neil) + Add a loop CONTINUE statement (Pavel Stehule, Neil) @@ -4917,7 +4917,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; Menon-Sen) - This allows functions to use return_next() to avoid + This allows functions to use return_next() to avoid building the entire result set in memory. @@ -4927,16 +4927,16 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; Allow one-row-at-a-time retrieval of query results (Abhijit Menon-Sen) - This allows functions to use spi_query() and - spi_fetchrow() to avoid accumulating the entire + This allows functions to use spi_query() and + spi_fetchrow() to avoid accumulating the entire result set in memory. - Force PL/Perl to handle strings as UTF8 if the - server encoding is UTF8 (David Kamholz) + Force PL/Perl to handle strings as UTF8 if the + server encoding is UTF8 (David Kamholz) @@ -4963,14 +4963,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow Perl nonfatal warnings to generate NOTICE + Allow Perl nonfatal warnings to generate NOTICE messages (Andrew) - Allow Perl's strict mode to be enabled (Andrew) + Allow Perl's strict mode to be enabled (Andrew) @@ -4979,12 +4979,12 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - <application>psql</> Changes + <application>psql</application> Changes - Add \set ON_ERROR_ROLLBACK to allow statements in + Add \set ON_ERROR_ROLLBACK to allow statements in a transaction to error without affecting the rest of the transaction (Greg Sabino Mullane) @@ -4996,8 +4996,8 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add support for \x hex strings in - psql variables (Bruce) + Add support for \x hex strings in + psql variables (Bruce) Octal escapes were already supported. @@ -5006,7 +5006,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add support for troff -ms output format (Roger + Add support for troff -ms output format (Roger Leigh) @@ -5014,7 +5014,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; Allow the history file location to be controlled by - HISTFILE (Andreas Seltenreich) + HISTFILE (Andreas Seltenreich) This allows configuration of per-database history storage. 
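The new SQLSTATE and SQLERRM variables can be read inside an exception handler, roughly as follows (the function is an illustrative sketch):

CREATE FUNCTION safe_div(a numeric, b numeric) RETURNS numeric AS $$
BEGIN
    RETURN a / b;
EXCEPTION WHEN division_by_zero THEN
    RAISE NOTICE 'caught %: %', SQLSTATE, SQLERRM;  -- e.g. 22012: division by zero
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;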
@@ -5023,14 +5023,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Prevent \x (expanded mode) from affecting - the output of \d tablename (Neil) + Prevent \x (expanded mode) from affecting + the output of \d tablename (Neil) - Add option to psql to log sessions (Lorne Sunley) @@ -5041,44 +5041,44 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Make \d show the tablespaces of indexes (Qingqing + Make \d show the tablespaces of indexes (Qingqing Zhou) - Allow psql help (\h) to + Allow psql help (\h) to make a best guess on the proper help information (Greg Sabino Mullane) - This allows the user to just add \h to the front of + This allows the user to just add \h to the front of the syntax error query and get help on the supported syntax. Previously any additional query text beyond the command name - had to be removed to use \h. + had to be removed to use \h. - Add \pset numericlocale to allow numbers to be + Add \pset numericlocale to allow numbers to be output in a locale-aware format (Eugen Nedelcu) - For example, using C locale 100000 would - be output as 100,000.0 while a European locale might - output this value as 100.000,0. + For example, using C locale 100000 would + be output as 100,000.0 while a European locale might + output this value as 100.000,0. Make startup banner show both server version number and - psql's version number, when they are different (Bruce) + psql's version number, when they are different (Bruce) - Also, a warning will be shown if the server and psql + Also, a warning will be shown if the server and psql are from different major releases. @@ -5088,13 +5088,13 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - <application>pg_dump</> Changes + <application>pg_dump</application> Changes - Add This allows just the objects in a specified schema to be restored. @@ -5103,18 +5103,18 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow pg_dump to dump large objects even in + Allow pg_dump to dump large objects even in text mode (Tom) With this change, large objects are now always dumped; the former - switch is a no-op. - Allow pg_dump to dump a consistent snapshot of + Allow pg_dump to dump a consistent snapshot of large objects (Tom) @@ -5127,7 +5127,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add @@ -5139,14 +5139,14 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Rely on pg_pltemplate for procedural languages (Tom) + Rely on pg_pltemplate for procedural languages (Tom) If the call handler for a procedural language is in the - pg_catalog schema, pg_dump does not + pg_catalog schema, pg_dump does not dump the handler. Instead, it dumps the language using just - CREATE LANGUAGE name, - relying on the pg_pltemplate catalog to provide + CREATE LANGUAGE name, + relying on the pg_pltemplate catalog to provide the language's creation parameters at load time. 
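A brief psql sketch of the numericlocale option mentioned above:

\pset numericlocale on
SELECT 100000.0;  -- per the item above: 100,000.0 under C locale, 100.000,0 under some European locales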
@@ -5161,15 +5161,15 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add a PGPASSFILE environment variable to specify the + Add a PGPASSFILE environment variable to specify the password file's filename (Andrew) - Add lo_create(), that is similar to - lo_creat() but allows the OID of the large object + Add lo_create(), that is similar to + lo_creat() but allows the OID of the large object to be specified (Tom) @@ -5191,7 +5191,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Fix pgxs to support building against a relocated + Fix pgxs to support building against a relocated installation @@ -5238,10 +5238,10 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Allow pg_config to be compiled using MSVC (Andrew) + Allow pg_config to be compiled using MSVC (Andrew) - This is required to build DBD::Pg using MSVC. + This is required to build DBD::Pg using MSVC. @@ -5264,15 +5264,15 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Modify postgresql.conf to use documentation defaults - on/off rather than - true/false (Bruce) + Modify postgresql.conf to use documentation defaults + on/off rather than + true/false (Bruce) - Enhance pg_config to be able to report more + Enhance pg_config to be able to report more build-time values (Tom) @@ -5304,11 +5304,11 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - In previous releases, gist.h contained both the + In previous releases, gist.h contained both the public GiST API (intended for use by authors of GiST index implementations) as well as some private declarations used by the implementation of GiST itself. The latter have been moved - to a separate file, gist_private.h. Most GiST + to a separate file, gist_private.h. Most GiST index implementations should be unaffected. @@ -5320,10 +5320,10 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; GiST methods are now always invoked in a short-lived memory - context. Therefore, memory allocated via palloc() + context. Therefore, memory allocated via palloc() will be reclaimed automatically, so GiST index implementations do not need to manually release allocated memory via - pfree(). + pfree(). @@ -5336,7 +5336,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Add /contrib/pg_buffercache contrib module (Mark + Add /contrib/pg_buffercache contrib module (Mark Kirkwood) @@ -5347,28 +5347,28 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Remove /contrib/array because it is obsolete (Tom) + Remove /contrib/array because it is obsolete (Tom) - Clean up the /contrib/lo module (Tom) + Clean up the /contrib/lo module (Tom) - Move /contrib/findoidjoins to - /src/tools (Tom) + Move /contrib/findoidjoins to + /src/tools (Tom) - Remove the <<, >>, - &<, and &> operators from - /contrib/cube + Remove the <<, >>, + &<, and &> operators from + /contrib/cube These operators were not useful. 
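The difference between lo_creat() and the new lo_create() mentioned above, as server-side calls (the OID shown is arbitrary):

SELECT lo_creat(-1);       -- server assigns and returns a free OID
SELECT lo_create(424242);  -- creates a large object with the caller-specified OID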
@@ -5377,13 +5377,13 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Improve /contrib/btree_gist (Janko Richter) + Improve /contrib/btree_gist (Janko Richter) - Improve /contrib/pgbench (Tomoaki Sato, Tatsuo) + Improve /contrib/pgbench (Tomoaki Sato, Tatsuo) There is now a facility for testing with SQL command scripts given @@ -5393,7 +5393,7 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Improve /contrib/pgcrypto (Marko Kreen) + Improve /contrib/pgcrypto (Marko Kreen) @@ -5421,16 +5421,16 @@ SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London'; - Take build parameters (OpenSSL, zlib) from configure result + Take build parameters (OpenSSL, zlib) from configure result - There is no need to edit the Makefile anymore. + There is no need to edit the Makefile anymore. - Remove support for libmhash and libmcrypt + Remove support for libmhash and libmcrypt diff --git a/doc/src/sgml/release-8.2.sgml b/doc/src/sgml/release-8.2.sgml index c00cbd3467..71b50cfb01 100644 --- a/doc/src/sgml/release-8.2.sgml +++ b/doc/src/sgml/release-8.2.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.2.X series. Users are encouraged to update to a newer release branch soon. @@ -30,7 +30,7 @@ However, a longstanding error was discovered in the definition of the - information_schema.referential_constraints view. If you + information_schema.referential_constraints view. If you rely on correct results from that view, you should replace its definition as explained in the first changelog item below. @@ -49,7 +49,7 @@ - Fix bugs in information_schema.referential_constraints view + Fix bugs in information_schema.referential_constraints view (Tom Lane) @@ -62,13 +62,13 @@ - Since the view definition is installed by initdb, + Since the view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can (as a superuser) drop the - information_schema schema then re-create it by sourcing - SHAREDIR/information_schema.sql. - (Run pg_config --sharedir if you're uncertain where - SHAREDIR is.) This must be repeated in each database + information_schema schema then re-create it by sourcing + SHAREDIR/information_schema.sql. + (Run pg_config --sharedir if you're uncertain where + SHAREDIR is.) This must be repeated in each database to be fixed. @@ -76,12 +76,12 @@ Fix TOAST-related data corruption during CREATE TABLE dest AS - SELECT * FROM src or INSERT INTO dest SELECT * FROM src + SELECT * FROM src or INSERT INTO dest SELECT * FROM src (Tom Lane) - If a table has been modified by ALTER TABLE ADD COLUMN, + If a table has been modified by ALTER TABLE ADD COLUMN, attempts to copy its data verbatim to another table could produce corrupt results in certain corner cases. The problem can only manifest in this precise form in 8.4 and later, @@ -98,22 +98,22 @@ The typical symptom was transient errors like missing chunk - number 0 for toast value NNNNN in pg_toast_2619, where the cited + number 0 for toast value NNNNN in pg_toast_2619, where the cited toast table would always belong to a system catalog. 
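The referential_constraints repair described above amounts to something like the following, run as a superuser in each affected database; the SHAREDIR path shown is illustrative, so substitute the output of pg_config --sharedir:

DROP SCHEMA information_schema CASCADE;
\i /usr/local/pgsql/share/information_schema.sql  -- i.e. SHAREDIR/information_schema.sql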
- Improve locale support in money type's input and output + Improve locale support in money type's input and output (Tom Lane) Aside from not supporting all standard - lc_monetary + lc_monetary formatting options, the input and output functions were inconsistent, - meaning there were locales in which dumped money values could + meaning there were locales in which dumped money values could not be re-read. @@ -121,15 +121,15 @@ Don't let transform_null_equals - affect CASE foo WHEN NULL ... constructs + linkend="guc-transform-null-equals">transform_null_equals + affect CASE foo WHEN NULL ... constructs (Heikki Linnakangas) - transform_null_equals is only supposed to affect - foo = NULL expressions written directly by the user, not - equality checks generated internally by this form of CASE. + transform_null_equals is only supposed to affect + foo = NULL expressions written directly by the user, not + equality checks generated internally by this form of CASE. @@ -141,14 +141,14 @@ For a cascading foreign key that references its own table, a row update - will fire both the ON UPDATE trigger and the - CHECK trigger as one event. The ON UPDATE - trigger must execute first, else the CHECK will check a + will fire both the ON UPDATE trigger and the + CHECK trigger as one event. The ON UPDATE + trigger must execute first, else the CHECK will check a non-final state of the row and possibly throw an inappropriate error. However, the firing order of these triggers is determined by their names, which generally sort in creation order since the triggers have auto-generated names following the convention - RI_ConstraintTrigger_NNNN. A proper fix would require + RI_ConstraintTrigger_NNNN. A proper fix would require modifying that convention, which we will do in 9.2, but it seems risky to change it in existing releases. So this patch just changes the creation order of the triggers. Users encountering this type of error @@ -159,7 +159,7 @@ - Preserve blank lines within commands in psql's command + Preserve blank lines within commands in psql's command history (Robert Haas) @@ -171,7 +171,7 @@ - Use the preferred version of xsubpp to build PL/Perl, + Use the preferred version of xsubpp to build PL/Perl, not necessarily the operating system's main copy (David Wheeler and Alex Hunsaker) @@ -179,7 +179,7 @@ - Honor query cancel interrupts promptly in pgstatindex() + Honor query cancel interrupts promptly in pgstatindex() (Robert Haas) @@ -210,15 +210,15 @@ - Map Central America Standard Time to CST6, not - CST6CDT, because DST is generally not observed anywhere in + Map Central America Standard Time to CST6, not + CST6CDT, because DST is generally not observed anywhere in Central America. - Update time zone data files to tzdata release 2011n + Update time zone data files to tzdata release 2011n for DST law changes in Brazil, Cuba, Fiji, Palestine, Russia, and Samoa; also historical corrections for Alaska and British East Africa. @@ -244,7 +244,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.2.X release series in December 2011. Users are encouraged to update to a newer release branch soon. @@ -279,7 +279,7 @@ - Avoid possibly accessing off the end of memory in ANALYZE + Avoid possibly accessing off the end of memory in ANALYZE (Noah Misch) @@ -297,7 +297,7 @@ There was a window wherein a new backend process could read a stale init file but miss the inval messages that would tell it the data is stale. 
The result would be bizarre failures in catalog accesses, typically - could not read block 0 in file ... later during startup. + could not read block 0 in file ... later during startup. @@ -346,13 +346,13 @@ - Fix dump bug for VALUES in a view (Tom Lane) + Fix dump bug for VALUES in a view (Tom Lane) - Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) + Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) @@ -370,18 +370,18 @@ Fix portability bugs in use of credentials control messages for - peer authentication (Tom Lane) + peer authentication (Tom Lane) - Fix typo in pg_srand48 seed initialization (Andres Freund) + Fix typo in pg_srand48 seed initialization (Andres Freund) This led to failure to use all bits of the provided seed. This function - is not used on most platforms (only those without srandom), + is not used on most platforms (only those without srandom), and the potential security exposure from a less-random-than-expected seed seems minimal in any case. @@ -389,25 +389,25 @@ - Avoid integer overflow when the sum of LIMIT and - OFFSET values exceeds 2^63 (Heikki Linnakangas) + Avoid integer overflow when the sum of LIMIT and + OFFSET values exceeds 2^63 (Heikki Linnakangas) - Add overflow checks to int4 and int8 versions of - generate_series() (Robert Haas) + Add overflow checks to int4 and int8 versions of + generate_series() (Robert Haas) - Fix trailing-zero removal in to_char() (Marti Raudsepp) + Fix trailing-zero removal in to_char() (Marti Raudsepp) - In a format with FM and no digit positions + In a format with FM and no digit positions after the decimal point, zeroes to the left of the decimal point could be removed incorrectly. @@ -415,41 +415,41 @@ - Fix pg_size_pretty() to avoid overflow for inputs close to + Fix pg_size_pretty() to avoid overflow for inputs close to 2^63 (Tom Lane) - Fix psql's counting of script file line numbers during - COPY from a different file (Tom Lane) + Fix psql's counting of script file line numbers during + COPY from a different file (Tom Lane) - Fix pg_restore's direct-to-database mode for - standard_conforming_strings (Tom Lane) + Fix pg_restore's direct-to-database mode for + standard_conforming_strings (Tom Lane) - pg_restore could emit incorrect commands when restoring + pg_restore could emit incorrect commands when restoring directly to a database server from an archive file that had been made - with standard_conforming_strings set to on. + with standard_conforming_strings set to on. - Fix write-past-buffer-end and memory leak in libpq's + Fix write-past-buffer-end and memory leak in libpq's LDAP service lookup code (Albe Laurenz) - In libpq, avoid failures when using nonblocking I/O + In libpq, avoid failures when using nonblocking I/O and an SSL connection (Martin Pihlak, Tom Lane) @@ -461,14 +461,14 @@ - In particular, the response to a server report of fork() + In particular, the response to a server report of fork() failure during SSL connection startup is now saner. - Make ecpglib write double values with 15 digits + Make ecpglib write double values with 15 digits precision (Akira Kurosawa) @@ -480,7 +480,7 @@ - contrib/pg_crypto's blowfish encryption code could give + contrib/pg_crypto's blowfish encryption code could give wrong results on platforms where char is signed (which is most), leading to encrypted passwords being weaker than they should be. 
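To illustrate the transform_null_equals fix above: the parameter rewrites only explicit = NULL comparisons written by the user, not the equality tests a simple-form CASE generates internally. A sketch:

SET transform_null_equals = on;
SELECT NULL = NULL;                                  -- rewritten to NULL IS NULL: true
SELECT CASE NULL WHEN NULL THEN 'eq' ELSE 'ne' END;  -- internal comparison untouched: 'ne'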
@@ -488,13 +488,13 @@ - Fix memory leak in contrib/seg (Heikki Linnakangas) + Fix memory leak in contrib/seg (Heikki Linnakangas) - Fix pgstatindex() to give consistent results for empty + Fix pgstatindex() to give consistent results for empty indexes (Tom Lane) @@ -526,7 +526,7 @@ - Update time zone data files to tzdata release 2011i + Update time zone data files to tzdata release 2011i for DST law changes in Canada, Egypt, Russia, Samoa, and South Sudan. @@ -582,15 +582,15 @@ - Fix dangling-pointer problem in BEFORE ROW UPDATE trigger + Fix dangling-pointer problem in BEFORE ROW UPDATE trigger handling when there was a concurrent update to the target tuple (Tom Lane) This bug has been observed to result in intermittent cannot - extract system attribute from virtual tuple failures while trying to - do UPDATE RETURNING ctid. There is a very small probability + extract system attribute from virtual tuple failures while trying to + do UPDATE RETURNING ctid. There is a very small probability of more serious errors, such as generating incorrect index entries for the updated tuple. @@ -598,13 +598,13 @@ - Disallow DROP TABLE when there are pending deferred trigger + Disallow DROP TABLE when there are pending deferred trigger events for the table (Tom Lane) - Formerly the DROP would go through, leading to - could not open relation with OID nnn errors when the + Formerly the DROP would go through, leading to + could not open relation with OID nnn errors when the triggers were eventually fired. @@ -617,7 +617,7 @@ - Fix pg_restore to cope with long lines (over 1KB) in + Fix pg_restore to cope with long lines (over 1KB) in TOC files (Tom Lane) @@ -649,14 +649,14 @@ - Fix path separator used by pg_regress on Cygwin + Fix path separator used by pg_regress on Cygwin (Andrew Dunstan) - Update time zone data files to tzdata release 2011f + Update time zone data files to tzdata release 2011f for DST law changes in Chile, Cuba, Falkland Islands, Morocco, Samoa, and Turkey; also historical corrections for South Australia, Alaska, and Hawaii. @@ -700,15 +700,15 @@ - Avoid failures when EXPLAIN tries to display a simple-form - CASE expression (Tom Lane) + Avoid failures when EXPLAIN tries to display a simple-form + CASE expression (Tom Lane) - If the CASE's test expression was a constant, the planner - could simplify the CASE into a form that confused the + If the CASE's test expression was a constant, the planner + could simplify the CASE into a form that confused the expression-display code, resulting in unexpected CASE WHEN - clause errors. + clause errors. @@ -733,44 +733,44 @@ - The date type supports a wider range of dates than can be - represented by the timestamp types, but the planner assumed it + The date type supports a wider range of dates than can be + represented by the timestamp types, but the planner assumed it could always convert a date to timestamp with impunity. - Fix pg_restore's text output for large objects (BLOBs) - when standard_conforming_strings is on (Tom Lane) + Fix pg_restore's text output for large objects (BLOBs) + when standard_conforming_strings is on (Tom Lane) Although restoring directly to a database worked correctly, string - escaping was incorrect if pg_restore was asked for - SQL text output and standard_conforming_strings had been + escaping was incorrect if pg_restore was asked for + SQL text output and standard_conforming_strings had been enabled in the source database. 
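A sketch of the EXPLAIN failure mode fixed above, presumably along these lines, with a constant test expression that the planner can pre-simplify:

-- Before the fix, displaying the simplified plan could raise
-- "unexpected CASE WHEN clause"; it now explains cleanly:
EXPLAIN SELECT CASE 1 WHEN 1 THEN 'one' ELSE 'other' END;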
- Fix erroneous parsing of tsquery values containing + Fix erroneous parsing of tsquery values containing ... & !(subexpression) | ... (Tom Lane) Queries containing this combination of operators were not executed - correctly. The same error existed in contrib/intarray's - query_int type and contrib/ltree's - ltxtquery type. + correctly. The same error existed in contrib/intarray's + query_int type and contrib/ltree's + ltxtquery type. - Fix buffer overrun in contrib/intarray's input function - for the query_int type (Apple) + Fix buffer overrun in contrib/intarray's input function + for the query_int type (Apple) @@ -782,16 +782,16 @@ - Fix bug in contrib/seg's GiST picksplit algorithm + Fix bug in contrib/seg's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a seg column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a seg column. + If you have such an index, consider REINDEXing it after installing this update. (This is identical to the bug that was fixed in - contrib/cube in the previous update.) + contrib/cube in the previous update.) @@ -833,17 +833,17 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. @@ -853,7 +853,7 @@ - This could result in bad buffer id: 0 failures or + This could result in bad buffer id: 0 failures or corruption of index contents during replication. @@ -867,19 +867,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -895,7 +895,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -905,7 +905,7 @@ - Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -917,14 +917,14 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -936,15 +936,15 @@ - Behave correctly if ORDER BY, LIMIT, - FOR UPDATE, or WITH is attached to the - VALUES part of INSERT ... 
VALUES (Tom Lane) + Behave correctly if ORDER BY, LIMIT, + FOR UPDATE, or WITH is attached to the + VALUES part of INSERT ... VALUES (Tom Lane) - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -955,11 +955,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. @@ -978,14 +978,14 @@ - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix PL/Python's handling of set-returning functions + Fix PL/Python's handling of set-returning functions (Jan Urbanski) @@ -997,22 +997,22 @@ - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -1020,20 +1020,20 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -1084,7 +1084,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. 
Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -1113,7 +1113,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -1135,7 +1135,7 @@ - Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on + Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on Windows (Magnus Hagander) @@ -1149,7 +1149,7 @@ - Fix possible duplicate scans of UNION ALL member relations + Fix possible duplicate scans of UNION ALL member relations (Tom Lane) @@ -1201,7 +1201,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -1227,7 +1227,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -1235,28 +1235,28 @@ Fix possible data corruption in ALTER TABLE ... SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -1264,30 +1264,30 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) - Add hstore(text, text) - function to contrib/hstore (Robert Haas) + Add hstore(text, text) + function to contrib/hstore (Robert Haas) This function is the recommended substitute for the now-deprecated - => operator. It was back-patched so that future-proofed + => operator. It was back-patched so that future-proofed code can be used with older server versions. Note that the patch will - be effective only after contrib/hstore is installed or + be effective only after contrib/hstore is installed or reinstalled in a particular database. Users might prefer to execute - the CREATE FUNCTION command by hand, instead. + the CREATE FUNCTION command by hand, instead. @@ -1300,7 +1300,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -1315,7 +1315,7 @@ - Make Windows' N. Central Asia Standard Time timezone map to + Make Windows' N. 
Central Asia Standard Time timezone map to Asia/Novosibirsk, not Asia/Almaty (Magnus Hagander) @@ -1362,19 +1362,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -1383,19 +1383,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -1419,10 +1419,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. @@ -1430,7 +1430,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -1442,7 +1442,7 @@ - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -1455,15 +1455,15 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Fix psql's \copy to not add spaces around - a dot within \copy (select ...) (Tom) + Fix psql's \copy to not add spaces around + a dot within \copy (select ...) 
(Tom) @@ -1474,7 +1474,7 @@ - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -1482,7 +1482,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -1514,14 +1514,14 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. - Also, add PKST (Pakistan Summer Time) to the default set of + Also, add PKST (Pakistan Summer Time) to the default set of timezone abbreviations. @@ -1563,7 +1563,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -1619,8 +1619,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -1646,7 +1646,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -1668,23 +1668,23 @@ Improve constraint exclusion processing of boolean-variable cases, in particular make it possible to exclude a partition that has a - bool_column = false constraint (Tom) + bool_column = false constraint (Tom) - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -1692,35 +1692,35 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) - Fix possible infinite loop if SSL_read or - SSL_write fails without setting errno (Tom) + Fix possible infinite loop if SSL_read or + SSL_write fails without setting errno (Tom) This is reportedly possible with some Windows versions of - openssl. + openssl. 
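As a hedged illustration of the boolean-variable constraint-exclusion item above, the following sketch (all table and column names hypothetical) sets up a child table whose CHECK constraint contradicts the query's predicate:

    SET constraint_exclusion = on;
    CREATE TABLE events (id int, done boolean);
    CREATE TABLE events_done    (CHECK (done = true))  INHERITS (events);
    CREATE TABLE events_pending (CHECK (done = false)) INHERITS (events);
    -- Per the fix above, the planner can prove that CHECK (done = true)
    -- contradicts this predicate and skip events_done entirely:
    EXPLAIN SELECT * FROM events WHERE done = false;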
- Fix psql's numericlocale option to not + Fix psql's numericlocale option to not format strings it shouldn't in latex and troff output formats (Heikki) - Make psql return the correct exit status (3) when - ON_ERROR_STOP and --single-transaction are - both specified and an error occurs during the implied COMMIT + Make psql return the correct exit status (3) when + ON_ERROR_STOP and --single-transaction are + both specified and an error occurs during the implied COMMIT (Bruce) @@ -1741,7 +1741,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -1753,28 +1753,28 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Make building of contrib/xml2 more robust on Windows + Make building of contrib/xml2 more robust on Windows (Andrew) @@ -1785,14 +1785,14 @@ - One known symptom of this bug is that rows in pg_listener + One known symptom of this bug is that rows in pg_listener could be dropped under heavy load. - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. @@ -1864,14 +1864,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. @@ -1890,7 +1890,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -1948,7 +1948,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -1958,13 +1958,13 @@ Fix processing of ownership dependencies during CREATE OR - REPLACE FUNCTION (Tom) + REPLACE FUNCTION (Tom) - Fix bug with calling plperl from plperlu or vice + Fix bug with calling plperl from plperlu or vice versa (Tom) @@ -1984,7 +1984,7 @@ Ensure that Perl arrays are properly converted to - PostgreSQL arrays when returned by a set-returning + PostgreSQL arrays when returned by a set-returning PL/Perl function (Andrew Dunstan, Abhijit Menon-Sen) @@ -2001,20 +2001,20 @@ - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. 
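The inet/cidr comparison fix above concerns ordinary cross-type comparisons, which can be written with plain literals and no setup, for example:

    -- Comparisons of this kind go through the code path covered by the
    -- fix; both inet and cidr operands are legal here:
    SELECT '192.168.1.5'::inet < '192.168.1.0/24'::cidr;
    SELECT '10.0.0.0/8'::cidr = '10.0.0.0/8'::inet;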
- Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -2027,14 +2027,14 @@ - This includes adding IDT and SGT to the default + This includes adding IDT and SGT to the default timezone abbreviation set. - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -2065,8 +2065,8 @@ A dump/restore is not required for those running 8.2.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 8.2.14. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 8.2.14. Also, if you are upgrading from a version earlier than 8.2.11, see . @@ -2080,7 +2080,7 @@ - Force WAL segment switch during pg_start_backup() + Force WAL segment switch during pg_start_backup() (Heikki) @@ -2091,26 +2091,26 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) - Make LOAD of an already-loaded loadable module + Make LOAD of an already-loaded loadable module into a no-op (Tom) - Formerly, LOAD would attempt to unload and re-load the + Formerly, LOAD would attempt to unload and re-load the module, but this is unsafe and not all that useful. @@ -2145,32 +2145,32 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -2187,7 +2187,7 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. 
Japan (Itagaki Takahiro) @@ -2195,7 +2195,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -2228,14 +2228,14 @@ - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Make contrib/hstore throw an error when a key or + Make contrib/hstore throw an error when a key or value is too long to fit in its data structure, rather than silently truncating it (Andrew Gierth) @@ -2243,15 +2243,15 @@ - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -2264,7 +2264,7 @@ - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Jordan, Pakistan, Argentina/San_Luis, Cuba, Jordan (historical correction only), Mauritius, Morocco, Palestine, Syria, Tunisia. @@ -2315,7 +2315,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -2326,7 +2326,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -2339,40 +2339,40 @@ - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) - Fix possible failure in contrib/tsearch2 when C locale is + Fix possible failure in contrib/tsearch2 when C locale is used with a multi-byte encoding (Teodor) - Crashes were possible on platforms where wchar_t is narrower - than int; Windows in particular. + Crashes were possible on platforms where wchar_t is narrower + than int; Windows in particular. - Fix extreme inefficiency in contrib/tsearch2 parser's - handling of an email-like string containing multiple @ + Fix extreme inefficiency in contrib/tsearch2 parser's + handling of an email-like string containing multiple @ characters (Heikki) - Fix decompilation of CASE WHEN with an implicit coercion + Fix decompilation of CASE WHEN with an implicit coercion (Tom) This mistake could lead to Assert failures in an Assert-enabled build, - or an unexpected CASE WHEN clause error message in other + or an unexpected CASE WHEN clause error message in other cases, when trying to examine or dump a view. @@ -2383,24 +2383,24 @@ - If CLUSTER or a rewriting variant of ALTER TABLE + If CLUSTER or a rewriting variant of ALTER TABLE were executed by someone other than the table owner, the - pg_type entry for the table's TOAST table would end up + pg_type entry for the table's TOAST table would end up marked as owned by that someone. 
This caused no immediate problems, since the permissions on the TOAST rowtype aren't examined by any ordinary database operation. However, it could lead to unexpected failures if one later tried to drop the role that issued the command - (in 8.1 or 8.2), or owner of data type appears to be invalid - warnings from pg_dump after having done so (in 8.3). + (in 8.1 or 8.2), or owner of data type appears to be invalid + warnings from pg_dump after having done so (in 8.3). - Fix PL/pgSQL to not treat INTO after INSERT as + Fix PL/pgSQL to not treat INTO after INSERT as an INTO-variables clause anywhere in the string, not only at the start; - in particular, don't fail for INSERT INTO within - CREATE RULE (Tom) + in particular, don't fail for INSERT INTO within + CREATE RULE (Tom) @@ -2418,21 +2418,21 @@ - Retry failed calls to CallNamedPipe() on Windows + Retry failed calls to CallNamedPipe() on Windows (Steve Marshall, Magnus) It appears that this function can sometimes fail transiently; we previously treated any failure as a hard error, which could - confuse LISTEN/NOTIFY as well as other + confuse LISTEN/NOTIFY as well as other operations. - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -2474,13 +2474,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -2497,7 +2497,7 @@ Fix possible Assert failure if a statement executed in PL/pgSQL is rewritten into another kind of statement, for example if an - INSERT is rewritten into an UPDATE (Heikki) + INSERT is rewritten into an UPDATE (Heikki) @@ -2507,7 +2507,7 @@ - This primarily affects domains that are declared with CHECK + This primarily affects domains that are declared with CHECK constraints involving user-defined stable or immutable functions. Such functions typically fail if no snapshot has been set. @@ -2522,14 +2522,14 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) - Fix a problem that made UPDATE RETURNING tableoid + Fix a problem that made UPDATE RETURNING tableoid return zero instead of the correct OID (Tom) @@ -2542,13 +2542,13 @@ This could result in bad plans for queries like - ... from a left join b on a.a1 = b.b1 where a.a1 = 42 ... + ... from a left join b on a.a1 = b.b1 where a.a1 = 42 ... 
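Filled out as a complete (hypothetical) statement, the fragment above reads:

    EXPLAIN
    SELECT *
    FROM a
    LEFT JOIN b ON a.a1 = b.b1
    WHERE a.a1 = 42;
    -- The WHERE clause pins a.a1 to a constant while the outer join
    -- matches b.b1 against it; this is the plan shape the note refers to.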
- Improve optimizer's handling of long IN lists (Tom) + Improve optimizer's handling of long IN lists (Tom) @@ -2581,37 +2581,37 @@ - Fix contrib/dblink's - dblink_get_result(text,bool) function (Joe) + Fix contrib/dblink's + dblink_get_result(text,bool) function (Joe) - Fix possible garbage output from contrib/sslinfo functions + Fix possible garbage output from contrib/sslinfo functions (Tom) - Fix configure script to properly report failure when + Fix configure script to properly report failure when unable to obtain linkage information for PL/Perl (Andrew) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) - Update time zone data files to tzdata release 2009a (for + Update time zone data files to tzdata release 2009a (for Kathmandu and historical DST corrections in Switzerland, Cuba) @@ -2642,7 +2642,7 @@ A dump/restore is not required for those running 8.2.X. However, if you are upgrading from a version earlier than 8.2.7, see . Also, if you were running a previous - 8.2.X release, it is recommended to REINDEX all GiST + 8.2.X release, it is recommended to REINDEX all GiST indexes after the upgrade. @@ -2656,13 +2656,13 @@ Fix GiST index corruption due to marking the wrong index entry - dead after a deletion (Teodor) + dead after a deletion (Teodor) This would result in index searches failing to find rows they should have found. Corrupted indexes can be fixed with - REINDEX. + REINDEX. @@ -2674,7 +2674,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -2689,8 +2689,8 @@ - Improve optimization of expression IN - (expression-list) queries (Tom, per an idea from Robert + Improve optimization of expression IN + (expression-list) queries (Tom, per an idea from Robert Haas) @@ -2703,13 +2703,13 @@ - Fix mis-expansion of rule queries when a sub-SELECT appears - in a function call in FROM, a multi-row VALUES - list, or a RETURNING list (Tom) + Fix mis-expansion of rule queries when a sub-SELECT appears + in a function call in FROM, a multi-row VALUES + list, or a RETURNING list (Tom) - The usual symptom of this problem is an unrecognized node type + The usual symptom of this problem is an unrecognized node type error. 
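One of the three query shapes listed in the mis-expansion item above, a sub-SELECT inside a function call in FROM, can be sketched as follows; the view and table names are hypothetical, and t is assumed to have an integer column id:

    CREATE VIEW recent AS
      SELECT g.i
      FROM generate_series(1, (SELECT max(id) FROM t)) AS g(i);
    -- Views are implemented with rules, so selecting from the view
    -- exercises the rule-expansion path that could previously fail with
    -- the "unrecognized node type" error mentioned above:
    SELECT * FROM recent;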
@@ -2729,9 +2729,9 @@ - Prevent possible collision of relfilenode numbers + Prevent possible collision of relfilenode numbers when moving a table to another tablespace with ALTER SET - TABLESPACE (Heikki) + TABLESPACE (Heikki) @@ -2750,14 +2750,14 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -2771,31 +2771,31 @@ - Fix ecpg's parsing of CREATE ROLE (Michael) + Fix ecpg's parsing of CREATE ROLE (Michael) - Fix recent breakage of pg_ctl restart (Tom) + Fix recent breakage of pg_ctl restart (Tom) - Ensure pg_control is opened in binary mode + Ensure pg_control is opened in binary mode (Itagaki Takahiro) - pg_controldata and pg_resetxlog + pg_controldata and pg_resetxlog did this incorrectly, and so could fail on Windows. - Update time zone data files to tzdata release 2008i (for + Update time zone data files to tzdata release 2008i (for DST law changes in Argentina, Brazil, Mauritius, Syria) @@ -2847,12 +2847,12 @@ - Fix potential miscalculation of datfrozenxid (Alvaro) + Fix potential miscalculation of datfrozenxid (Alvaro) This error may explain some recent reports of failure to remove old - pg_clog data. + pg_clog data. @@ -2864,7 +2864,7 @@ This responds to reports that the counters could overflow in sufficiently long transactions, leading to unexpected lock is - already held errors. + already held errors. @@ -2877,7 +2877,7 @@ Fix missed permissions checks when a view contains a simple - UNION ALL construct (Heikki) + UNION ALL construct (Heikki) @@ -2889,12 +2889,12 @@ Add checks in executor startup to ensure that the tuples produced by an - INSERT or UPDATE will match the target table's + INSERT or UPDATE will match the target table's current rowtype (Tom) - ALTER COLUMN TYPE, followed by re-use of a previously + ALTER COLUMN TYPE, followed by re-use of a previously cached plan, could produce this type of situation. The check protects against data corruption and/or crashes that could ensue. @@ -2902,29 +2902,29 @@ - Fix possible repeated drops during DROP OWNED (Tom) + Fix possible repeated drops during DROP OWNED (Tom) This would typically result in strange errors such as cache - lookup failed for relation NNN. + lookup failed for relation NNN. - Fix AT TIME ZONE to first try to interpret its timezone + Fix AT TIME ZONE to first try to interpret its timezone argument as a timezone abbreviation, and only try it as a full timezone name if that fails, rather than the other way around as formerly (Tom) The timestamp input functions have always resolved ambiguous zone names - in this order. Making AT TIME ZONE do so as well improves + in this order. Making AT TIME ZONE do so as well improves consistency, and fixes a compatibility bug introduced in 8.1: in ambiguous cases we now behave the same as 8.0 and before did, - since in the older versions AT TIME ZONE accepted - only abbreviations. + since in the older versions AT TIME ZONE accepted + only abbreviations. 
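The resolution order described above is visible with a zone string that exists as an abbreviation:

    -- 'EST' is now tried first as an abbreviation (UTC-5, no DST),
    -- matching how the timestamp input functions have always resolved it:
    SELECT TIMESTAMP '2008-08-17 12:00' AT TIME ZONE 'EST';
    -- A full zone name is still accepted when no abbreviation matches:
    SELECT TIMESTAMP '2008-08-17 12:00' AT TIME ZONE 'America/New_York';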
@@ -2951,14 +2951,14 @@ Allow spaces in the suffix part of an LDAP URL in - pg_hba.conf (Tom) + pg_hba.conf (Tom) Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) @@ -2976,21 +2976,21 @@ - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. - Fix PL/pgSQL to not fail when a FOR loop's target variable + Fix PL/pgSQL to not fail when a FOR loop's target variable is a record containing composite-type fields (Tom) @@ -3005,28 +3005,28 @@ On Windows, work around a Microsoft bug by preventing - libpq from trying to send more than 64kB per system call + libpq from trying to send more than 64kB per system call (Magnus) - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) - Fix pg_ctl to properly preserve postmaster - command-line arguments across a restart (Bruce) + Fix pg_ctl to properly preserve postmaster + command-line arguments across a restart (Bruce) - Update time zone data files to tzdata release 2008f (for + Update time zone data files to tzdata release 2008f (for DST law changes in Argentina, Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and Paraguay) @@ -3069,18 +3069,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -3088,13 +3088,13 @@ - Make ALTER AGGREGATE ... OWNER TO update - pg_shdepend (Tom) + Make ALTER AGGREGATE ... OWNER TO update + pg_shdepend (Tom) This oversight could lead to problems if the aggregate was later - involved in a DROP OWNED or REASSIGN OWNED + involved in a DROP OWNED or REASSIGN OWNED operation. @@ -3144,7 +3144,7 @@ - Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new + Fix ALTER TABLE ADD COLUMN ... 
PRIMARY KEY so that the new column is correctly checked to see if it's been initialized to all non-nulls (Brendan Jurd) @@ -3156,16 +3156,16 @@ - Fix possible CREATE TABLE failure when inheriting the - same constraint from multiple parent relations that + Fix possible CREATE TABLE failure when inheriting the + same constraint from multiple parent relations that inherited that constraint from a common ancestor (Tom) - Fix pg_get_ruledef() to show the alias, if any, attached - to the target table of an UPDATE or DELETE + Fix pg_get_ruledef() to show the alias, if any, attached + to the target table of an UPDATE or DELETE (Tom) @@ -3200,14 +3200,14 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) - Fix several datatype input functions, notably array_in(), + Fix several datatype input functions, notably array_in(), that were allowing unused bytes in their results to contain uninitialized, unpredictable values (Tom) @@ -3215,7 +3215,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -3223,24 +3223,24 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). - This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). + This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). - Update time zone data files to tzdata release 2008c (for + Update time zone data files to tzdata release 2008c (for DST law changes in Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, and Argentina/San_Luis) @@ -3248,47 +3248,47 @@ - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix broken GiST comparison function for contrib/tsearch2's - tsquery type (Teodor) + Fix broken GiST comparison function for contrib/tsearch2's + tsquery type (Teodor) - Fix possible crashes in contrib/cube functions (Tom) + Fix possible crashes in contrib/cube functions (Tom) - Fix core dump in contrib/xml2's - xpath_table() function when the input query returns a + Fix core dump in contrib/xml2's + xpath_table() function when the input query returns a NULL value (Tom) - Fix contrib/xml2's makefile to not override - CFLAGS (Tom) + Fix contrib/xml2's makefile to not override + CFLAGS (Tom) - Fix DatumGetBool macro to not fail with gcc + Fix DatumGetBool macro to not fail with gcc 4.3 (Tom) - This problem affects old style (V0) C functions that + This problem affects old style (V0) C functions that return boolean. The fix is already in 8.3, but the need to back-patch it was not realized at the time. @@ -3318,7 +3318,7 @@ A dump/restore is not required for those running 8.2.X. - However, you might need to REINDEX indexes on textual + However, you might need to REINDEX indexes on textual columns after updating, if you are affected by the Windows locale issue described below. 
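Returning to the regular-expression substring() corner case quoted above, the example is directly runnable:

    SELECT substring('foo' from 'foo(bar)?');    -- NULL after this fix
    SELECT substring('foobar' from 'foo(bar)?'); -- 'bar', as before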
@@ -3342,34 +3342,34 @@ over two years ago, but Windows with UTF-8 uses a separate code path that was not updated. If you are using a locale that considers some non-identical strings as equal, you may need to - REINDEX to fix existing indexes on textual columns. + REINDEX to fix existing indexes on textual columns. - Repair potential deadlock between concurrent VACUUM FULL + Repair potential deadlock between concurrent VACUUM FULL operations on different system catalogs (Tom) - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -3378,14 +3378,14 @@ - Disallow LISTEN and UNLISTEN within a + Disallow LISTEN and UNLISTEN within a prepared transaction (Tom) This was formerly allowed but trying to do it had various unpleasant consequences, notably that the originating backend could not exit - as long as an UNLISTEN remained uncommitted. + as long as an UNLISTEN remained uncommitted. @@ -3426,14 +3426,14 @@ - Fix unrecognized node type error in some variants of - ALTER OWNER (Tom) + Fix unrecognized node type error in some variants of + ALTER OWNER (Tom) - Ensure pg_stat_activity.waiting flag + Ensure pg_stat_activity.waiting flag is cleared when a lock wait is aborted (Tom) @@ -3451,20 +3451,20 @@ - Update time zone data files to tzdata release 2008a + Update time zone data files to tzdata release 2008a (in particular, recent Chile changes); adjust timezone abbreviation - VET (Venezuela) to mean UTC-4:30, not UTC-4:00 (Tom) + VET (Venezuela) to mean UTC-4:30, not UTC-4:00 (Tom) - Fix pg_ctl to correctly extract the postmaster's port + Fix pg_ctl to correctly extract the postmaster's port number from command-line options (Itagaki Takahiro, Tom) - Previously, pg_ctl start -w could try to contact the + Previously, pg_ctl start -w could try to contact the postmaster on the wrong port, leading to bogus reports of startup failure. @@ -3472,31 +3472,31 @@ - Use - This is known to be necessary when building PostgreSQL - with gcc 4.3 or later. + This is known to be necessary when building PostgreSQL + with gcc 4.3 or later. - Correctly enforce statement_timeout values longer - than INT_MAX microseconds (about 35 minutes) (Tom) + Correctly enforce statement_timeout values longer + than INT_MAX microseconds (about 35 minutes) (Tom) - This bug affects only builds with . 
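For reference, the statement_timeout fix above covers settings whose value exceeds INT_MAX microseconds, roughly 35 minutes, for example:

    -- 40 minutes expressed in milliseconds; beyond INT_MAX microseconds,
    -- so it depends on the enforcement fix above:
    SET statement_timeout = 2400000;
    SHOW statement_timeout;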
- Fix unexpected PARAM_SUBLINK ID planner error when + Fix unexpected PARAM_SUBLINK ID planner error when constant-folding simplifies a sub-select (Tom) @@ -3504,7 +3504,7 @@ Fix logical errors in constraint-exclusion handling of IS - NULL and NOT expressions (Tom) + NULL and NOT expressions (Tom) @@ -3515,7 +3515,7 @@ - Fix another cause of failed to build any N-way joins + Fix another cause of failed to build any N-way joins planner errors (Tom) @@ -3539,8 +3539,8 @@ - Fix display of constant expressions in ORDER BY - and GROUP BY (Tom) + Fix display of constant expressions in ORDER BY + and GROUP BY (Tom) @@ -3552,7 +3552,7 @@ - Fix libpq to handle NOTICE messages correctly + Fix libpq to handle NOTICE messages correctly during COPY OUT (Tom) @@ -3600,7 +3600,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -3611,18 +3611,18 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) @@ -3642,13 +3642,13 @@ - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 8.2.5 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) @@ -3662,13 +3662,13 @@ Fix GIN index build to work properly when - maintenance_work_mem is 4GB or more (Tom) + maintenance_work_mem is 4GB or more (Tom) - Update time zone data files to tzdata release 2007k + Update time zone data files to tzdata release 2007k (in particular, recent Argentina changes) (Tom) @@ -3690,22 +3690,22 @@ Fix planner failure in some cases of WHERE false AND var IN - (SELECT ...) (Tom) + (SELECT ...) (Tom) - Make CREATE TABLE ... SERIAL and - ALTER SEQUENCE ... OWNED BY not change the - currval() state of the sequence (Tom) + Make CREATE TABLE ... SERIAL and + ALTER SEQUENCE ... OWNED BY not change the + currval() state of the sequence (Tom) Preserve the tablespace and storage parameters of indexes that are - rebuilt by ALTER TABLE ... ALTER COLUMN TYPE (Tom) + rebuilt by ALTER TABLE ... 
ALTER COLUMN TYPE (Tom) @@ -3724,28 +3724,28 @@ - Make VACUUM not use all of maintenance_work_mem + Make VACUUM not use all of maintenance_work_mem when the table is too small for it to be useful (Alvaro) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) - Make corr() return the correct result for negative + Make corr() return the correct result for negative correlation values (Neil) - Fix overflow in extract(epoch from interval) for intervals + Fix overflow in extract(epoch from interval) for intervals exceeding 68 years (Tom) @@ -3759,13 +3759,13 @@ - Fix PL/Perl to cope when platform's Perl defines type bool - as int rather than char (Tom) + Fix PL/Perl to cope when platform's Perl defines type bool + as int rather than char (Tom) While this could theoretically happen anywhere, no standard build of - Perl did things this way ... until macOS 10.5. + Perl did things this way ... until macOS 10.5. @@ -3784,73 +3784,73 @@ - Fix pg_dump to correctly handle inheritance child tables + Fix pg_dump to correctly handle inheritance child tables that have default expressions different from their parent's (Tom) - Fix libpq crash when PGPASSFILE refers + Fix libpq crash when PGPASSFILE refers to a file that is not a plain file (Martin Pitt) - ecpg parser fixes (Michael) + ecpg parser fixes (Michael) - Make contrib/pgcrypto defend against - OpenSSL libraries that fail on keys longer than 128 + Make contrib/pgcrypto defend against + OpenSSL libraries that fail on keys longer than 128 bits; which is the case at least on some Solaris versions (Marko Kreen) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Fix tsvector and tsquery output routines to + Fix tsvector and tsquery output routines to escape backslashes correctly (Teodor, Bruce) - Fix crash of to_tsvector() on huge input strings (Teodor) + Fix crash of to_tsvector() on huge input strings (Teodor) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. - Update gettimeofday configuration check so that - PostgreSQL can be built on newer versions of - MinGW (Magnus) + Update gettimeofday configuration check so that + PostgreSQL can be built on newer versions of + MinGW (Magnus) @@ -3890,48 +3890,48 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Fix ALTER DOMAIN ADD CONSTRAINT for cases involving + Fix ALTER DOMAIN ADD CONSTRAINT for cases involving domains over domains (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... 
DEFAULT NULL work properly (Tom) Fix some planner problems with outer joins, notably poor - size estimation for t1 LEFT JOIN t2 WHERE t2.col IS NULL + size estimation for t1 LEFT JOIN t2 WHERE t2.col IS NULL (Tom) - Allow the interval data type to accept input consisting only of + Allow the interval data type to accept input consisting only of milliseconds or microseconds (Neil) - Allow timezone name to appear before the year in timestamp input (Tom) + Allow timezone name to appear before the year in timestamp input (Tom) - Fixes for GIN indexes used by /contrib/tsearch2 (Teodor) + Fixes for GIN indexes used by /contrib/tsearch2 (Teodor) @@ -3943,7 +3943,7 @@ - Fix excessive logging of SSL error messages (Tom) + Fix excessive logging of SSL error messages (Tom) @@ -3956,7 +3956,7 @@ - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) @@ -3969,13 +3969,13 @@ - Fix stddev_pop(numeric) and var_pop(numeric) (Tom) + Fix stddev_pop(numeric) and var_pop(numeric) (Tom) - Prevent REINDEX and CLUSTER from failing + Prevent REINDEX and CLUSTER from failing due to attempting to process temporary tables of other sessions (Alvaro) @@ -3994,39 +3994,39 @@ - Make pg_ctl -w work properly in Windows service mode (Dave Page) + Make pg_ctl -w work properly in Windows service mode (Dave Page) - Fix memory allocation bug when using MIT Kerberos on Windows (Magnus) + Fix memory allocation bug when using MIT Kerberos on Windows (Magnus) - Suppress timezone name (%Z) in log timestamps on Windows + Suppress timezone name (%Z) in log timestamps on Windows because of possible encoding mismatches (Tom) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) - Restrict /contrib/pgstattuple functions to superusers, for security reasons (Tom) + Restrict /contrib/pgstattuple functions to superusers, for security reasons (Tom) - Do not let /contrib/intarray try to make its GIN opclass + Do not let /contrib/intarray try to make its GIN opclass the default (this caused problems at dump/restore) (Tom) @@ -4068,56 +4068,56 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. + See CREATE FUNCTION for more information. 
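As the item above explains, the point of explicit temporary-schema placement is that security-definer code can pin down a trustworthy path. A minimal sketch, with admin standing in for any trusted schema:

    -- Listing pg_temp explicitly, and last, means temporary objects can
    -- never capture an unqualified function or operator reference:
    SET search_path = admin, pg_temp;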
- Fix shared_preload_libraries for Windows + Fix shared_preload_libraries for Windows by forcing reload in each backend (Korry Douglas) - Fix to_char() so it properly upper/lower cases localized day or month + Fix to_char() so it properly upper/lower cases localized day or month names (Pavel Stehule) - /contrib/tsearch2 crash fixes (Teodor) + /contrib/tsearch2 crash fixes (Teodor) - Require COMMIT PREPARED to be executed in the same + Require COMMIT PREPARED to be executed in the same database as the transaction was prepared in (Heikki) - Allow pg_dump to do binary backups larger than two gigabytes + Allow pg_dump to do binary backups larger than two gigabytes on Windows (Magnus) - New traditional (Taiwan) Chinese FAQ (Zhou Daojing) + New traditional (Taiwan) Chinese FAQ (Zhou Daojing) @@ -4129,8 +4129,8 @@ - Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -4142,8 +4142,8 @@ - Fix pg_dump so it can dump a serial column's sequence - using when not also dumping the owning table (Tom) @@ -4158,7 +4158,7 @@ Fix possible wrong answers or crash when a PL/pgSQL function tries - to RETURN from within an EXCEPTION block + to RETURN from within an EXCEPTION block (Tom) @@ -4286,8 +4286,8 @@ - Properly handle to_char('CC') for years ending in - 00 (Tom) + Properly handle to_char('CC') for years ending in + 00 (Tom) @@ -4297,41 +4297,41 @@ - /contrib/tsearch2 localization improvements (Tatsuo, Teodor) + /contrib/tsearch2 localization improvements (Tatsuo, Teodor) Fix incorrect permission check in - information_schema.key_column_usage view (Tom) + information_schema.key_column_usage view (Tom) - The symptom is relation with OID nnnnn does not exist errors. - To get this fix without using initdb, use CREATE OR - REPLACE VIEW to install the corrected definition found in - share/information_schema.sql. Note you will need to do + The symptom is relation with OID nnnnn does not exist errors. + To get this fix without using initdb, use CREATE OR + REPLACE VIEW to install the corrected definition found in + share/information_schema.sql. Note you will need to do this in each database. - Improve VACUUM performance for databases with many tables (Tom) + Improve VACUUM performance for databases with many tables (Tom) - Fix for rare Assert() crash triggered by UNION (Tom) + Fix for rare Assert() crash triggered by UNION (Tom) Fix potentially incorrect results from index searches using - ROW inequality conditions (Tom) + ROW inequality conditions (Tom) @@ -4344,7 +4344,7 @@ - Fix bogus permission denied failures occurring on Windows + Fix bogus permission denied failures occurring on Windows due to attempts to fsync already-deleted files (Magnus, Tom) @@ -4414,21 +4414,21 @@ - Fix crash with SELECT ... LIMIT ALL (also - LIMIT NULL) (Tom) + Fix crash with SELECT ... 
LIMIT ALL (also + LIMIT NULL) (Tom) - Several /contrib/tsearch2 fixes (Teodor) + Several /contrib/tsearch2 fixes (Teodor) On Windows, make log messages coming from the operating system use - ASCII encoding (Hiroshi Saito) + ASCII encoding (Hiroshi Saito) @@ -4439,8 +4439,8 @@ - Fix Windows linking of pg_dump using - win32.mak + Fix Windows linking of pg_dump using + win32.mak (Hiroshi Saito) @@ -4469,13 +4469,13 @@ - Improve build speed of PDF documentation (Peter) + Improve build speed of PDF documentation (Peter) - Re-add JST (Japan) timezone abbreviation (Tom) + Re-add JST (Japan) timezone abbreviation (Tom) @@ -4487,8 +4487,8 @@ - Have psql print multi-byte combining characters as - before, rather than output as \u (Tom) + Have psql print multi-byte combining characters as + before, rather than output as \u (Tom) @@ -4498,19 +4498,19 @@ - This improves psql \d performance also. + This improves psql \d performance also. - Make pg_dumpall assume that databases have public - CONNECT privilege, when dumping from a pre-8.2 server (Tom) + Make pg_dumpall assume that databases have public + CONNECT privilege, when dumping from a pre-8.2 server (Tom) This preserves the previous behavior that anyone can connect to a - database if allowed by pg_hba.conf. + database if allowed by pg_hba.conf. @@ -4541,14 +4541,14 @@ Query language enhancements including INSERT/UPDATE/DELETE RETURNING, multirow VALUES lists, and optional target-table alias in - UPDATE/DELETE + UPDATE/DELETE Index creation without blocking concurrent - INSERT/UPDATE/DELETE + INSERT/UPDATE/DELETE operations @@ -4659,13 +4659,13 @@ Set escape_string_warning - to on by default (Bruce) + linkend="guc-escape-string-warning">escape_string_warning + to on by default (Bruce) This issues a warning if backslash escapes are used in - non-escape (non-E'') + non-escape (non-E'') strings. @@ -4673,8 +4673,8 @@ Change the row - constructor syntax (ROW(...)) so that - list elements foo.* will be expanded to a list + constructor syntax (ROW(...)) so that + list elements foo.* will be expanded to a list of their member fields, rather than creating a nested row type field as formerly (Tom) @@ -4682,15 +4682,15 @@ The new behavior is substantially more useful since it allows, for example, triggers to check for data changes - with IF row(new.*) IS DISTINCT FROM row(old.*). - The old behavior is still available by omitting .*. + with IF row(new.*) IS DISTINCT FROM row(old.*). + The old behavior is still available by omitting .*. Make row comparisons - follow SQL standard semantics and allow them + follow SQL standard semantics and allow them to be used in index scans (Tom) @@ -4704,13 +4704,13 @@ - Make row IS NOT NULL - tests follow SQL standard semantics (Tom) + Make row IS NOT NULL + tests follow SQL standard semantics (Tom) The former behavior conformed to the standard for simple cases - with IS NULL, but IS NOT NULL would return + with IS NULL, but IS NOT NULL would return true if any row field was non-null, whereas the standard says it should return true only when all fields are non-null. @@ -4719,11 +4719,11 @@ Make SET - CONSTRAINT affect only one constraint (Kris Jurka) + CONSTRAINT affect only one constraint (Kris Jurka) - In previous releases, SET CONSTRAINT modified + In previous releases, SET CONSTRAINT modified all constraints with a matching name. In this release, the schema search path is used to modify only the first matching constraint. 
A schema specification is also @@ -4733,14 +4733,14 @@ - Remove RULE permission for tables, for security reasons + Remove RULE permission for tables, for security reasons (Tom) As of this release, only a table's owner can create or modify rules for the table. For backwards compatibility, - GRANT/REVOKE RULE is still accepted, + GRANT/REVOKE RULE is still accepted, but it does nothing. @@ -4769,14 +4769,14 @@ - Make command-line options of postmaster - and postgres + Make command-line options of postmaster + and postgres identical (Peter) This allows the postmaster to pass arguments to each backend - without using -o. Note that some options are now + without using -o. Note that some options are now only available as long-form options, because there were conflicting single-letter options. @@ -4784,13 +4784,13 @@ - Deprecate use of postmaster symbolic link (Peter) + Deprecate use of postmaster symbolic link (Peter) - postmaster and postgres + postmaster and postgres commands now act identically, with the behavior determined - by command-line options. The postmaster symbolic link is + by command-line options. The postmaster symbolic link is kept for compatibility, but is not really needed. @@ -4798,12 +4798,12 @@ Change log_duration + linkend="guc-log-duration">log_duration to output even if the query is not output (Tom) - In prior releases, log_duration only printed if + In prior releases, log_duration only printed if the query appeared earlier in the log. @@ -4811,15 +4811,15 @@ Make to_char(time) + linkend="functions-formatting">to_char(time) and to_char(interval) - treat HH and HH12 as 12-hour + linkend="functions-formatting">to_char(interval) + treat HH and HH12 as 12-hour intervals - Most applications should use HH24 unless they + Most applications should use HH24 unless they want a 12-hour display. @@ -4827,19 +4827,19 @@ Zero unmasked bits in conversion from INET to CIDR (Tom) + linkend="datatype-inet">INET to CIDR (Tom) This ensures that the converted value is actually valid for - CIDR. + CIDR. - Remove australian_timezones configuration variable + Remove australian_timezones configuration variable (Joachim Wieland) @@ -4857,35 +4857,35 @@ This might eliminate the need to set unrealistically small values of random_page_cost. - If you have been using a very small random_page_cost, + linkend="guc-random-page-cost">random_page_cost. + If you have been using a very small random_page_cost, please recheck your test cases. - Change behavior of pg_dump -n and - -t options. (Greg Sabino Mullane) + Change behavior of pg_dump -n and + -t options. (Greg Sabino Mullane) - See the pg_dump manual page for details. + See the pg_dump manual page for details. - Change libpq - PQdsplen() to return a useful value (Martijn + Change libpq + PQdsplen() to return a useful value (Martijn van Oosterhout) - Declare libpq - PQgetssl() as returning void *, - rather than SSL * (Martijn van Oosterhout) + Declare libpq + PQgetssl() as returning void *, + rather than SSL * (Martijn van Oosterhout) @@ -4897,7 +4897,7 @@ C-language loadable modules must now include a - PG_MODULE_MAGIC + PG_MODULE_MAGIC macro call for version compatibility checking (Martijn van Oosterhout) @@ -4923,12 +4923,12 @@ - In contrib/xml2/, rename xml_valid() to - xml_is_well_formed() (Tom) + In contrib/xml2/, rename xml_valid() to + xml_is_well_formed() (Tom) - xml_valid() will remain for backward compatibility, + xml_valid() will remain for backward compatibility, but its behavior will change to do schema checking in a future release. 
@@ -4936,7 +4936,7 @@ - Remove contrib/ora2pg/, now at contrib/ora2pg/, now at @@ -4944,21 +4944,21 @@ Remove contrib modules that have been migrated to PgFoundry: - adddepend, dbase, dbmirror, - fulltextindex, mac, userlock + adddepend, dbase, dbmirror, + fulltextindex, mac, userlock Remove abandoned contrib modules: - mSQL-interface, tips + mSQL-interface, tips - Remove QNX and BEOS ports (Bruce) + Remove QNX and BEOS ports (Bruce) @@ -5002,7 +5002,7 @@ Improve efficiency of IN + linkend="functions-comparisons">IN (list-of-expressions) clauses (Tom) @@ -5022,7 +5022,7 @@ - Add FILLFACTOR to FILLFACTOR to table and index creation (ITAGAKI Takahiro) @@ -5038,8 +5038,8 @@ Increase default values for shared_buffers - and max_fsm_pages + linkend="guc-shared-buffers">shared_buffers + and max_fsm_pages (Andrew) @@ -5074,8 +5074,8 @@ Improve the optimizer's selectivity estimates for LIKE, ILIKE, and + linkend="functions-like">LIKE, ILIKE, and regular expression operations (Tom) @@ -5085,7 +5085,7 @@ Improve planning of joins to inherited tables and UNION - ALL views (Tom) + ALL views (Tom) @@ -5093,18 +5093,18 @@ Allow constraint exclusion to be applied to inherited UPDATE and - DELETE queries (Tom) + linkend="ddl-inherit">inherited UPDATE and + DELETE queries (Tom) - SELECT already honored constraint exclusion. + SELECT already honored constraint exclusion. - Improve planning of constant WHERE clauses, such as + Improve planning of constant WHERE clauses, such as a condition that depends only on variables inherited from an outer query level (Tom) @@ -5113,7 +5113,7 @@ Protocol-level unnamed prepared statements are re-planned - for each set of BIND values (Tom) + for each set of BIND values (Tom) @@ -5132,13 +5132,13 @@ Avoid extra scan of tables without indexes during VACUUM (Greg Stark) + linkend="SQL-VACUUM">VACUUM (Greg Stark) - Improve multicolumn GiST + Improve multicolumn GiST indexing (Oleg, Teodor) @@ -5167,7 +5167,7 @@ This is valuable for keeping warm standby slave servers in sync with the master. Transaction log file switching now also happens automatically during pg_stop_backup(). + linkend="functions-admin">pg_stop_backup(). This ensures that all transaction log files needed for recovery can be archived immediately. @@ -5175,26 +5175,26 @@ - Add WAL informational functions (Simon) + Add WAL informational functions (Simon) Add functions for interrogating the current transaction log insertion - point and determining WAL filenames from the - hex WAL locations displayed by pg_stop_backup() + point and determining WAL filenames from the + hex WAL locations displayed by pg_stop_backup() and related functions. - Improve recovery from a crash during WAL replay (Simon) + Improve recovery from a crash during WAL replay (Simon) - The server now does periodic checkpoints during WAL - recovery, so if there is a crash, future WAL + The server now does periodic checkpoints during WAL + recovery, so if there is a crash, future WAL recovery is shortened. This also eliminates the need for warm standby servers to replay the entire log since the base backup if they crash. 
@@ -5203,7 +5203,7 @@ - Improve reliability of long-term WAL replay + Improve reliability of long-term WAL replay (Heikki, Simon, Tom) @@ -5218,7 +5218,7 @@ Add archive_timeout + linkend="guc-archive-timeout">archive_timeout to force transaction log file switches at a given interval (Simon) @@ -5229,46 +5229,46 @@ - Add native LDAP + Add native LDAP authentication (Magnus Hagander) This is particularly useful for platforms that do not - support PAM, such as Windows. + support PAM, such as Windows. Add GRANT - CONNECT ON DATABASE (Gevik Babakhani) + CONNECT ON DATABASE (Gevik Babakhani) This gives SQL-level control over database access. It works as an additional filter on top of the existing - pg_hba.conf + pg_hba.conf controls. - Add support for SSL - Certificate Revocation List (CRL) files + Add support for SSL + Certificate Revocation List (CRL) files (Libor Hohoš) - The server and libpq both recognize CRL + The server and libpq both recognize CRL files now. - GiST indexes are + GiST indexes are now clusterable (Teodor) @@ -5280,7 +5280,7 @@ pg_stat_activity + linkend="monitoring-stats-views-table">pg_stat_activity now shows autovacuum activity. @@ -5304,7 +5304,7 @@ These values now appear in the pg_stat_*_tables + linkend="monitoring-stats-views-table">pg_stat_*_tables system views. @@ -5312,44 +5312,44 @@ Improve performance of statistics monitoring, especially - stats_command_string + stats_command_string (Tom, Bruce) - This release enables stats_command_string by + This release enables stats_command_string by default, now that its overhead is minimal. This means pg_stat_activity + linkend="monitoring-stats-views-table">pg_stat_activity will now show all active queries by default. - Add a waiting column to pg_stat_activity + Add a waiting column to pg_stat_activity (Tom) - This allows pg_stat_activity to show all the - information included in the ps display. + This allows pg_stat_activity to show all the + information included in the ps display. Add configuration parameter update_process_title - to control whether the ps display is updated + linkend="guc-update-process-title">update_process_title + to control whether the ps display is updated for every command (Bruce) - On platforms where it is expensive to update the ps + On platforms where it is expensive to update the ps display, it might be worthwhile to turn this off and rely solely on - pg_stat_activity for status information. + pg_stat_activity for status information. @@ -5361,15 +5361,15 @@ For example, you can now set shared_buffers - to 32MB rather than mentally converting sizes. + linkend="guc-shared-buffers">shared_buffers + to 32MB rather than mentally converting sizes. Add support for include - directives in postgresql.conf (Joachim + directives in postgresql.conf (Joachim Wieland) @@ -5384,21 +5384,21 @@ Such logging now shows statement names, bind parameter values, and the text of the query being executed. Also, the query text is properly included in logged error messages - when enabled by log_min_error_statement. + when enabled by log_min_error_statement. Prevent max_stack_depth + linkend="guc-max-stack-depth">max_stack_depth from being set to unsafe values On platforms where we can determine the actual kernel stack depth limit (which is most), make sure that the initial default value of - max_stack_depth is safe, and reject attempts to set it + max_stack_depth is safe, and reject attempts to set it to unsafely large values. 
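The units support mentioned above applies to SET as well as to postgresql.conf; since shared_buffers takes effect only at server start, a session-level parameter shows it more conveniently:

    SET work_mem = '32MB';   -- previously had to be written as 32768 (kB)
    SHOW work_mem;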
@@ -5418,14 +5418,14 @@ - Fix failed to re-find parent key errors in - VACUUM (Tom) + Fix failed to re-find parent key errors in + VACUUM (Tom) - Clean out pg_internal.init cache files during server + Clean out pg_internal.init cache files during server restart (Simon) @@ -5438,7 +5438,7 @@ Fix race condition for truncation of a large relation across a - gigabyte boundary by VACUUM (Tom) + gigabyte boundary by VACUUM (Tom) @@ -5475,15 +5475,15 @@ - Add INSERT/UPDATE/DELETE - RETURNING (Jonah Harris, Tom) + Add INSERT/UPDATE/DELETE + RETURNING (Jonah Harris, Tom) This allows these commands to return values, such as the - computed serial key for a new row. In the UPDATE + computed serial key for a new row. In the UPDATE case, values from the updated version of the row are returned. @@ -5491,23 +5491,23 @@ Add support for multiple-row VALUES clauses, + linkend="queries-values">VALUES clauses, per SQL standard (Joe, Tom) - This allows INSERT to insert multiple rows of + This allows INSERT to insert multiple rows of constants, or queries to generate result sets using constants. For example, INSERT ... VALUES (...), (...), - ...., and SELECT * FROM (VALUES (...), (...), - ....) AS alias(f1, ...). + ...., and SELECT * FROM (VALUES (...), (...), + ....) AS alias(f1, ...). - Allow UPDATE - and DELETE + Allow UPDATE + and DELETE to use an alias for the target table (Atsushi Ogawa) @@ -5519,7 +5519,7 @@ - Allow UPDATE + Allow UPDATE to set multiple columns with a list of values (Susanne Ebrecht) @@ -5527,7 +5527,7 @@ This is basically a short-hand for assigning the columns and values in pairs. The syntax is UPDATE tab - SET (column, ...) = (val, ...). + SET (column, ...) = (val, ...). @@ -5546,12 +5546,12 @@ - Add CASCADE - option to TRUNCATE (Joachim Wieland) + Add CASCADE + option to TRUNCATE (Joachim Wieland) - This causes TRUNCATE to automatically include all tables + This causes TRUNCATE to automatically include all tables that reference the specified table(s) via foreign keys. While convenient, this is a dangerous tool — use with caution! @@ -5559,8 +5559,8 @@ - Support FOR UPDATE and FOR SHARE - in the same SELECT + Support FOR UPDATE and FOR SHARE + in the same SELECT command (Tom) @@ -5568,21 +5568,21 @@ Add IS NOT - DISTINCT FROM (Pavel Stehule) + DISTINCT FROM (Pavel Stehule) - This operator is similar to equality (=), but + This operator is similar to equality (=), but evaluates to true when both left and right operands are - NULL, and to false when just one is, rather than - yielding NULL in these cases. + NULL, and to false when just one is, rather than + yielding NULL in these cases. Improve the length output used by UNION/INTERSECT/EXCEPT + linkend="queries-union">UNION/INTERSECT/EXCEPT (Tom) @@ -5594,13 +5594,13 @@ - Allow ILIKE + Allow ILIKE to work for multi-byte encodings (Tom) - Internally, ILIKE now calls lower() - and then uses LIKE. Locale-specific regular + Internally, ILIKE now calls lower() + and then uses LIKE. Locale-specific regular expression patterns still do not work in these encodings. @@ -5608,39 +5608,39 @@ Enable standard_conforming_strings - to be turned on (Kevin Grittner) + linkend="guc-standard-conforming-strings">standard_conforming_strings + to be turned on (Kevin Grittner) This allows backslash escaping in strings to be disabled, - making PostgreSQL more - standards-compliant. The default is off for backwards - compatibility, but future releases will default this to on. + making PostgreSQL more + standards-compliant. 
The default is off for backwards + compatibility, but future releases will default this to on. - Do not flatten subqueries that contain volatile + Do not flatten subqueries that contain volatile functions in their target lists (Jaime Casanova) This prevents surprising behavior due to multiple evaluation - of a volatile function (such as random() - or nextval()). It might cause performance + of a volatile function (such as random() + or nextval()). It might cause performance degradation in the presence of functions that are unnecessarily - marked as volatile. + marked as volatile. Add system views pg_prepared_statements + linkend="view-pg-prepared-statements">pg_prepared_statements and pg_cursors + linkend="view-pg-cursors">pg_cursors to show prepared statements and open cursors (Joachim Wieland, Neil) @@ -5652,32 +5652,32 @@ Support portal parameters in EXPLAIN and EXECUTE (Tom) + linkend="SQL-EXPLAIN">EXPLAIN and EXECUTE (Tom) - This allows, for example, JDBC ? parameters to + This allows, for example, JDBC ? parameters to work in these commands. - If SQL-level PREPARE parameters + If SQL-level PREPARE parameters are unspecified, infer their types from the content of the query (Neil) - Protocol-level PREPARE already did this. + Protocol-level PREPARE already did this. - Allow LIMIT and OFFSET to exceed + Allow LIMIT and OFFSET to exceed two billion (Dhanaraj M) @@ -5692,8 +5692,8 @@ - Add TABLESPACE clause to CREATE TABLE AS + Add TABLESPACE clause to CREATE TABLE AS (Neil) @@ -5704,8 +5704,8 @@ - Add ON COMMIT clause to CREATE TABLE AS + Add ON COMMIT clause to CREATE TABLE AS (Neil) @@ -5718,13 +5718,13 @@ - Add INCLUDING CONSTRAINTS to CREATE TABLE LIKE + Add INCLUDING CONSTRAINTS to CREATE TABLE LIKE (Greg Stark) - This allows easy copying of CHECK constraints to a new + This allows easy copying of CHECK constraints to a new table. @@ -5740,8 +5740,8 @@ any of the details of the type. Making a shell type is useful because it allows cleaner declaration of the type's input/output functions, which must exist before the type can be defined for - real. The syntax is CREATE TYPE typename. + real. The syntax is CREATE TYPE typename. @@ -5760,8 +5760,8 @@ The new syntax is CREATE AGGREGATE - aggname (input_type) - (parameter_list). This more + aggname (input_type) + (parameter_list). This more naturally supports the new multi-parameter aggregate functionality. The previous syntax is still supported. @@ -5770,26 +5770,26 @@ Add ALTER ROLE PASSWORD NULL + linkend="SQL-ALTERROLE">ALTER ROLE PASSWORD NULL to remove a previously set role password (Peter) - Add DROP object IF EXISTS for many + Add DROP object IF EXISTS for many object types (Andrew) - This allows DROP operations on non-existent + This allows DROP operations on non-existent objects without generating an error. - Add DROP OWNED + Add DROP OWNED to drop all objects owned by a role (Alvaro) @@ -5797,50 +5797,50 @@ Add REASSIGN - OWNED to reassign ownership of all objects owned + OWNED to reassign ownership of all objects owned by a role (Alvaro) - This, and DROP OWNED above, facilitate dropping + This, and DROP OWNED above, facilitate dropping roles. - Add GRANT ON SEQUENCE + Add GRANT ON SEQUENCE syntax (Bruce) This was added for setting sequence-specific permissions. - GRANT ON TABLE for sequences is still supported + GRANT ON TABLE for sequences is still supported for backward compatibility. 
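For the GRANT ON SEQUENCE item above, a minimal sketch; the sequence and role names are hypothetical:

    GRANT SELECT, UPDATE ON SEQUENCE order_id_seq TO app_role;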
- Add USAGE - permission for sequences that allows only currval() - and nextval(), not setval() + Add USAGE + permission for sequences that allows only currval() + and nextval(), not setval() (Bruce) - USAGE permission allows more fine-grained - control over sequence access. Granting USAGE + USAGE permission allows more fine-grained + control over sequence access. Granting USAGE allows users to increment a sequence, but prevents them from setting the sequence to - an arbitrary value using setval(). + an arbitrary value using setval(). Add ALTER TABLE - [ NO ] INHERIT (Greg Stark) + [ NO ] INHERIT (Greg Stark) @@ -5882,7 +5882,7 @@ The new syntax is CREATE - INDEX CONCURRENTLY. The default behavior is + INDEX CONCURRENTLY. The default behavior is still to block table modification while an index is being created. @@ -5902,20 +5902,20 @@ - Allow COPY to - dump a SELECT query (Zoltan Boszormenyi, Karel + Allow COPY to + dump a SELECT query (Zoltan Boszormenyi, Karel Zak) - This allows COPY to dump arbitrary SQL - queries. The syntax is COPY (SELECT ...) TO. + This allows COPY to dump arbitrary SQL + queries. The syntax is COPY (SELECT ...) TO. - Make the COPY + Make the COPY command return a command tag that includes the number of rows copied (Volkan YAZICI) @@ -5923,29 +5923,29 @@ - Allow VACUUM + Allow VACUUM to expire rows without being affected by other concurrent - VACUUM operations (Hannu Krossing, Alvaro, Tom) + VACUUM operations (Hannu Krossing, Alvaro, Tom) - Make initdb + Make initdb detect the operating system locale and set the default - DateStyle accordingly (Peter) + DateStyle accordingly (Peter) This makes it more likely that the installed - postgresql.conf DateStyle value will + postgresql.conf DateStyle value will be as desired. - Reduce number of progress messages displayed by initdb (Tom) + Reduce number of progress messages displayed by initdb (Tom) @@ -5960,13 +5960,13 @@ Allow full timezone names in timestamp input values + linkend="datatype-datetime">timestamp input values (Joachim Wieland) For example, '2006-05-24 21:11 - America/New_York'::timestamptz. + America/New_York'::timestamptz. @@ -5978,16 +5978,16 @@ A desired set of timezone abbreviations can be chosen via the configuration parameter timezone_abbreviations. + linkend="guc-timezone-abbreviations">timezone_abbreviations. Add pg_timezone_abbrevs + linkend="view-pg-timezone-abbrevs">pg_timezone_abbrevs and pg_timezone_names + linkend="view-pg-timezone-names">pg_timezone_names views to show supported timezones (Magnus Hagander) @@ -5995,27 +5995,27 @@ Add clock_timestamp(), + linkend="functions-datetime-table">clock_timestamp(), statement_timestamp(), + linkend="functions-datetime-table">statement_timestamp(), and transaction_timestamp() + linkend="functions-datetime-table">transaction_timestamp() (Bruce) - clock_timestamp() is the current wall-clock time, - statement_timestamp() is the time the current + clock_timestamp() is the current wall-clock time, + statement_timestamp() is the time the current statement arrived at the server, and - transaction_timestamp() is an alias for - now(). + transaction_timestamp() is an alias for + now(). 
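A one-line illustration of the three timestamp functions just described; within a single transaction, only clock_timestamp() advances between calls:

    SELECT clock_timestamp(), statement_timestamp(), transaction_timestamp();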
Allow to_char() + linkend="functions-formatting">to_char() to print localized month and day names (Euler Taveira de Oliveira) @@ -6024,23 +6024,23 @@ Allow to_char(time) + linkend="functions-formatting">to_char(time) and to_char(interval) - to output AM/PM specifications + linkend="functions-formatting">to_char(interval) + to output AM/PM specifications (Bruce) Intervals and times are treated as 24-hour periods, e.g. - 25 hours is considered AM. + 25 hours is considered AM. Add new function justify_interval() + linkend="functions-datetime-table">justify_interval() to adjust interval units (Mark Dilger) @@ -6071,7 +6071,7 @@ - Allow arrays to contain NULL elements (Tom) + Allow arrays to contain NULL elements (Tom) @@ -6090,13 +6090,13 @@ New built-in operators - for array-subset comparisons (@>, - <@, &&) (Teodor, Tom) + for array-subset comparisons (@>, + <@, &&) (Teodor, Tom) These operators can be indexed for many data types using - GiST or GIN indexes. + GiST or GIN indexes. @@ -6104,15 +6104,15 @@ Add convenient arithmetic operations on - INET/CIDR values (Stephen R. van den + INET/CIDR values (Stephen R. van den Berg) - The new operators are & (and), | - (or), ~ (not), inet + int8, - inet - int8, and - inet - inet. + The new operators are & (and), | + (or), ~ (not), inet + int8, + inet - int8, and + inet - inet. @@ -6124,12 +6124,12 @@ - The new functions are var_pop(), - var_samp(), stddev_pop(), and - stddev_samp(). var_samp() and - stddev_samp() are merely renamings of the - existing aggregates variance() and - stddev(). The latter names remain available + The new functions are var_pop(), + var_samp(), stddev_pop(), and + stddev_samp(). var_samp() and + stddev_samp() are merely renamings of the + existing aggregates variance() and + stddev(). The latter names remain available for backward compatibility. @@ -6142,13 +6142,13 @@ - New functions: regr_intercept(), - regr_slope(), regr_r2(), - corr(), covar_samp(), - covar_pop(), regr_avgx(), - regr_avgy(), regr_sxy(), - regr_sxx(), regr_syy(), - regr_count(). + New functions: regr_intercept(), + regr_slope(), regr_r2(), + corr(), covar_samp(), + covar_pop(), regr_avgx(), + regr_avgy(), regr_sxy(), + regr_sxx(), regr_syy(), + regr_count(). @@ -6162,7 +6162,7 @@ Properly enforce domain CHECK constraints + linkend="ddl-constraints">CHECK constraints everywhere (Neil, Tom) @@ -6177,24 +6177,24 @@ Fix problems with dumping renamed SERIAL columns + linkend="datatype-serial">SERIAL columns (Tom) - The fix is to dump a SERIAL column by explicitly - specifying its DEFAULT and sequence elements, - and reconstructing the SERIAL column on reload + The fix is to dump a SERIAL column by explicitly + specifying its DEFAULT and sequence elements, + and reconstructing the SERIAL column on reload using a new ALTER - SEQUENCE OWNED BY command. This also allows - dropping a SERIAL column specification. + SEQUENCE OWNED BY command. This also allows + dropping a SERIAL column specification. Add a server-side sleep function pg_sleep() + linkend="functions-datetime-delay">pg_sleep() (Joachim Wieland) @@ -6202,7 +6202,7 @@ Add all comparison operators for the tid (tuple id) data + linkend="datatype-oid">tid (tuple id) data type (Mark Kirkwood, Greg Stark, Tom) @@ -6217,12 +6217,12 @@ - Add TG_table_name and TG_table_schema to + Add TG_table_name and TG_table_schema to trigger parameters (Andrew) - TG_relname is now deprecated. Comparable + TG_relname is now deprecated. Comparable changes have been made in the trigger parameters for the other PLs as well. 
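A minimal PL/pgSQL sketch of the renamed trigger variables; the function name is hypothetical:

    CREATE FUNCTION log_change() RETURNS trigger AS $$
    BEGIN
        RAISE NOTICE 'row changed in %.%', TG_table_schema, TG_table_name;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;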
@@ -6230,29 +6230,29 @@ - Allow FOR statements to return values to scalars + Allow FOR statements to return values to scalars as well as records and row types (Pavel Stehule) - Add a BY clause to the FOR loop, + Add a BY clause to the FOR loop, to control the iteration increment (Jaime Casanova) - Add STRICT to STRICT to SELECT - INTO (Matt Miller) + INTO (Matt Miller) - STRICT mode throws an exception if more or less - than one row is returned by the SELECT, for - Oracle PL/SQL compatibility. + STRICT mode throws an exception if more or less + than one row is returned by the SELECT, for + Oracle PL/SQL compatibility. @@ -6266,7 +6266,7 @@ - Add table_name and table_schema to + Add table_name and table_schema to trigger parameters (Adam Sjøgren) @@ -6279,7 +6279,7 @@ - Make $_TD trigger data a global variable (Andrew) + Make $_TD trigger data a global variable (Andrew) @@ -6312,13 +6312,13 @@ Named parameters are passed as ordinary variables, as well as in the - args[] array (Sven Suursoho) + args[] array (Sven Suursoho) - Add table_name and table_schema to + Add table_name and table_schema to trigger parameters (Andrew) @@ -6331,14 +6331,14 @@ - Return result-set as list, iterator, - or generator (Sven Suursoho) + Return result-set as list, iterator, + or generator (Sven Suursoho) - Allow functions to return void (Neil) + Allow functions to return void (Neil) @@ -6353,40 +6353,40 @@ - <link linkend="APP-PSQL"><application>psql</></link> Changes + <link linkend="APP-PSQL"><application>psql</application></link> Changes - Add new command \password for changing role + Add new command \password for changing role password with client-side password encryption (Peter) - Allow \c to connect to a new host and port + Allow \c to connect to a new host and port number (David, Volkan YAZICI) - Add tablespace display to \l+ (Philip Yarra) + Add tablespace display to \l+ (Philip Yarra) - Improve \df slash command to include the argument - names and modes (OUT or INOUT) of + Improve \df slash command to include the argument + names and modes (OUT or INOUT) of the function (David Fetter) - Support binary COPY (Andreas Pflug) + Support binary COPY (Andreas Pflug) @@ -6397,21 +6397,21 @@ - Use option -1 or --single-transaction. + Use option -1 or --single-transaction. - Support for automatically retrieving SELECT + Support for automatically retrieving SELECT results in batches using a cursor (Chris Mair) This is enabled using \set FETCH_COUNT - n. This + n. This feature allows large result sets to be retrieved in - psql without attempting to buffer the entire + psql without attempting to buffer the entire result set in memory. @@ -6451,8 +6451,8 @@ Report both the returned data and the command status tag - for INSERT/UPDATE/DELETE - RETURNING (Tom) + for INSERT/UPDATE/DELETE + RETURNING (Tom) @@ -6461,31 +6461,31 @@ - <link linkend="APP-PGDUMP"><application>pg_dump</></link> Changes + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> Changes Allow complex selection of objects to be included or excluded - by pg_dump (Greg Sabino Mullane) + by pg_dump (Greg Sabino Mullane) - pg_dump now supports multiple -n - (schema) and -t (table) options, and adds - -N and -T options to exclude objects. + pg_dump now supports multiple -n + (schema) and -t (table) options, and adds + -N and -T options to exclude objects. 
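For the RETURNING display item above, a hypothetical example; psql now shows both the returned rows and the INSERT command tag:

    INSERT INTO orders (customer) VALUES ('acme') RETURNING id;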
Also, the arguments of these switches can now be wild-card expressions rather than single object names, for example - -t 'foo*', and a schema can be part of - a -t or -T switch, for example - -t schema1.table1. + -t 'foo*', and a schema can be part of + a -t or -T switch, for example + -t schema1.table1. - Add pg_restore - --no-data-for-failed-tables option to suppress + Add pg_restore + --no-data-for-failed-tables option to suppress loading data if table creation failed (i.e., the table already exists) (Martin Pitt) @@ -6493,13 +6493,13 @@ - Add pg_restore + Add pg_restore option to run the entire session in a single transaction (Simon) - Use option -1 or --single-transaction. + Use option -1 or --single-transaction. @@ -6508,27 +6508,27 @@ - <link linkend="libpq"><application>libpq</></link> Changes + <link linkend="libpq"><application>libpq</application></link> Changes Add PQencryptPassword() + linkend="libpq-misc">PQencryptPassword() to encrypt passwords (Tom) This allows passwords to be sent pre-encrypted for commands like ALTER ROLE ... - PASSWORD. + PASSWORD. Add function PQisthreadsafe() + linkend="libpq-threading">PQisthreadsafe() (Bruce) @@ -6541,9 +6541,9 @@ Add PQdescribePrepared(), + linkend="libpq-exec-main">PQdescribePrepared(), PQdescribePortal(), + linkend="libpq-exec-main">PQdescribePortal(), and related functions to return information about previously prepared statements and open cursors (Volkan YAZICI) @@ -6551,9 +6551,9 @@ - Allow LDAP lookups + Allow LDAP lookups from pg_service.conf + linkend="libpq-pgservice">pg_service.conf (Laurenz Albe) @@ -6561,7 +6561,7 @@ Allow a hostname in ~/.pgpass + linkend="libpq-pgpass">~/.pgpass to match the default socket directory (Bruce) @@ -6577,19 +6577,19 @@ - <link linkend="ecpg"><application>ecpg</></link> Changes + <link linkend="ecpg"><application>ecpg</application></link> Changes - Allow SHOW to + Allow SHOW to put its result into a variable (Joachim Wieland) - Add COPY TO STDOUT + Add COPY TO STDOUT (Joachim Wieland) @@ -6611,28 +6611,28 @@ - <application>Windows</> Port + <application>Windows</application> Port - Allow MSVC to compile the PostgreSQL + Allow MSVC to compile the PostgreSQL server (Magnus, Hiroshi Saito) - Add MSVC support for utility commands and pg_dump (Hiroshi + Add MSVC support for utility commands and pg_dump (Hiroshi Saito) - Add support for Windows code pages 1253, - 1254, 1255, and 1257 + Add support for Windows code pages 1253, + 1254, 1255, and 1257 (Kris Jurka) @@ -6670,7 +6670,7 @@ - Add GIN (Generalized + Add GIN (Generalized Inverted iNdex) index access method (Teodor, Oleg) @@ -6682,7 +6682,7 @@ Rtree has been re-implemented using GiST. Among other + linkend="GiST">GiST. Among other differences, this means that rtree indexes now have support for crash recovery via write-ahead logging (WAL). @@ -6698,12 +6698,12 @@ Add a configure flag to allow libedit to be preferred over - GNU readline (Bruce) + GNU readline (Bruce) Use configure --with-libedit-preferred. + linkend="configure">--with-libedit-preferred. 
@@ -6722,21 +6722,21 @@ - Add support for Solaris x86_64 using the - Solaris compiler (Pierre Girard, Theo + Add support for Solaris x86_64 using the + Solaris compiler (Pierre Girard, Theo Schlossnagle, Bruce) - Add DTrace support (Robert Lor) + Add DTrace support (Robert Lor) - Add PG_VERSION_NUM for use by third-party + Add PG_VERSION_NUM for use by third-party applications wanting to test the backend version in C using > and < comparisons (Bruce) @@ -6744,37 +6744,37 @@ - Add XLOG_BLCKSZ as independent from BLCKSZ + Add XLOG_BLCKSZ as independent from BLCKSZ (Mark Wong) - Add LWLOCK_STATS define to report locking + Add LWLOCK_STATS define to report locking activity (Tom) - Emit warnings for unknown configure options + Emit warnings for unknown configure options (Martijn van Oosterhout) - Add server support for plugin libraries + Add server support for plugin libraries that can be used for add-on tasks such as debugging and performance measurement (Korry Douglas) This consists of two features: a table of rendezvous - variables that allows separately-loaded shared libraries to + variables that allows separately-loaded shared libraries to communicate, and a new configuration parameter local_preload_libraries + linkend="guc-local-preload-libraries">local_preload_libraries that allows libraries to be loaded into specific sessions without explicit cooperation from the client application. This allows external add-ons to implement features such as a PL/pgSQL debugger. @@ -6784,27 +6784,27 @@ Rename existing configuration parameter - preload_libraries to shared_preload_libraries + preload_libraries to shared_preload_libraries (Tom) This was done for clarity in comparison to - local_preload_libraries. + local_preload_libraries. Add new configuration parameter server_version_num + linkend="guc-server-version-num">server_version_num (Greg Sabino Mullane) This is like server_version, but is an - integer, e.g. 80200. This allows applications to + integer, e.g. 80200. This allows applications to make version checks more easily. @@ -6812,7 +6812,7 @@ Add a configuration parameter seq_page_cost + linkend="guc-seq-page-cost">seq_page_cost (Tom) @@ -6839,11 +6839,11 @@ New functions - _PG_init() and _PG_fini() are + _PG_init() and _PG_fini() are called if the library defines such symbols. Hence we no longer need to specify an initialization function in - shared_preload_libraries; we can assume that - the library used the _PG_init() convention + shared_preload_libraries; we can assume that + the library used the _PG_init() convention instead. @@ -6851,7 +6851,7 @@ Add PG_MODULE_MAGIC + linkend="xfunc-c-dynload">PG_MODULE_MAGIC header block to all shared object files (Martijn van Oosterhout) @@ -6870,7 +6870,7 @@ - New XML + New XML documentation section (Bruce) @@ -6892,7 +6892,7 @@ - multibyte encoding support, including UTF8 + multibyte encoding support, including UTF8 @@ -6912,13 +6912,13 @@ - Ispell dictionaries now recognize MySpell - format, used by OpenOffice + Ispell dictionaries now recognize MySpell + format, used by OpenOffice - GIN support + GIN support @@ -6928,13 +6928,13 @@ - Add adminpack module containing Pgadmin administration + Add adminpack module containing Pgadmin administration functions (Dave) These functions provide additional file system access - routines not present in the default PostgreSQL + routines not present in the default PostgreSQL server. 
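For the server_version_num item above, a one-line check (the value shown is illustrative):

    SHOW server_version_num;  -- e.g. 80200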
@@ -6945,7 +6945,7 @@ - Reports information about the current connection's SSL + Reports information about the current connection's SSL certificate. @@ -6972,9 +6972,9 @@ - This new implementation supports EAN13, UPC, - ISBN (books), ISMN (music), and - ISSN (serials). + This new implementation supports EAN13, UPC, + ISBN (books), ISMN (music), and + ISSN (serials). @@ -7034,9 +7034,9 @@ - New functions are cube(float[]), - cube(float[], float[]), and - cube_subset(cube, int4[]). + New functions are cube(float[]), + cube(float[], float[]), and + cube_subset(cube, int4[]). @@ -7049,8 +7049,8 @@ - New operators for array-subset comparisons (@>, - <@, &&) (Tom) + New operators for array-subset comparisons (@>, + <@, &&) (Tom) diff --git a/doc/src/sgml/release-8.3.sgml b/doc/src/sgml/release-8.3.sgml index a82410d057..45ecf9c054 100644 --- a/doc/src/sgml/release-8.3.sgml +++ b/doc/src/sgml/release-8.3.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.3.X series. Users are encouraged to update to a newer release branch soon. @@ -42,7 +42,7 @@ - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -63,19 +63,19 @@ Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. - Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -86,13 +86,13 @@ - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -110,26 +110,26 @@ - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? command when not connected to a database (Meng Qingzhong) - Fix one-byte buffer overrun in libpq's - PQprintTuples (Xi Wang) + Fix one-byte buffer overrun in libpq's + PQprintTuples (Xi Wang) This ancient function is not used anywhere by - PostgreSQL itself, but it might still be used by some + PostgreSQL itself, but it might still be used by some client code. @@ -149,15 +149,15 @@ - Make pgxs build executables with the right - .exe suffix when cross-compiling for Windows + Make pgxs build executables with the right + .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi) - Add new timezone abbreviation FET (Tom Lane) + Add new timezone abbreviation FET (Tom Lane) @@ -185,7 +185,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.3.X release series in February 2013. Users are encouraged to update to a newer release branch soon. @@ -212,13 +212,13 @@ Fix multiple bugs associated with CREATE INDEX - CONCURRENTLY (Andres Freund, Tom Lane) + CONCURRENTLY (Andres Freund, Tom Lane) - Fix CREATE INDEX CONCURRENTLY to use + Fix CREATE INDEX CONCURRENTLY to use in-place updates when changing the state of an index's - pg_index row. This prevents race conditions that could + pg_index row. 
This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus resulting in corrupt concurrently-created indexes. @@ -226,8 +226,8 @@ Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX - CONCURRENTLY command. The most important of these is - VACUUM, because an auto-vacuum could easily be launched + CONCURRENTLY command. The most important of these is + VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index. @@ -249,8 +249,8 @@ The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. @@ -268,10 +268,10 @@ - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -279,7 +279,7 @@ Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) @@ -292,14 +292,14 @@ - Fix REASSIGN OWNED to handle grants on tablespaces + Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera) - Ignore incorrect pg_attribute entries for system + Ignore incorrect pg_attribute entries for system columns for views (Tom Lane) @@ -313,7 +313,7 @@ - Fix rule printing to dump INSERT INTO table + Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane) @@ -321,7 +321,7 @@ Guard against stack overflow when there are too many - UNION/INTERSECT/EXCEPT clauses + UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane) @@ -349,7 +349,7 @@ Formerly, this would result in something quite unhelpful, such as - Non-recoverable failure in name resolution. + Non-recoverable failure in name resolution. @@ -362,8 +362,8 @@ - Make pg_ctl more robust about reading the - postmaster.pid file (Heikki Linnakangas) + Make pg_ctl more robust about reading the + postmaster.pid file (Heikki Linnakangas) @@ -373,33 +373,33 @@ - Fix possible crash in psql if incorrectly-encoded data - is presented and the client_encoding setting is a + Fix possible crash in psql if incorrectly-encoded data + is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing) - Fix bugs in the restore.sql script emitted by - pg_dump in tar output format (Tom Lane) + Fix bugs in the restore.sql script emitted by + pg_dump in tar output format (Tom Lane) The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring - data in mode as well as the regular COPY mode. - Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. 
This patch updates previous branches so that they will accept both the @@ -410,41 +410,41 @@ - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -455,7 +455,7 @@ - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -481,7 +481,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.3.X release series in February 2013. Users are encouraged to update to a newer release branch soon. @@ -524,22 +524,22 @@ - If we revoke a grant option from some role X, but - X still holds that option via a grant from someone + If we revoke a grant option from some role X, but + X still holds that option via a grant from someone else, we should not recursively revoke the corresponding privilege - from role(s) Y that X had granted it + from role(s) Y that X had granted it to. - Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) + Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) - Perl resets the process's SIGFPE handler to - SIG_IGN, which could result in crashes later on. Restore + Perl resets the process's SIGFPE handler to + SIG_IGN, which could result in crashes later on. Restore the normal Postgres signal handler after initializing PL/Perl. @@ -558,7 +558,7 @@ Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. @@ -566,7 +566,7 @@ - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -591,7 +591,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.3.X release series in February 2013. Users are encouraged to update to a newer release branch soon. @@ -622,7 +622,7 @@ - xml_parse() would attempt to fetch external files or + xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. 
While the external data @@ -635,22 +635,22 @@ - Prevent access to external files/URLs via contrib/xml2's - xslt_process() (Peter Eisentraut) + Prevent access to external files/URLs via contrib/xml2's + xslt_process() (Peter Eisentraut) - libxslt offers the ability to read and write both + libxslt offers the ability to read and write both files and URLs through stylesheet commands, thus allowing unprivileged database users to both read and write data with the privileges of the database server. Disable that through proper use - of libxslt's security options. (CVE-2012-3488) + of libxslt's security options. (CVE-2012-3488) - Also, remove xslt_process()'s ability to fetch documents + Also, remove xslt_process()'s ability to fetch documents and stylesheets from external files/URLs. While this was a - documented feature, it was long regarded as a bad idea. + documented feature, it was long regarded as a bad idea. The fix for CVE-2012-3489 broke that capability, and rather than expend effort on trying to fix it, we're just going to summarily remove it. @@ -678,22 +678,22 @@ - If ALTER SEQUENCE was executed on a freshly created or - reset sequence, and then precisely one nextval() call + If ALTER SEQUENCE was executed on a freshly created or + reset sequence, and then precisely one nextval() call was made on it, and then the server crashed, WAL replay would restore the sequence to a state in which it appeared that no - nextval() had been done, thus allowing the first + nextval() had been done, thus allowing the first sequence value to be returned again by the next - nextval() call. In particular this could manifest for - serial columns, since creation of a serial column's sequence - includes an ALTER SEQUENCE OWNED BY step. + nextval() call. In particular this could manifest for + serial columns, since creation of a serial column's sequence + includes an ALTER SEQUENCE OWNED BY step. - Ensure the backup_label file is fsync'd after - pg_start_backup() (Dave Kerr) + Ensure the backup_label file is fsync'd after + pg_start_backup() (Dave Kerr) @@ -718,7 +718,7 @@ The original coding could allow inconsistent behavior in some cases; in particular, an autovacuum could get canceled after less than - deadlock_timeout grace period. + deadlock_timeout grace period. @@ -730,7 +730,7 @@ - Fix log collector so that log_truncate_on_rotation works + Fix log collector so that log_truncate_on_rotation works during the very first log rotation after server start (Tom Lane) @@ -738,24 +738,24 @@ Ensure that a whole-row reference to a subquery doesn't include any - extra GROUP BY or ORDER BY columns (Tom Lane) + extra GROUP BY or ORDER BY columns (Tom Lane) - Disallow copying whole-row references in CHECK - constraints and index definitions during CREATE TABLE + Disallow copying whole-row references in CHECK + constraints and index definitions during CREATE TABLE (Tom Lane) - This situation can arise in CREATE TABLE with - LIKE or INHERITS. The copied whole-row + This situation can arise in CREATE TABLE with + LIKE or INHERITS. The copied whole-row variable was incorrectly labeled with the row type of the original table not the new one. Rejecting the case seems reasonable for - LIKE, since the row types might well diverge later. For - INHERITS we should ideally allow it, with an implicit + LIKE, since the row types might well diverge later. For + INHERITS we should ideally allow it, with an implicit coercion to the parent table's row type; but that will require more work than seems safe to back-patch. 
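A hedged sketch of the now-rejected case from the item above; is_valid is a hypothetical function declared to take record, so the CHECK contains a whole-row reference:

    CREATE TABLE t1 (a int, b int, CHECK (is_valid(t1.*)));
    CREATE TABLE t2 (LIKE t1 INCLUDING CONSTRAINTS);  -- copying is now rejected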
@@ -763,7 +763,7 @@ - Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki + Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki Linnakangas, Tom Lane) @@ -775,21 +775,21 @@ The code could get confused by quantified parenthesized - subexpressions, such as ^(foo)?bar. This would lead to + subexpressions, such as ^(foo)?bar. This would lead to incorrect index optimization of searches for such patterns. - Report errors properly in contrib/xml2's - xslt_process() (Tom Lane) + Report errors properly in contrib/xml2's + xslt_process() (Tom Lane) - Update time zone data files to tzdata release 2012e + Update time zone data files to tzdata release 2012e for DST law changes in Morocco and Tokelau @@ -835,12 +835,12 @@ Fix incorrect password transformation in - contrib/pgcrypto's DES crypt() function + contrib/pgcrypto's DES crypt() function (Solar Designer) - If a password string contained the byte value 0x80, the + If a password string contained the byte value 0x80, the remainder of the password was ignored, causing the password to be much weaker than it appeared. With this fix, the rest of the string is properly included in the DES hash. Any stored password values that are @@ -851,7 +851,7 @@ - Ignore SECURITY DEFINER and SET attributes for + Ignore SECURITY DEFINER and SET attributes for a procedural language's call handler (Tom Lane) @@ -863,7 +863,7 @@ - Allow numeric timezone offsets in timestamp input to be up to + Allow numeric timezone offsets in timestamp input to be up to 16 hours away from UTC (Tom Lane) @@ -889,7 +889,7 @@ - Fix text to name and char to name + Fix text to name and char to name casts to perform string truncation correctly in multibyte encodings (Karl Schnaitter) @@ -897,19 +897,19 @@ - Fix memory copying bug in to_tsquery() (Heikki Linnakangas) + Fix memory copying bug in to_tsquery() (Heikki Linnakangas) - Fix slow session startup when pg_attribute is very large + Fix slow session startup when pg_attribute is very large (Tom Lane) - If pg_attribute exceeds one-fourth of - shared_buffers, cache rebuilding code that is sometimes + If pg_attribute exceeds one-fourth of + shared_buffers, cache rebuilding code that is sometimes needed during session start would trigger the synchronized-scan logic, causing it to take many times longer than normal. The problem was particularly acute if many new sessions were starting at once. @@ -930,8 +930,8 @@ - Ensure the Windows implementation of PGSemaphoreLock() - clears ImmediateInterruptOK before returning (Tom Lane) + Ensure the Windows implementation of PGSemaphoreLock() + clears ImmediateInterruptOK before returning (Tom Lane) @@ -964,7 +964,7 @@ Previously, infinite recursion in a function invoked by - auto-ANALYZE could crash worker processes. + auto-ANALYZE could crash worker processes. @@ -983,25 +983,25 @@ Fix logging collector to ensure it will restart file rotation - after receiving SIGHUP (Tom Lane) + after receiving SIGHUP (Tom Lane) - Fix PL/pgSQL's GET DIAGNOSTICS command when the target + Fix PL/pgSQL's GET DIAGNOSTICS command when the target is the function's first variable (Tom Lane) - Fix several performance problems in pg_dump when + Fix several performance problems in pg_dump when the database contains many objects (Jeff Janes, Tom Lane) - pg_dump could get very slow if the database contained + pg_dump could get very slow if the database contained many schemas, or if many objects are in dependency loops, or if there are many owned sequences. 
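For the numeric-timezone-offset item above, an illustrative input that is now accepted:

    SELECT '2012-01-01 12:00 +15:00'::timestamptz;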
@@ -1009,14 +1009,14 @@ - Fix contrib/dblink's dblink_exec() to not leak + Fix contrib/dblink's dblink_exec() to not leak temporary database connections upon error (Tom Lane) - Update time zone data files to tzdata release 2012c + Update time zone data files to tzdata release 2012c for DST law changes in Antarctica, Armenia, Chile, Cuba, Falkland Islands, Gaza, Haiti, Hebron, Morocco, Syria, and Tokelau Islands; also historical corrections for Canada. @@ -1064,26 +1064,26 @@ Require execute permission on the trigger function for - CREATE TRIGGER (Robert Haas) + CREATE TRIGGER (Robert Haas) This missing check could allow another user to execute a trigger function with forged input data, by installing it on a table he owns. This is only of significance for trigger functions marked - SECURITY DEFINER, since otherwise trigger functions run + SECURITY DEFINER, since otherwise trigger functions run as the table owner anyway. (CVE-2012-0866) - Convert newlines to spaces in names written in pg_dump + Convert newlines to spaces in names written in pg_dump comments (Robert Haas) - pg_dump was incautious about sanitizing object names + pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL @@ -1099,10 +1099,10 @@ An index page split caused by an insertion could sometimes cause a - concurrently-running VACUUM to miss removing index entries + concurrently-running VACUUM to miss removing index entries that it should remove. After the corresponding table rows are removed, the dangling index entries would cause errors (such as could not - read block N in file ...) or worse, silently wrong query results + read block N in file ...) or worse, silently wrong query results after unrelated rows are re-inserted at the now-free table locations. This bug has been present since release 8.2, but occurs so infrequently that it was not diagnosed until now. If you have reason to suspect @@ -1114,16 +1114,16 @@ Allow non-existent values for some settings in ALTER - USER/DATABASE SET (Heikki Linnakangas) + USER/DATABASE SET (Heikki Linnakangas) - Allow default_text_search_config, - default_tablespace, and temp_tablespaces to be + Allow default_text_search_config, + default_tablespace, and temp_tablespaces to be set to names that are not known. This is because they might be known in another database where the setting is intended to be used, or for the tablespace cases because the tablespace might not be created yet. The - same issue was previously recognized for search_path, and + same issue was previously recognized for search_path, and these settings now act like that one. @@ -1145,7 +1145,7 @@ - Fix regular expression back-references with * attached + Fix regular expression back-references with * attached (Tom Lane) @@ -1159,18 +1159,18 @@ A similar problem still afflicts back-references that are embedded in a larger quantified expression, rather than being the immediate subject of the quantifier. This will be addressed in a future - PostgreSQL release. + PostgreSQL release. 
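For the regular-expression item above, a sketch of a back-reference with a quantifier attached (the pattern is illustrative):

    SELECT 'abab' ~ E'^(ab)\\1*$';  -- now matches as expected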
Fix recently-introduced memory leak in processing of - inet/cidr values (Heikki Linnakangas) + inet/cidr values (Heikki Linnakangas) - A patch in the December 2011 releases of PostgreSQL + A patch in the December 2011 releases of PostgreSQL caused memory leakage in these operations, which could be significant in scenarios such as building a btree index on such a column. @@ -1201,32 +1201,32 @@ - Improve pg_dump's handling of inherited table columns + Improve pg_dump's handling of inherited table columns (Tom Lane) - pg_dump mishandled situations where a child column has + pg_dump mishandled situations where a child column has a different default expression than its parent column. If the default is textually identical to the parent's default, but not actually the same (for instance, because of schema search path differences) it would not be recognized as different, so that after dump and restore the child would be allowed to inherit the parent's default. Child columns - that are NOT NULL where their parent is not could also be + that are NOT NULL where their parent is not could also be restored subtly incorrectly. - Fix pg_restore's direct-to-database mode for + Fix pg_restore's direct-to-database mode for INSERT-style table data (Tom Lane) Direct-to-database restores from archive files made with - - In particular, the response to a server report of fork() + In particular, the response to a server report of fork() failure during SSL connection startup is now saner. - Improve libpq's error reporting for SSL failures (Tom + Improve libpq's error reporting for SSL failures (Tom Lane) - Make ecpglib write double values with 15 digits + Make ecpglib write double values with 15 digits precision (Akira Kurosawa) - In ecpglib, be sure LC_NUMERIC setting is + In ecpglib, be sure LC_NUMERIC setting is restored after an error (Michael Meskes) @@ -1898,7 +1898,7 @@ - contrib/pg_crypto's blowfish encryption code could give + contrib/pg_crypto's blowfish encryption code could give wrong results on platforms where char is signed (which is most), leading to encrypted passwords being weaker than they should be. @@ -1906,13 +1906,13 @@ - Fix memory leak in contrib/seg (Heikki Linnakangas) + Fix memory leak in contrib/seg (Heikki Linnakangas) - Fix pgstatindex() to give consistent results for empty + Fix pgstatindex() to give consistent results for empty indexes (Tom Lane) @@ -1944,7 +1944,7 @@ - Update time zone data files to tzdata release 2011i + Update time zone data files to tzdata release 2011i for DST law changes in Canada, Egypt, Russia, Samoa, and South Sudan. @@ -2013,15 +2013,15 @@ - Fix dangling-pointer problem in BEFORE ROW UPDATE trigger + Fix dangling-pointer problem in BEFORE ROW UPDATE trigger handling when there was a concurrent update to the target tuple (Tom Lane) This bug has been observed to result in intermittent cannot - extract system attribute from virtual tuple failures while trying to - do UPDATE RETURNING ctid. There is a very small probability + extract system attribute from virtual tuple failures while trying to + do UPDATE RETURNING ctid. There is a very small probability of more serious errors, such as generating incorrect index entries for the updated tuple. 
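The failing pattern from the trigger item above looked roughly like this; the table is hypothetical:

    UPDATE accounts SET balance = balance + 1 WHERE id = 42 RETURNING ctid;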
@@ -2029,13 +2029,13 @@ - Disallow DROP TABLE when there are pending deferred trigger + Disallow DROP TABLE when there are pending deferred trigger events for the table (Tom Lane) - Formerly the DROP would go through, leading to - could not open relation with OID nnn errors when the + Formerly the DROP would go through, leading to + could not open relation with OID nnn errors when the triggers were eventually fired. @@ -2048,7 +2048,7 @@ - Fix pg_restore to cope with long lines (over 1KB) in + Fix pg_restore to cope with long lines (over 1KB) in TOC files (Tom Lane) @@ -2080,14 +2080,14 @@ - Fix version-incompatibility problem with libintl on + Fix version-incompatibility problem with libintl on Windows (Hiroshi Inoue) - Fix usage of xcopy in Windows build scripts to + Fix usage of xcopy in Windows build scripts to work correctly under Windows 7 (Andrew Dunstan) @@ -2098,14 +2098,14 @@ - Fix path separator used by pg_regress on Cygwin + Fix path separator used by pg_regress on Cygwin (Andrew Dunstan) - Update time zone data files to tzdata release 2011f + Update time zone data files to tzdata release 2011f for DST law changes in Chile, Cuba, Falkland Islands, Morocco, Samoa, and Turkey; also historical corrections for South Australia, Alaska, and Hawaii. @@ -2149,15 +2149,15 @@ - Avoid failures when EXPLAIN tries to display a simple-form - CASE expression (Tom Lane) + Avoid failures when EXPLAIN tries to display a simple-form + CASE expression (Tom Lane) - If the CASE's test expression was a constant, the planner - could simplify the CASE into a form that confused the + If the CASE's test expression was a constant, the planner + could simplify the CASE into a form that confused the expression-display code, resulting in unexpected CASE WHEN - clause errors. + clause errors. @@ -2182,44 +2182,44 @@ - The date type supports a wider range of dates than can be - represented by the timestamp types, but the planner assumed it + The date type supports a wider range of dates than can be + represented by the timestamp types, but the planner assumed it could always convert a date to timestamp with impunity. - Fix pg_restore's text output for large objects (BLOBs) - when standard_conforming_strings is on (Tom Lane) + Fix pg_restore's text output for large objects (BLOBs) + when standard_conforming_strings is on (Tom Lane) Although restoring directly to a database worked correctly, string - escaping was incorrect if pg_restore was asked for - SQL text output and standard_conforming_strings had been + escaping was incorrect if pg_restore was asked for + SQL text output and standard_conforming_strings had been enabled in the source database. - Fix erroneous parsing of tsquery values containing + Fix erroneous parsing of tsquery values containing ... & !(subexpression) | ... (Tom Lane) Queries containing this combination of operators were not executed - correctly. The same error existed in contrib/intarray's - query_int type and contrib/ltree's - ltxtquery type. + correctly. The same error existed in contrib/intarray's + query_int type and contrib/ltree's + ltxtquery type. 
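For the tsquery item above, the affected operator shape:

    SELECT 'fat & !(cat | rat) | dog'::tsquery;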
- Fix buffer overrun in contrib/intarray's input function - for the query_int type (Apple) + Fix buffer overrun in contrib/intarray's input function + for the query_int type (Apple) @@ -2231,16 +2231,16 @@ - Fix bug in contrib/seg's GiST picksplit algorithm + Fix bug in contrib/seg's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a seg column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a seg column. + If you have such an index, consider REINDEXing it after installing this update. (This is identical to the bug that was fixed in - contrib/cube in the previous update.) + contrib/cube in the previous update.) @@ -2282,17 +2282,17 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. @@ -2302,7 +2302,7 @@ - This could result in bad buffer id: 0 failures or + This could result in bad buffer id: 0 failures or corruption of index contents during replication. @@ -2321,7 +2321,7 @@ - The effective vacuum_cost_limit for an autovacuum worker + The effective vacuum_cost_limit for an autovacuum worker could drop to nearly zero if it processed enough tables, causing it to run extremely slowly. @@ -2329,19 +2329,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -2357,7 +2357,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -2367,7 +2367,7 @@ - Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -2379,14 +2379,14 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -2398,15 +2398,15 @@ - Behave correctly if ORDER BY, LIMIT, - FOR UPDATE, or WITH is attached to the - VALUES part of INSERT ... VALUES (Tom Lane) + Behave correctly if ORDER BY, LIMIT, + FOR UPDATE, or WITH is attached to the + VALUES part of INSERT ... 
VALUES (Tom Lane) - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -2418,7 +2418,7 @@ Fix postmaster crash when connection acceptance - (accept() or one of the calls made immediately after it) + (accept() or one of the calls made immediately after it) fails, and the postmaster was compiled with GSSAPI support (Alexander Chernikov) @@ -2426,7 +2426,7 @@ - Fix missed unlink of temporary files when log_temp_files + Fix missed unlink of temporary files when log_temp_files is active (Tom Lane) @@ -2438,11 +2438,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. @@ -2461,14 +2461,14 @@ - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix PL/Python's handling of set-returning functions + Fix PL/Python's handling of set-returning functions (Jan Urbanski) @@ -2480,22 +2480,22 @@ - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -2503,20 +2503,20 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -2567,7 +2567,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -2596,7 +2596,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -2605,7 +2605,7 @@ - Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on + Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on Windows (Magnus Hagander) @@ -2627,13 +2627,13 @@ This is a back-patch of an 8.4 fix that was missed in the 8.3 branch. 
This corrects an error introduced in 8.3.8 that could cause incorrect results for outer joins when the inner relation is an inheritance tree - or UNION ALL subquery. + or UNION ALL subquery. - Fix possible duplicate scans of UNION ALL member relations + Fix possible duplicate scans of UNION ALL member relations (Tom Lane) @@ -2655,7 +2655,7 @@ - If a plan is prepared while CREATE INDEX CONCURRENTLY is + If a plan is prepared while CREATE INDEX CONCURRENTLY is in progress for one of the referenced tables, it is supposed to be re-planned once the index is ready for use. This was not happening reliably. @@ -2709,7 +2709,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -2746,7 +2746,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -2754,35 +2754,35 @@ Fix possible data corruption in ALTER TABLE ... SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) - Fix REASSIGN OWNED to handle operator classes and families + Fix REASSIGN OWNED to handle operator classes and families (Asko Tiidumaa) - Fix possible core dump when comparing two empty tsquery values + Fix possible core dump when comparing two empty tsquery values (Tom Lane) - Fix LIKE's handling of patterns containing % - followed by _ (Tom Lane) + Fix LIKE's handling of patterns containing % + followed by _ (Tom Lane) @@ -2794,14 +2794,14 @@ In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - Make psql recognize DISCARD ALL as a command that should + Make psql recognize DISCARD ALL as a command that should not be encased in a transaction block in autocommit-off mode (Itagaki Takahiro) @@ -2809,14 +2809,14 @@ - Fix ecpg to process data from RETURNING + Fix ecpg to process data from RETURNING clauses correctly (Michael Meskes) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -2824,30 +2824,30 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) - Add hstore(text, text) - function to contrib/hstore (Robert Haas) + Add hstore(text, text) + function to contrib/hstore (Robert Haas) This function is the recommended substitute for the now-deprecated - => operator. It was back-patched so that future-proofed + => operator. It was back-patched so that future-proofed code can be used with older server versions. Note that the patch will - be effective only after contrib/hstore is installed or + be effective only after contrib/hstore is installed or reinstalled in a particular database. Users might prefer to execute - the CREATE FUNCTION command by hand, instead. + the CREATE FUNCTION command by hand, instead. 
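As a concrete illustration of the hstore item above, the two spellings compare as follows. This is a minimal sketch assuming contrib/hstore is installed; the key and value are invented:

    -- recommended, future-proof spelling
    SELECT hstore('colour', 'blue');
    -- deprecated operator spelling that hstore(text, text) replaces
    SELECT 'colour' => 'blue';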
@@ -2860,7 +2860,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -2875,7 +2875,7 @@ - Make Windows' N. Central Asia Standard Time timezone map to + Make Windows' N. Central Asia Standard Time timezone map to Asia/Novosibirsk, not Asia/Almaty (Magnus Hagander) @@ -2922,19 +2922,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -2943,19 +2943,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -2980,7 +2980,7 @@ This avoids failures if the function's code is invalid without the setting; an example is that SQL functions may not parse if the - search_path is not correct. + search_path is not correct. @@ -2992,10 +2992,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. 
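A minimal sketch of the RESET ALL behavior described above; the role name and settings are hypothetical:

    -- set up by a superuser:
    ALTER USER alice SET work_mem = '64MB';       -- alice may change this herself
    ALTER USER alice SET log_statement = 'all';   -- superuser-only parameter
    -- run by alice herself:
    ALTER USER alice RESET ALL;
    -- only work_mem is removed; the superuser-only setting now stays in place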
@@ -3003,7 +3003,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -3016,13 +3016,13 @@ Ensure the archiver process responds to changes in - archive_command as soon as possible (Tom) + archive_command as soon as possible (Tom) - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -3035,15 +3035,15 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Fix psql's \copy to not add spaces around - a dot within \copy (select ...) (Tom) + Fix psql's \copy to not add spaces around + a dot within \copy (select ...) (Tom) @@ -3054,15 +3054,15 @@ - Fix unnecessary GIN indexes do not support whole-index scans - errors for unsatisfiable queries using contrib/intarray + Fix unnecessary GIN indexes do not support whole-index scans + errors for unsatisfiable queries using contrib/intarray operators (Tom) - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -3070,7 +3070,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -3102,14 +3102,14 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. - Also, add PKST (Pakistan Summer Time) to the default set of + Also, add PKST (Pakistan Summer Time) to the default set of timezone abbreviations. @@ -3151,7 +3151,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -3214,8 +3214,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -3241,7 +3241,7 @@ - Fix assorted crashes in xml processing caused by sloppy + Fix assorted crashes in xml processing caused by sloppy memory management (Tom) @@ -3261,7 +3261,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -3283,23 +3283,23 @@ Improve constraint exclusion processing of boolean-variable cases, in particular make it possible to exclude a partition that has a - bool_column = false constraint (Tom) + bool_column = false constraint (Tom) - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. 
If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -3307,49 +3307,49 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) - Fix possible infinite loop if SSL_read or - SSL_write fails without setting errno (Tom) + Fix possible infinite loop if SSL_read or + SSL_write fails without setting errno (Tom) This is reportedly possible with some Windows versions of - openssl. + openssl. - Disallow GSSAPI authentication on local connections, + Disallow GSSAPI authentication on local connections, since it requires a hostname to function correctly (Magnus) - Make ecpg report the proper SQLSTATE if the connection + Make ecpg report the proper SQLSTATE if the connection disappears (Michael) - Fix psql's numericlocale option to not + Fix psql's numericlocale option to not format strings it shouldn't in latex and troff output formats (Heikki) - Make psql return the correct exit status (3) when - ON_ERROR_STOP and --single-transaction are - both specified and an error occurs during the implied COMMIT + Make psql return the correct exit status (3) when + ON_ERROR_STOP and --single-transaction are + both specified and an error occurs during the implied COMMIT (Bruce) @@ -3370,7 +3370,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -3382,43 +3382,43 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Allow zero-dimensional arrays in contrib/ltree operations + Allow zero-dimensional arrays in contrib/ltree operations (Tom) This case was formerly rejected as an error, but it's more convenient to treat it the same as a zero-element array. In particular this avoids - unnecessary failures when an ltree operation is applied to the - result of ARRAY(SELECT ...) and the sub-select returns no + unnecessary failures when an ltree operation is applied to the + result of ARRAY(SELECT ...) and the sub-select returns no rows. - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Make building of contrib/xml2 more robust on Windows + Make building of contrib/xml2 more robust on Windows (Andrew) @@ -3429,14 +3429,14 @@ - One known symptom of this bug is that rows in pg_listener + One known symptom of this bug is that rows in pg_listener could be dropped under heavy load. - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. 
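The contrib/ltree fix noted above is easiest to see with a sub-select that returns no rows. A sketch, assuming contrib/ltree is installed and with invented names:

    CREATE TABLE paths (p ltree);
    -- ARRAY(SELECT ...) over zero rows yields an empty array; this formerly
    -- raised an error from the ltree operator and now simply returns false
    SELECT ARRAY(SELECT p FROM paths) @> 'Top.Science'::ltree;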
@@ -3514,14 +3514,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. @@ -3540,7 +3540,7 @@ - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -3617,7 +3617,7 @@ The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -3650,19 +3650,19 @@ Fix processing of ownership dependencies during CREATE OR - REPLACE FUNCTION (Tom) + REPLACE FUNCTION (Tom) - Fix incorrect handling of WHERE - x=x conditions (Tom) + Fix incorrect handling of WHERE + x=x conditions (Tom) In some cases these could get ignored as redundant, but they aren't - — they're equivalent to x IS NOT NULL. + — they're equivalent to x IS NOT NULL. @@ -3674,7 +3674,7 @@ - Fix encoding handling in xml binary input (Heikki) + Fix encoding handling in xml binary input (Heikki) @@ -3685,7 +3685,7 @@ - Fix bug with calling plperl from plperlu or vice + Fix bug with calling plperl from plperlu or vice versa (Tom) @@ -3705,7 +3705,7 @@ Ensure that Perl arrays are properly converted to - PostgreSQL arrays when returned by a set-returning + PostgreSQL arrays when returned by a set-returning PL/Perl function (Andrew Dunstan, Abhijit Menon-Sen) @@ -3722,7 +3722,7 @@ - In contrib/pg_standby, disable triggering failover with a + In contrib/pg_standby, disable triggering failover with a signal on Windows (Fujii Masao) @@ -3734,20 +3734,20 @@ - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -3760,14 +3760,14 @@ - This includes adding IDT and SGT to the default + This includes adding IDT and SGT to the default timezone abbreviation set. - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -3798,8 +3798,8 @@ A dump/restore is not required for those running 8.3.X. - However, if you have any hash indexes on interval columns, - you must REINDEX them after updating to 8.3.8. + However, if you have any hash indexes on interval columns, + you must REINDEX them after updating to 8.3.8. Also, if you are upgrading from a version earlier than 8.3.5, see . @@ -3818,13 +3818,13 @@ This bug led to the often-reported could not reattach - to shared memory error message. + to shared memory error message. 
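For the 8.3.8 upgrade note above, rebuilding an affected hash index would look roughly like this; the index name is hypothetical:

    -- hash values for interval columns change with this fix,
    -- so existing hash indexes on such columns must be rebuilt
    REINDEX INDEX meetings_duration_hash_idx;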
- Force WAL segment switch during pg_start_backup() + Force WAL segment switch during pg_start_backup() (Heikki) @@ -3835,26 +3835,26 @@ - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) - Make LOAD of an already-loaded loadable module + Make LOAD of an already-loaded loadable module into a no-op (Tom) - Formerly, LOAD would attempt to unload and re-load the + Formerly, LOAD would attempt to unload and re-load the module, but this is unsafe and not all that useful. @@ -3881,8 +3881,8 @@ - Prevent synchronize_seqscans from changing the results of - scrollable and WITH HOLD cursors (Tom) + Prevent synchronize_seqscans from changing the results of + scrollable and WITH HOLD cursors (Tom) @@ -3896,32 +3896,32 @@ - Fix hash calculation for data type interval (Tom) + Fix hash calculation for data type interval (Tom) This corrects wrong results for hash joins on interval values. It also changes the contents of hash indexes on interval columns. - If you have any such indexes, you must REINDEX them + If you have any such indexes, you must REINDEX them after updating. - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -3938,14 +3938,14 @@ - Fix money data type to work in locales where currency + Fix money data type to work in locales where currency amounts have no fractional digits, e.g. 
Japan (Itagaki Takahiro) - Fix LIKE for case where pattern contains %_ + Fix LIKE for case where pattern contains %_ (Tom) @@ -3953,7 +3953,7 @@ Properly round datetime input like - 00:12:57.9999999999999999999999999999 (Tom) + 00:12:57.9999999999999999999999999999 (Tom) @@ -3972,8 +3972,8 @@ - Ensure that a fast shutdown request will forcibly terminate - open sessions, even if a smart shutdown was already in progress + Ensure that a fast shutdown request will forcibly terminate + open sessions, even if a smart shutdown was already in progress (Fujii Masao) @@ -4000,35 +4000,35 @@ - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Improve pg_dump's efficiency when there are + Improve pg_dump's efficiency when there are many large objects (Tamas Vincze) - Use SIGUSR1, not SIGQUIT, as the - failover signal for pg_standby (Heikki) + Use SIGUSR1, not SIGQUIT, as the + failover signal for pg_standby (Heikki) - Make pg_standby's maxretries option + Make pg_standby's maxretries option behave as documented (Fujii Masao) - Make contrib/hstore throw an error when a key or + Make contrib/hstore throw an error when a key or value is too long to fit in its data structure, rather than silently truncating it (Andrew Gierth) @@ -4036,15 +4036,15 @@ - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -4057,7 +4057,7 @@ - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Jordan, Pakistan, Argentina/San_Luis, Cuba, Jordan (historical correction only), Mauritius, Morocco, Palestine, Syria, Tunisia. @@ -4108,7 +4108,7 @@ This change extends fixes made in the last two minor releases for related failure scenarios. The previous fixes were narrowly tailored for the original problem reports, but we have now recognized that - any error thrown by an encoding conversion function could + any error thrown by an encoding conversion function could potentially lead to infinite recursion while trying to report the error. The solution therefore is to disable translation and encoding conversion and report the plain-ASCII form of any error message, @@ -4119,7 +4119,7 @@ - Disallow CREATE CONVERSION with the wrong encodings + Disallow CREATE CONVERSION with the wrong encodings for the specified conversion function (Heikki) @@ -4132,19 +4132,19 @@ - Fix xpath() to not modify the path expression unless + Fix xpath() to not modify the path expression unless necessary, and to make a saner attempt at it when necessary (Andrew) - The SQL standard suggests that xpath should work on data - that is a document fragment, but libxml doesn't support + The SQL standard suggests that xpath should work on data + that is a document fragment, but libxml doesn't support that, and indeed it's not clear that this is sensible according to the - XPath standard. xpath attempted to work around this + XPath standard. xpath attempted to work around this mismatch by modifying both the data and the path expression, but the modification was buggy and could cause valid searches to fail. 
Now, - xpath checks whether the data is in fact a well-formed - document, and if so invokes libxml with no change to the + xpath checks whether the data is in fact a well-formed + document, and if so invokes libxml with no change to the data or path expression. Otherwise, a different modification method that is somewhat less likely to fail is used. @@ -4155,15 +4155,15 @@ seems likely that no real solution is possible. This patch should therefore be viewed as a band-aid to keep from breaking existing applications unnecessarily. It is likely that - PostgreSQL 8.4 will simply reject use of - xpath on data that is not a well-formed document. + PostgreSQL 8.4 will simply reject use of + xpath on data that is not a well-formed document. - Fix core dump when to_char() is given format codes that + Fix core dump when to_char() is given format codes that are inappropriate for the type of the data argument (Tom) @@ -4175,40 +4175,40 @@ - Crashes were possible on platforms where wchar_t is narrower - than int; Windows in particular. + Crashes were possible on platforms where wchar_t is narrower + than int; Windows in particular. Fix extreme inefficiency in text search parser's handling of an - email-like string containing multiple @ characters (Heikki) + email-like string containing multiple @ characters (Heikki) - Fix planner problem with sub-SELECT in the output list + Fix planner problem with sub-SELECT in the output list of a larger subquery (Tom) The known symptom of this bug is a failed to locate grouping - columns error that is dependent on the datatype involved; + columns error that is dependent on the datatype involved; but there could be other issues as well. - Fix decompilation of CASE WHEN with an implicit coercion + Fix decompilation of CASE WHEN with an implicit coercion (Tom) This mistake could lead to Assert failures in an Assert-enabled build, - or an unexpected CASE WHEN clause error message in other + or an unexpected CASE WHEN clause error message in other cases, when trying to examine or dump a view. @@ -4219,38 +4219,38 @@ - If CLUSTER or a rewriting variant of ALTER TABLE + If CLUSTER or a rewriting variant of ALTER TABLE were executed by someone other than the table owner, the - pg_type entry for the table's TOAST table would end up + pg_type entry for the table's TOAST table would end up marked as owned by that someone. This caused no immediate problems, since the permissions on the TOAST rowtype aren't examined by any ordinary database operation. However, it could lead to unexpected failures if one later tried to drop the role that issued the command - (in 8.1 or 8.2), or owner of data type appears to be invalid - warnings from pg_dump after having done so (in 8.3). + (in 8.1 or 8.2), or owner of data type appears to be invalid + warnings from pg_dump after having done so (in 8.3). - Change UNLISTEN to exit quickly if the current session has - never executed any LISTEN command (Tom) + Change UNLISTEN to exit quickly if the current session has + never executed any LISTEN command (Tom) Most of the time this is not a particularly useful optimization, but - since DISCARD ALL invokes UNLISTEN, the previous + since DISCARD ALL invokes UNLISTEN, the previous coding caused a substantial performance problem for applications that - made heavy use of DISCARD ALL. + made heavy use of DISCARD ALL. 
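A short sketch of why the UNLISTEN change matters for connection poolers; the channel name is invented:

    LISTEN app_events;   -- sessions that never do this now skip UNLISTEN work entirely
    UNLISTEN *;          -- run implicitly by DISCARD ALL
    DISCARD ALL;         -- poolers issue this between clients, so the fast path matters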
- Fix PL/pgSQL to not treat INTO after INSERT as + Fix PL/pgSQL to not treat INTO after INSERT as an INTO-variables clause anywhere in the string, not only at the start; - in particular, don't fail for INSERT INTO within - CREATE RULE (Tom) + in particular, don't fail for INSERT INTO within + CREATE RULE (Tom) @@ -4268,21 +4268,21 @@ - Retry failed calls to CallNamedPipe() on Windows + Retry failed calls to CallNamedPipe() on Windows (Steve Marshall, Magnus) It appears that this function can sometimes fail transiently; we previously treated any failure as a hard error, which could - confuse LISTEN/NOTIFY as well as other + confuse LISTEN/NOTIFY as well as other operations. - Add MUST (Mauritius Island Summer Time) to the default list + Add MUST (Mauritius Island Summer Time) to the default list of known timezone abbreviations (Xavier Bugaud) @@ -4324,7 +4324,7 @@ - Make DISCARD ALL release advisory locks, in addition + Make DISCARD ALL release advisory locks, in addition to everything it already did (Tom) @@ -4347,13 +4347,13 @@ - Fix crash of xmlconcat(NULL) (Peter) + Fix crash of xmlconcat(NULL) (Peter) - Fix possible crash in ispell dictionary if high-bit-set + Fix possible crash in ispell dictionary if high-bit-set characters are used as flags (Teodor) @@ -4365,7 +4365,7 @@ - Fix misordering of pg_dump output for composite types + Fix misordering of pg_dump output for composite types (Tom) @@ -4377,13 +4377,13 @@ - Improve handling of URLs in headline() function (Teodor) + Improve handling of URLs in headline() function (Teodor) - Improve handling of overlength headlines in headline() + Improve handling of overlength headlines in headline() function (Teodor) @@ -4400,7 +4400,7 @@ Fix possible Assert failure if a statement executed in PL/pgSQL is rewritten into another kind of statement, for example if an - INSERT is rewritten into an UPDATE (Heikki) + INSERT is rewritten into an UPDATE (Heikki) @@ -4410,7 +4410,7 @@ - This primarily affects domains that are declared with CHECK + This primarily affects domains that are declared with CHECK constraints involving user-defined stable or immutable functions. Such functions typically fail if no snapshot has been set. @@ -4425,7 +4425,7 @@ - Avoid unnecessary locking of small tables in VACUUM + Avoid unnecessary locking of small tables in VACUUM (Heikki) @@ -4433,21 +4433,21 @@ Fix a problem that sometimes kept ALTER TABLE ENABLE/DISABLE - RULE from being recognized by active sessions (Tom) + RULE from being recognized by active sessions (Tom) - Fix a problem that made UPDATE RETURNING tableoid + Fix a problem that made UPDATE RETURNING tableoid return zero instead of the correct OID (Tom) - Allow functions declared as taking ANYARRAY to work on - the pg_statistic columns of that type (Tom) + Allow functions declared as taking ANYARRAY to work on + the pg_statistic columns of that type (Tom) @@ -4463,13 +4463,13 @@ This could result in bad plans for queries like - ... from a left join b on a.a1 = b.b1 where a.a1 = 42 ... + ... from a left join b on a.a1 = b.b1 where a.a1 = 42 ... 
- Improve optimizer's handling of long IN lists (Tom) + Improve optimizer's handling of long IN lists (Tom) @@ -4521,21 +4521,21 @@ - Fix contrib/dblink's - dblink_get_result(text,bool) function (Joe) + Fix contrib/dblink's + dblink_get_result(text,bool) function (Joe) - Fix possible garbage output from contrib/sslinfo functions + Fix possible garbage output from contrib/sslinfo functions (Tom) - Fix incorrect behavior of contrib/tsearch2 compatibility + Fix incorrect behavior of contrib/tsearch2 compatibility trigger when it's fired more than once in a command (Teodor) @@ -4554,29 +4554,29 @@ - Fix ecpg's handling of varchar structs (Michael) + Fix ecpg's handling of varchar structs (Michael) - Fix configure script to properly report failure when + Fix configure script to properly report failure when unable to obtain linkage information for PL/Perl (Andrew) - Make all documentation reference pgsql-bugs and/or - pgsql-hackers as appropriate, instead of the - now-decommissioned pgsql-ports and pgsql-patches + Make all documentation reference pgsql-bugs and/or + pgsql-hackers as appropriate, instead of the + now-decommissioned pgsql-ports and pgsql-patches mailing lists (Tom) - Update time zone data files to tzdata release 2009a (for + Update time zone data files to tzdata release 2009a (for Kathmandu and historical DST corrections in Switzerland, Cuba) @@ -4607,7 +4607,7 @@ A dump/restore is not required for those running 8.3.X. However, if you are upgrading from a version earlier than 8.3.1, see . Also, if you were running a previous - 8.3.X release, it is recommended to REINDEX all GiST + 8.3.X release, it is recommended to REINDEX all GiST indexes after the upgrade. @@ -4621,13 +4621,13 @@ Fix GiST index corruption due to marking the wrong index entry - dead after a deletion (Teodor) + dead after a deletion (Teodor) This would result in index searches failing to find rows they should have found. Corrupted indexes can be fixed with - REINDEX. + REINDEX. @@ -4639,7 +4639,7 @@ We have addressed similar issues before, but it would still fail if - the character has no equivalent message itself couldn't + the character has no equivalent message itself couldn't be converted. The fix is to disable localization and send the plain ASCII error message when we detect such a situation. @@ -4647,7 +4647,7 @@ - Fix possible crash in bytea-to-XML mapping (Michael McMaster) + Fix possible crash in bytea-to-XML mapping (Michael McMaster) @@ -4660,8 +4660,8 @@ - Improve optimization of expression IN - (expression-list) queries (Tom, per an idea from Robert + Improve optimization of expression IN + (expression-list) queries (Tom, per an idea from Robert Haas) @@ -4674,20 +4674,20 @@ - Fix mis-expansion of rule queries when a sub-SELECT appears - in a function call in FROM, a multi-row VALUES - list, or a RETURNING list (Tom) + Fix mis-expansion of rule queries when a sub-SELECT appears + in a function call in FROM, a multi-row VALUES + list, or a RETURNING list (Tom) - The usual symptom of this problem is an unrecognized node type + The usual symptom of this problem is an unrecognized node type error. 
- Fix Assert failure during rescan of an IS NULL + Fix Assert failure during rescan of an IS NULL search of a GiST index (Teodor) @@ -4707,7 +4707,7 @@ - Force a checkpoint before CREATE DATABASE starts to copy + Force a checkpoint before CREATE DATABASE starts to copy files (Heikki) @@ -4719,9 +4719,9 @@ - Prevent possible collision of relfilenode numbers + Prevent possible collision of relfilenode numbers when moving a table to another tablespace with ALTER SET - TABLESPACE (Heikki) + TABLESPACE (Heikki) @@ -4740,21 +4740,21 @@ Fix improper display of fractional seconds in interval values when - using a non-ISO datestyle in an build (Ron Mayer) - Make ILIKE compare characters case-insensitively + Make ILIKE compare characters case-insensitively even when they're escaped (Andrew) - Ensure DISCARD is handled properly by statement logging (Tom) + Ensure DISCARD is handled properly by statement logging (Tom) @@ -4767,7 +4767,7 @@ - Ensure SPI_getvalue and SPI_getbinval + Ensure SPI_getvalue and SPI_getbinval behave correctly when the passed tuple and tuple descriptor have different numbers of columns (Tom) @@ -4781,15 +4781,15 @@ - Mark SessionReplicationRole as PGDLLIMPORT - so it can be used by Slony on Windows (Magnus) + Mark SessionReplicationRole as PGDLLIMPORT + so it can be used by Slony on Windows (Magnus) - Fix small memory leak when using libpq's - gsslib parameter (Magnus) + Fix small memory leak when using libpq's + gsslib parameter (Magnus) @@ -4800,38 +4800,38 @@ - Ensure libgssapi is linked into libpq + Ensure libgssapi is linked into libpq if needed (Markus Schaaf) - Fix ecpg's parsing of CREATE ROLE (Michael) + Fix ecpg's parsing of CREATE ROLE (Michael) - Fix recent breakage of pg_ctl restart (Tom) + Fix recent breakage of pg_ctl restart (Tom) - Ensure pg_control is opened in binary mode + Ensure pg_control is opened in binary mode (Itagaki Takahiro) - pg_controldata and pg_resetxlog + pg_controldata and pg_resetxlog did this incorrectly, and so could fail on Windows. - Update time zone data files to tzdata release 2008i (for + Update time zone data files to tzdata release 2008i (for DST law changes in Argentina, Brazil, Mauritius, Syria) @@ -4888,41 +4888,41 @@ This error created a risk of corruption in system - catalogs that are consulted by VACUUM: dead tuple versions + catalogs that are consulted by VACUUM: dead tuple versions might be removed too soon. The impact of this on actual database operations would be minimal, since the system doesn't follow MVCC rules while examining catalogs, but it might result in transiently - wrong output from pg_dump or other client programs. + wrong output from pg_dump or other client programs. - Fix potential miscalculation of datfrozenxid (Alvaro) + Fix potential miscalculation of datfrozenxid (Alvaro) This error may explain some recent reports of failure to remove old - pg_clog data. + pg_clog data. - Fix incorrect HOT updates after pg_class is reindexed + Fix incorrect HOT updates after pg_class is reindexed (Tom) - Corruption of pg_class could occur if REINDEX - TABLE pg_class was followed in the same session by an ALTER - TABLE RENAME or ALTER TABLE SET SCHEMA command. + Corruption of pg_class could occur if REINDEX + TABLE pg_class was followed in the same session by an ALTER + TABLE RENAME or ALTER TABLE SET SCHEMA command. 
- Fix missed combo cid case (Karl Schnaitter) + Fix missed combo cid case (Karl Schnaitter) @@ -4946,7 +4946,7 @@ This responds to reports that the counters could overflow in sufficiently long transactions, leading to unexpected lock is - already held errors. + already held errors. @@ -4972,7 +4972,7 @@ Fix missed permissions checks when a view contains a simple - UNION ALL construct (Heikki) + UNION ALL construct (Heikki) @@ -4984,7 +4984,7 @@ Add checks in executor startup to ensure that the tuples produced by an - INSERT or UPDATE will match the target table's + INSERT or UPDATE will match the target table's current rowtype (Tom) @@ -4996,12 +4996,12 @@ - Fix possible repeated drops during DROP OWNED (Tom) + Fix possible repeated drops during DROP OWNED (Tom) This would typically result in strange errors such as cache - lookup failed for relation NNN. + lookup failed for relation NNN. @@ -5013,7 +5013,7 @@ - Fix xmlserialize() to raise error properly for + Fix xmlserialize() to raise error properly for unacceptable target data type (Tom) @@ -5026,7 +5026,7 @@ Certain characters occurring in configuration files would always cause - invalid byte sequence for encoding failures. + invalid byte sequence for encoding failures. @@ -5039,18 +5039,18 @@ - Fix AT TIME ZONE to first try to interpret its timezone + Fix AT TIME ZONE to first try to interpret its timezone argument as a timezone abbreviation, and only try it as a full timezone name if that fails, rather than the other way around as formerly (Tom) The timestamp input functions have always resolved ambiguous zone names - in this order. Making AT TIME ZONE do so as well improves + in this order. Making AT TIME ZONE do so as well improves consistency, and fixes a compatibility bug introduced in 8.1: in ambiguous cases we now behave the same as 8.0 and before did, - since in the older versions AT TIME ZONE accepted - only abbreviations. + since in the older versions AT TIME ZONE accepted + only abbreviations. @@ -5077,26 +5077,26 @@ Allow spaces in the suffix part of an LDAP URL in - pg_hba.conf (Tom) + pg_hba.conf (Tom) Fix bug in backwards scanning of a cursor on a SELECT DISTINCT - ON query (Tom) + ON query (Tom) - Fix planner bug that could improperly push down IS NULL + Fix planner bug that could improperly push down IS NULL tests below an outer join (Tom) - This was triggered by occurrence of IS NULL tests for - the same relation in all arms of an upper OR clause. + This was triggered by occurrence of IS NULL tests for + the same relation in all arms of an upper OR clause. @@ -5114,21 +5114,21 @@ - Fix planner to estimate that GROUP BY expressions yielding + Fix planner to estimate that GROUP BY expressions yielding boolean results always result in two groups, regardless of the expressions' contents (Tom) This is very substantially more accurate than the regular GROUP - BY estimate for certain boolean tests like col - IS NULL. + BY estimate for certain boolean tests like col + IS NULL. 
- Fix PL/pgSQL to not fail when a FOR loop's target variable + Fix PL/pgSQL to not fail when a FOR loop's target variable is a record containing composite-type fields (Tom) @@ -5142,49 +5142,49 @@ - Improve performance of PQescapeBytea() (Rudolf Leitgeb) + Improve performance of PQescapeBytea() (Rudolf Leitgeb) On Windows, work around a Microsoft bug by preventing - libpq from trying to send more than 64kB per system call + libpq from trying to send more than 64kB per system call (Magnus) - Fix ecpg to handle variables properly in SET + Fix ecpg to handle variables properly in SET commands (Michael) - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's error reporting after failure to send a SQL command (Tom) - Fix pg_ctl to properly preserve postmaster - command-line arguments across a restart (Bruce) + Fix pg_ctl to properly preserve postmaster + command-line arguments across a restart (Bruce) Fix erroneous WAL file cutoff point calculation in - pg_standby (Simon) + pg_standby (Simon) - Update time zone data files to tzdata release 2008f (for + Update time zone data files to tzdata release 2008f (for DST law changes in Argentina, Bahamas, Brazil, Mauritius, Morocco, Pakistan, Palestine, and Paraguay) @@ -5227,18 +5227,18 @@ - Make pg_get_ruledef() parenthesize negative constants (Tom) + Make pg_get_ruledef() parenthesize negative constants (Tom) Before this fix, a negative constant in a view or rule might be dumped - as, say, -42::integer, which is subtly incorrect: it should - be (-42)::integer due to operator precedence rules. + as, say, -42::integer, which is subtly incorrect: it should + be (-42)::integer due to operator precedence rules. Usually this would make little difference, but it could interact with another recent patch to cause - PostgreSQL to reject what had been a valid - SELECT DISTINCT view query. Since this could result in - pg_dump output failing to reload, it is being treated + PostgreSQL to reject what had been a valid + SELECT DISTINCT view query. Since this could result in + pg_dump output failing to reload, it is being treated as a high-priority fix. The only released versions in which dump output is actually incorrect are 8.3.1 and 8.2.7. @@ -5246,13 +5246,13 @@ - Make ALTER AGGREGATE ... OWNER TO update - pg_shdepend (Tom) + Make ALTER AGGREGATE ... OWNER TO update + pg_shdepend (Tom) This oversight could lead to problems if the aggregate was later - involved in a DROP OWNED or REASSIGN OWNED + involved in a DROP OWNED or REASSIGN OWNED operation. @@ -5303,19 +5303,19 @@ Fix incorrect archive truncation point calculation for the - %r macro in restore_command parameters + %r macro in restore_command parameters (Simon) This could lead to data loss if a warm-standby script relied on - %r to decide when to throw away WAL segment files. + %r to decide when to throw away WAL segment files. - Fix ALTER TABLE ADD COLUMN ... PRIMARY KEY so that the new + Fix ALTER TABLE ADD COLUMN ... 
PRIMARY KEY so that the new column is correctly checked to see if it's been initialized to all non-nulls (Brendan Jurd) @@ -5327,31 +5327,31 @@ - Fix REASSIGN OWNED so that it works on procedural + Fix REASSIGN OWNED so that it works on procedural languages too (Alvaro) - Fix problems with SELECT FOR UPDATE/SHARE occurring as a - subquery in a query with a non-SELECT top-level operation + Fix problems with SELECT FOR UPDATE/SHARE occurring as a + subquery in a query with a non-SELECT top-level operation (Tom) - Fix possible CREATE TABLE failure when inheriting the - same constraint from multiple parent relations that + Fix possible CREATE TABLE failure when inheriting the + same constraint from multiple parent relations that inherited that constraint from a common ancestor (Tom) - Fix pg_get_ruledef() to show the alias, if any, attached - to the target table of an UPDATE or DELETE + Fix pg_get_ruledef() to show the alias, if any, attached + to the target table of an UPDATE or DELETE (Tom) @@ -5377,13 +5377,13 @@ - Fix broken GiST comparison function for tsquery (Teodor) + Fix broken GiST comparison function for tsquery (Teodor) - Fix tsvector_update_trigger() and ts_stat() + Fix tsvector_update_trigger() and ts_stat() to accept domains over the types they expect to work with (Tom) @@ -5404,7 +5404,7 @@ Fix race conditions between delayed unlinks and DROP - DATABASE (Heikki) + DATABASE (Heikki) @@ -5431,11 +5431,11 @@ Fix possible crash due to incorrect plan generated for an - x IN (SELECT y - FROM ...) clause when x and y + x IN (SELECT y + FROM ...) clause when x and y have different data types; and make sure the behavior is semantically - correct when the conversion from y's type to - x's type is lossy (Tom) + correct when the conversion from y's type to + x's type is lossy (Tom) @@ -5456,15 +5456,15 @@ - Fix planner failure when an indexable MIN or - MAX aggregate is used with DISTINCT or - ORDER BY (Tom) + Fix planner failure when an indexable MIN or + MAX aggregate is used with DISTINCT or + ORDER BY (Tom) - Fix planner to ensure it never uses a physical tlist for a + Fix planner to ensure it never uses a physical tlist for a plan node that is feeding a Sort node (Tom) @@ -5488,7 +5488,7 @@ - Make TransactionIdIsCurrentTransactionId() use binary + Make TransactionIdIsCurrentTransactionId() use binary search instead of linear search when checking child-transaction XIDs (Heikki) @@ -5502,14 +5502,14 @@ Fix conversions between ISO-8859-5 and other encodings to handle - Cyrillic Yo characters (e and E with + Cyrillic Yo characters (e and E with two dots) (Sergey Burladyan) - Fix several datatype input functions, notably array_in(), + Fix several datatype input functions, notably array_in(), that were allowing unused bytes in their results to contain uninitialized, unpredictable values (Tom) @@ -5517,7 +5517,7 @@ This could lead to failures in which two apparently identical literal values were not seen as equal, resulting in the parser complaining - about unmatched ORDER BY and DISTINCT + about unmatched ORDER BY and DISTINCT expressions. @@ -5525,18 +5525,18 @@ Fix a corner case in regular-expression substring matching - (substring(string from - pattern)) (Tom) + (substring(string from + pattern)) (Tom) The problem occurs when there is a match to the pattern overall but the user has specified a parenthesized subexpression and that subexpression hasn't got a match. An example is - substring('foo' from 'foo(bar)?'). 
- This should return NULL, since (bar) isn't matched, but + substring('foo' from 'foo(bar)?'). + This should return NULL, since (bar) isn't matched, but it was mistakenly returning the whole-pattern match instead (ie, - foo). + foo). @@ -5549,7 +5549,7 @@ - Improve ANALYZE's handling of in-doubt tuples (those + Improve ANALYZE's handling of in-doubt tuples (those inserted or deleted by a not-yet-committed transaction) so that the counts it reports to the stats collector are more likely to be correct (Pavan Deolasee) @@ -5558,14 +5558,14 @@ - Fix initdb to reject a relative path for its - --xlogdir (-X) option (Tom) + Fix initdb to reject a relative path for its + --xlogdir (-X) option (Tom) - Make psql print tab characters as an appropriate + Make psql print tab characters as an appropriate number of spaces, rather than \x09 as was done in 8.3.0 and 8.3.1 (Bruce) @@ -5573,7 +5573,7 @@ - Update time zone data files to tzdata release 2008c (for + Update time zone data files to tzdata release 2008c (for DST law changes in Morocco, Iraq, Choibalsan, Pakistan, Syria, Cuba, and Argentina/San_Luis) @@ -5581,44 +5581,44 @@ - Add ECPGget_PGconn() function to - ecpglib (Michael) + Add ECPGget_PGconn() function to + ecpglib (Michael) - Fix incorrect result from ecpg's - PGTYPEStimestamp_sub() function (Michael) + Fix incorrect result from ecpg's + PGTYPEStimestamp_sub() function (Michael) - Fix handling of continuation line markers in ecpg + Fix handling of continuation line markers in ecpg (Michael) - Fix possible crashes in contrib/cube functions (Tom) + Fix possible crashes in contrib/cube functions (Tom) - Fix core dump in contrib/xml2's - xpath_table() function when the input query returns a + Fix core dump in contrib/xml2's + xpath_table() function when the input query returns a NULL value (Tom) - Fix contrib/xml2's makefile to not override - CFLAGS, and make it auto-configure properly for - libxslt present or not (Tom) + Fix contrib/xml2's makefile to not override + CFLAGS, and make it auto-configure properly for + libxslt present or not (Tom) @@ -5646,7 +5646,7 @@ A dump/restore is not required for those running 8.3.X. - However, you might need to REINDEX indexes on textual + However, you might need to REINDEX indexes on textual columns after updating, if you are affected by the Windows locale issue described below. @@ -5670,17 +5670,17 @@ over two years ago, but Windows with UTF-8 uses a separate code path that was not updated. If you are using a locale that considers some non-identical strings as equal, you may need to - REINDEX to fix existing indexes on textual columns. + REINDEX to fix existing indexes on textual columns. - Repair corner-case bugs in VACUUM FULL (Tom) + Repair corner-case bugs in VACUUM FULL (Tom) - A potential deadlock between concurrent VACUUM FULL + A potential deadlock between concurrent VACUUM FULL operations on different system catalogs was introduced in 8.2. This has now been corrected. 8.3 made this worse because the deadlock could occur within a critical code section, making it @@ -5688,13 +5688,13 @@ - Also, a VACUUM FULL that failed partway through + Also, a VACUUM FULL that failed partway through vacuuming a system catalog could result in cache corruption in concurrent database sessions. - Another VACUUM FULL bug introduced in 8.3 could + Another VACUUM FULL bug introduced in 8.3 could result in a crash or out-of-memory report when dealing with pages containing no live tuples. 
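Restating the substring() corner case walked through above as runnable statements:

    SELECT substring('foo' from 'foo(bar)?');      -- NULL: the (bar) subexpression did not match
    SELECT substring('foobar' from 'foo(bar)?');   -- 'bar': the subexpression matched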
@@ -5702,13 +5702,13 @@ - Fix misbehavior of foreign key checks involving character - or bit columns (Tom) + Fix misbehavior of foreign key checks involving character + or bit columns (Tom) If the referencing column were of a different but compatible type - (for instance varchar), the constraint was enforced incorrectly. + (for instance varchar), the constraint was enforced incorrectly. @@ -5726,7 +5726,7 @@ This bug affected only protocol-level prepare operations, not - SQL PREPARE, and so tended to be seen only with + SQL PREPARE, and so tended to be seen only with JDBC, DBI, and other client-side drivers that use prepared statements heavily. @@ -5748,21 +5748,21 @@ - Fix longstanding LISTEN/NOTIFY + Fix longstanding LISTEN/NOTIFY race condition (Tom) In rare cases a session that had just executed a - LISTEN might not get a notification, even though + LISTEN might not get a notification, even though one would be expected because the concurrent transaction executing - NOTIFY was observed to commit later. + NOTIFY was observed to commit later. A side effect of the fix is that a transaction that has executed - a not-yet-committed LISTEN command will not see any - row in pg_listener for the LISTEN, + a not-yet-committed LISTEN command will not see any + row in pg_listener for the LISTEN, should it choose to look; formerly it would have. This behavior was never documented one way or the other, but it is possible that some applications depend on the old behavior. @@ -5771,14 +5771,14 @@ - Disallow LISTEN and UNLISTEN within a + Disallow LISTEN and UNLISTEN within a prepared transaction (Tom) This was formerly allowed but trying to do it had various unpleasant consequences, notably that the originating backend could not exit - as long as an UNLISTEN remained uncommitted. + as long as an UNLISTEN remained uncommitted. @@ -5803,20 +5803,20 @@ - Fix incorrect comparison of tsquery values (Teodor) + Fix incorrect comparison of tsquery values (Teodor) - Fix incorrect behavior of LIKE with non-ASCII characters + Fix incorrect behavior of LIKE with non-ASCII characters in single-byte encodings (Rolf Jentsch) - Disable xmlvalidate (Tom) + Disable xmlvalidate (Tom) @@ -5835,8 +5835,8 @@ - Make encode(bytea, 'escape') convert all - high-bit-set byte values into \nnn octal + Make encode(bytea, 'escape') convert all + high-bit-set byte values into \nnn octal escape sequences (Tom) @@ -5844,7 +5844,7 @@ This is necessary to avoid encoding problems when the database encoding is multi-byte. This change could pose compatibility issues for applications that are expecting specific results from - encode. + encode. 
@@ -5860,21 +5860,21 @@ - Fix unrecognized node type error in some variants of - ALTER OWNER (Tom) + Fix unrecognized node type error in some variants of + ALTER OWNER (Tom) Avoid tablespace permissions errors in CREATE TABLE LIKE - INCLUDING INDEXES (Tom) + INCLUDING INDEXES (Tom) - Ensure pg_stat_activity.waiting flag + Ensure pg_stat_activity.waiting flag is cleared when a lock wait is aborted (Tom) @@ -5892,26 +5892,26 @@ - Update time zone data files to tzdata release 2008a + Update time zone data files to tzdata release 2008a (in particular, recent Chile changes); adjust timezone abbreviation - VET (Venezuela) to mean UTC-4:30, not UTC-4:00 (Tom) + VET (Venezuela) to mean UTC-4:30, not UTC-4:00 (Tom) - Fix ecpg problems with arrays (Michael) + Fix ecpg problems with arrays (Michael) - Fix pg_ctl to correctly extract the postmaster's port + Fix pg_ctl to correctly extract the postmaster's port number from command-line options (Itagaki Takahiro, Tom) - Previously, pg_ctl start -w could try to contact the + Previously, pg_ctl start -w could try to contact the postmaster on the wrong port, leading to bogus reports of startup failure. @@ -5919,19 +5919,19 @@ - Use - This is known to be necessary when building PostgreSQL - with gcc 4.3 or later. + This is known to be necessary when building PostgreSQL + with gcc 4.3 or later. - Enable building contrib/uuid-ossp with MSVC (Hiroshi Saito) + Enable building contrib/uuid-ossp with MSVC (Hiroshi Saito) @@ -5954,7 +5954,7 @@ With significant new functionality and performance enhancements, this release represents a major leap forward for - PostgreSQL. This was made possible by a growing + PostgreSQL. This was made possible by a growing community that has dramatically accelerated the pace of development. This release adds the following major features: @@ -5988,13 +5988,13 @@ - Universally Unique Identifier (UUID) data type + Universally Unique Identifier (UUID) data type - Add control over whether NULLs sort first or last + Add control over whether NULLs sort first or last @@ -6032,7 +6032,7 @@ - Support Security Service Provider Interface (SSPI) for + Support Security Service Provider Interface (SSPI) for authentication on Windows @@ -6046,8 +6046,8 @@ - Allow the whole PostgreSQL distribution to be compiled - with Microsoft Visual C++ + Allow the whole PostgreSQL distribution to be compiled + with Microsoft Visual C++ @@ -6076,8 +6076,8 @@ - Heap-Only Tuples (HOT) accelerate space reuse for - most UPDATEs and DELETEs + Heap-Only Tuples (HOT) accelerate space reuse for + most UPDATEs and DELETEs @@ -6091,7 +6091,7 @@ Using non-persistent transaction IDs for read-only transactions - reduces overhead and VACUUM requirements + reduces overhead and VACUUM requirements @@ -6116,7 +6116,7 @@ - ORDER BY ... LIMIT can be done without sorting + ORDER BY ... LIMIT can be done without sorting @@ -6148,14 +6148,14 @@ Non-character data types are no longer automatically cast to - TEXT (Peter, Tom) + TEXT (Peter, Tom) Previously, if a non-character value was supplied to an operator or - function that requires text input, it was automatically - cast to text, for most (though not all) built-in data types. - This no longer happens: an explicit cast to text is now + function that requires text input, it was automatically + cast to text, for most (though not all) built-in data types. + This no longer happens: an explicit cast to text is now required for all non-character-string types. 
For example, these expressions formerly worked: @@ -6164,15 +6164,15 @@ substr(current_date, 1, 4) 23 LIKE '2%' - but will now draw function does not exist and operator - does not exist errors respectively. Use an explicit cast instead: + but will now draw function does not exist and operator + does not exist errors respectively. Use an explicit cast instead: substr(current_date::text, 1, 4) 23::text LIKE '2%' - (Of course, you can use the more verbose CAST() syntax too.) + (Of course, you can use the more verbose CAST() syntax too.) The reason for the change is that these automatic casts too often caused surprising behavior. An example is that in previous releases, this expression was accepted but did not do what was expected: @@ -6183,35 +6183,35 @@ current_date < 2017-11-17 This is actually comparing a date to an integer, which should be (and now is) rejected — but in the presence of automatic - casts both sides were cast to text and a textual comparison - was done, because the text < text operator was able - to match the expression when no other < operator could. + casts both sides were cast to text and a textual comparison + was done, because the text < text operator was able + to match the expression when no other < operator could. - Types char(n) and - varchar(n) still cast to text - automatically. Also, automatic casting to text still works for - inputs to the concatenation (||) operator, so long as least + Types char(n) and + varchar(n) still cast to text + automatically. Also, automatic casting to text still works for + inputs to the concatenation (||) operator, so long as least one input is a character-string type. - Full text search features from contrib/tsearch2 have + Full text search features from contrib/tsearch2 have been moved into the core server, with some minor syntax changes - contrib/tsearch2 now contains a compatibility + contrib/tsearch2 now contains a compatibility interface. - ARRAY(SELECT ...), where the SELECT + ARRAY(SELECT ...), where the SELECT returns no rows, now returns an empty array, rather than NULL (Tom) @@ -6233,8 +6233,8 @@ current_date < 2017-11-17 - ORDER BY ... USING operator must now - use a less-than or greater-than operator that is + ORDER BY ... USING operator must now + use a less-than or greater-than operator that is defined in a btree operator class @@ -6251,7 +6251,7 @@ current_date < 2017-11-17 Previously SET LOCAL's effects were lost - after subtransaction commit (RELEASE SAVEPOINT + after subtransaction commit (RELEASE SAVEPOINT or exit from a PL/pgSQL exception block). @@ -6263,15 +6263,15 @@ current_date < 2017-11-17 - For example, "BEGIN; DROP DATABASE; COMMIT" will now be + For example, "BEGIN; DROP DATABASE; COMMIT" will now be rejected even if submitted as a single query message. - ROLLBACK outside a transaction block now - issues NOTICE instead of WARNING (Bruce) + ROLLBACK outside a transaction block now + issues NOTICE instead of WARNING (Bruce) @@ -6282,15 +6282,15 @@ current_date < 2017-11-17 - Formerly, these commands accepted schema.relation but + Formerly, these commands accepted schema.relation but ignored the schema part, which was confusing. - ALTER SEQUENCE no longer affects the sequence's - currval() state (Tom) + ALTER SEQUENCE no longer affects the sequence's + currval() state (Tom) @@ -6314,16 +6314,16 @@ current_date < 2017-11-17 For example, pg_database_size() now requires - CONNECT permission, which is granted to everyone by + CONNECT permission, which is granted to everyone by default. 
pg_tablespace_size() requires - CREATE permission in the tablespace, or is allowed if + CREATE permission in the tablespace, or is allowed if the tablespace is the default tablespace for the database. - Remove the undocumented !!= (not in) operator (Tom) + Remove the undocumented !!= (not in) operator (Tom) @@ -6339,7 +6339,7 @@ current_date < 2017-11-17 If application code was computing and storing hash values using - internal PostgreSQL hashing functions, the hash + internal PostgreSQL hashing functions, the hash values must be regenerated. @@ -6351,8 +6351,8 @@ current_date < 2017-11-17 - The new SET_VARSIZE() macro must be used - to set the length of generated varlena values. Also, it + The new SET_VARSIZE() macro must be used + to set the length of generated varlena values. Also, it might be necessary to expand (de-TOAST) input values in more cases. @@ -6361,7 +6361,7 @@ current_date < 2017-11-17 Continuous archiving no longer reports each successful archive - operation to the server logs unless DEBUG level is used + operation to the server logs unless DEBUG level is used (Simon) @@ -6381,18 +6381,18 @@ current_date < 2017-11-17 - bgwriter_lru_percent, - bgwriter_all_percent, - bgwriter_all_maxpages, - stats_start_collector, and - stats_reset_on_server_start are removed. - redirect_stderr is renamed to - logging_collector. - stats_command_string is renamed to - track_activities. - stats_block_level and stats_row_level - are merged into track_counts. - A new boolean configuration parameter, archive_mode, + bgwriter_lru_percent, + bgwriter_all_percent, + bgwriter_all_maxpages, + stats_start_collector, and + stats_reset_on_server_start are removed. + redirect_stderr is renamed to + logging_collector. + stats_command_string is renamed to + track_activities. + stats_block_level and stats_row_level + are merged into track_counts. + A new boolean configuration parameter, archive_mode, controls archiving. Autovacuum's default settings have changed. @@ -6403,7 +6403,7 @@ current_date < 2017-11-17 - We now always start the collector process, unless UDP + We now always start the collector process, unless UDP socket creation fails. @@ -6421,7 +6421,7 @@ current_date < 2017-11-17 - Commenting out a parameter in postgresql.conf now + Commenting out a parameter in postgresql.conf now causes it to revert to its default value (Joachim Wieland) @@ -6461,12 +6461,12 @@ current_date < 2017-11-17 - On most platforms, C locale is the only locale that + On most platforms, C locale is the only locale that will work with any database encoding. Other locale settings imply a specific encoding and will misbehave if the database encoding is something different. (Typical symptoms include bogus textual - sort order and wrong results from upper() or - lower().) The server now rejects attempts to create + sort order and wrong results from upper() or + lower().) The server now rejects attempts to create databases that have an incompatible encoding. 
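A minimal sketch of the new encoding check described above (the database name is hypothetical, and the exact server message is not quoted here); under a non-C server locale, only the encoding implied by that locale is accepted:

    -- with a LATIN1-implying server locale, this is now rejected:
    CREATE DATABASE app_db WITH ENCODING 'UTF8';   -- fails: incompatible with locale
    -- the matching encoding is accepted:
    CREATE DATABASE app_db WITH ENCODING 'LATIN1';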
@@ -6503,7 +6503,7 @@ current_date < 2017-11-17 convert_from(bytea, name) returns - text — converts the first argument from the named + text — converts the first argument from the named encoding to the database encoding @@ -6511,7 +6511,7 @@ current_date < 2017-11-17 convert_to(text, name) returns - bytea — converts the first argument from the + bytea — converts the first argument from the database encoding to the named encoding @@ -6519,7 +6519,7 @@ current_date < 2017-11-17 length(bytea, name) returns - integer — gives the length of the first + integer — gives the length of the first argument in characters in the named encoding @@ -6582,10 +6582,10 @@ current_date < 2017-11-17 database consistency at risk; the worst case is that after a crash the last few reportedly-committed transactions might not be committed after all. - This feature is enabled by turning off synchronous_commit + This feature is enabled by turning off synchronous_commit (which can be done per-session or per-transaction, if some transactions are critical and others are not). - wal_writer_delay can be adjusted to control the maximum + wal_writer_delay can be adjusted to control the maximum delay before transactions actually reach disk. @@ -6609,19 +6609,19 @@ current_date < 2017-11-17 - Heap-Only Tuples (HOT) accelerate space reuse for most - UPDATEs and DELETEs (Pavan Deolasee, with + Heap-Only Tuples (HOT) accelerate space reuse for most + UPDATEs and DELETEs (Pavan Deolasee, with ideas from many others) - UPDATEs and DELETEs leave dead tuples - behind, as do failed INSERTs. Previously only - VACUUM could reclaim space taken by dead tuples. With - HOT dead tuple space can be automatically reclaimed at - the time of INSERT or UPDATE if no changes + UPDATEs and DELETEs leave dead tuples + behind, as do failed INSERTs. Previously only + VACUUM could reclaim space taken by dead tuples. With + HOT dead tuple space can be automatically reclaimed at + the time of INSERT or UPDATE if no changes are made to indexed columns. This allows for more consistent - performance. Also, HOT avoids adding duplicate index + performance. Also, HOT avoids adding duplicate index entries. @@ -6655,13 +6655,13 @@ current_date < 2017-11-17 Using non-persistent transaction IDs for read-only transactions - reduces overhead and VACUUM requirements (Florian Pflug) + reduces overhead and VACUUM requirements (Florian Pflug) Non-persistent transaction IDs do not increment the global transaction counter. Therefore, they reduce the load on - pg_clog and increase the time between forced + pg_clog and increase the time between forced vacuums to prevent transaction ID wraparound. Other performance improvements were also made that should improve concurrency. @@ -6674,7 +6674,7 @@ current_date < 2017-11-17 - There was formerly a hard limit of 232 + There was formerly a hard limit of 232 (4 billion) commands per transaction. Now only commands that actually changed the database count, so while this limit still exists, it should be significantly less annoying. @@ -6683,7 +6683,7 @@ current_date < 2017-11-17 - Create a dedicated WAL writer process to off-load + Create a dedicated WAL writer process to off-load work from backends (Simon) @@ -6696,7 +6696,7 @@ current_date < 2017-11-17 Unless WAL archiving is enabled, the system now avoids WAL writes - for CLUSTER and just fsync()s the + for CLUSTER and just fsync()s the table at the end of the command. It also does the same for COPY if the table was created in the same transaction. 
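A minimal sketch of the asynchronous-commit setting described in this hunk (the table is hypothetical); SET LOCAL scopes the change to a single transaction, matching the per-transaction usage the notes mention:

    BEGIN;
    SET LOCAL synchronous_commit TO off;  -- lasts only for this transaction
    INSERT INTO audit_log VALUES ('low-value entry');
    COMMIT;   -- returns before the WAL record is flushed to disk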
@@ -6720,22 +6720,22 @@ current_date < 2017-11-17 middle of the table (where another sequential scan is already in-progress) and wrapping around to the beginning to finish. This can affect the order of returned rows in a query that does not - specify ORDER BY. The synchronize_seqscans + specify ORDER BY. The synchronize_seqscans configuration parameter can be used to disable this if necessary. - ORDER BY ... LIMIT can be done without sorting + ORDER BY ... LIMIT can be done without sorting (Greg Stark) This is done by sequentially scanning the table and tracking just - the top N candidate rows, rather than performing a + the top N candidate rows, rather than performing a full sort of the entire table. This is useful when there is no - matching index and the LIMIT is not large. + matching index and the LIMIT is not large. @@ -6805,7 +6805,7 @@ current_date < 2017-11-17 Previously PL/pgSQL functions that referenced temporary tables would fail if the temporary table was dropped and recreated - between function invocations, unless EXECUTE was + between function invocations, unless EXECUTE was used. This improvement fixes that problem and many related issues. @@ -6830,7 +6830,7 @@ current_date < 2017-11-17 Place temporary tables' TOAST tables in special schemas named - pg_toast_temp_nnn (Tom) + pg_toast_temp_nnn (Tom) @@ -6860,7 +6860,7 @@ current_date < 2017-11-17 - Fix CREATE CONSTRAINT TRIGGER + Fix CREATE CONSTRAINT TRIGGER to convert old-style foreign key trigger definitions into regular foreign key constraints (Tom) @@ -6868,17 +6868,17 @@ current_date < 2017-11-17 This will ease porting of foreign key constraints carried forward from pre-7.3 databases, if they were never converted using - contrib/adddepend. + contrib/adddepend. - Fix DEFAULT NULL to override inherited defaults (Tom) + Fix DEFAULT NULL to override inherited defaults (Tom) - DEFAULT NULL was formerly considered a noise phrase, but it + DEFAULT NULL was formerly considered a noise phrase, but it should (and now does) override non-null defaults that would otherwise be inherited from a parent table or domain. @@ -6998,9 +6998,9 @@ current_date < 2017-11-17 This avoids Windows-specific problems with localized time zone names that are in the wrong encoding. There is a new - log_timezone parameter that controls the timezone + log_timezone parameter that controls the timezone used in log messages, independently of the client-visible - timezone parameter. + timezone parameter. 
@@ -7031,7 +7031,7 @@ current_date < 2017-11-17 - Add n_live_tuples and n_dead_tuples columns + Add n_live_tuples and n_dead_tuples columns to pg_stat_all_tables and related views (Glen Parker) @@ -7039,8 +7039,8 @@ current_date < 2017-11-17 - Merge stats_block_level and stats_row_level - parameters into a single parameter track_counts, which + Merge stats_block_level and stats_row_level + parameters into a single parameter track_counts, which controls all messages sent to the statistics collector process (Tom) @@ -7070,7 +7070,7 @@ current_date < 2017-11-17 - Support Security Service Provider Interface (SSPI) for + Support Security Service Provider Interface (SSPI) for authentication on Windows (Magnus) @@ -7094,14 +7094,14 @@ current_date < 2017-11-17 - Add ssl_ciphers parameter to control accepted SSL ciphers + Add ssl_ciphers parameter to control accepted SSL ciphers (Victor Wagner) - Add a Kerberos realm parameter, krb_realm (Magnus) + Add a Kerberos realm parameter, krb_realm (Magnus) @@ -7110,7 +7110,7 @@ current_date < 2017-11-17 - Write-Ahead Log (<acronym>WAL</>) and Continuous Archiving + Write-Ahead Log (<acronym>WAL</acronym>) and Continuous Archiving @@ -7133,7 +7133,7 @@ current_date < 2017-11-17 This change allows a warm standby server to pass the name of the earliest still-needed WAL file to the recovery script, allowing automatic removal - of no-longer-needed WAL files. This is done using %r in + of no-longer-needed WAL files. This is done using %r in the restore_command parameter of recovery.conf. @@ -7141,14 +7141,14 @@ current_date < 2017-11-17 - New boolean configuration parameter, archive_mode, + New boolean configuration parameter, archive_mode, controls archiving (Simon) - Previously setting archive_command to an empty string - turned off archiving. Now archive_mode turns archiving - on and off, independently of archive_command. This is + Previously setting archive_command to an empty string + turned off archiving. Now archive_mode turns archiving + on and off, independently of archive_command. This is useful for stopping archiving temporarily. @@ -7169,40 +7169,40 @@ current_date < 2017-11-17 Text search has been improved, moved into the core code, and is now - installed by default. contrib/tsearch2 now contains + installed by default. contrib/tsearch2 now contains a compatibility interface. - Add control over whether NULLs sort first or last (Teodor, Tom) + Add control over whether NULLs sort first or last (Teodor, Tom) - The syntax is ORDER BY ... NULLS FIRST/LAST. + The syntax is ORDER BY ... NULLS FIRST/LAST. - Allow per-column ascending/descending (ASC/DESC) + Allow per-column ascending/descending (ASC/DESC) ordering options for indexes (Teodor, Tom) - Previously a query using ORDER BY with mixed - ASC/DESC specifiers could not fully use + Previously a query using ORDER BY with mixed + ASC/DESC specifiers could not fully use an index. Now an index can be fully used in such cases if the index was created with matching - ASC/DESC specifications. - NULL sort order within an index can be controlled, too. + ASC/DESC specifications. + NULL sort order within an index can be controlled, too. - Allow col IS NULL to use an index (Teodor) + Allow col IS NULL to use an index (Teodor) @@ -7213,8 +7213,8 @@ current_date < 2017-11-17 This eliminates the need to reference a primary key to - UPDATE or DELETE rows returned by a cursor. - The syntax is UPDATE/DELETE WHERE CURRENT OF. + UPDATE or DELETE rows returned by a cursor. + The syntax is UPDATE/DELETE WHERE CURRENT OF. 
@@ -7243,7 +7243,7 @@ current_date < 2017-11-17 - Allow UNION and related constructs to return a domain + Allow UNION and related constructs to return a domain type, when all inputs are of that domain type (Tom) @@ -7271,7 +7271,7 @@ current_date < 2017-11-17 Improve optimizer logic for detecting when variables are equal - in a WHERE clause (Tom) + in a WHERE clause (Tom) @@ -7318,8 +7318,8 @@ current_date < 2017-11-17 For example, functions can now set their own - search_path to prevent unexpected behavior if a - different search_path exists at run-time. Security + search_path to prevent unexpected behavior if a + different search_path exists at run-time. Security definer functions should set search_path to avoid security loopholes. @@ -7367,7 +7367,7 @@ current_date < 2017-11-17 - Make CREATE/DROP/RENAME DATABASE wait briefly for + Make CREATE/DROP/RENAME DATABASE wait briefly for conflicting backends to exit before failing (Tom) @@ -7385,7 +7385,7 @@ current_date < 2017-11-17 This allows replication systems to disable triggers and rewrite rules as a group without modifying the system catalogs directly. - The behavior is controlled by ALTER TABLE and a new + The behavior is controlled by ALTER TABLE and a new parameter session_replication_role. @@ -7397,7 +7397,7 @@ current_date < 2017-11-17 This allows a user-defined type to take a modifier, like - ssnum(7). Previously only built-in + ssnum(7). Previously only built-in data types could have modifiers. @@ -7419,7 +7419,7 @@ current_date < 2017-11-17 While this is reasonably safe, some administrators might wish to revoke the privilege. It is controlled by - pg_pltemplate.tmpldbacreate. + pg_pltemplate.tmpldbacreate. @@ -7465,7 +7465,7 @@ current_date < 2017-11-17 Add new CLUSTER syntax: CLUSTER - table USING index + table USING index (Holger Schurig) @@ -7483,7 +7483,7 @@ current_date < 2017-11-17 References to subplan outputs are now always shown correctly, - instead of using ?columnN? + instead of using ?columnN? for complicated cases. @@ -7527,19 +7527,19 @@ current_date < 2017-11-17 This feature provides convenient support for fields that have a small, fixed set of allowed values. An example of creating an - ENUM type is - CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy'). + ENUM type is + CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy'). - Universally Unique Identifier (UUID) data type (Gevik + Universally Unique Identifier (UUID) data type (Gevik Babakhani, Neil) - This closely matches RFC 4122. + This closely matches RFC 4122. @@ -7549,7 +7549,7 @@ current_date < 2017-11-17 - This greatly increases the range of supported MONEY + This greatly increases the range of supported MONEY values. @@ -7557,13 +7557,13 @@ current_date < 2017-11-17 Fix float4/float8 to handle - Infinity and NAN (Not A Number) + Infinity and NAN (Not A Number) consistently (Bruce) The code formerly was not consistent about distinguishing - Infinity from overflow conditions. + Infinity from overflow conditions. @@ -7576,7 +7576,7 @@ current_date < 2017-11-17 - Prevent COPY from using digits and lowercase letters as + Prevent COPY from using digits and lowercase letters as delimiters (Tom) @@ -7613,7 +7613,7 @@ current_date < 2017-11-17 - Implement width_bucket() for the float8 + Implement width_bucket() for the float8 data type (Neil) @@ -7636,34 +7636,34 @@ current_date < 2017-11-17 - Add isodow option to EXTRACT() and - date_part() (Bruce) + Add isodow option to EXTRACT() and + date_part() (Bruce) This returns the day of the week, with Sunday as seven. 
- (dow returns Sunday as zero.) + (dow returns Sunday as zero.) - Add ID (ISO day of week) and IDDD (ISO - day of year) format codes for to_char(), - to_date(), and to_timestamp() (Brendan + Add ID (ISO day of week) and IDDD (ISO + day of year) format codes for to_char(), + to_date(), and to_timestamp() (Brendan Jurd) - Make to_timestamp() and to_date() + Make to_timestamp() and to_date() assume TM (trim) option for potentially variable-width fields (Bruce) - This matches Oracle's behavior. + This matches Oracle's behavior. @@ -7671,7 +7671,7 @@ current_date < 2017-11-17 Fix off-by-one conversion error in to_date()/to_timestamp() - D (non-ISO day of week) fields (Bruce) + D (non-ISO day of week) fields (Bruce) @@ -7757,7 +7757,7 @@ current_date < 2017-11-17 This adds convenient syntax for PL/pgSQL set-returning functions - that want to return the result of a query. RETURN QUERY + that want to return the result of a query. RETURN QUERY is easier and more efficient than a loop around RETURN NEXT. @@ -7770,7 +7770,7 @@ current_date < 2017-11-17 - For example, myfunc.myvar. This is particularly + For example, myfunc.myvar. This is particularly useful for specifying variables in a query where the variable name might match a column name. @@ -7790,11 +7790,11 @@ current_date < 2017-11-17 Tighten requirements for FOR loop - STEP values (Tom) + STEP values (Tom) - Prevent non-positive STEP values, and handle + Prevent non-positive STEP values, and handle loop overflows. @@ -7831,7 +7831,7 @@ current_date < 2017-11-17 - Allow type-name arguments to PL/Tcl spi_prepare to + Allow type-name arguments to PL/Tcl spi_prepare to be data type aliases in addition to names found in pg_type (Andrew) @@ -7852,7 +7852,7 @@ current_date < 2017-11-17 - Fix PL/Tcl problems with thread-enabled libtcl spawning + Fix PL/Tcl problems with thread-enabled libtcl spawning multiple threads within the backend (Steve Marshall, Paul Bayer, Doug Knight) @@ -7867,7 +7867,7 @@ current_date < 2017-11-17 - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> @@ -7907,20 +7907,20 @@ current_date < 2017-11-17 Allow \pset, \t, and - \x to specify on or off, + \x to specify on or off, rather than just toggling (Chad Wagner) - Add \sleep capability (Jan) + Add \sleep capability (Jan) - Enable \timing output for \copy (Andrew) + Enable \timing output for \copy (Andrew) @@ -7933,20 +7933,20 @@ current_date < 2017-11-17 - Flush \o output after each backslash command (Tom) + Flush \o output after each backslash command (Tom) - Correctly detect and report errors while reading a -f + Correctly detect and report errors while reading a -f input file (Peter) - Remove -u option (this option has long been deprecated) + Remove -u option (this option has long been deprecated) (Tom) @@ -7956,12 +7956,12 @@ current_date < 2017-11-17 - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Add --tablespaces-only and --roles-only + Add --tablespaces-only and --roles-only options to pg_dumpall (Dave Page) @@ -7980,7 +7980,7 @@ current_date < 2017-11-17 - Allow pg_dumpall to accept an initial-connection + Allow pg_dumpall to accept an initial-connection database name rather than the default template1 (Dave Page) @@ -7988,7 +7988,7 @@ current_date < 2017-11-17 - In -n and -t switches, always match + In -n and -t switches, always match $ literally (Tom) @@ -8001,7 +8001,7 @@ current_date < 2017-11-17 - Remove -u 
option (this option has long been deprecated) + Remove -u option (this option has long been deprecated) (Tom) @@ -8016,7 +8016,7 @@ current_date < 2017-11-17 - In initdb, allow the location of the + In initdb, allow the location of the pg_xlog directory to be specified (Euler Taveira de Oliveira) @@ -8024,19 +8024,19 @@ current_date < 2017-11-17 - Enable server core dump generation in pg_regress + Enable server core dump generation in pg_regress on supported operating systems (Andrew) - Add a -t (timeout) parameter to pg_ctl + Add a -t (timeout) parameter to pg_ctl (Bruce) - This controls how long pg_ctl will wait when waiting + This controls how long pg_ctl will wait when waiting for server startup or shutdown. Formerly the timeout was hard-wired as 60 seconds. @@ -8044,28 +8044,28 @@ current_date < 2017-11-17 - Add a pg_ctl option to control generation + Add a pg_ctl option to control generation of server core dumps (Andrew) - Allow Control-C to cancel clusterdb, - reindexdb, and vacuumdb (Itagaki + Allow Control-C to cancel clusterdb, + reindexdb, and vacuumdb (Itagaki Takahiro, Magnus) - Suppress command tag output for createdb, - createuser, dropdb, and - dropuser (Peter) + Suppress command tag output for createdb, + createuser, dropdb, and + dropuser (Peter) - The --quiet option is ignored and will be removed in 8.4. + The --quiet option is ignored and will be removed in 8.4. Progress messages when acting on all databases now go to stdout instead of stderr because they are not actually errors. @@ -8076,33 +8076,33 @@ current_date < 2017-11-17 - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> - Interpret the dbName parameter of - PQsetdbLogin() as a conninfo string if + Interpret the dbName parameter of + PQsetdbLogin() as a conninfo string if it contains an equals sign (Andrew) - This allows use of conninfo strings in client - programs that still use PQsetdbLogin(). + This allows use of conninfo strings in client + programs that still use PQsetdbLogin(). - Support a global SSL configuration file (Victor + Support a global SSL configuration file (Victor Wagner) - Add environment variable PGSSLKEY to control - SSL hardware keys (Victor Wagner) + Add environment variable PGSSLKEY to control + SSL hardware keys (Victor Wagner) @@ -8147,7 +8147,7 @@ current_date < 2017-11-17 - <link linkend="ecpg"><application>ecpg</></link> + <link linkend="ecpg"><application>ecpg</application></link> @@ -8183,13 +8183,13 @@ current_date < 2017-11-17 - <application>Windows</> Port + <application>Windows</application> Port - Allow the whole PostgreSQL distribution to be compiled - with Microsoft Visual C++ (Magnus and others) + Allow the whole PostgreSQL distribution to be compiled + with Microsoft Visual C++ (Magnus and others) @@ -8226,7 +8226,7 @@ current_date < 2017-11-17 - Server Programming Interface (<acronym>SPI</>) + Server Programming Interface (<acronym>SPI</acronym>) @@ -8236,7 +8236,7 @@ current_date < 2017-11-17 Allow access to the cursor-related planning options, and add - FETCH/MOVE routines. + FETCH/MOVE routines. @@ -8247,15 +8247,15 @@ current_date < 2017-11-17 - The macro SPI_ERROR_CURSOR still exists but will + The macro SPI_ERROR_CURSOR still exists but will never be returned. 
- SPI plan pointers are now declared as SPIPlanPtr instead of - void * (Tom) + SPI plan pointers are now declared as SPIPlanPtr instead of + void * (Tom) @@ -8274,35 +8274,35 @@ current_date < 2017-11-17 - Add configure option --enable-profiling - to enable code profiling (works only with gcc) + Add configure option --enable-profiling + to enable code profiling (works only with gcc) (Korry Douglas and Nikhil Sontakke) - Add configure option --with-system-tzdata + Add configure option --with-system-tzdata to use the operating system's time zone database (Peter) - Fix PGXS so extensions can be built against PostgreSQL - installations whose pg_config program does not - appear first in the PATH (Tom) + Fix PGXS so extensions can be built against PostgreSQL + installations whose pg_config program does not + appear first in the PATH (Tom) Support gmake draft when building the - SGML documentation (Bruce) + SGML documentation (Bruce) - Unless draft is used, the documentation build will + Unless draft is used, the documentation build will now be repeated if necessary to ensure the index is up-to-date. @@ -8317,9 +8317,9 @@ current_date < 2017-11-17 - Rename macro DLLIMPORT to PGDLLIMPORT to + Rename macro DLLIMPORT to PGDLLIMPORT to avoid conflicting with third party includes (like Tcl) that - define DLLIMPORT (Magnus) + define DLLIMPORT (Magnus) @@ -8332,15 +8332,15 @@ current_date < 2017-11-17 - Update GIN extractQuery() API to allow signalling + Update GIN extractQuery() API to allow signalling that nothing can satisfy the query (Teodor) - Move NAMEDATALEN definition from - postgres_ext.h to pg_config_manual.h + Move NAMEDATALEN definition from + postgres_ext.h to pg_config_manual.h (Peter) @@ -8364,7 +8364,7 @@ current_date < 2017-11-17 - Create a function variable join_search_hook to let plugins + Create a function variable join_search_hook to let plugins override the join search order portion of the planner (Julius Stroffek) @@ -8372,7 +8372,7 @@ current_date < 2017-11-17 - Add tas() support for Renesas' M32R processor + Add tas() support for Renesas' M32R processor (Kazuhiro Inaoka) @@ -8388,14 +8388,14 @@ current_date < 2017-11-17 Change the on-disk representation of the NUMERIC - data type so that the sign_dscale word comes + data type so that the sign_dscale word comes before the weight (Tom) - Use SYSV semaphores rather than POSIX on Darwin + Use SYSV semaphores rather than POSIX on Darwin >= 6.0, i.e., macOS 10.2 and up (Chris Marcellino) @@ -8432,8 +8432,8 @@ current_date < 2017-11-17 - Move contrib README content into the - main PostgreSQL documentation (Albert Cervera i + Move contrib README content into the + main PostgreSQL documentation (Albert Cervera i Areny) @@ -8455,11 +8455,11 @@ current_date < 2017-11-17 Add contrib/uuid-ossp module for generating - UUID values using the OSSP UUID library (Peter) + UUID values using the OSSP UUID library (Peter) - Use configure + Use configure --with-ossp-uuid to activate. This takes advantage of the new UUID builtin type. 
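Assuming contrib/uuid-ossp has been installed as described above, a sketch of generating values of the new UUID type (the function names come from the OSSP wrapper module, not from this patch):

    SELECT uuid_generate_v4();   -- random (version 4) UUID
    SELECT uuid_generate_v1();   -- timestamp/MAC (version 1) UUID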
@@ -8477,14 +8477,14 @@ current_date < 2017-11-17 - Allow contrib/pgbench to set the fillfactor (Pavan + Allow contrib/pgbench to set the fillfactor (Pavan Deolasee) - Add timestamps to contrib/pgbench -l + Add timestamps to contrib/pgbench -l (Greg Smith) @@ -8498,13 +8498,13 @@ current_date < 2017-11-17 - Add GIN support for contrib/hstore (Teodor) + Add GIN support for contrib/hstore (Teodor) - Add GIN support for contrib/pg_trgm (Guillaume Smet, Teodor) + Add GIN support for contrib/pg_trgm (Guillaume Smet, Teodor) diff --git a/doc/src/sgml/release-8.4.sgml b/doc/src/sgml/release-8.4.sgml index 53e319ff33..521048ad93 100644 --- a/doc/src/sgml/release-8.4.sgml +++ b/doc/src/sgml/release-8.4.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 8.4.X series. Users are encouraged to update to a newer release branch soon. @@ -48,15 +48,15 @@ - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. - Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -76,7 +76,7 @@ Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) @@ -103,13 +103,13 @@ This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. - Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -124,7 +124,7 @@ Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -137,7 +137,7 @@ - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -150,19 +150,19 @@ This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -170,7 +170,7 @@ - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -182,7 +182,7 @@ This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. 
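For the GIN support added to contrib/pg_trgm in the 8.3 notes above, a minimal sketch (table, column, and the deliberately misspelled probe string are hypothetical):

    CREATE INDEX words_trgm_idx ON words USING gin (word gin_trgm_ops);
    SELECT word FROM words WHERE word % 'postgersql';  -- similarity match, now indexable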
@@ -190,7 +190,7 @@ Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -199,16 +199,16 @@ the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. @@ -232,15 +232,15 @@ - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -251,17 +251,17 @@ - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -269,27 +269,27 @@ - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -297,20 +297,20 @@ - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. - Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. 
@@ -335,7 +335,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.4.X release series in July 2014. Users are encouraged to update to a newer release branch soon. @@ -387,7 +387,7 @@ - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -400,35 +400,35 @@ - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. + entry to syslog(), and perhaps other related problems. - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. @@ -454,7 +454,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 8.4.X release series in July 2014. Users are encouraged to update to a newer release branch soon. @@ -480,19 +480,19 @@ - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. (CVE-2014-0060) @@ -505,7 +505,7 @@ The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -525,7 +525,7 @@ If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. 
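The SET ROLE bypass closed by the GRANT ... WITH ADMIN OPTION fix above can be sketched like this (role names are hypothetical); before the fix, the final GRANT succeeded even though alice holds no ADMIN OPTION:

    CREATE ROLE alice LOGIN;
    CREATE ROLE mallory LOGIN;
    CREATE ROLE payroll;
    GRANT payroll TO alice;      -- plain membership, no ADMIN OPTION
    -- connected as alice:
    SET ROLE payroll;
    GRANT payroll TO mallory;    -- formerly allowed; now rejected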
@@ -539,12 +539,12 @@ - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -571,7 +571,7 @@ - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -583,35 +583,35 @@ - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). (CVE-2014-0066) - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -626,7 +626,7 @@ The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -654,25 +654,25 @@ Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. 
- Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -696,7 +696,7 @@ - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -710,19 +710,19 @@ Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) (Tom Lane) - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -730,21 +730,21 @@ - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -769,12 +769,12 @@ - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. @@ -782,51 +782,51 @@ - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. 
- Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -837,7 +837,7 @@ - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) @@ -851,21 +851,21 @@ - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -874,20 +874,20 @@ the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -939,13 +939,13 @@ - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would @@ -957,12 +957,12 @@ The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). @@ -979,8 +979,8 @@ - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -997,7 +997,7 @@ This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. 
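The relfrozenxid remediation described above can be sketched as follows, run in each database of the installation (the commands follow the advice given in that item):

    SET vacuum_freeze_table_age = 0;  -- force full-table scans for this session
    VACUUM;                           -- vacuum every table in the current database
    SELECT txid_current() < 2^31;     -- true suggests the installation can be presumed safe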
@@ -1015,13 +1015,13 @@ - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. @@ -1035,7 +1035,7 @@ In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. @@ -1049,7 +1049,7 @@ - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -1060,10 +1060,10 @@ - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -1073,21 +1073,21 @@ - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. @@ -1139,7 +1139,7 @@ - PostgreSQL case-folds non-ASCII characters only + PostgreSQL case-folds non-ASCII characters only when using a single-byte server encoding. @@ -1153,7 +1153,7 @@ - Fix memory overcommit bug when work_mem is using more + Fix memory overcommit bug when work_mem is using more than 24GB of memory (Stephen Frost) @@ -1171,29 +1171,29 @@ - Previously tests like col IS NOT TRUE and col IS - NOT FALSE did not properly factor in NULL values when estimating + Previously tests like col IS NOT TRUE and col IS + NOT FALSE did not properly factor in NULL values when estimating plan costs. - Prevent pushing down WHERE clauses into unsafe - UNION/INTERSECT subqueries (Tom Lane) + Prevent pushing down WHERE clauses into unsafe + UNION/INTERSECT subqueries (Tom Lane) - Subqueries of a UNION or INTERSECT that + Subqueries of a UNION or INTERSECT that contain set-returning functions or volatile functions in their - SELECT lists could be improperly optimized, leading to + SELECT lists could be improperly optimized, leading to run-time errors or incorrect query results. 
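A sketch of the kind of query affected by the pushdown fix just described (table and column names are hypothetical); the volatile random() in the subquery's SELECT list is what formerly made pushing the outer WHERE clause down unsafe:

    SELECT * FROM (
        SELECT id, random() AS r FROM a
        UNION ALL
        SELECT id, random() AS r FROM b
    ) s
    WHERE r < 0.5;   -- must be evaluated above the UNION, not inside each arm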
- Fix rare case of failed to locate grouping columns + Fix rare case of failed to locate grouping columns planner failure (Tom Lane) @@ -1208,13 +1208,13 @@ Fix possible deadlock during concurrent CREATE INDEX - CONCURRENTLY operations (Tom Lane) + CONCURRENTLY operations (Tom Lane) - Fix regexp_matches() handling of zero-length matches + Fix regexp_matches() handling of zero-length matches (Jeevan Chalke) @@ -1238,14 +1238,14 @@ - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) - Fix pgp_pub_decrypt() so it works for secret keys with + Fix pgp_pub_decrypt() so it works for secret keys with passwords (Marko Kreen) @@ -1260,21 +1260,21 @@ Avoid possible failure when performing transaction control commands (e.g - ROLLBACK) in prepared queries (Tom Lane) + ROLLBACK) in prepared queries (Tom Lane) Ensure that floating-point data input accepts standard spellings - of infinity on all platforms (Tom Lane) + of infinity on all platforms (Tom Lane) - The C99 standard says that allowable spellings are inf, - +inf, -inf, infinity, - +infinity, and -infinity. Make sure we - recognize these even if the platform's strtod function + The C99 standard says that allowable spellings are inf, + +inf, -inf, infinity, + +infinity, and -infinity. Make sure we + recognize these even if the platform's strtod function doesn't. @@ -1288,7 +1288,7 @@ - Update time zone data files to tzdata release 2013d + Update time zone data files to tzdata release 2013d for DST law changes in Israel, Morocco, Palestine, and Paraguay. Also, historical zone data corrections for Macquarie Island. @@ -1323,7 +1323,7 @@ However, this release corrects several errors in management of GiST indexes. After installing this update, it is advisable to - REINDEX any GiST indexes that meet one or more of the + REINDEX any GiST indexes that meet one or more of the conditions described below. @@ -1347,41 +1347,41 @@ This avoids a scenario wherein random numbers generated by - contrib/pgcrypto functions might be relatively easy for + contrib/pgcrypto functions might be relatively easy for another database user to guess. The risk is only significant when - the postmaster is configured with ssl = on + the postmaster is configured with ssl = on but most connections don't use SSL encryption. (CVE-2013-1900) - Fix GiST indexes to not use fuzzy geometric comparisons when + Fix GiST indexes to not use fuzzy geometric comparisons when it's not appropriate to do so (Alexander Korotkov) - The core geometric types perform comparisons using fuzzy - equality, but gist_box_same must do exact comparisons, + The core geometric types perform comparisons using fuzzy + equality, but gist_box_same must do exact comparisons, else GiST indexes using it might become inconsistent. After installing - this update, users should REINDEX any GiST indexes on - box, polygon, circle, or point - columns, since all of these use gist_box_same. + this update, users should REINDEX any GiST indexes on + box, polygon, circle, or point + columns, since all of these use gist_box_same. 
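One way to locate GiST indexes that may need rebuilding per the advisory above (a catalog sketch; narrow the result to indexes on the listed column types, and the REINDEX target name is hypothetical):

    SELECT indexrelid::regclass AS index_name
    FROM pg_index i
    JOIN pg_class c ON c.oid = i.indexrelid
    JOIN pg_am am ON am.oid = c.relam
    WHERE am.amname = 'gist';
    -- then, for each affected index:
    REINDEX INDEX boxes_gist_idx;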
Fix erroneous range-union and penalty logic in GiST indexes that use - contrib/btree_gist for variable-width data types, that is - text, bytea, bit, and numeric + contrib/btree_gist for variable-width data types, that is + text, bytea, bit, and numeric columns (Tom Lane) These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in useless - index bloat. Users are advised to REINDEX such indexes + index bloat. Users are advised to REINDEX such indexes after installing this update. @@ -1396,7 +1396,7 @@ These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in indexes that are unnecessarily inefficient to search. Users are advised to - REINDEX multi-column GiST indexes after installing this + REINDEX multi-column GiST indexes after installing this update. @@ -1417,27 +1417,27 @@ - Fix to_char() to use ASCII-only case-folding rules where + Fix to_char() to use ASCII-only case-folding rules where appropriate (Tom Lane) This fixes misbehavior of some template patterns that should be - locale-independent, but mishandled I and - i in Turkish locales. + locale-independent, but mishandled I and + i in Turkish locales. - Fix unwanted rejection of timestamp 1999-12-31 24:00:00 + Fix unwanted rejection of timestamp 1999-12-31 24:00:00 (Tom Lane) - Remove useless picksplit doesn't support secondary split log + Remove useless picksplit doesn't support secondary split log messages (Josh Hansen, Tom Lane) @@ -1458,28 +1458,28 @@ - Eliminate memory leaks in PL/Perl's spi_prepare() function + Eliminate memory leaks in PL/Perl's spi_prepare() function (Alex Hunsaker, Tom Lane) - Fix pg_dumpall to handle database names containing - = correctly (Heikki Linnakangas) + Fix pg_dumpall to handle database names containing + = correctly (Heikki Linnakangas) - Avoid crash in pg_dump when an incorrect connection + Avoid crash in pg_dump when an incorrect connection string is given (Heikki Linnakangas) - Ignore invalid indexes in pg_dump (Michael Paquier) + Ignore invalid indexes in pg_dump (Michael Paquier) @@ -1488,24 +1488,24 @@ a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which - pg_dump wouldn't be expected to dump anyway. + pg_dump wouldn't be expected to dump anyway. - Fix contrib/pg_trgm's similarity() function + Fix contrib/pg_trgm's similarity() function to return zero for trigram-less strings (Tom Lane) - Previously it returned NaN due to internal division by zero. + Previously it returned NaN due to internal division by zero. - Update time zone data files to tzdata release 2013b + Update time zone data files to tzdata release 2013b for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas. Also, historical zone data corrections for numerous places. @@ -1513,12 +1513,12 @@ Also, update the time zone abbreviation files for recent changes in - Russia and elsewhere: CHOT, GET, - IRKT, KGT, KRAT, MAGT, - MAWT, MSK, NOVT, OMST, - TKT, VLAT, WST, YAKT, - YEKT now follow their current meanings, and - VOLT (Europe/Volgograd) and MIST + Russia and elsewhere: CHOT, GET, + IRKT, KGT, KRAT, MAGT, + MAWT, MSK, NOVT, OMST, + TKT, VLAT, WST, YAKT, + YEKT now follow their current meanings, and + VOLT (Europe/Volgograd) and MIST (Antarctica/Macquarie) are added to the default abbreviations list. 
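The similarity() change noted above, sketched:

    SELECT similarity('', 'word');   -- returns 0 as of this update; previously NaN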
@@ -1563,7 +1563,7 @@ - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -1596,19 +1596,19 @@ Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. - Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -1620,13 +1620,13 @@ Fix error in vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age implementation (Andres Freund) In installations that have existed for more than vacuum_freeze_min_age + linkend="guc-vacuum-freeze-min-age">vacuum_freeze_min_age transactions, this mistake prevented autovacuum from using partial-table scans, so that a full-table scan would always happen instead. @@ -1634,13 +1634,13 @@ - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -1653,7 +1653,7 @@ - Reject out-of-range dates in to_date() (Hitoshi Harada) + Reject out-of-range dates in to_date() (Hitoshi Harada) @@ -1664,41 +1664,41 @@ - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? command when not connected to a database (Meng Qingzhong) - Fix one-byte buffer overrun in libpq's - PQprintTuples (Xi Wang) + Fix one-byte buffer overrun in libpq's + PQprintTuples (Xi Wang) This ancient function is not used anywhere by - PostgreSQL itself, but it might still be used by some + PostgreSQL itself, but it might still be used by some client code. - Make ecpglib use translated messages properly + Make ecpglib use translated messages properly (Chen Huajun) - Properly install ecpg_compat and - pgtypes libraries on MSVC (Jiang Guiqing) + Properly install ecpg_compat and + pgtypes libraries on MSVC (Jiang Guiqing) @@ -1717,15 +1717,15 @@ - Make pgxs build executables with the right - .exe suffix when cross-compiling for Windows + Make pgxs build executables with the right + .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi) - Add new timezone abbreviation FET (Tom Lane) + Add new timezone abbreviation FET (Tom Lane) @@ -1774,13 +1774,13 @@ Fix multiple bugs associated with CREATE INDEX - CONCURRENTLY (Andres Freund, Tom Lane) + CONCURRENTLY (Andres Freund, Tom Lane) - Fix CREATE INDEX CONCURRENTLY to use + Fix CREATE INDEX CONCURRENTLY to use in-place updates when changing the state of an index's - pg_index row. This prevents race conditions that could + pg_index row. This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus resulting in corrupt concurrently-created indexes. @@ -1788,8 +1788,8 @@ Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX - CONCURRENTLY command. The most important of these is - VACUUM, because an auto-vacuum could easily be launched + CONCURRENTLY command. 
The most important of these is + VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index. @@ -1811,8 +1811,8 @@ The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. @@ -1830,10 +1830,10 @@ - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -1841,7 +1841,7 @@ Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) @@ -1854,7 +1854,7 @@ - Fix ALTER COLUMN TYPE to handle inherited check + Fix ALTER COLUMN TYPE to handle inherited check constraints properly (Pavan Deolasee) @@ -1866,14 +1866,14 @@ - Fix REASSIGN OWNED to handle grants on tablespaces + Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera) - Ignore incorrect pg_attribute entries for system + Ignore incorrect pg_attribute entries for system columns for views (Tom Lane) @@ -1887,7 +1887,7 @@ - Fix rule printing to dump INSERT INTO table + Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane) @@ -1895,7 +1895,7 @@ Guard against stack overflow when there are too many - UNION/INTERSECT/EXCEPT clauses + UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane) @@ -1923,7 +1923,7 @@ Formerly, this would result in something quite unhelpful, such as - Non-recoverable failure in name resolution. + Non-recoverable failure in name resolution. @@ -1936,8 +1936,8 @@ - Make pg_ctl more robust about reading the - postmaster.pid file (Heikki Linnakangas) + Make pg_ctl more robust about reading the + postmaster.pid file (Heikki Linnakangas) @@ -1947,33 +1947,33 @@ - Fix possible crash in psql if incorrectly-encoded data - is presented and the client_encoding setting is a + Fix possible crash in psql if incorrectly-encoded data + is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing) - Fix bugs in the restore.sql script emitted by - pg_dump in tar output format (Tom Lane) + Fix bugs in the restore.sql script emitted by + pg_dump in tar output format (Tom Lane) The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring - data in mode as well as the regular COPY mode. - Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. 
This patch updates previous branches so that they will accept both the @@ -1984,41 +1984,41 @@ - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -2029,7 +2029,7 @@ - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -2081,7 +2081,7 @@ These errors could result in wrong answers from queries that scan the - same WITH subquery multiple times. + same WITH subquery multiple times. @@ -2104,22 +2104,22 @@ - If we revoke a grant option from some role X, but - X still holds that option via a grant from someone + If we revoke a grant option from some role X, but + X still holds that option via a grant from someone else, we should not recursively revoke the corresponding privilege - from role(s) Y that X had granted it + from role(s) Y that X had granted it to. - Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) + Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) - Perl resets the process's SIGFPE handler to - SIG_IGN, which could result in crashes later on. Restore + Perl resets the process's SIGFPE handler to + SIG_IGN, which could result in crashes later on. Restore the normal Postgres signal handler after initializing PL/Perl. @@ -2138,7 +2138,7 @@ Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. @@ -2146,7 +2146,7 @@ - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -2196,7 +2196,7 @@ - xml_parse() would attempt to fetch external files or + xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. While the external data @@ -2209,22 +2209,22 @@ - Prevent access to external files/URLs via contrib/xml2's - xslt_process() (Peter Eisentraut) + Prevent access to external files/URLs via contrib/xml2's + xslt_process() (Peter Eisentraut) - libxslt offers the ability to read and write both + libxslt offers the ability to read and write both files and URLs through stylesheet commands, thus allowing unprivileged database users to both read and write data with the privileges of the database server. Disable that through proper use - of libxslt's security options. 
(CVE-2012-3488) + of libxslt's security options. (CVE-2012-3488) - Also, remove xslt_process()'s ability to fetch documents + Also, remove xslt_process()'s ability to fetch documents and stylesheets from external files/URLs. While this was a - documented feature, it was long regarded as a bad idea. + documented feature, it was long regarded as a bad idea. The fix for CVE-2012-3489 broke that capability, and rather than expend effort on trying to fix it, we're just going to summarily remove it. @@ -2252,22 +2252,22 @@ - If ALTER SEQUENCE was executed on a freshly created or - reset sequence, and then precisely one nextval() call + If ALTER SEQUENCE was executed on a freshly created or + reset sequence, and then precisely one nextval() call was made on it, and then the server crashed, WAL replay would restore the sequence to a state in which it appeared that no - nextval() had been done, thus allowing the first + nextval() had been done, thus allowing the first sequence value to be returned again by the next - nextval() call. In particular this could manifest for - serial columns, since creation of a serial column's sequence - includes an ALTER SEQUENCE OWNED BY step. + nextval() call. In particular this could manifest for + serial columns, since creation of a serial column's sequence + includes an ALTER SEQUENCE OWNED BY step. - Ensure the backup_label file is fsync'd after - pg_start_backup() (Dave Kerr) + Ensure the backup_label file is fsync'd after + pg_start_backup() (Dave Kerr) @@ -2292,7 +2292,7 @@ The original coding could allow inconsistent behavior in some cases; in particular, an autovacuum could get canceled after less than - deadlock_timeout grace period. + deadlock_timeout grace period. @@ -2304,15 +2304,15 @@ - Fix log collector so that log_truncate_on_rotation works + Fix log collector so that log_truncate_on_rotation works during the very first log rotation after server start (Tom Lane) - Fix WITH attached to a nested set operation - (UNION/INTERSECT/EXCEPT) + Fix WITH attached to a nested set operation + (UNION/INTERSECT/EXCEPT) (Tom Lane) @@ -2320,24 +2320,24 @@ Ensure that a whole-row reference to a subquery doesn't include any - extra GROUP BY or ORDER BY columns (Tom Lane) + extra GROUP BY or ORDER BY columns (Tom Lane) - Disallow copying whole-row references in CHECK - constraints and index definitions during CREATE TABLE + Disallow copying whole-row references in CHECK + constraints and index definitions during CREATE TABLE (Tom Lane) - This situation can arise in CREATE TABLE with - LIKE or INHERITS. The copied whole-row + This situation can arise in CREATE TABLE with + LIKE or INHERITS. The copied whole-row variable was incorrectly labeled with the row type of the original table not the new one. Rejecting the case seems reasonable for - LIKE, since the row types might well diverge later. For - INHERITS we should ideally allow it, with an implicit + LIKE, since the row types might well diverge later. For + INHERITS we should ideally allow it, with an implicit coercion to the parent table's row type; but that will require more work than seems safe to back-patch. @@ -2345,7 +2345,7 @@ - Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki + Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki Linnakangas, Tom Lane) @@ -2357,7 +2357,7 @@ The code could get confused by quantified parenthesized - subexpressions, such as ^(foo)?bar. This would lead to + subexpressions, such as ^(foo)?bar. This would lead to incorrect index optimization of searches for such patterns. 
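A minimal sketch of why the quantified-parenthesized-subexpression fix above matters: the optional group means a match need not begin with foo, which is precisely what the broken index optimization assumed it did:

    SELECT 'bar'    ~ '^(foo)?bar' AS matches;  -- true: the (foo)? group is optional
    SELECT 'foobar' ~ '^(foo)?bar' AS matches;  -- true as well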
@@ -2365,22 +2365,22 @@ Fix bugs with parsing signed - hh:mm and - hh:mm:ss - fields in interval constants (Amit Kapila, Tom Lane) + hh:mm and + hh:mm:ss + fields in interval constants (Amit Kapila, Tom Lane) - Report errors properly in contrib/xml2's - xslt_process() (Tom Lane) + Report errors properly in contrib/xml2's + xslt_process() (Tom Lane) - Update time zone data files to tzdata release 2012e + Update time zone data files to tzdata release 2012e for DST law changes in Morocco and Tokelau @@ -2426,12 +2426,12 @@ Fix incorrect password transformation in - contrib/pgcrypto's DES crypt() function + contrib/pgcrypto's DES crypt() function (Solar Designer) - If a password string contained the byte value 0x80, the + If a password string contained the byte value 0x80, the remainder of the password was ignored, causing the password to be much weaker than it appeared. With this fix, the rest of the string is properly included in the DES hash. Any stored password values that are @@ -2442,7 +2442,7 @@ - Ignore SECURITY DEFINER and SET attributes for + Ignore SECURITY DEFINER and SET attributes for a procedural language's call handler (Tom Lane) @@ -2454,7 +2454,7 @@ - Allow numeric timezone offsets in timestamp input to be up to + Allow numeric timezone offsets in timestamp input to be up to 16 hours away from UTC (Tom Lane) @@ -2480,7 +2480,7 @@ - Fix text to name and char to name + Fix text to name and char to name casts to perform string truncation correctly in multibyte encodings (Karl Schnaitter) @@ -2488,7 +2488,7 @@ - Fix memory copying bug in to_tsquery() (Heikki Linnakangas) + Fix memory copying bug in to_tsquery() (Heikki Linnakangas) @@ -2502,7 +2502,7 @@ This bug concerns sub-SELECTs that reference variables coming from the nullable side of an outer join of the surrounding query. In 9.1, queries affected by this bug would fail with ERROR: - Upper-level PlaceHolderVar found where not expected. But in 9.0 and + Upper-level PlaceHolderVar found where not expected. But in 9.0 and 8.4, you'd silently get possibly-wrong answers, since the value transmitted into the subquery wouldn't go to null when it should. @@ -2510,13 +2510,13 @@ - Fix slow session startup when pg_attribute is very large + Fix slow session startup when pg_attribute is very large (Tom Lane) - If pg_attribute exceeds one-fourth of - shared_buffers, cache rebuilding code that is sometimes + If pg_attribute exceeds one-fourth of + shared_buffers, cache rebuilding code that is sometimes needed during session start would trigger the synchronized-scan logic, causing it to take many times longer than normal. The problem was particularly acute if many new sessions were starting at once. @@ -2537,8 +2537,8 @@ - Ensure the Windows implementation of PGSemaphoreLock() - clears ImmediateInterruptOK before returning (Tom Lane) + Ensure the Windows implementation of PGSemaphoreLock() + clears ImmediateInterruptOK before returning (Tom Lane) @@ -2565,12 +2565,12 @@ - Fix COPY FROM to properly handle null marker strings that + Fix COPY FROM to properly handle null marker strings that correspond to invalid encoding (Tom Lane) - A null marker string such as E'\\0' should work, and did + A null marker string such as E'\\0' should work, and did work in the past, but the case got broken in 8.4. @@ -2583,7 +2583,7 @@ Previously, infinite recursion in a function invoked by - auto-ANALYZE could crash worker processes. + auto-ANALYZE could crash worker processes. 
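A psql-session sketch of the COPY FROM null-marker case above, using a hypothetical scratch table named demo (the marker E'\\0' is the two characters backslash and zero):

    CREATE TEMP TABLE demo (v text);
    COPY demo FROM STDIN WITH NULL AS E'\\0';
    a
    \0
    \.
    SELECT v IS NULL AS is_null FROM demo;  -- false for 'a', true for the marker row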
@@ -2602,7 +2602,7 @@ Fix logging collector to ensure it will restart file rotation - after receiving SIGHUP (Tom Lane) + after receiving SIGHUP (Tom Lane) @@ -2615,33 +2615,33 @@ - Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe + Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe Conway) - Fix PL/pgSQL's GET DIAGNOSTICS command when the target + Fix PL/pgSQL's GET DIAGNOSTICS command when the target is the function's first variable (Tom Lane) - Fix potential access off the end of memory in psql's - expanded display (\x) mode (Peter Eisentraut) + Fix potential access off the end of memory in psql's + expanded display (\x) mode (Peter Eisentraut) - Fix several performance problems in pg_dump when + Fix several performance problems in pg_dump when the database contains many objects (Jeff Janes, Tom Lane) - pg_dump could get very slow if the database contained + pg_dump could get very slow if the database contained many schemas, or if many objects are in dependency loops, or if there are many owned sequences. @@ -2649,21 +2649,21 @@ - Fix contrib/dblink's dblink_exec() to not leak + Fix contrib/dblink's dblink_exec() to not leak temporary database connections upon error (Tom Lane) - Fix contrib/dblink to report the correct connection name in + Fix contrib/dblink to report the correct connection name in error messages (Kyotaro Horiguchi) - Update time zone data files to tzdata release 2012c + Update time zone data files to tzdata release 2012c for DST law changes in Antarctica, Armenia, Chile, Cuba, Falkland Islands, Gaza, Haiti, Hebron, Morocco, Syria, and Tokelau Islands; also historical corrections for Canada. @@ -2711,14 +2711,14 @@ Require execute permission on the trigger function for - CREATE TRIGGER (Robert Haas) + CREATE TRIGGER (Robert Haas) This missing check could allow another user to execute a trigger function with forged input data, by installing it on a table he owns. This is only of significance for trigger functions marked - SECURITY DEFINER, since otherwise trigger functions run + SECURITY DEFINER, since otherwise trigger functions run as the table owner anyway. (CVE-2012-0866) @@ -2730,7 +2730,7 @@ - Both libpq and the server truncated the common name + Both libpq and the server truncated the common name extracted from an SSL certificate at 32 bytes. Normally this would cause nothing worse than an unexpected verification failure, but there are some rather-implausible scenarios in which it might allow one @@ -2745,12 +2745,12 @@ - Convert newlines to spaces in names written in pg_dump + Convert newlines to spaces in names written in pg_dump comments (Robert Haas) - pg_dump was incautious about sanitizing object names + pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL @@ -2766,10 +2766,10 @@ An index page split caused by an insertion could sometimes cause a - concurrently-running VACUUM to miss removing index entries + concurrently-running VACUUM to miss removing index entries that it should remove. After the corresponding table rows are removed, the dangling index entries would cause errors (such as could not - read block N in file ...) or worse, silently wrong query results + read block N in file ...) or worse, silently wrong query results after unrelated rows are re-inserted at the now-free table locations. 
This bug has been present since release 8.2, but occurs so infrequently that it was not diagnosed until now. If you have reason to suspect @@ -2795,16 +2795,16 @@ Allow non-existent values for some settings in ALTER - USER/DATABASE SET (Heikki Linnakangas) + USER/DATABASE SET (Heikki Linnakangas) - Allow default_text_search_config, - default_tablespace, and temp_tablespaces to be + Allow default_text_search_config, + default_tablespace, and temp_tablespaces to be set to names that are not known. This is because they might be known in another database where the setting is intended to be used, or for the tablespace cases because the tablespace might not be created yet. The - same issue was previously recognized for search_path, and + same issue was previously recognized for search_path, and these settings now act like that one. @@ -2842,7 +2842,7 @@ - Fix regular expression back-references with * attached + Fix regular expression back-references with * attached (Tom Lane) @@ -2856,18 +2856,18 @@ A similar problem still afflicts back-references that are embedded in a larger quantified expression, rather than being the immediate subject of the quantifier. This will be addressed in a future - PostgreSQL release. + PostgreSQL release. Fix recently-introduced memory leak in processing of - inet/cidr values (Heikki Linnakangas) + inet/cidr values (Heikki Linnakangas) - A patch in the December 2011 releases of PostgreSQL + A patch in the December 2011 releases of PostgreSQL caused memory leakage in these operations, which could be significant in scenarios such as building a btree index on such a column. @@ -2875,8 +2875,8 @@ - Fix dangling pointer after CREATE TABLE AS/SELECT - INTO in a SQL-language function (Tom Lane) + Fix dangling pointer after CREATE TABLE AS/SELECT + INTO in a SQL-language function (Tom Lane) @@ -2910,32 +2910,32 @@ - Improve pg_dump's handling of inherited table columns + Improve pg_dump's handling of inherited table columns (Tom Lane) - pg_dump mishandled situations where a child column has + pg_dump mishandled situations where a child column has a different default expression than its parent column. If the default is textually identical to the parent's default, but not actually the same (for instance, because of schema search path differences) it would not be recognized as different, so that after dump and restore the child would be allowed to inherit the parent's default. Child columns - that are NOT NULL where their parent is not could also be + that are NOT NULL where their parent is not could also be restored subtly incorrectly. - Fix pg_restore's direct-to-database mode for + Fix pg_restore's direct-to-database mode for INSERT-style table data (Tom Lane) Direct-to-database restores from archive files made with - - Map Central America Standard Time to CST6, not - CST6CDT, because DST is generally not observed anywhere in + Map Central America Standard Time to CST6, not + CST6CDT, because DST is generally not observed anywhere in Central America. - Update time zone data files to tzdata release 2011n + Update time zone data files to tzdata release 2011n for DST law changes in Brazil, Cuba, Fiji, Palestine, Russia, and Samoa; also historical corrections for Alaska and British East Africa. 
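The back-reference fix described earlier in this group can be checked with a sketch like this (E'' string syntax is used so the \1 back-reference survives literal processing under pre-9.1 defaults):

    SELECT 'abab' ~ E'^(ab)\\1*$' AS matches;  -- true: one repetition of the back-reference
    SELECT 'ab'   ~ E'^(ab)\\1*$' AS matches;  -- true: zero repetitions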
@@ -3410,7 +3410,7 @@ - Fix possible buffer overrun in tsvector_concat() + Fix possible buffer overrun in tsvector_concat() (Tom Lane) @@ -3422,14 +3422,14 @@ - Fix crash in xml_recv when processing a - standalone parameter (Tom Lane) + Fix crash in xml_recv when processing a + standalone parameter (Tom Lane) - Make pg_options_to_table return NULL for an option with no + Make pg_options_to_table return NULL for an option with no value (Tom Lane) @@ -3440,7 +3440,7 @@ - Avoid possibly accessing off the end of memory in ANALYZE + Avoid possibly accessing off the end of memory in ANALYZE and in SJIS-2004 encoding conversion (Noah Misch) @@ -3469,7 +3469,7 @@ There was a window wherein a new backend process could read a stale init file but miss the inval messages that would tell it the data is stale. The result would be bizarre failures in catalog accesses, typically - could not read block 0 in file ... later during startup. + could not read block 0 in file ... later during startup. @@ -3490,7 +3490,7 @@ Fix incorrect memory accounting (leading to possible memory bloat) in tuplestores supporting holdable cursors and plpgsql's RETURN - NEXT command (Tom Lane) + NEXT command (Tom Lane) @@ -3526,7 +3526,7 @@ - Allow nested EXISTS queries to be optimized properly (Tom + Allow nested EXISTS queries to be optimized properly (Tom Lane) @@ -3546,12 +3546,12 @@ - Fix EXPLAIN to handle gating Result nodes within + Fix EXPLAIN to handle gating Result nodes within inner-indexscan subplans (Tom Lane) - The usual symptom of this oversight was bogus varno errors. + The usual symptom of this oversight was bogus varno errors. @@ -3567,13 +3567,13 @@ - Fix dump bug for VALUES in a view (Tom Lane) + Fix dump bug for VALUES in a view (Tom Lane) - Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) + Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) @@ -3583,8 +3583,8 @@ - Fix VACUUM so that it always updates - pg_class.reltuples/relpages (Tom + Fix VACUUM so that it always updates + pg_class.reltuples/relpages (Tom Lane) @@ -3603,7 +3603,7 @@ - Fix cases where CLUSTER might attempt to access + Fix cases where CLUSTER might attempt to access already-removed TOAST data (Tom Lane) @@ -3611,7 +3611,7 @@ Fix portability bugs in use of credentials control messages for - peer authentication (Tom Lane) + peer authentication (Tom Lane) @@ -3623,13 +3623,13 @@ The typical symptom of this problem was The function requested is - not supported errors during SSPI login. + not supported errors during SSPI login. - Throw an error if pg_hba.conf contains hostssl + Throw an error if pg_hba.conf contains hostssl but SSL is disabled (Tom Lane) @@ -3641,12 +3641,12 @@ - Fix typo in pg_srand48 seed initialization (Andres Freund) + Fix typo in pg_srand48 seed initialization (Andres Freund) This led to failure to use all bits of the provided seed. This function - is not used on most platforms (only those without srandom), + is not used on most platforms (only those without srandom), and the potential security exposure from a less-random-than-expected seed seems minimal in any case. 
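To illustrate the now-disallowed sequence case above, a sketch with a hypothetical sequence; after this update the second statement is rejected with an error rather than being accepted and misbehaving:

    CREATE SEQUENCE seq_demo;
    SELECT * FROM seq_demo FOR UPDATE;  -- now raises an error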
@@ -3654,25 +3654,25 @@ - Avoid integer overflow when the sum of LIMIT and - OFFSET values exceeds 2^63 (Heikki Linnakangas) + Avoid integer overflow when the sum of LIMIT and + OFFSET values exceeds 2^63 (Heikki Linnakangas) - Add overflow checks to int4 and int8 versions of - generate_series() (Robert Haas) + Add overflow checks to int4 and int8 versions of + generate_series() (Robert Haas) - Fix trailing-zero removal in to_char() (Marti Raudsepp) + Fix trailing-zero removal in to_char() (Marti Raudsepp) - In a format with FM and no digit positions + In a format with FM and no digit positions after the decimal point, zeroes to the left of the decimal point could be removed incorrectly. @@ -3680,7 +3680,7 @@ - Fix pg_size_pretty() to avoid overflow for inputs close to + Fix pg_size_pretty() to avoid overflow for inputs close to 2^63 (Tom Lane) @@ -3698,59 +3698,59 @@ - Correctly handle quotes in locale names during initdb + Correctly handle quotes in locale names during initdb (Heikki Linnakangas) The case can arise with some Windows locales, such as People's - Republic of China. + Republic of China. - Fix pg_upgrade to preserve toast tables' relfrozenxids + Fix pg_upgrade to preserve toast tables' relfrozenxids during an upgrade from 8.3 (Bruce Momjian) - Failure to do this could lead to pg_clog files being + Failure to do this could lead to pg_clog files being removed too soon after the upgrade. - In pg_ctl, support silent mode for service registrations + In pg_ctl, support silent mode for service registrations on Windows (MauMau) - Fix psql's counting of script file line numbers during - COPY from a different file (Tom Lane) + Fix psql's counting of script file line numbers during + COPY from a different file (Tom Lane) - Fix pg_restore's direct-to-database mode for - standard_conforming_strings (Tom Lane) + Fix pg_restore's direct-to-database mode for + standard_conforming_strings (Tom Lane) - pg_restore could emit incorrect commands when restoring + pg_restore could emit incorrect commands when restoring directly to a database server from an archive file that had been made - with standard_conforming_strings set to on. + with standard_conforming_strings set to on. Be more user-friendly about unsupported cases for parallel - pg_restore (Tom Lane) + pg_restore (Tom Lane) @@ -3761,14 +3761,14 @@ - Fix write-past-buffer-end and memory leak in libpq's + Fix write-past-buffer-end and memory leak in libpq's LDAP service lookup code (Albe Laurenz) - In libpq, avoid failures when using nonblocking I/O + In libpq, avoid failures when using nonblocking I/O and an SSL connection (Martin Pihlak, Tom Lane) @@ -3780,36 +3780,36 @@ - In particular, the response to a server report of fork() + In particular, the response to a server report of fork() failure during SSL connection startup is now saner. 
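As a sketch of the int8 overflow checks added to generate_series() earlier in this group (a series ending at the extreme of the int8 range formerly risked wrapping around instead of terminating cleanly):

    SELECT count(*)
    FROM generate_series(9223372036854775802::int8, 9223372036854775807::int8);
    -- 6 rows; the endpoint is the int8 maximum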
- Improve libpq's error reporting for SSL failures (Tom + Improve libpq's error reporting for SSL failures (Tom Lane) - Fix PQsetvalue() to avoid possible crash when adding a new - tuple to a PGresult originally obtained from a server + Fix PQsetvalue() to avoid possible crash when adding a new + tuple to a PGresult originally obtained from a server query (Andrew Chernow) - Make ecpglib write double values with 15 digits + Make ecpglib write double values with 15 digits precision (Akira Kurosawa) - In ecpglib, be sure LC_NUMERIC setting is + In ecpglib, be sure LC_NUMERIC setting is restored after an error (Michael Meskes) @@ -3821,7 +3821,7 @@ - contrib/pg_crypto's blowfish encryption code could give + contrib/pg_crypto's blowfish encryption code could give wrong results on platforms where char is signed (which is most), leading to encrypted passwords being weaker than they should be. @@ -3829,13 +3829,13 @@ - Fix memory leak in contrib/seg (Heikki Linnakangas) + Fix memory leak in contrib/seg (Heikki Linnakangas) - Fix pgstatindex() to give consistent results for empty + Fix pgstatindex() to give consistent results for empty indexes (Tom Lane) @@ -3867,7 +3867,7 @@ - Update time zone data files to tzdata release 2011i + Update time zone data files to tzdata release 2011i for DST law changes in Canada, Egypt, Russia, Samoa, and South Sudan. @@ -3900,10 +3900,10 @@ However, if your installation was upgraded from a previous major - release by running pg_upgrade, you should take + release by running pg_upgrade, you should take action to prevent possible data loss due to a now-fixed bug in - pg_upgrade. The recommended solution is to run - VACUUM FREEZE on all TOAST tables. + pg_upgrade. The recommended solution is to run + VACUUM FREEZE on all TOAST tables. More information is available at http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix. @@ -3923,36 +3923,36 @@ - Fix pg_upgrade's handling of TOAST tables + Fix pg_upgrade's handling of TOAST tables (Bruce Momjian) - The pg_class.relfrozenxid value for + The pg_class.relfrozenxid value for TOAST tables was not correctly copied into the new installation - during pg_upgrade. This could later result in - pg_clog files being discarded while they were still + during pg_upgrade. This could later result in + pg_clog files being discarded while they were still needed to validate tuples in the TOAST tables, leading to - could not access status of transaction failures. + could not access status of transaction failures. This error poses a significant risk of data loss for installations - that have been upgraded with pg_upgrade. This patch - corrects the problem for future uses of pg_upgrade, + that have been upgraded with pg_upgrade. This patch + corrects the problem for future uses of pg_upgrade, but does not in itself cure the issue in installations that have been - processed with a buggy version of pg_upgrade. + processed with a buggy version of pg_upgrade. - Suppress incorrect PD_ALL_VISIBLE flag was incorrectly set + Suppress incorrect PD_ALL_VISIBLE flag was incorrectly set warning (Heikki Linnakangas) - VACUUM would sometimes issue this warning in cases that + VACUUM would sometimes issue this warning in cases that are actually valid. 
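One way to carry out the recommended VACUUM FREEZE of all TOAST tables is to generate the commands from the system catalogs; this is a sketch only, not the exact procedure from the wiki page cited above:

    SELECT 'VACUUM FREEZE pg_toast.' || quote_ident(relname) || ';'
    FROM pg_class
    WHERE relkind = 't';  -- relkind 't' marks TOAST tables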
@@ -3986,15 +3986,15 @@ - Fix dangling-pointer problem in BEFORE ROW UPDATE trigger + Fix dangling-pointer problem in BEFORE ROW UPDATE trigger handling when there was a concurrent update to the target tuple (Tom Lane) This bug has been observed to result in intermittent cannot - extract system attribute from virtual tuple failures while trying to - do UPDATE RETURNING ctid. There is a very small probability + extract system attribute from virtual tuple failures while trying to + do UPDATE RETURNING ctid. There is a very small probability of more serious errors, such as generating incorrect index entries for the updated tuple. @@ -4002,13 +4002,13 @@ - Disallow DROP TABLE when there are pending deferred trigger + Disallow DROP TABLE when there are pending deferred trigger events for the table (Tom Lane) - Formerly the DROP would go through, leading to - could not open relation with OID nnn errors when the + Formerly the DROP would go through, leading to + could not open relation with OID nnn errors when the triggers were eventually fired. @@ -4053,7 +4053,7 @@ - Fix pg_restore to cope with long lines (over 1KB) in + Fix pg_restore to cope with long lines (over 1KB) in TOC files (Tom Lane) @@ -4085,14 +4085,14 @@ - Fix version-incompatibility problem with libintl on + Fix version-incompatibility problem with libintl on Windows (Hiroshi Inoue) - Fix usage of xcopy in Windows build scripts to + Fix usage of xcopy in Windows build scripts to work correctly under Windows 7 (Andrew Dunstan) @@ -4103,14 +4103,14 @@ - Fix path separator used by pg_regress on Cygwin + Fix path separator used by pg_regress on Cygwin (Andrew Dunstan) - Update time zone data files to tzdata release 2011f + Update time zone data files to tzdata release 2011f for DST law changes in Chile, Cuba, Falkland Islands, Morocco, Samoa, and Turkey; also historical corrections for South Australia, Alaska, and Hawaii. @@ -4154,15 +4154,15 @@ - Avoid failures when EXPLAIN tries to display a simple-form - CASE expression (Tom Lane) + Avoid failures when EXPLAIN tries to display a simple-form + CASE expression (Tom Lane) - If the CASE's test expression was a constant, the planner - could simplify the CASE into a form that confused the + If the CASE's test expression was a constant, the planner + could simplify the CASE into a form that confused the expression-display code, resulting in unexpected CASE WHEN - clause errors. + clause errors. @@ -4187,44 +4187,44 @@ - The date type supports a wider range of dates than can be - represented by the timestamp types, but the planner assumed it + The date type supports a wider range of dates than can be + represented by the timestamp types, but the planner assumed it could always convert a date to timestamp with impunity. - Fix pg_restore's text output for large objects (BLOBs) - when standard_conforming_strings is on (Tom Lane) + Fix pg_restore's text output for large objects (BLOBs) + when standard_conforming_strings is on (Tom Lane) Although restoring directly to a database worked correctly, string - escaping was incorrect if pg_restore was asked for - SQL text output and standard_conforming_strings had been + escaping was incorrect if pg_restore was asked for + SQL text output and standard_conforming_strings had been enabled in the source database. - Fix erroneous parsing of tsquery values containing + Fix erroneous parsing of tsquery values containing ... & !(subexpression) | ... (Tom Lane) Queries containing this combination of operators were not executed - correctly. 
The same error existed in contrib/intarray's - query_int type and contrib/ltree's - ltxtquery type. + correctly. The same error existed in contrib/intarray's + query_int type and contrib/ltree's + ltxtquery type. - Fix buffer overrun in contrib/intarray's input function - for the query_int type (Apple) + Fix buffer overrun in contrib/intarray's input function + for the query_int type (Apple) @@ -4236,16 +4236,16 @@ - Fix bug in contrib/seg's GiST picksplit algorithm + Fix bug in contrib/seg's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a seg column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a seg column. + If you have such an index, consider REINDEXing it after installing this update. (This is identical to the bug that was fixed in - contrib/cube in the previous update.) + contrib/cube in the previous update.) @@ -4287,17 +4287,17 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. @@ -4307,7 +4307,7 @@ - This could result in bad buffer id: 0 failures or + This could result in bad buffer id: 0 failures or corruption of index contents during replication. @@ -4326,7 +4326,7 @@ - The effective vacuum_cost_limit for an autovacuum worker + The effective vacuum_cost_limit for an autovacuum worker could drop to nearly zero if it processed enough tables, causing it to run extremely slowly. @@ -4334,19 +4334,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -4362,7 +4362,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -4389,16 +4389,16 @@ Certain cases where a large number of tuples needed to be read in - advance, but work_mem was large enough to allow them all + advance, but work_mem was large enough to allow them all to be held in memory, were unexpectedly slow. - percent_rank(), cume_dist() and - ntile() in particular were subject to this problem. + percent_rank(), cume_dist() and + ntile() in particular were subject to this problem. 
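The window functions named just above must read their whole partition before producing any output; a sketch of the query shape that was unexpectedly slow when the buffered rows all fit within work_mem:

    SELECT x, percent_rank() OVER (ORDER BY x) AS pr
    FROM generate_series(1, 100000) AS t(x);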
- Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -4410,14 +4410,14 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -4429,15 +4429,15 @@ - Behave correctly if ORDER BY, LIMIT, - FOR UPDATE, or WITH is attached to the - VALUES part of INSERT ... VALUES (Tom Lane) + Behave correctly if ORDER BY, LIMIT, + FOR UPDATE, or WITH is attached to the + VALUES part of INSERT ... VALUES (Tom Lane) - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -4449,7 +4449,7 @@ Fix postmaster crash when connection acceptance - (accept() or one of the calls made immediately after it) + (accept() or one of the calls made immediately after it) fails, and the postmaster was compiled with GSSAPI support (Alexander Chernikov) @@ -4457,7 +4457,7 @@ - Fix missed unlink of temporary files when log_temp_files + Fix missed unlink of temporary files when log_temp_files is active (Tom Lane) @@ -4469,11 +4469,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. @@ -4493,20 +4493,20 @@ Fix incorrect calculation of transaction status in - ecpg (Itagaki Takahiro) + ecpg (Itagaki Takahiro) - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix PL/Python's handling of set-returning functions + Fix PL/Python's handling of set-returning functions (Jan Urbanski) @@ -4518,22 +4518,22 @@ - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -4541,20 +4541,20 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -4605,7 +4605,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). 
Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -4634,7 +4634,7 @@ - Prevent possible crashes in pg_get_expr() by disallowing + Prevent possible crashes in pg_get_expr() by disallowing it from being called with an argument that is not one of the system catalog columns it's intended to be used with (Heikki Linnakangas, Tom Lane) @@ -4643,7 +4643,7 @@ - Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on + Treat exit code 128 (ERROR_WAIT_NO_CHILDREN) as non-fatal on Windows (Magnus Hagander) @@ -4669,7 +4669,7 @@ - Fix possible duplicate scans of UNION ALL member relations + Fix possible duplicate scans of UNION ALL member relations (Tom Lane) @@ -4694,18 +4694,18 @@ - Fix mishandling of cross-type IN comparisons (Tom Lane) + Fix mishandling of cross-type IN comparisons (Tom Lane) This could result in failures if the planner tried to implement an - IN join with a sort-then-unique-then-plain-join plan. + IN join with a sort-then-unique-then-plain-join plan. - Fix computation of ANALYZE statistics for tsvector + Fix computation of ANALYZE statistics for tsvector columns (Jan Urbanski) @@ -4717,8 +4717,8 @@ - Improve planner's estimate of memory used by array_agg(), - string_agg(), and similar aggregate functions + Improve planner's estimate of memory used by array_agg(), + string_agg(), and similar aggregate functions (Hitoshi Harada) @@ -4734,7 +4734,7 @@ - If a plan is prepared while CREATE INDEX CONCURRENTLY is + If a plan is prepared while CREATE INDEX CONCURRENTLY is in progress for one of the referenced tables, it is supposed to be re-planned once the index is ready for use. This was not happening reliably. @@ -4812,7 +4812,7 @@ Take care to fsync the contents of lockfiles (both - postmaster.pid and the socket lockfile) while writing them + postmaster.pid and the socket lockfile) while writing them (Tom Lane) @@ -4849,7 +4849,7 @@ - Fix log_line_prefix's %i escape, + Fix log_line_prefix's %i escape, which could produce junk early in backend startup (Tom Lane) @@ -4861,7 +4861,7 @@ - In particular, fillfactor would be read as zero if any + In particular, fillfactor would be read as zero if any other reloption had been set for the table, leading to serious bloat. @@ -4869,49 +4869,49 @@ Fix inheritance count tracking in ALTER TABLE ... ADD - CONSTRAINT (Robert Haas) + CONSTRAINT (Robert Haas) Fix possible data corruption in ALTER TABLE ... SET - TABLESPACE when archiving is enabled (Jeff Davis) + TABLESPACE when archiving is enabled (Jeff Davis) - Allow CREATE DATABASE and ALTER DATABASE ... SET - TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) + Allow CREATE DATABASE and ALTER DATABASE ... 
SET + TABLESPACE to be interrupted by query-cancel (Guillaume Lelarge) - Improve CREATE INDEX's checking of whether proposed index + Improve CREATE INDEX's checking of whether proposed index expressions are immutable (Tom Lane) - Fix REASSIGN OWNED to handle operator classes and families + Fix REASSIGN OWNED to handle operator classes and families (Asko Tiidumaa) - Fix possible core dump when comparing two empty tsquery values + Fix possible core dump when comparing two empty tsquery values (Tom Lane) - Fix LIKE's handling of patterns containing % - followed by _ (Tom Lane) + Fix LIKE's handling of patterns containing % + followed by _ (Tom Lane) @@ -4926,7 +4926,7 @@ - Input such as 'J100000'::date worked before 8.4, + Input such as 'J100000'::date worked before 8.4, but was unintentionally broken by added error-checking. @@ -4934,7 +4934,7 @@ Fix PL/pgSQL to throw an error, not crash, if a cursor is closed within - a FOR loop that is iterating over that cursor + a FOR loop that is iterating over that cursor (Heikki Linnakangas) @@ -4942,22 +4942,22 @@ In PL/Python, defend against null pointer results from - PyCObject_AsVoidPtr and PyCObject_FromVoidPtr + PyCObject_AsVoidPtr and PyCObject_FromVoidPtr (Peter Eisentraut) - In libpq, fix full SSL certificate verification for the - case where both host and hostaddr are specified + In libpq, fix full SSL certificate verification for the + case where both host and hostaddr are specified (Tom Lane) - Make psql recognize DISCARD ALL as a command that should + Make psql recognize DISCARD ALL as a command that should not be encased in a transaction block in autocommit-off mode (Itagaki Takahiro) @@ -4965,19 +4965,19 @@ - Fix some issues in pg_dump's handling of SQL/MED objects + Fix some issues in pg_dump's handling of SQL/MED objects (Tom Lane) - Notably, pg_dump would always fail if run by a + Notably, pg_dump would always fail if run by a non-superuser, which was not intended. - Improve pg_dump and pg_restore's + Improve pg_dump and pg_restore's handling of non-seekable archive files (Tom Lane, Robert Haas) @@ -4989,31 +4989,31 @@ Improve parallel pg_restore's ability to cope with selective restore - (-L option) (Tom Lane) + (-L option) (Tom Lane) - The original code tended to fail if the -L file commanded + The original code tended to fail if the -L file commanded a non-default restore ordering. - Fix ecpg to process data from RETURNING + Fix ecpg to process data from RETURNING clauses correctly (Michael Meskes) - Fix some memory leaks in ecpg (Zoltan Boszormenyi) + Fix some memory leaks in ecpg (Zoltan Boszormenyi) - Improve contrib/dblink's handling of tables containing + Improve contrib/dblink's handling of tables containing dropped columns (Tom Lane) @@ -5021,30 +5021,30 @@ Fix connection leak after duplicate connection name - errors in contrib/dblink (Itagaki Takahiro) + errors in contrib/dblink (Itagaki Takahiro) - Fix contrib/dblink to handle connection names longer than + Fix contrib/dblink to handle connection names longer than 62 bytes correctly (Itagaki Takahiro) - Add hstore(text, text) - function to contrib/hstore (Robert Haas) + Add hstore(text, text) + function to contrib/hstore (Robert Haas) This function is the recommended substitute for the now-deprecated - => operator. It was back-patched so that future-proofed + => operator. It was back-patched so that future-proofed code can be used with older server versions. 
Note that the patch will - be effective only after contrib/hstore is installed or + be effective only after contrib/hstore is installed or reinstalled in a particular database. Users might prefer to execute - the CREATE FUNCTION command by hand, instead. + the CREATE FUNCTION command by hand, instead. @@ -5057,7 +5057,7 @@ - Update time zone data files to tzdata release 2010l + Update time zone data files to tzdata release 2010l for DST law changes in Egypt and Palestine; also historical corrections for Finland. @@ -5072,7 +5072,7 @@ - Make Windows' N. Central Asia Standard Time timezone map to + Make Windows' N. Central Asia Standard Time timezone map to Asia/Novosibirsk, not Asia/Almaty (Magnus Hagander) @@ -5119,19 +5119,19 @@ - Enforce restrictions in plperl using an opmask applied to - the whole interpreter, instead of using Safe.pm + Enforce restrictions in plperl using an opmask applied to + the whole interpreter, instead of using Safe.pm (Tim Bunce, Andrew Dunstan) - Recent developments have convinced us that Safe.pm is too - insecure to rely on for making plperl trustable. This - change removes use of Safe.pm altogether, in favor of using + Recent developments have convinced us that Safe.pm is too + insecure to rely on for making plperl trustable. This + change removes use of Safe.pm altogether, in favor of using a separate interpreter with an opcode mask that is always applied. Pleasant side effects of the change include that it is now possible to - use Perl's strict pragma in a natural way in - plperl, and that Perl's $a and $b + use Perl's strict pragma in a natural way in + plperl, and that Perl's $a and $b variables work as expected in sort routines, and that function compilation is significantly faster. (CVE-2010-1169) @@ -5140,19 +5140,19 @@ Prevent PL/Tcl from executing untrustworthy code from - pltcl_modules (Tom) + pltcl_modules (Tom) PL/Tcl's feature for autoloading Tcl code from a database table could be exploited for trojan-horse attacks, because there was no restriction on who could create or insert into that table. This change - disables the feature unless pltcl_modules is owned by a + disables the feature unless pltcl_modules is owned by a superuser. (However, the permissions on the table are not checked, so installations that really need a less-than-secure modules table can still grant suitable privileges to trusted non-superusers.) Also, - prevent loading code into the unrestricted normal Tcl - interpreter unless we are really going to execute a pltclu + prevent loading code into the unrestricted normal Tcl + interpreter unless we are really going to execute a pltclu function. (CVE-2010-1170) @@ -5160,16 +5160,16 @@ Fix data corruption during WAL replay of - ALTER ... SET TABLESPACE (Tom) + ALTER ... SET TABLESPACE (Tom) - When archive_mode is on, ALTER ... SET TABLESPACE + When archive_mode is on, ALTER ... SET TABLESPACE generates a WAL record whose replay logic was incorrect. It could write the data to the wrong place, leading to possibly-unrecoverable data corruption. Data corruption would be observed on standby slaves, and could occur on the master as well if a database crash and recovery - occurred after committing the ALTER and before the next + occurred after committing the ALTER and before the next checkpoint. @@ -5194,20 +5194,20 @@ This avoids failures if the function's code is invalid without the setting; an example is that SQL functions may not parse if the - search_path is not correct. + search_path is not correct. 
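A sketch of the per-function setting case just described, with hypothetical schema and table names; the function body parses only while the attached search_path is in effect, which is how validation now checks it:

    CREATE SCHEMA myschema;
    CREATE TABLE myschema.tbl (x int);
    CREATE FUNCTION f() RETURNS int
        AS 'SELECT x FROM tbl LIMIT 1'
        LANGUAGE sql
        SET search_path = myschema;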
- Do constraint exclusion for inherited UPDATE and - DELETE target tables when - constraint_exclusion = partition (Tom) + Do constraint exclusion for inherited UPDATE and + DELETE target tables when + constraint_exclusion = partition (Tom) Due to an oversight, this setting previously only caused constraint - exclusion to be checked in SELECT commands. + exclusion to be checked in SELECT commands. @@ -5219,10 +5219,10 @@ Previously, if an unprivileged user ran ALTER USER ... RESET - ALL for himself, or ALTER DATABASE ... RESET ALL for + ALL for himself, or ALTER DATABASE ... RESET ALL for a database he owns, this would remove all special parameter settings for the user or database, even ones that are only supposed to be - changeable by a superuser. Now, the ALTER will only + changeable by a superuser. Now, the ALTER will only remove the parameters that the user has permission to change. @@ -5230,7 +5230,7 @@ Avoid possible crash during backend shutdown if shutdown occurs - when a CONTEXT addition would be made to log entries (Tom) + when a CONTEXT addition would be made to log entries (Tom) @@ -5242,8 +5242,8 @@ - Fix erroneous handling of %r parameter in - recovery_end_command (Heikki) + Fix erroneous handling of %r parameter in + recovery_end_command (Heikki) @@ -5254,20 +5254,20 @@ Ensure the archiver process responds to changes in - archive_command as soon as possible (Tom) + archive_command as soon as possible (Tom) - Fix PL/pgSQL's CASE statement to not fail when the + Fix PL/pgSQL's CASE statement to not fail when the case expression is a query that returns no rows (Tom) - Update PL/Perl's ppport.h for modern Perl versions + Update PL/Perl's ppport.h for modern Perl versions (Andrew) @@ -5286,15 +5286,15 @@ - Prevent infinite recursion in psql when expanding + Prevent infinite recursion in psql when expanding a variable that refers to itself (Tom) - Fix psql's \copy to not add spaces around - a dot within \copy (select ...) (Tom) + Fix psql's \copy to not add spaces around + a dot within \copy (select ...) (Tom) @@ -5305,23 +5305,23 @@ - Avoid formatting failure in psql when running in a - locale context that doesn't match the client_encoding + Avoid formatting failure in psql when running in a + locale context that doesn't match the client_encoding (Tom) - Fix unnecessary GIN indexes do not support whole-index scans - errors for unsatisfiable queries using contrib/intarray + Fix unnecessary GIN indexes do not support whole-index scans + errors for unsatisfiable queries using contrib/intarray operators (Tom) - Ensure that contrib/pgstattuple functions respond to cancel + Ensure that contrib/pgstattuple functions respond to cancel interrupts promptly (Tatsuhito Kasahara) @@ -5329,7 +5329,7 @@ Make server startup deal properly with the case that - shmget() returns EINVAL for an existing + shmget() returns EINVAL for an existing shared memory segment (Tom) @@ -5361,14 +5361,14 @@ - Update time zone data files to tzdata release 2010j + Update time zone data files to tzdata release 2010j for DST law changes in Argentina, Australian Antarctic, Bangladesh, Mexico, Morocco, Pakistan, Palestine, Russia, Syria, Tunisia; also historical corrections for Taiwan. - Also, add PKST (Pakistan Summer Time) to the default set of + Also, add PKST (Pakistan Summer Time) to the default set of timezone abbreviations. 
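A sketch of the constraint-exclusion item at the top of this group, with a hypothetical partitioned table named measurement: under the setting below, child tables whose CHECK constraints contradict the WHERE clause are now skipped for UPDATE and DELETE as well, not just for SELECT:

    SET constraint_exclusion = partition;
    DELETE FROM measurement WHERE logdate < DATE '2010-01-01';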
@@ -5410,7 +5410,7 @@ - Add new configuration parameter ssl_renegotiation_limit to + Add new configuration parameter ssl_renegotiation_limit to control how often we do session key renegotiation for an SSL connection (Magnus) @@ -5446,7 +5446,7 @@ Fix possible crash due to overenthusiastic invalidation of cached - plan for ROLLBACK (Tom) + plan for ROLLBACK (Tom) @@ -5492,8 +5492,8 @@ - Make substring() for bit types treat any negative - length as meaning all the rest of the string (Tom) + Make substring() for bit types treat any negative + length as meaning all the rest of the string (Tom) @@ -5533,12 +5533,12 @@ - Avoid failure when EXPLAIN has to print a FieldStore or + Avoid failure when EXPLAIN has to print a FieldStore or assignment ArrayRef expression (Tom) - These cases can arise now that EXPLAIN VERBOSE tries to + These cases can arise now that EXPLAIN VERBOSE tries to print plan node target lists. @@ -5547,7 +5547,7 @@ Avoid an unnecessary coercion failure in some cases where an undecorated literal string appears in a subquery within - UNION/INTERSECT/EXCEPT (Tom) + UNION/INTERSECT/EXCEPT (Tom) @@ -5564,7 +5564,7 @@ - Fix the STOP WAL LOCATION entry in backup history files to + Fix the STOP WAL LOCATION entry in backup history files to report the next WAL segment's name when the end location is exactly at a segment boundary (Itagaki Takahiro) @@ -5573,7 +5573,7 @@ Always pass the catalog ID to an option validator function specified in - CREATE FOREIGN DATA WRAPPER (Martin Pihlak) + CREATE FOREIGN DATA WRAPPER (Martin Pihlak) @@ -5591,7 +5591,7 @@ - Add support for doing FULL JOIN ON FALSE (Tom) + Add support for doing FULL JOIN ON FALSE (Tom) @@ -5604,13 +5604,13 @@ Improve constraint exclusion processing of boolean-variable cases, in particular make it possible to exclude a partition that has a - bool_column = false constraint (Tom) + bool_column = false constraint (Tom) - Prevent treating an INOUT cast as representing binary + Prevent treating an INOUT cast as representing binary compatibility (Heikki) @@ -5623,24 +5623,24 @@ This is more useful than before and helps to prevent confusion when - a REVOKE generates multiple messages, which formerly + a REVOKE generates multiple messages, which formerly appeared to be duplicates. - When reading pg_hba.conf and related files, do not treat - @something as a file inclusion request if the @ - appears inside quote marks; also, never treat @ by itself + When reading pg_hba.conf and related files, do not treat + @something as a file inclusion request if the @ + appears inside quote marks; also, never treat @ by itself as a file inclusion request (Tom) This prevents erratic behavior if a role or database name starts with - @. If you need to include a file whose path name + @. If you need to include a file whose path name contains spaces, you can still do so, but you must write - @"/path to/file" rather than putting the quotes around + @"/path to/file" rather than putting the quotes around the whole construct. @@ -5648,83 +5648,83 @@ Prevent infinite loop on some platforms if a directory is named as - an inclusion target in pg_hba.conf and related files + an inclusion target in pg_hba.conf and related files (Tom) - Fix possible infinite loop if SSL_read or - SSL_write fails without setting errno (Tom) + Fix possible infinite loop if SSL_read or + SSL_write fails without setting errno (Tom) This is reportedly possible with some Windows versions of - openssl. + openssl. 
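The FULL JOIN ON FALSE support mentioned above can be exercised with a self-contained sketch; each side's rows come out null-extended, since the constant-false condition joins nothing:

    SELECT *
    FROM (VALUES (1)) AS a(x)
    FULL JOIN (VALUES (2)) AS b(y) ON false;
    -- two rows: (1, NULL) and (NULL, 2)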
- Disallow GSSAPI authentication on local connections, + Disallow GSSAPI authentication on local connections, since it requires a hostname to function correctly (Magnus) - Protect ecpg against applications freeing strings + Protect ecpg against applications freeing strings unexpectedly (Michael) - Make ecpg report the proper SQLSTATE if the connection + Make ecpg report the proper SQLSTATE if the connection disappears (Michael) - Fix translation of cell contents in psql \d + Fix translation of cell contents in psql \d output (Heikki) - Fix psql's numericlocale option to not + Fix psql's numericlocale option to not format strings it shouldn't in latex and troff output formats (Heikki) - Fix a small per-query memory leak in psql (Tom) + Fix a small per-query memory leak in psql (Tom) - Make psql return the correct exit status (3) when - ON_ERROR_STOP and --single-transaction are - both specified and an error occurs during the implied COMMIT + Make psql return the correct exit status (3) when + ON_ERROR_STOP and --single-transaction are + both specified and an error occurs during the implied COMMIT (Bruce) - Fix pg_dump's output of permissions for foreign servers + Fix pg_dump's output of permissions for foreign servers (Heikki) - Fix possible crash in parallel pg_restore due to + Fix possible crash in parallel pg_restore due to out-of-range dependency IDs (Tom) @@ -5745,7 +5745,7 @@ - Add volatile markings in PL/Python to avoid possible + Add volatile markings in PL/Python to avoid possible compiler-specific misbehavior (Zdenek Kotala) @@ -5757,55 +5757,55 @@ The only known symptom of this oversight is that the Tcl - clock command misbehaves if using Tcl 8.5 or later. + clock command misbehaves if using Tcl 8.5 or later. - Prevent ExecutorEnd from being run on portals created + Prevent ExecutorEnd from being run on portals created within a failed transaction or subtransaction (Tom) This is known to cause issues when using - contrib/auto_explain. + contrib/auto_explain. - Prevent crash in contrib/dblink when too many key - columns are specified to a dblink_build_sql_* function + Prevent crash in contrib/dblink when too many key + columns are specified to a dblink_build_sql_* function (Rushabh Lathia, Joe Conway) - Allow zero-dimensional arrays in contrib/ltree operations + Allow zero-dimensional arrays in contrib/ltree operations (Tom) This case was formerly rejected as an error, but it's more convenient to treat it the same as a zero-element array. In particular this avoids - unnecessary failures when an ltree operation is applied to the - result of ARRAY(SELECT ...) and the sub-select returns no + unnecessary failures when an ltree operation is applied to the + result of ARRAY(SELECT ...) and the sub-select returns no rows. - Fix assorted crashes in contrib/xml2 caused by sloppy + Fix assorted crashes in contrib/xml2 caused by sloppy memory management (Tom) - Make building of contrib/xml2 more robust on Windows + Make building of contrib/xml2 more robust on Windows (Andrew) @@ -5816,7 +5816,7 @@ - One known symptom of this bug is that rows in pg_listener + One known symptom of this bug is that rows in pg_listener could be dropped under heavy load. @@ -5835,7 +5835,7 @@ - Update time zone data files to tzdata release 2010e + Update time zone data files to tzdata release 2010e for DST law changes in Bangladesh, Chile, Fiji, Mexico, Paraguay, Samoa. @@ -5865,7 +5865,7 @@ A dump/restore is not required for those running 8.4.X. 
However, if you have any hash indexes, - you should REINDEX them after updating to 8.4.2, + you should REINDEX them after updating to 8.4.2, to repair possible damage. @@ -5911,7 +5911,7 @@ preserve the ordering. So application of either of those operations could lead to permanent corruption of an index, in the sense that searches might fail to find entries that are present. To deal with - this, it is recommended to REINDEX any hash indexes you may + this, it is recommended to REINDEX any hash indexes you may have after installing this update. @@ -5930,14 +5930,14 @@ - Prevent signals from interrupting VACUUM at unsafe times + Prevent signals from interrupting VACUUM at unsafe times (Alvaro) - This fix prevents a PANIC if a VACUUM FULL is canceled + This fix prevents a PANIC if a VACUUM FULL is canceled after it's already committed its tuple movements, as well as transient - errors if a plain VACUUM is interrupted after having + errors if a plain VACUUM is interrupted after having truncated the table. @@ -5956,14 +5956,14 @@ - Fix crash if a DROP is attempted on an internally-dependent + Fix crash if a DROP is attempted on an internally-dependent object (Tom) - Fix very rare crash in inet/cidr comparisons (Chris + Fix very rare crash in inet/cidr comparisons (Chris Mikkelson) @@ -5991,7 +5991,7 @@ - Fix memory leak in postmaster when re-parsing pg_hba.conf + Fix memory leak in postmaster when re-parsing pg_hba.conf (Tom) @@ -6010,8 +6010,8 @@ - Make FOR UPDATE/SHARE in the primary query not propagate - into WITH queries (Tom) + Make FOR UPDATE/SHARE in the primary query not propagate + into WITH queries (Tom) @@ -6019,18 +6019,18 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - the FOR UPDATE will now affect bar but not - foo. This is more useful and consistent than the original - 8.4 behavior, which tried to propagate FOR UPDATE into the - WITH query but always failed due to assorted implementation - restrictions. It also follows the design rule that WITH + the FOR UPDATE will now affect bar but not + foo. This is more useful and consistent than the original + 8.4 behavior, which tried to propagate FOR UPDATE into the + WITH query but always failed due to assorted implementation + restrictions. It also follows the design rule that WITH queries are executed as if independent of the main query. - Fix bug with a WITH RECURSIVE query immediately inside + Fix bug with a WITH RECURSIVE query immediately inside another one (Tom) @@ -6056,7 +6056,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Fix wrong search results for a multi-column GIN index with - fastupdate enabled (Teodor) + fastupdate enabled (Teodor) @@ -6066,7 +6066,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - These bugs were masked when full_page_writes was on, but + These bugs were masked when full_page_writes was on, but with it off a WAL replay failure was certain if a crash occurred before the next checkpoint. @@ -6104,7 +6104,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE The previous code is known to fail with the combination of the Linux - pam_krb5 PAM module with Microsoft Active Directory as the + pam_krb5 PAM module with Microsoft Active Directory as the domain controller. It might have problems elsewhere too, since it was making unjustified assumptions about what arguments the PAM stack would pass to it. @@ -6127,7 +6127,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE Ensure that domain constraints are enforced in constructs like - ARRAY[...]::domain, where the domain is over an array type + ARRAY[...]::domain, where the domain is over an array type (Heikki) @@ -6153,7 +6153,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix CREATE TABLE to properly merge default expressions + Fix CREATE TABLE to properly merge default expressions coming from different inheritance parent tables (Tom) @@ -6175,39 +6175,39 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Fix processing of ownership dependencies during CREATE OR - REPLACE FUNCTION (Tom) + REPLACE FUNCTION (Tom) - Fix incorrect handling of WHERE - x=x conditions (Tom) + Fix incorrect handling of WHERE + x=x conditions (Tom) In some cases these could get ignored as redundant, but they aren't - — they're equivalent to x IS NOT NULL. + — they're equivalent to x IS NOT NULL. Fix incorrect plan construction when using hash aggregation to implement - DISTINCT for textually identical volatile expressions (Tom) + DISTINCT for textually identical volatile expressions (Tom) - Fix Assert failure for a volatile SELECT DISTINCT ON + Fix Assert failure for a volatile SELECT DISTINCT ON expression (Tom) - Fix ts_stat() to not fail on an empty tsvector + Fix ts_stat() to not fail on an empty tsvector value (Tom) @@ -6220,7 +6220,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix encoding handling in xml binary input (Heikki) + Fix encoding handling in xml binary input (Heikki) @@ -6231,7 +6231,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix bug with calling plperl from plperlu or vice + Fix bug with calling plperl from plperlu or vice versa (Tom) @@ -6251,7 +6251,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Ensure that Perl arrays are properly converted to - PostgreSQL arrays when returned by a set-returning + PostgreSQL arrays when returned by a set-returning PL/Perl function (Andrew Dunstan, Abhijit Menon-Sen) @@ -6268,43 +6268,43 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix ecpg problem with comments in DECLARE - CURSOR statements (Michael) + Fix ecpg problem with comments in DECLARE + CURSOR statements (Michael) - Fix ecpg to not treat recently-added keywords as + Fix ecpg to not treat recently-added keywords as reserved words (Tom) - This affected the keywords CALLED, CATALOG, - DEFINER, ENUM, FOLLOWING, - INVOKER, OPTIONS, PARTITION, - PRECEDING, RANGE, SECURITY, - SERVER, UNBOUNDED, and WRAPPER. + This affected the keywords CALLED, CATALOG, + DEFINER, ENUM, FOLLOWING, + INVOKER, OPTIONS, PARTITION, + PRECEDING, RANGE, SECURITY, + SERVER, UNBOUNDED, and WRAPPER. - Re-allow regular expression special characters in psql's - \df function name parameter (Tom) + Re-allow regular expression special characters in psql's + \df function name parameter (Tom) - In contrib/fuzzystrmatch, correct the calculation of - levenshtein distances with non-default costs (Marcin Mank) + In contrib/fuzzystrmatch, correct the calculation of + levenshtein distances with non-default costs (Marcin Mank) - In contrib/pg_standby, disable triggering failover with a + In contrib/pg_standby, disable triggering failover with a signal on Windows (Fujii Masao) @@ -6316,35 +6316,35 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Put FREEZE and VERBOSE options in the right - order in the VACUUM command that - contrib/vacuumdb produces (Heikki) + Put FREEZE and VERBOSE options in the right + order in the VACUUM command that + contrib/vacuumdb produces (Heikki) - Fix possible leak of connections when contrib/dblink + Fix possible leak of connections when contrib/dblink encounters an error (Tatsuhito Kasahara) - Ensure psql's flex module is compiled with the correct + Ensure psql's flex module is compiled with the correct system header definitions (Tom) This fixes build failures on platforms where - --enable-largefile causes incompatible changes in the + --enable-largefile causes incompatible changes in the generated code. - Make the postmaster ignore any application_name parameter in + Make the postmaster ignore any application_name parameter in connection request packets, to improve compatibility with future libpq versions (Tom) @@ -6357,14 +6357,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - This includes adding IDT to the default + This includes adding IDT to the default timezone abbreviation set. - Update time zone data files to tzdata release 2009s + Update time zone data files to tzdata release 2009s for DST law changes in Antarctica, Argentina, Bangladesh, Fiji, Novokuznetsk, Pakistan, Palestine, Samoa, Syria; also historical corrections for Hong Kong. @@ -6418,7 +6418,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix cannot make new WAL entries during recovery error (Tom) + Fix cannot make new WAL entries during recovery error (Tom) @@ -6435,39 +6435,39 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Disallow RESET ROLE and RESET SESSION - AUTHORIZATION inside security-definer functions (Tom, Heikki) + Disallow RESET ROLE and RESET SESSION + AUTHORIZATION inside security-definer functions (Tom, Heikki) This covers a case that was missed in the previous patch that - disallowed SET ROLE and SET SESSION - AUTHORIZATION inside security-definer functions. + disallowed SET ROLE and SET SESSION + AUTHORIZATION inside security-definer functions. (See CVE-2007-6600) - Make LOAD of an already-loaded loadable module + Make LOAD of an already-loaded loadable module into a no-op (Tom) - Formerly, LOAD would attempt to unload and re-load the + Formerly, LOAD would attempt to unload and re-load the module, but this is unsafe and not all that useful. - Make window function PARTITION BY and ORDER BY + Make window function PARTITION BY and ORDER BY items always be interpreted as simple expressions (Tom) In 8.4.0 these lists were parsed following the rules used for - top-level GROUP BY and ORDER BY lists. + top-level GROUP BY and ORDER BY lists. But this was not correct per the SQL standard, and it led to possible circularity. @@ -6479,8 +6479,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - These led to wrong query results in some cases where IN - or EXISTS was used together with another join. + These led to wrong query results in some cases where IN + or EXISTS was used together with another join. @@ -6492,8 +6492,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE An example is - SELECT COUNT(ss.*) FROM ... LEFT JOIN (SELECT ...) ss ON .... - Here, ss.* would be treated as ROW(NULL,NULL,...) + SELECT COUNT(ss.*) FROM ... LEFT JOIN (SELECT ...) ss ON .... + Here, ss.* would be treated as ROW(NULL,NULL,...) for null-extended join rows, which is not the same as a simple NULL. 
Now it is treated as a simple NULL. @@ -6506,7 +6506,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE This bug led to the often-reported could not reattach - to shared memory error message. + to shared memory error message. @@ -6530,36 +6530,36 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Ensure that a fast shutdown request will forcibly terminate - open sessions, even if a smart shutdown was already in progress + Ensure that a fast shutdown request will forcibly terminate + open sessions, even if a smart shutdown was already in progress (Fujii Masao) - Avoid memory leak for array_agg() in GROUP BY + Avoid memory leak for array_agg() in GROUP BY queries (Tom) - Treat to_char(..., 'TH') as an uppercase ordinal - suffix with 'HH'/'HH12' (Heikki) + Treat to_char(..., 'TH') as an uppercase ordinal + suffix with 'HH'/'HH12' (Heikki) - It was previously handled as 'th' (lowercase). + It was previously handled as 'th' (lowercase). Include the fractional part in the result of - EXTRACT(second) and - EXTRACT(milliseconds) for - time and time with time zone inputs (Tom) + EXTRACT(second) and + EXTRACT(milliseconds) for + time and time with time zone inputs (Tom) @@ -6570,8 +6570,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix overflow for INTERVAL 'x ms' - when x is more than 2 million and integer + Fix overflow for INTERVAL 'x ms' + when x is more than 2 million and integer datetimes are in use (Alex Hunsaker) @@ -6589,13 +6589,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix a typo that disabled commit_delay (Jeff Janes) + Fix a typo that disabled commit_delay (Jeff Janes) - Output early-startup messages to postmaster.log if the + Output early-startup messages to postmaster.log if the server is started in silent mode (Tom) @@ -6619,33 +6619,33 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix pg_ctl to not go into an infinite loop if - postgresql.conf is empty (Jeff Davis) + Fix pg_ctl to not go into an infinite loop if + postgresql.conf is empty (Jeff Davis) - Fix several errors in pg_dump's - --binary-upgrade mode (Bruce, Tom) + Fix several errors in pg_dump's + --binary-upgrade mode (Bruce, Tom) - pg_dump --binary-upgrade is used by pg_migrator. + pg_dump --binary-upgrade is used by pg_migrator. - Fix contrib/xml2's xslt_process() to + Fix contrib/xml2's xslt_process() to properly handle the maximum number of parameters (twenty) (Tom) - Improve robustness of libpq's code to recover - from errors during COPY FROM STDIN (Tom) + Improve robustness of libpq's code to recover + from errors during COPY FROM STDIN (Tom) @@ -6658,14 +6658,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Work around gcc bug that causes floating-point exception - instead of division by zero on some platforms (Tom) + Work around gcc bug that causes floating-point exception + instead of division by zero on some platforms (Tom) - Update time zone data files to tzdata release 2009l + Update time zone data files to tzdata release 2009l for DST law changes in Bangladesh, Egypt, Mauritius. @@ -6687,7 +6687,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Overview - After many years of development, PostgreSQL has + After many years of development, PostgreSQL has become feature-complete in many areas. 
This release shows a targeted approach to adding features (e.g., authentication, monitoring, space reuse), and adds capabilities defined in the @@ -6742,7 +6742,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improved join performance for EXISTS and NOT EXISTS queries + Improved join performance for EXISTS and NOT EXISTS queries @@ -6825,15 +6825,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Previously this was selected by configure's - option. To retain + the old behavior, build with . - Remove ipcclean utility command (Bruce) + Remove ipcclean utility command (Bruce) @@ -6853,50 +6853,50 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Change default setting for - log_min_messages to warning (previously - it was notice) to reduce log file volume (Tom) + log_min_messages to warning (previously + it was notice) to reduce log file volume (Tom) - Change default setting for max_prepared_transactions to + Change default setting for max_prepared_transactions to zero (previously it was 5) (Tom) - Make debug_print_parse, debug_print_rewritten, - and debug_print_plan - output appear at LOG message level, not - DEBUG1 as formerly (Tom) + Make debug_print_parse, debug_print_rewritten, + and debug_print_plan + output appear at LOG message level, not + DEBUG1 as formerly (Tom) - Make debug_pretty_print default to on (Tom) + Make debug_pretty_print default to on (Tom) - Remove explain_pretty_print parameter (no longer needed) (Tom) + Remove explain_pretty_print parameter (no longer needed) (Tom) - Make log_temp_files settable by superusers only, like other + Make log_temp_files settable by superusers only, like other logging options (Simon Riggs) - Remove automatic appending of the epoch timestamp when no % - escapes are present in log_filename (Robert Haas) + Remove automatic appending of the epoch timestamp when no % + escapes are present in log_filename (Robert Haas) @@ -6907,22 +6907,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove log_restartpoints from recovery.conf; - instead use log_checkpoints (Simon) + Remove log_restartpoints from recovery.conf; + instead use log_checkpoints (Simon) - Remove krb_realm and krb_server_hostname; - these are now set in pg_hba.conf instead (Magnus) + Remove krb_realm and krb_server_hostname; + these are now set in pg_hba.conf instead (Magnus) There are also significant changes in pg_hba.conf, + linkend="release-8-4-pg-hba-conf">pg_hba.conf, as described below. @@ -6938,12 +6938,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Change TRUNCATE and LOCK to + Change TRUNCATE and LOCK to apply to child tables of the specified table(s) (Peter) - These commands now accept an ONLY option that prevents + These commands now accept an ONLY option that prevents processing child tables; this option must be used if the old behavior is needed. @@ -6951,8 +6951,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - SELECT DISTINCT and - UNION/INTERSECT/EXCEPT + SELECT DISTINCT and + UNION/INTERSECT/EXCEPT no longer always produce sorted output (Tom) @@ -6961,17 +6961,17 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE by means of Sort/Unique processing (i.e., sort then remove adjacent duplicates). Now they can be implemented by hashing, which will not produce sorted output. 
If an application relied on the output being - in sorted order, the recommended fix is to add an ORDER BY + in sorted order, the recommended fix is to add an ORDER BY clause. As a short-term workaround, the previous behavior can be - restored by disabling enable_hashagg, but that is a very - performance-expensive fix. SELECT DISTINCT ON never uses + restored by disabling enable_hashagg, but that is a very + performance-expensive fix. SELECT DISTINCT ON never uses hashing, however, so its behavior is unchanged. - Force child tables to inherit CHECK constraints from parents + Force child tables to inherit CHECK constraints from parents (Alex Hunsaker, Nikhil Sontakke, Tom) @@ -6985,14 +6985,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Disallow negative LIMIT or OFFSET + Disallow negative LIMIT or OFFSET values, rather than treating them as zero (Simon) - Disallow LOCK TABLE outside a transaction block + Disallow LOCK TABLE outside a transaction block (Tom) @@ -7004,12 +7004,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Sequences now contain an additional start_value column + Sequences now contain an additional start_value column (Zoltan Boszormenyi) - This supports ALTER SEQUENCE ... RESTART. + This supports ALTER SEQUENCE ... RESTART. @@ -7025,14 +7025,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make numeric zero raised to a fractional power return - 0, rather than throwing an error, and make - numeric zero raised to the zero power return 1, + Make numeric zero raised to a fractional power return + 0, rather than throwing an error, and make + numeric zero raised to the zero power return 1, rather than error (Bruce) - This matches the longstanding float8 behavior. + This matches the longstanding float8 behavior. @@ -7042,7 +7042,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - The changed behavior is more IEEE-standard + The changed behavior is more IEEE-standard compliant. @@ -7050,7 +7050,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Throw an error if an escape character is the last character in - a LIKE pattern (i.e., it has nothing to escape) (Tom) + a LIKE pattern (i.e., it has nothing to escape) (Tom) @@ -7061,8 +7061,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove ~=~ and ~<>~ operators - formerly used for LIKE index comparisons (Tom) + Remove ~=~ and ~<>~ operators + formerly used for LIKE index comparisons (Tom) @@ -7072,7 +7072,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - xpath() now passes its arguments to libxml + xpath() now passes its arguments to libxml without any changes (Andrew) @@ -7085,7 +7085,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make xmlelement() format attribute values just like + Make xmlelement() format attribute values just like content values (Peter) @@ -7098,13 +7098,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Rewrite memory management for libxml-using functions + Rewrite memory management for libxml-using functions (Tom) This change should avoid some compatibility problems with use of - libxml in PL/Perl and other add-on code. + libxml in PL/Perl and other add-on code. @@ -7129,8 +7129,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - DateStyle no longer controls interval output - formatting; instead there is a new variable IntervalStyle + DateStyle no longer controls interval output + formatting; instead there is a new variable IntervalStyle (Ron Mayer) @@ -7138,7 +7138,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve consistency of handling of fractional seconds in - timestamp and interval output (Ron Mayer) + timestamp and interval output (Ron Mayer) @@ -7149,15 +7149,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make to_char()'s localized month/day names depend - on LC_TIME, not LC_MESSAGES (Euler + Make to_char()'s localized month/day names depend + on LC_TIME, not LC_MESSAGES (Euler Taveira de Oliveira) - Cause to_date() and to_timestamp() + Cause to_date() and to_timestamp() to more consistently report errors for invalid input (Brendan Jurd) @@ -7171,15 +7171,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix to_timestamp() to not require upper/lower case - matching for meridian (AM/PM) and era - (BC/AD) format designations (Brendan + Fix to_timestamp() to not require upper/lower case + matching for meridian (AM/PM) and era + (BC/AD) format designations (Brendan Jurd) - For example, input value ad now matches the format - string AD. + For example, input value ad now matches the format + string AD. @@ -7217,8 +7217,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow SELECT DISTINCT and - UNION/INTERSECT/EXCEPT to + Allow SELECT DISTINCT and + UNION/INTERSECT/EXCEPT to use hashing (Tom) @@ -7235,12 +7235,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE This work formalizes our previous ad-hoc treatment of IN - (SELECT ...) clauses, and extends it to EXISTS and - NOT EXISTS clauses. It should result in significantly - better planning of EXISTS and NOT EXISTS - queries. In general, logically equivalent IN and - EXISTS clauses should now have similar performance, - whereas previously IN often won. + (SELECT ...) clauses, and extends it to EXISTS and + NOT EXISTS clauses. It should result in significantly + better planning of EXISTS and NOT EXISTS + queries. In general, logically equivalent IN and + EXISTS clauses should now have similar performance, + whereas previously IN often won. @@ -7258,7 +7258,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improve the performance of text_position() and + Improve the performance of text_position() and related functions by using Boyer-Moore-Horspool searching (David Rowley) @@ -7283,26 +7283,26 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Increase the default value of default_statistics_target - from 10 to 100 (Greg Sabino Mullane, + Increase the default value of default_statistics_target + from 10 to 100 (Greg Sabino Mullane, Tom) - The maximum value was also increased from 1000 to - 10000. + The maximum value was also increased from 1000 to + 10000. - Perform constraint_exclusion checking by default - in queries involving inheritance or UNION ALL (Tom) + Perform constraint_exclusion checking by default + in queries involving inheritance or UNION ALL (Tom) - A new constraint_exclusion setting, - partition, was added to specify this behavior. + A new constraint_exclusion setting, + partition, was added to specify this behavior. @@ -7313,15 +7313,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE The amount of read-ahead is controlled by - effective_io_concurrency. This feature is available only - if the kernel has posix_fadvise() support. + effective_io_concurrency. This feature is available only + if the kernel has posix_fadvise() support. - Inline simple set-returning SQL functions in - FROM clauses (Richard Rowell) + Inline simple set-returning SQL functions in + FROM clauses (Richard Rowell) @@ -7336,7 +7336,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Reduce volume of temporary data in multi-batch hash joins - by suppressing physical tlist optimization (Michael + by suppressing physical tlist optimization (Michael Henderson, Ramon Lawrence) @@ -7344,7 +7344,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Avoid waiting for idle-in-transaction sessions during - CREATE INDEX CONCURRENTLY (Simon) + CREATE INDEX CONCURRENTLY (Simon) @@ -7368,15 +7368,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Convert many postgresql.conf settings to enumerated - values so that pg_settings can display the valid + Convert many postgresql.conf settings to enumerated + values so that pg_settings can display the valid values (Magnus) - Add cursor_tuple_fraction parameter to control the + Add cursor_tuple_fraction parameter to control the fraction of a cursor's rows that the planner assumes will be fetched (Robert Hell) @@ -7385,7 +7385,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Allow underscores in the names of custom variable - classes in postgresql.conf (Tom) + classes in postgresql.conf (Tom) @@ -7399,12 +7399,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove support for the (insecure) crypt authentication method + Remove support for the (insecure) crypt authentication method (Magnus) - This effectively obsoletes pre-PostgreSQL 7.2 client + This effectively obsoletes pre-PostgreSQL 7.2 client libraries, as there is no longer any non-plaintext password method that they can use. @@ -7412,21 +7412,21 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Support regular expressions in pg_ident.conf + Support regular expressions in pg_ident.conf (Magnus) - Allow Kerberos/GSSAPI parameters + Allow Kerberos/GSSAPI parameters to be changed without restarting the postmaster (Magnus) - Support SSL certificate chains in server certificate + Support SSL certificate chains in server certificate file (Andrew Gierth) @@ -7440,8 +7440,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Report appropriate error message for combination of MD5 - authentication and db_user_namespace enabled (Bruce) + Report appropriate error message for combination of MD5 + authentication and db_user_namespace enabled (Bruce) @@ -7449,26 +7449,26 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <filename>pg_hba.conf</> + <filename>pg_hba.conf</filename> - Change all authentication options to use name=value + Change all authentication options to use name=value syntax (Magnus) - This makes incompatible changes to the ldap, - pam and ident authentication methods. All - pg_hba.conf entries with these methods need to be + This makes incompatible changes to the ldap, + pam and ident authentication methods. All + pg_hba.conf entries with these methods need to be rewritten using the new format. 
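Looking back at the pg_settings change noted above, a hedged sketch of how the newly exposed enumerated values can be inspected (assuming the 8.4 pg_settings view's enumvals column; output varies by installation):

    -- List every setting that now advertises its valid enumerated values:
    SELECT name, enumvals
    FROM pg_settings
    WHERE enumvals IS NOT NULL
    ORDER BY name;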
- Remove the ident sameuser option, instead making that + Remove the ident sameuser option, instead making that behavior the default if no usermap is specified (Magnus) @@ -7480,14 +7480,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Previously a usermap was only supported for ident + Previously a usermap was only supported for ident authentication. - Add clientcert option to control requesting of a + Add clientcert option to control requesting of a client certificate (Magnus) @@ -7499,13 +7499,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add cert authentication method to allow - user authentication via SSL certificates + Add cert authentication method to allow + user authentication via SSL certificates (Magnus) - Previously SSL certificates could only verify that + Previously SSL certificates could only verify that the client had access to a certificate, not authenticate a user. @@ -7513,20 +7513,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow krb5, gssapi and sspi - realm and krb5 host settings to be specified in - pg_hba.conf (Magnus) + Allow krb5, gssapi and sspi + realm and krb5 host settings to be specified in + pg_hba.conf (Magnus) - These override the settings in postgresql.conf. + These override the settings in postgresql.conf. - Add include_realm parameter for krb5, - gssapi, and sspi methods (Magnus) + Add include_realm parameter for krb5, + gssapi, and sspi methods (Magnus) @@ -7537,7 +7537,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Parse pg_hba.conf fully when it is loaded, + Parse pg_hba.conf fully when it is loaded, so that errors are reported immediately (Magnus) @@ -7552,15 +7552,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Show all parsing errors in pg_hba.conf instead of + Show all parsing errors in pg_hba.conf instead of aborting after the first one (Selena Deckelmann) - Support ident authentication over Unix-domain sockets - on Solaris (Garick Hamlin) + Support ident authentication over Unix-domain sockets + on Solaris (Garick Hamlin) @@ -7574,7 +7574,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Provide an option to pg_start_backup() to force its + Provide an option to pg_start_backup() to force its implied checkpoint to finish as quickly as possible (Tom) @@ -7586,13 +7586,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make pg_stop_backup() wait for modified WAL + Make pg_stop_backup() wait for modified WAL files to be archived (Simon) This guarantees that the backup is valid at the time - pg_stop_backup() completes. + pg_stop_backup() completes. @@ -7606,22 +7606,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Delay smart shutdown while a continuous archiving base backup + Delay smart shutdown while a continuous archiving base backup is in progress (Laurenz Albe) - Cancel a continuous archiving base backup if fast shutdown + Cancel a continuous archiving base backup if fast shutdown is requested (Laurenz Albe) - Allow recovery.conf boolean variables to take the - same range of string values as postgresql.conf + Allow recovery.conf boolean variables to take the + same range of string values as postgresql.conf boolean variables (Bruce) @@ -7637,20 +7637,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Add pg_conf_load_time() to report when - the PostgreSQL configuration files were last loaded + Add pg_conf_load_time() to report when + the PostgreSQL configuration files were last loaded (George Gensure) - Add pg_terminate_backend() to safely terminate a - backend (the SIGTERM signal works also) (Tom, Bruce) + Add pg_terminate_backend() to safely terminate a + backend (the SIGTERM signal works also) (Tom, Bruce) - While it's always been possible to SIGTERM a single + While it's always been possible to SIGTERM a single backend, this was previously considered unsupported; and testing of the case found some bugs that are now fixed. @@ -7664,30 +7664,30 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Function statistics appear in a new system view, - pg_stat_user_functions. Tracking is controlled - by the new parameter track_functions. + pg_stat_user_functions. Tracking is controlled + by the new parameter track_functions. Allow specification of the maximum query string size in - pg_stat_activity via new - track_activity_query_size parameter (Thomas Lee) + pg_stat_activity via new + track_activity_query_size parameter (Thomas Lee) - Increase the maximum line length sent to syslog, in + Increase the maximum line length sent to syslog, in hopes of improving performance (Tom) - Add read-only configuration variables segment_size, - wal_block_size, and wal_segment_size + Add read-only configuration variables segment_size, + wal_block_size, and wal_segment_size (Bernd Helmle) @@ -7701,7 +7701,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add pg_stat_get_activity(pid) function to return + Add pg_stat_get_activity(pid) function to return information about a specific process id (Magnus) @@ -7709,14 +7709,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Allow the location of the server's statistics file to be specified - via stats_temp_directory (Magnus) + via stats_temp_directory (Magnus) This allows the statistics file to be placed in a - RAM-resident directory to reduce I/O requirements. + RAM-resident directory to reduce I/O requirements. On startup/shutdown, the file is copied to its traditional location - ($PGDATA/global/) so it is preserved across restarts. + ($PGDATA/global/) so it is preserved across restarts. @@ -7732,45 +7732,45 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for WINDOW functions (Hitoshi Harada) + Add support for WINDOW functions (Hitoshi Harada) - Add support for WITH clauses (CTEs), including WITH - RECURSIVE (Yoshiyuki Asaba, Tatsuo Ishii, Tom) + Add support for WITH clauses (CTEs), including WITH + RECURSIVE (Yoshiyuki Asaba, Tatsuo Ishii, Tom) - Add TABLE command (Peter) + Add TABLE command (Peter) - TABLE tablename is a SQL standard short-hand for - SELECT * FROM tablename. + TABLE tablename is a SQL standard short-hand for + SELECT * FROM tablename. - Allow AS to be optional when specifying a - SELECT (or RETURNING) column output + Allow AS to be optional when specifying a + SELECT (or RETURNING) column output label (Hiroshi Saito) This works so long as the column label is not any - PostgreSQL keyword; otherwise AS is still + PostgreSQL keyword; otherwise AS is still needed. - Support set-returning functions in SELECT result lists + Support set-returning functions in SELECT result lists even for functions that return their result via a tuplestore (Tom) @@ -7789,22 +7789,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Allow SELECT FOR UPDATE/SHARE to work + Allow SELECT FOR UPDATE/SHARE to work on inheritance trees (Tom) - Add infrastructure for SQL/MED (Martin Pihlak, + Add infrastructure for SQL/MED (Martin Pihlak, Peter) - There are no remote or external SQL/MED capabilities + There are no remote or external SQL/MED capabilities yet, but this change provides a standardized and future-proof system for managing connection information for modules like - dblink and plproxy. + dblink and plproxy. @@ -7827,7 +7827,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE This allows constructs such as - row(1, 1.1) = any (array[row(7, 7.7), row(1, 1.0)]). + row(1, 1.1) = any (array[row(7, 7.7), row(1, 1.0)]). This is particularly useful in recursive queries. @@ -7835,14 +7835,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Add support for Unicode string literal and identifier specifications - using code points, e.g. U&'d\0061t\+000061' + using code points, e.g. U&'d\0061t\+000061' (Peter) - Reject \000 in string literals and COPY data + Reject \000 in string literals and COPY data (Tom) @@ -7866,37 +7866,37 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <command>TRUNCATE</> + <command>TRUNCATE</command> - Support statement-level ON TRUNCATE triggers (Simon) + Support statement-level ON TRUNCATE triggers (Simon) - Add RESTART/CONTINUE IDENTITY options - for TRUNCATE TABLE + Add RESTART/CONTINUE IDENTITY options + for TRUNCATE TABLE (Zoltan Boszormenyi) The start value of a sequence can be changed by ALTER - SEQUENCE START WITH. + SEQUENCE START WITH. - Allow TRUNCATE tab1, tab1 to succeed (Bruce) + Allow TRUNCATE tab1, tab1 to succeed (Bruce) - Add a separate TRUNCATE permission (Robert Haas) + Add a separate TRUNCATE permission (Robert Haas) @@ -7905,38 +7905,38 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <command>EXPLAIN</> + <command>EXPLAIN</command> - Make EXPLAIN VERBOSE show the output columns of each + Make EXPLAIN VERBOSE show the output columns of each plan node (Tom) - Previously EXPLAIN VERBOSE output an internal + Previously EXPLAIN VERBOSE output an internal representation of the query plan. (That behavior is now - available via debug_print_plan.) + available via debug_print_plan.) - Make EXPLAIN identify subplans and initplans with + Make EXPLAIN identify subplans and initplans with individual labels (Tom) - Make EXPLAIN honor debug_print_plan (Tom) + Make EXPLAIN honor debug_print_plan (Tom) - Allow EXPLAIN on CREATE TABLE AS (Peter) + Allow EXPLAIN on CREATE TABLE AS (Peter) @@ -7945,25 +7945,25 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <literal>LIMIT</>/<literal>OFFSET</> + <literal>LIMIT</literal>/<literal>OFFSET</literal> - Allow sub-selects in LIMIT and OFFSET (Tom) + Allow sub-selects in LIMIT and OFFSET (Tom) - Add SQL-standard syntax for - LIMIT/OFFSET capabilities (Peter) + Add SQL-standard syntax for + LIMIT/OFFSET capabilities (Peter) To wit, OFFSET num {ROW|ROWS} FETCH {FIRST|NEXT} [num] {ROW|ROWS} - ONLY. + ONLY. @@ -7986,20 +7986,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Refactor multi-object DROP operations to reduce the - need for CASCADE (Alex Hunsaker) + Refactor multi-object DROP operations to reduce the + need for CASCADE (Alex Hunsaker) - For example, if table B has a dependency on table - A, the command DROP TABLE A, B no longer - requires the CASCADE option. 
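A runnable sketch of that example, with invented table names:

    CREATE TABLE a (id int PRIMARY KEY);
    CREATE TABLE b (a_id int REFERENCES a);   -- b depends on a
    -- Formerly this required CASCADE; in 8.4, dropping both objects
    -- in one command succeeds as written:
    DROP TABLE a, b;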
+ For example, if table B has a dependency on table + A, the command DROP TABLE A, B no longer + requires the CASCADE option. - Fix various problems with concurrent DROP commands + Fix various problems with concurrent DROP commands by ensuring that locks are taken before we begin to drop dependencies of an object (Tom) @@ -8007,15 +8007,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improve reporting of dependencies during DROP + Improve reporting of dependencies during DROP commands (Tom) - Add WITH [NO] DATA clause to CREATE TABLE - AS, per the SQL standard (Peter, Tom) + Add WITH [NO] DATA clause to CREATE TABLE + AS, per the SQL standard (Peter, Tom) @@ -8027,14 +8027,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow CREATE AGGREGATE to use an internal + Allow CREATE AGGREGATE to use an internal transition datatype (Tom) - Add LIKE clause to CREATE TYPE (Tom) + Add LIKE clause to CREATE TYPE (Tom) @@ -8045,7 +8045,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow specification of the type category and preferred + Allow specification of the type category and preferred status for user-defined base types (Tom) @@ -8057,7 +8057,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow CREATE OR REPLACE VIEW to add columns to the + Allow CREATE OR REPLACE VIEW to add columns to the end of a view (Robert Haas) @@ -8065,25 +8065,25 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <command>ALTER</> + <command>ALTER</command> - Add ALTER TYPE RENAME (Petr Jelinek) + Add ALTER TYPE RENAME (Petr Jelinek) - Add ALTER SEQUENCE ... RESTART (with no parameter) to + Add ALTER SEQUENCE ... RESTART (with no parameter) to reset a sequence to its initial value (Zoltan Boszormenyi) - Modify the ALTER TABLE syntax to allow all reasonable + Modify the ALTER TABLE syntax to allow all reasonable combinations for tables, indexes, sequences, and views (Tom) @@ -8093,28 +8093,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - ALTER SEQUENCE OWNER TO + ALTER SEQUENCE OWNER TO - ALTER VIEW ALTER COLUMN SET/DROP DEFAULT + ALTER VIEW ALTER COLUMN SET/DROP DEFAULT - ALTER VIEW OWNER TO + ALTER VIEW OWNER TO - ALTER VIEW SET SCHEMA + ALTER VIEW SET SCHEMA There is no actual new functionality here, but formerly - you had to say ALTER TABLE to do these things, + you had to say ALTER TABLE to do these things, which was confusing. @@ -8122,24 +8122,24 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Add support for the syntax ALTER TABLE ... ALTER COLUMN - ... SET DATA TYPE (Peter) + ... SET DATA TYPE (Peter) - This is SQL-standard syntax for functionality that + This is SQL-standard syntax for functionality that was already supported. - Make ALTER TABLE SET WITHOUT OIDS rewrite the table - to physically remove OID values (Tom) + Make ALTER TABLE SET WITHOUT OIDS rewrite the table + to physically remove OID values (Tom) - Also, add ALTER TABLE SET WITH OIDS to rewrite the - table to add OIDs. + Also, add ALTER TABLE SET WITH OIDS to rewrite the + table to add OIDs. @@ -8154,7 +8154,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve reporting of - CREATE/DROP/RENAME DATABASE + CREATE/DROP/RENAME DATABASE failure when uncommitted prepared transactions are the cause (Tom) @@ -8162,7 +8162,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Make LC_COLLATE and LC_CTYPE into + Make LC_COLLATE and LC_CTYPE into per-database settings (Radek Strnad, Heikki) @@ -8175,20 +8175,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve checks that the database encoding, collation - (LC_COLLATE), and character classes - (LC_CTYPE) match (Heikki, Tom) + (LC_COLLATE), and character classes + (LC_CTYPE) match (Heikki, Tom) Note in particular that a new database's encoding and locale - settings can be changed only when copying from template0. + settings can be changed only when copying from template0. This prevents possibly copying data that doesn't match the settings. - Add ALTER DATABASE SET TABLESPACE to move a database + Add ALTER DATABASE SET TABLESPACE to move a database to a new tablespace (Guillaume Lelarge, Bernd Helmle) @@ -8206,8 +8206,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add a VERBOSE option to the CLUSTER command and - clusterdb (Jim Cox) + Add a VERBOSE option to the CLUSTER command and + clusterdb (Jim Cox) @@ -8261,8 +8261,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - xxx_pattern_ops indexes can now be used for simple - equality comparisons, not only for LIKE (Tom) + xxx_pattern_ops indexes can now be used for simple + equality comparisons, not only for LIKE (Tom) @@ -8276,19 +8276,19 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove the requirement to use @@@ when doing - GIN weighted lookups on full text indexes (Tom, Teodor) + Remove the requirement to use @@@ when doing + GIN weighted lookups on full text indexes (Tom, Teodor) - The normal @@ text search operator can be used + The normal @@ text search operator can be used instead. - Add an optimizer selectivity function for @@ text + Add an optimizer selectivity function for @@ text search operations (Jan Urbanski) @@ -8302,7 +8302,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Support multi-column GIN indexes (Teodor Sigaev) + Support multi-column GIN indexes (Teodor Sigaev) @@ -8317,18 +8317,18 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <command>VACUUM</> + <command>VACUUM</command> - Track free space in separate per-relation fork files (Heikki) + Track free space in separate per-relation fork files (Heikki) - Free space discovered by VACUUM is now recorded in - *_fsm files, rather than in a fixed-sized shared memory - area. The max_fsm_pages and max_fsm_relations + Free space discovered by VACUUM is now recorded in + *_fsm files, rather than in a fixed-sized shared memory + area. The max_fsm_pages and max_fsm_relations settings have been removed, greatly simplifying administration of free space management. @@ -8341,16 +8341,16 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - This allows VACUUM to avoid scanning all of + This allows VACUUM to avoid scanning all of a table when only a portion of the table needs vacuuming. - The visibility map is stored in per-relation fork files. + The visibility map is stored in per-relation fork files. - Add vacuum_freeze_table_age parameter to control - when VACUUM should ignore the visibility map and + Add vacuum_freeze_table_age parameter to control + when VACUUM should ignore the visibility map and do a full table scan to freeze tuples (Heikki) @@ -8361,15 +8361,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - This improves VACUUM's ability to reclaim space + This improves VACUUM's ability to reclaim space in the presence of long-running transactions. - Add ability to specify per-relation autovacuum and TOAST - parameters in CREATE TABLE (Alvaro, Euler Taveira de + Add ability to specify per-relation autovacuum and TOAST + parameters in CREATE TABLE (Alvaro, Euler Taveira de Oliveira) @@ -8380,7 +8380,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add --freeze option to vacuumdb + Add --freeze option to vacuumdb (Bruce) @@ -8397,20 +8397,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add a CaseSensitive option for text search synonym + Add a CaseSensitive option for text search synonym dictionaries (Simon) - Improve the precision of NUMERIC division (Tom) + Improve the precision of NUMERIC division (Tom) - Add basic arithmetic operators for int2 with int8 + Add basic arithmetic operators for int2 with int8 (Tom) @@ -8421,22 +8421,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow UUID input to accept an optional hyphen after + Allow UUID input to accept an optional hyphen after every fourth digit (Robert Haas) - Allow on/off as input for the boolean data type + Allow on/off as input for the boolean data type (Itagaki Takahiro) - Allow spaces around NaN in the input string for - type numeric (Sam Mason) + Allow spaces around NaN in the input string for + type numeric (Sam Mason) @@ -8448,53 +8448,53 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Reject year 0 BC and years 000 and - 0000 (Tom) + Reject year 0 BC and years 000 and + 0000 (Tom) - Previously these were interpreted as 1 BC. - (Note: years 0 and 00 are still assumed to be + Previously these were interpreted as 1 BC. + (Note: years 0 and 00 are still assumed to be the year 2000.) - Include SGT (Singapore time) in the default list of + Include SGT (Singapore time) in the default list of known time zone abbreviations (Tom) - Support infinity and -infinity as - values of type date (Tom) + Support infinity and -infinity as + values of type date (Tom) - Make parsing of interval literals more standard-compliant + Make parsing of interval literals more standard-compliant (Tom, Ron Mayer) - For example, INTERVAL '1' YEAR now does what it's + For example, INTERVAL '1' YEAR now does what it's supposed to. - Allow interval fractional-seconds precision to be specified - after the second keyword, for SQL standard + Allow interval fractional-seconds precision to be specified + after the second keyword, for SQL standard compliance (Tom) Formerly the precision had to be specified after the keyword - interval. (For backwards compatibility, this syntax is still + interval. (For backwards compatibility, this syntax is still supported, though deprecated.) Data type definitions will now be output using the standard format. @@ -8502,26 +8502,26 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Support the ISO 8601 interval syntax (Ron + Support the ISO 8601 interval syntax (Ron Mayer, Kevin Grittner) - For example, INTERVAL 'P1Y2M3DT4H5M6.7S' is now + For example, INTERVAL 'P1Y2M3DT4H5M6.7S' is now supported. - Add IntervalStyle parameter - which controls how interval values are output (Ron Mayer) + Add IntervalStyle parameter + which controls how interval values are output (Ron Mayer) - Valid values are: postgres, postgres_verbose, - sql_standard, iso_8601. 
This setting also - controls the handling of negative interval input when only + Valid values are: postgres, postgres_verbose, + sql_standard, iso_8601. This setting also + controls the handling of negative interval input when only some fields have positive/negative designations. @@ -8529,7 +8529,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve consistency of handling of fractional seconds in - timestamp and interval output (Ron Mayer) + timestamp and interval output (Ron Mayer) @@ -8543,38 +8543,38 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improve the handling of casts applied to ARRAY[] - constructs, such as ARRAY[...]::integer[] + Improve the handling of casts applied to ARRAY[] + constructs, such as ARRAY[...]::integer[] (Brendan Jurd) - Formerly PostgreSQL attempted to determine a data type - for the ARRAY[] construct without reference to the ensuing + Formerly PostgreSQL attempted to determine a data type + for the ARRAY[] construct without reference to the ensuing cast. This could fail unnecessarily in many cases, in particular when - the ARRAY[] construct was empty or contained only - ambiguous entries such as NULL. Now the cast is consulted + the ARRAY[] construct was empty or contained only + ambiguous entries such as NULL. Now the cast is consulted to determine the type that the array elements must be. - Make SQL-syntax ARRAY dimensions optional - to match the SQL standard (Peter) + Make SQL-syntax ARRAY dimensions optional + to match the SQL standard (Peter) - Add array_ndims() to return the number + Add array_ndims() to return the number of dimensions of an array (Robert Haas) - Add array_length() to return the length + Add array_length() to return the length of an array for a specified dimension (Jim Nasby, Robert Haas, Peter Eisentraut) @@ -8582,7 +8582,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add aggregate function array_agg(), which + Add aggregate function array_agg(), which returns all aggregated values as a single array (Robert Haas, Jeff Davis, Peter) @@ -8590,25 +8590,25 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add unnest(), which converts an array to + Add unnest(), which converts an array to individual row values (Tom) - This is the opposite of array_agg(). + This is the opposite of array_agg(). - Add array_fill() to create arrays initialized with + Add array_fill() to create arrays initialized with a value (Pavel Stehule) - Add generate_subscripts() to simplify generating + Add generate_subscripts() to simplify generating the range of an array's subscripts (Pavel Stehule) @@ -8618,19 +8618,19 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Wide-Value Storage (<acronym>TOAST</>) + Wide-Value Storage (<acronym>TOAST</acronym>) - Consider TOAST compression on values as short as + Consider TOAST compression on values as short as 32 bytes (previously 256 bytes) (Greg Stark) - Require 25% minimum space savings before using TOAST + Require 25% minimum space savings before using TOAST compression (previously 20% for small values and any-savings-at-all for large values) (Greg) @@ -8638,7 +8638,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE - Improve TOAST heuristics for rows that have a mix of large + Improve TOAST heuristics for rows that have a mix of large and small toastable fields, so that we prefer to push large values out of line and don't compress small values unnecessarily (Greg, Tom) @@ -8656,52 +8656,52 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Document that setseed() allows values from - -1 to 1 (not just 0 to - 1), and enforce the valid range (Kris Jurka) + Document that setseed() allows values from + -1 to 1 (not just 0 to + 1), and enforce the valid range (Kris Jurka) - Add server-side function lo_import(filename, oid) + Add server-side function lo_import(filename, oid) (Tatsuo) - Add quote_nullable(), which behaves like - quote_literal() but returns the string NULL for + Add quote_nullable(), which behaves like + quote_literal() but returns the string NULL for a null argument (Brendan Jurd) - Improve full text search headline() function to + Improve full text search headline() function to allow extracting several fragments of text (Sushant Sinha) - Add suppress_redundant_updates_trigger() trigger + Add suppress_redundant_updates_trigger() trigger function to avoid overhead for non-data-changing updates (Andrew) - Add div(numeric, numeric) to perform numeric + Add div(numeric, numeric) to perform numeric division without rounding (Tom) - Add timestamp and timestamptz versions of - generate_series() (Hitoshi Harada) + Add timestamp and timestamptz versions of + generate_series() (Hitoshi Harada) @@ -8713,54 +8713,54 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Implement current_query() for use by functions + Implement current_query() for use by functions that need to know the currently running query (Tomas Doran) - Add pg_get_keywords() to return a list of the + Add pg_get_keywords() to return a list of the parser keywords (Dave Page) - Add pg_get_functiondef() to see a function's + Add pg_get_functiondef() to see a function's definition (Abhijit Menon-Sen) - Allow the second argument of pg_get_expr() to be zero + Allow the second argument of pg_get_expr() to be zero when deparsing an expression that does not contain variables (Tom) - Modify pg_relation_size() to use regclass + Modify pg_relation_size() to use regclass (Heikki) - pg_relation_size(data_type_name) no longer works. + pg_relation_size(data_type_name) no longer works. - Add boot_val and reset_val columns to - pg_settings output (Greg Smith) + Add boot_val and reset_val columns to + pg_settings output (Greg Smith) Add source file name and line number columns to - pg_settings output for variables set in a configuration + pg_settings output for variables set in a configuration file (Magnus, Alvaro) @@ -8771,26 +8771,26 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for CURRENT_CATALOG, - CURRENT_SCHEMA, SET CATALOG, SET - SCHEMA (Peter) + Add support for CURRENT_CATALOG, + CURRENT_SCHEMA, SET CATALOG, SET + SCHEMA (Peter) - These provide SQL-standard syntax for existing features. + These provide SQL-standard syntax for existing features. - Add pg_typeof() which returns the data type + Add pg_typeof() which returns the data type of any value (Brendan Jurd) - Make version() return information about whether + Make version() return information about whether the server is a 32- or 64-bit binary (Bruce) @@ -8798,7 +8798,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
FOR UPDATE Fix the behavior of information schema columns - is_insertable_into and is_updatable to + is_insertable_into and is_updatable to be consistent (Peter) @@ -8806,13 +8806,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Improve the behavior of information schema - datetime_precision columns (Peter) + datetime_precision columns (Peter) - These columns now show zero for date columns, and 6 - (the default precision) for time, timestamp, and - interval without a declared precision, rather than showing + These columns now show zero for date columns, and 6 + (the default precision) for time, timestamp, and + interval without a declared precision, rather than showing null as formerly. @@ -8820,28 +8820,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Convert remaining builtin set-returning functions to use - OUT parameters (Jaime Casanova) + OUT parameters (Jaime Casanova) This makes it possible to call these functions without specifying - a column list: pg_show_all_settings(), - pg_lock_status(), pg_prepared_xact(), - pg_prepared_statement(), pg_cursor() + a column list: pg_show_all_settings(), + pg_lock_status(), pg_prepared_xact(), + pg_prepared_statement(), pg_cursor() - Make pg_*_is_visible() and - has_*_privilege() functions return NULL + Make pg_*_is_visible() and + has_*_privilege() functions return NULL for invalid OIDs, rather than reporting an error (Tom) - Extend has_*_privilege() functions to allow inquiring + Extend has_*_privilege() functions to allow inquiring about the OR of multiple privileges in one call (Stephen Frost, Tom) @@ -8849,8 +8849,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add has_column_privilege() and - has_any_column_privilege() functions (Stephen + Add has_column_privilege() and + has_any_column_privilege() functions (Stephen Frost, Tom) @@ -8883,16 +8883,16 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add CREATE FUNCTION ... RETURNS TABLE clause (Pavel + Add CREATE FUNCTION ... RETURNS TABLE clause (Pavel Stehule) - Allow SQL-language functions to return the output - of an INSERT/UPDATE/DELETE - RETURNING clause (Tom) + Allow SQL-language functions to return the output + of an INSERT/UPDATE/DELETE + RETURNING clause (Tom) @@ -8906,38 +8906,38 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Support EXECUTE USING for easier insertion of data + Support EXECUTE USING for easier insertion of data values into a dynamic query string (Pavel Stehule) - Allow looping over the results of a cursor using a FOR + Allow looping over the results of a cursor using a FOR loop (Pavel Stehule) - Support RETURN QUERY EXECUTE (Pavel + Support RETURN QUERY EXECUTE (Pavel Stehule) - Improve the RAISE command (Pavel Stehule) + Improve the RAISE command (Pavel Stehule) - Support DETAIL and HINT fields + Support DETAIL and HINT fields - Support specification of the SQLSTATE error code + Support specification of the SQLSTATE error code @@ -8947,7 +8947,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow RAISE without parameters in an exception + Allow RAISE without parameters in an exception block to re-throw the current error @@ -8957,45 +8957,45 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Allow specification of SQLSTATE codes - in EXCEPTION lists (Pavel Stehule) + Allow specification of SQLSTATE codes + in EXCEPTION lists (Pavel Stehule) - This is useful for handling custom SQLSTATE codes. 
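A minimal PL/pgSQL sketch tying the RAISE and exception-list changes together (the function name and the SQLSTATE code P0042 are invented for illustration):

    CREATE OR REPLACE FUNCTION raise_demo() RETURNS void AS $$
    BEGIN
        -- 8.4 RAISE can attach an error code and auxiliary fields:
        RAISE EXCEPTION 'demo failure'
              USING ERRCODE = 'P0042', HINT = 'illustration only';
    EXCEPTION
        -- and 8.4 exception lists can name a specific SQLSTATE:
        WHEN SQLSTATE 'P0042' THEN
            RAISE NOTICE 'caught custom SQLSTATE P0042';
    END;
    $$ LANGUAGE plpgsql;

    SELECT raise_demo();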
+ This is useful for handling custom SQLSTATE codes. - Support the CASE statement (Pavel Stehule) + Support the CASE statement (Pavel Stehule) - Make RETURN QUERY set the special FOUND and - GET DIAGNOSTICS ROW_COUNT variables + Make RETURN QUERY set the special FOUND and + GET DIAGNOSTICS ROW_COUNT variables (Pavel Stehule) - Make FETCH and MOVE set the - GET DIAGNOSTICS ROW_COUNT variable + Make FETCH and MOVE set the + GET DIAGNOSTICS ROW_COUNT variable (Andrew Gierth) - Make EXIT without a label always exit the innermost + Make EXIT without a label always exit the innermost loop (Tom) - Formerly, if there were a BEGIN block more closely nested + Formerly, if there were a BEGIN block more closely nested than any loop, it would exit that block instead. The new behavior matches Oracle(TM) and is also what was previously stated by our own documentation. @@ -9009,11 +9009,11 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - In particular, the format string in RAISE now works + In particular, the format string in RAISE now works the same as any other string literal, including being subject - to standard_conforming_strings. This change also + to standard_conforming_strings. This change also fixes other cases in which valid commands would fail when - standard_conforming_strings is on. + standard_conforming_strings is on. @@ -9037,28 +9037,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix pg_ctl restart to preserve command-line arguments + Fix pg_ctl restart to preserve command-line arguments (Bruce) - Add -w/--no-password option that + Add -w/--no-password option that prevents password prompting in all utilities that have a - -W/--password option (Peter) + -W/--password option (Peter) - Remove - These options have had no effect since PostgreSQL + These options have had no effect since PostgreSQL 8.3. @@ -9066,41 +9066,41 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>psql</> + <application>psql</application> - Remove verbose startup banner; now just suggest help + Remove verbose startup banner; now just suggest help (Joshua Drake) - Make help show common backslash commands (Greg + Make help show common backslash commands (Greg Sabino Mullane) - Add \pset format wrapped mode to wrap output to the - screen width, or file/pipe output too if \pset columns + Add \pset format wrapped mode to wrap output to the + screen width, or file/pipe output too if \pset columns is set (Bryce Nesbitt) - Allow all supported spellings of boolean values in \pset, - rather than just on and off (Bruce) + Allow all supported spellings of boolean values in \pset, + rather than just on and off (Bruce) - Formerly, any string other than off was silently taken - to mean true. psql will now complain - about unrecognized spellings (but still take them as true). + Formerly, any string other than off was silently taken + to mean true. psql will now complain + about unrecognized spellings (but still take them as true). @@ -9130,8 +9130,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add optional on/off argument for - \timing (David Fetter) + Add optional on/off argument for + \timing (David Fetter) @@ -9144,20 +9144,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
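The PL/pgSQL additions above combine naturally. A minimal sketch, assuming a hypothetical table name and owner column, of EXECUTE ... USING together with the extended RAISE and SQLSTATE-based exception handling:

    CREATE FUNCTION set_owner(tab text, who text) RETURNS void AS $$
    BEGIN
        -- the data value is passed separately via USING, not interpolated
        EXECUTE 'UPDATE ' || quote_ident(tab) || ' SET owner = $1' USING who;
    EXCEPTION
        WHEN SQLSTATE '42P01' THEN      -- undefined_table, matched by code
            RAISE EXCEPTION 'no such table: %', tab
                USING HINT = 'check the table name',
                      ERRCODE = 'P0001';
    END;
    $$ LANGUAGE plpgsql;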
FOR UPDATE - Make \l show database access privileges (Andrew Gilligan) + Make \l show database access privileges (Andrew Gilligan) - Make \l+ show database sizes, if permissions + Make \l+ show database sizes, if permissions allow (Andrew Gilligan) - Add the \ef command to edit function definitions + Add the \ef command to edit function definitions (Abhijit Menon-Sen) @@ -9167,28 +9167,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>psql</> \d* commands + <application>psql</application> \d* commands - Make \d* commands that do not have a pattern argument - show system objects only if the S modifier is specified + Make \d* commands that do not have a pattern argument + show system objects only if the S modifier is specified (Greg Sabino Mullane, Bruce) The former behavior was inconsistent across different variants - of \d, and in most cases it provided no easy way to see + of \d, and in most cases it provided no easy way to see just user objects. - Improve \d* commands to work with older - PostgreSQL server versions (back to 7.4), + Improve \d* commands to work with older + PostgreSQL server versions (back to 7.4), not only the current server version (Guillaume Lelarge) @@ -9196,14 +9196,14 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make \d show foreign-key constraints that reference + Make \d show foreign-key constraints that reference the selected table (Kenneth D'Souza) - Make \d on a sequence show its column values + Make \d on a sequence show its column values (Euler Taveira de Oliveira) @@ -9211,43 +9211,43 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Add column storage type and other relation options to the - \d+ display (Gregory Stark, Euler Taveira de + \d+ display (Gregory Stark, Euler Taveira de Oliveira) - Show relation size in \dt+ output (Dickson S. + Show relation size in \dt+ output (Dickson S. Guedes) - Show the possible values of enum types in \dT+ + Show the possible values of enum types in \dT+ (David Fetter) - Allow \dC to accept a wildcard pattern, which matches + Allow \dC to accept a wildcard pattern, which matches either datatype involved in the cast (Tom) - Add a function type column to \df's output, and add + Add a function type column to \df's output, and add options to list only selected types of functions (David Fetter) - Make \df not hide functions that take or return - type cstring (Tom) + Make \df not hide functions that take or return + type cstring (Tom) @@ -9263,13 +9263,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>pg_dump</> + <application>pg_dump</application> - Add a --no-tablespaces option to - pg_dump/pg_dumpall/pg_restore + Add a --no-tablespaces option to + pg_dump/pg_dumpall/pg_restore so that dumps can be restored to clusters that have non-matching tablespace layouts (Gavin Roy) @@ -9277,23 +9277,23 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove These options were too frequently confused with the option to - select a database name in other PostgreSQL + select a database name in other PostgreSQL client applications. The functionality is still available, but you must now spell out the long option name - or . - Remove @@ -9305,15 +9305,15 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
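Taken together, the psql display changes above might be exercised like this (object names are hypothetical; the S modifier is now needed to see system objects):

    \l+                 -- databases, now with sizes and access privileges
    \dt+                -- user tables only, with sizes
    \dtS                -- add system tables back in
    \dT+ mood           -- list the values of an enum type
    \ef quote_nullable  -- edit (or just view) a function definition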
FOR UPDATE - Disable statement_timeout during dump and restore + Disable statement_timeout during dump and restore (Joshua Drake) - Add pg_dump/pg_dumpall option - (David Gould) @@ -9324,7 +9324,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Reorder pg_dump --data-only output + Reorder pg_dump --data-only output to dump tables referenced by foreign keys before the referencing tables (Tom) @@ -9332,27 +9332,27 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE This allows data loads when foreign keys are already present. If circular references make a safe ordering impossible, a - NOTICE is issued. + NOTICE is issued. - Allow pg_dump, pg_dumpall, and - pg_restore to use a specified role (Benedek + Allow pg_dump, pg_dumpall, and + pg_restore to use a specified role (Benedek László) - Allow pg_restore to use multiple concurrent + Allow pg_restore to use multiple concurrent connections to do the restore (Andrew) The number of concurrent connections is controlled by the option - --jobs. This is supported only for custom-format archives. + --jobs. This is supported only for custom-format archives. @@ -9366,24 +9366,24 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE Programming Tools - <application>libpq</> + <application>libpq</application> - Allow the OID to be specified when importing a large - object, via new function lo_import_with_oid() (Tatsuo) + Allow the OID to be specified when importing a large + object, via new function lo_import_with_oid() (Tatsuo) - Add events support (Andrew Chernow, Merlin Moncure) + Add events support (Andrew Chernow, Merlin Moncure) This adds the ability to register callbacks to manage private - data associated with PGconn and PGresult + data associated with PGconn and PGresult objects. @@ -9397,18 +9397,18 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make PQexecParams() and related functions return - PGRES_EMPTY_QUERY for an empty query (Tom) + Make PQexecParams() and related functions return + PGRES_EMPTY_QUERY for an empty query (Tom) - They previously returned PGRES_COMMAND_OK. + They previously returned PGRES_COMMAND_OK. - Document how to avoid the overhead of WSACleanup() + Document how to avoid the overhead of WSACleanup() on Windows (Andrew Chernow) @@ -9434,22 +9434,22 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>libpq</> <acronym>SSL</> (Secure Sockets Layer) + <title><application>libpq</application> <acronym>SSL</acronym> (Secure Sockets Layer) support - Fix certificate validation for SSL connections + Fix certificate validation for SSL connections (Magnus) - libpq now supports verifying both the certificate - and the name of the server when making SSL + libpq now supports verifying both the certificate + and the name of the server when making SSL connections. If a root certificate is not available to use for - verification, SSL connections will fail. The - sslmode parameter is used to enable certificate + verification, SSL connections will fail. The + sslmode parameter is used to enable certificate verification and set the level of checking. The default is still not to do any verification, allowing connections to SSL-enabled servers without requiring a root certificate on the @@ -9463,7 +9463,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
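A plausible command-line sketch of the new parallel restore (database and file names are hypothetical; --jobs requires a custom-format archive, per the note above):

    pg_dump -Fc mydb > mydb.dump
    pg_restore --jobs=4 --no-tablespaces -d newdb mydb.dump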
FOR UPDATE - If a certificate CN starts with *, it will + If a certificate CN starts with *, it will be treated as a wildcard when matching the hostname, allowing the use of the same certificate for multiple servers. @@ -9478,21 +9478,21 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add a PQinitOpenSSL function to allow greater control + Add a PQinitOpenSSL function to allow greater control over OpenSSL/libcrypto initialization (Andrew Chernow) - Make libpq unregister its OpenSSL + Make libpq unregister its OpenSSL callbacks when no database connections remain open (Bruce, Magnus, Russell Smith) This is required for applications that unload the libpq library, - otherwise invalid OpenSSL callbacks will remain. + otherwise invalid OpenSSL callbacks will remain. @@ -9501,7 +9501,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - <application>ecpg</> + <application>ecpg</application> @@ -9527,7 +9527,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Server Programming Interface (<acronym>SPI</>) + Server Programming Interface (<acronym>SPI</acronym>) @@ -9539,8 +9539,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add new SPI_OK_REWRITTEN return code for - SPI_execute() (Heikki) + Add new SPI_OK_REWRITTEN return code for + SPI_execute() (Heikki) @@ -9551,12 +9551,12 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Remove unnecessary inclusions from executor/spi.h (Tom) + Remove unnecessary inclusions from executor/spi.h (Tom) - SPI-using modules might need to add some #include - lines if they were depending on spi.h to include + SPI-using modules might need to add some #include + lines if they were depending on spi.h to include things for them. @@ -9573,13 +9573,13 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Update build system to use Autoconf 2.61 (Peter) + Update build system to use Autoconf 2.61 (Peter) - Require GNU bison for source code builds (Peter) + Require GNU bison for source code builds (Peter) @@ -9590,63 +9590,63 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add pg_config --htmldir option + Add pg_config --htmldir option (Peter) - Pass float4 by value inside the server (Zoltan + Pass float4 by value inside the server (Zoltan Boszormenyi) - Add configure option - --disable-float4-byval to use the old behavior. + Add configure option + --disable-float4-byval to use the old behavior. External C functions that use old-style (version 0) call convention - and pass or return float4 values will be broken by this - change, so you may need the configure option if you + and pass or return float4 values will be broken by this + change, so you may need the configure option if you have such functions and don't want to update them. - Pass float8, int8, and related datatypes + Pass float8, int8, and related datatypes by value inside the server on 64-bit platforms (Zoltan Boszormenyi) - Add configure option - --disable-float8-byval to use the old behavior. + Add configure option + --disable-float8-byval to use the old behavior. As above, this change might break old-style external C functions. 
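With the stricter certificate handling described above, a client wanting full verification might connect as below (host name hypothetical; the root certificate is looked for in ~/.postgresql/root.crt unless the sslrootcert parameter says otherwise):

    psql "host=db.example.com dbname=postgres sslmode=verify-full"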
- Add configure options --with-segsize, - --with-blocksize, --with-wal-blocksize, - --with-wal-segsize (Zdenek Kotala, Tom) + Add configure options --with-segsize, + --with-blocksize, --with-wal-blocksize, + --with-wal-segsize (Zdenek Kotala, Tom) This simplifies build-time control over several constants that previously could only be changed by editing - pg_config_manual.h. + pg_config_manual.h. - Allow threaded builds on Solaris 2.5 (Bruce) + Allow threaded builds on Solaris 2.5 (Bruce) - Use the system's getopt_long() on Solaris + Use the system's getopt_long() on Solaris (Zdenek Kotala, Tom) @@ -9658,16 +9658,16 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for the Sun Studio compiler on - Linux (Julius Stroffek) + Add support for the Sun Studio compiler on + Linux (Julius Stroffek) - Append the major version number to the backend gettext - domain, and the soname major version number to - libraries' gettext domain (Peter) + Append the major version number to the backend gettext + domain, and the soname major version number to + libraries' gettext domain (Peter) @@ -9677,21 +9677,21 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for code coverage testing with gcov + Add support for code coverage testing with gcov (Michelle Caisse) - Allow out-of-tree builds on Mingw and - Cygwin (Richard Evans) + Allow out-of-tree builds on Mingw and + Cygwin (Richard Evans) - Fix the use of Mingw as a cross-compiling source + Fix the use of Mingw as a cross-compiling source platform (Peter) @@ -9710,20 +9710,20 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - This adds support for daylight saving time (DST) + This adds support for daylight saving time (DST) calculations beyond the year 2038. - Deprecate use of platform's time_t data type (Tom) + Deprecate use of platform's time_t data type (Tom) - Some platforms have migrated to 64-bit time_t, some have + Some platforms have migrated to 64-bit time_t, some have not, and Windows can't make up its mind what it's doing. Define - pg_time_t to have the same meaning as time_t, + pg_time_t to have the same meaning as time_t, but always be 64 bits (unless the platform has no 64-bit integer type), and use that type in all module APIs and on-disk data formats. @@ -9745,7 +9745,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Improve gettext support to allow better translation + Improve gettext support to allow better translation of plurals (Peter) @@ -9758,44 +9758,44 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
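An illustrative configure invocation combining the new size knobs with the float pass-by-value switch (values chosen arbitrarily from the documented ranges):

    ./configure --with-blocksize=16 --with-wal-blocksize=16 \
                --with-wal-segsize=64 --with-segsize=2 \
                --disable-float8-byval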
FOR UPDATE - Add more DTrace probes (Robert Lor) + Add more DTrace probes (Robert Lor) - Enable DTrace support on macOS - Leopard and other non-Solaris platforms (Robert Lor) + Enable DTrace support on macOS + Leopard and other non-Solaris platforms (Robert Lor) Simplify and standardize conversions between C strings and - text datums, by providing common functions for the purpose + text datums, by providing common functions for the purpose (Brendan Jurd, Tom) - Clean up the include/catalog/ header files so that + Clean up the include/catalog/ header files so that frontend programs can include them without including - postgres.h + postgres.h (Zdenek Kotala) - Make name char-aligned, and suppress zero-padding of - name entries in indexes (Tom) + Make name char-aligned, and suppress zero-padding of + name entries in indexes (Tom) - Recover better if dynamically-loaded code executes exit() + Recover better if dynamically-loaded code executes exit() (Tom) @@ -9816,55 +9816,55 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add shmem_startup_hook() for custom shared memory + Add shmem_startup_hook() for custom shared memory requirements (Tom) - Replace the index access method amgetmulti entry point - with amgetbitmap, and extend the API for - amgettuple to support run-time determination of + Replace the index access method amgetmulti entry point + with amgetbitmap, and extend the API for + amgettuple to support run-time determination of operator lossiness (Heikki, Tom, Teodor) - The API for GIN and GiST opclass consistent functions + The API for GIN and GiST opclass consistent functions has been extended as well. - Add support for partial-match searches in GIN indexes + Add support for partial-match searches in GIN indexes (Teodor Sigaev, Oleg Bartunov) - Replace pg_class column reltriggers - with boolean relhastriggers (Simon) + Replace pg_class column reltriggers + with boolean relhastriggers (Simon) - Also remove unused pg_class columns - relukeys, relfkeys, and - relrefs. + Also remove unused pg_class columns + relukeys, relfkeys, and + relrefs. - Add a relistemp column to pg_class + Add a relistemp column to pg_class to ease identification of temporary tables (Tom) - Move platform FAQs into the main documentation + Move platform FAQs into the main documentation (Peter) @@ -9878,7 +9878,7 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add support for the KOI8U (Ukrainian) encoding + Add support for the KOI8U (Ukrainian) encoding (Peter) @@ -9895,8 +9895,8 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Fix problem when setting LC_MESSAGES on - MSVC-built systems (Hiroshi Inoue, Hiroshi + Fix problem when setting LC_MESSAGES on + MSVC-built systems (Hiroshi Inoue, Hiroshi Saito, Magnus) @@ -9912,65 +9912,65 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... 
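The pg_class changes above can be checked with a quick catalog query (a sketch against this release's catalogs):

    SELECT relname, relhastriggers, relistemp
      FROM pg_class
     WHERE relistemp;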
FOR UPDATE - Add contrib/auto_explain to automatically run - EXPLAIN on queries exceeding a specified duration + Add contrib/auto_explain to automatically run + EXPLAIN on queries exceeding a specified duration (Itagaki Takahiro, Tom) - Add contrib/btree_gin to allow GIN indexes to + Add contrib/btree_gin to allow GIN indexes to handle more datatypes (Oleg, Teodor) - Add contrib/citext to provide a case-insensitive, + Add contrib/citext to provide a case-insensitive, multibyte-aware text data type (David Wheeler) - Add contrib/pg_stat_statements for server-wide + Add contrib/pg_stat_statements for server-wide tracking of statement execution statistics (Itagaki Takahiro) - Add duration and query mode options to contrib/pgbench + Add duration and query mode options to contrib/pgbench (Itagaki Takahiro) - Make contrib/pgbench use table names - pgbench_accounts, pgbench_branches, - pgbench_history, and pgbench_tellers, - rather than just accounts, branches, - history, and tellers (Tom) + Make contrib/pgbench use table names + pgbench_accounts, pgbench_branches, + pgbench_history, and pgbench_tellers, + rather than just accounts, branches, + history, and tellers (Tom) This is to reduce the risk of accidentally destroying real data - by running pgbench. + by running pgbench. - Fix contrib/pgstattuple to handle tables and + Fix contrib/pgstattuple to handle tables and indexes with over 2 billion pages (Tatsuhito Kasahara) - In contrib/fuzzystrmatch, add a version of the + In contrib/fuzzystrmatch, add a version of the Levenshtein string-distance function that allows the user to specify the costs of insertion, deletion, and substitution (Volkan Yazici) @@ -9979,28 +9979,28 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make contrib/ltree support multibyte encodings + Make contrib/ltree support multibyte encodings (laser) - Enable contrib/dblink to use connection information + Enable contrib/dblink to use connection information stored in the SQL/MED catalogs (Joe Conway) - Improve contrib/dblink's reporting of errors from + Improve contrib/dblink's reporting of errors from the remote server (Joe Conway) - Make contrib/dblink set client_encoding + Make contrib/dblink set client_encoding to match the local database's encoding (Joe Conway) @@ -10012,9 +10012,9 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Make sure contrib/dblink uses a password supplied + Make sure contrib/dblink uses a password supplied by the user, and not accidentally taken from the server's - .pgpass file (Joe Conway) + .pgpass file (Joe Conway) @@ -10024,51 +10024,51 @@ WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE - Add fsm_page_contents() - to contrib/pageinspect (Heikki) + Add fsm_page_contents() + to contrib/pageinspect (Heikki) - Modify get_raw_page() to support free space map - (*_fsm) files. Also update - contrib/pg_freespacemap. + Modify get_raw_page() to support free space map + (*_fsm) files. Also update + contrib/pg_freespacemap. 
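A small sketch of two of the contrib additions above, assuming the modules have been installed into the database (the duration threshold and edit costs are arbitrary):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '250ms';

    -- fuzzystrmatch: insertion/deletion/substitution costs of 1/1/2
    SELECT levenshtein('kitten', 'sitting', 1, 1, 2);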
- Add support for multibyte encodings to contrib/pg_trgm + Add support for multibyte encodings to contrib/pg_trgm (Teodor) - Rewrite contrib/intagg to use new - functions array_agg() and unnest() + Rewrite contrib/intagg to use new + functions array_agg() and unnest() (Tom) - Make contrib/pg_standby recover all available WAL before + Make contrib/pg_standby recover all available WAL before failover (Fujii Masao, Simon, Heikki) To make this work safely, you now need to set the new - recovery_end_command option in recovery.conf - to clean up the trigger file after failover. pg_standby + recovery_end_command option in recovery.conf + to clean up the trigger file after failover. pg_standby will no longer remove the trigger file itself. - contrib/pg_standby's option is now a no-op, because it is unsafe to use a symlink (Simon) diff --git a/doc/src/sgml/release-9.0.sgml b/doc/src/sgml/release-9.0.sgml index f7c63fc567..e09f38e180 100644 --- a/doc/src/sgml/release-9.0.sgml +++ b/doc/src/sgml/release-9.0.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 9.0.X series. Users are encouraged to update to a newer release branch soon. @@ -42,8 +42,8 @@ - Fix contrib/pgcrypto to detect and report - too-short crypt() salts (Josh Kupershmidt) + Fix contrib/pgcrypto to detect and report + too-short crypt() salts (Josh Kupershmidt) @@ -69,13 +69,13 @@ - Fix insertion of relations into the relation cache init file + Fix insertion of relations into the relation cache init file (Tom Lane) An oversight in a patch in the most recent minor releases - caused pg_trigger_tgrelid_tgname_index to be omitted + caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing @@ -93,7 +93,7 @@ - Improve LISTEN startup time when there are many unread + Improve LISTEN startup time when there are many unread notifications (Matt Newell) @@ -108,13 +108,13 @@ too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value - of ssl_renegotiation_limit to zero (disabled). + of ssl_renegotiation_limit to zero (disabled). - Lower the minimum values of the *_freeze_max_age parameters + Lower the minimum values of the *_freeze_max_age parameters (Andres Freund) @@ -126,14 +126,14 @@ - Limit the maximum value of wal_buffers to 2GB to avoid + Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus) - Fix rare internal overflow in multiplication of numeric values + Fix rare internal overflow in multiplication of numeric values (Dean Rasheed) @@ -141,21 +141,21 @@ Guard against hard-to-reach stack overflows involving record types, - range types, json, jsonb, tsquery, - ltxtquery and query_int (Noah Misch) + range types, json, jsonb, tsquery, + ltxtquery and query_int (Noah Misch) - Fix handling of DOW and DOY in datetime input + Fix handling of DOW and DOY in datetime input (Greg Stark) These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather - than invalid input syntax. + than invalid input syntax. 
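The intagg rewrite near the start of the contrib list above rests on the then-new array_agg() and unnest() primitives, which can be sketched directly:

    SELECT array_agg(x) FROM generate_series(1, 5) AS t(x);  -- {1,2,3,4,5}
    SELECT unnest(ARRAY[1, 2, 3]);                           -- three rows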
@@ -168,7 +168,7 @@ Add recursion depth protections to regular expression, SIMILAR - TO, and LIKE matching (Tom Lane) + TO, and LIKE matching (Tom Lane) @@ -212,22 +212,22 @@ - Fix unexpected out-of-memory situation during sort errors - when using tuplestores with small work_mem settings (Tom + Fix unexpected out-of-memory situation during sort errors + when using tuplestores with small work_mem settings (Tom Lane) - Fix very-low-probability stack overrun in qsort (Tom Lane) + Fix very-low-probability stack overrun in qsort (Tom Lane) - Fix invalid memory alloc request size failure in hash joins - with large work_mem settings (Tomas Vondra, Tom Lane) + Fix invalid memory alloc request size failure in hash joins + with large work_mem settings (Tomas Vondra, Tom Lane) @@ -240,9 +240,9 @@ These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as could not devise a query plan for the - given query, could not find pathkey item to - sort, plan should not reference subplan's variable, - or failed to assign all NestLoopParams to plan nodes. + given query, could not find pathkey item to + sort, plan should not reference subplan's variable, + or failed to assign all NestLoopParams to plan nodes. Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems. @@ -263,12 +263,12 @@ During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove - the postmaster.pid file (Tom Lane) + the postmaster.pid file (Tom Lane) This avoids race-condition failures if an external script attempts to - start a new postmaster as soon as pg_ctl stop returns. + start a new postmaster as soon as pg_ctl stop returns. @@ -288,7 +288,7 @@ - Do not print a WARNING when an autovacuum worker is already + Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane) @@ -321,30 +321,30 @@ Fix off-by-one error that led to otherwise-harmless warnings - about apparent wraparound in subtrans/multixact truncation + about apparent wraparound in subtrans/multixact truncation (Thomas Munro) - Fix misreporting of CONTINUE and MOVE statement - types in PL/pgSQL's error context messages + Fix misreporting of CONTINUE and MOVE statement + types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane) - Fix some places in PL/Tcl that neglected to check for - failure of malloc() calls (Michael Paquier, Álvaro + Fix some places in PL/Tcl that neglected to check for + failure of malloc() calls (Michael Paquier, Álvaro Herrera) - Improve libpq's handling of out-of-memory conditions + Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas) @@ -352,61 +352,61 @@ Fix memory leaks and missing out-of-memory checks - in ecpg (Michael Paquier) + in ecpg (Michael Paquier) - Fix psql's code for locale-aware formatting of numeric + Fix psql's code for locale-aware formatting of numeric output (Tom Lane) - The formatting code invoked by \pset numericlocale on + The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. It could also mangle already-localized - output from the money data type. + output from the money data type. 
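The numericlocale fix above is easiest to see interactively (a sketch, not release-note text; the grouped output assumes an en_US lc_numeric):

    \pset numericlocale on
    SELECT 1234567.89::numeric;   -- prints 1,234,567.89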
- Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) When dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. @@ -415,23 +415,23 @@ - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. @@ -439,14 +439,14 @@ - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -458,38 +458,38 @@ Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. 
@@ -513,7 +513,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.0.X release series in September 2015. Users are encouraged to update to a newer release branch soon. @@ -544,7 +544,7 @@ With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -555,13 +555,13 @@ Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -587,7 +587,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.0.X release series in September 2015. Users are encouraged to update to a newer release branch soon. @@ -613,12 +613,12 @@ - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. @@ -632,29 +632,29 @@ - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. - Allow libpq to use TLS protocol versions beyond v1 + Allow libpq to use TLS protocol versions beyond v1 (Noah Misch) - For a long time, libpq was coded so that the only SSL + For a long time, libpq was coded so that the only SSL protocol it would allow was TLS v1. Now that newer TLS versions are becoming popular, allow it to negotiate the highest commonly-supported - TLS version with the server. (PostgreSQL servers were + TLS version with the server. (PostgreSQL servers were already capable of such negotiation, so no change is needed on the server side.) This is a back-patch of a change already released in 9.4.0. @@ -681,7 +681,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.0.X release series in September 2015. Users are encouraged to update to a newer release branch soon. @@ -727,7 +727,7 @@ - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. 
In the worst case this might lead to information exposure, due to our @@ -737,7 +737,7 @@ - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -747,15 +747,15 @@ - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -786,7 +786,7 @@ This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -814,7 +814,7 @@ This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -822,7 +822,7 @@ Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -836,14 +836,14 @@ - Avoid cannot GetMultiXactIdMembers() during recovery error + Avoid cannot GetMultiXactIdMembers() during recovery error (Álvaro Herrera) - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -863,13 +863,13 @@ - Cope with unexpected signals in LockBufferForCleanup() + Cope with unexpected signals in LockBufferForCleanup() (Andres Freund) This oversight could result in spurious errors about multiple - backends attempting to wait for pincount 1. + backends attempting to wait for pincount 1. @@ -910,9 +910,9 @@ - ANALYZE executes index expressions many times; if there are + ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to - cancel the ANALYZE before that loop finishes. + cancel the ANALYZE before that loop finishes. @@ -925,20 +925,20 @@ - Recommend setting include_realm to 1 when using + Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost) Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but - it will become the default setting in PostgreSQL 9.5. + it will become the default setting in PostgreSQL 9.5. - Remove code for matching IPv4 pg_hba.conf entries to + Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane) @@ -951,7 +951,7 @@ crashes on some systems, so let's just remove it rather than fix it. 
(Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of - IPv4 pg_hba.conf entries, which does not seem like a good + IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.) @@ -960,14 +960,14 @@ While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the - service too soon; and ensure that pg_ctl will wait for + service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj) - Reduce risk of network deadlock when using libpq's + Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas) @@ -976,25 +976,25 @@ buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM - STDIN.) This worked properly in the normal blocking mode, but not - so much in non-blocking mode. We've modified libpq + STDIN.) This worked properly in the normal blocking mode, but not + so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, - and be sure to call PQconsumeInput() upon read-ready. + and be sure to call PQconsumeInput() upon read-ready. - Fix array handling in ecpg (Michael Meskes) + Fix array handling in ecpg (Michael Meskes) - Fix psql to sanely handle URIs and conninfo strings as - the first parameter to \connect + Fix psql to sanely handle URIs and conninfo strings as + the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera) @@ -1007,37 +1007,37 @@ - Suppress incorrect complaints from psql on some - platforms that it failed to write ~/.psql_history at exit + Suppress incorrect complaints from psql on some + platforms that it failed to write ~/.psql_history at exit (Tom Lane) This misbehavior was caused by a workaround for a bug in very old - (pre-2006) versions of libedit. We fixed it by + (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear - for anyone still using such versions of libedit. - Recommendation: upgrade that library, or use libreadline. + for anyone still using such versions of libedit. + Recommendation: upgrade that library, or use libreadline. - Fix pg_dump's rule for deciding which casts are + Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane) - Fix dumping of views that are just VALUES(...) but have + Fix dumping of views that are just VALUES(...) 
but have column aliases (Tom Lane) - In pg_upgrade, force timeline 1 in the new cluster + In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian) @@ -1049,7 +1049,7 @@ - In pg_upgrade, check for improperly non-connectable + In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian) @@ -1057,28 +1057,28 @@ - In pg_upgrade, quote directory paths - properly in the generated delete_old_cluster script + In pg_upgrade, quote directory paths + properly in the generated delete_old_cluster script (Bruce Momjian) - In pg_upgrade, preserve database-level freezing info + In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian) This oversight could cause missing-clog-file errors for tables within - the postgres and template1 databases. + the postgres and template1 databases. - Run pg_upgrade and pg_resetxlog with + Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem) @@ -1086,7 +1086,7 @@ - Fix slow sorting algorithm in contrib/intarray (Tom Lane) + Fix slow sorting algorithm in contrib/intarray (Tom Lane) @@ -1098,7 +1098,7 @@ - Update time zone data files to tzdata release 2015d + Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT). @@ -1145,15 +1145,15 @@ - Fix buffer overruns in to_char() + Fix buffer overruns in to_char() (Bruce Momjian) - When to_char() processes a numeric formatting template - calling for a large number of digits, PostgreSQL + When to_char() processes a numeric formatting template + calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted - timestamp formatting template, PostgreSQL would write + timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely. @@ -1163,27 +1163,27 @@ - Fix buffer overrun in replacement *printf() functions + Fix buffer overrun in replacement *printf() functions (Tom Lane) - PostgreSQL includes a replacement implementation - of printf and related functions. This code will overrun + PostgreSQL includes a replacement implementation + of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion - specifiers e, E, f, F, - g or G) with requested precision greater than + specifiers e, E, f, F, + g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through - the to_char() SQL function. While that is the only - affected core PostgreSQL functionality, extension + the to_char() SQL function. While that is the only + affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. - This issue primarily affects PostgreSQL on Windows. - PostgreSQL uses the system implementation of these + This issue primarily affects PostgreSQL on Windows. + PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. 
(CVE-2015-0242) @@ -1191,12 +1191,12 @@ - Fix buffer overruns in contrib/pgcrypto + Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch) - Errors in memory size tracking within the pgcrypto + Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. The buffer overrun cases can crash the server, and we have not ruled out the possibility of @@ -1237,7 +1237,7 @@ Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have - SELECT privilege on all columns of the table, this could + SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user. @@ -1263,21 +1263,21 @@ Avoid possible data corruption if ALTER DATABASE SET - TABLESPACE is used to move a database to a new tablespace and then + TABLESPACE is used to move a database to a new tablespace and then shortly later move it back to its original tablespace (Tom Lane) - Avoid corrupting tables when ANALYZE inside a transaction + Avoid corrupting tables when ANALYZE inside a transaction is rolled back (Andres Freund, Tom Lane, Michael Paquier) If the failing transaction had earlier removed the last index, rule, or trigger from the table, the table would be left in a corrupted state - with the relevant pg_class flags not set though they + with the relevant pg_class flags not set though they should be. @@ -1289,22 +1289,22 @@ - In READ COMMITTED mode, queries that lock or update + In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug. - Fix planning of SELECT FOR UPDATE when using a partial + Fix planning of SELECT FOR UPDATE when using a partial index on a child table (Kyotaro Horiguchi) - In READ COMMITTED mode, SELECT FOR UPDATE must - also recheck the partial index's WHERE condition when + In READ COMMITTED mode, SELECT FOR UPDATE must + also recheck the partial index's WHERE condition when rechecking a recently-updated row to see if it still satisfies the - query's WHERE condition. This requirement was missed if the + query's WHERE condition. This requirement was missed if the index belonged to an inheritance child table, so that it was possible to incorrectly return rows that no longer satisfy the query condition. @@ -1312,12 +1312,12 @@ - Fix corner case wherein SELECT FOR UPDATE could return a row + Fix corner case wherein SELECT FOR UPDATE could return a row twice, and possibly miss returning other rows (Tom Lane) - In READ COMMITTED mode, a SELECT FOR UPDATE + In READ COMMITTED mode, a SELECT FOR UPDATE that is scanning an inheritance tree could incorrectly return a row from a prior child table instead of the one it should return from a later child table. 
@@ -1327,7 +1327,7 @@ Reject duplicate column names in the referenced-columns list of - a FOREIGN KEY declaration (David Rowley) + a FOREIGN KEY declaration (David Rowley) @@ -1339,7 +1339,7 @@ - Fix bugs in raising a numeric value to a large integral power + Fix bugs in raising a numeric value to a large integral power (Tom Lane) @@ -1352,19 +1352,19 @@ - In numeric_recv(), truncate away any fractional digits - that would be hidden according to the value's dscale field + In numeric_recv(), truncate away any fractional digits + that would be hidden according to the value's dscale field (Tom Lane) - A numeric value's display scale (dscale) should + A numeric value's display scale (dscale) should never be less than the number of nonzero fractional digits; but apparently there's at least one broken client application that - transmits binary numeric values in which that's true. + transmits binary numeric values in which that's true. This leads to strange behavior since the extra digits are taken into account by arithmetic operations even though they aren't printed. - The least risky fix seems to be to truncate away such hidden + The least risky fix seems to be to truncate away such hidden digits on receipt, so that the value is indeed what it prints as. @@ -1384,7 +1384,7 @@ - Fix bugs in tsquery @> tsquery + Fix bugs in tsquery @> tsquery operator (Heikki Linnakangas) @@ -1415,14 +1415,14 @@ - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -1431,7 +1431,7 @@ Fix planner problems with nested append relations, such as inherited - tables within UNION ALL subqueries (Tom Lane) + tables within UNION ALL subqueries (Tom Lane) @@ -1444,8 +1444,8 @@ - Exempt tables that have per-table cost_limit - and/or cost_delay settings from autovacuum's global cost + Exempt tables that have per-table cost_limit + and/or cost_delay settings from autovacuum's global cost balancing rules (Álvaro Herrera) @@ -1471,7 +1471,7 @@ the target database, if they met the usual thresholds for autovacuuming. This is at best pretty unexpected; at worst it delays response to the wraparound threat. Fix it so that if autovacuum is - turned off, workers only do anti-wraparound vacuums and + turned off, workers only do anti-wraparound vacuums and not any other work. @@ -1491,19 +1491,19 @@ Fix several cases where recovery logic improperly ignored WAL records - for COMMIT/ABORT PREPARED (Heikki Linnakangas) + for COMMIT/ABORT PREPARED (Heikki Linnakangas) The most notable oversight was - that recovery_target_xid could not be used to stop at + that recovery_target_xid could not be used to stop at a two-phase commit. 
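Two of the fixes above are easy to exercise directly (a sketch; the XML document and namespace URI are made up):

    SELECT 'cat & rat'::tsquery @> 'cat'::tsquery;   -- true

    SELECT xpath('/x:a/x:b/text()',
                 '<a xmlns="urn:x"><b>1</b></a>'::xml,
                 ARRAY[ARRAY['x', 'urn:x']]);        -- {1}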
- Avoid creating unnecessary .ready marker files for + Avoid creating unnecessary .ready marker files for timeline history files (Fujii Masao) @@ -1511,14 +1511,14 @@ Fix possible null pointer dereference when an empty prepared statement - is used and the log_statement setting is mod - or ddl (Fujii Masao) + is used and the log_statement setting is mod + or ddl (Fujii Masao) - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -1527,7 +1527,7 @@ case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. @@ -1541,32 +1541,32 @@ - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) - Fix processing of repeated dbname parameters - in PQconnectdbParams() (Alex Shulgin) + Fix processing of repeated dbname parameters + in PQconnectdbParams() (Alex Shulgin) Unexpected behavior ensued if the first occurrence - of dbname contained a connection string or URI to be + of dbname contained a connection string or URI to be expanded. - Ensure that libpq reports a suitable error message on + Ensure that libpq reports a suitable error message on unexpected socket EOF (Marko Tiikkaja, Tom Lane) - Depending on kernel behavior, libpq might return an + Depending on kernel behavior, libpq might return an empty error string rather than something useful when the server unexpectedly closed the socket. @@ -1574,14 +1574,14 @@ - Clear any old error message during PQreset() + Clear any old error message during PQreset() (Heikki Linnakangas) - If PQreset() is called repeatedly, and the connection + If PQreset() is called repeatedly, and the connection cannot be re-established, error messages from the failed connection - attempts kept accumulating in the PGconn's error + attempts kept accumulating in the PGconn's error string. @@ -1589,32 +1589,32 @@ Properly handle out-of-memory conditions while parsing connection - options in libpq (Alex Shulgin, Heikki Linnakangas) + options in libpq (Alex Shulgin, Heikki Linnakangas) - Fix array overrun in ecpg's version - of ParseDateTime() (Michael Paquier) + Fix array overrun in ecpg's version + of ParseDateTime() (Michael Paquier) - In initdb, give a clearer error message if a password + In initdb, give a clearer error message if a password file is specified but is empty (Mats Erik Andersson) - Fix psql's \s command to work nicely with + Fix psql's \s command to work nicely with libedit, and add pager support (Stepan Rutz, Tom Lane) - When using libedit rather than readline, \s printed the + When using libedit rather than readline, \s printed the command history in a fairly unreadable encoded format, and on recent libedit versions might fail altogether. Fix that by printing the history ourselves rather than having the library do it. A pleasant @@ -1624,7 +1624,7 @@ This patch also fixes a bug that caused newline encoding to be applied inconsistently when saving the command history with libedit. 
- Multiline history entries written by older psql + Multiline history entries written by older psql versions will be read cleanly with this patch, but perhaps not vice versa, depending on the exact libedit versions involved. @@ -1632,17 +1632,17 @@ - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. @@ -1650,9 +1650,9 @@ - Fix psql's expanded-mode display to work - consistently when using border = 3 - and linestyle = ascii or unicode + Fix psql's expanded-mode display to work + consistently when using border = 3 + and linestyle = ascii or unicode (Stephen Frost) @@ -1666,7 +1666,7 @@ - Fix core dump in pg_dump --binary-upgrade on zero-column + Fix core dump in pg_dump --binary-upgrade on zero-column composite type (Rushabh Lathia) @@ -1674,7 +1674,7 @@ Fix block number checking - in contrib/pageinspect's get_raw_page() + in contrib/pageinspect's get_raw_page() (Tom Lane) @@ -1686,7 +1686,7 @@ - Fix contrib/pgcrypto's pgp_sym_decrypt() + Fix contrib/pgcrypto's pgp_sym_decrypt() to not fail on messages whose length is 6 less than a power of 2 (Marko Tiikkaja) @@ -1695,24 +1695,24 @@ Handle unexpected query results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. - Avoid a possible crash in contrib/xml2's - xslt_process() (Mark Simonetti) + Avoid a possible crash in contrib/xml2's + xslt_process() (Mark Simonetti) - libxslt seems to have an undocumented dependency on + libxslt seems to have an undocumented dependency on the order in which resources are freed; reorder our calls to avoid a crash. @@ -1739,29 +1739,29 @@ With OpenLDAP versions 2.4.24 through 2.4.31, - inclusive, PostgreSQL backends can crash at exit. - Raise a warning during configure based on the + inclusive, PostgreSQL backends can crash at exit. + Raise a warning during configure based on the compile-time OpenLDAP version number, and test the crashing scenario - in the contrib/dblink regression test. + in the contrib/dblink regression test. - In non-MSVC Windows builds, ensure libpq.dll is installed + In non-MSVC Windows builds, ensure libpq.dll is installed with execute permissions (Noah Misch) - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. 
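The psql variable and display fixes above might be exercised like so (a sketch):

    \set ECHO_HIDDEN on       -- '1' and 'true' are now accepted too
    \set VERBOSITY pedantic   -- now draws a warning: unrecognized value
    \pset linestyle unicode
    \pset border 3
    \pset expanded on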
@@ -1773,15 +1773,15 @@ - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. @@ -1793,9 +1793,9 @@ Add CST (China Standard Time) to our lists. - Remove references to ADT as Arabia Daylight Time, an + Remove references to ADT as Arabia Daylight Time, an abbreviation that's been out of use since 2007; therefore, claiming - there is a conflict with Atlantic Daylight Time doesn't seem + there is a conflict with Atlantic Daylight Time doesn't seem especially helpful. Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, and FJST (Fiji); we didn't even have them on the proper side of the date line. @@ -1804,21 +1804,21 @@ - Update time zone data files to tzdata release 2015a. + Update time zone data files to tzdata release 2015a. The IANA timezone database has adopted abbreviations of the form - AxST/AxDT + AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into - our Default timezone abbreviation set. - The Australia abbreviation set now contains only CST, EAST, + our Default timezone abbreviation set. + The Australia abbreviation set now contains only CST, EAST, EST, SAST, SAT, and WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South - Africa Standard Time in the Default abbreviation set. + Africa Standard Time in the Default abbreviation set. @@ -1877,15 +1877,15 @@ - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. - Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -1917,7 +1917,7 @@ Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) @@ -1944,13 +1944,13 @@ This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. 
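To make the dangling-TOAST-pointer symptom above concrete, a hedged sketch (names invented; the affected copy paths are internal, and CREATE TABLE AS below merely stands in for them, so this shows the failure shape rather than an exact reproducer):

    CREATE TABLE orig (t text);
    ALTER TABLE orig ALTER COLUMN t SET STORAGE EXTERNAL;  -- keep the value out of line
    INSERT INTO orig VALUES (repeat('x', 100000));         -- large enough to be toasted

    CREATE TABLE copied AS SELECT t FROM orig;  -- stand-in for an affected copy path
    DROP TABLE orig;

    -- With the bug, reading the copy could then fail with
    -- "missing chunk number 0 for toast value ...":
    SELECT length(t) FROM copied;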
- Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -1965,7 +1965,7 @@ Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -1978,7 +1978,7 @@ - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -1991,19 +1991,19 @@ This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -2011,7 +2011,7 @@ - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -2023,7 +2023,7 @@ This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. @@ -2031,7 +2031,7 @@ Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -2040,16 +2040,16 @@ the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. @@ -2085,15 +2085,15 @@ - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -2104,17 +2104,17 @@ - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). 
Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -2122,15 +2122,15 @@ - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) - Fix ecpg to do the right thing when an array - of char * is the target for a FETCH statement returning more + Fix ecpg to do the right thing when an array + of char * is the target for a FETCH statement returning more than one row, as well as some other array-handling fixes (Ashutosh Bapat) @@ -2138,20 +2138,20 @@ - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -2159,20 +2159,20 @@ - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. - Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. @@ -2232,7 +2232,7 @@ Avoid race condition in checking transaction commit status during - receipt of a NOTIFY message (Marko Tiikkaja) + receipt of a NOTIFY message (Marko Tiikkaja) @@ -2256,7 +2256,7 @@ - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -2269,17 +2269,17 @@ - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. @@ -2305,26 +2305,26 @@ - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. + entry to syslog(), and perhaps other related problems. - Prevent intermittent could not reserve shared memory region + Prevent intermittent could not reserve shared memory region failures on recent Windows versions (MauMau) - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. 
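For reference beside the OVERLAPS item above: the standard form takes two-element (start, end) row arguments, as in this self-contained example; the one-element form was never well-defined, and the incorrect code that tried to support it has been removed.

    SELECT (date '2014-01-01', date '2014-02-01')
           OVERLAPS (date '2014-01-15', date '2014-03-01');  -- true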
@@ -2370,19 +2370,19 @@ - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. (CVE-2014-0060) @@ -2395,7 +2395,7 @@ The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -2415,7 +2415,7 @@ If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. @@ -2429,12 +2429,12 @@ - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -2461,7 +2461,7 @@ - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -2473,35 +2473,35 @@ - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). 
(CVE-2014-0066) - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -2516,7 +2516,7 @@ The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -2536,20 +2536,20 @@ was already consistent at the start of replay, thus possibly allowing hot-standby queries before the database was really consistent. Other symptoms such as PANIC: WAL contains references to invalid - pages were also possible. + pages were also possible. Fix improper locking of btree index pages while replaying - a VACUUM operation in hot-standby mode (Andres Freund, + a VACUUM operation in hot-standby mode (Andres Freund, Heikki Linnakangas, Tom Lane) This error could result in PANIC: WAL contains references to - invalid pages failures. + invalid pages failures. @@ -2572,25 +2572,25 @@ Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. - Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -2614,7 +2614,7 @@ - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -2634,26 +2634,26 @@ A previous patch allowed such keywords to be used without quoting in places such as role identifiers; but it missed cases where a - list of role identifiers was permitted, such as DROP ROLE. + list of role identifiers was permitted, such as DROP ROLE. Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) 
(Tom Lane) - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -2661,21 +2661,21 @@ - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -2700,12 +2700,12 @@ - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. @@ -2713,51 +2713,51 @@ - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. - Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -2768,7 +2768,7 @@ - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) @@ -2782,28 +2782,28 @@ - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. 
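The "cannot accept a set" fix above concerns CASE expressions in which only some arms return a set; a minimal sketch of the shape involved:

    -- One arm returns a set, the other a scalar; queries of this shape
    -- previously drew "cannot accept a set" at execution:
    SELECT CASE WHEN flag THEN generate_series(1, 3) ELSE 0 END
    FROM (VALUES (true), (false)) AS v(flag);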
- Avoid using the deprecated dllwrap tool in Cygwin builds + Avoid using the deprecated dllwrap tool in Cygwin builds (Marco Atzeri) - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -2812,20 +2812,20 @@ the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -2877,13 +2877,13 @@ - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would @@ -2895,18 +2895,18 @@ The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). - Fix initialization of pg_clog and pg_subtrans + Fix initialization of pg_clog and pg_subtrans during hot standby startup (Andres Freund, Heikki Linnakangas) @@ -2932,7 +2932,7 @@ - Truncate pg_multixact contents during WAL replay + Truncate pg_multixact contents during WAL replay (Andres Freund) @@ -2954,8 +2954,8 @@ - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -2972,7 +2972,7 @@ This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. @@ -2990,13 +2990,13 @@ - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. @@ -3010,7 +3010,7 @@ In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. 
This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. @@ -3024,7 +3024,7 @@ - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -3035,10 +3035,10 @@ - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -3048,21 +3048,21 @@ - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. @@ -3114,7 +3114,7 @@ - PostgreSQL case-folds non-ASCII characters only + PostgreSQL case-folds non-ASCII characters only when using a single-byte server encoding. @@ -3122,7 +3122,7 @@ Fix checkpoint memory leak in background writer when wal_level = - hot_standby (Naoya Anzai) + hot_standby (Naoya Anzai) @@ -3135,7 +3135,7 @@ - Fix memory overcommit bug when work_mem is using more + Fix memory overcommit bug when work_mem is using more than 24GB of memory (Stephen Frost) @@ -3160,29 +3160,29 @@ - Previously tests like col IS NOT TRUE and col IS - NOT FALSE did not properly factor in NULL values when estimating + Previously tests like col IS NOT TRUE and col IS + NOT FALSE did not properly factor in NULL values when estimating plan costs. - Prevent pushing down WHERE clauses into unsafe - UNION/INTERSECT subqueries (Tom Lane) + Prevent pushing down WHERE clauses into unsafe + UNION/INTERSECT subqueries (Tom Lane) - Subqueries of a UNION or INTERSECT that + Subqueries of a UNION or INTERSECT that contain set-returning functions or volatile functions in their - SELECT lists could be improperly optimized, leading to + SELECT lists could be improperly optimized, leading to run-time errors or incorrect query results. - Fix rare case of failed to locate grouping columns + Fix rare case of failed to locate grouping columns planner failure (Tom Lane) @@ -3196,37 +3196,37 @@ - Properly record index comments created using UNIQUE - and PRIMARY KEY syntax (Andres Freund) + Properly record index comments created using UNIQUE + and PRIMARY KEY syntax (Andres Freund) - This fixes a parallel pg_restore failure. + This fixes a parallel pg_restore failure. - Fix REINDEX TABLE and REINDEX DATABASE + Fix REINDEX TABLE and REINDEX DATABASE to properly revalidate constraints and mark invalidated indexes as valid (Noah Misch) - REINDEX INDEX has always worked properly. + REINDEX INDEX has always worked properly. 
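In command form, the REINDEX behavior described just above (object names hypothetical):

    -- Suppose an earlier CREATE INDEX CONCURRENTLY on tbl failed, leaving
    -- an invalid index. A table-level REINDEX now rebuilds it and marks
    -- it valid again:
    REINDEX TABLE tbl;

    SELECT indexrelid::regclass, indisvalid
    FROM pg_index
    WHERE indrelid = 'tbl'::regclass;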
Fix possible deadlock during concurrent CREATE INDEX - CONCURRENTLY operations (Tom Lane) + CONCURRENTLY operations (Tom Lane) - Fix regexp_matches() handling of zero-length matches + Fix regexp_matches() handling of zero-length matches (Jeevan Chalke) @@ -3250,14 +3250,14 @@ - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) - Allow ALTER DEFAULT PRIVILEGES to operate on schemas + Allow ALTER DEFAULT PRIVILEGES to operate on schemas without requiring CREATE permission (Tom Lane) @@ -3269,16 +3269,16 @@ Specifically, lessen keyword restrictions for role names, language - names, EXPLAIN and COPY options, and - SET values. This allows COPY ... (FORMAT - BINARY) to work as expected; previously BINARY needed + names, EXPLAIN and COPY options, and + SET values. This allows COPY ... (FORMAT + BINARY) to work as expected; previously BINARY needed to be quoted. - Fix pgp_pub_decrypt() so it works for secret keys with + Fix pgp_pub_decrypt() so it works for secret keys with passwords (Marko Kreen) @@ -3292,7 +3292,7 @@ - Ensure that VACUUM ANALYZE still runs the ANALYZE phase + Ensure that VACUUM ANALYZE still runs the ANALYZE phase if its attempt to truncate the file is cancelled due to lock conflicts (Kevin Grittner) @@ -3308,14 +3308,14 @@ Ensure that floating-point data input accepts standard spellings - of infinity on all platforms (Tom Lane) + of infinity on all platforms (Tom Lane) - The C99 standard says that allowable spellings are inf, - +inf, -inf, infinity, - +infinity, and -infinity. Make sure we - recognize these even if the platform's strtod function + The C99 standard says that allowable spellings are inf, + +inf, -inf, infinity, + +infinity, and -infinity. Make sure we + recognize these even if the platform's strtod function doesn't. @@ -3329,7 +3329,7 @@ - Update time zone data files to tzdata release 2013d + Update time zone data files to tzdata release 2013d for DST law changes in Israel, Morocco, Palestine, and Paraguay. Also, historical zone data corrections for Macquarie Island. @@ -3364,7 +3364,7 @@ However, this release corrects several errors in management of GiST indexes. After installing this update, it is advisable to - REINDEX any GiST indexes that meet one or more of the + REINDEX any GiST indexes that meet one or more of the conditions described below. @@ -3388,7 +3388,7 @@ A connection request containing a database name that begins with - - could be crafted to damage or destroy + - could be crafted to damage or destroy files within the server's data directory, even if the request is eventually rejected. (CVE-2013-1899) @@ -3402,41 +3402,41 @@ This avoids a scenario wherein random numbers generated by - contrib/pgcrypto functions might be relatively easy for + contrib/pgcrypto functions might be relatively easy for another database user to guess. The risk is only significant when - the postmaster is configured with ssl = on + the postmaster is configured with ssl = on but most connections don't use SSL encryption. 
(CVE-2013-1900) - Fix GiST indexes to not use fuzzy geometric comparisons when + Fix GiST indexes to not use fuzzy geometric comparisons when it's not appropriate to do so (Alexander Korotkov) - The core geometric types perform comparisons using fuzzy - equality, but gist_box_same must do exact comparisons, + The core geometric types perform comparisons using fuzzy + equality, but gist_box_same must do exact comparisons, else GiST indexes using it might become inconsistent. After installing - this update, users should REINDEX any GiST indexes on - box, polygon, circle, or point - columns, since all of these use gist_box_same. + this update, users should REINDEX any GiST indexes on + box, polygon, circle, or point + columns, since all of these use gist_box_same. Fix erroneous range-union and penalty logic in GiST indexes that use - contrib/btree_gist for variable-width data types, that is - text, bytea, bit, and numeric + contrib/btree_gist for variable-width data types, that is + text, bytea, bit, and numeric columns (Tom Lane) These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in useless - index bloat. Users are advised to REINDEX such indexes + index bloat. Users are advised to REINDEX such indexes after installing this update. @@ -3451,21 +3451,21 @@ These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in indexes that are unnecessarily inefficient to search. Users are advised to - REINDEX multi-column GiST indexes after installing this + REINDEX multi-column GiST indexes after installing this update. - Fix gist_point_consistent + Fix gist_point_consistent to handle fuzziness consistently (Alexander Korotkov) - Index scans on GiST indexes on point columns would sometimes + Index scans on GiST indexes on point columns would sometimes yield results different from a sequential scan, because - gist_point_consistent disagreed with the underlying + gist_point_consistent disagreed with the underlying operator code about whether to do comparisons exactly or fuzzily. @@ -3476,21 +3476,21 @@ - This bug could result in incorrect local pin count errors + This bug could result in incorrect local pin count errors during replay, making recovery impossible. - Fix race condition in DELETE RETURNING (Tom Lane) + Fix race condition in DELETE RETURNING (Tom Lane) - Under the right circumstances, DELETE RETURNING could + Under the right circumstances, DELETE RETURNING could attempt to fetch data from a shared buffer that the current process no longer has any pin on. If some other process changed the buffer - meanwhile, this would lead to garbage RETURNING output, or + meanwhile, this would lead to garbage RETURNING output, or even a crash. @@ -3511,28 +3511,28 @@ - Fix to_char() to use ASCII-only case-folding rules where + Fix to_char() to use ASCII-only case-folding rules where appropriate (Tom Lane) This fixes misbehavior of some template patterns that should be - locale-independent, but mishandled I and - i in Turkish locales. + locale-independent, but mishandled I and + i in Turkish locales. 
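The to_char() folding fix just above matters for template patterns spelled with I or i; a small locale-independent example:

    -- ISO year, ISO week, and ISO day-of-week patterns are now recognized
    -- identically even under a Turkish locale, where lower('I') is not 'i':
    SELECT to_char(date '2013-01-01', 'IYYY-IW-ID');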
- Fix unwanted rejection of timestamp 1999-12-31 24:00:00 + Fix unwanted rejection of timestamp 1999-12-31 24:00:00 (Tom Lane) - Fix logic error when a single transaction does UNLISTEN - then LISTEN (Tom Lane) + Fix logic error when a single transaction does UNLISTEN + then LISTEN (Tom Lane) @@ -3543,7 +3543,7 @@ - Remove useless picksplit doesn't support secondary split log + Remove useless picksplit doesn't support secondary split log messages (Josh Hansen, Tom Lane) @@ -3564,29 +3564,29 @@ - Eliminate memory leaks in PL/Perl's spi_prepare() function + Eliminate memory leaks in PL/Perl's spi_prepare() function (Alex Hunsaker, Tom Lane) - Fix pg_dumpall to handle database names containing - = correctly (Heikki Linnakangas) + Fix pg_dumpall to handle database names containing + = correctly (Heikki Linnakangas) - Avoid crash in pg_dump when an incorrect connection + Avoid crash in pg_dump when an incorrect connection string is given (Heikki Linnakangas) - Ignore invalid indexes in pg_dump and - pg_upgrade (Michael Paquier, Bruce Momjian) + Ignore invalid indexes in pg_dump and + pg_upgrade (Michael Paquier, Bruce Momjian) @@ -3595,26 +3595,26 @@ a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which - pg_dump wouldn't be expected to dump anyway. - pg_upgrade now also skips invalid indexes rather than + pg_dump wouldn't be expected to dump anyway. + pg_upgrade now also skips invalid indexes rather than failing. - Fix contrib/pg_trgm's similarity() function + Fix contrib/pg_trgm's similarity() function to return zero for trigram-less strings (Tom Lane) - Previously it returned NaN due to internal division by zero. + Previously it returned NaN due to internal division by zero. - Update time zone data files to tzdata release 2013b + Update time zone data files to tzdata release 2013b for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas. Also, historical zone data corrections for numerous places. @@ -3622,12 +3622,12 @@ Also, update the time zone abbreviation files for recent changes in - Russia and elsewhere: CHOT, GET, - IRKT, KGT, KRAT, MAGT, - MAWT, MSK, NOVT, OMST, - TKT, VLAT, WST, YAKT, - YEKT now follow their current meanings, and - VOLT (Europe/Volgograd) and MIST + Russia and elsewhere: CHOT, GET, + IRKT, KGT, KRAT, MAGT, + MAWT, MSK, NOVT, OMST, + TKT, VLAT, WST, YAKT, + YEKT now follow their current meanings, and + VOLT (Europe/Volgograd) and MIST (Antarctica/Macquarie) are added to the default abbreviations list. @@ -3672,7 +3672,7 @@ - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -3742,19 +3742,19 @@ Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. 
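As a quick check of the similarity() fix noted above (assumes contrib/pg_trgm is installed):

    CREATE EXTENSION pg_trgm;

    -- A string containing no trigrams now yields 0 rather than NaN:
    SELECT similarity('', 'word');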
- Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -3766,13 +3766,13 @@ Fix error in vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age implementation (Andres Freund) In installations that have existed for more than vacuum_freeze_min_age + linkend="guc-vacuum-freeze-min-age">vacuum_freeze_min_age transactions, this mistake prevented autovacuum from using partial-table scans, so that a full-table scan would always happen instead. @@ -3780,13 +3780,13 @@ - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -3799,7 +3799,7 @@ - Reject out-of-range dates in to_date() (Hitoshi Harada) + Reject out-of-range dates in to_date() (Hitoshi Harada) @@ -3810,55 +3810,55 @@ - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? command when not connected to a database (Meng Qingzhong) - Fix pg_upgrade to deal with invalid indexes safely + Fix pg_upgrade to deal with invalid indexes safely (Bruce Momjian) - Fix one-byte buffer overrun in libpq's - PQprintTuples (Xi Wang) + Fix one-byte buffer overrun in libpq's + PQprintTuples (Xi Wang) This ancient function is not used anywhere by - PostgreSQL itself, but it might still be used by some + PostgreSQL itself, but it might still be used by some client code. - Make ecpglib use translated messages properly + Make ecpglib use translated messages properly (Chen Huajun) - Properly install ecpg_compat and - pgtypes libraries on MSVC (Jiang Guiqing) + Properly install ecpg_compat and + pgtypes libraries on MSVC (Jiang Guiqing) - Include our version of isinf() in - libecpg if it's not provided by the system + Include our version of isinf() in + libecpg if it's not provided by the system (Jiang Guiqing) @@ -3878,15 +3878,15 @@ - Make pgxs build executables with the right - .exe suffix when cross-compiling for Windows + Make pgxs build executables with the right + .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi) - Add new timezone abbreviation FET (Tom Lane) + Add new timezone abbreviation FET (Tom Lane) @@ -3935,13 +3935,13 @@ Fix multiple bugs associated with CREATE INDEX - CONCURRENTLY (Andres Freund, Tom Lane) + CONCURRENTLY (Andres Freund, Tom Lane) - Fix CREATE INDEX CONCURRENTLY to use + Fix CREATE INDEX CONCURRENTLY to use in-place updates when changing the state of an index's - pg_index row. This prevents race conditions that could + pg_index row. This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus resulting in corrupt concurrently-created indexes. @@ -3949,8 +3949,8 @@ Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX - CONCURRENTLY command. The most important of these is - VACUUM, because an auto-vacuum could easily be launched + CONCURRENTLY command. The most important of these is + VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index. 
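A way to spot the leftover invalid indexes discussed just above after a failed CREATE INDEX CONCURRENTLY (names invented for illustration):

    CREATE TABLE big_table (k int);

    -- If this fails partway (deadlock, unique violation, ...), it leaves
    -- an invalid index behind, which VACUUM and other operations now ignore:
    CREATE INDEX CONCURRENTLY idx_k ON big_table (k);

    SELECT indexrelid::regclass
    FROM pg_index
    WHERE indrelid = 'big_table'::regclass AND NOT indisvalid;

    DROP INDEX idx_k;  -- clean up, then retry the build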
@@ -3987,13 +3987,13 @@ This oversight could prevent subsequent execution of certain - operations such as CREATE INDEX CONCURRENTLY. + operations such as CREATE INDEX CONCURRENTLY. - Avoid bogus out-of-sequence timeline ID errors in standby + Avoid bogus out-of-sequence timeline ID errors in standby mode (Heikki Linnakangas) @@ -4026,8 +4026,8 @@ The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. @@ -4045,10 +4045,10 @@ - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -4056,7 +4056,7 @@ Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) @@ -4069,7 +4069,7 @@ - Fix ALTER COLUMN TYPE to handle inherited check + Fix ALTER COLUMN TYPE to handle inherited check constraints properly (Pavan Deolasee) @@ -4081,14 +4081,14 @@ - Fix REASSIGN OWNED to handle grants on tablespaces + Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera) - Ignore incorrect pg_attribute entries for system + Ignore incorrect pg_attribute entries for system columns for views (Tom Lane) @@ -4102,7 +4102,7 @@ - Fix rule printing to dump INSERT INTO table + Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane) @@ -4110,7 +4110,7 @@ Guard against stack overflow when there are too many - UNION/INTERSECT/EXCEPT clauses + UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane) @@ -4132,14 +4132,14 @@ Fix failure to advance XID epoch if XID wraparound happens during a - checkpoint and wal_level is hot_standby + checkpoint and wal_level is hot_standby (Tom Lane, Andres Freund) While this mistake had no particular impact on PostgreSQL itself, it was bad for - applications that rely on txid_current() and related + applications that rely on txid_current() and related functions: the TXID value would appear to go backwards. @@ -4153,7 +4153,7 @@ Formerly, this would result in something quite unhelpful, such as - Non-recoverable failure in name resolution. + Non-recoverable failure in name resolution. @@ -4166,8 +4166,8 @@ - Make pg_ctl more robust about reading the - postmaster.pid file (Heikki Linnakangas) + Make pg_ctl more robust about reading the + postmaster.pid file (Heikki Linnakangas) @@ -4177,33 +4177,33 @@ - Fix possible crash in psql if incorrectly-encoded data - is presented and the client_encoding setting is a + Fix possible crash in psql if incorrectly-encoded data + is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing) - Fix bugs in the restore.sql script emitted by - pg_dump in tar output format (Tom Lane) + Fix bugs in the restore.sql script emitted by + pg_dump in tar output format (Tom Lane) The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring - data in mode as well as the regular COPY mode. 
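The COALESCE hazard noted above arises for queries of this shape (schema invented):

    CREATE TABLE a (id int);
    CREATE TABLE b (id int, foo int);

    -- Rows of a with no match in b make b.foo NULL, and NULL satisfies
    -- COALESCE(b.foo, 0) = 0, so the planner must not simplify the clause
    -- to b.foo = 0:
    SELECT *
    FROM a LEFT JOIN b ON a.id = b.id
    WHERE COALESCE(b.foo, 0) = 0;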
- Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. This patch updates previous branches so that they will accept both the @@ -4214,48 +4214,48 @@ - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Fix ecpg's ecpg_get_data function to + Fix ecpg's ecpg_get_data function to handle arrays properly (Michael Meskes) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -4266,7 +4266,7 @@ - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -4318,7 +4318,7 @@ These errors could result in wrong answers from queries that scan the - same WITH subquery multiple times. + same WITH subquery multiple times. @@ -4341,10 +4341,10 @@ - If we revoke a grant option from some role X, but - X still holds that option via a grant from someone + If we revoke a grant option from some role X, but + X still holds that option via a grant from someone else, we should not recursively revoke the corresponding privilege - from role(s) Y that X had granted it + from role(s) Y that X had granted it to. @@ -4358,12 +4358,12 @@ - Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) + Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) - Perl resets the process's SIGFPE handler to - SIG_IGN, which could result in crashes later on. Restore + Perl resets the process's SIGFPE handler to + SIG_IGN, which could result in crashes later on. Restore the normal Postgres signal handler after initializing PL/Perl. @@ -4382,7 +4382,7 @@ Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. @@ -4390,26 +4390,26 @@ - Fix pg_upgrade's handling of line endings on Windows + Fix pg_upgrade's handling of line endings on Windows (Andrew Dunstan) - Previously, pg_upgrade might add or remove carriage + Previously, pg_upgrade might add or remove carriage returns in places such as function bodies. 
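The multiply-scanned WITH case mentioned above, in minimal form:

    -- Both references to w must see the same three rows; the planner bugs
    -- fixed here could make repeated scans of a WITH subquery disagree:
    WITH w AS (SELECT random() AS r FROM generate_series(1, 3))
    SELECT * FROM w
    UNION ALL
    SELECT * FROM w;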
- On Windows, make pg_upgrade use backslash path + On Windows, make pg_upgrade use backslash path separators in the scripts it emits (Andrew Dunstan) - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -4459,7 +4459,7 @@ - xml_parse() would attempt to fetch external files or + xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. While the external data @@ -4472,22 +4472,22 @@ - Prevent access to external files/URLs via contrib/xml2's - xslt_process() (Peter Eisentraut) + Prevent access to external files/URLs via contrib/xml2's + xslt_process() (Peter Eisentraut) - libxslt offers the ability to read and write both + libxslt offers the ability to read and write both files and URLs through stylesheet commands, thus allowing unprivileged database users to both read and write data with the privileges of the database server. Disable that through proper use - of libxslt's security options. (CVE-2012-3488) + of libxslt's security options. (CVE-2012-3488) - Also, remove xslt_process()'s ability to fetch documents + Also, remove xslt_process()'s ability to fetch documents and stylesheets from external files/URLs. While this was a - documented feature, it was long regarded as a bad idea. + documented feature, it was long regarded as a bad idea. The fix for CVE-2012-3489 broke that capability, and rather than expend effort on trying to fix it, we're just going to summarily remove it. @@ -4515,21 +4515,21 @@ - If ALTER SEQUENCE was executed on a freshly created or - reset sequence, and then precisely one nextval() call + If ALTER SEQUENCE was executed on a freshly created or + reset sequence, and then precisely one nextval() call was made on it, and then the server crashed, WAL replay would restore the sequence to a state in which it appeared that no - nextval() had been done, thus allowing the first + nextval() had been done, thus allowing the first sequence value to be returned again by the next - nextval() call. In particular this could manifest for - serial columns, since creation of a serial column's sequence - includes an ALTER SEQUENCE OWNED BY step. + nextval() call. In particular this could manifest for + serial columns, since creation of a serial column's sequence + includes an ALTER SEQUENCE OWNED BY step. - Fix txid_current() to report the correct epoch when not + Fix txid_current() to report the correct epoch when not in hot standby (Heikki Linnakangas) @@ -4546,14 +4546,14 @@ This mistake led to failures reported as out-of-order XID - insertion in KnownAssignedXids. + insertion in KnownAssignedXids. - Ensure the backup_label file is fsync'd after - pg_start_backup() (Dave Kerr) + Ensure the backup_label file is fsync'd after + pg_start_backup() (Dave Kerr) @@ -4564,7 +4564,7 @@ WAL sender background processes neglected to establish a - SIGALRM handler, meaning they would wait forever in + SIGALRM handler, meaning they would wait forever in some corner cases where a timeout ought to happen. @@ -4583,15 +4583,15 @@ - Fix LISTEN/NOTIFY to cope better with I/O + Fix LISTEN/NOTIFY to cope better with I/O problems, such as out of disk space (Tom Lane) After a write failure, all subsequent attempts to send more - NOTIFY messages would fail with messages like - Could not read from file "pg_notify/nnnn" at - offset nnnnn: Success. 
+ NOTIFY messages would fail with messages like + Could not read from file "pg_notify/nnnn" at + offset nnnnn: Success. @@ -4604,7 +4604,7 @@ The original coding could allow inconsistent behavior in some cases; in particular, an autovacuum could get canceled after less than - deadlock_timeout grace period. + deadlock_timeout grace period. @@ -4616,15 +4616,15 @@ - Fix log collector so that log_truncate_on_rotation works + Fix log collector so that log_truncate_on_rotation works during the very first log rotation after server start (Tom Lane) - Fix WITH attached to a nested set operation - (UNION/INTERSECT/EXCEPT) + Fix WITH attached to a nested set operation + (UNION/INTERSECT/EXCEPT) (Tom Lane) @@ -4632,24 +4632,24 @@ Ensure that a whole-row reference to a subquery doesn't include any - extra GROUP BY or ORDER BY columns (Tom Lane) + extra GROUP BY or ORDER BY columns (Tom Lane) - Disallow copying whole-row references in CHECK - constraints and index definitions during CREATE TABLE + Disallow copying whole-row references in CHECK + constraints and index definitions during CREATE TABLE (Tom Lane) - This situation can arise in CREATE TABLE with - LIKE or INHERITS. The copied whole-row + This situation can arise in CREATE TABLE with + LIKE or INHERITS. The copied whole-row variable was incorrectly labeled with the row type of the original table not the new one. Rejecting the case seems reasonable for - LIKE, since the row types might well diverge later. For - INHERITS we should ideally allow it, with an implicit + LIKE, since the row types might well diverge later. For + INHERITS we should ideally allow it, with an implicit coercion to the parent table's row type; but that will require more work than seems safe to back-patch. @@ -4657,7 +4657,7 @@ - Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki + Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki Linnakangas, Tom Lane) @@ -4669,7 +4669,7 @@ The code could get confused by quantified parenthesized - subexpressions, such as ^(foo)?bar. This would lead to + subexpressions, such as ^(foo)?bar. This would lead to incorrect index optimization of searches for such patterns. @@ -4677,9 +4677,9 @@ Fix bugs with parsing signed - hh:mm and - hh:mm:ss - fields in interval constants (Amit Kapila, Tom Lane) + hh:mm and + hh:mm:ss + fields in interval constants (Amit Kapila, Tom Lane) @@ -4708,14 +4708,14 @@ - Report errors properly in contrib/xml2's - xslt_process() (Tom Lane) + Report errors properly in contrib/xml2's + xslt_process() (Tom Lane) - Update time zone data files to tzdata release 2012e + Update time zone data files to tzdata release 2012e for DST law changes in Morocco and Tokelau @@ -4761,12 +4761,12 @@ Fix incorrect password transformation in - contrib/pgcrypto's DES crypt() function + contrib/pgcrypto's DES crypt() function (Solar Designer) - If a password string contained the byte value 0x80, the + If a password string contained the byte value 0x80, the remainder of the password was ignored, causing the password to be much weaker than it appeared. With this fix, the rest of the string is properly included in the DES hash. 
Any stored password values that are @@ -4777,7 +4777,7 @@ - Ignore SECURITY DEFINER and SET attributes for + Ignore SECURITY DEFINER and SET attributes for a procedural language's call handler (Tom Lane) @@ -4789,7 +4789,7 @@ - Allow numeric timezone offsets in timestamp input to be up to + Allow numeric timezone offsets in timestamp input to be up to 16 hours away from UTC (Tom Lane) @@ -4815,7 +4815,7 @@ - Fix text to name and char to name + Fix text to name and char to name casts to perform string truncation correctly in multibyte encodings (Karl Schnaitter) @@ -4823,13 +4823,13 @@ - Fix memory copying bug in to_tsquery() (Heikki Linnakangas) + Fix memory copying bug in to_tsquery() (Heikki Linnakangas) - Ensure txid_current() reports the correct epoch when + Ensure txid_current() reports the correct epoch when executed in hot standby (Simon Riggs) @@ -4844,7 +4844,7 @@ This bug concerns sub-SELECTs that reference variables coming from the nullable side of an outer join of the surrounding query. In 9.1, queries affected by this bug would fail with ERROR: - Upper-level PlaceHolderVar found where not expected. But in 9.0 and + Upper-level PlaceHolderVar found where not expected. But in 9.0 and 8.4, you'd silently get possibly-wrong answers, since the value transmitted into the subquery wouldn't go to null when it should. @@ -4852,13 +4852,13 @@ - Fix slow session startup when pg_attribute is very large + Fix slow session startup when pg_attribute is very large (Tom Lane) - If pg_attribute exceeds one-fourth of - shared_buffers, cache rebuilding code that is sometimes + If pg_attribute exceeds one-fourth of + shared_buffers, cache rebuilding code that is sometimes needed during session start would trigger the synchronized-scan logic, causing it to take many times longer than normal. The problem was particularly acute if many new sessions were starting at once. @@ -4879,8 +4879,8 @@ - Ensure the Windows implementation of PGSemaphoreLock() - clears ImmediateInterruptOK before returning (Tom Lane) + Ensure the Windows implementation of PGSemaphoreLock() + clears ImmediateInterruptOK before returning (Tom Lane) @@ -4907,12 +4907,12 @@ - Fix COPY FROM to properly handle null marker strings that + Fix COPY FROM to properly handle null marker strings that correspond to invalid encoding (Tom Lane) - A null marker string such as E'\\0' should work, and did + A null marker string such as E'\\0' should work, and did work in the past, but the case got broken in 8.4. @@ -4925,7 +4925,7 @@ Previously, infinite recursion in a function invoked by - auto-ANALYZE could crash worker processes. + auto-ANALYZE could crash worker processes. 
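The COPY FROM fix just above concerns null marker strings like the one in this sketch (table and path hypothetical):

    -- A backslash-escape marker such as E'\\0' works again as the null string:
    COPY tab FROM '/tmp/data.txt' WITH (FORMAT text, NULL E'\\0');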
@@ -4944,7 +4944,7 @@ Fix logging collector to ensure it will restart file rotation - after receiving SIGHUP (Tom Lane) + after receiving SIGHUP (Tom Lane) @@ -4957,33 +4957,33 @@ - Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe + Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe Conway) - Fix PL/pgSQL's GET DIAGNOSTICS command when the target + Fix PL/pgSQL's GET DIAGNOSTICS command when the target is the function's first variable (Tom Lane) - Fix potential access off the end of memory in psql's - expanded display (\x) mode (Peter Eisentraut) + Fix potential access off the end of memory in psql's + expanded display (\x) mode (Peter Eisentraut) - Fix several performance problems in pg_dump when + Fix several performance problems in pg_dump when the database contains many objects (Jeff Janes, Tom Lane) - pg_dump could get very slow if the database contained + pg_dump could get very slow if the database contained many schemas, or if many objects are in dependency loops, or if there are many owned sequences. @@ -4991,7 +4991,7 @@ - Fix pg_upgrade for the case that a database stored in a + Fix pg_upgrade for the case that a database stored in a non-default tablespace contains a table in the cluster's default tablespace (Bruce Momjian) @@ -4999,41 +4999,41 @@ - In ecpg, fix rare memory leaks and possible overwrite - of one byte after the sqlca_t structure (Peter Eisentraut) + In ecpg, fix rare memory leaks and possible overwrite + of one byte after the sqlca_t structure (Peter Eisentraut) - Fix contrib/dblink's dblink_exec() to not leak + Fix contrib/dblink's dblink_exec() to not leak temporary database connections upon error (Tom Lane) - Fix contrib/dblink to report the correct connection name in + Fix contrib/dblink to report the correct connection name in error messages (Kyotaro Horiguchi) - Fix contrib/vacuumlo to use multiple transactions when + Fix contrib/vacuumlo to use multiple transactions when dropping many large objects (Tim Lewis, Robert Haas, Tom Lane) - This change avoids exceeding max_locks_per_transaction when + This change avoids exceeding max_locks_per_transaction when many objects need to be dropped. The behavior can be adjusted with the - new -l (limit) option. + new -l (limit) option. - Update time zone data files to tzdata release 2012c + Update time zone data files to tzdata release 2012c for DST law changes in Antarctica, Armenia, Chile, Cuba, Falkland Islands, Gaza, Haiti, Hebron, Morocco, Syria, and Tokelau Islands; also historical corrections for Canada. @@ -5081,14 +5081,14 @@ Require execute permission on the trigger function for - CREATE TRIGGER (Robert Haas) + CREATE TRIGGER (Robert Haas) This missing check could allow another user to execute a trigger function with forged input data, by installing it on a table he owns. This is only of significance for trigger functions marked - SECURITY DEFINER, since otherwise trigger functions run + SECURITY DEFINER, since otherwise trigger functions run as the table owner anyway. (CVE-2012-0866) @@ -5100,7 +5100,7 @@ - Both libpq and the server truncated the common name + Both libpq and the server truncated the common name extracted from an SSL certificate at 32 bytes. 
Normally this would cause nothing worse than an unexpected verification failure, but there are some rather-implausible scenarios in which it might allow one @@ -5115,12 +5115,12 @@ - Convert newlines to spaces in names written in pg_dump + Convert newlines to spaces in names written in pg_dump comments (Robert Haas) - pg_dump was incautious about sanitizing object names + pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL @@ -5136,10 +5136,10 @@ An index page split caused by an insertion could sometimes cause a - concurrently-running VACUUM to miss removing index entries + concurrently-running VACUUM to miss removing index entries that it should remove. After the corresponding table rows are removed, the dangling index entries would cause errors (such as could not - read block N in file ...) or worse, silently wrong query results + read block N in file ...) or worse, silently wrong query results after unrelated rows are re-inserted at the now-free table locations. This bug has been present since release 8.2, but occurs so infrequently that it was not diagnosed until now. If you have reason to suspect @@ -5158,7 +5158,7 @@ that the contents were transiently invalid. In hot standby mode this can result in a query that's executing in parallel seeing garbage data. Various symptoms could result from that, but the most common one seems - to be invalid memory alloc request size. + to be invalid memory alloc request size. @@ -5176,13 +5176,13 @@ - Fix CLUSTER/VACUUM FULL handling of toast + Fix CLUSTER/VACUUM FULL handling of toast values owned by recently-updated rows (Tom Lane) This oversight could lead to duplicate key value violates unique - constraint errors being reported against the toast table's index + constraint errors being reported against the toast table's index during one of these commands. @@ -5204,11 +5204,11 @@ Support foreign data wrappers and foreign servers in - REASSIGN OWNED (Alvaro Herrera) + REASSIGN OWNED (Alvaro Herrera) - This command failed with unexpected classid errors if + This command failed with unexpected classid errors if it needed to change the ownership of any such objects. @@ -5216,16 +5216,16 @@ Allow non-existent values for some settings in ALTER - USER/DATABASE SET (Heikki Linnakangas) + USER/DATABASE SET (Heikki Linnakangas) - Allow default_text_search_config, - default_tablespace, and temp_tablespaces to be + Allow default_text_search_config, + default_tablespace, and temp_tablespaces to be set to names that are not known. This is because they might be known in another database where the setting is intended to be used, or for the tablespace cases because the tablespace might not be created yet. The - same issue was previously recognized for search_path, and + same issue was previously recognized for search_path, and these settings now act like that one. @@ -5249,7 +5249,7 @@ Recover from errors occurring during WAL replay of DROP - TABLESPACE (Tom Lane) + TABLESPACE (Tom Lane) @@ -5271,7 +5271,7 @@ Sometimes a lock would be logged as being held by transaction - zero. This is at least known to produce assertion failures on + zero. This is at least known to produce assertion failures on slave servers, and might be the cause of more serious problems. 
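
A minimal sketch of the relaxed ALTER DATABASE ... SET behavior described above; the database and tablespace names are hypothetical:

    -- Accepted even though no tablespace by this name exists yet; the
    -- name only has to resolve where the setting is eventually used.
    ALTER DATABASE mydb SET default_tablespace = 'future_space';
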
@@ -5293,7 +5293,7 @@ - Prevent emitting misleading consistent recovery state reached + Prevent emitting misleading consistent recovery state reached log message at the beginning of crash recovery (Heikki Linnakangas) @@ -5301,7 +5301,7 @@ Fix initial value of - pg_stat_replication.replay_location + pg_stat_replication.replay_location (Fujii Masao) @@ -5313,7 +5313,7 @@ - Fix regular expression back-references with * attached + Fix regular expression back-references with * attached (Tom Lane) @@ -5327,18 +5327,18 @@ A similar problem still afflicts back-references that are embedded in a larger quantified expression, rather than being the immediate subject of the quantifier. This will be addressed in a future - PostgreSQL release. + PostgreSQL release. Fix recently-introduced memory leak in processing of - inet/cidr values (Heikki Linnakangas) + inet/cidr values (Heikki Linnakangas) - A patch in the December 2011 releases of PostgreSQL + A patch in the December 2011 releases of PostgreSQL caused memory leakage in these operations, which could be significant in scenarios such as building a btree index on such a column. @@ -5346,8 +5346,8 @@ - Fix dangling pointer after CREATE TABLE AS/SELECT - INTO in a SQL-language function (Tom Lane) + Fix dangling pointer after CREATE TABLE AS/SELECT + INTO in a SQL-language function (Tom Lane) @@ -5381,32 +5381,32 @@ - Improve pg_dump's handling of inherited table columns + Improve pg_dump's handling of inherited table columns (Tom Lane) - pg_dump mishandled situations where a child column has + pg_dump mishandled situations where a child column has a different default expression than its parent column. If the default is textually identical to the parent's default, but not actually the same (for instance, because of schema search path differences) it would not be recognized as different, so that after dump and restore the child would be allowed to inherit the parent's default. Child columns - that are NOT NULL where their parent is not could also be + that are NOT NULL where their parent is not could also be restored subtly incorrectly. - Fix pg_restore's direct-to-database mode for + Fix pg_restore's direct-to-database mode for INSERT-style table data (Tom Lane) Direct-to-database restores from archive files made with - - Fix trigger WHEN conditions when both BEFORE and - AFTER triggers exist (Tom Lane) + Fix trigger WHEN conditions when both BEFORE and + AFTER triggers exist (Tom Lane) - Evaluation of WHEN conditions for AFTER ROW - UPDATE triggers could crash if there had been a BEFORE - ROW trigger fired for the same update. + Evaluation of WHEN conditions for AFTER ROW + UPDATE triggers could crash if there had been a BEFORE + ROW trigger fired for the same update. @@ -6202,7 +6202,7 @@ - Allow nested EXISTS queries to be optimized properly (Tom + Allow nested EXISTS queries to be optimized properly (Tom Lane) @@ -6222,19 +6222,19 @@ - Fix EXPLAIN to handle gating Result nodes within + Fix EXPLAIN to handle gating Result nodes within inner-indexscan subplans (Tom Lane) - The usual symptom of this oversight was bogus varno errors. + The usual symptom of this oversight was bogus varno errors. 
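
The back-reference-with-quantifier shape addressed by the regular-expression fix above, in sketch form:

    -- A back-reference that is itself the subject of a * quantifier:
    SELECT 'abcabcabc' ~ E'^(abc)\\1*$';   -- expected: true
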
- Fix btree preprocessing of indexedcol IS - NULL conditions (Dean Rasheed) + Fix btree preprocessing of indexedcol IS + NULL conditions (Dean Rasheed) @@ -6257,13 +6257,13 @@ - Fix dump bug for VALUES in a view (Tom Lane) + Fix dump bug for VALUES in a view (Tom Lane) - Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) + Disallow SELECT FOR UPDATE/SHARE on sequences (Tom Lane) @@ -6273,8 +6273,8 @@ - Fix VACUUM so that it always updates - pg_class.reltuples/relpages (Tom + Fix VACUUM so that it always updates + pg_class.reltuples/relpages (Tom Lane) @@ -6293,7 +6293,7 @@ - Fix cases where CLUSTER might attempt to access + Fix cases where CLUSTER might attempt to access already-removed TOAST data (Tom Lane) @@ -6308,7 +6308,7 @@ Fix portability bugs in use of credentials control messages for - peer authentication (Tom Lane) + peer authentication (Tom Lane) @@ -6320,20 +6320,20 @@ The typical symptom of this problem was The function requested is - not supported errors during SSPI login. + not supported errors during SSPI login. Fix failure when adding a new variable of a custom variable class to - postgresql.conf (Tom Lane) + postgresql.conf (Tom Lane) - Throw an error if pg_hba.conf contains hostssl + Throw an error if pg_hba.conf contains hostssl but SSL is disabled (Tom Lane) @@ -6345,19 +6345,19 @@ - Fix failure when DROP OWNED BY attempts to remove default + Fix failure when DROP OWNED BY attempts to remove default privileges on sequences (Shigeru Hanada) - Fix typo in pg_srand48 seed initialization (Andres Freund) + Fix typo in pg_srand48 seed initialization (Andres Freund) This led to failure to use all bits of the provided seed. This function - is not used on most platforms (only those without srandom), + is not used on most platforms (only those without srandom), and the potential security exposure from a less-random-than-expected seed seems minimal in any case. @@ -6365,25 +6365,25 @@ - Avoid integer overflow when the sum of LIMIT and - OFFSET values exceeds 2^63 (Heikki Linnakangas) + Avoid integer overflow when the sum of LIMIT and + OFFSET values exceeds 2^63 (Heikki Linnakangas) - Add overflow checks to int4 and int8 versions of - generate_series() (Robert Haas) + Add overflow checks to int4 and int8 versions of + generate_series() (Robert Haas) - Fix trailing-zero removal in to_char() (Marti Raudsepp) + Fix trailing-zero removal in to_char() (Marti Raudsepp) - In a format with FM and no digit positions + In a format with FM and no digit positions after the decimal point, zeroes to the left of the decimal point could be removed incorrectly. @@ -6391,7 +6391,7 @@ - Fix pg_size_pretty() to avoid overflow for inputs close to + Fix pg_size_pretty() to avoid overflow for inputs close to 2^63 (Tom Lane) @@ -6409,19 +6409,19 @@ - Correctly handle quotes in locale names during initdb + Correctly handle quotes in locale names during initdb (Heikki Linnakangas) The case can arise with some Windows locales, such as People's - Republic of China. + Republic of China. - In pg_upgrade, avoid dumping orphaned temporary tables + In pg_upgrade, avoid dumping orphaned temporary tables (Bruce Momjian) @@ -6433,54 +6433,54 @@ - Fix pg_upgrade to preserve toast tables' relfrozenxids + Fix pg_upgrade to preserve toast tables' relfrozenxids during an upgrade from 8.3 (Bruce Momjian) - Failure to do this could lead to pg_clog files being + Failure to do this could lead to pg_clog files being removed too soon after the upgrade. 
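
A sketch of the to_char() case fixed above, i.e. an FM format with no digit positions after the decimal point:

    -- Integral zeroes must survive; FM should suppress only padding:
    SELECT to_char(100, 'FM999');   -- expected: '100'
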
- In pg_upgrade, fix the -l (log) option to + In pg_upgrade, fix the -l (log) option to work on Windows (Bruce Momjian) - In pg_ctl, support silent mode for service registrations + In pg_ctl, support silent mode for service registrations on Windows (MauMau) - Fix psql's counting of script file line numbers during - COPY from a different file (Tom Lane) + Fix psql's counting of script file line numbers during + COPY from a different file (Tom Lane) - Fix pg_restore's direct-to-database mode for - standard_conforming_strings (Tom Lane) + Fix pg_restore's direct-to-database mode for + standard_conforming_strings (Tom Lane) - pg_restore could emit incorrect commands when restoring + pg_restore could emit incorrect commands when restoring directly to a database server from an archive file that had been made - with standard_conforming_strings set to on. + with standard_conforming_strings set to on. Be more user-friendly about unsupported cases for parallel - pg_restore (Tom Lane) + pg_restore (Tom Lane) @@ -6491,14 +6491,14 @@ - Fix write-past-buffer-end and memory leak in libpq's + Fix write-past-buffer-end and memory leak in libpq's LDAP service lookup code (Albe Laurenz) - In libpq, avoid failures when using nonblocking I/O + In libpq, avoid failures when using nonblocking I/O and an SSL connection (Martin Pihlak, Tom Lane) @@ -6510,36 +6510,36 @@ - In particular, the response to a server report of fork() + In particular, the response to a server report of fork() failure during SSL connection startup is now saner. - Improve libpq's error reporting for SSL failures (Tom + Improve libpq's error reporting for SSL failures (Tom Lane) - Fix PQsetvalue() to avoid possible crash when adding a new - tuple to a PGresult originally obtained from a server + Fix PQsetvalue() to avoid possible crash when adding a new + tuple to a PGresult originally obtained from a server query (Andrew Chernow) - Make ecpglib write double values with 15 digits + Make ecpglib write double values with 15 digits precision (Akira Kurosawa) - In ecpglib, be sure LC_NUMERIC setting is + In ecpglib, be sure LC_NUMERIC setting is restored after an error (Michael Meskes) @@ -6551,7 +6551,7 @@ - contrib/pg_crypto's blowfish encryption code could give + contrib/pg_crypto's blowfish encryption code could give wrong results on platforms where char is signed (which is most), leading to encrypted passwords being weaker than they should be. @@ -6559,13 +6559,13 @@ - Fix memory leak in contrib/seg (Heikki Linnakangas) + Fix memory leak in contrib/seg (Heikki Linnakangas) - Fix pgstatindex() to give consistent results for empty + Fix pgstatindex() to give consistent results for empty indexes (Tom Lane) @@ -6585,7 +6585,7 @@ - Update time zone data files to tzdata release 2011i + Update time zone data files to tzdata release 2011i for DST law changes in Canada, Egypt, Russia, Samoa, and South Sudan. @@ -6618,10 +6618,10 @@ However, if your installation was upgraded from a previous major - release by running pg_upgrade, you should take + release by running pg_upgrade, you should take action to prevent possible data loss due to a now-fixed bug in - pg_upgrade. The recommended solution is to run - VACUUM FREEZE on all TOAST tables. + pg_upgrade. The recommended solution is to run + VACUUM FREEZE on all TOAST tables. More information is available at http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix. 
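
One hedged way to act on the VACUUM FREEZE advice above is to generate one command per TOAST table and run the output as superuser in each database (a sketch only; the wiki page above is the authoritative procedure):

    -- TOAST tables have relkind 't'; emit one VACUUM FREEZE per table.
    SELECT 'VACUUM FREEZE pg_toast.' || quote_ident(relname) || ';'
    FROM pg_class
    WHERE relkind = 't';
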
@@ -6636,36 +6636,36 @@ - Fix pg_upgrade's handling of TOAST tables + Fix pg_upgrade's handling of TOAST tables (Bruce Momjian) - The pg_class.relfrozenxid value for + The pg_class.relfrozenxid value for TOAST tables was not correctly copied into the new installation - during pg_upgrade. This could later result in - pg_clog files being discarded while they were still + during pg_upgrade. This could later result in + pg_clog files being discarded while they were still needed to validate tuples in the TOAST tables, leading to - could not access status of transaction failures. + could not access status of transaction failures. This error poses a significant risk of data loss for installations - that have been upgraded with pg_upgrade. This patch - corrects the problem for future uses of pg_upgrade, + that have been upgraded with pg_upgrade. This patch + corrects the problem for future uses of pg_upgrade, but does not in itself cure the issue in installations that have been - processed with a buggy version of pg_upgrade. + processed with a buggy version of pg_upgrade. - Suppress incorrect PD_ALL_VISIBLE flag was incorrectly set + Suppress incorrect PD_ALL_VISIBLE flag was incorrectly set warning (Heikki Linnakangas) - VACUUM would sometimes issue this warning in cases that + VACUUM would sometimes issue this warning in cases that are actually valid. @@ -6680,8 +6680,8 @@ All retryable conflict errors now have an error code that indicates that a retry is possible. Also, session closure due to the database being dropped on the master is now reported as - ERRCODE_DATABASE_DROPPED, rather than - ERRCODE_ADMIN_SHUTDOWN, so that connection poolers can + ERRCODE_DATABASE_DROPPED, rather than + ERRCODE_ADMIN_SHUTDOWN, so that connection poolers can handle the situation correctly. @@ -6726,15 +6726,15 @@ - Fix dangling-pointer problem in BEFORE ROW UPDATE trigger + Fix dangling-pointer problem in BEFORE ROW UPDATE trigger handling when there was a concurrent update to the target tuple (Tom Lane) This bug has been observed to result in intermittent cannot - extract system attribute from virtual tuple failures while trying to - do UPDATE RETURNING ctid. There is a very small probability + extract system attribute from virtual tuple failures while trying to + do UPDATE RETURNING ctid. There is a very small probability of more serious errors, such as generating incorrect index entries for the updated tuple. @@ -6742,25 +6742,25 @@ - Disallow DROP TABLE when there are pending deferred trigger + Disallow DROP TABLE when there are pending deferred trigger events for the table (Tom Lane) - Formerly the DROP would go through, leading to - could not open relation with OID nnn errors when the + Formerly the DROP would go through, leading to + could not open relation with OID nnn errors when the triggers were eventually fired. - Allow replication as a user name in - pg_hba.conf (Andrew Dunstan) + Allow replication as a user name in + pg_hba.conf (Andrew Dunstan) - replication is special in the database name column, but it + replication is special in the database name column, but it was mistakenly also treated as special in the user name column. @@ -6781,13 +6781,13 @@ - Fix handling of SELECT FOR UPDATE in a sub-SELECT + Fix handling of SELECT FOR UPDATE in a sub-SELECT (Tom Lane) This bug typically led to cannot extract system attribute from - virtual tuple errors. + virtual tuple errors. 
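
A sketch of the now-disallowed sequence described above, using a hypothetical deferred foreign-key constraint:

    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child  (pid int REFERENCES parent
                                 DEFERRABLE INITIALLY DEFERRED);
    BEGIN;
    INSERT INTO child VALUES (1);   -- queues a deferred FK trigger event
    DROP TABLE child;               -- now rejected up front, instead of
                                    -- failing later when the event fires
    COMMIT;
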
@@ -6813,7 +6813,7 @@ - Allow libpq's SSL initialization to succeed when + Allow libpq's SSL initialization to succeed when user's home directory is unavailable (Tom Lane) @@ -6826,34 +6826,34 @@ - Fix libpq to return a useful error message for errors - detected in conninfo_array_parse (Joseph Adams) + Fix libpq to return a useful error message for errors + detected in conninfo_array_parse (Joseph Adams) A typo caused the library to return NULL, rather than the - PGconn structure containing the error message, to the + PGconn structure containing the error message, to the application. - Fix ecpg preprocessor's handling of float constants + Fix ecpg preprocessor's handling of float constants (Heikki Linnakangas) - Fix parallel pg_restore to handle comments on + Fix parallel pg_restore to handle comments on POST_DATA items correctly (Arnd Hannemann) - Fix pg_restore to cope with long lines (over 1KB) in + Fix pg_restore to cope with long lines (over 1KB) in TOC files (Tom Lane) @@ -6899,14 +6899,14 @@ - Fix version-incompatibility problem with libintl on + Fix version-incompatibility problem with libintl on Windows (Hiroshi Inoue) - Fix usage of xcopy in Windows build scripts to + Fix usage of xcopy in Windows build scripts to work correctly under Windows 7 (Andrew Dunstan) @@ -6917,14 +6917,14 @@ - Fix path separator used by pg_regress on Cygwin + Fix path separator used by pg_regress on Cygwin (Andrew Dunstan) - Update time zone data files to tzdata release 2011f + Update time zone data files to tzdata release 2011f for DST law changes in Chile, Cuba, Falkland Islands, Morocco, Samoa, and Turkey; also historical corrections for South Australia, Alaska, and Hawaii. @@ -6966,7 +6966,7 @@ - Before exiting walreceiver, ensure all the received WAL + Before exiting walreceiver, ensure all the received WAL is fsync'd to disk (Heikki Linnakangas) @@ -6978,27 +6978,27 @@ - Avoid excess fsync activity in walreceiver + Avoid excess fsync activity in walreceiver (Heikki Linnakangas) - Make ALTER TABLE revalidate uniqueness and exclusion + Make ALTER TABLE revalidate uniqueness and exclusion constraints when needed (Noah Misch) This was broken in 9.0 by a change that was intended to suppress - revalidation during VACUUM FULL and CLUSTER, - but unintentionally affected ALTER TABLE as well. + revalidation during VACUUM FULL and CLUSTER, + but unintentionally affected ALTER TABLE as well. - Fix EvalPlanQual for UPDATE of an inheritance tree in which + Fix EvalPlanQual for UPDATE of an inheritance tree in which the tables are not all alike (Tom Lane) @@ -7013,15 +7013,15 @@ - Avoid failures when EXPLAIN tries to display a simple-form - CASE expression (Tom Lane) + Avoid failures when EXPLAIN tries to display a simple-form + CASE expression (Tom Lane) - If the CASE's test expression was a constant, the planner - could simplify the CASE into a form that confused the + If the CASE's test expression was a constant, the planner + could simplify the CASE into a form that confused the expression-display code, resulting in unexpected CASE WHEN - clause errors. + clause errors. @@ -7046,8 +7046,8 @@ - The date type supports a wider range of dates than can be - represented by the timestamp types, but the planner assumed it + The date type supports a wider range of dates than can be + represented by the timestamp types, but the planner assumed it could always convert a date to timestamp with impunity. 
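
A query of the general shape the EXPLAIN fix above is concerned with, i.e. a simple-form CASE whose test expression is a constant (a sketch; the exact conditions that triggered the failure vary):

    CREATE TABLE t (x int);
    EXPLAIN VERBOSE SELECT CASE 1 WHEN x THEN 'match' END FROM t;
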
@@ -7060,29 +7060,29 @@ - Remove ecpg's fixed length limit for constants defining + Remove ecpg's fixed length limit for constants defining an array dimension (Michael Meskes) - Fix erroneous parsing of tsquery values containing + Fix erroneous parsing of tsquery values containing ... & !(subexpression) | ... (Tom Lane) Queries containing this combination of operators were not executed - correctly. The same error existed in contrib/intarray's - query_int type and contrib/ltree's - ltxtquery type. + correctly. The same error existed in contrib/intarray's + query_int type and contrib/ltree's + ltxtquery type. - Fix buffer overrun in contrib/intarray's input function - for the query_int type (Apple) + Fix buffer overrun in contrib/intarray's input function + for the query_int type (Apple) @@ -7094,16 +7094,16 @@ - Fix bug in contrib/seg's GiST picksplit algorithm + Fix bug in contrib/seg's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a seg column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a seg column. + If you have such an index, consider REINDEXing it after installing this update. (This is identical to the bug that was fixed in - contrib/cube in the previous update.) + contrib/cube in the previous update.) @@ -7143,23 +7143,23 @@ Force the default - wal_sync_method - to be fdatasync on Linux (Tom Lane, Marti Raudsepp) + wal_sync_method + to be fdatasync on Linux (Tom Lane, Marti Raudsepp) - The default on Linux has actually been fdatasync for many - years, but recent kernel changes caused PostgreSQL to - choose open_datasync instead. This choice did not result + The default on Linux has actually been fdatasync for many + years, but recent kernel changes caused PostgreSQL to + choose open_datasync instead. This choice did not result in any performance improvement, and caused outright failures on - certain filesystems, notably ext4 with the - data=journal mount option. + certain filesystems, notably ext4 with the + data=journal mount option. - Fix too many KnownAssignedXids error during Hot Standby + Fix too many KnownAssignedXids error during Hot Standby replay (Heikki Linnakangas) @@ -7188,7 +7188,7 @@ - This could result in bad buffer id: 0 failures or + This could result in bad buffer id: 0 failures or corruption of index contents during replication. @@ -7214,7 +7214,7 @@ - The effective vacuum_cost_limit for an autovacuum worker + The effective vacuum_cost_limit for an autovacuum worker could drop to nearly zero if it processed enough tables, causing it to run extremely slowly. @@ -7240,19 +7240,19 @@ - Add support for detecting register-stack overrun on IA64 + Add support for detecting register-stack overrun on IA64 (Tom Lane) - The IA64 architecture has two hardware stacks. Full + The IA64 architecture has two hardware stacks. Full prevention of stack-overrun failures requires checking both. - Add a check for stack overflow in copyObject() (Tom Lane) + Add a check for stack overflow in copyObject() (Tom Lane) @@ -7268,7 +7268,7 @@ - It is possible to have a concurrent page split in a + It is possible to have a concurrent page split in a temporary index, if for example there is an open cursor scanning the index when an insertion is done. 
GiST failed to detect this case and hence could deliver wrong results when execution of the cursor @@ -7295,16 +7295,16 @@ Certain cases where a large number of tuples needed to be read in - advance, but work_mem was large enough to allow them all + advance, but work_mem was large enough to allow them all to be held in memory, were unexpectedly slow. - percent_rank(), cume_dist() and - ntile() in particular were subject to this problem. + percent_rank(), cume_dist() and + ntile() in particular were subject to this problem. - Avoid memory leakage while ANALYZE'ing complex index + Avoid memory leakage while ANALYZE'ing complex index expressions (Tom Lane) @@ -7316,21 +7316,21 @@ - An index declared like create index i on t (foo(t.*)) + An index declared like create index i on t (foo(t.*)) would not automatically get dropped when its table was dropped. - Add missing support in DROP OWNED BY for removing foreign + Add missing support in DROP OWNED BY for removing foreign data wrapper/server privileges belonging to a user (Heikki Linnakangas) - Do not inline a SQL function with multiple OUT + Do not inline a SQL function with multiple OUT parameters (Tom Lane) @@ -7349,28 +7349,28 @@ - Behave correctly if ORDER BY, LIMIT, - FOR UPDATE, or WITH is attached to the - VALUES part of INSERT ... VALUES (Tom Lane) + Behave correctly if ORDER BY, LIMIT, + FOR UPDATE, or WITH is attached to the + VALUES part of INSERT ... VALUES (Tom Lane) - Make the OFF keyword unreserved (Heikki Linnakangas) + Make the OFF keyword unreserved (Heikki Linnakangas) - This prevents problems with using off as a variable name in - PL/pgSQL. That worked before 9.0, but was now broken - because PL/pgSQL now treats all core reserved words + This prevents problems with using off as a variable name in + PL/pgSQL. That worked before 9.0, but was now broken + because PL/pgSQL now treats all core reserved words as reserved. - Fix constant-folding of COALESCE() expressions (Tom Lane) + Fix constant-folding of COALESCE() expressions (Tom Lane) @@ -7381,7 +7381,7 @@ - Fix could not find pathkey item to sort planner failure + Fix could not find pathkey item to sort planner failure with comparison of whole-row Vars (Tom Lane) @@ -7389,7 +7389,7 @@ Fix postmaster crash when connection acceptance - (accept() or one of the calls made immediately after it) + (accept() or one of the calls made immediately after it) fails, and the postmaster was compiled with GSSAPI support (Alexander Chernikov) @@ -7408,7 +7408,7 @@ - Fix missed unlink of temporary files when log_temp_files + Fix missed unlink of temporary files when log_temp_files is active (Tom Lane) @@ -7420,11 +7420,11 @@ - Add print functionality for InhRelation nodes (Tom Lane) + Add print functionality for InhRelation nodes (Tom Lane) - This avoids a failure when debug_print_parse is enabled + This avoids a failure when debug_print_parse is enabled and certain types of query are executed. 
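
A sketch of the decorated INSERT ... VALUES form mentioned above (hypothetical table):

    CREATE TABLE nums (n int);
    -- ORDER BY and LIMIT now attach to the VALUES list correctly:
    INSERT INTO nums VALUES (3), (1), (2) ORDER BY 1 LIMIT 2;
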
@@ -7444,46 +7444,46 @@ Fix incorrect calculation of transaction status in - ecpg (Itagaki Takahiro) + ecpg (Itagaki Takahiro) - Fix errors in psql's Unicode-escape support (Tom Lane) + Fix errors in psql's Unicode-escape support (Tom Lane) - Speed up parallel pg_restore when the archive + Speed up parallel pg_restore when the archive contains many large objects (blobs) (Tom Lane) - Fix PL/pgSQL's handling of simple + Fix PL/pgSQL's handling of simple expressions to not fail in recursion or error-recovery cases (Tom Lane) - Fix PL/pgSQL's error reporting for no-such-column + Fix PL/pgSQL's error reporting for no-such-column cases (Tom Lane) As of 9.0, it would sometimes report missing FROM-clause entry - for table foo when record foo has no field bar would be + for table foo when record foo has no field bar would be more appropriate. - Fix PL/Python to honor typmod (i.e., length or + Fix PL/Python to honor typmod (i.e., length or precision restrictions) when assigning to tuple fields (Tom Lane) @@ -7494,7 +7494,7 @@ - Fix PL/Python's handling of set-returning functions + Fix PL/Python's handling of set-returning functions (Jan Urbanski) @@ -7506,22 +7506,22 @@ - Fix bug in contrib/cube's GiST picksplit algorithm + Fix bug in contrib/cube's GiST picksplit algorithm (Alexander Korotkov) This could result in considerable inefficiency, though not actually - incorrect answers, in a GiST index on a cube column. - If you have such an index, consider REINDEXing it after + incorrect answers, in a GiST index on a cube column. + If you have such an index, consider REINDEXing it after installing this update. - Don't emit identifier will be truncated notices in - contrib/dblink except when creating new connections + Don't emit identifier will be truncated notices in + contrib/dblink except when creating new connections (Itagaki Takahiro) @@ -7529,26 +7529,26 @@ Fix potential coredump on missing public key in - contrib/pgcrypto (Marti Raudsepp) + contrib/pgcrypto (Marti Raudsepp) - Fix buffer overrun in contrib/pg_upgrade (Hernan Gonzalez) + Fix buffer overrun in contrib/pg_upgrade (Hernan Gonzalez) - Fix memory leak in contrib/xml2's XPath query functions + Fix memory leak in contrib/xml2's XPath query functions (Tom Lane) - Update time zone data files to tzdata release 2010o + Update time zone data files to tzdata release 2010o for DST law changes in Fiji and Samoa; also historical corrections for Hong Kong. @@ -7597,7 +7597,7 @@ This change prevents security problems that can be caused by subverting Perl or Tcl code that will be executed later in the same session under another SQL user identity (for example, within a SECURITY - DEFINER function). Most scripting languages offer numerous ways that + DEFINER function). Most scripting languages offer numerous ways that that might be done, such as redefining standard functions or operators called by the target function. Without this change, any SQL user with Perl or Tcl language usage rights can do essentially anything with the @@ -7626,7 +7626,7 @@ - Improve pg_get_expr() security fix so that the function + Improve pg_get_expr() security fix so that the function can still be used on the output of a sub-select (Tom Lane) @@ -7651,7 +7651,7 @@ - Fix possible duplicate scans of UNION ALL member relations + Fix possible duplicate scans of UNION ALL member relations (Tom Lane) @@ -7676,14 +7676,14 @@ - Input such as 'J100000'::date worked before 8.4, + Input such as 'J100000'::date worked before 8.4, but was unintentionally broken by added error-checking. 
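
The Julian-day input form restored by the fix above:

    SELECT 'J100000'::date;   -- Julian-day notation, working again
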
- Make psql recognize DISCARD ALL as a command that should + Make psql recognize DISCARD ALL as a command that should not be encased in a transaction block in autocommit-off mode (Itagaki Takahiro) @@ -7714,12 +7714,12 @@ This release of - PostgreSQL adds features that have been requested + PostgreSQL adds features that have been requested for years, such as easy-to-use replication, a mass permission-changing facility, and anonymous code blocks. While past major releases have been conservative in their scope, this release shows a bold new desire to provide facilities that new and existing - users of PostgreSQL will embrace. This has all + users of PostgreSQL will embrace. This has all been done with few incompatibilities. Major enhancements include: @@ -7732,7 +7732,7 @@ Built-in replication based on log shipping. This advance consists of two features: Streaming Replication, allowing continuous archive - (WAL) files to be streamed over a network connection to a + (WAL) files to be streamed over a network connection to a standby server, and Hot Standby, allowing continuous archive standby servers to execute read-only queries. The net effect is to support a single master with multiple read-only slave servers. @@ -7742,10 +7742,10 @@ Easier database object permissions management. GRANT/REVOKE IN - SCHEMA supports mass permissions changes on existing objects, + linkend="SQL-GRANT">GRANT/REVOKE IN + SCHEMA supports mass permissions changes on existing objects, while ALTER DEFAULT - PRIVILEGES allows control of privileges for objects created in + PRIVILEGES allows control of privileges for objects created in the future. Large objects (BLOBs) now support permissions management as well. @@ -7754,8 +7754,8 @@ Broadly enhanced stored procedure support. - The DO statement supports - ad-hoc or anonymous code blocks. + The DO statement supports + ad-hoc or anonymous code blocks. Functions can now be called using named parameters. PL/pgSQL is now installed by default, and PL/Perl and Full support for 64-bit - Windows. + Windows. More advanced reporting queries, including additional windowing options - (PRECEDING and FOLLOWING) and the ability to + (PRECEDING and FOLLOWING) and the ability to control the order in which values are fed to aggregate functions. @@ -7808,7 +7808,7 @@ New and enhanced security features, including RADIUS authentication, LDAP authentication improvements, and a new contrib module - passwordcheck + passwordcheck for testing password strength. @@ -7816,10 +7816,10 @@ New high-performance implementation of the - LISTEN/NOTIFY feature. + LISTEN/NOTIFY feature. Pending events are now stored in a memory-based queue rather than - a table. Also, a payload string can be sent with each + a table. Also, a payload string can be sent with each event, rather than transmitting just an event name as before. @@ -7827,7 +7827,7 @@ New implementation of - VACUUM FULL. + VACUUM FULL. This command now rewrites the entire table and indexes, rather than moving individual rows to compact space. It is substantially faster in most cases, and no longer results in index bloat. @@ -7837,7 +7837,7 @@ New contrib module - pg_upgrade + pg_upgrade to support in-place upgrades from 8.3 or 8.4 to 9.0. @@ -7853,7 +7853,7 @@ - EXPLAIN enhancements. + EXPLAIN enhancements. The output is now available in JSON, XML, or YAML format, and includes buffer utilization and other data not previously available. 
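
The anonymous code blocks summarized above look like this minimal sketch (DO uses PL/pgSQL unless another language is named):

    DO $$
    BEGIN
        RAISE NOTICE 'ad-hoc block, no CREATE FUNCTION required';
    END
    $$;
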
@@ -7861,7 +7861,7 @@ - hstore improvements, + hstore improvements, including new functions and greater data capacity. @@ -7901,34 +7901,34 @@ - Remove server parameter add_missing_from, which was + Remove server parameter add_missing_from, which was defaulted to off for many years (Tom Lane) - Remove server parameter regex_flavor, which + Remove server parameter regex_flavor, which was defaulted to advanced + linkend="posix-syntax-details">advanced for many years (Tom Lane) - archive_mode + archive_mode now only affects archive_command; + linkend="guc-archive-command">archive_command; a new setting, wal_level, affects + linkend="guc-wal-level">wal_level, affects the contents of the write-ahead log (Heikki Linnakangas) - log_temp_files + log_temp_files now uses default file size units of kilobytes (Robert Haas) @@ -7967,13 +7967,13 @@ - bytea output now + bytea output now appears in hex format by default (Peter Eisentraut) The server parameter bytea_output can be + linkend="guc-bytea-output">bytea_output can be used to select the traditional output format if needed for compatibility. @@ -7995,18 +7995,18 @@ Improve standards compliance of SIMILAR TO - patterns and SQL-style substring() patterns (Tom Lane) + linkend="functions-similarto-regexp">SIMILAR TO + patterns and SQL-style substring() patterns (Tom Lane) - This includes treating ? and {...} as + This includes treating ? and {...} as pattern metacharacters, while they were simple literal characters before; that corresponds to new features added in SQL:2008. - Also, ^ and $ are now treated as simple + Also, ^ and $ are now treated as simple literal characters; formerly they were treated as metacharacters, as if the pattern were following POSIX rather than SQL rules. - Also, in SQL-standard substring(), use of parentheses + Also, in SQL-standard substring(), use of parentheses for nesting no longer interferes with capturing of a substring. Also, processing of bracket expressions (character classes) is now more standards-compliant. @@ -8016,14 +8016,14 @@ Reject negative length values in 3-parameter substring() + linkend="functions-string-sql">substring() for bit strings, per the SQL standard (Tom Lane) - Make date_trunc truncate rather than round when reducing + Make date_trunc truncate rather than round when reducing precision of fractional seconds (Tom Lane) @@ -8044,7 +8044,7 @@ - Tighten enforcement of column name consistency during RENAME + Tighten enforcement of column name consistency during RENAME when a child table inherits the same column from multiple unrelated parents (KaiGai Kohei) @@ -8100,8 +8100,8 @@ situations. Although it's recommended that functions encountering this type of error be modified to remove the conflict, the old behavior can be restored if necessary via the configuration parameter plpgsql.variable_conflict, - or via the per-function option #variable_conflict. + linkend="plpgsql-var-subst">plpgsql.variable_conflict, + or via the per-function option #variable_conflict. @@ -8126,8 +8126,8 @@ For example, if a column of the result type is declared as - NUMERIC(30,2), it is no longer acceptable to return a - NUMERIC of some other precision in that column. Previous + NUMERIC(30,2), it is no longer acceptable to return a + NUMERIC of some other precision in that column. Previous versions neglected to check the type modifier and would thus allow result rows that didn't actually conform to the declared restrictions. @@ -8141,33 +8141,33 @@ Formerly, a statement like - SELECT ... INTO rec.fld FROM ... + SELECT ... 
INTO rec.fld FROM ... was treated as a scalar assignment even if the record field - fld was of composite type. Now it is treated as a - record assignment, the same as when the INTO target is a + fld was of composite type. Now it is treated as a + record assignment, the same as when the INTO target is a regular variable of composite type. So the values to be assigned to the field's subfields should be written as separate columns of the - SELECT list, not as a ROW(...) construct as in + SELECT list, not as a ROW(...) construct as in previous versions. If you need to do this in a way that will work in both 9.0 and previous releases, you can write something like - rec.fld := ROW(...) FROM .... + rec.fld := ROW(...) FROM .... - Remove PL/pgSQL's RENAME declaration (Tom Lane) + Remove PL/pgSQL's RENAME declaration (Tom Lane) - Instead of RENAME, use ALIAS, + Instead of RENAME, use ALIAS, which can now create an alias for any variable, not only dollar sign - parameter names (such as $1) as before. + parameter names (such as $1) as before. @@ -8181,11 +8181,11 @@ - Deprecate use of => as an operator name (Robert Haas) + Deprecate use of => as an operator name (Robert Haas) - Future versions of PostgreSQL will probably reject + Future versions of PostgreSQL will probably reject this operator name entirely, in order to support the SQL-standard notation for named function parameters. For the moment, it is still allowed, but a warning is emitted when such an operator is @@ -8240,7 +8240,7 @@ This feature is called Hot Standby. There are new - postgresql.conf and recovery.conf + postgresql.conf and recovery.conf settings to control this feature, as well as extensive documentation. @@ -8248,18 +8248,18 @@ - Allow write-ahead log (WAL) data to be streamed to a + Allow write-ahead log (WAL) data to be streamed to a standby server (Fujii Masao, Heikki Linnakangas) This feature is called Streaming Replication. - Previously WAL data could be sent to standby servers only - in units of entire WAL files (normally 16 megabytes each). + Previously WAL data could be sent to standby servers only + in units of entire WAL files (normally 16 megabytes each). Streaming Replication eliminates this inefficiency and allows updates on the master to be propagated to standby servers with very little - delay. There are new postgresql.conf and - recovery.conf settings to control this feature, as well as + delay. There are new postgresql.conf and + recovery.conf settings to control this feature, as well as extensive documentation. @@ -8267,9 +8267,9 @@ Add pg_last_xlog_receive_location() - and pg_last_xlog_replay_location(), which - can be used to monitor standby server WAL + linkend="functions-recovery-info-table">pg_last_xlog_receive_location() + and pg_last_xlog_replay_location(), which + can be used to monitor standby server WAL activity (Simon Riggs, Fujii Masao, Heikki Linnakangas) @@ -8286,9 +8286,9 @@ Allow per-tablespace values to be set for sequential and random page - cost estimates (seq_page_cost/random_page_cost) + cost estimates (seq_page_cost/random_page_cost) via ALTER TABLESPACE - ... SET/RESET (Robert Haas) + ... SET/RESET (Robert Haas) @@ -8299,8 +8299,8 @@ - UPDATE, DELETE, and SELECT FOR - UPDATE/SHARE queries that involve joins will now behave much better + UPDATE, DELETE, and SELECT FOR + UPDATE/SHARE queries that involve joins will now behave much better when encountering freshly-updated rows. 
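
The per-tablespace cost settings noted above, sketched against a hypothetical tablespace fast_ssd:

    -- Tell the planner that random I/O is cheap on this tablespace:
    ALTER TABLESPACE fast_ssd
        SET (random_page_cost = 1.5, seq_page_cost = 1.0);
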
@@ -8308,7 +8308,7 @@ Improve performance of TRUNCATE when + linkend="SQL-TRUNCATE">TRUNCATE when the table was created or truncated earlier in the same transaction (Tom Lane) @@ -8345,12 +8345,12 @@ - Allow IS NOT NULL restrictions to use indexes (Tom Lane) + Allow IS NOT NULL restrictions to use indexes (Tom Lane) This is particularly useful for finding - MAX()/MIN() values in indexes that + MAX()/MIN() values in indexes that contain many null values. @@ -8358,7 +8358,7 @@ Improve the optimizer's choices about when to use materialize nodes, - and when to use sorting versus hashing for DISTINCT + and when to use sorting versus hashing for DISTINCT (Tom Lane) @@ -8366,7 +8366,7 @@ Improve the optimizer's equivalence detection for expressions involving - boolean <> operators (Tom Lane) + boolean <> operators (Tom Lane) @@ -8387,7 +8387,7 @@ While the Genetic Query Optimizer (GEQO) still selects random plans, it now always selects the same random plans for identical queries, thus giving more consistent performance. You can modify geqo_seed to experiment with + linkend="guc-geqo-seed">geqo_seed to experiment with alternative plans. @@ -8398,7 +8398,7 @@ - This avoids the rare error failed to make a valid plan, + This avoids the rare error failed to make a valid plan, and should also improve planning speed. @@ -8414,7 +8414,7 @@ - Improve ANALYZE + Improve ANALYZE to support inheritance-tree statistics (Tom Lane) @@ -8451,14 +8451,14 @@ Allow setting of number-of-distinct-values statistics using ALTER TABLE + linkend="SQL-ALTERTABLE">ALTER TABLE (Robert Haas) This allows users to override the estimated number or percentage of distinct values for a column. This statistic is normally computed by - ANALYZE, but the estimate can be poor, especially on tables + ANALYZE, but the estimate can be poor, especially on tables with very large numbers of rows. @@ -8475,7 +8475,7 @@ Add support for RADIUS (Remote + linkend="auth-radius">RADIUS (Remote Authentication Dial In User Service) authentication (Magnus Hagander) @@ -8483,28 +8483,28 @@ - Allow LDAP + Allow LDAP (Lightweight Directory Access Protocol) authentication - to operate in search/bind mode + to operate in search/bind mode (Robert Fleming, Magnus Hagander) This allows the user to be looked up first, then the system uses - the DN (Distinguished Name) returned for that user. + the DN (Distinguished Name) returned for that user. Add samehost - and samenet designations to - pg_hba.conf (Stef Walter) + linkend="auth-pg-hba-conf">samehost + and samenet designations to + pg_hba.conf (Stef Walter) - These match the server's IP address and subnet address + These match the server's IP address and subnet address respectively. 
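
The distinct-values override described above, in sketch form; the table and column are hypothetical, and a negative value means a fraction of the row count:

    -- Claim that about 5% of rows have distinct customer_id values:
    ALTER TABLE orders ALTER COLUMN customer_id SET (n_distinct = -0.05);
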
@@ -8530,7 +8530,7 @@ Add the ability for clients to set an application name, which is displayed in - pg_stat_activity (Dave Page) + pg_stat_activity (Dave Page) @@ -8541,8 +8541,8 @@ - Add a SQLSTATE option (%e) to log_line_prefix + Add a SQLSTATE option (%e) to log_line_prefix (Guillaume Smet) @@ -8555,7 +8555,7 @@ - Write to the Windows event log in UTF16 encoding + Write to the Windows event log in UTF16 encoding (Itagaki Takahiro) @@ -8577,7 +8577,7 @@ Add pg_stat_reset_shared('bgwriter') + linkend="monitoring-stats-funcs-table">pg_stat_reset_shared('bgwriter') to reset the cluster-wide shared statistics for the background writer (Greg Smith) @@ -8586,8 +8586,8 @@ Add pg_stat_reset_single_table_counters() - and pg_stat_reset_single_function_counters() + linkend="monitoring-stats-funcs-table">pg_stat_reset_single_table_counters() + and pg_stat_reset_single_function_counters() to allow resetting the statistics counters for individual tables and functions (Magnus Hagander) @@ -8612,10 +8612,10 @@ Previously only per-database and per-role settings were possible, not combinations. All role and database settings are now stored - in the new pg_db_role_setting system catalog. A new - psql command \drds shows these settings. - The legacy system views pg_roles, - pg_shadow, and pg_user + in the new pg_db_role_setting system catalog. A new + psql command \drds shows these settings. + The legacy system views pg_roles, + pg_shadow, and pg_user do not show combination settings, and therefore no longer completely represent the configuration for a user or database. @@ -8624,9 +8624,9 @@ Add server parameter bonjour, which + linkend="guc-bonjour">bonjour, which controls whether a Bonjour-enabled server advertises - itself via Bonjour (Tom Lane) + itself via Bonjour (Tom Lane) @@ -8639,7 +8639,7 @@ Add server parameter enable_material, which + linkend="guc-enable-material">enable_material, which controls the use of materialize nodes in the optimizer (Robert Haas) @@ -8654,7 +8654,7 @@ Change server parameter log_temp_files to + linkend="guc-log-temp-files">log_temp_files to use default file size units of kilobytes (Robert Haas) @@ -8666,14 +8666,14 @@ - Log changes of parameter values when postgresql.conf is + Log changes of parameter values when postgresql.conf is reloaded (Peter Eisentraut) This lets administrators and security staff audit changes of database settings, and is also very convenient for checking the effects of - postgresql.conf edits. + postgresql.conf edits. @@ -8685,10 +8685,10 @@ Non-superusers can no longer issue ALTER - ROLE/DATABASE SET for parameters that are not currently + ROLE/DATABASE SET for parameters that are not currently known to the server. This allows the server to correctly check that superuser-only parameters are only set by superusers. Previously, - the SET would be allowed and then ignored at session start, + the SET would be allowed and then ignored at session start, making superuser-only custom parameters much less useful than they should be. @@ -8708,24 +8708,24 @@ Perform SELECT - FOR UPDATE/SHARE processing after - applying LIMIT, so the number of rows returned + FOR UPDATE/SHARE processing after + applying LIMIT, so the number of rows returned is always predictable (Tom Lane) Previously, changes made by concurrent transactions could cause a - SELECT FOR UPDATE to unexpectedly return fewer rows than - specified by its LIMIT. 
FOR UPDATE in combination - with ORDER BY can still produce surprising results, but that - can be corrected by placing FOR UPDATE in a subquery. + SELECT FOR UPDATE to unexpectedly return fewer rows than + specified by its LIMIT. FOR UPDATE in combination + with ORDER BY can still produce surprising results, but that + can be corrected by placing FOR UPDATE in a subquery. Allow mixing of traditional and SQL-standard LIMIT/OFFSET + linkend="SQL-LIMIT">LIMIT/OFFSET syntax (Tom Lane) @@ -8738,15 +8738,15 @@ - Frames can now start with CURRENT ROW, and the ROWS - n PRECEDING/FOLLOWING options are now + Frames can now start with CURRENT ROW, and the ROWS + n PRECEDING/FOLLOWING options are now supported. - Make SELECT INTO and CREATE TABLE AS return + Make SELECT INTO and CREATE TABLE AS return row counts to the client in their command tags (Boszormenyi Zoltan) @@ -8769,7 +8769,7 @@ Support Unicode surrogate pairs (dual 16-bit representation) in U& + linkend="sql-syntax-strings-uescape">U& strings and identifiers (Peter Eisentraut) @@ -8777,7 +8777,7 @@ Support Unicode escapes in E'...' + linkend="sql-syntax-strings-escape">E'...' strings (Marko Kreen) @@ -8796,7 +8796,7 @@ Speed up CREATE - DATABASE by deferring flushes to disk (Andres + DATABASE by deferring flushes to disk (Andres Freund, Greg Stark) @@ -8805,7 +8805,7 @@ Allow comments on columns of tables, views, and composite types only, not other - relation types such as indexes and TOAST tables (Tom Lane) + relation types such as indexes and TOAST tables (Tom Lane) @@ -8819,12 +8819,12 @@ - Let values of columns having storage type MAIN remain on + Let values of columns having storage type MAIN remain on the main heap page unless the row cannot fit on a page (Kevin Grittner) - Previously MAIN values were forced out to TOAST + Previously MAIN values were forced out to TOAST tables until the row size was less than one-quarter of the page size. @@ -8832,26 +8832,26 @@ - <command>ALTER TABLE</> + <command>ALTER TABLE</command> - Implement IF EXISTS for ALTER TABLE DROP COLUMN - and ALTER TABLE DROP CONSTRAINT (Andres Freund) + Implement IF EXISTS for ALTER TABLE DROP COLUMN + and ALTER TABLE DROP CONSTRAINT (Andres Freund) - Allow ALTER TABLE commands that rewrite tables to skip - WAL logging (Itagaki Takahiro) + Allow ALTER TABLE commands that rewrite tables to skip + WAL logging (Itagaki Takahiro) Such operations either produce a new copy of the table or are rolled - back, so WAL archiving can be skipped, unless running in + back, so WAL archiving can be skipped, unless running in continuous archiving mode. This reduces I/O overhead and improves performance. @@ -8859,8 +8859,8 @@ - Fix failure of ALTER TABLE table ADD COLUMN - col serial when done by non-owner of table + Fix failure of ALTER TABLE table ADD COLUMN + col serial when done by non-owner of table (Tom Lane) @@ -8870,14 +8870,14 @@ - <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</></link> + <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</command></link> - Add support for copying COMMENTS and STORAGE - settings in CREATE TABLE ... LIKE commands + Add support for copying COMMENTS and STORAGE + settings in CREATE TABLE ... LIKE commands (Itagaki Takahiro) @@ -8885,14 +8885,14 @@ Add a shortcut for copying all properties in CREATE - TABLE ... LIKE commands (Itagaki Takahiro) + TABLE ... LIKE commands (Itagaki Takahiro) Add the SQL-standard - CREATE TABLE ... OF type command + CREATE TABLE ... 
OF type command (Peter Eisentraut) @@ -8920,10 +8920,10 @@ This allows mass updates, such as - UPDATE tab SET col = col + 1, + UPDATE tab SET col = col + 1, to work reliably on columns that have unique indexes or are marked as primary keys. - If the constraint is specified as DEFERRABLE it will be + If the constraint is specified as DEFERRABLE it will be checked at the end of the statement, rather than after each row is updated. The constraint check can also be deferred until the end of the current transaction, allowing such updates to be spread over multiple @@ -8942,7 +8942,7 @@ Exclusion constraints generalize uniqueness constraints by allowing arbitrary comparison operators, not just equality. They are created with the CREATE - TABLE CONSTRAINT ... EXCLUDE clause. + TABLE CONSTRAINT ... EXCLUDE clause. The most common use of exclusion constraints is to specify that column entries must not overlap, rather than simply not be equal. This is useful for time periods and other ranges, as well as arrays. @@ -8959,7 +8959,7 @@ For example, a uniqueness constraint violation might now report - Key (x)=(2) already exists. + Key (x)=(2) already exists. @@ -8976,8 +8976,8 @@ Add the ability to make mass permission changes across a whole schema using the new GRANT/REVOKE - IN SCHEMA clause (Petr Jelinek) + linkend="SQL-GRANT">GRANT/REVOKE + IN SCHEMA clause (Petr Jelinek) @@ -8990,7 +8990,7 @@ Add ALTER - DEFAULT PRIVILEGES command to control privileges + DEFAULT PRIVILEGES command to control privileges of objects created later (Petr Jelinek) @@ -9005,7 +9005,7 @@ Add the ability to control large object (BLOB) permissions with - GRANT/REVOKE (KaiGai Kohei) + GRANT/REVOKE (KaiGai Kohei) @@ -9028,8 +9028,8 @@ - Make LISTEN/NOTIFY store pending events + Make LISTEN/NOTIFY store pending events in a memory queue, rather than in a system table (Joachim Wieland) @@ -9042,21 +9042,21 @@ - Allow NOTIFY - to pass an optional payload string to listeners + Allow NOTIFY + to pass an optional payload string to listeners (Joachim Wieland) This greatly improves the usefulness of - LISTEN/NOTIFY as a + LISTEN/NOTIFY as a general-purpose event queue system. - Allow CLUSTER + Allow CLUSTER on all per-database system catalogs (Tom Lane) @@ -9068,30 +9068,30 @@ - <link linkend="SQL-COPY"><command>COPY</></link> + <link linkend="SQL-COPY"><command>COPY</command></link> - Accept COPY ... CSV FORCE QUOTE * + Accept COPY ... CSV FORCE QUOTE * (Itagaki Takahiro) - Now * can be used as shorthand for all columns - in the FORCE QUOTE clause. + Now * can be used as shorthand for all columns + in the FORCE QUOTE clause. - Add new COPY syntax that allows options to be + Add new COPY syntax that allows options to be specified inside parentheses (Robert Haas, Emmanuel Cecchet) - This allows greater flexibility for future COPY options. + This allows greater flexibility for future COPY options. The old syntax is still supported, but only for pre-existing options. @@ -9101,27 +9101,27 @@ - <link linkend="SQL-EXPLAIN"><command>EXPLAIN</></link> + <link linkend="SQL-EXPLAIN"><command>EXPLAIN</command></link> - Allow EXPLAIN to output in XML, - JSON, or YAML format (Robert Haas, Greg + Allow EXPLAIN to output in XML, + JSON, or YAML format (Robert Haas, Greg Sabino Mullane) The new output formats are easily machine-readable, supporting the - development of new tools for analysis of EXPLAIN output. + development of new tools for analysis of EXPLAIN output. 
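
Two sketches of the constraint features covered above; all object names are hypothetical:

    -- Deferrable uniqueness: the mass update succeeds because the
    -- check runs at end of statement rather than after each row.
    CREATE TABLE seats (num int UNIQUE DEFERRABLE);
    INSERT INTO seats VALUES (1), (2), (3);
    UPDATE seats SET num = num + 1;

    -- Exclusion constraint: reject rows whose circles overlap (&&).
    CREATE TABLE circles (c circle, EXCLUDE USING gist (c WITH &&));
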
- Add new BUFFERS option to report query - buffer usage during EXPLAIN ANALYZE (Itagaki Takahiro) + Add new BUFFERS option to report query + buffer usage during EXPLAIN ANALYZE (Itagaki Takahiro) @@ -9134,19 +9134,19 @@ - Add hash usage information to EXPLAIN output (Robert + Add hash usage information to EXPLAIN output (Robert Haas) - Add new EXPLAIN syntax that allows options to be + Add new EXPLAIN syntax that allows options to be specified inside parentheses (Robert Haas) - This allows greater flexibility for future EXPLAIN options. + This allows greater flexibility for future EXPLAIN options. The old syntax is still supported, but only for pre-existing options. @@ -9156,13 +9156,13 @@ - <link linkend="SQL-VACUUM"><command>VACUUM</></link> + <link linkend="SQL-VACUUM"><command>VACUUM</command></link> - Change VACUUM FULL to rewrite the entire table and + Change VACUUM FULL to rewrite the entire table and rebuild its indexes, rather than moving individual rows around to compact space (Itagaki Takahiro, Tom Lane) @@ -9170,7 +9170,7 @@ The previous method was usually slower and caused index bloat. Note that the new method will use more disk space transiently - during VACUUM FULL; potentially as much as twice + during VACUUM FULL; potentially as much as twice the space normally occupied by the table and its indexes. @@ -9178,12 +9178,12 @@ - Add new VACUUM syntax that allows options to be + Add new VACUUM syntax that allows options to be specified inside parentheses (Itagaki Takahiro) - This allows greater flexibility for future VACUUM options. + This allows greater flexibility for future VACUUM options. The old syntax is still supported, but only for pre-existing options. @@ -9200,7 +9200,7 @@ Allow an index to be named automatically by omitting the index name in - CREATE INDEX + CREATE INDEX (Tom Lane) @@ -9228,22 +9228,22 @@ - Add point_ops operator class for GiST + Add point_ops operator class for GiST (Teodor Sigaev) - This feature permits GiST indexing of point + This feature permits GiST indexing of point columns. The index can be used for several types of queries - such as point <@ polygon + such as point <@ polygon (point is in polygon). This should make many - PostGIS queries faster. + PostGIS queries faster. - Use red-black binary trees for GIN index creation + Use red-black binary trees for GIN index creation (Teodor Sigaev) @@ -9267,16 +9267,16 @@ - Allow bytea values + Allow bytea values to be written in hex notation (Peter Eisentraut) The server parameter bytea_output controls - whether hex or traditional format is used for bytea - output. Libpq's PQescapeByteaConn() function automatically - uses the hex format when connected to PostgreSQL 9.0 + linkend="guc-bytea-output">bytea_output controls + whether hex or traditional format is used for bytea + output. Libpq's PQescapeByteaConn() function automatically + uses the hex format when connected to PostgreSQL 9.0 or newer servers. However, pre-9.0 libpq versions will not correctly process hex format from newer servers. @@ -9293,20 +9293,20 @@ Allow server parameter extra_float_digits - to be increased to 3 (Tom Lane) + to be increased to 3 (Tom Lane) - The previous maximum extra_float_digits setting was - 2. There are cases where 3 digits are needed to dump and - restore float4 values exactly. pg_dump will + The previous maximum extra_float_digits setting was + 2. There are cases where 3 digits are needed to dump and + restore float4 values exactly. 
pg_dump will now use the setting of 3 when dumping from a server that allows it. - Tighten input checking for int2vector values (Caleb + Tighten input checking for int2vector values (Caleb Welton) @@ -9320,14 +9320,14 @@ - Add prefix support in synonym dictionaries + Add prefix support in synonym dictionaries (Teodor Sigaev) - Add filtering dictionaries (Teodor Sigaev) + Add filtering dictionaries (Teodor Sigaev) @@ -9344,7 +9344,7 @@ - Use more standards-compliant rules for parsing URL tokens + Use more standards-compliant rules for parsing URL tokens (Tom Lane) @@ -9367,9 +9367,9 @@ - For example, if a function is defined to take parameters a - and b, it can be called with func(a := 7, b - := 12) or func(b := 12, a := 7). + For example, if a function is defined to take parameters a + and b, it can be called with func(a := 7, b + := 12) or func(b := 12, a := 7). @@ -9377,24 +9377,24 @@ Support locale-specific regular expression - processing with UTF-8 server encoding (Tom Lane) + processing with UTF-8 server encoding (Tom Lane) Locale-specific regular expression functionality includes case-insensitive matching and locale-specific character classes. - Previously, these features worked correctly for non-ASCII + Previously, these features worked correctly for non-ASCII characters only if the database used a single-byte server encoding (such as LATIN1). They will still misbehave in multi-byte encodings other - than UTF-8. + than UTF-8. Add support for scientific notation in to_char() - (EEEE + linkend="functions-formatting">to_char() + (EEEE specification) (Pavel Stehule, Brendan Jurd) @@ -9402,21 +9402,21 @@ - Make to_char() honor FM - (fill mode) in Y, YY, and - YYY specifications (Bruce Momjian, Tom Lane) + Make to_char() honor FM + (fill mode) in Y, YY, and + YYY specifications (Bruce Momjian, Tom Lane) - It was already honored by YYYY. + It was already honored by YYYY. - Fix to_char() to output localized numeric and monetary - strings in the correct encoding on Windows + Fix to_char() to output localized numeric and monetary + strings in the correct encoding on Windows (Hiroshi Inoue, Itagaki Takahiro, Bruce Momjian) @@ -9429,12 +9429,12 @@ - The polygon && (overlaps) operator formerly just + The polygon && (overlaps) operator formerly just checked to see if the two polygons' bounding boxes overlapped. It now - does a more correct check. The polygon @> and - <@ (contains/contained by) operators formerly checked + does a more correct check. The polygon @> and + <@ (contains/contained by) operators formerly checked to see if one polygon's vertexes were all contained in the other; - this can wrongly report true for some non-convex polygons. + this can wrongly report true for some non-convex polygons. Now they check that all line segments of one polygon are contained in the other. @@ -9450,12 +9450,12 @@ Allow aggregate functions to use ORDER BY (Andrew Gierth) + linkend="syntax-aggregates">ORDER BY (Andrew Gierth) For example, this is now supported: array_agg(a ORDER BY - b). This is useful with aggregates for which the order of input + b). This is useful with aggregates for which the order of input values is significant, and eliminates the need to use a nonstandard subquery to determine the ordering. 
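A minimal sketch, assuming a hypothetical staff table with dept, name, and hired columns:

<programlisting>
-- Collect each department's names in hiring order, with no need
-- for an ordered subquery.
SELECT dept, array_agg(name ORDER BY hired) AS names_by_seniority
FROM staff
GROUP BY dept;
</programlisting>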
@@ -9463,7 +9463,7 @@ - Multi-argument aggregate functions can now use DISTINCT + Multi-argument aggregate functions can now use DISTINCT (Andrew Gierth) @@ -9471,7 +9471,7 @@ Add the string_agg() + linkend="functions-aggregate-table">string_agg() aggregate function to combine values into a single string (Pavel Stehule) @@ -9479,15 +9479,15 @@ - Aggregate functions that are called with DISTINCT are + Aggregate functions that are called with DISTINCT are now passed NULL values if the aggregate transition function is - not marked as STRICT (Andrew Gierth) + not marked as STRICT (Andrew Gierth) - For example, agg(DISTINCT x) might pass a NULL x - value to agg(). This is more consistent with the behavior - in non-DISTINCT cases. + For example, agg(DISTINCT x) might pass a NULL x + value to agg(). This is more consistent with the behavior + in non-DISTINCT cases. @@ -9503,9 +9503,9 @@ Add get_bit() - and set_bit() functions for bit - strings, mirroring those for bytea (Leonardo + linkend="functions-binarystring-other">get_bit() + and set_bit() functions for bit + strings, mirroring those for bytea (Leonardo F) @@ -9513,8 +9513,8 @@ Implement OVERLAY() - (replace) for bit strings and bytea + linkend="functions-string-sql">OVERLAY() + (replace) for bit strings and bytea (Leonardo F) @@ -9531,9 +9531,9 @@ Add pg_table_size() - and pg_indexes_size() to provide a more - user-friendly interface to the pg_relation_size() + linkend="functions-admin-dbsize">pg_table_size() + and pg_indexes_size() to provide a more + user-friendly interface to the pg_relation_size() function (Bernd Helmle) @@ -9541,7 +9541,7 @@ Add has_sequence_privilege() + linkend="functions-info-access-table">has_sequence_privilege() for sequence permission checking (Abhijit Menon-Sen) @@ -9556,15 +9556,15 @@ - Make the information_schema views correctly display maximum - octet lengths for char and varchar columns (Peter + Make the information_schema views correctly display maximum + octet lengths for char and varchar columns (Peter Eisentraut) - Speed up information_schema privilege views + Speed up information_schema privilege views (Joachim Wieland) @@ -9581,7 +9581,7 @@ Support execution of anonymous code blocks using the DO statement + linkend="SQL-DO">DO statement (Petr Jelinek, Joshua Tolley, Hannu Valtonen) @@ -9601,22 +9601,22 @@ Such triggers are fired only when the specified column(s) are affected - by the query, e.g. appear in an UPDATE's SET + by the query, e.g. appear in an UPDATE's SET list. - Add the WHEN clause to CREATE TRIGGER + Add the WHEN clause to CREATE TRIGGER to allow control over whether a trigger is fired (Itagaki Takahiro) While the same type of check can always be performed inside the - trigger, doing it in an external WHEN clause can have + trigger, doing it in an external WHEN clause can have performance benefits. @@ -9634,8 +9634,8 @@ - Add the OR REPLACE clause to CREATE LANGUAGE + Add the OR REPLACE clause to CREATE LANGUAGE (Tom Lane) @@ -9677,8 +9677,8 @@ The default behavior is now to throw an error when there is a conflict, so as to avoid surprising behaviors. This can be modified, via the configuration parameter plpgsql.variable_conflict - or the per-function option #variable_conflict, to allow + linkend="plpgsql-var-subst">plpgsql.variable_conflict + or the per-function option #variable_conflict, to allow either the variable or the query-supplied column to be used. In any case PL/pgSQL will no longer attempt to substitute variables in places where they would not be syntactically valid. 
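A minimal sketch of the per-function option, along the lines of the example in the PL/pgSQL documentation (the users table is assumed to exist):

<programlisting>
CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$
    #variable_conflict use_variable
    DECLARE
        curtime timestamp := now();
    BEGIN
        -- With use_variable, the bare names id and comment refer to
        -- the function's parameters, so the conflicting id column
        -- must be qualified as users.id.
        UPDATE users SET last_modified = curtime, comment = comment
          WHERE users.id = id;
    END;
$$ LANGUAGE plpgsql;
</programlisting>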
@@ -9731,7 +9731,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Formerly, input parameters were treated as being declared - CONST, so the function's code could not change their + CONST, so the function's code could not change their values. This restriction has been removed to simplify porting of functions from other DBMSes that do not impose the equivalent restriction. An input parameter now acts like a local @@ -9747,26 +9747,26 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add count and ALL options to MOVE - FORWARD/BACKWARD in PL/pgSQL (Pavel Stehule) + Add count and ALL options to MOVE + FORWARD/BACKWARD in PL/pgSQL (Pavel Stehule) - Allow PL/pgSQL's WHERE CURRENT OF to use a cursor + Allow PL/pgSQL's WHERE CURRENT OF to use a cursor variable (Tom Lane) - Allow PL/pgSQL's OPEN cursor FOR EXECUTE to + Allow PL/pgSQL's OPEN cursor FOR EXECUTE to use parameters (Pavel Stehule, Itagaki Takahiro) - This is accomplished with a new USING clause. + This is accomplished with a new USING clause. @@ -9782,28 +9782,28 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add new PL/Perl functions: quote_literal(), - quote_nullable(), quote_ident(), - encode_bytea(), decode_bytea(), - looks_like_number(), - encode_array_literal(), - encode_array_constructor() (Tim Bunce) + linkend="plperl-utility-functions">quote_literal(), + quote_nullable(), quote_ident(), + encode_bytea(), decode_bytea(), + looks_like_number(), + encode_array_literal(), + encode_array_constructor() (Tim Bunce) Add server parameter plperl.on_init to + linkend="guc-plperl-on-init">plperl.on_init to specify a PL/Perl initialization function (Tim Bunce) plperl.on_plperl_init + linkend="guc-plperl-on-plperl-init">plperl.on_plperl_init and plperl.on_plperlu_init + linkend="guc-plperl-on-plperl-init">plperl.on_plperlu_init are also available for initialization that is specific to the trusted or untrusted language respectively. @@ -9811,29 +9811,29 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Support END blocks in PL/Perl (Tim Bunce) + Support END blocks in PL/Perl (Tim Bunce) - END blocks do not currently allow database access. + END blocks do not currently allow database access. - Allow use strict in PL/Perl (Tim Bunce) + Allow use strict in PL/Perl (Tim Bunce) - Perl strict checks can also be globally enabled with the + Perl strict checks can also be globally enabled with the new server parameter plperl.use_strict. + linkend="guc-plperl-use-strict">plperl.use_strict. - Allow require in PL/Perl (Tim Bunce) + Allow require in PL/Perl (Tim Bunce) @@ -9845,7 +9845,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Allow use feature in PL/Perl if Perl version 5.10 or + Allow use feature in PL/Perl if Perl version 5.10 or later is used (Tim Bunce) @@ -9879,13 +9879,13 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Improve bytea support in PL/Python (Caleb Welton) + Improve bytea support in PL/Python (Caleb Welton) - Bytea values passed into PL/Python are now represented as - binary, rather than the PostgreSQL bytea text format. - Bytea values containing null bytes are now also output + Bytea values passed into PL/Python are now represented as + binary, rather than the PostgreSQL bytea text format. + Bytea values containing null bytes are now also output properly from PL/Python. Passing of boolean, integer, and float values was also improved. @@ -9906,14 +9906,14 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then - Add Python 3 support to PL/Python (Peter Eisentraut) + Add Python 3 support to PL/Python (Peter Eisentraut) The new server-side language is called plpython3u. This + linkend="plpython-python23">plpython3u. This cannot be used in the same session with the - Python 2 server-side language. + Python 2 server-side language. @@ -9936,8 +9936,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add an @@ -9945,21 +9945,21 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> - Add support for quoting/escaping the values of psql + Add support for quoting/escaping the values of psql variables as SQL strings or identifiers (Pavel Stehule, Robert Haas) - For example, :'var' will produce the value of - var quoted and properly escaped as a literal string, while - :"var" will produce its value quoted and escaped as an + For example, :'var' will produce the value of + var quoted and properly escaped as a literal string, while + :"var" will produce its value quoted and escaped as an identifier. @@ -9967,11 +9967,11 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Ignore a leading UTF-8-encoded Unicode byte-order marker in - script files read by psql (Itagaki Takahiro) + script files read by psql (Itagaki Takahiro) - This is enabled when the client encoding is UTF-8. + This is enabled when the client encoding is UTF-8. It improves compatibility with certain editors, mostly on Windows, that insist on inserting such markers. @@ -9979,57 +9979,57 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Fix psql --file - to properly honor (Bruce Momjian) - Avoid overwriting of psql's command-line history when - two psql sessions are run concurrently (Tom Lane) + Avoid overwriting of psql's command-line history when + two psql sessions are run concurrently (Tom Lane) - Improve psql's tab completion support (Itagaki + Improve psql's tab completion support (Itagaki Takahiro) - Show \timing output when it is enabled, regardless of - quiet mode (Peter Eisentraut) + Show \timing output when it is enabled, regardless of + quiet mode (Peter Eisentraut) - <application>psql</> Display + <application>psql</application> Display - Improve display of wrapped columns in psql (Roger + Improve display of wrapped columns in psql (Roger Leigh) This behavior is now the default. The previous formatting is available by using \pset linestyle - old-ascii. + old-ascii. - Allow psql to use fancy Unicode line-drawing - characters via \pset linestyle unicode (Roger Leigh) + Allow psql to use fancy Unicode line-drawing + characters via \pset linestyle unicode (Roger Leigh) @@ -10038,27 +10038,27 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <application>psql</> <link - linkend="APP-PSQL-meta-commands"><command>\d</></link> + <title><application>psql</application> <link + linkend="APP-PSQL-meta-commands"><command>\d</command></link> Commands - Make \d show child tables that inherit from the specified + Make \d show child tables that inherit from the specified parent (Damien Clochard) - \d shows only the number of child tables, while - \d+ shows the names of all child tables. + \d shows only the number of child tables, while + \d+ shows the names of all child tables. - Show definitions of index columns in \d index_name + Show definitions of index columns in \d index_name (Khee Chin) @@ -10070,7 +10070,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then Show a view's defining query only in - \d+, not in \d (Peter Eisentraut) + \d+, not in \d (Peter Eisentraut) @@ -10084,33 +10084,33 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Make pg_dump/pg_restore - also remove large objects (Itagaki Takahiro) - Fix pg_dump to properly dump large objects when - standard_conforming_strings is enabled (Tom Lane) + Fix pg_dump to properly dump large objects when + standard_conforming_strings is enabled (Tom Lane) The previous coding could fail when dumping to an archive file - and then generating script output from pg_restore. + and then generating script output from pg_restore. - pg_restore now emits large-object data in hex format + pg_restore now emits large-object data in hex format when generating script output (Tom Lane) @@ -10123,16 +10123,16 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Allow pg_dump to dump comments attached to columns + Allow pg_dump to dump comments attached to columns of composite types (Taro Minowa (Higepon)) - Make pg_dump @@ -10143,7 +10143,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - pg_restore now complains if any command-line arguments + pg_restore now complains if any command-line arguments remain after the switches and optional file name (Tom Lane) @@ -10158,28 +10158,28 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then <link - linkend="app-pg-ctl"><application>pg_ctl</></link> + linkend="app-pg-ctl">pg_ctl - Allow pg_ctl to be used safely to start the - postmaster during a system reboot (Tom Lane) + Allow pg_ctl to be used safely to start the + postmaster during a system reboot (Tom Lane) - Previously, pg_ctl's parent process could have been - mistakenly identified as a running postmaster based on - a stale postmaster lock file, resulting in a transient + Previously, pg_ctl's parent process could have been + mistakenly identified as a running postmaster based on + a stale postmaster lock file, resulting in a transient failure to start the database. - Give pg_ctl the ability to initialize the database - (by invoking initdb) (Zdenek Kotala) + Give pg_ctl the ability to initialize the database + (by invoking initdb) (Zdenek Kotala) @@ -10190,25 +10190,25 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <application>Development Tools</> + <application>Development Tools</application> - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> - Add new libpq functions + Add new libpq functions PQconnectdbParams() - and PQconnectStartParams() (Guillaume + linkend="libpq-connect">PQconnectdbParams() + and PQconnectStartParams() (Guillaume Lelarge) - These functions are similar to PQconnectdb() and - PQconnectStart() except that they accept a null-terminated + These functions are similar to PQconnectdb() and + PQconnectStart() except that they accept a null-terminated array of connection options, rather than requiring all options to be provided in a single string. @@ -10216,22 +10216,22 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add libpq functions PQescapeLiteral() - and PQescapeIdentifier() (Robert Haas) + Add libpq functions PQescapeLiteral() + and PQescapeIdentifier() (Robert Haas) These functions return appropriately quoted and escaped SQL string literals and identifiers. The caller is not required to pre-allocate - the string result, as is required by PQescapeStringConn(). 
+ the string result, as is required by PQescapeStringConn(). Add support for a per-user service file (.pg_service.conf), + linkend="libpq-pgservice">.pg_service.conf), which is checked before the site-wide service file (Peter Eisentraut) @@ -10239,7 +10239,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Properly report an error if the specified libpq service + Properly report an error if the specified libpq service cannot be found (Peter Eisentraut) @@ -10258,15 +10258,15 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Avoid extra system calls to block and unblock SIGPIPE - in libpq, on platforms that offer alternative methods + Avoid extra system calls to block and unblock SIGPIPE + in libpq, on platforms that offer alternative methods (Jeremy Kerr) - When a .pgpass-supplied + When a .pgpass-supplied password fails, mention where the password came from in the error message (Bruce Momjian) @@ -10288,22 +10288,22 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <link linkend="ecpg"><application>ecpg</></link> + <link linkend="ecpg"><application>ecpg</application></link> - Add SQLDA - (SQL Descriptor Area) support to ecpg + Add SQLDA + (SQL Descriptor Area) support to ecpg (Boszormenyi Zoltan) - Add the DESCRIBE - [ OUTPUT ] statement to ecpg + Add the DESCRIBE + [ OUTPUT ] statement to ecpg (Boszormenyi Zoltan) @@ -10317,28 +10317,28 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add the string data type in ecpg + Add the string data type in ecpg Informix-compatibility mode (Boszormenyi Zoltan) - Allow ecpg to use new and old + Allow ecpg to use new and old variable names without restriction (Michael Meskes) - Allow ecpg to use variable names in - free() (Michael Meskes) + Allow ecpg to use variable names in + free() (Michael Meskes) - Make ecpg_dynamic_type() return zero for non-SQL3 data + Make ecpg_dynamic_type() return zero for non-SQL3 data types (Michael Meskes) @@ -10350,41 +10350,41 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Support long long types on platforms that already have 64-bit - long (Michael Meskes) + Support long long types on platforms that already have 64-bit + long (Michael Meskes) - <application>ecpg</> Cursors + <application>ecpg</application> Cursors - Add out-of-scope cursor support in ecpg's native mode + Add out-of-scope cursor support in ecpg's native mode (Boszormenyi Zoltan) - This allows DECLARE to use variables that are not in - scope when OPEN is called. This facility already existed - in ecpg's Informix-compatibility mode. + This allows DECLARE to use variables that are not in + scope when OPEN is called. This facility already existed + in ecpg's Informix-compatibility mode. - Allow dynamic cursor names in ecpg (Boszormenyi Zoltan) + Allow dynamic cursor names in ecpg (Boszormenyi Zoltan) - Allow ecpg to use noise words FROM and - IN in FETCH and MOVE (Boszormenyi + Allow ecpg to use noise words FROM and + IN in FETCH and MOVE (Boszormenyi Zoltan) @@ -10409,8 +10409,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then The thread-safety option can be disabled with configure - . @@ -10421,12 +10421,12 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Now that /proc/self/oom_adj allows disabling - of the Linux out-of-memory (OOM) + Now that /proc/self/oom_adj allows disabling + of the Linux out-of-memory (OOM) killer, it's recommendable to disable OOM kills for the postmaster. It may then be desirable to re-enable OOM kills for the postmaster's child processes. 
The new compile-time option LINUX_OOM_ADJ + linkend="linux-memory-overcommit">LINUX_OOM_ADJ allows the killer to be reactivated for child processes. @@ -10440,31 +10440,31 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - New Makefile targets world, - install-world, and installcheck-world + New Makefile targets world, + install-world, and installcheck-world (Andrew Dunstan) - These are similar to the existing all, install, - and installcheck targets, but they also build the - HTML documentation, build and test contrib, - and test server-side languages and ecpg. + These are similar to the existing all, install, + and installcheck targets, but they also build the + HTML documentation, build and test contrib, + and test server-side languages and ecpg. Add data and documentation installation location control to - PGXS Makefiles (Mark Cave-Ayland) + PGXS Makefiles (Mark Cave-Ayland) - Add Makefile rules to build the PostgreSQL documentation - as a single HTML file or as a single plain-text file + Add Makefile rules to build the PostgreSQL documentation + as a single HTML file or as a single plain-text file (Peter Eisentraut, Bruce Momjian) @@ -10482,12 +10482,12 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Support compiling on 64-bit - Windows and running in 64-bit + Windows and running in 64-bit mode (Tsutomu Yamada, Magnus Hagander) - This allows for large shared memory sizes on Windows. + This allows for large shared memory sizes on Windows. @@ -10495,7 +10495,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Support server builds using Visual Studio - 2008 (Magnus Hagander) + 2008 (Magnus Hagander) @@ -10518,8 +10518,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - For example, the prebuilt HTML documentation is now in - doc/src/sgml/html/; the manual pages are packaged + For example, the prebuilt HTML documentation is now in + doc/src/sgml/html/; the manual pages are packaged similarly. @@ -10543,13 +10543,13 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then User-defined constraint triggers now have entries in - pg_constraint as well as pg_trigger + pg_constraint as well as pg_trigger (Tom Lane) Because of this change, - pg_constraint.pgconstrname is now + pg_constraint.pgconstrname is now redundant and has been removed. @@ -10557,8 +10557,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add system catalog columns - pg_constraint.conindid and - pg_trigger.tgconstrindid + pg_constraint.conindid and + pg_trigger.tgconstrindid to better document the use of indexes for constraint enforcement (Tom Lane) @@ -10578,7 +10578,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Improve source code test coverage, including contrib, PL/Python, + Improve source code test coverage, including contrib, PL/Python, and PL/Perl (Peter Eisentraut, Andrew Dunstan) @@ -10598,7 +10598,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Automatically generate the initial contents of - pg_attribute for bootstrapped catalogs + pg_attribute for bootstrapped catalogs (John Naylor) @@ -10610,8 +10610,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Split the processing of - INSERT/UPDATE/DELETE operations out - of execMain.c (Marko Tiikkaja) + INSERT/UPDATE/DELETE operations out + of execMain.c (Marko Tiikkaja) @@ -10622,7 +10622,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Simplify translation of psql's SQL help text + Simplify translation of psql's SQL help text (Peter Eisentraut) @@ -10641,8 +10641,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then Add a new ERRCODE_INVALID_PASSWORD - SQLSTATE error code (Bruce Momjian) + linkend="errcodes-table">ERRCODE_INVALID_PASSWORD + SQLSTATE error code (Bruce Momjian) @@ -10661,23 +10661,23 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add new documentation section - about running PostgreSQL in non-durable mode + about running PostgreSQL in non-durable mode to improve performance (Bruce Momjian) - Restructure the HTML documentation - Makefile rules to make their dependency checks work + Restructure the HTML documentation + Makefile rules to make their dependency checks work correctly, avoiding unnecessary rebuilds (Peter Eisentraut) - Use DocBook XSL stylesheets for man page - building, rather than Docbook2X (Peter Eisentraut) + Use DocBook XSL stylesheets for man page + building, rather than Docbook2X (Peter Eisentraut) @@ -10711,22 +10711,22 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Require Autoconf 2.63 to build - configure (Peter Eisentraut) + Require Autoconf 2.63 to build + configure (Peter Eisentraut) - Require Flex 2.5.31 or later to build - from a CVS checkout (Tom Lane) + Require Flex 2.5.31 or later to build + from a CVS checkout (Tom Lane) - Require Perl version 5.8 or later to build - from a CVS checkout (John Naylor, Andrew Dunstan) + Require Perl version 5.8 or later to build + from a CVS checkout (John Naylor, Andrew Dunstan) @@ -10741,25 +10741,25 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Use a more modern API for Bonjour (Tom Lane) + Use a more modern API for Bonjour (Tom Lane) - Bonjour support now requires macOS 10.3 or later. + Bonjour support now requires macOS 10.3 or later. The older API has been deprecated by Apple. - Add spinlock support for the SuperH + Add spinlock support for the SuperH architecture (Nobuhiro Iwamatsu) - Allow non-GCC compilers to use inline functions if + Allow non-GCC compilers to use inline functions if they support them (Kurt Harriman) @@ -10773,14 +10773,14 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Restructure use of LDFLAGS to be more consistent + Restructure use of LDFLAGS to be more consistent across platforms (Tom Lane) - LDFLAGS is now used for linking both executables and shared - libraries, and we add on LDFLAGS_EX when linking - executables, or LDFLAGS_SL when linking shared libraries. + LDFLAGS is now used for linking both executables and shared + libraries, and we add on LDFLAGS_EX when linking + executables, or LDFLAGS_SL when linking shared libraries. @@ -10795,15 +10795,15 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Make backend header files safe to include in C++ + Make backend header files safe to include in C++ (Kurt Harriman, Peter Eisentraut) These changes remove keyword conflicts that previously made - C++ usage difficult in backend code. However, there - are still other complexities when using C++ for backend - functions. extern "C" { } is still necessary in + C++ usage difficult in backend code. However, there + are still other complexities when using C++ for backend + functions. extern "C" { } is still necessary in appropriate places, and memory management and error handling are still problematic. @@ -10812,15 +10812,15 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then Add AggCheckCallContext() - for use in detecting if a C function is + linkend="xaggr">AggCheckCallContext() + for use in detecting if a C function is being called as an aggregate (Hitoshi Harada) - Change calling convention for SearchSysCache() and related + Change calling convention for SearchSysCache() and related functions to avoid hard-wiring the maximum number of cache keys (Robert Haas) @@ -10833,8 +10833,8 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Require calls of fastgetattr() and - heap_getattr() backend macros to provide a non-NULL fourth + Require calls of fastgetattr() and + heap_getattr() backend macros to provide a non-NULL fourth argument (Robert Haas) @@ -10842,7 +10842,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Custom typanalyze functions should no longer rely on - VacAttrStats.attr to determine the type + VacAttrStats.attr to determine the type of data they will be passed (Tom Lane) @@ -10888,7 +10888,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add contrib/pg_upgrade + Add contrib/pg_upgrade to support in-place upgrades (Bruce Momjian) @@ -10903,15 +10903,15 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add support for preserving relation relfilenode values + linkend="catalog-pg-class">relfilenode values during binary upgrades (Bruce Momjian) - Add support for preserving pg_type - and pg_enum OIDs during binary upgrades + Add support for preserving pg_type + and pg_enum OIDs during binary upgrades (Bruce Momjian) @@ -10919,7 +10919,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Move data files within tablespaces into - PostgreSQL-version-specific subdirectories + PostgreSQL-version-specific subdirectories (Bruce Momjian) @@ -10941,22 +10941,22 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - Add multithreading option ( - This allows multiple CPUs to be used by pgbench, + This allows multiple CPUs to be used by pgbench, reducing the risk of pgbench itself becoming the test bottleneck. - Add \shell and \setshell meta + Add \shell and \setshell meta commands to contrib/pgbench + linkend="pgbench">contrib/pgbench (Michael Paquier) @@ -10964,20 +10964,20 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then New features for contrib/dict_xsyn + linkend="dict-xsyn">contrib/dict_xsyn (Sergey Karpov) - The new options are matchorig, matchsynonyms, - and keepsynonyms. + The new options are matchorig, matchsynonyms, + and keepsynonyms. Add full text dictionary contrib/unaccent + linkend="unaccent">contrib/unaccent (Teodor Sigaev) @@ -10990,24 +10990,24 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add dblink_get_notify() - to contrib/dblink (Marcus Kempe) + linkend="CONTRIB-DBLINK-GET-NOTIFY">dblink_get_notify() + to contrib/dblink (Marcus Kempe) - This allows asynchronous notifications in dblink. + This allows asynchronous notifications in dblink. - Improve contrib/dblink's handling of dropped columns + Improve contrib/dblink's handling of dropped columns (Tom Lane) This affects dblink_build_sql_insert() + linkend="CONTRIB-DBLINK-BUILD-SQL-INSERT">dblink_build_sql_insert() and related functions. These functions now number columns according to logical not physical column numbers. @@ -11016,23 +11016,23 @@ if TG_OP = 'INSERT' and NEW.col1 = ... 
then Greatly increase contrib/hstore's data + linkend="hstore">contrib/hstore's data length limit, and add B-tree and hash support so GROUP - BY and DISTINCT operations are possible on - hstore columns (Andrew Gierth) + BY and DISTINCT operations are possible on + hstore columns (Andrew Gierth) New functions and operators were also added. These improvements - make hstore a full-function key-value store embedded in - PostgreSQL. + make hstore a full-function key-value store embedded in + PostgreSQL. Add contrib/passwordcheck + linkend="passwordcheck">contrib/passwordcheck to support site-specific password strength policies (Laurenz Albe) @@ -11046,7 +11046,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add contrib/pg_archivecleanup + linkend="pgarchivecleanup">contrib/pg_archivecleanup tool (Simon Riggs) @@ -11060,7 +11060,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add query text to contrib/auto_explain + linkend="auto-explain">contrib/auto_explain output (Andrew Dunstan) @@ -11068,7 +11068,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add buffer access counters to contrib/pg_stat_statements + linkend="pgstatstatements">contrib/pg_stat_statements (Itagaki Takahiro) @@ -11076,10 +11076,10 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Update contrib/start-scripts/linux - to use /proc/self/oom_adj to disable the - Linux - out-of-memory (OOM) killer (Alex + linkend="server-start">contrib/start-scripts/linux + to use /proc/self/oom_adj to disable the + Linux + out-of-memory (OOM) killer (Alex Hunsaker, Tom Lane) diff --git a/doc/src/sgml/release-9.1.sgml b/doc/src/sgml/release-9.1.sgml index c354b7d1bc..2939631609 100644 --- a/doc/src/sgml/release-9.1.sgml +++ b/doc/src/sgml/release-9.1.sgml @@ -16,7 +16,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 9.1.X series. Users are encouraged to update to a newer release branch soon. @@ -68,13 +68,13 @@ - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. @@ -82,15 +82,15 @@ Remove artificial restrictions on the values accepted - by numeric_in() and numeric_recv() + by numeric_in() and numeric_recv() (Tom Lane) We allow numeric values up to the limit of the storage format (more - than 1e100000), so it seems fairly pointless - that numeric_in() rejected scientific-notation exponents - above 1000. Likewise, it was silly for numeric_recv() to + than 1e100000), so it seems fairly pointless + that numeric_in() rejected scientific-notation exponents + above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. @@ -112,7 +112,7 @@ - Disallow starting a standalone backend with standby_mode + Disallow starting a standalone backend with standby_mode turned on (Michael Paquier) @@ -126,7 +126,7 @@ Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -137,26 +137,26 @@ - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. 
+ during PQreset(), but there might be related cases. - Make ecpg's and options work consistently with our other executables (Haribabu Kommi) - Fix contrib/intarray/bench/bench.pl to print the results - of the EXPLAIN it does when given the option (Daniel Gustafsson) @@ -170,17 +170,17 @@ If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from - their time zone database, as they did in tzdata + their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused - the pg_timezone_abbrevs view to fail altogether. + the pg_timezone_abbrevs view to fail altogether. - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -193,15 +193,15 @@ or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. - In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. @@ -226,7 +226,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.1.X release series in September 2016. Users are encouraged to update to a newer release branch soon. @@ -253,17 +253,17 @@ Fix possible mis-evaluation of - nested CASE-WHEN expressions (Heikki + nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane) - A CASE expression appearing within the test value - subexpression of another CASE could become confused about + A CASE expression appearing within the test value + subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by - a CASE expression could result in passing the wrong test - value to functions called within a CASE expression in the + a CASE expression could result in passing the wrong test + value to functions called within a CASE expression in the SQL function's body. If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423) @@ -277,7 +277,7 @@ - Numerous places in vacuumdb and other client programs + Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. 
Also, ensure that when a conninfo string is used as a database name @@ -286,22 +286,22 @@ Fix handling of paired double quotes - in psql's \connect - and \password commands to match the documentation. + in psql's \connect + and \password commands to match the documentation. - Introduce a new - pg_dumpall now refuses to deal with database and role + pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet. @@ -311,40 +311,40 @@ These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser - executes pg_dumpall or other routine maintenance + executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424) - Fix corner-case misbehaviors for IS NULL/IS NOT - NULL applied to nested composite values (Andrew Gierth, Tom Lane) + Fix corner-case misbehaviors for IS NULL/IS NOT + NULL applied to nested composite values (Andrew Gierth, Tom Lane) - The SQL standard specifies that IS NULL should return + The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS - NULL yields TRUE), but this is not meant to apply recursively - (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). + NULL yields TRUE), but this is not meant to apply recursively + (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), - and contrib/postgres_fdw could produce remote queries + and contrib/postgres_fdw could produce remote queries that misbehaved similarly. - Make the inet and cidr data types properly reject + Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane) - Prevent crash in close_ps() - (the point ## lseg operator) + Prevent crash in close_ps() + (the point ## lseg operator) for NaN input coordinates (Tom Lane) @@ -355,12 +355,12 @@ - Fix several one-byte buffer over-reads in to_number() + Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut) - In several cases the to_number() function would read one + In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory. @@ -370,7 +370,7 @@ Avoid unsafe intermediate state during expensive paths - through heap_update() (Masahiko Sawada, Andres Freund) + through heap_update() (Masahiko Sawada, Andres Freund) @@ -383,12 +383,12 @@ - Avoid consuming a transaction ID during VACUUM + Avoid consuming a transaction ID during VACUUM (Alexander Korotkov) - Some cases in VACUUM unnecessarily caused an XID to be + Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing. 
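For context, the distance to the wraparound limit can be watched with an ordinary monitoring query such as the following (standard catalog columns, nothing specific to this fix):

<programlisting>
-- Age of each database's oldest unfrozen XID; the hard limit is
-- about two billion.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
</programlisting>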
@@ -397,12 +397,12 @@ - Avoid canceling hot-standby queries during VACUUM FREEZE + Avoid canceling hot-standby queries during VACUUM FREEZE (Simon Riggs, Álvaro Herrera) - VACUUM FREEZE on an otherwise-idle master server could + VACUUM FREEZE on an otherwise-idle master server could result in unnecessary cancellations of queries on its standby servers. @@ -410,8 +410,8 @@ - When a manual ANALYZE specifies a column list, don't - reset the table's changes_since_analyze counter + When a manual ANALYZE specifies a column list, don't + reset the table's changes_since_analyze counter (Tom Lane) @@ -423,7 +423,7 @@ - Fix ANALYZE's overestimation of n_distinct + Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane) @@ -451,8 +451,8 @@ - Fix contrib/btree_gin to handle the smallest - possible bigint value correctly (Peter Eisentraut) + Fix contrib/btree_gin to handle the smallest + possible bigint value correctly (Peter Eisentraut) @@ -465,21 +465,21 @@ It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure - that PQserverVersion() returns the correct value for + that PQserverVersion() returns the correct value for such cases. - Fix ecpg's code for unsigned long long + Fix ecpg's code for unsigned long long array elements (Michael Meskes) - Make pg_basebackup accept -Z 0 as + Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao) @@ -491,13 +491,13 @@ Branch: REL9_1_STABLE [d56c02f1a] 2016-06-19 13:45:03 -0400 Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 --> - Revert to the old heuristic timeout for pg_ctl start -w + Revert to the old heuristic timeout for pg_ctl start -w (Tom Lane) The new method adopted as of release 9.1.20 does not work - when silent_mode is enabled, so go back to the old way. + when silent_mode is enabled, so go back to the old way. @@ -530,7 +530,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Update our copy of the timezone code to match - IANA's tzcode release 2016c (Tom Lane) + IANA's tzcode release 2016c (Tom Lane) @@ -542,7 +542,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Update time zone data files to tzdata release 2016f + Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco. @@ -568,7 +568,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.1.X release series in September 2016. Users are encouraged to update to a newer release branch soon. @@ -604,7 +604,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. Failures have been reported specifically when a client application - uses SSL connections in libpq concurrently with + uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection. 
@@ -613,7 +613,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix failed to build any N-way joins + Fix failed to build any N-way joins planner error with a full join enclosed in the right-hand side of a left join (Tom Lane) @@ -621,8 +621,8 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix possible misbehavior of TH, th, - and Y,YYY format codes in to_timestamp() + Fix possible misbehavior of TH, th, + and Y,YYY format codes in to_timestamp() (Tom Lane) @@ -634,28 +634,28 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix dumping of rules and views in which the array - argument of a value operator - ANY (array) construct is a sub-SELECT + Fix dumping of rules and views in which the array + argument of a value operator + ANY (array) construct is a sub-SELECT (Tom Lane) - Make pg_regress use a startup timeout from the - PGCTLTIMEOUT environment variable, if that's set (Tom Lane) + Make pg_regress use a startup timeout from the + PGCTLTIMEOUT environment variable, if that's set (Tom Lane) This is for consistency with a behavior recently added - to pg_ctl; it eases automated testing on slow machines. + to pg_ctl; it eases automated testing on slow machines. - Fix pg_upgrade to correctly restore extension + Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane) @@ -663,23 +663,23 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no - immediate ill effects, but would cause later pg_dump + immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore. - Rename internal function strtoi() - to strtoint() to avoid conflict with a NetBSD library + Rename internal function strtoi() + to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro) - Fix reporting of errors from bind() - and listen() system calls on Windows (Tom Lane) + Fix reporting of errors from bind() + and listen() system calls on Windows (Tom Lane) @@ -692,12 +692,12 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Avoid possibly-unsafe use of Windows' FormatMessage() + Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich) - Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where + Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful. @@ -705,9 +705,9 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Update time zone data files to tzdata release 2016d + Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone - names Europe/Kirov and Asia/Tomsk to reflect + names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions. @@ -754,56 +754,56 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix incorrect handling of NULL index entries in - indexed ROW() comparisons (Tom Lane) + indexed ROW() comparisons (Tom Lane) An index search using a row comparison such as ROW(a, b) > - ROW('x', 'y') would stop upon reaching a NULL entry in - the b column, ignoring the fact that there might be - non-NULL b values associated with later values - of a. 
+ ROW('x', 'y') would stop upon reaching a NULL entry in + the b column, ignoring the fact that there might be + non-NULL b values associated with later values + of a. Avoid unlikely data-loss scenarios due to renaming files without - adequate fsync() calls before and after (Michael Paquier, + adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund) - Correctly handle cases where pg_subtrans is close to XID + Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes) - Fix corner-case crash due to trying to free localeconv() + Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane) - Fix parsing of affix files for ispell dictionaries + Fix parsing of affix files for ispell dictionaries (Tom Lane) The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for - example I in Turkish UTF8 locales. + example I in Turkish UTF8 locales. - Avoid use of sscanf() to parse ispell + Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov) @@ -829,27 +829,27 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix psql's tab completion logic to handle multibyte + Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas) - Fix psql's tab completion for - SECURITY LABEL (Tom Lane) + Fix psql's tab completion for + SECURITY LABEL (Tom Lane) - Pressing TAB after SECURITY LABEL might cause a crash + Pressing TAB after SECURITY LABEL might cause a crash or offering of inappropriate keywords. - Make pg_ctl accept a wait timeout from the - PGCTLTIMEOUT environment variable, if none is specified on + Make pg_ctl accept a wait timeout from the + PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch) @@ -863,20 +863,20 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix incorrect test for Windows service status - in pg_ctl (Manuel Mathar) + in pg_ctl (Manuel Mathar) The previous set of minor releases attempted to - fix pg_ctl to properly determine whether to send log + fix pg_ctl to properly determine whether to send log messages to Window's Event Log, but got the test backwards. 
- Fix pgbench to correctly handle the combination - of -C and -M prepared options (Tom Lane) + Fix pgbench to correctly handle the combination + of -C and -M prepared options (Tom Lane) @@ -897,21 +897,21 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix multiple mistakes in the statistics returned - by contrib/pgstattuple's pgstatindex() + by contrib/pgstattuple's pgstatindex() function (Tom Lane) - Remove dependency on psed in MSVC builds, since it's no + Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan) - Update time zone data files to tzdata release 2016c + Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia @@ -972,25 +972,25 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Perform an immediate shutdown if the postmaster.pid file + Perform an immediate shutdown if the postmaster.pid file is removed (Tom Lane) The postmaster now checks every minute or so - that postmaster.pid is still there and still contains its + that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had - received SIGQUIT. The main motivation for this change + received SIGQUIT. The main motivation for this change is to ensure that failed buildfarm runs will get cleaned up without manual intervention; but it also serves to limit the bad effects if a - DBA forcibly removes postmaster.pid and then starts a new + DBA forcibly removes postmaster.pid and then starts a new postmaster. - In SERIALIZABLE transaction isolation mode, serialization + In SERIALIZABLE transaction isolation mode, serialization anomalies could be missed due to race conditions during insertions (Kevin Grittner, Thomas Munro) @@ -999,7 +999,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix failure to emit appropriate WAL records when doing ALTER - TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, + TABLE ... 
SET TABLESPACE for unlogged relations (Michael Paquier, Andres Freund) @@ -1018,21 +1018,21 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix ALTER COLUMN TYPE to reconstruct inherited check + Fix ALTER COLUMN TYPE to reconstruct inherited check constraints properly (Tom Lane) - Fix REASSIGN OWNED to change ownership of composite types + Fix REASSIGN OWNED to change ownership of composite types properly (Álvaro Herrera) - Fix REASSIGN OWNED and ALTER OWNER to correctly + Fix REASSIGN OWNED and ALTER OWNER to correctly update granted-permissions lists when changing owners of data types, foreign data wrappers, or foreign servers (Bruce Momjian, Álvaro Herrera) @@ -1041,7 +1041,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix REASSIGN OWNED to ignore foreign user mappings, + Fix REASSIGN OWNED to ignore foreign user mappings, rather than fail (Álvaro Herrera) @@ -1063,14 +1063,14 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix dumping of whole-row Vars in ROW() - and VALUES() lists (Tom Lane) + Fix dumping of whole-row Vars in ROW() + and VALUES() lists (Tom Lane) - Fix possible internal overflow in numeric division + Fix possible internal overflow in numeric division (Dean Rasheed) @@ -1122,7 +1122,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 This causes the code to emit regular expression is too - complex errors in some cases that previously used unreasonable + complex errors in some cases that previously used unreasonable amounts of time and memory. @@ -1135,14 +1135,14 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Make %h and %r escapes - in log_line_prefix work for messages emitted due - to log_connections (Tom Lane) + Make %h and %r escapes + in log_line_prefix work for messages emitted due + to log_connections (Tom Lane) - Previously, %h/%r started to work just after a - new session had emitted the connection received log message; + Previously, %h/%r started to work just after a + new session had emitted the connection received log message; now they work for that message too. @@ -1155,7 +1155,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 This oversight resulted in failure to recover from crashes - whenever logging_collector is turned on. + whenever logging_collector is turned on. @@ -1181,13 +1181,13 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - In psql, ensure that libreadline's idea + In psql, ensure that libreadline's idea of the screen size is updated when the terminal window size changes (Merlin Moncure) - Previously, libreadline did not notice if the window + Previously, libreadline did not notice if the window was resized during query output, leading to strange behavior during later input of multiline queries. 
@@ -1195,15 +1195,15 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix psql's \det command to interpret its - pattern argument the same way as other \d commands with + Fix psql's \det command to interpret its + pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart) - Avoid possible crash in psql's \c command + Avoid possible crash in psql's \c command when previous connection was via Unix socket and command specifies a new hostname and same username (Tom Lane) @@ -1211,21 +1211,21 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - In pg_ctl start -w, test child process status directly + In pg_ctl start -w, test child process status directly rather than relying on heuristics (Tom Lane, Michael Paquier) - Previously, pg_ctl relied on an assumption that the new - postmaster would always create postmaster.pid within five + Previously, pg_ctl relied on an assumption that the new + postmaster would always create postmaster.pid within five seconds. But that can fail on heavily-loaded systems, - causing pg_ctl to report incorrectly that the + causing pg_ctl to report incorrectly that the postmaster failed to start. Except on Windows, this change also means that a pg_ctl start - -w done immediately after another such command will now reliably + -w done immediately after another such command will now reliably fail, whereas previously it would report success if done within two seconds of the first command. @@ -1233,23 +1233,23 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - In pg_ctl start -w, don't attempt to use a wildcard listen + In pg_ctl start -w, don't attempt to use a wildcard listen address to connect to the postmaster (Kondo Yuta) - On Windows, pg_ctl would fail to detect postmaster - startup if listen_addresses is set to 0.0.0.0 - or ::, because it would try to use that value verbatim as + On Windows, pg_ctl would fail to detect postmaster + startup if listen_addresses is set to 0.0.0.0 + or ::, because it would try to use that value verbatim as the address to connect to, which doesn't work. Instead assume - that 127.0.0.1 or ::1, respectively, is the + that 127.0.0.1 or ::1, respectively, is the right thing to use. - In pg_ctl on Windows, check service status to decide + In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier) @@ -1257,18 +1257,18 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - In pg_dump and pg_basebackup, adopt + In pg_dump and pg_basebackup, adopt the GNU convention for handling tar-archive members exceeding 8GB (Tom Lane) - The POSIX standard for tar file format does not allow + The POSIX standard for tar file format does not allow archive member files to exceed 8GB, but most modern implementations - of tar support an extension that fixes that. Adopt - this extension so that pg_dump with no longer fails on tables with more than 8GB of data, and so - that pg_basebackup can handle files larger than 8GB. + that pg_basebackup can handle files larger than 8GB. In addition, fix some portability issues that could cause failures for members between 4GB and 8GB on some platforms. 
Potentially these problems could cause unrecoverable data loss due to unreadable backup @@ -1278,44 +1278,44 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix assorted corner-case bugs in pg_dump's processing + Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane) - Make pg_dump mark a view's triggers as needing to be + Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during - parallel pg_restore (Tom Lane) + parallel pg_restore (Tom Lane) Ensure that relation option values are properly quoted - in pg_dump (Kouhei Sutou, Tom Lane) + in pg_dump (Kouhei Sutou, Tom Lane) A reloption value that isn't a simple identifier or number could lead to dump/reload failures due to syntax errors in CREATE statements - issued by pg_dump. This is not an issue with any - reloption currently supported by core PostgreSQL, but + issued by pg_dump. This is not an issue with any + reloption currently supported by core PostgreSQL, but extensions could allow reloptions that cause the problem. - Fix pg_upgrade's file-copying code to handle errors + Fix pg_upgrade's file-copying code to handle errors properly on Windows (Bruce Momjian) - Install guards in pgbench against corner-case overflow + Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier) @@ -1323,22 +1323,22 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Prevent certain PL/Java parameters from being set by + Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch) - This change mitigates a PL/Java security bug - (CVE-2016-0766), which was fixed in PL/Java by marking + This change mitigates a PL/Java security bug + (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for - sites that update PostgreSQL more frequently - than PL/Java, make the core code aware of them also. + sites that update PostgreSQL more frequently + than PL/Java, make the core code aware of them also. - Improve libpq's handling of out-of-memory situations + Improve libpq's handling of out-of-memory situations (Michael Paquier, Amit Kapila, Heikki Linnakangas) @@ -1346,42 +1346,42 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Fix order of arguments - in ecpg-generated typedef statements + in ecpg-generated typedef statements (Michael Meskes) - Use %g not %f format - in ecpg's PGTYPESnumeric_from_double() + Use %g not %f format + in ecpg's PGTYPESnumeric_from_double() (Tom Lane) - Fix ecpg-supplied header files to not contain comments + Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes) - Such a comment is rejected by ecpg. It's not yet clear - whether ecpg itself should be changed. + Such a comment is rejected by ecpg. It's not yet clear + whether ecpg itself should be changed. 
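[Editor's note: the pgbench overflow guards above target the classic two's-complement corner cases. For illustration only, here are the same cases expressed at the SQL level rather than in pgbench's \set syntax:]

    SELECT (-9223372036854775807 - 1) / (-1);   -- INT64_MIN / -1: raises "bigint out of range"
    SELECT (-9223372036854775807 - 1) % (-1);   -- INT64_MIN % -1: defined to return 0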
- Ensure that contrib/pgcrypto's crypt() + Ensure that contrib/pgcrypto's crypt() function can be interrupted by query cancel (Andreas Karlsson) - Accept flex versions later than 2.5.x + Accept flex versions later than 2.5.x (Tom Lane, Michael Paquier) @@ -1393,19 +1393,19 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Install our missing script where PGXS builds can find it + Install our missing script where PGXS builds can find it (Jim Nasby) This allows sane behavior in a PGXS build done on a machine where build - tools such as bison are missing. + tools such as bison are missing. - Ensure that dynloader.h is included in the installed + Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier) @@ -1413,11 +1413,11 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Add variant regression test expected-output file to match behavior of - current libxml2 (Tom Lane) + current libxml2 (Tom Lane) - The fix for libxml2's CVE-2015-7499 causes it not to + The fix for libxml2's CVE-2015-7499 causes it not to output error context reports in some cases where it used to do so. This seems to be a bug, but we'll probably have to live with it for some time, so work around it. @@ -1426,7 +1426,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Update time zone data files to tzdata release 2016a for + Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan. @@ -1472,8 +1472,8 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix contrib/pgcrypto to detect and report - too-short crypt() salts (Josh Kupershmidt) + Fix contrib/pgcrypto to detect and report + too-short crypt() salts (Josh Kupershmidt) @@ -1499,13 +1499,13 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix insertion of relations into the relation cache init file + Fix insertion of relations into the relation cache init file (Tom Lane) An oversight in a patch in the most recent minor releases - caused pg_trigger_tgrelid_tgname_index to be omitted + caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing @@ -1523,7 +1523,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Improve LISTEN startup time when there are many unread + Improve LISTEN startup time when there are many unread notifications (Matt Newell) @@ -1535,7 +1535,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - This substantially improves performance when pg_dump + This substantially improves performance when pg_dump tries to dump a large number of tables. @@ -1550,13 +1550,13 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value - of ssl_renegotiation_limit to zero (disabled). + of ssl_renegotiation_limit to zero (disabled). 
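[Editor's sketch for the ssl_renegotiation_limit change above — a quick way to confirm, and on a 9.4-era server pin, the disabled default; assumes superuser access:]

    SHOW ssl_renegotiation_limit;                  -- new default is 0 (disabled)
    ALTER SYSTEM SET ssl_renegotiation_limit = 0;  -- make it explicit if desired
    SELECT pg_reload_conf();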
- Lower the minimum values of the *_freeze_max_age parameters + Lower the minimum values of the *_freeze_max_age parameters (Andres Freund) @@ -1568,14 +1568,14 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Limit the maximum value of wal_buffers to 2GB to avoid + Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus) - Fix rare internal overflow in multiplication of numeric values + Fix rare internal overflow in multiplication of numeric values (Dean Rasheed) @@ -1583,21 +1583,21 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Guard against hard-to-reach stack overflows involving record types, - range types, json, jsonb, tsquery, - ltxtquery and query_int (Noah Misch) + range types, json, jsonb, tsquery, + ltxtquery and query_int (Noah Misch) - Fix handling of DOW and DOY in datetime input + Fix handling of DOW and DOY in datetime input (Greg Stark) These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather - than invalid input syntax. + than invalid input syntax. @@ -1610,7 +1610,7 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 Add recursion depth protections to regular expression, SIMILAR - TO, and LIKE matching (Tom Lane) + TO, and LIKE matching (Tom Lane) @@ -1654,22 +1654,22 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 - Fix unexpected out-of-memory situation during sort errors - when using tuplestores with small work_mem settings (Tom + Fix unexpected out-of-memory situation during sort errors + when using tuplestores with small work_mem settings (Tom Lane) - Fix very-low-probability stack overrun in qsort (Tom Lane) + Fix very-low-probability stack overrun in qsort (Tom Lane) - Fix invalid memory alloc request size failure in hash joins - with large work_mem settings (Tomas Vondra, Tom Lane) + Fix invalid memory alloc request size failure in hash joins + with large work_mem settings (Tomas Vondra, Tom Lane) @@ -1682,9 +1682,9 @@ Branch: REL9_1_STABLE [354b3a3ac] 2016-06-19 14:01:17 -0400 These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as could not devise a query plan for the - given query, could not find pathkey item to - sort, plan should not reference subplan's variable, - or failed to assign all NestLoopParams to plan nodes. + given query, could not find pathkey item to + sort, plan should not reference subplan's variable, + or failed to assign all NestLoopParams to plan nodes. Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems. @@ -1723,12 +1723,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove - the postmaster.pid file (Tom Lane) + the postmaster.pid file (Tom Lane) This avoids race-condition failures if an external script attempts to - start a new postmaster as soon as pg_ctl stop returns. + start a new postmaster as soon as pg_ctl stop returns. 
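[Editor's illustration of the DOW/DOY fix above; the visible change is only in the error text:]

    SELECT 'dow'::date;   -- now: invalid input syntax for type date: "dow"
    SELECT 'doy'::date;   -- likewise, instead of an opaque internal error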
@@ -1748,7 +1748,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Do not print a WARNING when an autovacuum worker is already + Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane) @@ -1781,44 +1781,44 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix off-by-one error that led to otherwise-harmless warnings - about apparent wraparound in subtrans/multixact truncation + about apparent wraparound in subtrans/multixact truncation (Thomas Munro) - Fix misreporting of CONTINUE and MOVE statement - types in PL/pgSQL's error context messages + Fix misreporting of CONTINUE and MOVE statement + types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane) - Fix PL/Perl to handle non-ASCII error + Fix PL/Perl to handle non-ASCII error message texts correctly (Alex Hunsaker) - Fix PL/Python crash when returning the string - representation of a record result (Tom Lane) + Fix PL/Python crash when returning the string + representation of a record result (Tom Lane) - Fix some places in PL/Tcl that neglected to check for - failure of malloc() calls (Michael Paquier, Álvaro + Fix some places in PL/Tcl that neglected to check for + failure of malloc() calls (Michael Paquier, Álvaro Herrera) - In contrib/isn, fix output of ISBN-13 numbers that begin + In contrib/isn, fix output of ISBN-13 numbers that begin with 979 (Fabien Coelho) @@ -1830,7 +1830,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Improve libpq's handling of out-of-memory conditions + Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas) @@ -1838,68 +1838,68 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix memory leaks and missing out-of-memory checks - in ecpg (Michael Paquier) + in ecpg (Michael Paquier) - Fix psql's code for locale-aware formatting of numeric + Fix psql's code for locale-aware formatting of numeric output (Tom Lane) - The formatting code invoked by \pset numericlocale on + The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. It could also mangle already-localized - output from the money data type. + output from the money data type. 
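[Editor's sketch exercising the numericlocale fix above from psql; the exact output depends on the session's lc_numeric and lc_monetary settings:]

    \pset numericlocale on
    SELECT 1e9::numeric;     -- exponent with no decimal point: formerly mishandled
    SELECT '12.34'::money;   -- already-localized money output is no longer mangled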
- Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) - Fix selection of default zlib compression level - in pg_dump's directory output format (Andrew Dunstan) + Fix selection of default zlib compression level + in pg_dump's directory output format (Andrew Dunstan) - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) When dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. @@ -1908,18 +1908,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. - Fix assorted minor memory leaks in pg_dump and other + Fix assorted minor memory leaks in pg_dump and other client-side programs (Michael Paquier) @@ -1927,11 +1927,11 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. 
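[Editor's note on the shell-type entry above: a shell type is trivial to create, and pg_dump now emits it. The completing CREATE TYPE in the comment uses hypothetical I/O function names:]

    CREATE TYPE myshell;   -- declares a shell type: named but not yet defined
    -- later completed with real I/O functions, e.g.:
    -- CREATE TYPE myshell (INPUT = myshell_in, OUTPUT = myshell_out);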
@@ -1939,14 +1939,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -1958,38 +1958,38 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. @@ -2038,7 +2038,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -2049,13 +2049,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -2101,12 +2101,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. 
@@ -2120,29 +2120,29 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. - Allow libpq to use TLS protocol versions beyond v1 + Allow libpq to use TLS protocol versions beyond v1 (Noah Misch) - For a long time, libpq was coded so that the only SSL + For a long time, libpq was coded so that the only SSL protocol it would allow was TLS v1. Now that newer TLS versions are becoming popular, allow it to negotiate the highest commonly-supported - TLS version with the server. (PostgreSQL servers were + TLS version with the server. (PostgreSQL servers were already capable of such negotiation, so no change is needed on the server side.) This is a back-patch of a change already released in 9.4.0. @@ -2176,8 +2176,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - However, if you use contrib/citext's - regexp_matches() functions, see the changelog entry below + However, if you use contrib/citext's + regexp_matches() functions, see the changelog entry below about that. @@ -2215,7 +2215,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. In the worst case this might lead to information exposure, due to our @@ -2225,7 +2225,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -2235,15 +2235,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -2252,16 +2252,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix incorrect declaration of contrib/citext's - regexp_matches() functions (Tom Lane) + Fix incorrect declaration of contrib/citext's + regexp_matches() functions (Tom Lane) - These functions should return setof text[], like the core + These functions should return setof text[], like the core functions they are wrappers for; but they were incorrectly declared as - returning just text[]. This mistake had two results: first, + returning just text[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you - should get is an empty set (zero rows). 
Second, the g flag + should get is an empty set (zero rows). Second, the g flag was effectively ignored, since you would get only one result array even if there were multiple matches. @@ -2269,16 +2269,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 While the latter behavior is clearly a bug, there might be applications depending on the former behavior; therefore the function declarations - will not be changed by default until PostgreSQL 9.5. + will not be changed by default until PostgreSQL 9.5. In pre-9.5 branches, the old behavior exists in version 1.0 of - the citext extension, while we have provided corrected - declarations in version 1.1 (which is not installed by + the citext extension, while we have provided corrected + declarations in version 1.1 (which is not installed by default). To adopt the fix in pre-9.5 branches, execute - ALTER EXTENSION citext UPDATE TO '1.1' in each database in - which citext is installed. (You can also update + ALTER EXTENSION citext UPDATE TO '1.1' in each database in + which citext is installed. (You can also update back to 1.0 if you need to undo that.) Be aware that either update direction will require dropping and recreating any views or rules that - use citext's regexp_matches() functions. + use citext's regexp_matches() functions. @@ -2306,7 +2306,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -2334,7 +2334,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -2342,7 +2342,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -2356,14 +2356,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid cannot GetMultiXactIdMembers() during recovery error + Avoid cannot GetMultiXactIdMembers() during recovery error (Álvaro Herrera) - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -2383,13 +2383,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Cope with unexpected signals in LockBufferForCleanup() + Cope with unexpected signals in LockBufferForCleanup() (Andres Freund) This oversight could result in spurious errors about multiple - backends attempting to wait for pincount 1. + backends attempting to wait for pincount 1. @@ -2430,18 +2430,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - ANALYZE executes index expressions many times; if there are + ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to - cancel the ANALYZE before that loop finishes. + cancel the ANALYZE before that loop finishes. 
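[Editor's sketch of adopting the corrected citext declarations described above, for a pre-9.5 branch; remember that views or rules using these functions must be dropped first and recreated afterward:]

    ALTER EXTENSION citext UPDATE TO '1.1';
    -- now returns setof text[]: zero rows on no match, one row per match with 'g'
    SELECT regexp_matches('barbeque'::citext, 'b'::citext, 'g');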
- Ensure tableoid of a foreign table is reported - correctly when a READ COMMITTED recheck occurs after - locking rows in SELECT FOR UPDATE, UPDATE, - or DELETE (Etsuro Fujita) + Ensure tableoid of a foreign table is reported + correctly when a READ COMMITTED recheck occurs after + locking rows in SELECT FOR UPDATE, UPDATE, + or DELETE (Etsuro Fujita) @@ -2454,20 +2454,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Recommend setting include_realm to 1 when using + Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost) Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but - it will become the default setting in PostgreSQL 9.5. + it will become the default setting in PostgreSQL 9.5. - Remove code for matching IPv4 pg_hba.conf entries to + Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane) @@ -2480,20 +2480,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 crashes on some systems, so let's just remove it rather than fix it. (Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of - IPv4 pg_hba.conf entries, which does not seem like a good + IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.) - Report WAL flush, not insert, position in IDENTIFY_SYSTEM + Report WAL flush, not insert, position in IDENTIFY_SYSTEM replication command (Heikki Linnakangas) This avoids a possible startup failure - in pg_receivexlog. + in pg_receivexlog. @@ -2501,14 +2501,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the - service too soon; and ensure that pg_ctl will wait for + service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj) - Reduce risk of network deadlock when using libpq's + Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas) @@ -2517,25 +2517,25 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM - STDIN.) This worked properly in the normal blocking mode, but not - so much in non-blocking mode. We've modified libpq + STDIN.) This worked properly in the normal blocking mode, but not + so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, - and be sure to call PQconsumeInput() upon read-ready. + and be sure to call PQconsumeInput() upon read-ready. 
- Fix array handling in ecpg (Michael Meskes) + Fix array handling in ecpg (Michael Meskes) - Fix psql to sanely handle URIs and conninfo strings as - the first parameter to \connect + Fix psql to sanely handle URIs and conninfo strings as + the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera) @@ -2548,38 +2548,38 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Suppress incorrect complaints from psql on some - platforms that it failed to write ~/.psql_history at exit + Suppress incorrect complaints from psql on some + platforms that it failed to write ~/.psql_history at exit (Tom Lane) This misbehavior was caused by a workaround for a bug in very old - (pre-2006) versions of libedit. We fixed it by + (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear - for anyone still using such versions of libedit. - Recommendation: upgrade that library, or use libreadline. + for anyone still using such versions of libedit. + Recommendation: upgrade that library, or use libreadline. - Fix pg_dump's rule for deciding which casts are + Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane) - In pg_dump, fix failure to honor -Z - compression level option together with -Fd + In pg_dump, fix failure to honor -Z + compression level option together with -Fd (Michael Paquier) - Make pg_dump consider foreign key relationships + Make pg_dump consider foreign key relationships between extension configuration tables while choosing dump order (Gilles Darold, Michael Paquier, Stephen Frost) @@ -2592,14 +2592,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix dumping of views that are just VALUES(...) but have + Fix dumping of views that are just VALUES(...) but have column aliases (Tom Lane) - In pg_upgrade, force timeline 1 in the new cluster + In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian) @@ -2611,7 +2611,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In pg_upgrade, check for improperly non-connectable + In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian) @@ -2619,28 +2619,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In pg_upgrade, quote directory paths - properly in the generated delete_old_cluster script + In pg_upgrade, quote directory paths + properly in the generated delete_old_cluster script (Bruce Momjian) - In pg_upgrade, preserve database-level freezing info + In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian) This oversight could cause missing-clog-file errors for tables within - the postgres and template1 databases. + the postgres and template1 databases. 
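[Editor's illustration of the \connect fix above — invocations like these now behave sanely; host, user, and database names here are hypothetical:]

    \c postgresql://alice@db.example.com:5433/sales
    \c "host=db.example.com dbname=sales user=alice"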
- Run pg_upgrade and pg_resetxlog with + Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem) @@ -2648,15 +2648,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Improve handling of readdir() failures when scanning - directories in initdb and pg_basebackup + Improve handling of readdir() failures when scanning + directories in initdb and pg_basebackup (Marco Nenciarini) - Fix slow sorting algorithm in contrib/intarray (Tom Lane) + Fix slow sorting algorithm in contrib/intarray (Tom Lane) @@ -2668,7 +2668,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Update time zone data files to tzdata release 2015d + Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT). @@ -2715,15 +2715,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix buffer overruns in to_char() + Fix buffer overruns in to_char() (Bruce Momjian) - When to_char() processes a numeric formatting template - calling for a large number of digits, PostgreSQL + When to_char() processes a numeric formatting template + calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted - timestamp formatting template, PostgreSQL would write + timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely. @@ -2733,27 +2733,27 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix buffer overrun in replacement *printf() functions + Fix buffer overrun in replacement *printf() functions (Tom Lane) - PostgreSQL includes a replacement implementation - of printf and related functions. This code will overrun + PostgreSQL includes a replacement implementation + of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion - specifiers e, E, f, F, - g or G) with requested precision greater than + specifiers e, E, f, F, + g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through - the to_char() SQL function. While that is the only - affected core PostgreSQL functionality, extension + the to_char() SQL function. While that is the only + affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. - This issue primarily affects PostgreSQL on Windows. - PostgreSQL uses the system implementation of these + This issue primarily affects PostgreSQL on Windows. + PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. (CVE-2015-0242) @@ -2761,12 +2761,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix buffer overruns in contrib/pgcrypto + Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch) - Errors in memory size tracking within the pgcrypto + Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. 
The buffer overrun cases can crash the server, and we have not ruled out the possibility of @@ -2807,7 +2807,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have - SELECT privilege on all columns of the table, this could + SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user. @@ -2833,21 +2833,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid possible data corruption if ALTER DATABASE SET - TABLESPACE is used to move a database to a new tablespace and then + TABLESPACE is used to move a database to a new tablespace and then shortly later move it back to its original tablespace (Tom Lane) - Avoid corrupting tables when ANALYZE inside a transaction + Avoid corrupting tables when ANALYZE inside a transaction is rolled back (Andres Freund, Tom Lane, Michael Paquier) If the failing transaction had earlier removed the last index, rule, or trigger from the table, the table would be left in a corrupted state - with the relevant pg_class flags not set though they + with the relevant pg_class flags not set though they should be. @@ -2855,14 +2855,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Ensure that unlogged tables are copied correctly - during CREATE DATABASE or ALTER DATABASE SET - TABLESPACE (Pavan Deolasee, Andres Freund) + during CREATE DATABASE or ALTER DATABASE SET + TABLESPACE (Pavan Deolasee, Andres Freund) - Fix DROP's dependency searching to correctly handle the + Fix DROP's dependency searching to correctly handle the case where a table column is recursively visited before its table (Petr Jelinek, Tom Lane) @@ -2870,7 +2870,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This case is only known to arise when an extension creates both a datatype and a table using that datatype. The faulty code might - refuse a DROP EXTENSION unless CASCADE is + refuse a DROP EXTENSION unless CASCADE is specified, which should not be required. @@ -2882,22 +2882,22 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In READ COMMITTED mode, queries that lock or update + In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug. - Fix planning of SELECT FOR UPDATE when using a partial + Fix planning of SELECT FOR UPDATE when using a partial index on a child table (Kyotaro Horiguchi) - In READ COMMITTED mode, SELECT FOR UPDATE must - also recheck the partial index's WHERE condition when + In READ COMMITTED mode, SELECT FOR UPDATE must + also recheck the partial index's WHERE condition when rechecking a recently-updated row to see if it still satisfies the - query's WHERE condition. This requirement was missed if the + query's WHERE condition. This requirement was missed if the index belonged to an inheritance child table, so that it was possible to incorrectly return rows that no longer satisfy the query condition. 
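[Editor's minimal sketch of the partial-index case fixed above, using a hypothetical inheritance tree; under READ COMMITTED, the recheck of a concurrently updated row now re-applies the index's WHERE clause:]

    CREATE TABLE tasks (id int, done boolean DEFAULT false);
    CREATE TABLE tasks_child () INHERITS (tasks);
    CREATE INDEX ON tasks_child (id) WHERE NOT done;

    BEGIN;   -- READ COMMITTED by default
    SELECT * FROM tasks WHERE id = 42 AND NOT done FOR UPDATE;
    COMMIT;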
@@ -2905,12 +2905,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix corner case wherein SELECT FOR UPDATE could return a row + Fix corner case wherein SELECT FOR UPDATE could return a row twice, and possibly miss returning other rows (Tom Lane) - In READ COMMITTED mode, a SELECT FOR UPDATE + In READ COMMITTED mode, a SELECT FOR UPDATE that is scanning an inheritance tree could incorrectly return a row from a prior child table instead of the one it should return from a later child table. @@ -2920,7 +2920,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Reject duplicate column names in the referenced-columns list of - a FOREIGN KEY declaration (David Rowley) + a FOREIGN KEY declaration (David Rowley) @@ -2932,7 +2932,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix bugs in raising a numeric value to a large integral power + Fix bugs in raising a numeric value to a large integral power (Tom Lane) @@ -2945,19 +2945,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In numeric_recv(), truncate away any fractional digits - that would be hidden according to the value's dscale field + In numeric_recv(), truncate away any fractional digits + that would be hidden according to the value's dscale field (Tom Lane) - A numeric value's display scale (dscale) should + A numeric value's display scale (dscale) should never be less than the number of nonzero fractional digits; but apparently there's at least one broken client application that - transmits binary numeric values in which that's true. + transmits binary numeric values in which that's true. This leads to strange behavior since the extra digits are taken into account by arithmetic operations even though they aren't printed. - The least risky fix seems to be to truncate away such hidden + The least risky fix seems to be to truncate away such hidden digits on receipt, so that the value is indeed what it prints as. @@ -2977,7 +2977,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix bugs in tsquery @> tsquery + Fix bugs in tsquery @> tsquery operator (Heikki Linnakangas) @@ -3008,14 +3008,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -3024,7 +3024,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix planner problems with nested append relations, such as inherited - tables within UNION ALL subqueries (Tom Lane) + tables within UNION ALL subqueries (Tom Lane) @@ -3037,8 +3037,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Exempt tables that have per-table cost_limit - and/or cost_delay settings from autovacuum's global cost + Exempt tables that have per-table cost_limit + and/or cost_delay settings from autovacuum's global cost balancing rules (Álvaro Herrera) @@ -3064,7 +3064,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 the target database, if they met the usual thresholds for autovacuuming. 
This is at best pretty unexpected; at worst it delays response to the wraparound threat. Fix it so that if autovacuum is - turned off, workers only do anti-wraparound vacuums and + turned off, workers only do anti-wraparound vacuums and not any other work. @@ -3097,19 +3097,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix several cases where recovery logic improperly ignored WAL records - for COMMIT/ABORT PREPARED (Heikki Linnakangas) + for COMMIT/ABORT PREPARED (Heikki Linnakangas) The most notable oversight was - that recovery_target_xid could not be used to stop at + that recovery_target_xid could not be used to stop at a two-phase commit. - Avoid creating unnecessary .ready marker files for + Avoid creating unnecessary .ready marker files for timeline history files (Fujii Masao) @@ -3117,14 +3117,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible null pointer dereference when an empty prepared statement - is used and the log_statement setting is mod - or ddl (Fujii Masao) + is used and the log_statement setting is mod + or ddl (Fujii Masao) - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -3133,7 +3133,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. @@ -3147,32 +3147,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) - Fix processing of repeated dbname parameters - in PQconnectdbParams() (Alex Shulgin) + Fix processing of repeated dbname parameters + in PQconnectdbParams() (Alex Shulgin) Unexpected behavior ensued if the first occurrence - of dbname contained a connection string or URI to be + of dbname contained a connection string or URI to be expanded. - Ensure that libpq reports a suitable error message on + Ensure that libpq reports a suitable error message on unexpected socket EOF (Marko Tiikkaja, Tom Lane) - Depending on kernel behavior, libpq might return an + Depending on kernel behavior, libpq might return an empty error string rather than something useful when the server unexpectedly closed the socket. @@ -3180,14 +3180,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Clear any old error message during PQreset() + Clear any old error message during PQreset() (Heikki Linnakangas) - If PQreset() is called repeatedly, and the connection + If PQreset() is called repeatedly, and the connection cannot be re-established, error messages from the failed connection - attempts kept accumulating in the PGconn's error + attempts kept accumulating in the PGconn's error string. 
@@ -3195,32 +3195,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Properly handle out-of-memory conditions while parsing connection - options in libpq (Alex Shulgin, Heikki Linnakangas) + options in libpq (Alex Shulgin, Heikki Linnakangas) - Fix array overrun in ecpg's version - of ParseDateTime() (Michael Paquier) + Fix array overrun in ecpg's version + of ParseDateTime() (Michael Paquier) - In initdb, give a clearer error message if a password + In initdb, give a clearer error message if a password file is specified but is empty (Mats Erik Andersson) - Fix psql's \s command to work nicely with + Fix psql's \s command to work nicely with libedit, and add pager support (Stepan Rutz, Tom Lane) - When using libedit rather than readline, \s printed the + When using libedit rather than readline, \s printed the command history in a fairly unreadable encoded format, and on recent libedit versions might fail altogether. Fix that by printing the history ourselves rather than having the library do it. A pleasant @@ -3230,7 +3230,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This patch also fixes a bug that caused newline encoding to be applied inconsistently when saving the command history with libedit. - Multiline history entries written by older psql + Multiline history entries written by older psql versions will be read cleanly with this patch, but perhaps not vice versa, depending on the exact libedit versions involved. @@ -3238,17 +3238,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. 
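[Editor's illustration of the special-variable parsing cleanup above — variant spellings are accepted uniformly, and bad values now draw a warning:]

    \set ECHO_HIDDEN on
    \set ON_ERROR_ROLLBACK 1     -- 1/0 now accepted alongside on/off
    \set HISTCONTROL ignoredups
    \set ECHO bogus              -- unrecognized values now produce a warning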
@@ -3256,16 +3256,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix psql's expanded-mode display to work - consistently when using border = 3 - and linestyle = ascii or unicode + Fix psql's expanded-mode display to work + consistently when using border = 3 + and linestyle = ascii or unicode (Stephen Frost) - Improve performance of pg_dump when the database + Improve performance of pg_dump when the database contains many instances of multiple dependency paths between the same two objects (Tom Lane) @@ -3280,21 +3280,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix core dump in pg_dump --binary-upgrade on zero-column + Fix core dump in pg_dump --binary-upgrade on zero-column composite type (Rushabh Lathia) - Prevent WAL files created by pg_basebackup -x/-X from + Prevent WAL files created by pg_basebackup -x/-X from being archived again when the standby is promoted (Andres Freund) - Fix upgrade-from-unpackaged script for contrib/citext + Fix upgrade-from-unpackaged script for contrib/citext (Tom Lane) @@ -3302,7 +3302,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix block number checking - in contrib/pageinspect's get_raw_page() + in contrib/pageinspect's get_raw_page() (Tom Lane) @@ -3314,7 +3314,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix contrib/pgcrypto's pgp_sym_decrypt() + Fix contrib/pgcrypto's pgp_sym_decrypt() to not fail on messages whose length is 6 less than a power of 2 (Marko Tiikkaja) @@ -3322,7 +3322,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix file descriptor leak in contrib/pg_test_fsync + Fix file descriptor leak in contrib/pg_test_fsync (Jeff Janes) @@ -3334,24 +3334,24 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Handle unexpected query results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. - Avoid a possible crash in contrib/xml2's - xslt_process() (Mark Simonetti) + Avoid a possible crash in contrib/xml2's + xslt_process() (Mark Simonetti) - libxslt seems to have an undocumented dependency on + libxslt seems to have an undocumented dependency on the order in which resources are freed; reorder our calls to avoid a crash. @@ -3359,7 +3359,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Mark some contrib I/O functions with correct volatility + Mark some contrib I/O functions with correct volatility properties (Tom Lane) @@ -3393,29 +3393,29 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 With OpenLDAP versions 2.4.24 through 2.4.31, - inclusive, PostgreSQL backends can crash at exit. - Raise a warning during configure based on the + inclusive, PostgreSQL backends can crash at exit. + Raise a warning during configure based on the compile-time OpenLDAP version number, and test the crashing scenario - in the contrib/dblink regression test. + in the contrib/dblink regression test. 
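[Editor's note on the pg_dump --binary-upgrade crash above: the edge case is legal SQL, and nothing more than this reproduces the input:]

    CREATE TYPE zero_cols AS ();   -- a composite type with no attributes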
- In non-MSVC Windows builds, ensure libpq.dll is installed + In non-MSVC Windows builds, ensure libpq.dll is installed with execute permissions (Noah Misch) - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. @@ -3427,15 +3427,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. @@ -3447,9 +3447,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add CST (China Standard Time) to our lists. - Remove references to ADT as Arabia Daylight Time, an + Remove references to ADT as Arabia Daylight Time, an abbreviation that's been out of use since 2007; therefore, claiming - there is a conflict with Atlantic Daylight Time doesn't seem + there is a conflict with Atlantic Daylight Time doesn't seem especially helpful. Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, and FJST (Fiji); we didn't even have them on the proper side of the date line. @@ -3458,21 +3458,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Update time zone data files to tzdata release 2015a. + Update time zone data files to tzdata release 2015a. The IANA timezone database has adopted abbreviations of the form - AxST/AxDT + AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into - our Default timezone abbreviation set. - The Australia abbreviation set now contains only CST, EAST, + our Default timezone abbreviation set. + The Australia abbreviation set now contains only CST, EAST, EST, SAST, SAT, and WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South - Africa Standard Time in the Default abbreviation set. + Africa Standard Time in the Default abbreviation set. @@ -3531,15 +3531,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. 
- Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -3578,14 +3578,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) - Fix could not find pathkey item to sort planner failures - with UNION ALL over subqueries reading from tables with + Fix could not find pathkey item to sort planner failures + with UNION ALL over subqueries reading from tables with inheritance children (Tom Lane) @@ -3613,13 +3613,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. - Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -3634,7 +3634,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -3647,7 +3647,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -3668,19 +3668,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -3688,7 +3688,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -3700,14 +3700,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. - Fix client host name lookup when processing pg_hba.conf + Fix client host name lookup when processing pg_hba.conf entries that specify host names instead of IP addresses (Tom Lane) @@ -3722,7 +3722,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -3731,16 +3731,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. 
The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. @@ -3776,15 +3776,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -3795,17 +3795,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -3813,15 +3813,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) - Fix ecpg to do the right thing when an array - of char * is the target for a FETCH statement returning more + Fix ecpg to do the right thing when an array + of char * is the target for a FETCH statement returning more than one row, as well as some other array-handling fixes (Ashutosh Bapat) @@ -3829,20 +3829,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -3850,20 +3850,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. 
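[Editor's sketch for the uuid-ossp caching change above — the change is transparent to callers, generation simply gets cheaper; assumes the extension is available:]

    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
    SELECT uuid_generate_v4();   -- OSSP library state is now cached across calls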
- Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. @@ -3923,7 +3923,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid race condition in checking transaction commit status during - receipt of a NOTIFY message (Marko Tiikkaja) + receipt of a NOTIFY message (Marko Tiikkaja) @@ -3947,7 +3947,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -3960,17 +3960,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. @@ -3989,8 +3989,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix walsender's failure to shut down cleanly when client - is pg_receivexlog (Fujii Masao) + Fix walsender's failure to shut down cleanly when client + is pg_receivexlog (Fujii Masao) @@ -4003,13 +4003,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. + entry to syslog(), and perhaps other related problems. @@ -4022,14 +4022,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent intermittent could not reserve shared memory region + Prevent intermittent could not reserve shared memory region failures on recent Windows versions (MauMau) - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. @@ -4075,19 +4075,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. 
(CVE-2014-0060) @@ -4100,7 +4100,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -4120,7 +4120,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. @@ -4134,12 +4134,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -4166,7 +4166,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -4178,35 +4178,35 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). (CVE-2014-0066) - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. 
So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -4221,7 +4221,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -4241,20 +4241,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 was already consistent at the start of replay, thus possibly allowing hot-standby queries before the database was really consistent. Other symptoms such as PANIC: WAL contains references to invalid - pages were also possible. + pages were also possible. Fix improper locking of btree index pages while replaying - a VACUUM operation in hot-standby mode (Andres Freund, + a VACUUM operation in hot-standby mode (Andres Freund, Heikki Linnakangas, Tom Lane) This error could result in PANIC: WAL contains references to - invalid pages failures. + invalid pages failures. @@ -4272,8 +4272,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - When pause_at_recovery_target - and recovery_target_inclusive are both set, ensure the + When pause_at_recovery_target + and recovery_target_inclusive are both set, ensure the target record is applied before pausing, not after (Heikki Linnakangas) @@ -4286,7 +4286,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. @@ -4299,19 +4299,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. - Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -4335,7 +4335,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -4355,7 +4355,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 A previous patch allowed such keywords to be used without quoting in places such as role identifiers; but it missed cases where a - list of role identifiers was permitted, such as DROP ROLE. + list of role identifiers was permitted, such as DROP ROLE. @@ -4369,19 +4369,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) 
(Tom Lane) - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -4389,21 +4389,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -4428,12 +4428,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. @@ -4441,8 +4441,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix placement of permissions checks in pg_start_backup() - and pg_stop_backup() (Andres Freund, Magnus Hagander) + Fix placement of permissions checks in pg_start_backup() + and pg_stop_backup() (Andres Freund, Magnus Hagander) @@ -4453,31 +4453,31 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. 
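The COPY error-handling fix above concerns data streams like this minimal sketch (hypothetical table t; data fields are tab-separated); if the server connection is lost mid-stream, the client now reports the failure instead of looping:

    CREATE TABLE t (id int, name text);
    COPY t (id, name) FROM STDIN;
    1	one
    2	two
    \.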
@@ -4485,7 +4485,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix possible incorrect printing of filenames - in pg_basebackup's verbose mode (Magnus Hagander) + in pg_basebackup's verbose mode (Magnus Hagander) @@ -4498,20 +4498,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -4522,7 +4522,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) @@ -4536,28 +4536,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. - Avoid using the deprecated dllwrap tool in Cygwin builds + Avoid using the deprecated dllwrap tool in Cygwin builds (Marco Atzeri) - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -4566,20 +4566,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -4631,13 +4631,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would @@ -4649,18 +4649,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. 
This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). - Fix initialization of pg_clog and pg_subtrans + Fix initialization of pg_clog and pg_subtrans during hot standby startup (Andres Freund, Heikki Linnakangas) @@ -4686,7 +4686,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Truncate pg_multixact contents during WAL replay + Truncate pg_multixact contents during WAL replay (Andres Freund) @@ -4708,8 +4708,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -4726,7 +4726,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. @@ -4756,13 +4756,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. @@ -4776,7 +4776,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. @@ -4790,7 +4790,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -4801,10 +4801,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -4814,28 +4814,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make ecpg search for quoted cursor names + Make ecpg search for quoted cursor names case-sensitively (Zoltán Böszörményi) - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. 
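The pg_dumpall fix above involves per-database settings of this shape (hypothetical database mydb), which previously made the emitted dump fail to restore:

    ALTER DATABASE mydb SET default_transaction_read_only = on;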
@@ -4887,7 +4887,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - PostgreSQL case-folds non-ASCII characters only + PostgreSQL case-folds non-ASCII characters only when using a single-byte server encoding. @@ -4895,7 +4895,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix checkpoint memory leak in background writer when wal_level = - hot_standby (Naoya Anzai) + hot_standby (Naoya Anzai) @@ -4908,7 +4908,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix memory overcommit bug when work_mem is using more + Fix memory overcommit bug when work_mem is using more than 24GB of memory (Stephen Frost) @@ -4939,46 +4939,46 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Previously tests like col IS NOT TRUE and col IS - NOT FALSE did not properly factor in NULL values when estimating + Previously tests like col IS NOT TRUE and col IS + NOT FALSE did not properly factor in NULL values when estimating plan costs. - Prevent pushing down WHERE clauses into unsafe - UNION/INTERSECT subqueries (Tom Lane) + Prevent pushing down WHERE clauses into unsafe + UNION/INTERSECT subqueries (Tom Lane) - Subqueries of a UNION or INTERSECT that + Subqueries of a UNION or INTERSECT that contain set-returning functions or volatile functions in their - SELECT lists could be improperly optimized, leading to + SELECT lists could be improperly optimized, leading to run-time errors or incorrect query results. - Fix rare case of failed to locate grouping columns + Fix rare case of failed to locate grouping columns planner failure (Tom Lane) - Fix pg_dump of foreign tables with dropped columns (Andrew Dunstan) + Fix pg_dump of foreign tables with dropped columns (Andrew Dunstan) - Previously such cases could cause a pg_upgrade error. + Previously such cases could cause a pg_upgrade error. - Reorder pg_dump processing of extension-related + Reorder pg_dump processing of extension-related rules and event triggers (Joe Conway) @@ -4986,7 +4986,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Force dumping of extension tables if specified by pg_dump - -t or -n (Joe Conway) + -t or -n (Joe Conway) @@ -4999,19 +4999,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix pg_restore -l with the directory archive to display + Fix pg_restore -l with the directory archive to display the correct format name (Fujii Masao) - Properly record index comments created using UNIQUE - and PRIMARY KEY syntax (Andres Freund) + Properly record index comments created using UNIQUE + and PRIMARY KEY syntax (Andres Freund) - This fixes a parallel pg_restore failure. + This fixes a parallel pg_restore failure. @@ -5041,26 +5041,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix REINDEX TABLE and REINDEX DATABASE + Fix REINDEX TABLE and REINDEX DATABASE to properly revalidate constraints and mark invalidated indexes as valid (Noah Misch) - REINDEX INDEX has always worked properly. + REINDEX INDEX has always worked properly. 
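The REINDEX entry above distinguishes the three forms sketched here (hypothetical names); only the first had always revalidated correctly:

    REINDEX INDEX my_index;      -- always worked properly
    REINDEX TABLE my_table;      -- now revalidates constraints and invalid indexes
    REINDEX DATABASE mydb;       -- likewise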
Fix possible deadlock during concurrent CREATE INDEX - CONCURRENTLY operations (Tom Lane) + CONCURRENTLY operations (Tom Lane) - Fix regexp_matches() handling of zero-length matches + Fix regexp_matches() handling of zero-length matches (Jeevan Chalke) @@ -5084,14 +5084,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) - Allow ALTER DEFAULT PRIVILEGES to operate on schemas + Allow ALTER DEFAULT PRIVILEGES to operate on schemas without requiring CREATE permission (Tom Lane) @@ -5103,24 +5103,24 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Specifically, lessen keyword restrictions for role names, language - names, EXPLAIN and COPY options, and - SET values. This allows COPY ... (FORMAT - BINARY) to work as expected; previously BINARY needed + names, EXPLAIN and COPY options, and + SET values. This allows COPY ... (FORMAT + BINARY) to work as expected; previously BINARY needed to be quoted. - Fix pgp_pub_decrypt() so it works for secret keys with + Fix pgp_pub_decrypt() so it works for secret keys with passwords (Marko Kreen) - Make pg_upgrade use pg_dump - --quote-all-identifiers to avoid problems with keyword changes + Make pg_upgrade use pg_dump + --quote-all-identifiers to avoid problems with keyword changes between releases (Tom Lane) @@ -5134,7 +5134,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ensure that VACUUM ANALYZE still runs the ANALYZE phase + Ensure that VACUUM ANALYZE still runs the ANALYZE phase if its attempt to truncate the file is cancelled due to lock conflicts (Kevin Grittner) @@ -5143,21 +5143,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Avoid possible failure when performing transaction control commands (e.g - ROLLBACK) in prepared queries (Tom Lane) + ROLLBACK) in prepared queries (Tom Lane) Ensure that floating-point data input accepts standard spellings - of infinity on all platforms (Tom Lane) + of infinity on all platforms (Tom Lane) - The C99 standard says that allowable spellings are inf, - +inf, -inf, infinity, - +infinity, and -infinity. Make sure we - recognize these even if the platform's strtod function + The C99 standard says that allowable spellings are inf, + +inf, -inf, infinity, + +infinity, and -infinity. Make sure we + recognize these even if the platform's strtod function doesn't. @@ -5171,7 +5171,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Update time zone data files to tzdata release 2013d + Update time zone data files to tzdata release 2013d for DST law changes in Israel, Morocco, Palestine, and Paraguay. Also, historical zone data corrections for Macquarie Island. @@ -5206,7 +5206,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 However, this release corrects several errors in management of GiST indexes. After installing this update, it is advisable to - REINDEX any GiST indexes that meet one or more of the + REINDEX any GiST indexes that meet one or more of the conditions described below. @@ -5230,7 +5230,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 A connection request containing a database name that begins with - - could be crafted to damage or destroy + - could be crafted to damage or destroy files within the server's data directory, even if the request is eventually rejected. 
(CVE-2013-1899) @@ -5244,9 +5244,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This avoids a scenario wherein random numbers generated by - contrib/pgcrypto functions might be relatively easy for + contrib/pgcrypto functions might be relatively easy for another database user to guess. The risk is only significant when - the postmaster is configured with ssl = on + the postmaster is configured with ssl = on but most connections don't use SSL encryption. (CVE-2013-1900) @@ -5259,7 +5259,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 An unprivileged database user could exploit this mistake to call - pg_start_backup() or pg_stop_backup(), + pg_start_backup() or pg_stop_backup(), thus possibly interfering with creation of routine backups. (CVE-2013-1901) @@ -5267,32 +5267,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix GiST indexes to not use fuzzy geometric comparisons when + Fix GiST indexes to not use fuzzy geometric comparisons when it's not appropriate to do so (Alexander Korotkov) - The core geometric types perform comparisons using fuzzy - equality, but gist_box_same must do exact comparisons, + The core geometric types perform comparisons using fuzzy + equality, but gist_box_same must do exact comparisons, else GiST indexes using it might become inconsistent. After installing - this update, users should REINDEX any GiST indexes on - box, polygon, circle, or point - columns, since all of these use gist_box_same. + this update, users should REINDEX any GiST indexes on + box, polygon, circle, or point + columns, since all of these use gist_box_same. Fix erroneous range-union and penalty logic in GiST indexes that use - contrib/btree_gist for variable-width data types, that is - text, bytea, bit, and numeric + contrib/btree_gist for variable-width data types, that is + text, bytea, bit, and numeric columns (Tom Lane) These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in useless - index bloat. Users are advised to REINDEX such indexes + index bloat. Users are advised to REINDEX such indexes after installing this update. @@ -5307,21 +5307,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in indexes that are unnecessarily inefficient to search. Users are advised to - REINDEX multi-column GiST indexes after installing this + REINDEX multi-column GiST indexes after installing this update. - Fix gist_point_consistent + Fix gist_point_consistent to handle fuzziness consistently (Alexander Korotkov) - Index scans on GiST indexes on point columns would sometimes + Index scans on GiST indexes on point columns would sometimes yield results different from a sequential scan, because - gist_point_consistent disagreed with the underlying + gist_point_consistent disagreed with the underlying operator code about whether to do comparisons exactly or fuzzily. @@ -5332,21 +5332,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This bug could result in incorrect local pin count errors + This bug could result in incorrect local pin count errors during replay, making recovery impossible. 
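Several GiST entries above advise reindexing affected indexes. A catalog-query sketch that lists every GiST index in the current database, as a starting point for choosing what to REINDEX:

    SELECT i.indexrelid::regclass AS gist_index
      FROM pg_index i
      JOIN pg_class c ON c.oid = i.indexrelid
      JOIN pg_am a ON a.oid = c.relam
     WHERE a.amname = 'gist';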
- Fix race condition in DELETE RETURNING (Tom Lane) + Fix race condition in DELETE RETURNING (Tom Lane) - Under the right circumstances, DELETE RETURNING could + Under the right circumstances, DELETE RETURNING could attempt to fetch data from a shared buffer that the current process no longer has any pin on. If some other process changed the buffer - meanwhile, this would lead to garbage RETURNING output, or + meanwhile, this would lead to garbage RETURNING output, or even a crash. @@ -5367,28 +5367,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix to_char() to use ASCII-only case-folding rules where + Fix to_char() to use ASCII-only case-folding rules where appropriate (Tom Lane) This fixes misbehavior of some template patterns that should be - locale-independent, but mishandled I and - i in Turkish locales. + locale-independent, but mishandled I and + i in Turkish locales. - Fix unwanted rejection of timestamp 1999-12-31 24:00:00 + Fix unwanted rejection of timestamp 1999-12-31 24:00:00 (Tom Lane) - Fix logic error when a single transaction does UNLISTEN - then LISTEN (Tom Lane) + Fix logic error when a single transaction does UNLISTEN + then LISTEN (Tom Lane) @@ -5406,7 +5406,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Remove useless picksplit doesn't support secondary split log + Remove useless picksplit doesn't support secondary split log messages (Josh Hansen, Tom Lane) @@ -5427,29 +5427,29 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Eliminate memory leaks in PL/Perl's spi_prepare() function + Eliminate memory leaks in PL/Perl's spi_prepare() function (Alex Hunsaker, Tom Lane) - Fix pg_dumpall to handle database names containing - = correctly (Heikki Linnakangas) + Fix pg_dumpall to handle database names containing + = correctly (Heikki Linnakangas) - Avoid crash in pg_dump when an incorrect connection + Avoid crash in pg_dump when an incorrect connection string is given (Heikki Linnakangas) - Ignore invalid indexes in pg_dump and - pg_upgrade (Michael Paquier, Bruce Momjian) + Ignore invalid indexes in pg_dump and + pg_upgrade (Michael Paquier, Bruce Momjian) @@ -5458,15 +5458,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which - pg_dump wouldn't be expected to dump anyway. - pg_upgrade now also skips invalid indexes rather than + pg_dump wouldn't be expected to dump anyway. + pg_upgrade now also skips invalid indexes rather than failing. - In pg_basebackup, include only the current server + In pg_basebackup, include only the current server version's subdirectory when backing up a tablespace (Heikki Linnakangas) @@ -5474,26 +5474,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add a server version check in pg_basebackup and - pg_receivexlog, so they fail cleanly with version + Add a server version check in pg_basebackup and + pg_receivexlog, so they fail cleanly with version combinations that won't work (Heikki Linnakangas) - Fix contrib/pg_trgm's similarity() function + Fix contrib/pg_trgm's similarity() function to return zero for trigram-less strings (Tom Lane) - Previously it returned NaN due to internal division by zero. + Previously it returned NaN due to internal division by zero. 
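For the pg_trgm fix above, a minimal sketch of the trigram-less case (an empty string yields no trigrams) that formerly produced NaN:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    SELECT similarity('', 'word');   -- now 0 rather than NaN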
- Update time zone data files to tzdata release 2013b + Update time zone data files to tzdata release 2013b for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas. Also, historical zone data corrections for numerous places. @@ -5501,12 +5501,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Also, update the time zone abbreviation files for recent changes in - Russia and elsewhere: CHOT, GET, - IRKT, KGT, KRAT, MAGT, - MAWT, MSK, NOVT, OMST, - TKT, VLAT, WST, YAKT, - YEKT now follow their current meanings, and - VOLT (Europe/Volgograd) and MIST + Russia and elsewhere: CHOT, GET, + IRKT, KGT, KRAT, MAGT, + MAWT, MSK, NOVT, OMST, + TKT, VLAT, WST, YAKT, + YEKT now follow their current meanings, and + VOLT (Europe/Volgograd) and MIST (Antarctica/Macquarie) are added to the default abbreviations list. @@ -5551,7 +5551,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -5635,19 +5635,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. - Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -5659,13 +5659,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix error in vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age implementation (Andres Freund) In installations that have existed for more than vacuum_freeze_min_age + linkend="guc-vacuum-freeze-min-age">vacuum_freeze_min_age transactions, this mistake prevented autovacuum from using partial-table scans, so that a full-table scan would always happen instead. @@ -5673,13 +5673,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -5699,13 +5699,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Reject out-of-range dates in to_date() (Hitoshi Harada) + Reject out-of-range dates in to_date() (Hitoshi Harada) - Fix pg_extension_config_dump() to handle + Fix pg_extension_config_dump() to handle extension-update cases properly (Tom Lane) @@ -5729,13 +5729,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? 
command when not connected to a database (Meng Qingzhong)
@@ -5743,61 +5743,61 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
Fix possible error if a relation file is removed while
- pg_basebackup is running (Heikki Linnakangas)
+ pg_basebackup is running (Heikki Linnakangas)
- Make pg_dump exclude data of unlogged tables when
+ Make pg_dump exclude data of unlogged tables when
running on a hot-standby server (Magnus Hagander)
This would fail anyway because the data is not available on the
standby server, so it seems most convenient to assume
- --no-unlogged-table-data automatically.
+ --no-unlogged-table-data automatically.
- Fix pg_upgrade to deal with invalid indexes safely
+ Fix pg_upgrade to deal with invalid indexes safely
(Bruce Momjian)
- Fix one-byte buffer overrun in libpq's
- PQprintTuples (Xi Wang)
+ Fix one-byte buffer overrun in libpq's
+ PQprintTuples (Xi Wang)
This ancient function is not used anywhere by
- PostgreSQL itself, but it might still be used by some
+ PostgreSQL itself, but it might still be used by some
client code.
- Make ecpglib use translated messages properly
+ Make ecpglib use translated messages properly
(Chen Huajun)
- Properly install ecpg_compat and
- pgtypes libraries on MSVC (Jiang Guiqing)
+ Properly install ecpg_compat and
+ pgtypes libraries on MSVC (Jiang Guiqing)
- Include our version of isinf() in
- libecpg if it's not provided by the system
+ Include our version of isinf() in
+ libecpg if it's not provided by the system
(Jiang Guiqing)
@@ -5817,15 +5817,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
- Make pgxs build executables with the right
- .exe suffix when cross-compiling for Windows
+ Make pgxs build executables with the right
+ .exe suffix when cross-compiling for Windows
(Zoltan Boszormenyi)
- Add new timezone abbreviation FET (Tom Lane)
+ Add new timezone abbreviation FET (Tom Lane)
@@ -5874,13 +5874,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
Fix multiple bugs associated with CREATE INDEX
- CONCURRENTLY (Andres Freund, Tom Lane)
+ CONCURRENTLY (Andres Freund, Tom Lane)
- Fix CREATE INDEX CONCURRENTLY to use
+ Fix CREATE INDEX CONCURRENTLY to use
in-place updates when changing the state of an index's
- pg_index row. This prevents race conditions that could
+ pg_index row. This prevents race conditions that could
cause concurrent sessions to miss updating the target index, thus
resulting in corrupt concurrently-created indexes.
@@ -5888,8 +5888,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
Also, fix various other operations to ensure that they ignore
invalid indexes resulting from a failed CREATE INDEX
- CONCURRENTLY command. The most important of these is
- VACUUM, because an auto-vacuum could easily be launched
+ CONCURRENTLY command. The most important of these is
+ VACUUM, because an auto-vacuum could easily be launched
on the table before corrective action can be taken to fix or remove
the invalid index.
@@ -5926,13 +5926,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
This oversight could prevent subsequent execution of certain
- operations such as CREATE INDEX CONCURRENTLY.
+ operations such as CREATE INDEX CONCURRENTLY.
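The CREATE INDEX CONCURRENTLY fixes above concern builds like the following (hypothetical table t); the catalog query is one way to spot indexes left invalid by a failed concurrent build:

    CREATE INDEX CONCURRENTLY t_col_idx ON t (col);
    -- indexes left invalid by a failed concurrent build:
    SELECT indexrelid::regclass FROM pg_index WHERE NOT indisvalid;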
- Avoid bogus out-of-sequence timeline ID errors in standby + Avoid bogus out-of-sequence timeline ID errors in standby mode (Heikki Linnakangas) @@ -5990,20 +5990,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. - Fix SELECT DISTINCT with index-optimized - MIN/MAX on an inheritance tree (Tom Lane) + Fix SELECT DISTINCT with index-optimized + MIN/MAX on an inheritance tree (Tom Lane) The planner would fail with failed to re-find MinMaxAggInfo - record given this combination of factors. + record given this combination of factors. @@ -6021,10 +6021,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -6032,12 +6032,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) In very unusual circumstances, this oversight could result in passing - incorrect data to a trigger WHEN condition, or to the + incorrect data to a trigger WHEN condition, or to the precheck logic for a foreign-key enforcement trigger. That could result in a crash, or in an incorrect decision about whether to fire the trigger. 
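The trigger-locking fix above applies to triggers of roughly this shape (all names hypothetical); the WHEN clause is the precheck that could previously see stale buffer contents:

    CREATE TRIGGER balance_audit
        AFTER UPDATE ON accounts
        FOR EACH ROW
        WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
        EXECUTE PROCEDURE log_balance_change();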
@@ -6046,7 +6046,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
- Fix ALTER COLUMN TYPE to handle inherited check
+ Fix ALTER COLUMN TYPE to handle inherited check
constraints properly (Pavan Deolasee)
@@ -6058,7 +6058,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
- Fix ALTER EXTENSION SET SCHEMA's failure to move some
+ Fix ALTER EXTENSION SET SCHEMA's failure to move some
subsidiary objects into the new schema (Álvaro Herrera, Dimitri Fontaine)
@@ -6066,14 +6066,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
- Fix REASSIGN OWNED to handle grants on tablespaces
+ Fix REASSIGN OWNED to handle grants on tablespaces
(Álvaro Herrera)
- Ignore incorrect pg_attribute entries for system
+ Ignore incorrect pg_attribute entries for system
columns for views (Tom Lane)
@@ -6087,7 +6087,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
- Fix rule printing to dump INSERT INTO table
+ Fix rule printing to dump INSERT INTO table
DEFAULT VALUES correctly (Tom Lane)
@@ -6095,7 +6095,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
Guard against stack overflow when there are too many
- UNION/INTERSECT/EXCEPT clauses
+ UNION/INTERSECT/EXCEPT clauses
in a query (Tom Lane)
@@ -6117,14 +6117,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
Fix failure to advance XID epoch if XID wraparound happens during a
- checkpoint and wal_level is hot_standby
+ checkpoint and wal_level is hot_standby
(Tom Lane, Andres Freund)
While this mistake had no particular impact on
PostgreSQL itself, it was bad for
- applications that rely on txid_current() and related
+ applications that rely on txid_current() and related
functions: the TXID value would appear to go backwards.
@@ -6132,7 +6132,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
Fix display of
- pg_stat_replication.sync_state at a
+ pg_stat_replication.sync_state at a
page boundary (Kyotaro Horiguchi)
@@ -6146,7 +6146,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
Formerly, this would result in something quite unhelpful, such as
- Non-recoverable failure in name resolution.
+ Non-recoverable failure in name resolution.
@@ -6159,8 +6159,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
- Make pg_ctl more robust about reading the
- postmaster.pid file (Heikki Linnakangas)
+ Make pg_ctl more robust about reading the
+ postmaster.pid file (Heikki Linnakangas)
@@ -6170,15 +6170,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
- Fix possible crash in psql if incorrectly-encoded data
- is presented and the client_encoding setting is a
+ Fix possible crash in psql if incorrectly-encoded data
+ is presented and the client_encoding setting is a
client-only encoding, such as SJIS (Jiang Guiqing)
- Make pg_dump dump SEQUENCE SET items in
+ Make pg_dump dump SEQUENCE SET items in
the data not pre-data section of the archive (Tom Lane)
@@ -6190,25 +6190,25 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400
- Fix bugs in the restore.sql script emitted by
- pg_dump in tar output format (Tom Lane)
+ Fix bugs in the restore.sql script emitted by
+ pg_dump in tar output format (Tom Lane)
The script would fail outright on tables whose names include
upper-case characters. Also, make the script capable of restoring
- data in --inserts mode as well as the regular COPY mode.
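Among the entries above, the rule-printing fix concerns rules of roughly this shape (hypothetical tables t1 and t2), whose stored definition previously deparsed incorrectly:

    CREATE RULE copy_defaults AS ON INSERT TO t1
        DO ALSO INSERT INTO t2 DEFAULT VALUES;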
- Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. This patch updates previous branches so that they will accept both the @@ -6219,67 +6219,67 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix tar files emitted by pg_basebackup to + Fix tar files emitted by pg_basebackup to be POSIX conformant (Brian Weaver, Tom Lane) - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Fix ecpg's ecpg_get_data function to + Fix ecpg's ecpg_get_data function to handle arrays properly (Michael Meskes) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Ensure that make install for an extension creates the - extension installation directory (Cédric Villemain) + Ensure that make install for an extension creates the + extension installation directory (Cédric Villemain) - Previously, this step was missed if MODULEDIR was set in + Previously, this step was missed if MODULEDIR was set in the extension's Makefile. - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -6290,7 +6290,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -6323,7 +6323,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - However, you may need to perform REINDEX operations to + However, you may need to perform REINDEX operations to recover from the effects of the data corruption bug described in the first changelog item below. @@ -6354,7 +6354,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 likely to occur on standby slave servers since those perform much more WAL replay. There is a low probability of corruption of btree and GIN indexes. There is a much higher probability of corruption of - table visibility maps. Fortunately, visibility maps are + table visibility maps. Fortunately, visibility maps are non-critical data in 9.1, so the worst consequence of such corruption in 9.1 installations is transient inefficiency of vacuuming. Table data proper cannot be corrupted by this bug. 
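For the contrib/pageinspect entry above, a sketch of typical btree inspection (hypothetical index name); such calls now take the appropriate buffer locks internally:

    CREATE EXTENSION pageinspect;
    SELECT * FROM bt_page_stats('t_col_idx', 1);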
@@ -6363,18 +6363,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 While no index corruption due to this bug is known to have occurred in the field, as a precautionary measure it is recommended that - production installations REINDEX all btree and GIN + production installations REINDEX all btree and GIN indexes at a convenient time after upgrading to 9.1.6. Also, if you intend to do an in-place upgrade to 9.2.X, before doing - so it is recommended to perform a VACUUM of all tables + so it is recommended to perform a VACUUM of all tables while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will ensure that any lingering wrong data in the visibility maps is corrected before 9.2.X can depend on it. vacuum_cost_delay + linkend="guc-vacuum-cost-delay">vacuum_cost_delay can be adjusted to reduce the performance impact of vacuuming, while causing it to take longer to finish. @@ -6388,15 +6388,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 These errors could result in wrong answers from queries that scan the - same WITH subquery multiple times. + same WITH subquery multiple times. Fix misbehavior when default_transaction_isolation - is set to serializable (Kevin Grittner, Tom Lane, Heikki + linkend="guc-default-transaction-isolation">default_transaction_isolation + is set to serializable (Kevin Grittner, Tom Lane, Heikki Linnakangas) @@ -6409,7 +6409,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Improve selectivity estimation for text search queries involving - prefixes, i.e. word:* patterns (Tom Lane) + prefixes, i.e. word:* patterns (Tom Lane) @@ -6432,10 +6432,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - If we revoke a grant option from some role X, but - X still holds that option via a grant from someone + If we revoke a grant option from some role X, but + X still holds that option via a grant from someone else, we should not recursively revoke the corresponding privilege - from role(s) Y that X had granted it + from role(s) Y that X had granted it to. @@ -6448,7 +6448,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This situation creates circular dependencies that confuse - pg_dump and probably other things. It's confusing + pg_dump and probably other things. It's confusing for humans too, so disallow it. @@ -6462,7 +6462,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make configure probe for mbstowcs_l (Tom + Make configure probe for mbstowcs_l (Tom Lane) @@ -6473,12 +6473,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) + Fix handling of SIGFPE when PL/Perl is in use (Andres Freund) - Perl resets the process's SIGFPE handler to - SIG_IGN, which could result in crashes later on. Restore + Perl resets the process's SIGFPE handler to + SIG_IGN, which could result in crashes later on. Restore the normal Postgres signal handler after initializing PL/Perl. @@ -6497,7 +6497,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. 
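The selectivity entry above is about prefix matches of the form word:*; a sketch assuming a hypothetical docs table with a tsvector column tsv:

    SELECT title FROM docs
     WHERE tsv @@ to_tsquery('english', 'post:*');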
@@ -6505,45 +6505,45 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix bugs in contrib/pg_trgm's LIKE pattern + Fix bugs in contrib/pg_trgm's LIKE pattern analysis code (Fujii Masao) - LIKE queries using a trigram index could produce wrong - results if the pattern contained LIKE escape characters. + LIKE queries using a trigram index could produce wrong + results if the pattern contained LIKE escape characters. - Fix pg_upgrade's handling of line endings on Windows + Fix pg_upgrade's handling of line endings on Windows (Andrew Dunstan) - Previously, pg_upgrade might add or remove carriage + Previously, pg_upgrade might add or remove carriage returns in places such as function bodies. - On Windows, make pg_upgrade use backslash path + On Windows, make pg_upgrade use backslash path separators in the scripts it emits (Andrew Dunstan) - Remove unnecessary dependency on pg_config from - pg_upgrade (Peter Eisentraut) + Remove unnecessary dependency on pg_config from + pg_upgrade (Peter Eisentraut) - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -6593,7 +6593,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - xml_parse() would attempt to fetch external files or + xml_parse() would attempt to fetch external files or URLs as needed to resolve DTD and entity references in an XML value, thus allowing unprivileged database users to attempt to fetch data with the privileges of the database server. While the external data @@ -6606,22 +6606,22 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent access to external files/URLs via contrib/xml2's - xslt_process() (Peter Eisentraut) + Prevent access to external files/URLs via contrib/xml2's + xslt_process() (Peter Eisentraut) - libxslt offers the ability to read and write both + libxslt offers the ability to read and write both files and URLs through stylesheet commands, thus allowing unprivileged database users to both read and write data with the privileges of the database server. Disable that through proper use - of libxslt's security options. (CVE-2012-3488) + of libxslt's security options. (CVE-2012-3488) - Also, remove xslt_process()'s ability to fetch documents + Also, remove xslt_process()'s ability to fetch documents and stylesheets from external files/URLs. While this was a - documented feature, it was long regarded as a bad idea. + documented feature, it was long regarded as a bad idea. The fix for CVE-2012-3489 broke that capability, and rather than expend effort on trying to fix it, we're just going to summarily remove it. @@ -6649,21 +6649,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - If ALTER SEQUENCE was executed on a freshly created or - reset sequence, and then precisely one nextval() call + If ALTER SEQUENCE was executed on a freshly created or + reset sequence, and then precisely one nextval() call was made on it, and then the server crashed, WAL replay would restore the sequence to a state in which it appeared that no - nextval() had been done, thus allowing the first + nextval() had been done, thus allowing the first sequence value to be returned again by the next - nextval() call. In particular this could manifest for - serial columns, since creation of a serial column's sequence - includes an ALTER SEQUENCE OWNED BY step. + nextval() call. 
In particular this could manifest for + serial columns, since creation of a serial column's sequence + includes an ALTER SEQUENCE OWNED BY step. - Fix race condition in enum-type value comparisons (Robert + Fix race condition in enum-type value comparisons (Robert Haas, Tom Lane) @@ -6675,7 +6675,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix txid_current() to report the correct epoch when not + Fix txid_current() to report the correct epoch when not in hot standby (Heikki Linnakangas) @@ -6692,7 +6692,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The master might improperly choose pseudo-servers such as - pg_receivexlog or pg_basebackup + pg_receivexlog or pg_basebackup as the synchronous standby, and then wait indefinitely for them. @@ -6705,14 +6705,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This mistake led to failures reported as out-of-order XID - insertion in KnownAssignedXids. + insertion in KnownAssignedXids. - Ensure the backup_label file is fsync'd after - pg_start_backup() (Dave Kerr) + Ensure the backup_label file is fsync'd after + pg_start_backup() (Dave Kerr) @@ -6723,7 +6723,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 WAL sender background processes neglected to establish a - SIGALRM handler, meaning they would wait forever in + SIGALRM handler, meaning they would wait forever in some corner cases where a timeout ought to happen. @@ -6742,15 +6742,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix LISTEN/NOTIFY to cope better with I/O + Fix LISTEN/NOTIFY to cope better with I/O problems, such as out of disk space (Tom Lane) After a write failure, all subsequent attempts to send more - NOTIFY messages would fail with messages like - Could not read from file "pg_notify/nnnn" at - offset nnnnn: Success. + NOTIFY messages would fail with messages like + Could not read from file "pg_notify/nnnn" at + offset nnnnn: Success. @@ -6763,7 +6763,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The original coding could allow inconsistent behavior in some cases; in particular, an autovacuum could get canceled after less than - deadlock_timeout grace period. + deadlock_timeout grace period. @@ -6775,15 +6775,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix log collector so that log_truncate_on_rotation works + Fix log collector so that log_truncate_on_rotation works during the very first log rotation after server start (Tom Lane) - Fix WITH attached to a nested set operation - (UNION/INTERSECT/EXCEPT) + Fix WITH attached to a nested set operation + (UNION/INTERSECT/EXCEPT) (Tom Lane) @@ -6791,44 +6791,44 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Ensure that a whole-row reference to a subquery doesn't include any - extra GROUP BY or ORDER BY columns (Tom Lane) + extra GROUP BY or ORDER BY columns (Tom Lane) Fix dependencies generated during ALTER TABLE ... ADD - CONSTRAINT USING INDEX (Tom Lane) + CONSTRAINT USING INDEX (Tom Lane) - This command left behind a redundant pg_depend entry + This command left behind a redundant pg_depend entry for the index, which could confuse later operations, notably - ALTER TABLE ... ALTER COLUMN TYPE on one of the indexed + ALTER TABLE ... ALTER COLUMN TYPE on one of the indexed columns. 
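The pg_depend fix above applies to this construct (hypothetical names), which now records only the intended dependencies:

    CREATE UNIQUE INDEX accounts_id_idx ON accounts (id);
    ALTER TABLE accounts
        ADD CONSTRAINT accounts_pkey PRIMARY KEY USING INDEX accounts_id_idx;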
- Fix REASSIGN OWNED to work on extensions (Alvaro Herrera) + Fix REASSIGN OWNED to work on extensions (Alvaro Herrera) - Disallow copying whole-row references in CHECK - constraints and index definitions during CREATE TABLE + Disallow copying whole-row references in CHECK + constraints and index definitions during CREATE TABLE (Tom Lane) - This situation can arise in CREATE TABLE with - LIKE or INHERITS. The copied whole-row + This situation can arise in CREATE TABLE with + LIKE or INHERITS. The copied whole-row variable was incorrectly labeled with the row type of the original table not the new one. Rejecting the case seems reasonable for - LIKE, since the row types might well diverge later. For - INHERITS we should ideally allow it, with an implicit + LIKE, since the row types might well diverge later. For + INHERITS we should ideally allow it, with an implicit coercion to the parent table's row type; but that will require more work than seems safe to back-patch. @@ -6836,7 +6836,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki + Fix memory leak in ARRAY(SELECT ...) subqueries (Heikki Linnakangas, Tom Lane) @@ -6860,7 +6860,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The code could get confused by quantified parenthesized - subexpressions, such as ^(foo)?bar. This would lead to + subexpressions, such as ^(foo)?bar. This would lead to incorrect index optimization of searches for such patterns. @@ -6868,26 +6868,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix bugs with parsing signed - hh:mm and - hh:mm:ss - fields in interval constants (Amit Kapila, Tom Lane) + hh:mm and + hh:mm:ss + fields in interval constants (Amit Kapila, Tom Lane) - Fix pg_dump to better handle views containing partial - GROUP BY lists (Tom Lane) + Fix pg_dump to better handle views containing partial + GROUP BY lists (Tom Lane) - A view that lists only a primary key column in GROUP BY, + A view that lists only a primary key column in GROUP BY, but uses other table columns as if they were grouped, gets marked as depending on the primary key. Improper handling of such primary key - dependencies in pg_dump resulted in poorly-ordered + dependencies in pg_dump resulted in poorly-ordered dumps, which at best would be inefficient to restore and at worst could result in outright failure of a parallel - pg_restore run. + pg_restore run. @@ -6923,14 +6923,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Report errors properly in contrib/xml2's - xslt_process() (Tom Lane) + Report errors properly in contrib/xml2's + xslt_process() (Tom Lane) - Update time zone data files to tzdata release 2012e + Update time zone data files to tzdata release 2012e for DST law changes in Morocco and Tokelau @@ -6962,13 +6962,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - However, if you use the citext data type, and you upgraded - from a previous major release by running pg_upgrade, - you should run CREATE EXTENSION citext FROM unpackaged - to avoid collation-related failures in citext operations. + However, if you use the citext data type, and you upgraded + from a previous major release by running pg_upgrade, + you should run CREATE EXTENSION citext FROM unpackaged + to avoid collation-related failures in citext operations. The same is necessary if you restore a dump from a pre-9.1 database - that contains an instance of the citext data type. 
- If you've already run the CREATE EXTENSION command before + that contains an instance of the citext data type. + If you've already run the CREATE EXTENSION command before upgrading to 9.1.4, you will instead need to do manual catalog updates as explained in the third changelog item below. @@ -6988,12 +6988,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix incorrect password transformation in - contrib/pgcrypto's DES crypt() function + contrib/pgcrypto's DES crypt() function (Solar Designer) - If a password string contained the byte value 0x80, the + If a password string contained the byte value 0x80, the remainder of the password was ignored, causing the password to be much weaker than it appeared. With this fix, the rest of the string is properly included in the DES hash. Any stored password values that are @@ -7004,7 +7004,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ignore SECURITY DEFINER and SET attributes for + Ignore SECURITY DEFINER and SET attributes for a procedural language's call handler (Tom Lane) @@ -7016,16 +7016,16 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make contrib/citext's upgrade script fix collations of - citext arrays and domains over citext + Make contrib/citext's upgrade script fix collations of + citext arrays and domains over citext (Tom Lane) - Release 9.1.2 provided a fix for collations of citext columns + Release 9.1.2 provided a fix for collations of citext columns and indexes in databases upgraded or reloaded from pre-9.1 installations, but that fix was incomplete: it neglected to handle arrays - and domains over citext. This release extends the module's + and domains over citext. This release extends the module's upgrade script to handle these cases. As before, if you have already run the upgrade script, you'll need to run the collation update commands by hand instead. See the 9.1.2 release notes for more @@ -7035,7 +7035,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow numeric timezone offsets in timestamp input to be up to + Allow numeric timezone offsets in timestamp input to be up to 16 hours away from UTC (Tom Lane) @@ -7061,7 +7061,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix text to name and char to name + Fix text to name and char to name casts to perform string truncation correctly in multibyte encodings (Karl Schnaitter) @@ -7069,13 +7069,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix memory copying bug in to_tsquery() (Heikki Linnakangas) + Fix memory copying bug in to_tsquery() (Heikki Linnakangas) - Ensure txid_current() reports the correct epoch when + Ensure txid_current() reports the correct epoch when executed in hot standby (Simon Riggs) @@ -7090,7 +7090,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This bug concerns sub-SELECTs that reference variables coming from the nullable side of an outer join of the surrounding query. In 9.1, queries affected by this bug would fail with ERROR: - Upper-level PlaceHolderVar found where not expected. But in 9.0 and + Upper-level PlaceHolderVar found where not expected. But in 9.0 and 8.4, you'd silently get possibly-wrong answers, since the value transmitted into the subquery wouldn't go to null when it should. 
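As a rough illustration of the outer-join item just above (all names invented), the scalar sub-SELECT pulls t2.x from the nullable side of the join:

    SELECT t1.id,
           (SELECT t2.x + 1) AS x_plus_one
    FROM t1 LEFT JOIN t2 ON t1.id = t2.id;
    -- when t2 is null-extended, t2.x must reach the sub-SELECT as NULL;
    -- unpatched 9.0 and 8.4 could silently get that wrong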
@@ -7098,26 +7098,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix planning of UNION ALL subqueries with output columns + Fix planning of UNION ALL subqueries with output columns that are not simple variables (Tom Lane) Planning of such cases got noticeably worse in 9.1 as a result of a misguided fix for MergeAppend child's targetlist doesn't match - MergeAppend errors. Revert that fix and do it another way. + MergeAppend errors. Revert that fix and do it another way. - Fix slow session startup when pg_attribute is very large + Fix slow session startup when pg_attribute is very large (Tom Lane) - If pg_attribute exceeds one-fourth of - shared_buffers, cache rebuilding code that is sometimes + If pg_attribute exceeds one-fourth of + shared_buffers, cache rebuilding code that is sometimes needed during session start would trigger the synchronized-scan logic, causing it to take many times longer than normal. The problem was particularly acute if many new sessions were starting at once. @@ -7138,8 +7138,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ensure the Windows implementation of PGSemaphoreLock() - clears ImmediateInterruptOK before returning (Tom Lane) + Ensure the Windows implementation of PGSemaphoreLock() + clears ImmediateInterruptOK before returning (Tom Lane) @@ -7166,31 +7166,31 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix COPY FROM to properly handle null marker strings that + Fix COPY FROM to properly handle null marker strings that correspond to invalid encoding (Tom Lane) - A null marker string such as E'\\0' should work, and did + A null marker string such as E'\\0' should work, and did work in the past, but the case got broken in 8.4. - Fix EXPLAIN VERBOSE for writable CTEs containing - RETURNING clauses (Tom Lane) + Fix EXPLAIN VERBOSE for writable CTEs containing + RETURNING clauses (Tom Lane) - Fix PREPARE TRANSACTION to work correctly in the presence + Fix PREPARE TRANSACTION to work correctly in the presence of advisory locks (Tom Lane) - Historically, PREPARE TRANSACTION has simply ignored any + Historically, PREPARE TRANSACTION has simply ignored any session-level advisory locks the session holds, but this case was accidentally broken in 9.1. @@ -7205,14 +7205,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Ignore missing schemas during non-interactive assignments of - search_path (Tom Lane) + search_path (Tom Lane) This re-aligns 9.1's behavior with that of older branches. Previously 9.1 would throw an error for nonexistent schemas mentioned in - search_path settings obtained from places such as - ALTER DATABASE SET. + search_path settings obtained from places such as + ALTER DATABASE SET. @@ -7223,7 +7223,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - This includes cases such as a rewriting ALTER TABLE within + This includes cases such as a rewriting ALTER TABLE within an extension update script, since that uses a transient table behind the scenes. @@ -7237,7 +7237,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Previously, infinite recursion in a function invoked by - auto-ANALYZE could crash worker processes. + auto-ANALYZE could crash worker processes. 
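A hedged sketch of the advisory-lock scenario fixed above (the lock key, table, and transaction identifier are all invented):

    SELECT pg_advisory_lock(12345);    -- session-level advisory lock
    BEGIN;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    PREPARE TRANSACTION 'transfer-1';  -- the case broken in early 9.1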
@@ -7256,13 +7256,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix logging collector to ensure it will restart file rotation - after receiving SIGHUP (Tom Lane) + after receiving SIGHUP (Tom Lane) - Fix too many LWLocks taken failure in GiST indexes (Heikki + Fix too many LWLocks taken failure in GiST indexes (Heikki Linnakangas) @@ -7296,35 +7296,35 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix error handling in pg_basebackup + Fix error handling in pg_basebackup (Thomas Ogrisegg, Fujii Masao) - Fix walsender to not go into a busy loop if connection + Fix walsender to not go into a busy loop if connection is terminated (Fujii Masao) - Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe + Fix memory leak in PL/pgSQL's RETURN NEXT command (Joe Conway) - Fix PL/pgSQL's GET DIAGNOSTICS command when the target + Fix PL/pgSQL's GET DIAGNOSTICS command when the target is the function's first variable (Tom Lane) - Ensure that PL/Perl package-qualifies the _TD variable + Ensure that PL/Perl package-qualifies the _TD variable (Alex Hunsaker) @@ -7349,19 +7349,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix potential access off the end of memory in psql's - expanded display (\x) mode (Peter Eisentraut) + Fix potential access off the end of memory in psql's + expanded display (\x) mode (Peter Eisentraut) - Fix several performance problems in pg_dump when + Fix several performance problems in pg_dump when the database contains many objects (Jeff Janes, Tom Lane) - pg_dump could get very slow if the database contained + pg_dump could get very slow if the database contained many schemas, or if many objects are in dependency loops, or if there are many owned sequences. @@ -7369,14 +7369,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix memory and file descriptor leaks in pg_restore + Fix memory and file descriptor leaks in pg_restore when reading a directory-format archive (Peter Eisentraut) - Fix pg_upgrade for the case that a database stored in a + Fix pg_upgrade for the case that a database stored in a non-default tablespace contains a table in the cluster's default tablespace (Bruce Momjian) @@ -7384,41 +7384,41 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - In ecpg, fix rare memory leaks and possible overwrite - of one byte after the sqlca_t structure (Peter Eisentraut) + In ecpg, fix rare memory leaks and possible overwrite + of one byte after the sqlca_t structure (Peter Eisentraut) - Fix contrib/dblink's dblink_exec() to not leak + Fix contrib/dblink's dblink_exec() to not leak temporary database connections upon error (Tom Lane) - Fix contrib/dblink to report the correct connection name in + Fix contrib/dblink to report the correct connection name in error messages (Kyotaro Horiguchi) - Fix contrib/vacuumlo to use multiple transactions when + Fix contrib/vacuumlo to use multiple transactions when dropping many large objects (Tim Lewis, Robert Haas, Tom Lane) - This change avoids exceeding max_locks_per_transaction when + This change avoids exceeding max_locks_per_transaction when many objects need to be dropped. The behavior can be adjusted with the - new -l (limit) option. + new -l (limit) option. - Update time zone data files to tzdata release 2012c + Update time zone data files to tzdata release 2012c for DST law changes in Antarctica, Armenia, Chile, Cuba, Falkland Islands, Gaza, Haiti, Hebron, Morocco, Syria, and Tokelau Islands; also historical corrections for Canada. 
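A sketch of the GET DIAGNOSTICS case fixed above; the function and table names are invented, and the point is only that the target is the function's first declared variable:

    CREATE FUNCTION touched_rows() RETURNS integer AS $$
    DECLARE
      n integer;   -- first variable: the previously broken target
    BEGIN
      UPDATE t SET x = x;
      GET DIAGNOSTICS n = ROW_COUNT;
      RETURN n;
    END
    $$ LANGUAGE plpgsql;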
@@ -7466,14 +7466,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Require execute permission on the trigger function for - CREATE TRIGGER (Robert Haas) + CREATE TRIGGER (Robert Haas) This missing check could allow another user to execute a trigger function with forged input data, by installing it on a table he owns. This is only of significance for trigger functions marked - SECURITY DEFINER, since otherwise trigger functions run + SECURITY DEFINER, since otherwise trigger functions run as the table owner anyway. (CVE-2012-0866) @@ -7485,7 +7485,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Both libpq and the server truncated the common name + Both libpq and the server truncated the common name extracted from an SSL certificate at 32 bytes. Normally this would cause nothing worse than an unexpected verification failure, but there are some rather-implausible scenarios in which it might allow one @@ -7500,12 +7500,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Convert newlines to spaces in names written in pg_dump + Convert newlines to spaces in names written in pg_dump comments (Robert Haas) - pg_dump was incautious about sanitizing object names + pg_dump was incautious about sanitizing object names that are emitted within SQL comments in its output script. A name containing a newline would at least render the script syntactically incorrect. Maliciously crafted object names could present a SQL @@ -7521,10 +7521,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 An index page split caused by an insertion could sometimes cause a - concurrently-running VACUUM to miss removing index entries + concurrently-running VACUUM to miss removing index entries that it should remove. After the corresponding table rows are removed, the dangling index entries would cause errors (such as could not - read block N in file ...) or worse, silently wrong query results + read block N in file ...) or worse, silently wrong query results after unrelated rows are re-inserted at the now-free table locations. This bug has been present since release 8.2, but occurs so infrequently that it was not diagnosed until now. If you have reason to suspect @@ -7543,22 +7543,22 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 that the contents were transiently invalid. In hot standby mode this can result in a query that's executing in parallel seeing garbage data. Various symptoms could result from that, but the most common one seems - to be invalid memory alloc request size. + to be invalid memory alloc request size. - Fix handling of data-modifying WITH subplans in - READ COMMITTED rechecking (Tom Lane) + Fix handling of data-modifying WITH subplans in + READ COMMITTED rechecking (Tom Lane) - A WITH clause containing - INSERT/UPDATE/DELETE would crash - if the parent UPDATE or DELETE command needed + A WITH clause containing + INSERT/UPDATE/DELETE would crash + if the parent UPDATE or DELETE command needed to be re-evaluated at one or more rows due to concurrent updates - in READ COMMITTED mode. + in READ COMMITTED mode. 
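The crashing statement shape from the item just above, roughly (names invented):

    WITH moved AS (
        DELETE FROM queue WHERE claimed RETURNING id
    )
    UPDATE stats SET n = n + (SELECT count(*) FROM moved) WHERE k = 1;
    -- under READ COMMITTED, a concurrent update that forces the parent
    -- UPDATE to be re-evaluated could crash the server before the fix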
@@ -7589,13 +7589,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix CLUSTER/VACUUM FULL handling of toast + Fix CLUSTER/VACUUM FULL handling of toast values owned by recently-updated rows (Tom Lane) This oversight could lead to duplicate key value violates unique - constraint errors being reported against the toast table's index + constraint errors being reported against the toast table's index during one of these commands. @@ -7617,11 +7617,11 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Support foreign data wrappers and foreign servers in - REASSIGN OWNED (Alvaro Herrera) + REASSIGN OWNED (Alvaro Herrera) - This command failed with unexpected classid errors if + This command failed with unexpected classid errors if it needed to change the ownership of any such objects. @@ -7629,24 +7629,24 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow non-existent values for some settings in ALTER - USER/DATABASE SET (Heikki Linnakangas) + USER/DATABASE SET (Heikki Linnakangas) - Allow default_text_search_config, - default_tablespace, and temp_tablespaces to be + Allow default_text_search_config, + default_tablespace, and temp_tablespaces to be set to names that are not known. This is because they might be known in another database where the setting is intended to be used, or for the tablespace cases because the tablespace might not be created yet. The - same issue was previously recognized for search_path, and + same issue was previously recognized for search_path, and these settings now act like that one. - Fix unsupported node type error caused by COLLATE - in an INSERT expression (Tom Lane) + Fix unsupported node type error caused by COLLATE + in an INSERT expression (Tom Lane) @@ -7669,7 +7669,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Recover from errors occurring during WAL replay of DROP - TABLESPACE (Tom Lane) + TABLESPACE (Tom Lane) @@ -7691,7 +7691,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Sometimes a lock would be logged as being held by transaction - zero. This is at least known to produce assertion failures on + zero. This is at least known to produce assertion failures on slave servers, and might be the cause of more serious problems. @@ -7713,7 +7713,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Prevent emitting misleading consistent recovery state reached + Prevent emitting misleading consistent recovery state reached log message at the beginning of crash recovery (Heikki Linnakangas) @@ -7721,7 +7721,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix initial value of - pg_stat_replication.replay_location + pg_stat_replication.replay_location (Fujii Masao) @@ -7733,7 +7733,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix regular expression back-references with * attached + Fix regular expression back-references with * attached (Tom Lane) @@ -7747,18 +7747,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 A similar problem still afflicts back-references that are embedded in a larger quantified expression, rather than being the immediate subject of the quantifier. This will be addressed in a future - PostgreSQL release. + PostgreSQL release. 
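For the back-reference fix just above, an example of the now-working pattern, a back-reference with * attached:

    SELECT 'abcabcabc' ~ '^(abc)\1*$';   -- true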
Fix recently-introduced memory leak in processing of - inet/cidr values (Heikki Linnakangas) + inet/cidr values (Heikki Linnakangas) - A patch in the December 2011 releases of PostgreSQL + A patch in the December 2011 releases of PostgreSQL caused memory leakage in these operations, which could be significant in scenarios such as building a btree index on such a column. @@ -7767,7 +7767,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix planner's ability to push down index-expression restrictions - through UNION ALL (Tom Lane) + through UNION ALL (Tom Lane) @@ -7778,19 +7778,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix planning of WITH clauses referenced in - UPDATE/DELETE on an inherited table + Fix planning of WITH clauses referenced in + UPDATE/DELETE on an inherited table (Tom Lane) - This bug led to could not find plan for CTE failures. + This bug led to could not find plan for CTE failures. - Fix GIN cost estimation to handle column IN (...) + Fix GIN cost estimation to handle column IN (...) index conditions (Marti Raudsepp) @@ -7813,8 +7813,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix dangling pointer after CREATE TABLE AS/SELECT - INTO in a SQL-language function (Tom Lane) + Fix dangling pointer after CREATE TABLE AS/SELECT + INTO in a SQL-language function (Tom Lane) @@ -7853,14 +7853,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This function crashes when handed a typeglob or certain read-only - objects such as $^V. Make plperl avoid passing those to + objects such as $^V. Make plperl avoid passing those to it. - In pg_dump, don't dump contents of an extension's + In pg_dump, don't dump contents of an extension's configuration tables if the extension itself is not being dumped (Tom Lane) @@ -7868,32 +7868,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Improve pg_dump's handling of inherited table columns + Improve pg_dump's handling of inherited table columns (Tom Lane) - pg_dump mishandled situations where a child column has + pg_dump mishandled situations where a child column has a different default expression than its parent column. If the default is textually identical to the parent's default, but not actually the same (for instance, because of schema search path differences) it would not be recognized as different, so that after dump and restore the child would be allowed to inherit the parent's default. Child columns - that are NOT NULL where their parent is not could also be + that are NOT NULL where their parent is not could also be restored subtly incorrectly. - Fix pg_restore's direct-to-database mode for + Fix pg_restore's direct-to-database mode for INSERT-style table data (Tom Lane) Direct-to-database restores from archive files made with - - Cope with invalid pre-existing search_path settings during - CREATE EXTENSION (Tom Lane) + Cope with invalid pre-existing search_path settings during + CREATE EXTENSION (Tom Lane) @@ -8453,14 +8453,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Ensure walsender processes respond promptly to SIGTERM + Ensure walsender processes respond promptly to SIGTERM (Magnus Hagander) - Exclude postmaster.opts from base backups + Exclude postmaster.opts from base backups (Magnus Hagander) @@ -8473,20 +8473,20 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Formerly, these would not be displayed correctly in the - pg_settings view. + pg_settings view. 
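To make the could not find plan for CTE item above concrete, a minimal shape (names invented; parent is an inheritance parent):

    WITH doomed AS (
        SELECT id FROM parent WHERE stamp < now() - interval '30 days'
    )
    DELETE FROM parent WHERE id IN (SELECT id FROM doomed);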
- Fix incorrect field alignment in ecpg's SQLDA area + Fix incorrect field alignment in ecpg's SQLDA area (Zoltan Boszormenyi) - Preserve blank lines within commands in psql's command + Preserve blank lines within commands in psql's command history (Robert Haas) @@ -8498,41 +8498,41 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid platform-specific infinite loop in pg_dump + Avoid platform-specific infinite loop in pg_dump (Steve Singer) - Fix compression of plain-text output format in pg_dump + Fix compression of plain-text output format in pg_dump (Adrian Klaver and Tom Lane) - pg_dump has historically understood -Z with - no -F switch to mean that it should emit a gzip-compressed + pg_dump has historically understood -Z with + no -F switch to mean that it should emit a gzip-compressed version of its plain text output. Restore that behavior. - Fix pg_dump to dump user-defined casts between + Fix pg_dump to dump user-defined casts between auto-generated types, such as table rowtypes (Tom Lane) - Fix missed quoting of foreign server names in pg_dump + Fix missed quoting of foreign server names in pg_dump (Tom Lane) - Assorted fixes for pg_upgrade (Bruce Momjian) + Assorted fixes for pg_upgrade (Bruce Momjian) @@ -8556,15 +8556,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Restore the pre-9.1 behavior that PL/Perl functions returning - void ignore the result value of their last Perl statement; + void ignore the result value of their last Perl statement; 9.1.0 would throw an error if that statement returned a reference. Also, make sure it works to return a string value for a composite type, so long as the string meets the type's input format. In addition, throw errors for attempts to return Perl arrays or hashes when the function's declared result type is not an array or composite type, respectively. (Pre-9.1 versions rather uselessly returned - strings like ARRAY(0x221a9a0) or - HASH(0x221aa90) in such cases.) + strings like ARRAY(0x221a9a0) or + HASH(0x221aa90) in such cases.) @@ -8577,7 +8577,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Use the preferred version of xsubpp to build PL/Perl, + Use the preferred version of xsubpp to build PL/Perl, not necessarily the operating system's main copy (David Wheeler and Alex Hunsaker) @@ -8599,14 +8599,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Change all the contrib extension script files to report - a useful error message if they are fed to psql + Change all the contrib extension script files to report + a useful error message if they are fed to psql (Andrew Dunstan and Tom Lane) This should help teach people about the new method of using - CREATE EXTENSION to load these files. In most cases, + CREATE EXTENSION to load these files. In most cases, sourcing the scripts directly would fail anyway, but with harder-to-interpret messages. @@ -8614,19 +8614,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix incorrect coding in contrib/dict_int and - contrib/dict_xsyn (Tom Lane) + Fix incorrect coding in contrib/dict_int and + contrib/dict_xsyn (Tom Lane) Some functions incorrectly assumed that memory returned by - palloc() is guaranteed zeroed. + palloc() is guaranteed zeroed. 
- Remove contrib/sepgsql tests from the regular regression + Remove contrib/sepgsql tests from the regular regression test mechanism (Tom Lane) @@ -8639,14 +8639,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix assorted errors in contrib/unaccent's configuration + Fix assorted errors in contrib/unaccent's configuration file parsing (Tom Lane) - Honor query cancel interrupts promptly in pgstatindex() + Honor query cancel interrupts promptly in pgstatindex() (Robert Haas) @@ -8660,7 +8660,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Revert unintentional enabling of WAL_DEBUG (Robert Haas) + Revert unintentional enabling of WAL_DEBUG (Robert Haas) @@ -8695,15 +8695,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Map Central America Standard Time to CST6, not - CST6CDT, because DST is generally not observed anywhere in + Map Central America Standard Time to CST6, not + CST6CDT, because DST is generally not observed anywhere in Central America. - Update time zone data files to tzdata release 2011n + Update time zone data files to tzdata release 2011n for DST law changes in Brazil, Cuba, Fiji, Palestine, Russia, and Samoa; also historical corrections for Alaska and British East Africa. @@ -8744,7 +8744,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make pg_options_to_table return NULL for an option with no + Make pg_options_to_table return NULL for an option with no value (Tom Lane) @@ -8768,8 +8768,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Fix explicit reference to pg_temp schema in CREATE - TEMPORARY TABLE (Robert Haas) + Fix explicit reference to pg_temp schema in CREATE + TEMPORARY TABLE (Robert Haas) @@ -8794,9 +8794,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Overview - This release shows PostgreSQL moving beyond the + This release shows PostgreSQL moving beyond the traditional relational-database feature set with new, ground-breaking - functionality that is unique to PostgreSQL. + functionality that is unique to PostgreSQL. The streaming replication feature introduced in release 9.0 is significantly enhanced by adding a synchronous-replication option, streaming backups, and monitoring improvements. @@ -8831,7 +8831,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add extensions which - simplify packaging of additions to PostgreSQL + simplify packaging of additions to PostgreSQL @@ -8844,32 +8844,32 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Support unlogged tables using the UNLOGGED + Support unlogged tables using the UNLOGGED option in CREATE - TABLE + TABLE Allow data-modification commands - (INSERT/UPDATE/DELETE) in - WITH clauses + (INSERT/UPDATE/DELETE) in + WITH clauses Add nearest-neighbor (order-by-operator) searching to GiST indexes + linkend="GiST">GiST indexes Add a SECURITY - LABEL command and support for - SELinux permissions control + LABEL command and support for + SELinux permissions control @@ -8912,7 +8912,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Change the default value of standard_conforming_strings + linkend="guc-standard-conforming-strings">standard_conforming_strings to on (Robert Haas) @@ -8920,8 +8920,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 By default, backslashes are now ordinary characters in string literals, not escape characters. This change removes a long-standing incompatibility with the SQL standard. 
escape_string_warning - has produced warnings about this usage for years. E'' + linkend="guc-escape-string-warning">escape_string_warning + has produced warnings about this usage for years. E'' strings are the proper way to embed backslash escapes in strings and are unaffected by this change. @@ -8955,12 +8955,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 For example, disallow - composite_value.text and - text(composite_value). + composite_value.text and + text(composite_value). Unintentional uses of this syntax have frequently resulted in bug reports; although it was not a bug, it seems better to go back to rejecting such expressions. - The CAST and :: syntaxes are still available + The CAST and :: syntaxes are still available for use when a cast of an entire composite value is actually intended. @@ -8972,10 +8972,10 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 When a domain is based on an array type, it is allowed to look - through the domain type to access the array elements, including + through the domain type to access the array elements, including subscripting the domain value to fetch or assign an element. Assignment to an element of such a domain value, for instance via - UPDATE ... SET domaincol[5] = ..., will now result in + UPDATE ... SET domaincol[5] = ..., will now result in rechecking the domain type's constraints, whereas before the checks were skipped. @@ -8993,7 +8993,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Change string_to_array() + linkend="array-functions-table">string_to_array() to return an empty array for a zero-length string (Pavel Stehule) @@ -9006,8 +9006,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Change string_to_array() - so a NULL separator splits the string into characters + linkend="array-functions-table">string_to_array() + so a NULL separator splits the string into characters (Pavel Stehule) @@ -9031,8 +9031,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Triggers can now be fired in three cases: BEFORE, - AFTER, or INSTEAD OF some action. + Triggers can now be fired in three cases: BEFORE, + AFTER, or INSTEAD OF some action. Trigger function authors should verify that their logic behaves sanely in all three cases. @@ -9040,7 +9040,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Require superuser or CREATEROLE permissions in order to + Require superuser or CREATEROLE permissions in order to set comments on roles (Tom Lane) @@ -9057,12 +9057,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Change pg_last_xlog_receive_location() + linkend="functions-recovery-info-table">pg_last_xlog_receive_location() so it never moves backwards (Fujii Masao) - Previously, the value of pg_last_xlog_receive_location() + Previously, the value of pg_last_xlog_receive_location() could move backward when streaming replication is restarted. 
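The two string_to_array() behavior changes above, illustrated:

    SELECT string_to_array('', ',');       -- {}  (empty array)
    SELECT string_to_array('abc', NULL);   -- {a,b,c}  (split into characters)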
@@ -9070,7 +9070,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Have logging of replication connections honor log_connections + linkend="guc-log-connections">log_connections (Magnus Hagander) @@ -9090,12 +9090,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Change PL/pgSQL's RAISE command without parameters + Change PL/pgSQL's RAISE command without parameters to be catchable by the attached exception block (Piyush Newe) - Previously RAISE in a code block was always scoped to + Previously RAISE in a code block was always scoped to an attached exception block, so it was uncatchable at the same scope. @@ -9154,7 +9154,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 All contrib modules are now installed with CREATE EXTENSION + linkend="SQL-CREATEEXTENSION">CREATE EXTENSION rather than by manually invoking their SQL scripts (Dimitri Fontaine, Tom Lane) @@ -9164,7 +9164,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 module, use CREATE EXTENSION ... FROM unpackaged to wrap the existing contrib module's objects into an extension. When updating from a pre-9.0 version, drop the contrib module's objects - using its old uninstall script, then use CREATE EXTENSION. + using its old uninstall script, then use CREATE EXTENSION. @@ -9180,26 +9180,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Make pg_stat_reset() + linkend="monitoring-stats-funcs-table">pg_stat_reset() reset all database-level statistics (Tomas Vondra) - Some pg_stat_database counters were not being reset. + Some pg_stat_database counters were not being reset. Fix some information_schema.triggers + linkend="infoschema-triggers">information_schema.triggers column names to match the new SQL-standard names (Dean Rasheed) - Treat ECPG cursor names as case-insensitive + Treat ECPG cursor names as case-insensitive (Zoltan Boszormenyi) @@ -9228,9 +9228,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Support unlogged tables using the UNLOGGED + Support unlogged tables using the UNLOGGED option in CREATE - TABLE (Robert Haas) + TABLE (Robert Haas) @@ -9244,8 +9244,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow FULL OUTER JOIN to be implemented as a - hash join, and allow either side of a LEFT OUTER JOIN - or RIGHT OUTER JOIN to be hashed (Tom Lane) + hash join, and allow either side of a LEFT OUTER JOIN + or RIGHT OUTER JOIN to be hashed (Tom Lane) @@ -9270,7 +9270,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Improve performance of commit_siblings + linkend="guc-commit-siblings">commit_siblings (Greg Smith) @@ -9289,7 +9289,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Avoid leaving data files open after blind writes + Avoid leaving data files open after blind writes (Alvaro Herrera) @@ -9317,7 +9317,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This allows better optimization of queries that use ORDER - BY, LIMIT, or MIN/MAX with + BY, LIMIT, or MIN/MAX with inherited tables. @@ -9346,34 +9346,34 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Support host names and host suffixes - (e.g. .example.com) in pg_hba.conf + (e.g. .example.com) in pg_hba.conf (Peter Eisentraut) - Previously only host IP addresses and CIDR + Previously only host IP addresses and CIDR values were supported. 
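The UNLOGGED option from the performance items above, in use (table definition invented); unlogged contents skip WAL, so writes are much faster, at the cost of the table being emptied after a crash:

    CREATE UNLOGGED TABLE session_cache (
        key text PRIMARY KEY,
        val text
    );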
- Support the key word all in the host column of pg_hba.conf + Support the key word all in the host column of pg_hba.conf (Peter Eisentraut) - Previously people used 0.0.0.0/0 or ::/0 + Previously people used 0.0.0.0/0 or ::/0 for this. - Reject local lines in pg_hba.conf + Reject local lines in pg_hba.conf on platforms that don't support Unix-socket connections (Magnus Hagander) @@ -9386,14 +9386,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow GSSAPI + Allow GSSAPI to be used to authenticate to servers via SSPI (Christian Ullrich) + linkend="sspi-auth">SSPI (Christian Ullrich) - Specifically this allows Unix-based GSSAPI clients - to do SSPI authentication with Windows servers. + Specifically this allows Unix-based GSSAPI clients + to do SSPI authentication with Windows servers. @@ -9414,14 +9414,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Rewrite peer + Rewrite peer authentication to avoid use of credential control messages (Tom Lane) This change makes the peer authentication code simpler and better-performing. However, it requires the platform to provide the - getpeereid function or an equivalent socket operation. + getpeereid function or an equivalent socket operation. So far as is known, the only platform for which peer authentication worked before and now will not is pre-5.0 NetBSD. @@ -9440,19 +9440,19 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add details to the logging of restartpoints and checkpoints, which is controlled by log_checkpoints + linkend="guc-log-checkpoints">log_checkpoints (Fujii Masao, Greg Smith) - New details include WAL file and sync activity. + New details include WAL file and sync activity. Add log_file_mode + linkend="guc-log-file-mode">log_file_mode which controls the permissions on log files created by the logging collector (Martin Pihlak) @@ -9460,7 +9460,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Reduce the default maximum line length for syslog + Reduce the default maximum line length for syslog logging to 900 bytes plus prefixes (Noah Misch) @@ -9482,7 +9482,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add client_hostname column to pg_stat_activity + linkend="monitoring-stats-views-table">pg_stat_activity (Peter Eisentraut) @@ -9494,7 +9494,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add pg_stat_xact_* + linkend="monitoring-stats-views-table">pg_stat_xact_* statistics functions and views (Joel Jacobson) @@ -9515,15 +9515,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add columns showing the number of vacuum and analyze operations in pg_stat_*_tables + linkend="monitoring-stats-views-table">pg_stat_*_tables views (Magnus Hagander) - Add buffers_backend_fsync column to pg_stat_bgwriter + Add buffers_backend_fsync column to pg_stat_bgwriter (Greg Smith) @@ -9545,13 +9545,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Provide auto-tuning of wal_buffers (Greg + linkend="guc-wal-buffers">wal_buffers (Greg Smith) - By default, the value of wal_buffers is now chosen - automatically based on the value of shared_buffers. + By default, the value of wal_buffers is now chosen + automatically based on the value of shared_buffers. @@ -9598,7 +9598,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 synchronous_standby_names setting. Synchronous replication can be enabled or disabled on a per-transaction basis using the - synchronous_commit + synchronous_commit setting. 
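Per-transaction control of synchronous replication, as described just above:

    BEGIN;
    SET LOCAL synchronous_commit = off;  -- this transaction does not
                                         -- wait for the standby
    -- ... work ...
    COMMIT;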
@@ -9619,13 +9619,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add - replication_timeout + replication_timeout setting (Fujii Masao, Heikki Linnakangas) Replication connections that are idle for more than the - replication_timeout interval will be terminated + replication_timeout interval will be terminated automatically. Formerly, a failed connection was typically not detected until the TCP timeout elapsed, which is inconveniently long in many situations. @@ -9635,7 +9635,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add command-line tool pg_basebackup + linkend="app-pgbasebackup">pg_basebackup for creating a new standby server or database backup (Magnus Hagander) @@ -9667,8 +9667,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add system view pg_stat_replication - which displays activity of WAL sender processes (Itagaki + linkend="monitoring-stats-views-table">pg_stat_replication + which displays activity of WAL sender processes (Itagaki Takahiro, Simon Riggs) @@ -9680,7 +9680,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add monitoring function pg_last_xact_replay_timestamp() + linkend="functions-recovery-info-table">pg_last_xact_replay_timestamp() (Fujii Masao) @@ -9702,7 +9702,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add configuration parameter hot_standby_feedback + linkend="guc-hot-standby-feedback">hot_standby_feedback to enable standbys to postpone cleanup of old row versions on the primary (Simon Riggs) @@ -9715,7 +9715,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add the pg_stat_database_conflicts + linkend="monitoring-stats-views-table">pg_stat_database_conflicts system view to show queries that have been canceled and the reason (Magnus Hagander) @@ -9728,8 +9728,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add a conflicts count to pg_stat_database + Add a conflicts count to pg_stat_database (Magnus Hagander) @@ -9754,7 +9754,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add ERRCODE_T_R_DATABASE_DROPPED + linkend="errcodes-table">ERRCODE_T_R_DATABASE_DROPPED error code to report recovery conflicts due to dropped databases (Tatsuo Ishii) @@ -9780,18 +9780,18 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 The new functions are pg_xlog_replay_pause(), + linkend="functions-recovery-control-table">pg_xlog_replay_pause(), pg_xlog_replay_resume(), + linkend="functions-recovery-control-table">pg_xlog_replay_resume(), and the status function pg_is_xlog_replay_paused(). + linkend="functions-recovery-control-table">pg_is_xlog_replay_paused(). - Add recovery.conf setting - pause_at_recovery_target + Add recovery.conf setting + pause_at_recovery_target to pause recovery at target (Simon Riggs) @@ -9804,14 +9804,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add the ability to create named restore points using pg_create_restore_point() + linkend="functions-admin-backup-table">pg_create_restore_point() (Jaime Casanova) These named restore points can be specified as recovery - targets using the new recovery.conf setting - recovery_target_name. + targets using the new recovery.conf setting + recovery_target_name. 
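Combining the two recovery-target features just described (the restore-point name is invented):

    SELECT pg_create_restore_point('before_schema_change');
    -- later, in recovery.conf on the server being recovered:
    --   recovery_target_name = 'before_schema_change'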
@@ -9830,7 +9830,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add restart_after_crash + linkend="guc-restart-after-crash">restart_after_crash setting which disables automatic server restart after a backend crash (Robert Haas) @@ -9844,8 +9844,8 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow recovery.conf - to use the same quoting behavior as postgresql.conf + linkend="recovery-config">recovery.conf + to use the same quoting behavior as postgresql.conf (Dimitri Fontaine) @@ -9877,7 +9877,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 single MVCC snapshot would be used for the entire transaction, which allowed certain documented anomalies. The old snapshot isolation behavior is still available by requesting the REPEATABLE READ + linkend="xact-repeatable-read">REPEATABLE READ isolation level. @@ -9885,30 +9885,30 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow data-modification commands - (INSERT/UPDATE/DELETE) in - WITH clauses + (INSERT/UPDATE/DELETE) in + WITH clauses (Marko Tiikkaja, Hitoshi Harada) - These commands can use RETURNING to pass data up to the + These commands can use RETURNING to pass data up to the containing query. - Allow WITH - clauses to be attached to INSERT, UPDATE, - DELETE statements (Marko Tiikkaja, Hitoshi Harada) + Allow WITH + clauses to be attached to INSERT, UPDATE, + DELETE statements (Marko Tiikkaja, Hitoshi Harada) Allow non-GROUP - BY columns in the query target list when the primary - key is specified in the GROUP BY clause (Peter + BY columns in the query target list when the primary + key is specified in the GROUP BY clause (Peter Eisentraut) @@ -9920,13 +9920,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow use of the key word DISTINCT in UNION/INTERSECT/EXCEPT + Allow use of the key word DISTINCT in UNION/INTERSECT/EXCEPT clauses (Tom Lane) - DISTINCT is the default behavior so use of this + DISTINCT is the default behavior so use of this key word is redundant, but the SQL standard allows it. @@ -9934,13 +9934,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Fix ordinary queries with rules to use the same snapshot behavior - as EXPLAIN ANALYZE (Marko Tiikkaja) + as EXPLAIN ANALYZE (Marko Tiikkaja) - Previously EXPLAIN ANALYZE used slightly different + Previously EXPLAIN ANALYZE used slightly different snapshot timing for queries involving rules. The - EXPLAIN ANALYZE behavior was judged to be more logical. + EXPLAIN ANALYZE behavior was judged to be more logical. @@ -9962,7 +9962,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Previously collation (the sort ordering of text strings) could only be chosen at database creation. Collation can now be set per column, domain, index, or - expression, via the SQL-standard COLLATE clause. + expression, via the SQL-standard COLLATE clause. @@ -9980,17 +9980,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add extensions which - simplify packaging of additions to PostgreSQL + simplify packaging of additions to PostgreSQL (Dimitri Fontaine, Tom Lane) Extensions are controlled by the new CREATE/ALTER/DROP EXTENSION + linkend="SQL-CREATEEXTENSION">CREATE/ALTER/DROP EXTENSION commands. This replaces ad-hoc methods of grouping objects that - are added to a PostgreSQL installation. + are added to a PostgreSQL installation. 
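The extension lifecycle commands named just above, using a contrib module mentioned elsewhere in these notes as the example:

    CREATE EXTENSION pg_trgm;          -- install the packaged objects
    ALTER EXTENSION pg_trgm UPDATE;    -- apply an extension update script
    DROP EXTENSION pg_trgm;            -- remove all of its objects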
@@ -10003,7 +10003,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 This allows data stored outside the database to be used like - native PostgreSQL-stored data. Foreign tables + native PostgreSQL-stored data. Foreign tables are currently read-only, however. @@ -10011,7 +10011,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow new values to be added to an existing enum type via - ALTER TYPE (Andrew + ALTER TYPE (Andrew Dunstan) @@ -10019,7 +10019,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add ALTER TYPE ... - ADD/DROP/ALTER/RENAME ATTRIBUTE (Peter Eisentraut) + ADD/DROP/ALTER/RENAME ATTRIBUTE (Peter Eisentraut) @@ -10030,28 +10030,28 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <command>ALTER</> Object + <command>ALTER</command> Object - Add RESTRICT/CASCADE to ALTER TYPE operations + Add RESTRICT/CASCADE to ALTER TYPE operations on typed tables (Peter Eisentraut) This controls - ADD/DROP/ALTER/RENAME - ATTRIBUTE cascading behavior. + ADD/DROP/ALTER/RENAME + ATTRIBUTE cascading behavior. - Support ALTER TABLE name {OF | NOT OF} - type + Support ALTER TABLE name {OF | NOT OF} + type (Noah Misch) @@ -10064,7 +10064,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add support for more object types in ALTER ... SET - SCHEMA commands (Dimitri Fontaine) + SCHEMA commands (Dimitri Fontaine) @@ -10079,7 +10079,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-CREATETABLE"><command>CREATE/ALTER TABLE</></link> + <link linkend="SQL-CREATETABLE"><command>CREATE/ALTER TABLE</command></link> @@ -10098,13 +10098,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow ALTER TABLE + Allow ALTER TABLE to add foreign keys without validation (Simon Riggs) - The new option is called NOT VALID. The constraint's - state can later be modified to VALIDATED and validation + The new option is called NOT VALID. The constraint's + state can later be modified to VALIDATED and validation checks performed. Together these allow you to add a foreign key with minimal impact on read and write operations. @@ -10118,17 +10118,17 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - For example, converting a varchar column to - text no longer requires a rewrite of the table. + For example, converting a varchar column to + text no longer requires a rewrite of the table. However, increasing the length constraint on a - varchar column still requires a table rewrite. + varchar column still requires a table rewrite. Add CREATE TABLE IF - NOT EXISTS syntax (Robert Haas) + NOT EXISTS syntax (Robert Haas) @@ -10163,7 +10163,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add a SECURITY - LABEL command (KaiGai Kohei) + LABEL command (KaiGai Kohei) @@ -10197,7 +10197,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Make TRUNCATE ... RESTART - IDENTITY restart sequences transactionally (Steve + IDENTITY restart sequences transactionally (Steve Singer) @@ -10211,26 +10211,26 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-COPY"><command>COPY</></link> + <link linkend="SQL-COPY"><command>COPY</command></link> - Add ENCODING option to COPY TO/FROM (Hitoshi + Add ENCODING option to COPY TO/FROM (Hitoshi Harada, Itagaki Takahiro) - This allows the encoding of the COPY file to be + This allows the encoding of the COPY file to be specified separately from client encoding. 
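The new ENCODING option in context (file path and table name invented):

    COPY mytable FROM '/tmp/data.csv' WITH (FORMAT csv, ENCODING 'LATIN1');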
- Add bidirectional COPY + Add bidirectional COPY protocol support (Fujii Masao) @@ -10244,13 +10244,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-EXPLAIN"><command>EXPLAIN</></link> + <link linkend="SQL-EXPLAIN"><command>EXPLAIN</command></link> - Make EXPLAIN VERBOSE show the function call expression + Make EXPLAIN VERBOSE show the function call expression in a FunctionScan node (Tom Lane) @@ -10260,21 +10260,21 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-VACUUM"><command>VACUUM</></link> + <link linkend="SQL-VACUUM"><command>VACUUM</command></link> Add additional details to the output of VACUUM FULL VERBOSE - and CLUSTER VERBOSE + linkend="SQL-VACUUM">VACUUM FULL VERBOSE + and CLUSTER VERBOSE (Itagaki Takahiro) New information includes the live and dead tuple count and - whether CLUSTER is using an index to rebuild. + whether CLUSTER is using an index to rebuild. @@ -10294,13 +10294,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-CLUSTER"><command>CLUSTER</></link> + <link linkend="SQL-CLUSTER"><command>CLUSTER</command></link> - Allow CLUSTER to sort the table rather than scanning + Allow CLUSTER to sort the table rather than scanning the index when it seems likely to be cheaper (Leonardo Francalanci) @@ -10317,12 +10317,12 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add nearest-neighbor (order-by-operator) searching to GiST indexes (Teodor Sigaev, Tom Lane) + linkend="GiST">GiST indexes (Teodor Sigaev, Tom Lane) - This allows GiST indexes to quickly return the - N closest values in a query with LIMIT. + This allows GiST indexes to quickly return the + N closest values in a query with LIMIT. For example point '(101,456)' LIMIT 10; @@ -10334,19 +10334,19 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow GIN indexes to index null + Allow GIN indexes to index null and empty values (Tom Lane) - This allows full GIN index scans, and fixes various + This allows full GIN index scans, and fixes various corner cases in which GIN scans would fail. - Allow GIN indexes to + Allow GIN indexes to better recognize duplicate search entries (Tom Lane) @@ -10358,12 +10358,12 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Fix GiST indexes to be fully + Fix GiST indexes to be fully crash-safe (Heikki Linnakangas) - Previously there were rare cases where a REINDEX + Previously there were rare cases where a REINDEX would be required (you would be informed). @@ -10381,19 +10381,19 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow numeric to use a more compact, two-byte header + Allow numeric to use a more compact, two-byte header in common cases (Robert Haas) - Previously all numeric values had four-byte headers; + Previously all numeric values had four-byte headers; this change saves on disk storage. - Add support for dividing money by money + Add support for dividing money by money (Andy Balholm) @@ -10431,9 +10431,9 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - This avoids possible could not identify a comparison function + This avoids possible could not identify a comparison function failures at runtime, if it is possible to implement the query without - sorting. Also, ANALYZE won't try to use inappropriate + sorting. Also, ANALYZE won't try to use inappropriate statistics-gathering methods for columns of such composite types. 
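The money arithmetic addition above, illustrated; dividing money by money cancels the currency unit and yields a plain number:

    SELECT '20.00'::money / '4.00'::money;   -- 5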
@@ -10447,15 +10447,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add support for casting between money and numeric + Add support for casting between money and numeric (Andy Balholm) - Add support for casting from int4 and int8 - to money (Joey Adams) + Add support for casting from int4 and int8 + to money (Joey Adams) @@ -10476,15 +10476,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="functions-xml"><acronym>XML</></link> + <link linkend="functions-xml"><acronym>XML</acronym></link> - Add XML function XMLEXISTS and xpath_exists() + Add XML function XMLEXISTS and xpath_exists() functions (Mike Fowler) @@ -10495,17 +10495,17 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add XML functions xml_is_well_formed(), + Add XML functions xml_is_well_formed(), xml_is_well_formed_document(), + linkend="xml-is-well-formed">xml_is_well_formed_document(), xml_is_well_formed_content() + linkend="xml-is-well-formed">xml_is_well_formed_content() (Mike Fowler) - These check whether the input is properly-formed XML. + These check whether the input is properly-formed XML. They provide functionality that was previously available only in the deprecated contrib/xml2 module. @@ -10525,8 +10525,8 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add SQL function format(text, ...), which - behaves analogously to C's printf() (Pavel Stehule, + linkend="format">format(text, ...), which + behaves analogously to C's printf() (Pavel Stehule, Robert Haas) @@ -10539,13 +10539,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add string functions concat(), + linkend="functions-string-other">concat(), concat_ws(), - left(), - right(), + linkend="functions-string-other">concat_ws(), + left(), + right(), and reverse() + linkend="functions-string-other">reverse() (Pavel Stehule) @@ -10557,7 +10557,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add function pg_read_binary_file() + linkend="functions-admin-genfile">pg_read_binary_file() to read binary files (Dimitri Fontaine, Itagaki Takahiro) @@ -10565,7 +10565,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add a single-parameter version of function pg_read_file() + linkend="functions-admin-genfile">pg_read_file() to read an entire file (Dimitri Fontaine, Itagaki Takahiro) @@ -10573,9 +10573,9 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add three-parameter forms of array_to_string() + linkend="array-functions-table">array_to_string() and string_to_array() + linkend="array-functions-table">string_to_array() for null value processing control (Pavel Stehule) @@ -10590,7 +10590,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add the pg_describe_object() + linkend="functions-info-catalog-table">pg_describe_object() function (Alvaro Herrera) @@ -10619,10 +10619,10 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add variable quote_all_identifiers - to force the quoting of all identifiers in EXPLAIN + linkend="guc-quote-all-identifiers">quote_all_identifiers + to force the quoting of all identifiers in EXPLAIN and in system catalog functions like pg_get_viewdef() + linkend="functions-info-catalog-table">pg_get_viewdef() (Robert Haas) @@ -10635,7 +10635,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add columns to the information_schema.sequences + 
linkend="infoschema-sequences">information_schema.sequences system view (Peter Eisentraut) @@ -10647,8 +10647,8 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow public as a pseudo-role name in has_table_privilege() + Allow public as a pseudo-role name in has_table_privilege() and related functions (Alvaro Herrera) @@ -10669,7 +10669,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Support INSTEAD - OF triggers on views (Dean Rasheed) + OF triggers on views (Dean Rasheed) @@ -10694,7 +10694,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add FOREACH IN - ARRAY to PL/pgSQL + ARRAY to PL/pgSQL (Pavel Stehule) @@ -10734,7 +10734,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - PL/Perl functions can now be declared to accept type record. + PL/Perl functions can now be declared to accept type record. The behavior is the same as for any named composite type. @@ -10776,7 +10776,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - PL/Python can now return multiple OUT parameters + PL/Python can now return multiple OUT parameters and record sets. @@ -10816,10 +10816,10 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; These functions are plpy.quote_ident, - plpy.quote_literal, + linkend="plpython-util">plpy.quote_ident, + plpy.quote_literal, and plpy.quote_nullable. + linkend="plpython-util">plpy.quote_nullable. @@ -10831,7 +10831,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Report PL/Python errors from iterators with PLy_elog (Jan + Report PL/Python errors from iterators with PLy_elog (Jan Urbanski) @@ -10843,7 +10843,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Exception classes were previously not available in - plpy under Python 3. + plpy under Python 3. @@ -10860,7 +10860,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Mark createlang and droplang + Mark createlang and droplang as deprecated now that they just invoke extension commands (Tom Lane) @@ -10869,64 +10869,64 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> - Add psql command \conninfo + Add psql command \conninfo to show current connection information (David Christensen) - Add psql command \sf to + Add psql command \sf to show a function's definition (Pavel Stehule) - Add psql command \dL to list + Add psql command \dL to list languages (Fernando Ike) - Add the - \dn without S now suppresses system + \dn without S now suppresses system schemas. - Allow psql's \e and \ef + Allow psql's \e and \ef commands to accept a line number to be used to position the cursor in the editor (Pavel Stehule) This is passed to the editor according to the - PSQL_EDITOR_LINENUMBER_ARG environment variable. + PSQL_EDITOR_LINENUMBER_ARG environment variable. - Have psql set the client encoding from the + Have psql set the client encoding from the operating system locale by default (Heikki Linnakangas) - This only happens if the PGCLIENTENCODING environment + This only happens if the PGCLIENTENCODING environment variable is not set. 
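A short sketch of the FOREACH IN ARRAY construct added to PL/pgSQL above (the DO block, array contents, and variable name are arbitrary):

DO $$
DECLARE
    x int;
BEGIN
    FOREACH x IN ARRAY ARRAY[1, 2, 3] LOOP
        RAISE NOTICE 'element: %', x;  -- iterates over elements, not subscripts
    END LOOP;
END
$$;
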
@@ -10940,8 +10940,8 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Make \dt+ report pg_table_size - instead of pg_relation_size when talking to 9.0 or + Make \dt+ report pg_table_size + instead of pg_relation_size when talking to 9.0 or later servers (Bernd Helmle) @@ -10963,29 +10963,29 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Add pg_dump + Add pg_dump and pg_dumpall - option to force quoting of all identifiers (Robert Haas) - Add directory format to pg_dump + Add directory format to pg_dump (Joachim Wieland, Heikki Linnakangas) - This is internally similar to the tar - pg_dump format. + This is internally similar to the tar + pg_dump format. @@ -10994,27 +10994,27 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PG-CTL"><application>pg_ctl</></link> + <link linkend="APP-PG-CTL"><application>pg_ctl</application></link> - Fix pg_ctl + Fix pg_ctl so it no longer incorrectly reports that the server is not running (Bruce Momjian) Previously this could happen if the server was running but - pg_ctl could not authenticate. + pg_ctl could not authenticate. - Improve pg_ctl start's wait - () option (Bruce Momjian, Tom Lane) @@ -11027,7 +11027,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add promote option to pg_ctl to + Add promote option to pg_ctl to switch a standby server to primary (Fujii Masao) @@ -11039,23 +11039,23 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <application>Development Tools</> + <application>Development Tools</application> - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> Add a libpq connection option client_encoding - which behaves like the PGCLIENTENCODING environment + linkend="libpq-connect-client-encoding">client_encoding + which behaves like the PGCLIENTENCODING environment variable (Heikki Linnakangas) - The value auto sets the client encoding based on + The value auto sets the client encoding based on the operating system locale. @@ -11063,13 +11063,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add PQlibVersion() + linkend="libpq-pqlibversion">PQlibVersion() function which returns the libpq library version (Magnus Hagander) - libpq already had PQserverVersion() which returns + libpq already had PQserverVersion() which returns the server version. @@ -11079,22 +11079,22 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Allow libpq-using clients to check the user name of the server process when connecting via Unix-domain sockets, with the new requirepeer + linkend="libpq-connect-requirepeer">requirepeer connection option (Peter Eisentraut) - PostgreSQL already allowed servers to check + PostgreSQL already allowed servers to check the client user name when connecting via Unix-domain sockets. 
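To illustrate the \dt+ change above, the two size functions it can report differ as sketched here (the table name is a hypothetical placeholder):

SELECT pg_table_size('mytable')    AS table_size,      -- includes TOAST, FSM, etc.
       pg_relation_size('mytable') AS main_fork_only;
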
- Add PQping() + Add PQping() and PQpingParams() + linkend="libpq-pqpingparams">PQpingParams() to libpq (Bruce Momjian, Tom Lane) @@ -11109,7 +11109,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="ecpg"><application>ECPG</></link> + <link linkend="ecpg"><application>ECPG</application></link> @@ -11123,7 +11123,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Make ecpglib write double values with a + Make ecpglib write double values with a precision of 15 digits, not 14 as formerly (Akira Kurosawa) @@ -11140,7 +11140,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Use +Olibmerrno compile flag with HP-UX C compilers + Use +Olibmerrno compile flag with HP-UX C compilers that accept it (Ibrar Ahmed) @@ -11163,15 +11163,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - This allows for faster compiles. Also, make -k + This allows for faster compiles. Also, make -k now works more consistently. - Require GNU make + Require GNU make 3.80 or newer (Peter Eisentraut) @@ -11182,7 +11182,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add make maintainer-check target + Add make maintainer-check target (Peter Eisentraut) @@ -11195,15 +11195,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Support make check in contrib + Support make check in contrib (Peter Eisentraut) - Formerly only make installcheck worked, but now + Formerly only make installcheck worked, but now there is support for testing in a temporary installation. - The top-level make check-world target now includes - testing contrib this way. + The top-level make check-world target now includes + testing contrib this way. @@ -11219,7 +11219,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; On Windows, allow pg_ctl to register + linkend="app-pg-ctl">pg_ctl to register the service as auto-start or start-on-demand (Quan Zongliang) @@ -11231,7 +11231,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - minidumps can now be generated by non-debug + minidumps can now be generated by non-debug Windows binaries and analyzed by standard debugging tools. @@ -11287,7 +11287,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add missing get_object_oid() functions, for consistency + Add missing get_object_oid() functions, for consistency (Robert Haas) @@ -11302,13 +11302,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add support for DragonFly BSD (Rumko) + Add support for DragonFly BSD (Rumko) - Expose quote_literal_cstr() for backend use + Expose quote_literal_cstr() for backend use (Robert Haas) @@ -11321,22 +11321,22 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Regression tests were previously always run with - SQL_ASCII encoding. + SQL_ASCII encoding. 
- Add src/tools/git_changelog to replace - cvs2cl and pgcvslog (Robert + Add src/tools/git_changelog to replace + cvs2cl and pgcvslog (Robert Haas, Tom Lane) - Add git-external-diff script to - src/tools (Bruce Momjian) + Add git-external-diff script to + src/tools (Bruce Momjian) @@ -11391,7 +11391,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Modify contrib modules and procedural + Modify contrib modules and procedural languages to install via the new extension mechanism (Tom Lane, Dimitri Fontaine) @@ -11400,21 +11400,21 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add contrib/file_fdw + Add contrib/file_fdw foreign-data wrapper (Shigeru Hanada) Foreign tables using this foreign data wrapper can read flat files - in a manner very similar to COPY. + in a manner very similar to COPY. Add nearest-neighbor search support to contrib/pg_trgm and contrib/btree_gist + linkend="pgtrgm">contrib/pg_trgm and contrib/btree_gist (Teodor Sigaev) @@ -11422,7 +11422,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add contrib/btree_gist + linkend="btree-gist">contrib/btree_gist support for searching on not-equals (Jeff Davis) @@ -11430,25 +11430,25 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Fix contrib/fuzzystrmatch's - levenshtein() function to handle multibyte characters + linkend="fuzzystrmatch">contrib/fuzzystrmatch's + levenshtein() function to handle multibyte characters (Alexander Korotkov) - Add ssl_cipher() and ssl_version() + Add ssl_cipher() and ssl_version() functions to contrib/sslinfo (Robert + linkend="sslinfo">contrib/sslinfo (Robert Haas) - Fix contrib/intarray - and contrib/hstore + Fix contrib/intarray + and contrib/hstore to give consistent results with indexed empty arrays (Tom Lane) @@ -11460,7 +11460,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow contrib/intarray + Allow contrib/intarray to work properly on multidimensional arrays (Tom Lane) @@ -11468,7 +11468,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; In - contrib/intarray, + contrib/intarray, avoid errors complaining about the presence of nulls in cases where no nulls are actually present (Tom Lane) @@ -11477,7 +11477,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; In - contrib/intarray, + contrib/intarray, fix behavior of containment operators with respect to empty arrays (Tom Lane) @@ -11490,10 +11490,10 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Remove contrib/xml2's + Remove contrib/xml2's arbitrary limit on the number of - parameter=value pairs that can be - handled by xslt_process() (Pavel Stehule) + parameter=value pairs that can be + handled by xslt_process() (Pavel Stehule) @@ -11503,7 +11503,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - In contrib/pageinspect, + In contrib/pageinspect, fix heap_page_item to return infomasks as 32-bit values (Alvaro Herrera) @@ -11522,13 +11522,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add contrib/sepgsql - to interface permission checks with SELinux (KaiGai Kohei) + Add contrib/sepgsql + to interface permission checks with SELinux (KaiGai Kohei) This uses the new SECURITY LABEL + linkend="SQL-SECURITY-LABEL">SECURITY LABEL facility. 
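A sketch of the SECURITY LABEL syntax that contrib/sepgsql builds on (the table name and the SELinux security context shown are hypothetical placeholders, not values mandated by sepgsql):

SECURITY LABEL FOR selinux ON TABLE customer IS 'system_u:object_r:sepgsql_table_t:s0';
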
@@ -11536,7 +11536,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add contrib module auth_delay (KaiGai + linkend="auth-delay">auth_delay (KaiGai Kohei) @@ -11549,7 +11549,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add dummy_seclabel + Add dummy_seclabel contrib module (KaiGai Kohei) @@ -11569,17 +11569,17 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Add support for LIKE and ILIKE index + Add support for LIKE and ILIKE index searches to contrib/pg_trgm (Alexander + linkend="pgtrgm">contrib/pg_trgm (Alexander Korotkov) - Add levenshtein_less_equal() function to contrib/fuzzystrmatch, + Add levenshtein_less_equal() function to contrib/fuzzystrmatch, which is optimized for small distances (Alexander Korotkov) @@ -11587,7 +11587,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Improve performance of index lookups on contrib/seg columns (Alexander + linkend="seg">contrib/seg columns (Alexander Korotkov) @@ -11595,7 +11595,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Improve performance of pg_upgrade for + linkend="pgupgrade">pg_upgrade for databases with many relations (Bruce Momjian) @@ -11603,7 +11603,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add flag to contrib/pgbench to + linkend="pgbench">contrib/pgbench to report per-statement latencies (Florian Pflug) @@ -11619,29 +11619,29 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Move src/tools/test_fsync to contrib/pg_test_fsync + Move src/tools/test_fsync to contrib/pg_test_fsync (Bruce Momjian, Tom Lane) - Add O_DIRECT support to contrib/pg_test_fsync + Add O_DIRECT support to contrib/pg_test_fsync (Bruce Momjian) - This matches the use of O_DIRECT by wal_sync_method. + This matches the use of O_DIRECT by wal_sync_method. 
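A brief sketch of the contrib/pg_trgm LIKE/ILIKE index support noted above (table, column, and index names are arbitrary):

CREATE EXTENSION pg_trgm;
CREATE INDEX words_trgm_idx ON words USING gin (word gin_trgm_ops);
SELECT * FROM words WHERE word LIKE '%gres%';  -- can now use the trigram index
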
Add new tests to contrib/pg_test_fsync + linkend="pgtestfsync">contrib/pg_test_fsync (Bruce Momjian) @@ -11659,7 +11659,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Extensive ECPG + Extensive ECPG documentation improvements (Satoshi Nagayasu) @@ -11674,7 +11674,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add documentation for exit_on_error + linkend="guc-exit-on-error">exit_on_error (Robert Haas) @@ -11686,7 +11686,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Add documentation for pg_options_to_table() + linkend="functions-info-catalog-table">pg_options_to_table() (Josh Berkus) @@ -11699,7 +11699,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Document that it is possible to access all composite type fields using (compositeval).* + linkend="field-selection">(compositeval).* syntax (Peter Eisentraut) @@ -11707,16 +11707,16 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Document that translate() - removes characters in from that don't have a - corresponding to character (Josh Kupershmidt) + linkend="functions-string-other">translate() + removes characters in from that don't have a + corresponding to character (Josh Kupershmidt) - Merge documentation for CREATE CONSTRAINT TRIGGER and CREATE TRIGGER + Merge documentation for CREATE CONSTRAINT TRIGGER and CREATE TRIGGER (Alvaro Herrera) @@ -11741,12 +11741,12 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Handle non-ASCII characters consistently in HISTORY file + Handle non-ASCII characters consistently in HISTORY file (Peter Eisentraut) - While the HISTORY file is in English, we do have to deal + While the HISTORY file is in English, we do have to deal with non-ASCII letters in contributor names. These are now transliterated so that they are reasonably legible without assumptions about character set. diff --git a/doc/src/sgml/release-9.2.sgml b/doc/src/sgml/release-9.2.sgml index 6fa21e3759..ca8c87a4ab 100644 --- a/doc/src/sgml/release-9.2.sgml +++ b/doc/src/sgml/release-9.2.sgml @@ -16,7 +16,7 @@ - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.2.X release series in September 2017. Users are encouraged to update to a newer release branch soon. @@ -43,20 +43,20 @@ Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -95,21 +95,21 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. 
In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -126,7 +126,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. Previously, @@ -138,13 +138,13 @@ CREATE OR REPLACE VIEW table_privileges AS - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -156,12 +156,12 @@ CREATE OR REPLACE VIEW table_privileges AS This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. + assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. @@ -185,7 +185,7 @@ CREATE OR REPLACE VIEW table_privileges AS - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.2.X release series in September 2017. Users are encouraged to update to a newer release branch soon. @@ -217,7 +217,7 @@ CREATE OR REPLACE VIEW table_privileges AS Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -225,11 +225,11 @@ CREATE OR REPLACE VIEW table_privileges AS The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -244,15 +244,15 @@ CREATE OR REPLACE VIEW table_privileges AS Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) 
- In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -283,15 +283,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. - In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -305,7 +305,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -319,16 +319,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. - However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -406,28 +406,28 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -436,56 +436,56 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... 
SET does (Peter Eisentraut) Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. - Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -496,22 +496,22 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -522,14 +522,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -540,21 +540,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
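For reference, one way to inspect pg_get_ruledef()'s output for a view's ON SELECT rule, the case fixed above (the view name is hypothetical; _RETURN is the internal name of a view's SELECT rule):

SELECT pg_get_ruledef(r.oid, true)
FROM pg_rewrite r
WHERE r.ev_class = 'myview'::regclass AND r.rulename = '_RETURN';
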
- Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -562,7 +562,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -574,8 +574,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -587,7 +587,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -607,27 +607,27 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) @@ -651,7 +651,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - The PostgreSQL community will stop releasing updates + The PostgreSQL community will stop releasing updates for the 9.2.X release series in September 2017. Users are encouraged to update to a newer release branch soon. @@ -683,18 +683,18 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. @@ -718,7 +718,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. 
To fix, @@ -732,7 +732,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -745,7 +745,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -753,7 +753,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. @@ -768,19 +768,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -788,27 +788,27 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. @@ -822,33 +822,33 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. 
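A minimal sketch of the cursor_to_xml() case fixed above, with tableforest = false (the cursor name and query are arbitrary):

BEGIN;
DECLARE c CURSOR FOR SELECT 1 AS a, 'two'::text AS b;
SELECT cursor_to_xml('c', 10, false, false, '');  -- output is now wrapped in a <table> element
COMMIT;
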
@@ -866,21 +866,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -895,20 +895,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -920,19 +920,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). - In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) @@ -967,7 +967,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -981,9 +981,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -996,15 +996,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. 
If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1058,15 +1058,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. @@ -1075,13 +1075,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1098,7 +1098,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - In corner cases, a spurious out-of-sequence TLI error + In corner cases, a spurious out-of-sequence TLI error could be reported during recovery. @@ -1144,7 +1144,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -1162,15 +1162,15 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... INHERIT (Amit Langote) @@ -1203,12 +1203,12 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -1223,15 +1223,15 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). 
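To sketch the surrogate-pair check described above (this assumes a UTF8 server encoding; the code points are arbitrary):

SELECT U&'\D83D\DE00';  -- a complete surrogate pair is accepted
SELECT U&'\D83D';       -- a trailing lone surrogate now raises an error
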
Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -1242,33 +1242,33 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -1280,8 +1280,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -1293,28 +1293,28 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid discarding interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -1333,21 +1333,21 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. 
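A one-line illustration of the purely negative text search fix above (assuming the default text search configuration):

SELECT to_tsvector('') @@ to_tsquery('!foo');  -- now returns true
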
- Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -1359,23 +1359,23 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -1386,8 +1386,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) @@ -1414,7 +1414,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -1489,71 +1489,71 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. Suppress printing of zeroes for unmeasured times - in EXPLAIN (Maksim Milyutin) + in EXPLAIN (Maksim Milyutin) Certain option combinations resulted in printing zero values for times that actually aren't ever measured in that combination. Our general - policy in EXPLAIN is not to print such fields at all, so + policy in EXPLAIN is not to print such fields at all, so do that consistently in all cases. - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. + INHERIT child constraint with an inherited constraint. 
Remove artificial restrictions on the values accepted - by numeric_in() and numeric_recv() + by numeric_in() and numeric_recv() (Tom Lane) We allow numeric values up to the limit of the storage format (more - than 1e100000), so it seems fairly pointless - that numeric_in() rejected scientific-notation exponents - above 1000. Likewise, it was silly for numeric_recv() to + than 1e100000), so it seems fairly pointless + that numeric_in() rejected scientific-notation exponents + above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. @@ -1575,7 +1575,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Disallow starting a standalone backend with standby_mode + Disallow starting a standalone backend with standby_mode turned on (Michael Paquier) @@ -1589,7 +1589,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -1600,30 +1600,30 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. + during PQreset(), but there might be related cases. - Make ecpg's and options work consistently with our other executables (Haribabu Kommi) - In pg_dump, never dump range constructor functions + In pg_dump, never dump range constructor functions (Tom Lane) - This oversight led to pg_upgrade failures with + This oversight led to pg_upgrade failures with extensions containing range types, due to duplicate creation of the constructor functions. @@ -1631,8 +1631,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix contrib/intarray/bench/bench.pl to print the results - of the EXPLAIN it does when given the option (Daniel Gustafsson) @@ -1653,17 +1653,17 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from - their time zone database, as they did in tzdata + their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused - the pg_timezone_abbrevs view to fail altogether. + the pg_timezone_abbrevs view to fail altogether. - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -1676,15 +1676,15 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. 
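To illustrate the relaxed numeric_in() limit mentioned above (the exponent is arbitrary; note that the result prints a 1 followed by 2000 zeroes):

SELECT '1e2000'::numeric;  -- scientific-notation exponents above 1000 are now accepted
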
- In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. @@ -1730,17 +1730,17 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix possible mis-evaluation of - nested CASE-WHEN expressions (Heikki + nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane) - A CASE expression appearing within the test value - subexpression of another CASE could become confused about + A CASE expression appearing within the test value + subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by - a CASE expression could result in passing the wrong test - value to functions called within a CASE expression in the + a CASE expression could result in passing the wrong test + value to functions called within a CASE expression in the SQL function's body. If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423) @@ -1754,7 +1754,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Numerous places in vacuumdb and other client programs + Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. Also, ensure that when a conninfo string is used as a database name @@ -1763,22 +1763,22 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Fix handling of paired double quotes - in psql's \connect - and \password commands to match the documentation. + in psql's \connect + and \password commands to match the documentation. - Introduce a new - pg_dumpall now refuses to deal with database and role + pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet. @@ -1788,40 +1788,40 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser - executes pg_dumpall or other routine maintenance + executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424) - Fix corner-case misbehaviors for IS NULL/IS NOT - NULL applied to nested composite values (Andrew Gierth, Tom Lane) + Fix corner-case misbehaviors for IS NULL/IS NOT + NULL applied to nested composite values (Andrew Gierth, Tom Lane) - The SQL standard specifies that IS NULL should return + The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS - NULL yields TRUE), but this is not meant to apply recursively - (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). + NULL yields TRUE), but this is not meant to apply recursively + (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). 
The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), - and contrib/postgres_fdw could produce remote queries + and contrib/postgres_fdw could produce remote queries that misbehaved similarly. - Make the inet and cidr data types properly reject + Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane) - Prevent crash in close_ps() - (the point ## lseg operator) + Prevent crash in close_ps() + (the point ## lseg operator) for NaN input coordinates (Tom Lane) @@ -1832,12 +1832,12 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix several one-byte buffer over-reads in to_number() + Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut) - In several cases the to_number() function would read one + In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory. @@ -1847,7 +1847,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 Avoid unsafe intermediate state during expensive paths - through heap_update() (Masahiko Sawada, Andres Freund) + through heap_update() (Masahiko Sawada, Andres Freund) @@ -1860,19 +1860,19 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid crash in postgres -C when the specified variable + Avoid crash in postgres -C when the specified variable has a null string value (Michael Paquier) - Avoid consuming a transaction ID during VACUUM + Avoid consuming a transaction ID during VACUUM (Alexander Korotkov) - Some cases in VACUUM unnecessarily caused an XID to be + Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing. @@ -1881,12 +1881,12 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Avoid canceling hot-standby queries during VACUUM FREEZE + Avoid canceling hot-standby queries during VACUUM FREEZE (Simon Riggs, Álvaro Herrera) - VACUUM FREEZE on an otherwise-idle master server could + VACUUM FREEZE on an otherwise-idle master server could result in unnecessary cancellations of queries on its standby servers. @@ -1894,8 +1894,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - When a manual ANALYZE specifies a column list, don't - reset the table's changes_since_analyze counter + When a manual ANALYZE specifies a column list, don't + reset the table's changes_since_analyze counter (Tom Lane) @@ -1907,7 +1907,7 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix ANALYZE's overestimation of n_distinct + Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane) @@ -1942,8 +1942,8 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 - Fix contrib/btree_gin to handle the smallest - possible bigint value correctly (Peter Eisentraut) + Fix contrib/btree_gin to handle the smallest + possible bigint value correctly (Peter Eisentraut) @@ -1956,29 +1956,29 @@ Branch: REL9_2_STABLE [38bec1805] 2017-01-25 07:02:25 +0900 It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. 
- Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane).

- Fix contrib/btree_gin to handle the smallest possible bigint value correctly (Peter Eisentraut).

It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure that PQserverVersion() returns the correct value for such cases.

- Fix ecpg's code for unsigned long long array elements (Michael Meskes).

- In pg_dump with both …

- Make pg_basebackup accept "-Z 0" as specifying no compression (Fujii Masao).

- Update our copy of the timezone code to match IANA's tzcode release 2016c (Tom Lane).

- Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco.

… using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. Failures have been reported specifically when a client application uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection.

- Fix "failed to build any N-way joins" planner error with a full join enclosed in the right-hand side of a left join (Tom Lane).

Given a three-or-more-way equivalence class of variables, such as X.X = Y.Y = Z.Z, it was possible for the planner to omit some of the tests needed to enforce that all the variables are actually equal, leading to join rows being output that didn't satisfy the WHERE clauses. For various reasons, erroneous plans were seldom selected in practice, so this bug has gone undetected for a long time.

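A sketch of the situation just described, using hypothetical single-column tables x, y, and z:

    -- The planner folds these equalities into one equivalence class
    -- {x.x, y.y, z.z}; the fix ensures that enough of the pairwise tests
    -- (including the implied x.x = z.z) are enforced in the plan.
    SELECT *
    FROM x
    JOIN y ON x.x = y.y
    JOIN z ON y.y = z.z;
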
- Fix possible misbehavior of TH, th, and Y,YYY format codes in to_timestamp() (Tom Lane).

- Fix dumping of rules and views in which the array argument of a "value operator ANY (array)" construct is a sub-SELECT (Tom Lane).

- Make pg_regress use a startup timeout from the PGCTLTIMEOUT environment variable, if that's set (Tom Lane). This is for consistency with a behavior recently added to pg_ctl; it eases automated testing on slow machines.

- Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane). In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore.

- Reduce the number of SysV semaphores used by a build configured with … (Tom Lane)

- Rename internal function strtoi() to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro).

- Fix reporting of errors from bind() and listen() system calls on Windows (Tom Lane).

- Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich). Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful.

- Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions.

- Fix incorrect handling of NULL index entries in indexed ROW() comparisons (Tom Lane). An index search using a row comparison such as ROW(a, b) > ROW('x', 'y') would stop upon reaching a NULL entry in the b column, ignoring the fact that there might be non-NULL b values associated with later values of a.

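A hypothetical sketch of the row-comparison case just described:

    CREATE TABLE t (a text, b text);
    CREATE INDEX t_a_b_idx ON t (a, b);
    -- Before the fix, a btree scan could stop early at an entry with a NULL b,
    -- missing later rows whose a value alone satisfies the comparison.
    SELECT * FROM t WHERE ROW(a, b) > ROW('x', 'y');
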
- Avoid unlikely data-loss scenarios due to renaming files without adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund).

- Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes).

- Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane).

- Fix parsing of affix files for ispell dictionaries (Tom Lane). The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for example I in Turkish UTF8 locales.

- Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov).

- Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas).

- Fix psql's tab completion for SECURITY LABEL (Tom Lane). Pressing TAB after SECURITY LABEL might cause a crash or the offering of inappropriate keywords.

- Make pg_ctl accept a wait timeout from the PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch).

- Fix incorrect test for Windows service status in pg_ctl (Manuel Mathar). The previous set of minor releases attempted to fix pg_ctl to properly determine whether to send log messages to Windows' Event Log, but got the test backwards.

- Fix pgbench to correctly handle the combination of the -C and "-M prepared" options (Tom Lane).

- Fix multiple mistakes in the statistics returned by contrib/pgstattuple's pgstatindex() function (Tom Lane).

- Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan).

- Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia.

- Perform an immediate shutdown if the postmaster.pid file is removed (Tom Lane). The postmaster now checks every minute or so that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had received SIGQUIT. The main motivation for this change is to ensure that failed buildfarm runs will get cleaned up without manual intervention; but it also serves to limit the bad effects if a DBA forcibly removes postmaster.pid and then starts a new postmaster.

- In SERIALIZABLE transaction isolation mode, serialization anomalies could be missed due to race conditions during insertions (Kevin Grittner, Thomas Munro).

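For context, serializable mode is selected per transaction; a minimal sketch (table and statements are hypothetical):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM accounts WHERE branch = 1;
    INSERT INTO accounts (branch, balance) VALUES (1, 100);
    COMMIT;  -- a conflicting concurrent insertion can now be correctly
             -- detected as a serialization failure (SQLSTATE 40001),
             -- which the application should retry
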
- Fix failure to emit appropriate WAL records when doing ALTER TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, Andres Freund).

- Fix ALTER COLUMN TYPE to reconstruct inherited check constraints properly (Tom Lane); see the sketch after this group of entries.

- Fix REASSIGN OWNED to change ownership of composite types properly (Álvaro Herrera).

- Fix REASSIGN OWNED and ALTER OWNER to correctly update granted-permissions lists when changing owners of data types, foreign data wrappers, or foreign servers (Bruce Momjian, Álvaro Herrera).

- Fix REASSIGN OWNED to ignore foreign user mappings, rather than fail (Álvaro Herrera).

- Fix dumping of whole-row Vars in ROW() and VALUES() lists (Tom Lane).

- Fix possible internal overflow in numeric division (Dean Rasheed).

This causes the code to emit "regular expression is too complex" errors in some cases that previously used unreasonable amounts of time and memory.

- Make %h and %r escapes in log_line_prefix work for messages emitted due to log_connections (Tom Lane). Previously, %h/%r started to work just after a new session had emitted the "connection received" log message; now they work for that message too.

This oversight resulted in failure to recover from crashes whenever logging_collector is turned on.

- In psql, ensure that libreadline's idea of the screen size is updated when the terminal window size changes (Merlin Moncure). Previously, libreadline did not notice if the window was resized during query output, leading to strange behavior during later input of multiline queries.

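The sketch referenced from the ALTER COLUMN TYPE entry above (all names hypothetical):

    CREATE TABLE parent (c int CHECK (c > 0));
    CREATE TABLE child () INHERITS (parent);
    -- The inherited check constraint must be rebuilt consistently in both
    -- parent and child when the column's type changes.
    ALTER TABLE parent ALTER COLUMN c TYPE bigint;
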
- Fix psql's \det command to interpret its pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart).

- Avoid possible crash in psql's \c command when the previous connection was via Unix socket and the command specifies a new hostname and the same username (Tom Lane).

- In "pg_ctl start -w", test child process status directly rather than relying on heuristics (Tom Lane, Michael Paquier). Previously, pg_ctl relied on an assumption that the new postmaster would always create postmaster.pid within five seconds. But that can fail on heavily-loaded systems, causing pg_ctl to report incorrectly that the postmaster failed to start. Except on Windows, this change also means that a "pg_ctl start -w" done immediately after another such command will now reliably fail, whereas previously it would report success if done within two seconds of the first command.

- In "pg_ctl start -w", don't attempt to use a wildcard listen address to connect to the postmaster (Kondo Yuta). On Windows, pg_ctl would fail to detect postmaster startup if listen_addresses is set to 0.0.0.0 or ::, because it would try to use that value verbatim as the address to connect to, which doesn't work. Instead assume that 127.0.0.1 or ::1, respectively, is the right thing to use.

- In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier).

- In pg_dump and pg_basebackup, adopt the GNU convention for handling tar-archive members exceeding 8GB (Tom Lane). The POSIX standard for the tar file format does not allow archive member files to exceed 8GB, but most modern implementations of tar support an extension that fixes that. Adopt this extension so that pg_dump with … no longer fails on tables with more than 8GB of data, and so that pg_basebackup can handle files larger than 8GB. In addition, fix some portability issues that could cause failures for members between 4GB and 8GB on some platforms. Potentially these problems could cause unrecoverable data loss due to unreadable backup files.

- Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane).

- Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during parallel pg_restore (Tom Lane).

- Ensure that relation option values are properly quoted in pg_dump (Kouhei Sutou, Tom Lane). A reloption value that isn't a simple identifier or number could lead to dump/reload failures due to syntax errors in CREATE statements issued by pg_dump. This is not an issue with any reloption currently supported by core PostgreSQL, but extensions could allow reloptions that cause the problem.

- Fix pg_upgrade's file-copying code to handle errors properly on Windows (Bruce Momjian).

- Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier).

- Fix failure to localize messages emitted by pg_receivexlog and pg_recvlogical (Ioseph Kim).

- Avoid dump/reload problems when using both plpython2 and plpython3 (Tom Lane). In principle, both versions of PL/Python can be used in the same database, though not in the same session (because the two versions of libpython cannot safely be used concurrently). However, pg_restore and pg_upgrade both do things that can fall foul of the same-session restriction. Work around that by changing the timing of the check.

- Fix PL/Python regression tests to pass with Python 3.5 (Peter Eisentraut).

- Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch). This change mitigates a PL/Java security bug (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for sites that update PostgreSQL more frequently than PL/Java, make the core code aware of them also.

- Improve libpq's handling of out-of-memory situations (Michael Paquier, Amit Kapila, Heikki Linnakangas).

- Fix order of arguments in ecpg-generated typedef statements (Michael Meskes).

- Use %g not %f format in ecpg's PGTYPESnumeric_from_double() (Tom Lane).

- Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes). Such a comment is rejected by ecpg. It's not yet clear whether ecpg itself should be changed.

- Ensure that contrib/pgcrypto's crypt() function can be interrupted by query cancel (Andreas Karlsson).

- Accept flex versions later than 2.5.x (Tom Lane, Michael Paquier).

- Install our "missing" script where PGXS builds can find it (Jim Nasby). This allows sane behavior in a PGXS build done on a machine where build tools such as bison are missing.

- Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier).

- Add a variant regression test expected-output file to match the behavior of current libxml2 (Tom Lane). The fix for libxml2's CVE-2015-7499 causes it not to output error context reports in some cases where it used to do so. This seems to be a bug, but we'll probably have to live with it for some time, so work around it.

- Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan.

- Fix contrib/pgcrypto to detect and report too-short crypt() salts (Josh Kupershmidt).

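For context, crypt() and gen_salt() are used together; a minimal sketch:

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    -- gen_salt() produces a well-formed salt; the fix makes crypt() report
    -- salts that are too short instead of silently misbehaving
    SELECT crypt('secret password', gen_salt('bf'));
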
- Fix insertion of relations into the relation cache "init file" (Tom Lane). An oversight in a patch in the most recent minor releases caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing …

- Improve LISTEN startup time when there are many unread notifications (Matt Newell).

This substantially improves performance when pg_dump tries to dump a large number of tables.

… too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value of ssl_renegotiation_limit to zero (disabled).

- Lower the minimum values of the *_freeze_max_age parameters (Andres Freund).

- Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus).

- Fix rare internal overflow in multiplication of numeric values (Dean Rasheed).

- Guard against hard-to-reach stack overflows involving record types, range types, json, jsonb, tsquery, ltxtquery and query_int (Noah Misch).

- Fix handling of DOW and DOY in datetime input (Greg Stark). These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather than "invalid input syntax".

- Add recursion depth protections to regular expression, SIMILAR TO, and LIKE matching (Tom Lane).

- Fix unexpected "out-of-memory situation during sort" errors when using tuplestores with small work_mem settings (Tom Lane).

- Fix very-low-probability stack overrun in qsort (Tom Lane).

- Fix "invalid memory alloc request size" failure in hash joins with large work_mem settings (Tomas Vondra, Tom Lane).

These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as "could not devise a query plan for the given query", "could not find pathkey item to sort", "plan should not reference subplan's variable", or "failed to assign all NestLoopParams to plan nodes". Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems.

- Improve the planner's performance for UPDATE/DELETE on large inheritance sets (Tom Lane, Dean Rasheed).

- During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove the postmaster.pid file (Tom Lane). This avoids race-condition failures if an external script attempts to start a new postmaster as soon as "pg_ctl stop" returns.

- Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane).

VACUUM attempted to recycle such pages, but did so in a way that wasn't crash-safe.

- Fix off-by-one error that led to otherwise-harmless warnings about "apparent wraparound" in subtrans/multixact truncation (Thomas Munro).

- Fix misreporting of CONTINUE and MOVE statement types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane).

- Fix PL/Perl to handle non-ASCII error message texts correctly (Alex Hunsaker).

- Fix PL/Python crash when returning the string representation of a record result (Tom Lane).

- Fix some places in PL/Tcl that neglected to check for failure of malloc() calls (Michael Paquier, Álvaro Herrera).

- In contrib/isn, fix output of ISBN-13 numbers that begin with 979 (Fabien Coelho).

- Fix contrib/sepgsql's handling of SELECT INTO statements (Kohei KaiGai).

- Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas).

- Fix memory leaks and missing out-of-memory checks in ecpg (Michael Paquier).

- Fix psql's code for locale-aware formatting of numeric output (Tom Lane). The formatting code invoked by "\pset numericlocale on" did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. It could also mangle already-localized output from the money data type.

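The psql feature involved, for reference (output depends on the session's lc_numeric setting):

    \pset numericlocale on
    -- with a grouping locale, 1234567.89 is printed with locale-specific
    -- digit separators
    SELECT 1234567.89;
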
- Prevent crash in psql's \c command when there is no current connection (Noah Misch).

- Make pg_dump handle inherited NOT VALID check constraints correctly (Tom Lane).

- Fix selection of default zlib compression level in pg_dump's directory output format (Andrew Dunstan).

- Ensure that temporary files created during a pg_dump run with tar-format output are not world-readable (Michael Paquier).

- Fix pg_dump and pg_upgrade to support cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian).

- Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane). When dumping data types from pre-9.2 servers, and when dumping functions or procedural languages from pre-7.3 servers, pg_dump would produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges to PUBLIC. Since the privileges involved are just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases.

- Fix pg_dump to dump shell types (Tom Lane). Shell types (that is, not-yet-fully-defined types) aren't useful for much, but nonetheless pg_dump should dump them.

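For reference, a shell type is created by a bare CREATE TYPE with no attribute list (the type name here is hypothetical); these are the commands pg_dump now emits:

    -- creates a placeholder ("shell") type, to be filled in later, for
    -- example by a full CREATE TYPE with INPUT/OUTPUT functions that
    -- reference it
    CREATE TYPE mytype;
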
- Fix assorted minor memory leaks in pg_dump and other client-side programs (Michael Paquier).

- Fix spinlock assembly code for PPC hardware to be compatible with AIX's native assembler (Tom Lane). Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common.

- On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch).

- On AIX, use the -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch).

- Avoid use of inline functions when compiling with 32-bit xlc, due to compiler bugs (Noah Misch).

- Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa).

- Fix the Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas).

- Make the numeric form of the PostgreSQL version number (e.g., 90405) readily available to extension Makefiles, as a variable named VERSION_NUM (Michael Paquier).

- Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name America/Fort_Nelson for the Canadian Northern Rockies.

With just the wrong timing of concurrent activity, a VACUUM FULL on a system catalog might fail to update the "init file" that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no …

- Avoid deadlock between incoming sessions and CREATE/DROP DATABASE (Tom Lane). A new session starting in a database that is the target of a DROP DATABASE command, or is the template for a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that.

- Avoid failures while fsync'ing the data directory during crash restart (Abhijit Menon-Sen, Tom Lane). In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless.

- Fix pg_get_functiondef() to show functions' LEAKPROOF property, if set (Jeevan Chalke).

- Remove configure's check prohibiting linking to a threaded libpython on OpenBSD (Tom Lane). The failure this restriction was meant to prevent seems to not be a problem anymore on current OpenBSD versions.

- Allow libpq to use TLS protocol versions beyond v1 (Noah Misch). For a long time, libpq was coded so that the only SSL protocol it would allow was TLS v1. Now that newer TLS versions are becoming popular, allow it to negotiate the highest commonly-supported TLS version with the server. (PostgreSQL servers were already capable of such negotiation, so no change is needed on the server side.) This is a back-patch of a change already released in 9.4.0.

However, if you use contrib/citext's regexp_matches() functions, see the changelog entry below about that.

Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. In the worst case this might lead to information exposure, due to our … It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area.

- In contrib/pgcrypto, uniformly report decryption failures as "Wrong key or corrupt data" (Noah Misch). Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167)

- Fix incorrect declaration of contrib/citext's regexp_matches() functions (Tom Lane). These functions should return setof text[], like the core functions they are wrappers for; but they were incorrectly declared as returning just text[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you should get is an empty set (zero rows). Second, the g flag was effectively ignored, since you would get only one result array even if there were multiple matches. While the latter behavior is clearly a bug, there might be applications depending on the former behavior; therefore the function declarations will not be changed by default until PostgreSQL 9.5. In pre-9.5 branches, the old behavior exists in version 1.0 of the citext extension, while we have provided corrected declarations in version 1.1 (which is not installed by default). To adopt the fix in pre-9.5 branches, execute ALTER EXTENSION citext UPDATE TO '1.1' in each database in which citext is installed. (You can also update back to 1.0 if you need to undo that.) Be aware that either update direction will require dropping and recreating any views or rules that use citext's regexp_matches() functions.

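Applying the citext fix described above, in a database where the extension is installed (the sample pattern is illustrative only):

    -- adopt the corrected declarations (returns setof text[])
    ALTER EXTENSION citext UPDATE TO '1.1';
    -- with the 1.1 declarations, a 'g'-flagged call can return several rows
    SELECT regexp_matches('barbeque'::citext, '(b[^b]+)', 'g');
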
This oversight in the planner has been observed to cause "could not find RelOptInfo for given relids" errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output.

This oversight has been seen to lead to "failed to join all relations together" errors in queries involving LATERAL, and that might happen in other cases as well.

- Fix possible deadlock at startup when max_prepared_transactions is too small (Heikki Linnakangas).

- Avoid "cannot GetMultiXactIdMembers() during recovery" error (Álvaro Herrera).

- Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas).

- Cope with unexpected signals in LockBufferForCleanup() (Andres Freund). This oversight could result in spurious errors about "multiple backends attempting to wait for pincount 1".

- Fix crash when doing COPY IN to a table with check constraints that contain whole-row references (Tom Lane).

ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to cancel the ANALYZE before that loop finishes.

- Ensure tableoid of a foreign table is reported correctly when a READ COMMITTED recheck occurs after locking rows in SELECT FOR UPDATE, UPDATE, or DELETE (Etsuro Fujita).

- Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost). Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but it will become the default setting in PostgreSQL 9.5.

- Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane). … crashes on some systems, so let's just remove it rather than fix it. (Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.)

- Report WAL flush, not insert, position in the IDENTIFY_SYSTEM replication command (Heikki Linnakangas). This avoids a possible startup failure in pg_receivexlog.

- While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj).

- Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas). … buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM STDIN.) This worked properly in the normal blocking mode, but not so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, and be sure to call PQconsumeInput() upon read-ready.

- In libpq, fix misparsing of empty values in URI connection strings (Thomas Fanghaenel).

- Fix array handling in ecpg (Michael Meskes).

- Fix psql to sanely handle URIs and conninfo strings as the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera).

- Suppress incorrect complaints from psql on some platforms that it failed to write ~/.psql_history at exit (Tom Lane). This misbehavior was caused by a workaround for a bug in very old (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear for anyone still using such versions of libedit. Recommendation: upgrade that library, or use libreadline.

- Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane).

- In pg_dump, fix failure to honor the -Z compression level option together with -Fd (Michael Paquier).

- Make pg_dump consider foreign key relationships between extension configuration tables while choosing dump order (Gilles Darold, Michael Paquier, Stephen Frost).

- Fix dumping of views that are just VALUES(...) but have column aliases (Tom Lane).

- In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian).

- In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian).

- In pg_upgrade, quote directory paths properly in the generated delete_old_cluster script (Bruce Momjian).

- In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian). This oversight could cause missing-clog-file errors for tables within the postgres and template1 databases.

- Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem).

- Improve handling of readdir() failures when scanning directories in initdb and pg_basebackup (Marco Nenciarini).

- Fix failure in pg_receivexlog (Andres Freund). A patch merge mistake in 9.2.10 led to "could not create archive status file" errors.

- Fix slow sorting algorithm in contrib/intarray (Tom Lane).

- Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT).

However, if you are a Windows user and are using the Norwegian (Bokmål) locale, manual action is needed after the upgrade to replace any "Norwegian (Bokmål)_Norway" locale names stored in PostgreSQL system catalogs with the plain-ASCII alias "Norwegian_Norway". For details see …

- Fix buffer overruns in to_char() (Bruce Momjian). When to_char() processes a numeric formatting template calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely.

- Fix buffer overrun in replacement *printf() functions (Tom Lane). PostgreSQL includes a replacement implementation of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion specifiers e, E, f, F, g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through the to_char() SQL function. While that is the only affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. This issue primarily affects PostgreSQL on Windows. PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. (CVE-2015-0242)

- Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch). Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. The buffer overrun cases can crash the server, and we have not ruled out the possibility of …

Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user.

- Cope with the Windows locale named "Norwegian (Bokmål)" (Heikki Linnakangas). Non-ASCII locale names are problematic since it's not clear what encoding they should be represented in. Map the troublesome locale name to a plain-ASCII alias, "Norwegian_Norway".

- Avoid possible data corruption if ALTER DATABASE SET TABLESPACE is used to move a database to a new tablespace and then shortly later move it back to its original tablespace (Tom Lane).

- Avoid corrupting tables when ANALYZE inside a transaction is rolled back (Andres Freund, Tom Lane, Michael Paquier). If the failing transaction had earlier removed the last index, rule, or trigger from the table, the table would be left in a corrupted state with the relevant pg_class flags not set though they should be.

- Ensure that unlogged tables are copied correctly during CREATE DATABASE or ALTER DATABASE SET TABLESPACE (Pavan Deolasee, Andres Freund).

- Fix DROP's dependency searching to correctly handle the case where a table column is recursively visited before its table (Petr Jelinek, Tom Lane). This case is only known to arise when an extension creates both a datatype and a table using that datatype. The faulty code might refuse a DROP EXTENSION unless CASCADE is specified, which should not be required.

In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug.

- Fix planning of SELECT FOR UPDATE when using a partial index on a child table (Kyotaro Horiguchi). In READ COMMITTED mode, SELECT FOR UPDATE must also recheck the partial index's WHERE condition when rechecking a recently-updated row to see if it still satisfies the query's WHERE condition. This requirement was missed if the index belonged to an inheritance child table, so that it was possible to incorrectly return rows that no longer satisfy the query condition.

- Fix corner case wherein SELECT FOR UPDATE could return a row twice, and possibly miss returning other rows (Tom Lane). In READ COMMITTED mode, a SELECT FOR UPDATE that is scanning an inheritance tree could incorrectly return a row from a prior child table instead of the one it should return from a later child table.

- Reject duplicate column names in the referenced-columns list of a FOREIGN KEY declaration (David Rowley).

- Fix bugs in raising a numeric value to a large integral power (Tom Lane).

- In numeric_recv(), truncate away any fractional digits that would be hidden according to the value's dscale field (Tom Lane). A numeric value's display scale (dscale) should never be less than the number of nonzero fractional digits; but apparently there's at least one broken client application that transmits binary numeric values in which that's true. This leads to strange behavior since the extra digits are taken into account by arithmetic operations even though they aren't printed. The least risky fix seems to be to truncate away such "hidden" digits on receipt, so that the value is indeed what it prints as.

Matching would often fail when the number of allowed iterations is limited by a ? quantifier or a bound expression.

- Fix bugs in the tsquery @> tsquery operator (Heikki Linnakangas).

- Fix namespace handling in xpath() (Ali Akbar). Previously, the xml value resulting from an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation.

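A sketch of the xpath() namespace behavior just described (the namespace URI is hypothetical; requires a server built with libxml support):

    -- The third argument maps the prefix used in the XPath expression to a
    -- namespace URI. After the fix, the returned node carries the ancestral
    -- xmlns declaration, so it is correct when considered standing alone.
    SELECT xpath('/x:a/x:b',
                 '<a xmlns="urn:example"><b>1</b></a>'::xml,
                 ARRAY[ARRAY['x', 'urn:example']]);
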
@@ -4668,7 +4668,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix bugs in tsquery @> tsquery + Fix bugs in tsquery @> tsquery operator (Heikki Linnakangas) @@ -4699,14 +4699,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -4720,7 +4720,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In some contexts, constructs like row_to_json(tab.*) may + In some contexts, constructs like row_to_json(tab.*) may not produce the expected column names. This is fixed properly as of 9.4; in older branches, just ensure that we produce some nonempty name. (In some cases this will be the underlying table's column name @@ -4732,19 +4732,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix mishandling of system columns, - particularly tableoid, in FDW queries (Etsuro Fujita) + particularly tableoid, in FDW queries (Etsuro Fujita) - Avoid doing indexed_column = ANY - (array) as an index qualifier if that leads + Avoid doing indexed_column = ANY + (array) as an index qualifier if that leads to an inferior plan (Andrew Gierth) - In some cases, = ANY conditions applied to non-first index + In some cases, = ANY conditions applied to non-first index columns would be done as index conditions even though it would be better to use them as simple filter conditions. @@ -4753,7 +4753,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix planner problems with nested append relations, such as inherited - tables within UNION ALL subqueries (Tom Lane) + tables within UNION ALL subqueries (Tom Lane) @@ -4766,8 +4766,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Exempt tables that have per-table cost_limit - and/or cost_delay settings from autovacuum's global cost + Exempt tables that have per-table cost_limit + and/or cost_delay settings from autovacuum's global cost balancing rules (Álvaro Herrera) @@ -4793,7 +4793,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 the target database, if they met the usual thresholds for autovacuuming. This is at best pretty unexpected; at worst it delays response to the wraparound threat. Fix it so that if autovacuum is - turned off, workers only do anti-wraparound vacuums and + turned off, workers only do anti-wraparound vacuums and not any other work. @@ -4826,12 +4826,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix several cases where recovery logic improperly ignored WAL records - for COMMIT/ABORT PREPARED (Heikki Linnakangas) + for COMMIT/ABORT PREPARED (Heikki Linnakangas) The most notable oversight was - that recovery_target_xid could not be used to stop at + that recovery_target_xid could not be used to stop at a two-phase commit. 
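The xpath() namespace fix above can be exercised with a made-up namespace URI; after the fix the returned node carries the declaration inherited from its ancestor and is well-formed in isolation:

    SELECT xpath('/a/ns:b',
                 '<a xmlns:ns="http://example.com/"><ns:b>x</ns:b></a>'::xml,
                 ARRAY[ARRAY['ns', 'http://example.com/']]);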
@@ -4845,7 +4845,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid creating unnecessary .ready marker files for + Avoid creating unnecessary .ready marker files for timeline history files (Fujii Masao) @@ -4853,14 +4853,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible null pointer dereference when an empty prepared statement - is used and the log_statement setting is mod - or ddl (Fujii Masao) + is used and the log_statement setting is mod + or ddl (Fujii Masao) - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -4869,7 +4869,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. @@ -4883,32 +4883,32 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) - Fix processing of repeated dbname parameters - in PQconnectdbParams() (Alex Shulgin) + Fix processing of repeated dbname parameters + in PQconnectdbParams() (Alex Shulgin) Unexpected behavior ensued if the first occurrence - of dbname contained a connection string or URI to be + of dbname contained a connection string or URI to be expanded. - Ensure that libpq reports a suitable error message on + Ensure that libpq reports a suitable error message on unexpected socket EOF (Marko Tiikkaja, Tom Lane) - Depending on kernel behavior, libpq might return an + Depending on kernel behavior, libpq might return an empty error string rather than something useful when the server unexpectedly closed the socket. @@ -4916,14 +4916,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Clear any old error message during PQreset() + Clear any old error message during PQreset() (Heikki Linnakangas) - If PQreset() is called repeatedly, and the connection + If PQreset() is called repeatedly, and the connection cannot be re-established, error messages from the failed connection - attempts kept accumulating in the PGconn's error + attempts kept accumulating in the PGconn's error string. @@ -4931,32 +4931,32 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Properly handle out-of-memory conditions while parsing connection - options in libpq (Alex Shulgin, Heikki Linnakangas) + options in libpq (Alex Shulgin, Heikki Linnakangas) - Fix array overrun in ecpg's version - of ParseDateTime() (Michael Paquier) + Fix array overrun in ecpg's version + of ParseDateTime() (Michael Paquier) - In initdb, give a clearer error message if a password + In initdb, give a clearer error message if a password file is specified but is empty (Mats Erik Andersson) - Fix psql's \s command to work nicely with + Fix psql's \s command to work nicely with libedit, and add pager support (Stepan Rutz, Tom Lane) - When using libedit rather than readline, \s printed the + When using libedit rather than readline, \s printed the command history in a fairly unreadable encoded format, and on recent libedit versions might fail altogether. Fix that by printing the history ourselves rather than having the library do it. 
A pleasant @@ -4966,7 +4966,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This patch also fixes a bug that caused newline encoding to be applied inconsistently when saving the command history with libedit. - Multiline history entries written by older psql + Multiline history entries written by older psql versions will be read cleanly with this patch, but perhaps not vice versa, depending on the exact libedit versions involved. @@ -4974,17 +4974,17 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. @@ -4992,16 +4992,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix psql's expanded-mode display to work - consistently when using border = 3 - and linestyle = ascii or unicode + Fix psql's expanded-mode display to work + consistently when using border = 3 + and linestyle = ascii or unicode (Stephen Frost) - Improve performance of pg_dump when the database + Improve performance of pg_dump when the database contains many instances of multiple dependency paths between the same two objects (Tom Lane) @@ -5009,7 +5009,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix pg_dumpall to restore its ability to dump from + Fix pg_dumpall to restore its ability to dump from pre-8.1 servers (Gilles Darold) @@ -5023,28 +5023,28 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix core dump in pg_dump --binary-upgrade on zero-column + Fix core dump in pg_dump --binary-upgrade on zero-column composite type (Rushabh Lathia) - Prevent WAL files created by pg_basebackup -x/-X from + Prevent WAL files created by pg_basebackup -x/-X from being archived again when the standby is promoted (Andres Freund) - Fix failure of contrib/auto_explain to print per-node - timing information when doing EXPLAIN ANALYZE (Tom Lane) + Fix failure of contrib/auto_explain to print per-node + timing information when doing EXPLAIN ANALYZE (Tom Lane) - Fix upgrade-from-unpackaged script for contrib/citext + Fix upgrade-from-unpackaged script for contrib/citext (Tom Lane) @@ -5052,7 +5052,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix block number checking - in contrib/pageinspect's get_raw_page() + in contrib/pageinspect's get_raw_page() (Tom Lane) @@ -5064,7 +5064,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix contrib/pgcrypto's pgp_sym_decrypt() + Fix contrib/pgcrypto's pgp_sym_decrypt() to not fail on messages whose length is 6 less than a power of 2 (Marko Tiikkaja) @@ -5072,7 +5072,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix file descriptor leak in contrib/pg_test_fsync + Fix file descriptor leak in contrib/pg_test_fsync (Jeff Janes) @@ -5084,24 +5084,24 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Handle unexpected query 
results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. - Avoid a possible crash in contrib/xml2's - xslt_process() (Mark Simonetti) + Avoid a possible crash in contrib/xml2's + xslt_process() (Mark Simonetti) - libxslt seems to have an undocumented dependency on + libxslt seems to have an undocumented dependency on the order in which resources are freed; reorder our calls to avoid a crash. @@ -5109,7 +5109,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Mark some contrib I/O functions with correct volatility + Mark some contrib I/O functions with correct volatility properties (Tom Lane) @@ -5143,29 +5143,29 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 With OpenLDAP versions 2.4.24 through 2.4.31, - inclusive, PostgreSQL backends can crash at exit. - Raise a warning during configure based on the + inclusive, PostgreSQL backends can crash at exit. + Raise a warning during configure based on the compile-time OpenLDAP version number, and test the crashing scenario - in the contrib/dblink regression test. + in the contrib/dblink regression test. - In non-MSVC Windows builds, ensure libpq.dll is installed + In non-MSVC Windows builds, ensure libpq.dll is installed with execute permissions (Noah Misch) - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. @@ -5177,15 +5177,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. @@ -5197,9 +5197,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add CST (China Standard Time) to our lists. - Remove references to ADT as Arabia Daylight Time, an + Remove references to ADT as Arabia Daylight Time, an abbreviation that's been out of use since 2007; therefore, claiming - there is a conflict with Atlantic Daylight Time doesn't seem + there is a conflict with Atlantic Daylight Time doesn't seem especially helpful. Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, and FJST (Fiji); we didn't even have them on the proper side of the date line. @@ -5208,21 +5208,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Update time zone data files to tzdata release 2015a. + Update time zone data files to tzdata release 2015a. 
The IANA timezone database has adopted abbreviations of the form - AxST/AxDT + AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into - our Default timezone abbreviation set. - The Australia abbreviation set now contains only CST, EAST, + our Default timezone abbreviation set. + The Australia abbreviation set now contains only CST, EAST, EST, SAST, SAT, and WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South - Africa Standard Time in the Default abbreviation set. + Africa Standard Time in the Default abbreviation set. @@ -5281,15 +5281,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. - Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -5335,7 +5335,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) @@ -5347,14 +5347,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This oversight could result in variable not found in subplan - target lists errors, or in silently wrong query results. + target lists errors, or in silently wrong query results. - Fix could not find pathkey item to sort planner failures - with UNION ALL over subqueries reading from tables with + Fix could not find pathkey item to sort planner failures + with UNION ALL over subqueries reading from tables with inheritance children (Tom Lane) @@ -5375,7 +5375,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve planner to drop constant-NULL inputs - of AND/OR when possible (Tom Lane) + of AND/OR when possible (Tom Lane) @@ -5387,13 +5387,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix identification of input type category in to_json() + Fix identification of input type category in to_json() and friends (Tom Lane) - This is known to have led to inadequate quoting of money - fields in the JSON result, and there may have been wrong + This is known to have led to inadequate quoting of money + fields in the JSON result, and there may have been wrong results for other data types as well. @@ -5408,13 +5408,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. 
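As a sketch of the to_json() type-category fix above (money output shown for a US-style lc_monetary; before the fix the value could be emitted unquoted, yielding invalid JSON):

    SELECT to_json('12.34'::money);
    --  to_json
    -- ---------
    --  "$12.34"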
- Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -5429,7 +5429,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -5442,7 +5442,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -5463,19 +5463,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -5483,7 +5483,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -5495,14 +5495,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. - Fix client host name lookup when processing pg_hba.conf + Fix client host name lookup when processing pg_hba.conf entries that specify host names instead of IP addresses (Tom Lane) @@ -5516,21 +5516,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow the root user to use postgres -C variable and - postgres --describe-config (MauMau) + Allow the root user to use postgres -C variable and + postgres --describe-config (MauMau) The prohibition on starting the server as root does not need to extend to these operations, and relaxing it prevents failure - of pg_ctl in some scenarios. + of pg_ctl in some scenarios. Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -5539,16 +5539,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. 
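The pg_stat_activity.xact_start change above can be observed directly; this sketch assumes max_prepared_transactions has been set above zero:

    -- Session 1:
    BEGIN;
    SELECT txid_current();
    PREPARE TRANSACTION 'demo';

    -- Session 2: the originating backend is no longer in a transaction,
    -- so with the fix its row shows a NULL xact_start while it sits idle:
    SELECT pid, xact_start FROM pg_stat_activity WHERE state = 'idle';

    -- Session 1, later:
    COMMIT PREPARED 'demo';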
@@ -5584,15 +5584,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -5603,17 +5603,17 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -5621,15 +5621,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) - Fix ecpg to do the right thing when an array - of char * is the target for a FETCH statement returning more + Fix ecpg to do the right thing when an array + of char * is the target for a FETCH statement returning more than one row, as well as some other array-handling fixes (Ashutosh Bapat) @@ -5637,52 +5637,52 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. - Fix pg_upgrade for cases where the new server creates + Fix pg_upgrade for cases where the new server creates a TOAST table but the old version did not (Bruce Momjian) - This rare situation would manifest as relation OID mismatch + This rare situation would manifest as relation OID mismatch errors. - Prevent contrib/auto_explain from changing the output of - a user's EXPLAIN (Tom Lane) + Prevent contrib/auto_explain from changing the output of + a user's EXPLAIN (Tom Lane) - If auto_explain is active, it could cause - an EXPLAIN (ANALYZE, TIMING OFF) command to nonetheless + If auto_explain is active, it could cause + an EXPLAIN (ANALYZE, TIMING OFF) command to nonetheless print timing information. 
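A quick single-session way to exercise the auto_explain item above; the settings are only for the demonstration:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;
    -- With the fix, the user's TIMING OFF choice is honored even while
    -- auto_explain is instrumenting the same query:
    EXPLAIN (ANALYZE, TIMING OFF)
        SELECT count(*) FROM generate_series(1, 1000);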
- Fix query-lifespan memory leak in contrib/dblink + Fix query-lifespan memory leak in contrib/dblink (MauMau, Joe Conway) - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -5691,27 +5691,27 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Prevent use of already-freed memory in - contrib/pgstattuple's pgstat_heap() + contrib/pgstattuple's pgstat_heap() (Noah Misch) - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. - Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. @@ -5771,7 +5771,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Avoid race condition in checking transaction commit status during - receipt of a NOTIFY message (Marko Tiikkaja) + receipt of a NOTIFY message (Marko Tiikkaja) @@ -5795,7 +5795,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -5808,17 +5808,17 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. @@ -5837,8 +5837,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix walsender's failure to shut down cleanly when client - is pg_receivexlog (Fujii Masao) + Fix walsender's failure to shut down cleanly when client + is pg_receivexlog (Fujii Masao) @@ -5858,13 +5858,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. + entry to syslog(), and perhaps other related problems. @@ -5877,13 +5877,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix tracking of psql script line numbers - during \copy from out-of-line data + Fix tracking of psql script line numbers + during \copy from out-of-line data (Kumar Rajeev Rastogi, Amit Khandekar) - \copy ... from incremented the script file line number + \copy ... from incremented the script file line number for each data line, even if the data was not coming from the script file. This mistake resulted in wrong line numbers being reported for any errors occurring later in the same script file. 
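To illustrate the de-parsing lock item above: a rule whose action writes to a second table (all names hypothetical). Dumping such a rule de-parses it; with the fix, doing so acquires only AccessShareLock on the referenced tables. This is an inferred reproduction, not taken from the commit itself:

    CREATE TABLE data  (v int);
    CREATE TABLE audit (t timestamptz);
    CREATE RULE log_ins AS ON INSERT TO data
        DO ALSO INSERT INTO audit VALUES (now());
    -- pg_dump ultimately relies on the same de-parsing code as:
    SELECT pg_get_ruledef(oid) FROM pg_rewrite WHERE rulename = 'log_ins';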
@@ -5892,14 +5892,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent intermittent could not reserve shared memory region + Prevent intermittent could not reserve shared memory region failures on recent Windows versions (MauMau) - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. @@ -5945,19 +5945,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. (CVE-2014-0060) @@ -5970,7 +5970,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -5990,7 +5990,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. @@ -6004,12 +6004,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -6036,7 +6036,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. 
Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -6048,35 +6048,35 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). (CVE-2014-0066) - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -6091,7 +6091,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -6111,20 +6111,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 was already consistent at the start of replay, thus possibly allowing hot-standby queries before the database was really consistent. Other symptoms such as PANIC: WAL contains references to invalid - pages were also possible. + pages were also possible. Fix improper locking of btree index pages while replaying - a VACUUM operation in hot-standby mode (Andres Freund, + a VACUUM operation in hot-standby mode (Andres Freund, Heikki Linnakangas, Tom Lane) This error could result in PANIC: WAL contains references to - invalid pages failures. + invalid pages failures. @@ -6142,8 +6142,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - When pause_at_recovery_target - and recovery_target_inclusive are both set, ensure the + When pause_at_recovery_target + and recovery_target_inclusive are both set, ensure the target record is applied before pausing, not after (Heikki Linnakangas) @@ -6156,7 +6156,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. 
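Returning to the GRANT ... WITH ADMIN OPTION item above, a sketch of the SET ROLE bypass that is now closed (role names hypothetical):

    CREATE ROLE granted_role;
    CREATE ROLE third_party;
    CREATE ROLE member LOGIN;
    GRANT granted_role TO member;        -- deliberately without ADMIN OPTION

    -- In a session connected as "member":
    SET ROLE granted_role;
    GRANT granted_role TO third_party;   -- previously succeeded; now fails
                                         -- for lack of the admin option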
@@ -6169,19 +6169,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. - Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -6205,7 +6205,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -6225,7 +6225,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 A previous patch allowed such keywords to be used without quoting in places such as role identifiers; but it missed cases where a - list of role identifiers was permitted, such as DROP ROLE. + list of role identifiers was permitted, such as DROP ROLE. @@ -6239,19 +6239,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) (Tom Lane) - Fix UPDATE/DELETE of an inherited target table - that has UNION ALL subqueries (Tom Lane) + Fix UPDATE/DELETE of an inherited target table + that has UNION ALL subqueries (Tom Lane) - Without this fix, UNION ALL subqueries aren't correctly + Without this fix, UNION ALL subqueries aren't correctly inserted into the update plans for inheritance child tables after the first one, typically resulting in no update happening for those child table(s). @@ -6260,12 +6260,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -6273,21 +6273,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -6319,12 +6319,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. 
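The ALTER TABLE ... SET TABLESPACE item above in one line; t_moved is a hypothetical table sitting in some other tablespace, and pg_default stands in for the database's default tablespace:

    -- Now permitted for the table owner even without CREATE permission
    -- on the default tablespace, matching CREATE TABLE's behavior:
    ALTER TABLE t_moved SET TABLESPACE pg_default;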
@@ -6332,8 +6332,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix placement of permissions checks in pg_start_backup() - and pg_stop_backup() (Andres Freund, Magnus Hagander) + Fix placement of permissions checks in pg_start_backup() + and pg_stop_backup() (Andres Freund, Magnus Hagander) @@ -6344,44 +6344,44 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) - Fix *-qualification of named parameters in SQL-language + Fix *-qualification of named parameters in SQL-language functions (Tom Lane) Given a composite-type parameter - named foo, $1.* worked fine, - but foo.* not so much. + named foo, $1.* worked fine, + but foo.* not so much. - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. @@ -6389,14 +6389,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix incorrect translation handling in - some psql \d commands + some psql \d commands (Peter Eisentraut, Tom Lane) - Ensure pg_basebackup's background process is killed + Ensure pg_basebackup's background process is killed when exiting its foreground process (Magnus Hagander) @@ -6404,7 +6404,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible incorrect printing of filenames - in pg_basebackup's verbose mode (Magnus Hagander) + in pg_basebackup's verbose mode (Magnus Hagander) @@ -6417,20 +6417,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -6441,15 +6441,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) - Fix contrib/pg_stat_statement's handling - of CURRENT_DATE and related constructs (Kyotaro + Fix contrib/pg_stat_statement's handling + of CURRENT_DATE and related constructs (Kyotaro Horiguchi) @@ -6463,28 +6463,28 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. 
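The *-qualification fix above, sketched with a hypothetical composite type (named SQL-function parameters are a 9.2 feature, so this applies to the 9.2-branch notes):

    CREATE TYPE pair AS (a int, b int);
    CREATE FUNCTION unwrap(foo pair) RETURNS pair AS $$
        SELECT foo.*;        -- previously only $1.* worked here
    $$ LANGUAGE sql;
    SELECT * FROM unwrap(ROW(1, 2)::pair);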
- Avoid using the deprecated dllwrap tool in Cygwin builds + Avoid using the deprecated dllwrap tool in Cygwin builds (Marco Atzeri) - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -6493,20 +6493,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -6558,19 +6558,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would need to happen before actual loss occurs, but it's not zero. In 9.2.0 and later, the probability of loss is higher, and it's also possible - to get could not access status of transaction errors as a + to get could not access status of transaction errors as a consequence of this bug. Users upgrading from releases 9.0.4 or 8.4.8 or earlier are not affected, but all later versions contain the bug. @@ -6578,18 +6578,18 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). - Fix initialization of pg_clog and pg_subtrans + Fix initialization of pg_clog and pg_subtrans during hot standby startup (Andres Freund, Heikki Linnakangas) @@ -6620,13 +6620,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This could lead to corruption of the lock data structures in shared - memory, causing lock already held and other odd errors. + memory, causing lock already held and other odd errors. 
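The remediation described above, spelled out as commands to run in every database of the cluster:

    SET vacuum_freeze_table_age = 0;
    VACUUM;    -- forces full-table scans, recomputing each relfrozenxid
    -- The installation can be presumed safe afterwards if this holds:
    SELECT txid_current() < 2^31;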
- Truncate pg_multixact contents during WAL replay + Truncate pg_multixact contents during WAL replay (Andres Freund) @@ -6637,14 +6637,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Ensure an anti-wraparound VACUUM counts a page as scanned + Ensure an anti-wraparound VACUUM counts a page as scanned when it's only verified that no tuples need freezing (Sergey Burladyan, Jeff Janes) This bug could result in failing to - advance relfrozenxid, so that the table would still be + advance relfrozenxid, so that the table would still be thought to need another anti-wraparound vacuum. In the worst case the database might even shut down to prevent wraparound. @@ -6663,15 +6663,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix unexpected spgdoinsert() failure error during SP-GiST + Fix unexpected spgdoinsert() failure error during SP-GiST index creation (Teodor Sigaev) - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -6688,14 +6688,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. Fix incorrect planning in cases where the same non-strict expression - appears in multiple WHERE and outer JOIN + appears in multiple WHERE and outer JOIN equality clauses (Tom Lane) @@ -6763,13 +6763,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. @@ -6783,7 +6783,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. 
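A one-liner exercising the int2vector slicing fix above, against the pg_index catalog; with the fix the sliced value is implicitly promoted to a plain int2[]:

    SELECT indkey[0:1] FROM pg_index LIMIT 1;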
@@ -6797,7 +6797,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -6808,10 +6808,10 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -6821,28 +6821,28 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Make ecpg search for quoted cursor names + Make ecpg search for quoted cursor names case-sensitively (Zoltán Böszörményi) - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. @@ -6894,7 +6894,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - PostgreSQL case-folds non-ASCII characters only + PostgreSQL case-folds non-ASCII characters only when using a single-byte server encoding. @@ -6909,7 +6909,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix checkpoint memory leak in background writer when wal_level = - hot_standby (Naoya Anzai) + hot_standby (Naoya Anzai) @@ -6922,7 +6922,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix memory overcommit bug when work_mem is using more + Fix memory overcommit bug when work_mem is using more than 24GB of memory (Stephen Frost) @@ -6964,58 +6964,58 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Previously tests like col IS NOT TRUE and col IS - NOT FALSE did not properly factor in NULL values when estimating + Previously tests like col IS NOT TRUE and col IS + NOT FALSE did not properly factor in NULL values when estimating plan costs. - Fix accounting for qualifier evaluation costs in UNION ALL + Fix accounting for qualifier evaluation costs in UNION ALL and inheritance queries (Tom Lane) This fixes cases where suboptimal query plans could be chosen if - some WHERE clauses are expensive to calculate. + some WHERE clauses are expensive to calculate. - Prevent pushing down WHERE clauses into unsafe - UNION/INTERSECT subqueries (Tom Lane) + Prevent pushing down WHERE clauses into unsafe + UNION/INTERSECT subqueries (Tom Lane) - Subqueries of a UNION or INTERSECT that + Subqueries of a UNION or INTERSECT that contain set-returning functions or volatile functions in their - SELECT lists could be improperly optimized, leading to + SELECT lists could be improperly optimized, leading to run-time errors or incorrect query results. - Fix rare case of failed to locate grouping columns + Fix rare case of failed to locate grouping columns planner failure (Tom Lane) - Fix pg_dump of foreign tables with dropped columns (Andrew Dunstan) + Fix pg_dump of foreign tables with dropped columns (Andrew Dunstan) - Previously such cases could cause a pg_upgrade error. + Previously such cases could cause a pg_upgrade error. 
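The pg_dumpall fix above concerns clusters containing a database configured like this (database name hypothetical):

    ALTER DATABASE reporting SET default_transaction_read_only = on;
    -- pg_dumpall previously failed against such a database because its
    -- setup commands ran inside the forced read-only transactions.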
- Reorder pg_dump processing of extension-related + Reorder pg_dump processing of extension-related rules and event triggers (Joe Conway) @@ -7023,7 +7023,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Force dumping of extension tables if specified by pg_dump - -t or -n (Joe Conway) + -t or -n (Joe Conway) @@ -7036,25 +7036,25 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix pg_restore -l with the directory archive to display + Fix pg_restore -l with the directory archive to display the correct format name (Fujii Masao) - Properly record index comments created using UNIQUE - and PRIMARY KEY syntax (Andres Freund) + Properly record index comments created using UNIQUE + and PRIMARY KEY syntax (Andres Freund) - This fixes a parallel pg_restore failure. + This fixes a parallel pg_restore failure. - Cause pg_basebackup -x with an empty xlog directory + Cause pg_basebackup -x with an empty xlog directory to throw an error rather than crashing (Magnus Hagander, Haruka Takatsuka) @@ -7093,13 +7093,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix REINDEX TABLE and REINDEX DATABASE + Fix REINDEX TABLE and REINDEX DATABASE to properly revalidate constraints and mark invalidated indexes as valid (Noah Misch) - REINDEX INDEX has always worked properly. + REINDEX INDEX has always worked properly. @@ -7112,7 +7112,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible deadlock during concurrent CREATE INDEX - CONCURRENTLY operations (Tom Lane) + CONCURRENTLY operations (Tom Lane) @@ -7124,7 +7124,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix regexp_matches() handling of zero-length matches + Fix regexp_matches() handling of zero-length matches (Jeevan Chalke) @@ -7148,14 +7148,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) - Allow ALTER DEFAULT PRIVILEGES to operate on schemas + Allow ALTER DEFAULT PRIVILEGES to operate on schemas without requiring CREATE permission (Tom Lane) @@ -7167,31 +7167,31 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Specifically, lessen keyword restrictions for role names, language - names, EXPLAIN and COPY options, and - SET values. This allows COPY ... (FORMAT - BINARY) to work as expected; previously BINARY needed + names, EXPLAIN and COPY options, and + SET values. This allows COPY ... (FORMAT + BINARY) to work as expected; previously BINARY needed to be quoted. 
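The keyword relaxation above means the natural spelling of the COPY option now works (table and file names hypothetical):

    COPY mytable TO '/tmp/mytable.bin' (FORMAT BINARY);   -- now accepted
    COPY mytable TO '/tmp/mytable.bin' (FORMAT 'binary'); -- old workaround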
- Print proper line number during COPY failure (Heikki + Print proper line number during COPY failure (Heikki Linnakangas) - Fix pgp_pub_decrypt() so it works for secret keys with + Fix pgp_pub_decrypt() so it works for secret keys with passwords (Marko Kreen) - Make pg_upgrade use pg_dump - --quote-all-identifiers to avoid problems with keyword changes + Make pg_upgrade use pg_dump + --quote-all-identifiers to avoid problems with keyword changes between releases (Tom Lane) @@ -7205,7 +7205,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Ensure that VACUUM ANALYZE still runs the ANALYZE phase + Ensure that VACUUM ANALYZE still runs the ANALYZE phase if its attempt to truncate the file is cancelled due to lock conflicts (Kevin Grittner) @@ -7214,28 +7214,28 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Avoid possible failure when performing transaction control commands (e.g - ROLLBACK) in prepared queries (Tom Lane) + ROLLBACK) in prepared queries (Tom Lane) Ensure that floating-point data input accepts standard spellings - of infinity on all platforms (Tom Lane) + of infinity on all platforms (Tom Lane) - The C99 standard says that allowable spellings are inf, - +inf, -inf, infinity, - +infinity, and -infinity. Make sure we - recognize these even if the platform's strtod function + The C99 standard says that allowable spellings are inf, + +inf, -inf, infinity, + +infinity, and -infinity. Make sure we + recognize these even if the platform's strtod function doesn't. - Avoid unnecessary reporting when track_activities is off + Avoid unnecessary reporting when track_activities is off (Tom Lane) @@ -7249,7 +7249,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent crash when psql's PSQLRC variable + Prevent crash when psql's PSQLRC variable contains a tilde (Bruce Momjian) @@ -7262,7 +7262,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Update time zone data files to tzdata release 2013d + Update time zone data files to tzdata release 2013d for DST law changes in Israel, Morocco, Palestine, and Paraguay. Also, historical zone data corrections for Macquarie Island. @@ -7297,7 +7297,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 However, this release corrects several errors in management of GiST indexes. After installing this update, it is advisable to - REINDEX any GiST indexes that meet one or more of the + REINDEX any GiST indexes that meet one or more of the conditions described below. @@ -7321,7 +7321,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 A connection request containing a database name that begins with - - could be crafted to damage or destroy + - could be crafted to damage or destroy files within the server's data directory, even if the request is eventually rejected. (CVE-2013-1899) @@ -7335,9 +7335,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This avoids a scenario wherein random numbers generated by - contrib/pgcrypto functions might be relatively easy for + contrib/pgcrypto functions might be relatively easy for another database user to guess. The risk is only significant when - the postmaster is configured with ssl = on + the postmaster is configured with ssl = on but most connections don't use SSL encryption. 
(CVE-2013-1900) @@ -7350,7 +7350,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 An unprivileged database user could exploit this mistake to call - pg_start_backup() or pg_stop_backup(), + pg_start_backup() or pg_stop_backup(), thus possibly interfering with creation of routine backups. (CVE-2013-1901) @@ -7358,32 +7358,32 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix GiST indexes to not use fuzzy geometric comparisons when + Fix GiST indexes to not use fuzzy geometric comparisons when it's not appropriate to do so (Alexander Korotkov) - The core geometric types perform comparisons using fuzzy - equality, but gist_box_same must do exact comparisons, + The core geometric types perform comparisons using fuzzy + equality, but gist_box_same must do exact comparisons, else GiST indexes using it might become inconsistent. After installing - this update, users should REINDEX any GiST indexes on - box, polygon, circle, or point - columns, since all of these use gist_box_same. + this update, users should REINDEX any GiST indexes on + box, polygon, circle, or point + columns, since all of these use gist_box_same. Fix erroneous range-union and penalty logic in GiST indexes that use - contrib/btree_gist for variable-width data types, that is - text, bytea, bit, and numeric + contrib/btree_gist for variable-width data types, that is + text, bytea, bit, and numeric columns (Tom Lane) These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in useless - index bloat. Users are advised to REINDEX such indexes + index bloat. Users are advised to REINDEX such indexes after installing this update. @@ -7398,21 +7398,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 These errors could result in inconsistent indexes in which some keys that are present would not be found by searches, and also in indexes that are unnecessarily inefficient to search. Users are advised to - REINDEX multi-column GiST indexes after installing this + REINDEX multi-column GiST indexes after installing this update. - Fix gist_point_consistent + Fix gist_point_consistent to handle fuzziness consistently (Alexander Korotkov) - Index scans on GiST indexes on point columns would sometimes + Index scans on GiST indexes on point columns would sometimes yield results different from a sequential scan, because - gist_point_consistent disagreed with the underlying + gist_point_consistent disagreed with the underlying operator code about whether to do comparisons exactly or fuzzily. @@ -7423,7 +7423,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This bug could result in incorrect local pin count errors + This bug could result in incorrect local pin count errors during replay, making recovery impossible. 
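The REINDEX advice attached to the GiST items above, as commands (index and table names hypothetical):

    -- GiST indexes on box, polygon, circle, or point columns:
    REINDEX INDEX my_points_gist_idx;
    -- and btree_gist indexes on text, bytea, bit, or numeric columns:
    REINDEX TABLE my_btree_gist_table;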
@@ -7431,7 +7431,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Ensure we do crash recovery before entering archive recovery, if the - database was not stopped cleanly and a recovery.conf file + database was not stopped cleanly and a recovery.conf file is present (Heikki Linnakangas, Kyotaro Horiguchi, Mitsumasa Kondo) @@ -7451,14 +7451,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix race condition in DELETE RETURNING (Tom Lane) + Fix race condition in DELETE RETURNING (Tom Lane) - Under the right circumstances, DELETE RETURNING could + Under the right circumstances, DELETE RETURNING could attempt to fetch data from a shared buffer that the current process no longer has any pin on. If some other process changed the buffer - meanwhile, this would lead to garbage RETURNING output, or + meanwhile, this would lead to garbage RETURNING output, or even a crash. @@ -7479,20 +7479,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix to_char() to use ASCII-only case-folding rules where + Fix to_char() to use ASCII-only case-folding rules where appropriate (Tom Lane) This fixes misbehavior of some template patterns that should be - locale-independent, but mishandled I and - i in Turkish locales. + locale-independent, but mishandled I and + i in Turkish locales. - Fix unwanted rejection of timestamp 1999-12-31 24:00:00 + Fix unwanted rejection of timestamp 1999-12-31 24:00:00 (Tom Lane) @@ -7506,8 +7506,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix logic error when a single transaction does UNLISTEN - then LISTEN (Tom Lane) + Fix logic error when a single transaction does UNLISTEN + then LISTEN (Tom Lane) @@ -7525,14 +7525,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix performance issue in EXPLAIN (ANALYZE, TIMING OFF) + Fix performance issue in EXPLAIN (ANALYZE, TIMING OFF) (Pavel Stehule) - Remove useless picksplit doesn't support secondary split log + Remove useless picksplit doesn't support secondary split log messages (Josh Hansen, Tom Lane) @@ -7547,7 +7547,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Remove vestigial secondary-split support in - gist_box_picksplit() (Tom Lane) + gist_box_picksplit() (Tom Lane) @@ -7566,29 +7566,29 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Eliminate memory leaks in PL/Perl's spi_prepare() function + Eliminate memory leaks in PL/Perl's spi_prepare() function (Alex Hunsaker, Tom Lane) - Fix pg_dumpall to handle database names containing - = correctly (Heikki Linnakangas) + Fix pg_dumpall to handle database names containing + = correctly (Heikki Linnakangas) - Avoid crash in pg_dump when an incorrect connection + Avoid crash in pg_dump when an incorrect connection string is given (Heikki Linnakangas) - Ignore invalid indexes in pg_dump and - pg_upgrade (Michael Paquier, Bruce Momjian) + Ignore invalid indexes in pg_dump and + pg_upgrade (Michael Paquier, Bruce Momjian) @@ -7597,15 +7597,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 a uniqueness condition not satisfied by the table's data. Also, if the index creation is in fact still in progress, it seems reasonable to consider it to be an uncommitted DDL change, which - pg_dump wouldn't be expected to dump anyway. - pg_upgrade now also skips invalid indexes rather than + pg_dump wouldn't be expected to dump anyway. + pg_upgrade now also skips invalid indexes rather than failing. 
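To illustrate the timestamp fix noted above (a sketch, not taken from the patch itself):

    SELECT '1999-12-31 24:00:00'::timestamp;
    -- yields 2000-01-01 00:00:00; 24:00:00 denotes midnight of the following day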
- In pg_basebackup, include only the current server + In pg_basebackup, include only the current server version's subdirectory when backing up a tablespace (Heikki Linnakangas) @@ -7613,16 +7613,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add a server version check in pg_basebackup and - pg_receivexlog, so they fail cleanly with version + Add a server version check in pg_basebackup and + pg_receivexlog, so they fail cleanly with version combinations that won't work (Heikki Linnakangas) - Fix contrib/dblink to handle inconsistent settings of - DateStyle or IntervalStyle safely (Daniel + Fix contrib/dblink to handle inconsistent settings of + DateStyle or IntervalStyle safely (Daniel Farina, Tom Lane) @@ -7630,7 +7630,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Previously, if the remote server had different settings of these parameters, ambiguous dates might be read incorrectly. This fix ensures that datetime and interval columns fetched by a - dblink query will be interpreted correctly. Note however + dblink query will be interpreted correctly. Note however that inconsistent settings are still risky, since literal values appearing in SQL commands sent to the remote server might be interpreted differently than they would be locally. @@ -7639,25 +7639,25 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix contrib/pg_trgm's similarity() function + Fix contrib/pg_trgm's similarity() function to return zero for trigram-less strings (Tom Lane) - Previously it returned NaN due to internal division by zero. + Previously it returned NaN due to internal division by zero. - Enable building PostgreSQL with Microsoft Visual + Enable building PostgreSQL with Microsoft Visual Studio 2012 (Brar Piening, Noah Misch) - Update time zone data files to tzdata release 2013b + Update time zone data files to tzdata release 2013b for DST law changes in Chile, Haiti, Morocco, Paraguay, and some Russian areas. Also, historical zone data corrections for numerous places. @@ -7665,12 +7665,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Also, update the time zone abbreviation files for recent changes in - Russia and elsewhere: CHOT, GET, - IRKT, KGT, KRAT, MAGT, - MAWT, MSK, NOVT, OMST, - TKT, VLAT, WST, YAKT, - YEKT now follow their current meanings, and - VOLT (Europe/Volgograd) and MIST + Russia and elsewhere: CHOT, GET, + IRKT, KGT, KRAT, MAGT, + MAWT, MSK, NOVT, OMST, + TKT, VLAT, WST, YAKT, + YEKT now follow their current meanings, and + VOLT (Europe/Volgograd) and MIST (Antarctica/Macquarie) are added to the default abbreviations list. @@ -7715,7 +7715,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent execution of enum_recv from SQL (Tom Lane) + Prevent execution of enum_recv from SQL (Tom Lane) @@ -7742,7 +7742,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This mistake could result in incorrect WAL ends before end of - online backup errors. + online backup errors. 
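A minimal illustration of the similarity() change described above, assuming the pg_trgm extension is installed:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    SELECT similarity('', '');   -- now returns 0; formerly NaN from the internal division by zero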
@@ -7824,8 +7824,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Improve performance of SPI_execute and related - functions, thereby improving PL/pgSQL's EXECUTE + Improve performance of SPI_execute and related + functions, thereby improving PL/pgSQL's EXECUTE (Heikki Linnakangas, Tom Lane) @@ -7860,20 +7860,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix intermittent crash in DROP INDEX CONCURRENTLY (Tom Lane) + Fix intermittent crash in DROP INDEX CONCURRENTLY (Tom Lane) Fix potential corruption of shared-memory lock table during - CREATE/DROP INDEX CONCURRENTLY (Tom Lane) + CREATE/DROP INDEX CONCURRENTLY (Tom Lane) - Fix COPY's multiple-tuple-insertion code for the case of + Fix COPY's multiple-tuple-insertion code for the case of a tuple larger than page size minus fillfactor (Heikki Linnakangas) @@ -7885,19 +7885,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Protect against race conditions when scanning - pg_tablespace (Stephen Frost, Tom Lane) + pg_tablespace (Stephen Frost, Tom Lane) - CREATE DATABASE and DROP DATABASE could + CREATE DATABASE and DROP DATABASE could misbehave if there were concurrent updates of - pg_tablespace entries. + pg_tablespace entries. - Prevent DROP OWNED from trying to drop whole databases or + Prevent DROP OWNED from trying to drop whole databases or tablespaces (Álvaro Herrera) @@ -7909,13 +7909,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix error in vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age implementation (Andres Freund) In installations that have existed for more than vacuum_freeze_min_age + linkend="guc-vacuum-freeze-min-age">vacuum_freeze_min_age transactions, this mistake prevented autovacuum from using partial-table scans, so that a full-table scan would always happen instead. @@ -7923,13 +7923,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent misbehavior when a RowExpr or XmlExpr + Prevent misbehavior when a RowExpr or XmlExpr is parse-analyzed twice (Andres Freund, Tom Lane) This mistake could be user-visible in contexts such as - CREATE TABLE LIKE INCLUDING INDEXES. + CREATE TABLE LIKE INCLUDING INDEXES. @@ -7947,7 +7947,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 There were some issues with default privileges for types, and - pg_dump failed to dump such privileges at all. + pg_dump failed to dump such privileges at all. @@ -7967,13 +7967,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Reject out-of-range dates in to_date() (Hitoshi Harada) + Reject out-of-range dates in to_date() (Hitoshi Harada) - Fix pg_extension_config_dump() to handle + Fix pg_extension_config_dump() to handle extension-update cases properly (Tom Lane) @@ -7991,7 +7991,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 The previous coding resulted in sometimes omitting the first line in - the CONTEXT traceback for the error. + the CONTEXT traceback for the error. @@ -8009,13 +8009,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This bug affected psql and some other client programs. + This bug affected psql and some other client programs. - Fix possible crash in psql's \? command + Fix possible crash in psql's \? 
command when not connected to a database (Meng Qingzhong) @@ -8023,74 +8023,74 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible error if a relation file is removed while - pg_basebackup is running (Heikki Linnakangas) + pg_basebackup is running (Heikki Linnakangas) - Tolerate timeline switches while pg_basebackup -X fetch + Tolerate timeline switches while pg_basebackup -X fetch is backing up a standby server (Heikki Linnakangas) - Make pg_dump exclude data of unlogged tables when + Make pg_dump exclude data of unlogged tables when running on a hot-standby server (Magnus Hagander) This would fail anyway because the data is not available on the standby server, so it seems most convenient to assume - automatically. - Fix pg_upgrade to deal with invalid indexes safely + Fix pg_upgrade to deal with invalid indexes safely (Bruce Momjian) - Fix pg_upgrade's -O/-o options (Marti Raudsepp) + Fix pg_upgrade's -O/-o options (Marti Raudsepp) - Fix one-byte buffer overrun in libpq's - PQprintTuples (Xi Wang) + Fix one-byte buffer overrun in libpq's + PQprintTuples (Xi Wang) This ancient function is not used anywhere by - PostgreSQL itself, but it might still be used by some + PostgreSQL itself, but it might still be used by some client code. - Make ecpglib use translated messages properly + Make ecpglib use translated messages properly (Chen Huajun) - Properly install ecpg_compat and - pgtypes libraries on MSVC (Jiang Guiqing) + Properly install ecpg_compat and + pgtypes libraries on MSVC (Jiang Guiqing) - Include our version of isinf() in - libecpg if it's not provided by the system + Include our version of isinf() in + libecpg if it's not provided by the system (Jiang Guiqing) @@ -8110,15 +8110,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Make pgxs build executables with the right - .exe suffix when cross-compiling for Windows + Make pgxs build executables with the right + .exe suffix when cross-compiling for Windows (Zoltan Boszormenyi) - Add new timezone abbreviation FET (Tom Lane) + Add new timezone abbreviation FET (Tom Lane) @@ -8153,7 +8153,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - However, you may need to perform REINDEX operations to + However, you may need to perform REINDEX operations to correct problems in concurrently-built indexes, as described in the first changelog item below. @@ -8173,22 +8173,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix multiple bugs associated with CREATE/DROP INDEX - CONCURRENTLY (Andres Freund, Tom Lane, Simon Riggs, Pavan Deolasee) + CONCURRENTLY (Andres Freund, Tom Lane, Simon Riggs, Pavan Deolasee) - An error introduced while adding DROP INDEX CONCURRENTLY + An error introduced while adding DROP INDEX CONCURRENTLY allowed incorrect indexing decisions to be made during the initial - phase of CREATE INDEX CONCURRENTLY; so that indexes built + phase of CREATE INDEX CONCURRENTLY; so that indexes built by that command could be corrupt. It is recommended that indexes - built in 9.2.X with CREATE INDEX CONCURRENTLY be rebuilt + built in 9.2.X with CREATE INDEX CONCURRENTLY be rebuilt after applying this update. - In addition, fix CREATE/DROP INDEX CONCURRENTLY to use + In addition, fix CREATE/DROP INDEX CONCURRENTLY to use in-place updates when changing the state of an index's - pg_index row. This prevents race conditions that could + pg_index row. 
This prevents race conditions that could cause concurrent sessions to miss updating the target index, thus again resulting in corrupt concurrently-created indexes. @@ -8196,33 +8196,33 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Also, fix various other operations to ensure that they ignore invalid indexes resulting from a failed CREATE INDEX - CONCURRENTLY command. The most important of these is - VACUUM, because an auto-vacuum could easily be launched + CONCURRENTLY command. The most important of these is + VACUUM, because an auto-vacuum could easily be launched on the table before corrective action can be taken to fix or remove the invalid index. - Also fix DROP INDEX CONCURRENTLY to not disable + Also fix DROP INDEX CONCURRENTLY to not disable insertions into the target index until all queries using it are done. - Also fix misbehavior if DROP INDEX CONCURRENTLY is + Also fix misbehavior if DROP INDEX CONCURRENTLY is canceled: the previous coding could leave an un-droppable index behind. - Correct predicate locking for DROP INDEX CONCURRENTLY + Correct predicate locking for DROP INDEX CONCURRENTLY (Kevin Grittner) Previously, SSI predicate locks were processed at the wrong time, possibly leading to incorrect behavior of serializable transactions - executing in parallel with the DROP. + executing in parallel with the DROP. @@ -8280,13 +8280,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This oversight could prevent subsequent execution of certain - operations such as CREATE INDEX CONCURRENTLY. + operations such as CREATE INDEX CONCURRENTLY. - Avoid bogus out-of-sequence timeline ID errors in standby + Avoid bogus out-of-sequence timeline ID errors in standby mode (Heikki Linnakangas) @@ -8306,20 +8306,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix the syslogger process to not fail when - log_rotation_age exceeds 2^31 milliseconds (about 25 days) + log_rotation_age exceeds 2^31 milliseconds (about 25 days) (Tom Lane) - Fix WaitLatch() to return promptly when the requested + Fix WaitLatch() to return promptly when the requested timeout expires (Jeff Janes, Tom Lane) With the previous coding, a steady stream of non-wait-terminating - interrupts could delay return from WaitLatch() + interrupts could delay return from WaitLatch() indefinitely. This has been shown to be a problem for the autovacuum launcher process, and might cause trouble elsewhere as well. @@ -8372,8 +8372,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 The planner could derive incorrect constraints from a clause equating a non-strict construct to something else, for example - WHERE COALESCE(foo, 0) = 0 - when foo is coming from the nullable side of an outer join. + WHERE COALESCE(foo, 0) = 0 + when foo is coming from the nullable side of an outer join. 9.2 showed this type of error in more cases than previous releases, but the basic bug has been there for a long time. @@ -8381,13 +8381,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix SELECT DISTINCT with index-optimized - MIN/MAX on an inheritance tree (Tom Lane) + Fix SELECT DISTINCT with index-optimized + MIN/MAX on an inheritance tree (Tom Lane) The planner would fail with failed to re-find MinMaxAggInfo - record given this combination of factors. + record given this combination of factors. @@ -8407,7 +8407,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 A strict join clause can be sufficient to establish an - x IS NOT NULL predicate, for example. 
+ x IS NOT NULL predicate, for example. This fixes a planner regression in 9.2, since previous versions could make comparable deductions. @@ -8434,10 +8434,10 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This affects multicolumn NOT IN subplans, such as - WHERE (a, b) NOT IN (SELECT x, y FROM ...) - when for instance b and y are int4 - and int8 respectively. This mistake led to wrong answers + This affects multicolumn NOT IN subplans, such as + WHERE (a, b) NOT IN (SELECT x, y FROM ...) + when for instance b and y are int4 + and int8 respectively. This mistake led to wrong answers or crashes depending on the specific datatypes involved. @@ -8450,8 +8450,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This oversight could result in wrong answers from merge joins whose inner side is an index scan using an - indexed_column = - ANY(array) condition. + indexed_column = + ANY(array) condition. @@ -8475,12 +8475,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Acquire buffer lock when re-fetching the old tuple for an - AFTER ROW UPDATE/DELETE trigger (Andres Freund) + AFTER ROW UPDATE/DELETE trigger (Andres Freund) In very unusual circumstances, this oversight could result in passing - incorrect data to a trigger WHEN condition, or to the + incorrect data to a trigger WHEN condition, or to the precheck logic for a foreign-key enforcement trigger. That could result in a crash, or in an incorrect decision about whether to fire the trigger. @@ -8489,7 +8489,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix ALTER COLUMN TYPE to handle inherited check + Fix ALTER COLUMN TYPE to handle inherited check constraints properly (Pavan Deolasee) @@ -8501,7 +8501,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix ALTER EXTENSION SET SCHEMA's failure to move some + Fix ALTER EXTENSION SET SCHEMA's failure to move some subsidiary objects into the new schema (Álvaro Herrera, Dimitri Fontaine) @@ -8509,7 +8509,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Handle CREATE TABLE AS EXECUTE correctly in extended query + Handle CREATE TABLE AS EXECUTE correctly in extended query protocol (Tom Lane) @@ -8517,7 +8517,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Don't modify the input parse tree in DROP RULE IF NOT - EXISTS and DROP TRIGGER IF NOT EXISTS (Tom Lane) + EXISTS and DROP TRIGGER IF NOT EXISTS (Tom Lane) @@ -8528,14 +8528,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix REASSIGN OWNED to handle grants on tablespaces + Fix REASSIGN OWNED to handle grants on tablespaces (Álvaro Herrera) - Ignore incorrect pg_attribute entries for system + Ignore incorrect pg_attribute entries for system columns for views (Tom Lane) @@ -8549,7 +8549,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix rule printing to dump INSERT INTO table + Fix rule printing to dump INSERT INTO table DEFAULT VALUES correctly (Tom Lane) @@ -8557,7 +8557,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Guard against stack overflow when there are too many - UNION/INTERSECT/EXCEPT clauses + UNION/INTERSECT/EXCEPT clauses in a query (Tom Lane) @@ -8579,22 +8579,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix failure to advance XID epoch if XID wraparound happens during a - checkpoint and wal_level is hot_standby + checkpoint and wal_level is hot_standby (Tom Lane, Andres Freund) While this mistake had no particular impact on PostgreSQL 
itself, it was bad for - applications that rely on txid_current() and related + applications that rely on txid_current() and related functions: the TXID value would appear to go backwards. - Fix pg_terminate_backend() and - pg_cancel_backend() to not throw error for a non-existent + Fix pg_terminate_backend() and + pg_cancel_backend() to not throw error for a non-existent target process (Josh Kupershmidt) @@ -8607,7 +8607,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix display of - pg_stat_replication.sync_state at a + pg_stat_replication.sync_state at a page boundary (Kyotaro Horiguchi) @@ -8621,7 +8621,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Formerly, this would result in something quite unhelpful, such as - Non-recoverable failure in name resolution. + Non-recoverable failure in name resolution. @@ -8646,8 +8646,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Make pg_ctl more robust about reading the - postmaster.pid file (Heikki Linnakangas) + Make pg_ctl more robust about reading the + postmaster.pid file (Heikki Linnakangas) @@ -8657,45 +8657,45 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix possible crash in psql if incorrectly-encoded data - is presented and the client_encoding setting is a + Fix possible crash in psql if incorrectly-encoded data + is presented and the client_encoding setting is a client-only encoding, such as SJIS (Jiang Guiqing) - Make pg_dump dump SEQUENCE SET items in + Make pg_dump dump SEQUENCE SET items in the data not pre-data section of the archive (Tom Lane) This fixes an undesirable inconsistency between the meanings of - and , and also fixes dumping of sequences that are marked as extension configuration tables. - Fix pg_dump's handling of DROP DATABASE - commands in mode (Guillaume Lelarge) - Beginning in 9.2.0, pg_dump --clean would issue a - DROP DATABASE command, which was either useless or + Beginning in 9.2.0, pg_dump --clean would issue a + DROP DATABASE command, which was either useless or dangerous depending on the usage scenario. It no longer does that. - This change also fixes the combination of - Fix pg_dump for views with circular dependencies and + Fix pg_dump for views with circular dependencies and no relation options (Tom Lane) @@ -8703,31 +8703,31 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 The previous fix to dump relation options when a view is involved in a circular dependency didn't work right for the case that the view has no options; it emitted ALTER VIEW foo - SET () which is invalid syntax. + SET () which is invalid syntax. - Fix bugs in the restore.sql script emitted by - pg_dump in tar output format (Tom Lane) + Fix bugs in the restore.sql script emitted by + pg_dump in tar output format (Tom Lane) The script would fail outright on tables whose names include upper-case characters. Also, make the script capable of restoring - data in mode as well as the regular COPY mode. - Fix pg_restore to accept POSIX-conformant - tar files (Brian Weaver, Tom Lane) + Fix pg_restore to accept POSIX-conformant + tar files (Brian Weaver, Tom Lane) - The original coding of pg_dump's tar + The original coding of pg_dump's tar output mode produced files that are not fully conformant with the POSIX standard. This has been corrected for version 9.3. 
This patch updates previous branches so that they will accept both the @@ -8738,82 +8738,82 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix tar files emitted by pg_basebackup to + Fix tar files emitted by pg_basebackup to be POSIX conformant (Brian Weaver, Tom Lane) - Fix pg_resetxlog to locate postmaster.pid + Fix pg_resetxlog to locate postmaster.pid correctly when given a relative path to the data directory (Tom Lane) - This mistake could lead to pg_resetxlog not noticing + This mistake could lead to pg_resetxlog not noticing that there is an active postmaster using the data directory. - Fix libpq's lo_import() and - lo_export() functions to report file I/O errors properly + Fix libpq's lo_import() and + lo_export() functions to report file I/O errors properly (Tom Lane) - Fix ecpg's processing of nested structure pointer + Fix ecpg's processing of nested structure pointer variables (Muhammad Usama) - Fix ecpg's ecpg_get_data function to + Fix ecpg's ecpg_get_data function to handle arrays properly (Michael Meskes) - Prevent pg_upgrade from trying to process TOAST tables + Prevent pg_upgrade from trying to process TOAST tables for system catalogs (Bruce Momjian) - This fixes an error seen when the information_schema has + This fixes an error seen when the information_schema has been dropped and recreated. Other failures were also possible. - Improve pg_upgrade performance by setting - synchronous_commit to off in the new cluster + Improve pg_upgrade performance by setting + synchronous_commit to off in the new cluster (Bruce Momjian) - Make contrib/pageinspect's btree page inspection + Make contrib/pageinspect's btree page inspection functions take buffer locks while examining pages (Tom Lane) - Work around unportable behavior of malloc(0) and - realloc(NULL, 0) (Tom Lane) + Work around unportable behavior of malloc(0) and + realloc(NULL, 0) (Tom Lane) - On platforms where these calls return NULL, some code + On platforms where these calls return NULL, some code mistakenly thought that meant out-of-memory. - This is known to have broken pg_dump for databases + This is known to have broken pg_dump for databases containing no user-defined aggregates. There might be other cases as well. @@ -8821,19 +8821,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Ensure that make install for an extension creates the - extension installation directory (Cédric Villemain) + Ensure that make install for an extension creates the + extension installation directory (Cédric Villemain) - Previously, this step was missed if MODULEDIR was set in + Previously, this step was missed if MODULEDIR was set in the extension's Makefile. - Fix pgxs support for building loadable modules on AIX + Fix pgxs support for building loadable modules on AIX (Tom Lane) @@ -8844,7 +8844,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Update time zone data files to tzdata release 2012j + Update time zone data files to tzdata release 2012j for DST law changes in Cuba, Israel, Jordan, Libya, Palestine, Western Samoa, and portions of Brazil. @@ -8877,8 +8877,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - However, you may need to perform REINDEX and/or - VACUUM operations to recover from the effects of the data + However, you may need to perform REINDEX and/or + VACUUM operations to recover from the effects of the data corruption bug described in the first changelog item below. 
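A usage sketch for the pageinspect btree functions mentioned above; the index name is hypothetical:

    CREATE EXTENSION pageinspect;
    SELECT * FROM bt_page_stats('my_btree_index', 1);
    -- the inspection functions now hold a buffer lock while examining the page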
@@ -8903,7 +8903,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 likely to occur on standby slave servers since those perform much more WAL replay. There is a low probability of corruption of btree and GIN indexes. There is a much higher probability of corruption - of table visibility maps, which might lead to wrong answers + of table visibility maps, which might lead to wrong answers from index-only scans. Table data proper cannot be corrupted by this bug. @@ -8911,16 +8911,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 While no index corruption due to this bug is known to have occurred in the field, as a precautionary measure it is recommended that - production installations REINDEX all btree and GIN + production installations REINDEX all btree and GIN indexes at a convenient time after upgrading to 9.2.1. - Also, it is recommended to perform a VACUUM of all tables + Also, it is recommended to perform a VACUUM of all tables while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix any incorrect visibility map data. vacuum_cost_delay + linkend="guc-vacuum-cost-delay">vacuum_cost_delay can be adjusted to reduce the performance impact of vacuuming, while causing it to take longer to finish. @@ -8929,14 +8929,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix possible incorrect sorting of output from queries involving - WHERE indexed_column IN - (list_of_values) (Tom Lane) + WHERE indexed_column IN + (list_of_values) (Tom Lane) - Fix planner failure for queries involving GROUP BY + Fix planner failure for queries involving GROUP BY expressions along with window functions and aggregates (Tom Lane) @@ -8948,7 +8948,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This error could result in wrong answers from queries that scan the - same WITH subquery multiple times. + same WITH subquery multiple times. @@ -8961,7 +8961,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve selectivity estimation for text search queries involving - prefixes, i.e. word:* patterns (Tom Lane) + prefixes, i.e. word:* patterns (Tom Lane) @@ -8972,14 +8972,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 A command that needed no locks other than ones its transaction already - had might fail to notice a concurrent GRANT or - REVOKE that committed since the start of its transaction. + had might fail to notice a concurrent GRANT or + REVOKE that committed since the start of its transaction. - Fix ANALYZE to not fail when a column is a domain over an + Fix ANALYZE to not fail when a column is a domain over an array type (Tom Lane) @@ -8998,7 +8998,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Some Linux distributions contain an incorrect version of - pthread.h that results in incorrect compiled code in + pthread.h that results in incorrect compiled code in PL/Perl, leading to crashes if a PL/Perl function calls another one that throws an error. 
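A sketch of the precautionary cleanup recommended above, to be run at a convenient maintenance time (the index name is hypothetical):

    SET vacuum_freeze_table_age = 0;
    SET vacuum_cost_delay = 20;           -- optional: throttle vacuum I/O (milliseconds)
    VACUUM;                               -- repairs any incorrect visibility map data
    REINDEX INDEX my_btree_or_gin_index;  -- repeat for each btree and GIN index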
@@ -9006,14 +9006,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Remove unnecessary dependency on pg_config from - pg_upgrade (Peter Eisentraut) + Remove unnecessary dependency on pg_config from + pg_upgrade (Peter Eisentraut) - Update time zone data files to tzdata release 2012f + Update time zone data files to tzdata release 2012f for DST law changes in Fiji @@ -9047,7 +9047,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow queries to retrieve data only from indexes, avoiding heap - access (index-only scans) + access (index-only scans) @@ -9069,14 +9069,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow streaming replication slaves to forward data to other slaves (cascading - replication) + replication) Allow pg_basebackup + linkend="app-pgbasebackup">pg_basebackup to make base backups from standby servers @@ -9084,7 +9084,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog tool to archive WAL file changes as they are written @@ -9112,14 +9112,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a security_barrier + linkend="SQL-CREATEVIEW">security_barrier option for views - Allow libpq connection strings to have the format of a + Allow libpq connection strings to have the format of a URI @@ -9127,7 +9127,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a single-row processing - mode to libpq for better handling of large + mode to libpq for better handling of large result sets @@ -9162,8 +9162,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Remove the spclocation field from pg_tablespace + Remove the spclocation field from pg_tablespace (Magnus Hagander) @@ -9173,23 +9173,23 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 a tablespace. This change allows tablespace directories to be moved while the server is down, by manually adjusting the symbolic links. To replace this field, we have added pg_tablespace_location() + linkend="functions-info-catalog-table">pg_tablespace_location() to allow querying of the symbolic links. - Move tsvector most-common-element statistics to new - pg_stats columns + Move tsvector most-common-element statistics to new + pg_stats columns (Alexander Korotkov) - Consult most_common_elems - and most_common_elem_freqs for the data formerly - available in most_common_vals - and most_common_freqs for a tsvector column. + Consult most_common_elems + and most_common_elem_freqs for the data formerly + available in most_common_vals + and most_common_freqs for a tsvector column. @@ -9204,14 +9204,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Remove hstore's => + Remove hstore's => operator (Robert Haas) - Users should now use hstore(text, text). Since + Users should now use hstore(text, text). Since PostgreSQL 9.0, a warning message has been - emitted when an operator named => is created because + emitted when an operator named => is created because the SQL standard reserves that token for another use. 
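For example, a sketch of the migration away from the removed operator:

    -- SELECT 'a' => 'b';      -- the operator removed from hstore
    SELECT hstore('a', 'b');   -- the supported equivalent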
@@ -9220,7 +9220,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Ensure that xpath() + linkend="functions-xml-processing">xpath() escapes special characters in string values (Florian Pflug) @@ -9233,13 +9233,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Make pg_relation_size() + linkend="functions-admin-dbobject">pg_relation_size() and friends return NULL if the object does not exist (Phil Sorber) This prevents queries that call these functions from returning - errors immediately after a concurrent DROP. + errors immediately after a concurrent DROP. @@ -9247,7 +9247,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Make EXTRACT(EPOCH FROM - timestamp without time zone) + timestamp without time zone) measure the epoch from local midnight, not UTC midnight (Tom Lane) @@ -9256,17 +9256,17 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This change reverts an ill-considered change made in release 7.3. Measuring from UTC midnight was inconsistent because it made the result dependent on the timezone setting, which - computations for timestamp without time zone should not be. + linkend="guc-timezone">timezone setting, which + computations for timestamp without time zone should not be. The previous behavior remains available by casting the input value - to timestamp with time zone. + to timestamp with time zone. - Properly parse time strings with trailing yesterday, - today, and tomorrow (Dean Rasheed) + Properly parse time strings with trailing yesterday, + today, and tomorrow (Dean Rasheed) @@ -9278,8 +9278,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix to_date() and - to_timestamp() to wrap incomplete dates toward 2020 + linkend="functions-formatting">to_date() and + to_timestamp() to wrap incomplete dates toward 2020 (Bruce Momjian) @@ -9314,15 +9314,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 No longer forcibly lowercase procedural language names in CREATE FUNCTION + linkend="SQL-CREATEFUNCTION">CREATE FUNCTION (Robert Haas) While unquoted language identifiers are still lowercased, strings and quoted identifiers are no longer forcibly down-cased. - Thus for example CREATE FUNCTION ... LANGUAGE 'C' - will no longer work; it must be spelled 'c', or better + Thus for example CREATE FUNCTION ... LANGUAGE 'C' + will no longer work; it must be spelled 'c', or better omit the quotes. @@ -9352,15 +9352,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Provide consistent backquote, variable expansion, and quoted substring behavior in psql meta-command + linkend="APP-PSQL">psql meta-command arguments (Tom Lane) Previously, such references were treated oddly when not separated by - whitespace from adjacent text. For example 'FOO'BAR was - output as FOO BAR (unexpected insertion of a space) and - FOO'BAR'BAZ was output unchanged (not removing the quotes + whitespace from adjacent text. For example 'FOO'BAR was + output as FOO BAR (unexpected insertion of a space) and + FOO'BAR'BAZ was output unchanged (not removing the quotes as most would expect). 
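A sketch of the EXTRACT(EPOCH ...) behavior change described above:

    SELECT extract(epoch FROM timestamp '1970-01-02 00:00:00');
    -- 86400 under every TimeZone setting: measured from local midnight 1970-01-01
    SELECT extract(epoch FROM timestamptz '1970-01-02 00:00:00+00');
    -- the old UTC-referenced interpretation, via timestamp with time zone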
@@ -9368,9 +9368,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 No longer treat clusterdb + linkend="APP-CLUSTERDB">clusterdb table names as double-quoted; no longer treat reindexdb table + linkend="APP-REINDEXDB">reindexdb table and index names as double-quoted (Bruce Momjian) @@ -9382,20 +9382,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - createuser + createuser no longer prompts for option settings by default (Peter Eisentraut) - Use to obtain the old behavior. Disable prompting for the user name in dropuser unless - is specified (Peter Eisentraut) @@ -9417,36 +9417,36 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This allows changing the names and locations of the files that were - previously hard-coded as server.crt, - server.key, root.crt, and - root.crl in the data directory. - The server will no longer examine root.crt or - root.crl by default; to load these files, the + previously hard-coded as server.crt, + server.key, root.crt, and + root.crl in the data directory. + The server will no longer examine root.crt or + root.crl by default; to load these files, the associated parameters must be set to non-default values. - Remove the silent_mode parameter (Heikki Linnakangas) + Remove the silent_mode parameter (Heikki Linnakangas) Similar behavior can be obtained with pg_ctl start - -l postmaster.log. + -l postmaster.log. - Remove the wal_sender_delay parameter, + Remove the wal_sender_delay parameter, as it is no longer needed (Tom Lane) - Remove the custom_variable_classes parameter (Tom Lane) + Remove the custom_variable_classes parameter (Tom Lane) @@ -9466,19 +9466,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Rename pg_stat_activity.procpid - to pid, to match other system tables (Magnus Hagander) + linkend="monitoring-stats-views-table">pg_stat_activity.procpid + to pid, to match other system tables (Magnus Hagander) - Create a separate pg_stat_activity column to + Create a separate pg_stat_activity column to report process state (Scott Mead, Magnus Hagander) - The previous query and query_start + The previous query and query_start values now remain available for an idle session, allowing enhanced analysis. @@ -9486,8 +9486,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Rename pg_stat_activity.current_query to - query because it is not cleared when the query + Rename pg_stat_activity.current_query to + query because it is not cleared when the query completes (Magnus Hagander) @@ -9495,24 +9495,24 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Change all SQL-level statistics timing values - to be float8 columns measured in milliseconds (Tom Lane) + to be float8 columns measured in milliseconds (Tom Lane) This change eliminates the designed-in assumption that the values - are accurate to microseconds and no more (since the float8 + are accurate to microseconds and no more (since the float8 values can be fractional). The columns affected are - pg_stat_user_functions.total_time, - pg_stat_user_functions.self_time, - pg_stat_xact_user_functions.total_time, + pg_stat_user_functions.total_time, + pg_stat_user_functions.self_time, + pg_stat_xact_user_functions.total_time, and - pg_stat_xact_user_functions.self_time. + pg_stat_xact_user_functions.self_time. The statistics functions underlying these columns now also return - float8 milliseconds, rather than bigint + float8 milliseconds, rather than bigint microseconds. 
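A sketch against the renamed pg_stat_activity columns described above:

    SELECT pid, state, query     -- formerly procpid and current_query
    FROM pg_stat_activity
    WHERE state = 'idle';        -- query now retains an idle session's last statement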
- contrib/pg_stat_statements' - total_time column is now also measured in + contrib/pg_stat_statements' + total_time column is now also measured in milliseconds. @@ -9546,7 +9546,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This feature is often called index-only scans. + This feature is often called index-only scans. Heap access can be skipped for heap pages containing only tuples that are visible to all sessions, as reported by the visibility map; so the benefit applies mainly to mostly-static data. The visibility map @@ -9618,7 +9618,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Move the frequently accessed members of the PGPROC + Move the frequently accessed members of the PGPROC shared memory array to a separate array (Pavan Deolasee, Heikki Linnakangas, Robert Haas) @@ -9663,7 +9663,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Make the number of CLOG buffers scale based on shared_buffers + linkend="guc-shared-buffers">shared_buffers (Robert Haas, Simon Riggs, Tom Lane) @@ -9724,7 +9724,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Previously, only wal_writer_delay + linkend="guc-wal-writer-delay">wal_writer_delay triggered WAL flushing to disk; now filling a WAL buffer also triggers WAL writes. @@ -9763,7 +9763,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 In the past, a prepared statement always had a single - generic plan that was used for all parameter values, which + generic plan that was used for all parameter values, which was frequently much inferior to the plans used for non-prepared statements containing explicit constant values. Now, the planner attempts to generate custom plans for specific parameter values. @@ -9781,7 +9781,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - The new parameterized path mechanism allows inner + The new parameterized path mechanism allows inner index scans to use values from relations that are more than one join level up from the scan. This can greatly improve performance in situations where semantic restrictions (such as outer joins) limit @@ -9796,7 +9796,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Wrappers can now provide multiple access paths for their + Wrappers can now provide multiple access paths for their tables, allowing more flexibility in join planning. @@ -9809,14 +9809,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This check is only performed when constraint_exclusion + linkend="guc-constraint-exclusion">constraint_exclusion is on. 
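A minimal index-only-scan sketch (table and index names are hypothetical):

    CREATE TABLE t (id int, payload text);
    CREATE INDEX t_id_idx ON t (id);
    VACUUM t;                                -- populates the visibility map
    EXPLAIN SELECT id FROM t WHERE id = 42;  -- may report an Index Only Scan, since
                                             -- the index supplies every referenced column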
- Allow indexed_col op ANY(ARRAY[...]) conditions to be + Allow indexed_col op ANY(ARRAY[...]) conditions to be used in plain index scans and index-only scans (Tom Lane) @@ -9827,14 +9827,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Support MIN/MAX index optimizations on + Support MIN/MAX index optimizations on boolean columns (Marti Raudsepp) - Account for set-returning functions in SELECT target + Account for set-returning functions in SELECT target lists when setting row count estimates (Tom Lane) @@ -9882,7 +9882,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve statistical estimates for subqueries using - DISTINCT (Tom Lane) + DISTINCT (Tom Lane) @@ -9897,13 +9897,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Do not treat role names and samerole specified in samerole specified in pg_hba.conf as automatically including superusers (Andrew Dunstan) - This makes it easier to use reject lines with group roles. + This makes it easier to use reject lines with group roles. @@ -9958,7 +9958,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This logging is triggered by log_autovacuum_min_duration. + linkend="guc-log-autovacuum-min-duration">log_autovacuum_min_duration. @@ -9977,7 +9977,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add pg_xlog_location_diff() + linkend="functions-admin-backup">pg_xlog_location_diff() to simplify WAL location comparisons (Euler Taveira de Oliveira) @@ -9995,15 +9995,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This allows different instances to use the event log with different identifiers, by setting the event_source + linkend="guc-event-source">event_source server parameter, which is similar to how syslog_ident works. + linkend="guc-syslog-ident">syslog_ident works. - Change unexpected EOF messages to DEBUG1 level, + Change unexpected EOF messages to DEBUG1 level, except when there is an open transaction (Magnus Hagander) @@ -10025,14 +10025,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Track temporary file sizes and file counts in the pg_stat_database + linkend="pg-stat-database-view">pg_stat_database system view (Tomas Vondra) - Add a deadlock counter to the pg_stat_database + Add a deadlock counter to the pg_stat_database system view (Magnus Hagander) @@ -10040,7 +10040,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a server parameter track_io_timing + linkend="guc-track-io-timing">track_io_timing to track I/O timings (Ants Aasma, Robert Haas) @@ -10048,7 +10048,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Report checkpoint timing information in pg_stat_bgwriter + linkend="pg-stat-bgwriter-view">pg_stat_bgwriter (Greg Smith, Peter Geoghegan) @@ -10065,7 +10065,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Silently ignore nonexistent schemas specified in search_path (Tom Lane) + linkend="guc-search-path">search_path (Tom Lane) @@ -10077,12 +10077,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow superusers to set deadlock_timeout + linkend="guc-deadlock-timeout">deadlock_timeout per-session, not just per-cluster (Noah Misch) - This allows deadlock_timeout to be reduced for + This allows deadlock_timeout to be reduced for transactions that are likely to be involved in a deadlock, thus detecting the failure more quickly. 
Alternatively, increasing the value can be used to reduce the chances of a session being chosen for @@ -10093,7 +10093,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a server parameter temp_file_limit + linkend="guc-temp-file-limit">temp_file_limit to constrain temporary file space usage per session (Mark Kirkwood) @@ -10114,13 +10114,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add postmaster option to query configuration parameters (Bruce Momjian) - This allows pg_ctl to better handle cases where - PGDATA or points to a configuration-only directory. @@ -10128,14 +10128,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Replace an empty locale name with the implied value in - CREATE DATABASE + CREATE DATABASE (Tom Lane) This prevents cases where - pg_database.datcollate or - datctype could be interpreted differently after a + pg_database.datcollate or + datctype could be interpreted differently after a server restart. @@ -10170,22 +10170,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add an include_if_exists facility for configuration + Add an include_if_exists facility for configuration files (Greg Smith) - This works the same as include, except that an error + This works the same as include, except that an error is not thrown if the file is missing. - Identify the server time zone during initdb, and set + Identify the server time zone during initdb, and set postgresql.conf entries - timezone and - log_timezone + timezone and + log_timezone accordingly (Tom Lane) @@ -10197,7 +10197,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Fix pg_settings to + linkend="view-pg-settings">pg_settings to report postgresql.conf line numbers on Windows (Tom Lane) @@ -10220,7 +10220,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow streaming replication slaves to forward data to other slaves (cascading - replication) (Fujii Masao) + replication) (Fujii Masao) @@ -10232,8 +10232,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add new synchronous_commit - mode remote_write (Fujii Masao, Simon Riggs) + linkend="guc-synchronous-commit">synchronous_commit + mode remote_write (Fujii Masao, Simon Riggs) @@ -10246,7 +10246,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog tool to archive WAL file changes as they are written, rather than waiting for completed WAL files (Magnus Hagander) @@ -10255,7 +10255,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow pg_basebackup + linkend="app-pgbasebackup">pg_basebackup to make base backups from standby servers (Jun Ishizuka, Fujii Masao) @@ -10267,7 +10267,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow streaming of WAL files while pg_basebackup + Allow streaming of WAL files while pg_basebackup is performing a backup (Magnus Hagander) @@ -10306,19 +10306,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This change allows better results when a row value is converted to - hstore or json type: the fields of the resulting + hstore or json type: the fields of the resulting value will now have the expected names. - Improve column labels used for sub-SELECT results + Improve column labels used for sub-SELECT results (Marti Raudsepp) - Previously, the generic label ?column? was used. + Previously, the generic label ?column? was used. 
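To illustrate the new synchronous_commit mode mentioned above (a sketch):

    SET synchronous_commit = remote_write;
    -- commits wait for the standby to receive (write) the WAL, not to flush it to disk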
@@ -10348,7 +10348,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - When a row fails a CHECK or NOT NULL + When a row fails a CHECK or NOT NULL constraint, show the row's contents as error detail (Jan Kundrát) @@ -10376,7 +10376,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This change adds locking that should eliminate cache lookup - failed errors in many scenarios. Also, it is no longer possible + failed errors in many scenarios. Also, it is no longer possible to add relations to a schema that is being concurrently dropped, a scenario that formerly led to inconsistent system catalog contents. @@ -10384,7 +10384,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add CONCURRENTLY option to CONCURRENTLY option to DROP INDEX (Simon Riggs) @@ -10415,31 +10415,31 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow CHECK - constraints to be declared NOT VALID (Álvaro + Allow CHECK + constraints to be declared NOT VALID (Álvaro Herrera) - Adding a NOT VALID constraint does not cause the table to + Adding a NOT VALID constraint does not cause the table to be scanned to verify that existing rows meet the constraint. Subsequently, newly added or updated rows are checked. Such constraints are ignored by the planner when considering - constraint_exclusion, since it is not certain that all + constraint_exclusion, since it is not certain that all rows meet the constraint. - The new ALTER TABLE VALIDATE command allows NOT - VALID constraints to be checked for existing rows, after which + The new ALTER TABLE VALIDATE command allows NOT + VALID constraints to be checked for existing rows, after which they are converted into ordinary constraints. - Allow CHECK constraints to be declared NO - INHERIT (Nikhil Sontakke, Alex Hunsaker, Álvaro Herrera) + Allow CHECK constraints to be declared NO + INHERIT (Nikhil Sontakke, Alex Hunsaker, Álvaro Herrera) @@ -10459,7 +10459,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <command>ALTER</> + <command>ALTER</command> @@ -10467,18 +10467,18 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Reduce need to rebuild tables and indexes for certain ALTER TABLE - ... ALTER COLUMN TYPE operations (Noah Misch) + ... ALTER COLUMN TYPE operations (Noah Misch) - Increasing the length limit for a varchar or varbit + Increasing the length limit for a varchar or varbit column, or removing the limit altogether, no longer requires a table rewrite. Similarly, increasing the allowable precision of a - numeric column, or changing a column from constrained - numeric to unconstrained numeric, no longer + numeric column, or changing a column from constrained + numeric to unconstrained numeric, no longer requires a table rewrite. Table rewrites are also avoided in similar - cases involving the interval, timestamp, and - timestamptz types. + cases involving the interval, timestamp, and + timestamptz types. @@ -10492,7 +10492,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add IF EXISTS options to some ALTER + Add IF EXISTS options to some ALTER commands (Pavel Stehule) @@ -10505,16 +10505,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add ALTER - FOREIGN DATA WRAPPER ... RENAME + FOREIGN DATA WRAPPER ... RENAME and ALTER - SERVER ... RENAME (Peter Eisentraut) + SERVER ... RENAME (Peter Eisentraut) Add ALTER - DOMAIN ... RENAME (Peter Eisentraut) + DOMAIN ... 
RENAME (Peter Eisentraut) @@ -10526,11 +10526,11 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Throw an error for ALTER DOMAIN ... DROP - CONSTRAINT on a nonexistent constraint (Peter Eisentraut) + CONSTRAINT on a nonexistent constraint (Peter Eisentraut) - An IF EXISTS option has been added to provide the + An IF EXISTS option has been added to provide the previous behavior. @@ -10540,7 +10540,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</></link> + <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</command></link> @@ -10565,8 +10565,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Fix CREATE TABLE ... AS EXECUTE - to handle WITH NO DATA and column name specifications + Fix CREATE TABLE ... AS EXECUTE + to handle WITH NO DATA and column name specifications (Tom Lane) @@ -10583,14 +10583,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a security_barrier + linkend="SQL-CREATEVIEW">security_barrier option for views (KaiGai Kohei, Robert Haas) This option prevents optimizations that might allow view-protected data to be exposed to users, for example pushing a clause involving - an insecure function into the WHERE clause of the view. + an insecure function into the WHERE clause of the view. Such views can be expected to perform more poorly than ordinary views. @@ -10599,9 +10599,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a new LEAKPROOF function + linkend="SQL-CREATEFUNCTION">LEAKPROOF function attribute to mark functions that can safely be pushed down - into security_barrier views (KaiGai Kohei) + into security_barrier views (KaiGai Kohei) @@ -10611,8 +10611,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This adds support for the SQL-conforming - USAGE privilege on types and domains. The intent is + This adds support for the SQL-conforming + USAGE privilege on types and domains. The intent is to be able to restrict which users can create dependencies on types, since such dependencies limit the owner's ability to alter the type. @@ -10628,7 +10628,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Because the object is being created by SELECT INTO or CREATE TABLE AS, the creator would ordinarily have insert permissions; but there are corner cases where this is not - true, such as when ALTER DEFAULT PRIVILEGES has removed + true, such as when ALTER DEFAULT PRIVILEGES has removed such permissions. @@ -10646,20 +10646,20 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow VACUUM to more + Allow VACUUM to more easily skip pages that cannot be locked (Simon Riggs, Robert Haas) - This change should greatly reduce the incidence of VACUUM - getting stuck waiting for other sessions. + This change should greatly reduce the incidence of VACUUM + getting stuck waiting for other sessions. - Make EXPLAIN - (BUFFERS) count blocks dirtied and written (Robert Haas) + Make EXPLAIN + (BUFFERS) count blocks dirtied and written (Robert Haas) @@ -10677,8 +10677,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This is accomplished by setting the new TIMING option to - FALSE. + This is accomplished by setting the new TIMING option to + FALSE. 
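For example (a sketch; the table name is hypothetical):

    EXPLAIN (ANALYZE, TIMING FALSE, BUFFERS) SELECT count(*) FROM my_table;
    -- per-node timing is skipped, while blocks dirtied and written are still counted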
@@ -10719,41 +10719,41 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add array_to_json() - and row_to_json() (Andrew Dunstan) + linkend="functions-json">array_to_json() + and row_to_json() (Andrew Dunstan) - Add a SMALLSERIAL + Add a SMALLSERIAL data type (Mike Pultz) - This is like SERIAL, except it stores the sequence in - a two-byte integer column (int2). + This is like SERIAL, except it stores the sequence in + a two-byte integer column (int2). Allow domains to be - declared NOT VALID (Álvaro Herrera) + declared NOT VALID (Álvaro Herrera) This option can be set at domain creation time, or via ALTER - DOMAIN ... ADD CONSTRAINT ... NOT - VALID. ALTER DOMAIN ... VALIDATE - CONSTRAINT fully validates the constraint. + DOMAIN ... ADD CONSTRAINT ... NOT + VALID. ALTER DOMAIN ... VALIDATE + CONSTRAINT fully validates the constraint. Support more locale-specific formatting options for the money data type (Tom Lane) + linkend="datatype-money">money data type (Tom Lane) @@ -10766,22 +10766,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add bitwise and, or, and not - operators for the macaddr data type (Brendan Jurd) + Add bitwise and, or, and not + operators for the macaddr data type (Brendan Jurd) Allow xpath() to + linkend="functions-xml-processing">xpath() to return a single-element XML array when supplied a scalar value (Florian Pflug) Previously, it returned an empty array. This change will also - cause xpath_exists() to return true, not false, + cause xpath_exists() to return true, not false, for such expressions. @@ -10805,9 +10805,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow non-superusers to use pg_cancel_backend() + linkend="functions-admin-signal">pg_cancel_backend() and pg_terminate_backend() + linkend="functions-admin-signal">pg_terminate_backend() on other sessions belonging to the same user (Magnus Hagander, Josh Kupershmidt, Dan Farina) @@ -10827,7 +10827,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This allows multiple transactions to share identical views of the database state. Snapshots are exported via pg_export_snapshot() + linkend="functions-snapshot-synchronization">pg_export_snapshot() and imported via SET TRANSACTION SNAPSHOT. Only snapshots from currently-running transactions can be imported. @@ -10838,7 +10838,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Support COLLATION - FOR on expressions (Peter Eisentraut) + FOR on expressions (Peter Eisentraut) @@ -10849,23 +10849,23 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add pg_opfamily_is_visible() + linkend="functions-info-schema-table">pg_opfamily_is_visible() (Josh Kupershmidt) - Add a numeric variant of pg_size_pretty() - for use with pg_xlog_location_diff() (Fujii Masao) + Add a numeric variant of pg_size_pretty() + for use with pg_xlog_location_diff() (Fujii Masao) Add a pg_trigger_depth() + linkend="functions-info-session-table">pg_trigger_depth() function (Kevin Grittner) @@ -10877,8 +10877,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Allow string_agg() - to process bytea values (Pavel Stehule) + linkend="functions-aggregate-table">string_agg() + to process bytea values (Pavel Stehule) @@ -10889,7 +10889,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - For example, ^(\w+)( \1)+$. Previous releases did not + For example, ^(\w+)( \1)+$. Previous releases did not check that the back-reference actually matched the first occurrence. 
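A sketch of the corrected back-reference behavior, using the pattern from the item above:

    SELECT 'hello hello' ~ '^(\w+)( \1)+$';   -- true
    SELECT 'hello world' ~ '^(\w+)( \1)+$';   -- false: \1 must repeat the captured word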
@@ -10906,22 +10906,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add information schema views - role_udt_grants, udt_privileges, - and user_defined_types (Peter Eisentraut) + role_udt_grants, udt_privileges, + and user_defined_types (Peter Eisentraut) Add composite-type attributes to the - information schema element_types view + information schema element_types view (Peter Eisentraut) - Implement interval_type columns in the information + Implement interval_type columns in the information schema (Peter Eisentraut) @@ -10933,23 +10933,23 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Implement collation-related columns in the information schema - attributes, columns, - domains, and element_types + attributes, columns, + domains, and element_types views (Peter Eisentraut) - Implement the with_hierarchy column in the - information schema table_privileges view (Peter + Implement the with_hierarchy column in the + information schema table_privileges view (Peter Eisentraut) - Add display of sequence USAGE privileges to information + Add display of sequence USAGE privileges to information schema (Peter Eisentraut) @@ -10980,7 +10980,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow the PL/pgSQL OPEN cursor command to supply + Allow the PL/pgSQL OPEN cursor command to supply parameters by name (Yeb Havinga) @@ -11002,7 +11002,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve performance and memory consumption for long chains of - ELSIF clauses (Tom Lane) + ELSIF clauses (Tom Lane) @@ -11083,31 +11083,31 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add initdb - options and (Peter Eisentraut) - This allows separate control of local and - host pg_hba.conf authentication - settings. still controls both. - Add - Add the @@ -11115,15 +11115,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Give command-line tools the ability to specify the name of the - database to connect to, and fall back to template1 - if a postgres database connection fails (Robert Haas) + database to connect to, and fall back to template1 + if a postgres database connection fails (Robert Haas) - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> @@ -11134,7 +11134,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This adds the auto option to the \x + This adds the auto option to the \x command, which switches to the expanded mode when the normal output would be wider than the screen. @@ -11147,32 +11147,32 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - This is done with a new command \ir. + This is done with a new command \ir. Add support for non-ASCII characters in - psql variable names (Tom Lane) + psql variable names (Tom Lane) - Add support for major-version-specific .psqlrc files + Add support for major-version-specific .psqlrc files (Bruce Momjian) - psql already supported minor-version-specific - .psqlrc files. + psql already supported minor-version-specific + .psqlrc files. 
- Provide environment variable overrides for psql + Provide environment variable overrides for psql history and startup file locations (Andrew Dunstan) @@ -11184,15 +11184,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add a \setenv command to modify + Add a \setenv command to modify the environment variables passed to child processes (Andrew Dunstan) - Name psql's temporary editor files with a - .sql extension (Peter Eisentraut) + Name psql's temporary editor files with a + .sql extension (Peter Eisentraut) @@ -11202,19 +11202,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow psql to use zero-byte field and record + Allow psql to use zero-byte field and record separators (Peter Eisentraut) Various shell tools use zero-byte (NUL) separators, - e.g. find. + e.g. find. - Make the \timing option report times for + Make the \timing option report times for failed queries (Magnus Hagander) @@ -11225,13 +11225,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Unify and tighten psql's treatment of \copy - and SQL COPY (Noah Misch) + Unify and tighten psql's treatment of \copy + and SQL COPY (Noah Misch) This fix makes failure behavior more predictable and honors - \set ON_ERROR_ROLLBACK. + \set ON_ERROR_ROLLBACK. @@ -11245,21 +11245,21 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Make \d on a sequence show the + Make \d on a sequence show the table/column name owning it (Magnus Hagander) - Show statistics target for columns in \d+ (Magnus + Show statistics target for columns in \d+ (Magnus Hagander) - Show role password expiration dates in \du + Show role password expiration dates in \du (Fabrízio de Royes Mello) @@ -11271,8 +11271,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - These are included in the output of \dC+, - \dc+, \dD+, and \dL respectively. + These are included in the output of \dC+, + \dc+, \dD+, and \dL respectively. @@ -11283,15 +11283,15 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - These are included in the output of \des+, - \det+, and \dew+ for foreign servers, foreign + These are included in the output of \des+, + \det+, and \dew+ for foreign servers, foreign tables, and foreign data wrappers respectively. - Change \dd to display comments only for object types + Change \dd to display comments only for object types without their own backslash command (Josh Kupershmidt) @@ -11307,9 +11307,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - In psql tab completion, complete SQL + In psql tab completion, complete SQL keywords in either upper or lower case according to the new COMP_KEYWORD_CASE + linkend="APP-PSQL-variables">COMP_KEYWORD_CASE setting (Peter Eisentraut) @@ -11348,14 +11348,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Add an @@ -11366,13 +11366,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add a - Valid values are pre-data, data, - and post-data. The option can be + Valid values are pre-data, data, + and post-data. The option can be given more than once to select two or more sections. 
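[Illustration, not part of the patch: for example (database and file names hypothetical), schema and data can be dumped in separate passes, repeating the option to combine sections:

    pg_dump --section=pre-data --section=post-data -f schema.sql mydb
    pg_dump --section=data -f data.sql mydb]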
@@ -11380,7 +11380,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Make pg_dumpall dump all + linkend="APP-PG-DUMPALL">pg_dumpall dump all roles first, then all configuration settings on roles (Phil Sorber) @@ -11392,8 +11392,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow pg_dumpall to avoid errors if the - postgres database is missing in the new cluster + Allow pg_dumpall to avoid errors if the + postgres database is missing in the new cluster (Robert Haas) @@ -11418,13 +11418,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Tighten rules for when extension configuration tables are dumped - by pg_dump (Tom Lane) + by pg_dump (Tom Lane) - Make pg_dump emit more useful dependency + Make pg_dump emit more useful dependency information (Tom Lane) @@ -11438,7 +11438,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Improve pg_dump's performance when dumping many + Improve pg_dump's performance when dumping many database objects (Tom Lane) @@ -11450,19 +11450,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="libpq"><application>libpq</></link> + <link linkend="libpq"><application>libpq</application></link> - Allow libpq connection strings to have the format of a + Allow libpq connection strings to have the format of a URI (Alexander Shulgin) - The syntax begins with postgres://. This can allow + The syntax begins with postgres://. This can allow applications to avoid implementing their own parser for URIs representing database connections. @@ -11489,30 +11489,30 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Previously, libpq always collected the entire query + Previously, libpq always collected the entire query result in memory before passing it back to the application. 
- Add const qualifiers to the declarations of the functions - PQconnectdbParams, PQconnectStartParams, - and PQpingParams (Lionel Elie Mamane) + Add const qualifiers to the declarations of the functions + PQconnectdbParams, PQconnectStartParams, + and PQpingParams (Lionel Elie Mamane) - Allow the .pgpass file to include escaped characters + Allow the .pgpass file to include escaped characters in the password field (Robert Haas) - Make library functions use abort() instead of - exit() when it is necessary to terminate the process + Make library functions use abort() instead of + exit() when it is necessary to terminate the process (Peter Eisentraut) @@ -11557,7 +11557,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Install plpgsql.h into include/server during installation + Install plpgsql.h into include/server during installation (Heikki Linnakangas) @@ -11583,14 +11583,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Improve the concurrent transaction regression tests - (isolationtester) (Noah Misch) + (isolationtester) (Noah Misch) - Modify thread_test to create its test files in - the current directory, rather than /tmp (Bruce Momjian) + Modify thread_test to create its test files in + the current directory, rather than /tmp (Bruce Momjian) @@ -11639,7 +11639,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add a pg_upgrade test suite (Peter Eisentraut) + Add a pg_upgrade test suite (Peter Eisentraut) @@ -11659,14 +11659,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add options to git_changelog for use in major + Add options to git_changelog for use in major release note creation (Bruce Momjian) - Support Linux's /proc/self/oom_score_adj API (Tom Lane) + Support Linux's /proc/self/oom_score_adj API (Tom Lane) @@ -11688,13 +11688,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 This improvement does not apply to - dblink_send_query()/dblink_get_result(). + dblink_send_query()/dblink_get_result(). - Support force_not_null option in force_not_null option in file_fdw (Shigeru Hanada) @@ -11702,7 +11702,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Implement dry-run mode for pg_archivecleanup + linkend="pgarchivecleanup">pg_archivecleanup (Gabriele Bartolini) @@ -11714,29 +11714,29 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add new pgbench switches - , , and + (Robert Haas) Change pg_test_fsync to test + linkend="pgtestfsync">pg_test_fsync to test for a fixed amount of time, rather than a fixed number of cycles (Bruce Momjian) - The /cycles option was removed, and + /seconds added. Add a pg_test_timing + linkend="pgtesttiming">pg_test_timing utility to measure clock monotonicity and timing overhead (Ants Aasma, Greg Smith) @@ -11753,19 +11753,19 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="pgupgrade"><application>pg_upgrade</></link> + <link linkend="pgupgrade"><application>pg_upgrade</application></link> - Adjust pg_upgrade environment variables (Bruce + Adjust pg_upgrade environment variables (Bruce Momjian) Rename data, bin, and port environment - variables to begin with PG, and support + variables to begin with PG, and support PGPORTOLD/PGPORTNEW, to replace PGPORT. 
@@ -11773,22 +11773,22 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Overhaul pg_upgrade logging and failure reporting + Overhaul pg_upgrade logging and failure reporting (Bruce Momjian) Create four append-only log files, and delete them on success. - Add - Make pg_upgrade create a script to incrementally + Make pg_upgrade create a script to incrementally generate more accurate optimizer statistics (Bruce Momjian) @@ -11800,14 +11800,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow pg_upgrade to upgrade an old cluster that - does not have a postgres database (Bruce Momjian) + Allow pg_upgrade to upgrade an old cluster that + does not have a postgres database (Bruce Momjian) - Allow pg_upgrade to handle cases where some + Allow pg_upgrade to handle cases where some old or new databases are missing, as long as they are empty (Bruce Momjian) @@ -11815,14 +11815,14 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow pg_upgrade to handle configuration-only + Allow pg_upgrade to handle configuration-only directory installations (Bruce Momjian) - In pg_upgrade, add / options to pass parameters to the servers (Bruce Momjian) @@ -11833,7 +11833,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Change pg_upgrade to use port 50432 by default + Change pg_upgrade to use port 50432 by default (Bruce Momjian) @@ -11844,7 +11844,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Reduce cluster locking in pg_upgrade (Bruce + Reduce cluster locking in pg_upgrade (Bruce Momjian) @@ -11859,13 +11859,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="pgstatstatements"><application>pg_stat_statements</></link> + <link linkend="pgstatstatements"><application>pg_stat_statements</application></link> - Allow pg_stat_statements to aggregate similar + Allow pg_stat_statements to aggregate similar queries via SQL text normalization (Peter Geoghegan, Tom Lane) @@ -11878,13 +11878,13 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add dirtied and written block counts and read/write times to - pg_stat_statements (Robert Haas, Ants Aasma) + pg_stat_statements (Robert Haas, Ants Aasma) - Prevent pg_stat_statements from double-counting + Prevent pg_stat_statements from double-counting PREPARE and EXECUTE commands (Tom Lane) @@ -11900,7 +11900,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Support SECURITY LABEL on global objects (KaiGai + Support SECURITY LABEL on global objects (KaiGai Kohei, Robert Haas) @@ -11925,7 +11925,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add sepgsql_setcon() and related functions to control + Add sepgsql_setcon() and related functions to control the sepgsql security domain (KaiGai Kohei) @@ -11954,7 +11954,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Use gmake STYLE=website draft. + Use gmake STYLE=website draft. 
@@ -11967,7 +11967,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Document that user/database names are preserved with double-quoting - by command-line tools like vacuumdb (Bruce + by command-line tools like vacuumdb (Bruce Momjian) @@ -11981,12 +11981,12 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Deprecate use of GLOBAL and LOCAL in - CREATE TEMP TABLE (Noah Misch) + Deprecate use of GLOBAL and LOCAL in + CREATE TEMP TABLE (Noah Misch) - PostgreSQL has long treated these keyword as no-ops, + PostgreSQL has long treated these keyword as no-ops, and continues to do so; but in future they might mean what the SQL standard says they mean, so applications should avoid using them. diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml index 91fbb34399..dada255057 100644 --- a/doc/src/sgml/release-9.3.sgml +++ b/doc/src/sgml/release-9.3.sgml @@ -37,20 +37,20 @@ Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -89,21 +89,21 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -120,7 +120,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. Previously, @@ -132,7 +132,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Fix crash in pg_restore when using parallel mode and + Fix crash in pg_restore when using parallel mode and using a list file to select a subset of items to restore (Fabrízio de Royes Mello) @@ -140,13 +140,13 @@ CREATE OR REPLACE VIEW table_privileges AS - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -158,12 +158,12 @@ CREATE OR REPLACE VIEW table_privileges AS This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. 
+ assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. @@ -213,7 +213,7 @@ CREATE OR REPLACE VIEW table_privileges AS Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -221,11 +221,11 @@ CREATE OR REPLACE VIEW table_privileges AS The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -240,15 +240,15 @@ CREATE OR REPLACE VIEW table_privileges AS Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) - In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -279,15 +279,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. - In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -301,7 +301,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -315,16 +315,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. 
- However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -424,28 +424,28 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -453,7 +453,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Allow window functions to be used in sub-SELECTs that + Allow window functions to be used in sub-SELECTs that are within the arguments of an aggregate function (Tom Lane) @@ -461,56 +461,56 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... SET does (Peter Eisentraut) Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. - Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -521,20 +521,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. 
- In libpq, reset GSS/SASL and SSPI authentication + In libpq, reset GSS/SASL and SSPI authentication state properly after a failed connection attempt (Michael Paquier) @@ -547,9 +547,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -560,8 +560,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump and pg_restore to - emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) + Fix pg_dump and pg_restore to + emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) @@ -572,7 +572,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump with the option to drop event triggers as expected (Tom Lane) @@ -585,14 +585,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -603,14 +603,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
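[Illustration, not part of the patch: the call in question can be exercised directly; a sketch with a hypothetical view name:

    -- every view stores its definition as an ON SELECT rule named _RETURN
    SELECT pg_get_ruledef(r.oid, true)
    FROM pg_rewrite r
    WHERE r.ev_class = 'myview'::regclass
      AND r.rulename = '_RETURN';]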
@@ -618,13 +618,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix dumping of outer joins with empty constraints, such as the result - of a NATURAL LEFT JOIN with no common columns (Tom Lane) + of a NATURAL LEFT JOIN with no common columns (Tom Lane) - Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -632,7 +632,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -644,8 +644,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -657,9 +657,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In postgres_fdw, re-establish connections to remote - servers after ALTER SERVER or ALTER USER - MAPPING commands (Kyotaro Horiguchi) + In postgres_fdw, re-establish connections to remote + servers after ALTER SERVER or ALTER USER + MAPPING commands (Kyotaro Horiguchi) @@ -670,7 +670,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In postgres_fdw, allow cancellation of remote + In postgres_fdw, allow cancellation of remote transaction control commands (Robert Haas, Rafia Sabih) @@ -682,7 +682,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -702,27 +702,27 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) @@ -772,18 +772,18 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. 
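[Illustration, not part of the patch: after updating, the effective visibility rules can be checked with a query such as this sketch; umoptions comes back null for mappings the current user is not entitled to see:

    SELECT srvname, usename, umoptions FROM pg_user_mappings;]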
@@ -807,7 +807,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. To fix, @@ -821,17 +821,17 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Restore libpq's recognition of - the PGREQUIRESSL environment variable (Daniel Gustafsson) + Restore libpq's recognition of + the PGREQUIRESSL environment variable (Daniel Gustafsson) Processing of this environment variable was unintentionally dropped - in PostgreSQL 9.3, but its documentation remained. + in PostgreSQL 9.3, but its documentation remained. This creates a security hazard, since users might be relying on the environment variable to force SSL-encrypted connections, but that would no longer be guaranteed. Restore handling of the variable, - but give it lower priority than PGSSLMODE, to avoid + but give it lower priority than PGSSLMODE, to avoid breaking configurations that work correctly with post-9.3 code. (CVE-2017-7485) @@ -839,7 +839,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -852,7 +852,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -860,7 +860,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. @@ -875,19 +875,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -895,27 +895,27 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... 
VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. @@ -929,33 +929,33 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. @@ -973,21 +973,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -1002,20 +1002,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -1027,26 +1027,26 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). 
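[Illustration, not part of the patch: a minimal invocation of the function in question, with a hypothetical file name; it requires superuser and writes relative to the data directory:

    CREATE EXTENSION IF NOT EXISTS adminpack;
    SELECT pg_catalog.pg_file_write('pg_file_write_test.txt', 'hello', false);
    -- returns the number of bytes written; with this fix, I/O failures,
    -- including ones reported at fclose() time, raise an error]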
- In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) - Fix contrib/pg_trgm's extraction of trigrams from regular + Fix contrib/pg_trgm's extraction of trigrams from regular expressions (Tom Lane) @@ -1059,7 +1059,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In contrib/postgres_fdw, + In contrib/postgres_fdw, transmit query cancellation requests to the remote server (Michael Paquier, Etsuro Fujita) @@ -1101,7 +1101,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -1115,9 +1115,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -1130,15 +1130,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1192,15 +1192,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. 
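[Illustration, not part of the patch: a hedged recipe for that rebuild, with hypothetical index and table names. REINDEX blocks writes to the table, so recreating concurrently and swapping is the usual low-impact alternative:

    -- simple, but blocks writes while it runs:
    REINDEX INDEX idx_orders_customer;

    -- or rebuild without blocking writers, then swap:
    CREATE INDEX CONCURRENTLY idx_orders_customer_new ON orders (customer_id);
    DROP INDEX idx_orders_customer;
    ALTER INDEX idx_orders_customer_new RENAME TO idx_orders_customer;]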
@@ -1209,13 +1209,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1269,7 +1269,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -1287,15 +1287,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... INHERIT (Amit Langote) @@ -1309,7 +1309,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Report correct object identity during ALTER TEXT SEARCH - CONFIGURATION (Artur Zakirov) + CONFIGURATION (Artur Zakirov) @@ -1339,13 +1339,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Prevent multicolumn expansion of foo.* in - an UPDATE source expression (Tom Lane) + Prevent multicolumn expansion of foo.* in + an UPDATE source expression (Tom Lane) This led to UPDATE target count mismatch --- internal - error. Now the syntax is understood as a whole-row variable, + error. Now the syntax is understood as a whole-row variable, as it would be in other contexts. @@ -1353,12 +1353,12 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -1373,15 +1373,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). 
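[Illustration, not part of the patch: a sketch of the syntax involved, assuming a UTF8 server encoding:

    SELECT U&'\D83D\DE00';   -- a complete surrogate pair, encoding U+1F600
    -- a dangling leading surrogate such as U&'\D83D' is now rejected
    -- even when it is the last character of the literal or identifier]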
Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -1392,33 +1392,33 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -1430,8 +1430,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -1443,15 +1443,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid discarding interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. @@ -1464,14 +1464,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -1490,21 +1490,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. 
- Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -1516,23 +1516,23 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -1543,22 +1543,22 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) - Teach contrib/dblink to ignore irrelevant server options - when it uses a contrib/postgres_fdw foreign server as + Teach contrib/dblink to ignore irrelevant server options + when it uses a contrib/postgres_fdw foreign server as the source of connection options (Corey Huinker) Previously, if the foreign server object had options that were not - also libpq connection options, an error occurred. + also libpq connection options, an error occurred. @@ -1584,7 +1584,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -1648,7 +1648,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; crash recovery, or to be written incorrectly on a standby server. Bogus entries in a free space map could lead to attempts to access pages that have been truncated away from the relation itself, typically - producing errors like could not read block XXX: + producing errors like could not read block XXX: read only 0 of 8192 bytes. Checksum failures in the visibility map are also possible, if checksumming is enabled. @@ -1656,19 +1656,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Procedures for determining whether there is a problem and repairing it if so are discussed at - . + . - Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that + Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that have been updated by a subsequently-aborted transaction (Álvaro Herrera) - In 9.5 and later, the SELECT would sometimes fail to + In 9.5 and later, the SELECT would sometimes fail to return such tuples at all. A failure has not been proven to occur in earlier releases, but might be possible with concurrent updates. @@ -1702,71 +1702,71 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. 
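[Illustration, not part of the patch: the affected tags appear when I/O timings are captured; a sketch, assuming privileges sufficient to enable the timing parameter:

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS, FORMAT XML) SELECT * FROM pg_class;
    -- I/O read/write times are now emitted as well-formed elements
    -- such as <I-O-Read-Time>, rather than the invalid <I/O-Read-Time>]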
Suppress printing of zeroes for unmeasured times - in EXPLAIN (Maksim Milyutin) + in EXPLAIN (Maksim Milyutin) Certain option combinations resulted in printing zero values for times that actually aren't ever measured in that combination. Our general - policy in EXPLAIN is not to print such fields at all, so + policy in EXPLAIN is not to print such fields at all, so do that consistently in all cases. - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. + INHERIT child constraint with an inherited constraint. Remove artificial restrictions on the values accepted - by numeric_in() and numeric_recv() + by numeric_in() and numeric_recv() (Tom Lane) We allow numeric values up to the limit of the storage format (more - than 1e100000), so it seems fairly pointless - that numeric_in() rejected scientific-notation exponents - above 1000. Likewise, it was silly for numeric_recv() to + than 1e100000), so it seems fairly pointless + that numeric_in() rejected scientific-notation exponents + above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. @@ -1788,7 +1788,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Disallow starting a standalone backend with standby_mode + Disallow starting a standalone backend with standby_mode turned on (Michael Paquier) @@ -1802,7 +1802,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -1813,30 +1813,30 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. + during PQreset(), but there might be related cases. - Make ecpg's and options work consistently with our other executables (Haribabu Kommi) - In pg_dump, never dump range constructor functions + In pg_dump, never dump range constructor functions (Tom Lane) - This oversight led to pg_upgrade failures with + This oversight led to pg_upgrade failures with extensions containing range types, due to duplicate creation of the constructor functions. 
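[Illustration, not part of the patch: the constructor functions at issue are the ones created implicitly with a range type; a sketch using the hypothetical type name floatrange:

    CREATE TYPE floatrange AS RANGE (subtype = float8);
    -- this implicitly creates floatrange(float8, float8) and
    -- floatrange(float8, float8, text); pg_dump now always lets
    -- CREATE TYPE ... AS RANGE recreate them instead of dumping them]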
@@ -1844,8 +1844,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In pg_xlogdump, retry opening new WAL segments when - using option (Magnus Hagander) @@ -1856,7 +1856,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_xlogdump to cope with a WAL file that begins + Fix pg_xlogdump to cope with a WAL file that begins with a continuation record spanning more than one page (Pavan Deolasee) @@ -1864,8 +1864,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix contrib/intarray/bench/bench.pl to print the results - of the EXPLAIN it does when given the option (Daniel Gustafsson) @@ -1886,17 +1886,17 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from - their time zone database, as they did in tzdata + their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused - the pg_timezone_abbrevs view to fail altogether. + the pg_timezone_abbrevs view to fail altogether. - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -1909,15 +1909,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. - In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. @@ -1963,17 +1963,17 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible mis-evaluation of - nested CASE-WHEN expressions (Heikki + nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane) - A CASE expression appearing within the test value - subexpression of another CASE could become confused about + A CASE expression appearing within the test value + subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by - a CASE expression could result in passing the wrong test - value to functions called within a CASE expression in the + a CASE expression could result in passing the wrong test + value to functions called within a CASE expression in the SQL function's body. 
If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423) @@ -1987,7 +1987,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Numerous places in vacuumdb and other client programs + Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. Also, ensure that when a conninfo string is used as a database name @@ -1996,22 +1996,22 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix handling of paired double quotes - in psql's \connect - and \password commands to match the documentation. + in psql's \connect + and \password commands to match the documentation. - Introduce a new - pg_dumpall now refuses to deal with database and role + pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet. @@ -2021,40 +2021,40 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser - executes pg_dumpall or other routine maintenance + executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424) - Fix corner-case misbehaviors for IS NULL/IS NOT - NULL applied to nested composite values (Andrew Gierth, Tom Lane) + Fix corner-case misbehaviors for IS NULL/IS NOT + NULL applied to nested composite values (Andrew Gierth, Tom Lane) - The SQL standard specifies that IS NULL should return + The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS - NULL yields TRUE), but this is not meant to apply recursively - (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). + NULL yields TRUE), but this is not meant to apply recursively + (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), - and contrib/postgres_fdw could produce remote queries + and contrib/postgres_fdw could produce remote queries that misbehaved similarly. - Make the inet and cidr data types properly reject + Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane) - Prevent crash in close_ps() - (the point ## lseg operator) + Prevent crash in close_ps() + (the point ## lseg operator) for NaN input coordinates (Tom Lane) @@ -2065,19 +2065,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid possible crash in pg_get_expr() when inconsistent + Avoid possible crash in pg_get_expr() when inconsistent values are passed to it (Michael Paquier, Thomas Munro) - Fix several one-byte buffer over-reads in to_number() + Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut) - In several cases the to_number() function would read one + In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory. 
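[Illustration, not part of the patch: typical usage of the function, for reference:

    SELECT to_number('12,454.8-', '99G999D9S');   -- -12454.8
    -- earlier builds could read one byte past the end of such an input]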
@@ -2087,8 +2087,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Do not run the planner on the query contained in CREATE - MATERIALIZED VIEW or CREATE TABLE AS - when WITH NO DATA is specified (Michael Paquier, + MATERIALIZED VIEW or CREATE TABLE AS + when WITH NO DATA is specified (Michael Paquier, Tom Lane) @@ -2102,7 +2102,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Avoid unsafe intermediate state during expensive paths - through heap_update() (Masahiko Sawada, Andres Freund) + through heap_update() (Masahiko Sawada, Andres Freund) @@ -2128,15 +2128,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid unnecessary could not serialize access errors when - acquiring FOR KEY SHARE row locks in serializable mode + Avoid unnecessary could not serialize access errors when + acquiring FOR KEY SHARE row locks in serializable mode (Álvaro Herrera) - Avoid crash in postgres -C when the specified variable + Avoid crash in postgres -C when the specified variable has a null string value (Michael Paquier) @@ -2165,12 +2165,12 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid consuming a transaction ID during VACUUM + Avoid consuming a transaction ID during VACUUM (Alexander Korotkov) - Some cases in VACUUM unnecessarily caused an XID to be + Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing. @@ -2179,12 +2179,12 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Avoid canceling hot-standby queries during VACUUM FREEZE + Avoid canceling hot-standby queries during VACUUM FREEZE (Simon Riggs, Álvaro Herrera) - VACUUM FREEZE on an otherwise-idle master server could + VACUUM FREEZE on an otherwise-idle master server could result in unnecessary cancellations of queries on its standby servers. @@ -2199,15 +2199,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; The usual symptom of this bug is errors - like MultiXactId NNN has not been created + like MultiXactId NNN has not been created yet -- apparent wraparound. - When a manual ANALYZE specifies a column list, don't - reset the table's changes_since_analyze counter + When a manual ANALYZE specifies a column list, don't + reset the table's changes_since_analyze counter (Tom Lane) @@ -2219,7 +2219,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix ANALYZE's overestimation of n_distinct + Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane) @@ -2254,8 +2254,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix contrib/btree_gin to handle the smallest - possible bigint value correctly (Peter Eisentraut) + Fix contrib/btree_gin to handle the smallest + possible bigint value correctly (Peter Eisentraut) @@ -2268,53 +2268,53 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure - that PQserverVersion() returns the correct value for + that PQserverVersion() returns the correct value for such cases. 
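The same integer encoding is visible from SQL through the standard server_version_num setting, which client code can use to branch on server version; the specific values below are just examples.

    -- major * 10000 + minor: 9.6.5 reports 90605, and under
    -- two-part numbering 10.1 reports 100001.
    SHOW server_version_num;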
- Fix ecpg's code for unsigned long long + Fix ecpg's code for unsigned long long array elements (Michael Meskes) - In pg_dump with both - Improve handling of SIGTERM/control-C in - parallel pg_dump and pg_restore (Tom + Improve handling of SIGTERM/control-C in + parallel pg_dump and pg_restore (Tom Lane) Make sure that the worker processes will exit promptly, and also arrange to send query-cancel requests to the connected backends, in case they - are doing something long-running such as a CREATE INDEX. + are doing something long-running such as a CREATE INDEX. - Fix error reporting in parallel pg_dump - and pg_restore (Tom Lane) + Fix error reporting in parallel pg_dump + and pg_restore (Tom Lane) - Previously, errors reported by pg_dump - or pg_restore worker processes might never make it to + Previously, errors reported by pg_dump + or pg_restore worker processes might never make it to the user's console, because the messages went through the master process, and there were various deadlock scenarios that would prevent the master process from passing on the messages. Instead, just print - everything to stderr. In some cases this will result in + everything to stderr. In some cases this will result in duplicate messages (for instance, if all the workers report a server shutdown), but that seems better than no message. @@ -2322,8 +2322,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Ensure that parallel pg_dump - or pg_restore on Windows will shut down properly + Ensure that parallel pg_dump + or pg_restore on Windows will shut down properly after an error (Kyotaro Horiguchi) @@ -2335,7 +2335,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Make pg_dump behave better when built without zlib + Make pg_dump behave better when built without zlib support (Kyotaro Horiguchi) @@ -2347,7 +2347,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Make pg_basebackup accept -Z 0 as + Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao) @@ -2368,13 +2368,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Be more predictable about reporting statement timeout - versus lock timeout (Tom Lane) + Be more predictable about reporting statement timeout + versus lock timeout (Tom Lane) On heavily loaded machines, the regression tests sometimes failed due - to reporting lock timeout even though the statement timeout + to reporting lock timeout even though the statement timeout should have occurred first. @@ -2394,7 +2394,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Update our copy of the timezone code to match - IANA's tzcode release 2016c (Tom Lane) + IANA's tzcode release 2016c (Tom Lane) @@ -2406,7 +2406,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Update time zone data files to tzdata release 2016f + Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco. @@ -2462,7 +2462,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. 
Failures have been reported specifically when a client application - uses SSL connections in libpq concurrently with + uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection. @@ -2471,7 +2471,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix failed to build any N-way joins + Fix failed to build any N-way joins planner error with a full join enclosed in the right-hand side of a left join (Tom Lane) @@ -2485,10 +2485,10 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Given a three-or-more-way equivalence class of variables, such - as X.X = Y.Y = Z.Z, it was possible for the planner to omit + as X.X = Y.Y = Z.Z, it was possible for the planner to omit some of the tests needed to enforce that all the variables are actually equal, leading to join rows being output that didn't satisfy - the WHERE clauses. For various reasons, erroneous plans + the WHERE clauses. For various reasons, erroneous plans were seldom selected in practice, so that this bug has gone undetected for a long time. @@ -2496,8 +2496,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix possible misbehavior of TH, th, - and Y,YYY format codes in to_timestamp() + Fix possible misbehavior of TH, th, + and Y,YYY format codes in to_timestamp() (Tom Lane) @@ -2509,28 +2509,28 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix dumping of rules and views in which the array - argument of a value operator - ANY (array) construct is a sub-SELECT + Fix dumping of rules and views in which the array + argument of a value operator + ANY (array) construct is a sub-SELECT (Tom Lane) - Make pg_regress use a startup timeout from the - PGCTLTIMEOUT environment variable, if that's set (Tom Lane) + Make pg_regress use a startup timeout from the + PGCTLTIMEOUT environment variable, if that's set (Tom Lane) This is for consistency with a behavior recently added - to pg_ctl; it eases automated testing on slow machines. + to pg_ctl; it eases automated testing on slow machines. - Fix pg_upgrade to correctly restore extension + Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane) @@ -2538,20 +2538,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no - immediate ill effects, but would cause later pg_dump + immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore. - Fix pg_upgrade to not fail when new-cluster TOAST rules + Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old (Tom Lane) - pg_upgrade had special-case code to handle the - situation where the new PostgreSQL version thinks that + pg_upgrade had special-case code to handle the + situation where the new PostgreSQL version thinks that a table should have a TOAST table while the old version did not. 
That code was broken, so remove it, and instead do nothing in such cases; there seems no reason to believe that we can't get along fine without @@ -2586,22 +2586,22 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Reduce the number of SysV semaphores used by a build configured with - (Tom Lane) - Rename internal function strtoi() - to strtoint() to avoid conflict with a NetBSD library + Rename internal function strtoi() + to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro) - Fix reporting of errors from bind() - and listen() system calls on Windows (Tom Lane) + Fix reporting of errors from bind() + and listen() system calls on Windows (Tom Lane) @@ -2614,19 +2614,19 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix putenv() to work properly with Visual Studio 2013 + Fix putenv() to work properly with Visual Studio 2013 (Michael Paquier) - Avoid possibly-unsafe use of Windows' FormatMessage() + Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich) - Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where + Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful. @@ -2634,9 +2634,9 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Update time zone data files to tzdata release 2016d + Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone - names Europe/Kirov and Asia/Tomsk to reflect + names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions. @@ -2683,56 +2683,56 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix incorrect handling of NULL index entries in - indexed ROW() comparisons (Tom Lane) + indexed ROW() comparisons (Tom Lane) An index search using a row comparison such as ROW(a, b) > - ROW('x', 'y') would stop upon reaching a NULL entry in - the b column, ignoring the fact that there might be - non-NULL b values associated with later values - of a. + ROW('x', 'y') would stop upon reaching a NULL entry in + the b column, ignoring the fact that there might be + non-NULL b values associated with later values + of a. Avoid unlikely data-loss scenarios due to renaming files without - adequate fsync() calls before and after (Michael Paquier, + adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund) - Correctly handle cases where pg_subtrans is close to XID + Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes) - Fix corner-case crash due to trying to free localeconv() + Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane) - Fix parsing of affix files for ispell dictionaries + Fix parsing of affix files for ispell dictionaries (Tom Lane) The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for - example I in Turkish UTF8 locales. + example I in Turkish UTF8 locales. 
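For context, an ispell dictionary of the kind whose affix files are parsed here is declared roughly as follows; the file names are illustrative, not shipped defaults.

    -- 'en_us' stands in for dictionary/affix files installed under the
    -- server's tsearch_data directory; substitute files you actually have.
    CREATE TEXT SEARCH DICTIONARY english_ispell (
        TEMPLATE = ispell,
        DictFile = en_us,
        AffFile = en_us,
        StopWords = english
    );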
- Avoid use of sscanf() to parse ispell + Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov)
@@ -2758,27 +2758,27 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix psql's tab completion logic to handle multibyte + Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas) - Fix psql's tab completion for - SECURITY LABEL (Tom Lane) + Fix psql's tab completion for + SECURITY LABEL (Tom Lane) - Pressing TAB after SECURITY LABEL might cause a crash + Pressing TAB after SECURITY LABEL might cause a crash or offer inappropriate keywords. - Make pg_ctl accept a wait timeout from the - PGCTLTIMEOUT environment variable, if none is specified on + Make pg_ctl accept a wait timeout from the + PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch)
@@ -2792,26 +2792,26 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix incorrect test for Windows service status - in pg_ctl (Manuel Mathar) + in pg_ctl (Manuel Mathar) The previous set of minor releases attempted to - fix pg_ctl to properly determine whether to send log + fix pg_ctl to properly determine whether to send log messages to Windows' Event Log, but got the test backwards. - Fix pgbench to correctly handle the combination - of -C and -M prepared options (Tom Lane) + Fix pgbench to correctly handle the combination + of -C and -M prepared options (Tom Lane) - In pg_upgrade, skip creating a deletion script when + In pg_upgrade, skip creating a deletion script when the new data directory is inside the old data directory (Bruce Momjian)
@@ -2839,21 +2839,21 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix multiple mistakes in the statistics returned - by contrib/pgstattuple's pgstatindex() + by contrib/pgstattuple's pgstatindex() function (Tom Lane) - Remove dependency on psed in MSVC builds, since it's no + Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan) - Update time zone data files to tzdata release 2016c + Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia
- In SERIALIZABLE transaction isolation mode, serialization + In SERIALIZABLE transaction isolation mode, serialization anomalies could be missed due to race conditions during insertions (Kevin Grittner, Thomas Munro) @@ -2941,7 +2941,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix failure to emit appropriate WAL records when doing ALTER - TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, + TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, Andres Freund) @@ -2967,21 +2967,21 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix ALTER COLUMN TYPE to reconstruct inherited check + Fix ALTER COLUMN TYPE to reconstruct inherited check constraints properly (Tom Lane) - Fix REASSIGN OWNED to change ownership of composite types + Fix REASSIGN OWNED to change ownership of composite types properly (Álvaro Herrera) - Fix REASSIGN OWNED and ALTER OWNER to correctly + Fix REASSIGN OWNED and ALTER OWNER to correctly update granted-permissions lists when changing owners of data types, foreign data wrappers, or foreign servers (Bruce Momjian, Álvaro Herrera) @@ -2990,7 +2990,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix REASSIGN OWNED to ignore foreign user mappings, + Fix REASSIGN OWNED to ignore foreign user mappings, rather than fail (Álvaro Herrera) @@ -3004,13 +3004,13 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix planner's handling of LATERAL references (Tom + Fix planner's handling of LATERAL references (Tom Lane) This fixes some corner cases that led to failed to build any - N-way joins or could not devise a query plan planner + N-way joins or could not devise a query plan planner failures. @@ -3032,22 +3032,22 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Speed up generation of unique table aliases in EXPLAIN and + Speed up generation of unique table aliases in EXPLAIN and rule dumping, and ensure that generated aliases do not - exceed NAMEDATALEN (Tom Lane) + exceed NAMEDATALEN (Tom Lane) - Fix dumping of whole-row Vars in ROW() - and VALUES() lists (Tom Lane) + Fix dumping of whole-row Vars in ROW() + and VALUES() lists (Tom Lane) - Fix possible internal overflow in numeric division + Fix possible internal overflow in numeric division (Dean Rasheed) @@ -3099,7 +3099,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 This causes the code to emit regular expression is too - complex errors in some cases that previously used unreasonable + complex errors in some cases that previously used unreasonable amounts of time and memory. @@ -3112,14 +3112,14 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Make %h and %r escapes - in log_line_prefix work for messages emitted due - to log_connections (Tom Lane) + Make %h and %r escapes + in log_line_prefix work for messages emitted due + to log_connections (Tom Lane) - Previously, %h/%r started to work just after a - new session had emitted the connection received log message; + Previously, %h/%r started to work just after a + new session had emitted the connection received log message; now they work for that message too. @@ -3132,7 +3132,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 This oversight resulted in failure to recover from crashes - whenever logging_collector is turned on. + whenever logging_collector is turned on. 
@@ -3158,13 +3158,13 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - In psql, ensure that libreadline's idea + In psql, ensure that libreadline's idea of the screen size is updated when the terminal window size changes (Merlin Moncure) - Previously, libreadline did not notice if the window + Previously, libreadline did not notice if the window was resized during query output, leading to strange behavior during later input of multiline queries. @@ -3172,15 +3172,15 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix psql's \det command to interpret its - pattern argument the same way as other \d commands with + Fix psql's \det command to interpret its + pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart) - Avoid possible crash in psql's \c command + Avoid possible crash in psql's \c command when previous connection was via Unix socket and command specifies a new hostname and same username (Tom Lane) @@ -3188,21 +3188,21 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - In pg_ctl start -w, test child process status directly + In pg_ctl start -w, test child process status directly rather than relying on heuristics (Tom Lane, Michael Paquier) - Previously, pg_ctl relied on an assumption that the new - postmaster would always create postmaster.pid within five + Previously, pg_ctl relied on an assumption that the new + postmaster would always create postmaster.pid within five seconds. But that can fail on heavily-loaded systems, - causing pg_ctl to report incorrectly that the + causing pg_ctl to report incorrectly that the postmaster failed to start. Except on Windows, this change also means that a pg_ctl start - -w done immediately after another such command will now reliably + -w done immediately after another such command will now reliably fail, whereas previously it would report success if done within two seconds of the first command. @@ -3210,23 +3210,23 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - In pg_ctl start -w, don't attempt to use a wildcard listen + In pg_ctl start -w, don't attempt to use a wildcard listen address to connect to the postmaster (Kondo Yuta) - On Windows, pg_ctl would fail to detect postmaster - startup if listen_addresses is set to 0.0.0.0 - or ::, because it would try to use that value verbatim as + On Windows, pg_ctl would fail to detect postmaster + startup if listen_addresses is set to 0.0.0.0 + or ::, because it would try to use that value verbatim as the address to connect to, which doesn't work. Instead assume - that 127.0.0.1 or ::1, respectively, is the + that 127.0.0.1 or ::1, respectively, is the right thing to use. - In pg_ctl on Windows, check service status to decide + In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier) @@ -3234,18 +3234,18 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - In pg_dump and pg_basebackup, adopt + In pg_dump and pg_basebackup, adopt the GNU convention for handling tar-archive members exceeding 8GB (Tom Lane) - The POSIX standard for tar file format does not allow + The POSIX standard for tar file format does not allow archive member files to exceed 8GB, but most modern implementations - of tar support an extension that fixes that. Adopt - this extension so that pg_dump with no longer fails on tables with more than 8GB of data, and so - that pg_basebackup can handle files larger than 8GB. 
+ that pg_basebackup can handle files larger than 8GB. In addition, fix some portability issues that could cause failures for members between 4GB and 8GB on some platforms. Potentially these problems could cause unrecoverable data loss due to unreadable backup @@ -3255,51 +3255,51 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix assorted corner-case bugs in pg_dump's processing + Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane) - Make pg_dump mark a view's triggers as needing to be + Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during - parallel pg_restore (Tom Lane) + parallel pg_restore (Tom Lane) Ensure that relation option values are properly quoted - in pg_dump (Kouhei Sutou, Tom Lane) + in pg_dump (Kouhei Sutou, Tom Lane) A reloption value that isn't a simple identifier or number could lead to dump/reload failures due to syntax errors in CREATE statements - issued by pg_dump. This is not an issue with any - reloption currently supported by core PostgreSQL, but + issued by pg_dump. This is not an issue with any + reloption currently supported by core PostgreSQL, but extensions could allow reloptions that cause the problem. - Avoid repeated password prompts during parallel pg_dump + Avoid repeated password prompts during parallel pg_dump (Zeus Kronion) - Fix pg_upgrade's file-copying code to handle errors + Fix pg_upgrade's file-copying code to handle errors properly on Windows (Bruce Momjian) - Install guards in pgbench against corner-case overflow + Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier) @@ -3308,22 +3308,22 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix failure to localize messages emitted - by pg_receivexlog and pg_recvlogical + by pg_receivexlog and pg_recvlogical (Ioseph Kim) - Avoid dump/reload problems when using both plpython2 - and plpython3 (Tom Lane) + Avoid dump/reload problems when using both plpython2 + and plpython3 (Tom Lane) - In principle, both versions of PL/Python can be used in + In principle, both versions of PL/Python can be used in the same database, though not in the same session (because the two - versions of libpython cannot safely be used concurrently). - However, pg_restore and pg_upgrade both + versions of libpython cannot safely be used concurrently). + However, pg_restore and pg_upgrade both do things that can fall foul of the same-session restriction. Work around that by changing the timing of the check. @@ -3331,42 +3331,42 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix PL/Python regression tests to pass with Python 3.5 + Fix PL/Python regression tests to pass with Python 3.5 (Peter Eisentraut) - Fix premature clearing of libpq's input buffer when + Fix premature clearing of libpq's input buffer when socket EOF is seen (Tom Lane) - This mistake caused libpq to sometimes not report the + This mistake caused libpq to sometimes not report the backend's final error message before reporting server closed the - connection unexpectedly. + connection unexpectedly. 
- Prevent certain PL/Java parameters from being set by + Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch) - This change mitigates a PL/Java security bug - (CVE-2016-0766), which was fixed in PL/Java by marking + This change mitigates a PL/Java security bug + (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for - sites that update PostgreSQL more frequently - than PL/Java, make the core code aware of them also. + sites that update PostgreSQL more frequently + than PL/Java, make the core code aware of them also. - Improve libpq's handling of out-of-memory situations + Improve libpq's handling of out-of-memory situations (Michael Paquier, Amit Kapila, Heikki Linnakangas) @@ -3374,36 +3374,36 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix order of arguments - in ecpg-generated typedef statements + in ecpg-generated typedef statements (Michael Meskes) - Use %g not %f format - in ecpg's PGTYPESnumeric_from_double() + Use %g not %f format + in ecpg's PGTYPESnumeric_from_double() (Tom Lane) - Fix ecpg-supplied header files to not contain comments + Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes) - Such a comment is rejected by ecpg. It's not yet clear - whether ecpg itself should be changed. + Such a comment is rejected by ecpg. It's not yet clear + whether ecpg itself should be changed. - Fix hstore_to_json_loose()'s test for whether - an hstore value can be converted to a JSON number (Tom Lane) + Fix hstore_to_json_loose()'s test for whether + an hstore value can be converted to a JSON number (Tom Lane) @@ -3414,14 +3414,14 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Ensure that contrib/pgcrypto's crypt() + Ensure that contrib/pgcrypto's crypt() function can be interrupted by query cancel (Andreas Karlsson) - Accept flex versions later than 2.5.x + Accept flex versions later than 2.5.x (Tom Lane, Michael Paquier) @@ -3445,19 +3445,19 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Install our missing script where PGXS builds can find it + Install our missing script where PGXS builds can find it (Jim Nasby) This allows sane behavior in a PGXS build done on a machine where build - tools such as bison are missing. + tools such as bison are missing. - Ensure that dynloader.h is included in the installed + Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier) @@ -3465,11 +3465,11 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Add variant regression test expected-output file to match behavior of - current libxml2 (Tom Lane) + current libxml2 (Tom Lane) - The fix for libxml2's CVE-2015-7499 causes it not to + The fix for libxml2's CVE-2015-7499 causes it not to output error context reports in some cases where it used to do so. This seems to be a bug, but we'll probably have to live with it for some time, so work around it. @@ -3478,7 +3478,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Update time zone data files to tzdata release 2016a for + Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan. 
@@ -3524,13 +3524,13 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Guard against stack overflows in json parsing + Guard against stack overflows in json parsing (Oskari Saarenmaa) - If an application constructs PostgreSQL json - or jsonb values from arbitrary user input, the application's + If an application constructs PostgreSQL json + or jsonb values from arbitrary user input, the application's users can reliably crash the PostgreSQL server, causing momentary denial of service. (CVE-2015-5289) @@ -3538,8 +3538,8 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix contrib/pgcrypto to detect and report - too-short crypt() salts (Josh Kupershmidt) + Fix contrib/pgcrypto to detect and report + too-short crypt() salts (Josh Kupershmidt) @@ -3572,13 +3572,13 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix insertion of relations into the relation cache init file + Fix insertion of relations into the relation cache init file (Tom Lane) An oversight in a patch in the most recent minor releases - caused pg_trigger_tgrelid_tgname_index to be omitted + caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing @@ -3596,7 +3596,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Improve LISTEN startup time when there are many unread + Improve LISTEN startup time when there are many unread notifications (Matt Newell) @@ -3608,7 +3608,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - This was seen primarily when restoring pg_dump output + This was seen primarily when restoring pg_dump output for databases with many thousands of tables. @@ -3623,13 +3623,13 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value - of ssl_renegotiation_limit to zero (disabled). + of ssl_renegotiation_limit to zero (disabled). 
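On branches that still have the parameter (it is removed in 9.5 and later), the new default can also be adopted explicitly; this is just the setting named above, shown as a session-level command.

    -- 0 disables SSL renegotiation; putting ssl_renegotiation_limit = 0
    -- in postgresql.conf disables it server-wide.
    SET ssl_renegotiation_limit = 0;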
- Lower the minimum values of the *_freeze_max_age parameters + Lower the minimum values of the *_freeze_max_age parameters (Andres Freund) @@ -3641,7 +3641,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Limit the maximum value of wal_buffers to 2GB to avoid + Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus) @@ -3649,15 +3649,15 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Avoid logging complaints when a parameter that can only be set at - server start appears multiple times in postgresql.conf, - and fix counting of line numbers after an include_dir + server start appears multiple times in postgresql.conf, + and fix counting of line numbers after an include_dir directive (Tom Lane) - Fix rare internal overflow in multiplication of numeric values + Fix rare internal overflow in multiplication of numeric values (Dean Rasheed) @@ -3665,21 +3665,21 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Guard against hard-to-reach stack overflows involving record types, - range types, json, jsonb, tsquery, - ltxtquery and query_int (Noah Misch) + range types, json, jsonb, tsquery, + ltxtquery and query_int (Noah Misch) - Fix handling of DOW and DOY in datetime input + Fix handling of DOW and DOY in datetime input (Greg Stark) These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather - than invalid input syntax. + than invalid input syntax. @@ -3692,7 +3692,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Add recursion depth protections to regular expression, SIMILAR - TO, and LIKE matching (Tom Lane) + TO, and LIKE matching (Tom Lane) @@ -3744,22 +3744,22 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix unexpected out-of-memory situation during sort errors - when using tuplestores with small work_mem settings (Tom + Fix unexpected out-of-memory situation during sort errors + when using tuplestores with small work_mem settings (Tom Lane) - Fix very-low-probability stack overrun in qsort (Tom Lane) + Fix very-low-probability stack overrun in qsort (Tom Lane) - Fix invalid memory alloc request size failure in hash joins - with large work_mem settings (Tomas Vondra, Tom Lane) + Fix invalid memory alloc request size failure in hash joins + with large work_mem settings (Tomas Vondra, Tom Lane) @@ -3772,9 +3772,9 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as could not devise a query plan for the - given query, could not find pathkey item to - sort, plan should not reference subplan's variable, - or failed to assign all NestLoopParams to plan nodes. + given query, could not find pathkey item to + sort, plan should not reference subplan's variable, + or failed to assign all NestLoopParams to plan nodes. Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems. 
@@ -3782,7 +3782,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Improve planner's performance for UPDATE/DELETE + Improve planner's performance for UPDATE/DELETE on large inheritance sets (Tom Lane, Dean Rasheed) @@ -3803,12 +3803,12 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove - the postmaster.pid file (Tom Lane) + the postmaster.pid file (Tom Lane) This avoids race-condition failures if an external script attempts to - start a new postmaster as soon as pg_ctl stop returns. + start a new postmaster as soon as pg_ctl stop returns. @@ -3835,7 +3835,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Do not print a WARNING when an autovacuum worker is already + Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane) @@ -3872,7 +3872,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - VACUUM attempted to recycle such pages, but did so in a + VACUUM attempted to recycle such pages, but did so in a way that wasn't crash-safe. @@ -3880,44 +3880,44 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix off-by-one error that led to otherwise-harmless warnings - about apparent wraparound in subtrans/multixact truncation + about apparent wraparound in subtrans/multixact truncation (Thomas Munro) - Fix misreporting of CONTINUE and MOVE statement - types in PL/pgSQL's error context messages + Fix misreporting of CONTINUE and MOVE statement + types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane) - Fix PL/Perl to handle non-ASCII error + Fix PL/Perl to handle non-ASCII error message texts correctly (Alex Hunsaker) - Fix PL/Python crash when returning the string - representation of a record result (Tom Lane) + Fix PL/Python crash when returning the string + representation of a record result (Tom Lane) - Fix some places in PL/Tcl that neglected to check for - failure of malloc() calls (Michael Paquier, Álvaro + Fix some places in PL/Tcl that neglected to check for + failure of malloc() calls (Michael Paquier, Álvaro Herrera) - In contrib/isn, fix output of ISBN-13 numbers that begin + In contrib/isn, fix output of ISBN-13 numbers that begin with 979 (Fabien Coelho) @@ -3929,20 +3929,20 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Improve contrib/postgres_fdw's handling of + Improve contrib/postgres_fdw's handling of collation-related decisions (Tom Lane) The main user-visible effect is expected to be that comparisons - involving varchar columns will be sent to the remote server + involving varchar columns will be sent to the remote server for execution in more cases than before. - Improve libpq's handling of out-of-memory conditions + Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas) @@ -3950,64 +3950,64 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix memory leaks and missing out-of-memory checks - in ecpg (Michael Paquier) + in ecpg (Michael Paquier) - Fix psql's code for locale-aware formatting of numeric + Fix psql's code for locale-aware formatting of numeric output (Tom Lane) - The formatting code invoked by \pset numericlocale on + The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. 
It could also mangle already-localized - output from the money data type. + output from the money data type. - Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) - Make pg_dump handle inherited NOT VALID + Make pg_dump handle inherited NOT VALID check constraints correctly (Tom Lane) - Fix selection of default zlib compression level - in pg_dump's directory output format (Andrew Dunstan) + Fix selection of default zlib compression level + in pg_dump's directory output format (Andrew Dunstan) - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) @@ -4015,11 +4015,11 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 When dumping data types from pre-9.2 servers, and when dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. @@ -4028,18 +4028,18 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. - Fix assorted minor memory leaks in pg_dump and other + Fix assorted minor memory leaks in pg_dump and other client-side programs (Michael Paquier) @@ -4047,11 +4047,11 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. 
@@ -4059,14 +4059,14 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -4078,38 +4078,38 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. @@ -4141,7 +4141,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 However, if you are upgrading an installation that was previously - upgraded using a pg_upgrade version between 9.3.0 and + upgraded using a pg_upgrade version between 9.3.0 and 9.3.4 inclusive, see the first changelog entry below. @@ -4164,52 +4164,52 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Recent PostgreSQL releases introduced mechanisms to + Recent PostgreSQL releases introduced mechanisms to protect against multixact wraparound, but some of that code did not account for the possibility that it would need to run during crash recovery, when the database may not be in a consistent state. This could result in failure to restart after a crash, or failure to start up a secondary server. The lingering effects of a previously-fixed - bug in pg_upgrade could also cause such a failure, in - installations that had used pg_upgrade versions + bug in pg_upgrade could also cause such a failure, in + installations that had used pg_upgrade versions between 9.3.0 and 9.3.4. - The pg_upgrade bug in question was that it would - set oldestMultiXid to 1 in pg_control even + The pg_upgrade bug in question was that it would + set oldestMultiXid to 1 in pg_control even if the true value should be higher. With the fixes introduced in this release, such a situation will result in immediate emergency - autovacuuming until a correct oldestMultiXid value can be + autovacuuming until a correct oldestMultiXid value can be determined. If that would pose a hardship, users can avoid it by - doing manual vacuuming before upgrading to this release. + doing manual vacuuming before upgrading to this release. In detail: - Check whether pg_controldata reports Latest - checkpoint's oldestMultiXid to be 1. If not, there's nothing + Check whether pg_controldata reports Latest + checkpoint's oldestMultiXid to be 1. 
If not, there's nothing to do. - Look in PGDATA/pg_multixact/offsets to see if there's a - file named 0000. If there is, there's nothing to do. + Look in PGDATA/pg_multixact/offsets to see if there's a + file named 0000. If there is, there's nothing to do. Otherwise, for each table that has - pg_class.relminmxid equal to 1, - VACUUM that table with + pg_class.relminmxid equal to 1, + VACUUM that table with both and set to zero. (You can use the vacuum cost delay parameters described in to reduce the performance consequences for concurrent sessions.) You must - use PostgreSQL 9.3.5 or later to perform this step. + use PostgreSQL 9.3.5 or later to perform this step. @@ -4223,7 +4223,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -4234,13 +4234,13 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -4302,12 +4302,12 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. @@ -4319,28 +4319,28 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 - Also apply the same rules in initdb --sync-only. + Also apply the same rules in initdb --sync-only. This case is less critical but it should act similarly. - Fix pg_get_functiondef() to show - functions' LEAKPROOF property, if set (Jeevan Chalke) + Fix pg_get_functiondef() to show + functions' LEAKPROOF property, if set (Jeevan Chalke) - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. @@ -4355,15 +4355,15 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Allow libpq to use TLS protocol versions beyond v1 + Allow libpq to use TLS protocol versions beyond v1 (Noah Misch) - For a long time, libpq was coded so that the only SSL + For a long time, libpq was coded so that the only SSL protocol it would allow was TLS v1. Now that newer TLS versions are becoming popular, allow it to negotiate the highest commonly-supported - TLS version with the server. 
(PostgreSQL servers were + TLS version with the server. (PostgreSQL servers were already capable of such negotiation, so no change is needed on the server side.) This is a back-patch of a change already released in 9.4.0. @@ -4397,8 +4397,8 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - However, if you use contrib/citext's - regexp_matches() functions, see the changelog entry below + However, if you use contrib/citext's + regexp_matches() functions, see the changelog entry below about that. @@ -4436,7 +4436,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. In the worst case this might lead to information exposure, due to our @@ -4446,7 +4446,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -4456,15 +4456,15 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -4479,7 +4479,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 Under certain usage patterns, the existing defenses against this might - be insufficient, allowing pg_multixact/members files to be + be insufficient, allowing pg_multixact/members files to be removed too early, resulting in data loss. The fix for this includes modifying the server to fail transactions that would result in overwriting old multixact member ID data, and @@ -4491,16 +4491,16 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Fix incorrect declaration of contrib/citext's - regexp_matches() functions (Tom Lane) + Fix incorrect declaration of contrib/citext's + regexp_matches() functions (Tom Lane) - These functions should return setof text[], like the core + These functions should return setof text[], like the core functions they are wrappers for; but they were incorrectly declared as - returning just text[]. This mistake had two results: first, + returning just text[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you - should get is an empty set (zero rows). Second, the g flag + should get is an empty set (zero rows). Second, the g flag was effectively ignored, since you would get only one result array even if there were multiple matches. 
@@ -4508,16 +4508,16 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 While the latter behavior is clearly a bug, there might be applications depending on the former behavior; therefore the function declarations - will not be changed by default until PostgreSQL 9.5. + will not be changed by default until PostgreSQL 9.5. In pre-9.5 branches, the old behavior exists in version 1.0 of - the citext extension, while we have provided corrected - declarations in version 1.1 (which is not installed by + the citext extension, while we have provided corrected + declarations in version 1.1 (which is not installed by default). To adopt the fix in pre-9.5 branches, execute - ALTER EXTENSION citext UPDATE TO '1.1' in each database in - which citext is installed. (You can also update + ALTER EXTENSION citext UPDATE TO '1.1' in each database in + which citext is installed. (You can also update back to 1.0 if you need to undo that.) Be aware that either update direction will require dropping and recreating any views or rules that - use citext's regexp_matches() functions. + use citext's regexp_matches() functions. @@ -4559,7 +4559,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -4587,7 +4587,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -4595,7 +4595,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -4609,7 +4609,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -4629,19 +4629,19 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Cope with unexpected signals in LockBufferForCleanup() + Cope with unexpected signals in LockBufferForCleanup() (Andres Freund) This oversight could result in spurious errors about multiple - backends attempting to wait for pincount 1. + backends attempting to wait for pincount 1. - Fix crash when doing COPY IN to a table with check + Fix crash when doing COPY IN to a table with check constraints that contain whole-row references (Tom Lane) @@ -4688,18 +4688,18 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - ANALYZE executes index expressions many times; if there are + ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to - cancel the ANALYZE before that loop finishes. + cancel the ANALYZE before that loop finishes. 
- Ensure tableoid of a foreign table is reported - correctly when a READ COMMITTED recheck occurs after - locking rows in SELECT FOR UPDATE, UPDATE, - or DELETE (Etsuro Fujita) + Ensure tableoid of a foreign table is reported + correctly when a READ COMMITTED recheck occurs after + locking rows in SELECT FOR UPDATE, UPDATE, + or DELETE (Etsuro Fujita) @@ -4719,20 +4719,20 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Recommend setting include_realm to 1 when using + Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost) Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but - it will become the default setting in PostgreSQL 9.5. + it will become the default setting in PostgreSQL 9.5. - Remove code for matching IPv4 pg_hba.conf entries to + Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane) @@ -4745,20 +4745,20 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 crashes on some systems, so let's just remove it rather than fix it. (Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of - IPv4 pg_hba.conf entries, which does not seem like a good + IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.) - Report WAL flush, not insert, position in IDENTIFY_SYSTEM + Report WAL flush, not insert, position in IDENTIFY_SYSTEM replication command (Heikki Linnakangas) This avoids a possible startup failure - in pg_receivexlog. + in pg_receivexlog. @@ -4766,14 +4766,14 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the - service too soon; and ensure that pg_ctl will wait for + service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj) - Reduce risk of network deadlock when using libpq's + Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas) @@ -4782,32 +4782,32 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM - STDIN.) This worked properly in the normal blocking mode, but not - so much in non-blocking mode. We've modified libpq + STDIN.) This worked properly in the normal blocking mode, but not + so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, - and be sure to call PQconsumeInput() upon read-ready. + and be sure to call PQconsumeInput() upon read-ready. 
- In libpq, fix misparsing of empty values in URI + In libpq, fix misparsing of empty values in URI connection strings (Thomas Fanghaenel) - Fix array handling in ecpg (Michael Meskes) + Fix array handling in ecpg (Michael Meskes) - Fix psql to sanely handle URIs and conninfo strings as - the first parameter to \connect + Fix psql to sanely handle URIs and conninfo strings as + the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera) @@ -4820,38 +4820,38 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Suppress incorrect complaints from psql on some - platforms that it failed to write ~/.psql_history at exit + Suppress incorrect complaints from psql on some + platforms that it failed to write ~/.psql_history at exit (Tom Lane) This misbehavior was caused by a workaround for a bug in very old - (pre-2006) versions of libedit. We fixed it by + (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear - for anyone still using such versions of libedit. - Recommendation: upgrade that library, or use libreadline. + for anyone still using such versions of libedit. + Recommendation: upgrade that library, or use libreadline. - Fix pg_dump's rule for deciding which casts are + Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane) - In pg_dump, fix failure to honor -Z - compression level option together with -Fd + In pg_dump, fix failure to honor -Z + compression level option together with -Fd (Michael Paquier) - Make pg_dump consider foreign key relationships + Make pg_dump consider foreign key relationships between extension configuration tables while choosing dump order (Gilles Darold, Michael Paquier, Stephen Frost) @@ -4864,21 +4864,21 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Avoid possible pg_dump failure when concurrent sessions + Avoid possible pg_dump failure when concurrent sessions are creating and dropping temporary functions (Tom Lane) - Fix dumping of views that are just VALUES(...) but have + Fix dumping of views that are just VALUES(...) but have column aliases (Tom Lane) - In pg_upgrade, force timeline 1 in the new cluster + In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian) @@ -4890,7 +4890,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - In pg_upgrade, check for improperly non-connectable + In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian) @@ -4898,28 +4898,28 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - In pg_upgrade, quote directory paths - properly in the generated delete_old_cluster script + In pg_upgrade, quote directory paths + properly in the generated delete_old_cluster script (Bruce Momjian) - In pg_upgrade, preserve database-level freezing info + In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian) This oversight could cause missing-clog-file errors for tables within - the postgres and template1 databases. + the postgres and template1 databases. 
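The pg_dump entry above about views that are just VALUES(...) with column aliases concerns definitions of the following shape; a minimal sketch with a hypothetical view name:

    -- A view consisting solely of a VALUES list, with column aliases:
    CREATE VIEW unit_codes (code, description) AS
      VALUES ('kg', 'kilogram'),
             ('m',  'metre');
    SELECT * FROM unit_codes;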
- Run pg_upgrade and pg_resetxlog with + Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem) @@ -4927,15 +4927,15 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Improve handling of readdir() failures when scanning - directories in initdb and pg_basebackup + Improve handling of readdir() failures when scanning + directories in initdb and pg_basebackup (Marco Nenciarini) - Fix slow sorting algorithm in contrib/intarray (Tom Lane) + Fix slow sorting algorithm in contrib/intarray (Tom Lane) @@ -4953,7 +4953,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 - Update time zone data files to tzdata release 2015d + Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT). @@ -4988,11 +4988,11 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 However, if you are a Windows user and are using the Norwegian - (Bokmål) locale, manual action is needed after the upgrade to - replace any Norwegian (Bokmål)_Norway locale names stored - in PostgreSQL system catalogs with the plain-ASCII - alias Norwegian_Norway. For details see - + (Bokmål) locale, manual action is needed after the upgrade to + replace any Norwegian (Bokmål)_Norway locale names stored + in PostgreSQL system catalogs with the plain-ASCII + alias Norwegian_Norway. For details see + @@ -5026,15 +5026,15 @@ Branch: REL9_0_STABLE [56b970f2e] 2015-02-02 10:00:52 -0500 - Fix buffer overruns in to_char() + Fix buffer overruns in to_char() (Bruce Momjian) - When to_char() processes a numeric formatting template - calling for a large number of digits, PostgreSQL + When to_char() processes a numeric formatting template + calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted - timestamp formatting template, PostgreSQL would write + timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely. @@ -5054,27 +5054,27 @@ Branch: REL9_0_STABLE [9e05c5063] 2015-02-02 10:00:52 -0500 - Fix buffer overrun in replacement *printf() functions + Fix buffer overrun in replacement *printf() functions (Tom Lane) - PostgreSQL includes a replacement implementation - of printf and related functions. This code will overrun + PostgreSQL includes a replacement implementation + of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion - specifiers e, E, f, F, - g or G) with requested precision greater than + specifiers e, E, f, F, + g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through - the to_char() SQL function. While that is the only - affected core PostgreSQL functionality, extension + the to_char() SQL function. While that is the only + affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well. - This issue primarily affects PostgreSQL on Windows. 
- PostgreSQL uses the system implementation of these + This issue primarily affects PostgreSQL on Windows. + PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. (CVE-2015-0242) @@ -5099,12 +5099,12 @@ Branch: REL9_0_STABLE [0a3ee8a5f] 2015-02-02 10:00:52 -0500 - Fix buffer overruns in contrib/pgcrypto + Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch) - Errors in memory size tracking within the pgcrypto + Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. The buffer overrun cases can crash the server, and we have not ruled out the possibility of @@ -5165,7 +5165,7 @@ Branch: REL9_0_STABLE [3a2063369] 2015-01-28 12:33:29 -0500 Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have - SELECT privilege on all columns of the table, this could + SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user. @@ -5214,14 +5214,14 @@ Branch: REL9_2_STABLE [6bf343c6e] 2015-01-16 13:10:23 +0200 - Cope with the Windows locale named Norwegian (Bokmål) + Cope with the Windows locale named Norwegian (Bokmål) (Heikki Linnakangas) Non-ASCII locale names are problematic since it's not clear what encoding they should be represented in. Map the troublesome locale - name to a plain-ASCII alias, Norwegian_Norway. + name to a plain-ASCII alias, Norwegian_Norway. @@ -5236,7 +5236,7 @@ Branch: REL9_0_STABLE [45a607d5c] 2014-11-04 13:24:26 -0500 Avoid possible data corruption if ALTER DATABASE SET - TABLESPACE is used to move a database to a new tablespace and then + TABLESPACE is used to move a database to a new tablespace and then shortly later move it back to its original tablespace (Tom Lane) @@ -5256,14 +5256,14 @@ Branch: REL9_0_STABLE [73f950fc8] 2014-10-30 13:03:39 -0400 - Avoid corrupting tables when ANALYZE inside a transaction + Avoid corrupting tables when ANALYZE inside a transaction is rolled back (Andres Freund, Tom Lane, Michael Paquier) If the failing transaction had earlier removed the last index, rule, or trigger from the table, the table would be left in a corrupted state - with the relevant pg_class flags not set though they + with the relevant pg_class flags not set though they should be. @@ -5278,8 +5278,8 @@ Branch: REL9_1_STABLE [d5fef87e9] 2014-10-20 23:47:45 +0200 Ensure that unlogged tables are copied correctly - during CREATE DATABASE or ALTER DATABASE SET - TABLESPACE (Pavan Deolasee, Andres Freund) + during CREATE DATABASE or ALTER DATABASE SET + TABLESPACE (Pavan Deolasee, Andres Freund) @@ -5291,12 +5291,12 @@ Branch: REL9_3_STABLE [e35db342a] 2014-09-22 16:19:59 -0400 Fix incorrect processing - of CreateEventTrigStmt.eventname (Petr + of CreateEventTrigStmt.eventname (Petr Jelinek) - This could result in misbehavior if CREATE EVENT TRIGGER + This could result in misbehavior if CREATE EVENT TRIGGER were executed as a prepared query, or via extended query protocol. 
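The ALTER DATABASE SET TABLESPACE hazard described above involves a sequence like the following sketch; the database and tablespace names are hypothetical, and the commands must be issued while connected to some other database:

    ALTER DATABASE appdb SET TABLESPACE fast_ssd;
    -- ... and then, shortly afterwards:
    ALTER DATABASE appdb SET TABLESPACE pg_default;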
@@ -5310,7 +5310,7 @@ Branch: REL9_1_STABLE [94d5d57d5] 2014-11-11 17:00:28 -0500 - Fix DROP's dependency searching to correctly handle the + Fix DROP's dependency searching to correctly handle the case where a table column is recursively visited before its table (Petr Jelinek, Tom Lane) @@ -5318,7 +5318,7 @@ Branch: REL9_1_STABLE [94d5d57d5] 2014-11-11 17:00:28 -0500 This case is only known to arise when an extension creates both a datatype and a table using that datatype. The faulty code might - refuse a DROP EXTENSION unless CASCADE is + refuse a DROP EXTENSION unless CASCADE is specified, which should not be required. @@ -5340,7 +5340,7 @@ Branch: REL9_0_STABLE [5308e085b] 2015-01-15 18:52:38 -0500 - In READ COMMITTED mode, queries that lock or update + In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug. @@ -5369,8 +5369,8 @@ Branch: REL9_3_STABLE [54a8abc2b] 2015-01-04 15:48:29 -0300 Fix failure to wait when a transaction tries to acquire a FOR - NO KEY EXCLUSIVE tuple lock, while multiple other transactions - currently hold FOR SHARE locks (Álvaro Herrera) + NO KEY EXCLUSIVE tuple lock, while multiple other transactions + currently hold FOR SHARE locks (Álvaro Herrera) @@ -5384,15 +5384,15 @@ Branch: REL9_0_STABLE [662eebdc6] 2014-12-11 21:02:41 -0500 - Fix planning of SELECT FOR UPDATE when using a partial + Fix planning of SELECT FOR UPDATE when using a partial index on a child table (Kyotaro Horiguchi) - In READ COMMITTED mode, SELECT FOR UPDATE must - also recheck the partial index's WHERE condition when + In READ COMMITTED mode, SELECT FOR UPDATE must + also recheck the partial index's WHERE condition when rechecking a recently-updated row to see if it still satisfies the - query's WHERE condition. This requirement was missed if the + query's WHERE condition. This requirement was missed if the index belonged to an inheritance child table, so that it was possible to incorrectly return rows that no longer satisfy the query condition. @@ -5408,12 +5408,12 @@ Branch: REL9_0_STABLE [f5e4e92fb] 2014-12-11 19:37:17 -0500 - Fix corner case wherein SELECT FOR UPDATE could return a row + Fix corner case wherein SELECT FOR UPDATE could return a row twice, and possibly miss returning other rows (Tom Lane) - In READ COMMITTED mode, a SELECT FOR UPDATE + In READ COMMITTED mode, a SELECT FOR UPDATE that is scanning an inheritance tree could incorrectly return a row from a prior child table instead of the one it should return from a later child table. @@ -5429,7 +5429,7 @@ Branch: REL9_3_STABLE [939f0fb67] 2015-01-15 13:18:19 -0500 - Improve performance of EXPLAIN with large range tables + Improve performance of EXPLAIN with large range tables (Tom Lane) @@ -5445,7 +5445,7 @@ Branch: REL9_0_STABLE [4ff49746e] 2014-08-09 13:46:52 -0400 Reject duplicate column names in the referenced-columns list of - a FOREIGN KEY declaration (David Rowley) + a FOREIGN KEY declaration (David Rowley) @@ -5462,7 +5462,7 @@ Branch: REL9_3_STABLE [6306d0712] 2014-07-22 13:30:14 -0400 - Re-enable error for SELECT ... OFFSET -1 (Tom Lane) + Re-enable error for SELECT ... 
OFFSET -1 (Tom Lane) @@ -5499,7 +5499,7 @@ Branch: REL9_3_STABLE [8571ecb24] 2014-12-02 15:02:43 -0500 - Fix json_agg() to not return extra trailing right + Fix json_agg() to not return extra trailing right brackets in its result (Tom Lane) @@ -5514,7 +5514,7 @@ Branch: REL9_0_STABLE [26f8a4691] 2014-09-11 23:31:06 -0400 - Fix bugs in raising a numeric value to a large integral power + Fix bugs in raising a numeric value to a large integral power (Tom Lane) @@ -5535,19 +5535,19 @@ Branch: REL9_0_STABLE [e6550626c] 2014-12-01 15:25:18 -0500 - In numeric_recv(), truncate away any fractional digits - that would be hidden according to the value's dscale field + In numeric_recv(), truncate away any fractional digits + that would be hidden according to the value's dscale field (Tom Lane) - A numeric value's display scale (dscale) should + A numeric value's display scale (dscale) should never be less than the number of nonzero fractional digits; but apparently there's at least one broken client application that - transmits binary numeric values in which that's true. + transmits binary numeric values in which that's true. This leads to strange behavior since the extra digits are taken into account by arithmetic operations even though they aren't printed. - The least risky fix seems to be to truncate away such hidden + The least risky fix seems to be to truncate away such hidden digits on receipt, so that the value is indeed what it prints as. @@ -5566,7 +5566,7 @@ Branch: REL9_2_STABLE [3359a818c] 2014-09-23 20:25:39 -0400 Matching would often fail when the number of allowed iterations is - limited by a ? quantifier or a bound expression. + limited by a ? quantifier or a bound expression. @@ -5601,7 +5601,7 @@ Branch: REL9_0_STABLE [10059c2da] 2014-10-27 10:51:38 +0200 - Fix bugs in tsquery @> tsquery + Fix bugs in tsquery @> tsquery operator (Heikki Linnakangas) @@ -5658,14 +5658,14 @@ Branch: REL9_0_STABLE [cebb3f032] 2015-01-17 22:37:32 -0500 - Fix namespace handling in xpath() (Ali Akbar) + Fix namespace handling in xpath() (Ali Akbar) - Previously, the xml value resulting from - an xpath() call would not have namespace declarations if + Previously, the xml value resulting from + an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the - input xml value, rather than to the specific element being + input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation. @@ -5685,7 +5685,7 @@ Branch: REL9_2_STABLE [19ccaf9d4] 2014-11-10 15:21:26 -0500 - In some contexts, constructs like row_to_json(tab.*) may + In some contexts, constructs like row_to_json(tab.*) may not produce the expected column names. This is fixed properly as of 9.4; in older branches, just ensure that we produce some nonempty name. (In some cases this will be the underlying table's column name @@ -5703,7 +5703,7 @@ Branch: REL9_2_STABLE [906599f65] 2014-11-22 16:01:15 -0500 Fix mishandling of system columns, - particularly tableoid, in FDW queries (Etsuro Fujita) + particularly tableoid, in FDW queries (Etsuro Fujita) @@ -5721,7 +5721,7 @@ Branch: REL9_3_STABLE [527ff8baf] 2015-01-30 12:30:43 -0500 - This patch fixes corner-case unexpected operator NNNN planner + This patch fixes corner-case unexpected operator NNNN planner errors, and improves the selectivity estimates for some other cases. 
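The tsquery containment operator mentioned above can be exercised directly; expected results are shown in comments:

    SELECT 'cat & dog'::tsquery @> 'cat'::tsquery;        -- true
    SELECT 'cat'::tsquery @> 'cat & dog'::tsquery;        -- false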
@@ -5734,13 +5734,13 @@ Branch: REL9_2_STABLE [4586572d7] 2014-10-26 16:12:32 -0400 - Avoid doing indexed_column = ANY - (array) as an index qualifier if that leads + Avoid doing indexed_column = ANY + (array) as an index qualifier if that leads to an inferior plan (Andrew Gierth) - In some cases, = ANY conditions applied to non-first index + In some cases, = ANY conditions applied to non-first index columns would be done as index conditions even though it would be better to use them as simple filter conditions. @@ -5753,9 +5753,9 @@ Branch: REL9_3_STABLE [4e54685d0] 2014-10-20 12:23:48 -0400 - Fix variable not found in subplan target list planner + Fix variable not found in subplan target list planner failure when an inline-able SQL function taking a composite argument - is used in a LATERAL subselect and the composite argument + is used in a LATERAL subselect and the composite argument is a lateral reference (Tom Lane) @@ -5771,7 +5771,7 @@ Branch: REL9_0_STABLE [288f15b7c] 2014-10-01 19:30:41 -0400 Fix planner problems with nested append relations, such as inherited - tables within UNION ALL subqueries (Tom Lane) + tables within UNION ALL subqueries (Tom Lane) @@ -5800,8 +5800,8 @@ Branch: REL9_0_STABLE [50a757698] 2014-10-03 13:01:27 -0300 - Exempt tables that have per-table cost_limit - and/or cost_delay settings from autovacuum's global cost + Exempt tables that have per-table cost_limit + and/or cost_delay settings from autovacuum's global cost balancing rules (Álvaro Herrera) @@ -5835,7 +5835,7 @@ Branch: REL9_0_STABLE [91b4a881c] 2014-07-30 14:42:12 -0400 the target database, if they met the usual thresholds for autovacuuming. This is at best pretty unexpected; at worst it delays response to the wraparound threat. Fix it so that if autovacuum is - turned off, workers only do anti-wraparound vacuums and + turned off, workers only do anti-wraparound vacuums and not any other work. @@ -5899,12 +5899,12 @@ Branch: REL9_0_STABLE [804983961] 2014-07-29 11:58:17 +0300 Fix several cases where recovery logic improperly ignored WAL records - for COMMIT/ABORT PREPARED (Heikki Linnakangas) + for COMMIT/ABORT PREPARED (Heikki Linnakangas) The most notable oversight was - that recovery_target_xid could not be used to stop at + that recovery_target_xid could not be used to stop at a two-phase commit. @@ -5932,7 +5932,7 @@ Branch: REL9_0_STABLE [83c7bfb9a] 2014-11-06 21:26:21 +0900 - Avoid creating unnecessary .ready marker files for + Avoid creating unnecessary .ready marker files for timeline history files (Fujii Masao) @@ -5948,8 +5948,8 @@ Branch: REL9_0_STABLE [857a5d6b5] 2014-09-05 02:19:57 +0900 Fix possible null pointer dereference when an empty prepared statement - is used and the log_statement setting is mod - or ddl (Fujii Masao) + is used and the log_statement setting is mod + or ddl (Fujii Masao) @@ -5965,7 +5965,7 @@ Branch: REL9_0_STABLE [a1a8d0249] 2015-01-19 23:01:46 -0500 - Change pgstat wait timeout warning message to be LOG level, + Change pgstat wait timeout warning message to be LOG level, and rephrase it to be more understandable (Tom Lane) @@ -5974,7 +5974,7 @@ Branch: REL9_0_STABLE [a1a8d0249] 2015-01-19 23:01:46 -0500 case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads using stale statistics instead of - current ones because stats collector is not responding. + current ones because stats collector is not responding. 
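The planner change for indexed_column = ANY (array) described above applies to queries of this shape; the table and index are hypothetical:

    CREATE TABLE orders (id serial PRIMARY KEY, customer_id int, status text);
    CREATE INDEX orders_cust_status_idx ON orders (customer_id, status);
    -- Before the fix, the = ANY condition on the second index column could be
    -- used as an index qualifier even when a simple filter would be cheaper:
    SELECT * FROM orders
    WHERE customer_id = 42
      AND status = ANY (ARRAY['new', 'shipped']);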
@@ -6018,7 +6018,7 @@ Branch: REL9_0_STABLE [2e4946169] 2015-01-07 22:46:20 -0500 - Warn if macOS's setlocale() starts an unwanted extra + Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch) @@ -6033,13 +6033,13 @@ Branch: REL9_0_STABLE [9880fea4f] 2014-11-25 17:39:09 +0200 - Fix processing of repeated dbname parameters - in PQconnectdbParams() (Alex Shulgin) + Fix processing of repeated dbname parameters + in PQconnectdbParams() (Alex Shulgin) Unexpected behavior ensued if the first occurrence - of dbname contained a connection string or URI to be + of dbname contained a connection string or URI to be expanded. @@ -6054,12 +6054,12 @@ Branch: REL9_0_STABLE [ac6e87537] 2014-10-22 18:42:01 -0400 - Ensure that libpq reports a suitable error message on + Ensure that libpq reports a suitable error message on unexpected socket EOF (Marko Tiikkaja, Tom Lane) - Depending on kernel behavior, libpq might return an + Depending on kernel behavior, libpq might return an empty error string rather than something useful when the server unexpectedly closed the socket. @@ -6075,14 +6075,14 @@ Branch: REL9_0_STABLE [49ef4eba2] 2014-10-29 14:35:39 +0200 - Clear any old error message during PQreset() + Clear any old error message during PQreset() (Heikki Linnakangas) - If PQreset() is called repeatedly, and the connection + If PQreset() is called repeatedly, and the connection cannot be re-established, error messages from the failed connection - attempts kept accumulating in the PGconn's error + attempts kept accumulating in the PGconn's error string. @@ -6098,7 +6098,7 @@ Branch: REL9_0_STABLE [1f3517039] 2014-11-25 14:10:54 +0200 Properly handle out-of-memory conditions while parsing connection - options in libpq (Alex Shulgin, Heikki Linnakangas) + options in libpq (Alex Shulgin, Heikki Linnakangas) @@ -6112,8 +6112,8 @@ Branch: REL9_0_STABLE [d9a1e9de5] 2014-10-06 21:23:50 -0400 - Fix array overrun in ecpg's version - of ParseDateTime() (Michael Paquier) + Fix array overrun in ecpg's version + of ParseDateTime() (Michael Paquier) @@ -6127,7 +6127,7 @@ Branch: REL9_0_STABLE [d67be559e] 2014-12-05 14:30:55 +0200 - In initdb, give a clearer error message if a password + In initdb, give a clearer error message if a password file is specified but is empty (Mats Erik Andersson) @@ -6142,12 +6142,12 @@ Branch: REL9_0_STABLE [44c518328] 2014-09-08 16:10:05 -0400 - Fix psql's \s command to work nicely with + Fix psql's \s command to work nicely with libedit, and add pager support (Stepan Rutz, Tom Lane) - When using libedit rather than readline, \s printed the + When using libedit rather than readline, \s printed the command history in a fairly unreadable encoded format, and on recent libedit versions might fail altogether. Fix that by printing the history ourselves rather than having the library do it. A pleasant @@ -6157,7 +6157,7 @@ Branch: REL9_0_STABLE [44c518328] 2014-09-08 16:10:05 -0400 This patch also fixes a bug that caused newline encoding to be applied inconsistently when saving the command history with libedit. - Multiline history entries written by older psql + Multiline history entries written by older psql versions will be read cleanly with this patch, but perhaps not vice versa, depending on the exact libedit versions involved. 
@@ -6175,17 +6175,17 @@ Branch: REL9_0_STABLE [2600e4436] 2014-12-31 12:17:12 -0500 - Improve consistency of parsing of psql's special + Improve consistency of parsing of psql's special variables (Tom Lane) - Allow variant spellings of on and off (such - as 1/0) for ECHO_HIDDEN - and ON_ERROR_ROLLBACK. Report a warning for unrecognized - values for COMP_KEYWORD_CASE, ECHO, - ECHO_HIDDEN, HISTCONTROL, - ON_ERROR_ROLLBACK, and VERBOSITY. Recognize + Allow variant spellings of on and off (such + as 1/0) for ECHO_HIDDEN + and ON_ERROR_ROLLBACK. Report a warning for unrecognized + values for COMP_KEYWORD_CASE, ECHO, + ECHO_HIDDEN, HISTCONTROL, + ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors. @@ -6198,8 +6198,8 @@ Branch: REL9_3_STABLE [4b1953079] 2014-11-28 02:44:40 +0900 - Make psql's \watch command display - nulls as specified by \pset null (Fujii Masao) + Make psql's \watch command display + nulls as specified by \pset null (Fujii Masao) @@ -6213,9 +6213,9 @@ Branch: REL9_0_STABLE [1f89fc218] 2014-09-12 11:24:39 -0400 - Fix psql's expanded-mode display to work - consistently when using border = 3 - and linestyle = ascii or unicode + Fix psql's expanded-mode display to work + consistently when using border = 3 + and linestyle = ascii or unicode (Stephen Frost) @@ -6229,7 +6229,7 @@ Branch: REL9_3_STABLE [bb1e2426b] 2015-01-05 19:27:09 -0500 - Fix pg_dump to handle comments on event triggers + Fix pg_dump to handle comments on event triggers without failing (Tom Lane) @@ -6243,8 +6243,8 @@ Branch: REL9_3_STABLE [cc609c46f] 2015-01-30 09:01:36 -0600 - Allow parallel pg_dump to - use (Kevin Grittner) @@ -6257,7 +6257,7 @@ Branch: REL9_1_STABLE [40c333c39] 2014-07-25 19:48:54 -0400 - Improve performance of pg_dump when the database + Improve performance of pg_dump when the database contains many instances of multiple dependency paths between the same two objects (Tom Lane) @@ -6271,7 +6271,7 @@ Branch: REL9_2_STABLE [3c5ce5102] 2014-11-13 18:19:35 -0500 - Fix pg_dumpall to restore its ability to dump from + Fix pg_dumpall to restore its ability to dump from pre-8.1 servers (Gilles Darold) @@ -6301,7 +6301,7 @@ Branch: REL9_0_STABLE [31021e7ba] 2014-10-17 12:49:15 -0400 - Fix core dump in pg_dump --binary-upgrade on zero-column + Fix core dump in pg_dump --binary-upgrade on zero-column composite type (Rushabh Lathia) @@ -6314,7 +6314,7 @@ Branch: REL9_3_STABLE [26a4e0ed7] 2014-11-15 01:21:11 +0100 Fix failure to fsync tables in nondefault tablespaces - during pg_upgrade (Abhijit Menon-Sen, Andres Freund) + during pg_upgrade (Abhijit Menon-Sen, Andres Freund) @@ -6330,7 +6330,7 @@ Branch: REL9_3_STABLE [fca9f349b] 2014-08-07 14:56:13 -0400 - In pg_upgrade, cope with cases where the new cluster + In pg_upgrade, cope with cases where the new cluster creates a TOAST table for a table that didn't previously have one (Bruce Momjian) @@ -6347,8 +6347,8 @@ Branch: REL9_3_STABLE [24ae44914] 2014-08-04 11:45:45 -0400 - In pg_upgrade, don't try to - set autovacuum_multixact_freeze_max_age for the old cluster + In pg_upgrade, don't try to + set autovacuum_multixact_freeze_max_age for the old cluster (Bruce Momjian) @@ -6365,12 +6365,12 @@ Branch: REL9_3_STABLE [5724f491d] 2014-09-11 18:39:46 -0400 - In pg_upgrade, preserve the transaction ID epoch + In pg_upgrade, preserve the transaction ID epoch (Bruce Momjian) - This oversight did not bother PostgreSQL proper, + This 
oversight did not bother PostgreSQL proper, but could confuse some external replication tools. @@ -6386,7 +6386,7 @@ Branch: REL9_1_STABLE [2a0bfa4d6] 2015-01-03 20:54:13 +0100 - Prevent WAL files created by pg_basebackup -x/-X from + Prevent WAL files created by pg_basebackup -x/-X from being archived again when the standby is promoted (Andres Freund) @@ -6398,7 +6398,7 @@ Branch: REL9_3_STABLE [9747a9898] 2014-08-02 15:19:45 +0900 - Fix memory leak in pg_receivexlog (Fujii Masao) + Fix memory leak in pg_receivexlog (Fujii Masao) @@ -6409,7 +6409,7 @@ Branch: REL9_3_STABLE [39217ce41] 2014-08-02 14:59:10 +0900 - Fix unintended suppression of pg_receivexlog verbose + Fix unintended suppression of pg_receivexlog verbose messages (Fujii Masao) @@ -6422,8 +6422,8 @@ Branch: REL9_2_STABLE [5ff8c2d7d] 2014-09-19 13:19:05 -0400 - Fix failure of contrib/auto_explain to print per-node - timing information when doing EXPLAIN ANALYZE (Tom Lane) + Fix failure of contrib/auto_explain to print per-node + timing information when doing EXPLAIN ANALYZE (Tom Lane) @@ -6436,7 +6436,7 @@ Branch: REL9_1_STABLE [9807c8220] 2014-08-28 18:21:20 -0400 - Fix upgrade-from-unpackaged script for contrib/citext + Fix upgrade-from-unpackaged script for contrib/citext (Tom Lane) @@ -6449,7 +6449,7 @@ Branch: REL9_3_STABLE [f44290b7b] 2014-11-04 16:54:59 -0500 Avoid integer overflow and buffer overrun - in contrib/hstore's hstore_to_json() + in contrib/hstore's hstore_to_json() (Heikki Linnakangas) @@ -6461,7 +6461,7 @@ Branch: REL9_3_STABLE [55c880797] 2014-12-01 11:44:48 -0500 - Fix recognition of numbers in hstore_to_json_loose(), + Fix recognition of numbers in hstore_to_json_loose(), so that JSON numbers and strings are correctly distinguished (Andrew Dunstan) @@ -6478,7 +6478,7 @@ Branch: REL9_0_STABLE [9dc2a3fd0] 2014-07-22 11:46:04 -0400 Fix block number checking - in contrib/pageinspect's get_raw_page() + in contrib/pageinspect's get_raw_page() (Tom Lane) @@ -6498,7 +6498,7 @@ Branch: REL9_0_STABLE [ef5a3b957] 2014-11-11 17:22:58 -0500 - Fix contrib/pgcrypto's pgp_sym_decrypt() + Fix contrib/pgcrypto's pgp_sym_decrypt() to not fail on messages whose length is 6 less than a power of 2 (Marko Tiikkaja) @@ -6513,7 +6513,7 @@ Branch: REL9_1_STABLE [a855c90a7] 2014-11-19 12:26:06 -0500 - Fix file descriptor leak in contrib/pg_test_fsync + Fix file descriptor leak in contrib/pg_test_fsync (Jeff Janes) @@ -6535,12 +6535,12 @@ Branch: REL9_0_STABLE [dc9a506e6] 2015-01-29 20:18:46 -0500 Handle unexpected query results, especially NULLs, safely in - contrib/tablefunc's connectby() + contrib/tablefunc's connectby() (Michael Paquier) - connectby() previously crashed if it encountered a NULL + connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further. @@ -6555,12 +6555,12 @@ Branch: REL9_0_STABLE [6a694bbab] 2014-11-27 11:13:03 -0500 - Avoid a possible crash in contrib/xml2's - xslt_process() (Mark Simonetti) + Avoid a possible crash in contrib/xml2's + xslt_process() (Mark Simonetti) - libxslt seems to have an undocumented dependency on + libxslt seems to have an undocumented dependency on the order in which resources are freed; reorder our calls to avoid a crash. 
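The number-recognition fix in hstore_to_json_loose() mentioned above is easy to probe; this sketch assumes the hstore extension is installable:

    CREATE EXTENSION IF NOT EXISTS hstore;
    -- Numeric-looking values become JSON numbers, everything else JSON strings;
    -- expected output (key order may vary): {"a": 1, "b": 2.5, "c": "high"}
    SELECT hstore_to_json_loose('a=>1, b=>2.5, c=>high'::hstore);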
@@ -6575,7 +6575,7 @@ Branch: REL9_1_STABLE [7225abf00] 2014-11-05 11:34:25 -0500 - Mark some contrib I/O functions with correct volatility + Mark some contrib I/O functions with correct volatility properties (Tom Lane) @@ -6696,10 +6696,10 @@ Branch: REL9_0_STABLE [4c6d0abde] 2014-07-22 11:02:25 -0400 With OpenLDAP versions 2.4.24 through 2.4.31, - inclusive, PostgreSQL backends can crash at exit. - Raise a warning during configure based on the + inclusive, PostgreSQL backends can crash at exit. + Raise a warning during configure based on the compile-time OpenLDAP version number, and test the crashing scenario - in the contrib/dblink regression test. + in the contrib/dblink regression test. @@ -6713,7 +6713,7 @@ Branch: REL9_0_STABLE [e6841c4d6] 2014-08-18 23:01:23 -0400 - In non-MSVC Windows builds, ensure libpq.dll is installed + In non-MSVC Windows builds, ensure libpq.dll is installed with execute permissions (Noah Misch) @@ -6730,13 +6730,13 @@ Branch: REL9_0_STABLE [338ff75fc] 2015-01-19 23:44:33 -0500 - Make pg_regress remove any temporary installation it + Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane) This results in a very substantial reduction in disk space usage - during make check-world, since that sequence involves + during make check-world, since that sequence involves creation of numerous temporary installations. @@ -6756,15 +6756,15 @@ Branch: REL9_0_STABLE [870a980aa] 2014-10-16 15:22:26 -0400 - Previously, PostgreSQL assumed that the UTC offset - associated with a time zone abbreviation (such as EST) + Previously, PostgreSQL assumed that the UTC offset + associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). - In such timezones, PostgreSQL will now associate the + In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date. @@ -6789,9 +6789,9 @@ Branch: REL9_0_STABLE [8b70023af] 2014-12-24 16:35:54 -0500 Add CST (China Standard Time) to our lists. - Remove references to ADT as Arabia Daylight Time, an + Remove references to ADT as Arabia Daylight Time, an abbreviation that's been out of use since 2007; therefore, claiming - there is a conflict with Atlantic Daylight Time doesn't seem + there is a conflict with Atlantic Daylight Time doesn't seem especially helpful. Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, and FJST (Fiji); we didn't even have them on the proper side of the date line. @@ -6818,21 +6818,21 @@ Branch: REL9_0_STABLE [b6391f587] 2014-10-04 14:18:43 -0400 - Update time zone data files to tzdata release 2015a. + Update time zone data files to tzdata release 2015a. The IANA timezone database has adopted abbreviations of the form - AxST/AxDT + AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into - our Default timezone abbreviation set. 
- The Australia abbreviation set now contains only CST, EAST, + our Default timezone abbreviation set. + The Australia abbreviation set now contains only CST, EAST, EST, SAST, SAT, and WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South - Africa Standard Time in the Default abbreviation set. + Africa Standard Time in the Default abbreviation set. @@ -6873,7 +6873,7 @@ Branch: REL9_0_STABLE [b6391f587] 2014-10-04 14:18:43 -0400 However, this release corrects a logic error - in pg_upgrade, as well as an index corruption problem in + in pg_upgrade, as well as an index corruption problem in some GiST indexes. See the first two changelog entries below to find out whether your installation has been affected and what steps you should take if so. @@ -6900,15 +6900,15 @@ Branch: REL9_3_STABLE [cc5841809] 2014-06-24 16:11:06 -0400 - In pg_upgrade, remove pg_multixact files - left behind by initdb (Bruce Momjian) + In pg_upgrade, remove pg_multixact files + left behind by initdb (Bruce Momjian) - If you used a pre-9.3.5 version of pg_upgrade to + If you used a pre-9.3.5 version of pg_upgrade to upgrade a database cluster to 9.3, it might have left behind a file - $PGDATA/pg_multixact/offsets/0000 that should not be - there and will eventually cause problems in VACUUM. + $PGDATA/pg_multixact/offsets/0000 that should not be + there and will eventually cause problems in VACUUM. However, in common cases this file is actually valid and must not be removed. To determine whether your installation has this problem, run this @@ -6921,9 +6921,9 @@ SELECT EXISTS (SELECT * FROM list WHERE file = '0000') AND EXISTS (SELECT * FROM list WHERE file != '0000') AS file_0000_removal_required; - If this query returns t, manually remove the file - $PGDATA/pg_multixact/offsets/0000. - Do nothing if the query returns f. + If this query returns t, manually remove the file + $PGDATA/pg_multixact/offsets/0000. + Do nothing if the query returns f. @@ -6939,15 +6939,15 @@ Branch: REL8_4_STABLE [e31d77c96] 2014-05-13 15:27:43 +0300 - Correctly initialize padding bytes in contrib/btree_gist - indexes on bit columns (Heikki Linnakangas) + Correctly initialize padding bytes in contrib/btree_gist + indexes on bit columns (Heikki Linnakangas) This error could result in incorrect query results due to values that should compare equal not being seen as equal. - Users with GiST indexes on bit or bit varying - columns should REINDEX those indexes after installing this + Users with GiST indexes on bit or bit varying + columns should REINDEX those indexes after installing this update. @@ -7032,7 +7032,7 @@ Branch: REL9_3_STABLE [167a2535f] 2014-06-09 15:17:23 -0400 - Fix wraparound handling for pg_multixact/members + Fix wraparound handling for pg_multixact/members (Álvaro Herrera) @@ -7046,12 +7046,12 @@ Branch: REL9_3_STABLE [9a28c3752] 2014-06-27 14:43:52 -0400 - Truncate pg_multixact during checkpoints, not - during VACUUM (Álvaro Herrera) + Truncate pg_multixact during checkpoints, not + during VACUUM (Álvaro Herrera) - This change ensures that pg_multixact segments can't be + This change ensures that pg_multixact segments can't be removed if they'd still be needed during WAL replay after a crash. 
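As printed above, the pg_multixact diagnostic query references list without the WITH clause that defines it, which appears to have been lost in transcription. The complete query, run as superuser in any database of the cluster, would presumably read:

    WITH list(file) AS (SELECT * FROM pg_ls_dir('pg_multixact/offsets'))
    SELECT EXISTS (SELECT * FROM list WHERE file = '0000') AND
           EXISTS (SELECT * FROM list WHERE file != '0000') AS file_0000_removal_required;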
@@ -7082,7 +7082,7 @@ Branch: REL8_4_STABLE [3ada1fab8] 2014-05-05 14:43:55 -0400 Fix possibly-incorrect cache invalidation during nested calls - to ReceiveSharedInvalidMessages (Andres Freund) + to ReceiveSharedInvalidMessages (Andres Freund) @@ -7108,8 +7108,8 @@ Branch: REL9_1_STABLE [555d0b200] 2014-06-26 10:42:08 -0700 - Fix could not find pathkey item to sort planner failures - with UNION ALL over subqueries reading from tables with + Fix could not find pathkey item to sort planner failures + with UNION ALL over subqueries reading from tables with inheritance children (Tom Lane) @@ -7148,7 +7148,7 @@ Branch: REL9_2_STABLE [0901dbab3] 2014-04-29 13:12:33 -0400 Improve planner to drop constant-NULL inputs - of AND/OR when possible (Tom Lane) + of AND/OR when possible (Tom Lane) @@ -7166,8 +7166,8 @@ Branch: REL9_3_STABLE [d359f71ac] 2014-04-03 22:02:27 -0400 - Ensure that the planner sees equivalent VARIADIC and - non-VARIADIC function calls as equivalent (Tom Lane) + Ensure that the planner sees equivalent VARIADIC and + non-VARIADIC function calls as equivalent (Tom Lane) @@ -7188,13 +7188,13 @@ Branch: REL9_3_STABLE [a1fc36495] 2014-06-24 21:22:47 -0700 - Fix handling of nested JSON objects - in json_populate_recordset() and friends + Fix handling of nested JSON objects + in json_populate_recordset() and friends (Michael Paquier, Tom Lane) - A nested JSON object could result in previous fields of the + A nested JSON object could result in previous fields of the parent object not being shown in the output. @@ -7208,13 +7208,13 @@ Branch: REL9_2_STABLE [25c933c5c] 2014-05-09 12:55:06 -0400 - Fix identification of input type category in to_json() + Fix identification of input type category in to_json() and friends (Tom Lane) - This is known to have led to inadequate quoting of money - fields in the JSON result, and there may have been wrong + This is known to have led to inadequate quoting of money + fields in the JSON result, and there may have been wrong results for other data types as well. @@ -7239,7 +7239,7 @@ Branch: REL8_4_STABLE [70debcf09] 2014-05-01 15:19:23 -0400 This corrects cases where TOAST pointers could be copied into other tables without being dereferenced. If the original data is later deleted, it would lead to errors like missing chunk number 0 - for toast value ... when the now-dangling pointer is used. + for toast value ... when the now-dangling pointer is used. @@ -7256,7 +7256,7 @@ Branch: REL8_4_STABLE [a81fbcfb3] 2014-07-11 19:12:56 -0400 - Fix record type has not been registered failures with + Fix record type has not been registered failures with whole-row references to the output of Append plan nodes (Tom Lane) @@ -7292,7 +7292,7 @@ Branch: REL8_4_STABLE [d297c91d4] 2014-06-19 22:14:00 -0400 Fix query-lifespan memory leak while evaluating the arguments for a - function in FROM (Tom Lane) + function in FROM (Tom Lane) @@ -7327,7 +7327,7 @@ Branch: REL8_4_STABLE [f3f40434b] 2014-06-10 22:49:08 -0400 - Fix data encoding error in hungarian.stop (Tom Lane) + Fix data encoding error in hungarian.stop (Tom Lane) @@ -7367,7 +7367,7 @@ Branch: REL8_4_STABLE [80d45ae4e] 2014-06-04 23:27:38 +0200 This could cause problems (at least spurious warnings, and at worst an - infinite loop) if CREATE INDEX or CLUSTER were + infinite loop) if CREATE INDEX or CLUSTER were done later in the same transaction. 
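The json_populate_recordset() fix for nested JSON objects described above can be checked with a minimal composite type; the type and field names are hypothetical:

    CREATE TYPE pair AS (a int, b text);
    -- With the fix, this returns the row (1, x); previously the nested object
    -- in "extra" could cause the earlier fields to be dropped from the output.
    SELECT * FROM json_populate_recordset(NULL::pair,
        '[{"a": 1, "b": "x", "extra": {"nested": true}}]');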
@@ -7384,12 +7384,12 @@ Branch: REL8_4_STABLE [82fbd88a7] 2014-04-24 13:30:14 -0400 - Clear pg_stat_activity.xact_start - during PREPARE TRANSACTION (Andres Freund) + Clear pg_stat_activity.xact_start + during PREPARE TRANSACTION (Andres Freund) - After the PREPARE, the originating session is no longer in + After the PREPARE, the originating session is no longer in a transaction, so it should not continue to display a transaction start time. @@ -7408,7 +7408,7 @@ Branch: REL8_4_STABLE [4b767789d] 2014-07-15 13:24:07 -0400 - Fix REASSIGN OWNED to not fail for text search objects + Fix REASSIGN OWNED to not fail for text search objects (Álvaro Herrera) @@ -7422,8 +7422,8 @@ Branch: REL9_3_STABLE [e86cfc4bb] 2014-06-27 14:43:45 -0400 - Prevent pg_class.relminmxid values from - going backwards during VACUUM FULL (Álvaro Herrera) + Prevent pg_class.relminmxid values from + going backwards during VACUUM FULL (Álvaro Herrera) @@ -7461,7 +7461,7 @@ Branch: REL9_3_STABLE [e31193d49] 2014-05-01 20:22:39 -0400 Fix dumping of rules/views when subsequent addition of a column has - resulted in multiple input columns matching a USING + resulted in multiple input columns matching a USING specification (Tom Lane) @@ -7476,7 +7476,7 @@ Branch: REL9_3_STABLE [b978ab5f6] 2014-07-19 14:29:05 -0400 Repair view printing for some cases involving functions - in FROM that return a composite type containing dropped + in FROM that return a composite type containing dropped columns (Tom Lane) @@ -7498,7 +7498,7 @@ Branch: REL8_4_STABLE [969735cf1] 2014-04-05 18:16:24 -0400 This ensures that the postmaster will properly clean up after itself - if, for example, it receives SIGINT while still + if, for example, it receives SIGINT while still starting up. @@ -7513,7 +7513,7 @@ Branch: REL9_1_STABLE [b7a424371] 2014-04-02 17:11:34 -0400 - Fix client host name lookup when processing pg_hba.conf + Fix client host name lookup when processing pg_hba.conf entries that specify host names instead of IP addresses (Tom Lane) @@ -7534,14 +7534,14 @@ Branch: REL9_2_STABLE [6d25eb314] 2014-04-04 22:03:42 -0400 - Allow the root user to use postgres -C variable and - postgres --describe-config (MauMau) + Allow the root user to use postgres -C variable and + postgres --describe-config (MauMau) The prohibition on starting the server as root does not need to extend to these operations, and relaxing it prevents failure - of pg_ctl in some scenarios. + of pg_ctl in some scenarios. @@ -7559,7 +7559,7 @@ Branch: REL8_4_STABLE [95cefd30e] 2014-06-14 09:41:18 -0400 Secure Unix-domain sockets of temporary postmasters started during - make check (Noah Misch) + make check (Noah Misch) @@ -7568,16 +7568,16 @@ Branch: REL8_4_STABLE [95cefd30e] 2014-06-14 09:41:18 -0400 the operating-system user running the test, as we previously noted in CVE-2014-0067. This change defends against that risk by placing the server's socket in a temporary, mode 0700 subdirectory - of /tmp. The hazard remains however on platforms where + of /tmp. The hazard remains however on platforms where Unix sockets are not supported, notably Windows, because then the temporary postmaster must accept local TCP connections. A useful side effect of this change is to simplify - make check testing in builds that - override DEFAULT_PGSOCKET_DIR. Popular non-default values - like /var/run/postgresql are often not writable by the + make check testing in builds that + override DEFAULT_PGSOCKET_DIR. 
Popular non-default values + like /var/run/postgresql are often not writable by the build user, requiring workarounds that will no longer be necessary. @@ -7651,9 +7651,9 @@ Branch: REL8_4_STABLE [e3f273ff6] 2014-04-30 10:39:03 +0300 - This oversight could cause initdb - and pg_upgrade to fail on Windows, if the installation - path contained both spaces and @ signs. + This oversight could cause initdb + and pg_upgrade to fail on Windows, if the installation + path contained both spaces and @ signs. @@ -7669,7 +7669,7 @@ Branch: REL8_4_STABLE [ae41bb4be] 2014-05-30 18:18:32 -0400 - Fix linking of libpython on macOS (Tom Lane) + Fix linking of libpython on macOS (Tom Lane) @@ -7690,17 +7690,17 @@ Branch: REL8_4_STABLE [664ac3de7] 2014-05-07 21:38:50 -0400 - Avoid buffer bloat in libpq when the server + Avoid buffer bloat in libpq when the server consistently sends data faster than the client can absorb it (Shin-ichi Morita, Tom Lane) - libpq could be coerced into enlarging its input buffer + libpq could be coerced into enlarging its input buffer until it runs out of memory (which would be reported misleadingly - as lost synchronization with server). Under ordinary + as lost synchronization with server). Under ordinary circumstances it's quite far-fetched that data could be continuously - transmitted more quickly than the recv() loop can + transmitted more quickly than the recv() loop can absorb it, but this has been observed when the client is artificially slowed by scheduler constraints. @@ -7718,7 +7718,7 @@ Branch: REL8_4_STABLE [b4ae2e37d] 2014-04-16 18:59:48 +0200 - Ensure that LDAP lookup attempts in libpq time out as + Ensure that LDAP lookup attempts in libpq time out as intended (Laurenz Albe) @@ -7741,8 +7741,8 @@ Branch: REL9_0_STABLE [0c2eb989e] 2014-04-09 12:12:32 +0200 - Fix ecpg to do the right thing when an array - of char * is the target for a FETCH statement returning more + Fix ecpg to do the right thing when an array + of char * is the target for a FETCH statement returning more than one row, as well as some other array-handling fixes (Ashutosh Bapat) @@ -7756,13 +7756,13 @@ Branch: REL9_3_STABLE [3080bbaa9] 2014-03-29 17:34:03 -0400 - Fix pg_dump to cope with a materialized view that + Fix pg_dump to cope with a materialized view that depends on a table's primary key (Tom Lane) This occurs if the view's query relies on functional dependency to - abbreviate a GROUP BY list. pg_dump got + abbreviate a GROUP BY list. pg_dump got sufficiently confused that it dumped the materialized view as a regular view. @@ -7776,7 +7776,7 @@ Branch: REL9_3_STABLE [63817f86b] 2014-03-18 10:38:38 -0400 - Fix parsing of pg_dumpall's switch (Tom Lane) @@ -7794,13 +7794,13 @@ Branch: REL8_4_STABLE [6adddac8a] 2014-06-12 20:14:55 -0400 - Fix pg_restore's processing of old-style large object + Fix pg_restore's processing of old-style large object comments (Tom Lane) A direct-to-database restore from an archive file generated by a - pre-9.0 version of pg_dump would usually fail if the + pre-9.0 version of pg_dump would usually fail if the archive contained more than a few comments for large objects. @@ -7815,12 +7815,12 @@ Branch: REL9_2_STABLE [759c9fb63] 2014-07-07 13:24:08 -0400 - Fix pg_upgrade for cases where the new server creates + Fix pg_upgrade for cases where the new server creates a TOAST table but the old version did not (Bruce Momjian) - This rare situation would manifest as relation OID mismatch + This rare situation would manifest as relation OID mismatch errors. 
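The pg_dump confusion over materialized views that depend on a table's primary key, noted above, involves definitions like this sketch (hypothetical names); the ungrouped label column is legal because id is the primary key:

    CREATE TABLE item (id int PRIMARY KEY, label text, qty int);
    CREATE MATERIALIZED VIEW item_totals AS
      SELECT id, label, sum(qty) AS total
      FROM item
      GROUP BY id;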
@@ -7839,9 +7839,9 @@ Branch: REL9_3_STABLE [e7984cca0] 2014-07-21 11:42:05 -0400 - In pg_upgrade, - preserve pg_database.datminmxid - and pg_class.relminmxid values from the + In pg_upgrade, + preserve pg_database.datminmxid + and pg_class.relminmxid values from the old cluster, or insert reasonable values when upgrading from pre-9.3; also defend against unreasonable values in the core server (Bruce Momjian, Álvaro Herrera, Tom Lane) @@ -7864,13 +7864,13 @@ Branch: REL9_2_STABLE [31f579f09] 2014-05-20 12:20:57 -0400 - Prevent contrib/auto_explain from changing the output of - a user's EXPLAIN (Tom Lane) + Prevent contrib/auto_explain from changing the output of + a user's EXPLAIN (Tom Lane) - If auto_explain is active, it could cause - an EXPLAIN (ANALYZE, TIMING OFF) command to nonetheless + If auto_explain is active, it could cause + an EXPLAIN (ANALYZE, TIMING OFF) command to nonetheless print timing information. @@ -7885,7 +7885,7 @@ Branch: REL9_2_STABLE [3e2cfa42f] 2014-06-20 12:27:04 -0700 - Fix query-lifespan memory leak in contrib/dblink + Fix query-lifespan memory leak in contrib/dblink (MauMau, Joe Conway) @@ -7902,7 +7902,7 @@ Branch: REL8_4_STABLE [df2e62603] 2014-04-17 12:37:53 -0400 - In contrib/pgcrypto functions, ensure sensitive + In contrib/pgcrypto functions, ensure sensitive information is cleared from stack variables before returning (Marko Kreen) @@ -7919,7 +7919,7 @@ Branch: REL9_2_STABLE [f6d6b7b1e] 2014-06-30 17:00:40 -0400 Prevent use of already-freed memory in - contrib/pgstattuple's pgstat_heap() + contrib/pgstattuple's pgstat_heap() (Noah Misch) @@ -7936,13 +7936,13 @@ Branch: REL8_4_STABLE [fd785441f] 2014-05-29 13:51:18 -0400 - In contrib/uuid-ossp, cache the state of the OSSP UUID + In contrib/uuid-ossp, cache the state of the OSSP UUID library across calls (Tom Lane) This improves the efficiency of UUID generation and reduces the amount - of entropy drawn from /dev/urandom, on platforms that + of entropy drawn from /dev/urandom, on platforms that have that. @@ -7960,7 +7960,7 @@ Branch: REL8_4_STABLE [c51da696b] 2014-07-19 15:01:45 -0400 - Update time zone data files to tzdata release 2014e + Update time zone data files to tzdata release 2014e for DST law changes in Crimea, Egypt, and Morocco. 
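The auto_explain fix above concerns commands of the following form; with the module loaded, per-node timing could previously appear in auto_explain's output even though TIMING OFF was given:

    LOAD 'auto_explain';   -- assuming the module is installed
    EXPLAIN (ANALYZE, TIMING OFF)
    SELECT count(*) FROM generate_series(1, 10000);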
@@ -8071,7 +8071,7 @@ Branch: REL9_0_STABLE [7aea1050e] 2014-03-13 12:03:07 -0400 Avoid race condition in checking transaction commit status during - receipt of a NOTIFY message (Marko Tiikkaja) + receipt of a NOTIFY message (Marko Tiikkaja) @@ -8089,8 +8089,8 @@ Branch: REL9_3_STABLE [3973034e6] 2014-03-06 11:37:04 -0500 - Allow materialized views to be referenced in UPDATE - and DELETE commands (Michael Paquier) + Allow materialized views to be referenced in UPDATE + and DELETE commands (Michael Paquier) @@ -8133,7 +8133,7 @@ Branch: REL8_4_STABLE [dd378dd1e] 2014-02-18 12:44:36 -0500 - Remove incorrect code that tried to allow OVERLAPS with + Remove incorrect code that tried to allow OVERLAPS with single-element row arguments (Joshua Yanovski) @@ -8156,17 +8156,17 @@ Branch: REL8_4_STABLE [f043bddfe] 2014-03-06 19:31:22 -0500 - Avoid getting more than AccessShareLock when de-parsing a + Avoid getting more than AccessShareLock when de-parsing a rule or view (Dean Rasheed) - This oversight resulted in pg_dump unexpectedly - acquiring RowExclusiveLock locks on tables mentioned as - the targets of INSERT/UPDATE/DELETE + This oversight resulted in pg_dump unexpectedly + acquiring RowExclusiveLock locks on tables mentioned as + the targets of INSERT/UPDATE/DELETE commands in rules. While usually harmless, that could interfere with concurrent transactions that tried to acquire, for example, - ShareLock on those tables. + ShareLock on those tables. @@ -8201,9 +8201,9 @@ Branch: REL9_3_STABLE [e8655a77f] 2014-02-21 17:10:49 -0500 Use non-default selectivity estimates for - value IN (list) and - value operator ANY - (array) + value IN (list) and + value operator ANY + (array) expressions when the righthand side is a stable expression (Tom Lane) @@ -8217,16 +8217,16 @@ Branch: REL9_3_STABLE [13ea43ab8] 2014-03-05 13:03:29 -0300 Remove the correct per-database statistics file during DROP - DATABASE (Tomas Vondra) + DATABASE (Tomas Vondra) This fix prevents a permanent leak of statistics file space. - Users who have done many DROP DATABASE commands since - upgrading to PostgreSQL 9.3 may wish to check their + Users who have done many DROP DATABASE commands since + upgrading to PostgreSQL 9.3 may wish to check their statistics directory and delete statistics files that do not correspond to any existing database. Please note - that db_0.stat should not be removed. + that db_0.stat should not be removed. @@ -8238,12 +8238,12 @@ Branch: REL9_3_STABLE [dcd1131c8] 2014-03-06 21:40:50 +0200 - Fix walsender ping logic to avoid inappropriate + Fix walsender ping logic to avoid inappropriate disconnects under continuous load (Andres Freund, Heikki Linnakangas) - walsender failed to send ping messages to the client + walsender failed to send ping messages to the client if it was constantly busy sending WAL data; but it expected to see ping responses despite that, and would therefore disconnect once elapsed. @@ -8260,8 +8260,8 @@ Branch: REL9_1_STABLE [65e8dbb18] 2014-03-17 20:42:35 +0900 - Fix walsender's failure to shut down cleanly when client - is pg_receivexlog (Fujii Masao) + Fix walsender's failure to shut down cleanly when client + is pg_receivexlog (Fujii Masao) @@ -8324,13 +8324,13 @@ Branch: REL8_4_STABLE [172c53e92] 2014-03-13 20:59:57 -0400 - Prevent interrupts while reporting non-ERROR messages + Prevent interrupts while reporting non-ERROR messages (Tom Lane) This guards against rare server-process freezeups due to recursive - entry to syslog(), and perhaps other related problems. 
+ entry to syslog(), and perhaps other related problems. @@ -8358,13 +8358,13 @@ Branch: REL9_2_STABLE [b315b767f] 2014-03-10 15:47:13 -0400 - Fix tracking of psql script line numbers - during \copy from out-of-line data + Fix tracking of psql script line numbers + during \copy from out-of-line data (Kumar Rajeev Rastogi, Amit Khandekar) - \copy ... from incremented the script file line number + \copy ... from incremented the script file line number for each data line, even if the data was not coming from the script file. This mistake resulted in wrong line numbers being reported for any errors occurring later in the same script file. @@ -8379,12 +8379,12 @@ Branch: REL9_3_STABLE [73f0483fd] 2014-03-07 16:36:50 -0500 - Fix contrib/postgres_fdw to handle multiple join + Fix contrib/postgres_fdw to handle multiple join conditions properly (Tom Lane) - This oversight could result in sending WHERE clauses to + This oversight could result in sending WHERE clauses to the remote server for execution even though the clauses are not known to have the same semantics on the remote server (for example, clauses that use non-built-in operators). The query might succeed anyway, @@ -8404,7 +8404,7 @@ Branch: REL9_0_STABLE [665515539] 2014-03-16 11:47:37 +0100 - Prevent intermittent could not reserve shared memory region + Prevent intermittent could not reserve shared memory region failures on recent Windows versions (MauMau) @@ -8421,7 +8421,7 @@ Branch: REL8_4_STABLE [6e6c2c2e1] 2014-03-15 13:36:57 -0400 - Update time zone data files to tzdata release 2014a + Update time zone data files to tzdata release 2014a for DST law changes in Fiji and Turkey, plus historical changes in Israel and Ukraine. @@ -8494,19 +8494,19 @@ Branch: REL8_4_STABLE [ff35425c8] 2014-02-17 09:33:38 -0500 - Shore up GRANT ... WITH ADMIN OPTION restrictions + Shore up GRANT ... WITH ADMIN OPTION restrictions (Noah Misch) - Granting a role without ADMIN OPTION is supposed to + Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role, but this restriction was easily bypassed by doing SET - ROLE first. The security impact is mostly that a role member can + ROLE first. The security impact is mostly that a role member can revoke the access of others, contrary to the wishes of his grantor. Unapproved role member additions are a lesser concern, since an uncooperative role member could provide most of his rights to others - anyway by creating views or SECURITY DEFINER functions. + anyway by creating views or SECURITY DEFINER functions. (CVE-2014-0060) @@ -8529,7 +8529,7 @@ Branch: REL8_4_STABLE [823b9dc25] 2014-02-17 09:33:38 -0500 The primary role of PL validator functions is to be called implicitly - during CREATE FUNCTION, but they are also normal SQL + during CREATE FUNCTION, but they are also normal SQL functions that a user can call explicitly. Calling a validator on a function actually written in some other language was not checked for and could be exploited for privilege-escalation purposes. @@ -8559,7 +8559,7 @@ Branch: REL8_4_STABLE [e46476133] 2014-02-17 09:33:38 -0500 If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table - than other parts. At least in the case of CREATE INDEX, + than other parts. 
At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. @@ -8583,12 +8583,12 @@ Branch: REL8_4_STABLE [d0ed1a6c0] 2014-02-17 09:33:39 -0500 - The MAXDATELEN constant was too small for the longest - possible value of type interval, allowing a buffer overrun - in interval_out(). Although the datetime input + The MAXDATELEN constant was too small for the longest + possible value of type interval, allowing a buffer overrun + in interval_out(). Although the datetime input functions were more careful about avoiding buffer overrun, the limit was short enough to cause them to reject some valid inputs, such as - input containing a very long timezone name. The ecpg + input containing a very long timezone name. The ecpg library contained these vulnerabilities along with some of its own. (CVE-2014-0063) @@ -8635,7 +8635,7 @@ Branch: REL8_4_STABLE [69d2bc14a] 2014-02-17 11:20:38 -0500 - Use strlcpy() and related functions to provide a clear + Use strlcpy() and related functions to provide a clear guarantee that fixed-size buffers are not overrun. Unlike the preceding items, it is unclear whether these cases really represent live issues, since in most cases there appear to be previous @@ -8657,16 +8657,16 @@ Branch: REL8_4_STABLE [69d2bc14a] 2014-02-17 11:20:38 -0500 - Avoid crashing if crypt() returns NULL (Honza Horak, + Avoid crashing if crypt() returns NULL (Honza Horak, Bruce Momjian) - There are relatively few scenarios in which crypt() - could return NULL, but contrib/chkpass would crash + There are relatively few scenarios in which crypt() + could return NULL, but contrib/chkpass would crash if it did. One practical case in which this could be an issue is - if libc is configured to refuse to execute unapproved - hashing algorithms (e.g., FIPS mode). + if libc is configured to refuse to execute unapproved + hashing algorithms (e.g., FIPS mode). (CVE-2014-0066) @@ -8683,19 +8683,19 @@ Branch: REL8_4_STABLE [f58663ab1] 2014-02-17 11:24:51 -0500 - Document risks of make check in the regression testing + Document risks of make check in the regression testing instructions (Noah Misch, Tom Lane) - Since the temporary server started by make check - uses trust authentication, another user on the same machine + Since the temporary server started by make check + uses trust authentication, another user on the same machine could connect to it as database superuser, and then potentially exploit the privileges of the operating-system user who started the tests. A future release will probably incorporate changes in the testing procedure to prevent this risk, but some public discussion is needed first. So for the moment, just warn people against using - make check when there are untrusted users on the + make check when there are untrusted users on the same machine. (CVE-2014-0067) @@ -8716,7 +8716,7 @@ Branch: REL9_3_STABLE [8e9a16ab8] 2013-12-16 11:29:51 -0300 The logic for tuple freezing was unable to handle some cases involving freezing of - multixact + multixact IDs, with the practical effect that shared row-level locks might be forgotten once old enough. @@ -8725,7 +8725,7 @@ Branch: REL9_3_STABLE [8e9a16ab8] 2013-12-16 11:29:51 -0300 Fixing this required changing the WAL record format for tuple freezing. While this is no issue for standalone servers, when using replication it means that standby servers must be upgraded - to 9.3.3 or later before their masters are. 
An older standby will + to 9.3.3 or later before their masters are. An older standby will be unable to interpret freeze records generated by a newer master, and will fail with a PANIC message. (In such a case, upgrading the standby should be sufficient to let it resume execution.) @@ -8783,8 +8783,8 @@ Branch: REL9_3_STABLE [db1014bc4] 2013-12-18 13:31:27 -0300 This oversight could allow referential integrity checks to give false positives (for instance, allow deletes that should have been rejected). - Applications using the new commands SELECT FOR KEY SHARE - and SELECT FOR NO KEY UPDATE might also have suffered + Applications using the new commands SELECT FOR KEY SHARE + and SELECT FOR NO KEY UPDATE might also have suffered locking failures of this kind. @@ -8797,7 +8797,7 @@ Branch: REL9_3_STABLE [c6cd27e36] 2013-12-05 12:21:55 -0300 - Prevent forgetting valid row locks when one of several + Prevent forgetting valid row locks when one of several holders of a row lock aborts (Álvaro Herrera) @@ -8822,8 +8822,8 @@ Branch: REL9_3_STABLE [2dcc48c35] 2013-12-05 17:47:51 -0300 This mistake could result in spurious could not serialize access - due to concurrent update errors in REPEATABLE READ - and SERIALIZABLE transaction isolation modes. + due to concurrent update errors in REPEATABLE READ + and SERIALIZABLE transaction isolation modes. @@ -8836,7 +8836,7 @@ Branch: REL9_3_STABLE [03db79459] 2014-01-02 18:17:07 -0300 Handle wraparound correctly during extension or truncation - of pg_multixact/members + of pg_multixact/members (Andres Freund, Álvaro Herrera) @@ -8849,7 +8849,7 @@ Branch: REL9_3_STABLE [948a3dfbb] 2014-01-02 18:17:29 -0300 - Fix handling of 5-digit filenames in pg_multixact/members + Fix handling of 5-digit filenames in pg_multixact/members (Álvaro Herrera) @@ -8886,7 +8886,7 @@ Branch: REL9_3_STABLE [85d3b3c3a] 2013-12-19 16:39:59 -0300 This fixes a performance regression from pre-9.3 versions when doing - SELECT FOR UPDATE followed by UPDATE/DELETE. + SELECT FOR UPDATE followed by UPDATE/DELETE. @@ -8900,7 +8900,7 @@ Branch: REL9_3_STABLE [762bd379a] 2014-02-14 15:18:34 +0200 During archive recovery, prefer highest timeline number when WAL segments with the same ID are present in both the archive - and pg_xlog/ (Kyotaro Horiguchi) + and pg_xlog/ (Kyotaro Horiguchi) @@ -8929,7 +8929,7 @@ Branch: REL8_4_STABLE [9620fede9] 2014-02-12 14:52:32 -0500 The WAL update could be applied to the wrong page, potentially many pages past where it should have been. Aside from corrupting data, - this error has been observed to result in significant bloat + this error has been observed to result in significant bloat of standby servers compared to their masters, due to updates being applied far beyond where the end-of-file should have been. This failure mode does not appear to be a significant risk during crash @@ -8958,7 +8958,7 @@ Branch: REL9_0_STABLE [5301c8395] 2014-01-08 14:34:21 +0200 was already consistent at the start of replay, thus possibly allowing hot-standby queries before the database was really consistent. Other symptoms such as PANIC: WAL contains references to invalid - pages were also possible. + pages were also possible. 
@@ -8986,13 +8986,13 @@ Branch: REL9_0_STABLE [5d742b9ce] 2014-01-14 17:35:00 -0500 Fix improper locking of btree index pages while replaying - a VACUUM operation in hot-standby mode (Andres Freund, + a VACUUM operation in hot-standby mode (Andres Freund, Heikki Linnakangas, Tom Lane) This error could result in PANIC: WAL contains references to - invalid pages failures. + invalid pages failures. @@ -9028,8 +9028,8 @@ Branch: REL9_1_STABLE [0402f2441] 2014-01-08 23:31:01 +0200 - When pause_at_recovery_target - and recovery_target_inclusive are both set, ensure the + When pause_at_recovery_target + and recovery_target_inclusive are both set, ensure the target record is applied before pausing, not after (Heikki Linnakangas) @@ -9058,14 +9058,14 @@ Branch: REL9_3_STABLE [478af9b79] 2013-12-13 11:50:25 -0500 Prevent timeout interrupts from taking control away from mainline - code unless ImmediateInterruptOK is set + code unless ImmediateInterruptOK is set (Andres Freund, Tom Lane) This is a serious issue for any application making use of statement timeouts, as it could cause all manner of strange failures after a - timeout occurred. We have seen reports of stuck spinlocks, + timeout occurred. We have seen reports of stuck spinlocks, ERRORs being unexpectedly promoted to PANICs, unkillable backends, and other misbehaviors. @@ -9088,7 +9088,7 @@ Branch: REL8_4_STABLE [458b20f2d] 2014-01-31 21:41:09 -0500 Ensure that signal handlers don't attempt to use the - process's MyProc pointer after it's no longer valid. + process's MyProc pointer after it's no longer valid. @@ -9119,13 +9119,13 @@ Branch: REL8_4_STABLE [01b882fd8] 2014-01-29 20:04:14 -0500 - Fix unsafe references to errno within error reporting + Fix unsafe references to errno within error reporting logic (Christian Kruse) This would typically lead to odd behaviors such as missing or - inappropriate HINT fields. + inappropriate HINT fields. @@ -9141,7 +9141,7 @@ Branch: REL8_4_STABLE [d0070ac81] 2014-01-11 16:35:44 -0500 - Fix possible crashes from using ereport() too early + Fix possible crashes from using ereport() too early during server startup (Tom Lane) @@ -9185,7 +9185,7 @@ Branch: REL8_4_STABLE [a8a46d846] 2014-02-13 14:24:58 -0500 - Fix length checking for Unicode identifiers (U&"..." + Fix length checking for Unicode identifiers (U&"..." syntax) containing escapes (Tom Lane) @@ -9227,7 +9227,7 @@ Branch: REL9_0_STABLE [f2eede9b5] 2014-01-21 23:01:40 -0500 A previous patch allowed such keywords to be used without quoting in places such as role identifiers; but it missed cases where a - list of role identifiers was permitted, such as DROP ROLE. + list of role identifiers was permitted, such as DROP ROLE. @@ -9259,7 +9259,7 @@ Branch: REL8_4_STABLE [884c6384a] 2013-12-10 16:10:36 -0500 Fix possible crash due to invalid plan for nested sub-selects, such - as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) + as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...) (Tom Lane) @@ -9272,13 +9272,13 @@ Branch: REL9_3_STABLE [a4aa854ca] 2014-01-30 14:51:19 -0500 - Fix mishandling of WHERE conditions pulled up from - a LATERAL subquery (Tom Lane) + Fix mishandling of WHERE conditions pulled up from + a LATERAL subquery (Tom Lane) The typical symptom of this bug was a JOIN qualification - cannot refer to other relations error, though subtle logic + cannot refer to other relations error, though subtle logic errors in created plans seem possible as well. 
@@ -9291,8 +9291,8 @@ Branch: REL9_3_STABLE [27ff4cfe7] 2014-01-11 19:03:15 -0500 - Disallow LATERAL references to the target table of - an UPDATE/DELETE (Tom Lane) + Disallow LATERAL references to the target table of + an UPDATE/DELETE (Tom Lane) @@ -9310,12 +9310,12 @@ Branch: REL9_2_STABLE [5d545b7ed] 2013-12-14 17:34:00 -0500 - Fix UPDATE/DELETE of an inherited target table - that has UNION ALL subqueries (Tom Lane) + Fix UPDATE/DELETE of an inherited target table + that has UNION ALL subqueries (Tom Lane) - Without this fix, UNION ALL subqueries aren't correctly + Without this fix, UNION ALL subqueries aren't correctly inserted into the update plans for inheritance child tables after the first one, typically resulting in no update happening for those child table(s). @@ -9330,7 +9330,7 @@ Branch: REL9_3_STABLE [663f8419b] 2013-12-23 22:18:23 -0500 - Fix ANALYZE to not fail on a column that's a domain over + Fix ANALYZE to not fail on a column that's a domain over a range type (Tom Lane) @@ -9347,12 +9347,12 @@ Branch: REL8_4_STABLE [00b77771a] 2014-01-11 13:42:11 -0500 - Ensure that ANALYZE creates statistics for a table column - even when all the values in it are too wide (Tom Lane) + Ensure that ANALYZE creates statistics for a table column + even when all the values in it are too wide (Tom Lane) - ANALYZE intentionally omits very wide values from its + ANALYZE intentionally omits very wide values from its histogram and most-common-values calculations, but it neglected to do something sane in the case that all the sampled entries are too wide. @@ -9370,14 +9370,14 @@ Branch: REL8_4_STABLE [0fb4e3ceb] 2014-01-18 18:50:47 -0500 - In ALTER TABLE ... SET TABLESPACE, allow the database's + In ALTER TABLE ... SET TABLESPACE, allow the database's default tablespace to be used without a permissions check (Stephen Frost) - CREATE TABLE has always allowed such usage, - but ALTER TABLE didn't get the memo. + CREATE TABLE has always allowed such usage, + but ALTER TABLE didn't get the memo. @@ -9405,8 +9405,8 @@ Branch: REL8_4_STABLE [57ac7d8a7] 2014-01-08 20:18:24 -0500 - Fix cannot accept a set error when some arms of - a CASE return a set and others don't (Tom Lane) + Fix cannot accept a set error when some arms of + a CASE return a set and others don't (Tom Lane) @@ -9487,12 +9487,12 @@ Branch: REL8_4_STABLE [6141983fb] 2014-02-10 10:00:50 +0200 - Fix possible misbehavior in plainto_tsquery() + Fix possible misbehavior in plainto_tsquery() (Heikki Linnakangas) - Use memmove() not memcpy() for copying + Use memmove() not memcpy() for copying overlapping memory regions. There have been no field reports of this actually causing trouble, but it's certainly risky. @@ -9508,8 +9508,8 @@ Branch: REL9_1_STABLE [026a91f86] 2014-01-07 18:00:36 +0100 - Fix placement of permissions checks in pg_start_backup() - and pg_stop_backup() (Andres Freund, Magnus Hagander) + Fix placement of permissions checks in pg_start_backup() + and pg_stop_backup() (Andres Freund, Magnus Hagander) @@ -9530,7 +9530,7 @@ Branch: REL8_4_STABLE [69f77d756] 2013-12-15 11:11:11 +0900 - Accept SHIFT_JIS as an encoding name for locale checking + Accept SHIFT_JIS as an encoding name for locale checking purposes (Tatsuo Ishii) @@ -9544,14 +9544,14 @@ Branch: REL9_2_STABLE [888b56570] 2014-02-03 14:46:57 -0500 - Fix *-qualification of named parameters in SQL-language + Fix *-qualification of named parameters in SQL-language functions (Tom Lane) Given a composite-type parameter - named foo, $1.* worked fine, - but foo.* not so much. 
+ named foo, $1.* worked fine, + but foo.* not so much. @@ -9567,11 +9567,11 @@ Branch: REL8_4_STABLE [5525529db] 2014-01-23 23:02:30 +0900 - Fix misbehavior of PQhost() on Windows (Fujii Masao) + Fix misbehavior of PQhost() on Windows (Fujii Masao) - It should return localhost if no host has been specified. + It should return localhost if no host has been specified. @@ -9587,14 +9587,14 @@ Branch: REL8_4_STABLE [7644a7bd8] 2014-02-13 18:45:32 -0500 - Improve error handling in libpq and psql - for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) + Improve error handling in libpq and psql + for failures during COPY TO STDOUT/FROM STDIN (Tom Lane) In particular this fixes an infinite loop that could occur in 9.2 and up if the server connection was lost during COPY FROM - STDIN. Variants of that scenario might be possible in older + STDIN. Variants of that scenario might be possible in older versions, or with other client applications. @@ -9609,7 +9609,7 @@ Branch: REL9_2_STABLE [fa28f9cba] 2014-01-04 16:05:23 -0500 Fix incorrect translation handling in - some psql \d commands + some psql \d commands (Peter Eisentraut, Tom Lane) @@ -9623,7 +9623,7 @@ Branch: REL9_2_STABLE [0ae288d2d] 2014-02-12 14:51:00 +0100 - Ensure pg_basebackup's background process is killed + Ensure pg_basebackup's background process is killed when exiting its foreground process (Magnus Hagander) @@ -9639,7 +9639,7 @@ Branch: REL9_1_STABLE [c6e5c4dd1] 2014-02-09 12:09:55 +0100 Fix possible incorrect printing of filenames - in pg_basebackup's verbose mode (Magnus Hagander) + in pg_basebackup's verbose mode (Magnus Hagander) @@ -9670,7 +9670,7 @@ Branch: REL8_4_STABLE [d68a65b01] 2014-01-09 15:58:37 +0100 - Fix misaligned descriptors in ecpg (MauMau) + Fix misaligned descriptors in ecpg (MauMau) @@ -9686,7 +9686,7 @@ Branch: REL8_4_STABLE [96de4939c] 2014-01-01 12:44:58 +0100 - In ecpg, handle lack of a hostname in the connection + In ecpg, handle lack of a hostname in the connection parameters properly (Michael Meskes) @@ -9703,7 +9703,7 @@ Branch: REL8_4_STABLE [6c8b16e30] 2013-12-07 16:56:34 -0800 - Fix performance regression in contrib/dblink connection + Fix performance regression in contrib/dblink connection startup (Joe Conway) @@ -9724,7 +9724,7 @@ Branch: REL8_4_STABLE [492b68541] 2014-01-13 15:44:14 +0200 - In contrib/isn, fix incorrect calculation of the check + In contrib/isn, fix incorrect calculation of the check digit for ISMN values (Fabien Coelho) @@ -9737,7 +9737,7 @@ Branch: REL9_3_STABLE [27902bc91] 2013-12-12 19:07:53 +0900 - Fix contrib/pgbench's progress logging to avoid overflow + Fix contrib/pgbench's progress logging to avoid overflow when the scale factor is large (Tatsuo Ishii) @@ -9751,8 +9751,8 @@ Branch: REL9_2_STABLE [27ab1eb7e] 2014-01-21 16:34:35 -0500 - Fix contrib/pg_stat_statement's handling - of CURRENT_DATE and related constructs (Kyotaro + Fix contrib/pg_stat_statement's handling + of CURRENT_DATE and related constructs (Kyotaro Horiguchi) @@ -9766,7 +9766,7 @@ Branch: REL9_3_STABLE [eb3d350db] 2014-02-03 21:30:28 -0500 Improve lost-connection error handling - in contrib/postgres_fdw (Tom Lane) + in contrib/postgres_fdw (Tom Lane) @@ -9799,13 +9799,13 @@ Branch: REL8_4_STABLE [ae3c98b9b] 2014-02-01 15:16:52 -0500 - In Mingw and Cygwin builds, install the libpq DLL - in the bin directory (Andrew Dunstan) + In Mingw and Cygwin builds, install the libpq DLL + in the bin directory (Andrew Dunstan) This duplicates what the MSVC build has long done. 
It should fix - problems with programs like psql failing to start + problems with programs like psql failing to start because they can't find the DLL. @@ -9821,7 +9821,7 @@ Branch: REL9_0_STABLE [1c0bf372f] 2014-02-01 16:14:15 -0500 - Avoid using the deprecated dllwrap tool in Cygwin builds + Avoid using the deprecated dllwrap tool in Cygwin builds (Marco Atzeri) @@ -9850,8 +9850,8 @@ Branch: REL8_4_STABLE [432735cbf] 2014-02-10 20:48:30 -0500 - Don't generate plain-text HISTORY - and src/test/regress/README files anymore (Tom Lane) + Don't generate plain-text HISTORY + and src/test/regress/README files anymore (Tom Lane) @@ -9860,7 +9860,7 @@ Branch: REL8_4_STABLE [432735cbf] 2014-02-10 20:48:30 -0500 the likely audience for plain-text format. Distribution tarballs will still contain files by these names, but they'll just be stubs directing the reader to consult the main documentation. - The plain-text INSTALL file will still be maintained, as + The plain-text INSTALL file will still be maintained, as there is arguably a use-case for that. @@ -9877,13 +9877,13 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Update time zone data files to tzdata release 2013i + Update time zone data files to tzdata release 2013i for DST law changes in Jordan and historical changes in Cuba. - In addition, the zones Asia/Riyadh87, - Asia/Riyadh88, and Asia/Riyadh89 have been + In addition, the zones Asia/Riyadh87, + Asia/Riyadh88, and Asia/Riyadh89 have been removed, as they are no longer maintained by IANA, and never represented actual civil timekeeping practice. @@ -9935,19 +9935,19 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Fix VACUUM's tests to see whether it can - update relfrozenxid (Andres Freund) + Fix VACUUM's tests to see whether it can + update relfrozenxid (Andres Freund) - In some cases VACUUM (either manual or autovacuum) could - incorrectly advance a table's relfrozenxid value, + In some cases VACUUM (either manual or autovacuum) could + incorrectly advance a table's relfrozenxid value, allowing tuples to escape freezing, causing those rows to become invisible once 2^31 transactions have elapsed. The probability of data loss is fairly low since multiple incorrect advancements would need to happen before actual loss occurs, but it's not zero. In 9.2.0 and later, the probability of loss is higher, and it's also possible - to get could not access status of transaction errors as a + to get could not access status of transaction errors as a consequence of this bug. Users upgrading from releases 9.0.4 or 8.4.8 or earlier are not affected, but all later versions contain the bug. @@ -9955,12 +9955,12 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix any latent corruption but will not be able to fix all pre-existing data errors. However, an installation can be presumed safe after performing this vacuuming if it has executed fewer than 2^31 update transactions in its lifetime (check this with - SELECT txid_current() < 2^31). + SELECT txid_current() < 2^31). @@ -9972,14 +9972,14 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 These bugs could lead to could not access status of - transaction errors, or to duplicate or vanishing rows. + transaction errors, or to duplicate or vanishing rows. 
Users upgrading from releases prior to 9.3.0 are not affected. The issue can be ameliorated by, after upgrading, vacuuming all tables in all databases while having vacuum_freeze_table_age + linkend="guc-vacuum-freeze-table-age">vacuum_freeze_table_age set to zero. This will fix latent corruption but will not be able to fix all pre-existing data errors. @@ -9995,7 +9995,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Fix initialization of pg_clog and pg_subtrans + Fix initialization of pg_clog and pg_subtrans during hot standby startup (Andres Freund, Heikki Linnakangas) @@ -10028,7 +10028,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 These bugs could result in incorrect behavior, such as locking or even updating the wrong row, in the presence of concurrent updates. - Spurious unable to fetch updated version of tuple errors + Spurious unable to fetch updated version of tuple errors were also possible. @@ -10040,7 +10040,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 This could lead to corruption of the lock data structures in shared - memory, causing lock already held and other odd errors. + memory, causing lock already held and other odd errors. @@ -10057,7 +10057,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Truncate pg_multixact contents during WAL replay + Truncate pg_multixact contents during WAL replay (Andres Freund) @@ -10068,14 +10068,14 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Ensure an anti-wraparound VACUUM counts a page as scanned + Ensure an anti-wraparound VACUUM counts a page as scanned when it's only verified that no tuples need freezing (Sergey Burladyan, Jeff Janes) This bug could result in failing to - advance relfrozenxid, so that the table would still be + advance relfrozenxid, so that the table would still be thought to need another anti-wraparound vacuum. In the worst case the database might even shut down to prevent wraparound. @@ -10104,7 +10104,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Fix unexpected spgdoinsert() failure error during SP-GiST + Fix unexpected spgdoinsert() failure error during SP-GiST index creation (Teodor Sigaev) @@ -10122,12 +10122,12 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 - Historically PostgreSQL has accepted queries like + Historically PostgreSQL has accepted queries like SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z although a strict reading of the SQL standard would forbid the - duplicate usage of table alias x. A misguided change in + duplicate usage of table alias x. A misguided change in 9.3.0 caused it to reject some such cases that were formerly accepted. Restore the previous behavior. @@ -10135,8 +10135,8 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Avoid flattening a subquery whose SELECT list contains a - volatile function wrapped inside a sub-SELECT (Tom Lane) + Avoid flattening a subquery whose SELECT list contains a + volatile function wrapped inside a sub-SELECT (Tom Lane) @@ -10153,14 +10153,14 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z This error could lead to incorrect plans for queries involving - multiple levels of subqueries within JOIN syntax. + multiple levels of subqueries within JOIN syntax. Fix incorrect planning in cases where the same non-strict expression - appears in multiple WHERE and outer JOIN + appears in multiple WHERE and outer JOIN equality clauses (Tom Lane) @@ -10248,20 +10248,20 @@ SELECT ... 
FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Fix array slicing of int2vector and oidvector values + Fix array slicing of int2vector and oidvector values (Tom Lane) Expressions of this kind are now implicitly promoted to - regular int2 or oid arrays. + regular int2 or oid arrays. - Return a valid JSON value when converting an empty hstore value - to json + Return a valid JSON value when converting an empty hstore value + to json (Oskari Saarenmaa) @@ -10276,7 +10276,7 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z In some cases, the system would use the simple GMT offset value when it should have used the regular timezone setting that had prevailed before the simple offset was selected. This change also causes - the timeofday function to honor the simple GMT offset + the timeofday function to honor the simple GMT offset zone. @@ -10290,7 +10290,7 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Properly quote generated command lines in pg_ctl + Properly quote generated command lines in pg_ctl (Naoya Anzai and Tom Lane) @@ -10301,10 +10301,10 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Fix pg_dumpall to work when a source database + Fix pg_dumpall to work when a source database sets default_transaction_read_only - via ALTER DATABASE SET (Kevin Grittner) + linkend="guc-default-transaction-read-only">default_transaction_read_only + via ALTER DATABASE SET (Kevin Grittner) @@ -10314,19 +10314,19 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Fix pg_isready to handle its option properly (Fabrízio de Royes Mello and Fujii Masao) - Fix parsing of WAL file names in pg_receivexlog + Fix parsing of WAL file names in pg_receivexlog (Heikki Linnakangas) - This error made pg_receivexlog unable to restart + This error made pg_receivexlog unable to restart streaming after stopping, once at least 4 GB of WAL had been written. @@ -10334,34 +10334,34 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z Report out-of-disk-space failures properly - in pg_upgrade (Peter Eisentraut) + in pg_upgrade (Peter Eisentraut) - Make ecpg search for quoted cursor names + Make ecpg search for quoted cursor names case-sensitively (Zoltán Böszörményi) - Fix ecpg's processing of lists of variables - declared varchar (Zoltán Böszörményi) + Fix ecpg's processing of lists of variables + declared varchar (Zoltán Böszörményi) - Make contrib/lo defend against incorrect trigger definitions + Make contrib/lo defend against incorrect trigger definitions (Marc Cousin) - Update time zone data files to tzdata release 2013h + Update time zone data files to tzdata release 2013h for DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein, Morocco, and Palestine. Also, new timezone abbreviations WIB, WIT, WITA for Indonesia. @@ -10395,7 +10395,7 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - However, if you use the hstore extension, see the + However, if you use the hstore extension, see the first changelog entry. @@ -10408,18 +10408,18 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z - Ensure new-in-9.3 JSON functionality is added to the hstore + Ensure new-in-9.3 JSON functionality is added to the hstore extension during an update (Andrew Dunstan) - Users who upgraded a pre-9.3 database containing hstore + Users who upgraded a pre-9.3 database containing hstore should execute ALTER EXTENSION hstore UPDATE; after installing 9.3.1, to add two new JSON functions and a cast. 
- (If hstore is already up to date, this command does + (If hstore is already up to date, this command does nothing.) @@ -10452,14 +10452,14 @@ ALTER EXTENSION hstore UPDATE; - Fix timeline handling bugs in pg_receivexlog + Fix timeline handling bugs in pg_receivexlog (Heikki Linnakangas, Andrew Gierth) - Prevent CREATE FUNCTION from checking SET + Prevent CREATE FUNCTION from checking SET variables unless function body checking is enabled (Tom Lane) @@ -10488,7 +10488,7 @@ ALTER EXTENSION hstore UPDATE; Overview - Major enhancements in PostgreSQL 9.3 include: + Major enhancements in PostgreSQL 9.3 include: @@ -10511,17 +10511,17 @@ ALTER EXTENSION hstore UPDATE; - Add many features for the JSON data type, + Add many features for the JSON data type, including operators and functions - to extract elements from JSON values + to extract elements from JSON values - Implement SQL-standard LATERAL option for - FROM-clause subqueries and function calls + Implement SQL-standard LATERAL option for + FROM-clause subqueries and function calls @@ -10535,9 +10535,9 @@ ALTER EXTENSION hstore UPDATE; - Add a Postgres foreign + Add a Postgres foreign data wrapper to allow access to - other Postgres servers + other Postgres servers @@ -10582,8 +10582,8 @@ ALTER EXTENSION hstore UPDATE; A dump/restore using pg_dumpall, or use - of pg_upgrade, is + linkend="APP-PG-DUMPALL">pg_dumpall, or use + of pg_upgrade, is required for those wishing to migrate data from any previous release. @@ -10599,21 +10599,21 @@ ALTER EXTENSION hstore UPDATE; - Rename replication_timeout to wal_sender_timeout + Rename replication_timeout to wal_sender_timeout (Amit Kapila) This setting controls the WAL sender timeout. + linkend="wal">WAL sender timeout. Require superuser privileges to set commit_delay + linkend="guc-commit-delay">commit_delay because it can now potentially delay other sessions (Simon Riggs) @@ -10625,7 +10625,7 @@ ALTER EXTENSION hstore UPDATE; Users who have set work_mem based on the + linkend="guc-work-mem">work_mem based on the previous behavior may need to revisit that setting. @@ -10642,7 +10642,7 @@ ALTER EXTENSION hstore UPDATE; Throw an error if a tuple to be updated or deleted has already been - updated or deleted by a BEFORE trigger (Kevin Grittner) + updated or deleted by a BEFORE trigger (Kevin Grittner) @@ -10652,7 +10652,7 @@ ALTER EXTENSION hstore UPDATE; Now an error is thrown to prevent the inconsistent results from being committed. If this change affects your application, the best solution is usually to move the data-propagation actions to - an AFTER trigger. + an AFTER trigger. @@ -10665,15 +10665,15 @@ ALTER EXTENSION hstore UPDATE; Change multicolumn ON UPDATE - SET NULL/SET DEFAULT foreign key actions to affect + SET NULL/SET DEFAULT foreign key actions to affect all columns of the constraint, not just those changed in the - UPDATE (Tom Lane) + UPDATE (Tom Lane) Previously, we would set only those referencing columns that correspond to referenced columns that were changed by - the UPDATE. This was what was required by SQL-92, + the UPDATE. This was what was required by SQL-92, but more recent editions of the SQL standard specify the new behavior. 
@@ -10681,35 +10681,35 @@ ALTER EXTENSION hstore UPDATE; Force cached plans to be replanned if the search_path changes + linkend="guc-search-path">search_path changes (Tom Lane) Previously, cached plans already generated in the current session were not redone if the query was re-executed with a - new search_path setting, resulting in surprising behavior. + new search_path setting, resulting in surprising behavior. Fix to_number() + linkend="functions-formatting-table">to_number() to properly handle a period used as a thousands separator (Tom Lane) Previously, a period was considered to be a decimal point even when - the locale says it isn't and the D format code is used to + the locale says it isn't and the D format code is used to specify use of the locale-specific decimal point. This resulted in - wrong answers if FM format was also used. + wrong answers if FM format was also used. - Fix STRICT non-set-returning functions that have + Fix STRICT non-set-returning functions that have set-returning functions in their arguments to properly return null rows (Tom Lane) @@ -10722,14 +10722,14 @@ ALTER EXTENSION hstore UPDATE; - Store WAL in a continuous + Store WAL in a continuous stream, rather than skipping the last 16MB segment every 4GB (Heikki Linnakangas) - Previously, WAL files with names ending in FF - were not used because of this skipping. If you have WAL + Previously, WAL files with names ending in FF + were not used because of this skipping. If you have WAL backup or restore scripts that took this behavior into account, they will need to be adjusted. @@ -10738,15 +10738,15 @@ ALTER EXTENSION hstore UPDATE; In pg_constraint.confmatchtype, - store the default foreign key match type (non-FULL, - non-PARTIAL) as s for simple + linkend="catalog-pg-constraint">pg_constraint.confmatchtype, + store the default foreign key match type (non-FULL, + non-PARTIAL) as s for simple (Tom Lane) - Previously this case was represented by u - for unspecified. + Previously this case was represented by u + for unspecified. @@ -10783,10 +10783,10 @@ ALTER EXTENSION hstore UPDATE; This change improves concurrency and reduces the probability of deadlocks when updating tables involved in a foreign-key constraint. - UPDATEs that do not change any columns referenced in a - foreign key now take the new NO KEY UPDATE lock mode on - the row, while foreign key checks use the new KEY SHARE - lock mode, which does not conflict with NO KEY UPDATE. + UPDATEs that do not change any columns referenced in a + foreign key now take the new NO KEY UPDATE lock mode on + the row, while foreign key checks use the new KEY SHARE + lock mode, which does not conflict with NO KEY UPDATE. So there is no blocking unless a foreign-key column is changed. 
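For readers of the preceding entry, a minimal sketch of how the two new lock modes interact, using a hypothetical parent/child pair (the table and column names are illustrative, not taken from the release notes):

-- Hypothetical schema with a foreign-key relationship.
CREATE TABLE parent (id int PRIMARY KEY, payload text);
CREATE TABLE child  (id int PRIMARY KEY,
                     parent_id int REFERENCES parent (id));
INSERT INTO parent VALUES (1, 'old');

-- Session 1: updating a non-key column now takes only
-- FOR NO KEY UPDATE on the parent row.
BEGIN;
UPDATE parent SET payload = 'new' WHERE id = 1;

-- Session 2: the foreign-key check behind this INSERT takes
-- FOR KEY SHARE, which does not conflict with NO KEY UPDATE,
-- so it no longer blocks on session 1.
BEGIN;
INSERT INTO child VALUES (1, 1);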
@@ -10794,7 +10794,7 @@ ALTER EXTENSION hstore UPDATE; Add configuration variable lock_timeout to + linkend="guc-lock-timeout">lock_timeout to allow limiting how long a session will wait to acquire any one lock (Zoltán Böszörményi) @@ -10811,21 +10811,21 @@ ALTER EXTENSION hstore UPDATE; - Add SP-GiST + Add SP-GiST support for range data types (Alexander Korotkov) - Allow GiST indexes to be + Allow GiST indexes to be unlogged (Jeevan Chalke) - Improve performance of GiST index insertion by randomizing + Improve performance of GiST index insertion by randomizing the choice of which page to descend to when there are multiple equally good alternatives (Heikki Linnakangas) @@ -10863,7 +10863,7 @@ ALTER EXTENSION hstore UPDATE; Improve optimizer's hash table size estimate for - doing DISTINCT via hash aggregation (Tom Lane) + doing DISTINCT via hash aggregation (Tom Lane) @@ -10893,7 +10893,7 @@ ALTER EXTENSION hstore UPDATE; - Add COPY FREEZE + Add COPY FREEZE option to avoid the overhead of marking tuples as frozen later (Simon Riggs, Jeff Davis) @@ -10902,7 +10902,7 @@ ALTER EXTENSION hstore UPDATE; Improve performance of NUMERIC calculations + linkend="datatype-numeric">NUMERIC calculations (Kyotaro Horiguchi) @@ -10910,12 +10910,12 @@ ALTER EXTENSION hstore UPDATE; Improve synchronization of sessions waiting for commit_delay + linkend="guc-commit-delay">commit_delay (Peter Geoghegan) - This greatly improves the usefulness of commit_delay. + This greatly improves the usefulness of commit_delay. @@ -10923,7 +10923,7 @@ ALTER EXTENSION hstore UPDATE; Improve performance of the CREATE TEMPORARY TABLE ... ON - COMMIT DELETE ROWS option by not truncating such temporary + COMMIT DELETE ROWS option by not truncating such temporary tables in transactions that haven't touched any temporary tables (Heikki Linnakangas) @@ -10948,7 +10948,7 @@ ALTER EXTENSION hstore UPDATE; This speeds up lock bookkeeping at statement completion in multi-statement transactions that hold many locks; it is particularly - useful for pg_dump. + useful for pg_dump. @@ -10960,7 +10960,7 @@ ALTER EXTENSION hstore UPDATE; This speeds up sessions that create many tables in successive - small transactions, such as a pg_restore run. + small transactions, such as a pg_restore run. @@ -11042,7 +11042,7 @@ ALTER EXTENSION hstore UPDATE; When an authentication failure occurs, log the relevant - pg_hba.conf + pg_hba.conf line, to ease debugging of unintended failures (Magnus Hagander) @@ -11050,23 +11050,23 @@ ALTER EXTENSION hstore UPDATE; - Improve LDAP error + Improve LDAP error reporting and documentation (Peter Eisentraut) - Add support for specifying LDAP authentication parameters - in URL format, per RFC 4516 (Peter Eisentraut) + Add support for specifying LDAP authentication parameters + in URL format, per RFC 4516 (Peter Eisentraut) Change the ssl_ciphers parameter - to start with DEFAULT, rather than ALL, + linkend="guc-ssl-ciphers">ssl_ciphers parameter + to start with DEFAULT, rather than ALL, then remove insecure ciphers (Magnus Hagander) @@ -11078,12 +11078,12 @@ ALTER EXTENSION hstore UPDATE; Parse and load pg_ident.conf + linkend="auth-username-maps">pg_ident.conf once, not during each connection (Amit Kapila) - This is similar to how pg_hba.conf is processed. + This is similar to how pg_hba.conf is processed. @@ -11103,8 +11103,8 @@ ALTER EXTENSION hstore UPDATE; - On Unix-like systems, mmap() is now used for most - of PostgreSQL's shared memory. 
For most users, this + On Unix-like systems, mmap() is now used for most + of PostgreSQL's shared memory. For most users, this will eliminate any need to adjust kernel parameters for shared memory. @@ -11117,8 +11117,8 @@ ALTER EXTENSION hstore UPDATE; The configuration parameter - unix_socket_directory is replaced by unix_socket_directories, + unix_socket_directory is replaced by unix_socket_directories, which accepts a list of directories. @@ -11131,7 +11131,7 @@ ALTER EXTENSION hstore UPDATE; Such a directory is specified with include_dir in the server + linkend="config-includes">include_dir in the server configuration file. @@ -11140,13 +11140,13 @@ ALTER EXTENSION hstore UPDATE; Increase the maximum initdb-configured value for shared_buffers + linkend="guc-shared-buffers">shared_buffers to 128MB (Robert Haas) This is the maximum value that initdb will attempt to set in postgresql.conf; + linkend="config-setting-configuration-file">postgresql.conf; the previous maximum was 32MB. @@ -11154,7 +11154,7 @@ ALTER EXTENSION hstore UPDATE; Remove the external - PID file, if any, on postmaster exit + PID file, if any, on postmaster exit (Peter Eisentraut) @@ -11186,10 +11186,10 @@ ALTER EXTENSION hstore UPDATE; - Add SQL functions pg_is_in_backup() + Add SQL functions pg_is_in_backup() and pg_backup_start_time() + linkend="functions-admin-backup">pg_backup_start_time() (Gilles Darold) @@ -11201,7 +11201,7 @@ ALTER EXTENSION hstore UPDATE; Improve performance of streaming log shipping with synchronous_commit + linkend="guc-synchronous-commit">synchronous_commit disabled (Andres Freund) @@ -11216,12 +11216,12 @@ ALTER EXTENSION hstore UPDATE; Add the last checkpoint's redo location to pg_controldata's + linkend="APP-PGCONTROLDATA">pg_controldata's output (Fujii Masao) - This information is useful for determining which WAL + This information is useful for determining which WAL files are needed for restore. @@ -11229,7 +11229,7 @@ ALTER EXTENSION hstore UPDATE; Allow tools like pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog to run on computers with different architectures (Heikki Linnakangas) @@ -11245,9 +11245,9 @@ ALTER EXTENSION hstore UPDATE; Make pg_basebackup - @@ -11259,10 +11259,10 @@ ALTER EXTENSION hstore UPDATE; Allow pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog and pg_basebackup - to handle streaming timeline switches (Heikki Linnakangas) @@ -11270,8 +11270,8 @@ ALTER EXTENSION hstore UPDATE; Add wal_receiver_timeout - parameter to control the WAL receiver's timeout + linkend="guc-wal-receiver-timeout">wal_receiver_timeout + parameter to control the WAL receiver's timeout (Amit Kapila) @@ -11282,7 +11282,7 @@ ALTER EXTENSION hstore UPDATE; - Change the WAL record format to + Change the WAL record format to allow splitting the record header across pages (Heikki Linnakangas) @@ -11303,23 +11303,23 @@ ALTER EXTENSION hstore UPDATE; - Implement SQL-standard LATERAL option for - FROM-clause subqueries and function calls (Tom Lane) + Implement SQL-standard LATERAL option for + FROM-clause subqueries and function calls (Tom Lane) - This feature allows subqueries and functions in FROM to - reference columns from other tables in the FROM - clause. The LATERAL keyword is optional for functions. + This feature allows subqueries and functions in FROM to + reference columns from other tables in the FROM + clause. The LATERAL keyword is optional for functions. 
Add support for piping COPY and psql \copy + linkend="SQL-COPY">COPY and psql \copy data to/from an external program (Etsuro Fujita) @@ -11327,8 +11327,8 @@ ALTER EXTENSION hstore UPDATE; Allow a multirow VALUES clause in a rule - to reference OLD/NEW (Tom Lane) + linkend="SQL-VALUES">VALUES clause in a rule + to reference OLD/NEW (Tom Lane) @@ -11364,14 +11364,14 @@ ALTER EXTENSION hstore UPDATE; Add CREATE SCHEMA ... IF - NOT EXISTS clause (Fabrízio de Royes Mello) + NOT EXISTS clause (Fabrízio de Royes Mello) Make REASSIGN - OWNED also change ownership of shared objects + OWNED also change ownership of shared objects (Álvaro Herrera) @@ -11379,7 +11379,7 @@ ALTER EXTENSION hstore UPDATE; Make CREATE - AGGREGATE complain if the given initial value string is not + AGGREGATE complain if the given initial value string is not valid input for the transition datatype (Tom Lane) @@ -11387,12 +11387,12 @@ ALTER EXTENSION hstore UPDATE; Suppress CREATE - TABLE's messages about implicit index and sequence creation + TABLE's messages about implicit index and sequence creation (Robert Haas) - These messages now appear at DEBUG1 verbosity, so that + These messages now appear at DEBUG1 verbosity, so that they will not be shown by default. @@ -11400,7 +11400,7 @@ ALTER EXTENSION hstore UPDATE; Allow DROP TABLE IF - EXISTS to succeed when a non-existent schema is specified + EXISTS to succeed when a non-existent schema is specified in the table name (Bruce Momjian) @@ -11427,14 +11427,14 @@ ALTER EXTENSION hstore UPDATE; - <command>ALTER</> + <command>ALTER</command> - Support IF NOT EXISTS option in ALTER TYPE ... ADD VALUE + Support IF NOT EXISTS option in ALTER TYPE ... ADD VALUE (Andrew Dunstan) @@ -11446,21 +11446,21 @@ ALTER EXTENSION hstore UPDATE; Add ALTER ROLE ALL - SET to establish settings for all users (Peter Eisentraut) + SET to establish settings for all users (Peter Eisentraut) This allows settings to apply to all users in all databases. ALTER DATABASE SET + linkend="SQL-ALTERDATABASE">ALTER DATABASE SET already allowed addition of settings for all users in a single - database. postgresql.conf has a similar effect. + database. postgresql.conf has a similar effect. Add support for ALTER RULE - ... RENAME (Ali Dar) + ... RENAME (Ali Dar) @@ -11469,7 +11469,7 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="rules-views"><command>VIEWs</></link> + <link linkend="rules-views"><command>VIEWs</command></link> @@ -11499,20 +11499,20 @@ ALTER EXTENSION hstore UPDATE; Simple views that reference some or all columns from a single base table are now updatable by default. More complex views can be made updatable using INSTEAD OF triggers - or INSTEAD rules. + linkend="SQL-CREATETRIGGER">INSTEAD OF triggers + or INSTEAD rules. Add CREATE RECURSIVE - VIEW syntax (Peter Eisentraut) + VIEW syntax (Peter Eisentraut) Internally this is translated into CREATE VIEW ... WITH - RECURSIVE .... + RECURSIVE .... @@ -11558,8 +11558,8 @@ ALTER EXTENSION hstore UPDATE; Allow text timezone - designations, e.g. America/Chicago, in the - T field of ISO-format timestamptz + designations, e.g. 
America/Chicago, in the + T field of ISO-format timestamptz input (Bruce Momjian) @@ -11567,20 +11567,20 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="datatype-json"><type>JSON</></link> + <link linkend="datatype-json"><type>JSON</type></link> Add operators and functions - to extract elements from JSON values (Andrew Dunstan) + to extract elements from JSON values (Andrew Dunstan) - Allow JSON values to be JSON values to be converted into records (Andrew Dunstan) @@ -11589,7 +11589,7 @@ ALTER EXTENSION hstore UPDATE; Add functions to convert - scalars, records, and hstore values to JSON (Andrew + scalars, records, and hstore values to JSON (Andrew Dunstan) @@ -11609,9 +11609,9 @@ ALTER EXTENSION hstore UPDATE; Add array_remove() + linkend="array-functions-table">array_remove() and array_replace() + linkend="array-functions-table">array_replace() functions (Marco Nenciarini, Gabriele Bartolini) @@ -11619,10 +11619,10 @@ ALTER EXTENSION hstore UPDATE; Allow concat() + linkend="functions-string-other">concat() and format() - to properly expand VARIADIC-labeled arguments + linkend="functions-string-format">format() + to properly expand VARIADIC-labeled arguments (Pavel Stehule) @@ -11630,7 +11630,7 @@ ALTER EXTENSION hstore UPDATE; Improve format() + linkend="functions-string-format">format() to provide field width and left/right alignment options (Pavel Stehule) @@ -11638,29 +11638,29 @@ ALTER EXTENSION hstore UPDATE; Make to_char(), + linkend="functions-formatting-table">to_char(), to_date(), + linkend="functions-formatting-table">to_date(), and to_timestamp() + linkend="functions-formatting-table">to_timestamp() handle negative (BC) century values properly (Bruce Momjian) Previously the behavior was either wrong or inconsistent - with positive/AD handling, e.g. with the format mask - IYYY-IW-DY. + with positive/AD handling, e.g. with the format mask + IYYY-IW-DY. Make to_date() + linkend="functions-formatting-table">to_date() and to_timestamp() - return proper results when mixing ISO and Gregorian + linkend="functions-formatting-table">to_timestamp() + return proper results when mixing ISO and Gregorian week/day designations (Bruce Momjian) @@ -11668,27 +11668,27 @@ ALTER EXTENSION hstore UPDATE; Cause pg_get_viewdef() - to start a new line by default after each SELECT target - list entry and FROM entry (Marko Tiikkaja) + linkend="functions-info-catalog-table">pg_get_viewdef() + to start a new line by default after each SELECT target + list entry and FROM entry (Marko Tiikkaja) This reduces line length in view printing, for instance in pg_dump output. + linkend="APP-PGDUMP">pg_dump output. - Fix map_sql_value_to_xml_value() to print values of + Fix map_sql_value_to_xml_value() to print values of domain types the same way their base type would be printed (Pavel Stehule) There are special formatting rules for certain built-in types such as - boolean; these rules now also apply to domains over these + boolean; these rules now also apply to domains over these types. @@ -11707,13 +11707,13 @@ ALTER EXTENSION hstore UPDATE; - Allow PL/pgSQL to use RETURN with a composite-type + Allow PL/pgSQL to use RETURN with a composite-type expression (Asif Rehman) Previously, in a function returning a composite type, - RETURN could only reference a variable of that type. + RETURN could only reference a variable of that type. 
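A sketch of what that change permits, assuming a hypothetical composite type:

-- Before 9.3, RETURN in a function returning a composite type could
-- only name a variable of that type; now any expression of the
-- composite type is accepted.
CREATE TYPE pair AS (x int, y int);

CREATE FUNCTION make_pair(a int, b int) RETURNS pair
LANGUAGE plpgsql AS $$
BEGIN
    RETURN ROW(a, b)::pair;  -- a composite-type expression
END;
$$;

SELECT * FROM make_pair(1, 2);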
@@ -11728,14 +11728,14 @@ ALTER EXTENSION hstore UPDATE; Allow PL/pgSQL to access the number of rows processed by - COPY (Pavel Stehule) + COPY (Pavel Stehule) - A COPY executed in a PL/pgSQL function now updates the + A COPY executed in a PL/pgSQL function now updates the value retrieved by GET DIAGNOSTICS - x = ROW_COUNT. + x = ROW_COUNT. @@ -11779,9 +11779,9 @@ ALTER EXTENSION hstore UPDATE; - Handle SPI errors raised - explicitly (with PL/Python's RAISE) the same as - internal SPI errors (Oskari Saarenmaa and Jan Urbanski) + Handle SPI errors raised + explicitly (with PL/Python's RAISE) the same as + internal SPI errors (Oskari Saarenmaa and Jan Urbanski) @@ -11798,7 +11798,7 @@ ALTER EXTENSION hstore UPDATE; - Prevent leakage of SPI tuple tables during subtransaction + Prevent leakage of SPI tuple tables during subtransaction abort (Tom Lane) @@ -11809,7 +11809,7 @@ ALTER EXTENSION hstore UPDATE; of such tuple tables and release them manually in error-recovery code. Failure to do so caused a number of transaction-lifespan memory leakage issues in PL/pgSQL and perhaps other SPI clients. SPI_freetuptable() + linkend="spi-spi-freetupletable">SPI_freetuptable() now protects itself against multiple freeing requests, so any existing code that did take care to clean up shouldn't be broken by this change. @@ -11817,8 +11817,8 @@ ALTER EXTENSION hstore UPDATE; - Allow SPI functions to access the number of rows processed - by COPY (Pavel Stehule) + Allow SPI functions to access the number of rows processed + by COPY (Pavel Stehule) @@ -11834,35 +11834,35 @@ ALTER EXTENSION hstore UPDATE; Add command-line utility pg_isready to + linkend="app-pg-isready">pg_isready to check if the server is ready to accept connections (Phil Sorber) - Support multiple This is similar to the way pg_dump's - option works. - Add @@ -11870,7 +11870,7 @@ ALTER EXTENSION hstore UPDATE; Add libpq function PQconninfo() + linkend="libpq-pqconninfo">PQconninfo() to return connection information (Zoltán Böszörményi, Magnus Hagander) @@ -11879,27 +11879,27 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-PSQL"><application>psql</></link> + <link linkend="APP-PSQL"><application>psql</application></link> - Adjust function cost settings so psql tab + Adjust function cost settings so psql tab completion and pattern searching are more efficient (Tom Lane) - Improve psql's tab completion coverage (Jeff Janes, + Improve psql's tab completion coverage (Jeff Janes, Dean Rasheed, Peter Eisentraut, Magnus Hagander) - Allow the psql mode to work when reading from standard input (Fabien Coelho, Robert Haas) @@ -11911,13 +11911,13 @@ ALTER EXTENSION hstore UPDATE; - Remove psql warning when connecting to an older + Remove psql warning when connecting to an older server (Peter Eisentraut) A warning is still issued when connecting to a server of a newer major - version than psql's. + version than psql's. 
@@ -11930,42 +11930,42 @@ ALTER EXTENSION hstore UPDATE; - Add psql command \watch to repeatedly + Add psql command \watch to repeatedly execute a SQL command (Will Leinweber) - Add psql command \gset to store query - results in psql variables (Pavel Stehule) + Add psql command \gset to store query + results in psql variables (Pavel Stehule) - Add SSL information to psql's - \conninfo command (Alastair Turner) + Add SSL information to psql's + \conninfo command (Alastair Turner) - Add Security column to psql's - \df+ output (Jon Erdman) + Add Security column to psql's + \df+ output (Jon Erdman) - Allow psql command \l to accept a database + Allow psql command \l to accept a database name pattern (Peter Eisentraut) - In psql, do not allow \connect to + In psql, do not allow \connect to use defaults if there is no active connection (Bruce Momjian) @@ -11977,7 +11977,7 @@ ALTER EXTENSION hstore UPDATE; Properly reset state after failure of a SQL command executed with - psql's \g file + psql's \g file (Tom Lane) @@ -11998,8 +11998,8 @@ ALTER EXTENSION hstore UPDATE; - Add a latex-longtable output format to - psql (Bruce Momjian) + Add a latex-longtable output format to + psql (Bruce Momjian) @@ -12009,21 +12009,21 @@ ALTER EXTENSION hstore UPDATE; - Add a border=3 output mode to the psql - latex format (Bruce Momjian) + Add a border=3 output mode to the psql + latex format (Bruce Momjian) - In psql's tuples-only and expanded output modes, no - longer emit (No rows) for zero rows (Peter Eisentraut) + In psql's tuples-only and expanded output modes, no + longer emit (No rows) for zero rows (Peter Eisentraut) - In psql's unaligned, expanded output mode, no longer + In psql's unaligned, expanded output mode, no longer print an empty line for zero rows (Peter Eisentraut) @@ -12035,34 +12035,34 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-PGDUMP"><application>pg_dump</></link> + <link linkend="APP-PGDUMP"><application>pg_dump</application></link> - Add pg_dump option to dump tables in parallel (Joachim Wieland) - Make pg_dump output functions in a more predictable + Make pg_dump output functions in a more predictable order (Joel Jacobson) - Fix tar files emitted by pg_dump - to be POSIX conformant (Brian Weaver, Tom Lane) + Fix tar files emitted by pg_dump + to be POSIX conformant (Brian Weaver, Tom Lane) - Add @@ -12076,7 +12076,7 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-INITDB"><application>initdb</></link> + <link linkend="APP-INITDB"><application>initdb</application></link> @@ -12087,19 +12087,19 @@ ALTER EXTENSION hstore UPDATE; This insures data integrity in event of a system crash shortly after - initdb. This can be disabled by using . - Add initdb option to sync the data directory to durable storage (Bruce Momjian) This is used by pg_upgrade. + linkend="pgupgrade">pg_upgrade. 
@@ -12131,14 +12131,14 @@ ALTER EXTENSION hstore UPDATE; - Create a centralized timeout API (Zoltán + Create a centralized timeout API (Zoltán Böszörményi) - Create libpgcommon and move pg_malloc() and other + Create libpgcommon and move pg_malloc() and other functions there (Álvaro Herrera, Andres Freund) @@ -12155,15 +12155,15 @@ ALTER EXTENSION hstore UPDATE; - Use SA_RESTART for all signals, - including SIGALRM (Tom Lane) + Use SA_RESTART for all signals, + including SIGALRM (Tom Lane) Ensure that the correct text domain is used when - translating errcontext() messages + translating errcontext() messages (Heikki Linnakangas) @@ -12176,7 +12176,7 @@ ALTER EXTENSION hstore UPDATE; - Provide support for static assertions that will fail at + Provide support for static assertions that will fail at compile time if some compile-time-constant condition is not met (Andres Freund, Tom Lane) @@ -12184,14 +12184,14 @@ ALTER EXTENSION hstore UPDATE; - Support Assert() in client-side code (Andrew Dunstan) + Support Assert() in client-side code (Andrew Dunstan) - Add decoration to inform the C compiler that some ereport() - and elog() calls do not return (Peter Eisentraut, + Add decoration to inform the C compiler that some ereport() + and elog() calls do not return (Peter Eisentraut, Andres Freund, Tom Lane, Heikki Linnakangas) @@ -12200,7 +12200,7 @@ ALTER EXTENSION hstore UPDATE; Allow options to be passed to the regression test output comparison utility via PG_REGRESS_DIFF_OPTS + linkend="regress-evaluation">PG_REGRESS_DIFF_OPTS (Peter Eisentraut) @@ -12209,42 +12209,42 @@ ALTER EXTENSION hstore UPDATE; Add isolation tests for CREATE INDEX - CONCURRENTLY (Abhijit Menon-Sen) + CONCURRENTLY (Abhijit Menon-Sen) - Remove typedefs for int2/int4 as they are better - represented as int16/int32 (Peter Eisentraut) + Remove typedefs for int2/int4 as they are better + represented as int16/int32 (Peter Eisentraut) Fix install-strip on Mac OS - X (Peter Eisentraut) + X (Peter Eisentraut) Remove configure flag - , as it is no longer supported (Bruce Momjian) - Rewrite pgindent in Perl (Andrew Dunstan) + Rewrite pgindent in Perl (Andrew Dunstan) Provide Emacs macro to set Perl formatting to - match PostgreSQL's perltidy settings (Peter Eisentraut) + match PostgreSQL's perltidy settings (Peter Eisentraut) @@ -12257,25 +12257,25 @@ ALTER EXTENSION hstore UPDATE; - Change the way UESCAPE is lexed, to significantly reduce + Change the way UESCAPE is lexed, to significantly reduce the size of the lexer tables (Heikki Linnakangas) - Centralize flex and bison - make rules (Peter Eisentraut) + Centralize flex and bison + make rules (Peter Eisentraut) - This is useful for pgxs authors. + This is useful for pgxs authors. 
- Change many internal backend functions to return object OIDs + Change many internal backend functions to return object OIDs rather than void (Dimitri Fontaine) @@ -12299,7 +12299,7 @@ ALTER EXTENSION hstore UPDATE; Add function pg_identify_object() + linkend="functions-info-catalog-table">pg_identify_object() to produce a machine-readable description of a database object (Álvaro Herrera) @@ -12307,7 +12307,7 @@ ALTER EXTENSION hstore UPDATE; - Add post-ALTER-object server hooks (KaiGai Kohei) + Add post-ALTER-object server hooks (KaiGai Kohei) @@ -12321,28 +12321,28 @@ ALTER EXTENSION hstore UPDATE; Provide a tool to help detect timezone abbreviation changes when - updating the src/timezone/data files + updating the src/timezone/data files (Tom Lane) - Add pkg-config support for libpq - and ecpg libraries (Peter Eisentraut) + Add pkg-config support for libpq + and ecpg libraries (Peter Eisentraut) - Remove src/tools/backend, now that the content is on - the PostgreSQL wiki (Bruce Momjian) + Remove src/tools/backend, now that the content is on + the PostgreSQL wiki (Bruce Momjian) - Split out WAL reading as + Split out WAL reading as an independent facility (Heikki Linnakangas, Andres Freund) @@ -12350,13 +12350,13 @@ ALTER EXTENSION hstore UPDATE; Use a 64-bit integer to represent WAL positions - (XLogRecPtr) instead of two 32-bit integers + linkend="wal">WAL positions + (XLogRecPtr) instead of two 32-bit integers (Heikki Linnakangas) - Generally, tools that need to read the WAL format + Generally, tools that need to read the WAL format will need to be adjusted. @@ -12371,7 +12371,7 @@ ALTER EXTENSION hstore UPDATE; Allow PL/Python on OS - X to build against custom versions of Python + X to build against custom versions of Python (Peter Eisentraut) @@ -12387,9 +12387,9 @@ ALTER EXTENSION hstore UPDATE; - Add a Postgres foreign + Add a Postgres foreign data wrapper contrib module to allow access to - other Postgres servers (Shigeru Hanada) + other Postgres servers (Shigeru Hanada) @@ -12399,7 +12399,7 @@ ALTER EXTENSION hstore UPDATE; - Add pg_xlogdump + Add pg_xlogdump contrib program (Andres Freund) @@ -12407,46 +12407,46 @@ ALTER EXTENSION hstore UPDATE; Add support for indexing of regular-expression searches in - pg_trgm + pg_trgm (Alexander Korotkov) - Improve pg_trgm's + Improve pg_trgm's handling of multibyte characters (Tom Lane) On a platform that does not have the wcstombs() or towlower() library functions, this could result in an incompatible change in the contents - of pg_trgm indexes for non-ASCII data. In such cases, - REINDEX those indexes to ensure correct search results. + of pg_trgm indexes for non-ASCII data. In such cases, + REINDEX those indexes to ensure correct search results. 
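For installations affected by the pg_trgm entry above, the rebuild is straightforward; a sketch with a hypothetical table and index name:

-- Example trigram index of the kind that may need rebuilding on
-- platforms lacking wcstombs()/towlower():
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX docs_body_trgm_idx ON docs USING gin (body gin_trgm_ops);

-- After upgrading on such a platform, rebuild the index to ensure
-- correct search results:
REINDEX INDEX docs_body_trgm_idx;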
Add a pgstattuple function to report - the size of the pending-insertions list of a GIN index + the size of the pending-insertions list of a GIN index (Fujii Masao) - Make oid2name, - pgbench, and - vacuumlo set - fallback_application_name (Amit Kapila) + Make oid2name, + pgbench, and + vacuumlo set + fallback_application_name (Amit Kapila) Improve output of pg_test_timing + linkend="pgtesttiming">pg_test_timing (Bruce Momjian) @@ -12454,7 +12454,7 @@ ALTER EXTENSION hstore UPDATE; Improve output of pg_test_fsync + linkend="pgtestfsync">pg_test_fsync (Peter Geoghegan) @@ -12466,9 +12466,9 @@ ALTER EXTENSION hstore UPDATE; - When using this FDW to define the target of a dblink + When using this FDW to define the target of a dblink connection, instead of using a hard-wired list of connection options, - the underlying libpq library is consulted to see what + the underlying libpq library is consulted to see what connection options it supports. @@ -12476,26 +12476,26 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="pgupgrade"><application>pg_upgrade</></link> + <link linkend="pgupgrade"><application>pg_upgrade</application></link> - Allow pg_upgrade to do dumps and restores in + Allow pg_upgrade to do dumps and restores in parallel (Bruce Momjian, Andrew Dunstan) This allows parallel schema dump/restore of databases, as well as parallel copy/link of data files per tablespace. Use the - option to specify the level of parallelism. - Make pg_upgrade create Unix-domain sockets in + Make pg_upgrade create Unix-domain sockets in the current directory (Bruce Momjian, Tom Lane) @@ -12507,7 +12507,7 @@ ALTER EXTENSION hstore UPDATE; - Make pg_upgrade mode properly detect the location of non-default socket directories (Bruce Momjian, Tom Lane) @@ -12515,21 +12515,21 @@ ALTER EXTENSION hstore UPDATE; - Improve performance of pg_upgrade for databases + Improve performance of pg_upgrade for databases with many tables (Bruce Momjian) - Improve pg_upgrade's logs by showing + Improve pg_upgrade's logs by showing executed commands (Álvaro Herrera) - Improve pg_upgrade's status display during + Improve pg_upgrade's status display during copy/link (Bruce Momjian) @@ -12539,33 +12539,33 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="pgbench"><application>pgbench</></link> + <link linkend="pgbench"><application>pgbench</application></link> - Add This adds foreign key constraints to the standard tables created by - pgbench, for use in foreign key performance testing. + pgbench, for use in foreign key performance testing. 
- Allow pgbench to aggregate performance statistics - and produce output every seconds (Tomas Vondra) - Add pgbench option to control the percentage of transactions logged (Tomas Vondra) @@ -12573,29 +12573,29 @@ ALTER EXTENSION hstore UPDATE; Reduce and improve the status message output of - pgbench's initialization mode (Robert Haas, + pgbench's initialization mode (Robert Haas, Peter Eisentraut) - Add pgbench mode to print one output line every five seconds (Tomas Vondra) - Output pgbench elapsed and estimated remaining + Output pgbench elapsed and estimated remaining time during initialization (Tomas Vondra) - Allow pgbench to use much larger scale factors, - by changing relevant columns from integer to bigint + Allow pgbench to use much larger scale factors, + by changing relevant columns from integer to bigint when the requested scale factor exceeds 20000 (Greg Smith) @@ -12614,21 +12614,21 @@ ALTER EXTENSION hstore UPDATE; - Allow EPUB-format documentation to be created + Allow EPUB-format documentation to be created (Peter Eisentraut) - Update FreeBSD kernel configuration documentation + Update FreeBSD kernel configuration documentation (Brad Davis) - Improve WINDOW + Improve WINDOW function documentation (Bruce Momjian, Florian Pflug) @@ -12636,7 +12636,7 @@ ALTER EXTENSION hstore UPDATE; Add instructions for setting - up the documentation tool chain on macOS + up the documentation tool chain on macOS (Peter Eisentraut) @@ -12644,7 +12644,7 @@ ALTER EXTENSION hstore UPDATE; Improve commit_delay + linkend="guc-commit-delay">commit_delay documentation (Peter Geoghegan) diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index c665f90ca1..deb74b4e1c 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -53,20 +53,20 @@ Branch: REL9_4_STABLE [b51c8efc6] 2017-08-24 15:21:32 -0700 Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -105,21 +105,21 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -136,7 +136,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. 
Previously, @@ -148,7 +148,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Fix crash in pg_restore when using parallel mode and + Fix crash in pg_restore when using parallel mode and using a list file to select a subset of items to restore (Fabrízio de Royes Mello) @@ -156,13 +156,13 @@ CREATE OR REPLACE VIEW table_privileges AS - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -174,12 +174,12 @@ CREATE OR REPLACE VIEW table_privileges AS This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. + assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. @@ -228,7 +228,7 @@ CREATE OR REPLACE VIEW table_privileges AS Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -236,11 +236,11 @@ CREATE OR REPLACE VIEW table_privileges AS The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -255,15 +255,15 @@ CREATE OR REPLACE VIEW table_privileges AS Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) - In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -294,15 +294,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. 
- In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -316,7 +316,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -330,16 +330,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. - However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -347,13 +347,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Make lo_put() check for UPDATE privilege on + Make lo_put() check for UPDATE privilege on the target large object (Tom Lane, Michael Paquier) - lo_put() should surely require the same permissions - as lowrite(), but the check was missing, allowing any + lo_put() should surely require the same permissions + as lowrite(), but the check was missing, allowing any user to change the data in a large object. (CVE-2017-7548) @@ -460,21 +460,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) - Fix walsender to exit promptly when client requests + Fix walsender to exit promptly when client requests shutdown (Tom Lane) - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) @@ -488,7 +488,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) @@ -505,7 +505,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Logical decoding crashed on tuples that are wider than 64KB (after compression, but with all data in-line). The case arises only - when REPLICA IDENTITY FULL is enabled for a table + when REPLICA IDENTITY FULL is enabled for a table containing such tuples. 
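The affected configuration is reached with ordinary DDL; the table name here is illustrative:

-- REPLICA IDENTITY FULL logs the complete old row for each change;
-- very wide rows in this mode triggered the crash fixed here.
ALTER TABLE wide_rows REPLICA IDENTITY FULL;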
@@ -553,7 +553,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -561,7 +561,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Allow window functions to be used in sub-SELECTs that + Allow window functions to be used in sub-SELECTs that are within the arguments of an aggregate function (Tom Lane) @@ -569,56 +569,56 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... SET does (Peter Eisentraut) Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. - Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -629,20 +629,20 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. 
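Such a mismatch would typically surface when the PL/Perl shared library is first loaded, for example (assuming PL/Perl was built and installed):

-- Installing the language loads plperl's shared library; a
-- Perl-flags mismatch previously could fail at this point.
CREATE EXTENSION plperl;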
- In libpq, reset GSS/SASL and SSPI authentication + In libpq, reset GSS/SASL and SSPI authentication state properly after a failed connection attempt (Michael Paquier) @@ -655,9 +655,9 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -668,8 +668,8 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_dump and pg_restore to - emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) + Fix pg_dump and pg_restore to + emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) @@ -680,15 +680,15 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Improve pg_dump/pg_restore's - reporting of error conditions originating in zlib + Improve pg_dump/pg_restore's + reporting of error conditions originating in zlib (Vladimir Kunschikov, Álvaro Herrera) - Fix pg_dump with the option to drop event triggers as expected (Tom Lane) @@ -701,14 +701,14 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -719,14 +719,14 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
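A minimal sketch of the affected case, with illustrative object names:

-- Deparse a view whose output column was renamed after creation;
-- this is the path that previously produced incorrect output.
CREATE VIEW v AS SELECT 1 AS a;
ALTER TABLE v RENAME COLUMN a TO b;
SELECT pg_get_viewdef('v', true);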
@@ -734,13 +734,13 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Fix dumping of outer joins with empty constraints, such as the result - of a NATURAL LEFT JOIN with no common columns (Tom Lane) + of a NATURAL LEFT JOIN with no common columns (Tom Lane) - Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -748,7 +748,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -760,8 +760,8 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -773,9 +773,9 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In postgres_fdw, re-establish connections to remote - servers after ALTER SERVER or ALTER USER - MAPPING commands (Kyotaro Horiguchi) + In postgres_fdw, re-establish connections to remote + servers after ALTER SERVER or ALTER USER + MAPPING commands (Kyotaro Horiguchi) @@ -786,7 +786,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In postgres_fdw, allow cancellation of remote + In postgres_fdw, allow cancellation of remote transaction control commands (Robert Haas, Rafia Sabih) @@ -798,14 +798,14 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Increase MAX_SYSCACHE_CALLBACKS to provide more room for + Increase MAX_SYSCACHE_CALLBACKS to provide more room for extensions (Tom Lane) - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -825,34 +825,34 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) - In MSVC builds, honor PROVE_FLAGS settings - on vcregress.pl's command line (Andrew Dunstan) + In MSVC builds, honor PROVE_FLAGS settings + on vcregress.pl's command line (Andrew Dunstan) @@ -889,7 +889,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Also, if you are using third-party replication tools that depend - on logical decoding, see the fourth changelog entry below. + on logical decoding, see the fourth changelog entry below. @@ -906,18 +906,18 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. 
Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. @@ -941,7 +941,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. To fix, @@ -955,17 +955,17 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Restore libpq's recognition of - the PGREQUIRESSL environment variable (Daniel Gustafsson) + Restore libpq's recognition of + the PGREQUIRESSL environment variable (Daniel Gustafsson) Processing of this environment variable was unintentionally dropped - in PostgreSQL 9.3, but its documentation remained. + in PostgreSQL 9.3, but its documentation remained. This creates a security hazard, since users might be relying on the environment variable to force SSL-encrypted connections, but that would no longer be guaranteed. Restore handling of the variable, - but give it lower priority than PGSSLMODE, to avoid + but give it lower priority than PGSSLMODE, to avoid breaking configurations that work correctly with post-9.3 code. (CVE-2017-7485) @@ -996,7 +996,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -1009,7 +1009,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -1017,21 +1017,21 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. - Avoid possible crash in walsender due to failure + Avoid possible crash in walsender due to failure to initialize a string buffer (Stas Kelvich, Fujii Masao) - Fix postmaster's handling of fork() failure for a + Fix postmaster's handling of fork() failure for a background worker process (Tom Lane) @@ -1052,19 +1052,19 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. 
That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -1072,27 +1072,27 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. @@ -1106,12 +1106,12 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix integer-overflow problems in interval comparison (Kyotaro + Fix integer-overflow problems in interval comparison (Kyotaro Horiguchi, Tom Lane) - The comparison operators for type interval could yield wrong + The comparison operators for type interval could yield wrong answers for intervals larger than about 296000 years. Indexes on columns containing such large values should be reindexed, since they may be corrupt. @@ -1120,21 +1120,21 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. - Fix roundoff problems in float8_timestamptz() - and make_interval() (Tom Lane) + Fix roundoff problems in float8_timestamptz() + and make_interval() (Tom Lane) @@ -1146,7 +1146,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) @@ -1160,13 +1160,13 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. 
@@ -1184,21 +1184,21 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -1213,20 +1213,20 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -1238,26 +1238,26 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). - In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) - Fix contrib/pg_trgm's extraction of trigrams from regular + Fix contrib/pg_trgm's extraction of trigrams from regular expressions (Tom Lane) @@ -1270,7 +1270,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 - In contrib/postgres_fdw, + In contrib/postgres_fdw, transmit query cancellation requests to the remote server (Michael Paquier, Etsuro Fujita) @@ -1320,7 +1320,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -1334,9 +1334,9 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -1349,15 +1349,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). 
With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1410,15 +1410,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. @@ -1435,19 +1435,19 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Backends failed to account for this snapshot when advertising their oldest xmin, potentially allowing concurrent vacuuming operations to remove data that was still needed. This led to transient failures - along the lines of cache lookup failed for relation 1255. + along the lines of cache lookup failed for relation 1255. - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1513,7 +1513,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -1528,7 +1528,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Fix incorrect updating of trigger function properties when changing a foreign-key constraint's deferrability properties with ALTER - TABLE ... ALTER CONSTRAINT (Tom Lane) + TABLE ... ALTER CONSTRAINT (Tom Lane) @@ -1544,15 +1544,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... 
INHERIT (Amit Langote) @@ -1565,7 +1565,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix CREATE OR REPLACE VIEW to update the view query + Fix CREATE OR REPLACE VIEW to update the view query before attempting to apply the new view options (Dean Rasheed) @@ -1578,7 +1578,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Report correct object identity during ALTER TEXT SEARCH - CONFIGURATION (Artur Zakirov) + CONFIGURATION (Artur Zakirov) @@ -1608,13 +1608,13 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Prevent multicolumn expansion of foo.* in - an UPDATE source expression (Tom Lane) + Prevent multicolumn expansion of foo.* in + an UPDATE source expression (Tom Lane) This led to UPDATE target count mismatch --- internal - error. Now the syntax is understood as a whole-row variable, + error. Now the syntax is understood as a whole-row variable, as it would be in other contexts. @@ -1622,12 +1622,12 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -1642,15 +1642,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -1661,33 +1661,33 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -1699,8 +1699,8 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -1712,15 +1712,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Avoid discarding interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. 
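For example, with the fix in place the finer fields are cleared as expected:

-- Casting to a coarser interval precision should zero the months
-- field; previously the cast could be discarded as a no-op.
SELECT CAST('1 year 7 months'::interval AS INTERVAL YEAR);
-- expected: 1 year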
@@ -1733,28 +1733,28 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) - Fix pg_restore with to behave more sanely if an archive contains - unrecognized DROP commands (Tom Lane) + unrecognized DROP commands (Tom Lane) This doesn't fix any live bug, but it may improve the behavior in - future if pg_restore is used with an archive - generated by a later pg_dump version. + future if pg_restore is used with an archive + generated by a later pg_dump version. - Fix pg_basebackup's rate limiting in the presence of + Fix pg_basebackup's rate limiting in the presence of slow I/O (Antonin Houska) @@ -1767,15 +1767,15 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix pg_basebackup's handling of - symlinked pg_stat_tmp and pg_replslot + Fix pg_basebackup's handling of + symlinked pg_stat_tmp and pg_replslot subdirectories (Magnus Hagander, Michael Paquier) - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -1794,21 +1794,21 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. - Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -1820,23 +1820,23 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -1847,22 +1847,22 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) - Teach contrib/dblink to ignore irrelevant server options - when it uses a contrib/postgres_fdw foreign server as + Teach contrib/dblink to ignore irrelevant server options + when it uses a contrib/postgres_fdw foreign server as the source of connection options (Corey Huinker) Previously, if the foreign server object had options that were not - also libpq connection options, an error occurred. + also libpq connection options, an error occurred. 
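A sketch of the usage this enables; the server, host, database, and connection names are illustrative:

-- use_remote_estimate is a postgres_fdw option, not a libpq
-- connection option; dblink now skips it rather than erroring.
CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'db.example.com', dbname 'appdb',
             use_remote_estimate 'true');
SELECT dblink_connect('c1', 'remote_pg');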
@@ -1888,7 +1888,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -1951,7 +1951,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 crash recovery, or to be written incorrectly on a standby server. Bogus entries in a free space map could lead to attempts to access pages that have been truncated away from the relation itself, typically - producing errors like could not read block XXX: + producing errors like could not read block XXX: read only 0 of 8192 bytes. Checksum failures in the visibility map are also possible, if checksumming is enabled. @@ -1959,7 +1959,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Procedures for determining whether there is a problem and repairing it if so are discussed at - . + . @@ -1970,20 +1970,20 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - The typical symptom was unexpected GIN leaf action errors + The typical symptom was unexpected GIN leaf action errors during WAL replay. - Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that + Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that have been updated by a subsequently-aborted transaction (Álvaro Herrera) - In 9.5 and later, the SELECT would sometimes fail to + In 9.5 and later, the SELECT would sometimes fail to return such tuples at all. A failure has not been proven to occur in earlier releases, but might be possible with concurrent updates. @@ -2017,79 +2017,79 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Fix query-lifespan memory leak in a bulk UPDATE on a table - with a PRIMARY KEY or REPLICA IDENTITY index + Fix query-lifespan memory leak in a bulk UPDATE on a table + with a PRIMARY KEY or REPLICA IDENTITY index (Tom Lane) - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. Suppress printing of zeroes for unmeasured times - in EXPLAIN (Maksim Milyutin) + in EXPLAIN (Maksim Milyutin) Certain option combinations resulted in printing zero values for times that actually aren't ever measured in that combination. Our general - policy in EXPLAIN is not to print such fields at all, so + policy in EXPLAIN is not to print such fields at all, so do that consistently in all cases. - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. 
Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. + INHERIT child constraint with an inherited constraint. Remove artificial restrictions on the values accepted - by numeric_in() and numeric_recv() + by numeric_in() and numeric_recv() (Tom Lane) We allow numeric values up to the limit of the storage format (more - than 1e100000), so it seems fairly pointless - that numeric_in() rejected scientific-notation exponents - above 1000. Likewise, it was silly for numeric_recv() to + than 1e100000), so it seems fairly pointless + that numeric_in() rejected scientific-notation exponents + above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. @@ -2134,7 +2134,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Disallow starting a standalone backend with standby_mode + Disallow starting a standalone backend with standby_mode turned on (Michael Paquier) @@ -2153,7 +2153,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 This failure to reset all of the fields of the slot could - prevent VACUUM from removing dead tuples. + prevent VACUUM from removing dead tuples. @@ -2164,7 +2164,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - This avoids possible failures during munmap() on systems + This avoids possible failures during munmap() on systems with atypical default huge page sizes. Except in crash-recovery cases, there were no ill effects other than a log message. @@ -2178,7 +2178,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Previously, the same value would be chosen every time, because it was - derived from random() but srandom() had not + derived from random() but srandom() had not yet been called. While relatively harmless, this was not the intended behavior. @@ -2191,8 +2191,8 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Windows sometimes returns ERROR_ACCESS_DENIED rather - than ERROR_ALREADY_EXISTS when there is an existing + Windows sometimes returns ERROR_ACCESS_DENIED rather + than ERROR_ALREADY_EXISTS when there is an existing segment. This led to postmaster startup failure due to believing that the former was an unrecoverable error. @@ -2201,7 +2201,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -2212,30 +2212,30 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. + during PQreset(), but there might be related cases. - Make ecpg's and options work consistently with our other executables (Haribabu Kommi) - Fix pgbench's calculation of average latency + Fix pgbench's calculation of average latency (Fabien Coelho) - The calculation was incorrect when there were \sleep + The calculation was incorrect when there were \sleep commands in the script, or when the test duration was specified in number of transactions rather than total time. 
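For instance, a custom pgbench script of the following shape (the query is illustrative) previously skewed the reported average latency:

-- pgbench script: the sleep time was mishandled in the
-- average-latency calculation before this fix.
\sleep 10 ms
SELECT abalance FROM pgbench_accounts WHERE aid = 1;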
@@ -2243,12 +2243,12 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - In pg_dump, never dump range constructor functions + In pg_dump, never dump range constructor functions (Tom Lane) - This oversight led to pg_upgrade failures with + This oversight led to pg_upgrade failures with extensions containing range types, due to duplicate creation of the constructor functions. @@ -2256,8 +2256,8 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - In pg_xlogdump, retry opening new WAL segments when - using option (Magnus Hagander) @@ -2268,7 +2268,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Fix pg_xlogdump to cope with a WAL file that begins + Fix pg_xlogdump to cope with a WAL file that begins with a continuation record spanning more than one page (Pavan Deolasee) @@ -2276,15 +2276,15 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Fix contrib/pg_buffercache to work - when shared_buffers exceeds 256GB (KaiGai Kohei) + Fix contrib/pg_buffercache to work + when shared_buffers exceeds 256GB (KaiGai Kohei) - Fix contrib/intarray/bench/bench.pl to print the results - of the EXPLAIN it does when given the option (Daniel Gustafsson) @@ -2296,17 +2296,17 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - When PostgreSQL has been configured - with - In MSVC builds, include pg_recvlogical in a + In MSVC builds, include pg_recvlogical in a client-only installation (MauMau) @@ -2327,17 +2327,17 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from - their time zone database, as they did in tzdata + their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused - the pg_timezone_abbrevs view to fail altogether. + the pg_timezone_abbrevs view to fail altogether. - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -2350,15 +2350,15 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. - In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. 
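The practical effect shows up in timestamp input, for example:

-- With the updated Default abbreviation set, AMT now reads as
-- Amazon Time (UTC-4) rather than Armenia Time (UTC+4).
SELECT '2016-10-01 12:00:00 AMT'::timestamptz;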
@@ -2403,17 +2403,17 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Fix possible mis-evaluation of - nested CASE-WHEN expressions (Heikki + nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane) - A CASE expression appearing within the test value - subexpression of another CASE could become confused about + A CASE expression appearing within the test value + subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by - a CASE expression could result in passing the wrong test - value to functions called within a CASE expression in the + a CASE expression could result in passing the wrong test + value to functions called within a CASE expression in the SQL function's body. If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423) @@ -2427,7 +2427,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Numerous places in vacuumdb and other client programs + Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. Also, ensure that when a conninfo string is used as a database name @@ -2436,22 +2436,22 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Fix handling of paired double quotes - in psql's \connect - and \password commands to match the documentation. + in psql's \connect + and \password commands to match the documentation. - Introduce a new - pg_dumpall now refuses to deal with database and role + pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet. @@ -2461,40 +2461,40 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser - executes pg_dumpall or other routine maintenance + executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424) - Fix corner-case misbehaviors for IS NULL/IS NOT - NULL applied to nested composite values (Andrew Gierth, Tom Lane) + Fix corner-case misbehaviors for IS NULL/IS NOT + NULL applied to nested composite values (Andrew Gierth, Tom Lane) - The SQL standard specifies that IS NULL should return + The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS - NULL yields TRUE), but this is not meant to apply recursively - (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). + NULL yields TRUE), but this is not meant to apply recursively + (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), - and contrib/postgres_fdw could produce remote queries + and contrib/postgres_fdw could produce remote queries that misbehaved similarly. 
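Restating the standard-mandated behavior described above as queries:

SELECT ROW(NULL, NULL) IS NULL;             -- true: all fields are null
SELECT ROW(NULL, ROW(NULL, NULL)) IS NULL;  -- false: the test is not recursive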
- Make the inet and cidr data types properly reject + Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane) - Prevent crash in close_ps() - (the point ## lseg operator) + Prevent crash in close_ps() + (the point ## lseg operator) for NaN input coordinates (Tom Lane) @@ -2505,19 +2505,19 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Avoid possible crash in pg_get_expr() when inconsistent + Avoid possible crash in pg_get_expr() when inconsistent values are passed to it (Michael Paquier, Thomas Munro) - Fix several one-byte buffer over-reads in to_number() + Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut) - In several cases the to_number() function would read one + In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory. @@ -2527,8 +2527,8 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Do not run the planner on the query contained in CREATE - MATERIALIZED VIEW or CREATE TABLE AS - when WITH NO DATA is specified (Michael Paquier, + MATERIALIZED VIEW or CREATE TABLE AS + when WITH NO DATA is specified (Michael Paquier, Tom Lane) @@ -2542,7 +2542,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 Avoid unsafe intermediate state during expensive paths - through heap_update() (Masahiko Sawada, Andres Freund) + through heap_update() (Masahiko Sawada, Andres Freund) @@ -2568,15 +2568,15 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Avoid unnecessary could not serialize access errors when - acquiring FOR KEY SHARE row locks in serializable mode + Avoid unnecessary could not serialize access errors when + acquiring FOR KEY SHARE row locks in serializable mode (Álvaro Herrera) - Avoid crash in postgres -C when the specified variable + Avoid crash in postgres -C when the specified variable has a null string value (Michael Paquier) @@ -2619,12 +2619,12 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 - Avoid consuming a transaction ID during VACUUM + Avoid consuming a transaction ID during VACUUM (Alexander Korotkov) - Some cases in VACUUM unnecessarily caused an XID to be + Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing. @@ -2640,12 +2640,12 @@ Branch: REL9_2_STABLE [294509ea9] 2016-05-25 19:39:49 -0400 Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 --> - Avoid canceling hot-standby queries during VACUUM FREEZE + Avoid canceling hot-standby queries during VACUUM FREEZE (Simon Riggs, Álvaro Herrera) - VACUUM FREEZE on an otherwise-idle master server could + VACUUM FREEZE on an otherwise-idle master server could result in unnecessary cancellations of queries on its standby servers. @@ -2660,15 +2660,15 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 The usual symptom of this bug is errors - like MultiXactId NNN has not been created + like MultiXactId NNN has not been created yet -- apparent wraparound. 
- When a manual ANALYZE specifies a column list, don't - reset the table's changes_since_analyze counter + When a manual ANALYZE specifies a column list, don't + reset the table's changes_since_analyze counter (Tom Lane) @@ -2680,7 +2680,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix ANALYZE's overestimation of n_distinct + Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane) @@ -2713,7 +2713,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - This mistake prevented VACUUM from completing in some + This mistake prevented VACUUM from completing in some cases involving corrupt b-tree indexes. @@ -2727,8 +2727,8 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix contrib/btree_gin to handle the smallest - possible bigint value correctly (Peter Eisentraut) + Fix contrib/btree_gin to handle the smallest + possible bigint value correctly (Peter Eisentraut) @@ -2741,53 +2741,53 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure - that PQserverVersion() returns the correct value for + that PQserverVersion() returns the correct value for such cases. - Fix ecpg's code for unsigned long long + Fix ecpg's code for unsigned long long array elements (Michael Meskes) - In pg_dump with both - Improve handling of SIGTERM/control-C in - parallel pg_dump and pg_restore (Tom + Improve handling of SIGTERM/control-C in + parallel pg_dump and pg_restore (Tom Lane) Make sure that the worker processes will exit promptly, and also arrange to send query-cancel requests to the connected backends, in case they - are doing something long-running such as a CREATE INDEX. + are doing something long-running such as a CREATE INDEX. - Fix error reporting in parallel pg_dump - and pg_restore (Tom Lane) + Fix error reporting in parallel pg_dump + and pg_restore (Tom Lane) - Previously, errors reported by pg_dump - or pg_restore worker processes might never make it to + Previously, errors reported by pg_dump + or pg_restore worker processes might never make it to the user's console, because the messages went through the master process, and there were various deadlock scenarios that would prevent the master process from passing on the messages. Instead, just print - everything to stderr. In some cases this will result in + everything to stderr. In some cases this will result in duplicate messages (for instance, if all the workers report a server shutdown), but that seems better than no message. 
@@ -2795,8 +2795,8 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Ensure that parallel pg_dump - or pg_restore on Windows will shut down properly + Ensure that parallel pg_dump + or pg_restore on Windows will shut down properly after an error (Kyotaro Horiguchi) @@ -2808,7 +2808,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Make pg_dump behave better when built without zlib + Make pg_dump behave better when built without zlib support (Kyotaro Horiguchi) @@ -2820,7 +2820,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Make pg_basebackup accept -Z 0 as + Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao) @@ -2841,13 +2841,13 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Be more predictable about reporting statement timeout - versus lock timeout (Tom Lane) + Be more predictable about reporting statement timeout + versus lock timeout (Tom Lane) On heavily loaded machines, the regression tests sometimes failed due - to reporting lock timeout even though the statement timeout + to reporting lock timeout even though the statement timeout should have occurred first. @@ -2867,7 +2867,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Update our copy of the timezone code to match - IANA's tzcode release 2016c (Tom Lane) + IANA's tzcode release 2016c (Tom Lane) @@ -2879,7 +2879,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Update time zone data files to tzdata release 2016f + Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco. @@ -2934,7 +2934,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. Failures have been reported specifically when a client application - uses SSL connections in libpq concurrently with + uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection. @@ -2943,7 +2943,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix failed to build any N-way joins + Fix failed to build any N-way joins planner error with a full join enclosed in the right-hand side of a left join (Tom Lane) @@ -2957,10 +2957,10 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Given a three-or-more-way equivalence class of variables, such - as X.X = Y.Y = Z.Z, it was possible for the planner to omit + as X.X = Y.Y = Z.Z, it was possible for the planner to omit some of the tests needed to enforce that all the variables are actually equal, leading to join rows being output that didn't satisfy - the WHERE clauses. For various reasons, erroneous plans + the WHERE clauses. For various reasons, erroneous plans were seldom selected in practice, so that this bug has gone undetected for a long time. @@ -2981,14 +2981,14 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 The memory leak would typically not amount to much in simple queries, but it could be very substantial during a large GIN index build with - high maintenance_work_mem. + high maintenance_work_mem. 
- Fix possible misbehavior of TH, th, - and Y,YYY format codes in to_timestamp() + Fix possible misbehavior of TH, th, + and Y,YYY format codes in to_timestamp() (Tom Lane) @@ -3000,29 +3000,29 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix dumping of rules and views in which the array - argument of a value operator - ANY (array) construct is a sub-SELECT + Fix dumping of rules and views in which the array + argument of a value operator + ANY (array) construct is a sub-SELECT (Tom Lane) - Disallow newlines in ALTER SYSTEM parameter values + Disallow newlines in ALTER SYSTEM parameter values (Tom Lane) The configuration-file parser doesn't support embedded newlines in string literals, so we mustn't allow them in values to be inserted - by ALTER SYSTEM. + by ALTER SYSTEM. - Fix ALTER TABLE ... REPLICA IDENTITY USING INDEX to + Fix ALTER TABLE ... REPLICA IDENTITY USING INDEX to work properly if an index on OID is selected (David Rowley) @@ -3048,19 +3048,19 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Make pg_regress use a startup timeout from the - PGCTLTIMEOUT environment variable, if that's set (Tom Lane) + Make pg_regress use a startup timeout from the + PGCTLTIMEOUT environment variable, if that's set (Tom Lane) This is for consistency with a behavior recently added - to pg_ctl; it eases automated testing on slow machines. + to pg_ctl; it eases automated testing on slow machines. - Fix pg_upgrade to correctly restore extension + Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane) @@ -3068,20 +3068,20 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no - immediate ill effects, but would cause later pg_dump + immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore. - Fix pg_upgrade to not fail when new-cluster TOAST rules + Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old (Tom Lane) - pg_upgrade had special-case code to handle the - situation where the new PostgreSQL version thinks that + pg_upgrade had special-case code to handle the + situation where the new PostgreSQL version thinks that a table should have a TOAST table while the old version did not. 
That code was broken, so remove it, and instead do nothing in such cases; there seems no reason to believe that we can't get along fine without @@ -3092,22 +3092,22 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Reduce the number of SysV semaphores used by a build configured with - --disable-spinlocks (Tom Lane) - Rename internal function strtoi() - to strtoint() to avoid conflict with a NetBSD library + Rename internal function strtoi() + to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro) - Fix reporting of errors from bind() - and listen() system calls on Windows (Tom Lane) @@ -3120,19 +3120,19 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix putenv() to work properly with Visual Studio 2013 + Fix putenv() to work properly with Visual Studio 2013 (Michael Paquier) - Avoid possibly-unsafe use of Windows' FormatMessage() + Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich) - Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where + Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful. @@ -3140,9 +3140,9 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Update time zone data files to tzdata release 2016d + Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone - names Europe/Kirov and Asia/Tomsk to reflect + names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions. @@ -3188,29 +3188,29 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Fix incorrect handling of NULL index entries in - indexed ROW() comparisons (Tom Lane) An index search using a row comparison such as ROW(a, b) > - ROW('x', 'y') would stop upon reaching a NULL entry in - the b column, ignoring the fact that there might be - non-NULL b values associated with later values - of a. + ROW('x', 'y') would stop upon reaching a NULL entry in + the b column, ignoring the fact that there might be + non-NULL b values associated with later values + of a. Avoid unlikely data-loss scenarios due to renaming files without - adequate fsync() calls before and after (Michael Paquier, + adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund) - Fix bug in json_to_record() when a field of its input + Fix bug in json_to_record() when a field of its input object contains a sub-object with a field name matching one of the requested output column names (Tom Lane) @@ -3219,7 +3219,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Fix misformatting of negative time zone offsets - by to_char()'s OF format code + by to_char()'s OF format code (Thomas Munro, Tom Lane) @@ -3232,7 +3232,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Previously, standby servers would delay application of WAL records in - response to recovery_min_apply_delay even while replaying + response to recovery_min_apply_delay even while replaying the initial portion of WAL needed to make their database state valid. Since the standby is useless until it's reached a consistent database state, this was deemed unhelpful.
@@ -3241,7 +3241,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Correctly handle cases where pg_subtrans is close to XID + Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes) @@ -3253,44 +3253,44 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Trouble cases included tuples larger than one page when replica - identity is FULL, UPDATEs that change a + identity is FULL, UPDATEs that change a primary key within a transaction large enough to be spooled to disk, incorrect reports of subxact logged without previous toplevel - record, and incorrect reporting of a transaction's commit time. + record, and incorrect reporting of a transaction's commit time. Fix planner error with nested security barrier views when the outer - view has a WHERE clause containing a correlated subquery + view has a WHERE clause containing a correlated subquery (Dean Rasheed) - Fix corner-case crash due to trying to free localeconv() + Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane) - Fix parsing of affix files for ispell dictionaries + Fix parsing of affix files for ispell dictionaries (Tom Lane) The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for - example I in Turkish UTF8 locales. + example I in Turkish UTF8 locales. - Avoid use of sscanf() to parse ispell + Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov) @@ -3316,27 +3316,27 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Fix psql's tab completion logic to handle multibyte + Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas) - Fix psql's tab completion for - SECURITY LABEL (Tom Lane) + Fix psql's tab completion for + SECURITY LABEL (Tom Lane) - Pressing TAB after SECURITY LABEL might cause a crash + Pressing TAB after SECURITY LABEL might cause a crash or offering of inappropriate keywords. - Make pg_ctl accept a wait timeout from the - PGCTLTIMEOUT environment variable, if none is specified on + Make pg_ctl accept a wait timeout from the + PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch) @@ -3350,26 +3350,26 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Fix incorrect test for Windows service status - in pg_ctl (Manuel Mathar) + in pg_ctl (Manuel Mathar) The previous set of minor releases attempted to - fix pg_ctl to properly determine whether to send log + fix pg_ctl to properly determine whether to send log messages to Window's Event Log, but got the test backwards. 
- Fix pgbench to correctly handle the combination - of -C and -M prepared options (Tom Lane) + Fix pgbench to correctly handle the combination + of -C and -M prepared options (Tom Lane) - In pg_upgrade, skip creating a deletion script when + In pg_upgrade, skip creating a deletion script when the new data directory is inside the old data directory (Bruce Momjian) @@ -3397,21 +3397,21 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Fix multiple mistakes in the statistics returned - by contrib/pgstattuple's pgstatindex() + by contrib/pgstattuple's pgstatindex() function (Tom Lane) - Remove dependency on psed in MSVC builds, since it's no + Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan) - Update time zone data files to tzdata release 2016c + Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia @@ -3447,7 +3447,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 However, if you are upgrading an installation that contains any GIN - indexes that use the (non-default) jsonb_path_ops operator + indexes that use the (non-default) jsonb_path_ops operator class, see the first changelog entry below. @@ -3471,19 +3471,19 @@ Branch: REL9_4_STABLE [788e35ac0] 2015-11-05 18:15:48 -0500 - Fix inconsistent hash calculations in jsonb_path_ops GIN + Fix inconsistent hash calculations in jsonb_path_ops GIN indexes (Tom Lane) - When processing jsonb values that contain both scalars and + When processing jsonb values that contain both scalars and sub-objects at the same nesting level, for example an array containing both scalars and sub-arrays, key hash values could be calculated differently than they would be for the same key in a different context. This could result in queries not finding entries that they should find. Fixing this means that existing indexes may now be inconsistent with the new hash calculation code. Users - should REINDEX jsonb_path_ops GIN indexes after + should REINDEX jsonb_path_ops GIN indexes after installing this update to make sure that all searches work as expected. @@ -3513,18 +3513,18 @@ Branch: REL9_1_STABLE [dea6da132] 2015-10-06 17:15:27 -0400 - Perform an immediate shutdown if the postmaster.pid file + Perform an immediate shutdown if the postmaster.pid file is removed (Tom Lane) The postmaster now checks every minute or so - that postmaster.pid is still there and still contains its + that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had - received SIGQUIT. The main motivation for this change + received SIGQUIT. The main motivation for this change is to ensure that failed buildfarm runs will get cleaned up without manual intervention; but it also serves to limit the bad effects if a - DBA forcibly removes postmaster.pid and then starts a new + DBA forcibly removes postmaster.pid and then starts a new postmaster. 
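As a concrete illustration of the REINDEX advice in the jsonb_path_ops entry above, a session along these lines can be used. This is only a sketch, not part of the upstream notes: the catalog lookup is an assumption (it matches pg_index.indclass entries against pg_opclass by operator-class name), and my_jsonb_idx is a hypothetical index name.

    -- Sketch: list GIN indexes built with the jsonb_path_ops operator class.
    -- (Assumption: checking pg_index.indclass against pg_opclass by name is
    -- enough to identify the affected indexes.)
    SELECT i.indexrelid::regclass AS index_name
    FROM pg_index i
    JOIN pg_opclass o ON o.oid = ANY (i.indclass::oid[])
    WHERE o.opcname = 'jsonb_path_ops';

    -- Rebuild each index reported above (hypothetical index name):
    REINDEX INDEX my_jsonb_idx;

Rebuilding recomputes every stored hash value with the corrected code, which is why the entry recommends it after installing the update.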
@@ -3541,7 +3541,7 @@ Branch: REL9_1_STABLE [08322daed] 2015-10-31 14:36:58 -0500 - In SERIALIZABLE transaction isolation mode, serialization + In SERIALIZABLE transaction isolation mode, serialization anomalies could be missed due to race conditions during insertions (Kevin Grittner, Thomas Munro) @@ -3560,7 +3560,7 @@ Branch: REL9_1_STABLE [5f9a86b35] 2015-12-12 14:19:29 +0100 Fix failure to emit appropriate WAL records when doing ALTER - TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, + TABLE ... SET TABLESPACE for unlogged relations (Michael Paquier, Andres Freund) @@ -3614,7 +3614,7 @@ Branch: REL9_1_STABLE [60ba32cb5] 2015-11-20 14:55:29 -0500 - Fix ALTER COLUMN TYPE to reconstruct inherited check + Fix ALTER COLUMN TYPE to reconstruct inherited check constraints properly (Tom Lane) @@ -3629,7 +3629,7 @@ Branch: REL9_1_STABLE [7e29e7f55] 2015-12-21 19:49:15 -0300 - Fix REASSIGN OWNED to change ownership of composite types + Fix REASSIGN OWNED to change ownership of composite types properly (Álvaro Herrera) @@ -3644,7 +3644,7 @@ Branch: REL9_1_STABLE [ab14c1383] 2015-12-21 19:16:15 -0300 - Fix REASSIGN OWNED and ALTER OWNER to correctly + Fix REASSIGN OWNED and ALTER OWNER to correctly update granted-permissions lists when changing owners of data types, foreign data wrappers, or foreign servers (Bruce Momjian, Álvaro Herrera) @@ -3663,7 +3663,7 @@ Branch: REL9_1_STABLE [f44c5203b] 2015-12-11 18:39:09 -0300 - Fix REASSIGN OWNED to ignore foreign user mappings, + Fix REASSIGN OWNED to ignore foreign user mappings, rather than fail (Álvaro Herrera) @@ -3697,13 +3697,13 @@ Branch: REL9_3_STABLE [0a34ff7e9] 2015-12-07 17:41:45 -0500 - Fix planner's handling of LATERAL references (Tom + Fix planner's handling of LATERAL references (Tom Lane) This fixes some corner cases that led to failed to build any - N-way joins or could not devise a query plan planner + N-way joins or could not devise a query plan planner failures. @@ -3753,9 +3753,9 @@ Branch: REL9_3_STABLE [faf18a905] 2015-11-16 13:45:17 -0500 - Speed up generation of unique table aliases in EXPLAIN and + Speed up generation of unique table aliases in EXPLAIN and rule dumping, and ensure that generated aliases do not - exceed NAMEDATALEN (Tom Lane) + exceed NAMEDATALEN (Tom Lane) @@ -3771,8 +3771,8 @@ Branch: REL9_1_STABLE [7b21d1bca] 2015-11-15 14:41:09 -0500 - Fix dumping of whole-row Vars in ROW() - and VALUES() lists (Tom Lane) + Fix dumping of whole-row Vars in ROW() + and VALUES() lists (Tom Lane) @@ -3785,8 +3785,8 @@ Branch: REL9_4_STABLE [4f33572ee] 2015-10-20 11:06:24 -0700 - Translation of minus-infinity dates and timestamps to json - or jsonb incorrectly rendered them as plus-infinity (Tom Lane) + Translation of minus-infinity dates and timestamps to json + or jsonb incorrectly rendered them as plus-infinity (Tom Lane) @@ -3802,7 +3802,7 @@ Branch: REL9_1_STABLE [728a2ac21] 2015-11-17 15:47:12 -0500 - Fix possible internal overflow in numeric division + Fix possible internal overflow in numeric division (Dean Rasheed) @@ -3894,7 +3894,7 @@ Branch: REL9_1_STABLE [b94c2b6a6] 2015-10-16 15:36:17 -0400 This causes the code to emit regular expression is too - complex errors in some cases that previously used unreasonable + complex errors in some cases that previously used unreasonable amounts of time and memory. 
@@ -3929,14 +3929,14 @@ Branch: REL9_1_STABLE [b00c79b5b] 2015-10-16 14:43:18 -0400 - Make %h and %r escapes - in log_line_prefix work for messages emitted due - to log_connections (Tom Lane) + Make %h and %r escapes + in log_line_prefix work for messages emitted due + to log_connections (Tom Lane) - Previously, %h/%r started to work just after a - new session had emitted the connection received log message; + Previously, %h/%r started to work just after a + new session had emitted the connection received log message; now they work for that message too. @@ -3959,7 +3959,7 @@ Branch: REL9_1_STABLE [b0d858359] 2015-10-13 11:21:33 -0400 This oversight resulted in failure to recover from crashes - whenever logging_collector is turned on. + whenever logging_collector is turned on. @@ -4009,13 +4009,13 @@ Branch: REL9_1_STABLE [db462a44e] 2015-12-17 16:55:51 -0500 - In psql, ensure that libreadline's idea + In psql, ensure that libreadline's idea of the screen size is updated when the terminal window size changes (Merlin Moncure) - Previously, libreadline did not notice if the window + Previously, libreadline did not notice if the window was resized during query output, leading to strange behavior during later input of multiline queries. @@ -4023,8 +4023,8 @@ Branch: REL9_1_STABLE [db462a44e] 2015-12-17 16:55:51 -0500 - Fix psql's \det command to interpret its - pattern argument the same way as other \d commands with + Fix psql's \det command to interpret its + pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart) @@ -4041,7 +4041,7 @@ Branch: REL9_1_STABLE [6430a11fa] 2015-11-25 17:31:54 -0500 - Avoid possible crash in psql's \c command + Avoid possible crash in psql's \c command when previous connection was via Unix socket and command specifies a new hostname and same username (Tom Lane) @@ -4059,21 +4059,21 @@ Branch: REL9_1_STABLE [c869a7d5b] 2015-10-12 18:30:37 -0400 - In pg_ctl start -w, test child process status directly + In pg_ctl start -w, test child process status directly rather than relying on heuristics (Tom Lane, Michael Paquier) - Previously, pg_ctl relied on an assumption that the new - postmaster would always create postmaster.pid within five + Previously, pg_ctl relied on an assumption that the new + postmaster would always create postmaster.pid within five seconds. But that can fail on heavily-loaded systems, - causing pg_ctl to report incorrectly that the + causing pg_ctl to report incorrectly that the postmaster failed to start. Except on Windows, this change also means that a pg_ctl start - -w done immediately after another such command will now reliably + -w done immediately after another such command will now reliably fail, whereas previously it would report success if done within two seconds of the first command. @@ -4091,23 +4091,23 @@ Branch: REL9_1_STABLE [87deb55a4] 2015-11-08 17:31:24 -0500 - In pg_ctl start -w, don't attempt to use a wildcard listen + In pg_ctl start -w, don't attempt to use a wildcard listen address to connect to the postmaster (Kondo Yuta) - On Windows, pg_ctl would fail to detect postmaster - startup if listen_addresses is set to 0.0.0.0 - or ::, because it would try to use that value verbatim as + On Windows, pg_ctl would fail to detect postmaster + startup if listen_addresses is set to 0.0.0.0 + or ::, because it would try to use that value verbatim as the address to connect to, which doesn't work. 
Instead assume - that 127.0.0.1 or ::1, respectively, is the + that 127.0.0.1 or ::1, respectively, is the right thing to use. - In pg_ctl on Windows, check service status to decide + In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier) @@ -4127,18 +4127,18 @@ Branch: REL9_1_STABLE [6df62ef43] 2015-11-23 00:32:01 -0500 - In pg_dump and pg_basebackup, adopt + In pg_dump and pg_basebackup, adopt the GNU convention for handling tar-archive members exceeding 8GB (Tom Lane) The POSIX standard for tar file format does not allow archive member files to exceed 8GB, but most modern implementations - of tar support an extension that fixes that. Adopt - this extension so that pg_dump with -Ft no longer fails on tables with more than 8GB of data, and so - that pg_basebackup can handle files larger than 8GB. In addition, fix some portability issues that could cause failures for members between 4GB and 8GB on some platforms. Potentially these problems could cause unrecoverable data loss due to unreadable backup files. @@ -4148,16 +4148,16 @@ Branch: REL9_1_STABLE [6df62ef43] 2015-11-23 00:32:01 -0500 - Fix assorted corner-case bugs in pg_dump's processing + Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane) - Make pg_dump mark a view's triggers as needing to be + Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during - parallel pg_restore (Tom Lane) + parallel pg_restore (Tom Lane) @@ -4180,14 +4180,14 @@ Branch: REL9_1_STABLE [e4959fb5c] 2016-01-02 19:04:45 -0500 Ensure that relation option values are properly quoted - in pg_dump (Kouhei Sutou, Tom Lane) + in pg_dump (Kouhei Sutou, Tom Lane) A reloption value that isn't a simple identifier or number could lead to dump/reload failures due to syntax errors in CREATE statements - issued by pg_dump. This is not an issue with any - reloption currently supported by core PostgreSQL, but + issued by pg_dump. This is not an issue with any + reloption currently supported by core PostgreSQL, but extensions could allow reloptions that cause the problem.
@@ -4202,7 +4202,7 @@ Branch: REL9_3_STABLE [534a4159c] 2015-12-23 14:25:31 -0500 - Avoid repeated password prompts during parallel pg_dump + Avoid repeated password prompts during parallel pg_dump (Zeus Kronion) @@ -4225,14 +4225,14 @@ Branch: REL9_1_STABLE [c36064e43] 2015-11-24 17:18:27 -0500 - Fix pg_upgrade's file-copying code to handle errors + Fix pg_upgrade's file-copying code to handle errors properly on Windows (Bruce Momjian) - Install guards in pgbench against corner-case overflow + Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier) @@ -4250,22 +4250,22 @@ Branch: REL9_2_STABLE [4fb9e6109] 2015-12-28 10:50:35 -0300 Fix failure to localize messages emitted - by pg_receivexlog and pg_recvlogical + by pg_receivexlog and pg_recvlogical (Ioseph Kim) - Avoid dump/reload problems when using both plpython2 - and plpython3 (Tom Lane) + Avoid dump/reload problems when using both plpython2 + and plpython3 (Tom Lane) - In principle, both versions of PL/Python can be used in + In principle, both versions of PL/Python can be used in the same database, though not in the same session (because the two - versions of libpython cannot safely be used concurrently). - However, pg_restore and pg_upgrade both + versions of libpython cannot safely be used concurrently). + However, pg_restore and pg_upgrade both do things that can fall foul of the same-session restriction. Work around that by changing the timing of the check. @@ -4273,7 +4273,7 @@ Branch: REL9_2_STABLE [4fb9e6109] 2015-12-28 10:50:35 -0300 - Fix PL/Python regression tests to pass with Python 3.5 + Fix PL/Python regression tests to pass with Python 3.5 (Peter Eisentraut) @@ -4288,29 +4288,29 @@ Branch: REL9_3_STABLE [db6e8e162] 2015-11-12 13:03:53 -0500 - Fix premature clearing of libpq's input buffer when + Fix premature clearing of libpq's input buffer when socket EOF is seen (Tom Lane) - This mistake caused libpq to sometimes not report the + This mistake caused libpq to sometimes not report the backend's final error message before reporting server closed the - connection unexpectedly. + connection unexpectedly. - Prevent certain PL/Java parameters from being set by + Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch) - This change mitigates a PL/Java security bug - (CVE-2016-0766), which was fixed in PL/Java by marking + This change mitigates a PL/Java security bug + (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for - sites that update PostgreSQL more frequently - than PL/Java, make the core code aware of them also. + sites that update PostgreSQL more frequently + than PL/Java, make the core code aware of them also. 
@@ -4326,7 +4326,7 @@ Branch: REL9_1_STABLE [4b58ded74] 2015-12-14 18:48:49 +0200 - Improve libpq's handling of out-of-memory situations + Improve libpq's handling of out-of-memory situations (Michael Paquier, Amit Kapila, Heikki Linnakangas) @@ -4343,7 +4343,7 @@ Branch: REL9_1_STABLE [a9bcd8370] 2015-10-18 10:17:12 +0200 Fix order of arguments - in ecpg-generated typedef statements + in ecpg-generated typedef statements (Michael Meskes) @@ -4360,29 +4360,29 @@ Branch: REL9_1_STABLE [84387496f] 2015-12-01 11:42:52 -0500 - Use %g not %f format - in ecpg's PGTYPESnumeric_from_double() + Use %g not %f format + in ecpg's PGTYPESnumeric_from_double() (Tom Lane) - Fix ecpg-supplied header files to not contain comments + Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes) - Such a comment is rejected by ecpg. It's not yet clear - whether ecpg itself should be changed. + Such a comment is rejected by ecpg. It's not yet clear + whether ecpg itself should be changed. - Fix hstore_to_json_loose()'s test for whether - an hstore value can be converted to a JSON number (Tom Lane) + Fix hstore_to_json_loose()'s test for whether + an hstore value can be converted to a JSON number (Tom Lane) @@ -4403,15 +4403,15 @@ Branch: REL9_1_STABLE [1b6102eb7] 2015-12-27 13:03:19 -0300 - Ensure that contrib/pgcrypto's crypt() + Ensure that contrib/pgcrypto's crypt() function can be interrupted by query cancel (Andreas Karlsson) - In contrib/postgres_fdw, fix bugs triggered by use - of tableoid in data-modifying commands (Etsuro Fujita, + In contrib/postgres_fdw, fix bugs triggered by use + of tableoid in data-modifying commands (Etsuro Fujita, Robert Haas) @@ -4433,7 +4433,7 @@ Branch: REL9_2_STABLE [7f94a5c10] 2015-12-10 10:19:31 -0500 - Accept flex versions later than 2.5.x + Accept flex versions later than 2.5.x (Tom Lane, Michael Paquier) @@ -4467,19 +4467,19 @@ Branch: REL9_1_STABLE [2a37a103b] 2015-12-11 16:14:48 -0500 - Install our missing script where PGXS builds can find it + Install our missing script where PGXS builds can find it (Jim Nasby) This allows sane behavior in a PGXS build done on a machine where build - tools such as bison are missing. + tools such as bison are missing. - Ensure that dynloader.h is included in the installed + Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier) @@ -4497,11 +4497,11 @@ Branch: REL9_1_STABLE [386dcd539] 2015-12-11 19:08:40 -0500 Add variant regression test expected-output file to match behavior of - current libxml2 (Tom Lane) + current libxml2 (Tom Lane) - The fix for libxml2's CVE-2015-7499 causes it not to + The fix for libxml2's CVE-2015-7499 causes it not to output error context reports in some cases where it used to do so. This seems to be a bug, but we'll probably have to live with it for some time, so work around it. @@ -4510,7 +4510,7 @@ Branch: REL9_1_STABLE [386dcd539] 2015-12-11 19:08:40 -0500 - Update time zone data files to tzdata release 2016a for + Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan. 
@@ -4563,13 +4563,13 @@ Branch: REL9_3_STABLE [f8862172e] 2015-10-05 10:06:34 -0400 - Guard against stack overflows in json parsing + Guard against stack overflows in json parsing (Oskari Saarenmaa) - If an application constructs PostgreSQL json - or jsonb values from arbitrary user input, the application's + If an application constructs PostgreSQL json + or jsonb values from arbitrary user input, the application's users can reliably crash the PostgreSQL server, causing momentary denial of service. (CVE-2015-5289) @@ -4588,8 +4588,8 @@ Branch: REL9_0_STABLE [188e081ef] 2015-10-05 10:06:36 -0400 - Fix contrib/pgcrypto to detect and report - too-short crypt() salts (Josh Kupershmidt) + Fix contrib/pgcrypto to detect and report + too-short crypt() salts (Josh Kupershmidt) @@ -4634,7 +4634,7 @@ Branch: REL9_4_STABLE [bab959906] 2015-08-02 20:09:05 +0300 Fix possible deadlock during WAL insertion - when commit_delay is set (Heikki Linnakangas) + when commit_delay is set (Heikki Linnakangas) @@ -4665,13 +4665,13 @@ Branch: REL9_0_STABLE [45c69178b] 2015-06-25 14:39:06 -0400 - Fix insertion of relations into the relation cache init file + Fix insertion of relations into the relation cache init file (Tom Lane) An oversight in a patch in the most recent minor releases - caused pg_trigger_tgrelid_tgname_index to be omitted + caused pg_trigger_tgrelid_tgname_index to be omitted from the init file. Subsequent sessions detected this, then deemed the init file to be broken and silently ignored it, resulting in a significant degradation in session startup time. In addition to fixing @@ -4711,7 +4711,7 @@ Branch: REL9_0_STABLE [2d4336cf8] 2015-09-30 23:32:23 -0400 - Improve LISTEN startup time when there are many unread + Improve LISTEN startup time when there are many unread notifications (Matt Newell) @@ -4731,7 +4731,7 @@ Branch: REL9_3_STABLE [1bcc9e60a] 2015-09-25 13:16:31 -0400 - This was seen primarily when restoring pg_dump output + This was seen primarily when restoring pg_dump output for databases with many thousands of tables. @@ -4755,7 +4755,7 @@ Branch: REL9_0_STABLE [444b2ebee] 2015-07-28 22:06:32 +0200 too many bugs in practice, both in the underlying OpenSSL library and in our usage of it. Renegotiation will be removed entirely in 9.5 and later. In the older branches, just change the default value - of ssl_renegotiation_limit to zero (disabled). + of ssl_renegotiation_limit to zero (disabled). 
@@ -4779,7 +4779,7 @@ Branch: REL9_0_STABLE [eeb0b7830] 2015-10-05 11:57:25 +0200 - Lower the minimum values of the *_freeze_max_age parameters + Lower the minimum values of the *_freeze_max_age parameters (Andres Freund) @@ -4802,7 +4802,7 @@ Branch: REL9_0_STABLE [b09446ed7] 2015-08-04 13:12:03 -0400 - Limit the maximum value of wal_buffers to 2GB to avoid + Limit the maximum value of wal_buffers to 2GB to avoid server crashes (Josh Berkus) @@ -4816,8 +4816,8 @@ Branch: REL9_3_STABLE [5a56c2545] 2015-06-28 18:38:06 -0400 Avoid logging complaints when a parameter that can only be set at - server start appears multiple times in postgresql.conf, - and fix counting of line numbers after an include_dir + server start appears multiple times in postgresql.conf, + and fix counting of line numbers after an include_dir directive (Tom Lane) @@ -4835,7 +4835,7 @@ Branch: REL9_0_STABLE [a89781e34] 2015-09-21 12:12:16 -0400 - Fix rare internal overflow in multiplication of numeric values + Fix rare internal overflow in multiplication of numeric values (Dean Rasheed) @@ -4862,8 +4862,8 @@ Branch: REL9_2_STABLE [8dacb29ca] 2015-10-05 10:06:35 -0400 Guard against hard-to-reach stack overflows involving record types, - range types, json, jsonb, tsquery, - ltxtquery and query_int (Noah Misch) + range types, json, jsonb, tsquery, + ltxtquery and query_int (Noah Misch) @@ -4887,14 +4887,14 @@ Branch: REL9_0_STABLE [92d956f51] 2015-09-07 20:47:06 +0100 - Fix handling of DOW and DOY in datetime input + Fix handling of DOW and DOY in datetime input (Greg Stark) These tokens aren't meant to be used in datetime values, but previously they resulted in opaque internal error messages rather - than invalid input syntax. + than invalid input syntax. @@ -4936,7 +4936,7 @@ Branch: REL9_0_STABLE [b875ca09f] 2015-10-02 15:00:52 -0400 Add recursion depth protections to regular expression, SIMILAR - TO, and LIKE matching (Tom Lane) + TO, and LIKE matching (Tom Lane) @@ -5052,8 +5052,8 @@ Branch: REL9_0_STABLE [bd327627f] 2015-08-04 18:18:47 -0400 - Fix unexpected out-of-memory situation during sort errors - when using tuplestores with small work_mem settings (Tom + Fix unexpected out-of-memory situation during sort errors + when using tuplestores with small work_mem settings (Tom Lane) @@ -5071,7 +5071,7 @@ Branch: REL9_0_STABLE [36522d627] 2015-07-16 22:57:46 -0400 - Fix very-low-probability stack overrun in qsort (Tom Lane) + Fix very-low-probability stack overrun in qsort (Tom Lane) @@ -5093,8 +5093,8 @@ Branch: REL9_0_STABLE [d637a899c] 2015-10-04 15:55:07 -0400 - Fix invalid memory alloc request size failure in hash joins - with large work_mem settings (Tomas Vondra, Tom Lane) + Fix invalid memory alloc request size failure in hash joins + with large work_mem settings (Tomas Vondra, Tom Lane) @@ -5172,9 +5172,9 @@ Branch: REL9_0_STABLE [7b4b57fc4] 2015-08-12 21:19:10 -0400 These mistakes could lead to incorrect query plans that would give wrong answers, or to assertion failures in assert-enabled builds, or to odd planner errors such as could not devise a query plan for the - given query, could not find pathkey item to - sort, plan should not reference subplan's variable, - or failed to assign all NestLoopParams to plan nodes. + given query, could not find pathkey item to + sort, plan should not reference subplan's variable, + or failed to assign all NestLoopParams to plan nodes. Thanks are due to Andreas Seltenreich and Piotr Stefaniak for fuzz testing that exposed these problems. 
@@ -5190,7 +5190,7 @@ Branch: REL9_2_STABLE [e538e510e] 2015-06-22 18:53:27 -0400 - Improve planner's performance for UPDATE/DELETE + Improve planner's performance for UPDATE/DELETE on large inheritance sets (Tom Lane, Dean Rasheed) @@ -5232,12 +5232,12 @@ Branch: REL9_0_STABLE [8b53c087d] 2015-08-02 14:54:44 -0400 During postmaster shutdown, ensure that per-socket lock files are removed and listen sockets are closed before we remove - the postmaster.pid file (Tom Lane) + the postmaster.pid file (Tom Lane) This avoids race-condition failures if an external script attempts to - start a new postmaster as soon as pg_ctl stop returns. + start a new postmaster as soon as pg_ctl stop returns. @@ -5311,7 +5311,7 @@ Branch: REL9_0_STABLE [f527c0a2a] 2015-07-28 17:34:00 -0400 - Do not print a WARNING when an autovacuum worker is already + Do not print a WARNING when an autovacuum worker is already gone when we attempt to signal it, and reduce log verbosity for such signals (Tom Lane) @@ -5389,7 +5389,7 @@ Branch: REL9_2_STABLE [f4297f8c5] 2015-07-27 12:32:48 +0300 - VACUUM attempted to recycle such pages, but did so in a + VACUUM attempted to recycle such pages, but did so in a way that wasn't crash-safe. @@ -5408,7 +5408,7 @@ Branch: REL9_0_STABLE [40ad78220] 2015-07-23 01:30:19 +0300 Fix off-by-one error that led to otherwise-harmless warnings - about apparent wraparound in subtrans/multixact truncation + about apparent wraparound in subtrans/multixact truncation (Thomas Munro) @@ -5426,8 +5426,8 @@ Branch: REL9_0_STABLE [e41718fa1] 2015-08-18 19:22:38 -0400 - Fix misreporting of CONTINUE and MOVE statement - types in PL/pgSQL's error context messages + Fix misreporting of CONTINUE and MOVE statement + types in PL/pgSQL's error context messages (Pavel Stehule, Tom Lane) @@ -5444,7 +5444,7 @@ Branch: REL9_1_STABLE [ca6c2f863] 2015-09-29 10:52:22 -0400 - Fix PL/Perl to handle non-ASCII error + Fix PL/Perl to handle non-ASCII error message texts correctly (Alex Hunsaker) @@ -5467,8 +5467,8 @@ Branch: REL9_1_STABLE [1d190d095] 2015-08-21 12:21:37 -0400 - Fix PL/Python crash when returning the string - representation of a record result (Tom Lane) + Fix PL/Python crash when returning the string + representation of a record result (Tom Lane) @@ -5485,8 +5485,8 @@ Branch: REL9_0_STABLE [4c11967e7] 2015-07-20 14:18:08 +0200 - Fix some places in PL/Tcl that neglected to check for - failure of malloc() calls (Michael Paquier, Álvaro + Fix some places in PL/Tcl that neglected to check for + failure of malloc() calls (Michael Paquier, Álvaro Herrera) @@ -5503,7 +5503,7 @@ Branch: REL9_1_STABLE [2d19a0e97] 2015-08-02 22:12:51 +0300 - In contrib/isn, fix output of ISBN-13 numbers that begin + In contrib/isn, fix output of ISBN-13 numbers that begin with 979 (Fabien Coelho) @@ -5522,7 +5522,7 @@ Branch: REL9_4_STABLE [93840f96c] 2015-10-04 17:58:30 -0400 - Improve contrib/pg_stat_statements' handling of + Improve contrib/pg_stat_statements' handling of query-text garbage collection (Peter Geoghegan) @@ -5543,13 +5543,13 @@ Branch: REL9_3_STABLE [b7dcb2dd4] 2015-09-24 12:47:30 -0400 - Improve contrib/postgres_fdw's handling of + Improve contrib/postgres_fdw's handling of collation-related decisions (Tom Lane) The main user-visible effect is expected to be that comparisons - involving varchar columns will be sent to the remote server + involving varchar columns will be sent to the remote server for execution in more cases than before. 
@@ -5567,7 +5567,7 @@ Branch: REL9_0_STABLE [2b189c7ec] 2015-07-07 18:45:31 +0300 - Improve libpq's handling of out-of-memory conditions + Improve libpq's handling of out-of-memory conditions (Michael Paquier, Heikki Linnakangas) @@ -5603,7 +5603,7 @@ Branch: REL9_0_STABLE [d278ff3b2] 2015-06-15 14:27:39 +0200 Fix memory leaks and missing out-of-memory checks - in ecpg (Michael Paquier) + in ecpg (Michael Paquier) @@ -5634,15 +5634,15 @@ Branch: REL9_0_STABLE [98d8c75f9] 2015-09-25 12:20:46 -0400 - Fix psql's code for locale-aware formatting of numeric + Fix psql's code for locale-aware formatting of numeric output (Tom Lane) - The formatting code invoked by \pset numericlocale on + The formatting code invoked by \pset numericlocale on did the wrong thing for some uncommon cases such as numbers with an exponent but no decimal point. It could also mangle already-localized - output from the money data type. + output from the money data type. @@ -5659,7 +5659,7 @@ Branch: REL9_0_STABLE [6087bf1a1] 2015-07-08 20:44:27 -0400 - Prevent crash in psql's \c command when + Prevent crash in psql's \c command when there is no current connection (Noah Misch) @@ -5675,7 +5675,7 @@ Branch: REL9_2_STABLE [3756c65a0] 2015-10-01 16:19:49 -0400 - Make pg_dump handle inherited NOT VALID + Make pg_dump handle inherited NOT VALID check constraints correctly (Tom Lane) @@ -5692,8 +5692,8 @@ Branch: REL9_1_STABLE [af225551e] 2015-07-25 17:16:39 -0400 - Fix selection of default zlib compression level - in pg_dump's directory output format (Andrew Dunstan) + Fix selection of default zlib compression level + in pg_dump's directory output format (Andrew Dunstan) @@ -5710,8 +5710,8 @@ Branch: REL9_0_STABLE [24aed2124] 2015-09-20 20:44:34 -0400 - Ensure that temporary files created during a pg_dump - run with tar-format output are not world-readable (Michael + Ensure that temporary files created during a pg_dump + run with tar-format output are not world-readable (Michael Paquier) @@ -5729,8 +5729,8 @@ Branch: REL9_0_STABLE [52b07779d] 2015-09-11 15:51:10 -0400 - Fix pg_dump and pg_upgrade to support - cases where the postgres or template1 database + Fix pg_dump and pg_upgrade to support + cases where the postgres or template1 database is in a non-default tablespace (Marti Raudsepp, Bruce Momjian) @@ -5748,7 +5748,7 @@ Branch: REL9_0_STABLE [298d1f808] 2015-08-10 20:10:16 -0400 - Fix pg_dump to handle object privileges sanely when + Fix pg_dump to handle object privileges sanely when dumping from a server too old to have a particular privilege type (Tom Lane) @@ -5756,11 +5756,11 @@ Branch: REL9_0_STABLE [298d1f808] 2015-08-10 20:10:16 -0400 When dumping data types from pre-9.2 servers, and when dumping functions or procedural languages from pre-7.3 - servers, pg_dump would - produce GRANT/REVOKE commands that revoked the + servers, pg_dump would + produce GRANT/REVOKE commands that revoked the owner's grantable privileges and instead granted all privileges - to PUBLIC. Since the privileges involved are - just USAGE and EXECUTE, this isn't a security + to PUBLIC. Since the privileges involved are + just USAGE and EXECUTE, this isn't a security problem, but it's certainly a surprising representation of the older systems' behavior. Fix it to leave the default privilege state alone in these cases. 
@@ -5780,12 +5780,12 @@ Branch: REL9_0_STABLE [5d175be17] 2015-08-04 19:34:12 -0400 - Fix pg_dump to dump shell types (Tom Lane) + Fix pg_dump to dump shell types (Tom Lane) Shell types (that is, not-yet-fully-defined types) aren't useful for - much, but nonetheless pg_dump should dump them. + much, but nonetheless pg_dump should dump them. @@ -5801,7 +5801,7 @@ Branch: REL9_1_STABLE [e9a859b54] 2015-07-12 16:25:52 -0400 - Fix assorted minor memory leaks in pg_dump and other + Fix assorted minor memory leaks in pg_dump and other client-side programs (Michael Paquier) @@ -5815,8 +5815,8 @@ Branch: REL9_4_STABLE [9d6352aaa] 2015-07-03 11:15:27 +0300 - Fix pgbench's progress-report behavior when a query, - or pgbench itself, gets stuck (Fabien Coelho) + Fix pgbench's progress-report behavior when a query, + or pgbench itself, gets stuck (Fabien Coelho) @@ -5845,11 +5845,11 @@ Branch: REL9_0_STABLE [b5a22d8bb] 2015-08-29 16:09:25 -0400 Fix spinlock assembly code for PPC hardware to be compatible - with AIX's native assembler (Tom Lane) + with AIX's native assembler (Tom Lane) - Building with gcc didn't work if gcc + Building with gcc didn't work if gcc had been configured to use the native assembler, which is becoming more common. @@ -5868,7 +5868,7 @@ Branch: REL9_0_STABLE [cdf596b1c] 2015-07-17 03:02:46 -0400 - On AIX, test the -qlonglong compiler option + On AIX, test the -qlonglong compiler option rather than just assuming it's safe to use (Noah Misch) @@ -5886,7 +5886,7 @@ Branch: REL9_0_STABLE [7803d5720] 2015-07-15 21:00:31 -0400 - On AIX, use -Wl,-brtllib link option to allow + On AIX, use -Wl,-brtllib link option to allow symbols to be resolved at runtime (Noah Misch) @@ -5909,7 +5909,7 @@ Branch: REL9_0_STABLE [2d8c136e7] 2015-07-29 22:54:08 -0400 Avoid use of inline functions when compiling with - 32-bit xlc, due to compiler bugs (Noah Misch) + 32-bit xlc, due to compiler bugs (Noah Misch) @@ -5925,7 +5925,7 @@ Branch: REL9_0_STABLE [b185c42c1] 2015-06-30 14:20:37 -0300 - Use librt for sched_yield() when necessary, + Use librt for sched_yield() when necessary, which it is on some Solaris versions (Oskari Saarenmaa) @@ -5939,7 +5939,7 @@ Branch: REL9_4_STABLE [a0104e080] 2015-08-14 20:23:42 -0400 - Translate encoding UHC as Windows code page 949 + Translate encoding UHC as Windows code page 949 (Noah Misch) @@ -5972,12 +5972,12 @@ Branch: REL9_4_STABLE [b2ed1682d] 2015-06-20 12:10:56 -0400 Fix postmaster startup failure due to not - copying setlocale()'s return value (Noah Misch) + copying setlocale()'s return value (Noah Misch) This has been reported on Windows systems with the ANSI code page set - to CP936 (Chinese (Simplified, PRC)), and may occur with + to CP936 (Chinese (Simplified, PRC)), and may occur with other multibyte code pages. 
@@ -5995,7 +5995,7 @@ Branch: REL9_0_STABLE [341b877d3] 2015-07-07 16:39:25 +0300 - Fix Windows install.bat script to handle target directory + Fix Windows install.bat script to handle target directory names that contain spaces (Heikki Linnakangas) @@ -6013,9 +6013,9 @@ Branch: REL9_0_STABLE [29ff43adf] 2015-07-05 12:01:02 -0400 - Make the numeric form of the PostgreSQL version number - (e.g., 90405) readily available to extension Makefiles, - as a variable named VERSION_NUM (Michael Paquier) + Make the numeric form of the PostgreSQL version number + (e.g., 90405) readily available to extension Makefiles, + as a variable named VERSION_NUM (Michael Paquier) @@ -6032,10 +6032,10 @@ Branch: REL9_0_STABLE [47ac95f37] 2015-10-02 19:16:37 -0400 - Update time zone data files to tzdata release 2015g for + Update time zone data files to tzdata release 2015g for DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, and Uruguay. There is a new zone name - America/Fort_Nelson for the Canadian Northern Rockies. + America/Fort_Nelson for the Canadian Northern Rockies. @@ -6067,7 +6067,7 @@ Branch: REL9_0_STABLE [47ac95f37] 2015-10-02 19:16:37 -0400 However, if you are upgrading an installation that was previously - upgraded using a pg_upgrade version between 9.3.0 and + upgraded using a pg_upgrade version between 9.3.0 and 9.3.4 inclusive, see the first changelog entry below. @@ -6096,46 +6096,46 @@ Branch: REL9_3_STABLE [2a9b01928] 2015-06-05 09:34:15 -0400 - Recent PostgreSQL releases introduced mechanisms to + Recent PostgreSQL releases introduced mechanisms to protect against multixact wraparound, but some of that code did not account for the possibility that it would need to run during crash recovery, when the database may not be in a consistent state. This could result in failure to restart after a crash, or failure to start up a secondary server. The lingering effects of a previously-fixed - bug in pg_upgrade could also cause such a failure, in - installations that had used pg_upgrade versions + bug in pg_upgrade could also cause such a failure, in + installations that had used pg_upgrade versions between 9.3.0 and 9.3.4. - The pg_upgrade bug in question was that it would - set oldestMultiXid to 1 in pg_control even + The pg_upgrade bug in question was that it would + set oldestMultiXid to 1 in pg_control even if the true value should be higher. With the fixes introduced in this release, such a situation will result in immediate emergency - autovacuuming until a correct oldestMultiXid value can + autovacuuming until a correct oldestMultiXid value can be determined. If that would pose a hardship, users can avoid it by - doing manual vacuuming before upgrading to this release. + doing manual vacuuming before upgrading to this release. In detail: - Check whether pg_controldata reports Latest - checkpoint's oldestMultiXid to be 1. If not, there's nothing + Check whether pg_controldata reports Latest + checkpoint's oldestMultiXid to be 1. If not, there's nothing to do. - Look in PGDATA/pg_multixact/offsets to see if there's a - file named 0000. If there is, there's nothing to do. + Look in PGDATA/pg_multixact/offsets to see if there's a + file named 0000. If there is, there's nothing to do. Otherwise, for each table that has - pg_class.relminmxid equal to 1, - VACUUM that table with + pg_class.relminmxid equal to 1, + VACUUM that table with both vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age set to zero.
(You can use the vacuum cost delay parameters described @@ -6164,7 +6164,7 @@ Branch: REL9_0_STABLE [2fe1939b0] 2015-06-07 15:32:09 -0400 With just the wrong timing of concurrent activity, a VACUUM - FULL on a system catalog might fail to update the init file + FULL on a system catalog might fail to update the init file that's used to avoid cache-loading work for new sessions. This would result in later sessions being unable to access that catalog at all. This is a very ancient bug, but it's so hard to trigger that no @@ -6185,13 +6185,13 @@ Branch: REL9_0_STABLE [dbd99c7f0] 2015-06-05 13:22:27 -0400 Avoid deadlock between incoming sessions and CREATE/DROP - DATABASE (Tom Lane) + DATABASE (Tom Lane) A new session starting in a database that is the target of - a DROP DATABASE command, or is the template for - a CREATE DATABASE command, could cause the command to wait + a DROP DATABASE command, or is the template for + a CREATE DATABASE command, could cause the command to wait for five seconds and then fail, even if the new session would have exited before that. @@ -6284,12 +6284,12 @@ Branch: REL9_3_STABLE [c2b68b1f7] 2015-05-29 17:02:58 -0400 - Avoid failures while fsync'ing data directory during + Avoid failures while fsync'ing data directory during crash restart (Abhijit Menon-Sen, Tom Lane) - In the previous minor releases we added a patch to fsync + In the previous minor releases we added a patch to fsync everything in the data directory after a crash. Unfortunately its response to any error condition was to fail, thereby preventing the server from starting up, even when the problem was quite harmless. @@ -6301,7 +6301,7 @@ Branch: REL9_3_STABLE [c2b68b1f7] 2015-05-29 17:02:58 -0400 - Also apply the same rules in initdb --sync-only. + Also apply the same rules in initdb --sync-only. This case is less critical but it should act similarly. @@ -6316,8 +6316,8 @@ Branch: REL9_2_STABLE [f3c67aad4] 2015-05-28 11:24:37 -0400 - Fix pg_get_functiondef() to show - functions' LEAKPROOF property, if set (Jeevan Chalke) + Fix pg_get_functiondef() to show + functions' LEAKPROOF property, if set (Jeevan Chalke) @@ -6329,7 +6329,7 @@ Branch: REL9_4_STABLE [9b74f32cd] 2015-05-22 10:31:29 -0400 - Fix pushJsonbValue() to unpack jbvBinary + Fix pushJsonbValue() to unpack jbvBinary objects (Andrew Dunstan) @@ -6351,14 +6351,14 @@ Branch: REL9_0_STABLE [b06649b7f] 2015-05-26 22:15:00 -0400 - Remove configure's check prohibiting linking to a - threaded libpython - on OpenBSD (Tom Lane) + Remove configure's check prohibiting linking to a + threaded libpython + on OpenBSD (Tom Lane) The failure this restriction was meant to prevent seems to not be a - problem anymore on current OpenBSD + problem anymore on current OpenBSD versions. @@ -6390,8 +6390,8 @@ Branch: REL9_0_STABLE [b06649b7f] 2015-05-26 22:15:00 -0400 - However, if you use contrib/citext's - regexp_matches() functions, see the changelog entry below + However, if you use contrib/citext's + regexp_matches() functions, see the changelog entry below about that. @@ -6469,7 +6469,7 @@ Branch: REL9_0_STABLE [cf893530a] 2015-05-19 18:18:56 -0400 - Our replacement implementation of snprintf() failed to + Our replacement implementation of snprintf() failed to check for errors reported by the underlying system library calls; the main case that might be missed is out-of-memory situations. 
In the worst case this might lead to information exposure, due to our @@ -6479,7 +6479,7 @@ Branch: REL9_0_STABLE [cf893530a] 2015-05-19 18:18:56 -0400 - It remains possible that some calls of the *printf() + It remains possible that some calls of the *printf() family of functions are vulnerable to information disclosure if an out-of-memory error occurs at just the wrong time. We judge the risk to not be large, but will continue analysis in this area. @@ -6499,15 +6499,15 @@ Branch: REL9_0_STABLE [b84e5c017] 2015-05-18 10:02:39 -0400 - In contrib/pgcrypto, uniformly report decryption failures - as Wrong key or corrupt data (Noah Misch) + In contrib/pgcrypto, uniformly report decryption failures + as Wrong key or corrupt data (Noah Misch) Previously, some cases of decryption with an incorrect key could report other error message texts. It has been shown that such variance in error reports can aid attackers in recovering keys from other systems. - While it's unknown whether pgcrypto's specific behaviors + While it's unknown whether pgcrypto's specific behaviors are likewise exploitable, it seems better to avoid the risk by using a one-size-fits-all message. (CVE-2015-3167) @@ -6557,7 +6557,7 @@ Branch: REL9_3_STABLE [ddebd2119] 2015-05-11 12:16:51 -0400 Under certain usage patterns, the existing defenses against this might - be insufficient, allowing pg_multixact/members files to be + be insufficient, allowing pg_multixact/members files to be removed too early, resulting in data loss. The fix for this includes modifying the server to fail transactions that would result in overwriting old multixact member ID data, and @@ -6578,16 +6578,16 @@ Branch: REL9_1_STABLE [801e250a8] 2015-05-05 15:50:53 -0400 - Fix incorrect declaration of contrib/citext's - regexp_matches() functions (Tom Lane) + Fix incorrect declaration of contrib/citext's + regexp_matches() functions (Tom Lane) - These functions should return setof text[], like the core + These functions should return setof text[], like the core functions they are wrappers for; but they were incorrectly declared as - returning just text[]. This mistake had two results: first, + returning just text[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you - should get is an empty set (zero rows). Second, the g flag + should get is an empty set (zero rows). Second, the g flag was effectively ignored, since you would get only one result array even if there were multiple matches. @@ -6595,16 +6595,16 @@ Branch: REL9_1_STABLE [801e250a8] 2015-05-05 15:50:53 -0400 While the latter behavior is clearly a bug, there might be applications depending on the former behavior; therefore the function declarations - will not be changed by default until PostgreSQL 9.5. + will not be changed by default until PostgreSQL 9.5. In pre-9.5 branches, the old behavior exists in version 1.0 of - the citext extension, while we have provided corrected - declarations in version 1.1 (which is not installed by + the citext extension, while we have provided corrected + declarations in version 1.1 (which is not installed by default). To adopt the fix in pre-9.5 branches, execute - ALTER EXTENSION citext UPDATE TO '1.1' in each database in - which citext is installed. (You can also update + ALTER EXTENSION citext UPDATE TO '1.1' in each database in + which citext is installed. (You can also update back to 1.0 if you need to undo that.) 
Be aware that either update direction will require dropping and recreating any views or rules that - use citext's regexp_matches() functions. + use citext's regexp_matches() functions. @@ -6616,8 +6616,8 @@ Branch: REL9_4_STABLE [79afe6e66] 2015-02-26 12:34:43 -0500 - Render infinite dates and timestamps as infinity when - converting to json, rather than throwing an error + Render infinite dates and timestamps as infinity when + converting to json, rather than throwing an error (Andrew Dunstan) @@ -6630,8 +6630,8 @@ Branch: REL9_4_STABLE [997066f44] 2015-05-04 12:43:16 -0400 - Fix json/jsonb's populate_record() - and to_record() functions to handle empty input properly + Fix json/jsonb's populate_record() + and to_record() functions to handle empty input properly (Andrew Dunstan) @@ -6671,7 +6671,7 @@ Branch: REL9_4_STABLE [79edb2981] 2015-05-03 11:30:24 -0400 Fix behavior when changing foreign key constraint deferrability status - with ALTER TABLE ... ALTER CONSTRAINT (Tom Lane) + with ALTER TABLE ... ALTER CONSTRAINT (Tom Lane) @@ -6720,7 +6720,7 @@ Branch: REL9_0_STABLE [985da346e] 2015-04-25 16:44:27 -0400 This oversight in the planner has been observed to cause could - not find RelOptInfo for given relids errors, but it seems possible + not find RelOptInfo for given relids errors, but it seems possible that sometimes an incorrect query plan might get past that consistency check and result in silently-wrong query output. @@ -6768,7 +6768,7 @@ Branch: REL9_0_STABLE [72bbca27e] 2015-02-10 20:37:31 -0500 This oversight has been seen to lead to failed to join all - relations together errors in queries involving LATERAL, + relations together errors in queries involving LATERAL, and that might happen in other cases as well. @@ -6782,7 +6782,7 @@ Branch: REL9_4_STABLE [f16270ade] 2015-02-25 21:36:40 -0500 Ensure that row locking occurs properly when the target of - an UPDATE or DELETE is a security-barrier view + an UPDATE or DELETE is a security-barrier view (Stephen Frost) @@ -6801,7 +6801,7 @@ Branch: REL9_4_STABLE [fd3dfc236] 2015-04-28 00:18:04 +0200 On some platforms, the previous coding could result in errors like - could not fsync file "pg_replslot/...": Bad file descriptor. + could not fsync file "pg_replslot/...": Bad file descriptor. @@ -6818,7 +6818,7 @@ Branch: REL9_0_STABLE [223a94680] 2015-04-23 21:37:09 +0300 Fix possible deadlock at startup - when max_prepared_transactions is too small + when max_prepared_transactions is too small (Heikki Linnakangas) @@ -6859,7 +6859,7 @@ Branch: REL9_0_STABLE [262fbcb9d] 2015-05-05 09:30:07 -0400 - Recursively fsync() the data directory after a crash + Recursively fsync() the data directory after a crash (Abhijit Menon-Sen, Robert Haas) @@ -6901,7 +6901,7 @@ Branch: REL9_4_STABLE [ee0d06c0b] 2015-04-03 00:07:29 -0400 This oversight could result in failures in sessions that start - concurrently with a VACUUM FULL on a system catalog. + concurrently with a VACUUM FULL on a system catalog. 
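Spelled out as commands, the citext update procedure described above looks like the following; the extension name and version numbers come from the entry itself, and the statements must be run in each database in which citext is installed.

    -- Adopt the corrected function declarations:
    ALTER EXTENSION citext UPDATE TO '1.1';

    -- Revert to the old (buggy) declarations if an application depends on them:
    ALTER EXTENSION citext UPDATE TO '1.0';

Per the caveat above, either direction requires first dropping, and afterwards recreating, any views or rules that use the regexp_matches() wrapper functions.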
@@ -6913,7 +6913,7 @@
Branch: REL9_4_STABLE [2897e069c] 2015-03-30 13:05:35 -0400

Fix crash in BackendIdGetTransactionIds() when trying to get status for a backend process that just exited (Tom Lane)

@@ -6930,13 +6930,13 @@
Branch: REL9_0_STABLE [87b7fcc87] 2015-02-23 16:14:16 +0100

Cope with unexpected signals in LockBufferForCleanup() (Andres Freund)

This oversight could result in spurious errors about "multiple backends attempting to wait for pincount 1".

@@ -6950,7 +6950,7 @@
Branch: REL9_2_STABLE [effcaa4c2] 2015-02-15 23:26:46 -0500

Fix crash when doing COPY IN to a table with check constraints that contain whole-row references (Tom Lane)

@@ -6995,7 +6995,7 @@
Branch: REL9_4_STABLE [16be9737c] 2015-03-23 16:52:17 +0100

Avoid busy-waiting with short recovery_min_apply_delay values (Andres Freund)

@@ -7061,9 +7061,9 @@
Branch: REL9_0_STABLE [152c94632] 2015-03-29 15:04:38 -0400

ANALYZE executes index expressions many times; if there are slow functions in such an expression, it's desirable to be able to cancel the ANALYZE before that loop finishes.

@@ -7078,10 +7078,10 @@
Branch: REL9_1_STABLE [4a4fd2b0c] 2015-03-12 13:38:49 -0400

Ensure tableoid of a foreign table is reported correctly when a READ COMMITTED recheck occurs after locking rows in SELECT FOR UPDATE, UPDATE, or DELETE (Etsuro Fujita)

@@ -7127,14 +7127,14 @@
Branch: REL9_0_STABLE [c981e5999] 2015-05-08 19:40:15 -0400

Recommend setting include_realm to 1 when using Kerberos/GSSAPI/SSPI authentication (Stephen Frost)

Without this, identically-named users from different realms cannot be distinguished. For the moment this is only a documentation change, but it will become the default setting in PostgreSQL 9.5.

@@ -7157,7 +7157,7 @@
Branch: REL9_0_STABLE [e48ce4f33] 2015-02-17 12:49:18 -0500

Remove code for matching IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses (Tom Lane)

@@ -7170,7 +7170,7 @@
Branch: REL9_0_STABLE [e48ce4f33] 2015-02-17 12:49:18 -0500

crashes on some systems, so let's just remove it rather than fix it. (Had we chosen to fix it, that would make for a subtle and potentially security-sensitive change in the effective meaning of IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases.)
@@ -7197,7 +7197,7 @@
Branch: REL9_4_STABLE [a1f4ade01] 2015-04-02 14:39:18 -0400

After a database crash, don't restart background workers that are marked BGW_NEVER_RESTART (Amit Khandekar)

@@ -7212,13 +7212,13 @@
Branch: REL9_1_STABLE [0d36d9f2b] 2015-02-06 11:32:42 +0200

Report WAL flush, not insert, position in IDENTIFY_SYSTEM replication command (Heikki Linnakangas)

This avoids a possible startup failure in pg_receivexlog.

@@ -7236,7 +7236,7 @@
Branch: REL9_0_STABLE [78ce2dc8e] 2015-05-07 15:10:01 +0200

While shutting down service on Windows, periodically send status updates to the Service Control Manager to prevent it from killing the service too soon; and ensure that pg_ctl will wait for shutdown (Krystian Bigaj)

@@ -7253,7 +7253,7 @@
Branch: REL9_0_STABLE [8878eaaa8] 2015-02-23 13:32:53 +0200

Reduce risk of network deadlock when using libpq's non-blocking mode (Heikki Linnakangas)

@@ -7262,12 +7262,12 @@
Branch: REL9_0_STABLE [8878eaaa8] 2015-02-23 13:32:53 +0200

buffer every so often, in case the server has sent enough response data to cause it to block on output. (A typical scenario is that the server is sending a stream of NOTICE messages during COPY FROM STDIN.) This worked properly in the normal blocking mode, but not so much in non-blocking mode. We've modified libpq to opportunistically drain input when it can, but a full defense against this problem requires application cooperation: the application should watch for socket read-ready as well as write-ready conditions, and be sure to call PQconsumeInput() upon read-ready.

@@ -7281,7 +7281,7 @@
Branch: REL9_2_STABLE [83c3115dd] 2015-02-21 12:59:43 -0500

In libpq, fix misparsing of empty values in URI connection strings (Thomas Fanghaenel)

@@ -7298,7 +7298,7 @@
Branch: REL9_0_STABLE [ce2fcc58e] 2015-02-11 11:30:11 +0100

Fix array handling in ecpg (Michael Meskes)

@@ -7314,8 +7314,8 @@
Branch: REL9_0_STABLE [557fcfae3] 2015-04-01 20:00:07 -0300

Fix psql to sanely handle URIs and conninfo strings as the first parameter to \connect (David Fetter, Andrew Dunstan, Álvaro Herrera)

@@ -7338,17 +7338,17 @@
Branch: REL9_0_STABLE [396ef6fd8] 2015-03-14 13:43:26 -0400

Suppress incorrect complaints from psql on some platforms that it failed to write ~/.psql_history at exit (Tom Lane)

This misbehavior was caused by a workaround for a bug in very old (pre-2006) versions of libedit. We fixed it by removing the workaround, which will cause a similar failure to appear for anyone still using such versions of libedit. Recommendation: upgrade that library, or use libreadline.
@@ -7364,7 +7364,7 @@
Branch: REL9_0_STABLE [8e70f3c40] 2015-02-10 22:38:29 -0500

Fix pg_dump's rule for deciding which casts are system-provided casts that should not be dumped (Tom Lane)

@@ -7380,8 +7380,8 @@
Branch: REL9_1_STABLE [b0d53b2e3] 2015-02-18 11:43:00 -0500

In pg_dump, fix failure to honor -Z compression level option together with -Fd (Michael Paquier)

@@ -7397,7 +7397,7 @@
Branch: REL9_1_STABLE [dcb467b8e] 2015-03-02 14:12:43 -0500

Make pg_dump consider foreign key relationships between extension configuration tables while choosing dump order (Gilles Darold, Michael Paquier, Stephen Frost)

@@ -7417,7 +7417,7 @@
Branch: REL9_3_STABLE [d645273cf] 2015-03-06 13:27:46 -0500

Avoid possible pg_dump failure when concurrent sessions are creating and dropping temporary functions (Tom Lane)

@@ -7434,7 +7434,7 @@
Branch: REL9_0_STABLE [7a501bcbf] 2015-02-25 12:01:12 -0500

Fix dumping of views that are just VALUES(...) but have column aliases (Tom Lane)

@@ -7448,7 +7448,7 @@
Branch: REL9_4_STABLE [70fac4844] 2015-05-01 13:03:23 -0400

Ensure that a view's replication identity is correctly set to "nothing" during dump/restore (Marko Tiikkaja)

@@ -7472,7 +7472,7 @@
Branch: REL9_3_STABLE [4e9935979] 2015-05-16 15:16:28 -0400

In pg_upgrade, force timeline 1 in the new cluster (Bruce Momjian)

@@ -7494,7 +7494,7 @@
Branch: REL9_0_STABLE [2194aa92b] 2015-05-16 00:10:03 -0400

In pg_upgrade, check for improperly non-connectable databases before proceeding (Bruce Momjian)

@@ -7512,8 +7512,8 @@
Branch: REL9_0_STABLE [4ae178f60] 2015-02-11 22:06:04 -0500

In pg_upgrade, quote directory paths properly in the generated delete_old_cluster script (Bruce Momjian)

@@ -7530,14 +7530,14 @@
Branch: REL9_0_STABLE [85dac37ee] 2015-02-11 21:02:06 -0500

In pg_upgrade, preserve database-level freezing info properly (Bruce Momjian)

This oversight could cause missing-clog-file errors for tables within the postgres and template1 databases.
@@ -7553,7 +7553,7 @@
Branch: REL9_0_STABLE [bf22a8e58] 2015-03-30 17:18:10 -0400

Run pg_upgrade and pg_resetxlog with restricted privileges on Windows, so that they don't fail when run by an administrator (Muhammad Asif Naeem)

@@ -7571,8 +7571,8 @@
Branch: REL9_1_STABLE [d7d294f59] 2015-02-17 11:08:40 -0500

Improve handling of readdir() failures when scanning directories in initdb and pg_basebackup (Marco Nenciarini)

@@ -7589,7 +7589,7 @@
Branch: REL9_0_STABLE [40b0c10b7] 2015-03-15 23:22:03 -0400

Fix slow sorting algorithm in contrib/intarray (Tom Lane)

@@ -7637,7 +7637,7 @@
Branch: REL9_0_STABLE [3c3749a3b] 2015-05-15 19:36:20 -0400

Update time zone data files to tzdata release 2015d for DST law changes in Egypt, Mongolia, and Palestine, plus historical changes in Canada and Chile. Also adopt revised zone abbreviations for the America/Adak zone (HST/HDT not HAST/HADT).

@@ -7672,12 +7672,12 @@

However, if you are a Windows user and are using the Norwegian (Bokmål) locale, manual action is needed after the upgrade to replace any Norwegian (Bokmål)_Norway or norwegian-bokmal locale names stored in PostgreSQL system catalogs with the plain-ASCII alias Norwegian_Norway. For details see

@@ -7705,15 +7705,15 @@
Branch: REL9_0_STABLE [56b970f2e] 2015-02-02 10:00:52 -0500

Fix buffer overruns in to_char() (Bruce Momjian)

When to_char() processes a numeric formatting template calling for a large number of digits, PostgreSQL would read past the end of a buffer. When processing a crafted timestamp formatting template, PostgreSQL would write past the end of a buffer. Either case could crash the server. We have not ruled out the possibility of attacks that lead to privilege escalation, though they seem unlikely.

@@ -7733,27 +7733,27 @@
Branch: REL9_0_STABLE [9e05c5063] 2015-02-02 10:00:52 -0500

Fix buffer overrun in replacement *printf() functions (Tom Lane)

PostgreSQL includes a replacement implementation of printf and related functions. This code will overrun a stack buffer when formatting a floating point number (conversion specifiers e, E, f, F, g or G) with requested precision greater than about 500. This will crash the server, and we have not ruled out the possibility of attacks that lead to privilege escalation. A database user can trigger such a buffer overrun through the to_char() SQL function.
While that is the only affected core PostgreSQL functionality, extension modules that use printf-family functions may be at risk as well.

This issue primarily affects PostgreSQL on Windows. PostgreSQL uses the system implementation of these functions where adequate, which it is on other modern platforms. (CVE-2015-0242)

@@ -7778,12 +7778,12 @@
Branch: REL9_0_STABLE [0a3ee8a5f] 2015-02-02 10:00:52 -0500

Fix buffer overruns in contrib/pgcrypto (Marko Tiikkaja, Noah Misch)

Errors in memory size tracking within the pgcrypto module permitted stack buffer overruns and improper dependence on the contents of uninitialized memory. The buffer overrun cases can crash the server, and we have not ruled out the possibility of

@@ -7844,7 +7844,7 @@
Branch: REL9_0_STABLE [3a2063369] 2015-01-28 12:33:29 -0500

Some server error messages show the values of columns that violate a constraint, such as a unique constraint. If the user does not have SELECT privilege on all columns of the table, this could mean exposing values that the user should not be able to see. Adjust the code so that values are displayed only when they came from the SQL command or could be selected by the user.

@@ -7893,20 +7893,20 @@
Branch: REL9_2_STABLE [6bf343c6e] 2015-01-16 13:10:23 +0200

Cope with the Windows locale named "Norwegian (Bokmål)" (Heikki Linnakangas)

Non-ASCII locale names are problematic since it's not clear what encoding they should be represented in. Map the troublesome locale name to a plain-ASCII alias, Norwegian_Norway.

9.4.0 mapped the troublesome name to norwegian-bokmal, but that turns out not to work on all Windows configurations. Norwegian_Norway is now recommended instead.

@@ -7927,7 +7927,7 @@
Branch: REL9_0_STABLE [5308e085b] 2015-01-15 18:52:38 -0500

In READ COMMITTED mode, queries that lock or update recently-updated rows could crash as a result of this bug.
@@ -7956,8 +7956,8 @@
Branch: REL9_3_STABLE [54a8abc2b] 2015-01-04 15:48:29 -0300

Fix failure to wait when a transaction tries to acquire a FOR NO KEY EXCLUSIVE tuple lock, while multiple other transactions currently hold FOR SHARE locks (Álvaro Herrera)

@@ -7970,7 +7970,7 @@
Branch: REL9_3_STABLE [939f0fb67] 2015-01-15 13:18:19 -0500

Improve performance of EXPLAIN with large range tables (Tom Lane)

@@ -7983,41 +7983,41 @@
Branch: REL9_4_STABLE [4cbf390d5] 2015-01-30 14:44:49 -0500

Fix jsonb Unicode escape processing, and in consequence disallow \u0000 (Tom Lane)

Previously, the JSON Unicode escape \u0000 was accepted and was stored as those six characters; but that is indistinguishable from what is stored for the input \\u0000, resulting in ambiguity. Moreover, in cases where de-escaped textual output is expected, such as the ->> operator, the sequence was printed as \u0000, which does not meet the expectation that JSON escaping would be removed. (Consistent behavior would require emitting a zero byte, but PostgreSQL does not support zero bytes embedded in text strings.) 9.4.0 included an ill-advised attempt to improve this situation by adjusting JSON output conversion rules; but of course that could not fix the fundamental ambiguity, and it turned out to break other usages of Unicode escape sequences. Revert that, and to avoid the core problem, reject \u0000 in jsonb input.

If a jsonb column contains a \u0000 value stored with 9.4.0, it will henceforth read out as though it were \\u0000, which is the other valid interpretation of the data stored by 9.4.0 for this case.

The json type did not have the storage-ambiguity problem, but it did have the problem of inconsistent de-escaped textual output. Therefore \u0000 will now also be rejected in json values when conversion to de-escaped form is required. This change does not break the ability to store \u0000 in json columns so long as no processing is done on the values. This is exactly parallel to the cases in which non-ASCII Unicode escapes are allowed when the database encoding is not UTF8.
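A sketch of the resulting behavior (on 9.4.1 and later; exact error wording may differ):

    -- Rejected outright in jsonb input:
    SELECT '["\u0000"]'::jsonb;            -- now fails with an error
    -- Still accepted in json when the value is only stored, not processed:
    SELECT '["\u0000"]'::json;             -- ok
    -- ...but rejected when de-escaped textual output is required:
    SELECT ('["\u0000"]'::json) ->> 0;     -- now fails with an error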
@@ -8036,14 +8036,14 @@
Branch: REL9_0_STABLE [cebb3f032] 2015-01-17 22:37:32 -0500

Fix namespace handling in xpath() (Ali Akbar)

Previously, the xml value resulting from an xpath() call would not have namespace declarations if the namespace declarations were attached to an ancestor element in the input xml value, rather than to the specific element being returned. Propagate the ancestral declaration so that the result is correct when considered in isolation.

@@ -8063,7 +8063,7 @@
Branch: REL9_3_STABLE [527ff8baf] 2015-01-30 12:30:43 -0500

This patch fixes corner-case "unexpected operator NNNN" planner errors, and improves the selectivity estimates for some other cases.

@@ -8081,7 +8081,7 @@
Branch: REL9_4_STABLE [4e241f7cd] 2014-12-30 14:53:03 +0200

9.4.0 could fail with "index row size exceeds maximum" errors for data that previous versions would accept.

@@ -8111,7 +8111,7 @@
Branch: REL9_1_STABLE [37e0f13f2] 2015-01-29 19:37:22 +0200

Fix possible crash when using nonzero gin_fuzzy_search_limit (Heikki Linnakangas)

@@ -8139,7 +8139,7 @@
Branch: REL9_4_STABLE [b337d9657] 2015-01-15 20:52:18 +0200

Fix incorrect replay of WAL parameter change records that report changes in the wal_log_hints setting (Petr Jelinek)

@@ -8155,7 +8155,7 @@
Branch: REL9_0_STABLE [a1a8d0249] 2015-01-19 23:01:46 -0500

Change "pgstat wait timeout" warning message to be LOG level, and rephrase it to be more understandable (Tom Lane)

@@ -8164,7 +8164,7 @@
Branch: REL9_0_STABLE [a1a8d0249] 2015-01-19 23:01:46 -0500

case, but it occurs often enough on our slower buildfarm members to be a nuisance. Reduce it to LOG level, and expend a bit more effort on the wording: it now reads "using stale statistics instead of current ones because stats collector is not responding".

@@ -8180,7 +8180,7 @@
Branch: REL9_0_STABLE [2e4946169] 2015-01-07 22:46:20 -0500

Warn if macOS's setlocale() starts an unwanted extra thread inside the postmaster (Noah Misch)

@@ -8193,18 +8193,18 @@
Branch: REL9_4_STABLE [733728ff3] 2015-01-11 12:35:47 -0500

Fix libpq's behavior when /etc/passwd isn't readable (Tom Lane)

While doing PQsetdbLogin(), libpq attempts to ascertain the user's operating system name, which on most Unix platforms involves reading /etc/passwd. As of 9.4, failure to do that was treated as a hard error. Restore the previous behavior, which was to fail only if the application does not provide a database role name to connect as. This supports operation in chroot environments that lack an /etc/passwd file.
@@ -8220,17 +8220,17 @@
Branch: REL9_0_STABLE [2600e4436] 2014-12-31 12:17:12 -0500

Improve consistency of parsing of psql's special variables (Tom Lane)

Allow variant spellings of on and off (such as 1/0) for ECHO_HIDDEN and ON_ERROR_ROLLBACK. Report a warning for unrecognized values for COMP_KEYWORD_CASE, ECHO, ECHO_HIDDEN, HISTCONTROL, ON_ERROR_ROLLBACK, and VERBOSITY. Recognize all values for all these variables case-insensitively; previously there was a mishmash of case-sensitive and case-insensitive behaviors.

@@ -8245,7 +8245,7 @@
Branch: REL9_3_STABLE [bb1e2426b] 2015-01-05 19:27:09 -0500

Fix pg_dump to handle comments on event triggers without failing (Tom Lane)

@@ -8259,8 +8259,8 @@
Branch: REL9_3_STABLE [cc609c46f] 2015-01-30 09:01:36 -0600

Allow parallel pg_dump to use (Kevin Grittner)

@@ -8275,7 +8275,7 @@
Branch: REL9_1_STABLE [2a0bfa4d6] 2015-01-03 20:54:13 +0100

Prevent WAL files created by pg_basebackup -x/-X from being archived again when the standby is promoted (Andres Freund)

@@ -8293,12 +8293,12 @@
Branch: REL9_0_STABLE [dc9a506e6] 2015-01-29 20:18:46 -0500

Handle unexpected query results, especially NULLs, safely in contrib/tablefunc's connectby() (Michael Paquier)

connectby() previously crashed if it encountered a NULL key value. It now prints that row but doesn't recurse further.

@@ -8392,14 +8392,14 @@
Branch: REL9_4_STABLE [adb355106] 2015-01-14 11:08:17 -0500

Allow CFLAGS from configure's environment to override automatically-supplied CFLAGS (Tom Lane)

Previously, configure would add any switches that it chose of its own accord to the end of the user-specified CFLAGS string. Since most compilers process switches left-to-right, this meant that configure's choices would override the user-specified flags in case of conflicts. That should work the other way around, so adjust the logic to put the

@@ -8419,13 +8419,13 @@
Branch: REL9_0_STABLE [338ff75fc] 2015-01-19 23:44:33 -0500

Make pg_regress remove any temporary installation it created upon successful exit (Tom Lane)

This results in a very substantial reduction in disk space usage during make check-world, since that sequence involves creation of numerous temporary installations.

@@ -8451,7 +8451,7 @@
Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500

Update time zone data files to tzdata release 2015a for DST law changes in Chile and Mexico, plus historical changes in Iceland.
@@ -8474,7 +8474,7 @@
Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500

Overview

Major enhancements in PostgreSQL 9.4 include:

@@ -8483,15 +8483,15 @@

Add jsonb, a more capable and efficient data type for storing JSON data

Add new SQL command for changing postgresql.conf configuration file entries

@@ -8504,14 +8504,14 @@

Allow materialized views to be refreshed without blocking concurrent reads

Add support for logical decoding of WAL data, to allow database changes to be streamed out in a customizable format

@@ -8519,7 +8519,7 @@

Allow background worker processes to be dynamically registered, started and terminated

@@ -8558,14 +8558,14 @@

Previously, an input array string that started with a single-element sub-array could later contain multi-element sub-arrays, e.g. '{{1}, {2,3}}'::int[] would be accepted.

When converting values of type date, timestamp or timestamptz to JSON, render the values in a format compliant with ISO 8601 (Andrew Dunstan)

@@ -8575,7 +8575,7 @@

setting; but many JSON processors require timestamps to be in ISO 8601 format. If necessary, the previous behavior can be obtained by explicitly casting the datetime value to text before passing it to the JSON conversion function.

@@ -8583,15 +8583,15 @@

The json #> text[] path extraction operator now returns its lefthand input, not NULL, if the array is empty (Tom Lane)

This is consistent with the notion that this represents zero applications of the simple field/element extraction operator ->. Similarly, json #>> text[] with an empty array merely coerces its lefthand input to text.

@@ -8616,26 +8616,26 @@

Cause consecutive whitespace in to_timestamp() and to_date() format strings to consume a corresponding number of characters in the input string (whitespace or not), then conditionally consume adjacent whitespace, if not in FX mode (Jeevan Chalke)

Previously, consecutive whitespace characters in a non-FX format string behaved like a single whitespace character and consumed all adjacent whitespace in the input string. For example, previously a format string of three spaces would consume only the first space in ' 12', but it will now consume all three characters.
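A few hypothetical queries illustrating the 9.4 behavior changes just listed:

    -- Inconsistent sub-array shapes are now rejected:
    SELECT '{{1}, {2,3}}'::int[];                      -- error in 9.4
    -- Datetime values are rendered in ISO 8601 when converted to JSON:
    SELECT to_json('2015-01-30 12:00:00'::timestamp);  -- "2015-01-30T12:00:00"
    -- An empty path now returns the lefthand input, not NULL:
    SELECT '{"a": 1}'::json #> '{}';                   -- {"a": 1}
    -- Three spaces in the format now consume three input characters:
    SELECT to_timestamp('   12', '   HH24');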
Fix ts_rank_cd() to ignore stripped lexemes (Alex Hill)

@@ -8649,15 +8649,15 @@

For functions declared to take VARIADIC "any", an actual parameter marked as VARIADIC must be of a determinable array type (Pavel Stehule)

Such parameters can no longer be written as an undecorated string literal or NULL; a cast to an appropriate array data type will now be required. Note that this does not affect parameters not marked VARIADIC.

@@ -8669,8 +8669,8 @@

Constructs like row_to_json(tab.*) now always emit column names that match the column aliases visible for table tab at the point of the call. In previous releases the emitted column names would sometimes be the table's actual column names regardless of any aliases assigned in the query.

@@ -8687,7 +8687,7 @@

Rename EXPLAIN ANALYZE's "total runtime" output to "execution time" (Tom Lane)

@@ -8699,15 +8699,15 @@

SHOW TIME ZONE now outputs simple numeric UTC offsets in POSIX timezone format (Tom Lane)

Previously, such timezone settings were displayed as interval values. The new output is properly interpreted by SET TIME ZONE when passed as a simple string, whereas the old output required special treatment to be re-parsed correctly.

@@ -8716,25 +8716,25 @@

Foreign data wrappers that support updating foreign tables must consider the possible presence of AFTER ROW triggers (Noah Misch)

When an AFTER ROW trigger is present, all columns of the table must be returned by updating actions, since the trigger might inspect any or all of them. Previously, foreign tables never had triggers, so the FDW might optimize away fetching columns not mentioned in the RETURNING clause (if any).

Prevent CHECK constraints from referencing system columns, except tableoid (Amit Kapila)

@@ -8752,7 +8752,7 @@

Previously, there was an undocumented precedence order among the recovery_target_xxx parameters.

@@ -8766,14 +8766,14 @@

User commands that did their own quote preservation might need adjustment. This is likely to be an issue for commands used in , , and COPY TO/FROM PROGRAM.
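Hypothetical examples of two of the incompatibilities noted above:

    -- VARIADIC "any" now requires a determinable array type:
    SELECT concat_ws(',', VARIADIC ARRAY['a','b','c']);  -- ok: text[]
    -- SELECT concat_ws(',', VARIADIC NULL);             -- now rejected
    -- row_to_json() now honors column aliases:
    SELECT row_to_json(t) FROM (SELECT 1, 2) AS t(x, y);
    -- result: {"x":1,"y":2}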
Remove catalog column pg_class.reltoastidxid (Michael Paquier)

@@ -8781,33 +8781,33 @@

Remove catalog column pg_rewrite.ev_attr (Kevin Grittner)

Per-column rules have not been supported since PostgreSQL 7.3.

Remove native support for Kerberos authentication (, etc) (Magnus Hagander)

The supported way to use Kerberos authentication is with GSSAPI. The native code has been deprecated since PostgreSQL 8.3.

In PL/Python, handle domains over arrays like the underlying array type (Rodolfo Campero)

@@ -8819,9 +8819,9 @@

Make libpq's PQconnectdbParams() and PQpingParams() functions process zero-length strings as defaults (Adrian Vondendriesch)

@@ -8841,20 +8841,20 @@

Previously, empty arrays were returned as zero-length one-dimensional arrays, whose text representation looked the same as zero-dimensional arrays ({}), but they acted differently in array operations. intarray's behavior in this area now matches the built-in array operators.

now uses . Previously this option was spelled or , but that was inconsistent with other tools.

@@ -8884,7 +8884,7 @@

The new worker_spi module shows an example of use of this feature.

@@ -8904,7 +8904,7 @@

During crash recovery or immediate shutdown, send uncatchable termination signals (SIGKILL) to child processes that do not shut down promptly (MauMau, Álvaro Herrera)

@@ -8912,7 +8912,7 @@

This reduces the likelihood of leaving orphaned child processes behind after shutdown, as well as ensuring that crash recovery can proceed if some child processes have become stuck.

@@ -8942,13 +8942,13 @@

Reduce GIN index size (Alexander Korotkov, Heikki Linnakangas)

Indexes upgraded via will work fine but will still be in the old, larger GIN format. Use to recreate old GIN indexes in the new format.
@@ -8957,16 +8957,16 @@

Improve speed of multi-key GIN lookups (Alexander Korotkov, Heikki Linnakangas)

Add GiST index support for inet and cidr data types (Emre Hasegeli)

@@ -9002,7 +9002,7 @@

Allow multiple backends to insert into WAL buffers concurrently (Heikki Linnakangas)

@@ -9014,7 +9014,7 @@

Conditionally write only the modified portion of updated rows to WAL (Amit Kapila)

@@ -9029,7 +9029,7 @@

Improve speed of aggregates that use numeric state values (Hadi Moshayedi)

@@ -9039,7 +9039,7 @@

Attempt to freeze tuples when tables are rewritten with or VACUUM FULL (Robert Haas, Andres Freund)

@@ -9051,7 +9051,7 @@

Improve speed of with default nextval() columns (Simon Riggs)

@@ -9073,7 +9073,7 @@

Reduce memory allocated by PL/pgSQL blocks (Tom Lane)

@@ -9081,18 +9081,18 @@

Make the planner more aggressive about extracting restriction clauses from mixed AND/OR clauses (Tom Lane)

Disallow pushing volatile WHERE clauses down into DISTINCT subqueries (Tom Lane)

Pushing down a WHERE clause can produce a more efficient plan overall, but at the cost of evaluating the clause more often than is implied by the text of the query; so don't do it if the clause contains any volatile functions.

@@ -9122,14 +9122,14 @@

Add system view to report WAL archiver activity (Gabriele Bartolini)

Add n_mod_since_analyze columns to and related system views (Mark Kirkwood)

@@ -9143,9 +9143,9 @@

Add backend_xid and backend_xmin columns to the system view , and a backend_xmin column to (Christian Kruse)

@@ -9155,22 +9155,22 @@

<acronym>SSL</acronym>

Add support for SSL ECDH key exchange (Marko Kreen)

This allows use of Elliptic Curve keys for server authentication. Such keys are faster and have better security than RSA keys. The new configuration parameter controls which curve is used for ECDH.
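As a sketch of two of the items above; the table and column names are hypothetical, and inet_ops is the non-default GiST operator class added for this support:

    -- GiST indexing of inet/cidr values:
    CREATE TABLE hosts (addr inet);
    CREATE INDEX hosts_addr_gist ON hosts USING gist (addr inet_ops);
    -- The new WAL-archiver statistics view:
    SELECT archived_count, last_archived_wal, failed_count
    FROM pg_stat_archiver;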
@@ -9184,14 +9184,14 @@

By default, the server not the client now controls the preference order of SSL ciphers (Marko Kreen)

Previously, the order specified by was usually ignored in favor of client-side defaults, which are not configurable in most PostgreSQL clients. If desired, the old behavior can be restored via the new configuration parameter .

@@ -9199,14 +9199,14 @@

Make show SSL encryption information (Andreas Kunert)

Improve SSL renegotiation handling (Álvaro Herrera)

@@ -9222,14 +9222,14 @@

Add new SQL command for changing postgresql.conf configuration file entries (Amit Kapila)

Previously such settings could only be changed by manually editing postgresql.conf.

@@ -9274,7 +9274,7 @@

In contrast to , this parameter can load any shared library, not just those in the $libdir/plugins directory.

@@ -9287,7 +9287,7 @@

Hint bit changes are not normally logged, except when checksums are enabled. This is useful for external tools like pg_rewind.

@@ -9320,14 +9320,14 @@

Allow terabyte units (TB) to be used when specifying configuration variable values (Simon Riggs)

Show PIDs of lock holders and waiters and improve information about relations in log messages (Christian Kruse)

@@ -9340,14 +9340,14 @@

The previous level was LOG, which was too verbose for libraries loaded per-session.

On Windows, make SQL_ASCII-encoded databases and server processes (e.g., ) emit messages in the character encoding of the server's Windows user locale (Alexander Law, Noah Misch)

@@ -9355,7 +9355,7 @@

Previously these messages were output in the Windows ANSI code page.

@@ -9379,7 +9379,7 @@

Replication slots allow preservation of resources like WAL files on the primary until they are no longer needed by standby servers.

@@ -9400,8 +9400,8 @@

Add option

@@ -9413,7 +9413,7 @@

The timestamp reported by pg_last_xact_replay_timestamp() now reflects already-committed records, not transactions about to be committed. Recovering to a restore point now replays the restore point, rather than stopping just before the restore point.
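A minimal sketch of the new configuration-editing command mentioned above (ALTER SYSTEM). It writes the setting to postgresql.auto.conf; a reload, shown here via pg_reload_conf(), makes reloadable parameters take effect:

    ALTER SYSTEM SET log_min_duration_statement = '250ms';
    SELECT pg_reload_conf();        -- re-read configuration files
    SHOW log_min_duration_statement;
    -- Setting it back to DEFAULT removes the entry again:
    -- ALTER SYSTEM SET log_min_duration_statement = DEFAULT;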
@@ -9423,34 +9423,34 @@

pg_switch_xlog() now clears any unused trailing space in the old WAL file (Heikki Linnakangas)

This improves the compression ratio for WAL files.

Report failure return codes from external recovery commands (Peter Eisentraut)

Reduce spinlock contention during WAL replay (Heikki Linnakangas)

Write WAL records of running transactions more frequently (Andres Freund)

@@ -9463,12 +9463,12 @@

<link linkend="logicaldecoding">Logical Decoding</link>

Logical decoding allows database changes to be streamed in a configurable format. The data is read from the WAL and transformed into the desired target format. To implement this feature, the following changes were made:

@@ -9477,7 +9477,7 @@

Add support for logical decoding of WAL data, to allow database changes to be streamed out in a customizable format (Andres Freund)

@@ -9486,8 +9486,8 @@

Add new setting

@@ -9495,7 +9495,7 @@

Add table-level parameter REPLICA IDENTITY to control logical replication (Andres Freund)

@@ -9503,7 +9503,7 @@

Add relation option to identify user-created tables involved in logical change-set encoding (Andres Freund)

@@ -9519,7 +9519,7 @@

Add module to illustrate logical decoding at the SQL level (Andres Freund)

@@ -9537,22 +9537,22 @@

Add WITH ORDINALITY syntax to number the rows returned from a set-returning function in the FROM clause (Andrew Gierth, David Fetter)

This is particularly useful for functions like unnest().

Add ROWS FROM() syntax to allow horizontal concatenation of set-returning functions in the FROM clause (Andrew Gierth)

@@ -9571,7 +9571,7 @@

Ensure that SELECT ... FOR UPDATE NOWAIT does not wait in corner cases involving already-concurrently-updated tuples (Craig Ringer and Thomas Munro)

@@ -9588,21 +9588,21 @@

Add DISCARD SEQUENCES command to discard cached sequence-related state (Fabrízio de Royes Mello, Robert Haas)

DISCARD ALL will now also discard such information.
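Hypothetical queries showing the two new FROM-clause constructs just described:

    -- Number the rows produced by a set-returning function:
    SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS u(elem, n);
    -- Concatenate set-returning functions horizontally; shorter columns
    -- are padded with NULLs:
    SELECT * FROM ROWS FROM (generate_series(1, 3),
                             unnest(ARRAY['x','y'])) AS r(i, t);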
Add FORCE NULL option to COPY FROM, which causes quoted strings matching the specified null string to be converted to NULLs in CSV mode (Ian Barwick, Michael Paquier)

@@ -9620,8 +9620,8 @@

New warnings are issued for SET LOCAL, SET CONSTRAINTS, SET TRANSACTION and ABORT when used outside a transaction block.

@@ -9634,21 +9634,21 @@

Make EXPLAIN ANALYZE show planning time (Andreas Karlsson)

Make EXPLAIN show the grouping columns in Agg and Group nodes (Tom Lane)

Make EXPLAIN ANALYZE show exact and lossy block counts in bitmap heap scans (Etsuro Fujita)

@@ -9664,7 +9664,7 @@

Allow a materialized view to be refreshed without blocking other sessions from reading the view meanwhile (Kevin Grittner)

This is done with REFRESH MATERIALIZED VIEW CONCURRENTLY.

@@ -9687,28 +9687,28 @@

Previously the presence of non-updatable output columns such as expressions, literals, and function calls prevented automatic updates. Now INSERTs, UPDATEs and DELETEs are supported, provided that they do not attempt to assign new values to any of the non-updatable columns.

Allow control over whether INSERTs and UPDATEs can add rows to an auto-updatable view that would not appear in the view (Dean Rasheed)

This is controlled with the new clause WITH CHECK OPTION.

Allow security barrier views to be automatically updatable (Dean Rasheed)

@@ -9727,14 +9727,14 @@

Support triggers on foreign tables (Ronan Dunklau)

Allow moving groups of objects from one tablespace to another using the ALL IN TABLESPACE ... SET TABLESPACE form of , , or (Stephen Frost)
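A sketch of several of the commands introduced above, against a hypothetical table items(id, note, price):

    CREATE TABLE items (id int, note text, price numeric);
    -- FORCE_NULL: quoted strings matching the null string (by default,
    -- the empty string) become NULL for the listed columns:
    COPY items (id, note) FROM STDIN WITH (FORMAT csv, FORCE_NULL (note));
    -- Refresh a materialized view without blocking concurrent readers
    -- (requires a unique index on the view):
    REFRESH MATERIALIZED VIEW CONCURRENTLY item_summary;
    -- Auto-updatable view that refuses rows not visible through it:
    CREATE VIEW cheap_items AS
        SELECT * FROM items WHERE price < 10
        WITH CHECK OPTION;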
@@ -9744,7 +9744,7 @@

Allow changing foreign key constraint deferrability via ALTER TABLE ... ALTER CONSTRAINT (Simon Riggs)

@@ -9756,12 +9756,12 @@

Specifically, VALIDATE CONSTRAINT, CLUSTER ON, SET WITHOUT CLUSTER, ALTER COLUMN SET STATISTICS, ALTER COLUMN SET

@@ -9791,7 +9791,7 @@

Fix DROP IF EXISTS to avoid errors for non-existent objects in more cases (Pavel Stehule, Dean Rasheed)

@@ -9803,7 +9803,7 @@

Previously, relations once moved into the pg_catalog schema could no longer be modified or dropped.

@@ -9820,14 +9820,14 @@

Fully implement the line data type (Peter Eisentraut)

The line segment data type (lseg) has always been fully supported. The previous line data type (which was enabled only via a compile-time option) is not binary or dump-compatible with the new implementation.

@@ -9835,17 +9835,17 @@

Add pg_lsn data type to represent a WAL log sequence number (LSN) (Robert Haas, Michael Paquier)

Allow single-point polygons to be converted to circles (Bruce Momjian)

@@ -9857,31 +9857,31 @@

Previously, PostgreSQL assumed that the UTC offset associated with a time zone abbreviation (such as EST) never changes in the usage of any particular locale. However this assumption fails in the real world, so introduce the ability for a zone abbreviation to represent a UTC offset that sometimes changes. Update the zone abbreviation definition files to make use of this feature in timezone locales that have changed the UTC offset of their abbreviations since 1970 (according to the IANA timezone database). In such timezones, PostgreSQL will now associate the correct UTC offset with the abbreviation depending on the given date.

Allow 5+ digit years for non-ISO timestamp and date strings, where appropriate (Bruce Momjian)

Add checks for overflow/underflow of interval values (Bruce Momjian)
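Hypothetical statements illustrating three of the items above (the table and constraint names are invented):

    -- Change deferrability of an existing foreign key in place:
    ALTER TABLE orders
        ALTER CONSTRAINT orders_customer_fk DEFERRABLE INITIALLY DEFERRED;
    -- DROP IF EXISTS is now forgiving in more cases, e.g. a missing schema:
    DROP TABLE IF EXISTS no_such_schema.no_such_table;
    -- The new pg_lsn type represents WAL positions:
    SELECT '16/B374D848'::pg_lsn;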
@@ -9889,14 +9889,14 @@

<link linkend="datatype-json"><acronym>JSON</acronym></link>

Add jsonb, a more capable and efficient data type for storing JSON data (Oleg Bartunov, Teodor Sigaev, Alexander Korotkov, Peter Geoghegan, Andrew Dunstan)

This new type allows faster access to values within a JSON document, and faster and more useful indexing of JSON columns. Scalar values in jsonb documents are stored as appropriate scalar SQL types, and the JSON document structure is pre-parsed rather than being stored as text as in the original json data type.

@@ -9919,18 +9919,18 @@

New functions include json_array_elements_text(), json_build_array(), json_object(), json_object_agg(), json_to_record(), and json_to_recordset().

Add json_typeof() to return the data type of a json value (Andrew Tipton)

@@ -9948,13 +9948,13 @@

Add pg_sleep_for(interval) and pg_sleep_until(timestamp) to specify delays more flexibly (Vik Fearing, Julien Rouhaud)

The existing pg_sleep() function only supports delays specified in seconds.

@@ -9962,7 +9962,7 @@

Add cardinality() function for arrays (Marko Tiikkaja)

@@ -9974,7 +9974,7 @@

Add SQL functions to allow large object reads/writes at arbitrary offsets (Pavel Stehule)

@@ -9982,7 +9982,7 @@

Allow unnest() to take multiple arguments, which are individually unnested then horizontally concatenated (Andrew Gierth)

@@ -9990,36 +9990,36 @@

Add functions to construct times, dates, timestamps, timestamptzs, and intervals from individual values, rather than strings (Pavel Stehule)

These functions' names are prefixed with make_, e.g. make_date().

Make to_char()'s TZ format specifier return a useful value for simple numeric time zone offsets (Tom Lane)

Previously, to_char(CURRENT_TIMESTAMP, 'TZ') returned an empty string if the timezone was set to a constant like -4.
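A few self-contained queries sketching the new functions listed above:

    SELECT json_build_array(1, 'two', true);        -- [1, "two", true]
    SELECT json_typeof('[]'::json);                 -- array
    SELECT cardinality(ARRAY[[1,2],[3,4]]);         -- 4 (total elements)
    SELECT make_date(2014, 12, 18);                 -- construct from parts
    SELECT make_interval(days := 10);
    -- Multi-argument unnest (FROM clause only); shorter arrays pad with NULLs:
    SELECT * FROM unnest(ARRAY[1,2,3], ARRAY['a','b']);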
Add timezone offset format specifier OF to to_char() (Bruce Momjian)

@@ -10027,7 +10027,7 @@

Improve the random seed used for random() (Honza Horak)

@@ -10035,7 +10035,7 @@

Tighten validity checking for Unicode code points in chr(int) (Tom Lane)

@@ -10054,18 +10054,18 @@

Add functions for looking up objects in pg_class, pg_proc, pg_type, and pg_operator that do not generate errors for non-existent objects (Yugo Nagata, Nozomi Anzai, Robert Haas)

For example, to_regclass() does a lookup in pg_class similarly to the regclass input function, but it returns NULL for a non-existent object instead of failing.

@@ -10073,7 +10073,7 @@

Add function pg_filenode_relation() to allow for more efficient lookup of relation names from filenodes (Andres Freund)

@@ -10081,8 +10081,8 @@

Add parameter_default column to information_schema.parameters view (Peter Eisentraut)

@@ -10090,7 +10090,7 @@

Make information_schema.schemata show all accessible schemas (Peter Eisentraut)

@@ -10112,7 +10112,7 @@

Add control over which rows are passed into aggregate functions via the FILTER clause (David Fetter)

@@ -10120,7 +10120,7 @@

Support ordered-set (WITHIN GROUP) aggregates (Atri Sharma, Andrew Gierth, Tom Lane)

@@ -10128,11 +10128,11 @@

Add standard ordered-set aggregates percentile_cont(), percentile_disc(), mode(), rank(), dense_rank(), percent_rank(), and cume_dist() (Atri Sharma, Andrew Gierth)

@@ -10140,7 +10140,7 @@

Support VARIADIC aggregate functions (Tom Lane)

@@ -10152,7 +10152,7 @@

This allows proper declaration in SQL of aggregates like the built-in aggregate array_agg().
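Hypothetical queries for the lookup functions and new aggregate syntax above, assuming a table like items(id int, price numeric):

    -- NULL instead of an error for a missing object:
    SELECT to_regclass('public.no_such_table');     -- NULL
    -- Aggregate over a filtered subset of rows:
    SELECT count(*) FILTER (WHERE price > 100) AS expensive
    FROM items;
    -- Ordered-set ("WITHIN GROUP") aggregate:
    SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY price) AS median
    FROM items;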
@@ -10169,20 +10169,20 @@

Add event trigger support to PL/Perl and PL/Tcl (Dimitri Fontaine)

Convert numeric values to decimal in PL/Python (Szymon Guz, Ronan Dunklau)

Previously such values were converted to Python float values, risking loss of precision.

@@ -10198,7 +10198,7 @@

Add ability to retrieve the current PL/pgSQL call stack using GET DIAGNOSTICS (Pavel Stehule, Stephen Frost)

@@ -10206,17 +10206,17 @@

Add option to display the parameters passed to a query that violated a STRICT constraint (Marko Tiikkaja)

Add variables plpgsql.extra_warnings and plpgsql.extra_errors to enable additional PL/pgSQL warnings and errors (Marko Tiikkaja, Petr Jelinek)

@@ -10232,13 +10232,13 @@

<link linkend="libpq"><application>libpq</application></link>

Make libpq's PQconndefaults() function ignore invalid service files (Steve Singer, Bruce Momjian)

@@ -10250,7 +10250,7 @@

Accept TLS protocol versions beyond TLSv1 in libpq (Marko Kreen)

@@ -10266,7 +10266,7 @@

Add option

@@ -10274,7 +10274,7 @@

Add option to analyze in stages of increasing granularity (Peter Eisentraut)

@@ -10285,8 +10285,8 @@

Make pg_resetxlog with option output current and potentially changed values (Rajeev Rastogi)

@@ -10301,19 +10301,19 @@

Make return exit code 4 for an inaccessible data directory (Amit Kapila, Bruce Momjian)

This behavior more closely matches the Linux Standard Base (LSB) Core Specification.
- On Windows, ensure that a non-absolute path specification is interpreted relative to 's current directory (Kumar Rajeev Rastogi) @@ -10327,7 +10327,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow sizeof() in ECPG + Allow sizeof() in ECPG C array definitions (Michael Meskes) @@ -10335,7 +10335,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Make ECPG properly handle nesting - of C-style comments in both C and SQL text + of C-style comments in both C and SQL text (Michael Meskes) @@ -10349,15 +10349,15 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Suppress No rows output in psql mode when the footer is disabled (Bruce Momjian) - Allow Control-C to abort psql when it's hung at + Allow Control-C to abort psql when it's hung at connection startup (Peter Eisentraut) @@ -10371,22 +10371,22 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make psql's \db+ show tablespace options + Make psql's \db+ show tablespace options (Magnus Hagander) - Make \do+ display the functions + Make \do+ display the functions that implement the operators (Marko Tiikkaja) - Make \d+ output an - OID line only if an oid column + Make \d+ output an + OID line only if an oid column exists in the table (Bruce Momjian) @@ -10398,7 +10398,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make \d show disabled system triggers (Bruce + Make \d show disabled system triggers (Bruce Momjian) @@ -10410,55 +10410,55 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Fix \copy to no longer require - a space between stdin and a semicolon (Etsuro Fujita) + Fix \copy to no longer require + a space between stdin and a semicolon (Etsuro Fujita) - Output the row count at the end of \copy, just - like COPY already did (Kumar Rajeev Rastogi) + Output the row count at the end of \copy, just + like COPY already did (Kumar Rajeev Rastogi) - Fix \conninfo to display the - server's IP address for connections using - hostaddr (Fujii Masao) + Fix \conninfo to display the + server's IP address for connections using + hostaddr (Fujii Masao) - Previously \conninfo could not display the server's - IP address in such cases. + Previously \conninfo could not display the server's + IP address in such cases. - Show the SSL protocol version in - \conninfo (Marko Kreen) + Show the SSL protocol version in + \conninfo (Marko Kreen) - Add tab completion for \pset + Add tab completion for \pset (Pavel Stehule) - Allow \pset with no arguments + Allow \pset with no arguments to show all settings (Gilles Darold) - Make \s display the name of the history file it wrote + Make \s display the name of the history file it wrote without converting it to an absolute path (Tom Lane) @@ -10482,7 +10482,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow options - , , and to be specified multiple times (Heikki Linnakangas) @@ -10493,17 +10493,17 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Optionally add IF EXISTS clauses to the DROP + Optionally add IF EXISTS clauses to the DROP commands emitted when removing old objects during a restore (Pavel Stehule) This change prevents unnecessary errors when removing old objects. - The new option for , , and is only available - when is also specified. 
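To illustrate the IF EXISTS behavior described above: with the new option the emitted DROP commands are guarded, so restoring over a database that lacks some of the objects does not fail. Object names here are hypothetical:

    DROP TABLE IF EXISTS public.orders;
    DROP SCHEMA IF EXISTS reporting;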
@@ -10518,20 +10518,20 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add pg_basebackup option - Allow pg_basebackup to relocate tablespaces in + Allow pg_basebackup to relocate tablespaces in the backup copy (Steeve Lennmark) - This is particularly useful for using pg_basebackup + This is particularly useful for using pg_basebackup on the same machine as the primary. @@ -10542,8 +10542,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - This can be controlled with the pg_basebackup - parameter. @@ -10574,13 +10574,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 No longer require function prototypes for functions marked with the - PG_FUNCTION_INFO_V1 + PG_FUNCTION_INFO_V1 macro (Peter Eisentraut) This change eliminates the need to write boilerplate prototypes. - Note that the PG_FUNCTION_INFO_V1 macro must appear + Note that the PG_FUNCTION_INFO_V1 macro must appear before the corresponding function definition to avoid compiler warnings. @@ -10588,41 +10588,41 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Remove SnapshotNow and - HeapTupleSatisfiesNow() (Robert Haas) + Remove SnapshotNow and + HeapTupleSatisfiesNow() (Robert Haas) All existing uses have been switched to more appropriate snapshot - types. Catalog scans now use MVCC snapshots. + types. Catalog scans now use MVCC snapshots. - Add an API to allow memory allocations over one gigabyte + Add an API to allow memory allocations over one gigabyte (Noah Misch) - Add psprintf() to simplify memory allocation during + Add psprintf() to simplify memory allocation during string composition (Peter Eisentraut, Tom Lane) - Support printf() size modifier z to - print size_t values (Andres Freund) + Support printf() size modifier z to + print size_t values (Andres Freund) - Change API of appendStringInfoVA() - to better use vsnprintf() (David Rowley, Tom Lane) + Change API of appendStringInfoVA() + to better use vsnprintf() (David Rowley, Tom Lane) @@ -10642,7 +10642,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve spinlock speed on x86_64 CPUs (Heikki + Improve spinlock speed on x86_64 CPUs (Heikki Linnakangas) @@ -10650,56 +10650,56 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Remove spinlock support for unsupported platforms - SINIX, Sun3, and - NS32K (Robert Haas) + SINIX, Sun3, and + NS32K (Robert Haas) - Remove IRIX port (Robert Haas) + Remove IRIX port (Robert Haas) Reduce the number of semaphores required by - builds (Robert Haas) - Rewrite duplicate_oids Unix shell script in - Perl (Andrew Dunstan) + Rewrite duplicate_oids Unix shell script in + Perl (Andrew Dunstan) - Add Test Anything Protocol (TAP) tests for client + Add Test Anything Protocol (TAP) tests for client programs (Peter Eisentraut) - Currently, these tests are run by make check-world - only if the - Add make targets and + , which allow selection of individual tests to be run (Andrew Dunstan) - Remove makefile rule (Peter Eisentraut) @@ -10709,7 +10709,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve support for VPATH builds of PGXS + Improve support for VPATH builds of PGXS modules (Cédric Villemain, Andrew Dunstan, Peter Eisentraut) @@ -10722,8 +10722,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add a configure flag that appends custom text to the - PG_VERSION string (Oskari Saarenmaa) + Add a configure flag that appends custom text to the + PG_VERSION string (Oskari Saarenmaa) @@ -10733,46 +10733,46 @@ 
Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve DocBook XML validity (Peter Eisentraut) + Improve DocBook XML validity (Peter Eisentraut) Fix various minor security and sanity issues reported by the - Coverity scanner (Stephen Frost) + Coverity scanner (Stephen Frost) Improve detection of invalid memory usage when testing - PostgreSQL with Valgrind + PostgreSQL with Valgrind (Noah Misch) - Improve sample Emacs configuration file - emacs.samples (Peter Eisentraut) + Improve sample Emacs configuration file + emacs.samples (Peter Eisentraut) - Also add .dir-locals.el to the top of the source tree. + Also add .dir-locals.el to the top of the source tree. - Allow pgindent to accept a command-line list + Allow pgindent to accept a command-line list of typedefs (Bruce Momjian) - Make pgindent smarter about blank lines + Make pgindent smarter about blank lines around preprocessor conditionals (Bruce Momjian) @@ -10780,14 +10780,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Avoid most uses of dlltool - in Cygwin and - Mingw builds (Marco Atzeri, Hiroshi Inoue) + in Cygwin and + Mingw builds (Marco Atzeri, Hiroshi Inoue) - Support client-only installs in MSVC (Windows) builds + Support client-only installs in MSVC (Windows) builds (MauMau) @@ -10814,13 +10814,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add UUID random number generator - gen_random_uuid() to + Add UUID random number generator + gen_random_uuid() to (Oskari Saarenmaa) - This allows creation of version 4 UUIDs without + This allows creation of version 4 UUIDs without requiring installation of . @@ -10828,12 +10828,12 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow to work with - the BSD or e2fsprogs UUID libraries, - not only the OSSP UUID library (Matteo Beccati) + the BSD or e2fsprogs UUID libraries, + not only the OSSP UUID library (Matteo Beccati) - This improves the uuid-ossp module's portability + This improves the uuid-ossp module's portability since it no longer has to have the increasingly-obsolete OSSP library. The module's name is now rather a misnomer, but we won't change it. @@ -10887,8 +10887,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow pg_xlogdump - to report a live log stream with (Heikki Linnakangas) @@ -10920,7 +10920,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Pass 's user name ( @@ -10934,31 +10934,31 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Remove line length limit for pgbench scripts (Sawada + Remove line length limit for pgbench scripts (Sawada Masahiko) - The previous line limit was BUFSIZ. + The previous line limit was BUFSIZ. 
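A minimal example of the pgcrypto UUID generator mentioned above:

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    SELECT gen_random_uuid();  -- returns a version 4 (random) UUID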
- Add long option names to pgbench (Fabien Coelho) + Add long option names to pgbench (Fabien Coelho) - Add pgbench option to control the transaction rate (Fabien Coelho) - Add pgbench option to print periodic progress reports (Fabien Coelho) @@ -10975,7 +10975,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make pg_stat_statements use a file, rather than + Make pg_stat_statements use a file, rather than shared memory, for query text storage (Peter Geoghegan) @@ -10987,7 +10987,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow reporting of pg_stat_statements's internal + Allow reporting of pg_stat_statements's internal query hash identifier (Daniel Farina, Sameer Thakur, Peter Geoghegan) @@ -10995,7 +10995,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add the ability to retrieve all pg_stat_statements + Add the ability to retrieve all pg_stat_statements information except the query text (Peter Geoghegan) @@ -11008,20 +11008,20 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make pg_stat_statements ignore DEALLOCATE + Make pg_stat_statements ignore DEALLOCATE commands (Fabien Coelho) - It already ignored PREPARE, as well as planning time in + It already ignored PREPARE, as well as planning time in general, so this seems more consistent. - Save the statistics file into $PGDATA/pg_stat at server - shutdown, rather than $PGDATA/global (Fujii Masao) + Save the statistics file into $PGDATA/pg_stat at server + shutdown, rather than $PGDATA/global (Fujii Masao) diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index 0f700dd5d3..2f23abe329 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -36,20 +36,20 @@ Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -88,21 +88,21 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -119,7 +119,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. 
Previously, @@ -131,7 +131,7 @@ CREATE OR REPLACE VIEW table_privileges AS - Fix crash in pg_restore when using parallel mode and + Fix crash in pg_restore when using parallel mode and using a list file to select a subset of items to restore (Fabrízio de Royes Mello) @@ -139,13 +139,13 @@ CREATE OR REPLACE VIEW table_privileges AS - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -157,18 +157,18 @@ CREATE OR REPLACE VIEW table_privileges AS This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. + assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. - Fix make check to behave correctly when invoked via a + Fix make check to behave correctly when invoked via a non-GNU make program (Thomas Munro) @@ -218,7 +218,7 @@ CREATE OR REPLACE VIEW table_privileges AS Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -226,11 +226,11 @@ CREATE OR REPLACE VIEW table_privileges AS The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -245,15 +245,15 @@ CREATE OR REPLACE VIEW table_privileges AS Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) - In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -284,15 +284,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. 
- In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -306,7 +306,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -320,16 +320,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. - However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -337,13 +337,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Make lo_put() check for UPDATE privilege on + Make lo_put() check for UPDATE privilege on the target large object (Tom Lane, Michael Paquier) - lo_put() should surely require the same permissions - as lowrite(), but the check was missing, allowing any + lo_put() should surely require the same permissions + as lowrite(), but the check was missing, allowing any user to change the data in a large object. (CVE-2017-7548) @@ -352,12 +352,12 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Correct the documentation about the process for upgrading standby - servers with pg_upgrade (Bruce Momjian) + servers with pg_upgrade (Bruce Momjian) The previous documentation instructed users to start/stop the primary - server after running pg_upgrade but before syncing + server after running pg_upgrade but before syncing the standby servers. This sequence is unsafe. 
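A sketch of the lo_put() permission check described above; the large-object OID and role name are hypothetical:

    GRANT UPDATE ON LARGE OBJECT 16405 TO app_user;  -- now required, as for lowrite()
    SELECT lo_put(16405, 0, '\x48656c6c6f');         -- write bytes at offset 0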
@@ -463,21 +463,21 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) - Fix walsender to exit promptly when client requests + Fix walsender to exit promptly when client requests shutdown (Tom Lane) - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) @@ -491,7 +491,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) @@ -539,7 +539,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -547,7 +547,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Allow window functions to be used in sub-SELECTs that + Allow window functions to be used in sub-SELECTs that are within the arguments of an aggregate function (Tom Lane) @@ -555,19 +555,19 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. - Fix dangling pointer in ALTER TABLE when there is a + Fix dangling pointer in ALTER TABLE when there is a comment on a constraint belonging to the table (David Rowley) @@ -579,44 +579,44 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... SET does (Peter Eisentraut) Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. 
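A minimal reproduction of the domain-over-array assignment fix described above, under hypothetical names:

    CREATE DOMAIN int_list AS integer[];
    CREATE TABLE t (vals int_list);
    INSERT INTO t VALUES ('{1,2,3}');
    UPDATE t SET vals[1] = 10, vals[2] = 20;  -- assigning multiple elements now works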
- Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -627,20 +627,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. - In libpq, reset GSS/SASL and SSPI authentication + In libpq, reset GSS/SASL and SSPI authentication state properly after a failed connection attempt (Michael Paquier) @@ -653,9 +653,9 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -666,8 +666,8 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump and pg_restore to - emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) + Fix pg_dump and pg_restore to + emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) @@ -678,15 +678,15 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Improve pg_dump/pg_restore's - reporting of error conditions originating in zlib + Improve pg_dump/pg_restore's + reporting of error conditions originating in zlib (Vladimir Kunschikov, Álvaro Herrera) - Fix pg_dump with the option to drop event triggers as expected (Tom Lane) @@ -699,14 +699,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -717,14 +717,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
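To see the pg_get_ruledef() case described above, one can inspect a view's ON SELECT rule after renaming one of its columns; the view name is hypothetical:

    CREATE VIEW v AS SELECT relname FROM pg_class;
    ALTER TABLE v RENAME COLUMN relname TO name;  -- rename a view column
    SELECT pg_get_ruledef(oid)                    -- now deparses correctly
    FROM pg_rewrite
    WHERE ev_class = 'v'::regclass AND rulename = '_RETURN';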
@@ -732,13 +732,13 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix dumping of outer joins with empty constraints, such as the result - of a NATURAL LEFT JOIN with no common columns (Tom Lane) + of a NATURAL LEFT JOIN with no common columns (Tom Lane) - Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -746,7 +746,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -758,20 +758,20 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_rewind to correctly handle files exceeding 2GB + Fix pg_rewind to correctly handle files exceeding 2GB (Kuntal Ghosh, Michael Paquier) - Ordinarily such files won't appear in PostgreSQL data + Ordinarily such files won't appear in PostgreSQL data directories, but they could be present in some cases. - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -783,16 +783,16 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix pg_xlogdump's computation of WAL record length + Fix pg_xlogdump's computation of WAL record length (Andres Freund) - In postgres_fdw, re-establish connections to remote - servers after ALTER SERVER or ALTER USER - MAPPING commands (Kyotaro Horiguchi) + In postgres_fdw, re-establish connections to remote + servers after ALTER SERVER or ALTER USER + MAPPING commands (Kyotaro Horiguchi) @@ -803,7 +803,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - In postgres_fdw, allow cancellation of remote + In postgres_fdw, allow cancellation of remote transaction control commands (Robert Haas, Rafia Sabih) @@ -815,14 +815,14 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Increase MAX_SYSCACHE_CALLBACKS to provide more room for + Increase MAX_SYSCACHE_CALLBACKS to provide more room for extensions (Tom Lane) - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -849,34 +849,34 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) - In MSVC builds, honor PROVE_FLAGS settings - on vcregress.pl's command line (Andrew Dunstan) + In MSVC builds, honor PROVE_FLAGS settings + on vcregress.pl's command line (Andrew Dunstan) @@ -913,7 +913,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 Also, if you are using third-party replication tools that depend - on logical decoding, see the fourth changelog entry below. + on logical decoding, see the fourth changelog entry below. 
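A sketch of the postgres_fdw reconnection behavior noted above; the server, host, and foreign table names are hypothetical:

    ALTER SERVER remote_db OPTIONS (SET host 'replica.internal');
    -- the stale connection is now discarded, so the next remote access
    -- reconnects using the updated options:
    SELECT * FROM remote_orders LIMIT 1;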
@@ -930,18 +930,18 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. @@ -965,7 +965,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. To fix, @@ -979,17 +979,17 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - Restore libpq's recognition of - the PGREQUIRESSL environment variable (Daniel Gustafsson) + Restore libpq's recognition of + the PGREQUIRESSL environment variable (Daniel Gustafsson) Processing of this environment variable was unintentionally dropped - in PostgreSQL 9.3, but its documentation remained. + in PostgreSQL 9.3, but its documentation remained. This creates a security hazard, since users might be relying on the environment variable to force SSL-encrypted connections, but that would no longer be guaranteed. Restore handling of the variable, - but give it lower priority than PGSSLMODE, to avoid + but give it lower priority than PGSSLMODE, to avoid breaking configurations that work correctly with post-9.3 code. (CVE-2017-7485) @@ -1020,7 +1020,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -1033,7 +1033,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -1041,14 +1041,14 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. 
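To check the visibility rules described above, query the view as an ordinary user; only mappings the user is entitled to see will show their options:

    SELECT srvname, usename, umoptions FROM pg_user_mappings;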
- Avoid possible crash in walsender due to failure + Avoid possible crash in walsender due to failure to initialize a string buffer (Stas Kelvich, Fujii Masao) @@ -1062,7 +1062,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 - Fix postmaster's handling of fork() failure for a + Fix postmaster's handling of fork() failure for a background worker process (Tom Lane) @@ -1079,14 +1079,14 @@ Author: Andrew Gierth Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 --> - Fix crash or wrong answers when a GROUPING SETS column's + Fix crash or wrong answers when a GROUPING SETS column's data type is hashable but not sortable (Pavan Deolasee) - Avoid applying physical targetlist optimization to custom + Avoid applying physical targetlist optimization to custom scans (Dmitry Ivanov, Tom Lane) @@ -1099,13 +1099,13 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Use the correct sub-expression when applying a FOR ALL + Use the correct sub-expression when applying a FOR ALL row-level-security policy (Stephen Frost) - In some cases the WITH CHECK restriction would be applied - when the USING restriction is more appropriate. + In some cases the WITH CHECK restriction would be applied + when the USING restriction is more appropriate. @@ -1119,19 +1119,19 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -1139,20 +1139,20 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. - Avoid dangling pointer in COPY ... TO when row-level + Avoid dangling pointer in COPY ... TO when row-level security is active for the source table (Tom Lane) @@ -1164,8 +1164,8 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Avoid accessing an already-closed relcache entry in CLUSTER - and VACUUM FULL (Tom Lane) + Avoid accessing an already-closed relcache entry in CLUSTER + and VACUUM FULL (Tom Lane) @@ -1176,14 +1176,14 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. 
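A minimal reproduction of the NO INHERIT validation fix above, under hypothetical names:

    ALTER TABLE parent
      ADD CONSTRAINT parent_x_check CHECK (x > 0) NO INHERIT NOT VALID;
    ALTER TABLE parent VALIDATE CONSTRAINT parent_x_check;
    -- validation no longer recurses to children, avoiding spurious
    -- "constraint does not exist" failures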
@@ -1197,12 +1197,12 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix integer-overflow problems in interval comparison (Kyotaro + Fix integer-overflow problems in interval comparison (Kyotaro Horiguchi, Tom Lane) - The comparison operators for type interval could yield wrong + The comparison operators for type interval could yield wrong answers for intervals larger than about 296000 years. Indexes on columns containing such large values should be reindexed, since they may be corrupt. @@ -1211,21 +1211,21 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. - Fix roundoff problems in float8_timestamptz() - and make_interval() (Tom Lane) + Fix roundoff problems in float8_timestamptz() + and make_interval() (Tom Lane) @@ -1237,14 +1237,14 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix pg_get_object_address() to handle members of operator + Fix pg_get_object_address() to handle members of operator families correctly (Álvaro Herrera) - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) @@ -1258,13 +1258,13 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. @@ -1282,21 +1282,21 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -1311,20 +1311,20 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -1336,26 +1336,26 @@ Branch: REL9_5_STABLE [7be3678a8] 2017-04-24 07:53:05 +0100 Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). 
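A sketch of the cursor_to_xml() fix above, assuming the documented five-argument form (cursor, count, nulls, tableforest, targetns):

    BEGIN;
    DECLARE c CURSOR FOR SELECT relname FROM pg_class LIMIT 3;
    SELECT cursor_to_xml('c', 3, false, false, '');  -- tableforest = false
    COMMIT;                                          -- output is now wrapped in a <table> element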
- In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) - Fix contrib/pg_trgm's extraction of trigrams from regular + Fix contrib/pg_trgm's extraction of trigrams from regular expressions (Tom Lane) @@ -1374,7 +1374,7 @@ Branch: REL9_4_STABLE [f14bf0a8f] 2017-05-06 22:19:56 -0400 Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 --> - In contrib/postgres_fdw, + In contrib/postgres_fdw, transmit query cancellation requests to the remote server (Michael Paquier, Etsuro Fujita) @@ -1405,7 +1405,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -1419,9 +1419,9 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -1434,15 +1434,15 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1495,15 +1495,15 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. 
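For the CREATE INDEX CONCURRENTLY issue above, a suspect index can be rebuilt without blocking writes; index, table, and column names are hypothetical:

    DROP INDEX CONCURRENTLY idx_orders_status;
    CREATE INDEX CONCURRENTLY idx_orders_status ON orders (status);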
@@ -1520,7 +1520,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 Backends failed to account for this snapshot when advertising their oldest xmin, potentially allowing concurrent vacuuming operations to remove data that was still needed. This led to transient failures - along the lines of cache lookup failed for relation 1255. + along the lines of cache lookup failed for relation 1255. @@ -1530,7 +1530,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 - The WAL record emitted for a BRIN revmap page when moving an + The WAL record emitted for a BRIN revmap page when moving an index tuple to a different page was incorrect. Replay would make the related portion of the index useless, forcing it to be recomputed. @@ -1538,13 +1538,13 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1615,7 +1615,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -1630,7 +1630,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 Fix incorrect updating of trigger function properties when changing a foreign-key constraint's deferrability properties with ALTER - TABLE ... ALTER CONSTRAINT (Tom Lane) + TABLE ... ALTER CONSTRAINT (Tom Lane) @@ -1646,29 +1646,29 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. - Fix ALTER TABLE ... SET DATA TYPE ... USING when child + Fix ALTER TABLE ... SET DATA TYPE ... USING when child table has different column ordering than the parent (Álvaro Herrera) - Failure to adjust the column numbering in the USING + Failure to adjust the column numbering in the USING expression led to errors, - typically attribute N has wrong type. + typically attribute N has wrong type. Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... INHERIT (Amit Langote) @@ -1681,7 +1681,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 - Fix CREATE OR REPLACE VIEW to update the view query + Fix CREATE OR REPLACE VIEW to update the view query before attempting to apply the new view options (Dean Rasheed) @@ -1694,7 +1694,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 Report correct object identity during ALTER TEXT SEARCH - CONFIGURATION (Artur Zakirov) + CONFIGURATION (Artur Zakirov) @@ -1706,8 +1706,8 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 Fix commit timestamp mechanism to not fail when queried about - the special XIDs FrozenTransactionId - and BootstrapTransactionId (Craig Ringer) + the special XIDs FrozenTransactionId + and BootstrapTransactionId (Craig Ringer) @@ -1745,28 +1745,28 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 The symptom was spurious ON CONFLICT is not supported on table - ... 
used as a catalog table errors when the target - of INSERT ... ON CONFLICT is a view with cascade option. + ... used as a catalog table errors when the target + of INSERT ... ON CONFLICT is a view with cascade option. - Fix incorrect target lists can have at most N - entries complaint when using ON CONFLICT with + Fix incorrect target lists can have at most N + entries complaint when using ON CONFLICT with wide tables (Tom Lane) - Prevent multicolumn expansion of foo.* in - an UPDATE source expression (Tom Lane) + Prevent multicolumn expansion of foo.* in + an UPDATE source expression (Tom Lane) This led to UPDATE target count mismatch --- internal - error. Now the syntax is understood as a whole-row variable, + error. Now the syntax is understood as a whole-row variable, as it would be in other contexts. @@ -1774,12 +1774,12 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -1794,15 +1794,15 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -1813,20 +1813,20 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) @@ -1834,27 +1834,27 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Improve speed of user-defined aggregates that - use array_append() as transition function (Tom Lane) + use array_append() as transition function (Tom Lane) - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) - Fix possible crash in array_position() - or array_positions() when processing arrays of records + Fix possible crash in array_position() + or array_positions() when processing arrays of records (Junseok Yang) - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -1866,8 +1866,8 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -1880,7 +1880,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Disable transform that attempted to remove no-op AT TIME - ZONE conversions (Tom Lane) + ZONE conversions (Tom Lane) @@ -1891,15 +1891,15 @@ Branch: 
REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Avoid discarding interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. @@ -1919,28 +1919,28 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) - Fix pg_restore with to behave more sanely if an archive contains - unrecognized DROP commands (Tom Lane) + unrecognized DROP commands (Tom Lane) This doesn't fix any live bug, but it may improve the behavior in - future if pg_restore is used with an archive - generated by a later pg_dump version. + future if pg_restore is used with an archive + generated by a later pg_dump version. - Fix pg_basebackup's rate limiting in the presence of + Fix pg_basebackup's rate limiting in the presence of slow I/O (Antonin Houska) @@ -1953,15 +1953,15 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Fix pg_basebackup's handling of - symlinked pg_stat_tmp and pg_replslot + Fix pg_basebackup's handling of + symlinked pg_stat_tmp and pg_replslot subdirectories (Magnus Hagander, Michael Paquier) - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -1969,7 +1969,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Fix possible mishandling of expanded arrays in domain check - constraints and CASE execution (Tom Lane) + constraints and CASE execution (Tom Lane) @@ -2001,21 +2001,21 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. 
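A minimal illustration of the interval-cast fix described above:

    SELECT CAST(INTERVAL '1 year 5 months' AS INTERVAL YEAR);
    -- now yields '1 year'; the months field is cleared rather than
    -- the cast being discarded as a no-op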
- Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -2027,23 +2027,23 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -2054,28 +2054,28 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) - Teach contrib/dblink to ignore irrelevant server options - when it uses a contrib/postgres_fdw foreign server as + Teach contrib/dblink to ignore irrelevant server options + when it uses a contrib/postgres_fdw foreign server as the source of connection options (Corey Huinker) Previously, if the foreign server object had options that were not - also libpq connection options, an error occurred. + also libpq connection options, an error occurred. - Fix portability problems in contrib/pageinspect's + Fix portability problems in contrib/pageinspect's functions for GIN indexes (Peter Eisentraut, Tom Lane) @@ -2102,7 +2102,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -2165,7 +2165,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 crash recovery, or to be written incorrectly on a standby server. Bogus entries in a free space map could lead to attempts to access pages that have been truncated away from the relation itself, typically - producing errors like could not read block XXX: + producing errors like could not read block XXX: read only 0 of 8192 bytes. Checksum failures in the visibility map are also possible, if checksumming is enabled. @@ -2173,7 +2173,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Procedures for determining whether there is a problem and repairing it if so are discussed at - . + . @@ -2191,7 +2191,7 @@ Branch: REL9_4_STABLE [a69443564] 2016-09-03 13:28:53 -0400 - The typical symptom was unexpected GIN leaf action errors + The typical symptom was unexpected GIN leaf action errors during WAL replay. @@ -2206,13 +2206,13 @@ Branch: REL9_4_STABLE [8778da2af] 2016-09-09 15:54:29 -0300 Branch: REL9_3_STABLE [dfe7121df] 2016-09-09 15:54:29 -0300 --> - Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that + Fix SELECT FOR UPDATE/SHARE to correctly lock tuples that have been updated by a subsequently-aborted transaction (Álvaro Herrera) - In 9.5 and later, the SELECT would sometimes fail to + In 9.5 and later, the SELECT would sometimes fail to return such tuples at all. A failure has not been proven to occur in earlier releases, but might be possible with concurrent updates. 
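A sketch of the dblink/postgres_fdw interaction noted above; server, host, and connection names are hypothetical:

    CREATE SERVER pgsrv FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'db.internal', dbname 'app', use_remote_estimate 'true');
    -- dblink now skips non-libpq options such as use_remote_estimate:
    SELECT dblink_connect('c1', 'pgsrv');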
@@ -2248,13 +2248,13 @@ Branch: REL9_5_STABLE [94bc30725] 2016-08-17 17:03:36 -0700 --> Fix deletion of speculatively inserted TOAST tuples when backing out - of INSERT ... ON CONFLICT (Oskari Saarenmaa) + of INSERT ... ON CONFLICT (Oskari Saarenmaa) In the race condition where two transactions try to insert conflicting tuples at about the same time, the loser would fail with - an attempted to delete invisible tuple error if its + an attempted to delete invisible tuple error if its insertion included any TOAST'ed fields. @@ -2262,7 +2262,7 @@ Branch: REL9_5_STABLE [94bc30725] 2016-08-17 17:03:36 -0700 Don't throw serialization errors for self-conflicting insertions - in INSERT ... ON CONFLICT (Thomas Munro, Peter Geoghegan) + in INSERT ... ON CONFLICT (Thomas Munro, Peter Geoghegan) @@ -2300,29 +2300,29 @@ Branch: REL9_5_STABLE [46bd14a10] 2016-08-24 22:20:01 -0400 Branch: REL9_4_STABLE [566afa15c] 2016-08-24 22:20:01 -0400 --> - Fix query-lifespan memory leak in a bulk UPDATE on a table - with a PRIMARY KEY or REPLICA IDENTITY index + Fix query-lifespan memory leak in a bulk UPDATE on a table + with a PRIMARY KEY or REPLICA IDENTITY index (Tom Lane) - Fix COPY with a column name list from a table that has + Fix COPY with a column name list from a table that has row-level security enabled (Adam Brightwell) - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. @@ -2337,20 +2337,20 @@ Branch: REL9_2_STABLE [ceb005319] 2016-08-12 12:13:04 -0400 --> Suppress printing of zeroes for unmeasured times - in EXPLAIN (Maksim Milyutin) + in EXPLAIN (Maksim Milyutin) Certain option combinations resulted in printing zero values for times that actually aren't ever measured in that combination. Our general - policy in EXPLAIN is not to print such fields at all, so + policy in EXPLAIN is not to print such fields at all, so do that consistently in all cases. - Fix statistics update for TRUNCATE in a prepared + Fix statistics update for TRUNCATE in a prepared transaction (Stas Kelvich) @@ -2367,37 +2367,37 @@ Branch: REL9_2_STABLE [eaf6fe7fa] 2016-09-09 11:45:40 +0100 Branch: REL9_1_STABLE [3ed7f54bc] 2016-09-09 11:46:03 +0100 --> - Fix timeout length when VACUUM is waiting for exclusive + Fix timeout length when VACUUM is waiting for exclusive table lock so that it can truncate the table (Simon Riggs) The timeout was meant to be 50 milliseconds, but it was actually only - 50 microseconds, causing VACUUM to give up on truncation + 50 microseconds, causing VACUUM to give up on truncation much more easily than intended. Set it to the intended value. - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. + INHERIT child constraint with an inherited constraint. 
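To observe the EXPLAIN XML fix above (assuming track_io_timing is the setting whose name was elided):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS, FORMAT XML) SELECT count(*) FROM pg_class;
    -- I/O timing tags are now emitted as <I-O-Read-Time>, which is valid XML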
Show a sensible value in pg_settings.unit for min_wal_size and max_wal_size (Tom Lane)

Remove artificial restrictions on the values accepted by numeric_in() and numeric_recv() (Tom Lane)

We allow numeric values up to the limit of the storage format (more than 1e100000), so it seems fairly pointless that numeric_in() rejected scientific-notation exponents above 1000. Likewise, it was silly for numeric_recv() to reject more than 1000 digits in an input value. (A sketch of the relaxed input rules appears at the end of this group of entries.)

In the worst case, this could result in a corrupt btree index, which would need to be rebuilt using REINDEX. However, the situation is believed to be rare.

Disallow starting a standalone backend with standby_mode turned on (Michael Paquier)

This failure to reset all of the fields of the slot could prevent VACUUM from removing dead tuples.

This avoids possible failures during munmap() on systems with atypical default huge page sizes. Except in crash-recovery cases, there were no ill effects other than a log message.

Previously, the same value would be chosen every time, because it was derived from random() but srandom() had not yet been called. While relatively harmless, this was not the intended behavior.

Windows sometimes returns ERROR_ACCESS_DENIED rather than ERROR_ALREADY_EXISTS when there is an existing segment. This led to postmaster startup failure due to believing that the former was an unrecoverable error.
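As referenced above, the relaxed numeric input rules, sketched (values are arbitrary):

    SELECT '1e2000'::numeric;    -- exponent above 1000: previously rejected, now accepted
    SELECT '1e100000'::numeric;  -- still within the storage format's limit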
Fix PL/pgSQL to not misbehave with parameters and local variables of type int2vector or oidvector (Tom Lane)

Don't try to share SSL contexts across multiple connections in libpq (Heikki Linnakangas)

Avoid corner-case memory leak in libpq (Tom Lane)

The reported problem involved leaking an error report during PQreset(), but there might be related cases.

Make ecpg's --help and --version options work consistently with our other executables (Haribabu Kommi)

Fix pgbench's calculation of average latency (Fabien Coelho)

The calculation was incorrect when there were \sleep commands in the script, or when the test duration was specified in number of transactions rather than total time.

In pg_upgrade, check library loadability in name order (Tom Lane)

In pg_dump, never dump range constructor functions (Tom Lane)

This oversight led to pg_upgrade failures with extensions containing range types, due to duplicate creation of the constructor functions.

In pg_dump with …

Make pg_receivexlog work correctly with --synchronous without slots (Gabriele Bartolini)

Disallow specifying both …

Make pg_rewind turn off synchronous_commit in its session on the source server (Michael Banck, Michael Paquier)

This allows pg_rewind to work even when the source server is using synchronous replication that is not working for some reason.
In pg_xlogdump, retry opening new WAL segments when using the --follow option (Magnus Hagander)

Fix pg_xlogdump to cope with a WAL file that begins with a continuation record spanning more than one page (Pavan Deolasee)

Fix contrib/pg_buffercache to work when shared_buffers exceeds 256GB (KaiGai Kohei)

Fix contrib/intarray/bench/bench.pl to print the results of the EXPLAIN it does when given the -e option (Daniel Gustafsson)

When PostgreSQL has been configured with …

In MSVC builds, include pg_recvlogical in a client-only installation (MauMau)

If a dynamic time zone abbreviation does not match any entry in the referenced time zone, treat it as equivalent to the time zone name. This avoids unexpected failures when IANA removes abbreviations from their time zone database, as they did in tzdata release 2016f and seem likely to do again in the future. The consequences were not limited to not recognizing the individual abbreviation; any mismatch caused the pg_timezone_abbrevs view to fail altogether.

Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, …

… or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. But they will not be shown in the pg_timezone_names view nor used for output.

In this update, AMT is no longer shown as being in use to mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4.
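For instance (the timestamp value is invented):

    SET timezone_abbreviations = 'Default';
    SELECT '2016-12-01 12:00 AMT'::timestamptz;  -- AMT now read as Amazon Time (UTC-4), not UTC+4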
Fix possible mis-evaluation of nested CASE-WHEN expressions (Heikki Linnakangas, Michael Paquier, Tom Lane)

A CASE expression appearing within the test value subexpression of another CASE could become confused about whether its own test value was null or not. Also, inlining of a SQL function implementing the equality operator used by a CASE expression could result in passing the wrong test value to functions called within a CASE expression in the SQL function's body. If the test values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. (CVE-2016-5423)

Numerous places in vacuumdb and other client programs could become confused by database and role names containing double quotes or backslashes. Tighten up quoting rules to make that safe. Also, ensure that when a conninfo string is used as a database name …

Fix handling of paired double quotes in psql's \connect and \password commands to match the documentation.

Introduce a new …

pg_dumpall now refuses to deal with database and role names containing carriage returns or newlines, as it seems impractical to quote those characters safely on Windows. In future we may reject such names on the server side, but that step has not been taken yet.

These are considered security fixes because crafted object names containing special characters could have been used to execute commands with superuser privileges the next time a superuser executes pg_dumpall or other routine maintenance operations. (CVE-2016-5424)

Fix corner-case misbehaviors for IS NULL/IS NOT NULL applied to nested composite values (Andrew Gierth, Tom Lane)

The SQL standard specifies that IS NULL should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS NULL yields TRUE), but this is not meant to apply recursively (thus ROW(NULL, ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got this right, but certain planner optimizations treated the test as recursive (thus producing TRUE in both cases), and contrib/postgres_fdw could produce remote queries that misbehaved similarly.
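The intended, non-recursive behavior can be checked directly:

    SELECT ROW(NULL, NULL) IS NULL;             -- true: every top-level field is null
    SELECT ROW(NULL, ROW(NULL, NULL)) IS NULL;  -- false: the inner row is not itself a null field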
Fix "unrecognized node type" error for INSERT ... ON CONFLICT within a recursive CTE (a WITH item) (Peter Geoghegan)

Fix INSERT ... ON CONFLICT to successfully match index expressions or index predicates that are simplified during the planner's expression preprocessing phase (Tom Lane)

Correctly handle violations of exclusion constraints that apply to the target table of an INSERT ... ON CONFLICT command, but are not one of the selected arbiter indexes (Tom Lane)

Fix INSERT ... ON CONFLICT to not fail if the target table has a unique index on OID (Tom Lane)

Make the inet and cidr data types properly reject IPv6 addresses with too many colon-separated fields (Tom Lane)

Prevent crash in close_ps() (the point ## lseg operator) for NaN input coordinates (Tom Lane)

Avoid possible crash in pg_get_expr() when inconsistent values are passed to it (Michael Paquier, Thomas Munro)

Fix several one-byte buffer over-reads in to_number() (Peter Eisentraut)

In several cases the to_number() function would read one more character than it should from the input string. There is a small chance of a crash, if the input happens to be adjacent to the end of memory.
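A typical to_number() call of the kind exercised by the fix (the pattern follows the documentation's examples):

    SELECT to_number('12,454.8-', '99G999D9S');  -- returns -12454.8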
Do not run the planner on the query contained in CREATE MATERIALIZED VIEW or CREATE TABLE AS when WITH NO DATA is specified (Michael Paquier, Tom Lane)

Avoid unsafe intermediate state during expensive paths through heap_update() (Masahiko Sawada, Andres Freund)

Avoid unnecessary "could not serialize access" errors when acquiring FOR KEY SHARE row locks in serializable mode (Álvaro Herrera)

Make sure "expanded" datums returned by a plan node are read-only (Tom Lane)

This avoids failures in some cases where the result of a lower plan node is referenced in multiple places in upper nodes. So far as core PostgreSQL is concerned, only array values returned by PL/pgSQL functions are at risk; but extensions might use expanded datums for other things.

Avoid crash in postgres -C when the specified variable has a null string value (Michael Paquier)

Avoid consuming a transaction ID during VACUUM (Alexander Korotkov)

Some cases in VACUUM unnecessarily caused an XID to be assigned to the current transaction. Normally this is negligible, but if one is up against the XID wraparound limit, consuming more XIDs during anti-wraparound vacuums is a very bad thing.

The usual symptom of this bug is errors like "MultiXactId NNN has not been created yet -- apparent wraparound".
When a manual ANALYZE specifies a column list, don't reset the table's changes_since_analyze counter (Tom Lane)

Fix ANALYZE's overestimation of n_distinct for a unique or nearly-unique column with many null entries (Tom Lane)

This mistake prevented VACUUM from completing in some cases involving corrupt b-tree indexes.

Fix building of large (bigger than shared_buffers) hash indexes (Tom Lane)

Fix possible crash during a nearest-neighbor (ORDER BY distance) indexscan on a contrib/btree_gist index on an interval column (Peter Geoghegan)

Fix "PANIC: failed to add BRIN tuple" error when attempting to update a BRIN index entry (Álvaro Herrera)

Fix PL/pgSQL's handling of the INTO clause within IMPORT FOREIGN SCHEMA commands (Tom Lane)

Fix contrib/btree_gin to handle the smallest possible bigint value correctly (Peter Eisentraut)

It's planned to switch to two-part instead of three-part server version numbers for releases after 9.6. Make sure that PQserverVersion() returns the correct value for such cases.
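PQserverVersion() reports the same integer the server exposes as server_version_num, so the two-part numbering can be checked from SQL as well (the version value shown is illustrative):

    SELECT current_setting('server_version_num');  -- e.g. 100001 for a two-part version 10.1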
Fix ecpg's code for "unsigned long long" array elements (Michael Meskes)

In pg_dump with both …

Improve handling of SIGTERM/control-C in parallel pg_dump and pg_restore (Tom Lane)

Make sure that the worker processes will exit promptly, and also arrange to send query-cancel requests to the connected backends, in case they are doing something long-running such as a CREATE INDEX.

Fix error reporting in parallel pg_dump and pg_restore (Tom Lane)

Previously, errors reported by pg_dump or pg_restore worker processes might never make it to the user's console, because the messages went through the master process, and there were various deadlock scenarios that would prevent the master process from passing on the messages. Instead, just print everything to stderr. In some cases this will result in duplicate messages (for instance, if all the workers report a server shutdown), but that seems better than no message.

Ensure that parallel pg_dump or pg_restore on Windows will shut down properly after an error (Kyotaro Horiguchi)

Make parallel pg_dump fail cleanly when run against a standby server (Magnus Hagander)

This usage is not supported unless --no-synchronized-snapshots is specified, but the error was not handled very well.
Make pg_dump behave better when built without zlib support (Kyotaro Horiguchi)

Make pg_basebackup accept -Z 0 as specifying no compression (Fujii Masao)

Be more predictable about reporting "statement timeout" versus "lock timeout" (Tom Lane)

On heavily loaded machines, the regression tests sometimes failed due to reporting "lock timeout" even though the statement timeout should have occurred first.

Update our copy of the timezone code to match IANA's tzcode release 2016c (Tom Lane)

Update time zone data files to tzdata release 2016f for DST law changes in Kemerovo and Novosibirsk, plus historical corrections for Azerbaijan, Belarus, and Morocco.

… using OpenSSL within a single process and not all the code involved follows the same rules for when to clear the error queue. Failures have been reported specifically when a client application uses SSL connections in libpq concurrently with SSL connections using the PHP, Python, or Ruby wrappers for OpenSSL. It's possible for similar problems to arise within the server as well, if an extension module establishes an outgoing SSL connection.

Fix "failed to build any N-way joins" planner error with a full join enclosed in the right-hand side of a left join (Tom Lane)

Given a three-or-more-way equivalence class of variables, such as X.X = Y.Y = Z.Z, it was possible for the planner to omit some of the tests needed to enforce that all the variables are actually equal, leading to join rows being output that didn't satisfy the WHERE clauses. For various reasons, erroneous plans were seldom selected in practice, so that this bug has gone undetected for a long time.

An example is that SELECT (ARRAY[])::text[] gave an error, though it worked without the parentheses.
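The now-working form, next to the one that always worked:

    SELECT (ARRAY[])::text[];  -- previously failed with an error
    SELECT ARRAY[]::text[];    -- has always worked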
The memory leak would typically not amount to much in simple queries, but it could be very substantial during a large GIN index build with high maintenance_work_mem.

Fix possible misbehavior of TH, th, and Y,YYY format codes in to_timestamp() (Tom Lane)

Fix dumping of rules and views in which the array argument of a "value operator ANY (array)" construct is a sub-SELECT (Tom Lane)

Disallow newlines in ALTER SYSTEM parameter values (Tom Lane)

The configuration-file parser doesn't support embedded newlines in string literals, so we mustn't allow them in values to be inserted by ALTER SYSTEM. (A sketch of the new behavior appears at the end of this group of entries.)

Fix ALTER TABLE ... REPLICA IDENTITY USING INDEX to work properly if an index on OID is selected (David Rowley)

Make pg_regress use a startup timeout from the PGCTLTIMEOUT environment variable, if that's set (Tom Lane)

This is for consistency with a behavior recently added to pg_ctl; it eases automated testing on slow machines.

Fix pg_upgrade to correctly restore extension membership for operator families containing only one operator class (Tom Lane)

In such a case, the operator family was restored into the new database, but it was no longer marked as part of the extension. This had no immediate ill effects, but would cause later pg_dump runs to emit output that would cause (harmless) errors on restore.
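As referenced above, a sketch of the ALTER SYSTEM newline restriction (the parameter choice is arbitrary):

    ALTER SYSTEM SET work_mem = '64MB';     -- accepted as before
    ALTER SYSTEM SET work_mem = E'64MB\n';  -- now rejected, rather than writing a value that
                                            -- the configuration-file parser cannot read back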
Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old (Tom Lane)

pg_upgrade had special-case code to handle the situation where the new PostgreSQL version thinks that a table should have a TOAST table while the old version did not. That code was broken, so remove it, and instead do nothing in such cases; there seems no reason to believe that we can't get along fine without …

Reduce the number of SysV semaphores used by a build configured with --disable-spinlocks (Tom Lane)

Rename internal function strtoi() to strtoint() to avoid conflict with a NetBSD library function (Thomas Munro)

Fix reporting of errors from bind() and listen() system calls on Windows (Tom Lane)

Fix putenv() to work properly with Visual Studio 2013 (Michael Paquier)

Avoid possibly-unsafe use of Windows' FormatMessage() function (Christian Ullrich)

Use the FORMAT_MESSAGE_IGNORE_INSERTS flag where appropriate. No live bug is known to exist here, but it seems like a good idea to be careful.

Update time zone data files to tzdata release 2016d for DST law changes in Russia and Venezuela. There are new zone names Europe/Kirov and Asia/Tomsk to reflect the fact that these regions now have different time zone histories from adjacent regions.

However, you may need to REINDEX some indexes after applying the update, as per the first changelog entry below.

Disable abbreviated keys for string sorting in non-C locales (Robert Haas)

PostgreSQL 9.5 introduced logic for speeding up comparisons of string data types by using the standard C library function strxfrm() as a substitute for strcoll().
It now emerges that most versions of glibc (Linux's implementation of the C library) have buggy implementations of strxfrm() that, in some locales, can produce string comparison results that do not match strcoll(). Until this problem can be better characterized, disable the optimization in all non-C locales. (C locale is safe since it uses neither strcoll() nor strxfrm().)

Unfortunately, this problem affects not only sorting but also entry ordering in B-tree indexes, which means that B-tree indexes on text, varchar, or char columns may now be corrupt if they sort according to an affected locale and were built or modified under PostgreSQL 9.5.0 or 9.5.1. Users should REINDEX indexes that might be affected.

It is not possible at this time to give an exhaustive list of known-affected locales. C locale is known safe, and there is no evidence of trouble in English-based locales such as en_US, but some other popular locales such as de_DE are affected in most glibc versions.

Add must-be-superuser checks to some new contrib/pageinspect functions (Andreas Seltenreich)

Most functions in the pageinspect extension that inspect bytea values disallow calls by non-superusers, but brin_page_type() and brin_metapage_info() failed to do so. Passing contrived bytea values to them might crash the server or disclose a few bytes of server memory. Add the missing permissions checks to prevent misuse. (CVE-2016-3065)

Fix incorrect handling of indexed ROW() comparisons (Simon Riggs)

Flaws in a minor optimization introduced in 9.5 caused incorrect results if the ROW() comparison matches the index ordering partially but not exactly (for example, differing column order, or the index contains both ASC and DESC columns). Pending a better solution, the optimization has been removed.

Fix incorrect handling of NULL index entries in indexed ROW() comparisons (Tom Lane)

An index search using a row comparison such as ROW(a, b) > ROW('x', 'y') would stop upon reaching a NULL entry in the b column, ignoring the fact that there might be non-NULL b values associated with later values of a.
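The affected kind of search, sketched (table and comparison values are invented):

    SELECT * FROM t
    WHERE ROW(a, b) > ROW('x', 'y')
    ORDER BY a, b;  -- an index scan no longer stops early at a NULL b value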
Avoid unlikely data-loss scenarios due to renaming files without adequate fsync() calls before and after (Michael Paquier, Tomas Vondra, Andres Freund)

Fix incorrect behavior when rechecking a just-modified row in a query that does SELECT FOR UPDATE/SHARE and contains some relations that need not be locked (Tom Lane)

Rows from non-locked relations were incorrectly treated as containing all NULLs during the recheck, which could result in incorrectly deciding that the updated row no longer passes the WHERE condition, or in incorrectly outputting NULLs.

Fix bug in json_to_record() when a field of its input object contains a sub-object with a field name matching one of the requested output column names (Tom Lane)

Fix nonsense result from two-argument form of jsonb_object() when called with empty arrays (Michael Paquier, Andrew Dunstan)

Fix misbehavior in jsonb_set() when converting a path array element into an integer for use as an array subscript (Michael Paquier)

Fix misformatting of negative time zone offsets by to_char()'s OF format code (Thomas Munro, Tom Lane) (see the sketch after this group of entries)

Fix possible incorrect logging of waits done by INSERT ... ON CONFLICT (Peter Geoghegan)

Previously, standby servers would delay application of WAL records in response to recovery_min_apply_delay even while replaying the initial portion of WAL needed to make their database state valid. Since the standby is useless until it's reached a consistent database state, this was deemed unhelpful.

Correctly handle cases where pg_subtrans is close to XID wraparound during server startup (Jeff Janes)

Trouble cases included tuples larger than one page when replica identity is FULL, UPDATEs that change a primary key within a transaction large enough to be spooled to disk, incorrect reports of "subxact logged without previous toplevel record", and incorrect reporting of a transaction's commit time.
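As referenced above, a sketch of the OF format code whose negative-offset output was fixed (the zone choice is arbitrary):

    SET timezone = 'America/New_York';
    SELECT to_char(current_timestamp, 'YYYY-MM-DD HH24:MI OF');  -- negative offset such as -05
                                                                 -- is now rendered correctly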
Fix planner error with nested security barrier views when the outer view has a WHERE clause containing a correlated subquery (Dean Rasheed)

Fix corner-case crash due to trying to free localeconv() output strings more than once (Tom Lane)

Fix parsing of affix files for ispell dictionaries (Tom Lane)

The code could go wrong if the affix file contained any characters whose byte length changes during case-folding, for example I in Turkish UTF8 locales.

Avoid use of sscanf() to parse ispell dictionary files (Artur Zakirov)

Fix psql's tab completion logic to handle multibyte characters properly (Kyotaro Horiguchi, Robert Haas)

Fix psql's tab completion for SECURITY LABEL (Tom Lane)

Pressing TAB after SECURITY LABEL might cause a crash or offering of inappropriate keywords.

Make pg_ctl accept a wait timeout from the PGCTLTIMEOUT environment variable, if none is specified on the command line (Noah Misch)

Fix incorrect test for Windows service status in pg_ctl (Manuel Mathar)

The previous set of minor releases attempted to fix pg_ctl to properly determine whether to send log messages to Window's Event Log, but got the test backwards.
Fix pgbench to correctly handle the combination of -C and -M prepared options (Tom Lane)

In pg_upgrade, skip creating a deletion script when the new data directory is inside the old data directory (Bruce Momjian)

Fix multiple mistakes in the statistics returned by contrib/pgstattuple's pgstatindex() function (Tom Lane)

Remove dependency on psed in MSVC builds, since it's no longer provided by core Perl (Michael Paquier, Andrew Dunstan)

Update time zone data files to tzdata release 2016c for DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions), plus historical corrections for Lithuania, Moldova, and Russia.

Avoid pushdown of HAVING clauses when grouping sets are used (Andrew Gierth)

Fix deparsing of ON CONFLICT arbiter WHERE clauses (Peter Geoghegan)

Make %h and %r escapes in log_line_prefix work for messages emitted due to log_connections (Tom Lane)

Previously, %h/%r started to work just after a new session had emitted the "connection received" log message; now they work for that message too.
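For example, a logging setup of this shape now records the client host even for the very first connection message (the prefix string is arbitrary):

    ALTER SYSTEM SET log_connections = on;
    ALTER SYSTEM SET log_line_prefix = '%m [%p] %h ';  -- %h is now filled in for
                                                       -- "connection received" as well
    SELECT pg_reload_conf();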
Fix psql's \det command to interpret its pattern argument the same way as other \d commands with potentially schema-qualified patterns do (Reece Hart)

In pg_ctl on Windows, check service status to decide where to send output, rather than checking if standard output is a terminal (Michael Paquier)

Fix assorted corner-case bugs in pg_dump's processing of extension member objects (Tom Lane)

Fix improper quoting of domain constraint names in pg_dump (Elvis Pranskevichus)

Make pg_dump mark a view's triggers as needing to be processed after its rule, to prevent possible failure during parallel pg_restore (Tom Lane)

Install guards in pgbench against corner-case overflow conditions during evaluation of script-specified division or modulo operators (Fabien Coelho, Michael Paquier)

Suppress useless warning message when pg_receivexlog connects to a pre-9.4 server (Marco Nenciarini)

Avoid dump/reload problems when using both plpython2 and plpython3 (Tom Lane)

In principle, both versions of PL/Python can be used in the same database, though not in the same session (because the two versions of libpython cannot safely be used concurrently). However, pg_restore and pg_upgrade both do things that can fall foul of the same-session restriction. Work around that by changing the timing of the check.

Fix PL/Python regression tests to pass with Python 3.5 (Peter Eisentraut)

Prevent certain PL/Java parameters from being set by non-superusers (Noah Misch)

This change mitigates a PL/Java security bug (CVE-2016-0766), which was fixed in PL/Java by marking these parameters as superuser-only. To fix the security hazard for sites that update PostgreSQL more frequently than PL/Java, make the core code aware of them also.
Fix ecpg-supplied header files to not contain comments continued from a preprocessor directive line onto the next line (Michael Meskes)

Such a comment is rejected by ecpg. It's not yet clear whether ecpg itself should be changed.

Fix hstore_to_json_loose()'s test for whether an hstore value can be converted to a JSON number (Tom Lane)

In contrib/postgres_fdw, fix bugs triggered by use of tableoid in data-modifying commands (Etsuro Fujita, Robert Haas)

Fix ill-advised restriction of NAMEDATALEN to be less than 256 (Robert Haas, Tom Lane)

Ensure that dynloader.h is included in the installed header files in MSVC builds (Bruce Momjian, Michael Paquier)

Update time zone data files to tzdata release 2016a for DST law changes in Cayman Islands, Metlakatla, and Trans-Baikal Territory (Zabaykalsky Krai), plus historical corrections for Pakistan.

Overview

Major enhancements in PostgreSQL 9.5 include:

- Allow INSERTs that would generate constraint conflicts to be turned into UPDATEs or ignored

- Add GROUP BY analysis features GROUPING SETS, CUBE and ROLLUP

- Add row-level security control

- Create mechanisms for tracking the progress of replication, including methods for identifying the origin of individual changes during logical replication

- Add Block Range Indexes (BRIN)

Adjust operator precedence to match the SQL standard (Tom Lane)

The precedence of <=, >= and <> has been reduced to match that of <, > and =. The precedence of IS tests (e.g., x IS NULL) has been reduced to be just below these six comparison operators.
Also, multi-keyword operators beginning with NOT now have the precedence of their base operator (for example, NOT BETWEEN now has the same precedence as BETWEEN) whereas before they had inconsistent precedence, behaving like NOT with respect to their left operand but like their base operator with respect to their right operand. The new configuration parameter operator_precedence_warning can be …

Change pg_ctl's default shutdown mode from smart to fast (Bruce Momjian)

Use assignment cast behavior for data type conversions in PL/pgSQL assignments, rather than converting to and from text (Tom Lane)

This change causes conversions of Booleans to strings to produce true or false, not t or f. Other type conversions may succeed in more cases than before; for example, assigning a numeric value 3.9 to an integer variable will now assign 4 rather than failing. If no assignment-grade cast is defined for the particular source and destination types, PL/pgSQL will fall back to its old I/O conversion behavior.

Allow characters in server command-line options to be escaped with a backslash (Andres Freund)

Formerly, spaces in the options string always separated options, so there was no way to include a space in an option value. Including a backslash in an option value now requires writing \\.

Change the default value of the GSSAPI include_realm parameter to 1, so that by default the realm is not removed from a GSS or SSPI principal name (Stephen Frost)

Replace configuration parameter checkpoint_segments with min_wal_size and max_wal_size (Heikki Linnakangas)

… max_wal_size = (3 * checkpoint_segments) * 16MB
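As a worked example of that equivalence formula (the old setting is invented), a cluster that previously ran with checkpoint_segments = 32 would start from:

    ALTER SYSTEM SET max_wal_size = '1536MB';  -- (3 * 32) * 16MB = 1536MB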
Control the Linux OOM killer via new environment variables PG_OOM_ADJUST_FILE and PG_OOM_ADJUST_VALUE, instead of compile-time options LINUX_OOM_SCORE_ADJ and LINUX_OOM_ADJ (Gurjeet Singh)

Decommission server configuration parameter ssl_renegotiation_limit, which was deprecated in earlier releases (Andres Freund)

While SSL renegotiation is a good idea in theory, it has caused enough bugs to be considered a net negative in practice, and it is due to be removed from future versions of the relevant standards. We have therefore removed support for it from PostgreSQL. The ssl_renegotiation_limit parameter still exists, but cannot be set to anything but zero (disabled). It's not documented anymore, either.

Remove server configuration parameter autocommit, which was already deprecated and non-operational (Tom Lane)

Remove the pg_authid catalog's rolcatupdate field, as it had no usefulness (Adam Brightwell)

The pg_stat_replication system view's sent field is now NULL, not zero, when it has no valid value (Magnus Hagander)

Allow json and jsonb array extraction operators to accept negative subscripts, which count from the end of JSON arrays (Peter Geoghegan, Andrew Dunstan)

Previously, these operators returned NULL for negative subscripts.

Add Block Range Indexes (BRIN) (Álvaro Herrera)

BRIN indexes store only summary data (such as minimum and maximum values) for ranges of heap blocks. They are therefore very compact and cheap to update; but if the data is naturally clustered, they can still provide substantial speedup of searches.
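A minimal sketch of creating such an index (table and column names are invented):

    -- Compact, summary-only index; most effective when rows are physically
    -- ordered by created_at, e.g. append-only timestamped data.
    CREATE INDEX measurements_created_brin ON measurements USING brin (created_at);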
@@ -6018,7 +6018,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB Allow queries to perform accurate distance filtering of bounding-box-indexed objects (polygons, circles) using GiST indexes (Alexander Korotkov, Heikki + linkend="GiST">GiST indexes (Alexander Korotkov, Heikki Linnakangas) @@ -6038,7 +6038,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2015-03-30 [0633a60] Heikki..: Add index-only scan support to range type GiST .. --> - Allow GiST indexes to perform index-only + Allow GiST indexes to perform index-only scans (Anastasia Lubennikova, Heikki Linnakangas, Andreas Karlsson) @@ -6049,14 +6049,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add configuration parameter - to control the size of GIN pending lists (Fujii Masao) + to control the size of GIN pending lists (Fujii Masao) This value can also be set on a per-index basis as an index storage parameter. Previously the pending-list size was controlled by , which was awkward because - appropriate values for work_mem are often much too large + appropriate values for work_mem are often much too large for this purpose. @@ -6067,7 +6067,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Issue a warning during the creation of hash indexes because they are not + linkend="indexes-types">hash indexes because they are not crash-safe (Bruce Momjian) @@ -6088,8 +6088,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-13 [78efd5c] Robert..: Extend abbreviated key infrastructure to datum .. --> - Improve the speed of sorting of varchar, text, - and numeric fields via abbreviated keys + Improve the speed of sorting of varchar, text, + and numeric fields via abbreviated keys (Peter Geoghegan, Andrew Gierth, Robert Haas) @@ -6101,8 +6101,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Extend the infrastructure that allows sorting to be performed by inlined, non-SQL-callable comparison functions to - cover CREATE INDEX, REINDEX, and - CLUSTER (Peter Geoghegan) + cover CREATE INDEX, REINDEX, and + CLUSTER (Peter Geoghegan) @@ -6163,7 +6163,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This particularly addresses scalability problems when running on - systems with multiple CPU sockets. + systems with multiple CPU sockets. @@ -6183,7 +6183,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow pushdown of query restrictions into subqueries with window functions, where appropriate + linkend="tutorial-window">window functions, where appropriate (David Rowley) @@ -6206,7 +6206,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Teach the planner to use statistics obtained from an expression index on a boolean-returning function, when a matching function call - appears in WHERE (Tom Lane) + appears in WHERE (Tom Lane) @@ -6215,7 +6215,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-09-23 [cfb2024] Tom Lane: Make ANALYZE compute basic statistics even for.. --> - Make ANALYZE compute basic statistics (null fraction and + Make ANALYZE compute basic statistics (null fraction and average column width) even for columns whose data type lacks an equality function (Oleksandr Shulgin) @@ -6229,7 +6229,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
--> - Speed up CRC (cyclic redundancy check) computations + Speed up CRC (cyclic redundancy check) computations and switch to CRC-32C (Abhijit Menon-Sen, Heikki Linnakangas) @@ -6249,7 +6249,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-01 [9f03ca9] Robert..: Avoid copying index tuples when building an ind.. --> - Speed up CREATE INDEX by avoiding unnecessary memory + Speed up CREATE INDEX by avoiding unnecessary memory copies (Robert Haas) @@ -6283,7 +6283,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add per-table autovacuum logging control via new - log_autovacuum_min_duration storage parameter + log_autovacuum_min_duration storage parameter (Michael Paquier) @@ -6299,7 +6299,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This string, typically set in postgresql.conf, + linkend="config-setting-configuration-file">postgresql.conf, allows clients to identify the cluster. This name also appears in the process title of all server processes, allowing for easier identification of processes belonging to the same cluster. @@ -6321,7 +6321,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <acronym>SSL</> + <acronym>SSL</acronym> @@ -6331,13 +6331,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Check Subject Alternative - Names in SSL server certificates, if present + Names in SSL server certificates, if present (Alexey Klyukin) When they are present, this replaces checks against the certificate's - Common Name. + Common Name. @@ -6347,8 +6347,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add system view pg_stat_ssl to report - SSL connection information (Magnus Hagander) + linkend="pg-stat-ssl-view">pg_stat_ssl to report + SSL connection information (Magnus Hagander) @@ -6357,22 +6357,22 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-02-03 [91fa7b4] Heikki..: Add API functions to libpq to interrogate SSL .. --> - Add libpq functions to return SSL + Add libpq functions to return SSL information in an implementation-independent way (Heikki Linnakangas) - While PQgetssl() can - still be used to call OpenSSL functions, it is now + While PQgetssl() can + still be used to call OpenSSL functions, it is now considered deprecated because future versions - of libpq might support other SSL + of libpq might support other SSL implementations. When possible, use the new functions PQsslAttribute(), PQsslAttributeNames(), - and PQsslInUse() - to obtain SSL information in - an SSL-implementation-independent way. + linkend="libpq-pqsslattribute">PQsslAttribute(), PQsslAttributeNames(), + and PQsslInUse() + to obtain SSL information in + an SSL-implementation-independent way. @@ -6381,7 +6381,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-09 [8a0d34e4] Peter ..: libpq: Don't overwrite existing OpenSSL thread.. --> - Make libpq honor any OpenSSL + Make libpq honor any OpenSSL thread callbacks (Jan Urbanski) @@ -6406,20 +6406,20 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-06-29 [d661532] Heikki..: Also trigger restartpoints based on max_wal_siz.. 
--> - Replace configuration parameter checkpoint_segments + Replace configuration parameter checkpoint_segments with and (Heikki Linnakangas) - This change allows the allocation of a large number of WAL + This change allows the allocation of a large number of WAL files without keeping them after they are no longer needed. - Therefore the default for max_wal_size has been set - to 1GB, much larger than the old default - for checkpoint_segments. + Therefore the default for max_wal_size has been set + to 1GB, much larger than the old default + for checkpoint_segments. Also note that standby servers perform restartpoints to try to limit - their WAL space consumption to max_wal_size; previously - they did not pay any attention to checkpoint_segments. + their WAL space consumption to max_wal_size; previously + they did not pay any attention to checkpoint_segments. @@ -6428,18 +6428,18 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-06-18 [df8b7bc] Tom Lane: Improve our mechanism for controlling the Linux.. --> - Control the Linux OOM killer via new environment + Control the Linux OOM killer via new environment variables PG_OOM_ADJUST_FILE + linkend="linux-memory-overcommit">PG_OOM_ADJUST_FILE and PG_OOM_ADJUST_VALUE + linkend="linux-memory-overcommit">PG_OOM_ADJUST_VALUE (Gurjeet Singh) - The previous OOM control infrastructure involved - compile-time options LINUX_OOM_SCORE_ADJ and - LINUX_OOM_ADJ, which are no longer supported. + The previous OOM control infrastructure involved + compile-time options LINUX_OOM_SCORE_ADJ and + LINUX_OOM_ADJ, which are no longer supported. The new behavior is available in all builds. @@ -6457,8 +6457,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Time stamp information can be accessed using functions pg_xact_commit_timestamp() - and pg_last_committed_xact(). + linkend="functions-commit-timestamp">pg_xact_commit_timestamp() + and pg_last_committed_xact(). @@ -6468,7 +6468,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow to be set - by ALTER ROLE SET (Peter Eisentraut, Kyotaro Horiguchi) + by ALTER ROLE SET (Peter Eisentraut, Kyotaro Horiguchi) @@ -6477,7 +6477,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-03 [a75fb9b] Alvaro..: Have autovacuum workers listen to SIGHUP, too --> - Allow autovacuum workers + Allow autovacuum workers to respond to configuration parameter changes during a run (Michael Paquier) @@ -6496,7 +6496,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This means that assertions can no longer be turned off if they were enabled at compile time, allowing for more efficient code optimization. This change also removes the postgres option. @@ -6517,7 +6517,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add system view pg_file_settings + linkend="view-pg-file-settings">pg_file_settings to show the contents of the server's configuration files (Sawada Masahiko) @@ -6528,8 +6528,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-14 [a486e35] Peter ..: Add pg_settings.pending_restart column --> - Add pending_restart to the system view pg_settings to + Add pending_restart to the system view pg_settings to indicate a change has been made but will not take effect until a database restart (Peter Eisentraut) @@ -6540,14 +6540,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2014-09-02 [bd3b7a9] Fujii ..: Support ALTER SYSTEM RESET command. --> - Allow ALTER SYSTEM - values to be reset with ALTER SYSTEM RESET (Vik + Allow ALTER SYSTEM + values to be reset with ALTER SYSTEM RESET (Vik Fearing) This command removes the specified setting - from postgresql.auto.conf. + from postgresql.auto.conf. @@ -6568,7 +6568,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Create mechanisms for tracking - the progress of replication, + the progress of replication, including methods for identifying the origin of individual changes during logical replication (Andres Freund) @@ -6600,14 +6600,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-15 [51c11a7] Andres..: Remove pause_at_recovery_target recovery.conf s.. --> - Add recovery.conf + Add recovery.conf parameter recovery_target_action + linkend="recovery-target-action">recovery_target_action to control post-recovery activity (Petr Jelínek) - This replaces the old parameter pause_at_recovery_target. + This replaces the old parameter pause_at_recovery_target. @@ -6617,8 +6617,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add new value - always to allow standbys to always archive received - WAL files (Fujii Masao) + always to allow standbys to always archive received + WAL files (Fujii Masao) @@ -6629,7 +6629,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Add configuration parameter to - control WAL read retry after failure + control WAL read retry after failure (Alexey Vasiliev, Michael Paquier) @@ -6643,7 +6643,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-11 [57aa5b2] Fujii ..: Add GUC to enable compression of full page imag.. --> - Allow compression of full-page images stored in WAL + Allow compression of full-page images stored in WAL (Rahila Syed, Michael Paquier) @@ -6660,7 +6660,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-08 [de76884] Heikki..: At promotion, archive last segment from old tim.. --> - Archive WAL files with suffix .partial + Archive WAL files with suffix .partial during standby promotion (Heikki Linnakangas) @@ -6677,9 +6677,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. By default, replication commands, e.g. IDENTIFY_SYSTEM, + linkend="protocol-replication">IDENTIFY_SYSTEM, are not logged, even when is set - to all. + to all. @@ -6689,12 +6689,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Report the processes holding replication slots in pg_replication_slots + linkend="view-pg-replication-slots">pg_replication_slots (Craig Ringer) - The new output column is active_pid. + The new output column is active_pid. @@ -6703,9 +6703,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-25 [b3fc672] Heikki..: Allow using connection URI in primary_conninfo. --> - Allow recovery.conf's primary_conninfo setting to - use connection URIs, e.g. postgres:// + Allow recovery.conf's primary_conninfo setting to + use connection URIs, e.g. postgres:// (Alexander Shulgin) @@ -6725,16 +6725,16 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-08 [2c8f483] Andres..: Represent columns requiring insert and update p.. 
--> - Allow INSERTs + Allow INSERTs that would generate constraint conflicts to be turned into - UPDATEs or ignored (Peter Geoghegan, Heikki + UPDATEs or ignored (Peter Geoghegan, Heikki Linnakangas, Andres Freund) - The syntax is INSERT ... ON CONFLICT DO NOTHING/UPDATE. + The syntax is INSERT ... ON CONFLICT DO NOTHING/UPDATE. This is the Postgres implementation of the popular - UPSERT command. + UPSERT command. @@ -6743,10 +6743,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-16 [f3d3118] Andres..: Support GROUPING SETS, CUBE and ROLLUP. --> - Add GROUP BY analysis features GROUPING SETS, - CUBE and - ROLLUP + Add GROUP BY analysis features GROUPING SETS, + CUBE and + ROLLUP (Andrew Gierth, Atri Sharma) @@ -6757,13 +6757,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow setting multiple target columns in - an UPDATE from the result of + an UPDATE from the result of a single sub-SELECT (Tom Lane) This is accomplished using the syntax UPDATE tab SET - (col1, col2, ...) = (SELECT ...). + (col1, col2, ...) = (SELECT ...). @@ -6772,13 +6772,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-10-07 [df630b0] Alvaro..: Implement SKIP LOCKED for row-level locks --> - Add SELECT option - SKIP LOCKED to skip locked rows (Thomas Munro) + Add SELECT option + SKIP LOCKED to skip locked rows (Thomas Munro) This does not throw an error for locked rows like - NOWAIT does. + NOWAIT does. @@ -6787,8 +6787,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-15 [f6d208d] Simon ..: TABLESAMPLE, SQL Standard and extensible --> - Add SELECT option - TABLESAMPLE to return a subset of a table (Petr + Add SELECT option + TABLESAMPLE to return a subset of a table (Petr Jelínek) @@ -6796,7 +6796,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature supports the SQL-standard table sampling methods. In addition, there are provisions for user-defined - table sampling methods. + table sampling methods. @@ -6825,13 +6825,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add more details about sort ordering in EXPLAIN output (Marius Timmer, + linkend="SQL-EXPLAIN">EXPLAIN output (Marius Timmer, Lukas Kreft, Arne Scheffer) - Details include COLLATE, DESC, - USING, and NULLS FIRST/LAST. + Details include COLLATE, DESC, + USING, and NULLS FIRST/LAST. @@ -6840,7 +6840,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-18 [35192f0] Alvaro..: Have VACUUM log number of skipped pages due to .. --> - Make VACUUM log the + Make VACUUM log the number of pages skipped due to pins (Jim Nasby) @@ -6850,8 +6850,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-02-20 [d42358e] Alvaro..: Have TRUNCATE update pgstat tuple counters --> - Make TRUNCATE properly - update the pg_stat* tuple counters (Alexander Shulgin) + Make TRUNCATE properly + update the pg_stat* tuple counters (Alexander Shulgin) @@ -6867,8 +6867,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-09 [fe263d1] Simon ..: REINDEX SCHEMA --> - Allow REINDEX to reindex an entire schema using the - SCHEMA option (Sawada Masahiko) + Allow REINDEX to reindex an entire schema using the + SCHEMA option (Sawada Masahiko) @@ -6877,7 +6877,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2015-05-15 [ecd222e] Fujii ..: Support VERBOSE option in REINDEX command. --> - Add VERBOSE option to REINDEX (Sawada + Add VERBOSE option to REINDEX (Sawada Masahiko) @@ -6887,8 +6887,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-09 [ae4e688] Simon ..: Silence REINDEX --> - Prevent REINDEX DATABASE and SCHEMA - from outputting object names, unless VERBOSE is used + Prevent REINDEX DATABASE and SCHEMA + from outputting object names, unless VERBOSE is used (Simon Riggs) @@ -6898,7 +6898,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-09 [17d436d] Fujii ..: Remove obsolete FORCE option from REINDEX. --> - Remove obsolete FORCE option from REINDEX + Remove obsolete FORCE option from REINDEX (Fujii Masao) @@ -6918,7 +6918,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-19 [491c029] Stephe..: Row-Level Security Policies (RLS) --> - Add row-level security control + Add row-level security control (Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean Rasheed, Stephen Frost) @@ -6926,11 +6926,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature allows row-by-row control over which users can add, modify, or even see rows in a table. This is controlled by new - commands CREATE/ALTER/DROP POLICY and CREATE/ALTER/DROP POLICY and ALTER TABLE ... ENABLE/DISABLE - ROW SECURITY. + ROW SECURITY. @@ -6942,7 +6942,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Allow changing of the WAL logging status of a table after creation with ALTER TABLE ... SET LOGGED / - UNLOGGED (Fabrízio de Royes Mello) + UNLOGGED (Fabrízio de Royes Mello) @@ -6953,12 +6953,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-13 [e39b6f9] Andrew..: Add CINE option for CREATE TABLE AS and CREATE .. --> - Add IF NOT EXISTS clause to CREATE TABLE AS, - CREATE INDEX, - CREATE SEQUENCE, + Add IF NOT EXISTS clause to CREATE TABLE AS, + CREATE INDEX, + CREATE SEQUENCE, and CREATE - MATERIALIZED VIEW (Fabrízio de Royes Mello) + MATERIALIZED VIEW (Fabrízio de Royes Mello) @@ -6967,9 +6967,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-24 [1d8198b] Bruce ..: Add support for ALTER TABLE IF EXISTS ... RENAM.. --> - Add support for IF EXISTS to IF EXISTS to ALTER TABLE ... RENAME - CONSTRAINT (Bruce Momjian) + CONSTRAINT (Bruce Momjian) @@ -6978,8 +6978,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-09 [31eae60] Alvaro..: Allow CURRENT/SESSION_USER to be used in certai.. --> - Allow some DDL commands to accept CURRENT_USER - or SESSION_USER, meaning the current user or session + Allow some DDL commands to accept CURRENT_USER + or SESSION_USER, meaning the current user or session user, in place of a specific user name (Kyotaro Horiguchi, Álvaro Herrera) @@ -6988,7 +6988,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature is now supported in , , , , - and ALTER object OWNER TO commands. + and ALTER object OWNER TO commands. @@ -6998,7 +6998,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Support comments on domain - constraints (Álvaro Herrera) + constraints (Álvaro Herrera) @@ -7018,13 +7018,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow LOCK TABLE ... 
ROW EXCLUSIVE - MODE for those with INSERT privileges on the + MODE for those with INSERT privileges on the target table (Stephen Frost) - Previously this command required UPDATE, DELETE, - or TRUNCATE privileges. + Previously this command required UPDATE, DELETE, + or TRUNCATE privileges. @@ -7033,7 +7033,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-23 [e5f455f] Tom Lane: Apply table and domain CHECK constraints in nam. --> - Apply table and domain CHECK constraints in order by name + Apply table and domain CHECK constraints in order by name (Tom Lane) @@ -7049,16 +7049,16 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow CREATE/ALTER DATABASE - to manipulate datistemplate and - datallowconn (Vik Fearing) + linkend="SQL-CREATEDATABASE">CREATE/ALTER DATABASE + to manipulate datistemplate and + datallowconn (Vik Fearing) This allows these per-database settings to be changed without manually modifying the pg_database + linkend="catalog-pg-database">pg_database system catalog. @@ -7090,7 +7090,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-17 [fc2ac1f] Tom Lane: Allow CHECK constraints to be placed on foreign.. --> - Allow CHECK constraints to be placed on foreign tables + Allow CHECK constraints to be placed on foreign tables (Shigeru Hanada, Etsuro Fujita) @@ -7099,7 +7099,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. and are not enforced locally. However, they are assumed to hold for purposes of query optimization, such as constraint - exclusion. + exclusion. @@ -7115,7 +7115,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. To let this work naturally, foreign tables are now allowed to have check constraints marked as not valid, and to set storage - and OID characteristics, even though these operations are + and OID characteristics, even though these operations are effectively no-ops for a foreign table. @@ -7145,14 +7145,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-11 [b488c58] Alvaro..: Allow on-the-fly capture of DDL event details --> - Whenever a ddl_command_end event trigger is installed, - capture details of DDL activity for it to inspect + Whenever a ddl_command_end event trigger is installed, + capture details of DDL activity for it to inspect (Álvaro Herrera) This information is available through a set-returning function pg_event_trigger_ddl_commands(), + linkend="pg-event-trigger-ddl-command-end-functions">pg_event_trigger_ddl_commands(), or by inspection of C data structures if that function doesn't provide enough detail. @@ -7164,7 +7164,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow event triggers on table rewrites caused by ALTER TABLE (Dimitri + linkend="SQL-ALTERTABLE">ALTER TABLE (Dimitri Fontaine) @@ -7175,10 +7175,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add event trigger support for database-level COMMENT, SECURITY LABEL, - and GRANT/REVOKE (Álvaro Herrera) + linkend="SQL-COMMENT">COMMENT, SECURITY LABEL, + and GRANT/REVOKE (Álvaro Herrera) @@ -7189,7 +7189,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
--> Add columns to the output of pg_event_trigger_dropped_objects + linkend="pg-event-trigger-sql-drop-functions">pg_event_trigger_dropped_objects (Álvaro Herrera) @@ -7214,12 +7214,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-09 [57b1085] Peter ..: Allow empty content in xml type --> - Allow the xml data type + Allow the xml data type to accept empty or all-whitespace content values (Peter Eisentraut) - This is required by the SQL/XML + This is required by the SQL/XML specification. @@ -7229,8 +7229,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-10-21 [6f04368] Peter ..: Allow input format xxxx-xxxx-xxxx for macaddr .. --> - Allow macaddr input - using the format xxxx-xxxx-xxxx (Herwin Weststrate) + Allow macaddr input + using the format xxxx-xxxx-xxxx (Herwin Weststrate) @@ -7240,15 +7240,15 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Disallow non-SQL-standard syntax for interval with + linkend="datatype-interval-input">interval with both precision and field specifications (Bruce Momjian) Per the standard, such type specifications should be written as, - for example, INTERVAL MINUTE TO SECOND(2). - PostgreSQL formerly allowed this to be written as - INTERVAL(2) MINUTE TO SECOND, but it must now be + for example, INTERVAL MINUTE TO SECOND(2). + PostgreSQL formerly allowed this to be written as + INTERVAL(2) MINUTE TO SECOND, but it must now be written in the standard way. @@ -7259,8 +7259,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add selectivity estimators for inet/cidr operators and improve + linkend="datatype-inet">inet/cidr operators and improve estimators for text search functions (Emre Hasegeli, Tom Lane) @@ -7272,9 +7272,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add data - types regrole - and regnamespace - to simplify entering and pretty-printing the OID of a role + types regrole + and regnamespace + to simplify entering and pretty-printing the OID of a role or namespace (Kyotaro Horiguchi) @@ -7282,7 +7282,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <link linkend="datatype-json"><acronym>JSON</></link> + <link linkend="datatype-json"><acronym>JSON</acronym></link> @@ -7292,10 +7292,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-31 [37def42] Andrew..: Rename jsonb_replace to jsonb_set and allow it .. --> - Add jsonb functions jsonb_set() + Add jsonb functions jsonb_set() and jsonb_pretty() + linkend="functions-json-processing-table">jsonb_pretty() (Dmitry Dolgov, Andrew Dunstan, Petr Jelínek) @@ -7305,23 +7305,23 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-12 [7e354ab] Andrew..: Add several generator functions for jsonb that .. --> - Add jsonb generator functions to_jsonb(), + Add jsonb generator functions to_jsonb(), jsonb_object(), + linkend="functions-json-creation-table">jsonb_object(), jsonb_build_object(), + linkend="functions-json-creation-table">jsonb_build_object(), jsonb_build_array(), + linkend="functions-json-creation-table">jsonb_build_array(), jsonb_agg(), + linkend="functions-aggregate-table">jsonb_agg(), and jsonb_object_agg() + linkend="functions-aggregate-table">jsonb_object_agg() (Andrew Dunstan) - Equivalent functions already existed for type json. + Equivalent functions already existed for type json. 
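A minimal sketch of the jsonb manipulation and generator functions listed above, with made-up values:

    -- jsonb_set() replaces the value at a path; jsonb_pretty() formats
    -- the result for display.
    SELECT jsonb_pretty(
               jsonb_set('{"user": {"name": "alice", "age": 30}}'::jsonb,
                         '{user,age}', '31'));

    -- to_jsonb() and jsonb_build_object()/jsonb_build_array() construct
    -- jsonb directly from SQL values.
    SELECT jsonb_build_object('id', 42,
                              'tags', jsonb_build_array('a', 'b'),
                              'price', to_jsonb(9.99));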
@@ -7331,8 +7331,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Reduce casting requirements to/from json and jsonb (Tom Lane) + linkend="datatype-json">json and jsonb (Tom Lane) @@ -7341,9 +7341,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-06-11 [908e234] Andrew..: Rename jsonb - text[] operator to #- to avoid a.. --> - Allow text, text array, and integer - values to be subtracted - from jsonb documents (Dmitry Dolgov, Andrew Dunstan) + Allow text, text array, and integer + values to be subtracted + from jsonb documents (Dmitry Dolgov, Andrew Dunstan) @@ -7352,8 +7352,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-12 [c694701] Andrew..: Additional functions and operators for jsonb --> - Add jsonb || operator + Add jsonb || operator (Dmitry Dolgov, Andrew Dunstan) @@ -7364,9 +7364,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add json_strip_nulls() + linkend="functions-json-processing-table">json_strip_nulls() and jsonb_strip_nulls() + linkend="functions-json-processing-table">jsonb_strip_nulls() functions to remove JSON null values from documents (Andrew Dunstan) @@ -7388,8 +7388,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-11 [1871c89] Fujii ..: Add generate_series(numeric, numeric). --> - Add generate_series() - for numeric values (Plato Malugin) + Add generate_series() + for numeric values (Plato Malugin) @@ -7399,8 +7399,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow array_agg() and - ARRAY() to take arrays as inputs (Ali Akbar, Tom Lane) + linkend="functions-aggregate-table">array_agg() and + ARRAY() to take arrays as inputs (Ali Akbar, Tom Lane) @@ -7411,9 +7411,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add functions array_position() + linkend="array-functions-table">array_position() and array_positions() + linkend="array-functions-table">array_positions() to return subscripts of array values (Pavel Stehule) @@ -7423,8 +7423,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-15 [4520ba6] Heikki..: Add point <-> polygon distance operator. --> - Add a point-to-polygon distance operator - <-> + Add a point-to-polygon distance operator + <-> (Alexander Korotkov) @@ -7435,8 +7435,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow multibyte characters as escapes in SIMILAR TO - and SUBSTRING + linkend="functions-similarto-regexp">SIMILAR TO + and SUBSTRING (Jeff Davis) @@ -7451,7 +7451,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add a width_bucket() + linkend="functions-math-func-table">width_bucket() variant that supports any sortable data type and non-uniform bucket widths (Petr Jelínek) @@ -7462,8 +7462,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-06-28 [cb2acb1] Heikki..: Add missing_ok option to the SQL functions for.. --> - Add an optional missing_ok argument to pg_read_file() + Add an optional missing_ok argument to pg_read_file() and related functions (Michael Paquier, Heikki Linnakangas) @@ -7473,14 +7473,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-10 [865f14a] Robert..: Allow named parameters to be specified using =>.. 
--> - Allow => + Allow => to specify named parameters in function calls (Pavel Stehule) - Previously only := could be used. This requires removing - the possibility for => to be a user-defined operator. - Creation of user-defined => operators has been issuing + Previously only := could be used. This requires removing + the possibility for => to be a user-defined operator. + Creation of user-defined => operators has been issuing warnings since PostgreSQL 9.0. @@ -7490,7 +7490,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-25 [06bf0dd] Tom Lane: Upgrade src/port/rint.c to be POSIX-compliant. --> - Add POSIX-compliant rounding for platforms that use + Add POSIX-compliant rounding for platforms that use PostgreSQL-supplied rounding functions (Pedro Gimeno Fortea) @@ -7509,11 +7509,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add function pg_get_object_address() - to return OIDs that uniquely + linkend="functions-info-object-table">pg_get_object_address() + to return OIDs that uniquely identify an object, and function pg_identify_object_as_address() - to return object information based on OIDs (Álvaro + linkend="functions-info-object-table">pg_identify_object_as_address() + to return object information based on OIDs (Álvaro Herrera) @@ -7524,11 +7524,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Loosen security checks for viewing queries in pg_stat_activity, + linkend="pg-stat-activity-view">pg_stat_activity, executing pg_cancel_backend(), + linkend="functions-admin-signal-table">pg_cancel_backend(), and executing pg_terminate_backend() + linkend="functions-admin-signal-table">pg_terminate_backend() (Stephen Frost) @@ -7544,7 +7544,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add pg_stat_get_snapshot_timestamp() + linkend="monitoring-stats-funcs-table">pg_stat_get_snapshot_timestamp() to output the time stamp of the statistics snapshot (Matt Kelly) @@ -7560,7 +7560,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add mxid_age() + linkend="vacuum-for-multixact-wraparound">mxid_age() to compute multi-xid age (Bruce Momjian) @@ -7578,9 +7578,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-08-28 [6c40f83] Tom Lane: Add min and max aggregates for inet/cidr data t.. --> - Add min()/max() aggregates - for inet/cidr data types (Haribabu + Add min()/max() aggregates + for inet/cidr data types (Haribabu Kommi) @@ -7613,12 +7613,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Improve support for composite types in PL/Python (Ed Behn, Ronan + linkend="plpython">PL/Python (Ed Behn, Ronan Dunklau) - This allows PL/Python functions to return arrays + This allows PL/Python functions to return arrays of composite types. @@ -7629,7 +7629,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Reduce lossiness of PL/Python floating-point value + linkend="plpython">PL/Python floating-point value conversions (Marko Kreen) @@ -7639,19 +7639,19 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-26 [cac7658] Peter ..: Add transforms feature --> - Allow specification of conversion routines between SQL + Allow specification of conversion routines between SQL data types and data types of procedural languages (Peter Eisentraut) This change adds new commands CREATE/DROP TRANSFORM. 
+ linkend="SQL-CREATETRANSFORM">CREATE/DROP TRANSFORM. This also adds optional transformations between the hstore and ltree types to/from PL/Perl and PL/Python. + linkend="hstore">hstore and ltree types to/from PL/Perl and PL/Python. @@ -7670,7 +7670,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-02-16 [9e3ad1a] Tom Lane: Use fast path in plpgsql's RETURN/RETURN NEXT i.. --> - Improve PL/pgSQL array + Improve PL/pgSQL array performance (Tom Lane) @@ -7680,8 +7680,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-25 [a4847fc] Tom Lane: Add an ASSERT statement in plpgsql. --> - Add an ASSERT - statement in PL/pgSQL (Pavel Stehule) + Add an ASSERT + statement in PL/pgSQL (Pavel Stehule) @@ -7690,7 +7690,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-25 [bb1b8f6] Tom Lane: De-reserve most statement-introducing keywords .. --> - Allow more PL/pgSQL + Allow more PL/pgSQL keywords to be used as identifiers (Tom Lane) @@ -7715,11 +7715,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Move pg_archivecleanup, - pg_test_fsync, - pg_test_timing, - and pg_xlogdump - from contrib to src/bin (Peter Eisentraut) + linkend="pgarchivecleanup">pg_archivecleanup, + pg_test_fsync, + pg_test_timing, + and pg_xlogdump + from contrib to src/bin (Peter Eisentraut) @@ -7733,7 +7733,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-23 [61081e7] Heikki..: Add pg_rewind, for re-synchronizing a master se.. --> - Add pg_rewind, + Add pg_rewind, which allows re-synchronizing a master server after failback (Heikki Linnakangas) @@ -7745,13 +7745,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow pg_receivexlog + linkend="app-pgreceivewal">pg_receivexlog to manage physical replication slots (Michael Paquier) - This is controlled via new and + options. @@ -7761,13 +7761,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow pg_receivexlog - to synchronously flush WAL to storage using new - option (Furuya Osamu, Fujii Masao) - Without this, WAL files are fsync'ed only on close. + Without this, WAL files are fsync'ed only on close. @@ -7776,8 +7776,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-01-23 [a179232] Alvaro..: vacuumdb: enable parallel mode --> - Allow vacuumdb to - vacuum in parallel using new option (Dilip Kumar) @@ -7786,7 +7786,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-11-12 [5094da9] Alvaro..: vacuumdb: don't prompt for passwords over and .. --> - In vacuumdb, do not + In vacuumdb, do not prompt for the same password repeatedly when multiple connections are necessary (Haribabu Kommi, Michael Paquier) @@ -7797,8 +7797,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-15 [458a077] Fujii ..: Support ––verbose option in reindexdb. --> - Add @@ -7808,10 +7808,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-12 [72d422a] Andrew..: Map basebackup tablespaces using a tablespace_.. 
--> - Make pg_basebackup - use a tablespace mapping file when using tar format, + Make pg_basebackup + use a tablespace mapping file when using tar format, to support symbolic links and file paths of 100+ characters in length - on MS Windows (Amit Kapila) + on MS Windows (Amit Kapila) @@ -7821,8 +7821,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-19 [bdd5726] Andres..: Add the capability to display summary statistic.. --> - Add pg_xlogdump option - to display summary statistics (Abhijit Menon-Sen) @@ -7838,7 +7838,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-31 [9d9991c] Bruce ..: psql: add asciidoc output format --> - Allow psql to produce AsciiDoc output (Szymon Guz) + Allow psql to produce AsciiDoc output (Szymon Guz) @@ -7847,14 +7847,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-10 [5b214c5] Fujii ..: Add new ECHO mode 'errors' that displays only .. --> - Add an errors mode that displays only failed commands - to psql's ECHO variable + Add an errors mode that displays only failed commands + to psql's ECHO variable (Pavel Stehule) - This behavior can also be selected with psql's - option. @@ -7864,12 +7864,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Provide separate column, header, and border linestyle control - in psql's unicode linestyle (Pavel Stehule) + in psql's unicode linestyle (Pavel Stehule) Single or double lines are supported; the default is - single. + single. @@ -7878,8 +7878,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-02 [51bb795] Andres..: Add psql PROMPT variable showing which line of .. --> - Add new option %l in psql's PROMPT variables + Add new option %l in psql's PROMPT variables to display the current multiline statement line number (Sawada Masahiko) @@ -7890,8 +7890,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-28 [7655f4c] Andrew..: Add a pager_min_lines setting to psql --> - Add \pset option pager_min_lines + Add \pset option pager_min_lines to control pager invocation (Andrew Dunstan) @@ -7901,7 +7901,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-21 [4077fb4] Andrew..: Fix an error in psql that overcounted output l.. --> - Improve psql line counting used when deciding + Improve psql line counting used when deciding to invoke the pager (Andrew Dunstan) @@ -7912,8 +7912,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-12-08 [e90371d] Tom Lane: Make failure to open psql log-file fatal. --> - psql now fails if the file specified by - an or switch cannot be written (Tom Lane, Daniel Vérité) @@ -7927,7 +7927,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-12 [bd40951] Andres..: Minimal psql tab completion support for SET se.. --> - Add psql tab completion when setting the + Add psql tab completion when setting the variable (Jeff Janes) @@ -7941,7 +7941,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-06-23 [631e7f6] Heikki..: Improve tab-completion of DROP and ALTER ENABLE.. --> - Improve psql's tab completion for triggers and rules + Improve psql's tab completion for triggers and rules (Andreas Karlsson) @@ -7958,17 +7958,17 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2014-09-10 [07c8651] Andres..: Add new psql help topics, accessible to both.. --> - Add psql \? help sections - variables and options (Pavel Stehule) + Add psql \? help sections + variables and options (Pavel Stehule) - \? variables shows psql's special - variables and \? options shows the command-line options. - \? commands shows the meta-commands, which is the + \? variables shows psql's special + variables and \? options shows the command-line options. + \? commands shows the meta-commands, which is the traditional output and remains the default. These help displays can also be obtained with the command-line - option --help=section. + option --help=section. @@ -7977,7 +7977,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-14 [ee80f04] Alvaro..: psql: Show tablespace size in \db+ --> - Show tablespace size in psql's \db+ + Show tablespace size in psql's \db+ (Fabrízio de Royes Mello) @@ -7987,7 +7987,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-09 [a6f3c1f] Magnus..: Show owner of types in psql \dT+ --> - Show data type owners in psql's \dT+ + Show data type owners in psql's \dT+ (Magnus Hagander) @@ -7997,13 +7997,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-04 [f6f654f] Fujii ..: Allow \watch to display query execution time if.. --> - Allow psql's \watch to output - \timing information (Fujii Masao) + Allow psql's \watch to output + \timing information (Fujii Masao) - Also prevent @@ -8012,8 +8012,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-22 [eca2b9b] Andrew..: Rework echo_hidden for \sf and \ef from commit .. --> - Make psql's \sf and \ef - commands honor ECHO_HIDDEN (Andrew Dunstan) + Make psql's \sf and \ef + commands honor ECHO_HIDDEN (Andrew Dunstan) @@ -8022,8 +8022,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-08-12 [e15c4ab] Fujii ..: Add tab-completion for \unset and valid setting.. --> - Improve psql tab completion for \set, - \unset, and :variable names (Pavel + Improve psql tab completion for \set, + \unset, and :variable names (Pavel Stehule) @@ -8034,7 +8034,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow tab completion of role names - in psql \c commands (Ian Barwick) + in psql \c commands (Ian Barwick) @@ -8054,15 +8054,15 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-17 [be1cc8f] Simon ..: Add pg_dump ––snapshot option --> - Allow pg_dump to share a snapshot taken by another - session using (Simon Riggs, Michael Paquier) The remote snapshot must have been exported by - pg_export_snapshot() or logical replication slot + pg_export_snapshot() or logical replication slot creation. This can be used to share a consistent snapshot - across multiple pg_dump processes. + across multiple pg_dump processes. @@ -8087,13 +8087,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-07 [7700597] Tom Lane: In pg_dump, show server and pg_dump versions w.. --> - Make pg_dump always print the server and - pg_dump versions (Jing Wang) + Make pg_dump always print the server and + pg_dump versions (Jing Wang) Previously, version information was only printed in - mode. @@ -8102,9 +8102,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-06-04 [232cd63] Fujii ..: Remove -i/-ignore-version option from pg_dump.. 
--> - Remove the long-ignored @@ -8122,7 +8122,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-08-25 [ebe30ad] Bruce ..: pg_ctl, pg_upgrade: allow multiple -o/-O opti.. --> - Support multiple pg_ctl options, concatenating their values (Bruce Momjian) @@ -8132,13 +8132,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-17 [c0e4520] Magnus..: Add option to pg_ctl to choose event source for.. --> - Allow control of pg_ctl's event source logging - on MS Windows (MauMau) + Allow control of pg_ctl's event source logging + on MS Windows (MauMau) - This only controls pg_ctl, not the server, which - has separate settings in postgresql.conf. + This only controls pg_ctl, not the server, which + has separate settings in postgresql.conf. @@ -8148,14 +8148,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> If the server's listen address is set to a wildcard value - (0.0.0.0 in IPv4 or :: in IPv6), connect via + (0.0.0.0 in IPv4 or :: in IPv6), connect via the loopback address rather than trying to use the wildcard address literally (Kondo Yuta) This fix primarily affects Windows, since on other platforms - pg_ctl will prefer to use a Unix-domain socket. + pg_ctl will prefer to use a Unix-domain socket. @@ -8173,13 +8173,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-14 [9fa8b0e] Peter ..: Move pg_upgrade from contrib/ to src/bin/ --> - Move pg_upgrade from contrib to - src/bin (Peter Eisentraut) + Move pg_upgrade from contrib to + src/bin (Peter Eisentraut) In connection with this change, the functionality previously - provided by the pg_upgrade_support module has been + provided by the pg_upgrade_support module has been moved into the core server. @@ -8189,8 +8189,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-08-25 [ebe30ad] Bruce ..: pg_ctl, pg_upgrade: allow multiple -o/-O optio.. --> - Support multiple pg_upgrade - / options, concatenating their values (Bruce Momjian) @@ -8201,7 +8201,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Improve database collation comparisons in - pg_upgrade (Heikki Linnakangas) + pg_upgrade (Heikki Linnakangas) @@ -8228,7 +8228,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-04-13 [81134af] Peter ..: Move pgbench from contrib/ to src/bin/ --> - Move pgbench from contrib to src/bin + Move pgbench from contrib to src/bin (Peter Eisentraut) @@ -8239,7 +8239,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Fix calculation of TPS number excluding connections - establishing (Tatsuo Ishii, Fabien Coelho) + establishing (Tatsuo Ishii, Fabien Coelho) @@ -8261,7 +8261,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - This is controlled by a new option. @@ -8271,7 +8271,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow pgbench to generate Gaussian/exponential distributions - using \setrandom (Kondo Mitsumasa, Fabien Coelho) + using \setrandom (Kondo Mitsumasa, Fabien Coelho) @@ -8280,9 +8280,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2015-03-02 [878fdcb] Robert..: pgbench: Add a real expression syntax to \set --> - Allow pgbench's \set command to handle + Allow pgbench's \set command to handle arithmetic expressions containing more than one operator, and add - % (modulo) to the set of operators it supports + % (modulo) to the set of operators it supports (Robert Haas, Fabien Coelho) @@ -8303,7 +8303,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-20 [2c03216] Heikki..: Revamp the WAL record format. --> - Simplify WAL record format + Simplify WAL record format (Heikki Linnakangas) @@ -8328,7 +8328,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-25 [b64d92f] Andres..: Add a basic atomic ops API abstracting away pla.. --> - Add atomic memory operations API (Andres Freund) + Add atomic memory operations API (Andres Freund) @@ -8366,13 +8366,13 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Foreign tables can now take part in INSERT ... ON CONFLICT - DO NOTHING queries (Peter Geoghegan, Heikki Linnakangas, + DO NOTHING queries (Peter Geoghegan, Heikki Linnakangas, Andres Freund) Foreign data wrappers must be modified to handle this. - INSERT ... ON CONFLICT DO UPDATE is not supported on + INSERT ... ON CONFLICT DO UPDATE is not supported on foreign tables. @@ -8382,7 +8382,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-18 [4a14f13] Tom Lane: Improve hash_create's API for selecting simple-.. --> - Improve hash_create()'s API for selecting + Improve hash_create()'s API for selecting simple-binary-key hash functions (Teodor Sigaev, Tom Lane) @@ -8403,8 +8403,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-06-28 [a6d488c] Andres..: Remove Alpha and Tru64 support. --> - Remove Alpha (CPU) and Tru64 (OS) ports (Andres Freund) + Remove Alpha (CPU) and Tru64 (OS) ports (Andres Freund) @@ -8414,11 +8414,11 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Remove swap-byte-based spinlock implementation for - ARMv5 and earlier CPUs (Robert Haas) + ARMv5 and earlier CPUs (Robert Haas) - ARMv5's weak memory ordering made this locking + ARMv5's weak memory ordering made this locking implementation unsafe. Spinlock support is still possible on newer gcc implementations with atomics support. @@ -8444,10 +8444,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Change index operator class for columns pg_seclabel.provider + linkend="catalog-pg-seclabel">pg_seclabel.provider and pg_shseclabel.provider - to be text_pattern_ops (Tom Lane) + linkend="catalog-pg-shseclabel">pg_shseclabel.provider + to be text_pattern_ops (Tom Lane) @@ -8480,8 +8480,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow higher-precision time stamp resolution on Windows 8, Windows - Server 2012, and later Windows systems (Craig Ringer) + class="osname">Windows 8, Windows + Server 2012, and later Windows systems (Craig Ringer) @@ -8490,8 +8490,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-03-18 [f9dead5] Alvaro..: Install shared libraries to bin/ in Windows un.. --> - Install shared libraries to bin in MS Windows (Peter Eisentraut, Michael Paquier) + Install shared libraries to bin in MS Windows (Peter Eisentraut, Michael Paquier) @@ -8500,8 +8500,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2015-04-16 [22d0053] Alvaro..: MSVC: install src/test/modules together with c.. --> - Install src/test/modules together with - contrib on MSVC builds (Michael + Install src/test/modules together with + contrib on MSVC builds (Michael Paquier) @@ -8511,9 +8511,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-12 [8d9a0e8] Magnus..: Support ––with-extra-version equivalent functi.. --> - Allow configure's - @@ -8522,7 +8522,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-14 [91f03ba] Noah M..: MSVC: Recognize PGFILEDESC in contrib and conv.. --> - Pass PGFILEDESC into MSVC contrib builds + Pass PGFILEDESC into MSVC contrib builds (Michael Paquier) @@ -8532,8 +8532,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-14 [c4a448e] Noah M..: MSVC: Apply icons to all binaries having them .. --> - Add icons to all MSVC-built binaries and version - information to all MS Windows + Add icons to all MSVC-built binaries and version + information to all MS Windows binaries (Noah Misch) @@ -8548,12 +8548,12 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add optional-argument support to the internal - getopt_long() implementation (Michael Paquier, + getopt_long() implementation (Michael Paquier, Andres Freund) - This is used by the MSVC build. + This is used by the MSVC build. @@ -8575,7 +8575,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Add statistics for minimum, maximum, mean, and standard deviation times to pg_stat_statements + linkend="pgstatstatements-columns">pg_stat_statements (Mitsumasa Kondo, Andrew Dunstan) @@ -8585,8 +8585,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-10-01 [32984d8] Heikki..: Add functions for dealing with PGP armor heade.. --> - Add pgcrypto function - pgp_armor_headers() to extract PGP + Add pgcrypto function + pgp_armor_headers() to extract PGP armor headers (Marko Tiikkaja, Heikki Linnakangas) @@ -8597,7 +8597,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow empty replacement strings in unaccent (Mohammad Alhashash) + linkend="unaccent">unaccent (Mohammad Alhashash) @@ -8612,7 +8612,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow multicharacter source strings in unaccent (Tom Lane) + linkend="unaccent">unaccent (Tom Lane) @@ -8628,9 +8628,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-15 [149f6f1] Simon ..: TABLESAMPLE system_time(limit) --> - Add contrib modules tsm_system_rows and - tsm_system_time + Add contrib modules tsm_system_rows and + tsm_system_time to allow additional table sampling methods (Petr Jelínek) @@ -8640,9 +8640,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-21 [3a82bc6] Heikki..: Add pageinspect functions for inspecting GIN in.. --> - Add GIN + Add GIN index inspection functions to pageinspect (Heikki + linkend="pageinspect">pageinspect (Heikki Linnakangas, Peter Geoghegan, Michael Paquier) @@ -8653,7 +8653,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add information about buffer pins to pg_buffercache display + linkend="pgbuffercache">pg_buffercache display (Andres Freund) @@ -8663,9 +8663,9 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
2015-05-13 [5850b20] Andres..: Add pgstattuple_approx() to the pgstattuple ext.. --> - Allow pgstattuple + Allow pgstattuple to report approximate answers with less overhead using - pgstattuple_approx() (Abhijit Menon-Sen) + pgstattuple_approx() (Abhijit Menon-Sen) @@ -8675,15 +8675,15 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-01 [df761e3] Alvaro..: Move security_label test --> - Move dummy_seclabel, test_shm_mq, - test_parser, and worker_spi - from contrib to src/test/modules + Move dummy_seclabel, test_shm_mq, + test_parser, and worker_spi + from contrib to src/test/modules (Álvaro Herrera) These modules are only meant for server testing, so they do not need - to be built or installed when packaging PostgreSQL. + to be built or installed when packaging PostgreSQL. diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index 09b6b90254..a89b1b5879 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -46,20 +46,20 @@ Branch: REL9_2_STABLE [98e6784aa] 2017-08-15 19:33:04 -0400 --> Show foreign tables - in information_schema.table_privileges + in information_schema.table_privileges view (Peter Eisentraut) - All other relevant information_schema views include + All other relevant information_schema views include foreign tables, but this one ignored them. - Since this view definition is installed by initdb, + Since this view definition is installed by initdb, merely upgrading will not fix the problem. If you need to fix this in an existing installation, you can, as a superuser, do this - in psql: + in psql: SET search_path TO information_schema; CREATE OR REPLACE VIEW table_privileges AS @@ -98,7 +98,7 @@ CREATE OR REPLACE VIEW table_privileges AS OR grantee.rolname = 'PUBLIC'); This must be repeated in each database to be fixed, - including template0. + including template0. @@ -114,14 +114,14 @@ Branch: REL9_2_STABLE [8ae41ceae] 2017-08-14 15:43:20 -0400 --> Clean up handling of a fatal exit (e.g., due to receipt - of SIGTERM) that occurs while trying to execute - a ROLLBACK of a failed transaction (Tom Lane) + of SIGTERM) that occurs while trying to execute + a ROLLBACK of a failed transaction (Tom Lane) This situation could result in an assertion failure. In production builds, the exit would still occur, but it would log an unexpected - message about cannot drop active portal. + message about cannot drop active portal. @@ -156,7 +156,7 @@ Branch: REL9_2_STABLE [4e704aac1] 2017-08-09 17:03:10 -0400 - Certain ALTER commands that change the definition of a + Certain ALTER commands that change the definition of a composite type or domain type are supposed to fail if there are any stored values of that type in the database, because they lack the infrastructure needed to update or check such values. 
Previously, @@ -189,7 +189,7 @@ Branch: REL9_4_STABLE [59dde9fed] 2017-08-19 13:39:38 -0400 Branch: REL9_3_STABLE [ece4bd901] 2017-08-19 13:39:38 -0400 --> - Fix crash in pg_restore when using parallel mode and + Fix crash in pg_restore when using parallel mode and using a list file to select a subset of items to restore (Fabrízio de Royes Mello) @@ -206,13 +206,13 @@ Branch: REL9_3_STABLE [f8bc6b2f6] 2017-08-16 13:30:09 +0200 Branch: REL9_2_STABLE [60b135c82] 2017-08-16 13:30:20 +0200 --> - Change ecpg's parser to allow RETURNING + Change ecpg's parser to allow RETURNING clauses without attached C variables (Michael Meskes) - This allows ecpg programs to contain SQL constructs - that use RETURNING internally (for example, inside a CTE) + This allows ecpg programs to contain SQL constructs + that use RETURNING internally (for example, inside a CTE) rather than using it to define values to be returned to the client. @@ -225,7 +225,7 @@ Branch: REL_10_STABLE [a6b174f55] 2017-08-16 13:27:21 +0200 Branch: REL9_6_STABLE [954490fec] 2017-08-16 13:28:10 +0200 --> - Change ecpg's parser to recognize backslash + Change ecpg's parser to recognize backslash continuation of C preprocessor command lines (Michael Meskes) @@ -253,12 +253,12 @@ Branch: REL9_2_STABLE [f7e4783dd] 2017-08-17 13:15:46 -0400 This fix avoids possible crashes of PL/Perl due to inconsistent - assumptions about the width of time_t values. + assumptions about the width of time_t values. A side-effect that may be visible to extension developers is - that _USE_32BIT_TIME_T is no longer defined globally - in PostgreSQL Windows builds. This is not expected - to cause problems, because type time_t is not used - in any PostgreSQL API definitions. + that _USE_32BIT_TIME_T is no longer defined globally + in PostgreSQL Windows builds. This is not expected + to cause problems, because type time_t is not used + in any PostgreSQL API definitions. @@ -270,7 +270,7 @@ Branch: REL9_6_STABLE [fc2aafe4a] 2017-08-09 12:06:08 -0400 Branch: REL9_5_STABLE [a784d5f21] 2017-08-09 12:06:14 -0400 --> - Fix make check to behave correctly when invoked via a + Fix make check to behave correctly when invoked via a non-GNU make program (Thomas Munro) @@ -329,7 +329,7 @@ Branch: REL9_2_STABLE [e255e97a2] 2017-08-07 07:09:32 -0700 --> Further restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Noah Misch) @@ -337,11 +337,11 @@ Branch: REL9_2_STABLE [e255e97a2] 2017-08-07 07:09:32 -0700 The fix for CVE-2017-7486 was incorrect: it allowed a user to see the options in her own user mapping, even if she did not - have USAGE permission on the associated foreign server. + have USAGE permission on the associated foreign server. Such options might include a password that had been provided by the server owner rather than the user herself. - Since information_schema.user_mapping_options does not - show the options in such cases, pg_user_mappings + Since information_schema.user_mapping_options does not + show the options in such cases, pg_user_mappings should not either. (CVE-2017-7547) @@ -356,15 +356,15 @@ Branch: REL9_2_STABLE [e255e97a2] 2017-08-07 07:09:32 -0700 Restart the postmaster after adding allow_system_table_mods - = true to postgresql.conf. (In versions - supporting ALTER SYSTEM, you can use that to make the + = true to postgresql.conf. (In versions + supporting ALTER SYSTEM, you can use that to make the configuration change, but you'll still need a restart.) 
- In each database of the cluster, + In each database of the cluster, run the following commands as superuser: SET search_path = pg_catalog; @@ -395,15 +395,15 @@ CREATE OR REPLACE VIEW pg_user_mappings AS - Do not forget to include the template0 - and template1 databases, or the vulnerability will still - exist in databases you create later. To fix template0, + Do not forget to include the template0 + and template1 databases, or the vulnerability will still + exist in databases you create later. To fix template0, you'll need to temporarily make it accept connections. - In PostgreSQL 9.5 and later, you can use + In PostgreSQL 9.5 and later, you can use ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true; - and then after fixing template0, undo that with + and then after fixing template0, undo that with ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false; @@ -417,7 +417,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Finally, remove the allow_system_table_mods configuration + Finally, remove the allow_system_table_mods configuration setting, and again restart the postmaster. @@ -440,16 +440,16 @@ Branch: REL9_2_STABLE [06651648a] 2017-08-07 17:04:17 +0300 - libpq ignores empty password specifications, and does + libpq ignores empty password specifications, and does not transmit them to the server. So, if a user's password has been set to the empty string, it's impossible to log in with that password - via psql or other libpq-based + via psql or other libpq-based clients. An administrator might therefore believe that setting the password to empty is equivalent to disabling password login. - However, with a modified or non-libpq-based client, + However, with a modified or non-libpq-based client, logging in could be possible, depending on which authentication method is configured. In particular the most common - method, md5, accepted empty passwords. + method, md5, accepted empty passwords. Change the server to reject empty passwords in all cases. (CVE-2017-7546) @@ -464,13 +464,13 @@ Branch: REL9_5_STABLE [873741c68] 2017-08-07 10:19:21 -0400 Branch: REL9_4_STABLE [f1cda6d6c] 2017-08-07 10:19:22 -0400 --> - Make lo_put() check for UPDATE privilege on + Make lo_put() check for UPDATE privilege on the target large object (Tom Lane, Michael Paquier) - lo_put() should surely require the same permissions - as lowrite(), but the check was missing, allowing any + lo_put() should surely require the same permissions + as lowrite(), but the check was missing, allowing any user to change the data in a large object. (CVE-2017-7548) @@ -485,12 +485,12 @@ Branch: REL9_5_STABLE [fd376afc9] 2017-06-15 12:30:02 -0400 --> Correct the documentation about the process for upgrading standby - servers with pg_upgrade (Bruce Momjian) + servers with pg_upgrade (Bruce Momjian) The previous documentation instructed users to start/stop the primary - server after running pg_upgrade but before syncing + server after running pg_upgrade but before syncing the standby servers. This sequence is unsafe. 
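For example, under the lo_put() fix above, writing through lo_put() now requires the same UPDATE privilege on the large object that lowrite() has always required. A minimal sketch (the object OID and the roles involved are illustrative only, not taken from the entry):

-- as the large object's owner:
SELECT lo_create(16401);
-- as an unprivileged role:
SELECT lo_put(16401, 0, '\x2a');  -- now fails with a permission-denied error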
@@ -697,7 +697,7 @@ Branch: REL9_2_STABLE [81bf7b5b1] 2017-06-21 14:13:58 -0700 --> Fix possible creation of an invalid WAL segment when a standby is - promoted just after it processes an XLOG_SWITCH WAL + promoted just after it processes an XLOG_SWITCH WAL record (Andres Freund) @@ -711,7 +711,7 @@ Branch: REL9_5_STABLE [446914f6b] 2017-06-30 12:00:03 -0400 Branch: REL9_4_STABLE [5aa8db014] 2017-06-30 12:00:03 -0400 --> - Fix walsender to exit promptly when client requests + Fix walsender to exit promptly when client requests shutdown (Tom Lane) @@ -731,7 +731,7 @@ Branch: REL9_3_STABLE [45d067d50] 2017-06-05 19:18:16 -0700 Branch: REL9_2_STABLE [133b1920c] 2017-06-05 19:18:16 -0700 --> - Fix SIGHUP and SIGUSR1 handling in + Fix SIGHUP and SIGUSR1 handling in walsender processes (Petr Jelinek, Andres Freund) @@ -761,7 +761,7 @@ Branch: REL9_3_STABLE [cb59949f6] 2017-06-26 17:31:56 -0400 Branch: REL9_2_STABLE [e96adaacd] 2017-06-26 17:31:56 -0400 --> - Fix unnecessarily slow restarts of walreceiver + Fix unnecessarily slow restarts of walreceiver processes due to race condition in postmaster (Tom Lane) @@ -880,7 +880,7 @@ Branch: REL9_3_STABLE [aea1a3f0e] 2017-07-12 18:00:04 -0400 Branch: REL9_2_STABLE [75670ec37] 2017-07-12 18:00:04 -0400 --> - Fix cases where an INSERT or UPDATE assigns + Fix cases where an INSERT or UPDATE assigns to more than one element of a column that is of domain-over-array type (Tom Lane) @@ -896,7 +896,7 @@ Branch: REL9_4_STABLE [dc777f9db] 2017-06-27 17:51:11 -0400 Branch: REL9_3_STABLE [66dee28b4] 2017-06-27 17:51:11 -0400 --> - Allow window functions to be used in sub-SELECTs that + Allow window functions to be used in sub-SELECTs that are within the arguments of an aggregate function (Tom Lane) @@ -908,7 +908,7 @@ Branch: master [7086be6e3] 2017-07-24 15:57:24 -0400 Branch: REL9_6_STABLE [971faefc2] 2017-07-24 16:24:42 -0400 --> - Ensure that a view's CHECK OPTIONS clause is enforced + Ensure that a view's CHECK OPTIONS clause is enforced properly when the underlying table is a foreign table (Etsuro Fujita) @@ -930,12 +930,12 @@ Branch: REL9_2_STABLE [da9165686] 2017-05-26 15:16:59 -0400 --> Move autogenerated array types out of the way during - ALTER ... RENAME (Vik Fearing) + ALTER ... RENAME (Vik Fearing) Previously, we would rename a conflicting autogenerated array type - out of the way during CREATE; this fix extends that + out of the way during CREATE; this fix extends that behavior to renaming operations. @@ -948,7 +948,7 @@ Branch: REL9_6_STABLE [b35cce914] 2017-05-15 11:33:44 -0400 Branch: REL9_5_STABLE [53a1aa9f9] 2017-05-15 11:33:45 -0400 --> - Fix dangling pointer in ALTER TABLE when there is a + Fix dangling pointer in ALTER TABLE when there is a comment on a constraint belonging to the table (David Rowley) @@ -969,8 +969,8 @@ Branch: REL9_3_STABLE [b7d1bc820] 2017-08-03 21:29:36 -0400 Branch: REL9_2_STABLE [22eb38caa] 2017-08-03 21:42:46 -0400 --> - Ensure that ALTER USER ... SET accepts all the syntax - variants that ALTER ROLE ... SET does (Peter Eisentraut) + Ensure that ALTER USER ... SET accepts all the syntax + variants that ALTER ROLE ... 
SET does (Peter Eisentraut) @@ -981,18 +981,18 @@ Branch: master [86705aa8c] 2017-08-03 13:24:48 -0400 Branch: REL9_6_STABLE [1f220c390] 2017-08-03 13:25:32 -0400 --> - Allow a foreign table's CHECK constraints to be - initially NOT VALID (Amit Langote) + Allow a foreign table's CHECK constraints to be + initially NOT VALID (Amit Langote) - CREATE TABLE silently drops NOT VALID - specifiers for CHECK constraints, reasoning that the + CREATE TABLE silently drops NOT VALID + specifiers for CHECK constraints, reasoning that the table must be empty so the constraint can be validated immediately. - But this is wrong for CREATE FOREIGN TABLE, where there's + But this is wrong for CREATE FOREIGN TABLE, where there's no reason to suppose that the underlying table is empty, and even if it is it's no business of ours to decide that the constraint can be - treated as valid going forward. Skip this optimization for + treated as valid going forward. Skip this optimization for foreign tables. @@ -1009,14 +1009,14 @@ Branch: REL9_2_STABLE [ac93a78b0] 2017-06-16 11:46:26 +0300 --> Properly update dependency info when changing a datatype I/O - function's argument or return type from opaque to the + function's argument or return type from opaque to the correct type (Heikki Linnakangas) - CREATE TYPE updates I/O functions declared in this + CREATE TYPE updates I/O functions declared in this long-obsolete style, but it forgot to record a dependency on the - type, allowing a subsequent DROP TYPE to leave broken + type, allowing a subsequent DROP TYPE to leave broken function definitions behind. @@ -1028,7 +1028,7 @@ Branch: master [34aebcf42] 2017-06-02 19:11:15 -0700 Branch: REL9_6_STABLE [8a7cd781e] 2017-06-02 19:11:23 -0700 --> - Allow parallelism in the query plan when COPY copies from + Allow parallelism in the query plan when COPY copies from a query's result (Andres Freund) @@ -1044,8 +1044,8 @@ Branch: REL9_3_STABLE [11854dee0] 2017-07-12 22:04:08 +0300 Branch: REL9_2_STABLE [40ba61b44] 2017-07-12 22:04:15 +0300 --> - Reduce memory usage when ANALYZE processes - a tsvector column (Heikki Linnakangas) + Reduce memory usage when ANALYZE processes + a tsvector column (Heikki Linnakangas) @@ -1061,7 +1061,7 @@ Branch: REL9_2_STABLE [798d2321e] 2017-05-21 13:05:17 -0400 --> Fix unnecessary precision loss and sloppy rounding when multiplying - or dividing money values by integers or floats (Tom Lane) + or dividing money values by integers or floats (Tom Lane) @@ -1077,7 +1077,7 @@ Branch: REL9_2_STABLE [a047270d5] 2017-05-24 15:28:35 -0400 --> Tighten checks for whitespace in functions that parse identifiers, - such as regprocedurein() (Tom Lane) + such as regprocedurein() (Tom Lane) @@ -1103,13 +1103,13 @@ Branch: REL9_3_STABLE [0d8f015e7] 2017-07-31 12:38:35 -0400 Branch: REL9_2_STABLE [456c7dff2] 2017-07-31 12:38:35 -0400 --> - Use relevant #define symbols from Perl while - compiling PL/Perl (Ashutosh Sharma, Tom Lane) + Use relevant #define symbols from Perl while + compiling PL/Perl (Ashutosh Sharma, Tom Lane) This avoids portability problems, typically manifesting as - a handshake mismatch during library load, when working with + a handshake mismatch during library load, when working with recent Perl versions. 
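To illustrate the foreign-table CHECK constraint fix above, a minimal sketch (the server name is hypothetical and assumes a suitable foreign data wrapper and server are already configured):

CREATE FOREIGN TABLE ft (
    x int,
    CONSTRAINT x_positive CHECK (x > 0) NOT VALID
) SERVER remote_srv;
-- the constraint now stays NOT VALID instead of being silently treated as validated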
@@ -1124,7 +1124,7 @@ Branch: REL9_4_STABLE [1fe1fc449] 2017-06-07 14:04:49 +0300 Branch: REL9_3_STABLE [f2fa0c651] 2017-06-07 14:04:44 +0300 --> - In libpq, reset GSS/SASL and SSPI authentication + In libpq, reset GSS/SASL and SSPI authentication state properly after a failed connection attempt (Michael Paquier) @@ -1146,9 +1146,9 @@ Branch: REL9_3_STABLE [6bc710f6d] 2017-05-17 12:24:19 -0400 Branch: REL9_2_STABLE [07477130e] 2017-05-17 12:24:19 -0400 --> - In psql, fix failure when COPY FROM STDIN + In psql, fix failure when COPY FROM STDIN is ended with a keyboard EOF signal and then another COPY - FROM STDIN is attempted (Thomas Munro) + FROM STDIN is attempted (Thomas Munro) @@ -1167,8 +1167,8 @@ Branch: REL9_4_STABLE [b93217653] 2017-08-03 17:36:43 -0400 Branch: REL9_3_STABLE [035bb8222] 2017-08-03 17:36:23 -0400 --> - Fix pg_dump and pg_restore to - emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) + Fix pg_dump and pg_restore to + emit REFRESH MATERIALIZED VIEW commands last (Tom Lane) @@ -1190,8 +1190,8 @@ Branch: REL9_5_STABLE [12f1e523a] 2017-08-03 14:55:17 -0400 Branch: REL9_4_STABLE [69ad12b58] 2017-08-03 14:55:17 -0400 --> - Improve pg_dump/pg_restore's - reporting of error conditions originating in zlib + Improve pg_dump/pg_restore's + reporting of error conditions originating in zlib (Vladimir Kunschikov, Álvaro Herrera) @@ -1206,7 +1206,7 @@ Branch: REL9_4_STABLE [502ead3d6] 2017-07-22 20:20:10 -0400 Branch: REL9_3_STABLE [68a22bc69] 2017-07-22 20:20:10 -0400 --> - Fix pg_dump with the option to drop event triggers as expected (Tom Lane) @@ -1224,8 +1224,8 @@ Branch: master [4500edc7e] 2017-06-28 10:33:57 -0400 Branch: REL9_6_STABLE [a2de017b3] 2017-06-28 10:34:01 -0400 --> - Fix pg_dump with the @@ -1240,7 +1240,7 @@ Branch: REL9_3_STABLE [a561254e4] 2017-05-26 12:51:05 -0400 Branch: REL9_2_STABLE [f62e1eff5] 2017-05-26 12:51:06 -0400 --> - Fix pg_dump to not emit invalid SQL for an empty + Fix pg_dump to not emit invalid SQL for an empty operator class (Daniel Gustafsson) @@ -1256,7 +1256,7 @@ Branch: REL9_3_STABLE [2943c04f7] 2017-06-19 11:03:16 -0400 Branch: REL9_2_STABLE [c10cbf77a] 2017-06-19 11:03:21 -0400 --> - Fix pg_dump output to stdout on Windows (Kuntal Ghosh) + Fix pg_dump output to stdout on Windows (Kuntal Ghosh) @@ -1276,14 +1276,14 @@ Branch: REL9_3_STABLE [b6d640047] 2017-07-24 15:16:31 -0400 Branch: REL9_2_STABLE [d9874fde8] 2017-07-24 15:16:31 -0400 --> - Fix pg_get_ruledef() to print correct output for - the ON SELECT rule of a view whose columns have been + Fix pg_get_ruledef() to print correct output for + the ON SELECT rule of a view whose columns have been renamed (Tom Lane) - In some corner cases, pg_dump relies - on pg_get_ruledef() to dump views, so that this error + In some corner cases, pg_dump relies + on pg_get_ruledef() to dump views, so that this error could result in dump/reload failures. 
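For example (object names here are illustrative), the deparsed ON SELECT rule now tracks a column rename:

CREATE TABLE base_t (a int);
CREATE VIEW v AS SELECT a FROM base_t;
ALTER VIEW v RENAME COLUMN a TO b;
-- deparse the view's ON SELECT rule; the output is now valid SQL
SELECT pg_get_ruledef(oid) FROM pg_rewrite
WHERE ev_class = 'v'::regclass AND rulename = '_RETURN';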
@@ -1299,7 +1299,7 @@ Branch: REL9_3_STABLE [e947838ae] 2017-07-20 11:29:36 -0400 --> Fix dumping of outer joins with empty constraints, such as the result - of a NATURAL LEFT JOIN with no common columns (Tom Lane) + of a NATURAL LEFT JOIN with no common columns (Tom Lane) @@ -1314,7 +1314,7 @@ Branch: REL9_3_STABLE [0ecc407d9] 2017-07-13 19:24:44 -0400 Branch: REL9_2_STABLE [bccfb1776] 2017-07-13 19:24:44 -0400 --> - Fix dumping of function expressions in the FROM clause in + Fix dumping of function expressions in the FROM clause in cases where the expression does not deparse into something that looks like a function call (Tom Lane) @@ -1331,7 +1331,7 @@ Branch: REL9_3_STABLE [f3633689f] 2017-07-14 16:03:23 +0300 Branch: REL9_2_STABLE [4b994a96c] 2017-07-14 16:03:27 +0300 --> - Fix pg_basebackup output to stdout on Windows + Fix pg_basebackup output to stdout on Windows (Haribabu Kommi) @@ -1349,12 +1349,12 @@ Branch: REL9_6_STABLE [73fbf3d3d] 2017-07-21 22:04:55 -0400 Branch: REL9_5_STABLE [ed367be64] 2017-07-21 22:05:07 -0400 --> - Fix pg_rewind to correctly handle files exceeding 2GB + Fix pg_rewind to correctly handle files exceeding 2GB (Kuntal Ghosh, Michael Paquier) - Ordinarily such files won't appear in PostgreSQL data + Ordinarily such files won't appear in PostgreSQL data directories, but they could be present in some cases. @@ -1370,8 +1370,8 @@ Branch: REL9_3_STABLE [5c890645d] 2017-06-20 13:20:02 -0400 Branch: REL9_2_STABLE [65beccae5] 2017-06-20 13:20:02 -0400 --> - Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + Fix pg_upgrade to ensure that the ending WAL record + does not have = minimum (Bruce Momjian) @@ -1389,7 +1389,7 @@ Branch: REL9_6_STABLE [d3ca4b4b4] 2017-06-05 16:10:07 -0700 Branch: REL9_5_STABLE [25653c171] 2017-06-05 16:10:07 -0700 --> - Fix pg_xlogdump's computation of WAL record length + Fix pg_xlogdump's computation of WAL record length (Andres Freund) @@ -1409,9 +1409,9 @@ Branch: REL9_4_STABLE [a648fc70a] 2017-07-21 14:20:43 -0400 Branch: REL9_3_STABLE [6d9de660d] 2017-07-21 14:20:43 -0400 --> - In postgres_fdw, re-establish connections to remote - servers after ALTER SERVER or ALTER USER - MAPPING commands (Kyotaro Horiguchi) + In postgres_fdw, re-establish connections to remote + servers after ALTER SERVER or ALTER USER + MAPPING commands (Kyotaro Horiguchi) @@ -1430,7 +1430,7 @@ Branch: REL9_4_STABLE [c02c450cf] 2017-06-07 15:40:35 -0400 Branch: REL9_3_STABLE [fc267a0c3] 2017-06-07 15:41:05 -0400 --> - In postgres_fdw, allow cancellation of remote + In postgres_fdw, allow cancellation of remote transaction control commands (Robert Haas, Rafia Sabih) @@ -1449,7 +1449,7 @@ Branch: REL9_5_STABLE [6f2fe2468] 2017-05-11 14:51:38 -0400 Branch: REL9_4_STABLE [5c633f76b] 2017-05-11 14:51:46 -0400 --> - Increase MAX_SYSCACHE_CALLBACKS to provide more room for + Increase MAX_SYSCACHE_CALLBACKS to provide more room for extensions (Tom Lane) @@ -1465,7 +1465,7 @@ Branch: REL9_3_STABLE [cee7238de] 2017-06-01 13:32:56 -0400 Branch: REL9_2_STABLE [a378b9bc2] 2017-06-01 13:32:56 -0400 --> - Always use , not , when building shared libraries with gcc (Tom Lane) @@ -1492,8 +1492,8 @@ Branch: REL9_3_STABLE [da30fa603] 2017-06-05 20:40:47 -0400 Branch: REL9_2_STABLE [f964a7c5a] 2017-06-05 20:41:01 -0400 --> - In MSVC builds, handle the case where the openssl - library is not within a VC subdirectory (Andrew Dunstan) + In MSVC builds, handle the case where the openssl + library is not within a VC subdirectory (Andrew Dunstan) @@ -1508,13 
+1508,13 @@ Branch: REL9_3_STABLE [2c7d2114b] 2017-05-12 10:24:16 -0400 Branch: REL9_2_STABLE [614f83c12] 2017-05-12 10:24:36 -0400 --> - In MSVC builds, add proper include path for libxml2 + In MSVC builds, add proper include path for libxml2 header files (Andrew Dunstan) This fixes a former need to move things around in standard Windows - installations of libxml2. + installations of libxml2. @@ -1530,7 +1530,7 @@ Branch: REL9_2_STABLE [4885e5c88] 2017-07-23 23:53:55 -0700 --> In MSVC builds, recognize a Tcl library that is - named tcl86.lib (Noah Misch) + named tcl86.lib (Noah Misch) @@ -1551,8 +1551,8 @@ Branch: REL9_5_STABLE [7eb4124da] 2017-07-16 11:27:07 -0400 Branch: REL9_4_STABLE [9c3f502b4] 2017-07-16 11:27:15 -0400 --> - In MSVC builds, honor PROVE_FLAGS settings - on vcregress.pl's command line (Andrew Dunstan) + In MSVC builds, honor PROVE_FLAGS settings + on vcregress.pl's command line (Andrew Dunstan) @@ -1589,7 +1589,7 @@ Branch: REL9_4_STABLE [9c3f502b4] 2017-07-16 11:27:15 -0400 Also, if you are using third-party replication tools that depend - on logical decoding, see the fourth changelog entry below. + on logical decoding, see the fourth changelog entry below. @@ -1615,18 +1615,18 @@ Branch: REL9_2_STABLE [99cbb0bd9] 2017-05-08 07:24:28 -0700 --> Restrict visibility - of pg_user_mappings.umoptions, to + of pg_user_mappings.umoptions, to protect passwords stored as user mapping options (Michael Paquier, Feike Steenbergen) The previous coding allowed the owner of a foreign server object, - or anyone he has granted server USAGE permission to, + or anyone he has granted server USAGE permission to, to see the options for all user mappings associated with that server. This might well include passwords for other users. Adjust the view definition to match the behavior of - information_schema.user_mapping_options, namely that + information_schema.user_mapping_options, namely that these options are visible to the user being mapped, or if the mapping is for PUBLIC and the current user is the server owner, or if the current user is a superuser. @@ -1665,7 +1665,7 @@ Branch: REL9_3_STABLE [703da1795] 2017-05-08 11:19:08 -0400 Some selectivity estimation functions in the planner will apply user-defined operators to values obtained - from pg_statistic, such as most common values and + from pg_statistic, such as most common values and histogram entries. This occurs before table permissions are checked, so a nefarious user could exploit the behavior to obtain these values for table columns he does not have permission to read. To fix, @@ -1687,17 +1687,17 @@ Branch: REL9_4_STABLE [ed36c1fe1] 2017-05-08 07:24:27 -0700 Branch: REL9_3_STABLE [3eab81127] 2017-05-08 07:24:28 -0700 --> - Restore libpq's recognition of - the PGREQUIRESSL environment variable (Daniel Gustafsson) + Restore libpq's recognition of + the PGREQUIRESSL environment variable (Daniel Gustafsson) Processing of this environment variable was unintentionally dropped - in PostgreSQL 9.3, but its documentation remained. + in PostgreSQL 9.3, but its documentation remained. This creates a security hazard, since users might be relying on the environment variable to force SSL-encrypted connections, but that would no longer be guaranteed. Restore handling of the variable, - but give it lower priority than PGSSLMODE, to avoid + but give it lower priority than PGSSLMODE, to avoid breaking configurations that work correctly with post-9.3 code. 
(CVE-2017-7485) @@ -1748,7 +1748,7 @@ Branch: REL9_3_STABLE [6bd7816e7] 2017-03-14 12:08:14 -0400 Branch: REL9_2_STABLE [b2ae1d6c4] 2017-03-14 12:10:36 -0400 --> - Fix possible corruption of init forks of unlogged indexes + Fix possible corruption of init forks of unlogged indexes (Robert Haas, Michael Paquier) @@ -1770,7 +1770,7 @@ Branch: REL9_3_STABLE [856580873] 2017-04-23 13:10:57 -0400 Branch: REL9_2_STABLE [952e33b05] 2017-04-23 13:10:58 -0400 --> - Fix incorrect reconstruction of pg_subtrans entries + Fix incorrect reconstruction of pg_subtrans entries when a standby server replays a prepared but uncommitted two-phase transaction (Tom Lane) @@ -1778,7 +1778,7 @@ Branch: REL9_2_STABLE [952e33b05] 2017-04-23 13:10:58 -0400 In most cases this turned out to have no visible ill effects, but in corner cases it could result in circular references - in pg_subtrans, potentially causing infinite loops + in pg_subtrans, potentially causing infinite loops in queries that examine rows modified by the two-phase transaction. @@ -1792,7 +1792,7 @@ Branch: REL9_5_STABLE [feb659cce] 2017-02-22 08:29:44 +0900 Branch: REL9_4_STABLE [a3eb715a3] 2017-02-22 08:29:57 +0900 --> - Avoid possible crash in walsender due to failure + Avoid possible crash in walsender due to failure to initialize a string buffer (Stas Kelvich, Fujii Masao) @@ -1840,7 +1840,7 @@ Branch: REL9_5_STABLE [dba1f310a] 2017-04-24 12:16:58 -0400 Branch: REL9_4_STABLE [436b560b8] 2017-04-24 12:16:58 -0400 --> - Fix postmaster's handling of fork() failure for a + Fix postmaster's handling of fork() failure for a background worker process (Tom Lane) @@ -1858,7 +1858,7 @@ Branch: master [89deca582] 2017-04-07 12:18:38 -0400 Branch: REL9_6_STABLE [c0a493e17] 2017-04-07 12:18:38 -0400 --> - Fix possible no relation entry for relid 0 error when + Fix possible no relation entry for relid 0 error when planning nested set operations (Tom Lane) @@ -1886,7 +1886,7 @@ Branch: REL9_6_STABLE [6c73b390b] 2017-04-17 15:29:00 -0400 Branch: REL9_5_STABLE [6f0f98bb0] 2017-04-17 15:29:00 -0400 --> - Avoid applying physical targetlist optimization to custom + Avoid applying physical targetlist optimization to custom scans (Dmitry Ivanov, Tom Lane) @@ -1905,13 +1905,13 @@ Branch: REL9_6_STABLE [92b15224b] 2017-05-06 21:46:41 -0400 Branch: REL9_5_STABLE [d617c7629] 2017-05-06 21:46:56 -0400 --> - Use the correct sub-expression when applying a FOR ALL + Use the correct sub-expression when applying a FOR ALL row-level-security policy (Stephen Frost) - In some cases the WITH CHECK restriction would be applied - when the USING restriction is more appropriate. + In some cases the WITH CHECK restriction would be applied + when the USING restriction is more appropriate. @@ -1934,7 +1934,7 @@ Branch: REL9_2_STABLE [c9d6c564f] 2017-05-02 18:05:54 -0400 Due to lack of a cache flush step between commands in an extension script file, non-utility queries might not see the effects of an immediately preceding catalog change, such as ALTER TABLE - ... RENAME. + ... RENAME. @@ -1950,12 +1950,12 @@ Branch: REL9_2_STABLE [27a8c8033] 2017-02-12 16:05:23 -0500 --> Skip tablespace privilege checks when ALTER TABLE ... ALTER - COLUMN TYPE rebuilds an existing index (Noah Misch) + COLUMN TYPE rebuilds an existing index (Noah Misch) The command failed if the calling user did not currently have - CREATE privilege for the tablespace containing the index. + CREATE privilege for the tablespace containing the index. 
That behavior seems unhelpful, so skip the check, allowing the index to be rebuilt where it is. @@ -1972,13 +1972,13 @@ Branch: REL9_3_STABLE [954744f7a] 2017-04-28 14:53:56 -0400 Branch: REL9_2_STABLE [f60f0c8fe] 2017-04-28 14:55:42 -0400 --> - Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse - to child tables when the constraint is marked NO INHERIT + Fix ALTER TABLE ... VALIDATE CONSTRAINT to not recurse + to child tables when the constraint is marked NO INHERIT (Amit Langote) - This fix prevents unwanted constraint does not exist failures + This fix prevents unwanted constraint does not exist failures when no matching constraint is present in the child tables. @@ -1991,7 +1991,7 @@ Branch: REL9_6_STABLE [943140d57] 2017-03-06 16:50:47 -0500 Branch: REL9_5_STABLE [420d9ec0a] 2017-03-06 16:50:47 -0500 --> - Avoid dangling pointer in COPY ... TO when row-level + Avoid dangling pointer in COPY ... TO when row-level security is active for the source table (Tom Lane) @@ -2009,8 +2009,8 @@ Branch: REL9_6_STABLE [68f7b91e5] 2017-03-04 16:09:33 -0500 Branch: REL9_5_STABLE [807df31d1] 2017-03-04 16:09:33 -0500 --> - Avoid accessing an already-closed relcache entry in CLUSTER - and VACUUM FULL (Tom Lane) + Avoid accessing an already-closed relcache entry in CLUSTER + and VACUUM FULL (Tom Lane) @@ -2032,14 +2032,14 @@ Branch: master [64ae420b2] 2017-03-17 14:35:54 +0000 Branch: REL9_6_STABLE [733488dc6] 2017-03-17 14:46:15 +0000 --> - Fix VACUUM to account properly for pages that could not + Fix VACUUM to account properly for pages that could not be scanned due to conflicting page pins (Andrew Gierth) This tended to lead to underestimation of the number of tuples in the table. In the worst case of a small heavily-contended - table, VACUUM could incorrectly report that the table + table, VACUUM could incorrectly report that the table contained no tuples, leading to very bad planning choices. @@ -2067,13 +2067,13 @@ Branch: master [d5286aa90] 2017-03-21 16:23:10 +0300 Branch: REL9_6_STABLE [a4d07d2e9] 2017-03-21 16:24:10 +0300 --> - Fix incorrect support for certain box operators in SP-GiST + Fix incorrect support for certain box operators in SP-GiST (Nikita Glukhov) - SP-GiST index scans using the operators &< - &> &<| and |&> + SP-GiST index scans using the operators &< + &> &<| and |&> would yield incorrect answers. @@ -2087,12 +2087,12 @@ Branch: REL9_5_STABLE [d68a2b20a] 2017-04-05 23:51:28 -0400 Branch: REL9_4_STABLE [8851bcf88] 2017-04-05 23:51:28 -0400 --> - Fix integer-overflow problems in interval comparison (Kyotaro + Fix integer-overflow problems in interval comparison (Kyotaro Horiguchi, Tom Lane) - The comparison operators for type interval could yield wrong + The comparison operators for type interval could yield wrong answers for intervals larger than about 296000 years. Indexes on columns containing such large values should be reindexed, since they may be corrupt. @@ -2110,13 +2110,13 @@ Branch: REL9_3_STABLE [6e86b448f] 2017-05-04 21:31:12 -0400 Branch: REL9_2_STABLE [a48d47908] 2017-05-04 22:39:23 -0400 --> - Fix cursor_to_xml() to produce valid output - with tableforest = false + Fix cursor_to_xml() to produce valid output + with tableforest = false (Thomas Munro, Peter Eisentraut) - Previously it failed to produce a wrapping <table> + Previously it failed to produce a wrapping <table> element. 
@@ -2134,8 +2134,8 @@ Branch: REL9_5_STABLE [cf73c6bfc] 2017-02-09 15:49:57 -0500 Branch: REL9_4_STABLE [86ef376bb] 2017-02-09 15:49:58 -0500 --> - Fix roundoff problems in float8_timestamptz() - and make_interval() (Tom Lane) + Fix roundoff problems in float8_timestamptz() + and make_interval() (Tom Lane) @@ -2155,7 +2155,7 @@ Branch: REL9_6_STABLE [1ec36a9eb] 2017-04-16 20:49:40 -0400 Branch: REL9_5_STABLE [b6e6ae1dc] 2017-04-16 20:50:31 -0400 --> - Fix pg_get_object_address() to handle members of operator + Fix pg_get_object_address() to handle members of operator families correctly (Álvaro Herrera) @@ -2167,12 +2167,12 @@ Branch: master [78874531b] 2017-03-24 13:53:40 +0300 Branch: REL9_6_STABLE [8de6278d3] 2017-03-24 13:55:02 +0300 --> - Fix cancelling of pg_stop_backup() when attempting to stop + Fix cancelling of pg_stop_backup() when attempting to stop a non-exclusive backup (Michael Paquier, David Steele) - If pg_stop_backup() was cancelled while waiting for a + If pg_stop_backup() was cancelled while waiting for a non-exclusive backup to end, related state was left inconsistent; a new exclusive backup could not be started, and there were other minor problems. @@ -2196,7 +2196,7 @@ Branch: REL9_3_STABLE [07987304d] 2017-05-07 11:35:05 -0400 Branch: REL9_2_STABLE [9061680f0] 2017-05-07 11:35:11 -0400 --> - Improve performance of pg_timezone_names view + Improve performance of pg_timezone_names view (Tom Lane, David Rowley) @@ -2226,13 +2226,13 @@ Branch: REL9_3_STABLE [3f613c6a4] 2017-02-21 17:51:28 -0500 Branch: REL9_2_STABLE [775227590] 2017-02-21 17:51:28 -0500 --> - Fix sloppy handling of corner-case errors from lseek() - and close() (Tom Lane) + Fix sloppy handling of corner-case errors from lseek() + and close() (Tom Lane) Neither of these system calls are likely to fail in typical situations, - but if they did, fd.c could get quite confused. + but if they did, fd.c could get quite confused. @@ -2273,8 +2273,8 @@ Branch: REL9_3_STABLE [04207ef76] 2017-03-13 20:52:05 +0100 Branch: REL9_2_STABLE [d8c207437] 2017-03-13 20:52:16 +0100 --> - Fix ecpg to support COMMIT PREPARED - and ROLLBACK PREPARED (Masahiko Sawada) + Fix ecpg to support COMMIT PREPARED + and ROLLBACK PREPARED (Masahiko Sawada) @@ -2290,7 +2290,7 @@ Branch: REL9_2_STABLE [731afc91f] 2017-03-10 10:52:01 +0100 --> Fix a double-free error when processing dollar-quoted string literals - in ecpg (Michael Meskes) + in ecpg (Michael Meskes) @@ -2300,8 +2300,8 @@ Author: Teodor Sigaev Branch: REL9_6_STABLE [2ed391f95] 2017-03-24 19:23:13 +0300 --> - Fix pgbench to handle the combination - of and options correctly (Fabien Coelho) @@ -2313,8 +2313,8 @@ Branch: master [ef2662394] 2017-03-07 11:36:42 -0500 Branch: REL9_6_STABLE [0e2c85d13] 2017-03-07 11:36:35 -0500 --> - Fix pgbench to honor the long-form option - spelling , as per its documentation (Tom Lane) @@ -2325,15 +2325,15 @@ Branch: master [330b84d8c] 2017-03-06 23:29:02 -0500 Branch: REL9_6_STABLE [e961341cc] 2017-03-06 23:29:08 -0500 --> - Fix pg_dump/pg_restore to correctly - handle privileges for the public schema when - using option (Stephen Frost) Other schemas start out with no privileges granted, - but public does not; this requires special-case treatment - when it is dropped and restored due to the option. 
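As context for the pg_stop_backup() fix above, a non-exclusive base backup in 9.6 is driven from a single session roughly as follows (the label is arbitrary):

SELECT pg_start_backup('nightly', false, false);  -- third argument: exclusive = false
-- copy the data directory while keeping this connection open, then:
SELECT * FROM pg_stop_backup(false);
-- cancelling that call while it waits no longer leaves the backup state inconsistent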
@@ -2348,7 +2348,7 @@ Branch: REL9_3_STABLE [783acfd4d] 2017-03-06 19:33:59 -0500 Branch: REL9_2_STABLE [0ab75448e] 2017-03-06 19:33:59 -0500 --> - In pg_dump, fix incorrect schema and owner marking for + In pg_dump, fix incorrect schema and owner marking for comments and security labels of some types of database objects (Giuseppe Broccolo, Tom Lane) @@ -2368,12 +2368,12 @@ Branch: master [39370e6a0] 2017-02-17 15:06:28 -0500 Branch: REL9_6_STABLE [4e8b2fd33] 2017-02-17 15:06:34 -0500 --> - Fix typo in pg_dump's query for initial privileges + Fix typo in pg_dump's query for initial privileges of a procedural language (Peter Eisentraut) - This resulted in pg_dump always believing that the + This resulted in pg_dump always believing that the language had no initial privileges. Since that's true for most procedural languages, ill effects from this bug are probably rare. @@ -2390,13 +2390,13 @@ Branch: REL9_3_STABLE [0c0a95c2f] 2017-03-10 14:15:09 -0500 Branch: REL9_2_STABLE [e6d2ba419] 2017-03-10 14:15:09 -0500 --> - Avoid emitting an invalid list file in pg_restore -l + Avoid emitting an invalid list file in pg_restore -l when SQL object names contain newlines (Tom Lane) Replace newlines by spaces, which is sufficient to make the output - valid for pg_restore -L's purposes. + valid for pg_restore -L's purposes. @@ -2411,8 +2411,8 @@ Branch: REL9_3_STABLE [7f831f09b] 2017-03-06 17:04:29 -0500 Branch: REL9_2_STABLE [e864cd25b] 2017-03-06 17:04:55 -0500 --> - Fix pg_upgrade to transfer comments and security labels - attached to large objects (blobs) (Stephen Frost) + Fix pg_upgrade to transfer comments and security labels + attached to large objects (blobs) (Stephen Frost) @@ -2433,13 +2433,13 @@ Branch: REL9_2_STABLE [0276da5eb] 2017-03-12 19:36:28 -0400 --> Improve error handling - in contrib/adminpack's pg_file_write() + in contrib/adminpack's pg_file_write() function (Noah Misch) Notably, it failed to detect errors reported - by fclose(). + by fclose(). @@ -2454,7 +2454,7 @@ Branch: REL9_3_STABLE [f6cfc14e5] 2017-03-11 13:33:22 -0800 Branch: REL9_2_STABLE [c4613c3f4] 2017-03-11 13:33:30 -0800 --> - In contrib/dblink, avoid leaking the previous unnamed + In contrib/dblink, avoid leaking the previous unnamed connection when establishing a new unnamed connection (Joe Conway) @@ -2479,7 +2479,7 @@ Branch: REL9_4_STABLE [b179684c7] 2017-04-13 17:18:35 -0400 Branch: REL9_3_STABLE [5be58cc89] 2017-04-13 17:18:35 -0400 --> - Fix contrib/pg_trgm's extraction of trigrams from regular + Fix contrib/pg_trgm's extraction of trigrams from regular expressions (Tom Lane) @@ -2497,7 +2497,7 @@ Branch: master [332bec1e6] 2017-04-24 22:50:07 -0400 Branch: REL9_6_STABLE [86e640a69] 2017-04-26 09:14:21 -0400 --> - In contrib/postgres_fdw, allow join conditions that + In contrib/postgres_fdw, allow join conditions that contain shippable extension-provided functions to be pushed to the remote server (David Rowley, Ashutosh Bapat) @@ -2555,7 +2555,7 @@ Branch: REL9_3_STABLE [dc93cafca] 2017-05-01 11:54:02 -0400 Branch: REL9_2_STABLE [c96ccc40e] 2017-05-01 11:54:08 -0400 --> - Update time zone data files to tzdata release 2017b + Update time zone data files to tzdata release 2017b for DST law changes in Chile, Haiti, and Mongolia, plus historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. Switch to numeric abbreviations for numerous time zones in South @@ -2569,9 +2569,9 @@ Branch: REL9_2_STABLE [c96ccc40e] 2017-05-01 11:54:08 -0400 or no currency among the local population. 
They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. @@ -2593,15 +2593,15 @@ Branch: REL9_2_STABLE [82e7d3dfd] 2017-05-07 11:57:41 -0400 The Microsoft MSVC build scripts neglected to install - the posixrules file in the timezone directory tree. + the posixrules file in the timezone directory tree. This resulted in the timezone code falling back to its built-in rule about what DST behavior to assume for a POSIX-style time zone name. For historical reasons that still corresponds to the DST rules the USA was using before 2007 (i.e., change on first Sunday in April and last Sunday in October). With this fix, a POSIX-style zone name will use the current and historical DST transition dates of - the US/Eastern zone. If you don't want that, remove - the posixrules file, or replace it with a copy of some + the US/Eastern zone. If you don't want that, remove + the posixrules file, or replace it with a copy of some other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -2663,15 +2663,15 @@ Branch: REL9_2_STABLE [bcd7b47c2] 2017-02-06 13:20:25 -0500 --> Fix a race condition that could cause indexes built - with CREATE INDEX CONCURRENTLY to be corrupt + with CREATE INDEX CONCURRENTLY to be corrupt (Pavan Deolasee, Tom Lane) - If CREATE INDEX CONCURRENTLY was used to build an index + If CREATE INDEX CONCURRENTLY was used to build an index that depends on a column not previously indexed, then rows updated by transactions that ran concurrently with - the CREATE INDEX command could have received incorrect + the CREATE INDEX command could have received incorrect index entries. If you suspect this may have happened, the most reliable solution is to rebuild affected indexes after installing this update. @@ -2695,7 +2695,7 @@ Branch: REL9_4_STABLE [3e844a34b] 2016-11-15 15:55:36 -0500 Backends failed to account for this snapshot when advertising their oldest xmin, potentially allowing concurrent vacuuming operations to remove data that was still needed. This led to transient failures - along the lines of cache lookup failed for relation 1255. + along the lines of cache lookup failed for relation 1255. @@ -2711,7 +2711,7 @@ Branch: REL9_5_STABLE [ed8e8b814] 2017-01-09 18:19:29 -0300 - The WAL record emitted for a BRIN revmap page when moving an + The WAL record emitted for a BRIN revmap page when moving an index tuple to a different page was incorrect. Replay would make the related portion of the index useless, forcing it to be recomputed. @@ -2728,13 +2728,13 @@ Branch: REL9_3_STABLE [8e403f215] 2016-12-08 14:16:47 -0500 Branch: REL9_2_STABLE [a00ac6299] 2016-12-08 14:19:25 -0500 --> - Unconditionally WAL-log creation of the init fork for an + Unconditionally WAL-log creation of the init fork for an unlogged table (Michael Paquier) Previously, this was skipped when - = minimal, but actually it's necessary even in that case + = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. 
@@ -2816,7 +2816,7 @@ Branch: master [93eb619cd] 2016-12-17 02:22:15 +0900 Branch: REL9_6_STABLE [6c75fb6b3] 2016-12-17 02:25:47 +0900 --> - Disallow setting the num_sync field to zero in + Disallow setting the num_sync field to zero in (Fujii Masao) @@ -2867,7 +2867,7 @@ Branch: REL9_6_STABLE [20064c0ec] 2017-01-29 23:05:09 -0500 --> Fix tracking of initial privileges for extension member objects so - that it works correctly with ALTER EXTENSION ... ADD/DROP + that it works correctly with ALTER EXTENSION ... ADD/DROP (Stephen Frost) @@ -2875,7 +2875,7 @@ Branch: REL9_6_STABLE [20064c0ec] 2017-01-29 23:05:09 -0500 An object's current privileges at the time it is added to the extension will now be considered its default privileges; only later changes in its privileges will be dumped by - subsequent pg_dump runs. + subsequent pg_dump runs. @@ -2890,7 +2890,7 @@ Branch: REL9_3_STABLE [8f67a6c22] 2016-11-23 13:45:56 -0500 Branch: REL9_2_STABLE [05975ab0a] 2016-11-23 13:45:56 -0500 --> - Make sure ALTER TABLE preserves index tablespace + Make sure ALTER TABLE preserves index tablespace assignments when rebuilding indexes (Tom Lane, Michael Paquier) @@ -2912,7 +2912,7 @@ Branch: REL9_4_STABLE [3a9a8c408] 2016-10-26 17:05:06 -0400 Fix incorrect updating of trigger function properties when changing a foreign-key constraint's deferrability properties with ALTER - TABLE ... ALTER CONSTRAINT (Tom Lane) + TABLE ... ALTER CONSTRAINT (Tom Lane) @@ -2937,8 +2937,8 @@ Branch: REL9_2_STABLE [6a363a4c2] 2016-11-25 13:44:48 -0500 - This avoids could not find trigger NNN - or relation NNN has no triggers errors. + This avoids could not find trigger NNN + or relation NNN has no triggers errors. @@ -2950,15 +2950,15 @@ Branch: REL9_6_STABLE [4e563a1f6] 2017-01-09 19:26:58 -0300 Branch: REL9_5_STABLE [4d4ab6ccd] 2017-01-09 19:26:58 -0300 --> - Fix ALTER TABLE ... SET DATA TYPE ... USING when child + Fix ALTER TABLE ... SET DATA TYPE ... USING when child table has different column ordering than the parent (Álvaro Herrera) - Failure to adjust the column numbering in the USING + Failure to adjust the column numbering in the USING expression led to errors, - typically attribute N has wrong type. + typically attribute N has wrong type. @@ -2974,7 +2974,7 @@ Branch: REL9_2_STABLE [6c4cf2be8] 2017-01-04 18:00:12 -0500 --> Fix processing of OID column when a table with OIDs is associated to - a parent with OIDs via ALTER TABLE ... INHERIT (Amit + a parent with OIDs via ALTER TABLE ... INHERIT (Amit Langote) @@ -2992,8 +2992,8 @@ Branch: master [1ead0208b] 2016-12-22 16:23:38 -0500 Branch: REL9_6_STABLE [68330c8b4] 2016-12-22 16:23:34 -0500 --> - Ensure that CREATE TABLE ... LIKE ... WITH OIDS creates - a table with OIDs, whether or not the LIKE-referenced + Ensure that CREATE TABLE ... LIKE ... 
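For example:

CREATE UNLOGGED TABLE scratch (n int);
-- rows in scratch are never WAL-logged, but creation of its empty init fork now
-- always is, even under wal_level = minimal, so after a crash the table reliably
-- comes back empty rather than in an undefined state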
WITH OIDS creates + a table with OIDs, whether or not the LIKE-referenced table(s) have OIDs (Tom Lane) @@ -3007,7 +3007,7 @@ Branch: REL9_5_STABLE [78a98b767] 2016-12-21 17:02:47 +0000 Branch: REL9_4_STABLE [cad24980e] 2016-12-21 17:03:54 +0000 --> - Fix CREATE OR REPLACE VIEW to update the view query + Fix CREATE OR REPLACE VIEW to update the view query before attempting to apply the new view options (Dean Rasheed) @@ -3028,7 +3028,7 @@ Branch: REL9_3_STABLE [0e3aadb68] 2016-12-22 17:09:00 -0500 --> Report correct object identity during ALTER TEXT SEARCH - CONFIGURATION (Artur Zakirov) + CONFIGURATION (Artur Zakirov) @@ -3046,8 +3046,8 @@ Branch: REL9_5_STABLE [7816d1356] 2016-11-24 15:39:55 -0300 --> Fix commit timestamp mechanism to not fail when queried about - the special XIDs FrozenTransactionId - and BootstrapTransactionId (Craig Ringer) + the special XIDs FrozenTransactionId + and BootstrapTransactionId (Craig Ringer) @@ -3068,8 +3068,8 @@ Branch: REL9_5_STABLE [6e00ba1e1] 2016-11-10 15:00:58 -0500 The symptom was spurious ON CONFLICT is not supported on table - ... used as a catalog table errors when the target - of INSERT ... ON CONFLICT is a view with cascade option. + ... used as a catalog table errors when the target + of INSERT ... ON CONFLICT is a view with cascade option. @@ -3081,8 +3081,8 @@ Branch: REL9_6_STABLE [da05d0ebc] 2016-12-04 15:02:46 -0500 Branch: REL9_5_STABLE [25c06a1ed] 2016-12-04 15:02:48 -0500 --> - Fix incorrect target lists can have at most N - entries complaint when using ON CONFLICT with + Fix incorrect target lists can have at most N + entries complaint when using ON CONFLICT with wide tables (Tom Lane) @@ -3094,8 +3094,8 @@ Branch: master [da8f3ebf3] 2016-11-02 14:32:13 -0400 Branch: REL9_6_STABLE [f4d865f22] 2016-11-02 14:32:13 -0400 --> - Fix spurious query provides a value for a dropped column - errors during INSERT or UPDATE on a table + Fix spurious query provides a value for a dropped column + errors during INSERT or UPDATE on a table with a dropped column (Tom Lane) @@ -3110,13 +3110,13 @@ Branch: REL9_4_STABLE [44c8b4fcd] 2016-11-20 14:26:19 -0500 Branch: REL9_3_STABLE [71db302ec] 2016-11-20 14:26:19 -0500 --> - Prevent multicolumn expansion of foo.* in - an UPDATE source expression (Tom Lane) + Prevent multicolumn expansion of foo.* in + an UPDATE source expression (Tom Lane) This led to UPDATE target count mismatch --- internal - error. Now the syntax is understood as a whole-row variable, + error. Now the syntax is understood as a whole-row variable, as it would be in other contexts. @@ -3133,12 +3133,12 @@ Branch: REL9_2_STABLE [082d1fb9e] 2016-12-09 12:01:14 -0500 --> Ensure that column typmods are determined accurately for - multi-row VALUES constructs (Tom Lane) + multi-row VALUES constructs (Tom Lane) This fixes problems occurring when the first value in a column has a - determinable typmod (e.g., length for a varchar value) but + determinable typmod (e.g., length for a varchar value) but later values don't share the same limit. @@ -3162,8 +3162,8 @@ Branch: REL9_2_STABLE [6e2c21ec5] 2016-12-21 17:39:33 -0500 Normally, a Unicode surrogate leading character must be followed by a Unicode surrogate trailing character, but the check for this was missed if the leading character was the last character in a Unicode - string literal (U&'...') or Unicode identifier - (U&"..."). + string literal (U&'...') or Unicode identifier + (U&"..."). 
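For example, assuming a UTF8 server encoding:

SELECT U&'\+01F600';  -- a complete code point: accepted
SELECT U&'\D83D';     -- a lone leading surrogate, now rejected even at end of string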
@@ -3174,7 +3174,7 @@ Branch: master [db80acfc9] 2016-12-20 09:20:17 +0200 Branch: REL9_6_STABLE [ce92fc4e2] 2016-12-20 09:20:30 +0200 --> - Fix execution of DISTINCT and ordered aggregates when + Fix execution of DISTINCT and ordered aggregates when multiple such aggregates are able to share the same transition state (Heikki Linnakangas) @@ -3189,7 +3189,7 @@ Branch: master [260443847] 2016-12-19 13:49:50 -0500 Branch: REL9_6_STABLE [3f07eff10] 2016-12-19 13:49:45 -0500 --> - Fix implementation of phrase search operators in tsquery + Fix implementation of phrase search operators in tsquery (Tom Lane) @@ -3218,7 +3218,7 @@ Branch: REL9_2_STABLE [fe6120f9b] 2017-01-26 12:17:47 -0500 --> Ensure that a purely negative text search query, such - as !foo, matches empty tsvectors (Tom Dunstan) + as !foo, matches empty tsvectors (Tom Dunstan) @@ -3238,7 +3238,7 @@ Branch: REL9_3_STABLE [79e1a9efa] 2016-12-11 13:09:57 -0500 Branch: REL9_2_STABLE [f4ccee408] 2016-12-11 13:09:57 -0500 --> - Prevent crash when ts_rewrite() replaces a non-top-level + Prevent crash when ts_rewrite() replaces a non-top-level subtree with an empty query (Artur Zakirov) @@ -3254,7 +3254,7 @@ Branch: REL9_3_STABLE [407d513df] 2016-10-30 17:35:43 -0400 Branch: REL9_2_STABLE [606e16a7f] 2016-10-30 17:35:43 -0400 --> - Fix performance problems in ts_rewrite() (Tom Lane) + Fix performance problems in ts_rewrite() (Tom Lane) @@ -3269,7 +3269,7 @@ Branch: REL9_3_STABLE [77a22f898] 2016-10-30 15:24:40 -0400 Branch: REL9_2_STABLE [b0f8a273e] 2016-10-30 15:24:40 -0400 --> - Fix ts_rewrite()'s handling of nested NOT operators + Fix ts_rewrite()'s handling of nested NOT operators (Tom Lane) @@ -3283,7 +3283,7 @@ Branch: REL9_5_STABLE [7151e72d7] 2016-10-30 12:27:41 -0400 --> Improve speed of user-defined aggregates that - use array_append() as transition function (Tom Lane) + use array_append() as transition function (Tom Lane) @@ -3298,7 +3298,7 @@ Branch: REL9_3_STABLE [ee9cb284a] 2017-01-05 11:33:51 -0500 Branch: REL9_2_STABLE [e0d59c6ef] 2017-01-05 11:33:51 -0500 --> - Fix array_fill() to handle empty arrays properly (Tom Lane) + Fix array_fill() to handle empty arrays properly (Tom Lane) @@ -3310,8 +3310,8 @@ Branch: REL9_6_STABLE [79c89f1f4] 2016-12-09 12:42:17 -0300 Branch: REL9_5_STABLE [581b09c72] 2016-12-09 12:42:17 -0300 --> - Fix possible crash in array_position() - or array_positions() when processing arrays of records + Fix possible crash in array_position() + or array_positions() when processing arrays of records (Junseok Yang) @@ -3327,7 +3327,7 @@ Branch: REL9_3_STABLE [e71fe8470] 2016-12-16 12:53:22 +0200 Branch: REL9_2_STABLE [c8f8ed5c2] 2016-12-16 12:53:27 +0200 --> - Fix one-byte buffer overrun in quote_literal_cstr() + Fix one-byte buffer overrun in quote_literal_cstr() (Heikki Linnakangas) @@ -3348,8 +3348,8 @@ Branch: REL9_3_STABLE [f64b11fa0] 2017-01-17 17:32:20 +0900 Branch: REL9_2_STABLE [c73157ca0] 2017-01-17 17:32:45 +0900 --> - Prevent multiple calls of pg_start_backup() - and pg_stop_backup() from running concurrently (Michael + Prevent multiple calls of pg_start_backup() + and pg_stop_backup() from running concurrently (Michael Paquier) @@ -3368,7 +3368,7 @@ Branch: REL9_5_STABLE [74e67bbad] 2017-01-18 15:21:52 -0500 --> Disable transform that attempted to remove no-op AT TIME - ZONE conversions (Tom Lane) + ZONE conversions (Tom Lane) @@ -3388,15 +3388,15 @@ Branch: REL9_3_STABLE [583599839] 2016-12-27 15:43:54 -0500 Branch: REL9_2_STABLE [beae7d5f0] 2016-12-27 15:43:55 -0500 --> - Avoid discarding 
interval-to-interval casts + Avoid discarding interval-to-interval casts that aren't really no-ops (Tom Lane) In some cases, a cast that should result in zeroing out - low-order interval fields was mistakenly deemed to be a + low-order interval fields was mistakenly deemed to be a no-op and discarded. An example is that casting from INTERVAL - MONTH to INTERVAL YEAR failed to clear the months field. + MONTH to INTERVAL YEAR failed to clear the months field. @@ -3432,7 +3432,7 @@ Branch: master [4212cb732] 2016-12-06 11:11:54 -0500 Branch: REL9_6_STABLE [ebe5dc9e0] 2016-12-06 11:43:12 -0500 --> - Allow statements prepared with PREPARE to be given + Allow statements prepared with PREPARE to be given parallel plans (Amit Kapila, Tobias Bussmann) @@ -3501,7 +3501,7 @@ Branch: REL9_6_STABLE [7defc3b97] 2016-11-10 11:31:56 -0500 --> Fix the plan generated for sorted partial aggregation with a constant - GROUP BY clause (Tom Lane) + GROUP BY clause (Tom Lane) @@ -3512,8 +3512,8 @@ Branch: master [1f542a2ea] 2016-12-13 13:20:37 -0500 Branch: REL9_6_STABLE [997a2994e] 2016-12-13 13:20:16 -0500 --> - Fix could not find plan for CTE planner error when dealing - with a UNION ALL containing CTE references (Tom Lane) + Fix could not find plan for CTE planner error when dealing + with a UNION ALL containing CTE references (Tom Lane) @@ -3530,7 +3530,7 @@ Branch: REL9_6_STABLE [b971a98ce] 2017-02-02 19:11:27 -0500 The typical consequence of this mistake was a plan should not - reference subplan's variable error. + reference subplan's variable error. @@ -3561,7 +3561,7 @@ Branch: master [bec96c82f] 2017-01-19 12:06:21 -0500 Branch: REL9_6_STABLE [fd081cabf] 2017-01-19 12:06:27 -0500 --> - Fix pg_dump to emit the data of a sequence that is + Fix pg_dump to emit the data of a sequence that is marked as an extension configuration table (Michael Paquier) @@ -3573,14 +3573,14 @@ Branch: master [e2090d9d2] 2017-01-31 16:24:11 -0500 Branch: REL9_6_STABLE [eb5e9d90d] 2017-01-31 16:24:14 -0500 --> - Fix mishandling of ALTER DEFAULT PRIVILEGES ... REVOKE - in pg_dump (Stephen Frost) + Fix mishandling of ALTER DEFAULT PRIVILEGES ... REVOKE + in pg_dump (Stephen Frost) - pg_dump missed issuing the - required REVOKE commands in cases where ALTER - DEFAULT PRIVILEGES had been used to reduce privileges to less than + pg_dump missed issuing the + required REVOKE commands in cases where ALTER + DEFAULT PRIVILEGES had been used to reduce privileges to less than they would normally be. @@ -3602,7 +3602,7 @@ Branch: REL9_3_STABLE [fc03f7dd1] 2016-12-21 13:47:28 -0500 Branch: REL9_2_STABLE [59a389891] 2016-12-21 13:47:32 -0500 --> - Fix pg_dump to dump user-defined casts and transforms + Fix pg_dump to dump user-defined casts and transforms that use built-in functions (Stephen Frost) @@ -3616,15 +3616,15 @@ Branch: REL9_5_STABLE [a7864037d] 2016-11-17 14:59:23 -0500 Branch: REL9_4_STABLE [e69b532be] 2016-11-17 14:59:26 -0500 --> - Fix pg_restore with to behave more sanely if an archive contains - unrecognized DROP commands (Tom Lane) + unrecognized DROP commands (Tom Lane) This doesn't fix any live bug, but it may improve the behavior in - future if pg_restore is used with an archive - generated by a later pg_dump version. + future if pg_restore is used with an archive + generated by a later pg_dump version. 
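For example, under the interval-cast fix above:

SELECT CAST('14 months'::interval AS interval year);
-- now yields '1 year'; previously the cast was discarded as a no-op,
-- leaving '1 year 2 mons'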
@@ -3637,7 +3637,7 @@ Branch: REL9_5_STABLE [bc53d7130] 2016-12-19 10:16:02 +0100 Branch: REL9_4_STABLE [f6508827a] 2016-12-19 10:16:12 +0100 --> - Fix pg_basebackup's rate limiting in the presence of + Fix pg_basebackup's rate limiting in the presence of slow I/O (Antonin Houska) @@ -3656,8 +3656,8 @@ Branch: REL9_5_STABLE [6d779e05a] 2016-11-07 15:03:56 +0100 Branch: REL9_4_STABLE [5556420d4] 2016-11-07 15:04:23 +0100 --> - Fix pg_basebackup's handling of - symlinked pg_stat_tmp and pg_replslot + Fix pg_basebackup's handling of + symlinked pg_stat_tmp and pg_replslot subdirectories (Magnus Hagander, Michael Paquier) @@ -3673,7 +3673,7 @@ Branch: REL9_3_STABLE [92929a3e3] 2016-10-27 12:00:05 -0400 Branch: REL9_2_STABLE [629575fa2] 2016-10-27 12:14:07 -0400 --> - Fix possible pg_basebackup failure on standby + Fix possible pg_basebackup failure on standby server when including WAL files (Amit Kapila, Robert Haas) @@ -3685,10 +3685,10 @@ Branch: master [dbdfd114f] 2016-11-25 18:36:10 -0500 Branch: REL9_6_STABLE [255bcd27f] 2016-11-25 18:36:10 -0500 --> - Improve initdb to insert the correct + Improve initdb to insert the correct platform-specific default values for - the xxx_flush_after parameters - into postgresql.conf (Fabien Coelho, Tom Lane) + the xxx_flush_after parameters + into postgresql.conf (Fabien Coelho, Tom Lane) @@ -3706,7 +3706,7 @@ Branch: REL9_5_STABLE [c472f2a33] 2016-12-22 15:01:39 -0500 --> Fix possible mishandling of expanded arrays in domain check - constraints and CASE execution (Tom Lane) + constraints and CASE execution (Tom Lane) @@ -3762,14 +3762,14 @@ Branch: REL9_3_STABLE [9c0b04f18] 2016-11-06 14:43:14 -0500 Branch: REL9_2_STABLE [92b7b1058] 2016-11-06 14:43:14 -0500 --> - Fix PL/Tcl to support triggers on tables that have .tupno + Fix PL/Tcl to support triggers on tables that have .tupno as a column name (Tom Lane) This matches the (previously undocumented) behavior of - PL/Tcl's spi_exec and spi_execp commands, - namely that a magic .tupno column is inserted only if + PL/Tcl's spi_exec and spi_execp commands, + namely that a magic .tupno column is inserted only if there isn't a real column named that. @@ -3785,7 +3785,7 @@ Branch: REL9_3_STABLE [46b6f3fff] 2016-11-15 16:17:19 -0500 Branch: REL9_2_STABLE [13aa9af37] 2016-11-15 16:17:19 -0500 --> - Allow DOS-style line endings in ~/.pgpass files, + Allow DOS-style line endings in ~/.pgpass files, even on Unix (Vik Fearing) @@ -3806,7 +3806,7 @@ Branch: REL9_3_STABLE [1df8b3fe8] 2016-12-22 08:32:25 +0100 Branch: REL9_2_STABLE [501c91074] 2016-12-22 08:34:07 +0100 --> - Fix one-byte buffer overrun if ecpg is given a file + Fix one-byte buffer overrun if ecpg is given a file name that ends with a dot (Takayuki Tsunakawa) @@ -3819,11 +3819,11 @@ Branch: REL9_6_STABLE [6a8c67f50] 2016-12-25 16:04:47 -0500 --> Fix incorrect error reporting for duplicate data - in psql's \crosstabview (Tom Lane) + in psql's \crosstabview (Tom Lane) - psql sometimes quoted the wrong row and/or column + psql sometimes quoted the wrong row and/or column values when complaining about multiple entries for the same crosstab cell. 
@@ -3840,8 +3840,8 @@ Branch: REL9_3_STABLE [2022d594d] 2016-12-23 21:01:48 -0500 Branch: REL9_2_STABLE [26b55d669] 2016-12-23 21:01:51 -0500 --> - Fix psql's tab completion for ALTER DEFAULT - PRIVILEGES (Gilles Darold, Stephen Frost) + Fix psql's tab completion for ALTER DEFAULT + PRIVILEGES (Gilles Darold, Stephen Frost) @@ -3852,8 +3852,8 @@ Branch: master [404e66758] 2016-11-28 11:51:30 -0500 Branch: REL9_6_STABLE [28735cc72] 2016-11-28 11:51:35 -0500 --> - Fix psql's tab completion for ALTER TABLE t - ALTER c DROP ... (Kyotaro Horiguchi) + Fix psql's tab completion for ALTER TABLE t + ALTER c DROP ... (Kyotaro Horiguchi) @@ -3868,9 +3868,9 @@ Branch: REL9_3_STABLE [82eb5c514] 2016-12-07 12:19:56 -0500 Branch: REL9_2_STABLE [1ec5cc025] 2016-12-07 12:19:57 -0500 --> - In psql, treat an empty or all-blank setting of - the PAGER environment variable as meaning no - pager (Tom Lane) + In psql, treat an empty or all-blank setting of + the PAGER environment variable as meaning no + pager (Tom Lane) @@ -3890,8 +3890,8 @@ Branch: REL9_3_STABLE [9b8507bfa] 2016-12-22 09:47:25 -0800 Branch: REL9_2_STABLE [44de099f8] 2016-12-22 09:46:46 -0800 --> - Improve contrib/dblink's reporting of - low-level libpq errors, such as out-of-memory + Improve contrib/dblink's reporting of + low-level libpq errors, such as out-of-memory (Joe Conway) @@ -3906,14 +3906,14 @@ Branch: REL9_4_STABLE [cb687e0ac] 2016-12-22 09:19:08 -0800 Branch: REL9_3_STABLE [bd46cce21] 2016-12-22 09:18:50 -0800 --> - Teach contrib/dblink to ignore irrelevant server options - when it uses a contrib/postgres_fdw foreign server as + Teach contrib/dblink to ignore irrelevant server options + when it uses a contrib/postgres_fdw foreign server as the source of connection options (Corey Huinker) Previously, if the foreign server object had options that were not - also libpq connection options, an error occurred. + also libpq connection options, an error occurred. @@ -3927,7 +3927,7 @@ Branch: REL9_6_STABLE [2a8783e44] 2016-11-02 00:09:28 -0400 Branch: REL9_5_STABLE [af636d7b5] 2016-11-02 00:09:28 -0400 --> - Fix portability problems in contrib/pageinspect's + Fix portability problems in contrib/pageinspect's functions for GIN indexes (Peter Eisentraut, Tom Lane) @@ -4016,7 +4016,7 @@ Branch: REL9_3_STABLE [2b133be04] 2017-01-30 11:41:02 -0500 Branch: REL9_2_STABLE [ef878cc2c] 2017-01-30 11:41:09 -0500 --> - Update time zone data files to tzdata release 2016j + Update time zone data files to tzdata release 2016j for DST law changes in northern Cyprus (adding a new zone Asia/Famagusta), Russia (adding a new zone Europe/Saratov), Tonga, and Antarctica/Casey. @@ -4083,7 +4083,7 @@ Branch: REL9_3_STABLE [1c02ee314] 2016-10-19 15:00:34 +0300 crash recovery, or to be written incorrectly on a standby server. Bogus entries in a free space map could lead to attempts to access pages that have been truncated away from the relation itself, typically - producing errors like could not read block XXX: + producing errors like could not read block XXX: read only 0 of 8192 bytes. Checksum failures in the visibility map are also possible, if checksumming is enabled. @@ -4091,7 +4091,7 @@ Branch: REL9_3_STABLE [1c02ee314] 2016-10-19 15:00:34 +0300 Procedures for determining whether there is a problem and repairing it if so are discussed at - . + . 
@@ -4102,7 +4102,7 @@ Branch: master [5afcd2aa7] 2016-09-30 20:40:55 -0400 Branch: REL9_6_STABLE [b6d906073] 2016-09-30 20:39:06 -0400 --> - Fix possible data corruption when pg_upgrade rewrites + Fix possible data corruption when pg_upgrade rewrites a relation visibility map into 9.6 format (Tom Lane) @@ -4112,20 +4112,20 @@ Branch: REL9_6_STABLE [b6d906073] 2016-09-30 20:39:06 -0400 Windows, the old map was read using text mode, leading to incorrect results if the map happened to contain consecutive bytes that matched a carriage return/line feed sequence. The latter error would almost - always lead to a pg_upgrade failure due to the map + always lead to a pg_upgrade failure due to the map file appearing to be the wrong length. If you are using a big-endian machine (many non-Intel architectures - are big-endian) and have used pg_upgrade to upgrade + are big-endian) and have used pg_upgrade to upgrade from a pre-9.6 release, you should assume that all visibility maps are incorrect and need to be regenerated. It is sufficient to truncate each relation's visibility map - with contrib/pg_visibility's - pg_truncate_visibility_map() function. + with contrib/pg_visibility's + pg_truncate_visibility_map() function. For more information see - . + . @@ -4138,7 +4138,7 @@ Branch: REL9_5_STABLE [65d85b8f9] 2016-10-23 18:36:13 -0400 --> Don't throw serialization errors for self-conflicting insertions - in INSERT ... ON CONFLICT (Thomas Munro, Peter Geoghegan) + in INSERT ... ON CONFLICT (Thomas Munro, Peter Geoghegan) @@ -4150,7 +4150,7 @@ Branch: REL9_6_STABLE [a5f0bd77a] 2016-10-17 12:13:35 +0300 --> Fix use-after-free hazard in execution of aggregate functions - using DISTINCT (Peter Geoghegan) + using DISTINCT (Peter Geoghegan) @@ -4185,7 +4185,7 @@ Branch: REL9_6_STABLE [190765a05] 2016-10-03 16:23:02 -0400 Branch: REL9_5_STABLE [647a86e37] 2016-10-03 16:23:12 -0400 --> - Fix COPY with a column name list from a table that has + Fix COPY with a column name list from a table that has row-level security enabled (Adam Brightwell) @@ -4201,14 +4201,14 @@ Branch: REL9_3_STABLE [edb514306] 2016-10-20 17:18:09 -0400 Branch: REL9_2_STABLE [f17c26dbd] 2016-10-20 17:18:14 -0400 --> - Fix EXPLAIN to emit valid XML when + Fix EXPLAIN to emit valid XML when is on (Markus Winand) Previously the XML output-format option produced syntactically invalid - tags such as <I/O-Read-Time>. That is now - rendered as <I-O-Read-Time>. + tags such as <I/O-Read-Time>. That is now + rendered as <I-O-Read-Time>. @@ -4220,7 +4220,7 @@ Branch: REL9_6_STABLE [03f2bf70a] 2016-10-13 19:46:06 -0400 Branch: REL9_5_STABLE [3cd504254] 2016-10-13 19:45:58 -0400 --> - Fix statistics update for TRUNCATE in a prepared + Fix statistics update for TRUNCATE in a prepared transaction (Stas Kelvich) @@ -4242,16 +4242,16 @@ Branch: REL9_3_STABLE [f0bf0f233] 2016-10-13 17:05:15 -0400 Branch: REL9_2_STABLE [6f2db29ec] 2016-10-13 17:05:15 -0400 --> - Fix bugs in merging inherited CHECK constraints while + Fix bugs in merging inherited CHECK constraints while creating or altering a table (Tom Lane, Amit Langote) - Allow identical CHECK constraints to be added to a parent + Allow identical CHECK constraints to be added to a parent and child table in either order. Prevent merging of a valid - constraint from the parent table with a NOT VALID + constraint from the parent table with a NOT VALID constraint on the child. Likewise, prevent merging of a NO - INHERIT child constraint with an inherited constraint. 
+ INHERIT child constraint with an inherited constraint. @@ -4264,8 +4264,8 @@ Branch: REL9_5_STABLE [f50fa46cc] 2016-10-03 16:40:27 -0400 --> Show a sensible value - in pg_settings.unit - for min_wal_size and max_wal_size (Tom Lane) + in pg_settings.unit + for min_wal_size and max_wal_size (Tom Lane) @@ -4276,7 +4276,7 @@ Branch: master [9c4cc9e2c] 2016-10-13 00:25:48 -0400 Branch: REL9_6_STABLE [0e9e64c07] 2016-10-13 00:25:28 -0400 --> - Fix replacement of array elements in jsonb_set() + Fix replacement of array elements in jsonb_set() (Tom Lane) @@ -4364,7 +4364,7 @@ Branch: REL9_4_STABLE [6d3cbbf59] 2016-10-13 15:07:11 -0400 - This avoids possible failures during munmap() on systems + This avoids possible failures during munmap() on systems with atypical default huge page sizes. Except in crash-recovery cases, there were no ill effects other than a log message. @@ -4390,7 +4390,7 @@ Branch: REL9_1_STABLE [e84e4761f] 2016-10-07 12:53:51 +0300 --> Don't try to share SSL contexts across multiple connections - in libpq (Heikki Linnakangas) + in libpq (Heikki Linnakangas) @@ -4411,12 +4411,12 @@ Branch: REL9_2_STABLE [7397f62e7] 2016-10-10 10:35:58 -0400 Branch: REL9_1_STABLE [fb6825fe5] 2016-10-10 10:35:58 -0400 --> - Avoid corner-case memory leak in libpq (Tom Lane) + Avoid corner-case memory leak in libpq (Tom Lane) The reported problem involved leaking an error report - during PQreset(), but there might be related cases. + during PQreset(), but there might be related cases. @@ -4428,7 +4428,7 @@ Branch: REL9_6_STABLE [bac56dbe0] 2016-10-03 10:07:39 -0400 Branch: REL9_5_STABLE [0f259bd17] 2016-10-03 10:07:39 -0400 --> - In pg_upgrade, check library loadability in name order + In pg_upgrade, check library loadability in name order (Tom Lane) @@ -4446,13 +4446,13 @@ Branch: master [e8bdee277] 2016-10-02 14:31:28 -0400 Branch: REL9_6_STABLE [f40334b85] 2016-10-02 14:31:28 -0400 --> - Fix pg_upgrade to work correctly for extensions + Fix pg_upgrade to work correctly for extensions containing index access methods (Tom Lane) To allow this, the server has been extended to support ALTER - EXTENSION ADD/DROP ACCESS METHOD. That functionality should have + EXTENSION ADD/DROP ACCESS METHOD. That functionality should have been included in the original patch to support dynamic creation of access methods, but it was overlooked. 
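A minimal sketch of the new command (the extension and access-method names are hypothetical):

    -- Attach an index access method to an extension so that it is
    -- dumped and upgraded as part of that extension:
    ALTER EXTENSION my_extension ADD ACCESS METHOD my_am;

    -- And the inverse:
    ALTER EXTENSION my_extension DROP ACCESS METHOD my_am;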
@@ -4465,7 +4465,7 @@ Branch: master [f002ed2b8] 2016-09-30 20:40:56 -0400 Branch: REL9_6_STABLE [53fbeed40] 2016-09-30 20:40:27 -0400 --> - Improve error reporting in pg_upgrade's file + Improve error reporting in pg_upgrade's file copying/linking/rewriting steps (Tom Lane, Álvaro Herrera) @@ -4477,7 +4477,7 @@ Branch: master [4806f26f9] 2016-10-07 09:51:18 -0400 Branch: REL9_6_STABLE [1749332ec] 2016-10-07 09:51:28 -0400 --> - Fix pg_dump to work against pre-7.4 servers + Fix pg_dump to work against pre-7.4 servers (Amit Langote, Tom Lane) @@ -4490,8 +4490,8 @@ Branch: REL9_6_STABLE [2933ed036] 2016-10-07 14:35:41 +0300 Branch: REL9_5_STABLE [010a1b561] 2016-10-07 14:35:45 +0300 --> - Disallow specifying both @@ -4504,12 +4504,12 @@ Branch: REL9_6_STABLE [aab809664] 2016-10-06 13:34:38 +0300 Branch: REL9_5_STABLE [69da71254] 2016-10-06 13:34:32 +0300 --> - Make pg_rewind turn off synchronous_commit + Make pg_rewind turn off synchronous_commit in its session on the source server (Michael Banck, Michael Paquier) - This allows pg_rewind to work even when the source + This allows pg_rewind to work even when the source server is using synchronous replication that is not working for some reason. @@ -4525,8 +4525,8 @@ Branch: REL9_4_STABLE [da3f71a08] 2016-09-30 11:22:49 +0200 Branch: REL9_3_STABLE [4bff35cca] 2016-09-30 11:23:25 +0200 --> - In pg_xlogdump, retry opening new WAL segments when - using option (Magnus Hagander) @@ -4542,7 +4542,7 @@ Branch: master [9a109452d] 2016-10-01 16:32:54 -0400 Branch: REL9_6_STABLE [f4e787c82] 2016-10-01 16:32:55 -0400 --> - Fix contrib/pg_visibility to report the correct TID for + Fix contrib/pg_visibility to report the correct TID for a corrupt tuple that has been the subject of a rolled-back update (Tom Lane) @@ -4556,7 +4556,7 @@ Branch: REL9_6_STABLE [68fb75e10] 2016-10-01 13:35:20 -0400 --> Fix makefile dependencies so that parallel make - of PL/Python by itself will succeed reliably + of PL/Python by itself will succeed reliably (Pavel Raiskup) @@ -4594,7 +4594,7 @@ Branch: REL9_2_STABLE [a03339aef] 2016-10-19 17:57:01 -0400 Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 --> - Update time zone data files to tzdata release 2016h + Update time zone data files to tzdata release 2016h for DST law changes in Palestine and Turkey, plus historical corrections for Turkey and some regions of Russia. Switch to numeric abbreviations for some time zones in Antarctica, @@ -4607,15 +4607,15 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 or no currency among the local population. They are in process of reversing that policy in favor of using numeric UTC offsets in zones where there is no evidence of real-world use of an English - abbreviation. At least for the time being, PostgreSQL + abbreviation. At least for the time being, PostgreSQL will continue to accept such removed abbreviations for timestamp input. - But they will not be shown in the pg_timezone_names + But they will not be shown in the pg_timezone_names view nor used for output. - In this update, AMT is no longer shown as being in use to - mean Armenia Time. Therefore, we have changed the Default + In this update, AMT is no longer shown as being in use to + mean Armenia Time. Therefore, we have changed the Default abbreviation set to interpret it as Amazon Time, thus UTC-4 not UTC+4. 
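One way to check which meaning an abbreviation carries (the timestamp literal is arbitrary) is simply to feed it to the input routine:

    -- Under the updated Default abbreviation set, AMT now reads as
    -- Amazon Time, i.e. UTC-4 rather than UTC+4:
    SELECT '2016-10-25 12:00 AMT'::timestamptz;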
@@ -4637,7 +4637,7 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 Overview - Major enhancements in PostgreSQL 9.6 include: + Major enhancements in PostgreSQL 9.6 include: @@ -4671,15 +4671,15 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 - postgres_fdw now supports remote joins, sorts, - UPDATEs, and DELETEs + postgres_fdw now supports remote joins, sorts, + UPDATEs, and DELETEs Substantial performance improvements, especially in the area of - scalability on multi-CPU-socket servers + scalability on multi-CPU-socket servers @@ -4714,7 +4714,7 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 --> Improve the pg_stat_activity + linkend="pg-stat-activity-view">pg_stat_activity view's information about what a process is waiting for (Amit Kapila, Ildus Kurbangaliev) @@ -4722,10 +4722,10 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 Historically a process has only been shown as waiting if it was waiting for a heavyweight lock. Now waits for lightweight locks - and buffer pins are also shown in pg_stat_activity. + and buffer pins are also shown in pg_stat_activity. Also, the type of lock being waited for is now visible. - These changes replace the waiting column with - wait_event_type and wait_event. + These changes replace the waiting column with + wait_event_type and wait_event. @@ -4735,14 +4735,14 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 --> In to_char(), + linkend="functions-formatting-table">to_char(), do not count a minus sign (when needed) as part of the field width for time-related fields (Bruce Momjian) - For example, to_char('-4 years'::interval, 'YY') - now returns -04, rather than -4. + For example, to_char('-4 years'::interval, 'YY') + now returns -04, rather than -4. @@ -4752,18 +4752,18 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 --> Make extract() behave + linkend="functions-datetime-table">extract() behave more reasonably with infinite inputs (Vitaly Burovoy) - Historically the extract() function just returned + Historically the extract() function just returned zero given an infinite timestamp, regardless of the given field name. Make it return infinity or -infinity as appropriate when the requested field is one that is monotonically increasing (e.g, - year, epoch), or NULL when - it is not (e.g., day, hour). Also, + year, epoch), or NULL when + it is not (e.g., day, hour). Also, throw the expected error for bad field names. @@ -4774,9 +4774,9 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 This commit is also listed under libpq and psql --> - Remove PL/pgSQL's feature that suppressed the - innermost line of CONTEXT for messages emitted by - RAISE commands (Pavel Stehule) + Remove PL/pgSQL's feature that suppressed the + innermost line of CONTEXT for messages emitted by + RAISE commands (Pavel Stehule) @@ -4791,13 +4791,13 @@ This commit is also listed under libpq and psql --> Fix the default text search parser to allow leading digits - in email and host tokens (Artur Zakirov) + in email and host tokens (Artur Zakirov) In most cases this will result in few changes in the parsing of text. But if you have data where such addresses occur frequently, - it may be worth rebuilding dependent tsvector columns + it may be worth rebuilding dependent tsvector columns and indexes so that addresses of this form will be found properly by text searches. 
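A hedged sketch of such a rebuild (the table, column, index, and configuration names are hypothetical):

    -- Recompute the stored tsvector column under the new parsing
    -- rules, then rebuild the index that depends on it:
    UPDATE documents SET body_tsv = to_tsvector('english', body);
    REINDEX INDEX documents_body_tsv_idx;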
@@ -4809,8 +4809,8 @@ This commit is also listed under libpq and psql 2016-03-16 [9a206d063] Improve script generating unaccent rules --> - Extend contrib/unaccent's - standard unaccent.rules file to handle all diacritics + Extend contrib/unaccent's + standard unaccent.rules file to handle all diacritics known to Unicode, and to expand ligatures correctly (Thomas Munro, Léonard Benedetti) @@ -4819,7 +4819,7 @@ This commit is also listed under libpq and psql The previous version neglected to convert some less-common letters with diacritic marks. Also, ligatures are now expanded into separate letters. Installations that use this rules file may wish - to rebuild tsvector columns and indexes that depend on the + to rebuild tsvector columns and indexes that depend on the result. @@ -4830,15 +4830,15 @@ This commit is also listed under libpq and psql --> Remove the long-deprecated - CREATEUSER/NOCREATEUSER options from - CREATE ROLE and allied commands (Tom Lane) + CREATEUSER/NOCREATEUSER options from + CREATE ROLE and allied commands (Tom Lane) - CREATEUSER actually meant SUPERUSER, + CREATEUSER actually meant SUPERUSER, for ancient backwards-compatibility reasons. This has been a constant source of confusion for people who (reasonably) expect - it to mean CREATEROLE. It has been deprecated for + it to mean CREATEROLE. It has been deprecated for ten years now, so fix the problem by removing it. @@ -4850,13 +4850,13 @@ This commit is also listed under libpq and psql 2016-05-08 [7df974ee0] Disallow superuser names starting with 'pg_' in initdb --> - Treat role names beginning with pg_ as reserved + Treat role names beginning with pg_ as reserved (Stephen Frost) User creation of such role names is now disallowed. This prevents - conflicts with built-in roles created by initdb. + conflicts with built-in roles created by initdb. @@ -4866,16 +4866,16 @@ This commit is also listed under libpq and psql --> Change a column name in the - information_schema.routines - view from result_cast_character_set_name - to result_cast_char_set_name (Clément + information_schema.routines + view from result_cast_character_set_name + to result_cast_char_set_name (Clément Prévost) The SQL:2011 standard specifies the longer name, but that appears to be a mistake, because adjacent column names use the shorter - style, as do other information_schema views. + style, as do other information_schema views. @@ -4884,7 +4884,7 @@ This commit is also listed under libpq and psql 2015-12-08 [d5563d7df] psql: Support multiple -c and -f options, and allow mixi --> - psql's option no longer implies + psql's option no longer implies (Pavel Stehule, Catalin Iacob) @@ -4893,7 +4893,7 @@ This commit is also listed under libpq and psql Write (or its abbreviation ) explicitly to obtain the old behavior. Scripts so modified will still work with old - versions of psql. + versions of psql. 
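For reference, the switch meant here is psql's --no-psqlrc option and its abbreviation -X. A maintenance script that must not read the user's ~/.psqlrc would now spell the invocation explicitly (database name and command are illustrative):

    psql -X -c 'SELECT pg_is_in_recovery()' mydb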
@@ -4902,7 +4902,7 @@ This commit is also listed under libpq and psql 2015-07-02 [5671aaca8] Improve pg_restore's -t switch to match all types of rel --> - Improve pg_restore's option to + Improve pg_restore's option to match all types of relations, not only plain tables (Craig Ringer) @@ -4912,17 +4912,17 @@ This commit is also listed under libpq and psql 2016-02-12 [59a884e98] Change delimiter used for display of NextXID --> - Change the display format used for NextXID in - pg_controldata and related places (Joe Conway, + Change the display format used for NextXID in + pg_controldata and related places (Joe Conway, Bruce Momjian) Display epoch-and-transaction-ID values in the format - number:number. + number:number. The previous format - number/number was - confusingly similar to that used for LSNs. + number/number was + confusingly similar to that used for LSNs. @@ -4940,8 +4940,8 @@ and many others in the same vein Many of the standard extensions have been updated to allow their functions to be executed within parallel query worker processes. These changes will not take effect in - databases pg_upgrade'd from prior versions unless - you apply ALTER EXTENSION UPDATE to each such extension + databases pg_upgrade'd from prior versions unless + you apply ALTER EXTENSION UPDATE to each such extension (in each database of a cluster). @@ -5002,7 +5002,7 @@ and many others in the same vein - With 9.6, PostgreSQL introduces initial support + With 9.6, PostgreSQL introduces initial support for parallel execution of large queries. Only strictly read-only queries where the driving table is accessed via a sequential scan can be parallelized. Hash joins and nested loops can be performed @@ -5048,7 +5048,7 @@ and many others in the same vein 2015-09-02 [30bb26b5e] Allow usage of huge maintenance_work_mem for GIN build. --> - Allow GIN index builds to + Allow GIN index builds to make effective use of settings larger than 1 GB (Robert Abraham, Teodor Sigaev) @@ -5076,7 +5076,7 @@ and many others in the same vein --> Add gin_clean_pending_list() + linkend="functions-admin-index">gin_clean_pending_list() function to allow manual invocation of pending-list cleanup for a GIN index (Jeff Janes) @@ -5094,7 +5094,7 @@ and many others in the same vein --> Improve handling of dead index tuples in GiST indexes (Anastasia Lubennikova) + linkend="GiST">GiST indexes (Anastasia Lubennikova) @@ -5111,7 +5111,7 @@ and many others in the same vein --> Add an SP-GiST operator class for - type box (Alexander Lebedev) + type box (Alexander Lebedev) @@ -5137,7 +5137,7 @@ and many others in the same vein - The new approach makes better use of the CPU cache + The new approach makes better use of the CPU cache for typical cache sizes and data volumes. Where necessary, the behavior can be adjusted via the new configuration parameter replacement_sort_tuples. @@ -5162,17 +5162,17 @@ and many others in the same vein 2016-02-17 [f1f5ec1ef] Reuse abbreviated keys in ordered [set] aggregates. --> - Speed up sorting of uuid, bytea, and - char(n) fields by using abbreviated keys + Speed up sorting of uuid, bytea, and + char(n) fields by using abbreviated keys (Peter Geoghegan) Support for abbreviated keys has also been added to the non-default operator classes text_pattern_ops, - varchar_pattern_ops, and - bpchar_pattern_ops. Processing of ordered-set + linkend="indexes-opclass">text_pattern_ops, + varchar_pattern_ops, and + bpchar_pattern_ops. Processing of ordered-set aggregates can also now exploit abbreviated keys. 
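A sketch of one case that benefits (table and column names are hypothetical): building an index with one of these opclasses sorts the column's values under that opclass, and that sort can now use abbreviated keys:

    CREATE INDEX users_email_pattern_idx
        ON users (email text_pattern_ops);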
@@ -5182,8 +5182,8 @@ and many others in the same vein 2015-12-16 [b648b7034] Speed up CREATE INDEX CONCURRENTLY's TID sort. --> - Speed up CREATE INDEX CONCURRENTLY by treating - TIDs as 64-bit integers during sorting (Peter + Speed up CREATE INDEX CONCURRENTLY by treating + TIDs as 64-bit integers during sorting (Peter Geoghegan) @@ -5203,7 +5203,7 @@ and many others in the same vein 2015-09-03 [4aec49899] Assorted code review for recent ProcArrayLock patch. --> - Reduce contention for the ProcArrayLock (Amit Kapila, + Reduce contention for the ProcArrayLock (Amit Kapila, Robert Haas) @@ -5234,7 +5234,7 @@ and many others in the same vein --> Use atomic operations, rather than a spinlock, to protect an - LWLock's wait queue (Andres Freund) + LWLock's wait queue (Andres Freund) @@ -5244,7 +5244,7 @@ and many others in the same vein --> Partition the shared hash table freelist to reduce contention on - multi-CPU-socket servers (Aleksander Alekseev) + multi-CPU-socket servers (Aleksander Alekseev) @@ -5280,14 +5280,14 @@ and many others in the same vein 2016-04-04 [391159e03] Partially revert commit 3d3bf62f30200500637b24fdb7b992a9 --> - Improve ANALYZE's estimates for columns with many nulls + Improve ANALYZE's estimates for columns with many nulls (Tomas Vondra, Alex Shulgin) - Previously ANALYZE tended to underestimate the number - of non-NULL distinct values in a column with many - NULLs, and was also inaccurate in computing the + Previously ANALYZE tended to underestimate the number + of non-NULL distinct values in a column with many + NULLs, and was also inaccurate in computing the most-common values. @@ -5314,13 +5314,13 @@ and many others in the same vein - If a table t has a foreign key restriction, say - (a,b) REFERENCES r (x,y), then a WHERE - condition such as t.a = r.x AND t.b = r.y cannot - select more than one r row per t row. - The planner formerly considered these AND conditions + If a table t has a foreign key restriction, say + (a,b) REFERENCES r (x,y), then a WHERE + condition such as t.a = r.x AND t.b = r.y cannot + select more than one r row per t row. + The planner formerly considered these AND conditions to be independent and would often drastically misestimate - selectivity as a result. Now it compares the WHERE + selectivity as a result. Now it compares the WHERE conditions to applicable foreign key constraints and produces better estimates. @@ -5331,7 +5331,7 @@ and many others in the same vein - <command>VACUUM</> + <command>VACUUM</command> @@ -5361,7 +5361,7 @@ and many others in the same vein If necessary, vacuum can be forced to process all-frozen - pages using the new DISABLE_PAGE_SKIPPING option. + pages using the new DISABLE_PAGE_SKIPPING option. Normally this should never be needed, but it might help in recovering from visibility-map corruption. @@ -5372,7 +5372,7 @@ and many others in the same vein 2015-12-30 [e84290823] Avoid useless truncation attempts during VACUUM. --> - Avoid useless heap-truncation attempts during VACUUM + Avoid useless heap-truncation attempts during VACUUM (Jeff Janes, Tom Lane) @@ -5401,19 +5401,19 @@ and many others in the same vein 2016-08-07 [9ee1cf04a] Fix TOAST access failure in RETURNING queries. --> - Allow old MVCC snapshots to be invalidated after a + Allow old MVCC snapshots to be invalidated after a configurable timeout (Kevin Grittner) Normally, deleted tuples cannot be physically removed by - vacuuming until the last transaction that could see + vacuuming until the last transaction that could see them is gone. 
A transaction that stays open for a long time can thus cause considerable table bloat because space cannot be recycled. This feature allows setting a time-based limit, via the new configuration parameter , on how long an - MVCC snapshot is guaranteed to be valid. After that, + MVCC snapshot is guaranteed to be valid. After that, dead tuples are candidates for removal. A transaction using an outdated snapshot will get an error if it attempts to read a page that potentially could have contained such data. @@ -5425,12 +5425,12 @@ and many others in the same vein 2016-02-11 [d4c3a156c] Remove GROUP BY columns that are functionally dependent --> - Ignore GROUP BY columns that are + Ignore GROUP BY columns that are functionally dependent on other columns (David Rowley) - If a GROUP BY clause includes all columns of a + If a GROUP BY clause includes all columns of a non-deferred primary key, as well as other columns of the same table, those other columns are redundant and can be dropped from the grouping. This saves computation in many common cases. @@ -5443,17 +5443,17 @@ and many others in the same vein --> Allow use of an index-only - scan on a partial index when the index's WHERE + scan on a partial index when the index's WHERE clause references columns that are not indexed (Tomas Vondra, Kyotaro Horiguchi) For example, an index defined by CREATE INDEX tidx_partial - ON t(b) WHERE a > 0 can now be used for an index-only scan by - a query that specifies WHERE a > 0 and does not - otherwise use a. Previously this was disallowed - because a is not listed as an index column. + ON t(b) WHERE a > 0 can now be used for an index-only scan by + a query that specifies WHERE a > 0 and does not + otherwise use a. Previously this was disallowed + because a is not listed as an index column. @@ -5493,7 +5493,7 @@ and many others in the same vein - PostgreSQL writes data to the kernel's disk cache, + PostgreSQL writes data to the kernel's disk cache, from where it will be flushed to physical storage in due time. Many operating systems are not smart about managing this and allow large amounts of dirty data to accumulate before deciding to flush @@ -5504,11 +5504,11 @@ and many others in the same vein - On Linux, sync_file_range() is used for this purpose, + On Linux, sync_file_range() is used for this purpose, and the feature is on by default on Linux because that function has few downsides. This flushing capability is also available on other - platforms if they have msync() - or posix_fadvise(), but those interfaces have some + platforms if they have msync() + or posix_fadvise(), but those interfaces have some undesirable side-effects so the feature is disabled by default on non-Linux platforms. @@ -5533,7 +5533,7 @@ and many others in the same vein - For example, SELECT AVG(x), VARIANCE(x) FROM tab can use + For example, SELECT AVG(x), VARIANCE(x) FROM tab can use a single per-row computation for both aggregates. 
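To make the sharing concrete (tab is the hypothetical table from the text): for float8 input, both aggregates below accumulate the same running state, namely the count, sum, and sum of squares, so a single per-row transition now feeds both results:

    -- One shared transition state instead of two parallel ones:
    SELECT avg(x), variance(x) FROM tab;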
@@ -5544,7 +5544,7 @@ and many others in the same vein --> Speed up visibility tests for recently-created tuples by checking - the current transaction's snapshot, not pg_clog, to + the current transaction's snapshot, not pg_clog, to decide if the source transaction should be considered committed (Jeff Janes, Tom Lane) @@ -5570,9 +5570,9 @@ and many others in the same vein - Two-phase commit information is now written only to WAL - during PREPARE TRANSACTION, and will be read back from - WAL during COMMIT PREPARED if that happens + Two-phase commit information is now written only to WAL + during PREPARE TRANSACTION, and will be read back from + WAL during COMMIT PREPARED if that happens soon thereafter. A separate state file is created only if the pending transaction does not get committed or aborted by the time of the next checkpoint. @@ -5603,8 +5603,8 @@ and many others in the same vein 2016-02-06 [aa2387e2f] Improve speed of timestamp/time/date output functions. --> - Improve speed of the output functions for timestamp, - time, and date data types (David Rowley, + Improve speed of the output functions for timestamp, + time, and date data types (David Rowley, Andres Freund) @@ -5615,7 +5615,7 @@ and many others in the same vein --> Avoid some unnecessary cancellations of hot-standby queries - during replay of actions that take AccessExclusive + during replay of actions that take AccessExclusive locks (Jeff Janes) @@ -5649,8 +5649,8 @@ and many others in the same vein 2015-07-05 [6c82d8d1f] Further reduce overhead for passing plpgsql variables to --> - Speed up expression evaluation in PL/pgSQL by - keeping ParamListInfo entries for simple variables + Speed up expression evaluation in PL/pgSQL by + keeping ParamListInfo entries for simple variables valid at all times (Tom Lane) @@ -5660,7 +5660,7 @@ and many others in the same vein 2015-07-06 [4f33621f3] Don't set SO_SNDBUF on recent Windows versions that have --> - Avoid reducing the SO_SNDBUF setting below its default + Avoid reducing the SO_SNDBUF setting below its default on recent Windows versions (Chen Huajun) @@ -5696,8 +5696,8 @@ and many others in the same vein --> Add pg_stat_progress_vacuum - system view to provide progress reporting for VACUUM + linkend="pg-stat-progress-vacuum-view">pg_stat_progress_vacuum + system view to provide progress reporting for VACUUM operations (Amit Langote, Robert Haas, Vinayak Pokale, Rahila Syed) @@ -5708,11 +5708,11 @@ and many others in the same vein --> Add pg_control_system(), - pg_control_checkpoint(), - pg_control_recovery(), and - pg_control_init() functions to expose fields of - pg_control to SQL (Joe Conway, Michael + linkend="functions-controldata">pg_control_system(), + pg_control_checkpoint(), + pg_control_recovery(), and + pg_control_init() functions to expose fields of + pg_control to SQL (Joe Conway, Michael Paquier) @@ -5722,15 +5722,15 @@ and many others in the same vein 2016-02-17 [a5c43b886] Add new system view, pg_config --> - Add pg_config + Add pg_config system view (Joe Conway) This view exposes the same information available from - the pg_config command-line utility, + the pg_config command-line utility, namely assorted compile-time configuration information for - PostgreSQL. + PostgreSQL. @@ -5739,8 +5739,8 @@ and many others in the same vein 2015-08-10 [3f811c2d6] Add confirmed_flush column to pg_replication_slots. 
--> - Add a confirmed_flush_lsn column to the pg_replication_slots + Add a confirmed_flush_lsn column to the pg_replication_slots system view (Marko Tiikkaja) @@ -5753,9 +5753,9 @@ and many others in the same vein --> Add pg_stat_wal_receiver + linkend="pg-stat-wal-receiver-view">pg_stat_wal_receiver system view to provide information about the state of a hot-standby - server's WAL receiver process (Michael Paquier) + server's WAL receiver process (Michael Paquier) @@ -5765,7 +5765,7 @@ and many others in the same vein --> Add pg_blocking_pids() + linkend="functions-info-session-table">pg_blocking_pids() function to reliably identify which sessions block which others (Tom Lane) @@ -5774,7 +5774,7 @@ and many others in the same vein This function returns an array of the process IDs of any sessions that are blocking the session with the given process ID. Historically users have obtained such information using a self-join - on the pg_locks view. However, it is unreasonably + on the pg_locks view. However, it is unreasonably tedious to do it that way with any modicum of correctness, and the addition of parallel queries has made the old approach entirely impractical, since locks might be held or awaited by child worker @@ -5788,7 +5788,7 @@ and many others in the same vein --> Add function pg_current_xlog_flush_location() + linkend="functions-admin-backup-table">pg_current_xlog_flush_location() to expose the current transaction log flush location (Tomas Vondra) @@ -5799,8 +5799,8 @@ and many others in the same vein --> Add function pg_notification_queue_usage() - to report how full the NOTIFY queue is (Brendan Jurd) + linkend="functions-info-session-table">pg_notification_queue_usage() + to report how full the NOTIFY queue is (Brendan Jurd) @@ -5816,7 +5816,7 @@ and many others in the same vein The memory usage dump that is output to the postmaster log during an out-of-memory failure now summarizes statistics when there are a large number of memory contexts, rather than possibly generating - a very large report. There is also a grand total + a very large report. There is also a grand total summary line now. @@ -5826,7 +5826,7 @@ and many others in the same vein - <acronym>Authentication</> + <acronym>Authentication</acronym> @@ -5835,15 +5835,15 @@ and many others in the same vein 2016-04-08 [34c33a1f0] Add BSD authentication method. --> - Add a BSD authentication + Add a BSD authentication method to allow use of - the BSD Authentication service for - PostgreSQL client authentication (Marisa Emerson) + the BSD Authentication service for + PostgreSQL client authentication (Marisa Emerson) BSD Authentication is currently only available on OpenBSD. + class="osname">OpenBSD. @@ -5852,9 +5852,9 @@ and many others in the same vein 2016-04-08 [2f1d2b7a7] Set PAM_RHOST item for PAM authentication --> - When using PAM + When using PAM authentication, provide the client IP address or host name - to PAM modules via the PAM_RHOST item + to PAM modules via the PAM_RHOST item (Grzegorz Sampolski) @@ -5870,7 +5870,7 @@ and many others in the same vein All ordinarily-reachable password authentication failure cases - should now provide specific DETAIL fields in the log. + should now provide specific DETAIL fields in the log. 
@@ -5879,7 +5879,7 @@ and many others in the same vein 2015-09-06 [643beffe8] Support RADIUS passwords up to 128 characters --> - Support RADIUS passwords + Support RADIUS passwords up to 128 characters long (Marko Tiikkaja) @@ -5889,11 +5889,11 @@ and many others in the same vein 2016-04-08 [35e2e357c] Add authentication parameters compat_realm and upn_usena --> - Add new SSPI + Add new SSPI authentication parameters - compat_realm and upn_username to control - whether NetBIOS or Kerberos - realm names and user names are used during SSPI + compat_realm and upn_username to control + whether NetBIOS or Kerberos + realm names and user names are used during SSPI authentication (Christian Ullrich) @@ -5939,7 +5939,7 @@ and many others in the same vein 2015-09-08 [1aba62ec6] Allow per-tablespace effective_io_concurrency --> - Allow effective_io_concurrency to be set per-tablespace + Allow effective_io_concurrency to be set per-tablespace to support cases where different tablespaces have different I/O characteristics (Julien Rouhaud) @@ -5951,7 +5951,7 @@ and many others in the same vein 2015-09-07 [b1e1862a1] Coordinate log_line_prefix options 'm' and 'n' to share --> - Add option %n to + Add option %n to print the current time in Unix epoch form, with milliseconds (Tomas Vondra, Jeff Davis) @@ -5966,7 +5966,7 @@ and many others in the same vein Add and configuration parameters to provide more control over the message format when logging to - syslog (Peter Eisentraut) + syslog (Peter Eisentraut) @@ -5975,16 +5975,16 @@ and many others in the same vein 2016-03-18 [b555ed810] Merge wal_level "archive" and "hot_standby" into new nam --> - Merge the archive and hot_standby values + Merge the archive and hot_standby values of the configuration parameter - into a single new value replica (Peter Eisentraut) + into a single new value replica (Peter Eisentraut) Making a distinction between these settings is no longer useful, and merging them is a step towards a planned future simplification of replication setup. The old names are still accepted but are - converted to replica internally. + converted to replica internally. @@ -5993,15 +5993,15 @@ and many others in the same vein 2016-02-02 [7d17e683f] Add support for systemd service notifications --> - Add configure option - This allows the use of systemd service units of - type notify, which greatly simplifies the management - of PostgreSQL under systemd. + This allows the use of systemd service units of + type notify, which greatly simplifies the management + of PostgreSQL under systemd. @@ -6010,17 +6010,17 @@ and many others in the same vein 2016-03-19 [9a83564c5] Allow SSL server key file to have group read access if o --> - Allow the server's SSL key file to have group read - access if it is owned by root (Christoph Berg) + Allow the server's SSL key file to have group read + access if it is owned by root (Christoph Berg) Formerly, we insisted the key file be owned by the - user running the PostgreSQL server, but + user running the PostgreSQL server, but that is inconvenient on some systems (such as Debian) that are configured to manage + class="osname">Debian) that are configured to manage certificates centrally. Therefore, allow the case where the key - file is owned by root and has group read access. + file is owned by root and has group read access. It is up to the operating system administrator to ensure that the group does not include any untrusted users. 
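A sketch of the newly permitted layout, assuming a Debian-style centrally managed certificate store (the paths and group name are illustrative):

    # Key owned by root, readable only by root and the postgres group:
    chown root:postgres /etc/ssl/private/server.key
    chmod 640 /etc/ssl/private/server.key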
@@ -6085,8 +6085,8 @@ XXX this is pending backpatch, may need to remove 2016-04-26 [c6ff84b06] Emit invalidations to standby for transactions without x --> - Ensure that invalidation messages are recorded in WAL - even when issued by a transaction that has no XID + Ensure that invalidation messages are recorded in WAL + even when issued by a transaction that has no XID assigned (Andres Freund) @@ -6102,7 +6102,7 @@ XXX this is pending backpatch, may need to remove 2016-04-28 [e2c79e14d] Prevent multiple cleanup process for pending list in GIN --> - Prevent multiple processes from trying to clean a GIN + Prevent multiple processes from trying to clean a GIN index's pending list concurrently (Teodor Sigaev, Jeff Janes) @@ -6147,13 +6147,13 @@ XXX this is pending backpatch, may need to remove 2016-03-29 [314cbfc5d] Add new replication mode synchronous_commit = 'remote_ap --> - Add new setting remote_apply for configuration + Add new setting remote_apply for configuration parameter (Thomas Munro) In this mode, the master waits for the transaction to be - applied on the standby server, not just written + applied on the standby server, not just written to disk. That means that you can count on a transaction started on the standby to see all commits previously acknowledged by the master. @@ -6168,14 +6168,14 @@ XXX this is pending backpatch, may need to remove Add a feature to the replication protocol, and a corresponding option to pg_create_physical_replication_slot(), - to allow reserving WAL immediately when creating a + linkend="functions-replication-table">pg_create_physical_replication_slot(), + to allow reserving WAL immediately when creating a replication slot (Gurjeet Singh, Michael Paquier) This allows the creation of a replication slot to guarantee - that all the WAL needed for a base backup will be + that all the WAL needed for a base backup will be available. @@ -6186,13 +6186,13 @@ XXX this is pending backpatch, may need to remove --> Add a option to - pg_basebackup + pg_basebackup (Peter Eisentraut) - This lets pg_basebackup use a replication - slot defined for WAL streaming. After the base + This lets pg_basebackup use a replication + slot defined for WAL streaming. After the base backup completes, selecting the same slot for regular streaming replication allows seamless startup of the new standby server. @@ -6205,8 +6205,8 @@ XXX this is pending backpatch, may need to remove --> Extend pg_start_backup() - and pg_stop_backup() to support non-exclusive backups + linkend="functions-admin-backup-table">pg_start_backup() + and pg_stop_backup() to support non-exclusive backups (Magnus Hagander) @@ -6226,14 +6226,14 @@ XXX this is pending backpatch, may need to remove --> Allow functions that return sets of tuples to return simple - NULLs (Andrew Gierth, Tom Lane) + NULLs (Andrew Gierth, Tom Lane) - In the context of SELECT FROM function(...), a function + In the context of SELECT FROM function(...), a function that returned a set of composite values was previously not allowed - to return a plain NULL value as part of the set. - Now that is allowed and interpreted as a row of NULLs. + to return a plain NULL value as part of the set. + Now that is allowed and interpreted as a row of NULLs. This avoids corner-case errors with, for example, unnesting an array of composite values. 
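A minimal sketch of the formerly failing case (the composite type is hypothetical):

    CREATE TYPE complex AS (r float8, i float8);

    -- The NULL array element now comes back as a row of NULLs
    -- instead of raising an error:
    SELECT * FROM unnest(ARRAY[ROW(1, 2)::complex, NULL]);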
@@ -6245,14 +6245,14 @@ XXX this is pending backpatch, may need to remove --> Fully support array subscripts and field selections in the - target column list of an INSERT with multiple - VALUES rows (Tom Lane) + target column list of an INSERT with multiple + VALUES rows (Tom Lane) Previously, such cases failed if the same target column was mentioned more than once, e.g., INSERT INTO tab (x[1], - x[2]) VALUES (...). + x[2]) VALUES (...). @@ -6262,16 +6262,16 @@ XXX this is pending backpatch, may need to remove 2016-03-25 [d543170f2] Don't split up SRFs when choosing to postpone SELECT out --> - When appropriate, postpone evaluation of SELECT - output expressions until after an ORDER BY sort + When appropriate, postpone evaluation of SELECT + output expressions until after an ORDER BY sort (Konstantin Knizhnik) This change ensures that volatile or expensive functions in the output list are executed in the order suggested by ORDER - BY, and that they are not evaluated more times than required - when there is a LIMIT clause. Previously, these + BY, and that they are not evaluated more times than required + when there is a LIMIT clause. Previously, these properties held if the ordering was performed by an index scan or pre-merge-join sort, but not if it was performed by a top-level sort. @@ -6289,9 +6289,9 @@ XXX this is pending backpatch, may need to remove - This change allows command tags, e.g. SELECT, to + This change allows command tags, e.g. SELECT, to correctly report tuple counts larger than 4 billion. This also - applies to PL/pgSQL's GET DIAGNOSTICS ... ROW_COUNT + applies to PL/pgSQL's GET DIAGNOSTICS ... ROW_COUNT command. @@ -6302,17 +6302,17 @@ XXX this is pending backpatch, may need to remove --> Avoid doing encoding conversions by converting through the - MULE_INTERNAL encoding (Tom Lane) + MULE_INTERNAL encoding (Tom Lane) Previously, many conversions for Cyrillic and Central European single-byte encodings were done by converting to a - related MULE_INTERNAL coding scheme and then to the + related MULE_INTERNAL coding scheme and then to the destination encoding. Aside from being inefficient, this meant that when the conversion encountered an untranslatable character, the error message would confusingly complain about failure to - convert to or from MULE_INTERNAL, rather than the + convert to or from MULE_INTERNAL, rather than the user-visible encoding. @@ -6331,7 +6331,7 @@ XXX this is pending backpatch, may need to remove Previously, the foreign join pushdown infrastructure left the question of security entirely up to individual foreign data - wrappers, but that made it too easy for an FDW to + wrappers, but that made it too easy for an FDW to inadvertently create subtle security holes. So, make it the core code's job to determine which role ID will access each table, and do not attempt join pushdown unless the role is the same for @@ -6353,13 +6353,13 @@ XXX this is pending backpatch, may need to remove 2015-11-27 [92e38182d] COPY (INSERT/UPDATE/DELETE .. RETURNING ..) --> - Allow COPY to copy the output of an - INSERT/UPDATE/DELETE - ... RETURNING query (Marko Tiikkaja) + Allow COPY to copy the output of an + INSERT/UPDATE/DELETE + ... RETURNING query (Marko Tiikkaja) - Previously, an intermediate CTE had to be written to + Previously, an intermediate CTE had to be written to get this result. @@ -6369,16 +6369,16 @@ XXX this is pending backpatch, may need to remove 2016-04-05 [f2fcad27d] Support ALTER THING .. 
DEPENDS ON EXTENSION --> - Introduce ALTER object DEPENDS ON + Introduce ALTER object DEPENDS ON EXTENSION (Abhijit Menon-Sen) This command allows a database object to be marked as depending on an extension, so that it will be dropped automatically if - the extension is dropped (without needing CASCADE). + the extension is dropped (without needing CASCADE). However, the object is not part of the extension, and thus will - be dumped separately by pg_dump. + be dumped separately by pg_dump. @@ -6387,7 +6387,7 @@ XXX this is pending backpatch, may need to remove 2015-11-19 [bc4996e61] Make ALTER .. SET SCHEMA do nothing, instead of throwing --> - Make ALTER object SET SCHEMA do nothing + Make ALTER object SET SCHEMA do nothing when the object is already in the requested schema, rather than throwing an error as it historically has for most object types (Marti Raudsepp) @@ -6411,8 +6411,8 @@ XXX this is pending backpatch, may need to remove 2015-07-29 [2cd40adb8] Add IF NOT EXISTS processing to ALTER TABLE ADD COLUMN --> - Add an @@ -6422,7 +6422,7 @@ XXX this is pending backpatch, may need to remove 2016-03-10 [fcb4bfddb] Reduce lock level for altering fillfactor --> - Reduce the lock strength needed by ALTER TABLE + Reduce the lock strength needed by ALTER TABLE when setting fillfactor and autovacuum-related relation options (Fabrízio de Royes Mello, Simon Riggs) @@ -6434,7 +6434,7 @@ XXX this is pending backpatch, may need to remove --> Introduce CREATE - ACCESS METHOD to allow extensions to create index access + ACCESS METHOD to allow extensions to create index access methods (Alexander Korotkov, Petr Jelínek) @@ -6444,7 +6444,7 @@ XXX this is pending backpatch, may need to remove 2015-10-03 [b67aaf21e] Add CASCADE support for CREATE EXTENSION. --> - Add a CASCADE option to CREATE + Add a CASCADE option to CREATE EXTENSION to automatically create any extensions the requested one depends on (Petr Jelínek) @@ -6455,7 +6455,7 @@ XXX this is pending backpatch, may need to remove 2015-10-05 [b943f502b] Have CREATE TABLE LIKE add OID column if any LIKEd table --> - Make CREATE TABLE ... LIKE include an OID + Make CREATE TABLE ... LIKE include an OID column if any source table has one (Bruce Momjian) @@ -6465,14 +6465,14 @@ XXX this is pending backpatch, may need to remove 2015-12-16 [f27a6b15e] Mark CHECK constraints declared NOT VALID valid if creat --> - If a CHECK constraint is declared NOT VALID + If a CHECK constraint is declared NOT VALID in a table creation command, automatically mark it as valid (Amit Langote, Amul Sul) This is safe because the table has no existing rows. This matches - the longstanding behavior of FOREIGN KEY constraints. + the longstanding behavior of FOREIGN KEY constraints. @@ -6481,16 +6481,16 @@ XXX this is pending backpatch, may need to remove 2016-03-25 [c94959d41] Fix DROP OPERATOR to reset oprcom/oprnegate links to the --> - Fix DROP OPERATOR to clear - pg_operator.oprcom and - pg_operator.oprnegate links to + Fix DROP OPERATOR to clear + pg_operator.oprcom and + pg_operator.oprnegate links to the dropped operator (Roma Sokolov) Formerly such links were left as-is, which could pose a problem in the somewhat unlikely event that the dropped operator's - OID was reused for another operator. + OID was reused for another operator. @@ -6499,13 +6499,13 @@ XXX this is pending backpatch, may need to remove 2016-07-11 [4d042999f] Print a given subplan only once in EXPLAIN. 
--> - Do not show the same subplan twice in EXPLAIN output + Do not show the same subplan twice in EXPLAIN output (Tom Lane) In certain cases, typically involving SubPlan nodes in index - conditions, EXPLAIN would print data for the same + conditions, EXPLAIN would print data for the same subplan twice. @@ -6516,7 +6516,7 @@ XXX this is pending backpatch, may need to remove --> Disallow creation of indexes on system columns, except for - OID columns (David Rowley) + OID columns (David Rowley) @@ -6550,8 +6550,8 @@ XXX this is pending backpatch, may need to remove checks that would throw an error if they were called by a non-superuser. This forced the use of superuser roles for some relatively pedestrian tasks. The hard-wired error checks - are now gone in favor of making initdb revoke the - default public EXECUTE privilege on these functions. + are now gone in favor of making initdb revoke the + default public EXECUTE privilege on these functions. This allows installations to choose to grant usage of such functions to trusted roles that do not need all superuser privileges. @@ -6569,7 +6569,7 @@ XXX this is pending backpatch, may need to remove - Currently the only such role is pg_signal_backend, + Currently the only such role is pg_signal_backend, but more are expected to be added in future. @@ -6591,19 +6591,19 @@ XXX this is pending backpatch, may need to remove 2016-06-27 [6734a1cac] Change predecence of phrase operator. --> - Improve full-text search to support + Improve full-text search to support searching for phrases, that is, lexemes appearing adjacent to each other in a specific order, or with a specified distance between them (Teodor Sigaev, Oleg Bartunov, Dmitry Ivanov) - A phrase-search query can be specified in tsquery - input using the new operators <-> and - <N>. The former means + A phrase-search query can be specified in tsquery + input using the new operators <-> and + <N>. The former means that the lexemes before and after it must appear adjacent to each other in that order. The latter means they must be exactly - N lexemes apart. + N lexemes apart. @@ -6613,7 +6613,7 @@ XXX this is pending backpatch, may need to remove --> Allow omitting one or both boundaries in an array slice specifier, - e.g. array_col[3:] (Yury Zhuravlev) + e.g. array_col[3:] (Yury Zhuravlev) @@ -6634,19 +6634,19 @@ XXX this is pending backpatch, may need to remove This change prevents unexpected out-of-range errors for - timestamp with time zone values very close to the - implementation limits. Previously, the same value might - be accepted or not depending on the timezone setting, + timestamp with time zone values very close to the + implementation limits. Previously, the same value might + be accepted or not depending on the timezone setting, meaning that a dump and reload could fail on a value that had been accepted when presented. Now the limits are enforced according - to the equivalent UTC time, not local time, so as to - be independent of timezone. + to the equivalent UTC time, not local time, so as to + be independent of timezone. - Also, PostgreSQL is now more careful to detect + Also, PostgreSQL is now more careful to detect overflow in operations that compute new date or timestamp values, - such as date + integer. + such as date + integer. 
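An illustrative probe at the documented upper limit of the date type (5874897 AD, per the data type documentation):

    -- Stepping past the last representable date now fails with an
    -- out-of-range error rather than silently wrapping:
    SELECT DATE '5874897-12-31' + 1;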
@@ -6655,14 +6655,14 @@ XXX this is pending backpatch, may need to remove 2016-03-30 [50861cd68] Improve portability of I/O behavior for the geometric ty --> - For geometric data types, make sure infinity and - NaN component values are treated consistently during + For geometric data types, make sure infinity and + NaN component values are treated consistently during input and output (Tom Lane) Such values will now always print the same as they would in - a simple float8 column, and be accepted the same way + a simple float8 column, and be accepted the same way on input. Previously the behavior was platform-dependent. @@ -6675,8 +6675,8 @@ XXX this is pending backpatch, may need to remove --> Upgrade - the ispell - dictionary type to handle modern Hunspell files and + the ispell + dictionary type to handle modern Hunspell files and support more languages (Artur Zakirov) @@ -6687,7 +6687,7 @@ XXX this is pending backpatch, may need to remove --> Implement look-behind constraints - in regular expressions + in regular expressions (Tom Lane) @@ -6706,12 +6706,12 @@ XXX this is pending backpatch, may need to remove --> In regular expressions, if an apparent three-digit octal escape - \nnn would exceed 377 (255 decimal), + \nnn would exceed 377 (255 decimal), assume it is a two-digit octal escape instead (Tom Lane) - This makes the behavior match current Tcl releases. + This makes the behavior match current Tcl releases. @@ -6720,8 +6720,8 @@ XXX this is pending backpatch, may need to remove 2015-11-07 [c5e86ea93] Add "xid <> xid" and "xid <> int4" operators. --> - Add transaction ID operators xid <> - xid and xid <> int4, + Add transaction ID operators xid <> + xid and xid <> int4, for consistency with the corresponding equality operators (Michael Paquier) @@ -6742,9 +6742,9 @@ XXX this is pending backpatch, may need to remove --> Add jsonb_insert() - function to insert a new element into a jsonb array, - or a not-previously-existing key into a jsonb object + linkend="functions-json-processing-table">jsonb_insert() + function to insert a new element into a jsonb array, + or a not-previously-existing key into a jsonb object (Dmitry Dolgov) @@ -6755,9 +6755,9 @@ XXX this is pending backpatch, may need to remove 2016-05-05 [18a02ad2a] Fix corner-case loss of precision in numeric pow() calcu --> - Improve the accuracy of the ln(), log(), - exp(), and pow() functions for type - numeric (Dean Rasheed) + Improve the accuracy of the ln(), log(), + exp(), and pow() functions for type + numeric (Dean Rasheed) @@ -6767,8 +6767,8 @@ XXX this is pending backpatch, may need to remove --> Add a scale(numeric) - function to extract the display scale of a numeric value + linkend="functions-math-func-table">scale(numeric) + function to extract the display scale of a numeric value (Marko Tiikkaja) @@ -6783,8 +6783,8 @@ XXX this is pending backpatch, may need to remove For example, sind() - measures its argument in degrees, whereas sin() + linkend="functions-math-trig-table">sind() + measures its argument in degrees, whereas sin() measures in radians. These functions go to some lengths to deliver exact results for values where an exact result can be expected, for instance sind(30) = 0.5. 
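For instance, all of the following now come out exact:

    SELECT sind(30) AS half,        -- 0.5
           cosd(60) AS also_half,   -- 0.5
           tand(45) AS one;         -- 1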
@@ -6796,15 +6796,15 @@ XXX this is pending backpatch, may need to remove 2016-01-22 [fd5200c3d] Improve cross-platform consistency of Inf/NaN handling i --> - Ensure that trigonometric functions handle infinity - and NaN inputs per the POSIX standard + Ensure that trigonometric functions handle infinity + and NaN inputs per the POSIX standard (Dean Rasheed) - The POSIX standard says that these functions should - return NaN for NaN input, and should throw - an error for out-of-range inputs including infinity. + The POSIX standard says that these functions should + return NaN for NaN input, and should throw + an error for out-of-range inputs including infinity. Previously our behavior varied across platforms. @@ -6815,9 +6815,9 @@ XXX this is pending backpatch, may need to remove --> Make to_timestamp(float8) - convert float infinity to - timestamp infinity (Vitaly Burovoy) + linkend="functions-datetime-table">to_timestamp(float8) + convert float infinity to + timestamp infinity (Vitaly Burovoy) @@ -6831,15 +6831,15 @@ XXX this is pending backpatch, may need to remove 2016-05-05 [0b9a23443] Rename tsvector delete() to ts_delete(), and filter() to --> - Add new functions for tsvector data (Stas Kelvich) + Add new functions for tsvector data (Stas Kelvich) The new functions are ts_delete(), - ts_filter(), unnest(), - tsvector_to_array(), array_to_tsvector(), - and a variant of setweight() that sets the weight + linkend="textsearch-functions-table">ts_delete(), + ts_filter(), unnest(), + tsvector_to_array(), array_to_tsvector(), + and a variant of setweight() that sets the weight only for specified lexeme(s). @@ -6849,11 +6849,11 @@ XXX this is pending backpatch, may need to remove 2015-09-17 [9acb9007d] Fix oversight in tsearch type check --> - Allow ts_stat() - and tsvector_update_trigger() + Allow ts_stat() + and tsvector_update_trigger() to operate on values that are of types binary-compatible with the expected argument type, not just exactly that type; for example - allow citext where text is expected (Teodor + allow citext where text is expected (Teodor Sigaev) @@ -6864,14 +6864,14 @@ XXX this is pending backpatch, may need to remove --> Add variadic functions num_nulls() - and num_nonnulls() that count the number of their + linkend="functions-comparison-func-table">num_nulls() + and num_nonnulls() that count the number of their arguments that are null or non-null (Marko Tiikkaja) - An example usage is CHECK(num_nonnulls(a,b,c) = 1) - which asserts that exactly one of a,b,c is not NULL. + An example usage is CHECK(num_nonnulls(a,b,c) = 1) + which asserts that exactly one of a,b,c is not NULL. These functions can also be used to count the number of null or nonnull elements in an array. @@ -6883,8 +6883,8 @@ XXX this is pending backpatch, may need to remove --> Add function parse_ident() - to split a qualified, possibly quoted SQL identifier + linkend="functions-string-other">parse_ident() + to split a qualified, possibly quoted SQL identifier into its parts (Pavel Stehule) @@ -6895,15 +6895,15 @@ XXX this is pending backpatch, may need to remove --> In to_number(), - interpret a V format code as dividing by 10 to the - power of the number of digits following V (Bruce + linkend="functions-formatting-table">to_number(), + interpret a V format code as dividing by 10 to the + power of the number of digits following V (Bruce Momjian) This makes it operate in an inverse fashion to - to_char(). + to_char(). 
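A short sketch of the inverse pair:

    -- 'V99' makes to_char() multiply by 10^2 before formatting ...
    SELECT to_char(12.34, '999V99');     -- renders the digits 1234
    -- ... and to_number() now performs the matching division:
    SELECT to_number('1234', '999V99');  -- yields 12.34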
@@ -6913,8 +6913,8 @@ XXX this is pending backpatch, may need to remove --> Make the to_reg*() - functions accept type text not cstring + linkend="functions-info-catalog-table">to_reg*() + functions accept type text not cstring (Petr Korobeinikov) @@ -6930,16 +6930,16 @@ XXX this is pending backpatch, may need to remove --> Add pg_size_bytes() + linkend="functions-admin-dbsize">pg_size_bytes() function to convert human-readable size strings to numbers (Pavel Stehule, Vitaly Burovoy, Dean Rasheed) This function converts strings like those produced by - pg_size_pretty() into bytes. An example + pg_size_pretty() into bytes. An example usage is SELECT oid::regclass FROM pg_class WHERE - pg_total_relation_size(oid) > pg_size_bytes('10 GB'). + pg_total_relation_size(oid) > pg_size_bytes('10 GB'). @@ -6949,7 +6949,7 @@ XXX this is pending backpatch, may need to remove --> In pg_size_pretty(), + linkend="functions-admin-dbsize">pg_size_pretty(), format negative numbers similarly to positive ones (Adrian Vondendriesch) @@ -6965,14 +6965,14 @@ XXX this is pending backpatch, may need to remove 2015-07-02 [10fb48d66] Add an optional missing_ok argument to SQL function curr --> - Add an optional missing_ok argument to the current_setting() + Add an optional missing_ok argument to the current_setting() function (David Christensen) This allows avoiding an error for an unrecognized parameter - name, instead returning a NULL. + name, instead returning a NULL. @@ -6984,16 +6984,16 @@ XXX this is pending backpatch, may need to remove --> Change various catalog-inspection functions to return - NULL for invalid input (Michael Paquier) + NULL for invalid input (Michael Paquier) pg_get_viewdef() - now returns NULL if given an invalid view OID, - and several similar functions likewise return NULL for + linkend="functions-info-catalog-table">pg_get_viewdef() + now returns NULL if given an invalid view OID, + and several similar functions likewise return NULL for bad input. Previously, such cases usually led to cache - lookup failed errors, which are not meant to occur in + lookup failed errors, which are not meant to occur in user-facing cases. @@ -7004,13 +7004,13 @@ XXX this is pending backpatch, may need to remove --> Fix pg_replication_origin_xact_reset() + linkend="pg-replication-origin-xact-reset">pg_replication_origin_xact_reset() to not have any arguments (Fujii Masao) The documentation said that it has no arguments, and the C code did - not expect any arguments, but the entry in pg_proc + not expect any arguments, but the entry in pg_proc mistakenly specified two arguments. 
@@ -7030,7 +7030,7 @@ XXX this is pending backpatch, may need to remove --> In PL/pgSQL, detect mismatched - CONTINUE and EXIT statements while + CONTINUE and EXIT statements while compiling a function, rather than at execution time (Jim Nasby) @@ -7043,7 +7043,7 @@ XXX this is pending backpatch, may need to remove 2016-07-02 [3a4a33ad4] PL/Python: Report argument parsing errors using exceptio --> - Extend PL/Python's error-reporting and + Extend PL/Python's error-reporting and message-reporting functions to allow specifying additional message fields besides the primary error message (Pavel Stehule) @@ -7055,7 +7055,7 @@ XXX this is pending backpatch, may need to remove --> Allow PL/Python functions to call themselves recursively - via SPI, and fix the behavior when multiple + via SPI, and fix the behavior when multiple set-returning PL/Python functions are called within one query (Alexey Grishchenko, Tom Lane) @@ -7077,14 +7077,14 @@ XXX this is pending backpatch, may need to remove 2016-03-02 [e2609323e] Make PL/Tcl require Tcl 8.4 or later. --> - Modernize PL/Tcl to use Tcl's object - APIs instead of simple strings (Jim Nasby, Karl + Modernize PL/Tcl to use Tcl's object + APIs instead of simple strings (Jim Nasby, Karl Lehenbauer) This can improve performance substantially in some cases. - Note that PL/Tcl now requires Tcl 8.4 or later. + Note that PL/Tcl now requires Tcl 8.4 or later. @@ -7094,8 +7094,8 @@ XXX this is pending backpatch, may need to remove 2016-03-25 [cd37bb785] Improve PL/Tcl errorCode facility by providing decoded n --> - In PL/Tcl, make database-reported errors return - additional information in Tcl's errorCode global + In PL/Tcl, make database-reported errors return + additional information in Tcl's errorCode global variable (Jim Nasby, Tom Lane) @@ -7110,15 +7110,15 @@ XXX this is pending backpatch, may need to remove 2016-03-02 [c8c7c93de] Fix PL/Tcl's encoding conversion logic. --> - Fix PL/Tcl to perform encoding conversion between - the database encoding and UTF-8, which is what Tcl + Fix PL/Tcl to perform encoding conversion between + the database encoding and UTF-8, which is what Tcl expects (Tom Lane) Previously, strings were passed through without conversion, - leading to misbehavior with non-ASCII characters when - the database encoding was not UTF-8. + leading to misbehavior with non-ASCII characters when + the database encoding was not UTF-8. @@ -7137,7 +7137,7 @@ XXX this is pending backpatch, may need to remove --> Add a nonlocalized version of - the severity field in + the severity field in error and notice messages (Tom Lane) @@ -7154,17 +7154,17 @@ XXX this is pending backpatch, may need to remove This commit is also listed under psql and PL/pgSQL --> - Introduce a feature in libpq whereby the - CONTEXT field of messages can be suppressed, either + Introduce a feature in libpq whereby the + CONTEXT field of messages can be suppressed, either always or only for non-error messages (Pavel Stehule) The default behavior of PQerrorMessage() - is now to print CONTEXT + linkend="libpq-pqerrormessage">PQerrorMessage() + is now to print CONTEXT only for errors. The new function PQsetErrorContextVisibility() + linkend="libpq-pqseterrorcontextvisibility">PQsetErrorContextVisibility() can be used to adjust this. 
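To illustrate the PL/pgSQL change above, a sketch of a function that is now rejected when it is compiled (with the default check_function_bodies = on, that means at CREATE time) instead of when the offending statement is reached; the function name is hypothetical:

    CREATE FUNCTION bad_continue() RETURNS void AS $$
    BEGIN
      CONTINUE;  -- no enclosing loop: now reported at compile time, not first execution
    END;
    $$ LANGUAGE plpgsql;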
@@ -7174,14 +7174,14 @@ This commit is also listed under psql and PL/pgSQL
2016-04-03 [e3161b231] Add libpq support for recreating an error message with d
-->
- Add support in libpq for regenerating an error
+ Add support in libpq for regenerating an error
message with a different verbosity level (Alex Shulgin)

This is done with the new function PQresultVerboseErrorMessage().
- This supports psql's new \errverbose
+ linkend="libpq-pqresultverboseerrormessage">PQresultVerboseErrorMessage().
+ This supports psql's new \errverbose
feature, and may be useful for other clients as well.

@@ -7191,13 +7191,13 @@ This commit is also listed under psql and PL/pgSQL
2015-11-27 [40cb21f70] Improve PQhost() to return useful data for default Unix-
-->
- Improve libpq's PQhost() function to return
+ Improve libpq's PQhost() function to return
useful data for default Unix-socket connections (Tom Lane)

- Previously it would return NULL if no explicit host
+ Previously it would return NULL if no explicit host
specification had been given; now it returns the default socket
directory path.

@@ -7208,7 +7208,7 @@ This commit is also listed under psql and PL/pgSQL
2016-02-16 [fc1ae7d2e] Change ecpg lexer to accept comments with line breaks in
-->
- Fix ecpg's lexer to handle line breaks within
+ Fix ecpg's lexer to handle line breaks within
comments starting on preprocessor directive lines (Michael
Meskes)

@@ -7227,9 +7227,9 @@ This commit is also listed under psql and PL/pgSQL
2015-09-14 [d02426029] Check existency of table/schema for -t/-n option (pg_dum
-->
- Add a --strict-names option to pg_dump and pg_restore, which raises
an error if a -t or -n pattern matches no objects (Pavel Stehule)

@@ -7249,7 +7249,7 @@ This commit is also listed under psql and PL/pgSQL
2016-05-06 [e1b120a8c] Only issue LOCK TABLE commands when necessary
-->
- In pg_dump, dump locally-made changes of privilege
+ In pg_dump, dump locally-made changes of privilege
assignments for system objects (Stephen Frost)

While it has always been possible for a superuser to change
the privilege assignments for built-in or extension-created
objects, such changes were formerly lost in a dump and reload.
- Now, pg_dump recognizes and dumps such changes.
+ Now, pg_dump recognizes and dumps such changes.
(This works only when dumping from a 9.6 or later server, however.)

@@ -7267,7 +7267,7 @@ This commit is also listed under psql and PL/pgSQL
2016-09-08 [31eb14504] Allow pg_dump to dump non-extension members of an extens
-->
- Allow pg_dump to dump non-extension-owned objects
+ Allow pg_dump to dump non-extension-owned objects
that are within an extension-owned schema (Martín Marqués)

@@ -7283,7 +7283,7 @@ This commit is also listed under psql and PL/pgSQL
2016-04-06 [3b3fcc4ee] pg_dump: Add table qualifications to some tags
-->
- In pg_dump output, include the table name in object
+ In pg_dump output, include the table name in object
tags for object types that are only uniquely named per-table
(for example, triggers) (Peter Eisentraut)

@@ -7308,7 +7308,7 @@ this commit is also listed in the compatibility section
The specified operations are carried out in the order in which the
- options are given, and then psql terminates.
+ options are given, and then psql terminates.
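As a sketch of the kind of locally-made privilege change that such dumps now preserve; the role name is a placeholder:

    -- an ACL change on a built-in function, now reproduced by pg_dump from 9.6+ servers
    GRANT EXECUTE ON FUNCTION pg_ls_dir(text) TO ops_role;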
@@ -7317,7 +7317,7 @@ this commit is also listed in the compatibility section 2016-04-08 [c09b18f21] Support \crosstabview in psql --> - Add a \crosstabview command that prints the results of + Add a \crosstabview command that prints the results of a query in a cross-tabulated display (Daniel Vérité) @@ -7333,13 +7333,13 @@ this commit is also listed in the compatibility section 2016-04-03 [3cc38ca7d] Add psql \errverbose command to see last server error at --> - Add an \errverbose command that shows the last server + Add an \errverbose command that shows the last server error at full verbosity (Alex Shulgin) This is useful after getting an unexpected error — you - no longer need to adjust the VERBOSITY variable and + no longer need to adjust the VERBOSITY variable and recreate the failure in order to see error fields that are not shown by default. @@ -7351,13 +7351,13 @@ this commit is also listed in the compatibility section 2016-05-06 [9b66aa006] Fix psql's \ev and \sv commands so that they handle view --> - Add \ev and \sv commands for editing and + Add \ev and \sv commands for editing and showing view definitions (Petr Korobeinikov) - These are parallel to the existing \ef and - \sf commands for functions. + These are parallel to the existing \ef and + \sf commands for functions. @@ -7366,7 +7366,7 @@ this commit is also listed in the compatibility section 2016-04-04 [2bbe9112a] Add a \gexec command to psql for evaluation of computed --> - Add a \gexec command that executes a query and + Add a \gexec command that executes a query and re-submits the result(s) as new queries (Corey Huinker) @@ -7376,9 +7376,9 @@ this commit is also listed in the compatibility section 2015-10-05 [2145a7660] psql: allow \pset C in setting the title, matches \C --> - Allow \pset C string + Allow \pset C string to set the table title, for consistency with \C - string (Bruce Momjian) + string (Bruce Momjian) @@ -7387,7 +7387,7 @@ this commit is also listed in the compatibility section 2016-03-11 [69ab7b9d6] psql: Don't automatically use expanded format when there --> - In \pset expanded auto mode, do not use expanded + In \pset expanded auto mode, do not use expanded format for query results with only one column (Andreas Karlsson, Robert Haas) @@ -7399,16 +7399,16 @@ this commit is also listed in the compatibility section 2016-06-15 [9901d8ac2] Use strftime("%c") to format timestamps in psql's \watch --> - Improve the headers output by the \watch command + Improve the headers output by the \watch command (Michael Paquier, Tom Lane) - Include the \pset title string if one has + Include the \pset title string if one has been set, and shorten the prefabricated part of the - header to be timestamp (every - Ns). Also, the timestamp format now - obeys psql's locale environment. + header to be timestamp (every + Ns). Also, the timestamp format now + obeys psql's locale environment. 
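A minimal \gexec sketch: the query generates one ANALYZE statement per table in schema public, and \gexec then runs each result row as a new query:

    SELECT format('ANALYZE %I.%I', schemaname, tablename)
      FROM pg_tables
     WHERE schemaname = 'public' \gexec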
@@ -7456,7 +7456,7 @@ this commit is also listed in the compatibility section
2015-07-07 [275f05c99] Add psql PROMPT variable showing the pid of the connecte
-->
- Add a PROMPT option %p to insert the
+ Add a PROMPT option %p to insert the
process ID of the connected backend (Julien Rouhaud)

@@ -7467,13 +7467,13 @@ this commit is also listed in the compatibility section
This commit is also listed under libpq and PL/pgSQL
-->
- Introduce a feature whereby the CONTEXT field of
+ Introduce a feature whereby the CONTEXT field of
messages can be suppressed, either always or only for non-error
messages (Pavel Stehule)

- Printing CONTEXT only for errors is now the default
+ Printing CONTEXT only for errors is now the default
behavior. This can be changed by setting the special
variable SHOW_CONTEXT.

@@ -7484,7 +7484,7 @@ This commit is also listed under libpq and PL/pgSQL
2016-07-11 [a670c24c3] Improve output of psql's \df+ command.
-->
- Make \df+ show function access privileges and
+ Make \df+ show function access privileges and
parallel-safety attributes (Michael Paquier)

@@ -7503,7 +7503,7 @@ This commit is also listed under libpq and PL/pgSQL
2016-03-20 [68ab8e8ba] SQL commands in pgbench scripts are now ended by semicol
-->
- SQL commands in pgbench scripts are now ended by
+ SQL commands in pgbench scripts are now ended by
semicolons, not newlines (Kyotaro Horiguchi, Tom Lane)

Existing custom scripts will need to be modified to add a semicolon
at the end of each line that does not have one already. (Doing so
does not break the script for use with older versions
- of pgbench.)
+ of pgbench.)

@@ -7525,7 +7525,7 @@ This commit is also listed under libpq and PL/pgSQL
-->
Support floating-point arithmetic, as well as some
built-in functions, in
+ linkend="pgbench-builtin-functions">built-in functions, in
expressions in backslash commands (Fabien Coelho)

@@ -7535,18 +7535,18 @@ This commit is also listed under libpq and PL/pgSQL
2016-03-29 [ad9566470] pgbench: Remove \setrandom.
-->
- Replace \setrandom with built-in functions (Fabien
+ Replace \setrandom with built-in functions (Fabien
Coelho)

The new built-in functions include random(),
- random_exponential(), and
- random_gaussian(), which perform the same work as
- \setrandom, but are easier to use since they can be
+ linkend="pgbench-functions">random(),
+ random_exponential(), and
+ random_gaussian(), which perform the same work as
+ \setrandom, but are easier to use since they can be
embedded in larger expressions. Since these additions have made
- \setrandom obsolete, remove it.
+ \setrandom obsolete, remove it.

@@ -7561,8 +7561,8 @@ This commit is also listed under libpq and PL/pgSQL

- This is done with the new -b switch, which works
+ similarly to -f for custom scripts.

@@ -7577,7 +7577,7 @@ This commit is also listed under libpq and PL/pgSQL

- When multiple scripts are specified, each pgbench
+ When multiple scripts are specified, each pgbench
transaction randomly chooses one to execute. Formerly this was
always done with uniform probability, but now different selection
probabilities can be specified for different scripts.
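A custom-script sketch using the new expression functions in place of \setrandom; the table and the :scale variable follow pgbench's standard scenario:

    \set aid random(1, 100000 * :scale)
    \set delta random_gaussian(-5000, 5000, 2.5)
    UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;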
@@ -7604,7 +7604,7 @@ This commit is also listed under libpq and PL/pgSQL
2015-09-16 [1def9063c] pgbench progress with timestamp
-->
- Add a --progress-timestamp option to report progress with Unix epoch
timestamps, instead of time since the run started (Fabien Coelho)

@@ -7615,8 +7615,8 @@ This commit is also listed under libpq and PL/pgSQL
2015-07-03 [ba3deeefb] Lift the limitation that # of clients must be a multiple
-->
- Allow the number of client connections (-c) to not
+ be an exact multiple of the number of threads (-j)
(Fabien Coelho)

@@ -7626,13 +7626,13 @@ This commit is also listed under libpq and PL/pgSQL
2016-03-09 [accf7616f] pgbench: When -T is used, don't wait for transactions be
-->
- When the -T option is used, stop promptly at the end of the specified
time (Fabien Coelho)

Previously, specifying a low transaction rate could cause
- pgbench to wait significantly longer than
+ pgbench to wait significantly longer than
specified.

@@ -7653,15 +7653,15 @@ This commit is also listed under libpq and PL/pgSQL
2015-12-17 [66d947b9d] Adjust behavior of single-user -j mode for better initdb
-->
- Improve error reporting during initdb's
+ Improve error reporting during initdb's
post-bootstrap phase (Tom Lane)

Previously, an error here led to reporting the entire input
- file as the failing query; now just the current
+ file as the failing query; now just the current
query is reported. To get the desired behavior, queries in
- initdb's input files must be separated by blank
+ initdb's input files must be separated by blank
lines.

@@ -7672,7 +7672,7 @@ This commit is also listed under libpq and PL/pgSQL
2016-08-30 [d9720e437] Fix initdb misbehavior when user mis-enters superuser pa
-->
- Speed up initdb by using just one
+ Speed up initdb by using just one
standalone-backend session for all the post-bootstrap steps (Tom Lane)

@@ -7683,7 +7683,7 @@ This commit is also listed under libpq and PL/pgSQL
2015-12-01 [e50cda784] Use pg_rewind when target timeline was switched
-->
- Improve pg_rewind
+ Improve pg_rewind
so that it can work when the target timeline changes (Alexander
Korotkov)

@@ -7709,7 +7709,7 @@ This commit is also listed under libpq and PL/pgSQL
-->
Remove obsolete
- heap_formtuple/heap_modifytuple/heap_deformtuple
+ heap_formtuple/heap_modifytuple/heap_deformtuple
functions (Peter Geoghegan)

@@ -7719,16 +7719,16 @@ This commit is also listed under libpq and PL/pgSQL
2016-08-27 [b9fe6cbc8] Add macros to make AllocSetContextCreate() calls simpler
-->
- Add macros to make AllocSetContextCreate() calls simpler
+ Add macros to make AllocSetContextCreate() calls simpler
and safer (Tom Lane)

Writing out the individual sizing parameters for a memory
context is now deprecated in favor of using one of the new
- macros ALLOCSET_DEFAULT_SIZES,
- ALLOCSET_SMALL_SIZES,
- or ALLOCSET_START_SMALL_SIZES.
+ macros ALLOCSET_DEFAULT_SIZES,
+ ALLOCSET_SMALL_SIZES,
+ or ALLOCSET_START_SMALL_SIZES.
Existing code continues to work, however.
@@ -7738,7 +7738,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-08-05 [de6fd1c89] Rely on inline functions even if that causes warnings in --> - Unconditionally use static inline functions in header + Unconditionally use static inline functions in header files (Andres Freund) @@ -7759,7 +7759,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-05-06 [6bd356c33] Add TAP tests for pg_dump --> - Improve TAP testing infrastructure (Michael + Improve TAP testing infrastructure (Michael Paquier, Craig Ringer, Álvaro Herrera, Stephen Frost) @@ -7774,7 +7774,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-09-11 [aa65de042] When trace_lwlocks is used, identify individual lwlocks --> - Make trace_lwlocks identify individual locks by name + Make trace_lwlocks identify individual locks by name (Robert Haas) @@ -7786,7 +7786,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-01-05 [4f18010af] Convert psql's tab completion for backslash commands to --> - Improve psql's tab-completion code infrastructure + Improve psql's tab-completion code infrastructure (Thomas Munro, Michael Paquier) @@ -7801,7 +7801,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-01-05 [efa318bcf] Make pg_shseclabel available in early backend startup --> - Nail the pg_shseclabel system catalog into cache, + Nail the pg_shseclabel system catalog into cache, so that it is available for access during connection authentication (Adam Brightwell) @@ -7820,21 +7820,21 @@ This commit is also listed under libpq and PL/pgSQL --> Restructure index access - method API to hide most of it at - the C level (Alexander Korotkov, Andrew Gierth) + method API to hide most of it at + the C level (Alexander Korotkov, Andrew Gierth) - This change modernizes the index AM API to look more + This change modernizes the index AM API to look more like the designs we have adopted for foreign data wrappers and - tablesample handlers. This simplifies the C code + tablesample handlers. This simplifies the C code and makes it much more practical to define index access methods in installable extensions. A consequence is that most of the columns - of the pg_am system catalog have disappeared. + of the pg_am system catalog have disappeared. New inspection functions have been added to allow SQL queries to determine index AM properties that used to be discoverable - from pg_am. + from pg_am. @@ -7844,14 +7844,14 @@ This commit is also listed under libpq and PL/pgSQL --> Add pg_init_privs + linkend="catalog-pg-init-privs">pg_init_privs system catalog to hold original privileges - of initdb-created and extension-created objects + of initdb-created and extension-created objects (Stephen Frost) - This infrastructure allows pg_dump to dump changes + This infrastructure allows pg_dump to dump changes that an installation may have made in privileges attached to system objects. Formerly, such changes would be lost in a dump and reload, but now they are preserved. @@ -7863,14 +7863,14 @@ This commit is also listed under libpq and PL/pgSQL 2016-02-04 [c1772ad92] Change the way that LWLocks for extensions are allocated --> - Change the way that extensions allocate custom LWLocks + Change the way that extensions allocate custom LWLocks (Amit Kapila, Robert Haas) - The RequestAddinLWLocks() function is removed, - and replaced by RequestNamedLWLockTranche(). - This allows better identification of custom LWLocks, + The RequestAddinLWLocks() function is removed, + and replaced by RequestNamedLWLockTranche(). 
+ This allows better identification of custom LWLocks, and is less error-prone. @@ -7894,7 +7894,7 @@ This commit is also listed under libpq and PL/pgSQL - This change allows FDWs or custom scan providers + This change allows FDWs or custom scan providers to store data in a plan tree in a more convenient format than was previously possible. @@ -7911,7 +7911,7 @@ This commit is also listed under libpq and PL/pgSQL --> Make the planner deal with post-scan/join query steps by generating - and comparing Paths, replacing a lot of ad-hoc logic + and comparing Paths, replacing a lot of ad-hoc logic (Tom Lane) @@ -7961,7 +7961,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-24 [c1156411a] Move psql's psqlscan.l into src/fe_utils. --> - Separate out psql's flex lexer to + Separate out psql's flex lexer to make it usable by other client programs (Tom Lane, Kyotaro Horiguchi) @@ -7970,12 +7970,12 @@ This commit is also listed under libpq and PL/pgSQL This eliminates code duplication for programs that need to be able to parse SQL commands well enough to identify command boundaries. Doing that in full generality is more painful than one could - wish, and up to now only psql has really gotten + wish, and up to now only psql has really gotten it right among our supported client programs. - A new source-code subdirectory src/fe_utils/ has + A new source-code subdirectory src/fe_utils/ has been created to hold this and other code that is shared across our client programs. Formerly such sharing was accomplished by symbolic linking or copying source files at build time, which @@ -7988,7 +7988,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-21 [98a64d0bd] Introduce WaitEventSet API. --> - Introduce WaitEventSet API to allow + Introduce WaitEventSet API to allow efficient waiting for event sets that usually do not change from one wait to the next (Andres Freund, Amit Kapila) @@ -7999,16 +7999,16 @@ This commit is also listed under libpq and PL/pgSQL 2016-04-01 [65578341a] Add Generic WAL interface --> - Add a generic interface for writing WAL records + Add a generic interface for writing WAL records (Alexander Korotkov, Petr Jelínek, Markus Nullmeier) - This change allows extensions to write WAL records for + This change allows extensions to write WAL records for changes to pages using a standard layout. The problem of needing to - replay WAL without access to the extension is solved by + replay WAL without access to the extension is solved by having generic replay code. This allows extensions to implement, - for example, index access methods and have WAL + for example, index access methods and have WAL support for them. @@ -8018,13 +8018,13 @@ This commit is also listed under libpq and PL/pgSQL 2016-04-06 [3fe3511d0] Generic Messages for Logical Decoding --> - Support generic WAL messages for logical decoding + Support generic WAL messages for logical decoding (Petr Jelínek, Andres Freund) This feature allows extensions to insert data into the - WAL stream that can be read by logical-decoding + WAL stream that can be read by logical-decoding plugins, but is not connected to physical data restoration. 
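For example, with wal_level = logical, an application can insert a marker that logical-decoding plugins see but that changes no table data; the prefix and payload here are arbitrary:

    SELECT pg_logical_emit_message(true, 'myapp', 'batch 42 loaded');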
@@ -8036,12 +8036,12 @@ This commit is also listed under libpq and PL/pgSQL --> Allow SP-GiST operator classes to store an arbitrary - traversal value while descending the index (Alexander + traversal value while descending the index (Alexander Lebedev, Teodor Sigaev) - This is somewhat like the reconstructed value, but it + This is somewhat like the reconstructed value, but it could be any arbitrary chunk of data, not necessarily of the same data type as the indexed column. @@ -8052,12 +8052,12 @@ This commit is also listed under libpq and PL/pgSQL 2016-04-04 [66229ac00] Introduce a LOG_SERVER_ONLY ereport level, which is neve --> - Introduce a LOG_SERVER_ONLY message level for - ereport() (David Steele) + Introduce a LOG_SERVER_ONLY message level for + ereport() (David Steele) - This level acts like LOG except that the message is + This level acts like LOG except that the message is never sent to the client. It is meant for use in auditing and similar applications. @@ -8068,14 +8068,14 @@ This commit is also listed under libpq and PL/pgSQL 2016-07-01 [548af97fc] Provide and use a makefile target to build all generated --> - Provide a Makefile target to build all generated + Provide a Makefile target to build all generated headers (Michael Paquier, Tom Lane) - submake-generated-headers can now be invoked to ensure + submake-generated-headers can now be invoked to ensure that generated backend header files are up-to-date. This is - useful in subdirectories that might be built standalone. + useful in subdirectories that might be built standalone. @@ -8104,8 +8104,8 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-13 [7a8d87483] Rename auto_explain.sample_ratio to sample_rate --> - Add configuration parameter auto_explain.sample_rate to - allow contrib/auto_explain + Add configuration parameter auto_explain.sample_rate to + allow contrib/auto_explain to capture just a configurable fraction of all queries (Craig Ringer, Julien Rouhaud) @@ -8121,7 +8121,7 @@ This commit is also listed under libpq and PL/pgSQL 2016-04-01 [9ee014fc8] Bloom index contrib module --> - Add contrib/bloom module that + Add contrib/bloom module that implements an index access method based on Bloom filtering (Teodor Sigaev, Alexander Korotkov) @@ -8139,7 +8139,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-12-28 [81ee726d8] Code and docs review for cube kNN support. --> - In contrib/cube, introduce + In contrib/cube, introduce distance operators for cubes, and support kNN-style searches in GiST indexes on cube columns (Stas Kelvich) @@ -8150,19 +8150,19 @@ This commit is also listed under libpq and PL/pgSQL 2016-02-03 [41d2c081c] Make hstore_to_jsonb_loose match hstore_to_json_loose on --> - Make contrib/hstore's hstore_to_jsonb_loose() - and hstore_to_json_loose() functions agree on what + Make contrib/hstore's hstore_to_jsonb_loose() + and hstore_to_json_loose() functions agree on what is a number (Tom Lane) - Previously, hstore_to_jsonb_loose() would convert - numeric-looking strings to JSON numbers, rather than - strings, even if they did not exactly match the JSON + Previously, hstore_to_jsonb_loose() would convert + numeric-looking strings to JSON numbers, rather than + strings, even if they did not exactly match the JSON syntax specification for numbers. This was inconsistent with - hstore_to_json_loose(), so tighten the test to match - the JSON syntax. + hstore_to_json_loose(), so tighten the test to match + the JSON syntax. 
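A minimal sketch of the contrib/bloom module described above; the table name is illustrative:

    CREATE EXTENSION bloom;
    CREATE TABLE bloom_demo (i1 int, i2 int, i3 int);
    CREATE INDEX ON bloom_demo USING bloom (i1, i2, i3);
    -- unlike a btree, any subset of the indexed columns can use the index:
    SELECT * FROM bloom_demo WHERE i2 = 100 AND i3 = 42;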
@@ -8172,7 +8172,7 @@ This commit is also listed under libpq and PL/pgSQL --> Add selectivity estimation functions for - contrib/intarray operators + contrib/intarray operators to improve plans for queries using those operators (Yury Zhuravlev, Alexander Korotkov) @@ -8184,10 +8184,10 @@ This commit is also listed under libpq and PL/pgSQL --> Make contrib/pageinspect's - heap_page_items() function show the raw data in each - tuple, and add new functions tuple_data_split() and - heap_page_item_attrs() for inspection of individual + linkend="pageinspect">contrib/pageinspect's + heap_page_items() function show the raw data in each + tuple, and add new functions tuple_data_split() and + heap_page_item_attrs() for inspection of individual tuple fields (Nikolay Shaplov) @@ -8197,9 +8197,9 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-09 [188f359d3] pgcrypto: support changing S2K iteration count --> - Add an optional S2K iteration count parameter to - contrib/pgcrypto's - pgp_sym_encrypt() function (Jeff Janes) + Add an optional S2K iteration count parameter to + contrib/pgcrypto's + pgp_sym_encrypt() function (Jeff Janes) @@ -8208,8 +8208,8 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-16 [f576b17cd] Add word_similarity to pg_trgm contrib module. --> - Add support for word similarity to - contrib/pg_trgm + Add support for word similarity to + contrib/pg_trgm (Alexander Korotkov, Artur Zakirov) @@ -8226,14 +8226,14 @@ This commit is also listed under libpq and PL/pgSQL --> Add configuration parameter - pg_trgm.similarity_threshold for - contrib/pg_trgm's similarity threshold (Artur Zakirov) + pg_trgm.similarity_threshold for + contrib/pg_trgm's similarity threshold (Artur Zakirov) This threshold has always been configurable, but formerly it was - controlled by special-purpose functions set_limit() - and show_limit(). Those are now deprecated. + controlled by special-purpose functions set_limit() + and show_limit(). Those are now deprecated. 
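The new parameter is used like any other configuration setting, for example:

    SET pg_trgm.similarity_threshold = 0.4;  -- formerly SELECT set_limit(0.4);
    SHOW pg_trgm.similarity_threshold;       -- formerly SELECT show_limit();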
@@ -8242,7 +8242,7 @@ This commit is also listed under libpq and PL/pgSQL 2015-07-20 [97f301464] This supports the triconsistent function for pg_trgm GIN --> - Improve contrib/pg_trgm's GIN operator class to + Improve contrib/pg_trgm's GIN operator class to speed up index searches in which both common and rare keys appear (Jeff Janes) @@ -8254,7 +8254,7 @@ This commit is also listed under libpq and PL/pgSQL --> Improve performance of similarity searches in - contrib/pg_trgm GIN indexes (Christophe Fornaroli) + contrib/pg_trgm GIN indexes (Christophe Fornaroli) @@ -8265,7 +8265,7 @@ This commit is also listed under libpq and PL/pgSQL --> Add contrib/pg_visibility module + linkend="pgvisibility">contrib/pg_visibility module to allow examining table visibility maps (Robert Haas) @@ -8275,9 +8275,9 @@ This commit is also listed under libpq and PL/pgSQL 2015-09-07 [49124613f] contrib/sslinfo: add ssl_extension_info SRF --> - Add ssl_extension_info() - function to contrib/sslinfo, to print information - about SSL extensions present in the X509 + Add ssl_extension_info() + function to contrib/sslinfo, to print information + about SSL extensions present in the X509 certificate used for the current connection (Dmitry Voronin) @@ -8285,7 +8285,7 @@ This commit is also listed under libpq and PL/pgSQL - <link linkend="postgres-fdw"><filename>postgres_fdw</></> + <link linkend="postgres-fdw"><filename>postgres_fdw</filename></link> @@ -8332,12 +8332,12 @@ This commit is also listed under libpq and PL/pgSQL 2016-03-18 [0bf3ae88a] Directly modify foreign tables. --> - When feasible, perform UPDATE or DELETE + When feasible, perform UPDATE or DELETE entirely on the remote server (Etsuro Fujita) - Formerly, remote updates involved sending a SELECT FOR UPDATE + Formerly, remote updates involved sending a SELECT FOR UPDATE command and then updating or deleting the selected rows one-by-one. While that is still necessary if the operation requires any local processing, it can now be done remotely if all elements of the @@ -8355,7 +8355,7 @@ This commit is also listed under libpq and PL/pgSQL - Formerly, postgres_fdw always fetched 100 rows at + Formerly, postgres_fdw always fetched 100 rows at a time from remote queries; now that behavior is configurable. diff --git a/doc/src/sgml/release-old.sgml b/doc/src/sgml/release-old.sgml index 24a7233378..e95e5cac24 100644 --- a/doc/src/sgml/release-old.sgml +++ b/doc/src/sgml/release-old.sgml @@ -15,7 +15,7 @@ - This is expected to be the last PostgreSQL release + This is expected to be the last PostgreSQL release in the 7.3.X series. Users are encouraged to update to a newer release branch soon. @@ -39,7 +39,7 @@ Prevent functions in indexes from executing with the privileges of - the user running VACUUM, ANALYZE, etc (Tom) + the user running VACUUM, ANALYZE, etc (Tom) @@ -50,60 +50,60 @@ (Note that triggers, defaults, check constraints, etc. pose the same type of risk.) But functions in indexes pose extra danger because they will be executed by routine maintenance operations - such as VACUUM FULL, which are commonly performed + such as VACUUM FULL, which are commonly performed automatically under a superuser account. For example, a nefarious user can execute code with superuser privileges by setting up a trojan-horse index definition and waiting for the next routine vacuum. 
The fix arranges for standard maintenance operations - (including VACUUM, ANALYZE, REINDEX, - and CLUSTER) to execute as the table owner rather than + (including VACUUM, ANALYZE, REINDEX, + and CLUSTER) to execute as the table owner rather than the calling user, using the same privilege-switching mechanism already - used for SECURITY DEFINER functions. To prevent bypassing + used for SECURITY DEFINER functions. To prevent bypassing this security measure, execution of SET SESSION - AUTHORIZATION and SET ROLE is now forbidden within a - SECURITY DEFINER context. (CVE-2007-6600) + AUTHORIZATION and SET ROLE is now forbidden within a + SECURITY DEFINER context. (CVE-2007-6600) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) The fix that appeared for this in 7.3.20 was incomplete, as it plugged - the hole for only some dblink functions. (CVE-2007-6601, + the hole for only some dblink functions. (CVE-2007-6601, CVE-2007-3278) - Fix potential crash in translate() when using a multibyte + Fix potential crash in translate() when using a multibyte database encoding (Tom) - Make contrib/tablefunc's crosstab() handle + Make contrib/tablefunc's crosstab() handle NULL rowid as a category in its own right, rather than crashing (Joe) - Require a specific version of Autoconf to be used - when re-generating the configure script (Peter) + Require a specific version of Autoconf to be used + when re-generating the configure script (Peter) This affects developers and packagers only. The change was made to prevent accidental use of untested combinations of - Autoconf and PostgreSQL versions. + Autoconf and PostgreSQL versions. You can remove the version check if you really want to use a - different Autoconf version, but it's + different Autoconf version, but it's your responsibility whether the result works or not. @@ -144,27 +144,27 @@ Prevent index corruption when a transaction inserts rows and - then aborts close to the end of a concurrent VACUUM + then aborts close to the end of a concurrent VACUUM on the same table (Tom) - Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) + Make CREATE DOMAIN ... DEFAULT NULL work properly (Tom) - Fix crash when log_min_error_statement logging runs out + Fix crash when log_min_error_statement logging runs out of memory (Tom) - Require non-superusers who use /contrib/dblink to use only + Require non-superusers who use /contrib/dblink to use only password authentication, as a security measure (Joe) @@ -206,22 +206,22 @@ Support explicit placement of the temporary-table schema within - search_path, and disable searching it for functions + search_path, and disable searching it for functions and operators (Tom) This is needed to allow a security-definer function to set a - truly secure value of search_path. Without it, + truly secure value of search_path. Without it, an unprivileged SQL user can use temporary objects to execute code with the privileges of the security-definer function (CVE-2007-2138). - See CREATE FUNCTION for more information. + See CREATE FUNCTION for more information. 
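For example, the hardening pattern this search_path change enables is to pin pg_temp explicitly at the end of the path from inside the function; the schema name is a placeholder:

    -- first statement inside a SECURITY DEFINER function body:
    SET search_path = pg_catalog, admin_schema, pg_temp;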
- Fix potential-data-corruption bug in how VACUUM FULL handles - UPDATE chains (Tom, Pavan Deolasee) + Fix potential-data-corruption bug in how VACUUM FULL handles + UPDATE chains (Tom, Pavan Deolasee) @@ -322,13 +322,13 @@ - to_number() and to_char(numeric) - are now STABLE, not IMMUTABLE, for - new initdb installs (Tom) + to_number() and to_char(numeric) + are now STABLE, not IMMUTABLE, for + new initdb installs (Tom) - This is because lc_numeric can potentially + This is because lc_numeric can potentially change the output of these functions. @@ -339,7 +339,7 @@ - This improves psql \d performance also. + This improves psql \d performance also. @@ -376,7 +376,7 @@ Fix corner cases in pattern matching for - psql's \d commands + psql's \d commands Fix index-corrupting bugs in /contrib/ltree (Teodor) Back-port 7.4 spinlock code to improve performance and support @@ -419,9 +419,9 @@ into SQL commands, you should examine them as soon as possible to ensure that they are using recommended escaping techniques. In most cases, applications should be using subroutines provided by - libraries or drivers (such as libpq's - PQescapeStringConn()) to perform string escaping, - rather than relying on ad hoc code to do it. + libraries or drivers (such as libpq's + PQescapeStringConn()) to perform string escaping, + rather than relying on ad hoc code to do it. @@ -431,46 +431,46 @@ Change the server to reject invalidly-encoded multibyte characters in all cases (Tatsuo, Tom) -While PostgreSQL has been moving in this direction for +While PostgreSQL has been moving in this direction for some time, the checks are now applied uniformly to all encodings and all textual input, and are now always errors not merely warnings. This change defends against SQL-injection attacks of the type described in CVE-2006-2313. -Reject unsafe uses of \' in string literals +Reject unsafe uses of \' in string literals As a server-side defense against SQL-injection attacks of the type -described in CVE-2006-2314, the server now only accepts '' and not -\' as a representation of ASCII single quote in SQL string -literals. By default, \' is rejected only when -client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, +described in CVE-2006-2314, the server now only accepts '' and not +\' as a representation of ASCII single quote in SQL string +literals. By default, \' is rejected only when +client_encoding is set to a client-only encoding (SJIS, BIG5, GBK, GB18030, or UHC), which is the scenario in which SQL injection is possible. -A new configuration parameter backslash_quote is available to +A new configuration parameter backslash_quote is available to adjust this behavior when needed. Note that full security against CVE-2006-2314 might require client-side changes; the purpose of -backslash_quote is in part to make it obvious that insecure +backslash_quote is in part to make it obvious that insecure clients are insecure. -Modify libpq's string-escaping routines to be +Modify libpq's string-escaping routines to be aware of encoding considerations -This fixes libpq-using applications for the security +This fixes libpq-using applications for the security issues described in CVE-2006-2313 and CVE-2006-2314. 
-Applications that use multiple PostgreSQL connections -concurrently should migrate to PQescapeStringConn() and -PQescapeByteaConn() to ensure that escaping is done correctly +Applications that use multiple PostgreSQL connections +concurrently should migrate to PQescapeStringConn() and +PQescapeByteaConn() to ensure that escaping is done correctly for the settings in use in each database connection. Applications that -do string escaping by hand should be modified to rely on library +do string escaping by hand should be modified to rely on library routines instead. Fix some incorrect encoding conversion functions -win1251_to_iso, alt_to_iso, -euc_tw_to_big5, euc_tw_to_mic, -mic_to_euc_tw were all broken to varying +win1251_to_iso, alt_to_iso, +euc_tw_to_big5, euc_tw_to_mic, +mic_to_euc_tw were all broken to varying extents. -Clean up stray remaining uses of \' in strings +Clean up stray remaining uses of \' in strings (Bruce, Jan) Fix server to use custom DH SSL parameters correctly (Michael @@ -510,7 +510,7 @@ Fuhr) Fix potential crash in SET -SESSION AUTHORIZATION (CVE-2006-0553) +SESSION AUTHORIZATION (CVE-2006-0553) An unprivileged user could crash the server process, resulting in momentary denial of service to other users, if the server has been compiled with Asserts enabled (which is not the default). @@ -525,14 +525,14 @@ created in 7.3.11 release. Fix race condition that could lead to file already -exists errors during pg_clog file creation +exists errors during pg_clog file creation (Tom) Fix to allow restoring dumps that have cross-schema references to custom operators (Tom) -Portability fix for testing presence of finite -and isinf during configure (Tom) +Portability fix for testing presence of finite +and isinf during configure (Tom) @@ -558,9 +558,9 @@ and isinf during configure (Tom) A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.10, see . - Also, you might need to REINDEX indexes on textual + Also, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or - plperl issues described below. + plperl issues described below. @@ -571,28 +571,28 @@ and isinf during configure (Tom) Fix character string comparison for locales that consider different character combinations as equal, such as Hungarian (Tom) -This might require REINDEX to fix existing indexes on +This might require REINDEX to fix existing indexes on textual columns. Set locale environment variables during postmaster startup -to ensure that plperl won't change the locale later -This fixes a problem that occurred if the postmaster was +to ensure that plperl won't change the locale later +This fixes a problem that occurred if the postmaster was started with environment variables specifying a different locale than what -initdb had been told. Under these conditions, any use of -plperl was likely to lead to corrupt indexes. You might need -REINDEX to fix existing indexes on +initdb had been told. Under these conditions, any use of +plperl was likely to lead to corrupt indexes. You might need +REINDEX to fix existing indexes on textual columns if this has happened to you. 
Fix longstanding bug in strpos() and regular expression handling in certain rarely used Asian multi-byte character sets (Tatsuo) -Fix bug in /contrib/pgcrypto gen_salt, +Fix bug in /contrib/pgcrypto gen_salt, which caused it not to use all available salt space for MD5 and XDES algorithms (Marko Kreen, Solar Designer) Salts for Blowfish and standard DES are unaffected. -Fix /contrib/dblink to throw an error, +Fix /contrib/dblink to throw an error, rather than crashing, when the number of columns specified is different from what's actually returned by the query (Joe) @@ -634,13 +634,13 @@ for the wrong page, leading to an Assert failure or data corruption. -/contrib/ltree fixes (Teodor) +/contrib/ltree fixes (Teodor) Fix longstanding planning error for outer joins This bug sometimes caused a bogus error RIGHT JOIN is -only supported with merge-joinable join conditions. +only supported with merge-joinable join conditions. -Prevent core dump in pg_autovacuum when a +Prevent core dump in pg_autovacuum when a table has been dropped @@ -674,25 +674,25 @@ table has been dropped Changes -Fix error that allowed VACUUM to remove -ctid chains too soon, and add more checking in code that follows -ctid links +Fix error that allowed VACUUM to remove +ctid chains too soon, and add more checking in code that follows +ctid links This fixes a long-standing problem that could cause crashes in very rare circumstances. -Fix CHAR() to properly pad spaces to the specified +Fix CHAR() to properly pad spaces to the specified length when using a multiple-byte character set (Yoshiyuki Asaba) -In prior releases, the padding of CHAR() was incorrect +In prior releases, the padding of CHAR() was incorrect because it only padded to the specified number of bytes without considering how many characters were stored. Fix missing rows in queries like UPDATE a=... WHERE -a... with GiST index on column a +a... with GiST index on column a Improve checking for partially-written WAL pages Improve robustness of signal handling when SSL is enabled Various memory leakage fixes Various portability improvements -Fix PL/pgSQL to handle var := var correctly when +Fix PL/pgSQL to handle var := var correctly when the variable is of pass-by-reference type @@ -754,17 +754,17 @@ COMMIT; - The above procedure must be carried out in each database - of an installation, including template1, and ideally - including template0 as well. If you do not fix the + The above procedure must be carried out in each database + of an installation, including template1, and ideally + including template0 as well. If you do not fix the template databases then any subsequently created databases will contain - the same error. template1 can be fixed in the same way - as any other database, but fixing template0 requires + the same error. template1 can be fixed in the same way + as any other database, but fixing template0 requires additional steps. First, from any database issue: UPDATE pg_database SET datallowconn = true WHERE datname = 'template0'; - Next connect to template0 and perform the above repair + Next connect to template0 and perform the above repair procedure. Finally, do: -- re-freeze template0: @@ -792,34 +792,34 @@ VACUUM freshly-inserted data, although the scenario seems of very low probability. There are no known cases of it having caused more than an Assert failure. 
-Fix comparisons of TIME WITH TIME ZONE values +Fix comparisons of TIME WITH TIME ZONE values The comparison code was wrong in the case where the ---enable-integer-datetimes configuration switch had been used. -NOTE: if you have an index on a TIME WITH TIME ZONE column, -it will need to be REINDEXed after installing this update, because +--enable-integer-datetimes configuration switch had been used. +NOTE: if you have an index on a TIME WITH TIME ZONE column, +it will need to be REINDEXed after installing this update, because the fix corrects the sort order of column values. -Fix EXTRACT(EPOCH) for -TIME WITH TIME ZONE values +Fix EXTRACT(EPOCH) for +TIME WITH TIME ZONE values Fix mis-display of negative fractional seconds in -INTERVAL values +INTERVAL values This error only occurred when the ---enable-integer-datetimes configuration switch had been used. +--enable-integer-datetimes configuration switch had been used. Additional buffer overrun checks in plpgsql (Neil) -Fix pg_dump to dump trigger names containing % +Fix pg_dump to dump trigger names containing % correctly (Neil) -Prevent to_char(interval) from dumping core for +Prevent to_char(interval) from dumping core for month-related formats -Fix contrib/pgcrypto for newer OpenSSL builds +Fix contrib/pgcrypto for newer OpenSSL builds (Marko Kreen) Still more 64-bit fixes for -contrib/intagg +contrib/intagg Prevent incorrect optimization of functions returning -RECORD +RECORD @@ -850,11 +850,11 @@ month-related formats Changes -Disallow LOAD to non-superusers +Disallow LOAD to non-superusers On platforms that will automatically execute initialization functions of a shared library (this includes at least Windows and ELF-based Unixen), -LOAD can be used to make the server execute arbitrary code. +LOAD can be used to make the server execute arbitrary code. Thanks to NGS Software for reporting this. Check that creator of an aggregate function has the right to execute the specified transition functions @@ -909,7 +909,7 @@ datestyles Repair possible failure to update hint bits on disk Under rare circumstances this oversight could lead to -could not access transaction status failures, which qualifies +could not access transaction status failures, which qualifies it as a potential-data-loss bug. 
Ensure that hashed outer join does not miss tuples @@ -1264,13 +1264,13 @@ operations on bytea columns (Joe) Restore creation of OID column in CREATE TABLE AS / SELECT INTO -Fix pg_dump core dump when dumping views having comments +Fix pg_dump core dump when dumping views having comments Dump DEFERRABLE/INITIALLY DEFERRED constraints properly Fix UPDATE when child table's column numbering differs from parent Increase default value of max_fsm_relations Fix problem when fetching backwards in a cursor for a single-row query Make backward fetch work properly with cursor on SELECT DISTINCT query -Fix problems with loading pg_dump files containing contrib/lo usage +Fix problems with loading pg_dump files containing contrib/lo usage Fix problem with all-numeric user names Fix possible memory leak and core dump during disconnect in libpgtcl Make plpython's spi_execute command handle nulls properly (Andrew Bosma) @@ -1328,7 +1328,7 @@ operations on bytea columns (Joe) Fix a core dump of COPY TO when client/server encodings don't match (Tom) -Allow pg_dump to work with pre-7.2 servers (Philip) +Allow pg_dump to work with pre-7.2 servers (Philip) contrib/adddepend fixes (Tom) Fix problem with deletion of per-user/per-database config settings (Tom) contrib/vacuumlo fix (Tom) @@ -1418,7 +1418,7 @@ operations on bytea columns (Joe) PostgreSQL now records object dependencies, which allows improvements in many areas. DROP statements now take either - CASCADE or RESTRICT to control whether + CASCADE or RESTRICT to control whether dependent objects are also dropped. @@ -1458,7 +1458,7 @@ operations on bytea columns (Joe) A large number of interfaces have been moved to http://gborg.postgresql.org + url="http://gborg.postgresql.org">http://gborg.postgresql.org where they can be developed and released independently. @@ -1469,9 +1469,9 @@ operations on bytea columns (Joe) By default, functions can now take up to 32 parameters, and - identifiers can be up to 63 bytes long. Also, OPAQUE - is now deprecated: there are specific pseudo-datatypes - to represent each of the former meanings of OPAQUE + identifiers can be up to 63 bytes long. Also, OPAQUE + is now deprecated: there are specific pseudo-datatypes + to represent each of the former meanings of OPAQUE in function argument and result types. @@ -1484,12 +1484,12 @@ operations on bytea columns (Joe) Migration to Version 7.3 - A dump/restore using pg_dump is required for those + A dump/restore using pg_dump is required for those wishing to migrate data from any previous release. If your application examines the system catalogs, additional changes will be required due to the introduction of schemas in 7.3; for more information, see: . + url="http://developer.postgresql.org/~momjian/upgrade_tips_7.3">. @@ -1538,7 +1538,7 @@ operations on bytea columns (Joe) serial columns are no longer automatically - UNIQUE; thus, an index will not automatically be + UNIQUE; thus, an index will not automatically be created. 
@@ -1724,7 +1724,7 @@ operations on bytea columns (Joe) Have COPY TO output embedded carriage returns and newlines as \r and \n (Tom) Allow DELIMITER in COPY FROM to be 8-bit clean (Tatsuo) -Make pg_dump use ALTER TABLE ADD PRIMARY KEY, for performance (Neil) +Make pg_dump use ALTER TABLE ADD PRIMARY KEY, for performance (Neil) Disable brackets in multistatement rules (Bruce) Disable VACUUM from being called inside a function (Bruce) Allow dropdb and other scripts to use identifiers with spaces (Bruce) @@ -1736,7 +1736,7 @@ operations on bytea columns (Joe) Add 'SET LOCAL var = value' to set configuration variables for a single transaction (Tom) Allow ANALYZE to run in a transaction (Bruce) Improve COPY syntax using new WITH clauses, keep backward compatibility (Bruce) -Fix pg_dump to consistently output tags in non-ASCII dumps (Bruce) +Fix pg_dump to consistently output tags in non-ASCII dumps (Bruce) Make foreign key constraints clearer in dump file (Rod) Add COMMENT ON CONSTRAINT (Rod) Allow COPY TO/FROM to specify column names (Brent Verner) @@ -1745,9 +1745,9 @@ operations on bytea columns (Joe) Generate failure on short COPY lines rather than pad NULLs (Neil) Fix CLUSTER to preserve all table attributes (Alvaro Herrera) New pg_settings table to view/modify GUC settings (Joe) -Add smart quoting, portability improvements to pg_dump output (Peter) +Add smart quoting, portability improvements to pg_dump output (Peter) Dump serial columns out as SERIAL (Tom) -Enable large file support, >2G for pg_dump (Peter, Philip Warner, Bruce) +Enable large file support, >2G for pg_dump (Peter, Philip Warner, Bruce) Disallow TRUNCATE on tables that are involved in referential constraints (Rod) Have TRUNCATE also auto-truncate the toast table of the relation (Tom) Add clusterdb utility that will auto-cluster an entire database based on previous CLUSTER operations (Alvaro Herrera) @@ -2020,15 +2020,15 @@ VACUUM freshly-inserted data, although the scenario seems of very low probability. There are no known cases of it having caused more than an Assert failure. -Fix EXTRACT(EPOCH) for -TIME WITH TIME ZONE values +Fix EXTRACT(EPOCH) for +TIME WITH TIME ZONE values Additional buffer overrun checks in plpgsql (Neil) Fix pg_dump to dump index names and trigger names containing -% correctly (Neil) -Prevent to_char(interval) from dumping core for +% correctly (Neil) +Prevent to_char(interval) from dumping core for month-related formats -Fix contrib/pgcrypto for newer OpenSSL builds +Fix contrib/pgcrypto for newer OpenSSL builds (Marko Kreen) @@ -2060,11 +2060,11 @@ month-related formats Changes -Disallow LOAD to non-superusers +Disallow LOAD to non-superusers On platforms that will automatically execute initialization functions of a shared library (this includes at least Windows and ELF-based Unixen), -LOAD can be used to make the server execute arbitrary code. +LOAD can be used to make the server execute arbitrary code. Thanks to NGS Software for reporting this. Add needed STRICT marking to some contrib functions (Kris Jurka) @@ -2111,7 +2111,7 @@ datestyles Repair possible failure to update hint bits on disk Under rare circumstances this oversight could lead to -could not access transaction status failures, which qualifies +could not access transaction status failures, which qualifies it as a potential-data-loss bug. Ensure that hashed outer join does not miss tuples @@ -2247,7 +2247,7 @@ since PostgreSQL 7.1. 
Handle pre-1970 date values in newer versions of glibc (Tom) Fix possible hang during server shutdown Prevent spinlock hangs on SMP PPC machines (Tomoyuki Niijima) -Fix pg_dump to properly dump FULL JOIN USING (Tom) +Fix pg_dump to properly dump FULL JOIN USING (Tom) @@ -2281,7 +2281,7 @@ since PostgreSQL 7.1. Allow EXECUTE of "CREATE TABLE AS ... SELECT" in PL/pgSQL (Tom) Fix for compressed transaction log id wraparound (Tom) Fix PQescapeBytea/PQunescapeBytea so that they handle bytes > 0x7f (Tatsuo) -Fix for psql and pg_dump crashing when invoked with non-existent long options (Tatsuo) +Fix for psql and pg_dump crashing when invoked with non-existent long options (Tatsuo) Fix crash when invoking geometric operators (Tom) Allow OPEN cursor(args) (Tom) Fix for rtree_gist index build (Teodor) @@ -2354,7 +2354,7 @@ since PostgreSQL 7.1. Overview - This release improves PostgreSQL for use in + This release improves PostgreSQL for use in high-volume applications. @@ -2368,7 +2368,7 @@ since PostgreSQL 7.1. Vacuuming no longer locks tables, thus allowing normal user - access during the vacuum. A new VACUUM FULL + access during the vacuum. A new VACUUM FULL command does old-style vacuum by locking the table and shrinking the on-disk copy of the table. @@ -2400,7 +2400,7 @@ since PostgreSQL 7.1. The system now computes histogram column statistics during - ANALYZE, allowing much better optimizer choices. + ANALYZE, allowing much better optimizer choices. @@ -2472,15 +2472,15 @@ since PostgreSQL 7.1. - The pg_hba.conf and pg_ident.conf + The pg_hba.conf and pg_ident.conf configuration is now only reloaded after receiving a - SIGHUP signal, not with each connection. + SIGHUP signal, not with each connection. - The function octet_length() now returns the uncompressed data length. + The function octet_length() now returns the uncompressed data length. @@ -2693,7 +2693,7 @@ since PostgreSQL 7.1. Internationalization -National language support in psql, pg_dump, libpq, and server (Peter E) +National language support in psql, pg_dump, libpq, and server (Peter E) Message translations in Chinese (simplified, traditional), Czech, French, German, Hungarian, Russian, Swedish (Peter E, Serguei A. Mokhov, Karel Zak, Weiping He, Zhenbang Wei, Kovacs Zoltan) Make trim, ltrim, rtrim, btrim, lpad, rpad, translate multibyte aware (Tatsuo) Add LATIN5,6,7,8,9,10 support (Tatsuo) @@ -2705,7 +2705,7 @@ since PostgreSQL 7.1. - <application>PL/pgSQL</> + <application>PL/pgSQL</application> Now uses portals for SELECT loops, allowing huge result sets (Jan) CURSOR and REFCURSOR support (Jan) @@ -2745,7 +2745,7 @@ since PostgreSQL 7.1. - <application>psql</> + <application>psql</application> \d displays indexes in unique, primary groupings (Christopher Kings-Lynne) Allow trailing semicolons in backslash commands (Greg Sabino Mullane) @@ -2756,7 +2756,7 @@ since PostgreSQL 7.1. - <application>libpq</> + <application>libpq</application> New function PQescapeString() to escape quotes in command strings (Florian Weimer) New function PQescapeBytea() escapes binary strings for use as SQL string literals @@ -2818,7 +2818,7 @@ since PostgreSQL 7.1. - <application>ECPG</> + <application>ECPG</application> EXECUTE ... INTO implemented (Christof Petig) Multiple row descriptor support (e.g. CARDINALITY) (Christof Petig) @@ -2839,7 +2839,7 @@ since PostgreSQL 7.1. 
Python fix fetchone() (Gerhard Haring) Use UTF, Unicode in Tcl where appropriate (Vsevolod Lobko, Reinhard Max) Add Tcl COPY TO/FROM (ljb) -Prevent output of default index op class in pg_dump (Tom) +Prevent output of default index op class in pg_dump (Tom) Fix libpgeasy memory leak (Bruce) @@ -3547,9 +3547,9 @@ ecpg changes (Michael) SQL92 join syntax is now supported, though only as - INNER JOIN for this release. JOIN, - NATURAL JOIN, JOIN/USING, - and JOIN/ON are available, as are + INNER JOIN for this release. JOIN, + NATURAL JOIN, JOIN/USING, + and JOIN/ON are available, as are column correlation names. @@ -3959,7 +3959,7 @@ New multibyte encodings This is basically a cleanup release for 6.5.2. We have added a new - PgAccess that was missing in 6.5.2, and installed an NT-specific fix. + PgAccess that was missing in 6.5.2, and installed an NT-specific fix. @@ -4209,7 +4209,7 @@ Add Win1250 (Czech) support (Pavel Behal) We continue to expand our port list, this time including - Windows NT/ix86 and NetBSD/arm32. + Windows NT/ix86 and NetBSD/arm32. @@ -4234,7 +4234,7 @@ Add Win1250 (Czech) support (Pavel Behal) New and updated material is present throughout the documentation. New FAQs have been - contributed for SGI and AIX platforms. + contributed for SGI and AIX platforms. The Tutorial has introductory information on SQL from Stefan Simkovics. For the User's Guide, there are @@ -4926,7 +4926,7 @@ Correctly handles function calls on the left side of BETWEEN and LIKE clauses. A dump/restore is NOT required for those running 6.3 or 6.3.1. A -make distclean, make, and make install is all that is required. +make distclean, make, and make install is all that is required. This last step should be performed while the postmaster is not running. You should re-link any custom applications that use PostgreSQL libraries. @@ -5003,7 +5003,7 @@ Improvements to the configuration autodetection for installation. A dump/restore is NOT required for those running 6.3. A -make distclean, make, and make install is all that is required. +make distclean, make, and make install is all that is required. This last step should be performed while the postmaster is not running. You should re-link any custom applications that use PostgreSQL libraries. @@ -5128,7 +5128,7 @@ Better identify tcl and tk libs and includes(Bruce) Third, char() fields will now allow faster access than varchar() or - text. Specifically, the text and varchar() have a penalty for access to + text. Specifically, the text and varchar() have a penalty for access to any columns after the first column of this type. char() used to also have this access penalty, but it no longer does. This might suggest that you redesign some of your tables, especially if you have short character @@ -5470,7 +5470,7 @@ to dump the 6.1 database. -Migration from version 1.<replaceable>x</> to version 6.2 +Migration from version 1.<replaceable>x</replaceable> to version 6.2 Those migrating from earlier 1.* releases should first upgrade to 1.09 @@ -5689,11 +5689,11 @@ optimizer which uses genetic - The random results in the random test should cause the + The random results in the random test should cause the random test to be failed, since the regression tests are evaluated using a simple diff. However, - random does not seem to produce random results on my test - machine (Linux/gcc/i686). + random does not seem to produce random results on my test + machine (Linux/gcc/i686). @@ -5990,16 +5990,16 @@ and a script to convert old ASCII files. 
The following notes are for the benefit of users who want to migrate -databases from Postgres95 1.01 and 1.02 to Postgres95 1.02.1. +databases from Postgres95 1.01 and 1.02 to Postgres95 1.02.1. -If you are starting afresh with Postgres95 1.02.1 and do not need +If you are starting afresh with Postgres95 1.02.1 and do not need to migrate old databases, you do not need to read any further. -In order to upgrade older Postgres95 version 1.01 or 1.02 databases to +In order to upgrade older Postgres95 version 1.01 or 1.02 databases to version 1.02.1, the following steps are required: @@ -6013,7 +6013,7 @@ Start up a new 1.02.1 postmaster Add the new built-in functions and operators of 1.02.1 to 1.01 or 1.02 databases. This is done by running the new 1.02.1 server against your own 1.01 or 1.02 database and applying the queries attached at - the end of the file. This can be done easily through psql. If your + the end of the file. This can be done easily through psql. If your 1.01 or 1.02 database is named testdb and you have cut the commands from the end of this file and saved them in addfunc.sql: @@ -6044,7 +6044,7 @@ sed 's/^\.$/\\./g' <in_file >out_file -If you are loading an older binary copy or non-stdout copy, there is no +If you are loading an older binary copy or non-stdout copy, there is no end-of-data character, and hence no conversion necessary. @@ -6135,15 +6135,15 @@ Contributors (apologies to any missed) The following notes are for the benefit of users who want to migrate -databases from Postgres95 1.0 to Postgres95 1.01. +databases from Postgres95 1.0 to Postgres95 1.01. -If you are starting afresh with Postgres95 1.01 and do not need +If you are starting afresh with Postgres95 1.01 and do not need to migrate old databases, you do not need to read any further. -In order to Postgres95 version 1.01 with databases created with -Postgres95 version 1.0, the following steps are required: +In order to Postgres95 version 1.01 with databases created with +Postgres95 version 1.0, the following steps are required: diff --git a/doc/src/sgml/release.sgml b/doc/src/sgml/release.sgml index f1f4e91252..a815a48b8d 100644 --- a/doc/src/sgml/release.sgml +++ b/doc/src/sgml/release.sgml @@ -44,7 +44,7 @@ For new features, add links to the documentation sections. The release notes contain the significant changes in each - PostgreSQL release, with major features and migration + PostgreSQL release, with major features and migration issues listed at the top. The release notes do not contain changes that affect only a few users or changes that are internal and therefore not user-visible. For example, the optimizer is improved in almost every diff --git a/doc/src/sgml/rowtypes.sgml b/doc/src/sgml/rowtypes.sgml index 9d6768e006..bc2fc9b885 100644 --- a/doc/src/sgml/rowtypes.sgml +++ b/doc/src/sgml/rowtypes.sgml @@ -12,7 +12,7 @@ - A composite type represents the structure of a row or record; + A composite type represents the structure of a row or record; it is essentially just a list of field names and their data types. PostgreSQL allows composite types to be used in many of the same ways that simple types can be used. For example, a @@ -36,11 +36,11 @@ CREATE TYPE inventory_item AS ( price numeric ); - The syntax is comparable to CREATE TABLE, except that only + The syntax is comparable to CREATE TABLE, except that only field names and types can be specified; no constraints (such as NOT - NULL) can presently be included. Note that the AS keyword + NULL) can presently be included. 
Note that the AS keyword is essential; without it, the system will think a different kind - of CREATE TYPE command is meant, and you will get odd syntax + of CREATE TYPE command is meant, and you will get odd syntax errors. @@ -78,12 +78,12 @@ CREATE TABLE inventory_item ( price numeric CHECK (price > 0) ); - then the same inventory_item composite type shown above would + then the same inventory_item composite type shown above would come into being as a byproduct, and could be used just as above. Note however an important restriction of the current implementation: since no constraints are associated with a composite type, the constraints shown in the table - definition do not apply to values of the composite type + definition do not apply to values of the composite type outside the table. (A partial workaround is to use domain types as members of composite types.) @@ -111,7 +111,7 @@ CREATE TABLE inventory_item ( '("fuzzy dice",42,1.99)' - which would be a valid value of the inventory_item type + which would be a valid value of the inventory_item type defined above. To make a field be NULL, write no characters at all in its position in the list. For example, this constant specifies a NULL third field: @@ -150,7 +150,7 @@ ROW('', 42, NULL) ('fuzzy dice', 42, 1.99) ('', 42, NULL) - The ROW expression syntax is discussed in more detail in ROW expression syntax is discussed in more detail in . @@ -163,15 +163,15 @@ ROW('', 42, NULL) name, much like selecting a field from a table name. In fact, it's so much like selecting from a table name that you often have to use parentheses to keep from confusing the parser. For example, you might try to select - some subfields from our on_hand example table with something + some subfields from our on_hand example table with something like: SELECT item.name FROM on_hand WHERE item.price > 9.99; - This will not work since the name item is taken to be a table - name, not a column name of on_hand, per SQL syntax rules. + This will not work since the name item is taken to be a table + name, not a column name of on_hand, per SQL syntax rules. You must write it like this: @@ -186,7 +186,7 @@ SELECT (on_hand.item).name FROM on_hand WHERE (on_hand.item).price > 9.99; Now the parenthesized object is correctly interpreted as a reference to - the item column, and then the subfield can be selected from it. + the item column, and then the subfield can be selected from it. @@ -202,7 +202,7 @@ SELECT (my_func(...)).field FROM ... - The special field name * means all fields, as + The special field name * means all fields, as further explained in . @@ -221,7 +221,7 @@ INSERT INTO mytab (complex_col) VALUES((1.1,2.2)); UPDATE mytab SET complex_col = ROW(1.1,2.2) WHERE ...; - The first example omits ROW, the second uses it; we + The first example omits ROW, the second uses it; we could have done it either way. @@ -234,12 +234,12 @@ UPDATE mytab SET complex_col.r = (complex_col).r + 1 WHERE ...; Notice here that we don't need to (and indeed cannot) put parentheses around the column name appearing just after - SET, but we do need parentheses when referencing the same + SET, but we do need parentheses when referencing the same column in the expression to the right of the equal sign. 
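Taken together, the parenthesization rules above amount to the following minimal sketch; the definition of the complex type is an assumption (the surrounding fragments only use a column of it), with fields r and i as in the examples:

CREATE TYPE complex AS (r double precision, i double precision);
CREATE TABLE mytab (complex_col complex);

INSERT INTO mytab (complex_col) VALUES ((1.1, 2.2));

-- no parentheses on the SET target, but parentheses when reading a field:
UPDATE mytab SET complex_col.r = (complex_col).r + 1;

SELECT (complex_col).r, (complex_col).i FROM mytab;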
- And we can specify subfields as targets for INSERT, too: + And we can specify subfields as targets for INSERT, too: INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2); @@ -260,10 +260,10 @@ INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2); - In PostgreSQL, a reference to a table name (or alias) + In PostgreSQL, a reference to a table name (or alias) in a query is effectively a reference to the composite value of the table's current row. For example, if we had a table - inventory_item as shown + inventory_item as shown above, we could write: SELECT c FROM inventory_item c; @@ -278,12 +278,12 @@ SELECT c FROM inventory_item c; Note however that simple names are matched to column names before table names, so this example works only because there is no column - named c in the query's tables. + named c in the query's tables. The ordinary qualified-column-name - syntax table_name.column_name + syntax table_name.column_name can be understood as applying field selection to the composite value of the table's current row. (For efficiency reasons, it's not actually implemented that way.) @@ -306,13 +306,13 @@ SELECT c.* FROM inventory_item c; SELECT c.name, c.supplier_id, c.price FROM inventory_item c; - PostgreSQL will apply this expansion behavior to + PostgreSQL will apply this expansion behavior to any composite-valued expression, although as shown above, you need to write parentheses - around the value that .* is applied to whenever it's not a - simple table name. For example, if myfunc() is a function - returning a composite type with columns a, - b, and c, then these two queries have the + around the value that .* is applied to whenever it's not a + simple table name. For example, if myfunc() is a function + returning a composite type with columns a, + b, and c, then these two queries have the same result: SELECT (myfunc(x)).* FROM some_table; @@ -322,33 +322,33 @@ SELECT (myfunc(x)).a, (myfunc(x)).b, (myfunc(x)).c FROM some_table; - PostgreSQL handles column expansion by + PostgreSQL handles column expansion by actually transforming the first form into the second. So, in this - example, myfunc() would get invoked three times per row + example, myfunc() would get invoked three times per row with either syntax. If it's an expensive function you may wish to avoid that, which you can do with a query like: SELECT (m).* FROM (SELECT myfunc(x) AS m FROM some_table OFFSET 0) ss; - The OFFSET 0 clause keeps the optimizer - from flattening the sub-select to arrive at the form with - multiple calls of myfunc(). + The OFFSET 0 clause keeps the optimizer + from flattening the sub-select to arrive at the form with + multiple calls of myfunc(). - The composite_value.* syntax results in + The composite_value.* syntax results in column expansion of this kind when it appears at the top level of - a SELECT output - list, a RETURNING - list in INSERT/UPDATE/DELETE, - a VALUES clause, or + a SELECT output + list, a RETURNING + list in INSERT/UPDATE/DELETE, + a VALUES clause, or a row constructor. In all other contexts (including when nested inside one of those - constructs), attaching .* to a composite value does not - change the value, since it means all columns and so the + constructs), attaching .* to a composite value does not + change the value, since it means all columns and so the same composite value is produced again. 
For example, - if somefunc() accepts a composite-valued argument, + if somefunc() accepts a composite-valued argument, these queries are the same: @@ -356,16 +356,16 @@ SELECT somefunc(c.*) FROM inventory_item c; SELECT somefunc(c) FROM inventory_item c; - In both cases, the current row of inventory_item is + In both cases, the current row of inventory_item is passed to the function as a single composite-valued argument. - Even though .* does nothing in such cases, using it is good + Even though .* does nothing in such cases, using it is good style, since it makes clear that a composite value is intended. In - particular, the parser will consider c in c.* to + particular, the parser will consider c in c.* to refer to a table name or alias, not to a column name, so that there is - no ambiguity; whereas without .*, it is not clear - whether c means a table name or a column name, and in fact + no ambiguity; whereas without .*, it is not clear + whether c means a table name or a column name, and in fact the column-name interpretation will be preferred if there is a column - named c. + named c. @@ -376,27 +376,27 @@ SELECT * FROM inventory_item c ORDER BY c; SELECT * FROM inventory_item c ORDER BY c.*; SELECT * FROM inventory_item c ORDER BY ROW(c.*); - All of these ORDER BY clauses specify the row's composite + All of these ORDER BY clauses specify the row's composite value, resulting in sorting the rows according to the rules described in . However, - if inventory_item contained a column - named c, the first case would be different from the + if inventory_item contained a column + named c, the first case would be different from the others, as it would mean to sort by that column only. Given the column names previously shown, these queries are also equivalent to those above: SELECT * FROM inventory_item c ORDER BY ROW(c.name, c.supplier_id, c.price); SELECT * FROM inventory_item c ORDER BY (c.name, c.supplier_id, c.price); - (The last case uses a row constructor with the key word ROW + (The last case uses a row constructor with the key word ROW omitted.) Another special syntactical behavior associated with composite values is - that we can use functional notation for extracting a field + that we can use functional notation for extracting a field of a composite value. The simple way to explain this is that - the notations field(table) - and table.field + the notations field(table) + and table.field are interchangeable. For example, these queries are equivalent: @@ -418,7 +418,7 @@ SELECT c.somefunc FROM inventory_item c; This equivalence between functional notation and field notation makes it possible to use functions on composite types to implement - computed fields. + computed fields. computed field @@ -427,7 +427,7 @@ SELECT c.somefunc FROM inventory_item c; computed An application using the last query above wouldn't need to be directly - aware that somefunc isn't a real column of the table. + aware that somefunc isn't a real column of the table. @@ -438,7 +438,7 @@ SELECT c.somefunc FROM inventory_item c; interpretation will be preferred, so that such a function could not be called without tricks. One way to force the function interpretation is to schema-qualify the function name, that is, write - schema.func(compositevalue). + schema.func(compositevalue). 
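As a concrete illustration of such a computed field, here is a minimal sketch; the function name price_with_tax and the 10% rate are invented for the example, while inventory_item is the table defined earlier:

CREATE FUNCTION price_with_tax(inventory_item) RETURNS numeric
    AS 'SELECT $1.price * 1.10' LANGUAGE SQL;

SELECT c.name, price_with_tax(c) FROM inventory_item c;  -- functional notation
SELECT c.name, c.price_with_tax FROM inventory_item c;   -- field notation

Both queries return the same result, and an application issuing the second one cannot tell that price_with_tax is not a real column of the table.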
@@ -450,8 +450,8 @@ SELECT c.somefunc FROM inventory_item c; The external text representation of a composite value consists of items that are interpreted according to the I/O conversion rules for the individual field types, plus decoration that indicates the composite structure. - The decoration consists of parentheses (( and )) - around the whole value, plus commas (,) between adjacent + The decoration consists of parentheses (( and )) + around the whole value, plus commas (,) between adjacent items. Whitespace outside the parentheses is ignored, but within the parentheses it is considered part of the field value, and might or might not be significant depending on the input conversion rules for the field data type. @@ -466,7 +466,7 @@ SELECT c.somefunc FROM inventory_item c; As shown previously, when writing a composite value you can write double quotes around any individual field value. - You must do so if the field value would otherwise + You must do so if the field value would otherwise confuse the composite-value parser. In particular, fields containing parentheses, commas, double quotes, or backslashes must be double-quoted. To put a double quote or backslash in a quoted composite field value, @@ -481,7 +481,7 @@ SELECT c.somefunc FROM inventory_item c; A completely empty field value (no characters at all between the commas or parentheses) represents a NULL. To write a value that is an empty - string rather than NULL, write "". + string rather than NULL, write "". @@ -497,7 +497,7 @@ SELECT c.somefunc FROM inventory_item c; Remember that what you write in an SQL command will first be interpreted as a string literal, and then as a composite. This doubles the number of backslashes you need (assuming escape string syntax is used). - For example, to insert a text field + For example, to insert a text field containing a double quote and a backslash in a composite value, you'd need to write: @@ -505,11 +505,11 @@ INSERT ... VALUES (E'("\\"\\\\")'); The string-literal processor removes one level of backslashes, so that what arrives at the composite-value parser looks like - ("\"\\"). In turn, the string - fed to the text data type's input routine - becomes "\. (If we were working + ("\"\\"). In turn, the string + fed to the text data type's input routine + becomes "\. (If we were working with a data type whose input routine also treated backslashes specially, - bytea for example, we might need as many as eight backslashes + bytea for example, we might need as many as eight backslashes in the command to get one backslash into the stored composite field.) Dollar quoting (see ) can be used to avoid the need to double backslashes. @@ -518,10 +518,10 @@ INSERT ... VALUES (E'("\\"\\\\")'); - The ROW constructor syntax is usually easier to work with + The ROW constructor syntax is usually easier to work with than the composite-literal syntax when writing composite values in SQL commands. - In ROW, individual field values are written the same way + In ROW, individual field values are written the same way they would be written when not members of a composite. diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index 61c801a693..095bf6459c 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -99,7 +99,7 @@ the range table - range table + range table @@ -150,7 +150,7 @@ the target list - target list + target list @@ -168,9 +168,9 @@ DELETE commands don't need a normal target list because they don't produce any result. 
Instead, the rule system - adds a special CTID entry to the empty target list, + adds a special CTID entry to the empty target list, to allow the executor to find the row to be deleted. - (CTID is added when the result relation is an ordinary + (CTID is added when the result relation is an ordinary table. If it is a view, a whole-row variable is added instead, as described in .) @@ -178,7 +178,7 @@ For INSERT commands, the target list describes the new rows that should go into the result relation. It consists of the - expressions in the VALUES clause or the ones from the + expressions in the VALUES clause or the ones from the SELECT clause in INSERT ... SELECT. The first step of the rewrite process adds target list entries for any columns that were not assigned to by @@ -193,8 +193,8 @@ rule system, it contains just the expressions from the SET column = expression part of the command. The planner will handle missing columns by inserting expressions that copy the values - from the old row into the new one. Just as for DELETE, - the rule system adds a CTID or whole-row variable so that + from the old row into the new one. Just as for DELETE, + the rule system adds a CTID or whole-row variable so that the executor can identify the old row to be updated. @@ -218,7 +218,7 @@ this expression is a Boolean that tells whether the operation (INSERT, UPDATE, DELETE, or SELECT) for the - final result row should be executed or not. It corresponds to the WHERE clause + final result row should be executed or not. It corresponds to the WHERE clause of an SQL statement. @@ -230,18 +230,18 @@ - The query's join tree shows the structure of the FROM clause. + The query's join tree shows the structure of the FROM clause. For a simple query like SELECT ... FROM a, b, c, the join tree is just - a list of the FROM items, because we are allowed to join them in - any order. But when JOIN expressions, particularly outer joins, + a list of the FROM items, because we are allowed to join them in + any order. But when JOIN expressions, particularly outer joins, are used, we have to join in the order shown by the joins. - In that case, the join tree shows the structure of the JOIN expressions. The - restrictions associated with particular JOIN clauses (from ON or - USING expressions) are stored as qualification expressions attached + In that case, the join tree shows the structure of the JOIN expressions. The + restrictions associated with particular JOIN clauses (from ON or + USING expressions) are stored as qualification expressions attached to those join-tree nodes. It turns out to be convenient to store - the top-level WHERE expression as a qualification attached to the + the top-level WHERE expression as a qualification attached to the top-level join-tree item, too. So really the join tree represents - both the FROM and WHERE clauses of a SELECT. + both the FROM and WHERE clauses of a SELECT. @@ -252,7 +252,7 @@ - The other parts of the query tree like the ORDER BY + The other parts of the query tree like the ORDER BY clause aren't of interest here. 
The rule system substitutes some entries there while applying rules, but that doesn't have much to do with the fundamentals of the rule @@ -274,8 +274,8 @@ - view - implementation through rules + view + implementation through rules @@ -313,7 +313,7 @@ CREATE RULE "_RETURN" AS ON SELECT TO myview DO INSTEAD - Rules ON SELECT are applied to all queries as the last step, even + Rules ON SELECT are applied to all queries as the last step, even if the command given is an INSERT, UPDATE or DELETE. And they have different semantics from rules on the other command types in that they modify the @@ -322,10 +322,10 @@ CREATE RULE "_RETURN" AS ON SELECT TO myview DO INSTEAD - Currently, there can be only one action in an ON SELECT rule, and it must - be an unconditional SELECT action that is INSTEAD. This restriction was + Currently, there can be only one action in an ON SELECT rule, and it must + be an unconditional SELECT action that is INSTEAD. This restriction was required to make rules safe enough to open them for ordinary users, and - it restricts ON SELECT rules to act like views. + it restricts ON SELECT rules to act like views. @@ -423,12 +423,12 @@ CREATE VIEW shoe_ready AS The CREATE VIEW command for the shoelace view (which is the simplest one we - have) will create a relation shoelace and an entry in + have) will create a relation shoelace and an entry in pg_rewrite that tells that there is a - rewrite rule that must be applied whenever the relation shoelace + rewrite rule that must be applied whenever the relation shoelace is referenced in a query's range table. The rule has no rule - qualification (discussed later, with the non-SELECT rules, since - SELECT rules currently cannot have them) and it is INSTEAD. Note + qualification (discussed later, with the non-SELECT rules, since + SELECT rules currently cannot have them) and it is INSTEAD. Note that rule qualifications are not the same as query qualifications. The action of our rule has a query qualification. The action of the rule is one query tree that is a copy of the @@ -438,7 +438,7 @@ CREATE VIEW shoe_ready AS The two extra range - table entries for NEW and OLD that you can see in + table entries for NEW and OLD that you can see in the pg_rewrite entry aren't of interest for SELECT rules. @@ -533,7 +533,7 @@ SELECT shoelace.sl_name, shoelace.sl_avail, There is one difference however: the subquery's range table has two - extra entries shoelace old and shoelace new. These entries don't + extra entries shoelace old and shoelace new. These entries don't participate directly in the query, since they aren't referenced by the subquery's join tree or target list. The rewriter uses them to store the access privilege check information that was originally present @@ -548,8 +548,8 @@ SELECT shoelace.sl_name, shoelace.sl_avail, the remaining range-table entries in the top query (in this example there are no more), and it will recursively check the range-table entries in the added subquery to see if any of them reference views. (But it - won't expand old or new — otherwise we'd have infinite recursion!) - In this example, there are no rewrite rules for shoelace_data or unit, + won't expand old or new — otherwise we'd have infinite recursion!) + In this example, there are no rewrite rules for shoelace_data or unit, so rewriting is complete and the above is the final result given to the planner. 
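If you want to see this machinery for yourself, the _RETURN rule that implements a view can be inspected in the system catalog; a minimal sketch, substituting any existing view for shoelace:

SELECT c.relname, r.rulename, r.is_instead
FROM pg_rewrite r
     JOIN pg_class c ON c.oid = r.ev_class
WHERE c.relname = 'shoelace';

SELECT pg_get_viewdef('shoelace'::regclass);

The second query decompiles the stored rule action back into the familiar SELECT form of the view definition.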
@@ -671,8 +671,8 @@ SELECT shoe_ready.shoename, shoe_ready.sh_avail, command other than a SELECT, the result relation points to the range-table entry where the result should go. Everything else is absolutely the same. So having two tables - t1 and t2 with columns a and - b, the query trees for the two statements: + t1 and t2 with columns a and + b, the query trees for the two statements: SELECT t2.b FROM t1, t2 WHERE t1.a = t2.a; @@ -685,27 +685,27 @@ UPDATE t1 SET b = t2.b FROM t2 WHERE t1.a = t2.a; - The range tables contain entries for the tables t1 and t2. + The range tables contain entries for the tables t1 and t2. The target lists contain one variable that points to column - b of the range table entry for table t2. + b of the range table entry for table t2. - The qualification expressions compare the columns a of both + The qualification expressions compare the columns a of both range-table entries for equality. - The join trees show a simple join between t1 and t2. + The join trees show a simple join between t1 and t2. @@ -714,7 +714,7 @@ UPDATE t1 SET b = t2.b FROM t2 WHERE t1.a = t2.a; The consequence is, that both query trees result in similar execution plans: They are both joins over the two tables. For the - UPDATE the missing columns from t1 are added to + UPDATE the missing columns from t1 are added to the target list by the planner and the final query tree will read as: @@ -736,7 +736,7 @@ SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a; one is a SELECT command and the other is an UPDATE is handled higher up in the executor, where it knows that this is an UPDATE, and it knows that - this result should go into table t1. But which of the rows + this result should go into table t1. But which of the rows that are there has to be replaced by the new row? @@ -744,12 +744,12 @@ SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a; To resolve this problem, another entry is added to the target list in UPDATE (and also in DELETE) statements: the current tuple ID - (CTID).CTID + (CTID).CTID This is a system column containing the file block number and position in the block for the row. Knowing - the table, the CTID can be used to retrieve the - original row of t1 to be updated. After adding the - CTID to the target list, the query actually looks like: + the table, the CTID can be used to retrieve the + original row of t1 to be updated. After adding the + CTID to the target list, the query actually looks like: SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; @@ -759,9 +759,9 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; the stage. Old table rows aren't overwritten, and this is why ROLLBACK is fast. In an UPDATE, the new result row is inserted into the table (after stripping the - CTID) and in the row header of the old row, which the - CTID pointed to, the cmax and - xmax entries are set to the current command counter + CTID) and in the row header of the old row, which the + CTID pointed to, the cmax and + xmax entries are set to the current command counter and current transaction ID. Thus the old row is hidden, and after the transaction commits the vacuum cleaner can eventually remove the dead row. @@ -780,7 +780,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; The above demonstrates how the rule system incorporates view definitions into the original query tree. 
In the second example, a simple SELECT from one view created a final - query tree that is a join of 4 tables (unit was used twice with + query tree that is a join of 4 tables (unit was used twice with different names). @@ -811,7 +811,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; DELETE? Doing the substitutions described above would give a query tree in which the result relation points at a subquery range-table entry, which will not - work. There are several ways in which PostgreSQL + work. There are several ways in which PostgreSQL can support the appearance of updating a view, however. @@ -821,20 +821,20 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; underlying base relation so that the INSERT, UPDATE, or DELETE is applied to the base relation in the appropriate way. Views that are - simple enough for this are called automatically - updatable. For detailed information on the kinds of view that can + simple enough for this are called automatically + updatable. For detailed information on the kinds of view that can be automatically updated, see . Alternatively, the operation may be handled by a user-provided - INSTEAD OF trigger on the view. + INSTEAD OF trigger on the view. Rewriting works slightly differently in this case. For INSERT, the rewriter does nothing at all with the view, leaving it as the result relation for the query. For UPDATE and DELETE, it's still necessary to expand the - view query to produce the old rows that the command will + view query to produce the old rows that the command will attempt to update or delete. So the view is expanded as normal, but another unexpanded range-table entry is added to the query to represent the view in its capacity as the result relation. @@ -843,21 +843,21 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; The problem that now arises is how to identify the rows to be updated in the view. Recall that when the result relation - is a table, a special CTID entry is added to the target + is a table, a special CTID entry is added to the target list to identify the physical locations of the rows to be updated. This does not work if the result relation is a view, because a view - does not have any CTID, since its rows do not have + does not have any CTID, since its rows do not have actual physical locations. Instead, for an UPDATE - or DELETE operation, a special wholerow + or DELETE operation, a special wholerow entry is added to the target list, which expands to include all columns from the view. The executor uses this value to supply the - old row to the INSTEAD OF trigger. It is + old row to the INSTEAD OF trigger. It is up to the trigger to work out what to update based on the old and new row values. - Another possibility is for the user to define INSTEAD + Another possibility is for the user to define INSTEAD rules that specify substitute actions for INSERT, UPDATE, and DELETE commands on a view. These rules will rewrite the command, typically into a command @@ -868,8 +868,8 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; Note that rules are evaluated first, rewriting the original query before it is planned and executed. Therefore, if a view has - INSTEAD OF triggers as well as rules on INSERT, - UPDATE, or DELETE, then the rules will be + INSTEAD OF triggers as well as rules on INSERT, + UPDATE, or DELETE, then the rules will be evaluated first, and depending on the result, the triggers may not be used at all. 
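For concreteness, here is a minimal sketch of the INSTEAD OF trigger approach just described; the view, function, and trigger names are invented, and shoelace_data is the example table used throughout this chapter:

CREATE VIEW shoelace_overview AS
    SELECT sl_name, sl_avail FROM shoelace_data;

CREATE FUNCTION shoelace_overview_upd() RETURNS trigger AS $$
BEGIN
    -- apply the view update to the underlying base table
    UPDATE shoelace_data SET sl_avail = NEW.sl_avail
        WHERE sl_name = OLD.sl_name;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER shoelace_overview_upd_trig INSTEAD OF UPDATE ON shoelace_overview
    FOR EACH ROW EXECUTE PROCEDURE shoelace_overview_upd();

(A view this simple would in fact be automatically updatable even without the trigger; when an INSTEAD OF trigger is present, it is used in preference to the automatic rewrite.)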
@@ -883,7 +883,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; - If there are no INSTEAD rules or INSTEAD OF + If there are no INSTEAD rules or INSTEAD OF triggers for the view, and the rewriter cannot automatically rewrite the query as an update on the underlying base relation, an error will be thrown because the executor cannot update a view as such. @@ -902,13 +902,13 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; - materialized view - implementation through rules + materialized view + implementation through rules - view - materialized + view + materialized @@ -1030,7 +1030,7 @@ SELECT count(*) FROM words WHERE word = 'caterpiler'; (1 row) - With EXPLAIN ANALYZE, we see: + With EXPLAIN ANALYZE, we see: Aggregate (cost=21763.99..21764.00 rows=1 width=0) (actual time=188.180..188.181 rows=1 loops=1) @@ -1104,7 +1104,7 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; -Rules on <command>INSERT</>, <command>UPDATE</>, and <command>DELETE</> +Rules on <command>INSERT</command>, <command>UPDATE</command>, and <command>DELETE</command> rule @@ -1122,8 +1122,8 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; - Rules that are defined on INSERT, UPDATE, - and DELETE are significantly different from the view rules + Rules that are defined on INSERT, UPDATE, + and DELETE are significantly different from the view rules described in the previous section. First, their CREATE RULE command allows more: @@ -1142,13 +1142,13 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; - They can be INSTEAD or ALSO (the default). + They can be INSTEAD or ALSO (the default). - The pseudorelations NEW and OLD become useful. + The pseudorelations NEW and OLD become useful. @@ -1167,7 +1167,7 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; In many cases, tasks that could be performed by rules - on INSERT/UPDATE/DELETE are better done + on INSERT/UPDATE/DELETE are better done with triggers. Triggers are notationally a bit more complicated, but their semantics are much simpler to understand. Rules tend to have surprising results when the original query contains volatile functions: volatile @@ -1177,9 +1177,9 @@ SELECT word FROM words ORDER BY word <-> 'caterpiler' LIMIT 10; Also, there are some cases that are not supported by these types of rules at - all, notably including WITH clauses in the original query and - multiple-assignment sub-SELECTs in the SET list - of UPDATE queries. This is because copying these constructs + all, notably including WITH clauses in the original query and + multiple-assignment sub-SELECTs in the SET list + of UPDATE queries. This is because copying these constructs into a rule query would result in multiple evaluations of the sub-query, contrary to the express intent of the query's author. @@ -1198,8 +1198,8 @@ CREATE [ OR REPLACE ] RULE name AS in mind. - In the following, update rules means rules that are defined - on INSERT, UPDATE, or DELETE. + In the following, update rules means rules that are defined + on INSERT, UPDATE, or DELETE. @@ -1208,16 +1208,16 @@ CREATE [ OR REPLACE ] RULE name AS object and event given in the CREATE RULE command. For update rules, the rule system creates a list of query trees. Initially the query-tree list is empty. - There can be zero (NOTHING key word), one, or multiple actions. + There can be zero (NOTHING key word), one, or multiple actions. To simplify, we will look at a rule with one action. 
This rule - can have a qualification or not and it can be INSTEAD or - ALSO (the default). + can have a qualification or not and it can be INSTEAD or + ALSO (the default). What is a rule qualification? It is a restriction that tells when the actions of the rule should be done and when not. This - qualification can only reference the pseudorelations NEW and/or OLD, + qualification can only reference the pseudorelations NEW and/or OLD, which basically represent the relation that was given as object (but with a special meaning). @@ -1228,8 +1228,8 @@ CREATE [ OR REPLACE ] RULE name AS - No qualification, with either ALSO or - INSTEAD + No qualification, with either ALSO or + INSTEAD the query tree from the rule action with the original query @@ -1239,7 +1239,7 @@ CREATE [ OR REPLACE ] RULE name AS - Qualification given and ALSO + Qualification given and ALSO the query tree from the rule action with the rule @@ -1250,7 +1250,7 @@ CREATE [ OR REPLACE ] RULE name AS - Qualification given and INSTEAD + Qualification given and INSTEAD the query tree from the rule action with the rule @@ -1262,17 +1262,17 @@ CREATE [ OR REPLACE ] RULE name AS - Finally, if the rule is ALSO, the unchanged original query tree is - added to the list. Since only qualified INSTEAD rules already add the + Finally, if the rule is ALSO, the unchanged original query tree is + added to the list. Since only qualified INSTEAD rules already add the original query tree, we end up with either one or two output query trees for a rule with one action. - For ON INSERT rules, the original query (if not suppressed by INSTEAD) + For ON INSERT rules, the original query (if not suppressed by INSTEAD) is done before any actions added by rules. This allows the actions to - see the inserted row(s). But for ON UPDATE and ON - DELETE rules, the original query is done after the actions added by rules. + see the inserted row(s). But for ON UPDATE and ON + DELETE rules, the original query is done after the actions added by rules. This ensures that the actions can see the to-be-updated or to-be-deleted rows; otherwise, the actions might do nothing because they find no rows matching their qualifications. @@ -1293,12 +1293,12 @@ CREATE [ OR REPLACE ] RULE name AS The query trees found in the actions of the pg_rewrite system catalog are only templates. Since they can reference the range-table entries for - NEW and OLD, some substitutions have to be made before they can be - used. For any reference to NEW, the target list of the original + NEW and OLD, some substitutions have to be made before they can be + used. For any reference to NEW, the target list of the original query is searched for a corresponding entry. If found, that - entry's expression replaces the reference. Otherwise, NEW means the - same as OLD (for an UPDATE) or is replaced by - a null value (for an INSERT). Any reference to OLD is + entry's expression replaces the reference. Otherwise, NEW means the + same as OLD (for an UPDATE) or is replaced by + a null value (for an INSERT). Any reference to OLD is replaced by a reference to the range-table entry that is the result relation. @@ -1313,7 +1313,7 @@ CREATE [ OR REPLACE ] RULE name AS A First Rule Step by Step - Say we want to trace changes to the sl_avail column in the + Say we want to trace changes to the sl_avail column in the shoelace_data relation. 
So we set up a log table and a rule that conditionally writes a log entry when an UPDATE is performed on @@ -1367,7 +1367,7 @@ UPDATE shoelace_data SET sl_avail = 6 WHERE shoelace_data.sl_name = 'sl7'; - There is a rule log_shoelace that is ON UPDATE with the rule + There is a rule log_shoelace that is ON UPDATE with the rule qualification expression: @@ -1384,15 +1384,15 @@ INSERT INTO shoelace_log VALUES ( (This looks a little strange since you cannot normally write - INSERT ... VALUES ... FROM. The FROM + INSERT ... VALUES ... FROM. The FROM clause here is just to indicate that there are range-table entries - in the query tree for new and old. + in the query tree for new and old. These are needed so that they can be referenced by variables in the INSERT command's query tree.) - The rule is a qualified ALSO rule, so the rule system + The rule is a qualified ALSO rule, so the rule system has to return two query trees: the modified rule action and the original query tree. In step 1, the range table of the original query is incorporated into the rule's action query tree. This results in: @@ -1406,7 +1406,7 @@ INSERT INTO shoelace_log VALUES ( In step 2, the rule qualification is added to it, so the result set - is restricted to rows where sl_avail changes: + is restricted to rows where sl_avail changes: INSERT INTO shoelace_log VALUES ( @@ -1417,10 +1417,10 @@ INSERT INTO shoelace_log VALUES ( WHERE new.sl_avail <> old.sl_avail; - (This looks even stranger, since INSERT ... VALUES doesn't have - a WHERE clause either, but the planner and executor will have no + (This looks even stranger, since INSERT ... VALUES doesn't have + a WHERE clause either, but the planner and executor will have no difficulty with it. They need to support this same functionality - anyway for INSERT ... SELECT.) + anyway for INSERT ... SELECT.) @@ -1440,7 +1440,7 @@ INSERT INTO shoelace_log VALUES ( - Step 4 replaces references to NEW by the target list entries from the + Step 4 replaces references to NEW by the target list entries from the original query tree or by the matching variable references from the result relation: @@ -1457,7 +1457,7 @@ INSERT INTO shoelace_log VALUES ( - Step 5 changes OLD references into result relation references: + Step 5 changes OLD references into result relation references: INSERT INTO shoelace_log VALUES ( @@ -1471,7 +1471,7 @@ INSERT INTO shoelace_log VALUES ( - That's it. Since the rule is ALSO, we also output the + That's it. Since the rule is ALSO, we also output the original query tree. In short, the output from the rule system is a list of two query trees that correspond to these statements: @@ -1502,8 +1502,8 @@ UPDATE shoelace_data SET sl_color = 'green' no log entry would get written. In that case, the original query tree does not contain a target list entry for - sl_avail, so NEW.sl_avail will get - replaced by shoelace_data.sl_avail. Thus, the extra + sl_avail, so NEW.sl_avail will get + replaced by shoelace_data.sl_avail. Thus, the extra command generated by the rule is: @@ -1527,8 +1527,8 @@ UPDATE shoelace_data SET sl_avail = 0 WHERE sl_color = 'black'; - four rows in fact get updated (sl1, sl2, sl3, and sl4). - But sl3 already has sl_avail = 0. In this case, the original + four rows in fact get updated (sl1, sl2, sl3, and sl4). + But sl3 already has sl_avail = 0. 
In this case, the original query trees qualification is different and that results in the extra query tree: @@ -1559,7 +1559,7 @@ SELECT shoelace_data.sl_name, 0, Cooperation with Views -viewupdating +viewupdating A simple way to protect view relations from the mentioned @@ -1579,7 +1579,7 @@ CREATE RULE shoe_del_protect AS ON DELETE TO shoe If someone now tries to do any of these operations on the view relation shoe, the rule system will apply these rules. Since the rules have - no actions and are INSTEAD, the resulting list of + no actions and are INSTEAD, the resulting list of query trees will be empty and the whole query will become nothing because there is nothing left to be optimized or executed after the rule system is done with it. @@ -1621,8 +1621,8 @@ CREATE RULE shoelace_del AS ON DELETE TO shoelace - If you want to support RETURNING queries on the view, - you need to make the rules include RETURNING clauses that + If you want to support RETURNING queries on the view, + you need to make the rules include RETURNING clauses that compute the view rows. This is usually pretty trivial for views on a single table, but it's a bit tedious for join views such as shoelace. An example for the insert case is: @@ -1643,9 +1643,9 @@ CREATE RULE shoelace_ins AS ON INSERT TO shoelace FROM unit u WHERE shoelace_data.sl_unit = u.un_name); - Note that this one rule supports both INSERT and - INSERT RETURNING queries on the view — the - RETURNING clause is simply ignored for INSERT. + Note that this one rule supports both INSERT and + INSERT RETURNING queries on the view — the + RETURNING clause is simply ignored for INSERT. @@ -1785,7 +1785,7 @@ UPDATE shoelace_data AND shoelace_data.sl_name = shoelace.sl_name; - Again it's an INSTEAD rule and the previous query tree is trashed. + Again it's an INSTEAD rule and the previous query tree is trashed. Note that this query still uses the view shoelace. But the rule system isn't finished with this step, so it continues and applies the _RETURN rule on it, and we get: @@ -2041,16 +2041,16 @@ GRANT SELECT ON phone_number TO assistant; Nobody except that user (and the database superusers) can access the - phone_data table. But because of the GRANT, + phone_data table. But because of the GRANT, the assistant can run a SELECT on the - phone_number view. The rule system will rewrite the - SELECT from phone_number into a - SELECT from phone_data. + phone_number view. The rule system will rewrite the + SELECT from phone_number into a + SELECT from phone_data. Since the user is the owner of - phone_number and therefore the owner of the rule, the - read access to phone_data is now checked against the user's + phone_number and therefore the owner of the rule, the + read access to phone_data is now checked against the user's privileges and the query is permitted. The check for accessing - phone_number is also performed, but this is done + phone_number is also performed, but this is done against the invoking user, so nobody but the user and the assistant can use it. @@ -2059,19 +2059,19 @@ GRANT SELECT ON phone_number TO assistant; The privileges are checked rule by rule. So the assistant is for now the only one who can see the public phone numbers. But the assistant can set up another view and grant access to that to the public. Then, anyone - can see the phone_number data through the assistant's view. + can see the phone_number data through the assistant's view. What the assistant cannot do is to create a view that directly - accesses phone_data. 
(Actually the assistant can, but it will not work since + accesses phone_data. (Actually the assistant can, but it will not work since every access will be denied during the permission checks.) And as soon as the user notices that the assistant opened - their phone_number view, the user can revoke the assistant's access. Immediately, any + their phone_number view, the user can revoke the assistant's access. Immediately, any access to the assistant's view would fail. One might think that this rule-by-rule checking is a security hole, but in fact it isn't. But if it did not work this way, the assistant - could set up a table with the same columns as phone_number and + could set up a table with the same columns as phone_number and copy the data to there once per day. Then it's the assistant's own data and the assistant can grant access to everyone they want. A GRANT command means, I trust you. @@ -2090,9 +2090,9 @@ CREATE VIEW phone_number AS SELECT person, phone FROM phone_data WHERE phone NOT LIKE '412%'; This view might seem secure, since the rule system will rewrite any - SELECT from phone_number into a - SELECT from phone_data and add the - qualification that only entries where phone does not begin + SELECT from phone_number into a + SELECT from phone_data and add the + qualification that only entries where phone does not begin with 412 are wanted. But if the user can create their own functions, it is not difficult to convince the planner to execute the user-defined function prior to the NOT LIKE expression. @@ -2107,7 +2107,7 @@ $$ LANGUAGE plpgsql COST 0.0000000000000000000001; SELECT * FROM phone_number WHERE tricky(person, phone); - Every person and phone number in the phone_data table will be + Every person and phone number in the phone_data table will be printed as a NOTICE, because the planner will choose to execute the inexpensive tricky function before the more expensive NOT LIKE. Even if the user is @@ -2119,17 +2119,17 @@ SELECT * FROM phone_number WHERE tricky(person, phone); Similar considerations apply to update rules. In the examples of the previous section, the owner of the tables in the example - database could grant the privileges SELECT, - INSERT, UPDATE, and DELETE on - the shoelace view to someone else, but only - SELECT on shoelace_log. The rule action to + database could grant the privileges SELECT, + INSERT, UPDATE, and DELETE on + the shoelace view to someone else, but only + SELECT on shoelace_log. The rule action to write log entries will still be executed successfully, and that other user could see the log entries. But they could not create fake entries, nor could they manipulate or remove existing ones. In this case, there is no possibility of subverting the rules by convincing the planner to alter the order of operations, because the only rule - which references shoelace_log is an unqualified - INSERT. This might not be true in more complex scenarios. + which references shoelace_log is an unqualified + INSERT. This might not be true in more complex scenarios. @@ -2189,7 +2189,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS The PostgreSQL server returns a command - status string, such as INSERT 149592 1, for each + status string, such as INSERT 149592 1, for each command it receives. This is simple enough when there are no rules involved, but what happens when the query is rewritten by rules? 
@@ -2200,10 +2200,10 @@ CREATE VIEW phone_number WITH (security_barrier) AS - If there is no unconditional INSTEAD rule for the query, then + If there is no unconditional INSTEAD rule for the query, then the originally given query will be executed, and its command status will be returned as usual. (But note that if there were - any conditional INSTEAD rules, the negation of their qualifications + any conditional INSTEAD rules, the negation of their qualifications will have been added to the original query. This might reduce the number of rows it processes, and if so the reported status will be affected.) @@ -2212,10 +2212,10 @@ CREATE VIEW phone_number WITH (security_barrier) AS - If there is any unconditional INSTEAD rule for the query, then + If there is any unconditional INSTEAD rule for the query, then the original query will not be executed at all. In this case, the server will return the command status for the last query - that was inserted by an INSTEAD rule (conditional or + that was inserted by an INSTEAD rule (conditional or unconditional) and is of the same command type (INSERT, UPDATE, or DELETE) as the original query. If no query @@ -2228,7 +2228,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS - The programmer can ensure that any desired INSTEAD rule is the one + The programmer can ensure that any desired INSTEAD rule is the one that sets the command status in the second case, by giving it the alphabetically last rule name among the active rules, so that it gets applied last. @@ -2253,7 +2253,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS implemented using the PostgreSQL rule system. One of the things that cannot be implemented by rules are some kinds of constraints, especially foreign keys. It is possible - to place a qualified rule that rewrites a command to NOTHING + to place a qualified rule that rewrites a command to NOTHING if the value of a column does not appear in another table. But then the data is silently thrown away and that's not a good idea. If checks for valid values are required, @@ -2264,7 +2264,7 @@ CREATE VIEW phone_number WITH (security_barrier) AS In this chapter, we focused on using rules to update views. All of the update rule examples in this chapter can also be implemented - using INSTEAD OF triggers on the views. Writing such + using INSTEAD OF triggers on the views. Writing such triggers is often easier than writing rules, particularly if complex logic is required to perform the update. @@ -2298,8 +2298,8 @@ CREATE TABLE software ( Both tables have many thousands of rows and the indexes on - hostname are unique. The rule or trigger should - implement a constraint that deletes rows from software + hostname are unique. The rule or trigger should + implement a constraint that deletes rows from software that reference a deleted computer. The trigger would use this command: @@ -2307,8 +2307,8 @@ DELETE FROM software WHERE hostname = $1; Since the trigger is called for each individual row deleted from - computer, it can prepare and save the plan for this - command and pass the hostname value in the + computer, it can prepare and save the plan for this + command and pass the hostname value in the parameter. 
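A minimal sketch of that trigger in PL/pgSQL; the function and trigger names are invented, and PL/pgSQL prepares and caches the plan for the DELETE automatically:

CREATE FUNCTION computer_del_func() RETURNS trigger AS $$
BEGIN
    -- remove the software rows that reference the deleted computer
    DELETE FROM software WHERE hostname = OLD.hostname;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER computer_del_trig BEFORE DELETE ON computer
    FOR EACH ROW EXECUTE PROCEDURE computer_del_func();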
The rule would be written as: @@ -2324,7 +2324,7 @@ CREATE RULE computer_del AS ON DELETE TO computer DELETE FROM computer WHERE hostname = 'mypc.local.net'; - the table computer is scanned by index (fast), and the + the table computer is scanned by index (fast), and the command issued by the trigger would also use an index scan (also fast). The extra command from the rule would be: @@ -2348,8 +2348,8 @@ Nestloop With the next delete we want to get rid of all the 2000 computers - where the hostname starts with - old. There are two possible commands to do that. One + where the hostname starts with + old. There are two possible commands to do that. One is: @@ -2389,17 +2389,17 @@ Nestloop This shows, that the planner does not realize that the - qualification for hostname in - computer could also be used for an index scan on - software when there are multiple qualification - expressions combined with AND, which is what it does + qualification for hostname in + computer could also be used for an index scan on + software when there are multiple qualification + expressions combined with AND, which is what it does in the regular-expression version of the command. The trigger will get invoked once for each of the 2000 old computers that have to be deleted, and that will result in one index scan over - computer and 2000 index scans over - software. The rule implementation will do it with two + computer and 2000 index scans over + software. The rule implementation will do it with two commands that use indexes. And it depends on the overall size of - the table software whether the rule will still be faster in the + the table software whether the rule will still be faster in the sequential scan situation. 2000 command executions from the trigger over the SPI manager take some time, even if all the index blocks will soon be in the cache. @@ -2412,7 +2412,7 @@ DELETE FROM computer WHERE manufacturer = 'bim'; Again this could result in many rows to be deleted from - computer. So the trigger will again run many commands + computer. So the trigger will again run many commands through the executor. The command generated by the rule will be: @@ -2421,7 +2421,7 @@ DELETE FROM software WHERE computer.manufacturer = 'bim' The plan for that command will again be the nested loop over two - index scans, only using a different index on computer: + index scans, only using a different index on computer: Nestloop diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index 6c4c7f4a8e..c8bc684c0e 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -73,12 +73,12 @@ /usr/local/pgsql/data or /var/lib/pgsql/data are popular. To initialize a database cluster, use the command ,initdb which is + linkend="app-initdb">,initdb which is installed with PostgreSQL. The desired file system location of your database cluster is indicated by the option, for example: -$ initdb -D /usr/local/pgsql/data +$ initdb -D /usr/local/pgsql/data Note that you must execute this command while logged into the PostgreSQL user account, which is @@ -96,9 +96,9 @@ Alternatively, you can run initdb via the - programpg_ctl like so: + programpg_ctl like so: -$ pg_ctl -D /usr/local/pgsql/data initdb +$ pg_ctl -D /usr/local/pgsql/data initdb This may be more intuitive if you are using pg_ctl for starting and stopping the @@ -148,14 +148,14 @@ postgres$ initdb -D /usr/local/pgsql/data initdb's , or options to assign a password to the database superuser. 
- password - of the superuser + password + of the superuser - Also, specify - Non-C and non-POSIX locales rely on the + Non-C and non-POSIX locales rely on the operating system's collation library for character set ordering. This controls the ordering of keys stored in indexes. For this reason, a cluster cannot switch to an incompatible collation library version, @@ -201,14 +201,14 @@ postgres$ initdb -D /usr/local/pgsql/data Many installations create their database clusters on file systems - (volumes) other than the machine's root volume. If you + (volumes) other than the machine's root volume. If you choose to do this, it is not advisable to try to use the secondary volume's topmost directory (mount point) as the data directory. Best practice is to create a directory within the mount-point directory that is owned by the PostgreSQL user, and then create the data directory within that. This avoids permissions problems, particularly for operations such - as pg_upgrade, and it also ensures clean failures if + as pg_upgrade, and it also ensures clean failures if the secondary volume is taken offline. @@ -220,30 +220,30 @@ postgres$ initdb -D /usr/local/pgsql/data Network File Systems - NFSNetwork File Systems - Network Attached Storage (NAS)Network File Systems + NFSNetwork File Systems + Network Attached Storage (NAS)Network File Systems Many installations create their database clusters on network file - systems. Sometimes this is done via NFS, or by using a - Network Attached Storage (NAS) device that uses - NFS internally. PostgreSQL does nothing - special for NFS file systems, meaning it assumes - NFS behaves exactly like locally-connected drives. - If the client or server NFS implementation does not + systems. Sometimes this is done via NFS, or by using a + Network Attached Storage (NAS) device that uses + NFS internally. PostgreSQL does nothing + special for NFS file systems, meaning it assumes + NFS behaves exactly like locally-connected drives. + If the client or server NFS implementation does not provide standard file system semantics, this can cause reliability problems (see ). - Specifically, delayed (asynchronous) writes to the NFS + Specifically, delayed (asynchronous) writes to the NFS server can cause data corruption problems. If possible, mount the - NFS file system synchronously (without caching) to avoid - this hazard. Also, soft-mounting the NFS file system is + NFS file system synchronously (without caching) to avoid + this hazard. Also, soft-mounting the NFS file system is not recommended. - Storage Area Networks (SAN) typically use communication - protocols other than NFS, and may or may not be subject + Storage Area Networks (SAN) typically use communication + protocols other than NFS, and may or may not be subject to hazards of this sort. It's advisable to consult the vendor's documentation concerning data consistency guarantees. PostgreSQL cannot be more reliable than @@ -260,7 +260,7 @@ postgres$ initdb -D /usr/local/pgsql/data Before anyone can access the database, you must start the database server. The database server program is called - postgres.postgres + postgres.postgres The postgres program must know where to find the data it is supposed to use. This is done with the option. Thus, the simplest way to start the @@ -281,8 +281,8 @@ $ postgres -D /usr/local/pgsql/data $ postgres -D /usr/local/pgsql/data >logfile 2>&1 & - It is important to store the server's stdout and - stderr output somewhere, as shown above. 
It will help + It is important to store the server's stdout and + stderr output somewhere, as shown above. It will help for auditing purposes and to diagnose problems. (See for a more thorough discussion of log file handling.) @@ -312,13 +312,13 @@ pg_ctl start -l logfile Normally, you will want to start the database server when the computer boots. - booting - starting the server during + booting + starting the server during Autostart scripts are operating-system-specific. There are a few distributed with PostgreSQL in the - contrib/start-scripts directory. Installing one will require + contrib/start-scripts directory. Installing one will require root privileges. @@ -327,7 +327,7 @@ pg_ctl start -l logfile at boot time. Many systems have a file /etc/rc.local or /etc/rc.d/rc.local. Others use init.d or - rc.d directories. Whatever you do, the server must be + rc.d directories. Whatever you do, the server must be run by the PostgreSQL user account and not by root or any other user. Therefore you probably should form your commands using @@ -348,7 +348,7 @@ su postgres -c 'pg_ctl start -D /usr/local/pgsql/data -l serverlog' For FreeBSD, look at the file contrib/start-scripts/freebsd in the PostgreSQL source distribution. - FreeBSDstart script + FreeBSDstart script @@ -356,7 +356,7 @@ su postgres -c 'pg_ctl start -D /usr/local/pgsql/data -l serverlog' On OpenBSD, add the following lines to the file /etc/rc.local: - OpenBSDstart script + OpenBSDstart script if [ -x /usr/local/pgsql/bin/pg_ctl -a -x /usr/local/pgsql/bin/postgres ]; then su -l postgres -c '/usr/local/pgsql/bin/pg_ctl start -s -l /var/postgresql/log -D /usr/local/pgsql/data' @@ -369,7 +369,7 @@ fi On Linux systems either add - Linuxstart script + Linuxstart script /usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data @@ -421,7 +421,7 @@ WantedBy=multi-user.target FreeBSD or Linux start scripts, depending on preference. - NetBSDstart script + NetBSDstart script @@ -430,12 +430,12 @@ WantedBy=multi-user.target On Solaris, create a file called /etc/init.d/postgresql that contains the following line: - Solarisstart script + Solarisstart script su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data" - Then, create a symbolic link to it in /etc/rc3.d as - S99postgresql. + Then, create a symbolic link to it in /etc/rc3.d as + S99postgresql. @@ -509,7 +509,7 @@ DETAIL: Failed system call was semget(5440126, 17, 03600). does not mean you've run out of disk space. It means your kernel's limit on the number of System V semaphores is smaller than the number + class="osname">System V semaphores is smaller than the number PostgreSQL wants to create. As above, you might be able to work around the problem by starting the server with a reduced number of allowed connections @@ -518,15 +518,15 @@ DETAIL: Failed system call was semget(5440126, 17, 03600). - If you get an illegal system call error, it is likely that + If you get an illegal system call error, it is likely that shared memory or semaphores are not supported in your kernel at all. In that case your only option is to reconfigure the kernel to enable these features. - Details about configuring System V - IPC facilities are given in . + Details about configuring System V + IPC facilities are given in . 
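Before adjusting any of these kernel settings, it can help to inspect the current values. On Linux, for instance (a sketch; the exact commands and output vary by platform):

    $ ipcs -l              # current shared-memory and semaphore limits
    $ sysctl kernel.sem    # SEMMSL, SEMMNS, SEMOPM, SEMMNI, in that order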
@@ -586,10 +586,10 @@ psql: could not connect to server: No such file or directory Managing Kernel Resources - PostgreSQL can sometimes exhaust various operating system + PostgreSQL can sometimes exhaust various operating system resource limits, especially when multiple copies of the server are running on the same system, or in very large installations. This section explains - the kernel resources used by PostgreSQL and the steps you + the kernel resources used by PostgreSQL and the steps you can take to resolve problems related to kernel resource consumption. @@ -605,27 +605,27 @@ psql: could not connect to server: No such file or directory - PostgreSQL requires the operating system to provide - inter-process communication (IPC) features, specifically + PostgreSQL requires the operating system to provide + inter-process communication (IPC) features, specifically shared memory and semaphores. Unix-derived systems typically provide - System V IPC, - POSIX IPC, or both. - Windows has its own implementation of + System V IPC, + POSIX IPC, or both. + Windows has its own implementation of these features and is not discussed here. The complete lack of these facilities is usually manifested by an - Illegal system call error upon server + Illegal system call error upon server start. In that case there is no alternative but to reconfigure your - kernel. PostgreSQL won't work without them. + kernel. PostgreSQL won't work without them. This situation is rare, however, among modern operating systems. - Upon starting the server, PostgreSQL normally allocates + Upon starting the server, PostgreSQL normally allocates a very small amount of System V shared memory, as well as a much larger - amount of POSIX (mmap) shared memory. + amount of POSIX (mmap) shared memory. In addition a significant number of semaphores, which can be either System V or POSIX style, are created at server startup. Currently, POSIX semaphores are used on Linux and FreeBSD systems while other @@ -634,7 +634,7 @@ psql: could not connect to server: No such file or directory - Prior to PostgreSQL 9.3, only System V shared memory + Prior to PostgreSQL 9.3, only System V shared memory was used, so the amount of System V shared memory required to start the server was much larger. If you are running an older version of the server, please consult the documentation for your server version. @@ -642,9 +642,9 @@ psql: could not connect to server: No such file or directory - System V IPC features are typically constrained by + System V IPC features are typically constrained by system-wide allocation limits. - When PostgreSQL exceeds one of these limits, + When PostgreSQL exceeds one of these limits, the server will refuse to start and should leave an instructive error message describing the problem and what to do about it. 
(See also - <systemitem class="osname">System V</> <acronym>IPC</> Parameters + <systemitem class="osname">System V</systemitem> <acronym>IPC</acronym> Parameters - Name - Description - Values needed to run one PostgreSQL instance + Name + Description + Values needed to run one PostgreSQL instance - SHMMAX - Maximum size of shared memory segment (bytes) + SHMMAX + Maximum size of shared memory segment (bytes) at least 1kB, but the default is usually much higher - SHMMIN - Minimum size of shared memory segment (bytes) - 1 + SHMMIN + Minimum size of shared memory segment (bytes) + 1 - SHMALL - Total amount of shared memory available (bytes or pages) + SHMALL + Total amount of shared memory available (bytes or pages) same as SHMMAX if bytes, or ceil(SHMMAX/PAGE_SIZE) if pages, - plus room for other applications + plus room for other applications - SHMSEG - Maximum number of shared memory segments per process - only 1 segment is needed, but the default is much higher + SHMSEG + Maximum number of shared memory segments per process + only 1 segment is needed, but the default is much higher - SHMMNI - Maximum number of shared memory segments system-wide - like SHMSEG plus room for other applications + SHMMNI + Maximum number of shared memory segments system-wide + like SHMSEG plus room for other applications - SEMMNI - Maximum number of semaphore identifiers (i.e., sets) - at least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) plus room for other applications + SEMMNI + Maximum number of semaphore identifiers (i.e., sets) + at least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) plus room for other applications - SEMMNS - Maximum number of semaphores system-wide - ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) * 17 plus room for other applications + SEMMNS + Maximum number of semaphores system-wide + ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16) * 17 plus room for other applications - SEMMSL - Maximum number of semaphores per set - at least 17 + SEMMSL + Maximum number of semaphores per set + at least 17 - SEMMAP - Number of entries in semaphore map - see text + SEMMAP + Number of entries in semaphore map + see text - SEMVMX - Maximum value of semaphore - at least 1000 (The default is often 32767; do not change unless necessary) + SEMVMX + Maximum value of semaphore + at least 1000 (The default is often 32767; do not change unless necessary) @@ -734,28 +734,28 @@ psql: could not connect to server: No such file or directory
- PostgreSQL requires a few bytes of System V shared memory + PostgreSQL requires a few bytes of System V shared memory (typically 48 bytes, on 64-bit platforms) for each copy of the server. On most modern operating systems, this amount can easily be allocated. However, if you are running many copies of the server, or if other applications are also using System V shared memory, it may be necessary to - increase SHMALL, which is the total amount of System V shared - memory system-wide. Note that SHMALL is measured in pages + increase SHMALL, which is the total amount of System V shared + memory system-wide. Note that SHMALL is measured in pages rather than bytes on many systems. Less likely to cause problems is the minimum size for shared - memory segments (SHMMIN), which should be at most - approximately 32 bytes for PostgreSQL (it is + memory segments (SHMMIN), which should be at most + approximately 32 bytes for PostgreSQL (it is usually just 1). The maximum number of segments system-wide - (SHMMNI) or per-process (SHMSEG) are unlikely + (SHMMNI) or per-process (SHMSEG) are unlikely to cause a problem unless your system has them set to zero. When using System V semaphores, - PostgreSQL uses one semaphore per allowed connection + PostgreSQL uses one semaphore per allowed connection (), allowed autovacuum worker process () and allowed background process (), in sets of 16. @@ -763,25 +763,25 @@ psql: could not connect to server: No such file or directory also contain a 17th semaphore which contains a magic number, to detect collision with semaphore sets used by other applications. The maximum number of semaphores in the system - is set by SEMMNS, which consequently must be at least - as high as max_connections plus - autovacuum_max_workers plus max_worker_processes, + is set by SEMMNS, which consequently must be at least + as high as max_connections plus + autovacuum_max_workers plus max_worker_processes, plus one extra for each 16 allowed connections plus workers (see the formula in ). The parameter SEMMNI + linkend="sysvipc-parameters">). The parameter SEMMNI determines the limit on the number of semaphore sets that can exist on the system at one time. Hence this parameter must be at - least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16). + least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16). Lowering the number of allowed connections is a temporary workaround for failures, which are usually confusingly worded No space - left on device, from the function semget. + left on device, from the function semget. In some cases it might also be necessary to increase - SEMMAP to be at least on the order of - SEMMNS. This parameter defines the size of the semaphore + SEMMAP to be at least on the order of + SEMMNS. This parameter defines the size of the semaphore resource map, in which each contiguous block of available semaphores needs an entry. When a semaphore set is freed it is either added to an existing entry that is adjacent to the freed block or it is @@ -792,9 +792,9 @@ psql: could not connect to server: No such file or directory - Various other settings related to semaphore undo, such as - SEMMNU and SEMUME, do not affect - PostgreSQL. + Various other settings related to semaphore undo, such as + SEMMNU and SEMUME, do not affect + PostgreSQL. 
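To make the formulas above concrete, assume the default settings max_connections = 100, autovacuum_max_workers = 3, and max_worker_processes = 8 (a worked sketch; the sysctl line is Linux-specific and its values are illustrative only):

    ceil((100 + 3 + 8 + 5) / 16) = ceil(7.25) = 8    semaphore sets (SEMMNI)
    8 * 17 = 136                                     semaphores (SEMMNS)

    # on Linux, kernel.sem takes SEMMSL SEMMNS SEMOPM SEMMNI:
    $ sysctl -w kernel.sem="250 32000 32 128"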
@@ -810,8 +810,8 @@ psql: could not connect to server: No such file or directory - AIX - AIXIPC configuration + AIX + AIXIPC configuration @@ -833,8 +833,8 @@ psql: could not connect to server: No such file or directory - FreeBSD - FreeBSDIPC configuration + FreeBSD + FreeBSDIPC configuration @@ -861,8 +861,8 @@ kern.ipc.semmnu=256 After modifying these values a reboot is required for the new settings to take effect. - (Note: FreeBSD does not use SEMMAP. Older versions - would accept but ignore a setting for kern.ipc.semmap; + (Note: FreeBSD does not use SEMMAP. Older versions + would accept but ignore a setting for kern.ipc.semmap; newer versions reject it altogether.) @@ -874,8 +874,8 @@ kern.ipc.semmnu=256 - If running in FreeBSD jails by enabling sysctl's - security.jail.sysvipc_allowed, postmasters + If running in FreeBSD jails by enabling sysctl's + security.jail.sysvipc_allowed, postmasters running in different jails should be run by different operating system users. This improves security because it prevents non-root users from interfering with shared memory or semaphores in different jails, @@ -886,19 +886,19 @@ kern.ipc.semmnu=256 - FreeBSD versions before 4.0 work like - OpenBSD (see below). + FreeBSD versions before 4.0 work like + OpenBSD (see below). - NetBSD - NetBSDIPC configuration + NetBSD + NetBSDIPC configuration - In NetBSD 5.0 and later, + In NetBSD 5.0 and later, IPC parameters can be adjusted using sysctl, for example: @@ -916,24 +916,24 @@ kern.ipc.semmnu=256 - NetBSD versions before 5.0 work like - OpenBSD (see below), except that - parameters should be set with the keyword options not - option. + NetBSD versions before 5.0 work like + OpenBSD (see below), except that + parameters should be set with the keyword options not + option. - OpenBSD - OpenBSDIPC configuration + OpenBSD + OpenBSDIPC configuration - The options SYSVSHM and SYSVSEM need + The options SYSVSHM and SYSVSEM need to be enabled when the kernel is compiled. (They are by default.) The maximum size of shared memory is determined by - the option SHMMAXPGS (in pages). The following + the option SHMMAXPGS (in pages). The following shows an example of how to set the various parameters: option SYSVSHM @@ -958,30 +958,30 @@ option SEMMAP=256 - HP-UX - HP-UXIPC configuration + HP-UX + HP-UXIPC configuration The default settings tend to suffice for normal installations. - On HP-UX 10, the factory default for - SEMMNS is 128, which might be too low for larger + On HP-UX 10, the factory default for + SEMMNS is 128, which might be too low for larger database sites. - IPC parameters can be set in the System - Administration Manager (SAM) under + IPC parameters can be set in the System + Administration Manager (SAM) under Kernel - ConfigurationConfigurable Parameters. Choose - Create A New Kernel when you're done. + ConfigurationConfigurable Parameters. Choose + Create A New Kernel when you're done. - Linux - LinuxIPC configuration + Linux + LinuxIPC configuration @@ -1023,13 +1023,13 @@ option SEMMAP=256 - macOS - macOSIPC configuration + macOS + macOSIPC configuration The recommended method for configuring shared memory in macOS - is to create a file named /etc/sysctl.conf, + is to create a file named /etc/sysctl.conf, containing variable assignments such as: kern.sysv.shmmax=4194304 @@ -1039,32 +1039,32 @@ kern.sysv.shmseg=8 kern.sysv.shmall=1024 Note that in some macOS versions, - all five shared-memory parameters must be set in - /etc/sysctl.conf, else the values will be ignored. 
+ all five shared-memory parameters must be set in + /etc/sysctl.conf, else the values will be ignored. Beware that recent releases of macOS ignore attempts to set - SHMMAX to a value that isn't an exact multiple of 4096. + SHMMAX to a value that isn't an exact multiple of 4096. - SHMALL is measured in 4 kB pages on this platform. + SHMALL is measured in 4 kB pages on this platform. In older macOS versions, you will need to reboot to have changes in the shared memory parameters take effect. As of 10.5 it is possible to - change all but SHMMNI on the fly, using - sysctl. But it's still best to set up your preferred - values via /etc/sysctl.conf, so that the values will be + change all but SHMMNI on the fly, using + sysctl. But it's still best to set up your preferred + values via /etc/sysctl.conf, so that the values will be kept across reboots. - The file /etc/sysctl.conf is only honored in macOS + The file /etc/sysctl.conf is only honored in macOS 10.3.9 and later. If you are running a previous 10.3.x release, - you must edit the file /etc/rc + you must edit the file /etc/rc and change the values in the following commands: sysctl -w kern.sysv.shmmax @@ -1074,27 +1074,27 @@ sysctl -w kern.sysv.shmseg sysctl -w kern.sysv.shmall Note that - /etc/rc is usually overwritten by macOS system updates, + /etc/rc is usually overwritten by macOS system updates, so you should expect to have to redo these edits after each update. In macOS 10.2 and earlier, instead edit these commands in the file - /System/Library/StartupItems/SystemTuning/SystemTuning. + /System/Library/StartupItems/SystemTuning/SystemTuning. - Solaris 2.6 to 2.9 (Solaris + Solaris 2.6 to 2.9 (Solaris 6 to Solaris 9) - SolarisIPC configuration + SolarisIPC configuration The relevant settings can be changed in - /etc/system, for example: + /etc/system, for example: set shmsys:shminfo_shmmax=0x2000000 set shmsys:shminfo_shmmin=1 @@ -1114,30 +1114,30 @@ set semsys:seminfo_semmsl=32 - Solaris 2.10 (Solaris + Solaris 2.10 (Solaris 10) and later - OpenSolaris + OpenSolaris In Solaris 10 and later, and OpenSolaris, the default shared memory and semaphore settings are good enough for most - PostgreSQL applications. Solaris now defaults - to a SHMMAX of one-quarter of system RAM. + PostgreSQL applications. Solaris now defaults + to a SHMMAX of one-quarter of system RAM. To further adjust this setting, use a project setting associated - with the postgres user. For example, run the - following as root: + with the postgres user. For example, run the + following as root: projadd -c "PostgreSQL DB User" -K "project.max-shm-memory=(privileged,8GB,deny)" -U postgres -G postgres user.postgres - This command adds the user.postgres project and - sets the shared memory maximum for the postgres + This command adds the user.postgres project and + sets the shared memory maximum for the postgres user to 8GB, and takes effect the next time that user logs - in, or when you restart PostgreSQL (not reload). - The above assumes that PostgreSQL is run by - the postgres user in the postgres + in, or when you restart PostgreSQL (not reload). + The above assumes that PostgreSQL is run by + the postgres user in the postgres group. No server reboot is required. @@ -1152,11 +1152,11 @@ project.max-msg-ids=(priv,4096,deny) - Additionally, if you are running PostgreSQL + Additionally, if you are running PostgreSQL inside a zone, you may need to raise the zone resource usage limits as well. 
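After running projadd as shown above, the new limit can be verified with prctl (a sketch, assuming the same user.postgres project):

    $ prctl -n project.max-shm-memory -i project user.postgres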
See "Chapter2: Projects and Tasks" in the - System Administrator's Guide for more - information on projects and prctl. + System Administrator's Guide for more + information on projects and prctl. @@ -1259,7 +1259,7 @@ RemoveIPC=no limit can only be changed by the root user. The system call setrlimit is responsible for setting these parameters. The shell's built-in command ulimit - (Bourne shells) or limit (csh) is + (Bourne shells) or limit (csh) is used to control the resource limits from the command line. On BSD-derived systems the file /etc/login.conf controls the various resource limits set during login. See the @@ -1320,7 +1320,7 @@ default:\ processes to open large numbers of files; if more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the - system-wide limit, you can set PostgreSQL's PostgreSQL's configuration parameter to limit the consumption of open files. @@ -1380,36 +1380,36 @@ Out of Memory: Killed process 12345 (postgres). system running out of memory, you can avoid the problem by changing your configuration. In some cases, it may help to lower memory-related configuration parameters, particularly - shared_buffers - and work_mem. In + shared_buffers + and work_mem. In other cases, the problem may be caused by allowing too many connections to the database server itself. In many cases, it may be better to reduce - max_connections + max_connections and instead make use of external connection-pooling software.
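A sketch of the kind of adjustment described here (the values are illustrative, not recommendations):

    # postgresql.conf
    shared_buffers = 512MB
    work_mem = 4MB
    max_connections = 50     # with external connection pooling in front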
On Linux 2.6 and later, it is possible to modify the - kernel's behavior so that it will not overcommit memory. + kernel's behavior so that it will not overcommit memory. Although this setting will not prevent the OOM killer from being invoked + url="http://lwn.net/Articles/104179/">OOM killer from being invoked altogether, it will lower the chances significantly and will therefore lead to more robust system behavior. This is done by selecting strict overcommit mode via sysctl: sysctl -w vm.overcommit_memory=2 - or placing an equivalent entry in /etc/sysctl.conf. + or placing an equivalent entry in /etc/sysctl.conf. You might also wish to modify the related setting - vm.overcommit_ratio. For details see the kernel documentation + vm.overcommit_ratio. For details see the kernel documentation file . Another approach, which can be used with or without altering - vm.overcommit_memory, is to set the process-specific - OOM score adjustment value for the postmaster process to - -1000, thereby guaranteeing it will not be targeted by the OOM + vm.overcommit_memory, is to set the process-specific + OOM score adjustment value for the postmaster process to + -1000, thereby guaranteeing it will not be targeted by the OOM killer. The simplest way to do this is to execute echo -1000 > /proc/self/oom_score_adj @@ -1426,33 +1426,33 @@ export PG_OOM_ADJUST_VALUE=0 These settings will cause postmaster child processes to run with the normal OOM score adjustment of zero, so that the OOM killer can still target them at need. You could use some other value for - PG_OOM_ADJUST_VALUE if you want the child processes to run - with some other OOM score adjustment. (PG_OOM_ADJUST_VALUE + PG_OOM_ADJUST_VALUE if you want the child processes to run + with some other OOM score adjustment. (PG_OOM_ADJUST_VALUE can also be omitted, in which case it defaults to zero.) If you do not - set PG_OOM_ADJUST_FILE, the child processes will run with the + set PG_OOM_ADJUST_FILE, the child processes will run with the same OOM score adjustment as the postmaster, which is unwise since the whole point is to ensure that the postmaster has a preferential setting. - Older Linux kernels do not offer /proc/self/oom_score_adj, + Older Linux kernels do not offer /proc/self/oom_score_adj, but may have a previous version of the same functionality called - /proc/self/oom_adj. This works the same except the disable - value is -17 not -1000. + /proc/self/oom_adj. This works the same except the disable + value is -17 not -1000. Some vendors' Linux 2.4 kernels are reported to have early versions of the 2.6 overcommit sysctl parameter. However, setting - vm.overcommit_memory to 2 + vm.overcommit_memory to 2 on a 2.4 kernel that does not have the relevant code will make things worse, not better. It is recommended that you inspect the actual kernel source code (see the function - vm_enough_memory in the file mm/mmap.c) + vm_enough_memory in the file mm/mmap.c) to verify what is supported in your kernel before you try this in a 2.4 - installation. The presence of the overcommit-accounting - documentation file should not be taken as evidence that the + installation. The presence of the overcommit-accounting + documentation file should not be taken as evidence that the feature is there. If in any doubt, consult a kernel expert or your kernel vendor. 
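Putting the OOM-related pieces above together, a start-script fragment might look like this (a sketch; the paths are illustrative, and note that a login shell started with su - would discard the exported variables):

    # run as root, just before launching the postmaster
    echo -1000 > /proc/self/oom_score_adj
    export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
    export PG_OOM_ADJUST_VALUE=0
    su postgres -c 'pg_ctl start -D /usr/local/pgsql/data -l serverlog'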
@@ -1473,7 +1473,7 @@ export PG_OOM_ADJUST_VALUE=0 number of huge pages needed, start PostgreSQL without huge pages enabled and check the postmaster's VmPeak value, as well as the system's - huge page size, using the /proc file system. This might + huge page size, using the /proc file system. This might look like: $ head -1 $PGDATA/postmaster.pid @@ -1509,8 +1509,8 @@ $ grep Huge /proc/meminfo It may also be necessary to give the database server's operating system user permission to use huge pages by setting - vm.hugetlb_shm_group via sysctl, and/or - give permission to lock memory with ulimit -l. + vm.hugetlb_shm_group via sysctl, and/or + give permission to lock memory with ulimit -l. @@ -1518,8 +1518,8 @@ $ grep Huge /proc/meminfo PostgreSQL is to use them when possible and to fall back to normal pages when failing. To enforce the use of huge pages, you can set - to on in postgresql.conf. - Note that with this setting PostgreSQL will fail to + to on in postgresql.conf. + Note that with this setting PostgreSQL will fail to start if not enough huge pages are available. @@ -1537,7 +1537,7 @@ $ grep Huge /proc/meminfo Shutting Down the Server - shutdown + shutdown @@ -1547,7 +1547,7 @@ $ grep Huge /proc/meminfo - SIGTERMSIGTERM + SIGTERMSIGTERM This is the Smart Shutdown mode. @@ -1566,7 +1566,7 @@ $ grep Huge /proc/meminfo - SIGINTSIGINT + SIGINTSIGINT This is the Fast Shutdown mode. @@ -1581,7 +1581,7 @@ $ grep Huge /proc/meminfo - SIGQUITSIGQUIT + SIGQUITSIGQUIT This is the Immediate Shutdown mode. @@ -1602,9 +1602,9 @@ $ grep Huge /proc/meminfo The program provides a convenient interface for sending these signals to shut down the server. - Alternatively, you can send the signal directly using kill + Alternatively, you can send the signal directly using kill on non-Windows systems. - The PID of the postgres process can be + The PID of the postgres process can be found using the ps program, or from the file postmaster.pid in the data directory. For example, to do a fast shutdown: @@ -1628,15 +1628,15 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` To terminate an individual session while allowing other sessions to - continue, use pg_terminate_backend() (see pg_terminate_backend() (see ) or send a - SIGTERM signal to the child process associated with + SIGTERM signal to the child process associated with the session. - Upgrading a <productname>PostgreSQL</> Cluster + Upgrading a <productname>PostgreSQL</productname> Cluster upgrading @@ -1649,7 +1649,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` This section discusses how to upgrade your database data from one - PostgreSQL release to a newer one. + PostgreSQL release to a newer one. @@ -1676,7 +1676,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` - For major releases of PostgreSQL, the + For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. The traditional method for moving data to a new major version is to dump and reload the database, though this can be slow. A @@ -1698,7 +1698,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`PostgreSQL major upgrade, consider the + testing a PostgreSQL major upgrade, consider the following categories of possible changes: @@ -1728,7 +1728,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`Library API - Typically libraries like libpq only add new + Typically libraries like libpq only add new functionality, again unless mentioned in the release notes. 
@@ -1757,13 +1757,13 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` - Upgrading Data via <application>pg_dumpall</> + Upgrading Data via <application>pg_dumpall</application> One upgrade method is to dump data from one major version of - PostgreSQL and reload it in another — to do - this, you must use a logical backup tool like - pg_dumpall; file system + PostgreSQL and reload it in another — to do + this, you must use a logical backup tool like + pg_dumpall; file system level backup methods will not work. (There are checks in place that prevent you from using a data directory with an incompatible version of PostgreSQL, so no great harm can be done by @@ -1771,18 +1771,18 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` - It is recommended that you use the pg_dump and - pg_dumpall programs from the newer + It is recommended that you use the pg_dump and + pg_dumpall programs from the newer version of - PostgreSQL, to take advantage of enhancements + PostgreSQL, to take advantage of enhancements that might have been made in these programs. Current releases of the dump programs can read data from any server version back to 7.0. These instructions assume that your existing installation is under the - /usr/local/pgsql directory, and that the data area is in - /usr/local/pgsql/data. Substitute your paths + /usr/local/pgsql directory, and that the data area is in + /usr/local/pgsql/data. Substitute your paths appropriately. @@ -1792,7 +1792,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`/usr/local/pgsql/data/pg_hba.conf + permissions in the file /usr/local/pgsql/data/pg_hba.conf (or equivalent) to disallow access from everyone except you. See for additional information on access control. @@ -1806,7 +1806,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` -pg_dumpall > outputfile +pg_dumpall > outputfile @@ -1830,11 +1830,11 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` Shut down the old server: -pg_ctl stop +pg_ctl stop - On systems that have PostgreSQL started at boot time, + On systems that have PostgreSQL started at boot time, there is probably a start-up file that will accomplish the same thing. For - example, on a Red Hat Linux system one + example, on a Red Hat Linux system one might find that this works: /etc/rc.d/init.d/postgresql stop @@ -1853,7 +1853,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` -mv /usr/local/pgsql /usr/local/pgsql.old +mv /usr/local/pgsql /usr/local/pgsql.old (Be sure to move the directory as a single unit so relative paths remain unchanged.) @@ -1873,15 +1873,15 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` -/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data +/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
- Restore your previous pg_hba.conf and any - postgresql.conf modifications. + Restore your previous pg_hba.conf and any + postgresql.conf modifications. @@ -1890,7 +1890,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` -/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data +/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
@@ -1899,9 +1899,9 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`
     Finally, restore your data from backup with:
-/usr/local/pgsql/bin/psql -d postgres -f <replaceable>outputfile</>
+/usr/local/pgsql/bin/psql -d postgres -f <replaceable>outputfile</replaceable>
-    using the new <application>psql</>.
+    using the new <application>psql</application>.
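In condensed form, the dump-and-reload procedure above is (a sketch; it assumes the new binaries have already been installed under /usr/local/pgsql):

    pg_dumpall > outputfile                      # with the old server still running
    pg_ctl stop                                  # stop the old server
    mv /usr/local/pgsql /usr/local/pgsql.old     # move the old installation aside
    /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
    # restore pg_hba.conf and postgresql.conf edits, start the new server, then:
    /usr/local/pgsql/bin/psql -d postgres -f outputfile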
@@ -1920,16 +1920,16 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - Upgrading Data via <application>pg_upgrade</> + Upgrading Data via <application>pg_upgrade</application> The module allows an installation to - be migrated in-place from one major PostgreSQL + be migrated in-place from one major PostgreSQL version to another. Upgrades can be performed in minutes, - particularly with @@ -1939,12 +1939,12 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 It is also possible to use certain replication methods, such as - Slony, to create a standby server with the updated version of - PostgreSQL. This is possible because Slony supports + Slony, to create a standby server with the updated version of + PostgreSQL. This is possible because Slony supports replication between different major versions of - PostgreSQL. The standby can be on the same computer or + PostgreSQL. The standby can be on the same computer or a different computer. Once it has synced up with the master server - (running the older version of PostgreSQL), you can + (running the older version of PostgreSQL), you can switch masters and make the standby the master and shut down the older database instance. Such a switch-over results in only several seconds of downtime for an upgrade. @@ -1966,28 +1966,28 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 server is down, it is possible for a local user to spoof the normal server by starting their own server. The spoof server could read passwords and queries sent by clients, but could not return any data - because the PGDATA directory would still be secure because + because the PGDATA directory would still be secure because of directory permissions. Spoofing is possible because any user can start a database server; a client cannot identify an invalid server unless it is specially configured. - One way to prevent spoofing of local + One way to prevent spoofing of local connections is to use a Unix domain socket directory () that has write permission only for a trusted local user. This prevents a malicious user from creating their own socket file in that directory. If you are concerned that - some applications might still reference /tmp for the + some applications might still reference /tmp for the socket file and hence be vulnerable to spoofing, during operating system - startup create a symbolic link /tmp/.s.PGSQL.5432 that points + startup create a symbolic link /tmp/.s.PGSQL.5432 that points to the relocated socket file. You also might need to modify your - /tmp cleanup script to prevent removal of the symbolic link. + /tmp cleanup script to prevent removal of the symbolic link. - Another option for local connections is for clients to use - requirepeer + Another option for local connections is for clients to use + requirepeer to specify the required owner of the server process connected to the socket. @@ -1996,11 +1996,11 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 To prevent spoofing on TCP connections, the best solution is to use SSL certificates and make sure that clients check the server's certificate. To do that, the server - must be configured to accept only hostssl connections (hostssl connections () and have SSL key and certificate files (). The TCP client must connect using - sslmode=verify-ca or - verify-full and have the appropriate root certificate + sslmode=verify-ca or + verify-full and have the appropriate root certificate file installed ().
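On the client side, such a connection might be made as follows (a sketch; the host name is illustrative, and the root certificate is assumed to be in libpq's default location, ~/.postgresql/root.crt):

    $ psql "host=db.example.com dbname=postgres sslmode=verify-full"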
@@ -2091,7 +2091,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - The MD5 authentication method double-encrypts the + The MD5 authentication method double-encrypts the password on the client before sending it to the server. It first MD5-encrypts it based on the user name, and then encrypts it based on a random salt sent by the server when the database @@ -2111,12 +2111,12 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 SSL connections encrypt all data sent across the network: the password, the queries, and the data returned. The - pg_hba.conf file allows administrators to specify - which hosts can use non-encrypted connections (host) + pg_hba.conf file allows administrators to specify + which hosts can use non-encrypted connections (host) and which require SSL-encrypted connections - (hostssl). Also, clients can specify that they - connect to servers only via SSL. Stunnel or - SSH can also be used to encrypt transmissions. + (hostssl). Also, clients can specify that they + connect to servers only via SSL. Stunnel or + SSH can also be used to encrypt transmissions. @@ -2131,7 +2131,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 on each side, but this provides stronger verification of identity than the mere use of passwords. It prevents a computer from pretending to be the server just long enough to read the password - sent by the client. It also helps prevent man in the middle + sent by the client. It also helps prevent man in the middle attacks where a computer between the client and server pretends to be the server and reads and passes all data between the client and server. @@ -2166,32 +2166,32 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - PostgreSQL has native support for using - SSL connections to encrypt client/server communications + PostgreSQL has native support for using + SSL connections to encrypt client/server communications for increased security. This requires that OpenSSL is installed on both client and - server systems and that support in PostgreSQL is + server systems and that support in PostgreSQL is enabled at build time (see ). - With SSL support compiled in, the - PostgreSQL server can be started with - SSL enabled by setting the parameter - to on in - postgresql.conf. The server will listen for both normal - and SSL connections on the same TCP port, and will negotiate - with any connecting client on whether to use SSL. By + With SSL support compiled in, the + PostgreSQL server can be started with + SSL enabled by setting the parameter + to on in + postgresql.conf. The server will listen for both normal + and SSL connections on the same TCP port, and will negotiate + with any connecting client on whether to use SSL. By default, this is at the client's option; see about how to set up the server to require - use of SSL for some or all connections. + use of SSL for some or all connections. PostgreSQL reads the system-wide OpenSSL configuration file. By default, this file is named openssl.cnf and is located in the - directory reported by openssl version -d. + directory reported by openssl version -d. This default can be overridden by setting environment variable OPENSSL_CONF to the name of the desired configuration file. @@ -2202,13 +2202,13 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 ciphers can be specified in the OpenSSL configuration file, you can specify ciphers specifically for use by the database server by modifying in - postgresql.conf. + postgresql.conf.
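A minimal server-side configuration along these lines might be (a sketch; the parameter referenced above is presumably ssl_ciphers, and the cipher string shown is its shipped default, given only for illustration):

    # postgresql.conf
    ssl = on
    ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'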
It is possible to have authentication without encryption overhead by - using NULL-SHA or NULL-MD5 ciphers. However, + using NULL-SHA or NULL-MD5 ciphers. However, a man-in-the-middle could read and pass communications between client and server. Also, encryption overhead is minimal compared to the overhead of authentication. For these reasons NULL ciphers are not @@ -2217,9 +2217,9 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - To start in SSL mode, files containing the server certificate + To start in SSL mode, files containing the server certificate and private key must exist. By default, these files are expected to be - named server.crt and server.key, respectively, in + named server.crt and server.key, respectively, in the server's data directory, but other names and locations can be specified using the configuration parameters and . @@ -2248,11 +2248,11 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 In some cases, the server certificate might be signed by an - intermediate certificate authority, rather than one that is + intermediate certificate authority, rather than one that is directly trusted by clients. To use such a certificate, append the - certificate of the signing authority to the server.crt file, + certificate of the signing authority to the server.crt file, then its parent authority's certificate, and so on up to a certificate - authority, root or intermediate, that is trusted by + authority, root or intermediate, that is trusted by clients, i.e. signed by a certificate in the clients' root.crt files. @@ -2267,7 +2267,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 directory, set the parameter in postgresql.conf to root.crt, and add the authentication option clientcert=1 to the - appropriate hostssl line(s) in pg_hba.conf. + appropriate hostssl line(s) in pg_hba.conf. A certificate will then be requested from the client during SSL connection startup. (See for a description of how to set up certificates on the client.) The server will @@ -2276,21 +2276,21 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - If intermediate CAs appear in + If intermediate CAs appear in root.crt, the file must also contain certificate - chains to their root CAs. Certificate Revocation List + chains to their root CAs. Certificate Revocation List (CRL) entries are also checked if the parameter is set. (See + url="http://h71000.www7.hp.com/doc/83final/ba554_90007/ch04s02.html"> for diagrams showing SSL certificate usage.) The clientcert authentication option is available for - all authentication methods, but only in pg_hba.conf lines - specified as hostssl. When clientcert is + all authentication methods, but only in pg_hba.conf lines + specified as hostssl. When clientcert is not specified or is set to 0, the server will still verify any presented client certificates against its CA file, if one is configured — but it will not insist that a client certificate be presented. @@ -2306,11 +2306,11 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 If you are setting up client certificates, you may wish to use - the cert authentication method, so that the certificates + the cert authentication method, so that the certificates control user authentication as well as providing connection security. See for details. (It is not necessary to specify clientcert=1 explicitly when using - the cert authentication method.) + the cert authentication method.) 
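A pg_hba.conf entry combining these ideas might look like this (a sketch; the address range is illustrative):

    # TYPE   DATABASE  USER  ADDRESS          METHOD
    hostssl  all       all   192.168.1.0/24   cert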
@@ -2337,13 +2337,13 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - ($PGDATA/server.crt) + ($PGDATA/server.crt) server certificate sent to client to indicate server's identity - ($PGDATA/server.key) + ($PGDATA/server.key) server private key proves server certificate was sent by the owner; does not indicate certificate owner is trustworthy @@ -2368,7 +2368,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 The server reads these files at server start and whenever the server - configuration is reloaded. On Windows + configuration is reloaded. On Windows systems, they are also re-read whenever a new backend process is spawned for a new client connection. @@ -2377,7 +2377,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 If an error in these files is detected at server start, the server will refuse to start. But if an error is detected during a configuration reload, the files are ignored and the old SSL configuration continues to - be used. On Windows systems, if an error in + be used. On Windows systems, if an error in these files is detected at backend start, that backend will be unable to establish an SSL connection. In all these cases, the error condition is reported in the server log. @@ -2390,10 +2390,10 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 To create a quick self-signed certificate for the server, valid for 365 days, use the following OpenSSL command, - replacing yourdomain.com with the server's host name: + replacing yourdomain.com with the server's host name: openssl req -new -x509 -days 365 -nodes -text -out server.crt \ - -keyout server.key -subj "/CN=yourdomain.com" + -keyout server.key -subj "/CN=yourdomain.com" Then do: @@ -2402,15 +2402,15 @@ chmod og-rwx server.key because the server will reject the file if its permissions are more liberal than this. For more details on how to create your server private key and - certificate, refer to the OpenSSL documentation. + certificate, refer to the OpenSSL documentation. A self-signed certificate can be used for testing, but a certificate - signed by a certificate authority (CA) (either one of the - global CAs or a local one) should be used in production + signed by a certificate authority (CA) (either one of the + global CAs or a local one) should be used in production so that clients can verify the server's identity. If all the clients - are local to the organization, using a local CA is + are local to the organization, using a local CA is recommended. @@ -2511,8 +2511,8 @@ ssh -L 63333:db.foo.com:5432 joe@shell.foo.com - Registering <application>Event Log</> on <systemitem - class="osname">Windows</> + Registering <application>Event Log</application> on <systemitem + class="osname">Windows</systemitem> event log @@ -2520,11 +2520,11 @@ ssh -L 63333:db.foo.com:5432 joe@shell.foo.com - To register a Windows - event log library with the operating system, + To register a Windows + event log library with the operating system, issue this command: -regsvr32 pgsql_library_directory/pgevent.dll +regsvr32 pgsql_library_directory/pgevent.dll This creates registry entries used by the event viewer, under the default event source named PostgreSQL. 
@@ -2535,15 +2535,15 @@ ssh -L 63333:db.foo.com:5432 joe@shell.foo.com ), use the /n and /i options: -regsvr32 /n /i:event_source_name pgsql_library_directory/pgevent.dll +regsvr32 /n /i:event_source_name pgsql_library_directory/pgevent.dll - To unregister the event log library from + To unregister the event log library from the operating system, issue this command: -regsvr32 /u [/i:event_source_name] pgsql_library_directory/pgevent.dll +regsvr32 /u [/i:event_source_name] pgsql_library_directory/pgevent.dll diff --git a/doc/src/sgml/seg.sgml b/doc/src/sgml/seg.sgml index 5d1f546b53..c7e9b5f4af 100644 --- a/doc/src/sgml/seg.sgml +++ b/doc/src/sgml/seg.sgml @@ -8,9 +8,9 @@ - This module implements a data type seg for + This module implements a data type seg for representing line segments, or floating point intervals. - seg can represent uncertainty in the interval endpoints, + seg can represent uncertainty in the interval endpoints, making it especially useful for representing laboratory measurements. @@ -92,40 +92,40 @@ test=> select '6.25 .. 6.50'::seg as "pH"; - In , x, y, and - delta denote - floating-point numbers. x and y, but - not delta, can be preceded by a certainty indicator. + In , x, y, and + delta denote + floating-point numbers. x and y, but + not delta, can be preceded by a certainty indicator. - <type>seg</> External Representations + <type>seg</type> External Representations - x + x Single value (zero-length interval) - x .. y - Interval from x to y + x .. y + Interval from x to y - x (+-) delta - Interval from x - delta to - x + delta + x (+-) delta + Interval from x - delta to + x + delta - x .. - Open interval with lower bound x + x .. + Open interval with lower bound x - .. x - Open interval with upper bound x + .. x + Open interval with upper bound x @@ -133,7 +133,7 @@ test=> select '6.25 .. 6.50'::seg as "pH";
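The notations above can be exercised directly in SQL (a sketch, assuming the seg extension is installed; the 5(+-)0.3 behavior matches the examples that follow):

    SELECT '5.0'::seg;          -- single value
    SELECT '4.7 .. 5.3'::seg;   -- explicit interval
    SELECT '5(+-)0.3'::seg;     -- same interval; the (+-) form is not preserved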
- Examples of Valid <type>seg</> Input + Examples of Valid <type>seg</type> Input @@ -146,8 +146,8 @@ test=> select '6.25 .. 6.50'::seg as "pH"; ~5.0 Creates a zero-length segment and records - ~ in the data. ~ is ignored - by seg operations, but + ~ in the data. ~ is ignored + by seg operations, but is preserved as a comment. @@ -169,7 +169,7 @@ test=> select '6.25 .. 6.50'::seg as "pH"; 5(+-)0.3 Creates an interval 4.7 .. 5.3. - Note that the (+-) notation isn't preserved. + Note that the (+-) notation isn't preserved. @@ -197,17 +197,17 @@ test=> select '6.25 .. 6.50'::seg as "pH";
- Because ... is widely used in data sources, it is allowed - as an alternative spelling of ... Unfortunately, this + Because ... is widely used in data sources, it is allowed + as an alternative spelling of ... Unfortunately, this creates a parsing ambiguity: it is not clear whether the upper bound - in 0...23 is meant to be 23 or 0.23. + in 0...23 is meant to be 23 or 0.23. This is resolved by requiring at least one digit before the decimal - point in all numbers in seg input. + point in all numbers in seg input. - As a sanity check, seg rejects intervals with the lower bound - greater than the upper, for example 5 .. 2. + As a sanity check, seg rejects intervals with the lower bound + greater than the upper, for example 5 .. 2. @@ -216,7 +216,7 @@ test=> select '6.25 .. 6.50'::seg as "pH"; Precision - seg values are stored internally as pairs of 32-bit floating point + seg values are stored internally as pairs of 32-bit floating point numbers. This means that numbers with more than 7 significant digits will be truncated. @@ -235,8 +235,8 @@ test=> select '6.25 .. 6.50'::seg as "pH"; Usage - The seg module includes a GiST index operator class for - seg values. + The seg module includes a GiST index operator class for + seg values. The operators supported by the GiST operator class are shown in . @@ -304,8 +304,8 @@ test=> select '6.25 .. 6.50'::seg as "pH"; - (Before PostgreSQL 8.2, the containment operators @> and <@ were - respectively called @ and ~. These names are still available, but are + (Before PostgreSQL 8.2, the containment operators @> and <@ were + respectively called @ and ~. These names are still available, but are deprecated and will eventually be retired. Notice that the old names are reversed from the convention formerly followed by the core geometric data types!) @@ -349,11 +349,11 @@ test=> select '6.25 .. 6.50'::seg as "pH"; Notes - For examples of usage, see the regression test sql/seg.sql. + For examples of usage, see the regression test sql/seg.sql. - The mechanism that converts (+-) to regular ranges + The mechanism that converts (+-) to regular ranges isn't completely accurate in determining the number of significant digits for the boundaries. For example, it adds an extra digit to the lower boundary if the resulting interval includes a power of ten: @@ -369,7 +369,7 @@ postgres=> select '10(+-)1'::seg as seg; The performance of an R-tree index can largely depend on the initial order of input values. It may be very helpful to sort the input table - on the seg column; see the script sort-segments.pl + on the seg column; see the script sort-segments.pl for an example. diff --git a/doc/src/sgml/sepgsql.sgml b/doc/src/sgml/sepgsql.sgml index 6a8d3765a2..c6c89a389d 100644 --- a/doc/src/sgml/sepgsql.sgml +++ b/doc/src/sgml/sepgsql.sgml @@ -8,8 +8,8 @@ - sepgsql is a loadable module that supports label-based - mandatory access control (MAC) based on SELinux security + sepgsql is a loadable module that supports label-based + mandatory access control (MAC) based on SELinux security policy. @@ -25,10 +25,10 @@ Overview - This module integrates with SELinux to provide an + This module integrates with SELinux to provide an additional layer of security checking above and beyond what is normally provided by PostgreSQL. From the perspective of - SELinux, this module allows + SELinux, this module allows PostgreSQL to function as a user-space object manager. Each table or function access initiated by a DML query will be checked against the system security policy. 
This check is in addition to @@ -39,7 +39,7 @@ SELinux access control decisions are made using security labels, which are represented by strings such as - system_u:object_r:sepgsql_table_t:s0. Each access control + system_u:object_r:sepgsql_table_t:s0. Each access control decision involves two labels: the label of the subject attempting to perform the action, and the label of the object on which the operation is to be performed. Since these labels can be applied to any sort of object, @@ -60,17 +60,17 @@ Installation - sepgsql can only be used on Linux + sepgsql can only be used on Linux 2.6.28 or higher with SELinux enabled. It is not available on any other platform. You will also need - libselinux 2.1.10 or higher and - selinux-policy 3.9.13 or higher (although some + libselinux 2.1.10 or higher and + selinux-policy 3.9.13 or higher (although some distributions may backport the necessary rules into older policy versions). - The sestatus command allows you to check the status of + The sestatus command allows you to check the status of SELinux. A typical display is: $ sestatus @@ -81,20 +81,20 @@ Mode from config file: enforcing Policy version: 24 Policy from config file: targeted - If SELinux is disabled or not installed, you must set + If SELinux is disabled or not installed, you must set that product up first before installing this module. - To build this module, include the option --with-selinux in - your PostgreSQL configure command. Be sure that the - libselinux-devel RPM is installed at build time. + To build this module, include the option --with-selinux in + your PostgreSQL configure command. Be sure that the + libselinux-devel RPM is installed at build time. - To use this module, you must include sepgsql + To use this module, you must include sepgsql in the parameter in - postgresql.conf. The module will not function correctly + postgresql.conf. The module will not function correctly if loaded in any other manner. Once the module is loaded, you should execute sepgsql.sql in each database. This will install functions needed for security label management, and @@ -103,7 +103,7 @@ Policy from config file: targeted Here is an example showing how to initialize a fresh database cluster - with sepgsql functions and security labels installed. + with sepgsql functions and security labels installed. Adjust the paths shown as appropriate for your installation: @@ -124,7 +124,7 @@ $ for DBNAME in template0 template1 postgres; do Please note that you may see some or all of the following notifications depending on the particular versions you have of - libselinux and selinux-policy: + libselinux and selinux-policy: /etc/selinux/targeted/contexts/sepgsql_contexts: line 33 has invalid object type db_blobs /etc/selinux/targeted/contexts/sepgsql_contexts: line 36 has invalid object type db_language @@ -147,16 +147,16 @@ $ for DBNAME in template0 template1 postgres; do Due to the nature of SELinux, running the - regression tests for sepgsql requires several extra + regression tests for sepgsql requires several extra configuration steps, some of which must be done as root. The regression tests will not be run by an ordinary - make check or make installcheck command; you must + make check or make installcheck command; you must set up the configuration and then invoke the test script manually. - The tests must be run in the contrib/sepgsql directory + The tests must be run in the contrib/sepgsql directory of a configured PostgreSQL build tree. 
Although they require a build tree, the tests are designed to be executed against an installed server, - that is they are comparable to make installcheck not - make check. + that is they are comparable to make installcheck not + make check. @@ -168,17 +168,17 @@ $ for DBNAME in template0 template1 postgres; do Second, build and install the policy package for the regression test. - The sepgsql-regtest policy is a special purpose policy package + The sepgsql-regtest policy is a special purpose policy package which provides a set of rules to be allowed during the regression tests. It should be built from the policy source file - sepgsql-regtest.te, which is done using + sepgsql-regtest.te, which is done using make with a Makefile supplied by SELinux. You will need to locate the appropriate Makefile on your system; the path shown below is only an example. Once built, install this policy package using the - semodule command, which loads supplied policy packages + semodule command, which loads supplied policy packages into the kernel. If the package is correctly installed, - semodule -l should list sepgsql-regtest as an + semodule -l should list sepgsql-regtest as an available policy package: @@ -191,12 +191,12 @@ sepgsql-regtest 1.07 - Third, turn on sepgsql_regression_test_mode. - For security reasons, the rules in sepgsql-regtest + Third, turn on sepgsql_regression_test_mode. + For security reasons, the rules in sepgsql-regtest are not enabled by default; the sepgsql_regression_test_mode parameter enables the rules needed to launch the regression tests. - It can be turned on using the setsebool command: + It can be turned on using the setsebool command: @@ -206,7 +206,7 @@ sepgsql_regression_test_mode --> on - Fourth, verify your shell is operating in the unconfined_t + Fourth, verify your shell is operating in the unconfined_t domain: @@ -229,7 +229,7 @@ $ ./test_sepgsql This script will attempt to verify that you have done all the configuration steps correctly, and then it will run the regression tests for the - sepgsql module. + sepgsql module. @@ -242,7 +242,7 @@ $ sudo setsebool sepgsql_regression_test_mode off - You might prefer to remove the sepgsql-regtest policy + You might prefer to remove the sepgsql-regtest policy entirely: @@ -257,22 +257,22 @@ $ sudo semodule -r sepgsql-regtest - sepgsql.permissive (boolean) + sepgsql.permissive (boolean) - sepgsql.permissive configuration parameter + sepgsql.permissive configuration parameter - This parameter enables sepgsql to function + This parameter enables sepgsql to function in permissive mode, regardless of the system setting. The default is off. - This parameter can only be set in the postgresql.conf + This parameter can only be set in the postgresql.conf file or on the server command line. - When this parameter is on, sepgsql functions + When this parameter is on, sepgsql functions in permissive mode, even if SELinux in general is working in enforcing mode. This parameter is primarily useful for testing purposes. @@ -281,9 +281,9 @@ $ sudo semodule -r sepgsql-regtest - sepgsql.debug_audit (boolean) + sepgsql.debug_audit (boolean) - sepgsql.debug_audit configuration parameter + sepgsql.debug_audit configuration parameter @@ -295,7 +295,7 @@ $ sudo semodule -r sepgsql-regtest - The security policy of SELinux also has rules to + The security policy of SELinux also has rules to control whether or not particular accesses are logged. By default, access violations are logged, but allowed accesses are not. 
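For example, audit logging can be enabled for a single session while testing policies (a sketch):

    SET sepgsql.debug_audit = on;
    SET client_min_messages = log;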
@@ -315,13 +315,13 @@ $ sudo semodule -r sepgsql-regtest Controlled Object Classes - The security model of SELinux describes all the access + The security model of SELinux describes all the access control rules as relationships between a subject entity (typically, a client of the database) and an object entity (such as a database object), each of which is identified by a security label. If access to an unlabeled object is attempted, the object is treated as if it were assigned the label - unlabeled_t. + unlabeled_t. @@ -349,22 +349,22 @@ $ sudo semodule -r sepgsql-regtest DML Permissions - For tables, db_table:select, db_table:insert, - db_table:update or db_table:delete are + For tables, db_table:select, db_table:insert, + db_table:update or db_table:delete are checked for all the referenced target tables depending on the kind of - statement; in addition, db_table:select is also checked for + statement; in addition, db_table:select is also checked for all the tables that contain columns referenced in the - WHERE or RETURNING clause, as a data source - for UPDATE, and so on. + WHERE or RETURNING clause, as a data source + for UPDATE, and so on. Column-level permissions will also be checked for each referenced column. - db_column:select is checked on not only the columns being - read using SELECT, but those being referenced in other DML - statements; db_column:update or db_column:insert - will also be checked for columns being modified by UPDATE or - INSERT. + db_column:select is checked on not only the columns being + read using SELECT, but those being referenced in other DML + statements; db_column:update or db_column:insert + will also be checked for columns being modified by UPDATE or + INSERT. @@ -373,43 +373,43 @@ $ sudo semodule -r sepgsql-regtest UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; - Here, db_column:update will be checked for - t1.x, since it is being updated, - db_column:{select update} will be checked for - t1.y, since it is both updated and referenced, and - db_column:select will be checked for t1.z, since + Here, db_column:update will be checked for + t1.x, since it is being updated, + db_column:{select update} will be checked for + t1.y, since it is both updated and referenced, and + db_column:select will be checked for t1.z, since it is only referenced. - db_table:{select update} will also be checked + db_table:{select update} will also be checked at the table level. - For sequences, db_sequence:get_value is checked when we - reference a sequence object using SELECT; however, note that we + For sequences, db_sequence:get_value is checked when we + reference a sequence object using SELECT; however, note that we do not currently check permissions on execution of corresponding functions - such as lastval(). + such as lastval(). - For views, db_view:expand will be checked, then any other + For views, db_view:expand will be checked, then any other required permissions will be checked on the objects being expanded from the view, individually. - For functions, db_procedure:{execute} will be checked when + For functions, db_procedure:{execute} will be checked when user tries to execute a function as a part of query, or using fast-path invocation. If this function is a trusted procedure, it also checks - db_procedure:{entrypoint} permission to check whether it + db_procedure:{entrypoint} permission to check whether it can perform as entry point of trusted procedure. 
- In order to access any schema object, db_schema:search + In order to access any schema object, db_schema:search permission is required on the containing schema. When an object is referenced without schema qualification, schemas on which this permission is not present will not be searched (just as if the user did - not have USAGE privilege on the schema). If an explicit schema + not have USAGE privilege on the schema). If an explicit schema qualification is present, an error will occur if the user does not have the requisite permission on the named schema. @@ -425,22 +425,22 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; The default database privilege system allows database superusers to modify system catalogs using DML commands, and reference or modify toast tables. These operations are prohibited when - sepgsql is enabled. + sepgsql is enabled. DDL Permissions - SELinux defines several permissions to control common + SELinux defines several permissions to control common operations for each object type; such as creation, alter, drop and relabel of security label. In addition, several object types have special permissions to control their characteristic operations; such as addition or deletion of name entries within a particular schema. - Creating a new database object requires create permission. - SELinux will grant or deny this permission based on the + Creating a new database object requires create permission. + SELinux will grant or deny this permission based on the client's security label and the proposed security label for the new object. In some cases, additional privileges are required: @@ -449,12 +449,12 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; additionally requires - getattr permission for the source or template database. + getattr permission for the source or template database. - Creating a schema object additionally requires add_name + Creating a schema object additionally requires add_name permission on the parent schema. @@ -467,23 +467,23 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100;
- Creating a function marked as LEAKPROOF additionally - requires install permission. (This permission is also - checked when LEAKPROOF is set for an existing function.) + Creating a function marked as LEAKPROOF additionally + requires install permission. (This permission is also + checked when LEAKPROOF is set for an existing function.) - When DROP command is executed, drop will be + When DROP command is executed, drop will be checked on the object being removed. Permissions will be also checked for - objects dropped indirectly via CASCADE. Deletion of objects + objects dropped indirectly via CASCADE. Deletion of objects contained within a particular schema (tables, views, sequences and - procedures) additionally requires remove_name on the schema. + procedures) additionally requires remove_name on the schema. - When ALTER command is executed, setattr will be + When ALTER command is executed, setattr will be checked on the object being modified for each object types, except for subsidiary objects such as the indexes or triggers of a table, where permissions are instead checked on the parent object. In some cases, @@ -494,25 +494,25 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; Moving an object to a new schema additionally requires - remove_name permission on the old schema and - add_name permission on the new one. + remove_name permission on the old schema and + add_name permission on the new one. - Setting the LEAKPROOF attribute on a function requires - install permission. + Setting the LEAKPROOF attribute on a function requires + install permission. Using on an object additionally - requires relabelfrom permission for the object in - conjunction with its old security label and relabelto + requires relabelfrom permission for the object in + conjunction with its old security label and relabelto permission for the object in conjunction with its new security label. (In cases where multiple label providers are installed and the user tries to set a security label, but it is not managed by - SELinux, only setattr should be checked here. + SELinux, only setattr should be checked here. This is currently not done due to implementation restrictions.) @@ -524,7 +524,7 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; Trusted Procedures Trusted procedures are similar to security definer functions or setuid - commands. SELinux provides a feature to allow trusted + commands. SELinux provides a feature to allow trusted code to run using a security label different from that of the client, generally for the purpose of providing highly controlled access to sensitive data (e.g. rows might be omitted, or the precision of stored @@ -569,8 +569,8 @@ postgres=# SELECT cid, cname, show_credit(cid) FROM customer; - In this case, a regular user cannot reference customer.credit - directly, but a trusted procedure show_credit allows the user + In this case, a regular user cannot reference customer.credit + directly, but a trusted procedure show_credit allows the user to print the credit card numbers of customers with some of the digits masked out. @@ -582,8 +582,8 @@ postgres=# SELECT cid, cname, show_credit(cid) FROM customer; It is possible to use SELinux's dynamic domain transition feature to switch the security label of the client process, the client domain, to a new context, if that is allowed by the security policy. - The client domain needs the setcurrent permission and also - dyntransition from the old to the new domain. 
+ The client domain needs the setcurrent permission and also + dyntransition from the old to the new domain. Dynamic domain transitions should be considered carefully, because they @@ -612,7 +612,7 @@ ERROR: SELinux: security policy violation In this example above we were allowed to switch from the larger MCS - range c1.c1023 to the smaller range c1.c4, but + range c1.c1023 to the smaller range c1.c4, but switching back was denied. @@ -726,7 +726,7 @@ ERROR: SELinux: security policy violation Row-level access control - PostgreSQL supports row-level access, but + PostgreSQL supports row-level access, but sepgsql does not. @@ -736,7 +736,7 @@ ERROR: SELinux: security policy violation Covert channels - sepgsql does not try to hide the existence of + sepgsql does not try to hide the existence of a certain object, even if the user is not allowed to reference it. For example, we can infer the existence of an invisible object as a result of primary key conflicts, foreign key violations, and so on, @@ -766,7 +766,7 @@ ERROR: SELinux: security policy violation This document provides a wide spectrum of knowledge to administer - SELinux on your systems. + SELinux on your systems. It focuses primarily on Red Hat operating systems, but is not limited to them. diff --git a/doc/src/sgml/sourcerepo.sgml b/doc/src/sgml/sourcerepo.sgml index dd9da5a7b0..b5618d7166 100644 --- a/doc/src/sgml/sourcerepo.sgml +++ b/doc/src/sgml/sourcerepo.sgml @@ -18,18 +18,18 @@ Note that building PostgreSQL from the source - repository requires reasonably up-to-date versions of bison, - flex, and Perl. These tools are not needed + repository requires reasonably up-to-date versions of bison, + flex, and Perl. These tools are not needed to build from a distribution tarball, because the files that these tools are used to build are included in the tarball. Other tool requirements are the same as shown in . - Getting The Source via <productname>Git</> + Getting The Source via <productname>Git</productname> - With Git you will make a copy of the entire code repository + With Git you will make a copy of the entire code repository on your local machine, so you will have access to all history and branches offline. This is the fastest and most flexible way to develop or test patches. @@ -40,9 +40,9 @@ - You will need an installed version of Git, which you can + You will need an installed version of Git, which you can get from . Many systems already - have a recent version of Git installed by default, or + have a recent version of Git installed by default, or available in their package distribution system. @@ -57,14 +57,14 @@ git clone git://git.postgresql.org/git/postgresql.git This will copy the full repository to your local machine, so it may take a while to complete, especially if you have a slow Internet connection. - The files will be placed in a new subdirectory postgresql of + The files will be placed in a new subdirectory postgresql of your current directory. The Git mirror can also be reached via the HTTP protocol, if for example a firewall is blocking access to the Git protocol. Just change the URL - prefix to https, as in: + prefix to https, as in: git clone https://git.postgresql.org/git/postgresql.git @@ -77,7 +77,7 @@ git clone https://git.postgresql.org/git/postgresql.git - Whenever you want to get the latest updates in the system, cd + Whenever you want to get the latest updates in the system, cd into the repository, and run: @@ -88,9 +88,9 @@ git fetch - Git can do a lot more things than just fetch the source. 
For - more information, consult the Git man pages, or see the - website at . + Git can do a lot more things than just fetch the source. For + more information, consult the Git man pages, or see the + website at . diff --git a/doc/src/sgml/sources.sgml b/doc/src/sgml/sources.sgml index 7777bf5199..4c777de16f 100644 --- a/doc/src/sgml/sources.sgml +++ b/doc/src/sgml/sources.sgml @@ -14,8 +14,8 @@ Layout rules (brace positioning, etc) follow BSD conventions. In - particular, curly braces for the controlled blocks of if, - while, switch, etc go on their own lines. + particular, curly braces for the controlled blocks of if, + while, switch, etc go on their own lines. @@ -26,7 +26,7 @@ - Do not use C++ style comments (// comments). Strict ANSI C + Do not use C++ style comments (// comments). Strict ANSI C compilers do not accept them. For the same reason, do not use C++ extensions such as declaring new variables mid-block. @@ -40,7 +40,7 @@ */ Note that comment blocks that begin in column 1 will be preserved as-is - by pgindent, but it will re-flow indented comment blocks + by pgindent, but it will re-flow indented comment blocks as though they were plain text. If you want to preserve the line breaks in an indented block, add dashes like this: @@ -55,10 +55,10 @@ While submitted patches do not absolutely have to follow these formatting rules, it's a good idea to do so. Your code will get run through - pgindent before the next release, so there's no point in + pgindent before the next release, so there's no point in making it look nice under some other set of formatting conventions. A good rule of thumb for patches is make the new code look like - the existing code around it. + the existing code around it. @@ -92,37 +92,37 @@ less -x4 Error, warning, and log messages generated within the server code - should be created using ereport, or its older cousin - elog. The use of this function is complex enough to + should be created using ereport, or its older cousin + elog. The use of this function is complex enough to require some explanation. There are two required elements for every message: a severity level - (ranging from DEBUG to PANIC) and a primary + (ranging from DEBUG to PANIC) and a primary message text. In addition there are optional elements, the most common of which is an error identifier code that follows the SQL spec's SQLSTATE conventions. - ereport itself is just a shell function, that exists + ereport itself is just a shell function, that exists mainly for the syntactic convenience of making message generation look like a function call in the C source code. The only parameter - accepted directly by ereport is the severity level. + accepted directly by ereport is the severity level. The primary message text and any optional message elements are - generated by calling auxiliary functions, such as errmsg, - within the ereport call. + generated by calling auxiliary functions, such as errmsg, + within the ereport call. - A typical call to ereport might look like this: + A typical call to ereport might look like this: ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), errmsg("division by zero"))); - This specifies error severity level ERROR (a run-of-the-mill - error). The errcode call specifies the SQLSTATE error code - using a macro defined in src/include/utils/errcodes.h. The - errmsg call provides the primary message text. Notice the + This specifies error severity level ERROR (a run-of-the-mill + error). 
The errcode call specifies the SQLSTATE error code + using a macro defined in src/include/utils/errcodes.h. The + errmsg call provides the primary message text. Notice the extra set of parentheses surrounding the auxiliary function calls — these are annoying but syntactically necessary. @@ -139,72 +139,72 @@ ereport(ERROR, "You might need to add explicit typecasts."))); This illustrates the use of format codes to embed run-time values into - a message text. Also, an optional hint message is provided. + a message text. Also, an optional hint message is provided. - If the severity level is ERROR or higher, - ereport aborts the execution of the user-defined + If the severity level is ERROR or higher, + ereport aborts the execution of the user-defined function and does not return to the caller. If the severity level is - lower than ERROR, ereport returns normally. + lower than ERROR, ereport returns normally. - The available auxiliary routines for ereport are: + The available auxiliary routines for ereport are: errcode(sqlerrcode) specifies the SQLSTATE error identifier code for the condition. If this routine is not called, the error identifier defaults to - ERRCODE_INTERNAL_ERROR when the error severity level is - ERROR or higher, ERRCODE_WARNING when the - error level is WARNING, otherwise (for NOTICE - and below) ERRCODE_SUCCESSFUL_COMPLETION. + ERRCODE_INTERNAL_ERROR when the error severity level is + ERROR or higher, ERRCODE_WARNING when the + error level is WARNING, otherwise (for NOTICE + and below) ERRCODE_SUCCESSFUL_COMPLETION. While these defaults are often convenient, always think whether they - are appropriate before omitting the errcode() call. + are appropriate before omitting the errcode() call. errmsg(const char *msg, ...) specifies the primary error message text, and possibly run-time values to insert into it. Insertions - are specified by sprintf-style format codes. In addition to - the standard format codes accepted by sprintf, the format - code %m can be used to insert the error message returned - by strerror for the current value of errno. + are specified by sprintf-style format codes. In addition to + the standard format codes accepted by sprintf, the format + code %m can be used to insert the error message returned + by strerror for the current value of errno. - That is, the value that was current when the ereport call - was reached; changes of errno within the auxiliary reporting + That is, the value that was current when the ereport call + was reached; changes of errno within the auxiliary reporting routines will not affect it. That would not be true if you were to - write strerror(errno) explicitly in errmsg's + write strerror(errno) explicitly in errmsg's parameter list; accordingly, do not do so. - %m does not require any - corresponding entry in the parameter list for errmsg. - Note that the message string will be run through gettext + %m does not require any + corresponding entry in the parameter list for errmsg. + Note that the message string will be run through gettext for possible localization before format codes are processed. errmsg_internal(const char *msg, ...) is the same as - errmsg, except that the message string will not be + errmsg, except that the message string will not be translated nor included in the internationalization message dictionary. - This should be used for cannot happen cases that are probably + This should be used for cannot happen cases that are probably not worth expending translation effort on. 
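To make the %m convention concrete: a file-open failure is conventionally reported as below. This is a minimal sketch, in which fd and fname are hypothetical local variables and errcode_for_file_access() is a convenience routine described further on; the point is that %m expands to the strerror() text for the errno value current at the ereport call.

    if ((fd = open(fname, O_RDONLY, 0)) < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not open file \"%s\": %m", fname)));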
errmsg_plural(const char *fmt_singular, const char *fmt_plural, - unsigned long n, ...) is like errmsg, but with + unsigned long n, ...) is like errmsg, but with support for various plural forms of the message. - fmt_singular is the English singular format, - fmt_plural is the English plural format, - n is the integer value that determines which plural + fmt_singular is the English singular format, + fmt_plural is the English plural format, + n is the integer value that determines which plural form is needed, and the remaining arguments are formatted according to the selected format string. For more information see . @@ -213,16 +213,16 @@ ereport(ERROR, errdetail(const char *msg, ...) supplies an optional - detail message; this is to be used when there is additional + detail message; this is to be used when there is additional information that seems inappropriate to put in the primary message. The message string is processed in just the same way as for - errmsg. + errmsg. errdetail_internal(const char *msg, ...) is the same - as errdetail, except that the message string will not be + as errdetail, except that the message string will not be translated nor included in the internationalization message dictionary. This should be used for detail messages that are not worth expending translation effort on, for instance because they are too technical to be @@ -232,7 +232,7 @@ ereport(ERROR, errdetail_plural(const char *fmt_singular, const char *fmt_plural, - unsigned long n, ...) is like errdetail, but with + unsigned long n, ...) is like errdetail, but with support for various plural forms of the message. For more information see . @@ -240,10 +240,10 @@ ereport(ERROR, errdetail_log(const char *msg, ...) is the same as - errdetail except that this string goes only to the server - log, never to the client. If both errdetail (or one of + errdetail except that this string goes only to the server + log, never to the client. If both errdetail (or one of its equivalents above) and - errdetail_log are used then one string goes to the client + errdetail_log are used then one string goes to the client and the other to the log. This is useful for error details that are too security-sensitive or too bulky to include in the report sent to the client. @@ -253,7 +253,7 @@ ereport(ERROR, errdetail_log_plural(const char *fmt_singular, const char *fmt_plural, unsigned long n, ...) is like - errdetail_log, but with support for various plural forms of + errdetail_log, but with support for various plural forms of the message. For more information see . @@ -261,23 +261,23 @@ ereport(ERROR, errhint(const char *msg, ...) supplies an optional - hint message; this is to be used when offering suggestions + hint message; this is to be used when offering suggestions about how to fix the problem, as opposed to factual details about what went wrong. The message string is processed in just the same way as for - errmsg. + errmsg. errcontext(const char *msg, ...) is not normally called - directly from an ereport message site; rather it is used - in error_context_stack callback functions to provide + directly from an ereport message site; rather it is used + in error_context_stack callback functions to provide information about the context in which an error occurred, such as the current location in a PL function. The message string is processed in just the same way as for - errmsg. Unlike the other auxiliary functions, this can - be called more than once per ereport call; the successive + errmsg. 
Unlike the other auxiliary functions, this can + be called more than once per ereport call; the successive strings thus supplied are concatenated with separating newlines. @@ -309,9 +309,9 @@ ereport(ERROR, specifies a table constraint whose name, table name, and schema name should be included as auxiliary fields in the error report. Indexes should be considered to be constraints for this purpose, whether or - not they have an associated pg_constraint entry. Be + not they have an associated pg_constraint entry. Be careful to pass the underlying heap relation, not the index itself, as - rel. + rel. @@ -330,17 +330,17 @@ ereport(ERROR, - errcode_for_file_access() is a convenience function that + errcode_for_file_access() is a convenience function that selects an appropriate SQLSTATE error identifier for a failure in a file-access-related system call. It uses the saved - errno to determine which error code to generate. - Usually this should be used in combination with %m in the + errno to determine which error code to generate. + Usually this should be used in combination with %m in the primary error message text. - errcode_for_socket_access() is a convenience function that + errcode_for_socket_access() is a convenience function that selects an appropriate SQLSTATE error identifier for a failure in a socket-related system call. @@ -348,7 +348,7 @@ ereport(ERROR, errhidestmt(bool hide_stmt) can be called to specify - suppression of the STATEMENT: portion of a message in the + suppression of the STATEMENT: portion of a message in the postmaster log. Generally this is appropriate if the message text includes the current statement already. @@ -356,7 +356,7 @@ ereport(ERROR, errhidecontext(bool hide_ctx) can be called to - specify suppression of the CONTEXT: portion of a message in + specify suppression of the CONTEXT: portion of a message in the postmaster log. This should only be used for verbose debugging messages where the repeated inclusion of context would bloat the log volume too much. @@ -367,24 +367,24 @@ ereport(ERROR, - At most one of the functions errtable, - errtablecol, errtableconstraint, - errdatatype, or errdomainconstraint should - be used in an ereport call. These functions exist to + At most one of the functions errtable, + errtablecol, errtableconstraint, + errdatatype, or errdomainconstraint should + be used in an ereport call. These functions exist to allow applications to extract the name of a database object associated with the error condition without having to examine the potentially-localized error message text. These functions should be used in error reports for which it's likely that applications would wish to have automatic error handling. As of - PostgreSQL 9.3, complete coverage exists only for + PostgreSQL 9.3, complete coverage exists only for errors in SQLSTATE class 23 (integrity constraint violation), but this is likely to be expanded in future. - There is an older function elog that is still heavily used. - An elog call: + There is an older function elog that is still heavily used. + An elog call: elog(level, "format string", ...); @@ -394,11 +394,11 @@ ereport(level, (errmsg_internal("format string", ...))); Notice that the SQLSTATE error code is always defaulted, and the message string is not subject to translation. - Therefore, elog should be used only for internal errors and + Therefore, elog should be used only for internal errors and low-level debug logging. Any message that is likely to be of interest to - ordinary users should go through ereport. 
Nonetheless, - there are enough internal cannot happen error checks in the - system that elog is still widely used; it is preferred for + ordinary users should go through ereport. Nonetheless, + there are enough internal cannot happen error checks in the + system that elog is still widely used; it is preferred for those messages for its notational simplicity. @@ -414,7 +414,7 @@ ereport(level, (errmsg_internal("format string", ...))); This style guide is offered in the hope of maintaining a consistent, user-friendly style throughout all the messages generated by - PostgreSQL. + PostgreSQL. @@ -643,7 +643,7 @@ cannot open file "%s" - Rationale: Otherwise no one will know what foo.bar.baz + Rationale: Otherwise no one will know what foo.bar.baz refers to. @@ -866,7 +866,7 @@ BETTER: unrecognized node type: 42 C Standard - Code in PostgreSQL should only rely on language + Code in PostgreSQL should only rely on language features available in the C89 standard. That means a conforming C89 compiler has to be able to compile postgres, at least aside from a few platform dependent pieces. Features from later @@ -874,7 +874,7 @@ BETTER: unrecognized node type: 42 used, if a fallback is provided. - For example static inline and + For example static inline and _StaticAssert() are currently used, even though they are from newer revisions of the C standard. If not available we respectively fall back to defining the functions @@ -886,7 +886,7 @@ BETTER: unrecognized node type: 42 Function-Like Macros and Inline Functions - Both, macros with arguments and static inline + Both, macros with arguments and static inline functions, may be used. The latter are preferable if there are multiple-evaluation hazards when written as a macro, as e.g. the case with @@ -914,7 +914,7 @@ MemoryContextSwitchTo(MemoryContext context) } #endif /* FRONTEND */ - In this example CurrentMemoryContext, which is only + In this example CurrentMemoryContext, which is only available in the backend, is referenced and the function thus hidden with a #ifndef FRONTEND. This rule exists because some compilers emit references to symbols @@ -957,8 +957,8 @@ handle_sighup(SIGNAL_ARGS) errno = save_errno; } - errno is saved and restored because - SetLatch() might change it. If that were not done + errno is saved and restored because + SetLatch() might change it. If that were not done interrupted code that's currently inspecting errno might see the wrong value. diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml index cd4a8d07c4..3f2d31b4c0 100644 --- a/doc/src/sgml/spgist.sgml +++ b/doc/src/sgml/spgist.sgml @@ -57,7 +57,7 @@ Built-in Operator Classes - The core PostgreSQL distribution + The core PostgreSQL distribution includes the SP-GiST operator classes shown in . 
@@ -74,92 +74,92 @@ - kd_point_ops - point + kd_point_ops + point - << - <@ - <^ - >> - >^ - ~= + << + <@ + <^ + >> + >^ + ~= - quad_point_ops - point + quad_point_ops + point - << - <@ - <^ - >> - >^ - ~= + << + <@ + <^ + >> + >^ + ~= - range_ops + range_ops any range type - && - &< - &> - -|- - << - <@ - = - >> - @> + && + &< + &> + -|- + << + <@ + = + >> + @> - box_ops - box + box_ops + box - << - &< - && - &> - >> - ~= - @> - <@ - &<| - <<| + << + &< + && + &> + >> + ~= + @> + <@ + &<| + <<| |>> - |&> + |&> - text_ops - text + text_ops + text - < - <= - = - > - >= - ~<=~ - ~<~ - ~>=~ - ~>~ + < + <= + = + > + >= + ~<=~ + ~<~ + ~>=~ + ~>~ - inet_ops - inet, cidr + inet_ops + inet, cidr - && - >> - >>= - > - >= - <> - << - <<= - < - <= - = + && + >> + >>= + > + >= + <> + << + <<= + < + <= + = @@ -167,8 +167,8 @@ - Of the two operator classes for type point, - quad_point_ops is the default. kd_point_ops + Of the two operator classes for type point, + quad_point_ops is the default. kd_point_ops supports the same operators but uses a different index data structure which may offer better performance in some applications. @@ -199,15 +199,15 @@ Inner tuples are more complex, since they are branching points in the search tree. Each inner tuple contains a set of one or more - nodes, which represent groups of similar leaf values. + nodes, which represent groups of similar leaf values. A node contains a downlink that leads either to another, lower-level inner tuple, or to a short list of leaf tuples that all lie on the same index page. - Each node normally has a label that describes it; for example, + Each node normally has a label that describes it; for example, in a radix tree the node label could be the next character of the string value. (Alternatively, an operator class can omit the node labels, if it works with a fixed set of nodes for all inner tuples; see .) - Optionally, an inner tuple can have a prefix value + Optionally, an inner tuple can have a prefix value that describes all its members. In a radix tree this could be the common prefix of the represented strings. The prefix value is not necessarily really a prefix, but can be any data needed by the operator class; @@ -223,7 +223,7 @@ operator classes to manage level counting while descending the tree. There is also support for incrementally reconstructing the represented value when that is needed, and for passing down additional data (called - traverse values) during a tree descent. + traverse values) during a tree descent. @@ -241,12 +241,12 @@ There are five user-defined methods that an index operator class for SP-GiST must provide. All five follow the convention - of accepting two internal arguments, the first of which is a + of accepting two internal arguments, the first of which is a pointer to a C struct containing input values for the support method, while the second argument is a pointer to a C struct where output values - must be placed. Four of the methods just return void, since + must be placed. Four of the methods just return void, since all their results appear in the output struct; but - leaf_consistent additionally returns a boolean result. + leaf_consistent additionally returns a boolean result. The methods must not modify any fields of their input structs. In all cases, the output struct is initialized to zeroes before calling the user-defined method. @@ -258,20 +258,20 @@ - config + config Returns static information about the index implementation, including the data type OIDs of the prefix and node label data types. 
- The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_config(internal, internal) RETURNS void ... - The first argument is a pointer to a spgConfigIn + The first argument is a pointer to a spgConfigIn C struct, containing input data for the function. - The second argument is a pointer to a spgConfigOut + The second argument is a pointer to a spgConfigOut C struct, which the function must fill with result data. typedef struct spgConfigIn @@ -288,20 +288,20 @@ typedef struct spgConfigOut } spgConfigOut; - attType is passed in order to support polymorphic + attType is passed in order to support polymorphic index operator classes; for ordinary fixed-data-type operator classes, it will always have the same value and so can be ignored. For operator classes that do not use prefixes, - prefixType can be set to VOIDOID. + prefixType can be set to VOIDOID. Likewise, for operator classes that do not use node labels, - labelType can be set to VOIDOID. - canReturnData should be set true if the operator class + labelType can be set to VOIDOID. + canReturnData should be set true if the operator class is capable of reconstructing the originally-supplied index value. - longValuesOK should be set true only when the - attType is of variable length and the operator + longValuesOK should be set true only when the + attType is of variable length and the operator class is capable of segmenting long values by repeated suffixing (see ). @@ -309,20 +309,20 @@ typedef struct spgConfigOut - choose + choose Chooses a method for inserting a new value into an inner tuple. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_choose(internal, internal) RETURNS void ... - The first argument is a pointer to a spgChooseIn + The first argument is a pointer to a spgChooseIn C struct, containing input data for the function. - The second argument is a pointer to a spgChooseOut + The second argument is a pointer to a spgChooseOut C struct, which the function must fill with result data. typedef struct spgChooseIn @@ -380,25 +380,25 @@ typedef struct spgChooseOut } spgChooseOut; - datum is the original datum that was to be inserted + datum is the original datum that was to be inserted into the index. - leafDatum is initially the same as - datum, but can change at lower levels of the tree + leafDatum is initially the same as + datum, but can change at lower levels of the tree if the choose or picksplit methods change it. When the insertion search reaches a leaf page, - the current value of leafDatum is what will be stored + the current value of leafDatum is what will be stored in the newly created leaf tuple. - level is the current inner tuple's level, starting at + level is the current inner tuple's level, starting at zero for the root level. - allTheSame is true if the current inner tuple is + allTheSame is true if the current inner tuple is marked as containing multiple equivalent nodes (see ). - hasPrefix is true if the current inner tuple contains + hasPrefix is true if the current inner tuple contains a prefix; if so, - prefixDatum is its value. - nNodes is the number of child nodes contained in the + prefixDatum is its value. + nNodes is the number of child nodes contained in the inner tuple, and - nodeLabels is an array of their label values, or + nodeLabels is an array of their label values, or NULL if there are no labels. 
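As a concrete illustration of the config method just described, here is a minimal sketch loosely patterned on the built-in text (radix tree) operator class; the function name is hypothetical, and the specific type OIDs shown are assumptions appropriate to a radix-tree-style class:

    #include "postgres.h"
    #include "fmgr.h"
    #include "access/spgist.h"
    #include "catalog/pg_type.h"

    Datum
    my_config(PG_FUNCTION_ARGS)
    {
        /* the first argument (spgConfigIn *) can be ignored by
         * fixed-data-type operator classes */
        spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1);

        cfg->prefixType = TEXTOID;      /* prefixes are text fragments */
        cfg->labelType = INT2OID;       /* node labels are small integers */
        cfg->canReturnData = true;      /* original value can be rebuilt */
        cfg->longValuesOK = true;       /* long values split by suffixing */
        PG_RETURN_VOID();
    }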
@@ -412,80 +412,80 @@ typedef struct spgChooseOut If the new value matches one of the existing child nodes, - set resultType to spgMatchNode. - Set nodeN to the index (from zero) of that node in + set resultType to spgMatchNode. + Set nodeN to the index (from zero) of that node in the node array. - Set levelAdd to the increment in - level caused by descending through that node, + Set levelAdd to the increment in + level caused by descending through that node, or leave it as zero if the operator class does not use levels. - Set restDatum to equal datum + Set restDatum to equal datum if the operator class does not modify datums from one level to the next, or otherwise set it to the modified value to be used as - leafDatum at the next level. + leafDatum at the next level. If a new child node must be added, - set resultType to spgAddNode. - Set nodeLabel to the label to be used for the new - node, and set nodeN to the index (from zero) at which + set resultType to spgAddNode. + Set nodeLabel to the label to be used for the new + node, and set nodeN to the index (from zero) at which to insert the node in the node array. After the node has been added, the choose function will be called again with the modified inner tuple; - that call should result in an spgMatchNode result. + that call should result in an spgMatchNode result. If the new value is inconsistent with the tuple prefix, - set resultType to spgSplitTuple. + set resultType to spgSplitTuple. This action moves all the existing nodes into a new lower-level inner tuple, and replaces the existing inner tuple with a tuple having a single downlink pointing to the new lower-level inner tuple. - Set prefixHasPrefix to indicate whether the new + Set prefixHasPrefix to indicate whether the new upper tuple should have a prefix, and if so set - prefixPrefixDatum to the prefix value. This new + prefixPrefixDatum to the prefix value. This new prefix value must be sufficiently less restrictive than the original to accept the new value to be indexed. - Set prefixNNodes to the number of nodes needed in the - new tuple, and set prefixNodeLabels to a palloc'd array + Set prefixNNodes to the number of nodes needed in the + new tuple, and set prefixNodeLabels to a palloc'd array holding their labels, or to NULL if node labels are not required. Note that the total size of the new upper tuple must be no more than the total size of the tuple it is replacing; this constrains the lengths of the new prefix and new labels. - Set childNodeN to the index (from zero) of the node + Set childNodeN to the index (from zero) of the node that will downlink to the new lower-level inner tuple. - Set postfixHasPrefix to indicate whether the new + Set postfixHasPrefix to indicate whether the new lower-level inner tuple should have a prefix, and if so set - postfixPrefixDatum to the prefix value. The + postfixPrefixDatum to the prefix value. The combination of these two prefixes and the downlink node's label (if any) must have the same meaning as the original prefix, because there is no opportunity to alter the node labels that are moved to the new lower-level tuple, nor to change any child index entries. After the node has been split, the choose function will be called again with the replacement inner tuple. - That call may return an spgAddNode result, if no suitable - node was created by the spgSplitTuple action. Eventually - choose must return spgMatchNode to + That call may return an spgAddNode result, if no suitable + node was created by the spgSplitTuple action. 
Eventually + choose must return spgMatchNode to allow the insertion to descend to the next level. - picksplit + picksplit Decides how to create a new inner tuple over a set of leaf tuples. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_picksplit(internal, internal) RETURNS void ... - The first argument is a pointer to a spgPickSplitIn + The first argument is a pointer to a spgPickSplitIn C struct, containing input data for the function. - The second argument is a pointer to a spgPickSplitOut + The second argument is a pointer to a spgPickSplitOut C struct, which the function must fill with result data. typedef struct spgPickSplitIn @@ -508,52 +508,52 @@ typedef struct spgPickSplitOut } spgPickSplitOut; - nTuples is the number of leaf tuples provided. - datums is an array of their datum values. - level is the current level that all the leaf tuples + nTuples is the number of leaf tuples provided. + datums is an array of their datum values. + level is the current level that all the leaf tuples share, which will become the level of the new inner tuple. - Set hasPrefix to indicate whether the new inner + Set hasPrefix to indicate whether the new inner tuple should have a prefix, and if so set - prefixDatum to the prefix value. - Set nNodes to indicate the number of nodes that + prefixDatum to the prefix value. + Set nNodes to indicate the number of nodes that the new inner tuple will contain, and - set nodeLabels to an array of their label values, + set nodeLabels to an array of their label values, or to NULL if node labels are not required. - Set mapTuplesToNodes to an array that gives the index + Set mapTuplesToNodes to an array that gives the index (from zero) of the node that each leaf tuple should be assigned to. - Set leafTupleDatums to an array of the values to + Set leafTupleDatums to an array of the values to be stored in the new leaf tuples (these will be the same as the - input datums if the operator class does not modify + input datums if the operator class does not modify datums from one level to the next). - Note that the picksplit function is + Note that the picksplit function is responsible for palloc'ing the - nodeLabels, mapTuplesToNodes and - leafTupleDatums arrays. + nodeLabels, mapTuplesToNodes and + leafTupleDatums arrays. If more than one leaf tuple is supplied, it is expected that the - picksplit function will classify them into more than + picksplit function will classify them into more than one node; otherwise it is not possible to split the leaf tuples across multiple pages, which is the ultimate purpose of this - operation. Therefore, if the picksplit function + operation. Therefore, if the picksplit function ends up placing all the leaf tuples in the same node, the core SP-GiST code will override that decision and generate an inner tuple in which the leaf tuples are assigned at random to several identically-labeled nodes. Such a tuple is marked - allTheSame to signify that this has happened. The - choose and inner_consistent functions + allTheSame to signify that this has happened. The + choose and inner_consistent functions must take suitable care with such inner tuples. See for more information. 
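The three resultType cases for choose typically reduce to a dispatch of roughly the following shape. This is only a sketch: the helper functions are hypothetical stand-ins for operator-class-specific logic (find_matching_node is assumed to set n to the matching node's index, or to the would-be insertion position when it returns false), and levelAdd of 1 assumes a class that counts levels.

    /* sketch of a choose method's result dispatch; helpers hypothetical */
    Datum
    my_choose(PG_FUNCTION_ARGS)
    {
        spgChooseIn  *in = (spgChooseIn *) PG_GETARG_POINTER(0);
        spgChooseOut *out = (spgChooseOut *) PG_GETARG_POINTER(1);
        int           n;

        if (find_matching_node(in, &n))
        {
            out->resultType = spgMatchNode;
            out->result.matchNode.nodeN = n;    /* descend into node n */
            out->result.matchNode.levelAdd = 1;
            out->result.matchNode.restDatum = in->leafDatum;
        }
        else if (!in->allTheSame && prefix_accepts(in))
        {
            /* spgAddNode is not allowed for all-the-same tuples */
            out->resultType = spgAddNode;
            out->result.addNode.nodeLabel = label_for(in);
            out->result.addNode.nodeN = n;      /* insertion position */
        }
        else
        {
            out->resultType = spgSplitTuple;
            /* fill out->result.splitTuple.* as described above */
        }
        PG_RETURN_VOID();
    }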
- picksplit can be applied to a single leaf tuple only - in the case that the config function set - longValuesOK to true and a larger-than-a-page input + picksplit can be applied to a single leaf tuple only + in the case that the config function set + longValuesOK to true and a larger-than-a-page input value has been supplied. In this case the point of the operation is to strip off a prefix and produce a new, shorter leaf datum value. The call will be repeated until a leaf datum short enough to fit on @@ -564,20 +564,20 @@ typedef struct spgPickSplitOut - inner_consistent + inner_consistent Returns set of nodes (branches) to follow during tree search. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_inner_consistent(internal, internal) RETURNS void ... - The first argument is a pointer to a spgInnerConsistentIn + The first argument is a pointer to a spgInnerConsistentIn C struct, containing input data for the function. - The second argument is a pointer to a spgInnerConsistentOut + The second argument is a pointer to a spgInnerConsistentOut C struct, which the function must fill with result data. @@ -610,90 +610,90 @@ typedef struct spgInnerConsistentOut } spgInnerConsistentOut; - The array scankeys, of length nkeys, + The array scankeys, of length nkeys, describes the index search condition(s). These conditions are combined with AND — only index entries that satisfy all of - them are interesting. (Note that nkeys = 0 implies + them are interesting. (Note that nkeys = 0 implies that all index entries satisfy the query.) Usually the consistent - function only cares about the sk_strategy and - sk_argument fields of each array entry, which + function only cares about the sk_strategy and + sk_argument fields of each array entry, which respectively give the indexable operator and comparison value. - In particular it is not necessary to check sk_flags to + In particular it is not necessary to check sk_flags to see if the comparison value is NULL, because the SP-GiST core code will filter out such conditions. - reconstructedValue is the value reconstructed for the - parent tuple; it is (Datum) 0 at the root level or if the - inner_consistent function did not provide a value at the + reconstructedValue is the value reconstructed for the + parent tuple; it is (Datum) 0 at the root level or if the + inner_consistent function did not provide a value at the parent level. - traversalValue is a pointer to any traverse data - passed down from the previous call of inner_consistent + traversalValue is a pointer to any traverse data + passed down from the previous call of inner_consistent on the parent index tuple, or NULL at the root level. - traversalMemoryContext is the memory context in which + traversalMemoryContext is the memory context in which to store output traverse values (see below). - level is the current inner tuple's level, starting at + level is the current inner tuple's level, starting at zero for the root level. - returnData is true if reconstructed data is + returnData is true if reconstructed data is required for this query; this will only be so if the - config function asserted canReturnData. - allTheSame is true if the current inner tuple is - marked all-the-same; in this case all the nodes have the + config function asserted canReturnData. 
+ allTheSame is true if the current inner tuple is + marked all-the-same; in this case all the nodes have the same label (if any) and so either all or none of them match the query (see ). - hasPrefix is true if the current inner tuple contains + hasPrefix is true if the current inner tuple contains a prefix; if so, - prefixDatum is its value. - nNodes is the number of child nodes contained in the + prefixDatum is its value. + nNodes is the number of child nodes contained in the inner tuple, and - nodeLabels is an array of their label values, or + nodeLabels is an array of their label values, or NULL if the nodes do not have labels. - nNodes must be set to the number of child nodes that + nNodes must be set to the number of child nodes that need to be visited by the search, and - nodeNumbers must be set to an array of their indexes. + nodeNumbers must be set to an array of their indexes. If the operator class keeps track of levels, set - levelAdds to an array of the level increments + levelAdds to an array of the level increments required when descending to each node to be visited. (Often these increments will be the same for all the nodes, but that's not necessarily so, so an array is used.) If value reconstruction is needed, set - reconstructedValues to an array of the values + reconstructedValues to an array of the values reconstructed for each child node to be visited; otherwise, leave - reconstructedValues as NULL. + reconstructedValues as NULL. If it is desired to pass down additional out-of-band information - (traverse values) to lower levels of the tree search, - set traversalValues to an array of the appropriate + (traverse values) to lower levels of the tree search, + set traversalValues to an array of the appropriate traverse values, one for each child node to be visited; otherwise, - leave traversalValues as NULL. - Note that the inner_consistent function is + leave traversalValues as NULL. + Note that the inner_consistent function is responsible for palloc'ing the - nodeNumbers, levelAdds, - reconstructedValues, and - traversalValues arrays in the current memory context. + nodeNumbers, levelAdds, + reconstructedValues, and + traversalValues arrays in the current memory context. However, any output traverse values pointed to by - the traversalValues array should be allocated - in traversalMemoryContext. + the traversalValues array should be allocated + in traversalMemoryContext. Each traverse value must be a single palloc'd chunk. - leaf_consistent + leaf_consistent Returns true if a leaf tuple satisfies a query. - The SQL declaration of the function must look like this: + The SQL declaration of the function must look like this: CREATE FUNCTION my_leaf_consistent(internal, internal) RETURNS bool ... - The first argument is a pointer to a spgLeafConsistentIn + The first argument is a pointer to a spgLeafConsistentIn C struct, containing input data for the function. - The second argument is a pointer to a spgLeafConsistentOut + The second argument is a pointer to a spgLeafConsistentOut C struct, which the function must fill with result data. typedef struct spgLeafConsistentIn @@ -716,40 +716,40 @@ typedef struct spgLeafConsistentOut } spgLeafConsistentOut; - The array scankeys, of length nkeys, + The array scankeys, of length nkeys, describes the index search condition(s). These conditions are combined with AND — only index entries that satisfy all of - them satisfy the query. (Note that nkeys = 0 implies + them satisfy the query. 
(Note that nkeys = 0 implies that all index entries satisfy the query.) Usually the consistent - function only cares about the sk_strategy and - sk_argument fields of each array entry, which + function only cares about the sk_strategy and + sk_argument fields of each array entry, which respectively give the indexable operator and comparison value. - In particular it is not necessary to check sk_flags to + In particular it is not necessary to check sk_flags to see if the comparison value is NULL, because the SP-GiST core code will filter out such conditions. - reconstructedValue is the value reconstructed for the - parent tuple; it is (Datum) 0 at the root level or if the - inner_consistent function did not provide a value at the + reconstructedValue is the value reconstructed for the + parent tuple; it is (Datum) 0 at the root level or if the + inner_consistent function did not provide a value at the parent level. - traversalValue is a pointer to any traverse data - passed down from the previous call of inner_consistent + traversalValue is a pointer to any traverse data + passed down from the previous call of inner_consistent on the parent index tuple, or NULL at the root level. - level is the current leaf tuple's level, starting at + level is the current leaf tuple's level, starting at zero for the root level. - returnData is true if reconstructed data is + returnData is true if reconstructed data is required for this query; this will only be so if the - config function asserted canReturnData. - leafDatum is the key value stored in the current + config function asserted canReturnData. + leafDatum is the key value stored in the current leaf tuple. - The function must return true if the leaf tuple matches the - query, or false if not. In the true case, - if returnData is true then - leafValue must be set to the value originally supplied + The function must return true if the leaf tuple matches the + query, or false if not. In the true case, + if returnData is true then + leafValue must be set to the value originally supplied to be indexed for this leaf tuple. Also, - recheck may be set to true if the match + recheck may be set to true if the match is uncertain and so the operator(s) must be re-applied to the actual heap tuple to verify the match. @@ -759,18 +759,18 @@ typedef struct spgLeafConsistentOut All the SP-GiST support methods are normally called in a short-lived - memory context; that is, CurrentMemoryContext will be reset + memory context; that is, CurrentMemoryContext will be reset after processing of each tuple. It is therefore not very important to - worry about pfree'ing everything you palloc. (The config + worry about pfree'ing everything you palloc. (The config method is an exception: it should try to avoid leaking memory. But - usually the config method need do nothing but assign + usually the config method need do nothing but assign constants into the passed parameter struct.) If the indexed column is of a collatable data type, the index collation will be passed to all the support methods, using the standard - PG_GET_COLLATION() mechanism. + PG_GET_COLLATION() mechanism. @@ -794,7 +794,7 @@ typedef struct spgLeafConsistentOut trees, in which each level of the tree includes a prefix that is short enough to fit on a page, and the final leaf level includes a suffix also short enough to fit on a page. The operator class should set - longValuesOK to TRUE only if it is prepared to arrange for + longValuesOK to TRUE only if it is prepared to arrange for this to happen. 
Otherwise, the SP-GiST core will reject any request to index a value that is too large to fit on an index page. @@ -814,8 +814,8 @@ typedef struct spgLeafConsistentOut links that chain such tuples together.) If the set of leaf tuples grows too large for a page, a split is performed and an intermediate inner tuple is inserted. For this to fix the problem, the new inner - tuple must divide the set of leaf values into more than one - node group. If the operator class's picksplit function + tuple must divide the set of leaf values into more than one + node group. If the operator class's picksplit function fails to do that, the SP-GiST core resorts to extraordinary measures described in . @@ -830,58 +830,58 @@ typedef struct spgLeafConsistentOut corresponding to the four quadrants around the inner tuple's centroid point. In such a case the code typically works with the nodes by number, and there is no need for explicit node labels. To suppress - node labels (and thereby save some space), the picksplit - function can return NULL for the nodeLabels array, - and likewise the choose function can return NULL for - the prefixNodeLabels array during - a spgSplitTuple action. - This will in turn result in nodeLabels being NULL during - subsequent calls to choose and inner_consistent. + node labels (and thereby save some space), the picksplit + function can return NULL for the nodeLabels array, + and likewise the choose function can return NULL for + the prefixNodeLabels array during + a spgSplitTuple action. + This will in turn result in nodeLabels being NULL during + subsequent calls to choose and inner_consistent. In principle, node labels could be used for some inner tuples and omitted for others in the same index. When working with an inner tuple having unlabeled nodes, it is an error - for choose to return spgAddNode, since the set + for choose to return spgAddNode, since the set of nodes is supposed to be fixed in such cases. - <quote>All-the-same</> Inner Tuples + <quote>All-the-same</quote> Inner Tuples The SP-GiST core can override the results of the - operator class's picksplit function when - picksplit fails to divide the supplied leaf values into + operator class's picksplit function when + picksplit fails to divide the supplied leaf values into at least two node categories. When this happens, the new inner tuple is created with multiple nodes that each have the same label (if any) - that picksplit gave to the one node it did use, and the + that picksplit gave to the one node it did use, and the leaf values are divided at random among these equivalent nodes. - The allTheSame flag is set on the inner tuple to warn the - choose and inner_consistent functions that the + The allTheSame flag is set on the inner tuple to warn the + choose and inner_consistent functions that the tuple does not have the node set that they might otherwise expect. - When dealing with an allTheSame tuple, a choose - result of spgMatchNode is interpreted to mean that the new + When dealing with an allTheSame tuple, a choose + result of spgMatchNode is interpreted to mean that the new value can be assigned to any of the equivalent nodes; the core code will - ignore the supplied nodeN value and descend into one + ignore the supplied nodeN value and descend into one of the nodes at random (so as to keep the tree balanced). 
It is an - error for choose to return spgAddNode, since + error for choose to return spgAddNode, since that would make the nodes not all equivalent; the - spgSplitTuple action must be used if the value to be inserted + spgSplitTuple action must be used if the value to be inserted doesn't match the existing nodes. - When dealing with an allTheSame tuple, the - inner_consistent function should return either all or none + When dealing with an allTheSame tuple, the + inner_consistent function should return either all or none of the nodes as targets for continuing the index search, since they are all equivalent. This may or may not require any special-case code, - depending on how much the inner_consistent function normally + depending on how much the inner_consistent function normally assumes about the meaning of the nodes. @@ -895,8 +895,8 @@ typedef struct spgLeafConsistentOut The PostgreSQL source distribution includes several examples of index operator classes for SP-GiST, as described in . Look - into src/backend/access/spgist/ - and src/backend/utils/adt/ to see the code. + into src/backend/access/spgist/ + and src/backend/utils/adt/ to see the code. diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml index 3594f9dce1..e2b44c5fa1 100644 --- a/doc/src/sgml/spi.sgml +++ b/doc/src/sgml/spi.sgml @@ -203,7 +203,7 @@ int SPI_execute(const char * command, bool rea SPI_execute executes the specified SQL command for count rows. If read_only - is true, the command must be read-only, and execution overhead + is true, the command must be read-only, and execution overhead is somewhat reduced. @@ -225,13 +225,13 @@ SPI_execute("SELECT * FROM foo", true, 5); SPI_execute("INSERT INTO foo SELECT * FROM bar", false, 5); - inserts all rows from bar, ignoring the + inserts all rows from bar, ignoring the count parameter. However, with SPI_execute("INSERT INTO foo SELECT * FROM bar RETURNING *", false, 5); at most 5 rows would be inserted, since execution would stop after the - fifth RETURNING result row is retrieved. + fifth RETURNING result row is retrieved. @@ -244,26 +244,26 @@ SPI_execute("INSERT INTO foo SELECT * FROM bar RETURNING *", false, 5); - When read_only is false, + When read_only is false, SPI_execute increments the command - counter and computes a new snapshot before executing each + counter and computes a new snapshot before executing each command in the string. The snapshot does not actually change if the - current transaction isolation level is SERIALIZABLE or REPEATABLE READ, but in - READ COMMITTED mode the snapshot update allows each command to + current transaction isolation level is SERIALIZABLE or REPEATABLE READ, but in + READ COMMITTED mode the snapshot update allows each command to see the results of newly committed transactions from other sessions. This is essential for consistent behavior when the commands are modifying the database. - When read_only is true, + When read_only is true, SPI_execute does not update either the snapshot - or the command counter, and it allows only plain SELECT + or the command counter, and it allows only plain SELECT commands to appear in the command string. The commands are executed using the snapshot previously established for the surrounding query. This execution mode is somewhat faster than the read/write mode due to eliminating per-command overhead. 
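To make the conventions above concrete — the read_only flag, $n parameters, and the plan handle returned by SPI_prepare — here is a minimal sketch of a C function that prepares a parameterized query and executes it read-only. The table foo, its integer column x, and the function name are hypothetical, and error handling is pared down to the essentials.

    #include "postgres.h"
    #include "fmgr.h"
    #include "executor/spi.h"
    #include "catalog/pg_type.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(count_rows_above);

    /* Count the rows of hypothetical table "foo" whose column "x" exceeds $1. */
    Datum
    count_rows_above(PG_FUNCTION_ARGS)
    {
        int32       threshold = PG_GETARG_INT32(0);
        Oid         argtypes[1] = {INT4OID};
        Datum       values[1];
        SPIPlanPtr  plan;
        int         ret;
        int64       nrows;

        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect failed");

        plan = SPI_prepare("SELECT 1 FROM foo WHERE x > $1", 1, argtypes);
        if (plan == NULL)
            elog(ERROR, "SPI_prepare failed: %s",
                 SPI_result_code_string(SPI_result));

        values[0] = Int32GetDatum(threshold);

        /* read_only = true: a plain SELECT, run under the surrounding snapshot */
        ret = SPI_execute_plan(plan, values, NULL, true, 0);
        if (ret != SPI_OK_SELECT)
            elog(ERROR, "SPI_execute_plan failed: %s",
                 SPI_result_code_string(ret));

        nrows = (int64) SPI_processed;   /* copy out before SPI_finish */
        SPI_finish();

        PG_RETURN_INT64(nrows);
    }

Passing NULL for the nulls argument asserts that no parameter is null, and a count of 0 places no limit on the number of rows processed.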
It also allows genuinely - stable functions to be built: since successive executions + stable functions to be built: since successive executions will all use the same snapshot, there will be no change in the results. @@ -284,11 +284,11 @@ SPI_execute("INSERT INTO foo SELECT * FROM bar RETURNING *", false, 5); then you can use the global pointer SPITupleTable *SPI_tuptable to access the result rows. Some utility commands (such as - EXPLAIN) also return row sets, and SPI_tuptable + EXPLAIN) also return row sets, and SPI_tuptable will contain the result in these cases too. Some utility commands - (COPY, CREATE TABLE AS) don't return a row set, so - SPI_tuptable is NULL, but they still return the number of - rows processed in SPI_processed. + (COPY, CREATE TABLE AS) don't return a row set, so + SPI_tuptable is NULL, but they still return the number of + rows processed in SPI_processed. @@ -304,17 +304,17 @@ typedef struct HeapTuple *vals; /* rows */ } SPITupleTable; - vals is an array of pointers to rows. (The number + vals is an array of pointers to rows. (The number of valid entries is given by SPI_processed.) - tupdesc is a row descriptor which you can pass to - SPI functions dealing with rows. tuptabcxt, - alloced, and free are internal + tupdesc is a row descriptor which you can pass to + SPI functions dealing with rows. tuptabcxt, + alloced, and free are internal fields not intended for use by SPI callers. SPI_finish frees all - SPITupleTables allocated during the current + SPITupleTables allocated during the current procedure. You can free a particular result table earlier, if you are done with it, by calling SPI_freetuptable. @@ -336,7 +336,7 @@ typedef struct bool read_only - true for read-only execution + true for read-only execution @@ -345,7 +345,7 @@ typedef struct maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -365,7 +365,7 @@ typedef struct if a SELECT (but not SELECT - INTO) was executed + INTO) was executed @@ -473,7 +473,7 @@ typedef struct SPI_ERROR_COPY - if COPY TO stdout or COPY FROM stdin + if COPY TO stdout or COPY FROM stdin was attempted @@ -484,13 +484,13 @@ typedef struct if a transaction manipulation command was attempted - (BEGIN, - COMMIT, - ROLLBACK, - SAVEPOINT, - PREPARE TRANSACTION, - COMMIT PREPARED, - ROLLBACK PREPARED, + (BEGIN, + COMMIT, + ROLLBACK, + SAVEPOINT, + PREPARE TRANSACTION, + COMMIT PREPARED, + ROLLBACK PREPARED, or any variant thereof) @@ -560,7 +560,7 @@ int SPI_exec(const char * command, long count< SPI_exec is the same as SPI_execute, with the latter's read_only parameter always taken as - false. + false. @@ -582,7 +582,7 @@ int SPI_exec(const char * command, long count< maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -628,7 +628,7 @@ int SPI_execute_with_args(const char *command, SPI_execute_with_args executes a command that might include references to externally supplied parameters. The command text - refers to a parameter as $n, and + refers to a parameter as $n, and the call specifies data types and values for each such symbol. read_only and count have the same interpretation as in SPI_execute. @@ -642,7 +642,7 @@ int SPI_execute_with_args(const char *command, - Similar results can be achieved with SPI_prepare followed by + Similar results can be achieved with SPI_prepare followed by SPI_execute_plan; however, when using this function the query plan is always customized to the specific parameter values provided. 
@@ -670,7 +670,7 @@ int SPI_execute_with_args(const char *command, int nargs - number of input parameters ($1, $2, etc.) + number of input parameters ($1, $2, etc.) @@ -707,12 +707,12 @@ int SPI_execute_with_args(const char *command, If nulls is NULL then SPI_execute_with_args assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -720,7 +720,7 @@ int SPI_execute_with_args(const char *command, bool read_only - true for read-only execution + true for read-only execution @@ -729,7 +729,7 @@ int SPI_execute_with_args(const char *command, maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -796,7 +796,7 @@ SPIPlanPtr SPI_prepare(const char * command, int A prepared command can be generalized by writing parameters - ($1, $2, etc.) in place of what would be + ($1, $2, etc.) in place of what would be constants in a normal command. The actual values of the parameters are then specified when SPI_execute_plan is called. This allows the prepared command to be used over a wider range of @@ -829,7 +829,7 @@ SPIPlanPtr SPI_prepare(const char * command, int int nargs - number of input parameters ($1, $2, etc.) + number of input parameters ($1, $2, etc.) @@ -851,14 +851,14 @@ SPIPlanPtr SPI_prepare(const char * command, int SPI_prepare returns a non-null pointer to an - SPIPlan, which is an opaque struct representing a prepared + SPIPlan, which is an opaque struct representing a prepared statement. On error, NULL will be returned, and SPI_result will be set to one of the same error codes used by SPI_execute, except that it is set to SPI_ERROR_ARGUMENT if command is NULL, or if - nargs is less than 0, or if nargs is - greater than 0 and argtypes is NULL. + nargs is less than 0, or if nargs is + greater than 0 and argtypes is NULL. @@ -875,21 +875,21 @@ SPIPlanPtr SPI_prepare(const char * command, int CURSOR_OPT_GENERIC_PLAN or - CURSOR_OPT_CUSTOM_PLAN flag to + passing the CURSOR_OPT_GENERIC_PLAN or + CURSOR_OPT_CUSTOM_PLAN flag to SPI_prepare_cursor, to force use of generic or custom plans respectively. Although the main point of a prepared statement is to avoid repeated parse - analysis and planning of the statement, PostgreSQL will + analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes since the previous use of the prepared statement. Also, if the value of changes from one use to the next, the statement will be re-parsed using the new - search_path. (This latter behavior is new as of + search_path. (This latter behavior is new as of PostgreSQL 9.3.) See for more information about the behavior of prepared statements. @@ -900,14 +900,14 @@ SPIPlanPtr SPI_prepare(const char * command, int - SPIPlanPtr is declared as a pointer to an opaque struct type in - spi.h. It is unwise to try to access its contents + SPIPlanPtr is declared as a pointer to an opaque struct type in + spi.h. 
It is unwise to try to access its contents directly, as that makes your code much more likely to break in future revisions of PostgreSQL. - The name SPIPlanPtr is somewhat historical, since the data + The name SPIPlanPtr is somewhat historical, since the data structure no longer necessarily contains an execution plan. @@ -941,9 +941,9 @@ SPIPlanPtr SPI_prepare_cursor(const char * command, int < SPI_prepare_cursor is identical to SPI_prepare, except that it also allows specification - of the planner's cursor options parameter. This is a bit mask + of the planner's cursor options parameter. This is a bit mask having the values shown in nodes/parsenodes.h - for the options field of DeclareCursorStmt. + for the options field of DeclareCursorStmt. SPI_prepare always takes the cursor options as zero. @@ -965,7 +965,7 @@ SPIPlanPtr SPI_prepare_cursor(const char * command, int < int nargs - number of input parameters ($1, $2, etc.) + number of input parameters ($1, $2, etc.) @@ -1004,7 +1004,7 @@ SPIPlanPtr SPI_prepare_cursor(const char * command, int < Notes - Useful bits to set in cursorOptions include + Useful bits to set in cursorOptions include CURSOR_OPT_SCROLL, CURSOR_OPT_NO_SCROLL, CURSOR_OPT_FAST_PLAN, @@ -1262,9 +1262,9 @@ bool SPI_is_cursor_plan(SPIPlanPtr plan) as an argument to SPI_cursor_open, or false if that is not the case. The criteria are that the plan represents one single command and that this - command returns tuples to the caller; for example, SELECT - is allowed unless it contains an INTO clause, and - UPDATE is allowed only if it contains a RETURNING + command returns tuples to the caller; for example, SELECT + is allowed unless it contains an INTO clause, and + UPDATE is allowed only if it contains a RETURNING clause. @@ -1368,12 +1368,12 @@ int SPI_execute_plan(SPIPlanPtr plan, Datum * If nulls is NULL then SPI_execute_plan assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -1381,7 +1381,7 @@ int SPI_execute_plan(SPIPlanPtr plan, Datum * bool read_only - true for read-only execution + true for read-only execution @@ -1390,7 +1390,7 @@ int SPI_execute_plan(SPIPlanPtr plan, Datum * maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -1467,10 +1467,10 @@ int SPI_execute_plan_with_paramlist(SPIPlanPtr plan, prepared by SPI_prepare. This function is equivalent to SPI_execute_plan except that information about the parameter values to be passed to the - query is presented differently. The ParamListInfo + query is presented differently. The ParamListInfo representation can be convenient for passing down values that are already available in that format. It also supports use of dynamic - parameter sets via hook functions specified in ParamListInfo. + parameter sets via hook functions specified in ParamListInfo. 
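Returning briefly to the plain-array interface: the ' '/'n' convention for the nulls array is easy to get wrong, so a sketch spelling it out may help. It assumes SPI_connect has already succeeded; the two-column table t(a int, b int) is hypothetical.

    #include "postgres.h"
    #include "executor/spi.h"
    #include "catalog/pg_type.h"

    /* Insert (42, NULL) into a hypothetical table t(a int, b int). */
    static void
    insert_with_null(void)
    {
        Oid         argtypes[2] = {INT4OID, INT4OID};
        Datum       values[2];
        char        nulls[2];
        SPIPlanPtr  plan;
        int         ret;

        plan = SPI_prepare("INSERT INTO t VALUES ($1, $2)", 2, argtypes);
        if (plan == NULL)
            elog(ERROR, "SPI_prepare failed: %s",
                 SPI_result_code_string(SPI_result));

        values[0] = Int32GetDatum(42);
        nulls[0] = ' ';            /* ' ' marks $1 as non-null */

        values[1] = (Datum) 0;     /* ignored, because ... */
        nulls[1] = 'n';            /* ... 'n' marks $2 as null */

        /* nulls is a bare char array, not a C string: no '\0' terminator */
        ret = SPI_execute_plan(plan, values, nulls, false, 0);
        if (ret != SPI_OK_INSERT)
            elog(ERROR, "SPI_execute_plan failed: %s",
                 SPI_result_code_string(ret));
    }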
@@ -1499,7 +1499,7 @@ int SPI_execute_plan_with_paramlist(SPIPlanPtr plan, bool read_only - true for read-only execution + true for read-only execution @@ -1508,7 +1508,7 @@ int SPI_execute_plan_with_paramlist(SPIPlanPtr plan, maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -1558,7 +1558,7 @@ int SPI_execp(SPIPlanPtr plan, Datum * values< SPI_execp is the same as SPI_execute_plan, with the latter's read_only parameter always taken as - false. + false. @@ -1597,12 +1597,12 @@ int SPI_execp(SPIPlanPtr plan, Datum * values< If nulls is NULL then SPI_execp assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -1612,7 +1612,7 @@ int SPI_execp(SPIPlanPtr plan, Datum * values< maximum number of rows to return, - or 0 for no limit + or 0 for no limit @@ -1729,12 +1729,12 @@ Portal SPI_cursor_open(const char * name, SPIPlanPtr nulls is NULL then SPI_cursor_open assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -1742,7 +1742,7 @@ Portal SPI_cursor_open(const char * name, SPIPlanPtr bool read_only - true for read-only execution + true for read-only execution @@ -1753,7 +1753,7 @@ Portal SPI_cursor_open(const char * name, SPIPlanPtr Pointer to portal containing the cursor. Note there is no error - return convention; any error will be reported via elog. + return convention; any error will be reported via elog. @@ -1836,7 +1836,7 @@ Portal SPI_cursor_open_with_args(const char *name, int nargs - number of input parameters ($1, $2, etc.) + number of input parameters ($1, $2, etc.) @@ -1873,12 +1873,12 @@ Portal SPI_cursor_open_with_args(const char *name, If nulls is NULL then SPI_cursor_open_with_args assumes that no parameters are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding parameter - value is non-null, or 'n' if the corresponding parameter + array should be ' ' if the corresponding parameter + value is non-null, or 'n' if the corresponding parameter value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: - it does not need a '\0' terminator. + it does not need a '\0' terminator. @@ -1886,7 +1886,7 @@ Portal SPI_cursor_open_with_args(const char *name, bool read_only - true for read-only execution + true for read-only execution @@ -1906,7 +1906,7 @@ Portal SPI_cursor_open_with_args(const char *name, Pointer to portal containing the cursor. Note there is no error - return convention; any error will be reported via elog. + return convention; any error will be reported via elog. 
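Putting the cursor functions together, the following sketch opens an unnamed cursor over a prepared plan and drains it in batches, which keeps memory use bounded for arbitrarily large result sets. The table big_table is hypothetical and SPI_connect is assumed to have succeeded; the plan itself is released automatically at SPI_finish.

    #include "postgres.h"
    #include "executor/spi.h"

    /* Walk a large result set in batches through an SPI cursor. */
    static void
    scan_in_batches(void)
    {
        SPIPlanPtr  plan;
        Portal      portal;

        plan = SPI_prepare("SELECT * FROM big_table", 0, NULL);
        if (plan == NULL)
            elog(ERROR, "SPI_prepare failed: %s",
                 SPI_result_code_string(SPI_result));

        /* NULL name lets the system pick one; read_only for a plain SELECT */
        portal = SPI_cursor_open(NULL, plan, NULL, NULL, true);

        for (;;)
        {
            SPI_cursor_fetch(portal, true, 1000);   /* forward, up to 1000 rows */
            if (SPI_processed == 0)
                break;

            /* ... process SPI_tuptable->vals[0 .. SPI_processed - 1] ... */

            SPI_freetuptable(SPI_tuptable);  /* free each batch as we go */
        }
        SPI_cursor_close(portal);
    }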
@@ -1944,10 +1944,10 @@ Portal SPI_cursor_open_with_paramlist(const char *name, SPI_prepare. This function is equivalent to SPI_cursor_open except that information about the parameter values to be passed to the - query is presented differently. The ParamListInfo + query is presented differently. The ParamListInfo representation can be convenient for passing down values that are already available in that format. It also supports use of dynamic - parameter sets via hook functions specified in ParamListInfo. + parameter sets via hook functions specified in ParamListInfo. @@ -1991,7 +1991,7 @@ Portal SPI_cursor_open_with_paramlist(const char *name, bool read_only - true for read-only execution + true for read-only execution @@ -2002,7 +2002,7 @@ Portal SPI_cursor_open_with_paramlist(const char *name, Pointer to portal containing the cursor. Note there is no error - return convention; any error will be reported via elog. + return convention; any error will be reported via elog. @@ -2090,7 +2090,7 @@ void SPI_cursor_fetch(Portal portal, bool forw SPI_cursor_fetch fetches some rows from a cursor. This is equivalent to a subset of the SQL command - FETCH (see SPI_scroll_cursor_fetch + FETCH (see SPI_scroll_cursor_fetch for more functionality). @@ -2175,7 +2175,7 @@ void SPI_cursor_move(Portal portal, bool forwa SPI_cursor_move skips over some number of rows in a cursor. This is equivalent to a subset of the SQL command - MOVE (see SPI_scroll_cursor_move + MOVE (see SPI_scroll_cursor_move for more functionality). @@ -2250,7 +2250,7 @@ void SPI_scroll_cursor_fetch(Portal portal, FetchDirectio SPI_scroll_cursor_fetch fetches some rows from a - cursor. This is equivalent to the SQL command FETCH. + cursor. This is equivalent to the SQL command FETCH. @@ -2350,7 +2350,7 @@ void SPI_scroll_cursor_move(Portal portal, FetchDirection SPI_scroll_cursor_move skips over some number of rows in a cursor. This is equivalent to the SQL command - MOVE. + MOVE. @@ -2400,7 +2400,7 @@ void SPI_scroll_cursor_move(Portal portal, FetchDirection SPI_processed is set as in SPI_execute if successful. - SPI_tuptable is set to NULL, since + SPI_tuptable is set to NULL, since no rows are returned by this function. @@ -2628,7 +2628,7 @@ SPIPlanPtr SPI_saveplan(SPIPlanPtr plan) The originally passed-in statement is not freed, so you might wish to do SPI_freeplan on it to avoid leaking memory - until SPI_finish. + until SPI_finish. @@ -2975,7 +2975,7 @@ int SPI_register_trigger_data(TriggerData *tdata) The functions described here provide an interface for extracting - information from result sets returned by SPI_execute and + information from result sets returned by SPI_execute and other SPI functions. @@ -3082,7 +3082,7 @@ int SPI_fnumber(TupleDesc rowdesc, const char * If colname refers to a system column (e.g., - oid) then the appropriate negative column number will + oid) then the appropriate negative column number will be returned. The caller should be careful to test the return value for exact equality to SPI_ERROR_NOATTRIBUTE to detect an error; testing the result for less than or equal to 0 is @@ -3617,7 +3617,7 @@ const char * SPI_result_code_string(int code); to keep track of individual objects to avoid memory leaks; instead only a relatively small number of contexts have to be managed. palloc and related functions allocate memory - from the current context. + from the current context. 
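These rules come together when extracting values from a result set: SPI_fnumber's return value must be compared for exact equality with SPI_ERROR_NOATTRIBUTE, and a value fetched with SPI_getbinval is a plain Datum that needs no explicit freeing for a pass-by-value type such as int4. A sketch, assuming a prior successful SPI_execute has left a row set in SPI_tuptable; the column name is supplied by the caller.

    #include "postgres.h"
    #include "executor/spi.h"

    /* Pull one int4 column out of the first result row. */
    static int32
    first_row_int_column(const char *colname)
    {
        TupleDesc   tupdesc = SPI_tuptable->tupdesc;
        HeapTuple   row;
        int         colno;
        bool        isnull;
        Datum       d;

        if (SPI_processed == 0)
            elog(ERROR, "query returned no rows");
        row = SPI_tuptable->vals[0];

        colno = SPI_fnumber(tupdesc, colname);
        if (colno == SPI_ERROR_NOATTRIBUTE)   /* test exact equality, per the docs */
            elog(ERROR, "no such column: %s", colname);

        d = SPI_getbinval(row, tupdesc, colno, &isnull);
        if (isnull)
            elog(ERROR, "column %s is null", colname);

        return DatumGetInt32(d);
    }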
@@ -3943,7 +3943,7 @@ HeapTupleHeader SPI_returntuple(HeapTuple row, TupleDesc Note that this should be used for functions that are declared to return composite types. It is not used for triggers; use - SPI_copytuple for returning a modified row in a trigger. + SPI_copytuple for returning a modified row in a trigger. @@ -4087,12 +4087,12 @@ HeapTuple SPI_modifytuple(Relation rel, HeapTuple nulls is NULL then SPI_modifytuple assumes that no new values are null. Otherwise, each entry of the nulls - array should be ' ' if the corresponding new value is - non-null, or 'n' if the corresponding new value is + array should be ' ' if the corresponding new value is + non-null, or 'n' if the corresponding new value is null. (In the latter case, the actual value in the corresponding values entry doesn't matter.) Note that nulls is not a text string, just an array: it - does not need a '\0' terminator. + does not need a '\0' terminator. @@ -4115,10 +4115,10 @@ HeapTuple SPI_modifytuple(Relation rel, HeapTuple SPI_ERROR_ARGUMENT - if rel is NULL, or if - row is NULL, or if ncols - is less than or equal to 0, or if colnum is - NULL, or if values is NULL. + if rel is NULL, or if + row is NULL, or if ncols + is less than or equal to 0, or if colnum is + NULL, or if values is NULL. @@ -4127,9 +4127,9 @@ HeapTuple SPI_modifytuple(Relation rel, HeapTuple SPI_ERROR_NOATTRIBUTE - if colnum contains an invalid column number (less + if colnum contains an invalid column number (less than or equal to 0 or greater than the number of columns in - row) + row) @@ -4211,7 +4211,7 @@ void SPI_freetuple(HeapTuple row) SPI_freetuptable - free a row set created by SPI_execute or a similar + free a row set created by SPI_execute or a similar function @@ -4227,7 +4227,7 @@ void SPI_freetuptable(SPITupleTable * tuptable) SPI_freetuptable frees a row set created by a prior SPI command execution function, such as - SPI_execute. Therefore, this function is often called + SPI_execute. Therefore, this function is often called with the global variable SPI_tuptable as argument. @@ -4236,14 +4236,14 @@ void SPI_freetuptable(SPITupleTable * tuptable) This function is useful if a SPI procedure needs to execute multiple commands and does not want to keep the results of earlier commands around until it ends. Note that any unfreed row sets will - be freed anyway at SPI_finish. + be freed anyway at SPI_finish. Also, if a subtransaction is started and then aborted within execution of a SPI procedure, SPI automatically frees any row sets created while the subtransaction was running. - Beginning in PostgreSQL 9.3, + Beginning in PostgreSQL 9.3, SPI_freetuptable contains guard logic to protect against duplicate deletion requests for the same row set. In previous releases, duplicate deletions would lead to crashes. @@ -4370,8 +4370,8 @@ INSERT INTO a SELECT * FROM a; All standard procedural languages set the SPI read-write mode depending on the volatility attribute of the function. Commands of - STABLE and IMMUTABLE functions are done in - read-only mode, while commands of VOLATILE functions are + STABLE and IMMUTABLE functions are done in + read-only mode, while commands of VOLATILE functions are done in read-write mode. While authors of C functions are able to violate this convention, it's unlikely to be a good idea to do so. 
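One last usage pattern worth showing before leaving SPI: a function that runs several read-only commands (as a STABLE function would) and frees each row set as it goes with SPI_freetuptable, instead of letting them all accumulate until SPI_finish. Sketch only; SPI_connect is assumed to have succeeded.

    #include "postgres.h"
    #include "executor/spi.h"

    /* Run several queries without accumulating row sets until SPI_finish. */
    static void
    run_many(const char **queries, int n)
    {
        int i;

        for (i = 0; i < n; i++)
        {
            int ret = SPI_execute(queries[i], true, 0);

            if (ret < 0)   /* SPI error codes are negative */
                elog(ERROR, "SPI_execute failed: %s",
                     SPI_result_code_string(ret));

            /* ... inspect SPI_tuptable / SPI_processed here ... */

            /* free this row set now rather than at SPI_finish */
            SPI_freetuptable(SPI_tuptable);
        }
    }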
diff --git a/doc/src/sgml/sslinfo.sgml b/doc/src/sgml/sslinfo.sgml index 1fd323a0b6..308e3e03a4 100644 --- a/doc/src/sgml/sslinfo.sgml +++ b/doc/src/sgml/sslinfo.sgml @@ -8,15 +8,15 @@ - The sslinfo module provides information about the SSL + The sslinfo module provides information about the SSL certificate that the current client provided when connecting to - PostgreSQL. The module is useless (most functions + PostgreSQL. The module is useless (most functions will return NULL) if the current connection does not use SSL. This extension won't build at all unless the installation was - configured with --with-openssl. + configured with --with-openssl. @@ -126,7 +126,7 @@ - The result looks like /CN=Somebody /C=Some country/O=Some organization. + The result looks like /CN=Somebody /C=Some country/O=Some organization. @@ -142,7 +142,7 @@ Returns the full issuer name of the current client certificate, converting character data into the current database encoding. Encoding conversions - are handled the same as for ssl_client_dn. + are handled the same as for ssl_client_dn. The combination of the return value of this function with the @@ -195,7 +195,7 @@ role emailAddress - All of these fields are optional, except commonName. + All of these fields are optional, except commonName. It depends entirely on your CA's policy which of them would be included and which wouldn't. The meaning of these fields, however, is strictly defined by @@ -214,7 +214,7 @@ emailAddress - Same as ssl_client_dn_field, but for the certificate issuer + Same as ssl_client_dn_field, but for the certificate issuer rather than the certificate subject. diff --git a/doc/src/sgml/start.sgml b/doc/src/sgml/start.sgml index 1ce1a24e10..7a61b50579 100644 --- a/doc/src/sgml/start.sgml +++ b/doc/src/sgml/start.sgml @@ -162,7 +162,7 @@ createdb: command not found - then PostgreSQL was not installed properly. Either it was not + then PostgreSQL was not installed properly. Either it was not installed at all or your shell's search path was not set to include it. Try calling the command with an absolute path instead: @@ -191,17 +191,17 @@ createdb: could not connect to database postgres: could not connect to server: N createdb: could not connect to database postgres: FATAL: role "joe" does not exist where your own login name is mentioned. This will happen if the - administrator has not created a PostgreSQL user account - for you. (PostgreSQL user accounts are distinct from + administrator has not created a PostgreSQL user account + for you. (PostgreSQL user accounts are distinct from operating system user accounts.) If you are the administrator, see for help creating accounts. You will need to - become the operating system user under which PostgreSQL - was installed (usually postgres) to create the first user + become the operating system user under which PostgreSQL + was installed (usually postgres) to create the first user account. It could also be that you were assigned a - PostgreSQL user name that is different from your - operating system user name; in that case you need to use the @@ -288,7 +288,7 @@ createdb: database creation failed: ERROR: permission denied to create database Running the PostgreSQL interactive - terminal program, called psql, which allows you + terminal program, called psql, which allows you to interactively enter, edit, and execute SQL commands. 
@@ -298,7 +298,7 @@ createdb: database creation failed: ERROR: permission denied to create database Using an existing graphical frontend tool like pgAdmin or an office suite with - ODBC or JDBC support to create and manipulate a + ODBC or JDBC support to create and manipulate a database. These possibilities are not covered in this tutorial. diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml index aed2cf8bca..0f9bddf7ab 100644 --- a/doc/src/sgml/storage.sgml +++ b/doc/src/sgml/storage.sgml @@ -21,23 +21,23 @@ directories. Traditionally, the configuration and data files used by a database cluster are stored together within the cluster's data -directory, commonly referred to as PGDATA (after the name of the +directory, commonly referred to as PGDATA (after the name of the environment variable that can be used to define it). A common location for -PGDATA is /var/lib/pgsql/data. Multiple clusters, +PGDATA is /var/lib/pgsql/data. Multiple clusters, managed by different server instances, can exist on the same machine. -The PGDATA directory contains several subdirectories and control +The PGDATA directory contains several subdirectories and control files, as shown in . In addition to these required items, the cluster configuration files postgresql.conf, pg_hba.conf, and pg_ident.conf are traditionally stored in -PGDATA, although it is possible to place them elsewhere. +PGDATA, although it is possible to place them elsewhere. -Contents of <varname>PGDATA</> +Contents of <varname>PGDATA</varname> @@ -51,126 +51,126 @@ Item - PG_VERSION + PG_VERSION A file containing the major version number of PostgreSQL - base + base Subdirectory containing per-database subdirectories - current_logfiles + current_logfiles File recording the log file(s) currently written to by the logging collector - global + global Subdirectory containing cluster-wide tables, such as - pg_database + pg_database - pg_commit_ts + pg_commit_ts Subdirectory containing transaction commit timestamp data - pg_dynshmem + pg_dynshmem Subdirectory containing files used by the dynamic shared memory subsystem - pg_logical + pg_logical Subdirectory containing status data for logical decoding - pg_multixact + pg_multixact Subdirectory containing multitransaction status data (used for shared row locks) - pg_notify + pg_notify Subdirectory containing LISTEN/NOTIFY status data - pg_replslot + pg_replslot Subdirectory containing replication slot data - pg_serial + pg_serial Subdirectory containing information about committed serializable transactions - pg_snapshots + pg_snapshots Subdirectory containing exported snapshots - pg_stat + pg_stat Subdirectory containing permanent files for the statistics subsystem - pg_stat_tmp + pg_stat_tmp Subdirectory containing temporary files for the statistics subsystem - pg_subtrans + pg_subtrans Subdirectory containing subtransaction status data - pg_tblspc + pg_tblspc Subdirectory containing symbolic links to tablespaces - pg_twophase + pg_twophase Subdirectory containing state files for prepared transactions - pg_wal + pg_wal Subdirectory containing WAL (Write Ahead Log) files - pg_xact + pg_xact Subdirectory containing transaction commit status data - postgresql.auto.conf + postgresql.auto.conf A file used for storing configuration parameters that are set by ALTER SYSTEM - postmaster.opts + postmaster.opts A file recording the command-line options the server was last started with - postmaster.pid + postmaster.pid A lock file recording the current postmaster process ID (PID), cluster data 
directory path, postmaster start timestamp, port number, Unix-domain socket directory path (empty on Windows), - first valid listen_address (IP address or *, or empty if + first valid listen_address (IP address or *, or empty if not listening on TCP), and shared memory segment ID (this file is not present after server shutdown) @@ -182,25 +182,25 @@ last started with For each database in the cluster there is a subdirectory within -PGDATA/base, named after the database's OID in -pg_database. This subdirectory is the default location +PGDATA/base, named after the database's OID in +pg_database. This subdirectory is the default location for the database's files; in particular, its system catalogs are stored there. Each table and index is stored in a separate file. For ordinary relations, -these files are named after the table or index's filenode number, -which can be found in pg_class.relfilenode. But +these files are named after the table or index's filenode number, +which can be found in pg_class.relfilenode. But for temporary relations, the file name is of the form -tBBB_FFF, where BBB -is the backend ID of the backend which created the file, and FFF +tBBB_FFF, where BBB +is the backend ID of the backend which created the file, and FFF is the filenode number. In either case, in addition to the main file (a/k/a -main fork), each table and index has a free space map (see free space map (see ), which stores information about free space available in the relation. The free space map is stored in a file named with the filenode -number plus the suffix _fsm. Tables also have a -visibility map, stored in a fork with the suffix _vm, +number plus the suffix _fsm. Tables also have a +visibility map, stored in a fork with the suffix _vm, to track which pages are known to have no dead tuples. The visibility map is described further in . Unlogged tables and indexes have a third fork, known as the initialization fork, which is stored in a fork @@ -210,36 +210,36 @@ with the suffix _init (see ). Note that while a table's filenode often matches its OID, this is -not necessarily the case; some operations, like -TRUNCATE, REINDEX, CLUSTER and some forms -of ALTER TABLE, can change the filenode while preserving the OID. +not necessarily the case; some operations, like +TRUNCATE, REINDEX, CLUSTER and some forms +of ALTER TABLE, can change the filenode while preserving the OID. Avoid assuming that filenode and table OID are the same. -Also, for certain system catalogs including pg_class itself, -pg_class.relfilenode contains zero. The +Also, for certain system catalogs including pg_class itself, +pg_class.relfilenode contains zero. The actual filenode number of these catalogs is stored in a lower-level data -structure, and can be obtained using the pg_relation_filenode() +structure, and can be obtained using the pg_relation_filenode() function. When a table or index exceeds 1 GB, it is divided into gigabyte-sized -segments. The first segment's file name is the same as the +segments. The first segment's file name is the same as the filenode; subsequent segments are named filenode.1, filenode.2, etc. This arrangement avoids problems on platforms that have file size limitations. (Actually, 1 GB is just the default segment size. The segment size can be adjusted using the configuration option -when building PostgreSQL.) +when building PostgreSQL.) In principle, free space map and visibility map forks could require multiple segments as well, though this is unlikely to happen in practice. 
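The segment-naming rule just described is mechanical enough to state in code. A sketch, using the default build-time values (the real RELSEG_SIZE and BLCKSZ come from pg_config.h):

    #include <stdio.h>
    #include <stdint.h>

    #define BLCKSZ      8192      /* default page size */
    #define RELSEG_SIZE 131072    /* blocks per segment: 131072 * 8 kB = 1 GB */

    /* Map a block number within a relation to its segment file name:
     * the first segment is the bare filenode, later ones are filenode.N. */
    static void
    segment_path(char *buf, size_t len, uint32_t filenode, uint32_t blockno)
    {
        uint32_t segno = blockno / RELSEG_SIZE;

        if (segno == 0)
            snprintf(buf, len, "%u", filenode);
        else
            snprintf(buf, len, "%u.%u", filenode, segno);
    }

For example, block 200000 of filenode 12345 lands in segment 1, i.e. the file 12345.1.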
A table that has columns with potentially large entries will have an -associated TOAST table, which is used for out-of-line storage of +associated TOAST table, which is used for out-of-line storage of field values that are too large to keep in the table rows proper. -pg_class.reltoastrelid links from a table to -its TOAST table, if any. +pg_class.reltoastrelid links from a table to +its TOAST table, if any. See for more information. @@ -250,45 +250,45 @@ The contents of tables and indexes are discussed further in Tablespaces make the scenario more complicated. Each user-defined tablespace -has a symbolic link inside the PGDATA/pg_tblspc +has a symbolic link inside the PGDATA/pg_tblspc directory, which points to the physical tablespace directory (i.e., the -location specified in the tablespace's CREATE TABLESPACE command). +location specified in the tablespace's CREATE TABLESPACE command). This symbolic link is named after the tablespace's OID. Inside the physical tablespace directory there is -a subdirectory with a name that depends on the PostgreSQL -server version, such as PG_9.0_201008051. (The reason for using +a subdirectory with a name that depends on the PostgreSQL +server version, such as PG_9.0_201008051. (The reason for using this subdirectory is so that successive versions of the database can use -the same CREATE TABLESPACE location value without conflicts.) +the same CREATE TABLESPACE location value without conflicts.) Within the version-specific subdirectory, there is a subdirectory for each database that has elements in the tablespace, named after the database's OID. Tables and indexes are stored within that directory, using the filenode naming scheme. -The pg_default tablespace is not accessed through -pg_tblspc, but corresponds to -PGDATA/base. Similarly, the pg_global -tablespace is not accessed through pg_tblspc, but corresponds to -PGDATA/global. +The pg_default tablespace is not accessed through +pg_tblspc, but corresponds to +PGDATA/base. Similarly, the pg_global +tablespace is not accessed through pg_tblspc, but corresponds to +PGDATA/global. -The pg_relation_filepath() function shows the entire path -(relative to PGDATA) of any relation. It is often useful +The pg_relation_filepath() function shows the entire path +(relative to PGDATA) of any relation. It is often useful as a substitute for remembering many of the above rules. But keep in mind that this function just gives the name of the first segment of the main fork of the relation — you may need to append a segment number -and/or _fsm, _vm, or _init to find all +and/or _fsm, _vm, or _init to find all the files associated with the relation. Temporary files (for operations such as sorting more data than can fit in -memory) are created within PGDATA/base/pgsql_tmp, -or within a pgsql_tmp subdirectory of a tablespace directory -if a tablespace other than pg_default is specified for them. +memory) are created within PGDATA/base/pgsql_tmp, +or within a pgsql_tmp subdirectory of a tablespace directory +if a tablespace other than pg_default is specified for them. The name of a temporary file has the form -pgsql_tmpPPP.NNN, -where PPP is the PID of the owning backend and -NNN distinguishes different temporary files of that backend. +pgsql_tmpPPP.NNN, +where PPP is the PID of the owning backend and +NNN distinguishes different temporary files of that backend. 
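Likewise the temporary-file name format, written out as a sketch:

    #include <stdio.h>

    /* pgsql_tmpPPP.NNN as described above, e.g. "pgsql_tmp12345.0" for the
     * first temporary file created by the backend with PID 12345. */
    static void
    temp_file_name(char *buf, size_t len, int owner_pid, long counter)
    {
        snprintf(buf, len, "pgsql_tmp%d.%ld", owner_pid, counter);
    }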
@@ -300,10 +300,10 @@ where PPP is the PID of the owning backend and TOAST - sliced breadTOAST + sliced breadTOAST -This section provides an overview of TOAST (The +This section provides an overview of TOAST (The Oversized-Attribute Storage Technique). @@ -314,36 +314,36 @@ not possible to store very large field values directly. To overcome this limitation, large field values are compressed and/or broken up into multiple physical rows. This happens transparently to the user, with only small impact on most of the backend code. The technique is affectionately -known as TOAST (or the best thing since sliced bread). -The TOAST infrastructure is also used to improve handling of +known as TOAST (or the best thing since sliced bread). +The TOAST infrastructure is also used to improve handling of large data values in-memory. -Only certain data types support TOAST — there is no need to +Only certain data types support TOAST — there is no need to impose the overhead on data types that cannot produce large field values. -To support TOAST, a data type must have a variable-length -(varlena) representation, in which, ordinarily, the first +To support TOAST, a data type must have a variable-length +(varlena) representation, in which, ordinarily, the first four-byte word of any stored value contains the total length of the value in -bytes (including itself). TOAST does not constrain the rest +bytes (including itself). TOAST does not constrain the rest of the data type's representation. The special representations collectively -called TOASTed values work by modifying or +called TOASTed values work by modifying or reinterpreting this initial length word. Therefore, the C-level functions -supporting a TOAST-able data type must be careful about how they -handle potentially TOASTed input values: an input might not +supporting a TOAST-able data type must be careful about how they +handle potentially TOASTed input values: an input might not actually consist of a four-byte length word and contents until after it's -been detoasted. (This is normally done by invoking -PG_DETOAST_DATUM before doing anything with an input value, +been detoasted. (This is normally done by invoking +PG_DETOAST_DATUM before doing anything with an input value, but in some cases more efficient approaches are possible. See for more detail.) -TOAST usurps two bits of the varlena length word (the high-order +TOAST usurps two bits of the varlena length word (the high-order bits on big-endian machines, the low-order bits on little-endian machines), -thereby limiting the logical size of any value of a TOAST-able -data type to 1 GB (230 - 1 bytes). When both bits are zero, -the value is an ordinary un-TOASTed value of the data type, and +thereby limiting the logical size of any value of a TOAST-able +data type to 1 GB (230 - 1 bytes). When both bits are zero, +the value is an ordinary un-TOASTed value of the data type, and the remaining bits of the length word give the total datum size (including length word) in bytes. When the highest-order or lowest-order bit is set, the value has only a single-byte header instead of the normal four-byte @@ -357,7 +357,7 @@ additional space savings that is significant compared to short values. As a special case, if the remaining bits of a single-byte header are all zero (which would be impossible for a self-inclusive length), the value is a pointer to out-of-line data, with several possible alternatives as -described below. The type and size of such a TOAST pointer +described below. 
The type and size of such a TOAST pointer are determined by a code stored in the second byte of the datum. Lastly, when the highest-order or lowest-order bit is clear but the adjacent bit is set, the content of the datum has been compressed and must be @@ -365,19 +365,19 @@ decompressed before use. In this case the remaining bits of the four-byte length word give the total size of the compressed datum, not the original data. Note that compression is also possible for out-of-line data but the varlena header does not tell whether it has occurred — -the content of the TOAST pointer tells that, instead. +the content of the TOAST pointer tells that, instead. -As mentioned, there are multiple types of TOAST pointer datums. +As mentioned, there are multiple types of TOAST pointer datums. The oldest and most common type is a pointer to out-of-line data stored in -a TOAST table that is separate from, but -associated with, the table containing the TOAST pointer datum -itself. These on-disk pointer datums are created by the -TOAST management code (in access/heap/tuptoaster.c) +a TOAST table that is separate from, but +associated with, the table containing the TOAST pointer datum +itself. These on-disk pointer datums are created by the +TOAST management code (in access/heap/tuptoaster.c) when a tuple to be stored on disk is too large to be stored as-is. Further details appear in . -Alternatively, a TOAST pointer datum can contain a pointer to +Alternatively, a TOAST pointer datum can contain a pointer to out-of-line data that appears elsewhere in memory. Such datums are necessarily short-lived, and will never appear on-disk, but they are very useful for avoiding copying and redundant processing of large data values. @@ -388,57 +388,57 @@ Further details appear in . The compression technique used for either in-line or out-of-line compressed data is a fairly simple and very fast member of the LZ family of compression techniques. See -src/common/pg_lzcompress.c for the details. +src/common/pg_lzcompress.c for the details. Out-of-line, on-disk TOAST storage -If any of the columns of a table are TOAST-able, the table will -have an associated TOAST table, whose OID is stored in the table's -pg_class.reltoastrelid entry. On-disk -TOASTed values are kept in the TOAST table, as +If any of the columns of a table are TOAST-able, the table will +have an associated TOAST table, whose OID is stored in the table's +pg_class.reltoastrelid entry. On-disk +TOASTed values are kept in the TOAST table, as described in more detail below. Out-of-line values are divided (after compression if used) into chunks of at -most TOAST_MAX_CHUNK_SIZE bytes (by default this value is chosen +most TOAST_MAX_CHUNK_SIZE bytes (by default this value is chosen so that four chunk rows will fit on a page, making it about 2000 bytes). -Each chunk is stored as a separate row in the TOAST table +Each chunk is stored as a separate row in the TOAST table belonging to the owning table. Every -TOAST table has the columns chunk_id (an OID -identifying the particular TOASTed value), -chunk_seq (a sequence number for the chunk within its value), -and chunk_data (the actual data of the chunk). A unique index -on chunk_id and chunk_seq provides fast +TOAST table has the columns chunk_id (an OID +identifying the particular TOASTed value), +chunk_seq (a sequence number for the chunk within its value), +and chunk_data (the actual data of the chunk). A unique index +on chunk_id and chunk_seq provides fast retrieval of the values. 
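A small worked example of the chunking rule: with the default 8 kB page size, TOAST_MAX_CHUNK_SIZE comes out near 2000 bytes (1996 is typical), so a 100 kB value — measured after any compression — is stored as ceil(100000 / 1996) = 51 chunk rows. In code (a sketch; the real constant is computed at build time from the page size):

    /* Illustrative only: the real TOAST_MAX_CHUNK_SIZE is derived from the
     * block size so that four chunk rows fit on a page. */
    #define TOAST_MAX_CHUNK_SIZE 1996

    /* Number of chunk rows needed for an out-of-line value of datalen
     * bytes (datalen measured after any compression). */
    static int
    toast_chunk_count(int datalen)
    {
        return (datalen + TOAST_MAX_CHUNK_SIZE - 1) / TOAST_MAX_CHUNK_SIZE;
    }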
A pointer datum representing an out-of-line on-disk -TOASTed value therefore needs to store the OID of the -TOAST table in which to look and the OID of the specific value -(its chunk_id). For convenience, pointer datums also store the +TOASTed value therefore needs to store the OID of the +TOAST table in which to look and the OID of the specific value +(its chunk_id). For convenience, pointer datums also store the logical datum size (original uncompressed data length) and physical stored size (different if compression was applied). Allowing for the varlena header bytes, -the total size of an on-disk TOAST pointer datum is therefore 18 +the total size of an on-disk TOAST pointer datum is therefore 18 bytes regardless of the actual size of the represented value. -The TOAST management code is triggered only +The TOAST management code is triggered only when a row value to be stored in a table is wider than -TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). -The TOAST code will compress and/or move +TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). +The TOAST code will compress and/or move field values out-of-line until the row value is shorter than -TOAST_TUPLE_TARGET bytes (also normally 2 kB) +TOAST_TUPLE_TARGET bytes (also normally 2 kB) or no more gains can be had. During an UPDATE operation, values of unchanged fields are normally preserved as-is; so an -UPDATE of a row with out-of-line values incurs no TOAST costs if +UPDATE of a row with out-of-line values incurs no TOAST costs if none of the out-of-line values change. -The TOAST management code recognizes four different strategies -for storing TOAST-able columns on disk: +The TOAST management code recognizes four different strategies +for storing TOAST-able columns on disk: @@ -447,13 +447,13 @@ for storing TOAST-able columns on disk: out-of-line storage; furthermore it disables use of single-byte headers for varlena types. This is the only possible strategy for - columns of non-TOAST-able data types. + columns of non-TOAST-able data types. EXTENDED allows both compression and out-of-line - storage. This is the default for most TOAST-able data types. + storage. This is the default for most TOAST-able data types. Compression will be attempted first, then out-of-line storage if the row is still too big. @@ -478,9 +478,9 @@ for storing TOAST-able columns on disk: -Each TOAST-able data type specifies a default strategy for columns +Each TOAST-able data type specifies a default strategy for columns of that data type, but the strategy for a given table column can be altered -with ALTER TABLE ... SET STORAGE. +with ALTER TABLE ... SET STORAGE. @@ -488,15 +488,15 @@ This scheme has a number of advantages compared to a more straightforward approach such as allowing row values to span pages. Assuming that queries are usually qualified by comparisons against relatively small key values, most of the work of the executor will be done using the main row entry. The big values -of TOASTed attributes will only be pulled out (if selected at all) +of TOASTed attributes will only be pulled out (if selected at all) at the time the result set is sent to the client. Thus, the main table is much smaller and more of its rows fit in the shared buffer cache than would be the case without any out-of-line storage. Sort sets shrink also, and sorts will more often be done entirely in memory. 
A little test showed that a table containing typical HTML pages and their URLs was stored in about half of the -raw data size including the TOAST table, and that the main table +raw data size including the TOAST table, and that the main table contained only about 10% of the entire data (the URLs and some small HTML -pages). There was no run time difference compared to an un-TOASTed +pages). There was no run time difference compared to an un-TOASTed comparison table, in which all the HTML pages were cut down to 7 kB to fit. @@ -506,16 +506,16 @@ comparison table, in which all the HTML pages were cut down to 7 kB to fit. Out-of-line, in-memory TOAST storage -TOAST pointers can point to data that is not on disk, but is +TOAST pointers can point to data that is not on disk, but is elsewhere in the memory of the current server process. Such pointers obviously cannot be long-lived, but they are nonetheless useful. There are currently two sub-cases: -pointers to indirect data and -pointers to expanded data. +pointers to indirect data and +pointers to expanded data. -Indirect TOAST pointers simply point at a non-indirect varlena +Indirect TOAST pointers simply point at a non-indirect varlena value stored somewhere in memory. This case was originally created merely as a proof of concept, but it is currently used during logical decoding to avoid possibly having to create physical tuples exceeding 1 GB (as pulling @@ -526,34 +526,34 @@ and there is no infrastructure to help with this. -Expanded TOAST pointers are useful for complex data types +Expanded TOAST pointers are useful for complex data types whose on-disk representation is not especially suited for computational purposes. As an example, the standard varlena representation of a -PostgreSQL array includes dimensionality information, a +PostgreSQL array includes dimensionality information, a nulls bitmap if there are any null elements, then the values of all the elements in order. When the element type itself is variable-length, the -only way to find the N'th element is to scan through all the +only way to find the N'th element is to scan through all the preceding elements. This representation is appropriate for on-disk storage because of its compactness, but for computations with the array it's much -nicer to have an expanded or deconstructed +nicer to have an expanded or deconstructed representation in which all the element starting locations have been -identified. The TOAST pointer mechanism supports this need by +identified. The TOAST pointer mechanism supports this need by allowing a pass-by-reference Datum to point to either a standard varlena -value (the on-disk representation) or a TOAST pointer that +value (the on-disk representation) or a TOAST pointer that points to an expanded representation somewhere in memory. The details of this expanded representation are up to the data type, though it must have a standard header and meet the other API requirements given -in src/include/utils/expandeddatum.h. C-level functions +in src/include/utils/expandeddatum.h. C-level functions working with the data type can choose to handle either representation. Functions that do not know about the expanded representation, but simply -apply PG_DETOAST_DATUM to their inputs, will automatically +apply PG_DETOAST_DATUM to their inputs, will automatically receive the traditional varlena representation; so support for an expanded representation can be introduced incrementally, one function at a time. 
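The incremental-adoption point in the last paragraph is visible in even the simplest function: the standard argument-fetch macros detoast for you, so code written against the traditional varlena layout keeps working no matter how the value was stored. A minimal sketch (the function name is hypothetical; PG_MODULE_MAGIC is assumed to appear once elsewhere in the module):

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(byte_length);

    /* Length in bytes of a text value.  PG_GETARG_TEXT_PP() detoasts the
     * input (it applies PG_DETOAST_DATUM_PACKED), so the same code handles
     * inline, compressed, and out-of-line inputs alike. */
    Datum
    byte_length(PG_FUNCTION_ARGS)
    {
        text   *t = PG_GETARG_TEXT_PP(0);

        PG_RETURN_INT32((int32) VARSIZE_ANY_EXHDR(t));
    }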
-TOAST pointers to expanded values are further broken down -into read-write and read-only pointers. +TOAST pointers to expanded values are further broken down +into read-write and read-only pointers. The pointed-to representation is the same either way, but a function that receives a read-write pointer is allowed to modify the referenced value in-place, whereas one that receives a read-only pointer must not; it must @@ -563,11 +563,11 @@ unnecessary copying of expanded values during query execution. -For all types of in-memory TOAST pointer, the TOAST +For all types of in-memory TOAST pointer, the TOAST management code ensures that no such pointer datum can accidentally get -stored on disk. In-memory TOAST pointers are automatically +stored on disk. In-memory TOAST pointers are automatically expanded to normal in-line varlena values before storage — and then -possibly converted to on-disk TOAST pointers, if the containing +possibly converted to on-disk TOAST pointers, if the containing tuple would otherwise be too big. @@ -582,35 +582,35 @@ tuple would otherwise be too big. Free Space Map -FSMFree Space Map +FSMFree Space Map Each heap and index relation, except for hash indexes, has a Free Space Map (FSM) to keep track of available space in the relation. It's stored alongside the main relation data in a separate relation fork, named after the -filenode number of the relation, plus a _fsm suffix. For example, +filenode number of the relation, plus a _fsm suffix. For example, if the filenode of a relation is 12345, the FSM is stored in a file called -12345_fsm, in the same directory as the main relation file. +12345_fsm, in the same directory as the main relation file. -The Free Space Map is organized as a tree of FSM pages. The -bottom level FSM pages store the free space available on each +The Free Space Map is organized as a tree of FSM pages. The +bottom level FSM pages store the free space available on each heap (or index) page, using one byte to represent each such page. The upper levels aggregate information from the lower levels. -Within each FSM page is a binary tree, stored in an array with +Within each FSM page is a binary tree, stored in an array with one byte per node. Each leaf node represents a heap page, or a lower level -FSM page. In each non-leaf node, the higher of its children's +FSM page. In each non-leaf node, the higher of its children's values is stored. The maximum value in the leaf nodes is therefore stored at the root. -See src/backend/storage/freespace/README for more details on -how the FSM is structured, and how it's updated and searched. +See src/backend/storage/freespace/README for more details on +how the FSM is structured, and how it's updated and searched. The module can be used to examine the information stored in free space maps. @@ -624,7 +624,7 @@ can be used to examine the information stored in free space maps. Visibility Map -VMVisibility Map +VMVisibility Map Each heap relation has a Visibility Map @@ -632,9 +632,9 @@ Each heap relation has a Visibility Map visible to all active transactions; it also keeps track of which pages contain only frozen tuples. It's stored alongside the main relation data in a separate relation fork, named after the -filenode number of the relation, plus a _vm suffix. For example, +filenode number of the relation, plus a _vm suffix. For example, if the filenode of a relation is 12345, the VM is stored in a file called -12345_vm, in the same directory as the main relation file. 
+12345_vm, in the same directory as the main relation file. Note that indexes do not have VMs. @@ -644,7 +644,7 @@ indicates that the page is all-visible, or in other words that the page does not contain any tuples that need to be vacuumed. This information can also be used by index-only -scans to answer queries using only the index tuple. +scans to answer queries using only the index tuple. The second bit, if set, means that all tuples on the page have been frozen. That means that even an anti-wraparound vacuum need not revisit the page. @@ -695,7 +695,7 @@ This section provides an overview of the page format used within the item layout rules. -Sequences and TOAST tables are formatted just like a regular table. +Sequences and TOAST tables are formatted just like a regular table. @@ -708,11 +708,11 @@ an item is a row; in an index, an item is an index entry. -Every table and index is stored as an array of pages of a +Every table and index is stored as an array of pages of a fixed size (usually 8 kB, although a different page size can be selected when compiling the server). In a table, all the pages are logically equivalent, so a particular item (row) can be stored in any page. In -indexes, the first page is generally reserved as a metapage +indexes, the first page is generally reserved as a metapage holding control information, and there can be different types of pages within the index, depending on the index access method. @@ -773,7 +773,7 @@ data. Empty in ordinary tables. The first 24 bytes of each page consists of a page header - (PageHeaderData). Its format is detailed in PageHeaderData). Its format is detailed in . The first field tracks the most recent WAL entry related to this page. The second field contains the page checksum if are @@ -880,7 +880,7 @@ data. Empty in ordinary tables. New item identifiers are allocated as needed from the beginning of the unallocated space. The number of item identifiers present can be determined by looking at - pd_lower, which is increased to allocate a new identifier. + pd_lower, which is increased to allocate a new identifier. Because an item identifier is never moved until it is freed, its index can be used on a long-term basis to reference an item, even when the item itself is moved @@ -908,7 +908,7 @@ data. Empty in ordinary tables. b-tree indexes store links to the page's left and right siblings, as well as some other data relevant to the index structure. Ordinary tables do not use a special section at all (indicated by setting - pd_special to equal the page size). + pd_special to equal the page size). @@ -920,19 +920,19 @@ data. Empty in ordinary tables. detailed in . The actual user data (columns of the row) begins at the offset indicated by - t_hoff, which must always be a multiple of the MAXALIGN + t_hoff, which must always be a multiple of the MAXALIGN distance for the platform. The null bitmap is only present if the HEAP_HASNULL bit is set in t_infomask. If it is present it begins just after the fixed header and occupies enough bytes to have one bit per data column - (that is, t_natts bits altogether). In this list of bits, a + (that is, t_natts bits altogether). In this list of bits, a 1 bit indicates not-null, a 0 bit is a null. When the bitmap is not present, all columns are assumed not-null. The object ID is only present if the HEAP_HASOID bit is set in t_infomask. If present, it appears just - before the t_hoff boundary. 
Any padding needed to make - t_hoff a MAXALIGN multiple will appear between the null + before the t_hoff boundary. Any padding needed to make + t_hoff a MAXALIGN multiple will appear between the null bitmap and the object ID. (This in turn ensures that the object ID is suitably aligned.) @@ -1031,7 +1031,7 @@ data. Empty in ordinary tables. All variable-length data types share the common header structure struct varlena, which includes the total length of the stored value and some flag bits. Depending on the flags, the data can be either - inline or in a TOAST table; + inline or in a TOAST table; it might be compressed, too (see ). diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml index 06f0f0b8e0..e4012cc182 100644 --- a/doc/src/sgml/syntax.sgml +++ b/doc/src/sgml/syntax.sgml @@ -119,7 +119,7 @@ INSERT INTO MY_TABLE VALUES (3, 'hi there'); (_). Subsequent characters in an identifier or key word can be letters, underscores, digits (0-9), or dollar signs - ($). Note that dollar signs are not allowed in identifiers + ($). Note that dollar signs are not allowed in identifiers according to the letter of the SQL standard, so their use might render applications less portable. The SQL standard will not define a key word that contains @@ -240,7 +240,7 @@ U&"d!0061t!+000061" UESCAPE '!' The Unicode escape syntax works only when the server encoding is - UTF8. When other server encodings are used, only code + UTF8. When other server encodings are used, only code points in the ASCII range (up to \007F) can be specified. Both the 4-digit and the 6-digit form can be used to specify UTF-16 surrogate pairs to compose characters with code @@ -258,7 +258,7 @@ U&"d!0061t!+000061" UESCAPE '!' PostgreSQL, but "Foo" and "FOO" are different from these three and each other. (The folding of - unquoted names to lower case in PostgreSQL is + unquoted names to lower case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be folded to upper case. Thus, foo should be equivalent to "FOO" not @@ -305,8 +305,8 @@ U&"d!0061t!+000061" UESCAPE '!' a single-quote character within a string constant, write two adjacent single quotes, e.g., 'Dianne''s horse'. - Note that this is not the same as a double-quote - character ("). + Note that this is not the same as a double-quote + character ("). @@ -343,15 +343,15 @@ SELECT 'foo' 'bar'; - PostgreSQL also accepts escape + PostgreSQL also accepts escape string constants, which are an extension to the SQL standard. An escape string constant is specified by writing the letter E (upper or lower case) just before the opening single - quote, e.g., E'foo'. (When continuing an escape string - constant across lines, write E only before the first opening + quote, e.g., E'foo'. (When continuing an escape string + constant across lines, write E only before the first opening quote.) - Within an escape string, a backslash character (\) begins a - C-like backslash escape sequence, in which the combination + Within an escape string, a backslash character (\) begins a + C-like backslash escape sequence, in which the combination of backslash and following character(s) represent a special byte value, as shown in . @@ -361,7 +361,7 @@ SELECT 'foo' 'bar'; - Backslash Escape Sequence + Backslash Escape Sequence Interpretation @@ -419,9 +419,9 @@ SELECT 'foo' 'bar'; Any other character following a backslash is taken literally. Thus, to - include a backslash character, write two backslashes (\\). 
+ include a backslash character, write two backslashes (\\). Also, a single quote can be included in an escape string by writing - \', in addition to the normal way of ''. + \', in addition to the normal way of ''. @@ -437,35 +437,35 @@ SELECT 'foo' 'bar'; The Unicode escape syntax works fully only when the server - encoding is UTF8. When other server encodings are + encoding is UTF8. When other server encodings are used, only code points in the ASCII range (up - to \u007F) can be specified. Both the 4-digit and + to \u007F) can be specified. Both the 4-digit and the 8-digit form can be used to specify UTF-16 surrogate pairs to compose characters with code points larger than U+FFFF, although the availability of the 8-digit form technically makes this unnecessary. (When surrogate pairs are used when the server - encoding is UTF8, they are first combined into a + encoding is UTF8, they are first combined into a single code point that is then encoded in UTF-8.) If the configuration parameter - is off, + is off, then PostgreSQL recognizes backslash escapes in both regular and escape string constants. However, as of - PostgreSQL 9.1, the default is on, meaning + PostgreSQL 9.1, the default is on, meaning that backslash escapes are recognized only in escape string constants. This behavior is more standards-compliant, but might break applications which rely on the historical behavior, where backslash escapes were always recognized. As a workaround, you can set this parameter - to off, but it is better to migrate away from using backslash + to off, but it is better to migrate away from using backslash escapes. If you need to use a backslash escape to represent a special - character, write the string constant with an E. + character, write the string constant with an E. - In addition to standard_conforming_strings, the configuration + In addition to standard_conforming_strings, the configuration parameters and govern treatment of backslashes in string constants. @@ -525,13 +525,13 @@ U&'d!0061t!+000061' UESCAPE '!' The Unicode escape syntax works only when the server encoding is - UTF8. When other server encodings are used, only + UTF8. When other server encodings are used, only code points in the ASCII range (up to \007F) can be specified. Both the 4-digit and the 6-digit form can be used to specify UTF-16 surrogate pairs to compose characters with code points larger than U+FFFF, although the availability of the 6-digit form technically makes this unnecessary. (When surrogate - pairs are used when the server encoding is UTF8, they + pairs are used when the server encoding is UTF8, they are first combined into a single code point that is then encoded in UTF-8.) @@ -573,7 +573,7 @@ U&'d!0061t!+000061' UESCAPE '!' sign, an arbitrary sequence of characters that makes up the string content, a dollar sign, the same tag that began this dollar quote, and a dollar sign. For example, here are two - different ways to specify the string Dianne's horse + different ways to specify the string Dianne's horse using dollar quoting: $$Dianne's horse$$ @@ -598,11 +598,11 @@ BEGIN END; $function$ - Here, the sequence $q$[\t\r\n\v\\]$q$ represents a - dollar-quoted literal string [\t\r\n\v\\], which will + Here, the sequence $q$[\t\r\n\v\\]$q$ represents a + dollar-quoted literal string [\t\r\n\v\\], which will be recognized when the function body is executed by - PostgreSQL. But since the sequence does not match - the outer dollar quoting delimiter $function$, it is + PostgreSQL. 
But since the sequence does not match + the outer dollar quoting delimiter $function$, it is just some more characters within the constant so far as the outer string is concerned. @@ -707,13 +707,13 @@ $function$ bigint numeric A numeric constant that contains neither a decimal point nor an - exponent is initially presumed to be type integer if its - value fits in type integer (32 bits); otherwise it is - presumed to be type bigint if its - value fits in type bigint (64 bits); otherwise it is - taken to be type numeric. Constants that contain decimal + exponent is initially presumed to be type integer if its + value fits in type integer (32 bits); otherwise it is + presumed to be type bigint if its + value fits in type bigint (64 bits); otherwise it is + taken to be type numeric. Constants that contain decimal points and/or exponents are always initially presumed to be type - numeric. + numeric. @@ -724,7 +724,7 @@ $function$ force a numeric value to be interpreted as a specific data type by casting it.type cast For example, you can force a numeric value to be treated as type - real (float4) by writing: + real (float4) by writing: REAL '1.23' -- string style @@ -780,17 +780,17 @@ CAST ( 'string' AS type ) function-call syntaxes can also be used to specify run-time type conversions of arbitrary expressions, as discussed in . To avoid syntactic ambiguity, the - type 'string' + type 'string' syntax can only be used to specify the type of a simple literal constant. Another restriction on the - type 'string' + type 'string' syntax is that it does not work for array types; use :: or CAST() to specify the type of an array constant. - The CAST() syntax conforms to SQL. The - type 'string' + The CAST() syntax conforms to SQL. The + type 'string' syntax is a generalization of the standard: SQL specifies this syntax only for a few data types, but PostgreSQL allows it for all types. The syntax with @@ -827,7 +827,7 @@ CAST ( 'string' AS type ) - A multiple-character operator name cannot end in + or -, + A multiple-character operator name cannot end in + or -, unless the name also contains at least one of these characters: ~ ! @ # % ^ & | ` ? @@ -981,7 +981,7 @@ CAST ( 'string' AS type ) shows the precedence and - associativity of the operators in PostgreSQL. + associativity of the operators in PostgreSQL. Most operators have the same precedence and are left-associative. The precedence and associativity of the operators is hard-wired into the parser. @@ -1085,8 +1085,8 @@ SELECT (5 !) - 6; IS ISNULL NOTNULL - IS TRUE, IS FALSE, IS - NULL, IS DISTINCT FROM, etc + IS TRUE, IS FALSE, IS + NULL, IS DISTINCT FROM, etc @@ -1121,29 +1121,29 @@ SELECT (5 !) - 6; When a schema-qualified operator name is used in the - OPERATOR syntax, as for example in: + OPERATOR syntax, as for example in: SELECT 3 OPERATOR(pg_catalog.+) 4; - the OPERATOR construct is taken to have the default precedence + the OPERATOR construct is taken to have the default precedence shown in for - any other operator. This is true no matter - which specific operator appears inside OPERATOR(). + any other operator. This is true no matter + which specific operator appears inside OPERATOR(). - PostgreSQL versions before 9.5 used slightly different + PostgreSQL versions before 9.5 used slightly different operator precedence rules. 
In particular, <= >= and <> used to be treated as - generic operators; IS tests used to have higher priority; - and NOT BETWEEN and related constructs acted inconsistently, - being taken in some cases as having the precedence of NOT - rather than BETWEEN. These rules were changed for better + generic operators; IS tests used to have higher priority; + and NOT BETWEEN and related constructs acted inconsistently, + being taken in some cases as having the precedence of NOT + rather than BETWEEN. These rules were changed for better compliance with the SQL standard and to reduce confusion from inconsistent treatment of logically equivalent constructs. In most cases, these changes will result in no behavioral change, or perhaps - in no such operator failures which can be resolved by adding + in no such operator failures which can be resolved by adding parentheses. However there are corner cases in which a query might change behavior without any parsing error being reported. If you are concerned about whether these changes have silently broken something, @@ -1279,7 +1279,7 @@ SELECT 3 OPERATOR(pg_catalog.+) 4; Another value expression in parentheses (used to group subexpressions and override - precedenceparenthesis) + precedenceparenthesis) @@ -1376,7 +1376,7 @@ CREATE FUNCTION dept(text) RETURNS dept expression[subscript] - or multiple adjacent elements (an array slice) can be extracted + or multiple adjacent elements (an array slice) can be extracted by writing expression[lower_subscript:upper_subscript] @@ -1443,8 +1443,8 @@ $1.somecolumn The parentheses are required here to show that - compositecol is a column name not a table name, - or that mytable is a table name not a schema name + compositecol is a column name not a table name, + or that mytable is a table name not a schema name in the second case. @@ -1479,7 +1479,7 @@ $1.somecolumn key words AND, OR, and NOT, or is a qualified operator name in the form: -OPERATOR(schema.operatorname) +OPERATOR(schema.operatorname) Which particular operators exist and whether they are unary or binary depends on what operators have been @@ -1528,10 +1528,10 @@ sqrt(2) A function that takes a single argument of composite type can optionally be called using field-selection syntax, and conversely field selection can be written in functional style. That is, the - notations col(table) and table.col are + notations col(table) and table.col are interchangeable. This behavior is not SQL-standard but is provided - in PostgreSQL because it allows use of functions to - emulate computed fields. For more information see + in PostgreSQL because it allows use of functions to + emulate computed fields. For more information see . @@ -1592,7 +1592,7 @@ sqrt(2) The fourth form invokes the aggregate once for each input row; since no particular input value is specified, it is generally only useful for the count(*) aggregate function. - The last form is used with ordered-set aggregate + The last form is used with ordered-set aggregate functions, which are described below. @@ -1607,7 +1607,7 @@ sqrt(2) For example, count(*) yields the total number of input rows; count(f1) yields the number of input rows in which f1 is non-null, since - count ignores nulls; and + count ignores nulls; and count(distinct f1) yields the number of distinct non-null values of f1. @@ -1615,13 +1615,13 @@ sqrt(2) Ordinarily, the input rows are fed to the aggregate function in an unspecified order. 
In many cases this does not matter; for example, - min produces the same result no matter what order it + min produces the same result no matter what order it receives the inputs in. However, some aggregate functions - (such as array_agg and string_agg) produce + (such as array_agg and string_agg) produce results that depend on the ordering of the input rows. When using - such an aggregate, the optional order_by_clause can be - used to specify the desired ordering. The order_by_clause - has the same syntax as for a query-level ORDER BY clause, as + such an aggregate, the optional order_by_clause can be + used to specify the desired ordering. The order_by_clause + has the same syntax as for a query-level ORDER BY clause, as described in , except that its expressions are always just expressions and cannot be output-column names or numbers. For example: @@ -1632,7 +1632,7 @@ SELECT array_agg(a ORDER BY b DESC) FROM table; When dealing with multiple-argument aggregate functions, note that the - ORDER BY clause goes after all the aggregate arguments. + ORDER BY clause goes after all the aggregate arguments. For example, write this: SELECT string_agg(a, ',' ORDER BY a) FROM table; @@ -1642,58 +1642,58 @@ SELECT string_agg(a, ',' ORDER BY a) FROM table; SELECT string_agg(a ORDER BY a, ',') FROM table; -- incorrect The latter is syntactically valid, but it represents a call of a - single-argument aggregate function with two ORDER BY keys + single-argument aggregate function with two ORDER BY keys (the second one being rather useless since it's a constant). - If DISTINCT is specified in addition to an - order_by_clause, then all the ORDER BY + If DISTINCT is specified in addition to an + order_by_clause, then all the ORDER BY expressions must match regular arguments of the aggregate; that is, you cannot sort on an expression that is not included in the - DISTINCT list. + DISTINCT list. - The ability to specify both DISTINCT and ORDER BY - in an aggregate function is a PostgreSQL extension. + The ability to specify both DISTINCT and ORDER BY + in an aggregate function is a PostgreSQL extension. - Placing ORDER BY within the aggregate's regular argument + Placing ORDER BY within the aggregate's regular argument list, as described so far, is used when ordering the input rows for general-purpose and statistical aggregates, for which ordering is optional. There is a subclass of aggregate functions called ordered-set - aggregates for which an order_by_clause - is required, usually because the aggregate's computation is + aggregates for which an order_by_clause + is required, usually because the aggregate's computation is only sensible in terms of a specific ordering of its input rows. Typical examples of ordered-set aggregates include rank and percentile calculations. For an ordered-set aggregate, the order_by_clause is written - inside WITHIN GROUP (...), as shown in the final syntax + inside WITHIN GROUP (...), as shown in the final syntax alternative above. The expressions in the order_by_clause are evaluated once per input row just like regular aggregate arguments, sorted as per the order_by_clause's requirements, and fed to the aggregate function as input arguments. (This is unlike the case - for a non-WITHIN GROUP order_by_clause, + for a non-WITHIN GROUP order_by_clause, which is not treated as argument(s) to the aggregate function.) 
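To illustrate the interaction of DISTINCT with a non-WITHIN GROUP order_by_clause described above, a minimal sketch (the table t and its column a are assumptions for the example):

SELECT array_agg(DISTINCT a ORDER BY a) FROM t;

Because a is also a regular argument of the aggregate, it is legal to sort on it while specifying DISTINCT; sorting on any other column here would be rejected.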
The - argument expressions preceding WITHIN GROUP, if any, are - called direct arguments to distinguish them from - the aggregated arguments listed in + argument expressions preceding WITHIN GROUP, if any, are + called direct arguments to distinguish them from + the aggregated arguments listed in the order_by_clause. Unlike regular aggregate arguments, direct arguments are evaluated only once per aggregate call, not once per input row. This means that they can contain variables only - if those variables are grouped by GROUP BY; this restriction + if those variables are grouped by GROUP BY; this restriction is the same as if the direct arguments were not inside an aggregate expression at all. Direct arguments are typically used for things like percentile fractions, which only make sense as a single value per aggregation calculation. The direct argument list can be empty; in this - case, write just () not (*). - (PostgreSQL will actually accept either spelling, but + case, write just () not (*). + (PostgreSQL will actually accept either spelling, but only the first way conforms to the SQL standard.) @@ -1712,8 +1712,8 @@ SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households; which obtains the 50th percentile, or median, value of - the income column from table households. - Here, 0.5 is a direct argument; it would make no sense + the income column from table households. + Here, 0.5 is a direct argument; it would make no sense for the percentile fraction to be a value varying across rows. @@ -1742,8 +1742,8 @@ FROM generate_series(1,10) AS s(i); An aggregate expression can only appear in the result list or - HAVING clause of a SELECT command. - It is forbidden in other clauses, such as WHERE, + HAVING clause of a SELECT command. + It is forbidden in other clauses, such as WHERE, because those clauses are logically evaluated before the results of aggregates are formed. @@ -1760,7 +1760,7 @@ FROM generate_series(1,10) AS s(i); as a whole is then an outer reference for the subquery it appears in, and acts as a constant over any one evaluation of that subquery. The restriction about - appearing only in the result list or HAVING clause + appearing only in the result list or HAVING clause applies with respect to the query level that the aggregate belongs to. @@ -1784,7 +1784,7 @@ FROM generate_series(1,10) AS s(i); to grouping of the selected rows into a single output row — each row remains separate in the query output. However the window function has access to all the rows that would be part of the current row's - group according to the grouping specification (PARTITION BY + group according to the grouping specification (PARTITION BY list) of the window function call. The syntax of a window function call is one of the following: @@ -1805,10 +1805,10 @@ FROM generate_series(1,10) AS s(i); and the optional frame_clause can be one of -{ RANGE | ROWS } frame_start -{ RANGE | ROWS } BETWEEN frame_start AND frame_end +{ RANGE | ROWS } frame_start +{ RANGE | ROWS } BETWEEN frame_start AND frame_end - where frame_start and frame_end can be + where frame_start and frame_end can be one of UNBOUNDED PRECEDING @@ -1831,59 +1831,59 @@ UNBOUNDED FOLLOWING be given within parentheses, using the same syntax as for defining a named window in the WINDOW clause; see the reference page for details. 
It's worth - pointing out that OVER wname is not exactly equivalent to - OVER (wname ...); the latter implies copying and modifying the + pointing out that OVER wname is not exactly equivalent to + OVER (wname ...); the latter implies copying and modifying the window definition, and will be rejected if the referenced window specification includes a frame clause. - The PARTITION BY clause groups the rows of the query into - partitions, which are processed separately by the window - function. PARTITION BY works similarly to a query-level - GROUP BY clause, except that its expressions are always just + The PARTITION BY clause groups the rows of the query into + partitions, which are processed separately by the window + function. PARTITION BY works similarly to a query-level + GROUP BY clause, except that its expressions are always just expressions and cannot be output-column names or numbers. - Without PARTITION BY, all rows produced by the query are + Without PARTITION BY, all rows produced by the query are treated as a single partition. - The ORDER BY clause determines the order in which the rows + The ORDER BY clause determines the order in which the rows of a partition are processed by the window function. It works similarly - to a query-level ORDER BY clause, but likewise cannot use - output-column names or numbers. Without ORDER BY, rows are + to a query-level ORDER BY clause, but likewise cannot use + output-column names or numbers. Without ORDER BY, rows are processed in an unspecified order. The frame_clause specifies - the set of rows constituting the window frame, which is a + the set of rows constituting the window frame, which is a subset of the current partition, for those window functions that act on the frame instead of the whole partition. The frame can be specified in - either RANGE or ROWS mode; in either case, it - runs from the frame_start to the - frame_end. If frame_end is omitted, - it defaults to CURRENT ROW. + either RANGE or ROWS mode; in either case, it + runs from the frame_start to the + frame_end. If frame_end is omitted, + it defaults to CURRENT ROW. - A frame_start of UNBOUNDED PRECEDING means + A frame_start of UNBOUNDED PRECEDING means that the frame starts with the first row of the partition, and similarly - a frame_end of UNBOUNDED FOLLOWING means + a frame_end of UNBOUNDED FOLLOWING means that the frame ends with the last row of the partition. - In RANGE mode, a frame_start of - CURRENT ROW means the frame starts with the current row's - first peer row (a row that ORDER BY considers - equivalent to the current row), while a frame_end of - CURRENT ROW means the frame ends with the last equivalent - ORDER BY peer. In ROWS mode, CURRENT ROW simply means + In RANGE mode, a frame_start of + CURRENT ROW means the frame starts with the current row's + first peer row (a row that ORDER BY considers + equivalent to the current row), while a frame_end of + CURRENT ROW means the frame ends with the last equivalent + ORDER BY peer. In ROWS mode, CURRENT ROW simply means the current row. - The value PRECEDING and - value FOLLOWING cases are currently only - allowed in ROWS mode. They indicate that the frame starts + The value PRECEDING and + value FOLLOWING cases are currently only + allowed in ROWS mode. They indicate that the frame starts or ends the specified number of rows before or after the current row. value must be an integer expression not containing any variables, aggregate functions, or window functions. 
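As a concrete illustration of the frame options just described, here is a sketch of a three-row moving average in ROWS mode (the table measurements and its columns ts and val are assumptions for the example):

SELECT ts, val,
       avg(val) OVER (ORDER BY ts ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
FROM measurements;

For each row, the frame consists of the current row and up to two rows preceding it in ts order.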
@@ -1892,22 +1892,22 @@ UNBOUNDED FOLLOWING - The default framing option is RANGE UNBOUNDED PRECEDING, + The default framing option is RANGE UNBOUNDED PRECEDING, which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND - CURRENT ROW. With ORDER BY, this sets the frame to be + CURRENT ROW. With ORDER BY, this sets the frame to be all rows from the partition start up through the current row's last - ORDER BY peer. Without ORDER BY, all rows of the partition are + ORDER BY peer. Without ORDER BY, all rows of the partition are included in the window frame, since all rows become peers of the current row. Restrictions are that - frame_start cannot be UNBOUNDED FOLLOWING, - frame_end cannot be UNBOUNDED PRECEDING, - and the frame_end choice cannot appear earlier in the - above list than the frame_start choice — for example - RANGE BETWEEN CURRENT ROW AND value + frame_start cannot be UNBOUNDED FOLLOWING, + frame_end cannot be UNBOUNDED PRECEDING, + and the frame_end choice cannot appear earlier in the + above list than the frame_start choice — for example + RANGE BETWEEN CURRENT ROW AND value PRECEDING is not allowed. @@ -1928,18 +1928,18 @@ UNBOUNDED FOLLOWING - The syntaxes using * are used for calling parameter-less + The syntaxes using * are used for calling parameter-less aggregate functions as window functions, for example - count(*) OVER (PARTITION BY x ORDER BY y). - The asterisk (*) is customarily not used for + count(*) OVER (PARTITION BY x ORDER BY y). + The asterisk (*) is customarily not used for window-specific functions. Window-specific functions do not - allow DISTINCT or ORDER BY to be used within the + allow DISTINCT or ORDER BY to be used within the function argument list. Window function calls are permitted only in the SELECT - list and the ORDER BY clause of the query. + list and the ORDER BY clause of the query. @@ -1974,7 +1974,7 @@ UNBOUNDED FOLLOWING CAST ( expression AS type ) expression::type - The CAST syntax conforms to SQL; the syntax with + The CAST syntax conforms to SQL; the syntax with :: is historical PostgreSQL usage. @@ -1996,7 +1996,7 @@ CAST ( expression AS type to the type that a value expression must produce (for example, when it is assigned to a table column); the system will automatically apply a type cast in such cases. However, automatic casting is only done for - casts that are marked OK to apply implicitly + casts that are marked OK to apply implicitly in the system catalogs. Other casts must be invoked with explicit casting syntax. This restriction is intended to prevent surprising conversions from being applied silently. @@ -2011,8 +2011,8 @@ CAST ( expression AS type However, this only works for types whose names are also valid as function names. For example, double precision cannot be used this way, but the equivalent float8 - can. Also, the names interval, time, and - timestamp can only be used in this fashion if they are + can. Also, the names interval, time, and + timestamp can only be used in this fashion if they are double-quoted, because of syntactic conflicts. Therefore, the use of the function-like cast syntax leads to inconsistencies and should probably be avoided. @@ -2025,7 +2025,7 @@ CAST ( expression AS type conversion, it will internally invoke a registered function to perform the conversion. 
By convention, these conversion functions have the same name as their output type, and thus the function-like - syntax is nothing more than a direct invocation of the underlying + syntax is nothing more than a direct invocation of the underlying conversion function. Obviously, this is not something that a portable application should rely on. For further details see . @@ -2061,7 +2061,7 @@ CAST ( expression AS type The two common uses of the COLLATE clause are - overriding the sort order in an ORDER BY clause, for + overriding the sort order in an ORDER BY clause, for example: SELECT a, b, c FROM tbl WHERE ... ORDER BY a COLLATE "C"; @@ -2071,14 +2071,14 @@ SELECT a, b, c FROM tbl WHERE ... ORDER BY a COLLATE "C"; SELECT * FROM tbl WHERE a > 'foo' COLLATE "C"; - Note that in the latter case the COLLATE clause is + Note that in the latter case the COLLATE clause is attached to an input argument of the operator we wish to affect. It doesn't matter which argument of the operator or function call the - COLLATE clause is attached to, because the collation that is + COLLATE clause is attached to, because the collation that is applied by the operator or function is derived by considering all - arguments, and an explicit COLLATE clause will override the + arguments, and an explicit COLLATE clause will override the collations of all other arguments. (Attaching non-matching - COLLATE clauses to more than one argument, however, is an + COLLATE clauses to more than one argument, however, is an error. For more details see .) Thus, this gives the same result as the previous example: @@ -2089,8 +2089,8 @@ SELECT * FROM tbl WHERE a COLLATE "C" > 'foo'; SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C"; because it attempts to apply a collation to the result of the - > operator, which is of the non-collatable data type - boolean. + > operator, which is of the non-collatable data type + boolean. @@ -2143,8 +2143,8 @@ SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) array value using values for its member elements. A simple array constructor consists of the key word ARRAY, a left square bracket - [, a list of expressions (separated by commas) for the - array element values, and finally a right square bracket ]. + [, a list of expressions (separated by commas) for the + array element values, and finally a right square bracket ]. For example: SELECT ARRAY[1,2,3+4]; @@ -2155,8 +2155,8 @@ SELECT ARRAY[1,2,3+4]; By default, the array element type is the common type of the member expressions, - determined using the same rules as for UNION or - CASE constructs (see ). + determined using the same rules as for UNION or + CASE constructs (see ). You can override this by explicitly casting the array constructor to the desired type, for example: @@ -2193,13 +2193,13 @@ SELECT ARRAY[[1,2],[3,4]]; Since multidimensional arrays must be rectangular, inner constructors at the same level must produce sub-arrays of identical dimensions. - Any cast applied to the outer ARRAY constructor propagates + Any cast applied to the outer ARRAY constructor propagates automatically to all the inner constructors. Multidimensional array constructor elements can be anything yielding - an array of the proper kind, not only a sub-ARRAY construct. + an array of the proper kind, not only a sub-ARRAY construct. 
For example: CREATE TABLE arr(f1 int[], f2 int[]); @@ -2291,7 +2291,7 @@ SELECT ARRAY(SELECT ARRAY[i, i*2] FROM generate_series(1,5) AS a(i)); SELECT ROW(1,2.5,'this is a test'); - The key word ROW is optional when there is more than one + The key word ROW is optional when there is more than one expression in the list. @@ -2299,10 +2299,10 @@ SELECT ROW(1,2.5,'this is a test'); A row constructor can include the syntax rowvalue.*, which will be expanded to a list of the elements of the row value, - just as occurs when the .* syntax is used at the top level - of a SELECT list (see ). - For example, if table t has - columns f1 and f2, these are the same: + just as occurs when the .* syntax is used at the top level + of a SELECT list (see ). + For example, if table t has + columns f1 and f2, these are the same: SELECT ROW(t.*, 42) FROM t; SELECT ROW(t.f1, t.f2, 42) FROM t; @@ -2313,19 +2313,19 @@ SELECT ROW(t.f1, t.f2, 42) FROM t; Before PostgreSQL 8.2, the .* syntax was not expanded in row constructors, so - that writing ROW(t.*, 42) created a two-field row whose first + that writing ROW(t.*, 42) created a two-field row whose first field was another row value. The new behavior is usually more useful. If you need the old behavior of nested row values, write the inner row value without .*, for instance - ROW(t, 42). + ROW(t, 42). - By default, the value created by a ROW expression is of + By default, the value created by a ROW expression is of an anonymous record type. If necessary, it can be cast to a named composite type — either the row type of a table, or a composite type - created with CREATE TYPE AS. An explicit cast might be needed + created with CREATE TYPE AS. An explicit cast might be needed to avoid ambiguity. For example: CREATE TABLE mytable(f1 int, f2 float, f3 text); @@ -2366,7 +2366,7 @@ SELECT getf1(CAST(ROW(11,'this is a test',2.5) AS myrowtype)); in a composite-type table column, or to be passed to a function that accepts a composite parameter. Also, it is possible to compare two row values or test a row with - IS NULL or IS NOT NULL, for example: + IS NULL or IS NOT NULL, for example: SELECT ROW(1,2.5,'this is a test') = ROW(1, 3, 'not the same'); @@ -2413,18 +2413,18 @@ SELECT somefunc() OR true; As a consequence, it is unwise to use functions with side effects as part of complex expressions. It is particularly dangerous to - rely on side effects or evaluation order in WHERE and HAVING clauses, + rely on side effects or evaluation order in WHERE and HAVING clauses, since those clauses are extensively reprocessed as part of developing an execution plan. Boolean - expressions (AND/OR/NOT combinations) in those clauses can be reorganized + expressions (AND/OR/NOT combinations) in those clauses can be reorganized in any manner allowed by the laws of Boolean algebra. - When it is essential to force evaluation order, a CASE + When it is essential to force evaluation order, a CASE construct (see ) can be used. For example, this is an untrustworthy way of trying to - avoid division by zero in a WHERE clause: + avoid division by zero in a WHERE clause: SELECT ... WHERE x > 0 AND y/x > 1.5; @@ -2432,14 +2432,14 @@ SELECT ... WHERE x > 0 AND y/x > 1.5; SELECT ... WHERE CASE WHEN x > 0 THEN y/x > 1.5 ELSE false END; - A CASE construct used in this fashion will defeat optimization + A CASE construct used in this fashion will defeat optimization attempts, so it should only be done when necessary. 
(In this particular example, it would be better to sidestep the problem by writing - y > 1.5*x instead.) + y > 1.5*x instead.) - CASE is not a cure-all for such issues, however. + CASE is not a cure-all for such issues, however. One limitation of the technique illustrated above is that it does not prevent early evaluation of constant subexpressions. As described in , functions and @@ -2450,8 +2450,8 @@ SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END FROM tab; is likely to result in a division-by-zero failure due to the planner trying to simplify the constant subexpression, - even if every row in the table has x > 0 so that the - ELSE arm would never be entered at run time. + even if every row in the table has x > 0 so that the + ELSE arm would never be entered at run time. @@ -2459,17 +2459,17 @@ SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END FROM tab; obviously involve constants can occur in queries executed within functions, since the values of function arguments and local variables can be inserted into queries as constants for planning purposes. - Within PL/pgSQL functions, for example, using an - IF-THEN-ELSE statement to protect + Within PL/pgSQL functions, for example, using an + IF-THEN-ELSE statement to protect a risky computation is much safer than just nesting it in a - CASE expression. + CASE expression. - Another limitation of the same kind is that a CASE cannot + Another limitation of the same kind is that a CASE cannot prevent evaluation of an aggregate expression contained within it, because aggregate expressions are computed before other - expressions in a SELECT list or HAVING clause + expressions in a SELECT list or HAVING clause are considered. For example, the following query can cause a division-by-zero error despite seemingly having protected against it: @@ -2478,12 +2478,12 @@ SELECT CASE WHEN min(employees) > 0 END FROM departments; - The min() and avg() aggregates are computed + The min() and avg() aggregates are computed concurrently over all the input rows, so if any row - has employees equal to zero, the division-by-zero error + has employees equal to zero, the division-by-zero error will occur before there is any opportunity to test the result of - min(). Instead, use a WHERE - or FILTER clause to prevent problematic input rows from + min(). Instead, use a WHERE + or FILTER clause to prevent problematic input rows from reaching an aggregate function in the first place. @@ -2657,7 +2657,7 @@ SELECT concat_lower_or_upper('Hello', 'World', uppercase => true); In the above query, the arguments a and b are specified positionally, while - uppercase is specified by name. In this example, + uppercase is specified by name. In this example, that adds little except documentation. With a more complex function having numerous parameters that have default values, named or mixed notation can save a great deal of writing and reduce chances for error. diff --git a/doc/src/sgml/tablefunc.sgml b/doc/src/sgml/tablefunc.sgml index 90f6df9545..7cfae4d316 100644 --- a/doc/src/sgml/tablefunc.sgml +++ b/doc/src/sgml/tablefunc.sgml @@ -8,7 +8,7 @@ - The tablefunc module includes various functions that return + The tablefunc module includes various functions that return tables (that is, multiple rows). These functions are useful both in their own right and as examples of how to write C functions that return multiple rows. @@ -23,7 +23,7 @@
- <filename>tablefunc</> Functions + <filename>tablefunc</filename> Functions @@ -35,46 +35,46 @@ normal_rand(int numvals, float8 mean, float8 stddev) - setof float8 + setof float8 Produces a set of normally distributed random values crosstab(text sql) - setof record + setof record - Produces a pivot table containing - row names plus N value columns, where - N is determined by the row type specified in the calling + Produces a pivot table containing + row names plus N value columns, where + N is determined by the row type specified in the calling query - crosstabN(text sql) - setof table_crosstab_N + crosstabN(text sql) + setof table_crosstab_N - Produces a pivot table containing - row names plus N value columns. - crosstab2, crosstab3, and - crosstab4 are predefined, but you can create additional - crosstabN functions as described below + Produces a pivot table containing + row names plus N value columns. + crosstab2, crosstab3, and + crosstab4 are predefined, but you can create additional + crosstabN functions as described below crosstab(text source_sql, text category_sql) - setof record + setof record - Produces a pivot table + Produces a pivot table with the value columns specified by a second query crosstab(text sql, int N) - setof record + setof record - Obsolete version of crosstab(text). - The parameter N is now ignored, since the number of + Obsolete version of crosstab(text). + The parameter N is now ignored, since the number of value columns is always determined by the calling query @@ -88,7 +88,7 @@ connectby - setof record + setof record Produces a representation of a hierarchical tree structure @@ -109,7 +109,7 @@ normal_rand(int numvals, float8 mean, float8 stddev) returns setof float8 - normal_rand produces a set of normally distributed random + normal_rand produces a set of normally distributed random values (Gaussian distribution). @@ -157,7 +157,7 @@ crosstab(text sql, int N) - The crosstab function is used to produce pivot + The crosstab function is used to produce pivot displays, wherein data is listed across the page rather than down. For example, we might have data like @@ -176,7 +176,7 @@ row1 val11 val12 val13 ... row2 val21 val22 val23 ... ... - The crosstab function takes a text parameter that is a SQL + The crosstab function takes a text parameter that is a SQL query producing raw data formatted in the first way, and produces a table formatted in the second way. @@ -209,9 +209,9 @@ row2 val21 val22 val23 ... - The crosstab function is declared to return setof + The crosstab function is declared to return setof record, so the actual names and types of the output columns must be - defined in the FROM clause of the calling SELECT + defined in the FROM clause of the calling SELECT statement, for example: SELECT * FROM crosstab('...') AS ct(row_name text, category_1 text, category_2 text); @@ -227,30 +227,30 @@ SELECT * FROM crosstab('...') AS ct(row_name text, category_1 text, category_2 t - The FROM clause must define the output as one - row_name column (of the same data type as the first result - column of the SQL query) followed by N value columns + The FROM clause must define the output as one + row_name column (of the same data type as the first result + column of the SQL query) followed by N value columns (all of the same data type as the third result column of the SQL query). You can set up as many output value columns as you wish. The names of the output columns are up to you. 
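Putting these rules together, a sketch of a complete call might look like this (the table raw_data and its column names are assumptions for the example):

SELECT *
FROM crosstab('SELECT rowid, attribute, value FROM raw_data ORDER BY 1,2')
AS ct(row_name text, category_1 text, category_2 text, category_3 text);

Here the column alias list supplies one row_name column matching the first column of the embedded query, plus three value columns matching the type of its third column.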
- The crosstab function produces one output row for each + The crosstab function produces one output row for each consecutive group of input rows with the same row_name value. It fills the output - value columns, left to right, with the + value columns, left to right, with the value fields from these rows. If there - are fewer rows in a group than there are output value + are fewer rows in a group than there are output value columns, the extra output columns are filled with nulls; if there are more rows, the extra input rows are skipped. - In practice the SQL query should always specify ORDER BY 1,2 + In practice the SQL query should always specify ORDER BY 1,2 to ensure that the input rows are properly ordered, that is, values with the same row_name are brought together and - correctly ordered within the row. Notice that crosstab + correctly ordered within the row. Notice that crosstab itself does not pay any attention to the second column of the query result; it's just there to be ordered by, to control the order in which the third-column values appear across the page. @@ -286,41 +286,41 @@ AS ct(row_name text, category_1 text, category_2 text, category_3 text); - You can avoid always having to write out a FROM clause to + You can avoid always having to write out a FROM clause to define the output columns, by setting up a custom crosstab function that has the desired output row type wired into its definition. This is described in the next section. Another possibility is to embed the - required FROM clause in a view definition. + required FROM clause in a view definition. See also the \crosstabview - command in psql, which provides functionality similar - to crosstab(). + command in psql, which provides functionality similar + to crosstab(). - <function>crosstab<replaceable>N</>(text)</function> + <function>crosstab<replaceable>N</replaceable>(text)</function> crosstab -crosstabN(text sql) +crosstabN(text sql) - The crosstabN functions are examples of how - to set up custom wrappers for the general crosstab function, + The crosstabN functions are examples of how + to set up custom wrappers for the general crosstab function, so that you need not write out column names and types in the calling - SELECT query. The tablefunc module includes - crosstab2, crosstab3, and - crosstab4, whose output row types are defined as + SELECT query. The tablefunc module includes + crosstab2, crosstab3, and + crosstab4, whose output row types are defined as @@ -337,10 +337,10 @@ CREATE TYPE tablefunc_crosstab_N AS ( Thus, these functions can be used directly when the input query produces - row_name and value columns of type - text, and you want 2, 3, or 4 output values columns. + row_name and value columns of type + text, and you want 2, 3, or 4 output values columns. In all other ways they behave exactly as described above for the - general crosstab function. + general crosstab function. @@ -359,7 +359,7 @@ FROM crosstab3( These functions are provided mostly for illustration purposes. You can create your own return types and functions based on the - underlying crosstab() function. There are two ways + underlying crosstab() function. There are two ways to do it: @@ -367,13 +367,13 @@ FROM crosstab3( Create a composite type describing the desired output columns, similar to the examples in - contrib/tablefunc/tablefunc--1.0.sql. + contrib/tablefunc/tablefunc--1.0.sql. 
Then define a - unique function name accepting one text parameter and returning - setof your_type_name, but linking to the same underlying - crosstab C function. For example, if your source data - produces row names that are text, and values that are - float8, and you want 5 value columns: + unique function name accepting one text parameter and returning + setof your_type_name, but linking to the same underlying + crosstab C function. For example, if your source data + produces row names that are text, and values that are + float8, and you want 5 value columns: CREATE TYPE my_crosstab_float8_5_cols AS ( my_row_name text, @@ -393,7 +393,7 @@ CREATE OR REPLACE FUNCTION crosstab_float8_5_cols(text) - Use OUT parameters to define the return type implicitly. + Use OUT parameters to define the return type implicitly. The same example could also be done this way: CREATE OR REPLACE FUNCTION crosstab_float8_5_cols( @@ -426,12 +426,12 @@ crosstab(text source_sql, text category_sql) - The main limitation of the single-parameter form of crosstab + The main limitation of the single-parameter form of crosstab is that it treats all values in a group alike, inserting each value into the first available column. If you want the value columns to correspond to specific categories of data, and some groups might not have data for some of the categories, that doesn't work well. - The two-parameter form of crosstab handles this case by + The two-parameter form of crosstab handles this case by providing an explicit list of the categories corresponding to the output columns. @@ -447,7 +447,7 @@ crosstab(text source_sql, text category_sql) category and value columns must be the last two columns, in that order. Any columns between row_name and - category are treated as extra. + category are treated as extra. The extra columns are expected to be the same for all rows with the same row_name value. @@ -489,9 +489,9 @@ SELECT DISTINCT cat FROM foo ORDER BY 1; - The crosstab function is declared to return setof + The crosstab function is declared to return setof record, so the actual names and types of the output columns must be - defined in the FROM clause of the calling SELECT + defined in the FROM clause of the calling SELECT statement, for example: @@ -512,25 +512,25 @@ row_name extra cat1 cat2 cat3 cat4 - The FROM clause must define the proper number of output - columns of the proper data types. If there are N - columns in the source_sql query's result, the first - N-2 of them must match up with the first - N-2 output columns. The remaining output columns - must have the type of the last column of the source_sql + The FROM clause must define the proper number of output + columns of the proper data types. If there are N + columns in the source_sql query's result, the first + N-2 of them must match up with the first + N-2 output columns. The remaining output columns + must have the type of the last column of the source_sql query's result, and there must be exactly as many of them as there are rows in the category_sql query's result. - The crosstab function produces one output row for each + The crosstab function produces one output row for each consecutive group of input rows with the same row_name value. The output - row_name column, plus any extra + row_name column, plus any extra columns, are copied from the first row of the group. The output - value columns are filled with the + value columns are filled with the value fields from rows having matching - category values. If a row's category + category values. 
If a row's category does not match any output of the category_sql query, its value is ignored. Output columns whose matching category is not present in any input row @@ -539,7 +539,7 @@ row_name extra cat1 cat2 cat3 cat4 In practice the source_sql query should always - specify ORDER BY 1 to ensure that values with the same + specify ORDER BY 1 to ensure that values with the same row_name are brought together. However, ordering of the categories within a group is not important. Also, it is essential to be sure that the order of the @@ -619,7 +619,7 @@ AS You can create predefined functions to avoid having to write out the result column names and types in each query. See the examples in the previous section. The underlying C function for this form - of crosstab is named crosstab_hash. + of crosstab is named crosstab_hash. @@ -638,10 +638,10 @@ connectby(text relname, text keyid_fld, text parent_keyid_fld - The connectby function produces a display of hierarchical + The connectby function produces a display of hierarchical data that is stored in a table. The table must have a key field that uniquely identifies rows, and a parent-key field that references the - parent (if any) of each row. connectby can display the + parent (if any) of each row. connectby can display the sub-tree descending from any row. @@ -694,14 +694,14 @@ connectby(text relname, text keyid_fld, text parent_keyid_fld The key and parent-key fields can be any data type, but they must be - the same type. Note that the start_with value must be + the same type. Note that the start_with value must be entered as a text string, regardless of the type of the key field. - The connectby function is declared to return setof + The connectby function is declared to return setof record, so the actual names and types of the output columns must be - defined in the FROM clause of the calling SELECT + defined in the FROM clause of the calling SELECT statement, for example: @@ -714,15 +714,15 @@ SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2' The first two output columns are used for the current row's key and its parent row's key; they must match the type of the table's key field. The third output column is the depth in the tree and must be of type - integer. If a branch_delim parameter was + integer. If a branch_delim parameter was given, the next output column is the branch display and must be of type - text. Finally, if an orderby_fld + text. Finally, if an orderby_fld parameter was given, the last output column is a serial number, and must - be of type integer. + be of type integer. - The branch output column shows the path of keys taken to + The branch output column shows the path of keys taken to reach the current row. The keys are separated by the specified branch_delim string. If no branch display is wanted, omit both the branch_delim parameter @@ -740,7 +740,7 @@ SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2' The parameters representing table and field names are copied as-is - into the SQL queries that connectby generates internally. + into the SQL queries that connectby generates internally. Therefore, include double quotes if the names are mixed-case or contain special characters. You may also need to schema-qualify the table name. 
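Tying the output-column rules above together, a sketch of a complete call (following the connectby_tree table and column names used in the calls shown above; the data itself is assumed):

SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2', 0, '~')
 AS t(keyid text, parent_keyid text, level int, branch text, pos int);

Since both branch_delim and orderby_fld are supplied, the alias must list all five output columns in the order described above.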
@@ -752,10 +752,10 @@ SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2' It is important that the branch_delim string - not appear in any key values, else connectby may incorrectly + not appear in any key values, else connectby may incorrectly report an infinite-recursion error. Note that if branch_delim is not provided, a default value - of ~ is used for recursion detection purposes. + of ~ is used for recursion detection purposes. diff --git a/doc/src/sgml/tablesample-method.sgml b/doc/src/sgml/tablesample-method.sgml index 22f8bbe19a..9ac28ceb4c 100644 --- a/doc/src/sgml/tablesample-method.sgml +++ b/doc/src/sgml/tablesample-method.sgml @@ -12,11 +12,11 @@ - PostgreSQL's implementation of the TABLESAMPLE + PostgreSQL's implementation of the TABLESAMPLE clause supports custom table sampling methods, in addition to - the BERNOULLI and SYSTEM methods that are required + the BERNOULLI and SYSTEM methods that are required by the SQL standard. The sampling method determines which rows of the - table will be selected when the TABLESAMPLE clause is used. + table will be selected when the TABLESAMPLE clause is used. @@ -26,18 +26,18 @@ method_name(internal) RETURNS tsm_handler The name of the function is the same method name appearing in the - TABLESAMPLE clause. The internal argument is a dummy + TABLESAMPLE clause. The internal argument is a dummy (always having value zero) that simply serves to prevent this function from being called directly from a SQL command. The result of the function must be a palloc'd struct of - type TsmRoutine, which contains pointers to support functions for + type TsmRoutine, which contains pointers to support functions for the sampling method. These support functions are plain C functions and are not visible or callable at the SQL level. The support functions are described in . - In addition to function pointers, the TsmRoutine struct must + In addition to function pointers, the TsmRoutine struct must provide these additional fields: @@ -47,9 +47,9 @@ method_name(internal) RETURNS tsm_handler This is an OID list containing the data type OIDs of the parameter(s) - that will be accepted by the TABLESAMPLE clause when this + that will be accepted by the TABLESAMPLE clause when this sampling method is used. For example, for the built-in methods, this - list contains a single item with value FLOAT4OID, which + list contains a single item with value FLOAT4OID, which represents the sampling percentage. Custom sampling methods can have more or different parameters. @@ -60,11 +60,11 @@ method_name(internal) RETURNS tsm_handler bool repeatable_across_queries - If true, the sampling method can deliver identical samples + If true, the sampling method can deliver identical samples across successive queries, if the same parameters - and REPEATABLE seed value are supplied each time and the - table contents have not changed. When this is false, - the REPEATABLE clause is not accepted for use with the + and REPEATABLE seed value are supplied each time and the + table contents have not changed. When this is false, + the REPEATABLE clause is not accepted for use with the sampling method. @@ -74,10 +74,10 @@ method_name(internal) RETURNS tsm_handler bool repeatable_across_scans - If true, the sampling method can deliver identical samples + If true, the sampling method can deliver identical samples across successive scans in the same query (assuming unchanging parameters, seed value, and snapshot). 
- When this is false, the planner will not select plans that + When this is false, the planner will not select plans that would require scanning the sampled table more than once, since that might result in inconsistent query output. @@ -86,16 +86,16 @@ method_name(internal) RETURNS tsm_handler - The TsmRoutine struct type is declared - in src/include/access/tsmapi.h, which see for additional + The TsmRoutine struct type is declared + in src/include/access/tsmapi.h, which see for additional details. The table sampling methods included in the standard distribution are good references when trying to write your own. Look into - the src/backend/access/tablesample subdirectory of the source - tree for the built-in sampling methods, and into the contrib + the src/backend/access/tablesample subdirectory of the source + tree for the built-in sampling methods, and into the contrib subdirectory for add-on methods. @@ -103,7 +103,7 @@ method_name(internal) RETURNS tsm_handler Sampling Method Support Functions - The TSM handler function returns a palloc'd TsmRoutine struct + The TSM handler function returns a palloc'd TsmRoutine struct containing pointers to the support functions described below. Most of the functions are required, but some are optional, and those pointers can be NULL. @@ -123,16 +123,16 @@ SampleScanGetSampleSize (PlannerInfo *root, relation pages that will be read during a sample scan, and the number of tuples that will be selected by the scan. (For example, these might be determined by estimating the sampling fraction, and then multiplying - the baserel->pages and baserel->tuples + the baserel->pages and baserel->tuples numbers by that, being sure to round the results to integral values.) - The paramexprs list holds the expression(s) that are - parameters to the TABLESAMPLE clause. It is recommended to - use estimate_expression_value() to try to reduce these + The paramexprs list holds the expression(s) that are + parameters to the TABLESAMPLE clause. It is recommended to + use estimate_expression_value() to try to reduce these expressions to constants, if their values are needed for estimation purposes; but the function must provide size estimates even if they cannot be reduced, and it should not fail even if the values appear invalid (remember that they're only estimates of what the run-time values will be). - The pages and tuples parameters are outputs. + The pages and tuples parameters are outputs. @@ -145,29 +145,29 @@ InitSampleScan (SampleScanState *node, Initialize for execution of a SampleScan plan node. This is called during executor startup. It should perform any initialization needed before processing can start. - The SampleScanState node has already been created, but - its tsm_state field is NULL. - The InitSampleScan function can palloc whatever internal + The SampleScanState node has already been created, but + its tsm_state field is NULL. + The InitSampleScan function can palloc whatever internal state data is needed by the sampling method, and store a pointer to - it in node->tsm_state. + it in node->tsm_state. Information about the table to scan is accessible through other fields - of the SampleScanState node (but note that the - node->ss.ss_currentScanDesc scan descriptor is not set + of the SampleScanState node (but note that the + node->ss.ss_currentScanDesc scan descriptor is not set up yet). - eflags contains flag bits describing the executor's + eflags contains flag bits describing the executor's operating mode for this plan node. 
- When (eflags & EXEC_FLAG_EXPLAIN_ONLY) is true, + When (eflags & EXEC_FLAG_EXPLAIN_ONLY) is true, the scan will not actually be performed, so this function should only do - the minimum required to make the node state valid for EXPLAIN - and EndSampleScan. + the minimum required to make the node state valid for EXPLAIN + and EndSampleScan. This function can be omitted (set the pointer to NULL), in which case - BeginSampleScan must perform all initialization needed + BeginSampleScan must perform all initialization needed by the sampling method. @@ -184,32 +184,32 @@ BeginSampleScan (SampleScanState *node, This is called just before the first attempt to fetch a tuple, and may be called again if the scan needs to be restarted. Information about the table to scan is accessible through fields - of the SampleScanState node (but note that the - node->ss.ss_currentScanDesc scan descriptor is not set + of the SampleScanState node (but note that the + node->ss.ss_currentScanDesc scan descriptor is not set up yet). - The params array, of length nparams, contains the - values of the parameters supplied in the TABLESAMPLE clause. + The params array, of length nparams, contains the + values of the parameters supplied in the TABLESAMPLE clause. These will have the number and types specified in the sampling method's parameterTypes list, and have been checked to not be null. - seed contains a seed to use for any random numbers generated + seed contains a seed to use for any random numbers generated within the sampling method; it is either a hash derived from the - REPEATABLE value if one was given, or the result - of random() if not. + REPEATABLE value if one was given, or the result + of random() if not. - This function may adjust the fields node->use_bulkread - and node->use_pagemode. - If node->use_bulkread is true, which it is by + This function may adjust the fields node->use_bulkread + and node->use_pagemode. + If node->use_bulkread is true, which it is by default, the scan will use a buffer access strategy that encourages recycling buffers after use. It might be reasonable to set this - to false if the scan will visit only a small fraction of the + to false if the scan will visit only a small fraction of the table's pages. - If node->use_pagemode is true, which it is by + If node->use_pagemode is true, which it is by default, the scan will perform visibility checking in a single pass for all tuples on each visited page. It might be reasonable to set this - to false if the scan will select only a small fraction of the + to false if the scan will select only a small fraction of the tuples on each visited page. That will result in fewer tuple visibility checks being performed, though each one will be more expensive because it will require more locking. @@ -219,8 +219,8 @@ BeginSampleScan (SampleScanState *node, If the sampling method is marked repeatable_across_scans, it must be able to select the same set of tuples during a rescan as it did originally, that is - a fresh call of BeginSampleScan must lead to selecting the - same tuples as before (if the TABLESAMPLE parameters + a fresh call of BeginSampleScan must lead to selecting the + same tuples as before (if the TABLESAMPLE parameters and seed don't change). @@ -231,7 +231,7 @@ NextSampleBlock (SampleScanState *node); Returns the block number of the next page to be scanned, or - InvalidBlockNumber if no pages remain to be scanned. + InvalidBlockNumber if no pages remain to be scanned. 
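
     Stepping back from the individual support functions for a moment: at the
     SQL level, a sampling method is exposed simply by creating its handler
     function, and the TABLESAMPLE clause then finds the method by that
     function's name. A minimal sketch, with a hypothetical method name
     my_sample (the handler itself must be written in C as described above,
     and the argument list must match the method's parameterTypes; the
     MODULE_PATHNAME form is how this would appear in an extension script):

CREATE FUNCTION my_sample(internal) RETURNS tsm_handler
    AS 'MODULE_PATHNAME', 'my_sample_handler'
    LANGUAGE C STRICT;

-- hypothetical table and parameter value:
SELECT * FROM my_table TABLESAMPLE my_sample(100);
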
@@ -251,34 +251,34 @@ NextSampleTuple (SampleScanState *node, Returns the offset number of the next tuple to be sampled on the - specified page, or InvalidOffsetNumber if no tuples remain to - be sampled. maxoffset is the largest offset number in use + specified page, or InvalidOffsetNumber if no tuples remain to + be sampled. maxoffset is the largest offset number in use on the page. - NextSampleTuple is not explicitly told which of the offset - numbers in the range 1 .. maxoffset actually contain valid + NextSampleTuple is not explicitly told which of the offset + numbers in the range 1 .. maxoffset actually contain valid tuples. This is not normally a problem since the core code ignores requests to sample missing or invisible tuples; that should not result in any bias in the sample. However, if necessary, the function can - examine node->ss.ss_currentScanDesc->rs_vistuples[] + examine node->ss.ss_currentScanDesc->rs_vistuples[] to identify which tuples are valid and visible. (This - requires node->use_pagemode to be true.) + requires node->use_pagemode to be true.) - NextSampleTuple must not assume - that blockno is the same page number returned by the most - recent NextSampleBlock call. It was returned by some - previous NextSampleBlock call, but the core code is allowed - to call NextSampleBlock in advance of actually scanning + NextSampleTuple must not assume + that blockno is the same page number returned by the most + recent NextSampleBlock call. It was returned by some + previous NextSampleBlock call, but the core code is allowed + to call NextSampleBlock in advance of actually scanning pages, so as to support prefetching. It is OK to assume that once - sampling of a given page begins, successive NextSampleTuple - calls all refer to the same page until InvalidOffsetNumber is + sampling of a given page begins, successive NextSampleTuple + calls all refer to the same page until InvalidOffsetNumber is returned. diff --git a/doc/src/sgml/tcn.sgml b/doc/src/sgml/tcn.sgml index 623094183d..8cc55efd29 100644 --- a/doc/src/sgml/tcn.sgml +++ b/doc/src/sgml/tcn.sgml @@ -12,16 +12,16 @@ - The tcn module provides a trigger function that notifies + The tcn module provides a trigger function that notifies listeners of changes to any table on which it is attached. It must be - used as an AFTER trigger FOR EACH ROW. + used as an AFTER trigger FOR EACH ROW. Only one parameter may be supplied to the function in a - CREATE TRIGGER statement, and that is optional. If supplied + CREATE TRIGGER statement, and that is optional. If supplied it will be used for the channel name for the notifications. If omitted - tcn will be used for the channel name. + tcn will be used for the channel name. diff --git a/doc/src/sgml/test-decoding.sgml b/doc/src/sgml/test-decoding.sgml index 4f4fd41e32..310a2d6974 100644 --- a/doc/src/sgml/test-decoding.sgml +++ b/doc/src/sgml/test-decoding.sgml @@ -8,13 +8,13 @@ - test_decoding is an example of a logical decoding + test_decoding is an example of a logical decoding output plugin. It doesn't do anything especially useful, but can serve as a starting point for developing your own decoder. - test_decoding receives WAL through the logical decoding + test_decoding receives WAL through the logical decoding mechanism and decodes it into text representations of the operations performed. 
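
 A quick way to see it in action (a minimal sketch using the standard
 replication-slot functions; it assumes wal_level = logical and a
 hypothetical table t):

SELECT pg_create_logical_replication_slot('regression_slot', 'test_decoding');
INSERT INTO t VALUES (1);    -- any DML on any table
SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
SELECT pg_drop_replication_slot('regression_slot');
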
diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml index d5bde5c6c0..7b4912dd5e 100644 --- a/doc/src/sgml/textsearch.sgml +++ b/doc/src/sgml/textsearch.sgml @@ -16,7 +16,7 @@ Full Text Searching (or just text search) provides - the capability to identify natural-language documents that + the capability to identify natural-language documents that satisfy a query, and optionally to sort them by relevance to the query. The most common type of search is to find all documents containing given query terms @@ -73,13 +73,13 @@ - Parsing documents into tokens. It is + Parsing documents into tokens. It is useful to identify various classes of tokens, e.g., numbers, words, complex words, email addresses, so that they can be processed differently. In principle token classes depend on the specific application, but for most purposes it is adequate to use a predefined set of classes. - PostgreSQL uses a parser to + PostgreSQL uses a parser to perform this step. A standard parser is provided, and custom parsers can be created for specific needs. @@ -87,19 +87,19 @@ - Converting tokens into lexemes. + Converting tokens into lexemes. A lexeme is a string, just like a token, but it has been - normalized so that different forms of the same word + normalized so that different forms of the same word are made alike. For example, normalization almost always includes folding upper-case letters to lower-case, and often involves removal - of suffixes (such as s or es in English). + of suffixes (such as s or es in English). This allows searches to find variant forms of the same word, without tediously entering all the possible variants. - Also, this step typically eliminates stop words, which + Also, this step typically eliminates stop words, which are words that are so common that they are useless for searching. (In short, then, tokens are raw fragments of the document text, while lexemes are words that are believed useful for indexing and searching.) - PostgreSQL uses dictionaries to + PostgreSQL uses dictionaries to perform this step. Various standard dictionaries are provided, and custom ones can be created for specific needs. @@ -112,7 +112,7 @@ as a sorted array of normalized lexemes. Along with the lexemes it is often desirable to store positional information to use for proximity ranking, so that a document that - contains a more dense region of query words is + contains a more dense region of query words is assigned a higher rank than one with scattered query words. @@ -132,7 +132,7 @@ - Map synonyms to a single word using Ispell. + Map synonyms to a single word using Ispell. @@ -145,14 +145,14 @@ Map different variations of a word to a canonical form using - an Ispell dictionary. + an Ispell dictionary. Map different variations of a word to a canonical form using - Snowball stemmer rules. + Snowball stemmer rules. @@ -178,7 +178,7 @@ - A document is the unit of searching in a full text search + A document is the unit of searching in a full text search system; for example, a magazine article or email message. The text search engine must be able to parse documents and store associations of lexemes (key words) with their parent document. Later, these associations are @@ -226,11 +226,11 @@ WHERE mid = did AND mid = 12; For text search purposes, each document must be reduced to the - preprocessed tsvector format. Searching and ranking - are performed entirely on the tsvector representation + preprocessed tsvector format. 
Searching and ranking + are performed entirely on the tsvector representation of a document — the original text need only be retrieved when the document has been selected for display to a user. - We therefore often speak of the tsvector as being the + We therefore often speak of the tsvector as being the document, but of course it is only a compact representation of the full document. @@ -265,11 +265,11 @@ SELECT 'fat & cow'::tsquery @@ 'a fat cat sat on a mat and ate a fat rat'::t contains search terms, which must be already-normalized lexemes, and may combine multiple terms using AND, OR, NOT, and FOLLOWED BY operators. (For syntax details see .) There are - functions to_tsquery, plainto_tsquery, - and phraseto_tsquery + functions to_tsquery, plainto_tsquery, + and phraseto_tsquery that are helpful in converting user-written text into a proper tsquery, primarily by normalizing words appearing in - the text. Similarly, to_tsvector is used to parse and + the text. Similarly, to_tsvector is used to parse and normalize a document string. So in practice a text search match would look more like this: @@ -289,15 +289,15 @@ SELECT 'fat cats ate fat rats'::tsvector @@ to_tsquery('fat & rat'); f - since here no normalization of the word rats will occur. - The elements of a tsvector are lexemes, which are assumed - already normalized, so rats does not match rat. + since here no normalization of the word rats will occur. + The elements of a tsvector are lexemes, which are assumed + already normalized, so rats does not match rat. The @@ operator also supports text input, allowing explicit conversion of a text - string to tsvector or tsquery to be skipped + string to tsvector or tsquery to be skipped in simple cases. The variants available are: @@ -317,19 +317,19 @@ text @@ text - Within a tsquery, the & (AND) operator + Within a tsquery, the & (AND) operator specifies that both its arguments must appear in the document to have a match. Similarly, the | (OR) operator specifies that - at least one of its arguments must appear, while the ! (NOT) - operator specifies that its argument must not appear in + at least one of its arguments must appear, while the ! (NOT) + operator specifies that its argument must not appear in order to have a match. - For example, the query fat & ! rat matches documents that - contain fat but not rat. + For example, the query fat & ! rat matches documents that + contain fat but not rat. Searching for phrases is possible with the help of - the <-> (FOLLOWED BY) tsquery operator, which + the <-> (FOLLOWED BY) tsquery operator, which matches only if its arguments have matches that are adjacent and in the given order. For example: @@ -346,13 +346,13 @@ SELECT to_tsvector('error is not fatal') @@ to_tsquery('fatal <-> error'); There is a more general version of the FOLLOWED BY operator having the - form <N>, - where N is an integer standing for the difference between + form <N>, + where N is an integer standing for the difference between the positions of the matching lexemes. <1> is - the same as <->, while <2> + the same as <->, while <2> allows exactly one other lexeme to appear between the matches, and so - on. The phraseto_tsquery function makes use of this - operator to construct a tsquery that can match a multi-word + on. The phraseto_tsquery function makes use of this + operator to construct a tsquery that can match a multi-word phrase when some of the words are stop words. 
For example: @@ -374,7 +374,7 @@ SELECT phraseto_tsquery('the cats ate the rats'); - Parentheses can be used to control nesting of the tsquery + Parentheses can be used to control nesting of the tsquery operators. Without parentheses, | binds least tightly, then &, then <->, and ! most tightly. @@ -384,20 +384,20 @@ SELECT phraseto_tsquery('the cats ate the rats'); It's worth noticing that the AND/OR/NOT operators mean something subtly different when they are within the arguments of a FOLLOWED BY operator than when they are not, because within FOLLOWED BY the exact position of - the match is significant. For example, normally !x matches - only documents that do not contain x anywhere. - But !x <-> y matches y if it is not - immediately after an x; an occurrence of x + the match is significant. For example, normally !x matches + only documents that do not contain x anywhere. + But !x <-> y matches y if it is not + immediately after an x; an occurrence of x elsewhere in the document does not prevent a match. Another example is - that x & y normally only requires that x - and y both appear somewhere in the document, but - (x & y) <-> z requires x - and y to match at the same place, immediately before - a z. Thus this query behaves differently from - x <-> z & y <-> z, which will match a - document containing two separate sequences x z and - y z. (This specific query is useless as written, - since x and y could not match at the same place; + that x & y normally only requires that x + and y both appear somewhere in the document, but + (x & y) <-> z requires x + and y to match at the same place, immediately before + a z. Thus this query behaves differently from + x <-> z & y <-> z, which will match a + document containing two separate sequences x z and + y z. (This specific query is useless as written, + since x and y could not match at the same place; but with more complex situations such as prefix-match patterns, a query of this form could be useful.) @@ -412,26 +412,26 @@ SELECT phraseto_tsquery('the cats ate the rats'); skip indexing certain words (stop words), process synonyms, and use sophisticated parsing, e.g., parse based on more than just white space. This functionality is controlled by text search - configurations. PostgreSQL comes with predefined + configurations. PostgreSQL comes with predefined configurations for many languages, and you can easily create your own - configurations. (psql's \dF command + configurations. (psql's \dF command shows all available configurations.) During installation an appropriate configuration is selected and is set accordingly - in postgresql.conf. If you are using the same text search + in postgresql.conf. If you are using the same text search configuration for the entire cluster you can use the value in - postgresql.conf. To use different configurations + postgresql.conf. To use different configurations throughout the cluster but the same configuration within any one database, - use ALTER DATABASE ... SET. Otherwise, you can set + use ALTER DATABASE ... SET. Otherwise, you can set default_text_search_config in each session. Each text search function that depends on a configuration has an optional - regconfig argument, so that the configuration to use can be + regconfig argument, so that the configuration to use can be specified explicitly. default_text_search_config is used only when this argument is omitted. 
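
   A quick illustration of the fallback behavior (assuming the default
   configuration is english):

SHOW default_text_search_config;                          -- e.g., pg_catalog.english
SELECT to_tsvector('english', 'The quick brown foxes');   -- configuration given explicitly
SELECT to_tsvector('The quick brown foxes');              -- uses default_text_search_config
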
@@ -439,28 +439,28 @@ SELECT phraseto_tsquery('the cats ate the rats'); To make it easier to build custom text search configurations, a configuration is built up from simpler database objects. - PostgreSQL's text search facility provides + PostgreSQL's text search facility provides four types of configuration-related database objects: - Text search parsers break documents into tokens + Text search parsers break documents into tokens and classify each token (for example, as words or numbers). - Text search dictionaries convert tokens to normalized + Text search dictionaries convert tokens to normalized form and reject stop words. - Text search templates provide the functions underlying + Text search templates provide the functions underlying dictionaries. (A dictionary simply specifies a template and a set of parameters for the template.) @@ -468,7 +468,7 @@ SELECT phraseto_tsquery('the cats ate the rats'); - Text search configurations select a parser and a set + Text search configurations select a parser and a set of dictionaries to use to normalize the tokens produced by the parser. @@ -478,8 +478,8 @@ SELECT phraseto_tsquery('the cats ate the rats'); Text search parsers and templates are built from low-level C functions; therefore it requires C programming ability to develop new ones, and superuser privileges to install one into a database. (There are examples - of add-on parsers and templates in the contrib/ area of the - PostgreSQL distribution.) Since dictionaries and + of add-on parsers and templates in the contrib/ area of the + PostgreSQL distribution.) Since dictionaries and configurations just parameterize and connect together some underlying parsers and templates, no special privilege is needed to create a new dictionary or configuration. Examples of creating custom dictionaries and @@ -504,8 +504,8 @@ SELECT phraseto_tsquery('the cats ate the rats'); It is possible to do a full text search without an index. A simple query - to print the title of each row that contains the word - friend in its body field is: + to print the title of each row that contains the word + friend in its body field is: SELECT title @@ -513,13 +513,13 @@ FROM pgweb WHERE to_tsvector('english', body) @@ to_tsquery('english', 'friend'); - This will also find related words such as friends - and friendly, since all these are reduced to the same + This will also find related words such as friends + and friendly, since all these are reduced to the same normalized lexeme. - The query above specifies that the english configuration + The query above specifies that the english configuration is to be used to parse and normalize the strings. Alternatively we could omit the configuration parameters: @@ -535,8 +535,8 @@ WHERE to_tsvector(body) @@ to_tsquery('friend'); A more complex example is to - select the ten most recent documents that contain create and - table in the title or body: + select the ten most recent documents that contain create and + table in the title or body: SELECT title @@ -577,7 +577,7 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body)); This is because the index contents must be unaffected by . If they were affected, the index contents might be inconsistent because different entries could - contain tsvectors that were created with different text search + contain tsvectors that were created with different text search configurations, and there would be no way to guess which was which. It would be impossible to dump and restore such an index correctly. 
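
   To make the point concrete, here is the kind of pairing that works: an
   expression index built with the two-argument form, and a query spelled
   the same way (a sketch reusing the pgweb example above):

CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body));

SELECT title FROM pgweb
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'friend');
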
@@ -587,8 +587,8 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body)); used in the index above, only a query reference that uses the 2-argument version of to_tsvector with the same configuration name will use that index. That is, WHERE - to_tsvector('english', body) @@ 'a & b' can use the index, - but WHERE to_tsvector(body) @@ 'a & b' cannot. + to_tsvector('english', body) @@ 'a & b' can use the index, + but WHERE to_tsvector(body) @@ 'a & b' cannot. This ensures that an index will be used only with the same configuration used to create the index entries. @@ -601,13 +601,13 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body)); CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector(config_name, body)); - where config_name is a column in the pgweb + where config_name is a column in the pgweb table. This allows mixed configurations in the same index while recording which configuration was used for each index entry. This would be useful, for example, if the document collection contained documents in different languages. Again, queries that are meant to use the index must be phrased to match, e.g., - WHERE to_tsvector(config_name, body) @@ 'a & b'. + WHERE to_tsvector(config_name, body) @@ 'a & b'. @@ -619,11 +619,11 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', title || ' ' | - Another approach is to create a separate tsvector column - to hold the output of to_tsvector. This example is a + Another approach is to create a separate tsvector column + to hold the output of to_tsvector. This example is a concatenation of title and body, - using coalesce to ensure that one field will still be - indexed when the other is NULL: + using coalesce to ensure that one field will still be + indexed when the other is NULL: ALTER TABLE pgweb ADD COLUMN textsearchable_index_col tsvector; @@ -649,10 +649,10 @@ LIMIT 10; - When using a separate column to store the tsvector + When using a separate column to store the tsvector representation, - it is necessary to create a trigger to keep the tsvector - column current anytime title or body changes. + it is necessary to create a trigger to keep the tsvector + column current anytime title or body changes. explains how to do that. @@ -661,13 +661,13 @@ LIMIT 10; is that it is not necessary to explicitly specify the text search configuration in queries in order to make use of the index. As shown in the example above, the query can depend on - default_text_search_config. Another advantage is that + default_text_search_config. Another advantage is that searches will be faster, since it will not be necessary to redo the - to_tsvector calls to verify index matches. (This is more + to_tsvector calls to verify index matches. (This is more important when using a GiST index than a GIN index; see .) The expression-index approach is simpler to set up, however, and it requires less disk space since the - tsvector representation is not stored explicitly. + tsvector representation is not stored explicitly. @@ -701,7 +701,7 @@ LIMIT 10; -to_tsvector( config regconfig, document text) returns tsvector +to_tsvector( config regconfig, document text) returns tsvector @@ -734,12 +734,12 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); each token. For each token, a list of dictionaries () is consulted, where the list can vary depending on the token type. 
The first dictionary - that recognizes the token emits one or more normalized + that recognizes the token emits one or more normalized lexemes to represent the token. For example, rats became rat because one of the dictionaries recognized that the word rats is a plural form of rat. Some words are recognized as - stop words (), which + stop words (), which causes them to be ignored since they occur too frequently to be useful in searching. In our example these are a, on, and it. @@ -758,9 +758,9 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); The function setweight can be used to label the - entries of a tsvector with a given weight, - where a weight is one of the letters A, B, - C, or D. + entries of a tsvector with a given weight, + where a weight is one of the letters A, B, + C, or D. This is typically used to mark entries coming from different parts of a document, such as title versus body. Later, this information can be used for ranking of search results. @@ -783,8 +783,8 @@ UPDATE tt SET ti = Here we have used setweight to label the source of each lexeme in the finished tsvector, and then merged - the labeled tsvector values using the tsvector - concatenation operator ||. (tsvector values using the tsvector + concatenation operator ||. ( gives details about these operations.) @@ -811,20 +811,20 @@ UPDATE tt SET ti = -to_tsquery( config regconfig, querytext text) returns tsquery +to_tsquery( config regconfig, querytext text) returns tsquery - to_tsquery creates a tsquery value from + to_tsquery creates a tsquery value from querytext, which must consist of single tokens - separated by the tsquery operators & (AND), + separated by the tsquery operators & (AND), | (OR), ! (NOT), and <-> (FOLLOWED BY), possibly grouped using parentheses. In other words, the input to to_tsquery must already follow the general rules for - tsquery input, as described in tsquery input, as described in . The difference is that while basic - tsquery input takes the tokens at face value, + tsquery input takes the tokens at face value, to_tsquery normalizes each token into a lexeme using the specified or default configuration, and discards any tokens that are stop words according to the configuration. For example: @@ -836,8 +836,8 @@ SELECT to_tsquery('english', 'The & Fat & Rats'); 'fat' & 'rat' - As in basic tsquery input, weight(s) can be attached to each - lexeme to restrict it to match only tsvector lexemes of those + As in basic tsquery input, weight(s) can be attached to each + lexeme to restrict it to match only tsvector lexemes of those weight(s). For example: @@ -847,7 +847,7 @@ SELECT to_tsquery('english', 'Fat | Rats:AB'); 'fat' | 'rat':AB - Also, * can be attached to a lexeme to specify prefix matching: + Also, * can be attached to a lexeme to specify prefix matching: SELECT to_tsquery('supern:*A & star:A*B'); @@ -856,7 +856,7 @@ SELECT to_tsquery('supern:*A & star:A*B'); 'supern':*A & 'star':*AB - Such a lexeme will match any word in a tsvector that begins + Such a lexeme will match any word in a tsvector that begins with the given string. @@ -884,13 +884,13 @@ SELECT to_tsquery('''supernovae stars'' & !crab'); -plainto_tsquery( config regconfig, querytext text) returns tsquery +plainto_tsquery( config regconfig, querytext text) returns tsquery - plainto_tsquery transforms the unformatted text + plainto_tsquery transforms the unformatted text querytext to a tsquery value. 
- The text is parsed and normalized much as for to_tsvector, + The text is parsed and normalized much as for to_tsvector, then the & (AND) tsquery operator is inserted between surviving words. @@ -905,7 +905,7 @@ SELECT plainto_tsquery('english', 'The Fat Rats'); 'fat' & 'rat' - Note that plainto_tsquery will not + Note that plainto_tsquery will not recognize tsquery operators, weight labels, or prefix-match labels in its input: @@ -924,16 +924,16 @@ SELECT plainto_tsquery('english', 'The Fat & Rats:C'); -phraseto_tsquery( config regconfig, querytext text) returns tsquery +phraseto_tsquery( config regconfig, querytext text) returns tsquery - phraseto_tsquery behaves much like - plainto_tsquery, except that it inserts + phraseto_tsquery behaves much like + plainto_tsquery, except that it inserts the <-> (FOLLOWED BY) operator between surviving words instead of the & (AND) operator. Also, stop words are not simply discarded, but are accounted for by - inserting <N> operators rather + inserting <N> operators rather than <-> operators. This function is useful when searching for exact lexeme sequences, since the FOLLOWED BY operators check lexeme order not just the presence of all the lexemes. @@ -949,8 +949,8 @@ SELECT phraseto_tsquery('english', 'The Fat Rats'); 'fat' <-> 'rat' - Like plainto_tsquery, the - phraseto_tsquery function will not + Like plainto_tsquery, the + phraseto_tsquery function will not recognize tsquery operators, weight labels, or prefix-match labels in its input: @@ -994,7 +994,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); ts_rank - ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 + ts_rank( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 @@ -1011,7 +1011,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); ts_rank_cd - ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 + ts_rank_cd( weights float4[], vector tsvector, query tsquery , normalization integer ) returns float4 @@ -1020,19 +1020,19 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); ranking for the given document vector and query, as described in Clarke, Cormack, and Tudhope's "Relevance Ranking for One to Three Term Queries" in the journal "Information Processing and Management", - 1999. Cover density is similar to ts_rank ranking + 1999. Cover density is similar to ts_rank ranking except that the proximity of matching lexemes to each other is taken into consideration. This function requires lexeme positional information to perform - its calculation. Therefore, it ignores any stripped - lexemes in the tsvector. If there are no unstripped + its calculation. Therefore, it ignores any stripped + lexemes in the tsvector. If there are no unstripped lexemes in the input, the result will be zero. (See for more information - about the strip function and positional information - in tsvectors.) + about the strip function and positional information + in tsvectors.) @@ -1094,7 +1094,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); 4 divides the rank by the mean harmonic distance between extents - (this is implemented only by ts_rank_cd) + (this is implemented only by ts_rank_cd) @@ -1189,7 +1189,7 @@ LIMIT 10; To present search results it is ideal to show a part of each document and how it is related to the query. Usually, search engines show fragments of - the document with marked search terms. 
PostgreSQL + the document with marked search terms. PostgreSQL provides a function ts_headline that implements this functionality. @@ -1199,7 +1199,7 @@ LIMIT 10; -ts_headline( config regconfig, document text, query tsquery , options text ) returns text +ts_headline( config regconfig, document text, query tsquery , options text ) returns text @@ -1215,13 +1215,13 @@ ts_headline( config If an options string is specified it must consist of a comma-separated list of one or more - option=value pairs. + option=value pairs. The available options are: - StartSel, StopSel: the strings with + StartSel, StopSel: the strings with which to delimit query words appearing in the document, to distinguish them from other excerpted words. You must double-quote these strings if they contain spaces or commas. @@ -1229,7 +1229,7 @@ ts_headline( config - MaxWords, MinWords: these numbers + MaxWords, MinWords: these numbers determine the longest and shortest headlines to output. @@ -1256,10 +1256,10 @@ ts_headline( config MaxWords and - words of length ShortWord or less are dropped at the start + each side. Each fragment will be of at most MaxWords and + words of length ShortWord or less are dropped at the start and end of each fragment. If not all query words are found in the - document, then a single fragment of the first MinWords + document, then a single fragment of the first MinWords in the document will be displayed. @@ -1312,7 +1312,7 @@ query.', - ts_headline uses the original document, not a + ts_headline uses the original document, not a tsvector summary, so it can be slow and should be used with care. @@ -1334,10 +1334,10 @@ query.', showed how raw textual - documents can be converted into tsvector values. + documents can be converted into tsvector values. PostgreSQL also provides functions and operators that can be used to manipulate documents that are already - in tsvector form. + in tsvector form. @@ -1349,18 +1349,18 @@ query.', tsvector concatenation - tsvector || tsvector + tsvector || tsvector - The tsvector concatenation operator + The tsvector concatenation operator returns a vector which combines the lexemes and positional information of the two vectors given as arguments. Positions and weight labels are retained during the concatenation. Positions appearing in the right-hand vector are offset by the largest position mentioned in the left-hand vector, so that the result is - nearly equivalent to the result of performing to_tsvector + nearly equivalent to the result of performing to_tsvector on the concatenation of the two original document strings. (The equivalence is not exact, because any stop-words removed from the end of the left-hand argument will not affect the result, whereas @@ -1370,11 +1370,11 @@ query.', One advantage of using concatenation in the vector form, rather than - concatenating text before applying to_tsvector, is that + concatenating text before applying to_tsvector, is that you can use different configurations to parse different sections - of the document. Also, because the setweight function + of the document. Also, because the setweight function marks all lexemes of the given vector the same way, it is necessary - to parse the text and do setweight before concatenating + to parse the text and do setweight before concatenating if you want to label different parts of the document with different weights. 
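
    For instance, to label title and body differently before concatenating
    (a minimal sketch; the column names follow the pgweb example used
    earlier):

SELECT setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
       setweight(to_tsvector('english', coalesce(body, '')), 'D')
FROM pgweb;
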
@@ -1388,13 +1388,13 @@ query.',
    setweight
 
-   setweight(vector tsvector, weight "char") returns tsvector
+   setweight(vector tsvector, weight "char") returns tsvector
 
 
-    setweight returns a copy of the input vector in which every
-    position has been labeled with the given weight, either
+    setweight returns a copy of the input vector in which every
+    position has been labeled with the given weight, either
     A, B, C, or
     D. (D is the default for new
     vectors and as such is not displayed on output.) These labels are
@@ -1403,9 +1403,9 @@ query.',
 
-    Note that weight labels apply to positions, not
-    lexemes. If the input vector has been stripped of
-    positions then setweight does nothing.
+    Note that weight labels apply to positions, not
+    lexemes. If the input vector has been stripped of
+    positions then setweight does nothing.
 
 
@@ -1416,7 +1416,7 @@ query.',
     length(tsvector)
 
-   length(vector tsvector) returns integer
+   length(vector tsvector) returns integer
 
 
@@ -1433,7 +1433,7 @@ query.',
     strip
 
-   strip(vector tsvector) returns tsvector
+   strip(vector tsvector) returns tsvector
 
 
@@ -1443,7 +1443,7 @@ query.',
     smaller than an unstripped vector, but it is also less useful.
     Relevance ranking does not work as well on stripped vectors as
     unstripped ones. Also,
-    the <-> (FOLLOWED BY) tsquery operator
+    the <-> (FOLLOWED BY) tsquery operator
     will never match stripped input, since it cannot determine the
     distance between lexeme occurrences.
 
@@ -1454,7 +1454,7 @@ query.',
 
-    A full list of tsvector-related functions is available
+    A full list of tsvector-related functions is available
     in .
 
@@ -1465,10 +1465,10 @@ query.',
 
    showed how raw textual
-   queries can be converted into tsquery values.
+   queries can be converted into tsquery values.
    PostgreSQL also provides functions and
    operators that can be used to manipulate queries that are already
-   in tsquery form.
+   in tsquery form.
 
 
@@ -1476,7 +1476,7 @@ query.',
 
-    tsquery && tsquery
+    tsquery && tsquery
 
 
@@ -1490,7 +1490,7 @@ query.',
 
-    tsquery || tsquery
+    tsquery || tsquery
 
 
@@ -1504,7 +1504,7 @@ query.',
 
-    !! tsquery
+    !! tsquery
 
 
@@ -1518,15 +1518,15 @@ query.',
 
-    tsquery <-> tsquery
+    tsquery <-> tsquery
 
 
     Returns a query that searches for a match to the first given query
     immediately followed by a match to the second given query, using
-    the <-> (FOLLOWED BY)
-    tsquery operator. For example:
+    the <-> (FOLLOWED BY)
+    tsquery operator. For example:
 
 
SELECT to_tsquery('fat') <-> to_tsquery('cat | rat');
@@ -1546,7 +1546,7 @@ SELECT to_tsquery('fat') <-> to_tsquery('cat | rat');
     tsquery_phrase
 
-   tsquery_phrase(query1 tsquery, query2 tsquery [, distance integer ]) returns tsquery
+   tsquery_phrase(query1 tsquery, query2 tsquery [, distance integer ]) returns tsquery
 
 
@@ -1554,8 +1554,8 @@ SELECT to_tsquery('fat') <-> to_tsquery('cat | rat');
     Returns a query that searches for a match to the first given query
     followed by a match to the second given query at a distance of exactly
     distance lexemes, using
-    the <N>
-    tsquery operator. For example:
+    the <N>
+    tsquery operator. For example:
 
 
SELECT tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10);
@@ -1575,13 +1575,13 @@ SELECT tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10);
     numnode
 
-   numnode(query tsquery) returns integer
+   numnode(query tsquery) returns integer
 
 
     Returns the number of nodes (lexemes plus operators) in a
-    tsquery. This function is useful
+    tsquery. This function is useful
     to determine if the query is meaningful
     (returns > 0), or contains only stop words (returns 0). 
Examples: @@ -1609,12 +1609,12 @@ SELECT numnode('foo & bar'::tsquery); querytree - querytree(query tsquery) returns text + querytree(query tsquery) returns text - Returns the portion of a tsquery that can be used for + Returns the portion of a tsquery that can be used for searching an index. This function is useful for detecting unindexable queries, for example those containing only stop words or only negated terms. For example: @@ -1640,16 +1640,16 @@ SELECT querytree(to_tsquery('!defined')); The ts_rewrite family of functions search a - given tsquery for occurrences of a target + given tsquery for occurrences of a target subquery, and replace each occurrence with a substitute subquery. In essence this operation is a - tsquery-specific version of substring replacement. + tsquery-specific version of substring replacement. A target and substitute combination can be - thought of as a query rewrite rule. A collection + thought of as a query rewrite rule. A collection of such rewrite rules can be a powerful search aid. For example, you can expand the search using synonyms - (e.g., new york, big apple, nyc, - gotham) or narrow the search to direct the user to some hot + (e.g., new york, big apple, nyc, + gotham) or narrow the search to direct the user to some hot topic. There is some overlap in functionality between this feature and thesaurus dictionaries (). However, you can modify a set of rewrite rules on-the-fly without @@ -1662,12 +1662,12 @@ SELECT querytree(to_tsquery('!defined')); - ts_rewrite (query tsquery, target tsquery, substitute tsquery) returns tsquery + ts_rewrite (query tsquery, target tsquery, substitute tsquery) returns tsquery - This form of ts_rewrite simply applies a single + This form of ts_rewrite simply applies a single rewrite rule: target is replaced by substitute wherever it appears in - ts_rewrite (query tsquery, select text) returns tsquery + ts_rewrite (query tsquery, select text) returns tsquery - This form of ts_rewrite accepts a starting - query and a SQL select command, which - is given as a text string. The select must yield two - columns of tsquery type. For each row of the - select result, occurrences of the first column value + This form of ts_rewrite accepts a starting + query and a SQL select command, which + is given as a text string. The select must yield two + columns of tsquery type. For each row of the + select result, occurrences of the first column value (the target) are replaced by the second column value (the substitute) - within the current query value. For example: + within the current query value. For example: CREATE TABLE aliases (t tsquery PRIMARY KEY, s tsquery); @@ -1713,7 +1713,7 @@ SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases'); Note that when multiple rewrite rules are applied in this way, the order of application can be important; so in practice you will - want the source query to ORDER BY some ordering key. + want the source query to ORDER BY some ordering key. @@ -1777,9 +1777,9 @@ SELECT ts_rewrite('a & b'::tsquery, - When using a separate column to store the tsvector representation + When using a separate column to store the tsvector representation of your documents, it is necessary to create a trigger to update the - tsvector column when the document content columns change. + tsvector column when the document content columns change. Two built-in trigger functions are available for this, or you can write your own. 
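
   A hand-written trigger along these lines might look like this (a sketch
   only; the table and column names messages, title, body, and tsv are
   illustrative):

CREATE FUNCTION messages_trigger() RETURNS trigger AS $$
begin
  new.tsv :=
     setweight(to_tsvector('pg_catalog.english', coalesce(new.title, '')), 'A') ||
     setweight(to_tsvector('pg_catalog.english', coalesce(new.body, '')), 'D');
  return new;
end
$$ LANGUAGE plpgsql;

CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
    ON messages FOR EACH ROW EXECUTE PROCEDURE messages_trigger();
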
@@ -1790,9 +1790,9 @@ tsvector_update_trigger_column(tsvector_column_na - These trigger functions automatically compute a tsvector + These trigger functions automatically compute a tsvector column from one or more textual columns, under the control of - parameters specified in the CREATE TRIGGER command. + parameters specified in the CREATE TRIGGER command. An example of their use is: @@ -1819,24 +1819,24 @@ SELECT title, body FROM messages WHERE tsv @@ to_tsquery('title & body'); title here | the body text is here - Having created this trigger, any change in title or - body will automatically be reflected into - tsv, without the application having to worry about it. + Having created this trigger, any change in title or + body will automatically be reflected into + tsv, without the application having to worry about it. - The first trigger argument must be the name of the tsvector + The first trigger argument must be the name of the tsvector column to be updated. The second argument specifies the text search configuration to be used to perform the conversion. For - tsvector_update_trigger, the configuration name is simply + tsvector_update_trigger, the configuration name is simply given as the second trigger argument. It must be schema-qualified as shown above, so that the trigger behavior will not change with changes - in search_path. For - tsvector_update_trigger_column, the second trigger argument + in search_path. For + tsvector_update_trigger_column, the second trigger argument is the name of another table column, which must be of type - regconfig. This allows a per-row selection of configuration + regconfig. This allows a per-row selection of configuration to be made. The remaining argument(s) are the names of textual columns - (of type text, varchar, or char). These + (of type text, varchar, or char). These will be included in the document in the order given. NULL values will be skipped (but the other columns will still be indexed). @@ -1865,9 +1865,9 @@ CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE Keep in mind that it is important to specify the configuration name - explicitly when creating tsvector values inside triggers, + explicitly when creating tsvector values inside triggers, so that the column's contents will not be affected by changes to - default_text_search_config. Failure to do this is likely to + default_text_search_config. Failure to do this is likely to lead to problems such as search results changing after a dump and reload. @@ -1881,38 +1881,38 @@ CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE - The function ts_stat is useful for checking your + The function ts_stat is useful for checking your configuration and for finding stop-word candidates. -ts_stat(sqlquery text, weights text, - OUT word text, OUT ndoc integer, - OUT nentry integer) returns setof record +ts_stat(sqlquery text, weights text, + OUT word text, OUT ndoc integer, + OUT nentry integer) returns setof record sqlquery is a text value containing an SQL query which must return a single tsvector column. - ts_stat executes the query and returns statistics about + ts_stat executes the query and returns statistics about each distinct lexeme (word) contained in the tsvector data. 
The columns returned are - word text — the value of a lexeme + word text — the value of a lexeme - ndoc integer — number of documents - (tsvectors) the word occurred in + ndoc integer — number of documents + (tsvectors) the word occurred in - nentry integer — total number of + nentry integer — total number of occurrences of the word @@ -1931,8 +1931,8 @@ ORDER BY nentry DESC, ndoc DESC, word LIMIT 10; - The same, but counting only word occurrences with weight A - or B: + The same, but counting only word occurrences with weight A + or B: SELECT * FROM ts_stat('SELECT vector FROM apod', 'ab') @@ -1950,7 +1950,7 @@ LIMIT 10; Text search parsers are responsible for splitting raw document text - into tokens and identifying each token's type, where + into tokens and identifying each token's type, where the set of possible types is defined by the parser itself. Note that a parser does not modify the text at all — it simply identifies plausible word boundaries. Because of this limited scope, @@ -1961,7 +1961,7 @@ LIMIT 10; - The built-in parser is named pg_catalog.default. + The built-in parser is named pg_catalog.default. It recognizes 23 token types, shown in . @@ -1977,119 +1977,119 @@ LIMIT 10; - asciiword + asciiword Word, all ASCII letters elephant - word + word Word, all letters mañana - numword + numword Word, letters and digits beta1 - asciihword + asciihword Hyphenated word, all ASCII up-to-date - hword + hword Hyphenated word, all letters lógico-matemática - numhword + numhword Hyphenated word, letters and digits postgresql-beta1 - hword_asciipart + hword_asciipart Hyphenated word part, all ASCII postgresql in the context postgresql-beta1 - hword_part + hword_part Hyphenated word part, all letters lógico or matemática in the context lógico-matemática - hword_numpart + hword_numpart Hyphenated word part, letters and digits beta1 in the context postgresql-beta1 - email + email Email address foo@example.com - protocol + protocol Protocol head http:// - url + url URL example.com/stuff/index.html - host + host Host example.com - url_path + url_path URL path /stuff/index.html, in the context of a URL - file + file File or path name /usr/local/foo.txt, if not within a URL - sfloat + sfloat Scientific notation -1.234e56 - float + float Decimal notation -1.234 - int + int Signed integer -1234 - uint + uint Unsigned integer 1234 - version + version Version number 8.3.0 - tag + tag XML tag <a href="dictionaries.html"> - entity + entity XML entity &amp; - blank + blank Space symbols (any whitespace or punctuation not otherwise recognized) @@ -2099,16 +2099,16 @@ LIMIT 10; - The parser's notion of a letter is determined by the database's - locale setting, specifically lc_ctype. Words containing + The parser's notion of a letter is determined by the database's + locale setting, specifically lc_ctype. Words containing only the basic ASCII letters are reported as a separate token type, since it is sometimes useful to distinguish them. In most European - languages, token types word and asciiword + languages, token types word and asciiword should be treated alike. - email does not support all valid email characters as + email does not support all valid email characters as defined by RFC 5322. Specifically, the only non-alphanumeric characters supported for email user names are period, dash, and underscore. 
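
   For example, the default parser reports an entire address as a single
   email token; this is easy to check with the one-argument form of
   ts_debug:

SELECT alias, description, token FROM ts_debug('foo@example.com');
-- returns one row whose alias is 'email'
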
@@ -2154,9 +2154,9 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h Dictionaries are used to eliminate words that should not be considered in a - search (stop words), and to normalize words so + search (stop words), and to normalize words so that different derived forms of the same word will match. A successfully - normalized word is called a lexeme. Aside from + normalized word is called a lexeme. Aside from improving search quality, normalization and removal of stop words reduce the size of the tsvector representation of a document, thereby improving performance. Normalization does not always have linguistic meaning @@ -2229,10 +2229,10 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h - a single lexeme with the TSL_FILTER flag set, to replace + a single lexeme with the TSL_FILTER flag set, to replace the original token with a new token to be passed to subsequent dictionaries (a dictionary that does this is called a - filtering dictionary) + filtering dictionary) @@ -2254,7 +2254,7 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h used to create new dictionaries with custom parameters. Each predefined dictionary template is described below. If no existing template is suitable, it is possible to create new ones; see the - contrib/ area of the PostgreSQL distribution + contrib/ area of the PostgreSQL distribution for examples. @@ -2267,7 +2267,7 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h until some dictionary recognizes it as a known word. If it is identified as a stop word, or if no dictionary recognizes the token, it will be discarded and not indexed or searched for. - Normally, the first dictionary that returns a non-NULL + Normally, the first dictionary that returns a non-NULL output determines the result, and any remaining dictionaries are not consulted; but a filtering dictionary can replace the given word with a modified word, which is then passed to subsequent dictionaries. @@ -2277,11 +2277,11 @@ SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.h The general rule for configuring a list of dictionaries is to place first the most narrow, most specific dictionary, then the more general dictionaries, finishing with a very general dictionary, like - a Snowball stemmer or simple, which + a Snowball stemmer or simple, which recognizes everything. For example, for an astronomy-specific search (astro_en configuration) one could bind token type asciiword (ASCII word) to a synonym dictionary of astronomical - terms, a general English dictionary and a Snowball English + terms, a general English dictionary and a Snowball English stemmer: @@ -2305,7 +2305,7 @@ ALTER TEXT SEARCH CONFIGURATION astro_en Stop words are words that are very common, appear in almost every document, and have no discrimination value. Therefore, they can be ignored in the context of full text searching. For example, every English text - contains words like a and the, so it is + contains words like a and the, so it is useless to store them in an index. However, stop words do affect the positions in tsvector, which in turn affect ranking: @@ -2347,7 +2347,7 @@ SELECT ts_rank_cd (to_tsvector('english','list stop words'), to_tsquery('list &a Simple Dictionary - The simple dictionary template operates by converting the + The simple dictionary template operates by converting the input token to lower case and checking it against a file of stop words. 
If it is found in the file then an empty array is returned, causing the token to be discarded. If not, the lower-cased form of the word @@ -2357,7 +2357,7 @@ SELECT ts_rank_cd (to_tsvector('english','list stop words'), to_tsquery('list &a - Here is an example of a dictionary definition using the simple + Here is an example of a dictionary definition using the simple template: @@ -2369,11 +2369,11 @@ CREATE TEXT SEARCH DICTIONARY public.simple_dict ( Here, english is the base name of a file of stop words. The file's full name will be - $SHAREDIR/tsearch_data/english.stop, - where $SHAREDIR means the + $SHAREDIR/tsearch_data/english.stop, + where $SHAREDIR means the PostgreSQL installation's shared-data directory, - often /usr/local/share/postgresql (use pg_config - --sharedir to determine it if you're not sure). + often /usr/local/share/postgresql (use pg_config + --sharedir to determine it if you're not sure). The file format is simply a list of words, one per line. Blank lines and trailing spaces are ignored, and upper case is folded to lower case, but no other processing is done @@ -2397,10 +2397,10 @@ SELECT ts_lexize('public.simple_dict','The'); - We can also choose to return NULL, instead of the lower-cased + We can also choose to return NULL, instead of the lower-cased word, if it is not found in the stop words file. This behavior is - selected by setting the dictionary's Accept parameter to - false. Continuing the example: + selected by setting the dictionary's Accept parameter to + false. Continuing the example: ALTER TEXT SEARCH DICTIONARY public.simple_dict ( Accept = false ); @@ -2418,17 +2418,17 @@ SELECT ts_lexize('public.simple_dict','The'); - With the default setting of Accept = true, - it is only useful to place a simple dictionary at the end + With the default setting of Accept = true, + it is only useful to place a simple dictionary at the end of a list of dictionaries, since it will never pass on any token to - a following dictionary. Conversely, Accept = false + a following dictionary. Conversely, Accept = false is only useful when there is at least one following dictionary. Most types of dictionaries rely on configuration files, such as files of - stop words. These files must be stored in UTF-8 encoding. + stop words. These files must be stored in UTF-8 encoding. They will be translated to the actual database encoding, if that is different, when they are read into the server. @@ -2439,8 +2439,8 @@ SELECT ts_lexize('public.simple_dict','The'); Normally, a database session will read a dictionary configuration file only once, when it is first used within the session. If you modify a configuration file and want to force existing sessions to pick up the - new contents, issue an ALTER TEXT SEARCH DICTIONARY command - on the dictionary. This can be a dummy update that doesn't + new contents, issue an ALTER TEXT SEARCH DICTIONARY command + on the dictionary. This can be a dummy update that doesn't actually change any parameter values. @@ -2457,7 +2457,7 @@ SELECT ts_lexize('public.simple_dict','The'); dictionary can be used to overcome linguistic problems, for example, to prevent an English stemmer dictionary from reducing the word Paris to pari. It is enough to have a Paris paris line in the - synonym dictionary and put it before the english_stem + synonym dictionary and put it before the english_stem dictionary. 
For example: @@ -2483,24 +2483,24 @@ SELECT * FROM ts_debug('english', 'Paris'); - The only parameter required by the synonym template is - SYNONYMS, which is the base name of its configuration file - — my_synonyms in the above example. + The only parameter required by the synonym template is + SYNONYMS, which is the base name of its configuration file + — my_synonyms in the above example. The file's full name will be - $SHAREDIR/tsearch_data/my_synonyms.syn - (where $SHAREDIR means the - PostgreSQL installation's shared-data directory). + $SHAREDIR/tsearch_data/my_synonyms.syn + (where $SHAREDIR means the + PostgreSQL installation's shared-data directory). The file format is just one line per word to be substituted, with the word followed by its synonym, separated by white space. Blank lines and trailing spaces are ignored. - The synonym template also has an optional parameter - CaseSensitive, which defaults to false. When - CaseSensitive is false, words in the synonym file + The synonym template also has an optional parameter + CaseSensitive, which defaults to false. When + CaseSensitive is false, words in the synonym file are folded to lower case, as are input tokens. When it is - true, words and tokens are not folded to lower case, + true, words and tokens are not folded to lower case, but are compared as-is. @@ -2513,7 +2513,7 @@ SELECT * FROM ts_debug('english', 'Paris'); the prefix match marker (see ). For example, suppose we have these entries in - $SHAREDIR/tsearch_data/synonym_sample.syn: + $SHAREDIR/tsearch_data/synonym_sample.syn: postgres pgsql postgresql pgsql @@ -2573,7 +2573,7 @@ mydb=# SELECT 'indexes are very useful'::tsvector @@ to_tsquery('tst','indices') Basically a thesaurus dictionary replaces all non-preferred terms by one preferred term and, optionally, preserves the original terms for indexing - as well. PostgreSQL's current implementation of the + as well. PostgreSQL's current implementation of the thesaurus dictionary is an extension of the synonym dictionary with added phrase support. A thesaurus dictionary requires a configuration file of the following format: @@ -2597,7 +2597,7 @@ more sample word(s) : more indexed word(s) recognize a word. In that case, you should remove the use of the word or teach the subdictionary about it. You can place an asterisk (*) at the beginning of an indexed word to skip applying - the subdictionary to it, but all sample words must be known + the subdictionary to it, but all sample words must be known to the subdictionary. @@ -2609,16 +2609,16 @@ more sample word(s) : more indexed word(s) Specific stop words recognized by the subdictionary cannot be - specified; instead use ? to mark the location where any - stop word can appear. For example, assuming that a and - the are stop words according to the subdictionary: + specified; instead use ? to mark the location where any + stop word can appear. For example, assuming that a and + the are stop words according to the subdictionary: ? one ? two : swsw - matches a one the two and the one a two; - both would be replaced by swsw. + matches a one the two and the one a two; + both would be replaced by swsw. @@ -2628,7 +2628,7 @@ more sample word(s) : more indexed word(s) accumulation. The thesaurus dictionary must be configured carefully. 
For example, if the thesaurus dictionary is assigned to handle only the asciiword token, then a thesaurus dictionary - definition like one 7 will not work since token type + definition like one 7 will not work since token type uint is not assigned to the thesaurus dictionary. @@ -2645,7 +2645,7 @@ more sample word(s) : more indexed word(s) Thesaurus Configuration - To define a new thesaurus dictionary, use the thesaurus + To define a new thesaurus dictionary, use the thesaurus template. For example: @@ -2667,8 +2667,8 @@ CREATE TEXT SEARCH DICTIONARY thesaurus_simple ( mythesaurus is the base name of the thesaurus configuration file. - (Its full name will be $SHAREDIR/tsearch_data/mythesaurus.ths, - where $SHAREDIR means the installation shared-data + (Its full name will be $SHAREDIR/tsearch_data/mythesaurus.ths, + where $SHAREDIR means the installation shared-data directory.) @@ -2752,7 +2752,7 @@ SELECT to_tsquery('''supernova star'''); Notice that supernova star matches supernovae stars in thesaurus_astro because we specified the english_stem stemmer in the thesaurus definition. - The stemmer removed the e and s. + The stemmer removed the e and s. @@ -2774,21 +2774,21 @@ SELECT plainto_tsquery('supernova star'); - <application>Ispell</> Dictionary + <application>Ispell</application> Dictionary - The Ispell dictionary template supports - morphological dictionaries, which can normalize many + The Ispell dictionary template supports + morphological dictionaries, which can normalize many different linguistic forms of a word into the same lexeme. For example, - an English Ispell dictionary can match all declensions and + an English Ispell dictionary can match all declensions and conjugations of the search term bank, e.g., - banking, banked, banks, - banks', and bank's. + banking, banked, banks, + banks', and bank's. The standard PostgreSQL distribution does - not include any Ispell configuration files. + not include any Ispell configuration files. Dictionaries for a large number of languages are available from Ispell. Also, some more modern dictionary file formats are supported — - To create an Ispell dictionary perform these steps: + To create an Ispell dictionary perform these steps: - download dictionary configuration files. OpenOffice - extension files have the .oxt extension. It is necessary - to extract .aff and .dic files, change - extensions to .affix and .dict. For some + download dictionary configuration files. OpenOffice + extension files have the .oxt extension. It is necessary + to extract .aff and .dic files, change + extensions to .affix and .dict. For some dictionary files it is also needed to convert characters to the UTF-8 encoding with commands (for example, for a Norwegian language dictionary): @@ -2819,7 +2819,7 @@ iconv -f ISO_8859-1 -t UTF-8 -o nn_no.dict nn_NO.dic - copy files to the $SHAREDIR/tsearch_data directory + copy files to the $SHAREDIR/tsearch_data directory @@ -2837,10 +2837,10 @@ CREATE TEXT SEARCH DICTIONARY english_hunspell ( - Here, DictFile, AffFile, and StopWords + Here, DictFile, AffFile, and StopWords specify the base names of the dictionary, affixes, and stop-words files. The stop-words file has the same format explained above for the - simple dictionary type. The format of the other files is + simple dictionary type. The format of the other files is not specified here but is available from the above-mentioned web sites. 
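Once the files are in place, the new dictionary can be sanity-checked directly with ts_lexize (a sketch, assuming the english_hunspell dictionary defined above and a working pair of .affix and .dict files):

SELECT ts_lexize('english_hunspell', 'banking');

With a typical English dictionary this reduces inflected forms such as banking to the base form bank, as in the earlier example.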
@@ -2851,7 +2851,7 @@ CREATE TEXT SEARCH DICTIONARY english_hunspell ( - The .affix file of Ispell has the following + The .affix file of Ispell has the following structure: prefixes @@ -2866,7 +2866,7 @@ flag T: - And the .dict file has the following structure: + And the .dict file has the following structure: lapse/ADGRS lard/DGRS @@ -2876,14 +2876,14 @@ lark/MRS - Format of the .dict file is: + Format of the .dict file is: basic_form/affix_class_name - In the .affix file every affix flag is described in the + In the .affix file every affix flag is described in the following format: condition > [-stripping_letters,] adding_affix @@ -2892,12 +2892,12 @@ condition > [-stripping_letters,] adding_affix Here, condition has a format similar to the format of regular expressions. - It can use groupings [...] and [^...]. - For example, [AEIOU]Y means that the last letter of the word - is "y" and the penultimate letter is "a", - "e", "i", "o" or "u". - [^EY] means that the last letter is neither "e" - nor "y". + It can use groupings [...] and [^...]. + For example, [AEIOU]Y means that the last letter of the word + is "y" and the penultimate letter is "a", + "e", "i", "o" or "u". + [^EY] means that the last letter is neither "e" + nor "y". @@ -2922,8 +2922,8 @@ SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk'); - MySpell format is a subset of Hunspell. - The .affix file of Hunspell has the following + MySpell format is a subset of Hunspell. + The .affix file of Hunspell has the following structure: PFX A Y 1 @@ -2970,8 +2970,8 @@ SFX T 0 est [^ey] - The .dict file looks like the .dict file of - Ispell: + The .dict file looks like the .dict file of + Ispell: larder/M lardy/RT @@ -2982,8 +2982,8 @@ largehearted - MySpell does not support compound words. - Hunspell has sophisticated support for compound words. At + MySpell does not support compound words. + Hunspell has sophisticated support for compound words. At present, PostgreSQL implements only the basic compound word operations of Hunspell. @@ -2992,18 +2992,18 @@ largehearted - <application>Snowball</> Dictionary + <application>Snowball</application> Dictionary - The Snowball dictionary template is based on a project + The Snowball dictionary template is based on a project by Martin Porter, inventor of the popular Porter's stemming algorithm for the English language. Snowball now provides stemming algorithms for many languages (see the Snowball site for more information). Each algorithm understands how to reduce common variant forms of words to a base, or stem, spelling within - its language. A Snowball dictionary requires a language + its language. A Snowball dictionary requires a language parameter to identify which stemmer to use, and optionally can specify a - stopword file name that gives a list of words to eliminate. + stopword file name that gives a list of words to eliminate. (PostgreSQL's standard stopword lists are also provided by the Snowball project.) For example, there is a built-in definition equivalent to @@ -3020,7 +3020,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( - A Snowball dictionary recognizes everything, whether + A Snowball dictionary recognizes everything, whether or not it is able to simplify the word, so it should be placed at the end of the dictionary list. It is useless to have it before any other dictionary because a token will never pass through it to @@ -3047,7 +3047,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( one used by text search functions if an explicit configuration parameter is omitted. 
It can be set in postgresql.conf, or set for an - individual session using the SET command. + individual session using the SET command. @@ -3061,7 +3061,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( As an example we will create a configuration pg, starting by duplicating the built-in - english configuration: + english configuration: CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = pg_catalog.english ); @@ -3088,7 +3088,7 @@ CREATE TEXT SEARCH DICTIONARY pg_dict ( ); - Next we register the Ispell dictionary + Next we register the Ispell dictionary english_ispell, which has its own configuration files: @@ -3101,7 +3101,7 @@ CREATE TEXT SEARCH DICTIONARY english_ispell ( Now we can set up the mappings for words in configuration - pg: + pg: ALTER TEXT SEARCH CONFIGURATION pg @@ -3133,7 +3133,7 @@ version of our software. The next step is to set the session to use the new configuration, which was - created in the public schema: + created in the public schema: => \dF @@ -3177,18 +3177,18 @@ SHOW default_text_search_config; -ts_debug( config regconfig, document text, - OUT alias text, - OUT description text, - OUT token text, - OUT dictionaries regdictionary[], - OUT dictionary regdictionary, - OUT lexemes text[]) +ts_debug( config regconfig, document text, + OUT alias text, + OUT description text, + OUT token text, + OUT dictionaries regdictionary[], + OUT dictionary regdictionary, + OUT lexemes text[]) returns setof record - ts_debug displays information about every token of + ts_debug displays information about every token of document as produced by the parser and processed by the configured dictionaries. It uses the configuration specified by config re - ts_debug returns one row for each token identified in the text + ts_debug returns one row for each token identified in the text by the parser. The columns returned are - alias text — short name of the token type + alias text — short name of the token type - description text — description of the + description text — description of the token type - token text — text of the token + token text — text of the token - dictionaries regdictionary[] — the + dictionaries regdictionary[] — the dictionaries selected by the configuration for this token type - dictionary regdictionary — the dictionary - that recognized the token, or NULL if none did + dictionary regdictionary — the dictionary + that recognized the token, or NULL if none did - lexemes text[] — the lexeme(s) produced - by the dictionary that recognized the token, or NULL if - none did; an empty array ({}) means it was recognized as a + lexemes text[] — the lexeme(s) produced + by the dictionary that recognized the token, or NULL if + none did; an empty array ({}) means it was recognized as a stop word @@ -3307,10 +3307,10 @@ SELECT * FROM ts_debug('public.english','The Brightest supernovaes'); - In this example, the word Brightest was recognized by the + In this example, the word Brightest was recognized by the parser as an ASCII word (alias asciiword). For this token type the dictionary list is - english_ispell and + english_ispell and english_stem. The word was recognized by english_ispell, which reduced it to the noun bright. 
The word supernovaes is @@ -3360,14 +3360,14 @@ FROM ts_debug('public.english','The Brightest supernovaes'); -ts_parse(parser_name text, document text, - OUT tokid integer, OUT token text) returns setof record -ts_parse(parser_oid oid, document text, - OUT tokid integer, OUT token text) returns setof record +ts_parse(parser_name text, document text, + OUT tokid integer, OUT token text) returns setof record +ts_parse(parser_oid oid, document text, + OUT tokid integer, OUT token text) returns setof record - ts_parse parses the given document + ts_parse parses the given document and returns a series of records, one for each token produced by parsing. Each record includes a tokid showing the assigned token type and a token which is the text of the @@ -3391,14 +3391,14 @@ SELECT * FROM ts_parse('default', '123 - a number'); -ts_token_type(parser_name text, OUT tokid integer, - OUT alias text, OUT description text) returns setof record -ts_token_type(parser_oid oid, OUT tokid integer, - OUT alias text, OUT description text) returns setof record +ts_token_type(parser_name text, OUT tokid integer, + OUT alias text, OUT description text) returns setof record +ts_token_type(parser_oid oid, OUT tokid integer, + OUT alias text, OUT description text) returns setof record - ts_token_type returns a table which describes each type of + ts_token_type returns a table which describes each type of token the specified parser can recognize. For each token type, the table gives the integer tokid that the parser uses to label a token of that type, the alias that names the token type @@ -3441,7 +3441,7 @@ SELECT * FROM ts_token_type('default'); Dictionary Testing - The ts_lexize function facilitates dictionary testing. + The ts_lexize function facilitates dictionary testing. @@ -3449,11 +3449,11 @@ SELECT * FROM ts_token_type('default'); -ts_lexize(dict regdictionary, token text) returns text[] +ts_lexize(dict regdictionary, token text) returns text[] - ts_lexize returns an array of lexemes if the input + ts_lexize returns an array of lexemes if the input token is known to the dictionary, or an empty array if the token is known to the dictionary but it is a stop word, or @@ -3490,9 +3490,9 @@ SELECT ts_lexize('thesaurus_astro','supernovae stars') is null; The thesaurus dictionary thesaurus_astro does know the - phrase supernovae stars, but ts_lexize + phrase supernovae stars, but ts_lexize fails since it does not parse the input text but treats it as a single - token. Use plainto_tsquery or to_tsvector to + token. Use plainto_tsquery or to_tsvector to test thesaurus dictionaries, for example: @@ -3540,7 +3540,7 @@ SELECT plainto_tsquery('supernovae stars'); Creates a GIN (Generalized Inverted Index)-based index. - The column must be of tsvector type. + The column must be of tsvector type. @@ -3560,8 +3560,8 @@ SELECT plainto_tsquery('supernovae stars'); Creates a GiST (Generalized Search Tree)-based index. - The column can be of tsvector or - tsquery type. + The column can be of tsvector or + tsquery type. @@ -3575,7 +3575,7 @@ SELECT plainto_tsquery('supernovae stars'); compressed list of matching locations. Multi-word searches can find the first match, then use the index to remove rows that are lacking additional words. GIN indexes store only the words (lexemes) of - tsvector values, and not their weight labels. Thus a table + tsvector values, and not their weight labels. Thus a table row recheck is needed when using a query that involves weights. 
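As a sketch of the two index types just described (the table and column names are illustrative; the indexed column holds precomputed tsvector values):

CREATE INDEX pgweb_gin_idx ON pgweb USING GIN (textsearch_col);

CREATE INDEX pgweb_gist_idx ON pgweb USING GIST (textsearch_col);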
@@ -3622,7 +3622,7 @@ SELECT plainto_tsquery('supernovae stars'); - <application>psql</> Support + <application>psql</application> Support Information about text search configuration objects can be obtained @@ -3666,7 +3666,7 @@ SELECT plainto_tsquery('supernovae stars'); \dF+ PATTERN - List text search configurations (add + for more detail). + List text search configurations (add + for more detail). => \dF russian List of text search configurations @@ -3707,7 +3707,7 @@ Parser: "pg_catalog.default" \dFd+ PATTERN - List text search dictionaries (add + for more detail). + List text search dictionaries (add + for more detail). => \dFd List of text search dictionaries @@ -3738,7 +3738,7 @@ Parser: "pg_catalog.default" \dFp+ PATTERN - List text search parsers (add + for more detail). + List text search parsers (add + for more detail). => \dFp List of text search parsers @@ -3791,7 +3791,7 @@ Parser: "pg_catalog.default" \dFt+ PATTERN - List text search templates (add + for more detail). + List text search templates (add + for more detail). => \dFt List of text search templates @@ -3830,12 +3830,12 @@ Parser: "pg_catalog.default" 264 - Position values in tsvector must be greater than 0 and + Position values in tsvector must be greater than 0 and no more than 16,383 - The match distance in a <N> - (FOLLOWED BY) tsquery operator cannot be more than + The match distance in a <N> + (FOLLOWED BY) tsquery operator cannot be more than 16,384 @@ -3851,7 +3851,7 @@ Parser: "pg_catalog.default" For comparison, the PostgreSQL 8.1 documentation contained 10,441 unique words, a total of 335,420 words, and the most - frequent word postgresql was mentioned 6,127 times in 655 + frequent word postgresql was mentioned 6,127 times in 655 documents. diff --git a/doc/src/sgml/trigger.sgml b/doc/src/sgml/trigger.sgml index f5f74af5a1..b0e160acf6 100644 --- a/doc/src/sgml/trigger.sgml +++ b/doc/src/sgml/trigger.sgml @@ -53,7 +53,7 @@ On views, triggers can be defined to execute instead of INSERT, UPDATE, or - DELETE operations. INSTEAD OF triggers + DELETE operations. INSTEAD OF triggers are fired once for each row that needs to be modified in the view. It is the responsibility of the trigger's function to perform the necessary modifications to the @@ -67,9 +67,9 @@ The trigger function must be defined before the trigger itself can be created. The trigger function must be declared as a - function taking no arguments and returning type trigger. + function taking no arguments and returning type trigger. (The trigger function receives its input through a specially-passed - TriggerData structure, not in the form of ordinary function + TriggerData structure, not in the form of ordinary function arguments.) @@ -81,8 +81,8 @@ - PostgreSQL offers both per-row - triggers and per-statement triggers. With a per-row + PostgreSQL offers both per-row + triggers and per-statement triggers. With a per-row trigger, the trigger function is invoked once for each row that is affected by the statement that fired the trigger. In contrast, a per-statement trigger is @@ -90,27 +90,27 @@ regardless of the number of rows affected by that statement. In particular, a statement that affects zero rows will still result in the execution of any applicable per-statement triggers. These - two types of triggers are sometimes called row-level - triggers and statement-level triggers, + two types of triggers are sometimes called row-level + triggers and statement-level triggers, respectively. 
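A sketch of how the two kinds are declared (the table and function names are illustrative):

CREATE TRIGGER log_update_row
    AFTER UPDATE ON accounts
    FOR EACH ROW
    EXECUTE PROCEDURE log_account_update();

CREATE TRIGGER log_update_stmt
    AFTER UPDATE ON accounts
    FOR EACH STATEMENT
    EXECUTE PROCEDURE log_account_update();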
Triggers on TRUNCATE may only be defined at statement level, not per-row. Triggers are also classified according to whether they fire - before, after, or - instead of the operation. These are referred to - as BEFORE triggers, AFTER triggers, and - INSTEAD OF triggers respectively. - Statement-level BEFORE triggers naturally fire before the - statement starts to do anything, while statement-level AFTER + before, after, or + instead of the operation. These are referred to + as BEFORE triggers, AFTER triggers, and + INSTEAD OF triggers respectively. + Statement-level BEFORE triggers naturally fire before the + statement starts to do anything, while statement-level AFTER triggers fire at the very end of the statement. These types of triggers may be defined on tables, views, or foreign tables. Row-level - BEFORE triggers fire immediately before a particular row is - operated on, while row-level AFTER triggers fire at the end of - the statement (but before any statement-level AFTER triggers). + BEFORE triggers fire immediately before a particular row is + operated on, while row-level AFTER triggers fire at the end of + the statement (but before any statement-level AFTER triggers). These types of triggers may only be defined on non-partitioned tables and - foreign tables, not views. INSTEAD OF triggers may only be + foreign tables, not views. INSTEAD OF triggers may only be defined on views, and only at row level; they fire immediately as each row in the view is identified as needing to be operated on. @@ -125,31 +125,31 @@ If an INSERT contains an ON CONFLICT - DO UPDATE clause, it is possible that the effects of - row-level BEFORE INSERT triggers and + DO UPDATE clause, it is possible that the effects of + row-level BEFORE INSERT triggers and row-level BEFORE UPDATE triggers can both be applied in a way that is apparent from the final state of - the updated row, if an EXCLUDED column is referenced. - There need not be an EXCLUDED column reference for + the updated row, if an EXCLUDED column is referenced. + There need not be an EXCLUDED column reference for both sets of row-level BEFORE triggers to execute, though. The possibility of surprising outcomes should be considered when there - are both BEFORE INSERT and - BEFORE UPDATE row-level triggers + are both BEFORE INSERT and + BEFORE UPDATE row-level triggers that change a row being inserted/updated (this can be problematic even if the modifications are more or less equivalent, if they're not also idempotent). Note that statement-level UPDATE triggers are executed when ON - CONFLICT DO UPDATE is specified, regardless of whether or not + CONFLICT DO UPDATE is specified, regardless of whether or not any rows were affected by the UPDATE (and regardless of whether the alternative UPDATE path was ever taken). An INSERT with an - ON CONFLICT DO UPDATE clause will execute - statement-level BEFORE INSERT - triggers first, then statement-level BEFORE + ON CONFLICT DO UPDATE clause will execute + statement-level BEFORE INSERT + triggers first, then statement-level BEFORE UPDATE triggers, followed by statement-level - AFTER UPDATE triggers and finally - statement-level AFTER INSERT + AFTER UPDATE triggers and finally + statement-level AFTER INSERT triggers. @@ -164,7 +164,7 @@ - It can return NULL to skip the operation for the + It can return NULL to skip the operation for the current row. This instructs the executor to not perform the row-level operation that invoked the trigger (the insertion, modification, or deletion of a particular table row). 
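The same skip-the-row idea that the C function trigf implements later in this chapter can be sketched in PL/pgSQL (all names here are illustrative):

CREATE FUNCTION skip_null_x() RETURNS trigger AS $$
BEGIN
    IF NEW.x IS NULL THEN
        RETURN NULL;    -- skip the operation for this row
    END IF;
    RETURN NEW;         -- otherwise let the operation proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER skip_nulls
    BEFORE INSERT OR UPDATE ON ttest
    FOR EACH ROW EXECUTE PROCEDURE skip_null_x();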
@@ -182,7 +182,7 @@ - A row-level BEFORE trigger that does not intend to cause + A row-level BEFORE trigger that does not intend to cause either of these behaviors must be careful to return as its result the same row that was passed in (that is, the NEW row for INSERT and UPDATE @@ -191,8 +191,8 @@ - A row-level INSTEAD OF trigger should either return - NULL to indicate that it did not modify any data from + A row-level INSTEAD OF trigger should either return + NULL to indicate that it did not modify any data from the view's underlying base tables, or it should return the view row that was passed in (the NEW row for INSERT and UPDATE @@ -201,66 +201,66 @@ used to signal that the trigger performed the necessary data modifications in the view. This will cause the count of the number of rows affected by the command to be incremented. For - INSERT and UPDATE operations, the trigger - may modify the NEW row before returning it. This will + INSERT and UPDATE operations, the trigger + may modify the NEW row before returning it. This will change the data returned by - INSERT RETURNING or UPDATE RETURNING, + INSERT RETURNING or UPDATE RETURNING, and is useful when the view will not show exactly the same data that was provided. The return value is ignored for row-level triggers fired after an - operation, and so they can return NULL. + operation, and so they can return NULL. If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by - trigger name. In the case of BEFORE and - INSTEAD OF triggers, the possibly-modified row returned by + trigger name. In the case of BEFORE and + INSTEAD OF triggers, the possibly-modified row returned by each trigger becomes the input to the next trigger. If any - BEFORE or INSTEAD OF trigger returns - NULL, the operation is abandoned for that row and subsequent + BEFORE or INSTEAD OF trigger returns + NULL, the operation is abandoned for that row and subsequent triggers are not fired (for that row). - A trigger definition can also specify a Boolean WHEN + A trigger definition can also specify a Boolean WHEN condition, which will be tested to see whether the trigger should - be fired. In row-level triggers the WHEN condition can + be fired. In row-level triggers the WHEN condition can examine the old and/or new values of columns of the row. (Statement-level - triggers can also have WHEN conditions, although the feature - is not so useful for them.) In a BEFORE trigger, the - WHEN + triggers can also have WHEN conditions, although the feature + is not so useful for them.) In a BEFORE trigger, the + WHEN condition is evaluated just before the function is or would be executed, - so using WHEN is not materially different from testing the + so using WHEN is not materially different from testing the same condition at the beginning of the trigger function. However, in - an AFTER trigger, the WHEN condition is evaluated + an AFTER trigger, the WHEN condition is evaluated just after the row update occurs, and it determines whether an event is queued to fire the trigger at the end of statement. So when an - AFTER trigger's - WHEN condition does not return true, it is not necessary + AFTER trigger's + WHEN condition does not return true, it is not necessary to queue an event nor to re-fetch the row at end of statement. This can result in significant speedups in statements that modify many rows, if the trigger only needs to be fired for a few of the rows. - INSTEAD OF triggers do not support - WHEN conditions. 
+ INSTEAD OF triggers do not support + WHEN conditions. - Typically, row-level BEFORE triggers are used for checking or + Typically, row-level BEFORE triggers are used for checking or modifying the data that will be inserted or updated. For example, - a BEFORE trigger might be used to insert the current time into a + a BEFORE trigger might be used to insert the current time into a timestamp column, or to check that two elements of the row are - consistent. Row-level AFTER triggers are most sensibly + consistent. Row-level AFTER triggers are most sensibly used to propagate the updates to other tables, or make consistency checks against other tables. The reason for this division of labor is - that an AFTER trigger can be certain it is seeing the final - value of the row, while a BEFORE trigger cannot; there might - be other BEFORE triggers firing after it. If you have no - specific reason to make a trigger BEFORE or - AFTER, the BEFORE case is more efficient, since + that an AFTER trigger can be certain it is seeing the final + value of the row, while a BEFORE trigger cannot; there might + be other BEFORE triggers firing after it. If you have no + specific reason to make a trigger BEFORE or + AFTER, the BEFORE case is more efficient, since the information about the operation doesn't have to be saved until end of statement. @@ -279,8 +279,8 @@ - trigger - arguments for trigger functions + trigger + arguments for trigger functions When a trigger is being defined, arguments can be specified for it. The purpose of including arguments in the @@ -303,7 +303,7 @@ for making the trigger input data available to the trigger function. This input data includes the type of trigger event (e.g., INSERT or UPDATE) as well as any - arguments that were listed in CREATE TRIGGER. + arguments that were listed in CREATE TRIGGER. For a row-level trigger, the input data also includes the NEW row for INSERT and UPDATE triggers, and/or the OLD row @@ -313,9 +313,9 @@ By default, statement-level triggers do not have any way to examine the individual row(s) modified by the statement. But an AFTER - STATEMENT trigger can request that transition tables + STATEMENT trigger can request that transition tables be created to make the sets of affected rows available to the trigger. - AFTER ROW triggers can also request transition tables, so + AFTER ROW triggers can also request transition tables, so that they can see the total changes in the table as well as the change in the individual row they are currently being fired for. The method for examining the transition tables again depends on the programming language @@ -343,7 +343,7 @@ Statement-level triggers follow simple visibility rules: none of the changes made by a statement are visible to statement-level BEFORE triggers, whereas all - modifications are visible to statement-level AFTER + modifications are visible to statement-level AFTER triggers. @@ -352,14 +352,14 @@ The data change (insertion, update, or deletion) causing the trigger to fire is naturally not visible - to SQL commands executed in a row-level BEFORE trigger, + to SQL commands executed in a row-level BEFORE trigger, because it hasn't happened yet. - However, SQL commands executed in a row-level BEFORE + However, SQL commands executed in a row-level BEFORE trigger will see the effects of data changes for rows previously processed in the same outer command. 
This requires caution, since the ordering of these @@ -370,15 +370,15 @@ - Similarly, a row-level INSTEAD OF trigger will see the + Similarly, a row-level INSTEAD OF trigger will see the effects of data changes made by previous firings of INSTEAD - OF triggers in the same outer command. + OF triggers in the same outer command. - When a row-level AFTER trigger is fired, all data + When a row-level AFTER trigger is fired, all data changes made by the outer command are already complete, and are visible to the invoked trigger function. @@ -390,8 +390,8 @@ If your trigger function is written in any of the standard procedural languages, then the above statements apply only if the function is - declared VOLATILE. Functions that are declared - STABLE or IMMUTABLE will not see changes made by + declared VOLATILE. Functions that are declared + STABLE or IMMUTABLE will not see changes made by the calling command in any case. @@ -426,14 +426,14 @@ - Trigger functions must use the version 1 function manager + Trigger functions must use the version 1 function manager interface. When a function is called by the trigger manager, it is not passed - any normal arguments, but it is passed a context - pointer pointing to a TriggerData structure. C + any normal arguments, but it is passed a context + pointer pointing to a TriggerData structure. C functions can check whether they were called from the trigger manager or not by executing the macro: @@ -444,10 +444,10 @@ CALLED_AS_TRIGGER(fcinfo) ((fcinfo)->context != NULL && IsA((fcinfo)->context, TriggerData)) If this returns true, then it is safe to cast - fcinfo->context to type TriggerData + fcinfo->context to type TriggerData * and make use of the pointed-to - TriggerData structure. The function must - not alter the TriggerData + TriggerData structure. The function must + not alter the TriggerData structure or any of the data it points to. @@ -475,7 +475,7 @@ typedef struct TriggerData - type + type Always T_TriggerData. @@ -484,7 +484,7 @@ typedef struct TriggerData - tg_event + tg_event Describes the event for which the function is called. You can use the @@ -577,24 +577,24 @@ typedef struct TriggerData - tg_relation + tg_relation A pointer to a structure describing the relation that the trigger fired for. - Look at utils/rel.h for details about + Look at utils/rel.h for details about this structure. The most interesting things are - tg_relation->rd_att (descriptor of the relation - tuples) and tg_relation->rd_rel->relname - (relation name; the type is not char* but - NameData; use - SPI_getrelname(tg_relation) to get a char* if you + tg_relation->rd_att (descriptor of the relation + tuples) and tg_relation->rd_rel->relname + (relation name; the type is not char* but + NameData; use + SPI_getrelname(tg_relation) to get a char* if you need a copy of the name). - tg_trigtuple + tg_trigtuple A pointer to the row for which the trigger was fired. This is @@ -610,11 +610,11 @@ typedef struct TriggerData - tg_newtuple + tg_newtuple A pointer to the new version of the row, if the trigger was - fired for an UPDATE, and NULL if + fired for an UPDATE, and NULL if it is for an INSERT or a DELETE. 
This is what you have to return from the function if the event is an UPDATE @@ -626,11 +626,11 @@ typedef struct TriggerData - tg_trigger + tg_trigger - A pointer to a structure of type Trigger, - defined in utils/reltrigger.h: + A pointer to a structure of type Trigger, + defined in utils/reltrigger.h: typedef struct Trigger @@ -656,9 +656,9 @@ typedef struct Trigger } Trigger; - where tgname is the trigger's name, - tgnargs is the number of arguments in - tgargs, and tgargs is an array of + where tgname is the trigger's name, + tgnargs is the number of arguments in + tgargs, and tgargs is an array of pointers to the arguments specified in the CREATE TRIGGER statement. The other members are for internal use only. @@ -667,7 +667,7 @@ typedef struct Trigger - tg_trigtuplebuf + tg_trigtuplebuf The buffer containing tg_trigtuple, or InvalidBuffer if there @@ -677,7 +677,7 @@ typedef struct Trigger - tg_newtuplebuf + tg_newtuplebuf The buffer containing tg_newtuple, or InvalidBuffer if there @@ -687,24 +687,24 @@ typedef struct Trigger - tg_oldtable + tg_oldtable A pointer to a structure of type Tuplestorestate containing zero or more rows in the format specified by - tg_relation, or a NULL pointer + tg_relation, or a NULL pointer if there is no OLD TABLE transition relation. - tg_newtable + tg_newtable A pointer to a structure of type Tuplestorestate containing zero or more rows in the format specified by - tg_relation, or a NULL pointer + tg_relation, or a NULL pointer if there is no NEW TABLE transition relation. @@ -720,10 +720,10 @@ typedef struct Trigger A trigger function must return either a - HeapTuple pointer or a NULL pointer - (not an SQL null value, that is, do not set isNull true). + HeapTuple pointer or a NULL pointer + (not an SQL null value, that is, do not set isNull true). Be careful to return either - tg_trigtuple or tg_newtuple, + tg_trigtuple or tg_newtuple, as appropriate, if you don't want to modify the row being operated on. @@ -738,10 +738,10 @@ typedef struct Trigger - The function trigf reports the number of rows in the - table ttest and skips the actual operation if the + The function trigf reports the number of rows in the + table ttest and skips the actual operation if the command attempts to insert a null value into the column - x. (So the trigger acts as a not-null constraint but + x. (So the trigger acts as a not-null constraint but doesn't abort the transaction.) @@ -838,7 +838,7 @@ trigf(PG_FUNCTION_ARGS) linkend="dfunc">), declare the function and the triggers: CREATE FUNCTION trigf() RETURNS trigger - AS 'filename' + AS 'filename' LANGUAGE C; CREATE TRIGGER tbefore BEFORE INSERT OR UPDATE OR DELETE ON ttest diff --git a/doc/src/sgml/tsm-system-rows.sgml b/doc/src/sgml/tsm-system-rows.sgml index 93aa536664..8504ee1281 100644 --- a/doc/src/sgml/tsm-system-rows.sgml +++ b/doc/src/sgml/tsm-system-rows.sgml @@ -8,9 +8,9 @@ - The tsm_system_rows module provides the table sampling method + The tsm_system_rows module provides the table sampling method SYSTEM_ROWS, which can be used in - the TABLESAMPLE clause of a + the TABLESAMPLE clause of a command. @@ -38,7 +38,7 @@ Here is an example of selecting a sample of a table with - SYSTEM_ROWS. First install the extension: + SYSTEM_ROWS. 
First install the extension: @@ -55,7 +55,7 @@ SELECT * FROM my_table TABLESAMPLE SYSTEM_ROWS(100); This command will return a sample of 100 rows from the - table my_table (unless the table does not have 100 + table my_table (unless the table does not have 100 visible rows, in which case all its rows are returned). diff --git a/doc/src/sgml/tsm-system-time.sgml b/doc/src/sgml/tsm-system-time.sgml index 3f8ff1a026..525292bb7c 100644 --- a/doc/src/sgml/tsm-system-time.sgml +++ b/doc/src/sgml/tsm-system-time.sgml @@ -8,9 +8,9 @@ - The tsm_system_time module provides the table sampling method + The tsm_system_time module provides the table sampling method SYSTEM_TIME, which can be used in - the TABLESAMPLE clause of a + the TABLESAMPLE clause of a command. @@ -40,7 +40,7 @@ Here is an example of selecting a sample of a table with - SYSTEM_TIME. First install the extension: + SYSTEM_TIME. First install the extension: @@ -56,7 +56,7 @@ SELECT * FROM my_table TABLESAMPLE SYSTEM_TIME(1000); - This command will return as large a sample of my_table as + This command will return as large a sample of my_table as it can read in 1 second (1000 milliseconds). Of course, if the whole table can be read in under 1 second, all its rows will be returned. diff --git a/doc/src/sgml/typeconv.sgml b/doc/src/sgml/typeconv.sgml index 63d41f03f3..5c99e3adaf 100644 --- a/doc/src/sgml/typeconv.sgml +++ b/doc/src/sgml/typeconv.sgml @@ -40,7 +40,7 @@ has an associated data type which determines its behavior and allowed usage. PostgreSQL has an extensible type system that is more general and flexible than other SQL implementations. Hence, most type conversion behavior in PostgreSQL -is governed by general rules rather than by ad hoc +is governed by general rules rather than by ad hoc heuristics. This allows the use of mixed-type expressions even with user-defined types. @@ -124,11 +124,11 @@ with, and perhaps converted to, the types of the target columns. Since all query results from a unionized SELECT statement must appear in a single set of columns, the types of the results of each -SELECT clause must be matched up and converted to a uniform set. -Similarly, the result expressions of a CASE construct must be -converted to a common type so that the CASE expression as a whole -has a known output type. The same holds for ARRAY constructs, -and for the GREATEST and LEAST functions. +SELECT clause must be matched up and converted to a uniform set. +Similarly, the result expressions of a CASE construct must be +converted to a common type so that the CASE expression as a whole +has a known output type. The same holds for ARRAY constructs, +and for the GREATEST and LEAST functions. @@ -345,7 +345,7 @@ Some examples follow. Factorial Operator Type Resolution -There is only one factorial operator (postfix !) +There is only one factorial operator (postfix !) defined in the standard catalog, and it takes an argument of type bigint. The scanner assigns an initial type of integer to the argument @@ -423,11 +423,11 @@ type to resolve the unknown-type literals as. The PostgreSQL operator catalog has several -entries for the prefix operator @, all of which implement +entries for the prefix operator @, all of which implement absolute-value operations for various numeric data types. One of these entries is for type float8, which is the preferred type in the numeric category. 
Therefore, PostgreSQL -will use that entry when faced with an unknown input: +will use that entry when faced with an unknown input: SELECT @ '-4.5' AS "abs"; abs @@ -446,9 +446,9 @@ ERROR: "-4.5e500" is out of range for type double precision -On the other hand, the prefix operator ~ (bitwise negation) +On the other hand, the prefix operator ~ (bitwise negation) is defined only for integer data types, not for float8. So, if we -try a similar case with ~, we get: +try a similar case with ~, we get: SELECT ~ '20' AS "negation"; @@ -457,7 +457,7 @@ HINT: Could not choose a best candidate operator. You might need to add explicit type casts. This happens because the system cannot decide which of the several -possible ~ operators should be preferred. We can help +possible ~ operators should be preferred. We can help it out with an explicit cast: SELECT ~ CAST('20' AS int8) AS "negation"; @@ -485,10 +485,10 @@ SELECT array[1,2] <@ '{1,2,3}' as "is subset"; (1 row) The PostgreSQL operator catalog has several -entries for the infix operator <@, but the only two that +entries for the infix operator <@, but the only two that could possibly accept an integer array on the left-hand side are -array inclusion (anyarray <@ anyarray) -and range inclusion (anyelement <@ anyrange). +array inclusion (anyarray <@ anyarray) +and range inclusion (anyelement <@ anyrange). Since none of these polymorphic pseudo-types (see ) are considered preferred, the parser cannot resolve the ambiguity on that basis. @@ -518,19 +518,19 @@ CREATE TABLE mytable (val mytext); SELECT * FROM mytable WHERE val = 'foo'; This query will not use the custom operator. The parser will first see if -there is a mytext = mytext operator +there is a mytext = mytext operator (), which there is not; -then it will consider the domain's base type text, and see if -there is a text = text operator +then it will consider the domain's base type text, and see if +there is a text = text operator (), which there is; -so it resolves the unknown-type literal as text and -uses the text = text operator. +so it resolves the unknown-type literal as text and +uses the text = text operator. The only way to get the custom operator to be used is to explicitly cast the literal: SELECT * FROM mytable WHERE val = text 'foo'; -so that the mytext = text operator is found +so that the mytext = text operator is found immediately according to the exact-match rule. If the best-match rules are reached, they actively discriminate against operators on domain types. If they did not, such an operator would create too many ambiguous-operator @@ -580,8 +580,8 @@ search path position. -If a function is declared with a VARIADIC array parameter, and -the call does not use the VARIADIC keyword, then the function +If a function is declared with a VARIADIC array parameter, and +the call does not use the VARIADIC keyword, then the function is treated as if the array parameter were replaced by one or more occurrences of its element type, as needed to match the call. After such expansion the function might have effective argument types identical to some non-variadic @@ -599,7 +599,7 @@ search path is used. 
If there are two or more such functions in the same schema with identical parameter types in the non-defaulted positions (which is possible if they have different sets of defaultable parameters), the system will not be able to determine which to prefer, and so an ambiguous -function call error will result if no better match to the call can be +function call error will result if no better match to the call can be found. @@ -626,7 +626,7 @@ an unknown-type literal, or a type that is binary-coercible to the named data type, or a type that could be converted to the named data type by applying that type's I/O functions (that is, the conversion is either to or from one of the standard string types). When these conditions are met, -the function call is treated as a form of CAST specification. +the function call is treated as a form of CAST specification. The reason for this step is to support function-style cast specifications @@ -709,7 +709,7 @@ Otherwise, fail. -Note that the best match rules are identical for operator and +Note that the best match rules are identical for operator and function type resolution. Some examples follow. @@ -790,7 +790,7 @@ SELECT substr(CAST (varchar '1234' AS text), 3); -The parser learns from the pg_cast catalog that +The parser learns from the pg_cast catalog that text and varchar are binary-compatible, meaning that one can be passed to a function that accepts the other without doing any physical conversion. Therefore, no @@ -809,8 +809,8 @@ HINT: No function matches the given name and argument types. You might need to add explicit type casts. -This does not work because integer does not have an implicit cast -to text. An explicit cast will work, however: +This does not work because integer does not have an implicit cast +to text. An explicit cast will work, however: SELECT substr(CAST (1234 AS text), 3); @@ -845,8 +845,8 @@ Check for an exact match with the target. Otherwise, try to convert the expression to the target type. This is possible -if an assignment cast between the two types is registered in the -pg_cast catalog (see ). +if an assignment cast between the two types is registered in the +pg_cast catalog (see ). Alternatively, if the expression is an unknown-type literal, the contents of the literal string will be fed to the input conversion routine for the target type. @@ -857,12 +857,12 @@ type. Check to see if there is a sizing cast for the target type. A sizing cast is a cast from that type to itself. If one is found in the -pg_cast catalog, apply it to the expression before storing +pg_cast catalog, apply it to the expression before storing into the destination column. The implementation function for such a cast always takes an extra parameter of type integer, which receives -the destination column's atttypmod value (typically its -declared length, although the interpretation of atttypmod -varies for different data types), and it may take a third boolean +the destination column's atttypmod value (typically its +declared length, although the interpretation of atttypmod +varies for different data types), and it may take a third boolean parameter that says whether the cast is explicit or implicit. The cast function is responsible for applying any length-dependent semantics such as size @@ -896,11 +896,11 @@ What has really happened here is that the two unknown literals are resolved to text by default, allowing the || operator to be resolved as text concatenation. 
Then the text result of the operator is converted to bpchar (blank-padded -char, the internal name of the character data type) to match the target +char, the internal name of the character data type) to match the target column type. (Since the conversion from text to bpchar is binary-coercible, this conversion does not insert any real function call.) Finally, the sizing function -bpchar(bpchar, integer, boolean) is found in the system catalog +bpchar(bpchar, integer, boolean) is found in the system catalog and applied to the operator's result and the stored column length. This type-specific function performs the required length check and addition of padding spaces. @@ -942,13 +942,13 @@ padding spaces. -SQL UNION constructs must match up possibly dissimilar +SQL UNION constructs must match up possibly dissimilar types to become a single result set. The resolution algorithm is applied separately to each output column of a union query. The -INTERSECT and EXCEPT constructs resolve -dissimilar types in the same way as UNION. The -CASE, ARRAY, VALUES, -GREATEST and LEAST constructs use the identical +INTERSECT and EXCEPT constructs resolve +dissimilar types in the same way as UNION. The +CASE, ARRAY, VALUES, +GREATEST and LEAST constructs use the identical algorithm to match up their component expressions and select a result data type. @@ -972,7 +972,7 @@ domain's base type for all subsequent steps. Somewhat like the treatment of domain inputs for operators and functions, this behavior allows a domain type to be preserved through - a UNION or similar construct, so long as the user is + a UNION or similar construct, so long as the user is careful to ensure that all inputs are implicitly or explicitly of that exact type. Otherwise the domain's base type will be preferred. @@ -1053,9 +1053,9 @@ SELECT 1.2 AS "numeric" UNION SELECT 1; 1.2 (2 rows) -The literal 1.2 is of type numeric, -and the integer value 1 can be cast implicitly to -numeric, so that type is used. +The literal 1.2 is of type numeric, +and the integer value 1 can be cast implicitly to +numeric, so that type is used. @@ -1072,9 +1072,9 @@ SELECT 1 AS "real" UNION SELECT CAST('2.2' AS REAL); 2.2 (2 rows) -Here, since type real cannot be implicitly cast to integer, -but integer can be implicitly cast to real, the union -result type is resolved as real. +Here, since type real cannot be implicitly cast to integer, +but integer can be implicitly cast to real, the union +result type is resolved as real. @@ -1089,38 +1089,38 @@ result type is resolved as real. The rules given in the preceding sections will result in assignment -of non-unknown data types to all expressions in a SQL query, +of non-unknown data types to all expressions in a SQL query, except for unspecified-type literals that appear as simple output -columns of a SELECT command. For example, in +columns of a SELECT command. For example, in SELECT 'Hello World'; there is nothing to identify what type the string literal should be -taken as. In this situation PostgreSQL will fall back -to resolving the literal's type as text. +taken as. In this situation PostgreSQL will fall back +to resolving the literal's type as text. -When the SELECT is one arm of a UNION -(or INTERSECT or EXCEPT) construct, or when it -appears within INSERT ... SELECT, this rule is not applied +When the SELECT is one arm of a UNION +(or INTERSECT or EXCEPT) construct, or when it +appears within INSERT ... SELECT, this rule is not applied since rules given in preceding sections take precedence. 
The type of an -unspecified-type literal can be taken from the other UNION arm +unspecified-type literal can be taken from the other UNION arm in the first case, or from the destination column in the second case. -RETURNING lists are treated the same as SELECT +RETURNING lists are treated the same as SELECT output lists for this purpose. - Prior to PostgreSQL 10, this rule did not exist, and - unspecified-type literals in a SELECT output list were - left as type unknown. That had assorted bad consequences, + Prior to PostgreSQL 10, this rule did not exist, and + unspecified-type literals in a SELECT output list were + left as type unknown. That had assorted bad consequences, so it's been changed. diff --git a/doc/src/sgml/unaccent.sgml b/doc/src/sgml/unaccent.sgml index d5cf98f6c1..a7f5f53041 100644 --- a/doc/src/sgml/unaccent.sgml +++ b/doc/src/sgml/unaccent.sgml @@ -8,7 +8,7 @@ - unaccent is a text search dictionary that removes accents + unaccent is a text search dictionary that removes accents (diacritic signs) from lexemes. It's a filtering dictionary, which means its output is always passed to the next dictionary (if any), unlike the normal @@ -17,7 +17,7 @@ - The current implementation of unaccent cannot be used as a + The current implementation of unaccent cannot be used as a normalizing dictionary for the thesaurus dictionary. @@ -25,17 +25,17 @@ Configuration - An unaccent dictionary accepts the following options: + An unaccent dictionary accepts the following options: - RULES is the base name of the file containing the list of + RULES is the base name of the file containing the list of translation rules. This file must be stored in - $SHAREDIR/tsearch_data/ (where $SHAREDIR means - the PostgreSQL installation's shared-data directory). - Its name must end in .rules (which is not to be included in - the RULES parameter). + $SHAREDIR/tsearch_data/ (where $SHAREDIR means + the PostgreSQL installation's shared-data directory). + Its name must end in .rules (which is not to be included in + the RULES parameter). @@ -72,15 +72,15 @@ - Actually, each character can be any string not containing - whitespace, so unaccent dictionaries could be used for + Actually, each character can be any string not containing + whitespace, so unaccent dictionaries could be used for other sorts of substring substitutions besides diacritic removal. - As with other PostgreSQL text search configuration files, + As with other PostgreSQL text search configuration files, the rules file must be stored in UTF-8 encoding. The data is automatically translated into the current database's encoding when loaded. Any lines containing untranslatable characters are silently @@ -92,8 +92,8 @@ A more complete example, which is directly useful for most European - languages, can be found in unaccent.rules, which is installed - in $SHAREDIR/tsearch_data/ when the unaccent + languages, can be found in unaccent.rules, which is installed + in $SHAREDIR/tsearch_data/ when the unaccent module is installed. This rules file translates characters with accents to the same characters without accents, and it also expands ligatures into the equivalent series of simple characters (for example, Æ to @@ -105,11 +105,11 @@ Usage - Installing the unaccent extension creates a text - search template unaccent and a dictionary unaccent - based on it. The unaccent dictionary has the default - parameter setting RULES='unaccent', which makes it immediately - usable with the standard unaccent.rules file. 
+ Installing the unaccent extension creates a text + search template unaccent and a dictionary unaccent + based on it. The unaccent dictionary has the default + parameter setting RULES='unaccent', which makes it immediately + usable with the standard unaccent.rules file. If you wish, you can alter the parameter, for example @@ -132,7 +132,7 @@ mydb=# select ts_lexize('unaccent','Hôtel'); Here is an example showing how to insert the - unaccent dictionary into a text search configuration: + unaccent dictionary into a text search configuration: mydb=# CREATE TEXT SEARCH CONFIGURATION fr ( COPY = french ); mydb=# ALTER TEXT SEARCH CONFIGURATION fr @@ -163,9 +163,9 @@ mydb=# select ts_headline('fr','Hôtel de la Mer',to_tsquery('fr','Hotels') Functions - The unaccent() function removes accents (diacritic signs) from + The unaccent() function removes accents (diacritic signs) from a given string. Basically, it's a wrapper around - unaccent-type dictionaries, but it can be used outside normal + unaccent-type dictionaries, but it can be used outside normal text search contexts. @@ -179,7 +179,7 @@ unaccent(dictionary, If the dictionary argument is - omitted, unaccent is assumed. + omitted, unaccent is assumed. diff --git a/doc/src/sgml/user-manag.sgml b/doc/src/sgml/user-manag.sgml index 46989f0169..2416bfd03d 100644 --- a/doc/src/sgml/user-manag.sgml +++ b/doc/src/sgml/user-manag.sgml @@ -5,18 +5,18 @@ PostgreSQL manages database access permissions - using the concept of roles. A role can be thought of as + using the concept of roles. A role can be thought of as either a database user, or a group of database users, depending on how the role is set up. Roles can own database objects (for example, tables and functions) and can assign privileges on those objects to other roles to control who has access to which objects. Furthermore, it is possible - to grant membership in a role to another role, thus + to grant membership in a role to another role, thus allowing the member role to use privileges assigned to another role. - The concept of roles subsumes the concepts of users and - groups. In PostgreSQL versions + The concept of roles subsumes the concepts of users and + groups. In PostgreSQL versions before 8.1, users and groups were distinct kinds of entities, but now there are only roles. Any role can act as a user, a group, or both. @@ -59,7 +59,7 @@ CREATE ROLE name; name follows the rules for SQL identifiers: either unadorned without special characters, or double-quoted. (In practice, you will usually want to add additional - options, such as LOGIN, to the command. More details appear + options, such as LOGIN, to the command. More details appear below.) To remove an existing role, use the analogous command: @@ -87,19 +87,19 @@ dropuser name - To determine the set of existing roles, examine the pg_roles + To determine the set of existing roles, examine the pg_roles system catalog, for example SELECT rolname FROM pg_roles; - The program's \du meta-command + The program's \du meta-command is also useful for listing the existing roles. In order to bootstrap the database system, a freshly initialized system always contains one predefined role. This role is always - a superuser, and by default (unless altered when running + a superuser, and by default (unless altered when running initdb) it will have the same name as the operating system user that initialized the database cluster. 
Customarily, this role will be named @@ -118,7 +118,7 @@ SELECT rolname FROM pg_roles; command line option to indicate the role to connect as. Many applications assume the name of the current operating system user by default (including - createuser and psql). Therefore it + createuser and psql). Therefore it is often convenient to maintain a naming correspondence between roles and operating system users. @@ -145,27 +145,27 @@ SELECT rolname FROM pg_roles; - login privilegelogin privilege + login privilegelogin privilege - Only roles that have the LOGIN attribute can be used + Only roles that have the LOGIN attribute can be used as the initial role name for a database connection. A role with - the LOGIN attribute can be considered the same - as a database user. To create a role with login privilege, + the LOGIN attribute can be considered the same + as a database user. To create a role with login privilege, use either: CREATE ROLE name LOGIN; CREATE USER name; - (CREATE USER is equivalent to CREATE ROLE - except that CREATE USER assumes LOGIN by - default, while CREATE ROLE does not.) + (CREATE USER is equivalent to CREATE ROLE + except that CREATE USER assumes LOGIN by + default, while CREATE ROLE does not.) - superuser statussuperuser + superuser statussuperuser A database superuser bypasses all permission checks, except the right @@ -179,7 +179,7 @@ CREATE USER name; - database creationdatabaseprivilege to create + database creationdatabaseprivilege to create A role must be explicitly given permission to create databases @@ -191,30 +191,30 @@ CREATE USER name; - role creationroleprivilege to create + role creationroleprivilege to create A role must be explicitly given permission to create more roles (except for superusers, since those bypass all permission checks). To create such a role, use CREATE ROLE name CREATEROLE. - A role with CREATEROLE privilege can alter and drop + A role with CREATEROLE privilege can alter and drop other roles, too, as well as grant or revoke membership in them. However, to create, alter, drop, or change membership of a superuser role, superuser status is required; - CREATEROLE is insufficient for that. + CREATEROLE is insufficient for that. - initiating replicationroleprivilege to initiate replication + initiating replicationroleprivilege to initiate replication A role must explicitly be given permission to initiate streaming replication (except for superusers, since those bypass all permission checks). A role used for streaming replication must - have LOGIN permission as well. To create such a role, use + have LOGIN permission as well. To create such a role, use CREATE ROLE name REPLICATION LOGIN. @@ -222,32 +222,32 @@ CREATE USER name; - passwordpassword + passwordpassword A password is only significant if the client authentication method requires the user to supply a password when connecting - to the database. The and + authentication methods make use of passwords. Database passwords are separate from operating system passwords. Specify a password upon role creation with CREATE ROLE - name PASSWORD 'string'. + name PASSWORD 'string'. A role's attributes can be modified after creation with - ALTER ROLE.ALTER ROLE + ALTER ROLE.ALTER ROLE See the reference pages for the and commands for details. 
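Several of these attributes can be combined in a single command. For instance, a sketch of the kind of non-superuser administration role recommended just below (the role name and password are placeholders):

CREATE ROLE admin CREATEDB CREATEROLE LOGIN PASSWORD 'change-me';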
- It is good practice to create a role that has the CREATEDB - and CREATEROLE privileges, but is not a superuser, and then + It is good practice to create a role that has the CREATEDB + and CREATEROLE privileges, but is not a superuser, and then use this role for all routine management of databases and roles. This approach avoids the dangers of operating as a superuser for tasks that do not really require it. @@ -269,9 +269,9 @@ ALTER ROLE myname SET enable_indexscan TO off; just before the session started. You can still alter this setting during the session; it will only be the default. To remove a role-specific default setting, use - ALTER ROLE rolename RESET varname. + ALTER ROLE rolename RESET varname. Note that role-specific defaults attached to roles without - LOGIN privilege are fairly useless, since they will never + LOGIN privilege are fairly useless, since they will never be invoked. @@ -280,7 +280,7 @@ ALTER ROLE myname SET enable_indexscan TO off; Role Membership - rolemembership in + rolemembership in @@ -288,7 +288,7 @@ ALTER ROLE myname SET enable_indexscan TO off; management of privileges: that way, privileges can be granted to, or revoked from, a group as a whole. In PostgreSQL this is done by creating a role that represents the group, and then - granting membership in the group role to individual user + granting membership in the group role to individual user roles. @@ -297,7 +297,7 @@ ALTER ROLE myname SET enable_indexscan TO off; CREATE ROLE name; - Typically a role being used as a group would not have the LOGIN + Typically a role being used as a group would not have the LOGIN attribute, though you can set it if you wish. @@ -320,11 +320,11 @@ REVOKE group_role FROM role1 to - temporarily become the group role. In this state, the + temporarily become the group role. In this state, the database session has access to the privileges of the group role rather than the original login role, and any database objects created are considered owned by the group role not the login role. Second, member - roles that have the INHERIT attribute automatically have use + roles that have the INHERIT attribute automatically have use of the privileges of roles of which they are members, including any privileges inherited by those roles. As an example, suppose we have done: @@ -335,25 +335,25 @@ CREATE ROLE wheel NOINHERIT; GRANT admin TO joe; GRANT wheel TO admin; - Immediately after connecting as role joe, a database - session will have use of privileges granted directly to joe - plus any privileges granted to admin, because joe - inherits admin's privileges. However, privileges - granted to wheel are not available, because even though - joe is indirectly a member of wheel, the - membership is via admin which has the NOINHERIT + Immediately after connecting as role joe, a database + session will have use of privileges granted directly to joe + plus any privileges granted to admin, because joe + inherits admin's privileges. However, privileges + granted to wheel are not available, because even though + joe is indirectly a member of wheel, the + membership is via admin which has the NOINHERIT attribute. After: SET ROLE admin; the session would have use of only those privileges granted to - admin, and not those granted to joe. After: + admin, and not those granted to joe. After: SET ROLE wheel; the session would have use of only those privileges granted to - wheel, and not those granted to either joe - or admin. 
The original privilege state can be restored + wheel, and not those granted to either joe + or admin. The original privilege state can be restored with any of: SET ROLE joe; @@ -364,10 +364,10 @@ RESET ROLE; - The SET ROLE command always allows selecting any role + The SET ROLE command always allows selecting any role that the original login role is directly or indirectly a member of. Thus, in the above example, it is not necessary to become - admin before becoming wheel. + admin before becoming wheel. @@ -376,26 +376,26 @@ RESET ROLE; In the SQL standard, there is a clear distinction between users and roles, and users do not automatically inherit privileges while roles do. This behavior can be obtained in PostgreSQL by giving - roles being used as SQL roles the INHERIT attribute, while - giving roles being used as SQL users the NOINHERIT attribute. + roles being used as SQL roles the INHERIT attribute, while + giving roles being used as SQL users the NOINHERIT attribute. However, PostgreSQL defaults to giving all roles - the INHERIT attribute, for backward compatibility with pre-8.1 + the INHERIT attribute, for backward compatibility with pre-8.1 releases in which users always had use of permissions granted to groups they were members of. - The role attributes LOGIN, SUPERUSER, - CREATEDB, and CREATEROLE can be thought of as + The role attributes LOGIN, SUPERUSER, + CREATEDB, and CREATEROLE can be thought of as special privileges, but they are never inherited as ordinary privileges - on database objects are. You must actually SET ROLE to a + on database objects are. You must actually SET ROLE to a specific role having one of these attributes in order to make use of the attribute. Continuing the above example, we might choose to - grant CREATEDB and CREATEROLE to the - admin role. Then a session connecting as role joe + grant CREATEDB and CREATEROLE to the + admin role. Then a session connecting as role joe would not have these privileges immediately, only after doing - SET ROLE admin. + SET ROLE admin. @@ -425,16 +425,16 @@ DROP ROLE name; Ownership of objects can be transferred one at a time - using ALTER commands, for example: + using ALTER commands, for example: ALTER TABLE bobs_table OWNER TO alice; Alternatively, the command can be used to reassign ownership of all objects owned by the role-to-be-dropped - to a single other role. Because REASSIGN OWNED cannot access + to a single other role. Because REASSIGN OWNED cannot access objects in other databases, it is necessary to run it in each database that contains objects owned by the role. (Note that the first - such REASSIGN OWNED will change the ownership of any + such REASSIGN OWNED will change the ownership of any shared-across-databases objects, that is databases or tablespaces, that are owned by the role-to-be-dropped.) @@ -445,17 +445,17 @@ ALTER TABLE bobs_table OWNER TO alice; the command. Again, this command cannot access objects in other databases, so it is necessary to run it in each database that contains objects owned by the role. Also, DROP - OWNED will not drop entire databases or tablespaces, so it is + OWNED will not drop entire databases or tablespaces, so it is necessary to do that manually if the role owns any databases or tablespaces that have not been transferred to new owners. - DROP OWNED also takes care of removing any privileges granted + DROP OWNED also takes care of removing any privileges granted to the target role for objects that do not belong to it. 
- Because REASSIGN OWNED does not touch such objects, it's - typically necessary to run both REASSIGN OWNED - and DROP OWNED (in that order!) to fully remove the + Because REASSIGN OWNED does not touch such objects, it's + typically necessary to run both REASSIGN OWNED + and DROP OWNED (in that order!) to fully remove the dependencies of a role to be dropped. @@ -477,7 +477,7 @@ DROP ROLE doomed_role; - If DROP ROLE is attempted while dependent objects still + If DROP ROLE is attempted while dependent objects still remain, it will issue messages identifying which objects need to be reassigned or dropped. @@ -487,7 +487,7 @@ DROP ROLE doomed_role; Default Roles - role + role @@ -589,7 +589,7 @@ GRANT pg_signal_backend TO admin_user; possible to change the server's internal data structures. Hence, among many other things, such functions can circumvent any system access controls. Function languages that allow such access - are considered untrusted, and + are considered untrusted, and PostgreSQL allows only superusers to create functions written in those languages. diff --git a/doc/src/sgml/uuid-ossp.sgml b/doc/src/sgml/uuid-ossp.sgml index 227d4a839c..b1c1cd6f0a 100644 --- a/doc/src/sgml/uuid-ossp.sgml +++ b/doc/src/sgml/uuid-ossp.sgml @@ -8,7 +8,7 @@ - The uuid-ossp module provides functions to generate universally + The uuid-ossp module provides functions to generate universally unique identifiers (UUIDs) using one of several standard algorithms. There are also functions to produce certain special UUID constants. @@ -63,7 +63,7 @@ This function generates a version 3 UUID in the given namespace using the specified input name. The namespace should be one of the special - constants produced by the uuid_ns_*() functions shown + constants produced by the uuid_ns_*() functions shown in . (It could be any UUID in theory.) The name is an identifier in the selected namespace. @@ -114,7 +114,7 @@ SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org'); uuid_nil() - A nil UUID constant, which does not occur as a real UUID. + A nil UUID constant, which does not occur as a real UUID. @@ -140,7 +140,7 @@ SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org'); Constant designating the ISO object identifier (OID) namespace for UUIDs. (This pertains to ASN.1 OIDs, which are unrelated to the OIDs - used in PostgreSQL.) + used in PostgreSQL.) @@ -159,33 +159,33 @@ SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org'); - Building <filename>uuid-ossp</> + Building <filename>uuid-ossp</filename> Historically this module depended on the OSSP UUID library, which accounts for the module's name. While the OSSP UUID library can still be found at , it is not well maintained, and is becoming increasingly difficult to port to newer - platforms. uuid-ossp can now be built without the OSSP + platforms. uuid-ossp can now be built without the OSSP library on some platforms. On FreeBSD, NetBSD, and some other BSD-derived platforms, suitable UUID creation functions are included in the - core libc library. On Linux, macOS, and some other - platforms, suitable functions are provided in the libuuid - library, which originally came from the e2fsprogs project + core libc library. On Linux, macOS, and some other + platforms, suitable functions are provided in the libuuid + library, which originally came from the e2fsprogs project (though on modern Linux it is considered part - of util-linux-ng). When invoking configure, + of util-linux-ng). 
When invoking configure, specify to use the BSD functions, or to - use e2fsprogs' libuuid, or + use e2fsprogs' libuuid, or to use the OSSP UUID library. More than one of these libraries might be available on a particular - machine, so configure does not automatically choose one. + machine, so configure does not automatically choose one. If you only need randomly-generated (version 4) UUIDs, - consider using the gen_random_uuid() function + consider using the gen_random_uuid() function from the module instead. diff --git a/doc/src/sgml/vacuumlo.sgml b/doc/src/sgml/vacuumlo.sgml index 9da61c93fe..190ed9880b 100644 --- a/doc/src/sgml/vacuumlo.sgml +++ b/doc/src/sgml/vacuumlo.sgml @@ -28,17 +28,17 @@ Description - vacuumlo is a simple utility program that will remove any - orphaned large objects from a - PostgreSQL database. An orphaned large object (LO) is - considered to be any LO whose OID does not appear in any oid or - lo data column of the database. + vacuumlo is a simple utility program that will remove any + orphaned large objects from a + PostgreSQL database. An orphaned large object (LO) is + considered to be any LO whose OID does not appear in any oid or + lo data column of the database. - If you use this, you may also be interested in the lo_manage + If you use this, you may also be interested in the lo_manage trigger in the module. - lo_manage is useful to try + lo_manage is useful to try to avoid creating orphaned LOs in the first place. @@ -55,10 +55,10 @@ - limit + limit - Remove no more than limit large objects per + Remove no more than limit large objects per transaction (default 1000). Since the server acquires a lock per LO removed, removing too many LOs in one transaction risks exceeding . Set the limit to @@ -82,8 +82,8 @@ - - + + Print the vacuumlo version and exit. @@ -92,8 +92,8 @@ - - + + Show help about vacuumlo command line @@ -110,29 +110,29 @@ - hostname + hostname Database server's host. - port + port Database server's port. - username + username User name to connect as. - - + + Never issue a password prompt. If the server requires password @@ -158,7 +158,7 @@ for a password if the server demands password authentication. However, vacuumlo will waste a connection attempt finding out that the server wants a password. - In some cases it is worth typing to avoid the extra connection attempt. @@ -172,10 +172,10 @@ vacuumlo works by the following method: - First, vacuumlo builds a temporary table which contains all + First, vacuumlo builds a temporary table which contains all of the OIDs of the large objects in the selected database. It then scans through all columns in the database that are of type - oid or lo, and removes matching entries from the temporary + oid or lo, and removes matching entries from the temporary table. (Note: Only types with these names are considered; in particular, domains over them are not considered.) The remaining entries in the temporary table identify orphaned LOs. These are removed. diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml index ddcef5fbf5..f9febe916f 100644 --- a/doc/src/sgml/wal.sgml +++ b/doc/src/sgml/wal.sgml @@ -13,7 +13,7 @@ Reliability is an important property of any serious database - system, and PostgreSQL does everything possible to + system, and PostgreSQL does everything possible to guarantee reliable operation. 
One aspect of reliable operation is that all data recorded by a committed transaction should be stored in a nonvolatile area that is safe from power loss, operating @@ -34,21 +34,21 @@ First, there is the operating system's buffer cache, which caches frequently requested disk blocks and combines disk writes. Fortunately, all operating systems give applications a way to force writes from - the buffer cache to disk, and PostgreSQL uses those + the buffer cache to disk, and PostgreSQL uses those features. (See the parameter to adjust how this is done.) Next, there might be a cache in the disk drive controller; this is - particularly common on RAID controller cards. Some of - these caches are write-through, meaning writes are sent + particularly common on RAID controller cards. Some of + these caches are write-through, meaning writes are sent to the drive as soon as they arrive. Others are - write-back, meaning data is sent to the drive at + write-back, meaning data is sent to the drive at some later time. Such caches can be a reliability hazard because the memory in the disk controller cache is volatile, and will lose its contents in a power failure. Better controller cards have - battery-backup units (BBUs), meaning + battery-backup units (BBUs), meaning the card has a battery that maintains power to the cache in case of system power loss. After power is restored the data will be written to the disk drives. @@ -71,22 +71,22 @@ - On Linux, IDE and SATA drives can be queried using + On Linux, IDE and SATA drives can be queried using hdparm -I; write caching is enabled if there is - a * next to Write cache. hdparm -W 0 + a * next to Write cache. hdparm -W 0 can be used to turn off write caching. SCSI drives can be queried - using sdparm. + using sdparm. Use sdparm --get=WCE to check - whether the write cache is enabled and sdparm --clear=WCE + whether the write cache is enabled and sdparm --clear=WCE to disable it. - On FreeBSD, IDE drives can be queried using + On FreeBSD, IDE drives can be queried using atacontrol and write caching turned off using - hw.ata.wc=0 in /boot/loader.conf; + hw.ata.wc=0 in /boot/loader.conf; SCSI drives can be queried using camcontrol identify, and the write cache both queried and changed using sdparm when available. @@ -95,20 +95,20 @@ - On Solaris, the disk write cache is controlled by - format -e. - (The Solaris ZFS file system is safe with disk write-cache + On Solaris, the disk write cache is controlled by + format -e. + (The Solaris ZFS file system is safe with disk write-cache enabled because it issues its own disk cache flush commands.) - On Windows, if wal_sync_method is - open_datasync (the default), write caching can be disabled - by unchecking My Computer\Open\disk drive\Properties\Hardware\Properties\Policies\Enable write caching on the disk. + On Windows, if wal_sync_method is + open_datasync (the default), write caching can be disabled + by unchecking My Computer\Open\disk drive\Properties\Hardware\Properties\Policies\Enable write caching on the disk. Alternatively, set wal_sync_method to - fsync or fsync_writethrough, which prevent + fsync or fsync_writethrough, which prevent write caching. @@ -116,21 +116,21 @@ On macOS, write caching can be prevented by - setting wal_sync_method to fsync_writethrough. + setting wal_sync_method to fsync_writethrough. 
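
 The wal_sync_method changes above can also be made from SQL rather than
 by editing the configuration file; a minimal sketch, valid on the
 platforms just mentioned (assumes superuser privileges; ALTER SYSTEM
 requires version 9.4 or later):

ALTER SYSTEM SET wal_sync_method = 'fsync_writethrough';
SELECT pg_reload_conf();  -- wal_sync_method takes effect on a configuration reload
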
- Recent SATA drives (those following ATAPI-6 or later) - offer a drive cache flush command (FLUSH CACHE EXT), + Recent SATA drives (those following ATAPI-6 or later) + offer a drive cache flush command (FLUSH CACHE EXT), while SCSI drives have long supported a similar command - SYNCHRONIZE CACHE. These commands are not directly - accessible to PostgreSQL, but some file systems - (e.g., ZFS, ext4) can use them to flush + SYNCHRONIZE CACHE. These commands are not directly + accessible to PostgreSQL, but some file systems + (e.g., ZFS, ext4) can use them to flush data to the platters on write-back-enabled drives. Unfortunately, such file systems behave suboptimally when combined with battery-backup unit - (BBU) disk controllers. In such setups, the synchronize + (BBU) disk controllers. In such setups, the synchronize command forces all data from the controller cache to the disks, eliminating much of the benefit of the BBU. You can run the program to see @@ -164,13 +164,13 @@ commonly 512 bytes each. Every physical read or write operation processes a whole sector. When a write request arrives at the drive, it might be for some multiple - of 512 bytes (PostgreSQL typically writes 8192 bytes, or + of 512 bytes (PostgreSQL typically writes 8192 bytes, or 16 sectors, at a time), and the process of writing could fail due to power loss at any time, meaning some of the 512-byte sectors were written while others were not. To guard against such failures, - PostgreSQL periodically writes full page images to - permanent WAL storage before modifying the actual page on - disk. By doing this, during crash recovery PostgreSQL can + PostgreSQL periodically writes full page images to + permanent WAL storage before modifying the actual page on + disk. By doing this, during crash recovery PostgreSQL can restore partially-written pages from WAL. If you have file-system software that prevents partial page writes (e.g., ZFS), you can turn off this page imaging by turning off the - PostgreSQL also protects against some kinds of data corruption + PostgreSQL also protects against some kinds of data corruption on storage devices that may occur because of hardware errors or media failure over time, such as reading/writing garbage data. @@ -195,7 +195,7 @@ Data pages are not currently checksummed by default, though full page images recorded in WAL records will be protected; see initdb + linkend="app-initdb-data-checksums">initdb for details about enabling data page checksums. @@ -224,7 +224,7 @@ - PostgreSQL does not protect against correctable memory errors + PostgreSQL does not protect against correctable memory errors and it is assumed you will operate using RAM that uses industry standard Error Correcting Codes (ECC) or better protection. @@ -267,7 +267,7 @@ causes file system data to be flushed to disk. Fortunately, data flushing during journaling can often be disabled with a file system mount option, e.g. - data=writeback on a Linux ext3 file system. + data=writeback on a Linux ext3 file system. Journaled file systems do improve boot speed after a crash. @@ -313,7 +313,7 @@ - Asynchronous commit is an option that allows transactions + Asynchronous commit is an option that allows transactions to complete more quickly, at the cost that the most recent transactions may be lost if the database should crash. In many applications this is an acceptable trade-off. 
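
 For example, a transaction whose loss on a crash would be tolerable can
 opt in for itself alone (audit_log here is a hypothetical table, used
 only for illustration):

BEGIN;
SET LOCAL synchronous_commit TO off;  -- affects only this transaction
INSERT INTO audit_log VALUES ('page viewed', now());
COMMIT;
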
@@ -321,7 +321,7 @@ As described in the previous section, transaction commit is normally - synchronous: the server waits for the transaction's + synchronous: the server waits for the transaction's WAL records to be flushed to permanent storage before returning a success indication to the client. The client is therefore guaranteed that a transaction reported to be committed will @@ -374,22 +374,22 @@ - Certain utility commands, for instance DROP TABLE, are + Certain utility commands, for instance DROP TABLE, are forced to commit synchronously regardless of the setting of synchronous_commit. This is to ensure consistency between the server's file system and the logical state of the database. The commands supporting two-phase commit, such as PREPARE - TRANSACTION, are also always synchronous. + TRANSACTION, are also always synchronous. If the database crashes during the risk window between an asynchronous commit and the writing of the transaction's WAL records, - then changes made during that transaction will be lost. + then changes made during that transaction will be lost. The duration of the risk window is limited because a background process (the WAL - writer) flushes unwritten WAL records to disk + writer) flushes unwritten WAL records to disk every milliseconds. The actual maximum duration of the risk window is three times wal_writer_delay because the WAL writer is @@ -408,10 +408,10 @@ = off. fsync is a server-wide setting that will alter the behavior of all transactions. It disables - all logic within PostgreSQL that attempts to synchronize + all logic within PostgreSQL that attempts to synchronize writes to different portions of the database, and therefore a system crash (that is, a hardware or operating system crash, not a failure of - PostgreSQL itself) could result in arbitrarily bad + PostgreSQL itself) could result in arbitrarily bad corruption of the database state. In many scenarios, asynchronous commit provides most of the performance improvement that could be obtained by turning off fsync, but without the risk @@ -437,14 +437,14 @@ <acronym>WAL</acronym> Configuration - There are several WAL-related configuration parameters that + There are several WAL-related configuration parameters that affect database performance. This section explains their use. Consult for general information about setting server configuration parameters. - Checkpointscheckpoint + Checkpointscheckpoint are points in the sequence of transactions at which it is guaranteed that the heap and index data files have been updated with all information written before that checkpoint. At checkpoint time, all @@ -477,7 +477,7 @@ whichever comes first. The default settings are 5 minutes and 1 GB, respectively. If no WAL has been written since the previous checkpoint, new checkpoints - will be skipped even if checkpoint_timeout has passed. + will be skipped even if checkpoint_timeout has passed. (If WAL archiving is being used and you want to put a lower limit on how often files are archived in order to bound potential data loss, you should adjust the parameter rather than the @@ -509,13 +509,13 @@ don't happen too often. As a simple sanity check on your checkpointing parameters, you can set the parameter. If checkpoints happen closer together than - checkpoint_warning seconds, + checkpoint_warning seconds, a message will be output to the server log recommending increasing max_wal_size. 
Occasional appearance of such a message is not cause for alarm, but if it appears often then the checkpoint control parameters should be increased. Bulk operations such
- as large COPY transfers might cause a number of such warnings
- to appear if you have not set max_wal_size high
+ as large COPY transfers might cause a number of such warnings
+ to appear if you have not set max_wal_size high
 enough.
@@ -530,7 +530,7 @@
 checkpoint_timeout seconds have elapsed, or before
 max_wal_size is exceeded, whichever is sooner.
 With the default value of 0.5,
- PostgreSQL can be expected to complete each checkpoint
+ PostgreSQL can be expected to complete each checkpoint
 in about half the time before the next checkpoint starts. On a system
 that's very close to maximum I/O throughput during normal operation,
 you might want to increase checkpoint_completion_target
@@ -550,19 +550,19 @@
 allows forcing the OS to flush pages written by the checkpoint to disk
 after a configurable number of bytes. Otherwise, these
 pages may be kept in the OS's page cache, inducing a stall when
- fsync is issued at the end of a checkpoint. This setting will
+ fsync is issued at the end of a checkpoint. This setting will
 often help to reduce transaction latency, but it also can have an adverse
 effect on performance; particularly for workloads that are bigger than
 shared_buffers, but smaller than the OS's page cache.
- The number of WAL segment files in pg_wal directory depends on
- min_wal_size, max_wal_size and
+ The number of WAL segment files in the pg_wal directory depends on
+ min_wal_size, max_wal_size and
 the amount of WAL generated in previous checkpoint cycles. When old log
 segment files are no longer needed, they are removed or recycled (that is,
 renamed to become future segments in the numbered sequence). If, due to a
- short-term peak of log output rate, max_wal_size is
+ short-term peak of log output rate, max_wal_size is
 exceeded, the unneeded segment files will be removed until the system
 gets back under this limit. Below that limit, the system recycles enough
 WAL files to cover the estimated need until the next checkpoint, and
@@ -570,7 +570,7 @@
 of WAL files used in previous checkpoint cycles. The moving average
 is increased immediately if the actual usage exceeds the estimate, so it
 accommodates peak usage rather than average usage to some extent.
- min_wal_size puts a minimum on the amount of WAL files
+ min_wal_size puts a minimum on the amount of WAL files
 recycled for future usage; that much WAL is always recycled for future use,
 even if the system is idle and the WAL usage estimate suggests that little
 WAL is needed.
@@ -582,7 +582,7 @@
 kept at all times. Also, if WAL archiving is used, old segments cannot be
 removed or recycled until they are archived. If WAL archiving cannot keep up
 with the pace that WAL is generated, or if archive_command
- fails repeatedly, old WAL files will accumulate in pg_wal
+ fails repeatedly, old WAL files will accumulate in pg_wal
 until the situation is resolved. A slow or failed standby server
 that uses a replication slot will have the same effect (see
 ).
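
 A concrete sketch of adjusting the checkpoint parameters discussed above
 from SQL (the values are purely illustrative, not recommendations):

ALTER SYSTEM SET max_wal_size = '4GB';
ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
SELECT pg_reload_conf();  -- all three parameters take effect on reload
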
@@ -590,21 +590,21 @@ In archive recovery or standby mode, the server periodically performs - restartpoints,restartpoint + restartpoints,restartpoint which are similar to checkpoints in normal operation: the server forces - all its state to disk, updates the pg_control file to + all its state to disk, updates the pg_control file to indicate that the already-processed WAL data need not be scanned again, - and then recycles any old log segment files in the pg_wal + and then recycles any old log segment files in the pg_wal directory. Restartpoints can't be performed more frequently than checkpoints in the master because restartpoints can only be performed at checkpoint records. A restartpoint is triggered when a checkpoint record is reached if at - least checkpoint_timeout seconds have passed since the last + least checkpoint_timeout seconds have passed since the last restartpoint, or if WAL size is about to exceed - max_wal_size. However, because of limitations on when a - restartpoint can be performed, max_wal_size is often exceeded + max_wal_size. However, because of limitations on when a + restartpoint can be performed, max_wal_size is often exceeded during recovery, by up to one checkpoint cycle's worth of WAL. - (max_wal_size is never a hard limit anyway, so you should + (max_wal_size is never a hard limit anyway, so you should always leave plenty of headroom to avoid running out of disk space.) @@ -631,7 +631,7 @@ one should increase the number of WAL buffers by modifying the parameter. When is set and the system is very busy, - setting wal_buffers higher will help smooth response times + setting wal_buffers higher will help smooth response times during the period immediately following each checkpoint. @@ -686,7 +686,7 @@ will consist only of sessions that reach the point where they need to flush their commit records during the window in which the previous flush operation (if any) is occurring. At higher client counts a - gangway effect tends to occur, so that the effects of group + gangway effect tends to occur, so that the effects of group commit become significant even when commit_delay is zero, and thus explicitly setting commit_delay tends to help less. Setting commit_delay can only help @@ -702,7 +702,7 @@ PostgreSQL will ask the kernel to force WAL updates out to disk. All the options should be the same in terms of reliability, with - the exception of fsync_writethrough, which can sometimes + the exception of fsync_writethrough, which can sometimes force a flush of the disk cache even when other options do not do so. However, it's quite platform-specific which one will be the fastest. You can test the speeds of different options using the LSN) that is a byte offset into the logs, increasing monotonically with each new record. LSN values are returned as the datatype - pg_lsn. Values can be + pg_lsn. Values can be compared to calculate the volume of WAL data that separates them, so they are used to measure the progress of replication and recovery. @@ -752,9 +752,9 @@ WAL logs are stored in the directory pg_wal under the data directory, as a set of segment files, normally each 16 MB in size (but the size can be changed - by altering the initdb option). Each segment is divided into pages, normally 8 kB each (this size can be changed via the - configure option). The log record headers are described in access/xlogrecord.h; the record content is dependent on the type of event that is being logged. 
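
 For example, the volume of WAL separating two positions can be computed
 directly from SQL (a sketch; the functions carry the names
 pg_current_wal_lsn and pg_wal_lsn_diff in version 10 and later):

SELECT pg_current_wal_lsn();                          -- current WAL insert position
SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), '0/0');  -- bytes of WAL written since LSN 0/0
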
Segment files are given ever-increasing numbers as names, starting at @@ -774,7 +774,7 @@ The aim of WAL is to ensure that the log is written before database records are altered, but this can be subverted by - disk drivesdisk drive that falsely report a + disk drivesdisk drive that falsely report a successful write to the kernel, when in fact they have only cached the data and not yet stored it on the disk. A power failure in such a situation might lead to diff --git a/doc/src/sgml/xaggr.sgml b/doc/src/sgml/xaggr.sgml index 9e6a6648dc..f99dbb6510 100644 --- a/doc/src/sgml/xaggr.sgml +++ b/doc/src/sgml/xaggr.sgml @@ -41,10 +41,10 @@ If we define an aggregate that does not use a final function, we have an aggregate that computes a running function of - the column values from each row. sum is an - example of this kind of aggregate. sum starts at + the column values from each row. sum is an + example of this kind of aggregate. sum starts at zero and always adds the current row's value to - its running total. For example, if we want to make a sum + its running total. For example, if we want to make a sum aggregate to work on a data type for complex numbers, we only need the addition function for that data type. The aggregate definition would be: @@ -69,7 +69,7 @@ SELECT sum(a) FROM test_complex; (Notice that we are relying on function overloading: there is more than - one aggregate named sum, but + one aggregate named sum, but PostgreSQL can figure out which kind of sum applies to a column of type complex.) @@ -83,17 +83,17 @@ SELECT sum(a) FROM test_complex; value is null. Ordinarily this would mean that the sfunc would need to check for a null state-value input. But for sum and some other simple aggregates like - max and min, + max and min, it is sufficient to insert the first nonnull input value into the state variable and then start applying the transition function at the second nonnull input value. PostgreSQL will do that automatically if the initial state value is null and - the transition function is marked strict (i.e., not to be called + the transition function is marked strict (i.e., not to be called for null inputs). - Another bit of default behavior for a strict transition function + Another bit of default behavior for a strict transition function is that the previous state value is retained unchanged whenever a null input value is encountered. Thus, null values are ignored. If you need some other behavior for null inputs, do not declare your @@ -102,7 +102,7 @@ SELECT sum(a) FROM test_complex; - avg (average) is a more complex example of an aggregate. + avg (average) is a more complex example of an aggregate. It requires two pieces of running state: the sum of the inputs and the count of the number of inputs. The final result is obtained by dividing @@ -124,16 +124,16 @@ CREATE AGGREGATE avg (float8) - float8_accum requires a three-element array, not just + float8_accum requires a three-element array, not just two elements, because it accumulates the sum of squares as well as the sum and count of the inputs. This is so that it can be used for - some other aggregates as well as avg. + some other aggregates as well as avg. - Aggregate function calls in SQL allow DISTINCT - and ORDER BY options that control which rows are fed + Aggregate function calls in SQL allow DISTINCT + and ORDER BY options that control which rows are fed to the aggregate's transition function and in what order. 
These options are implemented behind the scenes and are not the concern of the aggregate's support functions. @@ -159,16 +159,16 @@ CREATE AGGREGATE avg (float8) Aggregate functions can optionally support moving-aggregate - mode, which allows substantially faster execution of aggregate + mode, which allows substantially faster execution of aggregate functions within windows with moving frame starting points. (See and for information about use of aggregate functions as window functions.) - The basic idea is that in addition to a normal forward + The basic idea is that in addition to a normal forward transition function, the aggregate provides an inverse - transition function, which allows rows to be removed from the + transition function, which allows rows to be removed from the aggregate's running state value when they exit the window frame. - For example a sum aggregate, which uses addition as the + For example a sum aggregate, which uses addition as the forward transition function, would use subtraction as the inverse transition function. Without an inverse transition function, the window function mechanism must recalculate the aggregate from scratch each time @@ -193,7 +193,7 @@ CREATE AGGREGATE avg (float8) - As an example, we could extend the sum aggregate given above + As an example, we could extend the sum aggregate given above to support moving-aggregate mode like this: @@ -209,10 +209,10 @@ CREATE AGGREGATE sum (complex) ); - The parameters whose names begin with m define the + The parameters whose names begin with m define the moving-aggregate implementation. Except for the inverse transition - function minvfunc, they correspond to the plain-aggregate - parameters without m. + function minvfunc, they correspond to the plain-aggregate + parameters without m. @@ -224,10 +224,10 @@ CREATE AGGREGATE sum (complex) current frame starting position. This convention allows moving-aggregate mode to be used in situations where there are some infrequent cases that are impractical to reverse out of the running state value. The inverse - transition function can punt on these cases, and yet still come + transition function can punt on these cases, and yet still come out ahead so long as it can work for most cases. As an example, an aggregate working with floating-point numbers might choose to punt when - a NaN (not a number) input has to be removed from the running + a NaN (not a number) input has to be removed from the running state value. @@ -238,8 +238,8 @@ CREATE AGGREGATE sum (complex) in results depending on whether the moving-aggregate mode is used. An example of an aggregate for which adding an inverse transition function seems easy at first, yet where this requirement cannot be met - is sum over float4 or float8 inputs. A - naive declaration of sum(float8) could be + is sum over float4 or float8 inputs. A + naive declaration of sum(float8) could be CREATE AGGREGATE unsafe_sum (float8) @@ -262,13 +262,13 @@ FROM (VALUES (1, 1.0e20::float8), (2, 1.0::float8)) AS v (n,x); - This query returns 0 as its second result, rather than the - expected answer of 1. The cause is the limited precision of - floating-point values: adding 1 to 1e20 results - in 1e20 again, and so subtracting 1e20 from that - yields 0, not 1. Note that this is a limitation + This query returns 0 as its second result, rather than the + expected answer of 1. The cause is the limited precision of + floating-point values: adding 1 to 1e20 results + in 1e20 again, and so subtracting 1e20 from that + yields 0, not 1. 
Note that this is a limitation of floating-point arithmetic in general, not a limitation - of PostgreSQL. + of PostgreSQL. @@ -309,7 +309,7 @@ CREATE AGGREGATE array_accum (anyelement) Here, the actual state type for any given aggregate call is the array type having the actual input type as elements. The behavior of the aggregate is to concatenate all the inputs into an array of that type. - (Note: the built-in aggregate array_agg provides similar + (Note: the built-in aggregate array_agg provides similar functionality, with better performance than this definition would have.) @@ -344,19 +344,19 @@ SELECT attrelid::regclass, array_accum(atttypid::regtype) polymorphic state type, as in the above example. This is necessary because otherwise the final function cannot be declared sensibly: it would need to have a polymorphic result type but no polymorphic argument - type, which CREATE FUNCTION will reject on the grounds that + type, which CREATE FUNCTION will reject on the grounds that the result type cannot be deduced from a call. But sometimes it is inconvenient to use a polymorphic state type. The most common case is where the aggregate support functions are to be written in C and the - state type should be declared as internal because there is + state type should be declared as internal because there is no SQL-level equivalent for it. To address this case, it is possible to - declare the final function as taking extra dummy arguments + declare the final function as taking extra dummy arguments that match the input arguments of the aggregate. Such dummy arguments are always passed as null values since no specific value is available when the final function is called. Their only use is to allow a polymorphic final function's result type to be connected to the aggregate's input type(s). For example, the definition of the built-in - aggregate array_agg is equivalent to + aggregate array_agg is equivalent to CREATE FUNCTION array_agg_transfn(internal, anynonarray) @@ -373,30 +373,30 @@ CREATE AGGREGATE array_agg (anynonarray) ); - Here, the finalfunc_extra option specifies that the final + Here, the finalfunc_extra option specifies that the final function receives, in addition to the state value, extra dummy argument(s) corresponding to the aggregate's input argument(s). - The extra anynonarray argument allows the declaration - of array_agg_finalfn to be valid. + The extra anynonarray argument allows the declaration + of array_agg_finalfn to be valid. An aggregate function can be made to accept a varying number of arguments - by declaring its last argument as a VARIADIC array, in much + by declaring its last argument as a VARIADIC array, in much the same fashion as for regular functions; see . The aggregate's transition function(s) must have the same array type as their last argument. The - transition function(s) typically would also be marked VARIADIC, + transition function(s) typically would also be marked VARIADIC, but this is not strictly required. Variadic aggregates are easily misused in connection with - the ORDER BY option (see ), + the ORDER BY option (see ), since the parser cannot tell whether the wrong number of actual arguments have been given in such a combination. Keep in mind that everything to - the right of ORDER BY is a sort key, not an argument to the + the right of ORDER BY is a sort key, not an argument to the aggregate. For example, in SELECT myaggregate(a ORDER BY a, b, c) FROM ... @@ -406,7 +406,7 @@ SELECT myaggregate(a ORDER BY a, b, c) FROM ... 
SELECT myaggregate(a, b, c ORDER BY a) FROM ... - If myaggregate is variadic, both these calls could be + If myaggregate is variadic, both these calls could be perfectly valid. @@ -427,19 +427,19 @@ SELECT myaggregate(a, b, c ORDER BY a) FROM ... - The aggregates we have been describing so far are normal - aggregates. PostgreSQL also - supports ordered-set aggregates, which differ from + The aggregates we have been describing so far are normal + aggregates. PostgreSQL also + supports ordered-set aggregates, which differ from normal aggregates in two key ways. First, in addition to ordinary aggregated arguments that are evaluated once per input row, an - ordered-set aggregate can have direct arguments that are + ordered-set aggregate can have direct arguments that are evaluated only once per aggregation operation. Second, the syntax for the ordinary aggregated arguments specifies a sort ordering for them explicitly. An ordered-set aggregate is usually used to implement a computation that depends on a specific row ordering, for instance rank or percentile, so that the sort ordering is a required aspect of any call. For example, the built-in - definition of percentile_disc is equivalent to: + definition of percentile_disc is equivalent to: CREATE FUNCTION ordered_set_transition(internal, anyelement) @@ -456,7 +456,7 @@ CREATE AGGREGATE percentile_disc (float8 ORDER BY anyelement) ); - This aggregate takes a float8 direct argument (the percentile + This aggregate takes a float8 direct argument (the percentile fraction) and an aggregated input that can be of any sortable data type. It could be used to obtain a median household income like this: @@ -467,31 +467,31 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; 50489 - Here, 0.5 is a direct argument; it would make no sense + Here, 0.5 is a direct argument; it would make no sense for the percentile fraction to be a value varying across rows. Unlike the case for normal aggregates, the sorting of input rows for - an ordered-set aggregate is not done behind the scenes, + an ordered-set aggregate is not done behind the scenes, but is the responsibility of the aggregate's support functions. The typical implementation approach is to keep a reference to - a tuplesort object in the aggregate's state value, feed the + a tuplesort object in the aggregate's state value, feed the incoming rows into that object, and then complete the sorting and read out the data in the final function. This design allows the final function to perform special operations such as injecting - additional hypothetical rows into the data to be sorted. + additional hypothetical rows into the data to be sorted. While normal aggregates can often be implemented with support functions written in PL/pgSQL or another PL language, ordered-set aggregates generally have to be written in C, since their state values aren't definable as any SQL data type. (In the above example, notice that the state value is declared as - type internal — this is typical.) + type internal — this is typical.) Also, because the final function performs the sort, it is not possible to continue adding input rows by executing the transition function again - later. This means the final function is not READ_ONLY; + later. This means the final function is not READ_ONLY; it must be declared in - as READ_WRITE, or as SHARABLE if it's + as READ_WRITE, or as SHARABLE if it's possible for additional final-function calls to make use of the already-sorted state. 
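
 The built-in hypothetical-set aggregates rely on exactly this ability to
 inject rows; reusing the households table from the example above, one can
 ask what rank a hypothetical income of 50000 would hold:

SELECT rank(50000) WITHIN GROUP (ORDER BY income) FROM households;
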
@@ -503,9 +503,9 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; same definition as for normal aggregates, but note that the direct arguments (if any) are not provided. The final function receives the last state value, the values of the direct arguments if any, - and (if finalfunc_extra is specified) null values + and (if finalfunc_extra is specified) null values corresponding to the aggregated input(s). As with normal - aggregates, finalfunc_extra is only really useful if the + aggregates, finalfunc_extra is only really useful if the aggregate is polymorphic; then the extra dummy argument(s) are needed to connect the final function's result type to the aggregate's input type(s). @@ -528,7 +528,7 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; Optionally, an aggregate function can support partial - aggregation. The idea of partial aggregation is to run the aggregate's + aggregation. The idea of partial aggregation is to run the aggregate's state transition function over different subsets of the input data independently, and then to combine the state values resulting from those subsets to produce the same state value that would have resulted from @@ -543,7 +543,7 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; To support partial aggregation, the aggregate definition must provide - a combine function, which takes two values of the + a combine function, which takes two values of the aggregate's state type (representing the results of aggregating over two subsets of the input rows) and produces a new value of the state type, representing what the state would have been after aggregating over the @@ -554,10 +554,10 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; - As simple examples, MAX and MIN aggregates can be + As simple examples, MAX and MIN aggregates can be made to support partial aggregation by specifying the combine function as the same greater-of-two or lesser-of-two comparison function that is used - as their transition function. SUM aggregates just need an + as their transition function. SUM aggregates just need an addition function as combine function. (Again, this is the same as their transition function, unless the state value is wider than the input data type.) @@ -568,26 +568,26 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; happens to take a value of the state type, not of the underlying input type, as its second argument. In particular, the rules for dealing with null values and strict functions are similar. Also, if the aggregate - definition specifies a non-null initcond, keep in mind that + definition specifies a non-null initcond, keep in mind that that will be used not only as the initial state for each partial aggregation run, but also as the initial state for the combine function, which will be called to combine each partial result into that state. - If the aggregate's state type is declared as internal, it is + If the aggregate's state type is declared as internal, it is the combine function's responsibility that its result is allocated in the correct memory context for aggregate state values. This means in - particular that when the first input is NULL it's invalid + particular that when the first input is NULL it's invalid to simply return the second input, as that value will be in the wrong context and will not have sufficient lifespan. 
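
 As a concrete sketch of declaring a combine function (my_sum is a
 hypothetical aggregate; unlike the built-in sum(bigint), this simplistic
 version keeps its state in int8 and so can overflow):

CREATE AGGREGATE my_sum (int8)
(
    sfunc = int8pl,        -- transition: add the input to the state
    stype = int8,
    combinefunc = int8pl,  -- combine: add two partial states
    parallel = safe
);
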
- When the aggregate's state type is declared as internal, it is + When the aggregate's state type is declared as internal, it is usually also appropriate for the aggregate definition to provide a - serialization function and a deserialization - function, which allow such a state value to be copied from one process + serialization function and a deserialization + function, which allow such a state value to be copied from one process to another. Without these functions, parallel aggregation cannot be performed, and future applications such as local/remote aggregation will probably not work either. @@ -595,11 +595,11 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; A serialization function must take a single argument of - type internal and return a result of type bytea, which + type internal and return a result of type bytea, which represents the state value packaged up into a flat blob of bytes. Conversely, a deserialization function reverses that conversion. It must - take two arguments of types bytea and internal, and - return a result of type internal. (The second argument is unused + take two arguments of types bytea and internal, and + return a result of type internal. (The second argument is unused and is always zero, but it is required for type-safety reasons.) The result of the deserialization function should simply be allocated in the current memory context, as unlike the combine function's result, it is not @@ -608,7 +608,7 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; Worth noting also is that for an aggregate to be executed in parallel, - the aggregate itself must be marked PARALLEL SAFE. The + the aggregate itself must be marked PARALLEL SAFE. The parallel-safety markings on its support functions are not consulted. @@ -625,14 +625,14 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; A function written in C can detect that it is being called as an aggregate support function by calling - AggCheckCallContext, for example: + AggCheckCallContext, for example: if (AggCheckCallContext(fcinfo, NULL)) One reason for checking this is that when it is true, the first input must be a temporary state value and can therefore safely be modified in-place rather than allocating a new copy. - See int8inc() for an example. + See int8inc() for an example. (While aggregate transition functions are always allowed to modify the transition value in-place, aggregate final functions are generally discouraged from doing so; if they do so, the behavior must be declared @@ -641,14 +641,14 @@ if (AggCheckCallContext(fcinfo, NULL)) - The second argument of AggCheckCallContext can be used to + The second argument of AggCheckCallContext can be used to retrieve the memory context in which aggregate state values are being kept. - This is useful for transition functions that wish to use expanded + This is useful for transition functions that wish to use expanded objects (see ) as their state values. On first call, the transition function should return an expanded object whose memory context is a child of the aggregate state context, and then keep returning the same expanded object on subsequent calls. See - array_append() for an example. (array_append() + array_append() for an example. (array_append() is not the transition function of any built-in aggregate, but it is written to behave efficiently when used as transition function of a custom aggregate.) 
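
 In C, the usual idiom for a transition function that keeps a
 pass-by-reference state looks roughly like this (a minimal sketch, not
 taken from the PostgreSQL sources; MyState is a hypothetical struct, and
 the code is assumed to run inside a function declared with
 PG_FUNCTION_ARGS):

MemoryContext aggcontext;
MyState    *state;

if (!AggCheckCallContext(fcinfo, &aggcontext))
    elog(ERROR, "transition function called in non-aggregate context");

if (PG_ARGISNULL(0))
    /* first call: allocate the state in the long-lived aggregate context */
    state = (MyState *) MemoryContextAllocZero(aggcontext, sizeof(MyState));
else
    state = (MyState *) PG_GETARG_POINTER(0);
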
@@ -656,12 +656,12 @@ if (AggCheckCallContext(fcinfo, NULL)) Another support routine available to aggregate functions written in C - is AggGetAggref, which returns the Aggref + is AggGetAggref, which returns the Aggref parse node that defines the aggregate call. This is mainly useful for ordered-set aggregates, which can inspect the substructure of - the Aggref node to find out what sort ordering they are + the Aggref node to find out what sort ordering they are supposed to implement. Examples can be found - in orderedsetaggs.c in the PostgreSQL + in orderedsetaggs.c in the PostgreSQL source code. diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index 7475288354..b6f33037ff 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -22,7 +22,7 @@ procedural language functions (functions written in, for - example, PL/pgSQL or PL/Tcl) + example, PL/pgSQL or PL/Tcl) () @@ -66,7 +66,7 @@ page of the command to understand the examples better. Some examples from this chapter can be found in funcs.sql and - funcs.c in the src/tutorial + funcs.c in the src/tutorial directory in the PostgreSQL source distribution. @@ -87,7 +87,7 @@ In the simple (non-set) case, the first row of the last query's result will be returned. (Bear in mind that the first row of a multirow - result is not well-defined unless you use ORDER BY.) + result is not well-defined unless you use ORDER BY.) If the last query happens to return no rows at all, the null value will be returned. @@ -95,8 +95,8 @@ Alternatively, an SQL function can be declared to return a set (that is, multiple rows) by specifying the function's return type as SETOF - sometype, or equivalently by declaring it as - RETURNS TABLE(columns). In this case + sometype, or equivalently by declaring it as + RETURNS TABLE(columns). In this case all rows of the last query's result are returned. Further details appear below. @@ -105,9 +105,9 @@ The body of an SQL function must be a list of SQL statements separated by semicolons. A semicolon after the last statement is optional. Unless the function is declared to return - void, the last statement must be a SELECT, - or an INSERT, UPDATE, or DELETE - that has a RETURNING clause. + void, the last statement must be a SELECT, + or an INSERT, UPDATE, or DELETE + that has a RETURNING clause. @@ -117,16 +117,16 @@ modification queries (INSERT, UPDATE, and DELETE), as well as other SQL commands. (You cannot use transaction control commands, e.g. - COMMIT, SAVEPOINT, and some utility - commands, e.g. VACUUM, in SQL functions.) + COMMIT, SAVEPOINT, and some utility + commands, e.g. VACUUM, in SQL functions.) However, the final command - must be a SELECT or have a RETURNING + must be a SELECT or have a RETURNING clause that returns whatever is specified as the function's return type. Alternatively, if you want to define a SQL function that performs actions but has no - useful value to return, you can define it as returning void. + useful value to return, you can define it as returning void. For example, this function removes rows with negative salaries from - the emp table: + the emp table: CREATE FUNCTION clean_emp() RETURNS void AS ' @@ -147,13 +147,13 @@ SELECT clean_emp(); The entire body of a SQL function is parsed before any of it is executed. While a SQL function can contain commands that alter - the system catalogs (e.g., CREATE TABLE), the effects + the system catalogs (e.g., CREATE TABLE), the effects of such commands will not be visible during parse analysis of later commands in the function. 
Thus, for example, CREATE TABLE foo (...); INSERT INTO foo VALUES(...); will not work as desired if packaged up into a single SQL function, - since foo won't exist yet when the INSERT - command is parsed. It's recommended to use PL/pgSQL + since foo won't exist yet when the INSERT + command is parsed. It's recommended to use PL/pgSQL instead of a SQL function in this type of situation. @@ -164,8 +164,8 @@ SELECT clean_emp(); most convenient to use dollar quoting (see ) for the string constant. If you choose to use regular single-quoted string constant syntax, - you must double single quote marks (') and backslashes - (\) (assuming escape string syntax) in the body of + you must double single quote marks (') and backslashes + (\) (assuming escape string syntax) in the body of the function (see ). @@ -189,7 +189,7 @@ SELECT clean_emp(); is the same as any column name in the current SQL command within the function, the column name will take precedence. To override this, qualify the argument name with the name of the function itself, that is - function_name.argument_name. + function_name.argument_name. (If this would conflict with a qualified column name, again the column name wins. You can avoid the ambiguity by choosing a different alias for the table within the SQL command.) @@ -197,15 +197,15 @@ SELECT clean_emp(); In the older numeric approach, arguments are referenced using the syntax - $n: $1 refers to the first input - argument, $2 to the second, and so on. This will work + $n: $1 refers to the first input + argument, $2 to the second, and so on. This will work whether or not the particular argument was declared with a name. If an argument is of a composite type, then the dot notation, - e.g., argname.fieldname or - $1.fieldname, can be used to access attributes of the + e.g., argname.fieldname or + $1.fieldname, can be used to access attributes of the argument. Again, you might need to qualify the argument's name with the function name to make the form with an argument name unambiguous. @@ -226,7 +226,7 @@ INSERT INTO $1 VALUES (42); The ability to use names to reference SQL function arguments was added in PostgreSQL 9.2. Functions to be used in - older servers must use the $n notation. + older servers must use the $n notation. @@ -258,9 +258,9 @@ SELECT one(); Notice that we defined a column alias within the function body for the result of the function - (with the name result), but this column alias is not visible - outside the function. Hence, the result is labeled one - instead of result. + (with the name result), but this column alias is not visible + outside the function. Hence, the result is labeled one + instead of result. @@ -319,11 +319,11 @@ SELECT tf1(17, 100.0); - In this example, we chose the name accountno for the first + In this example, we chose the name accountno for the first argument, but this is the same as the name of a column in the - bank table. Within the UPDATE command, - accountno refers to the column bank.accountno, - so tf1.accountno must be used to refer to the argument. + bank table. Within the UPDATE command, + accountno refers to the column bank.accountno, + so tf1.accountno must be used to refer to the argument. We could of course avoid this by using a different name for the argument. @@ -342,7 +342,7 @@ $$ LANGUAGE SQL; which adjusts the balance and returns the new balance. 
- The same thing could be done in one command using RETURNING: + The same thing could be done in one command using RETURNING: CREATE FUNCTION tf1 (accountno integer, debit numeric) RETURNS numeric AS $$ @@ -394,8 +394,8 @@ SELECT name, double_salary(emp.*) AS dream Notice the use of the syntax $1.salary to select one field of the argument row value. Also notice - how the calling SELECT command - uses table_name.* to select + how the calling SELECT command + uses table_name.* to select the entire current row of a table as a composite value. The table row can alternatively be referenced using just the table name, like this: @@ -411,7 +411,7 @@ SELECT name, double_salary(emp) AS dream Sometimes it is handy to construct a composite argument value - on-the-fly. This can be done with the ROW construct. + on-the-fly. This can be done with the ROW construct. For example, we could adjust the data being passed to the function: SELECT name, double_salary(ROW(name, salary*1.1, age, cubicle)) AS dream @@ -473,7 +473,7 @@ CREATE FUNCTION new_emp() RETURNS emp AS $$ $$ LANGUAGE SQL; - Here we wrote a SELECT that returns just a single + Here we wrote a SELECT that returns just a single column of the correct composite type. This isn't really better in this situation, but it is a handy alternative in some cases — for example, if we need to compute the result by calling @@ -564,7 +564,7 @@ SELECT getname(new_emp()); - <acronym>SQL</> Functions with Output Parameters + <acronym>SQL</acronym> Functions with Output Parameters function @@ -573,7 +573,7 @@ SELECT getname(new_emp()); An alternative way of describing a function's results is to define it - with output parameters, as in this example: + with output parameters, as in this example: CREATE FUNCTION add_em (IN x int, IN y int, OUT sum int) @@ -587,7 +587,7 @@ SELECT add_em(3,7); (1 row) - This is not essentially different from the version of add_em + This is not essentially different from the version of add_em shown in . The real value of output parameters is that they provide a convenient way of defining functions that return several columns. For example, @@ -639,18 +639,18 @@ DROP FUNCTION sum_n_product (int, int); - Parameters can be marked as IN (the default), - OUT, INOUT, or VARIADIC. - An INOUT + Parameters can be marked as IN (the default), + OUT, INOUT, or VARIADIC. + An INOUT parameter serves as both an input parameter (part of the calling argument list) and an output parameter (part of the result record type). - VARIADIC parameters are input parameters, but are treated + VARIADIC parameters are input parameters, but are treated specially as described next. - <acronym>SQL</> Functions with Variable Numbers of Arguments + <acronym>SQL</acronym> Functions with Variable Numbers of Arguments function @@ -663,10 +663,10 @@ DROP FUNCTION sum_n_product (int, int); SQL functions can be declared to accept - variable numbers of arguments, so long as all the optional + variable numbers of arguments, so long as all the optional arguments are of the same data type. The optional arguments will be passed to the function as an array. The function is declared by - marking the last parameter as VARIADIC; this parameter + marking the last parameter as VARIADIC; this parameter must be declared as being of an array type. 
For example: @@ -682,7 +682,7 @@ SELECT mleast(10, -1, 5, 4.4); Effectively, all the actual arguments at or beyond the - VARIADIC position are gathered up into a one-dimensional + VARIADIC position are gathered up into a one-dimensional array, as if you had written @@ -691,7 +691,7 @@ SELECT mleast(ARRAY[10, -1, 5, 4.4]); -- doesn't work You can't actually write that, though — or at least, it will not match this function definition. A parameter marked - VARIADIC matches one or more occurrences of its element + VARIADIC matches one or more occurrences of its element type, not of its own type. @@ -699,7 +699,7 @@ SELECT mleast(ARRAY[10, -1, 5, 4.4]); -- doesn't work Sometimes it is useful to be able to pass an already-constructed array to a variadic function; this is particularly handy when one variadic function wants to pass on its array parameter to another one. You can - do that by specifying VARIADIC in the call: + do that by specifying VARIADIC in the call: SELECT mleast(VARIADIC ARRAY[10, -1, 5, 4.4]); @@ -707,21 +707,21 @@ SELECT mleast(VARIADIC ARRAY[10, -1, 5, 4.4]); This prevents expansion of the function's variadic parameter into its element type, thereby allowing the array argument value to match - normally. VARIADIC can only be attached to the last + normally. VARIADIC can only be attached to the last actual argument of a function call. - Specifying VARIADIC in the call is also the only way to + Specifying VARIADIC in the call is also the only way to pass an empty array to a variadic function, for example: SELECT mleast(VARIADIC ARRAY[]::numeric[]); - Simply writing SELECT mleast() does not work because a + Simply writing SELECT mleast() does not work because a variadic parameter must match at least one actual argument. - (You could define a second function also named mleast, + (You could define a second function also named mleast, with no parameters, if you wanted to allow such calls.) @@ -730,7 +730,7 @@ SELECT mleast(VARIADIC ARRAY[]::numeric[]); treated as not having any names of their own. This means it is not possible to call a variadic function using named arguments (), except when you specify - VARIADIC. For example, this will work: + VARIADIC. For example, this will work: SELECT mleast(VARIADIC arr => ARRAY[10, -1, 5, 4.4]); @@ -746,7 +746,7 @@ SELECT mleast(arr => ARRAY[10, -1, 5, 4.4]); - <acronym>SQL</> Functions with Default Values for Arguments + <acronym>SQL</acronym> Functions with Default Values for Arguments function @@ -804,7 +804,7 @@ ERROR: function foo() does not exist <acronym>SQL</acronym> Functions as Table Sources - All SQL functions can be used in the FROM clause of a query, + All SQL functions can be used in the FROM clause of a query, but it is particularly useful for functions returning composite types. If the function is defined to return a base type, the table function produces a one-column table. If the function is defined to return @@ -839,7 +839,7 @@ SELECT *, upper(fooname) FROM getfoo(1) AS t1; Note that we only got one row out of the function. This is because - we did not use SETOF. That is described in the next section. + we did not use SETOF. That is described in the next section. @@ -853,16 +853,16 @@ SELECT *, upper(fooname) FROM getfoo(1) AS t1; When an SQL function is declared as returning SETOF - sometype, the function's final + sometype, the function's final query is executed to completion, and each row it outputs is returned as an element of the result set. 
- This feature is normally used when calling the function in the FROM + This feature is normally used when calling the function in the FROM clause. In this case each row returned by the function becomes a row of the table seen by the query. For example, assume that - table foo has the same contents as above, and we say: + table foo has the same contents as above, and we say: CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$ @@ -906,17 +906,17 @@ SELECT * FROM sum_n_product_with_tab(10); (4 rows) - The key point here is that you must write RETURNS SETOF record + The key point here is that you must write RETURNS SETOF record to indicate that the function returns multiple rows instead of just one. If there is only one output parameter, write that parameter's type - instead of record. + instead of record. It is frequently useful to construct a query's result by invoking a set-returning function multiple times, with the parameters for each invocation coming from successive rows of a table or subquery. The - preferred way to do this is to use the LATERAL key word, + preferred way to do this is to use the LATERAL key word, which is described in . Here is an example using a set-returning function to enumerate elements of a tree structure: @@ -990,17 +990,17 @@ SELECT name, listchildren(name) FROM nodes; In the last SELECT, - notice that no output row appears for Child2, Child3, etc. + notice that no output row appears for Child2, Child3, etc. This happens because listchildren returns an empty set for those arguments, so no result rows are generated. This is the same behavior as we got from an inner join to the function result when using - the LATERAL syntax. + the LATERAL syntax. - PostgreSQL's behavior for a set-returning function in a + PostgreSQL's behavior for a set-returning function in a query's select list is almost exactly the same as if the set-returning - function had been written in a LATERAL FROM-clause item + function had been written in a LATERAL FROM-clause item instead. For example, SELECT x, generate_series(1,5) AS g FROM tab; @@ -1010,20 +1010,20 @@ SELECT x, generate_series(1,5) AS g FROM tab; SELECT x, g FROM tab, LATERAL generate_series(1,5) AS g; It would be exactly the same, except that in this specific example, - the planner could choose to put g on the outside of the - nestloop join, since g has no actual lateral dependency - on tab. That would result in a different output row + the planner could choose to put g on the outside of the + nestloop join, since g has no actual lateral dependency + on tab. That would result in a different output row order. Set-returning functions in the select list are always evaluated as though they are on the inside of a nestloop join with the rest of - the FROM clause, so that the function(s) are run to - completion before the next row from the FROM clause is + the FROM clause, so that the function(s) are run to + completion before the next row from the FROM clause is considered. If there is more than one set-returning function in the query's select list, the behavior is similar to what you get from putting the functions - into a single LATERAL ROWS FROM( ... ) FROM-clause + into a single LATERAL ROWS FROM( ... ) FROM-clause item. For each row from the underlying query, there is an output row using the first result from each function, then an output row using the second result, and so on. 
If some of the set-returning functions @@ -1031,48 +1031,48 @@ SELECT x, g FROM tab, LATERAL generate_series(1,5) AS g; missing data, so that the total number of rows emitted for one underlying row is the same as for the set-returning function that produced the most outputs. Thus the set-returning functions - run in lockstep until they are all exhausted, and then + run in lockstep until they are all exhausted, and then execution continues with the next underlying row. Set-returning functions can be nested in a select list, although that is - not allowed in FROM-clause items. In such cases, each level + not allowed in FROM-clause items. In such cases, each level of nesting is treated separately, as though it were - a separate LATERAL ROWS FROM( ... ) item. For example, in + a separate LATERAL ROWS FROM( ... ) item. For example, in SELECT srf1(srf2(x), srf3(y)), srf4(srf5(z)) FROM tab; - the set-returning functions srf2, srf3, - and srf5 would be run in lockstep for each row - of tab, and then srf1 and srf4 + the set-returning functions srf2, srf3, + and srf5 would be run in lockstep for each row + of tab, and then srf1 and srf4 would be applied in lockstep to each row produced by the lower functions. Set-returning functions cannot be used within conditional-evaluation - constructs, such as CASE or COALESCE. For + constructs, such as CASE or COALESCE. For example, consider SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab; It might seem that this should produce five repetitions of input rows - that have x > 0, and a single repetition of those that do - not; but actually, because generate_series(1, 5) would be - run in an implicit LATERAL FROM item before - the CASE expression is ever evaluated, it would produce five + that have x > 0, and a single repetition of those that do + not; but actually, because generate_series(1, 5) would be + run in an implicit LATERAL FROM item before + the CASE expression is ever evaluated, it would produce five repetitions of every input row. To reduce confusion, such cases produce a parse-time error instead. - If a function's last command is INSERT, UPDATE, - or DELETE with RETURNING, that command will + If a function's last command is INSERT, UPDATE, + or DELETE with RETURNING, that command will always be executed to completion, even if the function is not declared - with SETOF or the calling query does not fetch all the - result rows. Any extra rows produced by the RETURNING + with SETOF or the calling query does not fetch all the + result rows. Any extra rows produced by the RETURNING clause are silently dropped, but the commanded table modifications still happen (and are all completed before returning from the function). @@ -1080,7 +1080,7 @@ SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab; - Before PostgreSQL 10, putting more than one + Before PostgreSQL 10, putting more than one set-returning function in the same select list did not behave very sensibly unless they always produced equal numbers of rows. Otherwise, what you got was a number of output rows equal to the least common @@ -1089,10 +1089,10 @@ SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab; described above; instead, a set-returning function could have at most one set-returning argument, and each nest of set-returning functions was run independently. 
Also, conditional execution (set-returning - functions inside CASE etc) was previously allowed, + functions inside CASE etc) was previously allowed, complicating things even more. - Use of the LATERAL syntax is recommended when writing - queries that need to work in older PostgreSQL versions, + Use of the LATERAL syntax is recommended when writing + queries that need to work in older PostgreSQL versions, because that will give consistent results across different versions. If you have a query that is relying on conditional execution of a set-returning function, you may be able to fix it by moving the @@ -1115,13 +1115,13 @@ END$$ LANGUAGE plpgsql; SELECT x, case_generate_series(y > 0, 1, z, 5) FROM tab; This formulation will work the same in all versions - of PostgreSQL. + of PostgreSQL. - <acronym>SQL</acronym> Functions Returning <literal>TABLE</> + <acronym>SQL</acronym> Functions Returning <literal>TABLE</literal> function @@ -1131,12 +1131,12 @@ SELECT x, case_generate_series(y > 0, 1, z, 5) FROM tab; There is another way to declare a function as returning a set, which is to use the syntax - RETURNS TABLE(columns). - This is equivalent to using one or more OUT parameters plus - marking the function as returning SETOF record (or - SETOF a single output parameter's type, as appropriate). + RETURNS TABLE(columns). + This is equivalent to using one or more OUT parameters plus + marking the function as returning SETOF record (or + SETOF a single output parameter's type, as appropriate). This notation is specified in recent versions of the SQL standard, and - thus may be more portable than using SETOF. + thus may be more portable than using SETOF. @@ -1150,9 +1150,9 @@ RETURNS TABLE(sum int, product int) AS $$ $$ LANGUAGE SQL; - It is not allowed to use explicit OUT or INOUT - parameters with the RETURNS TABLE notation — you must - put all the output columns in the TABLE list. + It is not allowed to use explicit OUT or INOUT + parameters with the RETURNS TABLE notation — you must + put all the output columns in the TABLE list. @@ -1270,8 +1270,8 @@ SELECT concat_values('|', 1, 4, 2); <acronym>SQL</acronym> Functions with Collations - collation - in SQL functions + collation + in SQL functions @@ -1283,21 +1283,21 @@ SELECT concat_values('|', 1, 4, 2); then all the collatable parameters are treated as having that collation implicitly. This will affect the behavior of collation-sensitive operations within the function. For example, using the - anyleast function described above, the result of + anyleast function described above, the result of SELECT anyleast('abc'::text, 'ABC'); - will depend on the database's default collation. In C locale - the result will be ABC, but in many other locales it will - be abc. The collation to use can be forced by adding - a COLLATE clause to any of the arguments, for example + will depend on the database's default collation. In C locale + the result will be ABC, but in many other locales it will + be abc. The collation to use can be forced by adding + a COLLATE clause to any of the arguments, for example SELECT anyleast('abc'::text, 'ABC' COLLATE "C"); Alternatively, if you wish a function to operate with a particular collation regardless of what it is called with, insert - COLLATE clauses as needed in the function definition. - This version of anyleast would always use en_US + COLLATE clauses as needed in the function definition. 
+ This version of anyleast would always use en_US locale to compare strings: CREATE FUNCTION anyleast (VARIADIC anyarray) RETURNS anyelement AS $$ @@ -1358,24 +1358,24 @@ CREATE FUNCTION test(smallint, double precision) RETURNS ... A function that takes a single argument of a composite type should generally not have the same name as any attribute (field) of that type. - Recall that attribute(table) + Recall that attribute(table) is considered equivalent - to table.attribute. + to table.attribute. In the case that there is an ambiguity between a function on a composite type and an attribute of the composite type, the attribute will always be used. It is possible to override that choice by schema-qualifying the function name - (that is, schema.func(table) + (that is, schema.func(table) ) but it's better to avoid the problem by not choosing conflicting names. Another possible conflict is between variadic and non-variadic functions. - For instance, it is possible to create both foo(numeric) and - foo(VARIADIC numeric[]). In this case it is unclear which one + For instance, it is possible to create both foo(numeric) and + foo(VARIADIC numeric[]). In this case it is unclear which one should be matched to a call providing a single numeric argument, such as - foo(10.1). The rule is that the function appearing + foo(10.1). The rule is that the function appearing earlier in the search path is used, or if the two functions are in the same schema, the non-variadic one is preferred. @@ -1388,15 +1388,15 @@ CREATE FUNCTION test(smallint, double precision) RETURNS ... rule is violated, the behavior is not portable. You might get a run-time linker error, or one of the functions will get called (usually the internal one). The alternative form of the - AS clause for the SQL CREATE + AS clause for the SQL CREATE FUNCTION command decouples the SQL function name from the function name in the C source code. For instance: CREATE FUNCTION test(int) RETURNS int - AS 'filename', 'test_1arg' + AS 'filename', 'test_1arg' LANGUAGE C; CREATE FUNCTION test(int, int) RETURNS int - AS 'filename', 'test_2arg' + AS 'filename', 'test_2arg' LANGUAGE C; The names of the C functions here reflect one of many possible conventions. @@ -1421,9 +1421,9 @@ CREATE FUNCTION test(int, int) RETURNS int - Every function has a volatility classification, with - the possibilities being VOLATILE, STABLE, or - IMMUTABLE. VOLATILE is the default if the + Every function has a volatility classification, with + the possibilities being VOLATILE, STABLE, or + IMMUTABLE. VOLATILE is the default if the command does not specify a category. The volatility category is a promise to the optimizer about the behavior of the function: @@ -1431,7 +1431,7 @@ CREATE FUNCTION test(int, int) RETURNS int - A VOLATILE function can do anything, including modifying + A VOLATILE function can do anything, including modifying the database. It can return different results on successive calls with the same arguments. The optimizer makes no assumptions about the behavior of such functions. A query using a volatile function will @@ -1440,26 +1440,26 @@ CREATE FUNCTION test(int, int) RETURNS int - A STABLE function cannot modify the database and is + A STABLE function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement. This category allows the optimizer to optimize multiple calls of the function to a single call. 
In particular, it is safe to use an expression containing such a function in an index scan condition. (Since an index scan will evaluate the comparison value only once, not once at each - row, it is not valid to use a VOLATILE function in an + row, it is not valid to use a VOLATILE function in an index scan condition.) - An IMMUTABLE function cannot modify the database and is + An IMMUTABLE function cannot modify the database and is guaranteed to return the same results given the same arguments forever. This category allows the optimizer to pre-evaluate the function when a query calls it with constant arguments. For example, a query like - SELECT ... WHERE x = 2 + 2 can be simplified on sight to - SELECT ... WHERE x = 4, because the function underlying - the integer addition operator is marked IMMUTABLE. + SELECT ... WHERE x = 2 + 2 can be simplified on sight to + SELECT ... WHERE x = 4, because the function underlying + the integer addition operator is marked IMMUTABLE. @@ -1471,32 +1471,32 @@ CREATE FUNCTION test(int, int) RETURNS int - Any function with side-effects must be labeled - VOLATILE, so that calls to it cannot be optimized away. + Any function with side-effects must be labeled + VOLATILE, so that calls to it cannot be optimized away. Even a function with no side-effects needs to be labeled - VOLATILE if its value can change within a single query; - some examples are random(), currval(), - timeofday(). + VOLATILE if its value can change within a single query; + some examples are random(), currval(), + timeofday(). - Another important example is that the current_timestamp - family of functions qualify as STABLE, since their values do + Another important example is that the current_timestamp + family of functions qualify as STABLE, since their values do not change within a transaction. - There is relatively little difference between STABLE and - IMMUTABLE categories when considering simple interactive + There is relatively little difference between STABLE and + IMMUTABLE categories when considering simple interactive queries that are planned and immediately executed: it doesn't matter a lot whether a function is executed once during planning or once during query execution startup. But there is a big difference if the plan is - saved and reused later. Labeling a function IMMUTABLE when + saved and reused later. Labeling a function IMMUTABLE when it really isn't might allow it to be prematurely folded to a constant during planning, resulting in a stale value being re-used during subsequent uses of the plan. This is a hazard when using prepared statements or when using function languages that cache plans (such as - PL/pgSQL). + PL/pgSQL). @@ -1504,12 +1504,12 @@ CREATE FUNCTION test(int, int) RETURNS int languages, there is a second important property determined by the volatility category, namely the visibility of any data changes that have been made by the SQL command that is calling the function. A - VOLATILE function will see such changes, a STABLE - or IMMUTABLE function will not. This behavior is implemented + VOLATILE function will see such changes, a STABLE + or IMMUTABLE function will not. This behavior is implemented using the snapshotting behavior of MVCC (see ): - STABLE and IMMUTABLE functions use a snapshot + STABLE and IMMUTABLE functions use a snapshot established as of the start of the calling query, whereas - VOLATILE functions obtain a fresh snapshot at the start of + VOLATILE functions obtain a fresh snapshot at the start of each query they execute. 
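To make the volatility rules above concrete, here is a minimal SQL sketch (the table and function names are invented for illustration, not part of the patch): a function that only reads the database can be marked STABLE, while a pure computation can be marked IMMUTABLE.

CREATE FUNCTION lookup_rate(code text) RETURNS numeric AS $$
    -- Reads the database but does not modify it; its result cannot
    -- change within a single statement, so STABLE is safe.
    SELECT rate FROM exchange_rates WHERE currency = code;
$$ LANGUAGE SQL STABLE;

CREATE FUNCTION add_one(n integer) RETURNS integer AS $$
    -- Pure computation: IMMUTABLE lets the planner fold calls with
    -- constant arguments at plan time.
    SELECT n + 1;
$$ LANGUAGE SQL IMMUTABLE;

Marking lookup_rate as IMMUTABLE instead would risk folding a stale rate into a cached plan, which is exactly the hazard described above.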
@@ -1522,41 +1522,41 @@ CREATE FUNCTION test(int, int) RETURNS int Because of this snapshotting behavior, - a function containing only SELECT commands can safely be - marked STABLE, even if it selects from tables that might be + a function containing only SELECT commands can safely be + marked STABLE, even if it selects from tables that might be undergoing modifications by concurrent queries. PostgreSQL will execute all commands of a - STABLE function using the snapshot established for the + STABLE function using the snapshot established for the calling query, and so it will see a fixed view of the database throughout that query. - The same snapshotting behavior is used for SELECT commands - within IMMUTABLE functions. It is generally unwise to select - from database tables within an IMMUTABLE function at all, + The same snapshotting behavior is used for SELECT commands + within IMMUTABLE functions. It is generally unwise to select + from database tables within an IMMUTABLE function at all, since the immutability will be broken if the table contents ever change. However, PostgreSQL does not enforce that you do not do that. - A common error is to label a function IMMUTABLE when its + A common error is to label a function IMMUTABLE when its results depend on a configuration parameter. For example, a function that manipulates timestamps might well have results that depend on the setting. For safety, such functions should - be labeled STABLE instead. + be labeled STABLE instead. - PostgreSQL requires that STABLE - and IMMUTABLE functions contain no SQL commands other - than SELECT to prevent data modification. + PostgreSQL requires that STABLE + and IMMUTABLE functions contain no SQL commands other + than SELECT to prevent data modification. (This is not a completely bulletproof test, since such functions could - still call VOLATILE functions that modify the database. - If you do that, you will find that the STABLE or - IMMUTABLE function does not notice the database changes + still call VOLATILE functions that modify the database. + If you do that, you will find that the STABLE or + IMMUTABLE function does not notice the database changes applied by the called function, since they are hidden from its snapshot.) @@ -1569,7 +1569,7 @@ CREATE FUNCTION test(int, int) RETURNS int PostgreSQL allows user-defined functions to be written in other languages besides SQL and C. These other languages are generically called procedural - languages (PLs). + languages (PLs). Procedural languages aren't built into the PostgreSQL server; they are offered by loadable modules. @@ -1581,7 +1581,7 @@ CREATE FUNCTION test(int, int) RETURNS int Internal Functions - functioninternal + functioninternal Internal functions are functions written in C that have been statically @@ -1635,8 +1635,8 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision be made compatible with C, such as C++). Such functions are compiled into dynamically loadable objects (also called shared libraries) and are loaded by the server on demand. The dynamic - loading feature is what distinguishes C language functions - from internal functions — the actual coding conventions + loading feature is what distinguishes C language functions + from internal functions — the actual coding conventions are essentially the same for both. (Hence, the standard internal function library is a rich source of coding examples for user-defined C functions.) 
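The hunks above distinguish dynamically loaded C functions from internal ones; for orientation, a minimal version-1 function in the style the text describes (the function name is illustrative, and the library path in the comment is a hypothetical placeholder):

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;    /* the magic block required in dynamically loaded files */

PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32       arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}

/*
 * Hypothetical declaration, once compiled into a shared library:
 *
 * CREATE FUNCTION add_one(integer) RETURNS integer
 *     AS '/path/to/library', 'add_one'
 *     LANGUAGE C STRICT;
 */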
@@ -1683,9 +1683,9 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision
       If the name starts with the string $libdir,
-      that part is replaced by the PostgreSQL package
+      that part is replaced by the PostgreSQL package
       library directory
-      name, which is determined at build time.
+      name, which is determined at build time.
@@ -1693,7 +1693,7 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision
       If the name does not contain a directory part, the file is
       searched for in the path specified by the configuration variable
-      dynamic_library_path.
+      dynamic_library_path.
@@ -1742,7 +1742,7 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision
     PostgreSQL will not compile a C function
     automatically. The object file must be compiled before it is referenced
     in a CREATE
-    FUNCTION command. See for additional
+    FUNCTION command. See for additional
     information.
@@ -1754,12 +1754,12 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision
     To ensure that a dynamically loaded object file is not loaded into an
     incompatible server, PostgreSQL checks that the
-    file contains a magic block with the appropriate contents.
+    file contains a magic block with the appropriate contents.
     This allows the server to detect obvious incompatibilities, such as code
     compiled for a different major version of
     PostgreSQL. To include a magic block,
     write this in one (and only one) of the module source files, after having
-    included the header fmgr.h:
+    included the header fmgr.h:

PG_MODULE_MAGIC;

@@ -1790,12 +1790,12 @@ PG_MODULE_MAGIC;
     Optionally, a dynamically loaded file can contain initialization and
     finalization functions.  If the file includes a function named
-    _PG_init, that function will be called immediately after
+    _PG_init, that function will be called immediately after
     loading the file.  The function receives no parameters and should
     return void.  If the file includes a function named
-    _PG_fini, that function will be called immediately before
+    _PG_fini, that function will be called immediately before
     unloading the file.  Likewise, the function receives no parameters and
-    should return void.  Note that _PG_fini will only be called
+    should return void.  Note that _PG_fini will only be called
     during an unload of the file, not during process termination.
     (Presently, unloads are disabled and will never occur, but this may
     change in the future.)
@@ -1915,7 +1915,7 @@ typedef struct
-     Never modify the contents of a pass-by-reference input
+     Never modify the contents of a pass-by-reference input
      value.  If you do so you are likely to corrupt on-disk data, since
      the pointer you are given might point directly into a disk buffer.
      The sole exception to this rule is explained in
      .
@@ -1934,7 +1934,7 @@ typedef struct {
 } text;

-    The [FLEXIBLE_ARRAY_MEMBER] notation means that the actual
+    The [FLEXIBLE_ARRAY_MEMBER] notation means that the actual
     length of the data part is not specified by this declaration.
@@ -1942,7 +1942,7 @@ typedef struct {
     When manipulating
     variable-length types, we must be careful to allocate
     the correct amount of memory and set the length field correctly.
-    For example, if we wanted to store 40 bytes in a text
+    For example, if we wanted to store 40 bytes in a text
     structure, we might use a code fragment like this:

text *destination = (text *) palloc(VARHDRSZ + 40);
SET_VARSIZE(destination, VARHDRSZ + 40);
memcpy(destination->data, buffer, 40);

-    VARHDRSZ is the same as sizeof(int32), but
-    it's considered good style to use the macro VARHDRSZ
+    VARHDRSZ is the same as sizeof(int32), but
+    it's considered good style to use the macro VARHDRSZ
     to refer to the size of the overhead for a variable-length type.
-    Also, the length field must be set using the
-    SET_VARSIZE macro, not by simple assignment.
+    Also, the length field must be set using the
+    SET_VARSIZE macro, not by simple assignment.

     specifies which C type
     corresponds to which SQL type when writing a C-language function
-    that uses a built-in type of PostgreSQL.
+    that uses a built-in type of PostgreSQL.
     The Defined In column gives the header file that
     needs to be included to get the type definition.  (The actual
     definition might be in a different file that is included by the
@@ -2175,8 +2175,8 @@ PG_FUNCTION_INFO_V1(funcname);
     must appear in the same source file.  (Conventionally, it's
     written just before the function itself.)  This macro call is not
-    needed for internal-language functions, since
-    PostgreSQL assumes that all internal functions
+    needed for internal-language functions, since
+    PostgreSQL assumes that all internal functions
     use the version-1 convention.  It is, however, required for
     dynamically-loaded functions.
@@ -2332,8 +2332,8 @@ CREATE FUNCTION concat_text(text, text) RETURNS text
     directory of the shared library file (for instance the
     PostgreSQL tutorial directory, which
     contains the code for the examples used in this section).
-    (Better style would be to use just 'funcs' in the
-    AS clause, after having added
+    (Better style would be to use just 'funcs' in the
+    AS clause, after having added
     DIRECTORY to the search path.  In any
     case, we can omit the system-specific extension for a shared
     library, commonly .so.)
@@ -2350,16 +2350,16 @@ CREATE FUNCTION concat_text(text, text) RETURNS text
     At first glance, the version-1 coding conventions might appear to be just
-    pointless obscurantism, compared to using plain C calling
-    conventions.  They do, however, allow dealing with NULLable
+    pointless obscurantism, compared to using plain C calling
+    conventions.  They do, however, allow dealing with NULLable
     arguments/return values, and toasted (compressed or
     out-of-line) values.

-    The macro PG_ARGISNULL(n)
+    The macro PG_ARGISNULL(n)
     allows a function to test whether each input is null.  (Of course, doing
-    this is only necessary in functions not declared strict.)
+    this is only necessary in functions not declared strict.)
     As with the
     PG_GETARG_xxx() macros, the input
     arguments are counted beginning at zero.  Note that one
@@ -2394,8 +2394,8 @@ CREATE FUNCTION concat_text(text, text) RETURNS text
     ALTER TABLE tablename
     ALTER COLUMN colname SET STORAGE
     storagetype. storagetype is one of
-    plain, external, extended,
-    or main.)
+    plain, external, extended,
+    or main.)
@@ -2433,8 +2433,8 @@ CREATE FUNCTION concat_text(text, text) RETURNS text
       Use pg_config
-      --includedir-server
-      to find out where the PostgreSQL server header
+      --includedir-server
+      to find out where the PostgreSQL server header
       files are installed on your system (or the system that your
       users will be running on).
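Putting the variable-length rules from this hunk together (allocate VARHDRSZ plus the data length, set the length word with SET_VARSIZE), a sketch of a complete function; copy_text is an invented name, and the detoasting macros used are the standard fmgr ones:

#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(copy_text);

Datum
copy_text(PG_FUNCTION_ARGS)
{
    text       *src = PG_GETARG_TEXT_PP(0);    /* detoasted input value */
    int32       len = VARSIZE_ANY_EXHDR(src);  /* data bytes, header excluded */
    text       *dst = (text *) palloc(len + VARHDRSZ);

    /* Set the length word via SET_VARSIZE, never by direct assignment. */
    SET_VARSIZE(dst, len + VARHDRSZ);
    memcpy(VARDATA(dst), VARDATA_ANY(src), len);

    PG_RETURN_TEXT_P(dst);
}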
@@ -2452,7 +2452,7 @@ CREATE FUNCTION concat_text(text, text) RETURNS text
-     Remember to define a magic block for your shared library,
+     Remember to define a magic block for your shared library,
      as described in .
@@ -2461,7 +2461,7 @@ CREATE FUNCTION concat_text(text, text) RETURNS text
      When allocating memory, use the
      PostgreSQL functions
-     palloc and pfree
+     palloc and pfree
      instead of the corresponding C library functions
      malloc and free.
      The memory allocated by palloc will be
@@ -2472,8 +2472,8 @@ CREATE FUNCTION concat_text(text, text) RETURNS text
-     Always zero the bytes of your structures using memset
-     (or allocate them with palloc0 in the first place).
+     Always zero the bytes of your structures using memset
+     (or allocate them with palloc0 in the first place).
      Even if you assign to each field of your
      structure, there might be alignment padding (holes in the
      structure) that contain garbage values.  Without this, it's
      difficult to
@@ -2493,7 +2493,7 @@ CREATE FUNCTION concat_text(text, text) RETURNS text
      (PG_FUNCTION_ARGS, etc.)
      are in fmgr.h, so you will need to
      include at least these two files.  For portability reasons it's best to
-     include postgres.h first,
+     include postgres.h first,
      before any other system or user header files.  Including
      postgres.h will also include
      elog.h and palloc.h
@@ -2539,7 +2539,7 @@ SELECT name, c_overpaid(emp, 1500) AS overpaid
     Using the version-1 calling conventions, we can define
-    c_overpaid as:
+    c_overpaid as:
-    Notice we have used STRICT so that we did not have to
+    Notice we have used STRICT so that we did not have to
     check whether the input arguments were NULL.
@@ -2619,87 +2619,87 @@ CREATE FUNCTION c_overpaid(emp, integer) RETURNS boolean
     There are two ways you can build a composite data value (henceforth
-    a tuple): you can build it from an array of Datum values,
+    a tuple): you can build it from an array of Datum values,
     or from an array of C strings that can be passed to the input
     conversion functions of the tuple's column data types.  In either
-    case, you first need to obtain or construct a TupleDesc
+    case, you first need to obtain or construct a TupleDesc
     descriptor for the tuple structure.  When working with Datums, you
-    pass the TupleDesc to BlessTupleDesc,
-    and then call heap_form_tuple for each row.  When working
-    with C strings, you pass the TupleDesc to
-    TupleDescGetAttInMetadata, and then call
-    BuildTupleFromCStrings for each row.  In the case of a
+    pass the TupleDesc to BlessTupleDesc,
+    and then call heap_form_tuple for each row.  When working
+    with C strings, you pass the TupleDesc to
+    TupleDescGetAttInMetadata, and then call
+    BuildTupleFromCStrings for each row.  In the case of a
     function returning a set of tuples, the setup steps can all be done
     once during the first call of the function.

     Several helper functions are available for setting up the needed
-    TupleDesc.  The recommended way to do this in most
+    TupleDesc.  The recommended way to do this in most
     functions returning composite values is to call:

TypeFuncClass get_call_result_type(FunctionCallInfo fcinfo, Oid *resultTypeId, TupleDesc *resultTupleDesc)

-    passing the same fcinfo struct passed to the calling function
+    passing the same fcinfo struct passed to the calling function
     itself.  (This of course requires that you use the version-1
-    calling conventions.)  resultTypeId can be specified
-    as NULL or as the address of a local variable to receive the
-    function's result type OID.
resultTupleDesc should be the - address of a local TupleDesc variable. Check that the - result is TYPEFUNC_COMPOSITE; if so, - resultTupleDesc has been filled with the needed - TupleDesc. (If it is not, you can report an error along + calling conventions.) resultTypeId can be specified + as NULL or as the address of a local variable to receive the + function's result type OID. resultTupleDesc should be the + address of a local TupleDesc variable. Check that the + result is TYPEFUNC_COMPOSITE; if so, + resultTupleDesc has been filled with the needed + TupleDesc. (If it is not, you can report an error along the lines of function returning record called in context that cannot accept type record.) - get_call_result_type can resolve the actual type of a + get_call_result_type can resolve the actual type of a polymorphic function result; so it is useful in functions that return scalar polymorphic results, not only functions that return composites. - The resultTypeId output is primarily useful for functions + The resultTypeId output is primarily useful for functions returning polymorphic scalars. - get_call_result_type has a sibling - get_expr_result_type, which can be used to resolve the + get_call_result_type has a sibling + get_expr_result_type, which can be used to resolve the expected output type for a function call represented by an expression tree. This can be used when trying to determine the result type from outside the function itself. There is also - get_func_result_type, which can be used when only the + get_func_result_type, which can be used when only the function's OID is available. However these functions are not able - to deal with functions declared to return record, and - get_func_result_type cannot resolve polymorphic types, - so you should preferentially use get_call_result_type. + to deal with functions declared to return record, and + get_func_result_type cannot resolve polymorphic types, + so you should preferentially use get_call_result_type. Older, now-deprecated functions for obtaining - TupleDescs are: + TupleDescs are: TupleDesc RelationNameGetTupleDesc(const char *relname) - to get a TupleDesc for the row type of a named relation, + to get a TupleDesc for the row type of a named relation, and: TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases) - to get a TupleDesc based on a type OID. This can - be used to get a TupleDesc for a base or + to get a TupleDesc based on a type OID. This can + be used to get a TupleDesc for a base or composite type. It will not work for a function that returns - record, however, and it cannot resolve polymorphic + record, however, and it cannot resolve polymorphic types. - Once you have a TupleDesc, call: + Once you have a TupleDesc, call: TupleDesc BlessTupleDesc(TupleDesc tupdesc) @@ -2709,8 +2709,8 @@ AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc) if you plan to work with C strings. If you are writing a function returning set, you can save the results of these functions in the - FuncCallContext structure — use the - tuple_desc or attinmeta field + FuncCallContext structure — use the + tuple_desc or attinmeta field respectively. @@ -2719,7 +2719,7 @@ AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc) HeapTuple heap_form_tuple(TupleDesc tupdesc, Datum *values, bool *isnull) - to build a HeapTuple given user data in Datum form. + to build a HeapTuple given user data in Datum form. 
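As a worked sketch of the Datum-based path just described (get_call_result_type, then BlessTupleDesc, then heap_form_tuple), assuming a hypothetical function make_pair declared at the SQL level to return a two-column composite; the error message wording follows the text above:

#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"
#include "access/htup_details.h"

PG_FUNCTION_INFO_V1(make_pair);

Datum
make_pair(PG_FUNCTION_ARGS)
{
    TupleDesc   tupdesc;
    Datum       values[2];
    bool        nulls[2] = {false, false};
    HeapTuple   tuple;

    /* Resolve and bless the descriptor for the declared result type. */
    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
        elog(ERROR, "function returning record called in context "
                    "that cannot accept type record");
    tupdesc = BlessTupleDesc(tupdesc);

    values[0] = Int32GetDatum(PG_GETARG_INT32(0));
    values[1] = Int32GetDatum(PG_GETARG_INT32(1));

    /* Form the tuple and hand it back as a Datum. */
    tuple = heap_form_tuple(tupdesc, values, nulls);
    PG_RETURN_DATUM(HeapTupleGetDatum(tuple));
}

For the C-string path described above, the same descriptor would instead be passed to TupleDescGetAttInMetadata, with each row built by BuildTupleFromCStrings.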
@@ -2727,24 +2727,24 @@ HeapTuple heap_form_tuple(TupleDesc tupdesc, Datum *values, bool *isnull) HeapTuple BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values) - to build a HeapTuple given user data + to build a HeapTuple given user data in C string form. values is an array of C strings, one for each attribute of the return row. Each C string should be in the form expected by the input function of the attribute data type. In order to return a null value for one of the attributes, - the corresponding pointer in the values array - should be set to NULL. This function will need to + the corresponding pointer in the values array + should be set to NULL. This function will need to be called again for each row you return. Once you have built a tuple to return from your function, it - must be converted into a Datum. Use: + must be converted into a Datum. Use: HeapTupleGetDatum(HeapTuple tuple) - to convert a HeapTuple into a valid Datum. This - Datum can be returned directly if you intend to return + to convert a HeapTuple into a valid Datum. This + Datum can be returned directly if you intend to return just a single row, or it can be used as the current return value in a set-returning function. @@ -2767,13 +2767,13 @@ HeapTupleGetDatum(HeapTuple tuple) - A set-returning function (SRF) is called - once for each item it returns. The SRF must + A set-returning function (SRF) is called + once for each item it returns. The SRF must therefore save enough state to remember what it was doing and return the next item on each call. - The structure FuncCallContext is provided to help - control this process. Within a function, fcinfo->flinfo->fn_extra - is used to hold a pointer to FuncCallContext + The structure FuncCallContext is provided to help + control this process. Within a function, fcinfo->flinfo->fn_extra + is used to hold a pointer to FuncCallContext across calls. typedef struct FuncCallContext @@ -2847,9 +2847,9 @@ typedef struct FuncCallContext - An SRF uses several functions and macros that - automatically manipulate the FuncCallContext - structure (and expect to find it via fn_extra). Use: + An SRF uses several functions and macros that + automatically manipulate the FuncCallContext + structure (and expect to find it via fn_extra). Use: SRF_IS_FIRSTCALL() @@ -2858,12 +2858,12 @@ SRF_IS_FIRSTCALL() SRF_FIRSTCALL_INIT() - to initialize the FuncCallContext. On every function call, + to initialize the FuncCallContext. On every function call, including the first, use: SRF_PERCALL_SETUP() - to properly set up for using the FuncCallContext + to properly set up for using the FuncCallContext and clearing any previously returned data left over from the previous pass. @@ -2873,27 +2873,27 @@ SRF_PERCALL_SETUP() SRF_RETURN_NEXT(funcctx, result) - to return it to the caller. (result must be of type - Datum, either a single value or a tuple prepared as + to return it to the caller. (result must be of type + Datum, either a single value or a tuple prepared as described above.) Finally, when your function is finished returning data, use: SRF_RETURN_DONE(funcctx) - to clean up and end the SRF. + to clean up and end the SRF. - The memory context that is current when the SRF is called is + The memory context that is current when the SRF is called is a transient context that will be cleared between calls. This means - that you do not need to call pfree on everything - you allocated using palloc; it will go away anyway. 
However, if you want to allocate + that you do not need to call pfree on everything + you allocated using palloc; it will go away anyway. However, if you want to allocate any data structures to live across calls, you need to put them somewhere else. The memory context referenced by - multi_call_memory_ctx is a suitable location for any - data that needs to survive until the SRF is finished running. In most + multi_call_memory_ctx is a suitable location for any + data that needs to survive until the SRF is finished running. In most cases, this means that you should switch into - multi_call_memory_ctx while doing the first-call setup. + multi_call_memory_ctx while doing the first-call setup. @@ -2904,8 +2904,8 @@ SRF_RETURN_DONE(funcctx) PG_GETARG_xxx macro) in the transient context then the detoasted copies will be freed on each cycle. Accordingly, if you keep references to such values in - your user_fctx, you must either copy them into the - multi_call_memory_ctx after detoasting, or ensure + your user_fctx, you must either copy them into the + multi_call_memory_ctx after detoasting, or ensure that you detoast the values only in that context. @@ -2959,7 +2959,7 @@ my_set_returning_function(PG_FUNCTION_ARGS) - A complete example of a simple SRF returning a composite type + A complete example of a simple SRF returning a composite type looks like: filename', 'retcomposite' + AS 'filename', 'retcomposite' LANGUAGE C IMMUTABLE STRICT; A different way is to use OUT parameters: @@ -3067,15 +3067,15 @@ CREATE OR REPLACE FUNCTION retcomposite(integer, integer) CREATE OR REPLACE FUNCTION retcomposite(IN integer, IN integer, OUT f1 integer, OUT f2 integer, OUT f3 integer) RETURNS SETOF record - AS 'filename', 'retcomposite' + AS 'filename', 'retcomposite' LANGUAGE C IMMUTABLE STRICT; Notice that in this method the output type of the function is formally - an anonymous record type. + an anonymous record type. - The directory contrib/tablefunc + The directory contrib/tablefunc module in the source distribution contains more examples of set-returning functions. @@ -3093,20 +3093,20 @@ CREATE OR REPLACE FUNCTION retcomposite(IN integer, IN integer, of polymorphic functions. When function arguments or return types are defined as polymorphic types, the function author cannot know in advance what data type it will be called with, or - need to return. There are two routines provided in fmgr.h + need to return. There are two routines provided in fmgr.h to allow a version-1 C function to discover the actual data types of its arguments and the type it is expected to return. The routines are - called get_fn_expr_rettype(FmgrInfo *flinfo) and - get_fn_expr_argtype(FmgrInfo *flinfo, int argnum). + called get_fn_expr_rettype(FmgrInfo *flinfo) and + get_fn_expr_argtype(FmgrInfo *flinfo, int argnum). They return the result or argument type OID, or InvalidOid if the information is not available. - The structure flinfo is normally accessed as - fcinfo->flinfo. The parameter argnum - is zero based. get_call_result_type can also be used - as an alternative to get_fn_expr_rettype. - There is also get_fn_expr_variadic, which can be used to + The structure flinfo is normally accessed as + fcinfo->flinfo. The parameter argnum + is zero based. get_call_result_type can also be used + as an alternative to get_fn_expr_rettype. + There is also get_fn_expr_variadic, which can be used to find out whether variadic arguments have been merged into an array. 
- This is primarily useful for VARIADIC "any" functions, + This is primarily useful for VARIADIC "any" functions, since such merging will always have occurred for variadic functions taking ordinary array types. @@ -3174,23 +3174,23 @@ CREATE FUNCTION make_array(anyelement) RETURNS anyarray There is a variant of polymorphism that is only available to C-language functions: they can be declared to take parameters of type - "any". (Note that this type name must be double-quoted, + "any". (Note that this type name must be double-quoted, since it's also a SQL reserved word.) This works like - anyelement except that it does not constrain different - "any" arguments to be the same type, nor do they help + anyelement except that it does not constrain different + "any" arguments to be the same type, nor do they help determine the function's result type. A C-language function can also - declare its final parameter to be VARIADIC "any". This will + declare its final parameter to be VARIADIC "any". This will match one or more actual arguments of any type (not necessarily the same - type). These arguments will not be gathered into an array + type). These arguments will not be gathered into an array as happens with normal variadic functions; they will just be passed to - the function separately. The PG_NARGS() macro and the + the function separately. The PG_NARGS() macro and the methods described above must be used to determine the number of actual arguments and their types when using this feature. Also, users of such - a function might wish to use the VARIADIC keyword in their + a function might wish to use the VARIADIC keyword in their function call, with the expectation that the function would treat the array elements as separate arguments. The function itself must implement - that behavior if wanted, after using get_fn_expr_variadic to - detect that the actual argument was marked with VARIADIC. + that behavior if wanted, after using get_fn_expr_variadic to + detect that the actual argument was marked with VARIADIC. @@ -3200,22 +3200,22 @@ CREATE FUNCTION make_array(anyelement) RETURNS anyarray Some function calls can be simplified during planning based on properties specific to the function. For example, - int4mul(n, 1) could be simplified to just n. + int4mul(n, 1) could be simplified to just n. To define such function-specific optimizations, write a - transform function and place its OID in the - protransform field of the primary function's - pg_proc entry. The transform function must have the SQL - signature protransform(internal) RETURNS internal. The - argument, actually FuncExpr *, is a dummy node representing a + transform function and place its OID in the + protransform field of the primary function's + pg_proc entry. The transform function must have the SQL + signature protransform(internal) RETURNS internal. The + argument, actually FuncExpr *, is a dummy node representing a call to the primary function. If the transform function's study of the expression tree proves that a simplified expression tree can substitute for all possible concrete calls represented thereby, build and return - that simplified expression. Otherwise, return a NULL - pointer (not a SQL null). + that simplified expression. Otherwise, return a NULL + pointer (not a SQL null). - We make no guarantee that PostgreSQL will never call the + We make no guarantee that PostgreSQL will never call the primary function in cases that the transform function could simplify. 
Ensure rigorous equivalence between the simplified expression and an actual call to the primary function. @@ -3235,26 +3235,26 @@ CREATE FUNCTION make_array(anyelement) RETURNS anyarray Add-ins can reserve LWLocks and an allocation of shared memory on server startup. The add-in's shared library must be preloaded by specifying it in - shared_preload_libraries. + shared_preload_libraries. Shared memory is reserved by calling: void RequestAddinShmemSpace(int size) - from your _PG_init function. + from your _PG_init function. LWLocks are reserved by calling: void RequestNamedLWLockTranche(const char *tranche_name, int num_lwlocks) - from _PG_init. This will ensure that an array of - num_lwlocks LWLocks is available under the name - tranche_name. Use GetNamedLWLockTranche + from _PG_init. This will ensure that an array of + num_lwlocks LWLocks is available under the name + tranche_name. Use GetNamedLWLockTranche to get a pointer to this array. To avoid possible race-conditions, each backend should use the LWLock - AddinShmemInitLock when connecting to and initializing + AddinShmemInitLock when connecting to and initializing its allocation of shared memory, as shown here: static mystruct *ptr = NULL; @@ -3294,7 +3294,7 @@ if (!ptr) All functions accessed by the backend must present a C interface to the backend; these C functions can then call C++ functions. - For example, extern C linkage is required for + For example, extern C linkage is required for backend-accessed functions. This is also necessary for any functions that are passed as pointers between the backend and C++ code. @@ -3303,30 +3303,30 @@ if (!ptr) Free memory using the appropriate deallocation method. For example, - most backend memory is allocated using palloc(), so use - pfree() to free it. Using C++ - delete in such cases will fail. + most backend memory is allocated using palloc(), so use + pfree() to free it. Using C++ + delete in such cases will fail. Prevent exceptions from propagating into the C code (use a catch-all - block at the top level of all extern C functions). This + block at the top level of all extern C functions). This is necessary even if the C++ code does not explicitly throw any exceptions, because events like out-of-memory can still throw exceptions. Any exceptions must be caught and appropriate errors passed back to the C interface. If possible, compile C++ with - to eliminate exceptions entirely; in such cases, you must check for failures in your C++ code, e.g. check for - NULL returned by new(). + NULL returned by new(). If calling backend functions from C++ code, be sure that the C++ call stack contains only plain old data structures - (POD). This is necessary because backend errors - generate a distant longjmp() that does not properly + (POD). This is necessary because backend errors + generate a distant longjmp() that does not properly unroll a C++ call stack with non-POD objects. @@ -3335,7 +3335,7 @@ if (!ptr) In summary, it is best to place C++ code behind a wall of - extern C functions that interface to the backend, + extern C functions that interface to the backend, and avoid exception, memory, and call stack leakage. diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index b951a58e0a..520eab8e99 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -12,14 +12,14 @@ The procedures described thus far let you define new types, new functions, and new operators. However, we cannot yet define an index on a column of a new data type. 
To do this, we must define an
-    operator class for the new data type.  Later in this
+    operator class for the new data type.  Later in this
    section, we will illustrate this concept in an example: a new
    operator class for the B-tree index method that stores and sorts
    complex numbers in ascending absolute value order.

-    Operator classes can be grouped into operator families
+    Operator classes can be grouped into operator families
    to show the relationships between semantically compatible classes.
    When only a single data type is involved, an operator class is
    sufficient, so we'll focus on that case first and then return to
    operator families.
@@ -43,16 +43,16 @@
   The routines for an index method do not directly know anything
   about the data types that the index method will operate on.
   Instead, an operator
-  class
+  class
   identifies the set of operations that the index method needs to use
  to work with a particular data type.  Operator classes are so
  called because one thing they specify is the set of
-  WHERE-clause operators that can be used with an index
+  WHERE-clause operators that can be used with an index
  (i.e., can be converted into an index-scan qualification).  An
  operator class can also specify some support
-  procedures that are needed by the internal operations of the
+  procedures that are needed by the internal operations of the
  index method, but do not directly correspond to any
-  WHERE-clause operator that can be used with the index.
+  WHERE-clause operator that can be used with the index.
@@ -83,17 +83,17 @@
  The operators associated with an operator class are identified by
-  strategy numbers, which serve to identify the semantics of
+  strategy numbers, which serve to identify the semantics of
  each operator within the context of its operator class.
  For example, B-trees impose a strict ordering on keys, lesser to greater,
-  and so operators like less than and greater than or equal
-  to are interesting with respect to a B-tree.
+  and so operators like less than and greater than or equal
+  to are interesting with respect to a B-tree.
  Because
  PostgreSQL allows the user to define operators,
  PostgreSQL cannot look at the name of an operator
-  (e.g., < or >=) and tell what kind of
+  (e.g., < or >=) and tell what kind of
  comparison it is.  Instead, the index method defines a set of
-  strategies, which can be thought of as generalized operators.
+  strategies, which can be thought of as generalized operators.
  Each operator class specifies which actual operator corresponds to each
  strategy for a particular data type and interpretation of the index
  semantics.
@@ -163,11 +163,11 @@
  GiST indexes are more flexible: they do not have a fixed set of
-  strategies at all.  Instead, the consistency support routine
+  strategies at all.  Instead, the consistency support routine
  of each particular GiST operator class interprets the strategy numbers
  however it likes.  As an example, several of the built-in GiST index
  operator classes index two-dimensional geometric objects, providing
-  the R-tree strategies shown in
+  the R-tree strategies shown in
  .  Four of these are true two-dimensional tests (overlaps, same,
  contains, contained by); four of them consider only the X direction;
  and the other four
@@ -175,7
- GiST Two-Dimensional <quote>R-tree</> Strategies + GiST Two-Dimensional <quote>R-tree</quote> Strategies @@ -327,7 +327,7 @@ don't have a fixed set of strategies either. Instead the support routines of each operator class interpret the strategy numbers according to the operator class's definition. As an example, the strategy numbers used by - the built-in Minmax operator classes are shown in + the built-in Minmax operator classes are shown in . @@ -369,8 +369,8 @@ Notice that all the operators listed above return Boolean values. In practice, all operators defined as index method search operators must return type boolean, since they must appear at the top - level of a WHERE clause to be used with an index. - (Some index access methods also support ordering operators, + level of a WHERE clause to be used with an index. + (Some index access methods also support ordering operators, which typically don't return Boolean values; that feature is discussed in .) @@ -396,7 +396,7 @@ functions should play each of these roles for a given data type and semantic interpretation. The index method defines the set of functions it needs, and the operator class identifies the correct - functions to use by assigning them to the support function numbers + functions to use by assigning them to the support function numbers specified by the index method. @@ -427,7 +427,7 @@ Return the addresses of C-callable sort support function(s), - as documented in utils/sortsupport.h (optional) + as documented in utils/sortsupport.h (optional) 2 @@ -485,52 +485,52 @@ - consistent + consistent determine whether key satisfies the query qualifier 1 - union + union compute union of a set of keys 2 - compress + compress compute a compressed representation of a key or value to be indexed 3 - decompress + decompress compute a decompressed representation of a compressed key 4 - penalty + penalty compute penalty for inserting new key into subtree with given subtree's key 5 - picksplit + picksplit determine which entries of a page are to be moved to the new page and compute the union keys for resulting pages 6 - equal + equal compare two keys and return true if they are equal 7 - distance + distance determine distance from key to query value (optional) 8 - fetch + fetch compute original representation of a compressed key for index-only scans (optional) 9 @@ -557,28 +557,28 @@ - config + config provide basic information about the operator class 1 - choose + choose determine how to insert a new value into an inner tuple 2 - picksplit + picksplit determine how to partition a set of values 3 - inner_consistent + inner_consistent determine which sub-partitions need to be searched for a query 4 - leaf_consistent + leaf_consistent determine whether key satisfies the query qualifier 5 @@ -605,7 +605,7 @@ - compare + compare compare two keys and return an integer less than zero, zero, or greater than zero, indicating whether the first key is less than, @@ -614,17 +614,17 @@ 1 - extractValue + extractValue extract keys from a value to be indexed 2 - extractQuery + extractQuery extract keys from a query condition 3 - consistent + consistent determine whether value matches query condition (Boolean variant) (optional if support function 6 is present) @@ -632,7 +632,7 @@ 4 - comparePartial + comparePartial compare partial key from query and key from index, and return an integer less than zero, zero, @@ -642,7 +642,7 @@ 5 - triConsistent + triConsistent determine whether value matches query condition (ternary variant) (optional if support function 
4 is present) @@ -672,7 +672,7 @@ - opcInfo + opcInfo return internal information describing the indexed columns' summary data @@ -680,17 +680,17 @@ 1 - add_value + add_value add a new value to an existing summary index tuple 2 - consistent + consistent determine whether value matches query condition 3 - union + union compute union of two summary tuples @@ -730,11 +730,11 @@ B-trees, the operators we require are: - absolute-value less-than (strategy 1) - absolute-value less-than-or-equal (strategy 2) - absolute-value equal (strategy 3) - absolute-value greater-than-or-equal (strategy 4) - absolute-value greater-than (strategy 5) + absolute-value less-than (strategy 1) + absolute-value less-than-or-equal (strategy 2) + absolute-value equal (strategy 3) + absolute-value greater-than-or-equal (strategy 4) + absolute-value greater-than (strategy 5) @@ -817,7 +817,7 @@ CREATE OPERATOR < ( type we'd probably want = to be the ordinary equality operation for complex numbers (and not the equality of the absolute values). In that case, we'd need to use some other - operator name for complex_abs_eq. + operator name for complex_abs_eq. @@ -894,7 +894,7 @@ CREATE OPERATOR CLASS complex_abs_ops The above example assumes that you want to make this new operator class the default B-tree operator class for the complex data type. - If you don't, just leave out the word DEFAULT. + If you don't, just leave out the word DEFAULT. @@ -917,11 +917,11 @@ CREATE OPERATOR CLASS complex_abs_ops To handle these needs, PostgreSQL uses the concept of an operator - familyoperator family. + familyoperator family. An operator family contains one or more operator classes, and can also contain indexable operators and corresponding support functions that belong to the family as a whole but not to any single class within the - family. We say that such operators and functions are loose + family. We say that such operators and functions are loose within the family, as opposed to being bound into a specific class. Typically each operator class contains single-data-type operators while cross-data-type operators are loose in the family. @@ -947,10 +947,10 @@ CREATE OPERATOR CLASS complex_abs_ops As an example, PostgreSQL has a built-in - B-tree operator family integer_ops, which includes operator - classes int8_ops, int4_ops, and - int2_ops for indexes on bigint (int8), - integer (int4), and smallint (int2) + B-tree operator family integer_ops, which includes operator + classes int8_ops, int4_ops, and + int2_ops for indexes on bigint (int8), + integer (int4), and smallint (int2) columns respectively. The family also contains cross-data-type comparison operators allowing any two of these types to be compared, so that an index on one of these types can be searched using a comparison value of another @@ -1043,7 +1043,7 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD ]]> - Notice that this definition overloads the operator strategy and + Notice that this definition overloads the operator strategy and support function numbers: each number occurs multiple times within the family. This is allowed so long as each instance of a particular number has distinct input data types. The instances that have @@ -1056,8 +1056,8 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD In a B-tree operator family, all the operators in the family must sort compatibly, meaning that the transitive laws hold across all the data types - supported by the family: if A = B and B = C, then A = C, - and if A < B and B < C, then A < C. 
Moreover, implicit + supported by the family: if A = B and B = C, then A = C, + and if A < B and B < C, then A < C. Moreover, implicit or binary coercion casts between types represented in the operator family must not change the associated sort ordering. For each operator in the family there must be a support function having the same @@ -1094,7 +1094,7 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD In BRIN, the requirements depends on the framework that provides the - operator classes. For operator classes based on minmax, + operator classes. For operator classes based on minmax, the behavior required is the same as for B-tree operator families: all the operators in the family must sort compatibly, and casts must not change the associated sort ordering. @@ -1128,14 +1128,14 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD - In particular, there are SQL features such as ORDER BY and - DISTINCT that require comparison and sorting of values. + In particular, there are SQL features such as ORDER BY and + DISTINCT that require comparison and sorting of values. To implement these features on a user-defined data type, PostgreSQL looks for the default B-tree operator - class for the data type. The equals member of this operator + class for the data type. The equals member of this operator class defines the system's notion of equality of values for - GROUP BY and DISTINCT, and the sort ordering - imposed by the operator class defines the default ORDER BY + GROUP BY and DISTINCT, and the sort ordering + imposed by the operator class defines the default ORDER BY ordering. @@ -1153,7 +1153,7 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD When there is no default operator class for a data type, you will get - errors like could not identify an ordering operator if you + errors like could not identify an ordering operator if you try to use these SQL features with the data type. @@ -1161,7 +1161,7 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD In PostgreSQL versions before 7.4, sorting and grouping operations would implicitly use operators named - =, <, and >. The new + =, <, and >. The new behavior of relying on default operator classes avoids having to make any assumption about the behavior of operators with particular names. @@ -1180,22 +1180,22 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD Some index access methods (currently, only GiST) support the concept of - ordering operators. What we have been discussing so far - are search operators. A search operator is one for which + ordering operators. What we have been discussing so far + are search operators. A search operator is one for which the index can be searched to find all rows satisfying - WHERE - indexed_column - operator - constant. + WHERE + indexed_column + operator + constant. Note that nothing is promised about the order in which the matching rows will be returned. In contrast, an ordering operator does not restrict the set of rows that can be returned, but instead determines their order. An ordering operator is one for which the index can be scanned to return rows in the order represented by - ORDER BY - indexed_column - operator - constant. + ORDER BY + indexed_column + operator + constant. The reason for defining ordering operators that way is that it supports nearest-neighbor searches, if the operator is one that measures distance. For example, a query like @@ -1205,7 +1205,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; finds the ten places closest to a given target point. 
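
   A minimal sketch of the setup behind that query, assuming a hypothetical
   places table; the built-in point_ops GiST operator class already supplies
   <-> as an ordering operator, so no custom operator class is needed:

       CREATE TABLE places (name text, location point);
       CREATE INDEX places_location_idx ON places USING gist (location);
       -- The index can return rows in order of increasing <-> distance,
       -- so the LIMIT stops the scan after the ten nearest rows.
       SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10;
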
A GiST index on the location column can do this efficiently because - <-> is an ordering operator. + <-> is an ordering operator. @@ -1217,17 +1217,17 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; a B-tree operator family that specifies the sort ordering of the result data type. As was stated in the previous section, B-tree operator families define PostgreSQL's notion of ordering, so - this is a natural representation. Since the point <-> - operator returns float8, it could be specified in an operator + this is a natural representation. Since the point <-> + operator returns float8, it could be specified in an operator class creation command like this: (point, point) FOR ORDER BY float_ops ]]> - where float_ops is the built-in operator family that includes - operations on float8. This declaration states that the index + where float_ops is the built-in operator family that includes + operations on float8. This declaration states that the index is able to return rows in order of increasing values of the - <-> operator. + <-> operator. @@ -1243,21 +1243,21 @@ OPERATOR 15 <-> (point, point) FOR ORDER BY float_ops Normally, declaring an operator as a member of an operator class (or family) means that the index method can retrieve exactly the set of rows - that satisfy a WHERE condition using the operator. For example: + that satisfy a WHERE condition using the operator. For example: SELECT * FROM table WHERE integer_column < 4; can be satisfied exactly by a B-tree index on the integer column. But there are cases where an index is useful as an inexact guide to the matching rows. For example, if a GiST index stores only bounding boxes - for geometric objects, then it cannot exactly satisfy a WHERE + for geometric objects, then it cannot exactly satisfy a WHERE condition that tests overlap between nonrectangular objects such as polygons. Yet we could use the index to find objects whose bounding box overlaps the bounding box of the target object, and then do the exact overlap test only on the objects found by the index. If this - scenario applies, the index is said to be lossy for the + scenario applies, the index is said to be lossy for the operator. Lossy index searches are implemented by having the index - method return a recheck flag when a row might or might + method return a recheck flag when a row might or might not really satisfy the query condition. The core system will then test the original query condition on the retrieved row to see whether it should be returned as a valid match. This approach works if @@ -1274,8 +1274,8 @@ SELECT * FROM table WHERE integer_column < 4; the bounding box of a complex object such as a polygon. In this case there's not much value in storing the whole polygon in the index entry — we might as well store just a simpler object of type - box. This situation is expressed by the STORAGE - option in CREATE OPERATOR CLASS: we'd write something like: + box. This situation is expressed by the STORAGE + option in CREATE OPERATOR CLASS: we'd write something like: CREATE OPERATOR CLASS polygon_ops @@ -1285,16 +1285,16 @@ CREATE OPERATOR CLASS polygon_ops At present, only the GiST, GIN and BRIN index methods support a - STORAGE type that's different from the column data type. - The GiST compress and decompress support - routines must deal with data-type conversion when STORAGE - is used. 
In GIN, the STORAGE type identifies the type of - the key values, which normally is different from the type + STORAGE type that's different from the column data type. + The GiST compress and decompress support + routines must deal with data-type conversion when STORAGE + is used. In GIN, the STORAGE type identifies the type of + the key values, which normally is different from the type of the indexed column — for example, an operator class for integer-array columns might have keys that are just integers. The - GIN extractValue and extractQuery support + GIN extractValue and extractQuery support routines are responsible for extracting keys from indexed values. - BRIN is similar to GIN: the STORAGE type identifies the + BRIN is similar to GIN: the STORAGE type identifies the type of the stored summary values, and operator classes' support procedures are responsible for interpreting the summary values correctly. diff --git a/doc/src/sgml/xml2.sgml b/doc/src/sgml/xml2.sgml index 9bbc9e75d7..35e1ccb7a1 100644 --- a/doc/src/sgml/xml2.sgml +++ b/doc/src/sgml/xml2.sgml @@ -8,7 +8,7 @@ - The xml2 module provides XPath querying and + The xml2 module provides XPath querying and XSLT functionality. @@ -16,7 +16,7 @@ Deprecation Notice - From PostgreSQL 8.3 on, there is XML-related + From PostgreSQL 8.3 on, there is XML-related functionality based on the SQL/XML standard in the core server. That functionality covers XML syntax checking and XPath queries, which is what this module does, and more, but the API is @@ -36,7 +36,7 @@ shows the functions provided by this module. These functions provide straightforward XML parsing and XPath queries. - All arguments are of type text, so for brevity that is not shown. + All arguments are of type text, so for brevity that is not shown.
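
   A brief usage sketch, assuming the module has been installed with
   CREATE EXTENSION xml2:

       CREATE EXTENSION xml2;
       -- well-formedness check (alias for xml_is_well_formed)
       SELECT xml_valid('<doc><item>one</item></doc>');   -- returns true
       -- evaluate an XPath expression, returning the result as text
       SELECT xpath_string('<doc><item>one</item></doc>', '/doc/item/text()');
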
@@ -63,8 +63,8 @@ This parses the document text in its parameter and returns true if the document is well-formed XML. (Note: this is an alias for the standard - PostgreSQL function xml_is_well_formed(). The - name xml_valid() is technically incorrect since validity + PostgreSQL function xml_is_well_formed(). The + name xml_valid() is technically incorrect since validity and well-formedness have different meanings in XML.) @@ -124,7 +124,7 @@ <itemtag>Value 2....</itemtag> </toptag> - If either toptag or itemtag is an empty string, the relevant tag is omitted. + If either toptag or itemtag is an empty string, the relevant tag is omitted. @@ -139,7 +139,7 @@ - Like xpath_nodeset(document, query, toptag, itemtag) but result omits both tags. + Like xpath_nodeset(document, query, toptag, itemtag) but result omits both tags. @@ -154,7 +154,7 @@ - Like xpath_nodeset(document, query, toptag, itemtag) but result omits toptag. + Like xpath_nodeset(document, query, toptag, itemtag) but result omits toptag. @@ -170,8 +170,8 @@ This function returns multiple values separated by the specified - separator, for example Value 1,Value 2,Value 3 if - separator is ,. + separator, for example Value 1,Value 2,Value 3 if + separator is ,. @@ -185,7 +185,7 @@ text - This is a wrapper for the above function that uses , + This is a wrapper for the above function that uses , as the separator. @@ -206,7 +206,7 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) - xpath_table is a table function that evaluates a set of XPath + xpath_table is a table function that evaluates a set of XPath queries on each of a set of documents and returns the results as a table. The primary key field from the original document table is returned as the first column of the result so that the result set @@ -228,7 +228,7 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) key - the name of the key field — this is just a field to be used as + the name of the key field — this is just a field to be used as the first column of the output table, i.e., it identifies the record from which each output row came (see note below about multiple values) @@ -285,7 +285,7 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) - so those parameters can be anything valid in those particular + so those parameters can be anything valid in those particular locations. The result from this SELECT needs to return exactly two columns (which it will unless you try to list multiple fields for key or document). Beware that this simplistic approach requires that you @@ -293,8 +293,8 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) - The function has to be used in a FROM expression, with an - AS clause to specify the output columns; for example + The function has to be used in a FROM expression, with an + AS clause to specify the output columns; for example SELECT * FROM xpath_table('article_id', @@ -304,8 +304,8 @@ xpath_table('article_id', 'date_entered > ''2003-01-01'' ') AS t(article_id integer, author text, page_count integer, title text); - The AS clause defines the names and types of the columns in the - output table. The first is the key field and the rest correspond + The AS clause defines the names and types of the columns in the + output table. The first is the key field and the rest correspond to the XPath queries. If there are more XPath queries than result columns, the extra queries will be ignored. 
If there are more result columns @@ -313,19 +313,19 @@ AS t(article_id integer, author text, page_count integer, title text); - Notice that this example defines the page_count result + Notice that this example defines the page_count result column as an integer. The function deals internally with string representations, so when you say you want an integer in the output, it will take the string representation of the XPath result and use PostgreSQL input - functions to transform it into an integer (or whatever type the AS + functions to transform it into an integer (or whatever type the AS clause requests). An error will result if it can't do this — for example if the result is empty — so you may wish to just stick to - text as the column type if you think your data has any problems. + text as the column type if you think your data has any problems. - The calling SELECT statement doesn't necessarily have to be - just SELECT * — it can reference the output + The calling SELECT statement doesn't necessarily have to be + just SELECT * — it can reference the output columns by name or join them to other tables. The function produces a virtual table with which you can perform any operation you wish (e.g. aggregation, joining, sorting etc). So we could also have: @@ -346,7 +346,7 @@ WHERE t.author_id = p.person_id; Multivalued Results - The xpath_table function assumes that the results of each XPath query + The xpath_table function assumes that the results of each XPath query might be multivalued, so the number of rows returned by the function may not be the same as the number of input documents. The first row returned contains the first result from each query, the second row the @@ -393,8 +393,8 @@ WHERE id = 1 ORDER BY doc_num, line_num - To get doc_num on every line, the solution is to use two invocations - of xpath_table and join the results: + To get doc_num on every line, the solution is to use two invocations + of xpath_table and join the results: SELECT t.*,i.doc_num FROM @@ -437,15 +437,15 @@ xslt_process(text document, text stylesheet, text paramlist) returns text This function applies the XSL stylesheet to the document and returns - the transformed result. The paramlist is a list of parameter + the transformed result. The paramlist is a list of parameter assignments to be used in the transformation, specified in the form - a=1,b=2. Note that the + a=1,b=2. Note that the parameter parsing is very simple-minded: parameter values cannot contain commas! - There is also a two-parameter version of xslt_process which + There is also a two-parameter version of xslt_process which does not pass any parameters to the transformation. diff --git a/doc/src/sgml/xoper.sgml b/doc/src/sgml/xoper.sgml index d484d80105..4b0716951a 100644 --- a/doc/src/sgml/xoper.sgml +++ b/doc/src/sgml/xoper.sgml @@ -65,12 +65,12 @@ SELECT (a + b) AS c FROM test_complex; We've shown how to create a binary operator here. To create unary - operators, just omit one of leftarg (for left unary) or - rightarg (for right unary). The procedure + operators, just omit one of leftarg (for left unary) or + rightarg (for right unary). The procedure clause and the argument clauses are the only required items in - CREATE OPERATOR. The commutator + CREATE OPERATOR. The commutator clause shown in the example is an optional hint to the query - optimizer. Further details about commutator and other + optimizer. Further details about commutator and other optimizer hints appear in the next section. 
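
   A hedged sketch of the left-unary (prefix) case just described; the
   operator name and support function here are hypothetical:

       -- Only rightarg is given, so the operator is written before its
       -- operand, as in @@ some_complex_value.
       CREATE OPERATOR @@ (
           rightarg = complex,
           procedure = complex_abs_value   -- assumed function: complex -> float8
       );
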
@@ -98,16 +98,16 @@ SELECT (a + b) AS c FROM test_complex; - <literal>COMMUTATOR</> + <literal>COMMUTATOR</literal> - The COMMUTATOR clause, if provided, names an operator that is the + The COMMUTATOR clause, if provided, names an operator that is the commutator of the operator being defined. We say that operator A is the commutator of operator B if (x A y) equals (y B x) for all possible input values x, y. Notice that B is also the commutator of A. For example, - operators < and > for a particular data type are usually each others' - commutators, and operator + is usually commutative with itself. - But operator - is usually not commutative with anything. + operators < and > for a particular data type are usually each others' + commutators, and operator + is usually commutative with itself. + But operator - is usually not commutative with anything. @@ -115,23 +115,23 @@ SELECT (a + b) AS c FROM test_complex; right operand type of its commutator, and vice versa. So the name of the commutator operator is all that PostgreSQL needs to be given to look up the commutator, and that's all that needs to - be provided in the COMMUTATOR clause. + be provided in the COMMUTATOR clause. It's critical to provide commutator information for operators that will be used in indexes and join clauses, because this allows the - query optimizer to flip around such a clause to the forms + query optimizer to flip around such a clause to the forms needed for different plan types. For example, consider a query with - a WHERE clause like tab1.x = tab2.y, where tab1.x - and tab2.y are of a user-defined type, and suppose that - tab2.y is indexed. The optimizer cannot generate an + a WHERE clause like tab1.x = tab2.y, where tab1.x + and tab2.y are of a user-defined type, and suppose that + tab2.y is indexed. The optimizer cannot generate an index scan unless it can determine how to flip the clause around to - tab2.y = tab1.x, because the index-scan machinery expects + tab2.y = tab1.x, because the index-scan machinery expects to see the indexed column on the left of the operator it is given. - PostgreSQL will not simply + PostgreSQL will not simply assume that this is a valid transformation — the creator of the - = operator must specify that it is valid, by marking the + = operator must specify that it is valid, by marking the operator with commutator information. @@ -145,20 +145,20 @@ SELECT (a + b) AS c FROM test_complex; - One way is to omit the COMMUTATOR clause in the first operator that + One way is to omit the COMMUTATOR clause in the first operator that you define, and then provide one in the second operator's definition. Since PostgreSQL knows that commutative operators come in pairs, when it sees the second definition it will - automatically go back and fill in the missing COMMUTATOR clause in + automatically go back and fill in the missing COMMUTATOR clause in the first definition. - The other, more straightforward way is just to include COMMUTATOR clauses + The other, more straightforward way is just to include COMMUTATOR clauses in both definitions. When PostgreSQL processes - the first definition and realizes that COMMUTATOR refers to a nonexistent + the first definition and realizes that COMMUTATOR refers to a nonexistent operator, the system will make a dummy entry for that operator in the system catalog. 
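
   A sketch of that second approach, with hypothetical support functions;
   whichever definition is processed first creates the dummy entry that the
   other then completes:

       CREATE OPERATOR <<< (
           leftarg = complex, rightarg = complex,
           procedure = complex_abs_lt,   -- hypothetical comparison function
           commutator = >>>
       );
       CREATE OPERATOR >>> (
           leftarg = complex, rightarg = complex,
           procedure = complex_abs_gt,   -- hypothetical comparison function
           commutator = <<<
       );
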
This dummy entry will have valid data only for the operator name, left and right operand types, and result type, @@ -175,15 +175,15 @@ SELECT (a + b) AS c FROM test_complex; - <literal>NEGATOR</> + <literal>NEGATOR</literal> - The NEGATOR clause, if provided, names an operator that is the + The NEGATOR clause, if provided, names an operator that is the negator of the operator being defined. We say that operator A is the negator of operator B if both return Boolean results and (x A y) equals NOT (x B y) for all possible inputs x, y. Notice that B is also the negator of A. - For example, < and >= are a negator pair for most data types. + For example, < and >= are a negator pair for most data types. An operator can never validly be its own negator. @@ -195,15 +195,15 @@ SELECT (a + b) AS c FROM test_complex; An operator's negator must have the same left and/or right operand types - as the operator to be defined, so just as with COMMUTATOR, only the operator - name need be given in the NEGATOR clause. + as the operator to be defined, so just as with COMMUTATOR, only the operator + name need be given in the NEGATOR clause. Providing a negator is very helpful to the query optimizer since - it allows expressions like NOT (x = y) to be simplified into - x <> y. This comes up more often than you might think, because - NOT operations can be inserted as a consequence of other rearrangements. + it allows expressions like NOT (x = y) to be simplified into + x <> y. This comes up more often than you might think, because + NOT operations can be inserted as a consequence of other rearrangements. @@ -214,13 +214,13 @@ SELECT (a + b) AS c FROM test_complex; - <literal>RESTRICT</> + <literal>RESTRICT</literal> - The RESTRICT clause, if provided, names a restriction selectivity + The RESTRICT clause, if provided, names a restriction selectivity estimation function for the operator. (Note that this is a function - name, not an operator name.) RESTRICT clauses only make sense for - binary operators that return boolean. The idea behind a restriction + name, not an operator name.) RESTRICT clauses only make sense for + binary operators that return boolean. The idea behind a restriction selectivity estimator is to guess what fraction of the rows in a table will satisfy a WHERE-clause condition of the form: @@ -228,10 +228,10 @@ column OP constant for the current operator and a particular constant value. This assists the optimizer by - giving it some idea of how many rows will be eliminated by WHERE + giving it some idea of how many rows will be eliminated by WHERE clauses that have this form. (What happens if the constant is on the left, you might be wondering? Well, that's one of the things that - COMMUTATOR is for...) + COMMUTATOR is for...) @@ -240,12 +240,12 @@ column OP constant one of the system's standard estimators for many of your own operators. These are the standard restriction estimators: - eqsel for = - neqsel for <> - scalarltsel for < - scalarlesel for <= - scalargtsel for > - scalargesel for >= + eqsel for = + neqsel for <> + scalarltsel for < + scalarlesel for <= + scalargtsel for > + scalargesel for >= @@ -258,14 +258,14 @@ column OP constant - You can use scalarltsel, scalarlesel, - scalargtsel and scalargesel for comparisons on + You can use scalarltsel, scalarlesel, + scalargtsel and scalargesel for comparisons on data types that have some sensible means of being converted into numeric scalars for range comparisons. 
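
   For instance, a sketch attaching the standard estimator for < to a
   comparison operator (the support function name is hypothetical):

       CREATE OPERATOR < (
           leftarg = complex, rightarg = complex,
           procedure = complex_abs_lt,   -- hypothetical
           commutator = >,
           restrict = scalarltsel        -- standard estimator for "<"
       );
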
If possible, add the data type to those understood by the function convert_to_scalar() in src/backend/utils/adt/selfuncs.c. (Eventually, this function should be replaced by per-data-type functions - identified through a column of the pg_type system catalog; but that hasn't happened + identified through a column of the pg_type system catalog; but that hasn't happened yet.) If you do not do this, things will still work, but the optimizer's estimates won't be as good as they could be. @@ -279,15 +279,15 @@ column OP constant - <literal>JOIN</> + <literal>JOIN</literal> - The JOIN clause, if provided, names a join selectivity + The JOIN clause, if provided, names a join selectivity estimation function for the operator. (Note that this is a function - name, not an operator name.) JOIN clauses only make sense for + name, not an operator name.) JOIN clauses only make sense for binary operators that return boolean. The idea behind a join selectivity estimator is to guess what fraction of the rows in a - pair of tables will satisfy a WHERE-clause condition of the form: + pair of tables will satisfy a WHERE-clause condition of the form: table1.column1 OP table2.column2 @@ -301,27 +301,27 @@ table1.column1 OP table2.column2 a join selectivity estimator function, but will just suggest that you use one of the standard estimators if one is applicable: - eqjoinsel for = - neqjoinsel for <> - scalarltjoinsel for < - scalarlejoinsel for <= - scalargtjoinsel for > - scalargejoinsel for >= - areajoinsel for 2D area-based comparisons - positionjoinsel for 2D position-based comparisons - contjoinsel for 2D containment-based comparisons + eqjoinsel for = + neqjoinsel for <> + scalarltjoinsel for < + scalarlejoinsel for <= + scalargtjoinsel for > + scalargejoinsel for >= + areajoinsel for 2D area-based comparisons + positionjoinsel for 2D position-based comparisons + contjoinsel for 2D containment-based comparisons - <literal>HASHES</> + <literal>HASHES</literal> The HASHES clause, if present, tells the system that it is permissible to use the hash join method for a join based on this - operator. HASHES only makes sense for a binary operator that - returns boolean, and in practice the operator must represent + operator. HASHES only makes sense for a binary operator that + returns boolean, and in practice the operator must represent equality for some data type or pair of data types. @@ -336,7 +336,7 @@ table1.column1 OP table2.column2 hashing for operators that take the same data type on both sides. However, sometimes it is possible to design compatible hash functions for two or more data types; that is, functions that will generate the - same hash codes for equal values, even though the values + same hash codes for equal values, even though the values have different representations. For example, it's fairly simple to arrange this property when hashing integers of different widths. @@ -357,10 +357,10 @@ table1.column1 OP table2.column2 are machine-dependent ways in which it might fail to do the right thing. For example, if your data type is a structure in which there might be uninteresting pad bits, you cannot simply pass the whole structure to - hash_any. (Unless you write your other operators and + hash_any. (Unless you write your other operators and functions to ensure that the unused bits are always zero, which is the recommended strategy.) 
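
   A sketch of an equality operator marked hashable; the support function is
   hypothetical, and in practice the data type must also have a matching hash
   operator class before the planner will actually choose a hash join:

       CREATE OPERATOR = (
           leftarg = complex, rightarg = complex,
           procedure = complex_abs_eq,   -- hypothetical equality function
           commutator = =, negator = <>,
           restrict = eqsel, join = eqjoinsel,
           hashes
       );
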
- Another example is that on machines that meet the IEEE + Another example is that on machines that meet the IEEE floating-point standard, negative zero and positive zero are different values (different bit patterns) but they are defined to compare equal. If a float value might contain negative zero then extra steps are needed @@ -392,8 +392,8 @@ table1.column1 OP table2.column2 strict, the function must also be complete: that is, it should return true or false, never null, for any two nonnull inputs. If this rule is - not followed, hash-optimization of IN operations might - generate wrong results. (Specifically, IN might return + not followed, hash-optimization of IN operations might + generate wrong results. (Specifically, IN might return false where the correct answer according to the standard would be null; or it might yield an error complaining that it wasn't prepared for a null result.) @@ -403,13 +403,13 @@ table1.column1 OP table2.column2 - <literal>MERGES</> + <literal>MERGES</literal> The MERGES clause, if present, tells the system that it is permissible to use the merge-join method for a join based on this - operator. MERGES only makes sense for a binary operator that - returns boolean, and in practice the operator must represent + operator. MERGES only makes sense for a binary operator that + returns boolean, and in practice the operator must represent equality for some data type or pair of data types. @@ -418,7 +418,7 @@ table1.column1 OP table2.column2 into order and then scanning them in parallel. So, both data types must be capable of being fully ordered, and the join operator must be one that can only succeed for pairs of values that fall at the - same place + same place in the sort order. In practice this means that the join operator must behave like equality. But it is possible to merge-join two distinct data types so long as they are logically compatible. For @@ -430,7 +430,7 @@ table1.column1 OP table2.column2 To be marked MERGES, the join operator must appear - as an equality member of a btree index operator family. + as an equality member of a btree index operator family. This is not enforced when you create the operator, since of course the referencing operator family couldn't exist yet. But the operator will not actually be used for merge joins @@ -445,7 +445,7 @@ table1.column1 OP table2.column2 if they are different) that appears in the same operator family. If this is not the case, planner errors might occur when the operator is used. Also, it is a good idea (but not strictly required) for - a btree operator family that supports multiple data types to provide + a btree operator family that supports multiple data types to provide equality operators for every combination of the data types; this allows better optimization. diff --git a/doc/src/sgml/xplang.sgml b/doc/src/sgml/xplang.sgml index 4460c8f361..60d0cc6190 100644 --- a/doc/src/sgml/xplang.sgml +++ b/doc/src/sgml/xplang.sgml @@ -11,7 +11,7 @@ PostgreSQL allows user-defined functions to be written in other languages besides SQL and C. These other languages are generically called procedural - languages (PLs). For a function + languages (PLs). For a function written in a procedural language, the database server has no built-in knowledge about how to interpret the function's source text. Instead, the task is passed to a special handler that knows @@ -44,9 +44,9 @@ A procedural language must be installed into each database where it is to be used. 
But procedural languages installed in - the database template1 are automatically available in all + the database template1 are automatically available in all subsequently created databases, since their entries in - template1 will be copied by CREATE DATABASE. + template1 will be copied by CREATE DATABASE. So the database administrator can decide which languages are available in which databases and can make some languages available by default if desired. @@ -54,8 +54,8 @@ For the languages supplied with the standard distribution, it is - only necessary to execute CREATE EXTENSION - language_name to install the language into the + only necessary to execute CREATE EXTENSION + language_name to install the language into the current database. The manual procedure described below is only recommended for installing languages that have not been packaged as extensions. @@ -70,7 +70,7 @@ A procedural language is installed in a database in five steps, which must be carried out by a database superuser. In most cases the required SQL commands should be packaged as the installation script - of an extension, so that CREATE EXTENSION can be + of an extension, so that CREATE EXTENSION can be used to execute them. @@ -103,7 +103,7 @@ CREATE FUNCTION handler_function_name() - Optionally, the language handler can provide an inline + Optionally, the language handler can provide an inline handler function that executes anonymous code blocks ( commands) written in this language. If an inline handler function @@ -119,10 +119,10 @@ CREATE FUNCTION inline_function_name(internal) - Optionally, the language handler can provide a validator + Optionally, the language handler can provide a validator function that checks a function definition for correctness without actually executing it. The validator function is called by - CREATE FUNCTION if it exists. If a validator function + CREATE FUNCTION if it exists. If a validator function is provided by the language, declare it with a command like CREATE FUNCTION validator_function_name(oid) @@ -217,13 +217,13 @@ CREATE TRUSTED PROCEDURAL LANGUAGE plperl is built and installed into the library directory; furthermore, the PL/pgSQL language itself is installed in all databases. - If Tcl support is configured in, the handlers for - PL/Tcl and PL/TclU are built and installed + If Tcl support is configured in, the handlers for + PL/Tcl and PL/TclU are built and installed in the library directory, but the language itself is not installed in any database by default. - Likewise, the PL/Perl and PL/PerlU + Likewise, the PL/Perl and PL/PerlU handlers are built and installed if Perl support is configured, and the - PL/PythonU handler is installed if Python support is + PL/PythonU handler is installed if Python support is configured, but these languages are not installed by default. diff --git a/doc/src/sgml/xtypes.sgml b/doc/src/sgml/xtypes.sgml index ac0b8a2943..2f90c1d42c 100644 --- a/doc/src/sgml/xtypes.sgml +++ b/doc/src/sgml/xtypes.sgml @@ -12,7 +12,7 @@ As described in , PostgreSQL can be extended to support new data types. This section describes how to define new base types, - which are data types defined below the level of the SQL + which are data types defined below the level of the SQL language. Creating a new base type requires implementing functions to operate on the type in a low-level language, usually C. @@ -20,8 +20,8 @@ The examples in this section can be found in complex.sql and complex.c - in the src/tutorial directory of the source distribution. 
- See the README file in that directory for instructions + in the src/tutorial directory of the source distribution. + See the README file in that directory for instructions about running the examples. @@ -45,7 +45,7 @@ - Suppose we want to define a type complex that represents + Suppose we want to define a type complex that represents complex numbers. A natural way to represent a complex number in memory would be the following C structure: @@ -57,7 +57,7 @@ typedef struct Complex { We will need to make this a pass-by-reference type, since it's too - large to fit into a single Datum value. + large to fit into a single Datum value. @@ -130,7 +130,7 @@ complex_out(PG_FUNCTION_ARGS) external binary representation is. Most of the built-in data types try to provide a machine-independent binary representation. For complex, we will piggy-back on the binary I/O converters - for type float8: + for type float8: PostgreSQL automatically provides support for arrays of that type. The array type typically has the same name as the base type with the underscore character - (_) prepended. + (_) prepended. @@ -237,7 +237,7 @@ CREATE TYPE complex ( If the internal representation of the data type is variable-length, the internal representation must follow the standard layout for variable-length data: the first four bytes must be a char[4] field which is - never accessed directly (customarily named vl_len_). You + never accessed directly (customarily named vl_len_). You must use the SET_VARSIZE() macro to store the total size of the datum (including the length field itself) in this field and VARSIZE() to retrieve it. (These macros exist @@ -258,41 +258,41 @@ CREATE TYPE complex ( If the values of your data type vary in size (in internal form), it's - usually desirable to make the data type TOAST-able (see TOAST-able (see ). You should do this even if the values are always too small to be compressed or stored externally, because - TOAST can save space on small data too, by reducing header + TOAST can save space on small data too, by reducing header overhead. - To support TOAST storage, the C functions operating on the data + To support TOAST storage, the C functions operating on the data type must always be careful to unpack any toasted values they are handed - by using PG_DETOAST_DATUM. (This detail is customarily hidden + by using PG_DETOAST_DATUM. (This detail is customarily hidden by defining type-specific GETARG_DATATYPE_P macros.) Then, when running the CREATE TYPE command, specify the - internal length as variable and select some appropriate storage - option other than plain. + internal length as variable and select some appropriate storage + option other than plain. If data alignment is unimportant (either just for a specific function or because the data type specifies byte alignment anyway) then it's possible - to avoid some of the overhead of PG_DETOAST_DATUM. You can use - PG_DETOAST_DATUM_PACKED instead (customarily hidden by - defining a GETARG_DATATYPE_PP macro) and using the macros - VARSIZE_ANY_EXHDR and VARDATA_ANY to access + to avoid some of the overhead of PG_DETOAST_DATUM. You can use + PG_DETOAST_DATUM_PACKED instead (customarily hidden by + defining a GETARG_DATATYPE_PP macro) and using the macros + VARSIZE_ANY_EXHDR and VARDATA_ANY to access a potentially-packed datum. Again, the data returned by these macros is not aligned even if the data type definition specifies an alignment. If the alignment is important you - must go through the regular PG_DETOAST_DATUM interface. 
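
   On the SQL side, a hedged sketch of declaring such a TOAST-able
   variable-length type; the I/O function names are assumptions, and the C
   functions behind them would have to exist first:

       CREATE TYPE mytype (
           input = mytype_in,            -- hypothetical C input function
           output = mytype_out,          -- hypothetical C output function
           internallength = variable,
           storage = extended            -- any setting other than plain allows TOAST
       );
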
+ must go through the regular PG_DETOAST_DATUM interface. - Older code frequently declares vl_len_ as an - int32 field instead of char[4]. This is OK as long as - the struct definition has other fields that have at least int32 + Older code frequently declares vl_len_ as an + int32 field instead of char[4]. This is OK as long as + the struct definition has other fields that have at least int32 alignment. But it is dangerous to use such a struct definition when working with a potentially unaligned datum; the compiler may take it as license to assume the datum actually is aligned, leading to core dumps on @@ -301,28 +301,28 @@ CREATE TYPE complex ( - Another feature that's enabled by TOAST support is the - possibility of having an expanded in-memory data + Another feature that's enabled by TOAST support is the + possibility of having an expanded in-memory data representation that is more convenient to work with than the format that - is stored on disk. The regular or flat varlena storage format + is stored on disk. The regular or flat varlena storage format is ultimately just a blob of bytes; it cannot for example contain pointers, since it may get copied to other locations in memory. For complex data types, the flat format may be quite expensive to work - with, so PostgreSQL provides a way to expand + with, so PostgreSQL provides a way to expand the flat format into a representation that is more suited to computation, and then pass that format in-memory between functions of the data type. To use expanded storage, a data type must define an expanded format that - follows the rules given in src/include/utils/expandeddatum.h, - and provide functions to expand a flat varlena value into - expanded format and flatten the expanded format back to the + follows the rules given in src/include/utils/expandeddatum.h, + and provide functions to expand a flat varlena value into + expanded format and flatten the expanded format back to the regular varlena representation. Then ensure that all C functions for the data type can accept either representation, possibly by converting one into the other immediately upon receipt. This does not require fixing all existing functions for the data type at once, because the standard - PG_DETOAST_DATUM macro is defined to convert expanded inputs + PG_DETOAST_DATUM macro is defined to convert expanded inputs into regular flat format. Therefore, existing functions that work with the flat varlena format will continue to work, though slightly inefficiently, with expanded inputs; they need not be converted until and @@ -344,14 +344,14 @@ CREATE TYPE complex ( will detoast external, short-header, and compressed varlena inputs, but not expanded inputs. Such a function can be defined as returning a pointer to a union of the flat varlena format and the expanded format. - Callers can use the VARATT_IS_EXPANDED_HEADER() macro to + Callers can use the VARATT_IS_EXPANDED_HEADER() macro to determine which format they received. - The TOAST infrastructure not only allows regular varlena + The TOAST infrastructure not only allows regular varlena values to be distinguished from expanded values, but also - distinguishes read-write and read-only pointers to + distinguishes read-write and read-only pointers to expanded values. C functions that only need to examine an expanded value, or will only change it in safe and non-semantically-visible ways, need not care which type of pointer they receive. 
C functions that @@ -368,7 +368,7 @@ CREATE TYPE complex ( For examples of working with expanded values, see the standard array infrastructure, particularly - src/backend/utils/adt/array_expanded.c. + src/backend/utils/adt/array_expanded.c. From f5b73093339d965744607b786d7c34bf8f430935 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Wed, 18 Oct 2017 13:21:43 +0200 Subject: [PATCH 0406/1087] Make release notes aware that --xlog-method was renamed Author: David G. Johnston Discussion: https:/postgr.es/m/CAKFQuwaCsb-OKOjQXGeN0R7byxiRWvr7OtyKDbJoYgiF2vBG4Q@mail.gmail.com --- doc/src/sgml/release-10.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 116f7224da..f1f7cfed5f 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -242,7 +242,7 @@ This changes pg_basebackup's - / default to stream. + / default to stream. An option value none has been added to reproduce the old behavior. The pg_basebackup option has been removed (instead, use -X fetch). From bf54c0f05c0a58db17627724a83e1b6d4ec2712c Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Wed, 18 Oct 2017 13:29:16 +0200 Subject: [PATCH 0407/1087] Make OWNER TO subcommand mention consistent We say 'OWNER TO' in the synopsis; let's use that form elsewhere. There is a paragraph in the section that refers to various subcommands very loosely (including OWNER); I didn't think it was an improvement to change that one. This is a fairly inconsequential change, so no backpatch. Author: Amit Langote Discussion: https://postgr.es/m/69ec7b51-03e5-f523-95ce-c070ee790e70@lab.ntt.co.jp --- doc/src/sgml/ref/alter_table.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 68393d70b4..b4b8dab911 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -713,7 +713,7 @@ ALTER TABLE [ IF EXISTS ] name - OWNER + OWNER TO This form changes the owner of the table, sequence, view, materialized view, From 927e1ee2cb74e3bc49454dfa181dcce83b70d371 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 19 Oct 2017 05:58:39 -0400 Subject: [PATCH 0408/1087] UCS_to_most.pl: Process encodings in sorted order Otherwise the order depends on the Perl hash implementation, making it cumbersome to scan the output when debugging. --- src/backend/utils/mb/Unicode/UCS_to_most.pl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/mb/Unicode/UCS_to_most.pl b/src/backend/utils/mb/Unicode/UCS_to_most.pl index 60431a6a27..1c3922e2ff 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_most.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_most.pl @@ -50,7 +50,7 @@ 'GBK' => 'CP936.TXT'); # make maps for all encodings if not specified -my @charsets = (scalar(@ARGV) > 0) ? @ARGV : keys(%filename); +my @charsets = (scalar(@ARGV) > 0) ? 
@ARGV : sort keys(%filename); foreach my $charset (@charsets) { From bcf2e5ceb0998318deb36911b13677e88d45c8b4 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Thu, 19 Oct 2017 13:54:33 +0200 Subject: [PATCH 0409/1087] Fix typo in release notes Spotted by Piotr Stefaniak --- doc/src/sgml/release-10.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index f1f7cfed5f..bc026d31a6 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -347,7 +347,7 @@ The server parameter no longer supports off or plain. The UNENCRYPTED option is no longer supported in - CREATE/ALTER USER ... PASSSWORD. Similarly, the + CREATE/ALTER USER ... PASSWORD. Similarly, the option has been removed from createuser. Unencrypted passwords migrated from older versions will be stored encrypted in this release. The default From 275c4be19deec4288c303cc9ba3a3540bd1b8f5f Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Thu, 19 Oct 2017 13:57:20 +0200 Subject: [PATCH 0410/1087] Fix typo Masahiko Sawada --- src/backend/libpq/auth.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 174ef1c49d..ab74fd8dfd 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -2742,7 +2742,7 @@ typedef struct #define RADIUS_ACCESS_ACCEPT 2 #define RADIUS_ACCESS_REJECT 3 -/* RAIDUS attributes */ +/* RADIUS attributes */ #define RADIUS_USER_NAME 1 #define RADIUS_PASSWORD 2 #define RADIUS_SERVICE_TYPE 6 From 752871b6de9b4c7d2c685a059adb55887e169201 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Thu, 19 Oct 2017 13:58:30 +0200 Subject: [PATCH 0411/1087] Fix typos David Rowley --- src/backend/optimizer/path/allpaths.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 5535b63803..4e565b3c00 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -1353,7 +1353,7 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, build_partitioned_rels = true; break; default: - elog(ERROR, "unexpcted rtekind: %d", (int) rte->rtekind); + elog(ERROR, "unexpected rtekind: %d", (int) rte->rtekind); } } else if (rel->reloptkind == RELOPT_JOINREL && rel->part_scheme) @@ -3262,10 +3262,10 @@ generate_partition_wise_join_paths(PlannerInfo *root, RelOptInfo *rel) return; /* - * Nothing to do if the relation is not partitioned. An outer join - * relation which had empty inner relation in every pair will have rest of - * the partitioning properties set except the child-join RelOptInfos. See - * try_partition_wise_join() for more explanation. + * We've nothing to do if the relation is not partitioned. An outer join + * relation which had an empty inner relation in every pair will have the + * rest of the partitioning properties set except the child-join + * RelOptInfos. See try_partition_wise_join() for more details. */ if (rel->nparts <= 0 || rel->part_rels == NULL) return; @@ -3284,7 +3284,7 @@ generate_partition_wise_join_paths(PlannerInfo *root, RelOptInfo *rel) /* Add partition-wise join paths for partitioned child-joins. */ generate_partition_wise_join_paths(root, child_rel); - /* Dummy children will not be scanned, so ingore those. */ + /* Dummy children will not be scanned, so ignore those. 
*/ if (IS_DUMMY_REL(child_rel)) continue; From 4b95cc1dc36c9d1971f757e9b519fcc442833f0e Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Thu, 19 Oct 2017 14:14:18 +0200 Subject: [PATCH 0412/1087] Add more tests for reloptions This is preparation for a future patch to extensively change how reloptions work. Author: Nikolay Shaplov Reviewed-by: Michael Paquier Discussion: https://postgr.es/m/2615372.orqtEn8VGB@x200m --- contrib/bloom/expected/bloom.out | 17 ++ contrib/bloom/sql/bloom.sql | 11 ++ src/test/regress/expected/alter_table.out | 7 + src/test/regress/expected/create_index.out | 2 +- src/test/regress/expected/gist.out | 18 ++ src/test/regress/expected/hash_index.out | 12 ++ src/test/regress/expected/reloptions.out | 185 +++++++++++++++++++++ src/test/regress/expected/spgist.out | 12 +- src/test/regress/input/tablespace.source | 4 +- src/test/regress/output/tablespace.source | 4 +- src/test/regress/parallel_schedule | 2 +- src/test/regress/serial_schedule | 1 + src/test/regress/sql/alter_table.sql | 7 + src/test/regress/sql/create_index.sql | 2 +- src/test/regress/sql/gist.sql | 14 ++ src/test/regress/sql/hash_index.sql | 10 ++ src/test/regress/sql/reloptions.sql | 113 +++++++++++++ src/test/regress/sql/spgist.sql | 10 +- 18 files changed, 422 insertions(+), 9 deletions(-) create mode 100644 src/test/regress/expected/reloptions.out create mode 100644 src/test/regress/sql/reloptions.sql diff --git a/contrib/bloom/expected/bloom.out b/contrib/bloom/expected/bloom.out index cbc50f757b..5ab9e34f82 100644 --- a/contrib/bloom/expected/bloom.out +++ b/contrib/bloom/expected/bloom.out @@ -210,3 +210,20 @@ ORDER BY 1; text_ops | t (2 rows) +-- +-- relation options +-- +DROP INDEX bloomidx; +CREATE INDEX bloomidx ON tst USING bloom (i, t) WITH (length=7, col1=4); +SELECT reloptions FROM pg_class WHERE oid = 'bloomidx'::regclass; + reloptions +------------------- + {length=7,col1=4} +(1 row) + +-- check for min and max values +\set VERBOSITY terse +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (length=0); +ERROR: value 0 out of bounds for option "length" +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (col1=0); +ERROR: value 0 out of bounds for option "col1" diff --git a/contrib/bloom/sql/bloom.sql b/contrib/bloom/sql/bloom.sql index 22274609f2..32755f2b1a 100644 --- a/contrib/bloom/sql/bloom.sql +++ b/contrib/bloom/sql/bloom.sql @@ -81,3 +81,14 @@ SELECT opcname, amvalidate(opc.oid) FROM pg_opclass opc JOIN pg_am am ON am.oid = opcmethod WHERE amname = 'bloom' ORDER BY 1; + +-- +-- relation options +-- +DROP INDEX bloomidx; +CREATE INDEX bloomidx ON tst USING bloom (i, t) WITH (length=7, col1=4); +SELECT reloptions FROM pg_class WHERE oid = 'bloomidx'::regclass; +-- check for min and max values +\set VERBOSITY terse +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (length=0); +CREATE INDEX bloomidx2 ON tst USING bloom (i, t) WITH (col1=0); diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 98f4db1f85..838588757a 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3644,3 +3644,10 @@ create table parted_validate_test_1 partition of parted_validate_test for values alter table parted_validate_test add constraint parted_validate_test_chka check (a > 0) not valid; alter table parted_validate_test validate constraint parted_validate_test_chka; drop table parted_validate_test; +-- test alter column options +CREATE TABLE tmp(i integer); +INSERT INTO tmp VALUES (1); +ALTER 
TABLE tmp ALTER COLUMN i SET (n_distinct = 1, n_distinct_inherited = 2); +ALTER TABLE tmp ALTER COLUMN i RESET (n_distinct_inherited); +ANALYZE tmp; +DROP TABLE tmp; diff --git a/src/test/regress/expected/create_index.out b/src/test/regress/expected/create_index.out index 8450f2463e..031a0bcec9 100644 --- a/src/test/regress/expected/create_index.out +++ b/src/test/regress/expected/create_index.out @@ -2337,7 +2337,7 @@ Options: fastupdate=on, gin_pending_list_limit=128 CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops); CREATE INDEX hash_name_index ON hash_name_heap USING hash (random name_ops); CREATE INDEX hash_txt_index ON hash_txt_heap USING hash (random text_ops); -CREATE INDEX hash_f8_index ON hash_f8_heap USING hash (random float8_ops); +CREATE INDEX hash_f8_index ON hash_f8_heap USING hash (random float8_ops) WITH (fillfactor=60); CREATE UNLOGGED TABLE unlogged_hash_table (id int4); CREATE INDEX unlogged_hash_index ON unlogged_hash_table USING hash (id int4_ops); DROP TABLE unlogged_hash_table; diff --git a/src/test/regress/expected/gist.out b/src/test/regress/expected/gist.out index 91f9998140..f5a2993aaf 100644 --- a/src/test/regress/expected/gist.out +++ b/src/test/regress/expected/gist.out @@ -5,6 +5,21 @@ -- testing GiST code itself. Vacuuming in particular. create table gist_point_tbl(id int4, p point); create index gist_pointidx on gist_point_tbl using gist(p); +-- Verify the fillfactor and buffering options +create index gist_pointidx2 on gist_point_tbl using gist(p) with (buffering = on, fillfactor=50); +create index gist_pointidx3 on gist_point_tbl using gist(p) with (buffering = off); +create index gist_pointidx4 on gist_point_tbl using gist(p) with (buffering = auto); +drop index gist_pointidx2, gist_pointidx3, gist_pointidx4; +-- Make sure bad values are refused +create index gist_pointidx5 on gist_point_tbl using gist(p) with (buffering = invalid_value); +ERROR: invalid value for "buffering" option +DETAIL: Valid values are "on", "off", and "auto". +create index gist_pointidx5 on gist_point_tbl using gist(p) with (fillfactor=9); +ERROR: value 9 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". +create index gist_pointidx5 on gist_point_tbl using gist(p) with (fillfactor=101); +ERROR: value 101 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". -- Insert enough data to create a tree that's a couple of levels deep. insert into gist_point_tbl (id, p) select g, point(g*10, g*10) from generate_series(1, 10000) g; @@ -17,6 +32,9 @@ delete from gist_point_tbl where id % 2 = 1; -- would exercise it) delete from gist_point_tbl where id < 10000; vacuum analyze gist_point_tbl; +-- rebuild the index with a different fillfactor +alter index gist_pointidx SET (fillfactor = 40); +reindex index gist_pointidx; -- -- Test Index-only plans on GiST indexes -- diff --git a/src/test/regress/expected/hash_index.out b/src/test/regress/expected/hash_index.out index 0bbaa2a768..e23de21b41 100644 --- a/src/test/regress/expected/hash_index.out +++ b/src/test/regress/expected/hash_index.out @@ -217,6 +217,9 @@ END; DELETE FROM hash_split_heap WHERE keycol = 1; INSERT INTO hash_split_heap SELECT a/2 FROM generate_series(1, 25000) a; VACUUM hash_split_heap; +-- Rebuild the index using a different fillfactor +ALTER INDEX hash_split_index SET (fillfactor = 10); +REINDEX INDEX hash_split_index; -- Clean up. DROP TABLE hash_split_heap; -- Index on temp table. 
@@ -229,3 +232,12 @@ CREATE TABLE hash_heap_float4 (x float4, y int); INSERT INTO hash_heap_float4 VALUES (1.1,1); CREATE INDEX hash_idx ON hash_heap_float4 USING hash (x); DROP TABLE hash_heap_float4 CASCADE; +-- Test out-of-range fillfactor values +CREATE INDEX hash_f8_index2 ON hash_f8_heap USING hash (random float8_ops) + WITH (fillfactor=9); +ERROR: value 9 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". +CREATE INDEX hash_f8_index2 ON hash_f8_heap USING hash (random float8_ops) + WITH (fillfactor=101); +ERROR: value 101 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". diff --git a/src/test/regress/expected/reloptions.out b/src/test/regress/expected/reloptions.out new file mode 100644 index 0000000000..c4107d5ca1 --- /dev/null +++ b/src/test/regress/expected/reloptions.out @@ -0,0 +1,185 @@ +-- Simple create +CREATE TABLE reloptions_test(i INT) WITH (FiLLFaCToR=30, + autovacuum_enabled = false, autovacuum_analyze_scale_factor = 0.2); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + reloptions +------------------------------------------------------------------------------ + {fillfactor=30,autovacuum_enabled=false,autovacuum_analyze_scale_factor=0.2} +(1 row) + +-- Fail min/max values check +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=2); +ERROR: value 2 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=110); +ERROR: value 110 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_analyze_scale_factor = -10.0); +ERROR: value -10.0 out of bounds for option "autovacuum_analyze_scale_factor" +DETAIL: Valid values are between "0.000000" and "100.000000". +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_analyze_scale_factor = 110.0); +ERROR: value 110.0 out of bounds for option "autovacuum_analyze_scale_factor" +DETAIL: Valid values are between "0.000000" and "100.000000". 
+-- Fail when option and namespace do not exist +CREATE TABLE reloptions_test2(i INT) WITH (not_existing_option=2); +ERROR: unrecognized parameter "not_existing_option" +CREATE TABLE reloptions_test2(i INT) WITH (not_existing_namespace.fillfactor=2); +ERROR: unrecognized parameter namespace "not_existing_namespace" +-- Fail while setting improper values +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=30.5); +ERROR: invalid value for integer option "fillfactor": 30.5 +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor='string'); +ERROR: invalid value for integer option "fillfactor": string +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=true); +ERROR: invalid value for integer option "fillfactor": true +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_enabled=12); +ERROR: invalid value for boolean option "autovacuum_enabled": 12 +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_enabled=30.5); +ERROR: invalid value for boolean option "autovacuum_enabled": 30.5 +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_enabled='string'); +ERROR: invalid value for boolean option "autovacuum_enabled": string +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_analyze_scale_factor='string'); +ERROR: invalid value for floating point option "autovacuum_analyze_scale_factor": string +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_analyze_scale_factor=true); +ERROR: invalid value for floating point option "autovacuum_analyze_scale_factor": true +-- Fail if option is specified twice +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=30, fillfactor=40); +ERROR: parameter "fillfactor" specified more than once +-- Specifying name only for a non-Boolean option should fail +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor); +ERROR: invalid value for integer option "fillfactor": true +-- Simple ALTER TABLE +ALTER TABLE reloptions_test SET (fillfactor=31, + autovacuum_analyze_scale_factor = 0.3); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + reloptions +------------------------------------------------------------------------------ + {autovacuum_enabled=false,fillfactor=31,autovacuum_analyze_scale_factor=0.3} +(1 row) + +-- Set boolean option to true without specifying value +ALTER TABLE reloptions_test SET (autovacuum_enabled, fillfactor=32); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + reloptions +----------------------------------------------------------------------------- + {autovacuum_analyze_scale_factor=0.3,autovacuum_enabled=true,fillfactor=32} +(1 row) + +-- Check that RESET works well +ALTER TABLE reloptions_test RESET (fillfactor); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + reloptions +--------------------------------------------------------------- + {autovacuum_analyze_scale_factor=0.3,autovacuum_enabled=true} +(1 row) + +-- Resetting all values causes the column to become null +ALTER TABLE reloptions_test RESET (autovacuum_enabled, + autovacuum_analyze_scale_factor); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass AND +reloptions IS NULL; + reloptions +------------ + +(1 row) + +-- RESET fails if a value is specified +ALTER TABLE reloptions_test RESET (fillfactor=12); +ERROR: RESET must not include values for parameters +-- The OIDS option is not stored +DROP TABLE reloptions_test; +CREATE TABLE reloptions_test(i INT) WITH (fillfactor=20, oids=true); +SELECT reloptions, relhasoids FROM pg_class WHERE oid = 'reloptions_test'::regclass; + 
reloptions | relhasoids +-----------------+------------ + {fillfactor=20} | t +(1 row) + +-- Test toast.* options +DROP TABLE reloptions_test; +CREATE TABLE reloptions_test (s VARCHAR) + WITH (toast.autovacuum_vacuum_cost_delay = 23); +SELECT reltoastrelid as toast_oid + FROM pg_class WHERE oid = 'reloptions_test'::regclass \gset +SELECT reloptions FROM pg_class WHERE oid = :toast_oid; + reloptions +----------------------------------- + {autovacuum_vacuum_cost_delay=23} +(1 row) + +ALTER TABLE reloptions_test SET (toast.autovacuum_vacuum_cost_delay = 24); +SELECT reloptions FROM pg_class WHERE oid = :toast_oid; + reloptions +----------------------------------- + {autovacuum_vacuum_cost_delay=24} +(1 row) + +ALTER TABLE reloptions_test RESET (toast.autovacuum_vacuum_cost_delay); +SELECT reloptions FROM pg_class WHERE oid = :toast_oid; + reloptions +------------ + +(1 row) + +-- Fail on non-existent options in toast namespace +CREATE TABLE reloptions_test2 (i int) WITH (toast.not_existing_option = 42); +ERROR: unrecognized parameter "not_existing_option" +-- Mix TOAST & heap +DROP TABLE reloptions_test; +CREATE TABLE reloptions_test (s VARCHAR) WITH + (toast.autovacuum_vacuum_cost_delay = 23, + autovacuum_vacuum_cost_delay = 24, fillfactor = 40); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + reloptions +------------------------------------------------- + {autovacuum_vacuum_cost_delay=24,fillfactor=40} +(1 row) + +SELECT reloptions FROM pg_class WHERE oid = ( + SELECT reltoastrelid FROM pg_class WHERE oid = 'reloptions_test'::regclass); + reloptions +----------------------------------- + {autovacuum_vacuum_cost_delay=23} +(1 row) + +-- +-- CREATE INDEX, ALTER INDEX for btrees +-- +CREATE INDEX reloptions_test_idx ON reloptions_test (s) WITH (fillfactor=30); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test_idx'::regclass; + reloptions +----------------- + {fillfactor=30} +(1 row) + +-- Fail when option and namespace do not exist +CREATE INDEX reloptions_test_idx ON reloptions_test (s) + WITH (not_existing_option=2); +ERROR: unrecognized parameter "not_existing_option" +CREATE INDEX reloptions_test_idx ON reloptions_test (s) + WITH (not_existing_ns.fillfactor=2); +ERROR: unrecognized parameter namespace "not_existing_ns" +-- Check allowed ranges +CREATE INDEX reloptions_test_idx2 ON reloptions_test (s) WITH (fillfactor=1); +ERROR: value 1 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". +CREATE INDEX reloptions_test_idx2 ON reloptions_test (s) WITH (fillfactor=130); +ERROR: value 130 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". +-- Check ALTER +ALTER INDEX reloptions_test_idx SET (fillfactor=40); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test_idx'::regclass; + reloptions +----------------- + {fillfactor=40} +(1 row) + +-- Check ALTER on empty reloption list +CREATE INDEX reloptions_test_idx3 ON reloptions_test (s); +ALTER INDEX reloptions_test_idx3 SET (fillfactor=40); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test_idx3'::regclass; + reloptions +----------------- + {fillfactor=40} +(1 row) + diff --git a/src/test/regress/expected/spgist.out b/src/test/regress/expected/spgist.out index 0691e910c4..2d75bbf8dc 100644 --- a/src/test/regress/expected/spgist.out +++ b/src/test/regress/expected/spgist.out @@ -4,7 +4,7 @@ -- There are other tests to test different SP-GiST opclasses. This is for -- testing SP-GiST code itself. 
create table spgist_point_tbl(id int4, p point); -create index spgist_point_idx on spgist_point_tbl using spgist(p); +create index spgist_point_idx on spgist_point_tbl using spgist(p) with (fillfactor = 75); -- Test vacuum-root operation. It gets invoked when the root is also a leaf, -- i.e. the index is very small. insert into spgist_point_tbl (id, p) @@ -37,3 +37,13 @@ select g, 'baaaaaaaaaaaaaar' || g from generate_series(1, 1000) g; -- tuple to be moved to another page. insert into spgist_text_tbl (id, t) select -g, 'f' || repeat('o', 100-g) || 'surprise' from generate_series(1, 100) g; +-- Test out-of-range fillfactor values +create index spgist_point_idx2 on spgist_point_tbl using spgist(p) with (fillfactor = 9); +ERROR: value 9 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". +create index spgist_point_idx2 on spgist_point_tbl using spgist(p) with (fillfactor = 101); +ERROR: value 101 out of bounds for option "fillfactor" +DETAIL: Valid values are between "10" and "100". +-- Modify fillfactor in existing index +alter index spgist_point_idx set (fillfactor = 90); +reindex index spgist_point_idx; diff --git a/src/test/regress/input/tablespace.source b/src/test/regress/input/tablespace.source index 03a62bd760..7f7934b745 100644 --- a/src/test/regress/input/tablespace.source +++ b/src/test/regress/input/tablespace.source @@ -12,10 +12,10 @@ DROP TABLESPACE regress_tblspacewith; CREATE TABLESPACE regress_tblspace LOCATION '@testtablespace@'; -- try setting and resetting some properties for the new tablespace -ALTER TABLESPACE regress_tblspace SET (random_page_cost = 1.0); +ALTER TABLESPACE regress_tblspace SET (random_page_cost = 1.0, seq_page_cost = 1.1); ALTER TABLESPACE regress_tblspace SET (some_nonexistent_parameter = true); -- fail ALTER TABLESPACE regress_tblspace RESET (random_page_cost = 2.0); -- fail -ALTER TABLESPACE regress_tblspace RESET (random_page_cost, seq_page_cost); -- ok +ALTER TABLESPACE regress_tblspace RESET (random_page_cost, effective_io_concurrency); -- ok -- create a schema we can use CREATE SCHEMA testschema; diff --git a/src/test/regress/output/tablespace.source b/src/test/regress/output/tablespace.source index aaedf5f248..24435118bc 100644 --- a/src/test/regress/output/tablespace.source +++ b/src/test/regress/output/tablespace.source @@ -14,12 +14,12 @@ DROP TABLESPACE regress_tblspacewith; -- create a tablespace we can use CREATE TABLESPACE regress_tblspace LOCATION '@testtablespace@'; -- try setting and resetting some properties for the new tablespace -ALTER TABLESPACE regress_tblspace SET (random_page_cost = 1.0); +ALTER TABLESPACE regress_tblspace SET (random_page_cost = 1.0, seq_page_cost = 1.1); ALTER TABLESPACE regress_tblspace SET (some_nonexistent_parameter = true); -- fail ERROR: unrecognized parameter "some_nonexistent_parameter" ALTER TABLESPACE regress_tblspace RESET (random_page_cost = 2.0); -- fail ERROR: RESET must not include values for parameters -ALTER TABLESPACE regress_tblspace RESET (random_page_cost, seq_page_cost); -- ok +ALTER TABLESPACE regress_tblspace RESET (random_page_cost, effective_io_concurrency); -- ok -- create a schema we can use CREATE SCHEMA testschema; -- try a table diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index 53d4f49197..aa5e6af621 100644 --- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -116,7 +116,7 @@ test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid c # ---------- # 
Another group of parallel tests # ---------- -test: identity partition_join +test: identity partition_join reloptions # event triggers cannot run concurrently with any test that runs DDL test: event_trigger diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index ed1df5ae24..3866314a92 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -180,5 +180,6 @@ test: with test: xml test: identity test: partition_join +test: reloptions test: event_trigger test: stats diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index 0c8ae2ab97..2ef9541a8c 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -2427,3 +2427,10 @@ create table parted_validate_test_1 partition of parted_validate_test for values alter table parted_validate_test add constraint parted_validate_test_chka check (a > 0) not valid; alter table parted_validate_test validate constraint parted_validate_test_chka; drop table parted_validate_test; +-- test alter column options +CREATE TABLE tmp(i integer); +INSERT INTO tmp VALUES (1); +ALTER TABLE tmp ALTER COLUMN i SET (n_distinct = 1, n_distinct_inherited = 2); +ALTER TABLE tmp ALTER COLUMN i RESET (n_distinct_inherited); +ANALYZE tmp; +DROP TABLE tmp; diff --git a/src/test/regress/sql/create_index.sql b/src/test/regress/sql/create_index.sql index 67470db918..a45e8ebeff 100644 --- a/src/test/regress/sql/create_index.sql +++ b/src/test/regress/sql/create_index.sql @@ -682,7 +682,7 @@ CREATE INDEX hash_name_index ON hash_name_heap USING hash (random name_ops); CREATE INDEX hash_txt_index ON hash_txt_heap USING hash (random text_ops); -CREATE INDEX hash_f8_index ON hash_f8_heap USING hash (random float8_ops); +CREATE INDEX hash_f8_index ON hash_f8_heap USING hash (random float8_ops) WITH (fillfactor=60); CREATE UNLOGGED TABLE unlogged_hash_table (id int4); CREATE INDEX unlogged_hash_index ON unlogged_hash_table USING hash (id int4_ops); diff --git a/src/test/regress/sql/gist.sql b/src/test/regress/sql/gist.sql index 49126fd466..bae722fe13 100644 --- a/src/test/regress/sql/gist.sql +++ b/src/test/regress/sql/gist.sql @@ -7,6 +7,17 @@ create table gist_point_tbl(id int4, p point); create index gist_pointidx on gist_point_tbl using gist(p); +-- Verify the fillfactor and buffering options +create index gist_pointidx2 on gist_point_tbl using gist(p) with (buffering = on, fillfactor=50); +create index gist_pointidx3 on gist_point_tbl using gist(p) with (buffering = off); +create index gist_pointidx4 on gist_point_tbl using gist(p) with (buffering = auto); +drop index gist_pointidx2, gist_pointidx3, gist_pointidx4; + +-- Make sure bad values are refused +create index gist_pointidx5 on gist_point_tbl using gist(p) with (buffering = invalid_value); +create index gist_pointidx5 on gist_point_tbl using gist(p) with (fillfactor=9); +create index gist_pointidx5 on gist_point_tbl using gist(p) with (fillfactor=101); + -- Insert enough data to create a tree that's a couple of levels deep. 
insert into gist_point_tbl (id, p) select g, point(g*10, g*10) from generate_series(1, 10000) g; @@ -24,6 +35,9 @@ delete from gist_point_tbl where id < 10000; vacuum analyze gist_point_tbl; +-- rebuild the index with a different fillfactor +alter index gist_pointidx SET (fillfactor = 40); +reindex index gist_pointidx; -- -- Test Index-only plans on GiST indexes diff --git a/src/test/regress/sql/hash_index.sql b/src/test/regress/sql/hash_index.sql index 9af03d2bc1..4d1aa020a9 100644 --- a/src/test/regress/sql/hash_index.sql +++ b/src/test/regress/sql/hash_index.sql @@ -178,6 +178,10 @@ INSERT INTO hash_split_heap SELECT a/2 FROM generate_series(1, 25000) a; VACUUM hash_split_heap; +-- Rebuild the index using a different fillfactor +ALTER INDEX hash_split_index SET (fillfactor = 10); +REINDEX INDEX hash_split_index; + -- Clean up. DROP TABLE hash_split_heap; @@ -192,3 +196,9 @@ CREATE TABLE hash_heap_float4 (x float4, y int); INSERT INTO hash_heap_float4 VALUES (1.1,1); CREATE INDEX hash_idx ON hash_heap_float4 USING hash (x); DROP TABLE hash_heap_float4 CASCADE; + +-- Test out-of-range fillfactor values +CREATE INDEX hash_f8_index2 ON hash_f8_heap USING hash (random float8_ops) + WITH (fillfactor=9); +CREATE INDEX hash_f8_index2 ON hash_f8_heap USING hash (random float8_ops) + WITH (fillfactor=101); diff --git a/src/test/regress/sql/reloptions.sql b/src/test/regress/sql/reloptions.sql new file mode 100644 index 0000000000..c9119fd863 --- /dev/null +++ b/src/test/regress/sql/reloptions.sql @@ -0,0 +1,113 @@ + +-- Simple create +CREATE TABLE reloptions_test(i INT) WITH (FiLLFaCToR=30, + autovacuum_enabled = false, autovacuum_analyze_scale_factor = 0.2); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + +-- Fail min/max values check +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=2); +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=110); +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_analyze_scale_factor = -10.0); +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_analyze_scale_factor = 110.0); + +-- Fail when option and namespace do not exist +CREATE TABLE reloptions_test2(i INT) WITH (not_existing_option=2); +CREATE TABLE reloptions_test2(i INT) WITH (not_existing_namespace.fillfactor=2); + +-- Fail while setting improper values +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=30.5); +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor='string'); +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=true); +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_enabled=12); +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_enabled=30.5); +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_enabled='string'); +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_analyze_scale_factor='string'); +CREATE TABLE reloptions_test2(i INT) WITH (autovacuum_analyze_scale_factor=true); + +-- Fail if option is specified twice +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor=30, fillfactor=40); + +-- Specifying name only for a non-Boolean option should fail +CREATE TABLE reloptions_test2(i INT) WITH (fillfactor); + +-- Simple ALTER TABLE +ALTER TABLE reloptions_test SET (fillfactor=31, + autovacuum_analyze_scale_factor = 0.3); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + +-- Set boolean option to true without specifying value +ALTER TABLE reloptions_test SET (autovacuum_enabled, fillfactor=32); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + +-- Check that RESET works well 
+ALTER TABLE reloptions_test RESET (fillfactor); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; + +-- Resetting all values causes the column to become null +ALTER TABLE reloptions_test RESET (autovacuum_enabled, + autovacuum_analyze_scale_factor); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass AND +reloptions IS NULL; + +-- RESET fails if a value is specified +ALTER TABLE reloptions_test RESET (fillfactor=12); + +-- The OIDS option is not stored +DROP TABLE reloptions_test; +CREATE TABLE reloptions_test(i INT) WITH (fillfactor=20, oids=true); +SELECT reloptions, relhasoids FROM pg_class WHERE oid = 'reloptions_test'::regclass; + +-- Test toast.* options +DROP TABLE reloptions_test; + +CREATE TABLE reloptions_test (s VARCHAR) + WITH (toast.autovacuum_vacuum_cost_delay = 23); +SELECT reltoastrelid as toast_oid + FROM pg_class WHERE oid = 'reloptions_test'::regclass \gset +SELECT reloptions FROM pg_class WHERE oid = :toast_oid; + +ALTER TABLE reloptions_test SET (toast.autovacuum_vacuum_cost_delay = 24); +SELECT reloptions FROM pg_class WHERE oid = :toast_oid; + +ALTER TABLE reloptions_test RESET (toast.autovacuum_vacuum_cost_delay); +SELECT reloptions FROM pg_class WHERE oid = :toast_oid; + +-- Fail on non-existent options in toast namespace +CREATE TABLE reloptions_test2 (i int) WITH (toast.not_existing_option = 42); + +-- Mix TOAST & heap +DROP TABLE reloptions_test; + +CREATE TABLE reloptions_test (s VARCHAR) WITH + (toast.autovacuum_vacuum_cost_delay = 23, + autovacuum_vacuum_cost_delay = 24, fillfactor = 40); + +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass; +SELECT reloptions FROM pg_class WHERE oid = ( + SELECT reltoastrelid FROM pg_class WHERE oid = 'reloptions_test'::regclass); + +-- +-- CREATE INDEX, ALTER INDEX for btrees +-- + +CREATE INDEX reloptions_test_idx ON reloptions_test (s) WITH (fillfactor=30); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test_idx'::regclass; + +-- Fail when option and namespace do not exist +CREATE INDEX reloptions_test_idx ON reloptions_test (s) + WITH (not_existing_option=2); +CREATE INDEX reloptions_test_idx ON reloptions_test (s) + WITH (not_existing_ns.fillfactor=2); + +-- Check allowed ranges +CREATE INDEX reloptions_test_idx2 ON reloptions_test (s) WITH (fillfactor=1); +CREATE INDEX reloptions_test_idx2 ON reloptions_test (s) WITH (fillfactor=130); + +-- Check ALTER +ALTER INDEX reloptions_test_idx SET (fillfactor=40); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test_idx'::regclass; + +-- Check ALTER on empty reloption list +CREATE INDEX reloptions_test_idx3 ON reloptions_test (s); +ALTER INDEX reloptions_test_idx3 SET (fillfactor=40); +SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test_idx3'::regclass; diff --git a/src/test/regress/sql/spgist.sql b/src/test/regress/sql/spgist.sql index 5896b50865..77b43a2d3e 100644 --- a/src/test/regress/sql/spgist.sql +++ b/src/test/regress/sql/spgist.sql @@ -5,7 +5,7 @@ -- testing SP-GiST code itself. create table spgist_point_tbl(id int4, p point); -create index spgist_point_idx on spgist_point_tbl using spgist(p); +create index spgist_point_idx on spgist_point_tbl using spgist(p) with (fillfactor = 75); -- Test vacuum-root operation. It gets invoked when the root is also a leaf, -- i.e. the index is very small. @@ -48,3 +48,11 @@ select g, 'baaaaaaaaaaaaaar' || g from generate_series(1, 1000) g; -- tuple to be moved to another page. 
insert into spgist_text_tbl (id, t) select -g, 'f' || repeat('o', 100-g) || 'surprise' from generate_series(1, 100) g; + +-- Test out-of-range fillfactor values +create index spgist_point_idx2 on spgist_point_tbl using spgist(p) with (fillfactor = 9); +create index spgist_point_idx2 on spgist_point_tbl using spgist(p) with (fillfactor = 101); + +-- Modify fillfactor in existing index +alter index spgist_point_idx set (fillfactor = 90); +reindex index spgist_point_idx; From e250c8c8408a1c068285df210a7ceff68c421b3b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 19 Oct 2017 11:16:18 -0400 Subject: [PATCH 0413/1087] Fix incorrect link in v10 release notes. As noted by M. Justin. Also, to keep the HEAD and REL_10 versions of release-10.sgml in sync, back-patch the effects of c29c57890 on that file. We have a bigger problem there though :-( Discussion: https://postgr.es/m/CALtA7pmsQyTTD3fC2rmfUWgfivv5sCJJ84PHY0F_5t_SRc07Qg@mail.gmail.com Discussion: https://postgr.es/m/6d137bd0-eef6-1d91-d9b8-1a5e9195a899@2ndquadrant.com --- doc/src/sgml/release-10.sgml | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index bc026d31a6..98149db335 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -1982,7 +1982,7 @@ --> Allow make_date() + linkend="functions-datetime-table">make_date() to interpret negative years as BC years (Álvaro Herrera) @@ -1993,7 +1993,9 @@ 2016-09-28 [d3cd36a13] Make to_timestamp() and to_date() range-check fields of --> - Make to_timestamp() and to_date() reject + Make to_timestamp() + and to_date() reject out-of-range input fields (Artur Zakirov) From 2959213bf33cf7d2d1fc0b46c67d36254ffe043f Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 20 Oct 2017 09:40:17 -0400 Subject: [PATCH 0414/1087] pg_stat_statements: Add a comment about the dangers of padding bytes. Inspired by a patch from Julien Rouhaud, but I reworded it. Discussion: http://postgr.es/m/CAOBaU_a8AH8=ypfqgHnDYu06ts+jWTUgh=VgCxA3yNV-K10j9w@mail.gmail.com --- contrib/pg_stat_statements/pg_stat_statements.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c index b04b4d6ce1..3de8333be2 100644 --- a/contrib/pg_stat_statements/pg_stat_statements.c +++ b/contrib/pg_stat_statements/pg_stat_statements.c @@ -125,6 +125,11 @@ typedef enum pgssVersion /* * Hashtable key that defines the identity of a hashtable entry. We separate * queries by user and by database even if they are otherwise identical. + * + * Right now, this structure contains no padding. If you add any, make sure + * to teach pgss_store() to zero the padding bytes. Otherwise, things will + * break, because pgss_hash is created using HASH_BLOBS, and thus tag_hash + * is used to hash this. */ typedef struct pgssHashKey { From a8f1efc8ace228b5258ee7d06eace923007072c4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 20 Oct 2017 16:08:17 -0400 Subject: [PATCH 0415/1087] Fix misimplementation of typcache logic for extended hashing. The previous coding would report that an array type supports extended hashing if its element type supports regular hashing. This bug is only latent at the moment, since AFAICS there is not yet any code that depends on checking presence of extended-hashing support to make any decisions. (And in any case it wouldn't matter unless the element type has only regular hashing, which isn't true of any core data type.) 
But that doesn't make it less broken. Extend the cache_array_element_properties infrastructure to check this properly. --- src/backend/utils/cache/typcache.c | 59 ++++++++++++++++++------------ 1 file changed, 36 insertions(+), 23 deletions(-) diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index 16c52c5a38..a0a71dd86a 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -79,22 +79,23 @@ static HTAB *TypeCacheHash = NULL; static TypeCacheEntry *firstDomainTypeEntry = NULL; /* Private flag bits in the TypeCacheEntry.flags field */ -#define TCFLAGS_CHECKED_BTREE_OPCLASS 0x0001 -#define TCFLAGS_CHECKED_HASH_OPCLASS 0x0002 -#define TCFLAGS_CHECKED_EQ_OPR 0x0004 -#define TCFLAGS_CHECKED_LT_OPR 0x0008 -#define TCFLAGS_CHECKED_GT_OPR 0x0010 -#define TCFLAGS_CHECKED_CMP_PROC 0x0020 -#define TCFLAGS_CHECKED_HASH_PROC 0x0040 -#define TCFLAGS_CHECKED_ELEM_PROPERTIES 0x0080 -#define TCFLAGS_HAVE_ELEM_EQUALITY 0x0100 -#define TCFLAGS_HAVE_ELEM_COMPARE 0x0200 -#define TCFLAGS_HAVE_ELEM_HASHING 0x0400 -#define TCFLAGS_CHECKED_FIELD_PROPERTIES 0x0800 -#define TCFLAGS_HAVE_FIELD_EQUALITY 0x1000 -#define TCFLAGS_HAVE_FIELD_COMPARE 0x2000 -#define TCFLAGS_CHECKED_DOMAIN_CONSTRAINTS 0x4000 -#define TCFLAGS_CHECKED_HASH_EXTENDED_PROC 0x8000 +#define TCFLAGS_CHECKED_BTREE_OPCLASS 0x000001 +#define TCFLAGS_CHECKED_HASH_OPCLASS 0x000002 +#define TCFLAGS_CHECKED_EQ_OPR 0x000004 +#define TCFLAGS_CHECKED_LT_OPR 0x000008 +#define TCFLAGS_CHECKED_GT_OPR 0x000010 +#define TCFLAGS_CHECKED_CMP_PROC 0x000020 +#define TCFLAGS_CHECKED_HASH_PROC 0x000040 +#define TCFLAGS_CHECKED_HASH_EXTENDED_PROC 0x000080 +#define TCFLAGS_CHECKED_ELEM_PROPERTIES 0x000100 +#define TCFLAGS_HAVE_ELEM_EQUALITY 0x000200 +#define TCFLAGS_HAVE_ELEM_COMPARE 0x000400 +#define TCFLAGS_HAVE_ELEM_HASHING 0x000800 +#define TCFLAGS_HAVE_ELEM_EXTENDED_HASHING 0x001000 +#define TCFLAGS_CHECKED_FIELD_PROPERTIES 0x002000 +#define TCFLAGS_HAVE_FIELD_EQUALITY 0x004000 +#define TCFLAGS_HAVE_FIELD_COMPARE 0x008000 +#define TCFLAGS_CHECKED_DOMAIN_CONSTRAINTS 0x010000 /* * Data stored about a domain type's constraints. Note that we do not create @@ -273,6 +274,7 @@ static List *prep_domain_constraints(List *constraints, MemoryContext execctx); static bool array_element_has_equality(TypeCacheEntry *typentry); static bool array_element_has_compare(TypeCacheEntry *typentry); static bool array_element_has_hashing(TypeCacheEntry *typentry); +static bool array_element_has_extended_hashing(TypeCacheEntry *typentry); static void cache_array_element_properties(TypeCacheEntry *typentry); static bool record_fields_have_equality(TypeCacheEntry *typentry); static bool record_fields_have_compare(TypeCacheEntry *typentry); @@ -451,8 +453,8 @@ lookup_type_cache(Oid type_id, int flags) * eq_opr; if we already found one from the btree opclass, that * decision is still good. */ - typentry->flags &= ~(TCFLAGS_CHECKED_HASH_PROC); - typentry->flags &= ~(TCFLAGS_CHECKED_HASH_EXTENDED_PROC); + typentry->flags &= ~(TCFLAGS_CHECKED_HASH_PROC | + TCFLAGS_CHECKED_HASH_EXTENDED_PROC); typentry->flags |= TCFLAGS_CHECKED_HASH_OPCLASS; } @@ -500,8 +502,8 @@ lookup_type_cache(Oid type_id, int flags) * equality operator. This is so we can ensure that the hash * functions match the operator. 
*/ - typentry->flags &= ~(TCFLAGS_CHECKED_HASH_PROC); - typentry->flags &= ~(TCFLAGS_CHECKED_HASH_EXTENDED_PROC); + typentry->flags &= ~(TCFLAGS_CHECKED_HASH_PROC | + TCFLAGS_CHECKED_HASH_EXTENDED_PROC); typentry->flags |= TCFLAGS_CHECKED_EQ_OPR; } if ((flags & TYPECACHE_LT_OPR) && @@ -637,10 +639,10 @@ lookup_type_cache(Oid type_id, int flags) * we'll need more logic here to check that case too. */ if (hash_extended_proc == F_HASH_ARRAY_EXTENDED && - !array_element_has_hashing(typentry)) + !array_element_has_extended_hashing(typentry)) hash_extended_proc = InvalidOid; - /* Force update of hash_proc_finfo only if we're changing state */ + /* Force update of proc finfo only if we're changing state */ if (typentry->hash_extended_proc != hash_extended_proc) typentry->hash_extended_proc_finfo.fn_oid = InvalidOid; @@ -1269,6 +1271,14 @@ array_element_has_hashing(TypeCacheEntry *typentry) return (typentry->flags & TCFLAGS_HAVE_ELEM_HASHING) != 0; } +static bool +array_element_has_extended_hashing(TypeCacheEntry *typentry) +{ + if (!(typentry->flags & TCFLAGS_CHECKED_ELEM_PROPERTIES)) + cache_array_element_properties(typentry); + return (typentry->flags & TCFLAGS_HAVE_ELEM_EXTENDED_HASHING) != 0; +} + static void cache_array_element_properties(TypeCacheEntry *typentry) { @@ -1281,13 +1291,16 @@ cache_array_element_properties(TypeCacheEntry *typentry) elementry = lookup_type_cache(elem_type, TYPECACHE_EQ_OPR | TYPECACHE_CMP_PROC | - TYPECACHE_HASH_PROC); + TYPECACHE_HASH_PROC | + TYPECACHE_HASH_EXTENDED_PROC); if (OidIsValid(elementry->eq_opr)) typentry->flags |= TCFLAGS_HAVE_ELEM_EQUALITY; if (OidIsValid(elementry->cmp_proc)) typentry->flags |= TCFLAGS_HAVE_ELEM_COMPARE; if (OidIsValid(elementry->hash_proc)) typentry->flags |= TCFLAGS_HAVE_ELEM_HASHING; + if (OidIsValid(elementry->hash_extended_proc)) + typentry->flags |= TCFLAGS_HAVE_ELEM_EXTENDED_HASHING; } typentry->flags |= TCFLAGS_CHECKED_ELEM_PROPERTIES; } From 36ea99c84d856177ec307307788a279cc600566e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 20 Oct 2017 17:12:27 -0400 Subject: [PATCH 0416/1087] Fix typcache's failure to treat ranges as container types. Like the similar logic for arrays and records, it's necessary to examine the range's subtype to decide whether the range type can support hashing. We can omit checking the subtype for btree-defined operations, though, since range subtypes are required to have those operations. (Possibly that simplification for btree cases led us to overlook that it does not apply for hash cases.) This is only an issue if the subtype lacks hash support, which is not true of any built-in range type, but it's easy to demonstrate a problem with a range type over, eg, money: you can get a "could not identify a hash function" failure when the planner is misled into thinking that hash join or aggregation would work. This was born broken, so back-patch to all supported branches. 
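As a concrete illustration, the regression test added below boils down to
this (a sketch of the failure mode; the exact error wording may differ
across branches):

    create type cashrange as range (subtype = money);
    set enable_sort = off;  -- try to make it pick a hash setop implementation
    select '(2,5)'::cashrange except select '(5,6)'::cashrange;
    -- unpatched: ERROR:  could not identify a hash function for type cashrange
    -- patched: the planner falls back to a sort-based plan, returning ($2.00,$5.00)
    reset enable_sort;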
--- src/backend/utils/cache/typcache.c | 90 ++++++++++++++++++++++-- src/test/regress/expected/rangetypes.out | 12 ++++ src/test/regress/sql/rangetypes.sql | 12 ++++ 3 files changed, 109 insertions(+), 5 deletions(-) diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index a0a71dd86a..61ce7dc2a7 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -279,6 +279,9 @@ static void cache_array_element_properties(TypeCacheEntry *typentry); static bool record_fields_have_equality(TypeCacheEntry *typentry); static bool record_fields_have_compare(TypeCacheEntry *typentry); static void cache_record_field_properties(TypeCacheEntry *typentry); +static bool range_element_has_hashing(TypeCacheEntry *typentry); +static bool range_element_has_extended_hashing(TypeCacheEntry *typentry); +static void cache_range_element_properties(TypeCacheEntry *typentry); static void TypeCacheRelCallback(Datum arg, Oid relid); static void TypeCacheOpcCallback(Datum arg, int cacheid, uint32 hashvalue); static void TypeCacheConstrCallback(Datum arg, int cacheid, uint32 hashvalue); @@ -480,9 +483,11 @@ lookup_type_cache(Oid type_id, int flags) /* * If the proposed equality operator is array_eq or record_eq, check - * to see if the element type or column types support equality. If + * to see if the element type or column types support equality. If * not, array_eq or record_eq would fail at runtime, so we don't want - * to report that the type has equality. + * to report that the type has equality. (We can omit similar + * checking for ranges because ranges can't be created in the first + * place unless their subtypes support equality.) */ if (eq_opr == ARRAY_EQ_OP && !array_element_has_equality(typentry)) @@ -517,7 +522,10 @@ lookup_type_cache(Oid type_id, int flags) typentry->btree_opintype, BTLessStrategyNumber); - /* As above, make sure array_cmp or record_cmp will succeed */ + /* + * As above, make sure array_cmp or record_cmp will succeed; but again + * we need no special check for ranges. + */ if (lt_opr == ARRAY_LT_OP && !array_element_has_compare(typentry)) lt_opr = InvalidOid; @@ -539,7 +547,10 @@ lookup_type_cache(Oid type_id, int flags) typentry->btree_opintype, BTGreaterStrategyNumber); - /* As above, make sure array_cmp or record_cmp will succeed */ + /* + * As above, make sure array_cmp or record_cmp will succeed; but again + * we need no special check for ranges. + */ if (gt_opr == ARRAY_GT_OP && !array_element_has_compare(typentry)) gt_opr = InvalidOid; @@ -561,7 +572,10 @@ lookup_type_cache(Oid type_id, int flags) typentry->btree_opintype, BTORDER_PROC); - /* As above, make sure array_cmp or record_cmp will succeed */ + /* + * As above, make sure array_cmp or record_cmp will succeed; but again + * we need no special check for ranges. + */ if (cmp_proc == F_BTARRAYCMP && !array_element_has_compare(typentry)) cmp_proc = InvalidOid; @@ -605,6 +619,13 @@ lookup_type_cache(Oid type_id, int flags) !array_element_has_hashing(typentry)) hash_proc = InvalidOid; + /* + * Likewise for hash_range. + */ + if (hash_proc == F_HASH_RANGE && + !range_element_has_hashing(typentry)) + hash_proc = InvalidOid; + /* Force update of hash_proc_finfo only if we're changing state */ if (typentry->hash_proc != hash_proc) typentry->hash_proc_finfo.fn_oid = InvalidOid; @@ -642,6 +663,13 @@ lookup_type_cache(Oid type_id, int flags) !array_element_has_extended_hashing(typentry)) hash_extended_proc = InvalidOid; + /* + * Likewise for hash_range_extended. 
+ */ + if (hash_extended_proc == F_HASH_RANGE_EXTENDED && + !range_element_has_extended_hashing(typentry)) + hash_extended_proc = InvalidOid; + /* Force update of proc finfo only if we're changing state */ if (typentry->hash_extended_proc != hash_extended_proc) typentry->hash_extended_proc_finfo.fn_oid = InvalidOid; @@ -1305,6 +1333,10 @@ cache_array_element_properties(TypeCacheEntry *typentry) typentry->flags |= TCFLAGS_CHECKED_ELEM_PROPERTIES; } +/* + * Likewise, some helper functions for composite types. + */ + static bool record_fields_have_equality(TypeCacheEntry *typentry) { @@ -1376,6 +1408,54 @@ cache_record_field_properties(TypeCacheEntry *typentry) typentry->flags |= TCFLAGS_CHECKED_FIELD_PROPERTIES; } +/* + * Likewise, some helper functions for range types. + * + * We can borrow the flag bits for array element properties to use for range + * element properties, since those flag bits otherwise have no use in a + * range type's typcache entry. + */ + +static bool +range_element_has_hashing(TypeCacheEntry *typentry) +{ + if (!(typentry->flags & TCFLAGS_CHECKED_ELEM_PROPERTIES)) + cache_range_element_properties(typentry); + return (typentry->flags & TCFLAGS_HAVE_ELEM_HASHING) != 0; +} + +static bool +range_element_has_extended_hashing(TypeCacheEntry *typentry) +{ + if (!(typentry->flags & TCFLAGS_CHECKED_ELEM_PROPERTIES)) + cache_range_element_properties(typentry); + return (typentry->flags & TCFLAGS_HAVE_ELEM_EXTENDED_HASHING) != 0; +} + +static void +cache_range_element_properties(TypeCacheEntry *typentry) +{ + /* load up subtype link if we didn't already */ + if (typentry->rngelemtype == NULL && + typentry->typtype == TYPTYPE_RANGE) + load_rangetype_info(typentry); + + if (typentry->rngelemtype != NULL) + { + TypeCacheEntry *elementry; + + /* might need to calculate subtype's hash function properties */ + elementry = lookup_type_cache(typentry->rngelemtype->type_id, + TYPECACHE_HASH_PROC | + TYPECACHE_HASH_EXTENDED_PROC); + if (OidIsValid(elementry->hash_proc)) + typentry->flags |= TCFLAGS_HAVE_ELEM_HASHING; + if (OidIsValid(elementry->hash_extended_proc)) + typentry->flags |= TCFLAGS_HAVE_ELEM_EXTENDED_HASHING; + } + typentry->flags |= TCFLAGS_CHECKED_ELEM_PROPERTIES; +} + /* * Make sure that RecordCacheArray is large enough to store 'typmod'. 
  */
diff --git a/src/test/regress/expected/rangetypes.out b/src/test/regress/expected/rangetypes.out
index 4a2336cd8d..accf1e0d9e 100644
--- a/src/test/regress/expected/rangetypes.out
+++ b/src/test/regress/expected/rangetypes.out
@@ -1354,6 +1354,18 @@ select *, row_to_json(upper(t)) as u from
 drop type two_ints cascade;
 NOTICE:  drop cascades to type two_ints_range
 --
+-- Check behavior when subtype lacks a hash function
+--
+create type cashrange as range (subtype = money);
+set enable_sort = off; -- try to make it pick a hash setop implementation
+select '(2,5)'::cashrange except select '(5,6)'::cashrange;
+   cashrange
+---------------
+ ($2.00,$5.00)
+(1 row)
+
+reset enable_sort;
+--
 -- OUT/INOUT/TABLE functions
 --
 create function outparam_succeed(i anyrange, out r anyrange, out t text)
diff --git a/src/test/regress/sql/rangetypes.sql b/src/test/regress/sql/rangetypes.sql
index a60df9095e..55638a85ee 100644
--- a/src/test/regress/sql/rangetypes.sql
+++ b/src/test/regress/sql/rangetypes.sql
@@ -461,6 +461,18 @@ select *, row_to_json(upper(t)) as u from
 
 drop type two_ints cascade;
 
+--
+-- Check behavior when subtype lacks a hash function
+--
+
+create type cashrange as range (subtype = money);
+
+set enable_sort = off; -- try to make it pick a hash setop implementation
+
+select '(2,5)'::cashrange except select '(5,6)'::cashrange;
+
+reset enable_sort;
+
 --
 -- OUT/INOUT/TABLE functions
 --

From 1ff01b3902cbf5b22d1a439014202499c21b2994 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 19 Oct 2017 21:16:39 -0400
Subject: [PATCH 0417/1087] Convert SGML IDs to lower case

IDs in SGML are case insensitive, and we have accumulated a mix of upper
and lower case IDs, including different variants of the same ID. In XML,
these will be case sensitive, so we need to fix up those differences.
Going to all lower case seems most straightforward, and the current
build process already makes all anchors lower case anyway during the
SGML->XML conversion, so this doesn't create any difference in the
output right now. A future XML-only build process would, however,
maintain any mixed case ID spellings in the output, so that is another
reason to clean this up beforehand.
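To make the pattern concrete (this pair is taken from the biblio.sgml
hunk below): a declaration such as

    <biblioentry id="STON89">

becomes

    <biblioentry id="ston89">

and every cross-reference that spells the ID in a different case is
adjusted to match.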
Author: Alexander Lakhin --- doc/src/sgml/acronyms.sgml | 6 +- doc/src/sgml/arch-dev.sgml | 2 +- doc/src/sgml/biblio.sgml | 40 ++--- doc/src/sgml/brin.sgml | 2 +- doc/src/sgml/btree-gist.sgml | 4 +- doc/src/sgml/config.sgml | 18 +- doc/src/sgml/dblink.sgml | 38 ++--- doc/src/sgml/ddl.sgml | 2 +- doc/src/sgml/ecpg.sgml | 70 ++++---- doc/src/sgml/func.sgml | 4 +- doc/src/sgml/geqo.sgml | 4 +- doc/src/sgml/gin.sgml | 2 +- doc/src/sgml/gist.sgml | 2 +- doc/src/sgml/history.sgml | 14 +- doc/src/sgml/indices.sgml | 16 +- doc/src/sgml/libpq.sgml | 2 +- doc/src/sgml/lobj.sgml | 2 +- doc/src/sgml/logical-replication.sgml | 2 +- doc/src/sgml/logicaldecoding.sgml | 6 +- doc/src/sgml/mvcc.sgml | 4 +- doc/src/sgml/passwordcheck.sgml | 4 +- doc/src/sgml/perform.sgml | 2 +- doc/src/sgml/queries.sgml | 2 +- doc/src/sgml/query.sgml | 2 +- doc/src/sgml/rangetypes.sgml | 6 +- doc/src/sgml/ref/abort.sgml | 6 +- doc/src/sgml/ref/alter_aggregate.sgml | 2 +- doc/src/sgml/ref/alter_collation.sgml | 2 +- doc/src/sgml/ref/alter_conversion.sgml | 2 +- doc/src/sgml/ref/alter_database.sgml | 2 +- .../sgml/ref/alter_default_privileges.sgml | 2 +- doc/src/sgml/ref/alter_domain.sgml | 8 +- doc/src/sgml/ref/alter_event_trigger.sgml | 2 +- doc/src/sgml/ref/alter_extension.sgml | 4 +- .../sgml/ref/alter_foreign_data_wrapper.sgml | 2 +- doc/src/sgml/ref/alter_foreign_table.sgml | 8 +- doc/src/sgml/ref/alter_function.sgml | 2 +- doc/src/sgml/ref/alter_group.sgml | 6 +- doc/src/sgml/ref/alter_index.sgml | 10 +- doc/src/sgml/ref/alter_language.sgml | 2 +- doc/src/sgml/ref/alter_large_object.sgml | 2 +- doc/src/sgml/ref/alter_materialized_view.sgml | 2 +- doc/src/sgml/ref/alter_opclass.sgml | 2 +- doc/src/sgml/ref/alter_operator.sgml | 2 +- doc/src/sgml/ref/alter_opfamily.sgml | 2 +- doc/src/sgml/ref/alter_policy.sgml | 2 +- doc/src/sgml/ref/alter_publication.sgml | 4 +- doc/src/sgml/ref/alter_role.sgml | 16 +- doc/src/sgml/ref/alter_rule.sgml | 2 +- doc/src/sgml/ref/alter_schema.sgml | 2 +- doc/src/sgml/ref/alter_sequence.sgml | 2 +- doc/src/sgml/ref/alter_server.sgml | 2 +- doc/src/sgml/ref/alter_statistics.sgml | 2 +- doc/src/sgml/ref/alter_subscription.sgml | 8 +- doc/src/sgml/ref/alter_system.sgml | 6 +- doc/src/sgml/ref/alter_table.sgml | 30 ++-- doc/src/sgml/ref/alter_tablespace.sgml | 2 +- doc/src/sgml/ref/alter_trigger.sgml | 4 +- doc/src/sgml/ref/alter_tsconfig.sgml | 2 +- doc/src/sgml/ref/alter_tsdictionary.sgml | 2 +- doc/src/sgml/ref/alter_tsparser.sgml | 2 +- doc/src/sgml/ref/alter_tstemplate.sgml | 2 +- doc/src/sgml/ref/alter_type.sgml | 6 +- doc/src/sgml/ref/alter_user.sgml | 2 +- doc/src/sgml/ref/alter_user_mapping.sgml | 2 +- doc/src/sgml/ref/alter_view.sgml | 2 +- doc/src/sgml/ref/analyze.sgml | 2 +- doc/src/sgml/ref/begin.sgml | 6 +- doc/src/sgml/ref/close.sgml | 2 +- doc/src/sgml/ref/cluster.sgml | 4 +- doc/src/sgml/ref/clusterdb.sgml | 8 +- doc/src/sgml/ref/comment.sgml | 2 +- doc/src/sgml/ref/commit.sgml | 4 +- doc/src/sgml/ref/commit_prepared.sgml | 2 +- doc/src/sgml/ref/copy.sgml | 4 +- doc/src/sgml/ref/create_aggregate.sgml | 2 +- doc/src/sgml/ref/create_cast.sgml | 2 +- doc/src/sgml/ref/create_collation.sgml | 2 +- doc/src/sgml/ref/create_conversion.sgml | 2 +- doc/src/sgml/ref/create_database.sgml | 8 +- doc/src/sgml/ref/create_domain.sgml | 6 +- doc/src/sgml/ref/create_event_trigger.sgml | 2 +- doc/src/sgml/ref/create_extension.sgml | 2 +- .../sgml/ref/create_foreign_data_wrapper.sgml | 2 +- doc/src/sgml/ref/create_foreign_table.sgml | 12 +- doc/src/sgml/ref/create_function.sgml | 4 +- 
doc/src/sgml/ref/create_group.sgml | 2 +- doc/src/sgml/ref/create_index.sgml | 16 +- doc/src/sgml/ref/create_language.sgml | 2 +- .../sgml/ref/create_materialized_view.sgml | 2 +- doc/src/sgml/ref/create_opclass.sgml | 2 +- doc/src/sgml/ref/create_operator.sgml | 2 +- doc/src/sgml/ref/create_opfamily.sgml | 2 +- doc/src/sgml/ref/create_policy.sgml | 12 +- doc/src/sgml/ref/create_publication.sgml | 2 +- doc/src/sgml/ref/create_role.sgml | 16 +- doc/src/sgml/ref/create_rule.sgml | 2 +- doc/src/sgml/ref/create_schema.sgml | 2 +- doc/src/sgml/ref/create_sequence.sgml | 2 +- doc/src/sgml/ref/create_server.sgml | 2 +- doc/src/sgml/ref/create_statistics.sgml | 6 +- doc/src/sgml/ref/create_subscription.sgml | 2 +- doc/src/sgml/ref/create_table.sgml | 26 +-- doc/src/sgml/ref/create_table_as.sgml | 2 +- doc/src/sgml/ref/create_tablespace.sgml | 2 +- doc/src/sgml/ref/create_transform.sgml | 2 +- doc/src/sgml/ref/create_trigger.sgml | 12 +- doc/src/sgml/ref/create_tsconfig.sgml | 2 +- doc/src/sgml/ref/create_tsdictionary.sgml | 2 +- doc/src/sgml/ref/create_tsparser.sgml | 2 +- doc/src/sgml/ref/create_tstemplate.sgml | 2 +- doc/src/sgml/ref/create_type.sgml | 12 +- doc/src/sgml/ref/create_user.sgml | 2 +- doc/src/sgml/ref/create_user_mapping.sgml | 2 +- doc/src/sgml/ref/create_view.sgml | 10 +- doc/src/sgml/ref/createdb.sgml | 12 +- doc/src/sgml/ref/createuser.sgml | 8 +- doc/src/sgml/ref/deallocate.sgml | 2 +- doc/src/sgml/ref/declare.sgml | 2 +- doc/src/sgml/ref/delete.sgml | 2 +- doc/src/sgml/ref/discard.sgml | 2 +- doc/src/sgml/ref/do.sgml | 2 +- doc/src/sgml/ref/drop_aggregate.sgml | 2 +- doc/src/sgml/ref/drop_cast.sgml | 2 +- doc/src/sgml/ref/drop_collation.sgml | 2 +- doc/src/sgml/ref/drop_conversion.sgml | 2 +- doc/src/sgml/ref/drop_database.sgml | 2 +- doc/src/sgml/ref/drop_domain.sgml | 8 +- doc/src/sgml/ref/drop_event_trigger.sgml | 2 +- doc/src/sgml/ref/drop_extension.sgml | 2 +- .../sgml/ref/drop_foreign_data_wrapper.sgml | 2 +- doc/src/sgml/ref/drop_foreign_table.sgml | 2 +- doc/src/sgml/ref/drop_function.sgml | 6 +- doc/src/sgml/ref/drop_group.sgml | 2 +- doc/src/sgml/ref/drop_index.sgml | 2 +- doc/src/sgml/ref/drop_language.sgml | 2 +- doc/src/sgml/ref/drop_materialized_view.sgml | 2 +- doc/src/sgml/ref/drop_opclass.sgml | 2 +- doc/src/sgml/ref/drop_operator.sgml | 2 +- doc/src/sgml/ref/drop_opfamily.sgml | 2 +- doc/src/sgml/ref/drop_owned.sgml | 2 +- doc/src/sgml/ref/drop_policy.sgml | 2 +- doc/src/sgml/ref/drop_publication.sgml | 2 +- doc/src/sgml/ref/drop_role.sgml | 4 +- doc/src/sgml/ref/drop_rule.sgml | 2 +- doc/src/sgml/ref/drop_schema.sgml | 2 +- doc/src/sgml/ref/drop_sequence.sgml | 2 +- doc/src/sgml/ref/drop_server.sgml | 2 +- doc/src/sgml/ref/drop_statistics.sgml | 2 +- doc/src/sgml/ref/drop_subscription.sgml | 2 +- doc/src/sgml/ref/drop_table.sgml | 2 +- doc/src/sgml/ref/drop_tablespace.sgml | 2 +- doc/src/sgml/ref/drop_transform.sgml | 2 +- doc/src/sgml/ref/drop_trigger.sgml | 6 +- doc/src/sgml/ref/drop_tsconfig.sgml | 2 +- doc/src/sgml/ref/drop_tsdictionary.sgml | 2 +- doc/src/sgml/ref/drop_tsparser.sgml | 2 +- doc/src/sgml/ref/drop_tstemplate.sgml | 2 +- doc/src/sgml/ref/drop_type.sgml | 8 +- doc/src/sgml/ref/drop_user.sgml | 2 +- doc/src/sgml/ref/drop_user_mapping.sgml | 2 +- doc/src/sgml/ref/drop_view.sgml | 2 +- doc/src/sgml/ref/dropdb.sgml | 8 +- doc/src/sgml/ref/dropuser.sgml | 8 +- doc/src/sgml/ref/ecpg-ref.sgml | 4 +- doc/src/sgml/ref/end.sgml | 4 +- doc/src/sgml/ref/execute.sgml | 2 +- doc/src/sgml/ref/explain.sgml | 2 +- doc/src/sgml/ref/fetch.sgml | 2 +- 
doc/src/sgml/ref/grant.sgml | 4 +- doc/src/sgml/ref/import_foreign_schema.sgml | 8 +- doc/src/sgml/ref/initdb.sgml | 4 +- doc/src/sgml/ref/insert.sgml | 8 +- doc/src/sgml/ref/listen.sgml | 2 +- doc/src/sgml/ref/load.sgml | 2 +- doc/src/sgml/ref/lock.sgml | 4 +- doc/src/sgml/ref/move.sgml | 2 +- doc/src/sgml/ref/notify.sgml | 2 +- doc/src/sgml/ref/pg_basebackup.sgml | 4 +- doc/src/sgml/ref/pg_controldata.sgml | 4 +- doc/src/sgml/ref/pg_ctl-ref.sgml | 10 +- doc/src/sgml/ref/pg_dump.sgml | 8 +- doc/src/sgml/ref/pg_dumpall.sgml | 2 +- doc/src/sgml/ref/pg_receivewal.sgml | 2 +- doc/src/sgml/ref/pg_recvlogical.sgml | 2 +- doc/src/sgml/ref/pg_resetwal.sgml | 4 +- doc/src/sgml/ref/pg_restore.sgml | 2 +- doc/src/sgml/ref/pg_waldump.sgml | 2 +- doc/src/sgml/ref/prepare.sgml | 4 +- doc/src/sgml/ref/prepare_transaction.sgml | 2 +- doc/src/sgml/ref/psql-ref.sgml | 156 +++++++++--------- doc/src/sgml/ref/reassign_owned.sgml | 2 +- .../sgml/ref/refresh_materialized_view.sgml | 4 +- doc/src/sgml/ref/reindex.sgml | 2 +- doc/src/sgml/ref/reindexdb.sgml | 8 +- doc/src/sgml/ref/release_savepoint.sgml | 2 +- doc/src/sgml/ref/reset.sgml | 6 +- doc/src/sgml/ref/revoke.sgml | 10 +- doc/src/sgml/ref/rollback.sgml | 4 +- doc/src/sgml/ref/rollback_prepared.sgml | 2 +- doc/src/sgml/ref/rollback_to.sgml | 4 +- doc/src/sgml/ref/savepoint.sgml | 6 +- doc/src/sgml/ref/security_label.sgml | 2 +- doc/src/sgml/ref/select.sgml | 28 ++-- doc/src/sgml/ref/select_into.sgml | 2 +- doc/src/sgml/ref/set.sgml | 6 +- doc/src/sgml/ref/set_constraints.sgml | 2 +- doc/src/sgml/ref/set_role.sgml | 4 +- doc/src/sgml/ref/set_session_auth.sgml | 4 +- doc/src/sgml/ref/set_transaction.sgml | 4 +- doc/src/sgml/ref/show.sgml | 8 +- doc/src/sgml/ref/start_transaction.sgml | 2 +- doc/src/sgml/ref/truncate.sgml | 2 +- doc/src/sgml/ref/unlisten.sgml | 2 +- doc/src/sgml/ref/update.sgml | 2 +- doc/src/sgml/ref/vacuum.sgml | 2 +- doc/src/sgml/ref/vacuumdb.sgml | 8 +- doc/src/sgml/ref/values.sgml | 2 +- doc/src/sgml/release-10.sgml | 46 +++--- doc/src/sgml/release-8.2.sgml | 94 +++++------ doc/src/sgml/release-8.3.sgml | 4 +- doc/src/sgml/release-9.0.sgml | 86 +++++----- doc/src/sgml/release-9.1.sgml | 78 ++++----- doc/src/sgml/release-9.2.sgml | 64 +++---- doc/src/sgml/release-9.3.sgml | 86 +++++----- doc/src/sgml/release-9.4.sgml | 82 ++++----- doc/src/sgml/release-9.5.sgml | 90 +++++----- doc/src/sgml/release-9.6.sgml | 12 +- doc/src/sgml/rules.sgml | 4 +- doc/src/sgml/spgist.sgml | 2 +- doc/src/sgml/start.sgml | 2 +- doc/src/sgml/tablefunc.sgml | 2 +- doc/src/sgml/xfunc.sgml | 4 +- doc/src/sgml/xindex.sgml | 6 +- 234 files changed, 925 insertions(+), 925 deletions(-) diff --git a/doc/src/sgml/acronyms.sgml b/doc/src/sgml/acronyms.sgml index 35514d4d9a..6e9fddf404 100644 --- a/doc/src/sgml/acronyms.sgml +++ b/doc/src/sgml/acronyms.sgml @@ -232,7 +232,7 @@ GIN - Generalized Inverted Index + Generalized Inverted Index @@ -241,7 +241,7 @@ GiST - Generalized Search Tree + Generalized Search Tree @@ -583,7 +583,7 @@ SP-GiST - Space-Partitioned Generalized Search Tree + Space-Partitioned Generalized Search Tree diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml index 5423aadb9c..d49901c690 100644 --- a/doc/src/sgml/arch-dev.sgml +++ b/doc/src/sgml/arch-dev.sgml @@ -7,7 +7,7 @@ Author This chapter originated as part of - , Stefan Simkovics' + , Stefan Simkovics' Master's Thesis prepared at Vienna University of Technology under the direction of O.Univ.Prof.Dr. Georg Gottlob and Univ.Ass. Mag. Katrin Seyr. 
diff --git a/doc/src/sgml/biblio.sgml b/doc/src/sgml/biblio.sgml index d7547e6e92..4953024162 100644 --- a/doc/src/sgml/biblio.sgml +++ b/doc/src/sgml/biblio.sgml @@ -18,7 +18,7 @@ <acronym>SQL</acronym> Reference Books - + The Practical <acronym>SQL</acronym> Handbook Using SQL Variants Fourth Edition @@ -43,7 +43,7 @@ 2001 - + A Guide to the <acronym>SQL</acronym> Standard A user's guide to the standard database language SQL Fourth Edition @@ -64,7 +64,7 @@ 1997 - + An Introduction to Database Systems Eighth Edition @@ -80,7 +80,7 @@ 2003 - + Fundamentals of Database Systems Fourth Edition @@ -100,7 +100,7 @@ 2003 - + Understanding the New <acronym>SQL</acronym> A complete guide @@ -120,7 +120,7 @@ 1993 - + Principles of Database and Knowledge Base Systems @@ -141,7 +141,7 @@ PostgreSQL-specific Documentation - + Enhancement of the ANSI SQL Implementation of PostgreSQL @@ -185,7 +185,7 @@ ssimkovi@ag.or.at November 29, 1998 - + The <productname>Postgres95</productname> User Manual @@ -204,7 +204,7 @@ ssimkovi@ag.or.at Sept. 5, 1995 - + <ulink url="http://db.cs.berkeley.edu/papers/UCB-MS-zfong.pdf">The design and implementation of the <productname>POSTGRES</productname> query optimizer</ulink> @@ -222,7 +222,7 @@ ssimkovi@ag.or.at Proceedings and Articles - + Partial indexing in POSTGRES: research project @@ -238,7 +238,7 @@ ssimkovi@ag.or.at 1993 - + A Unified Framework for Version Modeling Using Production Rules in a Database System @@ -262,7 +262,7 @@ ssimkovi@ag.or.at - + <ulink url="http://db.cs.berkeley.edu/papers/ERL-M87-13.pdf">The <productname>POSTGRES</productname> data model</ulink> @@ -284,7 +284,7 @@ ssimkovi@ag.or.at - + <ulink url="http://citeseer.ist.psu.edu/seshadri95generalized.html">Generalized Partial Indexes</ulink> @@ -313,7 +313,7 @@ ssimkovi@ag.or.at 420-7 - + <ulink url="http://db.cs.berkeley.edu/papers/ERL-M85-95.pdf">The design of <productname>POSTGRES</productname></ulink> @@ -335,7 +335,7 @@ ssimkovi@ag.or.at - + The design of the <productname>POSTGRES</productname> rules system @@ -360,7 +360,7 @@ ssimkovi@ag.or.at - + <ulink url="http://db.cs.berkeley.edu/papers/ERL-M87-06.pdf">The design of the <productname>POSTGRES</productname> storage @@ -379,7 +379,7 @@ ssimkovi@ag.or.at </confgroup> </biblioentry> - <biblioentry id="STON89"> + <biblioentry id="ston89"> <biblioset relation="article"> <title><ulink url="http://db.cs.berkeley.edu/papers/ERL-M89-82.pdf">A commentary on the <productname>POSTGRES</productname> rules @@ -405,7 +405,7 @@ ssimkovi@ag.or.at </biblioset> </biblioentry> - <biblioentry id="STON89b"> + <biblioentry id="ston89b"> <biblioset relation="article"> <title><ulink url="http://db.cs.berkeley.edu/papers/ERL-M89-17.pdf">The case for partial indexes</ulink> @@ -423,7 +423,7 @@ ssimkovi@ag.or.at - + <ulink url="http://db.cs.berkeley.edu/papers/ERL-M90-34.pdf">The implementation of <productname>POSTGRES</productname></ulink> @@ -451,7 +451,7 @@ ssimkovi@ag.or.at - + <ulink url="http://db.cs.berkeley.edu/papers/ERL-M90-36.pdf">On Rules, Procedures, Caching and Views in Database Systems</ulink> diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml index 91c01700ed..b7483df4c0 100644 --- a/doc/src/sgml/brin.sgml +++ b/doc/src/sgml/brin.sgml @@ -1,6 +1,6 @@ - + BRIN Indexes diff --git a/doc/src/sgml/btree-gist.sgml b/doc/src/sgml/btree-gist.sgml index dcb939f1fb..774442feee 100644 --- a/doc/src/sgml/btree-gist.sgml +++ b/doc/src/sgml/btree-gist.sgml @@ -36,7 +36,7 @@ In addition to the typical B-tree search operators, btree_gist also provides 
index support for <> (not equals). This may be useful in combination with an - exclusion constraint, + exclusion constraint, as described below. @@ -70,7 +70,7 @@ SELECT *, a <-> 42 AS dist FROM test ORDER BY a <-> 42 LIMIT 10; - Use an exclusion + Use an exclusion constraint to enforce the rule that a cage at a zoo can contain only one kind of animal: diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index aeda826d87..d360fc4d58 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -170,7 +170,7 @@ shared_buffers = 128MB postgresql.auto.confpostgresql.auto.conf, which has the same format as postgresql.conf but should never be edited manually. This file holds settings provided through - the command. This file is automatically + the command. This file is automatically read whenever postgresql.conf is, and its settings take effect in the same way. Settings in postgresql.auto.conf override those in postgresql.conf. @@ -191,7 +191,7 @@ shared_buffers = 128MB PostgreSQL provides three SQL commands to establish configuration defaults. - The already-mentioned command + The already-mentioned command provides a SQL-accessible means of changing global defaults; it is functionally equivalent to editing postgresql.conf. In addition, there are two commands that allow setting of defaults @@ -232,7 +232,7 @@ shared_buffers = 128MB - The command allows inspection of the + The command allows inspection of the current value of all parameters. The corresponding function is current_setting(setting_name text). @@ -240,7 +240,7 @@ shared_buffers = 128MB - The command allows modification of the + The command allows modification of the current value of those parameters that can be set locally to a session; it has no effect on other sessions. The corresponding function is @@ -266,7 +266,7 @@ shared_buffers = 128MB - Using on this view, specifically + Using on this view, specifically updating the setting column, is the equivalent of issuing SET commands. For example, the equivalent of @@ -6237,7 +6237,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; For more information on row security policies, - see . + see . @@ -7040,7 +7040,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; This variable specifies one or more shared libraries that are to be preloaded at connection start. It contains a comma-separated list of library names, where each name - is interpreted as for the command. + is interpreted as for the command. Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. The parameter value only takes effect at the start of the connection. @@ -7091,7 +7091,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; This variable specifies one or more shared libraries that are to be preloaded at connection start. It contains a comma-separated list of library names, where each name - is interpreted as for the command. + is interpreted as for the command. Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. The parameter value only takes effect at the start of the connection. @@ -7133,7 +7133,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; This variable specifies one or more shared libraries to be preloaded at server start. It contains a comma-separated list of library names, where each name - is interpreted as for the command. + is interpreted as for the command. 
Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. This parameter can only be set at server start. If a specified diff --git a/doc/src/sgml/dblink.sgml b/doc/src/sgml/dblink.sgml index 1f17d3ad2d..12928e8bd3 100644 --- a/doc/src/sgml/dblink.sgml +++ b/doc/src/sgml/dblink.sgml @@ -18,7 +18,7 @@ functionality using a more modern and standards-compliant infrastructure. - + dblink_connect @@ -182,7 +182,7 @@ DROP SERVER fdtest; - + dblink_connect_u @@ -239,7 +239,7 @@ dblink_connect_u(text connname, text connstr) returns text - + dblink_disconnect @@ -314,7 +314,7 @@ SELECT dblink_disconnect('myconn'); - + dblink @@ -532,7 +532,7 @@ SELECT * FROM dblink('myconn', 'select proname, prosrc from pg_proc') - + dblink_exec @@ -669,7 +669,7 @@ DETAIL: ERROR: null value in column "relnamespace" violates not-null constrain - + dblink_open @@ -793,7 +793,7 @@ SELECT dblink_open('foo', 'select proname, prosrc from pg_proc'); - + dblink_fetch @@ -946,7 +946,7 @@ SELECT * FROM dblink_fetch('foo', 5) AS (funcname name, source text); - + dblink_close @@ -1057,7 +1057,7 @@ SELECT dblink_close('foo'); - + dblink_get_connections @@ -1102,7 +1102,7 @@ SELECT dblink_get_connections(); - + dblink_error_message @@ -1165,7 +1165,7 @@ SELECT dblink_error_message('dtest1'); - + dblink_send_query @@ -1247,7 +1247,7 @@ SELECT dblink_send_query('dtest1', 'SELECT * FROM foo WHERE f1 < 3'); - + dblink_is_busy @@ -1310,7 +1310,7 @@ SELECT dblink_is_busy('dtest1'); - + dblink_get_notify @@ -1392,7 +1392,7 @@ SELECT * FROM dblink_get_notify(); - + dblink_get_result @@ -1556,7 +1556,7 @@ contrib_regression=# SELECT * FROM dblink_get_result('dtest1') AS t1(f1 int, f2 - + dblink_cancel_query @@ -1624,7 +1624,7 @@ SELECT dblink_cancel_query('dtest1'); - + dblink_get_pkey @@ -1716,7 +1716,7 @@ SELECT * FROM dblink_get_pkey('foobar'); - + dblink_build_sql_insert @@ -1851,7 +1851,7 @@ SELECT dblink_build_sql_insert('foo', '1 2', 2, '{"1", "a"}', '{"1", "b''a"}'); - + dblink_build_sql_delete @@ -1969,7 +1969,7 @@ SELECT dblink_build_sql_delete('"MyFoo"', '1 2', 2, '{"1", "b"}'); - + dblink_build_sql_update diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 817db92af2..03cbaa60ab 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -908,7 +908,7 @@ CREATE TABLE circles ( - See also CREATE + See also CREATE TABLE ... CONSTRAINT ... EXCLUDE for details. 
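For illustration, a minimal sketch of the exclusion constraint described above (one kind of animal per cage), assuming the btree_gist extension is installed; the zoo/cage names follow the example in the text:

    CREATE EXTENSION btree_gist;

    CREATE TABLE zoo (
        cage   integer,
        animal text,
        EXCLUDE USING gist (cage WITH =, animal WITH <>)
    );

    INSERT INTO zoo VALUES (123, 'zebra');  -- succeeds
    INSERT INTO zoo VALUES (123, 'zebra');  -- succeeds (same animal, same cage)
    INSERT INTO zoo VALUES (123, 'lion');   -- rejected by the constraint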
diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index 0f9ff3a8eb..bc3d080774 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -2270,7 +2270,7 @@ int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst); The following functions can be used to work with the date type: - + PGTYPESdate_from_timestamp @@ -2284,7 +2284,7 @@ date PGTYPESdate_from_timestamp(timestamp dt); - + PGTYPESdate_from_asc @@ -2389,7 +2389,7 @@ date PGTYPESdate_from_asc(char *str, char **endptr); - + PGTYPESdate_to_asc @@ -2404,7 +2404,7 @@ char *PGTYPESdate_to_asc(date dDate); - + PGTYPESdate_julmdy @@ -2423,7 +2423,7 @@ void PGTYPESdate_julmdy(date d, int *mdy); - + PGTYPESdate_mdyjul @@ -2439,7 +2439,7 @@ void PGTYPESdate_mdyjul(int *mdy, date *jdate); - + PGTYPESdate_dayofweek @@ -2491,7 +2491,7 @@ int PGTYPESdate_dayofweek(date d); - + PGTYPESdate_today @@ -2505,7 +2505,7 @@ void PGTYPESdate_today(date *d); - + PGTYPESdate_fmt_asc @@ -2626,7 +2626,7 @@ int PGTYPESdate_fmt_asc(date dDate, char *fmtstring, char *outbuf); - + PGTYPESdate_defmt_asc @@ -2747,7 +2747,7 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); The following functions can be used to work with the timestamp type: - + PGTYPEStimestamp_from_asc @@ -2766,7 +2766,7 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); The function returns the parsed timestamp on success. On error, PGTYPESInvalidTimestamp is returned and errno is - set to PGTYPES_TS_BAD_TIMESTAMP. See for important notes on this value. + set to PGTYPES_TS_BAD_TIMESTAMP. See for important notes on this value. In general, the input string can contain any combination of an allowed @@ -2811,7 +2811,7 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); - + PGTYPEStimestamp_to_asc @@ -2826,7 +2826,7 @@ char *PGTYPEStimestamp_to_asc(timestamp tstamp); - + PGTYPEStimestamp_current @@ -2840,7 +2840,7 @@ void PGTYPEStimestamp_current(timestamp *ts); - + PGTYPEStimestamp_fmt_asc @@ -3175,7 +3175,7 @@ int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmt - + PGTYPEStimestamp_sub @@ -3196,7 +3196,7 @@ int PGTYPEStimestamp_sub(timestamp *ts1, timestamp *ts2, interval *iv); - + PGTYPEStimestamp_defmt_asc @@ -3217,13 +3217,13 @@ int PGTYPEStimestamp_defmt_asc(char *str, char *fmt, timestamp *d); This is the reverse function to . See the documentation there in + linkend="pgtypestimestampfmtasc">. See the documentation there in order to find out about the possible formatting mask entries. - + PGTYPEStimestamp_add_interval @@ -3243,7 +3243,7 @@ int PGTYPEStimestamp_add_interval(timestamp *tin, interval *span, timestamp *tou - + PGTYPEStimestamp_sub_interval @@ -3277,7 +3277,7 @@ int PGTYPEStimestamp_sub_interval(timestamp *tin, interval *span, timestamp *tou The following functions can be used to work with the interval type: - + PGTYPESinterval_new @@ -3289,7 +3289,7 @@ interval *PGTYPESinterval_new(void); - + PGTYPESinterval_free @@ -3301,7 +3301,7 @@ void PGTYPESinterval_new(interval *intvl); - + PGTYPESinterval_from_asc @@ -3319,7 +3319,7 @@ interval *PGTYPESinterval_from_asc(char *str, char **endptr); - + PGTYPESinterval_to_asc @@ -3334,7 +3334,7 @@ char *PGTYPESinterval_to_asc(interval *span); - + PGTYPESinterval_copy @@ -3543,7 +3543,7 @@ void PGTYPESdecimal_free(decimal *var); Special Constants of pgtypeslib - + PGTYPESInvalidTimestamp @@ -5868,7 +5868,7 @@ ECPG = ecpg For more details about the ECPGget_PGconn(), see . For information about the large - object function interface, see . 
+ object function interface, see . @@ -8653,7 +8653,7 @@ void rtoday(date *d); that it sets to the current date. - Internally this function uses the + Internally this function uses the function. @@ -8678,7 +8678,7 @@ int rjulmdy(date d, short mdy[3]); The function always returns 0 at the moment. - Internally the function uses the + Internally the function uses the function. @@ -8748,7 +8748,7 @@ int rdefmtdate(date *d, char *fmt, char *str); Internally this function is implemented to use the function. See the reference there for a + linkend="pgtypesdatedefmtasc"> function. See the reference there for a table of example input. @@ -8771,7 +8771,7 @@ int rfmtdate(date d, char *fmt, char *str); On success, 0 is returned and a negative value if an error occurred. - Internally this function uses the + Internally this function uses the function, see the reference there for examples. @@ -8795,7 +8795,7 @@ int rmdyjul(short mdy[3], date *d); Internally the function is implemented to use the function . + linkend="pgtypesdatemdyjul">. @@ -8851,7 +8851,7 @@ int rdayofweek(date d); Internally the function is implemented to use the function . + linkend="pgtypesdatedayofweek">. @@ -8889,7 +8889,7 @@ int dtcvasc(char *str, timestamp *ts); Internally this function uses the function. See the reference there + linkend="pgtypestimestampfromasc"> function. See the reference there for a table with example inputs. @@ -8911,7 +8911,7 @@ dtcvfmtasc(char *inbuf, char *fmtstr, timestamp *dtvalue) This function is implemented by means of the function. See the documentation + linkend="pgtypestimestampdefmtasc"> function. See the documentation there for a list of format specifiers that can be used. @@ -8983,7 +8983,7 @@ int dttofmtasc(timestamp *ts, char *output, int str_len, char *fmtstr); Internally, this function uses the function. See the reference there for + linkend="pgtypestimestampfmtasc"> function. See the reference there for information on what format mask specifiers can be used. diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index c672988cc5..e571292bf4 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -17866,7 +17866,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. They provide most of the same information, from the same source, as - , although in a form better suited + , although in a form better suited to SQL functions. @@ -20376,7 +20376,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); For more information about creating triggers, see - . + . diff --git a/doc/src/sgml/geqo.sgml b/doc/src/sgml/geqo.sgml index 99ee3ebca0..0f91272c54 100644 --- a/doc/src/sgml/geqo.sgml +++ b/doc/src/sgml/geqo.sgml @@ -320,13 +320,13 @@ - + - + diff --git a/doc/src/sgml/gin.sgml b/doc/src/sgml/gin.sgml index 873627a210..d63d8af440 100644 --- a/doc/src/sgml/gin.sgml +++ b/doc/src/sgml/gin.sgml @@ -1,6 +1,6 @@ - + GIN Indexes diff --git a/doc/src/sgml/gist.sgml b/doc/src/sgml/gist.sgml index 4e4470d439..f2f9ca0853 100644 --- a/doc/src/sgml/gist.sgml +++ b/doc/src/sgml/gist.sgml @@ -1,6 +1,6 @@ - + GiST Indexes diff --git a/doc/src/sgml/history.sgml b/doc/src/sgml/history.sgml index d1535469f9..b59e65bb20 100644 --- a/doc/src/sgml/history.sgml +++ b/doc/src/sgml/history.sgml @@ -31,12 +31,12 @@ Office (ARO), the National Science Foundation (NSF), and ESL, Inc. The implementation of POSTGRES began in 1986. 
The initial - concepts for the system were presented in , + concepts for the system were presented in , and the definition of the initial data model appeared in . The design of the rule system at that time was - described in . The rationale and + linkend="rowe87">. The design of the rule system at that time was + described in . The rationale and architecture of the storage manager were detailed in . + linkend="ston87b">. @@ -44,10 +44,10 @@ releases since then. The first demoware system became operational in 1987 and was shown at the 1988 ACM-SIGMOD Conference. Version 1, described in - , was released to a few external users in + , was released to a few external users in June 1989. In response to a critique of the first rule system - (), the rule system was redesigned (), and Version 2 was released in June 1990 with + (), the rule system was redesigned (), and Version 2 was released in June 1990 with the new rule system. Version 3 appeared in 1991 and added support for multiple storage managers, an improved query executor, and a rewritten rule system. For the most part, subsequent releases diff --git a/doc/src/sgml/indices.sgml b/doc/src/sgml/indices.sgml index 4cdd387b7b..248ed7e8eb 100644 --- a/doc/src/sgml/indices.sgml +++ b/doc/src/sgml/indices.sgml @@ -98,8 +98,8 @@ CREATE INDEX test1_id_index ON test1 (id); In production environments this is often unacceptable. It is possible to allow writes to occur in parallel with index creation, but there are several caveats to be aware of — - for more information see . + for more information see . @@ -232,7 +232,7 @@ CREATE INDEX name ON table documented in . Many other GiST operator classes are available in the contrib collection or as separate - projects. For more information see . + projects. For more information see . @@ -278,7 +278,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; these operators.) The SP-GiST operator classes included in the standard distribution are documented in . - For more information see . + For more information see . @@ -319,7 +319,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; documented in . Many other GIN operator classes are available in the contrib collection or as separate - projects. For more information see . + projects. For more information see . @@ -352,7 +352,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; The BRIN operator classes included in the standard distribution are documented in . - For more information see . + For more information see . @@ -962,8 +962,8 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) More information about partial indexes can be found in , , and . + linkend="ston89b">, , and . diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index a7e2653371..9f468b7fdc 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -2447,7 +2447,7 @@ PGresult *PQprepare(PGconn *conn, PQprepare creates a prepared statement for later execution with PQexecPrepared. This feature allows commands to be executed repeatedly without being parsed and - planned each time; see for details. + planned each time; see for details. PQprepare is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. 
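The PQprepare discussion above maps onto the SQL-level PREPARE/EXECUTE commands it references; a hedged sketch of the parse-once, execute-many pattern (the emp table and its columns are placeholders, not part of the patch):

    PREPARE fetch_emp (integer) AS
        SELECT name FROM emp WHERE id = $1;   -- parsed and planned once

    EXECUTE fetch_emp(42);                    -- executed repeatedly
    EXECUTE fetch_emp(43);

    DEALLOCATE fetch_emp;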
diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index 2e930ac240..c743b5c0ba 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -1,6 +1,6 @@ - + Large Objects large object diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml index fa0bb56b7b..676ab1f5ad 100644 --- a/doc/src/sgml/logical-replication.sgml +++ b/doc/src/sgml/logical-replication.sgml @@ -126,7 +126,7 @@ fallback if no other solution is possible. If a replica identity other than full is set on the publisher side, a replica identity comprising the same or fewer columns must also be set on the subscriber - side. See for details on + side. See for details on how to set the replica identity. If a table without a replica identity is added to a publication that replicates UPDATE or DELETE operations then diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml index c02f6e9765..3b268c3f3c 100644 --- a/doc/src/sgml/logicaldecoding.sgml +++ b/doc/src/sgml/logicaldecoding.sgml @@ -24,7 +24,7 @@ by INSERT and the new row version created by UPDATE. Availability of old row versions for UPDATE and DELETE depends on - the configured replica identity (see ). + the configured replica identity (see ). @@ -576,8 +576,8 @@ typedef void (*LogicalDecodeChangeCB) (struct LogicalDecodingContext *ctx, Only changes in user defined tables that are not unlogged - (see ) and not temporary - (see ) can be extracted using + (see ) and not temporary + (see ) can be extracted using logical decoding. diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml index 75cb39359f..a0ca2851e5 100644 --- a/doc/src/sgml/mvcc.sgml +++ b/doc/src/sgml/mvcc.sgml @@ -929,7 +929,7 @@ ERROR: could not serialize access due to read/write dependencies among transact CREATE STATISTICS and ALTER TABLE VALIDATE and other ALTER TABLE variants (for full details see - ). + ). @@ -972,7 +972,7 @@ ERROR: could not serialize access due to read/write dependencies among transact Acquired by CREATE COLLATION, CREATE TRIGGER, and many forms of - ALTER TABLE (see ). + ALTER TABLE (see ). diff --git a/doc/src/sgml/passwordcheck.sgml b/doc/src/sgml/passwordcheck.sgml index 6e6e4ef435..d034f8887f 100644 --- a/doc/src/sgml/passwordcheck.sgml +++ b/doc/src/sgml/passwordcheck.sgml @@ -10,8 +10,8 @@ The passwordcheck module checks users' passwords whenever they are set with - or - . + or + . If a password is considered too weak, it will be rejected and the command will terminate with an error. diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml index 6a5182d85b..fed5c956a2 100644 --- a/doc/src/sgml/perform.sgml +++ b/doc/src/sgml/perform.sgml @@ -1848,7 +1848,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - Create unlogged + Create unlogged tables to avoid WAL writes, though it makes the tables non-crash-safe. diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml index c2c1aaa208..52cc37a1d6 100644 --- a/doc/src/sgml/queries.sgml +++ b/doc/src/sgml/queries.sgml @@ -795,7 +795,7 @@ SELECT * AS t1(proname name, prosrc text) WHERE proname LIKE 'bytea%'; - The function + The function (part of the module) executes a remote query. It is declared to return record since it might be used for any kind of query. 
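A complete version of the record-returning dblink query referenced in the queries.sgml hunk above, for illustration (the connection string 'dbname=mydb' is an assumption; any reachable database works). The AS clause supplies the column definition list that a function declared to return record requires:

    SELECT *
      FROM dblink('dbname=mydb', 'SELECT proname, prosrc FROM pg_proc')
        AS t1(proname name, prosrc text)
     WHERE proname LIKE 'bytea%';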
diff --git a/doc/src/sgml/query.sgml b/doc/src/sgml/query.sgml index fc60febcbd..b139c34577 100644 --- a/doc/src/sgml/query.sgml +++ b/doc/src/sgml/query.sgml @@ -12,7 +12,7 @@ tutorial is only intended to give you an introduction and is in no way a complete tutorial on SQL. Numerous books have been written on SQL, including and . + linkend="melt93"> and . You should be aware that some PostgreSQL language features are extensions to the standard. diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml index b585fd3d2a..ef2bff9cd9 100644 --- a/doc/src/sgml/rangetypes.sgml +++ b/doc/src/sgml/rangetypes.sgml @@ -65,7 +65,7 @@ In addition, you can define your own range types; - see for more information. + see for more information. @@ -406,7 +406,7 @@ SELECT '[11:10, 23:00]'::timerange; - See for more information about creating + See for more information about creating range types. @@ -462,7 +462,7 @@ CREATE INDEX reservation_idx ON reservation USING GIST (during); While UNIQUE is a natural constraint for scalar values, it is usually unsuitable for range types. Instead, an exclusion constraint is often more appropriate - (see CREATE TABLE + (see CREATE TABLE ... CONSTRAINT ... EXCLUDE). Exclusion constraints allow the specification of constraints such as non-overlapping on a range type. For example: diff --git a/doc/src/sgml/ref/abort.sgml b/doc/src/sgml/ref/abort.sgml index 285d0d4ac6..d341595785 100644 --- a/doc/src/sgml/ref/abort.sgml +++ b/doc/src/sgml/ref/abort.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/abort.sgml PostgreSQL documentation --> - + ABORT @@ -33,7 +33,7 @@ ABORT [ WORK | TRANSACTION ] all the updates made by the transaction to be discarded. This command is identical in behavior to the standard SQL command - , + , and is present only for historical reasons. @@ -58,7 +58,7 @@ ABORT [ WORK | TRANSACTION ] Notes - Use to + Use to successfully terminate a transaction. 
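To make the ABORT/ROLLBACK/COMMIT relationship above concrete, a small hedged sketch (the accounts table is illustrative):

    BEGIN;
    UPDATE accounts SET balance = balance - 100.00 WHERE name = 'alice';
    ROLLBACK;   -- discards the update; the historical ABORT behaves identically

    BEGIN;
    UPDATE accounts SET balance = balance - 100.00 WHERE name = 'alice';
    COMMIT;     -- makes the update permanent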
diff --git a/doc/src/sgml/ref/alter_aggregate.sgml b/doc/src/sgml/ref/alter_aggregate.sgml index 43f0a1609b..e00e726ad8 100644 --- a/doc/src/sgml/ref/alter_aggregate.sgml +++ b/doc/src/sgml/ref/alter_aggregate.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_aggregate.sgml PostgreSQL documentation --> - + ALTER AGGREGATE diff --git a/doc/src/sgml/ref/alter_collation.sgml b/doc/src/sgml/ref/alter_collation.sgml index 9d77ee5c2c..c7ad7437e8 100644 --- a/doc/src/sgml/ref/alter_collation.sgml +++ b/doc/src/sgml/ref/alter_collation.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_collation.sgml PostgreSQL documentation --> - + ALTER COLLATION diff --git a/doc/src/sgml/ref/alter_conversion.sgml b/doc/src/sgml/ref/alter_conversion.sgml index 83fcbbd5a5..08ed5e28fb 100644 --- a/doc/src/sgml/ref/alter_conversion.sgml +++ b/doc/src/sgml/ref/alter_conversion.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_conversion.sgml PostgreSQL documentation --> - + ALTER CONVERSION diff --git a/doc/src/sgml/ref/alter_database.sgml b/doc/src/sgml/ref/alter_database.sgml index 35e4123cad..1e09b5df1d 100644 --- a/doc/src/sgml/ref/alter_database.sgml +++ b/doc/src/sgml/ref/alter_database.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_database.sgml PostgreSQL documentation --> - + ALTER DATABASE diff --git a/doc/src/sgml/ref/alter_default_privileges.sgml b/doc/src/sgml/ref/alter_default_privileges.sgml index 6c34f2446a..bc7401f845 100644 --- a/doc/src/sgml/ref/alter_default_privileges.sgml +++ b/doc/src/sgml/ref/alter_default_privileges.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_default_privileges.sgml PostgreSQL documentation --> - + ALTER DEFAULT PRIVILEGES diff --git a/doc/src/sgml/ref/alter_domain.sgml b/doc/src/sgml/ref/alter_domain.sgml index 96a7db95ec..26e95aefcf 100644 --- a/doc/src/sgml/ref/alter_domain.sgml +++ b/doc/src/sgml/ref/alter_domain.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_domain.sgml PostgreSQL documentation --> - + ALTER DOMAIN @@ -80,7 +80,7 @@ ALTER DOMAIN name This form adds a new constraint to a domain using the same syntax as - . + . When a new constraint is added to a domain, all columns using that domain will be checked against the newly added constraint. 
These checks can be suppressed by adding the new constraint using the @@ -325,7 +325,7 @@ ALTER DOMAIN zipcode SET SCHEMA customers; - + Compatibility @@ -338,7 +338,7 @@ ALTER DOMAIN zipcode SET SCHEMA customers; - + See Also diff --git a/doc/src/sgml/ref/alter_event_trigger.sgml b/doc/src/sgml/ref/alter_event_trigger.sgml index 38b971fb08..b913ac9a5b 100644 --- a/doc/src/sgml/ref/alter_event_trigger.sgml +++ b/doc/src/sgml/ref/alter_event_trigger.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_event_trigger.sgml PostgreSQL documentation --> - + ALTER EVENT TRIGGER diff --git a/doc/src/sgml/ref/alter_extension.sgml b/doc/src/sgml/ref/alter_extension.sgml index c6c831fa30..c2b0669c38 100644 --- a/doc/src/sgml/ref/alter_extension.sgml +++ b/doc/src/sgml/ref/alter_extension.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_extension.sgml PostgreSQL documentation --> - + ALTER EXTENSION @@ -319,7 +319,7 @@ ALTER EXTENSION hstore ADD FUNCTION populate_record(anyelement, hstore); - + See Also diff --git a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml index 1c0a26de6b..21bc83e512 100644 --- a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_foreign_data_wrapper.sgml PostgreSQL documentation --> - + ALTER FOREIGN DATA WRAPPER diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml index 44d981a5bd..df3d6d0696 100644 --- a/doc/src/sgml/ref/alter_foreign_table.sgml +++ b/doc/src/sgml/ref/alter_foreign_table.sgml @@ -3,7 +3,7 @@ doc/src/sgml/rel/alter_foreign_table.sgml PostgreSQL documentation --> - + ALTER FOREIGN TABLE @@ -72,7 +72,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form adds a new column to the foreign table, using the same syntax as - . + . Unlike the case when adding a column to a regular table, nothing happens to the underlying storage: this action simply declares that some new column is now accessible through the foreign table. @@ -173,7 +173,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form adds a new constraint to a foreign table, using the same - syntax as . + syntax as . Currently only CHECK constraints are supported. @@ -182,7 +182,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name.) + in .) If the constraint is marked NOT VALID, then it isn't assumed to hold, but is only recorded for possible future use. diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml index cdecf631b1..fd35e98a88 100644 --- a/doc/src/sgml/ref/alter_function.sgml +++ b/doc/src/sgml/ref/alter_function.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_function.sgml PostgreSQL documentation --> - + ALTER FUNCTION diff --git a/doc/src/sgml/ref/alter_group.sgml b/doc/src/sgml/ref/alter_group.sgml index a900145873..172a62a6f7 100644 --- a/doc/src/sgml/ref/alter_group.sgml +++ b/doc/src/sgml/ref/alter_group.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_group.sgml PostgreSQL documentation --> - + ALTER GROUP @@ -50,8 +50,8 @@ ALTER GROUP group_name RENAME TO group for this purpose.) These variants are effectively equivalent to granting or revoking membership in the role named as the group; so the preferred way to do this is to use - or - . + or + . 
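The preferred GRANT/REVOKE idiom mentioned at the end of the ALTER GROUP hunk above, sketched with hypothetical role names:

    -- instead of ALTER GROUP staff ADD USER alice / DROP USER bob:
    GRANT staff TO alice;
    REVOKE staff FROM bob;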
diff --git a/doc/src/sgml/ref/alter_index.sgml b/doc/src/sgml/ref/alter_index.sgml index 30e399e62c..5d0b792e50 100644 --- a/doc/src/sgml/ref/alter_index.sgml +++ b/doc/src/sgml/ref/alter_index.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_index.sgml PostgreSQL documentation --> - + ALTER INDEX @@ -70,7 +70,7 @@ ALTER INDEX ALL IN TABLESPACE name this command, use ALTER DATABASE or explicit ALTER INDEX invocations instead if desired. See also - . + . @@ -91,11 +91,11 @@ ALTER INDEX ALL IN TABLESPACE name This form changes one or more index-method-specific storage parameters for the index. See - + for details on the available parameters. Note that the index contents will not be modified immediately by this command; depending on the parameter you might need to rebuild the index with - + to get the desired effects. @@ -225,7 +225,7 @@ ALTER INDEX ALL IN TABLESPACE name These operations are also possible using - . + . ALTER INDEX is in fact just an alias for the forms of ALTER TABLE that apply to indexes. diff --git a/doc/src/sgml/ref/alter_language.sgml b/doc/src/sgml/ref/alter_language.sgml index 63d9ecd924..389824e3d2 100644 --- a/doc/src/sgml/ref/alter_language.sgml +++ b/doc/src/sgml/ref/alter_language.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_language.sgml PostgreSQL documentation --> - + ALTER LANGUAGE diff --git a/doc/src/sgml/ref/alter_large_object.sgml b/doc/src/sgml/ref/alter_large_object.sgml index 5e680f7720..0fbb8d5b62 100644 --- a/doc/src/sgml/ref/alter_large_object.sgml +++ b/doc/src/sgml/ref/alter_large_object.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_large_object.sgml PostgreSQL documentation --> - + ALTER LARGE OBJECT diff --git a/doc/src/sgml/ref/alter_materialized_view.sgml b/doc/src/sgml/ref/alter_materialized_view.sgml index eaea819744..f41b5058ff 100644 --- a/doc/src/sgml/ref/alter_materialized_view.sgml +++ b/doc/src/sgml/ref/alter_materialized_view.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_materialized_view.sgml PostgreSQL documentation --> - + ALTER MATERIALIZED VIEW diff --git a/doc/src/sgml/ref/alter_opclass.sgml b/doc/src/sgml/ref/alter_opclass.sgml index 834f3e4231..e69bcf2dd7 100644 --- a/doc/src/sgml/ref/alter_opclass.sgml +++ b/doc/src/sgml/ref/alter_opclass.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_opclass.sgml PostgreSQL documentation --> - + ALTER OPERATOR CLASS diff --git a/doc/src/sgml/ref/alter_operator.sgml b/doc/src/sgml/ref/alter_operator.sgml index 3cc28d5b18..4c6f75efff 100644 --- a/doc/src/sgml/ref/alter_operator.sgml +++ b/doc/src/sgml/ref/alter_operator.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_operator.sgml PostgreSQL documentation --> - + ALTER OPERATOR diff --git a/doc/src/sgml/ref/alter_opfamily.sgml b/doc/src/sgml/ref/alter_opfamily.sgml index d15fbfceea..f327267ff8 100644 --- a/doc/src/sgml/ref/alter_opfamily.sgml +++ b/doc/src/sgml/ref/alter_opfamily.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_opfamily.sgml PostgreSQL documentation --> - + ALTER OPERATOR FAMILY diff --git a/doc/src/sgml/ref/alter_policy.sgml b/doc/src/sgml/ref/alter_policy.sgml index 3eb9155602..a49f2fc5a5 100644 --- a/doc/src/sgml/ref/alter_policy.sgml +++ b/doc/src/sgml/ref/alter_policy.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_policy.sgml PostgreSQL documentation --> - + ALTER POLICY diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml index 801404e0cf..5557f9b231 100644 --- a/doc/src/sgml/ref/alter_publication.sgml +++ b/doc/src/sgml/ref/alter_publication.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_publication.sgml 
PostgreSQL documentation --> - + ALTER PUBLICATION @@ -101,7 +101,7 @@ ALTER PUBLICATION name RENAME TO This clause alters publication parameters originally set by - . See there for more information. + . See there for more information. diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml index e30ca10454..c135364d4e 100644 --- a/doc/src/sgml/ref/alter_role.sgml +++ b/doc/src/sgml/ref/alter_role.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_role.sgml PostgreSQL documentation --> - + ALTER ROLE @@ -65,8 +65,8 @@ ALTER ROLE { role_specification | A . (All the possible attributes are covered, except that there are no options for adding or removing memberships; use - and - for that.) + and + for that.) Attributes not mentioned in the command retain their previous settings. Database superusers can change any of these settings for any role. Roles having CREATEROLE privilege can change any of these @@ -173,7 +173,7 @@ ALTER ROLE { role_specification | A These clauses alter attributes originally set by - . For more information, see the + . For more information, see the CREATE ROLE reference page. @@ -236,14 +236,14 @@ ALTER ROLE { role_specification | A Notes - Use - to add new roles, and to remove a role. + Use + to add new roles, and to remove a role. ALTER ROLE cannot change a role's memberships. - Use and - + Use and + to do that. diff --git a/doc/src/sgml/ref/alter_rule.sgml b/doc/src/sgml/ref/alter_rule.sgml index 26791b379b..f8833feee7 100644 --- a/doc/src/sgml/ref/alter_rule.sgml +++ b/doc/src/sgml/ref/alter_rule.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_rule.sgml PostgreSQL documentation --> - + ALTER RULE diff --git a/doc/src/sgml/ref/alter_schema.sgml b/doc/src/sgml/ref/alter_schema.sgml index 2ca406b914..dc91420954 100644 --- a/doc/src/sgml/ref/alter_schema.sgml +++ b/doc/src/sgml/ref/alter_schema.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_schema.sgml PostgreSQL documentation --> - + ALTER SCHEMA diff --git a/doc/src/sgml/ref/alter_sequence.sgml b/doc/src/sgml/ref/alter_sequence.sgml index 9b8ad36522..655b35c6fc 100644 --- a/doc/src/sgml/ref/alter_sequence.sgml +++ b/doc/src/sgml/ref/alter_sequence.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_sequence.sgml PostgreSQL documentation --> - + ALTER SEQUENCE diff --git a/doc/src/sgml/ref/alter_server.sgml b/doc/src/sgml/ref/alter_server.sgml index 05e11f5ef2..53529abff7 100644 --- a/doc/src/sgml/ref/alter_server.sgml +++ b/doc/src/sgml/ref/alter_server.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_server.sgml PostgreSQL documentation --> - + ALTER SERVER diff --git a/doc/src/sgml/ref/alter_statistics.sgml b/doc/src/sgml/ref/alter_statistics.sgml index 87acb879b0..d7b012fd54 100644 --- a/doc/src/sgml/ref/alter_statistics.sgml +++ b/doc/src/sgml/ref/alter_statistics.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_statistics.sgml PostgreSQL documentation --> - + ALTER STATISTICS diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml index b76a21f654..7e0240d696 100644 --- a/doc/src/sgml/ref/alter_subscription.sgml +++ b/doc/src/sgml/ref/alter_subscription.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_subscription.sgml PostgreSQL documentation --> - + ALTER SUBSCRIPTION @@ -68,7 +68,7 @@ ALTER SUBSCRIPTION name RENAME TO < This clause alters the connection property originally set by - . See there for more + . See there for more information. @@ -79,7 +79,7 @@ ALTER SUBSCRIPTION name RENAME TO < Changes list of subscribed publications. See - for more information. + for more information. 
By default this command will also act like REFRESH PUBLICATION. @@ -162,7 +162,7 @@ ALTER SUBSCRIPTION name RENAME TO < This clause alters parameters originally set by - . See there for more + . See there for more information. The allowed options are slot_name and synchronous_commit diff --git a/doc/src/sgml/ref/alter_system.sgml b/doc/src/sgml/ref/alter_system.sgml index b8ef117b7d..887c4392dd 100644 --- a/doc/src/sgml/ref/alter_system.sgml +++ b/doc/src/sgml/ref/alter_system.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_system.sgml PostgreSQL documentation --> - + ALTER SYSTEM @@ -135,8 +135,8 @@ ALTER SYSTEM RESET wal_level; See Also - - + + diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index b4b8dab911..234ccb70e1 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_table.sgml PostgreSQL documentation --> - + ALTER TABLE @@ -109,7 +109,7 @@ ALTER TABLE [ IF EXISTS ] name This form adds a new column to the table, using the same syntax as - . If IF NOT EXISTS + . If IF NOT EXISTS is specified and a column already exists with this name, no error is thrown. @@ -314,7 +314,7 @@ ALTER TABLE [ IF EXISTS ] name This form adds a new constraint to a table using the same syntax as - , plus the option NOT + , plus the option NOT VALID, which is currently only allowed for foreign key and CHECK constraints. If the constraint is marked NOT VALID, the @@ -483,7 +483,7 @@ ALTER TABLE [ IF EXISTS ] name even if row level security is disabled - in this case, the policies will NOT be applied and the policies will be ignored. See also - . + . @@ -498,7 +498,7 @@ ALTER TABLE [ IF EXISTS ] name disabled (the default) then row level security will not be applied when the user is the table owner. See also - . + . @@ -508,7 +508,7 @@ ALTER TABLE [ IF EXISTS ] name This form selects the default index for future - + operations. It does not actually re-cluster the table. @@ -522,7 +522,7 @@ ALTER TABLE [ IF EXISTS ] name This form removes the most recently used - + index specification from the table. This affects future cluster operations that don't specify an index. @@ -582,7 +582,7 @@ ALTER TABLE [ IF EXISTS ] name information_schema relations are not considered part of the system catalogs and will be moved. See also - . + . @@ -592,7 +592,7 @@ ALTER TABLE [ IF EXISTS ] name This form changes the table from unlogged to logged or vice-versa - (see ). It cannot be applied + (see ). It cannot be applied to a temporary table. @@ -603,13 +603,13 @@ ALTER TABLE [ IF EXISTS ] name This form changes one or more storage parameters for the table. See - + for details on the available parameters. Note that the table contents will not be modified immediately by this command; depending on the parameter you might need to rewrite the table to get the desired effects. - That can be done with VACUUM - FULL, or one of the forms + That can be done with VACUUM + FULL, or one of the forms of ALTER TABLE that forces a table rewrite. For planner related parameters, changes will take effect from the next time the table is locked so currently executing queries will not be @@ -722,7 +722,7 @@ ALTER TABLE [ IF EXISTS ] name - + REPLICA IDENTITY @@ -810,7 +810,7 @@ ALTER TABLE [ IF EXISTS ] name If the new partition is a foreign table, nothing is done to verify that all the rows in the foreign table obey the partition constraint. - (See the discussion in about + (See the discussion in about constraints on the foreign table.) 
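A hedged sketch of the ATTACH PARTITION form discussed at the end of the ALTER TABLE hunk above (table names and bounds are illustrative). Rows of a regular table are validated against the partition constraint on attach, whereas a foreign table's rows are, per the text, taken on trust:

    ALTER TABLE measurement
        ATTACH PARTITION measurement_y2017m08
        FOR VALUES FROM ('2017-08-01') TO ('2017-09-01');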
diff --git a/doc/src/sgml/ref/alter_tablespace.sgml b/doc/src/sgml/ref/alter_tablespace.sgml index def554bfb3..4d6f011e2f 100644 --- a/doc/src/sgml/ref/alter_tablespace.sgml +++ b/doc/src/sgml/ref/alter_tablespace.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_tablespace.sgml PostgreSQL documentation --> - + ALTER TABLESPACE diff --git a/doc/src/sgml/ref/alter_trigger.sgml b/doc/src/sgml/ref/alter_trigger.sgml index 2e872cf11f..4b4dacbf28 100644 --- a/doc/src/sgml/ref/alter_trigger.sgml +++ b/doc/src/sgml/ref/alter_trigger.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_trigger.sgml PostgreSQL documentation --> - + ALTER TRIGGER @@ -90,7 +90,7 @@ ALTER TRIGGER name ON The ability to temporarily enable or disable a trigger is provided by - , not by + , not by ALTER TRIGGER, because ALTER TRIGGER has no convenient way to express the option of enabling or disabling all of a table's triggers at once. diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml index b44aac9bf5..630927c15b 100644 --- a/doc/src/sgml/ref/alter_tsconfig.sgml +++ b/doc/src/sgml/ref/alter_tsconfig.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_tsconfig.sgml PostgreSQL documentation --> - + ALTER TEXT SEARCH CONFIGURATION diff --git a/doc/src/sgml/ref/alter_tsdictionary.sgml b/doc/src/sgml/ref/alter_tsdictionary.sgml index 16d76687ab..75a8b1dac6 100644 --- a/doc/src/sgml/ref/alter_tsdictionary.sgml +++ b/doc/src/sgml/ref/alter_tsdictionary.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_tsdictionary.sgml PostgreSQL documentation --> - + ALTER TEXT SEARCH DICTIONARY diff --git a/doc/src/sgml/ref/alter_tsparser.sgml b/doc/src/sgml/ref/alter_tsparser.sgml index 737a507565..c71faeec05 100644 --- a/doc/src/sgml/ref/alter_tsparser.sgml +++ b/doc/src/sgml/ref/alter_tsparser.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_tsparser.sgml PostgreSQL documentation --> - + ALTER TEXT SEARCH PARSER diff --git a/doc/src/sgml/ref/alter_tstemplate.sgml b/doc/src/sgml/ref/alter_tstemplate.sgml index d9a753017b..210baa7125 100644 --- a/doc/src/sgml/ref/alter_tstemplate.sgml +++ b/doc/src/sgml/ref/alter_tstemplate.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_tstemplate.sgml PostgreSQL documentation --> - + ALTER TEXT SEARCH TEMPLATE diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml index 75be3187f1..7c32f0c5d5 100644 --- a/doc/src/sgml/ref/alter_type.sgml +++ b/doc/src/sgml/ref/alter_type.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_type.sgml PostgreSQL documentation --> - + ALTER TYPE @@ -52,7 +52,7 @@ ALTER TYPE name RENAME VALUE This form adds a new attribute to a composite type, using the same syntax as - . + . 
@@ -364,7 +364,7 @@ ALTER TYPE colors RENAME VALUE 'purple' TO 'mauve'; - + See Also diff --git a/doc/src/sgml/ref/alter_user.sgml b/doc/src/sgml/ref/alter_user.sgml index 1a240ff430..8e03510bd4 100644 --- a/doc/src/sgml/ref/alter_user.sgml +++ b/doc/src/sgml/ref/alter_user.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_user.sgml PostgreSQL documentation --> - + ALTER USER diff --git a/doc/src/sgml/ref/alter_user_mapping.sgml b/doc/src/sgml/ref/alter_user_mapping.sgml index 18271d5199..eecff388cb 100644 --- a/doc/src/sgml/ref/alter_user_mapping.sgml +++ b/doc/src/sgml/ref/alter_user_mapping.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_user_mapping.sgml PostgreSQL documentation --> - + ALTER USER MAPPING diff --git a/doc/src/sgml/ref/alter_view.sgml b/doc/src/sgml/ref/alter_view.sgml index e7180b4409..f33519bd79 100644 --- a/doc/src/sgml/ref/alter_view.sgml +++ b/doc/src/sgml/ref/alter_view.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/alter_view.sgml PostgreSQL documentation --> - + ALTER VIEW diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml index 12f2f09337..bc33f0fa23 100644 --- a/doc/src/sgml/ref/analyze.sgml +++ b/doc/src/sgml/ref/analyze.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/analyze.sgml PostgreSQL documentation --> - + ANALYZE diff --git a/doc/src/sgml/ref/begin.sgml b/doc/src/sgml/ref/begin.sgml index fd6f073d18..45f85aea34 100644 --- a/doc/src/sgml/ref/begin.sgml +++ b/doc/src/sgml/ref/begin.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/begin.sgml PostgreSQL documentation --> - + BEGIN @@ -95,8 +95,8 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_mode - Use or - + Use or + to terminate a transaction block. diff --git a/doc/src/sgml/ref/close.sgml b/doc/src/sgml/ref/close.sgml index 4d71c45797..7ecc0cc463 100644 --- a/doc/src/sgml/ref/close.sgml +++ b/doc/src/sgml/ref/close.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/close.sgml PostgreSQL documentation --> - + CLOSE diff --git a/doc/src/sgml/ref/cluster.sgml b/doc/src/sgml/ref/cluster.sgml index 5c5db75077..1210b5dffb 100644 --- a/doc/src/sgml/ref/cluster.sgml +++ b/doc/src/sgml/ref/cluster.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/cluster.sgml PostgreSQL documentation --> - + CLUSTER @@ -57,7 +57,7 @@ CLUSTER [VERBOSE] CLUSTER table_name reclusters the table using the same index as before. You can also use the CLUSTER or SET WITHOUT CLUSTER - forms of to set the index to be used for + forms of to set the index to be used for future cluster operations, or to clear any previous setting. diff --git a/doc/src/sgml/ref/clusterdb.sgml b/doc/src/sgml/ref/clusterdb.sgml index 081bbc5f7a..d2d4b52f48 100644 --- a/doc/src/sgml/ref/clusterdb.sgml +++ b/doc/src/sgml/ref/clusterdb.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/clusterdb.sgml PostgreSQL documentation --> - + clusterdb @@ -60,7 +60,7 @@ PostgreSQL documentation clusterdb is a wrapper around the SQL - command . + command . There is no effective difference between clustering databases via this utility and via other methods for accessing the server. @@ -289,8 +289,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. 
Also, any default connection settings and environment diff --git a/doc/src/sgml/ref/comment.sgml b/doc/src/sgml/ref/comment.sgml index ab2e09d521..d705792a45 100644 --- a/doc/src/sgml/ref/comment.sgml +++ b/doc/src/sgml/ref/comment.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/comment.sgml PostgreSQL documentation --> - + COMMENT diff --git a/doc/src/sgml/ref/commit.sgml b/doc/src/sgml/ref/commit.sgml index 8e3f53957e..e41d6ff3cf 100644 --- a/doc/src/sgml/ref/commit.sgml +++ b/doc/src/sgml/ref/commit.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/commit.sgml PostgreSQL documentation --> - + COMMIT @@ -55,7 +55,7 @@ COMMIT [ WORK | TRANSACTION ] Notes - Use to + Use to abort a transaction. diff --git a/doc/src/sgml/ref/commit_prepared.sgml b/doc/src/sgml/ref/commit_prepared.sgml index 35bbf85af7..c200a3e573 100644 --- a/doc/src/sgml/ref/commit_prepared.sgml +++ b/doc/src/sgml/ref/commit_prepared.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/commit_prepared.sgml PostgreSQL documentation --> - + COMMIT PREPARED diff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml index 8f0974b256..eb91ad971d 100644 --- a/doc/src/sgml/ref/copy.sgml +++ b/doc/src/sgml/ref/copy.sgml @@ -4,7 +4,7 @@ PostgreSQL documentation --> - + COPY @@ -451,7 +451,7 @@ COPY count Do not confuse COPY with the psql instruction - \copy. \copy invokes + \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, diff --git a/doc/src/sgml/ref/create_aggregate.sgml b/doc/src/sgml/ref/create_aggregate.sgml index 3de30fa580..4a8cee8057 100644 --- a/doc/src/sgml/ref/create_aggregate.sgml +++ b/doc/src/sgml/ref/create_aggregate.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_aggregate.sgml PostgreSQL documentation --> - + CREATE AGGREGATE diff --git a/doc/src/sgml/ref/create_cast.sgml b/doc/src/sgml/ref/create_cast.sgml index 89af1e5051..cd4565e336 100644 --- a/doc/src/sgml/ref/create_cast.sgml +++ b/doc/src/sgml/ref/create_cast.sgml @@ -1,6 +1,6 @@ - + CREATE CAST diff --git a/doc/src/sgml/ref/create_collation.sgml b/doc/src/sgml/ref/create_collation.sgml index d4e99e925f..cc76b04027 100644 --- a/doc/src/sgml/ref/create_collation.sgml +++ b/doc/src/sgml/ref/create_collation.sgml @@ -1,6 +1,6 @@ - + CREATE COLLATION diff --git a/doc/src/sgml/ref/create_conversion.sgml b/doc/src/sgml/ref/create_conversion.sgml index 03e0315eef..44475eb30e 100644 --- a/doc/src/sgml/ref/create_conversion.sgml +++ b/doc/src/sgml/ref/create_conversion.sgml @@ -1,6 +1,6 @@ - + CREATE CONVERSION diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml index 8adfa3a37b..3e35c776ea 100644 --- a/doc/src/sgml/ref/create_database.sgml +++ b/doc/src/sgml/ref/create_database.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_database.sgml PostgreSQL documentation --> - + CREATE DATABASE @@ -45,7 +45,7 @@ CREATE DATABASE name To create a database, you must be a superuser or have the special CREATEDB privilege. - See . + See . @@ -203,11 +203,11 @@ CREATE DATABASE name - Use to remove a database. + Use to remove a database. - The program is a + The program is a wrapper program around this command, provided for convenience. 
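The CREATE DATABASE / createdb equivalence described above, sketched with assumed names:

    CREATE DATABASE sales OWNER salesapp ENCODING 'UTF8';

    -- roughly what the wrapper issues for:  createdb -O salesapp -E UTF8 sales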
diff --git a/doc/src/sgml/ref/create_domain.sgml b/doc/src/sgml/ref/create_domain.sgml index 705ff55c49..d38914e288 100644 --- a/doc/src/sgml/ref/create_domain.sgml +++ b/doc/src/sgml/ref/create_domain.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_domain.sgml PostgreSQL documentation --> - + CREATE DOMAIN @@ -242,7 +242,7 @@ CREATE TABLE us_snail_addy ( - + Compatibility @@ -251,7 +251,7 @@ CREATE TABLE us_snail_addy ( - + See Also diff --git a/doc/src/sgml/ref/create_event_trigger.sgml b/doc/src/sgml/ref/create_event_trigger.sgml index 9652f02412..42cd065612 100644 --- a/doc/src/sgml/ref/create_event_trigger.sgml +++ b/doc/src/sgml/ref/create_event_trigger.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_event_trigger.sgml PostgreSQL documentation --> - + CREATE EVENT TRIGGER diff --git a/doc/src/sgml/ref/create_extension.sgml b/doc/src/sgml/ref/create_extension.sgml index a3a7892812..3e0f849f5b 100644 --- a/doc/src/sgml/ref/create_extension.sgml +++ b/doc/src/sgml/ref/create_extension.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_extension.sgml PostgreSQL documentation --> - + CREATE EXTENSION diff --git a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml index 87403a55e3..d9a1c18735 100644 --- a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_foreign_data_wrapper.sgml PostgreSQL documentation --> - + CREATE FOREIGN DATA WRAPPER diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml index 47705fd187..212c62ae1b 100644 --- a/doc/src/sgml/ref/create_foreign_table.sgml +++ b/doc/src/sgml/ref/create_foreign_table.sgml @@ -1,6 +1,6 @@ - + CREATE FOREIGN TABLE @@ -51,7 +51,7 @@ CHECK ( expression ) [ NO INHERIT ] - + Description @@ -252,7 +252,7 @@ CHECK ( expression ) [ NO INHERIT ] The name of an existing foreign server to use for the foreign table. For details on defining a server, see . + linkend="sql-createserver">. @@ -310,7 +310,7 @@ CHECK ( expression ) [ NO INHERIT ] - + Examples @@ -342,8 +342,8 @@ CREATE FOREIGN TABLE measurement_y2016m07 - - Compatibility + + Compatibility The CREATE FOREIGN TABLE command largely conforms to the diff --git a/doc/src/sgml/ref/create_function.sgml b/doc/src/sgml/ref/create_function.sgml index 97cb9b7fc8..970dc13359 100644 --- a/doc/src/sgml/ref/create_function.sgml +++ b/doc/src/sgml/ref/create_function.sgml @@ -2,7 +2,7 @@ doc/src/sgml/ref/create_function.sgml --> - + CREATE FUNCTION @@ -543,7 +543,7 @@ CREATE [ OR REPLACE ] FUNCTION the SQL function. The string obj_file is the name of the shared library file containing the compiled C function, and is interpreted - as for the command. The string + as for the command. The string link_symbol is the function's link symbol, that is, the name of the function in the C language source code. 
If the link symbol is omitted, it is assumed diff --git a/doc/src/sgml/ref/create_group.sgml b/doc/src/sgml/ref/create_group.sgml index 7896043a11..0382349404 100644 --- a/doc/src/sgml/ref/create_group.sgml +++ b/doc/src/sgml/ref/create_group.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_group.sgml PostgreSQL documentation --> - + CREATE GROUP diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index bb2601dc8c..92c0090dfd 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_index.sgml PostgreSQL documentation --> - + CREATE INDEX @@ -120,8 +120,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] . + — see . @@ -288,8 +288,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - - Index Storage Parameters + + Index Storage Parameters The optional WITH clause specifies storage @@ -409,10 +409,10 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] - - Building Indexes Concurrently + + Building Indexes Concurrently - + index building concurrently diff --git a/doc/src/sgml/ref/create_language.sgml b/doc/src/sgml/ref/create_language.sgml index 20d56a766f..92dae40ecc 100644 --- a/doc/src/sgml/ref/create_language.sgml +++ b/doc/src/sgml/ref/create_language.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_language.sgml PostgreSQL documentation --> - + CREATE LANGUAGE diff --git a/doc/src/sgml/ref/create_materialized_view.sgml b/doc/src/sgml/ref/create_materialized_view.sgml index 126aa5666f..8dd138f816 100644 --- a/doc/src/sgml/ref/create_materialized_view.sgml +++ b/doc/src/sgml/ref/create_materialized_view.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_materialized_view.sgml PostgreSQL documentation --> - + CREATE MATERIALIZED VIEW diff --git a/doc/src/sgml/ref/create_opclass.sgml b/doc/src/sgml/ref/create_opclass.sgml index 08eb7f2a08..882100583e 100644 --- a/doc/src/sgml/ref/create_opclass.sgml +++ b/doc/src/sgml/ref/create_opclass.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_opclass.sgml PostgreSQL documentation --> - + CREATE OPERATOR CLASS diff --git a/doc/src/sgml/ref/create_operator.sgml b/doc/src/sgml/ref/create_operator.sgml index 11c38fd38b..774616e244 100644 --- a/doc/src/sgml/ref/create_operator.sgml +++ b/doc/src/sgml/ref/create_operator.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_operator.sgml PostgreSQL documentation --> - + CREATE OPERATOR diff --git a/doc/src/sgml/ref/create_opfamily.sgml b/doc/src/sgml/ref/create_opfamily.sgml index ca5261b7a0..0953e238ce 100644 --- a/doc/src/sgml/ref/create_opfamily.sgml +++ b/doc/src/sgml/ref/create_opfamily.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_opfamily.sgml PostgreSQL documentation --> - + CREATE OPERATOR FAMILY diff --git a/doc/src/sgml/ref/create_policy.sgml b/doc/src/sgml/ref/create_policy.sgml index 1bcf2de429..64d3a6baa6 100644 --- a/doc/src/sgml/ref/create_policy.sgml +++ b/doc/src/sgml/ref/create_policy.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_policy.sgml PostgreSQL documentation --> - + CREATE POLICY @@ -222,7 +222,7 @@ CREATE POLICY name ON - + ALL @@ -253,7 +253,7 @@ CREATE POLICY name ON - + SELECT @@ -273,7 +273,7 @@ CREATE POLICY name ON - + INSERT @@ -294,7 +294,7 @@ CREATE POLICY name ON - + UPDATE @@ -353,7 +353,7 @@ CREATE POLICY name ON - + DELETE diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml index b997d387e7..55771d1d31 100644 --- a/doc/src/sgml/ref/create_publication.sgml +++ b/doc/src/sgml/ref/create_publication.sgml @@ 
-3,7 +3,7 @@ doc/src/sgml/ref/create_publication.sgml PostgreSQL documentation --> - + CREATE PUBLICATION diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml index 4a4061a237..7c050a3add 100644 --- a/doc/src/sgml/ref/create_role.sgml +++ b/doc/src/sgml/ref/create_role.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_role.sgml PostgreSQL documentation --> - + CREATE ROLE @@ -329,8 +329,8 @@ CREATE ROLE name [ [ WITH ] Notes - Use to - change the attributes of a role, and + Use to + change the attributes of a role, and to remove a role. All the attributes specified by CREATE ROLE can be modified by later ALTER ROLE commands. @@ -339,8 +339,8 @@ CREATE ROLE name [ [ WITH ] The preferred way to add and remove members of roles that are being used as groups is to use - and - . + and + . @@ -358,7 +358,7 @@ CREATE ROLE name [ [ WITH ] CREATEDB privilege does not immediately grant the ability to create databases, even if INHERIT is set; it would be necessary to become that role via - before + before creating a database. @@ -385,7 +385,7 @@ CREATE ROLE name [ [ WITH ] PostgreSQL includes a program that has + linkend="app-createuser"> that has the same functionality as CREATE ROLE (in fact, it calls this command) but can be run from the command shell. @@ -402,7 +402,7 @@ CREATE ROLE name [ [ WITH ] , however, transmits + linkend="app-createuser">, however, transmits the password encrypted. Also, contains a command \password that can be used to safely change the diff --git a/doc/src/sgml/ref/create_rule.sgml b/doc/src/sgml/ref/create_rule.sgml index c772c38399..c6403c0530 100644 --- a/doc/src/sgml/ref/create_rule.sgml +++ b/doc/src/sgml/ref/create_rule.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_rule.sgml PostgreSQL documentation --> - + CREATE RULE diff --git a/doc/src/sgml/ref/create_schema.sgml b/doc/src/sgml/ref/create_schema.sgml index ce3530c048..ed856e21e4 100644 --- a/doc/src/sgml/ref/create_schema.sgml +++ b/doc/src/sgml/ref/create_schema.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_schema.sgml PostgreSQL documentation --> - + CREATE SCHEMA diff --git a/doc/src/sgml/ref/create_sequence.sgml b/doc/src/sgml/ref/create_sequence.sgml index 9248b1d459..0cea9a49ce 100644 --- a/doc/src/sgml/ref/create_sequence.sgml +++ b/doc/src/sgml/ref/create_sequence.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_sequence.sgml PostgreSQL documentation --> - + CREATE SEQUENCE diff --git a/doc/src/sgml/ref/create_server.sgml b/doc/src/sgml/ref/create_server.sgml index e14ce43bf9..e13636a268 100644 --- a/doc/src/sgml/ref/create_server.sgml +++ b/doc/src/sgml/ref/create_server.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_server.sgml PostgreSQL documentation --> - + CREATE SERVER diff --git a/doc/src/sgml/ref/create_statistics.sgml b/doc/src/sgml/ref/create_statistics.sgml index 066af8a4b4..bb99d8e785 100644 --- a/doc/src/sgml/ref/create_statistics.sgml +++ b/doc/src/sgml/ref/create_statistics.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_statistics.sgml PostgreSQL documentation --> - + CREATE STATISTICS @@ -29,7 +29,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - + Description @@ -125,7 +125,7 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na - + Examples diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml index cd51b7fcac..2a1514a5ac 100644 --- a/doc/src/sgml/ref/create_subscription.sgml +++ b/doc/src/sgml/ref/create_subscription.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_subscription.sgml PostgreSQL documentation --> - + CREATE 
SUBSCRIPTION diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index 2db2e9fc44..4f7b741526 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_table.sgml PostgreSQL documentation --> - + CREATE TABLE @@ -102,7 +102,7 @@ FROM ( { numeric_literal | - + Description @@ -157,7 +157,7 @@ FROM ( { numeric_literal | - + TEMPORARY or TEMP @@ -191,7 +191,7 @@ FROM ( { numeric_literal | - + UNLOGGED @@ -249,7 +249,7 @@ FROM ( { numeric_literal | - + PARTITION OF parent_table { FOR VALUES partition_bound_spec | DEFAULT } @@ -783,7 +783,7 @@ FROM ( { numeric_literal | - + EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ] @@ -1120,8 +1120,8 @@ FROM ( { numeric_literal | - - Storage Parameters + + Storage Parameters storage parameters @@ -1132,7 +1132,7 @@ FROM ( { numeric_literal | UNIQUE, PRIMARY KEY, or EXCLUDE constraint. Storage parameters for - indexes are documented in . + indexes are documented in . The storage parameters currently available for tables are listed below. For many of these parameters, as shown, there is an additional parameter with the same name prefixed with @@ -1360,7 +1360,7 @@ FROM ( { numeric_literal | - + Notes @@ -1407,7 +1407,7 @@ FROM ( { numeric_literal | - + Examples @@ -1709,8 +1709,8 @@ CREATE TABLE cities_partdef - - Compatibility + + Compatibility The CREATE TABLE command conforms to the diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml index 8198442a97..89ca82baa5 100644 --- a/doc/src/sgml/ref/create_table_as.sgml +++ b/doc/src/sgml/ref/create_table_as.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_table_as.sgml PostgreSQL documentation --> - + CREATE TABLE AS diff --git a/doc/src/sgml/ref/create_tablespace.sgml b/doc/src/sgml/ref/create_tablespace.sgml index 4d95cac9e5..ed9635ef40 100644 --- a/doc/src/sgml/ref/create_tablespace.sgml +++ b/doc/src/sgml/ref/create_tablespace.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_tablespace.sgml PostgreSQL documentation --> - + CREATE TABLESPACE diff --git a/doc/src/sgml/ref/create_transform.sgml b/doc/src/sgml/ref/create_transform.sgml index 647c3b9f05..dfb83a76da 100644 --- a/doc/src/sgml/ref/create_transform.sgml +++ b/doc/src/sgml/ref/create_transform.sgml @@ -1,6 +1,6 @@ - + CREATE TRANSFORM diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index 6726e3c766..9e97c364ef 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_trigger.sgml PostgreSQL documentation --> - + CREATE TRIGGER @@ -170,7 +170,7 @@ CREATE [ CONSTRAINT ] TRIGGER name When the CONSTRAINT option is specified, this command creates a constraint trigger. This is the same as a regular trigger except that the timing of the trigger firing can be adjusted using - . + . Constraint triggers must be AFTER ROW triggers on plain tables (not foreign tables). They can be fired either at the end of the statement causing the triggering @@ -302,7 +302,7 @@ UPDATE OF column_name1 [, column_name2 The default timing of the trigger. - See the documentation for details of + See the documentation for details of these constraint options. This can only be specified for constraint triggers. 
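A sketch of the constraint-trigger timing adjustment that the CREATE TRIGGER hunk above attributes to SET CONSTRAINTS, using hypothetical names and assuming a trigger function verify_order_total() already exists:

    CREATE CONSTRAINT TRIGGER check_totals
        AFTER INSERT ON order_lines
        DEFERRABLE INITIALLY IMMEDIATE
        FOR EACH ROW EXECUTE PROCEDURE verify_order_total();

    BEGIN;
    SET CONSTRAINTS check_totals DEFERRED;  -- postpone firing until commit
    -- ... INSERT rows; the trigger's checks run at COMMIT time
    COMMIT;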
@@ -422,7 +422,7 @@ UPDATE OF column_name1 [, column_name2 - + Notes @@ -523,7 +523,7 @@ UPDATE OF column_name1 [, column_name2 - + Examples @@ -610,7 +610,7 @@ CREATE TRIGGER paired_items_update - + Compatibility - + CREATE TEXT SEARCH CONFIGURATION diff --git a/doc/src/sgml/ref/create_tsdictionary.sgml b/doc/src/sgml/ref/create_tsdictionary.sgml index 9c95c11608..20a01765b7 100644 --- a/doc/src/sgml/ref/create_tsdictionary.sgml +++ b/doc/src/sgml/ref/create_tsdictionary.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_tsdictionary.sgml PostgreSQL documentation --> - + CREATE TEXT SEARCH DICTIONARY diff --git a/doc/src/sgml/ref/create_tsparser.sgml b/doc/src/sgml/ref/create_tsparser.sgml index 044581f6f2..8e5e1b0b48 100644 --- a/doc/src/sgml/ref/create_tsparser.sgml +++ b/doc/src/sgml/ref/create_tsparser.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_tsparser.sgml PostgreSQL documentation --> - + CREATE TEXT SEARCH PARSER diff --git a/doc/src/sgml/ref/create_tstemplate.sgml b/doc/src/sgml/ref/create_tstemplate.sgml index e10f18b28b..0340e1ab1f 100644 --- a/doc/src/sgml/ref/create_tstemplate.sgml +++ b/doc/src/sgml/ref/create_tstemplate.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_tstemplate.sgml PostgreSQL documentation --> - + CREATE TEXT SEARCH TEMPLATE diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml index 02ca27b281..1b409ad22f 100644 --- a/doc/src/sgml/ref/create_type.sgml +++ b/doc/src/sgml/ref/create_type.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_type.sgml PostgreSQL documentation --> - + CREATE TYPE @@ -111,7 +111,7 @@ CREATE TYPE name - + Enumerated Types @@ -123,7 +123,7 @@ CREATE TYPE name - + Range Types @@ -769,7 +769,7 @@ CREATE TYPE name - + Notes @@ -928,7 +928,7 @@ CREATE TABLE big_objs ( - + Compatibility @@ -947,7 +947,7 @@ CREATE TABLE big_objs ( - + See Also diff --git a/doc/src/sgml/ref/create_user.sgml b/doc/src/sgml/ref/create_user.sgml index 500169da98..13dfd64c6d 100644 --- a/doc/src/sgml/ref/create_user.sgml +++ b/doc/src/sgml/ref/create_user.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_user.sgml PostgreSQL documentation --> - + CREATE USER diff --git a/doc/src/sgml/ref/create_user_mapping.sgml b/doc/src/sgml/ref/create_user_mapping.sgml index 10182e1426..d18cc91a00 100644 --- a/doc/src/sgml/ref/create_user_mapping.sgml +++ b/doc/src/sgml/ref/create_user_mapping.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_user_mapping.sgml PostgreSQL documentation --> - + CREATE USER MAPPING diff --git a/doc/src/sgml/ref/create_view.sgml b/doc/src/sgml/ref/create_view.sgml index c0dd022495..e52a4b85a7 100644 --- a/doc/src/sgml/ref/create_view.sgml +++ b/doc/src/sgml/ref/create_view.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/create_view.sgml PostgreSQL documentation --> - + CREATE VIEW @@ -165,10 +165,10 @@ CREATE VIEW [ schema . ] view_name WITH [ CASCADED | LOCAL ] CHECK OPTION - + CHECK OPTION - + WITH CHECK OPTION @@ -278,8 +278,8 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; to replace it (this includes being a member of the owning role). - - Updatable Views + + Updatable Views updatable views diff --git a/doc/src/sgml/ref/createdb.sgml b/doc/src/sgml/ref/createdb.sgml index 0112d3a848..265d14e149 100644 --- a/doc/src/sgml/ref/createdb.sgml +++ b/doc/src/sgml/ref/createdb.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/createdb.sgml PostgreSQL documentation --> - + createdb @@ -30,7 +30,7 @@ PostgreSQL documentation - + Description @@ -48,7 +48,7 @@ PostgreSQL documentation createdb is a wrapper around the - SQL command . + SQL command . 
There is no effective difference between creating databases via this utility and via other methods for accessing the server. @@ -199,7 +199,7 @@ PostgreSQL documentation The options , , , , and correspond to options of the underlying - SQL command ; see there for more information + SQL command ; see there for more information about them. @@ -337,8 +337,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment diff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml index 788ee81daf..f3c50c4113 100644 --- a/doc/src/sgml/ref/createuser.sgml +++ b/doc/src/sgml/ref/createuser.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/createuser.sgml PostgreSQL documentation --> - + createuser @@ -49,7 +49,7 @@ PostgreSQL documentation createuser is a wrapper around the - SQL command . + SQL command . There is no effective difference between creating users via this utility and via other methods for accessing the server. @@ -415,8 +415,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment diff --git a/doc/src/sgml/ref/deallocate.sgml b/doc/src/sgml/ref/deallocate.sgml index 394b125f52..4e23c6e091 100644 --- a/doc/src/sgml/ref/deallocate.sgml +++ b/doc/src/sgml/ref/deallocate.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/deallocate.sgml PostgreSQL documentation --> - + DEALLOCATE diff --git a/doc/src/sgml/ref/declare.sgml b/doc/src/sgml/ref/declare.sgml index 8eae0354af..a70e2466e5 100644 --- a/doc/src/sgml/ref/declare.sgml +++ b/doc/src/sgml/ref/declare.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/declare.sgml PostgreSQL documentation --> - + DECLARE diff --git a/doc/src/sgml/ref/delete.sgml b/doc/src/sgml/ref/delete.sgml index 570e9aa710..d7869efd9a 100644 --- a/doc/src/sgml/ref/delete.sgml +++ b/doc/src/sgml/ref/delete.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/delete.sgml PostgreSQL documentation --> - + DELETE diff --git a/doc/src/sgml/ref/discard.sgml b/doc/src/sgml/ref/discard.sgml index f432e70430..063342494f 100644 --- a/doc/src/sgml/ref/discard.sgml +++ b/doc/src/sgml/ref/discard.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/discard.sgml PostgreSQL documentation --> - + DISCARD diff --git a/doc/src/sgml/ref/do.sgml b/doc/src/sgml/ref/do.sgml index 5d2e9b1b8c..c14dff0a28 100644 --- a/doc/src/sgml/ref/do.sgml +++ b/doc/src/sgml/ref/do.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/do.sgml PostgreSQL documentation --> - + DO diff --git a/doc/src/sgml/ref/drop_aggregate.sgml b/doc/src/sgml/ref/drop_aggregate.sgml index ac29e7a419..1e5b3e8bd4 100644 --- a/doc/src/sgml/ref/drop_aggregate.sgml +++ b/doc/src/sgml/ref/drop_aggregate.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_aggregate.sgml PostgreSQL documentation --> - + DROP AGGREGATE diff --git a/doc/src/sgml/ref/drop_cast.sgml b/doc/src/sgml/ref/drop_cast.sgml index dae3a39fce..6e0d741637 100644 --- a/doc/src/sgml/ref/drop_cast.sgml +++ b/doc/src/sgml/ref/drop_cast.sgml @@ -1,6 +1,6 @@ - + DROP CAST diff --git a/doc/src/sgml/ref/drop_collation.sgml b/doc/src/sgml/ref/drop_collation.sgml index 23f8e88fc9..03df0d17b1 100644 --- a/doc/src/sgml/ref/drop_collation.sgml +++ b/doc/src/sgml/ref/drop_collation.sgml @@ -1,6 +1,6 @@ - 
+ DROP COLLATION diff --git a/doc/src/sgml/ref/drop_conversion.sgml b/doc/src/sgml/ref/drop_conversion.sgml index 9d56ec51a5..d5cf18c3e9 100644 --- a/doc/src/sgml/ref/drop_conversion.sgml +++ b/doc/src/sgml/ref/drop_conversion.sgml @@ -1,6 +1,6 @@ - + DROP CONVERSION diff --git a/doc/src/sgml/ref/drop_database.sgml b/doc/src/sgml/ref/drop_database.sgml index 7e5fbe7396..bbf3fd3776 100644 --- a/doc/src/sgml/ref/drop_database.sgml +++ b/doc/src/sgml/ref/drop_database.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_database.sgml PostgreSQL documentation --> - + DROP DATABASE diff --git a/doc/src/sgml/ref/drop_domain.sgml b/doc/src/sgml/ref/drop_domain.sgml index b1dac01e65..59379e8234 100644 --- a/doc/src/sgml/ref/drop_domain.sgml +++ b/doc/src/sgml/ref/drop_domain.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_domain.sgml PostgreSQL documentation --> - + DROP DOMAIN @@ -81,7 +81,7 @@ DROP DOMAIN [ IF EXISTS ] name [, . - + Examples @@ -92,7 +92,7 @@ DROP DOMAIN box; - + Compatibility @@ -102,7 +102,7 @@ DROP DOMAIN box; - + See Also diff --git a/doc/src/sgml/ref/drop_event_trigger.sgml b/doc/src/sgml/ref/drop_event_trigger.sgml index 583048dc0f..a773170fa6 100644 --- a/doc/src/sgml/ref/drop_event_trigger.sgml +++ b/doc/src/sgml/ref/drop_event_trigger.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_event_trigger.sgml PostgreSQL documentation --> - + DROP EVENT TRIGGER diff --git a/doc/src/sgml/ref/drop_extension.sgml b/doc/src/sgml/ref/drop_extension.sgml index f75308a20d..bb296df17f 100644 --- a/doc/src/sgml/ref/drop_extension.sgml +++ b/doc/src/sgml/ref/drop_extension.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_extension.sgml PostgreSQL documentation --> - + DROP EXTENSION diff --git a/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml b/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml index a3c73a0d46..8e8968ab1a 100644 --- a/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_foreign_data_wrapper.sgml PostgreSQL documentation --> - + DROP FOREIGN DATA WRAPPER diff --git a/doc/src/sgml/ref/drop_foreign_table.sgml b/doc/src/sgml/ref/drop_foreign_table.sgml index 456d55d112..b12de03e65 100644 --- a/doc/src/sgml/ref/drop_foreign_table.sgml +++ b/doc/src/sgml/ref/drop_foreign_table.sgml @@ -1,6 +1,6 @@ - + DROP FOREIGN TABLE diff --git a/doc/src/sgml/ref/drop_function.sgml b/doc/src/sgml/ref/drop_function.sgml index 9c9adb9a46..05b405dda1 100644 --- a/doc/src/sgml/ref/drop_function.sgml +++ b/doc/src/sgml/ref/drop_function.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_function.sgml PostgreSQL documentation --> - + DROP FUNCTION @@ -127,7 +127,7 @@ DROP FUNCTION [ IF EXISTS ] name [ - + Examples @@ -159,7 +159,7 @@ DROP FUNCTION update_employee_salaries(); - + Compatibility diff --git a/doc/src/sgml/ref/drop_group.sgml b/doc/src/sgml/ref/drop_group.sgml index 5987c5f760..7844db0f7d 100644 --- a/doc/src/sgml/ref/drop_group.sgml +++ b/doc/src/sgml/ref/drop_group.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_group.sgml PostgreSQL documentation --> - + DROP GROUP diff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml index de36c135d1..fd235a0c27 100644 --- a/doc/src/sgml/ref/drop_index.sgml +++ b/doc/src/sgml/ref/drop_index.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_index.sgml PostgreSQL documentation --> - + DROP INDEX diff --git a/doc/src/sgml/ref/drop_language.sgml b/doc/src/sgml/ref/drop_language.sgml index 524d758370..350375baef 100644 --- a/doc/src/sgml/ref/drop_language.sgml +++ 
b/doc/src/sgml/ref/drop_language.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_language.sgml PostgreSQL documentation --> - + DROP LANGUAGE diff --git a/doc/src/sgml/ref/drop_materialized_view.sgml b/doc/src/sgml/ref/drop_materialized_view.sgml index a898a1fc0a..b115aceb38 100644 --- a/doc/src/sgml/ref/drop_materialized_view.sgml +++ b/doc/src/sgml/ref/drop_materialized_view.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_materialized_view.sgml PostgreSQL documentation --> - + DROP MATERIALIZED VIEW diff --git a/doc/src/sgml/ref/drop_opclass.sgml b/doc/src/sgml/ref/drop_opclass.sgml index 83af6d7e48..53a40ff73e 100644 --- a/doc/src/sgml/ref/drop_opclass.sgml +++ b/doc/src/sgml/ref/drop_opclass.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_opclass.sgml PostgreSQL documentation --> - + DROP OPERATOR CLASS diff --git a/doc/src/sgml/ref/drop_operator.sgml b/doc/src/sgml/ref/drop_operator.sgml index 5897c99a62..b10bed09cc 100644 --- a/doc/src/sgml/ref/drop_operator.sgml +++ b/doc/src/sgml/ref/drop_operator.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_operator.sgml PostgreSQL documentation --> - + DROP OPERATOR diff --git a/doc/src/sgml/ref/drop_opfamily.sgml b/doc/src/sgml/ref/drop_opfamily.sgml index b825978aee..eb92664d85 100644 --- a/doc/src/sgml/ref/drop_opfamily.sgml +++ b/doc/src/sgml/ref/drop_opfamily.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_opfamily.sgml PostgreSQL documentation --> - + DROP OPERATOR FAMILY diff --git a/doc/src/sgml/ref/drop_owned.sgml b/doc/src/sgml/ref/drop_owned.sgml index 8b4b3644e6..1eb054dee6 100644 --- a/doc/src/sgml/ref/drop_owned.sgml +++ b/doc/src/sgml/ref/drop_owned.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_owned.sgml PostgreSQL documentation --> - + DROP OWNED diff --git a/doc/src/sgml/ref/drop_policy.sgml b/doc/src/sgml/ref/drop_policy.sgml index f474692105..2bc1e25f9c 100644 --- a/doc/src/sgml/ref/drop_policy.sgml +++ b/doc/src/sgml/ref/drop_policy.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_policy.sgml PostgreSQL documentation --> - + DROP POLICY diff --git a/doc/src/sgml/ref/drop_publication.sgml b/doc/src/sgml/ref/drop_publication.sgml index 1c129c0444..3195c040bb 100644 --- a/doc/src/sgml/ref/drop_publication.sgml +++ b/doc/src/sgml/ref/drop_publication.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_publication.sgml PostgreSQL documentation --> - + DROP PUBLICATION diff --git a/doc/src/sgml/ref/drop_role.sgml b/doc/src/sgml/ref/drop_role.sgml index 3c1bbaba6f..413d1870d4 100644 --- a/doc/src/sgml/ref/drop_role.sgml +++ b/doc/src/sgml/ref/drop_role.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_role.sgml PostgreSQL documentation --> - + DROP ROLE @@ -83,7 +83,7 @@ DROP ROLE [ IF EXISTS ] name [, ... PostgreSQL includes a program that has the + linkend="app-dropuser"> that has the same functionality as this command (in fact, it calls this command) but can be run from the command shell. 
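For example, assuming a role named jolly, the following two invocations have the same effect, the first issued from an SQL session and the second from the command shell:

    DROP ROLE jolly;

    $ dropuser jolly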
diff --git a/doc/src/sgml/ref/drop_rule.sgml b/doc/src/sgml/ref/drop_rule.sgml index d3fdf55080..6955016c27 100644 --- a/doc/src/sgml/ref/drop_rule.sgml +++ b/doc/src/sgml/ref/drop_rule.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_rule.sgml PostgreSQL documentation --> - + DROP RULE diff --git a/doc/src/sgml/ref/drop_schema.sgml b/doc/src/sgml/ref/drop_schema.sgml index bb3af1e186..2e608b2b20 100644 --- a/doc/src/sgml/ref/drop_schema.sgml +++ b/doc/src/sgml/ref/drop_schema.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_schema.sgml PostgreSQL documentation --> - + DROP SCHEMA diff --git a/doc/src/sgml/ref/drop_sequence.sgml b/doc/src/sgml/ref/drop_sequence.sgml index 5027129b38..be30f8f810 100644 --- a/doc/src/sgml/ref/drop_sequence.sgml +++ b/doc/src/sgml/ref/drop_sequence.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_sequence.sgml PostgreSQL documentation --> - + DROP SEQUENCE diff --git a/doc/src/sgml/ref/drop_server.sgml b/doc/src/sgml/ref/drop_server.sgml index 8ef0e014e4..fa941e8cd2 100644 --- a/doc/src/sgml/ref/drop_server.sgml +++ b/doc/src/sgml/ref/drop_server.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_server.sgml PostgreSQL documentation --> - + DROP SERVER diff --git a/doc/src/sgml/ref/drop_statistics.sgml b/doc/src/sgml/ref/drop_statistics.sgml index fd2087db6a..b34d070d50 100644 --- a/doc/src/sgml/ref/drop_statistics.sgml +++ b/doc/src/sgml/ref/drop_statistics.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_statistics.sgml PostgreSQL documentation --> - + DROP STATISTICS diff --git a/doc/src/sgml/ref/drop_subscription.sgml b/doc/src/sgml/ref/drop_subscription.sgml index 58b1489475..5ab2f9eae8 100644 --- a/doc/src/sgml/ref/drop_subscription.sgml +++ b/doc/src/sgml/ref/drop_subscription.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_subscription.sgml PostgreSQL documentation --> - + DROP SUBSCRIPTION diff --git a/doc/src/sgml/ref/drop_table.sgml b/doc/src/sgml/ref/drop_table.sgml index cea7e00351..b215369910 100644 --- a/doc/src/sgml/ref/drop_table.sgml +++ b/doc/src/sgml/ref/drop_table.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_table.sgml PostgreSQL documentation --> - + DROP TABLE diff --git a/doc/src/sgml/ref/drop_tablespace.sgml b/doc/src/sgml/ref/drop_tablespace.sgml index 4343035ebb..b761a4d92e 100644 --- a/doc/src/sgml/ref/drop_tablespace.sgml +++ b/doc/src/sgml/ref/drop_tablespace.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_tablespace.sgml PostgreSQL documentation --> - + DROP TABLESPACE diff --git a/doc/src/sgml/ref/drop_transform.sgml b/doc/src/sgml/ref/drop_transform.sgml index 698920a226..c7de707fcc 100644 --- a/doc/src/sgml/ref/drop_transform.sgml +++ b/doc/src/sgml/ref/drop_transform.sgml @@ -1,6 +1,6 @@ - + DROP TRANSFORM diff --git a/doc/src/sgml/ref/drop_trigger.sgml b/doc/src/sgml/ref/drop_trigger.sgml index d44bf138a6..118f38f3f4 100644 --- a/doc/src/sgml/ref/drop_trigger.sgml +++ b/doc/src/sgml/ref/drop_trigger.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_trigger.sgml PostgreSQL documentation --> - + DROP TRIGGER @@ -92,7 +92,7 @@ DROP TRIGGER [ IF EXISTS ] name ON - + Examples @@ -104,7 +104,7 @@ DROP TRIGGER if_dist_exists ON films; - + Compatibility diff --git a/doc/src/sgml/ref/drop_tsconfig.sgml b/doc/src/sgml/ref/drop_tsconfig.sgml index cc053beceb..b7acf46ff7 100644 --- a/doc/src/sgml/ref/drop_tsconfig.sgml +++ b/doc/src/sgml/ref/drop_tsconfig.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_tsconfig.sgml PostgreSQL documentation --> - + DROP TEXT SEARCH CONFIGURATION diff --git a/doc/src/sgml/ref/drop_tsdictionary.sgml b/doc/src/sgml/ref/drop_tsdictionary.sgml index 
66af10fb0f..b670f55ff2 100644 --- a/doc/src/sgml/ref/drop_tsdictionary.sgml +++ b/doc/src/sgml/ref/drop_tsdictionary.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_tsdictionary.sgml PostgreSQL documentation --> - + DROP TEXT SEARCH DICTIONARY diff --git a/doc/src/sgml/ref/drop_tsparser.sgml b/doc/src/sgml/ref/drop_tsparser.sgml index 3fa9467ebd..dea9b2b1bd 100644 --- a/doc/src/sgml/ref/drop_tsparser.sgml +++ b/doc/src/sgml/ref/drop_tsparser.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_tsparser.sgml PostgreSQL documentation --> - + DROP TEXT SEARCH PARSER diff --git a/doc/src/sgml/ref/drop_tstemplate.sgml b/doc/src/sgml/ref/drop_tstemplate.sgml index ad83275457..244af48191 100644 --- a/doc/src/sgml/ref/drop_tstemplate.sgml +++ b/doc/src/sgml/ref/drop_tstemplate.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_tstemplate.sgml PostgreSQL documentation --> - + DROP TEXT SEARCH TEMPLATE diff --git a/doc/src/sgml/ref/drop_type.sgml b/doc/src/sgml/ref/drop_type.sgml index 92ac2729ca..37449ed19f 100644 --- a/doc/src/sgml/ref/drop_type.sgml +++ b/doc/src/sgml/ref/drop_type.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_type.sgml PostgreSQL documentation --> - + DROP TYPE @@ -81,7 +81,7 @@ DROP TYPE [ IF EXISTS ] name [, ... - + Examples @@ -91,7 +91,7 @@ DROP TYPE box; - + Compatibility @@ -104,7 +104,7 @@ DROP TYPE box; - + See Also diff --git a/doc/src/sgml/ref/drop_user.sgml b/doc/src/sgml/ref/drop_user.sgml index 3cb90522da..0e680f30f6 100644 --- a/doc/src/sgml/ref/drop_user.sgml +++ b/doc/src/sgml/ref/drop_user.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_user.sgml PostgreSQL documentation --> - + DROP USER diff --git a/doc/src/sgml/ref/drop_user_mapping.sgml b/doc/src/sgml/ref/drop_user_mapping.sgml index 27284acae4..393a1eadcf 100644 --- a/doc/src/sgml/ref/drop_user_mapping.sgml +++ b/doc/src/sgml/ref/drop_user_mapping.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_user_mapping.sgml PostgreSQL documentation --> - + DROP USER MAPPING diff --git a/doc/src/sgml/ref/drop_view.sgml b/doc/src/sgml/ref/drop_view.sgml index a33b33335b..47e55bffb4 100644 --- a/doc/src/sgml/ref/drop_view.sgml +++ b/doc/src/sgml/ref/drop_view.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/drop_view.sgml PostgreSQL documentation --> - + DROP VIEW diff --git a/doc/src/sgml/ref/dropdb.sgml b/doc/src/sgml/ref/dropdb.sgml index 9dd44be882..f7ca0877b1 100644 --- a/doc/src/sgml/ref/dropdb.sgml +++ b/doc/src/sgml/ref/dropdb.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/dropdb.sgml PostgreSQL documentation --> - + dropdb @@ -41,7 +41,7 @@ PostgreSQL documentation dropdb is a wrapper around the - SQL command . + SQL command . There is no effective difference between dropping databases via this utility and via other methods for accessing the server. @@ -243,8 +243,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment diff --git a/doc/src/sgml/ref/dropuser.sgml b/doc/src/sgml/ref/dropuser.sgml index 1387b7dc2d..4c6a8bdb40 100644 --- a/doc/src/sgml/ref/dropuser.sgml +++ b/doc/src/sgml/ref/dropuser.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/dropuser.sgml PostgreSQL documentation --> - + dropuser @@ -42,7 +42,7 @@ PostgreSQL documentation dropuser is a wrapper around the - SQL command . + SQL command . There is no effective difference between dropping users via this utility and via other methods for accessing the server. 
@@ -235,8 +235,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment diff --git a/doc/src/sgml/ref/ecpg-ref.sgml b/doc/src/sgml/ref/ecpg-ref.sgml index a9eaff815d..18e7ed526a 100644 --- a/doc/src/sgml/ref/ecpg-ref.sgml +++ b/doc/src/sgml/ref/ecpg-ref.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/ecpg-ref.sgml PostgreSQL documentation --> - + ecpg @@ -28,7 +28,7 @@ PostgreSQL documentation - + Description diff --git a/doc/src/sgml/ref/end.sgml b/doc/src/sgml/ref/end.sgml index 1f74118efd..4904980dab 100644 --- a/doc/src/sgml/ref/end.sgml +++ b/doc/src/sgml/ref/end.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/end.sgml PostgreSQL documentation --> - + END @@ -57,7 +57,7 @@ END [ WORK | TRANSACTION ] Notes - Use to + Use to abort a transaction. diff --git a/doc/src/sgml/ref/execute.sgml b/doc/src/sgml/ref/execute.sgml index 6ac413d808..113a07a3ce 100644 --- a/doc/src/sgml/ref/execute.sgml +++ b/doc/src/sgml/ref/execute.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/execute.sgml PostgreSQL documentation --> - + EXECUTE diff --git a/doc/src/sgml/ref/explain.sgml b/doc/src/sgml/ref/explain.sgml index bbf2e11cbb..a32c150511 100644 --- a/doc/src/sgml/ref/explain.sgml +++ b/doc/src/sgml/ref/explain.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/explain.sgml PostgreSQL documentation --> - + EXPLAIN diff --git a/doc/src/sgml/ref/fetch.sgml b/doc/src/sgml/ref/fetch.sgml index fb79a1ac61..aa2e8f64b8 100644 --- a/doc/src/sgml/ref/fetch.sgml +++ b/doc/src/sgml/ref/fetch.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/fetch.sgml PostgreSQL documentation --> - + FETCH diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml index fd9fe03a6a..475c85b835 100644 --- a/doc/src/sgml/ref/grant.sgml +++ b/doc/src/sgml/ref/grant.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/grant.sgml PostgreSQL documentation --> - + GRANT @@ -429,7 +429,7 @@ GRANT role_name [, ...] 
TO - + Notes diff --git a/doc/src/sgml/ref/import_foreign_schema.sgml b/doc/src/sgml/ref/import_foreign_schema.sgml index 9bc83f1c6a..66c8462a5a 100644 --- a/doc/src/sgml/ref/import_foreign_schema.sgml +++ b/doc/src/sgml/ref/import_foreign_schema.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/import_foreign_schema.sgml PostgreSQL documentation --> - + IMPORT FOREIGN SCHEMA @@ -29,7 +29,7 @@ IMPORT FOREIGN SCHEMA remote_schema - + Description @@ -120,7 +120,7 @@ IMPORT FOREIGN SCHEMA remote_schema - + Examples @@ -144,7 +144,7 @@ IMPORT FOREIGN SCHEMA foreign_films LIMIT TO (actors, directors) - + Compatibility diff --git a/doc/src/sgml/ref/initdb.sgml b/doc/src/sgml/ref/initdb.sgml index 6696d4d05a..9a02bc1dbb 100644 --- a/doc/src/sgml/ref/initdb.sgml +++ b/doc/src/sgml/ref/initdb.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/initdb.sgml PostgreSQL documentation --> - + initdb @@ -33,7 +33,7 @@ PostgreSQL documentation - + Description diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml index 7f44ec31d1..f13fad4dd6 100644 --- a/doc/src/sgml/ref/insert.sgml +++ b/doc/src/sgml/ref/insert.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/insert.sgml PostgreSQL documentation --> - + INSERT @@ -128,7 +128,7 @@ INSERT INTO table_name [ AS Parameters - + Inserting @@ -304,10 +304,10 @@ INSERT INTO table_name [ AS <literal>ON CONFLICT</literal> Clause - + UPSERT - + ON CONFLICT diff --git a/doc/src/sgml/ref/listen.sgml b/doc/src/sgml/ref/listen.sgml index 6527562717..f4c25efbd0 100644 --- a/doc/src/sgml/ref/listen.sgml +++ b/doc/src/sgml/ref/listen.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/listen.sgml PostgreSQL documentation --> - + LISTEN diff --git a/doc/src/sgml/ref/load.sgml b/doc/src/sgml/ref/load.sgml index b9e3fe8b25..14886199de 100644 --- a/doc/src/sgml/ref/load.sgml +++ b/doc/src/sgml/ref/load.sgml @@ -2,7 +2,7 @@ doc/src/sgml/ref/load.sgml --> - + LOAD diff --git a/doc/src/sgml/ref/lock.sgml b/doc/src/sgml/ref/lock.sgml index 6d68ec6c53..49e2933938 100644 --- a/doc/src/sgml/ref/lock.sgml +++ b/doc/src/sgml/ref/lock.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/lock.sgml PostgreSQL documentation --> - + LOCK @@ -236,7 +236,7 @@ COMMIT WORK; There is no LOCK TABLE in the SQL standard, which instead uses SET TRANSACTION to specify concurrency levels on transactions. PostgreSQL supports that too; - see for details. + see for details. diff --git a/doc/src/sgml/ref/move.sgml b/doc/src/sgml/ref/move.sgml index 4bf7896858..50533e5e0e 100644 --- a/doc/src/sgml/ref/move.sgml +++ b/doc/src/sgml/ref/move.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/move.sgml PostgreSQL documentation --> - + MOVE diff --git a/doc/src/sgml/ref/notify.sgml b/doc/src/sgml/ref/notify.sgml index 4376b9fdd7..481163634f 100644 --- a/doc/src/sgml/ref/notify.sgml +++ b/doc/src/sgml/ref/notify.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/notify.sgml PostgreSQL documentation --> - + NOTIFY diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml index 1944c185cb..167d523f5d 100644 --- a/doc/src/sgml/ref/pg_basebackup.sgml +++ b/doc/src/sgml/ref/pg_basebackup.sgml @@ -45,7 +45,7 @@ PostgreSQL documentation out of backup mode automatically. Backups are always taken of the entire database cluster; it is not possible to back up individual databases or database objects. For individual database backups, a tool such as - must be used. + must be used. 
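For example, a sketch of the two approaches (the target directory and database name are illustrative):

    $ pg_basebackup -D /backups/cluster    # physical backup of the whole cluster
    $ pg_dump mydb > /backups/mydb.sql     # logical dump of a single database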
@@ -768,7 +768,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/pg_controldata.sgml b/doc/src/sgml/ref/pg_controldata.sgml index 4d4feacb93..9a676e0a78 100644 --- a/doc/src/sgml/ref/pg_controldata.sgml +++ b/doc/src/sgml/ref/pg_controldata.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/pg_controldata.sgml PostgreSQL documentation --> - + pg_controldata @@ -27,7 +27,7 @@ PostgreSQL documentation - + Description pg_controldata prints information initialized during diff --git a/doc/src/sgml/ref/pg_ctl-ref.sgml b/doc/src/sgml/ref/pg_ctl-ref.sgml index 3bcf0a2e9f..f930c7e245 100644 --- a/doc/src/sgml/ref/pg_ctl-ref.sgml +++ b/doc/src/sgml/ref/pg_ctl-ref.sgml @@ -610,10 +610,10 @@ PostgreSQL documentation - + Examples - + Starting the Server @@ -632,7 +632,7 @@ PostgreSQL documentation - + Stopping the Server To stop the server, use: @@ -646,7 +646,7 @@ PostgreSQL documentation - + Restarting the Server @@ -668,7 +668,7 @@ PostgreSQL documentation - + Showing the Server Status diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml index 79a9ee0983..57272e33bf 100644 --- a/doc/src/sgml/ref/pg_dump.sgml +++ b/doc/src/sgml/ref/pg_dump.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/pg_dump.sgml PostgreSQL documentation --> - + pg_dump @@ -375,7 +375,7 @@ PostgreSQL documentation schema parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see ), + linkend="app-psql-patterns" endterm="app-psql-patterns-title">), so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see @@ -526,7 +526,7 @@ PostgreSQL documentation table parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see ), + linkend="app-psql-patterns" endterm="app-psql-patterns-title">), so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see @@ -1374,7 +1374,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; To specify an upper-case or mixed-case name in and related switches, you need to double-quote the name; else it will be folded to lower case (see ). But + linkend="app-psql-patterns" endterm="app-psql-patterns-title">). But double quotes are special to the shell, so in turn they must be quoted. Thus, to dump a single table with a mixed-case name, you need something like diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml index 0a64c3548e..ce6b895da2 100644 --- a/doc/src/sgml/ref/pg_dumpall.sgml +++ b/doc/src/sgml/ref/pg_dumpall.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/pg_dumpall.sgml PostgreSQL documentation --> - + pg_dumpall diff --git a/doc/src/sgml/ref/pg_receivewal.sgml b/doc/src/sgml/ref/pg_receivewal.sgml index 5395fde6d6..4d13c57ffa 100644 --- a/doc/src/sgml/ref/pg_receivewal.sgml +++ b/doc/src/sgml/ref/pg_receivewal.sgml @@ -426,7 +426,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/pg_recvlogical.sgml b/doc/src/sgml/ref/pg_recvlogical.sgml index 5add6113f3..86f660070f 100644 --- a/doc/src/sgml/ref/pg_recvlogical.sgml +++ b/doc/src/sgml/ref/pg_recvlogical.sgml @@ -279,7 +279,7 @@ PostgreSQL documentation The database to connect to. See the description of the actions for what this means in detail. This can be a libpq connection string; - see for more information. 
Defaults + see for more information. Defaults to user name. diff --git a/doc/src/sgml/ref/pg_resetwal.sgml b/doc/src/sgml/ref/pg_resetwal.sgml index c8e5790a8e..0c30addd30 100644 --- a/doc/src/sgml/ref/pg_resetwal.sgml +++ b/doc/src/sgml/ref/pg_resetwal.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/pg_resetwal.sgml PostgreSQL documentation --> - + pg_resetwal @@ -29,7 +29,7 @@ PostgreSQL documentation - + Description pg_resetwal clears the write-ahead log (WAL) and diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml index ed535f6f89..2b0a334025 100644 --- a/doc/src/sgml/ref/pg_restore.sgml +++ b/doc/src/sgml/ref/pg_restore.sgml @@ -1,6 +1,6 @@ - + pg_restore diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml index 0b39726e30..40049c51e5 100644 --- a/doc/src/sgml/ref/pg_waldump.sgml +++ b/doc/src/sgml/ref/pg_waldump.sgml @@ -29,7 +29,7 @@ PostgreSQL documentation - + Description pg_waldump displays the write-ahead log (WAL) and is mainly diff --git a/doc/src/sgml/ref/prepare.sgml b/doc/src/sgml/ref/prepare.sgml index bcf188f4b9..fb91ef8d50 100644 --- a/doc/src/sgml/ref/prepare.sgml +++ b/doc/src/sgml/ref/prepare.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/prepare.sgml PostgreSQL documentation --> - + PREPARE @@ -123,7 +123,7 @@ PREPARE name [ ( - + Notes diff --git a/doc/src/sgml/ref/prepare_transaction.sgml b/doc/src/sgml/ref/prepare_transaction.sgml index 4f78e6b131..6a1766ed3c 100644 --- a/doc/src/sgml/ref/prepare_transaction.sgml +++ b/doc/src/sgml/ref/prepare_transaction.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/prepare_transaction.sgml PostgreSQL documentation --> - + PREPARE TRANSACTION diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index 8cbe0569cf..e520cdf3ba 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/psql-ref.sgml PostgreSQL documentation --> - + psql @@ -45,7 +45,7 @@ PostgreSQL documentation - + Options @@ -629,7 +629,7 @@ EOF Usage - + Connecting to a Database @@ -701,7 +701,7 @@ $ psql postgresql://dbmaster:5433/mydb?sslmode=require - + Entering SQL Commands @@ -730,8 +730,8 @@ testdb=> Whenever a command is executed, psql also polls for asynchronous notification events generated by - and - . + and + . @@ -741,7 +741,7 @@ testdb=> - + Meta-Commands @@ -779,7 +779,7 @@ testdb=> If an unquoted colon (:) followed by a psql variable name appears within an argument, it is replaced by the variable's value, as described in . + linkend="app-psql-interpolation" endterm="app-psql-interpolation-title">. The forms :'variable_name' and :"variable_name" described there work as well. @@ -949,7 +949,7 @@ testdb=> - + \copy { table [ ( column_list ) ] | ( query ) } { from | to } { 'filename' | program 'command' | stdin | stdout | pstdin | pstdout } @@ -958,7 +958,7 @@ testdb=> Performs a frontend (client) copy. This is an operation that - runs an SQL + runs an SQL command, but instead of the server reading or writing the specified file, psql reads or writes the file and @@ -1028,7 +1028,7 @@ testdb=> - + \crosstabview [ colV [ colH @@ -1102,7 +1102,7 @@ testdb=> - \d[S+] [ pattern ] + \d[S+] [ pattern ] @@ -1116,7 +1116,7 @@ testdb=> also shown. For foreign tables, the associated foreign server is shown as well. (Matching the pattern is defined in - + below.) 
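For example, a pattern narrows the listing to matching relations (the schema and name prefix are illustrative):

    testdb=> \dt public.ord*

This lists only tables in schema public whose names begin with ord.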
@@ -1131,7 +1131,7 @@ testdb=> more information is displayed: any comments associated with the columns of the table are shown, as is the presence of OIDs in the table, the view definition if the relation is a view, a non-default - replica + replica identity setting. @@ -1155,7 +1155,7 @@ testdb=> - \da[S] [ pattern ] + \da[S] [ pattern ] @@ -1171,7 +1171,7 @@ testdb=> - \dA[+] [ pattern ] + \dA[+] [ pattern ] @@ -1185,7 +1185,7 @@ testdb=> - \db[+] [ pattern ] + \db[+] [ pattern ] @@ -1201,7 +1201,7 @@ testdb=> - \dc[S+] [ pattern ] + \dc[S+] [ pattern ] Lists conversions between character-set encodings. @@ -1219,7 +1219,7 @@ testdb=> - \dC[+] [ pattern ] + \dC[+] [ pattern ] Lists type casts. @@ -1234,7 +1234,7 @@ testdb=> - \dd[S] [ pattern ] + \dd[S] [ pattern ] Shows the descriptions of objects of type constraint, @@ -1263,7 +1263,7 @@ testdb=> - \dD[S+] [ pattern ] + \dD[S+] [ pattern ] Lists domains. If - \ddp [ pattern ] + \ddp [ pattern ] Lists default access privilege settings. An entry is shown for @@ -1302,12 +1302,12 @@ testdb=> - \dE[S+] [ pattern ] - \di[S+] [ pattern ] - \dm[S+] [ pattern ] - \ds[S+] [ pattern ] - \dt[S+] [ pattern ] - \dv[S+] [ pattern ] + \dE[S+] [ pattern ] + \di[S+] [ pattern ] + \dm[S+] [ pattern ] + \ds[S+] [ pattern ] + \dt[S+] [ pattern ] + \dv[S+] [ pattern ] @@ -1333,7 +1333,7 @@ testdb=> - \des[+] [ pattern ] + \des[+] [ pattern ] Lists foreign servers (mnemonic: external @@ -1349,7 +1349,7 @@ testdb=> - \det[+] [ pattern ] + \det[+] [ pattern ] Lists foreign tables (mnemonic: external tables). @@ -1364,7 +1364,7 @@ testdb=> - \deu[+] [ pattern ] + \deu[+] [ pattern ] Lists user mappings (mnemonic: external @@ -1387,7 +1387,7 @@ testdb=> - \dew[+] [ pattern ] + \dew[+] [ pattern ] Lists foreign-data wrappers (mnemonic: external @@ -1403,7 +1403,7 @@ testdb=> - \df[antwS+] [ pattern ] + \df[antwS+] [ pattern ] @@ -1437,7 +1437,7 @@ testdb=> - \dF[+] [ pattern ] + \dF[+] [ pattern ] Lists text search configurations. @@ -1451,7 +1451,7 @@ testdb=> - \dFd[+] [ pattern ] + \dFd[+] [ pattern ] Lists text search dictionaries. @@ -1465,7 +1465,7 @@ testdb=> - \dFp[+] [ pattern ] + \dFp[+] [ pattern ] Lists text search parsers. @@ -1479,7 +1479,7 @@ testdb=> - \dFt[+] [ pattern ] + \dFt[+] [ pattern ] Lists text search templates. @@ -1493,7 +1493,7 @@ testdb=> - \dg[S+] [ pattern ] + \dg[S+] [ pattern ] Lists database roles. @@ -1523,7 +1523,7 @@ testdb=> - \dL[S+] [ pattern ] + \dL[S+] [ pattern ] Lists procedural languages. If - \dn[S+] [ pattern ] + \dn[S+] [ pattern ] @@ -1557,7 +1557,7 @@ testdb=> - \do[S+] [ pattern ] + \do[S+] [ pattern ] Lists operators with their operand and result types. @@ -1575,7 +1575,7 @@ testdb=> - \dO[S+] [ pattern ] + \dO[S+] [ pattern ] Lists collations. @@ -1595,7 +1595,7 @@ testdb=> - \dp [ pattern ] + \dp [ pattern ] Lists tables, views and sequences with their @@ -1616,7 +1616,7 @@ testdb=> - \drds [ role-pattern [ database-pattern ] ] + \drds [ role-pattern [ database-pattern ] ] Lists defined configuration settings. These settings can be @@ -1638,7 +1638,7 @@ testdb=> - \dRp[+] [ pattern ] + \dRp[+] [ pattern ] Lists replication publications. @@ -1652,7 +1652,7 @@ testdb=> - \dRs[+] [ pattern ] + \dRs[+] [ pattern ] Lists replication subscriptions. @@ -1666,7 +1666,7 @@ testdb=> - \dT[S+] [ pattern ] + \dT[S+] [ pattern ] Lists data types. @@ -1683,7 +1683,7 @@ testdb=> - \du[S+] [ pattern ] + \du[S+] [ pattern ] Lists database roles. 
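For example, \du joe* would show only roles whose names begin with joe (the prefix is illustrative):

    testdb=> \du joe*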
@@ -1702,7 +1702,7 @@ testdb=> - \dx[+] [ pattern ] + \dx[+] [ pattern ] Lists installed extensions. @@ -1716,7 +1716,7 @@ testdb=> - \dy[+] [ pattern ] + \dy[+] [ pattern ] Lists event triggers. @@ -2027,7 +2027,7 @@ CREATE INDEX Sends the current query buffer to the server and stores the query's output into psql variables (see ). + linkend="app-psql-variables" endterm="app-psql-variables-title">). The query to be executed must return exactly one row. Each column of the row is stored into a separate variable, named the same as the column. For example: @@ -2253,7 +2253,7 @@ SELECT - \l[+] or \list[+] [ pattern ] + \l[+] or \list[+] [ pattern ] List the databases in the server and show their names, owners, @@ -2831,8 +2831,8 @@ lo_import 152801 Illustrations of how these different formats look can be seen in - the section. + the section. @@ -2917,8 +2917,8 @@ lo_import 152801 Valid variable names can contain letters, digits, and underscores. See the section below for details. + linkend="app-psql-variables" + endterm="app-psql-variables-title"> below for details. Variable names are case-sensitive. @@ -2926,14 +2926,14 @@ lo_import 152801 Certain variables are special, in that they control psql's behavior or are automatically set to reflect connection state. These variables are - documented in , below. + documented in , below. This command is unrelated to the SQL - command . + command . @@ -3070,8 +3070,8 @@ testdb=> \setenv LESS -imx4F Most variables that control psql's behavior cannot be unset; instead, an \unset command is interpreted as setting them to their default values. - See , below. + See , below. @@ -3131,7 +3131,7 @@ testdb=> \setenv LESS -imx4F - \z [ pattern ] + \z [ pattern ] Lists tables, views and sequences with their @@ -3229,8 +3229,8 @@ select 1\; select 2\; select 3; - - Patterns + + Patterns patterns @@ -3322,8 +3322,8 @@ select 1\; select 2\; select 3; Advanced Features - - Variables + + Variables psql provides variable substitution @@ -3347,8 +3347,8 @@ testdb=> \echo :foo bar This works in both regular SQL commands and meta-commands; there is - more detail in , below. + more detail in , below. @@ -3742,8 +3742,8 @@ bar These specify what the prompts psql issues should look like. See below. + linkend="app-psql-prompting" + endterm="app-psql-prompting-title"> below. @@ -3875,8 +3875,8 @@ bar - - <acronym>SQL</acronym> Interpolation + + <acronym>SQL</acronym> Interpolation A key feature of psql @@ -3960,8 +3960,8 @@ testdb=> INSERT INTO my_table VALUES (:'content'); - - Prompting + + Prompting The prompts psql issues can be customized @@ -4118,8 +4118,8 @@ testdb=> INSERT INTO my_table VALUES (:'content'); The value of the psql variable name. See the - section for details. + section for details. 
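For example, a sketch that embeds a variable's value in the prompt (the variable name tag and its value are illustrative):

    testdb=> \set tag demo
    testdb=> \set PROMPT1 '[%:tag:] %/%# '

The prompt then shows the value of tag, followed by the current database name (%/) and the usual # or > terminator (%#).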
@@ -4499,8 +4499,8 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' - - Examples + + Examples The first example shows how to spread a command over several lines of diff --git a/doc/src/sgml/ref/reassign_owned.sgml b/doc/src/sgml/ref/reassign_owned.sgml index 2bbd6b8f07..e29b88292b 100644 --- a/doc/src/sgml/ref/reassign_owned.sgml +++ b/doc/src/sgml/ref/reassign_owned.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/reassign_owned.sgml PostgreSQL documentation --> - + REASSIGN OWNED diff --git a/doc/src/sgml/ref/refresh_materialized_view.sgml b/doc/src/sgml/ref/refresh_materialized_view.sgml index 0135d15cec..e2ee836efb 100644 --- a/doc/src/sgml/ref/refresh_materialized_view.sgml +++ b/doc/src/sgml/ref/refresh_materialized_view.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/refresh_materialized_view.sgml PostgreSQL documentation --> - + REFRESH MATERIALIZED VIEW @@ -93,7 +93,7 @@ REFRESH MATERIALIZED VIEW [ CONCURRENTLY ] name While the default index for future - + operations is retained, REFRESH MATERIALIZED VIEW does not order the generated rows based on this property. If you want the data to be ordered upon generation, you must use an ORDER BY diff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml index 3dc2608f76..2e053c4c24 100644 --- a/doc/src/sgml/ref/reindex.sgml +++ b/doc/src/sgml/ref/reindex.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/reindex.sgml PostgreSQL documentation --> - + REINDEX diff --git a/doc/src/sgml/ref/reindexdb.sgml b/doc/src/sgml/ref/reindexdb.sgml index 627be6a0ad..a7cc9c2d94 100644 --- a/doc/src/sgml/ref/reindexdb.sgml +++ b/doc/src/sgml/ref/reindexdb.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/reindexdb.sgml PostgreSQL documentation --> - + reindexdb @@ -93,7 +93,7 @@ PostgreSQL documentation reindexdb is a wrapper around the SQL - command . + command . There is no effective difference between reindexing databases via this utility and via other methods for accessing the server. @@ -357,8 +357,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. 
Also, any default connection settings and environment diff --git a/doc/src/sgml/ref/release_savepoint.sgml b/doc/src/sgml/ref/release_savepoint.sgml index 2e8dcc0746..7e629176b7 100644 --- a/doc/src/sgml/ref/release_savepoint.sgml +++ b/doc/src/sgml/ref/release_savepoint.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/release_savepoint.sgml PostgreSQL documentation --> - + RELEASE SAVEPOINT diff --git a/doc/src/sgml/ref/reset.sgml b/doc/src/sgml/ref/reset.sgml index b434ad10c2..bf3f5226ec 100644 --- a/doc/src/sgml/ref/reset.sgml +++ b/doc/src/sgml/ref/reset.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/reset.sgml PostgreSQL documentation --> - + RESET @@ -106,8 +106,8 @@ RESET timezone; See Also - - + + diff --git a/doc/src/sgml/ref/revoke.sgml b/doc/src/sgml/ref/revoke.sgml index c893666e83..e3e3f2ffc3 100644 --- a/doc/src/sgml/ref/revoke.sgml +++ b/doc/src/sgml/ref/revoke.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/revoke.sgml PostgreSQL documentation --> - + REVOKE @@ -111,7 +111,7 @@ REVOKE [ ADMIN OPTION FOR ] - + Description @@ -174,7 +174,7 @@ REVOKE [ ADMIN OPTION FOR ] - + Notes @@ -246,7 +246,7 @@ REVOKE [ ADMIN OPTION FOR ] - + Examples @@ -278,7 +278,7 @@ REVOKE admins FROM joe; - + Compatibility diff --git a/doc/src/sgml/ref/rollback.sgml b/doc/src/sgml/ref/rollback.sgml index 1a0e5a0ebc..1f99343b08 100644 --- a/doc/src/sgml/ref/rollback.sgml +++ b/doc/src/sgml/ref/rollback.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/rollback.sgml PostgreSQL documentation --> - + ROLLBACK @@ -54,7 +54,7 @@ ROLLBACK [ WORK | TRANSACTION ] Notes - Use to + Use to successfully terminate a transaction. diff --git a/doc/src/sgml/ref/rollback_prepared.sgml b/doc/src/sgml/ref/rollback_prepared.sgml index 6c44049a89..d7468f78d7 100644 --- a/doc/src/sgml/ref/rollback_prepared.sgml +++ b/doc/src/sgml/ref/rollback_prepared.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/rollback_prepared.sgml PostgreSQL documentation --> - + ROLLBACK PREPARED diff --git a/doc/src/sgml/ref/rollback_to.sgml b/doc/src/sgml/ref/rollback_to.sgml index f1da804f67..1957cace11 100644 --- a/doc/src/sgml/ref/rollback_to.sgml +++ b/doc/src/sgml/ref/rollback_to.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/rollback_to.sgml PostgreSQL documentation --> - + ROLLBACK TO SAVEPOINT @@ -64,7 +64,7 @@ ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_nameNotes - Use to destroy a savepoint + Use to destroy a savepoint without discarding the effects of commands executed after it was established. diff --git a/doc/src/sgml/ref/savepoint.sgml b/doc/src/sgml/ref/savepoint.sgml index 6d40f4da42..6fa11a7358 100644 --- a/doc/src/sgml/ref/savepoint.sgml +++ b/doc/src/sgml/ref/savepoint.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/savepoint.sgml PostgreSQL documentation --> - + SAVEPOINT @@ -64,8 +64,8 @@ SAVEPOINT savepoint_name Notes - Use to - rollback to a savepoint. Use + Use to + rollback to a savepoint. Use to destroy a savepoint, keeping the effects of commands executed after it was established. 
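For example, a minimal sketch (the table name is illustrative):

    BEGIN;
    INSERT INTO t VALUES (1);
    SAVEPOINT my_savepoint;
    INSERT INTO t VALUES (2);
    ROLLBACK TO SAVEPOINT my_savepoint;  -- undoes only the second insert
    RELEASE SAVEPOINT my_savepoint;      -- destroys the savepoint, keeps the rest
    COMMIT;                              -- commits the row containing 1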
diff --git a/doc/src/sgml/ref/security_label.sgml b/doc/src/sgml/ref/security_label.sgml index 999f9c80cd..ce5a1c1975 100644 --- a/doc/src/sgml/ref/security_label.sgml +++ b/doc/src/sgml/ref/security_label.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/security_label.sgml PostgreSQL documentation --> - + SECURITY LABEL diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml index 7355e790f6..3aab3fd8a7 100644 --- a/doc/src/sgml/ref/select.sgml +++ b/doc/src/sgml/ref/select.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/select.sgml PostgreSQL documentation --> - + SELECT @@ -218,7 +218,7 @@ TABLE [ ONLY ] table_name [ * ] Parameters - + <literal>WITH</literal> Clause @@ -295,7 +295,7 @@ TABLE [ ONLY ] table_name [ * ] - + <literal>FROM</literal> Clause @@ -666,7 +666,7 @@ TABLE [ ONLY ] table_name [ * ] - + <literal>WHERE</literal> Clause @@ -683,7 +683,7 @@ WHERE condition - + <literal>GROUP BY</literal> Clause @@ -757,7 +757,7 @@ GROUP BY grouping_element [, ...] - + <literal>HAVING</literal> Clause @@ -800,7 +800,7 @@ HAVING condition - + <literal>WINDOW</literal> Clause @@ -1073,7 +1073,7 @@ SELECT DISTINCT ON (location) location, time, report - + <literal>UNION</literal> Clause @@ -1126,7 +1126,7 @@ SELECT DISTINCT ON (location) location, time, report - + <literal>INTERSECT</literal> Clause @@ -1174,7 +1174,7 @@ SELECT DISTINCT ON (location) location, time, report - + <literal>EXCEPT</literal> Clause @@ -1218,7 +1218,7 @@ SELECT DISTINCT ON (location) location, time, report - + <literal>ORDER BY</literal> Clause @@ -1316,7 +1316,7 @@ SELECT name FROM distributors ORDER BY code; - + <literal>LIMIT</literal> Clause @@ -1396,7 +1396,7 @@ FETCH { FIRST | NEXT } [ count ] { - + The Locking Clause @@ -1560,7 +1560,7 @@ SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss ORDER BY column1; - + <literal>TABLE</literal> Command diff --git a/doc/src/sgml/ref/select_into.sgml b/doc/src/sgml/ref/select_into.sgml index 63c02349aa..a5b6ac9245 100644 --- a/doc/src/sgml/ref/select_into.sgml +++ b/doc/src/sgml/ref/select_into.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/select_into.sgml PostgreSQL documentation --> - + SELECT INTO diff --git a/doc/src/sgml/ref/set.sgml b/doc/src/sgml/ref/set.sgml index 8c44d0e156..4bc8108765 100644 --- a/doc/src/sgml/ref/set.sgml +++ b/doc/src/sgml/ref/set.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/set.sgml PostgreSQL documentation --> - + SET @@ -323,8 +323,8 @@ SET TIME ZONE 'Europe/Rome'; See Also - - + + diff --git a/doc/src/sgml/ref/set_constraints.sgml b/doc/src/sgml/ref/set_constraints.sgml index 237a0a3988..671332afc7 100644 --- a/doc/src/sgml/ref/set_constraints.sgml +++ b/doc/src/sgml/ref/set_constraints.sgml @@ -1,5 +1,5 @@ - + SET CONSTRAINTS diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml index eac4b3405a..351e953f75 100644 --- a/doc/src/sgml/ref/set_role.sgml +++ b/doc/src/sgml/ref/set_role.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/set_role.sgml PostgreSQL documentation --> - + SET ROLE @@ -48,7 +48,7 @@ RESET ROLE The SESSION and LOCAL modifiers act the same - as for the regular + as for the regular command. diff --git a/doc/src/sgml/ref/set_session_auth.sgml b/doc/src/sgml/ref/set_session_auth.sgml index a8aee6f632..45fa378e18 100644 --- a/doc/src/sgml/ref/set_session_auth.sgml +++ b/doc/src/sgml/ref/set_session_auth.sgml @@ -1,5 +1,5 @@ - + SET SESSION AUTHORIZATION @@ -54,7 +54,7 @@ RESET SESSION AUTHORIZATION The SESSION and LOCAL modifiers act the same - as for the regular + as for the regular command. 
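For example (the user name paul is illustrative):

    SET SESSION AUTHORIZATION 'paul';
    SELECT session_user, current_user;   -- both now report paul
    RESET SESSION AUTHORIZATION;         -- revert to the authenticated user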
diff --git a/doc/src/sgml/ref/set_transaction.sgml b/doc/src/sgml/ref/set_transaction.sgml index f5631372f5..3ab1e6f771 100644 --- a/doc/src/sgml/ref/set_transaction.sgml +++ b/doc/src/sgml/ref/set_transaction.sgml @@ -1,5 +1,5 @@ - + SET TRANSACTION @@ -237,7 +237,7 @@ SET TRANSACTION SNAPSHOT '00000003-0000001B-1'; - + Compatibility diff --git a/doc/src/sgml/ref/show.sgml b/doc/src/sgml/ref/show.sgml index 2a2b2fbb9f..53b47ac3d8 100644 --- a/doc/src/sgml/ref/show.sgml +++ b/doc/src/sgml/ref/show.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/show.sgml PostgreSQL documentation --> - + SHOW @@ -52,7 +52,7 @@ SHOW ALL The name of a run-time parameter. Available parameters are documented in and on the reference page. In + linkend="sql-set"> reference page. In addition, there are a few parameters that can be shown but not set: @@ -192,8 +192,8 @@ SHOW ALL; See Also - - + + diff --git a/doc/src/sgml/ref/start_transaction.sgml b/doc/src/sgml/ref/start_transaction.sgml index 8dcf6318d2..605fda5357 100644 --- a/doc/src/sgml/ref/start_transaction.sgml +++ b/doc/src/sgml/ref/start_transaction.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/start_transaction.sgml PostgreSQL documentation --> - + START TRANSACTION diff --git a/doc/src/sgml/ref/truncate.sgml b/doc/src/sgml/ref/truncate.sgml index 80abe67525..6892516987 100644 --- a/doc/src/sgml/ref/truncate.sgml +++ b/doc/src/sgml/ref/truncate.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/truncate.sgml PostgreSQL documentation --> - + TRUNCATE diff --git a/doc/src/sgml/ref/unlisten.sgml b/doc/src/sgml/ref/unlisten.sgml index 1ea9aa3a0b..1bffac3cb2 100644 --- a/doc/src/sgml/ref/unlisten.sgml +++ b/doc/src/sgml/ref/unlisten.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/unlisten.sgml PostgreSQL documentation --> - + UNLISTEN diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml index 1ede52384f..0e99aa9739 100644 --- a/doc/src/sgml/ref/update.sgml +++ b/doc/src/sgml/ref/update.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/update.sgml PostgreSQL documentation --> - + UPDATE diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml index 2e205668c1..7ecd08977c 100644 --- a/doc/src/sgml/ref/vacuum.sgml +++ b/doc/src/sgml/ref/vacuum.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/vacuum.sgml PostgreSQL documentation --> - + VACUUM diff --git a/doc/src/sgml/ref/vacuumdb.sgml b/doc/src/sgml/ref/vacuumdb.sgml index 277c231687..2d47d8c1f1 100644 --- a/doc/src/sgml/ref/vacuumdb.sgml +++ b/doc/src/sgml/ref/vacuumdb.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/vacuumdb.sgml PostgreSQL documentation --> - + vacuumdb @@ -62,7 +62,7 @@ PostgreSQL documentation vacuumdb is a wrapper around the SQL - command . + command . There is no effective difference between vacuuming and analyzing databases via this utility and via other methods for accessing the server. @@ -382,8 +382,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. 
Also, any default connection settings and environment diff --git a/doc/src/sgml/ref/values.sgml b/doc/src/sgml/ref/values.sgml index 75a594725b..6b8083fc9d 100644 --- a/doc/src/sgml/ref/values.sgml +++ b/doc/src/sgml/ref/values.sgml @@ -3,7 +3,7 @@ doc/src/sgml/ref/values.sgml PostgreSQL documentation --> - + VALUES diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 98149db335..d37f700055 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -701,7 +701,7 @@ - A new CREATE + A new CREATE INDEX option enables auto-summarization of the previous BRIN page range when a new page range is created. @@ -825,9 +825,9 @@ New commands are CREATE STATISTICS, - ALTER STATISTICS, and - DROP STATISTICS. + linkend="sql-createstatistics">CREATE STATISTICS, + ALTER STATISTICS, and + DROP STATISTICS. This feature is helpful in estimating query memory usage and when combining the statistics from individual columns. @@ -950,7 +950,7 @@ --> Allow explicit control - over EXPLAIN's display + over EXPLAIN's display of planning and execution time (Ashutosh Bapat) @@ -983,7 +983,7 @@ --> Properly update the statistics collector during REFRESH MATERIALIZED + linkend="sql-refreshmaterializedview">REFRESH MATERIALIZED VIEW (Jim Mlodgenski) @@ -1586,7 +1586,7 @@ 2016-12-07 [f0e44751d] Implement table partitioning. --> - Add table partitioning + Add table partitioning syntax that automatically creates partition constraints and handles routing of tuple insertions and updates (Amit Langote) @@ -1603,7 +1603,7 @@ 2017-03-31 [597027163] Add transition table support to plpgsql. --> - Add AFTER trigger + Add AFTER trigger transition tables to record changed rows (Kevin Grittner, Thomas Munro) @@ -1619,7 +1619,7 @@ 2016-12-05 [093129c9d] Add support for restrictive RLS policies --> - Allow restrictive row-level + Allow restrictive row-level security policies (Stephen Frost) @@ -1655,7 +1655,7 @@ 2017-03-28 [ab89e465c] Altering default privileges on schemas --> - Allow default + Allow default permissions on schemas (Matheus Oliveira) @@ -1669,7 +1669,7 @@ 2017-02-10 [2ea5b06c7] Add CREATE SEQUENCE AS clause --> - Add CREATE SEQUENCE + Add CREATE SEQUENCE AS command to create a sequence matching an integer data type (Peter Eisentraut) @@ -1705,7 +1705,7 @@ - For example, allow DROP + For example, allow DROP FUNCTION on a function name without arguments if there is only one function with that name. This behavior is required by the SQL standard. @@ -1729,9 +1729,9 @@ --> Support IF NOT EXISTS - in CREATE SERVER, - CREATE USER MAPPING, - and CREATE COLLATION + in CREATE SERVER, + CREATE USER MAPPING, + and CREATE COLLATION (Anastasia Lubennikova, Peter Eisentraut) @@ -1742,7 +1742,7 @@ 2017-03-03 [9eb344faf] Allow vacuums to report oldestxmin --> - Make VACUUM VERBOSE report + Make VACUUM VERBOSE report the number of skipped frozen pages and oldest xmin (Masahiko Sawada, Simon Riggs) @@ -1809,7 +1809,7 @@ 2017-04-06 [321732705] Identity columns --> - Add identity columns for + Add identity columns for assigning a numeric value to columns on insert (Peter Eisentraut) @@ -1829,7 +1829,7 @@ - This uses the syntax ALTER + This uses the syntax ALTER TYPE ... RENAME VALUE. 
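For example (the enum type and value labels are illustrative):

    ALTER TYPE mood RENAME VALUE 'sad' TO 'unhappy';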
@@ -2191,7 +2191,7 @@ Client Applications - <xref linkend="APP-PSQL"> + <xref linkend="app-psql"> @@ -2414,7 +2414,7 @@ 2016-10-19 [5d58c07a4] initdb pg_basebackup: Rename -\-noxxx options to -\-no-x --> - Rename initdb + Rename initdb options and to be spelled and (Vik Fearing, Peter Eisentraut) @@ -2428,9 +2428,9 @@ - <link linkend="APP-PGDUMP"><application>pg_dump</application></link>, - <link linkend="APP-PG-DUMPALL"><application>pg_dumpall</application></link>, - <link linkend="APP-PGRESTORE"><application>pg_restore</application></link> + <link linkend="app-pgdump"><application>pg_dump</application></link>, + <link linkend="app-pg-dumpall"><application>pg_dumpall</application></link>, + <link linkend="app-pgrestore"><application>pg_restore</application></link> diff --git a/doc/src/sgml/release-8.2.sgml b/doc/src/sgml/release-8.2.sgml index 71b50cfb01..39666e665b 100644 --- a/doc/src/sgml/release-8.2.sgml +++ b/doc/src/sgml/release-8.2.sgml @@ -4718,7 +4718,7 @@ - Make SET + Make SET CONSTRAINT affect only one constraint (Kris Jurka) @@ -5023,8 +5023,8 @@ Add FILLFACTOR to table and index creation (ITAGAKI + linkend="sql-createtable">table and index creation (ITAGAKI Takahiro) @@ -5132,13 +5132,13 @@ Avoid extra scan of tables without indexes during VACUUM (Greg Stark) + linkend="sql-vacuum">VACUUM (Greg Stark) - Improve multicolumn GiST + Improve multicolumn GiST indexing (Oleg, Teodor) @@ -5268,7 +5268,7 @@ - GiST indexes are + GiST indexes are now clusterable (Teodor) @@ -5475,9 +5475,9 @@ - Add INSERT/UPDATE/DELETE + Add INSERT/UPDATE/DELETE RETURNING (Jonah Harris, Tom) @@ -5506,8 +5506,8 @@ - Allow UPDATE - and DELETE + Allow UPDATE + and DELETE to use an alias for the target table (Atsushi Ogawa) @@ -5519,7 +5519,7 @@ - Allow UPDATE + Allow UPDATE to set multiple columns with a list of values (Susanne Ebrecht) @@ -5546,7 +5546,7 @@ - Add CASCADE + Add CASCADE option to TRUNCATE (Joachim Wieland) @@ -5560,7 +5560,7 @@ Support FOR UPDATE and FOR SHARE - in the same SELECT + in the same SELECT command (Tom) @@ -5652,8 +5652,8 @@ Support portal parameters in EXPLAIN and EXECUTE (Tom) + linkend="sql-explain">EXPLAIN and EXECUTE (Tom) @@ -5665,7 +5665,7 @@ If SQL-level PREPARE parameters + linkend="sql-prepare">PREPARE parameters are unspecified, infer their types from the content of the query (Neil) @@ -5693,7 +5693,7 @@ Add TABLESPACE clause to CREATE TABLE AS + linkend="sql-createtableas">CREATE TABLE AS (Neil) @@ -5705,7 +5705,7 @@ Add ON COMMIT clause to CREATE TABLE AS + linkend="sql-createtableas">CREATE TABLE AS (Neil) @@ -5719,7 +5719,7 @@ Add INCLUDING CONSTRAINTS to CREATE TABLE LIKE + linkend="sql-createtable">CREATE TABLE LIKE (Greg Stark) @@ -5732,7 +5732,7 @@ Allow the creation of placeholder (shell) types (Martijn van Oosterhout) + linkend="sql-createtype">types (Martijn van Oosterhout) @@ -5747,7 +5747,7 @@ - Aggregate functions + Aggregate functions now support multiple input parameters (Sergey Koposov, Tom) @@ -5755,7 +5755,7 @@ Add new aggregate creation syntax (Tom) + linkend="sql-createaggregate">syntax (Tom) @@ -5770,7 +5770,7 @@ Add ALTER ROLE PASSWORD NULL + linkend="sql-alterrole">ALTER ROLE PASSWORD NULL to remove a previously set role password (Peter) @@ -5789,14 +5789,14 @@ - Add DROP OWNED + Add DROP OWNED to drop all objects owned by a role (Alvaro) - Add REASSIGN + Add REASSIGN OWNED to reassign ownership of all objects owned by a role (Alvaro) @@ -5809,7 +5809,7 @@ - Add GRANT ON SEQUENCE + Add GRANT ON SEQUENCE syntax (Bruce) @@ -5822,7 +5822,7 @@ - 
Add USAGE + Add USAGE permission for sequences that allows only currval() and nextval(), not setval() (Bruce) @@ -5839,7 +5839,7 @@ - Add ALTER TABLE + Add ALTER TABLE [ NO ] INHERIT (Greg Stark) @@ -5852,7 +5852,7 @@ - Allow comments on global + Allow comments on global objects to be stored globally (Kris Jurka) @@ -5881,7 +5881,7 @@ - The new syntax is CREATE + The new syntax is CREATE INDEX CONCURRENTLY. The default behavior is still to block table modification while an index is being created. @@ -5902,7 +5902,7 @@ - Allow COPY to + Allow COPY to dump a SELECT query (Zoltan Boszormenyi, Karel Zak) @@ -5915,7 +5915,7 @@ - Make the COPY + Make the COPY command return a command tag that includes the number of rows copied (Volkan YAZICI) @@ -5923,7 +5923,7 @@ - Allow VACUUM + Allow VACUUM to expire rows without being affected by other concurrent VACUUM operations (Hannu Krossing, Alvaro, Tom) @@ -5931,7 +5931,7 @@ - Make initdb + Make initdb detect the operating system locale and set the default DateStyle accordingly (Peter) @@ -6154,7 +6154,7 @@ - Allow domains to be + Allow domains to be based on other domains (Tom) @@ -6185,7 +6185,7 @@ The fix is to dump a SERIAL column by explicitly specifying its DEFAULT and sequence elements, and reconstructing the SERIAL column on reload - using a new ALTER + using a new ALTER SEQUENCE OWNED BY command. This also allows dropping a SERIAL column specification. @@ -6353,7 +6353,7 @@ - <link linkend="APP-PSQL"><application>psql</application></link> Changes + <link linkend="app-psql"><application>psql</application></link> Changes @@ -6461,7 +6461,7 @@ - <link linkend="APP-PGDUMP"><application>pg_dump</application></link> Changes + <link linkend="app-pgdump"><application>pg_dump</application></link> Changes @@ -6484,7 +6484,7 @@ - Add pg_restore + Add pg_restore --no-data-for-failed-tables option to suppress loading data if table creation failed (i.e., the table already exists) (Martin Pitt) @@ -6493,7 +6493,7 @@ - Add pg_restore + Add pg_restore option to run the entire session in a single transaction (Simon) @@ -6520,7 +6520,7 @@ This allows passwords to be sent pre-encrypted for commands - like ALTER ROLE ... + like ALTER ROLE ... PASSWORD. @@ -6582,14 +6582,14 @@ - Allow SHOW to + Allow SHOW to put its result into a variable (Joachim Wieland) - Add COPY TO STDOUT + Add COPY TO STDOUT (Joachim Wieland) @@ -6624,7 +6624,7 @@ Add MSVC support for utility commands and pg_dump (Hiroshi + linkend="app-pgdump">pg_dump (Hiroshi Saito) @@ -6670,7 +6670,7 @@ - Add GIN (Generalized + Add GIN (Generalized Inverted iNdex) index access method (Teodor, Oleg) @@ -6682,7 +6682,7 @@ Rtree has been re-implemented using GiST. Among other + linkend="gist">GiST. Among other differences, this means that rtree indexes now have support for crash recovery via write-ahead logging (WAL). 
diff --git a/doc/src/sgml/release-8.3.sgml b/doc/src/sgml/release-8.3.sgml index 45ecf9c054..844f796179 100644 --- a/doc/src/sgml/release-8.3.sgml +++ b/doc/src/sgml/release-8.3.sgml @@ -7867,7 +7867,7 @@ current_date < 2017-11-17 - <link linkend="APP-PSQL"><application>psql</application></link> + <link linkend="app-psql"><application>psql</application></link> @@ -7956,7 +7956,7 @@ current_date < 2017-11-17 - <link linkend="APP-PGDUMP"><application>pg_dump</application></link> + <link linkend="app-pgdump"><application>pg_dump</application></link> diff --git a/doc/src/sgml/release-9.0.sgml b/doc/src/sgml/release-9.0.sgml index e09f38e180..7a8db62b8d 100644 --- a/doc/src/sgml/release-9.0.sgml +++ b/doc/src/sgml/release-9.0.sgml @@ -7742,9 +7742,9 @@ Easier database object permissions management. GRANT/REVOKE IN + linkend="sql-grant">GRANT/REVOKE IN SCHEMA supports mass permissions changes on existing objects, - while ALTER DEFAULT + while ALTER DEFAULT PRIVILEGES allows control of privileges for objects created in the future. Large objects (BLOBs) now support permissions management as well. @@ -7754,7 +7754,7 @@ Broadly enhanced stored procedure support. - The DO statement supports + The DO statement supports ad-hoc or anonymous code blocks. Functions can now be called using named parameters. PL/pgSQL is now installed by default, and @@ -7783,14 +7783,14 @@ New trigger features, including SQL-standard-compliant per-column triggers and + linkend="sql-createtrigger">per-column triggers and conditional trigger execution. - Deferrable + Deferrable unique constraints. Mass updates to unique keys are now possible without trickery. @@ -7816,8 +7816,8 @@ New high-performance implementation of the - LISTEN/NOTIFY feature. + LISTEN/NOTIFY feature. Pending events are now stored in a memory-based queue rather than a table. Also, a payload string can be sent with each event, rather than transmitting just an event name as before. @@ -7827,7 +7827,7 @@ New implementation of - VACUUM FULL. + VACUUM FULL. This command now rewrites the entire table and indexes, rather than moving individual rows to compact space. It is substantially faster in most cases, and no longer results in index bloat. @@ -8287,7 +8287,7 @@ Allow per-tablespace values to be set for sequential and random page cost estimates (seq_page_cost/random_page_cost) - via ALTER TABLESPACE + via ALTER TABLESPACE ... 
SET/RESET (Robert Haas) @@ -8308,7 +8308,7 @@ Improve performance of TRUNCATE when + linkend="sql-truncate">TRUNCATE when the table was created or truncated earlier in the same transaction (Tom Lane) @@ -8414,7 +8414,7 @@ - Improve ANALYZE + Improve ANALYZE to support inheritance-tree statistics (Tom Lane) @@ -8451,7 +8451,7 @@ Allow setting of number-of-distinct-values statistics using ALTER TABLE + linkend="sql-altertable">ALTER TABLE (Robert Haas) @@ -8707,7 +8707,7 @@ - Perform SELECT + Perform SELECT FOR UPDATE/SHARE processing after applying LIMIT, so the number of rows returned is always predictable (Tom Lane) @@ -8725,7 +8725,7 @@ Allow mixing of traditional and SQL-standard LIMIT/OFFSET + linkend="sql-limit">LIMIT/OFFSET syntax (Tom Lane) @@ -8733,7 +8733,7 @@ Extend the supported frame options in window functions (Hitoshi + linkend="sql-window">window functions (Hitoshi Harada) @@ -8795,7 +8795,7 @@ - Speed up CREATE + Speed up CREATE DATABASE by deferring flushes to disk (Andres Freund, Greg Stark) @@ -8803,7 +8803,7 @@ - Allow comments on + Allow comments on columns of tables, views, and composite types only, not other relation types such as indexes and TOAST tables (Tom Lane) @@ -8812,7 +8812,7 @@ Allow the creation of enumerated types containing + linkend="sql-createtype-enum">enumerated types containing no values (Bruce Momjian) @@ -8870,7 +8870,7 @@ - <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</command></link> + <link linkend="sql-createtable"><command>CREATE TABLE</command></link> @@ -8914,7 +8914,7 @@ - Add deferrable + Add deferrable unique constraints (Dean Rasheed) @@ -8941,7 +8941,7 @@ Exclusion constraints generalize uniqueness constraints by allowing arbitrary comparison operators, not just equality. They are created - with the CREATE + with the CREATE TABLE CONSTRAINT ... EXCLUDE clause. The most common use of exclusion constraints is to specify that column entries must not overlap, rather than simply not be equal. 
This is @@ -8976,7 +8976,7 @@ Add the ability to make mass permission changes across a whole schema using the new GRANT/REVOKE + linkend="sql-grant">GRANT/REVOKE IN SCHEMA clause (Petr Jelinek) @@ -8989,7 +8989,7 @@ - Add ALTER + Add ALTER DEFAULT PRIVILEGES command to control privileges of objects created later (Petr Jelinek) @@ -9028,8 +9028,8 @@ - Make LISTEN/NOTIFY store pending events + Make LISTEN/NOTIFY store pending events in a memory queue, rather than in a system table (Joachim Wieland) @@ -9042,7 +9042,7 @@ - Allow NOTIFY + Allow NOTIFY to pass an optional payload string to listeners (Joachim Wieland) @@ -9056,7 +9056,7 @@ - Allow CLUSTER + Allow CLUSTER on all per-database system catalogs (Tom Lane) @@ -9068,7 +9068,7 @@ - <link linkend="SQL-COPY"><command>COPY</command></link> + <link linkend="sql-copy"><command>COPY</command></link> @@ -9101,7 +9101,7 @@ - <link linkend="SQL-EXPLAIN"><command>EXPLAIN</command></link> + <link linkend="sql-explain"><command>EXPLAIN</command></link> @@ -9156,7 +9156,7 @@ - <link linkend="SQL-VACUUM"><command>VACUUM</command></link> + <link linkend="sql-vacuum"><command>VACUUM</command></link> @@ -9200,7 +9200,7 @@ Allow an index to be named automatically by omitting the index name in - CREATE INDEX + CREATE INDEX (Tom Lane) @@ -9581,7 +9581,7 @@ Support execution of anonymous code blocks using the DO statement + linkend="sql-do">DO statement (Petr Jelinek, Joshua Tolley, Hannu Valtonen) @@ -9595,7 +9595,7 @@ Implement SQL-standard-compliant per-column triggers + linkend="sql-createtrigger">per-column triggers (Itagaki Takahiro) @@ -9609,7 +9609,7 @@ Add the WHEN clause to CREATE TRIGGER + linkend="sql-createtrigger">CREATE TRIGGER to allow control over whether a trigger is fired (Itagaki Takahiro) @@ -9635,7 +9635,7 @@ Add the OR REPLACE clause to CREATE LANGUAGE + linkend="sql-createlanguage">CREATE LANGUAGE (Tom Lane) @@ -9937,7 +9937,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add an option to vacuumdb, to analyze without + linkend="app-vacuumdb">vacuumdb, to analyze without vacuuming (Bruce Momjian) @@ -9945,14 +9945,14 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <link linkend="APP-PSQL"><application>psql</application></link> + <link linkend="app-psql"><application>psql</application></link> Add support for quoting/escaping the values of psql - variables as SQL strings or + variables as SQL strings or identifiers (Pavel Stehule, Robert Haas) @@ -9980,7 +9980,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Fix psql --file - to properly honor + linkend="r1-app-psql-3"> (Bruce Momjian) @@ -10039,7 +10039,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then <application>psql</application> <link - linkend="APP-PSQL-meta-commands"><command>\d</command></link> + linkend="app-psql-meta-commands"><command>\d</command></link> Commands @@ -10084,7 +10084,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then - <link linkend="APP-PGDUMP"><application>pg_dump</application></link> + <link linkend="app-pgdump"><application>pg_dump</application></link> @@ -10990,7 +10990,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then Add dblink_get_notify() + linkend="contrib-dblink-get-notify">dblink_get_notify() to contrib/dblink (Marcus Kempe) @@ -11007,7 +11007,7 @@ if TG_OP = 'INSERT' and NEW.col1 = ... then This affects dblink_build_sql_insert() + linkend="contrib-dblink-build-sql-insert">dblink_build_sql_insert() and related functions. These functions now number columns according to logical not physical column numbers. 
diff --git a/doc/src/sgml/release-9.1.sgml b/doc/src/sgml/release-9.1.sgml index 2939631609..92948a4ad0 100644 --- a/doc/src/sgml/release-9.1.sgml +++ b/doc/src/sgml/release-9.1.sgml @@ -8816,7 +8816,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add support for foreign + Add support for foreign tables @@ -8845,7 +8845,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Support unlogged tables using the UNLOGGED - option in CREATE + option in CREATE TABLE @@ -8861,13 +8861,13 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add nearest-neighbor (order-by-operator) searching to GiST indexes + linkend="gist">GiST indexes - Add a SECURITY + Add a SECURITY LABEL command and support for SELinux permissions control @@ -9154,7 +9154,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 All contrib modules are now installed with CREATE EXTENSION + linkend="sql-createextension">CREATE EXTENSION rather than by manually invoking their SQL scripts (Dimitri Fontaine, Tom Lane) @@ -9229,7 +9229,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Support unlogged tables using the UNLOGGED - option in CREATE + option in CREATE TABLE (Robert Haas) @@ -9643,7 +9643,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add a replication permission + Add a replication permission for roles (Magnus Hagander) @@ -9986,9 +9986,9 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Extensions are controlled by the new CREATE/ALTER/DROP EXTENSION + linkend="sql-createextension">CREATE/ALTER/DROP EXTENSION commands. This replaces ad-hoc methods of grouping objects that are added to a PostgreSQL installation. @@ -9996,7 +9996,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add support for foreign + Add support for foreign tables (Shigeru Hanada, Robert Haas, Jan Urbanski, Heikki Linnakangas) @@ -10011,14 +10011,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Allow new values to be added to an existing enum type via - ALTER TYPE (Andrew + ALTER TYPE (Andrew Dunstan) - Add ALTER TYPE ... + Add ALTER TYPE ... ADD/DROP/ALTER/RENAME ATTRIBUTE (Peter Eisentraut) @@ -10037,7 +10037,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add RESTRICT/CASCADE to ALTER TYPE operations + linkend="sql-altertype">ALTER TYPE operations on typed tables (Peter Eisentraut) @@ -10079,7 +10079,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-CREATETABLE"><command>CREATE/ALTER TABLE</command></link> + <link linkend="sql-createtable"><command>CREATE/ALTER TABLE</command></link> @@ -10112,7 +10112,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Allow ALTER TABLE + Allow ALTER TABLE ... SET DATA TYPE to avoid table rewrites in appropriate cases (Noah Misch, Robert Haas) @@ -10127,7 +10127,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add CREATE TABLE IF + Add CREATE TABLE IF NOT EXISTS syntax (Robert Haas) @@ -10162,7 +10162,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add a SECURITY + Add a SECURITY LABEL command (KaiGai Kohei) @@ -10196,7 +10196,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Make TRUNCATE ... RESTART + Make TRUNCATE ... 
RESTART IDENTITY restart sequences transactionally (Steve Singer) @@ -10211,14 +10211,14 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-COPY"><command>COPY</command></link> + <link linkend="sql-copy"><command>COPY</command></link> Add ENCODING option to COPY TO/FROM (Hitoshi + linkend="sql-copy">COPY TO/FROM (Hitoshi Harada, Itagaki Takahiro) @@ -10230,7 +10230,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - Add bidirectional COPY + Add bidirectional COPY protocol support (Fujii Masao) @@ -10244,7 +10244,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-EXPLAIN"><command>EXPLAIN</command></link> + <link linkend="sql-explain"><command>EXPLAIN</command></link> @@ -10260,15 +10260,15 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-VACUUM"><command>VACUUM</command></link> + <link linkend="sql-vacuum"><command>VACUUM</command></link> Add additional details to the output of VACUUM FULL VERBOSE - and CLUSTER VERBOSE + linkend="sql-vacuum">VACUUM FULL VERBOSE + and CLUSTER VERBOSE (Itagaki Takahiro) @@ -10294,7 +10294,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 - <link linkend="SQL-CLUSTER"><command>CLUSTER</command></link> + <link linkend="sql-cluster"><command>CLUSTER</command></link> @@ -10317,7 +10317,7 @@ Branch: REL9_0_STABLE [9d6af7367] 2015-08-15 11:02:34 -0400 Add nearest-neighbor (order-by-operator) searching to GiST indexes (Teodor Sigaev, Tom Lane) + linkend="gist">GiST indexes (Teodor Sigaev, Tom Lane) @@ -10334,7 +10334,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow GIN indexes to index null + Allow GIN indexes to index null and empty values (Tom Lane) @@ -10346,7 +10346,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Allow GIN indexes to + Allow GIN indexes to better recognize duplicate search entries (Tom Lane) @@ -10358,7 +10358,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Fix GiST indexes to be fully + Fix GiST indexes to be fully crash-safe (Heikki Linnakangas) @@ -10668,7 +10668,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - Support INSTEAD + Support INSTEAD OF triggers on views (Dean Rasheed) @@ -10869,7 +10869,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PSQL"><application>psql</application></link> + <link linkend="app-psql"><application>psql</application></link> @@ -10963,15 +10963,15 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PGDUMP"><application>pg_dump</application></link> + <link linkend="app-pgdump"><application>pg_dump</application></link> - Add pg_dump + Add pg_dump and pg_dumpall + linkend="app-pg-dumpall">pg_dumpall option to force quoting of all identifiers (Robert Haas) @@ -10994,7 +10994,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; - <link linkend="APP-PG-CTL"><application>pg_ctl</application></link> + <link linkend="app-pg-ctl"><application>pg_ctl</application></link> @@ -11528,7 +11528,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; This uses the new SECURITY LABEL + linkend="sql-security-label">SECURITY LABEL facility. 
@@ -11716,7 +11716,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; Merge documentation for CREATE CONSTRAINT TRIGGER and CREATE TRIGGER + linkend="sql-createtrigger">CREATE TRIGGER (Alvaro Herrera) diff --git a/doc/src/sgml/release-9.2.sgml b/doc/src/sgml/release-9.2.sgml index ca8c87a4ab..edc97fb8e7 100644 --- a/doc/src/sgml/release-9.2.sgml +++ b/doc/src/sgml/release-9.2.sgml @@ -9091,7 +9091,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add the SP-GiST (Space-Partitioned + Add the SP-GiST (Space-Partitioned GiST) index access method @@ -9112,7 +9112,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a security_barrier + linkend="sql-createview">security_barrier option for views @@ -9300,7 +9300,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Prevent ALTER + Prevent ALTER DOMAIN from working on non-domain types (Peter Eisentraut) @@ -9314,7 +9314,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 No longer forcibly lowercase procedural language names in CREATE FUNCTION + linkend="sql-createfunction">CREATE FUNCTION (Robert Haas) @@ -9352,7 +9352,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Provide consistent backquote, variable expansion, and quoted substring behavior in psql meta-command + linkend="app-psql">psql meta-command arguments (Tom Lane) @@ -9368,9 +9368,9 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 No longer treat clusterdb + linkend="app-clusterdb">clusterdb table names as double-quoted; no longer treat reindexdb table + linkend="app-reindexdb">reindexdb table and index names as double-quoted (Bruce Momjian) @@ -9382,7 +9382,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - createuser + createuser no longer prompts for option settings by default (Peter Eisentraut) @@ -9394,7 +9394,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Disable prompting for the user name in dropuser unless + linkend="app-dropuser">dropuser unless is specified (Peter Eisentraut) @@ -9556,7 +9556,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add the SP-GiST (Space-Partitioned + Add the SP-GiST (Space-Partitioned GiST) index access method (Teodor Sigaev, Oleg Bartunov, Tom Lane) @@ -10385,7 +10385,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add CONCURRENTLY option to DROP INDEX + linkend="sql-dropindex">DROP INDEX (Simon Riggs) @@ -10450,7 +10450,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add the ability to rename + Add the ability to rename constraints (Peter Eisentraut) @@ -10466,7 +10466,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Reduce need to rebuild tables and indexes for certain ALTER TABLE + linkend="sql-altertable">ALTER TABLE ... ALTER COLUMN TYPE operations (Noah Misch) @@ -10484,7 +10484,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Avoid having ALTER + Avoid having ALTER TABLE revalidate foreign key constraints in some cases where it is not necessary (Noah Misch) @@ -10504,16 +10504,16 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add ALTER + Add ALTER FOREIGN DATA WRAPPER ... RENAME - and ALTER + and ALTER SERVER ... RENAME (Peter Eisentraut) - Add ALTER + Add ALTER DOMAIN ... 
RENAME (Peter Eisentraut) @@ -10540,7 +10540,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="SQL-CREATETABLE"><command>CREATE TABLE</command></link> + <link linkend="sql-createtable"><command>CREATE TABLE</command></link> @@ -10583,7 +10583,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a security_barrier + linkend="sql-createview">security_barrier option for views (KaiGai Kohei, Robert Haas) @@ -10599,7 +10599,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add a new LEAKPROOF function + linkend="sql-createfunction">LEAKPROOF function attribute to mark functions that can safely be pushed down into security_barrier views (KaiGai Kohei) @@ -10646,7 +10646,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow VACUUM to more + Allow VACUUM to more easily skip pages that cannot be locked (Simon Riggs, Robert Haas) @@ -10658,7 +10658,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Make EXPLAIN + Make EXPLAIN (BUFFERS) count blocks dirtied and written (Robert Haas) @@ -10738,7 +10738,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Allow domains to be + Allow domains to be declared NOT VALID (Álvaro Herrera) @@ -10828,7 +10828,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 database state. Snapshots are exported via pg_export_snapshot() - and imported via SET + and imported via SET TRANSACTION SNAPSHOT. Only snapshots from currently-running transactions can be imported. @@ -11083,7 +11083,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - Add initdb + Add initdb options and (Peter Eisentraut) @@ -11098,7 +11098,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add / flags to - createuser + createuser to control replication permission (Fujii Masao) @@ -11106,8 +11106,8 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Add the option to dropdb and dropuser (Josh + linkend="app-dropdb">dropdb and dropuser (Josh Kupershmidt) @@ -11123,7 +11123,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="APP-PSQL"><application>psql</application></link> + <link linkend="app-psql"><application>psql</application></link> @@ -11309,7 +11309,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 In psql tab completion, complete SQL keywords in either upper or lower case according to the new COMP_KEYWORD_CASE + linkend="app-psql-variables">COMP_KEYWORD_CASE setting (Peter Eisentraut) @@ -11348,7 +11348,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 - <link linkend="APP-PGDUMP"><application>pg_dump</application></link> + <link linkend="app-pgdump"><application>pg_dump</application></link> @@ -11380,7 +11380,7 @@ Branch: REL9_2_STABLE [6b700301c] 2015-02-17 16:03:00 +0100 Make pg_dumpall dump all + linkend="app-pg-dumpall">pg_dumpall dump all roles first, then all configuration settings on roles (Phil Sorber) diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml index dada255057..84523d36b7 100644 --- a/doc/src/sgml/release-9.3.sgml +++ b/doc/src/sgml/release-9.3.sgml @@ -10497,7 +10497,7 @@ ALTER EXTENSION hstore UPDATE; - Add materialized + Add materialized views @@ -10505,7 +10505,7 @@ ALTER EXTENSION hstore UPDATE; Make simple views auto-updatable + linkend="sql-createview-updatable-views">auto-updatable @@ -10527,7 +10527,7 @@ ALTER EXTENSION hstore UPDATE; - Allow foreign data + Allow foreign data wrappers to support writes 
(inserts/updates/deletes) on foreign tables @@ -10582,7 +10582,7 @@ ALTER EXTENSION hstore UPDATE; A dump/restore using pg_dumpall, or use + linkend="app-pg-dumpall">pg_dumpall, or use of pg_upgrade, is required for those wishing to migrate data from any previous release. @@ -10664,7 +10664,7 @@ ALTER EXTENSION hstore UPDATE; - Change multicolumn ON UPDATE + Change multicolumn ON UPDATE SET NULL/SET DEFAULT foreign key actions to affect all columns of the constraint, not just those changed in the UPDATE (Tom Lane) @@ -10818,7 +10818,7 @@ ALTER EXTENSION hstore UPDATE; - Allow GiST indexes to be + Allow GiST indexes to be unlogged (Jeevan Chalke) @@ -10893,7 +10893,7 @@ ALTER EXTENSION hstore UPDATE; - Add COPY FREEZE + Add COPY FREEZE option to avoid the overhead of marking tuples as frozen later (Simon Riggs, Jeff Davis) @@ -10922,7 +10922,7 @@ ALTER EXTENSION hstore UPDATE; Improve performance of the CREATE TEMPORARY TABLE ... ON + linkend="sql-createtable">CREATE TEMPORARY TABLE ... ON COMMIT DELETE ROWS option by not truncating such temporary tables in transactions that haven't touched any temporary tables (Heikki Linnakangas) @@ -10989,7 +10989,7 @@ ALTER EXTENSION hstore UPDATE; The checksum option can be set during initdb. + linkend="app-initdb">initdb. @@ -11139,7 +11139,7 @@ ALTER EXTENSION hstore UPDATE; Increase the maximum initdb-configured value for initdb-configured value for shared_buffers to 128MB (Robert Haas) @@ -11216,7 +11216,7 @@ ALTER EXTENSION hstore UPDATE; Add the last checkpoint's redo location to pg_controldata's + linkend="app-pgcontroldata">pg_controldata's output (Fujii Masao) @@ -11318,8 +11318,8 @@ ALTER EXTENSION hstore UPDATE; Add support for piping COPY and psql \copy + linkend="sql-copy">COPY and psql \copy data to/from an external program (Etsuro Fujita) @@ -11327,7 +11327,7 @@ ALTER EXTENSION hstore UPDATE; Allow a multirow VALUES clause in a rule + linkend="sql-values">VALUES clause in a rule to reference OLD/NEW (Tom Lane) @@ -11355,7 +11355,7 @@ ALTER EXTENSION hstore UPDATE; - Allow foreign data + Allow foreign data wrappers to support writes (inserts/updates/deletes) on foreign tables (KaiGai Kohei) @@ -11363,14 +11363,14 @@ ALTER EXTENSION hstore UPDATE; - Add CREATE SCHEMA ... IF + Add CREATE SCHEMA ... IF NOT EXISTS clause (Fabrízio de Royes Mello) - Make REASSIGN + Make REASSIGN OWNED also change ownership of shared objects (Álvaro Herrera) @@ -11386,7 +11386,7 @@ ALTER EXTENSION hstore UPDATE; - Suppress CREATE + Suppress CREATE TABLE's messages about implicit index and sequence creation (Robert Haas) @@ -11399,7 +11399,7 @@ ALTER EXTENSION hstore UPDATE; - Allow DROP TABLE IF + Allow DROP TABLE IF EXISTS to succeed when a non-existent schema is specified in the table name (Bruce Momjian) @@ -11434,7 +11434,7 @@ ALTER EXTENSION hstore UPDATE; Support IF NOT EXISTS option in ALTER TYPE ... ADD VALUE + linkend="sql-altertype">ALTER TYPE ... ADD VALUE (Andrew Dunstan) @@ -11445,13 +11445,13 @@ ALTER EXTENSION hstore UPDATE; - Add ALTER ROLE ALL + Add ALTER ROLE ALL SET to establish settings for all users (Peter Eisentraut) This allows settings to apply to all users in all databases. ALTER DATABASE SET + linkend="sql-alterdatabase">ALTER DATABASE SET already allowed addition of settings for all users in a single database. postgresql.conf has a similar effect. @@ -11459,7 +11459,7 @@ ALTER EXTENSION hstore UPDATE; - Add support for ALTER RULE + Add support for ALTER RULE ... 
RENAME (Ali Dar) @@ -11475,7 +11475,7 @@ ALTER EXTENSION hstore UPDATE; - Add materialized + Add materialized views (Kevin Grittner) @@ -11491,7 +11491,7 @@ ALTER EXTENSION hstore UPDATE; Make simple views auto-updatable + linkend="sql-createview-updatable-views">auto-updatable (Dean Rasheed) @@ -11499,14 +11499,14 @@ ALTER EXTENSION hstore UPDATE; Simple views that reference some or all columns from a single base table are now updatable by default. More complex views can be made updatable using INSTEAD OF triggers - or INSTEAD rules. + linkend="sql-createtrigger">INSTEAD OF triggers + or INSTEAD rules. - Add CREATE RECURSIVE + Add CREATE RECURSIVE VIEW syntax (Peter Eisentraut) @@ -11545,7 +11545,7 @@ ALTER EXTENSION hstore UPDATE; - Increase the maximum size of large + Increase the maximum size of large objects from 2GB to 4TB (Nozomi Anzai, Yugo Nagata) @@ -11675,7 +11675,7 @@ ALTER EXTENSION hstore UPDATE; This reduces line length in view printing, for instance in pg_dump output. + linkend="app-pgdump">pg_dump output. @@ -11728,7 +11728,7 @@ ALTER EXTENSION hstore UPDATE; Allow PL/pgSQL to access the number of rows processed by - COPY (Pavel Stehule) + COPY (Pavel Stehule) @@ -11818,7 +11818,7 @@ ALTER EXTENSION hstore UPDATE; Allow SPI functions to access the number of rows processed - by COPY (Pavel Stehule) + by COPY (Pavel Stehule) @@ -11842,16 +11842,16 @@ ALTER EXTENSION hstore UPDATE; Support multiple arguments for pg_restore, - clusterdb, - reindexdb, - and vacuumdb + linkend="app-pgrestore">pg_restore, + clusterdb, + reindexdb, + and vacuumdb (Josh Kupershmidt) This is similar to the way pg_dump's + linkend="app-pgdump">pg_dump's option works. @@ -11859,7 +11859,7 @@ ALTER EXTENSION hstore UPDATE; Add option to pg_dumpall, pg_dumpall, pg_basebackup, and pg_receivexlog @@ -11879,7 +11879,7 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-PSQL"><application>psql</application></link> + <link linkend="app-psql"><application>psql</application></link> @@ -11924,7 +11924,7 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-PSQL-meta-commands">Backslash Commands</link> + <link linkend="app-psql-meta-commands">Backslash Commands</link> @@ -12035,7 +12035,7 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-PGDUMP"><application>pg_dump</application></link> + <link linkend="app-pgdump"><application>pg_dump</application></link> @@ -12076,7 +12076,7 @@ ALTER EXTENSION hstore UPDATE; - <link linkend="APP-INITDB"><application>initdb</application></link> + <link linkend="app-initdb"><application>initdb</application></link> @@ -12208,7 +12208,7 @@ ALTER EXTENSION hstore UPDATE; Add isolation tests for CREATE INDEX + linkend="sql-createindex">CREATE INDEX CONCURRENTLY (Abhijit Menon-Sen) diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index deb74b4e1c..f6c38bd912 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -8490,14 +8490,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add new SQL command + Add new SQL command for changing postgresql.conf configuration file entries - Reduce lock strength for some + Reduce lock strength for some commands @@ -8686,7 +8686,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Rename EXPLAIN + Rename EXPLAIN ANALYZE's total runtime output to execution time (Tom Lane) @@ -8699,7 +8699,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - SHOW TIME ZONE now + SHOW TIME ZONE now outputs simple numeric UTC offsets in POSIX timezone format (Tom 
Lane) @@ -8924,7 +8924,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make properly report dead but + Make properly report dead but not-yet-removable rows to the statistics collector (Hari Babu) @@ -8942,14 +8942,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Reduce GIN index size + Reduce GIN index size (Alexander Korotkov, Heikki Linnakangas) Indexes upgraded via will work fine but will still be in the old, larger GIN format. - Use to recreate old GIN indexes in the + Use to recreate old GIN indexes in the new format. @@ -8957,14 +8957,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Improve speed of multi-key GIN lookups (Alexander Korotkov, + linkend="gin">GIN lookups (Alexander Korotkov, Heikki Linnakangas) - Add GiST index support + Add GiST index support for inet and cidr data types (Emre Hasegeli) @@ -9038,8 +9038,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Attempt to freeze tuples when tables are rewritten with or VACUUM FULL (Robert Haas, + linkend="sql-cluster"> or VACUUM FULL (Robert Haas, Andres Freund) @@ -9050,7 +9050,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve speed of with default with default nextval() columns (Simon Riggs) @@ -9059,7 +9059,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Improve speed of accessing many different sequences in the same session + linkend="sql-createsequence">sequences in the same session (David Rowley) @@ -9074,7 +9074,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Reduce memory allocated by PL/pgSQL - blocks (Tom Lane) + blocks (Tom Lane) @@ -9222,7 +9222,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add new SQL command + Add new SQL command for changing postgresql.conf configuration file entries (Amit Kapila) @@ -9503,7 +9503,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add relation option + linkend="sql-createtable-storage-parameters"> to identify user-created tables involved in logical change-set encoding (Andres Freund) @@ -9558,7 +9558,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow to have + Allow to have an empty target list (Tom Lane) @@ -9570,7 +9570,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Ensure that SELECT ... FOR UPDATE + Ensure that SELECT ... FOR UPDATE NOWAIT does not wait in corner cases involving already-concurrently-updated tuples (Craig Ringer and Thomas Munro) @@ -9587,7 +9587,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add DISCARD + Add DISCARD SEQUENCES command to discard cached sequence-related state (Fabrízio de Royes Mello, Robert Haas) @@ -9600,7 +9600,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add FORCE NULL option - to COPY FROM, which + to COPY FROM, which causes quoted strings matching the specified null string to be converted to NULLs in CSV mode (Ian Barwick, Michael Paquier) @@ -9628,7 +9628,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="SQL-EXPLAIN"> + <xref linkend="sql-explain"> @@ -9671,7 +9671,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This is done with REFRESH MATERIALIZED + linkend="sql-refreshmaterializedview">REFRESH MATERIALIZED VIEW CONCURRENTLY. 
@@ -9679,7 +9679,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow views to be automatically + linkend="sql-createview-updatable-views">automatically updated even if they contain some non-updatable columns (Dean Rasheed) @@ -9701,7 +9701,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - This is controlled with the new + This is controlled with the new clause WITH CHECK OPTION. @@ -9726,7 +9726,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Support triggers on foreign + Support triggers on foreign tables (Ronan Dunklau) @@ -9735,22 +9735,22 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow moving groups of objects from one tablespace to another using the ALL IN TABLESPACE ... SET TABLESPACE form of - , , or - (Stephen Frost) + , , or + (Stephen Frost) Allow changing foreign key constraint deferrability - via ... ALTER + via ... ALTER CONSTRAINT (Simon Riggs) - Reduce lock strength for some + Reduce lock strength for some commands (Simon Riggs, Noah Misch, Robert Haas) @@ -9768,18 +9768,18 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow tablespace options to be set - in (Vik Fearing) + in (Vik Fearing) Formerly these options could only be set - via . + via . - Allow to define the estimated + Allow to define the estimated size of the aggregate's transition state data (Hadi Moshayedi) @@ -10266,14 +10266,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add option + Add option to specify role membership (Christopher Browne) - Add + Add option to analyze in stages of increasing granularity (Peter Eisentraut) @@ -10343,14 +10343,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="APP-PSQL"> + <xref linkend="app-psql"> Suppress No rows output in psql + linkend="app-psql-meta-commands"> mode when the footer is disabled (Bruce Momjian) @@ -10365,7 +10365,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <link linkend="APP-PSQL-meta-commands">Backslash Commands</link> + <link linkend="app-psql-meta-commands">Backslash Commands</link> @@ -10475,13 +10475,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="APP-PGDUMP"> + <xref linkend="app-pgdump"> - Allow options + Allow options , , and to be specified multiple times (Heikki Linnakangas) @@ -10501,8 +10501,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This change prevents unnecessary errors when removing old objects. The new option - for , , - and is only available + for , , + and is only available when is also specified. 
diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index 2f23abe329..11740a4108 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -5726,7 +5726,7 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 - Add Block Range Indexes (BRIN) + Add Block Range Indexes (BRIN) @@ -5999,7 +5999,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2015-05-15 [b0b7be6] Alvaro..: Add BRIN infrastructure for "inclusion" opclasses --> - Add Block Range Indexes (BRIN) + Add Block Range Indexes (BRIN) (Álvaro Herrera) @@ -6018,7 +6018,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB Allow queries to perform accurate distance filtering of bounding-box-indexed objects (polygons, circles) using GiST indexes (Alexander Korotkov, Heikki + linkend="gist">GiST indexes (Alexander Korotkov, Heikki Linnakangas) @@ -6038,7 +6038,7 @@ max_wal_size = (3 * checkpoint_segments) * 16MB 2015-03-30 [0633a60] Heikki..: Add index-only scan support to range type GiST .. --> - Allow GiST indexes to perform index-only + Allow GiST indexes to perform index-only scans (Anastasia Lubennikova, Heikki Linnakangas, Andreas Karlsson) @@ -6540,7 +6540,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-09-02 [bd3b7a9] Fujii ..: Support ALTER SYSTEM RESET command. --> - Allow ALTER SYSTEM + Allow ALTER SYSTEM values to be reset with ALTER SYSTEM RESET (Vik Fearing) @@ -6757,7 +6757,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow setting multiple target columns in - an UPDATE from the result of + an UPDATE from the result of a single sub-SELECT (Tom Lane) @@ -6772,7 +6772,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-10-07 [df630b0] Alvaro..: Implement SKIP LOCKED for row-level locks --> - Add SELECT option + Add SELECT option SKIP LOCKED to skip locked rows (Thomas Munro) @@ -6787,7 +6787,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-15 [f6d208d] Simon ..: TABLESAMPLE, SQL Standard and extensible --> - Add SELECT option + Add SELECT option TABLESAMPLE to return a subset of a table (Petr Jelínek) @@ -6825,7 +6825,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add more details about sort ordering in EXPLAIN output (Marius Timmer, + linkend="sql-explain">EXPLAIN output (Marius Timmer, Lukas Kreft, Arne Scheffer) @@ -6840,7 +6840,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-18 [35192f0] Alvaro..: Have VACUUM log number of skipped pages due to .. --> - Make VACUUM log the + Make VACUUM log the number of pages skipped due to pins (Jim Nasby) @@ -6850,7 +6850,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-02-20 [d42358e] Alvaro..: Have TRUNCATE update pgstat tuple counters --> - Make TRUNCATE properly + Make TRUNCATE properly update the pg_stat* tuple counters (Alexander Shulgin) @@ -6858,7 +6858,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="SQL-REINDEX"> + <xref linkend="sql-reindex"> @@ -6926,10 +6926,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature allows row-by-row control over which users can add, modify, or even see rows in a table. This is controlled by new - commands CREATE/ALTER/DROP POLICY and ALTER TABLE ... ENABLE/DISABLE + commands CREATE/ALTER/DROP POLICY and ALTER TABLE ... ENABLE/DISABLE ROW SECURITY. 
@@ -6941,7 +6941,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Allow changing of the WAL logging status of a table after creation with ALTER TABLE ... SET LOGGED / + linkend="sql-altertable">ALTER TABLE ... SET LOGGED / UNLOGGED (Fabrízio de Royes Mello) @@ -6954,10 +6954,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add IF NOT EXISTS clause to CREATE TABLE AS, - CREATE INDEX, - CREATE SEQUENCE, - and CREATE + linkend="sql-createtableas">CREATE TABLE AS, + CREATE INDEX, + CREATE SEQUENCE, + and CREATE MATERIALIZED VIEW (Fabrízio de Royes Mello) @@ -6968,7 +6968,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add support for IF EXISTS to ALTER TABLE ... RENAME + linkend="sql-altertable">ALTER TABLE ... RENAME CONSTRAINT (Bruce Momjian) @@ -6986,8 +6986,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature is now supported in - , , - , , + , , + , , and ALTER object OWNER TO commands. @@ -6997,7 +6997,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-23 [7eca575] Alvaro..: get_object_address: separate domain constraints.. --> - Support comments on domain + Support comments on domain constraints (Álvaro Herrera) @@ -7017,7 +7017,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-11 [fa26424] Stephe..: Allow LOCK TABLE .. ROW EXCLUSIVE MODE with IN.. --> - Allow LOCK TABLE ... ROW EXCLUSIVE + Allow LOCK TABLE ... ROW EXCLUSIVE MODE for those with INSERT privileges on the target table (Stephen Frost) @@ -7049,8 +7049,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow CREATE/ALTER DATABASE + linkend="sql-createdatabase">CREATE/ALTER DATABASE to manipulate datistemplate and datallowconn (Vik Fearing) @@ -7075,7 +7075,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-10 [59efda3] Tom Lane: Implement IMPORT FOREIGN SCHEMA. --> - Add support for + Add support for (Ronan Dunklau, Michael Paquier, Tom Lane) @@ -7164,7 +7164,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Allow event triggers on table rewrites caused by ALTER TABLE (Dimitri + linkend="sql-altertable">ALTER TABLE (Dimitri Fontaine) @@ -7175,10 +7175,10 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add event trigger support for database-level COMMENT, SECURITY LABEL, - and GRANT/REVOKE (Álvaro Herrera) + linkend="sql-comment">COMMENT, SECURITY LABEL, + and GRANT/REVOKE (Álvaro Herrera) @@ -7645,8 +7645,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This change adds new commands CREATE/DROP TRANSFORM. + linkend="sql-createtransform">CREATE/DROP TRANSFORM. This also adds optional transformations between the hstore and ltree types to/from - Allow vacuumdb to + Allow vacuumdb to vacuum in parallel using new option (Dilip Kumar) @@ -7786,7 +7786,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-11-12 [5094da9] Alvaro..: vacuumdb: don't prompt for passwords over and .. --> - In vacuumdb, do not + In vacuumdb, do not prompt for the same password repeatedly when multiple connections are necessary (Haribabu Kommi, Michael Paquier) @@ -7798,7 +7798,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 
--> Add option to reindexdb (Sawada + linkend="app-reindexdb">reindexdb (Sawada Masahiko) @@ -7829,7 +7829,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="APP-PSQL"> + <xref linkend="app-psql"> @@ -7879,7 +7879,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add new option %l in psql's PROMPT variables + linkend="app-psql-variables">PROMPT variables to display the current multiline statement line number (Sawada Masahiko) @@ -7891,7 +7891,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add \pset option pager_min_lines + linkend="app-psql-meta-commands">pager_min_lines to control pager invocation (Andrew Dunstan) @@ -7949,7 +7949,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <link linkend="APP-PSQL-meta-commands">Backslash Commands</link> + <link linkend="app-psql-meta-commands">Backslash Commands</link> @@ -8045,7 +8045,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="APP-PGDUMP"> + <xref linkend="app-pgdump"> @@ -8640,7 +8640,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-11-21 [3a82bc6] Heikki..: Add pageinspect functions for inspecting GIN in.. --> - Add GIN + Add GIN index inspection functions to pageinspect (Heikki Linnakangas, Peter Geoghegan, Michael Paquier) diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index a89b1b5879..90b4ed3585 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -5048,7 +5048,7 @@ and many others in the same vein 2015-09-02 [30bb26b5e] Allow usage of huge maintenance_work_mem for GIN build. --> - Allow GIN index builds to + Allow GIN index builds to make effective use of settings larger than 1 GB (Robert Abraham, Teodor Sigaev) @@ -5094,7 +5094,7 @@ and many others in the same vein --> Improve handling of dead index tuples in GiST indexes (Anastasia Lubennikova) + linkend="gist">GiST indexes (Anastasia Lubennikova) @@ -5110,7 +5110,7 @@ and many others in the same vein 2016-03-30 [acdf2a8b3] Introduce SP-GiST operator class over box. --> - Add an SP-GiST operator class for + Add an SP-GiST operator class for type box (Alexander Lebedev) @@ -7228,8 +7228,8 @@ This commit is also listed under psql and PL/pgSQL --> Add a option - to pg_dump - and pg_restore + to pg_dump + and pg_restore (Pavel Stehule) @@ -7292,7 +7292,7 @@ This commit is also listed under psql and PL/pgSQL - <xref linkend="APP-PSQL"> + <xref linkend="app-psql"> diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index 095bf6459c..819f2a8294 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -29,8 +29,8 @@ execution. It is very powerful, and can be used for many things such as query language procedures, views, and versions. The theoretical foundations and the power of this rule system are - also discussed in and . + also discussed in and . diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml index 3f2d31b4c0..1b5f654a1b 100644 --- a/doc/src/sgml/spgist.sgml +++ b/doc/src/sgml/spgist.sgml @@ -1,6 +1,6 @@ - + SP-GiST Indexes diff --git a/doc/src/sgml/start.sgml b/doc/src/sgml/start.sgml index 7a61b50579..4a6f746d20 100644 --- a/doc/src/sgml/start.sgml +++ b/doc/src/sgml/start.sgml @@ -268,7 +268,7 @@ createdb: database creation failed: ERROR: permission denied to create database More about createdb and dropdb can - be found in and + be found in and respectively. 
diff --git a/doc/src/sgml/tablefunc.sgml b/doc/src/sgml/tablefunc.sgml index 7cfae4d316..6f1d3df34d 100644 --- a/doc/src/sgml/tablefunc.sgml +++ b/doc/src/sgml/tablefunc.sgml @@ -295,7 +295,7 @@ AS ct(row_name text, category_1 text, category_2 text, category_3 text); - See also the \crosstabview + See also the \crosstabview command in psql, which provides functionality similar to crosstab(). diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index b6f33037ff..d9fccaa17c 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -3277,10 +3277,10 @@ if (!ptr) - + Using C++ for Extensibility - + C++ diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index 520eab8e99..dce68dd4ac 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -470,7 +470,7 @@ GiST indexes have nine support functions, two of which are optional, as shown in . - (For more information see .) + (For more information see .)
@@ -542,7 +542,7 @@ SP-GiST indexes require five support functions, as shown in . - (For more information see .) + (For more information see .)
@@ -590,7 +590,7 @@ GIN indexes have six support functions, three of which are optional, as shown in . - (For more information see .) + (For more information see .)
From 7c981590c2e8149a88f6b53829770e2277336879 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 21 Oct 2017 12:25:31 -0400 Subject: [PATCH 0418/1087] Convert another SGML ID to lower case The mostly automated conversion in 1ff01b3902cbf5b22d1a439014202499c21b2994 missed this one because of the unusual whitespace. --- doc/src/sgml/release-9.0.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/release-9.0.sgml b/doc/src/sgml/release-9.0.sgml index 7a8db62b8d..d5b3239c30 100644 --- a/doc/src/sgml/release-9.0.sgml +++ b/doc/src/sgml/release-9.0.sgml @@ -7853,7 +7853,7 @@ - EXPLAIN enhancements. + EXPLAIN enhancements. The output is now available in JSON, XML, or YAML format, and includes buffer utilization and other data not previously available. From 471d55859c11b40059aef7dd82f82b3a0dc338b1 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 22 Oct 2017 16:45:16 -0400 Subject: [PATCH 0419/1087] Adjust psql \d query to avoid use of @> operator. It seems that the parray_gin extension has seen fit to introduce a "text[] @> text[]" operator, which conflicts with the core "anyarray @> anyarray" operator, causing ambiguous-operator failures if the input arguments are coercible to text[] without being exactly that type. This strikes me as a bad idea, but it's out there and people use it. As of v10, that breaks psql's query that tries to test "pg_statistic_ext.stxkind @> '{d}'", since stxkind is char[]. The best workaround seems to be to avoid use of that operator. We can use a scalar-vs-array test "'d' = any(stxkind)" instead; that's arguably more readable anyway. Per report from Justin Pryzby. Backpatch to v10 where this query was added. Discussion: https://postgr.es/m/20171022181525.GA21884@telsasoft.com --- src/bin/psql/describe.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index 06885715a6..638275ca2f 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -2423,8 +2423,8 @@ describeOneTableDetails(const char *schemaname, " FROM pg_catalog.unnest(stxkeys) s(attnum)\n" " JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND\n" " a.attnum = s.attnum AND NOT attisdropped)) AS columns,\n" - " (stxkind @> '{d}') AS ndist_enabled,\n" - " (stxkind @> '{f}') AS deps_enabled\n" + " 'd' = any(stxkind) AS ndist_enabled,\n" + " 'f' = any(stxkind) AS deps_enabled\n" "FROM pg_catalog.pg_statistic_ext stat " "WHERE stxrelid = '%s'\n" "ORDER BY 1;", From f3ea3e3e820b6b7512e48660bf984603418d53ff Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 23 Oct 2017 13:57:45 -0400 Subject: [PATCH 0420/1087] Fix some oversights in expression dependency recording. find_expr_references() neglected to record a dependency on the result type of a FieldSelect node, allowing a DROP TYPE to break a view or rule that contains such an expression. I think we'd omitted this case intentionally, reasoning that there would always be a related dependency ensuring that the DROP would cascade to the view. But at least with nested field selection expressions, that's not true, as shown in bug #14867 from Mansur Galiev. Add the dependency, and for good measure a dependency on the node's exposed collation. Likewise add a dependency on the result type of a FieldStore. I think here the reasoning was that it'd only appear within an assignment to a field, and the dependency on the field's column would be enough ... but having seen this example, I think that's wrong for nested-composites cases. 
Looking at nearby code, I notice we're not recording a dependency on the exposed collation of CoerceViaIO, which seems inconsistent with our choices for related node types. Maybe that's OK but I'm feeling suspicious of this code today, so let's add that; it certainly can't hurt. This patch does not do anything to protect already-existing views, only views created after it's installed. But seeing that the issue has been there a very long time and nobody noticed till now, that's probably good enough. Back-patch to all supported branches. Discussion: https://postgr.es/m/20171023150118.1477.19174@wrigleys.postgresql.org --- src/backend/catalog/dependency.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c index 2668650f27..3b214e5702 100644 --- a/src/backend/catalog/dependency.c +++ b/src/backend/catalog/dependency.c @@ -1713,6 +1713,27 @@ find_expr_references_walker(Node *node, /* Extra work needed here if we ever need this case */ elog(ERROR, "already-planned subqueries not supported"); } + else if (IsA(node, FieldSelect)) + { + FieldSelect *fselect = (FieldSelect *) node; + + /* result type might not appear anywhere else in expression */ + add_object_address(OCLASS_TYPE, fselect->resulttype, 0, + context->addrs); + /* the collation might not be referenced anywhere else, either */ + if (OidIsValid(fselect->resultcollid) && + fselect->resultcollid != DEFAULT_COLLATION_OID) + add_object_address(OCLASS_COLLATION, fselect->resultcollid, 0, + context->addrs); + } + else if (IsA(node, FieldStore)) + { + FieldStore *fstore = (FieldStore *) node; + + /* result type might not appear anywhere else in expression */ + add_object_address(OCLASS_TYPE, fstore->resulttype, 0, + context->addrs); + } else if (IsA(node, RelabelType)) { RelabelType *relab = (RelabelType *) node; @@ -1733,6 +1754,11 @@ find_expr_references_walker(Node *node, /* since there is no exposed function, need to depend on type */ add_object_address(OCLASS_TYPE, iocoerce->resulttype, 0, context->addrs); + /* the collation might not be referenced anywhere else, either */ + if (OidIsValid(iocoerce->resultcollid) && + iocoerce->resultcollid != DEFAULT_COLLATION_OID) + add_object_address(OCLASS_COLLATION, iocoerce->resultcollid, 0, + context->addrs); } else if (IsA(node, ArrayCoerceExpr)) { From 24a1897ab92646795bf065aa1b9d266aba74469f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 23 Oct 2017 17:54:09 -0400 Subject: [PATCH 0421/1087] Sync our copy of the timezone library with IANA release tzcode2017c. This is a trivial update containing only cosmetic changes. The point is just to get back to being synced with an official release of tzcode, rather than some ad-hoc point in their commit history, which is where commit 47f849a3c left it. --- src/timezone/README | 2 +- src/timezone/localtime.c | 4 ++-- src/timezone/strftime.c | 15 +++++++-------- src/timezone/tzfile.h | 4 ++-- src/timezone/zic.c | 14 ++++++++------ 5 files changed, 20 insertions(+), 19 deletions(-) diff --git a/src/timezone/README b/src/timezone/README index 912e0c1b39..fc93e940f9 100644 --- a/src/timezone/README +++ b/src/timezone/README @@ -50,7 +50,7 @@ match properly on the old version. Time Zone code ============== -The code in this directory is currently synced with tzcode release 2017b. +The code in this directory is currently synced with tzcode release 2017c. 
There are many cosmetic (and not so cosmetic) differences from the original tzcode library, but diffs in the upstream version should usually be propagated to our version. Here are some notes about that. diff --git a/src/timezone/localtime.c b/src/timezone/localtime.c index d946e882aa..2b5b3a924f 100644 --- a/src/timezone/localtime.c +++ b/src/timezone/localtime.c @@ -802,8 +802,8 @@ transtime(int year, const struct rule *rulep, { bool leapyear; int32 value; - int i, - d, + int i; + int d, m1, yy0, yy1, diff --git a/src/timezone/strftime.c b/src/timezone/strftime.c index e1c6483443..bb638c81a4 100644 --- a/src/timezone/strftime.c +++ b/src/timezone/strftime.c @@ -119,8 +119,7 @@ static char *_yconv(int, int, bool, bool, char *, const char *); size_t -pg_strftime(char *s, size_t maxsize, const char *format, - const struct pg_tm *t) +pg_strftime(char *s, size_t maxsize, const char *format, const struct pg_tm *t) { char *p; enum warn warn = IN_NONE; @@ -228,9 +227,9 @@ _fmt(const char *format, const struct pg_tm *t, char *pt, case 'k': /* - * This used to be... _conv(t->tm_hour % 12 ? t->tm_hour - * % 12 : 12, 2, ' '); ...and has been changed to the - * below to match SunOS 4.1.1 and Arnold Robbins' strftime + * This used to be... _conv(t->tm_hour % 12 ? t->tm_hour % + * 12 : 12, 2, ' '); ...and has been changed to the below + * to match SunOS 4.1.1 and Arnold Robbins' strftime * version 3.0. That is, "%k" and "%l" have been swapped. * (ado, 1993-05-24) */ @@ -248,7 +247,7 @@ _fmt(const char *format, const struct pg_tm *t, char *pt, case 'l': /* - * This used to be... _conv(t->tm_hour, 2, ' '); ...and + * This used to be... _conv(t->tm_hour, 2, ' '); ...and * has been changed to the below to match SunOS 4.1.1 and * Arnold Robbin's strftime version 3.0. That is, "%k" and * "%l" have been swapped. (ado, 1993-05-24) @@ -312,7 +311,7 @@ _fmt(const char *format, const struct pg_tm *t, char *pt, * (01-53)." * (ado, 1993-05-24) * - * From by Markus Kuhn: + * From by Markus Kuhn: * "Week 01 of a year is per definition the first week which has the * Thursday in this year, which is equivalent to the week which contains * the fourth day of January. In other words, the first week of a new year @@ -482,7 +481,7 @@ _fmt(const char *format, const struct pg_tm *t, char *pt, /* * X311J/88-090 (4.12.3.5): if conversion char is - * undefined, behavior is undefined. Print out the + * undefined, behavior is undefined. Print out the * character itself as printf(3) also does. 
*/ default: diff --git a/src/timezone/tzfile.h b/src/timezone/tzfile.h index 2843833e49..25ca307403 100644 --- a/src/timezone/tzfile.h +++ b/src/timezone/tzfile.h @@ -50,8 +50,8 @@ struct tzhead * tzh_timecnt (unsigned char)s types of local time starting at above * tzh_typecnt repetitions of * one (char [4]) coded UT offset in seconds - * one (unsigned char) used to set tm_isdst - * one (unsigned char) that's an abbreviation list index + * one (unsigned char) used to set tm_isdst + * one (unsigned char) that's an abbreviation list index * tzh_charcnt (char)s '\0'-terminated zone abbreviations * tzh_leapcnt repetitions of * one (char [4]) coded leap second transition times diff --git a/src/timezone/zic.c b/src/timezone/zic.c index db119265c3..352e719329 100644 --- a/src/timezone/zic.c +++ b/src/timezone/zic.c @@ -17,15 +17,15 @@ #include "private.h" #include "tzfile.h" -#define ZIC_VERSION_PRE_2013 '2' -#define ZIC_VERSION '3' +#define ZIC_VERSION_PRE_2013 '2' +#define ZIC_VERSION '3' typedef int64 zic_t; #define ZIC_MIN PG_INT64_MIN #define ZIC_MAX PG_INT64_MAX #ifndef ZIC_MAX_ABBR_LEN_WO_WARN -#define ZIC_MAX_ABBR_LEN_WO_WARN 6 +#define ZIC_MAX_ABBR_LEN_WO_WARN 6 #endif /* !defined ZIC_MAX_ABBR_LEN_WO_WARN */ #ifndef WIN32 @@ -473,7 +473,7 @@ static void verror(const char *string, va_list args) { /* - * Match the format of "cc" to allow sh users to zic ... 2>&1 | error -t + * Match the format of "cc" to allow sh users to zic ... 2>&1 | error -t * "*" -v on BSD systems. */ if (filename) @@ -969,7 +969,7 @@ dolink(char const *fromfield, char const *tofield, bool staysymlink) } } -#define TIME_T_BITS_IN_FILE 64 +#define TIME_T_BITS_IN_FILE 64 static zic_t const min_time = MINVAL(zic_t, TIME_T_BITS_IN_FILE); static zic_t const max_time = MAXVAL(zic_t, TIME_T_BITS_IN_FILE); @@ -984,7 +984,7 @@ static zic_t const max_time = MAXVAL(zic_t, TIME_T_BITS_IN_FILE); * Ade PAR, Aghanim N, Armitage-Caplan C et al. Planck 2013 results. * I. Overview of products and scientific results. * arXiv:1303.5062 2013-03-20 20:10:01 UTC - * [PDF] + * [PDF] * * Page 36, Table 9, row Age/Gyr, column Planck+WP+highL+BAO 68% limits * gives the value 13.798 plus-or-minus 0.037 billion years. @@ -1208,7 +1208,9 @@ infile(const char *name) /* nothing to do */ } else if (wantcont) + { wantcont = inzcont(fields, nfields); + } else { struct lookup const *line_codes From 8df4ce1eac7835d87d89a4fc4d5d3ae5554f87b7 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 23 Oct 2017 18:15:36 -0400 Subject: [PATCH 0422/1087] Update time zone data files to tzdata release 2017c. DST law changes in Fiji, Namibia, Northern Cyprus, Sudan, Tonga, and Turks & Caicos Islands. Historical corrections for Alaska, Apia, Burma, Calcutta, Detroit, Ireland, Namibia, and Pago Pago. --- src/timezone/data/africa | 99 ++++++++++++------- src/timezone/data/antarctica | 8 +- src/timezone/data/asia | 144 ++++++++++++++++++--------- src/timezone/data/australasia | 67 ++++++++----- src/timezone/data/backward | 4 +- src/timezone/data/backzone | 18 ++-- src/timezone/data/europe | 88 +++++++++-------- src/timezone/data/northamerica | 171 +++++++++++++++++++++------------ src/timezone/data/southamerica | 32 +++--- 9 files changed, 395 insertions(+), 236 deletions(-) diff --git a/src/timezone/data/africa b/src/timezone/data/africa index dcc20b9b1c..3a60bc27d0 100644 --- a/src/timezone/data/africa +++ b/src/timezone/data/africa @@ -26,7 +26,7 @@ # # For data circa 1899, a common source is: # Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94. 
-# http://www.jstor.org/stable/1774359 +# https://www.jstor.org/stable/1774359 # # A reliable and entertaining source about time zones is # Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997). @@ -218,7 +218,7 @@ Rule Egypt 2006 only - Sep 21 24:00 0 - # saving time in Egypt will end in the night of 2007-09-06 to 2007-09-07. # From Jesper Nørgaard Welen (2007-08-15): [The following agree:] # http://www.nentjes.info/Bill/bill5.htm -# http://www.timeanddate.com/worldclock/city.html?n=53 +# https://www.timeanddate.com/worldclock/city.html?n=53 # From Steffen Thorsen (2007-09-04): The official information...: # http://www.sis.gov.eg/En/EgyptOnline/Miscellaneous/000002/0207000000000000001580.htm Rule Egypt 2007 only - Sep Thu>=1 24:00 0 - @@ -256,8 +256,8 @@ Rule Egypt 2007 only - Sep Thu>=1 24:00 0 - # timeanddate[2] and another site I've found[3] also support that. # # [1] https://bugzilla.redhat.com/show_bug.cgi?id=492263 -# [2] http://www.timeanddate.com/worldclock/clockchange.html?n=53 -# [3] http://wwp.greenwichmeantime.com/time-zone/africa/egypt/ +# [2] https://www.timeanddate.com/worldclock/clockchange.html?n=53 +# [3] https://wwp.greenwichmeantime.com/time-zone/africa/egypt/ # From Arthur David Olson (2009-04-20): # In 2009 (and for the next several years), Ramadan ends before the fourth @@ -267,10 +267,10 @@ Rule Egypt 2007 only - Sep Thu>=1 24:00 0 - # From Steffen Thorsen (2009-08-11): # We have been able to confirm the August change with the Egyptian Cabinet # Information and Decision Support Center: -# http://www.timeanddate.com/news/time/egypt-dst-ends-2009.html +# https://www.timeanddate.com/news/time/egypt-dst-ends-2009.html # # The Middle East News Agency -# http://www.mena.org.eg/index.aspx +# https://www.mena.org.eg/index.aspx # also reports "Egypt starts winter time on August 21" # today in article numbered "71, 11/08/2009 12:25 GMT." # Only the title above is available without a subscription to their service, @@ -320,7 +320,7 @@ Rule Egypt 2007 only - Sep Thu>=1 24:00 0 - # Thursday of April.... Clocks will still be turned back for Ramadan, but # dates not yet announced.... # http://almogaz.com/news/weird-news/2015/04/05/1947105 ... -# http://www.timeanddate.com/news/time/egypt-starts-dst-2015.html +# https://www.timeanddate.com/news/time/egypt-starts-dst-2015.html # From Ahmed Nazmy (2015-04-20): # Egypt's ministers cabinet just announced ... that it will cancel DST at @@ -447,11 +447,11 @@ Zone Africa/Monrovia -0:43:08 - LMT 1882 # From Even Scharning (2012-11-10): # Libya set their time one hour back at 02:00 on Saturday November 10. -# http://www.libyaherald.com/2012/11/04/clocks-to-go-back-an-hour-on-saturday/ +# https://www.libyaherald.com/2012/11/04/clocks-to-go-back-an-hour-on-saturday/ # Here is an official source [in Arabic]: http://ls.ly/fb6Yc # # Steffen Thorsen forwarded a translation (2012-11-10) in -# http://mm.icann.org/pipermail/tz/2012-November/018451.html +# https://mm.icann.org/pipermail/tz/2012-November/018451.html # # From Tim Parenti (2012-11-11): # Treat the 2012-11-10 change as a zone change from UTC+2 to UTC+1. @@ -462,7 +462,7 @@ Zone Africa/Monrovia -0:43:08 - LMT 1882 # From Even Scharning (2013-10-25): # The scheduled end of DST in Libya on Friday, October 25, 2013 was # cancelled yesterday.... 
-# http://www.libyaherald.com/2013/10/24/correction-no-time-change-tomorrow/ +# https://www.libyaherald.com/2013/10/24/correction-no-time-change-tomorrow/ # # From Paul Eggert (2013-10-25): # For now, assume they're reverting to the pre-2012 rules of permanent UT +02. @@ -515,7 +515,7 @@ Zone Africa/Tripoli 0:52:44 - LMT 1920 # basis.... # It seems that Mauritius observed daylight saving time from 1982-10-10 to # 1983-03-20 as well, but that was not successful.... -# http://www.timeanddate.com/news/time/mauritius-daylight-saving-time.html +# https://www.timeanddate.com/news/time/mauritius-daylight-saving-time.html # From Alex Krivenyshev (2008-06-25): # http://economicdevelopment.gov.mu/portal/site/Mainhomepage/menuitem.a42b24128104d9845dabddd154508a0c/?content_id=0a7cee8b5d69a110VgnVCM1000000a04a8c0RCRD @@ -583,7 +583,7 @@ Zone Africa/Tripoli 0:52:44 - LMT 1920 # http://lexpress.mu/Story/3398~Beebeejaun---Les-objectifs-d-%C3%A9conomie-d-%C3%A9nergie-de-l-heure-d-%C3%A9t%C3%A9-ont-%C3%A9t%C3%A9-atteints- # # Our wrap-up: -# http://www.timeanddate.com/news/time/mauritius-dst-will-not-repeat.html +# https://www.timeanddate.com/news/time/mauritius-dst-will-not-repeat.html # From Arthur David Olson (2009-07-11): # The "mauritius-dst-will-not-repeat" wrapup includes this: @@ -615,7 +615,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # be one hour ahead of GMT between 1 June and 27 September, according to # Communication Minister and Government Spokesman, Khalid Naciri...." # -# http://www.worldtimezone.net/dst_news/dst_news_morocco01.html +# http://www.worldtimezone.com/dst_news/dst_news_morocco01.html # http://en.afrik.com/news11892.html # From Alex Krivenyshev (2008-05-09): @@ -628,7 +628,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # From Patrice Scattolin (2008-05-09): # According to this article: -# http://www.avmaroc.com/actualite/heure-dete-comment-a127896.html +# https://www.avmaroc.com/actualite/heure-dete-comment-a127896.html # (and republished here: ) # the changes occur at midnight: # @@ -650,7 +650,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # posted in English). 
# # The following Google query will generate many relevant hits: -# http://www.google.com/search?hl=en&q=Conseil+de+gouvernement+maroc+heure+avance&btnG=Search +# https://www.google.com/search?hl=en&q=Conseil+de+gouvernement+maroc+heure+avance&btnG=Search # From Steffen Thorsen (2008-08-27): # Morocco will change the clocks back on the midnight between August 31 @@ -661,7 +661,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # http://www.menara.ma/fr/Actualites/Maroc/Societe/ci.retour_a_l_heure_gmt_a_partir_du_dimanche_31_aout_a_minuit_officiel_.default # # We have some further details posted here: -# http://www.timeanddate.com/news/time/morocco-ends-dst-early-2008.html +# https://www.timeanddate.com/news/time/morocco-ends-dst-early-2008.html # From Steffen Thorsen (2009-03-17): # Morocco will observe DST from 2009-06-01 00:00 to 2009-08-21 00:00 according @@ -671,7 +671,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # (French) # # Our summary: -# http://www.timeanddate.com/news/time/morocco-starts-dst-2009.html +# https://www.timeanddate.com/news/time/morocco-starts-dst-2009.html # From Alexander Krivenyshev (2009-03-17): # Here is a link to official document from Royaume du Maroc Premier Ministre, @@ -694,7 +694,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # http://www.lavieeco.com/actualites/4099-le-maroc-passera-a-l-heure-d-ete-gmt1-le-2-mai.html # (French) # Our page: -# http://www.timeanddate.com/news/time/morocco-starts-dst-2010.html +# https://www.timeanddate.com/news/time/morocco-starts-dst-2010.html # From Dan Abitol (2011-03-30): # ...Rules for Africa/Casablanca are the following (24h format) @@ -711,7 +711,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # They said that the decision was already taken. # # More articles in the press -# http://www.yabiladi.com/articles/details/5058/secret-l-heure-d-ete-maroc-leve.html +# https://www.yabiladi.com/articles/details/5058/secret-l-heure-d-ete-maroc-leve.html # http://www.lematin.ma/Actualite/Express/Article.asp?id=148923 # http://www.lavieeco.com/actualite/Le-Maroc-passe-sur-GMT%2B1-a-partir-de-dim @@ -803,7 +803,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # 1433 (18 April 2012) and the decision of the Head of Government of # 16 N. 3-29-15 Chaaban 1435 (4 June 2015). # Source (french): -# http://lnt.ma/le-maroc-reculera-dune-heure-le-dimanche-14-juin/ +# https://lnt.ma/le-maroc-reculera-dune-heure-le-dimanche-14-juin/ # # From Milamber (2015-06-09): # http://www.mmsp.gov.ma/fr/actualites.aspx?id=863 @@ -812,7 +812,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis # [The gov.ma announcement] would (probably) make the switch on 2015-07-19 go # from 03:00 to 04:00 rather than from 02:00 to 03:00, as in the patch.... # I think the patch is correct and the quoted text is wrong; the text in -# agrees +# agrees # with the patch. # From Paul Eggert (2015-06-08): @@ -937,9 +937,17 @@ Link Africa/Maputo Africa/Kigali # Rwanda Link Africa/Maputo Africa/Lubumbashi # E Dem. Rep. of Congo Link Africa/Maputo Africa/Lusaka # Zambia + # Namibia -# The 1994-04-03 transition is from Shanks & Pottenger. -# Shanks & Pottenger report no DST after 1998-04; go with IATA. 
+ +# From Arthur David Olson (2017-08-09): +# The text of the "Namibia Time Act, 1994" is available online at +# www.lac.org.na/laws/1994/811.pdf +# and includes this nugget: +# Notwithstanding the provisions of subsection (2) of section 1, the +# first winter period after the commencement of this Act shall +# commence at OOhOO on Monday 21 March 1994 and shall end at 02h00 on +# Sunday 4 September 1994. # From Petronella Sibeene (2007-03-30): # http://allafrica.com/stories/200703300178.html @@ -955,19 +963,30 @@ Link Africa/Maputo Africa/Lusaka # Zambia # observes Botswana time, we have no details about historical practice. # In the meantime people there can use Africa/Gaborone. # See: Immanuel S. The Namibian. 2017-02-23. -# http://www.namibian.com.na/51480/read/Time-change-divides-lawmakers +# https://www.namibian.com.na/51480/read/Time-change-divides-lawmakers + +# From Steffen Thorsen (2017-08-09): +# Namibia is going to change their time zone to what is now their DST: +# https://www.newera.com.na/2017/02/23/namibias-winter-time-might-be-repealed/ +# This video is from the government decision: +# https://www.nbc.na/news/na-passes-namibia-time-bill-repealing-1994-namibia-time-act.8665 +# We have made the assumption so far that they will change their time zone at +# the same time they would normally start DST, the first Sunday in September: +# https://www.timeanddate.com/news/time/namibia-new-time-zone.html # RULE NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Namibia 1994 max - Sep Sun>=1 2:00 1:00 S -Rule Namibia 1995 max - Apr Sun>=1 2:00 0 - +Rule Namibia 1994 only - Mar 21 0:00 0 - +Rule Namibia 1994 2016 - Sep Sun>=1 2:00 1:00 S +Rule Namibia 1995 2017 - Apr Sun>=1 2:00 0 - # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone Africa/Windhoek 1:08:24 - LMT 1892 Feb 8 1:30 - +0130 1903 Mar 2:00 - SAST 1942 Sep 20 2:00 2:00 1:00 SAST 1943 Mar 21 2:00 2:00 - SAST 1990 Mar 21 # independence - 2:00 - CAT 1994 Apr 3 - 1:00 Namibia WA%sT + 2:00 - CAT 1994 Mar 21 0:00 + 1:00 Namibia WA%sT 2017 Sep 3 2:00 + 2:00 - CAT # Niger # See Africa/Lagos. @@ -1054,14 +1073,24 @@ Link Africa/Johannesburg Africa/Mbabane # Swaziland # no information # Sudan -# + # From # Sudan News Agency (2000-01-13), # also reported by Michaël De Beukelaer-Dossche via Steffen Thorsen: # Clocks will be moved ahead for 60 minutes all over the Sudan as of noon # Saturday.... This was announced Thursday by Caretaker State Minister for # Manpower Abdul-Rahman Nur-Eddin. + +# From Ahmed Atyya, National Telecommunications Corp. (NTC), Sudan (2017-10-17): +# ... the Republic of Sudan is going to change the time zone from (GMT+3:00) +# to (GMT+ 2:00) starting from Wednesday 1 November 2017. # +# From Paul Eggert (2017-10-18): +# A scanned copy (in Arabic) of Cabinet Resolution No. 352 for the +# year 2017 can be found as an attachment in email today from Yahia +# Abdalla of NTC, archived at: +# https://mm.icann.org/pipermail/tz/2017-October/025333.html + # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S Rule Sudan 1970 only - May 1 0:00 1:00 S Rule Sudan 1970 1985 - Oct 15 0:00 0 - @@ -1070,10 +1099,14 @@ Rule Sudan 1972 1985 - Apr lastSun 0:00 1:00 S # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone Africa/Khartoum 2:10:08 - LMT 1931 2:00 Sudan CA%sT 2000 Jan 15 12:00 - 3:00 - EAT + 3:00 - EAT 2017 Nov 1 + 2:00 - CAT # South Sudan -Link Africa/Khartoum Africa/Juba +# Zone NAME GMTOFF RULES FORMAT [UNTIL] +Zone Africa/Juba 2:06:28 - LMT 1931 + 2:00 Sudan CA%sT 2000 Jan 15 12:00 + 3:00 - EAT # Swaziland # See Africa/Johannesburg. 
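# A note on reading the Zone entries above, since this update reshapes
# several of them (Windhoek, Khartoum, Juba): in zic input, each Zone
# continuation line remains in force until its optional UNTIL column,
# when the next line takes over; omitted UNTIL components default to
# January 1, 00:00 wall-clock time.  The new Khartoum entry thus says
# "keep +03 (EAT) until 2017 Nov 1 00:00, then +02 (CAT) from then on".
# A minimal sketch of the same mechanism, with an invented zone name
# used purely for illustration:
#
# Zone	Example/Nowhere	2:10:08	-	LMT	1931	# local mean time until 1931
#			3:00	-	EAT	2017 Nov 1	# +03 until 2017-11-01 00:00
#			2:00	-	CAT	# +02 from 2017-11-01 onward, no UNTIL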
@@ -1111,11 +1144,11 @@ Link Africa/Khartoum Africa/Juba # According to several news sources, Tunisia will not observe DST this year. # (Arabic) # http://www.elbashayer.com/?page=viewn&nid=42546 -# http://www.babnet.net/kiwidetail-15295.asp +# https://www.babnet.net/kiwidetail-15295.asp # # We have also confirmed this with the US embassy in Tunisia. # We have a wrap-up about this on the following page: -# http://www.timeanddate.com/news/time/tunisia-cancels-dst-2009.html +# https://www.timeanddate.com/news/time/tunisia-cancels-dst-2009.html # From Alexander Krivenyshev (2009-03-17): # Here is a link to Tunis Afrique Presse News Agency diff --git a/src/timezone/data/antarctica b/src/timezone/data/antarctica index 3332d66842..d9c132a30f 100644 --- a/src/timezone/data/antarctica +++ b/src/timezone/data/antarctica @@ -26,7 +26,7 @@ # Heard Island, McDonald Islands (uninhabited) # previously sealers and scientific personnel wintered # Margaret Turner reports -# http://web.archive.org/web/20021204222245/http://www.dstc.qut.edu.au/DST/marg/daylight.html +# https://web.archive.org/web/20021204222245/http://www.dstc.qut.edu.au/DST/marg/daylight.html # (1999-09-30) that they're UT +05, with no DST; # presumably this is when they have visitors. # @@ -47,7 +47,7 @@ # http://www.aad.gov.au/default.asp?casid=37079 # # We have more background information here: -# http://www.timeanddate.com/news/time/antarctica-new-times.html +# https://www.timeanddate.com/news/time/antarctica-new-times.html # From Steffen Thorsen (2010-03-10): # We got these changes from the Australian Antarctic Division: ... @@ -62,7 +62,7 @@ # - Mawson station stays on UTC+5. # # Background: -# http://www.timeanddate.com/news/time/antartica-time-changes-2010.html +# https://www.timeanddate.com/news/time/antartica-time-changes-2010.html # From Steffen Thorsen (2016-10-28): # Australian Antarctica Division informed us that Casey changed time @@ -145,7 +145,7 @@ Zone Indian/Kerguelen 0 - -00 1950 # Port-aux-Français # # year-round base in the main continent # Dumont d'Urville, Île des Pétrels, -6640+14001, since 1956-11 -# (2005-12-05) +# (2005-12-05) # # Another base at Port-Martin, 50km east, began operation in 1947. # It was destroyed by fire on 1952-01-14. diff --git a/src/timezone/data/asia b/src/timezone/data/asia index 35774c6d7e..ac39af351e 100644 --- a/src/timezone/data/asia +++ b/src/timezone/data/asia @@ -26,7 +26,7 @@ # # For data circa 1899, a common source is: # Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94. -# http://www.jstor.org/stable/1774359 +# https://www.jstor.org/stable/1774359 # # For Russian data circa 1919, a source is: # Byalokoz EL. New Counting of Time in Russia since July 1, 1919. 
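# Since several hunks below rewrite Rule lines, a reminder of their
# anatomy: the columns are NAME FROM TO TYPE IN ON AT SAVE LETTER/S.
# FROM and TO bound the years covered ("only" and "max" are the usual
# shorthands), IN/ON/AT select the transition instant - ON accepts a
# plain day of the month, "lastSun", or a form like "Sun>=14" meaning
# the first Sunday on or after the 14th - SAVE is the amount added to
# standard time, and LETTER/S supplies the %s substitution in a Zone's
# FORMAT column.  An invented pair of rules, for illustration only:
#
# Rule	Example	2015	max	-	Nov	Sun>=1	2:00	1:00	S
# Rule	Example	2016	max	-	Jan	Sun>=14	3:00	0	-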
@@ -75,8 +75,8 @@ Rule E-EurAsia 1996 max - Oct lastSun 0:00 0 - Rule RussiaAsia 1981 1984 - Apr 1 0:00 1:00 S Rule RussiaAsia 1981 1983 - Oct 1 0:00 0 - Rule RussiaAsia 1984 1995 - Sep lastSun 2:00s 0 - -Rule RussiaAsia 1985 2011 - Mar lastSun 2:00s 1:00 S -Rule RussiaAsia 1996 2011 - Oct lastSun 2:00s 0 - +Rule RussiaAsia 1985 2010 - Mar lastSun 2:00s 1:00 S +Rule RussiaAsia 1996 2010 - Oct lastSun 2:00s 0 - # Afghanistan # Zone NAME GMTOFF RULES FORMAT [UNTIL] @@ -109,13 +109,17 @@ Zone Asia/Kabul 4:36:48 - LMT 1890 # or # (brief) # http://www.worldtimezone.com/dst_news/dst_news_armenia03.html +# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S +Rule Armenia 2011 only - Mar lastSun 2:00s 1:00 S +Rule Armenia 2011 only - Oct lastSun 2:00s 0 - # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone Asia/Yerevan 2:58:00 - LMT 1924 May 2 3:00 - +03 1957 Mar 4:00 RussiaAsia +04/+05 1991 Mar 31 2:00s 3:00 RussiaAsia +03/+04 1995 Sep 24 2:00s 4:00 - +04 1997 - 4:00 RussiaAsia +04/+05 + 4:00 RussiaAsia +04/+05 2011 + 4:00 Armenia +04/+05 # Azerbaijan @@ -127,7 +131,7 @@ Zone Asia/Yerevan 2:58:00 - LMT 1924 May 2 # From Steffen Thorsen (2016-03-17): # ... the Azerbaijani Cabinet of Ministers has cancelled switching to # daylight saving time.... -# http://www.azernews.az/azerbaijan/94137.html +# https://www.azernews.az/azerbaijan/94137.html # http://vestnikkavkaza.net/news/Azerbaijani-Cabinet-of-Ministers-cancels-daylight-saving-time.html # http://en.apa.az/xeber_azerbaijan_abolishes_daylight_savings_ti_240862.html @@ -168,11 +172,11 @@ Zone Asia/Baku 3:19:24 - LMT 1924 May 2 # the 19th and 20th, and they have not set the end date yet. # # Some sources: -# http://in.reuters.com/article/southAsiaNews/idINIndia-40017620090601 +# https://in.reuters.com/article/southAsiaNews/idINIndia-40017620090601 # http://bdnews24.com/details.php?id=85889&cid=2 # # Our wrap-up: -# http://www.timeanddate.com/news/time/bangladesh-daylight-saving-2009.html +# https://www.timeanddate.com/news/time/bangladesh-daylight-saving-2009.html # From A. N. M. Kamrus Saadat (2009-06-15): # Finally we've got the official mail regarding DST start time where DST start @@ -258,9 +262,15 @@ Zone Asia/Brunei 7:39:40 - LMT 1926 Mar # Bandar Seri Begawan # Milne says 6:24:40 was the meridian of the time ball observatory at Rangoon. +# From Paul Eggert (2017-04-20): +# Page 27 of Reed & Low (cited for Asia/Kolkata) says "Rangoon local time is +# used upon the railways and telegraphs of Burma, and is 6h. 24m. 47s. ahead +# of Greenwich." This refers to the period before Burma's transition to +0630, +# a transition for which Shanks is the only source. + # Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Yangon 6:24:40 - LMT 1880 # or Rangoon - 6:24:40 - RMT 1920 # Rangoon Mean Time? +Zone Asia/Yangon 6:24:47 - LMT 1880 # or Rangoon + 6:24:47 - RMT 1920 # Rangoon local time 6:30 - +0630 1942 May 9:00 - +09 1945 May 3 6:30 - +0630 @@ -317,7 +327,7 @@ Rule PRC 1987 1991 - Apr Sun>=10 0:00 1:00 D # # From Jesper Nørgaard Welen (2006-07-14): # I have investigated the timezones around 1970 on the -# http://www.astro.com/atlas site [with provinces and county +# https://www.astro.com/atlas site [with provinces and county # boundaries summarized below].... 
A few other exceptions were two # counties on the Sichuan side of the Xizang-Sichuan border, # counties Dege and Baiyu which lies on the Sichuan side and are @@ -469,7 +479,7 @@ Rule PRC 1987 1991 - Apr Sun>=10 0:00 1:00 D # From David Cochrane (2014-03-26): # Just a confirmation that Ürümqi time was implemented in Ürümqi on 1 Feb 1986: -# http://content.time.com/time/magazine/article/0,9171,960684,00.html +# https://content.time.com/time/magazine/article/0,9171,960684,00.html # From Luther Ma (2014-04-22): # I have interviewed numerous people of various nationalities and from @@ -626,7 +636,7 @@ Zone Asia/Hong_Kong 7:36:42 - LMT 1904 Oct 30 # (both in Okinawa) adopt the Western Standard Time which is based on # 120E. The adoption began from Jan 1, 1896. The original text can be # found on Wikisource: -# http://ja.wikisource.org/wiki/標準時ニ關スル件_(公布時) +# https://ja.wikisource.org/wiki/標準時ニ關スル件_(公布時) # ... This could be the first adoption of time zone in Taiwan, because # during the Qing Dynasty, it seems that there was no time zone # declared officially. @@ -639,7 +649,7 @@ Zone Asia/Hong_Kong 7:36:42 - LMT 1904 Oct 30 # territory, including later occupations, adopt Japan Central Time # (UTC+9). The adoption began on Oct 1, 1937. The original text can # be found on Wikisource: -# http://ja.wikisource.org/wiki/明治二十八年勅令第百六十七號標準時ニ關スル件中改正ノ件 +# https://ja.wikisource.org/wiki/明治二十八年勅令第百六十七號標準時ニ關スル件中改正ノ件 # # That is, the time zone of Taipei switched to UTC+9 on Oct 1, 1937. @@ -775,6 +785,12 @@ Zone Asia/Macau 7:34:20 - LMT 1912 Jan 1 # Looks like the time zone split in Cyprus went through last night. # http://cyprus-mail.com/2016/10/30/cyprus-new-division-two-time-zones-now-reality/ +# From Paul Eggert (2017-10-18): +# Northern Cyprus will reinstate winter time on October 29, thus +# staying in sync with the rest of Cyprus. See: Anastasiou A. +# Cyprus to remain united in time. Cyprus Mail 2017-10-17. +# https://cyprus-mail.com/2017/10/17/cyprus-remain-united-time/ + # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S Rule Cyprus 1975 only - Apr 13 0:00 1:00 S Rule Cyprus 1975 only - Oct 12 0:00 0 - @@ -792,7 +808,8 @@ Zone Asia/Nicosia 2:13:28 - LMT 1921 Nov 14 Zone Asia/Famagusta 2:15:48 - LMT 1921 Nov 14 2:00 Cyprus EE%sT 1998 Sep 2:00 EUAsia EE%sT 2016 Sep 8 - 3:00 - +03 + 3:00 - +03 2017 Oct 29 1:00u + 2:00 EUAsia EE%sT # Classically, Cyprus belongs to Asia; e.g. see Herodotus, Histories, I.72. # However, for various reasons many users expect to find it under Europe. @@ -852,7 +869,7 @@ Zone Asia/Tbilisi 2:59:11 - LMT 1880 # From João Carrascalão, brother of the former governor of East Timor, in # East Timor may be late for its millennium -# (1999-12-26/31): +# (1999-12-26/31): # Portugal tried to change the time forward in 1974 because the sun # rises too early but the suggestion raised a lot of problems with the # Timorese and I still don't think it would work today because it @@ -880,21 +897,62 @@ Zone Asia/Dili 8:22:20 - LMT 1912 Jan 1 # India # From Ian P. Beacock, in "A brief history of (modern) time", The Atlantic -# http://www.theatlantic.com/technology/archive/2015/12/the-creation-of-modern-time/421419/ +# https://www.theatlantic.com/technology/archive/2015/12/the-creation-of-modern-time/421419/ # (2015-12-22): # In January 1906, several thousand cotton-mill workers rioted on the # outskirts of Bombay.... They were protesting the proposed abolition of # local time in favor of Indian Standard Time.... Journalists called this # dispute the "Battle of the Clocks." 
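# One detail worth knowing for the rewritten Kolkata entry below: the
# RULES column of a Zone line need not name a Rule set.  "-" means
# plain standard time, and a fixed amount such as "1:00" means that
# much daylight saving is in force for the whole period.  So, quoting
# two continuation lines from the entry that follows, "5:30 1:00 +0630"
# denotes IST +05:30 plus one hour of wartime saving, displayed as
# "+0630":
#
#	5:30	1:00	+0630	1942 May 15	# +05:30 standard, +1:00 saving
#	5:30	-	IST	1942 Sep	# back to plain IST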
It lasted nearly half a century. +# From Paul Eggert (2017-04-20): +# Good luck trying to nail down old timekeeping records in India. +# "... in the nineteenth century ... Madras Observatory took its magnetic +# measurements on Göttingen time, its meteorological measurements on Madras +# (local) time, dropped its time ball on Greenwich (ocean navigator's) time, +# and distributed civil (local time)." -- Bartky IR. Selling the true time: +# 19th-century timekeeping in america. Stanford U Press (2000), 247 note 19. +# "A more potent cause of resistance to the general adoption of the present +# standard time lies in the fact that it is Madras time. The citizen of +# Bombay, proud of being 'primus in Indis' and of Calcutta, equally proud of +# his city being the Capital of India, and - for a part of the year - the Seat +# of the Supreme Government, alike look down on Madras, and refuse to change +# the time they are using, for that of what they regard as a benighted +# Presidency; while Madras, having for long given the standard time to the +# rest of India, would resist the adoption of any other Indian standard in its +# place." -- Oldham RD. On Time in India: a suggestion for its improvement. +# Proceedings of the Asiatic Society of Bengal (April 1899), 49-55. +# +# "In 1870 ... Madras time - 'now used by the telegraph and regulated from the +# only government observatory' - was suggested as a standard railway time, +# first to be adopted on the Great Indian Peninsular Railway (GIPR).... +# Calcutta, Bombay, and Karachi, were to be allowed to continue with their +# local time for civil purposes." - Prasad R. Tracks of Change: Railways and +# Everyday Life in Colonial India. Cambridge University Press (2016), 145. +# +# Reed S, Low F. The Indian Year Book 1936-37. Bennett, Coleman, pp 27-8. +# https://archive.org/details/in.ernet.dli.2015.282212 +# This lists +052110 as Madras local time used in railways, and says that on +# 1906-01-01 railways and telegraphs in India switched to +0530. Some +# municipalities retained their former time, and the time in Calcutta +# continued to depend on whether you were at the railway station or at +# government offices. Government time was at +055320 (according to Shanks) or +# at +0554 (according to the Indian Year Book). Railway time is more +# appropriate for our purposes, as it was better documented, it is what we do +# elsewhere (e.g., Europe/London before 1880), and after 1906 it was +# consistent in the region now identified by Asia/Kolkata. So, use railway +# time for 1870-1941. Shanks is our only (and dubious) source for the +# 1941-1945 data. + # Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Kolkata 5:53:28 - LMT 1880 # Kolkata - 5:53:20 - HMT 1941 Oct # Howrah Mean Time? - 6:30 - +0630 1942 May 15 +Zone Asia/Kolkata 5:53:28 - LMT 1854 Jun 28 # Kolkata + 5:53:20 - HMT 1870 # Howrah Mean Time? + 5:21:10 - MMT 1906 Jan 1 # Madras local time + 5:30 - IST 1941 Oct + 5:30 1:00 +0630 1942 May 15 5:30 - IST 1942 Sep 5:30 1:00 +0630 1945 Oct 15 5:30 - IST -# The following are like Asia/Kolkata: +# Since 1970 the following are like Asia/Kolkata: # Andaman Is # Lakshadweep (Laccadive, Minicoy and Amindivi Is) # Nicobar Is @@ -1036,7 +1094,7 @@ Zone Asia/Jayapura 9:22:48 - LMT 1932 Nov # From Reuters (2007-09-16), with a heads-up from Jesper Nørgaard Welen: # ... the Guardian Council ... approved a law on Sunday to re-introduce # daylight saving time ... 
-# http://uk.reuters.com/article/oilRpt/idUKBLA65048420070916 +# https://uk.reuters.com/article/oilRpt/idUKBLA65048420070916 # # From Roozbeh Pournader (2007-11-05): # This is quoted from Official Gazette of the Islamic Republic of @@ -1135,7 +1193,7 @@ Zone Asia/Tehran 3:25:44 - LMT 1916 # http://www.aswataliraq.info/look/article.tpl?id=2047&IdLanguage=17&IdPublication=4&NrArticle=71743&NrIssue=1&NrSection=10 # # We have published a short article in English about the change: -# http://www.timeanddate.com/news/time/iraq-dumps-daylight-saving.html +# https://www.timeanddate.com/news/time/iraq-dumps-daylight-saving.html # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S Rule Iraq 1982 only - May 1 0:00 1:00 D @@ -1443,12 +1501,12 @@ Rule Japan 1950 1951 - May Sun>=1 2:00 1:00 D # From Yu-Cheng Chuang (2013-07-12): # ...the Meiji Emperor announced Ordinance No. 167 of Meiji Year 28 "The clause # about standard time" ... The adoption began from Jan 1, 1896. -# http://ja.wikisource.org/wiki/標準時ニ關スル件_(公布時) +# https://ja.wikisource.org/wiki/標準時ニ關スル件_(公布時) # # ...the Showa Emperor announced Ordinance No. 529 of Showa Year 12 ... which # means the whole Japan territory, including later occupations, adopt Japan # Central Time (UTC+9). The adoption began on Oct 1, 1937. -# http://ja.wikisource.org/wiki/明治二十八年勅令第百六十七號標準時ニ關スル件中改正ノ件 +# https://ja.wikisource.org/wiki/明治二十八年勅令第百六十七號標準時ニ關スル件中改正ノ件 # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone Asia/Tokyo 9:18:59 - LMT 1887 Dec 31 15:00u @@ -1510,7 +1568,7 @@ Zone Asia/Tokyo 9:18:59 - LMT 1887 Dec 31 15:00u # Official, in Arabic: # http://www.petra.gov.jo/public_news/Nws_NewsDetails.aspx?Menu_ID=&Site_Id=2&lang=1&NewsID=133230&CatID=14 # ... Our background/permalink about it -# http://www.timeanddate.com/news/time/jordan-reverses-dst-decision.html +# https://www.timeanddate.com/news/time/jordan-reverses-dst-decision.html # ... # http://www.petra.gov.jo/Public_News/Nws_NewsDetails.aspx?lang=2&site_id=1&NewsID=133313&Type=P # ... says midnight for the coming one and 1:00 for the ones in the future @@ -1868,9 +1926,9 @@ Zone Asia/Bishkek 4:58:24 - LMT 1924 May 2 # between 1987 and 1988 ... # From Sanghyuk Jung (2014-10-29): -# http://mm.icann.org/pipermail/tz/2014-October/021830.html +# https://mm.icann.org/pipermail/tz/2014-October/021830.html # According to the Korean Wikipedia -# http://ko.wikipedia.org/wiki/한국_표준시 +# https://ko.wikipedia.org/wiki/한국_표준시 # [oldid=12896437 2014-09-04 08:03 UTC] # DST in Republic of Korea was as follows.... And I checked old # newspapers in Korean, all articles correspond with data in Wikipedia. @@ -2092,7 +2150,7 @@ Zone Indian/Maldives 4:54:00 - LMT 1880 # Male # +08:00 instead. Different sources appear to disagree with the tz # database on this, e.g.: # -# http://www.timeanddate.com/worldclock/city.html?n=1026 +# https://www.timeanddate.com/worldclock/city.html?n=1026 # http://www.worldtimeserver.com/current_time_in_MN.aspx # # both say GMT+08:00. @@ -2222,7 +2280,7 @@ Zone Asia/Kathmandu 5:41:16 - LMT 1920 # help reduce load shedding by approving the closure of commercial centres at # 9pm and moving clocks forward by one hour for the next three months. ...." 
# -# http://www.worldtimezone.net/dst_news/dst_news_pakistan01.html +# http://www.worldtimezone.com/dst_news/dst_news_pakistan01.html # http://www.dailytimes.com.pk/default.asp?page=2008%5C05%5C15%5Cstory_15-5-2008_pg1_4 # From Arthur David Olson (2008-05-19): @@ -2288,7 +2346,7 @@ Zone Asia/Kathmandu 5:41:16 - LMT 1920 # # We have confirmed this year's end date with both with the Ministry of # Water and Power and the Pakistan Electric Power Company: -# http://www.timeanddate.com/news/time/pakistan-ends-dst09.html +# https://www.timeanddate.com/news/time/pakistan-ends-dst09.html # From Christoph Göhre (2009-10-01): # [T]he German Consulate General in Karachi reported me today that Pakistan @@ -2470,7 +2528,7 @@ Zone Asia/Karachi 4:28:12 - LMT 1907 # # We are not sure if Gaza will do the same, last year they had a different # end date, we will keep this page updated: -# http://www.timeanddate.com/news/time/westbank-gaza-dst-2009.html +# https://www.timeanddate.com/news/time/westbank-gaza-dst-2009.html # From Alexander Krivenyshev (2009-09-02): # Seems that Gaza Strip will go back to Winter Time same date as West Bank. @@ -2508,7 +2566,7 @@ Zone Asia/Karachi 4:28:12 - LMT 1907 # the clocks were set back one hour at 2010-08-11 00:00:00 local time in # Gaza and the West Bank. # Some more background info: -# http://www.timeanddate.com/news/time/westbank-gaza-end-dst-2010.html +# https://www.timeanddate.com/news/time/westbank-gaza-end-dst-2010.html # From Steffen Thorsen (2011-08-26): # Gaza and the West Bank did go back to standard time in the beginning of @@ -2518,7 +2576,7 @@ Zone Asia/Karachi 4:28:12 - LMT 1907 # # http://www.maannews.net/eng/ViewDetails.aspx?ID=416217 # Additional info: -# http://www.timeanddate.com/news/time/palestine-dst-2011.html +# https://www.timeanddate.com/news/time/palestine-dst-2011.html # From Alexander Krivenyshev (2011-08-27): # According to the article in The Jerusalem Post: @@ -2528,7 +2586,7 @@ Zone Asia/Karachi 4:28:12 - LMT 1907 # The Hamas government said on Saturday that it won't observe summertime after # the Muslim feast of Id al-Fitr, which begins on Tuesday..." # ... -# http://www.jpost.com/MiddleEast/Article.aspx?id=235650 +# https://www.jpost.com/MiddleEast/Article.aspx?id=235650 # http://www.worldtimezone.com/dst_news/dst_news_gazastrip05.html # The rules for Egypt are stolen from the 'africa' file. @@ -2549,7 +2607,7 @@ Zone Asia/Karachi 4:28:12 - LMT 1907 # http://safa.ps/details/news/74352/%D8%A8%D8%AF%D8%A1-%D8%A7%D9%84%D8%AA%D9%88%D9%82%D9%8A%D8%AA-%D8%A7%D9%84%D8%B5%D9%8A%D9%81%D9%8A-%D8%A8%D8%A7%D9%84%D8%B6%D9%81%D8%A9-%D9%88%D8%BA%D8%B2%D8%A9-%D9%84%D9%8A%D9%84%D8%A9-%D8%A7%D9%84%D8%AC%D9%85%D8%B9%D8%A9.html # # Our brief summary: -# http://www.timeanddate.com/news/time/gaza-west-bank-dst-2012.html +# https://www.timeanddate.com/news/time/gaza-west-bank-dst-2012.html # From Steffen Thorsen (2013-03-26): # The following news sources tells that Palestine will "start daylight saving @@ -2569,11 +2627,11 @@ Zone Asia/Karachi 4:28:12 - LMT 1907 # From Steffen Thorsen (2015-03-03): # Sources such as http://www.alquds.com/news/article/view/id/548257 -# and http://www.raya.ps/ar/news/890705.html say Palestine areas will +# and https://www.raya.ps/ar/news/890705.html say Palestine areas will # start DST on 2015-03-28 00:00 which is one day later than expected. 
# # From Paul Eggert (2015-03-03): -# http://www.timeanddate.com/time/change/west-bank/ramallah?year=2014 +# https://www.timeanddate.com/time/change/west-bank/ramallah?year=2014 # says that the fall 2014 transition was Oct 23 at 24:00. # From Hannah Kreitem (2016-03-09): @@ -2597,8 +2655,8 @@ Zone Asia/Karachi 4:28:12 - LMT 1907 # # From Paul Eggert (2016-10-19): # It's also consistent with predictions in the following URLs today: -# http://www.timeanddate.com/time/change/gaza-strip/gaza -# http://www.timeanddate.com/time/change/west-bank/hebron +# https://www.timeanddate.com/time/change/gaza-strip/gaza +# https://www.timeanddate.com/time/change/west-bank/hebron # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S Rule EgyptAsia 1957 only - May 10 0:00 1:00 S @@ -2661,7 +2719,7 @@ Zone Asia/Hebron 2:20:23 - LMT 1900 Oct # Philippines, issued a proclamation announcing that 1844-12-30 was to # be immediately followed by 1845-01-01; see R.H. van Gent's # History of the International Date Line -# http://www.staff.science.uu.nl/~gent0113/idl/idl_philippines.htm +# https://www.staff.science.uu.nl/~gent0113/idl/idl_philippines.htm # The rest of the data entries are from Shanks & Pottenger. # From Jesper Nørgaard Welen (2006-04-26): @@ -2925,7 +2983,7 @@ Rule Syria 2007 only - Nov Fri>=1 0:00 0 - # We have not found any sources saying anything about when DST ends this year. # # Our summary -# http://www.timeanddate.com/news/time/syria-dst-starts-march-27-2009.html +# https://www.timeanddate.com/news/time/syria-dst-starts-march-27-2009.html # From Steffen Thorsen (2009-10-27): # The Syrian Arab News Network on 2009-09-29 reported that Syria will @@ -2952,7 +3010,7 @@ Rule Syria 2007 only - Nov Fri>=1 0:00 0 - # http://www.sana.sy/ara/2/2012/03/26/408215.htm # # Our brief summary: -# http://www.timeanddate.com/news/time/syria-dst-2012.html +# https://www.timeanddate.com/news/time/syria-dst-2012.html # From Arthur David Olson (2012-03-27): # Assume last Friday in March going forward XXX. @@ -3035,7 +3093,7 @@ Zone Asia/Tashkent 4:37:11 - LMT 1924 May 2 # is quoted verbatim in: # http://www.thoigian.com.vn/?mPage=P80D01 # is translated by Brian Inglis in: -# http://mm.icann.org/pipermail/tz/2014-October/021654.html +# https://mm.icann.org/pipermail/tz/2014-October/021654.html # and is the basis for the information below. # # The 1906 transition was effective July 1 and standardized Indochina to diff --git a/src/timezone/data/australasia b/src/timezone/data/australasia index d389ae134a..5f7c86dda6 100644 --- a/src/timezone/data/australasia +++ b/src/timezone/data/australasia @@ -293,7 +293,7 @@ Zone Indian/Cocos 6:27:40 - LMT 1900 # http://www.fiji.gov.fj/index.php?option=com_content&view=article&id=1096:3310-cabinet-approves-change-in-daylight-savings-dates&catid=49:cabinet-releases&Itemid=166 # # A bit more background info here: -# http://www.timeanddate.com/news/time/fiji-dst-ends-march-2010.html +# https://www.timeanddate.com/news/time/fiji-dst-ends-march-2010.html # From Alexander Krivenyshev (2010-10-24): # According to Radio Fiji and Fiji Times online, Fiji will end DST 3 @@ -357,9 +357,12 @@ Zone Indian/Cocos 6:27:40 - LMT 1900 # clocks go forward an hour at 2am to 3am.... Daylight Saving will # end at 3.00am on Sunday 15th January 2017." -# From Paul Eggert (2016-10-03): -# For now, guess DST from 02:00 the first Sunday in November to -# 03:00 the third Sunday in January. 
Although ad hoc, it matches +# From Paul Eggert (2017-08-21): +# Dominic Fok writes (2017-08-20) that DST ends 2018-01-14, citing +# Extraordinary Government of Fiji Gazette Supplement No. 21 (2017-08-27), +# [Legal Notice No. 41] of an order of the previous day by J Usamate. +# For now, guess DST from 02:00 the first Sunday in November to 03:00 +# the first Sunday on or after January 14. Although ad hoc, it matches # transitions since late 2014 and seems more likely to match future # practice than guessing no DST. @@ -373,7 +376,7 @@ Rule Fiji 2011 only - Mar Sun>=1 3:00 0 - Rule Fiji 2012 2013 - Jan Sun>=18 3:00 0 - Rule Fiji 2014 only - Jan Sun>=18 2:00 0 - Rule Fiji 2014 max - Nov Sun>=1 2:00 1:00 S -Rule Fiji 2015 max - Jan Sun>=15 3:00 0 - +Rule Fiji 2015 max - Jan Sun>=14 3:00 0 - # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone Pacific/Fiji 11:55:44 - LMT 1915 Oct 26 # Suva 12:00 Fiji +12/+13 @@ -557,7 +560,7 @@ Zone Pacific/Port_Moresby 9:48:40 - LMT 1880 # The World War II entries below are instead based on Arawa-Kieta. # The Japanese occupied Kieta in July 1942, # according to the Pacific War Online Encyclopedia -# http://pwencycl.kgbudge.com/B/o/Bougainville.htm +# https://pwencycl.kgbudge.com/B/o/Bougainville.htm # and seem to have controlled it until their 1945-08-21 surrender. # # The Autonomous Region of Bougainville switched from UT +10 to +11 @@ -579,7 +582,7 @@ Zone Pacific/Pitcairn -8:40:20 - LMT 1901 # Adamstown -8:00 - -08 # American Samoa -Zone Pacific/Pago_Pago 12:37:12 - LMT 1879 Jul 5 +Zone Pacific/Pago_Pago 12:37:12 - LMT 1892 Jul 5 -11:22:48 - LMT 1911 -11:00 - SST # S=Samoa Link Pacific/Pago_Pago Pacific/Midway # in US minor outlying islands @@ -595,7 +598,7 @@ Link Pacific/Pago_Pago Pacific/Midway # in US minor outlying islands # Sunday of April 2011." # # Background info: -# http://www.timeanddate.com/news/time/samoa-dst-plan-2009.html +# https://www.timeanddate.com/news/time/samoa-dst-plan-2009.html # # Samoa's Daylight Saving Time Act 2009 is available here, but does not # contain any dates: @@ -659,7 +662,7 @@ Rule WS 2011 only - Sep lastSat 3:00 1 D Rule WS 2012 max - Apr Sun>=1 4:00 0 S Rule WS 2012 max - Sep lastSun 3:00 1 D # Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Apia 12:33:04 - LMT 1879 Jul 5 +Zone Pacific/Apia 12:33:04 - LMT 1892 Jul 5 -11:26:56 - LMT 1911 -11:30 - -1130 1950 -11:00 WS -11/-10 2011 Dec 29 24:00 @@ -686,7 +689,7 @@ Zone Pacific/Guadalcanal 10:39:48 - LMT 1912 Oct # Honiara # From Paul Eggert (2012-07-25) # A Google Books snippet of Appendix to the Journals of the House of # Representatives of New Zealand, Session 1948, -# , page 65, says Tokelau +# , page 65, says Tokelau # was "11 hours slow on G.M.T." Go with Thorsen and assume Shanks & Pottenger # are off by an hour starting in 1901. @@ -701,8 +704,8 @@ Rule Tonga 1999 only - Oct 7 2:00s 1:00 S Rule Tonga 2000 only - Mar 19 2:00s 0 - Rule Tonga 2000 2001 - Nov Sun>=1 2:00 1:00 S Rule Tonga 2001 2002 - Jan lastSun 2:00 0 - -Rule Tonga 2016 max - Nov Sun>=1 2:00 1:00 S -Rule Tonga 2017 max - Jan Sun>=15 3:00 0 - +Rule Tonga 2016 only - Nov Sun>=1 2:00 1:00 S +Rule Tonga 2017 only - Jan Sun>=15 3:00 0 - # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone Pacific/Tongatapu 12:19:20 - LMT 1901 12:20 - +1220 1941 @@ -756,7 +759,7 @@ Zone Pacific/Funafuti 11:56:52 - LMT 1901 # Operation Fishbowl shot (Tightrope, 1962-11-04).... [See] Herman Hoerlin, # "The United States High-Altitude Test Experience: A Review Emphasizing the # Impact on the Environment", Los Alamos LA-6405, Oct 1976. 
-# http://www.fas.org/sgp/othergov/doe/lanl/docs1/00322994.pdf +# https://www.fas.org/sgp/othergov/doe/lanl/docs1/00322994.pdf # See the table on page 4 where he lists GMT and local times for the tests; a # footnote for the JI tests reads that local time is "JI time = Hawaii Time # Minus One Hour". @@ -822,7 +825,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # # For data circa 1899, a common source is: # Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94. -# http://www.jstor.org/stable/1774359 +# https://www.jstor.org/stable/1774359 # # A reliable and entertaining source about time zones is # Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997). @@ -969,7 +972,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # AEST ACST AWST AEDT ACDT # # Parliamentary Library (2008-11-10) -# http://www.aph.gov.au/binaries/library/pubs/rp/2008-09/09rp14.pdf +# https://www.aph.gov.au/binaries/library/pubs/rp/2008-09/09rp14.pdf # EST CST WST preferred for standard time; AEST AEDT ACST ACDT also used # # The Transport Safety Bureau has an extensive series of accident reports, @@ -1005,13 +1008,13 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # # NSW (including LHI and Broken Hill): # Standard Time Act 1987 (updated 1995-04-04) -# http://www.austlii.edu.au/au/legis/nsw/consol_act/sta1987137/index.html +# https://www.austlii.edu.au/au/legis/nsw/consol_act/sta1987137/index.html # ACT # Standard Time and Summer Time Act 1972 -# http://www.austlii.edu.au/au/legis/act/consol_act/stasta1972279/index.html +# https://www.austlii.edu.au/au/legis/act/consol_act/stasta1972279/index.html # SA # Standard Time Act, 1898 -# http://www.austlii.edu.au/au/legis/sa/consol_act/sta1898137/index.html +# https://www.austlii.edu.au/au/legis/sa/consol_act/sta1898137/index.html # From David Grosz (2005-06-13): # It was announced last week that Daylight Saving would be extended by @@ -1306,7 +1309,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # http://abc.net.au/news/regionals/neweng/monthly/regeng-22jul1999-1.htm # (1999-07-22). For now, we'll wait to see if this really happens. # -# Victoria will following NSW. See: +# Victoria will follow NSW. See: # Vic to extend daylight saving (1999-07-28) # http://abc.net.au/local/news/olympics/1999/07/item19990728112314_1.htm # @@ -1409,7 +1412,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # the ACT for all 52 weeks of the year... # # We have a wrap-up here: -# http://www.timeanddate.com/news/time/south-australia-extends-dst.html +# https://www.timeanddate.com/news/time/south-australia-extends-dst.html ############################################################################### # New Zealand @@ -1463,7 +1466,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # From Paul Eggert (2014-07-14): # Chatham Island time was formally standardized on 1957-01-01 by # New Zealand's Standard Time Amendment Act 1956 (1956-10-26). -# http://www.austlii.edu.au/nz/legis/hist_act/staa19561956n100244.pdf +# https://www.austlii.edu.au/nz/legis/hist_act/staa19561956n100244.pdf # According to Google Books snippet view, a speaker in the New Zealand # parliamentary debates in 1956 said "Clause 78 makes provision for standard # time in the Chatham Islands. The time there is 45 minutes in advance of New @@ -1578,7 +1581,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # the Norfolk Island Museum and the Australian Bureau of Meteorology's # Norfolk Island station, and found no record of Norfolk observing DST # other than in 1974/5. 
See: -# http://www.timeanddate.com/time/australia/norfolk-island.html +# https://www.timeanddate.com/time/australia/norfolk-island.html # Pitcairn @@ -1606,11 +1609,13 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # (Western) Samoa and American Samoa -# Howse writes (p 153, citing p 10 of the 1883-11-18 New York Herald) -# that in 1879 the King of Samoa decided to change +# Howse writes (p 153) that after the 1879 standardization on Antipodean +# time by the British governor of Fiji, the King of Samoa decided to change # "the date in his kingdom from the Antipodean to the American system, # ordaining - by a masterpiece of diplomatic flattery - that # the Fourth of July should be celebrated twice in that year." +# This happened in 1892, according to the Evening News (Sydney) of 1892-07-20. +# https://www.staff.science.uu.nl/~gent0113/idl/idl.htm # Although Shanks & Pottenger says they both switched to UT -11:30 # in 1911, and to -11 in 1950. many earlier sources give -11 @@ -1621,6 +1626,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # day in 2011. Assume also that the Samoas follow the US and New # Zealand's "ST"/"DT" style of daylight-saving abbreviations. + # Tonga # From Paul Eggert (1996-01-22): @@ -1715,6 +1721,15 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # Assume Tonga will observe DST from the first Sunday in November at 02:00 # through the third Sunday in January at 03:00, like Fiji, for now. +# From David Wade (2017-10-18): +# In August government was disolved by the King. The current prime minister +# continued in office in care taker mode. It is easy to see that few +# decisions will be made until elections 16th November. +# +# From Paul Eggert (2017-10-18): +# For now, guess that DST is discontinued. That's what the IATA is guessing. + + # Wake # From Vernice Anderson, Personal Secretary to Philip Jessup, @@ -1727,7 +1742,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # making calculation of time in Washington difficult if not almost # impossible. # -# http://www.trumanlibrary.org/wake/meeting.htm +# https://www.trumanlibrary.org/oralhist/andrsonv.htm # From Paul Eggert (2003-03-23): # We have no other report of DST in Wake Island, so omit this info for now. @@ -1755,7 +1770,7 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901 # an international standard, there are some places on the high seas where the # correct date is ambiguous. -# From Wikipedia (2005-08-31): +# From Wikipedia (2005-08-31): # Before 1920, all ships kept local apparent time on the high seas by setting # their clocks at night or at the morning sight so that, given the ship's # speed and direction, it would be 12 o'clock when the Sun crossed the ship's diff --git a/src/timezone/data/backward b/src/timezone/data/backward index 09f2a31b68..2141f0d579 100644 --- a/src/timezone/data/backward +++ b/src/timezone/data/backward @@ -61,7 +61,9 @@ Link America/Sao_Paulo Brazil/East Link America/Manaus Brazil/West Link America/Halifax Canada/Atlantic Link America/Winnipeg Canada/Central -Link America/Regina Canada/East-Saskatchewan +# This line is commented out, as the name exceeded the 14-character limit +# and was an unused misnomer. 
+#Link America/Regina Canada/East-Saskatchewan Link America/Toronto Canada/Eastern Link America/Edmonton Canada/Mountain Link America/St_Johns Canada/Newfoundland diff --git a/src/timezone/data/backzone b/src/timezone/data/backzone index 9ce78316c2..32bd0f1061 100644 --- a/src/timezone/data/backzone +++ b/src/timezone/data/backzone @@ -181,7 +181,7 @@ Zone Africa/Lome 0:04:52 - LMT 1893 # with the date that it took effect, namely 1912-01-01. # Zone Africa/Luanda 0:52:56 - LMT 1892 - 0:52:04 - +005204 1912 Jan 1 + 0:52:04 - LMT 1912 Jan 1 # Luanda Mean Time? 1:00 - WAT # Democratic Republic of the Congo (east) @@ -540,10 +540,10 @@ Zone Europe/Belfast -0:23:40 - LMT 1880 Aug 2 # Guernsey # Data from Joseph S. Myers -# http://mm.icann.org/pipermail/tz/2013-September/019883.html +# https://mm.icann.org/pipermail/tz/2013-September/019883.html # References to be added -# LMT Location - 49.27N -2.33E - St.Peter Port -Zone Europe/Guernsey -0:09:19 - LMT 1913 Jun 18 +# LMT is for Town Church, St. Peter Port, 49 degrees 27'17"N 2 degrees 32'10"W +Zone Europe/Guernsey -0:10:09 - LMT 1913 Jun 18 0:00 GB-Eire %s 1940 Jul 2 1:00 C-Eur CE%sT 1945 May 8 0:00 GB-Eire %s 1968 Oct 27 @@ -555,11 +555,11 @@ Zone Europe/Guernsey -0:09:19 - LMT 1913 Jun 18 # # From Lester Caine (2013-09-04): # The Isle of Man legislation is now on-line at -# , starting with the original Statutory +# , starting with the original Statutory # Time Act in 1883 and including additional confirmation of some of # the dates of the 'Summer Time' orders originating at # Westminster. There is a little uncertainty as to the starting date -# of the first summer time in 1916 which may have be announced a +# of the first summer time in 1916 which may have been announced a # couple of days late. There is still a substantial number of # documents to work through, but it is thought that every GB change # was also implemented on the island. @@ -574,10 +574,10 @@ Zone Europe/Isle_of_Man -0:17:55 - LMT 1883 Mar 30 0:00s # Jersey # Data from Joseph S. Myers -# http://mm.icann.org/pipermail/tz/2013-September/019883.html +# https://mm.icann.org/pipermail/tz/2013-September/019883.html # References to be added -# LMT Location - 49.187N -2.107E - St. Helier -Zone Europe/Jersey -0:08:25 - LMT 1898 Jun 11 16:00u +# LMT is for Parish Church, St. Helier, 49 degrees 11'0.57"N 2 degrees 6'24.33"W +Zone Europe/Jersey -0:08:26 - LMT 1898 Jun 11 16:00u 0:00 GB-Eire %s 1940 Jul 2 1:00 C-Eur CE%sT 1945 May 8 0:00 GB-Eire %s 1968 Oct 27 diff --git a/src/timezone/data/europe b/src/timezone/data/europe index 558b9f168f..5b3b4e52fa 100644 --- a/src/timezone/data/europe +++ b/src/timezone/data/europe @@ -37,14 +37,14 @@ # [PDF] (1914-03) # # Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94 -# . He writes: +# . He writes: # "It is requested that corrections and additions to these tables # may be sent to Mr. John Milne, Royal Geographical Society, # Savile Row, London." Nowadays please email them to tz@iana.org. # # Byalokoz EL. New Counting of Time in Russia since July 1, 1919. # This Russian-language source was consulted by Vladimir Karpinsky; see -# http://mm.icann.org/pipermail/tz/2014-August/021320.html +# https://mm.icann.org/pipermail/tz/2014-August/021320.html # The full Russian citation is: # Бялокоз, Евгений Людвигович. Новый счет времени в течении суток # введенный декретом Совета народных комиссаров для всей России с 1-го @@ -187,7 +187,7 @@ # foundations of civilization throughout the world. 
# -- "A Silent Toast to William Willett", Pictorial Weekly; # republished in Finest Hour (Spring 2002) 1(114):26 -# http://www.winstonchurchill.org/images/finesthour/Vol.01%20No.114.pdf +# https://www.winstonchurchill.org/publications/finest-hour/finest-hour-114/a-silent-toast-to-william-willett-by-winston-s-churchill # From Paul Eggert (2015-08-08): # The OED Supplement says that the English originally said "Daylight Saving" @@ -225,8 +225,8 @@ # official designation; the reply of the 21st was that there wasn't # but he couldn't think of anything better than the "Double British # Summer Time" that the BBC had been using informally. -# http://www.polyomino.org.uk/british-time/bbc-19410418.png -# http://www.polyomino.org.uk/british-time/ho-19410421.png +# https://www.polyomino.org.uk/british-time/bbc-19410418.png +# https://www.polyomino.org.uk/british-time/ho-19410421.png # From Sir Alexander Maxwell in the above-mentioned letter (1941-04-21): # [N]o official designation has as far as I know been adopted for the time @@ -243,13 +243,13 @@ # the history of summer time legislation in the United Kingdom. # Since 1998 Joseph S. Myers has been updating # and extending this list, which can be found in -# http://www.polyomino.org.uk/british-time/ +# https://www.polyomino.org.uk/british-time/ # From Joseph S. Myers (1998-01-06): # # The legal time in the UK outside of summer time is definitely GMT, not UTC; # see Lord Tanlaw's speech -# http://www.publications.parliament.uk/pa/ld199798/ldhansrd/vo970611/text/70611-10.htm#70611-10_head0 +# https://www.publications.parliament.uk/pa/ld199798/ldhansrd/vo970611/text/70611-10.htm#70611-10_head0 # (Lords Hansard 11 June 1997 columns 964 to 976). # From Paul Eggert (2006-03-22): @@ -295,7 +295,7 @@ # Irish 'public feeling (was) outraged by forcing of English time on us'." # -- Parsons M. Dublin lost its time zone - and 25 minutes - after 1916 Rising. # Irish Times 2014-10-27. -# http://www.irishtimes.com/news/politics/dublin-lost-its-time-zone-and-25-minutes-after-1916-rising-1.1977411 +# https://www.irishtimes.com/news/politics/dublin-lost-its-time-zone-and-25-minutes-after-1916-rising-1.1977411 # From Joseph S. Myers (2005-01-26): # Irish laws are available online at . @@ -348,6 +348,12 @@ # Justice (tel +353 1 678 9711) who confirmed to me that the correct name is # "Irish Summer Time", abbreviated to "IST". 
+# Michael Deckers (2017-06-01) gave the following URLs for Ireland's +# Summer Time Act, 1925 and Summer Time Orders, 1926 and 1947: +# http://www.irishstatutebook.ie/eli/1925/act/8/enacted/en/print.html +# http://www.irishstatutebook.ie/eli/1926/sro/919/made/en/print.html +# http://www.irishstatutebook.ie/eli/1947/sro/71/made/en/print.html + # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S # Summer Time Act, 1916 Rule GB-Eire 1916 only - May 21 2:00s 1:00 BST @@ -472,14 +478,14 @@ Link Europe/London Europe/Isle_of_Man # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone Europe/Dublin -0:25:00 - LMT 1880 Aug 2 - -0:25:21 - DMT 1916 May 21 2:00 # Dublin MT + -0:25:21 - DMT 1916 May 21 2:00s # Dublin MT -0:25:21 1:00 IST 1916 Oct 1 2:00s 0:00 GB-Eire %s 1921 Dec 6 # independence - 0:00 GB-Eire GMT/IST 1940 Feb 25 2:00 - 0:00 1:00 IST 1946 Oct 6 2:00 - 0:00 - GMT 1947 Mar 16 2:00 - 0:00 1:00 IST 1947 Nov 2 2:00 - 0:00 - GMT 1948 Apr 18 2:00 + 0:00 GB-Eire GMT/IST 1940 Feb 25 2:00s + 0:00 1:00 IST 1946 Oct 6 2:00s + 0:00 - GMT 1947 Mar 16 2:00s + 0:00 1:00 IST 1947 Nov 2 2:00s + 0:00 - GMT 1948 Apr 18 2:00s 0:00 GB-Eire GMT/IST 1968 Oct 27 1:00 - IST 1971 Oct 31 2:00u 0:00 GB-Eire GMT/IST 1996 @@ -625,7 +631,7 @@ Rule Russia 1996 2010 - Oct lastSun 2:00s 0 - # Council of Ministers of the USSR from 1989-03-14 No. 227. # # I did not find full texts of these acts. For the 1989 one we have -# title at http://base.garant.ru/70754136/ : +# title at https://base.garant.ru/70754136/ : # "About change in calculation of time on the territories of # Lithuanian SSR, Latvian SSR and Estonian SSR, Astrakhan, # Kaliningrad, Kirov, Kuybyshev, Ulyanovsk and Uralsk oblasts". @@ -656,7 +662,7 @@ Rule Russia 1996 2010 - Oct lastSun 2:00s 0 - # http://bmockbe.ru/events/?ID=7583 # # Medvedev signed a law on the calculation of the time (in russian): -# http://www.regnum.ru/news/polit/1413906.html +# https://www.regnum.ru/news/polit/1413906.html # From Arthur David Olson (2011-06-15): # Take "abolishing daylight saving time" to mean that time is now considered @@ -783,7 +789,7 @@ Zone Europe/Vienna 1:05:21 - LMT 1893 Apr # Sources (Russian language): # http://www.belta.by/ru/all_news/society/V-Belarusi-otmenjaetsja-perexod-na-sezonnoe-vremja_i_572952.html # http://naviny.by/rubrics/society/2011/09/16/ic_articles_116_175144/ -# http://news.tut.by/society/250578.html +# https://news.tut.by/society/250578.html # # From Alexander Bokovoy (2014-10-09): # Belarussian government decided against changing to winter time.... @@ -1104,7 +1110,7 @@ Zone America/Thule -4:35:08 - LMT 1916 Jul 28 # Pituffik air base # for their standard and summer times. He says no, they use "suveaeg" # (summer time) and "talveaeg" (winter time). -# From The Baltic Times (1999-09-09) +# From The Baltic Times (1999-09-09) # via Steffen Thorsen: # This year will mark the last time Estonia shifts to summer time, # a council of the ruling coalition announced Sept. 6.... @@ -1156,7 +1162,7 @@ Zone Europe/Tallinn 1:39:00 - LMT 1880 # This is documented in Heikki Oja: Aikakirja 2007, published by The Almanac # Office of University of Helsinki, ISBN 952-10-3221-9, available online (in # Finnish) at -# http://almanakka.helsinki.fi/aikakirja/Aikakirja2007kokonaan.pdf +# https://almanakka.helsinki.fi/aikakirja/Aikakirja2007kokonaan.pdf # # Page 105 (56 in PDF version) has a handy table of all past daylight savings # transitions. It is easy enough to interpret without Finnish skills. 
@@ -1169,7 +1175,7 @@ Zone Europe/Tallinn 1:39:00 - LMT 1880 # From Konstantin Hyppönen (2014-06-13): # [Heikki Oja's book Aikakirja 2013] -# http://almanakka.helsinki.fi/images/aikakirja/Aikakirja2013kokonaan.pdf +# https://almanakka.helsinki.fi/images/aikakirja/Aikakirja2013kokonaan.pdf # pages 104-105, including a scan from a newspaper published on Apr 2 1942 # say that ... [o]n Apr 2 1942, 24 o'clock (which means Apr 3 1942, # 00:00), clocks were moved one hour forward. The newspaper @@ -1299,7 +1305,7 @@ Zone Europe/Paris 0:09:21 - LMT 1891 Mar 15 0:01 # From Jörg Schilling (2002-10-23): # In 1945, Berlin was switched to Moscow Summer time (GMT+4) by -# http://www.dhm.de/lemo/html/biografien/BersarinNikolai/ +# https://www.dhm.de/lemo/html/biografien/BersarinNikolai/ # General [Nikolai] Bersarin. # From Paul Eggert (2003-03-08): @@ -1524,7 +1530,7 @@ Zone Atlantic/Reykjavik -1:28 - LMT 1908 # From Paul Eggert (2016-10-27): # Go with INRiM for DST rules, except as corrected by Inglis for 1944 # for the Kingdom of Italy. This is consistent with Renzo Baldini. -# Model Rome's occupation by using using C-Eur rules from 1943-09-10 +# Model Rome's occupation by using C-Eur rules from 1943-09-10 # to 1944-06-04; although Rome was an open city during this period, it # was effectively controlled by Germany. # @@ -1839,14 +1845,14 @@ Zone Europe/Malta 0:58:04 - LMT 1893 Nov 2 0:00s # Valletta # Following Moldova and neighboring Ukraine- Transnistria (Pridnestrovie)- # Tiraspol will go back to winter time on October 30, 2011. # News from Moldova (in russian): -# http://ru.publika.md/link_317061.html +# https://ru.publika.md/link_317061.html # From Roman Tudos (2015-07-02): # http://lex.justice.md/index.php?action=view&view=doc&lang=1&id=355077 # From Paul Eggert (2015-07-01): # The abovementioned official link to IGO1445-868/2014 states that # 2014-10-26's fallback transition occurred at 03:00 local time. Also, -# http://www.trm.md/en/social/la-30-martie-vom-trece-la-ora-de-vara +# https://www.trm.md/en/social/la-30-martie-vom-trece-la-ora-de-vara # says the 2014-03-30 spring-forward transition was at 02:00 local time. # Guess that since 1997 Moldova has switched one hour before the EU. @@ -1918,7 +1924,7 @@ Zone Europe/Monaco 0:29:32 - LMT 1891 Mar 15 # Amsterdam mean time. # The data entries before 1945 are taken from -# http://www.staff.science.uu.nl/~gent0113/wettijd/wettijd.htm +# https://www.staff.science.uu.nl/~gent0113/wettijd/wettijd.htm # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S Rule Neth 1916 only - May 1 0:00 1:00 NST # Netherlands Summer Time @@ -1999,7 +2005,7 @@ Zone Europe/Oslo 0:43:00 - LMT 1895 Jan 1 # so it must have diverged from Oslo time during the war, as Oslo was # keeping Berlin time. # -# says that the meteorologists +# says that the meteorologists # burned down their station in 1940 and left the island, but returned in # 1941 with a small Norwegian garrison and continued operations despite # frequent air attacks from Germans. In 1943 the Americans established a @@ -2037,7 +2043,7 @@ Rule Poland 1945 only - Apr 29 0:00 1:00 S Rule Poland 1945 only - Nov 1 0:00 0 - # For 1946 on the source is Kazimierz Borkowski, # Toruń Center for Astronomy, Dept. of Radio Astronomy, Nicolaus Copernicus U., -# http://www.astro.uni.torun.pl/~kb/Artykuly/U-PA/Czas2.htm#tth_tAb1 +# https://www.astro.uni.torun.pl/~kb/Artykuly/U-PA/Czas2.htm#tth_tAb1 # Thanks to Przemysław Augustyniak (2005-05-28) for this reference. 
# He also gives these further references: # Mon Pol nr 13, poz 162 (1995) @@ -2071,7 +2077,7 @@ Zone Europe/Warsaw 1:24:00 - LMT 1880 # # From Paul Eggert (2014-08-11), after a heads-up from Stephen Colebourne: # According to a Portuguese decree (1911-05-26) -# http://dre.pt/pdf1sdip/1911/05/12500/23132313.pdf +# https://dre.pt/application/dir/pdf1sdip/1911/05/12500/23132313.pdf # Lisbon was at -0:36:44.68, but switched to GMT on 1912-01-01 at 00:00. # Round the old offset to -0:36:45. This agrees with Willett but disagrees # with Shanks, who says the transition occurred on 1911-05-24 at 00:00 for @@ -2253,7 +2259,7 @@ Zone Europe/Bucharest 1:44:24 - LMT 1891 Oct # 2011 No. 725" and contains no other dates or "effective date" information. # # Another source is -# http://www.rg.ru/2011/09/06/chas-zona-dok.html +# https://rg.ru/2011/09/06/chas-zona-dok.html # which, according to translate.google.com, begins "Resolution of the # Government of the Russian Federation on August 31, 2011 N 725" and also # contains "Date first official publication: September 6, 2011 Posted on: @@ -2261,7 +2267,7 @@ Zone Europe/Bucharest 1:44:24 - LMT 1891 Oct # does not contain any "effective date" information. # # Another source is -# http://en.wikipedia.org/wiki/Oymyakonsky_District#cite_note-RuTime-7 +# https://en.wikipedia.org/wiki/Oymyakonsky_District#cite_note-RuTime-7 # which, in note 8, contains "Resolution No. 725 of August 31, 2011... # Effective as of after 7 days following the day of the official publication" # but which does not contain any reference to September 6, 2011. @@ -2297,7 +2303,7 @@ Zone Europe/Bucharest 1:44:24 - LMT 1891 Oct # http://itar-tass.com/obschestvo/1333711 # http://www.pravo.gov.ru:8080/page.aspx?111660 # http://www.kremlin.ru/acts/46279 -# From October 26, 2014 the new Russian time zone map will looks like this: +# From October 26, 2014 the new Russian time zone map will look like this: # http://www.worldtimezone.com/dst_news/dst_news_russia-map-2014-07.html # From Paul Eggert (2006-03-22): @@ -2344,7 +2350,7 @@ Zone Europe/Bucharest 1:44:24 - LMT 1891 Oct # with maintenance only and represent our best guesses as to which regions # are covered by each zone. They are not meant to be taken as an authoritative # listing. The region codes listed come from -# http://en.wikipedia.org/w/?title=Federal_subjects_of_Russia&oldid=611810498 +# https://en.wikipedia.org/w/?title=Federal_subjects_of_Russia&oldid=611810498 # and are used for convenience only; no guarantees are made regarding their # future stability. ISO 3166-2:RU codes are also listed for first-level # divisions where available. @@ -2509,7 +2515,7 @@ Zone Europe/Kaliningrad 1:22:00 - LMT 1893 Apr # http://www.kaliningradka.ru/site_pc/cherez/index.php?ELEMENT_ID=40091 # says that Kaliningrad decided not to be an exception 2 days before the # 1991-03-31 switch and one person at -# http://izhevsk.ru/forum_light_message/50/682597-m8369040.html +# https://izhevsk.ru/forum_light_message/50/682597-m8369040.html # says he remembers that Samara opted out of the 1992-01-19 exception # 2 days before the switch. 
# @@ -2581,7 +2587,7 @@ Zone Europe/Simferopol 2:16:24 - LMT 1880 3:00 - MSK 1997 Mar lastSun 1:00u # From Alexander Krivenyshev (2014-03-17): # time change at 2:00 (2am) on March 30, 2014 -# http://vz.ru/news/2014/3/17/677464.html +# https://vz.ru/news/2014/3/17/677464.html # From Paul Eggert (2014-03-30): # Simferopol and Sevastopol reportedly changed their central town clocks # late the previous day, but this appears to have been ceremonial @@ -2764,7 +2770,7 @@ Zone Asia/Omsk 4:53:30 - LMT 1919 Nov 14 # suggests that Altai Republic transitioned to Moscow+3 on # 1995-05-28. # -# http://regnum.ru/news/society/1957270.html +# https://regnum.ru/news/society/1957270.html # has some historical data for Altai Krai: # before 1957: west part on UTC+6, east on UTC+7 # after 1957: UTC+7 @@ -3138,8 +3144,8 @@ Zone Asia/Magadan 10:03:12 - LMT 1924 May 2 # districts, but have very similar populations. In fact, Wikipedia currently # lists them both as having 3528 people, exactly 1668 males and 1860 females # each! (Yikes!) -# http://en.wikipedia.org/w/?title=Srednekolymsky_District&oldid=603435276 -# http://en.wikipedia.org/w/?title=Verkhnekolymsky_District&oldid=594378493 +# https://en.wikipedia.org/w/?title=Srednekolymsky_District&oldid=603435276 +# https://en.wikipedia.org/w/?title=Verkhnekolymsky_District&oldid=594378493 # Assume this is a mistake, albeit an amusing one. # # Looking at censuses, the populations of the two municipalities seem to have @@ -3460,7 +3466,7 @@ Zone Europe/Stockholm 1:12:12 - LMT 1879 Jan 1 # # From Alois Treindl (2013-09-11): # The Federal regulations say -# http://www.admin.ch/opc/de/classified-compilation/20071096/index.html +# https://www.admin.ch/opc/de/classified-compilation/20071096/index.html # ... the meridian for Bern mean time ... is 7 degrees 26' 22.50". # Expressed in time, it is 0h29m45.5s. @@ -3537,9 +3543,9 @@ Zone Europe/Zurich 0:34:08 - LMT 1853 Jul 16 # See above comment. # According to the articles linked below, Turkey will change into summer # time zone (GMT+3) on March 28, 2011 at 3:00 a.m. instead of March 27. # This change is due to a nationwide exam on 27th. -# http://www.worldbulletin.net/?aType=haber&ArticleID=70872 +# https://www.worldbulletin.net/?aType=haber&ArticleID=70872 # Turkish: -# http://www.hurriyet.com.tr/ekonomi/17230464.asp?gid=373 +# https://www.hurriyet.com.tr/yaz-saati-uygulamasi-bir-gun-ileri-alindi-17230464 # From Faruk Pasin (2014-02-14): # The DST for Turkey has been changed for this year because of the @@ -3675,7 +3681,7 @@ Link Europe/Istanbul Asia/Istanbul # Istanbul is in both continents. # http://www.segodnya.ua/news/14290482.html # # Deputies cancelled the winter time (in Russian) -# http://www.pravda.com.ua/rus/news/2011/09/20/6600616/ +# https://www.pravda.com.ua/rus/news/2011/09/20/6600616/ # # From Philip Pizzey (2011-10-18): # Today my Ukrainian colleagues have informed me that the diff --git a/src/timezone/data/northamerica b/src/timezone/data/northamerica index 6ede9dcd96..e5d3eca41c 100644 --- a/src/timezone/data/northamerica +++ b/src/timezone/data/northamerica @@ -105,10 +105,13 @@ # Last night I heard part of a rebroadcast of a 1945 Arch Oboler radio drama. # In the introduction, Oboler spoke of "Eastern Peace Time." # An AltaVista search turned up: -# http://rowayton.org/rhs/hstaug45.html +# https://web.archive.org/web/20000926032210/http://rowayton.org/rhs/hstaug45.html # "When the time is announced over the radio now, it is 'Eastern Peace # Time' instead of the old familiar 'Eastern War Time.' 
Peace is wonderful." # (August 1945) by way of confirmation. +# +# From Paul Eggert (2017-09-23): +# This was the V-J Day issue of the Clamdigger, a Rowayton, CT newsletter. # From Joseph Gallant citing # George H. Douglas, _The Early Days of Radio Broadcasting_ (1987): @@ -257,7 +260,7 @@ Zone PST8PDT -8:00 US P%sT # HST and HDT are standardized abbreviations for Hawaii-Aleutian # standard and daylight times. See section 9.47 (p 234) of the # U.S. Government Printing Office Style Manual (2008) -# http://www.gpo.gov/fdsys/pkg/GPO-STYLEMANUAL-2008/pdf/GPO-STYLEMANUAL-2008.pdf +# https://www.gpo.gov/fdsys/pkg/GPO-STYLEMANUAL-2008/pdf/GPO-STYLEMANUAL-2008.pdf # From Arthur David Olson, 2005-08-09 # The following was signed into law on 2005-08-08. @@ -346,7 +349,7 @@ Zone America/New_York -4:56:02 - LMT 1883 Nov 18 12:03:58 # western Tennessee, most of Texas, Wisconsin # From Larry M. Smith (2006-04-26) re Wisconsin: -# http://www.legis.state.wi.us/statutes/Stat0175.pdf ... +# https://docs.legis.wisconsin.gov/statutes/statutes/175.pdf # is currently enforced at the 01:00 time of change. Because the local # "bar time" in the state corresponds to 02:00, a number of citations # are issued for the "sale of class 'B' alcohol after prohibited @@ -355,7 +358,7 @@ Zone America/New_York -4:56:02 - LMT 1883 Nov 18 12:03:58 # From Douglas R. Bomberg (2007-03-12): # Wisconsin has enacted (nearly eleventh-hour) legislation to get WI # Statue 175 closer in synch with the US Congress' intent.... -# http://www.legis.state.wi.us/2007/data/acts/07Act3.pdf +# https://docs.legis.wisconsin.gov/2007/related/acts/3 # From an email administrator of the City of Fort Pierre, SD (2015-12-21): # Fort Pierre is technically located in the Mountain time zone as is @@ -402,7 +405,7 @@ Zone America/North_Dakota/New_Salem -6:45:39 - LMT 1883 Nov 18 12:14:21 # ...it appears that Mercer County, North Dakota, changed from the # mountain time zone to the central time zone at the last transition from # daylight-saving to standard time (on Nov. 7, 2010): -# http://www.gpo.gov/fdsys/pkg/FR-2010-09-29/html/2010-24376.htm +# https://www.gpo.gov/fdsys/pkg/FR-2010-09-29/html/2010-24376.htm # http://www.bismarcktribune.com/news/local/article_1eb1b588-c758-11df-b472-001cc4c03286.html # From Andy Lipscomb (2011-01-24): @@ -453,7 +456,7 @@ Zone America/Denver -6:59:56 - LMT 1883 Nov 18 12:00:04 # legal time, and is not part of the data here.) See: # Ross SA. An energy crisis from the past: Northern California in 1948. # Working Paper No. 8, Institute of Governmental Studies, UC Berkeley, -# 1973-11. http://escholarship.org/uc/item/8x22k30c +# 1973-11. https://escholarship.org/uc/item/8x22k30c # # In another measure to save electricity, DST was instituted from 1948-03-14 # at 02:01 to 1949-01-16 at 02:00, with the governor having the option to move @@ -474,8 +477,8 @@ Zone America/Denver -6:59:56 - LMT 1883 Nov 18 12:00:04 # which established DST from April's last Sunday at 01:00 until September's # last Sunday at 02:00. This was amended by 1962's Proposition 6, which changed # the fall-back date to October's last Sunday. 
See: -# http://repository.uchastings.edu/cgi/viewcontent.cgi?article=1501&context=ca_ballot_props -# http://repository.uchastings.edu/cgi/viewcontent.cgi?article=1636&context=ca_ballot_props +# https://repository.uchastings.edu/cgi/viewcontent.cgi?article=1501&context=ca_ballot_props +# https://repository.uchastings.edu/cgi/viewcontent.cgi?article=1636&context=ca_ballot_props # # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER Rule CA 1948 only - Mar 14 2:01 1:00 D @@ -492,20 +495,31 @@ Zone America/Los_Angeles -7:52:58 - LMT 1883 Nov 18 12:07:02 # Alaska # AK%sT is the modern abbreviation for -09 per USNO. # -# From Paul Eggert (2001-05-30): +# From Paul Eggert (2017-06-15): # Howse writes that Alaska switched from the Julian to the Gregorian calendar, # and from east-of-GMT to west-of-GMT days, when the US bought it from Russia. -# This was on 1867-10-18, a Friday; the previous day was 1867-10-06 Julian, -# also a Friday. Include only the time zone part of this transition, -# ignoring the switch from Julian to Gregorian, since we can't represent -# the Julian calendar. -# -# As far as we know, none of the exact locations mentioned below were +# On Friday, 1867-10-18 (Gregorian), at precisely 15:30 local time, the +# Russian forts and fleet at Sitka fired salutes to mark the ceremony of +# formal transfer. See the Sacramento Daily Union (1867-11-14), p 3, col 2. +# https://cdnc.ucr.edu/cgi-bin/cdnc?a=d&d=SDU18671114.2.12.1 +# Sitka workers did not change their calendars until Sunday, 1867-10-20, +# and so celebrated two Sundays that week. See: Ahllund T (tr Hallamaa P). +# From the memoirs of a Finnish workman. Alaska History. 2006 Fall;21(2):1-25. +# http://alaskahistoricalsociety.org/wp-content/uploads/2016/12/Ahllund-2006-Memoirs-of-a-Finnish-Workman.pdf +# Include only the time zone part of this transition, ignoring the switch +# from Julian to Gregorian, since we can't represent the Julian calendar. +# +# As far as we know, of the locations mentioned below only Sitka was # permanently inhabited in 1867 by anyone using either calendar. -# (Yakutat was colonized by the Russians in 1799, but the settlement -# was destroyed in 1805 by a Yakutat-kon war party.) However, there -# were nearby inhabitants in some cases and for our purposes perhaps -# it's best to simply use the official transition. +# (Yakutat was colonized by the Russians in 1799, but the settlement was +# destroyed in 1805 by a Yakutat-kon war party.) Many of Alaska's inhabitants +# were unaware of the US acquisition of Alaska, much less of any calendar or +# time change. However, the Russian-influenced part of Alaska did observe +# Russian time, and it is more accurate to model this than to ignore it. +# The database format requires an exact transition time; use the Russian +# salute as a somewhat-arbitrary time for the formal transfer of control for +# all of Alaska. Sitka's UTC offset is -9:01:13; adjust its 15:30 to the +# local times of other Alaskan locations so that they change simultaneously. # From Paul Eggert (2014-07-18): # One opinion of the early-1980s turmoil in Alaska over time zones and @@ -558,10 +572,10 @@ Zone America/Los_Angeles -7:52:58 - LMT 1883 Nov 18 12:07:02 # It seems Metlakatla did go off PST on Sunday, November 1, changing # their time to AKST and are going to follow Alaska's DST, switching # between AKST and AKDT from now on.... 
-# http://www.krbd.org/2015/10/30/annette-island-times-they-are-a-changing/ +# https://www.krbd.org/2015/10/30/annette-island-times-they-are-a-changing/ # Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Juneau 15:02:19 - LMT 1867 Oct 18 +Zone America/Juneau 15:02:19 - LMT 1867 Oct 19 15:33:32 -8:57:41 - LMT 1900 Aug 20 12:00 -8:00 - PST 1942 -8:00 US P%sT 1946 @@ -571,7 +585,7 @@ Zone America/Juneau 15:02:19 - LMT 1867 Oct 18 -8:00 US P%sT 1983 Oct 30 2:00 -9:00 US Y%sT 1983 Nov 30 -9:00 US AK%sT -Zone America/Sitka 14:58:47 - LMT 1867 Oct 18 +Zone America/Sitka 14:58:47 - LMT 1867 Oct 19 15:30 -9:01:13 - LMT 1900 Aug 20 12:00 -8:00 - PST 1942 -8:00 US P%sT 1946 @@ -579,7 +593,7 @@ Zone America/Sitka 14:58:47 - LMT 1867 Oct 18 -8:00 US P%sT 1983 Oct 30 2:00 -9:00 US Y%sT 1983 Nov 30 -9:00 US AK%sT -Zone America/Metlakatla 15:13:42 - LMT 1867 Oct 18 +Zone America/Metlakatla 15:13:42 - LMT 1867 Oct 19 15:44:55 -8:46:18 - LMT 1900 Aug 20 12:00 -8:00 - PST 1942 -8:00 US P%sT 1946 @@ -587,14 +601,14 @@ Zone America/Metlakatla 15:13:42 - LMT 1867 Oct 18 -8:00 US P%sT 1983 Oct 30 2:00 -8:00 - PST 2015 Nov 1 2:00 -9:00 US AK%sT -Zone America/Yakutat 14:41:05 - LMT 1867 Oct 18 +Zone America/Yakutat 14:41:05 - LMT 1867 Oct 19 15:12:18 -9:18:55 - LMT 1900 Aug 20 12:00 -9:00 - YST 1942 -9:00 US Y%sT 1946 -9:00 - YST 1969 -9:00 US Y%sT 1983 Nov 30 -9:00 US AK%sT -Zone America/Anchorage 14:00:24 - LMT 1867 Oct 18 +Zone America/Anchorage 14:00:24 - LMT 1867 Oct 19 14:31:37 -9:59:36 - LMT 1900 Aug 20 12:00 -10:00 - AST 1942 -10:00 US A%sT 1967 Apr @@ -602,7 +616,7 @@ Zone America/Anchorage 14:00:24 - LMT 1867 Oct 18 -10:00 US AH%sT 1983 Oct 30 2:00 -9:00 US Y%sT 1983 Nov 30 -9:00 US AK%sT -Zone America/Nome 12:58:21 - LMT 1867 Oct 18 +Zone America/Nome 12:58:22 - LMT 1867 Oct 19 13:29:35 -11:01:38 - LMT 1900 Aug 20 12:00 -11:00 - NST 1942 -11:00 US N%sT 1946 @@ -611,7 +625,7 @@ Zone America/Nome 12:58:21 - LMT 1867 Oct 18 -11:00 US B%sT 1983 Oct 30 2:00 -9:00 US Y%sT 1983 Nov 30 -9:00 US AK%sT -Zone America/Adak 12:13:21 - LMT 1867 Oct 18 +Zone America/Adak 12:13:22 - LMT 1867 Oct 19 12:44:35 -11:46:38 - LMT 1900 Aug 20 12:00 -11:00 - NST 1942 -11:00 US N%sT 1946 @@ -647,7 +661,7 @@ Zone America/Adak 12:13:21 - LMT 1867 Oct 18 # "Hawaiian Time" by Robert C. Schmitt and Doak C. Cox appears on pages 207-225 # of volume 26 of The Hawaiian Journal of History (1992). As of 2010-12-09, # the article is available at -# http://evols.library.manoa.hawaii.edu/bitstream/10524/239/2/JL26215.pdf +# https://evols.library.manoa.hawaii.edu/bitstream/10524/239/2/JL26215.pdf # and indicates that standard time was adopted effective noon, January # 13, 1896 (page 218), that in "1933, the Legislature decreed daylight # saving for the period between the last Sunday of each April and the @@ -746,7 +760,7 @@ Zone America/Boise -7:44:49 - LMT 1883 Nov 18 12:15:11 # Indiana # # For a map of Indiana's time zone regions, see: -# http://en.wikipedia.org/wiki/Time_in_Indiana +# https://en.wikipedia.org/wiki/Time_in_Indiana # # From Paul Eggert (2007-08-17): # Since 1970, most of Indiana has been like America/Indiana/Indianapolis, @@ -973,7 +987,7 @@ Zone America/Kentucky/Louisville -5:43:02 - LMT 1883 Nov 18 12:16:58 # From Paul Eggert (2001-07-16): # The final rule was published in the # Federal Register 65, 160 (2000-08-17), pp 50154-50158. 
-# http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=2000_register&docid=fr17au00-22 +# https://www.gpo.gov/fdsys/pkg/FR-2000-08-17/html/00-20854.htm # Zone America/Kentucky/Monticello -5:39:24 - LMT 1883 Nov 18 12:20:36 -6:00 US C%sT 1946 @@ -999,7 +1013,7 @@ Zone America/Kentucky/Monticello -5:39:24 - LMT 1883 Nov 18 12:20:36 # West Wendover, NV officially switched from Pacific to mountain time on # 1999-10-31. See the # Federal Register 64, 203 (1999-10-21), pp 56705-56707. -# http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=1999_register&docid=fr21oc99-15 +# https://www.gpo.gov/fdsys/pkg/FR-1999-10-21/html/99-27240.htm # However, the Federal Register says that West Wendover already operated # on mountain time, and the rule merely made this official; # hence a separate tz entry is not needed. @@ -1029,12 +1043,23 @@ Zone America/Kentucky/Monticello -5:39:24 - LMT 1883 Nov 18 12:20:36 # one hour in 1914." This change is not in Shanks. We have no more # info, so omit this for now. # +# From Paul Eggert (2017-07-26): +# Although Shanks says Detroit observed DST in 1967 from 06-14 00:01 +# until 10-29 00:01, I now see multiple reports that this is incorrect. +# For example, according to a 50-year anniversary report about the 1967 +# Detroit riots and a major-league doubleheader on 1967-07-23, "By the time +# the last fly ball of the doubleheader settled into the glove of leftfielder +# Lenny Green, it was after 7 p.m. Detroit did not observe daylight saving +# time, so light was already starting to fail. Twilight was made even deeper +# by billowing columns of smoke that ascended in an unbroken wall north of the +# ballpark." See: Dow B. Detroit '67: As violence unfolded, Tigers played two +# at home vs. Yankees. Detroit Free Press 2017-07-23. +# https://www.freep.com/story/sports/mlb/tigers/2017/07/23/detroit-tigers-1967-riot-new-york-yankees/499951001/ +# # Most of Michigan observed DST from 1973 on, but was a bit late in 1975. # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER Rule Detroit 1948 only - Apr lastSun 2:00 1:00 D Rule Detroit 1948 only - Sep lastSun 2:00 0 S -Rule Detroit 1967 only - Jun 14 2:00 1:00 D -Rule Detroit 1967 only - Oct lastSun 2:00 0 S # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone America/Detroit -5:32:11 - LMT 1905 -6:00 - CST 1915 May 15 2:00 @@ -1098,7 +1123,7 @@ Zone America/Menominee -5:50:27 - LMT 1885 Sep 18 12:00 # [PDF] (1914-03) # # Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94 -# . +# . # # See the 'europe' file for Greenland. @@ -1144,19 +1169,19 @@ Zone America/Menominee -5:50:27 - LMT 1885 Sep 18 12:00 # The British Columbia government announced yesterday that it will # adjust daylight savings next year to align with changes in the # U.S. and the rest of Canada.... -# http://www2.news.gov.bc.ca/news_releases_2005-2009/2006AG0014-000330.htm +# https://archive.news.gov.bc.ca/releases/news_releases_2005-2009/2006AG0014-000330.htm # ... # Nova Scotia # Daylight saving time will be extended by four weeks starting in 2007.... -# http://www.gov.ns.ca/just/regulations/rg2/2006/ma1206.pdf +# https://www.novascotia.ca/just/regulations/rg2/2006/ma1206.pdf # # [For New Brunswick] the new legislation dictates that the time change is to # be done at 02:00 instead of 00:01. -# http://www.gnb.ca/0062/acts/BBA-2006/Chap-19.pdf +# https://www.gnb.ca/0062/acts/BBA-2006/Chap-19.pdf # ... # Manitoba has traditionally changed the clock every fall at 03:00. # As of 2006, the transition is to take place one hour earlier at 02:00. 
-# http://web2.gov.mb.ca/laws/statutes/ccsm/o030e.php +# https://web2.gov.mb.ca/laws/statutes/ccsm/o030e.php # ... # [Alberta, Ontario, Quebec] will follow US rules. # http://www.qp.gov.ab.ca/documents/spring/CH03_06.CFM @@ -1170,7 +1195,7 @@ Zone America/Menominee -5:50:27 - LMT 1885 Sep 18 12:00 # http://www.hoa.gov.nl.ca/hoa/bills/Bill0634.htm # ... # Yukon -# http://www.gov.yk.ca/legislation/regs/oic2006_127.pdf +# https://www.gov.yk.ca/legislation/regs/oic2006_127.pdf # ... # N.W.T. will follow US rules. Whoever maintains the government web site # does not seem to believe in bookmarks. To see the news release, click the @@ -1191,8 +1216,8 @@ Zone America/Menominee -5:50:27 - LMT 1885 Sep 18 12:00 # time and daylight saving time arrangements in Canada circa 1998. # # National Research Council Canada maintains info about time zones and DST. -# http://www.nrc-cnrc.gc.ca/eng/services/time/time_zones.html -# http://www.nrc-cnrc.gc.ca/eng/services/time/faq/index.html#Q5 +# https://www.nrc-cnrc.gc.ca/eng/services/time/time_zones.html +# https://www.nrc-cnrc.gc.ca/eng/services/time/faq/index.html#Q5 # Its unofficial information is often taken from Matthews and Vincent. # From Paul Eggert (2006-06-27): @@ -1229,11 +1254,13 @@ Rule Canada 2007 max - Nov Sun>=1 2:00 0 S # Newfoundland and Labrador -# From Paul Eggert (2000-10-02): -# Matthews and Vincent (1998) write that Labrador should use NST/NDT, -# but the only part of Labrador that follows the rules is the -# southeast corner, including Port Hope Simpson and Mary's Harbour, -# but excluding, say, Black Tickle. +# From Paul Eggert (2017-10-14): +# Legally Labrador should observe Newfoundland time; see: +# McLeod J. Labrador time - legal or not? St. John's Telegram, 2017-10-07 +# http://www.thetelegram.com/news/local/labrador-time--legal-or-not-154860/ +# Matthews and Vincent (1998) write that the only part of Labrador +# that follows the rules is the southeast corner, including Port Hope +# Simpson and Mary's Harbour, but excluding, say, Black Tickle. # Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S Rule StJohns 1917 only - Apr 8 2:00 1:00 D @@ -1433,7 +1460,7 @@ Zone America/Moncton -4:19:08 - LMT 1883 Dec 9 # http://www.justice.gouv.qc.ca/english/publications/generale/temps-minganie-a.htm # that the coastal strip from just east of Natashquan to Blanc-Sablon # observes Atlantic standard time all year round. -# http://www.assnat.qc.ca/Media/Process.aspx?MediaId=ANQ.Vigie.Bll.DocumentGenerique_8845en +# https://www.assnat.qc.ca/Media/Process.aspx?MediaId=ANQ.Vigie.Bll.DocumentGenerique_8845en # says this common practice was codified into law as of 2007. # For lack of better info, guess this practice began around 1970, contra to # Shanks & Pottenger who have this region observing AST/ADT. @@ -1465,6 +1492,11 @@ Zone America/Blanc-Sablon -3:48:28 - LMT 1884 # earlier in June). # # Kenora, Ontario, was to abandon DST on 1914-06-01 (-05-21). +# +# From Paul Eggert (2017-07-08): +# For more on Orillia, see: Daubs K. Bold attempt at daylight saving +# time became a comic failure in Orillia. Toronto Star 2017-07-08. +# https://www.thestar.com/news/insight/2017/07/08/bold-attempt-at-daylight-saving-time-became-a-comic-failure-in-orillia.html # From Paul Eggert (1997-10-17): # Mark Brader writes that an article in the 1997-10-14 Toronto Star @@ -1956,7 +1988,7 @@ Zone America/Creston -7:46:04 - LMT 1884 # * 1967. Paragraph 28(34)(g) of the Interpretation Act, S.C. 1967-68, # c. 7 defines Yukon standard time as UTC-9.... 
# see Interpretation Act, R.S.C. 1985, c. I-21, s. 35(1). -# [http://canlii.ca/t/7vhg] +# [https://www.canlii.org/en/ca/laws/stat/rsc-1985-c-i-21/latest/rsc-1985-c-i-21.html] # * C.O. 1973/214 switched Yukon to PST on 1973-10-28 00:00. # * O.I.C. 1980/02 established DST. # * O.I.C. 1987/056 changed DST to Apr firstSun 2:00 to Oct lastSun 2:00. @@ -2021,7 +2053,7 @@ Zone America/Creston -7:46:04 - LMT 1884 # hours behind Greenwich Time. # # * Yukon Standard Time defined as Pacific Standard Time, YCO 1973/214 -# http://www.canlii.org/en/yk/laws/regu/yco-1973-214/latest/yco-1973-214.html +# https://www.canlii.org/en/yk/laws/regu/yco-1973-214/latest/yco-1973-214.html # C.O. 1973/214 INTERPRETATION ACT ... # # 1. Effective October 28, 1973 Commissioner's Order 1967/59 is hereby @@ -2036,7 +2068,7 @@ Zone America/Creston -7:46:04 - LMT 1884 # http://? - no online source found # # * Yukon Daylight Saving Time, YOIC 1987/56 -# http://www.canlii.org/en/yk/laws/regu/yoic-1987-56/latest/yoic-1987-56.html +# https://www.canlii.org/en/yk/laws/regu/yoic-1987-56/latest/yoic-1987-56.html # O.I.C. 1987/056 INTERPRETATION ACT ... # # In every year between @@ -2048,7 +2080,7 @@ Zone America/Creston -7:46:04 - LMT 1884 # Dated ... 9th day of March, A.D., 1987. # # * Yukon Daylight Saving Time 2006, YOIC 2006/127 -# http://www.canlii.org/en/yk/laws/regu/yoic-2006-127/latest/yoic-2006-127.html +# https://www.canlii.org/en/yk/laws/regu/yoic-2006-127/latest/yoic-2006-127.html # O.I.C. 2006/127 INTERPRETATION ACT ... # # 1. In Yukon each year the time for general purposes shall be 7 hours @@ -2062,7 +2094,7 @@ Zone America/Creston -7:46:04 - LMT 1884 # 3. This order comes into force January 1, 2007. # # * Interpretation Act, RSY 2002, c 125 -# http://www.canlii.org/en/yk/laws/stat/rsy-2002-c-125/latest/rsy-2002-c-125.html +# https://www.canlii.org/en/yk/laws/stat/rsy-2002-c-125/latest/rsy-2002-c-125.html # From Rives McDow (1999-09-04): # Nunavut ... moved ... to incorporate the whole territory into one time zone. @@ -2105,7 +2137,7 @@ Zone America/Creston -7:46:04 - LMT 1884 # From Michaela Rodrigue, writing in the # Nunatsiaq News (1999-11-19): -# http://www.nunatsiaq.com/archives/nunavut991130/nvt91119_17.html +# http://www.nunatsiaqonline.ca/archives/nunavut991130/nvt91119_17.html # Clyde River, Pangnirtung and Sanikiluaq now operate with two time zones, # central - or Nunavut time - for government offices, and eastern time # for municipal offices and schools.... Igloolik [was similar but then] @@ -2123,7 +2155,7 @@ Zone America/Creston -7:46:04 - LMT 1884 # Central Time and Southampton Island [in the Central zone] is not # required to use daylight savings. 
-# From +# From # Nunavut now has two time zones (2000-11-10): # The Nunavut government would allow its employees in Kugluktuk and # Cambridge Bay to operate on central time year-round, putting them @@ -2454,7 +2486,7 @@ Zone America/Dawson -9:17:40 - LMT 1900 Aug 20 # http://gaceta.diputados.gob.mx/Gaceta/61/2009/dic/V2-101209.html # # Our page: -# http://www.timeanddate.com/news/time/north-mexico-dst-change.html +# https://www.timeanddate.com/news/time/north-mexico-dst-change.html # From Arthur David Olson (2010-01-20): # The page @@ -2873,7 +2905,7 @@ Zone America/Costa_Rica -5:36:13 - LMT 1890 # San José # http://www.nnc.cubaweb.cu/marzo-2008/cien-1-11-3-08.htm # # Some more background information is posted here: -# http://www.timeanddate.com/news/time/cuba-starts-dst-march-16.html +# https://www.timeanddate.com/news/time/cuba-starts-dst-march-16.html # # The article also says that Cuba has been observing DST since 1963, # while Shanks (and tzdata) has 1965 as the first date (except in the @@ -2920,7 +2952,7 @@ Zone America/Costa_Rica -5:36:13 - LMT 1890 # San José # http://granma.co.cu/2011/03/08/nacional/artic01.html # # Our info: -# http://www.timeanddate.com/news/time/cuba-starts-dst-2011.html +# https://www.timeanddate.com/news/time/cuba-starts-dst-2011.html # # From Steffen Thorsen (2011-10-30) # Cuba will end DST two weeks later this year. Instead of going back @@ -2930,7 +2962,7 @@ Zone America/Costa_Rica -5:36:13 - LMT 1890 # San José # http://www.radioangulo.cu/noticias/cuba/17105-cuba-restablecera-el-horario-del-meridiano-de-greenwich.html # # Our page: -# http://www.timeanddate.com/news/time/cuba-time-changes-2011.html +# https://www.timeanddate.com/news/time/cuba-time-changes-2011.html # # From Steffen Thorsen (2012-03-01) # According to Radio Reloj, Cuba will start DST on Midnight between March @@ -2940,7 +2972,7 @@ Zone America/Costa_Rica -5:36:13 - LMT 1890 # San José # http://www.radioreloj.cu/index.php/noticias-radio-reloj/71-miscelaneas/7529-cuba-aplicara-el-horario-de-verano-desde-el-1-de-abril # # Our info on it: -# http://www.timeanddate.com/news/time/cuba-starts-dst-2012.html +# https://www.timeanddate.com/news/time/cuba-starts-dst-2012.html # From Steffen Thorsen (2012-11-03): # Radio Reloj and many other sources report that Cuba is changing back @@ -3135,8 +3167,8 @@ Zone America/Guatemala -6:02:04 - LMT 1918 Oct 5 # From Steffen Thorsen (2016-03-12): # Jean Antoine, editor of www.haiti-reference.com informed us that Haiti # are not going on DST this year. Several other resources confirm this: ... -# http://www.radiotelevisioncaraibes.com/presse/heure_d_t_pas_de_changement_d_heure_pr_vu_pour_cet_ann_e.html -# http://www.vantbefinfo.com/changement-dheure-pas-pour-haiti/ +# https://www.radiotelevisioncaraibes.com/presse/heure_d_t_pas_de_changement_d_heure_pr_vu_pour_cet_ann_e.html +# https://www.vantbefinfo.com/changement-dheure-pas-pour-haiti/ # http://news.anmwe.com/haiti-lheure-nationale-ne-sera-ni-avancee-ni-reculee-cette-annee/ # From Steffen Thorsen (2017-03-12): @@ -3335,7 +3367,7 @@ Zone America/Miquelon -3:44:40 - LMT 1911 May 15 # St Pierre # Turks and Caicos # # From Chris Dunn in -# http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=415007 +# https://bugs.debian.org/415007 # (2007-03-15): In the Turks & Caicos Islands (America/Grand_Turk) the # daylight saving dates for time changes have been adjusted to match # the recent U.S. change of dates. 
@@ -3357,12 +3389,25 @@ Zone America/Miquelon -3:44:40 - LMT 1911 May 15 # St Pierre # "permanent daylight saving time" by one year.... # http://tcweeklynews.com/time-change-to-go-ahead-this-november-p5437-127.htm # +# From the Turks & Caicos Cabinet (2017-07-20), heads-up from Steffen Thorsen: +# ... agreed to the reintroduction in TCI of Daylight Saving Time (DST) +# during the summer months and Standard Time, also known as Local +# Time, during the winter months with effect from April 2018 ... +# https://www.gov.uk/government/news/turks-and-caicos-post-cabinet-meeting-statement--3 +# +# From Paul Eggert (2017-08-26): +# The date of effect of the spring 2018 change appears to be March 11, +# which makes more sense. See: Hamilton D. Time change back +# by March 2018 for TCI. Magnetic Media. 2017-08-25. +# http://magneticmediatv.com/2017/08/time-change-back-by-march-2018-for-tci/ +# # Zone NAME GMTOFF RULES FORMAT [UNTIL] Zone America/Grand_Turk -4:44:32 - LMT 1890 -5:07:11 - KMT 1912 Feb # Kingston Mean Time -5:00 - EST 1979 -5:00 US E%sT 2015 Nov Sun>=1 2:00 - -4:00 - AST + -4:00 - AST 2018 Mar 11 3:00 + -5:00 US E%sT # British Virgin Is # Virgin Is diff --git a/src/timezone/data/southamerica b/src/timezone/data/southamerica index 6038c3b65c..bbae226156 100644 --- a/src/timezone/data/southamerica +++ b/src/timezone/data/southamerica @@ -22,7 +22,7 @@ # # For data circa 1899, a common source is: # Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94. -# http://www.jstor.org/stable/1774359 +# https://www.jstor.org/stable/1774359 # # These tables use numeric abbreviations like -03 and -0330 for # integer hour and minute UTC offsets. Although earlier editions used @@ -265,8 +265,8 @@ Rule Arg 2008 only - Oct Sun>=15 0:00 1:00 S # # Es inminente que en San Luis atrasen una hora los relojes # (It is imminent in San Luis clocks one hour delay) -# http://www.lagaceta.com.ar/nota/253414/Economia/Es-inminente-que-en-San-Luis-atrasen-una-hora-los-relojes.html -# http://www.worldtimezone.net/dst_news/dst_news_argentina02.html +# https://www.lagaceta.com.ar/nota/253414/Economia/Es-inminente-que-en-San-Luis-atrasen-una-hora-los-relojes.html +# http://www.worldtimezone.com/dst_news/dst_news_argentina02.html # From Jesper Nørgaard Welen (2008-01-18): # The page of the San Luis provincial government @@ -385,7 +385,7 @@ Rule Arg 2008 only - Oct Sun>=15 0:00 1:00 S # Perhaps San Luis operates on the legal fiction that it is at -04 # with perpetual summer time, but ordinary usage typically seems to # just say it's at -03; see, for example, -# http://es.wikipedia.org/wiki/Hora_oficial_argentina +# https://es.wikipedia.org/wiki/Hora_oficial_argentina # We've documented similar situations as being plain changes to # standard time, so let's do that here too. This does not change UTC # offsets, only tm_isdst and the time zone abbreviations. One minor @@ -716,7 +716,7 @@ Zone America/La_Paz -4:32:36 - LMT 1890 # (Portuguese) # # We have a written a short article about it as well: -# http://www.timeanddate.com/news/time/brazil-dst-2008-2009.html +# https://www.timeanddate.com/news/time/brazil-dst-2008-2009.html # # From Alexander Krivenyshev (2011-10-04): # State Bahia will return to Daylight savings time this year after 8 years off. 
@@ -725,7 +725,7 @@ Zone America/La_Paz -4:32:36 - LMT 1890 # In Portuguese: # http://g1.globo.com/bahia/noticia/2011/10/governador-jaques-wagner-confirma-horario-de-verao-na-bahia.html -# http://noticias.terra.com.br/brasil/noticias/0,,OI5390887-EI8139,00-Bahia+volta+a+ter+horario+de+verao+apos+oito+anos.html +# https://noticias.terra.com.br/brasil/noticias/0,,OI5390887-EI8139,00-Bahia+volta+a+ter+horario+de+verao+apos+oito+anos.html # From Guilherme Bernardes Rodrigues (2011-10-07): # There is news in the media, however there is still no decree about it. @@ -751,16 +751,16 @@ Zone America/La_Paz -4:32:36 - LMT 1890 # From Rodrigo Severo (2012-10-16): # Tocantins state will have DST. -# http://noticias.terra.com.br/brasil/noticias/0,,OI6232536-EI306.html +# https://noticias.terra.com.br/brasil/noticias/0,,OI6232536-EI306.html # From Steffen Thorsen (2013-09-20): # Tocantins in Brazil is very likely not to observe DST from October.... # http://conexaoto.com.br/2013/09/18/ministerio-confirma-que-tocantins-esta-fora-do-horario-de-verao-em-2013-mas-falta-publicacao-de-decreto # We will keep this article updated when this is confirmed: -# http://www.timeanddate.com/news/time/brazil-starts-dst-2013.html +# https://www.timeanddate.com/news/time/brazil-starts-dst-2013.html # From Steffen Thorsen (2013-10-17): -# http://www.timeanddate.com/news/time/acre-amazonas-change-time-zone.html +# https://www.timeanddate.com/news/time/acre-amazonas-change-time-zone.html # Senator Jorge Viana announced that Acre will change time zone on November 10. # He did not specify the time of the change, nor if western parts of Amazonas # will change as well. @@ -1076,18 +1076,18 @@ Zone America/Rio_Branco -4:31:12 - LMT 1914 # the following source, cited by Oscar van Vlijmen (2006-10-08): # [1] Chile Law # http://www.webexhibits.org/daylightsaving/chile.html -# This contains a copy of a this official table: +# This contains a copy of this official table: # Cambios en la hora oficial de Chile desde 1900 (retrieved 2008-03-30) -# http://web.archive.org/web/20080330200901/http://www.horaoficial.cl/cambio.htm +# https://web.archive.org/web/20080330200901/http://www.horaoficial.cl/cambio.htm # [1] needs several corrections, though. # # The first set of corrections is from: # [2] History of the Official Time of Chile # http://www.horaoficial.cl/ing/horaof_ing.html (retrieved 2012-03-06). See: -# http://web.archive.org/web/20120306042032/http://www.horaoficial.cl/ing/horaof_ing.html +# https://web.archive.org/web/20120306042032/http://www.horaoficial.cl/ing/horaof_ing.html # This is an English translation of: # Historia de la hora oficial de Chile (retrieved 2012-10-24). See: -# http://web.archive.org/web/20121024234627/http://www.horaoficial.cl/horaof.htm +# https://web.archive.org/web/20121024234627/http://www.horaoficial.cl/horaof.htm # A fancier Spanish version (requiring mouse-clicking) is at: # http://www.horaoficial.cl/historia_hora.html # Conflicts between [1] and [2] were resolved as follows: @@ -1363,10 +1363,10 @@ Link America/Curacao America/Kralendijk # Caribbean Netherlands # Milne says the Central and South American Telegraph Company used -5:24:15. # # From Alois Treindl (2016-12-15): -# http://www.elcomercio.com/actualidad/hora-sixto-1993.html +# https://www.elcomercio.com/actualidad/hora-sixto-1993.html # ... Whether the law applied also to Galápagos, I do not know. 
# From Paul Eggert (2016-12-15): -# http://www.elcomercio.com/afull/modificacion-husohorario-ecuador-presidentes-decreto.html +# https://www.elcomercio.com/afull/modificacion-husohorario-ecuador-presidentes-decreto.html # This says President Sixto Durán Ballén signed decree No. 285, which # established DST from 1992-11-28 to 1993-02-05; it does not give transition # times. The people called it "hora de Sixto" ("Sixto hour"). The change did @@ -1778,7 +1778,7 @@ Zone America/Montevideo -3:44:44 - LMT 1898 Jun 28 # hours of presidential broadcasts, hours of lines,' quipped comedian # Jean Mary Curró ...". See: Cawthorne A, Kai D. Venezuela scraps # half-hour time difference set by Chavez. Reuters 2016-04-15 14:50 -0400 -# http://www.reuters.com/article/us-venezuela-timezone-idUSKCN0XC2BE +# https://www.reuters.com/article/us-venezuela-timezone-idUSKCN0XC2BE # # From Matt Johnson (2016-04-20): # ... published in the official Gazette [2016-04-18], here: From a32c0923b4da7f7df95616aaecbb85ef9f12f93f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 24 Oct 2017 14:08:40 -0400 Subject: [PATCH 0423/1087] Documentation improvements around domain types. I was a bit surprised to find that domains were almost completely unmentioned in the main SGML documentation, outside of the reference pages for CREATE/ALTER/DROP DOMAIN. In particular, noplace was it mentioned that we don't support domains over composite, making it hard to document the planned fix for that. Hence, add a section about domains to chapter 8 (Data Types). Also, modernize the type system overview in section 37.2; it had never heard of range types, and insisted on calling arrays base types, which seems a bit odd from a user's perspective; furthermore it didn't fit well with the fact that we now support arrays over types other than base types. It seems appropriate to use the term "container types" to describe all of arrays, composites, and ranges, so let's do that. Also a few other minor improvements, notably improve an example query in rowtypes.sgml by using a LATERAL function instead of an ad-hoc OFFSET 0 clause. In part this is mop-up for commit c12d570fa, which missed updating 37.2 to reflect the fact that it added arrays of domains. We could possibly back-patch this without that claim, but I don't feel a strong need to. --- doc/src/sgml/datatype.sgml | 54 ++++++++++++++++++++ doc/src/sgml/extend.sgml | 79 +++++++++++++++++++++--------- doc/src/sgml/ref/alter_domain.sgml | 12 ++--- doc/src/sgml/rowtypes.sgml | 13 +++-- 4 files changed, 124 insertions(+), 34 deletions(-) diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index 6a15f9030c..b397e18858 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -4359,6 +4359,60 @@ SET xmloption TO { DOCUMENT | CONTENT }; &rangetypes; + + Domain Types + + + domain + + + + data type + domain + + + + A domain is a user-defined data type that is + based on another underlying type. Optionally, + it can have constraints that restrict its valid values to a subset of + what the underlying type would allow. Otherwise it behaves like the + underlying type — for example, any operator or function that + can be applied to the underlying type will work on the domain type. + The underlying type can be any built-in or user-defined base type, + enum type, array or range type, or another domain. (Currently, domains + over composite types are not implemented.) 
+ + + + For example, we could create a domain over integers that accepts only + positive integers: + +CREATE DOMAIN posint AS integer CHECK (VALUE > 0); +CREATE TABLE mytable (id posint); +INSERT INTO mytable VALUES(1); -- works +INSERT INTO mytable VALUES(-1); -- fails + + + + + When an operator or function of the underlying type is applied to a + domain value, the domain is automatically down-cast to the underlying + type. Thus, for example, the result of mytable.id - 1 + is considered to be of type integer not posint. + We could write (mytable.id - 1)::posint to cast the + result back to posint, causing the domain's constraints + to be rechecked. In this case, that would result in an error if the + expression had been applied to an id value of + 1. Assigning a value of the underlying type to a field or variable of + the domain type is allowed without writing an explicit cast, but the + domain's constraints will be checked. + + + + For additional information see . + + + Object Identifier Types diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml index c1bd03ad4c..e819010875 100644 --- a/doc/src/sgml/extend.sgml +++ b/doc/src/sgml/extend.sgml @@ -106,31 +106,60 @@ composite + + container type + + + + data type + container + + - PostgreSQL data types are divided into base - types, composite types, domains, and pseudo-types. + PostgreSQL data types can be divided into base + types, container types, domains, and pseudo-types. Base Types - Base types are those, like int4, that are + Base types are those, like integer, that are implemented below the level of the SQL language (typically in a low-level language such as C). They generally correspond to what are often known as abstract data types. PostgreSQL can only operate on such types through functions provided by the user and only understands the behavior of such types to the extent that the user describes - them. Base types are further subdivided into scalar and array - types. For each scalar type, a corresponding array type is - automatically created that can hold variable-size arrays of that - scalar type. + them. + The built-in base types are described in . + + + + Enumerated (enum) types can be considered as a subcategory of base + types. The main difference is that they can be created using + just SQL commands, without any low-level programming. + Refer to for more information. - Composite Types + Container Types + + + PostgreSQL has three kinds + of container types, which are types that contain multiple + values of other types. These are arrays, composites, and ranges. + + + + Arrays can hold multiple values that are all of the same type. An array + type is automatically created for each base type, composite type, range + type, and domain type. But there are no arrays of arrays. So far as + the type system is concerned, multi-dimensional arrays are the same as + one-dimensional arrays. Refer to for more + information. + Composite types, or row types, are created whenever the user @@ -139,9 +168,15 @@ define a stand-alone composite type with no associated table. A composite type is simply a list of types with associated field names. A value of a composite type is a row or - record of field values. The user can access the component fields - from SQL queries. Refer to - for more information on composite types. + record of field values. Refer to + for more information. + + + + A range type can hold two values of the same type, which are the lower + and upper bounds of the range. 
Range types are user-created, although + a few built-in ones exist. Refer to + for more information. @@ -149,16 +184,12 @@ Domains - A domain is based on a particular base type and for many purposes - is interchangeable with its base type. However, a domain can - have constraints that restrict its valid values to a subset of - what the underlying base type would allow. - - - - Domains can be created using the SQL command - . - Their creation and use is not discussed in this chapter. + A domain is based on a particular underlying type and for many purposes + is interchangeable with its underlying type. However, a domain can have + constraints that restrict its valid values to a subset of what the + underlying type would allow. Domains are created using + the SQL command . + Refer to for more information. @@ -167,8 +198,8 @@ There are a few pseudo-types for special purposes. - Pseudo-types cannot appear as columns of tables or attributes of - composite types, but they can be used to declare the argument and + Pseudo-types cannot appear as columns of tables or components of + container types, but they can be used to declare the argument and result types of functions. This provides a mechanism within the type system to identify special classes of functions. lists the existing @@ -188,7 +219,7 @@ - type + data type polymorphic diff --git a/doc/src/sgml/ref/alter_domain.sgml b/doc/src/sgml/ref/alter_domain.sgml index 26e95aefcf..9cd044de54 100644 --- a/doc/src/sgml/ref/alter_domain.sgml +++ b/doc/src/sgml/ref/alter_domain.sgml @@ -199,7 +199,7 @@ ALTER DOMAIN name - NOT VALID + NOT VALID Do not verify existing column data for constraint validity. @@ -274,11 +274,11 @@ ALTER DOMAIN name Currently, ALTER DOMAIN ADD CONSTRAINT, ALTER - DOMAIN VALIDATE CONSTRAINT, and ALTER DOMAIN SET NOT NULL - will fail if the validated named domain or - any derived domain is used within a composite-type column of any - table in the database. They should eventually be improved to be - able to verify the new constraint for such nested columns. + DOMAIN VALIDATE CONSTRAINT, and ALTER DOMAIN SET NOT + NULL will fail if the named domain or any derived domain is used + within a container-type column (a composite, array, or range column) in + any table in the database. They should eventually be improved to be able + to verify the new constraint for such nested values. diff --git a/doc/src/sgml/rowtypes.sgml b/doc/src/sgml/rowtypes.sgml index bc2fc9b885..7e436a4606 100644 --- a/doc/src/sgml/rowtypes.sgml +++ b/doc/src/sgml/rowtypes.sgml @@ -328,11 +328,16 @@ SELECT (myfunc(x)).a, (myfunc(x)).b, (myfunc(x)).c FROM some_table; with either syntax. If it's an expensive function you may wish to avoid that, which you can do with a query like: -SELECT (m).* FROM (SELECT myfunc(x) AS m FROM some_table OFFSET 0) ss; +SELECT m.* FROM some_table, LATERAL myfunc(x) AS m; - The OFFSET 0 clause keeps the optimizer - from flattening the sub-select to arrive at the form with - multiple calls of myfunc(). + Placing the function in + a LATERAL FROM item keeps it from + being invoked more than once per row. m.* is still + expanded into m.a, m.b, m.c, but now those variables + are just references to the output of the FROM item. + (The LATERAL keyword is optional here, but we show it + to clarify that the function is getting x + from some_table.) 
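To see the rewritten example end to end, here is a minimal, self-contained sketch; the table contents and the function definition are invented for illustration (the documentation's myfunc() is only described abstractly), and only the final query shape comes from the patched text:

CREATE TABLE some_table (x int);
INSERT INTO some_table VALUES (1), (2);

-- Stand-in for the "expensive" composite-returning myfunc()
CREATE FUNCTION myfunc(int) RETURNS TABLE(a int, b int, c int)
  LANGUAGE sql AS 'SELECT $1, $1 * 2, $1 * 3';

-- The LATERAL form invokes myfunc() only once per row of some_table;
-- m.a, m.b, m.c are plain references to the FROM item's output columns.
SELECT m.a, m.b, m.c FROM some_table, LATERAL myfunc(x) AS m;

Unlike the old OFFSET 0 idiom, this form does not depend on an optimization fence to keep the select list from being expanded into three separate myfunc() calls.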
From 896eb5efbdcea5df12e7a464ae9c23dd1e25abd2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 24 Oct 2017 18:42:47 -0400 Subject: [PATCH 0424/1087] In the planner, delete joinaliasvars lists after we're done with them. Although joinaliasvars lists coming out of the parser are quite simple, those lists can contain arbitrarily complex expressions after subquery pullup. We do not perform expression preprocessing on them, meaning that expressions in those lists will not meet the expectations of later phases of the planner (for example, that they do not contain SubLinks). This had been thought pretty harmless, since we don't intentionally touch those lists in later phases --- but Andreas Seltenreich found a case in which adjust_appendrel_attrs() could recurse into a joinaliasvars list and then die on its assertion that it never sees a SubLink. We considered a couple of localized fixes to prevent that specific case from looking at the joinaliasvars lists, but really this seems like a generic hazard for all expression processing in the planner. Therefore, probably the best answer is to delete the joinaliasvars lists from the parsetree at the end of expression preprocessing, so that there are no reachable expressions that haven't been through preprocessing. The case Andreas found seems to be harmless in non-Assert builds, and so far there are no field reports suggesting that there are user-visible effects in other cases. I considered back-patching this anyway, but it turns out that Andreas' test doesn't fail at all in 9.4-9.6, because in those versions adjust_appendrel_attrs contains code (added in commit 842faa714 and removed again in commit 215b43cdc) to process SubLinks rather than complain about them. Barring discovery of another path by which unprocessed joinaliasvars lists can cause trouble, the most prudent compromise seems to be to patch this into v10 but not further. Patch by me, with thanks to Amit Langote for initial investigation and review. Discussion: https://postgr.es/m/87r2tvt9f1.fsf@ansel.ydns.eu --- src/backend/optimizer/plan/planner.c | 32 +++++++++++++++++++++---- src/test/regress/expected/subselect.out | 14 +++++++++++ src/test/regress/sql/subselect.sql | 18 ++++++++++++++ 3 files changed, 59 insertions(+), 5 deletions(-) diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index ecdd7280eb..d58635c887 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -776,6 +776,27 @@ subquery_planner(PlannerGlobal *glob, Query *parse, } } + /* + * Now that we are done preprocessing expressions, and in particular done + * flattening join alias variables, get rid of the joinaliasvars lists. + * They no longer match what expressions in the rest of the tree look + * like, because we have not preprocessed expressions in those lists (and + * do not want to; for example, expanding a SubLink there would result in + * a useless unreferenced subplan). Leaving them in place simply creates + * a hazard for later scans of the tree. We could try to prevent that by + * using QTW_IGNORE_JOINALIASES in every tree scan done after this point, + * but that doesn't sound very reliable. + */ + if (root->hasJoinRTEs) + { + foreach(l, parse->rtable) + { + RangeTblEntry *rte = lfirst_node(RangeTblEntry, l); + + rte->joinaliasvars = NIL; + } + } + /* * In some cases we may want to transfer a HAVING clause into WHERE. 
We * cannot do so if the HAVING clause contains aggregates (obviously) or
@@ -902,11 +923,12 @@ preprocess_expression(PlannerInfo *root, Node *expr, int kind)
 	/*
 	 * If the query has any join RTEs, replace join alias variables with
-	 * base-relation variables.  We must do this before sublink processing,
-	 * else sublinks expanded out from join aliases would not get processed.
-	 * We can skip it in non-lateral RTE functions, VALUES lists, and
-	 * TABLESAMPLE clauses, however, since they can't contain any Vars of the
-	 * current query level.
+	 * base-relation variables.  We must do this first, since any expressions
+	 * we may extract from the joinaliasvars lists have not been preprocessed.
+	 * For example, if we did this after sublink processing, sublinks expanded
+	 * out from join aliases would not get processed.  But we can skip this in
+	 * non-lateral RTE functions, VALUES lists, and TABLESAMPLE clauses, since
+	 * they can't contain any Vars of the current query level.
 	 */
 	if (root->hasJoinRTEs &&
 		!(kind == EXPRKIND_RTFUNC ||
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index f3ebad5857..992d29bb86 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -678,6 +678,20 @@ with q as (select max(f1) from int4_tbl group by f1 order by f1)
  (2147483647)
 (5 rows)

+--
+-- Test case for sublinks pulled up into joinaliasvars lists in an
+-- inherited update/delete query
+--
+begin;  -- this shouldn't delete anything, but be safe
+delete from road
+where exists (
+  select 1
+  from
+    int4_tbl cross join
+    ( select f1, array(select q1 from int8_tbl) as arr
+      from text_tbl ) ss
+  where road.name = ss.f1 );
+rollback;
 --
 -- Test case for sublinks pushed down into subselects via join alias expansion
 --
diff --git a/src/test/regress/sql/subselect.sql b/src/test/regress/sql/subselect.sql
index 5ac8badabe..0d2dd2f1d8 100644
--- a/src/test/regress/sql/subselect.sql
+++ b/src/test/regress/sql/subselect.sql
@@ -376,6 +376,24 @@ select q from (select max(f1) from int4_tbl group by f1 order by f1) q;
 with q as (select max(f1) from int4_tbl group by f1 order by f1)
   select q from q;

+--
+-- Test case for sublinks pulled up into joinaliasvars lists in an
+-- inherited update/delete query
+--
+
+begin;  -- this shouldn't delete anything, but be safe
+
+delete from road
+where exists (
+  select 1
+  from
+    int4_tbl cross join
+    ( select f1, array(select q1 from int8_tbl) as arr
+      from text_tbl ) ss
+  where road.name = ss.f1 );
+
+rollback;
+
 --
 -- Test case for sublinks pushed down into subselects via join alias expansion
 --

From f3c6e8a27a8f8436cada9a42e4f57338ed38c785 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan
Date: Wed, 25 Oct 2017 07:13:11 -0400
Subject: [PATCH 0425/1087] Add a utility function to extract variadic function
 arguments

This is especially useful in the case of "VARIADIC ANY" functions. The
caller can get the arguments and types regardless of whether or not an
explicit VARIADIC array argument has been used. The function also
provides an option to convert arguments of type "unknown" to "text".

Michael Paquier and me, reviewed by Tom Lane.

Backpatch to 9.4 in order to support the following json bug fix.
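For illustration, a hypothetical "VARIADIC any" C function built on the new helper might look like this; the function name and its behavior are invented for this sketch, while the helper's signature and its -1 return for "VARIADIC NULL" are taken from the code below:

#include "postgres.h"
#include "funcapi.h"

PG_FUNCTION_INFO_V1(count_non_nulls);

/*
 * Hypothetical count_non_nulls(VARIADIC "any") RETURNS integer.
 * Behaves the same whether called as count_non_nulls(a, b, c)
 * or as count_non_nulls(VARIADIC some_array).
 */
Datum
count_non_nulls(PG_FUNCTION_ARGS)
{
	Datum	   *args;
	Oid		   *types;
	bool	   *nulls;
	int			nargs;
	int			count = 0;
	int			i;

	/* The helper hides whether an explicit VARIADIC array was used */
	nargs = extract_variadic_args(fcinfo, 0, true, &args, &types, &nulls);

	/* -1 means the call was "VARIADIC NULL" */
	if (nargs < 0)
		PG_RETURN_NULL();

	for (i = 0; i < nargs; i++)
	{
		if (!nulls[i])
			count++;
	}

	PG_RETURN_INT32(count);
}

The matching SQL declaration would presumably be along the lines of
CREATE FUNCTION count_non_nulls(VARIADIC "any") RETURNS integer
AS 'MODULE_PATHNAME' LANGUAGE C.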
--- src/backend/utils/fmgr/funcapi.c | 115 ++++++++++++++++++++++++++++++- src/include/funcapi.h | 23 +++++++ 2 files changed, 137 insertions(+), 1 deletion(-) diff --git a/src/backend/utils/fmgr/funcapi.c b/src/backend/utils/fmgr/funcapi.c index 9c3f4510ce..b4f856eb13 100644 --- a/src/backend/utils/fmgr/funcapi.c +++ b/src/backend/utils/fmgr/funcapi.c @@ -2,7 +2,7 @@ * * funcapi.c * Utility and convenience functions for fmgr functions that return - * sets and/or composite types. + * sets and/or composite types, or deal with VARIADIC inputs. * * Copyright (c) 2002-2017, PostgreSQL Global Development Group * @@ -1400,3 +1400,116 @@ TypeGetTupleDesc(Oid typeoid, List *colaliases) return tupdesc; } + +/* + * extract_variadic_args + * + * Extract a set of argument values, types and NULL markers for a given + * input function which makes use of a VARIADIC input whose argument list + * depends on the caller context. When doing a VARIADIC call, the caller + * has provided one argument made of an array of values, so deconstruct the + * array data before using it for the next processing. If no VARIADIC call + * is used, just fill in the status data based on all the arguments given + * by the caller. + * + * This function returns the number of arguments generated, or -1 in the + * case of "VARIADIC NULL". + */ +int +extract_variadic_args(FunctionCallInfo fcinfo, int variadic_start, + bool convert_unknown, Datum **args, Oid **types, + bool **nulls) +{ + bool variadic = get_fn_expr_variadic(fcinfo->flinfo); + Datum *args_res; + bool *nulls_res; + Oid *types_res; + int nargs, i; + + *args = NULL; + *types = NULL; + *nulls = NULL; + + if (variadic) + { + ArrayType *array_in; + Oid element_type; + bool typbyval; + char typalign; + int16 typlen; + + Assert(PG_NARGS() == variadic_start + 1); + + if (PG_ARGISNULL(variadic_start)) + return -1; + + array_in = PG_GETARG_ARRAYTYPE_P(variadic_start); + element_type = ARR_ELEMTYPE(array_in); + + get_typlenbyvalalign(element_type, + &typlen, &typbyval, &typalign); + deconstruct_array(array_in, element_type, typlen, typbyval, + typalign, &args_res, &nulls_res, + &nargs); + + /* All the elements of the array have the same type */ + types_res = (Oid *) palloc0(nargs * sizeof(Oid)); + for (i = 0; i < nargs; i++) + types_res[i] = element_type; + } + else + { + nargs = PG_NARGS() - variadic_start; + Assert (nargs > 0); + nulls_res = (bool *) palloc0(nargs * sizeof(bool)); + args_res = (Datum *) palloc0(nargs * sizeof(Datum)); + types_res = (Oid *) palloc0(nargs * sizeof(Oid)); + + for (i = 0; i < nargs; i++) + { + nulls_res[i] = PG_ARGISNULL(i + variadic_start); + types_res[i] = get_fn_expr_argtype(fcinfo->flinfo, + i + variadic_start); + + /* + * Turn a constant (more or less literal) value that's of unknown + * type into text if required . Unknowns come in as a cstring + * pointer. + * Note: for functions declared as taking type "any", the parser + * will not do any type conversion on unknown-type literals (that + * is, undecorated strings or NULLs). 
+ */ + if (convert_unknown && + types_res[i] == UNKNOWNOID && + get_fn_expr_arg_stable(fcinfo->flinfo, i + variadic_start)) + { + types_res[i] = TEXTOID; + + if (PG_ARGISNULL(i + variadic_start)) + args_res[i] = (Datum) 0; + else + args_res[i] = + CStringGetTextDatum(PG_GETARG_POINTER(i + variadic_start)); + } + else + { + /* no conversion needed, just take the datum as given */ + args_res[i] = PG_GETARG_DATUM(i + variadic_start); + } + + if (!OidIsValid(types_res[i]) || + (convert_unknown && types_res[i] == UNKNOWNOID)) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("could not determine data type for argument %d", + i + 1))); + } + } + + /* Fill in results */ + *args = args_res; + *nulls = nulls_res; + *types = types_res; + + return nargs; +} diff --git a/src/include/funcapi.h b/src/include/funcapi.h index 951af2aad3..d2dbc163f2 100644 --- a/src/include/funcapi.h +++ b/src/include/funcapi.h @@ -2,6 +2,7 @@ * * funcapi.h * Definitions for functions which return composite type and/or sets + * or work on VARIADIC inputs. * * This file must be included by all Postgres modules that either define * or call FUNCAPI-callable functions or macros. @@ -315,4 +316,26 @@ extern void end_MultiFuncCall(PG_FUNCTION_ARGS, FuncCallContext *funcctx); PG_RETURN_NULL(); \ } while (0) +/*---------- + * Support to ease writing of functions dealing with VARIADIC inputs + *---------- + * + * This function extracts a set of argument values, types and NULL markers + * for a given input function. This returns a set of data: + * - **values includes the set of Datum values extracted. + * - **types the data type OID for each element. + * - **nulls tracks if an element is NULL. + * + * variadic_start indicates the argument number where the VARIADIC argument + * starts. + * convert_unknown set to true will enforce the conversion of arguments + * with unknown data type to text. + * + * The return result is the number of elements stored, or -1 in the case of + * "VARIADIC NULL". + */ +extern int extract_variadic_args(FunctionCallInfo fcinfo, int variadic_start, + bool convert_unknown, Datum **values, + Oid **types, bool **nulls); + #endif /* FUNCAPI_H */ From 18fc4ecf4afafe40bd7e7577bd611e5caf74c9fd Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Wed, 25 Oct 2017 07:34:00 -0400 Subject: [PATCH 0426/1087] Process variadic arguments consistently in json functions json_build_object and json_build_array and the jsonb equivalents did not correctly process explicit VARIADIC arguments. They are modified to use the new extract_variadic_args() utility function which abstracts away the details of the call method. Michael Paquier, reviewed by Tom Lane and Dmitry Dolgov. Backpatch to 9.5 for the jsonb fixes and 9.4 for the json fixes, as that's where they originated. 
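
One detail worth noting before the diffs: json_build_object() and
json_build_array() call the helper with convert_unknown = false, since the
json code can emit "unknown" values directly through unknownout(), whereas
the jsonb variants pass true so that undecorated string literals are folded
to text up front. As a sketch of the convert_unknown = true pattern outside
the json code (concat_any_args is hypothetical, invented for illustration;
the helper and the catalog/output-function calls are real APIs):

    #include "postgres.h"

    #include "fmgr.h"
    #include "funcapi.h"
    #include "lib/stringinfo.h"
    #include "utils/builtins.h"
    #include "utils/lsyscache.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(concat_any_args);

    /*
     * Hypothetical "VARIADIC any" function: concatenate the text form of
     * every non-null argument.  With convert_unknown = true, undecorated
     * literals have already been folded to text by the time the loop
     * runs, so getTypeOutputInfo() always has a real type to work with.
     */
    Datum
    concat_any_args(PG_FUNCTION_ARGS)
    {
        Datum      *args;
        Oid        *types;
        bool       *nulls;
        int         nargs;
        int         i;
        StringInfoData buf;

        nargs = extract_variadic_args(fcinfo, 0, true,
                                      &args, &types, &nulls);

        if (nargs < 0)          /* the call was f(VARIADIC NULL) */
            PG_RETURN_NULL();

        initStringInfo(&buf);

        for (i = 0; i < nargs; i++)
        {
            Oid         typoutput;
            bool        typisvarlena;

            if (nulls[i])
                continue;

            /* render the datum with its type's output function */
            getTypeOutputInfo(types[i], &typoutput, &typisvarlena);
            appendStringInfoString(&buf,
                                   OidOutputFunctionCall(typoutput, args[i]));
        }

        PG_RETURN_TEXT_P(cstring_to_text(buf.data));
    }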
--- src/backend/utils/adt/json.c | 84 +++++++--------------- src/backend/utils/adt/jsonb.c | 99 ++++++++----------------- src/test/regress/expected/json.out | 107 ++++++++++++++++++++++++++++ src/test/regress/expected/jsonb.out | 105 +++++++++++++++++++++++++++ src/test/regress/sql/json.sql | 21 ++++++ src/test/regress/sql/jsonb.sql | 22 +++++- 6 files changed, 306 insertions(+), 132 deletions(-) diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c index 1ddb42b4d0..baf1178995 100644 --- a/src/backend/utils/adt/json.c +++ b/src/backend/utils/adt/json.c @@ -17,6 +17,7 @@ #include "access/transam.h" #include "catalog/pg_type.h" #include "executor/spi.h" +#include "funcapi.h" #include "lib/stringinfo.h" #include "libpq/pqformat.h" #include "mb/pg_wchar.h" @@ -2111,10 +2112,17 @@ json_build_object(PG_FUNCTION_ARGS) { int nargs = PG_NARGS(); int i; - Datum arg; const char *sep = ""; StringInfo result; - Oid val_type; + Datum *args; + bool *nulls; + Oid *types; + + /* fetch argument values to build the object */ + nargs = extract_variadic_args(fcinfo, 0, false, &args, &types, &nulls); + + if (nargs < 0) + PG_RETURN_NULL(); if (nargs % 2 != 0) ereport(ERROR, @@ -2128,52 +2136,22 @@ json_build_object(PG_FUNCTION_ARGS) for (i = 0; i < nargs; i += 2) { - /* - * Note: since json_build_object() is declared as taking type "any", - * the parser will not do any type conversion on unknown-type literals - * (that is, undecorated strings or NULLs). Such values will arrive - * here as type UNKNOWN, which fortunately does not matter to us, - * since unknownout() works fine. - */ appendStringInfoString(result, sep); sep = ", "; /* process key */ - val_type = get_fn_expr_argtype(fcinfo->flinfo, i); - - if (val_type == InvalidOid) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("could not determine data type for argument %d", - i + 1))); - - if (PG_ARGISNULL(i)) + if (nulls[i]) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("argument %d cannot be null", i + 1), errhint("Object keys should be text."))); - arg = PG_GETARG_DATUM(i); - - add_json(arg, false, result, val_type, true); + add_json(args[i], false, result, types[i], true); appendStringInfoString(result, " : "); /* process value */ - val_type = get_fn_expr_argtype(fcinfo->flinfo, i + 1); - - if (val_type == InvalidOid) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("could not determine data type for argument %d", - i + 2))); - - if (PG_ARGISNULL(i + 1)) - arg = (Datum) 0; - else - arg = PG_GETARG_DATUM(i + 1); - - add_json(arg, PG_ARGISNULL(i + 1), result, val_type, false); + add_json(args[i + 1], nulls[i + 1], result, types[i + 1], false); } appendStringInfoChar(result, '}'); @@ -2196,12 +2174,19 @@ json_build_object_noargs(PG_FUNCTION_ARGS) Datum json_build_array(PG_FUNCTION_ARGS) { - int nargs = PG_NARGS(); + int nargs; int i; - Datum arg; const char *sep = ""; StringInfo result; - Oid val_type; + Datum *args; + bool *nulls; + Oid *types; + + /* fetch argument values to build the array */ + nargs = extract_variadic_args(fcinfo, 0, false, &args, &types, &nulls); + + if (nargs < 0) + PG_RETURN_NULL(); result = makeStringInfo(); @@ -2209,30 +2194,9 @@ json_build_array(PG_FUNCTION_ARGS) for (i = 0; i < nargs; i++) { - /* - * Note: since json_build_array() is declared as taking type "any", - * the parser will not do any type conversion on unknown-type literals - * (that is, undecorated strings or NULLs). 
Such values will arrive - * here as type UNKNOWN, which fortunately does not matter to us, - * since unknownout() works fine. - */ appendStringInfoString(result, sep); sep = ", "; - - val_type = get_fn_expr_argtype(fcinfo->flinfo, i); - - if (val_type == InvalidOid) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("could not determine data type for argument %d", - i + 1))); - - if (PG_ARGISNULL(i)) - arg = (Datum) 0; - else - arg = PG_GETARG_DATUM(i); - - add_json(arg, PG_ARGISNULL(i), result, val_type, false); + add_json(args[i], nulls[i], result, types[i], false); } appendStringInfoChar(result, ']'); diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c index 771c05120b..7185b4cce5 100644 --- a/src/backend/utils/adt/jsonb.c +++ b/src/backend/utils/adt/jsonb.c @@ -3,7 +3,7 @@ * jsonb.c * I/O routines for jsonb type * - * Copyright (c) 2014-2017, PostgreSQL Global Development Group + * COPYRIGHT (c) 2014-2017, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/utils/adt/jsonb.c @@ -16,6 +16,7 @@ #include "access/htup_details.h" #include "access/transam.h" #include "catalog/pg_type.h" +#include "funcapi.h" #include "libpq/pqformat.h" #include "parser/parse_coerce.h" #include "utils/builtins.h" @@ -1171,16 +1172,24 @@ to_jsonb(PG_FUNCTION_ARGS) Datum jsonb_build_object(PG_FUNCTION_ARGS) { - int nargs = PG_NARGS(); + int nargs; int i; - Datum arg; - Oid val_type; JsonbInState result; + Datum *args; + bool *nulls; + Oid *types; + + /* build argument values to build the object */ + nargs = extract_variadic_args(fcinfo, 0, true, &args, &types, &nulls); + + if (nargs < 0) + PG_RETURN_NULL(); if (nargs % 2 != 0) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("invalid number of arguments: object must be matched key value pairs"))); + errmsg("argument list must have even number of elements"), + errhint("The arguments of jsonb_build_object() must consist of alternating keys and values."))); memset(&result, 0, sizeof(JsonbInState)); @@ -1189,54 +1198,15 @@ jsonb_build_object(PG_FUNCTION_ARGS) for (i = 0; i < nargs; i += 2) { /* process key */ - - if (PG_ARGISNULL(i)) + if (nulls[i]) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("argument %d: key must not be null", i + 1))); - val_type = get_fn_expr_argtype(fcinfo->flinfo, i); - - /* - * turn a constant (more or less literal) value that's of unknown type - * into text. Unknowns come in as a cstring pointer. 
- */ - if (val_type == UNKNOWNOID && get_fn_expr_arg_stable(fcinfo->flinfo, i)) - { - val_type = TEXTOID; - arg = CStringGetTextDatum(PG_GETARG_POINTER(i)); - } - else - { - arg = PG_GETARG_DATUM(i); - } - if (val_type == InvalidOid || val_type == UNKNOWNOID) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("could not determine data type for argument %d", i + 1))); - add_jsonb(arg, false, &result, val_type, true); + add_jsonb(args[i], false, &result, types[i], true); /* process value */ - - val_type = get_fn_expr_argtype(fcinfo->flinfo, i + 1); - /* see comments above */ - if (val_type == UNKNOWNOID && get_fn_expr_arg_stable(fcinfo->flinfo, i + 1)) - { - val_type = TEXTOID; - if (PG_ARGISNULL(i + 1)) - arg = (Datum) 0; - else - arg = CStringGetTextDatum(PG_GETARG_POINTER(i + 1)); - } - else - { - arg = PG_GETARG_DATUM(i + 1); - } - if (val_type == InvalidOid || val_type == UNKNOWNOID) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("could not determine data type for argument %d", i + 2))); - add_jsonb(arg, PG_ARGISNULL(i + 1), &result, val_type, false); + add_jsonb(args[i + 1], nulls[i + 1], &result, types[i + 1], false); } result.res = pushJsonbValue(&result.parseState, WJB_END_OBJECT, NULL); @@ -1266,38 +1236,25 @@ jsonb_build_object_noargs(PG_FUNCTION_ARGS) Datum jsonb_build_array(PG_FUNCTION_ARGS) { - int nargs = PG_NARGS(); + int nargs; int i; - Datum arg; - Oid val_type; JsonbInState result; + Datum *args; + bool *nulls; + Oid *types; + + /* build argument values to build the array */ + nargs = extract_variadic_args(fcinfo, 0, true, &args, &types, &nulls); + + if (nargs < 0) + PG_RETURN_NULL(); memset(&result, 0, sizeof(JsonbInState)); result.res = pushJsonbValue(&result.parseState, WJB_BEGIN_ARRAY, NULL); for (i = 0; i < nargs; i++) - { - val_type = get_fn_expr_argtype(fcinfo->flinfo, i); - /* see comments in jsonb_build_object above */ - if (val_type == UNKNOWNOID && get_fn_expr_arg_stable(fcinfo->flinfo, i)) - { - val_type = TEXTOID; - if (PG_ARGISNULL(i)) - arg = (Datum) 0; - else - arg = CStringGetTextDatum(PG_GETARG_POINTER(i)); - } - else - { - arg = PG_GETARG_DATUM(i); - } - if (val_type == InvalidOid || val_type == UNKNOWNOID) - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("could not determine data type for argument %d", i + 1))); - add_jsonb(arg, PG_ARGISNULL(i), &result, val_type, false); - } + add_jsonb(args[i], nulls[i], &result, types[i], false); result.res = pushJsonbValue(&result.parseState, WJB_END_ARRAY, NULL); diff --git a/src/test/regress/expected/json.out b/src/test/regress/expected/json.out index d7abae9867..9fc91f8d12 100644 --- a/src/test/regress/expected/json.out +++ b/src/test/regress/expected/json.out @@ -1864,6 +1864,54 @@ SELECT json_build_array('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": ["a", 1, "b", 1.2, "c", true, "d", null, "e", {"x": 3, "y": [1,2,3]}] (1 row) +SELECT json_build_array('a', NULL); -- ok + json_build_array +------------------ + ["a", null] +(1 row) + +SELECT json_build_array(VARIADIC NULL::text[]); -- ok + json_build_array +------------------ + +(1 row) + +SELECT json_build_array(VARIADIC '{}'::text[]); -- ok + json_build_array +------------------ + [] +(1 row) + +SELECT json_build_array(VARIADIC '{a,b,c}'::text[]); -- ok + json_build_array +------------------ + ["a", "b", "c"] +(1 row) + +SELECT json_build_array(VARIADIC ARRAY['a', NULL]::text[]); -- ok + json_build_array +------------------ + ["a", null] +(1 row) + +SELECT json_build_array(VARIADIC 
'{1,2,3,4}'::text[]); -- ok + json_build_array +---------------------- + ["1", "2", "3", "4"] +(1 row) + +SELECT json_build_array(VARIADIC '{1,2,3,4}'::int[]); -- ok + json_build_array +------------------ + [1, 2, 3, 4] +(1 row) + +SELECT json_build_array(VARIADIC '{{1,4},{2,5},{3,6}}'::int[][]); -- ok + json_build_array +-------------------- + [1, 4, 2, 5, 3, 6] +(1 row) + SELECT json_build_object('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": [1,2,3]}'); json_build_object ---------------------------------------------------------------------------- @@ -1879,6 +1927,65 @@ SELECT json_build_object( {"a" : {"b" : false, "c" : 99}, "d" : {"e" : [9,8,7], "f" : {"relkind":"r","name":"pg_class"}}} (1 row) +SELECT json_build_object('{a,b,c}'::text[]); -- error +ERROR: argument list must have even number of elements +HINT: The arguments of json_build_object() must consist of alternating keys and values. +SELECT json_build_object('{a,b,c}'::text[], '{d,e,f}'::text[]); -- error, key cannot be array +ERROR: key value must be scalar, not array, composite, or json +SELECT json_build_object('a', 'b', 'c'); -- error +ERROR: argument list must have even number of elements +HINT: The arguments of json_build_object() must consist of alternating keys and values. +SELECT json_build_object(NULL, 'a'); -- error, key cannot be NULL +ERROR: argument 1 cannot be null +HINT: Object keys should be text. +SELECT json_build_object('a', NULL); -- ok + json_build_object +------------------- + {"a" : null} +(1 row) + +SELECT json_build_object(VARIADIC NULL::text[]); -- ok + json_build_object +------------------- + +(1 row) + +SELECT json_build_object(VARIADIC '{}'::text[]); -- ok + json_build_object +------------------- + {} +(1 row) + +SELECT json_build_object(VARIADIC '{a,b,c}'::text[]); -- error +ERROR: argument list must have even number of elements +HINT: The arguments of json_build_object() must consist of alternating keys and values. +SELECT json_build_object(VARIADIC ARRAY['a', NULL]::text[]); -- ok + json_build_object +------------------- + {"a" : null} +(1 row) + +SELECT json_build_object(VARIADIC ARRAY[NULL, 'a']::text[]); -- error, key cannot be NULL +ERROR: argument 1 cannot be null +HINT: Object keys should be text. 
+SELECT json_build_object(VARIADIC '{1,2,3,4}'::text[]); -- ok + json_build_object +------------------------ + {"1" : "2", "3" : "4"} +(1 row) + +SELECT json_build_object(VARIADIC '{1,2,3,4}'::int[]); -- ok + json_build_object +-------------------- + {"1" : 2, "3" : 4} +(1 row) + +SELECT json_build_object(VARIADIC '{{1,4},{2,5},{3,6}}'::int[][]); -- ok + json_build_object +----------------------------- + {"1" : 4, "2" : 5, "3" : 6} +(1 row) + -- empty objects/arrays SELECT json_build_array(); json_build_array diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out index dcea6a47a3..eeac2a13c7 100644 --- a/src/test/regress/expected/jsonb.out +++ b/src/test/regress/expected/jsonb.out @@ -1345,6 +1345,54 @@ SELECT jsonb_build_array('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": ["a", 1, "b", 1.2, "c", true, "d", null, "e", {"x": 3, "y": [1, 2, 3]}] (1 row) +SELECT jsonb_build_array('a', NULL); -- ok + jsonb_build_array +------------------- + ["a", null] +(1 row) + +SELECT jsonb_build_array(VARIADIC NULL::text[]); -- ok + jsonb_build_array +------------------- + +(1 row) + +SELECT jsonb_build_array(VARIADIC '{}'::text[]); -- ok + jsonb_build_array +------------------- + [] +(1 row) + +SELECT jsonb_build_array(VARIADIC '{a,b,c}'::text[]); -- ok + jsonb_build_array +------------------- + ["a", "b", "c"] +(1 row) + +SELECT jsonb_build_array(VARIADIC ARRAY['a', NULL]::text[]); -- ok + jsonb_build_array +------------------- + ["a", null] +(1 row) + +SELECT jsonb_build_array(VARIADIC '{1,2,3,4}'::text[]); -- ok + jsonb_build_array +---------------------- + ["1", "2", "3", "4"] +(1 row) + +SELECT jsonb_build_array(VARIADIC '{1,2,3,4}'::int[]); -- ok + jsonb_build_array +------------------- + [1, 2, 3, 4] +(1 row) + +SELECT jsonb_build_array(VARIADIC '{{1,4},{2,5},{3,6}}'::int[][]); -- ok + jsonb_build_array +-------------------- + [1, 4, 2, 5, 3, 6] +(1 row) + SELECT jsonb_build_object('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": [1,2,3]}'); jsonb_build_object ------------------------------------------------------------------------- @@ -1360,6 +1408,63 @@ SELECT jsonb_build_object( {"a": {"b": false, "c": 99}, "d": {"e": [9, 8, 7], "f": {"name": "pg_class", "relkind": "r"}}} (1 row) +SELECT jsonb_build_object('{a,b,c}'::text[]); -- error +ERROR: argument list must have even number of elements +HINT: The arguments of jsonb_build_object() must consist of alternating keys and values. +SELECT jsonb_build_object('{a,b,c}'::text[], '{d,e,f}'::text[]); -- error, key cannot be array +ERROR: key value must be scalar, not array, composite, or json +SELECT jsonb_build_object('a', 'b', 'c'); -- error +ERROR: argument list must have even number of elements +HINT: The arguments of jsonb_build_object() must consist of alternating keys and values. +SELECT jsonb_build_object(NULL, 'a'); -- error, key cannot be NULL +ERROR: argument 1: key must not be null +SELECT jsonb_build_object('a', NULL); -- ok + jsonb_build_object +-------------------- + {"a": null} +(1 row) + +SELECT jsonb_build_object(VARIADIC NULL::text[]); -- ok + jsonb_build_object +-------------------- + +(1 row) + +SELECT jsonb_build_object(VARIADIC '{}'::text[]); -- ok + jsonb_build_object +-------------------- + {} +(1 row) + +SELECT jsonb_build_object(VARIADIC '{a,b,c}'::text[]); -- error +ERROR: argument list must have even number of elements +HINT: The arguments of jsonb_build_object() must consist of alternating keys and values. 
+SELECT jsonb_build_object(VARIADIC ARRAY['a', NULL]::text[]); -- ok + jsonb_build_object +-------------------- + {"a": null} +(1 row) + +SELECT jsonb_build_object(VARIADIC ARRAY[NULL, 'a']::text[]); -- error, key cannot be NULL +ERROR: argument 1: key must not be null +SELECT jsonb_build_object(VARIADIC '{1,2,3,4}'::text[]); -- ok + jsonb_build_object +---------------------- + {"1": "2", "3": "4"} +(1 row) + +SELECT jsonb_build_object(VARIADIC '{1,2,3,4}'::int[]); -- ok + jsonb_build_object +-------------------- + {"1": 2, "3": 4} +(1 row) + +SELECT jsonb_build_object(VARIADIC '{{1,4},{2,5},{3,6}}'::int[][]); -- ok + jsonb_build_object +-------------------------- + {"1": 4, "2": 5, "3": 6} +(1 row) + -- empty objects/arrays SELECT jsonb_build_array(); jsonb_build_array diff --git a/src/test/regress/sql/json.sql b/src/test/regress/sql/json.sql index 506e3a8fc5..598498d40a 100644 --- a/src/test/regress/sql/json.sql +++ b/src/test/regress/sql/json.sql @@ -569,6 +569,14 @@ select value, json_typeof(value) -- json_build_array, json_build_object, json_object_agg SELECT json_build_array('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": [1,2,3]}'); +SELECT json_build_array('a', NULL); -- ok +SELECT json_build_array(VARIADIC NULL::text[]); -- ok +SELECT json_build_array(VARIADIC '{}'::text[]); -- ok +SELECT json_build_array(VARIADIC '{a,b,c}'::text[]); -- ok +SELECT json_build_array(VARIADIC ARRAY['a', NULL]::text[]); -- ok +SELECT json_build_array(VARIADIC '{1,2,3,4}'::text[]); -- ok +SELECT json_build_array(VARIADIC '{1,2,3,4}'::int[]); -- ok +SELECT json_build_array(VARIADIC '{{1,4},{2,5},{3,6}}'::int[][]); -- ok SELECT json_build_object('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": [1,2,3]}'); @@ -576,6 +584,19 @@ SELECT json_build_object( 'a', json_build_object('b',false,'c',99), 'd', json_build_object('e',array[9,8,7]::int[], 'f', (select row_to_json(r) from ( select relkind, oid::regclass as name from pg_class where relname = 'pg_class') r))); +SELECT json_build_object('{a,b,c}'::text[]); -- error +SELECT json_build_object('{a,b,c}'::text[], '{d,e,f}'::text[]); -- error, key cannot be array +SELECT json_build_object('a', 'b', 'c'); -- error +SELECT json_build_object(NULL, 'a'); -- error, key cannot be NULL +SELECT json_build_object('a', NULL); -- ok +SELECT json_build_object(VARIADIC NULL::text[]); -- ok +SELECT json_build_object(VARIADIC '{}'::text[]); -- ok +SELECT json_build_object(VARIADIC '{a,b,c}'::text[]); -- error +SELECT json_build_object(VARIADIC ARRAY['a', NULL]::text[]); -- ok +SELECT json_build_object(VARIADIC ARRAY[NULL, 'a']::text[]); -- error, key cannot be NULL +SELECT json_build_object(VARIADIC '{1,2,3,4}'::text[]); -- ok +SELECT json_build_object(VARIADIC '{1,2,3,4}'::int[]); -- ok +SELECT json_build_object(VARIADIC '{{1,4},{2,5},{3,6}}'::int[][]); -- ok -- empty objects/arrays SELECT json_build_array(); diff --git a/src/test/regress/sql/jsonb.sql b/src/test/regress/sql/jsonb.sql index 57fff3bfb3..d0e3f2a1f6 100644 --- a/src/test/regress/sql/jsonb.sql +++ b/src/test/regress/sql/jsonb.sql @@ -313,6 +313,14 @@ SELECT jsonb_typeof('"1.0"') AS string; -- jsonb_build_array, jsonb_build_object, jsonb_object_agg SELECT jsonb_build_array('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": [1,2,3]}'); +SELECT jsonb_build_array('a', NULL); -- ok +SELECT jsonb_build_array(VARIADIC NULL::text[]); -- ok +SELECT jsonb_build_array(VARIADIC '{}'::text[]); -- ok +SELECT jsonb_build_array(VARIADIC '{a,b,c}'::text[]); -- ok +SELECT jsonb_build_array(VARIADIC ARRAY['a', 
NULL]::text[]); -- ok +SELECT jsonb_build_array(VARIADIC '{1,2,3,4}'::text[]); -- ok +SELECT jsonb_build_array(VARIADIC '{1,2,3,4}'::int[]); -- ok +SELECT jsonb_build_array(VARIADIC '{{1,4},{2,5},{3,6}}'::int[][]); -- ok SELECT jsonb_build_object('a',1,'b',1.2,'c',true,'d',null,'e',json '{"x": 3, "y": [1,2,3]}'); @@ -320,7 +328,19 @@ SELECT jsonb_build_object( 'a', jsonb_build_object('b',false,'c',99), 'd', jsonb_build_object('e',array[9,8,7]::int[], 'f', (select row_to_json(r) from ( select relkind, oid::regclass as name from pg_class where relname = 'pg_class') r))); - +SELECT jsonb_build_object('{a,b,c}'::text[]); -- error +SELECT jsonb_build_object('{a,b,c}'::text[], '{d,e,f}'::text[]); -- error, key cannot be array +SELECT jsonb_build_object('a', 'b', 'c'); -- error +SELECT jsonb_build_object(NULL, 'a'); -- error, key cannot be NULL +SELECT jsonb_build_object('a', NULL); -- ok +SELECT jsonb_build_object(VARIADIC NULL::text[]); -- ok +SELECT jsonb_build_object(VARIADIC '{}'::text[]); -- ok +SELECT jsonb_build_object(VARIADIC '{a,b,c}'::text[]); -- error +SELECT jsonb_build_object(VARIADIC ARRAY['a', NULL]::text[]); -- ok +SELECT jsonb_build_object(VARIADIC ARRAY[NULL, 'a']::text[]); -- error, key cannot be NULL +SELECT jsonb_build_object(VARIADIC '{1,2,3,4}'::text[]); -- ok +SELECT jsonb_build_object(VARIADIC '{1,2,3,4}'::int[]); -- ok +SELECT jsonb_build_object(VARIADIC '{{1,4},{2,5},{3,6}}'::int[][]); -- ok -- empty objects/arrays SELECT jsonb_build_array(); From db6986f47c9531628d151d6bf760a2fe1214b19d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 25 Oct 2017 19:32:24 -0400 Subject: [PATCH 0427/1087] Fix libpq to not require user's home directory to exist. Some people like to run libpq-using applications in environments where there's no home directory. We've broken that scenario before (cf commits 5b4067798 and bd58d9d88), and commit ba005f193 broke it again, by making it a hard error if we fail to get the home directory name while looking for ~/.pgpass. The previous precedent is that if we can't get the home directory name, we should just silently act as though the file we hoped to find there doesn't exist. Rearrange the new code to honor that. Looking around, the service-file code added by commit 41a4e4595 had the same disease. Apparently, that escaped notice because it only runs when a service name has been specified, which I guess the people who use this scenario don't do. Nonetheless, it's wrong too, so fix that case as well. Add a comment about this policy to pqGetHomeDirectory, in the probably vain hope of forestalling the same error in future. And upgrade the rather miserable commenting in parseServiceInfo, too. In passing, also back off parseServiceInfo's assumption that only ENOENT is an ignorable error from stat() when checking a service file. We would need to ignore at least ENOTDIR as well (cf 5b4067798), and seeing that the far-better-tested code for ~/.pgpass treats all stat() failures alike, I think this code ought to as well. Per bug #14872 from Dan Watson. Back-patch the .pgpass change to v10 where ba005f193 came in. The service-file bugs are far older, so back-patch the other changes to all supported branches. 
Discussion: https://postgr.es/m/20171025200457.1471.34504@wrigleys.postgresql.org --- src/interfaces/libpq/fe-connect.c | 109 +++++++++++++++++------------- 1 file changed, 63 insertions(+), 46 deletions(-) diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c index 5f79803607..6bcf60a712 100644 --- a/src/interfaces/libpq/fe-connect.c +++ b/src/interfaces/libpq/fe-connect.c @@ -1063,52 +1063,51 @@ connectOptions2(PGconn *conn) */ if (conn->pgpass == NULL || conn->pgpass[0] == '\0') { - int i; - + /* If password file wasn't specified, use ~/PGPASSFILE */ if (conn->pgpassfile == NULL || conn->pgpassfile[0] == '\0') { - /* Identify password file to use; fail if we can't */ char homedir[MAXPGPATH]; - if (!pqGetHomeDirectory(homedir, sizeof(homedir))) + if (pqGetHomeDirectory(homedir, sizeof(homedir))) { - conn->status = CONNECTION_BAD; - printfPQExpBuffer(&conn->errorMessage, - libpq_gettext("could not get home directory to locate password file\n")); - return false; + if (conn->pgpassfile) + free(conn->pgpassfile); + conn->pgpassfile = malloc(MAXPGPATH); + if (!conn->pgpassfile) + goto oom_error; + snprintf(conn->pgpassfile, MAXPGPATH, "%s/%s", + homedir, PGPASSFILE); } - - if (conn->pgpassfile) - free(conn->pgpassfile); - conn->pgpassfile = malloc(MAXPGPATH); - if (!conn->pgpassfile) - goto oom_error; - - snprintf(conn->pgpassfile, MAXPGPATH, "%s/%s", homedir, PGPASSFILE); } - for (i = 0; i < conn->nconnhost; i++) + if (conn->pgpassfile != NULL && conn->pgpassfile[0] != '\0') { - /* - * Try to get a password for this host from pgpassfile. We use - * host name rather than host address in the same manner to - * PQhost(). - */ - char *pwhost = conn->connhost[i].host; - - if (conn->connhost[i].type == CHT_HOST_ADDRESS && - conn->connhost[i].host != NULL && conn->connhost[i].host[0] != '\0') - pwhost = conn->connhost[i].hostaddr; - - conn->connhost[i].password = - passwordFromFile(pwhost, - conn->connhost[i].port, - conn->dbName, - conn->pguser, - conn->pgpassfile); - /* If we got one, set pgpassfile_used */ - if (conn->connhost[i].password != NULL) - conn->pgpassfile_used = true; + int i; + + for (i = 0; i < conn->nconnhost; i++) + { + /* + * Try to get a password for this host from pgpassfile. We use + * host name rather than host address in the same manner as + * PQhost(). + */ + char *pwhost = conn->connhost[i].host; + + if (conn->connhost[i].type == CHT_HOST_ADDRESS && + conn->connhost[i].host != NULL && + conn->connhost[i].host[0] != '\0') + pwhost = conn->connhost[i].hostaddr; + + conn->connhost[i].password = + passwordFromFile(pwhost, + conn->connhost[i].port, + conn->dbName, + conn->pguser, + conn->pgpassfile); + /* If we got one, set pgpassfile_used */ + if (conn->connhost[i].password != NULL) + conn->pgpassfile_used = true; + } } } @@ -4469,6 +4468,16 @@ ldapServiceLookup(const char *purl, PQconninfoOption *options, #define MAXBUFSIZE 256 +/* + * parseServiceInfo: if a service name has been given, look it up and absorb + * connection options from it into *options. + * + * Returns 0 on success, nonzero on failure. On failure, if errorMessage + * isn't null, also store an error message there. (Note: the only reason + * this function and related ones don't dump core on errorMessage == NULL + * is the undocumented fact that printfPQExpBuffer does nothing when passed + * a null PQExpBuffer pointer.) 
+ */ static int parseServiceInfo(PQconninfoOption *options, PQExpBuffer errorMessage) { @@ -4487,9 +4496,14 @@ parseServiceInfo(PQconninfoOption *options, PQExpBuffer errorMessage) if (service == NULL) service = getenv("PGSERVICE"); + /* If no service name given, nothing to do */ if (service == NULL) return 0; + /* + * Try PGSERVICEFILE if specified, else try ~/.pg_service.conf (if that + * exists). + */ if ((env = getenv("PGSERVICEFILE")) != NULL) strlcpy(serviceFile, env, sizeof(serviceFile)); else @@ -4497,13 +4511,9 @@ parseServiceInfo(PQconninfoOption *options, PQExpBuffer errorMessage) char homedir[MAXPGPATH]; if (!pqGetHomeDirectory(homedir, sizeof(homedir))) - { - printfPQExpBuffer(errorMessage, libpq_gettext("could not get home directory to locate service definition file")); - return 1; - } + goto next_file; snprintf(serviceFile, MAXPGPATH, "%s/%s", homedir, ".pg_service.conf"); - errno = 0; - if (stat(serviceFile, &stat_buf) != 0 && errno == ENOENT) + if (stat(serviceFile, &stat_buf) != 0) goto next_file; } @@ -4519,8 +4529,7 @@ parseServiceInfo(PQconninfoOption *options, PQExpBuffer errorMessage) */ snprintf(serviceFile, MAXPGPATH, "%s/pg_service.conf", getenv("PGSYSCONFDIR") ? getenv("PGSYSCONFDIR") : SYSCONFDIR); - errno = 0; - if (stat(serviceFile, &stat_buf) != 0 && errno == ENOENT) + if (stat(serviceFile, &stat_buf) != 0) goto last_file; status = parseServiceFile(serviceFile, service, options, errorMessage, &group_found); @@ -6510,7 +6519,15 @@ pgpassfileWarning(PGconn *conn) * * This is essentially the same as get_home_path(), but we don't use that * because we don't want to pull path.c into libpq (it pollutes application - * namespace) + * namespace). + * + * Returns true on success, false on failure to obtain the directory name. + * + * CAUTION: although in most situations failure is unexpected, there are users + * who like to run applications in a home-directory-less environment. On + * failure, you almost certainly DO NOT want to report an error. Just act as + * though whatever file you were hoping to find in the home directory isn't + * there (which it isn't). */ bool pqGetHomeDirectory(char *buf, int bufsize) From 0af98a95cf8397d36202a34cd615f222faf24e9a Mon Sep 17 00:00:00 2001 From: Michael Meskes Date: Thu, 26 Oct 2017 10:16:04 +0200 Subject: [PATCH 0428/1087] Fixed handling of escape character in libecpg. Patch by Tsunakawa Takayuki --- src/interfaces/ecpg/ecpglib/execute.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/src/interfaces/ecpg/ecpglib/execute.c b/src/interfaces/ecpg/ecpglib/execute.c index 50f831aa2b..7776813d1b 100644 --- a/src/interfaces/ecpg/ecpglib/execute.c +++ b/src/interfaces/ecpg/ecpglib/execute.c @@ -108,14 +108,14 @@ free_statement(struct statement *stmt) } static int -next_insert(char *text, int pos, bool questionmarks) +next_insert(char *text, int pos, bool questionmarks, bool std_strings) { bool string = false; int p = pos; for (; text[p] != '\0'; p++) { - if (text[p] == '\\') /* escape character */ + if (string && !std_strings && text[p] == '\\') /* escape character */ p++; else if (text[p] == '\'') string = string ? false : true; @@ -1109,6 +1109,13 @@ ecpg_build_params(struct statement *stmt) struct variable *var; int desc_counter = 0; int position = 0; + const char *value; + bool std_strings = false; + + /* Get standard_conforming_strings setting. 
 */
+	value = PQparameterStatus(stmt->connection->connection, "standard_conforming_strings");
+	if (value && strcmp(value, "on") == 0)
+		std_strings = true;
 
 	/*
 	 * If the type is one of the fill in types then we take the argument and
@@ -1299,7 +1306,7 @@ ecpg_build_params(struct statement *stmt)
 		 * now tobeinserted points to an area that contains the next
 		 * parameter; now find the position in the string where it belongs
 		 */
-		if ((position = next_insert(stmt->command, position, stmt->questionmarks) + 1) == 0)
+		if ((position = next_insert(stmt->command, position, stmt->questionmarks, std_strings) + 1) == 0)
 		{
 			/*
 			 * We have an argument but we dont have the matched up placeholder
@@ -1386,7 +1393,7 @@
 	}
 
 	/* Check if there are unmatched things left. */
-	if (next_insert(stmt->command, position, stmt->questionmarks) >= 0)
+	if (next_insert(stmt->command, position, stmt->questionmarks, std_strings) >= 0)
 	{
 		ecpg_raise(stmt->lineno, ECPG_TOO_FEW_ARGUMENTS,
 				   ECPG_SQLSTATE_USING_CLAUSE_DOES_NOT_MATCH_PARAMETERS, NULL);

From b55509332f50f998b6e8b3830a51c5b9d8f666aa Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 26 Oct 2017 12:35:34 +0200
Subject: [PATCH 0429/1087] In relevant log messages, indicate whether vacuums
 are aggressive.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Kyotaro Horiguchi, reviewed by Masahiko Sawada, David G. Johnston,
Álvaro Herrera, and me.  Grammar correction to the final posted patch
by me.

Discussion: http://postgr.es/m/20170329.124649.193656100.horiguchi.kyotaro@lab.ntt.co.jp
---
 src/backend/commands/vacuumlazy.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 30b1c08c6c..172d213fdb 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -355,6 +355,7 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						   params->log_min_duration))
 		{
 			StringInfoData buf;
+			char	   *msgfmt;
 
 			TimestampDifference(starttime, endtime, &secs, &usecs);
 
@@ -373,7 +374,11 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 			 * emitting individual parts of the message when not applicable.
*/ initStringInfo(&buf); - appendStringInfo(&buf, _("automatic vacuum of table \"%s.%s.%s\": index scans: %d\n"), + if (aggressive) + msgfmt = _("automatic aggressive vacuum of table \"%s.%s.%s\": index scans: %d\n"); + else + msgfmt = _("automatic vacuum of table \"%s.%s.%s\": index scans: %d\n"); + appendStringInfo(&buf, msgfmt, get_database_name(MyDatabaseId), get_namespace_name(RelationGetNamespace(onerel)), RelationGetRelationName(onerel), @@ -486,10 +491,16 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, pg_rusage_init(&ru0); relname = RelationGetRelationName(onerel); - ereport(elevel, - (errmsg("vacuuming \"%s.%s\"", - get_namespace_name(RelationGetNamespace(onerel)), - relname))); + if (aggressive) + ereport(elevel, + (errmsg("aggressively vacuuming \"%s.%s\"", + get_namespace_name(RelationGetNamespace(onerel)), + relname))); + else + ereport(elevel, + (errmsg("vacuuming \"%s.%s\"", + get_namespace_name(RelationGetNamespace(onerel)), + relname))); empty_pages = vacuumed_pages = 0; num_tuples = tups_vacuumed = nkeep = nunused = 0; From adee9e4e317169463816d005e8bf901333271917 Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Thu, 26 Oct 2017 08:20:00 -0400 Subject: [PATCH 0430/1087] Undo inadvertent change in capitalization in commit 18fc4ec. --- src/backend/utils/adt/jsonb.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c index 7185b4cce5..4b2a541128 100644 --- a/src/backend/utils/adt/jsonb.c +++ b/src/backend/utils/adt/jsonb.c @@ -3,7 +3,7 @@ * jsonb.c * I/O routines for jsonb type * - * COPYRIGHT (c) 2014-2017, PostgreSQL Global Development Group + * Copyright (c) 2014-2017, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/utils/adt/jsonb.c From 74d2c0dbfd94aa5512be3828a793b4c2d43df2d0 Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Thu, 26 Oct 2017 10:01:02 -0400 Subject: [PATCH 0431/1087] Improve gendef.pl diagnostic on failure to open sym file There have been numerous buildfarm failures but the diagnostic is currently silent about the reason for failure to open the file. Let's see if we can get to the bottom of it. Backpatch to all live branches. --- src/tools/msvc/gendef.pl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/tools/msvc/gendef.pl b/src/tools/msvc/gendef.pl index 96122750f1..9b5bc081e1 100644 --- a/src/tools/msvc/gendef.pl +++ b/src/tools/msvc/gendef.pl @@ -32,7 +32,7 @@ sub dumpsyms sub extract_syms { my ($symfile, $def) = @_; - open(my $f, '<', $symfile) || die "Could not open $symfile for $_\n"; + open(my $f, '<', $symfile) || die "Could not open $symfile for $_: $!\n"; while (<$f>) { From 08f1e1f0a47b4b0e87b07b9794698747b279c711 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 26 Oct 2017 12:17:40 -0400 Subject: [PATCH 0432/1087] Make setrefs.c match by ressortgroupref even for plain Vars. Previously, we skipped using search_indexed_tlist_for_sortgroupref() if the tlist expression being sought in the child plan node was merely a Var. This is purely an optimization, based on the theory that search_indexed_tlist_for_var() is faster, and one copy of a Var should be as good as another. However, the GROUPING SETS patch broke the latter assumption: grouping columns containing the "same" Var can sometimes have different outputs, as shown in the test case added here. So do it the hard way whenever a ressortgroupref marking exists. 
(If this seems like a bottleneck, we could imagine building a tlist index data structure for ressortgroupref values, as we do for Vars. But I'll let that idea go until there's some evidence it's worthwhile.) Back-patch to 9.6. The problem also exists in 9.5 where GROUPING SETS came in, but this patch is insufficient to resolve the problem in 9.5: there is some obscure dependency on the upper-planner-pathification work that happened in 9.6. Given that this is such a weird corner case, and no end users have complained about it, it doesn't seem worth the work to develop a fix for 9.5. Patch by me, per a report from Heikki Linnakangas. (This does not fix Heikki's original complaint, just the follow-on one.) Discussion: https://postgr.es/m/aefc657e-edb2-64d5-6df1-a0828f6e9104@iki.fi --- src/backend/optimizer/plan/setrefs.c | 7 +++--- src/test/regress/expected/groupingsets.out | 29 ++++++++++++++++++++++ src/test/regress/sql/groupingsets.sql | 11 ++++++++ 3 files changed, 43 insertions(+), 4 deletions(-) diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c index 1382b67974..fa9a3f0b47 100644 --- a/src/backend/optimizer/plan/setrefs.c +++ b/src/backend/optimizer/plan/setrefs.c @@ -1744,8 +1744,8 @@ set_upper_references(PlannerInfo *root, Plan *plan, int rtoffset) TargetEntry *tle = (TargetEntry *) lfirst(l); Node *newexpr; - /* If it's a non-Var sort/group item, first try to match by sortref */ - if (tle->ressortgroupref != 0 && !IsA(tle->expr, Var)) + /* If it's a sort/group item, first try to match by sortref */ + if (tle->ressortgroupref != 0) { newexpr = (Node *) search_indexed_tlist_for_sortgroupref(tle->expr, @@ -2113,7 +2113,6 @@ search_indexed_tlist_for_non_var(Expr *node, /* * search_indexed_tlist_for_sortgroupref --- find a sort/group expression - * (which is assumed not to be just a Var) * * If a match is found, return a Var constructed to reference the tlist item. * If no match, return NULL. 
@@ -2644,7 +2643,7 @@ is_converted_whole_row_reference(Node *node) if (IsA(convexpr->arg, Var)) { - Var *var = castNode(Var, convexpr->arg); + Var *var = castNode(Var, convexpr->arg); if (var->varattno == 0) return true; diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out index fd618afe60..833d515174 100644 --- a/src/test/regress/expected/groupingsets.out +++ b/src/test/regress/expected/groupingsets.out @@ -360,6 +360,35 @@ select a, d, grouping(a,b,c) 2 | 2 | 2 (4 rows) +-- check that distinct grouping columns are kept separate +-- even if they are equal() +explain (costs off) +select g as alias1, g as alias2 + from generate_series(1,3) g + group by alias1, rollup(alias2); + QUERY PLAN +------------------------------------------------ + GroupAggregate + Group Key: g, g + Group Key: g + -> Sort + Sort Key: g + -> Function Scan on generate_series g +(6 rows) + +select g as alias1, g as alias2 + from generate_series(1,3) g + group by alias1, rollup(alias2); + alias1 | alias2 +--------+-------- + 1 | 1 + 1 | + 2 | 2 + 2 | + 3 | 3 + 3 | +(6 rows) + -- simple rescan tests select a, b, sum(v.x) from (values (1),(2)) v(x), gstest_data(v.x) diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql index 564ebc9b05..2b4ab692c4 100644 --- a/src/test/regress/sql/groupingsets.sql +++ b/src/test/regress/sql/groupingsets.sql @@ -141,6 +141,17 @@ select a, d, grouping(a,b,c) from gstest3 group by grouping sets ((a,b), (a,c)); +-- check that distinct grouping columns are kept separate +-- even if they are equal() +explain (costs off) +select g as alias1, g as alias2 + from generate_series(1,3) g + group by alias1, rollup(alias2); + +select g as alias1, g as alias2 + from generate_series(1,3) g + group by alias1, rollup(alias2); + -- simple rescan tests select a, b, sum(v.x) From 37a795a60b4f4b1def11c615525ec5e0e9449e05 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 26 Oct 2017 13:47:45 -0400 Subject: [PATCH 0433/1087] Support domains over composite types. This is the last major omission in our domains feature: you can now make a domain over anything that's not a pseudotype. The major complication from an implementation standpoint is that places that might be creating tuples of a domain type now need to be prepared to apply domain_check(). It seems better that unprepared code fail with an error like " is not composite" than that it silently fail to apply domain constraints. Therefore, relevant infrastructure like get_func_result_type() and lookup_rowtype_tupdesc() has been adjusted to treat domain-over-composite as a distinct case that unprepared code won't recognize, rather than just transparently treating it the same as plain composite. This isn't a 100% solution to the possibility of overlooked domain checks, but it catches most places. In passing, improve typcache.c's support for domains (it can now cache the identity of a domain's base type), and rewrite the argument handling logic in jsonfuncs.c's populate_record[set]_worker to reduce duplicative per-call lookups. I believe this is code-complete so far as the core and contrib code go. The PLs need varying amounts of work, which will be tackled in followup patches. 
Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us --- contrib/hstore/hstore_io.c | 46 ++- doc/src/sgml/datatype.sgml | 3 +- doc/src/sgml/rowtypes.sgml | 5 +- doc/src/sgml/xfunc.sgml | 37 +- src/backend/catalog/pg_inherits.c | 9 +- src/backend/catalog/pg_proc.c | 2 +- src/backend/commands/tablecmds.c | 6 +- src/backend/commands/typecmds.c | 11 +- src/backend/executor/execExprInterp.c | 6 +- src/backend/executor/execSRF.c | 5 +- src/backend/executor/functions.c | 15 +- src/backend/executor/nodeFunctionscan.c | 3 +- src/backend/nodes/makefuncs.c | 6 +- src/backend/optimizer/util/clauses.c | 10 +- src/backend/parser/parse_coerce.c | 48 ++- src/backend/parser/parse_func.c | 9 +- src/backend/parser/parse_relation.c | 18 +- src/backend/parser/parse_target.c | 49 +-- src/backend/parser/parse_type.c | 38 ++- src/backend/utils/adt/domains.c | 9 +- src/backend/utils/adt/jsonfuncs.c | 431 +++++++++++++++--------- src/backend/utils/adt/ruleutils.c | 18 +- src/backend/utils/cache/lsyscache.c | 18 +- src/backend/utils/cache/typcache.c | 130 +++++-- src/backend/utils/fmgr/funcapi.c | 93 ++++- src/include/access/htup_details.h | 5 + src/include/access/tupdesc.h | 6 + src/include/funcapi.h | 11 +- src/include/nodes/primnodes.h | 11 +- src/include/parser/parse_type.h | 4 +- src/include/utils/typcache.h | 17 +- src/test/regress/expected/domain.out | 96 ++++++ src/test/regress/expected/json.out | 53 +++ src/test/regress/expected/jsonb.out | 53 +++ src/test/regress/sql/domain.sql | 47 +++ src/test/regress/sql/json.sql | 23 ++ src/test/regress/sql/jsonb.sql | 23 ++ 37 files changed, 1083 insertions(+), 291 deletions(-) diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c index d8284012d0..e999a8e12c 100644 --- a/contrib/hstore/hstore_io.c +++ b/contrib/hstore/hstore_io.c @@ -752,6 +752,8 @@ typedef struct RecordIOData { Oid record_type; int32 record_typmod; + /* this field is used only if target type is domain over composite: */ + void *domain_info; /* opaque cache for domain checks */ int ncolumns; ColumnIOData columns[FLEXIBLE_ARRAY_MEMBER]; } RecordIOData; @@ -780,9 +782,11 @@ hstore_from_record(PG_FUNCTION_ARGS) Oid argtype = get_fn_expr_argtype(fcinfo->flinfo, 0); /* - * have no tuple to look at, so the only source of type info is the - * argtype. The lookup_rowtype_tupdesc call below will error out if we - * don't have a known composite type oid here. + * We have no tuple to look at, so the only source of type info is the + * argtype --- which might be domain over composite, but we don't care + * here, since we have no need to be concerned about domain + * constraints. The lookup_rowtype_tupdesc_domain call below will + * error out if we don't have a known composite type oid here. */ tupType = argtype; tupTypmod = -1; @@ -793,12 +797,15 @@ hstore_from_record(PG_FUNCTION_ARGS) { rec = PG_GETARG_HEAPTUPLEHEADER(0); - /* Extract type info from the tuple itself */ + /* + * Extract type info from the tuple itself -- this will work even for + * anonymous record types. + */ tupType = HeapTupleHeaderGetTypeId(rec); tupTypmod = HeapTupleHeaderGetTypMod(rec); } - tupdesc = lookup_rowtype_tupdesc(tupType, tupTypmod); + tupdesc = lookup_rowtype_tupdesc_domain(tupType, tupTypmod, false); ncolumns = tupdesc->natts; /* @@ -943,9 +950,9 @@ hstore_populate_record(PG_FUNCTION_ARGS) rec = NULL; /* - * have no tuple to look at, so the only source of type info is the - * argtype. The lookup_rowtype_tupdesc call below will error out if we - * don't have a known composite type oid here. 
+ * We have no tuple to look at, so the only source of type info is the + * argtype. The lookup_rowtype_tupdesc_domain call below will error + * out if we don't have a known composite type oid here. */ tupType = argtype; tupTypmod = -1; @@ -957,7 +964,10 @@ hstore_populate_record(PG_FUNCTION_ARGS) if (PG_ARGISNULL(1)) PG_RETURN_POINTER(rec); - /* Extract type info from the tuple itself */ + /* + * Extract type info from the tuple itself -- this will work even for + * anonymous record types. + */ tupType = HeapTupleHeaderGetTypeId(rec); tupTypmod = HeapTupleHeaderGetTypMod(rec); } @@ -975,7 +985,11 @@ hstore_populate_record(PG_FUNCTION_ARGS) if (HS_COUNT(hs) == 0 && rec) PG_RETURN_POINTER(rec); - tupdesc = lookup_rowtype_tupdesc(tupType, tupTypmod); + /* + * Lookup the input record's tupdesc. For the moment, we don't worry + * about whether it is a domain over composite. + */ + tupdesc = lookup_rowtype_tupdesc_domain(tupType, tupTypmod, false); ncolumns = tupdesc->natts; if (rec) @@ -1002,6 +1016,7 @@ hstore_populate_record(PG_FUNCTION_ARGS) my_extra = (RecordIOData *) fcinfo->flinfo->fn_extra; my_extra->record_type = InvalidOid; my_extra->record_typmod = 0; + my_extra->domain_info = NULL; } if (my_extra->record_type != tupType || @@ -1103,6 +1118,17 @@ hstore_populate_record(PG_FUNCTION_ARGS) rettuple = heap_form_tuple(tupdesc, values, nulls); + /* + * If the target type is domain over composite, all we know at this point + * is that we've made a valid value of the base composite type. Must + * check domain constraints before deciding we're done. + */ + if (argtype != tupdesc->tdtypeid) + domain_check(HeapTupleGetDatum(rettuple), false, + argtype, + &my_extra->domain_info, + fcinfo->flinfo->fn_mcxt); + ReleaseTupleDesc(tupdesc); PG_RETURN_DATUM(HeapTupleGetDatum(rettuple)); diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index b397e18858..3d46098263 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -4379,8 +4379,7 @@ SET xmloption TO { DOCUMENT | CONTENT }; underlying type — for example, any operator or function that can be applied to the underlying type will work on the domain type. The underlying type can be any built-in or user-defined base type, - enum type, array or range type, or another domain. (Currently, domains - over composite types are not implemented.) + enum type, array type, composite type, range type, or another domain. diff --git a/doc/src/sgml/rowtypes.sgml b/doc/src/sgml/rowtypes.sgml index 7e436a4606..f80d44bc75 100644 --- a/doc/src/sgml/rowtypes.sgml +++ b/doc/src/sgml/rowtypes.sgml @@ -84,8 +84,9 @@ CREATE TABLE inventory_item ( restriction of the current implementation: since no constraints are associated with a composite type, the constraints shown in the table definition do not apply to values of the composite type - outside the table. (A partial workaround is to use domain - types as members of composite types.) + outside the table. (To work around this, create a domain over the composite + type, and apply the desired constraints as CHECK + constraints of the domain.) diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index d9fccaa17c..9bdb72cd98 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -351,6 +351,31 @@ CREATE FUNCTION tf1 (accountno integer, debit numeric) RETURNS numeric AS $$ WHERE accountno = tf1.accountno RETURNING balance; $$ LANGUAGE SQL; + + + + + A SQL function must return exactly its declared + result type. This may require inserting an explicit cast. 
+ For example, suppose we wanted the + previous add_em function to return + type float8 instead. This won't work: + + +CREATE FUNCTION add_em(integer, integer) RETURNS float8 AS $$ + SELECT $1 + $2; +$$ LANGUAGE SQL; + + + even though in other contexts PostgreSQL + would be willing to insert an implicit cast to + convert integer to float8. + We need to write it as + + +CREATE FUNCTION add_em(integer, integer) RETURNS float8 AS $$ + SELECT ($1 + $2)::float8; +$$ LANGUAGE SQL; @@ -452,13 +477,16 @@ $$ LANGUAGE SQL; - You must typecast the expressions to match the - definition of the composite type, or you will get errors like this: + We must ensure each expression's type matches the corresponding + column of the composite type, inserting a cast if necessary. + Otherwise we'll get errors like this: ERROR: function declared to return emp returns varchar instead of text at column 1 + As with the base-type case, the function will not insert any casts + automatically. @@ -478,6 +506,11 @@ $$ LANGUAGE SQL; in this situation, but it is a handy alternative in some cases — for example, if we need to compute the result by calling another function that returns the desired composite value. + Another example is that if we are trying to write a function that + returns a domain over composite, rather than a plain composite type, + it is always necessary to write it as returning a single column, + since there is no other way to produce a value that is exactly of + the domain type. diff --git a/src/backend/catalog/pg_inherits.c b/src/backend/catalog/pg_inherits.c index 245a374fc9..1bd8a58b7f 100644 --- a/src/backend/catalog/pg_inherits.c +++ b/src/backend/catalog/pg_inherits.c @@ -301,6 +301,11 @@ has_superclass(Oid relationId) /* * Given two type OIDs, determine whether the first is a complex type * (class type) that inherits from the second. + * + * This essentially asks whether the first type is guaranteed to be coercible + * to the second. Therefore, we allow the first type to be a domain over a + * complex type that inherits from the second; that creates no difficulties. + * But the second type cannot be a domain. 
*/ bool typeInheritsFrom(Oid subclassTypeId, Oid superclassTypeId) @@ -314,9 +319,9 @@ typeInheritsFrom(Oid subclassTypeId, Oid superclassTypeId) ListCell *queue_item; /* We need to work with the associated relation OIDs */ - subclassRelid = typeidTypeRelid(subclassTypeId); + subclassRelid = typeOrDomainTypeRelid(subclassTypeId); if (subclassRelid == InvalidOid) - return false; /* not a complex type */ + return false; /* not a complex type or domain over one */ superclassRelid = typeidTypeRelid(superclassTypeId); if (superclassRelid == InvalidOid) return false; /* not a complex type */ diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c index 571856e525..47916cfb54 100644 --- a/src/backend/catalog/pg_proc.c +++ b/src/backend/catalog/pg_proc.c @@ -262,7 +262,7 @@ ProcedureCreate(const char *procedureName, */ if (parameterCount == 1 && OidIsValid(parameterTypes->values[0]) && - (relid = typeidTypeRelid(parameterTypes->values[0])) != InvalidOid && + (relid = typeOrDomainTypeRelid(parameterTypes->values[0])) != InvalidOid && get_attnum(relid, procedureName) != InvalidAttrNumber) ereport(ERROR, (errcode(ERRCODE_DUPLICATE_COLUMN), diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 2d4dcd7556..3ab808715b 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -5091,6 +5091,8 @@ find_typed_table_dependencies(Oid typeOid, const char *typeName, DropBehavior be * isn't suitable, throw an error. Currently, we require that the type * originated with CREATE TYPE AS. We could support any row type, but doing so * would require handling a number of extra corner cases in the DDL commands. + * (Also, allowing domain-over-composite would open up a can of worms about + * whether and how the domain's constraints should apply to derived tables.) */ void check_of_type(HeapTuple typetuple) @@ -6190,8 +6192,8 @@ ATPrepSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newVa RelationGetRelationName(rel)))); /* - * We allow referencing columns by numbers only for indexes, since - * table column numbers could contain gaps if columns are later dropped. + * We allow referencing columns by numbers only for indexes, since table + * column numbers could contain gaps if columns are later dropped. */ if (rel->rd_rel->relkind != RELKIND_INDEX && !colName) ereport(ERROR, diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c index c1b87e09e7..7df942b18b 100644 --- a/src/backend/commands/typecmds.c +++ b/src/backend/commands/typecmds.c @@ -798,13 +798,16 @@ DefineDomain(CreateDomainStmt *stmt) basetypeoid = HeapTupleGetOid(typeTup); /* - * Base type must be a plain base type, another domain, an enum or a range - * type. Domains over pseudotypes would create a security hole. Domains - * over composite types might be made to work in the future, but not - * today. + * Base type must be a plain base type, a composite type, another domain, + * an enum or a range type. Domains over pseudotypes would create a + * security hole. (It would be shorter to code this to just check for + * pseudotypes; but it seems safer to call out the specific typtypes that + * are supported, rather than assume that all future typtypes would be + * automatically supported.) 
*/ typtype = baseType->typtype; if (typtype != TYPTYPE_BASE && + typtype != TYPTYPE_COMPOSITE && typtype != TYPTYPE_DOMAIN && typtype != TYPTYPE_ENUM && typtype != TYPTYPE_RANGE) diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index c5e97ef9e2..a0f537b706 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -3469,8 +3469,12 @@ ExecEvalWholeRowVar(ExprState *state, ExprEvalStep *op, ExprContext *econtext) * generates an INT4 NULL regardless of the dropped column type). * If we find a dropped column and cannot verify that case (1) * holds, we have to use the slow path to check (2) for each row. + * + * If vartype is a domain over composite, just look through that + * to the base composite type. */ - var_tupdesc = lookup_rowtype_tupdesc(variable->vartype, -1); + var_tupdesc = lookup_rowtype_tupdesc_domain(variable->vartype, + -1, false); slot_tupdesc = slot->tts_tupleDescriptor; diff --git a/src/backend/executor/execSRF.c b/src/backend/executor/execSRF.c index cce771d4be..c24d8b9ead 100644 --- a/src/backend/executor/execSRF.c +++ b/src/backend/executor/execSRF.c @@ -502,7 +502,7 @@ ExecMakeFunctionResultSet(SetExprState *fcache, { TupleTableSlot *slot = fcache->funcResultSlot; MemoryContext oldContext; - bool foundTup; + bool foundTup; /* * Have to make sure tuple in slot lives long enough, otherwise @@ -734,7 +734,8 @@ init_sexpr(Oid foid, Oid input_collation, Expr *node, /* Must save tupdesc in sexpr's context */ oldcontext = MemoryContextSwitchTo(sexprCxt); - if (functypclass == TYPEFUNC_COMPOSITE) + if (functypclass == TYPEFUNC_COMPOSITE || + functypclass == TYPEFUNC_COMPOSITE_DOMAIN) { /* Composite data type, e.g. a table's row type */ Assert(tupdesc); diff --git a/src/backend/executor/functions.c b/src/backend/executor/functions.c index 42a4ca94e9..98eb777421 100644 --- a/src/backend/executor/functions.c +++ b/src/backend/executor/functions.c @@ -1665,7 +1665,15 @@ check_sql_fn_retval(Oid func_id, Oid rettype, List *queryTreeList, } else if (fn_typtype == TYPTYPE_COMPOSITE || rettype == RECORDOID) { - /* Returns a rowtype */ + /* + * Returns a rowtype. + * + * Note that we will not consider a domain over composite to be a + * "rowtype" return type; it goes through the scalar case above. This + * is because SQL functions don't provide any implicit casting to the + * result type, so there is no way to produce a domain-over-composite + * result except by computing it as an explicit single-column result. + */ TupleDesc tupdesc; int tupnatts; /* physical number of columns in tuple */ int tuplogcols; /* # of nondeleted columns in tuple */ @@ -1711,7 +1719,10 @@ check_sql_fn_retval(Oid func_id, Oid rettype, List *queryTreeList, } } - /* Is the rowtype fixed, or determined only at runtime? */ + /* + * Is the rowtype fixed, or determined only at runtime? (Note we + * cannot see TYPEFUNC_COMPOSITE_DOMAIN here.) + */ if (get_func_result_type(func_id, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) { /* diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c index 9f87a7e5cd..de476ac75c 100644 --- a/src/backend/executor/nodeFunctionscan.c +++ b/src/backend/executor/nodeFunctionscan.c @@ -383,7 +383,8 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags) &funcrettype, &tupdesc); - if (functypclass == TYPEFUNC_COMPOSITE) + if (functypclass == TYPEFUNC_COMPOSITE || + functypclass == TYPEFUNC_COMPOSITE_DOMAIN) { /* Composite data type, e.g. 
a table's row type */ Assert(tupdesc); diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c index b58eb0f815..7a676531ae 100644 --- a/src/backend/nodes/makefuncs.c +++ b/src/backend/nodes/makefuncs.c @@ -120,8 +120,10 @@ makeVarFromTargetEntry(Index varno, * table entry, and varattno == 0 to signal that it references the whole * tuple. (Use of zero here is unclean, since it could easily be confused * with error cases, but it's not worth changing now.) The vartype indicates - * a rowtype; either a named composite type, or RECORD. This function - * encapsulates the logic for determining the correct rowtype OID to use. + * a rowtype; either a named composite type, or a domain over a named + * composite type (only possible if the RTE is a function returning that), + * or RECORD. This function encapsulates the logic for determining the + * correct rowtype OID to use. * * If allowScalar is true, then for the case where the RTE is a single function * returning a non-composite result type, we produce a normal Var referencing diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index 7961362280..5344f6167a 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -2356,6 +2356,10 @@ CommuteRowCompareExpr(RowCompareExpr *clause) * is still what it was when the expression was parsed. This is needed to * guard against improper simplification after ALTER COLUMN TYPE. (XXX we * may well need to make similar checks elsewhere?) + * + * rowtypeid may come from a whole-row Var, and therefore it can be a domain + * over composite, but for this purpose we only care about checking the type + * of a contained field. */ static bool rowtype_field_matches(Oid rowtypeid, int fieldnum, @@ -2368,7 +2372,7 @@ rowtype_field_matches(Oid rowtypeid, int fieldnum, /* No issue for RECORD, since there is no way to ALTER such a type */ if (rowtypeid == RECORDOID) return true; - tupdesc = lookup_rowtype_tupdesc(rowtypeid, -1); + tupdesc = lookup_rowtype_tupdesc_domain(rowtypeid, -1, false); if (fieldnum <= 0 || fieldnum > tupdesc->natts) { ReleaseTupleDesc(tupdesc); @@ -5005,7 +5009,9 @@ inline_set_returning_function(PlannerInfo *root, RangeTblEntry *rte) * * If the function returns a composite type, don't inline unless the check * shows it's returning a whole tuple result; otherwise what it's - * returning is a single composite column which is not what we need. + * returning is a single composite column which is not what we need. (Like + * check_sql_fn_retval, we deliberately exclude domains over composite + * here.) */ if (!check_sql_fn_retval(func_oid, fexpr->funcresulttype, querytree_list, diff --git a/src/backend/parser/parse_coerce.c b/src/backend/parser/parse_coerce.c index 53457dc2c8..def41b36b6 100644 --- a/src/backend/parser/parse_coerce.c +++ b/src/backend/parser/parse_coerce.c @@ -499,9 +499,26 @@ coerce_type(ParseState *pstate, Node *node, * Input class type is a subclass of target, so generate an * appropriate runtime conversion (removing unneeded columns and * possibly rearranging the ones that are wanted). + * + * We will also get here when the input is a domain over a subclass of + * the target type. To keep life simple for the executor, we define + * ConvertRowtypeExpr as only working between regular composite types; + * therefore, in such cases insert a RelabelType to smash the input + * expression down to its base type. 
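+	 *
+	 * Illustration (hypothetical types, for exposition): coercing an
+	 * expression of domain type dchild (a domain over composite type
+	 * child, which inherits from parent) up to parent yields, in effect,
+	 *
+	 *		ConvertRowtypeExpr	resulttype = parent
+	 *			RelabelType		resulttype = child
+	 *				<input expression of type dchild>
+	 *
+	 * so ConvertRowtypeExpr itself never has to look through a domain.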
*/ + Oid baseTypeId = getBaseType(inputTypeId); ConvertRowtypeExpr *r = makeNode(ConvertRowtypeExpr); + if (baseTypeId != inputTypeId) + { + RelabelType *rt = makeRelabelType((Expr *) node, + baseTypeId, -1, + InvalidOid, + COERCE_IMPLICIT_CAST); + + rt->location = location; + node = (Node *) rt; + } r->arg = (Expr *) node; r->resulttype = targetTypeId; r->convertformat = cformat; @@ -966,6 +983,8 @@ coerce_record_to_complex(ParseState *pstate, Node *node, int location) { RowExpr *rowexpr; + Oid baseTypeId; + int32 baseTypeMod = -1; TupleDesc tupdesc; List *args = NIL; List *newargs; @@ -1001,7 +1020,14 @@ coerce_record_to_complex(ParseState *pstate, Node *node, format_type_be(targetTypeId)), parser_coercion_errposition(pstate, location, node))); - tupdesc = lookup_rowtype_tupdesc(targetTypeId, -1); + /* + * Look up the composite type, accounting for possibility that what we are + * given is a domain over composite. + */ + baseTypeId = getBaseTypeAndTypmod(targetTypeId, &baseTypeMod); + tupdesc = lookup_rowtype_tupdesc(baseTypeId, baseTypeMod); + + /* Process the fields */ newargs = NIL; ucolno = 1; arg = list_head(args); @@ -1070,10 +1096,22 @@ coerce_record_to_complex(ParseState *pstate, Node *node, rowexpr = makeNode(RowExpr); rowexpr->args = newargs; - rowexpr->row_typeid = targetTypeId; + rowexpr->row_typeid = baseTypeId; rowexpr->row_format = cformat; rowexpr->colnames = NIL; /* not needed for named target type */ rowexpr->location = location; + + /* If target is a domain, apply constraints */ + if (baseTypeId != targetTypeId) + { + rowexpr->row_format = COERCE_IMPLICIT_CAST; + return coerce_to_domain((Node *) rowexpr, + baseTypeId, baseTypeMod, + targetTypeId, + ccontext, cformat, location, + false); + } + return (Node *) rowexpr; } @@ -2401,13 +2439,13 @@ is_complex_array(Oid typid) /* * Check whether reltypeId is the row type of a typed table of type - * reloftypeId. (This is conceptually similar to the subtype - * relationship checked by typeInheritsFrom().) + * reloftypeId, or is a domain over such a row type. (This is conceptually + * similar to the subtype relationship checked by typeInheritsFrom().) */ static bool typeIsOfTypedTable(Oid reltypeId, Oid reloftypeId) { - Oid relid = typeidTypeRelid(reltypeId); + Oid relid = typeOrDomainTypeRelid(reltypeId); bool result = false; if (relid) diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c index 2f2f2c7fb0..fc0d6bc2f2 100644 --- a/src/backend/parser/parse_func.c +++ b/src/backend/parser/parse_func.c @@ -1819,18 +1819,19 @@ ParseComplexProjection(ParseState *pstate, char *funcname, Node *first_arg, } /* - * Else do it the hard way with get_expr_result_type(). + * Else do it the hard way with get_expr_result_tupdesc(). * * If it's a Var of type RECORD, we have to work even harder: we have to - * find what the Var refers to, and pass that to get_expr_result_type. + * find what the Var refers to, and pass that to get_expr_result_tupdesc. * That task is handled by expandRecordVariable(). 
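*
* Illustration (for exposition; this mirrors the code just below): the
* new helper's convention is
*
*		tupdesc = get_expr_result_tupdesc(expr, true);
*		if (tupdesc == NULL)
*			return NULL;		/* unresolvable rowtype, give up */
*
* i.e. with noError = true it reports failure by returning NULL instead
* of throwing.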
*/ if (IsA(first_arg, Var) && ((Var *) first_arg)->vartype == RECORDOID) tupdesc = expandRecordVariable(pstate, (Var *) first_arg, 0); - else if (get_expr_result_type(first_arg, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) + else + tupdesc = get_expr_result_tupdesc(first_arg, true); + if (!tupdesc) return NULL; /* unresolvable RECORD type */ - Assert(tupdesc); for (i = 0; i < tupdesc->natts; i++) { diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index a9273affb2..ca32a37e26 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -1496,7 +1496,8 @@ addRangeTableEntryForFunction(ParseState *pstate, parser_errposition(pstate, exprLocation(funcexpr)))); } - if (functypclass == TYPEFUNC_COMPOSITE) + if (functypclass == TYPEFUNC_COMPOSITE || + functypclass == TYPEFUNC_COMPOSITE_DOMAIN) { /* Composite data type, e.g. a table's row type */ Assert(tupdesc); @@ -2245,7 +2246,8 @@ expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up, functypclass = get_expr_result_type(rtfunc->funcexpr, &funcrettype, &tupdesc); - if (functypclass == TYPEFUNC_COMPOSITE) + if (functypclass == TYPEFUNC_COMPOSITE || + functypclass == TYPEFUNC_COMPOSITE_DOMAIN) { /* Composite data type, e.g. a table's row type */ Assert(tupdesc); @@ -2765,7 +2767,8 @@ get_rte_attribute_type(RangeTblEntry *rte, AttrNumber attnum, &funcrettype, &tupdesc); - if (functypclass == TYPEFUNC_COMPOSITE) + if (functypclass == TYPEFUNC_COMPOSITE || + functypclass == TYPEFUNC_COMPOSITE_DOMAIN) { /* Composite data type, e.g. a table's row type */ Form_pg_attribute att_tup; @@ -2966,14 +2969,11 @@ get_rte_attribute_is_dropped(RangeTblEntry *rte, AttrNumber attnum) if (attnum > atts_done && attnum <= atts_done + rtfunc->funccolcount) { - TypeFuncClass functypclass; - Oid funcrettype; TupleDesc tupdesc; - functypclass = get_expr_result_type(rtfunc->funcexpr, - &funcrettype, - &tupdesc); - if (functypclass == TYPEFUNC_COMPOSITE) + tupdesc = get_expr_result_tupdesc(rtfunc->funcexpr, + true); + if (tupdesc) { /* Composite data type, e.g. a table's row type */ Form_pg_attribute att_tup; diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c index 2547524025..01fd726a3d 100644 --- a/src/backend/parser/parse_target.c +++ b/src/backend/parser/parse_target.c @@ -725,6 +725,8 @@ transformAssignmentIndirection(ParseState *pstate, else { FieldStore *fstore; + Oid baseTypeId; + int32 baseTypeMod; Oid typrelid; AttrNumber attnum; Oid fieldTypeId; @@ -752,7 +754,14 @@ transformAssignmentIndirection(ParseState *pstate, /* No subscripts, so can process field selection here */ - typrelid = typeidTypeRelid(targetTypeId); + /* + * Look up the composite type, accounting for possibility that + * what we are given is a domain over composite. 
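+	 *
+	 * Illustration (hypothetical column, for exposition): for an
+	 * assignment such as UPDATE t SET d.r = ..., where column d is of a
+	 * domain over a composite type, the code below builds
+	 *
+	 *		CoerceToDomain	resulttype = <the domain>
+	 *			FieldStore	resulttype = <its base composite type>
+	 *
+	 * so that the domain's constraints are re-checked after the field
+	 * has been replaced.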
+ */ + baseTypeMod = targetTypMod; + baseTypeId = getBaseTypeAndTypmod(targetTypeId, &baseTypeMod); + + typrelid = typeidTypeRelid(baseTypeId); if (!typrelid) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), @@ -796,7 +805,17 @@ transformAssignmentIndirection(ParseState *pstate, fstore->arg = (Expr *) basenode; fstore->newvals = list_make1(rhs); fstore->fieldnums = list_make1_int(attnum); - fstore->resulttype = targetTypeId; + fstore->resulttype = baseTypeId; + + /* If target is a domain, apply constraints */ + if (baseTypeId != targetTypeId) + return coerce_to_domain((Node *) fstore, + baseTypeId, baseTypeMod, + targetTypeId, + COERCION_IMPLICIT, + COERCE_IMPLICIT_CAST, + location, + false); return (Node *) fstore; } @@ -1164,7 +1183,7 @@ ExpandColumnRefStar(ParseState *pstate, ColumnRef *cref, Node *node; node = pstate->p_post_columnref_hook(pstate, cref, - (Node *) rte); + (Node *) rte); if (node != NULL) { if (rte != NULL) @@ -1387,22 +1406,18 @@ ExpandRowReference(ParseState *pstate, Node *expr, * (This can be pretty inefficient if the expression involves nontrivial * computation :-(.) * - * Verify it's a composite type, and get the tupdesc. We use - * get_expr_result_type() because that can handle references to functions - * returning anonymous record types. If that fails, use - * lookup_rowtype_tupdesc(), which will almost certainly fail as well, but - * it will give an appropriate error message. + * Verify it's a composite type, and get the tupdesc. + * get_expr_result_tupdesc() handles this conveniently. * * If it's a Var of type RECORD, we have to work even harder: we have to - * find what the Var refers to, and pass that to get_expr_result_type. + * find what the Var refers to, and pass that to get_expr_result_tupdesc. * That task is handled by expandRecordVariable(). */ if (IsA(expr, Var) && ((Var *) expr)->vartype == RECORDOID) tupleDesc = expandRecordVariable(pstate, (Var *) expr, 0); - else if (get_expr_result_type(expr, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE) - tupleDesc = lookup_rowtype_tupdesc_copy(exprType(expr), - exprTypmod(expr)); + else + tupleDesc = get_expr_result_tupdesc(expr, false); Assert(tupleDesc); /* Generate a list of references to the individual fields */ @@ -1610,15 +1625,9 @@ expandRecordVariable(ParseState *pstate, Var *var, int levelsup) /* * We now have an expression we can't expand any more, so see if - * get_expr_result_type() can do anything with it. If not, pass to - * lookup_rowtype_tupdesc() which will probably fail, but will give an - * appropriate error message while failing. + * get_expr_result_tupdesc() can do anything with it. */ - if (get_expr_result_type(expr, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE) - tupleDesc = lookup_rowtype_tupdesc_copy(exprType(expr), - exprTypmod(expr)); - - return tupleDesc; + return get_expr_result_tupdesc(expr, false); } diff --git a/src/backend/parser/parse_type.c b/src/backend/parser/parse_type.c index d0b3fbeb57..b032651cf4 100644 --- a/src/backend/parser/parse_type.c +++ b/src/backend/parser/parse_type.c @@ -641,7 +641,10 @@ stringTypeDatum(Type tp, char *string, int32 atttypmod) return OidInputFunctionCall(typinput, string, typioparam, atttypmod); } -/* given a typeid, return the type's typrelid (associated relation, if any) */ +/* + * Given a typeid, return the type's typrelid (associated relation), if any. + * Returns InvalidOid if type is not a composite type. 
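+	 *
+	 * Illustration (hypothetical OIDs, for exposition): for a domain dct
+	 * over a composite type ct,
+	 *
+	 *		typeidTypeRelid(dct_oid)		yields InvalidOid, while
+	 *		typeOrDomainTypeRelid(dct_oid)	yields ct's typrelid;
+	 *
+	 * the latter function is added just below.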
+ */ Oid typeidTypeRelid(Oid type_id) { @@ -652,13 +655,44 @@ typeidTypeRelid(Oid type_id) typeTuple = SearchSysCache1(TYPEOID, ObjectIdGetDatum(type_id)); if (!HeapTupleIsValid(typeTuple)) elog(ERROR, "cache lookup failed for type %u", type_id); - type = (Form_pg_type) GETSTRUCT(typeTuple); result = type->typrelid; ReleaseSysCache(typeTuple); return result; } +/* + * Given a typeid, return the type's typrelid (associated relation), if any. + * Returns InvalidOid if type is not a composite type or a domain over one. + * This is the same as typeidTypeRelid(getBaseType(type_id)), but faster. + */ +Oid +typeOrDomainTypeRelid(Oid type_id) +{ + HeapTuple typeTuple; + Form_pg_type type; + Oid result; + + for (;;) + { + typeTuple = SearchSysCache1(TYPEOID, ObjectIdGetDatum(type_id)); + if (!HeapTupleIsValid(typeTuple)) + elog(ERROR, "cache lookup failed for type %u", type_id); + type = (Form_pg_type) GETSTRUCT(typeTuple); + if (type->typtype != TYPTYPE_DOMAIN) + { + /* Not a domain, so done looking through domains */ + break; + } + /* It is a domain, so examine the base type instead */ + type_id = type->typbasetype; + ReleaseSysCache(typeTuple); + } + result = type->typrelid; + ReleaseSysCache(typeTuple); + return result; +} + /* * error context callback for parse failure during parseTypeString() */ diff --git a/src/backend/utils/adt/domains.c b/src/backend/utils/adt/domains.c index e61d91bd88..86f916ff43 100644 --- a/src/backend/utils/adt/domains.c +++ b/src/backend/utils/adt/domains.c @@ -82,9 +82,10 @@ domain_state_setup(Oid domainType, bool binary, MemoryContext mcxt) * Verify that domainType represents a valid domain type. We need to be * careful here because domain_in and domain_recv can be called from SQL, * possibly with incorrect arguments. We use lookup_type_cache mainly - * because it will throw a clean user-facing error for a bad OID. + * because it will throw a clean user-facing error for a bad OID; but also + * it can cache the underlying base type info. 
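+	 *
+	 * Illustration (for exposition; this mirrors the code below): the
+	 * cached lookup pattern is
+	 *
+	 *		typentry = lookup_type_cache(domainType,
+	 *									 TYPECACHE_DOMAIN_BASE_INFO);
+	 *		baseType = typentry->domainBaseType;
+	 *		typmod = typentry->domainBaseTypmod;
+	 *
+	 * which gives the same answer as getBaseTypeAndTypmod() but avoids
+	 * repeating the syscache walk on every call.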
*/ - typentry = lookup_type_cache(domainType, 0); + typentry = lookup_type_cache(domainType, TYPECACHE_DOMAIN_BASE_INFO); if (typentry->typtype != TYPTYPE_DOMAIN) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), @@ -92,8 +93,8 @@ domain_state_setup(Oid domainType, bool binary, MemoryContext mcxt) format_type_be(domainType)))); /* Find out the base type */ - my_extra->typtypmod = -1; - baseType = getBaseTypeAndTypmod(domainType, &my_extra->typtypmod); + baseType = typentry->domainBaseType; + my_extra->typtypmod = typentry->domainBaseTypmod; /* Look up underlying I/O function */ if (binary) diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c index d36fd9e929..242d8fe743 100644 --- a/src/backend/utils/adt/jsonfuncs.c +++ b/src/backend/utils/adt/jsonfuncs.c @@ -169,6 +169,11 @@ typedef struct CompositeIOData */ RecordIOData *record_io; /* metadata cache for populate_record() */ TupleDesc tupdesc; /* cached tuple descriptor */ + /* these fields differ from target type only if domain over composite: */ + Oid base_typid; /* base type id */ + int32 base_typmod; /* base type modifier */ + /* this field is used only if target type is domain over composite: */ + void *domain_info; /* opaque cache for domain checks */ } CompositeIOData; /* structure to cache metadata needed for populate_domain() */ @@ -186,6 +191,7 @@ typedef enum TypeCat TYPECAT_SCALAR = 's', TYPECAT_ARRAY = 'a', TYPECAT_COMPOSITE = 'c', + TYPECAT_COMPOSITE_DOMAIN = 'C', TYPECAT_DOMAIN = 'd' } TypeCat; @@ -217,7 +223,15 @@ struct RecordIOData ColumnIOData columns[FLEXIBLE_ARRAY_MEMBER]; }; -/* state for populate_recordset */ +/* per-query cache for populate_recordset */ +typedef struct PopulateRecordsetCache +{ + Oid argtype; /* declared type of the record argument */ + ColumnIOData c; /* metadata cache for populate_composite() */ + MemoryContext fn_mcxt; /* where this is stored */ +} PopulateRecordsetCache; + +/* per-call state for populate_recordset */ typedef struct PopulateRecordsetState { JsonLexContext *lex; @@ -227,17 +241,15 @@ typedef struct PopulateRecordsetState char *save_json_start; JsonTokenType saved_token_type; Tuplestorestate *tuple_store; - TupleDesc ret_tdesc; HeapTupleHeader rec; - RecordIOData **my_extra; - MemoryContext fn_mcxt; /* used to stash IO funcs */ + PopulateRecordsetCache *cache; } PopulateRecordsetState; /* structure to cache metadata needed for populate_record_worker() */ typedef struct PopulateRecordCache { - Oid argtype; /* verified row type of the first argument */ - CompositeIOData io; /* metadata cache for populate_composite() */ + Oid argtype; /* declared type of the record argument */ + ColumnIOData c; /* metadata cache for populate_composite() */ } PopulateRecordCache; /* common data for populate_array_json() and populate_array_dim_jsonb() */ @@ -415,16 +427,13 @@ static Datum populate_record_worker(FunctionCallInfo fcinfo, const char *funcnam static HeapTupleHeader populate_record(TupleDesc tupdesc, RecordIOData **record_p, HeapTupleHeader defaultval, MemoryContext mcxt, JsObject *obj); -static Datum populate_record_field(ColumnIOData *col, Oid typid, int32 typmod, - const char *colname, MemoryContext mcxt, - Datum defaultval, JsValue *jsv, bool *isnull); static void JsValueToJsObject(JsValue *jsv, JsObject *jso); -static Datum populate_composite(CompositeIOData *io, Oid typid, int32 typmod, +static Datum populate_composite(CompositeIOData *io, Oid typid, const char *colname, MemoryContext mcxt, - HeapTupleHeader defaultval, JsValue *jsv); + HeapTupleHeader 
defaultval, JsValue *jsv, bool isnull); static Datum populate_scalar(ScalarIOData *io, Oid typid, int32 typmod, JsValue *jsv); static void prepare_column_cache(ColumnIOData *column, Oid typid, int32 typmod, - MemoryContext mcxt, bool json); + MemoryContext mcxt, bool need_scalar); static Datum populate_record_field(ColumnIOData *col, Oid typid, int32 typmod, const char *colname, MemoryContext mcxt, Datum defaultval, JsValue *jsv, bool *isnull); @@ -2704,25 +2713,16 @@ JsValueToJsObject(JsValue *jsv, JsObject *jso) } } -/* recursively populate a composite (row type) value from json/jsonb */ -static Datum -populate_composite(CompositeIOData *io, - Oid typid, - int32 typmod, - const char *colname, - MemoryContext mcxt, - HeapTupleHeader defaultval, - JsValue *jsv) +/* acquire or update cached tuple descriptor for a composite type */ +static void +update_cached_tupdesc(CompositeIOData *io, MemoryContext mcxt) { - HeapTupleHeader tuple; - JsObject jso; - - /* acquire cached tuple descriptor */ if (!io->tupdesc || - io->tupdesc->tdtypeid != typid || - io->tupdesc->tdtypmod != typmod) + io->tupdesc->tdtypeid != io->base_typid || + io->tupdesc->tdtypmod != io->base_typmod) { - TupleDesc tupdesc = lookup_rowtype_tupdesc(typid, typmod); + TupleDesc tupdesc = lookup_rowtype_tupdesc(io->base_typid, + io->base_typmod); MemoryContext oldcxt; if (io->tupdesc) @@ -2735,17 +2735,50 @@ populate_composite(CompositeIOData *io, ReleaseTupleDesc(tupdesc); } +} + +/* recursively populate a composite (row type) value from json/jsonb */ +static Datum +populate_composite(CompositeIOData *io, + Oid typid, + const char *colname, + MemoryContext mcxt, + HeapTupleHeader defaultval, + JsValue *jsv, + bool isnull) +{ + Datum result; - /* prepare input value */ - JsValueToJsObject(jsv, &jso); + /* acquire/update cached tuple descriptor */ + update_cached_tupdesc(io, mcxt); - /* populate resulting record tuple */ - tuple = populate_record(io->tupdesc, &io->record_io, - defaultval, mcxt, &jso); + if (isnull) + result = (Datum) 0; + else + { + HeapTupleHeader tuple; + JsObject jso; + + /* prepare input value */ + JsValueToJsObject(jsv, &jso); + + /* populate resulting record tuple */ + tuple = populate_record(io->tupdesc, &io->record_io, + defaultval, mcxt, &jso); + result = HeapTupleHeaderGetDatum(tuple); - JsObjectFree(&jso); + JsObjectFree(&jso); + } + + /* + * If it's domain over composite, check domain constraints. (This should + * probably get refactored so that we can see the TYPECAT value, but for + * now, we can tell by comparing typid to base_typid.) + */ + if (typid != io->base_typid && typid != RECORDOID) + domain_check(result, isnull, typid, &io->domain_info, mcxt); - return HeapTupleHeaderGetDatum(tuple); + return result; } /* populate non-null scalar value from json/jsonb value */ @@ -2867,7 +2900,7 @@ prepare_column_cache(ColumnIOData *column, Oid typid, int32 typmod, MemoryContext mcxt, - bool json) + bool need_scalar) { HeapTuple tup; Form_pg_type type; @@ -2883,18 +2916,43 @@ prepare_column_cache(ColumnIOData *column, if (type->typtype == TYPTYPE_DOMAIN) { - column->typcat = TYPECAT_DOMAIN; - column->io.domain.base_typid = type->typbasetype; - column->io.domain.base_typmod = type->typtypmod; - column->io.domain.base_io = MemoryContextAllocZero(mcxt, - sizeof(ColumnIOData)); - column->io.domain.domain_info = NULL; + /* + * We can move directly to the bottom base type; domain_check() will + * take care of checking all constraints for a stack of domains. 
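+	 *
+	 * Illustration (hypothetical nesting, for exposition): if d2 is a
+	 * domain over d1, itself a domain over composite type ct, then
+	 *
+	 *		base_typid = getBaseTypeAndTypmod(d2_oid, &base_typmod);
+	 *
+	 * resolves directly to ct, and a single later domain_check() call
+	 * against d2 still validates the constraints of both d2 and d1.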
+ */ + Oid base_typid; + int32 base_typmod = typmod; + + base_typid = getBaseTypeAndTypmod(typid, &base_typmod); + if (get_typtype(base_typid) == TYPTYPE_COMPOSITE) + { + /* domain over composite has its own code path */ + column->typcat = TYPECAT_COMPOSITE_DOMAIN; + column->io.composite.record_io = NULL; + column->io.composite.tupdesc = NULL; + column->io.composite.base_typid = base_typid; + column->io.composite.base_typmod = base_typmod; + column->io.composite.domain_info = NULL; + } + else + { + /* domain over anything else */ + column->typcat = TYPECAT_DOMAIN; + column->io.domain.base_typid = base_typid; + column->io.domain.base_typmod = base_typmod; + column->io.domain.base_io = + MemoryContextAllocZero(mcxt, sizeof(ColumnIOData)); + column->io.domain.domain_info = NULL; + } } else if (type->typtype == TYPTYPE_COMPOSITE || typid == RECORDOID) { column->typcat = TYPECAT_COMPOSITE; column->io.composite.record_io = NULL; column->io.composite.tupdesc = NULL; + column->io.composite.base_typid = typid; + column->io.composite.base_typmod = typmod; + column->io.composite.domain_info = NULL; } else if (type->typlen == -1 && OidIsValid(type->typelem)) { @@ -2906,10 +2964,13 @@ prepare_column_cache(ColumnIOData *column, column->io.array.element_typmod = typmod; } else + { column->typcat = TYPECAT_SCALAR; + need_scalar = true; + } - /* don't need input function when converting from jsonb to jsonb */ - if (json || typid != JSONBOID) + /* caller can force us to look up scalar_io info even for non-scalars */ + if (need_scalar) { Oid typioproc; @@ -2935,9 +2996,12 @@ populate_record_field(ColumnIOData *col, check_stack_depth(); - /* prepare column metadata cache for the given type */ + /* + * Prepare column metadata cache for the given type. Force lookup of the + * scalar_io data so that the json string hack below will work. + */ if (col->typid != typid || col->typmod != typmod) - prepare_column_cache(col, typid, typmod, mcxt, jsv->is_json); + prepare_column_cache(col, typid, typmod, mcxt, true); *isnull = JsValueIsNull(jsv); @@ -2945,11 +3009,15 @@ populate_record_field(ColumnIOData *col, /* try to convert json string to a non-scalar type through input function */ if (JsValueIsString(jsv) && - (typcat == TYPECAT_ARRAY || typcat == TYPECAT_COMPOSITE)) + (typcat == TYPECAT_ARRAY || + typcat == TYPECAT_COMPOSITE || + typcat == TYPECAT_COMPOSITE_DOMAIN)) typcat = TYPECAT_SCALAR; - /* we must perform domain checks for NULLs */ - if (*isnull && typcat != TYPECAT_DOMAIN) + /* we must perform domain checks for NULLs, otherwise exit immediately */ + if (*isnull && + typcat != TYPECAT_DOMAIN && + typcat != TYPECAT_COMPOSITE_DOMAIN) return (Datum) 0; switch (typcat) @@ -2961,12 +3029,13 @@ populate_record_field(ColumnIOData *col, return populate_array(&col->io.array, colname, mcxt, jsv); case TYPECAT_COMPOSITE: - return populate_composite(&col->io.composite, typid, typmod, + case TYPECAT_COMPOSITE_DOMAIN: + return populate_composite(&col->io.composite, typid, colname, mcxt, DatumGetPointer(defaultval) ? DatumGetHeapTupleHeader(defaultval) : NULL, - jsv); + jsv, *isnull); case TYPECAT_DOMAIN: return populate_domain(&col->io.domain, typid, colname, mcxt, @@ -3137,10 +3206,7 @@ populate_record_worker(FunctionCallInfo fcinfo, const char *funcname, int json_arg_num = have_record_arg ? 
1 : 0; Oid jtype = get_fn_expr_argtype(fcinfo->flinfo, json_arg_num); JsValue jsv = {0}; - HeapTupleHeader rec = NULL; - Oid tupType; - int32 tupTypmod; - TupleDesc tupdesc = NULL; + HeapTupleHeader rec; Datum rettuple; JsonbValue jbv; MemoryContext fnmcxt = fcinfo->flinfo->fn_mcxt; @@ -3149,77 +3215,88 @@ populate_record_worker(FunctionCallInfo fcinfo, const char *funcname, Assert(jtype == JSONOID || jtype == JSONBOID); /* - * We arrange to look up the needed I/O info just once per series of - * calls, assuming the record type doesn't change underneath us. + * If first time through, identify input/result record type. Note that + * this stanza looks only at fcinfo context, which can't change during the + * query; so we may not be able to fully resolve a RECORD input type yet. */ if (!cache) + { fcinfo->flinfo->fn_extra = cache = MemoryContextAllocZero(fnmcxt, sizeof(*cache)); - if (have_record_arg) - { - Oid argtype = get_fn_expr_argtype(fcinfo->flinfo, 0); - - if (cache->argtype != argtype) + if (have_record_arg) { - if (!type_is_rowtype(argtype)) + /* + * json{b}_populate_record case: result type will be same as first + * argument's. + */ + cache->argtype = get_fn_expr_argtype(fcinfo->flinfo, 0); + prepare_column_cache(&cache->c, + cache->argtype, -1, + fnmcxt, false); + if (cache->c.typcat != TYPECAT_COMPOSITE && + cache->c.typcat != TYPECAT_COMPOSITE_DOMAIN) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("first argument of %s must be a row type", funcname))); - - cache->argtype = argtype; } - - if (PG_ARGISNULL(0)) + else { - if (PG_ARGISNULL(1)) - PG_RETURN_NULL(); - /* - * We have no tuple to look at, so the only source of type info is - * the argtype. The lookup_rowtype_tupdesc call below will error - * out if we don't have a known composite type oid here. + * json{b}_to_record case: result type is specified by calling + * query. Here it is syntactically impossible to specify the + * target type as domain-over-composite. 
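+	 *
+	 * Illustration (for exposition): a call written as, say,
+	 * json_to_record(...) AS x(a int, b text) supplies the rowtype via
+	 * its column definition list, so the lookup below can only return
+	 *
+	 *		get_call_result_type(fcinfo, NULL, &tupdesc) == TYPEFUNC_COMPOSITE
+	 *
+	 * and never TYPEFUNC_COMPOSITE_DOMAIN.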
*/ - tupType = argtype; - tupTypmod = -1; - } - else - { - rec = PG_GETARG_HEAPTUPLEHEADER(0); + TupleDesc tupdesc; + MemoryContext old_cxt; - if (PG_ARGISNULL(1)) - PG_RETURN_POINTER(rec); - - /* Extract type info from the tuple itself */ - tupType = HeapTupleHeaderGetTypeId(rec); - tupTypmod = HeapTupleHeaderGetTypMod(rec); + if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("function returning record called in context " + "that cannot accept type record"), + errhint("Try calling the function in the FROM clause " + "using a column definition list."))); + + Assert(tupdesc); + cache->argtype = tupdesc->tdtypeid; + + /* Save identified tupdesc */ + old_cxt = MemoryContextSwitchTo(fnmcxt); + cache->c.io.composite.tupdesc = CreateTupleDescCopy(tupdesc); + cache->c.io.composite.base_typid = tupdesc->tdtypeid; + cache->c.io.composite.base_typmod = tupdesc->tdtypmod; + MemoryContextSwitchTo(old_cxt); } } - else - { - /* json{b}_to_record case */ - if (PG_ARGISNULL(0)) - PG_RETURN_NULL(); - - if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("function returning record called in context " - "that cannot accept type record"), - errhint("Try calling the function in the FROM clause " - "using a column definition list."))); - Assert(tupdesc); + /* Collect record arg if we have one */ + if (have_record_arg && !PG_ARGISNULL(0)) + { + rec = PG_GETARG_HEAPTUPLEHEADER(0); /* - * Add tupdesc to the cache and set the appropriate values of - * tupType/tupTypmod for proper cache usage in populate_composite(). + * When declared arg type is RECORD, identify actual record type from + * the tuple itself. Note the lookup_rowtype_tupdesc call in + * update_cached_tupdesc will fail if we're unable to do this. 
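+	 *
+	 * Illustration (behavior, for exposition): for a call such as
+	 * json_populate_record(null::record, '{"x": 0}') there is no tuple
+	 * to inspect, base_typid stays RECORDOID, and the eventual tupdesc
+	 * lookup fails with "record type has not been registered"; with a
+	 * non-null record argument such as row(1,2), the actual rowtype is
+	 * read off the tuple header as below.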
*/ - cache->io.tupdesc = tupdesc; + if (cache->argtype == RECORDOID) + { + cache->c.io.composite.base_typid = HeapTupleHeaderGetTypeId(rec); + cache->c.io.composite.base_typmod = HeapTupleHeaderGetTypMod(rec); + } + } + else + rec = NULL; - tupType = tupdesc->tdtypeid; - tupTypmod = tupdesc->tdtypmod; + /* If no JSON argument, just return the record (if any) unchanged */ + if (PG_ARGISNULL(json_arg_num)) + { + if (rec) + PG_RETURN_POINTER(rec); + else + PG_RETURN_NULL(); } jsv.is_json = jtype == JSONOID; @@ -3245,14 +3322,8 @@ populate_record_worker(FunctionCallInfo fcinfo, const char *funcname, jbv.val.binary.len = VARSIZE(jb) - VARHDRSZ; } - rettuple = populate_composite(&cache->io, tupType, tupTypmod, - NULL, fnmcxt, rec, &jsv); - - if (tupdesc) - { - cache->io.tupdesc = NULL; - ReleaseTupleDesc(tupdesc); - } + rettuple = populate_composite(&cache->c.io.composite, cache->argtype, + NULL, fnmcxt, rec, &jsv, false); PG_RETURN_DATUM(rettuple); } @@ -3438,13 +3509,28 @@ json_to_recordset(PG_FUNCTION_ARGS) static void populate_recordset_record(PopulateRecordsetState *state, JsObject *obj) { + PopulateRecordsetCache *cache = state->cache; + HeapTupleHeader tuphead; HeapTupleData tuple; - HeapTupleHeader tuphead = populate_record(state->ret_tdesc, - state->my_extra, - state->rec, - state->fn_mcxt, - obj); + /* acquire/update cached tuple descriptor */ + update_cached_tupdesc(&cache->c.io.composite, cache->fn_mcxt); + + /* replace record fields from json */ + tuphead = populate_record(cache->c.io.composite.tupdesc, + &cache->c.io.composite.record_io, + state->rec, + cache->fn_mcxt, + obj); + + /* if it's domain over composite, check domain constraints */ + if (cache->c.typcat == TYPECAT_COMPOSITE_DOMAIN) + domain_check(HeapTupleHeaderGetDatum(tuphead), false, + cache->argtype, + &cache->c.io.composite.domain_info, + cache->fn_mcxt); + + /* ok, save into tuplestore */ tuple.t_len = HeapTupleHeaderGetDatumLength(tuphead); ItemPointerSetInvalid(&(tuple.t_self)); tuple.t_tableOid = InvalidOid; @@ -3465,25 +3551,13 @@ populate_recordset_worker(FunctionCallInfo fcinfo, const char *funcname, ReturnSetInfo *rsi; MemoryContext old_cxt; HeapTupleHeader rec; - TupleDesc tupdesc; + PopulateRecordsetCache *cache = fcinfo->flinfo->fn_extra; PopulateRecordsetState *state; - if (have_record_arg) - { - Oid argtype = get_fn_expr_argtype(fcinfo->flinfo, 0); - - if (!type_is_rowtype(argtype)) - ereport(ERROR, - (errcode(ERRCODE_DATATYPE_MISMATCH), - errmsg("first argument of %s must be a row type", - funcname))); - } - rsi = (ReturnSetInfo *) fcinfo->resultinfo; if (!rsi || !IsA(rsi, ReturnSetInfo) || - (rsi->allowedModes & SFRM_Materialize) == 0 || - rsi->expectedDesc == NULL) + (rsi->allowedModes & SFRM_Materialize) == 0) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("set-valued function called in context that " @@ -3492,40 +3566,97 @@ populate_recordset_worker(FunctionCallInfo fcinfo, const char *funcname, rsi->returnMode = SFRM_Materialize; /* - * get the tupdesc from the result set info - it must be a record type - * because we already checked that arg1 is a record type, or we're in a - * to_record function which returns a setof record. + * If first time through, identify input/result record type. Note that + * this stanza looks only at fcinfo context, which can't change during the + * query; so we may not be able to fully resolve a RECORD input type yet. 
*/ - if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("function returning record called in context " - "that cannot accept type record"))); + if (!cache) + { + fcinfo->flinfo->fn_extra = cache = + MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt, sizeof(*cache)); + cache->fn_mcxt = fcinfo->flinfo->fn_mcxt; + + if (have_record_arg) + { + /* + * json{b}_populate_recordset case: result type will be same as + * first argument's. + */ + cache->argtype = get_fn_expr_argtype(fcinfo->flinfo, 0); + prepare_column_cache(&cache->c, + cache->argtype, -1, + cache->fn_mcxt, false); + if (cache->c.typcat != TYPECAT_COMPOSITE && + cache->c.typcat != TYPECAT_COMPOSITE_DOMAIN) + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("first argument of %s must be a row type", + funcname))); + } + else + { + /* + * json{b}_to_recordset case: result type is specified by calling + * query. Here it is syntactically impossible to specify the + * target type as domain-over-composite. + */ + TupleDesc tupdesc; + + if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("function returning record called in context " + "that cannot accept type record"), + errhint("Try calling the function in the FROM clause " + "using a column definition list."))); + + Assert(tupdesc); + cache->argtype = tupdesc->tdtypeid; + + /* Save identified tupdesc */ + old_cxt = MemoryContextSwitchTo(cache->fn_mcxt); + cache->c.io.composite.tupdesc = CreateTupleDescCopy(tupdesc); + cache->c.io.composite.base_typid = tupdesc->tdtypeid; + cache->c.io.composite.base_typmod = tupdesc->tdtypmod; + MemoryContextSwitchTo(old_cxt); + } + } + + /* Collect record arg if we have one */ + if (have_record_arg && !PG_ARGISNULL(0)) + { + rec = PG_GETARG_HEAPTUPLEHEADER(0); + + /* + * When declared arg type is RECORD, identify actual record type from + * the tuple itself. Note the lookup_rowtype_tupdesc call in + * update_cached_tupdesc will fail if we're unable to do this. 
+ */ + if (cache->argtype == RECORDOID) + { + cache->c.io.composite.base_typid = HeapTupleHeaderGetTypeId(rec); + cache->c.io.composite.base_typmod = HeapTupleHeaderGetTypMod(rec); + } + } + else + rec = NULL; /* if the json is null send back an empty set */ if (PG_ARGISNULL(json_arg_num)) PG_RETURN_NULL(); - if (!have_record_arg || PG_ARGISNULL(0)) - rec = NULL; - else - rec = PG_GETARG_HEAPTUPLEHEADER(0); - state = palloc0(sizeof(PopulateRecordsetState)); - /* make these in a sufficiently long-lived memory context */ + /* make tuplestore in a sufficiently long-lived memory context */ old_cxt = MemoryContextSwitchTo(rsi->econtext->ecxt_per_query_memory); - state->ret_tdesc = CreateTupleDescCopy(tupdesc); - BlessTupleDesc(state->ret_tdesc); state->tuple_store = tuplestore_begin_heap(rsi->allowedModes & SFRM_Materialize_Random, false, work_mem); MemoryContextSwitchTo(old_cxt); state->function_name = funcname; - state->my_extra = (RecordIOData **) &fcinfo->flinfo->fn_extra; + state->cache = cache; state->rec = rec; - state->fn_mcxt = fcinfo->flinfo->fn_mcxt; if (jtype == JSONOID) { @@ -3592,7 +3723,7 @@ populate_recordset_worker(FunctionCallInfo fcinfo, const char *funcname, } rsi->setResult = state->tuple_store; - rsi->setDesc = state->ret_tdesc; + rsi->setDesc = cache->c.io.composite.tupdesc; PG_RETURN_NULL(); } diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 84759b6149..b1e70a0d19 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -6731,17 +6731,12 @@ get_name_for_var_field(Var *var, int fieldno, /* * If it's a Var of type RECORD, we have to find what the Var refers to; - * if not, we can use get_expr_result_type. If that fails, we try - * lookup_rowtype_tupdesc, which will probably fail too, but will ereport - * an acceptable message. + * if not, we can use get_expr_result_tupdesc(). */ if (!IsA(var, Var) || var->vartype != RECORDOID) { - if (get_expr_result_type((Node *) var, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE) - tupleDesc = lookup_rowtype_tupdesc_copy(exprType((Node *) var), - exprTypmod((Node *) var)); - Assert(tupleDesc); + tupleDesc = get_expr_result_tupdesc((Node *) var, false); /* Got the tupdesc, so we can extract the field name */ Assert(fieldno >= 1 && fieldno <= tupleDesc->natts); return NameStr(TupleDescAttr(tupleDesc, fieldno - 1)->attname); @@ -7044,14 +7039,9 @@ get_name_for_var_field(Var *var, int fieldno, /* * We now have an expression we can't expand any more, so see if - * get_expr_result_type() can do anything with it. If not, pass to - * lookup_rowtype_tupdesc() which will probably fail, but will give an - * appropriate error message while failing. + * get_expr_result_tupdesc() can do anything with it. 
*/ - if (get_expr_result_type(expr, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE) - tupleDesc = lookup_rowtype_tupdesc_copy(exprType(expr), - exprTypmod(expr)); - Assert(tupleDesc); + tupleDesc = get_expr_result_tupdesc(expr, false); /* Got the tupdesc, so we can extract the field name */ Assert(fieldno >= 1 && fieldno <= tupleDesc->natts); return NameStr(TupleDescAttr(tupleDesc, fieldno - 1)->attname); diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c index b7a14dc87e..48961e31aa 100644 --- a/src/backend/utils/cache/lsyscache.c +++ b/src/backend/utils/cache/lsyscache.c @@ -2398,12 +2398,26 @@ get_typtype(Oid typid) * type_is_rowtype * * Convenience function to determine whether a type OID represents - * a "rowtype" type --- either RECORD or a named composite type. + * a "rowtype" type --- either RECORD or a named composite type + * (including a domain over a named composite type). */ bool type_is_rowtype(Oid typid) { - return (typid == RECORDOID || get_typtype(typid) == TYPTYPE_COMPOSITE); + if (typid == RECORDOID) + return true; /* easy case */ + switch (get_typtype(typid)) + { + case TYPTYPE_COMPOSITE: + return true; + case TYPTYPE_DOMAIN: + if (get_typtype(getBaseType(typid)) == TYPTYPE_COMPOSITE) + return true; + break; + default: + break; + } + return false; } /* diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index 61ce7dc2a7..7aadc5d6ef 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -96,6 +96,7 @@ static TypeCacheEntry *firstDomainTypeEntry = NULL; #define TCFLAGS_HAVE_FIELD_EQUALITY 0x004000 #define TCFLAGS_HAVE_FIELD_COMPARE 0x008000 #define TCFLAGS_CHECKED_DOMAIN_CONSTRAINTS 0x010000 +#define TCFLAGS_DOMAIN_BASE_IS_COMPOSITE 0x020000 /* * Data stored about a domain type's constraints. Note that we do not create @@ -747,7 +748,15 @@ lookup_type_cache(Oid type_id, int flags) /* * If requested, get information about a domain type */ - if ((flags & TYPECACHE_DOMAIN_INFO) && + if ((flags & TYPECACHE_DOMAIN_BASE_INFO) && + typentry->domainBaseType == InvalidOid && + typentry->typtype == TYPTYPE_DOMAIN) + { + typentry->domainBaseTypmod = -1; + typentry->domainBaseType = + getBaseTypeAndTypmod(type_id, &typentry->domainBaseTypmod); + } + if ((flags & TYPECACHE_DOMAIN_CONSTR_INFO) && (typentry->flags & TCFLAGS_CHECKED_DOMAIN_CONSTRAINTS) == 0 && typentry->typtype == TYPTYPE_DOMAIN) { @@ -1166,7 +1175,7 @@ InitDomainConstraintRef(Oid type_id, DomainConstraintRef *ref, MemoryContext refctx, bool need_exprstate) { /* Look up the typcache entry --- we assume it survives indefinitely */ - ref->tcache = lookup_type_cache(type_id, TYPECACHE_DOMAIN_INFO); + ref->tcache = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO); ref->need_exprstate = need_exprstate; /* For safety, establish the callback before acquiring a refcount */ ref->refctx = refctx; @@ -1257,7 +1266,7 @@ DomainHasConstraints(Oid type_id) * Note: a side effect is to cause the typcache's domain data to become * valid. This is fine since we'll likely need it soon if there is any. 
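*
* Illustration (hypothetical caller of the lookup_rowtype_tupdesc_domain()
* helper added below in this file, for exposition):
*
*		TupleDesc	tupdesc = lookup_rowtype_tupdesc_domain(typid, -1, false);
*
*		... examine tupdesc->natts, TupleDescAttr(tupdesc, i), etc. ...
*		ReleaseTupleDesc(tupdesc);
*
* The result is pinned just as for lookup_rowtype_tupdesc(), so the
* caller must pair the lookup with ReleaseTupleDesc().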
*/ - typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_INFO); + typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO); return (typentry->domainData != NULL); } @@ -1405,6 +1414,29 @@ cache_record_field_properties(TypeCacheEntry *typentry) DecrTupleDescRefCount(tupdesc); } + else if (typentry->typtype == TYPTYPE_DOMAIN) + { + /* If it's domain over composite, copy base type's properties */ + TypeCacheEntry *baseentry; + + /* load up basetype info if we didn't already */ + if (typentry->domainBaseType == InvalidOid) + { + typentry->domainBaseTypmod = -1; + typentry->domainBaseType = + getBaseTypeAndTypmod(typentry->type_id, + &typentry->domainBaseTypmod); + } + baseentry = lookup_type_cache(typentry->domainBaseType, + TYPECACHE_EQ_OPR | + TYPECACHE_CMP_PROC); + if (baseentry->typtype == TYPTYPE_COMPOSITE) + { + typentry->flags |= TCFLAGS_DOMAIN_BASE_IS_COMPOSITE; + typentry->flags |= baseentry->flags & (TCFLAGS_HAVE_FIELD_EQUALITY | + TCFLAGS_HAVE_FIELD_COMPARE); + } + } typentry->flags |= TCFLAGS_CHECKED_FIELD_PROPERTIES; } @@ -1618,6 +1650,53 @@ lookup_rowtype_tupdesc_copy(Oid type_id, int32 typmod) return CreateTupleDescCopyConstr(tmp); } +/* + * lookup_rowtype_tupdesc_domain + * + * Same as lookup_rowtype_tupdesc_noerror(), except that the type can also be + * a domain over a named composite type; so this is effectively equivalent to + * lookup_rowtype_tupdesc_noerror(getBaseType(type_id), typmod, noError) + * except for being a tad faster. + * + * Note: the reason we don't fold the look-through-domain behavior into plain + * lookup_rowtype_tupdesc() is that we want callers to know they might be + * dealing with a domain. Otherwise they might construct a tuple that should + * be of the domain type, but not apply domain constraints. + */ +TupleDesc +lookup_rowtype_tupdesc_domain(Oid type_id, int32 typmod, bool noError) +{ + TupleDesc tupDesc; + + if (type_id != RECORDOID) + { + /* + * Check for domain or named composite type. We might as well load + * whichever data is needed. + */ + TypeCacheEntry *typentry; + + typentry = lookup_type_cache(type_id, + TYPECACHE_TUPDESC | + TYPECACHE_DOMAIN_BASE_INFO); + if (typentry->typtype == TYPTYPE_DOMAIN) + return lookup_rowtype_tupdesc_noerror(typentry->domainBaseType, + typentry->domainBaseTypmod, + noError); + if (typentry->tupDesc == NULL && !noError) + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("type %s is not composite", + format_type_be(type_id)))); + tupDesc = typentry->tupDesc; + } + else + tupDesc = lookup_rowtype_tupdesc_internal(type_id, typmod, noError); + if (tupDesc != NULL) + PinTupleDesc(tupDesc); + return tupDesc; +} + /* * Hash function for the hash table of RecordCacheEntry. */ @@ -1929,29 +2008,40 @@ TypeCacheRelCallback(Datum arg, Oid relid) hash_seq_init(&status, TypeCacheHash); while ((typentry = (TypeCacheEntry *) hash_seq_search(&status)) != NULL) { - if (typentry->typtype != TYPTYPE_COMPOSITE) - continue; /* skip non-composites */ + if (typentry->typtype == TYPTYPE_COMPOSITE) + { + /* Skip if no match, unless we're zapping all composite types */ + if (relid != typentry->typrelid && relid != InvalidOid) + continue; - /* Skip if no match, unless we're zapping all composite types */ - if (relid != typentry->typrelid && relid != InvalidOid) - continue; + /* Delete tupdesc if we have it */ + if (typentry->tupDesc != NULL) + { + /* + * Release our refcount, and free the tupdesc if none remain. 
+ * (Can't use DecrTupleDescRefCount because this reference is + * not logged in current resource owner.) + */ + Assert(typentry->tupDesc->tdrefcount > 0); + if (--typentry->tupDesc->tdrefcount == 0) + FreeTupleDesc(typentry->tupDesc); + typentry->tupDesc = NULL; + } - /* Delete tupdesc if we have it */ - if (typentry->tupDesc != NULL) + /* Reset equality/comparison/hashing validity information */ + typentry->flags = 0; + } + else if (typentry->typtype == TYPTYPE_DOMAIN) { /* - * Release our refcount, and free the tupdesc if none remain. - * (Can't use DecrTupleDescRefCount because this reference is not - * logged in current resource owner.) + * If it's domain over composite, reset flags. (We don't bother + * trying to determine whether the specific base type needs a + * reset.) Note that if we haven't determined whether the base + * type is composite, we don't need to reset anything. */ - Assert(typentry->tupDesc->tdrefcount > 0); - if (--typentry->tupDesc->tdrefcount == 0) - FreeTupleDesc(typentry->tupDesc); - typentry->tupDesc = NULL; + if (typentry->flags & TCFLAGS_DOMAIN_BASE_IS_COMPOSITE) + typentry->flags = 0; } - - /* Reset equality/comparison/hashing validity information */ - typentry->flags = 0; } } diff --git a/src/backend/utils/fmgr/funcapi.c b/src/backend/utils/fmgr/funcapi.c index b4f856eb13..bfd5031b9d 100644 --- a/src/backend/utils/fmgr/funcapi.c +++ b/src/backend/utils/fmgr/funcapi.c @@ -39,7 +39,7 @@ static TypeFuncClass internal_get_result_type(Oid funcid, static bool resolve_polymorphic_tupdesc(TupleDesc tupdesc, oidvector *declared_args, Node *call_expr); -static TypeFuncClass get_type_func_class(Oid typid); +static TypeFuncClass get_type_func_class(Oid typid, Oid *base_typeid); /* @@ -246,14 +246,17 @@ get_expr_result_type(Node *expr, { /* handle as a generic expression; no chance to resolve RECORD */ Oid typid = exprType(expr); + Oid base_typid; if (resultTypeId) *resultTypeId = typid; if (resultTupleDesc) *resultTupleDesc = NULL; - result = get_type_func_class(typid); - if (result == TYPEFUNC_COMPOSITE && resultTupleDesc) - *resultTupleDesc = lookup_rowtype_tupdesc_copy(typid, -1); + result = get_type_func_class(typid, &base_typid); + if ((result == TYPEFUNC_COMPOSITE || + result == TYPEFUNC_COMPOSITE_DOMAIN) && + resultTupleDesc) + *resultTupleDesc = lookup_rowtype_tupdesc_copy(base_typid, -1); } return result; @@ -296,6 +299,7 @@ internal_get_result_type(Oid funcid, HeapTuple tp; Form_pg_proc procform; Oid rettype; + Oid base_rettype; TupleDesc tupdesc; /* First fetch the function's pg_proc row to inspect its rettype */ @@ -363,12 +367,13 @@ internal_get_result_type(Oid funcid, *resultTupleDesc = NULL; /* default result */ /* Classify the result type */ - result = get_type_func_class(rettype); + result = get_type_func_class(rettype, &base_rettype); switch (result) { case TYPEFUNC_COMPOSITE: + case TYPEFUNC_COMPOSITE_DOMAIN: if (resultTupleDesc) - *resultTupleDesc = lookup_rowtype_tupdesc_copy(rettype, -1); + *resultTupleDesc = lookup_rowtype_tupdesc_copy(base_rettype, -1); /* Named composite types can't have any polymorphic columns */ break; case TYPEFUNC_SCALAR: @@ -393,6 +398,46 @@ internal_get_result_type(Oid funcid, return result; } +/* + * get_expr_result_tupdesc + * Get a tupdesc describing the result of a composite-valued expression + * + * If expression is not composite or rowtype can't be determined, returns NULL + * if noError is true, else throws error. 
+ * + * This is a simpler version of get_expr_result_type() for use when the caller + * is only interested in determinate rowtype results. + */ +TupleDesc +get_expr_result_tupdesc(Node *expr, bool noError) +{ + TupleDesc tupleDesc; + TypeFuncClass functypclass; + + functypclass = get_expr_result_type(expr, NULL, &tupleDesc); + + if (functypclass == TYPEFUNC_COMPOSITE || + functypclass == TYPEFUNC_COMPOSITE_DOMAIN) + return tupleDesc; + + if (!noError) + { + Oid exprTypeId = exprType(expr); + + if (exprTypeId != RECORDOID) + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("type %s is not composite", + format_type_be(exprTypeId)))); + else + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("record type has not been registered"))); + } + + return NULL; +} + /* * Given the result tuple descriptor for a function with OUT parameters, * replace any polymorphic columns (ANYELEMENT etc) with correct data types @@ -741,23 +786,31 @@ resolve_polymorphic_argtypes(int numargs, Oid *argtypes, char *argmodes, /* * get_type_func_class * Given the type OID, obtain its TYPEFUNC classification. + * Also, if it's a domain, return the base type OID. * * This is intended to centralize a bunch of formerly ad-hoc code for * classifying types. The categories used here are useful for deciding * how to handle functions returning the datatype. */ static TypeFuncClass -get_type_func_class(Oid typid) +get_type_func_class(Oid typid, Oid *base_typeid) { + *base_typeid = typid; + switch (get_typtype(typid)) { case TYPTYPE_COMPOSITE: return TYPEFUNC_COMPOSITE; case TYPTYPE_BASE: - case TYPTYPE_DOMAIN: case TYPTYPE_ENUM: case TYPTYPE_RANGE: return TYPEFUNC_SCALAR; + case TYPTYPE_DOMAIN: + *base_typeid = typid = getBaseType(typid); + if (get_typtype(typid) == TYPTYPE_COMPOSITE) + return TYPEFUNC_COMPOSITE_DOMAIN; + else /* domain base type can't be a pseudotype */ + return TYPEFUNC_SCALAR; case TYPTYPE_PSEUDO: if (typid == RECORDOID) return TYPEFUNC_RECORD; @@ -1320,16 +1373,20 @@ RelationNameGetTupleDesc(const char *relname) TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases) { - TypeFuncClass functypclass = get_type_func_class(typeoid); + Oid base_typeoid; + TypeFuncClass functypclass = get_type_func_class(typeoid, &base_typeoid); TupleDesc tupdesc = NULL; /* - * Build a suitable tupledesc representing the output rows + * Build a suitable tupledesc representing the output rows. We + * intentionally do not support TYPEFUNC_COMPOSITE_DOMAIN here, as it's + * unlikely that legacy callers of this obsolete function would be + * prepared to apply domain constraints. */ if (functypclass == TYPEFUNC_COMPOSITE) { /* Composite data type, e.g. 
a table's row type */ - tupdesc = lookup_rowtype_tupdesc_copy(typeoid, -1); + tupdesc = lookup_rowtype_tupdesc_copy(base_typeoid, -1); if (colaliases != NIL) { @@ -1424,7 +1481,8 @@ extract_variadic_args(FunctionCallInfo fcinfo, int variadic_start, Datum *args_res; bool *nulls_res; Oid *types_res; - int nargs, i; + int nargs, + i; *args = NULL; *types = NULL; @@ -1460,7 +1518,7 @@ extract_variadic_args(FunctionCallInfo fcinfo, int variadic_start, else { nargs = PG_NARGS() - variadic_start; - Assert (nargs > 0); + Assert(nargs > 0); nulls_res = (bool *) palloc0(nargs * sizeof(bool)); args_res = (Datum *) palloc0(nargs * sizeof(Datum)); types_res = (Oid *) palloc0(nargs * sizeof(Oid)); @@ -1473,11 +1531,10 @@ extract_variadic_args(FunctionCallInfo fcinfo, int variadic_start, /* * Turn a constant (more or less literal) value that's of unknown - * type into text if required . Unknowns come in as a cstring - * pointer. - * Note: for functions declared as taking type "any", the parser - * will not do any type conversion on unknown-type literals (that - * is, undecorated strings or NULLs). + * type into text if required. Unknowns come in as a cstring + * pointer. Note: for functions declared as taking type "any", the + * parser will not do any type conversion on unknown-type literals + * (that is, undecorated strings or NULLs). */ if (convert_unknown && types_res[i] == UNKNOWNOID && diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h index fa04a63b76..b0d4c54121 100644 --- a/src/include/access/htup_details.h +++ b/src/include/access/htup_details.h @@ -134,6 +134,11 @@ typedef struct DatumTupleFields Oid datum_typeid; /* composite type OID, or RECORDOID */ /* + * datum_typeid cannot be a domain over composite, only plain composite, + * even if the datum is meant as a value of a domain-over-composite type. + * This is in line with the general principle that CoerceToDomain does not + * change the physical representation of the base type value. + * * Note: field ordering is chosen with thought that Oid might someday * widen to 64 bits. */ diff --git a/src/include/access/tupdesc.h b/src/include/access/tupdesc.h index c15610e767..2be5af1d3e 100644 --- a/src/include/access/tupdesc.h +++ b/src/include/access/tupdesc.h @@ -60,6 +60,12 @@ typedef struct tupleConstr * row type, or a value >= 0 to allow the rowtype to be looked up in the * typcache.c type cache. * + * Note that tdtypeid is never the OID of a domain over composite, even if + * we are dealing with values that are known (at some higher level) to be of + * a domain-over-composite type. This is because tdtypeid/tdtypmod need to + * match up with the type labeling of composite Datums, and those are never + * explicitly marked as being of a domain type, either. + * * Tuple descriptors that live in caches (relcache or typcache, at present) * are reference-counted: they can be deleted when their reference count goes * to zero. Tuple descriptors created by the executor need no reference diff --git a/src/include/funcapi.h b/src/include/funcapi.h index d2dbc163f2..223eef28d1 100644 --- a/src/include/funcapi.h +++ b/src/include/funcapi.h @@ -144,6 +144,10 @@ typedef struct FuncCallContext * get_call_result_type. Note: the cases in which rowtypes cannot be * determined are different from the cases for get_call_result_type. * Do *not* use this if you can use one of the others. 
+ * + * See also get_expr_result_tupdesc(), which is a convenient wrapper around + * get_expr_result_type() for use when the caller only cares about + * determinable-rowtype cases. *---------- */ @@ -152,6 +156,7 @@ typedef enum TypeFuncClass { TYPEFUNC_SCALAR, /* scalar result type */ TYPEFUNC_COMPOSITE, /* determinable rowtype result */ + TYPEFUNC_COMPOSITE_DOMAIN, /* domain over determinable rowtype result */ TYPEFUNC_RECORD, /* indeterminate rowtype result */ TYPEFUNC_OTHER /* bogus type, eg pseudotype */ } TypeFuncClass; @@ -166,6 +171,8 @@ extern TypeFuncClass get_func_result_type(Oid functionId, Oid *resultTypeId, TupleDesc *resultTupleDesc); +extern TupleDesc get_expr_result_tupdesc(Node *expr, bool noError); + extern bool resolve_polymorphic_argtypes(int numargs, Oid *argtypes, char *argmodes, Node *call_expr); @@ -335,7 +342,7 @@ extern void end_MultiFuncCall(PG_FUNCTION_ARGS, FuncCallContext *funcctx); * "VARIADIC NULL". */ extern int extract_variadic_args(FunctionCallInfo fcinfo, int variadic_start, - bool convert_unknown, Datum **values, - Oid **types, bool **nulls); + bool convert_unknown, Datum **values, + Oid **types, bool **nulls); #endif /* FUNCAPI_H */ diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h index ccb5123e2e..c2929ac387 100644 --- a/src/include/nodes/primnodes.h +++ b/src/include/nodes/primnodes.h @@ -166,7 +166,7 @@ typedef struct Var Index varno; /* index of this var's relation in the range * table, or INNER_VAR/OUTER_VAR/INDEX_VAR */ AttrNumber varattno; /* attribute number of this var, or zero for - * all */ + * all attrs ("whole-row Var") */ Oid vartype; /* pg_type OID for the type of this var */ int32 vartypmod; /* pg_attribute typmod value */ Oid varcollid; /* OID of collation, or InvalidOid if none */ @@ -755,6 +755,9 @@ typedef struct FieldSelect * the assign case of ArrayRef, this is used to implement UPDATE of a * portion of a column. * + * resulttype is always a named composite type (not a domain). To update + * a composite domain value, apply CoerceToDomain to the FieldStore. + * * A single FieldStore can actually represent updates of several different * fields. The parser only generates FieldStores with single-element lists, * but the planner will collapse multiple updates of the same base column @@ -849,7 +852,8 @@ typedef struct ArrayCoerceExpr * needed for the destination type plus possibly others; the columns need not * be in the same positions, but are matched up by name. This is primarily * used to convert a whole-row value of an inheritance child table into a - * valid whole-row value of its parent table's rowtype. + * valid whole-row value of its parent table's rowtype. Both resulttype + * and the exposed type of "arg" must be named composite types (not domains). * ---------------- */ @@ -987,6 +991,9 @@ typedef struct RowExpr Oid row_typeid; /* RECORDOID or a composite type's ID */ /* + * row_typeid cannot be a domain over composite, only plain composite. To + * create a composite domain value, apply CoerceToDomain to the RowExpr. + * * Note: we deliberately do NOT store a typmod. 
Although a typmod will be * associated with specific RECORD types at runtime, it will differ for * different backends, and so cannot safely be stored in stored diff --git a/src/include/parser/parse_type.h b/src/include/parser/parse_type.h index 7b843d0b9d..af1e314b21 100644 --- a/src/include/parser/parse_type.h +++ b/src/include/parser/parse_type.h @@ -46,10 +46,12 @@ extern Oid typeTypeCollation(Type typ); extern Datum stringTypeDatum(Type tp, char *string, int32 atttypmod); extern Oid typeidTypeRelid(Oid type_id); +extern Oid typeOrDomainTypeRelid(Oid type_id); extern TypeName *typeStringToTypeName(const char *str); extern void parseTypeString(const char *str, Oid *typeid_p, int32 *typmod_p, bool missing_ok); -#define ISCOMPLEX(typeid) (typeidTypeRelid(typeid) != InvalidOid) +/* true if typeid is composite, or domain over composite, but not RECORD */ +#define ISCOMPLEX(typeid) (typeOrDomainTypeRelid(typeid) != InvalidOid) #endif /* PARSE_TYPE_H */ diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h index 41b645a58f..ea799a8894 100644 --- a/src/include/utils/typcache.h +++ b/src/include/utils/typcache.h @@ -91,6 +91,13 @@ typedef struct TypeCacheEntry FmgrInfo rng_canonical_finfo; /* canonicalization function, if any */ FmgrInfo rng_subdiff_finfo; /* difference function, if any */ + /* + * Domain's base type and typmod if it's a domain type. Zeroes if not + * domain, or if information hasn't been requested. + */ + Oid domainBaseType; + int32 domainBaseTypmod; + /* * Domain constraint data if it's a domain type. NULL if not domain, or * if domain has no constraints, or if information hasn't been requested. @@ -123,9 +130,10 @@ typedef struct TypeCacheEntry #define TYPECACHE_BTREE_OPFAMILY 0x0200 #define TYPECACHE_HASH_OPFAMILY 0x0400 #define TYPECACHE_RANGE_INFO 0x0800 -#define TYPECACHE_DOMAIN_INFO 0x1000 -#define TYPECACHE_HASH_EXTENDED_PROC 0x2000 -#define TYPECACHE_HASH_EXTENDED_PROC_FINFO 0x4000 +#define TYPECACHE_DOMAIN_BASE_INFO 0x1000 +#define TYPECACHE_DOMAIN_CONSTR_INFO 0x2000 +#define TYPECACHE_HASH_EXTENDED_PROC 0x4000 +#define TYPECACHE_HASH_EXTENDED_PROC_FINFO 0x8000 /* * Callers wishing to maintain a long-lived reference to a domain's constraint @@ -163,6 +171,9 @@ extern TupleDesc lookup_rowtype_tupdesc_noerror(Oid type_id, int32 typmod, extern TupleDesc lookup_rowtype_tupdesc_copy(Oid type_id, int32 typmod); +extern TupleDesc lookup_rowtype_tupdesc_domain(Oid type_id, int32 typmod, + bool noError); + extern void assign_record_type_typmod(TupleDesc tupDesc); extern int compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2); diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out index 1e62c57a68..f7f3948d43 100644 --- a/src/test/regress/expected/domain.out +++ b/src/test/regress/expected/domain.out @@ -198,6 +198,94 @@ select pg_typeof('{1,2,3}'::dia || 42); -- should be int[] not dia (1 row) drop domain dia; +-- Test domains over composites +create type comptype as (r float8, i float8); +create domain dcomptype as comptype; +create table dcomptable (d1 dcomptype unique); +insert into dcomptable values (row(1,2)::dcomptype); +insert into dcomptable values (row(3,4)::comptype); +insert into dcomptable values (row(1,2)::dcomptype); -- fail on uniqueness +ERROR: duplicate key value violates unique constraint "dcomptable_d1_key" +DETAIL: Key (d1)=((1,2)) already exists. 
+insert into dcomptable (d1.r) values(11); +select * from dcomptable; + d1 +------- + (1,2) + (3,4) + (11,) +(3 rows) + +select (d1).r, (d1).i, (d1).* from dcomptable; + r | i | r | i +----+---+----+--- + 1 | 2 | 1 | 2 + 3 | 4 | 3 | 4 + 11 | | 11 | +(3 rows) + +update dcomptable set d1.r = (d1).r + 1 where (d1).i > 0; +select * from dcomptable; + d1 +------- + (11,) + (2,2) + (4,4) +(3 rows) + +alter domain dcomptype add constraint c1 check ((value).r <= (value).i); +alter domain dcomptype add constraint c2 check ((value).r > (value).i); -- fail +ERROR: column "d1" of table "dcomptable" contains values that violate the new constraint +select row(2,1)::dcomptype; -- fail +ERROR: value for domain dcomptype violates check constraint "c1" +insert into dcomptable values (row(1,2)::comptype); +insert into dcomptable values (row(2,1)::comptype); -- fail +ERROR: value for domain dcomptype violates check constraint "c1" +insert into dcomptable (d1.r) values(99); +insert into dcomptable (d1.r, d1.i) values(99, 100); +insert into dcomptable (d1.r, d1.i) values(100, 99); -- fail +ERROR: value for domain dcomptype violates check constraint "c1" +update dcomptable set d1.r = (d1).r + 1 where (d1).i > 0; -- fail +ERROR: value for domain dcomptype violates check constraint "c1" +update dcomptable set d1.r = (d1).r - 1, d1.i = (d1).i + 1 where (d1).i > 0; +select * from dcomptable; + d1 +---------- + (11,) + (99,) + (1,3) + (3,5) + (0,3) + (98,101) +(6 rows) + +explain (verbose, costs off) + update dcomptable set d1.r = (d1).r - 1, d1.i = (d1).i + 1 where (d1).i > 0; + QUERY PLAN +----------------------------------------------------------------------------------------------- + Update on public.dcomptable + -> Seq Scan on public.dcomptable + Output: ROW(((d1).r - '1'::double precision), ((d1).i + '1'::double precision)), ctid + Filter: ((dcomptable.d1).i > '0'::double precision) +(4 rows) + +create rule silly as on delete to dcomptable do instead + update dcomptable set d1.r = (d1).r - 1, d1.i = (d1).i + 1 where (d1).i > 0; +\d+ dcomptable + Table "public.dcomptable" + Column | Type | Collation | Nullable | Default | Storage | Stats target | Description +--------+-----------+-----------+----------+---------+----------+--------------+------------- + d1 | dcomptype | | | | extended | | +Indexes: + "dcomptable_d1_key" UNIQUE CONSTRAINT, btree (d1) +Rules: + silly AS + ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1.r = (dcomptable.d1).r - 1::double precision, d1.i = (dcomptable.d1).i + 1::double precision + WHERE (dcomptable.d1).i > 0::double precision + +drop table dcomptable; +drop type comptype cascade; +NOTICE: drop cascades to type dcomptype -- Test domains over arrays of composite create type comptype as (r float8, i float8); create domain dcomptypea as comptype[]; @@ -762,6 +850,14 @@ insert into ddtest2 values('{(-1)}'); alter domain posint add constraint c1 check(value >= 0); ERROR: cannot alter type "posint" because column "ddtest2.f1" uses it drop table ddtest2; +-- Likewise for domains within domains over composite +create domain ddtest1d as ddtest1; +create table ddtest2(f1 ddtest1d); +insert into ddtest2 values('(-1)'); +alter domain posint add constraint c1 check(value >= 0); +ERROR: cannot alter type "posint" because column "ddtest2.f1" uses it +drop table ddtest2; +drop domain ddtest1d; -- Likewise for domains within domains over array of composite create domain ddtest1d as ddtest1[]; create table ddtest2(f1 ddtest1d); diff --git a/src/test/regress/expected/json.out 
b/src/test/regress/expected/json.out index 9fc91f8d12..6081604437 100644 --- a/src/test/regress/expected/json.out +++ b/src/test/regress/expected/json.out @@ -1316,6 +1316,8 @@ create type jpop as (a text, b int, c timestamp); CREATE DOMAIN js_int_not_null AS int NOT NULL; CREATE DOMAIN js_int_array_1d AS int[] CHECK(array_length(VALUE, 1) = 3); CREATE DOMAIN js_int_array_2d AS int[][] CHECK(array_length(VALUE, 2) = 3); +create type j_unordered_pair as (x int, y int); +create domain j_ordered_pair as j_unordered_pair check((value).x <= (value).y); CREATE TYPE jsrec AS ( i int, ia _int4, @@ -1740,6 +1742,30 @@ SELECT rec FROM json_populate_record( (abc,3,"Thu Jan 02 00:00:00 2003") (1 row) +-- anonymous record type +SELECT json_populate_record(null::record, '{"x": 0, "y": 1}'); +ERROR: record type has not been registered +SELECT json_populate_record(row(1,2), '{"f1": 0, "f2": 1}'); + json_populate_record +---------------------- + (0,1) +(1 row) + +-- composite domain +SELECT json_populate_record(null::j_ordered_pair, '{"x": 0, "y": 1}'); + json_populate_record +---------------------- + (0,1) +(1 row) + +SELECT json_populate_record(row(1,2)::j_ordered_pair, '{"x": 0}'); + json_populate_record +---------------------- + (0,2) +(1 row) + +SELECT json_populate_record(row(1,2)::j_ordered_pair, '{"x": 1, "y": 0}'); +ERROR: value for domain j_ordered_pair violates check constraint "j_ordered_pair_check" -- populate_recordset select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q; a | b | c @@ -1806,6 +1832,31 @@ select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":[100,200,3 {"z":true} | 3 | Fri Jan 20 10:42:53 2012 (2 rows) +-- anonymous record type +SELECT json_populate_recordset(null::record, '[{"x": 0, "y": 1}]'); +ERROR: record type has not been registered +SELECT json_populate_recordset(row(1,2), '[{"f1": 0, "f2": 1}]'); + json_populate_recordset +------------------------- + (0,1) +(1 row) + +-- composite domain +SELECT json_populate_recordset(null::j_ordered_pair, '[{"x": 0, "y": 1}]'); + json_populate_recordset +------------------------- + (0,1) +(1 row) + +SELECT json_populate_recordset(row(1,2)::j_ordered_pair, '[{"x": 0}, {"y": 3}]'); + json_populate_recordset +------------------------- + (0,2) + (1,3) +(2 rows) + +SELECT json_populate_recordset(row(1,2)::j_ordered_pair, '[{"x": 1, "y": 0}]'); +ERROR: value for domain j_ordered_pair violates check constraint "j_ordered_pair_check" -- test type info caching in json_populate_record() CREATE TEMP TABLE jspoptest (js json); INSERT INTO jspoptest @@ -1828,6 +1879,8 @@ DROP TYPE jsrec_i_not_null; DROP DOMAIN js_int_not_null; DROP DOMAIN js_int_array_1d; DROP DOMAIN js_int_array_2d; +DROP DOMAIN j_ordered_pair; +DROP TYPE j_unordered_pair; --json_typeof() function select value, json_typeof(value) from (values (json '123.4'), diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out index eeac2a13c7..cf16a15c0f 100644 --- a/src/test/regress/expected/jsonb.out +++ b/src/test/regress/expected/jsonb.out @@ -2005,6 +2005,8 @@ CREATE TYPE jbpop AS (a text, b int, c timestamp); CREATE DOMAIN jsb_int_not_null AS int NOT NULL; CREATE DOMAIN jsb_int_array_1d AS int[] CHECK(array_length(VALUE, 1) = 3); CREATE DOMAIN jsb_int_array_2d AS int[][] CHECK(array_length(VALUE, 2) = 3); +create type jb_unordered_pair as (x int, y int); +create domain jb_ordered_pair as jb_unordered_pair check((value).x <= (value).y); CREATE TYPE jsbrec AS ( i int, ia _int4, @@ 
-2429,6 +2431,30 @@ SELECT rec FROM jsonb_populate_record( (abc,3,"Thu Jan 02 00:00:00 2003") (1 row) +-- anonymous record type +SELECT jsonb_populate_record(null::record, '{"x": 0, "y": 1}'); +ERROR: record type has not been registered +SELECT jsonb_populate_record(row(1,2), '{"f1": 0, "f2": 1}'); + jsonb_populate_record +----------------------- + (0,1) +(1 row) + +-- composite domain +SELECT jsonb_populate_record(null::jb_ordered_pair, '{"x": 0, "y": 1}'); + jsonb_populate_record +----------------------- + (0,1) +(1 row) + +SELECT jsonb_populate_record(row(1,2)::jb_ordered_pair, '{"x": 0}'); + jsonb_populate_record +----------------------- + (0,2) +(1 row) + +SELECT jsonb_populate_record(row(1,2)::jb_ordered_pair, '{"x": 1, "y": 0}'); +ERROR: value for domain jb_ordered_pair violates check constraint "jb_ordered_pair_check" -- populate_recordset SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q; a | b | c @@ -2488,6 +2514,31 @@ SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":[100,200 {"z": true} | 3 | Fri Jan 20 10:42:53 2012 (2 rows) +-- anonymous record type +SELECT jsonb_populate_recordset(null::record, '[{"x": 0, "y": 1}]'); +ERROR: record type has not been registered +SELECT jsonb_populate_recordset(row(1,2), '[{"f1": 0, "f2": 1}]'); + jsonb_populate_recordset +-------------------------- + (0,1) +(1 row) + +-- composite domain +SELECT jsonb_populate_recordset(null::jb_ordered_pair, '[{"x": 0, "y": 1}]'); + jsonb_populate_recordset +-------------------------- + (0,1) +(1 row) + +SELECT jsonb_populate_recordset(row(1,2)::jb_ordered_pair, '[{"x": 0}, {"y": 3}]'); + jsonb_populate_recordset +-------------------------- + (0,2) + (1,3) +(2 rows) + +SELECT jsonb_populate_recordset(row(1,2)::jb_ordered_pair, '[{"x": 1, "y": 0}]'); +ERROR: value for domain jb_ordered_pair violates check constraint "jb_ordered_pair_check" -- jsonb_to_record and jsonb_to_recordset select * from jsonb_to_record('{"a":1,"b":"foo","c":"bar"}') as x(a int, b text, d text); @@ -2587,6 +2638,8 @@ DROP TYPE jsbrec_i_not_null; DROP DOMAIN jsb_int_not_null; DROP DOMAIN jsb_int_array_1d; DROP DOMAIN jsb_int_array_2d; +DROP DOMAIN jb_ordered_pair; +DROP TYPE jb_unordered_pair; -- indexing SELECT count(*) FROM testjsonb WHERE j @> '{"wait":null}'; count diff --git a/src/test/regress/sql/domain.sql b/src/test/regress/sql/domain.sql index 8fb3e2086a..5201f008a1 100644 --- a/src/test/regress/sql/domain.sql +++ b/src/test/regress/sql/domain.sql @@ -120,6 +120,45 @@ select pg_typeof('{1,2,3}'::dia || 42); -- should be int[] not dia drop domain dia; +-- Test domains over composites + +create type comptype as (r float8, i float8); +create domain dcomptype as comptype; +create table dcomptable (d1 dcomptype unique); + +insert into dcomptable values (row(1,2)::dcomptype); +insert into dcomptable values (row(3,4)::comptype); +insert into dcomptable values (row(1,2)::dcomptype); -- fail on uniqueness +insert into dcomptable (d1.r) values(11); + +select * from dcomptable; +select (d1).r, (d1).i, (d1).* from dcomptable; +update dcomptable set d1.r = (d1).r + 1 where (d1).i > 0; +select * from dcomptable; + +alter domain dcomptype add constraint c1 check ((value).r <= (value).i); +alter domain dcomptype add constraint c2 check ((value).r > (value).i); -- fail + +select row(2,1)::dcomptype; -- fail +insert into dcomptable values (row(1,2)::comptype); +insert into dcomptable values (row(2,1)::comptype); -- fail +insert into dcomptable (d1.r) 
values(99); +insert into dcomptable (d1.r, d1.i) values(99, 100); +insert into dcomptable (d1.r, d1.i) values(100, 99); -- fail +update dcomptable set d1.r = (d1).r + 1 where (d1).i > 0; -- fail +update dcomptable set d1.r = (d1).r - 1, d1.i = (d1).i + 1 where (d1).i > 0; +select * from dcomptable; + +explain (verbose, costs off) + update dcomptable set d1.r = (d1).r - 1, d1.i = (d1).i + 1 where (d1).i > 0; +create rule silly as on delete to dcomptable do instead + update dcomptable set d1.r = (d1).r - 1, d1.i = (d1).i + 1 where (d1).i > 0; +\d+ dcomptable + +drop table dcomptable; +drop type comptype cascade; + + -- Test domains over arrays of composite create type comptype as (r float8, i float8); @@ -500,6 +539,14 @@ insert into ddtest2 values('{(-1)}'); alter domain posint add constraint c1 check(value >= 0); drop table ddtest2; +-- Likewise for domains within domains over composite +create domain ddtest1d as ddtest1; +create table ddtest2(f1 ddtest1d); +insert into ddtest2 values('(-1)'); +alter domain posint add constraint c1 check(value >= 0); +drop table ddtest2; +drop domain ddtest1d; + -- Likewise for domains within domains over array of composite create domain ddtest1d as ddtest1[]; create table ddtest2(f1 ddtest1d); diff --git a/src/test/regress/sql/json.sql b/src/test/regress/sql/json.sql index 598498d40a..a4ce9d2ef3 100644 --- a/src/test/regress/sql/json.sql +++ b/src/test/regress/sql/json.sql @@ -388,6 +388,9 @@ CREATE DOMAIN js_int_not_null AS int NOT NULL; CREATE DOMAIN js_int_array_1d AS int[] CHECK(array_length(VALUE, 1) = 3); CREATE DOMAIN js_int_array_2d AS int[][] CHECK(array_length(VALUE, 2) = 3); +create type j_unordered_pair as (x int, y int); +create domain j_ordered_pair as j_unordered_pair check((value).x <= (value).y); + CREATE TYPE jsrec AS ( i int, ia _int4, @@ -516,6 +519,15 @@ SELECT rec FROM json_populate_record( '{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}' ) q; +-- anonymous record type +SELECT json_populate_record(null::record, '{"x": 0, "y": 1}'); +SELECT json_populate_record(row(1,2), '{"f1": 0, "f2": 1}'); + +-- composite domain +SELECT json_populate_record(null::j_ordered_pair, '{"x": 0, "y": 1}'); +SELECT json_populate_record(row(1,2)::j_ordered_pair, '{"x": 0}'); +SELECT json_populate_record(row(1,2)::j_ordered_pair, '{"x": 1, "y": 0}'); + -- populate_recordset select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q; @@ -532,6 +544,15 @@ select * from json_populate_recordset(null::jpop,'[{"a":"blurfl","x":43.2},{"b": select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q; select * from json_populate_recordset(row('def',99,null)::jpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q; +-- anonymous record type +SELECT json_populate_recordset(null::record, '[{"x": 0, "y": 1}]'); +SELECT json_populate_recordset(row(1,2), '[{"f1": 0, "f2": 1}]'); + +-- composite domain +SELECT json_populate_recordset(null::j_ordered_pair, '[{"x": 0, "y": 1}]'); +SELECT json_populate_recordset(row(1,2)::j_ordered_pair, '[{"x": 0}, {"y": 3}]'); +SELECT json_populate_recordset(row(1,2)::j_ordered_pair, '[{"x": 1, "y": 0}]'); + -- test type info caching in json_populate_record() CREATE TEMP TABLE jspoptest (js json); @@ -550,6 +571,8 @@ DROP TYPE jsrec_i_not_null; DROP DOMAIN js_int_not_null; DROP DOMAIN js_int_array_1d; DROP DOMAIN js_int_array_2d; +DROP DOMAIN j_ordered_pair; +DROP TYPE 
j_unordered_pair; --json_typeof() function select value, json_typeof(value) diff --git a/src/test/regress/sql/jsonb.sql b/src/test/regress/sql/jsonb.sql index d0e3f2a1f6..8698b8d332 100644 --- a/src/test/regress/sql/jsonb.sql +++ b/src/test/regress/sql/jsonb.sql @@ -508,6 +508,9 @@ CREATE DOMAIN jsb_int_not_null AS int NOT NULL; CREATE DOMAIN jsb_int_array_1d AS int[] CHECK(array_length(VALUE, 1) = 3); CREATE DOMAIN jsb_int_array_2d AS int[][] CHECK(array_length(VALUE, 2) = 3); +create type jb_unordered_pair as (x int, y int); +create domain jb_ordered_pair as jb_unordered_pair check((value).x <= (value).y); + CREATE TYPE jsbrec AS ( i int, ia _int4, @@ -636,6 +639,15 @@ SELECT rec FROM jsonb_populate_record( '{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}' ) q; +-- anonymous record type +SELECT jsonb_populate_record(null::record, '{"x": 0, "y": 1}'); +SELECT jsonb_populate_record(row(1,2), '{"f1": 0, "f2": 1}'); + +-- composite domain +SELECT jsonb_populate_record(null::jb_ordered_pair, '{"x": 0, "y": 1}'); +SELECT jsonb_populate_record(row(1,2)::jb_ordered_pair, '{"x": 0}'); +SELECT jsonb_populate_record(row(1,2)::jb_ordered_pair, '{"x": 1, "y": 0}'); + -- populate_recordset SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q; SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q; @@ -648,6 +660,15 @@ SELECT * FROM jsonb_populate_recordset(NULL::jbpop,'[{"a":"blurfl","x":43.2},{"b SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":"blurfl","x":43.2},{"b":3,"c":"2012-01-20 10:42:53"}]') q; SELECT * FROM jsonb_populate_recordset(row('def',99,NULL)::jbpop,'[{"a":[100,200,300],"x":43.2},{"a":{"z":true},"b":3,"c":"2012-01-20 10:42:53"}]') q; +-- anonymous record type +SELECT jsonb_populate_recordset(null::record, '[{"x": 0, "y": 1}]'); +SELECT jsonb_populate_recordset(row(1,2), '[{"f1": 0, "f2": 1}]'); + +-- composite domain +SELECT jsonb_populate_recordset(null::jb_ordered_pair, '[{"x": 0, "y": 1}]'); +SELECT jsonb_populate_recordset(row(1,2)::jb_ordered_pair, '[{"x": 0}, {"y": 3}]'); +SELECT jsonb_populate_recordset(row(1,2)::jb_ordered_pair, '[{"x": 1, "y": 0}]'); + -- jsonb_to_record and jsonb_to_recordset select * from jsonb_to_record('{"a":1,"b":"foo","c":"bar"}') @@ -693,6 +714,8 @@ DROP TYPE jsbrec_i_not_null; DROP DOMAIN jsb_int_not_null; DROP DOMAIN jsb_int_array_1d; DROP DOMAIN jsb_int_array_2d; +DROP DOMAIN jb_ordered_pair; +DROP TYPE jb_unordered_pair; -- indexing SELECT count(*) FROM testjsonb WHERE j @> '{"wait":null}'; From 820c0305f64507490f00b6220f9175a303c821dd Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 26 Oct 2017 16:00:17 -0400 Subject: [PATCH 0434/1087] Support domains over composite types in PL/Tcl. Since PL/Tcl does little with SQL types internally, this is just a matter of making it work with composite-domain function arguments and results. In passing, make it allow RECORD-type arguments --- that's a trivial change that nobody had bothered with up to now. 
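To make the new behavior concrete, here is a minimal SQL sketch (the
names complex, dcomplex, real_part, and second_field are invented for
illustration and assume the pltcl extension is installed; the committed
regression tests below use T_dta1/d_dta1 instead):

    -- a composite type and a constrained domain over it
    create type complex as (r float8, i float8);
    create domain dcomplex as complex check ((value).r >= 0);

    -- a composite-domain argument arrives as a Tcl array indexed by field name
    create function real_part(dcomplex) returns float8 as '
        return $1(r)
    ' language pltcl;

    select real_part(row(1.5, 2.0));  -- returns 1.5
    select real_part(row(-1, 2.0));   -- rejected by the domain check constraint

    -- RECORD-type arguments work the same way now
    create function second_field(record) returns text as '
        return $1(f2)
    ' language pltcl;
    select second_field(row(2, 4));   -- returns 4
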
Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us --- src/pl/tcl/expected/pltcl_queries.out | 94 +++++++++++++++++++++++++++ src/pl/tcl/pltcl.c | 82 +++++++++++++++-------- src/pl/tcl/sql/pltcl_queries.sql | 43 ++++++++++++ 3 files changed, 193 insertions(+), 26 deletions(-) diff --git a/src/pl/tcl/expected/pltcl_queries.out b/src/pl/tcl/expected/pltcl_queries.out index 5f50f46887..736671cc1b 100644 --- a/src/pl/tcl/expected/pltcl_queries.out +++ b/src/pl/tcl/expected/pltcl_queries.out @@ -327,6 +327,46 @@ select tcl_composite_arg_ref2(row('tkey', 42, 'ref2')); ref2 (1 row) +-- More tests for composite argument/result types +create domain d_dta1 as T_dta1 check ((value).ref1 > 0); +create function tcl_record_arg(record, fldname text) returns int as ' + return $1($2) +' language pltcl; +select tcl_record_arg(row('tkey', 42, 'ref2')::T_dta1, 'ref1'); + tcl_record_arg +---------------- + 42 +(1 row) + +select tcl_record_arg(row('tkey', 42, 'ref2')::d_dta1, 'ref1'); + tcl_record_arg +---------------- + 42 +(1 row) + +select tcl_record_arg(row(2,4), 'f2'); + tcl_record_arg +---------------- + 4 +(1 row) + +create function tcl_cdomain_arg(d_dta1) returns int as ' + return $1(ref1) +' language pltcl; +select tcl_cdomain_arg(row('tkey', 42, 'ref2')); + tcl_cdomain_arg +----------------- + 42 +(1 row) + +select tcl_cdomain_arg(row('tkey', 42, 'ref2')::T_dta1); + tcl_cdomain_arg +----------------- + 42 +(1 row) + +select tcl_cdomain_arg(row('tkey', -1, 'ref2')); -- fail +ERROR: value for domain d_dta1 violates check constraint "d_dta1_check" -- Test argisnull primitive select tcl_argisnull('foo'); tcl_argisnull @@ -438,6 +478,60 @@ return_next [list a 1 b 2 cow 3] $$ language pltcl; select bad_field_srf(); ERROR: column name/value list contains nonexistent column name "cow" +-- test composite and domain-over-composite results +create function tcl_composite_result(int) returns T_dta1 as $$ +return [list tkey tkey1 ref1 $1 ref2 ref22] +$$ language pltcl; +select tcl_composite_result(1001); + tcl_composite_result +-------------------------------------------- + ("tkey1 ",1001,"ref22 ") +(1 row) + +select * from tcl_composite_result(1002); + tkey | ref1 | ref2 +------------+------+---------------------- + tkey1 | 1002 | ref22 +(1 row) + +create function tcl_dcomposite_result(int) returns d_dta1 as $$ +return [list tkey tkey2 ref1 $1 ref2 ref42] +$$ language pltcl; +select tcl_dcomposite_result(1001); + tcl_dcomposite_result +-------------------------------------------- + ("tkey2 ",1001,"ref42 ") +(1 row) + +select * from tcl_dcomposite_result(1002); + tkey | ref1 | ref2 +------------+------+---------------------- + tkey2 | 1002 | ref42 +(1 row) + +select * from tcl_dcomposite_result(-1); -- fail +ERROR: value for domain d_dta1 violates check constraint "d_dta1_check" +create function tcl_record_result(int) returns record as $$ +return [list q1 sometext q2 $1 q3 moretext] +$$ language pltcl; +select tcl_record_result(42); -- fail +ERROR: function returning record called in context that cannot accept type record +select * from tcl_record_result(42); -- fail +ERROR: a column definition list is required for functions returning "record" at character 15 +select * from tcl_record_result(42) as (q1 text, q2 int, q3 text); + q1 | q2 | q3 +----------+----+---------- + sometext | 42 | moretext +(1 row) + +select * from tcl_record_result(42) as (q1 text, q2 int, q3 text, q4 int); + q1 | q2 | q3 | q4 +----------+----+----------+---- + sometext | 42 | moretext | +(1 row) + +select * from 
tcl_record_result(42) as (q1 text, q2 int, q4 int); -- fail +ERROR: column name/value list contains nonexistent column name "q3" -- test quote select tcl_eval('quote foo bar'); ERROR: wrong # args: should be "quote string" diff --git a/src/pl/tcl/pltcl.c b/src/pl/tcl/pltcl.c index 09f87ec791..6d97ddc99b 100644 --- a/src/pl/tcl/pltcl.c +++ b/src/pl/tcl/pltcl.c @@ -143,10 +143,13 @@ typedef struct pltcl_proc_desc bool fn_readonly; /* is function readonly? */ bool lanpltrusted; /* is it pltcl (vs. pltclu)? */ pltcl_interp_desc *interp_desc; /* interpreter to use */ + Oid result_typid; /* OID of fn's result type */ FmgrInfo result_in_func; /* input function for fn's result type */ Oid result_typioparam; /* param to pass to same */ bool fn_retisset; /* true if function returns a set */ bool fn_retistuple; /* true if function returns composite */ + bool fn_retisdomain; /* true if function returns domain */ + void *domain_info; /* opaque cache for domain checks */ int nargs; /* number of arguments */ /* these arrays have nargs entries: */ FmgrInfo *arg_out_func; /* output fns for arg types */ @@ -988,11 +991,26 @@ pltcl_func_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state, * result type is a named composite type, so it's not exactly trivial. * Maybe worth improving someday. */ - if (get_call_result_type(fcinfo, NULL, &td) != TYPEFUNC_COMPOSITE) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("function returning record called in context " - "that cannot accept type record"))); + switch (get_call_result_type(fcinfo, NULL, &td)) + { + case TYPEFUNC_COMPOSITE: + /* success */ + break; + case TYPEFUNC_COMPOSITE_DOMAIN: + Assert(prodesc->fn_retisdomain); + break; + case TYPEFUNC_RECORD: + /* failed to determine actual type of RECORD */ + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("function returning record called in context " + "that cannot accept type record"))); + break; + default: + /* result type isn't composite? 
*/ + elog(ERROR, "return type must be a row type"); + break; + } Assert(!call_state->ret_tupdesc); Assert(!call_state->attinmeta); @@ -1490,22 +1508,21 @@ compile_pltcl_function(Oid fn_oid, Oid tgreloid, ************************************************************/ if (!is_trigger && !is_event_trigger) { - typeTup = - SearchSysCache1(TYPEOID, - ObjectIdGetDatum(procStruct->prorettype)); + Oid rettype = procStruct->prorettype; + + typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(rettype)); if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", - procStruct->prorettype); + elog(ERROR, "cache lookup failed for type %u", rettype); typeStruct = (Form_pg_type) GETSTRUCT(typeTup); /* Disallow pseudotype result, except VOID and RECORD */ if (typeStruct->typtype == TYPTYPE_PSEUDO) { - if (procStruct->prorettype == VOIDOID || - procStruct->prorettype == RECORDOID) + if (rettype == VOIDOID || + rettype == RECORDOID) /* okay */ ; - else if (procStruct->prorettype == TRIGGEROID || - procStruct->prorettype == EVTTRIGGEROID) + else if (rettype == TRIGGEROID || + rettype == EVTTRIGGEROID) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("trigger functions can only be called as triggers"))); @@ -1513,17 +1530,19 @@ compile_pltcl_function(Oid fn_oid, Oid tgreloid, ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("PL/Tcl functions cannot return type %s", - format_type_be(procStruct->prorettype)))); + format_type_be(rettype)))); } + prodesc->result_typid = rettype; fmgr_info_cxt(typeStruct->typinput, &(prodesc->result_in_func), proc_cxt); prodesc->result_typioparam = getTypeIOParam(typeTup); prodesc->fn_retisset = procStruct->proretset; - prodesc->fn_retistuple = (procStruct->prorettype == RECORDOID || - typeStruct->typtype == TYPTYPE_COMPOSITE); + prodesc->fn_retistuple = type_is_rowtype(rettype); + prodesc->fn_retisdomain = (typeStruct->typtype == TYPTYPE_DOMAIN); + prodesc->domain_info = NULL; ReleaseSysCache(typeTup); } @@ -1537,21 +1556,22 @@ compile_pltcl_function(Oid fn_oid, Oid tgreloid, proc_internal_args[0] = '\0'; for (i = 0; i < prodesc->nargs; i++) { - typeTup = SearchSysCache1(TYPEOID, - ObjectIdGetDatum(procStruct->proargtypes.values[i])); + Oid argtype = procStruct->proargtypes.values[i]; + + typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(argtype)); if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", - procStruct->proargtypes.values[i]); + elog(ERROR, "cache lookup failed for type %u", argtype); typeStruct = (Form_pg_type) GETSTRUCT(typeTup); - /* Disallow pseudotype argument */ - if (typeStruct->typtype == TYPTYPE_PSEUDO) + /* Disallow pseudotype argument, except RECORD */ + if (typeStruct->typtype == TYPTYPE_PSEUDO && + argtype != RECORDOID) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("PL/Tcl functions cannot accept type %s", - format_type_be(procStruct->proargtypes.values[i])))); + format_type_be(argtype)))); - if (typeStruct->typtype == TYPTYPE_COMPOSITE) + if (type_is_rowtype(argtype)) { prodesc->arg_is_rowtype[i] = true; snprintf(buf, sizeof(buf), "__PLTcl_Tup_%d", i + 1); @@ -3075,6 +3095,7 @@ static HeapTuple pltcl_build_tuple_result(Tcl_Interp *interp, Tcl_Obj **kvObjv, int kvObjc, pltcl_call_state *call_state) { + HeapTuple tuple; TupleDesc tupdesc; AttInMetadata *attinmeta; char **values; @@ -3133,7 +3154,16 @@ pltcl_build_tuple_result(Tcl_Interp *interp, Tcl_Obj **kvObjv, int kvObjc, values[attn - 1] = utf_u2e(Tcl_GetString(kvObjv[i + 1])); } - return 
BuildTupleFromCStrings(attinmeta, values); + tuple = BuildTupleFromCStrings(attinmeta, values); + + /* if result type is domain-over-composite, check domain constraints */ + if (call_state->prodesc->fn_retisdomain) + domain_check(HeapTupleGetDatum(tuple), false, + call_state->prodesc->result_typid, + &call_state->prodesc->domain_info, + call_state->prodesc->fn_cxt); + + return tuple; } /********************************************************************** diff --git a/src/pl/tcl/sql/pltcl_queries.sql b/src/pl/tcl/sql/pltcl_queries.sql index dabd8cd35f..71c1238bd2 100644 --- a/src/pl/tcl/sql/pltcl_queries.sql +++ b/src/pl/tcl/sql/pltcl_queries.sql @@ -89,6 +89,26 @@ truncate trigger_test; select tcl_composite_arg_ref1(row('tkey', 42, 'ref2')); select tcl_composite_arg_ref2(row('tkey', 42, 'ref2')); +-- More tests for composite argument/result types + +create domain d_dta1 as T_dta1 check ((value).ref1 > 0); + +create function tcl_record_arg(record, fldname text) returns int as ' + return $1($2) +' language pltcl; + +select tcl_record_arg(row('tkey', 42, 'ref2')::T_dta1, 'ref1'); +select tcl_record_arg(row('tkey', 42, 'ref2')::d_dta1, 'ref1'); +select tcl_record_arg(row(2,4), 'f2'); + +create function tcl_cdomain_arg(d_dta1) returns int as ' + return $1(ref1) +' language pltcl; + +select tcl_cdomain_arg(row('tkey', 42, 'ref2')); +select tcl_cdomain_arg(row('tkey', 42, 'ref2')::T_dta1); +select tcl_cdomain_arg(row('tkey', -1, 'ref2')); -- fail + -- Test argisnull primitive select tcl_argisnull('foo'); select tcl_argisnull(''); @@ -136,6 +156,29 @@ return_next [list a 1 b 2 cow 3] $$ language pltcl; select bad_field_srf(); +-- test composite and domain-over-composite results +create function tcl_composite_result(int) returns T_dta1 as $$ +return [list tkey tkey1 ref1 $1 ref2 ref22] +$$ language pltcl; +select tcl_composite_result(1001); +select * from tcl_composite_result(1002); + +create function tcl_dcomposite_result(int) returns d_dta1 as $$ +return [list tkey tkey2 ref1 $1 ref2 ref42] +$$ language pltcl; +select tcl_dcomposite_result(1001); +select * from tcl_dcomposite_result(1002); +select * from tcl_dcomposite_result(-1); -- fail + +create function tcl_record_result(int) returns record as $$ +return [list q1 sometext q2 $1 q3 moretext] +$$ language pltcl; +select tcl_record_result(42); -- fail +select * from tcl_record_result(42); -- fail +select * from tcl_record_result(42) as (q1 text, q2 int, q3 text); +select * from tcl_record_result(42) as (q1 text, q2 int, q3 text, q4 int); +select * from tcl_record_result(42) as (q1 text, q2 int, q4 int); -- fail + -- test quote select tcl_eval('quote foo bar'); select tcl_eval('quote [format %c 39]'); From 639c1a6bb9ee08fe4757a6fab1ddbd01291515e1 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 27 Oct 2017 16:04:01 +0200 Subject: [PATCH 0435/1087] Fix mistaken failure to allow parallelism in corner case. If we try to run a parallel plan in serial mode because, for example, it's going to be scanned via a cursor, but for some reason we're already in parallel mode (for example because an outer query is running in parallel), we'd incorrectly try to launch workers. Fix by adding a flag to the EState, so that we can be certain that ExecutePlan() and ExecGather()/ExecGatherMerge() will have the same idea about whether we are executing serially or in parallel. Report and fix by Amit Kapila with help from Kuntal Ghosh. A few tweaks by me. 
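For context, a hedged, self-contained sketch of one way a parallel-capable
query ends up being executed serially (the table name is invented, and
whether the planner actually chooses a Gather plan here depends on costs
and settings):

    create table big_tbl as select g as x from generate_series(1, 1000000) g;
    analyze big_tbl;

    -- run standalone, this count may be executed with a Gather plan and workers
    select count(*) from big_tbl;

    -- scanned via a cursor, the same query must run without workers; the new
    -- estate->es_use_parallel_mode flag is what ExecGather()/ExecGatherMerge()
    -- now consult to know which of the two situations they are in
    begin;
    declare c cursor for select count(*) from big_tbl;
    fetch all from c;
    commit;
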
Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com --- src/backend/executor/execMain.c | 1 + src/backend/executor/execUtils.c | 2 ++ src/backend/executor/nodeGather.c | 2 +- src/backend/executor/nodeGatherMerge.c | 2 +- src/include/nodes/execnodes.h | 2 ++ 5 files changed, 7 insertions(+), 2 deletions(-) diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 9689429912..638a856dc3 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -1702,6 +1702,7 @@ ExecutePlan(EState *estate, if (!execute_once) use_parallel_mode = false; + estate->es_use_parallel_mode = use_parallel_mode; if (use_parallel_mode) EnterParallelMode(); diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index ee6c4af055..e8c06c7605 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -156,6 +156,8 @@ CreateExecutorState(void) estate->es_epqScanDone = NULL; estate->es_sourceText = NULL; + estate->es_use_parallel_mode = false; + /* * Return the executor state structure */ diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 8370037c43..639f4f5af8 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -150,7 +150,7 @@ ExecGather(PlanState *pstate) * Sometimes we might have to run without parallelism; but if parallel * mode is active then we can try to fire up some workers. */ - if (gather->num_workers > 0 && IsInParallelMode()) + if (gather->num_workers > 0 && estate->es_use_parallel_mode) { ParallelContext *pcxt; diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 70f33a9a28..5625b12521 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -194,7 +194,7 @@ ExecGatherMerge(PlanState *pstate) * Sometimes we might have to run without parallelism; but if parallel * mode is active then we can try to fire up some workers. */ - if (gm->num_workers > 0 && IsInParallelMode()) + if (gm->num_workers > 0 && estate->es_use_parallel_mode) { ParallelContext *pcxt; diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 52d3532580..8698c8a50c 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -507,6 +507,8 @@ typedef struct EState bool *es_epqTupleSet; /* true if EPQ tuple is provided */ bool *es_epqScanDone; /* true if EPQ tuple has been fetched */ + bool es_use_parallel_mode; /* can we use parallel workers? */ + /* The per-query shared memory area to use for parallel execution. */ struct dsa_area *es_query_dsa; } EState; From 94d622f27be6d48e61a68496da4f2efb06fe8746 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 27 Oct 2017 16:40:06 +0200 Subject: [PATCH 0436/1087] Move new structure member to the end. Reduces ABI breakage. Per Tom Lane. Discussion: http://postgr.es/m/4035.1509113974@sss.pgh.pa.us --- src/include/nodes/execnodes.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 8698c8a50c..c9c10f05dd 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -507,10 +507,10 @@ typedef struct EState bool *es_epqTupleSet; /* true if EPQ tuple is provided */ bool *es_epqScanDone; /* true if EPQ tuple has been fetched */ - bool es_use_parallel_mode; /* can we use parallel workers? 
*/ - /* The per-query shared memory area to use for parallel execution. */ struct dsa_area *es_query_dsa; + + bool es_use_parallel_mode; /* can we use parallel workers? */ } EState; From e4fbf22831c2bbcf032ee60a327b871d2364b3f5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 27 Oct 2017 10:46:06 -0400 Subject: [PATCH 0437/1087] Doc: mention that you can't PREPARE TRANSACTION after NOTIFY. The NOTIFY page said this already, but the PREPARE TRANSACTION page missed it. Discussion: https://postgr.es/m/20171024010602.1488.80066@wrigleys.postgresql.org --- doc/src/sgml/ref/prepare_transaction.sgml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/prepare_transaction.sgml b/doc/src/sgml/ref/prepare_transaction.sgml index 6a1766ed3c..990546a8c7 100644 --- a/doc/src/sgml/ref/prepare_transaction.sgml +++ b/doc/src/sgml/ref/prepare_transaction.sgml @@ -100,7 +100,8 @@ PREPARE TRANSACTION transaction_id It is not currently allowed to PREPARE a transaction that has executed any operations involving temporary tables, created any cursors WITH HOLD, or executed - LISTEN or UNLISTEN. + LISTEN, UNLISTEN, or + NOTIFY. Those features are too tightly tied to the current session to be useful in a transaction to be prepared. From f0392e677ed098e9e514ad5e4d5dc148c0474c63 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 27 Oct 2017 17:29:20 +0200 Subject: [PATCH 0438/1087] Revert "Move new structure member to the end." This reverts commit 94d622f27be6d48e61a68496da4f2efb06fe8746. That commit was supposed to get pushed to REL_10_STABLE, but I messed up. --- src/include/nodes/execnodes.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index c9c10f05dd..8698c8a50c 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -507,10 +507,10 @@ typedef struct EState bool *es_epqTupleSet; /* true if EPQ tuple is provided */ bool *es_epqScanDone; /* true if EPQ tuple has been fetched */ + bool es_use_parallel_mode; /* can we use parallel workers? */ + /* The per-query shared memory area to use for parallel execution. */ struct dsa_area *es_query_dsa; - - bool es_use_parallel_mode; /* can we use parallel workers? */ } EState; From 6784d7a1dc69d53b7f41eebf62bf7ffd63885294 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 27 Oct 2017 12:18:56 -0400 Subject: [PATCH 0439/1087] Rethink the dependencies recorded for FieldSelect/FieldStore nodes. On closer investigation, commits f3ea3e3e8 et al were a few bricks shy of a load. What we need is not so much to lock down the result type of a FieldSelect, as to lock down the existence of the column it's trying to extract. Otherwise, we can break it by dropping that column. The dependency on the result type is then held indirectly through the column, and doesn't need to be recorded explicitly. Out of paranoia, I left in the code to record a dependency on the result type, but it's used only if we can't identify the pg_class OID for the column. That shouldn't ever happen right now, AFAICS, but it seems possible that in future the input node could be marked as being of type RECORD rather than some specific composite type. Likewise for FieldStore. Like the previous patch, back-patch to all supported branches. 
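Concretely, the recorded dependency now behaves like this (a sketch
mirroring the regression test added below, with illustrative names):

    create type pairtype as (a int, b text);
    -- the check expression contains a FieldSelect extracting column "a"
    create table pairs (y pairtype check ((y).a > 0));

    -- the column-level dependency blocks dropping the column outright ...
    alter type pairtype drop attribute a;          -- fails
    -- ... while CASCADE drops the dependent check constraint along with it
    alter type pairtype drop attribute a cascade;
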
Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us --- src/backend/catalog/dependency.c | 36 +++++++++++++++++++---- src/test/regress/expected/alter_table.out | 17 +++++++++++ src/test/regress/sql/alter_table.sql | 8 +++++ 3 files changed, 55 insertions(+), 6 deletions(-) diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c index 3b214e5702..033c4358ea 100644 --- a/src/backend/catalog/dependency.c +++ b/src/backend/catalog/dependency.c @@ -1716,10 +1716,24 @@ find_expr_references_walker(Node *node, else if (IsA(node, FieldSelect)) { FieldSelect *fselect = (FieldSelect *) node; + Oid argtype = getBaseType(exprType((Node *) fselect->arg)); + Oid reltype = get_typ_typrelid(argtype); - /* result type might not appear anywhere else in expression */ - add_object_address(OCLASS_TYPE, fselect->resulttype, 0, - context->addrs); + /* + * We need a dependency on the specific column named in FieldSelect, + * assuming we can identify the pg_class OID for it. (Probably we + * always can at the moment, but in future it might be possible for + * argtype to be RECORDOID.) If we can make a column dependency then + * we shouldn't need a dependency on the column's type; but if we + * can't, make a dependency on the type, as it might not appear + * anywhere else in the expression. + */ + if (OidIsValid(reltype)) + add_object_address(OCLASS_CLASS, reltype, fselect->fieldnum, + context->addrs); + else + add_object_address(OCLASS_TYPE, fselect->resulttype, 0, + context->addrs); /* the collation might not be referenced anywhere else, either */ if (OidIsValid(fselect->resultcollid) && fselect->resultcollid != DEFAULT_COLLATION_OID) @@ -1729,10 +1743,20 @@ find_expr_references_walker(Node *node, else if (IsA(node, FieldStore)) { FieldStore *fstore = (FieldStore *) node; + Oid reltype = get_typ_typrelid(fstore->resulttype); - /* result type might not appear anywhere else in expression */ - add_object_address(OCLASS_TYPE, fstore->resulttype, 0, - context->addrs); + /* similar considerations to FieldSelect, but multiple column(s) */ + if (OidIsValid(reltype)) + { + ListCell *l; + + foreach(l, fstore->fieldnums) + add_object_address(OCLASS_CLASS, reltype, lfirst_int(l), + context->addrs); + } + else + add_object_address(OCLASS_TYPE, fstore->resulttype, 0, + context->addrs); } else if (IsA(node, RelabelType)) { diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 838588757a..e12a1ac5cb 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -2706,6 +2706,23 @@ Typed table of type: test_type2 Inherits: test_tbl2 DROP TABLE test_tbl2_subclass; +CREATE TYPE test_typex AS (a int, b text); +CREATE TABLE test_tblx (x int, y test_typex check ((y).a > 0)); +ALTER TYPE test_typex DROP ATTRIBUTE a; -- fails +ERROR: cannot drop composite type test_typex column a because other objects depend on it +DETAIL: constraint test_tblx_y_check on table test_tblx depends on composite type test_typex column a +HINT: Use DROP ... CASCADE to drop the dependent objects too. 
+ALTER TYPE test_typex DROP ATTRIBUTE a CASCADE; +NOTICE: drop cascades to constraint test_tblx_y_check on table test_tblx +\d test_tblx + Table "public.test_tblx" + Column | Type | Collation | Nullable | Default +--------+------------+-----------+----------+--------- + x | integer | | | + y | test_typex | | | + +DROP TABLE test_tblx; +DROP TYPE test_typex; -- This test isn't that interesting on its own, but the purpose is to leave -- behind a table to test pg_upgrade with. The table has a composite type -- column in it, and the composite type has a dropped attribute. diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index 2ef9541a8c..f248f52925 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -1714,6 +1714,14 @@ ALTER TYPE test_type2 RENAME ATTRIBUTE a TO aa CASCADE; DROP TABLE test_tbl2_subclass; +CREATE TYPE test_typex AS (a int, b text); +CREATE TABLE test_tblx (x int, y test_typex check ((y).a > 0)); +ALTER TYPE test_typex DROP ATTRIBUTE a; -- fails +ALTER TYPE test_typex DROP ATTRIBUTE a CASCADE; +\d test_tblx +DROP TABLE test_tblx; +DROP TYPE test_typex; + -- This test isn't that interesting on its own, but the purpose is to leave -- behind a table to test pg_upgrade with. The table has a composite type -- column in it, and the composite type has a dropped attribute. From 682ce911f8f30de39b13cf211fc8ceb8c6cbc01b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 27 Oct 2017 22:22:39 +0200 Subject: [PATCH 0440/1087] Allow parallel query for prepared statements with generic plans. This was always intended to work, but due to an oversight in max_parallel_hazard_walker, it didn't. In testing, we missed the fact that it was only working for custom plans, where the parameter value has been substituted for the parameter itself early enough that everything worked. In a generic plan, the Param node survives and must be treated as parallel-safe. SerializeParamList provides for the transmission of parameter values to workers. Amit Kapila with help from Kuntal Ghosh. Some changes by me. Discussion: http://postgr.es/m/CAA4eK1+_BuZrmVCeua5Eqnm4Co9DAXdM5HPAOE2J19ePbR912Q@mail.gmail.com --- src/backend/optimizer/util/clauses.c | 8 ++++++-- src/pl/plpgsql/src/pl_exec.c | 10 +++++----- src/test/regress/expected/select_parallel.out | 20 +++++++++++++++++++ src/test/regress/sql/select_parallel.sql | 6 ++++++ 4 files changed, 37 insertions(+), 7 deletions(-) diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index 5344f6167a..652843a146 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -1223,13 +1223,17 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context) /* * We can't pass Params to workers at the moment either, so they are also - * parallel-restricted, unless they are PARAM_EXEC Params listed in - * safe_param_ids, meaning they could be generated within the worker. + * parallel-restricted, unless they are PARAM_EXTERN Params or are + * PARAM_EXEC Params listed in safe_param_ids, meaning they could be + * generated within the worker. 
*/ else if (IsA(node, Param)) { Param *param = (Param *) node; + if (param->paramkind == PARAM_EXTERN) + return false; + if (param->paramkind != PARAM_EXEC || !list_member_int(context->safe_param_ids, param->paramid)) { diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index 9716697259..e605ec829e 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -6588,8 +6588,8 @@ exec_save_simple_expr(PLpgSQL_expr *expr, CachedPlan *cplan) * force_parallel_mode is on, the planner might've stuck a Gather node * atop that. The simplest way to deal with this is to look through the * Gather node. The Gather node's tlist would normally contain a Var - * referencing the child node's output ... but setrefs.c might also have - * copied a Const as-is. + * referencing the child node's output, but it could also be a Param, or + * it could be a Const that setrefs.c copied as-is. */ plan = stmt->planTree; for (;;) @@ -6616,9 +6616,9 @@ exec_save_simple_expr(PLpgSQL_expr *expr, CachedPlan *cplan) /* If setrefs.c copied up a Const, no need to look further */ if (IsA(tle_expr, Const)) break; - /* Otherwise, it better be an outer Var */ - Assert(IsA(tle_expr, Var)); - Assert(((Var *) tle_expr)->varno == OUTER_VAR); + /* Otherwise, it had better be a Param or an outer Var */ + Assert(IsA(tle_expr, Param) || (IsA(tle_expr, Var) && + ((Var *) tle_expr)->varno == OUTER_VAR)); /* Descend to the child node */ plan = plan->lefttree; } diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 2ae600f1bb..3c63ad1de8 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -101,6 +101,26 @@ explain (costs off) -> Parallel Index Only Scan using tenk1_unique1 on tenk1 (5 rows) +-- test prepared statement +prepare tenk1_count(integer) As select count((unique1)) from tenk1 where hundred > $1; +explain (costs off) execute tenk1_count(1); + QUERY PLAN +---------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 4 + -> Partial Aggregate + -> Parallel Seq Scan on tenk1 + Filter: (hundred > 1) +(6 rows) + +execute tenk1_count(1); + count +------- + 9800 +(1 row) + +deallocate tenk1_count; -- test parallel plans for queries containing un-correlated subplans. alter table tenk2 set (parallel_workers = 0); explain (costs off) diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 89fe80a35c..720495c811 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -39,6 +39,12 @@ explain (costs off) select sum(parallel_restricted(unique1)) from tenk1 group by(parallel_restricted(unique1)); +-- test prepared statement +prepare tenk1_count(integer) As select count((unique1)) from tenk1 where hundred > $1; +explain (costs off) execute tenk1_count(1); +execute tenk1_count(1); +deallocate tenk1_count; + -- test parallel plans for queries containing un-correlated subplans. alter table tenk2 set (parallel_workers = 0); explain (costs off) From d5b760ecb5e172252810fae877e6d6193b818167 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 27 Oct 2017 17:10:21 -0400 Subject: [PATCH 0441/1087] Fix crash when columns have been added to the end of a view. expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly as many non-junk tlist items as the RTE has column aliases for it. 
This was true at the time the code was written, and is still true so far as parse analysis is concerned --- but when the function is used during planning, the subquery might have appeared through insertion of a view that now has more columns than it did when the outer query was parsed. This results in a core dump if, for instance, we have to expand a whole-row Var that references the subquery. To avoid crashing, we can either stop expanding the RTE when we run out of aliases, or invent new aliases for the added columns. While the latter might be more useful, the former is consistent with what expandRTE() does for composite-returning functions in the RTE_FUNCTION case, so it seems like we'd better do it that way. Per bug #14876 from Samuel Horwitz. This has been busted since commit ff1ea2173 allowed views to acquire more columns, so back-patch to all supported branches. Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org --- src/backend/parser/parse_relation.c | 12 +++- src/test/regress/expected/alter_table.out | 85 +++++++++++++++++++++++ src/test/regress/sql/alter_table.sql | 20 ++++++ 3 files changed, 116 insertions(+), 1 deletion(-) diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index ca32a37e26..e89bebfcc3 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -2205,9 +2205,19 @@ expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up, varattno++; Assert(varattno == te->resno); + /* + * In scenarios where columns have been added to a view + * since the outer query was originally parsed, there can + * be more items in the subquery tlist than the outer + * query expects. We should ignore such extra column(s) + * --- compare the behavior for composite-returning + * functions, in the RTE_FUNCTION case below. 
+ */ + if (!aliasp_item) + break; + if (colnames) { - /* Assume there is one alias per target item */ char *label = strVal(lfirst(aliasp_item)); *colnames = lappend(*colnames, makeString(pstrdup(label))); diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index e12a1ac5cb..d7a084c5b7 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -2180,6 +2180,91 @@ Foreign-key constraints: "check_fk_presence_2_id_fkey" FOREIGN KEY (id) REFERENCES check_fk_presence_1(id) DROP TABLE check_fk_presence_1, check_fk_presence_2; +-- check column addition within a view (bug #14876) +create table at_base_table(id int, stuff text); +insert into at_base_table values (23, 'skidoo'); +create view at_view_1 as select * from at_base_table bt; +create view at_view_2 as select *, to_json(v1) as j from at_view_1 v1; +\d+ at_view_1 + View "public.at_view_1" + Column | Type | Collation | Nullable | Default | Storage | Description +--------+---------+-----------+----------+---------+----------+------------- + id | integer | | | | plain | + stuff | text | | | | extended | +View definition: + SELECT bt.id, + bt.stuff + FROM at_base_table bt; + +\d+ at_view_2 + View "public.at_view_2" + Column | Type | Collation | Nullable | Default | Storage | Description +--------+---------+-----------+----------+---------+----------+------------- + id | integer | | | | plain | + stuff | text | | | | extended | + j | json | | | | extended | +View definition: + SELECT v1.id, + v1.stuff, + to_json(v1.*) AS j + FROM at_view_1 v1; + +explain (verbose, costs off) select * from at_view_2; + QUERY PLAN +---------------------------------------------------------- + Seq Scan on public.at_base_table bt + Output: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff)) +(2 rows) + +select * from at_view_2; + id | stuff | j +----+--------+---------------------------- + 23 | skidoo | {"id":23,"stuff":"skidoo"} +(1 row) + +create or replace view at_view_1 as select *, 2+2 as more from at_base_table bt; +\d+ at_view_1 + View "public.at_view_1" + Column | Type | Collation | Nullable | Default | Storage | Description +--------+---------+-----------+----------+---------+----------+------------- + id | integer | | | | plain | + stuff | text | | | | extended | + more | integer | | | | plain | +View definition: + SELECT bt.id, + bt.stuff, + 2 + 2 AS more + FROM at_base_table bt; + +\d+ at_view_2 + View "public.at_view_2" + Column | Type | Collation | Nullable | Default | Storage | Description +--------+---------+-----------+----------+---------+----------+------------- + id | integer | | | | plain | + stuff | text | | | | extended | + j | json | | | | extended | +View definition: + SELECT v1.id, + v1.stuff, + to_json(v1.*) AS j + FROM at_view_1 v1; + +explain (verbose, costs off) select * from at_view_2; + QUERY PLAN +---------------------------------------------------------------- + Seq Scan on public.at_base_table bt + Output: bt.id, bt.stuff, to_json(ROW(bt.id, bt.stuff, NULL)) +(2 rows) + +select * from at_view_2; + id | stuff | j +----+--------+---------------------------------------- + 23 | skidoo | {"id":23,"stuff":"skidoo","more":null} +(1 row) + +drop view at_view_2; +drop view at_view_1; +drop table at_base_table; -- -- lock levels -- diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index f248f52925..339d25b5e5 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -1418,6 
+1418,26 @@ ROLLBACK; \d check_fk_presence_2 DROP TABLE check_fk_presence_1, check_fk_presence_2; +-- check column addition within a view (bug #14876) +create table at_base_table(id int, stuff text); +insert into at_base_table values (23, 'skidoo'); +create view at_view_1 as select * from at_base_table bt; +create view at_view_2 as select *, to_json(v1) as j from at_view_1 v1; +\d+ at_view_1 +\d+ at_view_2 +explain (verbose, costs off) select * from at_view_2; +select * from at_view_2; + +create or replace view at_view_1 as select *, 2+2 as more from at_base_table bt; +\d+ at_view_1 +\d+ at_view_2 +explain (verbose, costs off) select * from at_view_2; +select * from at_view_2; + +drop view at_view_2; +drop view at_view_1; +drop table at_base_table; + -- -- lock levels -- From d76886c2d33123299ce7c8255a71e39b9e53711b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 27 Oct 2017 18:16:24 -0400 Subject: [PATCH 0442/1087] Dept of second thoughts: keep aliasp_item in sync with tlistitem. Commit d5b760ecb wasn't quite right, on second thought: if the caller didn't ask for column names then it would happily emit more Vars than if the caller did ask for column names. This is surely not a good idea. Advance the aliasp_item whether or not we're preparing a colnames list. --- src/backend/parser/parse_relation.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index e89bebfcc3..6acc21dfe6 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -2221,7 +2221,6 @@ expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up, char *label = strVal(lfirst(aliasp_item)); *colnames = lappend(*colnames, makeString(pstrdup(label))); - aliasp_item = lnext(aliasp_item); } if (colvars) @@ -2237,6 +2236,8 @@ expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up, *colvars = lappend(*colvars, varnode); } + + aliasp_item = lnext(aliasp_item); } } break; From 1310ac258c773ab9d41650b509098dd01cb4ecf3 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sat, 28 Oct 2017 11:10:21 +0200 Subject: [PATCH 0443/1087] Fix misplaced ReleaseSysCache call in get_default_partition_oid. Julien Rouhaud Discussion: http://postgr.es/m/CAOBaU_Y4omLA+VbsVdA-JwBLoJWiPxfdKCkMjrZM7NMZxa1fKw@mail.gmail.com --- src/backend/catalog/partition.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 07fdf66c38..66ec214e02 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -2831,9 +2831,9 @@ get_default_partition_oid(Oid parentId) part_table_form = (Form_pg_partitioned_table) GETSTRUCT(tuple); defaultPartId = part_table_form->partdefid; + ReleaseSysCache(tuple); } - ReleaseSysCache(tuple); return defaultPartId; } From 24fd674a1affe1ca9776bd6b21b2b35feb0fe6ed Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sat, 28 Oct 2017 11:14:23 +0200 Subject: [PATCH 0444/1087] Fix grammar. Etsuro Fujita Discussion: http://postgr.es/m/cc7767b6-6a1b-74a2-8b3c-48b8e64c12ed@lab.ntt.co.jp --- src/backend/optimizer/README | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README index 031e503761..1e4084dcf4 100644 --- a/src/backend/optimizer/README +++ b/src/backend/optimizer/README @@ -1088,8 +1088,8 @@ broken into joins between the matching partitions. 
The resultant join is partitioned in the same way as the joining relations, thus allowing an N-way join between similarly partitioned tables having equi-join condition between their partition keys to be broken down into N-way joins between their matching -partitions. This technique of breaking down a join between partition tables -into join between their partitions is called partition-wise join. We will use +partitions. This technique of breaking down a join between partitioned tables +into joins between their partitions is called partition-wise join. We will use term "partitioned relation" for either a partitioned table or a join between compatibly partitioned tables. From 9f295c08f8776213ccb592de0c4f094738d6d841 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sat, 28 Oct 2017 11:20:00 +0200 Subject: [PATCH 0445/1087] Add table_constraint synopsis to ALTER TABLE documentation. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This is already present in the CREATE TABLE documentation, but it's nicer not to have to refer to CREATE TABLE to find out the syntax for ALTER TABLE. Lætitia Avrot --- doc/src/sgml/ref/alter_table.sgml | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 234ccb70e1..41acda003f 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -85,6 +85,17 @@ ALTER TABLE [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING } +and table_constraint is: + +[ CONSTRAINT constraint_name ] +{ CHECK ( expression ) [ NO INHERIT ] | + UNIQUE ( column_name [, ... ] ) index_parameters | + PRIMARY KEY ( column_name [, ... ] ) index_parameters | + EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ] | + FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] + [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] } +[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] + and table_constraint_using_index is: [ CONSTRAINT constraint_name ] From 11c1d555cebe8045a45bc0ee10d0673fad8d4895 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sat, 28 Oct 2017 11:50:22 +0200 Subject: [PATCH 0446/1087] Improve comments for parallel executor estimation functions. The previous comment (which was copied as boilerplate from one file to the next) implied that it was the executor node itself which was being serialized, but that's not right. We're not serializing the executor nodes; we're just allowing them to store some additional information in DSM. Adjusts the comment to reflect this. 
Discussion: http://postgr.es/m/CA+TgmoaHVinxG=3h6qBAsyV8xaDyQwbzK7YZnYfE8nJFMK1=FA@mail.gmail.com --- src/backend/executor/nodeBitmapHeapscan.c | 3 ++- src/backend/executor/nodeIndexonlyscan.c | 3 ++- src/backend/executor/nodeIndexscan.c | 3 ++- src/backend/executor/nodeSeqscan.c | 3 ++- 4 files changed, 8 insertions(+), 4 deletions(-) diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c index f7e55e0b45..6035b4dfd4 100644 --- a/src/backend/executor/nodeBitmapHeapscan.c +++ b/src/backend/executor/nodeBitmapHeapscan.c @@ -934,7 +934,8 @@ BitmapShouldInitializeSharedState(ParallelBitmapHeapState *pstate) /* ---------------------------------------------------------------- * ExecBitmapHeapEstimate * - * estimates the space required to serialize bitmap scan node. + * Compute the amount of space we'll need in the parallel + * query DSM, and inform pcxt->estimator about our needs. * ---------------------------------------------------------------- */ void diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c index 5351cb8981..9368ca04f8 100644 --- a/src/backend/executor/nodeIndexonlyscan.c +++ b/src/backend/executor/nodeIndexonlyscan.c @@ -604,7 +604,8 @@ ExecInitIndexOnlyScan(IndexOnlyScan *node, EState *estate, int eflags) /* ---------------------------------------------------------------- * ExecIndexOnlyScanEstimate * - * estimates the space required to serialize index-only scan node. + * Compute the amount of space we'll need in the parallel + * query DSM, and inform pcxt->estimator about our needs. * ---------------------------------------------------------------- */ void diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c index 638b17b07c..262008240d 100644 --- a/src/backend/executor/nodeIndexscan.c +++ b/src/backend/executor/nodeIndexscan.c @@ -1644,7 +1644,8 @@ ExecIndexBuildScanKeys(PlanState *planstate, Relation index, /* ---------------------------------------------------------------- * ExecIndexScanEstimate * - * estimates the space required to serialize indexscan node. + * Compute the amount of space we'll need in the parallel + * query DSM, and inform pcxt->estimator about our needs. * ---------------------------------------------------------------- */ void diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c index d4ac939c9b..76bec780a8 100644 --- a/src/backend/executor/nodeSeqscan.c +++ b/src/backend/executor/nodeSeqscan.c @@ -289,7 +289,8 @@ ExecReScanSeqScan(SeqScanState *node) /* ---------------------------------------------------------------- * ExecSeqScanEstimate * - * estimates the space required to serialize seqscan node. + * Compute the amount of space we'll need in the parallel + * query DSM, and inform pcxt->estimator about our needs. * ---------------------------------------------------------------- */ void From c6fd5cd7062283575a436ec4ea3ed7899ace79a0 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sat, 28 Oct 2017 12:04:37 +0200 Subject: [PATCH 0447/1087] Fix typo. Eiji Seki Discussion: http://postgr.es/m/A11BD0E1A40FAC479D740CEFA373E203397E5276@g01jpexmbkw05 --- contrib/bloom/blvacuum.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c index 2e060871b6..b0e44330ff 100644 --- a/contrib/bloom/blvacuum.c +++ b/contrib/bloom/blvacuum.c @@ -125,7 +125,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats, /* Is it empty page now? 
*/ if (BloomPageGetMaxOffset(page) == 0) BloomPageSetDeleted(page); - /* Adjust pg_lower */ + /* Adjust pd_lower */ ((PageHeader) page)->pd_lower = (Pointer) itupPtr - page; /* Finish WAL-logging */ GenericXLogFinish(gxlogState); From 60651e4cddbb77e8f1a0c7fc0be6a7e7bf626fe0 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 28 Oct 2017 14:02:21 -0400 Subject: [PATCH 0448/1087] Support domains over composite types in PL/Perl. In passing, don't insist on rsi->expectedDesc being set unless we actually need it; this allows succeeding in a couple of cases where PL/Perl functions returning setof composite would have failed before, and makes the error message more apropos in other cases. Discussion: https://postgr.es/m/4206.1499798337@sss.pgh.pa.us --- src/pl/plperl/expected/plperl.out | 88 ++++++++++++++++++++- src/pl/plperl/expected/plperl_util.out | 11 ++- src/pl/plperl/plperl.c | 103 +++++++++++++++++-------- src/pl/plperl/sql/plperl.sql | 49 ++++++++++++ src/pl/plperl/sql/plperl_util.sql | 9 +++ 5 files changed, 222 insertions(+), 38 deletions(-) diff --git a/src/pl/plperl/expected/plperl.out b/src/pl/plperl/expected/plperl.out index 14df5f42df..ebfba3eb8d 100644 --- a/src/pl/plperl/expected/plperl.out +++ b/src/pl/plperl/expected/plperl.out @@ -214,8 +214,10 @@ CREATE OR REPLACE FUNCTION perl_record_set() RETURNS SETOF record AS $$ return undef; $$ LANGUAGE plperl; SELECT perl_record_set(); -ERROR: set-valued function called in context that cannot accept a set -CONTEXT: PL/Perl function "perl_record_set" + perl_record_set +----------------- +(0 rows) + SELECT * FROM perl_record_set(); ERROR: a column definition list is required for functions returning "record" LINE 1: SELECT * FROM perl_record_set(); @@ -233,7 +235,7 @@ CREATE OR REPLACE FUNCTION perl_record_set() RETURNS SETOF record AS $$ ]; $$ LANGUAGE plperl; SELECT perl_record_set(); -ERROR: set-valued function called in context that cannot accept a set +ERROR: function returning record called in context that cannot accept type record CONTEXT: PL/Perl function "perl_record_set" SELECT * FROM perl_record_set(); ERROR: a column definition list is required for functions returning "record" @@ -250,7 +252,7 @@ CREATE OR REPLACE FUNCTION perl_record_set() RETURNS SETOF record AS $$ ]; $$ LANGUAGE plperl; SELECT perl_record_set(); -ERROR: set-valued function called in context that cannot accept a set +ERROR: function returning record called in context that cannot accept type record CONTEXT: PL/Perl function "perl_record_set" SELECT * FROM perl_record_set(); ERROR: a column definition list is required for functions returning "record" @@ -387,6 +389,44 @@ $$ LANGUAGE plperl; SELECT * FROM foo_set_bad(); ERROR: Perl hash contains nonexistent column "z" CONTEXT: PL/Perl function "foo_set_bad" +CREATE DOMAIN orderedfootype AS footype CHECK ((VALUE).x <= (VALUE).y); +CREATE OR REPLACE FUNCTION foo_ordered() RETURNS orderedfootype AS $$ + return {x => 3, y => 4}; +$$ LANGUAGE plperl; +SELECT * FROM foo_ordered(); + x | y +---+--- + 3 | 4 +(1 row) + +CREATE OR REPLACE FUNCTION foo_ordered() RETURNS orderedfootype AS $$ + return {x => 5, y => 4}; +$$ LANGUAGE plperl; +SELECT * FROM foo_ordered(); -- fail +ERROR: value for domain orderedfootype violates check constraint "orderedfootype_check" +CONTEXT: PL/Perl function "foo_ordered" +CREATE OR REPLACE FUNCTION foo_ordered_set() RETURNS SETOF orderedfootype AS $$ +return [ + {x => 3, y => 4}, + {x => 4, y => 7} +]; +$$ LANGUAGE plperl; +SELECT * FROM foo_ordered_set(); + x | y +---+--- + 3 | 4 + 4 | 7 
+(2 rows) + +CREATE OR REPLACE FUNCTION foo_ordered_set() RETURNS SETOF orderedfootype AS $$ +return [ + {x => 3, y => 4}, + {x => 9, y => 7} +]; +$$ LANGUAGE plperl; +SELECT * FROM foo_ordered_set(); -- fail +ERROR: value for domain orderedfootype violates check constraint "orderedfootype_check" +CONTEXT: PL/Perl function "foo_ordered_set" -- -- Check passing a tuple argument -- @@ -411,6 +451,46 @@ SELECT perl_get_field((11,12), 'z'); (1 row) +CREATE OR REPLACE FUNCTION perl_get_cfield(orderedfootype, text) RETURNS integer AS $$ + return $_[0]->{$_[1]}; +$$ LANGUAGE plperl; +SELECT perl_get_cfield((11,12), 'x'); + perl_get_cfield +----------------- + 11 +(1 row) + +SELECT perl_get_cfield((11,12), 'y'); + perl_get_cfield +----------------- + 12 +(1 row) + +SELECT perl_get_cfield((12,11), 'x'); -- fail +ERROR: value for domain orderedfootype violates check constraint "orderedfootype_check" +CREATE OR REPLACE FUNCTION perl_get_rfield(record, text) RETURNS integer AS $$ + return $_[0]->{$_[1]}; +$$ LANGUAGE plperl; +SELECT perl_get_rfield((11,12), 'f1'); + perl_get_rfield +----------------- + 11 +(1 row) + +SELECT perl_get_rfield((11,12)::footype, 'y'); + perl_get_rfield +----------------- + 12 +(1 row) + +SELECT perl_get_rfield((11,12)::orderedfootype, 'x'); + perl_get_rfield +----------------- + 11 +(1 row) + +SELECT perl_get_rfield((12,11)::orderedfootype, 'x'); -- fail +ERROR: value for domain orderedfootype violates check constraint "orderedfootype_check" -- -- Test return_next -- diff --git a/src/pl/plperl/expected/plperl_util.out b/src/pl/plperl/expected/plperl_util.out index 7cd027f33e..698a8a17fe 100644 --- a/src/pl/plperl/expected/plperl_util.out +++ b/src/pl/plperl/expected/plperl_util.out @@ -172,11 +172,13 @@ select perl_looks_like_number(); -- test encode_typed_literal create type perl_foo as (a integer, b text[]); create type perl_bar as (c perl_foo[]); +create domain perl_foo_pos as perl_foo check((value).a > 0); create or replace function perl_encode_typed_literal() returns setof text language plperl as $$ return_next encode_typed_literal(undef, 'text'); return_next encode_typed_literal([[1,2,3],[3,2,1],[1,3,2]], 'integer[]'); return_next encode_typed_literal({a => 1, b => ['PL','/','Perl']}, 'perl_foo'); return_next encode_typed_literal({c => [{a => 9, b => ['PostgreSQL']}, {b => ['Postgres'], a => 1}]}, 'perl_bar'); + return_next encode_typed_literal({a => 1, b => ['PL','/','Perl']}, 'perl_foo_pos'); $$; select perl_encode_typed_literal(); perl_encode_typed_literal @@ -185,5 +187,12 @@ select perl_encode_typed_literal(); {{1,2,3},{3,2,1},{1,3,2}} (1,"{PL,/,Perl}") ("{""(9,{PostgreSQL})"",""(1,{Postgres})""}") -(4 rows) + (1,"{PL,/,Perl}") +(5 rows) +create or replace function perl_encode_typed_literal() returns setof text language plperl as $$ + return_next encode_typed_literal({a => 0, b => ['PL','/','Perl']}, 'perl_foo_pos'); +$$; +select perl_encode_typed_literal(); -- fail +ERROR: value for domain perl_foo_pos violates check constraint "perl_foo_pos_check" +CONTEXT: PL/Perl function "perl_encode_typed_literal" diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c index 5a575bdbe4..ca0d1bccf8 100644 --- a/src/pl/plperl/plperl.c +++ b/src/pl/plperl/plperl.c @@ -179,8 +179,11 @@ typedef struct plperl_call_data { plperl_proc_desc *prodesc; FunctionCallInfo fcinfo; + /* remaining fields are used only in a function returning set: */ Tuplestorestate *tuple_store; TupleDesc ret_tdesc; + Oid cdomain_oid; /* 0 unless returning domain-over-composite */ + void 
*cdomain_info; MemoryContext tmp_cxt; } plperl_call_data; @@ -1356,6 +1359,7 @@ plperl_sv_to_datum(SV *sv, Oid typid, int32 typmod, /* handle a hashref */ Datum ret; TupleDesc td; + bool isdomain; if (!type_is_rowtype(typid)) ereport(ERROR, @@ -1363,20 +1367,36 @@ plperl_sv_to_datum(SV *sv, Oid typid, int32 typmod, errmsg("cannot convert Perl hash to non-composite type %s", format_type_be(typid)))); - td = lookup_rowtype_tupdesc_noerror(typid, typmod, true); - if (td == NULL) + td = lookup_rowtype_tupdesc_domain(typid, typmod, true); + if (td != NULL) { - /* Try to look it up based on our result type */ - if (fcinfo == NULL || - get_call_result_type(fcinfo, NULL, &td) != TYPEFUNC_COMPOSITE) + /* Did we look through a domain? */ + isdomain = (typid != td->tdtypeid); + } + else + { + /* Must be RECORD, try to resolve based on call info */ + TypeFuncClass funcclass; + + if (fcinfo) + funcclass = get_call_result_type(fcinfo, &typid, &td); + else + funcclass = TYPEFUNC_OTHER; + if (funcclass != TYPEFUNC_COMPOSITE && + funcclass != TYPEFUNC_COMPOSITE_DOMAIN) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("function returning record called in context " "that cannot accept type record"))); + Assert(td); + isdomain = (funcclass == TYPEFUNC_COMPOSITE_DOMAIN); } ret = plperl_hash_to_datum(sv, td); + if (isdomain) + domain_check(ret, false, typid, NULL, NULL); + /* Release on the result of get_call_result_type is harmless */ ReleaseTupleDesc(td); @@ -2401,8 +2421,7 @@ plperl_func_handler(PG_FUNCTION_ARGS) { /* Check context before allowing the call to go through */ if (!rsi || !IsA(rsi, ReturnSetInfo) || - (rsi->allowedModes & SFRM_Materialize) == 0 || - rsi->expectedDesc == NULL) + (rsi->allowedModes & SFRM_Materialize) == 0) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("set-valued function called in context that " @@ -2809,22 +2828,21 @@ compile_plperl_function(Oid fn_oid, bool is_trigger, bool is_event_trigger) ************************************************************/ if (!is_trigger && !is_event_trigger) { - typeTup = - SearchSysCache1(TYPEOID, - ObjectIdGetDatum(procStruct->prorettype)); + Oid rettype = procStruct->prorettype; + + typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(rettype)); if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", - procStruct->prorettype); + elog(ERROR, "cache lookup failed for type %u", rettype); typeStruct = (Form_pg_type) GETSTRUCT(typeTup); /* Disallow pseudotype result, except VOID or RECORD */ if (typeStruct->typtype == TYPTYPE_PSEUDO) { - if (procStruct->prorettype == VOIDOID || - procStruct->prorettype == RECORDOID) + if (rettype == VOIDOID || + rettype == RECORDOID) /* okay */ ; - else if (procStruct->prorettype == TRIGGEROID || - procStruct->prorettype == EVTTRIGGEROID) + else if (rettype == TRIGGEROID || + rettype == EVTTRIGGEROID) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("trigger functions can only be called " @@ -2833,13 +2851,12 @@ compile_plperl_function(Oid fn_oid, bool is_trigger, bool is_event_trigger) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("PL/Perl functions cannot return type %s", - format_type_be(procStruct->prorettype)))); + format_type_be(rettype)))); } - prodesc->result_oid = procStruct->prorettype; + prodesc->result_oid = rettype; prodesc->fn_retisset = procStruct->proretset; - prodesc->fn_retistuple = (procStruct->prorettype == RECORDOID || - typeStruct->typtype == TYPTYPE_COMPOSITE); + prodesc->fn_retistuple = 
type_is_rowtype(rettype); prodesc->fn_retisarray = (typeStruct->typlen == -1 && typeStruct->typelem); @@ -2862,23 +2879,22 @@ compile_plperl_function(Oid fn_oid, bool is_trigger, bool is_event_trigger) for (i = 0; i < prodesc->nargs; i++) { - typeTup = SearchSysCache1(TYPEOID, - ObjectIdGetDatum(procStruct->proargtypes.values[i])); + Oid argtype = procStruct->proargtypes.values[i]; + + typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(argtype)); if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", - procStruct->proargtypes.values[i]); + elog(ERROR, "cache lookup failed for type %u", argtype); typeStruct = (Form_pg_type) GETSTRUCT(typeTup); - /* Disallow pseudotype argument */ + /* Disallow pseudotype argument, except RECORD */ if (typeStruct->typtype == TYPTYPE_PSEUDO && - procStruct->proargtypes.values[i] != RECORDOID) + argtype != RECORDOID) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("PL/Perl functions cannot accept type %s", - format_type_be(procStruct->proargtypes.values[i])))); + format_type_be(argtype)))); - if (typeStruct->typtype == TYPTYPE_COMPOSITE || - procStruct->proargtypes.values[i] == RECORDOID) + if (type_is_rowtype(argtype)) prodesc->arg_is_rowtype[i] = true; else { @@ -2888,9 +2904,9 @@ compile_plperl_function(Oid fn_oid, bool is_trigger, bool is_event_trigger) proc_cxt); } - /* Identify array attributes */ + /* Identify array-type arguments */ if (typeStruct->typelem != 0 && typeStruct->typlen == -1) - prodesc->arg_arraytype[i] = procStruct->proargtypes.values[i]; + prodesc->arg_arraytype[i] = argtype; else prodesc->arg_arraytype[i] = InvalidOid; @@ -3249,11 +3265,25 @@ plperl_return_next_internal(SV *sv) /* * This is the first call to return_next in the current PL/Perl - * function call, so identify the output tuple descriptor and create a + * function call, so identify the output tuple type and create a * tuplestore to hold the result rows. 
*/ if (prodesc->fn_retistuple) - (void) get_call_result_type(fcinfo, NULL, &tupdesc); + { + TypeFuncClass funcclass; + Oid typid; + + funcclass = get_call_result_type(fcinfo, &typid, &tupdesc); + if (funcclass != TYPEFUNC_COMPOSITE && + funcclass != TYPEFUNC_COMPOSITE_DOMAIN) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("function returning record called in context " + "that cannot accept type record"))); + /* if domain-over-composite, remember the domain's type OID */ + if (funcclass == TYPEFUNC_COMPOSITE_DOMAIN) + current_call_data->cdomain_oid = typid; + } else { tupdesc = rsi->expectedDesc; @@ -3304,6 +3334,13 @@ plperl_return_next_internal(SV *sv) tuple = plperl_build_tuple_result((HV *) SvRV(sv), current_call_data->ret_tdesc); + + if (OidIsValid(current_call_data->cdomain_oid)) + domain_check(HeapTupleGetDatum(tuple), false, + current_call_data->cdomain_oid, + ¤t_call_data->cdomain_info, + rsi->econtext->ecxt_per_query_memory); + tuplestore_puttuple(current_call_data->tuple_store, tuple); } else diff --git a/src/pl/plperl/sql/plperl.sql b/src/pl/plperl/sql/plperl.sql index dc6b169464..c36da0ff04 100644 --- a/src/pl/plperl/sql/plperl.sql +++ b/src/pl/plperl/sql/plperl.sql @@ -231,6 +231,38 @@ $$ LANGUAGE plperl; SELECT * FROM foo_set_bad(); +CREATE DOMAIN orderedfootype AS footype CHECK ((VALUE).x <= (VALUE).y); + +CREATE OR REPLACE FUNCTION foo_ordered() RETURNS orderedfootype AS $$ + return {x => 3, y => 4}; +$$ LANGUAGE plperl; + +SELECT * FROM foo_ordered(); + +CREATE OR REPLACE FUNCTION foo_ordered() RETURNS orderedfootype AS $$ + return {x => 5, y => 4}; +$$ LANGUAGE plperl; + +SELECT * FROM foo_ordered(); -- fail + +CREATE OR REPLACE FUNCTION foo_ordered_set() RETURNS SETOF orderedfootype AS $$ +return [ + {x => 3, y => 4}, + {x => 4, y => 7} +]; +$$ LANGUAGE plperl; + +SELECT * FROM foo_ordered_set(); + +CREATE OR REPLACE FUNCTION foo_ordered_set() RETURNS SETOF orderedfootype AS $$ +return [ + {x => 3, y => 4}, + {x => 9, y => 7} +]; +$$ LANGUAGE plperl; + +SELECT * FROM foo_ordered_set(); -- fail + -- -- Check passing a tuple argument -- @@ -243,6 +275,23 @@ SELECT perl_get_field((11,12), 'x'); SELECT perl_get_field((11,12), 'y'); SELECT perl_get_field((11,12), 'z'); +CREATE OR REPLACE FUNCTION perl_get_cfield(orderedfootype, text) RETURNS integer AS $$ + return $_[0]->{$_[1]}; +$$ LANGUAGE plperl; + +SELECT perl_get_cfield((11,12), 'x'); +SELECT perl_get_cfield((11,12), 'y'); +SELECT perl_get_cfield((12,11), 'x'); -- fail + +CREATE OR REPLACE FUNCTION perl_get_rfield(record, text) RETURNS integer AS $$ + return $_[0]->{$_[1]}; +$$ LANGUAGE plperl; + +SELECT perl_get_rfield((11,12), 'f1'); +SELECT perl_get_rfield((11,12)::footype, 'y'); +SELECT perl_get_rfield((11,12)::orderedfootype, 'x'); +SELECT perl_get_rfield((12,11)::orderedfootype, 'x'); -- fail + -- -- Test return_next -- diff --git a/src/pl/plperl/sql/plperl_util.sql b/src/pl/plperl/sql/plperl_util.sql index 143d047802..5b31605ccd 100644 --- a/src/pl/plperl/sql/plperl_util.sql +++ b/src/pl/plperl/sql/plperl_util.sql @@ -102,11 +102,20 @@ select perl_looks_like_number(); -- test encode_typed_literal create type perl_foo as (a integer, b text[]); create type perl_bar as (c perl_foo[]); +create domain perl_foo_pos as perl_foo check((value).a > 0); + create or replace function perl_encode_typed_literal() returns setof text language plperl as $$ return_next encode_typed_literal(undef, 'text'); return_next encode_typed_literal([[1,2,3],[3,2,1],[1,3,2]], 'integer[]'); return_next 
encode_typed_literal({a => 1, b => ['PL','/','Perl']}, 'perl_foo'); return_next encode_typed_literal({c => [{a => 9, b => ['PostgreSQL']}, {b => ['Postgres'], a => 1}]}, 'perl_bar'); + return_next encode_typed_literal({a => 1, b => ['PL','/','Perl']}, 'perl_foo_pos'); $$; select perl_encode_typed_literal(); + +create or replace function perl_encode_typed_literal() returns setof text language plperl as $$ + return_next encode_typed_literal({a => 0, b => ['PL','/','Perl']}, 'perl_foo_pos'); +$$; + +select perl_encode_typed_literal(); -- fail From b7f3eb31405f1dbbf086e5a8f88569a6dc85157a Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sun, 29 Oct 2017 12:41:43 +0530 Subject: [PATCH 0449/1087] Add hash_combine64. Extracted from a larger patch by Amul Sul, with some comment additions by me. Discussion: http://postgr.es/m/20171024113004.hn5qajypin4dy5sw@alap3.anarazel.de --- src/include/utils/hashutils.h | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/src/include/utils/hashutils.h b/src/include/utils/hashutils.h index 366bd0e78b..3a5c21f523 100644 --- a/src/include/utils/hashutils.h +++ b/src/include/utils/hashutils.h @@ -8,8 +8,8 @@ #define HASHUTILS_H /* - * Combine two hash values, resulting in another hash value, with decent bit - * mixing. + * Combine two 32-bit hash values, resulting in another hash value, with + * decent bit mixing. * * Similar to boost's hash_combine(). */ @@ -20,6 +20,18 @@ hash_combine(uint32 a, uint32 b) return a; } +/* + * Combine two 64-bit hash values, resulting in another hash value, using the + * same kind of technique as hash_combine(). Testing shows that this also + * produces good bit mixing. + */ +static inline uint64 +hash_combine64(uint64 a, uint64 b) +{ + /* 0x49a0f4dd15e5a8e3 is 64bit random data */ + a ^= b + 0x49a0f4dd15e5a8e3 + (a << 54) + (a >> 7); + return a; +} /* * Simple inline murmur hash implementation hashing a 32 bit integer, for From 5f3971291fc231bb65a38198b1bcb1c29ef63108 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sun, 29 Oct 2017 12:46:55 +0530 Subject: [PATCH 0450/1087] pg_receivewal: Add --no-sync option. Michael Paquier, reviewed by Kuntal Ghosh and by me. I did a little wordsmithing on the documentation, too. Discussion: http://postgr.es/m/CAB7nPqTuXuyEoVKcWcExh_b0uAjgWd_14KfGLrCTccBZ=VA0KA@mail.gmail.com --- doc/src/sgml/ref/pg_receivewal.sgml | 17 +++++++++++++++++ src/bin/pg_basebackup/pg_receivewal.c | 16 +++++++++++++++- src/bin/pg_basebackup/t/020_pg_receivewal.pl | 5 ++++- 3 files changed, 36 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/ref/pg_receivewal.sgml b/doc/src/sgml/ref/pg_receivewal.sgml index 4d13c57ffa..4e2e0cb44c 100644 --- a/doc/src/sgml/ref/pg_receivewal.sgml +++ b/doc/src/sgml/ref/pg_receivewal.sgml @@ -135,6 +135,23 @@ PostgreSQL documentation + + + + + This option causes pg_receivewal to not force WAL + data to be flushed to disk. This is faster, but means that a + subsequent operating system crash can leave the WAL segments corrupt. + Generally, this option is useful for testing but should not be used + when doing WAL archiving on a production deployment. + + + + This option is incompatible with --synchronous. 
+ + + + diff --git a/src/bin/pg_basebackup/pg_receivewal.c b/src/bin/pg_basebackup/pg_receivewal.c index 888ae6c571..d801ea07fc 100644 --- a/src/bin/pg_basebackup/pg_receivewal.c +++ b/src/bin/pg_basebackup/pg_receivewal.c @@ -40,6 +40,7 @@ static volatile bool time_to_stop = false; static bool do_create_slot = false; static bool slot_exists_ok = false; static bool do_drop_slot = false; +static bool do_sync = true; static bool synchronous = false; static char *replication_slot = NULL; static XLogRecPtr endpos = InvalidXLogRecPtr; @@ -81,6 +82,7 @@ usage(void) printf(_(" -E, --endpos=LSN exit after receiving the specified LSN\n")); printf(_(" --if-not-exists do not error if slot already exists when creating a slot\n")); printf(_(" -n, --no-loop do not loop on connection lost\n")); + printf(_(" --no-sync do not wait for changes to be written safely to disk\n")); printf(_(" -s, --status-interval=SECS\n" " time between status packets sent to server (default: %d)\n"), (standby_message_timeout / 1000)); printf(_(" -S, --slot=SLOTNAME replication slot to use\n")); @@ -425,7 +427,7 @@ StreamLog(void) stream.stop_socket = PGINVALID_SOCKET; stream.standby_message_timeout = standby_message_timeout; stream.synchronous = synchronous; - stream.do_sync = true; + stream.do_sync = do_sync; stream.mark_done = false; stream.walmethod = CreateWalDirectoryMethod(basedir, compresslevel, stream.do_sync); @@ -487,6 +489,7 @@ main(int argc, char **argv) {"drop-slot", no_argument, NULL, 2}, {"if-not-exists", no_argument, NULL, 3}, {"synchronous", no_argument, NULL, 4}, + {"no-sync", no_argument, NULL, 5}, {NULL, 0, NULL, 0} }; @@ -595,6 +598,9 @@ main(int argc, char **argv) case 4: synchronous = true; break; + case 5: + do_sync = false; + break; default: /* @@ -637,6 +643,14 @@ main(int argc, char **argv) exit(1); } + if (synchronous && !do_sync) + { + fprintf(stderr, _("%s: cannot use --synchronous together with --no-sync\n"), progname); + fprintf(stderr, _("Try \"%s --help\" for more information.\n"), + progname); + exit(1); + } + /* * Required arguments */ diff --git a/src/bin/pg_basebackup/t/020_pg_receivewal.pl b/src/bin/pg_basebackup/t/020_pg_receivewal.pl index f9f7bf75ab..64e3a35a87 100644 --- a/src/bin/pg_basebackup/t/020_pg_receivewal.pl +++ b/src/bin/pg_basebackup/t/020_pg_receivewal.pl @@ -2,7 +2,7 @@ use warnings; use TestLib; use PostgresNode; -use Test::More tests => 17; +use Test::More tests => 18; program_help_ok('pg_receivewal'); program_version_ok('pg_receivewal'); @@ -24,6 +24,9 @@ $primary->command_fails( [ 'pg_receivewal', '-D', $stream_dir, '--create-slot' ], 'failure if --create-slot specified without --slot'); +$primary->command_fails( + [ 'pg_receivewal', '-D', $stream_dir, '--synchronous', '--no-sync' ], + 'failure if --synchronous specified with --no-sync'); # Slot creation and drop my $slot_name = 'test'; From 846fcc85167c417873865099d70068ed85f758a8 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sun, 29 Oct 2017 12:58:40 +0530 Subject: [PATCH 0451/1087] Fix problems with the "role" GUC and parallel query. Without this fix, dropping a role can sometimes result in parallel query failures in sessions that have used "SET ROLE" to assume the dropped role, even if that setting isn't active any more. Report by Pavan Deolasee. Patch by Amit Kapila, reviewed by me. 
Discussion: http://postgr.es/m/CABOikdOomRcZsLsLK+Z+qENM1zxyaWnAvFh3MJZzZnnKiF+REg@mail.gmail.com --- src/backend/access/transam/parallel.c | 11 ++++++++ src/backend/utils/misc/guc.c | 25 +++++++------------ src/include/utils/guc.h | 1 + src/test/regress/expected/select_parallel.out | 16 ++++++++++++ src/test/regress/sql/select_parallel.sql | 11 ++++++++ 5 files changed, 48 insertions(+), 16 deletions(-) diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index d683050733..1f542ed8d8 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -75,9 +75,11 @@ typedef struct FixedParallelState Oid database_id; Oid authenticated_user_id; Oid current_user_id; + Oid outer_user_id; Oid temp_namespace_id; Oid temp_toast_namespace_id; int sec_context; + bool is_superuser; PGPROC *parallel_master_pgproc; pid_t parallel_master_pid; BackendId parallel_master_backend_id; @@ -296,6 +298,8 @@ InitializeParallelDSM(ParallelContext *pcxt) shm_toc_allocate(pcxt->toc, sizeof(FixedParallelState)); fps->database_id = MyDatabaseId; fps->authenticated_user_id = GetAuthenticatedUserId(); + fps->outer_user_id = GetCurrentRoleId(); + fps->is_superuser = session_auth_is_superuser; GetUserIdAndSecContext(&fps->current_user_id, &fps->sec_context); GetTempNamespaceState(&fps->temp_namespace_id, &fps->temp_toast_namespace_id); @@ -1115,6 +1119,13 @@ ParallelWorkerMain(Datum main_arg) */ InvalidateSystemCaches(); + /* + * Restore current role id. Skip verifying whether session user is + * allowed to become this role and blindly restore the leader's state for + * current role. + */ + SetCurrentRoleId(fps->outer_user_id, fps->is_superuser); + /* Restore user ID and security context. */ SetUserIdAndSecContext(fps->current_user_id, fps->sec_context); diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index ae22185fbd..65372d7cc5 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -446,6 +446,7 @@ char *event_source; bool row_security; bool check_function_bodies = true; bool default_with_oids = false; +bool session_auth_is_superuser; int log_min_error_statement = ERROR; int log_min_messages = WARNING; @@ -492,7 +493,6 @@ int huge_pages; * and is kept in sync by assign_hooks. */ static char *syslog_ident_str; -static bool session_auth_is_superuser; static double phony_random_seed; static char *client_encoding_string; static char *datestyle_string; @@ -8986,12 +8986,18 @@ read_nondefault_variables(void) * constants; a few, like server_encoding and lc_ctype, are handled specially * outside the serialize/restore procedure. Therefore, SerializeGUCState() * never sends these, and RestoreGUCState() never changes them. + * + * Role is a special variable in the sense that its current value can be an + * invalid value and there are multiple ways by which that can happen (like + * after setting the role, someone drops it). So we handle it outside of + * serialize/restore machinery. 
*/ static bool can_skip_gucvar(struct config_generic *gconf) { return gconf->context == PGC_POSTMASTER || - gconf->context == PGC_INTERNAL || gconf->source == PGC_S_DEFAULT; + gconf->context == PGC_INTERNAL || gconf->source == PGC_S_DEFAULT || + strcmp(gconf->name, "role") == 0; } /* @@ -9252,7 +9258,6 @@ SerializeGUCState(Size maxsize, char *start_address) Size actual_size; Size bytes_left; int i; - int i_role = -1; /* Reserve space for saving the actual size of the guc state */ Assert(maxsize > sizeof(actual_size)); @@ -9260,19 +9265,7 @@ SerializeGUCState(Size maxsize, char *start_address) bytes_left = maxsize - sizeof(actual_size); for (i = 0; i < num_guc_variables; i++) - { - /* - * It's pretty ugly, but we've got to force "role" to be initialized - * after "session_authorization"; otherwise, the latter will override - * the former. - */ - if (strcmp(guc_variables[i]->name, "role") == 0) - i_role = i; - else - serialize_variable(&curptr, &bytes_left, guc_variables[i]); - } - if (i_role >= 0) - serialize_variable(&curptr, &bytes_left, guc_variables[i_role]); + serialize_variable(&curptr, &bytes_left, guc_variables[i]); /* Store actual size without assuming alignment of start_address. */ actual_size = maxsize - bytes_left - sizeof(actual_size); diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h index 467125a09d..aa130cdb84 100644 --- a/src/include/utils/guc.h +++ b/src/include/utils/guc.h @@ -245,6 +245,7 @@ extern bool log_btree_build_stats; extern PGDLLIMPORT bool check_function_bodies; extern bool default_with_oids; +extern bool session_auth_is_superuser; extern int log_min_error_statement; extern int log_min_messages; diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 3c63ad1de8..ac9ad0668d 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -508,6 +508,22 @@ SELECT make_record(x) FROM (SELECT generate_series(1, 5) x) ss ORDER BY x; ROLLBACK TO SAVEPOINT settings; DROP function make_record(n int); +-- test the sanity of parallel query after the active role is dropped. +drop role if exists regress_parallel_worker; +NOTICE: role "regress_parallel_worker" does not exist, skipping +create role regress_parallel_worker; +set role regress_parallel_worker; +reset session authorization; +drop role regress_parallel_worker; +set force_parallel_mode = 1; +select count(*) from tenk1; + count +------- + 10000 +(1 row) + +reset force_parallel_mode; +reset role; -- to increase the parallel query test coverage SAVEPOINT settings; SET LOCAL force_parallel_mode = 1; diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 720495c811..495f0335dc 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -201,6 +201,17 @@ SELECT make_record(x) FROM (SELECT generate_series(1, 5) x) ss ORDER BY x; ROLLBACK TO SAVEPOINT settings; DROP function make_record(n int); +-- test the sanity of parallel query after the active role is dropped. 
+drop role if exists regress_parallel_worker; +create role regress_parallel_worker; +set role regress_parallel_worker; +reset session authorization; +drop role regress_parallel_worker; +set force_parallel_mode = 1; +select count(*) from tenk1; +reset force_parallel_mode; +reset role; + -- to increase the parallel query test coverage SAVEPOINT settings; SET LOCAL force_parallel_mode = 1; From 854b643c8eb476ab957d83d562c8bfa10586d123 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Mon, 30 Oct 2017 14:37:00 +0100 Subject: [PATCH 0452/1087] Fix typo in comment Etsuro Fujita --- src/backend/catalog/partition.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 66ec214e02..5daa8a1c19 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -703,7 +703,7 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval, /* * Return a copy of given PartitionBoundInfo structure. The data types of bounds - * are described by given partition key specificiation. + * are described by given partition key specification. */ extern PartitionBoundInfo partition_bounds_copy(PartitionBoundInfo src, From 77954f996cdb31ead2718aa3a9b4878da382e385 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Mon, 30 Oct 2017 14:37:44 +0100 Subject: [PATCH 0453/1087] Fix typo --- doc/src/sgml/charset.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index 3874a3f1ea..ce395e115a 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -857,7 +857,7 @@ CREATE COLLATION german (provider = libc, locale = 'de_DE'); By design, ICU will accept almost any string as a locale name and match - it to the closet locale it can provide, using the fallback procedure + it to the closest locale it can provide, using the fallback procedure described in its documentation. Thus, there will be no direct feedback if a collation specification is composed using features that the given ICU installation does not actually support. It is therefore recommended From be72b9c378bfe99a3d175c98d36dc150229f4faf Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 30 Oct 2017 15:52:02 +0100 Subject: [PATCH 0454/1087] Fix autovacuum work item error handling In autovacuum's "work item" processing, a few strings were allocated in the current transaction's memory context, which goes away during error handling; if an error happened during execution of the work item, the pfree() calls to clean up afterwards would try to release already-released memory, possibly leading to a crash. In branch master, this was already fixed by commit 335f3d04e4c8, so backpatch that to REL_10_STABLE to fix the problem there too. As a secondary problem, verify that the autovacuum worker is connected to the right database for each work item; otherwise some items would be discarded by workers in other databases. 
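For context, BRIN auto-summarization is currently the only producer of
autovacuum work items, so the affected path is reached with an index like
the following (a minimal sketch; the table and index names are invented):

    create table brin_demo (ts timestamptz, payload text);
    create index brin_demo_ts_idx on brin_demo
        using brin (ts) with (autosummarize = on);
    -- once inserts fill up a page range, a summarization work item is
    -- queued; an error while an autovacuum worker executes such an item
    -- triggered the double-free described above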
Reported-by: Justin Pryzby Discussion: https://postgr.es/m/20171014035732.GB31726@telsasoft.com --- src/backend/postmaster/autovacuum.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index c04c0b548d..48765bb01b 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -2531,6 +2531,8 @@ do_autovacuum(void) continue; if (workitem->avw_active) continue; + if (workitem->avw_database != MyDatabaseId) + continue; /* claim this one, and release lock while performing it */ workitem->avw_active = true; @@ -2606,9 +2608,7 @@ perform_work_item(AutoVacuumWorkItem *workitem) /* * Save the relation name for a possible error message, to avoid a catalog * lookup in case of an error. If any of these return NULL, then the - * relation has been dropped since last we checked; skip it. Note: they - * must live in a long-lived memory context because we call vacuum and - * analyze in different transactions. + * relation has been dropped since last we checked; skip it. */ Assert(CurrentMemoryContext == AutovacMemCxt); From 86182b18957b8f9e8045d55b137aeef7c9af9916 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 30 Oct 2017 16:44:26 -0400 Subject: [PATCH 0455/1087] Doc: call out UPDATE syntax change as a v10 compatibility issue. The change made by commit 906bfcad7 means that if you're writing a parenthesized column list in UPDATE ... SET, but that column list is only one column, you now need to write ROW(expression) on the righthand side, not just a parenthesized expression. This was an intentional change for spec compatibility and potential future expansion of the possibilities for the RHS, but I'd neglected to document it as a compatibility issue, figuring that hardly anyone would bother with parenthesized syntax for a single target column. I was wrong, as shown by questions from Justin Pryzby, Adam Brusselback, and others. Move the release note item into the compatibility section and point out the behavior change for a single target column. Discussion: https://postgr.es/m/CAMjNa7cDLzPcs0xnRpkvqmJ6Vb6G3EH8CYGp9ZBjXdpFfTz6dg@mail.gmail.com --- doc/src/sgml/release-10.sgml | 46 ++++++++++++++++++++---------------- 1 file changed, 26 insertions(+), 20 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index d37f700055..906b3814f7 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -157,6 +157,32 @@ + + Use standard row constructor syntax in UPDATE ... SET + (column_list) = row_constructor + (Tom Lane) + + + + The row_constructor can now begin with the + keyword ROW; previously that had to be omitted. + If just one column name appears in + the column_list, then + the row_constructor now must use + the ROW keyword, since otherwise it is not a valid + row constructor but just a parenthesized expression. + Also, an occurrence + of table_name.* within + the row_constructor is now expanded into + multiple columns, as occurs in other uses + of row_constructors. + + + + + @@ -1538,26 +1564,6 @@ - - Allow standard row constructor syntax in UPDATE ... SET - (column_list) = row_constructor - (Tom Lane) - - - - The row_constructor can now begin with the - keyword ROW; previously that had to be omitted. Also, - an occurrence of table_name.* - within the row_constructor is now expanded into - multiple columns, as in other uses - of row_constructors. 
- - - - - From 35f059e9bdfb3b14ac9d22a9e159d36ec0ccf804 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 31 Oct 2017 09:52:39 +0530 Subject: [PATCH 0456/1087] Add sanity check for pg_proc.provariadic Check that the values from pg_proc.h match what ProcedureCreate would have done. Robert Haas and Amul Sul Discussion: http://postgr.es/m/CA+TgmoZ_UGXfq5ygeDDMdUSJ4J_VX7nFnjC6mfY6BgOJ3qZCmw@mail.gmail.com --- src/test/regress/expected/type_sanity.out | 18 ++++++++++++++++++ src/test/regress/sql/type_sanity.sql | 16 ++++++++++++++++ 2 files changed, 34 insertions(+) diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out index 7b200baef8..c6440060dc 100644 --- a/src/test/regress/expected/type_sanity.out +++ b/src/test/regress/expected/type_sanity.out @@ -129,6 +129,24 @@ WHERE p1.typinput = p2.oid AND NOT -----+---------+-----+--------- (0 rows) +-- Check for type of the variadic array parameter's elements. +-- provariadic should be ANYOID if the type of the last element is ANYOID, +-- ANYELEMENTOID if the type of the last element is ANYARRAYOID, and otherwise +-- the element type corresponding to the array type. +SELECT oid::regprocedure, provariadic::regtype, proargtypes::regtype[] +FROM pg_proc +WHERE provariadic != 0 +AND case proargtypes[array_length(proargtypes, 1)-1] + WHEN 2276 THEN 2276 -- any -> any + WHEN 2277 THEN 2283 -- anyarray -> anyelement + ELSE (SELECT t.oid + FROM pg_type t + WHERE t.typarray = proargtypes[array_length(proargtypes, 1)-1]) + END != provariadic; + oid | provariadic | proargtypes +-----+-------------+------------- +(0 rows) + -- As of 8.0, this check finds refcursor, which is borrowing -- other types' I/O routines SELECT p1.oid, p1.typname, p2.oid, p2.proname diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql index 4c65814008..428c2d324d 100644 --- a/src/test/regress/sql/type_sanity.sql +++ b/src/test/regress/sql/type_sanity.sql @@ -104,6 +104,22 @@ WHERE p1.typinput = p2.oid AND NOT p2.proargtypes[1] = 'oid'::regtype AND p2.proargtypes[2] = 'int4'::regtype)); +-- Check for type of the variadic array parameter's elements. +-- provariadic should be ANYOID if the type of the last element is ANYOID, +-- ANYELEMENTOID if the type of the last element is ANYARRAYOID, and otherwise +-- the element type corresponding to the array type. + +SELECT oid::regprocedure, provariadic::regtype, proargtypes::regtype[] +FROM pg_proc +WHERE provariadic != 0 +AND case proargtypes[array_length(proargtypes, 1)-1] + WHEN 2276 THEN 2276 -- any -> any + WHEN 2277 THEN 2283 -- anyarray -> anyelement + ELSE (SELECT t.oid + FROM pg_type t + WHERE t.typarray = proargtypes[array_length(proargtypes, 1)-1]) + END != provariadic; + -- As of 8.0, this check finds refcursor, which is borrowing -- other types' I/O routines SELECT p1.oid, p1.typname, p2.oid, p2.proname From cf7ab13bfb450dde50c86fa714a92964ce32b537 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 31 Oct 2017 14:41:21 +0530 Subject: [PATCH 0457/1087] Fix code related to partitioning schemes for dropped columns. The entry in appinfo->translated_vars can be NULL; if so, we must avoid dereferencing it. 
Ashutosh Bapat Discussion: http://postgr.es/m/CAFjFpReL7+1ien=-21rhjpO3bV7aAm1rQ8XgLVk2csFagSzpZQ@mail.gmail.com --- src/backend/optimizer/path/allpaths.c | 12 ++++++++++++ src/test/regress/expected/alter_table.out | 7 +++++++ src/test/regress/sql/alter_table.sql | 4 ++++ 3 files changed, 23 insertions(+) diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 4e565b3c00..a6efb4e1d3 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -950,6 +950,18 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel, attno - 1); int child_index; + /* + * Ignore any column dropped from the parent. + * Corresponding Var won't have any translation. It won't + * have attr_needed information, since it can not be + * referenced in the query. + */ + if (var == NULL) + { + Assert(attr_needed == NULL); + continue; + } + child_index = var->varattno - childrel->min_attr; childrel->attr_needed[child_index] = attr_needed; } diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index d7a084c5b7..ee1f10c8e0 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3706,6 +3706,13 @@ ALTER TABLE list_parted2 DROP COLUMN b; ERROR: cannot drop column named in partition key ALTER TABLE list_parted2 ALTER COLUMN b TYPE text; ERROR: cannot alter type of column named in partition key +-- dropping non-partition key columns should be allowed on the parent table. +ALTER TABLE list_parted DROP COLUMN b; +SELECT * FROM list_parted; + a +--- +(0 rows) + -- cleanup DROP TABLE list_parted, list_parted2, range_parted; DROP TABLE fail_def_part; diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index 339d25b5e5..4ae4c2ecee 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -2418,6 +2418,10 @@ ALTER TABLE part_2 INHERIT inh_test; ALTER TABLE list_parted2 DROP COLUMN b; ALTER TABLE list_parted2 ALTER COLUMN b TYPE text; +-- dropping non-partition key columns should be allowed on the parent table. +ALTER TABLE list_parted DROP COLUMN b; +SELECT * FROM list_parted; + -- cleanup DROP TABLE list_parted, list_parted2, range_parted; DROP TABLE fail_def_part; From ee4673ac071f8352c41cc673299b7ec695f079ff Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 31 Oct 2017 14:54:41 +0530 Subject: [PATCH 0458/1087] Don't exaggerate the number of temporary blocks read. A read that returns zero bytes (or an error) should not increment the number of temporary blocks read. Thomas Munro Discussion: http://postgr.es/m/CAEepm=21xgihg=WaG+O5MFotEZfN6kFETpfw+RkSnEqNQqGn2Q@mail.gmail.com --- src/backend/storage/file/buffile.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c index 4ca0ea4f2a..de85b6805c 100644 --- a/src/backend/storage/file/buffile.c +++ b/src/backend/storage/file/buffile.c @@ -264,7 +264,8 @@ BufFileLoadBuffer(BufFile *file) file->offsets[file->curFile] += file->nbytes; /* we choose not to advance curOffset here */ - pgBufferUsage.temp_blks_read++; + if (file->nbytes > 0) + pgBufferUsage.temp_blks_read++; } /* From 080351466c5a669bf35a323bdec9e296330a5dbb Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 31 Oct 2017 13:40:23 -0400 Subject: [PATCH 0459/1087] Fix underqualified cast-target type names in pg_dump and psql queries. 
Queries running with some non-pg_catalog schema frontmost in their search path need to be careful to schema-qualify type names that should be sought in pg_catalog. Vitaly Burovoy reported an oversight of this sort in pg_dump's dumpSequence, and grepping detected another one in psql's describeOneTableDetails, both introduced by sequence-related changes in v10. In pg_dump, we can fix things by removing the cast altogether, since it doesn't really matter what data types are reported for these query result columns. Likewise in psql, the query seemed to be working unduly hard to get a result that's guaranteed to be exactly 'bigint'. I also changed a couple of occurrences of "::char" similarly. These are not bugs, since "char" is a typename keyword and not subject to search_path rules, but it seems better to use uniform style. Vitaly Burovoy and Tom Lane Discussion: https://postgr.es/m/CAKOSWN=ds66zLw2SqkLTM8wbXFgDbc_OdkmT3dJfPT2mE5kipA@mail.gmail.com --- src/bin/pg_dump/pg_dump.c | 9 ++++++--- src/bin/psql/describe.c | 4 ++-- 2 files changed, 8 insertions(+), 5 deletions(-) diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 8733426e8a..b807df58a8 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -13160,7 +13160,7 @@ dumpCollation(Archive *fout, CollInfo *collinfo) collinfo->dobj.catId.oid); else appendPQExpBuffer(query, "SELECT " - "'c'::char AS collprovider, " + "'c' AS collprovider, " "collcollate, " "collctype, " "NULL AS collversion " @@ -16549,13 +16549,16 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) /* * Before PostgreSQL 10, sequence metadata is in the sequence itself, * so switch to the sequence's schema instead of pg_catalog. + * + * Note: it might seem that 'bigint' potentially needs to be + * schema-qualified, but actually that's a keyword. 
*/ /* Make sure we are in proper schema */ selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); appendPQExpBuffer(query, - "SELECT 'bigint'::name AS sequence_type, " + "SELECT 'bigint' AS sequence_type, " "start_value, increment_by, max_value, min_value, " "cache_value, is_cycled FROM %s", fmtId(tbinfo->dobj.name)); @@ -16566,7 +16569,7 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); appendPQExpBuffer(query, - "SELECT 'bigint'::name AS sequence_type, " + "SELECT 'bigint' AS sequence_type, " "0 AS start_value, increment_by, max_value, min_value, " "cache_value, is_cycled FROM %s", fmtId(tbinfo->dobj.name)); diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index 638275ca2f..986172616e 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -1596,7 +1596,7 @@ describeOneTableDetails(const char *schemaname, else { printfPQExpBuffer(&buf, - "SELECT pg_catalog.format_type('bigint'::regtype, NULL) AS \"%s\",\n" + "SELECT 'bigint' AS \"%s\",\n" " start_value AS \"%s\",\n" " min_value AS \"%s\",\n" " max_value AS \"%s\",\n" @@ -2489,7 +2489,7 @@ describeOneTableDetails(const char *schemaname, { printfPQExpBuffer(&buf, "SELECT r.rulename, trim(trailing ';' from pg_catalog.pg_get_ruledef(r.oid, true)), " - "'O'::char AS ev_enabled\n" + "'O' AS ev_enabled\n" "FROM pg_catalog.pg_rewrite r\n" "WHERE r.ev_class = '%s' ORDER BY 1;", oid); From 0fe2780db4876cb38f9f914c855a54db7c141e2f Mon Sep 17 00:00:00 2001 From: Stephen Frost Date: Tue, 31 Oct 2017 14:04:49 -0400 Subject: [PATCH 0460/1087] Remove inbound links to sql-createuser CREATE USER is an alias for CREATE ROLE, not its own command any longer, so clean up references to the 'sql-createuser' link to go to 'sql-createrole' instead. In passing, change a few cases of 'CREATE USER' to be 'CREATE ROLE ... LOGIN'. The remaining cases appear reasonable and also mention the distinction between 'CREATE ROLE' and 'CREATE USER'. Also, don't say CREATE USER "assumes" LOGIN, but rather "includes". Patch-by: David G. Johnston, with assumes->includes by me. Discussion: https://postgr.es/m/CAKFQuwYrbhKV8hH4TEABrDRBwf=gKremF=mLPQ6X2yGqxgFpYA@mail.gmail.com --- doc/src/sgml/client-auth.sgml | 4 ++-- doc/src/sgml/ref/create_database.sgml | 2 +- doc/src/sgml/user-manag.sgml | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 722f3da813..99921ba079 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -998,9 +998,9 @@ omicron bryanh guest1 separate from operating system user passwords. The password for each database user is stored in the pg_authid system catalog. Passwords can be managed with the SQL commands - and + and , - e.g., CREATE USER foo WITH PASSWORD 'secret', + e.g., CREATE ROLE foo WITH LOGIN PASSWORD 'secret', or the psql command \password. If no password has been set up for a user, the stored password diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml index 3e35c776ea..f63f1f92ac 100644 --- a/doc/src/sgml/ref/create_database.sgml +++ b/doc/src/sgml/ref/create_database.sgml @@ -45,7 +45,7 @@ CREATE DATABASE name To create a database, you must be a superuser or have the special CREATEDB privilege. - See . + See . 
diff --git a/doc/src/sgml/user-manag.sgml b/doc/src/sgml/user-manag.sgml index 2416bfd03d..94fcdf9829 100644 --- a/doc/src/sgml/user-manag.sgml +++ b/doc/src/sgml/user-manag.sgml @@ -158,7 +158,7 @@ CREATE ROLE name LOGIN; CREATE USER name; (CREATE USER is equivalent to CREATE ROLE - except that CREATE USER assumes LOGIN by + except that CREATE USER includes LOGIN by default, while CREATE ROLE does not.) From 63d6b97fd904232e7c7a8a2b9c52a3cc7eb47bef Mon Sep 17 00:00:00 2001 From: Michael Meskes Date: Wed, 1 Nov 2017 13:32:18 +0100 Subject: [PATCH 0461/1087] Make sure ecpglib does accepts digits behind decimal point even for integers in Informix mode. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Spotted and fixed by 高增琦 --- src/interfaces/ecpg/ecpglib/data.c | 32 ++++++++++++++++++------------ 1 file changed, 19 insertions(+), 13 deletions(-) diff --git a/src/interfaces/ecpg/ecpglib/data.c b/src/interfaces/ecpg/ecpglib/data.c index a2f3916f38..5375934d16 100644 --- a/src/interfaces/ecpg/ecpglib/data.c +++ b/src/interfaces/ecpg/ecpglib/data.c @@ -44,7 +44,7 @@ array_boundary(enum ARRAY_TYPE isarray, char c) /* returns true if some garbage is found at the end of the scanned string */ static bool -garbage_left(enum ARRAY_TYPE isarray, char *scan_length, enum COMPAT_MODE compat) +garbage_left(enum ARRAY_TYPE isarray, char **scan_length, enum COMPAT_MODE compat) { /* * INFORMIX allows for selecting a numeric into an int, the result is @@ -52,13 +52,19 @@ garbage_left(enum ARRAY_TYPE isarray, char *scan_length, enum COMPAT_MODE compat */ if (isarray == ECPG_ARRAY_NONE) { - if (INFORMIX_MODE(compat) && *scan_length == '.') + if (INFORMIX_MODE(compat) && **scan_length == '.') + { + /* skip invalid characters */ + do { + (*scan_length)++; + } while (**scan_length != ' ' && **scan_length != '\0' && isdigit(**scan_length)); return false; + } - if (*scan_length != ' ' && *scan_length != '\0') + if (**scan_length != ' ' && **scan_length != '\0') return true; } - else if (ECPG_IS_ARRAY(isarray) && !array_delimiter(isarray, *scan_length) && !array_boundary(isarray, *scan_length)) + else if (ECPG_IS_ARRAY(isarray) && !array_delimiter(isarray, **scan_length) && !array_boundary(isarray, **scan_length)) return true; return false; @@ -303,7 +309,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, case ECPGt_int: case ECPGt_long: res = strtol(pval, &scan_length, 10); - if (garbage_left(isarray, scan_length, compat)) + if (garbage_left(isarray, &scan_length, compat)) { ecpg_raise(lineno, ECPG_INT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); @@ -332,7 +338,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, case ECPGt_unsigned_int: case ECPGt_unsigned_long: ures = strtoul(pval, &scan_length, 10); - if (garbage_left(isarray, scan_length, compat)) + if (garbage_left(isarray, &scan_length, compat)) { ecpg_raise(lineno, ECPG_UINT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); @@ -361,7 +367,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, #ifdef HAVE_STRTOLL case ECPGt_long_long: *((long long int *) (var + offset * act_tuple)) = strtoll(pval, &scan_length, 10); - if (garbage_left(isarray, scan_length, compat)) + if (garbage_left(isarray, &scan_length, compat)) { ecpg_raise(lineno, ECPG_INT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); return false; @@ -373,7 +379,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, #ifdef 
HAVE_STRTOULL case ECPGt_unsigned_long_long: *((unsigned long long int *) (var + offset * act_tuple)) = strtoull(pval, &scan_length, 10); - if (garbage_left(isarray, scan_length, compat)) + if (garbage_left(isarray, &scan_length, compat)) { ecpg_raise(lineno, ECPG_UINT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); return false; @@ -395,7 +401,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, if (isarray && *scan_length == '"') scan_length++; - if (garbage_left(isarray, scan_length, compat)) + if (garbage_left(isarray, &scan_length, compat)) { ecpg_raise(lineno, ECPG_FLOAT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); @@ -593,7 +599,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, } else { - if (!isarray && garbage_left(isarray, scan_length, compat)) + if (!isarray && garbage_left(isarray, &scan_length, compat)) { free(nres); ecpg_raise(lineno, ECPG_NUMERIC_FORMAT, @@ -651,7 +657,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, if (*scan_length == '"') scan_length++; - if (!isarray && garbage_left(isarray, scan_length, compat)) + if (!isarray && garbage_left(isarray, &scan_length, compat)) { free(ires); ecpg_raise(lineno, ECPG_INTERVAL_FORMAT, @@ -701,7 +707,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, if (*scan_length == '"') scan_length++; - if (!isarray && garbage_left(isarray, scan_length, compat)) + if (!isarray && garbage_left(isarray, &scan_length, compat)) { ecpg_raise(lineno, ECPG_DATE_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); @@ -749,7 +755,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, if (*scan_length == '"') scan_length++; - if (!isarray && garbage_left(isarray, scan_length, compat)) + if (!isarray && garbage_left(isarray, &scan_length, compat)) { ecpg_raise(lineno, ECPG_TIMESTAMP_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); From 067a2259fd2d7050ecf13a82a96e9a95bf8b3785 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 1 Nov 2017 10:20:05 -0400 Subject: [PATCH 0462/1087] pg_basebackup: Fix comparison handling of tablespace mappings on Windows A candidate path needs to be canonicalized before being checked against the mappings, because the mappings are also canonicalized. This is especially relevant on Windows Reported-by: nb Author: Michael Paquier Reviewed-by: Ashutosh Sharma --- src/bin/pg_basebackup/pg_basebackup.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c index dac7299ff4..a8715d912d 100644 --- a/src/bin/pg_basebackup/pg_basebackup.c +++ b/src/bin/pg_basebackup/pg_basebackup.c @@ -298,6 +298,11 @@ tablespace_list_append(const char *arg) exit(1); } + /* + * Comparisons done with these values should involve similarly + * canonicalized path values. This is particularly sensitive on Windows + * where path values may not necessarily use Unix slashes. 
+ */ canonicalize_path(cell->old_dir); canonicalize_path(cell->new_dir); @@ -1303,9 +1308,14 @@ static const char * get_tablespace_mapping(const char *dir) { TablespaceListCell *cell; + char canon_dir[MAXPGPATH]; + + /* Canonicalize path for comparison consistency */ + strlcpy(canon_dir, dir, sizeof(canon_dir)); + canonicalize_path(canon_dir); for (cell = tablespace_dirs.head; cell; cell = cell->next) - if (strcmp(dir, cell->old_dir) == 0) + if (strcmp(canon_dir, cell->old_dir) == 0) return cell->new_dir; return dir; From 387ec70322aaf60127537bc200e20791f0b415ae Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 1 Nov 2017 10:50:24 -0400 Subject: [PATCH 0463/1087] doc: Add to hot standby documentation Document the order of changing certain settings when using hot-standby servers. This is just a logical consequence of what was already documented, but it gives the users some more practical advice. Author: Yorick Peterse Reviewed-by: Aleksander Alekseev Reviewed-by: Robert Haas --- doc/src/sgml/high-availability.sgml | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml index 086d6abb30..aa780d360d 100644 --- a/doc/src/sgml/high-availability.sgml +++ b/doc/src/sgml/high-availability.sgml @@ -2147,7 +2147,12 @@ LOG: database system is ready to accept read only connections The setting of some parameters on the standby will need reconfiguration if they have been changed on the primary. For these parameters, the value on the standby must - be equal to or greater than the value on the primary. If these parameters + be equal to or greater than the value on the primary. + Therefore, if you want to increase these values, you should do so on all + standby servers first, before applying the changes to the primary server. + Conversely, if you want to decrease these values, you should do so on the + primary server first, before applying the changes to all standby servers. + If these parameters are not set high enough then the standby will refuse to start. Higher values can then be supplied and the server restarted to begin recovery again. These parameters are: From af20e2d728eb508bb169e7294e4e210a3459833a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 1 Nov 2017 13:32:23 -0400 Subject: [PATCH 0464/1087] Fix ALTER TABLE code to update domain constraints when needed. It's possible for dropping a column, or altering its type, to require changes in domain CHECK constraint expressions; but the code was previously only expecting to find dependent table CHECK constraints. Make the necessary adjustments. This is a fairly old oversight, but it's a lot easier to encounter the problem in the context of domains over composite types than it was before. Given the lack of field complaints, I'm not going to bother with a back-patch, though I'd be willing to reconsider that decision if someone does complain. 
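For illustration, the kind of sequence this commit fixes looks like the following (a sketch distilled from the regression test added below; the object names come from that test):

    create type comptype as (r float8, i float8);
    create domain dcomptype as comptype;
    alter domain dcomptype add constraint c1 check ((value).r > 0);
    -- altering the column's type must now rebuild the domain constraint:
    alter type comptype alter attribute r type bigint;
    -- and dropping the column must now report the dependency:
    alter type comptype drop attribute r;  -- fails, c1 depends on column r

Previously, the ALTER TABLE machinery assumed any dependent CHECK constraint belonged to a table, so a domain constraint like c1 was mishandled.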
Patch by me, reviewed by Michael Paquier Discussion: https://postgr.es/m/30656.1509128130@sss.pgh.pa.us --- src/backend/commands/tablecmds.c | 98 +++++++++++++++++++++++----- src/backend/utils/adt/ruleutils.c | 67 ++++++++++++++++--- src/include/nodes/parsenodes.h | 1 + src/test/regress/expected/domain.out | 25 +++++++ src/test/regress/sql/domain.sql | 20 ++++++ 5 files changed, 186 insertions(+), 25 deletions(-) diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 3ab808715b..c902293741 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -425,7 +425,8 @@ static void ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, char *cmd, List **wqueue, LOCKMODE lockmode, bool rewrite); static void RebuildConstraintComment(AlteredTableInfo *tab, int pass, - Oid objid, Relation rel, char *conname); + Oid objid, Relation rel, List *domname, + char *conname); static void TryReuseIndex(Oid oldId, IndexStmt *stmt); static void TryReuseForeignKey(Oid oldId, Constraint *con); static void change_owner_fix_column_acls(Oid relationOid, @@ -3319,6 +3320,7 @@ AlterTableGetLockLevel(List *cmds) case AT_ProcessedConstraint: /* becomes AT_AddConstraint */ case AT_AddConstraintRecurse: /* becomes AT_AddConstraint */ case AT_ReAddConstraint: /* becomes AT_AddConstraint */ + case AT_ReAddDomainConstraint: /* becomes AT_AddConstraint */ if (IsA(cmd->def, Constraint)) { Constraint *con = (Constraint *) cmd->def; @@ -3819,7 +3821,9 @@ ATRewriteCatalogs(List **wqueue, LOCKMODE lockmode) rel = relation_open(tab->relid, NoLock); foreach(lcmd, subcmds) - ATExecCmd(wqueue, tab, rel, (AlterTableCmd *) lfirst(lcmd), lockmode); + ATExecCmd(wqueue, tab, rel, + castNode(AlterTableCmd, lfirst(lcmd)), + lockmode); /* * After the ALTER TYPE pass, do cleanup work (this is not done in @@ -3936,6 +3940,13 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab, Relation rel, ATExecAddConstraint(wqueue, tab, rel, (Constraint *) cmd->def, true, true, lockmode); break; + case AT_ReAddDomainConstraint: /* Re-add pre-existing domain check + * constraint */ + address = + AlterDomainAddConstraint(((AlterDomainStmt *) cmd->def)->typeName, + ((AlterDomainStmt *) cmd->def)->def, + NULL); + break; case AT_ReAddComment: /* Re-add existing comment */ address = CommentObject((CommentStmt *) cmd->def); break; @@ -9616,7 +9627,15 @@ ATPostAlterTypeCleanup(List **wqueue, AlteredTableInfo *tab, LOCKMODE lockmode) if (!HeapTupleIsValid(tup)) /* should not happen */ elog(ERROR, "cache lookup failed for constraint %u", oldId); con = (Form_pg_constraint) GETSTRUCT(tup); - relid = con->conrelid; + if (OidIsValid(con->conrelid)) + relid = con->conrelid; + else + { + /* must be a domain constraint */ + relid = get_typ_typrelid(getBaseType(con->contypid)); + if (!OidIsValid(relid)) + elog(ERROR, "could not identify relation associated with constraint %u", oldId); + } confrelid = con->confrelid; conislocal = con->conislocal; ReleaseSysCache(tup); @@ -9753,7 +9772,7 @@ ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, char *cmd, foreach(lcmd, stmt->cmds) { - AlterTableCmd *cmd = (AlterTableCmd *) lfirst(lcmd); + AlterTableCmd *cmd = castNode(AlterTableCmd, lfirst(lcmd)); if (cmd->subtype == AT_AddIndex) { @@ -9777,13 +9796,14 @@ ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, char *cmd, RebuildConstraintComment(tab, AT_PASS_OLD_INDEX, oldId, - rel, indstmt->idxname); + rel, + NIL, + indstmt->idxname); } else if (cmd->subtype == AT_AddConstraint) { - Constraint *con; + 
Constraint *con = castNode(Constraint, cmd->def); - con = castNode(Constraint, cmd->def); con->old_pktable_oid = refRelId; /* rewriting neither side of a FK */ if (con->contype == CONSTR_FOREIGN && @@ -9797,13 +9817,41 @@ ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, char *cmd, RebuildConstraintComment(tab, AT_PASS_OLD_CONSTR, oldId, - rel, con->conname); + rel, + NIL, + con->conname); } else elog(ERROR, "unexpected statement subtype: %d", (int) cmd->subtype); } } + else if (IsA(stm, AlterDomainStmt)) + { + AlterDomainStmt *stmt = (AlterDomainStmt *) stm; + + if (stmt->subtype == 'C') /* ADD CONSTRAINT */ + { + Constraint *con = castNode(Constraint, stmt->def); + AlterTableCmd *cmd = makeNode(AlterTableCmd); + + cmd->subtype = AT_ReAddDomainConstraint; + cmd->def = (Node *) stmt; + tab->subcmds[AT_PASS_OLD_CONSTR] = + lappend(tab->subcmds[AT_PASS_OLD_CONSTR], cmd); + + /* recreate any comment on the constraint */ + RebuildConstraintComment(tab, + AT_PASS_OLD_CONSTR, + oldId, + NULL, + stmt->typeName, + con->conname); + } + else + elog(ERROR, "unexpected statement subtype: %d", + (int) stmt->subtype); + } else elog(ERROR, "unexpected statement type: %d", (int) nodeTag(stm)); @@ -9813,12 +9861,19 @@ ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, char *cmd, } /* - * Subroutine for ATPostAlterTypeParse() to recreate a comment entry for - * a constraint that is being re-added. + * Subroutine for ATPostAlterTypeParse() to recreate any existing comment + * for a table or domain constraint that is being rebuilt. + * + * objid is the OID of the constraint. + * Pass "rel" for a table constraint, or "domname" (domain's qualified name + * as a string list) for a domain constraint. + * (We could dig that info, as well as the conname, out of the pg_constraint + * entry; but callers already have them so might as well pass them.) 
*/ static void RebuildConstraintComment(AlteredTableInfo *tab, int pass, Oid objid, - Relation rel, char *conname) + Relation rel, List *domname, + char *conname) { CommentStmt *cmd; char *comment_str; @@ -9829,12 +9884,23 @@ RebuildConstraintComment(AlteredTableInfo *tab, int pass, Oid objid, if (comment_str == NULL) return; - /* Build node CommentStmt */ + /* Build CommentStmt node, copying all input data for safety */ cmd = makeNode(CommentStmt); - cmd->objtype = OBJECT_TABCONSTRAINT; - cmd->object = (Node *) list_make3(makeString(get_namespace_name(RelationGetNamespace(rel))), - makeString(pstrdup(RelationGetRelationName(rel))), - makeString(pstrdup(conname))); + if (rel) + { + cmd->objtype = OBJECT_TABCONSTRAINT; + cmd->object = (Node *) + list_make3(makeString(get_namespace_name(RelationGetNamespace(rel))), + makeString(pstrdup(RelationGetRelationName(rel))), + makeString(pstrdup(conname))); + } + else + { + cmd->objtype = OBJECT_DOMCONSTRAINT; + cmd->object = (Node *) + list_make2(makeTypeNameFromNameList(copyObject(domname)), + makeString(pstrdup(conname))); + } cmd->comment = comment_str; /* Append it to list of commands */ diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index b1e70a0d19..cc6cec7877 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -460,6 +460,7 @@ static char *generate_function_name(Oid funcid, int nargs, bool has_variadic, bool *use_variadic_p, ParseExprKind special_exprkind); static char *generate_operator_name(Oid operid, Oid arg1, Oid arg2); +static char *generate_qualified_type_name(Oid typid); static text *string_to_text(char *str); static char *flatten_reloptions(Oid relid); @@ -1867,15 +1868,27 @@ pg_get_constraintdef_worker(Oid constraintId, bool fullCommand, if (fullCommand) { - /* - * Currently, callers want ALTER TABLE (without ONLY) for CHECK - * constraints, and other types of constraints don't inherit anyway so - * it doesn't matter whether we say ONLY or not. Someday we might - * need to let callers specify whether to put ONLY in the command. - */ - appendStringInfo(&buf, "ALTER TABLE %s ADD CONSTRAINT %s ", - generate_qualified_relation_name(conForm->conrelid), - quote_identifier(NameStr(conForm->conname))); + if (OidIsValid(conForm->conrelid)) + { + /* + * Currently, callers want ALTER TABLE (without ONLY) for CHECK + * constraints, and other types of constraints don't inherit + * anyway so it doesn't matter whether we say ONLY or not. Someday + * we might need to let callers specify whether to put ONLY in the + * command. + */ + appendStringInfo(&buf, "ALTER TABLE %s ADD CONSTRAINT %s ", + generate_qualified_relation_name(conForm->conrelid), + quote_identifier(NameStr(conForm->conname))); + } + else + { + /* Must be a domain constraint */ + Assert(OidIsValid(conForm->contypid)); + appendStringInfo(&buf, "ALTER DOMAIN %s ADD CONSTRAINT %s ", + generate_qualified_type_name(conForm->contypid), + quote_identifier(NameStr(conForm->conname))); + } } switch (conForm->contype) @@ -10778,6 +10791,42 @@ generate_operator_name(Oid operid, Oid arg1, Oid arg2) return buf.data; } +/* + * generate_qualified_type_name + * Compute the name to display for a type specified by OID + * + * This is different from format_type_be() in that we unconditionally + * schema-qualify the name. That also means no special syntax for + * SQL-standard type names ... although in current usage, this should + * only get used for domains, so such cases wouldn't occur anyway. 
+ */ +static char * +generate_qualified_type_name(Oid typid) +{ + HeapTuple tp; + Form_pg_type typtup; + char *typname; + char *nspname; + char *result; + + tp = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid)); + if (!HeapTupleIsValid(tp)) + elog(ERROR, "cache lookup failed for type %u", typid); + typtup = (Form_pg_type) GETSTRUCT(tp); + typname = NameStr(typtup->typname); + + nspname = get_namespace_name(typtup->typnamespace); + if (!nspname) + elog(ERROR, "cache lookup failed for namespace %u", + typtup->typnamespace); + + result = quote_qualified_identifier(nspname, typname); + + ReleaseSysCache(tp); + + return result; +} + /* * generate_collation_name * Compute the name to display for a collation specified by OID diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 732e5d6788..06a2b81fb5 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -1713,6 +1713,7 @@ typedef enum AlterTableType AT_AddConstraint, /* add constraint */ AT_AddConstraintRecurse, /* internal to commands/tablecmds.c */ AT_ReAddConstraint, /* internal to commands/tablecmds.c */ + AT_ReAddDomainConstraint, /* internal to commands/tablecmds.c */ AT_AlterConstraint, /* alter constraint */ AT_ValidateConstraint, /* validate constraint */ AT_ValidateConstraintRecurse, /* internal to commands/tablecmds.c */ diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out index f7f3948d43..f4eebb75cf 100644 --- a/src/test/regress/expected/domain.out +++ b/src/test/regress/expected/domain.out @@ -284,6 +284,31 @@ Rules: WHERE (dcomptable.d1).i > 0::double precision drop table dcomptable; +drop type comptype cascade; +NOTICE: drop cascades to type dcomptype +-- check altering and dropping columns used by domain constraints +create type comptype as (r float8, i float8); +create domain dcomptype as comptype; +alter domain dcomptype add constraint c1 check ((value).r > 0); +comment on constraint c1 on domain dcomptype is 'random commentary'; +select row(0,1)::dcomptype; -- fail +ERROR: value for domain dcomptype violates check constraint "c1" +alter type comptype alter attribute r type varchar; -- fail +ERROR: operator does not exist: character varying > double precision +HINT: No operator matches the given name and argument types. You might need to add explicit type casts. +alter type comptype alter attribute r type bigint; +alter type comptype drop attribute r; -- fail +ERROR: cannot drop composite type comptype column r because other objects depend on it +DETAIL: constraint c1 depends on composite type comptype column r +HINT: Use DROP ... CASCADE to drop the dependent objects too. 
+alter type comptype drop attribute i; +select conname, obj_description(oid, 'pg_constraint') from pg_constraint + where contypid = 'dcomptype'::regtype; -- check comment is still there + conname | obj_description +---------+------------------- + c1 | random commentary +(1 row) + drop type comptype cascade; NOTICE: drop cascades to type dcomptype -- Test domains over arrays of composite diff --git a/src/test/regress/sql/domain.sql b/src/test/regress/sql/domain.sql index 5201f008a1..68da27de22 100644 --- a/src/test/regress/sql/domain.sql +++ b/src/test/regress/sql/domain.sql @@ -159,6 +159,26 @@ drop table dcomptable; drop type comptype cascade; +-- check altering and dropping columns used by domain constraints +create type comptype as (r float8, i float8); +create domain dcomptype as comptype; +alter domain dcomptype add constraint c1 check ((value).r > 0); +comment on constraint c1 on domain dcomptype is 'random commentary'; + +select row(0,1)::dcomptype; -- fail + +alter type comptype alter attribute r type varchar; -- fail +alter type comptype alter attribute r type bigint; + +alter type comptype drop attribute r; -- fail +alter type comptype drop attribute i; + +select conname, obj_description(oid, 'pg_constraint') from pg_constraint + where contypid = 'dcomptype'::regtype; -- check comment is still there + +drop type comptype cascade; + + -- Test domains over arrays of composite create type comptype as (r float8, i float8); From ec7ce54204147ccf1a55aaba526ac4b39071f712 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 1 Nov 2017 14:32:05 -0400 Subject: [PATCH 0465/1087] doc: Mention pg_stat_wal_receiver in streaming replication docs Also make the link to pg_stat_replication more precise. Author: Michael Paquier Reviewed-by: Jeff Janes --- doc/src/sgml/high-availability.sgml | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml index aa780d360d..6c0679b0a8 100644 --- a/doc/src/sgml/high-availability.sgml +++ b/doc/src/sgml/high-availability.sgml @@ -890,14 +890,20 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' You can retrieve a list of WAL sender processes via the - - pg_stat_replication view. Large differences between + view. Large differences between pg_current_wal_lsn and the view's sent_lsn field might indicate that the master server is under heavy load, while differences between sent_lsn and pg_last_wal_receive_lsn on the standby might indicate network delay, or that the standby is under heavy load. + + On a hot standby, the status of the WAL receiver process can be retrieved + via the view. A large + difference between pg_last_wal_replay_lsn and the + view's received_lsn indicates that WAL is being + received faster than it can be replayed. + From 7c70996ebf0949b142a99c9445061c3c83ce62b3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 1 Nov 2017 17:38:12 -0400 Subject: [PATCH 0466/1087] Allow bitmap scans to operate as index-only scans when possible. If we don't have to return any columns from heap tuples, and there's no need to recheck qual conditions, and the heap page is all-visible, then we can skip fetching the heap page altogether. Skip prefetching pages too, when possible, on the assumption that the recheck flag will remain the same from one page to the next. While that assumption is hardly bulletproof, it seems like a good bet most of the time, and better than prefetching pages we don't need. 
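As a concrete sketch of the new behavior (adapted from the regression tables used below; the exact plan chosen depends on costs and on visibility-map state):

    vacuum tenk2;                -- set all-visible bits in the visibility map
    set enable_indexscan = off;
    set enable_seqscan = off;
    select count(*) from tenk2 where unique1 = 1;

Since the query returns no heap columns and the bitmap is exact, the bitmap heap scan can answer it from the bitmap and the visibility map alone, storing all-null dummy tuples rather than fetching heap pages. That is also why the stats regression test below now forces enable_bitmapscan off where it wants to count heap fetches.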
This commit installs the executor infrastructure, but doesn't change any planner cost estimates, thus possibly causing bitmap scans to not be chosen in cases where this change renders them the best choice. I (tgl) am not entirely convinced that we need to account for this behavior in the planner, because I think typically the bitmap scan would get chosen anyway if it's the best bet. In any case the submitted patch took way too many shortcuts, resulting in too many clearly-bad choices, to be committable. Alexander Kuzmenkov, reviewed by Alexey Chernyshov, and whacked around rather heavily by me. Discussion: https://postgr.es/m/239a8955-c0fc-f506-026d-c837e86c827b@postgrespro.ru --- src/backend/executor/nodeBitmapHeapscan.c | 165 +++++++++++++++++----- src/backend/optimizer/plan/createplan.c | 9 ++ src/include/nodes/execnodes.h | 10 +- src/test/regress/expected/stats.out | 3 + src/test/regress/sql/stats.sql | 3 + 5 files changed, 150 insertions(+), 40 deletions(-) diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c index 6035b4dfd4..b885f2a3a6 100644 --- a/src/backend/executor/nodeBitmapHeapscan.c +++ b/src/backend/executor/nodeBitmapHeapscan.c @@ -39,6 +39,7 @@ #include "access/relscan.h" #include "access/transam.h" +#include "access/visibilitymap.h" #include "executor/execdebug.h" #include "executor/nodeBitmapHeapscan.h" #include "miscadmin.h" @@ -225,9 +226,31 @@ BitmapHeapNext(BitmapHeapScanState *node) } /* - * Fetch the current heap page and identify candidate tuples. + * We can skip fetching the heap page if we don't need any fields + * from the heap, and the bitmap entries don't need rechecking, + * and all tuples on the page are visible to our transaction. */ - bitgetpage(scan, tbmres); + node->skip_fetch = (node->can_skip_fetch && + !tbmres->recheck && + VM_ALL_VISIBLE(node->ss.ss_currentRelation, + tbmres->blockno, + &node->vmbuffer)); + + if (node->skip_fetch) + { + /* + * The number of tuples on this page is put into + * scan->rs_ntuples; note we don't fill scan->rs_vistuples. + */ + scan->rs_ntuples = tbmres->ntuples; + } + else + { + /* + * Fetch the current heap page and identify candidate tuples. + */ + bitgetpage(scan, tbmres); + } if (tbmres->ntuples >= 0) node->exact_pages++; @@ -289,45 +312,55 @@ BitmapHeapNext(BitmapHeapScanState *node) */ BitmapPrefetch(node, scan); - /* - * Okay to fetch the tuple - */ - targoffset = scan->rs_vistuples[scan->rs_cindex]; - dp = (Page) BufferGetPage(scan->rs_cbuf); - lp = PageGetItemId(dp, targoffset); - Assert(ItemIdIsNormal(lp)); + if (node->skip_fetch) + { + /* + * If we don't have to fetch the tuple, just return nulls. + */ + ExecStoreAllNullTuple(slot); + } + else + { + /* + * Okay to fetch the tuple. + */ + targoffset = scan->rs_vistuples[scan->rs_cindex]; + dp = (Page) BufferGetPage(scan->rs_cbuf); + lp = PageGetItemId(dp, targoffset); + Assert(ItemIdIsNormal(lp)); - scan->rs_ctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lp); - scan->rs_ctup.t_len = ItemIdGetLength(lp); - scan->rs_ctup.t_tableOid = scan->rs_rd->rd_id; - ItemPointerSet(&scan->rs_ctup.t_self, tbmres->blockno, targoffset); + scan->rs_ctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lp); + scan->rs_ctup.t_len = ItemIdGetLength(lp); + scan->rs_ctup.t_tableOid = scan->rs_rd->rd_id; + ItemPointerSet(&scan->rs_ctup.t_self, tbmres->blockno, targoffset); - pgstat_count_heap_fetch(scan->rs_rd); + pgstat_count_heap_fetch(scan->rs_rd); - /* - * Set up the result slot to point to this tuple. 
Note that the slot - * acquires a pin on the buffer. - */ - ExecStoreTuple(&scan->rs_ctup, - slot, - scan->rs_cbuf, - false); - - /* - * If we are using lossy info, we have to recheck the qual conditions - * at every tuple. - */ - if (tbmres->recheck) - { - econtext->ecxt_scantuple = slot; - ResetExprContext(econtext); + /* + * Set up the result slot to point to this tuple. Note that the + * slot acquires a pin on the buffer. + */ + ExecStoreTuple(&scan->rs_ctup, + slot, + scan->rs_cbuf, + false); - if (!ExecQual(node->bitmapqualorig, econtext)) + /* + * If we are using lossy info, we have to recheck the qual + * conditions at every tuple. + */ + if (tbmres->recheck) { - /* Fails recheck, so drop it and loop back for another */ - InstrCountFiltered2(node, 1); - ExecClearTuple(slot); - continue; + econtext->ecxt_scantuple = slot; + ResetExprContext(econtext); + + if (!ExecQual(node->bitmapqualorig, econtext)) + { + /* Fails recheck, so drop it and loop back for another */ + InstrCountFiltered2(node, 1); + ExecClearTuple(slot); + continue; + } } } @@ -582,6 +615,7 @@ BitmapPrefetch(BitmapHeapScanState *node, HeapScanDesc scan) while (node->prefetch_pages < node->prefetch_target) { TBMIterateResult *tbmpre = tbm_iterate(prefetch_iterator); + bool skip_fetch; if (tbmpre == NULL) { @@ -591,7 +625,26 @@ BitmapPrefetch(BitmapHeapScanState *node, HeapScanDesc scan) break; } node->prefetch_pages++; - PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno); + + /* + * If we expect not to have to actually read this heap page, + * skip this prefetch call, but continue to run the prefetch + * logic normally. (Would it be better not to increment + * prefetch_pages?) + * + * This depends on the assumption that the index AM will + * report the same recheck flag for this future heap page as + * it did for the current heap page; which is not a certainty + * but is true in many cases. + */ + skip_fetch = (node->can_skip_fetch && + (node->tbmres ? !node->tbmres->recheck : false) && + VM_ALL_VISIBLE(node->ss.ss_currentRelation, + tbmpre->blockno, + &node->pvmbuffer)); + + if (!skip_fetch) + PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno); } } @@ -608,6 +661,7 @@ BitmapPrefetch(BitmapHeapScanState *node, HeapScanDesc scan) { TBMIterateResult *tbmpre; bool do_prefetch = false; + bool skip_fetch; /* * Recheck under the mutex. If some other process has already @@ -633,7 +687,15 @@ BitmapPrefetch(BitmapHeapScanState *node, HeapScanDesc scan) break; } - PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno); + /* As above, skip prefetch if we expect not to need page */ + skip_fetch = (node->can_skip_fetch && + (node->tbmres ? 
!node->tbmres->recheck : false) && + VM_ALL_VISIBLE(node->ss.ss_currentRelation, + tbmpre->blockno, + &node->pvmbuffer)); + + if (!skip_fetch) + PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno); } } } @@ -687,6 +749,7 @@ ExecReScanBitmapHeapScan(BitmapHeapScanState *node) /* rescan to release any page pin */ heap_rescan(node->ss.ss_currentScanDesc, NULL); + /* release bitmaps and buffers if any */ if (node->tbmiterator) tbm_end_iterate(node->tbmiterator); if (node->prefetch_iterator) @@ -697,6 +760,10 @@ ExecReScanBitmapHeapScan(BitmapHeapScanState *node) tbm_end_shared_iterate(node->shared_prefetch_iterator); if (node->tbm) tbm_free(node->tbm); + if (node->vmbuffer != InvalidBuffer) + ReleaseBuffer(node->vmbuffer); + if (node->pvmbuffer != InvalidBuffer) + ReleaseBuffer(node->pvmbuffer); node->tbm = NULL; node->tbmiterator = NULL; node->tbmres = NULL; @@ -704,6 +771,8 @@ ExecReScanBitmapHeapScan(BitmapHeapScanState *node) node->initialized = false; node->shared_tbmiterator = NULL; node->shared_prefetch_iterator = NULL; + node->vmbuffer = InvalidBuffer; + node->pvmbuffer = InvalidBuffer; ExecScanReScan(&node->ss); @@ -748,7 +817,7 @@ ExecEndBitmapHeapScan(BitmapHeapScanState *node) ExecEndNode(outerPlanState(node)); /* - * release bitmap if any + * release bitmaps and buffers if any */ if (node->tbmiterator) tbm_end_iterate(node->tbmiterator); @@ -760,6 +829,10 @@ ExecEndBitmapHeapScan(BitmapHeapScanState *node) tbm_end_shared_iterate(node->shared_tbmiterator); if (node->shared_prefetch_iterator) tbm_end_shared_iterate(node->shared_prefetch_iterator); + if (node->vmbuffer != InvalidBuffer) + ReleaseBuffer(node->vmbuffer); + if (node->pvmbuffer != InvalidBuffer) + ReleaseBuffer(node->pvmbuffer); /* * close heap scan @@ -805,6 +878,9 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags) scanstate->tbm = NULL; scanstate->tbmiterator = NULL; scanstate->tbmres = NULL; + scanstate->skip_fetch = false; + scanstate->vmbuffer = InvalidBuffer; + scanstate->pvmbuffer = InvalidBuffer; scanstate->exact_pages = 0; scanstate->lossy_pages = 0; scanstate->prefetch_iterator = NULL; @@ -815,8 +891,19 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags) scanstate->pscan_len = 0; scanstate->initialized = false; scanstate->shared_tbmiterator = NULL; + scanstate->shared_prefetch_iterator = NULL; scanstate->pstate = NULL; + /* + * We can potentially skip fetching heap pages if we do not need any + * columns of the table, either for checking non-indexable quals or for + * returning data. This test is a bit simplistic, as it checks the + * stronger condition that there's no qual or return tlist at all. But in + * most cases it's probably not worth working harder than that. + */ + scanstate->can_skip_fetch = (node->scan.plan.qual == NIL && + node->scan.plan.targetlist == NIL); + /* * Miscellaneous initialization * diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index c802d61c39..4b497486a0 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -807,6 +807,15 @@ use_physical_tlist(PlannerInfo *root, Path *path, int flags) if (IsA(path, CustomPath)) return false; + /* + * If a bitmap scan's tlist is empty, keep it as-is. This may allow the + * executor to skip heap page fetches, and in any case, the benefit of + * using a physical tlist instead would be minimal. 
+ */ + if (IsA(path, BitmapHeapPath) && + path->pathtarget->exprs == NIL) + return false; + /* * Can't do it if any system columns or whole-row Vars are requested. * (This could possibly be fixed but would take some fragile assumptions diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 8698c8a50c..d209ec012c 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -507,7 +507,7 @@ typedef struct EState bool *es_epqTupleSet; /* true if EPQ tuple is provided */ bool *es_epqScanDone; /* true if EPQ tuple has been fetched */ - bool es_use_parallel_mode; /* can we use parallel workers? */ + bool es_use_parallel_mode; /* can we use parallel workers? */ /* The per-query shared memory area to use for parallel execution. */ struct dsa_area *es_query_dsa; @@ -1331,6 +1331,10 @@ typedef struct ParallelBitmapHeapState * tbm bitmap obtained from child index scan(s) * tbmiterator iterator for scanning current pages * tbmres current-page data + * can_skip_fetch can we potentially skip tuple fetches in this scan? + * skip_fetch are we skipping tuple fetches on this page? + * vmbuffer buffer for visibility-map lookups + * pvmbuffer ditto, for prefetched pages * exact_pages total number of exact pages retrieved * lossy_pages total number of lossy pages retrieved * prefetch_iterator iterator for prefetching ahead of current page @@ -1351,6 +1355,10 @@ typedef struct BitmapHeapScanState TIDBitmap *tbm; TBMIterator *tbmiterator; TBMIterateResult *tbmres; + bool can_skip_fetch; + bool skip_fetch; + Buffer vmbuffer; + Buffer pvmbuffer; long exact_pages; long lossy_pages; TBMIterator *prefetch_iterator; diff --git a/src/test/regress/expected/stats.out b/src/test/regress/expected/stats.out index fc91f3ce36..991c287b11 100644 --- a/src/test/regress/expected/stats.out +++ b/src/test/regress/expected/stats.out @@ -136,12 +136,15 @@ SELECT count(*) FROM tenk2; (1 row) -- do an indexscan +-- make sure it is not a bitmap scan, which might skip fetching heap tuples +SET enable_bitmapscan TO off; SELECT count(*) FROM tenk2 WHERE unique1 = 1; count ------- 1 (1 row) +RESET enable_bitmapscan; -- We can't just call wait_for_stats() at this point, because we only -- transmit stats when the session goes idle, and we probably didn't -- transmit the last couple of counts yet thanks to the rate-limiting logic diff --git a/src/test/regress/sql/stats.sql b/src/test/regress/sql/stats.sql index 6e882bf3ac..2be7dde834 100644 --- a/src/test/regress/sql/stats.sql +++ b/src/test/regress/sql/stats.sql @@ -138,7 +138,10 @@ ROLLBACK; -- do a seqscan SELECT count(*) FROM tenk2; -- do an indexscan +-- make sure it is not a bitmap scan, which might skip fetching heap tuples +SET enable_bitmapscan TO off; SELECT count(*) FROM tenk2 WHERE unique1 = 1; +RESET enable_bitmapscan; -- We can't just call wait_for_stats() at this point, because we only -- transmit stats when the session goes idle, and we probably didn't From c0e2062d3214f6230a0e1eee9236b202bda9221f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 1 Nov 2017 22:07:14 -0400 Subject: [PATCH 0467/1087] Doc: update URL for check_postgres. Reported by Dan Vianello. 
Discussion: https://postgr.es/m/e6e12f18f70e46848c058084d42fb651@KSTLMEXGP001.CORP.CHARTERCOM.com --- doc/src/sgml/maintenance.sgml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml index 1952bc9178..1a379058a2 100644 --- a/doc/src/sgml/maintenance.sgml +++ b/doc/src/sgml/maintenance.sgml @@ -46,7 +46,7 @@ check_postgres + url="https://bucardo.org/check_postgres/">check_postgres is available for monitoring database health and reporting unusual conditions. check_postgres integrates with Nagios and MRTG, but can be run standalone too. @@ -981,7 +981,7 @@ pg_ctl start | rotatelogs /var/log/pgsql_log 86400 pgBadger is an external project that does sophisticated log file analysis. check_postgres + url="https://bucardo.org/check_postgres/">check_postgres provides Nagios alerts when important messages appear in the log files, as well as detection of many other extraordinary conditions. From 51f4d3ed7ea40998f66e15830aa84009c0e36e11 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Wed, 1 Nov 2017 19:16:14 -0700 Subject: [PATCH 0468/1087] In client support of v10 features, use standard schema handling. Back-patch to v10. This continues the work of commit 080351466c5a669bf35a323bdec9e296330a5dbb. Discussion: https://postgr.es/m/CAKOSWN=ds66zLw2SqkLTM8wbXFgDbc_OdkmT3dJfPT2mE5kipA@mail.gmail.com --- src/bin/pg_dump/pg_dump.c | 22 +++++++++++++++------- src/bin/psql/describe.c | 2 +- 2 files changed, 16 insertions(+), 8 deletions(-) diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index b807df58a8..6d4c28852c 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -3475,12 +3475,15 @@ getPublications(Archive *fout) resetPQExpBuffer(query); + /* Make sure we are in proper schema */ + selectSourceSchema(fout, "pg_catalog"); + /* Get the publications. */ appendPQExpBuffer(query, "SELECT p.tableoid, p.oid, p.pubname, " "(%s p.pubowner) AS rolname, " "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete " - "FROM pg_catalog.pg_publication p", + "FROM pg_publication p", username_subquery); res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK); @@ -3631,6 +3634,9 @@ getPublicationTables(Archive *fout, TableInfo tblinfo[], int numTables) query = createPQExpBuffer(); + /* Make sure we are in proper schema */ + selectSourceSchema(fout, "pg_catalog"); + for (i = 0; i < numTables; i++) { TableInfo *tbinfo = &tblinfo[i]; @@ -3656,8 +3662,7 @@ getPublicationTables(Archive *fout, TableInfo tblinfo[], int numTables) /* Get the publication membership for the table. 
*/ appendPQExpBuffer(query, "SELECT pr.tableoid, pr.oid, p.pubname " - "FROM pg_catalog.pg_publication_rel pr," - " pg_catalog.pg_publication p " + "FROM pg_publication_rel pr, pg_publication p " "WHERE pr.prrelid = '%u'" " AND p.oid = pr.prpubid", tbinfo->dobj.catId.oid); @@ -3783,13 +3788,16 @@ getSubscriptions(Archive *fout) if (dopt->no_subscriptions || fout->remoteVersion < 100000) return; + /* Make sure we are in proper schema */ + selectSourceSchema(fout, "pg_catalog"); + if (!is_superuser(fout)) { int n; res = ExecuteSqlQuery(fout, "SELECT count(*) FROM pg_subscription " - "WHERE subdbid = (SELECT oid FROM pg_catalog.pg_database" + "WHERE subdbid = (SELECT oid FROM pg_database" " WHERE datname = current_database())", PGRES_TUPLES_OK); n = atoi(PQgetvalue(res, 0, 0)); @@ -3809,8 +3817,8 @@ getSubscriptions(Archive *fout) "(%s s.subowner) AS rolname, " " s.subconninfo, s.subslotname, s.subsynccommit, " " s.subpublications " - "FROM pg_catalog.pg_subscription s " - "WHERE s.subdbid = (SELECT oid FROM pg_catalog.pg_database" + "FROM pg_subscription s " + "WHERE s.subdbid = (SELECT oid FROM pg_database" " WHERE datname = current_database())", username_subquery); res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK); @@ -6830,7 +6838,7 @@ getExtendedStatistics(Archive *fout, TableInfo tblinfo[], int numTables) "oid, " "stxname, " "pg_catalog.pg_get_statisticsobjdef(oid) AS stxdef " - "FROM pg_statistic_ext " + "FROM pg_catalog.pg_statistic_ext " "WHERE stxrelid = '%u' " "ORDER BY stxname", tbinfo->dobj.catId.oid); diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index 986172616e..b7b978a361 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -5363,7 +5363,7 @@ describeSubscriptions(const char *pattern, bool verbose) "FROM pg_catalog.pg_subscription\n" "WHERE subdbid = (SELECT oid\n" " FROM pg_catalog.pg_database\n" - " WHERE datname = current_database())"); + " WHERE datname = pg_catalog.current_database())"); processSQLNamePattern(pset.db, &buf, pattern, true, false, NULL, "subname", NULL, From d8c435e1743773eba4e36498479ca6aef28a2d70 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 2 Nov 2017 09:08:03 -0400 Subject: [PATCH 0469/1087] doc: Adjust name in acknowledgments per request of the named person --- doc/src/sgml/release-10.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 906b3814f7..7e95ba8313 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -3248,6 +3248,7 @@ Daisuke Higuchi Damian Quiroga Dan Wood + Dang Minh Huong Daniel Gustafsson Daniel Vérité Daniel Westermann @@ -3311,7 +3312,6 @@ Heikki Linnakangas Henry Boehlert Huan Ruan - Huong Dangminh Ian Barwick Igor Korot Ildus Kurbangaliev From c6764eb3aea63f3f95582bd660785e2b0d4439f9 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Thu, 2 Nov 2017 15:51:05 +0100 Subject: [PATCH 0470/1087] Revert bogus fixes of HOT-freezing bug It turns out we misdiagnosed what the real problem was. Revert the previous changes, because they may have worse consequences going forward. A better fix is forthcoming. The simplistic test case is kept, though disabled. 
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de --- src/backend/access/heap/heapam.c | 109 ++++++-------------------- src/backend/access/heap/pruneheap.c | 4 +- src/backend/commands/vacuumlazy.c | 20 ++--- src/backend/executor/execMain.c | 6 +- src/include/access/heapam.h | 3 - src/test/isolation/isolation_schedule | 1 - 6 files changed, 38 insertions(+), 105 deletions(-) diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index 52dda41cc4..765750b874 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -2074,7 +2074,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer, * broken. */ if (TransactionIdIsValid(prev_xmax) && - !HeapTupleUpdateXmaxMatchesXmin(prev_xmax, heapTuple->t_data)) + !TransactionIdEquals(prev_xmax, + HeapTupleHeaderGetXmin(heapTuple->t_data))) break; /* @@ -2260,7 +2261,7 @@ heap_get_latest_tid(Relation relation, * tuple. Check for XMIN match. */ if (TransactionIdIsValid(priorXmax) && - !HeapTupleUpdateXmaxMatchesXmin(priorXmax, tp.t_data)) + !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(tp.t_data))) { UnlockReleaseBuffer(buffer); break; @@ -2292,50 +2293,6 @@ heap_get_latest_tid(Relation relation, } /* end of loop */ } -/* - * HeapTupleUpdateXmaxMatchesXmin - verify update chain xmax/xmin lineage - * - * Given the new version of a tuple after some update, verify whether the - * given Xmax (corresponding to the previous version) matches the tuple's - * Xmin, taking into account that the Xmin might have been frozen after the - * update. - */ -bool -HeapTupleUpdateXmaxMatchesXmin(TransactionId xmax, HeapTupleHeader htup) -{ - TransactionId xmin = HeapTupleHeaderGetXmin(htup); - - /* - * If the xmax of the old tuple is identical to the xmin of the new one, - * it's a match. - */ - if (TransactionIdEquals(xmax, xmin)) - return true; - - /* - * If the Xmin that was in effect prior to a freeze matches the Xmax, - * it's good too. - */ - if (HeapTupleHeaderXminFrozen(htup) && - TransactionIdEquals(HeapTupleHeaderGetRawXmin(htup), xmax)) - return true; - - /* - * When a tuple is frozen, the original Xmin is lost, but we know it's a - * committed transaction. So unless the Xmax is InvalidXid, we don't know - * for certain that there is a match, but there may be one; and we must - * return true so that a HOT chain that is half-frozen can be walked - * correctly. - * - * We no longer freeze tuples this way, but we must keep this in order to - * interpret pre-pg_upgrade pages correctly. - */ - if (TransactionIdEquals(xmin, FrozenTransactionId) && - TransactionIdIsValid(xmax)) - return true; - - return false; -} /* * UpdateXmaxHintBits - update tuple hint bits after xmax transaction ends @@ -5755,7 +5712,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid, * end of the chain, we're done, so return success. */ if (TransactionIdIsValid(priorXmax) && - !HeapTupleUpdateXmaxMatchesXmin(priorXmax, mytup.t_data)) + !TransactionIdEquals(HeapTupleHeaderGetXmin(mytup.t_data), + priorXmax)) { result = HeapTupleMayBeUpdated; goto out_locked; @@ -6449,23 +6407,14 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, Assert(TransactionIdIsValid(xid)); /* - * The updating transaction cannot possibly be still running, but - * verify whether it has committed, and request to set the - * COMMITTED flag if so. 
(We normally don't see any tuples in - * this state, because they are removed by page pruning before we - * try to freeze the page; but this can happen if the updating - * transaction commits after the page is pruned but before - * HeapTupleSatisfiesVacuum). + * If the xid is older than the cutoff, it has to have aborted, + * otherwise the tuple would have gotten pruned away. */ if (TransactionIdPrecedes(xid, cutoff_xid)) { - if (TransactionIdDidCommit(xid)) - *flags = FRM_MARK_COMMITTED | FRM_RETURN_IS_XID; - else - { - *flags |= FRM_INVALIDATE_XMAX; - xid = InvalidTransactionId; /* not strictly necessary */ - } + Assert(!TransactionIdDidCommit(xid)); + *flags |= FRM_INVALIDATE_XMAX; + xid = InvalidTransactionId; /* not strictly necessary */ } else { @@ -6538,16 +6487,13 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, /* * It's an update; should we keep it? If the transaction is known * aborted or crashed then it's okay to ignore it, otherwise not. + * Note that an updater older than cutoff_xid cannot possibly be + * committed, because HeapTupleSatisfiesVacuum would have returned + * HEAPTUPLE_DEAD and we would not be trying to freeze the tuple. * * As with all tuple visibility routines, it's critical to test * TransactionIdIsInProgress before TransactionIdDidCommit, * because of race conditions explained in detail in tqual.c. - * - * We normally don't see committed updating transactions earlier - * than the cutoff xid, because they are removed by page pruning - * before we try to freeze the page; but it can happen if the - * updating transaction commits after the page is pruned but - * before HeapTupleSatisfiesVacuum. */ if (TransactionIdIsCurrentTransactionId(xid) || TransactionIdIsInProgress(xid)) @@ -6572,6 +6518,13 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, * we can ignore it. */ + /* + * Since the tuple wasn't marked HEAPTUPLE_DEAD by vacuum, the + * update Xid cannot possibly be older than the xid cutoff. + */ + Assert(!TransactionIdIsValid(update_xid) || + !TransactionIdPrecedes(update_xid, cutoff_xid)); + /* * If we determined that it's an Xid corresponding to an update * that must be retained, additionally add it to the list of @@ -6650,10 +6603,7 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, * * It is assumed that the caller has checked the tuple with * HeapTupleSatisfiesVacuum() and determined that it is not HEAPTUPLE_DEAD - * (else we should be removing the tuple, not freezing it). However, note - * that we don't remove HOT tuples even if they are dead, and it'd be incorrect - * to freeze them (because that would make them visible), so we mark them as - * update-committed, and needing further freezing later on. + * (else we should be removing the tuple, not freezing it). * * NB: cutoff_xid *must* be <= the current global xmin, to ensure that any * XID older than it could neither be running nor seen as running by any @@ -6764,22 +6714,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid, else if (TransactionIdIsNormal(xid)) { if (TransactionIdPrecedes(xid, cutoff_xid)) - { - /* - * Must freeze regular XIDs older than the cutoff. We must not - * freeze a HOT-updated tuple, though; doing so would bring it - * back to life. 
- */ - if (HeapTupleHeaderIsHotUpdated(tuple)) - { - frz->t_infomask |= HEAP_XMAX_COMMITTED; - totally_frozen = false; - changed = true; - /* must not freeze */ - } - else - freeze_xmax = true; - } + freeze_xmax = true; else totally_frozen = false; } diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c index 7753ee7b12..52231ac417 100644 --- a/src/backend/access/heap/pruneheap.c +++ b/src/backend/access/heap/pruneheap.c @@ -473,7 +473,7 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum, * Check the tuple XMIN against prior XMAX, if any */ if (TransactionIdIsValid(priorXmax) && - !HeapTupleUpdateXmaxMatchesXmin(priorXmax, htup)) + !TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax)) break; /* @@ -813,7 +813,7 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets) htup = (HeapTupleHeader) PageGetItem(page, lp); if (TransactionIdIsValid(priorXmax) && - !HeapTupleUpdateXmaxMatchesXmin(priorXmax, htup)) + !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup))) break; /* Remember the root line pointer for this item */ diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c index 172d213fdb..6587db77ac 100644 --- a/src/backend/commands/vacuumlazy.c +++ b/src/backend/commands/vacuumlazy.c @@ -2029,17 +2029,17 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats, ItemPointer itemptr) { /* - * The array must never overflow, since we rely on all deletable tuples - * being removed; inability to remove a tuple might cause an old XID to - * persist beyond the freeze limit, which could be disastrous later on. + * The array shouldn't overflow under normal behavior, but perhaps it + * could if we are given a really small maintenance_work_mem. In that + * case, just forget the last few tuples (we'll get 'em next time). */ - if (vacrelstats->num_dead_tuples >= vacrelstats->max_dead_tuples) - elog(ERROR, "dead tuple array overflow"); - - vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr; - vacrelstats->num_dead_tuples++; - pgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES, - vacrelstats->num_dead_tuples); + if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples) + { + vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr; + vacrelstats->num_dead_tuples++; + pgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES, + vacrelstats->num_dead_tuples); + } } /* diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 638a856dc3..2e8aca59a7 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -2595,7 +2595,8 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode, * atomic, and Xmin never changes in an existing tuple, except to * invalid or frozen, and neither of those can match priorXmax.) */ - if (!HeapTupleUpdateXmaxMatchesXmin(priorXmax, tuple.t_data)) + if (!TransactionIdEquals(HeapTupleHeaderGetXmin(tuple.t_data), + priorXmax)) { ReleaseBuffer(buffer); return NULL; @@ -2742,7 +2743,8 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode, /* * As above, if xmin isn't what we're expecting, do nothing. 
*/ - if (!HeapTupleUpdateXmaxMatchesXmin(priorXmax, tuple.t_data)) + if (!TransactionIdEquals(HeapTupleHeaderGetXmin(tuple.t_data), + priorXmax)) { ReleaseBuffer(buffer); return NULL; diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h index 9f4367d704..4e41024e92 100644 --- a/src/include/access/heapam.h +++ b/src/include/access/heapam.h @@ -146,9 +146,6 @@ extern void heap_get_latest_tid(Relation relation, Snapshot snapshot, ItemPointer tid); extern void setLastTid(const ItemPointer tid); -extern bool HeapTupleUpdateXmaxMatchesXmin(TransactionId xmax, - HeapTupleHeader htup); - extern BulkInsertState GetBulkInsertState(void); extern void FreeBulkInsertState(BulkInsertState); extern void ReleaseBulkInsertStatePin(BulkInsertState bistate); diff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule index 7dad3c2316..32c965b2a0 100644 --- a/src/test/isolation/isolation_schedule +++ b/src/test/isolation/isolation_schedule @@ -44,7 +44,6 @@ test: update-locked-tuple test: propagate-lock-delete test: tuplelock-conflict test: tuplelock-update -test: freeze-the-dead test: nowait test: nowait-2 test: nowait-3 From 7b6c07547190f056b0464098bb5a2247129d7aa2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 2 Nov 2017 11:24:12 -0400 Subject: [PATCH 0471/1087] Teach planner to account for HAVING quals in aggregation plan nodes. For some reason, we have never accounted for either the evaluation cost or the selectivity of filter conditions attached to Agg and Group nodes (which, in practice, are always conditions from a HAVING clause). Applying our regular selectivity logic to post-grouping conditions is a bit bogus, but it's surely better than taking the selectivity as 1.0. Perhaps someday the extended-statistics mechanism can be taught to provide statistics that would help us in getting non-default estimates here. Per a gripe from Benjamin Coutu. This is surely a bug fix, but I'm hesitant to back-patch because of the prospect of destabilizing existing plan choices. Given that it took us this long to notice the bug, it's probably not hurting too many people in the field. Discussion: https://postgr.es/m/20968.1509486337@sss.pgh.pa.us --- src/backend/optimizer/path/costsize.c | 46 +++++++++++++++++++++++++- src/backend/optimizer/prep/prepunion.c | 2 ++ src/backend/optimizer/util/pathnode.c | 25 ++++++++++++++ src/include/optimizer/cost.h | 2 ++ 4 files changed, 74 insertions(+), 1 deletion(-) diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index ce32b8a4b9..98fb16e85a 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -1874,6 +1874,7 @@ void cost_agg(Path *path, PlannerInfo *root, AggStrategy aggstrategy, const AggClauseCosts *aggcosts, int numGroupCols, double numGroups, + List *quals, Cost input_startup_cost, Cost input_total_cost, double input_tuples) { @@ -1955,6 +1956,26 @@ cost_agg(Path *path, PlannerInfo *root, output_tuples = numGroups; } + /* + * If there are quals (HAVING quals), account for their cost and + * selectivity. 
+ */ + if (quals) + { + QualCost qual_cost; + + cost_qual_eval(&qual_cost, quals, root); + startup_cost += qual_cost.startup; + total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple; + + output_tuples = clamp_row_est(output_tuples * + clauselist_selectivity(root, + quals, + 0, + JOIN_INNER, + NULL)); + } + path->rows = output_tuples; path->startup_cost = startup_cost; path->total_cost = total_cost; @@ -2040,12 +2061,15 @@ cost_windowagg(Path *path, PlannerInfo *root, void cost_group(Path *path, PlannerInfo *root, int numGroupCols, double numGroups, + List *quals, Cost input_startup_cost, Cost input_total_cost, double input_tuples) { + double output_tuples; Cost startup_cost; Cost total_cost; + output_tuples = numGroups; startup_cost = input_startup_cost; total_cost = input_total_cost; @@ -2055,7 +2079,27 @@ cost_group(Path *path, PlannerInfo *root, */ total_cost += cpu_operator_cost * input_tuples * numGroupCols; - path->rows = numGroups; + /* + * If there are quals (HAVING quals), account for their cost and + * selectivity. + */ + if (quals) + { + QualCost qual_cost; + + cost_qual_eval(&qual_cost, quals, root); + startup_cost += qual_cost.startup; + total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple; + + output_tuples = clamp_row_est(output_tuples * + clauselist_selectivity(root, + quals, + 0, + JOIN_INNER, + NULL)); + } + + path->rows = output_tuples; path->startup_cost = startup_cost; path->total_cost = total_cost; } diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index 1c84a2cb28..f620243ab4 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -977,6 +977,7 @@ choose_hashed_setop(PlannerInfo *root, List *groupClauses, */ cost_agg(&hashed_p, root, AGG_HASHED, NULL, numGroupCols, dNumGroups, + NIL, input_path->startup_cost, input_path->total_cost, input_path->rows); @@ -991,6 +992,7 @@ choose_hashed_setop(PlannerInfo *root, List *groupClauses, input_path->rows, input_path->pathtarget->width, 0.0, work_mem, -1.0); cost_group(&sorted_p, root, numGroupCols, dNumGroups, + NIL, sorted_p.startup_cost, sorted_p.total_cost, input_path->rows); diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 2d491eb0ba..36ec025b05 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -1374,6 +1374,11 @@ create_result_path(PlannerInfo *root, RelOptInfo *rel, pathnode->path.startup_cost = target->cost.startup; pathnode->path.total_cost = target->cost.startup + cpu_tuple_cost + target->cost.per_tuple; + + /* + * Add cost of qual, if any --- but we ignore its selectivity, since our + * rowcount estimate should be 1 no matter what the qual is. 
+ */ if (resconstantqual) { QualCost qual_cost; @@ -1596,6 +1601,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, cost_agg(&agg_path, root, AGG_HASHED, NULL, numCols, pathnode->path.rows, + NIL, subpath->startup_cost, subpath->total_cost, rel->rows); @@ -2592,6 +2598,7 @@ create_group_path(PlannerInfo *root, cost_group(&pathnode->path, root, list_length(groupClause), numGroups, + qual, subpath->startup_cost, subpath->total_cost, subpath->rows); @@ -2709,6 +2716,7 @@ create_agg_path(PlannerInfo *root, cost_agg(&pathnode->path, root, aggstrategy, aggcosts, list_length(groupClause), numGroups, + qual, subpath->startup_cost, subpath->total_cost, subpath->rows); @@ -2817,6 +2825,7 @@ create_groupingsets_path(PlannerInfo *root, agg_costs, numGroupCols, rollup->numGroups, + having_qual, subpath->startup_cost, subpath->total_cost, subpath->rows); @@ -2840,6 +2849,7 @@ create_groupingsets_path(PlannerInfo *root, agg_costs, numGroupCols, rollup->numGroups, + having_qual, 0.0, 0.0, subpath->rows); if (!rollup->is_hashed) @@ -2863,6 +2873,7 @@ create_groupingsets_path(PlannerInfo *root, agg_costs, numGroupCols, rollup->numGroups, + having_qual, sort_path.startup_cost, sort_path.total_cost, sort_path.rows); @@ -2932,6 +2943,19 @@ create_minmaxagg_path(PlannerInfo *root, pathnode->path.total_cost = initplan_cost + target->cost.startup + target->cost.per_tuple + cpu_tuple_cost; + /* + * Add cost of qual, if any --- but we ignore its selectivity, since our + * rowcount estimate should be 1 no matter what the qual is. + */ + if (quals) + { + QualCost qual_cost; + + cost_qual_eval(&qual_cost, quals, root); + pathnode->path.startup_cost += qual_cost.startup; + pathnode->path.total_cost += qual_cost.startup + qual_cost.per_tuple; + } + return pathnode; } @@ -3781,6 +3805,7 @@ reparameterize_pathlist_by_child(PlannerInfo *root, { Path *path = reparameterize_path_by_child(root, lfirst(lc), child_rel); + if (path == NULL) { list_free(result); diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h index 306d923a22..6c2317df39 100644 --- a/src/include/optimizer/cost.h +++ b/src/include/optimizer/cost.h @@ -116,6 +116,7 @@ extern void cost_material(Path *path, extern void cost_agg(Path *path, PlannerInfo *root, AggStrategy aggstrategy, const AggClauseCosts *aggcosts, int numGroupCols, double numGroups, + List *quals, Cost input_startup_cost, Cost input_total_cost, double input_tuples); extern void cost_windowagg(Path *path, PlannerInfo *root, @@ -124,6 +125,7 @@ extern void cost_windowagg(Path *path, PlannerInfo *root, double input_tuples); extern void cost_group(Path *path, PlannerInfo *root, int numGroupCols, double numGroups, + List *quals, Cost input_startup_cost, Cost input_total_cost, double input_tuples); extern void initial_cost_nestloop(PlannerInfo *root, From 0f53934164d37682fd6a6d87d57008f9ca03e3d0 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 2 Nov 2017 12:12:23 -0400 Subject: [PATCH 0472/1087] doc: Clarify pgstattuple privileges information The description has gotten a bit confusing over time, so rewrite the paragraph a bit. 
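The two alternatives the rewritten paragraph describes amount to the following (an illustrative sketch; the role name some_user is made up):

    -- either grant EXECUTE on individual functions ...
    GRANT EXECUTE ON FUNCTION pgstattuple(regclass) TO some_user;
    -- ... or, often preferably, grant membership in the predefined role:
    GRANT pg_stat_scan_tables TO some_user;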
Reported-by: Feike Steenbergen --- doc/src/sgml/pgstattuple.sgml | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/doc/src/sgml/pgstattuple.sgml b/doc/src/sgml/pgstattuple.sgml index 611df9d0bf..04a4423dc5 100644 --- a/doc/src/sgml/pgstattuple.sgml +++ b/doc/src/sgml/pgstattuple.sgml @@ -13,12 +13,14 @@ - As these functions return detailed page-level information, only the superuser - has EXECUTE privileges on them upon installation. After the functions have - been installed, users may issue GRANT commands to change - the privileges on the functions to allow non-superusers to execute them. Members - of the pg_stat_scan_tables role are granted access by default. See - the description of the command for specifics. + Because these functions return detailed page-level information, access is + restricted by default. By default, only the + role pg_stat_scan_tables has EXECUTE + privilege. Superusers of course bypass this restriction. After the + extension has been installed, users may issue GRANT + commands to change the privileges on the functions to allow others to + execute them. However, it might be preferable to add those users to + the pg_stat_scan_tables role instead. From 5eb8bf2d42676523143c1c76ba584bcdcc584f3e Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 2 Nov 2017 12:38:59 -0400 Subject: [PATCH 0473/1087] Remove wal_keep_segments from default configuration in PostgresNode.pm This is only used in the pg_rewind tests, so only set it there. It's better if other tests run closer to a default configuration. Author: Michael Paquier --- src/bin/pg_rewind/RewindTest.pm | 5 +++++ src/test/perl/PostgresNode.pm | 1 - 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm index 76ce295cef..e6041f38a5 100644 --- a/src/bin/pg_rewind/RewindTest.pm +++ b/src/bin/pg_rewind/RewindTest.pm @@ -119,6 +119,11 @@ sub setup_cluster # Initialize master, data checksums are mandatory $node_master = get_new_node('master' . ($extra_name ? "_${extra_name}" : '')); $node_master->init(allows_streaming => 1); + # Set wal_keep_segments to prevent WAL segment recycling after enforced + # checkpoints in the tests. + $node_master->append_conf('postgresql.conf', qq( +wal_keep_segments = 20 +)); } sub start_master diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm index b44f70d27c..93faadc20e 100644 --- a/src/test/perl/PostgresNode.pm +++ b/src/test/perl/PostgresNode.pm @@ -435,7 +435,6 @@ sub init } print $conf "max_wal_senders = 5\n"; print $conf "max_replication_slots = 5\n"; - print $conf "wal_keep_segments = 20\n"; print $conf "max_wal_size = 128MB\n"; print $conf "shared_buffers = 1MB\n"; print $conf "wal_log_hints = on\n"; From 62a16572d5714bfb19e2a273e61218be6682d3df Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 2 Nov 2017 12:54:22 -0400 Subject: [PATCH 0474/1087] Fix corner-case errors in brin_doupdate(). In some cases the BRIN code releases lock on an index page, and later re-acquires lock and tries to check that the tuple it was working on is still there. That check was a couple bricks shy of a load. It didn't consider that the page might have turned into a "revmap" page. (The samepage code path doesn't call brin_getinsertbuffer(), so it isn't protected by the checks for revmap status there.) It also didn't check whether the tuple offset was now off the end of the linepointer array. 
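(For context, a setup along these lines makes the affected code paths easy to reach — an illustrative sketch, not a deterministic reproducer:

    create table brin_tab (a int);
    create index on brin_tab using brin (a) with (pages_per_range = 1);

with several sessions concurrently updating rows that fall in the same block range.)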
Since commit 24992c6db the latter case is pretty common, but at least
in principle it could have occurred before that. The net result is that
concurrent updates of a BRIN index could fail with errors like
"invalid index offnum" or "inconsistent range map". Per report from
Tomas Vondra. Back-patch to 9.5, since this code is substantially the
same in all versions containing BRIN.

Discussion: https://postgr.es/m/10d2b9f9-f427-03b8-8ad9-6af4ecacbee9@2ndquadrant.com
---
 src/backend/access/brin/brin_pageops.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c
index 80f803e438..b0f86f3663 100644
--- a/src/backend/access/brin/brin_pageops.c
+++ b/src/backend/access/brin/brin_pageops.c
@@ -113,9 +113,15 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,

 	/*
 	 * Check that the old tuple wasn't updated concurrently: it might have
-	 * moved someplace else entirely ...
+	 * moved someplace else entirely, and for that matter the whole page
+	 * might've become a revmap page.  Note that in the first two cases
+	 * checked here, the "oldlp" we just calculated is garbage; but
+	 * PageGetItemId() is simple enough that it was safe to do that
+	 * calculation anyway.
 	 */
-	if (!ItemIdIsNormal(oldlp))
+	if (!BRIN_IS_REGULAR_PAGE(oldpage) ||
+		oldoff > PageGetMaxOffsetNumber(oldpage) ||
+		!ItemIdIsNormal(oldlp))
 	{
 		LockBuffer(oldbuf, BUFFER_LOCK_UNLOCK);

From 637a934ab9bac615af6032bb8424056e91988fb8 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 2 Nov 2017 12:56:30 -0400
Subject: [PATCH 0475/1087] Simplify new test suite handling of passwordcheck

This replaces the custom configuration file that enforced the value of
shared_preload_libraries with simply loading the library during the
tests. This removes the restriction against running installcheck on
the tests, and simplifies the makefile relative to what was introduced
in af7211e.
Author: Michael Paquier --- contrib/passwordcheck/Makefile | 3 --- contrib/passwordcheck/expected/passwordcheck.out | 1 + contrib/passwordcheck/passwordcheck.conf | 1 - contrib/passwordcheck/sql/passwordcheck.sql | 2 ++ 4 files changed, 3 insertions(+), 4 deletions(-) delete mode 100644 contrib/passwordcheck/passwordcheck.conf diff --git a/contrib/passwordcheck/Makefile b/contrib/passwordcheck/Makefile index 7edc968b90..4da0b1417c 100644 --- a/contrib/passwordcheck/Makefile +++ b/contrib/passwordcheck/Makefile @@ -8,10 +8,7 @@ PGFILEDESC = "passwordcheck - strengthen user password checks" # PG_CPPFLAGS = -DUSE_CRACKLIB '-DCRACKLIB_DICTPATH="/usr/lib/cracklib_dict"' # SHLIB_LINK = -lcrack -REGRESS_OPTS = --temp-config $(srcdir)/passwordcheck.conf REGRESS = passwordcheck -# disabled because these tests require setting shared_preload_libraries -NO_INSTALLCHECK = 1 ifdef USE_PGXS PG_CONFIG = pg_config diff --git a/contrib/passwordcheck/expected/passwordcheck.out b/contrib/passwordcheck/expected/passwordcheck.out index b3515df3e8..e04cda6bd9 100644 --- a/contrib/passwordcheck/expected/passwordcheck.out +++ b/contrib/passwordcheck/expected/passwordcheck.out @@ -1,3 +1,4 @@ +LOAD 'passwordcheck'; CREATE USER regress_user1; -- ok ALTER USER regress_user1 PASSWORD 'a_nice_long_password'; diff --git a/contrib/passwordcheck/passwordcheck.conf b/contrib/passwordcheck/passwordcheck.conf deleted file mode 100644 index f6604f3d6b..0000000000 --- a/contrib/passwordcheck/passwordcheck.conf +++ /dev/null @@ -1 +0,0 @@ -shared_preload_libraries = 'passwordcheck' diff --git a/contrib/passwordcheck/sql/passwordcheck.sql b/contrib/passwordcheck/sql/passwordcheck.sql index 59c84f522e..d98796ac49 100644 --- a/contrib/passwordcheck/sql/passwordcheck.sql +++ b/contrib/passwordcheck/sql/passwordcheck.sql @@ -1,3 +1,5 @@ +LOAD 'passwordcheck'; + CREATE USER regress_user1; -- ok From 4b0fbfdf81e0a847b31d0b430f25f8660d5652c0 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 2 Nov 2017 13:27:42 -0400 Subject: [PATCH 0476/1087] pg_ctl: Improve message Change message for restarting a server from a directory without a PID file. This accounts for the case where a restart happens after an initdb. The new message indicates that the start has not completed yet and might fail. Author: Jesper Pedersen --- src/bin/pg_ctl/pg_ctl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c index 0893d0d26f..82de7df048 100644 --- a/src/bin/pg_ctl/pg_ctl.c +++ b/src/bin/pg_ctl/pg_ctl.c @@ -965,7 +965,7 @@ do_restart(void) write_stderr(_("%s: PID file \"%s\" does not exist\n"), progname, pid_file); write_stderr(_("Is server running?\n")); - write_stderr(_("starting server anyway\n")); + write_stderr(_("trying to start server anyway\n")); do_start(); return; } From 6976a4f05fc5f9d3b469869e412e0814c8c7ab2a Mon Sep 17 00:00:00 2001 From: Michael Meskes Date: Thu, 2 Nov 2017 20:46:34 +0100 Subject: [PATCH 0477/1087] Fix float parsing in ecpg INFORMIX mode. 
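To make the one-line subject above concrete: in INFORMIX compat mode the
trailing-garbage check used to skip over leftover digits, so malformed
float input could be accepted silently; with this fix, floats always take
the strict path. A standalone sketch of that rule (an illustration only,
not ecpglib code; the input string is made up):

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	const char *pval = "3.14abc";	/* assumed malformed server value */
	char	   *end;
	double		val = strtod(pval, &end);

	/*
	 * Model of the strict check: once the number is consumed, anything
	 * left other than a terminator is an error, not something to skip.
	 */
	if (*end != '\0' && *end != ' ')
		printf("reject %g: trailing garbage \"%s\"\n", val, end);
	else
		printf("accept %g\n", val);
	return 0;
}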
--- src/interfaces/ecpg/ecpglib/data.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/src/interfaces/ecpg/ecpglib/data.c b/src/interfaces/ecpg/ecpglib/data.c index 5375934d16..b8b8e06474 100644 --- a/src/interfaces/ecpg/ecpglib/data.c +++ b/src/interfaces/ecpg/ecpglib/data.c @@ -57,7 +57,7 @@ garbage_left(enum ARRAY_TYPE isarray, char **scan_length, enum COMPAT_MODE compa /* skip invalid characters */ do { (*scan_length)++; - } while (**scan_length != ' ' && **scan_length != '\0' && isdigit(**scan_length)); + } while (isdigit(**scan_length)); return false; } @@ -401,7 +401,8 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno, if (isarray && *scan_length == '"') scan_length++; - if (garbage_left(isarray, &scan_length, compat)) + /* no special INFORMIX treatment for floats */ + if (garbage_left(isarray, &scan_length, ECPG_COMPAT_PGSQL)) { ecpg_raise(lineno, ECPG_FLOAT_FORMAT, ECPG_SQLSTATE_DATATYPE_MISMATCH, pval); From 81e334ce4e6d687d548e60ad8954b7dfd9e568a2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 2 Nov 2017 17:22:08 -0400 Subject: [PATCH 0478/1087] Set the metapage's pd_lower correctly in brin, gin, and spgist indexes. Previously, these index types left the pd_lower field set to the default SizeOfPageHeaderData, which is really a lie because it ought to point past whatever space is being used for metadata. The coding accidentally failed to fail because we never told xlog.c that the metapage is of standard format --- but that's not very good, because it impedes WAL consistency checking, and in some cases prevents compression of full-page images. To fix, ensure that we set pd_lower correctly, not only when creating a metapage but whenever we write it out (these apparently redundant steps are needed to cope with pg_upgrade'd indexes that don't yet contain the right value). This allows telling xlog.c that the page is of standard format. The WAL consistency check mask functions are made to mask only if pd_lower appears valid, which I think is likely unnecessary complication, since any metapage appearing in a v11 WAL stream should contain valid pd_lower. But it doesn't cost much to be paranoid. 
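As a rough standalone model of the pd_lower convention this patch
enforces (the structs below are deliberately simplified stand-ins, not
the real PageHeaderData or any AM's metapage layout):

#include <stdio.h>
#include <stdint.h>

#define MODEL_BLCKSZ 8192

typedef struct
{
	uint16_t	pd_lower;		/* start of the unused "hole" */
	uint16_t	pd_upper;		/* end of the hole */
} ModelPageHeader;

typedef struct
{
	uint32_t	magic;
	uint32_t	version;
} ModelMetaData;

int
main(void)
{
	static char page[MODEL_BLCKSZ];
	ModelPageHeader *hdr = (ModelPageHeader *) page;
	ModelMetaData *meta = (ModelMetaData *) (page + sizeof(ModelPageHeader));

	/*
	 * Point pd_lower just past the metadata, so that the region from
	 * pd_lower to pd_upper is known free space that xlog.c may elide
	 * from a full-page image.
	 */
	hdr->pd_lower = (uint16_t) (((char *) meta + sizeof(ModelMetaData)) - page);
	hdr->pd_upper = MODEL_BLCKSZ;

	printf("compressible hole: [%u, %u)\n", hdr->pd_lower, hdr->pd_upper);
	return 0;
}

The same pattern, with the real types, is what the hunks below apply
both at metapage initialization and at every write of the metapage.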
Amit Langote, reviewed by Michael Paquier and Amit Kapila Discussion: https://postgr.es/m/0d273805-0e9e-ec1a-cb84-d4da400b8f85@lab.ntt.co.jp --- src/backend/access/brin/brin.c | 4 ++-- src/backend/access/brin/brin_pageops.c | 10 +++++++++- src/backend/access/brin/brin_revmap.c | 15 +++++++++++++-- src/backend/access/brin/brin_xlog.c | 21 +++++++++++++++++++-- src/backend/access/gin/ginfast.c | 26 +++++++++++++++++++++++--- src/backend/access/gin/gininsert.c | 4 ++-- src/backend/access/gin/ginutil.c | 20 +++++++++++++++++++- src/backend/access/gin/ginxlog.c | 25 ++++++++++--------------- src/backend/access/spgist/spginsert.c | 4 ++-- src/backend/access/spgist/spgutils.c | 24 ++++++++++++++++++++++-- src/backend/access/spgist/spgxlog.c | 7 ++++--- 11 files changed, 125 insertions(+), 35 deletions(-) diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c index b3aa6d1ced..e6909d7aea 100644 --- a/src/backend/access/brin/brin.c +++ b/src/backend/access/brin/brin.c @@ -685,7 +685,7 @@ brinbuild(Relation heap, Relation index, IndexInfo *indexInfo) XLogBeginInsert(); XLogRegisterData((char *) &xlrec, SizeOfBrinCreateIdx); - XLogRegisterBuffer(0, meta, REGBUF_WILL_INIT); + XLogRegisterBuffer(0, meta, REGBUF_WILL_INIT | REGBUF_STANDARD); recptr = XLogInsert(RM_BRIN_ID, XLOG_BRIN_CREATE_INDEX); @@ -742,7 +742,7 @@ brinbuildempty(Relation index) brin_metapage_init(BufferGetPage(metabuf), BrinGetPagesPerRange(index), BRIN_CURRENT_VERSION); MarkBufferDirty(metabuf); - log_newpage_buffer(metabuf, false); + log_newpage_buffer(metabuf, true); END_CRIT_SECTION(); UnlockReleaseBuffer(metabuf); diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c index b0f86f3663..09db5c6f8f 100644 --- a/src/backend/access/brin/brin_pageops.c +++ b/src/backend/access/brin/brin_pageops.c @@ -476,7 +476,7 @@ brin_page_init(Page page, uint16 type) } /* - * Initialize a new BRIN index' metapage. + * Initialize a new BRIN index's metapage. */ void brin_metapage_init(Page page, BlockNumber pagesPerRange, uint16 version) @@ -497,6 +497,14 @@ brin_metapage_init(Page page, BlockNumber pagesPerRange, uint16 version) * revmap page to be created when the index is. */ metadata->lastRevmapPage = 0; + + /* + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. + */ + ((PageHeader) page)->pd_lower = + ((char *) metadata + sizeof(BrinMetaPageData)) - (char *) page; } /* diff --git a/src/backend/access/brin/brin_revmap.c b/src/backend/access/brin/brin_revmap.c index 22f2076887..5a88574bf6 100644 --- a/src/backend/access/brin/brin_revmap.c +++ b/src/backend/access/brin/brin_revmap.c @@ -615,7 +615,7 @@ revmap_physical_extend(BrinRevmap *revmap) /* * Ok, we have now locked the metapage and the target block. Re-initialize - * it as a revmap page. + * the target block as a revmap page, and update the metapage. */ START_CRIT_SECTION(); @@ -624,6 +624,17 @@ revmap_physical_extend(BrinRevmap *revmap) MarkBufferDirty(buf); metadata->lastRevmapPage = mapBlk; + + /* + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. (We must do this here because pre-v11 versions of PG did not + * set the metapage's pd_lower correctly, so a pg_upgraded index might + * contain the wrong value.) 
+ */ + ((PageHeader) metapage)->pd_lower = + ((char *) metadata + sizeof(BrinMetaPageData)) - (char *) metapage; + MarkBufferDirty(revmap->rm_metaBuf); if (RelationNeedsWAL(revmap->rm_irel)) @@ -635,7 +646,7 @@ revmap_physical_extend(BrinRevmap *revmap) XLogBeginInsert(); XLogRegisterData((char *) &xlrec, SizeOfBrinRevmapExtend); - XLogRegisterBuffer(0, revmap->rm_metaBuf, 0); + XLogRegisterBuffer(0, revmap->rm_metaBuf, REGBUF_STANDARD); XLogRegisterBuffer(1, buf, REGBUF_WILL_INIT); diff --git a/src/backend/access/brin/brin_xlog.c b/src/backend/access/brin/brin_xlog.c index 60daa54a95..645e516a52 100644 --- a/src/backend/access/brin/brin_xlog.c +++ b/src/backend/access/brin/brin_xlog.c @@ -234,6 +234,17 @@ brin_xlog_revmap_extend(XLogReaderState *record) metadata->lastRevmapPage = xlrec->targetBlk; PageSetLSN(metapg, lsn); + + /* + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c + * compresses the page. (We must do this here because pre-v11 + * versions of PG did not set the metapage's pd_lower correctly, so a + * pg_upgraded index might contain the wrong value.) + */ + ((PageHeader) metapg)->pd_lower = + ((char *) metadata + sizeof(BrinMetaPageData)) - (char *) metapg; + MarkBufferDirty(metabuf); } @@ -331,14 +342,20 @@ void brin_mask(char *pagedata, BlockNumber blkno) { Page page = (Page) pagedata; + PageHeader pagehdr = (PageHeader) page; mask_page_lsn_and_checksum(page); mask_page_hint_bits(page); - if (BRIN_IS_REGULAR_PAGE(page)) + /* + * Regular brin pages contain unused space which needs to be masked. + * Similarly for meta pages, but mask it only if pd_lower appears to have + * been set correctly. + */ + if (BRIN_IS_REGULAR_PAGE(page) || + (BRIN_IS_META_PAGE(page) && pagehdr->pd_lower > SizeOfPageHeaderData)) { - /* Regular brin pages contain unused space which needs to be masked. */ mask_unused_space(page); } } diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c index 59e435465a..00348891a2 100644 --- a/src/backend/access/gin/ginfast.c +++ b/src/backend/access/gin/ginfast.c @@ -396,6 +396,16 @@ ginHeapTupleFastInsert(GinState *ginstate, GinTupleCollector *collector) MarkBufferDirty(buffer); } + /* + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. (We must do this here because pre-v11 versions of PG did not + * set the metapage's pd_lower correctly, so a pg_upgraded index might + * contain the wrong value.) + */ + ((PageHeader) metapage)->pd_lower = + ((char *) metadata + sizeof(GinMetaPageData)) - (char *) metapage; + /* * Write metabuffer, make xlog entry */ @@ -407,7 +417,7 @@ ginHeapTupleFastInsert(GinState *ginstate, GinTupleCollector *collector) memcpy(&data.metadata, metadata, sizeof(GinMetaPageData)); - XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT); + XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT | REGBUF_STANDARD); XLogRegisterData((char *) &data, sizeof(ginxlogUpdateMeta)); recptr = XLogInsert(RM_GIN_ID, XLOG_GIN_UPDATE_META_PAGE); @@ -572,6 +582,16 @@ shiftList(Relation index, Buffer metabuffer, BlockNumber newHead, metadata->nPendingHeapTuples = 0; } + /* + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c + * compresses the page. 
(We must do this here because pre-v11 + * versions of PG did not set the metapage's pd_lower correctly, so a + * pg_upgraded index might contain the wrong value.) + */ + ((PageHeader) metapage)->pd_lower = + ((char *) metadata + sizeof(GinMetaPageData)) - (char *) metapage; + MarkBufferDirty(metabuffer); for (i = 0; i < data.ndeleted; i++) @@ -586,7 +606,8 @@ shiftList(Relation index, Buffer metabuffer, BlockNumber newHead, XLogRecPtr recptr; XLogBeginInsert(); - XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT); + XLogRegisterBuffer(0, metabuffer, + REGBUF_WILL_INIT | REGBUF_STANDARD); for (i = 0; i < data.ndeleted; i++) XLogRegisterBuffer(i + 1, buffers[i], REGBUF_WILL_INIT); @@ -968,7 +989,6 @@ ginInsertCleanup(GinState *ginstate, bool full_clean, if (fsm_vac && fill_fsm) IndexFreeSpaceMapVacuum(index); - /* Clean up temporary space */ MemoryContextSwitchTo(oldCtx); MemoryContextDelete(opCtx); diff --git a/src/backend/access/gin/gininsert.c b/src/backend/access/gin/gininsert.c index 5378011f50..c9aa4ee147 100644 --- a/src/backend/access/gin/gininsert.c +++ b/src/backend/access/gin/gininsert.c @@ -348,7 +348,7 @@ ginbuild(Relation heap, Relation index, IndexInfo *indexInfo) Page page; XLogBeginInsert(); - XLogRegisterBuffer(0, MetaBuffer, REGBUF_WILL_INIT); + XLogRegisterBuffer(0, MetaBuffer, REGBUF_WILL_INIT | REGBUF_STANDARD); XLogRegisterBuffer(1, RootBuffer, REGBUF_WILL_INIT); recptr = XLogInsert(RM_GIN_ID, XLOG_GIN_CREATE_INDEX); @@ -447,7 +447,7 @@ ginbuildempty(Relation index) START_CRIT_SECTION(); GinInitMetabuffer(MetaBuffer); MarkBufferDirty(MetaBuffer); - log_newpage_buffer(MetaBuffer, false); + log_newpage_buffer(MetaBuffer, true); GinInitBuffer(RootBuffer, GIN_LEAF); MarkBufferDirty(RootBuffer); log_newpage_buffer(RootBuffer, false); diff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c index 136ea27718..d9c6483437 100644 --- a/src/backend/access/gin/ginutil.c +++ b/src/backend/access/gin/ginutil.c @@ -374,6 +374,14 @@ GinInitMetabuffer(Buffer b) metadata->nDataPages = 0; metadata->nEntries = 0; metadata->ginVersion = GIN_CURRENT_VERSION; + + /* + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. + */ + ((PageHeader) page)->pd_lower = + ((char *) metadata + sizeof(GinMetaPageData)) - (char *) page; } /* @@ -676,6 +684,16 @@ ginUpdateStats(Relation index, const GinStatsData *stats) metadata->nDataPages = stats->nDataPages; metadata->nEntries = stats->nEntries; + /* + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. (We must do this here because pre-v11 versions of PG did not + * set the metapage's pd_lower correctly, so a pg_upgraded index might + * contain the wrong value.) 
+ */ + ((PageHeader) metapage)->pd_lower = + ((char *) metadata + sizeof(GinMetaPageData)) - (char *) metapage; + MarkBufferDirty(metabuffer); if (RelationNeedsWAL(index)) @@ -690,7 +708,7 @@ ginUpdateStats(Relation index, const GinStatsData *stats) XLogBeginInsert(); XLogRegisterData((char *) &data, sizeof(ginxlogUpdateMeta)); - XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT); + XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT | REGBUF_STANDARD); recptr = XLogInsert(RM_GIN_ID, XLOG_GIN_UPDATE_META_PAGE); PageSetLSN(metapage, recptr); diff --git a/src/backend/access/gin/ginxlog.c b/src/backend/access/gin/ginxlog.c index 92cafe950b..1bf3f0a88a 100644 --- a/src/backend/access/gin/ginxlog.c +++ b/src/backend/access/gin/ginxlog.c @@ -514,7 +514,7 @@ ginRedoUpdateMetapage(XLogReaderState *record) Assert(BufferGetBlockNumber(metabuffer) == GIN_METAPAGE_BLKNO); metapage = BufferGetPage(metabuffer); - GinInitPage(metapage, GIN_META, BufferGetPageSize(metabuffer)); + GinInitMetabuffer(metabuffer); memcpy(GinPageGetMeta(metapage), &data->metadata, sizeof(GinMetaPageData)); PageSetLSN(metapage, lsn); MarkBufferDirty(metabuffer); @@ -656,7 +656,7 @@ ginRedoDeleteListPages(XLogReaderState *record) Assert(BufferGetBlockNumber(metabuffer) == GIN_METAPAGE_BLKNO); metapage = BufferGetPage(metabuffer); - GinInitPage(metapage, GIN_META, BufferGetPageSize(metabuffer)); + GinInitMetabuffer(metabuffer); memcpy(GinPageGetMeta(metapage), &data->metadata, sizeof(GinMetaPageData)); PageSetLSN(metapage, lsn); @@ -768,6 +768,7 @@ void gin_mask(char *pagedata, BlockNumber blkno) { Page page = (Page) pagedata; + PageHeader pagehdr = (PageHeader) page; GinPageOpaque opaque; mask_page_lsn_and_checksum(page); @@ -776,18 +777,12 @@ gin_mask(char *pagedata, BlockNumber blkno) mask_page_hint_bits(page); /* - * GIN metapage doesn't use pd_lower/pd_upper. Other page types do. Hence, - * we need to apply masking for those pages. + * For a GIN_DELETED page, the page is initialized to empty. Hence, mask + * the whole page content. For other pages, mask the hole if pd_lower + * appears to have been set correctly. */ - if (opaque->flags != GIN_META) - { - /* - * For GIN_DELETED page, the page is initialized to empty. Hence, mask - * the page content. - */ - if (opaque->flags & GIN_DELETED) - mask_page_content(page); - else - mask_unused_space(page); - } + if (opaque->flags & GIN_DELETED) + mask_page_content(page); + else if (pagehdr->pd_lower > SizeOfPageHeaderData) + mask_unused_space(page); } diff --git a/src/backend/access/spgist/spginsert.c b/src/backend/access/spgist/spginsert.c index e4b2c29b0e..80b82e1602 100644 --- a/src/backend/access/spgist/spginsert.c +++ b/src/backend/access/spgist/spginsert.c @@ -110,7 +110,7 @@ spgbuild(Relation heap, Relation index, IndexInfo *indexInfo) * Replay will re-initialize the pages, so don't take full pages * images. No other data to log. */ - XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT); + XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT | REGBUF_STANDARD); XLogRegisterBuffer(1, rootbuffer, REGBUF_WILL_INIT | REGBUF_STANDARD); XLogRegisterBuffer(2, nullbuffer, REGBUF_WILL_INIT | REGBUF_STANDARD); @@ -173,7 +173,7 @@ spgbuildempty(Relation index) smgrwrite(index->rd_smgr, INIT_FORKNUM, SPGIST_METAPAGE_BLKNO, (char *) page, true); log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM, - SPGIST_METAPAGE_BLKNO, page, false); + SPGIST_METAPAGE_BLKNO, page, true); /* Likewise for the root page. 
*/ SpGistInitPage(page, SPGIST_LEAF); diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c index 22f64b0103..bd5301f383 100644 --- a/src/backend/access/spgist/spgutils.c +++ b/src/backend/access/spgist/spgutils.c @@ -256,15 +256,27 @@ SpGistUpdateMetaPage(Relation index) if (cache != NULL) { Buffer metabuffer; - SpGistMetaPageData *metadata; metabuffer = ReadBuffer(index, SPGIST_METAPAGE_BLKNO); if (ConditionalLockBuffer(metabuffer)) { - metadata = SpGistPageGetMeta(BufferGetPage(metabuffer)); + Page metapage = BufferGetPage(metabuffer); + SpGistMetaPageData *metadata = SpGistPageGetMeta(metapage); + metadata->lastUsedPages = cache->lastUsedPages; + /* + * Set pd_lower just past the end of the metadata. This is + * essential, because without doing so, metadata will be lost if + * xlog.c compresses the page. (We must do this here because + * pre-v11 versions of PG did not set the metapage's pd_lower + * correctly, so a pg_upgraded index might contain the wrong + * value.) + */ + ((PageHeader) metapage)->pd_lower = + ((char *) metadata + sizeof(SpGistMetaPageData)) - (char *) metapage; + MarkBufferDirty(metabuffer); UnlockReleaseBuffer(metabuffer); } @@ -534,6 +546,14 @@ SpGistInitMetapage(Page page) /* initialize last-used-page cache to empty */ for (i = 0; i < SPGIST_CACHED_PAGES; i++) metadata->lastUsedPages.cachedPage[i].blkno = InvalidBlockNumber; + + /* + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. + */ + ((PageHeader) page)->pd_lower = + ((char *) metadata + sizeof(SpGistMetaPageData)) - (char *) page; } /* diff --git a/src/backend/access/spgist/spgxlog.c b/src/backend/access/spgist/spgxlog.c index 87def79ee5..b2da415169 100644 --- a/src/backend/access/spgist/spgxlog.c +++ b/src/backend/access/spgist/spgxlog.c @@ -1033,15 +1033,16 @@ void spg_mask(char *pagedata, BlockNumber blkno) { Page page = (Page) pagedata; + PageHeader pagehdr = (PageHeader) page; mask_page_lsn_and_checksum(page); mask_page_hint_bits(page); /* - * Any SpGist page other than meta contains unused space which needs to be - * masked. + * Mask the unused space, but only if the page's pd_lower appears to have + * been set correctly. */ - if (!SpGistPageIsMeta(page)) + if (pagehdr->pd_lower > SizeOfPageHeaderData) mask_unused_space(page); } From f987f83de20afe3ba78be1e15db5dffe7488faa7 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 2 Nov 2017 18:32:14 -0400 Subject: [PATCH 0479/1087] pgbench: replace run-time string comparisons with an enum identifier. Minor refactoring that should yield some performance benefit. 
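The refactoring reduces to classifying the command word once at parse
time and dispatching on an integer afterwards. A minimal standalone
sketch of the idea (plain strcasecmp stands in for pg_strcasecmp, and
the enum is trimmed to the commands handled by this patch):

#include <stdio.h>
#include <strings.h>

typedef enum
{
	META_NONE, META_SET, META_SETSHELL, META_SHELL, META_SLEEP
} MetaCommand;

static MetaCommand
classify(const char *cmd)
{
	if (cmd == NULL)
		return META_NONE;
	if (strcasecmp(cmd, "set") == 0)
		return META_SET;
	if (strcasecmp(cmd, "setshell") == 0)
		return META_SETSHELL;
	if (strcasecmp(cmd, "shell") == 0)
		return META_SHELL;
	if (strcasecmp(cmd, "sleep") == 0)
		return META_SLEEP;
	return META_NONE;
}

int
main(void)
{
	/* the string comparison happens once, when the script is parsed ... */
	MetaCommand meta = classify("sleep");

	/* ... so the per-transaction hot path compares integers instead */
	if (meta == META_SLEEP)
		puts("dispatching \\sleep");
	return 0;
}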
Fabien Coelho, reviewed by Aleksandr Parfenov Discussion: https://postgr.es/m/alpine.DEB.2.20.1709230538130.4999@lancre --- src/bin/pgbench/pgbench.c | 62 ++++++++++++++++++++++++++++++++------- 1 file changed, 51 insertions(+), 11 deletions(-) diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index 5d8a01c72c..d4a60351a8 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -362,6 +362,15 @@ typedef struct #define META_COMMAND 2 #define MAX_ARGS 10 +typedef enum MetaCommand +{ + META_NONE, /* not a known meta-command */ + META_SET, /* \set */ + META_SETSHELL, /* \setshell */ + META_SHELL, /* \shell */ + META_SLEEP /* \sleep */ +} MetaCommand; + typedef enum QueryMode { QUERY_SIMPLE, /* simple query */ @@ -378,6 +387,7 @@ typedef struct char *line; /* text of command line */ int command_num; /* unique index of this Command struct */ int type; /* command type (SQL_COMMAND or META_COMMAND) */ + MetaCommand meta; /* meta command identifier, or META_NONE */ int argc; /* number of command words */ char *argv[MAX_ARGS]; /* command word list */ PgBenchExpr *expr; /* parsed expression, if needed */ @@ -1721,6 +1731,29 @@ evaluateExpr(TState *thread, CState *st, PgBenchExpr *expr, PgBenchValue *retval } } +/* + * Convert command name to meta-command enum identifier + */ +static MetaCommand +getMetaCommand(const char *cmd) +{ + MetaCommand mc; + + if (cmd == NULL) + mc = META_NONE; + else if (pg_strcasecmp(cmd, "set") == 0) + mc = META_SET; + else if (pg_strcasecmp(cmd, "setshell") == 0) + mc = META_SETSHELL; + else if (pg_strcasecmp(cmd, "shell") == 0) + mc = META_SHELL; + else if (pg_strcasecmp(cmd, "sleep") == 0) + mc = META_SLEEP; + else + mc = META_NONE; + return mc; +} + /* * Run a shell command. The result is assigned to the variable if not NULL. * Return true if succeeded, or false on error. @@ -2214,7 +2247,7 @@ doCustom(TState *thread, CState *st, StatsData *agg) fprintf(stderr, "\n"); } - if (pg_strcasecmp(argv[0], "sleep") == 0) + if (command->meta == META_SLEEP) { /* * A \sleep doesn't execute anything, we just get the @@ -2240,7 +2273,7 @@ doCustom(TState *thread, CState *st, StatsData *agg) } else { - if (pg_strcasecmp(argv[0], "set") == 0) + if (command->meta == META_SET) { PgBenchExpr *expr = command->expr; PgBenchValue result; @@ -2259,7 +2292,7 @@ doCustom(TState *thread, CState *st, StatsData *agg) break; } } - else if (pg_strcasecmp(argv[0], "setshell") == 0) + else if (command->meta == META_SETSHELL) { bool ret = runShellCommand(st, argv[1], argv + 2, argc - 2); @@ -2279,7 +2312,7 @@ doCustom(TState *thread, CState *st, StatsData *agg) /* succeeded */ } } - else if (pg_strcasecmp(argv[0], "shell") == 0) + else if (command->meta == META_SHELL) { bool ret = runShellCommand(st, NULL, argv + 1, argc - 1); @@ -3023,6 +3056,7 @@ process_sql_command(PQExpBuffer buf, const char *source) my_command = (Command *) pg_malloc0(sizeof(Command)); my_command->command_num = num_commands++; my_command->type = SQL_COMMAND; + my_command->meta = META_NONE; initSimpleStats(&my_command->stats); /* @@ -3091,7 +3125,10 @@ process_backslash_command(PsqlScanState sstate, const char *source) my_command->argv[j++] = pg_strdup(word_buf.data); my_command->argc++; - if (pg_strcasecmp(my_command->argv[0], "set") == 0) + /* ... and convert it to enum form */ + my_command->meta = getMetaCommand(my_command->argv[0]); + + if (my_command->meta == META_SET) { /* For \set, collect var name, then lex the expression. 
*/ yyscan_t yyscanner; @@ -3146,7 +3183,7 @@ process_backslash_command(PsqlScanState sstate, const char *source) expr_scanner_offset(sstate), true); - if (pg_strcasecmp(my_command->argv[0], "sleep") == 0) + if (my_command->meta == META_SLEEP) { if (my_command->argc < 2) syntax_error(source, lineno, my_command->line, my_command->argv[0], @@ -3187,13 +3224,13 @@ process_backslash_command(PsqlScanState sstate, const char *source) my_command->argv[2], offsets[2] - start_offset); } } - else if (pg_strcasecmp(my_command->argv[0], "setshell") == 0) + else if (my_command->meta == META_SETSHELL) { if (my_command->argc < 3) syntax_error(source, lineno, my_command->line, my_command->argv[0], "missing argument", NULL, -1); } - else if (pg_strcasecmp(my_command->argv[0], "shell") == 0) + else if (my_command->meta == META_SHELL) { if (my_command->argc < 2) syntax_error(source, lineno, my_command->line, my_command->argv[0], @@ -3201,6 +3238,7 @@ process_backslash_command(PsqlScanState sstate, const char *source) } else { + /* my_command->meta == META_NONE */ syntax_error(source, lineno, my_command->line, my_command->argv[0], "invalid command", NULL, -1); } @@ -4592,12 +4630,12 @@ threadRun(void *arg) timeout.tv_usec = min_usec % 1000000; nsocks = select(maxsock + 1, &input_mask, NULL, NULL, &timeout); } - else /* nothing active, simple sleep */ + else /* nothing active, simple sleep */ { pg_usleep(min_usec); } } - else /* no explicit delay, select without timeout */ + else /* no explicit delay, select without timeout */ { nsocks = select(maxsock + 1, &input_mask, NULL, NULL, NULL); } @@ -4614,8 +4652,10 @@ threadRun(void *arg) goto done; } } - else /* min_usec == 0, i.e. something needs to be executed */ + else { + /* min_usec == 0, i.e. something needs to be executed */ + /* If we didn't call select(), don't try to read any data */ FD_ZERO(&input_mask); } From 7164991caf3858cbd75e5efb7943e11a51ad04f9 Mon Sep 17 00:00:00 2001 From: Michael Meskes Date: Fri, 3 Nov 2017 11:14:30 +0100 Subject: [PATCH 0480/1087] Improve error message for incorrect number inputs in libecpg. --- src/interfaces/ecpg/ecpglib/data.c | 1 - 1 file changed, 1 deletion(-) diff --git a/src/interfaces/ecpg/ecpglib/data.c b/src/interfaces/ecpg/ecpglib/data.c index b8b8e06474..82229ecfbf 100644 --- a/src/interfaces/ecpg/ecpglib/data.c +++ b/src/interfaces/ecpg/ecpglib/data.c @@ -58,7 +58,6 @@ garbage_left(enum ARRAY_TYPE isarray, char **scan_length, enum COMPAT_MODE compa do { (*scan_length)++; } while (isdigit(**scan_length)); - return false; } if (**scan_length != ' ' && **scan_length != '\0') From ec42a1dcb30de235b252f6d4972f2f2bdb2e47f2 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 3 Nov 2017 17:23:13 +0100 Subject: [PATCH 0481/1087] Fix BRIN summarization concurrent with extension MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit If a process is extending a table concurrently with some BRIN summarization process, it is possible for the latter to miss pages added by the former because the number of pages is computed ahead of time. Fix by determining a fresh relation size after inserting the placeholder tuple: any process that further extends the table concurrently will update the placeholder tuple, while previous pages will be processed by the heap scan. 
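The recheck-and-clamp arithmetic can be sketched in isolation like this
(toy block counts; the typedef and macro are local stand-ins for the
backend's BlockNumber and Min):

#include <stdio.h>
#include <stdint.h>

typedef uint32_t BlockNumberModel;

#define ModelMin(a, b)	((a) < (b) ? (a) : (b))

int
main(void)
{
	BlockNumberModel pagesPerRange = 128;
	BlockNumberModel heapBlk = 256;		/* start of the range to summarize */
	BlockNumberModel staleNumBlks = 300;	/* size taken before the placeholder */
	BlockNumberModel freshNumBlks = 320;	/* size re-read after inserting it */
	BlockNumberModel scanNumBlks;

	if (heapBlk + pagesPerRange > staleNumBlks)
		/* apparently the final, partial range: re-read the size and clamp */
		scanNumBlks = ModelMin(freshNumBlks - heapBlk, pagesPerRange);
	else
		scanNumBlks = pagesPerRange;

	printf("scan %u blocks starting at block %u\n", scanNumBlks, heapBlk);
	return 0;
}

Pages added after the fresh size is taken are still covered, because
their inserters see, and update, the placeholder tuple.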
Reported-by: Tomas Vondra Reviewed-by: Tom Lane Author: Álvaro Herrera Discussion: https://postgr.es/m/083d996a-4a8a-0e13-800a-851dd09ad8cc@2ndquadrant.com Backpatch-to: 9.5 --- src/backend/access/brin/brin.c | 90 +++++++++++++++++++++++----------- 1 file changed, 61 insertions(+), 29 deletions(-) diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c index e6909d7aea..8636620f64 100644 --- a/src/backend/access/brin/brin.c +++ b/src/backend/access/brin/brin.c @@ -67,7 +67,7 @@ static BrinBuildState *initialize_brin_buildstate(Relation idxRel, BrinRevmap *revmap, BlockNumber pagesPerRange); static void terminate_brin_buildstate(BrinBuildState *state); static void brinsummarize(Relation index, Relation heapRel, BlockNumber pageRange, - double *numSummarized, double *numExisting); + bool include_partial, double *numSummarized, double *numExisting); static void form_and_insert_tuple(BrinBuildState *state); static void union_tuples(BrinDesc *bdesc, BrinMemTuple *a, BrinTuple *b); @@ -791,7 +791,7 @@ brinvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats) brin_vacuum_scan(info->index, info->strategy); - brinsummarize(info->index, heapRel, BRIN_ALL_BLOCKRANGES, + brinsummarize(info->index, heapRel, BRIN_ALL_BLOCKRANGES, false, &stats->num_index_tuples, &stats->num_index_tuples); heap_close(heapRel, AccessShareLock); @@ -909,7 +909,7 @@ brin_summarize_range(PG_FUNCTION_ARGS) RelationGetRelationName(indexRel)))); /* OK, do it */ - brinsummarize(indexRel, heapRel, heapBlk, &numSummarized, NULL); + brinsummarize(indexRel, heapRel, heapBlk, true, &numSummarized, NULL); relation_close(indexRel, ShareUpdateExclusiveLock); relation_close(heapRel, ShareUpdateExclusiveLock); @@ -1129,7 +1129,8 @@ terminate_brin_buildstate(BrinBuildState *state) } /* - * Summarize the given page range of the given index. + * On the given BRIN index, summarize the heap page range that corresponds + * to the heap block number given. * * This routine can run in parallel with insertions into the heap. To avoid * missing those values from the summary tuple, we first insert a placeholder @@ -1139,6 +1140,12 @@ terminate_brin_buildstate(BrinBuildState *state) * update of the index value happens in a loop, so that if somebody updates * the placeholder tuple after we read it, we detect the case and try again. * This ensures that the concurrently inserted tuples are not lost. + * + * A further corner case is this routine being asked to summarize the partial + * range at the end of the table. heapNumBlocks is the (possibly outdated) + * table size; if we notice that the requested range lies beyond that size, + * we re-compute the table size after inserting the placeholder tuple, to + * avoid missing pages that were appended recently. */ static void summarize_range(IndexInfo *indexInfo, BrinBuildState *state, Relation heapRel, @@ -1159,6 +1166,33 @@ summarize_range(IndexInfo *indexInfo, BrinBuildState *state, Relation heapRel, state->bs_rmAccess, &phbuf, heapBlk, phtup, phsz); + /* + * Compute range end. We hold ShareUpdateExclusive lock on table, so it + * cannot shrink concurrently (but it can grow). + */ + Assert(heapBlk % state->bs_pagesPerRange == 0); + if (heapBlk + state->bs_pagesPerRange > heapNumBlks) + { + /* + * If we're asked to scan what we believe to be the final range on the + * table (i.e. a range that might be partial) we need to recompute our + * idea of what the latest page is after inserting the placeholder + * tuple. 
Anyone that grows the table later will update the + * placeholder tuple, so it doesn't matter that we won't scan these + * pages ourselves. Careful: the table might have been extended + * beyond the current range, so clamp our result. + * + * Fortunately, this should occur infrequently. + */ + scanNumBlks = Min(RelationGetNumberOfBlocks(heapRel) - heapBlk, + state->bs_pagesPerRange); + } + else + { + /* Easy case: range is known to be complete */ + scanNumBlks = state->bs_pagesPerRange; + } + /* * Execute the partial heap scan covering the heap blocks in the specified * page range, summarizing the heap tuples in it. This scan stops just @@ -1169,8 +1203,6 @@ summarize_range(IndexInfo *indexInfo, BrinBuildState *state, Relation heapRel, * by transactions that are still in progress, among other corner cases. */ state->bs_currRangeStart = heapBlk; - scanNumBlks = heapBlk + state->bs_pagesPerRange <= heapNumBlks ? - state->bs_pagesPerRange : heapNumBlks - heapBlk; IndexBuildHeapRangeScan(heapRel, state->bs_irel, indexInfo, false, true, heapBlk, scanNumBlks, brinbuildCallback, (void *) state); @@ -1234,6 +1266,8 @@ summarize_range(IndexInfo *indexInfo, BrinBuildState *state, Relation heapRel, * Summarize page ranges that are not already summarized. If pageRange is * BRIN_ALL_BLOCKRANGES then the whole table is scanned; otherwise, only the * page range containing the given heap page number is scanned. + * If include_partial is true, then the partial range at the end of the table + * is summarized, otherwise not. * * For each new index tuple inserted, *numSummarized (if not NULL) is * incremented; for each existing tuple, *numExisting (if not NULL) is @@ -1241,56 +1275,54 @@ summarize_range(IndexInfo *indexInfo, BrinBuildState *state, Relation heapRel, */ static void brinsummarize(Relation index, Relation heapRel, BlockNumber pageRange, - double *numSummarized, double *numExisting) + bool include_partial, double *numSummarized, double *numExisting) { BrinRevmap *revmap; BrinBuildState *state = NULL; IndexInfo *indexInfo = NULL; BlockNumber heapNumBlocks; - BlockNumber heapBlk; BlockNumber pagesPerRange; Buffer buf; BlockNumber startBlk; - BlockNumber endBlk; - - /* determine range of pages to process; nothing to do for an empty table */ - heapNumBlocks = RelationGetNumberOfBlocks(heapRel); - if (heapNumBlocks == 0) - return; revmap = brinRevmapInitialize(index, &pagesPerRange, NULL); + /* determine range of pages to process */ + heapNumBlocks = RelationGetNumberOfBlocks(heapRel); if (pageRange == BRIN_ALL_BLOCKRANGES) - { startBlk = 0; - endBlk = heapNumBlocks; - } else - { startBlk = (pageRange / pagesPerRange) * pagesPerRange; + if (startBlk >= heapNumBlocks) + { /* Nothing to do if start point is beyond end of table */ - if (startBlk > heapNumBlocks) - { - brinRevmapTerminate(revmap); - return; - } - endBlk = startBlk + pagesPerRange; - if (endBlk > heapNumBlocks) - endBlk = heapNumBlocks; + brinRevmapTerminate(revmap); + return; } /* * Scan the revmap to find unsummarized items. */ buf = InvalidBuffer; - for (heapBlk = startBlk; heapBlk < endBlk; heapBlk += pagesPerRange) + for (; startBlk < heapNumBlocks; startBlk += pagesPerRange) { BrinTuple *tup; OffsetNumber off; + /* + * Unless requested to summarize even a partial range, go away now if + * we think the next range is partial. 
Caller would pass true when + * it is typically run once bulk data loading is done + * (brin_summarize_new_values), and false when it is typically the + * result of arbitrarily-scheduled maintenance command (vacuuming). + */ + if (!include_partial && + (startBlk + pagesPerRange > heapNumBlocks)) + break; + CHECK_FOR_INTERRUPTS(); - tup = brinGetTupleForHeapBlock(revmap, heapBlk, &buf, &off, NULL, + tup = brinGetTupleForHeapBlock(revmap, startBlk, &buf, &off, NULL, BUFFER_LOCK_SHARE, NULL); if (tup == NULL) { @@ -1303,7 +1335,7 @@ brinsummarize(Relation index, Relation heapRel, BlockNumber pageRange, pagesPerRange); indexInfo = BuildIndexInfo(index); } - summarize_range(indexInfo, state, heapRel, heapBlk, heapNumBlocks); + summarize_range(indexInfo, state, heapRel, startBlk, heapNumBlocks); /* and re-initialize state for the next range */ brin_memtuple_initialize(state->bs_dtuple, state->bs_bdesc); From a9fce66729ad5217e8219e22e595974059c21291 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 3 Nov 2017 11:59:20 -0400 Subject: [PATCH 0482/1087] Don't reset additional columns on subscriber to NULL on UPDATE When a publisher table has fewer columns than a subscriber, the update of a row on the publisher should result in updating of only the columns in common. The previous coding mistakenly reset the values of additional columns on the subscriber to NULL because it failed to skip updates of columns not found in the attribute map. Author: Petr Jelinek --- src/backend/replication/logical/worker.c | 7 +- src/test/subscription/t/008_diff_schema.pl | 80 ++++++++++++++++++++++ 2 files changed, 85 insertions(+), 2 deletions(-) create mode 100644 src/test/subscription/t/008_diff_schema.pl diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c index bc6d8246a7..0e68670767 100644 --- a/src/backend/replication/logical/worker.c +++ b/src/backend/replication/logical/worker.c @@ -391,10 +391,13 @@ slot_modify_cstrings(TupleTableSlot *slot, LogicalRepRelMapEntry *rel, Form_pg_attribute att = TupleDescAttr(slot->tts_tupleDescriptor, i); int remoteattnum = rel->attrmap[i]; - if (remoteattnum >= 0 && !replaces[remoteattnum]) + if (remoteattnum < 0) continue; - if (remoteattnum >= 0 && values[remoteattnum] != NULL) + if (!replaces[remoteattnum]) + continue; + + if (values[remoteattnum] != NULL) { Oid typinput; Oid typioparam; diff --git a/src/test/subscription/t/008_diff_schema.pl b/src/test/subscription/t/008_diff_schema.pl new file mode 100644 index 0000000000..b71be6e487 --- /dev/null +++ b/src/test/subscription/t/008_diff_schema.pl @@ -0,0 +1,80 @@ +# Test behavior with different schema on subscriber +use strict; +use warnings; +use PostgresNode; +use TestLib; +use Test::More tests => 3; + +sub wait_for_caught_up +{ + my ($node, $appname) = @_; + + $node->poll_query_until('postgres', +"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';" + ) or die "Timed out while waiting for subscriber to catch up"; +} + +# Create publisher node +my $node_publisher = get_new_node('publisher'); +$node_publisher->init(allows_streaming => 'logical'); +$node_publisher->start; + +# Create subscriber node +my $node_subscriber = get_new_node('subscriber'); +$node_subscriber->init(allows_streaming => 'logical'); +$node_subscriber->start; + +# Create some preexisting content on publisher +$node_publisher->safe_psql('postgres', + "CREATE TABLE test_tab (a int primary key, b varchar)"); +$node_publisher->safe_psql('postgres', + 
"INSERT INTO test_tab VALUES (1, 'foo'), (2, 'bar')"); + +# Setup structure on subscriber +$node_subscriber->safe_psql('postgres', "CREATE TABLE test_tab (a int primary key, b text, c timestamptz DEFAULT now(), d bigint DEFAULT 999)"); + +# Setup logical replication +my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres'; +$node_publisher->safe_psql('postgres', "CREATE PUBLICATION tap_pub FOR TABLE test_tab"); + +my $appname = 'tap_sub'; +$node_subscriber->safe_psql('postgres', +"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub" +); + +wait_for_caught_up($node_publisher, $appname); + +# Also wait for initial table sync to finish +my $synced_query = +"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');"; +$node_subscriber->poll_query_until('postgres', $synced_query) + or die "Timed out while waiting for subscriber to synchronize data"; + +my $result = + $node_subscriber->safe_psql('postgres', "SELECT count(*), count(c), count(d = 999) FROM test_tab"); +is($result, qq(2|2|2), 'check initial data was copied to subscriber'); + +# Update the rows on the publisher and check the additional columns on +# subscriber didn't change +$node_publisher->safe_psql('postgres', "UPDATE test_tab SET b = md5(b)"); + +wait_for_caught_up($node_publisher, $appname); + +$result = + $node_subscriber->safe_psql('postgres', "SELECT count(*), count(c), count(d = 999) FROM test_tab"); +is($result, qq(2|2|2), 'check extra columns contain local defaults'); + +# Change the local values of the extra columns on the subscriber, +# update publisher, and check that subscriber retains the expected +# values +$node_subscriber->safe_psql('postgres', "UPDATE test_tab SET c = 'epoch'::timestamptz + 987654321 * interval '1s'"); +$node_publisher->safe_psql('postgres', "UPDATE test_tab SET b = md5(a::text)"); + +wait_for_caught_up($node_publisher, $appname); + +$result = + $node_subscriber->safe_psql('postgres', "SELECT count(*), count(extract(epoch from c) = 987654321), count(d = 999) FROM test_tab"); +is($result, qq(2|2|2), 'check extra columns contain locally changed data'); + +$node_subscriber->stop; +$node_publisher->stop; From 49df45acd8d40ee172c2f5491485de997c5f1020 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 26 Oct 2017 15:19:56 -0400 Subject: [PATCH 0483/1087] doc: Convert ids to upper case at build time This makes the produced HTML anchors upper case, making it backward compatible with the previous (9.6) build system. Reported-by: Thomas Kellerer --- doc/src/sgml/stylesheet-html-common.xsl | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/doc/src/sgml/stylesheet-html-common.xsl b/doc/src/sgml/stylesheet-html-common.xsl index 72fac1e806..17b7230d2c 100644 --- a/doc/src/sgml/stylesheet-html-common.xsl +++ b/doc/src/sgml/stylesheet-html-common.xsl @@ -263,4 +263,29 @@ set toc,title + + + + + + + + + + + + + + + + id- + + + + + + + + + From 1b890562b8d1b44bd3ef948aeeb58dd59abd04b7 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 3 Nov 2017 20:36:32 +0100 Subject: [PATCH 0484/1087] Fix thinkos in BRIN summarization The previous commit contained a thinko that made a single-range summarization request process from there to end of table. Fix by setting the correct end range point. Per buildfarm. 
--- src/backend/access/brin/brin.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c index 8636620f64..cafc8fe7be 100644 --- a/src/backend/access/brin/brin.c +++ b/src/backend/access/brin/brin.c @@ -1292,8 +1292,11 @@ brinsummarize(Relation index, Relation heapRel, BlockNumber pageRange, if (pageRange == BRIN_ALL_BLOCKRANGES) startBlk = 0; else + { startBlk = (pageRange / pagesPerRange) * pagesPerRange; - if (startBlk >= heapNumBlocks) + heapNumBlocks = Min(heapNumBlocks, startBlk + pagesPerRange); + } + if (startBlk > heapNumBlocks) { /* Nothing to do if start point is beyond end of table */ brinRevmapTerminate(revmap); From 4c11d2c559e76892156fd08d6a3cf5e1848a017f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 3 Nov 2017 16:31:32 -0400 Subject: [PATCH 0485/1087] Flag index metapages as standard-format in xlog.c calls. btree, hash, and bloom indexes all set up their metapages in standard format (that is, with pd_lower and pd_upper correctly delimiting the unused area); but they mostly didn't inform the xlog routines of this. When calling log_newpage[_buffer], this is bad because it loses the opportunity to compress unused data out of the WAL record. When calling XLogRegisterBuffer, it's not such a performance problem because all of these call sites also use REGBUF_WILL_INIT, preventing an FPI image from being written. But it's still a good idea to provide the flag when relevant, because that aids WAL consistency checking. This completes the project of getting all the in-core index AMs to handle their metapage WAL operations similarly. Amit Kapila, reviewed by Michael Paquier Discussion: https://postgr.es/m/0d273805-0e9e-ec1a-cb84-d4da400b8f85@lab.ntt.co.jp --- contrib/bloom/blinsert.c | 2 +- src/backend/access/hash/hashpage.c | 7 ++++--- src/backend/access/nbtree/nbtinsert.c | 4 ++-- src/backend/access/nbtree/nbtpage.c | 9 +++++---- src/backend/access/nbtree/nbtree.c | 2 +- src/backend/access/nbtree/nbtxlog.c | 5 +++-- 6 files changed, 16 insertions(+), 13 deletions(-) diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c index 0d506e3c1a..1fcb281508 100644 --- a/contrib/bloom/blinsert.c +++ b/contrib/bloom/blinsert.c @@ -175,7 +175,7 @@ blbuildempty(Relation index) smgrwrite(index->rd_smgr, INIT_FORKNUM, BLOOM_METAPAGE_BLKNO, (char *) metapage, true); log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM, - BLOOM_METAPAGE_BLKNO, metapage, false); + BLOOM_METAPAGE_BLKNO, metapage, true); /* * An immediate sync is required even if we xlog'd the page, because the diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c index f279dcea1d..4b14f88af9 100644 --- a/src/backend/access/hash/hashpage.c +++ b/src/backend/access/hash/hashpage.c @@ -403,7 +403,7 @@ _hash_init(Relation rel, double num_tuples, ForkNumber forkNum) XLogBeginInsert(); XLogRegisterData((char *) &xlrec, SizeOfHashInitMetaPage); - XLogRegisterBuffer(0, metabuf, REGBUF_WILL_INIT); + XLogRegisterBuffer(0, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD); recptr = XLogInsert(RM_HASH_ID, XLOG_HASH_INIT_META_PAGE); @@ -592,8 +592,9 @@ _hash_init_metabuffer(Buffer buf, double num_tuples, RegProcedure procid, metap->hashm_firstfree = 0; /* - * Set pd_lower just past the end of the metadata. This is to log full - * page image of metapage in xloginsert.c. + * Set pd_lower just past the end of the metadata. 
This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. */ ((PageHeader) page)->pd_lower = ((char *) metap + sizeof(HashMetaPageData)) - (char *) page; diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c index bf963fcdef..5cbaba1b7d 100644 --- a/src/backend/access/nbtree/nbtinsert.c +++ b/src/backend/access/nbtree/nbtinsert.c @@ -898,7 +898,7 @@ _bt_insertonpg(Relation rel, xlmeta.fastroot = metad->btm_fastroot; xlmeta.fastlevel = metad->btm_fastlevel; - XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT); + XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD); XLogRegisterBufData(2, (char *) &xlmeta, sizeof(xl_btree_metadata)); xlinfo = XLOG_BTREE_INSERT_META; @@ -2032,7 +2032,7 @@ _bt_newroot(Relation rel, Buffer lbuf, Buffer rbuf) XLogRegisterBuffer(0, rootbuf, REGBUF_WILL_INIT); XLogRegisterBuffer(1, lbuf, REGBUF_STANDARD); - XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT); + XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD); md.root = rootblknum; md.level = metad->btm_level; diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c index 10697e9e23..c77434904e 100644 --- a/src/backend/access/nbtree/nbtpage.c +++ b/src/backend/access/nbtree/nbtpage.c @@ -65,8 +65,9 @@ _bt_initmetapage(Page page, BlockNumber rootbknum, uint32 level) metaopaque->btpo_flags = BTP_META; /* - * Set pd_lower just past the end of the metadata. This is not essential - * but it makes the page look compressible to xlog.c. + * Set pd_lower just past the end of the metadata. This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. */ ((PageHeader) page)->pd_lower = ((char *) metad + sizeof(BTMetaPageData)) - (char *) page; @@ -241,7 +242,7 @@ _bt_getroot(Relation rel, int access) XLogBeginInsert(); XLogRegisterBuffer(0, rootbuf, REGBUF_WILL_INIT); - XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT); + XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD); md.root = rootblkno; md.level = 0; @@ -1827,7 +1828,7 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty) if (BufferIsValid(metabuf)) { - XLogRegisterBuffer(4, metabuf, REGBUF_WILL_INIT); + XLogRegisterBuffer(4, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD); xlmeta.root = metad->btm_root; xlmeta.level = metad->btm_level; diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c index 3dbafdd6fc..399e6a1ae5 100644 --- a/src/backend/access/nbtree/nbtree.c +++ b/src/backend/access/nbtree/nbtree.c @@ -298,7 +298,7 @@ btbuildempty(Relation index) smgrwrite(index->rd_smgr, INIT_FORKNUM, BTREE_METAPAGE, (char *) metapage, true); log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM, - BTREE_METAPAGE, metapage, false); + BTREE_METAPAGE, metapage, true); /* * An immediate sync is required even if we xlog'd the page, because the diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c index 82337f8ef2..7250b4f0b8 100644 --- a/src/backend/access/nbtree/nbtxlog.c +++ b/src/backend/access/nbtree/nbtxlog.c @@ -107,8 +107,9 @@ _bt_restore_meta(XLogReaderState *record, uint8 block_id) pageop->btpo_flags = BTP_META; /* - * Set pd_lower just past the end of the metadata. This is not essential - * but it makes the page look compressible to xlog.c. + * Set pd_lower just past the end of the metadata. 
This is essential, + * because without doing so, metadata will be lost if xlog.c compresses + * the page. */ ((PageHeader) metapg)->pd_lower = ((char *) md + sizeof(BTMetaPageData)) - (char *) metapg; From a9169f0200fc57e01cbd216bbd41c9ea3a79a7b0 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 3 Nov 2017 17:21:59 -0400 Subject: [PATCH 0486/1087] Avoid looping through line pointers twice in PageRepairFragmentation(). There doesn't seem to be any good reason to do the filling of the itemidbase[] array separately from the first traversal of the pointers. It's certainly not a win if there are any line pointers with storage, and even if there aren't, this change doesn't insert code into the part of the first loop that will be traversed in that case. So let's just merge the two loops. Yura Sokolov, reviewed by Claudio Freire Discussion: https://postgr.es/m/e49befcc6f1d7099834c6fdf5c675a60@postgrespro.ru --- src/backend/storage/page/bufpage.c | 46 ++++++++++++++---------------- 1 file changed, 21 insertions(+), 25 deletions(-) diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c index 41642eb59c..b6aa2af818 100644 --- a/src/backend/storage/page/bufpage.c +++ b/src/backend/storage/page/bufpage.c @@ -481,6 +481,8 @@ PageRepairFragmentation(Page page) Offset pd_lower = ((PageHeader) page)->pd_lower; Offset pd_upper = ((PageHeader) page)->pd_upper; Offset pd_special = ((PageHeader) page)->pd_special; + itemIdSortData itemidbase[MaxHeapTuplesPerPage]; + itemIdSort itemidptr; ItemId lp; int nline, nstorage, @@ -505,15 +507,31 @@ PageRepairFragmentation(Page page) errmsg("corrupted page pointers: lower = %u, upper = %u, special = %u", pd_lower, pd_upper, pd_special))); + /* + * Run through the line pointer array and collect data about live items. 
+	 */
 	nline = PageGetMaxOffsetNumber(page);
-	nunused = nstorage = 0;
+	itemidptr = itemidbase;
+	nunused = totallen = 0;
 	for (i = FirstOffsetNumber; i <= nline; i++)
 	{
 		lp = PageGetItemId(page, i);
 		if (ItemIdIsUsed(lp))
 		{
 			if (ItemIdHasStorage(lp))
-				nstorage++;
+			{
+				itemidptr->offsetindex = i - 1;
+				itemidptr->itemoff = ItemIdGetOffset(lp);
+				if (unlikely(itemidptr->itemoff < (int) pd_upper ||
+							 itemidptr->itemoff >= (int) pd_special))
+					ereport(ERROR,
+							(errcode(ERRCODE_DATA_CORRUPTED),
+							 errmsg("corrupted item pointer: %u",
+									itemidptr->itemoff)));
+				itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
+				totallen += itemidptr->alignedlen;
+				itemidptr++;
+			}
 		}
 		else
 		{
@@ -523,6 +541,7 @@
 		}
 	}
+	nstorage = itemidptr - itemidbase;
 	if (nstorage == 0)
 	{
 		/* Page is completely empty, so just reset it quickly */
@@ -531,29 +550,6 @@
 	else
 	{
 		/* Need to compact the page the hard way */
-		itemIdSortData itemidbase[MaxHeapTuplesPerPage];
-		itemIdSort	itemidptr = itemidbase;
-
-		totallen = 0;
-		for (i = 0; i < nline; i++)
-		{
-			lp = PageGetItemId(page, i + 1);
-			if (ItemIdHasStorage(lp))
-			{
-				itemidptr->offsetindex = i;
-				itemidptr->itemoff = ItemIdGetOffset(lp);
-				if (itemidptr->itemoff < (int) pd_upper ||
-					itemidptr->itemoff >= (int) pd_special)
-					ereport(ERROR,
-							(errcode(ERRCODE_DATA_CORRUPTED),
-							 errmsg("corrupted item pointer: %u",
-									itemidptr->itemoff)));
-				itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
-				totallen += itemidptr->alignedlen;
-				itemidptr++;
-			}
-		}
-
 		if (totallen > (Size) (pd_special - pd_lower))
 			ereport(ERROR,
 					(errcode(ERRCODE_DATA_CORRUPTED),

From cb29ff8315ef74043f279c21783cca8aaf79ebde Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 14 Sep 2017 08:30:03 -0400
Subject: [PATCH 0487/1087] Fix incorrect use of bool

NSUnLinkModule() doesn't take a bool as its second argument but one of
a set of specific constants. The numeric values are the same in this
case, but clean it up while we're cleaning up bool use elsewhere.

Reviewed-by: Michael Paquier
---
 src/backend/port/dynloader/darwin.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/port/dynloader/darwin.c b/src/backend/port/dynloader/darwin.c
index f8fdeaf122..18092cc759 100644
--- a/src/backend/port/dynloader/darwin.c
+++ b/src/backend/port/dynloader/darwin.c
@@ -69,7 +69,7 @@ pg_dlopen(char *filename)
 void
 pg_dlclose(void *handle)
 {
-	NSUnLinkModule(handle, FALSE);
+	NSUnLinkModule(handle, NSUNLINKMODULE_OPTION_NONE);
 }

 PGFunction

From d6148e7d44e91cac8bd21d8c6d3aaaf1eed10486 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Mon, 11 Sep 2017 20:43:05 -0400
Subject: [PATCH 0488/1087] ecpg: Remove useless return values

Remove useless or inconsistently used return values from functions,
matching backend changes 99bf328237d89e0fd22821a940d4af0506353218
and 791359fe0eae83641f0929159d5861359d395e97.
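The pattern, sketched standalone with an invented function (the real
changes touch EncodeDateOnly, EncodeDateTime, EncodeInterval, and
EncodeSpecialTimestamp): a status code that no caller checked becomes a
void function whose impossible case is asserted instead.

#include <assert.h>
#include <stdio.h>

static void
model_encode_month(int month, char *out)
{
	static const char *const names[] = {
		"Jan", "Feb", "Mar", "Apr", "May", "Jun",
		"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
	};

	/* was: if (month < 1 || month > 12) return -1; -- dead in practice */
	assert(month >= 1 && month <= 12);
	sprintf(out, "%s", names[month - 1]);
}

int
main(void)
{
	char		buf[4];

	model_encode_month(11, buf);
	puts(buf);					/* prints "Nov" */
	return 0;
}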
Reviewed-by: Michael Paquier --- src/interfaces/ecpg/pgtypeslib/dt.h | 6 +++--- src/interfaces/ecpg/pgtypeslib/dt_common.c | 15 +++++---------- src/interfaces/ecpg/pgtypeslib/interval.c | 16 ++++------------ src/interfaces/ecpg/pgtypeslib/timestamp.c | 8 +++----- 4 files changed, 15 insertions(+), 30 deletions(-) diff --git a/src/interfaces/ecpg/pgtypeslib/dt.h b/src/interfaces/ecpg/pgtypeslib/dt.h index 5a192ddc45..8cf03bfedf 100644 --- a/src/interfaces/ecpg/pgtypeslib/dt.h +++ b/src/interfaces/ecpg/pgtypeslib/dt.h @@ -313,12 +313,12 @@ do { \ int DecodeInterval(char **, int *, int, int *, struct tm *, fsec_t *); int DecodeTime(char *, int *, struct tm *, fsec_t *); -int EncodeDateTime(struct tm *tm, fsec_t fsec, bool print_tz, int tz, const char *tzn, int style, char *str, bool EuroDates); -int EncodeInterval(struct tm *tm, fsec_t fsec, int style, char *str); +void EncodeDateTime(struct tm *tm, fsec_t fsec, bool print_tz, int tz, const char *tzn, int style, char *str, bool EuroDates); +void EncodeInterval(struct tm *tm, fsec_t fsec, int style, char *str); int tm2timestamp(struct tm *, fsec_t, int *, timestamp *); int DecodeUnits(int field, char *lowtoken, int *val); bool CheckDateTokenTables(void); -int EncodeDateOnly(struct tm *tm, int style, char *str, bool EuroDates); +void EncodeDateOnly(struct tm *tm, int style, char *str, bool EuroDates); int GetEpochTime(struct tm *); int ParseDateTime(char *, char *, char **, int *, int *, char **); int DecodeDateTime(char **, int *, int, int *, struct tm *, fsec_t *, bool); diff --git a/src/interfaces/ecpg/pgtypeslib/dt_common.c b/src/interfaces/ecpg/pgtypeslib/dt_common.c index a26d61b32c..59b69d917b 100644 --- a/src/interfaces/ecpg/pgtypeslib/dt_common.c +++ b/src/interfaces/ecpg/pgtypeslib/dt_common.c @@ -671,11 +671,10 @@ DecodeSpecial(int field, char *lowtoken, int *val) /* EncodeDateOnly() * Encode date as local time. */ -int +void EncodeDateOnly(struct tm *tm, int style, char *str, bool EuroDates) { - if (tm->tm_mon < 1 || tm->tm_mon > MONTHS_PER_YEAR) - return -1; + Assert(tm->tm_mon >= 1 && tm->tm_mon <= MONTHS_PER_YEAR); switch (style) { @@ -723,9 +722,7 @@ EncodeDateOnly(struct tm *tm, int style, char *str, bool EuroDates) sprintf(str + 5, "-%04d %s", -(tm->tm_year - 1), "BC"); break; } - - return TRUE; -} /* EncodeDateOnly() */ +} void TrimTrailingZeros(char *str) @@ -758,7 +755,7 @@ TrimTrailingZeros(char *str) * US - mm/dd/yyyy * European - dd/mm/yyyy */ -int +void EncodeDateTime(struct tm *tm, fsec_t fsec, bool print_tz, int tz, const char *tzn, int style, char *str, bool EuroDates) { int day, @@ -951,9 +948,7 @@ EncodeDateTime(struct tm *tm, fsec_t fsec, bool print_tz, int tz, const char *tz } break; } - - return TRUE; -} /* EncodeDateTime() */ +} int GetEpochTime(struct tm *tm) diff --git a/src/interfaces/ecpg/pgtypeslib/interval.c b/src/interfaces/ecpg/pgtypeslib/interval.c index 30f2ccbcb7..4a7227e926 100644 --- a/src/interfaces/ecpg/pgtypeslib/interval.c +++ b/src/interfaces/ecpg/pgtypeslib/interval.c @@ -331,8 +331,6 @@ DecodeISO8601Interval(char *str, * * ECPG semes not to have a global IntervalStyle * so added * int IntervalStyle = INTSTYLE_POSTGRES; - * - * * Assert wasn't available so removed it. */ int DecodeInterval(char **field, int *ftype, int nf, /* int range, */ @@ -374,7 +372,7 @@ DecodeInterval(char **field, int *ftype, int nf, /* int range, */ * least one digit; there could be ':', '.', '-' embedded in * it as well. 
*/ - /* Assert(*field[i] == '-' || *field[i] == '+'); */ + Assert(*field[i] == '-' || *field[i] == '+'); /* * Try for hh:mm or hh:mm:ss. If not, fall through to @@ -771,7 +769,7 @@ AppendSeconds(char *cp, int sec, fsec_t fsec, int precision, bool fillzeros) * Change pg_tm to tm */ -int +void EncodeInterval(struct /* pg_ */ tm *tm, fsec_t fsec, int style, char *str) { char *cp = str; @@ -947,9 +945,7 @@ EncodeInterval(struct /* pg_ */ tm *tm, fsec_t fsec, int style, char *str) strcat(cp, " ago"); break; } - - return 0; -} /* EncodeInterval() */ +} /* interval2tm() @@ -1091,11 +1087,7 @@ PGTYPESinterval_to_asc(interval * span) return NULL; } - if (EncodeInterval(tm, fsec, IntervalStyle, buf) != 0) - { - errno = PGTYPES_INTVL_BAD_INTERVAL; - return NULL; - } + EncodeInterval(tm, fsec, IntervalStyle, buf); return pgtypes_strdup(buf); } diff --git a/src/interfaces/ecpg/pgtypeslib/timestamp.c b/src/interfaces/ecpg/pgtypeslib/timestamp.c index fa5b32ed9d..b63880dc55 100644 --- a/src/interfaces/ecpg/pgtypeslib/timestamp.c +++ b/src/interfaces/ecpg/pgtypeslib/timestamp.c @@ -192,7 +192,7 @@ timestamp2tm(timestamp dt, int *tzp, struct tm *tm, fsec_t *fsec, const char **t /* EncodeSpecialTimestamp() * * Convert reserved timestamp data type to string. * */ -static int +static void EncodeSpecialTimestamp(timestamp dt, char *str) { if (TIMESTAMP_IS_NOBEGIN(dt)) @@ -200,10 +200,8 @@ EncodeSpecialTimestamp(timestamp dt, char *str) else if (TIMESTAMP_IS_NOEND(dt)) strcpy(str, LATE); else - return FALSE; - - return TRUE; -} /* EncodeSpecialTimestamp() */ + abort(); /* shouldn't happen */ +} timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr) From 4703a480a9e15f8b8b481dac44f2e36a4a687fe4 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 11 Sep 2017 20:54:55 -0400 Subject: [PATCH 0489/1087] ecpg: Use bool instead of int Use "bool" for Boolean variables, rather than "int", matching backend change f505edace12655f3491b9c91991731e2b6bf1f0b. 
Reviewed-by: Michael Paquier --- src/interfaces/ecpg/pgtypeslib/dt_common.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/src/interfaces/ecpg/pgtypeslib/dt_common.c b/src/interfaces/ecpg/pgtypeslib/dt_common.c index 59b69d917b..be72fce8c5 100644 --- a/src/interfaces/ecpg/pgtypeslib/dt_common.c +++ b/src/interfaces/ecpg/pgtypeslib/dt_common.c @@ -1089,7 +1089,7 @@ dt2time(double jd, int *hour, int *min, int *sec, fsec_t *fsec) */ static int DecodeNumberField(int len, char *str, int fmask, - int *tmask, struct tm *tm, fsec_t *fsec, int *is2digits) + int *tmask, struct tm *tm, fsec_t *fsec, bool *is2digits) { char *cp; @@ -1199,7 +1199,7 @@ DecodeNumberField(int len, char *str, int fmask, */ static int DecodeNumber(int flen, char *str, int fmask, - int *tmask, struct tm *tm, fsec_t *fsec, int *is2digits, bool EuroDates) + int *tmask, struct tm *tm, fsec_t *fsec, bool *is2digits, bool EuroDates) { int val; char *cp; @@ -1314,8 +1314,8 @@ DecodeDate(char *str, int fmask, int *tmask, struct tm *tm, bool EuroDates) int nf = 0; int i, len; - int bc = FALSE; - int is2digits = FALSE; + bool bc = FALSE; + bool is2digits = FALSE; int type, val, dmask = 0; @@ -1792,9 +1792,9 @@ DecodeDateTime(char **field, int *ftype, int nf, int i; int val; int mer = HR24; - int haveTextMonth = FALSE; - int is2digits = FALSE; - int bc = FALSE; + bool haveTextMonth = FALSE; + bool is2digits = FALSE; + bool bc = FALSE; int t = 0; int *tzp = &t; From bc105c4be057177c6fe7bd93b31eb1dc66ed4395 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 4 Nov 2017 14:42:20 -0400 Subject: [PATCH 0490/1087] doc: Update text for new recovery_target_lsn setting Reported-by: Tomonari Katsumata Author: Michael Paquier --- doc/src/sgml/recovery-config.sgml | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/recovery-config.sgml b/doc/src/sgml/recovery-config.sgml index 4e1aa74c1f..ca37ab5187 100644 --- a/doc/src/sgml/recovery-config.sgml +++ b/doc/src/sgml/recovery-config.sgml @@ -270,10 +270,11 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows Specifies whether to stop just after the specified recovery target (true), or just before the recovery target (false). - Applies when either - or is specified. + Applies when , + , or + is specified. This setting controls whether transactions - having exactly the target commit time or ID, respectively, will + having exactly the target WAL location (LSN), commit time, or transaction ID, respectively, will be included in the recovery. Default is true. From 42de8a0255c2509bf179205e94b9d65f9d6f3cf9 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 4 Nov 2017 18:27:06 -0400 Subject: [PATCH 0491/1087] First-draft release notes for 10.1. As usual, the release notes for other branches will be made by cutting these down, but put them up for community review first. Note that a fair percentage of the entries apply only to prior branches because their issue was already fixed in 10.0. --- doc/src/sgml/release-10.sgml | 861 +++++++++++++++++++++++++++++++++++ 1 file changed, 861 insertions(+) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 7e95ba8313..543af40706 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -1,6 +1,867 @@ + + Release 10.1 + + + Release date: + 2017-11-09 + + + + This release contains a variety of fixes from 10.0. + For information about new features in major release 10, see + . 
+ + + + Migration to Version 10.1 + + + A dump/restore is not required for those running 10.X. + + + + However, if you use BRIN indexes, see the first changelog entry below. + + + + + Changes + + + + + + + Fix BRIN index summarization to handle concurrent table extension + correctly (Álvaro Herrera) + + + + Previously, a race condition allowed some table rows to be omitted from + the index. It may be necessary to reindex existing BRIN indexes to + recover from past occurrences of this problem. + + + + + + + Fix possible failures during concurrent updates of a BRIN index + (Tom Lane) + + + + These race conditions could result in errors like invalid index + offnum or inconsistent range map. + + + + + + + Prevent logical replication from setting non-replicated columns to + nulls when replicating an UPDATE (Petr Jelinek) + + + + + + + Fix logical replication to fire BEFORE ROW DELETE + triggers when expected (Masahiko Sawada) + + + + Previously, that failed to happen unless the table also had + a BEFORE ROW UPDATE trigger. + + + + + + + Fix crash when logical decoding is invoked from a SPI-using function, + in particular any function written in a PL language + (Tom Lane) + + + + + + + Ignore CTEs when looking up the target table for + INSERT/UPDATE/DELETE, + and prevent matching qualified target-table names to trigger transition + table names (Thomas Munro) + + + + This restores the pre-v10 behavior for CTEs attached to DML commands. + + + + + + + Avoid evaluating an aggregate function's argument expression(s) at rows + where its FILTER test fails (Tom Lane) + + + + This restores the pre-v10 (and SQL-standard) behavior. + + + + + + + Fix incorrect query results when multiple GROUPING + SETS columns contain the same simple variable (Tom Lane) + + + + + + + Fix query-lifespan memory leakage while evaluating a set-returning + function in a SELECT's target list (Tom Lane) + + + + + + + Allow parallel execution of prepared statements with generic plans + (Amit Kapila, Kuntal Ghosh) + + + + + + + Fix incorrect parallelization decisions for nested queries + (Amit Kapila, Kuntal Ghosh) + + + + + + + Fix parallel query handling to not fail when a recently-active role is + dropped (Amit Kapila) + + + + + + + Fix crash in parallel execution of a bitmap scan having a BitmapAnd + plan node below a BitmapOr node (Dilip Kumar) + + + + + + + Fix json_build_array(), + json_build_object(), and their jsonb + equivalents to handle explicit VARIADIC arguments + correctly (Michael Paquier) + + + + + + + Properly reject attempts to convert infinite float values to + type numeric (Tom Lane, KaiGai Kohei) + + + + Previously the behavior was platform-dependent. + + + + + + + Fix autovacuum's work item logic to prevent possible + crashes and silent loss of work items (Álvaro Herrera) + + + + + + + Fix corner-case crashes when columns have been added to the end of a + view (Tom Lane) + + + + + + + Record proper dependencies when a view or rule + contains FieldSelect + or FieldStore expression nodes (Tom Lane) + + + + Lack of these dependencies could allow a column or data + type DROP to go through when it ought to fail, + thereby causing later uses of the view or rule to get errors. + This patch does not do anything to protect existing views/rules, + only ones created in the future. 
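[Illustration, not part of the patch series: a FieldSelect node of the kind the preceding entry describes arises when a view extracts a field from a composite-typed column. The type and relation names below are invented for the sketch.]

    -- Hypothetical sketch of the dependency gap described above.
    CREATE TYPE complex AS (r float8, i float8);
    CREATE TABLE t (c complex);
    CREATE VIEW v AS SELECT (c).r AS real_part FROM t;  -- (c).r is a FieldSelect

    -- Without the recorded dependency, a drop like this could go through,
    -- leaving v broken; for views created after the fix it is expected to
    -- be rejected with a dependency error instead.
    ALTER TYPE complex DROP ATTRIBUTE r;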
+ + + + + + + Correctly detect hashability of range data types (Tom Lane) + + + + The planner mistakenly assumed that any range type could be hashed + for use in hash joins or hash aggregation, but actually it must check + whether the range's subtype has hash support. This does not affect any + of the built-in range types, since they're all hashable anyway. + + + + + + + Correctly ignore RelabelType expression nodes + when examining functional-dependency statistics (David Rowley) + + + + This allows, e.g., extended statistics on varchar columns + to be used properly. + + + + + + + Correctly ignore RelabelType expression nodes + when determining relation distinctness (David Rowley) + + + + + + + Prevent sharing transition states between ordered-set aggregates + (David Rowley) + + + + This causes a crash with the built-in ordered-set aggregates, and + probably with user-written ones as well. v11 and later will include + provisions for dealing with such cases safely, but in released + branches, just disable the optimization. + + + + + + + Prevent idle_in_transaction_session_timeout from + being ignored when a statement_timeout occurred + earlier (Lukas Fittl) + + + + + + + Fix low-probability loss of NOTIFY messages due to + XID wraparound (Marko Tiikkaja, Tom Lane) + + + + If a session executed no queries, but merely listened for + notifications, for more than 2 billion transactions, it started to miss + some notifications from concurrently-committing transactions. + + + + + + + Avoid SIGBUS crash on Linux when a DSM memory + request exceeds the space available in tmpfs + (Thomas Munro) + + + + + + + Reduce the frequency of data flush requests during bulk file copies to + avoid performance problems on macOS, particularly with its new APFS + file system (Tom Lane) + + + + + + + Prevent low-probability crash in processing of nested trigger firings + (Tom Lane) + + + + + + + Correctly restore the umask setting when file creation fails + in COPY or lo_export() + (Peter Eisentraut) + + + + + + + Give a better error message for duplicate column names + in ANALYZE (Nathan Bossart) + + + + + + + Add missing cases in GetCommandLogLevel(), + preventing errors when certain SQL commands are used while + log_statement is set to ddl + (Michael Paquier) + + + + + + + Fix mis-parsing of the last line in a + non-newline-terminated pg_hba.conf file + (Tom Lane) + + + + + + + Fix AggGetAggref() to return the + correct Aggref nodes to aggregate final + functions whose transition calculations have been merged (Tom Lane) + + + + + + + Fix pg_dump to ensure that it + emits GRANT commands in a valid order + (Stephen Frost) + + + + + + + Fix insufficient schema-qualification in some new queries + in pg_dump + and psql + (Vitaly Burovoy, Tom Lane, Noah Misch) + + + + + + + Avoid use of @> operator + in psql's queries for \d + (Tom Lane) + + + + This prevents problems when the parray_gin + extension is installed, since it defines a conflicting operator. + + + + + + + Fix pg_basebackup's matching of tablespace + paths to canonicalize both paths before comparing (Michael Paquier) + + + + This is particularly helpful on Windows. + + + + + + + Fix libpq to not require user's home + directory to exist (Tom Lane) + + + + In v10, failure to find the home directory while trying to + read ~/.pgpass was treated as a hard error, + but it should just cause that file to not be found. 
Both v10 and + previous release branches made the same mistake when + reading ~/.pg_service.conf, though this was less + obvious since that file is not sought unless a service name is + specified. + + + + + + + Fix libpq to guard against integer + overflow in the row count of a PGresult + (Michael Paquier) + + + + + + + Fix ecpg's handling of out-of-scope cursor + declarations with pointer or array variables (Michael Meskes) + + + + + + + In ecpglib, correctly handle backslashes in string literals depending + on whether standard_conforming_strings is set + (Tsunakawa Takayuki) + + + + + + + Make ecpglib's Informix-compatibility mode ignore fractional digits in + integer input strings, as expected (Gao Zengqi, Michael Meskes) + + + + + + + Fix ecpg's regression tests to work reliably + on Windows (Christian Ullrich, Michael Meskes) + + + + + + + Sync our copy of the timezone library with IANA release tzcode2017c + (Tom Lane) + + + + This fixes various issues; the only one likely to be user-visible + is that the default DST rules for a POSIX-style zone name, if + no posixrules file exists in the timezone data + directory, now match current US law rather than what it was a dozen + years ago. + + + + + + + Update time zone data files to tzdata + release 2017c for DST law changes in Fiji, Namibia, Northern Cyprus, + Sudan, Tonga, and Turks & Caicos Islands, plus historical + corrections for Alaska, Apia, Burma, Calcutta, Detroit, Ireland, + Namibia, and Pago Pago. + + + + + + + In the documentation, restore HTML anchors to being upper-case strings + (Peter Eisentraut) + + + + Due to a toolchain change, the 10.0 user manual had lower-case strings + for intrapage anchors, thus breaking some external links into our + website documentation. Return to our previous convention of using + upper-case strings. + + + + + + + + Release 10 From 86bc521811f381a121817fdfb096df431edb32f5 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Sun, 5 Nov 2017 11:48:20 -0500 Subject: [PATCH 0492/1087] Fix comment Author: Bernd Helmle --- src/bin/pg_basebackup/receivelog.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/pg_basebackup/receivelog.c b/src/bin/pg_basebackup/receivelog.c index 07509cb825..d29b501740 100644 --- a/src/bin/pg_basebackup/receivelog.c +++ b/src/bin/pg_basebackup/receivelog.c @@ -747,7 +747,7 @@ ReadEndOfStreamingResult(PGresult *res, XLogRecPtr *startpos, uint32 *timeline) /* * The main loop of ReceiveXlogStream. Handles the COPY stream after - * initiating streaming with the START_STREAMING command. + * initiating streaming with the START_REPLICATION command. * * If the COPY ends (not necessarily successfully) due a message from the * server, returns a PGresult and sets *stoppos to the last byte written. From bab3a714b62160f0e89c8943c5e054649cd58945 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sun, 5 Nov 2017 09:25:52 -0800 Subject: [PATCH 0493/1087] Ignore CatalogSnapshot when checking COPY FREEZE prerequisites. This restores the ability, essentially lost in commit ffaa44cb559db332baeee7d25dedd74a61974203, to use COPY FREEZE under REPEATABLE READ isolation. Back-patch to 9.4, like that commit. Reviewed by Tom Lane. 
Discussion: https://postgr.es/m/CA+TgmoahWDm-7fperBxzU9uZ99LPMUmEpSXLTw9TmrOgzwnORw@mail.gmail.com --- src/backend/commands/copy.c | 18 +++++++++++++++--- src/backend/utils/time/snapmgr.c | 8 ++++++++ 2 files changed, 23 insertions(+), 3 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 8006df32a8..1bdd4927d9 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -2394,13 +2394,25 @@ CopyFrom(CopyState cstate) /* * Optimize if new relfilenode was created in this subxact or one of its * committed children and we won't see those rows later as part of an - * earlier scan or command. This ensures that if this subtransaction - * aborts then the frozen rows won't be visible after xact cleanup. Note + * earlier scan or command. The subxact test ensures that if this subxact + * aborts then the frozen rows won't be visible after xact cleanup. Note * that the stronger test of exactly which subtransaction created it is - * crucial for correctness of this optimization. + * crucial for correctness of this optimization. The test for an earlier + * scan or command tolerates false negatives. FREEZE causes other sessions + * to see rows they would not see under MVCC, and a false negative merely + * spreads that anomaly to the current session. */ if (cstate->freeze) { + /* + * Tolerate one registration for the benefit of FirstXactSnapshot. + * Scan-bearing queries generally create at least two registrations, + * though relying on that is fragile, as is ignoring ActiveSnapshot. + * Clear CatalogSnapshot to avoid counting its registration. We'll + * still detect ongoing catalog scans, each of which separately + * registers the snapshot it uses. + */ + InvalidateCatalogSnapshot(); if (!ThereAreNoPriorRegisteredSnapshots() || !ThereAreNoReadyPortals()) ereport(ERROR, (errcode(ERRCODE_INVALID_TRANSACTION_STATE), diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c index 294ab705f1..addf87dc3b 100644 --- a/src/backend/utils/time/snapmgr.c +++ b/src/backend/utils/time/snapmgr.c @@ -1645,6 +1645,14 @@ DeleteAllExportedSnapshotFiles(void) FreeDir(s_dir); } +/* + * ThereAreNoPriorRegisteredSnapshots + * Is the registered snapshot count less than or equal to one? + * + * Don't use this to settle important decisions. While zero registrations and + * no ActiveSnapshot would confirm a certain idleness, the system makes no + * guarantees about the significance of one registered snapshot. + */ bool ThereAreNoPriorRegisteredSnapshots(void) { From b35b185bf705c4dbaf21198c81b3d85f4a96804a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 5 Nov 2017 13:47:56 -0500 Subject: [PATCH 0494/1087] Release notes for 10.1, 9.6.6, 9.5.10, 9.4.15, 9.3.20, 9.2.24. In the v10 branch, also back-patch the effects of 1ff01b390 and c29c57890 on these files, to reduce future maintenance issues. (I'd do it further back, except that the 9.X branches differ anyway due to xlog-to-wal link tag renaming.) 
--- doc/src/sgml/release-10.sgml | 253 +---------------- doc/src/sgml/release-9.2.sgml | 176 ++++++++++++ doc/src/sgml/release-9.3.sgml | 192 +++++++++++++ doc/src/sgml/release-9.4.sgml | 236 ++++++++++++++++ doc/src/sgml/release-9.5.sgml | 288 +++++++++++++++++++ doc/src/sgml/release-9.6.sgml | 506 ++++++++++++++++++++++++++++++++++ 6 files changed, 1412 insertions(+), 239 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 543af40706..6c07157d29 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -128,8 +128,8 @@ Branch: REL_10_STABLE [799037099] 2017-10-16 17:56:43 -0400 Ignore CTEs when looking up the target table for INSERT/UPDATE/DELETE, - and prevent matching qualified target-table names to trigger transition - table names (Thomas Munro) + and prevent matching schema-qualified target table names to trigger + transition table names (Thomas Munro) @@ -211,7 +211,7 @@ Branch: REL_10_STABLE [69125c883] 2017-10-29 13:04:37 +0530 Branch: REL9_6_STABLE [f74f871b8] 2017-10-29 13:14:37 +0530 --> - Fix parallel query handling to not fail when a recently-active role is + Fix parallel query handling to not fail when a recently-used role is dropped (Amit Kapila) @@ -253,27 +253,6 @@ Branch: REL9_4_STABLE [9cb28e98b] 2017-10-25 07:52:45 -0400 - - Properly reject attempts to convert infinite float values to - type numeric (Tom Lane, KaiGai Kohei) - - - - Previously the behavior was platform-dependent. - - - - - - - Correctly ignore RelabelType expression nodes - when determining relation distinctness (David Rowley) - - - - - - - Avoid SIGBUS crash on Linux when a DSM memory - request exceeds the space available in tmpfs - (Thomas Munro) - - - - - - - Prevent low-probability crash in processing of nested trigger firings - (Tom Lane) - - - - - - - Correctly restore the umask setting when file creation fails - in COPY or lo_export() - (Peter Eisentraut) - - - - - - - Give a better error message for duplicate column names - in ANALYZE (Nathan Bossart) - - - - - - Add missing cases in GetCommandLogLevel(), - preventing errors when certain SQL commands are used while - log_statement is set to ddl - (Michael Paquier) + Allow COPY's FREEZE option to + work when the transaction isolation level is REPEATABLE + READ or higher (Noah Misch) - - - - Fix mis-parsing of the last line in a - non-newline-terminated pg_hba.conf file - (Tom Lane) + This case was unintentionally broken by a previous bug fix. @@ -588,20 +469,6 @@ Branch: REL9_6_STABLE [aa1e9b3a4] 2017-10-12 15:20:04 -0400 - - Fix pg_dump to ensure that it - emits GRANT commands in a valid order - (Stephen Frost) - - - - - - - Fix libpq to guard against integer - overflow in the row count of a PGresult - (Michael Paquier) - - - - - - - Fix ecpg's handling of out-of-scope cursor - declarations with pointer or array variables (Michael Meskes) - - - - - - - Fix ecpg's regression tests to work reliably - on Windows (Christian Ullrich, Michael Meskes) - - - - - - - Sync our copy of the timezone library with IANA release tzcode2017c - (Tom Lane) - - - - This fixes various issues; the only one likely to be user-visible - is that the default DST rules for a POSIX-style zone name, if - no posixrules file exists in the timezone data - directory, now match current US law rather than what it was a dozen - years ago. - - - - - + + Release 9.2.24 + + + Release date: + 2017-11-09 + + + + This release contains a variety of fixes from 9.2.23. 
+ For information about new features in the 9.2 major release, see + . + + + + This is expected to be the last PostgreSQL + release in the 9.2.X series. Users are encouraged to update to a newer + release branch soon. + + + + Migration to Version 9.2.24 + + + A dump/restore is not required for those running 9.2.X. + + + + However, if you are upgrading from a version earlier than 9.2.22, + see . + + + + + + Changes + + + + + + Properly reject attempts to convert infinite float values to + type numeric (Tom Lane, KaiGai Kohei) + + + + Previously the behavior was platform-dependent. + + + + + + Fix corner-case crashes when columns have been added to the end of a + view (Tom Lane) + + + + + + Record proper dependencies when a view or rule + contains FieldSelect + or FieldStore expression nodes (Tom Lane) + + + + Lack of these dependencies could allow a column or data + type DROP to go through when it ought to fail, + thereby causing later uses of the view or rule to get errors. + This patch does not do anything to protect existing views/rules, + only ones created in the future. + + + + + + Correctly detect hashability of range data types (Tom Lane) + + + + The planner mistakenly assumed that any range type could be hashed + for use in hash joins or hash aggregation, but actually it must check + whether the range's subtype has hash support. This does not affect any + of the built-in range types, since they're all hashable anyway. + + + + + + Fix low-probability loss of NOTIFY messages due to + XID wraparound (Marko Tiikkaja, Tom Lane) + + + + If a session executed no queries, but merely listened for + notifications, for more than 2 billion transactions, it started to miss + some notifications from concurrently-committing transactions. + + + + + + Prevent low-probability crash in processing of nested trigger firings + (Tom Lane) + + + + + + Correctly restore the umask setting when file creation fails + in COPY or lo_export() + (Peter Eisentraut) + + + + + + Give a better error message for duplicate column names + in ANALYZE (Nathan Bossart) + + + + + + Fix libpq to not require user's home + directory to exist (Tom Lane) + + + + In v10, failure to find the home directory while trying to + read ~/.pgpass was treated as a hard error, + but it should just cause that file to not be found. Both v10 and + previous release branches made the same mistake when + reading ~/.pg_service.conf, though this was less + obvious since that file is not sought unless a service name is + specified. + + + + + + Fix libpq to guard against integer + overflow in the row count of a PGresult + (Michael Paquier) + + + + + + Sync our copy of the timezone library with IANA release tzcode2017c + (Tom Lane) + + + + This fixes various issues; the only one likely to be user-visible + is that the default DST rules for a POSIX-style zone name, if + no posixrules file exists in the timezone data + directory, now match current US law rather than what it was a dozen + years ago. + + + + + + Update time zone data files to tzdata + release 2017c for DST law changes in Fiji, Namibia, Northern Cyprus, + Sudan, Tonga, and Turks & Caicos Islands, plus historical + corrections for Alaska, Apia, Burma, Calcutta, Detroit, Ireland, + Namibia, and Pago Pago. 
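[Illustration, not part of the patch series: readers wanting to confirm the effect of a tzdata update such as the one above can consult the pg_timezone_names view, which reports the rules the server is actually applying; Fiji is one of the zones changed in release 2017c.]

    -- Inspect the currently loaded DST rules for an updated zone.
    SELECT name, abbrev, utc_offset, is_dst
    FROM pg_timezone_names
    WHERE name = 'Pacific/Fiji';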
+ + + + + + + + Release 9.2.23 diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml index 84523d36b7..82f705522e 100644 --- a/doc/src/sgml/release-9.3.sgml +++ b/doc/src/sgml/release-9.3.sgml @@ -1,6 +1,198 @@ + + Release 9.3.20 + + + Release date: + 2017-11-09 + + + + This release contains a variety of fixes from 9.3.19. + For information about new features in the 9.3 major release, see + . + + + + Migration to Version 9.3.20 + + + A dump/restore is not required for those running 9.3.X. + + + + However, if you are upgrading from a version earlier than 9.3.18, + see . + + + + + + Changes + + + + + + Properly reject attempts to convert infinite float values to + type numeric (Tom Lane, KaiGai Kohei) + + + + Previously the behavior was platform-dependent. + + + + + + Fix corner-case crashes when columns have been added to the end of a + view (Tom Lane) + + + + + + Record proper dependencies when a view or rule + contains FieldSelect + or FieldStore expression nodes (Tom Lane) + + + + Lack of these dependencies could allow a column or data + type DROP to go through when it ought to fail, + thereby causing later uses of the view or rule to get errors. + This patch does not do anything to protect existing views/rules, + only ones created in the future. + + + + + + Correctly detect hashability of range data types (Tom Lane) + + + + The planner mistakenly assumed that any range type could be hashed + for use in hash joins or hash aggregation, but actually it must check + whether the range's subtype has hash support. This does not affect any + of the built-in range types, since they're all hashable anyway. + + + + + + Fix low-probability loss of NOTIFY messages due to + XID wraparound (Marko Tiikkaja, Tom Lane) + + + + If a session executed no queries, but merely listened for + notifications, for more than 2 billion transactions, it started to miss + some notifications from concurrently-committing transactions. + + + + + + Prevent low-probability crash in processing of nested trigger firings + (Tom Lane) + + + + + + Correctly restore the umask setting when file creation fails + in COPY or lo_export() + (Peter Eisentraut) + + + + + + Give a better error message for duplicate column names + in ANALYZE (Nathan Bossart) + + + + + + Fix mis-parsing of the last line in a + non-newline-terminated pg_hba.conf file + (Tom Lane) + + + + + + Fix libpq to not require user's home + directory to exist (Tom Lane) + + + + In v10, failure to find the home directory while trying to + read ~/.pgpass was treated as a hard error, + but it should just cause that file to not be found. Both v10 and + previous release branches made the same mistake when + reading ~/.pg_service.conf, though this was less + obvious since that file is not sought unless a service name is + specified. 
+ + + + + + Fix libpq to guard against integer + overflow in the row count of a PGresult + (Michael Paquier) + + + + + + Fix ecpg's handling of out-of-scope cursor + declarations with pointer or array variables (Michael Meskes) + + + + + + Make ecpglib's Informix-compatibility mode ignore fractional digits in + integer input strings, as expected (Gao Zengqi, Michael Meskes) + + + + + + Sync our copy of the timezone library with IANA release tzcode2017c + (Tom Lane) + + + + This fixes various issues; the only one likely to be user-visible + is that the default DST rules for a POSIX-style zone name, if + no posixrules file exists in the timezone data + directory, now match current US law rather than what it was a dozen + years ago. + + + + + + Update time zone data files to tzdata + release 2017c for DST law changes in Fiji, Namibia, Northern Cyprus, + Sudan, Tonga, and Turks & Caicos Islands, plus historical + corrections for Alaska, Apia, Burma, Calcutta, Detroit, Ireland, + Namibia, and Pago Pago. + + + + + + + + Release 9.3.19 diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index f6c38bd912..ab47dc50dd 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -1,6 +1,242 @@ + + Release 9.4.15 + + + Release date: + 2017-11-09 + + + + This release contains a variety of fixes from 9.4.14. + For information about new features in the 9.4 major release, see + . + + + + Migration to Version 9.4.15 + + + A dump/restore is not required for those running 9.4.X. + + + + However, if you are upgrading from a version earlier than 9.4.13, + see . + + + + + Changes + + + + + + Fix crash when logical decoding is invoked from a SPI-using function, + in particular any function written in a PL language + (Tom Lane) + + + + + + Fix json_build_array(), + json_build_object(), and their jsonb + equivalents to handle explicit VARIADIC arguments + correctly (Michael Paquier) + + + + + + Properly reject attempts to convert infinite float values to + type numeric (Tom Lane, KaiGai Kohei) + + + + Previously the behavior was platform-dependent. + + + + + + Fix corner-case crashes when columns have been added to the end of a + view (Tom Lane) + + + + + + Record proper dependencies when a view or rule + contains FieldSelect + or FieldStore expression nodes (Tom Lane) + + + + Lack of these dependencies could allow a column or data + type DROP to go through when it ought to fail, + thereby causing later uses of the view or rule to get errors. + This patch does not do anything to protect existing views/rules, + only ones created in the future. + + + + + + Correctly detect hashability of range data types (Tom Lane) + + + + The planner mistakenly assumed that any range type could be hashed + for use in hash joins or hash aggregation, but actually it must check + whether the range's subtype has hash support. This does not affect any + of the built-in range types, since they're all hashable anyway. + + + + + + Fix low-probability loss of NOTIFY messages due to + XID wraparound (Marko Tiikkaja, Tom Lane) + + + + If a session executed no queries, but merely listened for + notifications, for more than 2 billion transactions, it started to miss + some notifications from concurrently-committing transactions. 
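[Illustration, not part of the patch series: the usage pattern affected by the NOTIFY entry above is a session that listens without issuing other queries. A minimal sketch, with a hypothetical channel name:]

    -- Session 1: listen-only, runs no other statements.
    LISTEN my_channel;

    -- Session 2: each committing transaction sends a notification that
    -- session 1 should receive; under the bug, after roughly 2 billion
    -- transactions some of these were silently dropped.
    NOTIFY my_channel, 'payload';
    -- equivalently: SELECT pg_notify('my_channel', 'payload');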
+ + + + + + Avoid SIGBUS crash on Linux when a DSM memory + request exceeds the space available in tmpfs + (Thomas Munro) + + + + + + Prevent low-probability crash in processing of nested trigger firings + (Tom Lane) + + + + + + Allow COPY's FREEZE option to + work when the transaction isolation level is REPEATABLE + READ or higher (Noah Misch) + + + + This case was unintentionally broken by a previous bug fix. + + + + + + Correctly restore the umask setting when file creation fails + in COPY or lo_export() + (Peter Eisentraut) + + + + + + Give a better error message for duplicate column names + in ANALYZE (Nathan Bossart) + + + + + + Fix mis-parsing of the last line in a + non-newline-terminated pg_hba.conf file + (Tom Lane) + + + + + + Fix libpq to not require user's home + directory to exist (Tom Lane) + + + + In v10, failure to find the home directory while trying to + read ~/.pgpass was treated as a hard error, + but it should just cause that file to not be found. Both v10 and + previous release branches made the same mistake when + reading ~/.pg_service.conf, though this was less + obvious since that file is not sought unless a service name is + specified. + + + + + + Fix libpq to guard against integer + overflow in the row count of a PGresult + (Michael Paquier) + + + + + + Fix ecpg's handling of out-of-scope cursor + declarations with pointer or array variables (Michael Meskes) + + + + + + In ecpglib, correctly handle backslashes in string literals depending + on whether standard_conforming_strings is set + (Tsunakawa Takayuki) + + + + + + Make ecpglib's Informix-compatibility mode ignore fractional digits in + integer input strings, as expected (Gao Zengqi, Michael Meskes) + + + + + + Sync our copy of the timezone library with IANA release tzcode2017c + (Tom Lane) + + + + This fixes various issues; the only one likely to be user-visible + is that the default DST rules for a POSIX-style zone name, if + no posixrules file exists in the timezone data + directory, now match current US law rather than what it was a dozen + years ago. + + + + + + Update time zone data files to tzdata + release 2017c for DST law changes in Fiji, Namibia, Northern Cyprus, + Sudan, Tonga, and Turks & Caicos Islands, plus historical + corrections for Alaska, Apia, Burma, Calcutta, Detroit, Ireland, + Namibia, and Pago Pago. + + + + + + + + Release 9.4.14 diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index 11740a4108..3ab5df7a5f 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -1,6 +1,294 @@ + + Release 9.5.10 + + + Release date: + 2017-11-09 + + + + This release contains a variety of fixes from 9.5.9. + For information about new features in the 9.5 major release, see + . + + + + Migration to Version 9.5.10 + + + A dump/restore is not required for those running 9.5.X. + + + + However, if you use BRIN indexes, see the first changelog entry below. + + + + Also, if you are upgrading from a version earlier than 9.5.8, + see . + + + + + Changes + + + + + + Fix BRIN index summarization to handle concurrent table extension + correctly (Álvaro Herrera) + + + + Previously, a race condition allowed some table rows to be omitted from + the index. It may be necessary to reindex existing BRIN indexes to + recover from past occurrences of this problem. + + + + + + Fix possible failures during concurrent updates of a BRIN index + (Tom Lane) + + + + These race conditions could result in errors like invalid index + offnum or inconsistent range map. 
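[Illustration, not part of the patch series: since the two BRIN entries above advise reindexing existing BRIN indexes, a sketch for locating and rebuilding them follows; the index name passed to REINDEX is a placeholder.]

    -- List all BRIN indexes in the current database.
    SELECT c.relname AS index_name
    FROM pg_class c
         JOIN pg_am am ON am.oid = c.relam
    WHERE am.amname = 'brin' AND c.relkind = 'i';

    -- Rebuild one of them; note that REINDEX blocks writes to the
    -- index's table while it runs.
    REINDEX INDEX my_brin_idx;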
+ + + + + + Fix crash when logical decoding is invoked from a SPI-using function, + in particular any function written in a PL language + (Tom Lane) + + + + + + Fix json_build_array(), + json_build_object(), and their jsonb + equivalents to handle explicit VARIADIC arguments + correctly (Michael Paquier) + + + + + + Properly reject attempts to convert infinite float values to + type numeric (Tom Lane, KaiGai Kohei) + + + + Previously the behavior was platform-dependent. + + + + + + Fix corner-case crashes when columns have been added to the end of a + view (Tom Lane) + + + + + + Record proper dependencies when a view or rule + contains FieldSelect + or FieldStore expression nodes (Tom Lane) + + + + Lack of these dependencies could allow a column or data + type DROP to go through when it ought to fail, + thereby causing later uses of the view or rule to get errors. + This patch does not do anything to protect existing views/rules, + only ones created in the future. + + + + + + Correctly detect hashability of range data types (Tom Lane) + + + + The planner mistakenly assumed that any range type could be hashed + for use in hash joins or hash aggregation, but actually it must check + whether the range's subtype has hash support. This does not affect any + of the built-in range types, since they're all hashable anyway. + + + + + + Correctly ignore RelabelType expression nodes + when determining relation distinctness (David Rowley) + + + + This allows the intended optimization to occur when a subquery has + a result column of type varchar. + + + + + + Fix low-probability loss of NOTIFY messages due to + XID wraparound (Marko Tiikkaja, Tom Lane) + + + + If a session executed no queries, but merely listened for + notifications, for more than 2 billion transactions, it started to miss + some notifications from concurrently-committing transactions. + + + + + + Avoid SIGBUS crash on Linux when a DSM memory + request exceeds the space available in tmpfs + (Thomas Munro) + + + + + + Prevent low-probability crash in processing of nested trigger firings + (Tom Lane) + + + + + + Allow COPY's FREEZE option to + work when the transaction isolation level is REPEATABLE + READ or higher (Noah Misch) + + + + This case was unintentionally broken by a previous bug fix. + + + + + + Correctly restore the umask setting when file creation fails + in COPY or lo_export() + (Peter Eisentraut) + + + + + + Give a better error message for duplicate column names + in ANALYZE (Nathan Bossart) + + + + + + Fix mis-parsing of the last line in a + non-newline-terminated pg_hba.conf file + (Tom Lane) + + + + + + Fix pg_basebackup's matching of tablespace + paths to canonicalize both paths before comparing (Michael Paquier) + + + + This is particularly helpful on Windows. + + + + + + Fix libpq to not require user's home + directory to exist (Tom Lane) + + + + In v10, failure to find the home directory while trying to + read ~/.pgpass was treated as a hard error, + but it should just cause that file to not be found. Both v10 and + previous release branches made the same mistake when + reading ~/.pg_service.conf, though this was less + obvious since that file is not sought unless a service name is + specified. 
+ + + + + + Fix libpq to guard against integer + overflow in the row count of a PGresult + (Michael Paquier) + + + + + + Fix ecpg's handling of out-of-scope cursor + declarations with pointer or array variables (Michael Meskes) + + + + + + In ecpglib, correctly handle backslashes in string literals depending + on whether standard_conforming_strings is set + (Tsunakawa Takayuki) + + + + + + Make ecpglib's Informix-compatibility mode ignore fractional digits in + integer input strings, as expected (Gao Zengqi, Michael Meskes) + + + + + + Sync our copy of the timezone library with IANA release tzcode2017c + (Tom Lane) + + + + This fixes various issues; the only one likely to be user-visible + is that the default DST rules for a POSIX-style zone name, if + no posixrules file exists in the timezone data + directory, now match current US law rather than what it was a dozen + years ago. + + + + + + Update time zone data files to tzdata + release 2017c for DST law changes in Fiji, Namibia, Northern Cyprus, + Sudan, Tonga, and Turks & Caicos Islands, plus historical + corrections for Alaska, Apia, Burma, Calcutta, Detroit, Ireland, + Namibia, and Pago Pago. + + + + + + + + Release 9.5.9 diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index 90b4ed3585..5e358ef4b4 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -1,6 +1,512 @@ + + Release 9.6.6 + + + Release date: + 2017-11-09 + + + + This release contains a variety of fixes from 9.6.5. + For information about new features in the 9.6 major release, see + . + + + + Migration to Version 9.6.6 + + + A dump/restore is not required for those running 9.6.X. + + + + However, if you use BRIN indexes, see the first changelog entry below. + + + + Also, if you are upgrading from a version earlier than 9.6.4, + see . + + + + + Changes + + + + + + Fix BRIN index summarization to handle concurrent table extension + correctly (Álvaro Herrera) + + + + Previously, a race condition allowed some table rows to be omitted from + the index. It may be necessary to reindex existing BRIN indexes to + recover from past occurrences of this problem. + + + + + + Fix possible failures during concurrent updates of a BRIN index + (Tom Lane) + + + + These race conditions could result in errors like invalid index + offnum or inconsistent range map. + + + + + + Fix crash when logical decoding is invoked from a SPI-using function, + in particular any function written in a PL language + (Tom Lane) + + + + + + Fix incorrect query results when multiple GROUPING + SETS columns contain the same simple variable (Tom Lane) + + + + + + Fix incorrect parallelization decisions for nested queries + (Amit Kapila, Kuntal Ghosh) + + + + + + Fix parallel query handling to not fail when a recently-used role is + dropped (Amit Kapila) + + + + + + Fix json_build_array(), + json_build_object(), and their jsonb + equivalents to handle explicit VARIADIC arguments + correctly (Michael Paquier) + + + + + + + Properly reject attempts to convert infinite float values to + type numeric (Tom Lane, KaiGai Kohei) + + + + Previously the behavior was platform-dependent. 
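[Illustration, not part of the patch series: the now-consistent behavior of the infinite-float entry just above can be seen directly; after the fix this cast is expected to fail on every platform rather than yield a platform-dependent result.]

    -- Now raises an error along the lines of
    -- "cannot convert infinity to numeric" on all platforms.
    SELECT 'Infinity'::float8::numeric;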
+ + + + + + Fix corner-case crashes when columns have been added to the end of a + view (Tom Lane) + + + + + + Record proper dependencies when a view or rule + contains FieldSelect + or FieldStore expression nodes (Tom Lane) + + + + Lack of these dependencies could allow a column or data + type DROP to go through when it ought to fail, + thereby causing later uses of the view or rule to get errors. + This patch does not do anything to protect existing views/rules, + only ones created in the future. + + + + + + Correctly detect hashability of range data types (Tom Lane) + + + + The planner mistakenly assumed that any range type could be hashed + for use in hash joins or hash aggregation, but actually it must check + whether the range's subtype has hash support. This does not affect any + of the built-in range types, since they're all hashable anyway. + + + + + + + Correctly ignore RelabelType expression nodes + when determining relation distinctness (David Rowley) + + + + This allows the intended optimization to occur when a subquery has + a result column of type varchar. + + + + + + Prevent sharing transition states between ordered-set aggregates + (David Rowley) + + + + This causes a crash with the built-in ordered-set aggregates, and + probably with user-written ones as well. v11 and later will include + provisions for dealing with such cases safely, but in released + branches, just disable the optimization. + + + + + + Prevent idle_in_transaction_session_timeout from + being ignored when a statement_timeout occurred + earlier (Lukas Fittl) + + + + + + Fix low-probability loss of NOTIFY messages due to + XID wraparound (Marko Tiikkaja, Tom Lane) + + + + If a session executed no queries, but merely listened for + notifications, for more than 2 billion transactions, it started to miss + some notifications from concurrently-committing transactions. + + + + + + + Avoid SIGBUS crash on Linux when a DSM memory + request exceeds the space available in tmpfs + (Thomas Munro) + + + + + + Reduce the frequency of data flush requests during bulk file copies to + avoid performance problems on macOS, particularly with its new APFS + file system (Tom Lane) + + + + + + + Prevent low-probability crash in processing of nested trigger firings + (Tom Lane) + + + + + + Allow COPY's FREEZE option to + work when the transaction isolation level is REPEATABLE + READ or higher (Noah Misch) + + + + This case was unintentionally broken by a previous bug fix. + + + + + + + Correctly restore the umask setting when file creation fails + in COPY or lo_export() + (Peter Eisentraut) + + + + + + + Give a better error message for duplicate column names + in ANALYZE (Nathan Bossart) + + + + + + + Add missing cases in GetCommandLogLevel(), + preventing errors when certain SQL commands are used while + log_statement is set to ddl + (Michael Paquier) + + + + + + + Fix mis-parsing of the last line in a + non-newline-terminated pg_hba.conf file + (Tom Lane) + + + + + + Fix AggGetAggref() to return the + correct Aggref nodes to aggregate final + functions whose transition calculations have been merged (Tom Lane) + + + + + + + Fix pg_dump to ensure that it + emits GRANT commands in a valid order + (Stephen Frost) + + + + + + Fix pg_basebackup's matching of tablespace + paths to canonicalize both paths before comparing (Michael Paquier) + + + + This is particularly helpful on Windows. 
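[Illustration, not part of the patch series: the server-side tablespace paths that pg_basebackup must match against its -T/--tablespace-mapping arguments, and which the fix above now canonicalizes before comparing, can be listed from SQL.]

    -- Show each tablespace and its server-side location; the built-in
    -- pg_default and pg_global tablespaces report an empty location.
    SELECT spcname, pg_tablespace_location(oid) AS location
    FROM pg_tablespace;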
+ + + + + + Fix libpq to not require user's home + directory to exist (Tom Lane) + + + + In v10, failure to find the home directory while trying to + read ~/.pgpass was treated as a hard error, + but it should just cause that file to not be found. Both v10 and + previous release branches made the same mistake when + reading ~/.pg_service.conf, though this was less + obvious since that file is not sought unless a service name is + specified. + + + + + + + Fix libpq to guard against integer + overflow in the row count of a PGresult + (Michael Paquier) + + + + + + + Fix ecpg's handling of out-of-scope cursor + declarations with pointer or array variables (Michael Meskes) + + + + + + In ecpglib, correctly handle backslashes in string literals depending + on whether standard_conforming_strings is set + (Tsunakawa Takayuki) + + + + + + Make ecpglib's Informix-compatibility mode ignore fractional digits in + integer input strings, as expected (Gao Zengqi, Michael Meskes) + + + + + + + Fix ecpg's regression tests to work reliably + on Windows (Christian Ullrich, Michael Meskes) + + + + + + + Sync our copy of the timezone library with IANA release tzcode2017c + (Tom Lane) + + + + This fixes various issues; the only one likely to be user-visible + is that the default DST rules for a POSIX-style zone name, if + no posixrules file exists in the timezone data + directory, now match current US law rather than what it was a dozen + years ago. + + + + + + Update time zone data files to tzdata + release 2017c for DST law changes in Fiji, Namibia, Northern Cyprus, + Sudan, Tonga, and Turks & Caicos Islands, plus historical + corrections for Alaska, Apia, Burma, Calcutta, Detroit, Ireland, + Namibia, and Pago Pago. + + + + + + + + Release 9.6.5 From c66b438db62748000700c9b90b585e756dd54141 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sun, 5 Nov 2017 18:51:08 -0800 Subject: [PATCH 0495/1087] Add a temp-install prerequisite to "check"-like targets not having one. Makefile.global assigns this prerequisite to every target named "check", but similar targets must mention it explicitly. Affected targets failed, tested $PATH binaries, or tested a stale temporary installation. The src/test/modules examples worked properly when called as "make -C src/test/modules/$FOO check", but "make -j" allowed the test to start before the temporary installation was in place. Back-patch to 9.5, where commit dcae5faccab64776376d354decda0017c648bb53 introduced the shared temp-install. 
--- src/interfaces/ecpg/test/Makefile | 4 ++-- src/test/locale/Makefile | 1 + src/test/modules/brin/Makefile | 4 ++-- src/test/modules/commit_ts/Makefile | 2 +- src/test/modules/test_pg_dump/Makefile | 2 +- src/test/regress/GNUmakefile | 4 ++-- 6 files changed, 9 insertions(+), 8 deletions(-) diff --git a/src/interfaces/ecpg/test/Makefile b/src/interfaces/ecpg/test/Makefile index 73ac9e2ac0..6097fea900 100644 --- a/src/interfaces/ecpg/test/Makefile +++ b/src/interfaces/ecpg/test/Makefile @@ -81,7 +81,7 @@ check: all $(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule sql/twophase # the same options, but with --listen-on-tcp -checktcp: all +checktcp: all | temp-install $(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule_tcp --host=localhost installcheck: all @@ -95,5 +95,5 @@ installcheck: all installcheck-prepared-txns: all ./pg_regress $(REGRESS_OPTS) --bindir='$(bindir)' $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule sql/twophase -check-prepared-txns: all +check-prepared-txns: all | temp-install $(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule sql/twophase diff --git a/src/test/locale/Makefile b/src/test/locale/Makefile index 26ec5c9a90..22a45b65f2 100644 --- a/src/test/locale/Makefile +++ b/src/test/locale/Makefile @@ -16,5 +16,6 @@ clean distclean maintainer-clean: $(MAKE) -C $$d clean || exit; \ done +# These behave like installcheck targets. check-%: all @$(MAKE) -C `echo $@ | sed 's/^check-//'` test diff --git a/src/test/modules/brin/Makefile b/src/test/modules/brin/Makefile index 912dca8009..566655cd61 100644 --- a/src/test/modules/brin/Makefile +++ b/src/test/modules/brin/Makefile @@ -21,13 +21,13 @@ endif check: isolation-check prove-check -isolation-check: | submake-isolation +isolation-check: | submake-isolation temp-install $(MKDIR_P) isolation_output $(pg_isolation_regress_check) \ --outputdir=./isolation_output \ $(ISOLATIONCHECKS) -prove-check: +prove-check: | temp-install $(prove_check) .PHONY: check isolation-check prove-check diff --git a/src/test/modules/commit_ts/Makefile b/src/test/modules/commit_ts/Makefile index 86b93b5e76..6d4f3be358 100644 --- a/src/test/modules/commit_ts/Makefile +++ b/src/test/modules/commit_ts/Makefile @@ -16,5 +16,5 @@ endif check: prove-check -prove-check: +prove-check: | temp-install $(prove_check) diff --git a/src/test/modules/test_pg_dump/Makefile b/src/test/modules/test_pg_dump/Makefile index 5050572777..c64b353707 100644 --- a/src/test/modules/test_pg_dump/Makefile +++ b/src/test/modules/test_pg_dump/Makefile @@ -21,5 +21,5 @@ endif check: prove-check -prove-check: +prove-check: | temp-install $(prove_check) diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile index 56cd202078..8b2d20c5b5 100644 --- a/src/test/regress/GNUmakefile +++ b/src/test/regress/GNUmakefile @@ -129,7 +129,7 @@ REGRESS_OPTS = --dlpath=. 
--max-concurrent-tests=20 $(EXTRA_REGRESS_OPTS) check: all tablespace-setup $(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) $(EXTRA_TESTS) -check-tests: all tablespace-setup +check-tests: all tablespace-setup | temp-install $(pg_regress_check) $(REGRESS_OPTS) $(MAXCONNOPT) $(TESTS) $(EXTRA_TESTS) installcheck: all tablespace-setup @@ -153,7 +153,7 @@ runtest-parallel: installcheck-parallel bigtest: all tablespace-setup $(pg_regress_installcheck) $(REGRESS_OPTS) --schedule=$(srcdir)/serial_schedule numeric_big -bigcheck: all tablespace-setup +bigcheck: all tablespace-setup | temp-install $(pg_regress_check) $(REGRESS_OPTS) --schedule=$(srcdir)/parallel_schedule $(MAXCONNOPT) numeric_big From 87b2ebd352c4afe1ded0841604b59a3afbae97d1 Mon Sep 17 00:00:00 2001 From: Dean Rasheed Date: Mon, 6 Nov 2017 09:19:22 +0000 Subject: [PATCH 0496/1087] Always require SELECT permission for ON CONFLICT DO UPDATE. The update path of an INSERT ... ON CONFLICT DO UPDATE requires SELECT permission on the columns of the arbiter index, but it failed to check for that in the case of an arbiter specified by constraint name. In addition, for a table with row level security enabled, it failed to check updated rows against the table's SELECT policies when the update path was taken (regardless of how the arbiter index was specified). Backpatch to 9.5 where ON CONFLICT DO UPDATE and RLS were introduced. Security: CVE-2017-15099 --- src/backend/catalog/pg_constraint.c | 98 +++++++++++++++++++++++ src/backend/parser/parse_clause.c | 21 ++++- src/backend/rewrite/rowsecurity.c | 20 ++++- src/include/catalog/pg_constraint_fn.h | 2 + src/test/regress/expected/privileges.out | 16 +++- src/test/regress/expected/rowsecurity.out | 15 +++- src/test/regress/sql/privileges.sql | 19 ++++- src/test/regress/sql/rowsecurity.sql | 14 +++- 8 files changed, 194 insertions(+), 11 deletions(-) diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c index 1336c46d3f..104e930a6f 100644 --- a/src/backend/catalog/pg_constraint.c +++ b/src/backend/catalog/pg_constraint.c @@ -805,6 +805,104 @@ get_relation_constraint_oid(Oid relid, const char *conname, bool missing_ok) return conOid; } +/* + * get_relation_constraint_attnos + * Find a constraint on the specified relation with the specified name + * and return the constrained columns. + * + * Returns a Bitmapset of the column attnos of the constrained columns, with + * attnos being offset by FirstLowInvalidHeapAttributeNumber so that system + * columns can be represented. + * + * *constraintOid is set to the OID of the constraint, or InvalidOid on + * failure. + */ +Bitmapset * +get_relation_constraint_attnos(Oid relid, const char *conname, + bool missing_ok, Oid *constraintOid) +{ + Bitmapset *conattnos = NULL; + Relation pg_constraint; + HeapTuple tuple; + SysScanDesc scan; + ScanKeyData skey[1]; + + /* Set *constraintOid, to avoid complaints about uninitialized vars */ + *constraintOid = InvalidOid; + + /* + * Fetch the constraint tuple from pg_constraint. There may be more than + * one match, because constraints are not required to have unique names; + * if so, error out. 
+ */ + pg_constraint = heap_open(ConstraintRelationId, AccessShareLock); + + ScanKeyInit(&skey[0], + Anum_pg_constraint_conrelid, + BTEqualStrategyNumber, F_OIDEQ, + ObjectIdGetDatum(relid)); + + scan = systable_beginscan(pg_constraint, ConstraintRelidIndexId, true, + NULL, 1, skey); + + while (HeapTupleIsValid(tuple = systable_getnext(scan))) + { + Form_pg_constraint con = (Form_pg_constraint) GETSTRUCT(tuple); + Datum adatum; + bool isNull; + ArrayType *arr; + int16 *attnums; + int numcols; + int i; + + /* Check the constraint name */ + if (strcmp(NameStr(con->conname), conname) != 0) + continue; + if (OidIsValid(*constraintOid)) + ereport(ERROR, + (errcode(ERRCODE_DUPLICATE_OBJECT), + errmsg("table \"%s\" has multiple constraints named \"%s\"", + get_rel_name(relid), conname))); + + *constraintOid = HeapTupleGetOid(tuple); + + /* Extract the conkey array, ie, attnums of constrained columns */ + adatum = heap_getattr(tuple, Anum_pg_constraint_conkey, + RelationGetDescr(pg_constraint), &isNull); + if (isNull) + continue; /* no constrained columns */ + + arr = DatumGetArrayTypeP(adatum); /* ensure not toasted */ + numcols = ARR_DIMS(arr)[0]; + if (ARR_NDIM(arr) != 1 || + numcols < 0 || + ARR_HASNULL(arr) || + ARR_ELEMTYPE(arr) != INT2OID) + elog(ERROR, "conkey is not a 1-D smallint array"); + attnums = (int16 *) ARR_DATA_PTR(arr); + + /* Construct the result value */ + for (i = 0; i < numcols; i++) + { + conattnos = bms_add_member(conattnos, + attnums[i] - FirstLowInvalidHeapAttributeNumber); + } + } + + systable_endscan(scan); + + /* If no such constraint exists, complain */ + if (!OidIsValid(*constraintOid) && !missing_ok) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_OBJECT), + errmsg("constraint \"%s\" for table \"%s\" does not exist", + conname, get_rel_name(relid)))); + + heap_close(pg_constraint, AccessShareLock); + + return conattnos; +} + /* * get_domain_constraint_oid * Find a constraint on the specified domain with the specified name. diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c index af99e65aa7..ed26517c26 100644 --- a/src/backend/parser/parse_clause.c +++ b/src/backend/parser/parse_clause.c @@ -3164,9 +3164,26 @@ transformOnConflictArbiter(ParseState *pstate, pstate->p_namespace = save_namespace; + /* + * If the arbiter is specified by constraint name, get the constraint + * OID and mark the constrained columns as requiring SELECT privilege, + * in the same way as would have happened if the arbiter had been + * specified by explicit reference to the constraint's index columns. 
+ */ if (infer->conname) - *constraint = get_relation_constraint_oid(RelationGetRelid(pstate->p_target_relation), - infer->conname, false); + { + Oid relid = RelationGetRelid(pstate->p_target_relation); + RangeTblEntry *rte = pstate->p_target_rangetblentry; + Bitmapset *conattnos; + + conattnos = get_relation_constraint_attnos(relid, infer->conname, + false, constraint); + + /* Make sure the rel as a whole is marked for SELECT access */ + rte->requiredPerms |= ACL_SELECT; + /* Mark the constrained columns as requiring SELECT access */ + rte->selectedCols = bms_add_members(rte->selectedCols, conattnos); + } } /* diff --git a/src/backend/rewrite/rowsecurity.c b/src/backend/rewrite/rowsecurity.c index 9e7e54db67..a0cd6b1075 100644 --- a/src/backend/rewrite/rowsecurity.c +++ b/src/backend/rewrite/rowsecurity.c @@ -310,6 +310,8 @@ get_row_security_policies(Query *root, RangeTblEntry *rte, int rt_index, { List *conflict_permissive_policies; List *conflict_restrictive_policies; + List *conflict_select_permissive_policies = NIL; + List *conflict_select_restrictive_policies = NIL; /* Get the policies that apply to the auxiliary UPDATE */ get_policies_for_relation(rel, CMD_UPDATE, user_id, @@ -339,9 +341,6 @@ get_row_security_policies(Query *root, RangeTblEntry *rte, int rt_index, */ if (rte->requiredPerms & ACL_SELECT) { - List *conflict_select_permissive_policies = NIL; - List *conflict_select_restrictive_policies = NIL; - get_policies_for_relation(rel, CMD_SELECT, user_id, &conflict_select_permissive_policies, &conflict_select_restrictive_policies); @@ -362,6 +361,21 @@ get_row_security_policies(Query *root, RangeTblEntry *rte, int rt_index, withCheckOptions, hasSubLinks, false); + + /* + * Add ALL/SELECT policies as WCO_RLS_UPDATE_CHECK WCOs, to ensure + * that the final updated row is visible when taking the UPDATE + * path of an INSERT .. ON CONFLICT DO UPDATE, if SELECT rights + * are required for this relation. + */ + if (rte->requiredPerms & ACL_SELECT) + add_with_check_options(rel, rt_index, + WCO_RLS_UPDATE_CHECK, + conflict_select_permissive_policies, + conflict_select_restrictive_policies, + withCheckOptions, + hasSubLinks, + true); } } diff --git a/src/include/catalog/pg_constraint_fn.h b/src/include/catalog/pg_constraint_fn.h index a4c46897ed..37b0b4ba82 100644 --- a/src/include/catalog/pg_constraint_fn.h +++ b/src/include/catalog/pg_constraint_fn.h @@ -69,6 +69,8 @@ extern char *ChooseConstraintName(const char *name1, const char *name2, extern void AlterConstraintNamespaces(Oid ownerId, Oid oldNspId, Oid newNspId, bool isType, ObjectAddresses *objsMoved); extern Oid get_relation_constraint_oid(Oid relid, const char *conname, bool missing_ok); +extern Bitmapset *get_relation_constraint_attnos(Oid relid, const char *conname, + bool missing_ok, Oid *constraintOid); extern Oid get_domain_constraint_oid(Oid typid, const char *conname, bool missing_ok); extern Bitmapset *get_primary_key_attnos(Oid relid, bool deferrableOk, diff --git a/src/test/regress/expected/privileges.out b/src/test/regress/expected/privileges.out index f37df6c709..65d950f15b 100644 --- a/src/test/regress/expected/privileges.out +++ b/src/test/regress/expected/privileges.out @@ -488,10 +488,22 @@ ERROR: permission denied for relation atest5 INSERT INTO atest5(three) VALUES (4) ON CONFLICT (two) DO UPDATE set three = 10; -- fails (due to INSERT) ERROR: permission denied for relation atest5 -- Check that the columns in the inference require select privileges --- Error. 
No privs on four -INSERT INTO atest5(three) VALUES (4) ON CONFLICT (four) DO UPDATE set three = 10; +INSERT INTO atest5(four) VALUES (4); -- fail ERROR: permission denied for relation atest5 SET SESSION AUTHORIZATION regress_user1; +GRANT INSERT (four) ON atest5 TO regress_user4; +SET SESSION AUTHORIZATION regress_user4; +INSERT INTO atest5(four) VALUES (4) ON CONFLICT (four) DO UPDATE set three = 3; -- fails (due to SELECT) +ERROR: permission denied for relation atest5 +INSERT INTO atest5(four) VALUES (4) ON CONFLICT ON CONSTRAINT atest5_four_key DO UPDATE set three = 3; -- fails (due to SELECT) +ERROR: permission denied for relation atest5 +INSERT INTO atest5(four) VALUES (4); -- ok +SET SESSION AUTHORIZATION regress_user1; +GRANT SELECT (four) ON atest5 TO regress_user4; +SET SESSION AUTHORIZATION regress_user4; +INSERT INTO atest5(four) VALUES (4) ON CONFLICT (four) DO UPDATE set three = 3; -- ok +INSERT INTO atest5(four) VALUES (4) ON CONFLICT ON CONSTRAINT atest5_four_key DO UPDATE set three = 3; -- ok +SET SESSION AUTHORIZATION regress_user1; REVOKE ALL (one) ON atest5 FROM regress_user4; GRANT SELECT (one,two,blue) ON atest6 TO regress_user4; SET SESSION AUTHORIZATION regress_user4; diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out index de2ee4d2c9..b8dcf51a30 100644 --- a/src/test/regress/expected/rowsecurity.out +++ b/src/test/regress/expected/rowsecurity.out @@ -3807,9 +3807,10 @@ DROP TABLE r1; -- SET SESSION AUTHORIZATION regress_rls_alice; SET row_security = on; -CREATE TABLE r1 (a int); +CREATE TABLE r1 (a int PRIMARY KEY); CREATE POLICY p1 ON r1 FOR SELECT USING (a < 20); CREATE POLICY p2 ON r1 FOR UPDATE USING (a < 20) WITH CHECK (true); +CREATE POLICY p3 ON r1 FOR INSERT WITH CHECK (true); INSERT INTO r1 VALUES (10); ALTER TABLE r1 ENABLE ROW LEVEL SECURITY; ALTER TABLE r1 FORCE ROW LEVEL SECURITY; @@ -3836,6 +3837,18 @@ ALTER TABLE r1 FORCE ROW LEVEL SECURITY; -- Error UPDATE r1 SET a = 30 RETURNING *; ERROR: new row violates row-level security policy for table "r1" +-- UPDATE path of INSERT ... ON CONFLICT DO UPDATE should also error out +INSERT INTO r1 VALUES (10) + ON CONFLICT (a) DO UPDATE SET a = 30 RETURNING *; +ERROR: new row violates row-level security policy for table "r1" +-- Should still error out without RETURNING (use of arbiter always requires +-- SELECT permissions) +INSERT INTO r1 VALUES (10) + ON CONFLICT (a) DO UPDATE SET a = 30; +ERROR: new row violates row-level security policy for table "r1" +INSERT INTO r1 VALUES (10) + ON CONFLICT ON CONSTRAINT r1_pkey DO UPDATE SET a = 30; +ERROR: new row violates row-level security policy for table "r1" DROP TABLE r1; -- Check dependency handling RESET SESSION AUTHORIZATION; diff --git a/src/test/regress/sql/privileges.sql b/src/test/regress/sql/privileges.sql index e2c13e08a4..902f64c747 100644 --- a/src/test/regress/sql/privileges.sql +++ b/src/test/regress/sql/privileges.sql @@ -320,9 +320,24 @@ INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set three = EXCLU INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set three = EXCLUDED.three; INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set one = 8; -- fails (due to UPDATE) INSERT INTO atest5(three) VALUES (4) ON CONFLICT (two) DO UPDATE set three = 10; -- fails (due to INSERT) + -- Check that the columns in the inference require select privileges --- Error. 
No privs on four -INSERT INTO atest5(three) VALUES (4) ON CONFLICT (four) DO UPDATE set three = 10; +INSERT INTO atest5(four) VALUES (4); -- fail + +SET SESSION AUTHORIZATION regress_user1; +GRANT INSERT (four) ON atest5 TO regress_user4; +SET SESSION AUTHORIZATION regress_user4; + +INSERT INTO atest5(four) VALUES (4) ON CONFLICT (four) DO UPDATE set three = 3; -- fails (due to SELECT) +INSERT INTO atest5(four) VALUES (4) ON CONFLICT ON CONSTRAINT atest5_four_key DO UPDATE set three = 3; -- fails (due to SELECT) +INSERT INTO atest5(four) VALUES (4); -- ok + +SET SESSION AUTHORIZATION regress_user1; +GRANT SELECT (four) ON atest5 TO regress_user4; +SET SESSION AUTHORIZATION regress_user4; + +INSERT INTO atest5(four) VALUES (4) ON CONFLICT (four) DO UPDATE set three = 3; -- ok +INSERT INTO atest5(four) VALUES (4) ON CONFLICT ON CONSTRAINT atest5_four_key DO UPDATE set three = 3; -- ok SET SESSION AUTHORIZATION regress_user1; REVOKE ALL (one) ON atest5 FROM regress_user4; diff --git a/src/test/regress/sql/rowsecurity.sql b/src/test/regress/sql/rowsecurity.sql index e03a7ab65f..f3a31dbee0 100644 --- a/src/test/regress/sql/rowsecurity.sql +++ b/src/test/regress/sql/rowsecurity.sql @@ -1674,10 +1674,11 @@ DROP TABLE r1; -- SET SESSION AUTHORIZATION regress_rls_alice; SET row_security = on; -CREATE TABLE r1 (a int); +CREATE TABLE r1 (a int PRIMARY KEY); CREATE POLICY p1 ON r1 FOR SELECT USING (a < 20); CREATE POLICY p2 ON r1 FOR UPDATE USING (a < 20) WITH CHECK (true); +CREATE POLICY p3 ON r1 FOR INSERT WITH CHECK (true); INSERT INTO r1 VALUES (10); ALTER TABLE r1 ENABLE ROW LEVEL SECURITY; ALTER TABLE r1 FORCE ROW LEVEL SECURITY; @@ -1699,6 +1700,17 @@ ALTER TABLE r1 FORCE ROW LEVEL SECURITY; -- Error UPDATE r1 SET a = 30 RETURNING *; +-- UPDATE path of INSERT ... ON CONFLICT DO UPDATE should also error out +INSERT INTO r1 VALUES (10) + ON CONFLICT (a) DO UPDATE SET a = 30 RETURNING *; + +-- Should still error out without RETURNING (use of arbiter always requires +-- SELECT permissions) +INSERT INTO r1 VALUES (10) + ON CONFLICT (a) DO UPDATE SET a = 30; +INSERT INTO r1 VALUES (10) + ON CONFLICT ON CONSTRAINT r1_pkey DO UPDATE SET a = 30; + DROP TABLE r1; -- Check dependency handling From dfc015dcf46c1996bd7ed5866e9e045d258604b3 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Mon, 6 Nov 2017 07:11:10 -0800 Subject: [PATCH 0497/1087] start-scripts: switch to $PGUSER before opening $PGLOG. By default, $PGUSER has permission to unlink $PGLOG. If $PGUSER replaces $PGLOG with a symbolic link, the server will corrupt the link-targeted file by appending log messages. Since these scripts open $PGLOG as root, the attack works regardless of target file ownership. "make install" does not install these scripts anywhere. Users having manually installed them in the past should repeat that process to acquire this fix. Most script users have $PGLOG writable to root only, located in $PGDATA. Just before updating one of these scripts, such users should rename $PGLOG to $PGLOG.old. The script will then recreate $PGLOG with proper ownership. Reviewed by Peter Eisentraut. Reported by Antoine Scemama. 
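As a sketch of the behavioral difference, using the Linux script's start
command (the freebsd and osx variants below follow the same pattern, and
the variable values are whatever each script sets earlier):

    # Before: the >> redirection is performed by the invoking root shell,
    # so $PGLOG is opened with root privileges; a symlink planted there by
    # $PGUSER is followed, corrupting the link target.
    su - $PGUSER -c "$DAEMON_ENV $DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1

    # After: the redirection is part of the command string passed to su,
    # so $PGLOG is opened by the shell already running as $PGUSER, which
    # can do no more damage than $PGUSER could do directly.
    su - $PGUSER -c "$DAEMON_ENV $DAEMON -D '$PGDATA' >>$PGLOG 2>&1 &"
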
Security: CVE-2017-12172 --- contrib/start-scripts/freebsd | 4 ++-- contrib/start-scripts/linux | 4 ++-- contrib/start-scripts/osx/PostgreSQL | 8 ++++---- 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/contrib/start-scripts/freebsd b/contrib/start-scripts/freebsd index c6ac8cd47a..3323237a54 100644 --- a/contrib/start-scripts/freebsd +++ b/contrib/start-scripts/freebsd @@ -43,7 +43,7 @@ test -x $DAEMON || case $1 in start) - su -l $PGUSER -c "$DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1 + su -l $PGUSER -c "$DAEMON -D '$PGDATA' >>$PGLOG 2>&1 &" echo -n ' postgresql' ;; stop) @@ -51,7 +51,7 @@ case $1 in ;; restart) su -l $PGUSER -c "$PGCTL stop -D '$PGDATA' -s" - su -l $PGUSER -c "$DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1 + su -l $PGUSER -c "$DAEMON -D '$PGDATA' >>$PGLOG 2>&1 &" ;; status) su -l $PGUSER -c "$PGCTL status -D '$PGDATA'" diff --git a/contrib/start-scripts/linux b/contrib/start-scripts/linux index 44a775b030..a7757162fc 100644 --- a/contrib/start-scripts/linux +++ b/contrib/start-scripts/linux @@ -91,7 +91,7 @@ case $1 in start) echo -n "Starting PostgreSQL: " test -e "$PG_OOM_ADJUST_FILE" && echo "$PG_MASTER_OOM_SCORE_ADJ" > "$PG_OOM_ADJUST_FILE" - su - $PGUSER -c "$DAEMON_ENV $DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1 + su - $PGUSER -c "$DAEMON_ENV $DAEMON -D '$PGDATA' >>$PGLOG 2>&1 &" echo "ok" ;; stop) @@ -103,7 +103,7 @@ case $1 in echo -n "Restarting PostgreSQL: " su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s" test -e "$PG_OOM_ADJUST_FILE" && echo "$PG_MASTER_OOM_SCORE_ADJ" > "$PG_OOM_ADJUST_FILE" - su - $PGUSER -c "$DAEMON_ENV $DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1 + su - $PGUSER -c "$DAEMON_ENV $DAEMON -D '$PGDATA' >>$PGLOG 2>&1 &" echo "ok" ;; reload) diff --git a/contrib/start-scripts/osx/PostgreSQL b/contrib/start-scripts/osx/PostgreSQL index 7ff1d0e377..7ac12bb9e3 100755 --- a/contrib/start-scripts/osx/PostgreSQL +++ b/contrib/start-scripts/osx/PostgreSQL @@ -80,9 +80,9 @@ StartService () { if [ "${POSTGRESQL:=-NO-}" = "-YES-" ]; then ConsoleMessage "Starting PostgreSQL database server" if [ "${ROTATELOGS}" = "1" ]; then - sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' &" 2>&1 | ${LOGUTIL} "${PGLOG}" ${ROTATESEC} & + sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' 2>&1 | ${LOGUTIL} \"${PGLOG}\" ${ROTATESEC} &" else - sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' &" >>"$PGLOG" 2>&1 + sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' >>\"$PGLOG\" 2>&1 &" fi fi } @@ -99,9 +99,9 @@ RestartService () { sudo -u $PGUSER sh -c "$PGCTL stop -D '${PGDATA}' -s" # should match StartService: if [ "${ROTATELOGS}" = "1" ]; then - sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' &" 2>&1 | ${LOGUTIL} "${PGLOG}" ${ROTATESEC} & + sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' 2>&1 | ${LOGUTIL} \"${PGLOG}\" ${ROTATESEC} &" else - sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' &" >>"$PGLOG" 2>&1 + sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' >>\"$PGLOG\" 2>&1 &" fi else StopService From b574228715f0fd77ed1f4f084603cff9e757e992 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 6 Nov 2017 10:29:11 -0500 Subject: [PATCH 0498/1087] Add tests for json{b}_populate_recordset() crash case. The problem reported as CVE-2017-15098 was already resolved in HEAD by commit 37a795a60, but let's add the relevant test cases anyway. Michael Paquier and Tom Lane, per a report from David Rowley. 
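For context, a minimal reproduction of the failure mode (these statements
are among the ones the diffs below add to the regression tests): the
rowtype of the record argument disagrees with the column definition list,
which previously tended to crash the server, with disclosure of server
memory contents also possible, rather than failing cleanly:

    -- row value has one integer column, but the AS clause declares two
    select * from json_populate_recordset(row(0::int),
           '[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text);
    -- with the fix this now fails cleanly:
    -- ERROR:  function return row and query-specified return row do not match
    -- DETAIL: Returned row contains 1 attribute, but query expects 2.
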
Security: CVE-2017-15098 --- src/test/regress/expected/json.out | 13 +++++++++++++ src/test/regress/expected/jsonb.out | 13 +++++++++++++ src/test/regress/sql/json.sql | 6 ++++++ src/test/regress/sql/jsonb.sql | 6 ++++++ 4 files changed, 38 insertions(+) diff --git a/src/test/regress/expected/json.out b/src/test/regress/expected/json.out index 6081604437..06c728e363 100644 --- a/src/test/regress/expected/json.out +++ b/src/test/regress/expected/json.out @@ -1857,6 +1857,19 @@ SELECT json_populate_recordset(row(1,2)::j_ordered_pair, '[{"x": 0}, {"y": 3}]') SELECT json_populate_recordset(row(1,2)::j_ordered_pair, '[{"x": 1, "y": 0}]'); ERROR: value for domain j_ordered_pair violates check constraint "j_ordered_pair_check" +-- negative cases where the wrong record type is supplied +select * from json_populate_recordset(row(0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +ERROR: function return row and query-specified return row do not match +DETAIL: Returned row contains 1 attribute, but query expects 2. +select * from json_populate_recordset(row(0::int,0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +ERROR: function return row and query-specified return row do not match +DETAIL: Returned type integer at ordinal position 1, but query expects text. +select * from json_populate_recordset(row(0::int,0::int,0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +ERROR: function return row and query-specified return row do not match +DETAIL: Returned row contains 3 attributes, but query expects 2. +select * from json_populate_recordset(row(1000000000::int,50::int),'[{"b":"2"},{"a":"3"}]') q (a text, b text); +ERROR: function return row and query-specified return row do not match +DETAIL: Returned type integer at ordinal position 1, but query expects text. -- test type info caching in json_populate_record() CREATE TEMP TABLE jspoptest (js json); INSERT INTO jspoptest diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out index cf16a15c0f..465195a317 100644 --- a/src/test/regress/expected/jsonb.out +++ b/src/test/regress/expected/jsonb.out @@ -2539,6 +2539,19 @@ SELECT jsonb_populate_recordset(row(1,2)::jb_ordered_pair, '[{"x": 0}, {"y": 3}] SELECT jsonb_populate_recordset(row(1,2)::jb_ordered_pair, '[{"x": 1, "y": 0}]'); ERROR: value for domain jb_ordered_pair violates check constraint "jb_ordered_pair_check" +-- negative cases where the wrong record type is supplied +select * from jsonb_populate_recordset(row(0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +ERROR: function return row and query-specified return row do not match +DETAIL: Returned row contains 1 attribute, but query expects 2. +select * from jsonb_populate_recordset(row(0::int,0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +ERROR: function return row and query-specified return row do not match +DETAIL: Returned type integer at ordinal position 1, but query expects text. +select * from jsonb_populate_recordset(row(0::int,0::int,0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +ERROR: function return row and query-specified return row do not match +DETAIL: Returned row contains 3 attributes, but query expects 2. +select * from jsonb_populate_recordset(row(1000000000::int,50::int),'[{"b":"2"},{"a":"3"}]') q (a text, b text); +ERROR: function return row and query-specified return row do not match +DETAIL: Returned type integer at ordinal position 1, but query expects text. 
-- jsonb_to_record and jsonb_to_recordset select * from jsonb_to_record('{"a":1,"b":"foo","c":"bar"}') as x(a int, b text, d text); diff --git a/src/test/regress/sql/json.sql b/src/test/regress/sql/json.sql index a4ce9d2ef3..256652c41f 100644 --- a/src/test/regress/sql/json.sql +++ b/src/test/regress/sql/json.sql @@ -553,6 +553,12 @@ SELECT json_populate_recordset(null::j_ordered_pair, '[{"x": 0, "y": 1}]'); SELECT json_populate_recordset(row(1,2)::j_ordered_pair, '[{"x": 0}, {"y": 3}]'); SELECT json_populate_recordset(row(1,2)::j_ordered_pair, '[{"x": 1, "y": 0}]'); +-- negative cases where the wrong record type is supplied +select * from json_populate_recordset(row(0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +select * from json_populate_recordset(row(0::int,0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +select * from json_populate_recordset(row(0::int,0::int,0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +select * from json_populate_recordset(row(1000000000::int,50::int),'[{"b":"2"},{"a":"3"}]') q (a text, b text); + -- test type info caching in json_populate_record() CREATE TEMP TABLE jspoptest (js json); diff --git a/src/test/regress/sql/jsonb.sql b/src/test/regress/sql/jsonb.sql index 8698b8d332..903e5ef67d 100644 --- a/src/test/regress/sql/jsonb.sql +++ b/src/test/regress/sql/jsonb.sql @@ -669,6 +669,12 @@ SELECT jsonb_populate_recordset(null::jb_ordered_pair, '[{"x": 0, "y": 1}]'); SELECT jsonb_populate_recordset(row(1,2)::jb_ordered_pair, '[{"x": 0}, {"y": 3}]'); SELECT jsonb_populate_recordset(row(1,2)::jb_ordered_pair, '[{"x": 1, "y": 0}]'); +-- negative cases where the wrong record type is supplied +select * from jsonb_populate_recordset(row(0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +select * from jsonb_populate_recordset(row(0::int,0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +select * from jsonb_populate_recordset(row(0::int,0::int,0::int),'[{"a":"1","b":"2"},{"a":"3"}]') q (a text, b text); +select * from jsonb_populate_recordset(row(1000000000::int,50::int),'[{"b":"2"},{"a":"3"}]') q (a text, b text); + -- jsonb_to_record and jsonb_to_recordset select * from jsonb_to_record('{"a":1,"b":"foo","c":"bar"}') From 92d830f4bff643953a09563abaa106af42625207 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 6 Nov 2017 12:02:30 -0500 Subject: [PATCH 0499/1087] Last-minute updates for release notes. Security: CVE-2017-12172, CVE-2017-15098, CVE-2017-15099 --- doc/src/sgml/release-10.sgml | 108 +++++++++++++++++++++++++++++++++- doc/src/sgml/release-9.2.sgml | 25 ++++++++ doc/src/sgml/release-9.3.sgml | 42 +++++++++++++ doc/src/sgml/release-9.4.sgml | 42 +++++++++++++ doc/src/sgml/release-9.5.sgml | 75 ++++++++++++++++++++++- doc/src/sgml/release-9.6.sgml | 75 ++++++++++++++++++++++- 6 files changed, 364 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 6c07157d29..30d602a053 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -23,7 +23,7 @@ - However, if you use BRIN indexes, see the first changelog entry below. + However, if you use BRIN indexes, see the fourth changelog entry below. @@ -34,6 +34,92 @@ + + Ensure that INSERT ... ON CONFLICT DO UPDATE checks + table permissions and RLS policies in all cases (Dean Rasheed) + + + + The update path of INSERT ... 
ON CONFLICT DO UPDATE + requires SELECT permission on the columns of the + arbiter index, but it failed to check for that in the case of an + arbiter specified by constraint name. + In addition, for a table with row level security enabled, it failed to + check updated rows against the table's SELECT + policies (regardless of how the arbiter index was specified). + (CVE-2017-15099) + + + + + + + Fix crash due to rowtype mismatch + in json{b}_populate_recordset() + (Michael Paquier, Tom Lane) + + + + These functions used the result rowtype specified in the FROM + ... AS clause without checking that it matched the actual + rowtype of the supplied tuple value. If it didn't, that would usually + result in a crash, though disclosure of server memory contents seems + possible as well. + (CVE-2017-15098) + + + + + + + Fix sample server-start scripts to become $PGUSER + before opening $PGLOG (Noah Misch) + + + + Previously, the postmaster log file was opened while still running as + root. The database owner could therefore mount an attack against + another system user by making $PGLOG be a symbolic + link to some other file, which would then become corrupted by appending + log messages. + + + + By default, these scripts are not installed anywhere. Users who have + made use of them will need to manually recopy them, or apply the same + changes to their modified versions. If the + existing $PGLOG file is root-owned, it will need to + be removed or renamed out of the way before restarting the server with + the corrected script. + (CVE-2017-12172) + + + + + + + Fix missing temp-install prerequisites + for check-like Make targets (Noah Misch) + + + + Some non-default test procedures that are meant to work + like make check failed to ensure that the temporary + installation was up to date. + + + + + " .1" */ - if (Np->sign_wrote == FALSE && + if (Np->sign_wrote == false && (Np->num_curr >= Np->out_pre_spaces || (IS_ZERO(Np->Num) && Np->Num->zero_start == Np->num_curr)) && - (IS_PREDEC_SPACE(Np) == FALSE || (Np->last_relevant && *Np->last_relevant == '.'))) + (IS_PREDEC_SPACE(Np) == false || (Np->last_relevant && *Np->last_relevant == '.'))) { if (IS_LSIGN(Np->Num)) { @@ -4496,14 +4496,14 @@ NUM_numpart_to_char(NUMProc *Np, int id) else strcpy(Np->inout_p, Np->L_positive_sign); Np->inout_p += strlen(Np->inout_p); - Np->sign_wrote = TRUE; + Np->sign_wrote = true; } } else if (IS_BRACKET(Np->Num)) { *Np->inout_p = Np->sign == '+' ? ' ' : '<'; ++Np->inout_p; - Np->sign_wrote = TRUE; + Np->sign_wrote = true; } else if (Np->sign == '+') { @@ -4512,13 +4512,13 @@ NUM_numpart_to_char(NUMProc *Np, int id) *Np->inout_p = ' '; /* Write + */ ++Np->inout_p; } - Np->sign_wrote = TRUE; + Np->sign_wrote = true; } else if (Np->sign == '-') { /* Write - */ *Np->inout_p = '-'; ++Np->inout_p; - Np->sign_wrote = TRUE; + Np->sign_wrote = true; } } @@ -4549,7 +4549,7 @@ NUM_numpart_to_char(NUMProc *Np, int id) */ *Np->inout_p = '0'; /* Write '0' */ ++Np->inout_p; - Np->num_in = TRUE; + Np->num_in = true; } else { @@ -4607,7 +4607,7 @@ NUM_numpart_to_char(NUMProc *Np, int id) { *Np->inout_p = *Np->number_p; /* Write DIGIT */ ++Np->inout_p; - Np->num_in = TRUE; + Np->num_in = true; } } /* do no exceed string length */ @@ -4622,7 +4622,7 @@ NUM_numpart_to_char(NUMProc *Np, int id) if (Np->num_curr + 1 == end) { - if (Np->sign_wrote == TRUE && IS_BRACKET(Np->Num)) + if (Np->sign_wrote == true && IS_BRACKET(Np->Num)) { *Np->inout_p = Np->sign == '+' ? 
' ' : '>'; ++Np->inout_p; @@ -4659,7 +4659,7 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, Np->last_relevant = NULL; Np->read_post = 0; Np->read_pre = 0; - Np->read_dec = FALSE; + Np->read_dec = false; if (Np->Num->zero_start) --Np->Num->zero_start; @@ -4706,10 +4706,10 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, /* MI/PL/SG - write sign itself and not in number */ if (IS_PLUS(Np->Num) || IS_MINUS(Np->Num)) { - if (IS_PLUS(Np->Num) && IS_MINUS(Np->Num) == FALSE) - Np->sign_wrote = FALSE; /* need sign */ + if (IS_PLUS(Np->Num) && IS_MINUS(Np->Num) == false) + Np->sign_wrote = false; /* need sign */ else - Np->sign_wrote = TRUE; /* needn't sign */ + Np->sign_wrote = true; /* needn't sign */ } else { @@ -4723,17 +4723,17 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, else if (Np->sign != '+' && IS_PLUS(Np->Num)) Np->Num->flag &= ~NUM_F_PLUS; - if (Np->sign == '+' && IS_FILLMODE(Np->Num) && IS_LSIGN(Np->Num) == FALSE) - Np->sign_wrote = TRUE; /* needn't sign */ + if (Np->sign == '+' && IS_FILLMODE(Np->Num) && IS_LSIGN(Np->Num) == false) + Np->sign_wrote = true; /* needn't sign */ else - Np->sign_wrote = FALSE; /* need sign */ + Np->sign_wrote = false; /* need sign */ if (Np->Num->lsign == NUM_LSIGN_PRE && Np->Num->pre == Np->Num->pre_lsign_num) Np->Num->lsign = NUM_LSIGN_POST; } } else - Np->sign = FALSE; + Np->sign = false; /* * Count @@ -4762,7 +4762,7 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, } } - if (Np->sign_wrote == FALSE && Np->out_pre_spaces == 0) + if (Np->sign_wrote == false && Np->out_pre_spaces == 0) ++Np->num_count; } else diff --git a/src/backend/utils/adt/geo_ops.c b/src/backend/utils/adt/geo_ops.c index e13389a6cc..9dbe5db2b2 100644 --- a/src/backend/utils/adt/geo_ops.c +++ b/src/backend/utils/adt/geo_ops.c @@ -1530,7 +1530,7 @@ path_close(PG_FUNCTION_ARGS) { PATH *path = PG_GETARG_PATH_P_COPY(0); - path->closed = TRUE; + path->closed = true; PG_RETURN_PATH_P(path); } @@ -1540,7 +1540,7 @@ path_open(PG_FUNCTION_ARGS) { PATH *path = PG_GETARG_PATH_P_COPY(0); - path->closed = FALSE; + path->closed = false; PG_RETURN_PATH_P(path); } @@ -4499,7 +4499,7 @@ poly_path(PG_FUNCTION_ARGS) SET_VARSIZE(path, size); path->npts = poly->npts; - path->closed = TRUE; + path->closed = true; /* prevent instability in unused pad bytes */ path->dummy = 0; @@ -5401,7 +5401,7 @@ plist_same(int npts, Point *p1, Point *p2) printf("plist_same- ii = %d/%d after forward match\n", ii, npts); #endif if (ii == npts) - return TRUE; + return true; /* match not found forwards? then look backwards */ for (ii = 1, j = i - 1; ii < npts; ii++, j--) @@ -5421,11 +5421,11 @@ plist_same(int npts, Point *p1, Point *p2) printf("plist_same- ii = %d/%d after reverse match\n", ii, npts); #endif if (ii == npts) - return TRUE; + return true; } } - return FALSE; + return false; } diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c index 62341b84d1..1980ff5ac7 100644 --- a/src/backend/utils/adt/misc.c +++ b/src/backend/utils/adt/misc.c @@ -47,7 +47,7 @@ /* * Common subroutine for num_nulls() and num_nonnulls(). - * Returns TRUE if successful, FALSE if function should return NULL. + * Returns true if successful, false if function should return NULL. * If successful, total argument count and number of nulls are * returned into *nargs and *nulls. 
*/ diff --git a/src/backend/utils/adt/network_gist.c b/src/backend/utils/adt/network_gist.c index 0e36b7685d..c4cafba503 100644 --- a/src/backend/utils/adt/network_gist.c +++ b/src/backend/utils/adt/network_gist.c @@ -561,13 +561,13 @@ inet_gist_compress(PG_FUNCTION_ARGS) gistentryinit(*retval, PointerGetDatum(r), entry->rel, entry->page, - entry->offset, FALSE); + entry->offset, false); } else { gistentryinit(*retval, (Datum) 0, entry->rel, entry->page, - entry->offset, FALSE); + entry->offset, false); } } else @@ -602,7 +602,7 @@ inet_gist_fetch(PG_FUNCTION_ARGS) retval = palloc(sizeof(GISTENTRY)); gistentryinit(*retval, InetPGetDatum(dst), entry->rel, entry->page, - entry->offset, FALSE); + entry->offset, false); PG_RETURN_POINTER(retval); } diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index 2cd14f3401..82e6f4483b 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -5527,7 +5527,7 @@ zero_var(NumericVar *var) static const char * set_var_from_str(const char *str, const char *cp, NumericVar *dest) { - bool have_dp = FALSE; + bool have_dp = false; int i; unsigned char *decdigits; int sign = NUMERIC_POS; @@ -5558,7 +5558,7 @@ set_var_from_str(const char *str, const char *cp, NumericVar *dest) if (*cp == '.') { - have_dp = TRUE; + have_dp = true; cp++; } @@ -5591,7 +5591,7 @@ set_var_from_str(const char *str, const char *cp, NumericVar *dest) (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), errmsg("invalid input syntax for type %s: \"%s\"", "numeric", str))); - have_dp = TRUE; + have_dp = true; cp++; } else @@ -6160,7 +6160,7 @@ apply_typmod(NumericVar *var, int32 typmod) /* * Convert numeric to int8, rounding if needed. * - * If overflow, return FALSE (no error is raised). Return TRUE if okay. + * If overflow, return false (no error is raised). Return true if okay. */ static bool numericvar_to_int64(const NumericVar *var, int64 *result) @@ -6279,7 +6279,7 @@ int64_to_numericvar(int64 val, NumericVar *var) /* * Convert numeric to int128, rounding if needed. * - * If overflow, return FALSE (no error is raised). Return TRUE if okay. + * If overflow, return false (no error is raised). Return true if okay. 
*/ static bool numericvar_to_int128(const NumericVar *var, int128 *result) diff --git a/src/backend/utils/adt/regexp.c b/src/backend/utils/adt/regexp.c index 139bb583b1..e858b5910f 100644 --- a/src/backend/utils/adt/regexp.c +++ b/src/backend/utils/adt/regexp.c @@ -247,7 +247,7 @@ RE_compile_and_cache(text *text_re, int cflags, Oid collation) /* * RE_wchar_execute - execute a RE on pg_wchar data * - * Returns TRUE on match, FALSE on no match + * Returns true on match, false on no match * * re --- the compiled pattern as returned by RE_compile_and_cache * data --- the data to match against (need not be null-terminated) @@ -291,7 +291,7 @@ RE_wchar_execute(regex_t *re, pg_wchar *data, int data_len, /* * RE_execute - execute a RE * - * Returns TRUE on match, FALSE on no match + * Returns true on match, false on no match * * re --- the compiled pattern as returned by RE_compile_and_cache * dat --- the data to match against (need not be null-terminated) @@ -323,7 +323,7 @@ RE_execute(regex_t *re, char *dat, int dat_len, /* * RE_compile_and_execute - compile and execute a RE * - * Returns TRUE on match, FALSE on no match + * Returns true on match, false on no match * * text_re --- the pattern, expressed as a TEXT object * dat --- the data to match against (need not be null-terminated) @@ -1294,7 +1294,7 @@ build_regexp_split_result(regexp_matches_ctx *splitctx) * regexp_fixed_prefix - extract fixed prefix, if any, for a regexp * * The result is NULL if there is no fixed prefix, else a palloc'd string. - * If it is an exact match, not just a prefix, *exact is returned as TRUE. + * If it is an exact match, not just a prefix, *exact is returned as true. */ char * regexp_fixed_prefix(text *text_re, bool case_insensitive, Oid collation, diff --git a/src/backend/utils/adt/regproc.c b/src/backend/utils/adt/regproc.c index 6fe81fab7e..afd0c00b8a 100644 --- a/src/backend/utils/adt/regproc.c +++ b/src/backend/utils/adt/regproc.c @@ -1728,7 +1728,7 @@ stringToQualifiedNameList(const char *string) * the argtypes array should be of size FUNC_MAX_ARGS). The function or * operator name is returned to *names as a List of Strings. * - * If allowNone is TRUE, accept "NONE" and return it as InvalidOid (this is + * If allowNone is true, accept "NONE" and return it as InvalidOid (this is * for unary operators). */ static void diff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c index ba0e3ad87d..4badb5fd7c 100644 --- a/src/backend/utils/adt/ri_triggers.c +++ b/src/backend/utils/adt/ri_triggers.c @@ -2075,8 +2075,8 @@ RI_FKey_setdefault_upd(PG_FUNCTION_ARGS) * * Check if we really need to fire the RI trigger for an update to a PK * relation. This is called by the AFTER trigger queue manager to see if - * it can skip queuing an instance of an RI trigger. Returns TRUE if the - * trigger must be fired, FALSE if we can prove the constraint will still + * it can skip queuing an instance of an RI trigger. Returns true if the + * trigger must be fired, false if we can prove the constraint will still * be satisfied. * ---------- */ @@ -2132,8 +2132,8 @@ RI_FKey_pk_upd_check_required(Trigger *trigger, Relation pk_rel, * * Check if we really need to fire the RI trigger for an update to an FK * relation. This is called by the AFTER trigger queue manager to see if - * it can skip queuing an instance of an RI trigger. Returns TRUE if the - * trigger must be fired, FALSE if we can prove the constraint will still + * it can skip queuing an instance of an RI trigger. 
Returns true if the + * trigger must be fired, false if we can prove the constraint will still * be satisfied. * ---------- */ diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index cc6cec7877..752cef09e6 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -112,7 +112,7 @@ typedef struct int prettyFlags; /* enabling of pretty-print functions */ int wrapColumn; /* max line length, or -1 for no limit */ int indentLevel; /* current indent level for prettyprint */ - bool varprefix; /* TRUE to print prefixes on Vars */ + bool varprefix; /* true to print prefixes on Vars */ ParseExprKind special_exprkind; /* set only for exprkinds needing special * handling */ } deparse_context; @@ -130,7 +130,7 @@ typedef struct * rtable_columns holds the column alias names to be used for each RTE. * * In some cases we need to make names of merged JOIN USING columns unique - * across the whole query, not only per-RTE. If so, unique_using is TRUE + * across the whole query, not only per-RTE. If so, unique_using is true * and using_names is a list of C strings representing names already assigned * to USING columns. * @@ -3012,9 +3012,9 @@ deparse_expression(Node *expr, List *dpcontext, * for interpreting Vars in the node tree. It can be NIL if no Vars are * expected. * - * forceprefix is TRUE to force all Vars to be prefixed with their table names. + * forceprefix is true to force all Vars to be prefixed with their table names. * - * showimplicit is TRUE to force all implicit casts to be shown explicitly. + * showimplicit is true to force all implicit casts to be shown explicitly. * * Tries to pretty up the output according to prettyFlags and startIndent. * @@ -3587,7 +3587,7 @@ set_using_names(deparse_namespace *dpns, Node *jtnode, List *parentUsing) * If there's a USING clause, select the USING column names and push * those names down to the children. We have two strategies: * - * If dpns->unique_using is TRUE, we force all USING names to be + * If dpns->unique_using is true, we force all USING names to be * unique across the whole query level. In principle we'd only need * the names of dangerous USING columns to be globally unique, but to * safely assign all USING names in a single pass, we have to enforce @@ -3600,7 +3600,7 @@ set_using_names(deparse_namespace *dpns, Node *jtnode, List *parentUsing) * this simplifies the logic and seems likely to lead to less aliasing * overall. * - * If dpns->unique_using is FALSE, we only need USING names to be + * If dpns->unique_using is false, we only need USING names to be * unique within their own join RTE. We still need to honor * pushed-down names, though. * @@ -5167,7 +5167,7 @@ get_simple_values_rte(Query *query) ListCell *lc; /* - * We want to return TRUE even if the Query also contains OLD or NEW rule + * We want to return true even if the Query also contains OLD or NEW rule * RTEs. So the idea is to scan the rtable and see if there is only one * inFromCl RTE that is a VALUES RTE. */ @@ -6416,7 +6416,7 @@ get_utility_query_def(Query *query, deparse_context *context) * the Var's varlevelsup has to be interpreted with respect to a context * above the current one; levelsup indicates the offset. * - * If istoplevel is TRUE, the Var is at the top level of a SELECT's + * If istoplevel is true, the Var is at the top level of a SELECT's * targetlist, which means we need special treatment of whole-row Vars. 
* Instead of the normal "tab.*", we'll print "tab.*::typename", which is a * dirty hack to prevent "tab.*" from being expanded into multiple columns. @@ -10612,7 +10612,7 @@ generate_qualified_relation_name(Oid relid) * means a FuncExpr or Aggref, not some other way of calling a function), then * has_variadic must specify whether variadic arguments have been merged, * and *use_variadic_p will be set to indicate whether to print VARIADIC in - * the output. For non-FuncExpr cases, has_variadic should be FALSE and + * the output. For non-FuncExpr cases, has_variadic should be false and * use_variadic_p can be NULL. * * The result includes all necessary quoting and schema-prefixing. diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c index 7361e9d43c..4bbb4a850e 100644 --- a/src/backend/utils/adt/selfuncs.c +++ b/src/backend/utils/adt/selfuncs.c @@ -790,7 +790,7 @@ ineq_histogram_selectivity(PlannerInfo *root, * hand! (For example, we might have a '<=' operator rather than the '<' * operator that will appear in staop.) For now, assume that whatever * appears in pg_statistic is sorted the same way our operator sorts, or - * the reverse way if isgt is TRUE. + * the reverse way if isgt is true. */ if (HeapTupleIsValid(vardata->statsTuple) && statistic_proc_security_check(vardata, opproc->fn_oid) && @@ -3814,7 +3814,7 @@ estimate_hash_bucket_stats(PlannerInfo *root, Node *hashkey, double nbuckets, * * Varinfos that aren't for simple Vars are ignored. * - * Return TRUE if we're able to find a match, FALSE otherwise. + * Return true if we're able to find a match, false otherwise. */ static bool estimate_multivariate_ndistinct(PlannerInfo *root, RelOptInfo *rel, @@ -4527,12 +4527,12 @@ convert_timevalue_to_scalar(Datum value, Oid typid) * args: clause argument list * varRelid: see specs for restriction selectivity functions * - * Outputs: (these are valid only if TRUE is returned) + * Outputs: (these are valid only if true is returned) * *vardata: gets information about variable (see examine_variable) * *other: gets other clause argument, aggressively reduced to a constant - * *varonleft: set TRUE if variable is on the left, FALSE if on the right + * *varonleft: set true if variable is on the left, false if on the right * - * Returns TRUE if a variable is identified, otherwise FALSE. + * Returns true if a variable is identified, otherwise false. * * Note: if there are Vars on both sides of the clause, we must fail, because * callers are expecting that the other side will act like a pseudoconstant. @@ -4648,12 +4648,12 @@ get_join_variables(PlannerInfo *root, List *args, SpecialJoinInfo *sjinfo, * atttype, atttypmod: actual type/typmod of the "var" expression. This is * commonly the same as the exposed type of the variable argument, * but can be different in binary-compatible-type cases. - * isunique: TRUE if we were able to match the var to a unique index or a + * isunique: true if we were able to match the var to a unique index or a * single-column DISTINCT clause, implying its values are unique for * this query. (Caution: this should be trusted for statistical * purposes only, since we do not check indimmediate nor verify that * the exact same definition of equality applies.) - * acl_ok: TRUE if current user has permission to read the column(s) + * acl_ok: true if current user has permission to read the column(s) * underlying the pg_statistic entry. This is consulted by * statistic_proc_security_check(). 
* @@ -5060,7 +5060,7 @@ statistic_proc_security_check(VariableStatData *vardata, Oid func_oid) * Estimate the number of distinct values of a variable. * * vardata: results of examine_variable - * *isdefault: set to TRUE if the result is a default rather than based on + * *isdefault: set to true if the result is a default rather than based on * anything meaningful. * * NB: be careful to produce a positive integral result, since callers may @@ -5193,8 +5193,8 @@ get_variable_numdistinct(VariableStatData *vardata, bool *isdefault) /* * get_variable_range * Estimate the minimum and maximum value of the specified variable. - * If successful, store values in *min and *max, and return TRUE. - * If no data available, return FALSE. + * If successful, store values in *min and *max, and return true. + * If no data available, return false. * * sortop is the "<" comparison operator to use. This should generally * be "<" not ">", as only the former is likely to be found in pg_statistic. @@ -5327,9 +5327,9 @@ get_variable_range(PlannerInfo *root, VariableStatData *vardata, Oid sortop, * Attempt to identify the current *actual* minimum and/or maximum * of the specified variable, by looking for a suitable btree index * and fetching its low and/or high values. - * If successful, store values in *min and *max, and return TRUE. + * If successful, store values in *min and *max, and return true. * (Either pointer can be NULL if that endpoint isn't needed.) - * If no data available, return FALSE. + * If no data available, return false. * * sortop is the "<" comparison operator to use. */ diff --git a/src/backend/utils/adt/tsginidx.c b/src/backend/utils/adt/tsginidx.c index 83a939dfd5..aba456ed88 100644 --- a/src/backend/utils/adt/tsginidx.c +++ b/src/backend/utils/adt/tsginidx.c @@ -295,7 +295,7 @@ gin_tsquery_consistent(PG_FUNCTION_ARGS) /* int32 nkeys = PG_GETARG_INT32(3); */ Pointer *extra_data = (Pointer *) PG_GETARG_POINTER(4); bool *recheck = (bool *) PG_GETARG_POINTER(5); - bool res = FALSE; + bool res = false; /* Initially assume query doesn't require recheck */ *recheck = false; diff --git a/src/backend/utils/adt/tsgistidx.c b/src/backend/utils/adt/tsgistidx.c index 578af5d512..3295f7252b 100644 --- a/src/backend/utils/adt/tsgistidx.c +++ b/src/backend/utils/adt/tsgistidx.c @@ -240,7 +240,7 @@ gtsvector_compress(PG_FUNCTION_ARGS) retval = (GISTENTRY *) palloc(sizeof(GISTENTRY)); gistentryinit(*retval, PointerGetDatum(res), entry->rel, entry->page, - entry->offset, FALSE); + entry->offset, false); } else if (ISSIGNKEY(DatumGetPointer(entry->key)) && !ISALLTRUE(DatumGetPointer(entry->key))) @@ -264,7 +264,7 @@ gtsvector_compress(PG_FUNCTION_ARGS) retval = (GISTENTRY *) palloc(sizeof(GISTENTRY)); gistentryinit(*retval, PointerGetDatum(res), entry->rel, entry->page, - entry->offset, FALSE); + entry->offset, false); } PG_RETURN_POINTER(retval); } @@ -285,7 +285,7 @@ gtsvector_decompress(PG_FUNCTION_ARGS) gistentryinit(*retval, PointerGetDatum(key), entry->rel, entry->page, - entry->offset, FALSE); + entry->offset, false); PG_RETURN_POINTER(retval); } diff --git a/src/backend/utils/adt/tsquery_gist.c b/src/backend/utils/adt/tsquery_gist.c index 05bc0d6adb..04134d14ad 100644 --- a/src/backend/utils/adt/tsquery_gist.c +++ b/src/backend/utils/adt/tsquery_gist.c @@ -37,7 +37,7 @@ gtsquery_compress(PG_FUNCTION_ARGS) gistentryinit(*retval, TSQuerySignGetDatum(sign), entry->rel, entry->page, - entry->offset, FALSE); + entry->offset, false); } PG_RETURN_POINTER(retval); @@ -79,7 +79,7 @@ 
gtsquery_consistent(PG_FUNCTION_ARGS) retval = (key & sq) != 0; break; default: - retval = FALSE; + retval = false; } PG_RETURN_BOOL(retval); } diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c index 4b5483dbb9..4674ee2938 100644 --- a/src/backend/utils/adt/varlena.c +++ b/src/backend/utils/adt/varlena.c @@ -3255,7 +3255,7 @@ textToQualifiedNameList(text *textval) * namelist: filled with a palloc'd list of pointers to identifiers within * rawstring. Caller should list_free() this even on error return. * - * Returns TRUE if okay, FALSE if there is a syntax error in the string. + * Returns true if okay, false if there is a syntax error in the string. * * Note that an empty string is considered okay here, though not in * textToQualifiedNameList. @@ -3383,7 +3383,7 @@ SplitIdentifierString(char *rawstring, char separator, * namelist: filled with a palloc'd list of directory names. * Caller should list_free_deep() this even on error return. * - * Returns TRUE if okay, FALSE if there is a syntax error in the string. + * Returns true if okay, false if there is a syntax error in the string. * * Note that an empty string is considered okay here. */ diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c index 24229c2dff..c9d07f2ae9 100644 --- a/src/backend/utils/adt/xml.c +++ b/src/backend/utils/adt/xml.c @@ -4376,7 +4376,7 @@ XmlTableSetColumnFilter(TableFuncScanState *state, char *path, int colnum) /* * XmlTableFetchRow * Prepare the next "current" tuple for upcoming GetValue calls. - * Returns FALSE if the row-filter expression returned no more rows. + * Returns false if the row-filter expression returned no more rows. */ static bool XmlTableFetchRow(TableFuncScanState *state) diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c index 48961e31aa..0ea2f2bc54 100644 --- a/src/backend/utils/cache/lsyscache.c +++ b/src/backend/utils/cache/lsyscache.c @@ -186,7 +186,7 @@ get_opfamily_member(Oid opfamily, Oid lefttype, Oid righttype, * determine its opfamily, its declared input datatype, and its * strategy number (BTLessStrategyNumber or BTGreaterStrategyNumber). * - * Returns TRUE if successful, FALSE if no matching pg_amop entry exists. + * Returns true if successful, false if no matching pg_amop entry exists. * (This indicates that the operator is not a valid ordering operator.) * * Note: the operator could be registered in multiple families, for example @@ -254,8 +254,8 @@ get_ordering_op_properties(Oid opno, * Get the OID of the datatype-specific btree equality operator * associated with an ordering operator (a "<" or ">" operator). * - * If "reverse" isn't NULL, also set *reverse to FALSE if the operator is "<", - * TRUE if it's ">" + * If "reverse" isn't NULL, also set *reverse to false if the operator is "<", + * true if it's ">" * * Returns InvalidOid if no matching equality operator can be found. * (This indicates that the operator is not a valid ordering operator.) @@ -682,7 +682,7 @@ get_op_btree_interpretation(Oid opno) /* * equality_ops_are_compatible - * Return TRUE if the two given equality operators have compatible + * Return true if the two given equality operators have compatible * semantics. * * This is trivially true if they are the same operator. Otherwise, @@ -2868,7 +2868,7 @@ get_attavgwidth(Oid relid, AttrNumber attnum) * get_attstatsslot * * Extract the contents of a "slot" of a pg_statistic tuple. - * Returns TRUE if requested slot type was found, else FALSE. 
+ * Returns true if requested slot type was found, else false. * * Unlike other routines in this file, this takes a pointer to an * already-looked-up tuple in the pg_statistic cache. We do this since @@ -2884,7 +2884,7 @@ get_attavgwidth(Oid relid, AttrNumber attnum) * reqop: STAOP value wanted, or InvalidOid if don't care. * flags: bitmask of ATTSTATSSLOT_VALUES and/or ATTSTATSSLOT_NUMBERS. * - * If a matching slot is found, TRUE is returned, and *sslot is filled thus: + * If a matching slot is found, true is returned, and *sslot is filled thus: * staop: receives the actual STAOP value. * valuetype: receives actual datatype of the elements of stavalues. * values: receives pointer to an array of the slot's stavalues. @@ -2896,7 +2896,7 @@ get_attavgwidth(Oid relid, AttrNumber attnum) * wasn't specified. Likewise, numbers/nnumbers are NULL/0 if * ATTSTATSSLOT_NUMBERS wasn't specified. * - * If no matching slot is found, FALSE is returned, and *sslot is zeroed. + * If no matching slot is found, false is returned, and *sslot is zeroed. * * The data referred to by the fields of sslot is locally palloc'd and * is independent of the original pg_statistic tuple. When the caller diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c index ad8a82f1e3..853c1f6e85 100644 --- a/src/backend/utils/cache/plancache.c +++ b/src/backend/utils/cache/plancache.c @@ -319,7 +319,7 @@ CreateOneShotCachedPlan(RawStmt *raw_parse_tree, * parserSetup: alternate method for handling query parameters * parserSetupArg: data to pass to parserSetup * cursor_options: options bitmask to pass to planner - * fixed_result: TRUE to disallow future changes in query's result tupdesc + * fixed_result: true to disallow future changes in query's result tupdesc */ void CompleteCachedPlan(CachedPlanSource *plansource, diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index 5015719915..a31b68a8d5 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -227,7 +227,7 @@ do { \ typedef struct opclasscacheent { Oid opclassoid; /* lookup key: OID of opclass */ - bool valid; /* set TRUE after successful fill-in */ + bool valid; /* set true after successful fill-in */ StrategyNumber numSupport; /* max # of support procs (from pg_am) */ Oid opcfamily; /* OID of opclass's family */ Oid opcintype; /* OID of opclass's declared input type */ @@ -5358,9 +5358,9 @@ errtableconstraint(Relation rel, const char *conname) * load_relcache_init_file -- attempt to load cache from the shared * or local cache init file * - * If successful, return TRUE and set criticalRelcachesBuilt or + * If successful, return true and set criticalRelcachesBuilt or * criticalSharedRelcachesBuilt to true. - * If not successful, return FALSE. + * If not successful, return false. * * NOTE: we assume we are already switched into CacheMemoryContext. */ diff --git a/src/backend/utils/cache/relmapper.c b/src/backend/utils/cache/relmapper.c index 41c2ba7f97..e0c5dd404c 100644 --- a/src/backend/utils/cache/relmapper.c +++ b/src/backend/utils/cache/relmapper.c @@ -693,13 +693,13 @@ load_relmap_file(bool shared) * The magic number and CRC are automatically updated in *newmap. On * success, we copy the data to the appropriate permanent static variable. * - * If write_wal is TRUE then an appropriate WAL message is emitted. + * If write_wal is true then an appropriate WAL message is emitted. * (It will be false for bootstrap and WAL replay cases.) 
* - * If send_sinval is TRUE then a SI invalidation message is sent. + * If send_sinval is true then a SI invalidation message is sent. * (This should be true except in bootstrap case.) * - * If preserve_files is TRUE then the storage manager is warned not to + * If preserve_files is true then the storage manager is warned not to * delete the files listed in the map. * * Because this may be called during WAL replay when MyDatabaseId, diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c index 977c03834a..f6bb05f135 100644 --- a/src/backend/utils/error/elog.c +++ b/src/backend/utils/error/elog.c @@ -226,7 +226,7 @@ err_gettext(const char *str) * the stack entry. Finally, errfinish() will be called to actually process * the error report. * - * Returns TRUE in normal case. Returns FALSE to short-circuit the error + * Returns true in normal case. Returns false to short-circuit the error * report (if it's a warning or lower and not to be reported anywhere). */ bool @@ -285,7 +285,7 @@ errstart(int elevel, const char *filename, int lineno, /* * Now decide whether we need to process this report at all; if it's - * warning or less and not enabled for logging, just return FALSE without + * warning or less and not enabled for logging, just return false without * starting up any error logging machinery. */ diff --git a/src/backend/utils/fmgr/funcapi.c b/src/backend/utils/fmgr/funcapi.c index bfd5031b9d..24a3950575 100644 --- a/src/backend/utils/fmgr/funcapi.c +++ b/src/backend/utils/fmgr/funcapi.c @@ -441,8 +441,8 @@ get_expr_result_tupdesc(Node *expr, bool noError) /* * Given the result tuple descriptor for a function with OUT parameters, * replace any polymorphic columns (ANYELEMENT etc) with correct data types - * deduced from the input arguments. Returns TRUE if able to deduce all types, - * FALSE if not. + * deduced from the input arguments. Returns true if able to deduce all types, + * false if not. */ static bool resolve_polymorphic_tupdesc(TupleDesc tupdesc, oidvector *declared_args, @@ -634,7 +634,7 @@ resolve_polymorphic_tupdesc(TupleDesc tupdesc, oidvector *declared_args, /* * Given the declared argument types and modes for a function, replace any * polymorphic types (ANYELEMENT etc) with correct data types deduced from the - * input arguments. Returns TRUE if able to deduce all types, FALSE if not. + * input arguments. Returns true if able to deduce all types, false if not. * This is the same logic as resolve_polymorphic_tupdesc, but with a different * argument representation. * diff --git a/src/backend/utils/hash/dynahash.c b/src/backend/utils/hash/dynahash.c index 6f6b03c815..71f5f0688a 100644 --- a/src/backend/utils/hash/dynahash.c +++ b/src/backend/utils/hash/dynahash.c @@ -891,8 +891,8 @@ calc_bucket(HASHHDR *hctl, uint32 hash_val) * HASH_ENTER_NULL cannot be used with the default palloc-based allocator, * since palloc internally ereports on out-of-memory. * - * If foundPtr isn't NULL, then *foundPtr is set TRUE if we found an - * existing entry in the table, FALSE otherwise. This is needed in the + * If foundPtr isn't NULL, then *foundPtr is set true if we found an + * existing entry in the table, false otherwise. This is needed in the * HASH_ENTER case, but is redundant with the return value otherwise. 
* * For hash_search_with_hash_value, the hashvalue parameter must have been @@ -1096,12 +1096,12 @@ hash_search_with_hash_value(HTAB *hashp, * Therefore this cannot suffer an out-of-memory failure, even if there are * other processes operating in other partitions of the hashtable. * - * Returns TRUE if successful, FALSE if the requested new hash key is already + * Returns true if successful, false if the requested new hash key is already * present. Throws error if the specified entry pointer isn't actually a * table member. * * NB: currently, there is no special case for old and new hash keys being - * identical, which means we'll report FALSE for that situation. This is + * identical, which means we'll report false for that situation. This is * preferable for existing uses. * * NB: for a partitioned hashtable, caller must hold lock on both relevant diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c index afbf8f8691..544fed8096 100644 --- a/src/backend/utils/init/miscinit.c +++ b/src/backend/utils/init/miscinit.c @@ -1273,11 +1273,11 @@ AddToDataDirLockFile(int target_line, const char *str) /* * Recheck that the data directory lock file still exists with expected - * content. Return TRUE if the lock file appears OK, FALSE if it isn't. + * content. Return true if the lock file appears OK, false if it isn't. * * We call this periodically in the postmaster. The idea is that if the * lock file has been removed or replaced by another postmaster, we should - * do a panic database shutdown. Therefore, we should return TRUE if there + * do a panic database shutdown. Therefore, we should return true if there * is any doubt: we do not want to cause a panic shutdown unnecessarily. * Transient failures like EINTR or ENFILE should not cause us to fail. * (If there really is something wrong, we'll detect it on a future recheck.) diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 65372d7cc5..a609619f4d 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -3950,9 +3950,9 @@ static int num_guc_variables; static int size_guc_variables; -static bool guc_dirty; /* TRUE if need to do commit/abort work */ +static bool guc_dirty; /* true if need to do commit/abort work */ -static bool reporting_enabled; /* TRUE to enable GUC_REPORT */ +static bool reporting_enabled; /* true to enable GUC_REPORT */ static int GUCNestLevel = 0; /* 1 when in main transaction */ @@ -4374,7 +4374,7 @@ add_placeholder_variable(const char *name, int elevel) /* * Look up option NAME. If it exists, return a pointer to its record, - * else return NULL. If create_placeholders is TRUE, we'll create a + * else return NULL. If create_placeholders is true, we'll create a * placeholder record for a valid-looking custom variable name. */ static struct config_generic * @@ -5643,7 +5643,7 @@ config_enum_lookup_by_value(struct config_enum *record, int val) * Lookup the value for an enum option with the selected name * (case-insensitive). * If the enum option is found, sets the retval value and returns - * true. If it's not found, return FALSE and retval is set to 0. + * true. If it's not found, return false and retval is set to 0. 
*/ bool config_enum_lookup_by_name(struct config_enum *record, const char *value, @@ -5656,12 +5656,12 @@ config_enum_lookup_by_name(struct config_enum *record, const char *value, if (pg_strcasecmp(value, entry->name) == 0) { *retval = entry->val; - return TRUE; + return true; } } *retval = 0; - return FALSE; + return false; } @@ -8370,7 +8370,7 @@ show_config_by_name(PG_FUNCTION_ARGS) /* * show_config_by_name_missing_ok - equiv to SHOW X command but implemented as * a function. If X does not exist, suppress the error and just return NULL - * if missing_ok is TRUE. + * if missing_ok is true. */ Datum show_config_by_name_missing_ok(PG_FUNCTION_ARGS) @@ -9692,7 +9692,7 @@ GUCArrayReset(ArrayType *array) * or NULL for the Delete/Reset cases. If skipIfNoPermissions is true, it's * not an error to have no permissions to set the option. * - * Returns TRUE if OK, FALSE if skipIfNoPermissions is true and user does not + * Returns true if OK, false if skipIfNoPermissions is true and user does not * have permission to change this option (all other error cases result in an * error being thrown). */ @@ -9718,7 +9718,7 @@ validate_option_array_item(const char *name, const char *value, * define_custom_variable assumes we checked that. * * name is not known and can't be created as a placeholder. Throw error, - * unless skipIfNoPermissions is true, in which case return FALSE. + * unless skipIfNoPermissions is true, in which case return false. */ gconf = find_option(name, true, WARNING); if (!gconf) diff --git a/src/backend/utils/misc/tzparser.c b/src/backend/utils/misc/tzparser.c index 04d6ee3503..3986141899 100644 --- a/src/backend/utils/misc/tzparser.c +++ b/src/backend/utils/misc/tzparser.c @@ -45,7 +45,7 @@ static int ParseTzFile(const char *filename, int depth, /* * Apply additional validation checks to a tzEntry * - * Returns TRUE if OK, else false + * Returns true if OK, else false */ static bool validateTzEntry(tzEntry *tzentry) @@ -92,7 +92,7 @@ validateTzEntry(tzEntry *tzentry) * name zone * name offset dst * - * Returns TRUE if OK, else false; data is stored in *tzentry + * Returns true if OK, else false; data is stored in *tzentry */ static bool splitTzLine(const char *filename, int lineno, char *line, tzEntry *tzentry) @@ -180,7 +180,7 @@ splitTzLine(const char *filename, int lineno, char *line, tzEntry *tzentry) * *arraysize: allocated length of array (changeable if must enlarge array) * n: current number of valid elements in array * entry: new data to insert - * override: TRUE if OK to override + * override: true if OK to override * * Returns the new array length (new value for n), or -1 if error */ diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c index 89db08464f..d03b779407 100644 --- a/src/backend/utils/mmgr/portalmem.c +++ b/src/backend/utils/mmgr/portalmem.c @@ -623,8 +623,8 @@ PortalHashTableDeleteAll(void) * simply removed. Portals remaining from prior transactions should be * left untouched. * - * Returns TRUE if any portals changed state (possibly causing user-defined - * code to be run), FALSE if not. + * Returns true if any portals changed state (possibly causing user-defined + * code to be run), false if not. 
*/ bool PreCommit_Portals(bool isPrepare) diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c index 60522cb442..34af8d6334 100644 --- a/src/backend/utils/sort/tuplesort.c +++ b/src/backend/utils/sort/tuplesort.c @@ -1166,7 +1166,7 @@ tuplesort_end(Tuplesortstate *state) /* * Grow the memtuples[] array, if possible within our memory constraint. We * must not exceed INT_MAX tuples in memory or the caller-provided memory - * limit. Return TRUE if we were able to enlarge the array, FALSE if not. + * limit. Return true if we were able to enlarge the array, false if not. * * Normally, at each increment we double the size of the array. When doing * that would exceed a limit, we attempt one last, smaller increase (and then @@ -1733,7 +1733,7 @@ tuplesort_performsort(Tuplesortstate *state) /* * Internal routine to fetch the next tuple in either forward or back - * direction into *stup. Returns FALSE if no more tuples. + * direction into *stup. Returns false if no more tuples. * Returned tuple belongs to tuplesort memory context, and must not be freed * by caller. Note that fetched tuple is stored in memory that may be * recycled by any future fetch. @@ -1975,10 +1975,10 @@ tuplesort_gettuple_common(Tuplesortstate *state, bool forward, /* * Fetch the next tuple in either forward or back direction. - * If successful, put tuple in slot and return TRUE; else, clear the slot - * and return FALSE. + * If successful, put tuple in slot and return true; else, clear the slot + * and return false. * - * Caller may optionally be passed back abbreviated value (on TRUE return + * Caller may optionally be passed back abbreviated value (on true return * value) when abbreviation was used, which can be used to cheaply avoid * equality checks that might otherwise be required. Caller can safely make a * determination of "non-equal tuple" based on simple binary inequality. A @@ -2065,13 +2065,13 @@ tuplesort_getindextuple(Tuplesortstate *state, bool forward) /* * Fetch the next Datum in either forward or back direction. - * Returns FALSE if no more datums. + * Returns false if no more datums. * * If the Datum is pass-by-ref type, the returned value is freshly palloc'd * and is now owned by the caller (this differs from similar routines for * other types of tuplesorts). * - * Caller may optionally be passed back abbreviated value (on TRUE return + * Caller may optionally be passed back abbreviated value (on true return * value) when abbreviation was used, which can be used to cheaply avoid * equality checks that might otherwise be required. Caller can safely make a * determination of "non-equal tuple" based on simple binary inequality. A @@ -2115,7 +2115,7 @@ tuplesort_getdatum(Tuplesortstate *state, bool forward, /* * Advance over N tuples in either forward or back direction, * without returning any data. N==0 is a no-op. - * Returns TRUE if successful, FALSE if ran out of tuples. + * Returns true if successful, false if ran out of tuples. */ bool tuplesort_skiptuples(Tuplesortstate *state, int64 ntuples, bool forward) diff --git a/src/backend/utils/sort/tuplestore.c b/src/backend/utils/sort/tuplestore.c index 98c006b663..1977b61fd9 100644 --- a/src/backend/utils/sort/tuplestore.c +++ b/src/backend/utils/sort/tuplestore.c @@ -562,7 +562,7 @@ tuplestore_ateof(Tuplestorestate *state) /* * Grow the memtuples[] array, if possible within our memory constraint. We * must not exceed INT_MAX tuples in memory or the caller-provided memory - * limit. 
Return TRUE if we were able to enlarge the array, FALSE if not. + * limit. Return true if we were able to enlarge the array, false if not. * * Normally, at each increment we double the size of the array. When doing * that would exceed a limit, we attempt one last, smaller increase (and then @@ -1064,12 +1064,12 @@ tuplestore_gettuple(Tuplestorestate *state, bool forward, /* * tuplestore_gettupleslot - exported function to fetch a MinimalTuple * - * If successful, put tuple in slot and return TRUE; else, clear the slot - * and return FALSE. + * If successful, put tuple in slot and return true; else, clear the slot + * and return false. * - * If copy is TRUE, the slot receives a copied tuple (allocated in current + * If copy is true, the slot receives a copied tuple (allocated in current * memory context) that will stay valid regardless of future manipulations of - * the tuplestore's state. If copy is FALSE, the slot may just receive a + * the tuplestore's state. If copy is false, the slot may just receive a * pointer to a tuple held within the tuplestore. The latter is more * efficient but the slot contents may be corrupted if additional writes to * the tuplestore occur. (If using tuplestore_trim, see comments therein.) @@ -1129,7 +1129,7 @@ tuplestore_advance(Tuplestorestate *state, bool forward) /* * Advance over N tuples in either forward or back direction, * without returning any data. N<=0 is a no-op. - * Returns TRUE if successful, FALSE if ran out of tuples. + * Returns true if successful, false if ran out of tuples. */ bool tuplestore_skiptuples(Tuplestorestate *state, int64 ntuples, bool forward) diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c index c7e4331efb..200fa3765f 100644 --- a/src/backend/utils/time/combocid.c +++ b/src/backend/utils/time/combocid.c @@ -142,8 +142,8 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup) * into its t_cid field. * * If we don't need a combo CID, *cmax is unchanged and *iscombo is set to - * FALSE. If we do need one, *cmax is replaced by a combo CID and *iscombo - * is set to TRUE. + * false. If we do need one, *cmax is replaced by a combo CID and *iscombo + * is set to true. * * The reason this is separate from the actual HeapTupleHeaderSetCmax() * operation is that this could fail due to out-of-memory conditions. Hence diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c index b7aab0dd19..a821e2eed1 100644 --- a/src/backend/utils/time/tqual.c +++ b/src/backend/utils/time/tqual.c @@ -1422,7 +1422,7 @@ HeapTupleSatisfiesNonVacuumable(HeapTuple htup, Snapshot snapshot, * should already be set. We assume that if no hint bits are set, the xmin * or xmax transaction is still running. This is therefore faster than * HeapTupleSatisfiesVacuum, because we don't consult PGXACT nor CLOG. - * It's okay to return FALSE when in doubt, but we must return TRUE only + * It's okay to return false when in doubt, but we must return true only * if the tuple is removable. */ bool diff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c index e4c95feb63..70d8f24d17 100644 --- a/src/bin/pg_dump/dumputils.c +++ b/src/bin/pg_dump/dumputils.c @@ -42,7 +42,7 @@ static void AddAcl(PQExpBuffer aclbuf, const char *keyword, * prefix: string to prefix to each generated command; typically empty * remoteVersion: version of database * - * Returns TRUE if okay, FALSE if could not parse the acl string. + * Returns true if okay, false if could not parse the acl string. 
* The resulting commands (if any) are appended to the contents of 'sql'. * * Note: when processing a default ACL, prefix is "ALTER DEFAULT PRIVILEGES " @@ -359,7 +359,7 @@ buildACLCommands(const char *name, const char *subname, * owner: username of privileges owner (will be passed through fmtId) * remoteVersion: version of database * - * Returns TRUE if okay, FALSE if could not parse the acl string. + * Returns true if okay, false if could not parse the acl string. * The resulting commands (if any) are appended to the contents of 'sql'. */ bool diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 6d4c28852c..d8fb356130 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -2455,7 +2455,7 @@ getTableDataFKConstraints(void) * In 8.4 and up we can rely on the conislocal field to decide which * constraints must be dumped; much safer. * - * This function assumes all conislocal flags were initialized to TRUE. + * This function assumes all conislocal flags were initialized to true. * It clears the flag on anything that seems to be inherited. */ static void diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h index e7593e6da7..da884ffd09 100644 --- a/src/bin/pg_dump/pg_dump.h +++ b/src/bin/pg_dump/pg_dump.h @@ -339,7 +339,7 @@ typedef struct _attrDefInfo TableInfo *adtable; /* link to table of attribute */ int adnum; char *adef_expr; /* decompiled DEFAULT expression */ - bool separate; /* TRUE if must dump as separate item */ + bool separate; /* true if must dump as separate item */ } AttrDefInfo; typedef struct _tableDataInfo @@ -380,7 +380,7 @@ typedef struct _ruleInfo char ev_type; bool is_instead; char ev_enabled; - bool separate; /* TRUE if must dump as separate item */ + bool separate; /* true if must dump as separate item */ /* separate is always true for non-ON SELECT rules */ } RuleInfo; @@ -430,10 +430,10 @@ typedef struct _constraintInfo char *condef; /* definition, if CHECK or FOREIGN KEY */ Oid confrelid; /* referenced table, if FOREIGN KEY */ DumpId conindex; /* identifies associated index if any */ - bool condeferrable; /* TRUE if constraint is DEFERRABLE */ - bool condeferred; /* TRUE if constraint is INITIALLY DEFERRED */ - bool conislocal; /* TRUE if constraint has local definition */ - bool separate; /* TRUE if must dump as separate item */ + bool condeferrable; /* true if constraint is DEFERRABLE */ + bool condeferred; /* true if constraint is INITIALLY DEFERRED */ + bool conislocal; /* true if constraint has local definition */ + bool separate; /* true if must dump as separate item */ } ConstraintInfo; typedef struct _procLangInfo diff --git a/src/bin/pg_dump/pg_dump_sort.c b/src/bin/pg_dump/pg_dump_sort.c index 5044a76787..48b6dd594c 100644 --- a/src/bin/pg_dump/pg_dump_sort.c +++ b/src/bin/pg_dump/pg_dump_sort.c @@ -342,13 +342,13 @@ sortDumpableObjects(DumpableObject **objs, int numObjs, * The input is the list of numObjs objects in objs[]. This list is not * modified. * - * Returns TRUE if able to build an ordering that satisfies all the - * constraints, FALSE if not (there are contradictory constraints). + * Returns true if able to build an ordering that satisfies all the + * constraints, false if not (there are contradictory constraints). * - * On success (TRUE result), ordering[] is filled with a sorted array of + * On success (true result), ordering[] is filled with a sorted array of * DumpableObject pointers, of length equal to the input list length. 
* - * On failure (FALSE result), ordering[] is filled with an unsorted array of + * On failure (false result), ordering[] is filled with an unsorted array of * DumpableObject pointers of length *nOrdering, listing the objects that * prevented the sort from being completed. In general, these objects either * participate directly in a dependency cycle, or are depended on by objects diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h index e44c23654d..a21dd48c42 100644 --- a/src/bin/pg_upgrade/pg_upgrade.h +++ b/src/bin/pg_upgrade/pg_upgrade.h @@ -284,7 +284,7 @@ typedef struct typedef struct { FILE *internal; /* internal log FILE */ - bool verbose; /* TRUE -> be verbose in messages */ + bool verbose; /* true -> be verbose in messages */ bool retain; /* retain log files on success */ } LogOpts; @@ -294,7 +294,7 @@ typedef struct */ typedef struct { - bool check; /* TRUE -> ask user for permission to make + bool check; /* true -> ask user for permission to make * changes */ transferMode transfer_mode; /* copy files or link them? */ int jobs; diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c index 041b5e0c87..8cc4de3878 100644 --- a/src/bin/psql/command.c +++ b/src/bin/psql/command.c @@ -4370,7 +4370,7 @@ echo_hidden_command(const char *query) /* * Look up the object identified by obj_type and desc. If successful, - * store its OID in *obj_oid and return TRUE, else return FALSE. + * store its OID in *obj_oid and return true, else return false. * * Note that we'll fail if the object doesn't exist OR if there are multiple * matching candidates OR if there's something syntactically wrong with the diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c index 9b59ee840b..7a91a44b2b 100644 --- a/src/bin/psql/common.c +++ b/src/bin/psql/common.c @@ -44,7 +44,7 @@ static bool is_select_command(const char *query); * Returns output file pointer into *fout, and is-a-pipe flag into *is_pipe. * Caller is responsible for adjusting SIGPIPE state if it's a pipe. * - * On error, reports suitable error message and returns FALSE. + * On error, reports suitable error message and returns false. */ bool openQueryOutputFile(const char *fname, FILE **fout, bool *is_pipe) @@ -266,7 +266,7 @@ NoticeProcessor(void *arg, const char *message) * database queries. In most places, this is accomplished by checking * cancel_pressed during long-running loops. However, that won't work when * blocked on user input (in readline() or fgets()). In those places, we - * set sigint_interrupt_enabled TRUE while blocked, instructing the signal + * set sigint_interrupt_enabled true while blocked, instructing the signal * catcher to longjmp through sigint_interrupt_jmp. We assume readline and * fgets are coded to handle possible interruption. (XXX currently this does * not work on win32, so control-C is less useful there) diff --git a/src/bin/psql/large_obj.c b/src/bin/psql/large_obj.c index 2a3416b369..8a8887202a 100644 --- a/src/bin/psql/large_obj.c +++ b/src/bin/psql/large_obj.c @@ -48,7 +48,7 @@ print_lo_result(const char *fmt,...) * Prepare to do a large-object operation. We *must* be inside a transaction * block for all these operations, so start one if needed. * - * Returns TRUE if okay, FALSE if failed. *own_transaction is set to indicate + * Returns true if okay, false if failed. *own_transaction is set to indicate * if we started our own transaction or not. 
*/ static bool diff --git a/src/bin/psql/stringutils.c b/src/bin/psql/stringutils.c index 959381d085..eefd18fbd9 100644 --- a/src/bin/psql/stringutils.c +++ b/src/bin/psql/stringutils.c @@ -27,8 +27,8 @@ * delim - set of non-whitespace separator characters (or NULL) * quote - set of characters that can quote a token (NULL if none) * escape - character that can quote quotes (0 if none) - * e_strings - if TRUE, treat E'...' syntax as a valid token - * del_quotes - if TRUE, strip quotes from the returned token, else return + * e_strings - if true, treat E'...' syntax as a valid token + * del_quotes - if true, strip quotes from the returned token, else return * it exactly as found in the string * encoding - the active character-set encoding * @@ -39,7 +39,7 @@ * a single quote character in the data. If escape isn't 0, then escape * followed by anything (except \0) is a data character too. * - * The combination of e_strings and del_quotes both TRUE is not currently + * The combination of e_strings and del_quotes both true is not currently * handled. This could be fixed but it's not needed anywhere at the moment. * * Note that the string s is _not_ overwritten in this implementation. diff --git a/src/common/md5.c b/src/common/md5.c index ba65b02af6..9144cab6ee 100644 --- a/src/common/md5.c +++ b/src/common/md5.c @@ -317,7 +317,7 @@ pg_md5_binary(const void *buff, size_t len, void *outbuf) * Output format is "md5" followed by a 32-hex-digit MD5 checksum. * Hence, the output buffer "buf" must be at least 36 bytes long. * - * Returns TRUE if okay, FALSE on error (out of memory). + * Returns true if okay, false on error (out of memory). */ bool pg_md5_encrypt(const char *passwd, const char *salt, size_t salt_len, diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h index adfdb0c6d9..7b5c845b83 100644 --- a/src/include/access/gin_private.h +++ b/src/include/access/gin_private.h @@ -300,7 +300,7 @@ typedef struct GinScanKeyData /* * Match status data. curItem is the TID most recently tested (could be a - * lossy-page pointer). curItemMatches is TRUE if it passes the + * lossy-page pointer). curItemMatches is true if it passes the * consistentFn test; if so, recheckCurItem is the recheck flag. * isFinished means that all the input entry streams are finished, so this * key cannot succeed for any later TIDs. 
diff --git a/src/include/access/hash_xlog.h b/src/include/access/hash_xlog.h index c778fdc8df..abe8579249 100644 --- a/src/include/access/hash_xlog.h +++ b/src/include/access/hash_xlog.h @@ -149,7 +149,7 @@ typedef struct xl_hash_split_complete typedef struct xl_hash_move_page_contents { uint16 ntups; - bool is_prim_bucket_same_wrt; /* TRUE if the page to which + bool is_prim_bucket_same_wrt; /* true if the page to which * tuples are moved is same as * primary bucket page */ } xl_hash_move_page_contents; @@ -174,10 +174,10 @@ typedef struct xl_hash_squeeze_page BlockNumber prevblkno; BlockNumber nextblkno; uint16 ntups; - bool is_prim_bucket_same_wrt; /* TRUE if the page to which + bool is_prim_bucket_same_wrt; /* true if the page to which * tuples are moved is same as * primary bucket page */ - bool is_prev_bucket_same_wrt; /* TRUE if the page to which + bool is_prev_bucket_same_wrt; /* true if the page to which * tuples are moved is the page * previous to the freed overflow * page */ @@ -196,9 +196,9 @@ typedef struct xl_hash_squeeze_page */ typedef struct xl_hash_delete { - bool clear_dead_marking; /* TRUE if this operation clears + bool clear_dead_marking; /* true if this operation clears * LH_PAGE_HAS_DEAD_TUPLES flag */ - bool is_primary_bucket_page; /* TRUE if the operation is for + bool is_primary_bucket_page; /* true if the operation is for * primary bucket page */ } xl_hash_delete; diff --git a/src/include/access/slru.h b/src/include/access/slru.h index d829a6fab4..20114c4d44 100644 --- a/src/include/access/slru.h +++ b/src/include/access/slru.h @@ -37,7 +37,7 @@ /* * Page status codes. Note that these do not include the "dirty" bit. - * page_dirty can be TRUE only in the VALID or WRITE_IN_PROGRESS states; + * page_dirty can be true only in the VALID or WRITE_IN_PROGRESS states; * in the latter case it implies that the page has been re-dirtied since * the write started. */ diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h index 0f2b8bd53f..8fd6010ba0 100644 --- a/src/include/access/xlog.h +++ b/src/include/access/xlog.h @@ -43,7 +43,7 @@ extern bool InRecovery; /* * Like InRecovery, standbyState is only valid in the startup process. * In all other processes it will have the value STANDBY_DISABLED (so - * InHotStandby will read as FALSE). + * InHotStandby will read as false). * * In DISABLED state, we're performing crash recovery or hot standby was * disabled in postgresql.conf. diff --git a/src/include/c.h b/src/include/c.h index 62df4d5b0c..852551c121 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -493,7 +493,7 @@ typedef NameData *Name; #define NameStr(name) ((name).data) /* - * Support macros for escaping strings. escape_backslash should be TRUE + * Support macros for escaping strings. escape_backslash should be true * if generating a non-standard-conforming string. Prefixing a string * with ESCAPE_STRING_SYNTAX guarantees it is non-standard-conforming. * Beware of multiple evaluation of the "ch" argument! 
diff --git a/src/include/catalog/pg_conversion.h b/src/include/catalog/pg_conversion.h index 0682d7eb22..9344585e66 100644 --- a/src/include/catalog/pg_conversion.h +++ b/src/include/catalog/pg_conversion.h @@ -32,7 +32,7 @@ * conforencoding FOR encoding id * contoencoding TO encoding id * conproc OID of the conversion proc - * condefault TRUE if this is a default conversion + * condefault true if this is a default conversion * ---------------------------------------------------------------- */ #define ConversionRelationId 2607 diff --git a/src/include/catalog/pg_type.h b/src/include/catalog/pg_type.h index ffdb452b02..e3551440a0 100644 --- a/src/include/catalog/pg_type.h +++ b/src/include/catalog/pg_type.h @@ -51,7 +51,7 @@ CATALOG(pg_type,1247) BKI_BOOTSTRAP BKI_ROWTYPE_OID(71) BKI_SCHEMA_MACRO /* * typbyval determines whether internal Postgres routines pass a value of - * this type by value or by reference. typbyval had better be FALSE if + * this type by value or by reference. typbyval had better be false if * the length is not 1, 2, or 4 (or 8 on 8-byte-Datum machines). * Variable-length types are always passed by reference. Note that * typbyval can be false even if the length would allow pass-by-value; diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h index 7a7b793ddf..60586b2ca6 100644 --- a/src/include/commands/vacuum.h +++ b/src/include/commands/vacuum.h @@ -29,8 +29,8 @@ * so they live until the end of the ANALYZE operation. * * The type-specific typanalyze function is passed a pointer to this struct - * and must return TRUE to continue analysis, FALSE to skip analysis of this - * column. In the TRUE case it must set the compute_stats and minrows fields, + * and must return true to continue analysis, false to skip analysis of this + * column. In the true case it must set the compute_stats and minrows fields, * and can optionally set extra_data to pass additional info to compute_stats. * minrows is its request for the minimum number of sample rows to be gathered * (but note this request might not be honored, eg if there are fewer rows @@ -45,7 +45,7 @@ * The fetchfunc may be called with rownum running from 0 to samplerows-1. * It returns a Datum and an isNull flag. * - * compute_stats should set stats_valid TRUE if it is able to compute + * compute_stats should set stats_valid true if it is able to compute * any useful statistics. If it does, the remainder of the struct holds * the information to be stored in a pg_statistic row for the column. Be * careful to allocate any pointed-to data in anl_context, which will NOT @@ -86,7 +86,7 @@ typedef struct VacAttrStats /* * These fields must be filled in by the typanalyze routine, unless it - * returns FALSE. + * returns false. 
*/ AnalyzeAttrComputeStatsFunc compute_stats; /* function pointer */ int minrows; /* Minimum # of rows wanted for stats */ diff --git a/src/include/executor/instrument.h b/src/include/executor/instrument.h index 31573145a9..f1bae7a44d 100644 --- a/src/include/executor/instrument.h +++ b/src/include/executor/instrument.h @@ -44,10 +44,10 @@ typedef enum InstrumentOption typedef struct Instrumentation { /* Parameters set at node creation: */ - bool need_timer; /* TRUE if we need timer data */ - bool need_bufusage; /* TRUE if we need buffer usage data */ + bool need_timer; /* true if we need timer data */ + bool need_bufusage; /* true if we need buffer usage data */ /* Info about current plan cycle: */ - bool running; /* TRUE if we've completed first tuple */ + bool running; /* true if we've completed first tuple */ instr_time starttime; /* Start time of current iteration of node */ instr_time counter; /* Accumulated runtime for this node */ double firsttuple; /* Time for first tuple of this cycle */ diff --git a/src/include/executor/tuptable.h b/src/include/executor/tuptable.h index 55f4cce4ee..db2a42af5e 100644 --- a/src/include/executor/tuptable.h +++ b/src/include/executor/tuptable.h @@ -68,7 +68,7 @@ * A TupleTableSlot can also be "empty", holding no valid data. This is * the only valid state for a freshly-created slot that has not yet had a * tuple descriptor assigned to it. In this state, tts_isempty must be - * TRUE, tts_shouldFree FALSE, tts_tuple NULL, tts_buffer InvalidBuffer, + * true, tts_shouldFree false, tts_tuple NULL, tts_buffer InvalidBuffer, * and tts_nvalid zero. * * The tupleDescriptor is simply referenced, not copied, by the TupleTableSlot diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index d209ec012c..e05bc04f52 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -768,8 +768,8 @@ typedef struct SubPlanState ProjectionInfo *projRight; /* for projecting subselect output */ TupleHashTable hashtable; /* hash table for no-nulls subselect rows */ TupleHashTable hashnulls; /* hash table for rows with null(s) */ - bool havehashrows; /* TRUE if hashtable is not empty */ - bool havenullrows; /* TRUE if hashnulls is not empty */ + bool havehashrows; /* true if hashtable is not empty */ + bool havenullrows; /* true if hashnulls is not empty */ MemoryContext hashtablecxt; /* memory context containing hash tables */ MemoryContext hashtempcxt; /* temp memory context for hash tables */ ExprContext *innerecontext; /* econtext for computing inner tuples */ diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 06a2b81fb5..a240c271db 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -198,7 +198,7 @@ typedef struct Query * Similarly, if "typmods" is NIL then the actual typmod is expected to * be prespecified in typemod, otherwise typemod is unused. * - * If pct_type is TRUE, then names is actually a field name and we look up + * If pct_type is true, then names is actually a field name and we look up * the type of that field. Otherwise (the normal case), names is a type * name possibly qualified with schema and database name. */ @@ -888,8 +888,8 @@ typedef struct PartitionCmd * them from the joinaliasvars list, because that would affect the attnums * of Vars referencing the rest of the list.) * - * inh is TRUE for relation references that should be expanded to include - * inheritance children, if the rel has any. 
This *must* be FALSE for + * inh is true for relation references that should be expanded to include + * inheritance children, if the rel has any. This *must* be false for * RTEs other than RTE_RELATION entries. * * inFromCl marks those range variables that are listed in the FROM clause. @@ -1147,7 +1147,7 @@ typedef struct WithCheckOption * or InvalidOid if not available. * nulls_first means about what you'd expect. If sortop is InvalidOid * then nulls_first is meaningless and should be set to false. - * hashable is TRUE if eqop is hashable (note this condition also depends + * hashable is true if eqop is hashable (note this condition also depends * on the datatype of the input expression). * * In an ORDER BY item, all fields must be valid. (The eqop isn't essential @@ -2680,7 +2680,7 @@ typedef struct FetchStmt FetchDirection direction; /* see above */ long howMany; /* number of rows, or position argument */ char *portalname; /* name of portal (cursor) */ - bool ismove; /* TRUE if MOVE */ + bool ismove; /* true if MOVE */ } FetchStmt; /* ---------------------- diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h index c2929ac387..074ae0a865 100644 --- a/src/include/nodes/primnodes.h +++ b/src/include/nodes/primnodes.h @@ -302,7 +302,7 @@ typedef struct Aggref List *aggorder; /* ORDER BY (list of SortGroupClause) */ List *aggdistinct; /* DISTINCT (list of SortGroupClause) */ Expr *aggfilter; /* FILTER expression, if any */ - bool aggstar; /* TRUE if argument list was really '*' */ + bool aggstar; /* true if argument list was really '*' */ bool aggvariadic; /* true if variadic arguments have been * combined into an array last argument */ char aggkind; /* aggregate kind (see pg_aggregate.h) */ @@ -359,7 +359,7 @@ typedef struct WindowFunc List *args; /* arguments to the window function */ Expr *aggfilter; /* FILTER expression, if any */ Index winref; /* index of associated WindowClause */ - bool winstar; /* TRUE if argument list was really '*' */ + bool winstar; /* true if argument list was really '*' */ bool winagg; /* is function a simple aggregate? */ int location; /* token location, or -1 if unknown */ } WindowFunc; @@ -695,9 +695,9 @@ typedef struct SubPlan Oid firstColCollation; /* Collation of first column of subplan * result */ /* Information about execution strategy: */ - bool useHashTable; /* TRUE to store subselect output in a hash + bool useHashTable; /* true to store subselect output in a hash * table (implies we are doing "IN") */ - bool unknownEqFalse; /* TRUE if it's okay to return FALSE when the + bool unknownEqFalse; /* true if it's okay to return FALSE when the * spec result is UNKNOWN; this allows much * simpler handling of null values */ bool parallel_safe; /* is the subplan parallel-safe? */ diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index e085cefb7b..05fc9a3f48 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -1423,14 +1423,14 @@ typedef JoinPath NestPath; * mergejoin. If it is not NIL then it is a PathKeys list describing * the ordering that must be created by an explicit Sort node. * - * skip_mark_restore is TRUE if the executor need not do mark/restore calls. + * skip_mark_restore is true if the executor need not do mark/restore calls. * Mark/restore overhead is usually required, but can be skipped if we know * that the executor need find only one match per outer tuple, and that the * mergeclauses are sufficient to identify a match. 
In such cases the * executor can immediately advance the outer relation after processing a * match, and therefore it need never back up the inner relation. * - * materialize_inner is TRUE if a Material node should be placed atop the + * materialize_inner is true if a Material node should be placed atop the * inner input. This may appear with or without an inner Sort step. */ @@ -1834,15 +1834,15 @@ typedef struct RestrictInfo Expr *clause; /* the represented clause of WHERE or JOIN */ - bool is_pushed_down; /* TRUE if clause was pushed down in level */ + bool is_pushed_down; /* true if clause was pushed down in level */ - bool outerjoin_delayed; /* TRUE if delayed by lower outer join */ + bool outerjoin_delayed; /* true if delayed by lower outer join */ bool can_join; /* see comment above */ bool pseudoconstant; /* see comment above */ - bool leakproof; /* TRUE if known to contain no leaked Vars */ + bool leakproof; /* true if known to contain no leaked Vars */ Index security_level; /* see comment above */ @@ -1973,7 +1973,7 @@ typedef struct PlaceHolderVar * syntactically below this special join. (These are needed to help compute * min_lefthand and min_righthand for higher joins.) * - * delay_upper_joins is set TRUE if we detect a pushed-down clause that has + * delay_upper_joins is set true if we detect a pushed-down clause that has * to be evaluated after this join is formed (because it references the RHS). * Any outer joins that have such a clause and this join in their RHS cannot * commute with this join, because that would leave noplace to check the diff --git a/src/include/parser/parse_node.h b/src/include/parser/parse_node.h index 68930c1f4a..f0e210ad8d 100644 --- a/src/include/parser/parse_node.h +++ b/src/include/parser/parse_node.h @@ -111,7 +111,7 @@ typedef Node *(*CoerceParamHook) (ParseState *pstate, Param *param, * namespace for table and column lookup. (The RTEs listed here may be just * a subset of the whole rtable. See ParseNamespaceItem comments below.) * - * p_lateral_active: TRUE if we are currently parsing a LATERAL subexpression + * p_lateral_active: true if we are currently parsing a LATERAL subexpression * of this parse level. This makes p_lateral_only namespace items visible, - * whereas they are not visible when p_lateral_active is FALSE. + * whereas they are not visible when p_lateral_active is false. * diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h index bbf505e246..99e109853f 100644 --- a/src/include/storage/s_lock.h +++ b/src/include/storage/s_lock.h @@ -22,7 +22,7 @@ * Unlock a previously acquired lock. * * bool S_LOCK_FREE(slock_t *lock) - * Tests if the lock is free. Returns TRUE if free, FALSE if locked. + * Tests if the lock is free. Returns true if free, false if locked. * This does *not* change the state of the lock. * * void SPIN_DELAY(void) diff --git a/src/include/storage/spin.h b/src/include/storage/spin.h index 66698645c2..16413856ca 100644 --- a/src/include/storage/spin.h +++ b/src/include/storage/spin.h @@ -19,7 +19,7 @@ * Unlock a previously acquired lock. * * bool SpinLockFree(slock_t *lock) - * Tests if the lock is free. Returns TRUE if free, FALSE if locked. + * Tests if the lock is free. Returns true if free, false if locked. * This does *not* change the state of the lock.
* * Callers must beware that the macro argument may be evaluated multiple diff --git a/src/include/tsearch/ts_utils.h b/src/include/tsearch/ts_utils.h index 3312353026..782548c0af 100644 --- a/src/include/tsearch/ts_utils.h +++ b/src/include/tsearch/ts_utils.h @@ -146,7 +146,7 @@ typedef struct ExecPhraseData * val: lexeme to test for presence of * data: to be filled with lexeme positions; NULL if position data not needed * - * Return TRUE if lexeme is present in data, else FALSE. If data is not + * Return true if lexeme is present in data, else false. If data is not * NULL, it should be filled with lexeme positions, but function can leave * it as zeroes if position data is not available. */ @@ -167,7 +167,7 @@ typedef bool (*TSExecuteCallback) (void *arg, QueryOperand *val, #define TS_EXEC_CALC_NOT (0x01) /* * If TS_EXEC_PHRASE_NO_POS is set, allow OP_PHRASE to be executed lossily - * in the absence of position information: a TRUE result indicates that the + * in the absence of position information: a true result indicates that the * phrase might be present. Without this flag, OP_PHRASE always returns * false if lexeme position information is not available. */ diff --git a/src/interfaces/ecpg/ecpglib/misc.c b/src/interfaces/ecpg/ecpglib/misc.c index 2084d7fe60..a0257c8957 100644 --- a/src/interfaces/ecpg/ecpglib/misc.c +++ b/src/interfaces/ecpg/ecpglib/misc.c @@ -225,13 +225,13 @@ ECPGtrans(int lineno, const char *connection_name, const char *transaction) { res = PQexec(con->connection, "begin transaction"); if (!ecpg_check_PQresult(res, lineno, con->connection, ECPG_COMPAT_PGSQL)) - return FALSE; + return false; PQclear(res); } res = PQexec(con->connection, transaction); if (!ecpg_check_PQresult(res, lineno, con->connection, ECPG_COMPAT_PGSQL)) - return FALSE; + return false; PQclear(res); } diff --git a/src/interfaces/ecpg/pgtypeslib/datetime.c b/src/interfaces/ecpg/pgtypeslib/datetime.c index 33c9011a71..c2f78f5a56 100644 --- a/src/interfaces/ecpg/pgtypeslib/datetime.c +++ b/src/interfaces/ecpg/pgtypeslib/datetime.c @@ -59,7 +59,7 @@ PGTYPESdate_from_asc(char *str, char **endptr) char *realptr; char **ptr = (endptr != NULL) ? 
endptr : &realptr; - bool EuroDates = FALSE; + bool EuroDates = false; errno = 0; if (strlen(str) > MAXDATELEN) @@ -105,7 +105,7 @@ PGTYPESdate_to_asc(date dDate) *tm = &tt; char buf[MAXDATELEN + 1]; int DateStyle = 1; - bool EuroDates = FALSE; + bool EuroDates = false; j2date(dDate + date2j(2000, 1, 1), &(tm->tm_year), &(tm->tm_mon), &(tm->tm_mday)); EncodeDateOnly(tm, DateStyle, buf, EuroDates); diff --git a/src/interfaces/ecpg/pgtypeslib/dt_common.c b/src/interfaces/ecpg/pgtypeslib/dt_common.c index be72fce8c5..994389f4a1 100644 --- a/src/interfaces/ecpg/pgtypeslib/dt_common.c +++ b/src/interfaces/ecpg/pgtypeslib/dt_common.c @@ -1144,7 +1144,7 @@ DecodeNumberField(int len, char *str, int fmask, tm->tm_mon = atoi(str + 2); *(str + 2) = '\0'; tm->tm_year = atoi(str + 0); - *is2digits = TRUE; + *is2digits = true; return DTK_DATE; } @@ -1156,7 +1156,7 @@ DecodeNumberField(int len, char *str, int fmask, *(str + 2) = '\0'; tm->tm_mon = 1; tm->tm_year = atoi(str + 0); - *is2digits = TRUE; + *is2digits = true; return DTK_DATE; } @@ -1314,8 +1314,8 @@ DecodeDate(char *str, int fmask, int *tmask, struct tm *tm, bool EuroDates) int nf = 0; int i, len; - bool bc = FALSE; - bool is2digits = FALSE; + bool bc = false; + bool is2digits = false; int type, val, dmask = 0; @@ -1792,9 +1792,9 @@ DecodeDateTime(char **field, int *ftype, int nf, int i; int val; int mer = HR24; - bool haveTextMonth = FALSE; - bool is2digits = FALSE; - bool bc = FALSE; + bool haveTextMonth = false; + bool is2digits = false; + bool bc = false; int t = 0; int *tzp = &t; @@ -2200,7 +2200,7 @@ DecodeDateTime(char **field, int *ftype, int nf, tm->tm_mday = tm->tm_mon; tmask = DTK_M(DAY); } - haveTextMonth = TRUE; + haveTextMonth = true; tm->tm_mon = val; break; diff --git a/src/interfaces/ecpg/pgtypeslib/interval.c b/src/interfaces/ecpg/pgtypeslib/interval.c index 4a7227e926..41976a188a 100644 --- a/src/interfaces/ecpg/pgtypeslib/interval.c +++ b/src/interfaces/ecpg/pgtypeslib/interval.c @@ -338,7 +338,7 @@ DecodeInterval(char **field, int *ftype, int nf, /* int range, */ { int IntervalStyle = INTSTYLE_POSTGRES_VERBOSE; int range = INTERVAL_FULL_RANGE; - bool is_before = FALSE; + bool is_before = false; char *cp; int fmask = 0, tmask, @@ -583,7 +583,7 @@ DecodeInterval(char **field, int *ftype, int nf, /* int range, */ break; case AGO: - is_before = TRUE; + is_before = true; type = val; break; @@ -705,7 +705,7 @@ AddVerboseIntPart(char *cp, int value, const char *units, else if (*is_before) value = -value; sprintf(cp, " %d %s%s", value, units, (value == 1) ? "" : "s"); - *is_zero = FALSE; + *is_zero = false; return cp + strlen(cp); } @@ -728,7 +728,7 @@ AddPostgresIntPart(char *cp, int value, const char *units, * tad bizarre but it's how it worked before... 
*/ *is_before = (value < 0); - *is_zero = FALSE; + *is_zero = false; return cp + strlen(cp); } @@ -779,8 +779,8 @@ EncodeInterval(struct /* pg_ */ tm *tm, fsec_t fsec, int style, char *str) int hour = tm->tm_hour; int min = tm->tm_min; int sec = tm->tm_sec; - bool is_before = FALSE; - bool is_zero = TRUE; + bool is_before = false; + bool is_zero = true; /* * The sign of year and month are guaranteed to match, since they are @@ -926,7 +926,7 @@ EncodeInterval(struct /* pg_ */ tm *tm, fsec_t fsec, int style, char *str) if (sec < 0 || (sec == 0 && fsec < 0)) { if (is_zero) - is_before = TRUE; + is_before = true; else if (!is_before) *cp++ = '-'; } @@ -936,7 +936,7 @@ EncodeInterval(struct /* pg_ */ tm *tm, fsec_t fsec, int style, char *str) cp += strlen(cp); sprintf(cp, " sec%s", (abs(sec) != 1 || fsec != 0) ? "s" : ""); - is_zero = FALSE; + is_zero = false; } /* identically zero? then put in a unitless zero... */ if (is_zero) diff --git a/src/interfaces/ecpg/pgtypeslib/numeric.c b/src/interfaces/ecpg/pgtypeslib/numeric.c index a8619168ff..6643242ab1 100644 --- a/src/interfaces/ecpg/pgtypeslib/numeric.c +++ b/src/interfaces/ecpg/pgtypeslib/numeric.c @@ -162,7 +162,7 @@ PGTYPESdecimal_new(void) static int set_var_from_str(char *str, char **ptr, numeric *dest) { - bool have_dp = FALSE; + bool have_dp = false; int i = 0; errno = 0; @@ -214,7 +214,7 @@ set_var_from_str(char *str, char **ptr, numeric *dest) if (*(*ptr) == '.') { - have_dp = TRUE; + have_dp = true; (*ptr)++; } @@ -241,7 +241,7 @@ set_var_from_str(char *str, char **ptr, numeric *dest) errno = PGTYPES_NUM_BAD_NUMERIC; return -1; } - have_dp = TRUE; + have_dp = true; (*ptr)++; } else diff --git a/src/interfaces/ecpg/preproc/pgc.l b/src/interfaces/ecpg/preproc/pgc.l index e35843ba4e..da4fc673d9 100644 --- a/src/interfaces/ecpg/preproc/pgc.l +++ b/src/interfaces/ecpg/preproc/pgc.l @@ -989,12 +989,12 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ return S_ANYTHING; } } -{exec_sql}{ifdef}{space}* { ifcond = TRUE; BEGIN(xcond); } +{exec_sql}{ifdef}{space}* { ifcond = true; BEGIN(xcond); } {informix_special}{ifdef}{space}* { /* are we simulating Informix? */ if (INFORMIX_MODE) { - ifcond = TRUE; + ifcond = true; BEGIN(xcond); } else @@ -1003,12 +1003,12 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ return S_ANYTHING; } } -{exec_sql}{ifndef}{space}* { ifcond = FALSE; BEGIN(xcond); } +{exec_sql}{ifndef}{space}* { ifcond = false; BEGIN(xcond); } {informix_special}{ifndef}{space}* { /* are we simulating Informix? */ if (INFORMIX_MODE) { - ifcond = FALSE; + ifcond = false; BEGIN(xcond); } else @@ -1026,7 +1026,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else preproc_tos--; - ifcond = TRUE; BEGIN(xcond); + ifcond = true; BEGIN(xcond); } {informix_special}{elif}{space}* { /* are we simulating Informix? 
*/ @@ -1039,7 +1039,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ else preproc_tos--; - ifcond = TRUE; + ifcond = true; BEGIN(xcond); } else @@ -1054,7 +1054,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ mmfatal(PARSE_ERROR, "more than one EXEC SQL ELSE"); else { - stacked_if_value[preproc_tos].else_branch = TRUE; + stacked_if_value[preproc_tos].else_branch = true; stacked_if_value[preproc_tos].condition = (stacked_if_value[preproc_tos-1].condition && !stacked_if_value[preproc_tos].condition); @@ -1073,7 +1073,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ mmfatal(PARSE_ERROR, "more than one EXEC SQL ELSE"); else { - stacked_if_value[preproc_tos].else_branch = TRUE; + stacked_if_value[preproc_tos].else_branch = true; stacked_if_value[preproc_tos].condition = (stacked_if_value[preproc_tos-1].condition && !stacked_if_value[preproc_tos].condition); @@ -1147,7 +1147,7 @@ cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\/\*[^*/]*\*+ defptr = defptr->next); preproc_tos++; - stacked_if_value[preproc_tos].else_branch = FALSE; + stacked_if_value[preproc_tos].else_branch = false; stacked_if_value[preproc_tos].condition = (defptr ? ifcond : !ifcond) && stacked_if_value[preproc_tos-1].condition; } @@ -1265,9 +1265,9 @@ lex_init(void) preproc_tos = 0; yylineno = 1; - ifcond = TRUE; + ifcond = true; stacked_if_value[preproc_tos].condition = ifcond; - stacked_if_value[preproc_tos].else_branch = FALSE; + stacked_if_value[preproc_tos].else_branch = false; /* initialize literal buffer to a reasonable but expansible size */ if (literalbuf == NULL) @@ -1412,7 +1412,7 @@ parse_include(void) } /* - * ecpg_isspace() --- return TRUE if flex scanner considers char whitespace + * ecpg_isspace() --- return true if flex scanner considers char whitespace */ static bool ecpg_isspace(char ch) diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c index 6bcf60a712..ada219032e 100644 --- a/src/interfaces/libpq/fe-connect.c +++ b/src/interfaces/libpq/fe-connect.c @@ -3586,7 +3586,7 @@ closePGconn(PGconn *conn) * Don't call PQsetnonblocking() because it will fail if it's unable to * flush the connection. */ - conn->nonblocking = FALSE; + conn->nonblocking = false; /* * Close the connection, reset all transient state, flush I/O buffers. @@ -3781,8 +3781,8 @@ PQfreeCancel(PGcancel *cancel) * PQcancel and PQrequestCancel: attempt to request cancellation of the * current operation. * - * The return value is TRUE if the cancel request was successfully - * dispatched, FALSE if not (in which case an error message is available). + * The return value is true if the cancel request was successfully + * dispatched, false if not (in which case an error message is available). * Note: successful dispatch is no guarantee that there will be any effect at * the backend. The application must read the operation result as usual. * @@ -3871,7 +3871,7 @@ internal_cancel(SockAddr *raddr, int be_pid, int be_key, /* All done */ closesocket(tmpsock); SOCK_ERRNO_SET(save_errno); - return TRUE; + return true; cancel_errReturn: @@ -3889,13 +3889,13 @@ internal_cancel(SockAddr *raddr, int be_pid, int be_key, if (tmpsock != PGINVALID_SOCKET) closesocket(tmpsock); SOCK_ERRNO_SET(save_errno); - return FALSE; + return false; } /* * PQcancel: request query cancel * - * Returns TRUE if able to send the cancel request, FALSE if not. + * Returns true if able to send the cancel request, false if not. 
* * On failure, an error message is stored in *errbuf, which must be of size * errbufsize (recommended size is 256 bytes). *errbuf is not changed on @@ -3907,7 +3907,7 @@ PQcancel(PGcancel *cancel, char *errbuf, int errbufsize) if (!cancel) { strlcpy(errbuf, "PQcancel() -- no cancel object supplied", errbufsize); - return FALSE; + return false; } return internal_cancel(&cancel->raddr, cancel->be_pid, cancel->be_key, @@ -3917,7 +3917,7 @@ PQcancel(PGcancel *cancel, char *errbuf, int errbufsize) /* * PQrequestCancel: old, not thread-safe function for requesting query cancel * - * Returns TRUE if able to send the cancel request, FALSE if not. + * Returns true if able to send the cancel request, false if not. * * On failure, the error message is saved in conn->errorMessage; this means * that this can't be used when there might be other active operations on @@ -3933,7 +3933,7 @@ PQrequestCancel(PGconn *conn) /* Check we have an open connection */ if (!conn) - return FALSE; + return false; if (conn->sock == PGINVALID_SOCKET) { @@ -3942,7 +3942,7 @@ PQrequestCancel(PGconn *conn) conn->errorMessage.maxlen); conn->errorMessage.len = strlen(conn->errorMessage.data); - return FALSE; + return false; } r = internal_cancel(&conn->raddr, conn->be_pid, conn->be_key, @@ -4781,7 +4781,7 @@ conninfo_init(PQExpBuffer errorMessage) * Returns a malloc'd PQconninfoOption array, if parsing is successful. * Otherwise, NULL is returned and an error message is left in errorMessage. * - * If use_defaults is TRUE, default values are filled in (from a service file, + * If use_defaults is true, default values are filled in (from a service file, * environment variables, etc). */ static PQconninfoOption * @@ -5004,7 +5004,7 @@ conninfo_parse(const char *conninfo, PQExpBuffer errorMessage, * If not successful, NULL is returned and an error message is * left in errorMessage. * Defaults are supplied (from a service file, environment variables, etc) - * for unspecified options, but only if use_defaults is TRUE. + * for unspecified options, but only if use_defaults is true. * * If expand_dbname is non-zero, and the value passed for the first occurrence * of "dbname" keyword is a connection string (as indicated by @@ -5175,7 +5175,7 @@ conninfo_array_parse(const char *const *keywords, const char *const *values, * * Defaults are obtained from a service file, environment variables, etc. * - * Returns TRUE if successful, otherwise FALSE; errorMessage, if supplied, + * Returns true if successful, otherwise false; errorMessage, if supplied, * is filled in upon failure. Note that failure to locate a default value * is not an error condition here --- we just leave the option's value as * NULL. @@ -5826,7 +5826,7 @@ conninfo_getval(PQconninfoOption *connOptions, * * If not successful, returns NULL and fills errorMessage accordingly. * However, if the reason of failure is an invalid keyword being passed and - * ignoreMissing is TRUE, errorMessage will be left untouched. + * ignoreMissing is true, errorMessage will be left untouched. */ static PQconninfoOption * conninfo_storeval(PQconninfoOption *connOptions, diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c index c24bce62dd..66530870d4 100644 --- a/src/interfaces/libpq/fe-exec.c +++ b/src/interfaces/libpq/fe-exec.c @@ -231,17 +231,17 @@ PQsetResultAttrs(PGresult *res, int numAttributes, PGresAttDesc *attDescs) /* If attrs already exist, they cannot be overwritten. 
*/ if (!res || res->numAttributes > 0) - return FALSE; + return false; /* ignore no-op request */ if (numAttributes <= 0 || !attDescs) - return TRUE; + return true; res->attDescs = (PGresAttDesc *) PQresultAlloc(res, numAttributes * sizeof(PGresAttDesc)); if (!res->attDescs) - return FALSE; + return false; res->numAttributes = numAttributes; memcpy(res->attDescs, attDescs, numAttributes * sizeof(PGresAttDesc)); @@ -256,13 +256,13 @@ PQsetResultAttrs(PGresult *res, int numAttributes, PGresAttDesc *attDescs) res->attDescs[i].name = res->null_field; if (!res->attDescs[i].name) - return FALSE; + return false; if (res->attDescs[i].format == 0) res->binary = 0; } - return TRUE; + return true; } /* @@ -368,7 +368,7 @@ PQcopyResult(const PGresult *src, int flags) PQclear(dest); return NULL; } - dest->events[i].resultInitialized = TRUE; + dest->events[i].resultInitialized = true; } } @@ -398,7 +398,7 @@ dupEvents(PGEvent *events, int count) newEvents[i].proc = events[i].proc; newEvents[i].passThrough = events[i].passThrough; newEvents[i].data = NULL; - newEvents[i].resultInitialized = FALSE; + newEvents[i].resultInitialized = false; newEvents[i].name = strdup(events[i].name); if (!newEvents[i].name) { @@ -428,7 +428,7 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) /* Note that this check also protects us against null "res" */ if (!check_field_number(res, field_num)) - return FALSE; + return false; /* Invalid tup_num, must be <= ntups */ if (tup_num < 0 || tup_num > res->ntups) @@ -436,7 +436,7 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) pqInternalNotice(&res->noticeHooks, "row number %d is out of range 0..%d", tup_num, res->ntups); - return FALSE; + return false; } /* need to allocate a new tuple? */ @@ -447,7 +447,7 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) tup = (PGresAttValue *) pqResultAlloc(res, res->numAttributes * sizeof(PGresAttValue), - TRUE); + true); if (!tup) goto fail; @@ -479,7 +479,7 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) } else { - attval->value = (char *) pqResultAlloc(res, len + 1, TRUE); + attval->value = (char *) pqResultAlloc(res, len + 1, true); if (!attval->value) goto fail; attval->len = len; @@ -487,7 +487,7 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) attval->value[len] = '\0'; } - return TRUE; + return true; /* * Report failure via pqInternalNotice. If preceding code didn't provide @@ -498,7 +498,7 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) errmsg = libpq_gettext("out of memory"); pqInternalNotice(&res->noticeHooks, "%s", errmsg); - return FALSE; + return false; } /* @@ -510,7 +510,7 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len) void * PQresultAlloc(PGresult *res, size_t nBytes) { - return pqResultAlloc(res, nBytes, TRUE); + return pqResultAlloc(res, nBytes, true); } /* @@ -622,7 +622,7 @@ pqResultAlloc(PGresult *res, size_t nBytes, bool isBinary) char * pqResultStrdup(PGresult *res, const char *str) { - char *space = (char *) pqResultAlloc(res, strlen(str) + 1, FALSE); + char *space = (char *) pqResultAlloc(res, strlen(str) + 1, false); if (space) strcpy(space, str); @@ -852,7 +852,7 @@ pqInternalNotice(const PGNoticeHooks *hooks, const char *fmt,...) * Result text is always just the primary message + newline. If we can't * allocate it, don't bother invoking the receiver. 
*/ - res->errMsg = (char *) pqResultAlloc(res, strlen(msgBuf) + 2, FALSE); + res->errMsg = (char *) pqResultAlloc(res, strlen(msgBuf) + 2, false); if (res->errMsg) { sprintf(res->errMsg, "%s\n", msgBuf); @@ -868,7 +868,7 @@ pqInternalNotice(const PGNoticeHooks *hooks, const char *fmt,...) /* * pqAddTuple * add a row pointer to the PGresult structure, growing it if necessary - * Returns TRUE if OK, FALSE if an error prevented adding the row + * Returns true if OK, false if an error prevented adding the row * * On error, *errmsgp can be set to an error string to be returned. * If it is left NULL, the error is presumed to be "out of memory". @@ -903,7 +903,7 @@ pqAddTuple(PGresult *res, PGresAttValue *tup, const char **errmsgp) else { *errmsgp = libpq_gettext("PGresult cannot support more than INT_MAX tuples"); - return FALSE; + return false; } /* @@ -915,7 +915,7 @@ pqAddTuple(PGresult *res, PGresAttValue *tup, const char **errmsgp) if (newSize > SIZE_MAX / sizeof(PGresAttValue *)) { *errmsgp = libpq_gettext("size_t overflow"); - return FALSE; + return false; } #endif @@ -926,13 +926,13 @@ pqAddTuple(PGresult *res, PGresAttValue *tup, const char **errmsgp) newTuples = (PGresAttValue **) realloc(res->tuples, newSize * sizeof(PGresAttValue *)); if (!newTuples) - return FALSE; /* malloc or realloc failed */ + return false; /* malloc or realloc failed */ res->tupArrSize = newSize; res->tuples = newTuples; } res->tuples[res->ntups] = tup; res->ntups++; - return TRUE; + return true; } /* @@ -947,7 +947,7 @@ pqSaveMessageField(PGresult *res, char code, const char *value) pqResultAlloc(res, offsetof(PGMessageField, contents) + strlen(value) + 1, - TRUE); + true); if (!pfield) return; /* out of memory? */ pfield->code = code; @@ -1111,7 +1111,7 @@ pqRowProcessor(PGconn *conn, const char **errmsgp) * memory for gettext() to do anything. */ tup = (PGresAttValue *) - pqResultAlloc(res, nfields * sizeof(PGresAttValue), TRUE); + pqResultAlloc(res, nfields * sizeof(PGresAttValue), true); if (tup == NULL) goto fail; @@ -1725,14 +1725,14 @@ parseInput(PGconn *conn) /* * PQisBusy - * Return TRUE if PQgetResult would block waiting for input. + * Return true if PQgetResult would block waiting for input. */ int PQisBusy(PGconn *conn) { if (!conn) - return FALSE; + return false; /* Parse any available data, if our state permits. */ parseInput(conn); @@ -1771,7 +1771,7 @@ PQgetResult(PGconn *conn) */ while ((flushResult = pqFlush(conn)) > 0) { - if (pqWait(FALSE, TRUE, conn)) + if (pqWait(false, true, conn)) { flushResult = -1; break; @@ -1780,7 +1780,7 @@ PQgetResult(PGconn *conn) /* Wait for some more data, and load it. */ if (flushResult || - pqWait(TRUE, FALSE, conn) || + pqWait(true, false, conn) || pqReadData(conn) < 0) { /* @@ -1844,7 +1844,7 @@ PQgetResult(PGconn *conn) res->resultStatus = PGRES_FATAL_ERROR; break; } - res->events[i].resultInitialized = TRUE; + res->events[i].resultInitialized = true; } } @@ -2746,22 +2746,22 @@ PQbinaryTuples(const PGresult *res) /* * Helper routines to range-check field numbers and tuple numbers. - * Return TRUE if OK, FALSE if not + * Return true if OK, false if not */ static int check_field_number(const PGresult *res, int field_num) { if (!res) - return FALSE; /* no way to display error message... */ + return false; /* no way to display error message... 
*/ if (field_num < 0 || field_num >= res->numAttributes) { pqInternalNotice(&res->noticeHooks, "column number %d is out of range 0..%d", field_num, res->numAttributes - 1); - return FALSE; + return false; } - return TRUE; + return true; } static int @@ -2769,38 +2769,38 @@ check_tuple_field_number(const PGresult *res, int tup_num, int field_num) { if (!res) - return FALSE; /* no way to display error message... */ + return false; /* no way to display error message... */ if (tup_num < 0 || tup_num >= res->ntups) { pqInternalNotice(&res->noticeHooks, "row number %d is out of range 0..%d", tup_num, res->ntups - 1); - return FALSE; + return false; } if (field_num < 0 || field_num >= res->numAttributes) { pqInternalNotice(&res->noticeHooks, "column number %d is out of range 0..%d", field_num, res->numAttributes - 1); - return FALSE; + return false; } - return TRUE; + return true; } static int check_param_number(const PGresult *res, int param_num) { if (!res) - return FALSE; /* no way to display error message... */ + return false; /* no way to display error message... */ if (param_num < 0 || param_num >= res->numParameters) { pqInternalNotice(&res->noticeHooks, "parameter number %d is out of range 0..%d", param_num, res->numParameters - 1); - return FALSE; + return false; } - return TRUE; + return true; } /* @@ -3177,8 +3177,8 @@ PQparamtype(const PGresult *res, int param_num) /* PQsetnonblocking: - * sets the PGconn's database connection non-blocking if the arg is TRUE - * or makes it blocking if the arg is FALSE, this will not protect + * sets the PGconn's database connection non-blocking if the arg is true + * or makes it blocking if the arg is false, this will not protect * you from PQexec(), you'll only be safe when using the non-blocking API. * Needs to be called only on a connected database connection. */ @@ -3190,7 +3190,7 @@ PQsetnonblocking(PGconn *conn, int arg) if (!conn || conn->status == CONNECTION_BAD) return -1; - barg = (arg ? TRUE : FALSE); + barg = (arg ? 
true : false); /* early out if the socket is already in the state requested */ if (barg == conn->nonblocking) @@ -3213,7 +3213,7 @@ PQsetnonblocking(PGconn *conn, int arg) /* * return the blocking status of the database connection - * TRUE == nonblocking, FALSE == blocking + * true == nonblocking, false == blocking */ int PQisnonblocking(const PGconn *conn) diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c index 41b1749d07..f885106615 100644 --- a/src/interfaces/libpq/fe-misc.c +++ b/src/interfaces/libpq/fe-misc.c @@ -934,7 +934,7 @@ pqSendSome(PGconn *conn, int len) break; } - if (pqWait(TRUE, TRUE, conn)) + if (pqWait(true, true, conn)) { result = -1; break; diff --git a/src/interfaces/libpq/fe-protocol2.c b/src/interfaces/libpq/fe-protocol2.c index 1320d18a99..5335a91440 100644 --- a/src/interfaces/libpq/fe-protocol2.c +++ b/src/interfaces/libpq/fe-protocol2.c @@ -583,7 +583,7 @@ pqParseInput2(PGconn *conn) if (conn->result != NULL) { /* Read another tuple of a normal query response */ - if (getAnotherTuple(conn, FALSE)) + if (getAnotherTuple(conn, false)) return; /* getAnotherTuple() moves inStart itself */ continue; @@ -601,7 +601,7 @@ pqParseInput2(PGconn *conn) if (conn->result != NULL) { /* Read another tuple of a normal query response */ - if (getAnotherTuple(conn, TRUE)) + if (getAnotherTuple(conn, true)) return; /* getAnotherTuple() moves inStart itself */ continue; @@ -679,7 +679,7 @@ getRowDescriptions(PGconn *conn) if (nfields > 0) { result->attDescs = (PGresAttDesc *) - pqResultAlloc(result, nfields * sizeof(PGresAttDesc), TRUE); + pqResultAlloc(result, nfields * sizeof(PGresAttDesc), true); if (!result->attDescs) { errmsg = NULL; /* means "out of memory", see below */ @@ -1218,7 +1218,7 @@ pqGetCopyData2(PGconn *conn, char **buffer, int async) if (async) return 0; /* Need to load more data */ - if (pqWait(TRUE, FALSE, conn) || + if (pqWait(true, false, conn) || pqReadData(conn) < 0) return -2; } @@ -1263,7 +1263,7 @@ pqGetline2(PGconn *conn, char *s, int maxlen) else { /* need to load more data */ - if (pqWait(TRUE, FALSE, conn) || + if (pqWait(true, false, conn) || pqReadData(conn) < 0) { result = EOF; @@ -1484,7 +1484,7 @@ pqFunctionCall2(PGconn *conn, Oid fnid, if (needInput) { /* Wait for some data to arrive (or for the channel to close) */ - if (pqWait(TRUE, FALSE, conn) || + if (pqWait(true, false, conn) || pqReadData(conn) < 0) break; } diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c index 21fb8f2f21..0c5099caac 100644 --- a/src/interfaces/libpq/fe-protocol3.c +++ b/src/interfaces/libpq/fe-protocol3.c @@ -510,7 +510,7 @@ getRowDescriptions(PGconn *conn, int msgLength) if (nfields > 0) { result->attDescs = (PGresAttDesc *) - pqResultAlloc(result, nfields * sizeof(PGresAttDesc), TRUE); + pqResultAlloc(result, nfields * sizeof(PGresAttDesc), true); if (!result->attDescs) { errmsg = NULL; /* means "out of memory", see below */ @@ -668,7 +668,7 @@ getParamDescriptions(PGconn *conn, int msgLength) if (nparams > 0) { result->paramDescs = (PGresParamDesc *) - pqResultAlloc(result, nparams * sizeof(PGresParamDesc), TRUE); + pqResultAlloc(result, nparams * sizeof(PGresParamDesc), true); if (!result->paramDescs) goto advance_and_error; MemSet(result->paramDescs, 0, nparams * sizeof(PGresParamDesc)); @@ -1466,7 +1466,7 @@ getCopyStart(PGconn *conn, ExecStatusType copytype) if (nfields > 0) { result->attDescs = (PGresAttDesc *) - pqResultAlloc(result, nfields * sizeof(PGresAttDesc), TRUE); + pqResultAlloc(result, 
nfields * sizeof(PGresAttDesc), true); if (!result->attDescs) goto failure; MemSet(result->attDescs, 0, nfields * sizeof(PGresAttDesc)); @@ -1657,7 +1657,7 @@ pqGetCopyData3(PGconn *conn, char **buffer, int async) if (async) return 0; /* Need to load more data */ - if (pqWait(TRUE, FALSE, conn) || + if (pqWait(true, false, conn) || pqReadData(conn) < 0) return -2; continue; @@ -1715,7 +1715,7 @@ pqGetline3(PGconn *conn, char *s, int maxlen) while ((status = PQgetlineAsync(conn, s, maxlen - 1)) == 0) { /* need to load more data */ - if (pqWait(TRUE, FALSE, conn) || + if (pqWait(true, false, conn) || pqReadData(conn) < 0) { *s = '\0'; @@ -1968,7 +1968,7 @@ pqFunctionCall3(PGconn *conn, Oid fnid, if (needInput) { /* Wait for some data to arrive (or for the channel to close) */ - if (pqWait(TRUE, FALSE, conn) || + if (pqWait(true, false, conn) || pqReadData(conn) < 0) break; } diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c index 7c2d0cb4e6..c122c63106 100644 --- a/src/interfaces/libpq/fe-secure.c +++ b/src/interfaces/libpq/fe-secure.c @@ -465,10 +465,10 @@ pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending) * As long as it doesn't queue multiple events, we're OK because the caller * can't tell the difference. * - * The caller should say got_epipe = FALSE if it is certain that it + * The caller should say got_epipe = false if it is certain that it * didn't get an EPIPE error; in that case we'll skip the clear operation * and things are definitely OK, queuing or no. If it got one or might have - * gotten one, pass got_epipe = TRUE. + * gotten one, pass got_epipe = true. * * We do not want this to change errno, since if it did that could lose * the error code from a preceding send(). We essentially assume that if diff --git a/src/interfaces/libpq/libpq-events.c b/src/interfaces/libpq/libpq-events.c index e533017a03..883e2af8f4 100644 --- a/src/interfaces/libpq/libpq-events.c +++ b/src/interfaces/libpq/libpq-events.c @@ -44,12 +44,12 @@ PQregisterEventProc(PGconn *conn, PGEventProc proc, PGEventRegister regevt; if (!proc || !conn || !name || !*name) - return FALSE; /* bad arguments */ + return false; /* bad arguments */ for (i = 0; i < conn->nEvents; i++) { if (conn->events[i].proc == proc) - return FALSE; /* already registered */ + return false; /* already registered */ } if (conn->nEvents >= conn->eventArraySize) @@ -64,7 +64,7 @@ PQregisterEventProc(PGconn *conn, PGEventProc proc, e = (PGEvent *) malloc(newSize * sizeof(PGEvent)); if (!e) - return FALSE; + return false; conn->eventArraySize = newSize; conn->events = e; @@ -73,10 +73,10 @@ PQregisterEventProc(PGconn *conn, PGEventProc proc, conn->events[conn->nEvents].proc = proc; conn->events[conn->nEvents].name = strdup(name); if (!conn->events[conn->nEvents].name) - return FALSE; + return false; conn->events[conn->nEvents].passThrough = passThrough; conn->events[conn->nEvents].data = NULL; - conn->events[conn->nEvents].resultInitialized = FALSE; + conn->events[conn->nEvents].resultInitialized = false; conn->nEvents++; regevt.conn = conn; @@ -84,10 +84,10 @@ PQregisterEventProc(PGconn *conn, PGEventProc proc, { conn->nEvents--; free(conn->events[conn->nEvents].name); - return FALSE; + return false; } - return TRUE; + return true; } /* @@ -100,18 +100,18 @@ PQsetInstanceData(PGconn *conn, PGEventProc proc, void *data) int i; if (!conn || !proc) - return FALSE; + return false; for (i = 0; i < conn->nEvents; i++) { if (conn->events[i].proc == proc) { conn->events[i].data = data; - return TRUE; + return 
true; } } - return FALSE; + return false; } /* @@ -144,18 +144,18 @@ PQresultSetInstanceData(PGresult *result, PGEventProc proc, void *data) int i; if (!result || !proc) - return FALSE; + return false; for (i = 0; i < result->nEvents; i++) { if (result->events[i].proc == proc) { result->events[i].data = data; - return TRUE; + return true; } } - return FALSE; + return false; } /* @@ -187,7 +187,7 @@ PQfireResultCreateEvents(PGconn *conn, PGresult *res) int i; if (!res) - return FALSE; + return false; for (i = 0; i < res->nEvents; i++) { @@ -199,11 +199,11 @@ PQfireResultCreateEvents(PGconn *conn, PGresult *res) evt.result = res; if (!res->events[i].proc(PGEVT_RESULTCREATE, &evt, res->events[i].passThrough)) - return FALSE; + return false; - res->events[i].resultInitialized = TRUE; + res->events[i].resultInitialized = true; } } - return TRUE; + return true; } diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index 9931ee038f..d0afa59242 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -869,7 +869,7 @@ plpgsql_compile_inline(char *proc_source) /* * Remember if function is STABLE/IMMUTABLE. XXX would it be better to - * set this TRUE inside a read-only transaction? Not clear. + * set this true inside a read-only transaction? Not clear. */ function->fn_readonly = false; @@ -1350,8 +1350,8 @@ make_datum_param(PLpgSQL_expr *expr, int dno, int location) * yytxt is the original token text; we need this to check for quoting, * so that later checks for unreserved keywords work properly. * - * If recognized as a variable, fill in *wdatum and return TRUE; - * if not recognized, fill in *word and return FALSE. + * If recognized as a variable, fill in *wdatum and return true; + * if not recognized, fill in *word and return false. * (Note: those two pointers actually point to members of the same union, * but for notational reasons we pass them separately.) * ---------- diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index e605ec829e..7a6dd15460 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -5491,12 +5491,12 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, * a Datum by directly calling ExecEvalExpr(). * * If successful, store results into *result, *isNull, *rettype, *rettypmod - * and return TRUE. If the expression cannot be handled by simple evaluation, - * return FALSE. + * and return true. If the expression cannot be handled by simple evaluation, + * return false. * * Because we only store one execution tree for a simple expression, we * can't handle recursion cases. So, if we see the tree is already busy - * with an evaluation in the current xact, we just return FALSE and let the + * with an evaluation in the current xact, we just return false and let the * caller run the expression the hard way. (Other alternatives such as * creating a new tree for a recursive call either introduce memory leaks, * or add enough bookkeeping to be doubtful wins anyway.) Another case that @@ -6309,7 +6309,7 @@ exec_cast_value(PLpgSQL_execstate *estate, * or NULL if the cast is a mere no-op relabeling. If there's work to be * done, the cast_exprstate field contains an expression evaluation tree * based on a CaseTestExpr input, and the cast_in_use field should be set - * TRUE while executing it. + * true while executing it. 
* ---------- */ static plpgsql_CastHashEntry * diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c index cd44a8e9a3..23f54e1c21 100644 --- a/src/pl/plpgsql/src/pl_funcs.c +++ b/src/pl/plpgsql/src/pl_funcs.c @@ -113,7 +113,7 @@ plpgsql_ns_additem(PLpgSQL_nsitem_type itemtype, int itemno, const char *name) * * Note that this only searches for variables, not labels. * - * If localmode is TRUE, only the topmost block level is searched. + * If localmode is true, only the topmost block level is searched. * * name1 must be non-NULL. Pass NULL for name2 and/or name3 if parsing a name * with fewer than three components. From c5269472ea9bb4a6fbb8a0510f7d676d725933ab Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 8 Nov 2017 16:50:12 -0500 Subject: [PATCH 0509/1087] Fix two violations of the ResourceOwnerEnlarge/Remember protocol. The point of having separate ResourceOwnerEnlargeFoo and ResourceOwnerRememberFoo functions is so that resource allocation can happen in between. Doing it in some other order is just wrong. OpenTemporaryFile() did open(), enlarge, remember, which would leak the open file if the enlarge step ran out of memory. Because fd.c has its own layer of resource-remembering, the consequences look like they'd be limited to an intratransaction FD leak, but it's still not good. IncrBufferRefCount() did enlarge, remember, incr-refcount, which would blow up if the incr-refcount step ever failed. It was safe enough when written, but since the introduction of PrivateRefCountHash, I think the assumption that no error could happen there is pretty shaky. The odds of real problems from either bug are probably small, but still, back-patch to supported branches. Thomas Munro and Tom Lane, per a comment from Andres Freund --- src/backend/storage/buffer/bufmgr.c | 2 +- src/backend/storage/file/fd.c | 10 ++++++++-- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c index 572f413d6e..26df7cb38f 100644 --- a/src/backend/storage/buffer/bufmgr.c +++ b/src/backend/storage/buffer/bufmgr.c @@ -3348,7 +3348,6 @@ IncrBufferRefCount(Buffer buffer) { Assert(BufferIsPinned(buffer)); ResourceOwnerEnlargeBuffers(CurrentResourceOwner); - ResourceOwnerRememberBuffer(CurrentResourceOwner, buffer); if (BufferIsLocal(buffer)) LocalRefCount[-buffer - 1]++; else @@ -3359,6 +3358,7 @@ IncrBufferRefCount(Buffer buffer) Assert(ref != NULL); ref->refcount++; } + ResourceOwnerRememberBuffer(CurrentResourceOwner, buffer); } /* diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index 3849bfb15d..aa2fe2c6c0 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -1397,6 +1397,13 @@ OpenTemporaryFile(bool interXact) { File file = 0; + /* + * Make sure the current resource owner has space for this File before we + * open it, if we'll be registering it below. + */ + if (!interXact) + ResourceOwnerEnlargeFiles(CurrentResourceOwner); + /* * If some temp tablespace(s) have been given to us, try to use the next * one. 
If a given tablespace can't be found, we silently fall back to @@ -1433,9 +1440,8 @@ OpenTemporaryFile(bool interXact) { VfdCache[file].fdstate |= FD_XACT_TEMPORARY; - ResourceOwnerEnlargeFiles(CurrentResourceOwner); - ResourceOwnerRememberFile(CurrentResourceOwner, file); VfdCache[file].resowner = CurrentResourceOwner; + ResourceOwnerRememberFile(CurrentResourceOwner, file); /* ensure cleanup happens at eoxact */ have_xact_temporary_files = true; From bd65e0c62486e6108a7dc824f918754a13072f7a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 8 Nov 2017 17:20:53 -0500 Subject: [PATCH 0510/1087] Doc: fix erroneous example. The grammar requires these options to appear the other way 'round. jotpe@posteo.de Discussion: https://postgr.es/m/78933bd0-45ce-690e-b832-a328dd1a5567@posteo.de --- doc/src/sgml/ddl.sgml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 03cbaa60ab..3f41939bea 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -3095,8 +3095,8 @@ CREATE TABLE measurement_y2007m12 PARTITION OF measurement CREATE TABLE measurement_y2008m01 PARTITION OF measurement FOR VALUES FROM ('2008-01-01') TO ('2008-02-01') - TABLESPACE fasttablespace - WITH (parallel_workers = 4); + WITH (parallel_workers = 4) + TABLESPACE fasttablespace; From 9b9cb3c4534d717c1c95758670198ebbf8a20af2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 8 Nov 2017 17:47:14 -0500 Subject: [PATCH 0511/1087] Allow --with-bonjour to work with non-macOS implementations of Bonjour. On macOS the relevant functions require no special library, but elsewhere we need to pull in libdns_sd. Back-patch to supported branches. No docs change since the docs do not suggest that this is a Mac-only feature. Luke Lonergan Discussion: https://postgr.es/m/2D8331C5-D64F-44C1-8717-63EDC6EAF7EB@brightforge.com --- configure | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++ configure.in | 6 ++++-- 2 files changed, 62 insertions(+), 2 deletions(-) diff --git a/configure b/configure index b8995ad547..59ce1a75d4 100755 --- a/configure +++ b/configure @@ -11124,6 +11124,64 @@ else fi + { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing DNSServiceRefSockFD" >&5 +$as_echo_n "checking for library containing DNSServiceRefSockFD... " >&6; } +if ${ac_cv_search_DNSServiceRefSockFD+:} false; then : + $as_echo_n "(cached) " >&6 +else + ac_func_search_save_LIBS=$LIBS +cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ + +/* Override any GCC internal prototype to avoid an error. + Use char because int might match the return type of a GCC + builtin and then its argument prototype would still apply. 
*/ +#ifdef __cplusplus +extern "C" +#endif +char DNSServiceRefSockFD (); +int +main () +{ +return DNSServiceRefSockFD (); + ; + return 0; +} +_ACEOF +for ac_lib in '' dns_sd; do + if test -z "$ac_lib"; then + ac_res="none required" + else + ac_res=-l$ac_lib + LIBS="-l$ac_lib $ac_func_search_save_LIBS" + fi + if ac_fn_c_try_link "$LINENO"; then : + ac_cv_search_DNSServiceRefSockFD=$ac_res +fi +rm -f core conftest.err conftest.$ac_objext \ + conftest$ac_exeext + if ${ac_cv_search_DNSServiceRefSockFD+:} false; then : + break +fi +done +if ${ac_cv_search_DNSServiceRefSockFD+:} false; then : + +else + ac_cv_search_DNSServiceRefSockFD=no +fi +rm conftest.$ac_ext +LIBS=$ac_func_search_save_LIBS +fi +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_DNSServiceRefSockFD" >&5 +$as_echo "$ac_cv_search_DNSServiceRefSockFD" >&6; } +ac_res=$ac_cv_search_DNSServiceRefSockFD +if test "$ac_res" != no; then : + test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" + +else + as_fn_error $? "could not find function 'DNSServiceRefSockFD' required for Bonjour" "$LINENO" 5 +fi + fi # for contrib/uuid-ossp diff --git a/configure.in b/configure.in index 2b209d80ab..bed1a2d846 100644 --- a/configure.in +++ b/configure.in @@ -1061,8 +1061,8 @@ if test "$with_openssl" = yes ; then AC_CHECK_LIB(crypto, CRYPTO_new_ex_data, [], [AC_MSG_ERROR([library 'crypto' is required for OpenSSL])]) AC_CHECK_LIB(ssl, SSL_new, [], [AC_MSG_ERROR([library 'ssl' is required for OpenSSL])]) else - AC_SEARCH_LIBS(CRYPTO_new_ex_data, eay32 crypto, [], [AC_MSG_ERROR([library 'eay32' or 'crypto' is required for OpenSSL])]) - AC_SEARCH_LIBS(SSL_new, ssleay32 ssl, [], [AC_MSG_ERROR([library 'ssleay32' or 'ssl' is required for OpenSSL])]) + AC_SEARCH_LIBS(CRYPTO_new_ex_data, [eay32 crypto], [], [AC_MSG_ERROR([library 'eay32' or 'crypto' is required for OpenSSL])]) + AC_SEARCH_LIBS(SSL_new, [ssleay32 ssl], [], [AC_MSG_ERROR([library 'ssleay32' or 'ssl' is required for OpenSSL])]) fi AC_CHECK_FUNCS([SSL_get_current_compression]) # Functions introduced in OpenSSL 1.1.0. We used to check for @@ -1262,6 +1262,8 @@ fi if test "$with_bonjour" = yes ; then AC_CHECK_HEADER(dns_sd.h, [], [AC_MSG_ERROR([header file is required for Bonjour])]) + AC_SEARCH_LIBS(DNSServiceRefSockFD, dns_sd, [], + [AC_MSG_ERROR([could not find function 'DNSServiceRefSockFD' required for Bonjour])]) fi # for contrib/uuid-ossp From 20d9adab60754ac71b0b500c91c45e12e940b3ce Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 9 Nov 2017 11:00:36 -0500 Subject: [PATCH 0512/1087] Revert "Allow --with-bonjour to work with non-macOS implementations of Bonjour." Upon further review, our Bonjour code doesn't actually work with the Avahi not-too-compatible compatibility library. While you can get it to work on non-macOS platforms if you link to Apple's own mDNSResponder code, there don't seem to be many people who care about that. Leaving in the AC_SEARCH_LIBS call seems more likely to encourage people to build broken configurations than to do anything very useful. Hence, remove the AC_SEARCH_LIBS call and put in a warning comment instead. 
Discussion: https://postgr.es/m/2D8331C5-D64F-44C1-8717-63EDC6EAF7EB@brightforge.com --- configure | 58 ---------------------------------------------------- configure.in | 8 ++++++-- 2 files changed, 6 insertions(+), 60 deletions(-) diff --git a/configure b/configure index 59ce1a75d4..b8995ad547 100755 --- a/configure +++ b/configure @@ -11124,64 +11124,6 @@ else fi - { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing DNSServiceRefSockFD" >&5 -$as_echo_n "checking for library containing DNSServiceRefSockFD... " >&6; } -if ${ac_cv_search_DNSServiceRefSockFD+:} false; then : - $as_echo_n "(cached) " >&6 -else - ac_func_search_save_LIBS=$LIBS -cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. */ - -/* Override any GCC internal prototype to avoid an error. - Use char because int might match the return type of a GCC - builtin and then its argument prototype would still apply. */ -#ifdef __cplusplus -extern "C" -#endif -char DNSServiceRefSockFD (); -int -main () -{ -return DNSServiceRefSockFD (); - ; - return 0; -} -_ACEOF -for ac_lib in '' dns_sd; do - if test -z "$ac_lib"; then - ac_res="none required" - else - ac_res=-l$ac_lib - LIBS="-l$ac_lib $ac_func_search_save_LIBS" - fi - if ac_fn_c_try_link "$LINENO"; then : - ac_cv_search_DNSServiceRefSockFD=$ac_res -fi -rm -f core conftest.err conftest.$ac_objext \ - conftest$ac_exeext - if ${ac_cv_search_DNSServiceRefSockFD+:} false; then : - break -fi -done -if ${ac_cv_search_DNSServiceRefSockFD+:} false; then : - -else - ac_cv_search_DNSServiceRefSockFD=no -fi -rm conftest.$ac_ext -LIBS=$ac_func_search_save_LIBS -fi -{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_DNSServiceRefSockFD" >&5 -$as_echo "$ac_cv_search_DNSServiceRefSockFD" >&6; } -ac_res=$ac_cv_search_DNSServiceRefSockFD -if test "$ac_res" != no; then : - test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" - -else - as_fn_error $? "could not find function 'DNSServiceRefSockFD' required for Bonjour" "$LINENO" 5 -fi - fi # for contrib/uuid-ossp diff --git a/configure.in b/configure.in index bed1a2d846..0e5aef37b4 100644 --- a/configure.in +++ b/configure.in @@ -1262,8 +1262,12 @@ fi if test "$with_bonjour" = yes ; then AC_CHECK_HEADER(dns_sd.h, [], [AC_MSG_ERROR([header file is required for Bonjour])]) - AC_SEARCH_LIBS(DNSServiceRefSockFD, dns_sd, [], - [AC_MSG_ERROR([could not find function 'DNSServiceRefSockFD' required for Bonjour])]) +dnl At some point we might add something like +dnl AC_SEARCH_LIBS(DNSServiceRegister, dns_sd) +dnl but right now, what that would mainly accomplish is to encourage +dnl people to try to use the avahi implementation, which does not work. +dnl If you want to use Apple's own Bonjour code on another platform, +dnl just add -ldns_sd to LIBS manually. fi # for contrib/uuid-ossp From 9be95ef156e7c2ae0924300acddd483504fa33b3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 9 Nov 2017 11:30:30 -0500 Subject: [PATCH 0513/1087] Fix bogus logic for checking executables' versions within pg_upgrade. Somebody messed up a refactoring here. As it stood, we'd check pg_ctl's --version output twice for each cluster. Worse, the first check for the new cluster's version happened before we'd done any validate_exec checks there, breaking the check ordering the code intended. A. 
Akenteva Discussion: https://postgr.es/m/f9266a85d918a3cf3a386b5148aee666@postgrespro.ru --- src/bin/pg_upgrade/exec.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c index 59ddc16d6b..810a5a0c3c 100644 --- a/src/bin/pg_upgrade/exec.c +++ b/src/bin/pg_upgrade/exec.c @@ -382,12 +382,11 @@ check_bin_dir(ClusterInfo *cluster) validate_exec(cluster->bindir, "pg_ctl"); /* - * Fetch the binary versions after checking for the existence of pg_ctl, - * this gives a correct error if the binary used itself for the version - * fetching is broken. + * Fetch the binary version after checking for the existence of pg_ctl. + * This way we report a useful error if the pg_ctl binary used for version + * fetching is missing/broken. */ - get_bin_version(&old_cluster); - get_bin_version(&new_cluster); + get_bin_version(cluster); /* pg_resetxlog has been renamed to pg_resetwal in version 10 */ if (GET_MAJOR_VERSION(cluster->bin_version) < 1000) From 6c3a7ba5bb0f960ed412b1c36e815f53347b3d79 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 9 Nov 2017 11:57:20 -0500 Subject: [PATCH 0514/1087] Fix typo in ALTER SYSTEM output. The header comment written into postgresql.auto.conf by ALTER SYSTEM should match what initdb put there originally. Feike Steenbergen Discussion: https://postgr.es/m/CAK_s-G0KcKdO=0hqZkwb3s+tqZuuHwWqmF5BDsmoO9FtX75r0g@mail.gmail.com --- src/backend/utils/misc/guc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index a609619f4d..da061023f5 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -6944,7 +6944,7 @@ write_auto_conf_file(int fd, const char *filename, ConfigVariable *head) /* Emit file header containing warning comment */ appendStringInfoString(&buf, "# Do not edit this file manually!\n"); - appendStringInfoString(&buf, "# It will be overwritten by ALTER SYSTEM command.\n"); + appendStringInfoString(&buf, "# It will be overwritten by the ALTER SYSTEM command.\n"); errno = 0; if (write(fd, buf.data, buf.len) != buf.len) From 5ecc0d738e5864848bbc2d1d97e56d5846624ba2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 9 Nov 2017 12:36:58 -0500 Subject: [PATCH 0515/1087] Restrict lo_import()/lo_export() via SQL permissions not hard-wired checks. While it's generally unwise to give permissions on these functions to anyone but a superuser, we've been moving away from hard-wired permission checks inside functions in favor of using the SQL permission system to control access. Bring lo_import() and lo_export() into compliance with that approach. In particular, this removes the manual configuration option ALLOW_DANGEROUS_LO_FUNCTIONS. That dates back to 1999 (commit 4cd4a54c8); it's unlikely anyone has used it in many years. Moreover, if you really want such behavior, now you can get it with GRANT ... TO PUBLIC instead. 
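For illustration, the equivalent control is now exercised with ordinary ACL commands rather than a compile-time switch; a minimal sketch, where the role name is hypothetical and the function signatures are the ones revoked in system_views.sql below:

GRANT EXECUTE ON FUNCTION lo_import(text) TO backup_admin;
GRANT EXECUTE ON FUNCTION lo_import(text, oid) TO backup_admin;
GRANT EXECUTE ON FUNCTION lo_export(oid, text) TO backup_admin;
-- or, to approximate the old ALLOW_DANGEROUS_LO_FUNCTIONS behavior:
-- GRANT EXECUTE ON FUNCTION lo_export(oid, text) TO PUBLIC;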
Michael Paquier Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com --- src/backend/catalog/system_views.sql | 10 ++++++++++ src/backend/libpq/be-fsstubs.c | 16 ---------------- src/include/catalog/catversion.h | 2 +- src/include/pg_config_manual.h | 10 ---------- src/test/regress/expected/privileges.out | 10 ++++++---- src/test/regress/sql/privileges.sql | 2 ++ 6 files changed, 19 insertions(+), 31 deletions(-) diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql index dc40cde424..394aea8e0f 100644 --- a/src/backend/catalog/system_views.sql +++ b/src/backend/catalog/system_views.sql @@ -1115,12 +1115,14 @@ LANGUAGE INTERNAL STRICT IMMUTABLE PARALLEL SAFE AS 'jsonb_insert'; +-- -- The default permissions for functions mean that anyone can execute them. -- A number of functions shouldn't be executable by just anyone, but rather -- than use explicit 'superuser()' checks in those functions, we use the GRANT -- system to REVOKE access to those functions at initdb time. Administrators -- can later change who can access these functions, or leave them as only -- available to superuser / cluster owner, if they choose. +-- REVOKE EXECUTE ON FUNCTION pg_start_backup(text, boolean, boolean) FROM public; REVOKE EXECUTE ON FUNCTION pg_stop_backup() FROM public; REVOKE EXECUTE ON FUNCTION pg_stop_backup(boolean, boolean) FROM public; @@ -1138,8 +1140,16 @@ REVOKE EXECUTE ON FUNCTION pg_stat_reset_shared(text) FROM public; REVOKE EXECUTE ON FUNCTION pg_stat_reset_single_table_counters(oid) FROM public; REVOKE EXECUTE ON FUNCTION pg_stat_reset_single_function_counters(oid) FROM public; +REVOKE EXECUTE ON FUNCTION lo_import(text) FROM public; +REVOKE EXECUTE ON FUNCTION lo_import(text, oid) FROM public; +REVOKE EXECUTE ON FUNCTION lo_export(oid, text) FROM public; + REVOKE EXECUTE ON FUNCTION pg_ls_logdir() FROM public; REVOKE EXECUTE ON FUNCTION pg_ls_waldir() FROM public; + +-- +-- We also set up some things as accessible to standard roles. 
+-- GRANT EXECUTE ON FUNCTION pg_ls_logdir() TO pg_monitor; GRANT EXECUTE ON FUNCTION pg_ls_waldir() TO pg_monitor; diff --git a/src/backend/libpq/be-fsstubs.c b/src/backend/libpq/be-fsstubs.c index 84c2d26402..50c70dd66d 100644 --- a/src/backend/libpq/be-fsstubs.c +++ b/src/backend/libpq/be-fsstubs.c @@ -448,14 +448,6 @@ lo_import_internal(text *filename, Oid lobjOid) LargeObjectDesc *lobj; Oid oid; -#ifndef ALLOW_DANGEROUS_LO_FUNCTIONS - if (!superuser()) - ereport(ERROR, - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), - errmsg("must be superuser to use server-side lo_import()"), - errhint("Anyone can use the client-side lo_import() provided by libpq."))); -#endif - CreateFSContext(); /* @@ -514,14 +506,6 @@ be_lo_export(PG_FUNCTION_ARGS) LargeObjectDesc *lobj; mode_t oumask; -#ifndef ALLOW_DANGEROUS_LO_FUNCTIONS - if (!superuser()) - ereport(ERROR, - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), - errmsg("must be superuser to use server-side lo_export()"), - errhint("Anyone can use the client-side lo_export() provided by libpq."))); -#endif - CreateFSContext(); /* diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 9a7f5b25a3..39c70b415a 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201710161 +#define CATALOG_VERSION_NO 201711091 #endif diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h index b048175321..6f2238b330 100644 --- a/src/include/pg_config_manual.h +++ b/src/include/pg_config_manual.h @@ -72,16 +72,6 @@ */ #define NUM_ATOMICS_SEMAPHORES 64 -/* - * Define this if you want to allow the lo_import and lo_export SQL - * functions to be executed by ordinary users. By default these - * functions are only available to the Postgres superuser. CAUTION: - * These functions are SECURITY HOLES since they can read and write - * any file that the PostgreSQL server has permission to access. If - * you turn this on, don't say we didn't warn you. - */ -/* #define ALLOW_DANGEROUS_LO_FUNCTIONS */ - /* * MAXPGPATH: standard size of a pathname buffer in PostgreSQL (hence, * maximum usable pathname length is one less). diff --git a/src/test/regress/expected/privileges.out b/src/test/regress/expected/privileges.out index 65d950f15b..771971a095 100644 --- a/src/test/regress/expected/privileges.out +++ b/src/test/regress/expected/privileges.out @@ -1358,8 +1358,11 @@ ERROR: permission denied for large object 1002 SELECT lo_unlink(1002); -- to be denied ERROR: must be owner of large object 1002 SELECT lo_export(1001, '/dev/null'); -- to be denied -ERROR: must be superuser to use server-side lo_export() -HINT: Anyone can use the client-side lo_export() provided by libpq. +ERROR: permission denied for function lo_export +SELECT lo_import('/dev/null'); -- to be denied +ERROR: permission denied for function lo_import +SELECT lo_import('/dev/null', 2003); -- to be denied +ERROR: permission denied for function lo_import \c - SET lo_compat_privileges = true; -- compatibility mode SET SESSION AUTHORIZATION regress_user4; @@ -1388,8 +1391,7 @@ SELECT lo_unlink(1002); (1 row) SELECT lo_export(1001, '/dev/null'); -- to be denied -ERROR: must be superuser to use server-side lo_export() -HINT: Anyone can use the client-side lo_export() provided by libpq. 
+ERROR: permission denied for function lo_export -- don't allow unpriv users to access pg_largeobject contents \c - SELECT * FROM pg_largeobject LIMIT 0; diff --git a/src/test/regress/sql/privileges.sql b/src/test/regress/sql/privileges.sql index 902f64c747..a900ba2f84 100644 --- a/src/test/regress/sql/privileges.sql +++ b/src/test/regress/sql/privileges.sql @@ -839,6 +839,8 @@ SELECT lo_truncate(lo_open(1002, x'20000'::int), 10); -- to be denied SELECT lo_put(1002, 1, 'abcd'); -- to be denied SELECT lo_unlink(1002); -- to be denied SELECT lo_export(1001, '/dev/null'); -- to be denied +SELECT lo_import('/dev/null'); -- to be denied +SELECT lo_import('/dev/null', 2003); -- to be denied \c - SET lo_compat_privileges = true; -- compatibility mode From ae20b23a9e7029f31ee902da08a464d968319f56 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 9 Nov 2017 12:56:07 -0500 Subject: [PATCH 0516/1087] Refactor permissions checks for large objects. Up to now, ACL checks for large objects happened at the level of the SQL-callable functions, which led to CVE-2017-7548 because of a missing check. Push them down to be enforced in inv_api.c as much as possible, in hopes of preventing future bugs. This does have the effect of moving read and write permission errors to happen at lo_open time not loread or lowrite time, but that seems acceptable. Michael Paquier and Tom Lane Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com --- src/backend/catalog/objectaddress.c | 2 +- src/backend/libpq/be-fsstubs.c | 88 ++++------------- src/backend/storage/large_object/inv_api.c | 108 ++++++++++++++++----- src/backend/utils/misc/guc.c | 12 +-- src/include/libpq/be-fsstubs.h | 5 - src/include/storage/large_object.h | 13 ++- 6 files changed, 117 insertions(+), 111 deletions(-) diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c index c2ad7c675e..8d55c76fc4 100644 --- a/src/backend/catalog/objectaddress.c +++ b/src/backend/catalog/objectaddress.c @@ -69,13 +69,13 @@ #include "commands/trigger.h" #include "foreign/foreign.h" #include "funcapi.h" -#include "libpq/be-fsstubs.h" #include "miscadmin.h" #include "nodes/makefuncs.h" #include "parser/parse_func.h" #include "parser/parse_oper.h" #include "parser/parse_type.h" #include "rewrite/rewriteSupport.h" +#include "storage/large_object.h" #include "storage/lmgr.h" #include "storage/sinval.h" #include "utils/builtins.h" diff --git a/src/backend/libpq/be-fsstubs.c b/src/backend/libpq/be-fsstubs.c index 50c70dd66d..5a2479e6d3 100644 --- a/src/backend/libpq/be-fsstubs.c +++ b/src/backend/libpq/be-fsstubs.c @@ -51,11 +51,6 @@ #include "utils/builtins.h" #include "utils/memutils.h" -/* - * compatibility flag for permission checks - */ -bool lo_compat_privileges; - /* define this to enable debug logging */ /* #define FSDB 1 */ /* chunk size for lo_import/lo_export transfers */ @@ -108,14 +103,6 @@ be_lo_open(PG_FUNCTION_ARGS) lobjDesc = inv_open(lobjId, mode, fscxt); - if (lobjDesc == NULL) - { /* lookup failed */ -#if FSDB - elog(DEBUG4, "could not open large object %u", lobjId); -#endif - PG_RETURN_INT32(-1); - } - fd = newLOfd(lobjDesc); PG_RETURN_INT32(fd); @@ -163,22 +150,16 @@ lo_read(int fd, char *buf, int len) errmsg("invalid large-object descriptor: %d", fd))); lobj = cookies[fd]; - /* We don't bother to check IFS_RDLOCK, since it's always set */ - - /* Permission checks --- first time through only */ - if ((lobj->flags & IFS_RD_PERM_OK) == 0) - { - if (!lo_compat_privileges && - 
pg_largeobject_aclcheck_snapshot(lobj->id, - GetUserId(), - ACL_SELECT, - lobj->snapshot) != ACLCHECK_OK) - ereport(ERROR, - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), - errmsg("permission denied for large object %u", - lobj->id))); - lobj->flags |= IFS_RD_PERM_OK; - } + /* + * Check state. inv_read() would throw an error anyway, but we want the + * error to be about the FD's state not the underlying privilege; it might + * be that the privilege exists but user forgot to ask for read mode. + */ + if ((lobj->flags & IFS_RDLOCK) == 0) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("large object descriptor %d was not opened for reading", + fd))); status = inv_read(lobj, buf, len); @@ -197,27 +178,13 @@ lo_write(int fd, const char *buf, int len) errmsg("invalid large-object descriptor: %d", fd))); lobj = cookies[fd]; + /* see comment in lo_read() */ if ((lobj->flags & IFS_WRLOCK) == 0) ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), errmsg("large object descriptor %d was not opened for writing", fd))); - /* Permission checks --- first time through only */ - if ((lobj->flags & IFS_WR_PERM_OK) == 0) - { - if (!lo_compat_privileges && - pg_largeobject_aclcheck_snapshot(lobj->id, - GetUserId(), - ACL_UPDATE, - lobj->snapshot) != ACLCHECK_OK) - ereport(ERROR, - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), - errmsg("permission denied for large object %u", - lobj->id))); - lobj->flags |= IFS_WR_PERM_OK; - } - status = inv_write(lobj, buf, len); return status; @@ -342,7 +309,11 @@ be_lo_unlink(PG_FUNCTION_ARGS) { Oid lobjId = PG_GETARG_OID(0); - /* Must be owner of the largeobject */ + /* + * Must be owner of the large object. It would be cleaner to check this + * in inv_drop(), but we want to throw the error before not after closing + * relevant FDs. + */ if (!lo_compat_privileges && !pg_largeobject_ownercheck(lobjId, GetUserId())) ereport(ERROR, @@ -574,27 +545,13 @@ lo_truncate_internal(int32 fd, int64 len) errmsg("invalid large-object descriptor: %d", fd))); lobj = cookies[fd]; + /* see comment in lo_read() */ if ((lobj->flags & IFS_WRLOCK) == 0) ereport(ERROR, (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), errmsg("large object descriptor %d was not opened for writing", fd))); - /* Permission checks --- first time through only */ - if ((lobj->flags & IFS_WR_PERM_OK) == 0) - { - if (!lo_compat_privileges && - pg_largeobject_aclcheck_snapshot(lobj->id, - GetUserId(), - ACL_UPDATE, - lobj->snapshot) != ACLCHECK_OK) - ereport(ERROR, - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), - errmsg("permission denied for large object %u", - lobj->id))); - lobj->flags |= IFS_WR_PERM_OK; - } - inv_truncate(lobj, len); } @@ -770,17 +727,6 @@ lo_get_fragment_internal(Oid loOid, int64 offset, int32 nbytes) loDesc = inv_open(loOid, INV_READ, fscxt); - /* Permission check */ - if (!lo_compat_privileges && - pg_largeobject_aclcheck_snapshot(loDesc->id, - GetUserId(), - ACL_SELECT, - loDesc->snapshot) != ACLCHECK_OK) - ereport(ERROR, - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), - errmsg("permission denied for large object %u", - loDesc->id))); - /* * Compute number of bytes we'll actually read, accommodating nbytes == -1 * and reads beyond the end of the LO. 
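With the checks pushed down into inv_open(), a missing privilege now surfaces when the descriptor is opened rather than at the first loread()/lowrite(). A sketch of the visible change, assuming a role that lacks SELECT on large object 1001 (the OID and role are illustrative):

SELECT lo_open(1001, x'40000'::int);  -- INV_READ
-- ERROR:  permission denied for large object 1001
-- before this patch, lo_open() succeeded here and the same error was
-- raised only by the first loread() on the returned descriptor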
diff --git a/src/backend/storage/large_object/inv_api.c b/src/backend/storage/large_object/inv_api.c index aa43b46c30..12940e5075 100644 --- a/src/backend/storage/large_object/inv_api.c +++ b/src/backend/storage/large_object/inv_api.c @@ -51,6 +51,11 @@ #include "utils/tqual.h" +/* + * GUC: backwards-compatibility flag to suppress LO permission checks + */ +bool lo_compat_privileges; + /* * All accesses to pg_largeobject and its index make use of a single Relation * reference, so that we only need to open pg_relation once per transaction. @@ -250,46 +255,78 @@ inv_open(Oid lobjId, int flags, MemoryContext mcxt) Snapshot snapshot = NULL; int descflags = 0; + /* + * Historically, no difference is made between (INV_WRITE) and (INV_WRITE + * | INV_READ), the caller being allowed to read the large object + * descriptor in either case. + */ if (flags & INV_WRITE) - { - snapshot = NULL; /* instantaneous MVCC snapshot */ - descflags = IFS_WRLOCK | IFS_RDLOCK; - } - else if (flags & INV_READ) - { - snapshot = GetActiveSnapshot(); - descflags = IFS_RDLOCK; - } - else + descflags |= IFS_WRLOCK | IFS_RDLOCK; + if (flags & INV_READ) + descflags |= IFS_RDLOCK; + + if (descflags == 0) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("invalid flags for opening a large object: %d", flags))); + /* Get snapshot. If write is requested, use an instantaneous snapshot. */ + if (descflags & IFS_WRLOCK) + snapshot = NULL; + else + snapshot = GetActiveSnapshot(); + /* Can't use LargeObjectExists here because we need to specify snapshot */ if (!myLargeObjectExists(lobjId, snapshot)) ereport(ERROR, (errcode(ERRCODE_UNDEFINED_OBJECT), errmsg("large object %u does not exist", lobjId))); + /* Apply permission checks, again specifying snapshot */ + if ((descflags & IFS_RDLOCK) != 0) + { + if (!lo_compat_privileges && + pg_largeobject_aclcheck_snapshot(lobjId, + GetUserId(), + ACL_SELECT, + snapshot) != ACLCHECK_OK) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg("permission denied for large object %u", + lobjId))); + } + if ((descflags & IFS_WRLOCK) != 0) + { + if (!lo_compat_privileges && + pg_largeobject_aclcheck_snapshot(lobjId, + GetUserId(), + ACL_UPDATE, + snapshot) != ACLCHECK_OK) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg("permission denied for large object %u", + lobjId))); + } + + /* OK to create a descriptor */ + retval = (LargeObjectDesc *) MemoryContextAlloc(mcxt, + sizeof(LargeObjectDesc)); + retval->id = lobjId; + retval->subid = GetCurrentSubTransactionId(); + retval->offset = 0; + retval->flags = descflags; + /* * We must register the snapshot in TopTransaction's resowner, because it * must stay alive until the LO is closed rather than until the current - * portal shuts down. Do this after checking that the LO exists, to avoid - * leaking the snapshot if an error is thrown. + * portal shuts down. Do this last to avoid uselessly leaking the + * snapshot if an error is thrown above. */ if (snapshot) snapshot = RegisterSnapshotOnOwner(snapshot, TopTransactionResourceOwner); - - /* All set, create a descriptor */ - retval = (LargeObjectDesc *) MemoryContextAlloc(mcxt, - sizeof(LargeObjectDesc)); - retval->id = lobjId; - retval->subid = GetCurrentSubTransactionId(); - retval->offset = 0; retval->snapshot = snapshot; - retval->flags = descflags; return retval; } @@ -312,7 +349,7 @@ inv_close(LargeObjectDesc *obj_desc) /* * Destroys an existing large object (not to be confused with a descriptor!) 
* - * returns -1 if failed + * Note we expect caller to have done any required permissions check. */ int inv_drop(Oid lobjId) @@ -333,6 +370,7 @@ inv_drop(Oid lobjId) */ CommandCounterIncrement(); + /* For historical reasons, we always return 1 on success. */ return 1; } @@ -397,6 +435,11 @@ inv_seek(LargeObjectDesc *obj_desc, int64 offset, int whence) Assert(PointerIsValid(obj_desc)); + /* + * We allow seek/tell if you have either read or write permission, so no + * need for a permission check here. + */ + /* * Note: overflow in the additions is possible, but since we will reject * negative results, we don't need any extra test for that. @@ -439,6 +482,11 @@ inv_tell(LargeObjectDesc *obj_desc) { Assert(PointerIsValid(obj_desc)); + /* + * We allow seek/tell if you have either read or write permission, so no + * need for a permission check here. + */ + return obj_desc->offset; } @@ -458,6 +506,12 @@ inv_read(LargeObjectDesc *obj_desc, char *buf, int nbytes) Assert(PointerIsValid(obj_desc)); Assert(buf != NULL); + if ((obj_desc->flags & IFS_RDLOCK) == 0) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg("permission denied for large object %u", + obj_desc->id))); + if (nbytes <= 0) return 0; @@ -563,7 +617,11 @@ inv_write(LargeObjectDesc *obj_desc, const char *buf, int nbytes) Assert(buf != NULL); /* enforce writability because snapshot is probably wrong otherwise */ - Assert(obj_desc->flags & IFS_WRLOCK); + if ((obj_desc->flags & IFS_WRLOCK) == 0) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg("permission denied for large object %u", + obj_desc->id))); if (nbytes <= 0) return 0; @@ -749,7 +807,11 @@ inv_truncate(LargeObjectDesc *obj_desc, int64 len) Assert(PointerIsValid(obj_desc)); /* enforce writability because snapshot is probably wrong otherwise */ - Assert(obj_desc->flags & IFS_WRLOCK); + if ((obj_desc->flags & IFS_WRLOCK) == 0) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg("permission denied for large object %u", + obj_desc->id))); /* * use errmsg_internal here because we don't want to expose INT64_FORMAT diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index da061023f5..c4c1afa084 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -43,7 +43,6 @@ #include "commands/trigger.h" #include "funcapi.h" #include "libpq/auth.h" -#include "libpq/be-fsstubs.h" #include "libpq/libpq.h" #include "libpq/pqformat.h" #include "miscadmin.h" @@ -71,6 +70,7 @@ #include "storage/dsm_impl.h" #include "storage/standby.h" #include "storage/fd.h" +#include "storage/large_object.h" #include "storage/pg_shmem.h" #include "storage/proc.h" #include "storage/predicate.h" @@ -4900,7 +4900,7 @@ ResetAllOptions(void) if (conf->assign_hook) conf->assign_hook(conf->reset_val, - conf->reset_extra); + conf->reset_extra); *conf->variable = conf->reset_val; set_extra_field(&conf->gen, &conf->gen.extra, conf->reset_extra); @@ -4912,7 +4912,7 @@ ResetAllOptions(void) if (conf->assign_hook) conf->assign_hook(conf->reset_val, - conf->reset_extra); + conf->reset_extra); *conf->variable = conf->reset_val; set_extra_field(&conf->gen, &conf->gen.extra, conf->reset_extra); @@ -4924,7 +4924,7 @@ ResetAllOptions(void) if (conf->assign_hook) conf->assign_hook(conf->reset_val, - conf->reset_extra); + conf->reset_extra); *conf->variable = conf->reset_val; set_extra_field(&conf->gen, &conf->gen.extra, conf->reset_extra); @@ -4936,7 +4936,7 @@ ResetAllOptions(void) if (conf->assign_hook) 
conf->assign_hook(conf->reset_val, - conf->reset_extra); + conf->reset_extra); set_string_field(conf, conf->variable, conf->reset_val); set_extra_field(&conf->gen, &conf->gen.extra, conf->reset_extra); @@ -4948,7 +4948,7 @@ ResetAllOptions(void) if (conf->assign_hook) conf->assign_hook(conf->reset_val, - conf->reset_extra); + conf->reset_extra); *conf->variable = conf->reset_val; set_extra_field(&conf->gen, &conf->gen.extra, conf->reset_extra); diff --git a/src/include/libpq/be-fsstubs.h b/src/include/libpq/be-fsstubs.h index 96bcaa0f08..e8107a2c9f 100644 --- a/src/include/libpq/be-fsstubs.h +++ b/src/include/libpq/be-fsstubs.h @@ -14,11 +14,6 @@ #ifndef BE_FSSTUBS_H #define BE_FSSTUBS_H -/* - * compatibility option for access control - */ -extern bool lo_compat_privileges; - /* * These are not fmgr-callable, but are available to C code. * Probably these should have had the underscore-free names, diff --git a/src/include/storage/large_object.h b/src/include/storage/large_object.h index 796a8fdeea..01d0985b44 100644 --- a/src/include/storage/large_object.h +++ b/src/include/storage/large_object.h @@ -27,9 +27,9 @@ * offset is the current seek offset within the LO * flags contains some flag bits * - * NOTE: in current usage, flag bit IFS_RDLOCK is *always* set, and we don't - * bother to test for it. Permission checks are made at first read or write - * attempt, not during inv_open(), so we have other bits to remember that. + * NOTE: as of v11, permission checks are made when the large object is + * opened; therefore IFS_RDLOCK/IFS_WRLOCK indicate that read or write mode + * has been requested *and* the corresponding permission has been checked. * * NOTE: before 7.1, we also had to store references to the separate table * and index of a specific large object. Now they all live in pg_largeobject @@ -47,8 +47,6 @@ typedef struct LargeObjectDesc /* bits in flags: */ #define IFS_RDLOCK (1 << 0) /* LO was opened for reading */ #define IFS_WRLOCK (1 << 1) /* LO was opened for writing */ -#define IFS_RD_PERM_OK (1 << 2) /* read permission has been verified */ -#define IFS_WR_PERM_OK (1 << 3) /* write permission has been verified */ } LargeObjectDesc; @@ -78,6 +76,11 @@ typedef struct LargeObjectDesc #define MAX_LARGE_OBJECT_SIZE ((int64) INT_MAX * LOBLKSIZE) +/* + * GUC: backwards-compatibility flag to suppress LO permission checks + */ +extern bool lo_compat_privileges; + /* * Function definitions... */ From e7397f015c9589f95f5f5b48d7a274b2f1628971 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 9 Nov 2017 17:00:53 -0500 Subject: [PATCH 0517/1087] Remove junk left from DSSSL to XSL conversion --- doc/src/sgml/Makefile | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile index 428eb569fc..bcfd7db0bd 100644 --- a/doc/src/sgml/Makefile +++ b/doc/src/sgml/Makefile @@ -303,7 +303,7 @@ endif # sqlmansectnum != 7 # tabs are harmless, but it is best to avoid them in SGML files check-tabs: - @( ! grep ' ' $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml $(srcdir)/*.dsl $(srcdir)/*.xsl) ) || (echo "Tabs appear in SGML/XML files" 1>&2; exit 1) + @( ! grep ' ' $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml $(srcdir)/*.xsl) ) || (echo "Tabs appear in SGML/XML files" 1>&2; exit 1) ## ## Clean @@ -311,7 +311,7 @@ check-tabs: # This allows removing some files from the distribution tarballs while # keeping the dependencies satisfied. 
-.SECONDARY: postgres.xml $(GENERATED_SGML) HTML.index +.SECONDARY: postgres.xml $(GENERATED_SGML) .SECONDARY: INSTALL.html INSTALL.xml .SECONDARY: postgres-A4.fo postgres-US.fo From 1aba8e651ac3e37e1d2d875842de1e0ed22a651e Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 9 Nov 2017 18:07:25 -0500 Subject: [PATCH 0518/1087] Add hash partitioning. Hash partitioning is useful when you want to partition a growing data set evenly. This can be useful to keep table sizes reasonable, which makes maintenance operations such as VACUUM faster, or to enable partition-wise join. At present, we still depend on constraint exclusion for partition pruning, and the shape of the partition constraints for hash partitioning is such that that doesn't work. Work is underway to fix that, which should both improve performance and make partition pruning work with hash partitioning. Amul Sul, reviewed and tested by Dilip Kumar, Ashutosh Bapat, Yugo Nagata, Rajkumar Raghuwanshi, Jesper Pedersen, and by me. A few final tweaks also by me. Discussion: http://postgr.es/m/CAAJ_b96fhpJAP=ALbETmeLk1Uni_GFZD938zgenhF49qgDTjaQ@mail.gmail.com --- doc/src/sgml/ddl.sgml | 28 +- doc/src/sgml/ref/alter_table.sgml | 7 + doc/src/sgml/ref/create_table.sgml | 85 ++- src/backend/catalog/partition.c | 682 +++++++++++++++++-- src/backend/commands/tablecmds.c | 48 +- src/backend/nodes/copyfuncs.c | 2 + src/backend/nodes/equalfuncs.c | 2 + src/backend/nodes/outfuncs.c | 2 + src/backend/nodes/readfuncs.c | 2 + src/backend/optimizer/path/joinrels.c | 12 +- src/backend/parser/gram.y | 76 ++- src/backend/parser/parse_utilcmd.c | 29 +- src/backend/utils/adt/ruleutils.c | 15 +- src/backend/utils/cache/relcache.c | 26 +- src/bin/psql/tab-complete.c | 2 +- src/include/catalog/catversion.h | 2 +- src/include/catalog/partition.h | 3 + src/include/catalog/pg_proc.h | 4 + src/include/nodes/parsenodes.h | 8 +- src/test/regress/expected/alter_table.out | 62 ++ src/test/regress/expected/create_table.out | 78 ++- src/test/regress/expected/insert.out | 46 ++ src/test/regress/expected/partition_join.out | 81 +++ src/test/regress/expected/update.out | 29 + src/test/regress/sql/alter_table.sql | 64 ++ src/test/regress/sql/create_table.sql | 51 +- src/test/regress/sql/insert.sql | 33 + src/test/regress/sql/partition_join.sql | 32 + src/test/regress/sql/update.sql | 28 + src/tools/pgindent/typedefs.list | 1 + 30 files changed, 1420 insertions(+), 120 deletions(-) diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 3f41939bea..daba66c187 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -2875,6 +2875,19 @@ VALUES ('Albany', NULL, NULL, 'NY'); + + + Hash Partitioning + + + + The table is partitioned by specifying a modulus and a remainder for + each partition. Each partition will hold the rows for which the hash + value of the partition key divided by the specified modulus will + produce the specified remainder. + + + If your application needs to use other forms of partitioning not listed @@ -2901,9 +2914,8 @@ VALUES ('Albany', NULL, NULL, 'NY'); All rows inserted into a partitioned table will be routed to one of the partitions based on the value of the partition key. Each partition has a subset of the data defined by its - partition bounds. Currently supported - partitioning methods include range and list, where each partition is - assigned a range of keys and a list of keys, respectively. + partition bounds. The currently supported + partitioning methods are range, list, and hash.
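The modulus/remainder rule documented in the hunk above can be sketched with the syntax this patch adds; table and column names are illustrative:

CREATE TABLE t (k int) PARTITION BY HASH (k);
CREATE TABLE t_p0 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE t_p1 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 1);
-- each row is routed to the partition whose remainder equals hash(k) mod 2
INSERT INTO t SELECT generate_series(1, 1000);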
@@ -3328,11 +3340,11 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - Declarative partitioning only supports list and range partitioning, - whereas table inheritance allows data to be divided in a manner of - the user's choosing. (Note, however, that if constraint exclusion is - unable to prune partitions effectively, query performance will be very - poor.) + Declarative partitioning only supports range, list and hash + partitioning, whereas table inheritance allows data to be divided in a + manner of the user's choosing. (Note, however, that if constraint + exclusion is unable to prune partitions effectively, query performance + will be very poor.) diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 41acda003f..3b19ea7131 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -1431,6 +1431,13 @@ ALTER TABLE cities ATTACH PARTITION cities_partdef DEFAULT; + + Attach a partition to hash partitioned table: + +ALTER TABLE orders + ATTACH PARTITION orders_p4 FOR VALUES WITH (MODULUS 4, REMAINDER 3); + + Detach a partition from partitioned table: diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index 4f7b741526..bbb3a51def 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -28,7 +28,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI [, ... ] ] ) [ INHERITS ( parent_table [, ... ] ) ] -[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] +[ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] [ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ] [ TABLESPACE tablespace_name ] @@ -39,7 +39,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI | table_constraint } [, ... ] ) ] -[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] +[ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] [ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ] [ TABLESPACE tablespace_name ] @@ -50,7 +50,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI | table_constraint } [, ... ] ) ] { FOR VALUES partition_bound_spec | DEFAULT } -[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] +[ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ] [ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ] [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ] [ TABLESPACE tablespace_name ] @@ -88,7 +88,8 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI IN ( { numeric_literal | string_literal | NULL } [, ...] ) | FROM ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] ) - TO ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] ) + TO ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] 
) | +WITH ( MODULUS numeric_literal, REMAINDER numeric_literal ) index_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are: @@ -256,7 +257,8 @@ FROM ( { numeric_literal | partition of the specified parent table. The table can be created either as a partition for specific values using FOR VALUES or as a default partition - using DEFAULT. + using DEFAULT. This option is not available for + hash-partitioned tables. @@ -264,8 +266,9 @@ FROM ( { numeric_literal | IN is used for list partitioning, - while the form with FROM and TO is used for - range partitioning. + the form with FROM and TO is used + for range partitioning, and the form with WITH is used + for hash partitioning. @@ -363,6 +366,29 @@ FROM ( { numeric_literal | + + When creating a hash partition, a modulus and remainder must be specified. + The modulus must be a positive integer, and the remainder must be a + non-negative integer less than the modulus. Typically, when initially + setting up a hash-partitioned table, you should choose a modulus equal to + the number of partitions and assign every table the same modulus and a + different remainder (see examples, below). However, it is not required + that every partition have the same modulus, only that every modulus which + occurs among the partitions of a hash-partitioned table is a factor of the + next larger modulus. This allows the number of partitions to be increased + incrementally without needing to move all the data at once. For example, + suppose you have a hash-partitioned table with 8 partitions, each of which + has modulus 8, but find it necessary to increase the number of partitions + to 16. You can detach one of the modulus-8 partitions, create two new + modulus-16 partitions covering the same portion of the key space (one with + a remainder equal to the remainder of the detached partition, and the + other with a remainder equal to that value plus 8), and repopulate them + with data. You can then repeat this -- perhaps at a later time -- for + each modulus-8 partition until none remain. While this may still involve + a large amount of data movement at each step, it is still better than + having to create a whole new table and move all the data at once. + + A partition must have the same column names and types as the partitioned table to which it belongs. If the parent is specified WITH @@ -486,20 +512,28 @@ FROM ( { numeric_literal | - PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ opclass ] [, ...] ) + PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ opclass ] [, ...] ) The optional PARTITION BY clause specifies a strategy of partitioning the table. The table thus created is called a partitioned table. The parenthesized list of columns or expressions forms the partition key - for the table. When using range partitioning, the partition key can - include multiple columns or expressions (up to 32, but this limit can be - altered when building PostgreSQL), but for + for the table. When using range or hash partitioning, the partition key + can include multiple columns or expressions (up to 32, but this limit can + be altered when building PostgreSQL), but for list partitioning, the partition key must consist of a single column or - expression. If no B-tree operator class is specified when creating a - partitioned table, the default B-tree operator class for the datatype will - be used. If there is none, an error will be reported. + expression. 
+ + + + Range and list partitioning require a btree operator class, while hash + partitioning requires a hash operator class. If no operator class is + specified explicitly, the default operator class of the appropriate + type will be used; if no default operator class exists, an error will + be raised. When hash partitioning is used, the operator class used + must implement support function 2 (see + for details). @@ -1647,6 +1681,16 @@ CREATE TABLE cities ( name text not null, population bigint ) PARTITION BY LIST (left(lower(name), 1)); + + + + Create a hash partitioned table: + +CREATE TABLE orders ( + order_id bigint not null, + cust_id bigint not null, + status text +) PARTITION BY HASH (order_id); @@ -1701,6 +1745,19 @@ CREATE TABLE cities_ab_10000_to_100000 PARTITION OF cities_ab FOR VALUES FROM (10000) TO (100000); + + Create partitions of a hash partitioned table: + +CREATE TABLE orders_p1 PARTITION OF orders + FOR VALUES WITH (MODULUS 4, REMAINDER 0); +CREATE TABLE orders_p2 PARTITION OF orders + FOR VALUES WITH (MODULUS 4, REMAINDER 1); +CREATE TABLE orders_p3 PARTITION OF orders + FOR VALUES WITH (MODULUS 4, REMAINDER 2); +CREATE TABLE orders_p4 PARTITION OF orders + FOR VALUES WITH (MODULUS 4, REMAINDER 3); + + Create a default partition: diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 5daa8a1c19..cff59ed055 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -15,6 +15,7 @@ #include "postgres.h" +#include "access/hash.h" #include "access/heapam.h" #include "access/htup_details.h" #include "access/nbtree.h" @@ -46,6 +47,7 @@ #include "utils/datum.h" #include "utils/memutils.h" #include "utils/fmgroids.h" +#include "utils/hashutils.h" #include "utils/inval.h" #include "utils/lsyscache.h" #include "utils/rel.h" @@ -61,26 +63,35 @@ * In the case of range partitioning, ndatums will typically be far less than * 2 * nparts, because a partition's upper bound and the next partition's lower * bound are the same in most common cases, and we only store one of them (the - * upper bound). + * upper bound). In case of hash partitioning, ndatums will be same as the + * number of partitions. + * + * For range and list partitioned tables, datums is an array of datum-tuples + * with key->partnatts datums each. For hash partitioned tables, it is an array + * of datum-tuples with 2 datums, modulus and remainder, corresponding to a + * given partition. * * In the case of list partitioning, the indexes array stores one entry for * every datum, which is the index of the partition that accepts a given datum. * In case of range partitioning, it stores one entry per distinct range * datum, which is the index of the partition for which a given datum - * is an upper bound. + * is an upper bound. In the case of hash partitioning, the number of the + * entries in the indexes array is same as the greatest modulus amongst all + * partitions. For a given partition key datum-tuple, the index of the + * partition which would accept that datum-tuple would be given by the entry + * pointed by remainder produced when hash value of the datum-tuple is divided + * by the greatest modulus. */ typedef struct PartitionBoundInfoData { - char strategy; /* list or range bounds? */ + char strategy; /* hash, list or range? 
*/ int ndatums; /* Length of the datums following array */ - Datum **datums; /* Array of datum-tuples with key->partnatts - * datums each */ + Datum **datums; PartitionRangeDatumKind **kind; /* The kind of each range bound datum; - * NULL for list partitioned tables */ - int *indexes; /* Partition indexes; one entry per member of - * the datums array (plus one if range - * partitioned table) */ + * NULL for hash and list partitioned + * tables */ + int *indexes; /* Partition indexes */ int null_index; /* Index of the null-accepting partition; -1 * if there isn't one */ int default_index; /* Index of the default partition; -1 if there @@ -95,6 +106,14 @@ typedef struct PartitionBoundInfoData * is represented with one of the following structs. */ +/* One bound of a hash partition */ +typedef struct PartitionHashBound +{ + int modulus; + int remainder; + int index; +} PartitionHashBound; + /* One value coming from some (index'th) list partition */ typedef struct PartitionListValue { @@ -111,6 +130,7 @@ typedef struct PartitionRangeBound bool lower; /* this is the lower (vs upper) bound */ } PartitionRangeBound; +static int32 qsort_partition_hbound_cmp(const void *a, const void *b); static int32 qsort_partition_list_value_cmp(const void *a, const void *b, void *arg); static int32 qsort_partition_rbound_cmp(const void *a, const void *b, @@ -126,6 +146,7 @@ static void get_range_key_properties(PartitionKey key, int keynum, ListCell **partexprs_item, Expr **keyCol, Const **lower_val, Const **upper_val); +static List *get_qual_for_hash(Relation parent, PartitionBoundSpec *spec); static List *get_qual_for_list(Relation parent, PartitionBoundSpec *spec); static List *get_qual_for_range(Relation parent, PartitionBoundSpec *spec, bool for_default); @@ -134,6 +155,8 @@ static List *generate_partition_qual(Relation rel); static PartitionRangeBound *make_one_range_bound(PartitionKey key, int index, List *datums, bool lower); +static int32 partition_hbound_cmp(int modulus1, int remainder1, int modulus2, + int remainder2); static int32 partition_rbound_cmp(PartitionKey key, Datum *datums1, PartitionRangeDatumKind *kind1, bool lower1, PartitionRangeBound *b2); @@ -149,6 +172,12 @@ static int partition_bound_bsearch(PartitionKey key, void *probe, bool probe_is_bound, bool *is_equal); static void get_partition_dispatch_recurse(Relation rel, Relation parent, List **pds, List **leaf_part_oids); +static int get_partition_bound_num_indexes(PartitionBoundInfo b); +static int get_greatest_modulus(PartitionBoundInfo b); +static uint64 compute_hash_value(PartitionKey key, Datum *values, bool *isnull); + +/* SQL-callable function for use in hash partition CHECK constraints */ +PG_FUNCTION_INFO_V1(satisfies_hash_partition); /* * RelationBuildPartitionDesc @@ -174,6 +203,9 @@ RelationBuildPartitionDesc(Relation rel) int ndatums = 0; int default_index = -1; + /* Hash partitioning specific */ + PartitionHashBound **hbounds = NULL; + /* List partitioning specific */ PartitionListValue **all_values = NULL; int null_index = -1; @@ -255,7 +287,35 @@ RelationBuildPartitionDesc(Relation rel) oids[i++] = lfirst_oid(cell); /* Convert from node to the internal representation */ - if (key->strategy == PARTITION_STRATEGY_LIST) + if (key->strategy == PARTITION_STRATEGY_HASH) + { + ndatums = nparts; + hbounds = (PartitionHashBound **) + palloc(nparts * sizeof(PartitionHashBound *)); + + i = 0; + foreach(cell, boundspecs) + { + PartitionBoundSpec *spec = castNode(PartitionBoundSpec, + lfirst(cell)); + + if (spec->strategy != 
PARTITION_STRATEGY_HASH)
+					elog(ERROR, "invalid strategy in partition bound spec");
+
+				hbounds[i] = (PartitionHashBound *)
+					palloc(sizeof(PartitionHashBound));
+
+				hbounds[i]->modulus = spec->modulus;
+				hbounds[i]->remainder = spec->remainder;
+				hbounds[i]->index = i;
+				i++;
+			}
+
+			/* Sort all the bounds in ascending order */
+			qsort(hbounds, nparts, sizeof(PartitionHashBound *),
+				  qsort_partition_hbound_cmp);
+		}
+		else if (key->strategy == PARTITION_STRATEGY_LIST)
 		{
 			List	   *non_null_values = NIL;
@@ -484,6 +544,42 @@ RelationBuildPartitionDesc(Relation rel)

 		switch (key->strategy)
 		{
+			case PARTITION_STRATEGY_HASH:
+				{
+					/* Moduli are stored in ascending order */
+					int			greatest_modulus = hbounds[ndatums - 1]->modulus;
+
+					boundinfo->indexes = (int *) palloc(greatest_modulus *
+														sizeof(int));
+
+					for (i = 0; i < greatest_modulus; i++)
+						boundinfo->indexes[i] = -1;
+
+					for (i = 0; i < nparts; i++)
+					{
+						int			modulus = hbounds[i]->modulus;
+						int			remainder = hbounds[i]->remainder;
+
+						boundinfo->datums[i] = (Datum *) palloc(2 *
+																sizeof(Datum));
+						boundinfo->datums[i][0] = Int32GetDatum(modulus);
+						boundinfo->datums[i][1] = Int32GetDatum(remainder);
+
+						while (remainder < greatest_modulus)
+						{
+							/* overlap? */
+							Assert(boundinfo->indexes[remainder] == -1);
+							boundinfo->indexes[remainder] = i;
+							remainder += modulus;
+						}
+
+						mapping[hbounds[i]->index] = i;
+						pfree(hbounds[i]);
+					}
+					pfree(hbounds);
+					break;
+				}
+
 			case PARTITION_STRATEGY_LIST:
 				{
 					boundinfo->indexes = (int *) palloc(ndatums * sizeof(int));
@@ -617,8 +713,7 @@ RelationBuildPartitionDesc(Relation rel)
 	 * Now assign OIDs from the original array into mapped indexes of the
 	 * result array.  Order of OIDs in the former is defined by the
 	 * catalog scan that retrieved them, whereas that in the latter is
-	 * defined by canonicalized representation of the list values or the
-	 * range bounds.
+	 * defined by the canonicalized representation of the partition bounds.
 	 */
 	for (i = 0; i < nparts; i++)
 		result->oids[mapping[i]] = oids[i];
@@ -655,49 +750,97 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval,
 	if (b1->default_index != b2->default_index)
 		return false;

-	for (i = 0; i < b1->ndatums; i++)
+	if (b1->strategy == PARTITION_STRATEGY_HASH)
 	{
-		int			j;
+		int			greatest_modulus;
+
+		/*
+		 * If two hash partitioned tables have different greatest moduli,
+		 * their partition schemes don't match.  For a hash partitioned
+		 * table, the greatest modulus is given by the last datum and the
+		 * number of partitions is given by ndatums.
+		 */
+		if (b1->datums[b1->ndatums - 1][0] != b2->datums[b2->ndatums - 1][0])
+			return false;
+
+		/*
+		 * We arrange the partitions in ascending order of their moduli and
+		 * remainders, and every modulus is a factor of the next larger
+		 * modulus.  Therefore we can safely store the index of a given
+		 * partition in the indexes array at its remainder; the entries at
+		 * positions (remainder + N * modulus) in the indexes array are all
+		 * the same for that partition's (modulus, remainder) specification.
+		 * Thus the datums arrays of the two given bounds are the same if and
+		 * only if their indexes arrays are the same.  So it suffices to
+		 * compare the indexes arrays.
+		 */
+		greatest_modulus = get_greatest_modulus(b1);
+		for (i = 0; i < greatest_modulus; i++)
+			if (b1->indexes[i] != b2->indexes[i])
+				return false;
+
+#ifdef USE_ASSERT_CHECKING

-		for (j = 0; j < partnatts; j++)
+		/*
+		 * Nonetheless make sure that the bounds are indeed the same when the
+		 * indexes match.
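For an illustrative example: partitions with
+		 * (modulus, remainder) bounds of (2, 1), (4, 0) and (4, 2) give the
+		 * sorted datums array {{2, 1}, {4, 0}, {4, 2}} and the indexes array
+		 * [1, 0, 2, 0], so equal indexes arrays imply equal datums arrays.
+		 *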
Hash partition bound stores modulus and remainder + * at b1->datums[i][0] and b1->datums[i][1] position respectively. + */ + for (i = 0; i < b1->ndatums; i++) + Assert((b1->datums[i][0] == b2->datums[i][0] && + b1->datums[i][1] == b2->datums[i][1])); +#endif + } + else + { + for (i = 0; i < b1->ndatums; i++) { - /* For range partitions, the bounds might not be finite. */ - if (b1->kind != NULL) + int j; + + for (j = 0; j < partnatts; j++) { - /* The different kinds of bound all differ from each other */ - if (b1->kind[i][j] != b2->kind[i][j]) - return false; + /* For range partitions, the bounds might not be finite. */ + if (b1->kind != NULL) + { + /* The different kinds of bound all differ from each other */ + if (b1->kind[i][j] != b2->kind[i][j]) + return false; - /* Non-finite bounds are equal without further examination. */ - if (b1->kind[i][j] != PARTITION_RANGE_DATUM_VALUE) - continue; + /* + * Non-finite bounds are equal without further + * examination. + */ + if (b1->kind[i][j] != PARTITION_RANGE_DATUM_VALUE) + continue; + } + + /* + * Compare the actual values. Note that it would be both + * incorrect and unsafe to invoke the comparison operator + * derived from the partitioning specification here. It would + * be incorrect because we want the relcache entry to be + * updated for ANY change to the partition bounds, not just + * those that the partitioning operator thinks are + * significant. It would be unsafe because we might reach + * this code in the context of an aborted transaction, and an + * arbitrary partitioning operator might not be safe in that + * context. datumIsEqual() should be simple enough to be + * safe. + */ + if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j], + parttypbyval[j], parttyplen[j])) + return false; } - /* - * Compare the actual values. Note that it would be both incorrect - * and unsafe to invoke the comparison operator derived from the - * partitioning specification here. It would be incorrect because - * we want the relcache entry to be updated for ANY change to the - * partition bounds, not just those that the partitioning operator - * thinks are significant. It would be unsafe because we might - * reach this code in the context of an aborted transaction, and - * an arbitrary partitioning operator might not be safe in that - * context. datumIsEqual() should be simple enough to be safe. - */ - if (!datumIsEqual(b1->datums[i][j], b2->datums[i][j], - parttypbyval[j], parttyplen[j])) + if (b1->indexes[i] != b2->indexes[i]) return false; } - if (b1->indexes[i] != b2->indexes[i]) + /* There are ndatums+1 indexes in case of range partitions */ + if (b1->strategy == PARTITION_STRATEGY_RANGE && + b1->indexes[i] != b2->indexes[i]) return false; } - - /* There are ndatums+1 indexes in case of range partitions */ - if (b1->strategy == PARTITION_STRATEGY_RANGE && - b1->indexes[i] != b2->indexes[i]) - return false; - return true; } @@ -709,11 +852,11 @@ extern PartitionBoundInfo partition_bounds_copy(PartitionBoundInfo src, PartitionKey key) { - PartitionBoundInfo dest; - int i; - int ndatums; - int partnatts; - int num_indexes; + PartitionBoundInfo dest; + int i; + int ndatums; + int partnatts; + int num_indexes; dest = (PartitionBoundInfo) palloc(sizeof(PartitionBoundInfoData)); @@ -721,8 +864,7 @@ partition_bounds_copy(PartitionBoundInfo src, ndatums = dest->ndatums = src->ndatums; partnatts = key->partnatts; - /* Range partitioned table has an extra index. */ - num_indexes = key->strategy == PARTITION_STRATEGY_RANGE ? 
ndatums + 1 : ndatums;
+	num_indexes = get_partition_bound_num_indexes(src);

 	/* List partitioned tables have only a single partition key. */
 	Assert(key->strategy != PARTITION_STRATEGY_LIST || partnatts == 1);
@@ -732,11 +874,11 @@ partition_bounds_copy(PartitionBoundInfo src,
 	if (src->kind != NULL)
 	{
 		dest->kind = (PartitionRangeDatumKind **) palloc(ndatums *
-						sizeof(PartitionRangeDatumKind *));
+														 sizeof(PartitionRangeDatumKind *));
 		for (i = 0; i < ndatums; i++)
 		{
 			dest->kind[i] = (PartitionRangeDatumKind *) palloc(partnatts *
-						sizeof(PartitionRangeDatumKind));
+															   sizeof(PartitionRangeDatumKind));

 			memcpy(dest->kind[i], src->kind[i],
 				   sizeof(PartitionRangeDatumKind) * key->partnatts);
@@ -747,16 +889,37 @@

 	for (i = 0; i < ndatums; i++)
 	{
-		int			j;
-
-		dest->datums[i] = (Datum *) palloc(sizeof(Datum) * partnatts);
+		int			j;
+
+		/*
+		 * For a hash partition, the datums array will have two elements:
+		 * the modulus and the remainder.
+		 */
+		bool		hash_part = (key->strategy == PARTITION_STRATEGY_HASH);
+		int			natts = hash_part ? 2 : partnatts;

-		for (j = 0; j < partnatts; j++)
+		dest->datums[i] = (Datum *) palloc(sizeof(Datum) * natts);
+
+		for (j = 0; j < natts; j++)
 		{
+			bool		byval;
+			int			typlen;
+
+			if (hash_part)
+			{
+				typlen = sizeof(int32); /* Always int4 */
+				byval = true;			/* int4 is pass-by-value */
+			}
+			else
+			{
+				byval = key->parttypbyval[j];
+				typlen = key->parttyplen[j];
+			}
+
 			if (dest->kind == NULL ||
 				dest->kind[i][j] == PARTITION_RANGE_DATUM_VALUE)
 				dest->datums[i][j] = datumCopy(src->datums[i][j],
-											   key->parttypbyval[j],
-											   key->parttyplen[j]);
+											   byval, typlen);
 		}
 	}
@@ -801,6 +964,89 @@ check_new_partition_bound(char *relname, Relation parent,

 	switch (key->strategy)
 	{
+		case PARTITION_STRATEGY_HASH:
+			{
+				Assert(spec->strategy == PARTITION_STRATEGY_HASH);
+				Assert(spec->remainder >= 0 && spec->remainder < spec->modulus);
+
+				if (partdesc->nparts > 0)
+				{
+					PartitionBoundInfo boundinfo = partdesc->boundinfo;
+					Datum	  **datums = boundinfo->datums;
+					int			ndatums = boundinfo->ndatums;
+					int			greatest_modulus;
+					int			remainder;
+					int			offset;
+					bool		equal,
+								valid_modulus = true;
+					int			prev_modulus,	/* Previous largest modulus */
+								next_modulus;	/* Next largest modulus */
+
+					/*
+					 * Check rule that every modulus must be a factor of the
+					 * next larger modulus.  For example, if you have a bunch
+					 * of partitions that all have modulus 5, you can add a
+					 * new partition with modulus 10 or a new partition with
+					 * modulus 15, but you cannot add both a partition with
+					 * modulus 10 and a partition with modulus 15, because 10
+					 * is not a factor of 15.
+					 *
+					 * Get the greatest bound in the boundinfo->datums array
+					 * that is less than or equal to (spec->modulus,
+					 * spec->remainder).
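(Illustrative note: the search compares
+					 * (modulus, remainder) pairs using partition_hbound_cmp,
+					 * so "less than or equal" is with respect to that
+					 * modulus-then-remainder ordering; see
+					 * partition_bound_cmp's hash case below.)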
+					 */
+					offset = partition_bound_bsearch(key, boundinfo, spec,
+													 true, &equal);
+					if (offset < 0)
+					{
+						next_modulus = DatumGetInt32(datums[0][0]);
+						valid_modulus = (next_modulus % spec->modulus) == 0;
+					}
+					else
+					{
+						prev_modulus = DatumGetInt32(datums[offset][0]);
+						valid_modulus = (spec->modulus % prev_modulus) == 0;
+
+						if (valid_modulus && (offset + 1) < ndatums)
+						{
+							next_modulus = DatumGetInt32(datums[offset + 1][0]);
+							valid_modulus = (next_modulus % spec->modulus) == 0;
+						}
+					}
+
+					if (!valid_modulus)
+						ereport(ERROR,
+								(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
+								 errmsg("every hash partition modulus must be a factor of the next larger modulus")));
+
+					greatest_modulus = get_greatest_modulus(boundinfo);
+					remainder = spec->remainder;
+
+					/*
+					 * Normally, the lowest remainder that could conflict with
+					 * the new partition is equal to the remainder specified
+					 * for the new partition, but when the new partition has a
+					 * modulus higher than any used so far, we need to adjust.
+					 */
+					if (remainder >= greatest_modulus)
+						remainder = remainder % greatest_modulus;
+
+					/* Check every potentially-conflicting remainder. */
+					do
+					{
+						if (boundinfo->indexes[remainder] != -1)
+						{
+							overlap = true;
+							with = boundinfo->indexes[remainder];
+							break;
+						}
+						remainder += spec->modulus;
+					} while (remainder < greatest_modulus);
+				}
+
+				break;
+			}
+
 		case PARTITION_STRATEGY_LIST:
 			{
 				Assert(spec->strategy == PARTITION_STRATEGY_LIST);
@@ -1171,6 +1417,11 @@ get_qual_from_partbound(Relation rel, Relation parent,

 	switch (key->strategy)
 	{
+		case PARTITION_STRATEGY_HASH:
+			Assert(spec->strategy == PARTITION_STRATEGY_HASH);
+			my_qual = get_qual_for_hash(parent, spec);
+			break;
+
 		case PARTITION_STRATEGY_LIST:
 			Assert(spec->strategy == PARTITION_STRATEGY_LIST);
 			my_qual = get_qual_for_list(parent, spec);
@@ -1541,6 +1792,92 @@ make_partition_op_expr(PartitionKey key, int keynum,
 	return result;
 }

+/*
+ * get_qual_for_hash
+ *
+ * Given a list of partition columns and the modulus and remainder
+ * corresponding to a partition, this function returns a CHECK constraint
+ * expression Node for that partition.
+ *
+ * The partition constraint for a hash partition is always a call to the
+ * built-in function satisfies_hash_partition().  The first three arguments
+ * are the parent table OID, modulus, and remainder; the remaining arguments
+ * are the values to be hashed.
+ */
+static List *
+get_qual_for_hash(Relation parent, PartitionBoundSpec *spec)
+{
+	PartitionKey key = RelationGetPartitionKey(parent);
+	FuncExpr   *fexpr;
+	Node	   *relidConst;
+	Node	   *modulusConst;
+	Node	   *remainderConst;
+	List	   *args;
+	ListCell   *partexprs_item;
+	int			i;
+
+	/* Fixed arguments. */
+	relidConst = (Node *) makeConst(OIDOID,
+									-1,
+									InvalidOid,
+									sizeof(Oid),
+									ObjectIdGetDatum(RelationGetRelid(parent)),
+									false,
+									true);
+
+	modulusConst = (Node *) makeConst(INT4OID,
+									  -1,
+									  InvalidOid,
+									  sizeof(int32),
+									  Int32GetDatum(spec->modulus),
+									  false,
+									  true);
+
+	remainderConst = (Node *) makeConst(INT4OID,
+										-1,
+										InvalidOid,
+										sizeof(int32),
+										Int32GetDatum(spec->remainder),
+										false,
+										true);
+
+	args = list_make3(relidConst, modulusConst, remainderConst);
+	partexprs_item = list_head(key->partexprs);
+
+	/* Add an argument for each key column.
*/
+	for (i = 0; i < key->partnatts; i++)
+	{
+		Node	   *keyCol;
+
+		/* Left operand */
+		if (key->partattrs[i] != 0)
+		{
+			keyCol = (Node *) makeVar(1,
+									  key->partattrs[i],
+									  key->parttypid[i],
+									  key->parttypmod[i],
+									  key->parttypcoll[i],
+									  0);
+		}
+		else
+		{
+			keyCol = (Node *) copyObject(lfirst(partexprs_item));
+			partexprs_item = lnext(partexprs_item);
+		}
+
+		args = lappend(args, keyCol);
+	}
+
+	fexpr = makeFuncExpr(F_SATISFIES_HASH_PARTITION,
+						 BOOLOID,
+						 args,
+						 InvalidOid,
+						 InvalidOid,
+						 COERCE_EXPLICIT_CALL);
+
+	return list_make1(fexpr);
+}
+
 /*
  * get_qual_for_list
  *
@@ -2412,6 +2749,17 @@ get_partition_for_tuple(PartitionDispatch *pd,
 	/* Route as appropriate based on partitioning strategy. */
 	switch (key->strategy)
 	{
+		case PARTITION_STRATEGY_HASH:
+			{
+				PartitionBoundInfo boundinfo = partdesc->boundinfo;
+				int			greatest_modulus = get_greatest_modulus(boundinfo);
+				uint64		rowHash = compute_hash_value(key, values,
+														 isnull);
+
+				cur_index = boundinfo->indexes[rowHash % greatest_modulus];
+			}
+			break;
+
 		case PARTITION_STRATEGY_LIST:

 			if (isnull[0])
@@ -2524,6 +2872,38 @@ get_partition_for_tuple(PartitionDispatch *pd,
 	return result;
 }

+/*
+ * qsort_partition_hbound_cmp
+ *
+ * We sort hash bounds by modulus, then by remainder.
+ */
+static int32
+qsort_partition_hbound_cmp(const void *a, const void *b)
+{
+	PartitionHashBound *h1 = (*(PartitionHashBound *const *) a);
+	PartitionHashBound *h2 = (*(PartitionHashBound *const *) b);
+
+	return partition_hbound_cmp(h1->modulus, h1->remainder,
+								h2->modulus, h2->remainder);
+}
+
+/*
+ * partition_hbound_cmp
+ *
+ * Compares modulus first, then remainder if the moduli are equal.
+ */
+static int32
+partition_hbound_cmp(int modulus1, int remainder1, int modulus2, int remainder2)
+{
+	if (modulus1 < modulus2)
+		return -1;
+	if (modulus1 > modulus2)
+		return 1;
+	if (modulus1 == modulus2 && remainder1 != remainder2)
+		return (remainder1 > remainder2) ? 1 : -1;
+	return 0;
+}
+
 /*
  * qsort_partition_list_value_cmp
  *
@@ -2710,6 +3090,15 @@ partition_bound_cmp(PartitionKey key, PartitionBoundInfo boundinfo,

 	switch (key->strategy)
 	{
+		case PARTITION_STRATEGY_HASH:
+			{
+				PartitionBoundSpec *spec = (PartitionBoundSpec *) probe;
+
+				cmpval = partition_hbound_cmp(DatumGetInt32(bound_datums[0]),
+											  DatumGetInt32(bound_datums[1]),
+											  spec->modulus, spec->remainder);
+				break;
+			}
 		case PARTITION_STRATEGY_LIST:
 			cmpval = DatumGetInt32(FunctionCall2Coll(&key->partsupfunc[0],
 													 key->partcollation[0],
@@ -2894,3 +3283,182 @@ get_proposed_default_constraint(List *new_part_constraints)

 	return list_make1(defPartConstraint);
 }
+
+/*
+ * get_partition_bound_num_indexes
+ *
+ * Returns the number of entries in the partition bound's indexes array.
+ */
+static int
+get_partition_bound_num_indexes(PartitionBoundInfo bound)
+{
+	int			num_indexes;
+
+	Assert(bound);
+
+	switch (bound->strategy)
+	{
+		case PARTITION_STRATEGY_HASH:
+
+			/*
+			 * The number of entries in the indexes array is the same as the
+			 * greatest modulus.
+			 */
+			num_indexes = get_greatest_modulus(bound);
+			break;
+
+		case PARTITION_STRATEGY_LIST:
+			num_indexes = bound->ndatums;
+			break;
+
+		case PARTITION_STRATEGY_RANGE:
+			/* Range partitioned table has an extra index. */
+			num_indexes = bound->ndatums + 1;
+			break;
+
+		default:
+			elog(ERROR, "unexpected partition strategy: %d",
+				 (int) bound->strategy);
+	}
+
+	return num_indexes;
+}
+
+/*
+ * get_greatest_modulus
+ *
+ * Returns the greatest modulus of the hash partition bound.
The greatest
+ * modulus will be at the end of the datums array because hash partitions are
+ * arranged in the ascending order of their moduli and remainders.
+ */
+static int
+get_greatest_modulus(PartitionBoundInfo bound)
+{
+	Assert(bound && bound->strategy == PARTITION_STRATEGY_HASH);
+	Assert(bound->datums && bound->ndatums > 0);
+	Assert(DatumGetInt32(bound->datums[bound->ndatums - 1][0]) > 0);
+
+	return DatumGetInt32(bound->datums[bound->ndatums - 1][0]);
+}
+
+/*
+ * compute_hash_value
+ *
+ * Compute the hash value for the given non-null partition key values.
+ */
+static uint64
+compute_hash_value(PartitionKey key, Datum *values, bool *isnull)
+{
+	int			i;
+	int			nkeys = key->partnatts;
+	uint64		rowHash = 0;
+	Datum		seed = UInt64GetDatum(HASH_PARTITION_SEED);
+
+	for (i = 0; i < nkeys; i++)
+	{
+		if (!isnull[i])
+		{
+			Datum		hash;
+
+			Assert(OidIsValid(key->partsupfunc[i].fn_oid));
+
+			/*
+			 * Compute the hash for each datum value by calling the
+			 * respective datatype-specific hash function for each partition
+			 * key attribute.
+			 */
+			hash = FunctionCall2(&key->partsupfunc[i], values[i], seed);
+
+			/* Form a single 64-bit hash value */
+			rowHash = hash_combine64(rowHash, DatumGetUInt64(hash));
+		}
+	}
+
+	return rowHash;
+}
+
+/*
+ * satisfies_hash_partition
+ *
+ * This is a SQL-callable function for use in hash partition constraints.  It
+ * takes the value of each partition key attribute, computes the hash of each
+ * using the partition key's hash support function, and combines the results
+ * into a single hash value by calling hash_combine64.
+ *
+ * Returns true if the remainder produced when this combined hash value is
+ * divided by the given modulus equals the given remainder, otherwise false.
+ *
+ * See get_qual_for_hash() for usage.
+ */
+Datum
+satisfies_hash_partition(PG_FUNCTION_ARGS)
+{
+	typedef struct ColumnsHashData
+	{
+		Oid			relid;
+		int16		nkeys;
+		FmgrInfo	partsupfunc[PARTITION_MAX_KEYS];
+	} ColumnsHashData;
+	Oid			parentId = PG_GETARG_OID(0);
+	int			modulus = PG_GETARG_INT32(1);
+	int			remainder = PG_GETARG_INT32(2);
+	short		nkeys = PG_NARGS() - 3;
+	int			i;
+	Datum		seed = UInt64GetDatum(HASH_PARTITION_SEED);
+	ColumnsHashData *my_extra;
+	uint64		rowHash = 0;
+
+	/*
+	 * Cache hash function information.
+	 */
+	my_extra = (ColumnsHashData *) fcinfo->flinfo->fn_extra;
+	if (my_extra == NULL || my_extra->nkeys != nkeys ||
+		my_extra->relid != parentId)
+	{
+		Relation	parent;
+		PartitionKey key;
+		int			j;
+
+		fcinfo->flinfo->fn_extra =
+			MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt,
+								   offsetof(ColumnsHashData, partsupfunc) +
+								   sizeof(FmgrInfo) * nkeys);
+		my_extra = (ColumnsHashData *) fcinfo->flinfo->fn_extra;
+		my_extra->nkeys = nkeys;
+		my_extra->relid = parentId;
+
+		/* Open parent relation and fetch partition key info */
+		parent = heap_open(parentId, AccessShareLock);
+		key = RelationGetPartitionKey(parent);
+
+		Assert(key->partnatts == nkeys);
+		for (j = 0; j < nkeys; ++j)
+			fmgr_info_copy(&my_extra->partsupfunc[j],
+						   &key->partsupfunc[j],
+						   fcinfo->flinfo->fn_mcxt);
+
+		/* Hold lock until commit */
+		heap_close(parent, NoLock);
+	}
+
+	for (i = 0; i < nkeys; i++)
+	{
+		/* keys start from the fourth argument of the function.
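The first three
+		 * are the parent relid, modulus and remainder; see
+		 * get_qual_for_hash().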
*/ + int argno = i + 3; + + if (!PG_ARGISNULL(argno)) + { + Datum hash; + + Assert(OidIsValid(my_extra->partsupfunc[i].fn_oid)); + + hash = FunctionCall2(&my_extra->partsupfunc[i], + PG_GETARG_DATUM(argno), + seed); + + /* Form a single 64-bit hash value */ + rowHash = hash_combine64(rowHash, DatumGetUInt64(hash)); + } + } + + PG_RETURN_BOOL(rowHash % modulus == remainder); +} diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index b7ddb335d2..165b165d55 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -471,7 +471,7 @@ static void RangeVarCallbackForAlterRelation(const RangeVar *rv, Oid relid, static bool is_partition_attr(Relation rel, AttrNumber attnum, bool *used_in_expr); static PartitionSpec *transformPartitionSpec(Relation rel, PartitionSpec *partspec, char *strategy); static void ComputePartitionAttrs(Relation rel, List *partParams, AttrNumber *partattrs, - List **partexprs, Oid *partopclass, Oid *partcollation); + List **partexprs, Oid *partopclass, Oid *partcollation, char strategy); static void CreateInheritance(Relation child_rel, Relation parent_rel); static void RemoveInheritance(Relation child_rel, Relation parent_rel); static ObjectAddress ATExecAttachPartition(List **wqueue, Relation rel, @@ -894,7 +894,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, ComputePartitionAttrs(rel, stmt->partspec->partParams, partattrs, &partexprs, partopclass, - partcollation); + partcollation, strategy); StorePartitionKey(rel, strategy, partnatts, partattrs, partexprs, partopclass, partcollation); @@ -13337,7 +13337,9 @@ transformPartitionSpec(Relation rel, PartitionSpec *partspec, char *strategy) newspec->location = partspec->location; /* Parse partitioning strategy name */ - if (pg_strcasecmp(partspec->strategy, "list") == 0) + if (pg_strcasecmp(partspec->strategy, "hash") == 0) + *strategy = PARTITION_STRATEGY_HASH; + else if (pg_strcasecmp(partspec->strategy, "list") == 0) *strategy = PARTITION_STRATEGY_LIST; else if (pg_strcasecmp(partspec->strategy, "range") == 0) *strategy = PARTITION_STRATEGY_RANGE; @@ -13407,10 +13409,12 @@ transformPartitionSpec(Relation rel, PartitionSpec *partspec, char *strategy) */ static void ComputePartitionAttrs(Relation rel, List *partParams, AttrNumber *partattrs, - List **partexprs, Oid *partopclass, Oid *partcollation) + List **partexprs, Oid *partopclass, Oid *partcollation, + char strategy) { int attn; ListCell *lc; + Oid am_oid; attn = 0; foreach(lc, partParams) @@ -13570,25 +13574,41 @@ ComputePartitionAttrs(Relation rel, List *partParams, AttrNumber *partattrs, partcollation[attn] = attcollation; /* - * Identify a btree opclass to use. Currently, we use only btree - * operators, which seems enough for list and range partitioning. + * Identify the appropriate operator class. For list and range + * partitioning, we use a btree operator class; hash partitioning uses + * a hash operator class. 
*/ + if (strategy == PARTITION_STRATEGY_HASH) + am_oid = HASH_AM_OID; + else + am_oid = BTREE_AM_OID; + if (!pelem->opclass) { - partopclass[attn] = GetDefaultOpClass(atttype, BTREE_AM_OID); + partopclass[attn] = GetDefaultOpClass(atttype, am_oid); if (!OidIsValid(partopclass[attn])) - ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_OBJECT), - errmsg("data type %s has no default btree operator class", - format_type_be(atttype)), - errhint("You must specify a btree operator class or define a default btree operator class for the data type."))); + { + if (strategy == PARTITION_STRATEGY_HASH) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_OBJECT), + errmsg("data type %s has no default hash operator class", + format_type_be(atttype)), + errhint("You must specify a hash operator class or define a default hash operator class for the data type."))); + else + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_OBJECT), + errmsg("data type %s has no default btree operator class", + format_type_be(atttype)), + errhint("You must specify a btree operator class or define a default btree operator class for the data type."))); + + } } else partopclass[attn] = ResolveOpClass(pelem->opclass, atttype, - "btree", - BTREE_AM_OID); + am_oid == HASH_AM_OID ? "hash" : "btree", + am_oid); attn++; } diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index c1a83ca909..cadd253ef1 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -4461,6 +4461,8 @@ _copyPartitionBoundSpec(const PartitionBoundSpec *from) COPY_SCALAR_FIELD(strategy); COPY_SCALAR_FIELD(is_default); + COPY_SCALAR_FIELD(modulus); + COPY_SCALAR_FIELD(remainder); COPY_NODE_FIELD(listdatums); COPY_NODE_FIELD(lowerdatums); COPY_NODE_FIELD(upperdatums); diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 7a700018e7..2866fd7b4a 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2848,6 +2848,8 @@ _equalPartitionBoundSpec(const PartitionBoundSpec *a, const PartitionBoundSpec * { COMPARE_SCALAR_FIELD(strategy); COMPARE_SCALAR_FIELD(is_default); + COMPARE_SCALAR_FIELD(modulus); + COMPARE_SCALAR_FIELD(remainder); COMPARE_NODE_FIELD(listdatums); COMPARE_NODE_FIELD(lowerdatums); COMPARE_NODE_FIELD(upperdatums); diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 43d62062bc..291d1eeb46 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -3578,6 +3578,8 @@ _outPartitionBoundSpec(StringInfo str, const PartitionBoundSpec *node) WRITE_CHAR_FIELD(strategy); WRITE_BOOL_FIELD(is_default); + WRITE_INT_FIELD(modulus); + WRITE_INT_FIELD(remainder); WRITE_NODE_FIELD(listdatums); WRITE_NODE_FIELD(lowerdatums); WRITE_NODE_FIELD(upperdatums); diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index ccb6a1f4ac..42c595dc03 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -2397,6 +2397,8 @@ _readPartitionBoundSpec(void) READ_CHAR_FIELD(strategy); READ_BOOL_FIELD(is_default); + READ_INT_FIELD(modulus); + READ_INT_FIELD(remainder); READ_NODE_FIELD(listdatums); READ_NODE_FIELD(lowerdatums); READ_NODE_FIELD(upperdatums); diff --git a/src/backend/optimizer/path/joinrels.c b/src/backend/optimizer/path/joinrels.c index 244708ad5a..453f25964a 100644 --- a/src/backend/optimizer/path/joinrels.c +++ b/src/backend/optimizer/path/joinrels.c @@ -1463,7 +1463,7 @@ have_partkey_equi_join(RelOptInfo *rel1, RelOptInfo *rel2, JoinType jointype, continue; /* Skip clauses which are not 
equality conditions. */ - if (!rinfo->mergeopfamilies) + if (!rinfo->mergeopfamilies && !OidIsValid(rinfo->hashjoinoperator)) continue; opexpr = (OpExpr *) rinfo->clause; @@ -1515,8 +1515,14 @@ have_partkey_equi_join(RelOptInfo *rel1, RelOptInfo *rel2, JoinType jointype, * The clause allows partition-wise join if only it uses the same * operator family as that specified by the partition key. */ - if (!list_member_oid(rinfo->mergeopfamilies, - part_scheme->partopfamily[ipk1])) + if (rel1->part_scheme->strategy == PARTITION_STRATEGY_HASH) + { + if (!op_in_opfamily(rinfo->hashjoinoperator, + part_scheme->partopfamily[ipk1])) + continue; + } + else if (!list_member_oid(rinfo->mergeopfamilies, + part_scheme->partopfamily[ipk1])) continue; /* Mark the partition key as having an equi-join clause. */ diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index 09b9a899e4..c301ca465d 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -579,7 +579,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); %type part_params %type PartitionBoundSpec %type partbound_datum PartitionRangeDatum -%type partbound_datum_list range_datum_list +%type hash_partbound partbound_datum_list range_datum_list +%type hash_partbound_elem /* * Non-keyword token types. These are hard-wired into the "flex" lexer. @@ -2638,8 +2639,61 @@ alter_identity_column_option: ; PartitionBoundSpec: + /* a HASH partition*/ + FOR VALUES WITH '(' hash_partbound ')' + { + ListCell *lc; + PartitionBoundSpec *n = makeNode(PartitionBoundSpec); + + n->strategy = PARTITION_STRATEGY_HASH; + n->modulus = n->remainder = -1; + + foreach (lc, $5) + { + DefElem *opt = lfirst_node(DefElem, lc); + + if (strcmp(opt->defname, "modulus") == 0) + { + if (n->modulus != -1) + ereport(ERROR, + (errcode(ERRCODE_DUPLICATE_OBJECT), + errmsg("modulus for hash partition provided more than once"), + parser_errposition(opt->location))); + n->modulus = defGetInt32(opt); + } + else if (strcmp(opt->defname, "remainder") == 0) + { + if (n->remainder != -1) + ereport(ERROR, + (errcode(ERRCODE_DUPLICATE_OBJECT), + errmsg("remainder for hash partition provided more than once"), + parser_errposition(opt->location))); + n->remainder = defGetInt32(opt); + } + else + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("unrecognized hash partition bound specification \"%s\"", + opt->defname), + parser_errposition(opt->location))); + } + + if (n->modulus == -1) + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("modulus for hash partition must be specified"))); + if (n->remainder == -1) + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("remainder for hash partition must be specified"))); + + n->location = @3; + + $$ = n; + } + /* a LIST partition */ - FOR VALUES IN_P '(' partbound_datum_list ')' + | FOR VALUES IN_P '(' partbound_datum_list ')' { PartitionBoundSpec *n = makeNode(PartitionBoundSpec); @@ -2677,6 +2731,24 @@ PartitionBoundSpec: } ; +hash_partbound_elem: + NonReservedWord Iconst + { + $$ = makeDefElem($1, (Node *)makeInteger($2), @1); + } + ; + +hash_partbound: + hash_partbound_elem + { + $$ = list_make1($1); + } + | hash_partbound ',' hash_partbound_elem + { + $$ = lappend($1, $3); + } + ; + partbound_datum: Sconst { $$ = makeStringConst($1, @1); } | NumericOnly { $$ = makeAConst($1, @1); } diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 30fc2d9ff8..8461da490a 100644 --- a/src/backend/parser/parse_utilcmd.c +++ 
b/src/backend/parser/parse_utilcmd.c @@ -3310,6 +3310,11 @@ transformPartitionBound(ParseState *pstate, Relation parent, if (spec->is_default) { + if (strategy == PARTITION_STRATEGY_HASH) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TABLE_DEFINITION), + errmsg("a hash-partitioned table may not have a default partition"))); + /* * In case of the default partition, parser had no way to identify the * partition strategy. Assign the parent's strategy to the default @@ -3320,7 +3325,27 @@ transformPartitionBound(ParseState *pstate, Relation parent, return result_spec; } - if (strategy == PARTITION_STRATEGY_LIST) + if (strategy == PARTITION_STRATEGY_HASH) + { + if (spec->strategy != PARTITION_STRATEGY_HASH) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TABLE_DEFINITION), + errmsg("invalid bound specification for a hash partition"), + parser_errposition(pstate, exprLocation((Node *) spec)))); + + if (spec->modulus <= 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TABLE_DEFINITION), + errmsg("modulus for hash partition must be a positive integer"))); + + Assert(spec->remainder >= 0); + + if (spec->remainder >= spec->modulus) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TABLE_DEFINITION), + errmsg("remainder for hash partition must be less than modulus"))); + } + else if (strategy == PARTITION_STRATEGY_LIST) { ListCell *cell; char *colname; @@ -3485,7 +3510,7 @@ transformPartitionBound(ParseState *pstate, Relation parent, static void validateInfiniteBounds(ParseState *pstate, List *blist) { - ListCell *lc; + ListCell *lc; PartitionRangeDatumKind kind = PARTITION_RANGE_DATUM_VALUE; foreach(lc, blist) diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 752cef09e6..b543b7046c 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -1551,7 +1551,7 @@ pg_get_statisticsobj_worker(Oid statextid, bool missing_ok) * * Returns the partition key specification, ie, the following: * - * PARTITION BY { RANGE | LIST } (column opt_collation opt_opclass [, ...]) + * PARTITION BY { RANGE | LIST | HASH } (column opt_collation opt_opclass [, ...]) */ Datum pg_get_partkeydef(PG_FUNCTION_ARGS) @@ -1655,6 +1655,10 @@ pg_get_partkeydef_worker(Oid relid, int prettyFlags, switch (form->partstrat) { + case PARTITION_STRATEGY_HASH: + if (!attrsOnly) + appendStringInfo(&buf, "HASH"); + break; case PARTITION_STRATEGY_LIST: if (!attrsOnly) appendStringInfoString(&buf, "LIST"); @@ -8711,6 +8715,15 @@ get_rule_expr(Node *node, deparse_context *context, switch (spec->strategy) { + case PARTITION_STRATEGY_HASH: + Assert(spec->modulus > 0 && spec->remainder >= 0); + Assert(spec->modulus > spec->remainder); + + appendStringInfoString(buf, "FOR VALUES"); + appendStringInfo(buf, " WITH (modulus %d, remainder %d)", + spec->modulus, spec->remainder); + break; + case PARTITION_STRATEGY_LIST: Assert(spec->listdatums != NIL); diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index a31b68a8d5..1908420d82 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -30,6 +30,7 @@ #include #include +#include "access/hash.h" #include "access/htup_details.h" #include "access/multixact.h" #include "access/nbtree.h" @@ -833,6 +834,7 @@ RelationBuildPartitionKey(Relation relation) Datum datum; MemoryContext partkeycxt, oldcxt; + int16 procnum; tuple = SearchSysCache1(PARTRELID, ObjectIdGetDatum(RelationGetRelid(relation))); @@ -912,6 +914,10 @@ RelationBuildPartitionKey(Relation relation) key->parttypalign = (char *) 
palloc0(key->partnatts * sizeof(char)); key->parttypcoll = (Oid *) palloc0(key->partnatts * sizeof(Oid)); + /* For the hash partitioning, an extended hash function will be used. */ + procnum = (key->strategy == PARTITION_STRATEGY_HASH) ? + HASHEXTENDED_PROC : BTORDER_PROC; + /* Copy partattrs and fill other per-attribute info */ memcpy(key->partattrs, attrs, key->partnatts * sizeof(int16)); partexprs_item = list_head(key->partexprs); @@ -932,18 +938,20 @@ RelationBuildPartitionKey(Relation relation) key->partopfamily[i] = opclassform->opcfamily; key->partopcintype[i] = opclassform->opcintype; - /* - * A btree support function covers the cases of list and range methods - * currently supported. - */ + /* Get a support function for the specified opfamily and datatypes */ funcid = get_opfamily_proc(opclassform->opcfamily, opclassform->opcintype, opclassform->opcintype, - BTORDER_PROC); - if (!OidIsValid(funcid)) /* should not happen */ - elog(ERROR, "missing support function %d(%u,%u) in opfamily %u", - BTORDER_PROC, opclassform->opcintype, opclassform->opcintype, - opclassform->opcfamily); + procnum); + if (!OidIsValid(funcid)) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("operator class \"%s\" of access method %s is missing support function %d for data type \"%s\"", + NameStr(opclassform->opcname), + (key->strategy == PARTITION_STRATEGY_HASH) ? + "hash" : "btree", + procnum, + format_type_be(opclassform->opcintype)))); fmgr_info(funcid, &key->partsupfunc[i]); diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index a09c49d6cf..b3e3799c13 100644 --- a/src/bin/psql/tab-complete.c +++ b/src/bin/psql/tab-complete.c @@ -2055,7 +2055,7 @@ psql_completion(const char *text, int start, int end) else if (TailMatches3("ATTACH", "PARTITION", MatchAny)) COMPLETE_WITH_LIST2("FOR VALUES", "DEFAULT"); else if (TailMatches2("FOR", "VALUES")) - COMPLETE_WITH_LIST2("FROM (", "IN ("); + COMPLETE_WITH_LIST3("FROM (", "IN (", "WITH ("); /* * If we have ALTER TABLE DETACH PARTITION, provide a list of diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 39c70b415a..bd4014a69d 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201711091 +#define CATALOG_VERSION_NO 201711092 #endif diff --git a/src/include/catalog/partition.h b/src/include/catalog/partition.h index 945ac0239d..8acc01a876 100644 --- a/src/include/catalog/partition.h +++ b/src/include/catalog/partition.h @@ -19,6 +19,9 @@ #include "parser/parse_node.h" #include "utils/rel.h" +/* Seed for the extended hash function */ +#define HASH_PARTITION_SEED UINT64CONST(0x7A5B22367996DCFD) + /* * PartitionBoundInfo encapsulates a set of partition bounds. It is usually * associated with partitioned tables as part of its partition descriptor. 
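As an aside for readers of this patch: the pieces above are observable from plain SQL once the patch is applied. The snippet below is an illustration added for this write-up, not part of the patch. It assumes the four-partition orders table from the documentation example earlier and an arbitrary key value of 12345; which remainder matches is platform-dependent, since the built-in hash functions may return different results on different machines.

-- Illustrative only: find the remainder (and hence the partition) that a
-- key value maps to, via the SQL-callable constraint function whose catalog
-- entry is added in pg_proc.h just below.  satisfies_hash_partition()
-- hashes the supplied key value(s) with the partition key's hash support
-- function and tests the combined hash modulo the given modulus against
-- the given remainder, so exactly one remainder in 0..3 matches.
SELECT r AS remainder
FROM generate_series(0, 3) AS r
WHERE satisfies_hash_partition('orders'::regclass, 4, r, 12345::bigint);

The matching remainder identifies the orders_pN partition that an INSERT of that key value would be routed to.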
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index 5e3e7228d6..0330c04f16 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -5522,6 +5522,10 @@ DESCR("list files in the log directory"); DATA(insert OID = 3354 ( pg_ls_waldir PGNSP PGUID 12 10 20 0 0 f f f f t t v s 0 0 2249 "" "{25,20,1184}" "{o,o,o}" "{name,size,modification}" _null_ _null_ pg_ls_waldir _null_ _null_ _null_ )); DESCR("list of files in the WAL directory"); +/* hash partitioning constraint function */ +DATA(insert OID = 5028 ( satisfies_hash_partition PGNSP PGUID 12 1 0 2276 0 f f f f f f i s 4 0 16 "26 23 23 2276" _null_ _null_ _null_ _null_ _null_ satisfies_hash_partition _null_ _null_ _null_ )); +DESCR("hash partition CHECK constraint"); + /* * Symbolic values for provolatile column: these indicate whether the result * of a function is dependent *only* on the values of its explicit arguments, diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index a240c271db..34d6afc80f 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -777,12 +777,14 @@ typedef struct PartitionElem typedef struct PartitionSpec { NodeTag type; - char *strategy; /* partitioning strategy ('list' or 'range') */ + char *strategy; /* partitioning strategy ('hash', 'list' or + * 'range') */ List *partParams; /* List of PartitionElems */ int location; /* token location, or -1 if unknown */ } PartitionSpec; /* Internal codes for partitioning strategies */ +#define PARTITION_STRATEGY_HASH 'h' #define PARTITION_STRATEGY_LIST 'l' #define PARTITION_STRATEGY_RANGE 'r' @@ -799,6 +801,10 @@ typedef struct PartitionBoundSpec char strategy; /* see PARTITION_STRATEGY codes above */ bool is_default; /* is it a default partition bound? */ + /* Partitioning info for HASH strategy: */ + int modulus; + int remainder; + /* Partitioning info for LIST strategy: */ List *listdatums; /* List of Consts (or A_Consts in raw tree) */ diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index ee1f10c8e0..11f0baa11b 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3399,6 +3399,7 @@ SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = 'part_1'::reg CREATE TABLE fail_part (LIKE part_1 INCLUDING CONSTRAINTS); ALTER TABLE list_parted ATTACH PARTITION fail_part FOR VALUES IN (1); ERROR: partition "fail_part" would overlap partition "part_1" +DROP TABLE fail_part; -- check that an existing table can be attached as a default partition CREATE TABLE def_part (LIKE list_parted INCLUDING CONSTRAINTS); ALTER TABLE list_parted ATTACH PARTITION def_part DEFAULT; @@ -3596,6 +3597,59 @@ CREATE TABLE quuux1 PARTITION OF quuux FOR VALUES IN (1); CREATE TABLE quuux2 PARTITION OF quuux FOR VALUES IN (2); INFO: updated partition constraint for default partition "quuux_default1" is implied by existing constraints DROP TABLE quuux; +-- check validation when attaching hash partitions +-- The default hash functions as they exist today aren't portable; they can +-- return different results on different machines. Depending upon how the +-- values are hashed, the row may map to different partitions, which result in +-- regression failure. To avoid this, let's create a non-default hash function +-- that just returns the input value unchanged. 
+CREATE OR REPLACE FUNCTION dummy_hashint4(a int4, seed int8) RETURNS int8 AS +$$ BEGIN RETURN (a + 1 + seed); END; $$ LANGUAGE 'plpgsql' IMMUTABLE; +CREATE OPERATOR CLASS custom_opclass FOR TYPE int4 USING HASH AS +OPERATOR 1 = , FUNCTION 2 dummy_hashint4(int4, int8); +-- check that the new partition won't overlap with an existing partition +CREATE TABLE hash_parted ( + a int, + b int +) PARTITION BY HASH (a custom_opclass); +CREATE TABLE hpart_1 PARTITION OF hash_parted FOR VALUES WITH (MODULUS 4, REMAINDER 0); +CREATE TABLE fail_part (LIKE hpart_1); +ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 8, REMAINDER 4); +ERROR: partition "fail_part" would overlap partition "hpart_1" +ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 8, REMAINDER 0); +ERROR: partition "fail_part" would overlap partition "hpart_1" +DROP TABLE fail_part; +-- check validation when attaching hash partitions +-- check that violating rows are correctly reported +CREATE TABLE hpart_2 (LIKE hash_parted); +INSERT INTO hpart_2 VALUES (3, 0); +ALTER TABLE hash_parted ATTACH PARTITION hpart_2 FOR VALUES WITH (MODULUS 4, REMAINDER 1); +ERROR: partition constraint is violated by some row +-- should be ok after deleting the bad row +DELETE FROM hpart_2; +ALTER TABLE hash_parted ATTACH PARTITION hpart_2 FOR VALUES WITH (MODULUS 4, REMAINDER 1); +-- check that leaf partitions are scanned when attaching a partitioned +-- table +CREATE TABLE hpart_5 ( + LIKE hash_parted +) PARTITION BY LIST (b); +-- check that violating rows are correctly reported +CREATE TABLE hpart_5_a PARTITION OF hpart_5 FOR VALUES IN ('1', '2', '3'); +INSERT INTO hpart_5_a (a, b) VALUES (7, 1); +ALTER TABLE hash_parted ATTACH PARTITION hpart_5 FOR VALUES WITH (MODULUS 4, REMAINDER 2); +ERROR: partition constraint is violated by some row +-- should be ok after deleting the bad row +DELETE FROM hpart_5_a; +ALTER TABLE hash_parted ATTACH PARTITION hpart_5 FOR VALUES WITH (MODULUS 4, REMAINDER 2); +-- check that the table being attach is with valid modulus and remainder value +CREATE TABLE fail_part(LIKE hash_parted); +ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 0, REMAINDER 1); +ERROR: modulus for hash partition must be a positive integer +ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 8, REMAINDER 8); +ERROR: remainder for hash partition must be less than modulus +ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 3, REMAINDER 2); +ERROR: every hash partition modulus must be a factor of the next larger modulus +DROP TABLE fail_part; -- -- DETACH PARTITION -- @@ -3607,12 +3661,17 @@ DROP TABLE regular_table; -- check that the partition being detached exists at all ALTER TABLE list_parted2 DETACH PARTITION part_4; ERROR: relation "part_4" does not exist +ALTER TABLE hash_parted DETACH PARTITION hpart_4; +ERROR: relation "hpart_4" does not exist -- check that the partition being detached is actually a partition of the parent CREATE TABLE not_a_part (a int); ALTER TABLE list_parted2 DETACH PARTITION not_a_part; ERROR: relation "not_a_part" is not a partition of relation "list_parted2" ALTER TABLE list_parted2 DETACH PARTITION part_1; ERROR: relation "part_1" is not a partition of relation "list_parted2" +ALTER TABLE hash_parted DETACH PARTITION not_a_part; +ERROR: relation "not_a_part" is not a partition of relation "hash_parted" +DROP TABLE not_a_part; -- check that, after being detached, attinhcount/coninhcount is dropped 
to 0 and -- attislocal/conislocal is set to true ALTER TABLE list_parted2 DETACH PARTITION part_3_4; @@ -3716,6 +3775,9 @@ SELECT * FROM list_parted; -- cleanup DROP TABLE list_parted, list_parted2, range_parted; DROP TABLE fail_def_part; +DROP TABLE hash_parted; +DROP OPERATOR CLASS custom_opclass USING HASH; +DROP FUNCTION dummy_hashint4(a int4, seed int8); -- more tests for certain multi-level partitioning scenarios create table p (a int, b int) partition by range (a, b); create table p1 (b int, a int not null) partition by range (b); diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index 60ab28a96a..335cd37e18 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -340,11 +340,11 @@ CREATE TABLE partitioned ( ) PARTITION BY RANGE (const_func()); ERROR: cannot use constant expression as partition key DROP FUNCTION const_func(); --- only accept "list" and "range" as partitioning strategy +-- only accept valid partitioning strategy CREATE TABLE partitioned ( - a int -) PARTITION BY HASH (a); -ERROR: unrecognized partitioning strategy "hash" + a int +) PARTITION BY MAGIC (a); +ERROR: unrecognized partitioning strategy "magic" -- specified column must be present in the table CREATE TABLE partitioned ( a int @@ -467,6 +467,11 @@ CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES FROM (1) TO (2); ERROR: invalid bound specification for a list partition LINE 1: ...BLE fail_part PARTITION OF list_parted FOR VALUES FROM (1) T... ^ +-- trying to specify modulus and remainder for list partitioned table +CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES WITH (MODULUS 10, REMAINDER 1); +ERROR: invalid bound specification for a list partition +LINE 1: ...BLE fail_part PARTITION OF list_parted FOR VALUES WITH (MODU... + ^ -- check default partition cannot be created more than once CREATE TABLE part_default PARTITION OF list_parted DEFAULT; CREATE TABLE fail_default_part PARTITION OF list_parted DEFAULT; @@ -509,6 +514,11 @@ CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES IN ('a'); ERROR: invalid bound specification for a range partition LINE 1: ...BLE fail_part PARTITION OF range_parted FOR VALUES IN ('a'); ^ +-- trying to specify modulus and remainder for range partitioned table +CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES WITH (MODULUS 10, REMAINDER 1); +ERROR: invalid bound specification for a range partition +LINE 1: ...LE fail_part PARTITION OF range_parted FOR VALUES WITH (MODU... + ^ -- each of start and end bounds must have same number of values as the -- length of the partition key CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES FROM ('a', 1) TO ('z'); @@ -518,6 +528,37 @@ ERROR: TO must specify exactly one value per partitioning column -- cannot specify null values in range bounds CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES FROM (null) TO (maxvalue); ERROR: cannot specify NULL in range bound +-- trying to specify modulus and remainder for range partitioned table +CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES WITH (MODULUS 10, REMAINDER 1); +ERROR: invalid bound specification for a range partition +LINE 1: ...LE fail_part PARTITION OF range_parted FOR VALUES WITH (MODU... 
+ ^ +-- check partition bound syntax for the hash partition +CREATE TABLE hash_parted ( + a int +) PARTITION BY HASH (a); +CREATE TABLE hpart_1 PARTITION OF hash_parted FOR VALUES WITH (MODULUS 10, REMAINDER 0); +CREATE TABLE hpart_2 PARTITION OF hash_parted FOR VALUES WITH (MODULUS 50, REMAINDER 1); +CREATE TABLE hpart_3 PARTITION OF hash_parted FOR VALUES WITH (MODULUS 200, REMAINDER 2); +-- modulus 25 is factor of modulus of 50 but 10 is not factor of 25. +CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS 25, REMAINDER 3); +ERROR: every hash partition modulus must be a factor of the next larger modulus +-- previous modulus 50 is factor of 150 but this modulus is not factor of next modulus 200. +CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS 150, REMAINDER 3); +ERROR: every hash partition modulus must be a factor of the next larger modulus +-- trying to specify range for the hash partitioned table +CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES FROM ('a', 1) TO ('z'); +ERROR: invalid bound specification for a hash partition +LINE 1: ...BLE fail_part PARTITION OF hash_parted FOR VALUES FROM ('a',... + ^ +-- trying to specify list value for the hash partitioned table +CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES IN (1000); +ERROR: invalid bound specification for a hash partition +LINE 1: ...BLE fail_part PARTITION OF hash_parted FOR VALUES IN (1000); + ^ +-- trying to create default partition for the hash partitioned table +CREATE TABLE fail_default_part PARTITION OF hash_parted DEFAULT; +ERROR: a hash-partitioned table may not have a default partition -- check if compatible with the specified parent -- cannot create as partition of a non-partitioned table CREATE TABLE unparted ( @@ -525,6 +566,8 @@ CREATE TABLE unparted ( ); CREATE TABLE fail_part PARTITION OF unparted FOR VALUES IN ('a'); ERROR: "unparted" is not partitioned +CREATE TABLE fail_part PARTITION OF unparted FOR VALUES WITH (MODULUS 2, REMAINDER 1); +ERROR: "unparted" is not partitioned DROP TABLE unparted; -- cannot create a permanent rel as partition of a temp rel CREATE TEMP TABLE temp_parted ( @@ -623,6 +666,23 @@ CREATE TABLE range3_default PARTITION OF range_parted3 DEFAULT; -- more specific ranges CREATE TABLE fail_part PARTITION OF range_parted3 FOR VALUES FROM (1, minvalue) TO (1, maxvalue); ERROR: partition "fail_part" would overlap partition "part10" +-- check for partition bound overlap and other invalid specifications for the hash partition +CREATE TABLE hash_parted2 ( + a varchar +) PARTITION BY HASH (a); +CREATE TABLE h2part_1 PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 4, REMAINDER 2); +CREATE TABLE h2part_2 PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 8, REMAINDER 0); +CREATE TABLE h2part_3 PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 8, REMAINDER 4); +CREATE TABLE h2part_4 PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 8, REMAINDER 5); +-- overlap with part_4 +CREATE TABLE fail_part PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 2, REMAINDER 1); +ERROR: partition "fail_part" would overlap partition "h2part_4" +-- modulus must be greater than zero +CREATE TABLE fail_part PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 0, REMAINDER 1); +ERROR: modulus for hash partition must be a positive integer +-- remainder must be greater than or equal to zero and less than modulus +CREATE TABLE fail_part PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 8, REMAINDER 8); +ERROR: remainder for hash partition must be 
less than modulus -- check schema propagation from parent CREATE TABLE parted ( a text, @@ -721,6 +781,14 @@ Check constraints: "check_a" CHECK (length(a) > 0) Number of partitions: 3 (Use \d+ to list them.) +\d hash_parted + Table "public.hash_parted" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | +Partition key: HASH (a) +Number of partitions: 3 (Use \d+ to list them.) + -- check that we get the expected partition constraints CREATE TABLE range_parted4 (a int, b int, c int) PARTITION BY RANGE (abs(a), abs(b), c); CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE); @@ -771,6 +839,8 @@ Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS N DROP TABLE range_parted4; -- cleanup DROP TABLE parted, list_parted, range_parted, list_parted2, range_parted2, range_parted3; +DROP TABLE hash_parted; +DROP TABLE hash_parted2; -- comments on partitioned tables columns CREATE TABLE parted_col_comment (a int, b text) PARTITION BY LIST (a); COMMENT ON TABLE parted_col_comment IS 'Am partitioned table'; diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out index b715619313..9d84ba4658 100644 --- a/src/test/regress/expected/insert.out +++ b/src/test/regress/expected/insert.out @@ -382,8 +382,54 @@ select tableoid::regclass::text, a, min(b) as min_b, max(b) as max_b from list_p part_null | | 1 | 1 (9 rows) +-- direct partition inserts should check hash partition bound constraint +-- create custom operator class and hash function, for the same reason +-- explained in alter_table.sql +create or replace function dummy_hashint4(a int4, seed int8) returns int8 as +$$ begin return (a + seed); end; $$ language 'plpgsql' immutable; +create operator class custom_opclass for type int4 using hash as +operator 1 = , function 2 dummy_hashint4(int4, int8); +create table hash_parted ( + a int +) partition by hash (a custom_opclass); +create table hpart0 partition of hash_parted for values with (modulus 4, remainder 0); +create table hpart1 partition of hash_parted for values with (modulus 4, remainder 1); +create table hpart2 partition of hash_parted for values with (modulus 4, remainder 2); +create table hpart3 partition of hash_parted for values with (modulus 4, remainder 3); +insert into hash_parted values(generate_series(1,10)); +-- direct insert of values divisible by 4 - ok; +insert into hpart0 values(12),(16); +-- fail; +insert into hpart0 values(11); +ERROR: new row for relation "hpart0" violates partition constraint +DETAIL: Failing row contains (11). +-- 11 % 4 -> 3 remainder i.e. 
valid data for hpart3 partition +insert into hpart3 values(11); +-- view data +select tableoid::regclass as part, a, a%4 as "remainder = a % 4" +from hash_parted order by part; + part | a | remainder = a % 4 +--------+----+------------------- + hpart0 | 4 | 0 + hpart0 | 8 | 0 + hpart0 | 12 | 0 + hpart0 | 16 | 0 + hpart1 | 1 | 1 + hpart1 | 5 | 1 + hpart1 | 9 | 1 + hpart2 | 2 | 2 + hpart2 | 6 | 2 + hpart2 | 10 | 2 + hpart3 | 3 | 3 + hpart3 | 7 | 3 + hpart3 | 11 | 3 +(13 rows) + -- cleanup drop table range_parted, list_parted; +drop table hash_parted; +drop operator class custom_opclass using hash; +drop function dummy_hashint4(a int4, seed int8); -- test that a default partition added as the first partition accepts any value -- including null create table list_parted (a int) partition by list (a); diff --git a/src/test/regress/expected/partition_join.out b/src/test/regress/expected/partition_join.out index adf6aedfa6..27ab8521f8 100644 --- a/src/test/regress/expected/partition_join.out +++ b/src/test/regress/expected/partition_join.out @@ -1256,6 +1256,87 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 One-Time Filter: false (14 rows) +-- +-- tests for hash partitioned tables. +-- +CREATE TABLE pht1 (a int, b int, c text) PARTITION BY HASH(c); +CREATE TABLE pht1_p1 PARTITION OF pht1 FOR VALUES WITH (MODULUS 3, REMAINDER 0); +CREATE TABLE pht1_p2 PARTITION OF pht1 FOR VALUES WITH (MODULUS 3, REMAINDER 1); +CREATE TABLE pht1_p3 PARTITION OF pht1 FOR VALUES WITH (MODULUS 3, REMAINDER 2); +INSERT INTO pht1 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE pht1; +CREATE TABLE pht2 (a int, b int, c text) PARTITION BY HASH(c); +CREATE TABLE pht2_p1 PARTITION OF pht2 FOR VALUES WITH (MODULUS 3, REMAINDER 0); +CREATE TABLE pht2_p2 PARTITION OF pht2 FOR VALUES WITH (MODULUS 3, REMAINDER 1); +CREATE TABLE pht2_p3 PARTITION OF pht2 FOR VALUES WITH (MODULUS 3, REMAINDER 2); +INSERT INTO pht2 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 3) i; +ANALYZE pht2; +-- +-- hash partitioned by expression +-- +CREATE TABLE pht1_e (a int, b int, c text) PARTITION BY HASH(ltrim(c, 'A')); +CREATE TABLE pht1_e_p1 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 0); +CREATE TABLE pht1_e_p2 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 1); +CREATE TABLE pht1_e_p3 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 2); +INSERT INTO pht1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE pht1_e; +-- test partition matching with N-way join +EXPLAIN (COSTS OFF) +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; + QUERY PLAN +-------------------------------------------------------------------------------------- + Sort + Sort Key: t1.c, t3.c + -> HashAggregate + Group Key: t1.c, t2.c, t3.c + -> Result + -> Append + -> Hash Join + Hash Cond: (t1.c = t2.c) + -> Seq Scan on pht1_p1 t1 + -> Hash + -> Hash Join + Hash Cond: (t2.c = ltrim(t3.c, 'A'::text)) + -> Seq Scan on pht2_p1 t2 + -> Hash + -> Seq Scan on pht1_e_p1 t3 + -> Hash Join + Hash Cond: (t1_1.c = t2_1.c) + -> Seq Scan on pht1_p2 t1_1 + -> Hash + -> Hash Join + Hash Cond: (t2_1.c = ltrim(t3_1.c, 'A'::text)) + -> Seq Scan on pht2_p2 t2_1 + -> Hash + -> Seq Scan on pht1_e_p2 t3_1 + -> Hash Join + Hash Cond: (t1_2.c = t2_2.c) + -> Seq Scan on pht1_p3 t1_2 + -> Hash + -> Hash 
Join + Hash Cond: (t2_2.c = ltrim(t3_2.c, 'A'::text)) + -> Seq Scan on pht2_p3 t2_2 + -> Hash + -> Seq Scan on pht1_e_p3 t3_2 +(33 rows) + +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; + avg | avg | avg | c | c | c +----------------------+----------------------+-----------------------+------+------+------- + 24.0000000000000000 | 24.0000000000000000 | 48.0000000000000000 | 0000 | 0000 | A0000 + 74.0000000000000000 | 75.0000000000000000 | 148.0000000000000000 | 0001 | 0001 | A0001 + 124.0000000000000000 | 124.5000000000000000 | 248.0000000000000000 | 0002 | 0002 | A0002 + 174.0000000000000000 | 174.0000000000000000 | 348.0000000000000000 | 0003 | 0003 | A0003 + 224.0000000000000000 | 225.0000000000000000 | 448.0000000000000000 | 0004 | 0004 | A0004 + 274.0000000000000000 | 274.5000000000000000 | 548.0000000000000000 | 0005 | 0005 | A0005 + 324.0000000000000000 | 324.0000000000000000 | 648.0000000000000000 | 0006 | 0006 | A0006 + 374.0000000000000000 | 375.0000000000000000 | 748.0000000000000000 | 0007 | 0007 | A0007 + 424.0000000000000000 | 424.5000000000000000 | 848.0000000000000000 | 0008 | 0008 | A0008 + 474.0000000000000000 | 474.0000000000000000 | 948.0000000000000000 | 0009 | 0009 | A0009 + 524.0000000000000000 | 525.0000000000000000 | 1048.0000000000000000 | 0010 | 0010 | A0010 + 574.0000000000000000 | 574.5000000000000000 | 1148.0000000000000000 | 0011 | 0011 | A0011 +(12 rows) + -- -- multiple levels of partitioning -- diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out index cef70b1a1e..a4fe96112e 100644 --- a/src/test/regress/expected/update.out +++ b/src/test/regress/expected/update.out @@ -250,6 +250,35 @@ ERROR: new row for relation "list_default" violates partition constraint DETAIL: Failing row contains (a, 10). -- ok update list_default set a = 'x' where a = 'd'; +-- create custom operator class and hash function, for the same reason +-- explained in alter_table.sql +create or replace function dummy_hashint4(a int4, seed int8) returns int8 as +$$ begin return (a + seed); end; $$ language 'plpgsql' immutable; +create operator class custom_opclass for type int4 using hash as +operator 1 = , function 2 dummy_hashint4(int4, int8); +create table hash_parted ( + a int, + b int +) partition by hash (a custom_opclass, b custom_opclass); +create table hpart1 partition of hash_parted for values with (modulus 2, remainder 1); +create table hpart2 partition of hash_parted for values with (modulus 4, remainder 2); +create table hpart3 partition of hash_parted for values with (modulus 8, remainder 0); +create table hpart4 partition of hash_parted for values with (modulus 8, remainder 4); +insert into hpart1 values (1, 1); +insert into hpart2 values (2, 5); +insert into hpart4 values (3, 4); +-- fail +update hpart1 set a = 3, b=4 where a = 1; +ERROR: new row for relation "hpart1" violates partition constraint +DETAIL: Failing row contains (3, 4). +update hash_parted set b = b - 1 where b = 1; +ERROR: new row for relation "hpart1" violates partition constraint +DETAIL: Failing row contains (1, 0). 
+-- ok
+update hash_parted set b = b + 8 where b = 1;
 -- cleanup
 drop table range_parted;
 drop table list_parted;
+drop table hash_parted;
+drop operator class custom_opclass using hash;
+drop function dummy_hashint4(a int4, seed int8);
diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql
index 4ae4c2ecee..02a33ca7c4 100644
--- a/src/test/regress/sql/alter_table.sql
+++ b/src/test/regress/sql/alter_table.sql
@@ -2139,6 +2139,7 @@ SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = 'part_1'::reg
 -- check that the new partition won't overlap with an existing partition
 CREATE TABLE fail_part (LIKE part_1 INCLUDING CONSTRAINTS);
 ALTER TABLE list_parted ATTACH PARTITION fail_part FOR VALUES IN (1);
+DROP TABLE fail_part;
 -- check that an existing table can be attached as a default partition
 CREATE TABLE def_part (LIKE list_parted INCLUDING CONSTRAINTS);
 ALTER TABLE list_parted ATTACH PARTITION def_part DEFAULT;
@@ -2332,6 +2333,62 @@ CREATE TABLE quuux1 PARTITION OF quuux FOR VALUES IN (1);
 CREATE TABLE quuux2 PARTITION OF quuux FOR VALUES IN (2);
 DROP TABLE quuux;
 
+-- check validation when attaching hash partitions
+
+-- The default hash functions as they exist today aren't portable; they can
+-- return different results on different machines.  Depending upon how the
+-- values are hashed, the row may map to different partitions, which results in
+-- regression failures.  To avoid this, let's create a non-default hash function
+-- whose result is predictable on every machine.
+CREATE OR REPLACE FUNCTION dummy_hashint4(a int4, seed int8) RETURNS int8 AS
+$$ BEGIN RETURN (a + 1 + seed); END; $$ LANGUAGE 'plpgsql' IMMUTABLE;
+CREATE OPERATOR CLASS custom_opclass FOR TYPE int4 USING HASH AS
+OPERATOR 1 = , FUNCTION 2 dummy_hashint4(int4, int8);
+
+-- check that the new partition won't overlap with an existing partition
+CREATE TABLE hash_parted (
+	a int,
+	b int
+) PARTITION BY HASH (a custom_opclass);
+CREATE TABLE hpart_1 PARTITION OF hash_parted FOR VALUES WITH (MODULUS 4, REMAINDER 0);
+CREATE TABLE fail_part (LIKE hpart_1);
+ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 8, REMAINDER 4);
+ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 8, REMAINDER 0);
+DROP TABLE fail_part;
+
+-- check validation when attaching hash partitions
+
+-- check that violating rows are correctly reported
+CREATE TABLE hpart_2 (LIKE hash_parted);
+INSERT INTO hpart_2 VALUES (3, 0);
+ALTER TABLE hash_parted ATTACH PARTITION hpart_2 FOR VALUES WITH (MODULUS 4, REMAINDER 1);
+
+-- should be ok after deleting the bad row
+DELETE FROM hpart_2;
+ALTER TABLE hash_parted ATTACH PARTITION hpart_2 FOR VALUES WITH (MODULUS 4, REMAINDER 1);
+
+-- check that leaf partitions are scanned when attaching a partitioned
+-- table
+CREATE TABLE hpart_5 (
+	LIKE hash_parted
+) PARTITION BY LIST (b);
+
+-- check that violating rows are correctly reported
+CREATE TABLE hpart_5_a PARTITION OF hpart_5 FOR VALUES IN ('1', '2', '3');
+INSERT INTO hpart_5_a (a, b) VALUES (7, 1);
+ALTER TABLE hash_parted ATTACH PARTITION hpart_5 FOR VALUES WITH (MODULUS 4, REMAINDER 2);
+
+-- should be ok after deleting the bad row
+DELETE FROM hpart_5_a;
+ALTER TABLE hash_parted ATTACH PARTITION hpart_5 FOR VALUES WITH (MODULUS 4, REMAINDER 2);
+
+-- check that the table being attached has a valid modulus and remainder value
+CREATE TABLE fail_part(LIKE hash_parted);
+ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 0, REMAINDER
1); +ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 8, REMAINDER 8); +ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH (MODULUS 3, REMAINDER 2); +DROP TABLE fail_part; + -- -- DETACH PARTITION -- @@ -2343,12 +2400,16 @@ DROP TABLE regular_table; -- check that the partition being detached exists at all ALTER TABLE list_parted2 DETACH PARTITION part_4; +ALTER TABLE hash_parted DETACH PARTITION hpart_4; -- check that the partition being detached is actually a partition of the parent CREATE TABLE not_a_part (a int); ALTER TABLE list_parted2 DETACH PARTITION not_a_part; ALTER TABLE list_parted2 DETACH PARTITION part_1; +ALTER TABLE hash_parted DETACH PARTITION not_a_part; +DROP TABLE not_a_part; + -- check that, after being detached, attinhcount/coninhcount is dropped to 0 and -- attislocal/conislocal is set to true ALTER TABLE list_parted2 DETACH PARTITION part_3_4; @@ -2425,6 +2486,9 @@ SELECT * FROM list_parted; -- cleanup DROP TABLE list_parted, list_parted2, range_parted; DROP TABLE fail_def_part; +DROP TABLE hash_parted; +DROP OPERATOR CLASS custom_opclass USING HASH; +DROP FUNCTION dummy_hashint4(a int4, seed int8); -- more tests for certain multi-level partitioning scenarios create table p (a int, b int) partition by range (a, b); diff --git a/src/test/regress/sql/create_table.sql b/src/test/regress/sql/create_table.sql index df6a6d7326..b77b476436 100644 --- a/src/test/regress/sql/create_table.sql +++ b/src/test/regress/sql/create_table.sql @@ -350,10 +350,10 @@ CREATE TABLE partitioned ( ) PARTITION BY RANGE (const_func()); DROP FUNCTION const_func(); --- only accept "list" and "range" as partitioning strategy +-- only accept valid partitioning strategy CREATE TABLE partitioned ( - a int -) PARTITION BY HASH (a); + a int +) PARTITION BY MAGIC (a); -- specified column must be present in the table CREATE TABLE partitioned ( @@ -446,6 +446,8 @@ CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES IN ('1'::int); CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES IN (); -- trying to specify range for list partitioned table CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES FROM (1) TO (2); +-- trying to specify modulus and remainder for list partitioned table +CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES WITH (MODULUS 10, REMAINDER 1); -- check default partition cannot be created more than once CREATE TABLE part_default PARTITION OF list_parted DEFAULT; @@ -481,6 +483,8 @@ CREATE TABLE range_parted ( -- trying to specify list for range partitioned table CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES IN ('a'); +-- trying to specify modulus and remainder for range partitioned table +CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES WITH (MODULUS 10, REMAINDER 1); -- each of start and end bounds must have same number of values as the -- length of the partition key CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES FROM ('a', 1) TO ('z'); @@ -489,6 +493,28 @@ CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES FROM ('a') TO ('z', -- cannot specify null values in range bounds CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES FROM (null) TO (maxvalue); +-- trying to specify modulus and remainder for range partitioned table +CREATE TABLE fail_part PARTITION OF range_parted FOR VALUES WITH (MODULUS 10, REMAINDER 1); + +-- check partition bound syntax for the hash partition +CREATE TABLE hash_parted ( + a int +) PARTITION BY HASH (a); +CREATE TABLE hpart_1 PARTITION 
OF hash_parted FOR VALUES WITH (MODULUS 10, REMAINDER 0);
+CREATE TABLE hpart_2 PARTITION OF hash_parted FOR VALUES WITH (MODULUS 50, REMAINDER 1);
+CREATE TABLE hpart_3 PARTITION OF hash_parted FOR VALUES WITH (MODULUS 200, REMAINDER 2);
+-- modulus 25 is a factor of the existing modulus 50, but 10 is not a factor of 25.
+CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS 25, REMAINDER 3);
+-- the previous modulus 50 is a factor of 150, but 150 is not a factor of the next modulus 200.
+CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS 150, REMAINDER 3);
+-- trying to specify range for the hash partitioned table
+CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES FROM ('a', 1) TO ('z');
+-- trying to specify list value for the hash partitioned table
+CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES IN (1000);
+
+-- trying to create default partition for the hash partitioned table
+CREATE TABLE fail_default_part PARTITION OF hash_parted DEFAULT;
+
 -- check if compatible with the specified parent
 
 -- cannot create as partition of a non-partitioned table
@@ -496,6 +522,7 @@ CREATE TABLE unparted (
 	a int
 );
 CREATE TABLE fail_part PARTITION OF unparted FOR VALUES IN ('a');
+CREATE TABLE fail_part PARTITION OF unparted FOR VALUES WITH (MODULUS 2, REMAINDER 1);
 DROP TABLE unparted;
 
 -- cannot create a permanent rel as partition of a temp rel
@@ -585,6 +612,21 @@ CREATE TABLE range3_default PARTITION OF range_parted3 DEFAULT;
 -- more specific ranges
 CREATE TABLE fail_part PARTITION OF range_parted3 FOR VALUES FROM (1, minvalue) TO (1, maxvalue);
 
+-- check for partition bound overlap and other invalid specifications for the hash partition
+CREATE TABLE hash_parted2 (
+	a varchar
+) PARTITION BY HASH (a);
+CREATE TABLE h2part_1 PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 4, REMAINDER 2);
+CREATE TABLE h2part_2 PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 8, REMAINDER 0);
+CREATE TABLE h2part_3 PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 8, REMAINDER 4);
+CREATE TABLE h2part_4 PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 8, REMAINDER 5);
+-- overlap with h2part_4
+CREATE TABLE fail_part PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 2, REMAINDER 1);
+-- modulus must be greater than zero
+CREATE TABLE fail_part PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 0, REMAINDER 1);
+-- remainder must be greater than or equal to zero and less than modulus
+CREATE TABLE fail_part PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS 8, REMAINDER 8);
+
 -- check schema propagation from parent
 
 CREATE TABLE parted (
@@ -638,6 +680,7 @@ CREATE TABLE part_c_1_10 PARTITION OF part_c FOR VALUES FROM (1) TO (10);
 
 -- output could vary depending on the order in which partition oids are
 -- returned.
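The h2part_4 overlap above is plain modular arithmetic: two hash bounds (MODULUS m1, REMAINDER r1) and (MODULUS m2, REMAINDER r2) with m1 dividing m2 overlap exactly when r2 % m1 = r1, which is why every modulus must be a factor of the next larger one. A condensed sketch of the same failure, using throwaway names that are not part of the patch:

CREATE TABLE hash_demo (a int) PARTITION BY HASH (a);
CREATE TABLE hash_demo_p1 PARTITION OF hash_demo FOR VALUES WITH (MODULUS 8, REMAINDER 5);
-- fails: h % 2 = 1 covers the mod-8 remainders 1, 3, 5 and 7, and
-- remainder 5 is already assigned to hash_demo_p1
CREATE TABLE hash_demo_p2 PARTITION OF hash_demo FOR VALUES WITH (MODULUS 2, REMAINDER 1);
DROP TABLE hash_demo;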
\d parted +\d hash_parted -- check that we get the expected partition constraints CREATE TABLE range_parted4 (a int, b int, c int) PARTITION BY RANGE (abs(a), abs(b), c); @@ -654,6 +697,8 @@ DROP TABLE range_parted4; -- cleanup DROP TABLE parted, list_parted, range_parted, list_parted2, range_parted2, range_parted3; +DROP TABLE hash_parted; +DROP TABLE hash_parted2; -- comments on partitioned tables columns CREATE TABLE parted_col_comment (a int, b text) PARTITION BY LIST (a); diff --git a/src/test/regress/sql/insert.sql b/src/test/regress/sql/insert.sql index d741514414..791817ba50 100644 --- a/src/test/regress/sql/insert.sql +++ b/src/test/regress/sql/insert.sql @@ -222,8 +222,41 @@ insert into list_parted select 'gg', s.a from generate_series(1, 9) s(a); insert into list_parted (b) values (1); select tableoid::regclass::text, a, min(b) as min_b, max(b) as max_b from list_parted group by 1, 2 order by 1; +-- direct partition inserts should check hash partition bound constraint + +-- create custom operator class and hash function, for the same reason +-- explained in alter_table.sql +create or replace function dummy_hashint4(a int4, seed int8) returns int8 as +$$ begin return (a + seed); end; $$ language 'plpgsql' immutable; +create operator class custom_opclass for type int4 using hash as +operator 1 = , function 2 dummy_hashint4(int4, int8); + +create table hash_parted ( + a int +) partition by hash (a custom_opclass); +create table hpart0 partition of hash_parted for values with (modulus 4, remainder 0); +create table hpart1 partition of hash_parted for values with (modulus 4, remainder 1); +create table hpart2 partition of hash_parted for values with (modulus 4, remainder 2); +create table hpart3 partition of hash_parted for values with (modulus 4, remainder 3); + +insert into hash_parted values(generate_series(1,10)); + +-- direct insert of values divisible by 4 - ok; +insert into hpart0 values(12),(16); +-- fail; +insert into hpart0 values(11); +-- 11 % 4 -> 3 remainder i.e. valid data for hpart3 partition +insert into hpart3 values(11); + +-- view data +select tableoid::regclass as part, a, a%4 as "remainder = a % 4" +from hash_parted order by part; + -- cleanup drop table range_parted, list_parted; +drop table hash_parted; +drop operator class custom_opclass using hash; +drop function dummy_hashint4(a int4, seed int8); -- test that a default partition added as the first partition accepts any value -- including null diff --git a/src/test/regress/sql/partition_join.sql b/src/test/regress/sql/partition_join.sql index 25abf2dc13..6efdf3c517 100644 --- a/src/test/regress/sql/partition_join.sql +++ b/src/test/regress/sql/partition_join.sql @@ -229,6 +229,38 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1 WHERE a = 1 AND a = 2) t1 FULL JOIN prt2 t2 ON t1.a = t2.b WHERE t2.a = 0 ORDER BY t1.a, t2.b; +-- +-- tests for hash partitioned tables. 
+-- +CREATE TABLE pht1 (a int, b int, c text) PARTITION BY HASH(c); +CREATE TABLE pht1_p1 PARTITION OF pht1 FOR VALUES WITH (MODULUS 3, REMAINDER 0); +CREATE TABLE pht1_p2 PARTITION OF pht1 FOR VALUES WITH (MODULUS 3, REMAINDER 1); +CREATE TABLE pht1_p3 PARTITION OF pht1 FOR VALUES WITH (MODULUS 3, REMAINDER 2); +INSERT INTO pht1 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE pht1; + +CREATE TABLE pht2 (a int, b int, c text) PARTITION BY HASH(c); +CREATE TABLE pht2_p1 PARTITION OF pht2 FOR VALUES WITH (MODULUS 3, REMAINDER 0); +CREATE TABLE pht2_p2 PARTITION OF pht2 FOR VALUES WITH (MODULUS 3, REMAINDER 1); +CREATE TABLE pht2_p3 PARTITION OF pht2 FOR VALUES WITH (MODULUS 3, REMAINDER 2); +INSERT INTO pht2 SELECT i, i, to_char(i/50, 'FM0000') FROM generate_series(0, 599, 3) i; +ANALYZE pht2; + +-- +-- hash partitioned by expression +-- +CREATE TABLE pht1_e (a int, b int, c text) PARTITION BY HASH(ltrim(c, 'A')); +CREATE TABLE pht1_e_p1 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 0); +CREATE TABLE pht1_e_p2 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 1); +CREATE TABLE pht1_e_p3 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 2); +INSERT INTO pht1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +ANALYZE pht1_e; + +-- test partition matching with N-way join +EXPLAIN (COSTS OFF) +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; + -- -- multiple levels of partitioning -- diff --git a/src/test/regress/sql/update.sql b/src/test/regress/sql/update.sql index 66d1feca10..0c70d64a89 100644 --- a/src/test/regress/sql/update.sql +++ b/src/test/regress/sql/update.sql @@ -148,6 +148,34 @@ update list_default set a = 'a' where a = 'd'; -- ok update list_default set a = 'x' where a = 'd'; +-- create custom operator class and hash function, for the same reason +-- explained in alter_table.sql +create or replace function dummy_hashint4(a int4, seed int8) returns int8 as +$$ begin return (a + seed); end; $$ language 'plpgsql' immutable; +create operator class custom_opclass for type int4 using hash as +operator 1 = , function 2 dummy_hashint4(int4, int8); + +create table hash_parted ( + a int, + b int +) partition by hash (a custom_opclass, b custom_opclass); +create table hpart1 partition of hash_parted for values with (modulus 2, remainder 1); +create table hpart2 partition of hash_parted for values with (modulus 4, remainder 2); +create table hpart3 partition of hash_parted for values with (modulus 8, remainder 0); +create table hpart4 partition of hash_parted for values with (modulus 8, remainder 4); +insert into hpart1 values (1, 1); +insert into hpart2 values (2, 5); +insert into hpart4 values (3, 4); + +-- fail +update hpart1 set a = 3, b=4 where a = 1; +update hash_parted set b = b - 1 where b = 1; +-- ok +update hash_parted set b = b + 8 where b = 1; + -- cleanup drop table range_parted; drop table list_parted; +drop table hash_parted; +drop operator class custom_opclass using hash; +drop function dummy_hashint4(a int4, seed int8); diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index 7f0ae978c1..61aeb51c29 100644 --- 
a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1565,6 +1565,7 @@ PartitionDispatch
 PartitionDispatchData
 PartitionElem
 PartitionKey
+PartitionHashBound
 PartitionListValue
 PartitionRangeBound
 PartitionRangeDatum

From 9a8d3c4eeaf34966056a41a263c6e2ca4d5e4012 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 9 Nov 2017 17:06:32 -0500
Subject: [PATCH 0519/1087] Add -wnet to SP invocations

This causes a warning when accidentally backpatching an XML-style
empty-element tag.
---
 doc/src/sgml/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile
index bcfd7db0bd..df77a142e4 100644
--- a/doc/src/sgml/Makefile
+++ b/doc/src/sgml/Makefile
@@ -66,7 +66,8 @@ ALLSGML := $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml) $(GENERATED_SGML)
 # Enable some extra warnings
 # -wfully-tagged needed to throw a warning on missing tags
 # for older tool chains, 2007-08-31
-override SPFLAGS += -wall -wno-unused-param -wfully-tagged
+# -wnet catches XML-style empty-element tags
+override SPFLAGS += -wall -wno-unused-param -wfully-tagged -wnet
 # Additional warnings for XML compatibility.  The conditional is meant
 # to detect whether we are using OpenSP rather than the ancient
 # original SP.

From b9941d3468505aea8bfdd74840b753ed27b9d29f Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Fri, 10 Nov 2017 10:55:09 -0500
Subject: [PATCH 0520/1087] Fix incorrect comment.

Etsuro Fujita

Discussion: http://postgr.es/m/5A05728E.4050009@lab.ntt.co.jp
---
 src/backend/optimizer/util/relnode.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c
index 3bd1063aa8..cb94c318a7 100644
--- a/src/backend/optimizer/util/relnode.c
+++ b/src/backend/optimizer/util/relnode.c
@@ -676,8 +676,7 @@ build_join_rel(PlannerInfo *root,
  * 'sjinfo': child-join context info
  * 'restrictlist': list of RestrictInfo nodes that apply to this particular
  *		pair of joinable relations
- * 'join_appinfos': list of AppendRelInfo nodes for base child relations
- *		involved in this join
+ * 'jointype' is the join type (inner, left, full, etc)
  */
 RelOptInfo *
 build_child_join_rel(PlannerInfo *root, RelOptInfo *outer_rel,

From 7e60e678615b1f803ac73faee71cca79ec310d1d Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Fri, 10 Nov 2017 12:30:01 -0500
Subject: [PATCH 0521/1087] Tighten test in contrib/bloom/t/001_wal.pl.

Make bloom WAL test compare psql output text, not just result codes;
this was evidently the intent all along, but it was mis-coded.
In passing, make sure we will notice any failure in setup steps.
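The change matters because a result code only says whether psql itself failed: a standby that replays bloom WAL incorrectly still executes the comparison queries cleanly and differs only in the text it prints. A hedged SQL illustration (the literal queries run by test_index_replay are not quoted here):

SELECT count(*) FROM tst WHERE i = 0;
-- Freshly loaded, i is generate_series(1,100000) % 10, so the master prints
-- 10000.  A standby whose bloom index was replayed incorrectly can print a
-- different count while still exiting with status 0, which only an
-- output-text comparison via safe_psql will notice.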
Alexander Korotkov, reviewed by Michael Paquier and Masahiko Sawada Discussion: https://postgr.es/m/CAPpHfdtohPdQ9rc5mdWjxq+3VsBNw534KV_5O65dTQrSdVJNgw@mail.gmail.com --- contrib/bloom/t/001_wal.pl | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/contrib/bloom/t/001_wal.pl b/contrib/bloom/t/001_wal.pl index dbba198254..1b319c993c 100644 --- a/contrib/bloom/t/001_wal.pl +++ b/contrib/bloom/t/001_wal.pl @@ -32,8 +32,8 @@ sub test_index_replay ); # Run test queries and compare their result - my $master_result = $node_master->psql("postgres", $queries); - my $standby_result = $node_standby->psql("postgres", $queries); + my $master_result = $node_master->safe_psql("postgres", $queries); + my $standby_result = $node_standby->safe_psql("postgres", $queries); is($master_result, $standby_result, "$test_name: query result matches"); } @@ -54,12 +54,12 @@ sub test_index_replay $node_standby->start; # Create some bloom index on master -$node_master->psql("postgres", "CREATE EXTENSION bloom;"); -$node_master->psql("postgres", "CREATE TABLE tst (i int4, t text);"); -$node_master->psql("postgres", +$node_master->safe_psql("postgres", "CREATE EXTENSION bloom;"); +$node_master->safe_psql("postgres", "CREATE TABLE tst (i int4, t text);"); +$node_master->safe_psql("postgres", "INSERT INTO tst SELECT i%10, substr(md5(i::text), 1, 1) FROM generate_series(1,100000) i;" ); -$node_master->psql("postgres", +$node_master->safe_psql("postgres", "CREATE INDEX bloomidx ON tst USING bloom (i, t) WITH (col1 = 3);"); # Test that queries give same result @@ -68,12 +68,12 @@ sub test_index_replay # Run 10 cycles of table modification. Run test queries after each modification. for my $i (1 .. 10) { - $node_master->psql("postgres", "DELETE FROM tst WHERE i = $i;"); + $node_master->safe_psql("postgres", "DELETE FROM tst WHERE i = $i;"); test_index_replay("delete $i"); - $node_master->psql("postgres", "VACUUM tst;"); + $node_master->safe_psql("postgres", "VACUUM tst;"); test_index_replay("vacuum $i"); my ($start, $end) = (100001 + ($i - 1) * 10000, 100000 + $i * 10000); - $node_master->psql("postgres", + $node_master->safe_psql("postgres", "INSERT INTO tst SELECT i%10, substr(md5(i::text), 1, 1) FROM generate_series($start,$end) i;" ); test_index_replay("insert $i"); From 0e1539ba0d0a43de06c6e0572a565e73b9472538 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 31 Oct 2017 10:34:31 -0400 Subject: [PATCH 0522/1087] Add some const decorations to prototypes Reviewed-by: Fabien COELHO --- contrib/dict_xsyn/dict_xsyn.c | 2 +- contrib/fuzzystrmatch/dmetaphone.c | 4 +-- contrib/pgcrypto/pgcrypto.c | 4 +-- contrib/seg/seg.c | 4 +-- contrib/seg/segdata.h | 2 +- contrib/seg/segparse.y | 4 +-- contrib/unaccent/unaccent.c | 2 +- contrib/uuid-ossp/uuid-ossp.c | 2 +- src/backend/access/common/reloptions.c | 12 +++---- src/backend/access/gist/gistbuild.c | 2 +- src/backend/access/transam/xact.c | 6 ++-- src/backend/access/transam/xlogarchive.c | 4 +-- src/backend/catalog/heap.c | 10 +++--- src/backend/commands/comment.c | 4 +-- src/backend/commands/event_trigger.c | 4 +-- src/backend/commands/extension.c | 4 +-- src/backend/commands/indexcmds.c | 8 ++--- src/backend/commands/opclasscmds.c | 2 +- src/backend/commands/tablecmds.c | 16 +++++----- src/backend/commands/typecmds.c | 6 ++-- src/backend/commands/view.c | 2 +- src/backend/libpq/auth.c | 24 +++++++------- src/backend/libpq/hba.c | 6 ++-- src/backend/parser/parse_expr.c | 2 +- src/backend/parser/parse_func.c | 4 +-- 
src/backend/parser/parse_relation.c | 8 ++--- src/backend/parser/parse_target.c | 2 +- src/backend/port/dynloader/darwin.c | 8 ++--- src/backend/port/dynloader/darwin.h | 4 +-- src/backend/port/dynloader/hpux.c | 4 +-- src/backend/port/dynloader/hpux.h | 4 +-- src/backend/port/dynloader/linux.c | 4 +-- src/backend/postmaster/postmaster.c | 2 +- src/backend/replication/basebackup.c | 8 ++--- src/backend/rewrite/rewriteDefine.c | 4 +-- src/backend/snowball/dict_snowball.c | 2 +- src/backend/storage/lmgr/lwlock.c | 8 ++--- src/backend/tsearch/dict_thesaurus.c | 2 +- src/backend/tsearch/spell.c | 4 +-- src/backend/utils/adt/genfile.c | 2 +- src/backend/utils/adt/ruleutils.c | 4 +-- src/backend/utils/adt/varlena.c | 2 +- src/backend/utils/adt/xml.c | 32 +++++++++---------- src/bin/initdb/initdb.c | 12 +++---- src/bin/pg_dump/pg_backup_db.c | 2 +- src/bin/pg_dump/pg_backup_db.h | 2 +- src/bin/pg_rewind/fetch.c | 2 +- src/bin/pg_rewind/fetch.h | 2 +- src/bin/pg_upgrade/option.c | 6 ++-- src/bin/pg_upgrade/pg_upgrade.c | 4 +-- src/bin/pg_waldump/pg_waldump.c | 2 +- src/bin/pgbench/pgbench.c | 4 +-- src/include/access/gist_private.h | 2 +- src/include/access/reloptions.h | 14 ++++---- src/include/access/xact.h | 6 ++-- src/include/access/xlog_internal.h | 4 +-- src/include/catalog/heap.h | 2 +- src/include/commands/comment.h | 4 +-- src/include/commands/defrem.h | 4 +-- src/include/commands/typecmds.h | 2 +- src/include/commands/view.h | 2 +- src/include/executor/tablefunc.h | 8 ++--- src/include/parser/parse_relation.h | 6 ++-- src/include/parser/parse_target.h | 2 +- src/include/postmaster/bgworker.h | 2 +- src/include/rewrite/rewriteDefine.h | 2 +- src/include/storage/lwlock.h | 2 +- src/include/utils/dynamic_loader.h | 4 +-- src/include/utils/varlena.h | 2 +- src/include/utils/xml.h | 6 ++-- src/interfaces/ecpg/compatlib/informix.c | 14 ++++---- src/interfaces/ecpg/ecpglib/misc.c | 20 ++++++------ src/interfaces/ecpg/include/ecpg_informix.h | 12 +++---- src/interfaces/ecpg/include/ecpglib.h | 2 +- src/interfaces/ecpg/include/pgtypes_date.h | 2 +- .../ecpg/include/pgtypes_timestamp.h | 2 +- src/interfaces/ecpg/pgtypeslib/datetime.c | 2 +- src/interfaces/ecpg/pgtypeslib/interval.c | 4 +-- src/interfaces/ecpg/pgtypeslib/timestamp.c | 2 +- src/interfaces/ecpg/preproc/type.c | 2 +- src/interfaces/ecpg/preproc/type.h | 2 +- .../ecpg/test/compat_informix/rfmtdate.pgc | 6 ++-- .../ecpg/test/compat_informix/rfmtlong.pgc | 2 +- .../test/compat_informix/test_informix2.pgc | 2 +- .../test/expected/compat_informix-rfmtdate.c | 6 ++-- .../test/expected/compat_informix-rfmtlong.c | 2 +- .../expected/compat_informix-test_informix2.c | 2 +- .../ecpg/test/expected/preproc-init.c | 2 +- .../ecpg/test/expected/preproc-whenever.c | 2 +- src/interfaces/ecpg/test/preproc/init.pgc | 2 +- src/interfaces/ecpg/test/preproc/whenever.pgc | 2 +- src/interfaces/libpq/fe-connect.c | 16 +++++----- src/pl/plperl/plperl.c | 4 +-- src/test/regress/pg_regress.c | 4 +-- src/test/regress/pg_regress.h | 2 +- 95 files changed, 236 insertions(+), 236 deletions(-) diff --git a/contrib/dict_xsyn/dict_xsyn.c b/contrib/dict_xsyn/dict_xsyn.c index fcf541ee0f..977162951a 100644 --- a/contrib/dict_xsyn/dict_xsyn.c +++ b/contrib/dict_xsyn/dict_xsyn.c @@ -70,7 +70,7 @@ compare_syn(const void *a, const void *b) } static void -read_dictionary(DictSyn *d, char *filename) +read_dictionary(DictSyn *d, const char *filename) { char *real_filename = get_tsearch_config_filename(filename, "rules"); tsearch_readline_state trst; diff --git 
a/contrib/fuzzystrmatch/dmetaphone.c b/contrib/fuzzystrmatch/dmetaphone.c index 918ee0d90e..16e4c66167 100644 --- a/contrib/fuzzystrmatch/dmetaphone.c +++ b/contrib/fuzzystrmatch/dmetaphone.c @@ -232,7 +232,7 @@ metastring; */ static metastring * -NewMetaString(char *init_str) +NewMetaString(const char *init_str) { metastring *s; char empty_string[] = ""; @@ -375,7 +375,7 @@ StringAt(metastring *s, int start, int length,...) static void -MetaphAdd(metastring *s, char *new_str) +MetaphAdd(metastring *s, const char *new_str) { int add_length; diff --git a/contrib/pgcrypto/pgcrypto.c b/contrib/pgcrypto/pgcrypto.c index e09f3378da..de09ececcf 100644 --- a/contrib/pgcrypto/pgcrypto.c +++ b/contrib/pgcrypto/pgcrypto.c @@ -47,7 +47,7 @@ PG_MODULE_MAGIC; /* private stuff */ typedef int (*PFN) (const char *name, void **res); -static void *find_provider(text *name, PFN pf, char *desc, int silent); +static void *find_provider(text *name, PFN pf, const char *desc, int silent); /* SQL function: hash(bytea, text) returns bytea */ PG_FUNCTION_INFO_V1(pg_digest); @@ -474,7 +474,7 @@ pg_random_uuid(PG_FUNCTION_ARGS) static void * find_provider(text *name, PFN provider_lookup, - char *desc, int silent) + const char *desc, int silent) { void *res; char *buf; diff --git a/contrib/seg/seg.c b/contrib/seg/seg.c index 8bac4d50d0..4e34fba7c7 100644 --- a/contrib/seg/seg.c +++ b/contrib/seg/seg.c @@ -1052,9 +1052,9 @@ restore(char *result, float val, int n) * a floating point number */ int -significant_digits(char *s) +significant_digits(const char *s) { - char *p = s; + const char *p = s; int n, c, zeroes; diff --git a/contrib/seg/segdata.h b/contrib/seg/segdata.h index cac68ee2b2..9488bf3a81 100644 --- a/contrib/seg/segdata.h +++ b/contrib/seg/segdata.h @@ -12,7 +12,7 @@ typedef struct SEG } SEG; /* in seg.c */ -extern int significant_digits(char *str); +extern int significant_digits(const char *str); /* in segscan.l */ extern int seg_yylex(void); diff --git a/contrib/seg/segparse.y b/contrib/seg/segparse.y index 045ff91f3e..040cab3904 100644 --- a/contrib/seg/segparse.y +++ b/contrib/seg/segparse.y @@ -21,7 +21,7 @@ #define YYMALLOC palloc #define YYFREE pfree -static float seg_atof(char *value); +static float seg_atof(const char *value); static char strbuf[25] = { '0', '0', '0', '0', '0', @@ -151,7 +151,7 @@ deviation: SEGFLOAT static float -seg_atof(char *value) +seg_atof(const char *value) { Datum datum; diff --git a/contrib/unaccent/unaccent.c b/contrib/unaccent/unaccent.c index e08cca1707..e68b098b78 100644 --- a/contrib/unaccent/unaccent.c +++ b/contrib/unaccent/unaccent.c @@ -90,7 +90,7 @@ placeChar(TrieChar *node, const unsigned char *str, int lenstr, * Function converts UTF8-encoded file into current encoding. 
*/ static TrieChar * -initTrie(char *filename) +initTrie(const char *filename) { TrieChar *volatile rootTrie = NULL; MemoryContext ccxt = CurrentMemoryContext; diff --git a/contrib/uuid-ossp/uuid-ossp.c b/contrib/uuid-ossp/uuid-ossp.c index fce4bc9140..151223a199 100644 --- a/contrib/uuid-ossp/uuid-ossp.c +++ b/contrib/uuid-ossp/uuid-ossp.c @@ -253,7 +253,7 @@ uuid_generate_v35_internal(int mode, pg_uuid_t *ns, text *name) #else /* !HAVE_UUID_OSSP */ static Datum -uuid_generate_internal(int v, unsigned char *ns, char *ptr, int len) +uuid_generate_internal(int v, unsigned char *ns, const char *ptr, int len) { char strbuf[40]; diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c index ec10762529..3d0ce9af6f 100644 --- a/src/backend/access/common/reloptions.c +++ b/src/backend/access/common/reloptions.c @@ -582,7 +582,7 @@ add_reloption(relopt_gen *newoption) * (for types other than string) */ static relopt_gen * -allocate_reloption(bits32 kinds, int type, char *name, char *desc) +allocate_reloption(bits32 kinds, int type, const char *name, const char *desc) { MemoryContext oldcxt; size_t size; @@ -630,7 +630,7 @@ allocate_reloption(bits32 kinds, int type, char *name, char *desc) * Add a new boolean reloption */ void -add_bool_reloption(bits32 kinds, char *name, char *desc, bool default_val) +add_bool_reloption(bits32 kinds, const char *name, const char *desc, bool default_val) { relopt_bool *newoption; @@ -646,7 +646,7 @@ add_bool_reloption(bits32 kinds, char *name, char *desc, bool default_val) * Add a new integer reloption */ void -add_int_reloption(bits32 kinds, char *name, char *desc, int default_val, +add_int_reloption(bits32 kinds, const char *name, const char *desc, int default_val, int min_val, int max_val) { relopt_int *newoption; @@ -665,7 +665,7 @@ add_int_reloption(bits32 kinds, char *name, char *desc, int default_val, * Add a new float reloption */ void -add_real_reloption(bits32 kinds, char *name, char *desc, double default_val, +add_real_reloption(bits32 kinds, const char *name, const char *desc, double default_val, double min_val, double max_val) { relopt_real *newoption; @@ -689,7 +689,7 @@ add_real_reloption(bits32 kinds, char *name, char *desc, double default_val, * the validation. */ void -add_string_reloption(bits32 kinds, char *name, char *desc, char *default_val, +add_string_reloption(bits32 kinds, const char *name, const char *desc, const char *default_val, validate_string_relopt validator) { relopt_string *newoption; @@ -742,7 +742,7 @@ add_string_reloption(bits32 kinds, char *name, char *desc, char *default_val, * but we declare them as Datums to avoid including array.h in reloptions.h. */ Datum -transformRelOptions(Datum oldOptions, List *defList, char *namspace, +transformRelOptions(Datum oldOptions, List *defList, const char *namspace, char *validnsps[], bool ignoreOids, bool isReset) { Datum result; diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c index b4cb364869..2415f00e06 100644 --- a/src/backend/access/gist/gistbuild.c +++ b/src/backend/access/gist/gistbuild.c @@ -238,7 +238,7 @@ gistbuild(Relation heap, Relation index, IndexInfo *indexInfo) * and "auto" values. 
*/ void -gistValidateBufferingOption(char *value) +gistValidateBufferingOption(const char *value) { if (value == NULL || (strcmp(value, "on") != 0 && diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index 02a60f66b8..c06fabca10 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -3478,7 +3478,7 @@ BeginTransactionBlock(void) * resource owner, etc while executing inside a Portal. */ bool -PrepareTransactionBlock(char *gid) +PrepareTransactionBlock(const char *gid) { TransactionState s; bool result; @@ -3823,7 +3823,7 @@ EndImplicitTransactionBlock(void) * This executes a SAVEPOINT command. */ void -DefineSavepoint(char *name) +DefineSavepoint(const char *name) { TransactionState s = CurrentTransactionState; @@ -4168,7 +4168,7 @@ RollbackToSavepoint(List *options) * the caller to do it. */ void -BeginInternalSubTransaction(char *name) +BeginInternalSubTransaction(const char *name) { TransactionState s = CurrentTransactionState; diff --git a/src/backend/access/transam/xlogarchive.c b/src/backend/access/transam/xlogarchive.c index f64f04cfaf..488acd0f70 100644 --- a/src/backend/access/transam/xlogarchive.c +++ b/src/backend/access/transam/xlogarchive.c @@ -327,7 +327,7 @@ RestoreArchivedFile(char *path, const char *xlogfname, * This is currently used for recovery_end_command and archive_cleanup_command. */ void -ExecuteRecoveryCommand(char *command, char *commandName, bool failOnSignal) +ExecuteRecoveryCommand(const char *command, const char *commandName, bool failOnSignal) { char xlogRecoveryCmd[MAXPGPATH]; char lastRestartPointFname[MAXPGPATH]; @@ -425,7 +425,7 @@ ExecuteRecoveryCommand(char *command, char *commandName, bool failOnSignal) * in pg_wal (xlogfname), replacing any existing file with the same name. */ void -KeepFileRestoredFromArchive(char *path, char *xlogfname) +KeepFileRestoredFromArchive(const char *path, const char *xlogfname) { char xlogfpath[MAXPGPATH]; bool reload = false; diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 2bc9e90dcf..9e14880b99 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -103,12 +103,12 @@ static ObjectAddress AddNewRelationType(const char *typeName, Oid new_row_type, Oid new_array_type); static void RelationRemoveInheritance(Oid relid); -static Oid StoreRelCheck(Relation rel, char *ccname, Node *expr, +static Oid StoreRelCheck(Relation rel, const char *ccname, Node *expr, bool is_validated, bool is_local, int inhcount, bool is_no_inherit, bool is_internal); static void StoreConstraints(Relation rel, List *cooked_constraints, bool is_internal); -static bool MergeWithExistingConstraint(Relation rel, char *ccname, Node *expr, +static bool MergeWithExistingConstraint(Relation rel, const char *ccname, Node *expr, bool allow_merge, bool is_local, bool is_initially_valid, bool is_no_inherit); @@ -2037,7 +2037,7 @@ StoreAttrDefault(Relation rel, AttrNumber attnum, * The OID of the new constraint is returned. */ static Oid -StoreRelCheck(Relation rel, char *ccname, Node *expr, +StoreRelCheck(Relation rel, const char *ccname, Node *expr, bool is_validated, bool is_local, int inhcount, bool is_no_inherit, bool is_internal) { @@ -2461,7 +2461,7 @@ AddRelationNewConstraints(Relation rel, * XXX See MergeConstraintsIntoExisting too if you change this code. 
*/ static bool -MergeWithExistingConstraint(Relation rel, char *ccname, Node *expr, +MergeWithExistingConstraint(Relation rel, const char *ccname, Node *expr, bool allow_merge, bool is_local, bool is_initially_valid, bool is_no_inherit) @@ -2658,7 +2658,7 @@ cookDefault(ParseState *pstate, Node *raw_default, Oid atttypid, int32 atttypmod, - char *attname) + const char *attname) { Node *expr; diff --git a/src/backend/commands/comment.c b/src/backend/commands/comment.c index 1c17927c49..2dc9371fdb 100644 --- a/src/backend/commands/comment.c +++ b/src/backend/commands/comment.c @@ -139,7 +139,7 @@ CommentObject(CommentStmt *stmt) * existing comment for the specified key. */ void -CreateComments(Oid oid, Oid classoid, int32 subid, char *comment) +CreateComments(Oid oid, Oid classoid, int32 subid, const char *comment) { Relation description; ScanKeyData skey[3]; @@ -234,7 +234,7 @@ CreateComments(Oid oid, Oid classoid, int32 subid, char *comment) * existing comment for the specified key. */ void -CreateSharedComments(Oid oid, Oid classoid, char *comment) +CreateSharedComments(Oid oid, Oid classoid, const char *comment) { Relation shdescription; ScanKeyData skey[2]; diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c index 938133bbe4..fa7d0d015a 100644 --- a/src/backend/commands/event_trigger.c +++ b/src/backend/commands/event_trigger.c @@ -152,7 +152,7 @@ static event_trigger_command_tag_check_result check_table_rewrite_ddl_tag( const char *tag); static void error_duplicate_filter_variable(const char *defname); static Datum filter_list_to_array(List *filterlist); -static Oid insert_event_trigger_tuple(char *trigname, char *eventname, +static Oid insert_event_trigger_tuple(const char *trigname, const char *eventname, Oid evtOwner, Oid funcoid, List *tags); static void validate_ddl_tags(const char *filtervar, List *taglist); static void validate_table_rewrite_tags(const char *filtervar, List *taglist); @@ -372,7 +372,7 @@ error_duplicate_filter_variable(const char *defname) * Insert the new pg_event_trigger row and record dependencies. 
*/ static Oid -insert_event_trigger_tuple(char *trigname, char *eventname, Oid evtOwner, +insert_event_trigger_tuple(const char *trigname, const char *eventname, Oid evtOwner, Oid funcoid, List *taglist) { Relation tgrel; diff --git a/src/backend/commands/extension.c b/src/backend/commands/extension.c index e4340eed8c..9f77d25352 100644 --- a/src/backend/commands/extension.c +++ b/src/backend/commands/extension.c @@ -1266,8 +1266,8 @@ find_install_path(List *evi_list, ExtensionVersionInfo *evi_target, static ObjectAddress CreateExtensionInternal(char *extensionName, char *schemaName, - char *versionName, - char *oldVersionName, + const char *versionName, + const char *oldVersionName, bool cascade, List *parents, bool is_create) diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index 3f615b6260..89114af119 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -67,7 +67,7 @@ static void ComputeIndexAttrs(IndexInfo *indexInfo, List *attList, List *exclusionOpNames, Oid relId, - char *accessMethodName, Oid accessMethodId, + const char *accessMethodName, Oid accessMethodId, bool amcanorder, bool isconstraint); static char *ChooseIndexName(const char *tabname, Oid namespaceId, @@ -115,7 +115,7 @@ static void RangeVarCallbackForReindexIndex(const RangeVar *relation, */ bool CheckIndexCompatible(Oid oldId, - char *accessMethodName, + const char *accessMethodName, List *attributeList, List *exclusionOpNames) { @@ -1011,7 +1011,7 @@ ComputeIndexAttrs(IndexInfo *indexInfo, List *attList, /* list of IndexElem's */ List *exclusionOpNames, Oid relId, - char *accessMethodName, + const char *accessMethodName, Oid accessMethodId, bool amcanorder, bool isconstraint) @@ -1277,7 +1277,7 @@ ComputeIndexAttrs(IndexInfo *indexInfo, */ Oid ResolveOpClass(List *opclass, Oid attrType, - char *accessMethodName, Oid accessMethodId) + const char *accessMethodName, Oid accessMethodId) { char *schemaname; char *opcname; diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c index d23e6d6f25..1641e68abe 100644 --- a/src/backend/commands/opclasscmds.c +++ b/src/backend/commands/opclasscmds.c @@ -239,7 +239,7 @@ get_opclass_oid(Oid amID, List *opclassname, bool missing_ok) * Caller must have done permissions checks etc. already. 
*/ static ObjectAddress -CreateOpFamily(char *amname, char *opfname, Oid namespaceoid, Oid amoid) +CreateOpFamily(const char *amname, const char *opfname, Oid namespaceoid, Oid amoid) { Oid opfamilyoid; Relation rel; diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 165b165d55..9c66aa75ed 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -426,7 +426,7 @@ static void ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, bool rewrite); static void RebuildConstraintComment(AlteredTableInfo *tab, int pass, Oid objid, Relation rel, List *domname, - char *conname); + const char *conname); static void TryReuseIndex(Oid oldId, IndexStmt *stmt); static void TryReuseForeignKey(Oid oldId, Constraint *con); static void change_owner_fix_column_acls(Oid relationOid, @@ -438,14 +438,14 @@ static ObjectAddress ATExecClusterOn(Relation rel, const char *indexName, static void ATExecDropCluster(Relation rel, LOCKMODE lockmode); static bool ATPrepChangePersistence(Relation rel, bool toLogged); static void ATPrepSetTableSpace(AlteredTableInfo *tab, Relation rel, - char *tablespacename, LOCKMODE lockmode); + const char *tablespacename, LOCKMODE lockmode); static void ATExecSetTableSpace(Oid tableOid, Oid newTableSpace, LOCKMODE lockmode); static void ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation, LOCKMODE lockmode); -static void ATExecEnableDisableTrigger(Relation rel, char *trigname, +static void ATExecEnableDisableTrigger(Relation rel, const char *trigname, char fires_when, bool skip_system, LOCKMODE lockmode); -static void ATExecEnableDisableRule(Relation rel, char *rulename, +static void ATExecEnableDisableRule(Relation rel, const char *rulename, char fires_when, LOCKMODE lockmode); static void ATPrepAddInherit(Relation child_rel); static ObjectAddress ATExecAddInherit(Relation child_rel, RangeVar *parent, LOCKMODE lockmode); @@ -9873,7 +9873,7 @@ ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, char *cmd, static void RebuildConstraintComment(AlteredTableInfo *tab, int pass, Oid objid, Relation rel, List *domname, - char *conname) + const char *conname) { CommentStmt *cmd; char *comment_str; @@ -10393,7 +10393,7 @@ ATExecDropCluster(Relation rel, LOCKMODE lockmode) * ALTER TABLE SET TABLESPACE */ static void -ATPrepSetTableSpace(AlteredTableInfo *tab, Relation rel, char *tablespacename, LOCKMODE lockmode) +ATPrepSetTableSpace(AlteredTableInfo *tab, Relation rel, const char *tablespacename, LOCKMODE lockmode) { Oid tablespaceId; @@ -11060,7 +11060,7 @@ copy_relation_data(SMgrRelation src, SMgrRelation dst, * We just pass this off to trigger.c. */ static void -ATExecEnableDisableTrigger(Relation rel, char *trigname, +ATExecEnableDisableTrigger(Relation rel, const char *trigname, char fires_when, bool skip_system, LOCKMODE lockmode) { EnableDisableTrigger(rel, trigname, fires_when, skip_system); @@ -11072,7 +11072,7 @@ ATExecEnableDisableTrigger(Relation rel, char *trigname, * We just pass this off to rewriteDefine.c. 
*/ static void -ATExecEnableDisableRule(Relation rel, char *rulename, +ATExecEnableDisableRule(Relation rel, const char *rulename, char fires_when, LOCKMODE lockmode) { EnableDisableRule(rel, rulename, fires_when); diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c index 08f3a3d357..f86af4c054 100644 --- a/src/backend/commands/typecmds.c +++ b/src/backend/commands/typecmds.c @@ -103,7 +103,7 @@ static void checkEnumOwner(HeapTuple tup); static char *domainAddConstraint(Oid domainOid, Oid domainNamespace, Oid baseTypeOid, int typMod, Constraint *constr, - char *domainName, ObjectAddress *constrAddr); + const char *domainName, ObjectAddress *constrAddr); static Node *replace_domain_constraint_value(ParseState *pstate, ColumnRef *cref); @@ -2649,7 +2649,7 @@ AlterDomainAddConstraint(List *names, Node *newConstraint, * Implements the ALTER DOMAIN .. VALIDATE CONSTRAINT statement. */ ObjectAddress -AlterDomainValidateConstraint(List *names, char *constrName) +AlterDomainValidateConstraint(List *names, const char *constrName) { TypeName *typename; Oid domainoid; @@ -3060,7 +3060,7 @@ checkDomainOwner(HeapTuple tup) static char * domainAddConstraint(Oid domainOid, Oid domainNamespace, Oid baseTypeOid, int typMod, Constraint *constr, - char *domainName, ObjectAddress *constrAddr) + const char *domainName, ObjectAddress *constrAddr) { Node *expr; char *ccsrc; diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c index 076e2a3a40..c1e80e61d4 100644 --- a/src/backend/commands/view.c +++ b/src/backend/commands/view.c @@ -43,7 +43,7 @@ static void checkViewTupleDesc(TupleDesc newdesc, TupleDesc olddesc); * are "local" and "cascaded". */ void -validateWithCheckOption(char *value) +validateWithCheckOption(const char *value) { if (value == NULL || (pg_strcasecmp(value, "local") != 0 && diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index ab74fd8dfd..6505b1f2b9 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -43,7 +43,7 @@ * Global authentication functions *---------------------------------------------------------------- */ -static void sendAuthRequest(Port *port, AuthRequest areq, char *extradata, +static void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata, int extralen); static void auth_failed(Port *port, int status, char *logdetail); static char *recv_password_packet(Port *port); @@ -91,7 +91,7 @@ static int auth_peer(hbaPort *port); #define PGSQL_PAM_SERVICE "postgresql" /* Service name passed to PAM */ -static int CheckPAMAuth(Port *port, char *user, char *password); +static int CheckPAMAuth(Port *port, const char *user, const char *password); static int pam_passwd_conv_proc(int num_msg, const struct pam_message **msg, struct pam_response **resp, void *appdata_ptr); @@ -100,7 +100,7 @@ static struct pam_conv pam_passw_conv = { NULL }; -static char *pam_passwd = NULL; /* Workaround for Solaris 2.6 brokenness */ +static const char *pam_passwd = NULL; /* Workaround for Solaris 2.6 brokenness */ static Port *pam_port_cludge; /* Workaround for passing "Port *port" into * pam_passwd_conv_proc */ #endif /* USE_PAM */ @@ -202,7 +202,7 @@ static int pg_SSPI_make_upn(char *accountname, *---------------------------------------------------------------- */ static int CheckRADIUSAuth(Port *port); -static int PerformRadiusTransaction(char *server, char *secret, char *portstr, char *identifier, char *user_name, char *passwd); +static int PerformRadiusTransaction(const char *server, const char *secret, const 
char *portstr, const char *identifier, const char *user_name, const char *passwd); /* @@ -612,7 +612,7 @@ ClientAuthentication(Port *port) * Send an authentication request packet to the frontend. */ static void -sendAuthRequest(Port *port, AuthRequest areq, char *extradata, int extralen) +sendAuthRequest(Port *port, AuthRequest areq, const char *extradata, int extralen) { StringInfoData buf; @@ -1040,7 +1040,7 @@ static GSS_DLLIMP gss_OID GSS_C_NT_USER_NAME = &GSS_C_NT_USER_NAME_desc; static void -pg_GSS_error(int severity, char *errmsg, OM_uint32 maj_stat, OM_uint32 min_stat) +pg_GSS_error(int severity, const char *errmsg, OM_uint32 maj_stat, OM_uint32 min_stat) { gss_buffer_desc gmsg; OM_uint32 lmin_s, @@ -2051,7 +2051,7 @@ static int pam_passwd_conv_proc(int num_msg, const struct pam_message **msg, struct pam_response **resp, void *appdata_ptr) { - char *passwd; + const char *passwd; struct pam_response *reply; int i; @@ -2149,7 +2149,7 @@ pam_passwd_conv_proc(int num_msg, const struct pam_message **msg, * Check authentication against PAM. */ static int -CheckPAMAuth(Port *port, char *user, char *password) +CheckPAMAuth(Port *port, const char *user, const char *password) { int retval; pam_handle_t *pamh = NULL; @@ -2874,7 +2874,7 @@ CheckRADIUSAuth(Port *port) } static int -PerformRadiusTransaction(char *server, char *secret, char *portstr, char *identifier, char *user_name, char *passwd) +PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd) { radius_packet radius_send_pack; radius_packet radius_recv_pack; @@ -2941,9 +2941,9 @@ PerformRadiusTransaction(char *server, char *secret, char *portstr, char *identi return STATUS_ERROR; } packet->id = packet->vector[0]; - radius_add_attribute(packet, RADIUS_SERVICE_TYPE, (unsigned char *) &service, sizeof(service)); - radius_add_attribute(packet, RADIUS_USER_NAME, (unsigned char *) user_name, strlen(user_name)); - radius_add_attribute(packet, RADIUS_NAS_IDENTIFIER, (unsigned char *) identifier, strlen(identifier)); + radius_add_attribute(packet, RADIUS_SERVICE_TYPE, (const unsigned char *) &service, sizeof(service)); + radius_add_attribute(packet, RADIUS_USER_NAME, (const unsigned char *) user_name, strlen(user_name)); + radius_add_attribute(packet, RADIUS_NAS_IDENTIFIER, (const unsigned char *) identifier, strlen(identifier)); /* * RADIUS password attributes are calculated as: e[0] = p[0] XOR diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c index 210f13cc87..1e97c9db10 100644 --- a/src/backend/libpq/hba.c +++ b/src/backend/libpq/hba.c @@ -144,8 +144,8 @@ static List *tokenize_inc_file(List *tokens, const char *outer_filename, const char *inc_filename, int elevel, char **err_msg); static bool parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline, int elevel, char **err_msg); -static bool verify_option_list_length(List *options, char *optionname, - List *masters, char *mastername, int line_num); +static bool verify_option_list_length(List *options, const char *optionname, + List *masters, const char *mastername, int line_num); static ArrayType *gethba_options(HbaLine *hba); static void fill_hba_line(Tuplestorestate *tuple_store, TupleDesc tupdesc, int lineno, HbaLine *hba, const char *err_msg); @@ -1617,7 +1617,7 @@ parse_hba_line(TokenizedLine *tok_line, int elevel) static bool -verify_option_list_length(List *options, char *optionname, List *masters, char *mastername, int line_num) +verify_option_list_length(List *options, 
const char *optionname, List *masters, const char *mastername, int line_num) { if (list_length(options) == 0 || list_length(options) == 1 || diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c index 1aaa5244e6..86d1da0677 100644 --- a/src/backend/parser/parse_expr.c +++ b/src/backend/parser/parse_expr.c @@ -386,7 +386,7 @@ transformExprRecurse(ParseState *pstate, Node *expr) * selection from an arbitrary node needs it.) */ static void -unknown_attribute(ParseState *pstate, Node *relref, char *attname, +unknown_attribute(ParseState *pstate, Node *relref, const char *attname, int location) { RangeTblEntry *rte; diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c index fc0d6bc2f2..a11843332b 100644 --- a/src/backend/parser/parse_func.c +++ b/src/backend/parser/parse_func.c @@ -39,7 +39,7 @@ static void unify_hypothetical_args(ParseState *pstate, List *fargs, int numAggregatedArgs, Oid *actual_arg_types, Oid *declared_arg_types); static Oid FuncNameAsType(List *funcname); -static Node *ParseComplexProjection(ParseState *pstate, char *funcname, +static Node *ParseComplexProjection(ParseState *pstate, const char *funcname, Node *first_arg, int location); @@ -1790,7 +1790,7 @@ FuncNameAsType(List *funcname) * transformed expression tree. If not, return NULL. */ static Node * -ParseComplexProjection(ParseState *pstate, char *funcname, Node *first_arg, +ParseComplexProjection(ParseState *pstate, const char *funcname, Node *first_arg, int location) { TupleDesc tupdesc; diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index e6740c291d..58bdb23c4e 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -652,7 +652,7 @@ updateFuzzyAttrMatchState(int fuzzy_rte_penalty, * for an approximate match and update fuzzystate accordingly. */ Node * -scanRTEForColumn(ParseState *pstate, RangeTblEntry *rte, char *colname, +scanRTEForColumn(ParseState *pstate, RangeTblEntry *rte, const char *colname, int location, int fuzzy_rte_penalty, FuzzyAttrMatchState *fuzzystate) { @@ -754,7 +754,7 @@ scanRTEForColumn(ParseState *pstate, RangeTblEntry *rte, char *colname, * If localonly is true, only names in the innermost query are considered. */ Node * -colNameToVar(ParseState *pstate, char *colname, bool localonly, +colNameToVar(ParseState *pstate, const char *colname, bool localonly, int location) { Node *result = NULL; @@ -828,7 +828,7 @@ colNameToVar(ParseState *pstate, char *colname, bool localonly, * and 'second' will contain the attribute number for the second match. 
*/ static FuzzyAttrMatchState * -searchRangeTableForCol(ParseState *pstate, const char *alias, char *colname, +searchRangeTableForCol(ParseState *pstate, const char *alias, const char *colname, int location) { ParseState *orig_pstate = pstate; @@ -3248,7 +3248,7 @@ errorMissingRTE(ParseState *pstate, RangeVar *relation) */ void errorMissingColumn(ParseState *pstate, - char *relname, char *colname, int location) + const char *relname, const char *colname, int location) { FuzzyAttrMatchState *state; char *closestfirst = NULL; diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c index 01fd726a3d..21593b249f 100644 --- a/src/backend/parser/parse_target.c +++ b/src/backend/parser/parse_target.c @@ -455,7 +455,7 @@ Expr * transformAssignedExpr(ParseState *pstate, Expr *expr, ParseExprKind exprKind, - char *colname, + const char *colname, int attrno, List *indirection, int location) diff --git a/src/backend/port/dynloader/darwin.c b/src/backend/port/dynloader/darwin.c index 18092cc759..93f19878f5 100644 --- a/src/backend/port/dynloader/darwin.c +++ b/src/backend/port/dynloader/darwin.c @@ -20,7 +20,7 @@ #ifdef HAVE_DLOPEN void * -pg_dlopen(char *filename) +pg_dlopen(const char *filename) { return dlopen(filename, RTLD_NOW | RTLD_GLOBAL); } @@ -32,7 +32,7 @@ pg_dlclose(void *handle) } PGFunction -pg_dlsym(void *handle, char *funcname) +pg_dlsym(void *handle, const char *funcname) { /* Do not prepend an underscore: see dlopen(3) */ return dlsym(handle, funcname); @@ -54,7 +54,7 @@ pg_dlerror(void) static NSObjectFileImageReturnCode cofiff_result = NSObjectFileImageFailure; void * -pg_dlopen(char *filename) +pg_dlopen(const char *filename) { NSObjectFileImage image; @@ -73,7 +73,7 @@ pg_dlclose(void *handle) } PGFunction -pg_dlsym(void *handle, char *funcname) +pg_dlsym(void *handle, const char *funcname) { NSSymbol symbol; char *symname = (char *) malloc(strlen(funcname) + 2); diff --git a/src/backend/port/dynloader/darwin.h b/src/backend/port/dynloader/darwin.h index 44a3bd6b82..292a31de13 100644 --- a/src/backend/port/dynloader/darwin.h +++ b/src/backend/port/dynloader/darwin.h @@ -2,7 +2,7 @@ #include "fmgr.h" -void *pg_dlopen(char *filename); -PGFunction pg_dlsym(void *handle, char *funcname); +void *pg_dlopen(const char *filename); +PGFunction pg_dlsym(void *handle, const char *funcname); void pg_dlclose(void *handle); char *pg_dlerror(void); diff --git a/src/backend/port/dynloader/hpux.c b/src/backend/port/dynloader/hpux.c index 5a0e40146d..5ab24f8fd9 100644 --- a/src/backend/port/dynloader/hpux.c +++ b/src/backend/port/dynloader/hpux.c @@ -26,7 +26,7 @@ #include "utils/dynamic_loader.h" void * -pg_dlopen(char *filename) +pg_dlopen(const char *filename) { /* * Use BIND_IMMEDIATE so that undefined symbols cause a failure return @@ -41,7 +41,7 @@ pg_dlopen(char *filename) } PGFunction -pg_dlsym(void *handle, char *funcname) +pg_dlsym(void *handle, const char *funcname) { PGFunction f; diff --git a/src/backend/port/dynloader/hpux.h b/src/backend/port/dynloader/hpux.h index 0a17454f2b..6c1b367e97 100644 --- a/src/backend/port/dynloader/hpux.h +++ b/src/backend/port/dynloader/hpux.h @@ -19,7 +19,7 @@ /* System includes */ #include "fmgr.h" -extern void *pg_dlopen(char *filename); -extern PGFunction pg_dlsym(void *handle, char *funcname); +extern void *pg_dlopen(const char *filename); +extern PGFunction pg_dlsym(void *handle, const char *funcname); extern void pg_dlclose(void *handle); extern char *pg_dlerror(void); diff --git a/src/backend/port/dynloader/linux.c 
b/src/backend/port/dynloader/linux.c index 38e19f7484..375ade32e5 100644 --- a/src/backend/port/dynloader/linux.c +++ b/src/backend/port/dynloader/linux.c @@ -29,7 +29,7 @@ #ifndef HAVE_DLOPEN void * -pg_dlopen(char *filename) +pg_dlopen(const char *filename) { #ifndef HAVE_DLD_H elog(ERROR, "dynamic load not supported"); @@ -101,7 +101,7 @@ pg_dlopen(char *filename) } PGFunction -pg_dlsym(void *handle, char *funcname) +pg_dlsym(void *handle, const char *funcname) { #ifndef HAVE_DLD_H return NULL; diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index 2b2b993e2c..9906a85bc0 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -5537,7 +5537,7 @@ MaxLivePostmasterChildren(void) * Connect background worker to a database. */ void -BackgroundWorkerInitializeConnection(char *dbname, char *username) +BackgroundWorkerInitializeConnection(const char *dbname, const char *username) { BackgroundWorker *worker = MyBgworkerEntry; diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c index 1411c14e92..cbcb3dbec3 100644 --- a/src/backend/replication/basebackup.c +++ b/src/backend/replication/basebackup.c @@ -52,9 +52,9 @@ typedef struct } basebackup_options; -static int64 sendDir(char *path, int basepathlen, bool sizeonly, +static int64 sendDir(const char *path, int basepathlen, bool sizeonly, List *tablespaces, bool sendtblspclinks); -static bool sendFile(char *readfilename, char *tarfilename, +static bool sendFile(const char *readfilename, const char *tarfilename, struct stat *statbuf, bool missing_ok); static void sendFileWithContent(const char *filename, const char *content); static int64 _tarWriteHeader(const char *filename, const char *linktarget, @@ -962,7 +962,7 @@ sendTablespace(char *path, bool sizeonly) * as it will be sent separately in the tablespace_map file. */ static int64 -sendDir(char *path, int basepathlen, bool sizeonly, List *tablespaces, +sendDir(const char *path, int basepathlen, bool sizeonly, List *tablespaces, bool sendtblspclinks) { DIR *dir; @@ -1207,7 +1207,7 @@ sendDir(char *path, int basepathlen, bool sizeonly, List *tablespaces, * and the file did not exist. */ static bool -sendFile(char *readfilename, char *tarfilename, struct stat *statbuf, +sendFile(const char *readfilename, const char *tarfilename, struct stat *statbuf, bool missing_ok) { FILE *fp; diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c index 007d3dabc1..247809927a 100644 --- a/src/backend/rewrite/rewriteDefine.c +++ b/src/backend/rewrite/rewriteDefine.c @@ -56,7 +56,7 @@ static void setRuleCheckAsUser_Query(Query *qry, Oid userid); * relation "pg_rewrite" */ static Oid -InsertRule(char *rulname, +InsertRule(const char *rulname, int evtype, Oid eventrel_oid, bool evinstead, @@ -225,7 +225,7 @@ DefineRule(RuleStmt *stmt, const char *queryString) * action and qual have already been passed through parse analysis. 
*/ ObjectAddress -DefineQueryRewrite(char *rulename, +DefineQueryRewrite(const char *rulename, Oid event_relid, Node *event_qual, CmdType event_type, diff --git a/src/backend/snowball/dict_snowball.c b/src/backend/snowball/dict_snowball.c index 7cf668de19..42384b42b1 100644 --- a/src/backend/snowball/dict_snowball.c +++ b/src/backend/snowball/dict_snowball.c @@ -138,7 +138,7 @@ typedef struct DictSnowball static void -locate_stem_module(DictSnowball *d, char *lang) +locate_stem_module(DictSnowball *d, const char *lang) { const stemmer_module *m; diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c index 6b53b70d33..e5c3e86709 100644 --- a/src/backend/storage/lmgr/lwlock.c +++ b/src/backend/storage/lmgr/lwlock.c @@ -111,7 +111,7 @@ extern slock_t *ShmemLock; * This is indexed by tranche ID and stores the names of all tranches known * to the current backend. */ -static char **LWLockTrancheArray = NULL; +static const char **LWLockTrancheArray = NULL; static int LWLockTranchesAllocated = 0; #define T_NAME(lock) \ @@ -495,7 +495,7 @@ RegisterLWLockTranches(void) if (LWLockTrancheArray == NULL) { LWLockTranchesAllocated = 128; - LWLockTrancheArray = (char **) + LWLockTrancheArray = (const char **) MemoryContextAllocZero(TopMemoryContext, LWLockTranchesAllocated * sizeof(char *)); Assert(LWLockTranchesAllocated >= LWTRANCHE_FIRST_USER_DEFINED); @@ -595,7 +595,7 @@ LWLockNewTrancheId(void) * (TopMemoryContext, static variable, or similar). */ void -LWLockRegisterTranche(int tranche_id, char *tranche_name) +LWLockRegisterTranche(int tranche_id, const char *tranche_name) { Assert(LWLockTrancheArray != NULL); @@ -607,7 +607,7 @@ LWLockRegisterTranche(int tranche_id, char *tranche_name) while (i <= tranche_id) i *= 2; - LWLockTrancheArray = (char **) + LWLockTrancheArray = (const char **) repalloc(LWLockTrancheArray, i * sizeof(char *)); LWLockTranchesAllocated = i; while (j < LWLockTranchesAllocated) diff --git a/src/backend/tsearch/dict_thesaurus.c b/src/backend/tsearch/dict_thesaurus.c index 1b6085add3..2a458db691 100644 --- a/src/backend/tsearch/dict_thesaurus.c +++ b/src/backend/tsearch/dict_thesaurus.c @@ -165,7 +165,7 @@ addWrd(DictThesaurus *d, char *b, char *e, uint32 idsubst, uint16 nwrd, uint16 p #define TR_INSUBS 4 static void -thesaurusRead(char *filename, DictThesaurus *d) +thesaurusRead(const char *filename, DictThesaurus *d) { tsearch_readline_state trst; uint32 idsubst = 0; diff --git a/src/backend/tsearch/spell.c b/src/backend/tsearch/spell.c index e82a69d337..e70e901066 100644 --- a/src/backend/tsearch/spell.c +++ b/src/backend/tsearch/spell.c @@ -450,7 +450,7 @@ getNextFlagFromString(IspellDict *Conf, char **sflagset, char *sflag) * otherwise returns false. */ static bool -IsAffixFlagInUse(IspellDict *Conf, int affix, char *affixflag) +IsAffixFlagInUse(IspellDict *Conf, int affix, const char *affixflag) { char *flagcur; char flag[BUFSIZ]; @@ -596,7 +596,7 @@ NIImportDictionary(IspellDict *Conf, const char *filename) * Returns 1 if the word was found in the prefix tree, else returns 0. 
*/ static int -FindWord(IspellDict *Conf, const char *word, char *affixflag, int flag) +FindWord(IspellDict *Conf, const char *word, const char *affixflag, int flag) { SPNode *node = Conf->Dictionary; SPNodeData *StopLow, diff --git a/src/backend/utils/adt/genfile.c b/src/backend/utils/adt/genfile.c index 5285aa54f1..b3b9fc522d 100644 --- a/src/backend/utils/adt/genfile.c +++ b/src/backend/utils/adt/genfile.c @@ -477,7 +477,7 @@ pg_ls_dir_1arg(PG_FUNCTION_ARGS) /* Generic function to return a directory listing of files */ static Datum -pg_ls_dir_files(FunctionCallInfo fcinfo, char *dir) +pg_ls_dir_files(FunctionCallInfo fcinfo, const char *dir) { FuncCallContext *funcctx; struct dirent *de; diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index b543b7046c..06cf32f5d7 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -343,7 +343,7 @@ static void set_relation_column_names(deparse_namespace *dpns, deparse_columns *colinfo); static void set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte, deparse_columns *colinfo); -static bool colname_is_unique(char *colname, deparse_namespace *dpns, +static bool colname_is_unique(const char *colname, deparse_namespace *dpns, deparse_columns *colinfo); static char *make_colname_unique(char *colname, deparse_namespace *dpns, deparse_columns *colinfo); @@ -4117,7 +4117,7 @@ set_join_column_names(deparse_namespace *dpns, RangeTblEntry *rte, * dpns is query-wide info, colinfo is for the column's RTE */ static bool -colname_is_unique(char *colname, deparse_namespace *dpns, +colname_is_unique(const char *colname, deparse_namespace *dpns, deparse_columns *colinfo) { int i; diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c index 4674ee2938..39b68dbc65 100644 --- a/src/backend/utils/adt/varlena.c +++ b/src/backend/utils/adt/varlena.c @@ -1379,7 +1379,7 @@ text_position_cleanup(TextPositionState *state) * whether arg1 is less than, equal to, or greater than arg2. 
*/ int -varstr_cmp(char *arg1, int len1, char *arg2, int len2, Oid collid) +varstr_cmp(const char *arg1, int len1, const char *arg2, int len2, Oid collid) { int result; diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c index c9d07f2ae9..56e262819e 100644 --- a/src/backend/utils/adt/xml.c +++ b/src/backend/utils/adt/xml.c @@ -146,7 +146,7 @@ static text *xml_xmlnodetoxmltype(xmlNodePtr cur, PgXmlErrorContext *xmlerrcxt); static int xml_xpathobjtoxmlarray(xmlXPathObjectPtr xpathobj, ArrayBuildState *astate, PgXmlErrorContext *xmlerrcxt); -static xmlChar *pg_xmlCharStrndup(char *str, size_t len); +static xmlChar *pg_xmlCharStrndup(const char *str, size_t len); #endif /* USE_LIBXML */ static void xmldata_root_element_start(StringInfo result, const char *eltname, @@ -192,11 +192,11 @@ typedef struct XmlTableBuilderData static void XmlTableInitOpaque(struct TableFuncScanState *state, int natts); static void XmlTableSetDocument(struct TableFuncScanState *state, Datum value); -static void XmlTableSetNamespace(struct TableFuncScanState *state, char *name, - char *uri); -static void XmlTableSetRowFilter(struct TableFuncScanState *state, char *path); +static void XmlTableSetNamespace(struct TableFuncScanState *state, const char *name, + const char *uri); +static void XmlTableSetRowFilter(struct TableFuncScanState *state, const char *path); static void XmlTableSetColumnFilter(struct TableFuncScanState *state, - char *path, int colnum); + const char *path, int colnum); static bool XmlTableFetchRow(struct TableFuncScanState *state); static Datum XmlTableGetValue(struct TableFuncScanState *state, int colnum, Oid typid, int32 typmod, bool *isnull); @@ -765,7 +765,7 @@ xmlparse(text *data, XmlOptionType xmloption_arg, bool preserve_whitespace) xmltype * -xmlpi(char *target, text *arg, bool arg_is_null, bool *result_is_null) +xmlpi(const char *target, text *arg, bool arg_is_null, bool *result_is_null) { #ifdef USE_LIBXML xmltype *result; @@ -1164,7 +1164,7 @@ xml_pnstrdup(const xmlChar *str, size_t len) /* Ditto, except input is char* */ static xmlChar * -pg_xmlCharStrndup(char *str, size_t len) +pg_xmlCharStrndup(const char *str, size_t len) { xmlChar *result; @@ -1850,7 +1850,7 @@ appendStringInfoLineSeparator(StringInfo str) * Convert one char in the current server encoding to a Unicode codepoint. */ static pg_wchar -sqlchar_to_unicode(char *s) +sqlchar_to_unicode(const char *s) { char *utf8string; pg_wchar ret[2]; /* need space for trailing zero */ @@ -1894,12 +1894,12 @@ is_valid_xml_namechar(pg_wchar c) * Map SQL identifier to XML name; see SQL/XML:2008 section 9.1. */ char * -map_sql_identifier_to_xml_name(char *ident, bool fully_escaped, +map_sql_identifier_to_xml_name(const char *ident, bool fully_escaped, bool escape_period) { #ifdef USE_LIBXML StringInfoData buf; - char *p; + const char *p; /* * SQL/XML doesn't make use of this case anywhere, so it's probably a @@ -1970,10 +1970,10 @@ unicode_to_sqlchar(pg_wchar c) * Map XML name to SQL identifier; see SQL/XML:2008 section 9.3. */ char * -map_xml_name_to_sql_identifier(char *name) +map_xml_name_to_sql_identifier(const char *name) { StringInfoData buf; - char *p; + const char *p; initStringInfo(&buf); @@ -3009,7 +3009,7 @@ database_to_xml_and_xmlschema(PG_FUNCTION_ARGS) * 9.2. 
*/ static char * -map_multipart_sql_identifier_to_xml_name(char *a, char *b, char *c, char *d) +map_multipart_sql_identifier_to_xml_name(const char *a, const char *b, const char *c, const char *d) { StringInfoData result; @@ -4292,7 +4292,7 @@ XmlTableSetDocument(TableFuncScanState *state, Datum value) * Add a namespace declaration */ static void -XmlTableSetNamespace(TableFuncScanState *state, char *name, char *uri) +XmlTableSetNamespace(TableFuncScanState *state, const char *name, const char *uri) { #ifdef USE_LIBXML XmlTableBuilderData *xtCxt; @@ -4318,7 +4318,7 @@ XmlTableSetNamespace(TableFuncScanState *state, char *name, char *uri) * Install the row-filter Xpath expression. */ static void -XmlTableSetRowFilter(TableFuncScanState *state, char *path) +XmlTableSetRowFilter(TableFuncScanState *state, const char *path) { #ifdef USE_LIBXML XmlTableBuilderData *xtCxt; @@ -4347,7 +4347,7 @@ XmlTableSetRowFilter(TableFuncScanState *state, char *path) * Install the column-filter Xpath expression, for the given column. */ static void -XmlTableSetColumnFilter(TableFuncScanState *state, char *path, int colnum) +XmlTableSetColumnFilter(TableFuncScanState *state, const char *path, int colnum) { #ifdef USE_LIBXML XmlTableBuilderData *xtCxt; diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c index 27fcf5a87f..bb2bc065ef 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -239,10 +239,10 @@ static void writefile(char *path, char **lines); static FILE *popen_check(const char *command, const char *mode); static void exit_nicely(void); static char *get_id(void); -static int get_encoding_id(char *encoding_name); -static void set_input(char **dest, char *filename); +static int get_encoding_id(const char *encoding_name); +static void set_input(char **dest, const char *filename); static void check_input(char *path); -static void write_version_file(char *extrapath); +static void write_version_file(const char *extrapath); static void set_null_conf(void); static void test_config_settings(void); static void setup_config(void); @@ -640,7 +640,7 @@ encodingid_to_string(int enc) * get the encoding id for a given encoding name */ static int -get_encoding_id(char *encoding_name) +get_encoding_id(const char *encoding_name) { int enc; @@ -751,7 +751,7 @@ find_matching_ts_config(const char *lc_type) * set name of given input file variable under data directory */ static void -set_input(char **dest, char *filename) +set_input(char **dest, const char *filename) { *dest = psprintf("%s/%s", share_path, filename); } @@ -801,7 +801,7 @@ check_input(char *path) * if extrapath is not NULL */ static void -write_version_file(char *extrapath) +write_version_file(const char *extrapath) { FILE *version_file; char *path; diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c index befcde4630..216853d627 100644 --- a/src/bin/pg_dump/pg_backup_db.c +++ b/src/bin/pg_dump/pg_backup_db.c @@ -419,7 +419,7 @@ ExecuteSqlQuery(Archive *AHX, const char *query, ExecStatusType status) * Execute an SQL query and verify that we got exactly one row back. 
*/ PGresult * -ExecuteSqlQueryForSingleRow(Archive *fout, char *query) +ExecuteSqlQueryForSingleRow(Archive *fout, const char *query) { PGresult *res; int ntups; diff --git a/src/bin/pg_dump/pg_backup_db.h b/src/bin/pg_dump/pg_backup_db.h index 527449e044..a79f5283fe 100644 --- a/src/bin/pg_dump/pg_backup_db.h +++ b/src/bin/pg_dump/pg_backup_db.h @@ -16,7 +16,7 @@ extern int ExecuteSqlCommandBuf(Archive *AHX, const char *buf, size_t bufLen); extern void ExecuteSqlStatement(Archive *AHX, const char *query); extern PGresult *ExecuteSqlQuery(Archive *AHX, const char *query, ExecStatusType status); -extern PGresult *ExecuteSqlQueryForSingleRow(Archive *fout, char *query); +extern PGresult *ExecuteSqlQueryForSingleRow(Archive *fout, const char *query); extern void EndDBCopyMode(Archive *AHX, const char *tocEntryTag); diff --git a/src/bin/pg_rewind/fetch.c b/src/bin/pg_rewind/fetch.c index e9353d8866..13553e3b5a 100644 --- a/src/bin/pg_rewind/fetch.c +++ b/src/bin/pg_rewind/fetch.c @@ -51,7 +51,7 @@ executeFileMap(void) * handy for text files. */ char * -fetchFile(char *filename, size_t *filesize) +fetchFile(const char *filename, size_t *filesize) { if (datadir_source) return slurpFile(datadir_source, filename, filesize); diff --git a/src/bin/pg_rewind/fetch.h b/src/bin/pg_rewind/fetch.h index 1e08f76b3e..7288120a0b 100644 --- a/src/bin/pg_rewind/fetch.h +++ b/src/bin/pg_rewind/fetch.h @@ -24,7 +24,7 @@ * config options. */ extern void fetchSourceFileList(void); -extern char *fetchFile(char *filename, size_t *filesize); +extern char *fetchFile(const char *filename, size_t *filesize); extern void executeFileMap(void); /* in libpq_fetch.c */ diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c index c74eb25e18..f7f2ebdacf 100644 --- a/src/bin/pg_upgrade/option.c +++ b/src/bin/pg_upgrade/option.c @@ -22,7 +22,7 @@ static void usage(void); static void check_required_directory(char **dirpath, char **configpath, - char *envVarName, char *cmdLineOption, char *description); + const char *envVarName, const char *cmdLineOption, const char *description); #define FIX_DEFAULT_READ_ONLY "-c default_transaction_read_only=false" @@ -341,8 +341,8 @@ usage(void) */ static void check_required_directory(char **dirpath, char **configpath, - char *envVarName, char *cmdLineOption, - char *description) + const char *envVarName, const char *cmdLineOption, + const char *description) { if (*dirpath == NULL || strlen(*dirpath) == 0) { diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index d44fefb457..c10103f0bf 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -363,7 +363,7 @@ create_new_objects(void) * Delete the given subdirectory contents from the new cluster */ static void -remove_new_subdir(char *subdir, bool rmtopdir) +remove_new_subdir(const char *subdir, bool rmtopdir) { char new_path[MAXPGPATH]; @@ -380,7 +380,7 @@ remove_new_subdir(char *subdir, bool rmtopdir) * Copy the files from the old cluster into it */ static void -copy_subdir_files(char *old_subdir, char *new_subdir) +copy_subdir_files(const char *old_subdir, const char *new_subdir) { char old_path[MAXPGPATH]; char new_path[MAXPGPATH]; diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c index 53eca4c8e0..6443eda6df 100644 --- a/src/bin/pg_waldump/pg_waldump.c +++ b/src/bin/pg_waldump/pg_waldump.c @@ -175,7 +175,7 @@ open_file_in_directory(const char *directory, const char *fname) * wal segment size. 
*/ static bool -search_directory(char *directory, char *fname) +search_directory(const char *directory, const char *fname) { int fd = -1; DIR *xldir; diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index d4a60351a8..ec56a74de0 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -1870,7 +1870,7 @@ preparedStatementName(char *buffer, int file, int state) } static void -commandFailed(CState *st, char *message) +commandFailed(CState *st, const char *message) { fprintf(stderr, "client %d aborted in command %d of script %d; %s\n", @@ -3538,7 +3538,7 @@ addScript(ParsedScript script) } static void -printSimpleStats(char *prefix, SimpleStats *ss) +printSimpleStats(const char *prefix, SimpleStats *ss) { /* print NaN if no transactions where executed */ double latency = ss->sum / ss->count; diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h index bfef2df420..eb1c6728d4 100644 --- a/src/include/access/gist_private.h +++ b/src/include/access/gist_private.h @@ -503,7 +503,7 @@ extern void gistSplitByKey(Relation r, Page page, IndexTuple *itup, /* gistbuild.c */ extern IndexBuildResult *gistbuild(Relation heap, Relation index, struct IndexInfo *indexInfo); -extern void gistValidateBufferingOption(char *value); +extern void gistValidateBufferingOption(const char *value); /* gistbuildbuffers.c */ extern GISTBuildBuffers *gistInitBuildBuffers(int pagesPerBuffer, int levelStep, diff --git a/src/include/access/reloptions.h b/src/include/access/reloptions.h index 5cdaa3bff1..cd43e3a52e 100644 --- a/src/include/access/reloptions.h +++ b/src/include/access/reloptions.h @@ -108,7 +108,7 @@ typedef struct relopt_real } relopt_real; /* validation routines for strings */ -typedef void (*validate_string_relopt) (char *value); +typedef void (*validate_string_relopt) (const char *value); typedef struct relopt_string { @@ -246,17 +246,17 @@ typedef struct extern relopt_kind add_reloption_kind(void); -extern void add_bool_reloption(bits32 kinds, char *name, char *desc, +extern void add_bool_reloption(bits32 kinds, const char *name, const char *desc, bool default_val); -extern void add_int_reloption(bits32 kinds, char *name, char *desc, +extern void add_int_reloption(bits32 kinds, const char *name, const char *desc, int default_val, int min_val, int max_val); -extern void add_real_reloption(bits32 kinds, char *name, char *desc, +extern void add_real_reloption(bits32 kinds, const char *name, const char *desc, double default_val, double min_val, double max_val); -extern void add_string_reloption(bits32 kinds, char *name, char *desc, - char *default_val, validate_string_relopt validator); +extern void add_string_reloption(bits32 kinds, const char *name, const char *desc, + const char *default_val, validate_string_relopt validator); extern Datum transformRelOptions(Datum oldOptions, List *defList, - char *namspace, char *validnsps[], + const char *namspace, char *validnsps[], bool ignoreOids, bool isReset); extern List *untransformRelOptions(Datum options); extern bytea *extractRelOptions(HeapTuple tuple, TupleDesc tupdesc, diff --git a/src/include/access/xact.h b/src/include/access/xact.h index f2c10f905f..118b0a8432 100644 --- a/src/include/access/xact.h +++ b/src/include/access/xact.h @@ -350,14 +350,14 @@ extern void CommitTransactionCommand(void); extern void AbortCurrentTransaction(void); extern void BeginTransactionBlock(void); extern bool EndTransactionBlock(void); -extern bool PrepareTransactionBlock(char *gid); +extern bool 
PrepareTransactionBlock(const char *gid); extern void UserAbortTransactionBlock(void); extern void BeginImplicitTransactionBlock(void); extern void EndImplicitTransactionBlock(void); extern void ReleaseSavepoint(List *options); -extern void DefineSavepoint(char *name); +extern void DefineSavepoint(const char *name); extern void RollbackToSavepoint(List *options); -extern void BeginInternalSubTransaction(char *name); +extern void BeginInternalSubTransaction(const char *name); extern void ReleaseCurrentSubTransaction(void); extern void RollbackAndReleaseCurrentSubTransaction(void); extern bool IsSubTransaction(void); diff --git a/src/include/access/xlog_internal.h b/src/include/access/xlog_internal.h index 22a8e63658..7805c3c747 100644 --- a/src/include/access/xlog_internal.h +++ b/src/include/access/xlog_internal.h @@ -321,9 +321,9 @@ extern char *recoveryRestoreCommand; extern bool RestoreArchivedFile(char *path, const char *xlogfname, const char *recovername, off_t expectedSize, bool cleanupEnabled); -extern void ExecuteRecoveryCommand(char *command, char *commandName, +extern void ExecuteRecoveryCommand(const char *command, const char *commandName, bool failOnerror); -extern void KeepFileRestoredFromArchive(char *path, char *xlogfname); +extern void KeepFileRestoredFromArchive(const char *path, const char *xlogfname); extern void XLogArchiveNotify(const char *xlog); extern void XLogArchiveNotifySeg(XLogSegNo segno); extern void XLogArchiveForceDone(const char *xlog); diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h index cb1bc887f8..0fae02295b 100644 --- a/src/include/catalog/heap.h +++ b/src/include/catalog/heap.h @@ -109,7 +109,7 @@ extern Node *cookDefault(ParseState *pstate, Node *raw_default, Oid atttypid, int32 atttypmod, - char *attname); + const char *attname); extern void DeleteRelationTuple(Oid relid); extern void DeleteAttributeTuples(Oid relid); diff --git a/src/include/commands/comment.h b/src/include/commands/comment.h index 85bd801513..0caf0e81ab 100644 --- a/src/include/commands/comment.h +++ b/src/include/commands/comment.h @@ -34,11 +34,11 @@ extern ObjectAddress CommentObject(CommentStmt *stmt); extern void DeleteComments(Oid oid, Oid classoid, int32 subid); -extern void CreateComments(Oid oid, Oid classoid, int32 subid, char *comment); +extern void CreateComments(Oid oid, Oid classoid, int32 subid, const char *comment); extern void DeleteSharedComments(Oid oid, Oid classoid); -extern void CreateSharedComments(Oid oid, Oid classoid, char *comment); +extern void CreateSharedComments(Oid oid, Oid classoid, const char *comment); extern char *GetComment(Oid oid, Oid classoid, int32 subid); diff --git a/src/include/commands/defrem.h b/src/include/commands/defrem.h index f7bb4a54f7..bfead9af3d 100644 --- a/src/include/commands/defrem.h +++ b/src/include/commands/defrem.h @@ -39,12 +39,12 @@ extern char *makeObjectName(const char *name1, const char *name2, extern char *ChooseRelationName(const char *name1, const char *name2, const char *label, Oid namespaceid); extern bool CheckIndexCompatible(Oid oldId, - char *accessMethodName, + const char *accessMethodName, List *attributeList, List *exclusionOpNames); extern Oid GetDefaultOpClass(Oid type_id, Oid am_id); extern Oid ResolveOpClass(List *opclass, Oid attrType, - char *accessMethodName, Oid accessMethodId); + const char *accessMethodName, Oid accessMethodId); /* commands/functioncmds.c */ extern ObjectAddress CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt); diff --git 
a/src/include/commands/typecmds.h b/src/include/commands/typecmds.h index 8f3fc65536..9fbf38629d 100644 --- a/src/include/commands/typecmds.h +++ b/src/include/commands/typecmds.h @@ -34,7 +34,7 @@ extern ObjectAddress AlterDomainDefault(List *names, Node *defaultRaw); extern ObjectAddress AlterDomainNotNull(List *names, bool notNull); extern ObjectAddress AlterDomainAddConstraint(List *names, Node *constr, ObjectAddress *constrAddr); -extern ObjectAddress AlterDomainValidateConstraint(List *names, char *constrName); +extern ObjectAddress AlterDomainValidateConstraint(List *names, const char *constrName); extern ObjectAddress AlterDomainDropConstraint(List *names, const char *constrName, DropBehavior behavior, bool missing_ok); diff --git a/src/include/commands/view.h b/src/include/commands/view.h index cf08ce2ac7..46d762db22 100644 --- a/src/include/commands/view.h +++ b/src/include/commands/view.h @@ -17,7 +17,7 @@ #include "catalog/objectaddress.h" #include "nodes/parsenodes.h" -extern void validateWithCheckOption(char *value); +extern void validateWithCheckOption(const char *value); extern ObjectAddress DefineView(ViewStmt *stmt, const char *queryString, int stmt_location, int stmt_len); diff --git a/src/include/executor/tablefunc.h b/src/include/executor/tablefunc.h index a24a555b75..49e8c7c1b2 100644 --- a/src/include/executor/tablefunc.h +++ b/src/include/executor/tablefunc.h @@ -53,11 +53,11 @@ typedef struct TableFuncRoutine { void (*InitOpaque) (struct TableFuncScanState *state, int natts); void (*SetDocument) (struct TableFuncScanState *state, Datum value); - void (*SetNamespace) (struct TableFuncScanState *state, char *name, - char *uri); - void (*SetRowFilter) (struct TableFuncScanState *state, char *path); + void (*SetNamespace) (struct TableFuncScanState *state, const char *name, + const char *uri); + void (*SetRowFilter) (struct TableFuncScanState *state, const char *path); void (*SetColumnFilter) (struct TableFuncScanState *state, - char *path, int colnum); + const char *path, int colnum); bool (*FetchRow) (struct TableFuncScanState *state); Datum (*GetValue) (struct TableFuncScanState *state, int colnum, Oid typid, int32 typmod, bool *isnull); diff --git a/src/include/parser/parse_relation.h b/src/include/parser/parse_relation.h index 91542d4f15..290f3b78cb 100644 --- a/src/include/parser/parse_relation.h +++ b/src/include/parser/parse_relation.h @@ -54,9 +54,9 @@ extern RangeTblEntry *GetRTEByRangeTablePosn(ParseState *pstate, extern CommonTableExpr *GetCTEForRTE(ParseState *pstate, RangeTblEntry *rte, int rtelevelsup); extern Node *scanRTEForColumn(ParseState *pstate, RangeTblEntry *rte, - char *colname, int location, + const char *colname, int location, int fuzzy_rte_penalty, FuzzyAttrMatchState *fuzzystate); -extern Node *colNameToVar(ParseState *pstate, char *colname, bool localonly, +extern Node *colNameToVar(ParseState *pstate, const char *colname, bool localonly, int location); extern void markVarForSelectPriv(ParseState *pstate, Var *var, RangeTblEntry *rte); @@ -117,7 +117,7 @@ extern void addRTEtoQuery(ParseState *pstate, RangeTblEntry *rte, bool addToRelNameSpace, bool addToVarNameSpace); extern void errorMissingRTE(ParseState *pstate, RangeVar *relation) pg_attribute_noreturn(); extern void errorMissingColumn(ParseState *pstate, - char *relname, char *colname, int location) pg_attribute_noreturn(); + const char *relname, const char *colname, int location) pg_attribute_noreturn(); extern void expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up, int 
location, bool include_dropped, List **colnames, List **colvars); diff --git a/src/include/parser/parse_target.h b/src/include/parser/parse_target.h index 44af46b1aa..bb7b7b606b 100644 --- a/src/include/parser/parse_target.h +++ b/src/include/parser/parse_target.h @@ -28,7 +28,7 @@ extern TargetEntry *transformTargetEntry(ParseState *pstate, char *colname, bool resjunk); extern Expr *transformAssignedExpr(ParseState *pstate, Expr *expr, ParseExprKind exprKind, - char *colname, + const char *colname, int attrno, List *indirection, int location); diff --git a/src/include/postmaster/bgworker.h b/src/include/postmaster/bgworker.h index 6b4e631880..b6c5800cfe 100644 --- a/src/include/postmaster/bgworker.h +++ b/src/include/postmaster/bgworker.h @@ -140,7 +140,7 @@ extern PGDLLIMPORT BackgroundWorker *MyBgworkerEntry; * If dbname is NULL, connection is made to no specific database; * only shared catalogs can be accessed. */ -extern void BackgroundWorkerInitializeConnection(char *dbname, char *username); +extern void BackgroundWorkerInitializeConnection(const char *dbname, const char *username); /* Just like the above, but specifying database and user by OID. */ extern void BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid); diff --git a/src/include/rewrite/rewriteDefine.h b/src/include/rewrite/rewriteDefine.h index 2e25288bb4..b496a0c154 100644 --- a/src/include/rewrite/rewriteDefine.h +++ b/src/include/rewrite/rewriteDefine.h @@ -25,7 +25,7 @@ extern ObjectAddress DefineRule(RuleStmt *stmt, const char *queryString); -extern ObjectAddress DefineQueryRewrite(char *rulename, +extern ObjectAddress DefineQueryRewrite(const char *rulename, Oid event_relid, Node *event_qual, CmdType event_type, diff --git a/src/include/storage/lwlock.h b/src/include/storage/lwlock.h index f4c4aed7f9..596fdadc63 100644 --- a/src/include/storage/lwlock.h +++ b/src/include/storage/lwlock.h @@ -184,7 +184,7 @@ extern LWLockPadded *GetNamedLWLockTranche(const char *tranche_name); * registration in the main shared memory segment wouldn't work for that case. 
*/ extern int LWLockNewTrancheId(void); -extern void LWLockRegisterTranche(int tranche_id, char *tranche_name); +extern void LWLockRegisterTranche(int tranche_id, const char *tranche_name); extern void LWLockInitialize(LWLock *lock, int tranche_id); /* diff --git a/src/include/utils/dynamic_loader.h b/src/include/utils/dynamic_loader.h index 6c9287b611..80ac1e3fe6 100644 --- a/src/include/utils/dynamic_loader.h +++ b/src/include/utils/dynamic_loader.h @@ -17,8 +17,8 @@ #include "fmgr.h" -extern void *pg_dlopen(char *filename); -extern PGFunction pg_dlsym(void *handle, char *funcname); +extern void *pg_dlopen(const char *filename); +extern PGFunction pg_dlsym(void *handle, const char *funcname); extern void pg_dlclose(void *handle); extern char *pg_dlerror(void); diff --git a/src/include/utils/varlena.h b/src/include/utils/varlena.h index cab82ee888..06f3b69893 100644 --- a/src/include/utils/varlena.h +++ b/src/include/utils/varlena.h @@ -16,7 +16,7 @@ #include "nodes/pg_list.h" #include "utils/sortsupport.h" -extern int varstr_cmp(char *arg1, int len1, char *arg2, int len2, Oid collid); +extern int varstr_cmp(const char *arg1, int len1, const char *arg2, int len2, Oid collid); extern void varstr_sortsupport(SortSupport ssup, Oid collid, bool bpchar); extern int varstr_levenshtein(const char *source, int slen, const char *target, int tlen, diff --git a/src/include/utils/xml.h b/src/include/utils/xml.h index e6fa0e2051..385b728f42 100644 --- a/src/include/utils/xml.h +++ b/src/include/utils/xml.h @@ -65,14 +65,14 @@ extern xmltype *xmlelement(XmlExpr *xexpr, Datum *named_argvalue, bool *named_argnull, Datum *argvalue, bool *argnull); extern xmltype *xmlparse(text *data, XmlOptionType xmloption, bool preserve_whitespace); -extern xmltype *xmlpi(char *target, text *arg, bool arg_is_null, bool *result_is_null); +extern xmltype *xmlpi(const char *target, text *arg, bool arg_is_null, bool *result_is_null); extern xmltype *xmlroot(xmltype *data, text *version, int standalone); extern bool xml_is_document(xmltype *arg); extern text *xmltotext_with_xmloption(xmltype *data, XmlOptionType xmloption_arg); extern char *escape_xml(const char *str); -extern char *map_sql_identifier_to_xml_name(char *ident, bool fully_escaped, bool escape_period); -extern char *map_xml_name_to_sql_identifier(char *name); +extern char *map_sql_identifier_to_xml_name(const char *ident, bool fully_escaped, bool escape_period); +extern char *map_xml_name_to_sql_identifier(const char *name); extern char *map_sql_value_to_xml_value(Datum value, Oid type, bool xml_escape_strings); extern int xmlbinary; /* XmlBinaryType, but int for guc enum */ diff --git a/src/interfaces/ecpg/compatlib/informix.c b/src/interfaces/ecpg/compatlib/informix.c index e9bcb4cde2..13058cf7bf 100644 --- a/src/interfaces/ecpg/compatlib/informix.c +++ b/src/interfaces/ecpg/compatlib/informix.c @@ -195,7 +195,7 @@ ecpg_strndup(const char *str, size_t len) } int -deccvasc(char *cp, int len, decimal *np) +deccvasc(const char *cp, int len, decimal *np) { char *str; int ret = 0; @@ -520,7 +520,7 @@ rdatestr(date d, char *str) * */ int -rstrdate(char *str, date * d) +rstrdate(const char *str, date * d) { return rdefmtdate(d, "mm/dd/yyyy", str); } @@ -545,7 +545,7 @@ rjulmdy(date d, short mdy[3]) } int -rdefmtdate(date * d, char *fmt, char *str) +rdefmtdate(date * d, const char *fmt, const char *str) { /* TODO: take care of DBCENTURY environment variable */ /* PGSQL functions allow all centuries */ @@ -571,7 +571,7 @@ rdefmtdate(date * d, char *fmt, char *str) } 
int -rfmtdate(date d, char *fmt, char *str) +rfmtdate(date d, const char *fmt, char *str) { errno = 0; if (PGTYPESdate_fmt_asc(d, fmt, str) == 0) @@ -747,7 +747,7 @@ initValue(long lng_val) /* return the position oft the right-most dot in some string */ static int -getRightMostDot(char *str) +getRightMostDot(const char *str) { size_t len = strlen(str); int i, @@ -765,7 +765,7 @@ getRightMostDot(char *str) /* And finally some misc functions */ int -rfmtlong(long lng_val, char *fmt, char *outbuf) +rfmtlong(long lng_val, const char *fmt, char *outbuf) { size_t fmt_len = strlen(fmt); size_t temp_len; @@ -1047,7 +1047,7 @@ rsetnull(int t, char *ptr) } int -risnull(int t, char *ptr) +risnull(int t, const char *ptr) { return ECPGis_noind_null(t, ptr); } diff --git a/src/interfaces/ecpg/ecpglib/misc.c b/src/interfaces/ecpg/ecpglib/misc.c index a0257c8957..be9cac6e7b 100644 --- a/src/interfaces/ecpg/ecpglib/misc.c +++ b/src/interfaces/ecpg/ecpglib/misc.c @@ -375,7 +375,7 @@ ECPGset_noind_null(enum ECPGttype type, void *ptr) } static bool -_check(unsigned char *ptr, int length) +_check(const unsigned char *ptr, int length) { for (length--; length >= 0; length--) if (ptr[length] != 0xff) @@ -385,36 +385,36 @@ _check(unsigned char *ptr, int length) } bool -ECPGis_noind_null(enum ECPGttype type, void *ptr) +ECPGis_noind_null(enum ECPGttype type, const void *ptr) { switch (type) { case ECPGt_char: case ECPGt_unsigned_char: case ECPGt_string: - if (*((char *) ptr) == '\0') + if (*((const char *) ptr) == '\0') return true; break; case ECPGt_short: case ECPGt_unsigned_short: - if (*((short int *) ptr) == SHRT_MIN) + if (*((const short int *) ptr) == SHRT_MIN) return true; break; case ECPGt_int: case ECPGt_unsigned_int: - if (*((int *) ptr) == INT_MIN) + if (*((const int *) ptr) == INT_MIN) return true; break; case ECPGt_long: case ECPGt_unsigned_long: case ECPGt_date: - if (*((long *) ptr) == LONG_MIN) + if (*((const long *) ptr) == LONG_MIN) return true; break; #ifdef HAVE_LONG_LONG_INT case ECPGt_long_long: case ECPGt_unsigned_long_long: - if (*((long long *) ptr) == LONG_LONG_MIN) + if (*((const long long *) ptr) == LONG_LONG_MIN) return true; break; #endif /* HAVE_LONG_LONG_INT */ @@ -425,15 +425,15 @@ ECPGis_noind_null(enum ECPGttype type, void *ptr) return _check(ptr, sizeof(double)); break; case ECPGt_varchar: - if (*(((struct ECPGgeneric_varchar *) ptr)->arr) == 0x00) + if (*(((const struct ECPGgeneric_varchar *) ptr)->arr) == 0x00) return true; break; case ECPGt_decimal: - if (((decimal *) ptr)->sign == NUMERIC_NULL) + if (((const decimal *) ptr)->sign == NUMERIC_NULL) return true; break; case ECPGt_numeric: - if (((numeric *) ptr)->sign == NUMERIC_NULL) + if (((const numeric *) ptr)->sign == NUMERIC_NULL) return true; break; case ECPGt_interval: diff --git a/src/interfaces/ecpg/include/ecpg_informix.h b/src/interfaces/ecpg/include/ecpg_informix.h index dd6258152a..a5260a5542 100644 --- a/src/interfaces/ecpg/include/ecpg_informix.h +++ b/src/interfaces/ecpg/include/ecpg_informix.h @@ -36,15 +36,15 @@ extern "C" extern int rdatestr(date, char *); extern void rtoday(date *); extern int rjulmdy(date, short *); -extern int rdefmtdate(date *, char *, char *); -extern int rfmtdate(date, char *, char *); +extern int rdefmtdate(date *, const char *, const char *); +extern int rfmtdate(date, const char *, char *); extern int rmdyjul(short *, date *); -extern int rstrdate(char *, date *); +extern int rstrdate(const char *, date *); extern int rdayofweek(date); -extern int rfmtlong(long, char *, char *); 
+extern int rfmtlong(long, const char *, char *); extern int rgetmsg(int, char *, int); -extern int risnull(int, char *); +extern int risnull(int, const char *); extern int rsetnull(int, char *); extern int rtypalign(int, int); extern int rtypmsize(int, int); @@ -62,7 +62,7 @@ extern void ECPG_informix_reset_sqlca(void); int decadd(decimal *, decimal *, decimal *); int deccmp(decimal *, decimal *); void deccopy(decimal *, decimal *); -int deccvasc(char *, int, decimal *); +int deccvasc(const char *, int, decimal *); int deccvdbl(double, decimal *); int deccvint(int, decimal *); int deccvlong(long, decimal *); diff --git a/src/interfaces/ecpg/include/ecpglib.h b/src/interfaces/ecpg/include/ecpglib.h index 536b7506ff..8a601996d2 100644 --- a/src/interfaces/ecpg/include/ecpglib.h +++ b/src/interfaces/ecpg/include/ecpglib.h @@ -80,7 +80,7 @@ bool ECPGset_desc_header(int, const char *, int); bool ECPGset_desc(int, const char *, int,...); void ECPGset_noind_null(enum ECPGttype, void *); -bool ECPGis_noind_null(enum ECPGttype, void *); +bool ECPGis_noind_null(enum ECPGttype, const void *); bool ECPGdescribe(int, int, bool, const char *, const char *,...); void ECPGset_var(int, void *, int); diff --git a/src/interfaces/ecpg/include/pgtypes_date.h b/src/interfaces/ecpg/include/pgtypes_date.h index 3d1a181b2b..caf8a33d12 100644 --- a/src/interfaces/ecpg/include/pgtypes_date.h +++ b/src/interfaces/ecpg/include/pgtypes_date.h @@ -21,7 +21,7 @@ extern void PGTYPESdate_julmdy(date, int *); extern void PGTYPESdate_mdyjul(int *, date *); extern int PGTYPESdate_dayofweek(date); extern void PGTYPESdate_today(date *); -extern int PGTYPESdate_defmt_asc(date *, const char *, char *); +extern int PGTYPESdate_defmt_asc(date *, const char *, const char *); extern int PGTYPESdate_fmt_asc(date, const char *, char *); #ifdef __cplusplus diff --git a/src/interfaces/ecpg/include/pgtypes_timestamp.h b/src/interfaces/ecpg/include/pgtypes_timestamp.h index 283ecca25e..1545be4ee9 100644 --- a/src/interfaces/ecpg/include/pgtypes_timestamp.h +++ b/src/interfaces/ecpg/include/pgtypes_timestamp.h @@ -19,7 +19,7 @@ extern char *PGTYPEStimestamp_to_asc(timestamp); extern int PGTYPEStimestamp_sub(timestamp *, timestamp *, interval *); extern int PGTYPEStimestamp_fmt_asc(timestamp *, char *, int, const char *); extern void PGTYPEStimestamp_current(timestamp *); -extern int PGTYPEStimestamp_defmt_asc(char *, const char *, timestamp *); +extern int PGTYPEStimestamp_defmt_asc(const char *, const char *, timestamp *); extern int PGTYPEStimestamp_add_interval(timestamp * tin, interval * span, timestamp * tout); extern int PGTYPEStimestamp_sub_interval(timestamp * tin, interval * span, timestamp * tout); diff --git a/src/interfaces/ecpg/pgtypeslib/datetime.c b/src/interfaces/ecpg/pgtypeslib/datetime.c index c2f78f5a56..1e692a5f9e 100644 --- a/src/interfaces/ecpg/pgtypeslib/datetime.c +++ b/src/interfaces/ecpg/pgtypeslib/datetime.c @@ -329,7 +329,7 @@ PGTYPESdate_fmt_asc(date dDate, const char *fmtstring, char *outbuf) #define PGTYPES_DATE_MONTH_MAXLENGTH 20 /* probably even less :-) */ int -PGTYPESdate_defmt_asc(date * d, const char *fmt, char *str) +PGTYPESdate_defmt_asc(date * d, const char *fmt, const char *str) { /* * token[2] = { 4,6 } means that token 2 starts at position 4 and ends at diff --git a/src/interfaces/ecpg/pgtypeslib/interval.c b/src/interfaces/ecpg/pgtypeslib/interval.c index 41976a188a..24a2f36d4d 100644 --- a/src/interfaces/ecpg/pgtypeslib/interval.c +++ b/src/interfaces/ecpg/pgtypeslib/interval.c @@ -65,7 +65,7 @@ 
AdjustFractDays(double frac, struct /* pg_ */ tm *tm, fsec_t *fsec, int scale) /* copy&pasted from .../src/backend/utils/adt/datetime.c */ static int -ParseISO8601Number(char *str, char **endptr, int *ipart, double *fpart) +ParseISO8601Number(const char *str, char **endptr, int *ipart, double *fpart) { double val; @@ -90,7 +90,7 @@ ParseISO8601Number(char *str, char **endptr, int *ipart, double *fpart) /* copy&pasted from .../src/backend/utils/adt/datetime.c */ static int -ISO8601IntegerWidth(char *fieldstart) +ISO8601IntegerWidth(const char *fieldstart) { /* We might have had a leading '-' */ if (*fieldstart == '-') diff --git a/src/interfaces/ecpg/pgtypeslib/timestamp.c b/src/interfaces/ecpg/pgtypeslib/timestamp.c index b63880dc55..abccc268dc 100644 --- a/src/interfaces/ecpg/pgtypeslib/timestamp.c +++ b/src/interfaces/ecpg/pgtypeslib/timestamp.c @@ -813,7 +813,7 @@ PGTYPEStimestamp_sub(timestamp * ts1, timestamp * ts2, interval * iv) } int -PGTYPEStimestamp_defmt_asc(char *str, const char *fmt, timestamp * d) +PGTYPEStimestamp_defmt_asc(const char *str, const char *fmt, timestamp * d) { int year, month, diff --git a/src/interfaces/ecpg/preproc/type.c b/src/interfaces/ecpg/preproc/type.c index 256a3c395c..4abbf93d19 100644 --- a/src/interfaces/ecpg/preproc/type.c +++ b/src/interfaces/ecpg/preproc/type.c @@ -74,7 +74,7 @@ ECPGstruct_member_dup(struct ECPGstruct_member *rm) /* The NAME argument is copied. The type argument is preserved as a pointer. */ void -ECPGmake_struct_member(char *name, struct ECPGtype *type, struct ECPGstruct_member **start) +ECPGmake_struct_member(const char *name, struct ECPGtype *type, struct ECPGstruct_member **start) { struct ECPGstruct_member *ptr, *ne = diff --git a/src/interfaces/ecpg/preproc/type.h b/src/interfaces/ecpg/preproc/type.h index 4b93336480..fc70d7d218 100644 --- a/src/interfaces/ecpg/preproc/type.h +++ b/src/interfaces/ecpg/preproc/type.h @@ -33,7 +33,7 @@ struct ECPGtype }; /* Everything is malloced. 
*/ -void ECPGmake_struct_member(char *, struct ECPGtype *, struct ECPGstruct_member **); +void ECPGmake_struct_member(const char *, struct ECPGtype *, struct ECPGstruct_member **); struct ECPGtype *ECPGmake_simple_type(enum ECPGttype, char *, int); struct ECPGtype *ECPGmake_array_type(struct ECPGtype *, char *); struct ECPGtype *ECPGmake_struct_type(struct ECPGstruct_member *, enum ECPGttype, char *, char *); diff --git a/src/interfaces/ecpg/test/compat_informix/rfmtdate.pgc b/src/interfaces/ecpg/test/compat_informix/rfmtdate.pgc index f1a9048889..a147f405ab 100644 --- a/src/interfaces/ecpg/test/compat_informix/rfmtdate.pgc +++ b/src/interfaces/ecpg/test/compat_informix/rfmtdate.pgc @@ -13,7 +13,7 @@ static void check_return(int ret); static void -date_test_strdate(char *input) +date_test_strdate(const char *input) { static int i; date d; @@ -38,7 +38,7 @@ date_test_strdate(char *input) } static void -date_test_defmt(char *fmt, char *input) +date_test_defmt(const char *fmt, const char *input) { static int i; char dbuf[11]; @@ -63,7 +63,7 @@ date_test_defmt(char *fmt, char *input) } static void -date_test_fmt(date d, char *fmt) +date_test_fmt(date d, const char *fmt) { static int i; char buf[200]; diff --git a/src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc b/src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc index a1070e1331..2ecf09c837 100644 --- a/src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc +++ b/src/interfaces/ecpg/test/compat_informix/rfmtlong.pgc @@ -13,7 +13,7 @@ static void check_return(int ret); static void -fmtlong(long lng, char *fmt) +fmtlong(long lng, const char *fmt) { static int i; int r; diff --git a/src/interfaces/ecpg/test/compat_informix/test_informix2.pgc b/src/interfaces/ecpg/test/compat_informix/test_informix2.pgc index 0386093d70..5380f9eb5a 100644 --- a/src/interfaces/ecpg/test/compat_informix/test_informix2.pgc +++ b/src/interfaces/ecpg/test/compat_informix/test_informix2.pgc @@ -7,7 +7,7 @@ EXEC SQL include ../regression; EXEC SQL DEFINE MAXDBLEN 30; /* Check SQLCODE, and produce a "standard error" if it's wrong! 
*/ -static void sql_check(char *fn, char *caller, int ignore) +static void sql_check(const char *fn, const char *caller, int ignore) { char errorstring[255]; diff --git a/src/interfaces/ecpg/test/expected/compat_informix-rfmtdate.c b/src/interfaces/ecpg/test/expected/compat_informix-rfmtdate.c index 87a435e9bd..68be08276d 100644 --- a/src/interfaces/ecpg/test/expected/compat_informix-rfmtdate.c +++ b/src/interfaces/ecpg/test/expected/compat_informix-rfmtdate.c @@ -24,7 +24,7 @@ static void check_return(int ret); static void -date_test_strdate(char *input) +date_test_strdate(const char *input) { static int i; date d; @@ -49,7 +49,7 @@ date_test_strdate(char *input) } static void -date_test_defmt(char *fmt, char *input) +date_test_defmt(const char *fmt, const char *input) { static int i; char dbuf[11]; @@ -74,7 +74,7 @@ date_test_defmt(char *fmt, char *input) } static void -date_test_fmt(date d, char *fmt) +date_test_fmt(date d, const char *fmt) { static int i; char buf[200]; diff --git a/src/interfaces/ecpg/test/expected/compat_informix-rfmtlong.c b/src/interfaces/ecpg/test/expected/compat_informix-rfmtlong.c index 70e015a130..b2e397e38c 100644 --- a/src/interfaces/ecpg/test/expected/compat_informix-rfmtlong.c +++ b/src/interfaces/ecpg/test/expected/compat_informix-rfmtlong.c @@ -24,7 +24,7 @@ static void check_return(int ret); static void -fmtlong(long lng, char *fmt) +fmtlong(long lng, const char *fmt) { static int i; int r; diff --git a/src/interfaces/ecpg/test/expected/compat_informix-test_informix2.c b/src/interfaces/ecpg/test/expected/compat_informix-test_informix2.c index 4e372a5799..eeb9b62ab4 100644 --- a/src/interfaces/ecpg/test/expected/compat_informix-test_informix2.c +++ b/src/interfaces/ecpg/test/expected/compat_informix-test_informix2.c @@ -97,7 +97,7 @@ struct sqlca_t *ECPGget_sqlca(void); /* Check SQLCODE, and produce a "standard error" if it's wrong! 
*/ -static void sql_check(char *fn, char *caller, int ignore) +static void sql_check(const char *fn, const char *caller, int ignore) { char errorstring[255]; diff --git a/src/interfaces/ecpg/test/expected/preproc-init.c b/src/interfaces/ecpg/test/expected/preproc-init.c index ca23d348d6..b0e04731fe 100644 --- a/src/interfaces/ecpg/test/expected/preproc-init.c +++ b/src/interfaces/ecpg/test/expected/preproc-init.c @@ -114,7 +114,7 @@ static int fe(enum e x) return (int)x; } -static void sqlnotice(char *notice, short trans) +static void sqlnotice(const char *notice, short trans) { if (!notice) notice = "-empty-"; diff --git a/src/interfaces/ecpg/test/expected/preproc-whenever.c b/src/interfaces/ecpg/test/expected/preproc-whenever.c index 922ef76b92..332ef85b10 100644 --- a/src/interfaces/ecpg/test/expected/preproc-whenever.c +++ b/src/interfaces/ecpg/test/expected/preproc-whenever.c @@ -24,7 +24,7 @@ #line 5 "whenever.pgc" -static void print(char *msg) +static void print(const char *msg) { fprintf(stderr, "Error in statement '%s':\n", msg); sqlprint(); diff --git a/src/interfaces/ecpg/test/preproc/init.pgc b/src/interfaces/ecpg/test/preproc/init.pgc index 11dc01ade4..b1f71997a2 100644 --- a/src/interfaces/ecpg/test/preproc/init.pgc +++ b/src/interfaces/ecpg/test/preproc/init.pgc @@ -35,7 +35,7 @@ static int fe(enum e x) return (int)x; } -static void sqlnotice(char *notice, short trans) +static void sqlnotice(const char *notice, short trans) { if (!notice) notice = "-empty-"; diff --git a/src/interfaces/ecpg/test/preproc/whenever.pgc b/src/interfaces/ecpg/test/preproc/whenever.pgc index 9b3ae9e9ec..14cf571e6a 100644 --- a/src/interfaces/ecpg/test/preproc/whenever.pgc +++ b/src/interfaces/ecpg/test/preproc/whenever.pgc @@ -4,7 +4,7 @@ exec sql include ../regression; exec sql whenever sqlerror sqlprint; -static void print(char *msg) +static void print(const char *msg) { fprintf(stderr, "Error in statement '%s':\n", msg); sqlprint(); diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c index ada219032e..2c175a2a24 100644 --- a/src/interfaces/libpq/fe-connect.c +++ b/src/interfaces/libpq/fe-connect.c @@ -398,9 +398,9 @@ static int parseServiceFile(const char *serviceFile, PQconninfoOption *options, PQExpBuffer errorMessage, bool *group_found); -static char *pwdfMatchesString(char *buf, char *token); -static char *passwordFromFile(char *hostname, char *port, char *dbname, - char *username, char *pgpassfile); +static char *pwdfMatchesString(char *buf, const char *token); +static char *passwordFromFile(const char *hostname, const char *port, const char *dbname, + const char *username, const char *pgpassfile); static void pgpassfileWarning(PGconn *conn); static void default_threadlock(int acquire); @@ -6329,10 +6329,10 @@ defaultNoticeProcessor(void *arg, const char *message) * token doesn't match */ static char * -pwdfMatchesString(char *buf, char *token) +pwdfMatchesString(char *buf, const char *token) { - char *tbuf, - *ttok; + char *tbuf; + const char *ttok; bool bslash = false; if (buf == NULL || token == NULL) @@ -6366,8 +6366,8 @@ pwdfMatchesString(char *buf, char *token) /* Get a password from the password file. Return value is malloc'd. 
*/ static char * -passwordFromFile(char *hostname, char *port, char *dbname, - char *username, char *pgpassfile) +passwordFromFile(const char *hostname, const char *port, const char *dbname, + const char *username, const char *pgpassfile) { FILE *fp; struct stat stat_buf; diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c index ca0d1bccf8..a57393fbdd 100644 --- a/src/pl/plperl/plperl.c +++ b/src/pl/plperl/plperl.c @@ -293,7 +293,7 @@ static void plperl_return_next_internal(SV *sv); static char *hek2cstr(HE *he); static SV **hv_store_string(HV *hv, const char *key, SV *val); static SV **hv_fetch_string(HV *hv, const char *key); -static void plperl_create_sub(plperl_proc_desc *desc, char *s, Oid fn_oid); +static void plperl_create_sub(plperl_proc_desc *desc, const char *s, Oid fn_oid); static SV *plperl_call_perl_func(plperl_proc_desc *desc, FunctionCallInfo fcinfo); static void plperl_compile_callback(void *arg); @@ -2083,7 +2083,7 @@ plperlu_validator(PG_FUNCTION_ARGS) * supplied in s, and returns a reference to it */ static void -plperl_create_sub(plperl_proc_desc *prodesc, char *s, Oid fn_oid) +plperl_create_sub(plperl_proc_desc *prodesc, const char *s, Oid fn_oid) { dTHX; dSP; diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c index 0156b00bfb..e7ea3ae138 100644 --- a/src/test/regress/pg_regress.c +++ b/src/test/regress/pg_regress.c @@ -438,7 +438,7 @@ string_matches_pattern(const char *str, const char *pattern) * NOTE: Assumes there is enough room in the target buffer! */ void -replace_string(char *string, char *replace, char *replacement) +replace_string(char *string, const char *replace, const char *replacement) { char *ptr; @@ -460,7 +460,7 @@ replace_string(char *string, char *replace, char *replacement) * the given suffix. */ static void -convert_sourcefiles_in(char *source_subdir, char *dest_dir, char *dest_subdir, char *suffix) +convert_sourcefiles_in(const char *source_subdir, const char *dest_dir, const char *dest_subdir, const char *suffix) { char testtablespace[MAXPGPATH]; char indir[MAXPGPATH]; diff --git a/src/test/regress/pg_regress.h b/src/test/regress/pg_regress.h index 4abfc628e5..0d9c4bfac3 100644 --- a/src/test/regress/pg_regress.h +++ b/src/test/regress/pg_regress.h @@ -49,5 +49,5 @@ int regression_main(int argc, char *argv[], init_function ifunc, test_function tfunc); void add_stringlist_item(_stringlist **listhead, const char *str); PID_TYPE spawn_process(const char *cmdline); -void replace_string(char *string, char *replace, char *replacement); +void replace_string(char *string, const char *replace, const char *replacement); bool file_exists(const char *file); From 0c98d0dd5c85ce0c8485ae1a8351a26b83c4338b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 10 Nov 2017 14:21:32 -0500 Subject: [PATCH 0523/1087] Fix some null pointer dereferences in LDAP auth code An LDAP URL without a host name such as "ldap://" or without a base DN such as "ldap://localhost" would cause a crash when reading pg_hba.conf. If no binddn is configured, an error message might end up trying to print a null pointer, which could crash on some platforms. 
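To make the failure modes concrete, here is a minimal editorial sketch of the defensive pattern the hba.c hunk below adopts; it is not part of this commit and not PostgreSQL code. ldap_url_parse(), ldap_free_urldesc(), and the LDAPURLDesc fields are the real OpenLDAP API; the helper itself is hypothetical:

    #include <ldap.h>
    #include <string.h>

    /*
     * Hypothetical helper: copy the host and base DN out of an LDAP URL.
     * For URLs such as "ldap://" (no host) or "ldap://localhost" (no
     * base DN), ldap_url_parse() can succeed while leaving lud_host
     * and/or lud_dn NULL, so both must be checked before being handed
     * to strdup() or a "%s" format directive.
     */
    static int
    copy_ldap_url_parts(const char *url, char **host, char **basedn)
    {
        LDAPURLDesc *urldata;

        if (ldap_url_parse(url, &urldata) != 0)
            return -1;
        *host = urldata->lud_host ? strdup(urldata->lud_host) : NULL;
        *basedn = urldata->lud_dn ? strdup(urldata->lud_dn) : NULL;
        ldap_free_urldesc(urldata);
        return 0;
    }

The auth.c hunk applies the same rule when logging, printing the optional binddn as (ldapbinddn ? ldapbinddn : "") rather than passing a possibly-NULL pointer to %s.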
Author: Thomas Munro Reviewed-by: Michael Paquier --- src/backend/libpq/auth.c | 3 ++- src/backend/libpq/hba.c | 6 ++++-- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 6505b1f2b9..6c915a7289 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -2520,7 +2520,8 @@ CheckLDAPAuth(Port *port) { ereport(LOG, (errmsg("could not perform initial LDAP bind for ldapbinddn \"%s\" on server \"%s\": %s", - port->hba->ldapbinddn, port->hba->ldapserver, + port->hba->ldapbinddn ? port->hba->ldapbinddn : "", + port->hba->ldapserver, ldap_err2string(r)), errdetail_for_ldap(ldap))); ldap_unbind(ldap); diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c index 1e97c9db10..ca78a7e0ba 100644 --- a/src/backend/libpq/hba.c +++ b/src/backend/libpq/hba.c @@ -1739,9 +1739,11 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline, return false; } - hbaline->ldapserver = pstrdup(urldata->lud_host); + if (urldata->lud_host) + hbaline->ldapserver = pstrdup(urldata->lud_host); hbaline->ldapport = urldata->lud_port; - hbaline->ldapbasedn = pstrdup(urldata->lud_dn); + if (urldata->lud_dn) + hbaline->ldapbasedn = pstrdup(urldata->lud_dn); if (urldata->lud_attrs) hbaline->ldapsearchattribute = pstrdup(urldata->lud_attrs[0]); /* only use first one */ From 5edc63bda68a77c4d38f0cbeae1c4271f9ef4100 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 10 Nov 2017 16:50:50 -0500 Subject: [PATCH 0524/1087] Account for the effect of lossy pages when costing bitmap scans. Dilip Kumar, reviewed by Alexander Kumenkov, Amul Sul, and me. Some final adjustments by me. Discussion: http://postgr.es/m/CAFiTN-sYtqUOXQ4SpuhTv0Z9gD0si3YxZGv_PQAAMX8qbOotcg@mail.gmail.com --- src/backend/nodes/tidbitmap.c | 37 +++++++++++------ src/backend/optimizer/path/costsize.c | 59 ++++++++++++++++++++++----- src/include/nodes/tidbitmap.h | 1 + 3 files changed, 75 insertions(+), 22 deletions(-) diff --git a/src/backend/nodes/tidbitmap.c b/src/backend/nodes/tidbitmap.c index c47d5849ef..acfe6b263c 100644 --- a/src/backend/nodes/tidbitmap.c +++ b/src/backend/nodes/tidbitmap.c @@ -265,7 +265,6 @@ TIDBitmap * tbm_create(long maxbytes, dsa_area *dsa) { TIDBitmap *tbm; - long nbuckets; /* Create the TIDBitmap struct and zero all its fields */ tbm = makeNode(TIDBitmap); @@ -273,17 +272,7 @@ tbm_create(long maxbytes, dsa_area *dsa) tbm->mcxt = CurrentMemoryContext; tbm->status = TBM_EMPTY; - /* - * Estimate number of hashtable entries we can have within maxbytes. This - * estimates the hash cost as sizeof(PagetableEntry), which is good enough - * for our purpose. Also count an extra Pointer per entry for the arrays - * created during iteration readout. - */ - nbuckets = maxbytes / - (sizeof(PagetableEntry) + sizeof(Pointer) + sizeof(Pointer)); - nbuckets = Min(nbuckets, INT_MAX - 1); /* safety limit */ - nbuckets = Max(nbuckets, 16); /* sanity limit */ - tbm->maxentries = (int) nbuckets; + tbm->maxentries = (int) tbm_calculate_entries(maxbytes); tbm->lossify_start = 0; tbm->dsa = dsa; tbm->dsapagetable = InvalidDsaPointer; @@ -1546,3 +1535,27 @@ pagetable_free(pagetable_hash *pagetable, void *pointer) tbm->dsapagetableold = InvalidDsaPointer; } } + +/* + * tbm_calculate_entries + * + * Estimate number of hashtable entries we can have within maxbytes. + */ +long +tbm_calculate_entries(double maxbytes) +{ + long nbuckets; + + /* + * Estimate number of hashtable entries we can have within maxbytes. 
This + * estimates the hash cost as sizeof(PagetableEntry), which is good enough + * for our purpose. Also count an extra Pointer per entry for the arrays + * created during iteration readout. + */ + nbuckets = maxbytes / + (sizeof(PagetableEntry) + sizeof(Pointer) + sizeof(Pointer)); + nbuckets = Min(nbuckets, INT_MAX - 1); /* safety limit */ + nbuckets = Max(nbuckets, 16); /* sanity limit */ + + return nbuckets; +} diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 98fb16e85a..2d2df60886 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -5171,6 +5171,8 @@ compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel, Path *bitmapqual, double T; double pages_fetched; double tuples_fetched; + double heap_pages; + long maxentries; /* * Fetch total cost of obtaining the bitmap, as well as its total @@ -5185,6 +5187,24 @@ compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel, Path *bitmapqual, T = (baserel->pages > 1) ? (double) baserel->pages : 1.0; + /* + * For a single scan, the number of heap pages that need to be fetched is + * the same as the Mackert and Lohman formula for the case T <= b (ie, no + * re-reads needed). + */ + pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched); + + /* + * Calculate the number of pages fetched from the heap. Then based on + * current work_mem estimate get the estimated maxentries in the bitmap. + * (Note that we always do this calculation based on the number of pages + * that would be fetched in a single iteration, even if loop_count > 1. + * That's correct, because only that number of entries will be stored in + * the bitmap at one time.) + */ + heap_pages = Min(pages_fetched, baserel->pages); + maxentries = tbm_calculate_entries(work_mem * 1024L); + if (loop_count > 1) { /* @@ -5199,22 +5219,41 @@ compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel, Path *bitmapqual, root); pages_fetched /= loop_count; } - else - { - /* - * For a single scan, the number of heap pages that need to be fetched - * is the same as the Mackert and Lohman formula for the case T <= b - * (ie, no re-reads needed). - */ - pages_fetched = - (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched); - } if (pages_fetched >= T) pages_fetched = T; else pages_fetched = ceil(pages_fetched); + if (maxentries < heap_pages) + { + double exact_pages; + double lossy_pages; + + /* + * Crude approximation of the number of lossy pages. Because of the + * way tbm_lossify() is coded, the number of lossy pages increases + * very sharply as soon as we run short of memory; this formula has + * that property and seems to perform adequately in testing, but it's + * possible we could do better somehow. + */ + lossy_pages = Max(0, heap_pages - maxentries / 2); + exact_pages = heap_pages - lossy_pages; + + /* + * If there are lossy pages then recompute the number of tuples + * processed by the bitmap heap node. We assume here that the chance + * of a given tuple coming from an exact page is the same as the + * chance that a given page is exact. This might not be true, but + * it's not clear how we can do any better. 
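+ * (Worked example, an editorial illustration rather than part of this commit: with heap_pages = 1000 and maxentries = 600, the formula above gives lossy_pages = Max(0, 1000 - 600 / 2) = 700 and exact_pages = 300; at indexSelectivity = 0.01 over 1,000,000 tuples the recomputed estimate is 0.01 * (300/1000) * 1000000 + (700/1000) * 1000000 = 703,000 tuples, since every tuple on a lossy page must be fetched and rechecked.)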
+ */ + if (lossy_pages > 0) + tuples_fetched = + clamp_row_est(indexSelectivity * + (exact_pages / heap_pages) * baserel->tuples + + (lossy_pages / heap_pages) * baserel->tuples); + } + if (cost) *cost = indexTotalCost; if (tuple) diff --git a/src/include/nodes/tidbitmap.h b/src/include/nodes/tidbitmap.h index f9a1902da8..d3ad0a5566 100644 --- a/src/include/nodes/tidbitmap.h +++ b/src/include/nodes/tidbitmap.h @@ -70,5 +70,6 @@ extern void tbm_end_iterate(TBMIterator *iterator); extern void tbm_end_shared_iterate(TBMSharedIterator *iterator); extern TBMSharedIterator *tbm_attach_shared_iterate(dsa_area *dsa, dsa_pointer dp); +extern long tbm_calculate_entries(double maxbytes); #endif /* TIDBITMAP_H */ From 2918fcedbf2b2adab688a7849ecce4556ef912ac Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sat, 11 Nov 2017 11:10:53 -0800 Subject: [PATCH 0525/1087] Ignore XML declaration in xpath_internal(), for UTF8 databases. When a value contained an XML declaration naming some other encoding, this function interpreted UTF8 bytes as the named encoding, yielding mojibake. xml_parse() already has similar logic. This would be necessary but not sufficient for non-UTF8 databases, so preserve behavior there until the xpath facility can support such databases comprehensively. Back-patch to 9.3 (all supported versions). Pavel Stehule and Noah Misch Discussion: https://postgr.es/m/CAFj8pRC-dM=tT=QkGi+Achkm+gwPmjyOayGuUfXVumCxkDgYWg@mail.gmail.com --- src/backend/utils/adt/xml.c | 14 +++++++++++- src/test/regress/expected/xml.out | 31 +++++++++++++++++++++++++ src/test/regress/expected/xml_1.out | 35 +++++++++++++++++++++++++++++ src/test/regress/expected/xml_2.out | 31 +++++++++++++++++++++++++ src/test/regress/sql/xml.sql | 32 ++++++++++++++++++++++++++ 5 files changed, 142 insertions(+), 1 deletion(-) diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c index 56e262819e..fa392cd0e5 100644 --- a/src/backend/utils/adt/xml.c +++ b/src/backend/utils/adt/xml.c @@ -3845,6 +3845,7 @@ xpath_internal(text *xpath_expr_text, xmltype *data, ArrayType *namespaces, int32 xpath_len; xmlChar *string; xmlChar *xpath_expr; + size_t xmldecl_len = 0; int i; int ndim; Datum *ns_names_uris; @@ -3900,6 +3901,16 @@ xpath_internal(text *xpath_expr_text, xmltype *data, ArrayType *namespaces, string = pg_xmlCharStrndup(datastr, len); xpath_expr = pg_xmlCharStrndup(VARDATA_ANY(xpath_expr_text), xpath_len); + /* + * In a UTF8 database, skip any xml declaration, which might assert + * another encoding. Ignore parse_xml_decl() failure, letting + * xmlCtxtReadMemory() report parse errors. Documentation disclaims + * xpath() support for non-ASCII data in non-UTF8 databases, so leave + * those scenarios bug-compatible with historical behavior. 
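+	 * (Example of the previous failure mode: a value stored in a UTF8
+	 * database whose text began with <?xml version="1.0"
+	 * encoding="LATIN1"?> had its UTF8 bytes reinterpreted as LATIN1,
+	 * yielding mojibake for any non-ASCII characters.)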
+ */ + if (GetDatabaseEncoding() == PG_UTF8) + parse_xml_decl(string, &xmldecl_len, NULL, NULL, NULL); + xmlerrcxt = pg_xml_init(PG_XML_STRICTNESS_ALL); PG_TRY(); @@ -3914,7 +3925,8 @@ xpath_internal(text *xpath_expr_text, xmltype *data, ArrayType *namespaces, if (ctxt == NULL || xmlerrcxt->err_occurred) xml_ereport(xmlerrcxt, ERROR, ERRCODE_OUT_OF_MEMORY, "could not allocate parser context"); - doc = xmlCtxtReadMemory(ctxt, (char *) string, len, NULL, NULL, 0); + doc = xmlCtxtReadMemory(ctxt, (char *) string + xmldecl_len, + len - xmldecl_len, NULL, NULL, 0); if (doc == NULL || xmlerrcxt->err_occurred) xml_ereport(xmlerrcxt, ERROR, ERRCODE_INVALID_XML_DOCUMENT, "could not parse XML document"); diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out index bcc585d427..f7a8c38bde 100644 --- a/src/test/regress/expected/xml.out +++ b/src/test/regress/expected/xml.out @@ -670,6 +670,37 @@ SELECT xpath('/nosuchtag', ''); {} (1 row) +-- Round-trip non-ASCII data through xpath(). +DO $$ +DECLARE + xml_declaration text := ''; + degree_symbol text; + res xml[]; +BEGIN + -- Per the documentation, xpath() doesn't work on non-ASCII data when + -- the server encoding is not UTF8. The EXCEPTION block below, + -- currently dead code, will be relevant if we remove this limitation. + IF current_setting('server_encoding') <> 'UTF8' THEN + RAISE LOG 'skip: encoding % unsupported for xml', + current_setting('server_encoding'); + RETURN; + END IF; + + degree_symbol := convert_from('\xc2b0', 'UTF8'); + res := xpath('text()', (xml_declaration || + '' || degree_symbol || '')::xml); + IF degree_symbol <> res[1]::text THEN + RAISE 'expected % (%), got % (%)', + degree_symbol, convert_to(degree_symbol, 'UTF8'), + res[1], convert_to(res[1]::text, 'UTF8'); + END IF; +EXCEPTION + -- character with byte sequence 0xc2 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN8" + WHEN untranslatable_character THEN RAISE LOG 'skip: %', SQLERRM; + -- default conversion function for encoding "UTF8" to "MULE_INTERNAL" does not exist + WHEN undefined_function THEN RAISE LOG 'skip: %', SQLERRM; +END +$$; -- Test xmlexists and xpath_exists SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF 'Bidford-on-AvonCwmbranBristol'); xmlexists diff --git a/src/test/regress/expected/xml_1.out b/src/test/regress/expected/xml_1.out index d3bd8c91d7..1a9efa2ea0 100644 --- a/src/test/regress/expected/xml_1.out +++ b/src/test/regress/expected/xml_1.out @@ -576,6 +576,41 @@ LINE 1: SELECT xpath('/nosuchtag', ''); ^ DETAIL: This functionality requires the server to be built with libxml support. HINT: You need to rebuild PostgreSQL using --with-libxml. +-- Round-trip non-ASCII data through xpath(). +DO $$ +DECLARE + xml_declaration text := ''; + degree_symbol text; + res xml[]; +BEGIN + -- Per the documentation, xpath() doesn't work on non-ASCII data when + -- the server encoding is not UTF8. The EXCEPTION block below, + -- currently dead code, will be relevant if we remove this limitation. 
+ IF current_setting('server_encoding') <> 'UTF8' THEN + RAISE LOG 'skip: encoding % unsupported for xml', + current_setting('server_encoding'); + RETURN; + END IF; + + degree_symbol := convert_from('\xc2b0', 'UTF8'); + res := xpath('text()', (xml_declaration || + '' || degree_symbol || '')::xml); + IF degree_symbol <> res[1]::text THEN + RAISE 'expected % (%), got % (%)', + degree_symbol, convert_to(degree_symbol, 'UTF8'), + res[1], convert_to(res[1]::text, 'UTF8'); + END IF; +EXCEPTION + -- character with byte sequence 0xc2 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN8" + WHEN untranslatable_character THEN RAISE LOG 'skip: %', SQLERRM; + -- default conversion function for encoding "UTF8" to "MULE_INTERNAL" does not exist + WHEN undefined_function THEN RAISE LOG 'skip: %', SQLERRM; +END +$$; +ERROR: unsupported XML feature +DETAIL: This functionality requires the server to be built with libxml support. +HINT: You need to rebuild PostgreSQL using --with-libxml. +CONTEXT: PL/pgSQL function inline_code_block line 17 at assignment -- Test xmlexists and xpath_exists SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF 'Bidford-on-AvonCwmbranBristol'); ERROR: unsupported XML feature diff --git a/src/test/regress/expected/xml_2.out b/src/test/regress/expected/xml_2.out index ff77132803..3868da1a0d 100644 --- a/src/test/regress/expected/xml_2.out +++ b/src/test/regress/expected/xml_2.out @@ -650,6 +650,37 @@ SELECT xpath('/nosuchtag', ''); {} (1 row) +-- Round-trip non-ASCII data through xpath(). +DO $$ +DECLARE + xml_declaration text := ''; + degree_symbol text; + res xml[]; +BEGIN + -- Per the documentation, xpath() doesn't work on non-ASCII data when + -- the server encoding is not UTF8. The EXCEPTION block below, + -- currently dead code, will be relevant if we remove this limitation. + IF current_setting('server_encoding') <> 'UTF8' THEN + RAISE LOG 'skip: encoding % unsupported for xml', + current_setting('server_encoding'); + RETURN; + END IF; + + degree_symbol := convert_from('\xc2b0', 'UTF8'); + res := xpath('text()', (xml_declaration || + '' || degree_symbol || '')::xml); + IF degree_symbol <> res[1]::text THEN + RAISE 'expected % (%), got % (%)', + degree_symbol, convert_to(degree_symbol, 'UTF8'), + res[1], convert_to(res[1]::text, 'UTF8'); + END IF; +EXCEPTION + -- character with byte sequence 0xc2 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN8" + WHEN untranslatable_character THEN RAISE LOG 'skip: %', SQLERRM; + -- default conversion function for encoding "UTF8" to "MULE_INTERNAL" does not exist + WHEN undefined_function THEN RAISE LOG 'skip: %', SQLERRM; +END +$$; -- Test xmlexists and xpath_exists SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF 'Bidford-on-AvonCwmbranBristol'); xmlexists diff --git a/src/test/regress/sql/xml.sql b/src/test/regress/sql/xml.sql index eb4687fb09..fdb51c9ede 100644 --- a/src/test/regress/sql/xml.sql +++ b/src/test/regress/sql/xml.sql @@ -189,6 +189,38 @@ SELECT xpath('count(//*)=3', ''); SELECT xpath('name(/*)', ''); SELECT xpath('/nosuchtag', ''); +-- Round-trip non-ASCII data through xpath(). +DO $$ +DECLARE + xml_declaration text := ''; + degree_symbol text; + res xml[]; +BEGIN + -- Per the documentation, xpath() doesn't work on non-ASCII data when + -- the server encoding is not UTF8. The EXCEPTION block below, + -- currently dead code, will be relevant if we remove this limitation. 
+ IF current_setting('server_encoding') <> 'UTF8' THEN + RAISE LOG 'skip: encoding % unsupported for xml', + current_setting('server_encoding'); + RETURN; + END IF; + + degree_symbol := convert_from('\xc2b0', 'UTF8'); + res := xpath('text()', (xml_declaration || + '' || degree_symbol || '')::xml); + IF degree_symbol <> res[1]::text THEN + RAISE 'expected % (%), got % (%)', + degree_symbol, convert_to(degree_symbol, 'UTF8'), + res[1], convert_to(res[1]::text, 'UTF8'); + END IF; +EXCEPTION + -- character with byte sequence 0xc2 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN8" + WHEN untranslatable_character THEN RAISE LOG 'skip: %', SQLERRM; + -- default conversion function for encoding "UTF8" to "MULE_INTERNAL" does not exist + WHEN undefined_function THEN RAISE LOG 'skip: %', SQLERRM; +END +$$; + -- Test xmlexists and xpath_exists SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF 'Bidford-on-AvonCwmbranBristol'); SELECT xmlexists('//town[text() = ''Cwmbran'']' PASSING BY REF 'Bidford-on-AvonCwmbranBristol'); From 4b865aee2582292a42a8e58247a41b46f5aa8a82 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sat, 11 Nov 2017 13:07:46 -0800 Subject: [PATCH 0526/1087] Fix previous commit's test, for non-UTF8 databases with non-XML builds. To ensure stable output, catch one more configuration-specific error. Back-patch to 9.3, like the commit that added the test. --- src/test/regress/expected/xml.out | 16 ++++++++++------ src/test/regress/expected/xml_1.out | 20 ++++++++++---------- src/test/regress/expected/xml_2.out | 16 ++++++++++------ src/test/regress/sql/xml.sql | 16 ++++++++++------ 4 files changed, 40 insertions(+), 28 deletions(-) diff --git a/src/test/regress/expected/xml.out b/src/test/regress/expected/xml.out index f7a8c38bde..7fa1309108 100644 --- a/src/test/regress/expected/xml.out +++ b/src/test/regress/expected/xml.out @@ -677,11 +677,12 @@ DECLARE degree_symbol text; res xml[]; BEGIN - -- Per the documentation, xpath() doesn't work on non-ASCII data when - -- the server encoding is not UTF8. The EXCEPTION block below, - -- currently dead code, will be relevant if we remove this limitation. + -- Per the documentation, except when the server encoding is UTF8, xpath() + -- may not work on non-ASCII data. The untranslatable_character and + -- undefined_function traps below, currently dead code, will become relevant + -- if we remove this limitation. 
IF current_setting('server_encoding') <> 'UTF8' THEN - RAISE LOG 'skip: encoding % unsupported for xml', + RAISE LOG 'skip: encoding % unsupported for xpath', current_setting('server_encoding'); RETURN; END IF; @@ -696,9 +697,12 @@ BEGIN END IF; EXCEPTION -- character with byte sequence 0xc2 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN8" - WHEN untranslatable_character THEN RAISE LOG 'skip: %', SQLERRM; + WHEN untranslatable_character -- default conversion function for encoding "UTF8" to "MULE_INTERNAL" does not exist - WHEN undefined_function THEN RAISE LOG 'skip: %', SQLERRM; + OR undefined_function + -- unsupported XML feature + OR feature_not_supported THEN + RAISE LOG 'skip: %', SQLERRM; END $$; -- Test xmlexists and xpath_exists diff --git a/src/test/regress/expected/xml_1.out b/src/test/regress/expected/xml_1.out index 1a9efa2ea0..970ab26fce 100644 --- a/src/test/regress/expected/xml_1.out +++ b/src/test/regress/expected/xml_1.out @@ -583,11 +583,12 @@ DECLARE degree_symbol text; res xml[]; BEGIN - -- Per the documentation, xpath() doesn't work on non-ASCII data when - -- the server encoding is not UTF8. The EXCEPTION block below, - -- currently dead code, will be relevant if we remove this limitation. + -- Per the documentation, except when the server encoding is UTF8, xpath() + -- may not work on non-ASCII data. The untranslatable_character and + -- undefined_function traps below, currently dead code, will become relevant + -- if we remove this limitation. IF current_setting('server_encoding') <> 'UTF8' THEN - RAISE LOG 'skip: encoding % unsupported for xml', + RAISE LOG 'skip: encoding % unsupported for xpath', current_setting('server_encoding'); RETURN; END IF; @@ -602,15 +603,14 @@ BEGIN END IF; EXCEPTION -- character with byte sequence 0xc2 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN8" - WHEN untranslatable_character THEN RAISE LOG 'skip: %', SQLERRM; + WHEN untranslatable_character -- default conversion function for encoding "UTF8" to "MULE_INTERNAL" does not exist - WHEN undefined_function THEN RAISE LOG 'skip: %', SQLERRM; + OR undefined_function + -- unsupported XML feature + OR feature_not_supported THEN + RAISE LOG 'skip: %', SQLERRM; END $$; -ERROR: unsupported XML feature -DETAIL: This functionality requires the server to be built with libxml support. -HINT: You need to rebuild PostgreSQL using --with-libxml. -CONTEXT: PL/pgSQL function inline_code_block line 17 at assignment -- Test xmlexists and xpath_exists SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF 'Bidford-on-AvonCwmbranBristol'); ERROR: unsupported XML feature diff --git a/src/test/regress/expected/xml_2.out b/src/test/regress/expected/xml_2.out index 3868da1a0d..112ebe47cd 100644 --- a/src/test/regress/expected/xml_2.out +++ b/src/test/regress/expected/xml_2.out @@ -657,11 +657,12 @@ DECLARE degree_symbol text; res xml[]; BEGIN - -- Per the documentation, xpath() doesn't work on non-ASCII data when - -- the server encoding is not UTF8. The EXCEPTION block below, - -- currently dead code, will be relevant if we remove this limitation. + -- Per the documentation, except when the server encoding is UTF8, xpath() + -- may not work on non-ASCII data. The untranslatable_character and + -- undefined_function traps below, currently dead code, will become relevant + -- if we remove this limitation. 
IF current_setting('server_encoding') <> 'UTF8' THEN - RAISE LOG 'skip: encoding % unsupported for xml', + RAISE LOG 'skip: encoding % unsupported for xpath', current_setting('server_encoding'); RETURN; END IF; @@ -676,9 +677,12 @@ BEGIN END IF; EXCEPTION -- character with byte sequence 0xc2 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN8" - WHEN untranslatable_character THEN RAISE LOG 'skip: %', SQLERRM; + WHEN untranslatable_character -- default conversion function for encoding "UTF8" to "MULE_INTERNAL" does not exist - WHEN undefined_function THEN RAISE LOG 'skip: %', SQLERRM; + OR undefined_function + -- unsupported XML feature + OR feature_not_supported THEN + RAISE LOG 'skip: %', SQLERRM; END $$; -- Test xmlexists and xpath_exists diff --git a/src/test/regress/sql/xml.sql b/src/test/regress/sql/xml.sql index fdb51c9ede..cb96e18005 100644 --- a/src/test/regress/sql/xml.sql +++ b/src/test/regress/sql/xml.sql @@ -196,11 +196,12 @@ DECLARE degree_symbol text; res xml[]; BEGIN - -- Per the documentation, xpath() doesn't work on non-ASCII data when - -- the server encoding is not UTF8. The EXCEPTION block below, - -- currently dead code, will be relevant if we remove this limitation. + -- Per the documentation, except when the server encoding is UTF8, xpath() + -- may not work on non-ASCII data. The untranslatable_character and + -- undefined_function traps below, currently dead code, will become relevant + -- if we remove this limitation. IF current_setting('server_encoding') <> 'UTF8' THEN - RAISE LOG 'skip: encoding % unsupported for xml', + RAISE LOG 'skip: encoding % unsupported for xpath', current_setting('server_encoding'); RETURN; END IF; @@ -215,9 +216,12 @@ BEGIN END IF; EXCEPTION -- character with byte sequence 0xc2 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN8" - WHEN untranslatable_character THEN RAISE LOG 'skip: %', SQLERRM; + WHEN untranslatable_character -- default conversion function for encoding "UTF8" to "MULE_INTERNAL" does not exist - WHEN undefined_function THEN RAISE LOG 'skip: %', SQLERRM; + OR undefined_function + -- unsupported XML feature + OR feature_not_supported THEN + RAISE LOG 'skip: %', SQLERRM; END $$; From 34baf8a00b018caf7269134cf9b461266e66d9a7 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sat, 11 Nov 2017 14:33:02 -0800 Subject: [PATCH 0527/1087] Make connect/test1 independent of localhost IPv6. Since commit 868898739a8da9ab74c105b8349b7b5c711f265a, it has assumed "localhost" resolves to both ::1 and 127.0.0.1. We gain nothing from that assumption, and it does not hold in a default installation of Red Hat Enterprise Linux 5. Back-patch to 9.3 (all supported versions). 
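As a concrete sketch, the connection target in the test now names the
IPv4 loopback explicitly (excerpted from the diff below; wrapped here
for width):

    /* wrong port */
    exec sql connect to tcp:postgresql://127.0.0.1:20/ecpg2_regression
        user regress_ecpg_user1 identified by connectpw;

With a single address to try, libpq can emit only one "Connection
refused" message, which keeps the expected stderr output stable across
hosts with and without ::1.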
--- src/interfaces/ecpg/test/connect/test1.pgc | 2 +- src/interfaces/ecpg/test/expected/connect-test1.c | 2 +- src/interfaces/ecpg/test/expected/connect-test1.stderr | 7 ++----- 3 files changed, 4 insertions(+), 7 deletions(-) diff --git a/src/interfaces/ecpg/test/connect/test1.pgc b/src/interfaces/ecpg/test/connect/test1.pgc index 86633a7af6..101b806d5b 100644 --- a/src/interfaces/ecpg/test/connect/test1.pgc +++ b/src/interfaces/ecpg/test/connect/test1.pgc @@ -54,7 +54,7 @@ exec sql end declare section; exec sql disconnect; /* wrong port */ - exec sql connect to tcp:postgresql://localhost:20/ecpg2_regression user regress_ecpg_user1 identified by connectpw; + exec sql connect to tcp:postgresql://127.0.0.1:20/ecpg2_regression user regress_ecpg_user1 identified by connectpw; /* no disconnect necessary */ /* wrong password */ diff --git a/src/interfaces/ecpg/test/expected/connect-test1.c b/src/interfaces/ecpg/test/expected/connect-test1.c index 18e5968d3a..98b7e717c7 100644 --- a/src/interfaces/ecpg/test/expected/connect-test1.c +++ b/src/interfaces/ecpg/test/expected/connect-test1.c @@ -109,7 +109,7 @@ main(void) /* wrong port */ - { ECPGconnect(__LINE__, 0, "tcp:postgresql://localhost:20/ecpg2_regression" , "regress_ecpg_user1" , "connectpw" , NULL, 0); } + { ECPGconnect(__LINE__, 0, "tcp:postgresql://127.0.0.1:20/ecpg2_regression" , "regress_ecpg_user1" , "connectpw" , NULL, 0); } #line 57 "test1.pgc" /* no disconnect necessary */ diff --git a/src/interfaces/ecpg/test/expected/connect-test1.stderr b/src/interfaces/ecpg/test/expected/connect-test1.stderr index 0e43a1a398..ad806a0225 100644 --- a/src/interfaces/ecpg/test/expected/connect-test1.stderr +++ b/src/interfaces/ecpg/test/expected/connect-test1.stderr @@ -63,13 +63,10 @@ [NO_PID]: sqlca: code: -402, state: 08001 [NO_PID]: raising sqlcode -220 on line 54: connection "CURRENT" does not exist on line 54 [NO_PID]: sqlca: code: -220, state: 08003 -[NO_PID]: ECPGconnect: opening database ecpg2_regression on localhost port for user regress_ecpg_user1 +[NO_PID]: ECPGconnect: opening database ecpg2_regression on 127.0.0.1 port for user regress_ecpg_user1 [NO_PID]: sqlca: code: 0, state: 00000 [NO_PID]: ECPGconnect: could not open database: could not connect to server: Connection refused - Is the server running on host "localhost" (::1) and accepting - TCP/IP connections on port 20? -could not connect to server: Connection refused - Is the server running on host "localhost" (127.0.0.1) and accepting + Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 20? [NO_PID]: sqlca: code: 0, state: 00000 From 0b7e76eb2b142d0b4a2a831e7fa1fac44820f52c Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sat, 11 Nov 2017 14:35:22 -0800 Subject: [PATCH 0528/1087] Add post-2010 ecpg tests to checktcp. This suite had been a proper superset of the regular ecpg test suite, but the three newest tests didn't reach it. To make this less likely to recur, delete the extra schedule file and pass the TCP-specific test on the command line. Back-patch to 9.3 (all supported versions). 
--- src/interfaces/ecpg/test/Makefile | 4 +- src/interfaces/ecpg/test/ecpg_schedule_tcp | 55 ---------------------- 2 files changed, 2 insertions(+), 57 deletions(-) delete mode 100644 src/interfaces/ecpg/test/ecpg_schedule_tcp diff --git a/src/interfaces/ecpg/test/Makefile b/src/interfaces/ecpg/test/Makefile index 6097fea900..45a4b07837 100644 --- a/src/interfaces/ecpg/test/Makefile +++ b/src/interfaces/ecpg/test/Makefile @@ -80,9 +80,9 @@ REGRESS_OPTS = --dbname=ecpg1_regression,ecpg2_regression --create-role=regress_ check: all $(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule sql/twophase -# the same options, but with --listen-on-tcp +# Connect to the server using TCP, and add a TCP-specific test. checktcp: all | temp-install - $(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule_tcp --host=localhost + $(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule --host=localhost sql/twophase connect/test1 installcheck: all ./pg_regress $(REGRESS_OPTS) --bindir='$(bindir)' $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule diff --git a/src/interfaces/ecpg/test/ecpg_schedule_tcp b/src/interfaces/ecpg/test/ecpg_schedule_tcp deleted file mode 100644 index 77481b5147..0000000000 --- a/src/interfaces/ecpg/test/ecpg_schedule_tcp +++ /dev/null @@ -1,55 +0,0 @@ -test: compat_informix/dec_test -test: compat_informix/charfuncs -test: compat_informix/rfmtdate -test: compat_informix/rfmtlong -test: compat_informix/rnull -test: compat_informix/sqlda -test: compat_informix/describe -test: compat_informix/test_informix -test: compat_informix/test_informix2 -test: connect/test2 -test: connect/test3 -test: connect/test4 -test: connect/test5 -test: pgtypeslib/dt_test -test: pgtypeslib/dt_test2 -test: pgtypeslib/num_test -test: pgtypeslib/num_test2 -test: pgtypeslib/nan_test -test: preproc/array_of_struct -test: preproc/autoprep -test: preproc/comment -test: preproc/cursor -test: preproc/define -test: preproc/init -test: preproc/strings -test: preproc/type -test: preproc/variable -test: preproc/outofscope -test: preproc/whenever -test: sql/array -test: sql/binary -test: sql/code100 -test: sql/copystdout -test: sql/define -test: sql/desc -test: sql/sqlda -test: sql/describe -test: sql/dynalloc -test: sql/dynalloc2 -test: sql/dyntest -test: sql/execute -test: sql/fetch -test: sql/func -test: sql/indicators -test: sql/oldexec -test: sql/quote -test: sql/show -test: sql/insupd -test: sql/parser -test: thread/thread -test: thread/thread_implicit -test: thread/prep -test: thread/alloc -test: thread/descriptor -test: connect/test1 From e02571b73f2d8124fe75d7408f9b63d4c5fe03b0 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sun, 12 Nov 2017 13:03:15 -0800 Subject: [PATCH 0529/1087] Don't call pgwin32_message_to_UTF16() without CurrentMemoryContext. PostgreSQL running as a Windows service crashed upon calling write_stderr() before MemoryContextInit(). This fix completes work started in 5735efee15540765315aa8c1a230575e756037f7. Messages this early contain only ASCII bytes; if we removed the CurrentMemoryContext requirement, the ensuing conversions would have no effect. Back-patch to 9.3 (all supported versions). 
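As a condensed sketch of the guard this adds in write_eventlog()
(simplified from the elog.c hunk below):

    /*
     * Attempt the UTF16 conversion only when it can both run and matter:
     * pgwin32_message_to_UTF16() palloc()s its result, so it must not be
     * reached while CurrentMemoryContext is still NULL.  Messages emitted
     * that early are ASCII-only, so falling through to the unconverted
     * ReportEventA() path loses nothing.
     */
    if (!in_error_recursion_trouble() &&
        CurrentMemoryContext != NULL &&
        GetMessageEncoding() != GetACPEncoding())
        utf16 = pgwin32_message_to_UTF16(line, len, NULL);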
Takayuki Tsunakawa, reviewed by Michael Paquier. Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F80CC73@G01JPEXMBYT05 --- src/backend/utils/error/elog.c | 5 +++++ src/backend/utils/mb/mbutils.c | 6 ++++-- 2 files changed, 9 insertions(+), 2 deletions(-) diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c index f6bb05f135..9cdc07f006 100644 --- a/src/backend/utils/error/elog.c +++ b/src/backend/utils/error/elog.c @@ -2117,10 +2117,15 @@ write_eventlog(int level, const char *line, int len) * try to convert the message to UTF16 and write it with ReportEventW(). * Fall back on ReportEventA() if conversion failed. * + * Since we palloc the structure required for conversion, also fall + * through to writing unconverted if we have not yet set up + * CurrentMemoryContext. + * * Also verify that we are not on our way into error recursion trouble due * to error messages thrown deep inside pgwin32_message_to_UTF16(). */ if (!in_error_recursion_trouble() && + CurrentMemoryContext != NULL && GetMessageEncoding() != GetACPEncoding()) { utf16 = pgwin32_message_to_UTF16(line, len, NULL); diff --git a/src/backend/utils/mb/mbutils.c b/src/backend/utils/mb/mbutils.c index 56f4dc1453..fee8e66bd6 100644 --- a/src/backend/utils/mb/mbutils.c +++ b/src/backend/utils/mb/mbutils.c @@ -1038,8 +1038,10 @@ GetMessageEncoding(void) #ifdef WIN32 /* - * Result is palloc'ed null-terminated utf16 string. The character length - * is also passed to utf16len if not null. Returns NULL iff failed. + * Convert from MessageEncoding to a palloc'ed, null-terminated utf16 + * string. The character length is also passed to utf16len if not + * null. Returns NULL iff failed. Before MessageEncoding initialization, "str" + * should be ASCII-only; this will function as though MessageEncoding is UTF8. */ WCHAR * pgwin32_message_to_UTF16(const char *str, int len, int *utf16len) From cbfffee41c3f571fa3fcb26fca5eb11bc508f972 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sun, 12 Nov 2017 14:31:00 -0800 Subject: [PATCH 0530/1087] Install Windows crash dump handler before all else. Apart from calling write_stderr() on failure, the handler depends on no PostgreSQL facilities. We have experienced crashes before reaching the former call site. Given such an early crash, this change cannot hurt and may produce a helpful dump. Absent an early crash, this change has no effect. Back-patch to 9.3 (all supported versions). Takayuki Tsunakawa Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F80CD13@G01JPEXMBYT05 --- src/backend/main/main.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/src/backend/main/main.c b/src/backend/main/main.c index 87b7d3bf65..f9d673f881 100644 --- a/src/backend/main/main.c +++ b/src/backend/main/main.c @@ -61,6 +61,14 @@ main(int argc, char *argv[]) { bool do_check_root = true; + /* + * If supported on the current platform, set up a handler to be called if + * the backend/postmaster crashes with a fatal signal or exception. + */ +#if defined(WIN32) && defined(HAVE_MINIDUMP_TYPE) + pgwin32_install_crashdump_handler(); +#endif + progname = get_progname(argv[0]); /* @@ -81,14 +89,6 @@ main(int argc, char *argv[]) */ argv = save_ps_display_args(argc, argv); - /* - * If supported on the current platform, set up a handler to be called if - * the backend/postmaster crashes with a fatal signal or exception. 
- */ -#if defined(WIN32) && defined(HAVE_MINIDUMP_TYPE) - pgwin32_install_crashdump_handler(); -#endif - /* * Fire up essential subsystems: error and memory management * From 9363b8b79b0f2475b5b607fe4e0aa73a86398223 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sun, 12 Nov 2017 18:43:32 -0800 Subject: [PATCH 0531/1087] MSVC: Rebuild spiexceptions.h when out of date. Also, add a warning to catch future instances of naming a nonexistent file as a prerequisite. Back-patch to 9.3 (all supported versions). --- src/tools/msvc/Solution.pm | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm index 947d904244..6dcdd29f57 100644 --- a/src/tools/msvc/Solution.pm +++ b/src/tools/msvc/Solution.pm @@ -81,6 +81,7 @@ sub DeterminePlatform sub IsNewer { my ($newfile, $oldfile) = @_; + -e $oldfile or warn "source file \"$oldfile\" does not exist"; if ( $oldfile ne 'src/tools/msvc/config.pl' && $oldfile ne 'src/tools/msvc/config_default.pl') { @@ -325,7 +326,7 @@ s{PG_VERSION_STR "[^"]+"}{PG_VERSION_STR "PostgreSQL $self->{strver}$extraver, c if ($self->{options}->{python} && IsNewer( 'src/pl/plpython/spiexceptions.h', - 'src/include/backend/errcodes.txt')) + 'src/backend/utils/errcodes.txt')) { print "Generating spiexceptions.h...\n"; system( From cfd8c87e16bc77eceddb1227c8b865c8606e4ccd Mon Sep 17 00:00:00 2001 From: Stephen Frost Date: Mon, 13 Nov 2017 09:40:30 -0500 Subject: [PATCH 0532/1087] Fix typo Determinisitcally -> Deterministically Author: Michael Paquier Discussion: https://postgr.es/m/CAB7nPqSauJ9gUMzj1aiXQVxqEkyko+WZ+wUac8_hB_M_bO6U_A@mail.gmail.com --- src/backend/libpq/auth-scram.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c index 9161c885e1..ec4bb9a88e 100644 --- a/src/backend/libpq/auth-scram.c +++ b/src/backend/libpq/auth-scram.c @@ -1116,7 +1116,7 @@ build_server_final_message(scram_state *state) /* - * Determinisitcally generate salt for mock authentication, using a SHA256 + * Deterministically generate salt for mock authentication, using a SHA256 * hash based on the username and a cluster-level secret key. Returns a * pointer to a static buffer of size SCRAM_DEFAULT_SALT_LEN. */ From ce4c86a656d2c0174d1ff1f64f38da07574562c0 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 13 Nov 2017 19:26:35 +0100 Subject: [PATCH 0533/1087] Mention CREATE/DROP STATISTICS in event triggers docs The new commands are reported by event triggers, but they weren't documented as such. Repair. Author: David Rowley Discussion: https://postgr.es/m/CAKJS1f-t-NE=AThB3zu1mKhdrm8PCb=++3e7x=Lf343xcrFHxQ@mail.gmail.com --- doc/src/sgml/event-trigger.sgml | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/doc/src/sgml/event-trigger.sgml b/doc/src/sgml/event-trigger.sgml index e19571b8eb..c16ff338a3 100644 --- a/doc/src/sgml/event-trigger.sgml +++ b/doc/src/sgml/event-trigger.sgml @@ -503,6 +503,14 @@ - + + CREATE STATISTICS + X + X + - + - + + CREATE TABLE X @@ -743,6 +751,14 @@ - + + DROP STATISTICS + X + X + X + - + + DROP TABLE X From e64861c79bda659ee384bc253f651401f953dadc Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 13 Nov 2017 15:24:12 -0500 Subject: [PATCH 0534/1087] Track in the plan the types associated with PARAM_EXEC parameters. 
Up until now, we only tracked the number of parameters, which was sufficient to allocate an array of Datums of the appropriate size, but not sufficient to, for example, know how to serialize a Datum stored in one of those slots. An upcoming patch wants to do that, so add this tracking to make it possible. Patch by me, reviewed by Tom Lane and Amit Kapila. Discussion: http://postgr.es/m/CA+TgmoYqpxDKn8koHdW8BEKk8FMUL0=e8m2Qe=M+r0UBjr3tuQ@mail.gmail.com --- src/backend/executor/execMain.c | 20 ++++++++++----- src/backend/executor/execParallel.c | 2 +- src/backend/nodes/copyfuncs.c | 2 +- src/backend/nodes/outfuncs.c | 4 +-- src/backend/nodes/readfuncs.c | 2 +- src/backend/optimizer/plan/planner.c | 6 ++--- src/backend/optimizer/plan/subselect.c | 35 +++++++++++++++++++------- src/backend/optimizer/util/clauses.c | 2 +- src/include/nodes/plannodes.h | 2 +- src/include/nodes/relation.h | 6 ++--- 10 files changed, 53 insertions(+), 28 deletions(-) diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 493ff82775..47f2131642 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -195,9 +195,14 @@ standard_ExecutorStart(QueryDesc *queryDesc, int eflags) */ estate->es_param_list_info = queryDesc->params; - if (queryDesc->plannedstmt->nParamExec > 0) + if (queryDesc->plannedstmt->paramExecTypes != NIL) + { + int nParamExec; + + nParamExec = list_length(queryDesc->plannedstmt->paramExecTypes); estate->es_param_exec_vals = (ParamExecData *) - palloc0(queryDesc->plannedstmt->nParamExec * sizeof(ParamExecData)); + palloc0(nParamExec * sizeof(ParamExecData)); + } estate->es_sourceText = queryDesc->sourceText; @@ -3032,9 +3037,11 @@ EvalPlanQualBegin(EPQState *epqstate, EState *parentestate) MemSet(estate->es_epqScanDone, 0, rtsize * sizeof(bool)); /* Recopy current values of parent parameters */ - if (parentestate->es_plannedstmt->nParamExec > 0) + if (parentestate->es_plannedstmt->paramExecTypes != NIL) { - int i = parentestate->es_plannedstmt->nParamExec; + int i; + + i = list_length(parentestate->es_plannedstmt->paramExecTypes); while (--i >= 0) { @@ -3122,10 +3129,11 @@ EvalPlanQualStart(EPQState *epqstate, EState *parentestate, Plan *planTree) * already set from other parts of the parent's plan tree. */ estate->es_param_list_info = parentestate->es_param_list_info; - if (parentestate->es_plannedstmt->nParamExec > 0) + if (parentestate->es_plannedstmt->paramExecTypes != NIL) { - int i = parentestate->es_plannedstmt->nParamExec; + int i; + i = list_length(parentestate->es_plannedstmt->paramExecTypes); estate->es_param_exec_vals = (ParamExecData *) palloc0(i * sizeof(ParamExecData)); while (--i >= 0) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 1b477baecb..fd7e7cbf3d 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -195,7 +195,7 @@ ExecSerializePlan(Plan *plan, EState *estate) pstmt->rowMarks = NIL; pstmt->relationOids = NIL; pstmt->invalItems = NIL; /* workers can't replan anyway... 
*/ - pstmt->nParamExec = estate->es_plannedstmt->nParamExec; + pstmt->paramExecTypes = estate->es_plannedstmt->paramExecTypes; pstmt->utilityStmt = NULL; pstmt->stmt_location = -1; pstmt->stmt_len = -1; diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index cadd253ef1..76e75459b4 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -97,7 +97,7 @@ _copyPlannedStmt(const PlannedStmt *from) COPY_NODE_FIELD(rowMarks); COPY_NODE_FIELD(relationOids); COPY_NODE_FIELD(invalItems); - COPY_SCALAR_FIELD(nParamExec); + COPY_NODE_FIELD(paramExecTypes); COPY_NODE_FIELD(utilityStmt); COPY_LOCATION_FIELD(stmt_location); COPY_LOCATION_FIELD(stmt_len); diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 291d1eeb46..dc35df9e4f 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -282,7 +282,7 @@ _outPlannedStmt(StringInfo str, const PlannedStmt *node) WRITE_NODE_FIELD(rowMarks); WRITE_NODE_FIELD(relationOids); WRITE_NODE_FIELD(invalItems); - WRITE_INT_FIELD(nParamExec); + WRITE_NODE_FIELD(paramExecTypes); WRITE_NODE_FIELD(utilityStmt); WRITE_LOCATION_FIELD(stmt_location); WRITE_LOCATION_FIELD(stmt_len); @@ -2181,7 +2181,7 @@ _outPlannerGlobal(StringInfo str, const PlannerGlobal *node) WRITE_NODE_FIELD(rootResultRelations); WRITE_NODE_FIELD(relationOids); WRITE_NODE_FIELD(invalItems); - WRITE_INT_FIELD(nParamExec); + WRITE_NODE_FIELD(paramExecTypes); WRITE_UINT_FIELD(lastPHId); WRITE_UINT_FIELD(lastRowMarkId); WRITE_INT_FIELD(lastPlanNodeId); diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 42c595dc03..593658dd8a 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -1480,7 +1480,7 @@ _readPlannedStmt(void) READ_NODE_FIELD(rowMarks); READ_NODE_FIELD(relationOids); READ_NODE_FIELD(invalItems); - READ_INT_FIELD(nParamExec); + READ_NODE_FIELD(paramExecTypes); READ_NODE_FIELD(utilityStmt); READ_LOCATION_FIELD(stmt_location); READ_LOCATION_FIELD(stmt_len); diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 9b7a8fd82c..607f7cd251 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -243,7 +243,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) glob->rootResultRelations = NIL; glob->relationOids = NIL; glob->invalItems = NIL; - glob->nParamExec = 0; + glob->paramExecTypes = NIL; glob->lastPHId = 0; glob->lastRowMarkId = 0; glob->lastPlanNodeId = 0; @@ -415,7 +415,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) * set_plan_references' tree traversal, but for now it has to be separate * because we need to visit subplans before not after main plan. 
*/ - if (glob->nParamExec > 0) + if (glob->paramExecTypes != NIL) { Assert(list_length(glob->subplans) == list_length(glob->subroots)); forboth(lp, glob->subplans, lr, glob->subroots) @@ -466,7 +466,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) result->rowMarks = glob->finalrowmarks; result->relationOids = glob->relationOids; result->invalItems = glob->invalItems; - result->nParamExec = glob->nParamExec; + result->paramExecTypes = glob->paramExecTypes; /* utilityStmt should be null, but we might as well copy it */ result->utilityStmt = parse->utilityStmt; result->stmt_location = parse->stmt_location; diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c index 8f75fa98ed..2e3abeea3d 100644 --- a/src/backend/optimizer/plan/subselect.c +++ b/src/backend/optimizer/plan/subselect.c @@ -131,7 +131,9 @@ assign_param_for_var(PlannerInfo *root, Var *var) pitem = makeNode(PlannerParamItem); pitem->item = (Node *) var; - pitem->paramId = root->glob->nParamExec++; + pitem->paramId = list_length(root->glob->paramExecTypes); + root->glob->paramExecTypes = lappend_oid(root->glob->paramExecTypes, + var->vartype); root->plan_params = lappend(root->plan_params, pitem); @@ -234,7 +236,9 @@ assign_param_for_placeholdervar(PlannerInfo *root, PlaceHolderVar *phv) pitem = makeNode(PlannerParamItem); pitem->item = (Node *) phv; - pitem->paramId = root->glob->nParamExec++; + pitem->paramId = list_length(root->glob->paramExecTypes); + root->glob->paramExecTypes = lappend_oid(root->glob->paramExecTypes, + exprType((Node *) phv->phexpr)); root->plan_params = lappend(root->plan_params, pitem); @@ -323,7 +327,9 @@ replace_outer_agg(PlannerInfo *root, Aggref *agg) pitem = makeNode(PlannerParamItem); pitem->item = (Node *) agg; - pitem->paramId = root->glob->nParamExec++; + pitem->paramId = list_length(root->glob->paramExecTypes); + root->glob->paramExecTypes = lappend_oid(root->glob->paramExecTypes, + agg->aggtype); root->plan_params = lappend(root->plan_params, pitem); @@ -348,6 +354,7 @@ replace_outer_grouping(PlannerInfo *root, GroupingFunc *grp) Param *retval; PlannerParamItem *pitem; Index levelsup; + Oid ptype; Assert(grp->agglevelsup > 0 && grp->agglevelsup < root->query_level); @@ -362,17 +369,20 @@ replace_outer_grouping(PlannerInfo *root, GroupingFunc *grp) grp = copyObject(grp); IncrementVarSublevelsUp((Node *) grp, -((int) grp->agglevelsup), 0); Assert(grp->agglevelsup == 0); + ptype = exprType((Node *) grp); pitem = makeNode(PlannerParamItem); pitem->item = (Node *) grp; - pitem->paramId = root->glob->nParamExec++; + pitem->paramId = list_length(root->glob->paramExecTypes); + root->glob->paramExecTypes = lappend_oid(root->glob->paramExecTypes, + ptype); root->plan_params = lappend(root->plan_params, pitem); retval = makeNode(Param); retval->paramkind = PARAM_EXEC; retval->paramid = pitem->paramId; - retval->paramtype = exprType((Node *) grp); + retval->paramtype = ptype; retval->paramtypmod = -1; retval->paramcollid = InvalidOid; retval->location = grp->location; @@ -385,7 +395,8 @@ replace_outer_grouping(PlannerInfo *root, GroupingFunc *grp) * * This is used to create Params representing subplan outputs. * We don't need to build a PlannerParamItem for such a Param, but we do - * need to record the PARAM_EXEC slot number as being allocated. + * need to make sure we record the type in paramExecTypes (otherwise, + * there won't be a slot allocated for it). 
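+ * (Both happen in one step below: the new Param's ID is taken as the
+ * current length of root->glob->paramExecTypes, so appending the type
+ * OID to that list is what allocates the slot.)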
*/ static Param * generate_new_param(PlannerInfo *root, Oid paramtype, int32 paramtypmod, @@ -395,7 +406,9 @@ generate_new_param(PlannerInfo *root, Oid paramtype, int32 paramtypmod, retval = makeNode(Param); retval->paramkind = PARAM_EXEC; - retval->paramid = root->glob->nParamExec++; + retval->paramid = list_length(root->glob->paramExecTypes); + root->glob->paramExecTypes = lappend_oid(root->glob->paramExecTypes, + paramtype); retval->paramtype = paramtype; retval->paramtypmod = paramtypmod; retval->paramcollid = paramcollation; @@ -415,7 +428,11 @@ generate_new_param(PlannerInfo *root, Oid paramtype, int32 paramtypmod, int SS_assign_special_param(PlannerInfo *root) { - return root->glob->nParamExec++; + int paramId = list_length(root->glob->paramExecTypes); + + root->glob->paramExecTypes = lappend_oid(root->glob->paramExecTypes, + InvalidOid); + return paramId; } /* @@ -2098,7 +2115,7 @@ SS_identify_outer_params(PlannerInfo *root) * If no parameters have been assigned anywhere in the tree, we certainly * don't need to do anything here. */ - if (root->glob->nParamExec == 0) + if (root->glob->paramExecTypes == NIL) return; /* diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index 30cdd3da4c..66e098f488 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -1095,7 +1095,7 @@ is_parallel_safe(PlannerInfo *root, Node *node) * in this expression. But otherwise we don't need to look. */ if (root->glob->maxParallelHazard == PROPARALLEL_SAFE && - root->glob->nParamExec == 0) + root->glob->paramExecTypes == NIL) return true; /* Else use max_parallel_hazard's search logic, but stop on RESTRICTED */ context.max_hazard = PROPARALLEL_SAFE; diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h index dd74efa9a4..a127682b0e 100644 --- a/src/include/nodes/plannodes.h +++ b/src/include/nodes/plannodes.h @@ -89,7 +89,7 @@ typedef struct PlannedStmt List *invalItems; /* other dependencies, as PlanInvalItems */ - int nParamExec; /* number of PARAM_EXEC Params used */ + List *paramExecTypes; /* type OIDs for PARAM_EXEC Params */ Node *utilityStmt; /* non-null if this is utility stmt */ diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 05fc9a3f48..9e68e65cc6 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -114,7 +114,7 @@ typedef struct PlannerGlobal List *invalItems; /* other dependencies, as PlanInvalItems */ - int nParamExec; /* number of PARAM_EXEC Params used */ + List *paramExecTypes; /* type OIDs for PARAM_EXEC Params */ Index lastPHId; /* highest PlaceHolderVar ID assigned */ @@ -2219,8 +2219,8 @@ typedef struct MinMaxAggInfo * from subplans (values that are setParam items for those subplans). These * IDs need not be tracked via PlannerParamItems, since we do not need any * duplicate-elimination nor later processing of the represented expressions. - * Instead, we just record the assignment of the slot number by incrementing - * root->glob->nParamExec. + * Instead, we just record the assignment of the slot number by appending to + * root->glob->paramExecTypes. */ typedef struct PlannerParamItem { From 44ae64c388bde6e4b077272570c84dedfb17bed3 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 13 Nov 2017 16:33:56 -0500 Subject: [PATCH 0535/1087] Push target list evaluation through Gather Merge. We already do this for Gather, but it got overlooked for Gather Merge. Amit Kapila, with review and minor revisions by Rushabh Lathia and by me. 
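In outline, this generalizes the pushdown that
apply_projection_to_path() already performed for Gather; a simplified
sketch (the committed code keeps separate branches because GatherPath
and GatherMergePath are distinct structs, and "subpath" here stands
for the parallel node's input path):

    if ((IsA(path, GatherPath) || IsA(path, GatherMergePath)) &&
        is_parallel_safe(root, (Node *) target->exprs))
    {
        /* interpose a projection below the parallel node so the
         * workers, not just the leader, evaluate the target list */
        subpath = (Path *) create_projection_path(root, subpath->parent,
                                                  subpath, target);
    }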
Discussion: http://postgr.es/m/CAA4eK1KUC5Uyu7qaifxrjpHxbSeoQh3yzwN3bThnJsmJcZ-qtA@mail.gmail.com --- src/backend/optimizer/plan/planner.c | 3 +- src/backend/optimizer/util/pathnode.c | 45 ++++++++++++------- src/test/regress/expected/select_parallel.out | 25 +++++++++++ src/test/regress/sql/select_parallel.sql | 13 ++++++ 4 files changed, 69 insertions(+), 17 deletions(-) diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 607f7cd251..90fd9cc959 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -5060,7 +5060,8 @@ create_ordered_paths(PlannerInfo *root, path = (Path *) create_gather_merge_path(root, ordered_rel, path, - target, root->sort_pathkeys, NULL, + path->pathtarget, + root->sort_pathkeys, NULL, &total_groups); /* Add projection step if needed */ diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 36ec025b05..68dee0f501 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -2368,9 +2368,9 @@ create_projection_path(PlannerInfo *root, * knows that the given path isn't referenced elsewhere and so can be modified * in-place. * - * If the input path is a GatherPath, we try to push the new target down to - * its input as well; this is a yet more invasive modification of the input - * path, which create_projection_path() can't do. + * If the input path is a GatherPath or GatherMergePath, we try to push the + * new target down to its input as well; this is a yet more invasive + * modification of the input path, which create_projection_path() can't do. * * Note that we mustn't change the source path's parent link; so when it is * add_path'd to "rel" things will be a bit inconsistent. So far that has @@ -2407,31 +2407,44 @@ apply_projection_to_path(PlannerInfo *root, (target->cost.per_tuple - oldcost.per_tuple) * path->rows; /* - * If the path happens to be a Gather path, we'd like to arrange for the - * subpath to return the required target list so that workers can help - * project. But if there is something that is not parallel-safe in the - * target expressions, then we can't. + * If the path happens to be a Gather or GatherMerge path, we'd like to + * arrange for the subpath to return the required target list so that + * workers can help project. But if there is something that is not + * parallel-safe in the target expressions, then we can't. */ - if (IsA(path, GatherPath) && + if ((IsA(path, GatherPath) || IsA(path, GatherMergePath)) && is_parallel_safe(root, (Node *) target->exprs)) { - GatherPath *gpath = (GatherPath *) path; - /* * We always use create_projection_path here, even if the subpath is * projection-capable, so as to avoid modifying the subpath in place. * It seems unlikely at present that there could be any other * references to the subpath, but better safe than sorry. * - * Note that we don't change the GatherPath's cost estimates; it might + * Note that we don't change the parallel path's cost estimates; it might * be appropriate to do so, to reflect the fact that the bulk of the * target evaluation will happen in workers. 
*/ - gpath->subpath = (Path *) - create_projection_path(root, - gpath->subpath->parent, - gpath->subpath, - target); + if (IsA(path, GatherPath)) + { + GatherPath *gpath = (GatherPath *) path; + + gpath->subpath = (Path *) + create_projection_path(root, + gpath->subpath->parent, + gpath->subpath, + target); + } + else + { + GatherMergePath *gmpath = (GatherMergePath *) path; + + gmpath->subpath = (Path *) + create_projection_path(root, + gmpath->subpath->parent, + gmpath->subpath, + target); + } } else if (path->parallel_safe && !is_parallel_safe(root, (Node *) target->exprs)) diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index ac9ad0668d..6f04769e3e 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -375,6 +375,31 @@ select count(*) from tenk1 group by twenty; 500 (20 rows) +--test expressions in targetlist are pushed down for gather merge +create or replace function simple_func(var1 integer) returns integer +as $$ +begin + return var1 + 10; +end; +$$ language plpgsql PARALLEL SAFE; +explain (costs off, verbose) + select ten, simple_func(ten) from tenk1 where ten < 100 order by ten; + QUERY PLAN +----------------------------------------------------- + Gather Merge + Output: ten, (simple_func(ten)) + Workers Planned: 4 + -> Result + Output: ten, simple_func(ten) + -> Sort + Output: ten + Sort Key: tenk1.ten + -> Parallel Seq Scan on public.tenk1 + Output: ten + Filter: (tenk1.ten < 100) +(11 rows) + +drop function simple_func(integer); --test rescan behavior of gather merge set enable_material = false; explain (costs off) diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 495f0335dc..9c1b87abdf 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -144,6 +144,19 @@ explain (costs off) select count(*) from tenk1 group by twenty; +--test expressions in targetlist are pushed down for gather merge +create or replace function simple_func(var1 integer) returns integer +as $$ +begin + return var1 + 10; +end; +$$ language plpgsql PARALLEL SAFE; + +explain (costs off, verbose) + select ten, simple_func(ten) from tenk1 where ten < 100 order by ten; + +drop function simple_func(integer); + --test rescan behavior of gather merge set enable_material = false; From 591c504fad0de88b559bf28e929d23672179a857 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 13 Nov 2017 16:40:03 -0500 Subject: [PATCH 0536/1087] Allow running just selected steps of pgbench's initialization sequence. This feature caters to specialized use-cases such as running the normal pgbench scenario with nonstandard indexes, or inserting other actions between steps of the initialization sequence. The normal sequence of initialization actions is broken down into half a dozen steps which can be executed in a user-specified order, to the extent to which that's sensible. The actions themselves aren't changed, except to make them more robust against nonstandard uses: * all four tables are now dropped in one DROP command, to reduce assumptions about what foreign key relationships exist; * all four tables are now truncated at the start of the data load step, for consistency; * the foreign key creation commands now specify constraint names, to prevent accidentally creating duplicate constraints by executing the 'f' step twice. 
Make some cosmetic adjustments in the messages emitted by pgbench so that it's clear which steps are getting run, and so that the messages agree with the documented names of the steps. In passing, fix failure to enforce that the -v option is used only in benchmarking mode. Masahiko Sawada, reviewed by Fabien Coelho, editorialized a bit by me Discussion: https://postgr.es/m/CAD21AoCsz0ZzfCFcxYZ+PUdpkDd5VsCSG0Pre_-K1EgokCDFYA@mail.gmail.com --- doc/src/sgml/ref/pgbench.sgml | 85 ++++- src/bin/pgbench/pgbench.c | 363 +++++++++++++------ src/bin/pgbench/t/001_pgbench_with_server.pl | 25 +- src/bin/pgbench/t/002_pgbench_no_server.pl | 5 + 4 files changed, 364 insertions(+), 114 deletions(-) diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index e509e6c7f6..c48a69713a 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -134,9 +134,9 @@ pgbench options d Options - The following is divided into three subsections: Different options are used - during database initialization and while running benchmarks, some options - are useful in both cases. + The following is divided into three subsections. Different options are + used during database initialization and while running benchmarks, but some + options are useful in both cases. @@ -158,6 +158,79 @@ pgbench options d + + + + + + Perform just a selected set of the normal initialization steps. + init_steps specifies the + initialization steps to be performed, using one character per step. + Each step is invoked in the specified order. + The default is dtgvp. + The available steps are: + + + + d (Drop) + + + Drop any existing pgbench tables. + + + + + t (create Tables) + + + Create the tables used by the + standard pgbench scenario, namely + pgbench_accounts, + pgbench_branches, + pgbench_history, and + pgbench_tellers. + + + + + g (Generate data) + + + Generate data and load it into the standard tables, + replacing any data already present. + + + + + v (Vacuum) + + + Invoke VACUUM on the standard tables. + + + + + p (create Primary keys) + + + Create primary key indexes on the standard tables. + + + + + f (create Foreign keys) + + + Create foreign key constraints between the standard tables. + (Note that this step is not performed by default.) + + + + + + + + fillfactor fillfactor @@ -176,7 +249,9 @@ pgbench options d - Perform no vacuuming after initialization. + Perform no vacuuming during initialization. + (This option suppresses the v initialization step, + even if it was specified in .) @@ -215,6 +290,8 @@ pgbench options d Create foreign key constraints between the standard tables. + (This option adds the f step to the initialization + step sequence, if it is not already present.) diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index ec56a74de0..ceb2fcc3ad 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -90,6 +90,8 @@ static int pthread_join(pthread_t th, void **thread_return); #define MAXCLIENTS 1024 #endif +#define DEFAULT_INIT_STEPS "dtgvp" /* default -I setting */ + #define LOG_STEP_SECONDS 5 /* seconds between log messages */ #define DEFAULT_NXACTS 10 /* default nxacts */ @@ -111,15 +113,10 @@ int scale = 1; */ int fillfactor = 100; -/* - * create foreign key constraints on the tables? - */ -int foreign_keys = 0; - /* * use unlogged tables? */ -int unlogged_tables = 0; +bool unlogged_tables = false; /* * log sampling rate (1.0 = log everything, 0.0 = option not given) @@ -485,8 +482,10 @@ usage(void) " %s [OPTION]... 
[DBNAME]\n" "\nInitialization options:\n" " -i, --initialize invokes initialization mode\n" + " -I, --init-steps=[dtgvpf]+ (default \"dtgvp\")\n" + " run selected initialization steps\n" " -F, --fillfactor=NUM set fill factor\n" - " -n, --no-vacuum do not run VACUUM after initialization\n" + " -n, --no-vacuum do not run VACUUM during initialization\n" " -q, --quiet quiet logging (one message each 5 seconds)\n" " -s, --scale=NUM scaling factor\n" " --foreign-keys create foreign key constraints between tables\n" @@ -2634,17 +2633,39 @@ disconnect_all(CState *state, int length) } } -/* create tables and setup data */ +/* + * Remove old pgbench tables, if any exist + */ static void -init(bool is_no_vacuum) +initDropTables(PGconn *con) { + fprintf(stderr, "dropping old tables...\n"); + + /* + * We drop all the tables in one command, so that whether there are + * foreign key dependencies or not doesn't matter. + */ + executeStatement(con, "drop table if exists " + "pgbench_accounts, " + "pgbench_branches, " + "pgbench_history, " + "pgbench_tellers"); +} + /* - * The scale factor at/beyond which 32-bit integers are insufficient for - * storing TPC-B account IDs. - * - * Although the actual threshold is 21474, we use 20000 because it is easier to - * document and remember, and isn't that far away from the real threshold. + * Create pgbench's standard tables */ +static void +initCreateTables(PGconn *con) +{ + /* + * The scale factor at/beyond which 32-bit integers are insufficient for + * storing TPC-B account IDs. + * + * Although the actual threshold is 21474, we use 20000 because it is + * easier to document and remember, and isn't that far away from the real + * threshold. + */ #define SCALE_32BIT_THRESHOLD 20000 /* @@ -2691,34 +2712,9 @@ init(bool is_no_vacuum) 1 } }; - static const char *const DDLINDEXes[] = { - "alter table pgbench_branches add primary key (bid)", - "alter table pgbench_tellers add primary key (tid)", - "alter table pgbench_accounts add primary key (aid)" - }; - static const char *const DDLKEYs[] = { - "alter table pgbench_tellers add foreign key (bid) references pgbench_branches", - "alter table pgbench_accounts add foreign key (bid) references pgbench_branches", - "alter table pgbench_history add foreign key (bid) references pgbench_branches", - "alter table pgbench_history add foreign key (tid) references pgbench_tellers", - "alter table pgbench_history add foreign key (aid) references pgbench_accounts" - }; - - PGconn *con; - PGresult *res; - char sql[256]; int i; - int64 k; - /* used to track elapsed time and estimate of the remaining time */ - instr_time start, - diff; - double elapsed_sec, - remaining_sec; - int log_interval = 1; - - if ((con = doConnect()) == NULL) - exit(1); + fprintf(stderr, "creating tables...\n"); for (i = 0; i < lengthof(DDLs); i++) { @@ -2727,10 +2723,6 @@ init(bool is_no_vacuum) const struct ddlinfo *ddl = &DDLs[i]; const char *cols; - /* Remove old table, if it exists. */ - snprintf(buffer, sizeof(buffer), "drop table if exists %s", ddl->table); - executeStatement(con, buffer); - /* Construct new create table statement. 
*/ opts[0] = '\0'; if (ddl->declare_fillfactor) @@ -2755,9 +2747,48 @@ init(bool is_no_vacuum) executeStatement(con, buffer); } +} +/* + * Fill the standard tables with some data + */ +static void +initGenerateData(PGconn *con) +{ + char sql[256]; + PGresult *res; + int i; + int64 k; + + /* used to track elapsed time and estimate of the remaining time */ + instr_time start, + diff; + double elapsed_sec, + remaining_sec; + int log_interval = 1; + + fprintf(stderr, "generating data...\n"); + + /* + * we do all of this in one transaction to enable the backend's + * data-loading optimizations + */ executeStatement(con, "begin"); + /* + * truncate away any old data, in one command in case there are foreign + * keys + */ + executeStatement(con, "truncate table " + "pgbench_accounts, " + "pgbench_branches, " + "pgbench_history, " + "pgbench_tellers"); + + /* + * fill branches, tellers, accounts in that order in case foreign keys + * already exist + */ for (i = 0; i < nbranches * scale; i++) { /* "filler" column defaults to NULL */ @@ -2776,16 +2807,9 @@ init(bool is_no_vacuum) executeStatement(con, sql); } - executeStatement(con, "commit"); - /* - * fill the pgbench_accounts table with some data + * accounts is big enough to be worth using COPY and tracking runtime */ - fprintf(stderr, "creating tables...\n"); - - executeStatement(con, "begin"); - executeStatement(con, "truncate pgbench_accounts"); - res = PQexec(con, "copy pgbench_accounts from stdin"); if (PQresultStatus(res) != PGRES_COPY_IN) { @@ -2859,22 +2883,37 @@ init(bool is_no_vacuum) fprintf(stderr, "PQendcopy failed\n"); exit(1); } + executeStatement(con, "commit"); +} - /* vacuum */ - if (!is_no_vacuum) - { - fprintf(stderr, "vacuum...\n"); - executeStatement(con, "vacuum analyze pgbench_branches"); - executeStatement(con, "vacuum analyze pgbench_tellers"); - executeStatement(con, "vacuum analyze pgbench_accounts"); - executeStatement(con, "vacuum analyze pgbench_history"); - } +/* + * Invoke vacuum on the standard tables + */ +static void +initVacuum(PGconn *con) +{ + fprintf(stderr, "vacuuming...\n"); + executeStatement(con, "vacuum analyze pgbench_branches"); + executeStatement(con, "vacuum analyze pgbench_tellers"); + executeStatement(con, "vacuum analyze pgbench_accounts"); + executeStatement(con, "vacuum analyze pgbench_history"); +} - /* - * create indexes - */ - fprintf(stderr, "set primary keys...\n"); +/* + * Create primary keys on the standard tables + */ +static void +initCreatePKeys(PGconn *con) +{ + static const char *const DDLINDEXes[] = { + "alter table pgbench_branches add primary key (bid)", + "alter table pgbench_tellers add primary key (tid)", + "alter table pgbench_accounts add primary key (aid)" + }; + int i; + + fprintf(stderr, "creating primary keys...\n"); for (i = 0; i < lengthof(DDLINDEXes); i++) { char buffer[256]; @@ -2894,16 +2933,101 @@ init(bool is_no_vacuum) executeStatement(con, buffer); } +} - /* - * create foreign keys - */ - if (foreign_keys) +/* + * Create foreign key constraints between the standard tables + */ +static void +initCreateFKeys(PGconn *con) +{ + static const char *const DDLKEYs[] = { + "alter table pgbench_tellers add constraint pgbench_tellers_bid_fkey foreign key (bid) references pgbench_branches", + "alter table pgbench_accounts add constraint pgbench_accounts_bid_fkey foreign key (bid) references pgbench_branches", + "alter table pgbench_history add constraint pgbench_history_bid_fkey foreign key (bid) references pgbench_branches", + "alter table pgbench_history add constraint 
pgbench_history_tid_fkey foreign key (tid) references pgbench_tellers", + "alter table pgbench_history add constraint pgbench_history_aid_fkey foreign key (aid) references pgbench_accounts" + }; + int i; + + fprintf(stderr, "creating foreign keys...\n"); + for (i = 0; i < lengthof(DDLKEYs); i++) + { + executeStatement(con, DDLKEYs[i]); + } +} + +/* + * Validate an initialization-steps string + * + * (We could just leave it to runInitSteps() to fail if there are wrong + * characters, but since initialization can take awhile, it seems friendlier + * to check during option parsing.) + */ +static void +checkInitSteps(const char *initialize_steps) +{ + const char *step; + + if (initialize_steps[0] == '\0') + { + fprintf(stderr, "no initialization steps specified\n"); + exit(1); + } + + for (step = initialize_steps; *step != '\0'; step++) { - fprintf(stderr, "set foreign keys...\n"); - for (i = 0; i < lengthof(DDLKEYs); i++) + if (strchr("dtgvpf ", *step) == NULL) { - executeStatement(con, DDLKEYs[i]); + fprintf(stderr, "unrecognized initialization step \"%c\"\n", + *step); + fprintf(stderr, "allowed steps are: \"d\", \"t\", \"g\", \"v\", \"p\", \"f\"\n"); + exit(1); + } + } +} + +/* + * Invoke each initialization step in the given string + */ +static void +runInitSteps(const char *initialize_steps) +{ + PGconn *con; + const char *step; + + if ((con = doConnect()) == NULL) + exit(1); + + for (step = initialize_steps; *step != '\0'; step++) + { + switch (*step) + { + case 'd': + initDropTables(con); + break; + case 't': + initCreateTables(con); + break; + case 'g': + initGenerateData(con); + break; + case 'v': + initVacuum(con); + break; + case 'p': + initCreatePKeys(con); + break; + case 'f': + initCreateFKeys(con); + break; + case ' ': + break; /* ignore */ + default: + fprintf(stderr, "unrecognized initialization step \"%c\"\n", + *step); + PQfinish(con); + exit(1); } } @@ -3682,6 +3806,7 @@ main(int argc, char **argv) {"fillfactor", required_argument, NULL, 'F'}, {"host", required_argument, NULL, 'h'}, {"initialize", no_argument, NULL, 'i'}, + {"init-steps", required_argument, NULL, 'I'}, {"jobs", required_argument, NULL, 'j'}, {"log", no_argument, NULL, 'l'}, {"latency-limit", required_argument, NULL, 'L'}, @@ -3700,21 +3825,23 @@ main(int argc, char **argv) {"username", required_argument, NULL, 'U'}, {"vacuum-all", no_argument, NULL, 'v'}, /* long-named only options */ - {"foreign-keys", no_argument, &foreign_keys, 1}, - {"index-tablespace", required_argument, NULL, 3}, + {"unlogged-tables", no_argument, NULL, 1}, {"tablespace", required_argument, NULL, 2}, - {"unlogged-tables", no_argument, &unlogged_tables, 1}, + {"index-tablespace", required_argument, NULL, 3}, {"sampling-rate", required_argument, NULL, 4}, {"aggregate-interval", required_argument, NULL, 5}, {"progress-timestamp", no_argument, NULL, 6}, {"log-prefix", required_argument, NULL, 7}, + {"foreign-keys", no_argument, NULL, 8}, {NULL, 0, NULL, 0} }; int c; - int is_init_mode = 0; /* initialize mode? */ - int is_no_vacuum = 0; /* no vacuum at all before testing? */ - int do_vacuum_accounts = 0; /* do vacuum accounts before testing? */ + bool is_init_mode = false; /* initialize mode? */ + char *initialize_steps = NULL; + bool foreign_keys = false; + bool is_no_vacuum = false; + bool do_vacuum_accounts = false; /* vacuum accounts table? 
*/ int optindex; bool scale_given = false; @@ -3774,23 +3901,31 @@ main(int argc, char **argv) state = (CState *) pg_malloc(sizeof(CState)); memset(state, 0, sizeof(CState)); - while ((c = getopt_long(argc, argv, "ih:nvp:dqb:SNc:j:Crs:t:T:U:lf:D:F:M:P:R:L:", long_options, &optindex)) != -1) + while ((c = getopt_long(argc, argv, "iI:h:nvp:dqb:SNc:j:Crs:t:T:U:lf:D:F:M:P:R:L:", long_options, &optindex)) != -1) { char *script; switch (c) { case 'i': - is_init_mode++; + is_init_mode = true; + break; + case 'I': + if (initialize_steps) + pg_free(initialize_steps); + initialize_steps = pg_strdup(optarg); + checkInitSteps(initialize_steps); + initialization_option_set = true; break; case 'h': pghost = pg_strdup(optarg); break; case 'n': - is_no_vacuum++; + is_no_vacuum = true; break; case 'v': - do_vacuum_accounts++; + benchmarking_option_set = true; + do_vacuum_accounts = true; break; case 'p': pgport = pg_strdup(optarg); @@ -3863,11 +3998,6 @@ main(int argc, char **argv) break; case 't': benchmarking_option_set = true; - if (duration > 0) - { - fprintf(stderr, "specify either a number of transactions (-t) or a duration (-T), not both\n"); - exit(1); - } nxacts = atoi(optarg); if (nxacts <= 0) { @@ -3878,11 +4008,6 @@ main(int argc, char **argv) break; case 'T': benchmarking_option_set = true; - if (nxacts > 0) - { - fprintf(stderr, "specify either a number of transactions (-t) or a duration (-T), not both\n"); - exit(1); - } duration = atoi(optarg); if (duration <= 0) { @@ -3901,20 +4026,17 @@ main(int argc, char **argv) initialization_option_set = true; use_quiet = true; break; - case 'b': if (strcmp(optarg, "list") == 0) { listAvailableScripts(); exit(0); } - weight = parseScriptWeight(optarg, &script); process_builtin(findBuiltin(script), weight); benchmarking_option_set = true; internal_script_used = true; break; - case 'S': process_builtin(findBuiltin("select-only"), 1); benchmarking_option_set = true; @@ -4009,10 +4131,9 @@ main(int argc, char **argv) latency_limit = (int64) (limit_ms * 1000); } break; - case 0: - /* This covers long options which take no argument. 
*/ - if (foreign_keys || unlogged_tables) - initialization_option_set = true; + case 1: /* unlogged-tables */ + initialization_option_set = true; + unlogged_tables = true; break; case 2: /* tablespace */ initialization_option_set = true; @@ -4022,7 +4143,7 @@ main(int argc, char **argv) initialization_option_set = true; index_tablespace = pg_strdup(optarg); break; - case 4: + case 4: /* sampling-rate */ benchmarking_option_set = true; sample_rate = atof(optarg); if (sample_rate <= 0.0 || sample_rate > 1.0) @@ -4031,7 +4152,7 @@ main(int argc, char **argv) exit(1); } break; - case 5: + case 5: /* aggregate-interval */ benchmarking_option_set = true; agg_interval = atoi(optarg); if (agg_interval <= 0) @@ -4041,14 +4162,18 @@ main(int argc, char **argv) exit(1); } break; - case 6: + case 6: /* progress-timestamp */ progress_timestamp = true; benchmarking_option_set = true; break; - case 7: + case 7: /* log-prefix */ benchmarking_option_set = true; logfile_prefix = pg_strdup(optarg); break; + case 8: /* foreign-keys */ + initialization_option_set = true; + foreign_keys = true; + break; default: fprintf(stderr, _("Try \"%s --help\" for more information.\n"), progname); exit(1); @@ -4128,7 +4253,31 @@ main(int argc, char **argv) exit(1); } - init(is_no_vacuum); + if (initialize_steps == NULL) + initialize_steps = pg_strdup(DEFAULT_INIT_STEPS); + + if (is_no_vacuum) + { + /* Remove any vacuum step in initialize_steps */ + char *p; + + while ((p = strchr(initialize_steps, 'v')) != NULL) + *p = ' '; + } + + if (foreign_keys) + { + /* Add 'f' to end of initialize_steps, if not already there */ + if (strchr(initialize_steps, 'f') == NULL) + { + initialize_steps = (char *) + pg_realloc(initialize_steps, + strlen(initialize_steps) + 2); + strcat(initialize_steps, "f"); + } + } + + runInitSteps(initialize_steps); exit(0); } else @@ -4140,6 +4289,12 @@ main(int argc, char **argv) } } + if (nxacts > 0 && duration > 0) + { + fprintf(stderr, "specify either a number of transactions (-t) or a duration (-T), not both\n"); + exit(1); + } + /* Use DEFAULT_NXACTS if neither nxacts nor duration is specified. 
*/
 	if (nxacts <= 0 && duration <= 0)
 		nxacts = DEFAULT_NXACTS;
diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 11bc0fecfe..864d580c64 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -76,21 +76,34 @@ sub pgbench # Initialize pgbench tables scale 1 pgbench( '-i', 0, [qr{^$}], - [ qr{creating tables}, qr{vacuum}, qr{set primary keys}, qr{done\.} ], + [ qr{creating tables}, qr{vacuuming}, qr{creating primary keys}, qr{done\.} ], 'pgbench scale 1 initialization',); # Again, with all possible options pgbench( -'--initialize --scale=1 --unlogged-tables --fillfactor=98 --foreign-keys --quiet --tablespace=pg_default --index-tablespace=pg_default', +'--initialize --init-steps=dtpvg --scale=1 --unlogged-tables --fillfactor=98 --foreign-keys --quiet --tablespace=pg_default --index-tablespace=pg_default', 0, [qr{^$}i], - [ qr{creating tables}, - qr{vacuum}, - qr{set primary keys}, - qr{set foreign keys}, + [ qr{dropping old tables}, + qr{creating tables}, + qr{vacuuming}, + qr{creating primary keys}, + qr{creating foreign keys}, qr{done\.} ], 'pgbench scale 1 initialization'); +# Test interaction of --init-steps with legacy step-selection options +pgbench( + '--initialize --init-steps=dtpvgvv --no-vacuum --foreign-keys --unlogged-tables', + 0, [qr{^$}], + [ qr{dropping old tables}, + qr{creating tables}, + qr{creating primary keys}, + qr{.* of .* tuples \(.*\) done}, + qr{creating foreign keys}, + qr{done\.} ], + 'pgbench --init-steps'); + # Run all builtin scripts, for a few transactions each pgbench( '--transactions=5 -Dfoo=bla --client=2 --protocol=simple --builtin=t' diff --git a/src/bin/pgbench/t/002_pgbench_no_server.pl b/src/bin/pgbench/t/002_pgbench_no_server.pl index d6b3d4f926..6ea55f8dae 100644 --- a/src/bin/pgbench/t/002_pgbench_no_server.pl +++ b/src/bin/pgbench/t/002_pgbench_no_server.pl @@ -73,6 +73,11 @@ sub pgbench [ 'ambiguous builtin', '-b s', [qr{ambiguous}] ], [ '--progress-timestamp => --progress', '--progress-timestamp', [qr{allowed only under}] ], + [ '-I without init option', '-I dtg', + [qr{cannot be used in benchmarking mode}] ], + [ 'invalid init step', '-i -I dta', + [qr{unrecognized initialization step}, + qr{allowed steps are} ] ], # logging sub-options [ 'sampling => log', '--sampling-rate=0.01', From a61f5ab986386628cf20b33971364475ce452412 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Tue, 14 Nov 2017 15:19:05 +0100 Subject: [PATCH 0537/1087] Simplify index_[constraint_]create API Instead of passing large swaths of boolean arguments, define some flags that can be used in a bitmask. This makes it easier not only to figure out what each call site is doing, but also to add some new flags. The flags are split in two -- one set for index_create directly and another for constraints. index_create() itself receives both, and then passes down the latter to index_constraint_create(), which can also be called standalone. 
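For illustration only, here is a minimal sketch of the call-site readability this buys. It is not part of the patch: the stub functions and their simplified signatures are invented for the example, though the flag names match the definitions added to src/include/catalog/index.h below.

#include <stdbool.h>
#include <stdint.h>

typedef uint16_t bits16;		/* stand-in for the real typedef */

#define INDEX_CREATE_IS_PRIMARY		(1 << 0)
#define INDEX_CREATE_ADD_CONSTRAINT	(1 << 1)
#define INDEX_CREATE_SKIP_BUILD		(1 << 2)

/* old style: positional booleans give the reader no clue at the call site */
static void
index_create_old(bool isprimary, bool isconstraint, bool skip_build)
{
	(void) isprimary; (void) isconstraint; (void) skip_build;
}

/* new style: one self-documenting bitmask, with room to add flags later */
static void
index_create_new(bits16 flags)
{
	(void) flags;
}

int
main(void)
{
	index_create_old(true, true, false);	/* which bool is which? */
	index_create_new(INDEX_CREATE_IS_PRIMARY | INDEX_CREATE_ADD_CONSTRAINT);
	return 0;
}
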
Discussion: https://postgr.es/m/20171023151251.j75uoe27gajdjmlm@alvherre.pgsql Reviewed-by: Simon Riggs --- src/backend/catalog/index.c | 105 ++++++++++++++++--------------- src/backend/catalog/toasting.c | 3 +- src/backend/commands/indexcmds.c | 33 +++++++--- src/backend/commands/tablecmds.c | 13 ++-- src/include/catalog/index.h | 29 +++++---- 5 files changed, 104 insertions(+), 79 deletions(-) diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c index c7b2f031f0..0125c18bc1 100644 --- a/src/backend/catalog/index.c +++ b/src/backend/catalog/index.c @@ -680,19 +680,25 @@ UpdateIndexRelation(Oid indexoid, * classObjectId: array of index opclass OIDs, one per index column * coloptions: array of per-index-column indoption settings * reloptions: AM-specific options - * isprimary: index is a PRIMARY KEY - * isconstraint: index is owned by PRIMARY KEY, UNIQUE, or EXCLUSION constraint - * deferrable: constraint is DEFERRABLE - * initdeferred: constraint is INITIALLY DEFERRED + * flags: bitmask that can include any combination of these bits: + * INDEX_CREATE_IS_PRIMARY + * the index is a primary key + * INDEX_CREATE_ADD_CONSTRAINT: + * invoke index_constraint_create also + * INDEX_CREATE_SKIP_BUILD: + * skip the index_build() step for the moment; caller must do it + * later (typically via reindex_index()) + * INDEX_CREATE_CONCURRENT: + * do not lock the table against writers. The index will be + * marked "invalid" and the caller must take additional steps + * to fix it up. + * INDEX_CREATE_IF_NOT_EXISTS: + * do not throw an error if a relation with the same name + * already exists. + * constr_flags: flags passed to index_constraint_create + * (only if INDEX_CREATE_ADD_CONSTRAINT is set) * allow_system_table_mods: allow table to be a system catalog - * skip_build: true to skip the index_build() step for the moment; caller - * must do it later (typically via reindex_index()) - * concurrent: if true, do not lock the table against writers. The index - * will be marked "invalid" and the caller must take additional steps - * to fix it up. * is_internal: if true, post creation hook for new index - * if_not_exists: if true, do not throw an error if a relation with - * the same name already exists. * * Returns the OID of the created index. 
*/ @@ -709,15 +715,10 @@ index_create(Relation heapRelation, Oid *classObjectId, int16 *coloptions, Datum reloptions, - bool isprimary, - bool isconstraint, - bool deferrable, - bool initdeferred, + bits16 flags, + bits16 constr_flags, bool allow_system_table_mods, - bool skip_build, - bool concurrent, - bool is_internal, - bool if_not_exists) + bool is_internal) { Oid heapRelationId = RelationGetRelid(heapRelation); Relation pg_class; @@ -729,6 +730,12 @@ index_create(Relation heapRelation, Oid namespaceId; int i; char relpersistence; + bool isprimary = (flags & INDEX_CREATE_IS_PRIMARY) != 0; + bool concurrent = (flags & INDEX_CREATE_CONCURRENT) != 0; + + /* constraint flags can only be set when a constraint is requested */ + Assert((constr_flags == 0) || + ((flags & INDEX_CREATE_ADD_CONSTRAINT) != 0)); is_exclusion = (indexInfo->ii_ExclusionOps != NULL); @@ -794,7 +801,7 @@ index_create(Relation heapRelation, if (get_relname_relid(indexRelationName, namespaceId)) { - if (if_not_exists) + if ((flags & INDEX_CREATE_IF_NOT_EXISTS) != 0) { ereport(NOTICE, (errcode(ERRCODE_DUPLICATE_TABLE), @@ -917,7 +924,7 @@ index_create(Relation heapRelation, UpdateIndexRelation(indexRelationId, heapRelationId, indexInfo, collationObjectId, classObjectId, coloptions, isprimary, is_exclusion, - !deferrable, + (constr_flags & INDEX_CONSTR_CREATE_DEFERRABLE) == 0, !concurrent); /* @@ -943,7 +950,7 @@ index_create(Relation heapRelation, myself.objectId = indexRelationId; myself.objectSubId = 0; - if (isconstraint) + if ((flags & INDEX_CREATE_ADD_CONSTRAINT) != 0) { char constraintType; @@ -964,11 +971,7 @@ index_create(Relation heapRelation, indexInfo, indexRelationName, constraintType, - deferrable, - initdeferred, - false, /* already marked primary */ - false, /* pg_index entry is OK */ - false, /* no old dependencies */ + constr_flags, allow_system_table_mods, is_internal); } @@ -1005,10 +1008,6 @@ index_create(Relation heapRelation, recordDependencyOn(&myself, &referenced, DEPENDENCY_AUTO); } - - /* Non-constraint indexes can't be deferrable */ - Assert(!deferrable); - Assert(!initdeferred); } /* Store dependency on collations */ @@ -1059,9 +1058,7 @@ index_create(Relation heapRelation, else { /* Bootstrap mode - assert we weren't asked for constraint support */ - Assert(!isconstraint); - Assert(!deferrable); - Assert(!initdeferred); + Assert((flags & INDEX_CREATE_ADD_CONSTRAINT) == 0); } /* Post creation hook for new index */ @@ -1089,15 +1086,16 @@ index_create(Relation heapRelation, * If this is bootstrap (initdb) time, then we don't actually fill in the * index yet. We'll be creating more indexes and classes later, so we * delay filling them in until just before we're done with bootstrapping. - * Similarly, if the caller specified skip_build then filling the index is - * delayed till later (ALTER TABLE can save work in some cases with this). - * Otherwise, we call the AM routine that constructs the index. + * Similarly, if the caller specified to skip the build then filling the + * index is delayed till later (ALTER TABLE can save work in some cases + * with this). Otherwise, we call the AM routine that constructs the + * index. */ if (IsBootstrapProcessingMode()) { index_register(heapRelationId, indexRelationId, indexInfo); } - else if (skip_build) + else if ((flags & INDEX_CREATE_SKIP_BUILD) != 0) { /* * Caller is responsible for filling the index later on. 
However, @@ -1137,12 +1135,13 @@ index_create(Relation heapRelation, * constraintName: what it say (generally, should match name of index) * constraintType: one of CONSTRAINT_PRIMARY, CONSTRAINT_UNIQUE, or * CONSTRAINT_EXCLUSION - * deferrable: constraint is DEFERRABLE - * initdeferred: constraint is INITIALLY DEFERRED - * mark_as_primary: if true, set flags to mark index as primary key - * update_pgindex: if true, update pg_index row (else caller's done that) - * remove_old_dependencies: if true, remove existing dependencies of index - * on table's columns + * flags: bitmask that can include any combination of these bits: + * INDEX_CONSTR_CREATE_MARK_AS_PRIMARY: index is a PRIMARY KEY + * INDEX_CONSTR_CREATE_DEFERRABLE: constraint is DEFERRABLE + * INDEX_CONSTR_CREATE_INIT_DEFERRED: constraint is INITIALLY DEFERRED + * INDEX_CONSTR_CREATE_UPDATE_INDEX: update the pg_index row + * INDEX_CONSTR_CREATE_REMOVE_OLD_DEPS: remove existing dependencies + * of index on table's columns * allow_system_table_mods: allow table to be a system catalog * is_internal: index is constructed due to internal process */ @@ -1152,11 +1151,7 @@ index_constraint_create(Relation heapRelation, IndexInfo *indexInfo, const char *constraintName, char constraintType, - bool deferrable, - bool initdeferred, - bool mark_as_primary, - bool update_pgindex, - bool remove_old_dependencies, + bits16 constr_flags, bool allow_system_table_mods, bool is_internal) { @@ -1164,6 +1159,13 @@ index_constraint_create(Relation heapRelation, ObjectAddress myself, referenced; Oid conOid; + bool deferrable; + bool initdeferred; + bool mark_as_primary; + + deferrable = (constr_flags & INDEX_CONSTR_CREATE_DEFERRABLE) != 0; + initdeferred = (constr_flags & INDEX_CONSTR_CREATE_INIT_DEFERRED) != 0; + mark_as_primary = (constr_flags & INDEX_CONSTR_CREATE_MARK_AS_PRIMARY) != 0; /* constraint creation support doesn't work while bootstrapping */ Assert(!IsBootstrapProcessingMode()); @@ -1190,7 +1192,7 @@ index_constraint_create(Relation heapRelation, * has any expressions or predicate, but we'd never be turning such an * index into a UNIQUE or PRIMARY KEY constraint. */ - if (remove_old_dependencies) + if (constr_flags & INDEX_CONSTR_CREATE_REMOVE_OLD_DEPS) deleteDependencyRecordsForClass(RelationRelationId, indexRelationId, RelationRelationId, DEPENDENCY_AUTO); @@ -1295,7 +1297,8 @@ index_constraint_create(Relation heapRelation, * is a risk that concurrent readers of the table will miss seeing this * index at all. 
*/ - if (update_pgindex && (mark_as_primary || deferrable)) + if ((constr_flags & INDEX_CONSTR_CREATE_UPDATE_INDEX) && + (mark_as_primary || deferrable)) { Relation pg_index; HeapTuple indexTuple; diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c index 6f517bbcda..539ca79ad3 100644 --- a/src/backend/catalog/toasting.c +++ b/src/backend/catalog/toasting.c @@ -333,8 +333,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, BTREE_AM_OID, rel->rd_rel->reltablespace, collationObjectId, classObjectId, coloptions, (Datum) 0, - true, false, false, false, - true, false, false, true, false); + INDEX_CREATE_IS_PRIMARY, 0, true, true); heap_close(toast_rel, NoLock); diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index 89114af119..97091dd9fb 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -333,6 +333,8 @@ DefineIndex(Oid relationId, Datum reloptions; int16 *coloptions; IndexInfo *indexInfo; + bits16 flags; + bits16 constr_flags; int numberOfAttributes; TransactionId limitXmin; VirtualTransactionId *old_snapshots; @@ -661,20 +663,35 @@ DefineIndex(Oid relationId, Assert(!OidIsValid(stmt->oldNode) || (skip_build && !stmt->concurrent)); /* - * Make the catalog entries for the index, including constraints. Then, if - * not skip_build || concurrent, actually build the index. + * Make the catalog entries for the index, including constraints. This + * step also actually builds the index, except if caller requested not to + * or in concurrent mode, in which case it'll be done later. */ + flags = constr_flags = 0; + if (stmt->isconstraint) + flags |= INDEX_CREATE_ADD_CONSTRAINT; + if (skip_build || stmt->concurrent) + flags |= INDEX_CREATE_SKIP_BUILD; + if (stmt->if_not_exists) + flags |= INDEX_CREATE_IF_NOT_EXISTS; + if (stmt->concurrent) + flags |= INDEX_CREATE_CONCURRENT; + if (stmt->primary) + flags |= INDEX_CREATE_IS_PRIMARY; + + if (stmt->deferrable) + constr_flags |= INDEX_CONSTR_CREATE_DEFERRABLE; + if (stmt->initdeferred) + constr_flags |= INDEX_CONSTR_CREATE_INIT_DEFERRED; + indexRelationId = index_create(rel, indexRelationName, indexRelationId, stmt->oldNode, indexInfo, indexColNames, accessMethodId, tablespaceId, collationObjectId, classObjectId, - coloptions, reloptions, stmt->primary, - stmt->isconstraint, stmt->deferrable, stmt->initdeferred, - allowSystemTableMods, - skip_build || stmt->concurrent, - stmt->concurrent, !check_rights, - stmt->if_not_exists); + coloptions, reloptions, + flags, constr_flags, + allowSystemTableMods, !check_rights); ObjectAddressSet(address, RelationRelationId, indexRelationId); diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 9c66aa75ed..d19846d005 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -6836,6 +6836,7 @@ ATExecAddIndexConstraint(AlteredTableInfo *tab, Relation rel, char *constraintName; char constraintType; ObjectAddress address; + bits16 flags; Assert(IsA(stmt, IndexStmt)); Assert(OidIsValid(index_oid)); @@ -6880,16 +6881,18 @@ ATExecAddIndexConstraint(AlteredTableInfo *tab, Relation rel, constraintType = CONSTRAINT_UNIQUE; /* Create the catalog entries for the constraint */ + flags = INDEX_CONSTR_CREATE_UPDATE_INDEX | + INDEX_CONSTR_CREATE_REMOVE_OLD_DEPS | + (stmt->initdeferred ? INDEX_CONSTR_CREATE_INIT_DEFERRED : 0) | + (stmt->deferrable ? INDEX_CONSTR_CREATE_DEFERRABLE : 0) | + (stmt->primary ? 
INDEX_CONSTR_CREATE_MARK_AS_PRIMARY : 0); + address = index_constraint_create(rel, index_oid, indexInfo, constraintName, constraintType, - stmt->deferrable, - stmt->initdeferred, - stmt->primary, - true, /* update pg_index */ - true, /* remove old dependencies */ + flags, allowSystemTableMods, false); /* is_internal */ diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h index 1d4ec09f8f..ceaa91f1b2 100644 --- a/src/include/catalog/index.h +++ b/src/include/catalog/index.h @@ -42,6 +42,12 @@ extern void index_check_primary_key(Relation heapRel, IndexInfo *indexInfo, bool is_alter_table); +#define INDEX_CREATE_IS_PRIMARY (1 << 0) +#define INDEX_CREATE_ADD_CONSTRAINT (1 << 1) +#define INDEX_CREATE_SKIP_BUILD (1 << 2) +#define INDEX_CREATE_CONCURRENT (1 << 3) +#define INDEX_CREATE_IF_NOT_EXISTS (1 << 4) + extern Oid index_create(Relation heapRelation, const char *indexRelationName, Oid indexRelationId, @@ -54,26 +60,23 @@ extern Oid index_create(Relation heapRelation, Oid *classObjectId, int16 *coloptions, Datum reloptions, - bool isprimary, - bool isconstraint, - bool deferrable, - bool initdeferred, + bits16 flags, + bits16 constr_flags, bool allow_system_table_mods, - bool skip_build, - bool concurrent, - bool is_internal, - bool if_not_exists); + bool is_internal); + +#define INDEX_CONSTR_CREATE_MARK_AS_PRIMARY (1 << 0) +#define INDEX_CONSTR_CREATE_DEFERRABLE (1 << 1) +#define INDEX_CONSTR_CREATE_INIT_DEFERRED (1 << 2) +#define INDEX_CONSTR_CREATE_UPDATE_INDEX (1 << 3) +#define INDEX_CONSTR_CREATE_REMOVE_OLD_DEPS (1 << 4) extern ObjectAddress index_constraint_create(Relation heapRelation, Oid indexRelationId, IndexInfo *indexInfo, const char *constraintName, char constraintType, - bool deferrable, - bool initdeferred, - bool mark_as_primary, - bool update_pgindex, - bool remove_old_dependencies, + bits16 constr_flags, bool allow_system_table_mods, bool is_internal); From 6d776522d243d38faca6924d9b3c7cfaf0c4860d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 14 Nov 2017 12:33:10 -0500 Subject: [PATCH 0538/1087] Document changes in large-object privilege checking. Commit 5ecc0d738 removed the hard-wired superuser checks in lo_import and lo_export in favor of protecting them with SQL permissions, but failed to adjust the documentation to match. Fix that, and add a paragraph pointing out the nontrivial security hazards involved with actually granting such permissions. (It's still better than ALLOW_DANGEROUS_LO_FUNCTIONS, though.) Also, commit ae20b23a9 caused large object read/write privilege to be checked during lo_open() rather than in the actual read or write calls. Document that. Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com --- doc/src/sgml/config.sgml | 3 --- doc/src/sgml/lobj.sgml | 42 ++++++++++++++++++++++++++++++++++++---- 2 files changed, 38 insertions(+), 7 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index d360fc4d58..996e82534a 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -7540,9 +7540,6 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Setting this variable does not disable all security checks related to large objects — only those for which the default behavior has changed in PostgreSQL 9.0. - For example, lo_import() and - lo_export() need superuser privileges regardless - of this setting. 
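As a concrete companion to the documentation change below, here is a minimal client-side libpq sketch; the connection string and file path are placeholders and error handling is elided. It uses the client-side lo_import(), which needs no special database privilege beyond access to the large object, and lo_open(), which is where the read/write privilege check now happens:

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>		/* INV_READ / INV_WRITE */

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=test");	/* placeholder conninfo */
	Oid			lobj;
	int			fd;

	if (PQstatus(conn) != CONNECTION_OK)
		return 1;

	/* large-object calls must run inside a transaction block */
	PQclear(PQexec(conn, "begin"));

	/* reads the *client's* file system, with the client's OS permissions */
	lobj = lo_import(conn, "/tmp/motd");

	/* SELECT privilege on the large object is checked here (PG 11 and later) */
	fd = lo_open(conn, lobj, INV_READ);
	lo_close(conn, fd);

	PQclear(PQexec(conn, "commit"));
	PQfinish(conn);
	return 0;
}
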
diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index c743b5c0ba..e11c8e0f8b 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -292,6 +292,18 @@ int lo_open(PGconn *conn, Oid lobjId, int mode); modes for ordinary SQL SELECT commands. + + lo_open will fail if SELECT + privilege is not available for the large object, or + if INV_WRITE is specified and UPDATE + privilege is not available. + (Prior to PostgreSQL 11, these privilege + checks were instead performed at the first actual read or write call + using the descriptor.) + These privilege checks can be disabled with the + run-time parameter. + + An example: @@ -634,12 +646,34 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image lo_export functions behave considerably differently from their client-side analogs. These two functions read and write files in the server's file system, using the permissions of the database's - owning user. Therefore, their use is restricted to superusers. In - contrast, the client-side import and export functions read and write files - in the client's file system, using the permissions of the client program. - The client-side functions do not require superuser privilege. + owning user. Therefore, by default their use is restricted to superusers. + In contrast, the client-side import and export functions read and write + files in the client's file system, using the permissions of the client + program. The client-side functions do not require any database + privileges, except the privilege to read or write the large object in + question. + + + It is possible to GRANT use of the + server-side lo_import + and lo_export functions to non-superusers, but + careful consideration of the security implications is required. A + malicious user of such privileges could easily parlay them into becoming + superuser (for example by rewriting server configuration files), or could + attack the rest of the server's file system without bothering to obtain + database superuser privileges as such. Access to roles having + such privilege must therefore be guarded just as carefully as access to + superuser roles. Nonetheless, if use of + server-side lo_import + or lo_export is needed for some routine task, it's + safer to use a role with such privileges than one with full superuser + privileges, as that helps to reduce the risk of damage from accidental + errors. + + + The functionality of lo_read and lo_write is also available via server-side calls, From 91aec93e6089a5ba49cce0aca3bf7f7022d62ea4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 14 Nov 2017 13:46:54 -0500 Subject: [PATCH 0539/1087] Rearrange c.h to create a "compiler characteristics" section. Generalize section 1 to handle stuff that is principally about the compiler (not libraries), such as attributes, and collect stuff there that had been dropped into various other parts of c.h. Also, push all the gettext macros into section 8, so that section 0 is really just inclusions rather than inclusions and random other stuff. The primary goal here is to get pg_attribute_aligned() defined before section 3, so that we can use it with int128. But this seems like good cleanup anyway. This patch just moves macro definitions around, and shouldn't result in any changes in generated code. But I'll push it out separately to see if the buildfarm agrees. 
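For readers unfamiliar with these macros, here is a toy example of the pg_attribute_aligned() idiom whose early availability this rearrangement enables. It is not part of the patch; a GCC-style compiler and an x86-64 target are assumed, and the printed values reflect that compiler's handling of the attribute.

#include <stdio.h>

/* simplified stand-in for the macro this commit moves earlier in c.h */
#define pg_attribute_aligned(a) __attribute__((aligned(a)))

/* coerce a 16-byte-aligned type down to 8-byte alignment, as the
 * follow-up int128 fix does with MAXIMUM_ALIGNOF */
typedef __int128 int128 pg_attribute_aligned(8);

int
main(void)
{
	printf("native __int128 alignment: %zu\n", _Alignof(__int128));	/* typically 16 */
	printf("coerced int128 alignment:  %zu\n", _Alignof(int128));	/* 8 */
	return 0;
}
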
Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org --- src/include/c.h | 271 ++++++++++++++++++++++++------------------------ 1 file changed, 136 insertions(+), 135 deletions(-) diff --git a/src/include/c.h b/src/include/c.h index 852551c121..331e6f8f93 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -26,7 +26,7 @@ * section description * ------- ------------------------------------------------ * 0) pg_config.h and standard system headers - * 1) hacks to cope with non-ANSI C compilers + * 1) compiler characteristics * 2) bool, true, false, TRUE, FALSE * 3) standard system types * 4) IsValid macros for system types @@ -90,61 +90,133 @@ #include #endif #include - #include #if defined(WIN32) || defined(__CYGWIN__) #include /* ensure O_BINARY is available */ #endif +#include +#ifdef ENABLE_NLS +#include +#endif #if defined(WIN32) || defined(__CYGWIN__) /* We have to redefine some system functions after they are included above. */ #include "pg_config_os.h" #endif -/* - * Force disable inlining if PG_FORCE_DISABLE_INLINE is defined. This is used - * to work around compiler bugs and might also be useful for investigatory - * purposes by defining the symbol in the platform's header.. + +/* ---------------------------------------------------------------- + * Section 1: compiler characteristics * - * This is done early (in slightly the wrong section) as functionality later - * in this file might want to rely on inline functions. + * type prefixes (const, signed, volatile, inline) are handled in pg_config.h. + * ---------------------------------------------------------------- + */ + +/* + * Disable "inline" if PG_FORCE_DISABLE_INLINE is defined. + * This is used to work around compiler bugs and might also be useful for + * investigatory purposes. */ #ifdef PG_FORCE_DISABLE_INLINE #undef inline #define inline #endif -/* Must be before gettext() games below */ -#include +/* + * Attribute macros + * + * GCC: https://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html + * GCC: https://gcc.gnu.org/onlinedocs/gcc/Type-Attributes.html + * Sunpro: https://docs.oracle.com/cd/E18659_01/html/821-1384/gjzke.html + * XLC: http://www-01.ibm.com/support/knowledgecenter/SSGH2K_11.1.0/com.ibm.xlc111.aix.doc/language_ref/function_attributes.html + * XLC: http://www-01.ibm.com/support/knowledgecenter/SSGH2K_11.1.0/com.ibm.xlc111.aix.doc/language_ref/type_attrib.html + */ -#define _(x) gettext(x) +/* only GCC supports the unused attribute */ +#ifdef __GNUC__ +#define pg_attribute_unused() __attribute__((unused)) +#else +#define pg_attribute_unused() +#endif -#ifdef ENABLE_NLS -#include +/* + * Append PG_USED_FOR_ASSERTS_ONLY to definitions of variables that are only + * used in assert-enabled builds, to avoid compiler warnings about unused + * variables in assert-disabled builds. + */ +#ifdef USE_ASSERT_CHECKING +#define PG_USED_FOR_ASSERTS_ONLY #else -#define gettext(x) (x) -#define dgettext(d,x) (x) -#define ngettext(s,p,n) ((n) == 1 ? (s) : (p)) -#define dngettext(d,s,p,n) ((n) == 1 ? 
(s) : (p)) +#define PG_USED_FOR_ASSERTS_ONLY pg_attribute_unused() #endif +/* GCC and XLC support format attributes */ +#if defined(__GNUC__) || defined(__IBMC__) +#define pg_attribute_format_arg(a) __attribute__((format_arg(a))) +#define pg_attribute_printf(f,a) __attribute__((format(PG_PRINTF_ATTRIBUTE, f, a))) +#else +#define pg_attribute_format_arg(a) +#define pg_attribute_printf(f,a) +#endif + +/* GCC, Sunpro and XLC support aligned, packed and noreturn */ +#if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__) +#define pg_attribute_aligned(a) __attribute__((aligned(a))) +#define pg_attribute_noreturn() __attribute__((noreturn)) +#define pg_attribute_packed() __attribute__((packed)) +#define HAVE_PG_ATTRIBUTE_NORETURN 1 +#else /* - * Use this to mark string constants as needing translation at some later - * time, rather than immediately. This is useful for cases where you need - * access to the original string and translated string, and for cases where - * immediate translation is not possible, like when initializing global - * variables. - * http://www.gnu.org/software/autoconf/manual/gettext/Special-cases.html + * NB: aligned and packed are not given default definitions because they + * affect code functionality; they *must* be implemented by the compiler + * if they are to be used. */ -#define gettext_noop(x) (x) +#define pg_attribute_noreturn() +#endif +/* + * Forcing a function not to be inlined can be useful if it's the slow path of + * a performance-critical function, or should be visible in profiles to allow + * for proper cost attribution. Note that unlike the pg_attribute_XXX macros + * above, this should be placed before the function's return type and name. + */ +/* GCC, Sunpro and XLC support noinline via __attribute__ */ +#if (defined(__GNUC__) && __GNUC__ > 2) || defined(__SUNPRO_C) || defined(__IBMC__) +#define pg_noinline __attribute__((noinline)) +/* msvc via declspec */ +#elif defined(_MSC_VER) +#define pg_noinline __declspec(noinline) +#else +#define pg_noinline +#endif -/* ---------------------------------------------------------------- - * Section 1: hacks to cope with non-ANSI C compilers +/* + * Mark a point as unreachable in a portable fashion. This should preferably + * be something that the compiler understands, to aid code generation. + * In assert-enabled builds, we prefer abort() for debugging reasons. + */ +#if defined(HAVE__BUILTIN_UNREACHABLE) && !defined(USE_ASSERT_CHECKING) +#define pg_unreachable() __builtin_unreachable() +#elif defined(_MSC_VER) && !defined(USE_ASSERT_CHECKING) +#define pg_unreachable() __assume(0) +#else +#define pg_unreachable() abort() +#endif + +/* + * Hints to the compiler about the likelihood of a branch. Both likely() and + * unlikely() return the boolean value of the contained expression. * - * type prefixes (const, signed, volatile, inline) are handled in pg_config.h. - * ---------------------------------------------------------------- + * These should only be used sparingly, in very hot code paths. It's very easy + * to mis-estimate likelihoods. 
*/ +#if __GNUC__ >= 3 +#define likely(x) __builtin_expect((x) != 0, 1) +#define unlikely(x) __builtin_expect((x) != 0, 0) +#else +#define likely(x) ((x) != 0) +#define unlikely(x) ((x) != 0) +#endif /* * CppAsString @@ -183,6 +255,7 @@ #endif #endif + /* ---------------------------------------------------------------- * Section 2: bool, true, false, TRUE, FALSE * ---------------------------------------------------------------- @@ -209,6 +282,7 @@ typedef char bool; #ifndef false #define false ((bool) 0) #endif + #endif /* not C++ */ #ifndef TRUE @@ -492,16 +566,6 @@ typedef NameData *Name; #define NameStr(name) ((name).data) -/* - * Support macros for escaping strings. escape_backslash should be true - * if generating a non-standard-conforming string. Prefixing a string - * with ESCAPE_STRING_SYNTAX guarantees it is non-standard-conforming. - * Beware of multiple evaluation of the "ch" argument! - */ -#define SQL_STR_DOUBLE(ch, escape_backslash) \ - ((ch) == '\'' || ((ch) == '\\' && (escape_backslash))) - -#define ESCAPE_STRING_SYNTAX 'E' /* ---------------------------------------------------------------- * Section 4: IsValid macros for system types @@ -563,6 +627,9 @@ typedef NameData *Name; * * NOTE: TYPEALIGN[_DOWN] will not work if ALIGNVAL is not a power of 2. * That case seems extremely unlikely to be needed in practice, however. + * + * NOTE: MAXIMUM_ALIGNOF, and hence MAXALIGN(), intentionally exclude any + * larger-than-8-byte types the compiler might have. * ---------------- */ @@ -600,64 +667,6 @@ typedef NameData *Name; /* we don't currently need wider versions of the other ALIGN macros */ #define MAXALIGN64(LEN) TYPEALIGN64(MAXIMUM_ALIGNOF, (LEN)) -/* ---------------- - * Attribute macros - * - * GCC: https://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html - * GCC: https://gcc.gnu.org/onlinedocs/gcc/Type-Attributes.html - * Sunpro: https://docs.oracle.com/cd/E18659_01/html/821-1384/gjzke.html - * XLC: http://www-01.ibm.com/support/knowledgecenter/SSGH2K_11.1.0/com.ibm.xlc111.aix.doc/language_ref/function_attributes.html - * XLC: http://www-01.ibm.com/support/knowledgecenter/SSGH2K_11.1.0/com.ibm.xlc111.aix.doc/language_ref/type_attrib.html - * ---------------- - */ - -/* only GCC supports the unused attribute */ -#ifdef __GNUC__ -#define pg_attribute_unused() __attribute__((unused)) -#else -#define pg_attribute_unused() -#endif - -/* GCC and XLC support format attributes */ -#if defined(__GNUC__) || defined(__IBMC__) -#define pg_attribute_format_arg(a) __attribute__((format_arg(a))) -#define pg_attribute_printf(f,a) __attribute__((format(PG_PRINTF_ATTRIBUTE, f, a))) -#else -#define pg_attribute_format_arg(a) -#define pg_attribute_printf(f,a) -#endif - -/* GCC, Sunpro and XLC support aligned, packed and noreturn */ -#if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__) -#define pg_attribute_aligned(a) __attribute__((aligned(a))) -#define pg_attribute_noreturn() __attribute__((noreturn)) -#define pg_attribute_packed() __attribute__((packed)) -#define HAVE_PG_ATTRIBUTE_NORETURN 1 -#else -/* - * NB: aligned and packed are not given default definitions because they - * affect code functionality; they *must* be implemented by the compiler - * if they are to be used. - */ -#define pg_attribute_noreturn() -#endif - - -/* - * Forcing a function not to be inlined can be useful if it's the slow path of - * a performance-critical function, or should be visible in profiles to allow - * for proper cost attribution. 
Note that unlike the pg_attribute_XXX macros - * above, this should be placed before the function's return type and name. - */ -/* GCC, Sunpro and XLC support noinline via __attribute__ */ -#if (defined(__GNUC__) && __GNUC__ > 2) || defined(__SUNPRO_C) || defined(__IBMC__) -#define pg_noinline __attribute__((noinline)) -/* msvc via declspec */ -#elif defined(_MSC_VER) -#define pg_noinline __declspec(noinline) -#else -#define pg_noinline -#endif /* ---------------------------------------------------------------- * Section 6: assertions @@ -694,6 +703,7 @@ typedef NameData *Name; #define AssertArg(condition) assert(condition) #define AssertState(condition) assert(condition) #define AssertPointerAlignment(ptr, bndr) ((void)true) + #else /* USE_ASSERT_CHECKING && !FRONTEND */ /* @@ -939,36 +949,6 @@ typedef NameData *Name; } while (0) -/* - * Mark a point as unreachable in a portable fashion. This should preferably - * be something that the compiler understands, to aid code generation. - * In assert-enabled builds, we prefer abort() for debugging reasons. - */ -#if defined(HAVE__BUILTIN_UNREACHABLE) && !defined(USE_ASSERT_CHECKING) -#define pg_unreachable() __builtin_unreachable() -#elif defined(_MSC_VER) && !defined(USE_ASSERT_CHECKING) -#define pg_unreachable() __assume(0) -#else -#define pg_unreachable() abort() -#endif - - -/* - * Hints to the compiler about the likelihood of a branch. Both likely() and - * unlikely() return the boolean value of the contained expression. - * - * These should only be used sparingly, in very hot code paths. It's very easy - * to mis-estimate likelihoods. - */ -#if __GNUC__ >= 3 -#define likely(x) __builtin_expect((x) != 0, 1) -#define unlikely(x) __builtin_expect((x) != 0, 0) -#else -#define likely(x) ((x) != 0) -#define unlikely(x) ((x) != 0) -#endif - - /* ---------------------------------------------------------------- * Section 8: random stuff * ---------------------------------------------------------------- @@ -978,26 +958,47 @@ typedef NameData *Name; #define HIGHBIT (0x80) #define IS_HIGHBIT_SET(ch) ((unsigned char)(ch) & HIGHBIT) +/* + * Support macros for escaping strings. escape_backslash should be true + * if generating a non-standard-conforming string. Prefixing a string + * with ESCAPE_STRING_SYNTAX guarantees it is non-standard-conforming. + * Beware of multiple evaluation of the "ch" argument! + */ +#define SQL_STR_DOUBLE(ch, escape_backslash) \ + ((ch) == '\'' || ((ch) == '\\' && (escape_backslash))) + +#define ESCAPE_STRING_SYNTAX 'E' + + #define STATUS_OK (0) #define STATUS_ERROR (-1) #define STATUS_EOF (-2) #define STATUS_FOUND (1) #define STATUS_WAITING (2) - /* - * Append PG_USED_FOR_ASSERTS_ONLY to definitions of variables that are only - * used in assert-enabled builds, to avoid compiler warnings about unused - * variables in assert-disabled builds. + * gettext support */ -#ifdef USE_ASSERT_CHECKING -#define PG_USED_FOR_ASSERTS_ONLY -#else -#define PG_USED_FOR_ASSERTS_ONLY pg_attribute_unused() + +#ifndef ENABLE_NLS +/* stuff we'd otherwise get from */ +#define gettext(x) (x) +#define dgettext(d,x) (x) +#define ngettext(s,p,n) ((n) == 1 ? (s) : (p)) +#define dngettext(d,s,p,n) ((n) == 1 ? (s) : (p)) #endif +#define _(x) gettext(x) -/* gettext domain name mangling */ +/* + * Use this to mark string constants as needing translation at some later + * time, rather than immediately. 
This is useful for cases where you need + * access to the original string and translated string, and for cases where + * immediate translation is not possible, like when initializing global + * variables. + * http://www.gnu.org/software/autoconf/manual/gettext/Special-cases.html + */ +#define gettext_noop(x) (x) /* * To better support parallel installations of major PostgreSQL From 7518049980be1d90264addab003476ae105f70d4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 14 Nov 2017 15:03:55 -0500 Subject: [PATCH 0540/1087] Prevent int128 from requiring more than MAXALIGN alignment. Our initial work with int128 neglected alignment considerations, an oversight that came back to bite us in bug #14897 from Vincent Lachenal. It is unsurprising that int128 might have a 16-byte alignment requirement; what's slightly more surprising is that even notoriously lax Intel chips sometimes enforce that. Raising MAXALIGN seems out of the question: the costs in wasted disk and memory space would be significant, and there would also be an on-disk compatibility break. Nor does it seem very practical to try to allow some data structures to have more-than-MAXALIGN alignment requirement, as we'd have to push knowledge of that throughout various code that copies data structures around. The only way out of the box is to make type int128 conform to the system's alignment assumptions. Fortunately, gcc supports that via its __attribute__(aligned()) pragma; and since we don't currently support int128 on non-gcc-workalike compilers, we shouldn't be losing any platform support this way. Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and called it a day, I did a little bit of extra work to make the code more portable than that: it will also support int128 on compilers without __attribute__(aligned()), if the native alignment of their 128-bit-int type is no more than that of int64. Add a regression test case that exercises the one known instance of the problem, in parallel aggregation over a bigint column. This will need to be back-patched, along with the preparatory commit 91aec93e6. But let's see what the buildfarm makes of it first. Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org --- config/c-compiler.m4 | 9 ++-- configure | 42 ++++++++++++++++++- configure.in | 7 +++- src/include/c.h | 26 +++++++++--- src/include/pg_config.h.in | 3 ++ src/include/pg_config.h.win32 | 3 ++ src/test/regress/expected/select_parallel.out | 18 ++++++++ src/test/regress/sql/select_parallel.sql | 6 +++ 8 files changed, 102 insertions(+), 12 deletions(-) diff --git a/config/c-compiler.m4 b/config/c-compiler.m4 index 6dcc790649..492f6832cf 100644 --- a/config/c-compiler.m4 +++ b/config/c-compiler.m4 @@ -96,9 +96,11 @@ undefine([Ac_cachevar])dnl # PGAC_TYPE_128BIT_INT # --------------------- # Check if __int128 is a working 128 bit integer type, and if so -# define PG_INT128_TYPE to that typename. This currently only detects -# a GCC/clang extension, but support for different environments may be -# added in the future. +# define PG_INT128_TYPE to that typename, and define ALIGNOF_PG_INT128_TYPE +# as its alignment requirement. +# +# This currently only detects a GCC/clang extension, but support for other +# environments may be added in the future. # # For the moment we only test for support for 128bit math; support for # 128bit literals and snprintf is not required. 
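Concretely, the kind of stand-alone test program such a probe compiles is roughly the following sketch (a simplification for illustration, not the literal m4 expansion); the volatile qualifiers force the compiler to emit real 128-bit arithmetic rather than folding it away at compile time:

/* sketch of a configure-style __int128 probe */
volatile __int128 a = 5;
volatile __int128 b = 3;

int
main(void)
{
	volatile __int128 c;

	c = a * b;			/* needs a working 128-bit multiply */
	c += a - b;			/* ... and 128-bit add/subtract */
	return (c == 17) ? 0 : 1;	/* 5*3 + (5-3) == 17 on success */
}
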
@@ -128,6 +130,7 @@ return 1; [pgac_cv__128bit_int=no])]) if test x"$pgac_cv__128bit_int" = xyes ; then AC_DEFINE(PG_INT128_TYPE, __int128, [Define to the name of a signed 128-bit integer type.]) + AC_CHECK_ALIGNOF(PG_INT128_TYPE) fi])# PGAC_TYPE_128BIT_INT diff --git a/configure b/configure index b8995ad547..b31134832e 100755 --- a/configure +++ b/configure @@ -14864,7 +14864,10 @@ _ACEOF # Compute maximum alignment of any basic type. # We assume long's alignment is at least as strong as char, short, or int; -# but we must check long long (if it exists) and double. +# but we must check long long (if it is being used for int64) and double. +# Note that we intentionally do not consider any types wider than 64 bits, +# as allowing MAXIMUM_ALIGNOF to exceed 8 would be too much of a penalty +# for disk and memory space. MAX_ALIGNOF=$ac_cv_alignof_long if test $MAX_ALIGNOF -lt $ac_cv_alignof_double ; then @@ -14924,7 +14927,7 @@ _ACEOF fi -# Check for extensions offering the integer scalar type __int128. +# Some compilers offer a 128-bit integer scalar type. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for __int128" >&5 $as_echo_n "checking for __int128... " >&6; } if ${pgac_cv__128bit_int+:} false; then : @@ -14974,6 +14977,41 @@ if test x"$pgac_cv__128bit_int" = xyes ; then $as_echo "#define PG_INT128_TYPE __int128" >>confdefs.h + # The cast to long int works around a bug in the HP C Compiler, +# see AC_CHECK_SIZEOF for more information. +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking alignment of PG_INT128_TYPE" >&5 +$as_echo_n "checking alignment of PG_INT128_TYPE... " >&6; } +if ${ac_cv_alignof_PG_INT128_TYPE+:} false; then : + $as_echo_n "(cached) " >&6 +else + if ac_fn_c_compute_int "$LINENO" "(long int) offsetof (ac__type_alignof_, y)" "ac_cv_alignof_PG_INT128_TYPE" "$ac_includes_default +#ifndef offsetof +# define offsetof(type, member) ((char *) &((type *) 0)->member - (char *) 0) +#endif +typedef struct { char x; PG_INT128_TYPE y; } ac__type_alignof_;"; then : + +else + if test "$ac_cv_type_PG_INT128_TYPE" = yes; then + { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 +$as_echo "$as_me: error: in \`$ac_pwd':" >&2;} +as_fn_error 77 "cannot compute alignment of PG_INT128_TYPE +See \`config.log' for more details" "$LINENO" 5; } + else + ac_cv_alignof_PG_INT128_TYPE=0 + fi +fi + +fi +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_alignof_PG_INT128_TYPE" >&5 +$as_echo "$ac_cv_alignof_PG_INT128_TYPE" >&6; } + + + +cat >>confdefs.h <<_ACEOF +#define ALIGNOF_PG_INT128_TYPE $ac_cv_alignof_PG_INT128_TYPE +_ACEOF + + fi # Check for various atomic operations now that we have checked how to declare diff --git a/configure.in b/configure.in index 0e5aef37b4..3f26f038d6 100644 --- a/configure.in +++ b/configure.in @@ -1820,7 +1820,10 @@ AC_CHECK_ALIGNOF(double) # Compute maximum alignment of any basic type. # We assume long's alignment is at least as strong as char, short, or int; -# but we must check long long (if it exists) and double. +# but we must check long long (if it is being used for int64) and double. +# Note that we intentionally do not consider any types wider than 64 bits, +# as allowing MAXIMUM_ALIGNOF to exceed 8 would be too much of a penalty +# for disk and memory space. 
MAX_ALIGNOF=$ac_cv_alignof_long if test $MAX_ALIGNOF -lt $ac_cv_alignof_double ; then @@ -1837,7 +1840,7 @@ AC_DEFINE_UNQUOTED(MAXIMUM_ALIGNOF, $MAX_ALIGNOF, [Define as the maximum alignme AC_CHECK_TYPES([int8, uint8, int64, uint64], [], [], [#include ]) -# Check for extensions offering the integer scalar type __int128. +# Some compilers offer a 128-bit integer scalar type. PGAC_TYPE_128BIT_INT # Check for various atomic operations now that we have checked how to declare diff --git a/src/include/c.h b/src/include/c.h index 331e6f8f93..18809c9372 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -376,13 +376,29 @@ typedef unsigned long long int uint64; /* * 128-bit signed and unsigned integers - * There currently is only a limited support for the type. E.g. 128bit - * literals and snprintf are not supported; but math is. + * There currently is only limited support for such types. + * E.g. 128bit literals and snprintf are not supported; but math is. + * Also, because we exclude such types when choosing MAXIMUM_ALIGNOF, + * it must be possible to coerce the compiler to allocate them on no + * more than MAXALIGN boundaries. */ #if defined(PG_INT128_TYPE) -#define HAVE_INT128 -typedef PG_INT128_TYPE int128; -typedef unsigned PG_INT128_TYPE uint128; +#if defined(pg_attribute_aligned) || ALIGNOF_PG_INT128_TYPE <= MAXIMUM_ALIGNOF +#define HAVE_INT128 1 + +typedef PG_INT128_TYPE int128 +#if defined(pg_attribute_aligned) +pg_attribute_aligned(MAXIMUM_ALIGNOF) +#endif +; + +typedef unsigned PG_INT128_TYPE uint128 +#if defined(pg_attribute_aligned) +pg_attribute_aligned(MAXIMUM_ALIGNOF) +#endif +; + +#endif #endif /* diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index cfdcc5ac62..84d59f12b2 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -27,6 +27,9 @@ /* The normal alignment of `long long int', in bytes. */ #undef ALIGNOF_LONG_LONG_INT +/* The normal alignment of `PG_INT128_TYPE', in bytes. */ +#undef ALIGNOF_PG_INT128_TYPE + /* The normal alignment of `short', in bytes. */ #undef ALIGNOF_SHORT diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32 index ab9b941e89..e192d98c5a 100644 --- a/src/include/pg_config.h.win32 +++ b/src/include/pg_config.h.win32 @@ -34,6 +34,9 @@ /* The alignment requirement of a `long long int'. */ #define ALIGNOF_LONG_LONG_INT 8 +/* The normal alignment of `PG_INT128_TYPE', in bytes. */ +#undef ALIGNOF_PG_INT128_TYPE + /* The alignment requirement of a `short'. 
*/ #define ALIGNOF_SHORT 2 diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 6f04769e3e..63ed6a33c1 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -444,6 +444,24 @@ select * from reset enable_material; reset enable_hashagg; +-- check parallelized int8 aggregate (bug #14897) +explain (costs off) +select avg(unique1::int8) from tenk1; + QUERY PLAN +------------------------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 4 + -> Partial Aggregate + -> Parallel Index Only Scan using tenk1_unique1 on tenk1 +(5 rows) + +select avg(unique1::int8) from tenk1; + avg +----------------------- + 4999.5000000000000000 +(1 row) + -- gather merge test with a LIMIT explain (costs off) select fivethous from tenk1 order by fivethous limit 4; diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 9c1b87abdf..1bd2821083 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -175,6 +175,12 @@ reset enable_material; reset enable_hashagg; +-- check parallelized int8 aggregate (bug #14897) +explain (costs off) +select avg(unique1::int8) from tenk1; + +select avg(unique1::int8) from tenk1; + -- gather merge test with a LIMIT explain (costs off) select fivethous from tenk1 order by fivethous limit 4; From e5253fdc4f5fe2f38aec47e08c6aee93f934183d Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 15 Nov 2017 08:17:29 -0500 Subject: [PATCH 0541/1087] Add parallel_leader_participation GUC. Sometimes, for testing, it's useful to have the leader do nothing but read tuples from workers; and it's possible that could work out better even in production. Thomas Munro, reviewed by Amit Kapila and by me. A few final tweaks by me. Discussion: http://postgr.es/m/CAEepm=2U++Lp3bNTv2Bv_kkr5NE2pOyHhxU=G0YTa4ZhSYhHiw@mail.gmail.com --- doc/src/sgml/config.sgml | 26 ++++ src/backend/executor/nodeGather.c | 8 +- src/backend/executor/nodeGatherMerge.c | 6 +- src/backend/optimizer/path/costsize.c | 12 +- src/backend/optimizer/plan/planner.c | 1 + src/backend/utils/misc/guc.c | 10 ++ src/backend/utils/misc/postgresql.conf.sample | 1 + src/include/optimizer/planmain.h | 1 + src/test/regress/expected/select_parallel.out | 113 ++++++++++++++++++ src/test/regress/sql/select_parallel.sql | 36 ++++++ 10 files changed, 205 insertions(+), 9 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 996e82534a..fc1752fb3f 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -4265,6 +4265,32 @@ SELECT * FROM parent WHERE key = 2400; + + + parallel_leader_participation (boolean) + + + parallel_leader_participation configuration + parameter + + + + + + Allows the leader process to execute the query plan under + Gather and Gather Merge nodes + instead of waiting for worker processes. The default is + on. Setting this value to off + reduces the likelihood that workers will become blocked because the + leader is not reading tuples fast enough, but requires the leader + process to wait for worker processes to start up before the first + tuples can be produced. The degree to which the leader can help or + hinder performance depends on the plan type, number of workers and + query duration. 
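To make the cost-model arithmetic concrete, here is a small stand-alone sketch mirroring the get_parallel_divisor() logic shown further down in costsize.c; only the names local to the sketch are invented. With participation enabled, the leader is assumed to lose 30% of its time per worker to coordination, so its contribution shrinks as workers are added and vanishes at four workers.

#include <stdbool.h>
#include <stdio.h>

static double
get_parallel_divisor_sketch(int parallel_workers, bool leader_participates)
{
	double		parallel_divisor = parallel_workers;

	if (leader_participates)
	{
		double		leader_contribution;

		leader_contribution = 1.0 - (0.3 * parallel_workers);
		if (leader_contribution > 0)
			parallel_divisor += leader_contribution;
	}
	return parallel_divisor;
}

int
main(void)
{
	for (int w = 1; w <= 4; w++)
		printf("%d worker(s): divisor %.1f with participation, %.1f without\n",
			   w,
			   get_parallel_divisor_sketch(w, true),
			   get_parallel_divisor_sketch(w, false));
	return 0;
}
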
+ + + + force_parallel_mode (enum) diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 639f4f5af8..0298c65d06 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -38,6 +38,7 @@ #include "executor/nodeSubplan.h" #include "executor/tqueue.h" #include "miscadmin.h" +#include "optimizer/planmain.h" #include "pgstat.h" #include "utils/memutils.h" #include "utils/rel.h" @@ -73,7 +74,8 @@ ExecInitGather(Gather *node, EState *estate, int eflags) gatherstate->ps.ExecProcNode = ExecGather; gatherstate->initialized = false; - gatherstate->need_to_scan_locally = !node->single_copy; + gatherstate->need_to_scan_locally = + !node->single_copy && parallel_leader_participation; gatherstate->tuples_needed = -1; /* @@ -193,9 +195,9 @@ ExecGather(PlanState *pstate) node->nextreader = 0; } - /* Run plan locally if no workers or not single-copy. */ + /* Run plan locally if no workers or enabled and not single-copy. */ node->need_to_scan_locally = (node->nreaders == 0) - || !gather->single_copy; + || (!gather->single_copy && parallel_leader_participation); node->initialized = true; } diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 5625b12521..7206ab9197 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -23,6 +23,7 @@ #include "executor/tqueue.h" #include "lib/binaryheap.h" #include "miscadmin.h" +#include "optimizer/planmain.h" #include "utils/memutils.h" #include "utils/rel.h" @@ -233,8 +234,9 @@ ExecGatherMerge(PlanState *pstate) } } - /* always allow leader to participate */ - node->need_to_scan_locally = true; + /* allow leader to participate if enabled or no choice */ + if (parallel_leader_participation || node->nreaders == 0) + node->need_to_scan_locally = true; node->initialized = true; } diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 2d2df60886..d11bf19e30 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -5137,7 +5137,6 @@ static double get_parallel_divisor(Path *path) { double parallel_divisor = path->parallel_workers; - double leader_contribution; /* * Early experience with parallel query suggests that when there is only @@ -5150,9 +5149,14 @@ get_parallel_divisor(Path *path) * its time servicing each worker, and the remainder executing the * parallel plan. 
*/
- leader_contribution = 1.0 - (0.3 * path->parallel_workers);
- if (leader_contribution > 0)
- parallel_divisor += leader_contribution;
+ if (parallel_leader_participation)
+ {
+ double leader_contribution;
+
+ leader_contribution = 1.0 - (0.3 * path->parallel_workers);
+ if (leader_contribution > 0)
+ parallel_divisor += leader_contribution;
+ }
 
 return parallel_divisor;
 }
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 90fd9cc959..4c00a1453b 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -61,6 +61,7 @@
 /* GUC parameters */
 double cursor_tuple_fraction = DEFAULT_CURSOR_TUPLE_FRACTION;
 int force_parallel_mode = FORCE_PARALLEL_OFF;
+bool parallel_leader_participation = true;
 
 /* Hook for plugins to get control in planner() */
 planner_hook_type planner_hook = NULL;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index c4c1afa084..6dcd738be6 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -1676,6 +1676,16 @@ static struct config_bool ConfigureNamesBool[] =
 NULL, NULL, NULL
 },
+ {
+ {"parallel_leader_participation", PGC_USERSET, RESOURCES_ASYNCHRONOUS,
+ gettext_noop("Controls whether Gather and Gather Merge also run subplans."),
+ gettext_noop("Should gather nodes also run subplans, or just gather tuples?")
+ },
+ &parallel_leader_participation,
+ true,
+ NULL, NULL, NULL
+ },
+
 /* End-of-list marker */
 {
 {NULL, 0, 0, NULL, NULL}, NULL, false, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 368b280c8a..c7cd72ade2 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -163,6 +163,7 @@
 #effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
 #max_worker_processes = 8 # (change requires restart)
 #max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
+#parallel_leader_particulation = on
 #max_parallel_workers = 8 # maximum number of max_worker_processes that
 # can be used in parallel queries
 #old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h
index f1d16cffab..d6133228bd 100644
--- a/src/include/optimizer/planmain.h
+++ b/src/include/optimizer/planmain.h
@@ -29,6 +29,7 @@ typedef enum
 #define DEFAULT_CURSOR_TUPLE_FRACTION 0.1
 extern double cursor_tuple_fraction;
 extern int force_parallel_mode;
+extern bool parallel_leader_participation;
 
 /* query_planner callback to compute query_pathkeys */
 typedef void (*query_pathkeys_callback) (PlannerInfo *root, void *extra);
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 63ed6a33c1..06aeddd805 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -34,6 +34,49 @@ select count(*) from a_star;
 50
 (1 row)
 
+-- test with leader participation disabled
+set parallel_leader_participation = off;
+explain (costs off)
+ select count(*) from tenk1 where stringu1 = 'GRAAAA';
+ QUERY PLAN
+---------------------------------------------------------
+ Finalize Aggregate
+ -> Gather
+ Workers Planned: 4
+ -> Partial Aggregate
+ -> Parallel Seq Scan on tenk1
+ Filter: (stringu1 = 'GRAAAA'::name)
+(6 rows)
+
+select count(*) from tenk1 where stringu1 = 'GRAAAA';
+ count
+-------
+ 15
+(1 row)
+
+-- test with leader participation disabled, but no workers
available (so +-- the leader will have to run the plan despite the setting) +set max_parallel_workers = 0; +explain (costs off) + select count(*) from tenk1 where stringu1 = 'GRAAAA'; + QUERY PLAN +--------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 4 + -> Partial Aggregate + -> Parallel Seq Scan on tenk1 + Filter: (stringu1 = 'GRAAAA'::name) +(6 rows) + +select count(*) from tenk1 where stringu1 = 'GRAAAA'; + count +------- + 15 +(1 row) + +reset max_parallel_workers; +reset parallel_leader_participation; -- test that parallel_restricted function doesn't run in worker alter table tenk1 set (parallel_workers = 4); explain (verbose, costs off) @@ -400,6 +443,49 @@ explain (costs off, verbose) (11 rows) drop function simple_func(integer); +-- test gather merge with parallel leader participation disabled +set parallel_leader_participation = off; +explain (costs off) + select count(*) from tenk1 group by twenty; + QUERY PLAN +---------------------------------------------------- + Finalize GroupAggregate + Group Key: twenty + -> Gather Merge + Workers Planned: 4 + -> Partial GroupAggregate + Group Key: twenty + -> Sort + Sort Key: twenty + -> Parallel Seq Scan on tenk1 +(9 rows) + +select count(*) from tenk1 group by twenty; + count +------- + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 + 500 +(20 rows) + +reset parallel_leader_participation; --test rescan behavior of gather merge set enable_material = false; explain (costs off) @@ -508,6 +594,33 @@ select string4 from tenk1 order by string4 limit 5; AAAAxx (5 rows) +-- gather merge test with 0 workers, with parallel leader +-- participation disabled (the leader will have to run the plan +-- despite the setting) +set parallel_leader_participation = off; +explain (costs off) + select string4 from tenk1 order by string4 limit 5; + QUERY PLAN +---------------------------------------------- + Limit + -> Gather Merge + Workers Planned: 4 + -> Sort + Sort Key: string4 + -> Parallel Seq Scan on tenk1 +(6 rows) + +select string4 from tenk1 order by string4 limit 5; + string4 +--------- + AAAAxx + AAAAxx + AAAAxx + AAAAxx + AAAAxx +(5 rows) + +reset parallel_leader_participation; reset max_parallel_workers; SAVEPOINT settings; SET LOCAL force_parallel_mode = 1; diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 1bd2821083..b701b35408 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -19,6 +19,22 @@ explain (costs off) select count(*) from a_star; select count(*) from a_star; +-- test with leader participation disabled +set parallel_leader_participation = off; +explain (costs off) + select count(*) from tenk1 where stringu1 = 'GRAAAA'; +select count(*) from tenk1 where stringu1 = 'GRAAAA'; + +-- test with leader participation disabled, but no workers available (so +-- the leader will have to run the plan despite the setting) +set max_parallel_workers = 0; +explain (costs off) + select count(*) from tenk1 where stringu1 = 'GRAAAA'; +select count(*) from tenk1 where stringu1 = 'GRAAAA'; + +reset max_parallel_workers; +reset parallel_leader_participation; + -- test that parallel_restricted function doesn't run in worker alter table tenk1 set (parallel_workers = 4); explain (verbose, costs off) @@ -157,6 +173,16 @@ explain (costs off, verbose) drop function simple_func(integer); +-- test gather merge with parallel leader 
participation disabled +set parallel_leader_participation = off; + +explain (costs off) + select count(*) from tenk1 group by twenty; + +select count(*) from tenk1 group by twenty; + +reset parallel_leader_participation; + --test rescan behavior of gather merge set enable_material = false; @@ -192,6 +218,16 @@ set max_parallel_workers = 0; explain (costs off) select string4 from tenk1 order by string4 limit 5; select string4 from tenk1 order by string4 limit 5; + +-- gather merge test with 0 workers, with parallel leader +-- participation disabled (the leader will have to run the plan +-- despite the setting) +set parallel_leader_participation = off; +explain (costs off) + select string4 from tenk1 order by string4 limit 5; +select string4 from tenk1 order by string4 limit 5; + +reset parallel_leader_participation; reset max_parallel_workers; SAVEPOINT settings; From ebc189e12259cc28b9a09db000626fea1e2a3ffa Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 15 Nov 2017 08:37:32 -0500 Subject: [PATCH 0542/1087] Fix typo. Jesper Pedersen Discussion: http://postgr.es/m/000f92d6-f623-95a5-b341-46e2c0495cea@redhat.com --- src/backend/utils/misc/postgresql.conf.sample | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample index c7cd72ade2..63b8723569 100644 --- a/src/backend/utils/misc/postgresql.conf.sample +++ b/src/backend/utils/misc/postgresql.conf.sample @@ -163,7 +163,7 @@ #effective_io_concurrency = 1 # 1-1000; 0 disables prefetching #max_worker_processes = 8 # (change requires restart) #max_parallel_workers_per_gather = 2 # taken from max_parallel_workers -#parallel_leader_particulation = on +#parallel_leader_participation = on #max_parallel_workers = 8 # maximum number of max_worker_processes that # can be used in parallel queries #old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate From cd8ce3a22c0b48d32ffe6543837ba3bb647ac2b2 Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Wed, 15 Nov 2017 10:16:34 -0500 Subject: [PATCH 0543/1087] Add hooks for session start and session end MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit These hooks can be used in loadable modules. A simple test module is included. 
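As a minimal sketch of the intended usage (the module name my_session_log
and its log messages are illustrative only, not part of this patch; just
the hook types and variables added below are assumed), a loadable module
can chain onto the new hooks like this:

    /* my_session_log.c -- illustrative sketch, not part of this patch */
    #include "postgres.h"
    #include "fmgr.h"
    #include "tcop/tcopprot.h"

    PG_MODULE_MAGIC;

    void _PG_init(void);

    /* saved values of any hooks installed by earlier modules */
    static session_start_hook_type prev_session_start_hook = NULL;
    static session_end_hook_type prev_session_end_hook = NULL;

    static void
    my_session_start_hook(void)
    {
        if (prev_session_start_hook)
            prev_session_start_hook();
        elog(LOG, "session started");
    }

    static void
    my_session_end_hook(void)
    {
        if (prev_session_end_hook)
            prev_session_end_hook();
        elog(LOG, "session ending");
    }

    void
    _PG_init(void)
    {
        /* chain behind any existing hooks, then install ours */
        prev_session_start_hook = session_start_hook;
        prev_session_end_hook = session_end_hook;
        session_start_hook = my_session_start_hook;
        session_end_hook = my_session_end_hook;
    }

The included test module follows the same pattern, but records the events
in a table via SPI instead of writing to the server log.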
Discussion: https://postgr.es/m/20170720204733.40f2b7eb.nagata@sraoss.co.jp Fabrízio de Royes Mello and Yugo Nagata Reviewed by Michael Paquier and Aleksandr Parfenov --- src/backend/tcop/postgres.c | 6 + src/backend/utils/init/postinit.c | 6 + src/include/tcop/tcopprot.h | 7 + src/test/modules/Makefile | 1 + .../modules/test_session_hooks/.gitignore | 4 + src/test/modules/test_session_hooks/Makefile | 21 +++ src/test/modules/test_session_hooks/README | 2 + .../expected/test_session_hooks.out | 31 ++++ .../test_session_hooks/session_hooks.conf | 2 + .../sql/test_session_hooks.sql | 12 ++ .../test_session_hooks--1.0.sql | 4 + .../test_session_hooks/test_session_hooks.c | 134 ++++++++++++++++++ .../test_session_hooks.control | 3 + 13 files changed, 233 insertions(+) create mode 100644 src/test/modules/test_session_hooks/.gitignore create mode 100644 src/test/modules/test_session_hooks/Makefile create mode 100644 src/test/modules/test_session_hooks/README create mode 100644 src/test/modules/test_session_hooks/expected/test_session_hooks.out create mode 100644 src/test/modules/test_session_hooks/session_hooks.conf create mode 100644 src/test/modules/test_session_hooks/sql/test_session_hooks.sql create mode 100644 src/test/modules/test_session_hooks/test_session_hooks--1.0.sql create mode 100644 src/test/modules/test_session_hooks/test_session_hooks.c create mode 100644 src/test/modules/test_session_hooks/test_session_hooks.control diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 05c5c194ec..d3156ad49e 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -169,6 +169,9 @@ static ProcSignalReason RecoveryConflictReason; static MemoryContext row_description_context = NULL; static StringInfoData row_description_buf; +/* Hook for plugins to get control at start of session */ +session_start_hook_type session_start_hook = NULL; + /* ---------------------------------------------------------------- * decls for routines only used in this file * ---------------------------------------------------------------- @@ -3857,6 +3860,9 @@ PostgresMain(int argc, char *argv[], if (!IsUnderPostmaster) PgStartTime = GetCurrentTimestamp(); + if (session_start_hook) + (*session_start_hook) (); + /* * POSTGRES main processing loop begins here * diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c index 20f1d279e9..16ec376b22 100644 --- a/src/backend/utils/init/postinit.c +++ b/src/backend/utils/init/postinit.c @@ -76,6 +76,8 @@ static bool ThereIsAtLeastOneRole(void); static void process_startup_options(Port *port, bool am_superuser); static void process_settings(Oid databaseid, Oid roleid); +/* Hook for plugins to get control at end of session */ +session_end_hook_type session_end_hook = NULL; /*** InitPostgres support ***/ @@ -1154,6 +1156,10 @@ ShutdownPostgres(int code, Datum arg) * them explicitly. 
*/ LockReleaseAll(USER_LOCKMETHOD, true); + + /* Hook at session end */ + if (session_end_hook) + (*session_end_hook) (); } diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h index f8c535c91e..9f05bfb4ab 100644 --- a/src/include/tcop/tcopprot.h +++ b/src/include/tcop/tcopprot.h @@ -35,6 +35,13 @@ extern PGDLLIMPORT const char *debug_query_string; extern int max_stack_depth; extern int PostAuthDelay; +/* Hook for plugins to get control at start and end of session */ +typedef void (*session_start_hook_type) (void); +typedef void (*session_end_hook_type) (void); + +extern PGDLLIMPORT session_start_hook_type session_start_hook; +extern PGDLLIMPORT session_end_hook_type session_end_hook; + /* GUC-configurable parameters */ typedef enum diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile index b7ed0af021..7246552d38 100644 --- a/src/test/modules/Makefile +++ b/src/test/modules/Makefile @@ -15,6 +15,7 @@ SUBDIRS = \ test_pg_dump \ test_rbtree \ test_rls_hooks \ + test_session_hooks \ test_shm_mq \ worker_spi diff --git a/src/test/modules/test_session_hooks/.gitignore b/src/test/modules/test_session_hooks/.gitignore new file mode 100644 index 0000000000..5dcb3ff972 --- /dev/null +++ b/src/test/modules/test_session_hooks/.gitignore @@ -0,0 +1,4 @@ +# Generated subdirectories +/log/ +/results/ +/tmp_check/ diff --git a/src/test/modules/test_session_hooks/Makefile b/src/test/modules/test_session_hooks/Makefile new file mode 100644 index 0000000000..c5c386084e --- /dev/null +++ b/src/test/modules/test_session_hooks/Makefile @@ -0,0 +1,21 @@ +# src/test/modules/test_session_hooks/Makefile + +MODULES = test_session_hooks +PGFILEDESC = "test_session_hooks - Test session hooks with an extension" + +EXTENSION = test_session_hooks +DATA = test_session_hooks--1.0.sql + +REGRESS = test_session_hooks +REGRESS_OPTS = --temp-config=$(top_srcdir)/src/test/modules/test_session_hooks/session_hooks.conf + +ifdef USE_PGXS +PG_CONFIG = pg_config +PGXS := $(shell $(PG_CONFIG) --pgxs) +include $(PGXS) +else +subdir = src/test/modules/test_session_hooks +top_builddir = ../../../.. +include $(top_builddir)/src/Makefile.global +include $(top_srcdir)/contrib/contrib-global.mk +endif diff --git a/src/test/modules/test_session_hooks/README b/src/test/modules/test_session_hooks/README new file mode 100644 index 0000000000..9cb42020c6 --- /dev/null +++ b/src/test/modules/test_session_hooks/README @@ -0,0 +1,2 @@ +test_session_hooks is an example of how to use session start and end +hooks. 
diff --git a/src/test/modules/test_session_hooks/expected/test_session_hooks.out b/src/test/modules/test_session_hooks/expected/test_session_hooks.out
new file mode 100644
index 0000000000..be1b94953c
--- /dev/null
+++ b/src/test/modules/test_session_hooks/expected/test_session_hooks.out
@@ -0,0 +1,31 @@
+CREATE ROLE regress_sess_hook_usr1 SUPERUSER LOGIN;
+CREATE ROLE regress_sess_hook_usr2 SUPERUSER LOGIN;
+\set prevdb :DBNAME
+\set prevusr :USER
+CREATE TABLE session_hook_log(id SERIAL, dbname TEXT, username TEXT, hook_at TEXT);
+SELECT * FROM session_hook_log ORDER BY id;
+ id | dbname | username | hook_at
+----+--------+----------+---------
+(0 rows)
+
+\c :prevdb regress_sess_hook_usr1
+SELECT * FROM session_hook_log ORDER BY id;
+ id | dbname | username | hook_at
+----+--------+----------+---------
+(0 rows)
+
+\c :prevdb regress_sess_hook_usr2
+SELECT * FROM session_hook_log ORDER BY id;
+ id | dbname | username | hook_at
+----+--------------------+------------------------+---------
+ 1 | contrib_regression | regress_sess_hook_usr2 | START
+(1 row)
+
+\c :prevdb :prevusr
+SELECT * FROM session_hook_log ORDER BY id;
+ id | dbname | username | hook_at
+----+--------------------+------------------------+---------
+ 1 | contrib_regression | regress_sess_hook_usr2 | START
+ 2 | contrib_regression | regress_sess_hook_usr2 | END
+(2 rows)
+
diff --git a/src/test/modules/test_session_hooks/session_hooks.conf b/src/test/modules/test_session_hooks/session_hooks.conf
new file mode 100644
index 0000000000..fc62b4adef
--- /dev/null
+++ b/src/test/modules/test_session_hooks/session_hooks.conf
@@ -0,0 +1,2 @@
+shared_preload_libraries = 'test_session_hooks'
+test_session_hooks.username = regress_sess_hook_usr2
diff --git a/src/test/modules/test_session_hooks/sql/test_session_hooks.sql b/src/test/modules/test_session_hooks/sql/test_session_hooks.sql
new file mode 100644
index 0000000000..5e0864753d
--- /dev/null
+++ b/src/test/modules/test_session_hooks/sql/test_session_hooks.sql
@@ -0,0 +1,12 @@
+CREATE ROLE regress_sess_hook_usr1 SUPERUSER LOGIN;
+CREATE ROLE regress_sess_hook_usr2 SUPERUSER LOGIN;
+\set prevdb :DBNAME
+\set prevusr :USER
+CREATE TABLE session_hook_log(id SERIAL, dbname TEXT, username TEXT, hook_at TEXT);
+SELECT * FROM session_hook_log ORDER BY id;
+\c :prevdb regress_sess_hook_usr1
+SELECT * FROM session_hook_log ORDER BY id;
+\c :prevdb regress_sess_hook_usr2
+SELECT * FROM session_hook_log ORDER BY id;
+\c :prevdb :prevusr
+SELECT * FROM session_hook_log ORDER BY id;
diff --git a/src/test/modules/test_session_hooks/test_session_hooks--1.0.sql b/src/test/modules/test_session_hooks/test_session_hooks--1.0.sql
new file mode 100644
index 0000000000..16bcee9882
--- /dev/null
+++ b/src/test/modules/test_session_hooks/test_session_hooks--1.0.sql
@@ -0,0 +1,4 @@
+/* src/test/modules/test_session_hooks/test_session_hooks--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION test_session_hooks" to load this file. \quit
diff --git a/src/test/modules/test_session_hooks/test_session_hooks.c b/src/test/modules/test_session_hooks/test_session_hooks.c
new file mode 100644
index 0000000000..4e2eef183e
--- /dev/null
+++ b/src/test/modules/test_session_hooks/test_session_hooks.c
@@ -0,0 +1,134 @@
+/* -------------------------------------------------------------------------
+ *
+ * test_session_hooks.c
+ * Code for testing SESSION hooks.
+ * + * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * + * IDENTIFICATION + * src/test/modules/test_session_hooks/test_session_hooks.c + * + * ------------------------------------------------------------------------- + */ +#include "postgres.h" + +#include "access/xact.h" +#include "commands/dbcommands.h" +#include "executor/spi.h" +#include "lib/stringinfo.h" +#include "miscadmin.h" +#include "tcop/tcopprot.h" +#include "utils/snapmgr.h" +#include "utils/builtins.h" + +PG_MODULE_MAGIC; + +/* Entry point of library loading/unloading */ +void _PG_init(void); +void _PG_fini(void); + +/* GUC variables */ +static char *session_hook_username = "postgres"; + +/* Original Hook */ +static session_start_hook_type prev_session_start_hook = NULL; +static session_end_hook_type prev_session_end_hook = NULL; + +static void +register_session_hook(const char *hook_at) +{ + const char *username; + + StartTransactionCommand(); + SPI_connect(); + PushActiveSnapshot(GetTransactionSnapshot()); + + username = GetUserNameFromId(GetUserId(), false); + + /* Register log just for configured username */ + if (!strcmp(username, session_hook_username)) + { + const char *dbname; + int ret; + StringInfoData buf; + + dbname = get_database_name(MyDatabaseId); + + initStringInfo(&buf); + + appendStringInfo(&buf, "INSERT INTO session_hook_log (dbname, username, hook_at) "); + appendStringInfo(&buf, "VALUES ('%s', '%s', '%s');", + dbname, username, hook_at); + + ret = SPI_exec(buf.data, 0); + if (ret != SPI_OK_INSERT) + elog(ERROR, "SPI_execute failed: error code %d", ret); + } + + SPI_finish(); + PopActiveSnapshot(); + CommitTransactionCommand(); +} + +/* sample session start hook function */ +static void +sample_session_start_hook() +{ + /* Hook just normal backends */ + if (MyBackendId != InvalidBackendId) + { + (void) register_session_hook("START"); + + if (prev_session_start_hook) + prev_session_start_hook(); + } +} + +/* sample session end hook function */ +static void +sample_session_end_hook() +{ + /* Hook just normal backends */ + if (MyBackendId != InvalidBackendId) + { + if (prev_session_end_hook) + prev_session_end_hook(); + + (void) register_session_hook("END"); + } +} + +/* + * Module Load Callback + */ +void +_PG_init(void) +{ + /* Save Hooks for Unload */ + prev_session_start_hook = session_start_hook; + prev_session_end_hook = session_end_hook; + + /* Set New Hooks */ + session_start_hook = sample_session_start_hook; + session_end_hook = sample_session_end_hook; + + /* Load GUCs */ + DefineCustomStringVariable("test_session_hooks.username", + "Username to register log on session start or end", + NULL, + &session_hook_username, + "postgres", + PGC_SIGHUP, + 0, NULL, NULL, NULL); +} + +/* + * Module Unload Callback + */ +void +_PG_fini(void) +{ + /* Uninstall Hooks */ + session_start_hook = prev_session_start_hook; + session_end_hook = prev_session_end_hook; +} diff --git a/src/test/modules/test_session_hooks/test_session_hooks.control b/src/test/modules/test_session_hooks/test_session_hooks.control new file mode 100644 index 0000000000..7d7ef9f3f4 --- /dev/null +++ b/src/test/modules/test_session_hooks/test_session_hooks.control @@ -0,0 +1,3 @@ +comment = 'Test start/end hook session with an extension' +default_version = '1.0' +relocatable = true From 4e5fe9ad19e14af360de7970caa8b150436c9dec Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 15 Nov 2017 10:23:28 -0500 Subject: [PATCH 0544/1087] Centralize executor-related partitioning code. 
Some code is moved from partition.c, which has grown very quickly lately; splitting the executor parts out might help to keep it from getting totally out of control. Other code is moved from execMain.c. All is moved to a new file execPartition.c. get_partition_for_tuple now has a new interface that more clearly separates executor concerns from generic concerns. Amit Langote. A slight comment tweak by me. Discussion: http://postgr.es/m/1f0985f8-3b61-8bc4-4350-baa6d804cb6d@lab.ntt.co.jp --- src/backend/catalog/partition.c | 455 ++++---------------- src/backend/commands/copy.c | 1 + src/backend/executor/Makefile | 2 +- src/backend/executor/execMain.c | 266 +----------- src/backend/executor/execPartition.c | 560 +++++++++++++++++++++++++ src/backend/executor/nodeModifyTable.c | 1 + src/include/catalog/partition.h | 48 +-- src/include/executor/execPartition.h | 65 +++ src/include/executor/executor.h | 14 +- 9 files changed, 717 insertions(+), 695 deletions(-) create mode 100644 src/backend/executor/execPartition.c create mode 100644 src/include/executor/execPartition.h diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index cff59ed055..ce29ba2eda 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -170,8 +170,6 @@ static int32 partition_bound_cmp(PartitionKey key, static int partition_bound_bsearch(PartitionKey key, PartitionBoundInfo boundinfo, void *probe, bool probe_is_bound, bool *is_equal); -static void get_partition_dispatch_recurse(Relation rel, Relation parent, - List **pds, List **leaf_part_oids); static int get_partition_bound_num_indexes(PartitionBoundInfo b); static int get_greatest_modulus(PartitionBoundInfo b); static uint64 compute_hash_value(PartitionKey key, Datum *values, bool *isnull); @@ -1530,148 +1528,6 @@ get_partition_qual_relid(Oid relid) return result; } -/* - * RelationGetPartitionDispatchInfo - * Returns information necessary to route tuples down a partition tree - * - * The number of elements in the returned array (that is, the number of - * PartitionDispatch objects for the partitioned tables in the partition tree) - * is returned in *num_parted and a list of the OIDs of all the leaf - * partitions of rel is returned in *leaf_part_oids. - * - * All the relations in the partition tree (including 'rel') must have been - * locked (using at least the AccessShareLock) by the caller. - */ -PartitionDispatch * -RelationGetPartitionDispatchInfo(Relation rel, - int *num_parted, List **leaf_part_oids) -{ - List *pdlist = NIL; - PartitionDispatchData **pd; - ListCell *lc; - int i; - - Assert(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); - - *num_parted = 0; - *leaf_part_oids = NIL; - - get_partition_dispatch_recurse(rel, NULL, &pdlist, leaf_part_oids); - *num_parted = list_length(pdlist); - pd = (PartitionDispatchData **) palloc(*num_parted * - sizeof(PartitionDispatchData *)); - i = 0; - foreach(lc, pdlist) - { - pd[i++] = lfirst(lc); - } - - return pd; -} - -/* - * get_partition_dispatch_recurse - * Recursively expand partition tree rooted at rel - * - * As the partition tree is expanded in a depth-first manner, we maintain two - * global lists: of PartitionDispatch objects corresponding to partitioned - * tables in *pds and of the leaf partition OIDs in *leaf_part_oids. - * - * Note that the order of OIDs of leaf partitions in leaf_part_oids matches - * the order in which the planner's expand_partitioned_rtentry() processes - * them. 
It's not necessarily the case that the offsets match up exactly, - * because constraint exclusion might prune away some partitions on the - * planner side, whereas we'll always have the complete list; but unpruned - * partitions will appear in the same order in the plan as they are returned - * here. - */ -static void -get_partition_dispatch_recurse(Relation rel, Relation parent, - List **pds, List **leaf_part_oids) -{ - TupleDesc tupdesc = RelationGetDescr(rel); - PartitionDesc partdesc = RelationGetPartitionDesc(rel); - PartitionKey partkey = RelationGetPartitionKey(rel); - PartitionDispatch pd; - int i; - - check_stack_depth(); - - /* Build a PartitionDispatch for this table and add it to *pds. */ - pd = (PartitionDispatch) palloc(sizeof(PartitionDispatchData)); - *pds = lappend(*pds, pd); - pd->reldesc = rel; - pd->key = partkey; - pd->keystate = NIL; - pd->partdesc = partdesc; - if (parent != NULL) - { - /* - * For every partitioned table other than the root, we must store a - * tuple table slot initialized with its tuple descriptor and a tuple - * conversion map to convert a tuple from its parent's rowtype to its - * own. That is to make sure that we are looking at the correct row - * using the correct tuple descriptor when computing its partition key - * for tuple routing. - */ - pd->tupslot = MakeSingleTupleTableSlot(tupdesc); - pd->tupmap = convert_tuples_by_name(RelationGetDescr(parent), - tupdesc, - gettext_noop("could not convert row type")); - } - else - { - /* Not required for the root partitioned table */ - pd->tupslot = NULL; - pd->tupmap = NULL; - } - - /* - * Go look at each partition of this table. If it's a leaf partition, - * simply add its OID to *leaf_part_oids. If it's a partitioned table, - * recursively call get_partition_dispatch_recurse(), so that its - * partitions are processed as well and a corresponding PartitionDispatch - * object gets added to *pds. - * - * About the values in pd->indexes: for a leaf partition, it contains the - * leaf partition's position in the global list *leaf_part_oids minus 1, - * whereas for a partitioned table partition, it contains the partition's - * position in the global list *pds multiplied by -1. The latter is - * multiplied by -1 to distinguish partitioned tables from leaf partitions - * when going through the values in pd->indexes. So, for example, when - * using it during tuple-routing, encountering a value >= 0 means we found - * a leaf partition. It is immediately returned as the index in the array - * of ResultRelInfos of all the leaf partitions, using which we insert the - * tuple into that leaf partition. A negative value means we found a - * partitioned table. The value multiplied by -1 is returned as the index - * in the array of PartitionDispatch objects of all partitioned tables in - * the tree. This value is used to continue the search in the next level - * of the partition tree. - */ - pd->indexes = (int *) palloc(partdesc->nparts * sizeof(int)); - for (i = 0; i < partdesc->nparts; i++) - { - Oid partrelid = partdesc->oids[i]; - - if (get_rel_relkind(partrelid) != RELKIND_PARTITIONED_TABLE) - { - *leaf_part_oids = lappend_oid(*leaf_part_oids, partrelid); - pd->indexes[i] = list_length(*leaf_part_oids) - 1; - } - else - { - /* - * We assume all tables in the partition tree were already locked - * by the caller. 
- */ - Relation partrel = heap_open(partrelid, NoLock); - - pd->indexes[i] = -list_length(*pds); - get_partition_dispatch_recurse(partrel, rel, pds, leaf_part_oids); - } - } -} - /* Module-local functions */ /* @@ -2617,259 +2473,108 @@ generate_partition_qual(Relation rel) return result; } -/* ---------------- - * FormPartitionKeyDatum - * Construct values[] and isnull[] arrays for the partition key - * of a tuple. - * - * pd Partition dispatch object of the partitioned table - * slot Heap tuple from which to extract partition key - * estate executor state for evaluating any partition key - * expressions (must be non-NULL) - * values Array of partition key Datums (output area) - * isnull Array of is-null indicators (output area) - * - * the ecxt_scantuple slot of estate's per-tuple expr context must point to - * the heap tuple passed in. - * ---------------- - */ -void -FormPartitionKeyDatum(PartitionDispatch pd, - TupleTableSlot *slot, - EState *estate, - Datum *values, - bool *isnull) -{ - ListCell *partexpr_item; - int i; - - if (pd->key->partexprs != NIL && pd->keystate == NIL) - { - /* Check caller has set up context correctly */ - Assert(estate != NULL && - GetPerTupleExprContext(estate)->ecxt_scantuple == slot); - - /* First time through, set up expression evaluation state */ - pd->keystate = ExecPrepareExprList(pd->key->partexprs, estate); - } - - partexpr_item = list_head(pd->keystate); - for (i = 0; i < pd->key->partnatts; i++) - { - AttrNumber keycol = pd->key->partattrs[i]; - Datum datum; - bool isNull; - - if (keycol != 0) - { - /* Plain column; get the value directly from the heap tuple */ - datum = slot_getattr(slot, keycol, &isNull); - } - else - { - /* Expression; need to evaluate it */ - if (partexpr_item == NULL) - elog(ERROR, "wrong number of partition key expressions"); - datum = ExecEvalExprSwitchContext((ExprState *) lfirst(partexpr_item), - GetPerTupleExprContext(estate), - &isNull); - partexpr_item = lnext(partexpr_item); - } - values[i] = datum; - isnull[i] = isNull; - } - - if (partexpr_item != NULL) - elog(ERROR, "wrong number of partition key expressions"); -} - /* * get_partition_for_tuple - * Finds a leaf partition for tuple contained in *slot + * Finds partition of relation which accepts the partition key specified + * in values and isnull * - * Returned value is the sequence number of the leaf partition thus found, - * or -1 if no leaf partition is found for the tuple. *failed_at is set - * to the OID of the partitioned table whose partition was not found in - * the latter case. + * Return value is index of the partition (>= 0 and < partdesc->nparts) if one + * found or -1 if none found. 
*/ int -get_partition_for_tuple(PartitionDispatch *pd, - TupleTableSlot *slot, - EState *estate, - PartitionDispatchData **failed_at, - TupleTableSlot **failed_slot) +get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) { - PartitionDispatch parent; - Datum values[PARTITION_MAX_KEYS]; - bool isnull[PARTITION_MAX_KEYS]; - int result; - ExprContext *ecxt = GetPerTupleExprContext(estate); - TupleTableSlot *ecxt_scantuple_old = ecxt->ecxt_scantuple; - - /* start with the root partitioned table */ - parent = pd[0]; - while (true) - { - PartitionKey key = parent->key; - PartitionDesc partdesc = parent->partdesc; - TupleTableSlot *myslot = parent->tupslot; - TupleConversionMap *map = parent->tupmap; - int cur_index = -1; - - if (myslot != NULL && map != NULL) - { - HeapTuple tuple = ExecFetchSlotTuple(slot); - - ExecClearTuple(myslot); - tuple = do_convert_tuple(tuple, map); - ExecStoreTuple(tuple, myslot, InvalidBuffer, true); - slot = myslot; - } + int bound_offset; + int part_index = -1; + PartitionKey key = RelationGetPartitionKey(relation); + PartitionDesc partdesc = RelationGetPartitionDesc(relation); - /* Quick exit */ - if (partdesc->nparts == 0) - { - *failed_at = parent; - *failed_slot = slot; - result = -1; - goto error_exit; - } - - /* - * Extract partition key from tuple. Expression evaluation machinery - * that FormPartitionKeyDatum() invokes expects ecxt_scantuple to - * point to the correct tuple slot. The slot might have changed from - * what was used for the parent table if the table of the current - * partitioning level has different tuple descriptor from the parent. - * So update ecxt_scantuple accordingly. - */ - ecxt->ecxt_scantuple = slot; - FormPartitionKeyDatum(parent, slot, estate, values, isnull); - - /* Route as appropriate based on partitioning strategy. */ - switch (key->strategy) - { - case PARTITION_STRATEGY_HASH: - { - PartitionBoundInfo boundinfo = partdesc->boundinfo; - int greatest_modulus = get_greatest_modulus(boundinfo); - uint64 rowHash = compute_hash_value(key, values, - isnull); + /* Route as appropriate based on partitioning strategy. 
*/ + switch (key->strategy) + { + case PARTITION_STRATEGY_HASH: + { + PartitionBoundInfo boundinfo = partdesc->boundinfo; + int greatest_modulus = get_greatest_modulus(boundinfo); + uint64 rowHash = compute_hash_value(key, values, isnull); - cur_index = boundinfo->indexes[rowHash % greatest_modulus]; - } - break; + part_index = boundinfo->indexes[rowHash % greatest_modulus]; + } + break; - case PARTITION_STRATEGY_LIST: + case PARTITION_STRATEGY_LIST: + if (isnull[0]) + { + if (partition_bound_accepts_nulls(partdesc->boundinfo)) + part_index = partdesc->boundinfo->null_index; + } + else + { + bool equal = false; + + bound_offset = partition_bound_bsearch(key, + partdesc->boundinfo, + values, + false, + &equal); + if (bound_offset >= 0 && equal) + part_index = partdesc->boundinfo->indexes[bound_offset]; + } + break; - if (isnull[0]) - { - if (partition_bound_accepts_nulls(partdesc->boundinfo)) - cur_index = partdesc->boundinfo->null_index; - } - else - { - bool equal = false; - int cur_offset; - - cur_offset = partition_bound_bsearch(key, - partdesc->boundinfo, - values, - false, - &equal); - if (cur_offset >= 0 && equal) - cur_index = partdesc->boundinfo->indexes[cur_offset]; - } - break; + case PARTITION_STRATEGY_RANGE: + { + bool equal = false, + range_partkey_has_null = false; + int i; - case PARTITION_STRATEGY_RANGE: + /* + * No range includes NULL, so this will be accepted by the + * default partition if there is one, and otherwise + * rejected. + */ + for (i = 0; i < key->partnatts; i++) { - bool equal = false, - range_partkey_has_null = false; - int cur_offset; - int i; - - /* - * No range includes NULL, so this will be accepted by the - * default partition if there is one, and otherwise - * rejected. - */ - for (i = 0; i < key->partnatts; i++) + if (isnull[i] && + partition_bound_has_default(partdesc->boundinfo)) { - if (isnull[i] && - partition_bound_has_default(partdesc->boundinfo)) - { - range_partkey_has_null = true; - break; - } - else if (isnull[i]) - { - *failed_at = parent; - *failed_slot = slot; - result = -1; - goto error_exit; - } + range_partkey_has_null = true; + part_index = partdesc->boundinfo->default_index; } + } - /* - * No need to search for partition, as the null key will - * be routed to the default partition. - */ - if (range_partkey_has_null) - break; - - cur_offset = partition_bound_bsearch(key, - partdesc->boundinfo, - values, - false, - &equal); + if (!range_partkey_has_null) + { + bound_offset = partition_bound_bsearch(key, + partdesc->boundinfo, + values, + false, + &equal); /* - * The offset returned is such that the bound at - * cur_offset is less than or equal to the tuple value, so - * the bound at offset+1 is the upper bound. + * The bound at bound_offset is less than or equal to the + * tuple value, so the bound at offset+1 is the upper + * bound of the partition we're looking for, if there + * actually exists one. */ - cur_index = partdesc->boundinfo->indexes[cur_offset + 1]; + part_index = partdesc->boundinfo->indexes[bound_offset + 1]; } - break; - - default: - elog(ERROR, "unexpected partition strategy: %d", - (int) key->strategy); - } - - /* - * cur_index < 0 means we failed to find a partition of this parent. - * Use the default partition, if there is one. - */ - if (cur_index < 0) - cur_index = partdesc->boundinfo->default_index; - - /* - * If cur_index is still less than 0 at this point, there's no - * partition for this tuple. 
Otherwise, we either found the leaf - * partition, or a child partitioned table through which we have to - * route the tuple. - */ - if (cur_index < 0) - { - result = -1; - *failed_at = parent; - *failed_slot = slot; - break; - } - else if (parent->indexes[cur_index] >= 0) - { - result = parent->indexes[cur_index]; + } break; - } - else - parent = pd[-parent->indexes[cur_index]]; + + default: + elog(ERROR, "unexpected partition strategy: %d", + (int) key->strategy); } -error_exit: - ecxt->ecxt_scantuple = ecxt_scantuple_old; - return result; + /* + * part_index < 0 means we failed to find a partition of this parent. + * Use the default partition, if there is one. + */ + if (part_index < 0) + part_index = partdesc->boundinfo->default_index; + + return part_index; } /* diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 8f1a8ede33..d6b235ca79 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -27,6 +27,7 @@ #include "commands/copy.h" #include "commands/defrem.h" #include "commands/trigger.h" +#include "executor/execPartition.h" #include "executor/executor.h" #include "libpq/libpq.h" #include "libpq/pqformat.h" diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile index 083b20f3fe..cc09895fa5 100644 --- a/src/backend/executor/Makefile +++ b/src/backend/executor/Makefile @@ -14,7 +14,7 @@ include $(top_builddir)/src/Makefile.global OBJS = execAmi.o execCurrent.o execExpr.o execExprInterp.o \ execGrouping.o execIndexing.o execJunk.o \ - execMain.o execParallel.o execProcnode.o \ + execMain.o execParallel.o execPartition.o execProcnode.o \ execReplication.o execScan.o execSRF.o execTuples.o \ execUtils.o functions.o instrument.o nodeAppend.o nodeAgg.o \ nodeBitmapAnd.o nodeBitmapOr.o \ diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 47f2131642..dbaa47f2d3 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -43,7 +43,6 @@ #include "access/xact.h" #include "catalog/namespace.h" #include "catalog/partition.h" -#include "catalog/pg_inherits_fn.h" #include "catalog/pg_publication.h" #include "commands/matview.h" #include "commands/trigger.h" @@ -98,14 +97,8 @@ static char *ExecBuildSlotValueDescription(Oid reloid, TupleDesc tupdesc, Bitmapset *modifiedCols, int maxfieldlen); -static char *ExecBuildSlotPartitionKeyDescription(Relation rel, - Datum *values, - bool *isnull, - int maxfieldlen); static void EvalPlanQualStart(EPQState *epqstate, EState *parentestate, Plan *planTree); -static void ExecPartitionCheck(ResultRelInfo *resultRelInfo, - TupleTableSlot *slot, EState *estate); /* * Note that GetUpdatedColumns() also exists in commands/trigger.c. There does @@ -1854,8 +1847,10 @@ ExecRelCheck(ResultRelInfo *resultRelInfo, /* * ExecPartitionCheck --- check that tuple meets the partition constraint. + * + * Exported in executor.h for outside use. 
*/ -static void +void ExecPartitionCheck(ResultRelInfo *resultRelInfo, TupleTableSlot *slot, EState *estate) { @@ -3245,258 +3240,3 @@ EvalPlanQualEnd(EPQState *epqstate) epqstate->planstate = NULL; epqstate->origslot = NULL; } - -/* - * ExecSetupPartitionTupleRouting - set up information needed during - * tuple routing for partitioned tables - * - * Output arguments: - * 'pd' receives an array of PartitionDispatch objects with one entry for - * every partitioned table in the partition tree - * 'partitions' receives an array of ResultRelInfo* objects with one entry for - * every leaf partition in the partition tree - * 'tup_conv_maps' receives an array of TupleConversionMap objects with one - * entry for every leaf partition (required to convert input tuple based - * on the root table's rowtype to a leaf partition's rowtype after tuple - * routing is done) - * 'partition_tuple_slot' receives a standalone TupleTableSlot to be used - * to manipulate any given leaf partition's rowtype after that partition - * is chosen by tuple-routing. - * 'num_parted' receives the number of partitioned tables in the partition - * tree (= the number of entries in the 'pd' output array) - * 'num_partitions' receives the number of leaf partitions in the partition - * tree (= the number of entries in the 'partitions' and 'tup_conv_maps' - * output arrays - * - * Note that all the relations in the partition tree are locked using the - * RowExclusiveLock mode upon return from this function. - */ -void -ExecSetupPartitionTupleRouting(Relation rel, - Index resultRTindex, - EState *estate, - PartitionDispatch **pd, - ResultRelInfo ***partitions, - TupleConversionMap ***tup_conv_maps, - TupleTableSlot **partition_tuple_slot, - int *num_parted, int *num_partitions) -{ - TupleDesc tupDesc = RelationGetDescr(rel); - List *leaf_parts; - ListCell *cell; - int i; - ResultRelInfo *leaf_part_rri; - - /* - * Get the information about the partition tree after locking all the - * partitions. - */ - (void) find_all_inheritors(RelationGetRelid(rel), RowExclusiveLock, NULL); - *pd = RelationGetPartitionDispatchInfo(rel, num_parted, &leaf_parts); - *num_partitions = list_length(leaf_parts); - *partitions = (ResultRelInfo **) palloc(*num_partitions * - sizeof(ResultRelInfo *)); - *tup_conv_maps = (TupleConversionMap **) palloc0(*num_partitions * - sizeof(TupleConversionMap *)); - - /* - * Initialize an empty slot that will be used to manipulate tuples of any - * given partition's rowtype. It is attached to the caller-specified node - * (such as ModifyTableState) and released when the node finishes - * processing. - */ - *partition_tuple_slot = MakeTupleTableSlot(); - - leaf_part_rri = (ResultRelInfo *) palloc0(*num_partitions * - sizeof(ResultRelInfo)); - i = 0; - foreach(cell, leaf_parts) - { - Relation partrel; - TupleDesc part_tupdesc; - - /* - * We locked all the partitions above including the leaf partitions. - * Note that each of the relations in *partitions are eventually - * closed by the caller. - */ - partrel = heap_open(lfirst_oid(cell), NoLock); - part_tupdesc = RelationGetDescr(partrel); - - /* - * Save a tuple conversion map to convert a tuple routed to this - * partition from the parent's type to the partition's. - */ - (*tup_conv_maps)[i] = convert_tuples_by_name(tupDesc, part_tupdesc, - gettext_noop("could not convert row type")); - - InitResultRelInfo(leaf_part_rri, - partrel, - resultRTindex, - rel, - estate->es_instrument); - - /* - * Verify result relation is a valid target for INSERT. 
- */ - CheckValidResultRel(leaf_part_rri, CMD_INSERT); - - /* - * Open partition indices (remember we do not support ON CONFLICT in - * case of partitioned tables, so we do not need support information - * for speculative insertion) - */ - if (leaf_part_rri->ri_RelationDesc->rd_rel->relhasindex && - leaf_part_rri->ri_IndexRelationDescs == NULL) - ExecOpenIndices(leaf_part_rri, false); - - estate->es_leaf_result_relations = - lappend(estate->es_leaf_result_relations, leaf_part_rri); - - (*partitions)[i] = leaf_part_rri++; - i++; - } -} - -/* - * ExecFindPartition -- Find a leaf partition in the partition tree rooted - * at parent, for the heap tuple contained in *slot - * - * estate must be non-NULL; we'll need it to compute any expressions in the - * partition key(s) - * - * If no leaf partition is found, this routine errors out with the appropriate - * error message, else it returns the leaf partition sequence number returned - * by get_partition_for_tuple() unchanged. - */ -int -ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, - TupleTableSlot *slot, EState *estate) -{ - int result; - PartitionDispatchData *failed_at; - TupleTableSlot *failed_slot; - - /* - * First check the root table's partition constraint, if any. No point in - * routing the tuple if it doesn't belong in the root table itself. - */ - if (resultRelInfo->ri_PartitionCheck) - ExecPartitionCheck(resultRelInfo, slot, estate); - - result = get_partition_for_tuple(pd, slot, estate, - &failed_at, &failed_slot); - if (result < 0) - { - Relation failed_rel; - Datum key_values[PARTITION_MAX_KEYS]; - bool key_isnull[PARTITION_MAX_KEYS]; - char *val_desc; - ExprContext *ecxt = GetPerTupleExprContext(estate); - - failed_rel = failed_at->reldesc; - ecxt->ecxt_scantuple = failed_slot; - FormPartitionKeyDatum(failed_at, failed_slot, estate, - key_values, key_isnull); - val_desc = ExecBuildSlotPartitionKeyDescription(failed_rel, - key_values, - key_isnull, - 64); - Assert(OidIsValid(RelationGetRelid(failed_rel))); - ereport(ERROR, - (errcode(ERRCODE_CHECK_VIOLATION), - errmsg("no partition of relation \"%s\" found for row", - RelationGetRelationName(failed_rel)), - val_desc ? errdetail("Partition key of the failing row contains %s.", val_desc) : 0)); - } - - return result; -} - -/* - * BuildSlotPartitionKeyDescription - * - * This works very much like BuildIndexValueDescription() and is currently - * used for building error messages when ExecFindPartition() fails to find - * partition for a row. - */ -static char * -ExecBuildSlotPartitionKeyDescription(Relation rel, - Datum *values, - bool *isnull, - int maxfieldlen) -{ - StringInfoData buf; - PartitionKey key = RelationGetPartitionKey(rel); - int partnatts = get_partition_natts(key); - int i; - Oid relid = RelationGetRelid(rel); - AclResult aclresult; - - if (check_enable_rls(relid, InvalidOid, true) == RLS_ENABLED) - return NULL; - - /* If the user has table-level access, just go build the description. */ - aclresult = pg_class_aclcheck(relid, GetUserId(), ACL_SELECT); - if (aclresult != ACLCHECK_OK) - { - /* - * Step through the columns of the partition key and make sure the - * user has SELECT rights on all of them. - */ - for (i = 0; i < partnatts; i++) - { - AttrNumber attnum = get_partition_col_attnum(key, i); - - /* - * If this partition key column is an expression, we return no - * detail rather than try to figure out what column(s) the - * expression includes and if the user has SELECT rights on them. 
- */ - if (attnum == InvalidAttrNumber || - pg_attribute_aclcheck(relid, attnum, GetUserId(), - ACL_SELECT) != ACLCHECK_OK) - return NULL; - } - } - - initStringInfo(&buf); - appendStringInfo(&buf, "(%s) = (", - pg_get_partkeydef_columns(relid, true)); - - for (i = 0; i < partnatts; i++) - { - char *val; - int vallen; - - if (isnull[i]) - val = "null"; - else - { - Oid foutoid; - bool typisvarlena; - - getTypeOutputInfo(get_partition_col_typid(key, i), - &foutoid, &typisvarlena); - val = OidOutputFunctionCall(foutoid, values[i]); - } - - if (i > 0) - appendStringInfoString(&buf, ", "); - - /* truncate if needed */ - vallen = strlen(val); - if (vallen <= maxfieldlen) - appendStringInfoString(&buf, val); - else - { - vallen = pg_mbcliplen(val, vallen, maxfieldlen); - appendBinaryStringInfo(&buf, val, vallen); - appendStringInfoString(&buf, "..."); - } - } - - appendStringInfoChar(&buf, ')'); - - return buf.data; -} diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c new file mode 100644 index 0000000000..d275cefe1d --- /dev/null +++ b/src/backend/executor/execPartition.c @@ -0,0 +1,560 @@ +/*------------------------------------------------------------------------- + * + * execPartition.c + * Support routines for partitioning. + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/backend/executor/execPartition.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#include "catalog/pg_inherits_fn.h" +#include "executor/execPartition.h" +#include "executor/executor.h" +#include "mb/pg_wchar.h" +#include "miscadmin.h" +#include "utils/lsyscache.h" +#include "utils/rls.h" +#include "utils/ruleutils.h" + +static PartitionDispatch *RelationGetPartitionDispatchInfo(Relation rel, + int *num_parted, List **leaf_part_oids); +static void get_partition_dispatch_recurse(Relation rel, Relation parent, + List **pds, List **leaf_part_oids); +static void FormPartitionKeyDatum(PartitionDispatch pd, + TupleTableSlot *slot, + EState *estate, + Datum *values, + bool *isnull); +static char *ExecBuildSlotPartitionKeyDescription(Relation rel, + Datum *values, + bool *isnull, + int maxfieldlen); + +/* + * ExecSetupPartitionTupleRouting - set up information needed during + * tuple routing for partitioned tables + * + * Output arguments: + * 'pd' receives an array of PartitionDispatch objects with one entry for + * every partitioned table in the partition tree + * 'partitions' receives an array of ResultRelInfo* objects with one entry for + * every leaf partition in the partition tree + * 'tup_conv_maps' receives an array of TupleConversionMap objects with one + * entry for every leaf partition (required to convert input tuple based + * on the root table's rowtype to a leaf partition's rowtype after tuple + * routing is done) + * 'partition_tuple_slot' receives a standalone TupleTableSlot to be used + * to manipulate any given leaf partition's rowtype after that partition + * is chosen by tuple-routing. 
+ * 'num_parted' receives the number of partitioned tables in the partition + * tree (= the number of entries in the 'pd' output array) + * 'num_partitions' receives the number of leaf partitions in the partition + * tree (= the number of entries in the 'partitions' and 'tup_conv_maps' + * output arrays + * + * Note that all the relations in the partition tree are locked using the + * RowExclusiveLock mode upon return from this function. + */ +void +ExecSetupPartitionTupleRouting(Relation rel, + Index resultRTindex, + EState *estate, + PartitionDispatch **pd, + ResultRelInfo ***partitions, + TupleConversionMap ***tup_conv_maps, + TupleTableSlot **partition_tuple_slot, + int *num_parted, int *num_partitions) +{ + TupleDesc tupDesc = RelationGetDescr(rel); + List *leaf_parts; + ListCell *cell; + int i; + ResultRelInfo *leaf_part_rri; + + /* + * Get the information about the partition tree after locking all the + * partitions. + */ + (void) find_all_inheritors(RelationGetRelid(rel), RowExclusiveLock, NULL); + *pd = RelationGetPartitionDispatchInfo(rel, num_parted, &leaf_parts); + *num_partitions = list_length(leaf_parts); + *partitions = (ResultRelInfo **) palloc(*num_partitions * + sizeof(ResultRelInfo *)); + *tup_conv_maps = (TupleConversionMap **) palloc0(*num_partitions * + sizeof(TupleConversionMap *)); + + /* + * Initialize an empty slot that will be used to manipulate tuples of any + * given partition's rowtype. It is attached to the caller-specified node + * (such as ModifyTableState) and released when the node finishes + * processing. + */ + *partition_tuple_slot = MakeTupleTableSlot(); + + leaf_part_rri = (ResultRelInfo *) palloc0(*num_partitions * + sizeof(ResultRelInfo)); + i = 0; + foreach(cell, leaf_parts) + { + Relation partrel; + TupleDesc part_tupdesc; + + /* + * We locked all the partitions above including the leaf partitions. + * Note that each of the relations in *partitions are eventually + * closed by the caller. + */ + partrel = heap_open(lfirst_oid(cell), NoLock); + part_tupdesc = RelationGetDescr(partrel); + + /* + * Save a tuple conversion map to convert a tuple routed to this + * partition from the parent's type to the partition's. + */ + (*tup_conv_maps)[i] = convert_tuples_by_name(tupDesc, part_tupdesc, + gettext_noop("could not convert row type")); + + InitResultRelInfo(leaf_part_rri, + partrel, + resultRTindex, + rel, + estate->es_instrument); + + /* + * Verify result relation is a valid target for INSERT. + */ + CheckValidResultRel(leaf_part_rri, CMD_INSERT); + + /* + * Open partition indices (remember we do not support ON CONFLICT in + * case of partitioned tables, so we do not need support information + * for speculative insertion) + */ + if (leaf_part_rri->ri_RelationDesc->rd_rel->relhasindex && + leaf_part_rri->ri_IndexRelationDescs == NULL) + ExecOpenIndices(leaf_part_rri, false); + + estate->es_leaf_result_relations = + lappend(estate->es_leaf_result_relations, leaf_part_rri); + + (*partitions)[i] = leaf_part_rri++; + i++; + } +} + +/* + * ExecFindPartition -- Find a leaf partition in the partition tree rooted + * at parent, for the heap tuple contained in *slot + * + * estate must be non-NULL; we'll need it to compute any expressions in the + * partition key(s) + * + * If no leaf partition is found, this routine errors out with the appropriate + * error message, else it returns the leaf partition sequence number + * as an index into the array of (ResultRelInfos of) all leaf partitions in + * the partition tree. 
+ */ +int +ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, + TupleTableSlot *slot, EState *estate) +{ + int result; + Datum values[PARTITION_MAX_KEYS]; + bool isnull[PARTITION_MAX_KEYS]; + Relation rel; + PartitionDispatch parent; + ExprContext *ecxt = GetPerTupleExprContext(estate); + TupleTableSlot *ecxt_scantuple_old = ecxt->ecxt_scantuple; + + /* + * First check the root table's partition constraint, if any. No point in + * routing the tuple if it doesn't belong in the root table itself. + */ + if (resultRelInfo->ri_PartitionCheck) + ExecPartitionCheck(resultRelInfo, slot, estate); + + /* start with the root partitioned table */ + parent = pd[0]; + while (true) + { + PartitionDesc partdesc; + TupleTableSlot *myslot = parent->tupslot; + TupleConversionMap *map = parent->tupmap; + int cur_index = -1; + + rel = parent->reldesc; + partdesc = RelationGetPartitionDesc(rel); + + /* + * Convert the tuple to this parent's layout so that we can do certain + * things we do below. + */ + if (myslot != NULL && map != NULL) + { + HeapTuple tuple = ExecFetchSlotTuple(slot); + + ExecClearTuple(myslot); + tuple = do_convert_tuple(tuple, map); + ExecStoreTuple(tuple, myslot, InvalidBuffer, true); + slot = myslot; + } + + /* Quick exit */ + if (partdesc->nparts == 0) + { + result = -1; + break; + } + + /* + * Extract partition key from tuple. Expression evaluation machinery + * that FormPartitionKeyDatum() invokes expects ecxt_scantuple to + * point to the correct tuple slot. The slot might have changed from + * what was used for the parent table if the table of the current + * partitioning level has different tuple descriptor from the parent. + * So update ecxt_scantuple accordingly. + */ + ecxt->ecxt_scantuple = slot; + FormPartitionKeyDatum(parent, slot, estate, values, isnull); + cur_index = get_partition_for_tuple(rel, values, isnull); + + /* + * cur_index < 0 means we failed to find a partition of this parent. + * cur_index >= 0 means we either found the leaf partition, or the + * next parent to find a partition of. + */ + if (cur_index < 0) + { + result = -1; + break; + } + else if (parent->indexes[cur_index] >= 0) + { + result = parent->indexes[cur_index]; + break; + } + else + parent = pd[-parent->indexes[cur_index]]; + } + + /* A partition was not found. */ + if (result < 0) + { + char *val_desc; + + val_desc = ExecBuildSlotPartitionKeyDescription(rel, + values, isnull, 64); + Assert(OidIsValid(RelationGetRelid(rel))); + ereport(ERROR, + (errcode(ERRCODE_CHECK_VIOLATION), + errmsg("no partition of relation \"%s\" found for row", + RelationGetRelationName(rel)), + val_desc ? errdetail("Partition key of the failing row contains %s.", val_desc) : 0)); + } + + ecxt->ecxt_scantuple = ecxt_scantuple_old; + return result; +} + +/* + * RelationGetPartitionDispatchInfo + * Returns information necessary to route tuples down a partition tree + * + * The number of elements in the returned array (that is, the number of + * PartitionDispatch objects for the partitioned tables in the partition tree) + * is returned in *num_parted and a list of the OIDs of all the leaf + * partitions of rel is returned in *leaf_part_oids. + * + * All the relations in the partition tree (including 'rel') must have been + * locked (using at least the AccessShareLock) by the caller. 
+ */ +static PartitionDispatch * +RelationGetPartitionDispatchInfo(Relation rel, + int *num_parted, List **leaf_part_oids) +{ + List *pdlist = NIL; + PartitionDispatchData **pd; + ListCell *lc; + int i; + + Assert(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); + + *num_parted = 0; + *leaf_part_oids = NIL; + + get_partition_dispatch_recurse(rel, NULL, &pdlist, leaf_part_oids); + *num_parted = list_length(pdlist); + pd = (PartitionDispatchData **) palloc(*num_parted * + sizeof(PartitionDispatchData *)); + i = 0; + foreach(lc, pdlist) + { + pd[i++] = lfirst(lc); + } + + return pd; +} + +/* + * get_partition_dispatch_recurse + * Recursively expand partition tree rooted at rel + * + * As the partition tree is expanded in a depth-first manner, we maintain two + * global lists: of PartitionDispatch objects corresponding to partitioned + * tables in *pds and of the leaf partition OIDs in *leaf_part_oids. + * + * Note that the order of OIDs of leaf partitions in leaf_part_oids matches + * the order in which the planner's expand_partitioned_rtentry() processes + * them. It's not necessarily the case that the offsets match up exactly, + * because constraint exclusion might prune away some partitions on the + * planner side, whereas we'll always have the complete list; but unpruned + * partitions will appear in the same order in the plan as they are returned + * here. + */ +static void +get_partition_dispatch_recurse(Relation rel, Relation parent, + List **pds, List **leaf_part_oids) +{ + TupleDesc tupdesc = RelationGetDescr(rel); + PartitionDesc partdesc = RelationGetPartitionDesc(rel); + PartitionKey partkey = RelationGetPartitionKey(rel); + PartitionDispatch pd; + int i; + + check_stack_depth(); + + /* Build a PartitionDispatch for this table and add it to *pds. */ + pd = (PartitionDispatch) palloc(sizeof(PartitionDispatchData)); + *pds = lappend(*pds, pd); + pd->reldesc = rel; + pd->key = partkey; + pd->keystate = NIL; + pd->partdesc = partdesc; + if (parent != NULL) + { + /* + * For every partitioned table other than the root, we must store a + * tuple table slot initialized with its tuple descriptor and a tuple + * conversion map to convert a tuple from its parent's rowtype to its + * own. That is to make sure that we are looking at the correct row + * using the correct tuple descriptor when computing its partition key + * for tuple routing. + */ + pd->tupslot = MakeSingleTupleTableSlot(tupdesc); + pd->tupmap = convert_tuples_by_name(RelationGetDescr(parent), + tupdesc, + gettext_noop("could not convert row type")); + } + else + { + /* Not required for the root partitioned table */ + pd->tupslot = NULL; + pd->tupmap = NULL; + } + + /* + * Go look at each partition of this table. If it's a leaf partition, + * simply add its OID to *leaf_part_oids. If it's a partitioned table, + * recursively call get_partition_dispatch_recurse(), so that its + * partitions are processed as well and a corresponding PartitionDispatch + * object gets added to *pds. + * + * About the values in pd->indexes: for a leaf partition, it contains the + * leaf partition's position in the global list *leaf_part_oids minus 1, + * whereas for a partitioned table partition, it contains the partition's + * position in the global list *pds multiplied by -1. The latter is + * multiplied by -1 to distinguish partitioned tables from leaf partitions + * when going through the values in pd->indexes. So, for example, when + * using it during tuple-routing, encountering a value >= 0 means we found + * a leaf partition. 
It is immediately returned as the index in the array
+ * of ResultRelInfos of all the leaf partitions, which is then used to
+ * insert the tuple into that leaf partition.  A negative value means we
+ * found a partitioned table.  The value multiplied by -1 is returned as
+ * the index in the array of PartitionDispatch objects of all partitioned
+ * tables in the tree.  This value is used to continue the search in the
+ * next level of the partition tree.
+ */
+	pd->indexes = (int *) palloc(partdesc->nparts * sizeof(int));
+	for (i = 0; i < partdesc->nparts; i++)
+	{
+		Oid			partrelid = partdesc->oids[i];
+
+		if (get_rel_relkind(partrelid) != RELKIND_PARTITIONED_TABLE)
+		{
+			*leaf_part_oids = lappend_oid(*leaf_part_oids, partrelid);
+			pd->indexes[i] = list_length(*leaf_part_oids) - 1;
+		}
+		else
+		{
+			/*
+			 * We assume all tables in the partition tree were already locked
+			 * by the caller.
+			 */
+			Relation	partrel = heap_open(partrelid, NoLock);
+
+			pd->indexes[i] = -list_length(*pds);
+			get_partition_dispatch_recurse(partrel, rel, pds, leaf_part_oids);
+		}
+	}
+}
+
+/* ----------------
+ *		FormPartitionKeyDatum
+ *			Construct values[] and isnull[] arrays for the partition key
+ *			of a tuple.
+ *
+ *	pd				Partition dispatch object of the partitioned table
+ *	slot			Heap tuple from which to extract partition key
+ *	estate			executor state for evaluating any partition key
+ *					expressions (must be non-NULL)
+ *	values			Array of partition key Datums (output area)
+ *	isnull			Array of is-null indicators (output area)
+ *
+ * The ecxt_scantuple slot of estate's per-tuple expr context must point to
+ * the heap tuple passed in.
+ * ----------------
+ */
+static void
+FormPartitionKeyDatum(PartitionDispatch pd,
+					  TupleTableSlot *slot,
+					  EState *estate,
+					  Datum *values,
+					  bool *isnull)
+{
+	ListCell   *partexpr_item;
+	int			i;
+
+	if (pd->key->partexprs != NIL && pd->keystate == NIL)
+	{
+		/* Check caller has set up context correctly */
+		Assert(estate != NULL &&
+			   GetPerTupleExprContext(estate)->ecxt_scantuple == slot);
+
+		/* First time through, set up expression evaluation state */
+		pd->keystate = ExecPrepareExprList(pd->key->partexprs, estate);
+	}
+
+	partexpr_item = list_head(pd->keystate);
+	for (i = 0; i < pd->key->partnatts; i++)
+	{
+		AttrNumber	keycol = pd->key->partattrs[i];
+		Datum		datum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/* Plain column; get the value directly from the heap tuple */
+			datum = slot_getattr(slot, keycol, &isNull);
+		}
+		else
+		{
+			/* Expression; need to evaluate it */
+			if (partexpr_item == NULL)
+				elog(ERROR, "wrong number of partition key expressions");
+			datum = ExecEvalExprSwitchContext((ExprState *) lfirst(partexpr_item),
+											  GetPerTupleExprContext(estate),
+											  &isNull);
+			partexpr_item = lnext(partexpr_item);
+		}
+		values[i] = datum;
+		isnull[i] = isNull;
+	}
+
+	if (partexpr_item != NULL)
+		elog(ERROR, "wrong number of partition key expressions");
+}
+
+/*
+ * ExecBuildSlotPartitionKeyDescription
+ *
+ * This works very much like BuildIndexValueDescription() and is currently
+ * used for building error messages when ExecFindPartition() fails to find
+ * a partition for a row.
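+ *
+ * As a minimal illustration (hypothetical partition key and values, not
+ * from this patch), the description built here looks like:
+ *
+ *		(a, b) = (42, null)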
+ */ +static char * +ExecBuildSlotPartitionKeyDescription(Relation rel, + Datum *values, + bool *isnull, + int maxfieldlen) +{ + StringInfoData buf; + PartitionKey key = RelationGetPartitionKey(rel); + int partnatts = get_partition_natts(key); + int i; + Oid relid = RelationGetRelid(rel); + AclResult aclresult; + + if (check_enable_rls(relid, InvalidOid, true) == RLS_ENABLED) + return NULL; + + /* If the user has table-level access, just go build the description. */ + aclresult = pg_class_aclcheck(relid, GetUserId(), ACL_SELECT); + if (aclresult != ACLCHECK_OK) + { + /* + * Step through the columns of the partition key and make sure the + * user has SELECT rights on all of them. + */ + for (i = 0; i < partnatts; i++) + { + AttrNumber attnum = get_partition_col_attnum(key, i); + + /* + * If this partition key column is an expression, we return no + * detail rather than try to figure out what column(s) the + * expression includes and if the user has SELECT rights on them. + */ + if (attnum == InvalidAttrNumber || + pg_attribute_aclcheck(relid, attnum, GetUserId(), + ACL_SELECT) != ACLCHECK_OK) + return NULL; + } + } + + initStringInfo(&buf); + appendStringInfo(&buf, "(%s) = (", + pg_get_partkeydef_columns(relid, true)); + + for (i = 0; i < partnatts; i++) + { + char *val; + int vallen; + + if (isnull[i]) + val = "null"; + else + { + Oid foutoid; + bool typisvarlena; + + getTypeOutputInfo(get_partition_col_typid(key, i), + &foutoid, &typisvarlena); + val = OidOutputFunctionCall(foutoid, values[i]); + } + + if (i > 0) + appendStringInfoString(&buf, ", "); + + /* truncate if needed */ + vallen = strlen(val); + if (vallen <= maxfieldlen) + appendStringInfoString(&buf, val); + else + { + vallen = pg_mbcliplen(val, vallen, maxfieldlen); + appendBinaryStringInfo(&buf, val, vallen); + appendStringInfoString(&buf, "..."); + } + } + + appendStringInfoChar(&buf, ')'); + + return buf.data; +} diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 0027d21253..503b89f606 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -40,6 +40,7 @@ #include "access/htup_details.h" #include "access/xact.h" #include "commands/trigger.h" +#include "executor/execPartition.h" #include "executor/executor.h" #include "executor/nodeModifyTable.h" #include "foreign/fdwapi.h" diff --git a/src/include/catalog/partition.h b/src/include/catalog/partition.h index 8acc01a876..295e9d224e 100644 --- a/src/include/catalog/partition.h +++ b/src/include/catalog/partition.h @@ -42,37 +42,6 @@ typedef struct PartitionDescData typedef struct PartitionDescData *PartitionDesc; -/*----------------------- - * PartitionDispatch - information about one partitioned table in a partition - * hierarchy required to route a tuple to one of its partitions - * - * reldesc Relation descriptor of the table - * key Partition key information of the table - * keystate Execution state required for expressions in the partition key - * partdesc Partition descriptor of the table - * tupslot A standalone TupleTableSlot initialized with this table's tuple - * descriptor - * tupmap TupleConversionMap to convert from the parent's rowtype to - * this table's rowtype (when extracting the partition key of a - * tuple just before routing it through this table) - * indexes Array with partdesc->nparts members (for details on what - * individual members represent, see how they are set in - * RelationGetPartitionDispatchInfo()) - *----------------------- - */ -typedef struct 
PartitionDispatchData -{ - Relation reldesc; - PartitionKey key; - List *keystate; /* list of ExprState */ - PartitionDesc partdesc; - TupleTableSlot *tupslot; - TupleConversionMap *tupmap; - int *indexes; -} PartitionDispatchData; - -typedef struct PartitionDispatchData *PartitionDispatch; - extern void RelationBuildPartitionDesc(Relation relation); extern bool partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval, PartitionBoundInfo b1, @@ -91,19 +60,6 @@ extern List *map_partition_varattnos(List *expr, int target_varno, extern List *RelationGetPartitionQual(Relation rel); extern Expr *get_partition_qual_relid(Oid relid); -/* For tuple routing */ -extern PartitionDispatch *RelationGetPartitionDispatchInfo(Relation rel, - int *num_parted, List **leaf_part_oids); -extern void FormPartitionKeyDatum(PartitionDispatch pd, - TupleTableSlot *slot, - EState *estate, - Datum *values, - bool *isnull); -extern int get_partition_for_tuple(PartitionDispatch *pd, - TupleTableSlot *slot, - EState *estate, - PartitionDispatchData **failed_at, - TupleTableSlot **failed_slot); extern Oid get_default_oid_from_partdesc(PartitionDesc partdesc); extern Oid get_default_partition_oid(Oid parentId); extern void update_default_partition_oid(Oid parentId, Oid defaultPartId); @@ -111,4 +67,8 @@ extern void check_default_allows_bound(Relation parent, Relation defaultRel, PartitionBoundSpec *new_spec); extern List *get_proposed_default_constraint(List *new_part_constaints); +/* For tuple routing */ +extern int get_partition_for_tuple(Relation relation, Datum *values, + bool *isnull); + #endif /* PARTITION_H */ diff --git a/src/include/executor/execPartition.h b/src/include/executor/execPartition.h new file mode 100644 index 0000000000..64e5aab4eb --- /dev/null +++ b/src/include/executor/execPartition.h @@ -0,0 +1,65 @@ +/*-------------------------------------------------------------------- + * execPartition.h + * POSTGRES partitioning executor interface + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/include/executor/execPartition.h + *-------------------------------------------------------------------- + */ + +#ifndef EXECPARTITION_H +#define EXECPARTITION_H + +#include "catalog/partition.h" +#include "nodes/execnodes.h" +#include "nodes/parsenodes.h" +#include "nodes/plannodes.h" + +/*----------------------- + * PartitionDispatch - information about one partitioned table in a partition + * hierarchy required to route a tuple to one of its partitions + * + * reldesc Relation descriptor of the table + * key Partition key information of the table + * keystate Execution state required for expressions in the partition key + * partdesc Partition descriptor of the table + * tupslot A standalone TupleTableSlot initialized with this table's tuple + * descriptor + * tupmap TupleConversionMap to convert from the parent's rowtype to + * this table's rowtype (when extracting the partition key of a + * tuple just before routing it through this table) + * indexes Array with partdesc->nparts members (for details on what + * individual members represent, see how they are set in + * get_partition_dispatch_recurse()) + *----------------------- + */ +typedef struct PartitionDispatchData +{ + Relation reldesc; + PartitionKey key; + List *keystate; /* list of ExprState */ + PartitionDesc partdesc; + TupleTableSlot *tupslot; + TupleConversionMap *tupmap; + int *indexes; +} 
PartitionDispatchData; + +typedef struct PartitionDispatchData *PartitionDispatch; + +extern void ExecSetupPartitionTupleRouting(Relation rel, + Index resultRTindex, + EState *estate, + PartitionDispatch **pd, + ResultRelInfo ***partitions, + TupleConversionMap ***tup_conv_maps, + TupleTableSlot **partition_tuple_slot, + int *num_parted, int *num_partitions); +extern int ExecFindPartition(ResultRelInfo *resultRelInfo, + PartitionDispatch *pd, + TupleTableSlot *slot, + EState *estate); + +#endif /* EXECPARTITION_H */ diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index c4ecf0d50f..bee4ebf269 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -188,6 +188,8 @@ extern void ExecCleanUpTriggerState(EState *estate); extern bool ExecContextForcesOids(PlanState *planstate, bool *hasoids); extern void ExecConstraints(ResultRelInfo *resultRelInfo, TupleTableSlot *slot, EState *estate); +extern void ExecPartitionCheck(ResultRelInfo *resultRelInfo, + TupleTableSlot *slot, EState *estate); extern void ExecWithCheckOptions(WCOKind kind, ResultRelInfo *resultRelInfo, TupleTableSlot *slot, EState *estate); extern LockTupleMode ExecUpdateLockMode(EState *estate, ResultRelInfo *relinfo); @@ -206,18 +208,6 @@ extern void EvalPlanQualSetPlan(EPQState *epqstate, extern void EvalPlanQualSetTuple(EPQState *epqstate, Index rti, HeapTuple tuple); extern HeapTuple EvalPlanQualGetTuple(EPQState *epqstate, Index rti); -extern void ExecSetupPartitionTupleRouting(Relation rel, - Index resultRTindex, - EState *estate, - PartitionDispatch **pd, - ResultRelInfo ***partitions, - TupleConversionMap ***tup_conv_maps, - TupleTableSlot **partition_tuple_slot, - int *num_parted, int *num_partitions); -extern int ExecFindPartition(ResultRelInfo *resultRelInfo, - PartitionDispatch *pd, - TupleTableSlot *slot, - EState *estate); #define EvalPlanQualSetSlot(epqstate, slot) ((epqstate)->origslot = (slot)) extern void EvalPlanQualFetchRowMarks(EPQState *epqstate); From 6337865f36da34e9c89aaa292f976bde6df0b065 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 16 Aug 2017 00:22:32 -0400 Subject: [PATCH 0545/1087] Remove TRUE and FALSE Code should be using true and false. Existing code can be changed to those in a backward compatible way. The definitions in the ecpg header files are left around to avoid upsetting those users unnecessarily. 
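As a minimal sketch of the intended conversion (hypothetical code, not
taken from this patch):

    /* before */
    bool        found = FALSE;

    if (found == TRUE)
        proc_exit(0);

    /* after */
    bool        found = false;

    if (found)
        proc_exit(0);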
Reviewed-by: Michael Paquier
---
 src/include/c.h | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/src/include/c.h b/src/include/c.h
index 18809c9372..c8c7be1d21 100644
--- a/src/include/c.h
+++ b/src/include/c.h
@@ -27,7 +27,7 @@
  *		-------	------------------------------------------------
  *		0)		pg_config.h and standard system headers
  *		1)		compiler characteristics
- *		2)		bool, true, false, TRUE, FALSE
+ *		2)		bool, true, false
  *		3)		standard system types
  *		4)		IsValid macros for system types
  *		5)		offsetof, lengthof, alignment
@@ -257,7 +257,7 @@
 
 /* ----------------------------------------------------------------
- *				Section 2:	bool, true, false, TRUE, FALSE
+ *				Section 2:	bool, true, false
  * ----------------------------------------------------------------
  */
@@ -285,14 +285,6 @@ typedef char bool;
 #endif							/* not C++ */
 
-#ifndef TRUE
-#define TRUE	1
-#endif
-
-#ifndef FALSE
-#define FALSE	0
-#endif
-
 /* ----------------------------------------------------------------
  *				Section 3:	standard system types

From 9989f92aabdc2015c9dc64b9df4cdeceecf75e47 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan
Date: Wed, 15 Nov 2017 13:32:29 -0500
Subject: [PATCH 0546/1087] Disable test_session_hooks test module until buildfarm issues are sorted out

---
 src/test/modules/Makefile | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 7246552d38..b7ed0af021 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -15,7 +15,6 @@ SUBDIRS = \
 	  test_pg_dump \
 	  test_rbtree \
 	  test_rls_hooks \
-	  test_session_hooks \
 	  test_shm_mq \
 	  worker_spi

From 745948422c799c1b9f976ee30f21a7aac050e0f3 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan
Date: Wed, 15 Nov 2017 17:49:04 -0500
Subject: [PATCH 0547/1087] Disable installcheck tests for test_session_hooks

The module requires a preloaded library and the defect can't be cured
by a LOAD instruction in the test script. To achieve this, we override
the installcheck target in the module's Makefile, and exclude the
module in vcregress.pl.

Along the way, revert commit 9989f92aabd.

---
 src/test/modules/Makefile                    | 1 +
 src/test/modules/test_session_hooks/Makefile | 4 ++++
 src/tools/msvc/vcregress.pl                  | 2 ++
 3 files changed, 7 insertions(+)

diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index b7ed0af021..7246552d38 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -15,6 +15,7 @@ SUBDIRS = \
 	  test_pg_dump \
 	  test_rbtree \
 	  test_rls_hooks \
+	  test_session_hooks \
 	  test_shm_mq \
 	  worker_spi
 
diff --git a/src/test/modules/test_session_hooks/Makefile b/src/test/modules/test_session_hooks/Makefile
index c5c386084e..636ae61c0e 100644
--- a/src/test/modules/test_session_hooks/Makefile
+++ b/src/test/modules/test_session_hooks/Makefile
@@ -19,3 +19,7 @@ top_builddir = ../../../..
 include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 endif
+
+# override installcheck - this module requires preloading the test module
+installcheck:
+	@echo Cannot run $@ for test_session_hooks. Run "'make check'" instead.
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index 719fe83047..41f7832e5a 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -383,6 +383,8 @@ sub modulescheck
 	my $mstat = 0;
 	foreach my $module (glob("*"))
 	{
+		# test_session_hooks can't run installcheck, so skip it here
+		next if $module eq 'test_session_hooks';
 		subdircheck("$topdir/src/test/modules", $module);
 		my $status = $?
>> 8; $mstat ||= $status; From 642bafa0c5f9f08d106a14f31429e0e0c718b603 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 16 Nov 2017 08:41:40 -0500 Subject: [PATCH 0548/1087] Refactor routine to test connection to SSL server Move the sub-routines wrappers to check if a connection to a server is fine or not into the test main module. This is useful for other tests willing to check connectivity into a server. Author: Michael Paquier --- src/test/ssl/ServerSetup.pm | 45 ++++++++++- src/test/ssl/t/001_ssltests.pl | 132 ++++++++++++++------------------- 2 files changed, 100 insertions(+), 77 deletions(-) diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm index f63c81cfc6..ad2e036602 100644 --- a/src/test/ssl/ServerSetup.pm +++ b/src/test/ssl/ServerSetup.pm @@ -26,9 +26,52 @@ use Test::More; use Exporter 'import'; our @EXPORT = qw( - configure_test_server_for_ssl switch_server_cert + configure_test_server_for_ssl + run_test_psql + switch_server_cert + test_connect_fails + test_connect_ok ); +# Define a couple of helper functions to test connecting to the server. + +# Attempt connection to server with given connection string. +sub run_test_psql +{ + my $connstr = $_[0]; + my $logstring = $_[1]; + + my $cmd = [ + 'psql', '-X', '-A', '-t', '-c', "SELECT 'connected with $connstr'", + '-d', "$connstr" ]; + + my $result = run_log($cmd); + return $result; +} + +# +# The first argument is a base connection string to use for connection. +# The second argument is a complementary connection string, and it's also +# printed out as the test case name. +sub test_connect_ok +{ + my $common_connstr = $_[0]; + my $connstr = $_[1]; + + my $result = + run_test_psql("$common_connstr $connstr", "(should succeed)"); + ok($result, $connstr); +} + +sub test_connect_fails +{ + my $common_connstr = $_[0]; + my $connstr = $_[1]; + + my $result = run_test_psql("$common_connstr $connstr", "(should fail)"); + ok(!$result, "$connstr (should fail)"); +} + # Copy a set of files, taking into account wildcards sub copy_files { diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl index 32df273929..890e3051a2 100644 --- a/src/test/ssl/t/001_ssltests.pl +++ b/src/test/ssl/t/001_ssltests.pl @@ -13,44 +13,9 @@ # postgresql-ssl-regression.test. my $SERVERHOSTADDR = '127.0.0.1'; -# Define a couple of helper functions to test connecting to the server. - +# Allocation of base connection string shared among multiple tests. my $common_connstr; -sub run_test_psql -{ - my $connstr = $_[0]; - my $logstring = $_[1]; - - my $cmd = [ - 'psql', '-X', '-A', '-t', '-c', "SELECT 'connected with $connstr'", - '-d', "$connstr" ]; - - my $result = run_log($cmd); - return $result; -} - -# -# The first argument is a (part of a) connection string, and it's also printed -# out as the test case name. It is appended to $common_connstr global variable, -# which also contains a libpq connection string. -sub test_connect_ok -{ - my $connstr = $_[0]; - - my $result = - run_test_psql("$common_connstr $connstr", "(should succeed)"); - ok($result, $connstr); -} - -sub test_connect_fails -{ - my $connstr = $_[0]; - - my $result = run_test_psql("$common_connstr $connstr", "(should fail)"); - ok(!$result, "$connstr (should fail)"); -} - # The client's private key must not be world-readable, so take a copy # of the key stored in the code tree and update its permissions. 
copy("ssl/client.key", "ssl/client_tmp.key"); @@ -83,50 +48,59 @@ sub test_connect_fails # The server should not accept non-SSL connections note "test that the server doesn't accept non-SSL connections"; -test_connect_fails("sslmode=disable"); +test_connect_fails($common_connstr, "sslmode=disable"); # Try without a root cert. In sslmode=require, this should work. In verify-ca # or verify-full mode it should fail note "connect without server root cert"; -test_connect_ok("sslrootcert=invalid sslmode=require"); -test_connect_fails("sslrootcert=invalid sslmode=verify-ca"); -test_connect_fails("sslrootcert=invalid sslmode=verify-full"); +test_connect_ok($common_connstr, "sslrootcert=invalid sslmode=require"); +test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-ca"); +test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-full"); # Try with wrong root cert, should fail. (we're using the client CA as the # root, but the server's key is signed by the server CA) note "connect without wrong server root cert"; -test_connect_fails("sslrootcert=ssl/client_ca.crt sslmode=require"); -test_connect_fails("sslrootcert=ssl/client_ca.crt sslmode=verify-ca"); -test_connect_fails("sslrootcert=ssl/client_ca.crt sslmode=verify-full"); +test_connect_fails($common_connstr, + "sslrootcert=ssl/client_ca.crt sslmode=require"); +test_connect_fails($common_connstr, + "sslrootcert=ssl/client_ca.crt sslmode=verify-ca"); +test_connect_fails($common_connstr, + "sslrootcert=ssl/client_ca.crt sslmode=verify-full"); # Try with just the server CA's cert. This fails because the root file # must contain the whole chain up to the root CA. note "connect with server CA cert, without root CA"; -test_connect_fails("sslrootcert=ssl/server_ca.crt sslmode=verify-ca"); +test_connect_fails($common_connstr, + "sslrootcert=ssl/server_ca.crt sslmode=verify-ca"); # And finally, with the correct root cert. note "connect with correct server CA cert file"; -test_connect_ok("sslrootcert=ssl/root+server_ca.crt sslmode=require"); -test_connect_ok("sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca"); -test_connect_ok("sslrootcert=ssl/root+server_ca.crt sslmode=verify-full"); +test_connect_ok($common_connstr, + "sslrootcert=ssl/root+server_ca.crt sslmode=require"); +test_connect_ok($common_connstr, + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca"); +test_connect_ok($common_connstr, + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-full"); # Test with cert root file that contains two certificates. The client should # be able to pick the right one, regardless of the order in the file. 
-test_connect_ok("sslrootcert=ssl/both-cas-1.crt sslmode=verify-ca"); -test_connect_ok("sslrootcert=ssl/both-cas-2.crt sslmode=verify-ca"); +test_connect_ok($common_connstr, + "sslrootcert=ssl/both-cas-1.crt sslmode=verify-ca"); +test_connect_ok($common_connstr, + "sslrootcert=ssl/both-cas-2.crt sslmode=verify-ca"); note "testing sslcrl option with a non-revoked cert"; # Invalid CRL filename is the same as no CRL, succeeds -test_connect_ok( +test_connect_ok($common_connstr, "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=invalid"); # A CRL belonging to a different CA is not accepted, fails -test_connect_fails( +test_connect_fails($common_connstr, "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/client.crl"); # With the correct CRL, succeeds (this cert is not revoked) -test_connect_ok( +test_connect_ok($common_connstr, "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl" ); @@ -136,9 +110,9 @@ sub test_connect_fails $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR sslmode=verify-full"; -test_connect_ok("sslmode=require host=wronghost.test"); -test_connect_ok("sslmode=verify-ca host=wronghost.test"); -test_connect_fails("sslmode=verify-full host=wronghost.test"); +test_connect_ok($common_connstr, "sslmode=require host=wronghost.test"); +test_connect_ok($common_connstr, "sslmode=verify-ca host=wronghost.test"); +test_connect_fails($common_connstr, "sslmode=verify-full host=wronghost.test"); # Test Subject Alternative Names. switch_server_cert($node, 'server-multiple-alt-names'); @@ -147,12 +121,13 @@ sub test_connect_fails $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR sslmode=verify-full"; -test_connect_ok("host=dns1.alt-name.pg-ssltest.test"); -test_connect_ok("host=dns2.alt-name.pg-ssltest.test"); -test_connect_ok("host=foo.wildcard.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=dns1.alt-name.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=dns2.alt-name.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=foo.wildcard.pg-ssltest.test"); -test_connect_fails("host=wronghost.alt-name.pg-ssltest.test"); -test_connect_fails("host=deep.subdomain.wildcard.pg-ssltest.test"); +test_connect_fails($common_connstr, "host=wronghost.alt-name.pg-ssltest.test"); +test_connect_fails($common_connstr, + "host=deep.subdomain.wildcard.pg-ssltest.test"); # Test certificate with a single Subject Alternative Name. (this gives a # slightly different error message, that's all) @@ -162,10 +137,11 @@ sub test_connect_fails $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR sslmode=verify-full"; -test_connect_ok("host=single.alt-name.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=single.alt-name.pg-ssltest.test"); -test_connect_fails("host=wronghost.alt-name.pg-ssltest.test"); -test_connect_fails("host=deep.subdomain.wildcard.pg-ssltest.test"); +test_connect_fails($common_connstr, "host=wronghost.alt-name.pg-ssltest.test"); +test_connect_fails($common_connstr, + "host=deep.subdomain.wildcard.pg-ssltest.test"); # Test server certificate with a CN and SANs. Per RFCs 2818 and 6125, the CN # should be ignored when the certificate has both. 
@@ -175,9 +151,9 @@ sub test_connect_fails $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR sslmode=verify-full"; -test_connect_ok("host=dns1.alt-name.pg-ssltest.test"); -test_connect_ok("host=dns2.alt-name.pg-ssltest.test"); -test_connect_fails("host=common-name.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=dns1.alt-name.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=dns2.alt-name.pg-ssltest.test"); +test_connect_fails($common_connstr, "host=common-name.pg-ssltest.test"); # Finally, test a server certificate that has no CN or SANs. Of course, that's # not a very sensible certificate, but libpq should handle it gracefully. @@ -185,8 +161,10 @@ sub test_connect_fails $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR"; -test_connect_ok("sslmode=verify-ca host=common-name.pg-ssltest.test"); -test_connect_fails("sslmode=verify-full host=common-name.pg-ssltest.test"); +test_connect_ok($common_connstr, + "sslmode=verify-ca host=common-name.pg-ssltest.test"); +test_connect_fails($common_connstr, + "sslmode=verify-full host=common-name.pg-ssltest.test"); # Test that the CRL works note "testing client-side CRL"; @@ -196,8 +174,9 @@ sub test_connect_fails "user=ssltestuser dbname=trustdb sslcert=invalid hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test"; # Without the CRL, succeeds. With it, fails. -test_connect_ok("sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca"); -test_connect_fails( +test_connect_ok($common_connstr, + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca"); +test_connect_fails($common_connstr, "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl" ); @@ -210,18 +189,18 @@ sub test_connect_fails "sslrootcert=ssl/root+server_ca.crt sslmode=require dbname=certdb hostaddr=$SERVERHOSTADDR"; # no client cert -test_connect_fails("user=ssltestuser sslcert=invalid"); +test_connect_fails($common_connstr, "user=ssltestuser sslcert=invalid"); # correct client cert -test_connect_ok( +test_connect_ok($common_connstr, "user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key"); # client cert belonging to another user -test_connect_fails( +test_connect_fails($common_connstr, "user=anotheruser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key"); # revoked client cert -test_connect_fails( +test_connect_fails($common_connstr, "user=ssltestuser sslcert=ssl/client-revoked.crt sslkey=ssl/client-revoked.key" ); @@ -230,8 +209,9 @@ sub test_connect_fails $common_connstr = "user=ssltestuser dbname=certdb sslkey=ssl/client_tmp.key sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR"; -test_connect_ok("sslmode=require sslcert=ssl/client+client_ca.crt"); -test_connect_fails("sslmode=require sslcert=ssl/client.crt"); +test_connect_ok($common_connstr, + "sslmode=require sslcert=ssl/client+client_ca.crt"); +test_connect_fails($common_connstr, "sslmode=require sslcert=ssl/client.crt"); # clean up unlink "ssl/client_tmp.key"; From ed9b3606dadb461aac57e41ac509f3892095a394 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 16 Nov 2017 10:36:18 -0500 Subject: [PATCH 0549/1087] Further refactoring of c.h and nearby files. This continues the work of commit 91aec93e6 by getting rid of a lot of Windows-specific funny business in "section 0". 
Instead of including pg_config_os.h in different places depending on platform, let's standardize on putting it before the system headers, and in consequence reduce win32.h to just what has to appear before the system headers or the body of c.h (the latter category seems to include only PGDLLIMPORT and PGDLLEXPORT). The rest of what was in win32.h is moved to a new sub-include of port.h, win32_port.h. Some of what was in port.h seems to better belong there too. It's possible that I missed some declaration ordering dependency that needs to be preserved, but hopefully the buildfarm will find that out in short order. Unlike the previous commit, no back-patch, since this is just cleanup not a prerequisite for a bug fix. Discussion: https://postgr.es/m/29650.1510761080@sss.pgh.pa.us --- src/bin/pg_ctl/pg_ctl.c | 8 - src/include/c.h | 30 +-- src/include/port.h | 59 +---- src/include/port/win32.h | 427 ++---------------------------- src/include/port/win32_port.h | 483 ++++++++++++++++++++++++++++++++++ 5 files changed, 512 insertions(+), 495 deletions(-) create mode 100644 src/include/port/win32_port.h diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c index 82de7df048..e43e7b24e1 100644 --- a/src/bin/pg_ctl/pg_ctl.c +++ b/src/bin/pg_ctl/pg_ctl.c @@ -9,14 +9,6 @@ *------------------------------------------------------------------------- */ -#ifdef WIN32 -/* - * Need this to get defines for restricted tokens and jobs. And it - * has to be set before any header from the Win32 API is loaded. - */ -#define _WIN32_WINNT 0x0501 -#endif - #include "postgres_fe.h" #include diff --git a/src/include/c.h b/src/include/c.h index c8c7be1d21..a61428843a 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -52,32 +52,9 @@ #include "pg_config.h" #include "pg_config_manual.h" /* must be after pg_config.h */ - -/* - * We always rely on the WIN32 macro being set by our build system, - * but _WIN32 is the compiler pre-defined macro. So make sure we define - * WIN32 whenever _WIN32 is set, to facilitate standalone building. - */ -#if defined(_WIN32) && !defined(WIN32) -#define WIN32 -#endif - -#if !defined(WIN32) && !defined(__CYGWIN__) /* win32 includes further down */ #include "pg_config_os.h" /* must be before any system header files */ -#endif - -#if _MSC_VER >= 1400 || defined(HAVE_CRTDEFS_H) -#define errcode __msvc_errcode -#include -#undef errcode -#endif - -/* - * We have to include stdlib.h here because it defines many of these macros - * on some platforms, and we only want our definitions used if stdlib.h doesn't - * have its own. The same goes for stddef and stdarg if present. - */ +/* System header files that should be available everywhere in Postgres */ #include #include #include @@ -99,11 +76,6 @@ #include #endif -#if defined(WIN32) || defined(__CYGWIN__) -/* We have to redefine some system functions after they are included above. */ -#include "pg_config_os.h" -#endif - /* ---------------------------------------------------------------- * Section 1: compiler characteristics diff --git a/src/include/port.h b/src/include/port.h index 17a7710a5e..f55085de3a 100644 --- a/src/include/port.h +++ b/src/include/port.h @@ -17,6 +17,15 @@ #include #include +/* + * Windows has enough specialized port stuff that we push most of it off + * into another file. + * Note: Some CYGWIN includes might #define WIN32. 
+ */ +#if defined(WIN32) && !defined(__CYGWIN__) +#include "port/win32_port.h" +#endif + /* socket has a different definition on WIN32 */ #ifndef WIN32 typedef int pgsocket; @@ -101,11 +110,6 @@ extern int find_other_exec(const char *argv0, const char *target, /* Doesn't belong here, but this is used with find_other_exec(), so... */ #define PG_BACKEND_VERSIONSTR "postgres (PostgreSQL) " PG_VERSION "\n" -/* Windows security token manipulation (in exec.c) */ -#ifdef WIN32 -extern BOOL AddUserToTokenDacl(HANDLE hToken); -#endif - #if defined(WIN32) || defined(__CYGWIN__) #define EXE ".exe" @@ -185,36 +189,10 @@ extern int pg_printf(const char *fmt,...) pg_attribute_printf(1, 2); #endif #endif /* USE_REPL_SNPRINTF */ -#if defined(WIN32) -/* - * Versions of libintl >= 0.18? try to replace setlocale() with a macro - * to their own versions. Remove the macro, if it exists, because it - * ends up calling the wrong version when the backend and libintl use - * different versions of msvcrt. - */ -#if defined(setlocale) -#undef setlocale -#endif - -/* - * Define our own wrapper macro around setlocale() to work around bugs in - * Windows' native setlocale() function. - */ -extern char *pgwin32_setlocale(int category, const char *locale); - -#define setlocale(a,b) pgwin32_setlocale(a,b) -#endif /* WIN32 */ - /* Portable prompt handling */ extern void simple_prompt(const char *prompt, char *destination, size_t destlen, bool echo); -#ifdef WIN32 -#define PG_SIGNAL_COUNT 32 -#define kill(pid,sig) pgkill(pid,sig) -extern int pgkill(int pid, int sig); -#endif - extern int pclose_check(FILE *stream); /* Global variable holding time zone information. */ @@ -262,23 +240,6 @@ extern bool pgwin32_is_junction(const char *path); extern bool rmtree(const char *path, bool rmtopdir); -/* - * stat() is not guaranteed to set the st_size field on win32, so we - * redefine it to our own implementation that is. - * - * We must pull in sys/stat.h here so the system header definition - * goes in first, and we redefine that, and not the other way around. - * - * Some frontends don't need the size from stat, so if UNSAFE_STAT_OK - * is defined we don't bother with this. - */ -#if defined(WIN32) && !defined(__CYGWIN__) && !defined(UNSAFE_STAT_OK) -#include -extern int pgwin32_safestat(const char *path, struct stat *buf); - -#define stat(a,b) pgwin32_safestat(a,b) -#endif - #if defined(WIN32) && !defined(__CYGWIN__) /* @@ -353,7 +314,7 @@ extern int gettimeofday(struct timeval *tp, struct timezone *tzp); extern char *crypt(const char *key, const char *setting); #endif -/* WIN32 handled in port/win32.h */ +/* WIN32 handled in port/win32_port.h */ #ifndef WIN32 #define pgoff_t off_t #ifdef __NetBSD__ diff --git a/src/include/port/win32.h b/src/include/port/win32.h index 23f89748ac..123b2100f8 100644 --- a/src/include/port/win32.h +++ b/src/include/port/win32.h @@ -1,5 +1,14 @@ /* src/include/port/win32.h */ +/* + * We always rely on the WIN32 macro being set by our build system, + * but _WIN32 is the compiler pre-defined macro. So make sure we define + * WIN32 whenever _WIN32 is set, to facilitate standalone building. + */ +#if defined(_WIN32) && !defined(WIN32) +#define WIN32 +#endif + /* * Make sure _WIN32_WINNT has the minimum required value. * Leave a higher value in place. When building with at least Visual @@ -25,64 +34,26 @@ #endif /* - * Always build with SSPI support. Keep it as a #define in case - * we want a switch to disable it sometime in the future. 
+ * We need to prevent <crtdefs.h> from defining a symbol conflicting with
+ * our errcode() function. Since it's likely to get included by standard
+ * system headers, pre-emptively include it now.
  */
-#define ENABLE_SSPI 1
-
-/* undefine and redefine after #include */
-#undef mkdir
-
-#undef ERROR
-
-/*
- * The Mingw64 headers choke if this is already defined - they
- * define it themselves.
- */
-#if !defined(__MINGW64_VERSION_MAJOR) || defined(_MSC_VER)
-#define _WINSOCKAPI_
+#if _MSC_VER >= 1400 || defined(HAVE_CRTDEFS_H)
+#define errcode __msvc_errcode
+#include <crtdefs.h>
+#undef errcode
 #endif
-#include <winsock2.h>
-#include <ws2tcpip.h>
-#include <windows.h>
-#undef small
-#include <process.h>
-#include <signal.h>
-#include <errno.h>
-#include <direct.h>
-#include <sys/utime.h>			/* for non-unicode version */
-#undef near
-
-/* Must be here to avoid conflicting with prototype in windows.h */
-#define mkdir(a,b)	mkdir(a)
-
-#define ftruncate(a,b)	chsize(a,b)
-
-/* Windows doesn't have fsync() as such, use _commit() */
-#define fsync(fd) _commit(fd)
 /*
- * For historical reasons, we allow setting wal_sync_method to
- * fsync_writethrough on Windows, even though it's really identical to fsync
- * (both code paths wind up at _commit()).
- */
-#define HAVE_FSYNC_WRITETHROUGH
-#define FSYNC_WRITETHROUGH_IS_FSYNC
-
-#define USES_WINSOCK
-
-/* defines for dynamic linking on Win32 platform
- *
+ * defines for dynamic linking on Win32 platform
  * http://support.microsoft.com/kb/132044
  * http://msdn.microsoft.com/en-us/library/8fskxacy(v=vs.80).aspx
  * http://msdn.microsoft.com/en-us/library/a90k134d(v=vs.80).aspx
  */
-#if defined(WIN32) || defined(__CYGWIN__)
-
 #ifdef BUILDING_DLL
 #define PGDLLIMPORT __declspec (dllexport)
-#else							/* not BUILDING_DLL */
+#else
 #define PGDLLIMPORT __declspec (dllimport)
 #endif
@@ -91,365 +62,3 @@
 #else
 #define PGDLLEXPORT
 #endif
-#else							/* not CYGWIN, not MSVC, not MingW */
-#define PGDLLIMPORT
-#define PGDLLEXPORT
-#endif
-
-
-/*
- * IPC defines
- */
-#undef HAVE_UNION_SEMUN
-#define HAVE_UNION_SEMUN 1
-
-#define IPC_RMID 256
-#define IPC_CREAT 512
-#define IPC_EXCL 1024
-#define IPC_PRIVATE 234564
-#define IPC_NOWAIT	2048
-#define IPC_STAT 4096
-
-#define EACCESS 2048
-#ifndef EIDRM
-#define EIDRM 4096
-#endif
-
-#define SETALL 8192
-#define GETNCNT 16384
-#define GETVAL 65536
-#define SETVAL 131072
-#define GETPID 262144
-
-
-/*
- * Signal stuff
- *
- * For WIN32, there is no wait() call so there are no wait() macros
- * to interpret the return value of system().  Instead, system()
- * return values < 0x100 are used for exit() termination, and higher
- * values are used to indicated non-exit() termination, which is
- * similar to a unix-style signal exit (think SIGSEGV ==
- * STATUS_ACCESS_VIOLATION).  Return values are broken up into groups:
- *
- * http://msdn2.microsoft.com/en-gb/library/aa489609.aspx
- *
- *		NT_SUCCESS			0 - 0x3FFFFFFF
- *		NT_INFORMATION		0x40000000 - 0x7FFFFFFF
- *		NT_WARNING			0x80000000 - 0xBFFFFFFF
- *		NT_ERROR			0xC0000000 - 0xFFFFFFFF
- *
- * Effectively, we don't care on the severity of the return value from
- * system(), we just need to know if it was because of exit() or generated
- * by the system, and it seems values >= 0x100 are system-generated.
- * See this URL for a list of WIN32 STATUS_* values: - * - * Wine (URL used in our error messages) - - * http://source.winehq.org/source/include/ntstatus.h - * Descriptions - http://www.comp.nus.edu.sg/~wuyongzh/my_doc/ntstatus.txt - * MS SDK - http://www.nologs.com/ntstatus.html - * - * It seems the exception lists are in both ntstatus.h and winnt.h, but - * ntstatus.h has a more comprehensive list, and it only contains - * exception values, rather than winnt, which contains lots of other - * things: - * - * http://www.microsoft.com/msj/0197/exception/exception.aspx - * - * The ExceptionCode parameter is the number that the operating system - * assigned to the exception. You can see a list of various exception codes - * in WINNT.H by searching for #defines that start with "STATUS_". For - * example, the code for the all-too-familiar STATUS_ACCESS_VIOLATION is - * 0xC0000005. A more complete set of exception codes can be found in - * NTSTATUS.H from the Windows NT DDK. - * - * Some day we might want to print descriptions for the most common - * exceptions, rather than printing an include file name. We could use - * RtlNtStatusToDosError() and pass to FormatMessage(), which can print - * the text of error values, but MinGW does not support - * RtlNtStatusToDosError(). - */ -#define WIFEXITED(w) (((w) & 0XFFFFFF00) == 0) -#define WIFSIGNALED(w) (!WIFEXITED(w)) -#define WEXITSTATUS(w) (w) -#define WTERMSIG(w) (w) - -#define sigmask(sig) ( 1 << ((sig)-1) ) - -/* Signal function return values */ -#undef SIG_DFL -#undef SIG_ERR -#undef SIG_IGN -#define SIG_DFL ((pqsigfunc)0) -#define SIG_ERR ((pqsigfunc)-1) -#define SIG_IGN ((pqsigfunc)1) - -/* Some extra signals */ -#define SIGHUP 1 -#define SIGQUIT 3 -#define SIGTRAP 5 -#define SIGABRT 22 /* Set to match W32 value -- not UNIX value */ -#define SIGKILL 9 -#define SIGPIPE 13 -#define SIGALRM 14 -#define SIGSTOP 17 -#define SIGTSTP 18 -#define SIGCONT 19 -#define SIGCHLD 20 -#define SIGTTIN 21 -#define SIGTTOU 22 /* Same as SIGABRT -- no problem, I hope */ -#define SIGWINCH 28 -#define SIGUSR1 30 -#define SIGUSR2 31 - -/* - * New versions of mingw have gettimeofday() and also declare - * struct timezone to support it. - */ -#ifndef HAVE_GETTIMEOFDAY -struct timezone -{ - int tz_minuteswest; /* Minutes west of GMT. */ - int tz_dsttime; /* Nonzero if DST is ever in effect. */ -}; -#endif - -/* for setitimer in backend/port/win32/timer.c */ -#define ITIMER_REAL 0 -struct itimerval -{ - struct timeval it_interval; - struct timeval it_value; -}; - -int setitimer(int which, const struct itimerval *value, struct itimerval *ovalue); - -/* - * WIN32 does not provide 64-bit off_t, but does provide the functions operating - * with 64-bit offsets. - */ -#define pgoff_t __int64 -#ifdef _MSC_VER -#define fseeko(stream, offset, origin) _fseeki64(stream, offset, origin) -#define ftello(stream) _ftelli64(stream) -#else -#ifndef fseeko -#define fseeko(stream, offset, origin) fseeko64(stream, offset, origin) -#endif -#ifndef ftello -#define ftello(stream) ftello64(stream) -#endif -#endif - -/* - * Supplement to . - * - * Perl already has typedefs for uid_t and gid_t. - */ -#ifndef PLPERL_HAVE_UID_GID -typedef int uid_t; -typedef int gid_t; -#endif -typedef long key_t; - -#ifdef _MSC_VER -typedef int pid_t; -#endif - -/* - * Supplement to . - */ -#define lstat(path, sb) stat((path), (sb)) - -/* - * Supplement to . - * This is the same value as _O_NOINHERIT in the MS header file. This is - * to ensure that we don't collide with a future definition. 
It means - * we cannot use _O_NOINHERIT ourselves. - */ -#define O_DSYNC 0x0080 - -/* - * Supplement to . - * - * We redefine network-related Berkeley error symbols as the corresponding WSA - * constants. This allows elog.c to recognize them as being in the Winsock - * error code range and pass them off to pgwin32_socket_strerror(), since - * Windows' version of plain strerror() won't cope. Note that this will break - * if these names are used for anything else besides Windows Sockets errors. - * See TranslateSocketError() when changing this list. - */ -#undef EAGAIN -#define EAGAIN WSAEWOULDBLOCK -#undef EINTR -#define EINTR WSAEINTR -#undef EMSGSIZE -#define EMSGSIZE WSAEMSGSIZE -#undef EAFNOSUPPORT -#define EAFNOSUPPORT WSAEAFNOSUPPORT -#undef EWOULDBLOCK -#define EWOULDBLOCK WSAEWOULDBLOCK -#undef ECONNABORTED -#define ECONNABORTED WSAECONNABORTED -#undef ECONNRESET -#define ECONNRESET WSAECONNRESET -#undef EINPROGRESS -#define EINPROGRESS WSAEINPROGRESS -#undef EISCONN -#define EISCONN WSAEISCONN -#undef ENOBUFS -#define ENOBUFS WSAENOBUFS -#undef EPROTONOSUPPORT -#define EPROTONOSUPPORT WSAEPROTONOSUPPORT -#undef ECONNREFUSED -#define ECONNREFUSED WSAECONNREFUSED -#undef ENOTSOCK -#define ENOTSOCK WSAENOTSOCK -#undef EOPNOTSUPP -#define EOPNOTSUPP WSAEOPNOTSUPP -#undef EADDRINUSE -#define EADDRINUSE WSAEADDRINUSE -#undef EADDRNOTAVAIL -#define EADDRNOTAVAIL WSAEADDRNOTAVAIL -#undef EHOSTUNREACH -#define EHOSTUNREACH WSAEHOSTUNREACH -#undef ENOTCONN -#define ENOTCONN WSAENOTCONN - -/* - * Extended locale functions with gratuitous underscore prefixes. - * (These APIs are nevertheless fully documented by Microsoft.) - */ -#define locale_t _locale_t -#define tolower_l _tolower_l -#define toupper_l _toupper_l -#define towlower_l _towlower_l -#define towupper_l _towupper_l -#define isdigit_l _isdigit_l -#define iswdigit_l _iswdigit_l -#define isalpha_l _isalpha_l -#define iswalpha_l _iswalpha_l -#define isalnum_l _isalnum_l -#define iswalnum_l _iswalnum_l -#define isupper_l _isupper_l -#define iswupper_l _iswupper_l -#define islower_l _islower_l -#define iswlower_l _iswlower_l -#define isgraph_l _isgraph_l -#define iswgraph_l _iswgraph_l -#define isprint_l _isprint_l -#define iswprint_l _iswprint_l -#define ispunct_l _ispunct_l -#define iswpunct_l _iswpunct_l -#define isspace_l _isspace_l -#define iswspace_l _iswspace_l -#define strcoll_l _strcoll_l -#define strxfrm_l _strxfrm_l -#define wcscoll_l _wcscoll_l -#define wcstombs_l _wcstombs_l -#define mbstowcs_l _mbstowcs_l - - -/* In backend/port/win32/signal.c */ -extern PGDLLIMPORT volatile int pg_signal_queue; -extern PGDLLIMPORT int pg_signal_mask; -extern HANDLE pgwin32_signal_event; -extern HANDLE pgwin32_initial_signal_pipe; - -#define UNBLOCKED_SIGNAL_QUEUE() (pg_signal_queue & ~pg_signal_mask) - - -void pgwin32_signal_initialize(void); -HANDLE pgwin32_create_signal_listener(pid_t pid); -void pgwin32_dispatch_queued_signals(void); -void pg_queue_signal(int signum); - -/* In backend/port/win32/socket.c */ -#ifndef FRONTEND -#define socket(af, type, protocol) pgwin32_socket(af, type, protocol) -#define bind(s, addr, addrlen) pgwin32_bind(s, addr, addrlen) -#define listen(s, backlog) pgwin32_listen(s, backlog) -#define accept(s, addr, addrlen) pgwin32_accept(s, addr, addrlen) -#define connect(s, name, namelen) pgwin32_connect(s, name, namelen) -#define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout) -#define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags) -#define send(s, buf, len, flags) pgwin32_send(s, buf, 
len, flags) - -SOCKET pgwin32_socket(int af, int type, int protocol); -int pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen); -int pgwin32_listen(SOCKET s, int backlog); -SOCKET pgwin32_accept(SOCKET s, struct sockaddr *addr, int *addrlen); -int pgwin32_connect(SOCKET s, const struct sockaddr *name, int namelen); -int pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout); -int pgwin32_recv(SOCKET s, char *buf, int len, int flags); -int pgwin32_send(SOCKET s, const void *buf, int len, int flags); - -const char *pgwin32_socket_strerror(int err); -int pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout); - -extern int pgwin32_noblock; - -#endif - -/* in backend/port/win32_shmem.c */ -extern int pgwin32_ReserveSharedMemoryRegion(HANDLE); - -/* in backend/port/win32/crashdump.c */ -extern void pgwin32_install_crashdump_handler(void); - -/* in port/win32error.c */ -extern void _dosmaperr(unsigned long); - -/* in port/win32env.c */ -extern int pgwin32_putenv(const char *); -extern void pgwin32_unsetenv(const char *); - -/* in port/win32security.c */ -extern int pgwin32_is_service(void); -extern int pgwin32_is_admin(void); - -#define putenv(x) pgwin32_putenv(x) -#define unsetenv(x) pgwin32_unsetenv(x) - -/* Things that exist in MingW headers, but need to be added to MSVC */ -#ifdef _MSC_VER - -#ifndef _WIN64 -typedef long ssize_t; -#else -typedef __int64 ssize_t; -#endif - -typedef unsigned short mode_t; - -#define S_IRUSR _S_IREAD -#define S_IWUSR _S_IWRITE -#define S_IXUSR _S_IEXEC -#define S_IRWXU (S_IRUSR | S_IWUSR | S_IXUSR) -/* see also S_IRGRP etc below */ -#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR) -#define S_ISREG(m) (((m) & S_IFMT) == S_IFREG) - -#define F_OK 0 -#define W_OK 2 -#define R_OK 4 - -#if (_MSC_VER < 1800) -#define isinf(x) ((_fpclass(x) == _FPCLASS_PINF) || (_fpclass(x) == _FPCLASS_NINF)) -#define isnan(x) _isnan(x) -#endif - -/* Pulled from Makefile.port in mingw */ -#define DLSUFFIX ".dll" - -#endif /* _MSC_VER */ - -/* These aren't provided by either MingW or MSVC */ -#define S_IRGRP 0 -#define S_IWGRP 0 -#define S_IXGRP 0 -#define S_IRWXG 0 -#define S_IROTH 0 -#define S_IWOTH 0 -#define S_IXOTH 0 -#define S_IRWXO 0 diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h new file mode 100644 index 0000000000..db7dc16932 --- /dev/null +++ b/src/include/port/win32_port.h @@ -0,0 +1,483 @@ +/*------------------------------------------------------------------------- + * + * win32_port.h + * Windows-specific compatibility stuff. + * + * Note this is read in MinGW as well as native Windows builds, + * but not in Cygwin builds. + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/port/win32_port.h + * + *------------------------------------------------------------------------- + */ +#ifndef PG_WIN32_PORT_H +#define PG_WIN32_PORT_H + +/* + * Always build with SSPI support. Keep it as a #define in case + * we want a switch to disable it sometime in the future. + */ +#define ENABLE_SSPI 1 + +/* undefine and redefine after #include */ +#undef mkdir + +#undef ERROR + +/* + * The MinGW64 headers choke if this is already defined - they + * define it themselves. 
+ */
+#if !defined(__MINGW64_VERSION_MAJOR) || defined(_MSC_VER)
+#define _WINSOCKAPI_
+#endif
+
+#include <winsock2.h>
+#include <ws2tcpip.h>
+#include <windows.h>
+#undef small
+#include <process.h>
+#include <signal.h>
+#include <direct.h>
+#include <sys/utime.h>			/* for non-unicode version */
+#undef near
+
+/* Must be here to avoid conflicting with prototype in windows.h */
+#define mkdir(a,b)	mkdir(a)
+
+#define ftruncate(a,b)	chsize(a,b)
+
+/* Windows doesn't have fsync() as such, use _commit() */
+#define fsync(fd) _commit(fd)
+
+/*
+ * For historical reasons, we allow setting wal_sync_method to
+ * fsync_writethrough on Windows, even though it's really identical to fsync
+ * (both code paths wind up at _commit()).
+ */
+#define HAVE_FSYNC_WRITETHROUGH
+#define FSYNC_WRITETHROUGH_IS_FSYNC
+
+#define USES_WINSOCK
+
+/*
+ * IPC defines
+ */
+#undef HAVE_UNION_SEMUN
+#define HAVE_UNION_SEMUN 1
+
+#define IPC_RMID 256
+#define IPC_CREAT 512
+#define IPC_EXCL 1024
+#define IPC_PRIVATE 234564
+#define IPC_NOWAIT	2048
+#define IPC_STAT 4096
+
+#define EACCESS 2048
+#ifndef EIDRM
+#define EIDRM 4096
+#endif
+
+#define SETALL 8192
+#define GETNCNT 16384
+#define GETVAL 65536
+#define SETVAL 131072
+#define GETPID 262144
+
+
+/*
+ * Signal stuff
+ *
+ * For WIN32, there is no wait() call so there are no wait() macros
+ * to interpret the return value of system().  Instead, system()
+ * return values < 0x100 are used for exit() termination, and higher
+ * values are used to indicate non-exit() termination, which is
+ * similar to a unix-style signal exit (think SIGSEGV ==
+ * STATUS_ACCESS_VIOLATION).  Return values are broken up into groups:
+ *
+ * http://msdn2.microsoft.com/en-gb/library/aa489609.aspx
+ *
+ *		NT_SUCCESS			0 - 0x3FFFFFFF
+ *		NT_INFORMATION		0x40000000 - 0x7FFFFFFF
+ *		NT_WARNING			0x80000000 - 0xBFFFFFFF
+ *		NT_ERROR			0xC0000000 - 0xFFFFFFFF
+ *
+ * Effectively, we don't care about the severity of the return value from
+ * system(), we just need to know if it was because of exit() or generated
+ * by the system, and it seems values >= 0x100 are system-generated.
+ * See this URL for a list of WIN32 STATUS_* values:
+ *
+ * Wine (URL used in our error messages) -
+ * http://source.winehq.org/source/include/ntstatus.h
+ * Descriptions - http://www.comp.nus.edu.sg/~wuyongzh/my_doc/ntstatus.txt
+ * MS SDK - http://www.nologs.com/ntstatus.html
+ *
+ * It seems the exception lists are in both ntstatus.h and winnt.h, but
+ * ntstatus.h has a more comprehensive list, and it only contains
+ * exception values, rather than winnt, which contains lots of other
+ * things:
+ *
+ * http://www.microsoft.com/msj/0197/exception/exception.aspx
+ *
+ * The ExceptionCode parameter is the number that the operating system
+ * assigned to the exception. You can see a list of various exception codes
+ * in WINNT.H by searching for #defines that start with "STATUS_". For
+ * example, the code for the all-too-familiar STATUS_ACCESS_VIOLATION is
+ * 0xC0000005. A more complete set of exception codes can be found in
+ * NTSTATUS.H from the Windows NT DDK.
+ *
+ * Some day we might want to print descriptions for the most common
+ * exceptions, rather than printing an include file name. We could use
+ * RtlNtStatusToDosError() and pass to FormatMessage(), which can print
+ * the text of error values, but MinGW does not support
+ * RtlNtStatusToDosError().
+ */
+#define WIFEXITED(w)	(((w) & 0XFFFFFF00) == 0)
+#define WIFSIGNALED(w)	(!WIFEXITED(w))
+#define WEXITSTATUS(w)	(w)
+#define WTERMSIG(w)		(w)
+
+#define sigmask(sig) ( 1 << ((sig)-1) )
+
+/* Signal function return values */
+#undef SIG_DFL
+#undef SIG_ERR
+#undef SIG_IGN
+#define SIG_DFL ((pqsigfunc)0)
+#define SIG_ERR ((pqsigfunc)-1)
+#define SIG_IGN ((pqsigfunc)1)
+
+/* Some extra signals */
+#define SIGHUP				1
+#define SIGQUIT				3
+#define SIGTRAP				5
+#define SIGABRT				22	/* Set to match W32 value -- not UNIX value */
+#define SIGKILL				9
+#define SIGPIPE				13
+#define SIGALRM				14
+#define SIGSTOP				17
+#define SIGTSTP				18
+#define SIGCONT				19
+#define SIGCHLD				20
+#define SIGTTIN				21
+#define SIGTTOU				22	/* Same as SIGABRT -- no problem, I hope */
+#define SIGWINCH			28
+#define SIGUSR1				30
+#define SIGUSR2				31
+
+/*
+ * New versions of MinGW have gettimeofday() and also declare
+ * struct timezone to support it.
+ */
+#ifndef HAVE_GETTIMEOFDAY
+struct timezone
+{
+	int			tz_minuteswest; /* Minutes west of GMT. */
+	int			tz_dsttime;		/* Nonzero if DST is ever in effect. */
+};
+#endif
+
+/* for setitimer in backend/port/win32/timer.c */
+#define ITIMER_REAL 0
+struct itimerval
+{
+	struct timeval it_interval;
+	struct timeval it_value;
+};
+
+int			setitimer(int which, const struct itimerval *value, struct itimerval *ovalue);
+
+/*
+ * WIN32 does not provide 64-bit off_t, but does provide the functions operating
+ * with 64-bit offsets.
+ */
+#define pgoff_t __int64
+#ifdef _MSC_VER
+#define fseeko(stream, offset, origin) _fseeki64(stream, offset, origin)
+#define ftello(stream) _ftelli64(stream)
+#else
+#ifndef fseeko
+#define fseeko(stream, offset, origin) fseeko64(stream, offset, origin)
+#endif
+#ifndef ftello
+#define ftello(stream) ftello64(stream)
+#endif
+#endif
+
+/*
+ * Win32 also doesn't have symlinks, but we can emulate them with
+ * junction points on newer Win32 versions.
+ *
+ * Cygwin has its own symlinks which work on Win95/98/ME where
+ * junction points don't, so use those instead. We have no way of
+ * knowing what type of system Cygwin binaries will be run on.
+ * Note: Some CYGWIN includes might #define WIN32.
+ */
+extern int	pgsymlink(const char *oldpath, const char *newpath);
+extern int	pgreadlink(const char *path, char *buf, size_t size);
+extern bool pgwin32_is_junction(const char *path);
+
+#define symlink(oldpath, newpath)	pgsymlink(oldpath, newpath)
+#define readlink(path, buf, size)	pgreadlink(path, buf, size)
+
+/*
+ * Supplement to <sys/types.h>.
+ *
+ * Perl already has typedefs for uid_t and gid_t.
+ */
+#ifndef PLPERL_HAVE_UID_GID
+typedef int uid_t;
+typedef int gid_t;
+#endif
+typedef long key_t;
+
+#ifdef _MSC_VER
+typedef int pid_t;
+#endif
+
+/*
+ * Supplement to <sys/stat.h>.
+ */
+#define lstat(path, sb) stat(path, sb)
+
+/*
+ * stat() is not guaranteed to set the st_size field on win32, so we
+ * redefine it to our own implementation that is.
+ *
+ * We must pull in sys/stat.h here so the system header definition
+ * goes in first, and we redefine that, and not the other way around.
+ *
+ * Some frontends don't need the size from stat, so if UNSAFE_STAT_OK
+ * is defined we don't bother with this.
+ */
+#ifndef UNSAFE_STAT_OK
+#include <sys/stat.h>
+extern int	pgwin32_safestat(const char *path, struct stat *buf);
+
+#define stat(a,b) pgwin32_safestat(a,b)
+#endif
+
+/*
+ * Supplement to <fcntl.h>.
+ * This is the same value as _O_NOINHERIT in the MS header file. This is
+ * to ensure that we don't collide with a future definition. It means
+ * we cannot use _O_NOINHERIT ourselves.
+ */
+#define O_DSYNC 0x0080
+
+/*
+ * Supplement to <errno.h>.
+ *
+ * We redefine network-related Berkeley error symbols as the corresponding WSA
+ * constants. This allows elog.c to recognize them as being in the Winsock
+ * error code range and pass them off to pgwin32_socket_strerror(), since
+ * Windows' version of plain strerror() won't cope. Note that this will break
+ * if these names are used for anything else besides Windows Sockets errors.
+ * See TranslateSocketError() when changing this list.
+ */
+#undef EAGAIN
+#define EAGAIN WSAEWOULDBLOCK
+#undef EINTR
+#define EINTR WSAEINTR
+#undef EMSGSIZE
+#define EMSGSIZE WSAEMSGSIZE
+#undef EAFNOSUPPORT
+#define EAFNOSUPPORT WSAEAFNOSUPPORT
+#undef EWOULDBLOCK
+#define EWOULDBLOCK WSAEWOULDBLOCK
+#undef ECONNABORTED
+#define ECONNABORTED WSAECONNABORTED
+#undef ECONNRESET
+#define ECONNRESET WSAECONNRESET
+#undef EINPROGRESS
+#define EINPROGRESS WSAEINPROGRESS
+#undef EISCONN
+#define EISCONN WSAEISCONN
+#undef ENOBUFS
+#define ENOBUFS WSAENOBUFS
+#undef EPROTONOSUPPORT
+#define EPROTONOSUPPORT WSAEPROTONOSUPPORT
+#undef ECONNREFUSED
+#define ECONNREFUSED WSAECONNREFUSED
+#undef ENOTSOCK
+#define ENOTSOCK WSAENOTSOCK
+#undef EOPNOTSUPP
+#define EOPNOTSUPP WSAEOPNOTSUPP
+#undef EADDRINUSE
+#define EADDRINUSE WSAEADDRINUSE
+#undef EADDRNOTAVAIL
+#define EADDRNOTAVAIL WSAEADDRNOTAVAIL
+#undef EHOSTUNREACH
+#define EHOSTUNREACH WSAEHOSTUNREACH
+#undef ENOTCONN
+#define ENOTCONN WSAENOTCONN
+
+/*
+ * Locale stuff.
+ *
+ * Extended locale functions with gratuitous underscore prefixes.
+ * (These APIs are nevertheless fully documented by Microsoft.)
+ */
+#define locale_t _locale_t
+#define tolower_l _tolower_l
+#define toupper_l _toupper_l
+#define towlower_l _towlower_l
+#define towupper_l _towupper_l
+#define isdigit_l _isdigit_l
+#define iswdigit_l _iswdigit_l
+#define isalpha_l _isalpha_l
+#define iswalpha_l _iswalpha_l
+#define isalnum_l _isalnum_l
+#define iswalnum_l _iswalnum_l
+#define isupper_l _isupper_l
+#define iswupper_l _iswupper_l
+#define islower_l _islower_l
+#define iswlower_l _iswlower_l
+#define isgraph_l _isgraph_l
+#define iswgraph_l _iswgraph_l
+#define isprint_l _isprint_l
+#define iswprint_l _iswprint_l
+#define ispunct_l _ispunct_l
+#define iswpunct_l _iswpunct_l
+#define isspace_l _isspace_l
+#define iswspace_l _iswspace_l
+#define strcoll_l _strcoll_l
+#define strxfrm_l _strxfrm_l
+#define wcscoll_l _wcscoll_l
+#define wcstombs_l _wcstombs_l
+#define mbstowcs_l _mbstowcs_l
+
+/*
+ * Versions of libintl >= 0.18? try to replace setlocale() with a macro
+ * to their own versions. Remove the macro, if it exists, because it
+ * ends up calling the wrong version when the backend and libintl use
+ * different versions of msvcrt.
+ */
+#if defined(setlocale)
+#undef setlocale
+#endif
+
+/*
+ * Define our own wrapper macro around setlocale() to work around bugs in
+ * Windows' native setlocale() function.
+ */ +extern char *pgwin32_setlocale(int category, const char *locale); + +#define setlocale(a,b) pgwin32_setlocale(a,b) + + +/* In backend/port/win32/signal.c */ +extern PGDLLIMPORT volatile int pg_signal_queue; +extern PGDLLIMPORT int pg_signal_mask; +extern HANDLE pgwin32_signal_event; +extern HANDLE pgwin32_initial_signal_pipe; + +#define UNBLOCKED_SIGNAL_QUEUE() (pg_signal_queue & ~pg_signal_mask) +#define PG_SIGNAL_COUNT 32 + +void pgwin32_signal_initialize(void); +HANDLE pgwin32_create_signal_listener(pid_t pid); +void pgwin32_dispatch_queued_signals(void); +void pg_queue_signal(int signum); + +/* In src/port/kill.c */ +#define kill(pid,sig) pgkill(pid,sig) +extern int pgkill(int pid, int sig); + +/* In backend/port/win32/socket.c */ +#ifndef FRONTEND +#define socket(af, type, protocol) pgwin32_socket(af, type, protocol) +#define bind(s, addr, addrlen) pgwin32_bind(s, addr, addrlen) +#define listen(s, backlog) pgwin32_listen(s, backlog) +#define accept(s, addr, addrlen) pgwin32_accept(s, addr, addrlen) +#define connect(s, name, namelen) pgwin32_connect(s, name, namelen) +#define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout) +#define recv(s, buf, len, flags) pgwin32_recv(s, buf, len, flags) +#define send(s, buf, len, flags) pgwin32_send(s, buf, len, flags) + +SOCKET pgwin32_socket(int af, int type, int protocol); +int pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen); +int pgwin32_listen(SOCKET s, int backlog); +SOCKET pgwin32_accept(SOCKET s, struct sockaddr *addr, int *addrlen); +int pgwin32_connect(SOCKET s, const struct sockaddr *name, int namelen); +int pgwin32_select(int nfds, fd_set *readfs, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout); +int pgwin32_recv(SOCKET s, char *buf, int len, int flags); +int pgwin32_send(SOCKET s, const void *buf, int len, int flags); + +const char *pgwin32_socket_strerror(int err); +int pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout); + +extern int pgwin32_noblock; + +#endif /* FRONTEND */ + +/* in backend/port/win32_shmem.c */ +extern int pgwin32_ReserveSharedMemoryRegion(HANDLE); + +/* in backend/port/win32/crashdump.c */ +extern void pgwin32_install_crashdump_handler(void); + +/* in port/win32error.c */ +extern void _dosmaperr(unsigned long); + +/* in port/win32env.c */ +extern int pgwin32_putenv(const char *); +extern void pgwin32_unsetenv(const char *); + +/* in port/win32security.c */ +extern int pgwin32_is_service(void); +extern int pgwin32_is_admin(void); + +/* Windows security token manipulation (in src/common/exec.c) */ +extern BOOL AddUserToTokenDacl(HANDLE hToken); + +#define putenv(x) pgwin32_putenv(x) +#define unsetenv(x) pgwin32_unsetenv(x) + +/* Things that exist in MinGW headers, but need to be added to MSVC */ +#ifdef _MSC_VER + +#ifndef _WIN64 +typedef long ssize_t; +#else +typedef __int64 ssize_t; +#endif + +typedef unsigned short mode_t; + +#define S_IRUSR _S_IREAD +#define S_IWUSR _S_IWRITE +#define S_IXUSR _S_IEXEC +#define S_IRWXU (S_IRUSR | S_IWUSR | S_IXUSR) +/* see also S_IRGRP etc below */ +#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR) +#define S_ISREG(m) (((m) & S_IFMT) == S_IFREG) + +#define F_OK 0 +#define W_OK 2 +#define R_OK 4 + +#if (_MSC_VER < 1800) +#define isinf(x) ((_fpclass(x) == _FPCLASS_PINF) || (_fpclass(x) == _FPCLASS_NINF)) +#define isnan(x) _isnan(x) +#endif + +/* Pulled from Makefile.port in MinGW */ +#define DLSUFFIX ".dll" + +#endif /* _MSC_VER */ + +/* These aren't provided by either MinGW or MSVC */ +#define S_IRGRP 0 +#define S_IWGRP 0 
+#define S_IXGRP 0 +#define S_IRWXG 0 +#define S_IROTH 0 +#define S_IWOTH 0 +#define S_IXOTH 0 +#define S_IRWXO 0 + +#endif /* PG_WIN32_PORT_H */ From 164d6338785b0b6c5a1ac30ee3e4b63bd77441ba Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 16 Nov 2017 11:16:53 -0500 Subject: [PATCH 0550/1087] Fix bogus logic for checking data dirs' versions within pg_upgrade. Commit 9be95ef15 failed to cure all of the redundancy here: we were actually calling get_major_server_version() three times for each of the old and new data directories. While that's not enormously expensive, it's still sloppy. A. Akenteva Discussion: https://postgr.es/m/f9266a85d918a3cf3a386b5148aee666@postgrespro.ru --- src/bin/pg_upgrade/check.c | 6 +++--- src/bin/pg_upgrade/exec.c | 5 ++--- 2 files changed, 5 insertions(+), 6 deletions(-) diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c index b7e1e4be19..1b9083597c 100644 --- a/src/bin/pg_upgrade/check.c +++ b/src/bin/pg_upgrade/check.c @@ -234,9 +234,9 @@ check_cluster_versions(void) { prep_status("Checking cluster versions"); - /* get old and new cluster versions */ - old_cluster.major_version = get_major_server_version(&old_cluster); - new_cluster.major_version = get_major_server_version(&new_cluster); + /* cluster versions should already have been obtained */ + Assert(old_cluster.major_version != 0); + Assert(new_cluster.major_version != 0); /* * We allow upgrades from/to the same major version for alpha/beta diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c index 810a5a0c3c..f5cd74ff97 100644 --- a/src/bin/pg_upgrade/exec.c +++ b/src/bin/pg_upgrade/exec.c @@ -331,9 +331,8 @@ check_data_dir(ClusterInfo *cluster) { const char *pg_data = cluster->pgdata; - /* get old and new cluster versions */ - old_cluster.major_version = get_major_server_version(&old_cluster); - new_cluster.major_version = get_major_server_version(&new_cluster); + /* get the cluster version */ + cluster->major_version = get_major_server_version(cluster); check_single_dir(pg_data, ""); check_single_dir(pg_data, "base"); From 98d54bb7790d5fb0a77173d5e5e3c655901b472c Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Thu, 16 Nov 2017 11:35:02 -0500 Subject: [PATCH 0551/1087] Back out the session_start and session_end hooks feature. It's become apparent during testing that there are problems with at least the testing regime. I don't think we should have it without a working test regime, and the difficulties might indicate implementation problems anyway, so I'm backing out the whole thing until that's sorted out. 
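For the record, extensions used the now-reverted hooks roughly as in the
minimal sketch below (an illustrative sketch only; the complete version is
in the removed test_session_hooks module further down, and the name
"my_session_start_hook" is hypothetical):

    static session_start_hook_type prev_session_start_hook = NULL;

    static void
    my_session_start_hook(void)
    {
        /* do per-session setup here, then chain to any previous hook */
        if (prev_session_start_hook)
            prev_session_start_hook();
    }

    void
    _PG_init(void)
    {
        /* save and install, mirroring the removed test module's _PG_init */
        prev_session_start_hook = session_start_hook;
        session_start_hook = my_session_start_hook;
    }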
This reverts commits 7459484 9989f92 cd8ce3a --- src/backend/tcop/postgres.c | 6 - src/backend/utils/init/postinit.c | 6 - src/include/tcop/tcopprot.h | 7 - src/test/modules/Makefile | 1 - .../modules/test_session_hooks/.gitignore | 4 - src/test/modules/test_session_hooks/Makefile | 25 ---- src/test/modules/test_session_hooks/README | 2 - .../expected/test_session_hooks.out | 31 ---- .../test_session_hooks/session_hooks.conf | 2 - .../sql/test_session_hooks.sql | 12 -- .../test_session_hooks--1.0.sql | 4 - .../test_session_hooks/test_session_hooks.c | 134 ------------------ .../test_session_hooks.control | 3 - src/tools/msvc/vcregress.pl | 2 - 14 files changed, 239 deletions(-) delete mode 100644 src/test/modules/test_session_hooks/.gitignore delete mode 100644 src/test/modules/test_session_hooks/Makefile delete mode 100644 src/test/modules/test_session_hooks/README delete mode 100644 src/test/modules/test_session_hooks/expected/test_session_hooks.out delete mode 100644 src/test/modules/test_session_hooks/session_hooks.conf delete mode 100644 src/test/modules/test_session_hooks/sql/test_session_hooks.sql delete mode 100644 src/test/modules/test_session_hooks/test_session_hooks--1.0.sql delete mode 100644 src/test/modules/test_session_hooks/test_session_hooks.c delete mode 100644 src/test/modules/test_session_hooks/test_session_hooks.control diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index d3156ad49e..05c5c194ec 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -169,9 +169,6 @@ static ProcSignalReason RecoveryConflictReason; static MemoryContext row_description_context = NULL; static StringInfoData row_description_buf; -/* Hook for plugins to get control at start of session */ -session_start_hook_type session_start_hook = NULL; - /* ---------------------------------------------------------------- * decls for routines only used in this file * ---------------------------------------------------------------- @@ -3860,9 +3857,6 @@ PostgresMain(int argc, char *argv[], if (!IsUnderPostmaster) PgStartTime = GetCurrentTimestamp(); - if (session_start_hook) - (*session_start_hook) (); - /* * POSTGRES main processing loop begins here * diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c index 16ec376b22..20f1d279e9 100644 --- a/src/backend/utils/init/postinit.c +++ b/src/backend/utils/init/postinit.c @@ -76,8 +76,6 @@ static bool ThereIsAtLeastOneRole(void); static void process_startup_options(Port *port, bool am_superuser); static void process_settings(Oid databaseid, Oid roleid); -/* Hook for plugins to get control at end of session */ -session_end_hook_type session_end_hook = NULL; /*** InitPostgres support ***/ @@ -1156,10 +1154,6 @@ ShutdownPostgres(int code, Datum arg) * them explicitly. 
*/ LockReleaseAll(USER_LOCKMETHOD, true); - - /* Hook at session end */ - if (session_end_hook) - (*session_end_hook) (); } diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h index 9f05bfb4ab..f8c535c91e 100644 --- a/src/include/tcop/tcopprot.h +++ b/src/include/tcop/tcopprot.h @@ -35,13 +35,6 @@ extern PGDLLIMPORT const char *debug_query_string; extern int max_stack_depth; extern int PostAuthDelay; -/* Hook for plugins to get control at start and end of session */ -typedef void (*session_start_hook_type) (void); -typedef void (*session_end_hook_type) (void); - -extern PGDLLIMPORT session_start_hook_type session_start_hook; -extern PGDLLIMPORT session_end_hook_type session_end_hook; - /* GUC-configurable parameters */ typedef enum diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile index 7246552d38..b7ed0af021 100644 --- a/src/test/modules/Makefile +++ b/src/test/modules/Makefile @@ -15,7 +15,6 @@ SUBDIRS = \ test_pg_dump \ test_rbtree \ test_rls_hooks \ - test_session_hooks \ test_shm_mq \ worker_spi diff --git a/src/test/modules/test_session_hooks/.gitignore b/src/test/modules/test_session_hooks/.gitignore deleted file mode 100644 index 5dcb3ff972..0000000000 --- a/src/test/modules/test_session_hooks/.gitignore +++ /dev/null @@ -1,4 +0,0 @@ -# Generated subdirectories -/log/ -/results/ -/tmp_check/ diff --git a/src/test/modules/test_session_hooks/Makefile b/src/test/modules/test_session_hooks/Makefile deleted file mode 100644 index 636ae61c0e..0000000000 --- a/src/test/modules/test_session_hooks/Makefile +++ /dev/null @@ -1,25 +0,0 @@ -# src/test/modules/test_session_hooks/Makefile - -MODULES = test_session_hooks -PGFILEDESC = "test_session_hooks - Test session hooks with an extension" - -EXTENSION = test_session_hooks -DATA = test_session_hooks--1.0.sql - -REGRESS = test_session_hooks -REGRESS_OPTS = --temp-config=$(top_srcdir)/src/test/modules/test_session_hooks/session_hooks.conf - -ifdef USE_PGXS -PG_CONFIG = pg_config -PGXS := $(shell $(PG_CONFIG) --pgxs) -include $(PGXS) -else -subdir = src/test/modules/test_session_hooks -top_builddir = ../../../.. -include $(top_builddir)/src/Makefile.global -include $(top_srcdir)/contrib/contrib-global.mk -endif - -# override installcheck - this module requires preloading the test module -installcheck: - @echo Cannot run $@ for test_session_hooks. Run "'make check'" instead. diff --git a/src/test/modules/test_session_hooks/README b/src/test/modules/test_session_hooks/README deleted file mode 100644 index 9cb42020c6..0000000000 --- a/src/test/modules/test_session_hooks/README +++ /dev/null @@ -1,2 +0,0 @@ -test_session_hooks is an example of how to use session start and end -hooks. 
diff --git a/src/test/modules/test_session_hooks/expected/test_session_hooks.out b/src/test/modules/test_session_hooks/expected/test_session_hooks.out deleted file mode 100644 index be1b94953c..0000000000 --- a/src/test/modules/test_session_hooks/expected/test_session_hooks.out +++ /dev/null @@ -1,31 +0,0 @@ -CREATE ROLE regress_sess_hook_usr1 SUPERUSER LOGIN; -CREATE ROLE regress_sess_hook_usr2 SUPERUSER LOGIN; -\set prevdb :DBNAME -\set prevusr :USER -CREATE TABLE session_hook_log(id SERIAL, dbname TEXT, username TEXT, hook_at TEXT); -SELECT * FROM session_hook_log ORDER BY id; - id | dbname | username | hook_at -----+--------+----------+--------- -(0 rows) - -\c :prevdb regress_sess_hook_usr1 -SELECT * FROM session_hook_log ORDER BY id; - id | dbname | username | hook_at -----+--------+----------+--------- -(0 rows) - -\c :prevdb regress_sess_hook_usr2 -SELECT * FROM session_hook_log ORDER BY id; - id | dbname | username | hook_at -----+--------------------+------------------------+--------- - 1 | contrib_regression | regress_sess_hook_usr2 | START -(1 row) - -\c :prevdb :prevusr -SELECT * FROM session_hook_log ORDER BY id; - id | dbname | username | hook_at -----+--------------------+------------------------+--------- - 1 | contrib_regression | regress_sess_hook_usr2 | START - 2 | contrib_regression | regress_sess_hook_usr2 | END -(2 rows) - diff --git a/src/test/modules/test_session_hooks/session_hooks.conf b/src/test/modules/test_session_hooks/session_hooks.conf deleted file mode 100644 index fc62b4adef..0000000000 --- a/src/test/modules/test_session_hooks/session_hooks.conf +++ /dev/null @@ -1,2 +0,0 @@ -shared_preload_libraries = 'test_session_hooks' -test_session_hooks.username = regress_sess_hook_usr2 diff --git a/src/test/modules/test_session_hooks/sql/test_session_hooks.sql b/src/test/modules/test_session_hooks/sql/test_session_hooks.sql deleted file mode 100644 index 5e0864753d..0000000000 --- a/src/test/modules/test_session_hooks/sql/test_session_hooks.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE ROLE regress_sess_hook_usr1 SUPERUSER LOGIN; -CREATE ROLE regress_sess_hook_usr2 SUPERUSER LOGIN; -\set prevdb :DBNAME -\set prevusr :USER -CREATE TABLE session_hook_log(id SERIAL, dbname TEXT, username TEXT, hook_at TEXT); -SELECT * FROM session_hook_log ORDER BY id; -\c :prevdb regress_sess_hook_usr1 -SELECT * FROM session_hook_log ORDER BY id; -\c :prevdb regress_sess_hook_usr2 -SELECT * FROM session_hook_log ORDER BY id; -\c :prevdb :prevusr -SELECT * FROM session_hook_log ORDER BY id; diff --git a/src/test/modules/test_session_hooks/test_session_hooks--1.0.sql b/src/test/modules/test_session_hooks/test_session_hooks--1.0.sql deleted file mode 100644 index 16bcee9882..0000000000 --- a/src/test/modules/test_session_hooks/test_session_hooks--1.0.sql +++ /dev/null @@ -1,4 +0,0 @@ -/* src/test/modules/test_hook_session/test_hook_session--1.0.sql */ - --- complain if script is sourced in psql, rather than via CREATE EXTENSION -\echo Use "CREATE EXTENSION test_hook_session" to load this file. \quit diff --git a/src/test/modules/test_session_hooks/test_session_hooks.c b/src/test/modules/test_session_hooks/test_session_hooks.c deleted file mode 100644 index 4e2eef183e..0000000000 --- a/src/test/modules/test_session_hooks/test_session_hooks.c +++ /dev/null @@ -1,134 +0,0 @@ -/* ------------------------------------------------------------------------- - * - * test_session_hooks.c - * Code for testing SESSION hooks. 
- * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group - * - * IDENTIFICATION - * src/test/modules/test_session_hooks/test_session_hooks.c - * - * ------------------------------------------------------------------------- - */ -#include "postgres.h" - -#include "access/xact.h" -#include "commands/dbcommands.h" -#include "executor/spi.h" -#include "lib/stringinfo.h" -#include "miscadmin.h" -#include "tcop/tcopprot.h" -#include "utils/snapmgr.h" -#include "utils/builtins.h" - -PG_MODULE_MAGIC; - -/* Entry point of library loading/unloading */ -void _PG_init(void); -void _PG_fini(void); - -/* GUC variables */ -static char *session_hook_username = "postgres"; - -/* Original Hook */ -static session_start_hook_type prev_session_start_hook = NULL; -static session_end_hook_type prev_session_end_hook = NULL; - -static void -register_session_hook(const char *hook_at) -{ - const char *username; - - StartTransactionCommand(); - SPI_connect(); - PushActiveSnapshot(GetTransactionSnapshot()); - - username = GetUserNameFromId(GetUserId(), false); - - /* Register log just for configured username */ - if (!strcmp(username, session_hook_username)) - { - const char *dbname; - int ret; - StringInfoData buf; - - dbname = get_database_name(MyDatabaseId); - - initStringInfo(&buf); - - appendStringInfo(&buf, "INSERT INTO session_hook_log (dbname, username, hook_at) "); - appendStringInfo(&buf, "VALUES ('%s', '%s', '%s');", - dbname, username, hook_at); - - ret = SPI_exec(buf.data, 0); - if (ret != SPI_OK_INSERT) - elog(ERROR, "SPI_execute failed: error code %d", ret); - } - - SPI_finish(); - PopActiveSnapshot(); - CommitTransactionCommand(); -} - -/* sample session start hook function */ -static void -sample_session_start_hook() -{ - /* Hook just normal backends */ - if (MyBackendId != InvalidBackendId) - { - (void) register_session_hook("START"); - - if (prev_session_start_hook) - prev_session_start_hook(); - } -} - -/* sample session end hook function */ -static void -sample_session_end_hook() -{ - /* Hook just normal backends */ - if (MyBackendId != InvalidBackendId) - { - if (prev_session_end_hook) - prev_session_end_hook(); - - (void) register_session_hook("END"); - } -} - -/* - * Module Load Callback - */ -void -_PG_init(void) -{ - /* Save Hooks for Unload */ - prev_session_start_hook = session_start_hook; - prev_session_end_hook = session_end_hook; - - /* Set New Hooks */ - session_start_hook = sample_session_start_hook; - session_end_hook = sample_session_end_hook; - - /* Load GUCs */ - DefineCustomStringVariable("test_session_hooks.username", - "Username to register log on session start or end", - NULL, - &session_hook_username, - "postgres", - PGC_SIGHUP, - 0, NULL, NULL, NULL); -} - -/* - * Module Unload Callback - */ -void -_PG_fini(void) -{ - /* Uninstall Hooks */ - session_start_hook = prev_session_start_hook; - session_end_hook = prev_session_end_hook; -} diff --git a/src/test/modules/test_session_hooks/test_session_hooks.control b/src/test/modules/test_session_hooks/test_session_hooks.control deleted file mode 100644 index 7d7ef9f3f4..0000000000 --- a/src/test/modules/test_session_hooks/test_session_hooks.control +++ /dev/null @@ -1,3 +0,0 @@ -comment = 'Test start/end hook session with an extension' -default_version = '1.0' -relocatable = true diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl index 41f7832e5a..719fe83047 100644 --- a/src/tools/msvc/vcregress.pl +++ b/src/tools/msvc/vcregress.pl @@ -383,8 +383,6 @@ sub modulescheck my $mstat = 0; foreach my $module 
	  (glob("*"))
	{
-		# test_session_hooks can't run installcheck, so skip it here
-		next if $module eq 'test_session_hooks';
 		subdircheck("$topdir/src/test/modules", $module);
 		my $status = $? >> 8;
 		$mstat ||= $status;

From ff2d4356f8b18f5489e5d7b1f8b4b5357d088c8c Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Thu, 16 Nov 2017 12:03:04 -0500
Subject: [PATCH 0552/1087] Define _WINSOCK_DEPRECATED_NO_WARNINGS in all MSVC
 builds.

Commit 0fb54de9a thought that this was only needed in VS2015 and later,
but buildfarm member woodlouse shows that at least VS2013 whines as well.
Let's just define it regardless of MSVC version; it should be harmless
enough in older releases.

Also, in the wake of ed9b3606d, it seems better to put it in
win32_port.h where <winsock2.h> is included.

Since this is only suppressing a pedantic compiler warning, I don't
feel a need for a back-patch.

Discussion: https://postgr.es/m/20124.1510850225@sss.pgh.pa.us
---
 src/include/port/win32.h      | 3 ---
 src/include/port/win32_port.h | 8 ++++++++
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/src/include/port/win32.h b/src/include/port/win32.h
index 123b2100f8..611e04fac6 100644
--- a/src/include/port/win32.h
+++ b/src/include/port/win32.h
@@ -15,11 +15,8 @@
  * Studio 2015 the minimum requirement is Windows Vista (0x0600) to
  * get support for GetLocaleInfoEx() with locales. For everything else
  * the minimum version is Windows XP (0x0501).
- * Also for VS2015, add a define that stops compiler complaints about
- * using the old Winsock API.
  */
 #if defined(_MSC_VER) && _MSC_VER >= 1900
-#define _WINSOCK_DEPRECATED_NO_WARNINGS
 #define MIN_WINNT 0x0600
 #else
 #define MIN_WINNT 0x0501
diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index db7dc16932..46d7b0035f 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -27,6 +27,14 @@
 
 #undef ERROR
 
+/*
+ * VS2013 and later issue warnings about using the old Winsock API,
+ * which we don't really want to hear about.
+ */
+#ifdef _MSC_VER
+#define _WINSOCK_DEPRECATED_NO_WARNINGS
+#endif
+
 /*
  * The MinGW64 headers choke if this is already defined - they
  * define it themselves.

From e89a71fb449af2ef74f47be1175f99956cf21524 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 16 Nov 2017 12:06:14 -0500
Subject: [PATCH 0553/1087] Pass InitPlan values to workers via Gather (Merge).

If a PARAM_EXEC parameter is used below a Gather (Merge) but the InitPlan
that computes it is attached to or above the Gather (Merge), force the
value to be computed before starting parallelism and pass it down to all
workers.  This allows us to use parallelism in cases where it previously
would have had to be rejected as unsafe.  We do - in this case - lose the
optimization that the value is only computed if it's actually used.  An
alternative strategy would be to have the first worker that needs the value
compute it, but one downside of that approach is that we'd then need to
select a parallel-safe path to compute the parameter value; it couldn't for
example contain a Gather (Merge) node.  At some point in the future, we
might want to consider both approaches.

Independent of that consideration, there is a great deal more work that
could be done to make more kinds of PARAM_EXEC parameters parallel-safe.
This infrastructure could be used to allow a Gather (Merge) on the inner
side of a nested loop (although that's not a very appealing plan) and
cases where the InitPlan is attached below the Gather (Merge) could be
addressed as well using various techniques.
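To make the hand-off concrete: the leader lays the computed values out in
a flat buffer as a count followed by (paramid, datum) pairs, and each
worker reads them back.  The standalone program below is an illustrative
sketch only, not code from this patch; the real implementation in
execParallel.c writes the datum with datumSerialize() into DSA memory,
whereas the sketch uses a plain int64 payload:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* One PARAM_EXEC value, reduced to an id plus a by-value payload. */
    typedef struct { int paramid; int64_t value; } ParamStub;

    /* Write nparams, then (paramid, value) pairs; returns bytes written. */
    static size_t
    serialize_stubs(char *buf, const ParamStub *prm, int nparams)
    {
        char *p = buf;

        memcpy(p, &nparams, sizeof(int)); p += sizeof(int);
        for (int i = 0; i < nparams; i++)
        {
            memcpy(p, &prm[i].paramid, sizeof(int)); p += sizeof(int);
            memcpy(p, &prm[i].value, sizeof(int64_t)); p += sizeof(int64_t);
        }
        return (size_t) (p - buf);
    }

    int
    main(void)
    {
        ParamStub in[] = {{2, 9999}};   /* e.g. $2 computed by an initplan */
        char buf[64];
        size_t len = serialize_stubs(buf, in, 1);

        /* "Worker" side: read the buffer back in the same order. */
        char *p = buf;
        int n, id;
        int64_t v;

        memcpy(&n, p, sizeof(int)); p += sizeof(int);
        memcpy(&id, p, sizeof(int)); p += sizeof(int);
        memcpy(&v, p, sizeof(int64_t));
        printf("restored %d param: $%d = %lld (%zu bytes)\n",
               n, id, (long long) v, len);
        return 0;
    }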
But this is a good start.

Amit Kapila, reviewed and revised by me.  Review and testing by Kuntal
Ghosh, Haribabu Kommi, and Tushar Ahuja.

Discussion: http://postgr.es/m/CAA4eK1LV0Y1AUV4cUCdC+sYOx0Z0-8NAJ2Pd9=UKsbQ5Sr7+JQ@mail.gmail.com
---
 src/backend/commands/explain.c                |  34 +++
 src/backend/executor/execExprInterp.c         |  27 +++
 src/backend/executor/execParallel.c           | 219 ++++++++++++++++--
 src/backend/executor/nodeGather.c             |   4 +-
 src/backend/executor/nodeGatherMerge.c        |   4 +-
 src/backend/nodes/copyfuncs.c                 |   2 +
 src/backend/nodes/outfuncs.c                  |   3 +
 src/backend/nodes/readfuncs.c                 |   2 +
 src/backend/optimizer/plan/createplan.c       |   1 +
 src/backend/optimizer/plan/planner.c          |   8 +
 src/backend/optimizer/plan/setrefs.c          |  51 +++-
 src/backend/optimizer/util/clauses.c          |  24 +-
 src/include/executor/execExpr.h               |   1 +
 src/include/executor/execParallel.h           |   6 +-
 src/include/nodes/plannodes.h                 |   4 +
 src/test/regress/expected/select_parallel.out |  35 +++
 src/test/regress/sql/select_parallel.sql      |  17 ++
 17 files changed, 419 insertions(+), 23 deletions(-)

diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 8f7062cd6e..447f69d044 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -107,6 +107,7 @@ static void show_tidbitmap_info(BitmapHeapScanState *planstate,
 static void show_instrumentation_count(const char *qlabel, int which,
 			PlanState *planstate, ExplainState *es);
 static void show_foreignscan_info(ForeignScanState *fsstate, ExplainState *es);
+static void show_eval_params(Bitmapset *bms_params, ExplainState *es);
 static const char *explain_get_index_name(Oid indexId);
 static void show_buffer_usage(ExplainState *es, const BufferUsage *usage);
 static void ExplainIndexScanDetails(Oid indexid, ScanDirection indexorderdir,
@@ -1441,6 +1442,11 @@ ExplainNode(PlanState *planstate, List *ancestors,
 										   planstate, es);
 			ExplainPropertyInteger("Workers Planned",
 								   gather->num_workers, es);
+
+			/* Show params evaluated at gather node */
+			if (gather->initParam)
+				show_eval_params(gather->initParam, es);
+
 			if (es->analyze)
 			{
 				int			nworkers;
@@ -1463,6 +1469,11 @@ ExplainNode(PlanState *planstate, List *ancestors,
 										   planstate, es);
 			ExplainPropertyInteger("Workers Planned",
 								   gm->num_workers, es);
+
+			/* Show params evaluated at gather-merge node */
+			if (gm->initParam)
+				show_eval_params(gm->initParam, es);
+
 			if (es->analyze)
 			{
 				int			nworkers;
@@ -2487,6 +2498,29 @@ show_foreignscan_info(ForeignScanState *fsstate, ExplainState *es)
 	}
 }
 
+/*
+ * Show initplan params evaluated at Gather or Gather Merge node.
+ */
+static void
+show_eval_params(Bitmapset *bms_params, ExplainState *es)
+{
+	int			paramid = -1;
+	List	   *params = NIL;
+
+	Assert(bms_params);
+
+	while ((paramid = bms_next_member(bms_params, paramid)) >= 0)
+	{
+		char		param[32];
+
+		snprintf(param, sizeof(param), "$%d", paramid);
+		params = lappend(params, pstrdup(param));
+	}
+
+	if (params)
+		ExplainPropertyList("Params Evaluated", params, es);
+}
+
 /*
  * Fetch the name of an index in an EXPLAIN
  *
diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c
index a0f537b706..6c4612dad4 100644
--- a/src/backend/executor/execExprInterp.c
+++ b/src/backend/executor/execExprInterp.c
@@ -1926,6 +1926,33 @@ ExecEvalParamExec(ExprState *state, ExprEvalStep *op, ExprContext *econtext)
 	*op->resnull = prm->isnull;
 }
 
+/*
+ * ExecEvalParamExecParams
+ *
+ * Execute the subplans stored in PARAM_EXEC initplan params, if they have
+ * not already been executed.
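+ *
+ * This is run in the leader, for the params listed in the Gather or
+ * Gather Merge node's initParam set, just before their values are
+ * serialized for the workers.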
+ */ +void +ExecEvalParamExecParams(Bitmapset *params, EState *estate) +{ + ParamExecData *prm; + int paramid; + + paramid = -1; + while ((paramid = bms_next_member(params, paramid)) >= 0) + { + prm = &(estate->es_param_exec_vals[paramid]); + + if (prm->execPlan != NULL) + { + /* Parameter not evaluated yet, so go do it */ + ExecSetParamPlan(prm->execPlan, GetPerTupleExprContext(estate)); + /* ExecSetParamPlan should have processed this param... */ + Assert(prm->execPlan == NULL); + } + } +} + /* * Evaluate a PARAM_EXTERN parameter. * diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index fd7e7cbf3d..c435550637 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -23,6 +23,7 @@ #include "postgres.h" +#include "executor/execExpr.h" #include "executor/execParallel.h" #include "executor/executor.h" #include "executor/nodeBitmapHeapscan.h" @@ -38,7 +39,9 @@ #include "optimizer/planner.h" #include "storage/spin.h" #include "tcop/tcopprot.h" +#include "utils/datum.h" #include "utils/dsa.h" +#include "utils/lsyscache.h" #include "utils/memutils.h" #include "utils/snapmgr.h" #include "pgstat.h" @@ -50,7 +53,7 @@ */ #define PARALLEL_KEY_EXECUTOR_FIXED UINT64CONST(0xE000000000000001) #define PARALLEL_KEY_PLANNEDSTMT UINT64CONST(0xE000000000000002) -#define PARALLEL_KEY_PARAMS UINT64CONST(0xE000000000000003) +#define PARALLEL_KEY_PARAMLISTINFO UINT64CONST(0xE000000000000003) #define PARALLEL_KEY_BUFFER_USAGE UINT64CONST(0xE000000000000004) #define PARALLEL_KEY_TUPLE_QUEUE UINT64CONST(0xE000000000000005) #define PARALLEL_KEY_INSTRUMENTATION UINT64CONST(0xE000000000000006) @@ -65,6 +68,7 @@ typedef struct FixedParallelExecutorState { int64 tuples_needed; /* tuple bound, see ExecSetTupleBound */ + dsa_pointer param_exec; } FixedParallelExecutorState; /* @@ -266,6 +270,133 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e) return planstate_tree_walker(planstate, ExecParallelEstimate, e); } +/* + * Estimate the amount of space required to serialize the indicated parameters. + */ +static Size +EstimateParamExecSpace(EState *estate, Bitmapset *params) +{ + int paramid; + Size sz = sizeof(int); + + paramid = -1; + while ((paramid = bms_next_member(params, paramid)) >= 0) + { + Oid typeOid; + int16 typLen; + bool typByVal; + ParamExecData *prm; + + prm = &(estate->es_param_exec_vals[paramid]); + typeOid = list_nth_oid(estate->es_plannedstmt->paramExecTypes, + paramid); + + sz = add_size(sz, sizeof(int)); /* space for paramid */ + + /* space for datum/isnull */ + if (OidIsValid(typeOid)) + get_typlenbyval(typeOid, &typLen, &typByVal); + else + { + /* If no type OID, assume by-value, like copyParamList does. */ + typLen = sizeof(Datum); + typByVal = true; + } + sz = add_size(sz, + datumEstimateSpace(prm->value, prm->isnull, + typByVal, typLen)); + } + return sz; +} + +/* + * Serialize specified PARAM_EXEC parameters. + * + * We write the number of parameters first, as a 4-byte integer, and then + * write details for each parameter in turn. The details for each parameter + * consist of a 4-byte paramid (location of param in execution time internal + * parameter array) and then the datum as serialized by datumSerialize(). + */ +static dsa_pointer +SerializeParamExecParams(EState *estate, Bitmapset *params) +{ + Size size; + int nparams; + int paramid; + ParamExecData *prm; + dsa_pointer handle; + char *start_address; + + /* Allocate enough space for the current parameter values. 
*/ + size = EstimateParamExecSpace(estate, params); + handle = dsa_allocate(estate->es_query_dsa, size); + start_address = dsa_get_address(estate->es_query_dsa, handle); + + /* First write the number of parameters as a 4-byte integer. */ + nparams = bms_num_members(params); + memcpy(start_address, &nparams, sizeof(int)); + start_address += sizeof(int); + + /* Write details for each parameter in turn. */ + paramid = -1; + while ((paramid = bms_next_member(params, paramid)) >= 0) + { + Oid typeOid; + int16 typLen; + bool typByVal; + + prm = &(estate->es_param_exec_vals[paramid]); + typeOid = list_nth_oid(estate->es_plannedstmt->paramExecTypes, + paramid); + + /* Write paramid. */ + memcpy(start_address, ¶mid, sizeof(int)); + start_address += sizeof(int); + + /* Write datum/isnull */ + if (OidIsValid(typeOid)) + get_typlenbyval(typeOid, &typLen, &typByVal); + else + { + /* If no type OID, assume by-value, like copyParamList does. */ + typLen = sizeof(Datum); + typByVal = true; + } + datumSerialize(prm->value, prm->isnull, typByVal, typLen, + &start_address); + } + + return handle; +} + +/* + * Restore specified PARAM_EXEC parameters. + */ +static void +RestoreParamExecParams(char *start_address, EState *estate) +{ + int nparams; + int i; + int paramid; + + memcpy(&nparams, start_address, sizeof(int)); + start_address += sizeof(int); + + for (i = 0; i < nparams; i++) + { + ParamExecData *prm; + + /* Read paramid */ + memcpy(¶mid, start_address, sizeof(int)); + start_address += sizeof(int); + prm = &(estate->es_param_exec_vals[paramid]); + + /* Read datum/isnull. */ + prm->value = datumRestore(&start_address, &prm->isnull); + prm->execPlan = NULL; + } +} + /* * Initialize the dynamic shared memory segment that will be used to control * parallel execution. @@ -395,7 +526,8 @@ ExecParallelSetupTupleQueues(ParallelContext *pcxt, bool reinitialize) * execution and return results to the main backend. */ ParallelExecutorInfo * -ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, +ExecInitParallelPlan(PlanState *planstate, EState *estate, + Bitmapset *sendParams, int nworkers, int64 tuples_needed) { ParallelExecutorInfo *pei; @@ -405,17 +537,20 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, FixedParallelExecutorState *fpes; char *pstmt_data; char *pstmt_space; - char *param_space; + char *paramlistinfo_space; BufferUsage *bufusage_space; SharedExecutorInstrumentation *instrumentation = NULL; int pstmt_len; - int param_len; + int paramlistinfo_len; int instrumentation_len = 0; int instrument_offset = 0; Size dsa_minsize = dsa_minimum_size(); char *query_string; int query_len; + /* Force parameters we're going to pass to workers to be evaluated. */ + ExecEvalParamExecParams(sendParams, estate); + /* Allocate object for return value. */ pei = palloc0(sizeof(ParallelExecutorInfo)); pei->finished = false; @@ -450,8 +585,8 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, shm_toc_estimate_keys(&pcxt->estimator, 1); /* Estimate space for serialized ParamListInfo. */ - param_len = EstimateParamListSpace(estate->es_param_list_info); - shm_toc_estimate_chunk(&pcxt->estimator, param_len); + paramlistinfo_len = EstimateParamListSpace(estate->es_param_list_info); + shm_toc_estimate_chunk(&pcxt->estimator, paramlistinfo_len); shm_toc_estimate_keys(&pcxt->estimator, 1); /* @@ -511,6 +646,7 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, /* Store fixed-size state. 
*/ fpes = shm_toc_allocate(pcxt->toc, sizeof(FixedParallelExecutorState)); fpes->tuples_needed = tuples_needed; + fpes->param_exec = InvalidDsaPointer; shm_toc_insert(pcxt->toc, PARALLEL_KEY_EXECUTOR_FIXED, fpes); /* Store query string */ @@ -524,9 +660,9 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, shm_toc_insert(pcxt->toc, PARALLEL_KEY_PLANNEDSTMT, pstmt_space); /* Store serialized ParamListInfo. */ - param_space = shm_toc_allocate(pcxt->toc, param_len); - shm_toc_insert(pcxt->toc, PARALLEL_KEY_PARAMS, param_space); - SerializeParamList(estate->es_param_list_info, ¶m_space); + paramlistinfo_space = shm_toc_allocate(pcxt->toc, paramlistinfo_len); + shm_toc_insert(pcxt->toc, PARALLEL_KEY_PARAMLISTINFO, paramlistinfo_space); + SerializeParamList(estate->es_param_list_info, ¶mlistinfo_space); /* Allocate space for each worker's BufferUsage; no need to initialize. */ bufusage_space = shm_toc_allocate(pcxt->toc, @@ -577,13 +713,25 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers, pei->area = dsa_create_in_place(area_space, dsa_minsize, LWTRANCHE_PARALLEL_QUERY_DSA, pcxt->seg); - } - /* - * Make the area available to executor nodes running in the leader. See - * also ParallelQueryMain which makes it available to workers. - */ - estate->es_query_dsa = pei->area; + /* + * Make the area available to executor nodes running in the leader. + * See also ParallelQueryMain which makes it available to workers. + */ + estate->es_query_dsa = pei->area; + + /* + * Serialize parameters, if any, using DSA storage. We don't dare use + * the main parallel query DSM for this because we might relaunch + * workers after the values have changed (and thus the amount of + * storage required has changed). + */ + if (!bms_is_empty(sendParams)) + { + pei->param_exec = SerializeParamExecParams(estate, sendParams); + fpes->param_exec = pei->param_exec; + } + } /* * Give parallel-aware nodes a chance to initialize their shared data. @@ -640,16 +788,39 @@ ExecParallelCreateReaders(ParallelExecutorInfo *pei) */ void ExecParallelReinitialize(PlanState *planstate, - ParallelExecutorInfo *pei) + ParallelExecutorInfo *pei, + Bitmapset *sendParams) { + EState *estate = planstate->state; + FixedParallelExecutorState *fpes; + /* Old workers must already be shut down */ Assert(pei->finished); + /* Force parameters we're going to pass to workers to be evaluated. */ + ExecEvalParamExecParams(sendParams, estate); + ReinitializeParallelDSM(pei->pcxt); pei->tqueue = ExecParallelSetupTupleQueues(pei->pcxt, true); pei->reader = NULL; pei->finished = false; + fpes = shm_toc_lookup(pei->pcxt->toc, PARALLEL_KEY_EXECUTOR_FIXED, false); + + /* Free any serialized parameters from the last round. */ + if (DsaPointerIsValid(fpes->param_exec)) + { + dsa_free(estate->es_query_dsa, fpes->param_exec); + fpes->param_exec = InvalidDsaPointer; + } + + /* Serialize current parameter values if required. */ + if (!bms_is_empty(sendParams)) + { + pei->param_exec = SerializeParamExecParams(estate, sendParams); + fpes->param_exec = pei->param_exec; + } + /* Traverse plan tree and let each child node reset associated state. */ ExecParallelReInitializeDSM(planstate, pei->pcxt); } @@ -831,6 +1002,12 @@ ExecParallelFinish(ParallelExecutorInfo *pei) void ExecParallelCleanup(ParallelExecutorInfo *pei) { + /* Free any serialized parameters. 
*/ + if (DsaPointerIsValid(pei->param_exec)) + { + dsa_free(pei->area, pei->param_exec); + pei->param_exec = InvalidDsaPointer; + } if (pei->area != NULL) { dsa_detach(pei->area); @@ -882,7 +1059,7 @@ ExecParallelGetQueryDesc(shm_toc *toc, DestReceiver *receiver, pstmt = (PlannedStmt *) stringToNode(pstmtspace); /* Reconstruct ParamListInfo. */ - paramspace = shm_toc_lookup(toc, PARALLEL_KEY_PARAMS, false); + paramspace = shm_toc_lookup(toc, PARALLEL_KEY_PARAMLISTINFO, false); paramLI = RestoreParamList(¶mspace); /* @@ -1046,6 +1223,14 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc) /* Special executor initialization steps for parallel workers */ queryDesc->planstate->state->es_query_dsa = area; + if (DsaPointerIsValid(fpes->param_exec)) + { + char *paramexec_space; + + paramexec_space = dsa_get_address(area, fpes->param_exec); + RestoreParamExecParams(paramexec_space, queryDesc->estate); + + } ExecParallelInitializeWorker(queryDesc->planstate, toc); /* Pass down any tuple bound */ diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 0298c65d06..07c62d2fea 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -160,11 +160,13 @@ ExecGather(PlanState *pstate) if (!node->pei) node->pei = ExecInitParallelPlan(node->ps.lefttree, estate, + gather->initParam, gather->num_workers, node->tuples_needed); else ExecParallelReinitialize(node->ps.lefttree, - node->pei); + node->pei, + gather->initParam); /* * Register backend workers. We might not get as many as we diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 7206ab9197..7dd655c448 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -203,11 +203,13 @@ ExecGatherMerge(PlanState *pstate) if (!node->pei) node->pei = ExecInitParallelPlan(node->ps.lefttree, estate, + gm->initParam, gm->num_workers, node->tuples_needed); else ExecParallelReinitialize(node->ps.lefttree, - node->pei); + node->pei, + gm->initParam); /* Try to launch workers. 
*/ pcxt = node->pei->pcxt; diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 76e75459b4..d9ff8a7e51 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -364,6 +364,7 @@ _copyGather(const Gather *from) COPY_SCALAR_FIELD(rescan_param); COPY_SCALAR_FIELD(single_copy); COPY_SCALAR_FIELD(invisible); + COPY_BITMAPSET_FIELD(initParam); return newnode; } @@ -391,6 +392,7 @@ _copyGatherMerge(const GatherMerge *from) COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid)); COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid)); COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool)); + COPY_BITMAPSET_FIELD(initParam); return newnode; } diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index dc35df9e4f..c97ee24ade 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -487,6 +487,7 @@ _outGather(StringInfo str, const Gather *node) WRITE_INT_FIELD(rescan_param); WRITE_BOOL_FIELD(single_copy); WRITE_BOOL_FIELD(invisible); + WRITE_BITMAPSET_FIELD(initParam); } static void @@ -517,6 +518,8 @@ _outGatherMerge(StringInfo str, const GatherMerge *node) appendStringInfoString(str, " :nullsFirst"); for (i = 0; i < node->numCols; i++) appendStringInfo(str, " %s", booltostr(node->nullsFirst[i])); + + WRITE_BITMAPSET_FIELD(initParam); } static void diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 593658dd8a..7eb67fc040 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -2172,6 +2172,7 @@ _readGather(void) READ_INT_FIELD(rescan_param); READ_BOOL_FIELD(single_copy); READ_BOOL_FIELD(invisible); + READ_BITMAPSET_FIELD(initParam); READ_DONE(); } @@ -2193,6 +2194,7 @@ _readGatherMerge(void) READ_OID_ARRAY(sortOperators, local_node->numCols); READ_OID_ARRAY(collations, local_node->numCols); READ_BOOL_ARRAY(nullsFirst, local_node->numCols); + READ_BITMAPSET_FIELD(initParam); READ_DONE(); } diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index 9c74e39bd3..d4454779ee 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -6279,6 +6279,7 @@ make_gather(List *qptlist, node->rescan_param = rescan_param; node->single_copy = single_copy; node->invisible = false; + node->initParam = NULL; return node; } diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 4c00a1453b..f6b8bbf5fa 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -377,6 +377,14 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams) { Gather *gather = makeNode(Gather); + /* + * If there are any initPlans attached to the formerly-top plan node, + * move them up to the Gather node; same as we do for Material node in + * materialize_finished_plan. 
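+	 * (Having the initplans attached at or above the Gather means they run
+	 * in the leader, so their computed values can then be passed down to
+	 * the workers via the initParam mechanism added in this commit.)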
+	 */
+	gather->plan.initPlan = top_plan->initPlan;
+	top_plan->initPlan = NIL;
+
 	gather->plan.targetlist = top_plan->targetlist;
 	gather->plan.qual = NIL;
 	gather->plan.lefttree = top_plan;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index fa9a3f0b47..28a7f7ec45 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -107,6 +107,7 @@ static Node *fix_scan_expr_mutator(Node *node, fix_scan_expr_context *context);
 static bool fix_scan_expr_walker(Node *node, fix_scan_expr_context *context);
 static void set_join_references(PlannerInfo *root, Join *join, int rtoffset);
 static void set_upper_references(PlannerInfo *root, Plan *plan, int rtoffset);
+static void set_param_references(PlannerInfo *root, Plan *plan);
 static Node *convert_combining_aggrefs(Node *node, void *context);
 static void set_dummy_tlist_references(Plan *plan, int rtoffset);
 static indexed_tlist *build_tlist_index(List *tlist);
@@ -632,7 +633,10 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
 
 		case T_Gather:
 		case T_GatherMerge:
-			set_upper_references(root, plan, rtoffset);
+			{
+				set_upper_references(root, plan, rtoffset);
+				set_param_references(root, plan);
+			}
 			break;
 
 		case T_Hash:
@@ -1781,6 +1785,51 @@ set_upper_references(PlannerInfo *root, Plan *plan, int rtoffset)
 	pfree(subplan_itlist);
 }
 
+/*
+ * set_param_references
+ *	  Initialize the initParam set in a Gather or Gather Merge node so that
+ *	  it contains references to all the params that need to be evaluated
+ *	  before execution of the node, i.e. the initplan params that are
+ *	  passed down to the plan nodes below it.
+ */
+static void
+set_param_references(PlannerInfo *root, Plan *plan)
+{
+	Assert(IsA(plan, Gather) || IsA(plan, GatherMerge));
+
+	if (plan->lefttree->extParam)
+	{
+		PlannerInfo *proot;
+		Bitmapset  *initSetParam = NULL;
+		ListCell   *l;
+
+		for (proot = root; proot != NULL; proot = proot->parent_root)
+		{
+			foreach(l, proot->init_plans)
+			{
+				SubPlan    *initsubplan = (SubPlan *) lfirst(l);
+				ListCell   *l2;
+
+				foreach(l2, initsubplan->setParam)
+				{
+					initSetParam = bms_add_member(initSetParam, lfirst_int(l2));
+				}
+			}
+		}
+
+		/*
+		 * Remember the set of all external initplan params that are used by
+		 * the children of the Gather or Gather Merge node.
+		 */
+		if (IsA(plan, Gather))
+			((Gather *) plan)->initParam =
+				bms_intersect(plan->lefttree->extParam, initSetParam);
+		else
+			((GatherMerge *) plan)->initParam =
+				bms_intersect(plan->lefttree->extParam, initSetParam);
+	}
+}
+
 /*
  * Recursively scan an expression tree and convert Aggrefs to the proper
  * intermediate form for combining aggregates.  This means (1) replacing each
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 66e098f488..d14ef31eae 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -1087,6 +1087,8 @@ bool
 is_parallel_safe(PlannerInfo *root, Node *node)
 {
 	max_parallel_hazard_context context;
+	PlannerInfo *proot;
+	ListCell   *l;
 
 	/*
 	 * Even if the original querytree contained nothing unsafe, we need to
@@ -1101,6 +1103,25 @@ is_parallel_safe(PlannerInfo *root, Node *node)
 	context.max_hazard = PROPARALLEL_SAFE;
 	context.max_interesting = PROPARALLEL_RESTRICTED;
 	context.safe_param_ids = NIL;
+
+	/*
+	 * Params set by initplans at this or a parent query level are considered
+	 * parallel-safe: we compute such params at the Gather or Gather Merge
+	 * node and pass their values down to the workers.
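+	 * For example, a PARAM_EXEC param set by a top-level initplan no longer
+	 * forces the subtree that references it to run without parallelism.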
+	 */
+	for (proot = root; proot != NULL; proot = proot->parent_root)
+	{
+		foreach(l, proot->init_plans)
+		{
+			SubPlan    *initsubplan = (SubPlan *) lfirst(l);
+			ListCell   *l2;
+
+			foreach(l2, initsubplan->setParam)
+				context.safe_param_ids = lcons_int(lfirst_int(l2),
+												   context.safe_param_ids);
+		}
+	}
+
 	return !max_parallel_hazard_walker(node, &context);
 }
 
@@ -1225,7 +1246,8 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
 	 * We can't pass Params to workers at the moment either, so they are also
 	 * parallel-restricted, unless they are PARAM_EXTERN Params or are
 	 * PARAM_EXEC Params listed in safe_param_ids, meaning they could be
-	 * generated within the worker.
+	 * either generated within the worker or computed in the master and
+	 * then passed down to the worker.
 	 */
 	else if (IsA(node, Param))
 	{
diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h
index 78d2247816..5bbb63a9d8 100644
--- a/src/include/executor/execExpr.h
+++ b/src/include/executor/execExpr.h
@@ -609,6 +609,7 @@ extern ExprEvalOp ExecEvalStepOp(ExprState *state, ExprEvalStep *op);
 */
 extern void ExecEvalParamExec(ExprState *state, ExprEvalStep *op,
 				  ExprContext *econtext);
+extern void ExecEvalParamExecParams(Bitmapset *params, EState *estate);
 extern void ExecEvalParamExtern(ExprState *state, ExprEvalStep *op,
 					ExprContext *econtext);
 extern void ExecEvalSQLValueFunction(ExprState *state, ExprEvalStep *op);
diff --git a/src/include/executor/execParallel.h b/src/include/executor/execParallel.h
index e1b3e7af1f..99a13f3b7d 100644
--- a/src/include/executor/execParallel.h
+++ b/src/include/executor/execParallel.h
@@ -28,6 +28,7 @@ typedef struct ParallelExecutorInfo
 	BufferUsage *buffer_usage;	/* points to bufusage area in DSM */
 	SharedExecutorInstrumentation *instrumentation; /* optional */
 	dsa_area   *area;			/* points to DSA area in DSM */
+	dsa_pointer param_exec;		/* serialized PARAM_EXEC parameters */
 	bool		finished;		/* set true by ExecParallelFinish */
 	/* These two arrays have pcxt->nworkers_launched entries: */
 	shm_mq_handle **tqueue;		/* tuple queues for worker output */
@@ -35,12 +36,13 @@ typedef struct ParallelExecutorInfo
 } ParallelExecutorInfo;
 
 extern ParallelExecutorInfo *ExecInitParallelPlan(PlanState *planstate,
-				 EState *estate, int nworkers, int64 tuples_needed);
+				 EState *estate, Bitmapset *sendParam, int nworkers,
+				 int64 tuples_needed);
 extern void ExecParallelCreateReaders(ParallelExecutorInfo *pei);
 extern void ExecParallelFinish(ParallelExecutorInfo *pei);
 extern void ExecParallelCleanup(ParallelExecutorInfo *pei);
 extern void ExecParallelReinitialize(PlanState *planstate,
-						 ParallelExecutorInfo *pei);
+						 ParallelExecutorInfo *pei, Bitmapset *sendParam);
 
 extern void ParallelQueryMain(dsm_segment *seg, shm_toc *toc);
 
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index a127682b0e..9b38d44ba0 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -841,6 +841,8 @@ typedef struct Gather
 	int			rescan_param;	/* ID of Param that signals a rescan, or -1 */
 	bool		single_copy;	/* don't execute plan more than once */
 	bool		invisible;		/* suppress EXPLAIN display (for testing)?
								 */
+	Bitmapset  *initParam;		/* param IDs of initplans that are referred
+								 * to at this Gather node or by its children */
 } Gather;
 
 /* ------------
@@ -858,6 +860,8 @@ typedef struct GatherMerge
 	Oid		   *sortOperators;	/* OIDs of operators to sort them by */
 	Oid		   *collations;		/* OIDs of collations */
 	bool	   *nullsFirst;		/* NULLS FIRST/LAST directions */
+	Bitmapset  *initParam;		/* param IDs of initplans that are referred
+								 * to at this Gather Merge node or by its
+								 * children */
 } GatherMerge;
 
 /* ----------------
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 06aeddd805..d1d5b228ce 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -201,6 +201,41 @@ explain (costs off)
 			-> Seq Scan on tenk2
 (4 rows)
 
+alter table tenk2 reset (parallel_workers);
+-- test parallel plan for a query containing initplan.
+set enable_indexscan = off;
+set enable_indexonlyscan = off;
+set enable_bitmapscan = off;
+alter table tenk2 set (parallel_workers = 2);
+explain (costs off)
+	select count(*) from tenk1
+        where tenk1.unique1 = (Select max(tenk2.unique1) from tenk2);
+                      QUERY PLAN                      
+------------------------------------------------------
+ Aggregate
+   InitPlan 1 (returns $2)
+     ->  Finalize Aggregate
+           ->  Gather
+                 Workers Planned: 2
+                 ->  Partial Aggregate
+                       ->  Parallel Seq Scan on tenk2
+   ->  Gather
+         Workers Planned: 4
+         Params Evaluated: $2
+         ->  Parallel Seq Scan on tenk1
+               Filter: (unique1 = $2)
+(12 rows)
+
+select count(*) from tenk1
+  where tenk1.unique1 = (Select max(tenk2.unique1) from tenk2);
+ count 
+-------
+     1
+(1 row)
+
+reset enable_indexscan;
+reset enable_indexonlyscan;
+reset enable_bitmapscan;
 alter table tenk2 reset (parallel_workers);
 -- test parallel index scans.
 set enable_seqscan to off;
diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql
index b701b35408..bb4e34adbe 100644
--- a/src/test/regress/sql/select_parallel.sql
+++ b/src/test/regress/sql/select_parallel.sql
@@ -74,6 +74,23 @@ explain (costs off)
 	(select ten from tenk2);
 alter table tenk2 reset (parallel_workers);
 
+-- test parallel plan for a query containing initplan.
+set enable_indexscan = off;
+set enable_indexonlyscan = off;
+set enable_bitmapscan = off;
+alter table tenk2 set (parallel_workers = 2);
+
+explain (costs off)
+	select count(*) from tenk1
+	where tenk1.unique1 = (Select max(tenk2.unique1) from tenk2);
+select count(*) from tenk1
+  where tenk1.unique1 = (Select max(tenk2.unique1) from tenk2);
+
+reset enable_indexscan;
+reset enable_indexonlyscan;
+reset enable_bitmapscan;
+alter table tenk2 reset (parallel_workers);
+
 -- test parallel index scans.
 set enable_seqscan to off;
 set enable_bitmapscan to off;

From 79f2d637139f117a7be2e751328b504f1decd9b7 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 16 Nov 2017 12:57:17 -0500
Subject: [PATCH 0554/1087] Update postgresql.conf.sample to match pg_settings
 classifications.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

A handful of settings, most notably shared_preload_libraries, were just
plain in the wrong place compared to their assigned config_group value
in guc.c (and thus pg_settings).  In other cases the names of the
sections in postgresql.conf.sample were mildly different from the
corresponding entries in config_group_names[].  Make it all consistent.

Adrián Escoms, reviewed by me.
Discussion: http://postgr.es/m/CACksPC2veEmFRYqwYepWYO9U7aFhAx6sYq+WqjTyHw7uV=E=pw@mail.gmail.com
---
 src/backend/utils/misc/postgresql.conf.sample | 32 +++++++++++--------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 63b8723569..7f942ccb38 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -137,11 +137,10 @@
 #temp_file_limit = -1			# limits per-process temp file space
 					# in kB, or -1 for no limit
 
-# - Kernel Resource Usage -
+# - Kernel Resources -
 
 #max_files_per_process = 1000		# min 25
 					# (change requires restart)
-#shared_preload_libraries = ''		# (change requires restart)
 
 # - Cost-Based Vacuum Delay -
 
@@ -172,7 +171,7 @@
 #------------------------------------------------------------------------------
-# WRITE AHEAD LOG
+# WRITE-AHEAD LOG
 #------------------------------------------------------------------------------
 
 # - Settings -
@@ -228,7 +227,7 @@
 # REPLICATION
 #------------------------------------------------------------------------------
 
-# - Sending Server(s) -
+# - Sending Servers -
 
 # Set these on the master and on any standby that will send replication data.
 
@@ -337,7 +336,7 @@
 #------------------------------------------------------------------------------
-# ERROR REPORTING AND LOGGING
+# REPORTING AND LOGGING
 #------------------------------------------------------------------------------
 
 # - Where to Log -
@@ -472,8 +471,9 @@
 						# -1 disables, 0 logs all temp files
 #log_timezone = 'GMT'
 
-
-# - Process Title -
+#------------------------------------------------------------------------------
+# PROCESS TITLE
+#------------------------------------------------------------------------------
 
 #cluster_name = ''			# added to process titles if nonempty
 					# (change requires restart)
@@ -481,10 +481,10 @@
 
 #------------------------------------------------------------------------------
-# RUNTIME STATISTICS
+# STATISTICS
 #------------------------------------------------------------------------------
 
-# - Query/Index Statistics Collector -
+# - Query and Index Statistics Collector -
 
 #track_activities = on
 #track_counts = on
@@ -494,7 +494,7 @@
 #stats_temp_directory = 'pg_stat_tmp'
 
-# - Statistics Monitoring -
+# - Monitoring -
 
 #log_parser_stats = off
 #log_planner_stats = off
@@ -503,7 +503,7 @@
 #------------------------------------------------------------------------------
-# AUTOVACUUM PARAMETERS
+# AUTOVACUUM
 #------------------------------------------------------------------------------
 
 #autovacuum = on			# Enable autovacuum subprocess?
'on'
@@ -588,12 +588,16 @@
 # default configuration for text search
 #default_text_search_config = 'pg_catalog.simple'
 
-# - Other Defaults -
+# - Shared Library Preloading -
 
-#dynamic_library_path = '$libdir'
+#shared_preload_libraries = '' # (change requires restart)
 #local_preload_libraries = ''
 #session_preload_libraries = ''
 
+# - Other Defaults -
+
+#dynamic_library_path = '$libdir'
+
 
 #------------------------------------------------------------------------------
 # LOCK MANAGEMENT
@@ -611,7 +615,7 @@
 
 #------------------------------------------------------------------------------
-# VERSION/PLATFORM COMPATIBILITY
+# VERSION AND PLATFORM COMPATIBILITY
 #------------------------------------------------------------------------------
 
 # - Previous PostgreSQL Versions -

From 6b2cd278a9d1e4643c419b598780aa55520f4f1a Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 16 Nov 2017 14:16:45 -0500
Subject: [PATCH 0555/1087] Fix typo in comment.

Etsuro Fujita

Discussion: http://postgr.es/m/5A0D7C3D.80803@lab.ntt.co.jp
---
 src/backend/catalog/partition.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index ce29ba2eda..67d4c2a09b 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -2686,7 +2686,7 @@ qsort_partition_rbound_cmp(const void *a, const void *b, void *arg)
 /*
  * partition_rbound_cmp
  *
- * Return for two range bounds whether the 1st one (specified in datum1,
+ * Return for two range bounds whether the 1st one (specified in datums1,
  * kind1, and lower1) is <, =, or > the bound specified in *b2.
  *
  * Note that if the values of the two range bounds compare equal, then we take
From 3b2787e1f8f1eeeb6bd9565288ab210262705b56 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 16 Nov 2017 14:19:27 -0500
Subject: [PATCH 0556/1087] Fix broken cleanup interlock for GIN pending list.

The pending list must (for correctness) always be cleaned up by vacuum, and
should (for the avoidance of surprising behavior) always be cleaned up by
an explicit call to gin_clean_pending_list, but cleanup is optional when
inserting. The old logic got this backward: cleanup was forced if
(stats == NULL), but that's going to be *false* when vacuuming and *true*
for inserts.

Masahiko Sawada, reviewed by me.

Discussion: http://postgr.es/m/CAD21AoBLUSyiYKnTYtSAbC+F=XDjiaBrOUEGK+zUXdQ8owfPKw@mail.gmail.com
---
 src/backend/access/gin/ginfast.c | 14 +++++++++-----
 src/backend/access/gin/ginvacuum.c | 6 +++---
 src/include/access/gin_private.h | 2 +-
 3 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c
index 00348891a2..95c8bd7b43 100644
--- a/src/backend/access/gin/ginfast.c
+++ b/src/backend/access/gin/ginfast.c
@@ -450,8 +450,12 @@ ginHeapTupleFastInsert(GinState *ginstate, GinTupleCollector *collector)
 
 	END_CRIT_SECTION();
 
+	/*
+	 * Since this could contend with a concurrent cleanup process, we do
+	 * not force pending-list cleanup here.
+ */ if (needCleanup) - ginInsertCleanup(ginstate, false, true, NULL); + ginInsertCleanup(ginstate, false, true, false, NULL); } /* @@ -748,7 +752,8 @@ processPendingPage(BuildAccumulator *accum, KeyArray *ka, */ void ginInsertCleanup(GinState *ginstate, bool full_clean, - bool fill_fsm, IndexBulkDeleteResult *stats) + bool fill_fsm, bool forceCleanup, + IndexBulkDeleteResult *stats) { Relation index = ginstate->index; Buffer metabuffer, @@ -765,7 +770,6 @@ ginInsertCleanup(GinState *ginstate, bool full_clean, bool cleanupFinish = false; bool fsm_vac = false; Size workMemory; - bool inVacuum = (stats == NULL); /* * We would like to prevent concurrent cleanup process. For that we will @@ -774,7 +778,7 @@ ginInsertCleanup(GinState *ginstate, bool full_clean, * insertion into pending list */ - if (inVacuum) + if (forceCleanup) { /* * We are called from [auto]vacuum/analyze or gin_clean_pending_list() @@ -1036,7 +1040,7 @@ gin_clean_pending_list(PG_FUNCTION_ARGS) memset(&stats, 0, sizeof(stats)); initGinState(&ginstate, indexRel); - ginInsertCleanup(&ginstate, true, true, &stats); + ginInsertCleanup(&ginstate, true, true, true, &stats); index_close(indexRel, AccessShareLock); diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c index a20a99c814..394bc832a4 100644 --- a/src/backend/access/gin/ginvacuum.c +++ b/src/backend/access/gin/ginvacuum.c @@ -570,7 +570,7 @@ ginbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats, * and cleanup any pending inserts */ ginInsertCleanup(&gvs.ginstate, !IsAutoVacuumWorkerProcess(), - false, stats); + false, true, stats); } /* we'll re-count the tuples each time */ @@ -683,7 +683,7 @@ ginvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats) if (IsAutoVacuumWorkerProcess()) { initGinState(&ginstate, index); - ginInsertCleanup(&ginstate, false, true, stats); + ginInsertCleanup(&ginstate, false, true, true, stats); } return stats; } @@ -697,7 +697,7 @@ ginvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats) stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult)); initGinState(&ginstate, index); ginInsertCleanup(&ginstate, !IsAutoVacuumWorkerProcess(), - false, stats); + false, true, stats); } memset(&idxStat, 0, sizeof(idxStat)); diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h index 7b5c845b83..dc49b6f17d 100644 --- a/src/include/access/gin_private.h +++ b/src/include/access/gin_private.h @@ -439,7 +439,7 @@ extern void ginHeapTupleFastCollect(GinState *ginstate, OffsetNumber attnum, Datum value, bool isNull, ItemPointer ht_ctid); extern void ginInsertCleanup(GinState *ginstate, bool full_clean, - bool fill_fsm, IndexBulkDeleteResult *stats); + bool fill_fsm, bool forceCleanup, IndexBulkDeleteResult *stats); /* ginpostinglist.c */ From 575cead991398aac255cf6f0e333c6d59053cf55 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 16 Nov 2017 15:30:56 -0500 Subject: [PATCH 0557/1087] Remove redundant line from Makefile. 
Masahiko Sawada, reviewed by Michael Paquier Discussion: http://postgr.es/m/CAD21AoDFes_Mgye-1K89rmTgeU3RxYF3zgTjzCJVq2KzzcpC4A@mail.gmail.com --- src/test/recovery/Makefile | 2 -- 1 file changed, 2 deletions(-) diff --git a/src/test/recovery/Makefile b/src/test/recovery/Makefile index 142a1b8de2..e31accf0f5 100644 --- a/src/test/recovery/Makefile +++ b/src/test/recovery/Makefile @@ -20,5 +20,3 @@ check: clean distclean maintainer-clean: rm -rf tmp_check - -EXTRA_INSTALL = contrib/test_decoding From 687f096ea9da82d267f1809a5f3fdfa027092045 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 16 Nov 2017 16:22:57 -0500 Subject: [PATCH 0558/1087] Make PL/Python handle domain-type conversions correctly. Fix PL/Python so that it can handle domains over composite, and so that it enforces domain constraints correctly in other cases that were not always done properly before. Notably, it didn't do arrays of domains right (oversight in commit c12d570fa), and it failed to enforce domain constraints when returning a composite type containing a domain field, and if a transform function is being used for a domain's base type then it failed to enforce domain constraints on the result. Also, in many places it missed checking domain constraints on null values, because the plpy_typeio code simply wasn't called for Py_None. Rather than try to band-aid these problems, I made a significant refactoring of the plpy_typeio logic. The existing design of recursing for array and composite members is extended to also treat domains as containers requiring recursion, and the APIs for the module are cleaned up and simplified. The patch also modifies plpy_typeio to rely on the typcache more than it did before (which was pretty much not at all). This reduces the need for repetitive lookups, and lets us get rid of an ad-hoc scheme for detecting changes in composite types. I added a couple of small features to typcache to help with that. Although some of this is fixing bugs that long predate v11, I don't think we should risk a back-patch: it's a significant amount of code churn, and there've been no complaints from the field about the bugs. 
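To make the newly-enforced behavior concrete, here is a minimal sketch of
the domain-over-composite case, with invented names; the regression tests
added below exercise the same pattern:

    CREATE TYPE pair AS (i int, j int);
    CREATE DOMAIN ordered_pair AS pair CHECK ((VALUE).i <= (VALUE).j);
    CREATE FUNCTION make_pair(a int, b int) RETURNS ordered_pair
    LANGUAGE plpythonu AS $$
    return {'i': a, 'j': b}
    $$;
    SELECT make_pair(1, 2);  -- returns (1,2)
    SELECT make_pair(2, 1);  -- now raises a domain CHECK violation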
Tom Lane, reviewed by Anthony Bykov Discussion: https://postgr.es/m/24449.1509393613@sss.pgh.pa.us --- .../expected/hstore_plpython.out | 20 +- .../hstore_plpython/sql/hstore_plpython.sql | 16 +- src/backend/utils/cache/typcache.c | 7 + src/include/utils/typcache.h | 5 +- src/pl/plpython/expected/plpython_types.out | 128 ++ src/pl/plpython/expected/plpython_types_3.out | 128 ++ src/pl/plpython/plpy_cursorobject.c | 76 +- src/pl/plpython/plpy_cursorobject.h | 2 +- src/pl/plpython/plpy_exec.c | 173 +-- src/pl/plpython/plpy_main.c | 7 +- src/pl/plpython/plpy_planobject.h | 2 +- src/pl/plpython/plpy_procedure.c | 148 +- src/pl/plpython/plpy_procedure.h | 8 +- src/pl/plpython/plpy_spi.c | 84 +- src/pl/plpython/plpy_typeio.c | 1192 ++++++++++------- src/pl/plpython/plpy_typeio.h | 220 +-- src/pl/plpython/sql/plpython_types.sql | 91 ++ 17 files changed, 1378 insertions(+), 929 deletions(-) diff --git a/contrib/hstore_plpython/expected/hstore_plpython.out b/contrib/hstore_plpython/expected/hstore_plpython.out index df49cd5f37..1ab5feea93 100644 --- a/contrib/hstore_plpython/expected/hstore_plpython.out +++ b/contrib/hstore_plpython/expected/hstore_plpython.out @@ -68,12 +68,30 @@ AS $$ val = [{'a': 1, 'b': 'boo', 'c': None}, {'d': 2}] return val $$; - SELECT test2arr(); +SELECT test2arr(); test2arr -------------------------------------------------------------- {"\"a\"=>\"1\", \"b\"=>\"boo\", \"c\"=>NULL","\"d\"=>\"2\""} (1 row) +-- test python -> domain over hstore +CREATE DOMAIN hstore_foo AS hstore CHECK(VALUE ? 'foo'); +CREATE FUNCTION test2dom(fn text) RETURNS hstore_foo +LANGUAGE plpythonu +TRANSFORM FOR TYPE hstore +AS $$ +return {'a': 1, fn: 'boo', 'c': None} +$$; +SELECT test2dom('foo'); + test2dom +----------------------------------- + "a"=>"1", "c"=>NULL, "foo"=>"boo" +(1 row) + +SELECT test2dom('bar'); -- fail +ERROR: value for domain hstore_foo violates check constraint "hstore_foo_check" +CONTEXT: while creating return value +PL/Python function "test2dom" -- test as part of prepare/execute CREATE FUNCTION test3() RETURNS void LANGUAGE plpythonu diff --git a/contrib/hstore_plpython/sql/hstore_plpython.sql b/contrib/hstore_plpython/sql/hstore_plpython.sql index 911bbd67fe..2c54ee6aaa 100644 --- a/contrib/hstore_plpython/sql/hstore_plpython.sql +++ b/contrib/hstore_plpython/sql/hstore_plpython.sql @@ -60,7 +60,21 @@ val = [{'a': 1, 'b': 'boo', 'c': None}, {'d': 2}] return val $$; - SELECT test2arr(); +SELECT test2arr(); + + +-- test python -> domain over hstore +CREATE DOMAIN hstore_foo AS hstore CHECK(VALUE ? 
'foo');
+
+CREATE FUNCTION test2dom(fn text) RETURNS hstore_foo
+LANGUAGE plpythonu
+TRANSFORM FOR TYPE hstore
+AS $$
+return {'a': 1, fn: 'boo', 'c': None}
+$$;
+
+SELECT test2dom('foo');
+SELECT test2dom('bar'); -- fail
 
 
 -- test as part of prepare/execute
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 7aadc5d6ef..f6450c402c 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -377,6 +377,7 @@ lookup_type_cache(Oid type_id, int flags)
 	typentry->typstorage = typtup->typstorage;
 	typentry->typtype = typtup->typtype;
 	typentry->typrelid = typtup->typrelid;
+	typentry->typelem = typtup->typelem;
 
 /* If it's a domain, immediately thread it into the domain cache list */
 if (typentry->typtype == TYPTYPE_DOMAIN)
@@ -791,6 +792,12 @@ load_typcache_tupdesc(TypeCacheEntry *typentry)
 	Assert(typentry->tupDesc->tdrefcount > 0);
 	typentry->tupDesc->tdrefcount++;
 
+	/*
+	 * In future, we could take some pains to not increment the seqno if the
+	 * tupdesc didn't really change; but for now it's not worth it.
+	 */
+	typentry->tupDescSeqNo++;
+
 	relation_close(rel, AccessShareLock);
 }
 
diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h
index ea799a8894..c203dabbd0 100644
--- a/src/include/utils/typcache.h
+++ b/src/include/utils/typcache.h
@@ -40,6 +40,7 @@ typedef struct TypeCacheEntry
 	char typstorage;
 	char typtype;
 	Oid typrelid;
+	Oid typelem;
 
 /*
  * Information obtained from opfamily entries
@@ -75,9 +76,11 @@ typedef struct TypeCacheEntry
 	/*
 	 * Tuple descriptor if it's a composite type (row type). NULL if not
 	 * composite or information hasn't yet been requested. (NOTE: this is a
-	 * reference-counted tupledesc.)
+	 * reference-counted tupledesc.) To simplify caching dependent info,
+	 * tupDescSeqNo is incremented each time tupDesc is rebuilt in a session.
 	 */
 	TupleDesc tupDesc;
+	int64 tupDescSeqNo;
 
 /*
  * Fields computed when TYPECACHE_RANGE_INFO is requested. Zeroes if not
diff --git a/src/pl/plpython/expected/plpython_types.out b/src/pl/plpython/expected/plpython_types.out
index 893de301dd..eda965a9e0 100644
--- a/src/pl/plpython/expected/plpython_types.out
+++ b/src/pl/plpython/expected/plpython_types.out
@@ -765,6 +765,76 @@ SELECT * FROM test_type_conversion_array_domain_check_violation();
 ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check"
 CONTEXT: while creating return value
 PL/Python function "test_type_conversion_array_domain_check_violation"
+--
+-- Arrays of domains
+--
+CREATE FUNCTION test_read_uint2_array(x uint2[]) RETURNS uint2 AS $$
+plpy.info(x, type(x))
+return x[0]
+$$ LANGUAGE plpythonu;
+select test_read_uint2_array(array[1::uint2]);
+INFO: ([1], <type 'list'>)
+ test_read_uint2_array
+-----------------------
+ 1
+(1 row)
+
+CREATE FUNCTION test_build_uint2_array(x int2) RETURNS uint2[] AS $$
+return [x, x]
+$$ LANGUAGE plpythonu;
+select test_build_uint2_array(1::int2);
+ test_build_uint2_array
+------------------------
+ {1,1}
+(1 row)
+
+select test_build_uint2_array(-1::int2); -- fail
+ERROR: value for domain uint2 violates check constraint "uint2_check"
+CONTEXT: while creating return value
+PL/Python function "test_build_uint2_array"
+--
+-- ideally this would work, but for now it doesn't, because the return value
+-- is [[2,4], [2,4]] which our conversion code thinks should become a 2-D
+-- integer array, not an array of arrays.
+--
+CREATE FUNCTION test_type_conversion_domain_array(x integer[])
+  RETURNS ordered_pair_domain[] AS $$
+return [x, x]
+$$ LANGUAGE plpythonu;
+select test_type_conversion_domain_array(array[2,4]);
+ERROR: return value of function with array return type is not a Python sequence
+CONTEXT: while creating return value
+PL/Python function "test_type_conversion_domain_array"
+select test_type_conversion_domain_array(array[4,2]); -- fail
+ERROR: return value of function with array return type is not a Python sequence
+CONTEXT: while creating return value
+PL/Python function "test_type_conversion_domain_array"
+CREATE FUNCTION test_type_conversion_domain_array2(x ordered_pair_domain)
+  RETURNS integer AS $$
+plpy.info(x, type(x))
+return x[1]
+$$ LANGUAGE plpythonu;
+select test_type_conversion_domain_array2(array[2,4]);
+INFO: ([2, 4], <type 'list'>)
+ test_type_conversion_domain_array2
+------------------------------------
+ 4
+(1 row)
+
+select test_type_conversion_domain_array2(array[4,2]); -- fail
+ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check"
+CREATE FUNCTION test_type_conversion_array_domain_array(x ordered_pair_domain[])
+  RETURNS ordered_pair_domain AS $$
+plpy.info(x, type(x))
+return x[0]
+$$ LANGUAGE plpythonu;
+select test_type_conversion_array_domain_array(array[array[2,4]::ordered_pair_domain]);
+INFO: ([[2, 4]], <type 'list'>)
+ test_type_conversion_array_domain_array
+-----------------------------------------
+ {2,4}
+(1 row)
+
 ---
 --- Composite types
 ---
@@ -820,6 +890,64 @@ SELECT test_composite_type_input(row(1, 2));
 3
 (1 row)
 
+--
+-- Domains within composite
+--
+CREATE TYPE nnint_container AS (f1 int, f2 nnint);
+CREATE FUNCTION nnint_test(x int, y int) RETURNS nnint_container AS $$
+return {'f1': x, 'f2': y}
+$$ LANGUAGE plpythonu;
+SELECT nnint_test(null, 3);
+ nnint_test
+------------
+ (,3)
+(1 row)
+
+SELECT nnint_test(3, null); -- fail
+ERROR: value for domain nnint violates check constraint "nnint_check"
+CONTEXT: while creating return value
+PL/Python function "nnint_test"
+--
+-- Domains of composite
+--
+CREATE DOMAIN ordered_named_pair AS named_pair_2 CHECK((VALUE).i <= (VALUE).j);
+CREATE FUNCTION read_ordered_named_pair(p ordered_named_pair) RETURNS integer AS $$
+return p['i'] + p['j']
+$$ LANGUAGE plpythonu;
+SELECT read_ordered_named_pair(row(1, 2));
+ read_ordered_named_pair
+-------------------------
+ 3
+(1 row)
+
+SELECT read_ordered_named_pair(row(2, 1)); -- fail
+ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check"
+CREATE FUNCTION build_ordered_named_pair(i int, j int) RETURNS ordered_named_pair AS $$
+return {'i': i, 'j': j}
+$$ LANGUAGE plpythonu;
+SELECT build_ordered_named_pair(1,2);
+ build_ordered_named_pair
+--------------------------
+ (1,2)
+(1 row)
+
+SELECT build_ordered_named_pair(2,1); -- fail
+ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check"
+CONTEXT: while creating return value
+PL/Python function "build_ordered_named_pair"
+CREATE FUNCTION build_ordered_named_pairs(i int, j int) RETURNS ordered_named_pair[] AS $$
+return [{'i': i, 'j': j}, {'i': i, 'j': j+1}]
+$$ LANGUAGE plpythonu;
+SELECT build_ordered_named_pairs(1,2);
+ build_ordered_named_pairs
+---------------------------
+ {"(1,2)","(1,3)"}
+(1 row)
+
+SELECT build_ordered_named_pairs(2,1); -- fail
+ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check"
+CONTEXT: while creating return value
+PL/Python function
"build_ordered_named_pairs" -- -- Prepared statements -- diff --git a/src/pl/plpython/expected/plpython_types_3.out b/src/pl/plpython/expected/plpython_types_3.out index 2d853bd573..69f958cbf2 100644 --- a/src/pl/plpython/expected/plpython_types_3.out +++ b/src/pl/plpython/expected/plpython_types_3.out @@ -765,6 +765,76 @@ SELECT * FROM test_type_conversion_array_domain_check_violation(); ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check" CONTEXT: while creating return value PL/Python function "test_type_conversion_array_domain_check_violation" +-- +-- Arrays of domains +-- +CREATE FUNCTION test_read_uint2_array(x uint2[]) RETURNS uint2 AS $$ +plpy.info(x, type(x)) +return x[0] +$$ LANGUAGE plpythonu; +select test_read_uint2_array(array[1::uint2]); +INFO: ([1], ) + test_read_uint2_array +----------------------- + 1 +(1 row) + +CREATE FUNCTION test_build_uint2_array(x int2) RETURNS uint2[] AS $$ +return [x, x] +$$ LANGUAGE plpythonu; +select test_build_uint2_array(1::int2); + test_build_uint2_array +------------------------ + {1,1} +(1 row) + +select test_build_uint2_array(-1::int2); -- fail +ERROR: value for domain uint2 violates check constraint "uint2_check" +CONTEXT: while creating return value +PL/Python function "test_build_uint2_array" +-- +-- ideally this would work, but for now it doesn't, because the return value +-- is [[2,4], [2,4]] which our conversion code thinks should become a 2-D +-- integer array, not an array of arrays. +-- +CREATE FUNCTION test_type_conversion_domain_array(x integer[]) + RETURNS ordered_pair_domain[] AS $$ +return [x, x] +$$ LANGUAGE plpythonu; +select test_type_conversion_domain_array(array[2,4]); +ERROR: return value of function with array return type is not a Python sequence +CONTEXT: while creating return value +PL/Python function "test_type_conversion_domain_array" +select test_type_conversion_domain_array(array[4,2]); -- fail +ERROR: return value of function with array return type is not a Python sequence +CONTEXT: while creating return value +PL/Python function "test_type_conversion_domain_array" +CREATE FUNCTION test_type_conversion_domain_array2(x ordered_pair_domain) + RETURNS integer AS $$ +plpy.info(x, type(x)) +return x[1] +$$ LANGUAGE plpythonu; +select test_type_conversion_domain_array2(array[2,4]); +INFO: ([2, 4], ) + test_type_conversion_domain_array2 +------------------------------------ + 4 +(1 row) + +select test_type_conversion_domain_array2(array[4,2]); -- fail +ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check" +CREATE FUNCTION test_type_conversion_array_domain_array(x ordered_pair_domain[]) + RETURNS ordered_pair_domain AS $$ +plpy.info(x, type(x)) +return x[0] +$$ LANGUAGE plpythonu; +select test_type_conversion_array_domain_array(array[array[2,4]::ordered_pair_domain]); +INFO: ([[2, 4]], ) + test_type_conversion_array_domain_array +----------------------------------------- + {2,4} +(1 row) + --- --- Composite types --- @@ -820,6 +890,64 @@ SELECT test_composite_type_input(row(1, 2)); 3 (1 row) +-- +-- Domains within composite +-- +CREATE TYPE nnint_container AS (f1 int, f2 nnint); +CREATE FUNCTION nnint_test(x int, y int) RETURNS nnint_container AS $$ +return {'f1': x, 'f2': y} +$$ LANGUAGE plpythonu; +SELECT nnint_test(null, 3); + nnint_test +------------ + (,3) +(1 row) + +SELECT nnint_test(3, null); -- fail +ERROR: value for domain nnint violates check constraint "nnint_check" +CONTEXT: while creating return value +PL/Python function 
"nnint_test" +-- +-- Domains of composite +-- +CREATE DOMAIN ordered_named_pair AS named_pair_2 CHECK((VALUE).i <= (VALUE).j); +CREATE FUNCTION read_ordered_named_pair(p ordered_named_pair) RETURNS integer AS $$ +return p['i'] + p['j'] +$$ LANGUAGE plpythonu; +SELECT read_ordered_named_pair(row(1, 2)); + read_ordered_named_pair +------------------------- + 3 +(1 row) + +SELECT read_ordered_named_pair(row(2, 1)); -- fail +ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CREATE FUNCTION build_ordered_named_pair(i int, j int) RETURNS ordered_named_pair AS $$ +return {'i': i, 'j': j} +$$ LANGUAGE plpythonu; +SELECT build_ordered_named_pair(1,2); + build_ordered_named_pair +-------------------------- + (1,2) +(1 row) + +SELECT build_ordered_named_pair(2,1); -- fail +ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CONTEXT: while creating return value +PL/Python function "build_ordered_named_pair" +CREATE FUNCTION build_ordered_named_pairs(i int, j int) RETURNS ordered_named_pair[] AS $$ +return [{'i': i, 'j': j}, {'i': i, 'j': j+1}] +$$ LANGUAGE plpythonu; +SELECT build_ordered_named_pairs(1,2); + build_ordered_named_pairs +--------------------------- + {"(1,2)","(1,3)"} +(1 row) + +SELECT build_ordered_named_pairs(2,1); -- fail +ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CONTEXT: while creating return value +PL/Python function "build_ordered_named_pairs" -- -- Prepared statements -- diff --git a/src/pl/plpython/plpy_cursorobject.c b/src/pl/plpython/plpy_cursorobject.c index 0108471bfe..10ca786fbc 100644 --- a/src/pl/plpython/plpy_cursorobject.c +++ b/src/pl/plpython/plpy_cursorobject.c @@ -9,6 +9,7 @@ #include #include "access/xact.h" +#include "catalog/pg_type.h" #include "mb/pg_wchar.h" #include "utils/memutils.h" @@ -106,6 +107,7 @@ static PyObject * PLy_cursor_query(const char *query) { PLyCursorObject *cursor; + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); volatile MemoryContext oldcontext; volatile ResourceOwner oldowner; @@ -116,7 +118,11 @@ PLy_cursor_query(const char *query) cursor->mcxt = AllocSetContextCreate(TopMemoryContext, "PL/Python cursor context", ALLOCSET_DEFAULT_SIZES); - PLy_typeinfo_init(&cursor->result, cursor->mcxt); + + /* Initialize for converting result tuples to Python */ + PLy_input_setup_func(&cursor->result, cursor->mcxt, + RECORDOID, -1, + exec_ctx->curr_proc); oldcontext = CurrentMemoryContext; oldowner = CurrentResourceOwner; @@ -125,7 +131,6 @@ PLy_cursor_query(const char *query) PG_TRY(); { - PLyExecutionContext *exec_ctx = PLy_current_execution_context(); SPIPlanPtr plan; Portal portal; @@ -166,6 +171,7 @@ PLy_cursor_plan(PyObject *ob, PyObject *args) volatile int nargs; int i; PLyPlanObject *plan; + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); volatile MemoryContext oldcontext; volatile ResourceOwner oldowner; @@ -208,7 +214,11 @@ PLy_cursor_plan(PyObject *ob, PyObject *args) cursor->mcxt = AllocSetContextCreate(TopMemoryContext, "PL/Python cursor context", ALLOCSET_DEFAULT_SIZES); - PLy_typeinfo_init(&cursor->result, cursor->mcxt); + + /* Initialize for converting result tuples to Python */ + PLy_input_setup_func(&cursor->result, cursor->mcxt, + RECORDOID, -1, + exec_ctx->curr_proc); oldcontext = CurrentMemoryContext; oldowner = CurrentResourceOwner; @@ -217,7 +227,6 @@ PLy_cursor_plan(PyObject *ob, PyObject *args) PG_TRY(); { - PLyExecutionContext *exec_ctx = 
PLy_current_execution_context(); Portal portal; char *volatile nulls; volatile int j; @@ -229,39 +238,24 @@ PLy_cursor_plan(PyObject *ob, PyObject *args) for (j = 0; j < nargs; j++) { + PLyObToDatum *arg = &plan->args[j]; PyObject *elem; elem = PySequence_GetItem(args, j); - if (elem != Py_None) + PG_TRY(); { - PG_TRY(); - { - plan->values[j] = - plan->args[j].out.d.func(&(plan->args[j].out.d), - -1, - elem, - false); - } - PG_CATCH(); - { - Py_DECREF(elem); - PG_RE_THROW(); - } - PG_END_TRY(); + bool isnull; - Py_DECREF(elem); - nulls[j] = ' '; + plan->values[j] = PLy_output_convert(arg, elem, &isnull); + nulls[j] = isnull ? 'n' : ' '; } - else + PG_CATCH(); { Py_DECREF(elem); - plan->values[j] = - InputFunctionCall(&(plan->args[j].out.d.typfunc), - NULL, - plan->args[j].out.d.typioparam, - -1); - nulls[j] = 'n'; + PG_RE_THROW(); } + PG_END_TRY(); + Py_DECREF(elem); } portal = SPI_cursor_open(NULL, plan->plan, plan->values, nulls, @@ -281,7 +275,7 @@ PLy_cursor_plan(PyObject *ob, PyObject *args) /* cleanup plan->values array */ for (k = 0; k < nargs; k++) { - if (!plan->args[k].out.d.typbyval && + if (!plan->args[k].typbyval && (plan->values[k] != PointerGetDatum(NULL))) { pfree(DatumGetPointer(plan->values[k])); @@ -298,7 +292,7 @@ PLy_cursor_plan(PyObject *ob, PyObject *args) for (i = 0; i < nargs; i++) { - if (!plan->args[i].out.d.typbyval && + if (!plan->args[i].typbyval && (plan->values[i] != PointerGetDatum(NULL))) { pfree(DatumGetPointer(plan->values[i])); @@ -339,6 +333,7 @@ PLy_cursor_iternext(PyObject *self) { PLyCursorObject *cursor; PyObject *ret; + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); volatile MemoryContext oldcontext; volatile ResourceOwner oldowner; Portal portal; @@ -374,11 +369,11 @@ PLy_cursor_iternext(PyObject *self) } else { - if (cursor->result.is_rowtype != 1) - PLy_input_tuple_funcs(&cursor->result, SPI_tuptable->tupdesc); + PLy_input_setup_tuple(&cursor->result, SPI_tuptable->tupdesc, + exec_ctx->curr_proc); - ret = PLyDict_FromTuple(&cursor->result, SPI_tuptable->vals[0], - SPI_tuptable->tupdesc); + ret = PLy_input_from_tuple(&cursor->result, SPI_tuptable->vals[0], + SPI_tuptable->tupdesc); } SPI_freetuptable(SPI_tuptable); @@ -401,6 +396,7 @@ PLy_cursor_fetch(PyObject *self, PyObject *args) PLyCursorObject *cursor; int count; PLyResultObject *ret; + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); volatile MemoryContext oldcontext; volatile ResourceOwner oldowner; Portal portal; @@ -437,9 +433,6 @@ PLy_cursor_fetch(PyObject *self, PyObject *args) { SPI_cursor_fetch(portal, true, count); - if (cursor->result.is_rowtype != 1) - PLy_input_tuple_funcs(&cursor->result, SPI_tuptable->tupdesc); - Py_DECREF(ret->status); ret->status = PyInt_FromLong(SPI_OK_FETCH); @@ -465,11 +458,14 @@ PLy_cursor_fetch(PyObject *self, PyObject *args) Py_DECREF(ret->rows); ret->rows = PyList_New(SPI_processed); + PLy_input_setup_tuple(&cursor->result, SPI_tuptable->tupdesc, + exec_ctx->curr_proc); + for (i = 0; i < SPI_processed; i++) { - PyObject *row = PLyDict_FromTuple(&cursor->result, - SPI_tuptable->vals[i], - SPI_tuptable->tupdesc); + PyObject *row = PLy_input_from_tuple(&cursor->result, + SPI_tuptable->vals[i], + SPI_tuptable->tupdesc); PyList_SetItem(ret->rows, i, row); } diff --git a/src/pl/plpython/plpy_cursorobject.h b/src/pl/plpython/plpy_cursorobject.h index 018b169cbf..e4d2c0ed25 100644 --- a/src/pl/plpython/plpy_cursorobject.h +++ b/src/pl/plpython/plpy_cursorobject.h @@ -12,7 +12,7 @@ typedef struct PLyCursorObject { 
PyObject_HEAD char *portalname; - PLyTypeInfo result; + PLyDatumToOb result; bool closed; MemoryContext mcxt; } PLyCursorObject; diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c index 26f61dd0f3..02d7d2ad5f 100644 --- a/src/pl/plpython/plpy_exec.c +++ b/src/pl/plpython/plpy_exec.c @@ -202,7 +202,7 @@ PLy_exec_function(FunctionCallInfo fcinfo, PLyProcedure *proc) * return value as a special "void datum" rather than NULL (as is the * case for non-void-returning functions). */ - if (proc->result.out.d.typoid == VOIDOID) + if (proc->result.typoid == VOIDOID) { if (plrv != Py_None) ereport(ERROR, @@ -212,48 +212,22 @@ PLy_exec_function(FunctionCallInfo fcinfo, PLyProcedure *proc) fcinfo->isnull = false; rv = (Datum) 0; } - else if (plrv == Py_None) + else if (plrv == Py_None && + srfstate && srfstate->iter == NULL) { - fcinfo->isnull = true; - /* * In a SETOF function, the iteration-ending null isn't a real * value; don't pass it through the input function, which might * complain. */ - if (srfstate && srfstate->iter == NULL) - rv = (Datum) 0; - else if (proc->result.is_rowtype < 1) - rv = InputFunctionCall(&proc->result.out.d.typfunc, - NULL, - proc->result.out.d.typioparam, - -1); - else - /* Tuple as None */ - rv = (Datum) NULL; - } - else if (proc->result.is_rowtype >= 1) - { - TupleDesc desc; - - /* make sure it's not an unnamed record */ - Assert((proc->result.out.d.typoid == RECORDOID && - proc->result.out.d.typmod != -1) || - (proc->result.out.d.typoid != RECORDOID && - proc->result.out.d.typmod == -1)); - - desc = lookup_rowtype_tupdesc(proc->result.out.d.typoid, - proc->result.out.d.typmod); - - rv = PLyObject_ToCompositeDatum(&proc->result, desc, plrv, false); - fcinfo->isnull = (rv == (Datum) NULL); - - ReleaseTupleDesc(desc); + fcinfo->isnull = true; + rv = (Datum) 0; } else { - fcinfo->isnull = false; - rv = (proc->result.out.d.func) (&proc->result.out.d, -1, plrv, false); + /* Normal conversion of result */ + rv = PLy_output_convert(&proc->result, plrv, + &fcinfo->isnull); } } PG_CATCH(); @@ -328,20 +302,32 @@ PLy_exec_trigger(FunctionCallInfo fcinfo, PLyProcedure *proc) PyObject *volatile plargs = NULL; PyObject *volatile plrv = NULL; TriggerData *tdata; + TupleDesc rel_descr; Assert(CALLED_AS_TRIGGER(fcinfo)); + tdata = (TriggerData *) fcinfo->context; /* - * Input/output conversion for trigger tuples. Use the result TypeInfo - * variable to store the tuple conversion info. We do this over again on - * each call to cover the possibility that the relation's tupdesc changed - * since the trigger was last called. PLy_input_tuple_funcs and - * PLy_output_tuple_funcs are responsible for not doing repetitive work. + * Input/output conversion for trigger tuples. We use the result and + * result_in fields to store the tuple conversion info. We do this over + * again on each call to cover the possibility that the relation's tupdesc + * changed since the trigger was last called. The PLy_xxx_setup_func + * calls should only happen once, but PLy_input_setup_tuple and + * PLy_output_setup_tuple are responsible for not doing repetitive work. 
*/ - tdata = (TriggerData *) fcinfo->context; - - PLy_input_tuple_funcs(&(proc->result), tdata->tg_relation->rd_att); - PLy_output_tuple_funcs(&(proc->result), tdata->tg_relation->rd_att); + rel_descr = RelationGetDescr(tdata->tg_relation); + if (proc->result.typoid != rel_descr->tdtypeid) + PLy_output_setup_func(&proc->result, proc->mcxt, + rel_descr->tdtypeid, + rel_descr->tdtypmod, + proc); + if (proc->result_in.typoid != rel_descr->tdtypeid) + PLy_input_setup_func(&proc->result_in, proc->mcxt, + rel_descr->tdtypeid, + rel_descr->tdtypmod, + proc); + PLy_output_setup_tuple(&proc->result, rel_descr, proc); + PLy_input_setup_tuple(&proc->result_in, rel_descr, proc); PG_TRY(); { @@ -436,46 +422,12 @@ PLy_function_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc) args = PyList_New(proc->nargs); for (i = 0; i < proc->nargs; i++) { - if (proc->args[i].is_rowtype > 0) - { - if (fcinfo->argnull[i]) - arg = NULL; - else - { - HeapTupleHeader td; - Oid tupType; - int32 tupTypmod; - TupleDesc tupdesc; - HeapTupleData tmptup; - - td = DatumGetHeapTupleHeader(fcinfo->arg[i]); - /* Extract rowtype info and find a tupdesc */ - tupType = HeapTupleHeaderGetTypeId(td); - tupTypmod = HeapTupleHeaderGetTypMod(td); - tupdesc = lookup_rowtype_tupdesc(tupType, tupTypmod); - - /* Set up I/O funcs if not done yet */ - if (proc->args[i].is_rowtype != 1) - PLy_input_tuple_funcs(&(proc->args[i]), tupdesc); - - /* Build a temporary HeapTuple control structure */ - tmptup.t_len = HeapTupleHeaderGetDatumLength(td); - tmptup.t_data = td; - - arg = PLyDict_FromTuple(&(proc->args[i]), &tmptup, tupdesc); - ReleaseTupleDesc(tupdesc); - } - } + PLyDatumToOb *arginfo = &proc->args[i]; + + if (fcinfo->argnull[i]) + arg = NULL; else - { - if (fcinfo->argnull[i]) - arg = NULL; - else - { - arg = (proc->args[i].in.d.func) (&(proc->args[i].in.d), - fcinfo->arg[i]); - } - } + arg = PLy_input_convert(arginfo, fcinfo->arg[i]); if (arg == NULL) { @@ -493,7 +445,7 @@ PLy_function_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc) } /* Set up output conversion for functions returning RECORD */ - if (proc->result.out.d.typoid == RECORDOID) + if (proc->result.typoid == RECORDOID) { TupleDesc desc; @@ -504,7 +456,7 @@ PLy_function_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc) "that cannot accept type record"))); /* cache the output conversion functions */ - PLy_output_record_funcs(&(proc->result), desc); + PLy_output_setup_record(&proc->result, desc, proc); } } PG_CATCH(); @@ -723,6 +675,7 @@ static PyObject * PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *rv) { TriggerData *tdata = (TriggerData *) fcinfo->context; + TupleDesc rel_descr = RelationGetDescr(tdata->tg_relation); PyObject *pltname, *pltevent, *pltwhen, @@ -790,8 +743,9 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *r pltevent = PyString_FromString("INSERT"); PyDict_SetItemString(pltdata, "old", Py_None); - pytnew = PLyDict_FromTuple(&(proc->result), tdata->tg_trigtuple, - tdata->tg_relation->rd_att); + pytnew = PLy_input_from_tuple(&proc->result_in, + tdata->tg_trigtuple, + rel_descr); PyDict_SetItemString(pltdata, "new", pytnew); Py_DECREF(pytnew); *rv = tdata->tg_trigtuple; @@ -801,8 +755,9 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *r pltevent = PyString_FromString("DELETE"); PyDict_SetItemString(pltdata, "new", Py_None); - pytold = PLyDict_FromTuple(&(proc->result), tdata->tg_trigtuple, - tdata->tg_relation->rd_att); + pytold = 
PLy_input_from_tuple(&proc->result_in, + tdata->tg_trigtuple, + rel_descr); PyDict_SetItemString(pltdata, "old", pytold); Py_DECREF(pytold); *rv = tdata->tg_trigtuple; @@ -811,12 +766,14 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *r { pltevent = PyString_FromString("UPDATE"); - pytnew = PLyDict_FromTuple(&(proc->result), tdata->tg_newtuple, - tdata->tg_relation->rd_att); + pytnew = PLy_input_from_tuple(&proc->result_in, + tdata->tg_newtuple, + rel_descr); PyDict_SetItemString(pltdata, "new", pytnew); Py_DECREF(pytnew); - pytold = PLyDict_FromTuple(&(proc->result), tdata->tg_trigtuple, - tdata->tg_relation->rd_att); + pytold = PLy_input_from_tuple(&proc->result_in, + tdata->tg_trigtuple, + rel_descr); PyDict_SetItemString(pltdata, "old", pytold); Py_DECREF(pytold); *rv = tdata->tg_newtuple; @@ -897,6 +854,9 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *r return pltdata; } +/* + * Apply changes requested by a MODIFY return from a trigger function. + */ static HeapTuple PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata, HeapTuple otup) @@ -938,7 +898,7 @@ PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata, plkeys = PyDict_Keys(plntup); nkeys = PyList_Size(plkeys); - tupdesc = tdata->tg_relation->rd_att; + tupdesc = RelationGetDescr(tdata->tg_relation); modvalues = (Datum *) palloc0(tupdesc->natts * sizeof(Datum)); modnulls = (bool *) palloc0(tupdesc->natts * sizeof(bool)); @@ -950,7 +910,6 @@ PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata, char *plattstr; int attn; PLyObToDatum *att; - Form_pg_attribute attr; platt = PyList_GetItem(plkeys, i); if (PyString_Check(platt)) @@ -975,7 +934,6 @@ PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("cannot set system attribute \"%s\"", plattstr))); - att = &proc->result.out.r.atts[attn - 1]; plval = PyDict_GetItem(plntup, platt); if (plval == NULL) @@ -983,25 +941,12 @@ PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata, Py_INCREF(plval); - attr = TupleDescAttr(tupdesc, attn - 1); - if (plval != Py_None) - { - modvalues[attn - 1] = - (att->func) (att, - attr->atttypmod, - plval, - false); - modnulls[attn - 1] = false; - } - else - { - modvalues[attn - 1] = - InputFunctionCall(&att->typfunc, - NULL, - att->typioparam, - attr->atttypmod); - modnulls[attn - 1] = true; - } + /* We assume proc->result is set up to convert tuples properly */ + att = &proc->result.u.tuple.atts[attn - 1]; + + modvalues[attn - 1] = PLy_output_convert(att, + plval, + &modnulls[attn - 1]); modrepls[attn - 1] = true; Py_DECREF(plval); diff --git a/src/pl/plpython/plpy_main.c b/src/pl/plpython/plpy_main.c index 7df50c09c8..29db90e448 100644 --- a/src/pl/plpython/plpy_main.c +++ b/src/pl/plpython/plpy_main.c @@ -318,7 +318,12 @@ plpython_inline_handler(PG_FUNCTION_ARGS) ALLOCSET_DEFAULT_SIZES); proc.pyname = MemoryContextStrdup(proc.mcxt, "__plpython_inline_block"); proc.langid = codeblock->langOid; - proc.result.out.d.typoid = VOIDOID; + + /* + * This is currently sufficient to get PLy_exec_function to work, but + * someday we might need to be honest and use PLy_output_setup_func. + */ + proc.result.typoid = VOIDOID; /* * Push execution context onto stack. 
It is important that this get diff --git a/src/pl/plpython/plpy_planobject.h b/src/pl/plpython/plpy_planobject.h index 5adc957053..729effb163 100644 --- a/src/pl/plpython/plpy_planobject.h +++ b/src/pl/plpython/plpy_planobject.h @@ -16,7 +16,7 @@ typedef struct PLyPlanObject int nargs; Oid *types; Datum *values; - PLyTypeInfo *args; + PLyObToDatum *args; MemoryContext mcxt; } PLyPlanObject; diff --git a/src/pl/plpython/plpy_procedure.c b/src/pl/plpython/plpy_procedure.c index 26acc88b27..58d6988202 100644 --- a/src/pl/plpython/plpy_procedure.c +++ b/src/pl/plpython/plpy_procedure.c @@ -15,6 +15,7 @@ #include "utils/builtins.h" #include "utils/hsearch.h" #include "utils/inval.h" +#include "utils/lsyscache.h" #include "utils/memutils.h" #include "utils/syscache.h" @@ -29,7 +30,6 @@ static HTAB *PLy_procedure_cache = NULL; static PLyProcedure *PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger); -static bool PLy_procedure_argument_valid(PLyTypeInfo *arg); static bool PLy_procedure_valid(PLyProcedure *proc, HeapTuple procTup); static char *PLy_procedure_munge_source(const char *name, const char *src); @@ -165,6 +165,7 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger) *ptr = '_'; } + /* Create long-lived context that all procedure info will live in */ cxt = AllocSetContextCreate(TopMemoryContext, procName, ALLOCSET_DEFAULT_SIZES); @@ -188,11 +189,9 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger) proc->fn_tid = procTup->t_self; proc->fn_readonly = (procStruct->provolatile != PROVOLATILE_VOLATILE); proc->is_setof = procStruct->proretset; - PLy_typeinfo_init(&proc->result, proc->mcxt); proc->src = NULL; proc->argnames = NULL; - for (i = 0; i < FUNC_MAX_ARGS; i++) - PLy_typeinfo_init(&proc->args[i], proc->mcxt); + proc->args = NULL; proc->nargs = 0; proc->langid = procStruct->prolang; protrftypes_datum = SysCacheGetAttr(PROCOID, procTup, @@ -211,50 +210,48 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger) */ if (!is_trigger) { + Oid rettype = procStruct->prorettype; HeapTuple rvTypeTup; Form_pg_type rvTypeStruct; - rvTypeTup = SearchSysCache1(TYPEOID, - ObjectIdGetDatum(procStruct->prorettype)); + rvTypeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(rettype)); if (!HeapTupleIsValid(rvTypeTup)) - elog(ERROR, "cache lookup failed for type %u", - procStruct->prorettype); + elog(ERROR, "cache lookup failed for type %u", rettype); rvTypeStruct = (Form_pg_type) GETSTRUCT(rvTypeTup); /* Disallow pseudotype result, except for void or record */ if (rvTypeStruct->typtype == TYPTYPE_PSEUDO) { - if (procStruct->prorettype == TRIGGEROID) + if (rettype == VOIDOID || + rettype == RECORDOID) + /* okay */ ; + else if (rettype == TRIGGEROID || rettype == EVTTRIGGEROID) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("trigger functions can only be called as triggers"))); - else if (procStruct->prorettype != VOIDOID && - procStruct->prorettype != RECORDOID) + else ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("PL/Python functions cannot return type %s", - format_type_be(procStruct->prorettype)))); + format_type_be(rettype)))); } - if (rvTypeStruct->typtype == TYPTYPE_COMPOSITE || - procStruct->prorettype == RECORDOID) - { - /* - * Tuple: set up later, during first call to - * PLy_function_handler - */ - proc->result.out.d.typoid = procStruct->prorettype; - proc->result.out.d.typmod = -1; - proc->result.is_rowtype = 2; - } - else - { - /* do the real work */ - PLy_output_datum_func(&proc->result, 
rvTypeTup, proc->langid, proc->trftypes); - } + /* set up output function for procedure result */ + PLy_output_setup_func(&proc->result, proc->mcxt, + rettype, -1, proc); ReleaseSysCache(rvTypeTup); } + else + { + /* + * In a trigger function, we use proc->result and proc->result_in + * for converting tuples, but we don't yet have enough info to set + * them up. PLy_exec_trigger will deal with it. + */ + proc->result.typoid = InvalidOid; + proc->result_in.typoid = InvalidOid; + } /* * Now get information required for input conversion of the @@ -287,7 +284,10 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger) } } + /* Allocate arrays for per-input-argument data */ proc->argnames = (char **) palloc0(sizeof(char *) * proc->nargs); + proc->args = (PLyDatumToOb *) palloc0(sizeof(PLyDatumToOb) * proc->nargs); + for (i = pos = 0; i < total; i++) { HeapTuple argTypeTup; @@ -306,28 +306,17 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger) elog(ERROR, "cache lookup failed for type %u", types[i]); argTypeStruct = (Form_pg_type) GETSTRUCT(argTypeTup); - /* check argument type is OK, set up I/O function info */ - switch (argTypeStruct->typtype) - { - case TYPTYPE_PSEUDO: - /* Disallow pseudotype argument */ - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("PL/Python functions cannot accept type %s", - format_type_be(types[i])))); - break; - case TYPTYPE_COMPOSITE: - /* we'll set IO funcs at first call */ - proc->args[pos].is_rowtype = 2; - break; - default: - PLy_input_datum_func(&(proc->args[pos]), - types[i], - argTypeTup, - proc->langid, - proc->trftypes); - break; - } + /* disallow pseudotype arguments */ + if (argTypeStruct->typtype == TYPTYPE_PSEUDO) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("PL/Python functions cannot accept type %s", + format_type_be(types[i])))); + + /* set up I/O function info */ + PLy_input_setup_func(&proc->args[pos], proc->mcxt, + types[i], -1, /* typmod not known */ + proc); /* get argument name */ proc->argnames[pos] = names ? pstrdup(names[i]) : NULL; @@ -424,54 +413,12 @@ PLy_procedure_delete(PLyProcedure *proc) MemoryContextDelete(proc->mcxt); } -/* - * Check if our cached information about a datatype is still valid - */ -static bool -PLy_procedure_argument_valid(PLyTypeInfo *arg) -{ - HeapTuple relTup; - bool valid; - - /* Nothing to cache unless type is composite */ - if (arg->is_rowtype != 1) - return true; - - /* - * Zero typ_relid means that we got called on an output argument of a - * function returning an unnamed record type; the info for it can't - * change. 
- */ - if (!OidIsValid(arg->typ_relid)) - return true; - - /* Else we should have some cached data */ - Assert(TransactionIdIsValid(arg->typrel_xmin)); - Assert(ItemPointerIsValid(&arg->typrel_tid)); - - /* Get the pg_class tuple for the data type */ - relTup = SearchSysCache1(RELOID, ObjectIdGetDatum(arg->typ_relid)); - if (!HeapTupleIsValid(relTup)) - elog(ERROR, "cache lookup failed for relation %u", arg->typ_relid); - - /* If it has changed, the cached data is not valid */ - valid = (arg->typrel_xmin == HeapTupleHeaderGetRawXmin(relTup->t_data) && - ItemPointerEquals(&arg->typrel_tid, &relTup->t_self)); - - ReleaseSysCache(relTup); - - return valid; -} - /* * Decide whether a cached PLyProcedure struct is still valid */ static bool PLy_procedure_valid(PLyProcedure *proc, HeapTuple procTup) { - int i; - bool valid; - if (proc == NULL) return false; @@ -480,22 +427,7 @@ PLy_procedure_valid(PLyProcedure *proc, HeapTuple procTup) ItemPointerEquals(&proc->fn_tid, &procTup->t_self))) return false; - /* Else check the input argument datatypes */ - valid = true; - for (i = 0; i < proc->nargs; i++) - { - valid = PLy_procedure_argument_valid(&proc->args[i]); - - /* Short-circuit on first changed argument */ - if (!valid) - break; - } - - /* if the output type is composite, it might have changed */ - if (valid) - valid = PLy_procedure_argument_valid(&proc->result); - - return valid; + return true; } static char * diff --git a/src/pl/plpython/plpy_procedure.h b/src/pl/plpython/plpy_procedure.h index d05944fc39..cd1b87fdc3 100644 --- a/src/pl/plpython/plpy_procedure.h +++ b/src/pl/plpython/plpy_procedure.h @@ -31,12 +31,12 @@ typedef struct PLyProcedure ItemPointerData fn_tid; bool fn_readonly; bool is_setof; /* true, if procedure returns result set */ - PLyTypeInfo result; /* also used to store info for trigger tuple - * type */ + PLyObToDatum result; /* Function result output conversion info */ + PLyDatumToOb result_in; /* For converting input tuples in a trigger */ char *src; /* textual procedure code, after mangling */ char **argnames; /* Argument names */ - PLyTypeInfo args[FUNC_MAX_ARGS]; - int nargs; + PLyDatumToOb *args; /* Argument input conversion info */ + int nargs; /* Number of elements in above arrays */ Oid langid; /* OID of plpython pg_language entry */ List *trftypes; /* OID list of transform types */ PyObject *code; /* compiled procedure code */ diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c index 955769c5e3..69eb6b39f6 100644 --- a/src/pl/plpython/plpy_spi.c +++ b/src/pl/plpython/plpy_spi.c @@ -46,6 +46,7 @@ PLy_spi_prepare(PyObject *self, PyObject *args) PyObject *list = NULL; PyObject *volatile optr = NULL; char *query; + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); volatile MemoryContext oldcontext; volatile ResourceOwner oldowner; volatile int nargs; @@ -71,9 +72,9 @@ PLy_spi_prepare(PyObject *self, PyObject *args) nargs = list ? PySequence_Length(list) : 0; plan->nargs = nargs; - plan->types = nargs ? palloc(sizeof(Oid) * nargs) : NULL; - plan->values = nargs ? palloc(sizeof(Datum) * nargs) : NULL; - plan->args = nargs ? palloc(sizeof(PLyTypeInfo) * nargs) : NULL; + plan->types = nargs ? palloc0(sizeof(Oid) * nargs) : NULL; + plan->values = nargs ? palloc0(sizeof(Datum) * nargs) : NULL; + plan->args = nargs ? 
palloc0(sizeof(PLyObToDatum) * nargs) : NULL; MemoryContextSwitchTo(oldcontext); @@ -85,22 +86,10 @@ PLy_spi_prepare(PyObject *self, PyObject *args) PG_TRY(); { int i; - PLyExecutionContext *exec_ctx = PLy_current_execution_context(); - - /* - * the other loop might throw an exception, if PLyTypeInfo member - * isn't properly initialized the Py_DECREF(plan) will go boom - */ - for (i = 0; i < nargs; i++) - { - PLy_typeinfo_init(&plan->args[i], plan->mcxt); - plan->values[i] = PointerGetDatum(NULL); - } for (i = 0; i < nargs; i++) { char *sptr; - HeapTuple typeTup; Oid typeId; int32 typmod; @@ -124,11 +113,6 @@ PLy_spi_prepare(PyObject *self, PyObject *args) parseTypeString(sptr, &typeId, &typmod, false); - typeTup = SearchSysCache1(TYPEOID, - ObjectIdGetDatum(typeId)); - if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", typeId); - Py_DECREF(optr); /* @@ -138,8 +122,9 @@ PLy_spi_prepare(PyObject *self, PyObject *args) optr = NULL; plan->types[i] = typeId; - PLy_output_datum_func(&plan->args[i], typeTup, exec_ctx->curr_proc->langid, exec_ctx->curr_proc->trftypes); - ReleaseSysCache(typeTup); + PLy_output_setup_func(&plan->args[i], plan->mcxt, + typeId, typmod, + exec_ctx->curr_proc); } pg_verifymbstr(query, strlen(query), false); @@ -253,39 +238,24 @@ PLy_spi_execute_plan(PyObject *ob, PyObject *list, long limit) for (j = 0; j < nargs; j++) { + PLyObToDatum *arg = &plan->args[j]; PyObject *elem; elem = PySequence_GetItem(list, j); - if (elem != Py_None) + PG_TRY(); { - PG_TRY(); - { - plan->values[j] = - plan->args[j].out.d.func(&(plan->args[j].out.d), - -1, - elem, - false); - } - PG_CATCH(); - { - Py_DECREF(elem); - PG_RE_THROW(); - } - PG_END_TRY(); + bool isnull; - Py_DECREF(elem); - nulls[j] = ' '; + plan->values[j] = PLy_output_convert(arg, elem, &isnull); + nulls[j] = isnull ? 
'n' : ' '; } - else + PG_CATCH(); { Py_DECREF(elem); - plan->values[j] = - InputFunctionCall(&(plan->args[j].out.d.typfunc), - NULL, - plan->args[j].out.d.typioparam, - -1); - nulls[j] = 'n'; + PG_RE_THROW(); } + PG_END_TRY(); + Py_DECREF(elem); } rv = SPI_execute_plan(plan->plan, plan->values, nulls, @@ -306,7 +276,7 @@ PLy_spi_execute_plan(PyObject *ob, PyObject *list, long limit) */ for (k = 0; k < nargs; k++) { - if (!plan->args[k].out.d.typbyval && + if (!plan->args[k].typbyval && (plan->values[k] != PointerGetDatum(NULL))) { pfree(DatumGetPointer(plan->values[k])); @@ -321,7 +291,7 @@ PLy_spi_execute_plan(PyObject *ob, PyObject *list, long limit) for (i = 0; i < nargs; i++) { - if (!plan->args[i].out.d.typbyval && + if (!plan->args[i].typbyval && (plan->values[i] != PointerGetDatum(NULL))) { pfree(DatumGetPointer(plan->values[i])); @@ -386,6 +356,7 @@ static PyObject * PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) { PLyResultObject *result; + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); volatile MemoryContext oldcontext; result = (PLyResultObject *) PLy_result_new(); @@ -401,7 +372,7 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) } else if (status > 0 && tuptable != NULL) { - PLyTypeInfo args; + PLyDatumToOb ininfo; MemoryContext cxt; Py_DECREF(result->nrows); @@ -412,7 +383,10 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) cxt = AllocSetContextCreate(CurrentMemoryContext, "PL/Python temp context", ALLOCSET_DEFAULT_SIZES); - PLy_typeinfo_init(&args, cxt); + + /* Initialize for converting result tuples to Python */ + PLy_input_setup_func(&ininfo, cxt, RECORDOID, -1, + exec_ctx->curr_proc); oldcontext = CurrentMemoryContext; PG_TRY(); @@ -436,12 +410,14 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) Py_DECREF(result->rows); result->rows = PyList_New(rows); - PLy_input_tuple_funcs(&args, tuptable->tupdesc); + PLy_input_setup_tuple(&ininfo, tuptable->tupdesc, + exec_ctx->curr_proc); + for (i = 0; i < rows; i++) { - PyObject *row = PLyDict_FromTuple(&args, - tuptable->vals[i], - tuptable->tupdesc); + PyObject *row = PLy_input_from_tuple(&ininfo, + tuptable->vals[i], + tuptable->tupdesc); PyList_SetItem(result->rows, i, row); } diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c index e4af8cc9ef..ce1527072e 100644 --- a/src/pl/plpython/plpy_typeio.c +++ b/src/pl/plpython/plpy_typeio.c @@ -7,19 +7,15 @@ #include "postgres.h" #include "access/htup_details.h" -#include "access/transam.h" #include "catalog/pg_type.h" #include "funcapi.h" #include "mb/pg_wchar.h" -#include "parser/parse_type.h" +#include "miscadmin.h" #include "utils/array.h" #include "utils/builtins.h" #include "utils/fmgroids.h" #include "utils/lsyscache.h" #include "utils/memutils.h" -#include "utils/numeric.h" -#include "utils/syscache.h" -#include "utils/typcache.h" #include "plpython.h" @@ -29,10 +25,6 @@ #include "plpy_main.h" -/* I/O function caching */ -static void PLy_input_datum_func2(PLyDatumToOb *arg, MemoryContext arg_mcxt, Oid typeOid, HeapTuple typeTup, Oid langid, List *trftypes); -static void PLy_output_datum_func2(PLyObToDatum *arg, MemoryContext arg_mcxt, HeapTuple typeTup, Oid langid, List *trftypes); - /* conversion from Datums to Python objects */ static PyObject *PLyBool_FromBool(PLyDatumToOb *arg, Datum d); static PyObject *PLyFloat_FromFloat4(PLyDatumToOb *arg, Datum d); @@ -43,361 +35,365 @@ static PyObject 
*PLyInt_FromInt32(PLyDatumToOb *arg, Datum d); static PyObject *PLyLong_FromInt64(PLyDatumToOb *arg, Datum d); static PyObject *PLyLong_FromOid(PLyDatumToOb *arg, Datum d); static PyObject *PLyBytes_FromBytea(PLyDatumToOb *arg, Datum d); -static PyObject *PLyString_FromDatum(PLyDatumToOb *arg, Datum d); +static PyObject *PLyString_FromScalar(PLyDatumToOb *arg, Datum d); static PyObject *PLyObject_FromTransform(PLyDatumToOb *arg, Datum d); static PyObject *PLyList_FromArray(PLyDatumToOb *arg, Datum d); static PyObject *PLyList_FromArray_recurse(PLyDatumToOb *elm, int *dims, int ndim, int dim, char **dataptr_p, bits8 **bitmap_p, int *bitmask_p); +static PyObject *PLyDict_FromComposite(PLyDatumToOb *arg, Datum d); +static PyObject *PLyDict_FromTuple(PLyDatumToOb *arg, HeapTuple tuple, TupleDesc desc); /* conversion from Python objects to Datums */ -static Datum PLyObject_ToBool(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray); -static Datum PLyObject_ToBytea(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray); -static Datum PLyObject_ToComposite(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray); -static Datum PLyObject_ToDatum(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray); -static Datum PLyObject_ToTransform(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray); -static Datum PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray); +static Datum PLyObject_ToBool(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray); +static Datum PLyObject_ToBytea(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray); +static Datum PLyObject_ToComposite(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray); +static Datum PLyObject_ToScalar(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray); +static Datum PLyObject_ToDomain(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray); +static Datum PLyObject_ToTransform(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray); +static Datum PLySequence_ToArray(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray); static void PLySequence_ToArray_recurse(PLyObToDatum *elm, PyObject *list, int *dims, int ndim, int dim, Datum *elems, bool *nulls, int *currelem); -/* conversion from Python objects to composite Datums (used by triggers and SRFs) */ -static Datum PLyString_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *string, bool inarray); -static Datum PLyMapping_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *mapping); -static Datum PLySequence_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *sequence); -static Datum PLyGenericObject_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *object, bool inarray); +/* conversion from Python objects to composite Datums */ +static Datum PLyString_ToComposite(PLyObToDatum *arg, PyObject *string, bool inarray); +static Datum PLyMapping_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *mapping); +static Datum PLySequence_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *sequence); +static Datum PLyGenericObject_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *object, bool inarray); -void -PLy_typeinfo_init(PLyTypeInfo *arg, MemoryContext mcxt) -{ - arg->is_rowtype = -1; - arg->in.r.natts = arg->out.r.natts = 0; - arg->in.r.atts = NULL; - arg->out.r.atts = NULL; - arg->typ_relid = InvalidOid; - arg->typrel_xmin = InvalidTransactionId; - ItemPointerSetInvalid(&arg->typrel_tid); - arg->mcxt = mcxt; -} /* * 
Conversion functions. Remember output from Python is input to * PostgreSQL, and vice versa. */ -void -PLy_input_datum_func(PLyTypeInfo *arg, Oid typeOid, HeapTuple typeTup, Oid langid, List *trftypes) + +/* + * Perform input conversion, given correctly-set-up state information. + * + * This is the outer-level entry point for any input conversion. Internally, + * the conversion functions recurse directly to each other. + */ +PyObject * +PLy_input_convert(PLyDatumToOb *arg, Datum val) { - if (arg->is_rowtype > 0) - elog(ERROR, "PLyTypeInfo struct is initialized for Tuple"); - arg->is_rowtype = 0; - PLy_input_datum_func2(&(arg->in.d), arg->mcxt, typeOid, typeTup, langid, trftypes); + PyObject *result; + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); + MemoryContext scratch_context = PLy_get_scratch_context(exec_ctx); + MemoryContext oldcontext; + + /* + * Do the work in the scratch context to avoid leaking memory from the + * datatype output function calls. (The individual PLyDatumToObFunc + * functions can't reset the scratch context, because they recurse and an + * inner one might clobber data an outer one still needs. So we do it + * once at the outermost recursion level.) + * + * We reset the scratch context before, not after, each conversion cycle. + * This way we aren't on the hook to release a Python refcount on the + * result object in case MemoryContextReset throws an error. + */ + MemoryContextReset(scratch_context); + + oldcontext = MemoryContextSwitchTo(scratch_context); + + result = arg->func(arg, val); + + MemoryContextSwitchTo(oldcontext); + + return result; } -void -PLy_output_datum_func(PLyTypeInfo *arg, HeapTuple typeTup, Oid langid, List *trftypes) +/* + * Perform output conversion, given correctly-set-up state information. + * + * This is the outer-level entry point for any output conversion. Internally, + * the conversion functions recurse directly to each other. + * + * The result, as well as any cruft generated along the way, are in the + * current memory context. Caller is responsible for cleanup. + */ +Datum +PLy_output_convert(PLyObToDatum *arg, PyObject *val, bool *isnull) { - if (arg->is_rowtype > 0) - elog(ERROR, "PLyTypeInfo struct is initialized for a Tuple"); - arg->is_rowtype = 0; - PLy_output_datum_func2(&(arg->out.d), arg->mcxt, typeTup, langid, trftypes); + /* at outer level, we are not considering an array element */ + return arg->func(arg, val, isnull, false); } -void -PLy_input_tuple_funcs(PLyTypeInfo *arg, TupleDesc desc) +/* + * Transform a tuple into a Python dict object. + * + * Note: the tupdesc must match the one used to set up *arg. We could + * insist that this function lookup the tupdesc from what is in *arg, + * but in practice all callers have the right tupdesc available. + */ +PyObject * +PLy_input_from_tuple(PLyDatumToOb *arg, HeapTuple tuple, TupleDesc desc) { - int i; + PyObject *dict; PLyExecutionContext *exec_ctx = PLy_current_execution_context(); - MemoryContext oldcxt; + MemoryContext scratch_context = PLy_get_scratch_context(exec_ctx); + MemoryContext oldcontext; - oldcxt = MemoryContextSwitchTo(arg->mcxt); + /* + * As in PLy_input_convert, do the work in the scratch context. 
+ */ + MemoryContextReset(scratch_context); - if (arg->is_rowtype == 0) - elog(ERROR, "PLyTypeInfo struct is initialized for a Datum"); - arg->is_rowtype = 1; + oldcontext = MemoryContextSwitchTo(scratch_context); - if (arg->in.r.natts != desc->natts) - { - if (arg->in.r.atts) - pfree(arg->in.r.atts); - arg->in.r.natts = desc->natts; - arg->in.r.atts = palloc0(desc->natts * sizeof(PLyDatumToOb)); - } + dict = PLyDict_FromTuple(arg, tuple, desc); - /* Can this be an unnamed tuple? If not, then an Assert would be enough */ - if (desc->tdtypmod != -1) - elog(ERROR, "received unnamed record type as input"); + MemoryContextSwitchTo(oldcontext); - Assert(OidIsValid(desc->tdtypeid)); + return dict; +} - /* - * RECORDOID means we got called to create input functions for a tuple - * fetched by plpy.execute or for an anonymous record type - */ - if (desc->tdtypeid != RECORDOID) - { - HeapTuple relTup; +/* + * Initialize, or re-initialize, per-column input info for a composite type. + * + * This is separate from PLy_input_setup_func() because in cases involving + * anonymous record types, we need to be passed the tupdesc explicitly. + * It's caller's responsibility that the tupdesc has adequate lifespan + * in such cases. If the tupdesc is for a named composite or registered + * record type, it does not need to be long-lived. + */ +void +PLy_input_setup_tuple(PLyDatumToOb *arg, TupleDesc desc, PLyProcedure *proc) +{ + int i; - /* Get the pg_class tuple corresponding to the type of the input */ - arg->typ_relid = typeidTypeRelid(desc->tdtypeid); - relTup = SearchSysCache1(RELOID, ObjectIdGetDatum(arg->typ_relid)); - if (!HeapTupleIsValid(relTup)) - elog(ERROR, "cache lookup failed for relation %u", arg->typ_relid); + /* We should be working on a previously-set-up struct */ + Assert(arg->func == PLyDict_FromComposite); - /* Remember XMIN and TID for later validation if cache is still OK */ - arg->typrel_xmin = HeapTupleHeaderGetRawXmin(relTup->t_data); - arg->typrel_tid = relTup->t_self; + /* Save pointer to tupdesc, but only if this is an anonymous record type */ + if (arg->typoid == RECORDOID && arg->typmod < 0) + arg->u.tuple.recdesc = desc; - ReleaseSysCache(relTup); + /* (Re)allocate atts array as needed */ + if (arg->u.tuple.natts != desc->natts) + { + if (arg->u.tuple.atts) + pfree(arg->u.tuple.atts); + arg->u.tuple.natts = desc->natts; + arg->u.tuple.atts = (PLyDatumToOb *) + MemoryContextAllocZero(arg->mcxt, + desc->natts * sizeof(PLyDatumToOb)); } + /* Fill the atts entries, except for dropped columns */ for (i = 0; i < desc->natts; i++) { - HeapTuple typeTup; Form_pg_attribute attr = TupleDescAttr(desc, i); + PLyDatumToOb *att = &arg->u.tuple.atts[i]; if (attr->attisdropped) continue; - if (arg->in.r.atts[i].typoid == attr->atttypid) + if (att->typoid == attr->atttypid && att->typmod == attr->atttypmod) continue; /* already set up this entry */ - typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(attr->atttypid)); - if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", - attr->atttypid); - - PLy_input_datum_func2(&(arg->in.r.atts[i]), arg->mcxt, - attr->atttypid, - typeTup, - exec_ctx->curr_proc->langid, - exec_ctx->curr_proc->trftypes); - - ReleaseSysCache(typeTup); + PLy_input_setup_func(att, arg->mcxt, + attr->atttypid, attr->atttypmod, + proc); } - - MemoryContextSwitchTo(oldcxt); } +/* + * Initialize, or re-initialize, per-column output info for a composite type. 
+ * + * This is separate from PLy_output_setup_func() because in cases involving + * anonymous record types, we need to be passed the tupdesc explicitly. + * It's caller's responsibility that the tupdesc has adequate lifespan + * in such cases. If the tupdesc is for a named composite or registered + * record type, it does not need to be long-lived. + */ void -PLy_output_tuple_funcs(PLyTypeInfo *arg, TupleDesc desc) +PLy_output_setup_tuple(PLyObToDatum *arg, TupleDesc desc, PLyProcedure *proc) { int i; - PLyExecutionContext *exec_ctx = PLy_current_execution_context(); - MemoryContext oldcxt; - - oldcxt = MemoryContextSwitchTo(arg->mcxt); - - if (arg->is_rowtype == 0) - elog(ERROR, "PLyTypeInfo struct is initialized for a Datum"); - arg->is_rowtype = 1; - if (arg->out.r.natts != desc->natts) - { - if (arg->out.r.atts) - pfree(arg->out.r.atts); - arg->out.r.natts = desc->natts; - arg->out.r.atts = palloc0(desc->natts * sizeof(PLyObToDatum)); - } + /* We should be working on a previously-set-up struct */ + Assert(arg->func == PLyObject_ToComposite); - Assert(OidIsValid(desc->tdtypeid)); + /* Save pointer to tupdesc, but only if this is an anonymous record type */ + if (arg->typoid == RECORDOID && arg->typmod < 0) + arg->u.tuple.recdesc = desc; - /* - * RECORDOID means we got called to create output functions for an - * anonymous record type - */ - if (desc->tdtypeid != RECORDOID) + /* (Re)allocate atts array as needed */ + if (arg->u.tuple.natts != desc->natts) { - HeapTuple relTup; - - /* Get the pg_class tuple corresponding to the type of the output */ - arg->typ_relid = typeidTypeRelid(desc->tdtypeid); - relTup = SearchSysCache1(RELOID, ObjectIdGetDatum(arg->typ_relid)); - if (!HeapTupleIsValid(relTup)) - elog(ERROR, "cache lookup failed for relation %u", arg->typ_relid); - - /* Remember XMIN and TID for later validation if cache is still OK */ - arg->typrel_xmin = HeapTupleHeaderGetRawXmin(relTup->t_data); - arg->typrel_tid = relTup->t_self; - - ReleaseSysCache(relTup); + if (arg->u.tuple.atts) + pfree(arg->u.tuple.atts); + arg->u.tuple.natts = desc->natts; + arg->u.tuple.atts = (PLyObToDatum *) + MemoryContextAllocZero(arg->mcxt, + desc->natts * sizeof(PLyObToDatum)); } + /* Fill the atts entries, except for dropped columns */ for (i = 0; i < desc->natts; i++) { - HeapTuple typeTup; Form_pg_attribute attr = TupleDescAttr(desc, i); + PLyObToDatum *att = &arg->u.tuple.atts[i]; if (attr->attisdropped) continue; - if (arg->out.r.atts[i].typoid == attr->atttypid) + if (att->typoid == attr->atttypid && att->typmod == attr->atttypmod) continue; /* already set up this entry */ - typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(attr->atttypid)); - if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", - attr->atttypid); - - PLy_output_datum_func2(&(arg->out.r.atts[i]), arg->mcxt, typeTup, - exec_ctx->curr_proc->langid, - exec_ctx->curr_proc->trftypes); - - ReleaseSysCache(typeTup); + PLy_output_setup_func(att, arg->mcxt, + attr->atttypid, attr->atttypmod, + proc); } - - MemoryContextSwitchTo(oldcxt); } +/* + * Set up output info for a PL/Python function returning record. + * + * Note: the given tupdesc is not necessarily long-lived. 
+ */ void -PLy_output_record_funcs(PLyTypeInfo *arg, TupleDesc desc) +PLy_output_setup_record(PLyObToDatum *arg, TupleDesc desc, PLyProcedure *proc) { + /* Makes no sense unless RECORD */ + Assert(arg->typoid == RECORDOID); + Assert(desc->tdtypeid == RECORDOID); + /* - * If the output record functions are already set, we just have to check - * if the record descriptor has not changed + * Bless the record type if not already done. We'd have to do this anyway + * to return a tuple, so we might as well force the issue so we can use + * the known-record-type code path. */ - if ((arg->is_rowtype == 1) && - (arg->out.d.typmod != -1) && - (arg->out.d.typmod == desc->tdtypmod)) - return; - - /* bless the record to make it known to the typcache lookup code */ BlessTupleDesc(desc); - /* save the freshly generated typmod */ - arg->out.d.typmod = desc->tdtypmod; - /* proceed with normal I/O function caching */ - PLy_output_tuple_funcs(arg, desc); /* - * it should change is_rowtype to 1, so we won't go through this again - * unless the output record description changes + * Update arg->typmod, and clear the recdesc link if it's changed. The + * next call of PLyObject_ToComposite will look up a long-lived tupdesc + * for the record type. */ - Assert(arg->is_rowtype == 1); + arg->typmod = desc->tdtypmod; + if (arg->u.tuple.recdesc && + arg->u.tuple.recdesc->tdtypmod != arg->typmod) + arg->u.tuple.recdesc = NULL; + + /* Update derived data if necessary */ + PLy_output_setup_tuple(arg, desc, proc); } /* - * Transform a tuple into a Python dict object. + * Recursively initialize the PLyObToDatum structure(s) needed to construct + * a SQL value of the specified typeOid/typmod from a Python value. + * (But note that at this point we may have RECORDOID/-1, ie, an indeterminate + * record type.) + * proc is used to look up transform functions. */ -PyObject * -PLyDict_FromTuple(PLyTypeInfo *info, HeapTuple tuple, TupleDesc desc) +void +PLy_output_setup_func(PLyObToDatum *arg, MemoryContext arg_mcxt, + Oid typeOid, int32 typmod, + PLyProcedure *proc) { - PyObject *volatile dict; - PLyExecutionContext *exec_ctx = PLy_current_execution_context(); - MemoryContext scratch_context = PLy_get_scratch_context(exec_ctx); - MemoryContext oldcontext = CurrentMemoryContext; + TypeCacheEntry *typentry; + char typtype; + Oid trfuncid; + Oid typinput; - if (info->is_rowtype != 1) - elog(ERROR, "PLyTypeInfo structure describes a datum"); + /* Since this is recursive, it could theoretically be driven to overflow */ + check_stack_depth(); - dict = PyDict_New(); - if (dict == NULL) - PLy_elog(ERROR, "could not create new dictionary"); + arg->typoid = typeOid; + arg->typmod = typmod; + arg->mcxt = arg_mcxt; - PG_TRY(); + /* + * Fetch typcache entry for the target type, asking for whatever info + * we'll need later. RECORD is a special case: just treat it as composite + * without bothering with the typcache entry. + */ + if (typeOid != RECORDOID) { - int i; - - /* - * Do the work in the scratch context to avoid leaking memory from the - * datatype output function calls. 
- */ - MemoryContextSwitchTo(scratch_context); - for (i = 0; i < info->in.r.natts; i++) - { - char *key; - Datum vattr; - bool is_null; - PyObject *value; - Form_pg_attribute attr = TupleDescAttr(desc, i); - - if (attr->attisdropped) - continue; - - key = NameStr(attr->attname); - vattr = heap_getattr(tuple, (i + 1), desc, &is_null); - - if (is_null || info->in.r.atts[i].func == NULL) - PyDict_SetItemString(dict, key, Py_None); - else - { - value = (info->in.r.atts[i].func) (&info->in.r.atts[i], vattr); - PyDict_SetItemString(dict, key, value); - Py_DECREF(value); - } - } - MemoryContextSwitchTo(oldcontext); - MemoryContextReset(scratch_context); + typentry = lookup_type_cache(typeOid, TYPECACHE_DOMAIN_BASE_INFO); + typtype = typentry->typtype; + arg->typbyval = typentry->typbyval; + arg->typlen = typentry->typlen; + arg->typalign = typentry->typalign; } - PG_CATCH(); + else { - MemoryContextSwitchTo(oldcontext); - Py_DECREF(dict); - PG_RE_THROW(); + typentry = NULL; + typtype = TYPTYPE_COMPOSITE; + /* hard-wired knowledge about type RECORD: */ + arg->typbyval = false; + arg->typlen = -1; + arg->typalign = 'd'; } - PG_END_TRY(); - - return dict; -} - -/* - * Convert a Python object to a composite Datum, using all supported - * conversion methods: composite as a string, as a sequence, as a mapping or - * as an object that has __getattr__ support. - */ -Datum -PLyObject_ToCompositeDatum(PLyTypeInfo *info, TupleDesc desc, PyObject *plrv, bool inarray) -{ - Datum datum; - - if (PyString_Check(plrv) || PyUnicode_Check(plrv)) - datum = PLyString_ToComposite(info, desc, plrv, inarray); - else if (PySequence_Check(plrv)) - /* composite type as sequence (tuple, list etc) */ - datum = PLySequence_ToComposite(info, desc, plrv); - else if (PyMapping_Check(plrv)) - /* composite type as mapping (currently only dict) */ - datum = PLyMapping_ToComposite(info, desc, plrv); - else - /* returned as smth, must provide method __getattr__(name) */ - datum = PLyGenericObject_ToComposite(info, desc, plrv, inarray); - - return datum; -} - -static void -PLy_output_datum_func2(PLyObToDatum *arg, MemoryContext arg_mcxt, HeapTuple typeTup, Oid langid, List *trftypes) -{ - Form_pg_type typeStruct = (Form_pg_type) GETSTRUCT(typeTup); - Oid element_type; - Oid base_type; - Oid funcid; - MemoryContext oldcxt; - - oldcxt = MemoryContextSwitchTo(arg_mcxt); - - fmgr_info_cxt(typeStruct->typinput, &arg->typfunc, arg_mcxt); - arg->typoid = HeapTupleGetOid(typeTup); - arg->typmod = -1; - arg->typioparam = getTypeIOParam(typeTup); - arg->typbyval = typeStruct->typbyval; - - element_type = get_base_element_type(arg->typoid); - base_type = getBaseType(element_type ? element_type : arg->typoid); /* - * Select a conversion function to convert Python objects to PostgreSQL - * datums. + * Choose conversion method. Note that transform functions are checked + * for composite and scalar types, but not for arrays or domains. This is + * somewhat historical, but we'd have a problem allowing them on domains, + * since we drill down through all levels of a domain nest without looking + * at the intermediate levels at all. 
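+ * (The conversion machinery itself does visit every level, though: for a
+ * domain over another domain, PLyObject_ToDomain recurses through its
+ * u.domain.base converter, so domain_check() runs once per nesting level.)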
*/ - - if ((funcid = get_transform_tosql(base_type, langid, trftypes))) + if (typtype == TYPTYPE_DOMAIN) + { + /* Domain */ + arg->func = PLyObject_ToDomain; + arg->u.domain.domain_info = NULL; + /* Recursively set up conversion info for the element type */ + arg->u.domain.base = (PLyObToDatum *) + MemoryContextAllocZero(arg_mcxt, sizeof(PLyObToDatum)); + PLy_output_setup_func(arg->u.domain.base, arg_mcxt, + typentry->domainBaseType, + typentry->domainBaseTypmod, + proc); + } + else if (typentry && + OidIsValid(typentry->typelem) && typentry->typlen == -1) + { + /* Standard varlena array (cf. get_element_type) */ + arg->func = PLySequence_ToArray; + /* Get base type OID to insert into constructed array */ + /* (note this might not be the same as the immediate child type) */ + arg->u.array.elmbasetype = getBaseType(typentry->typelem); + /* Recursively set up conversion info for the element type */ + arg->u.array.elm = (PLyObToDatum *) + MemoryContextAllocZero(arg_mcxt, sizeof(PLyObToDatum)); + PLy_output_setup_func(arg->u.array.elm, arg_mcxt, + typentry->typelem, typmod, + proc); + } + else if ((trfuncid = get_transform_tosql(typeOid, + proc->langid, + proc->trftypes))) { arg->func = PLyObject_ToTransform; - fmgr_info_cxt(funcid, &arg->typtransform, arg_mcxt); + fmgr_info_cxt(trfuncid, &arg->u.transform.typtransform, arg_mcxt); } - else if (typeStruct->typtype == TYPTYPE_COMPOSITE) + else if (typtype == TYPTYPE_COMPOSITE) { + /* Named composite type, or RECORD */ arg->func = PLyObject_ToComposite; + /* We'll set up the per-field data later */ + arg->u.tuple.recdesc = NULL; + arg->u.tuple.typentry = typentry; + arg->u.tuple.tupdescseq = typentry ? typentry->tupDescSeqNo - 1 : 0; + arg->u.tuple.atts = NULL; + arg->u.tuple.natts = 0; + /* Mark this invalid till needed, too */ + arg->u.tuple.recinfunc.fn_oid = InvalidOid; } else - switch (base_type) + { + /* Scalar type, but we have a couple of special cases */ + switch (typeOid) { case BOOLOID: arg->func = PLyObject_ToBool; @@ -406,66 +402,111 @@ PLy_output_datum_func2(PLyObToDatum *arg, MemoryContext arg_mcxt, HeapTuple type arg->func = PLyObject_ToBytea; break; default: - arg->func = PLyObject_ToDatum; + arg->func = PLyObject_ToScalar; + getTypeInputInfo(typeOid, &typinput, &arg->u.scalar.typioparam); + fmgr_info_cxt(typinput, &arg->u.scalar.typfunc, arg_mcxt); break; } - - if (element_type) - { - char dummy_delim; - Oid funcid; - - if (type_is_rowtype(element_type)) - arg->func = PLyObject_ToComposite; - - arg->elm = palloc0(sizeof(*arg->elm)); - arg->elm->func = arg->func; - arg->elm->typtransform = arg->typtransform; - arg->func = PLySequence_ToArray; - - arg->elm->typoid = element_type; - arg->elm->typmod = -1; - get_type_io_data(element_type, IOFunc_input, - &arg->elm->typlen, &arg->elm->typbyval, &arg->elm->typalign, &dummy_delim, - &arg->elm->typioparam, &funcid); - fmgr_info_cxt(funcid, &arg->elm->typfunc, arg_mcxt); } - - MemoryContextSwitchTo(oldcxt); } -static void -PLy_input_datum_func2(PLyDatumToOb *arg, MemoryContext arg_mcxt, Oid typeOid, HeapTuple typeTup, Oid langid, List *trftypes) +/* + * Recursively initialize the PLyDatumToOb structure(s) needed to construct + * a Python value from a SQL value of the specified typeOid/typmod. + * (But note that at this point we may have RECORDOID/-1, ie, an indeterminate + * record type.) + * proc is used to look up transform functions. 
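+ *
+ * For example, for a domain over int4[], the domain case below simply
+ * recurses to the base type, leaving a PLyList_FromArray converter whose
+ * u.array.elm entry handles the int4 elements.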
+ */ +void +PLy_input_setup_func(PLyDatumToOb *arg, MemoryContext arg_mcxt, + Oid typeOid, int32 typmod, + PLyProcedure *proc) { - Form_pg_type typeStruct = (Form_pg_type) GETSTRUCT(typeTup); - Oid element_type; - Oid base_type; - Oid funcid; - MemoryContext oldcxt; - - oldcxt = MemoryContextSwitchTo(arg_mcxt); + TypeCacheEntry *typentry; + char typtype; + Oid trfuncid; + Oid typoutput; + bool typisvarlena; - /* Get the type's conversion information */ - fmgr_info_cxt(typeStruct->typoutput, &arg->typfunc, arg_mcxt); - arg->typoid = HeapTupleGetOid(typeTup); - arg->typmod = -1; - arg->typioparam = getTypeIOParam(typeTup); - arg->typbyval = typeStruct->typbyval; - arg->typlen = typeStruct->typlen; - arg->typalign = typeStruct->typalign; + /* Since this is recursive, it could theoretically be driven to overflow */ + check_stack_depth(); - /* Determine which kind of Python object we will convert to */ + arg->typoid = typeOid; + arg->typmod = typmod; + arg->mcxt = arg_mcxt; - element_type = get_base_element_type(typeOid); - base_type = getBaseType(element_type ? element_type : typeOid); + /* + * Fetch typcache entry for the target type, asking for whatever info + * we'll need later. RECORD is a special case: just treat it as composite + * without bothering with the typcache entry. + */ + if (typeOid != RECORDOID) + { + typentry = lookup_type_cache(typeOid, TYPECACHE_DOMAIN_BASE_INFO); + typtype = typentry->typtype; + arg->typbyval = typentry->typbyval; + arg->typlen = typentry->typlen; + arg->typalign = typentry->typalign; + } + else + { + typentry = NULL; + typtype = TYPTYPE_COMPOSITE; + /* hard-wired knowledge about type RECORD: */ + arg->typbyval = false; + arg->typlen = -1; + arg->typalign = 'd'; + } - if ((funcid = get_transform_fromsql(base_type, langid, trftypes))) + /* + * Choose conversion method. Note that transform functions are checked + * for composite and scalar types, but not for arrays or domains. This is + * somewhat historical, but we'd have a problem allowing them on domains, + * since we drill down through all levels of a domain nest without looking + * at the intermediate levels at all. + */ + if (typtype == TYPTYPE_DOMAIN) + { + /* Domain --- we don't care, just recurse down to the base type */ + PLy_input_setup_func(arg, arg_mcxt, + typentry->domainBaseType, + typentry->domainBaseTypmod, + proc); + } + else if (typentry && + OidIsValid(typentry->typelem) && typentry->typlen == -1) + { + /* Standard varlena array (cf. get_element_type) */ + arg->func = PLyList_FromArray; + /* Recursively set up conversion info for the element type */ + arg->u.array.elm = (PLyDatumToOb *) + MemoryContextAllocZero(arg_mcxt, sizeof(PLyDatumToOb)); + PLy_input_setup_func(arg->u.array.elm, arg_mcxt, + typentry->typelem, typmod, + proc); + } + else if ((trfuncid = get_transform_fromsql(typeOid, + proc->langid, + proc->trftypes))) { arg->func = PLyObject_FromTransform; - fmgr_info_cxt(funcid, &arg->typtransform, arg_mcxt); + fmgr_info_cxt(trfuncid, &arg->u.transform.typtransform, arg_mcxt); + } + else if (typtype == TYPTYPE_COMPOSITE) + { + /* Named composite type, or RECORD */ + arg->func = PLyDict_FromComposite; + /* We'll set up the per-field data later */ + arg->u.tuple.recdesc = NULL; + arg->u.tuple.typentry = typentry; + arg->u.tuple.tupdescseq = typentry ? 
typentry->tupDescSeqNo - 1 : 0; + arg->u.tuple.atts = NULL; + arg->u.tuple.natts = 0; } else - switch (base_type) + { + /* Scalar type, but we have a couple of special cases */ + switch (typeOid) { case BOOLOID: arg->func = PLyBool_FromBool; @@ -495,30 +536,19 @@ PLy_input_datum_func2(PLyDatumToOb *arg, MemoryContext arg_mcxt, Oid typeOid, He arg->func = PLyBytes_FromBytea; break; default: - arg->func = PLyString_FromDatum; + arg->func = PLyString_FromScalar; + getTypeOutputInfo(typeOid, &typoutput, &typisvarlena); + fmgr_info_cxt(typoutput, &arg->u.scalar.typfunc, arg_mcxt); break; } - - if (element_type) - { - char dummy_delim; - Oid funcid; - - arg->elm = palloc0(sizeof(*arg->elm)); - arg->elm->func = arg->func; - arg->elm->typtransform = arg->typtransform; - arg->func = PLyList_FromArray; - arg->elm->typoid = element_type; - arg->elm->typmod = -1; - get_type_io_data(element_type, IOFunc_output, - &arg->elm->typlen, &arg->elm->typbyval, &arg->elm->typalign, &dummy_delim, - &arg->elm->typioparam, &funcid); - fmgr_info_cxt(funcid, &arg->elm->typfunc, arg_mcxt); } - - MemoryContextSwitchTo(oldcxt); } + +/* + * Special-purpose input converters. + */ + static PyObject * PLyBool_FromBool(PLyDatumToOb *arg, Datum d) { @@ -611,27 +641,40 @@ PLyBytes_FromBytea(PLyDatumToOb *arg, Datum d) return PyBytes_FromStringAndSize(str, size); } + +/* + * Generic input conversion using a SQL type's output function. + */ static PyObject * -PLyString_FromDatum(PLyDatumToOb *arg, Datum d) +PLyString_FromScalar(PLyDatumToOb *arg, Datum d) { - char *x = OutputFunctionCall(&arg->typfunc, d); + char *x = OutputFunctionCall(&arg->u.scalar.typfunc, d); PyObject *r = PyString_FromString(x); pfree(x); return r; } +/* + * Convert using a from-SQL transform function. + */ static PyObject * PLyObject_FromTransform(PLyDatumToOb *arg, Datum d) { - return (PyObject *) DatumGetPointer(FunctionCall1(&arg->typtransform, d)); + Datum t; + + t = FunctionCall1(&arg->u.transform.typtransform, d); + return (PyObject *) DatumGetPointer(t); } +/* + * Convert a SQL array to a Python list. + */ static PyObject * PLyList_FromArray(PLyDatumToOb *arg, Datum d) { ArrayType *array = DatumGetArrayTypeP(d); - PLyDatumToOb *elm = arg->elm; + PLyDatumToOb *elm = arg->u.array.elm; int ndim; int *dims; char *dataptr; @@ -736,6 +779,94 @@ PLyList_FromArray_recurse(PLyDatumToOb *elm, int *dims, int ndim, int dim, return list; } +/* + * Convert a composite SQL value to a Python dict. + */ +static PyObject * +PLyDict_FromComposite(PLyDatumToOb *arg, Datum d) +{ + PyObject *dict; + HeapTupleHeader td; + Oid tupType; + int32 tupTypmod; + TupleDesc tupdesc; + HeapTupleData tmptup; + + td = DatumGetHeapTupleHeader(d); + /* Extract rowtype info and find a tupdesc */ + tupType = HeapTupleHeaderGetTypeId(td); + tupTypmod = HeapTupleHeaderGetTypMod(td); + tupdesc = lookup_rowtype_tupdesc(tupType, tupTypmod); + + /* Set up I/O funcs if not done yet */ + PLy_input_setup_tuple(arg, tupdesc, + PLy_current_execution_context()->curr_proc); + + /* Build a temporary HeapTuple control structure */ + tmptup.t_len = HeapTupleHeaderGetDatumLength(td); + tmptup.t_data = td; + + dict = PLyDict_FromTuple(arg, &tmptup, tupdesc); + + ReleaseTupleDesc(tupdesc); + + return dict; +} + +/* + * Transform a tuple into a Python dict object. 
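+ *
+ * For example, given a composite type created with
+ *		CREATE TYPE named_pair AS (i int, j int)
+ * a row (1, 2) becomes the Python dict {'i': 1, 'j': 2}: one key per
+ * non-dropped column, with NULL columns mapped to None.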
+ */ +static PyObject * +PLyDict_FromTuple(PLyDatumToOb *arg, HeapTuple tuple, TupleDesc desc) +{ + PyObject *volatile dict; + + /* Simple sanity check that desc matches */ + Assert(desc->natts == arg->u.tuple.natts); + + dict = PyDict_New(); + if (dict == NULL) + PLy_elog(ERROR, "could not create new dictionary"); + + PG_TRY(); + { + int i; + + for (i = 0; i < arg->u.tuple.natts; i++) + { + PLyDatumToOb *att = &arg->u.tuple.atts[i]; + Form_pg_attribute attr = TupleDescAttr(desc, i); + char *key; + Datum vattr; + bool is_null; + PyObject *value; + + if (attr->attisdropped) + continue; + + key = NameStr(attr->attname); + vattr = heap_getattr(tuple, (i + 1), desc, &is_null); + + if (is_null) + PyDict_SetItemString(dict, key, Py_None); + else + { + value = att->func(att, vattr); + PyDict_SetItemString(dict, key, value); + Py_DECREF(value); + } + } + } + PG_CATCH(); + { + Py_DECREF(dict); + PG_RE_THROW(); + } + PG_END_TRY(); + + return dict; +} + /* * Convert a Python object to a PostgreSQL bool datum. This can't go * through the generic conversion function, because Python attaches a @@ -743,17 +874,16 @@ PLyList_FromArray_recurse(PLyDatumToOb *elm, int *dims, int ndim, int dim, * type can parse. */ static Datum -PLyObject_ToBool(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) +PLyObject_ToBool(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray) { - Datum rv; - - Assert(plrv != Py_None); - rv = BoolGetDatum(PyObject_IsTrue(plrv)); - - if (get_typtype(arg->typoid) == TYPTYPE_DOMAIN) - domain_check(rv, false, arg->typoid, &arg->typfunc.fn_extra, arg->typfunc.fn_mcxt); - - return rv; + if (plrv == Py_None) + { + *isnull = true; + return (Datum) 0; + } + *isnull = false; + return BoolGetDatum(PyObject_IsTrue(plrv)); } /* @@ -762,12 +892,18 @@ PLyObject_ToBool(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) * with embedded nulls. And it's faster this way. */ static Datum -PLyObject_ToBytea(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) +PLyObject_ToBytea(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray) { PyObject *volatile plrv_so = NULL; Datum rv; - Assert(plrv != Py_None); + if (plrv == Py_None) + { + *isnull = true; + return (Datum) 0; + } + *isnull = false; plrv_so = PyObject_Bytes(plrv); if (!plrv_so) @@ -793,9 +929,6 @@ PLyObject_ToBytea(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) Py_XDECREF(plrv_so); - if (get_typtype(arg->typoid) == TYPTYPE_DOMAIN) - domain_check(rv, false, arg->typoid, &arg->typfunc.fn_extra, arg->typfunc.fn_mcxt); - return rv; } @@ -806,45 +939,87 @@ PLyObject_ToBytea(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) * for obtaining PostgreSQL tuples. */ static Datum -PLyObject_ToComposite(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) +PLyObject_ToComposite(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray) { Datum rv; - PLyTypeInfo info; TupleDesc desc; - MemoryContext cxt; - if (typmod != -1) - elog(ERROR, "received unnamed record type as input"); + if (plrv == Py_None) + { + *isnull = true; + return (Datum) 0; + } + *isnull = false; + + /* + * The string conversion case doesn't require a tupdesc, nor per-field + * conversion data, so just go for it if that's the case to use. 
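+ * (This is the path taken when, say, a PL/Python function whose result
+ * type is a two-column composite returns the string '(1,2)'; the string
+ * is handed to record_in via PLyString_ToComposite.)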
+ */ + if (PyString_Check(plrv) || PyUnicode_Check(plrv)) + return PLyString_ToComposite(arg, plrv, inarray); - /* Create a dummy PLyTypeInfo */ - cxt = AllocSetContextCreate(CurrentMemoryContext, - "PL/Python temp context", - ALLOCSET_DEFAULT_SIZES); - MemSet(&info, 0, sizeof(PLyTypeInfo)); - PLy_typeinfo_init(&info, cxt); - /* Mark it as needing output routines lookup */ - info.is_rowtype = 2; + /* + * If we're dealing with a named composite type, we must look up the + * tupdesc every time, to protect against possible changes to the type. + * RECORD types can't change between calls; but we must still be willing + * to set up the info the first time, if nobody did yet. + */ + if (arg->typoid != RECORDOID) + { + desc = lookup_rowtype_tupdesc(arg->typoid, arg->typmod); + /* We should have the descriptor of the type's typcache entry */ + Assert(desc == arg->u.tuple.typentry->tupDesc); + /* Detect change of descriptor, update cache if needed */ + if (arg->u.tuple.tupdescseq != arg->u.tuple.typentry->tupDescSeqNo) + { + PLy_output_setup_tuple(arg, desc, + PLy_current_execution_context()->curr_proc); + arg->u.tuple.tupdescseq = arg->u.tuple.typentry->tupDescSeqNo; + } + } + else + { + desc = arg->u.tuple.recdesc; + if (desc == NULL) + { + desc = lookup_rowtype_tupdesc(arg->typoid, arg->typmod); + arg->u.tuple.recdesc = desc; + } + else + { + /* Pin descriptor to match unpin below */ + PinTupleDesc(desc); + } + } - desc = lookup_rowtype_tupdesc(arg->typoid, arg->typmod); + /* Simple sanity check on our caching */ + Assert(desc->natts == arg->u.tuple.natts); /* - * This will set up the dummy PLyTypeInfo's output conversion routines, - * since we left is_rowtype as 2. A future optimization could be caching - * that info instead of looking it up every time a tuple is returned from - * the function. + * Convert, using the appropriate method depending on the type of the + * supplied Python object. */ - rv = PLyObject_ToCompositeDatum(&info, desc, plrv, inarray); + if (PySequence_Check(plrv)) + /* composite type as sequence (tuple, list etc) */ + rv = PLySequence_ToComposite(arg, desc, plrv); + else if (PyMapping_Check(plrv)) + /* composite type as mapping (currently only dict) */ + rv = PLyMapping_ToComposite(arg, desc, plrv); + else + /* returned as smth, must provide method __getattr__(name) */ + rv = PLyGenericObject_ToComposite(arg, desc, plrv, inarray); ReleaseTupleDesc(desc); - MemoryContextDelete(cxt); - return rv; } /* * Convert Python object to C string in server encoding. + * + * Note: this is exported for use by add-on transform modules. */ char * PLyObject_AsString(PyObject *plrv) @@ -901,74 +1076,71 @@ PLyObject_AsString(PyObject *plrv) /* - * Generic conversion function: Convert PyObject to cstring and + * Generic output conversion function: convert PyObject to cstring and * cstring into PostgreSQL type. */ static Datum -PLyObject_ToDatum(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) +PLyObject_ToScalar(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray) { char *str; - Assert(plrv != Py_None); + if (plrv == Py_None) + { + *isnull = true; + return (Datum) 0; + } + *isnull = false; str = PLyObject_AsString(plrv); - /* - * If we are parsing a composite type within an array, and the string - * isn't a valid record literal, there's a high chance that the function - * did something like: - * - * CREATE FUNCTION .. 
RETURNS comptype[] AS $$ return [['foo', 'bar']] $$ - * LANGUAGE plpython; - * - * Before PostgreSQL 10, that was interpreted as a single-dimensional - * array, containing record ('foo', 'bar'). PostgreSQL 10 added support - * for multi-dimensional arrays, and it is now interpreted as a - * two-dimensional array, containing two records, 'foo', and 'bar'. - * record_in() will throw an error, because "foo" is not a valid record - * literal. - * - * To make that less confusing to users who are upgrading from older - * versions, try to give a hint in the typical instances of that. If we - * are parsing an array of composite types, and we see a string literal - * that is not a valid record literal, give a hint. We only want to give - * the hint in the narrow case of a malformed string literal, not any - * error from record_in(), so check for that case here specifically. - * - * This check better match the one in record_in(), so that we don't forbid - * literals that are actually valid! - */ - if (inarray && arg->typfunc.fn_oid == F_RECORD_IN) - { - char *ptr = str; + return InputFunctionCall(&arg->u.scalar.typfunc, + str, + arg->u.scalar.typioparam, + arg->typmod); +} - /* Allow leading whitespace */ - while (*ptr && isspace((unsigned char) *ptr)) - ptr++; - if (*ptr++ != '(') - ereport(ERROR, - (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), - errmsg("malformed record literal: \"%s\"", str), - errdetail("Missing left parenthesis."), - errhint("To return a composite type in an array, return the composite type as a Python tuple, e.g., \"[('foo',)]\"."))); - } - return InputFunctionCall(&arg->typfunc, - str, - arg->typioparam, - typmod); +/* + * Convert to a domain type. + */ +static Datum +PLyObject_ToDomain(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray) +{ + Datum result; + PLyObToDatum *base = arg->u.domain.base; + + result = base->func(base, plrv, isnull, inarray); + domain_check(result, *isnull, arg->typoid, + &arg->u.domain.domain_info, arg->mcxt); + return result; } +/* + * Convert using a to-SQL transform function. + */ static Datum -PLyObject_ToTransform(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) +PLyObject_ToTransform(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray) { - return FunctionCall1(&arg->typtransform, PointerGetDatum(plrv)); + if (plrv == Py_None) + { + *isnull = true; + return (Datum) 0; + } + *isnull = false; + return FunctionCall1(&arg->u.transform.typtransform, PointerGetDatum(plrv)); } +/* + * Convert Python sequence to SQL array. + */ static Datum -PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarray) +PLySequence_ToArray(PLyObToDatum *arg, PyObject *plrv, + bool *isnull, bool inarray) { ArrayType *array; int i; @@ -979,11 +1151,15 @@ PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarra int dims[MAXDIM]; int lbs[MAXDIM]; int currelem; - Datum rv; PyObject *pyptr = plrv; PyObject *next; - Assert(plrv != Py_None); + if (plrv == Py_None) + { + *isnull = true; + return (Datum) 0; + } + *isnull = false; /* * Determine the number of dimensions, and their sizes. 
@@ -1049,7 +1225,7 @@ PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarra elems = palloc(sizeof(Datum) * len); nulls = palloc(sizeof(bool) * len); currelem = 0; - PLySequence_ToArray_recurse(arg->elm, plrv, + PLySequence_ToArray_recurse(arg->u.array.elm, plrv, dims, ndim, 0, elems, nulls, &currelem); @@ -1061,19 +1237,12 @@ PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv, bool inarra ndim, dims, lbs, - get_base_element_type(arg->typoid), - arg->elm->typlen, - arg->elm->typbyval, - arg->elm->typalign); + arg->u.array.elmbasetype, + arg->u.array.elm->typlen, + arg->u.array.elm->typbyval, + arg->u.array.elm->typalign); - /* - * If the result type is a domain of array, the resulting array must be - * checked. - */ - rv = PointerGetDatum(array); - if (get_typtype(arg->typoid) == TYPTYPE_DOMAIN) - domain_check(rv, false, arg->typoid, &arg->typfunc.fn_extra, arg->typfunc.fn_mcxt); - return rv; + return PointerGetDatum(array); } /* @@ -1110,16 +1279,7 @@ PLySequence_ToArray_recurse(PLyObToDatum *elm, PyObject *list, { PyObject *obj = PySequence_GetItem(list, i); - if (obj == Py_None) - { - nulls[*currelem] = true; - elems[*currelem] = (Datum) 0; - } - else - { - nulls[*currelem] = false; - elems[*currelem] = elm->func(elm, -1, obj, true); - } + elems[*currelem] = elm->func(elm, obj, &nulls[*currelem], true); Py_XDECREF(obj); (*currelem)++; } @@ -1127,42 +1287,72 @@ PLySequence_ToArray_recurse(PLyObToDatum *elm, PyObject *list, } +/* + * Convert a Python string to composite, using record_in. + */ static Datum -PLyString_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *string, bool inarray) +PLyString_ToComposite(PLyObToDatum *arg, PyObject *string, bool inarray) { - Datum result; - HeapTuple typeTup; - PLyTypeInfo locinfo; - PLyExecutionContext *exec_ctx = PLy_current_execution_context(); - MemoryContext cxt; - - /* Create a dummy PLyTypeInfo */ - cxt = AllocSetContextCreate(CurrentMemoryContext, - "PL/Python temp context", - ALLOCSET_DEFAULT_SIZES); - MemSet(&locinfo, 0, sizeof(PLyTypeInfo)); - PLy_typeinfo_init(&locinfo, cxt); - - typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(desc->tdtypeid)); - if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", desc->tdtypeid); + char *str; - PLy_output_datum_func2(&locinfo.out.d, locinfo.mcxt, typeTup, - exec_ctx->curr_proc->langid, - exec_ctx->curr_proc->trftypes); + /* + * Set up call data for record_in, if we didn't already. (We can't just + * use DirectFunctionCall, because record_in needs a fn_extra field.) + */ + if (!OidIsValid(arg->u.tuple.recinfunc.fn_oid)) + fmgr_info_cxt(F_RECORD_IN, &arg->u.tuple.recinfunc, arg->mcxt); - ReleaseSysCache(typeTup); + str = PLyObject_AsString(string); - result = PLyObject_ToDatum(&locinfo.out.d, desc->tdtypmod, string, inarray); + /* + * If we are parsing a composite type within an array, and the string + * isn't a valid record literal, there's a high chance that the function + * did something like: + * + * CREATE FUNCTION .. RETURNS comptype[] AS $$ return [['foo', 'bar']] $$ + * LANGUAGE plpython; + * + * Before PostgreSQL 10, that was interpreted as a single-dimensional + * array, containing record ('foo', 'bar'). PostgreSQL 10 added support + * for multi-dimensional arrays, and it is now interpreted as a + * two-dimensional array, containing two records, 'foo', and 'bar'. + * record_in() will throw an error, because "foo" is not a valid record + * literal. 
+ * + * To make that less confusing to users who are upgrading from older + * versions, try to give a hint in the typical instances of that. If we + * are parsing an array of composite types, and we see a string literal + * that is not a valid record literal, give a hint. We only want to give + * the hint in the narrow case of a malformed string literal, not any + * error from record_in(), so check for that case here specifically. + * + * This check better match the one in record_in(), so that we don't forbid + * literals that are actually valid! + */ + if (inarray) + { + char *ptr = str; - MemoryContextDelete(cxt); + /* Allow leading whitespace */ + while (*ptr && isspace((unsigned char) *ptr)) + ptr++; + if (*ptr++ != '(') + ereport(ERROR, + (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), + errmsg("malformed record literal: \"%s\"", str), + errdetail("Missing left parenthesis."), + errhint("To return a composite type in an array, return the composite type as a Python tuple, e.g., \"[('foo',)]\"."))); + } - return result; + return InputFunctionCall(&arg->u.tuple.recinfunc, + str, + arg->typoid, + arg->typmod); } static Datum -PLyMapping_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *mapping) +PLyMapping_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *mapping) { Datum result; HeapTuple tuple; @@ -1172,10 +1362,6 @@ PLyMapping_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *mapping) Assert(PyMapping_Check(mapping)); - if (info->is_rowtype == 2) - PLy_output_tuple_funcs(info, desc); - Assert(info->is_rowtype == 1); - /* Build tuple */ values = palloc(sizeof(Datum) * desc->natts); nulls = palloc(sizeof(bool) * desc->natts); @@ -1195,27 +1381,19 @@ PLyMapping_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *mapping) key = NameStr(attr->attname); value = NULL; - att = &info->out.r.atts[i]; + att = &arg->u.tuple.atts[i]; PG_TRY(); { value = PyMapping_GetItemString(mapping, key); - if (value == Py_None) - { - values[i] = (Datum) NULL; - nulls[i] = true; - } - else if (value) - { - values[i] = (att->func) (att, -1, value, false); - nulls[i] = false; - } - else + if (!value) ereport(ERROR, (errcode(ERRCODE_UNDEFINED_COLUMN), errmsg("key \"%s\" not found in mapping", key), errhint("To return null in a column, " "add the value None to the mapping with the key named after the column."))); + values[i] = att->func(att, value, &nulls[i], false); + Py_XDECREF(value); value = NULL; } @@ -1239,7 +1417,7 @@ PLyMapping_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *mapping) static Datum -PLySequence_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *sequence) +PLySequence_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *sequence) { Datum result; HeapTuple tuple; @@ -1266,10 +1444,6 @@ PLySequence_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *sequence) (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("length of returned sequence did not match number of columns in row"))); - if (info->is_rowtype == 2) - PLy_output_tuple_funcs(info, desc); - Assert(info->is_rowtype == 1); - /* Build tuple */ values = palloc(sizeof(Datum) * desc->natts); nulls = palloc(sizeof(bool) * desc->natts); @@ -1287,21 +1461,13 @@ PLySequence_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *sequence) } value = NULL; - att = &info->out.r.atts[i]; + att = &arg->u.tuple.atts[i]; PG_TRY(); { value = PySequence_GetItem(sequence, idx); Assert(value); - if (value == Py_None) - { - values[i] = (Datum) NULL; - nulls[i] = true; - } - else if (value) - { - values[i] = 
(att->func) (att, -1, value, false); - nulls[i] = false; - } + + values[i] = att->func(att, value, &nulls[i], false); Py_XDECREF(value); value = NULL; @@ -1328,7 +1494,7 @@ PLySequence_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *sequence) static Datum -PLyGenericObject_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *object, bool inarray) +PLyGenericObject_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *object, bool inarray) { Datum result; HeapTuple tuple; @@ -1336,10 +1502,6 @@ PLyGenericObject_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *object bool *nulls; volatile int i; - if (info->is_rowtype == 2) - PLy_output_tuple_funcs(info, desc); - Assert(info->is_rowtype == 1); - /* Build tuple */ values = palloc(sizeof(Datum) * desc->natts); nulls = palloc(sizeof(bool) * desc->natts); @@ -1359,21 +1521,11 @@ PLyGenericObject_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *object key = NameStr(attr->attname); value = NULL; - att = &info->out.r.atts[i]; + att = &arg->u.tuple.atts[i]; PG_TRY(); { value = PyObject_GetAttrString(object, key); - if (value == Py_None) - { - values[i] = (Datum) NULL; - nulls[i] = true; - } - else if (value) - { - values[i] = (att->func) (att, -1, value, false); - nulls[i] = false; - } - else + if (!value) { /* * No attribute for this column in the object. @@ -1384,7 +1536,7 @@ PLyGenericObject_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *object * array, with a composite type (123, 'foo') in it. But now * it's interpreted as a two-dimensional array, and we try to * interpret "123" as the composite type. See also similar - * heuristic in PLyObject_ToDatum(). + * heuristic in PLyObject_ToScalar(). */ ereport(ERROR, (errcode(ERRCODE_UNDEFINED_COLUMN), @@ -1394,6 +1546,8 @@ PLyGenericObject_ToComposite(PLyTypeInfo *info, TupleDesc desc, PyObject *object errhint("To return null in a column, let the returned object have an attribute named after column with value None."))); } + values[i] = att->func(att, value, &nulls[i], false); + Py_XDECREF(value); value = NULL; } diff --git a/src/pl/plpython/plpy_typeio.h b/src/pl/plpython/plpy_typeio.h index 95f84d8341..91870c91b0 100644 --- a/src/pl/plpython/plpy_typeio.h +++ b/src/pl/plpython/plpy_typeio.h @@ -6,117 +6,169 @@ #define PLPY_TYPEIO_H #include "access/htup.h" -#include "access/tupdesc.h" #include "fmgr.h" -#include "storage/itemptr.h" +#include "utils/typcache.h" + +struct PLyProcedure; /* avoid requiring plpy_procedure.h here */ + /* - * Conversion from PostgreSQL Datum to a Python object. + * "Input" conversion from PostgreSQL Datum to a Python object. + * + * arg is the previously-set-up conversion data, val is the value to convert. + * val mustn't be NULL. + * + * Note: the conversion data structs should be regarded as private to + * plpy_typeio.c. We declare them here only so that other modules can + * define structs containing them. 
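+ *
+ * As a sketch of the contract, a minimal scalar converter has roughly
+ * the shape of the bool case in plpy_typeio.c:
+ *
+ *		static PyObject *
+ *		PLyBool_FromBool(PLyDatumToOb *arg, Datum d)
+ *		{
+ *			return PyBool_FromLong(DatumGetBool(d));
+ *		}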
*/ -struct PLyDatumToOb; -typedef PyObject *(*PLyDatumToObFunc) (struct PLyDatumToOb *arg, Datum val); +typedef struct PLyDatumToOb PLyDatumToOb; /* forward reference */ -typedef struct PLyDatumToOb +typedef PyObject *(*PLyDatumToObFunc) (PLyDatumToOb *arg, Datum val); + +typedef struct PLyScalarToOb { - PLyDatumToObFunc func; - FmgrInfo typfunc; /* The type's output function */ - FmgrInfo typtransform; /* from-SQL transform */ - Oid typoid; /* The OID of the type */ - int32 typmod; /* The typmod of the type */ - Oid typioparam; - bool typbyval; - int16 typlen; - char typalign; - struct PLyDatumToOb *elm; -} PLyDatumToOb; + FmgrInfo typfunc; /* lookup info for type's output function */ +} PLyScalarToOb; + +typedef struct PLyArrayToOb +{ + PLyDatumToOb *elm; /* conversion info for array's element type */ +} PLyArrayToOb; typedef struct PLyTupleToOb { - PLyDatumToOb *atts; - int natts; + /* If we're dealing with a RECORD type, actual descriptor is here: */ + TupleDesc recdesc; + /* If we're dealing with a named composite type, these fields are set: */ + TypeCacheEntry *typentry; /* typcache entry for type */ + int64 tupdescseq; /* last tupdesc seqno seen in typcache */ + /* These fields are NULL/0 if not yet set: */ + PLyDatumToOb *atts; /* array of per-column conversion info */ + int natts; /* length of array */ } PLyTupleToOb; -typedef union PLyTypeInput +typedef struct PLyTransformToOb +{ + FmgrInfo typtransform; /* lookup info for from-SQL transform func */ +} PLyTransformToOb; + +struct PLyDatumToOb { - PLyDatumToOb d; - PLyTupleToOb r; -} PLyTypeInput; + PLyDatumToObFunc func; /* conversion control function */ + Oid typoid; /* OID of the source type */ + int32 typmod; /* typmod of the source type */ + bool typbyval; /* its physical representation details */ + int16 typlen; + char typalign; + MemoryContext mcxt; /* context this info is stored in */ + union /* conversion-type-specific data */ + { + PLyScalarToOb scalar; + PLyArrayToOb array; + PLyTupleToOb tuple; + PLyTransformToOb transform; + } u; +}; /* - * Conversion from Python object to a PostgreSQL Datum. + * "Output" conversion from Python object to a PostgreSQL Datum. + * + * arg is the previously-set-up conversion data, val is the value to convert. * - * The 'inarray' argument to the conversion function is true, if the - * converted value was in an array (Python list). It is used to give a - * better error message in some cases. + * *isnull is set to true if val is Py_None, false otherwise. + * (The conversion function *must* be called even for Py_None, + * so that domain constraints can be checked.) + * + * inarray is true if the converted value was in an array (Python list). + * It is used to give a better error message in some cases. 
*/ -struct PLyObToDatum; -typedef Datum (*PLyObToDatumFunc) (struct PLyObToDatum *arg, int32 typmod, PyObject *val, bool inarray); +typedef struct PLyObToDatum PLyObToDatum; /* forward reference */ + +typedef Datum (*PLyObToDatumFunc) (PLyObToDatum *arg, PyObject *val, + bool *isnull, + bool inarray); -typedef struct PLyObToDatum +typedef struct PLyObToScalar { - PLyObToDatumFunc func; - FmgrInfo typfunc; /* The type's input function */ - FmgrInfo typtransform; /* to-SQL transform */ - Oid typoid; /* The OID of the type */ - int32 typmod; /* The typmod of the type */ - Oid typioparam; - bool typbyval; - int16 typlen; - char typalign; - struct PLyObToDatum *elm; -} PLyObToDatum; + FmgrInfo typfunc; /* lookup info for type's input function */ + Oid typioparam; /* argument to pass to it */ +} PLyObToScalar; + +typedef struct PLyObToArray +{ + PLyObToDatum *elm; /* conversion info for array's element type */ + Oid elmbasetype; /* element base type */ +} PLyObToArray; typedef struct PLyObToTuple { - PLyObToDatum *atts; - int natts; + /* If we're dealing with a RECORD type, actual descriptor is here: */ + TupleDesc recdesc; + /* If we're dealing with a named composite type, these fields are set: */ + TypeCacheEntry *typentry; /* typcache entry for type */ + int64 tupdescseq; /* last tupdesc seqno seen in typcache */ + /* These fields are NULL/0 if not yet set: */ + PLyObToDatum *atts; /* array of per-column conversion info */ + int natts; /* length of array */ + /* We might need to convert using record_in(); if so, cache info here */ + FmgrInfo recinfunc; /* lookup info for record_in */ } PLyObToTuple; -typedef union PLyTypeOutput +typedef struct PLyObToDomain { - PLyObToDatum d; - PLyObToTuple r; -} PLyTypeOutput; + PLyObToDatum *base; /* conversion info for domain's base type */ + void *domain_info; /* cache space for domain_check() */ +} PLyObToDomain; -/* all we need to move PostgreSQL data to Python objects, - * and vice versa - */ -typedef struct PLyTypeInfo +typedef struct PLyObToTransform { - PLyTypeInput in; - PLyTypeOutput out; - - /* - * is_rowtype can be: -1 = not known yet (initial state); 0 = scalar - * datatype; 1 = rowtype; 2 = rowtype, but I/O functions not set up yet - */ - int is_rowtype; - /* used to check if the type has been modified */ - Oid typ_relid; - TransactionId typrel_xmin; - ItemPointerData typrel_tid; - - /* context for subsidiary data (doesn't belong to this struct though) */ - MemoryContext mcxt; -} PLyTypeInfo; - -extern void PLy_typeinfo_init(PLyTypeInfo *arg, MemoryContext mcxt); + FmgrInfo typtransform; /* lookup info for to-SQL transform function */ +} PLyObToTransform; -extern void PLy_input_datum_func(PLyTypeInfo *arg, Oid typeOid, HeapTuple typeTup, Oid langid, List *trftypes); -extern void PLy_output_datum_func(PLyTypeInfo *arg, HeapTuple typeTup, Oid langid, List *trftypes); - -extern void PLy_input_tuple_funcs(PLyTypeInfo *arg, TupleDesc desc); -extern void PLy_output_tuple_funcs(PLyTypeInfo *arg, TupleDesc desc); - -extern void PLy_output_record_funcs(PLyTypeInfo *arg, TupleDesc desc); - -/* conversion from Python objects to composite Datums */ -extern Datum PLyObject_ToCompositeDatum(PLyTypeInfo *info, TupleDesc desc, PyObject *plrv, bool isarray); - -/* conversion from heap tuples to Python dictionaries */ -extern PyObject *PLyDict_FromTuple(PLyTypeInfo *info, HeapTuple tuple, TupleDesc desc); - -/* conversion from Python objects to C strings */ +struct PLyObToDatum +{ + PLyObToDatumFunc func; /* conversion control function */ + Oid typoid; /* OID 
of the target type */ + int32 typmod; /* typmod of the target type */ + bool typbyval; /* its physical representation details */ + int16 typlen; + char typalign; + MemoryContext mcxt; /* context this info is stored in */ + union /* conversion-type-specific data */ + { + PLyObToScalar scalar; + PLyObToArray array; + PLyObToTuple tuple; + PLyObToDomain domain; + PLyObToTransform transform; + } u; +}; + + +extern PyObject *PLy_input_convert(PLyDatumToOb *arg, Datum val); +extern Datum PLy_output_convert(PLyObToDatum *arg, PyObject *val, + bool *isnull); + +extern PyObject *PLy_input_from_tuple(PLyDatumToOb *arg, HeapTuple tuple, + TupleDesc desc); + +extern void PLy_input_setup_func(PLyDatumToOb *arg, MemoryContext arg_mcxt, + Oid typeOid, int32 typmod, + struct PLyProcedure *proc); +extern void PLy_output_setup_func(PLyObToDatum *arg, MemoryContext arg_mcxt, + Oid typeOid, int32 typmod, + struct PLyProcedure *proc); + +extern void PLy_input_setup_tuple(PLyDatumToOb *arg, TupleDesc desc, + struct PLyProcedure *proc); +extern void PLy_output_setup_tuple(PLyObToDatum *arg, TupleDesc desc, + struct PLyProcedure *proc); + +extern void PLy_output_setup_record(PLyObToDatum *arg, TupleDesc desc, + struct PLyProcedure *proc); + +/* conversion from Python objects to C strings --- exported for transforms */ extern char *PLyObject_AsString(PyObject *plrv); #endif /* PLPY_TYPEIO_H */ diff --git a/src/pl/plpython/sql/plpython_types.sql b/src/pl/plpython/sql/plpython_types.sql index 8c57297c24..cc0524ee80 100644 --- a/src/pl/plpython/sql/plpython_types.sql +++ b/src/pl/plpython/sql/plpython_types.sql @@ -387,6 +387,55 @@ $$ LANGUAGE plpythonu; SELECT * FROM test_type_conversion_array_domain_check_violation(); +-- +-- Arrays of domains +-- + +CREATE FUNCTION test_read_uint2_array(x uint2[]) RETURNS uint2 AS $$ +plpy.info(x, type(x)) +return x[0] +$$ LANGUAGE plpythonu; + +select test_read_uint2_array(array[1::uint2]); + +CREATE FUNCTION test_build_uint2_array(x int2) RETURNS uint2[] AS $$ +return [x, x] +$$ LANGUAGE plpythonu; + +select test_build_uint2_array(1::int2); +select test_build_uint2_array(-1::int2); -- fail + +-- +-- ideally this would work, but for now it doesn't, because the return value +-- is [[2,4], [2,4]] which our conversion code thinks should become a 2-D +-- integer array, not an array of arrays. 
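+-- (The input direction does work; see
+-- test_type_conversion_array_domain_array below, which successfully reads
+-- array[array[2,4]::ordered_pair_domain].)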
+--
+CREATE FUNCTION test_type_conversion_domain_array(x integer[])
+  RETURNS ordered_pair_domain[] AS $$
+return [x, x]
+$$ LANGUAGE plpythonu;
+
+select test_type_conversion_domain_array(array[2,4]);
+select test_type_conversion_domain_array(array[4,2]); -- fail
+
+CREATE FUNCTION test_type_conversion_domain_array2(x ordered_pair_domain)
+  RETURNS integer AS $$
+plpy.info(x, type(x))
+return x[1]
+$$ LANGUAGE plpythonu;
+
+select test_type_conversion_domain_array2(array[2,4]);
+select test_type_conversion_domain_array2(array[4,2]); -- fail
+
+CREATE FUNCTION test_type_conversion_array_domain_array(x ordered_pair_domain[])
+  RETURNS ordered_pair_domain AS $$
+plpy.info(x, type(x))
+return x[0]
+$$ LANGUAGE plpythonu;
+
+select test_type_conversion_array_domain_array(array[array[2,4]::ordered_pair_domain]);
+
+
 --
 -- Composite types
 --
@@ -430,6 +479,48 @@ ALTER TYPE named_pair RENAME TO named_pair_2;
 
 SELECT test_composite_type_input(row(1, 2));
 
+--
+-- Domains within composite
+--
+
+CREATE TYPE nnint_container AS (f1 int, f2 nnint);
+
+CREATE FUNCTION nnint_test(x int, y int) RETURNS nnint_container AS $$
+return {'f1': x, 'f2': y}
+$$ LANGUAGE plpythonu;
+
+SELECT nnint_test(null, 3);
+SELECT nnint_test(3, null); -- fail
+
+
+--
+-- Domains of composite
+--
+
+CREATE DOMAIN ordered_named_pair AS named_pair_2 CHECK((VALUE).i <= (VALUE).j);
+
+CREATE FUNCTION read_ordered_named_pair(p ordered_named_pair) RETURNS integer AS $$
+return p['i'] + p['j']
+$$ LANGUAGE plpythonu;
+
+SELECT read_ordered_named_pair(row(1, 2));
+SELECT read_ordered_named_pair(row(2, 1)); -- fail
+
+CREATE FUNCTION build_ordered_named_pair(i int, j int) RETURNS ordered_named_pair AS $$
+return {'i': i, 'j': j}
+$$ LANGUAGE plpythonu;
+
+SELECT build_ordered_named_pair(1,2);
+SELECT build_ordered_named_pair(2,1); -- fail
+
+CREATE FUNCTION build_ordered_named_pairs(i int, j int) RETURNS ordered_named_pair[] AS $$
+return [{'i': i, 'j': j}, {'i': i, 'j': j+1}]
+$$ LANGUAGE plpythonu;
+
+SELECT build_ordered_named_pairs(1,2);
+SELECT build_ordered_named_pairs(2,1); -- fail
+
+
 --
 -- Prepared statements
 --

From 09a777447a858a01ac4d547d678ba295d9542c3b Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Thu, 16 Nov 2017 17:57:38 -0500
Subject: [PATCH 0559/1087] Clean up warnings in MinGW builds.

Experimentation with modern MinGW (specifically the 5.0.2 version
packaged for Fedora 26) shows that its version of sys/stat.h *does*
provide S_IRGRP and friends, contrary to the expectation of
win32_port.h.  This results in an astonishing number of compiler
warnings, and perhaps in incorrect code --- I'm not sure if the
nonzero values supplied by MinGW's header actually do anything.

Hence, adjust win32_port.h to only define these macros if <sys/stat.h>
doesn't.

This might be worth back-patching, but given the lack of complaints
so far, I'm not too excited about it.
---
 src/include/port/win32_port.h | 70 +++++++++++++++++++++++------------
 1 file changed, 47 insertions(+), 23 deletions(-)

diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h
index 46d7b0035f..d8a6392588 100644
--- a/src/include/port/win32_port.h
+++ b/src/include/port/win32_port.h
@@ -52,6 +52,7 @@
 #include <direct.h>
 #include <sys/utime.h> /* for non-unicode version */
 #undef near
+#include <sys/stat.h> /* needed before sys/stat hacking below */
 
 /* Must be here to avoid conflicting with prototype in windows.h */
 #define mkdir(a,b) mkdir(a)
@@ -248,6 +249,8 @@ typedef int pid_t;
 
 /*
  * Supplement to <sys/stat.h>.
+ *
+ * We must pull in sys/stat.h before this part, else our overrides lose.
  */
 #define lstat(path, sb) stat(path, sb)
 
@@ -255,19 +258,58 @@ typedef int pid_t;
  * stat() is not guaranteed to set the st_size field on win32, so we
  * redefine it to our own implementation that is.
  *
- * We must pull in sys/stat.h here so the system header definition
- * goes in first, and we redefine that, and not the other way around.
- *
 * Some frontends don't need the size from stat, so if UNSAFE_STAT_OK
 * is defined we don't bother with this.
 */
 #ifndef UNSAFE_STAT_OK
-#include <sys/stat.h>
 extern int pgwin32_safestat(const char *path, struct stat *buf);
-
 #define stat(a,b) pgwin32_safestat(a,b)
 #endif
 
+/* These macros are not provided by older MinGW, nor by MSVC */
+#ifndef S_IRUSR
+#define S_IRUSR _S_IREAD
+#endif
+#ifndef S_IWUSR
+#define S_IWUSR _S_IWRITE
+#endif
+#ifndef S_IXUSR
+#define S_IXUSR _S_IEXEC
+#endif
+#ifndef S_IRWXU
+#define S_IRWXU (S_IRUSR | S_IWUSR | S_IXUSR)
+#endif
+#ifndef S_IRGRP
+#define S_IRGRP 0
+#endif
+#ifndef S_IWGRP
+#define S_IWGRP 0
+#endif
+#ifndef S_IXGRP
+#define S_IXGRP 0
+#endif
+#ifndef S_IRWXG
+#define S_IRWXG 0
+#endif
+#ifndef S_IROTH
+#define S_IROTH 0
+#endif
+#ifndef S_IWOTH
+#define S_IWOTH 0
+#endif
+#ifndef S_IXOTH
+#define S_IXOTH 0
+#endif
+#ifndef S_IRWXO
+#define S_IRWXO 0
+#endif
+#ifndef S_ISDIR
+#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR)
+#endif
+#ifndef S_ISREG
+#define S_ISREG(m) (((m) & S_IFMT) == S_IFREG)
+#endif
+
 /*
  * Supplement to <fcntl.h>.
 * This is the same value as _O_NOINHERIT in the MS header file. This is
@@ -456,14 +498,6 @@ typedef __int64 ssize_t;
 
 typedef unsigned short mode_t;
 
-#define S_IRUSR _S_IREAD
-#define S_IWUSR _S_IWRITE
-#define S_IXUSR _S_IEXEC
-#define S_IRWXU (S_IRUSR | S_IWUSR | S_IXUSR)
-/* see also S_IRGRP etc below */
-#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR)
-#define S_ISREG(m) (((m) & S_IFMT) == S_IFREG)
-
 #define F_OK 0
 #define W_OK 2
 #define R_OK 4
@@ -478,14 +512,4 @@ typedef unsigned short mode_t;
 
 #endif /* _MSC_VER */
 
-/* These aren't provided by either MinGW or MSVC */
-#define S_IRGRP 0
-#define S_IWGRP 0
-#define S_IXGRP 0
-#define S_IRWXG 0
-#define S_IROTH 0
-#define S_IWOTH 0
-#define S_IXOTH 0
-#define S_IRWXO 0
-
 #endif /* PG_WIN32_PORT_H */

From 7082e614c0dd504cdf49c4d5a692159f22e78f9d Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Thu, 16 Nov 2017 17:28:11 -0800
Subject: [PATCH 0560/1087] Provide DSM segment to ExecXXXInitializeWorker functions.

Previously, executor nodes running in parallel worker processes didn't
have access to the dsm_segment object used for parallel execution.  In
order to support resource management based on DSM segment lifetime,
they need that.  So create a ParallelWorkerContext object to hold it
and pass it to all InitializeWorker functions.
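
For illustration, a hypothetical parallel-aware FooScan node (sketch
only; the node type and its detach callback are invented for this
example) could now tie worker-side cleanup to the segment's lifetime:

    void
    ExecFooScanInitializeWorker(FooScanState *node,
                                ParallelWorkerContext *pwcxt)
    {
        ParallelFooScanState *pstate;

        /* find this node's shared state via the toc, as before */
        pstate = shm_toc_lookup(pwcxt->toc,
                                node->ss.ps.plan->plan_node_id, false);
        node->pstate = pstate;

        /* new: release resources when the DSM segment goes away */
        on_dsm_detach(pwcxt->seg, fooscan_detach_callback,
                      PointerGetDatum(node));
    }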
Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com --- src/backend/executor/execParallel.c | 27 ++++++++++++++--------- src/backend/executor/nodeBitmapHeapscan.c | 5 +++-- src/backend/executor/nodeCustom.c | 7 +++--- src/backend/executor/nodeForeignscan.c | 7 +++--- src/backend/executor/nodeIndexonlyscan.c | 5 +++-- src/backend/executor/nodeIndexscan.c | 5 +++-- src/backend/executor/nodeSeqscan.c | 5 +++-- src/backend/executor/nodeSort.c | 4 ++-- src/include/access/parallel.h | 6 +++++ src/include/executor/nodeBitmapHeapscan.h | 2 +- src/include/executor/nodeCustom.h | 2 +- src/include/executor/nodeForeignscan.h | 2 +- src/include/executor/nodeIndexonlyscan.h | 2 +- src/include/executor/nodeIndexscan.h | 3 ++- src/include/executor/nodeSeqscan.h | 3 ++- src/include/executor/nodeSort.h | 2 +- src/tools/pgindent/typedefs.list | 1 + 17 files changed, 55 insertions(+), 33 deletions(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index c435550637..2ead32d5ad 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -1122,7 +1122,7 @@ ExecParallelReportInstrumentation(PlanState *planstate, * is allocated and initialized by executor; that is, after ExecutorStart(). */ static bool -ExecParallelInitializeWorker(PlanState *planstate, shm_toc *toc) +ExecParallelInitializeWorker(PlanState *planstate, ParallelWorkerContext *pwcxt) { if (planstate == NULL) return false; @@ -1131,40 +1131,44 @@ ExecParallelInitializeWorker(PlanState *planstate, shm_toc *toc) { case T_SeqScanState: if (planstate->plan->parallel_aware) - ExecSeqScanInitializeWorker((SeqScanState *) planstate, toc); + ExecSeqScanInitializeWorker((SeqScanState *) planstate, pwcxt); break; case T_IndexScanState: if (planstate->plan->parallel_aware) - ExecIndexScanInitializeWorker((IndexScanState *) planstate, toc); + ExecIndexScanInitializeWorker((IndexScanState *) planstate, + pwcxt); break; case T_IndexOnlyScanState: if (planstate->plan->parallel_aware) - ExecIndexOnlyScanInitializeWorker((IndexOnlyScanState *) planstate, toc); + ExecIndexOnlyScanInitializeWorker((IndexOnlyScanState *) planstate, + pwcxt); break; case T_ForeignScanState: if (planstate->plan->parallel_aware) ExecForeignScanInitializeWorker((ForeignScanState *) planstate, - toc); + pwcxt); break; case T_CustomScanState: if (planstate->plan->parallel_aware) ExecCustomScanInitializeWorker((CustomScanState *) planstate, - toc); + pwcxt); break; case T_BitmapHeapScanState: if (planstate->plan->parallel_aware) - ExecBitmapHeapInitializeWorker((BitmapHeapScanState *) planstate, toc); + ExecBitmapHeapInitializeWorker((BitmapHeapScanState *) planstate, + pwcxt); break; case T_SortState: /* even when not parallel-aware */ - ExecSortInitializeWorker((SortState *) planstate, toc); + ExecSortInitializeWorker((SortState *) planstate, pwcxt); break; default: break; } - return planstate_tree_walker(planstate, ExecParallelInitializeWorker, toc); + return planstate_tree_walker(planstate, ExecParallelInitializeWorker, + pwcxt); } /* @@ -1194,6 +1198,7 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc) int instrument_options = 0; void *area_space; dsa_area *area; + ParallelWorkerContext pwcxt; /* Get fixed-size state. 
*/ fpes = shm_toc_lookup(toc, PARALLEL_KEY_EXECUTOR_FIXED, false); @@ -1231,7 +1236,9 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc) RestoreParamExecParams(paramexec_space, queryDesc->estate); } - ExecParallelInitializeWorker(queryDesc->planstate, toc); + pwcxt.toc = toc; + pwcxt.seg = seg; + ExecParallelInitializeWorker(queryDesc->planstate, &pwcxt); /* Pass down any tuple bound */ ExecSetTupleBound(fpes->tuples_needed, queryDesc->planstate); diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c index b885f2a3a6..221391908c 100644 --- a/src/backend/executor/nodeBitmapHeapscan.c +++ b/src/backend/executor/nodeBitmapHeapscan.c @@ -1102,12 +1102,13 @@ ExecBitmapHeapReInitializeDSM(BitmapHeapScanState *node, * ---------------------------------------------------------------- */ void -ExecBitmapHeapInitializeWorker(BitmapHeapScanState *node, shm_toc *toc) +ExecBitmapHeapInitializeWorker(BitmapHeapScanState *node, + ParallelWorkerContext *pwcxt) { ParallelBitmapHeapState *pstate; Snapshot snapshot; - pstate = shm_toc_lookup(toc, node->ss.ps.plan->plan_node_id, false); + pstate = shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, false); node->pstate = pstate; snapshot = RestoreSnapshot(pstate->phs_snapshot_data); diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c index 07dcabef55..5f1732d6ac 100644 --- a/src/backend/executor/nodeCustom.c +++ b/src/backend/executor/nodeCustom.c @@ -210,7 +210,8 @@ ExecCustomScanReInitializeDSM(CustomScanState *node, ParallelContext *pcxt) } void -ExecCustomScanInitializeWorker(CustomScanState *node, shm_toc *toc) +ExecCustomScanInitializeWorker(CustomScanState *node, + ParallelWorkerContext *pwcxt) { const CustomExecMethods *methods = node->methods; @@ -219,8 +220,8 @@ ExecCustomScanInitializeWorker(CustomScanState *node, shm_toc *toc) int plan_node_id = node->ss.ps.plan->plan_node_id; void *coordinate; - coordinate = shm_toc_lookup(toc, plan_node_id, false); - methods->InitializeWorkerCustomScan(node, toc, coordinate); + coordinate = shm_toc_lookup(pwcxt->toc, plan_node_id, false); + methods->InitializeWorkerCustomScan(node, pwcxt->toc, coordinate); } } diff --git a/src/backend/executor/nodeForeignscan.c b/src/backend/executor/nodeForeignscan.c index 20892d6d5f..dc6cfcfa66 100644 --- a/src/backend/executor/nodeForeignscan.c +++ b/src/backend/executor/nodeForeignscan.c @@ -359,7 +359,8 @@ ExecForeignScanReInitializeDSM(ForeignScanState *node, ParallelContext *pcxt) * ---------------------------------------------------------------- */ void -ExecForeignScanInitializeWorker(ForeignScanState *node, shm_toc *toc) +ExecForeignScanInitializeWorker(ForeignScanState *node, + ParallelWorkerContext *pwcxt) { FdwRoutine *fdwroutine = node->fdwroutine; @@ -368,8 +369,8 @@ ExecForeignScanInitializeWorker(ForeignScanState *node, shm_toc *toc) int plan_node_id = node->ss.ps.plan->plan_node_id; void *coordinate; - coordinate = shm_toc_lookup(toc, plan_node_id, false); - fdwroutine->InitializeWorkerForeignScan(node, toc, coordinate); + coordinate = shm_toc_lookup(pwcxt->toc, plan_node_id, false); + fdwroutine->InitializeWorkerForeignScan(node, pwcxt->toc, coordinate); } } diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c index 9368ca04f8..c54c5aa659 100644 --- a/src/backend/executor/nodeIndexonlyscan.c +++ b/src/backend/executor/nodeIndexonlyscan.c @@ -678,11 +678,12 @@ ExecIndexOnlyScanReInitializeDSM(IndexOnlyScanState *node, * 
---------------------------------------------------------------- */ void -ExecIndexOnlyScanInitializeWorker(IndexOnlyScanState *node, shm_toc *toc) +ExecIndexOnlyScanInitializeWorker(IndexOnlyScanState *node, + ParallelWorkerContext *pwcxt) { ParallelIndexScanDesc piscan; - piscan = shm_toc_lookup(toc, node->ss.ps.plan->plan_node_id, false); + piscan = shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, false); node->ioss_ScanDesc = index_beginscan_parallel(node->ss.ss_currentRelation, node->ioss_RelationDesc, diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c index 2d6da28fbd..2ffef23107 100644 --- a/src/backend/executor/nodeIndexscan.c +++ b/src/backend/executor/nodeIndexscan.c @@ -1716,11 +1716,12 @@ ExecIndexScanReInitializeDSM(IndexScanState *node, * ---------------------------------------------------------------- */ void -ExecIndexScanInitializeWorker(IndexScanState *node, shm_toc *toc) +ExecIndexScanInitializeWorker(IndexScanState *node, + ParallelWorkerContext *pwcxt) { ParallelIndexScanDesc piscan; - piscan = shm_toc_lookup(toc, node->ss.ps.plan->plan_node_id, false); + piscan = shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, false); node->iss_ScanDesc = index_beginscan_parallel(node->ss.ss_currentRelation, node->iss_RelationDesc, diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c index 76bec780a8..a5bd60e579 100644 --- a/src/backend/executor/nodeSeqscan.c +++ b/src/backend/executor/nodeSeqscan.c @@ -348,11 +348,12 @@ ExecSeqScanReInitializeDSM(SeqScanState *node, * ---------------------------------------------------------------- */ void -ExecSeqScanInitializeWorker(SeqScanState *node, shm_toc *toc) +ExecSeqScanInitializeWorker(SeqScanState *node, + ParallelWorkerContext *pwcxt) { ParallelHeapScanDesc pscan; - pscan = shm_toc_lookup(toc, node->ss.ps.plan->plan_node_id, false); + pscan = shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, false); node->ss.ss_currentScanDesc = heap_beginscan_parallel(node->ss.ss_currentRelation, pscan); } diff --git a/src/backend/executor/nodeSort.c b/src/backend/executor/nodeSort.c index 98bcaeb66f..73aa3715e6 100644 --- a/src/backend/executor/nodeSort.c +++ b/src/backend/executor/nodeSort.c @@ -420,10 +420,10 @@ ExecSortReInitializeDSM(SortState *node, ParallelContext *pcxt) * ---------------------------------------------------------------- */ void -ExecSortInitializeWorker(SortState *node, shm_toc *toc) +ExecSortInitializeWorker(SortState *node, ParallelWorkerContext *pwcxt) { node->shared_info = - shm_toc_lookup(toc, node->ss.ps.plan->plan_node_id, true); + shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, true); node->am_worker = true; } diff --git a/src/include/access/parallel.h b/src/include/access/parallel.h index e3e0cecf1e..f4db88294a 100644 --- a/src/include/access/parallel.h +++ b/src/include/access/parallel.h @@ -45,6 +45,12 @@ typedef struct ParallelContext ParallelWorkerInfo *worker; } ParallelContext; +typedef struct ParallelWorkerContext +{ + dsm_segment *seg; + shm_toc *toc; +} ParallelWorkerContext; + extern volatile bool ParallelMessagePending; extern int ParallelWorkerNumber; extern bool InitializingParallelWorker; diff --git a/src/include/executor/nodeBitmapHeapscan.h b/src/include/executor/nodeBitmapHeapscan.h index 10844a405a..7907ecc3cb 100644 --- a/src/include/executor/nodeBitmapHeapscan.h +++ b/src/include/executor/nodeBitmapHeapscan.h @@ -27,6 +27,6 @@ extern void ExecBitmapHeapInitializeDSM(BitmapHeapScanState *node, 
extern void ExecBitmapHeapReInitializeDSM(BitmapHeapScanState *node, ParallelContext *pcxt); extern void ExecBitmapHeapInitializeWorker(BitmapHeapScanState *node, - shm_toc *toc); + ParallelWorkerContext *pwcxt); #endif /* NODEBITMAPHEAPSCAN_H */ diff --git a/src/include/executor/nodeCustom.h b/src/include/executor/nodeCustom.h index 25767b6a4a..d7dcf3b8cb 100644 --- a/src/include/executor/nodeCustom.h +++ b/src/include/executor/nodeCustom.h @@ -37,7 +37,7 @@ extern void ExecCustomScanInitializeDSM(CustomScanState *node, extern void ExecCustomScanReInitializeDSM(CustomScanState *node, ParallelContext *pcxt); extern void ExecCustomScanInitializeWorker(CustomScanState *node, - shm_toc *toc); + ParallelWorkerContext *pwcxt); extern void ExecShutdownCustomScan(CustomScanState *node); #endif /* NODECUSTOM_H */ diff --git a/src/include/executor/nodeForeignscan.h b/src/include/executor/nodeForeignscan.h index 0354c2c430..152abf022b 100644 --- a/src/include/executor/nodeForeignscan.h +++ b/src/include/executor/nodeForeignscan.h @@ -28,7 +28,7 @@ extern void ExecForeignScanInitializeDSM(ForeignScanState *node, extern void ExecForeignScanReInitializeDSM(ForeignScanState *node, ParallelContext *pcxt); extern void ExecForeignScanInitializeWorker(ForeignScanState *node, - shm_toc *toc); + ParallelWorkerContext *pwcxt); extern void ExecShutdownForeignScan(ForeignScanState *node); #endif /* NODEFOREIGNSCAN_H */ diff --git a/src/include/executor/nodeIndexonlyscan.h b/src/include/executor/nodeIndexonlyscan.h index 690b5dbfe5..c5344a8d5d 100644 --- a/src/include/executor/nodeIndexonlyscan.h +++ b/src/include/executor/nodeIndexonlyscan.h @@ -31,6 +31,6 @@ extern void ExecIndexOnlyScanInitializeDSM(IndexOnlyScanState *node, extern void ExecIndexOnlyScanReInitializeDSM(IndexOnlyScanState *node, ParallelContext *pcxt); extern void ExecIndexOnlyScanInitializeWorker(IndexOnlyScanState *node, - shm_toc *toc); + ParallelWorkerContext *pwcxt); #endif /* NODEINDEXONLYSCAN_H */ diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h index 0670e87e39..ae0f44806a 100644 --- a/src/include/executor/nodeIndexscan.h +++ b/src/include/executor/nodeIndexscan.h @@ -25,7 +25,8 @@ extern void ExecReScanIndexScan(IndexScanState *node); extern void ExecIndexScanEstimate(IndexScanState *node, ParallelContext *pcxt); extern void ExecIndexScanInitializeDSM(IndexScanState *node, ParallelContext *pcxt); extern void ExecIndexScanReInitializeDSM(IndexScanState *node, ParallelContext *pcxt); -extern void ExecIndexScanInitializeWorker(IndexScanState *node, shm_toc *toc); +extern void ExecIndexScanInitializeWorker(IndexScanState *node, + ParallelWorkerContext *pwcxt); /* * These routines are exported to share code with nodeIndexonlyscan.c and diff --git a/src/include/executor/nodeSeqscan.h b/src/include/executor/nodeSeqscan.h index eb96799cad..ee3b1a0bb8 100644 --- a/src/include/executor/nodeSeqscan.h +++ b/src/include/executor/nodeSeqscan.h @@ -25,6 +25,7 @@ extern void ExecReScanSeqScan(SeqScanState *node); extern void ExecSeqScanEstimate(SeqScanState *node, ParallelContext *pcxt); extern void ExecSeqScanInitializeDSM(SeqScanState *node, ParallelContext *pcxt); extern void ExecSeqScanReInitializeDSM(SeqScanState *node, ParallelContext *pcxt); -extern void ExecSeqScanInitializeWorker(SeqScanState *node, shm_toc *toc); +extern void ExecSeqScanInitializeWorker(SeqScanState *node, + ParallelWorkerContext *pwcxt); #endif /* NODESEQSCAN_H */ diff --git a/src/include/executor/nodeSort.h 
b/src/include/executor/nodeSort.h index 1ab8f76721..cc61a9db69 100644 --- a/src/include/executor/nodeSort.h +++ b/src/include/executor/nodeSort.h @@ -27,7 +27,7 @@ extern void ExecReScanSort(SortState *node); extern void ExecSortEstimate(SortState *node, ParallelContext *pcxt); extern void ExecSortInitializeDSM(SortState *node, ParallelContext *pcxt); extern void ExecSortReInitializeDSM(SortState *node, ParallelContext *pcxt); -extern void ExecSortInitializeWorker(SortState *node, shm_toc *toc); +extern void ExecSortInitializeWorker(SortState *node, ParallelWorkerContext *pwcxt); extern void ExecSortRetrieveInstrumentation(SortState *node); #endif /* NODESORT_H */ diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index 61aeb51c29..b422050a92 100644 --- a/src/tools/pgindent/typedefs.list +++ b/src/tools/pgindent/typedefs.list @@ -1534,6 +1534,7 @@ ParallelHeapScanDesc ParallelIndexScanDesc ParallelSlot ParallelState +ParallelWorkerContext ParallelWorkerInfo Param ParamExecData From 11e264517dff7a911d9e6494de86049cab42cde3 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 16 Nov 2017 17:52:57 -0800 Subject: [PATCH 0561/1087] Remove BufFile's isTemp flag. The isTemp flag controls whether buffile.c chops BufFile data up into 1GB segments on disk. Since it was badly named and always true, get rid of it. Author: Thomas Munro (based on suggestion by Peter Geoghegan) Reviewed-By: Peter Geoghegan, Andres Freund Discussion: https://postgr.es/m/CAH2-Wz%3D%2B9Rfqh5UdvdW9rGezdhrMGGH-JL1X9FXXVZdeeGeOJA%40mail.gmail.com --- src/backend/storage/file/buffile.c | 43 +++++++++++++----------------- 1 file changed, 18 insertions(+), 25 deletions(-) diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c index a73c025c81..b527d38b05 100644 --- a/src/backend/storage/file/buffile.c +++ b/src/backend/storage/file/buffile.c @@ -68,7 +68,6 @@ struct BufFile * avoid making redundant FileSeek calls. */ - bool isTemp; /* can only add files if this is true */ bool isInterXact; /* keep open over transactions? */ bool dirty; /* does buffer need to be written? */ @@ -99,7 +98,7 @@ static int BufFileFlush(BufFile *file); /* * Create a BufFile given the first underlying physical file. - * NOTE: caller must set isTemp and isInterXact if appropriate. + * NOTE: caller must set isInterXact if appropriate. */ static BufFile * makeBufFile(File firstfile) @@ -111,7 +110,6 @@ makeBufFile(File firstfile) file->files[0] = firstfile; file->offsets = (off_t *) palloc(sizeof(off_t)); file->offsets[0] = 0L; - file->isTemp = false; file->isInterXact = false; file->dirty = false; file->resowner = CurrentResourceOwner; @@ -136,7 +134,6 @@ extendBufFile(BufFile *file) oldowner = CurrentResourceOwner; CurrentResourceOwner = file->resowner; - Assert(file->isTemp); pfile = OpenTemporaryFile(file->isInterXact); Assert(pfile >= 0); @@ -173,7 +170,6 @@ BufFileCreateTemp(bool interXact) Assert(pfile >= 0); file = makeBufFile(pfile); - file->isTemp = true; file->isInterXact = interXact; return file; @@ -288,10 +284,12 @@ BufFileDumpBuffer(BufFile *file) */ while (wpos < file->nbytes) { + off_t availbytes; + /* * Advance to next component file if necessary and possible. */ - if (file->curOffset >= MAX_PHYSICAL_FILESIZE && file->isTemp) + if (file->curOffset >= MAX_PHYSICAL_FILESIZE) { while (file->curFile + 1 >= file->numFiles) extendBufFile(file); @@ -304,13 +302,10 @@ BufFileDumpBuffer(BufFile *file) * write as much as asked... 
  */
 		bytestowrite = file->nbytes - wpos;
-		if (file->isTemp)
-		{
-			off_t		availbytes = MAX_PHYSICAL_FILESIZE - file->curOffset;
+		availbytes = MAX_PHYSICAL_FILESIZE - file->curOffset;

-			if ((off_t) bytestowrite > availbytes)
-				bytestowrite = (int) availbytes;
-		}
+		if ((off_t) bytestowrite > availbytes)
+			bytestowrite = (int) availbytes;

 		/*
 		 * May need to reposition physical file.
@@ -543,20 +538,18 @@ BufFileSeek(BufFile *file, int fileno, off_t offset, int whence)
 	 * above flush could have created a new segment, so checking sooner would
 	 * not work (at least not with this code).
 	 */
-	if (file->isTemp)
+
+	/* convert seek to "start of next seg" to "end of last seg" */
+	if (newFile == file->numFiles && newOffset == 0)
 	{
-		/* convert seek to "start of next seg" to "end of last seg" */
-		if (newFile == file->numFiles && newOffset == 0)
-		{
-			newFile--;
-			newOffset = MAX_PHYSICAL_FILESIZE;
-		}
-		while (newOffset > MAX_PHYSICAL_FILESIZE)
-		{
-			if (++newFile >= file->numFiles)
-				return EOF;
-			newOffset -= MAX_PHYSICAL_FILESIZE;
-		}
+		newFile--;
+		newOffset = MAX_PHYSICAL_FILESIZE;
+	}
+	while (newOffset > MAX_PHYSICAL_FILESIZE)
+	{
+		if (++newFile >= file->numFiles)
+			return EOF;
+		newOffset -= MAX_PHYSICAL_FILESIZE;
 	}
 	if (newFile >= file->numFiles)
 		return EOF;

From be92769e4e63de949fe3ba29e0bf5c0a96f54ae3 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Fri, 17 Nov 2017 11:53:00 -0500
Subject: [PATCH 0562/1087] Set proargmodes for satisfies_hash_partition.

It appears that proargmodes should always be set for variadic functions,
but satisfies_hash_partition had it as NULL.  In addition to fixing the
problem, add a regression test to guard against future mistakes of this
type.
---
 src/include/catalog/catversion.h          |  2 +-
 src/include/catalog/pg_proc.h             |  2 +-
 src/test/regress/expected/type_sanity.out | 11 +++++++++++
 src/test/regress/sql/type_sanity.sql      |  8 ++++++++
 4 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index bd4014a69d..a30ce6b81d 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
  */

 /*							yyyymmddN */
-#define CATALOG_VERSION_NO	201711092
+#define CATALOG_VERSION_NO	201711171

 #endif
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 0330c04f16..c969375981 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -5523,7 +5523,7 @@ DATA(insert OID = 3354 (  pg_ls_waldir	PGNSP PGUID 12 10 20 0 0 f f f f t t
 DESCR("list of files in the WAL directory");

 /* hash partitioning constraint function */
-DATA(insert OID = 5028 ( satisfies_hash_partition PGNSP PGUID 12 1 0 2276 0 f f f f f f i s 4 0 16 "26 23 23 2276" _null_ _null_ _null_ _null_ _null_ satisfies_hash_partition _null_ _null_ _null_ ));
+DATA(insert OID = 5028 ( satisfies_hash_partition PGNSP PGUID 12 1 0 2276 0 f f f f f f i s 4 0 16 "26 23 23 2276" _null_ "{i,i,i,v}" _null_ _null_ _null_ satisfies_hash_partition _null_ _null_ _null_ ));
 DESCR("hash partition CHECK constraint");

 /*
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index c6440060dc..b1419d4bc2 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -147,6 +147,17 @@ AND case proargtypes[array_length(proargtypes, 1)-1]
 -----+-------------+-------------
 (0 rows)

+-- Check that all and only those functions with a variadic type have
+-- a variadic argument.
+SELECT oid::regprocedure, proargmodes, provariadic +FROM pg_proc +WHERE (proargmodes IS NOT NULL AND 'v' = any(proargmodes)) + IS DISTINCT FROM + (provariadic != 0); + oid | proargmodes | provariadic +-----+-------------+------------- +(0 rows) + -- As of 8.0, this check finds refcursor, which is borrowing -- other types' I/O routines SELECT p1.oid, p1.typname, p2.oid, p2.proname diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql index 428c2d324d..f9aeea3214 100644 --- a/src/test/regress/sql/type_sanity.sql +++ b/src/test/regress/sql/type_sanity.sql @@ -120,6 +120,14 @@ AND case proargtypes[array_length(proargtypes, 1)-1] WHERE t.typarray = proargtypes[array_length(proargtypes, 1)-1]) END != provariadic; +-- Check that all and only those functions with a variadic type have +-- a variadic argument. +SELECT oid::regprocedure, proargmodes, provariadic +FROM pg_proc +WHERE (proargmodes IS NOT NULL AND 'v' = any(proargmodes)) + IS DISTINCT FROM + (provariadic != 0); + -- As of 8.0, this check finds refcursor, which is borrowing -- other types' I/O routines SELECT p1.oid, p1.typname, p2.oid, p2.proname From e87d4965bd39e4d0d56346c1bbe9361d3eb9ff0a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 17 Nov 2017 12:04:06 -0500 Subject: [PATCH 0563/1087] Prevent to_number() from losing data when template doesn't match exactly. Non-data template patterns would consume characters whether or not those characters were what the pattern expected, for example SELECT TO_NUMBER('1234', '9,999'); produced 134 because the '2' got eaten by the comma pattern. This seems undesirable, not least because it doesn't happen in Oracle. For the ',' and 'G' template patterns, we can fix this by consuming characters only if they match what the pattern would output. For non-data patterns such as 'L' and 'TH', it seems impractical to tighten things up to the point of consuming only exact matches to what the pattern would output; but we can improve matters quite a lot by redefining the behavior as "consume only characters that aren't digits, signs, decimal point, or comma". Also, fix it so that the behavior is to consume the number of *characters* the pattern would output, not the number of *bytes*. The old coding would do surprising things with non-ASCII currency symbols, for example. (It would be good to apply that rule for literal text as well, but this commit only fixes it for non-data patterns.) Oliver Ford, reviewed by Thomas Munro and Nathan Wagner, and whacked around a bit more by me Discussion: https://postgr.es/m/CAGMVOdvpbMqPf9XWNzOwBpzJfErkydr_fEGhmuDGa015z97mwg@mail.gmail.com --- doc/src/sgml/func.sgml | 28 ++++-- src/backend/utils/adt/formatting.c | 134 +++++++++++++++++++++----- src/test/regress/expected/numeric.out | 56 +++++++++++ src/test/regress/sql/numeric.sql | 11 +++ 4 files changed, 197 insertions(+), 32 deletions(-) diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index f901567f7e..35a845c400 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -5850,7 +5850,10 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); data based on the given value. Any text that is not a template pattern is simply copied verbatim. Similarly, in an input template string (for the other functions), template patterns identify the values to be supplied by - the input data string. + the input data string. 
If there are characters in the template string + that are not template patterns, the corresponding characters in the input + data string are simply skipped over (whether or not they are equal to the + template string characters). @@ -6176,13 +6179,15 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Ordinary text is allowed in to_char templates and will be output literally. You can put a substring in double quotes to force it to be interpreted as literal text - even if it contains pattern key words. For example, in + even if it contains template patterns. For example, in '"Hello Year "YYYY', the YYYY will be replaced by the year data, but the single Y in Year - will not be. In to_date, to_number, - and to_timestamp, double-quoted strings skip the number of - input characters contained in the string, e.g. "XX" - skips two input characters. + will not be. + In to_date, to_number, + and to_timestamp, literal text and double-quoted + strings result in skipping the number of characters contained in the + string; for example "XX" skips two input characters + (whether or not they are XX). @@ -6483,6 +6488,17 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); + + + In to_number, if non-data template patterns such + as L or TH are used, the + corresponding number of input characters are skipped, whether or not + they match the template pattern, unless they are data characters + (that is, digits, sign, decimal point, or comma). For + example, TH would skip two non-data characters. + + + V with to_char diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c index 50254f2388..5afc293a5a 100644 --- a/src/backend/utils/adt/formatting.c +++ b/src/backend/utils/adt/formatting.c @@ -988,7 +988,7 @@ static char *get_last_relevant_decnum(char *num); static void NUM_numpart_from_char(NUMProc *Np, int id, int input_len); static void NUM_numpart_to_char(NUMProc *Np, int id); static char *NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, - char *number, int from_char_input_len, int to_char_out_pre_spaces, + char *number, int input_len, int to_char_out_pre_spaces, int sign, bool is_to_char, Oid collid); static DCHCacheEntry *DCH_cache_getnew(const char *str); static DCHCacheEntry *DCH_cache_search(const char *str); @@ -4232,6 +4232,14 @@ get_last_relevant_decnum(char *num) return result; } +/* + * These macros are used in NUM_processor() and its subsidiary routines. + * OVERLOAD_TEST: true if we've reached end of input string + * AMOUNT_TEST(s): true if at least s characters remain in string + */ +#define OVERLOAD_TEST (Np->inout_p >= Np->inout + input_len) +#define AMOUNT_TEST(s) (Np->inout_p <= Np->inout + (input_len - (s))) + /* ---------- * Number extraction for TO_NUMBER() * ---------- @@ -4246,9 +4254,6 @@ NUM_numpart_from_char(NUMProc *Np, int id, int input_len) (id == NUM_0 || id == NUM_9) ? "NUM_0/9" : id == NUM_DEC ? 
"NUM_DEC" : "???"); #endif -#define OVERLOAD_TEST (Np->inout_p >= Np->inout + input_len) -#define AMOUNT_TEST(_s) (input_len-(Np->inout_p-Np->inout) >= _s) - if (OVERLOAD_TEST) return; @@ -4641,14 +4646,32 @@ NUM_numpart_to_char(NUMProc *Np, int id) ++Np->num_curr; } +/* + * Skip over "n" input characters, but only if they aren't numeric data + */ +static void +NUM_eat_non_data_chars(NUMProc *Np, int n, int input_len) +{ + while (n-- > 0) + { + if (OVERLOAD_TEST) + break; /* end of input */ + if (strchr("0123456789.,+-", *Np->inout_p) != NULL) + break; /* it's a data character */ + Np->inout_p += pg_mblen(Np->inout_p); + } +} + static char * NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, - char *number, int from_char_input_len, int to_char_out_pre_spaces, + char *number, int input_len, int to_char_out_pre_spaces, int sign, bool is_to_char, Oid collid) { FormatNode *n; NUMProc _Np, *Np = &_Np; + const char *pattern; + int pattern_len; MemSet(Np, 0, sizeof(NUMProc)); @@ -4816,9 +4839,11 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, if (!Np->is_to_char) { /* - * Check non-string inout end + * Check at least one character remains to be scanned. (In + * actions below, must use AMOUNT_TEST if we want to read more + * characters than that.) */ - if (Np->inout_p >= Np->inout + from_char_input_len) + if (OVERLOAD_TEST) break; } @@ -4828,12 +4853,16 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, if (n->type == NODE_TYPE_ACTION) { /* - * Create/reading digit/zero/blank/sing + * Create/read digit/zero/blank/sign/special-case * * 'NUM_S' note: The locale sign is anchored to number and we * read/write it when we work with first or last number - * (NUM_0/NUM_9). This is reason why NUM_S missing in follow - * switch(). + * (NUM_0/NUM_9). This is why NUM_S is missing in switch(). + * + * Notice the "Np->inout_p++" at the bottom of the loop. This is + * why most of the actions advance inout_p one less than you might + * expect. In cases where we don't want that increment to happen, + * a switch case ends with "continue" not "break". */ switch (n->key->id) { @@ -4848,7 +4877,7 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, } else { - NUM_numpart_from_char(Np, n->key->id, from_char_input_len); + NUM_numpart_from_char(Np, n->key->id, input_len); break; /* switch() case: */ } @@ -4872,10 +4901,14 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, if (IS_FILLMODE(Np->Num)) continue; } + if (*Np->inout_p != ',') + continue; } break; case NUM_G: + pattern = Np->L_thousands_sep; + pattern_len = strlen(pattern); if (Np->is_to_char) { if (!Np->num_in) @@ -4884,16 +4917,16 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, continue; else { - int x = strlen(Np->L_thousands_sep); - - memset(Np->inout_p, ' ', x); - Np->inout_p += x - 1; + /* just in case there are MB chars */ + pattern_len = pg_mbstrlen(pattern); + memset(Np->inout_p, ' ', pattern_len); + Np->inout_p += pattern_len - 1; } } else { - strcpy(Np->inout_p, Np->L_thousands_sep); - Np->inout_p += strlen(Np->inout_p) - 1; + strcpy(Np->inout_p, pattern); + Np->inout_p += pattern_len - 1; } } else @@ -4903,18 +4936,33 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, if (IS_FILLMODE(Np->Num)) continue; } - Np->inout_p += strlen(Np->L_thousands_sep) - 1; + + /* + * Because L_thousands_sep typically contains data + * characters (either '.' or ','), we can't use + * NUM_eat_non_data_chars here. Instead skip only if + * the input matches L_thousands_sep. 
+ */ + if (AMOUNT_TEST(pattern_len) && + strncmp(Np->inout_p, pattern, pattern_len) == 0) + Np->inout_p += pattern_len - 1; + else + continue; } break; case NUM_L: + pattern = Np->L_currency_symbol; if (Np->is_to_char) { - strcpy(Np->inout_p, Np->L_currency_symbol); - Np->inout_p += strlen(Np->inout_p) - 1; + strcpy(Np->inout_p, pattern); + Np->inout_p += strlen(pattern) - 1; } else - Np->inout_p += strlen(Np->L_currency_symbol) - 1; + { + NUM_eat_non_data_chars(Np, pg_mbstrlen(pattern), input_len); + continue; + } break; case NUM_RN: @@ -4949,8 +4997,16 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, continue; if (Np->is_to_char) + { strcpy(Np->inout_p, get_th(Np->number, TH_LOWER)); - Np->inout_p += 1; + Np->inout_p += 1; + } + else + { + /* All variants of 'th' occupy 2 characters */ + NUM_eat_non_data_chars(Np, 2, input_len); + continue; + } break; case NUM_TH: @@ -4959,8 +5015,16 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, continue; if (Np->is_to_char) + { strcpy(Np->inout_p, get_th(Np->number, TH_UPPER)); - Np->inout_p += 1; + Np->inout_p += 1; + } + else + { + /* All variants of 'TH' occupy 2 characters */ + NUM_eat_non_data_chars(Np, 2, input_len); + continue; + } break; case NUM_MI: @@ -4977,6 +5041,11 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, { if (*Np->inout_p == '-') *Np->number = '-'; + else + { + NUM_eat_non_data_chars(Np, 1, input_len); + continue; + } } break; @@ -4994,23 +5063,31 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, { if (*Np->inout_p == '+') *Np->number = '+'; + else + { + NUM_eat_non_data_chars(Np, 1, input_len); + continue; + } } break; case NUM_SG: if (Np->is_to_char) *Np->inout_p = Np->sign; - else { if (*Np->inout_p == '-') *Np->number = '-'; else if (*Np->inout_p == '+') *Np->number = '+'; + else + { + NUM_eat_non_data_chars(Np, 1, input_len); + continue; + } } break; - default: continue; break; @@ -5019,7 +5096,12 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, else { /* - * Remove to output char from input in TO_CHAR + * In TO_CHAR, non-pattern characters in the format are copied to + * the output. In TO_NUMBER, we skip one input character for each + * non-pattern format character, whether or not it matches the + * format character. (Currently, that's actually implemented as + * skipping one input byte per non-pattern format byte, which is + * wrong...) */ if (Np->is_to_char) *Np->inout_p = n->character; diff --git a/src/test/regress/expected/numeric.out b/src/test/regress/expected/numeric.out index 7e55b0e293..a96bfc0eb0 100644 --- a/src/test/regress/expected/numeric.out +++ b/src/test/regress/expected/numeric.out @@ -1219,6 +1219,7 @@ SELECT '' AS to_char_26, to_char('100'::numeric, 'FM999'); -- TO_NUMBER() -- +SET lc_numeric = 'C'; SELECT '' AS to_number_1, to_number('-34,338,492', '99G999G999'); to_number_1 | to_number -------------+----------- @@ -1297,6 +1298,61 @@ SELECT '' AS to_number_13, to_number(' . 0 1-', ' 9 9 . 
9 9 S'); | -0.01 (1 row) +SELECT '' AS to_number_14, to_number('34,50','999,99'); + to_number_14 | to_number +--------------+----------- + | 3450 +(1 row) + +SELECT '' AS to_number_15, to_number('123,000','999G'); + to_number_15 | to_number +--------------+----------- + | 123 +(1 row) + +SELECT '' AS to_number_16, to_number('123456','999G999'); + to_number_16 | to_number +--------------+----------- + | 123456 +(1 row) + +SELECT '' AS to_number_17, to_number('$1234.56','L9,999.99'); + to_number_17 | to_number +--------------+----------- + | 1234.56 +(1 row) + +SELECT '' AS to_number_18, to_number('$1234.56','L99,999.99'); + to_number_18 | to_number +--------------+----------- + | 1234.56 +(1 row) + +SELECT '' AS to_number_19, to_number('$1,234.56','L99,999.99'); + to_number_19 | to_number +--------------+----------- + | 1234.56 +(1 row) + +SELECT '' AS to_number_20, to_number('1234.56','L99,999.99'); + to_number_20 | to_number +--------------+----------- + | 1234.56 +(1 row) + +SELECT '' AS to_number_21, to_number('1,234.56','L99,999.99'); + to_number_21 | to_number +--------------+----------- + | 1234.56 +(1 row) + +SELECT '' AS to_number_22, to_number('42nd', '99th'); + to_number_22 | to_number +--------------+----------- + | 42 +(1 row) + +RESET lc_numeric; -- -- Input syntax -- diff --git a/src/test/regress/sql/numeric.sql b/src/test/regress/sql/numeric.sql index 9675b6eabf..321c7bdf7c 100644 --- a/src/test/regress/sql/numeric.sql +++ b/src/test/regress/sql/numeric.sql @@ -788,6 +788,7 @@ SELECT '' AS to_char_26, to_char('100'::numeric, 'FM999'); -- TO_NUMBER() -- +SET lc_numeric = 'C'; SELECT '' AS to_number_1, to_number('-34,338,492', '99G999G999'); SELECT '' AS to_number_2, to_number('-34,338,492.654,878', '99G999G999D999G999'); SELECT '' AS to_number_3, to_number('<564646.654564>', '999999.999999PR'); @@ -801,6 +802,16 @@ SELECT '' AS to_number_10, to_number('0', '99.99'); SELECT '' AS to_number_11, to_number('.-01', 'S99.99'); SELECT '' AS to_number_12, to_number('.01-', '99.99S'); SELECT '' AS to_number_13, to_number(' . 0 1-', ' 9 9 . 9 9 S'); +SELECT '' AS to_number_14, to_number('34,50','999,99'); +SELECT '' AS to_number_15, to_number('123,000','999G'); +SELECT '' AS to_number_16, to_number('123456','999G999'); +SELECT '' AS to_number_17, to_number('$1234.56','L9,999.99'); +SELECT '' AS to_number_18, to_number('$1234.56','L99,999.99'); +SELECT '' AS to_number_19, to_number('$1,234.56','L99,999.99'); +SELECT '' AS to_number_20, to_number('1234.56','L99,999.99'); +SELECT '' AS to_number_21, to_number('1,234.56','L99,999.99'); +SELECT '' AS to_number_22, to_number('42nd', '99th'); +RESET lc_numeric; -- -- Input syntax From ac3b9626812b1dd1482ec201711f26af733800f9 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 17 Nov 2017 12:46:52 -0500 Subject: [PATCH 0564/1087] Provide modern examples of how to auto-start Postgres on macOS. The scripts in contrib/start-scripts/osx don't work at all on macOS 10.10 (Yosemite) or later, because they depend on SystemStarter which Apple deprecated long ago and removed in 10.10. Add a new subdirectory contrib/start-scripts/macos with scripts that use the newer launchd infrastructure. Since this problem is independent of which Postgres version you're using, back-patch to all supported branches. 
Discussion: https://postgr.es/m/31338.1510763554@sss.pgh.pa.us
---
 contrib/start-scripts/macos/README            | 24 ++++++++++++++++++
 .../macos/org.postgresql.postgres.plist       | 17 +++++++++++++
 .../start-scripts/macos/postgres-wrapper.sh   | 25 +++++++++++++++++++
 contrib/start-scripts/osx/README              |  5 ++++
 4 files changed, 71 insertions(+)
 create mode 100644 contrib/start-scripts/macos/README
 create mode 100644 contrib/start-scripts/macos/org.postgresql.postgres.plist
 create mode 100644 contrib/start-scripts/macos/postgres-wrapper.sh

diff --git a/contrib/start-scripts/macos/README b/contrib/start-scripts/macos/README
new file mode 100644
index 0000000000..c4f2d9a270
--- /dev/null
+++ b/contrib/start-scripts/macos/README
@@ -0,0 +1,24 @@
+To make macOS automatically launch your PostgreSQL server at system start,
+do the following:
+
+1. Edit the postgres-wrapper.sh script and adjust the file path
+variables at its start to reflect where you have installed Postgres,
+if that's not /usr/local/pgsql.
+
+2. Copy the modified postgres-wrapper.sh script into some suitable
+installation directory.  It can be, but doesn't have to be, where
+you keep the Postgres executables themselves.
+
+3. Edit the org.postgresql.postgres.plist file and adjust its path
+for postgres-wrapper.sh to match what you did in step 2.  Also,
+if you plan to run the Postgres server under some user name other
+than "postgres", adjust the UserName parameter value for that.
+
+4. Copy the modified org.postgresql.postgres.plist file into
+/Library/LaunchDaemons/.  You must do this as root:
+    sudo cp org.postgresql.postgres.plist /Library/LaunchDaemons
+because the file will be ignored if it is not root-owned.
+
+At this point a reboot should launch the server.  But if you want
+to test it without rebooting, you can do
+    sudo launchctl load /Library/LaunchDaemons/org.postgresql.postgres.plist
diff --git a/contrib/start-scripts/macos/org.postgresql.postgres.plist b/contrib/start-scripts/macos/org.postgresql.postgres.plist
new file mode 100644
index 0000000000..fdbd74f27d
--- /dev/null
+++ b/contrib/start-scripts/macos/org.postgresql.postgres.plist
@@ -0,0 +1,17 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+  <key>Label</key>
+  <string>org.postgresql.postgres</string>
+  <key>ProgramArguments</key>
+  <array>
+    <string>/bin/sh</string>
+    <string>/usr/local/pgsql/bin/postgres-wrapper.sh</string>
+  </array>
+  <key>UserName</key>
+  <string>postgres</string>
+  <key>KeepAlive</key>
+  <true/>
+</dict>
+</plist>
diff --git a/contrib/start-scripts/macos/postgres-wrapper.sh b/contrib/start-scripts/macos/postgres-wrapper.sh
new file mode 100644
index 0000000000..3a4ebdaf0f
--- /dev/null
+++ b/contrib/start-scripts/macos/postgres-wrapper.sh
@@ -0,0 +1,25 @@
+#!/bin/sh
+
+# PostgreSQL server start script (launched by org.postgresql.postgres.plist)
+
+# edit these as needed:
+
+# directory containing postgres executable:
+PGBINDIR="/usr/local/pgsql/bin"
+# data directory:
+PGDATA="/usr/local/pgsql/data"
+# file to receive postmaster's initial log messages:
+PGLOGFILE="${PGDATA}/pgstart.log"
+
+# (it's advisable to enable the Postgres logging_collector feature
+# so that PGLOGFILE doesn't grow without bound)
+
+
+# set umask to ensure PGLOGFILE is not created world-readable
+umask 077
+
+# wait for networking to be up (else server may not bind to desired ports)
+/usr/sbin/ipconfig waitall
+
+# and launch the server
+exec "$PGBINDIR"/postgres -D "$PGDATA" >>"$PGLOGFILE" 2>&1
diff --git a/contrib/start-scripts/osx/README b/contrib/start-scripts/osx/README
index 97e299f7da..9faf5a4a1c 100644
--- a/contrib/start-scripts/osx/README
+++ b/contrib/start-scripts/osx/README
@@ -1,3 +1,8 @@
+The scripts in this directory are for use with Apple's
SystemStarter +infrastructure, which is deprecated since macOS 10.4 and is gone entirely +as of 10.10. You should use the scripts in ../macos instead, unless +you are using a macOS release too old to have launchd. + To install execute the following: sudo /bin/sh ./install.sh From 527878635030489e464d965b3b64f6caf178f641 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 17 Nov 2017 12:53:20 -0500 Subject: [PATCH 0565/1087] Remove contrib/start-scripts/osx/. Since those scripts haven't worked at all in macOS releases of 2014 and later, and aren't the recommended way to do it on any release since 2005, there seems little point carrying them into the future. It's very unlikely that anyone would be installing PG >= 11 on a macOS release where they couldn't use contrib/start-scripts/macos/. Discussion: https://postgr.es/m/31338.1510763554@sss.pgh.pa.us --- contrib/start-scripts/osx/PostgreSQL | 111 ------------------ contrib/start-scripts/osx/README | 8 -- .../start-scripts/osx/StartupParameters.plist | 33 ------ contrib/start-scripts/osx/install.sh | 10 -- 4 files changed, 162 deletions(-) delete mode 100755 contrib/start-scripts/osx/PostgreSQL delete mode 100644 contrib/start-scripts/osx/README delete mode 100644 contrib/start-scripts/osx/StartupParameters.plist delete mode 100755 contrib/start-scripts/osx/install.sh diff --git a/contrib/start-scripts/osx/PostgreSQL b/contrib/start-scripts/osx/PostgreSQL deleted file mode 100755 index 7ac12bb9e3..0000000000 --- a/contrib/start-scripts/osx/PostgreSQL +++ /dev/null @@ -1,111 +0,0 @@ -#!/bin/sh - -## -# PostgreSQL RDBMS Server -## - -# PostgreSQL boot time startup script for OS X. To install, change -# the "prefix", "PGDATA", "PGUSER", and "PGLOG" variables below as -# necessary. Next, create a new directory, "/Library/StartupItems/PostgreSQL". -# Then copy this script and the accompanying "StartupParameters.plist" file -# into that directory. The name of this script file *must* be the same as the -# directory it is in. So you'll end up with these two files: -# -# /Library/StartupItems/PostgreSQL/PostgreSQL -# /Library/StartupItems/PostgreSQL/StartupParameters.plist -# -# Next, add this line to the /etc/hostconfig file: -# -# POSTGRESQL=-YES- -# -# The startup bundle will now be ready to go. To prevent this script from -# starting PostgreSQL at system startup, simply change that line in -# /etc/hostconfig back to: -# -# POSTGRESQL=-NO- -# -# Created by David Wheeler, 2002 - -# modified by Ray Aspeitia 12-03-2003 : -# added log rotation script to db startup -# modified StartupParameters.plist "Provides" parameter to make it easier to -# start and stop with the SystemStarter utility - -# use the below command in order to correctly start/stop/restart PG with log rotation script: -# SystemStarter [start|stop|restart] PostgreSQL - -################################################################################ -## EDIT FROM HERE -################################################################################ - -# Installation prefix -prefix="/usr/local/pgsql" - -# Data directory -PGDATA="/usr/local/pgsql/data" - -# Who to run the postmaster as, usually "postgres". 
(NOT "root") -PGUSER="postgres" - -# the logfile path and name (NEEDS to be writeable by PGUSER) -PGLOG="${PGDATA}/logs/logfile" - -# do you want to rotate the log files, 1=true 0=false -ROTATELOGS=1 - -# logfile rotate in seconds -ROTATESEC="604800" - - -################################################################################ -## STOP EDITING HERE -################################################################################ - -# The path that is to be used for the script -PATH="$prefix/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" - -# What to use to start up the postmaster. (If you want the script to wait -# until the server has started, you could use "pg_ctl start" here.) -DAEMON="$prefix/bin/postmaster" - -# What to use to shut down the postmaster -PGCTL="$prefix/bin/pg_ctl" - -# The apache log rotation utility -LOGUTIL="/usr/sbin/rotatelogs" - -. /etc/rc.common - -StartService () { - if [ "${POSTGRESQL:=-NO-}" = "-YES-" ]; then - ConsoleMessage "Starting PostgreSQL database server" - if [ "${ROTATELOGS}" = "1" ]; then - sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' 2>&1 | ${LOGUTIL} \"${PGLOG}\" ${ROTATESEC} &" - else - sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' >>\"$PGLOG\" 2>&1 &" - fi - fi -} - -StopService () { - ConsoleMessage "Stopping PostgreSQL database server" - sudo -u $PGUSER sh -c "$PGCTL stop -D '${PGDATA}' -s" -} - -RestartService () { - if [ "${POSTGRESQL:=-NO-}" = "-YES-" ]; then - ConsoleMessage "Restarting PostgreSQL database server" - # should match StopService: - sudo -u $PGUSER sh -c "$PGCTL stop -D '${PGDATA}' -s" - # should match StartService: - if [ "${ROTATELOGS}" = "1" ]; then - sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' 2>&1 | ${LOGUTIL} \"${PGLOG}\" ${ROTATESEC} &" - else - sudo -u $PGUSER sh -c "${DAEMON} -D '${PGDATA}' >>\"$PGLOG\" 2>&1 &" - fi - else - StopService - fi -} - -RunService "$1" diff --git a/contrib/start-scripts/osx/README b/contrib/start-scripts/osx/README deleted file mode 100644 index 9faf5a4a1c..0000000000 --- a/contrib/start-scripts/osx/README +++ /dev/null @@ -1,8 +0,0 @@ -The scripts in this directory are for use with Apple's SystemStarter -infrastructure, which is deprecated since macOS 10.4 and is gone entirely -as of 10.10. You should use the scripts in ../macos instead, unless -you are using a macOS release too old to have launchd. 
-
-To install execute the following:
-
-sudo /bin/sh ./install.sh
diff --git a/contrib/start-scripts/osx/StartupParameters.plist b/contrib/start-scripts/osx/StartupParameters.plist
deleted file mode 100644
index 6c788d0dda..0000000000
--- a/contrib/start-scripts/osx/StartupParameters.plist
+++ /dev/null
@@ -1,33 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-<dict>
-  <key>Description</key>
-  <string>PostgreSQL Database Server</string>
-  <key>Messages</key>
-  <dict>
-    <key>start</key>
-    <string>Starting PostgreSQL database server</string>
-    <key>stop</key>
-    <string>Stopping PostgreSQL database server</string>
-    <key>restart</key>
-    <string>Restarting PostgreSQL database server</string>
-  </dict>
-  <key>OrderPreference</key>
-  <string>Late</string>
-  <key>Provides</key>
-  <array>
-    <string>PostgreSQL</string>
-  </array>
-  <key>Requires</key>
-  <array>
-    <string>Disks</string>
-    <string>Resolver</string>
-  </array>
-  <key>Uses</key>
-  <array>
-    <string>NFS</string>
-    <string>NetworkTime</string>
-  </array>
-</dict>
-</plist>
diff --git a/contrib/start-scripts/osx/install.sh b/contrib/start-scripts/osx/install.sh
deleted file mode 100755
index bbc5ee3926..0000000000
--- a/contrib/start-scripts/osx/install.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-sudo sh -c 'echo "POSTGRESQL=-YES-" >> /etc/hostconfig'
-sudo mkdir /Library/StartupItems/PostgreSQL
-sudo cp PostgreSQL /Library/StartupItems/PostgreSQL
-sudo cp StartupParameters.plist /Library/StartupItems/PostgreSQL
-if [ -e /Library/StartupItems/PostgreSQL/PostgreSQL ]
-then
-  echo "Startup Item Installed Successfully . . . "
-  echo "Starting PostgreSQL Server . . . "
-  SystemStarter restart PostgreSQL
-fi

From 611fe7d4793ba6516e839dc50b5319b990283f4f Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Fri, 17 Nov 2017 14:52:00 -0500
Subject: [PATCH 0566/1087] Update postgresql.conf.sample comment for
 bgwriter_lru_maxpages

Commit 14ca9abfbe4643408ad6ed3279f2f6366cafb3f1 should have done this,
but did not.

Jeff Janes

Discussion: http://postgr.es/m/CAMkU=1yWOvL+YFYzGM9yXSoWjxr_5_Ny78pPzLKQCkfgB7H-JQ@mail.gmail.com
---
 src/backend/utils/misc/postgresql.conf.sample | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 7f942ccb38..16ffbbeea8 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -153,7 +153,7 @@
 # - Background Writer -

 #bgwriter_delay = 200ms			# 10-10000ms between rounds
-#bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
+#bgwriter_lru_maxpages = 100		# max buffers written/round, 0 disables
 #bgwriter_lru_multiplier = 2.0		# 0-10.0 multiplier on buffers scanned/round
 #bgwriter_flush_after = 0		# measured in pages, 0 disables

From 9288d62bb4b6f302bf13bb2fed3783b61385f315 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Sat, 18 Nov 2017 10:07:57 -0500
Subject: [PATCH 0567/1087] Support channel binding 'tls-unique' in SCRAM

This is the basic feature set using OpenSSL to support the feature.  In
order to allow the frontend and the backend to fetch the sent and
expected TLS Finished messages, a PG-like API is added so that the
interface can be made pluggable for other SSL implementations.

This commit also adds infrastructure to facilitate the addition of
future channel binding types, as well as libpq parameters to control
the SASL mechanism names and channel binding names.  Those will be
added by upcoming commits.

Some tests are added to the SSL test suite to test SCRAM authentication
with channel binding.
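
For concreteness, here is a minimal sketch (not code from this patch,
though it relies on the same pg_b64_* routines the patch uses) of how
the base64 channel-binding value carried in the client-final-message's
"c=" attribute is formed for tls-unique, per RFC 5802: the gs2 header
"p=tls-unique,," followed by the raw TLS Finished message, then
base64-encoded.  build_tls_unique_cbind is an illustrative name only.

    #include "postgres.h"
    #include "common/base64.h"

    /*
     * Compute the expected "c=" value from the raw TLS Finished message.
     * Both ends derive this value; the server compares its result against
     * the one the client sent.
     */
    static char *
    build_tls_unique_cbind(const char *tls_finished, size_t tls_finished_len)
    {
        const char *gs2_header = "p=tls-unique,,";
        size_t      header_len = strlen(gs2_header);
        size_t      input_len = header_len + tls_finished_len;
        char       *cbind_input = palloc(input_len);
        char       *b64;
        int         b64_len;

        memcpy(cbind_input, gs2_header, header_len);
        memcpy(cbind_input + header_len, tls_finished, tls_finished_len);

        b64 = palloc(pg_b64_enc_len(input_len) + 1);
        b64_len = pg_b64_encode(cbind_input, input_len, b64);
        b64[b64_len] = '\0';

        pfree(cbind_input);
        return b64;
    }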
Author: Michael Paquier Reviewed-by: Peter Eisentraut --- doc/src/sgml/protocol.sgml | 31 ++-- src/backend/libpq/auth-scram.c | 181 +++++++++++++++++++---- src/backend/libpq/auth.c | 54 +++++-- src/backend/libpq/be-secure-openssl.c | 24 +++ src/include/libpq/libpq-be.h | 1 + src/include/libpq/scram.h | 10 +- src/interfaces/libpq/fe-auth-scram.c | 170 ++++++++++++++++++--- src/interfaces/libpq/fe-auth.c | 90 +++++++---- src/interfaces/libpq/fe-auth.h | 7 +- src/interfaces/libpq/fe-secure-openssl.c | 27 ++++ src/interfaces/libpq/libpq-int.h | 5 +- src/test/ssl/ServerSetup.pm | 27 ++-- src/test/ssl/t/001_ssltests.pl | 2 +- src/test/ssl/t/002_scram.pl | 38 +++++ 14 files changed, 555 insertions(+), 112 deletions(-) create mode 100644 src/test/ssl/t/002_scram.pl diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 6d4dcf83ac..4d3b6446c4 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -1461,10 +1461,11 @@ SELCT 1/0; SASL is a framework for authentication in connection-oriented -protocols. At the moment, PostgreSQL implements only one SASL -authentication mechanism, SCRAM-SHA-256, but more might be added in the -future. The below steps illustrate how SASL authentication is performed in -general, while the next subsection gives more details on SCRAM-SHA-256. +protocols. At the moment, PostgreSQL implements two SASL +authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More +might be added in the future. The below steps illustrate how SASL +authentication is performed in general, while the next subsection gives +more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. @@ -1518,9 +1519,10 @@ ErrorMessage. SCRAM-SHA-256 authentication - SCRAM-SHA-256 (called just SCRAM from now on) is - the only implemented SASL mechanism, at the moment. It is described in detail - in RFC 7677 and RFC 5802. + The implemented SASL mechanisms at the moment + are SCRAM-SHA-256 and its variant with channel + binding SCRAM-SHA-256-PLUS. They are described in + detail in RFC 7677 and RFC 5802. @@ -1547,7 +1549,10 @@ the password is in. -Channel binding has not been implemented yet. +Channel binding is supported in PostgreSQL builds with +SSL support. The SASL mechanism name for SCRAM with channel binding +is SCRAM-SHA-256-PLUS. The only channel binding type +supported at the moment is tls-unique, defined in RFC 5929. @@ -1556,13 +1561,19 @@ the password is in. The server sends an AuthenticationSASL message. It includes a list of SASL authentication mechanisms that the server can accept. + This will be SCRAM-SHA-256-PLUS + and SCRAM-SHA-256 if the server is built with SSL + support, or else just the latter. The client responds by sending a SASLInitialResponse message, which - indicates the chosen mechanism, SCRAM-SHA-256. In the Initial - Client response field, the message contains the SCRAM + indicates the chosen mechanism, SCRAM-SHA-256 or + SCRAM-SHA-256-PLUS. (A client is free to choose either + mechanism, but for better security it should choose the channel-binding + variant if it can support it.) In the Initial Client response field, + the message contains the SCRAM client-first-message. diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c index ec4bb9a88e..22103ce479 100644 --- a/src/backend/libpq/auth-scram.c +++ b/src/backend/libpq/auth-scram.c @@ -17,8 +17,6 @@ * by the SASLprep profile, we skip the SASLprep pre-processing and use * the raw bytes in calculating the hash. * - * - Channel binding is not supported yet. 
- * * * The password stored in pg_authid consists of the iteration count, salt, * StoredKey and ServerKey. @@ -112,6 +110,11 @@ typedef struct const char *username; /* username from startup packet */ + bool ssl_in_use; + const char *tls_finished_message; + size_t tls_finished_len; + char *channel_binding_type; + int iterations; char *salt; /* base64-encoded */ uint8 StoredKey[SCRAM_KEY_LEN]; @@ -168,7 +171,11 @@ static char *scram_mock_salt(const char *username); * it will fail, as if an incorrect password was given. */ void * -pg_be_scram_init(const char *username, const char *shadow_pass) +pg_be_scram_init(const char *username, + const char *shadow_pass, + bool ssl_in_use, + const char *tls_finished_message, + size_t tls_finished_len) { scram_state *state; bool got_verifier; @@ -176,6 +183,10 @@ pg_be_scram_init(const char *username, const char *shadow_pass) state = (scram_state *) palloc0(sizeof(scram_state)); state->state = SCRAM_AUTH_INIT; state->username = username; + state->ssl_in_use = ssl_in_use; + state->tls_finished_message = tls_finished_message; + state->tls_finished_len = tls_finished_len; + state->channel_binding_type = NULL; /* * Parse the stored password verifier. @@ -773,31 +784,89 @@ read_client_first_message(scram_state *state, char *input) *------ */ - /* read gs2-cbind-flag */ + /* + * Read gs2-cbind-flag. (For details see also RFC 5802 Section 6 "Channel + * Binding".) + */ switch (*input) { case 'n': - /* Client does not support channel binding */ + /* + * The client does not support channel binding or has simply + * decided to not use it. In that case just let it go. + */ + input++; + if (*input != ',') + ereport(ERROR, + (errcode(ERRCODE_PROTOCOL_VIOLATION), + errmsg("malformed SCRAM message"), + errdetail("Comma expected, but found character \"%s\".", + sanitize_char(*input)))); input++; break; case 'y': - /* Client supports channel binding, but we're not doing it today */ + /* + * The client supports channel binding and thinks that the server + * does not. In this case, the server must fail authentication if + * it supports channel binding, which in this implementation is + * the case if a connection is using SSL. + */ + if (state->ssl_in_use) + ereport(ERROR, + (errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION), + errmsg("SCRAM channel binding negotiation error"), + errdetail("The client supports SCRAM channel binding but thinks the server does not. " + "However, this server does support channel binding."))); + input++; + if (*input != ',') + ereport(ERROR, + (errcode(ERRCODE_PROTOCOL_VIOLATION), + errmsg("malformed SCRAM message"), + errdetail("Comma expected, but found character \"%s\".", + sanitize_char(*input)))); input++; break; case 'p': - /* - * Client requires channel binding. We don't support it. - * - * RFC 5802 specifies a particular error code, - * e=server-does-support-channel-binding, for this. But it can - * only be sent in the server-final message, and we don't want to - * go through the motions of the authentication, knowing it will - * fail, just to send that error message. + * The client requires channel binding. Channel binding type + * follows, e.g., "p=tls-unique". */ - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("client requires SCRAM channel binding, but it is not supported"))); + { + char *channel_binding_type; + + if (!state->ssl_in_use) + { + /* + * Without SSL, we don't support channel binding. + * + * RFC 5802 specifies a particular error code, + * e=server-does-support-channel-binding, for this. 
But + * it can only be sent in the server-final message, and we + * don't want to go through the motions of the + * authentication, knowing it will fail, just to send that + * error message. + */ + ereport(ERROR, + (errcode(ERRCODE_PROTOCOL_VIOLATION), + errmsg("client requires SCRAM channel binding, but it is not supported"))); + } + + /* + * Read value provided by client; only tls-unique is supported + * for now. (It is not safe to print the name of an + * unsupported binding type in the error message. Pranksters + * could print arbitrary strings into the log that way.) + */ + channel_binding_type = read_attr_value(&input, 'p'); + if (strcmp(channel_binding_type, SCRAM_CHANNEL_BINDING_TLS_UNIQUE) != 0) + ereport(ERROR, + (errcode(ERRCODE_PROTOCOL_VIOLATION), + (errmsg("unsupported SCRAM channel-binding type")))); + + /* Save the name for handling of subsequent messages */ + state->channel_binding_type = pstrdup(channel_binding_type); + } + break; default: ereport(ERROR, (errcode(ERRCODE_PROTOCOL_VIOLATION), @@ -805,13 +874,6 @@ read_client_first_message(scram_state *state, char *input) errdetail("Unexpected channel-binding flag \"%s\".", sanitize_char(*input)))); } - if (*input != ',') - ereport(ERROR, - (errcode(ERRCODE_PROTOCOL_VIOLATION), - errmsg("malformed SCRAM message"), - errdetail("Comma expected, but found character \"%s\".", - sanitize_char(*input)))); - input++; /* * Forbid optional authzid (authorization identity). We don't support it. @@ -1032,14 +1094,73 @@ read_client_final_message(scram_state *state, char *input) */ /* - * Read channel-binding. We don't support channel binding, so it's - * expected to always be "biws", which is "n,,", base64-encoded. + * Read channel binding. This repeats the channel-binding flags and is + * then followed by the actual binding data depending on the type. */ channel_binding = read_attr_value(&p, 'c'); - if (strcmp(channel_binding, "biws") != 0) - ereport(ERROR, - (errcode(ERRCODE_PROTOCOL_VIOLATION), - (errmsg("unexpected SCRAM channel-binding attribute in client-final-message")))); + if (state->channel_binding_type) + { + const char *cbind_data = NULL; + size_t cbind_data_len = 0; + size_t cbind_header_len; + char *cbind_input; + size_t cbind_input_len; + char *b64_message; + int b64_message_len; + + /* + * Fetch data appropriate for channel binding type + */ + if (strcmp(state->channel_binding_type, SCRAM_CHANNEL_BINDING_TLS_UNIQUE) == 0) + { + cbind_data = state->tls_finished_message; + cbind_data_len = state->tls_finished_len; + } + else + { + /* should not happen */ + elog(ERROR, "invalid channel binding type"); + } + + /* should not happen */ + if (cbind_data == NULL || cbind_data_len == 0) + elog(ERROR, "empty channel binding data for channel binding type \"%s\"", + state->channel_binding_type); + + cbind_header_len = 4 + strlen(state->channel_binding_type); /* p=type,, */ + cbind_input_len = cbind_header_len + cbind_data_len; + cbind_input = palloc(cbind_input_len); + snprintf(cbind_input, cbind_input_len, "p=%s,,", state->channel_binding_type); + memcpy(cbind_input + cbind_header_len, cbind_data, cbind_data_len); + + b64_message = palloc(pg_b64_enc_len(cbind_input_len) + 1); + b64_message_len = pg_b64_encode(cbind_input, cbind_input_len, + b64_message); + b64_message[b64_message_len] = '\0'; + + /* + * Compare the value sent by the client with the value expected by + * the server. 
+ */ + if (strcmp(channel_binding, b64_message) != 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION), + (errmsg("SCRAM channel binding check failed")))); + } + else + { + /* + * If we are not using channel binding, the binding data is expected + * to always be "biws", which is "n,," base64-encoded, or "eSws", + * which is "y,,". + */ + if (strcmp(channel_binding, "biws") != 0 && + strcmp(channel_binding, "eSws") != 0) + ereport(ERROR, + (errcode(ERRCODE_PROTOCOL_VIOLATION), + (errmsg("unexpected SCRAM channel-binding attribute in client-final-message")))); + } + state->client_final_nonce = read_attr_value(&p, 'r'); /* ignore optional extensions */ diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 6c915a7289..2dd3328d71 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -860,6 +860,8 @@ CheckMD5Auth(Port *port, char *shadow_pass, char **logdetail) static int CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) { + char *sasl_mechs; + char *p; int mtype; StringInfoData buf; void *scram_opaq; @@ -869,6 +871,8 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) int inputlen; int result; bool initial; + char *tls_finished = NULL; + size_t tls_finished_len = 0; /* * SASL auth is not supported for protocol versions before 3, because it @@ -885,12 +889,39 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) /* * Send the SASL authentication request to user. It includes the list of - * authentication mechanisms (which is trivial, because we only support - * SCRAM-SHA-256 at the moment). The extra "\0" is for an empty string to - * terminate the list. + * authentication mechanisms that are supported. The order of mechanisms + * is advertised in decreasing order of importance. So the + * channel-binding variants go first, if they are supported. Channel + * binding is only supported in SSL builds. */ - sendAuthRequest(port, AUTH_REQ_SASL, SCRAM_SHA256_NAME "\0", - strlen(SCRAM_SHA256_NAME) + 2); + sasl_mechs = palloc(strlen(SCRAM_SHA256_PLUS_NAME) + + strlen(SCRAM_SHA256_NAME) + 3); + p = sasl_mechs; + + if (port->ssl_in_use) + { + strcpy(p, SCRAM_SHA256_PLUS_NAME); + p += strlen(SCRAM_SHA256_PLUS_NAME) + 1; + } + + strcpy(p, SCRAM_SHA256_NAME); + p += strlen(SCRAM_SHA256_NAME) + 1; + + /* Put another '\0' to mark that list is finished. */ + p[0] = '\0'; + + sendAuthRequest(port, AUTH_REQ_SASL, sasl_mechs, p - sasl_mechs + 1); + pfree(sasl_mechs); + +#ifdef USE_SSL + /* + * Get data for channel binding. + */ + if (port->ssl_in_use) + { + tls_finished = be_tls_get_peer_finished(port, &tls_finished_len); + } +#endif /* * Initialize the status tracker for message exchanges. @@ -903,7 +934,11 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) * This is because we don't want to reveal to an attacker what usernames * are valid, nor which users have a valid password. */ - scram_opaq = pg_be_scram_init(port->user_name, shadow_pass); + scram_opaq = pg_be_scram_init(port->user_name, + shadow_pass, + port->ssl_in_use, + tls_finished, + tls_finished_len); /* * Loop through SASL message exchange. This exchange can consist of @@ -951,12 +986,9 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) { const char *selected_mech; - /* - * We only support SCRAM-SHA-256 at the moment, so anything else - * is an error. 
- */ selected_mech = pq_getmsgrawstring(&buf); - if (strcmp(selected_mech, SCRAM_SHA256_NAME) != 0) + if (strcmp(selected_mech, SCRAM_SHA256_NAME) != 0 && + strcmp(selected_mech, SCRAM_SHA256_PLUS_NAME) != 0) { ereport(ERROR, (errcode(ERRCODE_PROTOCOL_VIOLATION), diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index fe15227a77..1e3e19f5e0 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -1215,6 +1215,30 @@ be_tls_get_peerdn_name(Port *port, char *ptr, size_t len) ptr[0] = '\0'; } +/* + * Routine to get the expected TLS Finished message information from the + * client, useful for authorization when doing channel binding. + * + * Result is a palloc'd copy of the TLS Finished message with its size. + */ +char * +be_tls_get_peer_finished(Port *port, size_t *len) +{ + char dummy[1]; + char *result; + + /* + * OpenSSL does not offer an API to directly get the length of the + * expected TLS Finished message, so just do a dummy call to grab this + * information to allow caller to do an allocation with a correct size. + */ + *len = SSL_get_peer_finished(port->ssl, dummy, sizeof(dummy)); + result = palloc(*len); + (void) SSL_get_peer_finished(port->ssl, result, *len); + + return result; +} + /* * Convert an X509 subject name to a cstring. * diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h index 7bde744d51..856e0439d5 100644 --- a/src/include/libpq/libpq-be.h +++ b/src/include/libpq/libpq-be.h @@ -209,6 +209,7 @@ extern bool be_tls_get_compression(Port *port); extern void be_tls_get_version(Port *port, char *ptr, size_t len); extern void be_tls_get_cipher(Port *port, char *ptr, size_t len); extern void be_tls_get_peerdn_name(Port *port, char *ptr, size_t len); +extern char *be_tls_get_peer_finished(Port *port, size_t *len); #endif extern ProtocolVersion FrontendProtocol; diff --git a/src/include/libpq/scram.h b/src/include/libpq/scram.h index 0166e1945d..99560d3d2f 100644 --- a/src/include/libpq/scram.h +++ b/src/include/libpq/scram.h @@ -13,8 +13,12 @@ #ifndef PG_SCRAM_H #define PG_SCRAM_H -/* Name of SCRAM-SHA-256 per IANA */ +/* Name of SCRAM mechanisms per IANA */ #define SCRAM_SHA256_NAME "SCRAM-SHA-256" +#define SCRAM_SHA256_PLUS_NAME "SCRAM-SHA-256-PLUS" /* with channel binding */ + +/* Channel binding types */ +#define SCRAM_CHANNEL_BINDING_TLS_UNIQUE "tls-unique" /* Status codes for message exchange */ #define SASL_EXCHANGE_CONTINUE 0 @@ -22,7 +26,9 @@ #define SASL_EXCHANGE_FAILURE 2 /* Routines dedicated to authentication */ -extern void *pg_be_scram_init(const char *username, const char *shadow_pass); +extern void *pg_be_scram_init(const char *username, const char *shadow_pass, + bool ssl_in_use, const char *tls_finished_message, + size_t tls_finished_len); extern int pg_be_scram_exchange(void *opaq, char *input, int inputlen, char **output, int *outputlen, char **logdetail); diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index edfd42df85..f2403147ca 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -17,6 +17,7 @@ #include "common/base64.h" #include "common/saslprep.h" #include "common/scram-common.h" +#include "libpq/scram.h" #include "fe-auth.h" /* These are needed for getpid(), in the fallback implementation */ @@ -44,6 +45,11 @@ typedef struct /* These are supplied by the user */ const char *username; char *password; + bool ssl_in_use; + char *tls_finished_message; + size_t 
tls_finished_len; + char *sasl_mechanism; + const char *channel_binding_type; /* We construct these */ uint8 SaltedPassword[SCRAM_KEY_LEN]; @@ -79,25 +85,50 @@ static bool pg_frontend_random(char *dst, int len); /* * Initialize SCRAM exchange status. + * + * The non-const char* arguments should be passed in malloc'ed. They will be + * freed by pg_fe_scram_free(). */ void * -pg_fe_scram_init(const char *username, const char *password) +pg_fe_scram_init(const char *username, + const char *password, + bool ssl_in_use, + const char *sasl_mechanism, + char *tls_finished_message, + size_t tls_finished_len) { fe_scram_state *state; char *prep_password; pg_saslprep_rc rc; + Assert(sasl_mechanism != NULL); + state = (fe_scram_state *) malloc(sizeof(fe_scram_state)); if (!state) return NULL; memset(state, 0, sizeof(fe_scram_state)); state->state = FE_SCRAM_INIT; state->username = username; + state->ssl_in_use = ssl_in_use; + state->tls_finished_message = tls_finished_message; + state->tls_finished_len = tls_finished_len; + state->sasl_mechanism = strdup(sasl_mechanism); + if (!state->sasl_mechanism) + { + free(state); + return NULL; + } + + /* + * Store channel binding type. Only one type is currently supported. + */ + state->channel_binding_type = SCRAM_CHANNEL_BINDING_TLS_UNIQUE; /* Normalize the password with SASLprep, if possible */ rc = pg_saslprep(password, &prep_password); if (rc == SASLPREP_OOM) { + free(state->sasl_mechanism); free(state); return NULL; } @@ -106,6 +137,7 @@ pg_fe_scram_init(const char *username, const char *password) prep_password = strdup(password); if (!prep_password) { + free(state->sasl_mechanism); free(state); return NULL; } @@ -125,6 +157,10 @@ pg_fe_scram_free(void *opaq) if (state->password) free(state->password); + if (state->tls_finished_message) + free(state->tls_finished_message); + if (state->sasl_mechanism) + free(state->sasl_mechanism); /* client messages */ if (state->client_nonce) @@ -297,9 +333,10 @@ static char * build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) { char raw_nonce[SCRAM_RAW_NONCE_LEN + 1]; - char *buf; - char buflen; + char *result; + int channel_info_len; int encoded_len; + PQExpBufferData buf; /* * Generate a "raw" nonce. This is converted to ASCII-printable form by @@ -328,26 +365,61 @@ build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) * prepared with SASLprep, the message parsing would fail if it includes * '=' or ',' characters. */ - buflen = 8 + strlen(state->client_nonce) + 1; - buf = malloc(buflen); - if (buf == NULL) + + initPQExpBuffer(&buf); + + /* + * First build the gs2-header with channel binding information. + */ + if (strcmp(state->sasl_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) { - printfPQExpBuffer(errormessage, - libpq_gettext("out of memory\n")); - return NULL; + Assert(state->ssl_in_use); + appendPQExpBuffer(&buf, "p=%s", state->channel_binding_type); } - snprintf(buf, buflen, "n,,n=,r=%s", state->client_nonce); - - state->client_first_message_bare = strdup(buf + 3); - if (!state->client_first_message_bare) + else if (state->ssl_in_use) { - free(buf); - printfPQExpBuffer(errormessage, - libpq_gettext("out of memory\n")); - return NULL; + /* + * Client supports channel binding, but thinks the server does not. + */ + appendPQExpBuffer(&buf, "y"); } + else + { + /* + * Client does not support channel binding. 
+ */ + appendPQExpBuffer(&buf, "n"); + } + + if (PQExpBufferDataBroken(buf)) + goto oom_error; + + channel_info_len = buf.len; + + appendPQExpBuffer(&buf, ",,n=,r=%s", state->client_nonce); + if (PQExpBufferDataBroken(buf)) + goto oom_error; + + /* + * The first message content needs to be saved without channel binding + * information. + */ + state->client_first_message_bare = strdup(buf.data + channel_info_len + 2); + if (!state->client_first_message_bare) + goto oom_error; + + result = strdup(buf.data); + if (result == NULL) + goto oom_error; + + termPQExpBuffer(&buf); + return result; - return buf; +oom_error: + termPQExpBuffer(&buf); + printfPQExpBuffer(errormessage, + libpq_gettext("out of memory\n")); + return NULL; } /* @@ -366,7 +438,67 @@ build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) * Construct client-final-message-without-proof. We need to remember it * for verifying the server proof in the final step of authentication. */ - appendPQExpBuffer(&buf, "c=biws,r=%s", state->nonce); + if (strcmp(state->sasl_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) + { + char *cbind_data; + size_t cbind_data_len; + size_t cbind_header_len; + char *cbind_input; + size_t cbind_input_len; + + if (strcmp(state->channel_binding_type, SCRAM_CHANNEL_BINDING_TLS_UNIQUE) == 0) + { + cbind_data = state->tls_finished_message; + cbind_data_len = state->tls_finished_len; + } + else + { + /* should not happen */ + termPQExpBuffer(&buf); + printfPQExpBuffer(errormessage, + libpq_gettext("invalid channel binding type\n")); + return NULL; + } + + /* should not happen */ + if (cbind_data == NULL || cbind_data_len == 0) + { + termPQExpBuffer(&buf); + printfPQExpBuffer(errormessage, + libpq_gettext("empty channel binding data for channel binding type \"%s\"\n"), + state->channel_binding_type); + return NULL; + } + + appendPQExpBuffer(&buf, "c="); + + cbind_header_len = 4 + strlen(state->channel_binding_type); /* p=type,, */ + cbind_input_len = cbind_header_len + cbind_data_len; + cbind_input = malloc(cbind_input_len); + if (!cbind_input) + goto oom_error; + snprintf(cbind_input, cbind_input_len, "p=%s,,", state->channel_binding_type); + memcpy(cbind_input + cbind_header_len, cbind_data, cbind_data_len); + + if (!enlargePQExpBuffer(&buf, pg_b64_enc_len(cbind_input_len))) + { + free(cbind_input); + goto oom_error; + } + buf.len += pg_b64_encode(cbind_input, cbind_input_len, buf.data + buf.len); + buf.data[buf.len] = '\0'; + + free(cbind_input); + } + else if (state->ssl_in_use) + appendPQExpBuffer(&buf, "c=eSws"); /* base64 of "y,," */ + else + appendPQExpBuffer(&buf, "c=biws"); /* base64 of "n,," */ + + if (PQExpBufferDataBroken(buf)) + goto oom_error; + + appendPQExpBuffer(&buf, ",r=%s", state->nonce); if (PQExpBufferDataBroken(buf)) goto oom_error; diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c index 382558f3f8..9d394919ef 100644 --- a/src/interfaces/libpq/fe-auth.c +++ b/src/interfaces/libpq/fe-auth.c @@ -491,6 +491,9 @@ pg_SASL_init(PGconn *conn, int payloadlen) bool success; const char *selected_mechanism; PQExpBufferData mechanism_buf; + char *tls_finished = NULL; + size_t tls_finished_len = 0; + char *password; initPQExpBuffer(&mechanism_buf); @@ -504,7 +507,8 @@ pg_SASL_init(PGconn *conn, int payloadlen) /* * Parse the list of SASL authentication mechanisms in the * AuthenticationSASL message, and select the best mechanism that we - * support. (Only SCRAM-SHA-256 is supported at the moment.) + * support. 
SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones + * supported at the moment, listed by order of decreasing importance. */ selected_mechanism = NULL; for (;;) @@ -523,35 +527,17 @@ pg_SASL_init(PGconn *conn, int payloadlen) break; /* - * If we have already selected a mechanism, just skip through the rest - * of the list. + * Select the mechanism to use. Pick SCRAM-SHA-256-PLUS over anything + * else. Pick SCRAM-SHA-256 if nothing else has already been picked. + * If we add more mechanisms, a more refined priority mechanism might + * become necessary. */ - if (selected_mechanism) - continue; - - /* - * Do we support this mechanism? - */ - if (strcmp(mechanism_buf.data, SCRAM_SHA256_NAME) == 0) - { - char *password; - - conn->password_needed = true; - password = conn->connhost[conn->whichhost].password; - if (password == NULL) - password = conn->pgpass; - if (password == NULL || password[0] == '\0') - { - printfPQExpBuffer(&conn->errorMessage, - PQnoPasswordSupplied); - goto error; - } - - conn->sasl_state = pg_fe_scram_init(conn->pguser, password); - if (!conn->sasl_state) - goto oom_error; + if (conn->ssl_in_use && + strcmp(mechanism_buf.data, SCRAM_SHA256_PLUS_NAME) == 0) + selected_mechanism = SCRAM_SHA256_PLUS_NAME; + else if (strcmp(mechanism_buf.data, SCRAM_SHA256_NAME) == 0 && + !selected_mechanism) selected_mechanism = SCRAM_SHA256_NAME; - } } if (!selected_mechanism) @@ -561,6 +547,54 @@ pg_SASL_init(PGconn *conn, int payloadlen) goto error; } + /* + * Now that the SASL mechanism has been chosen for the exchange, + * initialize its state information. + */ + + /* + * First, select the password to use for the exchange, complaining if + * there isn't one. Currently, all supported SASL mechanisms require a + * password, so we can just go ahead here without further distinction. + */ + conn->password_needed = true; + password = conn->connhost[conn->whichhost].password; + if (password == NULL) + password = conn->pgpass; + if (password == NULL || password[0] == '\0') + { + printfPQExpBuffer(&conn->errorMessage, + PQnoPasswordSupplied); + goto error; + } + +#ifdef USE_SSL + /* + * Get data for channel binding. + */ + if (strcmp(selected_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) + { + tls_finished = pgtls_get_finished(conn, &tls_finished_len); + if (tls_finished == NULL) + goto oom_error; + } +#endif + + /* + * Initialize the SASL state information with all the information + * gathered during the initial exchange. + * + * Note: Only tls-unique is supported for the moment. 
+ */ + conn->sasl_state = pg_fe_scram_init(conn->pguser, + password, + conn->ssl_in_use, + selected_mechanism, + tls_finished, + tls_finished_len); + if (!conn->sasl_state) + goto oom_error; + /* Get the mechanism-specific Initial Client Response, if any */ pg_fe_scram_exchange(conn->sasl_state, NULL, -1, diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h index 5dc6bb5341..1525a52742 100644 --- a/src/interfaces/libpq/fe-auth.h +++ b/src/interfaces/libpq/fe-auth.h @@ -23,7 +23,12 @@ extern int pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn); extern char *pg_fe_getauthname(PQExpBuffer errorMessage); /* Prototypes for functions in fe-auth-scram.c */ -extern void *pg_fe_scram_init(const char *username, const char *password); +extern void *pg_fe_scram_init(const char *username, + const char *password, + bool ssl_in_use, + const char *sasl_mechanism, + char *tls_finished_message, + size_t tls_finished_len); extern void pg_fe_scram_free(void *opaq); extern void pg_fe_scram_exchange(void *opaq, char *input, int inputlen, char **output, int *outputlen, diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index 2f29820e82..61d161b367 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -393,6 +393,33 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len) return n; } +/* + * Get the TLS Finished message sent during the last handshake + * + * This information is useful for callers doing channel binding during + * authentication. + */ +char * +pgtls_get_finished(PGconn *conn, size_t *len) +{ + char dummy[1]; + char *result; + + /* + * OpenSSL does not offer an API to directly get the length of the TLS + * Finished message sent, so first do a dummy call to grab this + * information and then do an allocation with the correct size.
+ */ + *len = SSL_get_finished(conn->ssl, dummy, sizeof(dummy)); + result = malloc(*len); + if (result == NULL) + return NULL; + (void) SSL_get_finished(conn->ssl, result, *len); + + return result; +} + + /* ------------------------------------------------------------ */ /* OpenSSL specific code */ /* ------------------------------------------------------------ */ diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h index 42913604e3..8412ee8160 100644 --- a/src/interfaces/libpq/libpq-int.h +++ b/src/interfaces/libpq/libpq-int.h @@ -453,11 +453,13 @@ struct pg_conn /* Assorted state for SASL, SSL, GSS, etc */ void *sasl_state; + /* SSL structures */ + bool ssl_in_use; + #ifdef USE_SSL bool allow_ssl_try; /* Allowed to try SSL negotiation */ bool wait_ssl_try; /* Delay SSL negotiation until after * attempting normal connection */ - bool ssl_in_use; #ifdef USE_OPENSSL SSL *ssl; /* SSL status, if have SSL connection */ X509 *peer; /* X509 cert of server */ @@ -668,6 +670,7 @@ extern void pgtls_close(PGconn *conn); extern ssize_t pgtls_read(PGconn *conn, void *ptr, size_t len); extern bool pgtls_read_pending(PGconn *conn); extern ssize_t pgtls_write(PGconn *conn, const void *ptr, size_t len); +extern char *pgtls_get_finished(PGconn *conn, size_t *len); /* * this is so that we can check if a connection is non-blocking internally diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm index ad2e036602..02f8028b2b 100644 --- a/src/test/ssl/ServerSetup.pm +++ b/src/test/ssl/ServerSetup.pm @@ -57,19 +57,21 @@ sub test_connect_ok { my $common_connstr = $_[0]; my $connstr = $_[1]; + my $test_name = $_[2]; my $result = run_test_psql("$common_connstr $connstr", "(should succeed)"); - ok($result, $connstr); + ok($result, $test_name || $connstr); } sub test_connect_fails { my $common_connstr = $_[0]; my $connstr = $_[1]; + my $test_name = $_[2]; my $result = run_test_psql("$common_connstr $connstr", "(should fail)"); - ok(!$result, "$connstr (should fail)"); + ok(!$result, $test_name || "$connstr (should fail)"); } # Copy a set of files, taking into account wildcards @@ -89,8 +91,7 @@ sub copy_files sub configure_test_server_for_ssl { - my $node = $_[0]; - my $serverhost = $_[1]; + my ($node, $serverhost, $authmethod, $password, $password_enc) = @_; my $pgdata = $node->data_dir; @@ -100,6 +101,15 @@ sub configure_test_server_for_ssl $node->psql('postgres', "CREATE DATABASE trustdb"); $node->psql('postgres', "CREATE DATABASE certdb"); + # Update password of each user as needed. + if (defined($password)) + { + $node->psql('postgres', +"SET password_encryption='$password_enc'; ALTER USER ssltestuser PASSWORD '$password';"); + $node->psql('postgres', +"SET password_encryption='$password_enc'; ALTER USER anotheruser PASSWORD '$password';"); + } + # enable logging etc. open my $conf, '>>', "$pgdata/postgresql.conf"; print $conf "fsync=off\n"; @@ -129,7 +139,7 @@ sub configure_test_server_for_ssl $node->restart; # Change pg_hba after restart because hostssl requires ssl=on - configure_hba_for_ssl($node, $serverhost); + configure_hba_for_ssl($node, $serverhost, $authmethod); } # Change the configuration to use given server cert file, and reload @@ -157,8 +167,7 @@ sub switch_server_cert sub configure_hba_for_ssl { - my $node = $_[0]; - my $serverhost = $_[1]; + my ($node, $serverhost, $authmethod) = @_; my $pgdata = $node->data_dir; # Only accept SSL connections from localhost. 
Our tests don't depend on this @@ -169,9 +178,9 @@ sub configure_hba_for_ssl print $hba "# TYPE DATABASE USER ADDRESS METHOD\n"; print $hba -"hostssl trustdb ssltestuser $serverhost/32 trust\n"; +"hostssl trustdb ssltestuser $serverhost/32 $authmethod\n"; print $hba -"hostssl trustdb ssltestuser ::1/128 trust\n"; +"hostssl trustdb ssltestuser ::1/128 $authmethod\n"; print $hba "hostssl certdb ssltestuser $serverhost/32 cert\n"; print $hba diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl index 890e3051a2..a0a06825c6 100644 --- a/src/test/ssl/t/001_ssltests.pl +++ b/src/test/ssl/t/001_ssltests.pl @@ -32,7 +32,7 @@ $ENV{PGHOST} = $node->host; $ENV{PGPORT} = $node->port; $node->start; -configure_test_server_for_ssl($node, $SERVERHOSTADDR); +configure_test_server_for_ssl($node, $SERVERHOSTADDR, 'trust'); switch_server_cert($node, 'server-cn-only'); ### Part 1. Run client-side tests. diff --git a/src/test/ssl/t/002_scram.pl b/src/test/ssl/t/002_scram.pl new file mode 100644 index 0000000000..25f75bd52a --- /dev/null +++ b/src/test/ssl/t/002_scram.pl @@ -0,0 +1,38 @@ +# Test SCRAM authentication and TLS channel binding types + +use strict; +use warnings; +use PostgresNode; +use TestLib; +use Test::More tests => 1; +use ServerSetup; +use File::Copy; + +# This is the hostname used to connect to the server. +my $SERVERHOSTADDR = '127.0.0.1'; + +# Allocation of base connection string shared among multiple tests. +my $common_connstr; + +# Set up the server. + +note "setting up data directory"; +my $node = get_new_node('master'); +$node->init; + +# PGHOST is enforced here to set up the node, subsequent connections +# will use a dedicated connection string. +$ENV{PGHOST} = $node->host; +$ENV{PGPORT} = $node->port; +$node->start; + +# Configure server for SSL connections, with password handling. +configure_test_server_for_ssl($node, $SERVERHOSTADDR, "scram-sha-256", + "pass", "scram-sha-256"); +switch_server_cert($node, 'server-cn-only'); +$ENV{PGPASSWORD} = "pass"; +$common_connstr = +"user=ssltestuser dbname=trustdb sslmode=require hostaddr=$SERVERHOSTADDR"; + +test_connect_ok($common_connstr, '', + "SCRAM authentication with default channel binding"); From 63ca86318dc3d6a768eed78efbc6ca014a0622a8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 18 Nov 2017 12:16:37 -0500 Subject: [PATCH 0568/1087] Fix quoted-substring handling in format parsing for to_char/to_number/etc. This code evidently intended to treat backslash as an escape character within double-quoted substrings, but it was sufficiently confused that cases like ..."foo\\"... did not work right: the second backslash managed to quote the double-quote after it, despite being quoted itself. Rewrite to get that right, while preserving the existing behavior outside double-quoted substrings, which is that backslash isn't special except in the combination \". Comparing to Oracle, it seems that their version of to_char() for timestamps allows literal alphanumerics only within double quotes, while non-alphanumerics are allowed outside quotes; backslashes aren't special anywhere; there is no way at all to emit a literal double quote. (Bizarrely, their to_char() for numbers is different; it doesn't allow literal text at all AFAICT.) The fact that they don't treat backslash as special justifies our existing behavior for backslash outside double quotes. I considered making backslash inside double quotes act the same way (ie, special only if before "), which in a green field would be a more consistent behavior. 
But that would likely break more existing SQL code than what this patch does. Add some test cases illustrating this behavior. (Only the last new case actually changes behavior in this commit.) Little of this behavior was documented, either, so fix that. Discussion: https://postgr.es/m/3626.1510949486@sss.pgh.pa.us --- doc/src/sgml/func.sgml | 5 ++ src/backend/utils/adt/formatting.c | 70 ++++++++++----------------- src/test/regress/expected/numeric.out | 61 +++++++++++++++++++++++ src/test/regress/sql/numeric.sql | 12 +++++ 4 files changed, 104 insertions(+), 44 deletions(-) diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 35a845c400..698daf69ea 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -6196,6 +6196,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); If you want to have a double quote in the output you must precede it with a backslash, for example '\"YYYY Month\"'. + Backslashes are not otherwise special outside of double-quoted + strings. Within a double-quoted string, a backslash causes the + next character to be taken literally, whatever it is (but this + has no special effect unless the next character is a double quote + or another backslash). diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c index 5afc293a5a..cb0dbf748e 100644 --- a/src/backend/utils/adt/formatting.c +++ b/src/backend/utils/adt/formatting.c @@ -1227,11 +1227,7 @@ static void parse_format(FormatNode *node, const char *str, const KeyWord *kw, const KeySuffix *suf, const int *index, int ver, NUMDesc *Num) { - const KeySuffix *s; FormatNode *n; - int node_set = 0, - suffix, - last = 0; #ifdef DEBUG_TO_FROM_CHAR elog(DEBUG_elog_output, "to_char/number(): run parser"); @@ -1241,12 +1237,14 @@ parse_format(FormatNode *node, const char *str, const KeyWord *kw, while (*str) { - suffix = 0; + int suffix = 0; + const KeySuffix *s; /* * Prefix */ - if (ver == DCH_TYPE && (s = suff_search(str, suf, SUFFTYPE_PREFIX)) != NULL) + if (ver == DCH_TYPE && + (s = suff_search(str, suf, SUFFTYPE_PREFIX)) != NULL) { suffix |= s->id; if (s->len) @@ -1259,8 +1257,7 @@ parse_format(FormatNode *node, const char *str, const KeyWord *kw, if (*str && (n->key = index_seq_search(str, kw, index)) != NULL) { n->type = NODE_TYPE_ACTION; - n->suffix = 0; - node_set = 1; + n->suffix = suffix; if (n->key->len) str += n->key->len; @@ -1273,71 +1270,56 @@ parse_format(FormatNode *node, const char *str, const KeyWord *kw, /* * Postfix */ - if (ver == DCH_TYPE && *str && (s = suff_search(str, suf, SUFFTYPE_POSTFIX)) != NULL) + if (ver == DCH_TYPE && *str && + (s = suff_search(str, suf, SUFFTYPE_POSTFIX)) != NULL) { - suffix |= s->id; + n->suffix |= s->id; if (s->len) str += s->len; } + + n++; } else if (*str) { /* - * Special characters '\' and '"' + * Process double-quoted literal string, if any */ - if (*str == '"' && last != '\\') + if (*str == '"') { - int x = 0; - while (*(++str)) { - if (*str == '"' && x != '\\') + if (*str == '"') { str++; break; } - else if (*str == '\\' && x != '\\') - { - x = '\\'; - continue; - } + /* backslash quotes the next character, if any */ + if (*str == '\\' && *(str + 1)) + str++; n->type = NODE_TYPE_CHAR; n->character = *str; n->key = NULL; n->suffix = 0; - ++n; - x = *str; + n++; } - node_set = 0; - suffix = 0; - last = 0; } - else if (*str && *str == '\\' && last != '\\' && *(str + 1) == '"') - { - last = *str; - str++; - } - else if (*str) + else { + /* + * Outside double-quoted strings, backslash is only special if + * it 
immediately precedes a double quote. + */ + if (*str == '\\' && *(str + 1) == '"') + str++; n->type = NODE_TYPE_CHAR; n->character = *str; n->key = NULL; - node_set = 1; - last = 0; + n->suffix = 0; + n++; str++; } } - - /* end */ - if (node_set) - { - if (n->type == NODE_TYPE_ACTION) - n->suffix = suffix; - ++n; - - n->suffix = 0; - node_set = 0; - } } n->type = NODE_TYPE_END; diff --git a/src/test/regress/expected/numeric.out b/src/test/regress/expected/numeric.out index a96bfc0eb0..17985e8540 100644 --- a/src/test/regress/expected/numeric.out +++ b/src/test/regress/expected/numeric.out @@ -1217,6 +1217,67 @@ SELECT '' AS to_char_26, to_char('100'::numeric, 'FM999'); | 100 (1 row) +-- Check parsing of literal text in a format string +SELECT '' AS to_char_27, to_char('100'::numeric, 'foo999'); + to_char_27 | to_char +------------+--------- + | foo 100 +(1 row) + +SELECT '' AS to_char_28, to_char('100'::numeric, 'f\oo999'); + to_char_28 | to_char +------------+---------- + | f\oo 100 +(1 row) + +SELECT '' AS to_char_29, to_char('100'::numeric, 'f\\oo999'); + to_char_29 | to_char +------------+----------- + | f\\oo 100 +(1 row) + +SELECT '' AS to_char_30, to_char('100'::numeric, 'f\"oo999'); + to_char_30 | to_char +------------+---------- + | f"oo 100 +(1 row) + +SELECT '' AS to_char_31, to_char('100'::numeric, 'f\\"oo999'); + to_char_31 | to_char +------------+----------- + | f\"oo 100 +(1 row) + +SELECT '' AS to_char_32, to_char('100'::numeric, 'f"ool"999'); + to_char_32 | to_char +------------+---------- + | fool 100 +(1 row) + +SELECT '' AS to_char_33, to_char('100'::numeric, 'f"\ool"999'); + to_char_33 | to_char +------------+---------- + | fool 100 +(1 row) + +SELECT '' AS to_char_34, to_char('100'::numeric, 'f"\\ool"999'); + to_char_34 | to_char +------------+----------- + | f\ool 100 +(1 row) + +SELECT '' AS to_char_35, to_char('100'::numeric, 'f"ool\"999'); + to_char_35 | to_char +------------+---------- + | fool"999 +(1 row) + +SELECT '' AS to_char_36, to_char('100'::numeric, 'f"ool\\"999'); + to_char_36 | to_char +------------+----------- + | fool\ 100 +(1 row) + -- TO_NUMBER() -- SET lc_numeric = 'C'; diff --git a/src/test/regress/sql/numeric.sql b/src/test/regress/sql/numeric.sql index 321c7bdf7c..d77504e624 100644 --- a/src/test/regress/sql/numeric.sql +++ b/src/test/regress/sql/numeric.sql @@ -786,6 +786,18 @@ SELECT '' AS to_char_24, to_char('100'::numeric, 'FM999.9'); SELECT '' AS to_char_25, to_char('100'::numeric, 'FM999.'); SELECT '' AS to_char_26, to_char('100'::numeric, 'FM999'); +-- Check parsing of literal text in a format string +SELECT '' AS to_char_27, to_char('100'::numeric, 'foo999'); +SELECT '' AS to_char_28, to_char('100'::numeric, 'f\oo999'); +SELECT '' AS to_char_29, to_char('100'::numeric, 'f\\oo999'); +SELECT '' AS to_char_30, to_char('100'::numeric, 'f\"oo999'); +SELECT '' AS to_char_31, to_char('100'::numeric, 'f\\"oo999'); +SELECT '' AS to_char_32, to_char('100'::numeric, 'f"ool"999'); +SELECT '' AS to_char_33, to_char('100'::numeric, 'f"\ool"999'); +SELECT '' AS to_char_34, to_char('100'::numeric, 'f"\\ool"999'); +SELECT '' AS to_char_35, to_char('100'::numeric, 'f"ool\"999'); +SELECT '' AS to_char_36, to_char('100'::numeric, 'f"ool\\"999'); + -- TO_NUMBER() -- SET lc_numeric = 'C'; From 976a1a48fc35cde3c750982be64f872c4de4d343 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 18 Nov 2017 12:42:52 -0500 Subject: [PATCH 0569/1087] Improve to_date/to_number/to_timestamp behavior with multibyte characters. 
The documentation says that these functions skip one input character per literal (non-pattern) format character. Actually, though, they skipped one input *byte* per literal *byte*, which could be hugely confusing if either data or format contained multibyte characters. To fix, adjust the FormatNode representation and parse_format() so that multibyte format characters are stored as one FormatNode not several, and adjust the data-skipping bits to advance by pg_mblen() not necessarily one byte. There's no user-visible behavior change on the to_char() side, although the internal representation changes. Commit e87d4965b had already fixed most places where we skip characters on the basis of non-literal format patterns to advance by characters not bytes, but this gets one more place, the SKIP_THth macro. I think everything in formatting.c gets that right now. It'd be nice to have some regression test cases covering this behavior; but of course there's no way to do so in an encoding-agnostic way, and many of the interesting aspects would also require unportable locale selections. So I've not bothered here. Discussion: https://postgr.es/m/28186.1510957703@sss.pgh.pa.us --- src/backend/utils/adt/formatting.c | 68 ++++++++++++++++++------------ 1 file changed, 41 insertions(+), 27 deletions(-) diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c index cb0dbf748e..ec97de0ad2 100644 --- a/src/backend/utils/adt/formatting.c +++ b/src/backend/utils/adt/formatting.c @@ -151,8 +151,6 @@ typedef enum FROM_CHAR_DATE_ISOWEEK /* ISO 8601 week date */ } FromCharDateMode; -typedef struct FormatNode FormatNode; - typedef struct { const char *name; @@ -162,13 +160,13 @@ typedef struct FromCharDateMode date_mode; } KeyWord; -struct FormatNode +typedef struct { - int type; /* node type */ - const KeyWord *key; /* if node type is KEYWORD */ - char character; /* if node type is CHAR */ - int suffix; /* keyword suffix */ -}; + int type; /* NODE_TYPE_XXX, see below */ + const KeyWord *key; /* if type is ACTION */ + char character[MAX_MULTIBYTE_CHAR_LEN + 1]; /* if type is CHAR */ + int suffix; /* keyword prefix/suffix code, if any */ +} FormatNode; #define NODE_TYPE_END 1 #define NODE_TYPE_ACTION 2 @@ -1282,12 +1280,15 @@ parse_format(FormatNode *node, const char *str, const KeyWord *kw, } else if (*str) { + int chlen; + /* * Process double-quoted literal string, if any */ if (*str == '"') { - while (*(++str)) + str++; + while (*str) { if (*str == '"') { @@ -1297,11 +1298,14 @@ parse_format(FormatNode *node, const char *str, const KeyWord *kw, /* backslash quotes the next character, if any */ if (*str == '\\' && *(str + 1)) str++; + chlen = pg_mblen(str); n->type = NODE_TYPE_CHAR; - n->character = *str; + memcpy(n->character, str, chlen); + n->character[chlen] = '\0'; n->key = NULL; n->suffix = 0; n++; + str += chlen; } } else @@ -1312,12 +1316,14 @@ parse_format(FormatNode *node, const char *str, const KeyWord *kw, */ if (*str == '\\' && *(str + 1) == '"') str++; + chlen = pg_mblen(str); n->type = NODE_TYPE_CHAR; - n->character = *str; + memcpy(n->character, str, chlen); + n->character[chlen] = '\0'; n->key = NULL; n->suffix = 0; n++; - str++; + str += chlen; } } } @@ -1349,7 +1355,8 @@ dump_node(FormatNode *node, int max) elog(DEBUG_elog_output, "%d:\t NODE_TYPE_ACTION '%s'\t(%s,%s)", a, n->key->name, DUMP_THth(n->suffix), DUMP_FM(n->suffix)); else if (n->type == NODE_TYPE_CHAR) - elog(DEBUG_elog_output, "%d:\t NODE_TYPE_CHAR '%c'", a, n->character); + elog(DEBUG_elog_output, "%d:\t 
NODE_TYPE_CHAR '%s'", + a, n->character); else if (n->type == NODE_TYPE_END) { elog(DEBUG_elog_output, "%d:\t NODE_TYPE_END", a); @@ -2008,8 +2015,8 @@ asc_toupper_z(const char *buff) do { \ if (S_THth(_suf)) \ { \ - if (*(ptr)) (ptr)++; \ - if (*(ptr)) (ptr)++; \ + if (*(ptr)) (ptr) += pg_mblen(ptr); \ + if (*(ptr)) (ptr) += pg_mblen(ptr); \ } \ } while (0) @@ -2076,7 +2083,8 @@ is_next_separator(FormatNode *n) return true; } - else if (isdigit((unsigned char) n->character)) + else if (n->character[1] == '\0' && + isdigit((unsigned char) n->character[0])) return false; return true; /* some non-digit input (separator) */ @@ -2405,8 +2413,8 @@ DCH_to_char(FormatNode *node, bool is_interval, TmToChar *in, char *out, Oid col { if (n->type != NODE_TYPE_ACTION) { - *s = n->character; - s++; + strcpy(s, n->character); + s += strlen(s); continue; } @@ -2974,7 +2982,7 @@ DCH_from_char(FormatNode *node, char *in, TmFromChar *out) * we don't insist that the consumed character match the format's * character. */ - s++; + s += pg_mblen(s); continue; } @@ -4217,7 +4225,7 @@ get_last_relevant_decnum(char *num) /* * These macros are used in NUM_processor() and its subsidiary routines. * OVERLOAD_TEST: true if we've reached end of input string - * AMOUNT_TEST(s): true if at least s characters remain in string + * AMOUNT_TEST(s): true if at least s bytes remain in string */ #define OVERLOAD_TEST (Np->inout_p >= Np->inout + input_len) #define AMOUNT_TEST(s) (Np->inout_p <= Np->inout + (input_len - (s))) @@ -4821,9 +4829,9 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, if (!Np->is_to_char) { /* - * Check at least one character remains to be scanned. (In - * actions below, must use AMOUNT_TEST if we want to read more - * characters than that.) + * Check at least one byte remains to be scanned. (In actions + * below, must use AMOUNT_TEST if we want to read more bytes than + * that.) */ if (OVERLOAD_TEST) break; @@ -5081,12 +5089,18 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout, * In TO_CHAR, non-pattern characters in the format are copied to * the output. In TO_NUMBER, we skip one input character for each * non-pattern format character, whether or not it matches the - * format character. (Currently, that's actually implemented as - * skipping one input byte per non-pattern format byte, which is - * wrong...) + * format character. */ if (Np->is_to_char) - *Np->inout_p = n->character; + { + strcpy(Np->inout_p, n->character); + Np->inout_p += strlen(Np->inout_p); + } + else + { + Np->inout_p += pg_mblen(Np->inout_p); + } + continue; } Np->inout_p++; } From d0aa965c0a0ac2ff7906ae1b1dad50a7952efa56 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 31 Oct 2017 10:49:36 -0400 Subject: [PATCH 0570/1087] Consistently catch errors from Python _New() functions Python Py*_New() functions can fail and return NULL in out-of-memory conditions. The previous code handled that inconsistently or not at all. This change organizes that better. If we are in a function that is called from Python, we just check for failure and return NULL ourselves, which will cause any exception information to be passed up. If we are called from PostgreSQL, we consistently create an "out of memory" error. 
Reviewed-by: Tom Lane --- contrib/hstore_plpython/hstore_plpython.c | 4 ++++ contrib/ltree_plpython/ltree_plpython.c | 4 ++++ src/pl/plpython/plpy_cursorobject.c | 25 +++++++++++++-------- src/pl/plpython/plpy_exec.c | 10 ++++++++- src/pl/plpython/plpy_main.c | 2 +- src/pl/plpython/plpy_plpymodule.c | 4 ++-- src/pl/plpython/plpy_procedure.c | 2 ++ src/pl/plpython/plpy_resultobject.c | 11 +++++++++ src/pl/plpython/plpy_spi.c | 27 +++++++++++++++-------- src/pl/plpython/plpy_typeio.c | 4 +++- 10 files changed, 70 insertions(+), 23 deletions(-) diff --git a/contrib/hstore_plpython/hstore_plpython.c b/contrib/hstore_plpython/hstore_plpython.c index 22366bd40f..218e6612b1 100644 --- a/contrib/hstore_plpython/hstore_plpython.c +++ b/contrib/hstore_plpython/hstore_plpython.c @@ -93,6 +93,10 @@ hstore_to_plpython(PG_FUNCTION_ARGS) PyObject *dict; dict = PyDict_New(); + if (!dict) + ereport(ERROR, + (errcode(ERRCODE_OUT_OF_MEMORY), + errmsg("out of memory"))); for (i = 0; i < count; i++) { diff --git a/contrib/ltree_plpython/ltree_plpython.c b/contrib/ltree_plpython/ltree_plpython.c index ae9b90dd10..e88636a0a9 100644 --- a/contrib/ltree_plpython/ltree_plpython.c +++ b/contrib/ltree_plpython/ltree_plpython.c @@ -46,6 +46,10 @@ ltree_to_plpython(PG_FUNCTION_ARGS) ltree_level *curlevel; list = PyList_New(in->numlevel); + if (!list) + ereport(ERROR, + (errcode(ERRCODE_OUT_OF_MEMORY), + errmsg("out of memory"))); curlevel = LTREE_FIRST(in); for (i = 0; i < in->numlevel; i++) diff --git a/src/pl/plpython/plpy_cursorobject.c b/src/pl/plpython/plpy_cursorobject.c index 10ca786fbc..9467f64808 100644 --- a/src/pl/plpython/plpy_cursorobject.c +++ b/src/pl/plpython/plpy_cursorobject.c @@ -457,17 +457,24 @@ PLy_cursor_fetch(PyObject *self, PyObject *args) Py_DECREF(ret->rows); ret->rows = PyList_New(SPI_processed); - - PLy_input_setup_tuple(&cursor->result, SPI_tuptable->tupdesc, - exec_ctx->curr_proc); - - for (i = 0; i < SPI_processed; i++) + if (!ret->rows) { - PyObject *row = PLy_input_from_tuple(&cursor->result, - SPI_tuptable->vals[i], - SPI_tuptable->tupdesc); + Py_DECREF(ret); + ret = NULL; + } + else + { + PLy_input_setup_tuple(&cursor->result, SPI_tuptable->tupdesc, + exec_ctx->curr_proc); + + for (i = 0; i < SPI_processed; i++) + { + PyObject *row = PLy_input_from_tuple(&cursor->result, + SPI_tuptable->vals[i], + SPI_tuptable->tupdesc); - PyList_SetItem(ret->rows, i, row); + PyList_SetItem(ret->rows, i, row); + } } } diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c index 02d7d2ad5f..9d2341a4a3 100644 --- a/src/pl/plpython/plpy_exec.c +++ b/src/pl/plpython/plpy_exec.c @@ -420,6 +420,9 @@ PLy_function_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc) PG_TRY(); { args = PyList_New(proc->nargs); + if (!args) + return NULL; + for (i = 0; i < proc->nargs; i++) { PLyDatumToOb *arginfo = &proc->args[i]; @@ -693,7 +696,7 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *r { pltdata = PyDict_New(); if (!pltdata) - PLy_elog(ERROR, "could not create new dictionary while building trigger arguments"); + return NULL; pltname = PyString_FromString(tdata->tg_trigger->tgname); PyDict_SetItemString(pltdata, "name", pltname); @@ -826,6 +829,11 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *r PyObject *pltarg; pltargs = PyList_New(tdata->tg_trigger->tgnargs); + if (!pltargs) + { + Py_DECREF(pltdata); + return NULL; + } for (i = 0; i < tdata->tg_trigger->tgnargs; i++) { pltarg = 
PyString_FromString(tdata->tg_trigger->tgargs[i]); diff --git a/src/pl/plpython/plpy_main.c b/src/pl/plpython/plpy_main.c index 29db90e448..32d23ae5b6 100644 --- a/src/pl/plpython/plpy_main.c +++ b/src/pl/plpython/plpy_main.c @@ -167,7 +167,7 @@ PLy_init_interp(void) PLy_interp_globals = PyModule_GetDict(mainmod); PLy_interp_safe_globals = PyDict_New(); if (PLy_interp_safe_globals == NULL) - PLy_elog(ERROR, "could not create globals"); + PLy_elog(ERROR, NULL); PyDict_SetItemString(PLy_interp_globals, "GD", PLy_interp_safe_globals); Py_DECREF(mainmod); if (PLy_interp_globals == NULL || PyErr_Occurred()) diff --git a/src/pl/plpython/plpy_plpymodule.c b/src/pl/plpython/plpy_plpymodule.c index 759ad44932..23f99e20ca 100644 --- a/src/pl/plpython/plpy_plpymodule.c +++ b/src/pl/plpython/plpy_plpymodule.c @@ -233,7 +233,7 @@ PLy_create_exception(char *name, PyObject *base, PyObject *dict, exc = PyErr_NewException(name, base, dict); if (exc == NULL) - PLy_elog(ERROR, "could not create exception \"%s\"", name); + PLy_elog(ERROR, NULL); /* * PyModule_AddObject does not add a refcount to the object, for some odd @@ -268,7 +268,7 @@ PLy_generate_spi_exceptions(PyObject *mod, PyObject *base) PyObject *dict = PyDict_New(); if (dict == NULL) - PLy_elog(ERROR, "could not generate SPI exceptions"); + PLy_elog(ERROR, NULL); sqlstate = PyString_FromString(unpack_sql_state(exception_map[i].sqlstate)); if (sqlstate == NULL) diff --git a/src/pl/plpython/plpy_procedure.c b/src/pl/plpython/plpy_procedure.c index 58d6988202..faa4977463 100644 --- a/src/pl/plpython/plpy_procedure.c +++ b/src/pl/plpython/plpy_procedure.c @@ -368,6 +368,8 @@ PLy_procedure_compile(PLyProcedure *proc, const char *src) * all functions */ proc->statics = PyDict_New(); + if (!proc->statics) + PLy_elog(ERROR, NULL); PyDict_SetItemString(proc->globals, "SD", proc->statics); /* diff --git a/src/pl/plpython/plpy_resultobject.c b/src/pl/plpython/plpy_resultobject.c index 098a366f6f..ca70e25689 100644 --- a/src/pl/plpython/plpy_resultobject.c +++ b/src/pl/plpython/plpy_resultobject.c @@ -112,6 +112,11 @@ PLy_result_new(void) ob->nrows = PyInt_FromLong(-1); ob->rows = PyList_New(0); ob->tupdesc = NULL; + if (!ob->rows) + { + Py_DECREF(ob); + return NULL; + } return (PyObject *) ob; } @@ -147,6 +152,8 @@ PLy_result_colnames(PyObject *self, PyObject *unused) } list = PyList_New(ob->tupdesc->natts); + if (!list) + return NULL; for (i = 0; i < ob->tupdesc->natts; i++) { Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i); @@ -171,6 +178,8 @@ PLy_result_coltypes(PyObject *self, PyObject *unused) } list = PyList_New(ob->tupdesc->natts); + if (!list) + return NULL; for (i = 0; i < ob->tupdesc->natts; i++) { Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i); @@ -195,6 +204,8 @@ PLy_result_coltypmods(PyObject *self, PyObject *unused) } list = PyList_New(ob->tupdesc->natts); + if (!list) + return NULL; for (i = 0; i < ob->tupdesc->natts; i++) { Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i); diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c index 69eb6b39f6..ade27f3924 100644 --- a/src/pl/plpython/plpy_spi.c +++ b/src/pl/plpython/plpy_spi.c @@ -360,6 +360,8 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) volatile MemoryContext oldcontext; result = (PLyResultObject *) PLy_result_new(); + if (!result) + return NULL; Py_DECREF(result->status); result->status = PyInt_FromLong(status); @@ -409,17 +411,24 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) 
Py_DECREF(result->rows); result->rows = PyList_New(rows); - - PLy_input_setup_tuple(&ininfo, tuptable->tupdesc, - exec_ctx->curr_proc); - - for (i = 0; i < rows; i++) + if (!result->rows) { - PyObject *row = PLy_input_from_tuple(&ininfo, - tuptable->vals[i], - tuptable->tupdesc); + Py_DECREF(result); + result = NULL; + } + else + { + PLy_input_setup_tuple(&ininfo, tuptable->tupdesc, + exec_ctx->curr_proc); + + for (i = 0; i < rows; i++) + { + PyObject *row = PLy_input_from_tuple(&ininfo, + tuptable->vals[i], + tuptable->tupdesc); - PyList_SetItem(result->rows, i, row); + PyList_SetItem(result->rows, i, row); + } } } diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c index ce1527072e..c48e8fd5f3 100644 --- a/src/pl/plpython/plpy_typeio.c +++ b/src/pl/plpython/plpy_typeio.c @@ -718,6 +718,8 @@ PLyList_FromArray_recurse(PLyDatumToOb *elm, int *dims, int ndim, int dim, PyObject *list; list = PyList_New(dims[dim]); + if (!list) + return NULL; if (dim < ndim - 1) { @@ -826,7 +828,7 @@ PLyDict_FromTuple(PLyDatumToOb *arg, HeapTuple tuple, TupleDesc desc) dict = PyDict_New(); if (dict == NULL) - PLy_elog(ERROR, "could not create new dictionary"); + return NULL; PG_TRY(); { From 4797f9b519995ceca5d6b8550b5caa2ff6d19347 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 18 Nov 2017 16:24:05 -0500 Subject: [PATCH 0571/1087] Merge near-duplicate code in RI triggers. Merge ri_restrict_del and ri_restrict_upd into one function ri_restrict. Create a function ri_setnull that is the common implementation of RI_FKey_setnull_del and RI_FKey_setnull_upd. Likewise create a function ri_setdefault that is the common implementation of RI_FKey_setdefault_del and RI_FKey_setdefault_upd. All of these pairs of functions were identical except for needing to check for no-actual-key-change in the UPDATE cases; the one extra if-test is a small price to pay for saving so much code. Aside from removing about 400 lines of essentially duplicate code, this allows us to recognize that we were uselessly caching two identical plans whenever there were pairs of triggers using these duplicated functions (which is likely very common). 
Ildar Musin, reviewed by Ildus Kurbangaliev Discussion: https://postgr.es/m/ca7064a7-6adc-6f22-ca47-8615ba9425a5@postgrespro.ru --- src/backend/utils/adt/ri_triggers.c | 715 ++++++---------------------- 1 file changed, 146 insertions(+), 569 deletions(-) diff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c index 4badb5fd7c..b1ae9e5f96 100644 --- a/src/backend/utils/adt/ri_triggers.c +++ b/src/backend/utils/adt/ri_triggers.c @@ -81,12 +81,9 @@ /* these queries are executed against the FK (referencing) table: */ #define RI_PLAN_CASCADE_DEL_DODELETE 3 #define RI_PLAN_CASCADE_UPD_DOUPDATE 4 -#define RI_PLAN_RESTRICT_DEL_CHECKREF 5 -#define RI_PLAN_RESTRICT_UPD_CHECKREF 6 -#define RI_PLAN_SETNULL_DEL_DOUPDATE 7 -#define RI_PLAN_SETNULL_UPD_DOUPDATE 8 -#define RI_PLAN_SETDEFAULT_DEL_DOUPDATE 9 -#define RI_PLAN_SETDEFAULT_UPD_DOUPDATE 10 +#define RI_PLAN_RESTRICT_CHECKREF 5 +#define RI_PLAN_SETNULL_DOUPDATE 6 +#define RI_PLAN_SETDEFAULT_DOUPDATE 7 #define MAX_QUOTED_NAME_LEN (NAMEDATALEN*2+3) #define MAX_QUOTED_REL_NAME_LEN (MAX_QUOTED_NAME_LEN*2) @@ -196,8 +193,9 @@ static int ri_constraint_cache_valid_count = 0; static bool ri_Check_Pk_Match(Relation pk_rel, Relation fk_rel, HeapTuple old_row, const RI_ConstraintInfo *riinfo); -static Datum ri_restrict_del(TriggerData *trigdata, bool is_no_action); -static Datum ri_restrict_upd(TriggerData *trigdata, bool is_no_action); +static Datum ri_restrict(TriggerData *trigdata, bool is_no_action); +static Datum ri_setnull(TriggerData *trigdata); +static Datum ri_setdefault(TriggerData *trigdata); static void quoteOneName(char *buffer, const char *name); static void quoteRelationName(char *buffer, Relation rel); static void ri_GenerateQual(StringInfo buf, @@ -603,9 +601,9 @@ RI_FKey_noaction_del(PG_FUNCTION_ARGS) ri_CheckTrigger(fcinfo, "RI_FKey_noaction_del", RI_TRIGTYPE_DELETE); /* - * Share code with RESTRICT case. + * Share code with RESTRICT/UPDATE cases. */ - return ri_restrict_del((TriggerData *) fcinfo->context, true); + return ri_restrict((TriggerData *) fcinfo->context, true); } /* ---------- @@ -628,176 +626,11 @@ RI_FKey_restrict_del(PG_FUNCTION_ARGS) ri_CheckTrigger(fcinfo, "RI_FKey_restrict_del", RI_TRIGTYPE_DELETE); /* - * Share code with NO ACTION case. + * Share code with NO ACTION/UPDATE cases. */ - return ri_restrict_del((TriggerData *) fcinfo->context, false); + return ri_restrict((TriggerData *) fcinfo->context, false); } -/* ---------- - * ri_restrict_del - - * - * Common code for ON DELETE RESTRICT and ON DELETE NO ACTION. - * ---------- - */ -static Datum -ri_restrict_del(TriggerData *trigdata, bool is_no_action) -{ - const RI_ConstraintInfo *riinfo; - Relation fk_rel; - Relation pk_rel; - HeapTuple old_row; - RI_QueryKey qkey; - SPIPlanPtr qplan; - int i; - - /* - * Get arguments. - */ - riinfo = ri_FetchConstraintInfo(trigdata->tg_trigger, - trigdata->tg_relation, true); - - /* - * Get the relation descriptors of the FK and PK tables and the old tuple. - * - * fk_rel is opened in RowShareLock mode since that's what our eventual - * SELECT FOR KEY SHARE will get on it. - */ - fk_rel = heap_open(riinfo->fk_relid, RowShareLock); - pk_rel = trigdata->tg_relation; - old_row = trigdata->tg_trigtuple; - - switch (riinfo->confmatchtype) - { - /* ---------- - * SQL:2008 15.17 - * General rules 9) a) iv): - * MATCH SIMPLE/FULL - * ... 
ON DELETE RESTRICT - * ---------- - */ - case FKCONSTR_MATCH_SIMPLE: - case FKCONSTR_MATCH_FULL: - switch (ri_NullCheck(old_row, riinfo, true)) - { - case RI_KEYS_ALL_NULL: - case RI_KEYS_SOME_NULL: - - /* - * No check needed - there cannot be any reference to old - * key if it contains a NULL - */ - heap_close(fk_rel, RowShareLock); - return PointerGetDatum(NULL); - - case RI_KEYS_NONE_NULL: - - /* - * Have a full qualified key - continue below - */ - break; - } - - /* - * If another PK row now exists providing the old key values, we - * should not do anything. However, this check should only be - * made in the NO ACTION case; in RESTRICT cases we don't wish to - * allow another row to be substituted. - */ - if (is_no_action && - ri_Check_Pk_Match(pk_rel, fk_rel, old_row, riinfo)) - { - heap_close(fk_rel, RowShareLock); - return PointerGetDatum(NULL); - } - - if (SPI_connect() != SPI_OK_CONNECT) - elog(ERROR, "SPI_connect failed"); - - /* - * Fetch or prepare a saved plan for the restrict delete lookup - */ - ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_RESTRICT_DEL_CHECKREF); - - if ((qplan = ri_FetchPreparedPlan(&qkey)) == NULL) - { - StringInfoData querybuf; - char fkrelname[MAX_QUOTED_REL_NAME_LEN]; - char attname[MAX_QUOTED_NAME_LEN]; - char paramname[16]; - const char *querysep; - Oid queryoids[RI_MAX_NUMKEYS]; - - /* ---------- - * The query string built is - * SELECT 1 FROM ONLY x WHERE $1 = fkatt1 [AND ...] - * FOR KEY SHARE OF x - * The type id's for the $ parameters are those of the - * corresponding PK attributes. - * ---------- - */ - initStringInfo(&querybuf); - quoteRelationName(fkrelname, fk_rel); - appendStringInfo(&querybuf, "SELECT 1 FROM ONLY %s x", - fkrelname); - querysep = "WHERE"; - for (i = 0; i < riinfo->nkeys; i++) - { - Oid pk_type = RIAttType(pk_rel, riinfo->pk_attnums[i]); - Oid fk_type = RIAttType(fk_rel, riinfo->fk_attnums[i]); - - quoteOneName(attname, - RIAttName(fk_rel, riinfo->fk_attnums[i])); - sprintf(paramname, "$%d", i + 1); - ri_GenerateQual(&querybuf, querysep, - paramname, pk_type, - riinfo->pf_eq_oprs[i], - attname, fk_type); - querysep = "AND"; - queryoids[i] = pk_type; - } - appendStringInfoString(&querybuf, " FOR KEY SHARE OF x"); - - /* Prepare and save the plan */ - qplan = ri_PlanCheck(querybuf.data, riinfo->nkeys, queryoids, - &qkey, fk_rel, pk_rel, true); - } - - /* - * We have a plan now. Run it to check for existing references. - */ - ri_PerformCheck(riinfo, &qkey, qplan, - fk_rel, pk_rel, - old_row, NULL, - true, /* must detect new rows */ - SPI_OK_SELECT); - - if (SPI_finish() != SPI_OK_FINISH) - elog(ERROR, "SPI_finish failed"); - - heap_close(fk_rel, RowShareLock); - - return PointerGetDatum(NULL); - - /* - * Handle MATCH PARTIAL restrict delete. - */ - case FKCONSTR_MATCH_PARTIAL: - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("MATCH PARTIAL not yet implemented"))); - return PointerGetDatum(NULL); - - default: - elog(ERROR, "unrecognized confmatchtype: %d", - riinfo->confmatchtype); - break; - } - - /* Never reached */ - return PointerGetDatum(NULL); -} - - /* ---------- * RI_FKey_noaction_upd - * @@ -815,9 +648,9 @@ RI_FKey_noaction_upd(PG_FUNCTION_ARGS) ri_CheckTrigger(fcinfo, "RI_FKey_noaction_upd", RI_TRIGTYPE_UPDATE); /* - * Share code with RESTRICT case. + * Share code with RESTRICT/DELETE cases. 
*/ - return ri_restrict_upd((TriggerData *) fcinfo->context, true); + return ri_restrict((TriggerData *) fcinfo->context, true); } /* ---------- @@ -840,28 +673,27 @@ RI_FKey_restrict_upd(PG_FUNCTION_ARGS) ri_CheckTrigger(fcinfo, "RI_FKey_restrict_upd", RI_TRIGTYPE_UPDATE); /* - * Share code with NO ACTION case. + * Share code with NO ACTION/DELETE cases. */ - return ri_restrict_upd((TriggerData *) fcinfo->context, false); + return ri_restrict((TriggerData *) fcinfo->context, false); } /* ---------- - * ri_restrict_upd - + * ri_restrict - * - * Common code for ON UPDATE RESTRICT and ON UPDATE NO ACTION. + * Common code for ON DELETE RESTRICT, ON DELETE NO ACTION, + * ON UPDATE RESTRICT, and ON UPDATE NO ACTION. * ---------- */ static Datum -ri_restrict_upd(TriggerData *trigdata, bool is_no_action) +ri_restrict(TriggerData *trigdata, bool is_no_action) { const RI_ConstraintInfo *riinfo; Relation fk_rel; Relation pk_rel; - HeapTuple new_row; HeapTuple old_row; RI_QueryKey qkey; SPIPlanPtr qplan; - int i; /* * Get arguments. @@ -870,21 +702,22 @@ ri_restrict_upd(TriggerData *trigdata, bool is_no_action) trigdata->tg_relation, true); /* - * Get the relation descriptors of the FK and PK tables and the new and - * old tuple. + * Get the relation descriptors of the FK and PK tables and the old tuple. * * fk_rel is opened in RowShareLock mode since that's what our eventual * SELECT FOR KEY SHARE will get on it. */ fk_rel = heap_open(riinfo->fk_relid, RowShareLock); pk_rel = trigdata->tg_relation; - new_row = trigdata->tg_newtuple; old_row = trigdata->tg_trigtuple; switch (riinfo->confmatchtype) { /* ---------- * SQL:2008 15.17 + * General rules 9) a) iv): + * MATCH SIMPLE/FULL + * ... ON DELETE RESTRICT * General rules 10) a) iv): * MATCH SIMPLE/FULL * ... ON UPDATE RESTRICT @@ -913,12 +746,17 @@ ri_restrict_upd(TriggerData *trigdata, bool is_no_action) } /* - * No need to check anything if old and new keys are equal + * In UPDATE, no need to do anything if old and new keys are equal */ - if (ri_KeysEqual(pk_rel, old_row, new_row, riinfo, true)) + if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event)) { - heap_close(fk_rel, RowShareLock); - return PointerGetDatum(NULL); + HeapTuple new_row = trigdata->tg_newtuple; + + if (ri_KeysEqual(pk_rel, old_row, new_row, riinfo, true)) + { + heap_close(fk_rel, RowShareLock); + return PointerGetDatum(NULL); + } } /* @@ -938,9 +776,10 @@ ri_restrict_upd(TriggerData *trigdata, bool is_no_action) elog(ERROR, "SPI_connect failed"); /* - * Fetch or prepare a saved plan for the restrict update lookup + * Fetch or prepare a saved plan for the restrict lookup (it's the + * same query for delete and update cases) */ - ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_RESTRICT_UPD_CHECKREF); + ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_RESTRICT_CHECKREF); if ((qplan = ri_FetchPreparedPlan(&qkey)) == NULL) { @@ -950,10 +789,12 @@ ri_restrict_upd(TriggerData *trigdata, bool is_no_action) char paramname[16]; const char *querysep; Oid queryoids[RI_MAX_NUMKEYS]; + int i; /* ---------- * The query string built is - * SELECT 1 FROM ONLY WHERE $1 = fkatt1 [AND ...] + * SELECT 1 FROM ONLY x WHERE $1 = fkatt1 [AND ...] + * FOR KEY SHARE OF x * The type id's for the $ parameters are those of the * corresponding PK attributes. * ---------- @@ -1002,7 +843,7 @@ ri_restrict_upd(TriggerData *trigdata, bool is_no_action) return PointerGetDatum(NULL); /* - * Handle MATCH PARTIAL restrict update. + * Handle MATCH PARTIAL restrict delete or update. 
*/ case FKCONSTR_MATCH_PARTIAL: ereport(ERROR, @@ -1367,7 +1208,46 @@ RI_FKey_cascade_upd(PG_FUNCTION_ARGS) Datum RI_FKey_setnull_del(PG_FUNCTION_ARGS) { - TriggerData *trigdata = (TriggerData *) fcinfo->context; + /* + * Check that this is a valid trigger call on the right time and event. + */ + ri_CheckTrigger(fcinfo, "RI_FKey_setnull_del", RI_TRIGTYPE_DELETE); + + /* + * Share code with UPDATE case + */ + return ri_setnull((TriggerData *) fcinfo->context); +} + +/* ---------- + * RI_FKey_setnull_upd - + * + * Set foreign key references to NULL at update event on PK table. + * ---------- + */ +Datum +RI_FKey_setnull_upd(PG_FUNCTION_ARGS) +{ + /* + * Check that this is a valid trigger call on the right time and event. + */ + ri_CheckTrigger(fcinfo, "RI_FKey_setnull_upd", RI_TRIGTYPE_UPDATE); + + /* + * Share code with DELETE case + */ + return ri_setnull((TriggerData *) fcinfo->context); +} + +/* ---------- + * ri_setnull - + * + * Common code for ON DELETE SET NULL and ON UPDATE SET NULL + * ---------- + */ +static Datum +ri_setnull(TriggerData *trigdata) +{ const RI_ConstraintInfo *riinfo; Relation fk_rel; Relation pk_rel; @@ -1376,11 +1256,6 @@ RI_FKey_setnull_del(PG_FUNCTION_ARGS) SPIPlanPtr qplan; int i; - /* - * Check that this is a valid trigger call on the right time and event. - */ - ri_CheckTrigger(fcinfo, "RI_FKey_setnull_del", RI_TRIGTYPE_DELETE); - /* * Get arguments. */ @@ -1404,6 +1279,9 @@ RI_FKey_setnull_del(PG_FUNCTION_ARGS) * General rules 9) a) ii): * MATCH SIMPLE/FULL * ... ON DELETE SET NULL + * General rules 10) a) ii): + * MATCH SIMPLE/FULL + * ... ON UPDATE SET NULL * ---------- */ case FKCONSTR_MATCH_SIMPLE: @@ -1428,13 +1306,28 @@ RI_FKey_setnull_del(PG_FUNCTION_ARGS) break; } + /* + * In UPDATE, no need to do anything if old and new keys are equal + */ + if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event)) + { + HeapTuple new_row = trigdata->tg_newtuple; + + if (ri_KeysEqual(pk_rel, old_row, new_row, riinfo, true)) + { + heap_close(fk_rel, RowExclusiveLock); + return PointerGetDatum(NULL); + } + } + if (SPI_connect() != SPI_OK_CONNECT) elog(ERROR, "SPI_connect failed"); /* - * Fetch or prepare a saved plan for the set null delete operation + * Fetch or prepare a saved plan for the set null operation (it's + * the same query for delete and update cases) */ - ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_SETNULL_DEL_DOUPDATE); + ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_SETNULL_DOUPDATE); if ((qplan = ri_FetchPreparedPlan(&qkey)) == NULL) { @@ -1488,7 +1381,7 @@ RI_FKey_setnull_del(PG_FUNCTION_ARGS) } /* - * We have a plan now. Run it to check for existing references. + * We have a plan now. Run it to update the existing references. */ ri_PerformCheck(riinfo, &qkey, qplan, fk_rel, pk_rel, @@ -1504,7 +1397,7 @@ RI_FKey_setnull_del(PG_FUNCTION_ARGS) return PointerGetDatum(NULL); /* - * Handle MATCH PARTIAL set null delete. + * Handle MATCH PARTIAL set null delete or update. */ case FKCONSTR_MATCH_PARTIAL: ereport(ERROR, @@ -1524,384 +1417,61 @@ RI_FKey_setnull_del(PG_FUNCTION_ARGS) /* ---------- - * RI_FKey_setnull_upd - + * RI_FKey_setdefault_del - * - * Set foreign key references to NULL at update event on PK table. + * Set foreign key references to defaults at delete event on PK table. 
* ---------- */ Datum -RI_FKey_setnull_upd(PG_FUNCTION_ARGS) +RI_FKey_setdefault_del(PG_FUNCTION_ARGS) { - TriggerData *trigdata = (TriggerData *) fcinfo->context; - const RI_ConstraintInfo *riinfo; - Relation fk_rel; - Relation pk_rel; - HeapTuple new_row; - HeapTuple old_row; - RI_QueryKey qkey; - SPIPlanPtr qplan; - int i; - /* * Check that this is a valid trigger call on the right time and event. */ - ri_CheckTrigger(fcinfo, "RI_FKey_setnull_upd", RI_TRIGTYPE_UPDATE); + ri_CheckTrigger(fcinfo, "RI_FKey_setdefault_del", RI_TRIGTYPE_DELETE); /* - * Get arguments. + * Share code with UPDATE case */ - riinfo = ri_FetchConstraintInfo(trigdata->tg_trigger, - trigdata->tg_relation, true); - - /* - * Get the relation descriptors of the FK and PK tables and the old tuple. - * - * fk_rel is opened in RowExclusiveLock mode since that's what our - * eventual UPDATE will get on it. - */ - fk_rel = heap_open(riinfo->fk_relid, RowExclusiveLock); - pk_rel = trigdata->tg_relation; - new_row = trigdata->tg_newtuple; - old_row = trigdata->tg_trigtuple; - - switch (riinfo->confmatchtype) - { - /* ---------- - * SQL:2008 15.17 - * General rules 10) a) ii): - * MATCH SIMPLE/FULL - * ... ON UPDATE SET NULL - * ---------- - */ - case FKCONSTR_MATCH_SIMPLE: - case FKCONSTR_MATCH_FULL: - switch (ri_NullCheck(old_row, riinfo, true)) - { - case RI_KEYS_ALL_NULL: - case RI_KEYS_SOME_NULL: - - /* - * No check needed - there cannot be any reference to old - * key if it contains a NULL - */ - heap_close(fk_rel, RowExclusiveLock); - return PointerGetDatum(NULL); - - case RI_KEYS_NONE_NULL: - - /* - * Have a full qualified key - continue below - */ - break; - } - - /* - * No need to do anything if old and new keys are equal - */ - if (ri_KeysEqual(pk_rel, old_row, new_row, riinfo, true)) - { - heap_close(fk_rel, RowExclusiveLock); - return PointerGetDatum(NULL); - } - - if (SPI_connect() != SPI_OK_CONNECT) - elog(ERROR, "SPI_connect failed"); - - /* - * Fetch or prepare a saved plan for the set null update operation - */ - ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_SETNULL_UPD_DOUPDATE); - - if ((qplan = ri_FetchPreparedPlan(&qkey)) == NULL) - { - StringInfoData querybuf; - StringInfoData qualbuf; - char fkrelname[MAX_QUOTED_REL_NAME_LEN]; - char attname[MAX_QUOTED_NAME_LEN]; - char paramname[16]; - const char *querysep; - const char *qualsep; - Oid queryoids[RI_MAX_NUMKEYS]; - - /* ---------- - * The query string built is - * UPDATE ONLY SET fkatt1 = NULL [, ...] - * WHERE $1 = fkatt1 [AND ...] - * The type id's for the $ parameters are those of the - * corresponding PK attributes. - * ---------- - */ - initStringInfo(&querybuf); - initStringInfo(&qualbuf); - quoteRelationName(fkrelname, fk_rel); - appendStringInfo(&querybuf, "UPDATE ONLY %s SET", fkrelname); - querysep = ""; - qualsep = "WHERE"; - for (i = 0; i < riinfo->nkeys; i++) - { - Oid pk_type = RIAttType(pk_rel, riinfo->pk_attnums[i]); - Oid fk_type = RIAttType(fk_rel, riinfo->fk_attnums[i]); - - quoteOneName(attname, - RIAttName(fk_rel, riinfo->fk_attnums[i])); - appendStringInfo(&querybuf, - "%s %s = NULL", - querysep, attname); - sprintf(paramname, "$%d", i + 1); - ri_GenerateQual(&qualbuf, qualsep, - paramname, pk_type, - riinfo->pf_eq_oprs[i], - attname, fk_type); - querysep = ","; - qualsep = "AND"; - queryoids[i] = pk_type; - } - appendStringInfoString(&querybuf, qualbuf.data); - - /* Prepare and save the plan */ - qplan = ri_PlanCheck(querybuf.data, riinfo->nkeys, queryoids, - &qkey, fk_rel, pk_rel, true); - } - - /* - * We have a plan now. 
Run it to update the existing references. - */ - ri_PerformCheck(riinfo, &qkey, qplan, - fk_rel, pk_rel, - old_row, NULL, - true, /* must detect new rows */ - SPI_OK_UPDATE); - - if (SPI_finish() != SPI_OK_FINISH) - elog(ERROR, "SPI_finish failed"); - - heap_close(fk_rel, RowExclusiveLock); - - return PointerGetDatum(NULL); - - /* - * Handle MATCH PARTIAL set null update. - */ - case FKCONSTR_MATCH_PARTIAL: - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("MATCH PARTIAL not yet implemented"))); - return PointerGetDatum(NULL); - - default: - elog(ERROR, "unrecognized confmatchtype: %d", - riinfo->confmatchtype); - break; - } - - /* Never reached */ - return PointerGetDatum(NULL); -} - + return ri_setdefault((TriggerData *) fcinfo->context); +} /* ---------- - * RI_FKey_setdefault_del - + * RI_FKey_setdefault_upd - * - * Set foreign key references to defaults at delete event on PK table. + * Set foreign key references to defaults at update event on PK table. * ---------- */ Datum -RI_FKey_setdefault_del(PG_FUNCTION_ARGS) +RI_FKey_setdefault_upd(PG_FUNCTION_ARGS) { - TriggerData *trigdata = (TriggerData *) fcinfo->context; - const RI_ConstraintInfo *riinfo; - Relation fk_rel; - Relation pk_rel; - HeapTuple old_row; - RI_QueryKey qkey; - SPIPlanPtr qplan; - /* * Check that this is a valid trigger call on the right time and event. */ - ri_CheckTrigger(fcinfo, "RI_FKey_setdefault_del", RI_TRIGTYPE_DELETE); - - /* - * Get arguments. - */ - riinfo = ri_FetchConstraintInfo(trigdata->tg_trigger, - trigdata->tg_relation, true); + ri_CheckTrigger(fcinfo, "RI_FKey_setdefault_upd", RI_TRIGTYPE_UPDATE); /* - * Get the relation descriptors of the FK and PK tables and the old tuple. - * - * fk_rel is opened in RowExclusiveLock mode since that's what our - * eventual UPDATE will get on it. + * Share code with DELETE case */ - fk_rel = heap_open(riinfo->fk_relid, RowExclusiveLock); - pk_rel = trigdata->tg_relation; - old_row = trigdata->tg_trigtuple; - - switch (riinfo->confmatchtype) - { - /* ---------- - * SQL:2008 15.17 - * General rules 9) a) iii): - * MATCH SIMPLE/FULL - * ... ON DELETE SET DEFAULT - * ---------- - */ - case FKCONSTR_MATCH_SIMPLE: - case FKCONSTR_MATCH_FULL: - switch (ri_NullCheck(old_row, riinfo, true)) - { - case RI_KEYS_ALL_NULL: - case RI_KEYS_SOME_NULL: - - /* - * No check needed - there cannot be any reference to old - * key if it contains a NULL - */ - heap_close(fk_rel, RowExclusiveLock); - return PointerGetDatum(NULL); - - case RI_KEYS_NONE_NULL: - - /* - * Have a full qualified key - continue below - */ - break; - } - - if (SPI_connect() != SPI_OK_CONNECT) - elog(ERROR, "SPI_connect failed"); - - /* - * Fetch or prepare a saved plan for the set default delete - * operation - */ - ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_SETDEFAULT_DEL_DOUPDATE); - - if ((qplan = ri_FetchPreparedPlan(&qkey)) == NULL) - { - StringInfoData querybuf; - StringInfoData qualbuf; - char fkrelname[MAX_QUOTED_REL_NAME_LEN]; - char attname[MAX_QUOTED_NAME_LEN]; - char paramname[16]; - const char *querysep; - const char *qualsep; - Oid queryoids[RI_MAX_NUMKEYS]; - int i; - - /* ---------- - * The query string built is - * UPDATE ONLY SET fkatt1 = DEFAULT [, ...] - * WHERE $1 = fkatt1 [AND ...] - * The type id's for the $ parameters are those of the - * corresponding PK attributes. 
- * ---------- - */ - initStringInfo(&querybuf); - initStringInfo(&qualbuf); - quoteRelationName(fkrelname, fk_rel); - appendStringInfo(&querybuf, "UPDATE ONLY %s SET", fkrelname); - querysep = ""; - qualsep = "WHERE"; - for (i = 0; i < riinfo->nkeys; i++) - { - Oid pk_type = RIAttType(pk_rel, riinfo->pk_attnums[i]); - Oid fk_type = RIAttType(fk_rel, riinfo->fk_attnums[i]); - - quoteOneName(attname, - RIAttName(fk_rel, riinfo->fk_attnums[i])); - appendStringInfo(&querybuf, - "%s %s = DEFAULT", - querysep, attname); - sprintf(paramname, "$%d", i + 1); - ri_GenerateQual(&qualbuf, qualsep, - paramname, pk_type, - riinfo->pf_eq_oprs[i], - attname, fk_type); - querysep = ","; - qualsep = "AND"; - queryoids[i] = pk_type; - } - appendStringInfoString(&querybuf, qualbuf.data); - - /* Prepare and save the plan */ - qplan = ri_PlanCheck(querybuf.data, riinfo->nkeys, queryoids, - &qkey, fk_rel, pk_rel, true); - } - - /* - * We have a plan now. Run it to update the existing references. - */ - ri_PerformCheck(riinfo, &qkey, qplan, - fk_rel, pk_rel, - old_row, NULL, - true, /* must detect new rows */ - SPI_OK_UPDATE); - - if (SPI_finish() != SPI_OK_FINISH) - elog(ERROR, "SPI_finish failed"); - - heap_close(fk_rel, RowExclusiveLock); - - /* - * If we just deleted the PK row whose key was equal to the FK - * columns' default values, and a referencing row exists in the FK - * table, we would have updated that row to the same values it - * already had --- and RI_FKey_fk_upd_check_required would hence - * believe no check is necessary. So we need to do another lookup - * now and in case a reference still exists, abort the operation. - * That is already implemented in the NO ACTION trigger, so just - * run it. (This recheck is only needed in the SET DEFAULT case, - * since CASCADE would remove such rows, while SET NULL is certain - * to result in rows that satisfy the FK constraint.) - */ - RI_FKey_noaction_del(fcinfo); - - return PointerGetDatum(NULL); - - /* - * Handle MATCH PARTIAL set default delete. - */ - case FKCONSTR_MATCH_PARTIAL: - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("MATCH PARTIAL not yet implemented"))); - return PointerGetDatum(NULL); - - default: - elog(ERROR, "unrecognized confmatchtype: %d", - riinfo->confmatchtype); - break; - } - - /* Never reached */ - return PointerGetDatum(NULL); + return ri_setdefault((TriggerData *) fcinfo->context); } - /* ---------- - * RI_FKey_setdefault_upd - + * ri_setdefault - * - * Set foreign key references to defaults at update event on PK table. + * Common code for ON DELETE SET DEFAULT and ON UPDATE SET DEFAULT * ---------- */ -Datum -RI_FKey_setdefault_upd(PG_FUNCTION_ARGS) +static Datum +ri_setdefault(TriggerData *trigdata) { - TriggerData *trigdata = (TriggerData *) fcinfo->context; const RI_ConstraintInfo *riinfo; Relation fk_rel; Relation pk_rel; - HeapTuple new_row; HeapTuple old_row; RI_QueryKey qkey; SPIPlanPtr qplan; - /* - * Check that this is a valid trigger call on the right time and event. - */ - ri_CheckTrigger(fcinfo, "RI_FKey_setdefault_upd", RI_TRIGTYPE_UPDATE); - /* * Get arguments. */ @@ -1916,13 +1486,15 @@ RI_FKey_setdefault_upd(PG_FUNCTION_ARGS) */ fk_rel = heap_open(riinfo->fk_relid, RowExclusiveLock); pk_rel = trigdata->tg_relation; - new_row = trigdata->tg_newtuple; old_row = trigdata->tg_trigtuple; switch (riinfo->confmatchtype) { /* ---------- * SQL:2008 15.17 + * General rules 9) a) iii): + * MATCH SIMPLE/FULL + * ... ON DELETE SET DEFAULT * General rules 10) a) iii): * MATCH SIMPLE/FULL * ... 
ON UPDATE SET DEFAULT @@ -1951,22 +1523,27 @@ RI_FKey_setdefault_upd(PG_FUNCTION_ARGS) } /* - * No need to do anything if old and new keys are equal + * In UPDATE, no need to do anything if old and new keys are equal */ - if (ri_KeysEqual(pk_rel, old_row, new_row, riinfo, true)) + if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event)) { - heap_close(fk_rel, RowExclusiveLock); - return PointerGetDatum(NULL); + HeapTuple new_row = trigdata->tg_newtuple; + + if (ri_KeysEqual(pk_rel, old_row, new_row, riinfo, true)) + { + heap_close(fk_rel, RowExclusiveLock); + return PointerGetDatum(NULL); + } } if (SPI_connect() != SPI_OK_CONNECT) elog(ERROR, "SPI_connect failed"); /* - * Fetch or prepare a saved plan for the set default update - * operation + * Fetch or prepare a saved plan for the set default operation + * (it's the same query for delete and update cases) */ - ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_SETDEFAULT_UPD_DOUPDATE); + ri_BuildQueryKey(&qkey, riinfo, RI_PLAN_SETDEFAULT_DOUPDATE); if ((qplan = ri_FetchPreparedPlan(&qkey)) == NULL) { @@ -2035,23 +1612,23 @@ RI_FKey_setdefault_upd(PG_FUNCTION_ARGS) heap_close(fk_rel, RowExclusiveLock); /* - * If we just updated the PK row whose key was equal to the FK - * columns' default values, and a referencing row exists in the FK - * table, we would have updated that row to the same values it - * already had --- and RI_FKey_fk_upd_check_required would hence - * believe no check is necessary. So we need to do another lookup - * now and in case a reference still exists, abort the operation. - * That is already implemented in the NO ACTION trigger, so just - * run it. (This recheck is only needed in the SET DEFAULT case, - * since CASCADE must change the FK key values, while SET NULL is - * certain to result in rows that satisfy the FK constraint.) + * If we just deleted or updated the PK row whose key was equal to + * the FK columns' default values, and a referencing row exists in + * the FK table, we would have updated that row to the same values + * it already had --- and RI_FKey_fk_upd_check_required would + * hence believe no check is necessary. So we need to do another + * lookup now and in case a reference still exists, abort the + * operation. That is already implemented in the NO ACTION + * trigger, so just run it. (This recheck is only needed in the + * SET DEFAULT case, since CASCADE would remove such rows in case + * of a DELETE operation or would change the FK key values in case + * of an UPDATE, while SET NULL is certain to result in rows that + * satisfy the FK constraint.) */ - RI_FKey_noaction_upd(fcinfo); - - return PointerGetDatum(NULL); + return ri_restrict(trigdata, true); /* - * Handle MATCH PARTIAL set default update. + * Handle MATCH PARTIAL set default delete or update. */ case FKCONSTR_MATCH_PARTIAL: ereport(ERROR, From 52f63bd916184b5f07130c293475d0cf4f202a86 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 18 Nov 2017 16:46:29 -0500 Subject: [PATCH 0572/1087] Fix compiler warning in rangetypes_spgist.c. On gcc 7.2.0, comparing pointer to (Datum) 0 produces a warning. Treat it as a simple pointer to avoid that; this is more consistent with comparable code elsewhere, anyway. 
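For illustration, a standalone sketch of the two codings (the function names and
the abbreviated Datum typedef are illustrative, not taken from the patch; only
the comparison style matters):

    #include <stdint.h>

    typedef uintptr_t Datum;    /* pointer-sized unsigned integer, as in postgres.h */

    int
    traversal_value_present(void *traversalValue)
    {
        if (traversalValue != (Datum) 0)    /* gcc 7: "comparison between pointer and integer" */
            return 1;
        return 0;
    }

    int
    traversal_value_present_fixed(void *traversalValue)
    {
        if (traversalValue)                 /* ordinary pointer test, no warning */
            return 1;
        return 0;
    }
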
Tomas Vondra Discussion: https://postgr.es/m/99410021-61ef-9a9a-9bc8-f733ece637ee@2ndquadrant.com --- src/backend/utils/adt/rangetypes_spgist.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/adt/rangetypes_spgist.c b/src/backend/utils/adt/rangetypes_spgist.c index d934105d32..27fbf66433 100644 --- a/src/backend/utils/adt/rangetypes_spgist.c +++ b/src/backend/utils/adt/rangetypes_spgist.c @@ -556,7 +556,7 @@ spg_range_quad_inner_consistent(PG_FUNCTION_ARGS) * for lower or upper bounds to be adjacent. Deserialize * previous centroid range if present for checking this. */ - if (in->traversalValue != (Datum) 0) + if (in->traversalValue) { prevCentroid = DatumGetRangeTypeP(in->traversalValue); range_deserialize(typcache, prevCentroid, From c2513365a0a85e77d3c21adb92fe12cfbe0d1897 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Mon, 20 Nov 2017 09:50:10 +1100 Subject: [PATCH 0573/1087] Parameter toast_tuple_target controls TOAST for new rows Specifies the point at which we try to move long column values into TOAST tables. No effect on existing rows. Discussion: https://postgr.es/m/CANP8+jKsVmw6CX6YP9z7zqkTzcKV1+Uzr3XjKcZW=2Ya00OyQQ@mail.gmail.com Author: Simon Riggs Reviewed-by: Andrew Dunstan --- doc/src/sgml/ref/alter_table.sgml | 2 +- doc/src/sgml/ref/create_table.sgml | 21 ++++++++++++ src/backend/access/common/reloptions.c | 12 +++++++ src/backend/access/heap/tuptoaster.c | 2 +- src/include/utils/rel.h | 9 +++++ src/test/regress/expected/strings.out | 47 ++++++++++++++++++++++++++ src/test/regress/sql/strings.sql | 19 +++++++++++ 7 files changed, 110 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 3b19ea7131..92db00f52d 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -629,7 +629,7 @@ ALTER TABLE [ IF EXISTS ] name SHARE UPDATE EXCLUSIVE lock will be taken for - fillfactor and autovacuum storage parameters, as well as the + fillfactor, toast and autovacuum storage parameters, as well as the following planner related parameters: effective_io_concurrency, parallel_workers, seq_page_cost, random_page_cost, n_distinct and n_distinct_inherited. diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index bbb3a51def..83eef7f10d 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -1200,6 +1200,27 @@ WITH ( MODULUS numeric_literal, REM + + toast_tuple_target (integer) + + + The toast_tuple_target specifies the minimum tuple length required before + we try to move long column values into TOAST tables, and is also the + target length we try to reduce the length below once toasting begins. + This only affects columns marked as either External or Extended + and applies only to new tuples - there is no effect on existing rows. + By default this parameter is set to allow at least 4 tuples per block, + which with the default blocksize will be 2040 bytes. Valid values are + between 128 bytes and the (blocksize - header), by default 8160 bytes. + Changing this value may not be useful for very short or very long rows. + Note that the default setting is often close to optimal, and + it is possible that setting this parameter could have negative + effects in some cases. + This parameter cannot be set for TOAST tables. 
+ + + + parallel_workers (integer) diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c index 3d0ce9af6f..aa9c0f1bb9 100644 --- a/src/backend/access/common/reloptions.c +++ b/src/backend/access/common/reloptions.c @@ -23,6 +23,7 @@ #include "access/nbtree.h" #include "access/reloptions.h" #include "access/spgist.h" +#include "access/tuptoaster.h" #include "catalog/pg_type.h" #include "commands/defrem.h" #include "commands/tablespace.h" @@ -290,6 +291,15 @@ static relopt_int intRelOpts[] = }, -1, -1, INT_MAX }, + { + { + "toast_tuple_target", + "Sets the target tuple length at which external columns will be toasted", + RELOPT_KIND_HEAP, + ShareUpdateExclusiveLock + }, + TOAST_TUPLE_TARGET, 128, TOAST_TUPLE_TARGET_MAIN + }, { { "pages_per_range", @@ -1344,6 +1354,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind) offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, multixact_freeze_table_age)}, {"log_autovacuum_min_duration", RELOPT_TYPE_INT, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, log_min_duration)}, + {"toast_tuple_target", RELOPT_TYPE_INT, + offsetof(StdRdOptions, toast_tuple_target)}, {"autovacuum_vacuum_scale_factor", RELOPT_TYPE_REAL, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_scale_factor)}, {"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL, diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c index 5a8f1dab83..c74945a52a 100644 --- a/src/backend/access/heap/tuptoaster.c +++ b/src/backend/access/heap/tuptoaster.c @@ -727,7 +727,7 @@ toast_insert_or_update(Relation rel, HeapTuple newtup, HeapTuple oldtup, hoff += sizeof(Oid); hoff = MAXALIGN(hoff); /* now convert to a limit on the tuple data size */ - maxDataLen = TOAST_TUPLE_TARGET - hoff; + maxDataLen = RelationGetToastTupleTarget(rel, TOAST_TUPLE_TARGET) - hoff; /* * Look for attributes with attstorage 'x' to compress. Also find large diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h index 4bc61e5380..68fd6fbd54 100644 --- a/src/include/utils/rel.h +++ b/src/include/utils/rel.h @@ -277,6 +277,7 @@ typedef struct StdRdOptions { int32 vl_len_; /* varlena header (do not touch directly!) */ int fillfactor; /* page fill factor in percent (0..100) */ + int toast_tuple_target; /* target for tuple toasting */ AutoVacOpts autovacuum; /* autovacuum-related options */ bool user_catalog_table; /* use as an additional catalog relation */ int parallel_workers; /* max number of parallel workers */ @@ -285,6 +286,14 @@ typedef struct StdRdOptions #define HEAP_MIN_FILLFACTOR 10 #define HEAP_DEFAULT_FILLFACTOR 100 +/* + * RelationGetToastTupleTarget + * Returns the relation's toast_tuple_target. Note multiple eval of argument! + */ +#define RelationGetToastTupleTarget(relation, defaulttarg) \ + ((relation)->rd_options ? \ + ((StdRdOptions *) (relation)->rd_options)->toast_tuple_target : (defaulttarg)) + /* * RelationGetFillFactor * Returns the relation's fillfactor. Note multiple eval of argument! 
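As a usage sketch of the new storage parameter (hypothetical table and values,
assuming the default 8 kB block size; the regression test below verifies the
same behavior through pg_relation_size on the TOAST relation):

    CREATE TABLE wide (v text);
    -- Disable compression so inline-vs-TOAST is decided purely by length.
    ALTER TABLE wide ALTER COLUMN v SET STORAGE EXTERNAL;
    INSERT INTO wide VALUES (repeat('x', 3000));  -- exceeds the default 2040-byte
                                                  -- target: pushed out of line
    ALTER TABLE wide SET (toast_tuple_target = 4080);
    INSERT INTO wide VALUES (repeat('x', 3000));  -- under the raised target:
                                                  -- now kept in the heap tuple
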
diff --git a/src/test/regress/expected/strings.out b/src/test/regress/expected/strings.out index 35cadb24aa..3a42ef77be 100644 --- a/src/test/regress/expected/strings.out +++ b/src/test/regress/expected/strings.out @@ -1166,6 +1166,53 @@ SELECT substr(f1, 99995, 10) from toasttest; 567890 (4 rows) +TRUNCATE TABLE toasttest; +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +SELECT pg_relation_size('toasttest')/current_setting('block_size')::integer as blocks; + blocks +-------- + 1 +(1 row) + +select pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; + blocks +-------- + 3 +(1 row) + +SELECT pg_total_relation_size('toasttest')/current_setting('block_size')::integer as blocks; + blocks +-------- + 9 +(1 row) + +TRUNCATE TABLE toasttest; +ALTER TABLE toasttest set (toast_tuple_target = 4080); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +SELECT pg_relation_size('toasttest')/current_setting('block_size')::integer as blocks; + blocks +-------- + 2 +(1 row) + +select pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; + blocks +-------- + 0 +(1 row) + +SELECT pg_total_relation_size('toasttest')/current_setting('block_size')::integer as blocks; + blocks +-------- + 6 +(1 row) + DROP TABLE toasttest; -- -- test substr with toasted bytea values diff --git a/src/test/regress/sql/strings.sql b/src/test/regress/sql/strings.sql index f9cfaeb44a..6396693f27 100644 --- a/src/test/regress/sql/strings.sql +++ b/src/test/regress/sql/strings.sql @@ -366,6 +366,25 @@ SELECT substr(f1, 99995) from toasttest; -- string length SELECT substr(f1, 99995, 10) from toasttest; +TRUNCATE TABLE toasttest; +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +SELECT pg_relation_size('toasttest')/current_setting('block_size')::integer as blocks; +select pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; +SELECT pg_total_relation_size('toasttest')/current_setting('block_size')::integer as blocks; + +TRUNCATE TABLE toasttest; +ALTER TABLE toasttest set (toast_tuple_target = 4080); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +INSERT INTO toasttest values (repeat('1234567890',400)); +SELECT pg_relation_size('toasttest')/current_setting('block_size')::integer as blocks; +select pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; +SELECT pg_total_relation_size('toasttest')/current_setting('block_size')::integer as blocks; + DROP TABLE toasttest; -- From 56f34686220731eef72dfd129519b25f28406db1 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Mon, 20 Nov 2017 12:09:40 +1100 Subject: [PATCH 
0574/1087] Reduce test variability for toast_tuple_target test --- src/test/regress/expected/strings.out | 50 ++++++++------------------- src/test/regress/sql/strings.sql | 26 +++++++------- 2 files changed, 26 insertions(+), 50 deletions(-) diff --git a/src/test/regress/expected/strings.out b/src/test/regress/expected/strings.out index 3a42ef77be..8073eb4fad 100644 --- a/src/test/regress/expected/strings.out +++ b/src/test/regress/expected/strings.out @@ -1167,50 +1167,28 @@ SELECT substr(f1, 99995, 10) from toasttest; (4 rows) TRUNCATE TABLE toasttest; -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -SELECT pg_relation_size('toasttest')/current_setting('block_size')::integer as blocks; +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +-- expect >0 blocks +select 0 = pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; blocks -------- - 1 -(1 row) - -select pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; - blocks --------- - 3 -(1 row) - -SELECT pg_total_relation_size('toasttest')/current_setting('block_size')::integer as blocks; - blocks --------- - 9 + f (1 row) TRUNCATE TABLE toasttest; ALTER TABLE toasttest set (toast_tuple_target = 4080); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -SELECT pg_relation_size('toasttest')/current_setting('block_size')::integer as blocks; +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +-- expect 0 blocks +select 0 = pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; blocks -------- - 2 -(1 row) - -select pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; - blocks --------- - 0 -(1 row) - -SELECT pg_total_relation_size('toasttest')/current_setting('block_size')::integer as blocks; - blocks --------- - 6 + t (1 row) DROP TABLE toasttest; diff --git a/src/test/regress/sql/strings.sql b/src/test/regress/sql/strings.sql index 6396693f27..9ed242208f 100644 --- a/src/test/regress/sql/strings.sql +++ b/src/test/regress/sql/strings.sql @@ -367,23 +367,21 @@ SELECT substr(f1, 99995) from toasttest; SELECT substr(f1, 99995, 10) from toasttest; TRUNCATE TABLE toasttest; -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -SELECT pg_relation_size('toasttest')/current_setting('block_size')::integer as blocks; -select pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 
'toasttest'))/current_setting('block_size')::integer as blocks; -SELECT pg_total_relation_size('toasttest')/current_setting('block_size')::integer as blocks; +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +-- expect >0 blocks +select 0 = pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; TRUNCATE TABLE toasttest; ALTER TABLE toasttest set (toast_tuple_target = 4080); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -INSERT INTO toasttest values (repeat('1234567890',400)); -SELECT pg_relation_size('toasttest')/current_setting('block_size')::integer as blocks; -select pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; -SELECT pg_total_relation_size('toasttest')/current_setting('block_size')::integer as blocks; +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +INSERT INTO toasttest values (repeat('1234567890',300)); +-- expect 0 blocks +select 0 = pg_relation_size('pg_toast.pg_toast_'||(select oid from pg_class where relname = 'toasttest'))/current_setting('block_size')::integer as blocks; DROP TABLE toasttest; From f455e1125e2588d4cd4fc663c6a10da4e003a3b5 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 20 Nov 2017 12:00:33 -0500 Subject: [PATCH 0575/1087] Pass eflags down to parallel workers. Currently, there are no known consequences of this oversight, so no back-patch. Several of the EXEC_FLAG_* constants aren't usable in parallel mode anyway, and potential problems related to the presence or absence of OIDs (see EXEC_FLAG_WITH_OIDS, EXEC_FLAG_WITHOUT_OIDS) seem at present to be masked by the unconditional projection step performed by Gather and Gather Merge. In general, however, it seems important that all participants agree on the values of these flags, which modify executor behavior globally, and a pending patch to skip projection in Gather (Merge) would be outright broken in certain cases without this fix. Patch by me, based on investigation of a test case provided by Amit Kapila. This patch was also reviewed by Amit Kapila. 
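In outline, the fix serializes the leader's top-level eflags alongside the
other fixed-size per-query state in the parallel DSM segment, and each worker
starts its executor with that value where a hard-coded zero was used before.
A simplified stand-in for that handshake (the type and function names here are
illustrative, not the actual executor API):

    #include <stdint.h>

    /* Fixed-size state the leader publishes for its workers. */
    typedef struct FixedState
    {
        int64_t tuples_needed;  /* tuple bound, as before */
        int     eflags;         /* leader's es_top_eflags, newly added */
    } FixedState;

    /* Leader side: record the flags its own executor was started with. */
    static void
    leader_publish(FixedState *fps, int es_top_eflags)
    {
        fps->eflags = es_top_eflags;
    }

    /* Worker side: use the leader's flags, so both sides agree. */
    static int
    worker_start_eflags(const FixedState *fps)
    {
        return fps->eflags;     /* previously the worker passed 0 here */
    }
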
Discussion: http://postgr.es/m/CA+TgmoZ0ZL=cesZFq8c9NnfK6bqy-wwUd3_74iYGodYrSoQ7Fw@mail.gmail.com --- src/backend/executor/execParallel.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 2ead32d5ad..53c5254be1 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -69,6 +69,7 @@ typedef struct FixedParallelExecutorState { int64 tuples_needed; /* tuple bound, see ExecSetTupleBound */ dsa_pointer param_exec; + int eflags; } FixedParallelExecutorState; /* @@ -647,6 +648,7 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, fpes = shm_toc_allocate(pcxt->toc, sizeof(FixedParallelExecutorState)); fpes->tuples_needed = tuples_needed; fpes->param_exec = InvalidDsaPointer; + fpes->eflags = estate->es_top_eflags; shm_toc_insert(pcxt->toc, PARALLEL_KEY_EXECUTOR_FIXED, fpes); /* Store query string */ @@ -1224,7 +1226,7 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc) area = dsa_attach_in_place(area_space, seg); /* Start up the executor */ - ExecutorStart(queryDesc, 0); + ExecutorStart(queryDesc, fpes->eflags); /* Special executor initialization steps for parallel workers */ queryDesc->planstate->state->es_query_dsa = area; From 0510421ee091ee3ddabdd5bd7ed096563f6bf17f Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 20 Nov 2017 12:34:06 -0500 Subject: [PATCH 0576/1087] Tweak use of ExecContextForcesOids by Gather (Merge). Specifically, pass the outer plan's PlanState instead of our own PlanState. At present, ExecContextForcesOids doesn't actually care which PlanState we pass; it just looks through to the underlying EState to find the result relation or top-level eflags. However, in the future it might care. If that happens, and if our goal is to get a tuple descriptor that matches that of the outer plan, then I think what we care about is whether the outer plan's context forces OIDs, rather than whether our own context forces OIDs, just as we use the outer node's target list rather than our own. Patch by me, reviewed by Amit Kapila. Discussion: http://postgr.es/m/CA+TgmoZ0ZL=cesZFq8c9NnfK6bqy-wwUd3_74iYGodYrSoQ7Fw@mail.gmail.com --- src/backend/executor/nodeGather.c | 2 +- src/backend/executor/nodeGatherMerge.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 07c62d2fea..30885e6f5c 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -112,7 +112,7 @@ ExecInitGather(Gather *node, EState *estate, int eflags) /* * Initialize funnel slot to same tuple descriptor as outer plan. */ - if (!ExecContextForcesOids(&gatherstate->ps, &hasoid)) + if (!ExecContextForcesOids(outerPlanState(gatherstate), &hasoid)) hasoid = false; tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid); ExecSetSlotDescriptor(gatherstate->funnel_slot, tupDesc); diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 7dd655c448..d81462e72b 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -155,7 +155,7 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) * Store the tuple descriptor into gather merge state, so we can use it * while initializing the gather merge slots. 
*/ - if (!ExecContextForcesOids(&gm_state->ps, &hasoid)) + if (!ExecContextForcesOids(outerPlanState(gm_state), &hasoid)) hasoid = false; tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid); gm_state->tupDesc = tupDesc; From 2ede45c3a49e484edfa143850d55eb32dba296de Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Tue, 21 Nov 2017 08:00:54 +1100 Subject: [PATCH 0577/1087] Fix pg_control_checkpoint from commit 4b0d28de06 Author: Simon Riggs Reported-By: Andreas Seltenreich --- src/backend/utils/misc/pg_controldata.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/misc/pg_controldata.c b/src/backend/utils/misc/pg_controldata.c index 1b5086a45d..dee6dfc12f 100644 --- a/src/backend/utils/misc/pg_controldata.c +++ b/src/backend/utils/misc/pg_controldata.c @@ -90,7 +90,7 @@ pg_control_checkpoint(PG_FUNCTION_ARGS) * Construct a tuple descriptor for the result row. This must match this * function's pg_proc entry! */ - tupdesc = CreateTemplateTupleDesc(19, false); + tupdesc = CreateTemplateTupleDesc(18, false); TupleDescInitEntry(tupdesc, (AttrNumber) 1, "checkpoint_lsn", LSNOID, -1, 0); TupleDescInitEntry(tupdesc, (AttrNumber) 2, "redo_lsn", From f3bd00c0168abaf13ac0733a77bc1106d3f6720d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 20 Nov 2017 17:57:46 -0500 Subject: [PATCH 0578/1087] Add support for Motorola 88K to s_lock.h. Apparently there are still people out there who care about this old architecture. They probably care about dusty versions of Postgres too, so back-patch to all supported branches. David Carlier (from a patch being carried by OpenBSD packagers) Discussion: https://postgr.es/m/CA+XhMqzwFSGVU7MEnfhCecc8YdP98tigXzzpd0AAdwaGwaVXEA@mail.gmail.com --- src/include/storage/s_lock.h | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h index 99e109853f..35088db10a 100644 --- a/src/include/storage/s_lock.h +++ b/src/include/storage/s_lock.h @@ -543,6 +543,30 @@ tas(volatile slock_t *lock) #endif /* (__mc68000__ || __m68k__) && __linux__ */ +/* Motorola 88k */ +#if defined(__m88k__) +#define HAS_TEST_AND_SET + +typedef unsigned int slock_t; + +#define TAS(lock) tas(lock) + +static __inline__ int +tas(volatile slock_t *lock) +{ + register slock_t _res = 1; + + __asm__ __volatile__( + " xmem %0, %2, %%r0 \n" +: "+r"(_res), "+m"(*lock) +: "r"(lock) +: "memory"); + return (int) _res; +} + +#endif /* __m88k__ */ + + /* * VAXen -- even multiprocessor ones * (thanks to Tom Ivar Helbekkmo) From 84669c9b06ba0d2485007643322bd3fcaa8c729e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 20 Nov 2017 18:05:02 -0500 Subject: [PATCH 0579/1087] Use out-of-line M68K spinlock code for OpenBSD as well as NetBSD. 
David Carlier (from a patch being carried by OpenBSD packagers) Discussion: https://postgr.es/m/CA+XhMqzwFSGVU7MEnfhCecc8YdP98tigXzzpd0AAdwaGwaVXEA@mail.gmail.com --- src/backend/storage/lmgr/s_lock.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/storage/lmgr/s_lock.c b/src/backend/storage/lmgr/s_lock.c index 40d8156331..247d7b8199 100644 --- a/src/backend/storage/lmgr/s_lock.c +++ b/src/backend/storage/lmgr/s_lock.c @@ -251,7 +251,7 @@ static void tas_dummy() { __asm__ __volatile__( -#if defined(__NetBSD__) && defined(__ELF__) +#if (defined(__NetBSD__) || defined(__OpenBSD__)) && defined(__ELF__) /* no underscore for label and % for registers */ "\ .global tas \n\ @@ -276,7 +276,7 @@ _tas: \n\ _success: \n\ moveq #0,d0 \n\ rts \n" -#endif /* __NetBSD__ && __ELF__ */ +#endif /* (__NetBSD__ || __OpenBSD__) && __ELF__ */ ); } #endif /* __m68k__ && !__linux__ */ From de1d042f5979bc1388e9a6d52a4d445342b04932 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 20 Nov 2017 20:25:18 -0500 Subject: [PATCH 0580/1087] Support index-only scans in contrib/cube and contrib/seg GiST indexes. To do this, we only have to remove the compress and decompress support functions, which have never done anything more than detoasting. In the wake of commit d3a4f89d8, this results in automatically enabling index-only scans, since the core code will now know that the stored representation is the same as the original data (up to detoasting). The only exciting part of this is that ALTER OPERATOR FAMILY lacks a way to drop a support function that was declared as being part of an opclass rather than being loose in the family. For the moment, we'll hack our way to a solution with a manual update of the pg_depend entry type, which is what distinguishes the two cases. Perhaps someday it'll be worth providing a cleaner way to do that, but for now it seems like a very niche problem. Note that the underlying C functions remain, to support use of the shared libraries with older versions of the modules' SQL declarations. Someday we may be able to remove them, but not soon. 
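In generic form, the workaround used by both update scripts looks like this
(myfunc, myopfamily, and mytype are placeholders; the concrete scripts for
cube and seg follow below):

    -- Reclassify the opclass's 'internal' dependency on the support
    -- function as 'auto', as though the function were a loose member of
    -- the opfamily rather than bound into the opclass.
    UPDATE pg_catalog.pg_depend
    SET deptype = 'a'
    WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass
      AND objid =
        (SELECT objid
         FROM pg_catalog.pg_depend
         WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass
           AND refclassid = 'pg_catalog.pg_proc'::pg_catalog.regclass
           AND refobjid = 'myfunc(pg_catalog.internal)'::pg_catalog.regprocedure)
      AND refclassid = 'pg_catalog.pg_opclass'::pg_catalog.regclass
      AND deptype = 'i';

    -- With the dependency loosened, the function can be dropped normally.
    ALTER OPERATOR FAMILY myopfamily USING gist DROP FUNCTION 3 (mytype);
    DROP FUNCTION myfunc(pg_catalog.internal);
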
Andrey Borodin, reviewed by me Discussion: https://postgr.es/m/D0F53A05-4F4A-4DEC-8339-3C069FA0EE11@yandex-team.ru --- contrib/cube/Makefile | 2 +- contrib/cube/cube--1.3--1.4.sql | 45 ++++++++++++++++++++++++++++++++ contrib/cube/cube.control | 2 +- contrib/cube/expected/cube.out | 22 ++++++++++++++++ contrib/cube/expected/cube_2.out | 22 ++++++++++++++++ contrib/cube/sql/cube.sql | 7 +++++ contrib/seg/Makefile | 2 +- contrib/seg/expected/seg.out | 28 ++++++++++++++++++++ contrib/seg/expected/seg_1.out | 28 ++++++++++++++++++++ contrib/seg/seg--1.2--1.3.sql | 45 ++++++++++++++++++++++++++++++++ contrib/seg/seg.control | 2 +- contrib/seg/sql/seg.sql | 9 +++++++ 12 files changed, 210 insertions(+), 4 deletions(-) create mode 100644 contrib/cube/cube--1.3--1.4.sql create mode 100644 contrib/seg/seg--1.2--1.3.sql diff --git a/contrib/cube/Makefile b/contrib/cube/Makefile index 244c1d9bbf..accb7d28a3 100644 --- a/contrib/cube/Makefile +++ b/contrib/cube/Makefile @@ -4,7 +4,7 @@ MODULE_big = cube OBJS= cube.o cubeparse.o $(WIN32RES) EXTENSION = cube -DATA = cube--1.2.sql cube--1.2--1.3.sql \ +DATA = cube--1.2.sql cube--1.2--1.3.sql cube--1.3--1.4.sql \ cube--1.1--1.2.sql cube--1.0--1.1.sql \ cube--unpackaged--1.0.sql PGFILEDESC = "cube - multidimensional cube data type" diff --git a/contrib/cube/cube--1.3--1.4.sql b/contrib/cube/cube--1.3--1.4.sql new file mode 100644 index 0000000000..869820c0c8 --- /dev/null +++ b/contrib/cube/cube--1.3--1.4.sql @@ -0,0 +1,45 @@ +/* contrib/cube/cube--1.3--1.4.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION cube UPDATE TO '1.4'" to load this file. \quit + +-- +-- Get rid of unnecessary compress and decompress support functions. +-- +-- To be allowed to drop the opclass entry for a support function, +-- we must change the entry's dependency type from 'internal' to 'auto', +-- as though it were a loose member of the opfamily rather than being +-- bound into a particular opclass. There's no SQL command for that, +-- so fake it with a manual update on pg_depend. 
+-- +UPDATE pg_catalog.pg_depend +SET deptype = 'a' +WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass + AND objid = + (SELECT objid + FROM pg_catalog.pg_depend + WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass + AND refclassid = 'pg_catalog.pg_proc'::pg_catalog.regclass + AND (refobjid = 'g_cube_compress(pg_catalog.internal)'::pg_catalog.regprocedure)) + AND refclassid = 'pg_catalog.pg_opclass'::pg_catalog.regclass + AND deptype = 'i'; + +ALTER OPERATOR FAMILY gist_cube_ops USING gist drop function 3 (cube); +ALTER EXTENSION cube DROP function g_cube_compress(pg_catalog.internal); +DROP FUNCTION g_cube_compress(pg_catalog.internal); + +UPDATE pg_catalog.pg_depend +SET deptype = 'a' +WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass + AND objid = + (SELECT objid + FROM pg_catalog.pg_depend + WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass + AND refclassid = 'pg_catalog.pg_proc'::pg_catalog.regclass + AND (refobjid = 'g_cube_decompress(pg_catalog.internal)'::pg_catalog.regprocedure)) + AND refclassid = 'pg_catalog.pg_opclass'::pg_catalog.regclass + AND deptype = 'i'; + +ALTER OPERATOR FAMILY gist_cube_ops USING gist drop function 4 (cube); +ALTER EXTENSION cube DROP function g_cube_decompress(pg_catalog.internal); +DROP FUNCTION g_cube_decompress(pg_catalog.internal); diff --git a/contrib/cube/cube.control b/contrib/cube/cube.control index af062d4843..f39a838e3f 100644 --- a/contrib/cube/cube.control +++ b/contrib/cube/cube.control @@ -1,5 +1,5 @@ # cube extension comment = 'data type for multidimensional cubes' -default_version = '1.3' +default_version = '1.4' module_pathname = '$libdir/cube' relocatable = true diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out index 328b3b5f5d..c430b4e1f0 100644 --- a/contrib/cube/expected/cube.out +++ b/contrib/cube/expected/cube.out @@ -1589,6 +1589,28 @@ SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' GROUP BY c ORDER BY c; (2424, 160),(2424, 81) (5 rows) +-- Test index-only scans +SET enable_bitmapscan = false; +EXPLAIN (COSTS OFF) +SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; + QUERY PLAN +-------------------------------------------------------- + Sort + Sort Key: c + -> Index Only Scan using test_cube_ix on test_cube + Index Cond: (c <@ '(3000, 1000),(0, 0)'::cube) +(4 rows) + +SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; + c +------------------------- + (337, 455),(240, 359) + (759, 187),(662, 163) + (1444, 403),(1346, 344) + (2424, 160),(2424, 81) +(4 rows) + +RESET enable_bitmapscan; -- kNN with index SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; c | dist diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out index 1aa5cf2f98..b979c4d6c8 100644 --- a/contrib/cube/expected/cube_2.out +++ b/contrib/cube/expected/cube_2.out @@ -1589,6 +1589,28 @@ SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' GROUP BY c ORDER BY c; (2424, 160),(2424, 81) (5 rows) +-- Test index-only scans +SET enable_bitmapscan = false; +EXPLAIN (COSTS OFF) +SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; + QUERY PLAN +-------------------------------------------------------- + Sort + Sort Key: c + -> Index Only Scan using test_cube_ix on test_cube + Index Cond: (c <@ '(3000, 1000),(0, 0)'::cube) +(4 rows) + +SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; + c +------------------------- + (337, 455),(240, 359) + (759, 
187),(662, 163) + (1444, 403),(1346, 344) + (2424, 160),(2424, 81) +(4 rows) + +RESET enable_bitmapscan; -- kNN with index SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; c | dist diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql index 58ea3ad811..eb24576895 100644 --- a/contrib/cube/sql/cube.sql +++ b/contrib/cube/sql/cube.sql @@ -382,6 +382,13 @@ SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c; -- Test sorting SELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' GROUP BY c ORDER BY c; +-- Test index-only scans +SET enable_bitmapscan = false; +EXPLAIN (COSTS OFF) +SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; +SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; +RESET enable_bitmapscan; + -- kNN with index SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; SELECT *, c <=> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <=> '(100, 100),(500, 500)'::cube LIMIT 5; diff --git a/contrib/seg/Makefile b/contrib/seg/Makefile index 00a5472d3b..41270f84f6 100644 --- a/contrib/seg/Makefile +++ b/contrib/seg/Makefile @@ -4,7 +4,7 @@ MODULE_big = seg OBJS = seg.o segparse.o $(WIN32RES) EXTENSION = seg -DATA = seg--1.1.sql seg--1.1--1.2.sql \ +DATA = seg--1.1.sql seg--1.1--1.2.sql seg--1.2--1.3.sql \ seg--1.0--1.1.sql seg--unpackaged--1.0.sql PGFILEDESC = "seg - line segment data type" diff --git a/contrib/seg/expected/seg.out b/contrib/seg/expected/seg.out index 18010c4d5c..a289dbe5f9 100644 --- a/contrib/seg/expected/seg.out +++ b/contrib/seg/expected/seg.out @@ -930,12 +930,40 @@ SELECT '1'::seg <@ '-1 .. 1'::seg AS bool; CREATE TABLE test_seg (s seg); \copy test_seg from 'data/test_seg.data' CREATE INDEX test_seg_ix ON test_seg USING gist (s); +EXPLAIN (COSTS OFF) +SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; + QUERY PLAN +------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on test_seg + Recheck Cond: (s @> '1.1e1 .. 11.3'::seg) + -> Bitmap Index Scan on test_seg_ix + Index Cond: (s @> '1.1e1 .. 11.3'::seg) +(5 rows) + +SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; + count +------- + 143 +(1 row) + +SET enable_bitmapscan = false; +EXPLAIN (COSTS OFF) +SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; + QUERY PLAN +----------------------------------------------------- + Aggregate + -> Index Only Scan using test_seg_ix on test_seg + Index Cond: (s @> '1.1e1 .. 11.3'::seg) +(3 rows) + SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; count ------- 143 (1 row) +RESET enable_bitmapscan; -- Test sorting SELECT * FROM test_seg WHERE s @> '11..11.3' GROUP BY s; s diff --git a/contrib/seg/expected/seg_1.out b/contrib/seg/expected/seg_1.out index 566ce394ed..48abb65bb0 100644 --- a/contrib/seg/expected/seg_1.out +++ b/contrib/seg/expected/seg_1.out @@ -930,12 +930,40 @@ SELECT '1'::seg <@ '-1 .. 1'::seg AS bool; CREATE TABLE test_seg (s seg); \copy test_seg from 'data/test_seg.data' CREATE INDEX test_seg_ix ON test_seg USING gist (s); +EXPLAIN (COSTS OFF) +SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; + QUERY PLAN +------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on test_seg + Recheck Cond: (s @> '1.1e1 .. 11.3'::seg) + -> Bitmap Index Scan on test_seg_ix + Index Cond: (s @> '1.1e1 .. 
11.3'::seg) +(5 rows) + +SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; + count +------- + 143 +(1 row) + +SET enable_bitmapscan = false; +EXPLAIN (COSTS OFF) +SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; + QUERY PLAN +----------------------------------------------------- + Aggregate + -> Index Only Scan using test_seg_ix on test_seg + Index Cond: (s @> '1.1e1 .. 11.3'::seg) +(3 rows) + SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; count ------- 143 (1 row) +RESET enable_bitmapscan; -- Test sorting SELECT * FROM test_seg WHERE s @> '11..11.3' GROUP BY s; s diff --git a/contrib/seg/seg--1.2--1.3.sql b/contrib/seg/seg--1.2--1.3.sql new file mode 100644 index 0000000000..cd71a300f6 --- /dev/null +++ b/contrib/seg/seg--1.2--1.3.sql @@ -0,0 +1,45 @@ +/* contrib/seg/seg--1.2--1.3.sql */ + +-- complain if script is sourced in psql, rather than via ALTER EXTENSION +\echo Use "ALTER EXTENSION seg UPDATE TO '1.3'" to load this file. \quit + +-- +-- Get rid of unnecessary compress and decompress support functions. +-- +-- To be allowed to drop the opclass entry for a support function, +-- we must change the entry's dependency type from 'internal' to 'auto', +-- as though it were a loose member of the opfamily rather than being +-- bound into a particular opclass. There's no SQL command for that, +-- so fake it with a manual update on pg_depend. +-- +UPDATE pg_catalog.pg_depend +SET deptype = 'a' +WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass + AND objid = + (SELECT objid + FROM pg_catalog.pg_depend + WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass + AND refclassid = 'pg_catalog.pg_proc'::pg_catalog.regclass + AND (refobjid = 'gseg_compress(pg_catalog.internal)'::pg_catalog.regprocedure)) + AND refclassid = 'pg_catalog.pg_opclass'::pg_catalog.regclass + AND deptype = 'i'; + +ALTER OPERATOR FAMILY gist_seg_ops USING gist drop function 3 (seg); +ALTER EXTENSION seg DROP function gseg_compress(pg_catalog.internal); +DROP function gseg_compress(pg_catalog.internal); + +UPDATE pg_catalog.pg_depend +SET deptype = 'a' +WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass + AND objid = + (SELECT objid + FROM pg_catalog.pg_depend + WHERE classid = 'pg_catalog.pg_amproc'::pg_catalog.regclass + AND refclassid = 'pg_catalog.pg_proc'::pg_catalog.regclass + AND (refobjid = 'gseg_decompress(pg_catalog.internal)'::pg_catalog.regprocedure)) + AND refclassid = 'pg_catalog.pg_opclass'::pg_catalog.regclass + AND deptype = 'i'; + +ALTER OPERATOR FAMILY gist_seg_ops USING gist drop function 4 (seg); +ALTER EXTENSION seg DROP function gseg_decompress(pg_catalog.internal); +DROP function gseg_decompress(pg_catalog.internal); diff --git a/contrib/seg/seg.control b/contrib/seg/seg.control index ba3d092c25..d697cd6c2a 100644 --- a/contrib/seg/seg.control +++ b/contrib/seg/seg.control @@ -1,5 +1,5 @@ # seg extension comment = 'data type for representing line segments or floating-point intervals' -default_version = '1.2' +default_version = '1.3' module_pathname = '$libdir/seg' relocatable = true diff --git a/contrib/seg/sql/seg.sql b/contrib/seg/sql/seg.sql index aa91931474..1d7bad7c37 100644 --- a/contrib/seg/sql/seg.sql +++ b/contrib/seg/sql/seg.sql @@ -216,7 +216,16 @@ CREATE TABLE test_seg (s seg); \copy test_seg from 'data/test_seg.data' CREATE INDEX test_seg_ix ON test_seg USING gist (s); + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; +SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; + +SET enable_bitmapscan = false; +EXPLAIN (COSTS OFF) 
+SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; SELECT count(*) FROM test_seg WHERE s @> '11..11.3'; +RESET enable_bitmapscan; -- Test sorting SELECT * FROM test_seg WHERE s @> '11..11.3' GROUP BY s; From f3b0897a1213f46b4d3a99a7f8ef3a4b32e03572 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 21 Nov 2017 13:06:32 -0500 Subject: [PATCH 0581/1087] Fix multiple problems with satisfies_hash_partition. Fix the function header comment to describe the actual behavior. Check that table OID, modulus, and remainder arguments are not NULL before accessing them. Check that the modulus and remainder are sensible. If the table OID doesn't exist, return NULL instead of emitting an internal error, similar to what we do elsewhere. Check that the actual argument types match, or at least are binary coercible to, the expected argument types. Correctly handle invocation of this function using the VARIADIC syntax. Add regression tests. Robert Haas and Amul Sul, per a report by Andreas Seltenreich and subsequent followup investigation. Discussion: http://postgr.es/m/871sl4sdrv.fsf@ansel.ydns.eu --- src/backend/catalog/partition.c | 202 ++++++++++++++++++++---- src/test/regress/expected/hash_part.out | 113 +++++++++++++ src/test/regress/parallel_schedule | 2 +- src/test/regress/serial_schedule | 1 + src/test/regress/sql/hash_part.sql | 90 +++++++++++ 5 files changed, 377 insertions(+), 31 deletions(-) create mode 100644 src/test/regress/expected/hash_part.out create mode 100644 src/test/regress/sql/hash_part.sql diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 67d4c2a09b..9a44cceb22 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -40,6 +40,7 @@ #include "optimizer/planmain.h" #include "optimizer/prep.h" #include "optimizer/var.h" +#include "parser/parse_coerce.h" #include "rewrite/rewriteManip.h" #include "storage/lmgr.h" #include "utils/array.h" @@ -3085,9 +3086,11 @@ compute_hash_value(PartitionKey key, Datum *values, bool *isnull) /* * satisfies_hash_partition * - * This is a SQL-callable function for use in hash partition constraints takes - * an already computed hash values of each partition key attribute, and combine - * them into a single hash value by calling hash_combine64. + * This is an SQL-callable function for use in hash partition constraints. + * The first three arguments are the parent table OID, modulus, and remainder. + * The remaining arguments are the value of the partitioning columns (or + * expressions); these are hashed and the results are combined into a single + * hash value by calling hash_combine64. * * Returns true if remainder produced when this computed single hash value is * divided by the given modulus is equal to given remainder, otherwise false. @@ -3100,60 +3103,160 @@ satisfies_hash_partition(PG_FUNCTION_ARGS) typedef struct ColumnsHashData { Oid relid; - int16 nkeys; + int nkeys; + Oid variadic_type; + int16 variadic_typlen; + bool variadic_typbyval; + char variadic_typalign; FmgrInfo partsupfunc[PARTITION_MAX_KEYS]; } ColumnsHashData; - Oid parentId = PG_GETARG_OID(0); - int modulus = PG_GETARG_INT32(1); - int remainder = PG_GETARG_INT32(2); - short nkeys = PG_NARGS() - 3; - int i; + Oid parentId; + int modulus; + int remainder; Datum seed = UInt64GetDatum(HASH_PARTITION_SEED); ColumnsHashData *my_extra; uint64 rowHash = 0; + /* Return null if the parent OID, modulus, or remainder is NULL. 
*/ + if (PG_ARGISNULL(0) || PG_ARGISNULL(1) || PG_ARGISNULL(2)) + PG_RETURN_NULL(); + parentId = PG_GETARG_OID(0); + modulus = PG_GETARG_INT32(1); + remainder = PG_GETARG_INT32(2); + + /* Sanity check modulus and remainder. */ + if (modulus <= 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("modulus for hash partition must be a positive integer"))); + if (remainder < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("remainder for hash partition must be a non-negative integer"))); + if (remainder >= modulus) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("remainder for hash partition must be less than modulus"))); + /* * Cache hash function information. */ my_extra = (ColumnsHashData *) fcinfo->flinfo->fn_extra; - if (my_extra == NULL || my_extra->nkeys != nkeys || - my_extra->relid != parentId) + if (my_extra == NULL || my_extra->relid != parentId) { Relation parent; PartitionKey key; - int j; - - fcinfo->flinfo->fn_extra = - MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt, - offsetof(ColumnsHashData, partsupfunc) + - sizeof(FmgrInfo) * nkeys); - my_extra = (ColumnsHashData *) fcinfo->flinfo->fn_extra; - my_extra->nkeys = nkeys; - my_extra->relid = parentId; + int j; /* Open parent relation and fetch partition keyinfo */ - parent = heap_open(parentId, AccessShareLock); + parent = try_relation_open(parentId, AccessShareLock); + if (parent == NULL) + PG_RETURN_NULL(); key = RelationGetPartitionKey(parent); - Assert(key->partnatts == nkeys); - for (j = 0; j < nkeys; ++j) - fmgr_info_copy(&my_extra->partsupfunc[j], - key->partsupfunc, + /* Reject parent table that is not hash-partitioned. */ + if (parent->rd_rel->relkind != RELKIND_PARTITIONED_TABLE || + key->strategy != PARTITION_STRATEGY_HASH) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("\"%s\" is not a hash partitioned table", + get_rel_name(parentId)))); + + if (!get_fn_expr_variadic(fcinfo->flinfo)) + { + int nargs = PG_NARGS() - 3; + + /* complain if wrong number of column values */ + if (key->partnatts != nargs) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("number of partitioning columns (%d) does not match number of partition keys provided (%d)", + key->partnatts, nargs))); + + /* allocate space for our cache */ + fcinfo->flinfo->fn_extra = + MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt, + offsetof(ColumnsHashData, partsupfunc) + + sizeof(FmgrInfo) * nargs); + my_extra = (ColumnsHashData *) fcinfo->flinfo->fn_extra; + my_extra->relid = parentId; + my_extra->nkeys = key->partnatts; + + /* check argument types and save fmgr_infos */ + for (j = 0; j < key->partnatts; ++j) + { + Oid argtype = get_fn_expr_argtype(fcinfo->flinfo, j + 3); + + if (argtype != key->parttypid[j] && !IsBinaryCoercible(argtype, key->parttypid[j])) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("column %d of the partition key has type \"%s\", but supplied value is of type \"%s\"", + j + 1, format_type_be(key->parttypid[j]), format_type_be(argtype)))); + + fmgr_info_copy(&my_extra->partsupfunc[j], + &key->partsupfunc[j], + fcinfo->flinfo->fn_mcxt); + } + + } + else + { + ArrayType *variadic_array = PG_GETARG_ARRAYTYPE_P(3); + + /* allocate space for our cache -- just one FmgrInfo in this case */ + fcinfo->flinfo->fn_extra = + MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt, + offsetof(ColumnsHashData, partsupfunc) + + sizeof(FmgrInfo)); + my_extra = (ColumnsHashData *) fcinfo->flinfo->fn_extra; + my_extra->relid = 
parentId; + my_extra->nkeys = key->partnatts; + my_extra->variadic_type = ARR_ELEMTYPE(variadic_array); + get_typlenbyvalalign(my_extra->variadic_type, + &my_extra->variadic_typlen, + &my_extra->variadic_typbyval, + &my_extra->variadic_typalign); + + /* check argument types */ + for (j = 0; j < key->partnatts; ++j) + if (key->parttypid[j] != my_extra->variadic_type) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("column %d of the partition key has type \"%s\", but supplied value is of type \"%s\"", + j + 1, + format_type_be(key->parttypid[j]), + format_type_be(my_extra->variadic_type)))); + + fmgr_info_copy(&my_extra->partsupfunc[0], + &key->partsupfunc[0], fcinfo->flinfo->fn_mcxt); + } /* Hold lock until commit */ - heap_close(parent, NoLock); + relation_close(parent, NoLock); } - for (i = 0; i < nkeys; i++) + if (!OidIsValid(my_extra->variadic_type)) { - /* keys start from fourth argument of function. */ - int argno = i + 3; + int nkeys = my_extra->nkeys; + int i; + + /* + * For a non-variadic call, neither the number of arguments nor their + * types can change across calls, so avoid the expense of rechecking + * here. + */ - if (!PG_ARGISNULL(argno)) + for (i = 0; i < nkeys; i++) { Datum hash; + /* keys start from fourth argument of function. */ + int argno = i + 3; + + if (PG_ARGISNULL(argno)) + continue; + Assert(OidIsValid(my_extra->partsupfunc[i].fn_oid)); hash = FunctionCall2(&my_extra->partsupfunc[i], @@ -3164,6 +3267,45 @@ satisfies_hash_partition(PG_FUNCTION_ARGS) rowHash = hash_combine64(rowHash, DatumGetUInt64(hash)); } } + else + { + ArrayType *variadic_array = PG_GETARG_ARRAYTYPE_P(3); + int i; + int nelems; + Datum *datum; + bool *isnull; + + deconstruct_array(variadic_array, + my_extra->variadic_type, + my_extra->variadic_typlen, + my_extra->variadic_typbyval, + my_extra->variadic_typalign, + &datum, &isnull, &nelems); + + /* complain if wrong number of column values */ + if (nelems != my_extra->nkeys) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("number of partitioning columns (%d) does not match number of partition keys provided (%d)", + my_extra->nkeys, nelems))); + + for (i = 0; i < nelems; i++) + { + Datum hash; + + if (isnull[i]) + continue; + + Assert(OidIsValid(my_extra->partsupfunc[0].fn_oid)); + + hash = FunctionCall2(&my_extra->partsupfunc[0], + datum[i], + seed); + + /* Form a single 64-bit hash value */ + rowHash = hash_combine64(rowHash, DatumGetUInt64(hash)); + } + } PG_RETURN_BOOL(rowHash % modulus == remainder); } diff --git a/src/test/regress/expected/hash_part.out b/src/test/regress/expected/hash_part.out new file mode 100644 index 0000000000..9e9e56f6fc --- /dev/null +++ b/src/test/regress/expected/hash_part.out @@ -0,0 +1,113 @@ +-- +-- Hash partitioning. 
+-- +CREATE OR REPLACE FUNCTION hashint4_noop(int4, int8) RETURNS int8 AS +$$SELECT coalesce($1,0)::int8$$ LANGUAGE sql IMMUTABLE; +CREATE OPERATOR CLASS test_int4_ops FOR TYPE int4 USING HASH AS +OPERATOR 1 = , FUNCTION 2 hashint4_noop(int4, int8); +CREATE OR REPLACE FUNCTION hashtext_length(text, int8) RETURNS int8 AS +$$SELECT length(coalesce($1,''))::int8$$ LANGUAGE sql IMMUTABLE; +CREATE OPERATOR CLASS test_text_ops FOR TYPE text USING HASH AS +OPERATOR 1 = , FUNCTION 2 hashtext_length(text, int8); +CREATE TABLE mchash (a int, b text, c jsonb) + PARTITION BY HASH (a test_int4_ops, b test_text_ops); +CREATE TABLE mchash1 + PARTITION OF mchash FOR VALUES WITH (MODULUS 4, REMAINDER 0); +-- invalid OID, no such table +SELECT satisfies_hash_partition(0, 4, 0, NULL); + satisfies_hash_partition +-------------------------- + +(1 row) + +-- not partitioned +SELECT satisfies_hash_partition('tenk1'::regclass, 4, 0, NULL); +ERROR: "tenk1" is not a hash partitioned table +-- partition rather than the parent +SELECT satisfies_hash_partition('mchash1'::regclass, 4, 0, NULL); +ERROR: "mchash1" is not a hash partitioned table +-- invalid modulus +SELECT satisfies_hash_partition('mchash'::regclass, 0, 0, NULL); +ERROR: modulus for hash partition must be a positive integer +-- remainder too small +SELECT satisfies_hash_partition('mchash'::regclass, 1, -1, NULL); +ERROR: remainder for hash partition must be a non-negative integer +-- remainder too large +SELECT satisfies_hash_partition('mchash'::regclass, 1, 1, NULL); +ERROR: remainder for hash partition must be less than modulus +-- modulus is null +SELECT satisfies_hash_partition('mchash'::regclass, NULL, 0, NULL); + satisfies_hash_partition +-------------------------- + +(1 row) + +-- remainder is null +SELECT satisfies_hash_partition('mchash'::regclass, 4, NULL, NULL); + satisfies_hash_partition +-------------------------- + +(1 row) + +-- too many arguments +SELECT satisfies_hash_partition('mchash'::regclass, 4, 0, NULL::int, NULL::text, NULL::json); +ERROR: number of partitioning columns (2) does not match number of partition keys provided (3) +-- too few arguments +SELECT satisfies_hash_partition('mchash'::regclass, 3, 1, NULL::int); +ERROR: number of partitioning columns (2) does not match number of partition keys provided (1) +-- wrong argument type +SELECT satisfies_hash_partition('mchash'::regclass, 2, 1, NULL::int, NULL::int); +ERROR: column 2 of the partition key has type "text", but supplied value is of type "integer" +-- ok, should be false +SELECT satisfies_hash_partition('mchash'::regclass, 4, 0, 0, ''::text); + satisfies_hash_partition +-------------------------- + f +(1 row) + +-- ok, should be true +SELECT satisfies_hash_partition('mchash'::regclass, 4, 0, 1, ''::text); + satisfies_hash_partition +-------------------------- + t +(1 row) + +-- argument via variadic syntax, should fail because not all partitioning +-- columns are of the correct type +SELECT satisfies_hash_partition('mchash'::regclass, 2, 1, + variadic array[1,2]::int[]); +ERROR: column 2 of the partition key has type "text", but supplied value is of type "integer" +-- multiple partitioning columns of the same type +CREATE TABLE mcinthash (a int, b int, c jsonb) + PARTITION BY HASH (a test_int4_ops, b test_int4_ops); +-- now variadic should work, should be false +SELECT satisfies_hash_partition('mcinthash'::regclass, 4, 0, + variadic array[0, 0]); + satisfies_hash_partition +-------------------------- + f +(1 row) + +-- should be true +SELECT 
satisfies_hash_partition('mcinthash'::regclass, 4, 0, + variadic array[1, 0]); + satisfies_hash_partition +-------------------------- + t +(1 row) + +-- wrong length +SELECT satisfies_hash_partition('mcinthash'::regclass, 4, 0, + variadic array[]::int[]); +ERROR: number of partitioning columns (2) does not match number of partition keys provided (0) +-- wrong type +SELECT satisfies_hash_partition('mcinthash'::regclass, 4, 0, + variadic array[now(), now()]); +ERROR: column 1 of the partition key has type "integer", but supplied value is of type "timestamp with time zone" +-- cleanup +DROP TABLE mchash; +DROP TABLE mcinthash; +DROP OPERATOR CLASS test_text_ops USING hash; +DROP OPERATOR CLASS test_int4_ops USING hash; +DROP FUNCTION hashint4_noop(int4, int8); +DROP FUNCTION hashtext_length(text, int8); diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index aa5e6af621..1a3ac4c1f9 100644 --- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -116,7 +116,7 @@ test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid c # ---------- # Another group of parallel tests # ---------- -test: identity partition_join reloptions +test: identity partition_join reloptions hash_part # event triggers cannot run concurrently with any test that runs DDL test: event_trigger diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index 3866314a92..a205e5d05c 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -181,5 +181,6 @@ test: xml test: identity test: partition_join test: reloptions +test: hash_part test: event_trigger test: stats diff --git a/src/test/regress/sql/hash_part.sql b/src/test/regress/sql/hash_part.sql new file mode 100644 index 0000000000..94c5eaab0c --- /dev/null +++ b/src/test/regress/sql/hash_part.sql @@ -0,0 +1,90 @@ +-- +-- Hash partitioning. 
+-- + +CREATE OR REPLACE FUNCTION hashint4_noop(int4, int8) RETURNS int8 AS +$$SELECT coalesce($1,0)::int8$$ LANGUAGE sql IMMUTABLE; +CREATE OPERATOR CLASS test_int4_ops FOR TYPE int4 USING HASH AS +OPERATOR 1 = , FUNCTION 2 hashint4_noop(int4, int8); + +CREATE OR REPLACE FUNCTION hashtext_length(text, int8) RETURNS int8 AS +$$SELECT length(coalesce($1,''))::int8$$ LANGUAGE sql IMMUTABLE; +CREATE OPERATOR CLASS test_text_ops FOR TYPE text USING HASH AS +OPERATOR 1 = , FUNCTION 2 hashtext_length(text, int8); + +CREATE TABLE mchash (a int, b text, c jsonb) + PARTITION BY HASH (a test_int4_ops, b test_text_ops); +CREATE TABLE mchash1 + PARTITION OF mchash FOR VALUES WITH (MODULUS 4, REMAINDER 0); + +-- invalid OID, no such table +SELECT satisfies_hash_partition(0, 4, 0, NULL); + +-- not partitioned +SELECT satisfies_hash_partition('tenk1'::regclass, 4, 0, NULL); + +-- partition rather than the parent +SELECT satisfies_hash_partition('mchash1'::regclass, 4, 0, NULL); + +-- invalid modulus +SELECT satisfies_hash_partition('mchash'::regclass, 0, 0, NULL); + +-- remainder too small +SELECT satisfies_hash_partition('mchash'::regclass, 1, -1, NULL); + +-- remainder too large +SELECT satisfies_hash_partition('mchash'::regclass, 1, 1, NULL); + +-- modulus is null +SELECT satisfies_hash_partition('mchash'::regclass, NULL, 0, NULL); + +-- remainder is null +SELECT satisfies_hash_partition('mchash'::regclass, 4, NULL, NULL); + +-- too many arguments +SELECT satisfies_hash_partition('mchash'::regclass, 4, 0, NULL::int, NULL::text, NULL::json); + +-- too few arguments +SELECT satisfies_hash_partition('mchash'::regclass, 3, 1, NULL::int); + +-- wrong argument type +SELECT satisfies_hash_partition('mchash'::regclass, 2, 1, NULL::int, NULL::int); + +-- ok, should be false +SELECT satisfies_hash_partition('mchash'::regclass, 4, 0, 0, ''::text); + +-- ok, should be true +SELECT satisfies_hash_partition('mchash'::regclass, 4, 0, 1, ''::text); + +-- argument via variadic syntax, should fail because not all partitioning +-- columns are of the correct type +SELECT satisfies_hash_partition('mchash'::regclass, 2, 1, + variadic array[1,2]::int[]); + +-- multiple partitioning columns of the same type +CREATE TABLE mcinthash (a int, b int, c jsonb) + PARTITION BY HASH (a test_int4_ops, b test_int4_ops); + +-- now variadic should work, should be false +SELECT satisfies_hash_partition('mcinthash'::regclass, 4, 0, + variadic array[0, 0]); + +-- should be true +SELECT satisfies_hash_partition('mcinthash'::regclass, 4, 0, + variadic array[1, 0]); + +-- wrong length +SELECT satisfies_hash_partition('mcinthash'::regclass, 4, 0, + variadic array[]::int[]); + +-- wrong type +SELECT satisfies_hash_partition('mcinthash'::regclass, 4, 0, + variadic array[now(), now()]); + +-- cleanup +DROP TABLE mchash; +DROP TABLE mcinthash; +DROP OPERATOR CLASS test_text_ops USING hash; +DROP OPERATOR CLASS test_int4_ops USING hash; +DROP FUNCTION hashint4_noop(int4, int8); +DROP FUNCTION hashtext_length(text, int8); From ae65f6066dc3d19a55f4fdcd3b30003c5ad8dbed Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 21 Nov 2017 13:56:24 -0500 Subject: [PATCH 0582/1087] Provide for forward compatibility with future minor protocol versions. Previously, any attempt to request a 3.x protocol version other than 3.0 would lead to a hard connection failure, which made the minor protocol version really no different from the major protocol version and precluded gentle protocol version breaks. 
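To make the exchange concrete, a client receiving the NegotiateProtocolVersion
message described below could consume it roughly as follows. This is a
hypothetical sketch, not part of this patch: PGconnLike, read_int32(),
read_cstring() and option_is_required() are illustrative stand-ins for
whatever buffered message reader and policy the client library actually has.

    /*
     * Hypothetical client-side handling of the NegotiateProtocolVersion
     * ('v') message body; the read_* helpers are stand-ins, not real
     * libpq functions.  Returns the minor version to continue with, or
     * -1 if the client should abort the connection.
     */
    static int
    handle_negotiate_protocol_version(PGconnLike *conn)
    {
        int32   len = read_int32(conn);          /* length, including self */
        int32   newest_minor = read_int32(conn); /* highest 3.x server speaks */
        int32   nopts = read_int32(conn);        /* options not recognized */
        int32   i;

        (void) len;
        for (i = 0; i < nopts; i++)
        {
            const char *optname = read_cstring(conn);  /* e.g. "_pq_.foo" */

            if (option_is_required(conn, optname))     /* hypothetical policy */
                return -1;                             /* must abort */
        }
        return newest_minor;   /* keep going, speaking 3.<newest_minor> */
    }

Since the message is followed by the normal authentication sequence, a client
that accepts the downgrade simply keeps reading.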
Instead, when the client requests a 3.x protocol version where x is greater than 0, send the new NegotiateProtocolVersion message to convey that we support only 3.0. This makes it possible to introduce new minor protocol versions without requiring a connection retry when the server is older. In addition, if the startup packet includes name/value pairs where the name starts with "_pq_.", assume that those are protocol options, not GUCs. Include those we don't support (i.e. all of them, at present) in the NegotiateProtocolVersion message so that the client knows they were not understood. This makes it possible for the client to request previously-unsupported features without bumping the protocol version at all; the client can tell from the server's response whether the option was understood. It will take some time before servers that support these new facilities become common in the wild; to speed things up and make things easier for a future 3.1 protocol version, back-patch to all supported releases. Robert Haas and Badrul Chowdhury Discussion: http://postgr.es/m/BN6PR21MB0772FFA0CBD298B76017744CD1730@BN6PR21MB0772.namprd21.prod.outlook.com Discussion: http://postgr.es/m/30788.1498672033@sss.pgh.pa.us --- doc/src/sgml/protocol.sgml | 117 +++++++++++++++++++++++++--- src/backend/postmaster/postmaster.c | 58 ++++++++++++-- 2 files changed, 159 insertions(+), 16 deletions(-) diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 4d3b6446c4..999fa06018 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -22,10 +22,18 @@ PostgreSQL 7.4 and later. For descriptions of the earlier protocol versions, see previous releases of the PostgreSQL documentation. A single server - can support multiple protocol versions. The initial - startup-request message tells the server which protocol version the - client is attempting to use, and then the server follows that protocol - if it is able. + can support multiple protocol versions. The initial startup-request + message tells the server which protocol version the client is attempting to + use. If the major version requested by the client is not supported by + the server, the connection will be rejected (for example, this would occur + if the client requested protocol version 4.0, which does not exist as of + this writing). If the minor version requested by the client is not + supported by the server (e.g. the client requests version 3.1, but the + server supports only 3.0), the server may either reject the connection or + may respond with a NegotiateProtocolVersion message containing the highest + minor protocol version which it supports. The client may then choose either + to continue with the connection using the specified protocol version or + to abort the connection. @@ -406,6 +414,21 @@ + + NegotiateProtocolVersion + + + The server does not support the minor protocol version requested + by the client, but does support an earlier version of the protocol; + this message indicates the highest supported minor version. This + message will also be sent if the client requested unsupported protocol + options (i.e. beginning with _pq_.) in the + startup packet. This message will be followed by an ErrorResponse or + a message indicating the success or failure of authentication. + + + + @@ -420,8 +443,10 @@ for further messages from the server. In this phase a backend process is being started, and the frontend is just an interested bystander. 
It is still possible for the startup attempt
-    to fail (ErrorResponse), but in the normal case the backend will send
-    some ParameterStatus messages, BackendKeyData, and finally ReadyForQuery.
+    to fail (ErrorResponse) or the server to decline support for the requested
+    minor protocol version (NegotiateProtocolVersion), but in the normal case
+    the backend will send some ParameterStatus messages, BackendKeyData, and
+    finally ReadyForQuery.
 
@@ -4715,6 +4740,74 @@ GSSResponse (F)
 
 
+
+
+NegotiateProtocolVersion (B)
+
+
+
+
+
+
+
+      Byte1('v')
+
+
+
+      Identifies the message as a protocol version negotiation
+      message.
+
+
+
+
+
+      Int32
+
+
+
+      Length of message contents in bytes, including self.
+
+
+
+
+
+      Int32
+
+
+
+      Newest minor protocol version supported by the server
+      for the major protocol version requested by the client.
+
+
+
+
+
+      Int32
+
+
+
+      Number of protocol options not recognized by the server.
+
+
+
+
+   Then, for each protocol option not recognized by the server, there
+   is the following:
+
+
+
+      String
+
+
+
+      The option name.
+
+
+
+
+
+
+
@@ -5670,11 +5763,13 @@ StartupMessage (F)
 
 
-        In addition to the above, any run-time parameter that can be
-        set at backend start time might be listed. Such settings
-        will be applied during backend start (after parsing the
-        command-line arguments if any). The values will act as
-        session defaults.
+        In addition to the above, other parameters may be listed.
+        Parameter names beginning with _pq_. are
+        reserved for use as protocol extensions, while others are
+        treated as run-time parameters to be set at backend start
+        time. Such settings will be applied during backend start
+        (after parsing the command-line arguments if any) and will
+        act as session defaults.
 
 
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 9906a85bc0..a3d4917173 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -101,6 +101,7 @@
 #include "lib/ilist.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "pg_getopt.h"
@@ -412,6 +413,7 @@ static void ExitPostmaster(int status) pg_attribute_noreturn();
 static int	ServerLoop(void);
 static int	BackendStartup(Port *port);
 static int	ProcessStartupPacket(Port *port, bool SSLdone);
+static void SendNegotiateProtocolVersion(List *unrecognized_protocol_options);
 static void processCancelRequest(Port *port, void *pkt);
 static int	initMasks(fd_set *rmask);
 static void report_fork_failure_to_client(Port *port, int errnum);
@@ -2052,12 +2054,9 @@ ProcessStartupPacket(Port *port, bool SSLdone)
 	 */
 	FrontendProtocol = proto;
 
-	/* Check we can handle the protocol the frontend is using. */
-
+	/* Check that the major protocol version is in range. */
 	if (PG_PROTOCOL_MAJOR(proto) < PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST) ||
-		PG_PROTOCOL_MAJOR(proto) > PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST) ||
-		(PG_PROTOCOL_MAJOR(proto) == PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST) &&
-		 PG_PROTOCOL_MINOR(proto) > PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST)))
+		PG_PROTOCOL_MAJOR(proto) > PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST))
 		ereport(FATAL,
 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				 errmsg("unsupported frontend protocol %u.%u: server supports %u.0 to %u.%u",
@@ -2079,6 +2078,7 @@ ProcessStartupPacket(Port *port, bool SSLdone)
 	if (PG_PROTOCOL_MAJOR(proto) >= 3)
 	{
 		int32		offset = sizeof(ProtocolVersion);
+		List	   *unrecognized_protocol_options = NIL;
 
 		/*
 		 * Scan packet body for name/option pairs.
We can assume any string @@ -2128,6 +2128,16 @@ ProcessStartupPacket(Port *port, bool SSLdone) valptr), errhint("Valid values are: \"false\", 0, \"true\", 1, \"database\"."))); } + else if (strncmp(nameptr, "_pq_.", 5) == 0) + { + /* + * Any option beginning with _pq_. is reserved for use as a + * protocol-level option, but at present no such options are + * defined. + */ + unrecognized_protocol_options = + lappend(unrecognized_protocol_options, pstrdup(nameptr)); + } else { /* Assume it's a generic GUC option */ @@ -2147,6 +2157,16 @@ ProcessStartupPacket(Port *port, bool SSLdone) ereport(FATAL, (errcode(ERRCODE_PROTOCOL_VIOLATION), errmsg("invalid startup packet layout: expected terminator as last byte"))); + + /* + * If the client requested a newer protocol version or if the client + * requested any protocol options we didn't recognize, let them know + * the newest minor protocol version we do support and the names of any + * unrecognized options. + */ + if (PG_PROTOCOL_MINOR(proto) > PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST) || + unrecognized_protocol_options != NIL) + SendNegotiateProtocolVersion(unrecognized_protocol_options); } else { @@ -2260,6 +2280,34 @@ ProcessStartupPacket(Port *port, bool SSLdone) return STATUS_OK; } +/* + * Send a NegotiateProtocolVersion to the client. This lets the client know + * that they have requested a newer minor protocol version than we are able + * to speak. We'll speak the highest version we know about; the client can, + * of course, abandon the connection if that's a problem. + * + * We also include in the response a list of protocol options we didn't + * understand. This allows clients to include optional parameters that might + * be present either in newer protocol versions or third-party protocol + * extensions without fear of having to reconnect if those options are not + * understood, while at the same time making certain that the client is aware + * of which options were actually accepted. + */ +static void +SendNegotiateProtocolVersion(List *unrecognized_protocol_options) +{ + StringInfoData buf; + ListCell *lc; + + pq_beginmessage(&buf, 'v'); /* NegotiateProtocolVersion */ + pq_sendint32(&buf, PG_PROTOCOL_LATEST); + pq_sendint32(&buf, list_length(unrecognized_protocol_options)); + foreach(lc, unrecognized_protocol_options) + pq_sendstring(&buf, lfirst(lc)); + pq_endmessage(&buf); + + /* no need to flush, some other message will follow */ +} /* * The client has sent a cancel request packet, not a normal From 41761265e88f09fba4028352b8e2be82d049cedc Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 21 Nov 2017 16:37:11 -0500 Subject: [PATCH 0583/1087] Doc: fix broken markup. --- doc/src/sgml/ref/create_table.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index 83eef7f10d..3bc155a775 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -1201,7 +1201,7 @@ WITH ( MODULUS numeric_literal, REM - toast_tuple_target (integer) + toast_tuple_target (integer) The toast_tuple_target specifies the minimum tuple length required before From 16827d442448d1935ed644e944a4cb8213345351 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 21 Nov 2017 17:30:48 -0500 Subject: [PATCH 0584/1087] pgbench: fix stats reporting when some transactions are skipped. pgbench can skip some transactions when both -R and -L options are used. 
Previously, this resulted in slightly silly statistics both in progress reports and final output, because the skipped transactions were counted as executed for TPS and related stats. Discount skipped xacts in TPS numbers, and also when figuring the percentage of xacts exceeding the latency limit. Also, don't print per-script skipped-transaction counts when there is only one script. That's redundant with the overall count, and it's inconsistent with the fact that we don't print other per-script stats when there's only one script. Clean up some unnecessary interactions between what should be independent options that were due to that decision. While at it, avoid division-by-zero in cases where no transactions were executed. While on modern platforms this would generally result in printing "NaN" rather than a crash, that isn't spelled consistently across platforms and it would confuse many people. Skip the relevant output entirely when practical, else print zeroes. Fabien Coelho, reviewed by Steve Singer, additional hacking by me Discussion: https://postgr.es/m/26654.1505232433@sss.pgh.pa.us --- doc/src/sgml/ref/pgbench.sgml | 3 +- src/bin/pgbench/pgbench.c | 104 +++++++++++-------- src/bin/pgbench/t/001_pgbench_with_server.pl | 2 +- 3 files changed, 65 insertions(+), 44 deletions(-) diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index c48a69713a..f6e93c3ade 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -1363,8 +1363,7 @@ latency average = 15.844 ms latency stddev = 2.715 ms tps = 618.764555 (including connections establishing) tps = 622.977698 (excluding connections establishing) -script statistics: - - statement latencies in milliseconds: +statement latencies in milliseconds: 0.002 \set aid random(1, 100000 * :scale) 0.005 \set bid random(1, 1 * :scale) 0.002 \set tid random(1, 10 * :scale) diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index ceb2fcc3ad..bd96eae5e6 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -3664,27 +3664,32 @@ addScript(ParsedScript script) static void printSimpleStats(const char *prefix, SimpleStats *ss) { - /* print NaN if no transactions where executed */ - double latency = ss->sum / ss->count; - double stddev = sqrt(ss->sum2 / ss->count - latency * latency); + if (ss->count > 0) + { + double latency = ss->sum / ss->count; + double stddev = sqrt(ss->sum2 / ss->count - latency * latency); - printf("%s average = %.3f ms\n", prefix, 0.001 * latency); - printf("%s stddev = %.3f ms\n", prefix, 0.001 * stddev); + printf("%s average = %.3f ms\n", prefix, 0.001 * latency); + printf("%s stddev = %.3f ms\n", prefix, 0.001 * stddev); + } } /* print out results */ static void printResults(TState *threads, StatsData *total, instr_time total_time, - instr_time conn_total_time, int latency_late) + instr_time conn_total_time, int64 latency_late) { double time_include, tps_include, tps_exclude; + int64 ntx = total->cnt - total->skipped; time_include = INSTR_TIME_GET_DOUBLE(total_time); - tps_include = total->cnt / time_include; - tps_exclude = total->cnt / (time_include - - (INSTR_TIME_GET_DOUBLE(conn_total_time) / nclients)); + + /* tps is about actually executed transactions */ + tps_include = ntx / time_include; + tps_exclude = ntx / + (time_include - (INSTR_TIME_GET_DOUBLE(conn_total_time) / nclients)); /* Report test parameters. 
*/ printf("transaction type: %s\n", @@ -3697,13 +3702,13 @@ printResults(TState *threads, StatsData *total, instr_time total_time, { printf("number of transactions per client: %d\n", nxacts); printf("number of transactions actually processed: " INT64_FORMAT "/%d\n", - total->cnt - total->skipped, nxacts * nclients); + ntx, nxacts * nclients); } else { printf("duration: %d s\n", duration); printf("number of transactions actually processed: " INT64_FORMAT "\n", - total->cnt); + ntx); } /* Remaining stats are nonsensical if we failed to execute any xacts */ @@ -3716,9 +3721,9 @@ printResults(TState *threads, StatsData *total, instr_time total_time, 100.0 * total->skipped / total->cnt); if (latency_limit) - printf("number of transactions above the %.1f ms latency limit: %d (%.3f %%)\n", - latency_limit / 1000.0, latency_late, - 100.0 * latency_late / total->cnt); + printf("number of transactions above the %.1f ms latency limit: " INT64_FORMAT "/" INT64_FORMAT " (%.3f %%)\n", + latency_limit / 1000.0, latency_late, ntx, + (ntx > 0) ? 100.0 * latency_late / ntx : 0.0); if (throttle_delay || progress || latency_limit) printSimpleStats("latency", &total->latency); @@ -3745,47 +3750,55 @@ printResults(TState *threads, StatsData *total, instr_time total_time, printf("tps = %f (excluding connections establishing)\n", tps_exclude); /* Report per-script/command statistics */ - if (per_script_stats || latency_limit || is_latencies) + if (per_script_stats || is_latencies) { int i; for (i = 0; i < num_scripts; i++) { - if (num_scripts > 1) + if (per_script_stats) + { + StatsData *sstats = &sql_script[i].stats; + printf("SQL script %d: %s\n" " - weight: %d (targets %.1f%% of total)\n" " - " INT64_FORMAT " transactions (%.1f%% of total, tps = %f)\n", i + 1, sql_script[i].desc, sql_script[i].weight, 100.0 * sql_script[i].weight / total_weight, - sql_script[i].stats.cnt, - 100.0 * sql_script[i].stats.cnt / total->cnt, - sql_script[i].stats.cnt / time_include); - else - printf("script statistics:\n"); + sstats->cnt, + 100.0 * sstats->cnt / total->cnt, + (sstats->cnt - sstats->skipped) / time_include); - if (latency_limit) - printf(" - number of transactions skipped: " INT64_FORMAT " (%.3f%%)\n", - sql_script[i].stats.skipped, - 100.0 * sql_script[i].stats.skipped / sql_script[i].stats.cnt); + if (throttle_delay && latency_limit && sstats->cnt > 0) + printf(" - number of transactions skipped: " INT64_FORMAT " (%.3f%%)\n", + sstats->skipped, + 100.0 * sstats->skipped / sstats->cnt); - if (num_scripts > 1) - printSimpleStats(" - latency", &sql_script[i].stats.latency); + printSimpleStats(" - latency", &sstats->latency); + } /* Report per-command latencies */ if (is_latencies) { Command **commands; - printf(" - statement latencies in milliseconds:\n"); + if (per_script_stats) + printf(" - statement latencies in milliseconds:\n"); + else + printf("statement latencies in milliseconds:\n"); for (commands = sql_script[i].commands; *commands != NULL; commands++) + { + SimpleStats *cstats = &(*commands)->stats; + printf(" %11.3f %s\n", - 1000.0 * (*commands)->stats.sum / - (*commands)->stats.count, + (cstats->count > 0) ? 
+ 1000.0 * cstats->sum / cstats->count : 0.0, (*commands)->line); + } } } } @@ -3984,7 +3997,6 @@ main(int argc, char **argv) break; case 'r': benchmarking_option_set = true; - per_script_stats = true; is_latencies = true; break; case 's': @@ -4861,7 +4873,8 @@ threadRun(void *arg) { /* generate and show report */ StatsData cur; - int64 run = now - last_report; + int64 run = now - last_report, + ntx; double tps, total_run, latency, @@ -4876,7 +4889,7 @@ threadRun(void *arg) * XXX: No locking. There is no guarantee that we get an * atomic snapshot of the transaction count and latencies, so * these figures can well be off by a small amount. The - * progress is report's purpose is to give a quick overview of + * progress report's purpose is to give a quick overview of * how the test is going, so that shouldn't matter too much. * (If a read from a 64-bit integer is not atomic, you might * get a "torn" read and completely bogus latencies though!) @@ -4890,15 +4903,21 @@ threadRun(void *arg) cur.skipped += thread[i].stats.skipped; } + /* we count only actually executed transactions */ + ntx = (cur.cnt - cur.skipped) - (last.cnt - last.skipped); total_run = (now - thread_start) / 1000000.0; - tps = 1000000.0 * (cur.cnt - last.cnt) / run; - latency = 0.001 * (cur.latency.sum - last.latency.sum) / - (cur.cnt - last.cnt); - sqlat = 1.0 * (cur.latency.sum2 - last.latency.sum2) - / (cur.cnt - last.cnt); - stdev = 0.001 * sqrt(sqlat - 1000000.0 * latency * latency); - lag = 0.001 * (cur.lag.sum - last.lag.sum) / - (cur.cnt - last.cnt); + tps = 1000000.0 * ntx / run; + if (ntx > 0) + { + latency = 0.001 * (cur.latency.sum - last.latency.sum) / ntx; + sqlat = 1.0 * (cur.latency.sum2 - last.latency.sum2) / ntx; + stdev = 0.001 * sqrt(sqlat - 1000000.0 * latency * latency); + lag = 0.001 * (cur.lag.sum - last.lag.sum) / ntx; + } + else + { + latency = sqlat = stdev = lag = 0; + } if (progress_timestamp) { @@ -4915,7 +4934,10 @@ threadRun(void *arg) (long) tv.tv_sec, (long) (tv.tv_usec / 1000)); } else + { + /* round seconds are expected, but the thread may be late */ snprintf(tbuf, sizeof(tbuf), "%.1f s", total_run); + } fprintf(stderr, "progress: %s, %.1f tps, lat %.3f ms stddev %.3f", diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 864d580c64..c095881312 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -427,7 +427,7 @@ sub pgbench 0, [ qr{processed: [01]/10}, qr{type: .*/001_pgbench_sleep}, - qr{above the 1.0 ms latency limit: [01] } ], + qr{above the 1.0 ms latency limit: [01]/} ], [qr{^$}i], 'pgbench late throttling', { '001_pgbench_sleep' => q{\sleep 2ms} }); From 7e17a6889a4441c2cebca2dd47f4170ff8dc5de2 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Wed, 22 Nov 2017 16:28:14 +1100 Subject: [PATCH 0585/1087] Set es_output_cid in replication worker Allows triggers to operate correctly Author: Petr Jelinek Reported-by: Konstantin Knizhnik --- src/backend/replication/logical/worker.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c index 0e68670767..fa5d9bb120 100644 --- a/src/backend/replication/logical/worker.c +++ b/src/backend/replication/logical/worker.c @@ -204,6 +204,8 @@ create_estate_for_relation(LogicalRepRelMapEntry *rel) estate->es_num_result_relations = 1; estate->es_result_relation_info = resultRelInfo; + estate->es_output_cid = GetCurrentCommandId(true); + /* Triggers might need 
a slot */ if (resultRelInfo->ri_TrigDesc) estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate); From 05b6ec39d72f7065bb5ce770319e826f1da92441 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Thu, 23 Nov 2017 05:10:39 +1100 Subject: [PATCH 0586/1087] Show partition info from psql \d+ MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Author: Amit Langote, Ashutosh Bapat Reviewed-by: Álvaro Herrera, Simon Riggs --- src/bin/psql/describe.c | 34 ++++++++++++++++++---- src/test/regress/expected/create_table.out | 13 +++++---- src/test/regress/expected/foreign_data.out | 3 ++ src/test/regress/expected/insert.out | 17 +++++++++++ src/test/regress/sql/create_table.sql | 2 +- src/test/regress/sql/insert.sql | 4 +++ 6 files changed, 61 insertions(+), 12 deletions(-) diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index b7b978a361..44c508971a 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -2870,7 +2870,9 @@ describeOneTableDetails(const char *schemaname, /* print child tables (with additional info if partitions) */ if (pset.sversion >= 100000) printfPQExpBuffer(&buf, - "SELECT c.oid::pg_catalog.regclass, pg_catalog.pg_get_expr(c.relpartbound, c.oid)" + "SELECT c.oid::pg_catalog.regclass," + " pg_catalog.pg_get_expr(c.relpartbound, c.oid)," + " c.relkind" " FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i" " WHERE c.oid=i.inhrelid AND i.inhparent = '%s'" " ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text;", oid); @@ -2893,7 +2895,18 @@ describeOneTableDetails(const char *schemaname, else tuples = PQntuples(result); - if (!verbose) + /* + * For a partitioned table with no partitions, always print the number + * of partitions as zero, even when verbose output is expected. + * Otherwise, we will not print "Partitions" section for a partitioned + * table without any partitions. 
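	 * (Plain tables keep the old behavior below: if they have no child
	 * tables, no count line is printed at all.)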
+ */ + if (tableinfo.relkind == RELKIND_PARTITIONED_TABLE && tuples == 0) + { + printfPQExpBuffer(&buf, _("Number of partitions: %d"), tuples); + printTableAddFooter(&cont, buf.data); + } + else if (!verbose) { /* print the number of child tables, if any */ if (tuples > 0) @@ -2925,12 +2938,21 @@ describeOneTableDetails(const char *schemaname, } else { + char *partitioned_note; + + if (*PQgetvalue(result, i, 2) == RELKIND_PARTITIONED_TABLE) + partitioned_note = ", PARTITIONED"; + else + partitioned_note = ""; + if (i == 0) - printfPQExpBuffer(&buf, "%s: %s %s", - ct, PQgetvalue(result, i, 0), PQgetvalue(result, i, 1)); + printfPQExpBuffer(&buf, "%s: %s %s%s", + ct, PQgetvalue(result, i, 0), PQgetvalue(result, i, 1), + partitioned_note); else - printfPQExpBuffer(&buf, "%*s %s %s", - ctw, "", PQgetvalue(result, i, 0), PQgetvalue(result, i, 1)); + printfPQExpBuffer(&buf, "%*s %s %s%s", + ctw, "", PQgetvalue(result, i, 0), PQgetvalue(result, i, 1), + partitioned_note); } if (i < tuples - 1) appendPQExpBufferChar(&buf, ','); diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index 335cd37e18..8e745402ae 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -428,13 +428,15 @@ ERROR: cannot inherit from partitioned table "partitioned2" c | text | | | d | text | | | Partition key: RANGE (a oid_ops, plusone(b), c, d COLLATE "C") +Number of partitions: 0 -\d partitioned2 - Table "public.partitioned2" - Column | Type | Collation | Nullable | Default ---------+---------+-----------+----------+--------- - a | integer | | | +\d+ partitioned2 + Table "public.partitioned2" + Column | Type | Collation | Nullable | Default | Storage | Stats target | Description +--------+---------+-----------+----------+---------+---------+--------------+------------- + a | integer | | | | plain | | Partition key: LIST (((a + 1))) +Number of partitions: 0 DROP TABLE partitioned, partitioned2; -- @@ -858,5 +860,6 @@ SELECT obj_description('parted_col_comment'::regclass); a | integer | | | | plain | | Partition key b | text | | | | extended | | Partition key: LIST (a) +Number of partitions: 0 DROP TABLE parted_col_comment; diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out index 331f7a911f..d2c184f2cf 100644 --- a/src/test/regress/expected/foreign_data.out +++ b/src/test/regress/expected/foreign_data.out @@ -1898,6 +1898,7 @@ DROP FOREIGN TABLE pt2_1; c2 | text | | | | extended | | c3 | date | | | | plain | | Partition key: LIST (c1) +Number of partitions: 0 CREATE FOREIGN TABLE pt2_1 ( c1 integer NOT NULL, @@ -1982,6 +1983,7 @@ ALTER TABLE pt2 ALTER c2 SET NOT NULL; c2 | text | | not null | | extended | | c3 | date | | | | plain | | Partition key: LIST (c1) +Number of partitions: 0 \d+ pt2_1 Foreign table "public.pt2_1" @@ -2011,6 +2013,7 @@ ALTER TABLE pt2 ADD CONSTRAINT pt2chk1 CHECK (c1 > 0); Partition key: LIST (c1) Check constraints: "pt2chk1" CHECK (c1 > 0) +Number of partitions: 0 \d+ pt2_1 Foreign table "public.pt2_1" diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out index 9d84ba4658..1116b3a8d2 100644 --- a/src/test/regress/expected/insert.out +++ b/src/test/regress/expected/insert.out @@ -425,6 +425,23 @@ from hash_parted order by part; hpart3 | 11 | 3 (13 rows) +-- test \d+ output on a table which has both partitioned and unpartitioned +-- partitions +\d+ list_parted + Table "public.list_parted" + Column | Type | Collation | 
Nullable | Default | Storage | Stats target | Description +--------+---------+-----------+----------+---------+----------+--------------+------------- + a | text | | | | extended | | + b | integer | | | | plain | | +Partition key: LIST (lower(a)) +Partitions: part_aa_bb FOR VALUES IN ('aa', 'bb'), + part_cc_dd FOR VALUES IN ('cc', 'dd'), + part_default DEFAULT, PARTITIONED, + part_ee_ff FOR VALUES IN ('ee', 'ff'), PARTITIONED, + part_gg FOR VALUES IN ('gg'), PARTITIONED, + part_null FOR VALUES IN (NULL), + part_xx_yy FOR VALUES IN ('xx', 'yy'), PARTITIONED + -- cleanup drop table range_parted, list_parted; drop table hash_parted; diff --git a/src/test/regress/sql/create_table.sql b/src/test/regress/sql/create_table.sql index b77b476436..8f9991ef18 100644 --- a/src/test/regress/sql/create_table.sql +++ b/src/test/regress/sql/create_table.sql @@ -421,7 +421,7 @@ CREATE TABLE fail () INHERITS (partitioned2); -- Partition key in describe output \d partitioned -\d partitioned2 +\d+ partitioned2 DROP TABLE partitioned, partitioned2; diff --git a/src/test/regress/sql/insert.sql b/src/test/regress/sql/insert.sql index 791817ba50..f22ab41ae3 100644 --- a/src/test/regress/sql/insert.sql +++ b/src/test/regress/sql/insert.sql @@ -252,6 +252,10 @@ insert into hpart3 values(11); select tableoid::regclass as part, a, a%4 as "remainder = a % 4" from hash_parted order by part; +-- test \d+ output on a table which has both partitioned and unpartitioned +-- partitions +\d+ list_parted + -- cleanup drop table range_parted, list_parted; drop table hash_parted; From 3bae43ca4dc6c3123788044436521f1d33d9f930 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Thu, 23 Nov 2017 05:17:47 +1100 Subject: [PATCH 0587/1087] Sort default partition to bottom of psql \d+ MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Minor patch to change sort order only Author: Ashutosh Bapat Reviewed-by: Álvaro Herrera, Simon Riggs --- src/bin/psql/describe.c | 3 ++- src/test/regress/expected/insert.out | 4 ++-- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index 44c508971a..99167104d4 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -2875,7 +2875,8 @@ describeOneTableDetails(const char *schemaname, " c.relkind" " FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i" " WHERE c.oid=i.inhrelid AND i.inhparent = '%s'" - " ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text;", oid); + " ORDER BY pg_catalog.pg_get_expr(c.relpartbound, c.oid) = 'DEFAULT'," + " c.oid::pg_catalog.regclass::pg_catalog.text;", oid); else if (pset.sversion >= 80300) printfPQExpBuffer(&buf, "SELECT c.oid::pg_catalog.regclass" diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out index 1116b3a8d2..7481bebd83 100644 --- a/src/test/regress/expected/insert.out +++ b/src/test/regress/expected/insert.out @@ -436,11 +436,11 @@ from hash_parted order by part; Partition key: LIST (lower(a)) Partitions: part_aa_bb FOR VALUES IN ('aa', 'bb'), part_cc_dd FOR VALUES IN ('cc', 'dd'), - part_default DEFAULT, PARTITIONED, part_ee_ff FOR VALUES IN ('ee', 'ff'), PARTITIONED, part_gg FOR VALUES IN ('gg'), PARTITIONED, part_null FOR VALUES IN (NULL), - part_xx_yy FOR VALUES IN ('xx', 'yy'), PARTITIONED + part_xx_yy FOR VALUES IN ('xx', 'yy'), PARTITIONED, + part_default DEFAULT, PARTITIONED -- cleanup drop table range_parted, list_parted; From a4ccc1cef5a04cc054af83bc4582a045d5232cb3 Mon Sep 17 00:00:00 2001 From: Simon Riggs 
Date: Thu, 23 Nov 2017 05:45:07 +1100
Subject: [PATCH 0588/1087] Generational memory allocator
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add a new style of memory allocator, known as Generational, appropriate
for use in cases where memory is allocated and then freed in roughly
oldest-first order (FIFO).

Use the new allocator for logical decoding’s reorderbuffer to
significantly reduce memory usage and improve performance.

Author: Tomas Vondra
Reviewed-by: Simon Riggs
---
 .../replication/logical/reorderbuffer.c       |  80 +-
 src/backend/utils/mmgr/Makefile               |   2 +-
 src/backend/utils/mmgr/README                 |  23 +
 src/backend/utils/mmgr/generation.c           | 768 ++++++++++++++++++
 src/include/nodes/memnodes.h                  |   4 +-
 src/include/nodes/nodes.h                     |   1 +
 src/include/replication/reorderbuffer.h       |  15 +-
 src/include/utils/memutils.h                  |   5 +
 8 files changed, 819 insertions(+), 79 deletions(-)
 create mode 100644 src/backend/utils/mmgr/generation.c

diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 0f607bab70..dc0ad5b0e7 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -43,6 +43,12 @@
  * transaction there will be no other data carrying records between a row's
  * toast chunks and the row data itself. See ReorderBufferToast* for
  * details.
+ *
+ * ReorderBuffer uses two special memory context types - SlabContext for
+ * allocations of fixed-length structures (changes and transactions), and
+ * GenerationContext for the variable-length transaction data (allocated
+ * and freed in groups with similar lifespan).
+ *
  * -------------------------------------------------------------------------
  */
 #include "postgres.h"
@@ -150,15 +156,6 @@ typedef struct ReorderBufferDiskChange
  */
 static const Size max_changes_in_memory = 4096;
 
-/*
- * We use a very simple form of a slab allocator for frequently allocated
- * objects, simply keeping a fixed number in a linked list when unused,
- * instead pfree()ing them. Without that in many workloads aset.c becomes a
- * major bottleneck, especially when spilling to disk while decoding batch
- * workloads.
- */
-static const Size max_cached_tuplebufs = 4096 * 2;	/* ~8MB */
-
 /* ---------------------------------------
  * primary reorderbuffer support routines
  * ---------------------------------------
@@ -248,6 +245,10 @@ ReorderBufferAllocate(void)
 										SLAB_DEFAULT_BLOCK_SIZE,
 										sizeof(ReorderBufferTXN));
 
+	buffer->tup_context = GenerationContextCreate(new_ctx,
+												  "Tuples",
+												  SLAB_LARGE_BLOCK_SIZE);
+
 	hash_ctl.keysize = sizeof(TransactionId);
 	hash_ctl.entrysize = sizeof(ReorderBufferTXNByIdEnt);
 	hash_ctl.hcxt = buffer->context;
@@ -258,15 +259,12 @@ ReorderBufferAllocate(void)
 	buffer->by_txn_last_xid = InvalidTransactionId;
 	buffer->by_txn_last_txn = NULL;
 
-	buffer->nr_cached_tuplebufs = 0;
-
 	buffer->outbuf = NULL;
 	buffer->outbufsize = 0;
 
 	buffer->current_restart_decoding_lsn = InvalidXLogRecPtr;
 
 	dlist_init(&buffer->toplevel_by_lsn);
-	slist_init(&buffer->cached_tuplebufs);
 
 	return buffer;
 }
@@ -419,42 +417,12 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)
 
 	alloc_len = tuple_len + SizeofHeapTupleHeader;
 
-	/*
-	 * Most tuples are below MaxHeapTupleSize, so we use a slab allocator for
-	 * those. Thus always allocate at least MaxHeapTupleSize. Note that tuples
-	 * generated for oldtuples can be bigger, as they don't have out-of-line
-	 * toast columns.
- */ - if (alloc_len < MaxHeapTupleSize) - alloc_len = MaxHeapTupleSize; - - - /* if small enough, check the slab cache */ - if (alloc_len <= MaxHeapTupleSize && rb->nr_cached_tuplebufs) - { - rb->nr_cached_tuplebufs--; - tuple = slist_container(ReorderBufferTupleBuf, node, - slist_pop_head_node(&rb->cached_tuplebufs)); - Assert(tuple->alloc_tuple_size == MaxHeapTupleSize); -#ifdef USE_ASSERT_CHECKING - memset(&tuple->tuple, 0xa9, sizeof(HeapTupleData)); - VALGRIND_MAKE_MEM_UNDEFINED(&tuple->tuple, sizeof(HeapTupleData)); -#endif - tuple->tuple.t_data = ReorderBufferTupleBufData(tuple); -#ifdef USE_ASSERT_CHECKING - memset(tuple->tuple.t_data, 0xa8, tuple->alloc_tuple_size); - VALGRIND_MAKE_MEM_UNDEFINED(tuple->tuple.t_data, tuple->alloc_tuple_size); -#endif - } - else - { - tuple = (ReorderBufferTupleBuf *) - MemoryContextAlloc(rb->context, - sizeof(ReorderBufferTupleBuf) + - MAXIMUM_ALIGNOF + alloc_len); - tuple->alloc_tuple_size = alloc_len; - tuple->tuple.t_data = ReorderBufferTupleBufData(tuple); - } + tuple = (ReorderBufferTupleBuf *) + MemoryContextAlloc(rb->tup_context, + sizeof(ReorderBufferTupleBuf) + + MAXIMUM_ALIGNOF + alloc_len); + tuple->alloc_tuple_size = alloc_len; + tuple->tuple.t_data = ReorderBufferTupleBufData(tuple); return tuple; } @@ -468,21 +436,7 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len) void ReorderBufferReturnTupleBuf(ReorderBuffer *rb, ReorderBufferTupleBuf *tuple) { - /* check whether to put into the slab cache, oversized tuples never are */ - if (tuple->alloc_tuple_size == MaxHeapTupleSize && - rb->nr_cached_tuplebufs < max_cached_tuplebufs) - { - rb->nr_cached_tuplebufs++; - slist_push_head(&rb->cached_tuplebufs, &tuple->node); - VALGRIND_MAKE_MEM_UNDEFINED(tuple->tuple.t_data, tuple->alloc_tuple_size); - VALGRIND_MAKE_MEM_UNDEFINED(tuple, sizeof(ReorderBufferTupleBuf)); - VALGRIND_MAKE_MEM_DEFINED(&tuple->node, sizeof(tuple->node)); - VALGRIND_MAKE_MEM_DEFINED(&tuple->alloc_tuple_size, sizeof(tuple->alloc_tuple_size)); - } - else - { - pfree(tuple); - } + pfree(tuple); } /* diff --git a/src/backend/utils/mmgr/Makefile b/src/backend/utils/mmgr/Makefile index cd0e803253..f644c40c46 100644 --- a/src/backend/utils/mmgr/Makefile +++ b/src/backend/utils/mmgr/Makefile @@ -12,6 +12,6 @@ subdir = src/backend/utils/mmgr top_builddir = ../../../.. include $(top_builddir)/src/Makefile.global -OBJS = aset.o dsa.o freepage.o mcxt.o memdebug.o portalmem.o slab.o +OBJS = aset.o dsa.o freepage.o generation.o mcxt.o memdebug.o portalmem.o slab.o include $(top_srcdir)/src/backend/common.mk diff --git a/src/backend/utils/mmgr/README b/src/backend/utils/mmgr/README index 0ab81bd80f..296fa198dc 100644 --- a/src/backend/utils/mmgr/README +++ b/src/backend/utils/mmgr/README @@ -431,3 +431,26 @@ will not allocate very much space per tuple cycle. To make this usage pattern cheap, the first block allocated in a context is not given back to malloc() during reset, but just cleared. This avoids malloc thrashing. + + +Alternative Memory Context Implementations +------------------------------------------ + +aset.c is our default general-purpose implementation, working fine +in most situations. We also have two implementations optimized for +special use cases, providing either better performance or lower memory +usage compared to aset.c (or both). + +* slab.c (SlabContext) is designed for allocations of fixed-length + chunks, and does not allow allocations of chunks with different size. 
+
+* generation.c (GenerationContext) is designed for cases when chunks
+  are allocated in groups with similar lifespan (generations), or
+  roughly in FIFO order.
+
+Both memory contexts aim to free memory back to the operating system
+(unlike aset.c, which keeps the freed chunks in a freelist, and only
+returns the memory when reset/deleted).
+
+These memory contexts were initially developed for ReorderBuffer, but
+may be useful elsewhere as long as the allocation patterns match.
diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c
new file mode 100644
index 0000000000..11a6a37a37
--- /dev/null
+++ b/src/backend/utils/mmgr/generation.c
@@ -0,0 +1,768 @@
+/*-------------------------------------------------------------------------
+ *
+ * generation.c
+ *	  Generational allocator definitions.
+ *
+ * Generation is a custom MemoryContext implementation designed for cases of
+ * chunks with similar lifespan.
+ *
+ * Portions Copyright (c) 2017, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/backend/utils/mmgr/generation.c
+ *
+ *
+ * This memory context is based on the assumption that the chunks are freed
+ * roughly in the same order as they were allocated (FIFO), or in groups with
+ * similar lifespan (generations - hence the name of the context). This is
+ * typical for various queue-like use cases, i.e. when tuples are constructed,
+ * processed and then thrown away.
+ *
+ * The memory context uses a very simple approach to free space management.
+ * Instead of a complex global freelist, each block tracks a number
+ * of allocated and freed chunks. Freed chunks are not reused, and once all
+ * chunks on a block are freed, the whole block is thrown away. When the
+ * chunks allocated on the same block have similar lifespan, this works
+ * very well and is very cheap.
+ *
+ * The current implementation only uses a fixed block size - maybe it should
+ * adapt a min/max block size range, and grow the blocks automatically.
+ * It already uses dedicated blocks for oversized chunks.
+ *
+ * XXX It might be possible to improve this by keeping a small freelist for
+ * only a small number of recent blocks, but it's not clear it's worth the
+ * additional complexity.
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "utils/memdebug.h"
+#include "utils/memutils.h"
+#include "lib/ilist.h"
+
+
+#define Generation_BLOCKHDRSZ	MAXALIGN(sizeof(GenerationBlock))
+#define Generation_CHUNKHDRSZ	sizeof(GenerationChunk)
+
+/* Portion of Generation_CHUNKHDRSZ examined outside generation.c. */
+#define Generation_CHUNK_PUBLIC	\
+	(offsetof(GenerationChunk, size) + sizeof(Size))
+
+/* Portion of Generation_CHUNKHDRSZ excluding trailing padding. */
+#ifdef MEMORY_CONTEXT_CHECKING
+#define Generation_CHUNK_USED	\
+	(offsetof(GenerationChunk, requested_size) + sizeof(Size))
+#else
+#define Generation_CHUNK_USED	\
+	(offsetof(GenerationChunk, size) + sizeof(Size))
+#endif
+
+typedef struct GenerationBlock GenerationBlock; /* forward reference */
+typedef struct GenerationChunk GenerationChunk;
+
+typedef void *GenerationPointer;
+
+/*
+ * GenerationContext is a simple memory context not reusing allocated chunks,
+ * and freeing blocks once all chunks are freed.
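+ * "block" points to the block we are currently allocating from; blocks that
+ * fill up stay on the "blocks" list until their last chunk is pfree'd, at
+ * which point the whole block is returned to malloc().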
+ */
+typedef struct GenerationContext
+{
+	MemoryContextData header;	/* Standard memory-context fields */
+
+	/* Generational context parameters */
+	Size		blockSize;		/* block size */
+
+	GenerationBlock *block;		/* current (most recently allocated) block */
+	dlist_head	blocks;			/* list of blocks */
+
+} GenerationContext;
+
+/*
+ * GenerationBlock
+ *		A GenerationBlock is the unit of memory that is obtained by generation.c
+ *		from malloc(). It contains one or more GenerationChunks, which are
+ *		the units requested by palloc() and freed by pfree(). GenerationChunks
+ *		cannot be returned to malloc() individually, instead pfree()
+ *		updates a free counter on a block and when all chunks on a block
+ *		are freed the whole block is returned to malloc().
+ *
+ *		GenerationBlock is the header data for a block --- the usable space
+ *		within the block begins at the next alignment boundary.
+ */
+typedef struct GenerationBlock
+{
+	dlist_node	node;			/* doubly-linked list */
+	int			nchunks;		/* number of chunks in the block */
+	int			nfree;			/* number of free chunks */
+	char	   *freeptr;		/* start of free space in this block */
+	char	   *endptr;			/* end of space in this block */
+} GenerationBlock;
+
+/*
+ * GenerationChunk
+ *		The prefix of each piece of memory in a GenerationBlock
+ */
+typedef struct GenerationChunk
+{
+	/* block owning this chunk */
+	void	   *block;
+
+	/* size is always the size of the usable space in the chunk */
+	Size		size;
+#ifdef MEMORY_CONTEXT_CHECKING
+	/* when debugging memory usage, also store actual requested size */
+	/* this is zero in a free chunk */
+	Size		requested_size;
+#endif							/* MEMORY_CONTEXT_CHECKING */
+
+	GenerationContext *context; /* owning context */
+	/* there must not be any padding to reach a MAXALIGN boundary here! */
+} GenerationChunk;
+
+
+/*
+ * GenerationIsValid
+ *		True iff set is valid allocation set.
+ */
+#define GenerationIsValid(set) PointerIsValid(set)
+
+#define GenerationPointerGetChunk(ptr) \
+	((GenerationChunk *)(((char *)(ptr)) - Generation_CHUNKHDRSZ))
+#define GenerationChunkGetPointer(chk) \
+	((GenerationPointer *)(((char *)(chk)) + Generation_CHUNKHDRSZ))
+
+/*
+ * These functions implement the MemoryContext API for Generation contexts.
+ */
+static void *GenerationAlloc(MemoryContext context, Size size);
+static void GenerationFree(MemoryContext context, void *pointer);
+static void *GenerationRealloc(MemoryContext context, void *pointer, Size size);
+static void GenerationInit(MemoryContext context);
+static void GenerationReset(MemoryContext context);
+static void GenerationDelete(MemoryContext context);
+static Size GenerationGetChunkSpace(MemoryContext context, void *pointer);
+static bool GenerationIsEmpty(MemoryContext context);
+static void GenerationStats(MemoryContext context, int level, bool print,
+				MemoryContextCounters *totals);
+
+#ifdef MEMORY_CONTEXT_CHECKING
+static void GenerationCheck(MemoryContext context);
+#endif
+
+/*
+ * This is the virtual function table for Generation contexts.
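+ * These are never called directly; they are reached through the generic
+ * MemoryContext routines in mcxt.c.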
+ */ +static MemoryContextMethods GenerationMethods = { + GenerationAlloc, + GenerationFree, + GenerationRealloc, + GenerationInit, + GenerationReset, + GenerationDelete, + GenerationGetChunkSpace, + GenerationIsEmpty, + GenerationStats +#ifdef MEMORY_CONTEXT_CHECKING + ,GenerationCheck +#endif +}; + +/* ---------- + * Debug macros + * ---------- + */ +#ifdef HAVE_ALLOCINFO +#define GenerationFreeInfo(_cxt, _chunk) \ + fprintf(stderr, "GenerationFree: %s: %p, %lu\n", \ + (_cxt)->name, (_chunk), (_chunk)->size) +#define GenerationAllocInfo(_cxt, _chunk) \ + fprintf(stderr, "GenerationAlloc: %s: %p, %lu\n", \ + (_cxt)->name, (_chunk), (_chunk)->size) +#else +#define GenerationFreeInfo(_cxt, _chunk) +#define GenerationAllocInfo(_cxt, _chunk) +#endif + + +/* + * Public routines + */ + + +/* + * GenerationContextCreate + * Create a new Generation context. + */ +MemoryContext +GenerationContextCreate(MemoryContext parent, + const char *name, + Size blockSize) +{ + GenerationContext *set; + + StaticAssertStmt(offsetof(GenerationChunk, context) + sizeof(MemoryContext) == + MAXALIGN(sizeof(GenerationChunk)), + "padding calculation in GenerationChunk is wrong"); + + /* + * First, validate allocation parameters. (If we're going to throw an + * error, we should do so before the context is created, not after.) We + * somewhat arbitrarily enforce a minimum 1K block size, mostly because + * that's what AllocSet does. + */ + if (blockSize != MAXALIGN(blockSize) || + blockSize < 1024 || + !AllocHugeSizeIsValid(blockSize)) + elog(ERROR, "invalid blockSize for memory context: %zu", + blockSize); + + /* Do the type-independent part of context creation */ + set = (GenerationContext *) MemoryContextCreate(T_GenerationContext, + sizeof(GenerationContext), + &GenerationMethods, + parent, + name); + + set->blockSize = blockSize; + set->block = NULL; + + return (MemoryContext) set; +} + +/* + * GenerationInit + * Context-type-specific initialization routine. + */ +static void +GenerationInit(MemoryContext context) +{ + GenerationContext *set = (GenerationContext *) context; + + dlist_init(&set->blocks); +} + +/* + * GenerationReset + * Frees all memory which is allocated in the given set. + * + * The code simply frees all the blocks in the context - we don't keep any + * keeper blocks or anything like that. + */ +static void +GenerationReset(MemoryContext context) +{ + GenerationContext *set = (GenerationContext *) context; + dlist_mutable_iter miter; + + AssertArg(GenerationIsValid(set)); + +#ifdef MEMORY_CONTEXT_CHECKING + /* Check for corruption and leaks before freeing */ + GenerationCheck(context); +#endif + + dlist_foreach_modify(miter, &set->blocks) + { + GenerationBlock *block = dlist_container(GenerationBlock, node, miter.cur); + + dlist_delete(miter.cur); + + /* Normal case, release the block */ +#ifdef CLOBBER_FREED_MEMORY + wipe_mem(block, set->blockSize); +#endif + + free(block); + } + + set->block = NULL; + + Assert(dlist_is_empty(&set->blocks)); +} + +/* + * GenerationDelete + * Frees all memory which is allocated in the given set, in preparation + * for deletion of the set. We simply call GenerationReset() which does all the + * dirty work. + */ +static void +GenerationDelete(MemoryContext context) +{ + /* just reset (although not really necessary) */ + GenerationReset(context); +} + +/* + * GenerationAlloc + * Returns pointer to allocated memory of given size or NULL if + * request could not be completed; memory is added to the set. 
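+ *
+ *		Sizing example (illustrative numbers only): with blockSize = 8192,
+ *		any request whose MAXALIGN'ed size exceeds 8192/8 = 1024 bytes is
+ *		treated as over-sized and gets its own dedicated block; smaller
+ *		requests are carved sequentially off the current block.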
+ *
+ * No request may exceed:
+ *		MAXALIGN_DOWN(SIZE_MAX) - Generation_BLOCKHDRSZ - Generation_CHUNKHDRSZ
+ * All callers use a much-lower limit.
+ */
+static void *
+GenerationAlloc(MemoryContext context, Size size)
+{
+	GenerationContext *set = (GenerationContext *) context;
+	GenerationBlock *block;
+	GenerationChunk *chunk;
+
+	Size		chunk_size = MAXALIGN(size);
+
+	/* is it an over-sized chunk? if yes, allocate special block */
+	if (chunk_size > set->blockSize / 8)
+	{
+		Size		blksize = chunk_size + Generation_BLOCKHDRSZ + Generation_CHUNKHDRSZ;
+
+		block = (GenerationBlock *) malloc(blksize);
+		if (block == NULL)
+			return NULL;
+
+		/* block with a single (used) chunk */
+		block->nchunks = 1;
+		block->nfree = 0;
+
+		/* the block is completely full */
+		block->freeptr = block->endptr = ((char *) block) + blksize;
+
+		chunk = (GenerationChunk *) (((char *) block) + Generation_BLOCKHDRSZ);
+		chunk->block = block;
+		chunk->context = set;
+		chunk->size = chunk_size;
+
+#ifdef MEMORY_CONTEXT_CHECKING
+		/* Valgrind: Will be made NOACCESS below. */
+		chunk->requested_size = size;
+		/* set mark to catch clobber of "unused" space */
+		if (size < chunk_size)
+			set_sentinel(GenerationChunkGetPointer(chunk), size);
+#endif
+#ifdef RANDOMIZE_ALLOCATED_MEMORY
+		/* fill the allocated space with junk */
+		randomize_mem((char *) GenerationChunkGetPointer(chunk), size);
+#endif
+
+		/* add the block to the list of allocated blocks */
+		dlist_push_head(&set->blocks, &block->node);
+
+		GenerationAllocInfo(set, chunk);
+
+		/*
+		 * Chunk header public fields remain DEFINED.  The requested
+		 * allocation itself can be NOACCESS or UNDEFINED; our caller will
+		 * soon make it UNDEFINED.  Make extra space at the end of the chunk,
+		 * if any, NOACCESS.
+		 */
+		VALGRIND_MAKE_MEM_NOACCESS((char *) chunk + Generation_CHUNK_PUBLIC,
+						 chunk_size + Generation_CHUNKHDRSZ - Generation_CHUNK_PUBLIC);
+
+		return GenerationChunkGetPointer(chunk);
+	}
+
+	/*
+	 * Not an over-sized chunk. Is there enough space on the current block? If
+	 * not, allocate a new "regular" block.
+	 */
+	block = set->block;
+
+	if ((block == NULL) ||
+		(block->endptr - block->freeptr) < Generation_CHUNKHDRSZ + chunk_size)
+	{
+		Size		blksize = set->blockSize;
+
+		block = (GenerationBlock *) malloc(blksize);
+
+		if (block == NULL)
+			return NULL;
+
+		block->nchunks = 0;
+		block->nfree = 0;
+
+		block->freeptr = ((char *) block) + Generation_BLOCKHDRSZ;
+		block->endptr = ((char *) block) + blksize;
+
+		/* Mark unallocated space NOACCESS. */
+		VALGRIND_MAKE_MEM_NOACCESS(block->freeptr,
+								   blksize - Generation_BLOCKHDRSZ);
+
+		/* add it to the doubly-linked list of blocks */
+		dlist_push_head(&set->blocks, &block->node);
+
+		/* and also use it as the current allocation block */
+		set->block = block;
+	}
+
+	/* we're supposed to have a block with enough free space now */
+	Assert(block != NULL);
+	Assert((block->endptr - block->freeptr) >= Generation_CHUNKHDRSZ + chunk_size);
+
+	chunk = (GenerationChunk *) block->freeptr;
+
+	block->nchunks += 1;
+	block->freeptr += (Generation_CHUNKHDRSZ + chunk_size);
+
+	chunk->block = block;
+
+	chunk->context = set;
+	chunk->size = chunk_size;
+
+#ifdef MEMORY_CONTEXT_CHECKING
+	/* Valgrind: Free list requested_size should be DEFINED.
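+	 * (Nit: this context has no free list; the wording follows aset.c.)
+	 * Below, when size < chunk->size, set_sentinel() plants a marker byte
+	 * just past the requested size, so GenerationFree, GenerationRealloc
+	 * and GenerationCheck can detect writes past the end of the request.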
+	 */
+	chunk->requested_size = size;
+	VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size,
+							   sizeof(chunk->requested_size));
+	/* set mark to catch clobber of "unused" space */
+	if (size < chunk->size)
+		set_sentinel(GenerationChunkGetPointer(chunk), size);
+#endif
+#ifdef RANDOMIZE_ALLOCATED_MEMORY
+	/* fill the allocated space with junk */
+	randomize_mem((char *) GenerationChunkGetPointer(chunk), size);
+#endif
+
+	GenerationAllocInfo(set, chunk);
+	return GenerationChunkGetPointer(chunk);
+}
+
+/*
+ * GenerationFree
+ *		Update number of chunks on the block, and if all chunks on the block
+ *		are freed then discard the block.
+ */
+static void
+GenerationFree(MemoryContext context, void *pointer)
+{
+	GenerationContext *set = (GenerationContext *) context;
+	GenerationChunk *chunk = GenerationPointerGetChunk(pointer);
+	GenerationBlock *block = chunk->block;
+
+#ifdef MEMORY_CONTEXT_CHECKING
+	VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size,
+							  sizeof(chunk->requested_size));
+	/* Test for someone scribbling on unused space in chunk */
+	if (chunk->requested_size < chunk->size)
+		if (!sentinel_ok(pointer, chunk->requested_size))
+			elog(WARNING, "detected write past chunk end in %s %p",
+				 ((MemoryContext) set)->name, chunk);
+#endif
+
+#ifdef CLOBBER_FREED_MEMORY
+	wipe_mem(pointer, chunk->size);
+#endif
+
+#ifdef MEMORY_CONTEXT_CHECKING
+	/* Reset requested_size to 0 in freed chunks */
+	chunk->requested_size = 0;
+#endif
+
+	block->nfree += 1;
+
+	Assert(block->nchunks > 0);
+	Assert(block->nfree <= block->nchunks);
+
+	/* If there are still allocated chunks on the block, we're done. */
+	if (block->nfree < block->nchunks)
+		return;
+
+	/*
+	 * The block is empty, so let's get rid of it. First remove it from the
+	 * list of blocks, then return it to malloc().
+	 */
+	dlist_delete(&block->node);
+
+	/* Also make sure the block is not marked as the current block. */
+	if (set->block == block)
+		set->block = NULL;
+
+	free(block);
+}
+
+/*
+ * GenerationRealloc
+ *		When handling repalloc, we simply allocate a new chunk, copy the data
+ *		and discard the old one. The only exception is when the new size fits
+ *		into the old chunk - in that case we just update chunk header.
+ */
+static void *
+GenerationRealloc(MemoryContext context, void *pointer, Size size)
+{
+	GenerationContext *set = (GenerationContext *) context;
+	GenerationChunk *chunk = GenerationPointerGetChunk(pointer);
+	GenerationPointer newPointer;
+	Size		oldsize = chunk->size;
+
+#ifdef MEMORY_CONTEXT_CHECKING
+	VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size,
+							  sizeof(chunk->requested_size));
+	/* Test for someone scribbling on unused space in chunk */
+	if (chunk->requested_size < oldsize)
+		if (!sentinel_ok(pointer, chunk->requested_size))
+			elog(WARNING, "detected write past chunk end in %s %p",
+				 ((MemoryContext) set)->name, chunk);
+#endif
+
+	/*
+	 * Maybe the allocated area already is >= the new size.  (In particular,
+	 * we always fall out here if the requested size is a decrease.)
+	 *
+	 * This memory context does not use power-of-2 chunk sizing and instead
+	 * carves the chunks to be as small as possible, so most repalloc() calls
+	 * will end up in the palloc/memcpy/pfree branch.
+	 *
+	 * XXX Perhaps we should annotate this condition with unlikely()?
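+	 *
+	 * (Worked example, assuming 8-byte MAXALIGN: a 100-byte request gets a
+	 * MAXALIGN(100) = 104-byte chunk, so a repalloc to any size <= 104 is
+	 * satisfied in place below, while growing it to, say, 200 bytes takes
+	 * the allocate/copy/free path.)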
+	 */
+	if (oldsize >= size)
+	{
+#ifdef MEMORY_CONTEXT_CHECKING
+		Size		oldrequest = chunk->requested_size;
+
+#ifdef RANDOMIZE_ALLOCATED_MEMORY
+		/* We can only fill the extra space if we know the prior request */
+		if (size > oldrequest)
+			randomize_mem((char *) pointer + oldrequest,
+						  size - oldrequest);
+#endif
+
+		chunk->requested_size = size;
+		VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size,
+								   sizeof(chunk->requested_size));
+
+		/*
+		 * If this is an increase, mark any newly-available part UNDEFINED.
+		 * Otherwise, mark the obsolete part NOACCESS.
+		 */
+		if (size > oldrequest)
+			VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + oldrequest,
+										size - oldrequest);
+		else
+			VALGRIND_MAKE_MEM_NOACCESS((char *) pointer + size,
+									   oldsize - size);
+
+		/* set mark to catch clobber of "unused" space */
+		if (size < oldsize)
+			set_sentinel(pointer, size);
+#else							/* !MEMORY_CONTEXT_CHECKING */
+
+		/*
+		 * We don't have the information to determine whether we're growing
+		 * the old request or shrinking it, so we conservatively mark the
+		 * entire new allocation DEFINED.
+		 */
+		VALGRIND_MAKE_MEM_NOACCESS(pointer, oldsize);
+		VALGRIND_MAKE_MEM_DEFINED(pointer, size);
+#endif
+
+		return pointer;
+	}
+
+	/* allocate new chunk */
+	newPointer = GenerationAlloc((MemoryContext) set, size);
+
+	/* leave immediately if request was not completed */
+	if (newPointer == NULL)
+		return NULL;
+
+	/*
+	 * GenerationAlloc() just made the region NOACCESS.  Change it to
+	 * UNDEFINED for the moment; memcpy() will then transfer definedness from
+	 * the old allocation to the new.  If we know the old allocation, copy
+	 * just that much.  Otherwise, make the entire old chunk defined to avoid
+	 * errors as we copy the currently-NOACCESS trailing bytes.
+	 */
+	VALGRIND_MAKE_MEM_UNDEFINED(newPointer, size);
+#ifdef MEMORY_CONTEXT_CHECKING
+	oldsize = chunk->requested_size;
+#else
+	VALGRIND_MAKE_MEM_DEFINED(pointer, oldsize);
+#endif
+
+	/* transfer existing data (certain to fit) */
+	memcpy(newPointer, pointer, oldsize);
+
+	/* free old chunk */
+	GenerationFree((MemoryContext) set, pointer);
+
+	return newPointer;
+}
+
+/*
+ * GenerationGetChunkSpace
+ *		Given a currently-allocated chunk, determine the total space
+ *		it occupies (including all memory-allocation overhead).
+ */
+static Size
+GenerationGetChunkSpace(MemoryContext context, void *pointer)
+{
+	GenerationChunk *chunk = GenerationPointerGetChunk(pointer);
+
+	return chunk->size + Generation_CHUNKHDRSZ;
+}
+
+/*
+ * GenerationIsEmpty
+ *		Is a Generation context empty of any allocated space?
+ */
+static bool
+GenerationIsEmpty(MemoryContext context)
+{
+	GenerationContext *set = (GenerationContext *) context;
+
+	return dlist_is_empty(&set->blocks);
+}
+
+/*
+ * GenerationStats
+ *		Compute stats about memory consumption of a Generation context.
+ *
+ * level: recursion level (0 at top level); used for print indentation.
+ * print: true to print stats to stderr.
+ * totals: if not NULL, add stats about this context into *totals.
+ *
+ * XXX freespace only accounts for empty space at the end of the block, not
+ * space of freed chunks (which is unknown).
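+ *
+ * A sample line from the print branch below (figures invented for
+ * illustration; "used" is total minus freespace):
+ *
+ *	Generation: Tuples: 32768 total in 4 blocks (90 chunks); 2048 free (10 chunks); 30720 used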
+ */ +static void +GenerationStats(MemoryContext context, int level, bool print, + MemoryContextCounters *totals) +{ + GenerationContext *set = (GenerationContext *) context; + + Size nblocks = 0; + Size nchunks = 0; + Size nfreechunks = 0; + Size totalspace = 0; + Size freespace = 0; + + dlist_iter iter; + + dlist_foreach(iter, &set->blocks) + { + GenerationBlock *block = dlist_container(GenerationBlock, node, iter.cur); + + nblocks++; + nchunks += block->nchunks; + nfreechunks += block->nfree; + totalspace += set->blockSize; + freespace += (block->endptr - block->freeptr); + } + + if (print) + { + int i; + + for (i = 0; i < level; i++) + fprintf(stderr, " "); + fprintf(stderr, + "Generation: %s: %zu total in %zd blocks (%zd chunks); %zu free (%zd chunks); %zu used\n", + ((MemoryContext)set)->name, totalspace, nblocks, nchunks, freespace, + nfreechunks, totalspace - freespace); + } + + if (totals) + { + totals->nblocks += nblocks; + totals->freechunks += nfreechunks; + totals->totalspace += totalspace; + totals->freespace += freespace; + } +} + + +#ifdef MEMORY_CONTEXT_CHECKING + +/* + * GenerationCheck + * Walk through chunks and check consistency of memory. + * + * NOTE: report errors as WARNING, *not* ERROR or FATAL. Otherwise you'll + * find yourself in an infinite loop when trouble occurs, because this + * routine will be entered again when elog cleanup tries to release memory! + */ +static void +GenerationCheck(MemoryContext context) +{ + GenerationContext *gen = (GenerationContext *) context; + char *name = context->name; + dlist_iter iter; + + /* walk all blocks in this context */ + dlist_foreach(iter, &gen->blocks) + { + int nfree, + nchunks; + char *ptr; + GenerationBlock *block = dlist_container(GenerationBlock, node, iter.cur); + + /* We can't free more chunks than allocated. */ + if (block->nfree <= block->nchunks) + elog(WARNING, "problem in Generation %s: number of free chunks %d in block %p exceeds %d allocated", + name, block->nfree, block, block->nchunks); + + /* Now walk through the chunks and count them. 
+		 */
+		nfree = 0;
+		nchunks = 0;
+		ptr = ((char *) block) + Generation_BLOCKHDRSZ;
+
+		while (ptr < block->freeptr)
+		{
+			GenerationChunk *chunk = (GenerationChunk *) ptr;
+
+			/* move to the next chunk */
+			ptr += (chunk->size + Generation_CHUNKHDRSZ);
+
+			/* chunks have both block and context pointers, so check both */
+			if (chunk->block != block)
+				elog(WARNING, "problem in Generation %s: bogus block link in block %p, chunk %p",
+					 name, block, chunk);
+
+			if (chunk->context != gen)
+				elog(WARNING, "problem in Generation %s: bogus context link in block %p, chunk %p",
+					 name, block, chunk);
+
+			nchunks += 1;
+
+			/* if requested_size==0, the chunk was freed */
+			if (chunk->requested_size > 0)
+			{
+				/* if the chunk was not freed, we can trigger valgrind checks */
+				VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size,
+										  sizeof(chunk->requested_size));
+
+				/* we're in a no-freelist branch */
+				VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size,
+										   sizeof(chunk->requested_size));
+
+				/* now make sure the chunk size is correct */
+				if (chunk->size != MAXALIGN(chunk->requested_size))
+					elog(WARNING, "problem in Generation %s: bogus chunk size in block %p, chunk %p",
+						 name, block, chunk);
+
+				/* there might be a sentinel byte (thanks to alignment) */
+				if (chunk->requested_size < chunk->size &&
+					!sentinel_ok(chunk, Generation_CHUNKHDRSZ + chunk->requested_size))
+					elog(WARNING, "problem in Generation %s: detected write past chunk end in block %p, chunk %p",
+						 name, block, chunk);
+			}
+			else
+				nfree += 1;
+		}
+
+		/*
+		 * Make sure we got the expected number of allocated and free chunks
+		 * (as tracked in the block header).
+		 */
+		if (nchunks != block->nchunks)
+			elog(WARNING, "problem in Generation %s: number of allocated chunks %d in block %p does not match header %d",
+				 name, nchunks, block, block->nchunks);
+
+		if (nfree != block->nfree)
+			elog(WARNING, "problem in Generation %s: number of free chunks %d in block %p does not match header %d",
+				 name, nfree, block, block->nfree);
+	}
+}
+
+#endif							/* MEMORY_CONTEXT_CHECKING */
diff --git a/src/include/nodes/memnodes.h b/src/include/nodes/memnodes.h
index 7a0c6763df..e22d9fb178 100644
--- a/src/include/nodes/memnodes.h
+++ b/src/include/nodes/memnodes.h
@@ -96,6 +96,8 @@ typedef struct MemoryContextData
  */
 #define MemoryContextIsValid(context) \
 	((context) != NULL && \
-	 (IsA((context), AllocSetContext) || IsA((context), SlabContext)))
+	 (IsA((context), AllocSetContext) || \
+	  IsA((context), SlabContext) || \
+	  IsA((context), GenerationContext)))
 
 #endif							/* MEMNODES_H */
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index ffeeb4919b..03dc5307e8 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -274,6 +274,7 @@ typedef enum NodeTag
 	T_MemoryContext,
 	T_AllocSetContext,
 	T_SlabContext,
+	T_GenerationContext,
 
 	/*
 	 * TAGS FOR VALUE NODES (value.h)
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 86effe106b..b18ce5a9df 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -344,20 +344,7 @@ struct ReorderBuffer
 	 */
 	MemoryContext change_context;
 	MemoryContext txn_context;
-
-	/*
-	 * Data structure slab cache.
-	 *
-	 * We allocate/deallocate some structures very frequently, to avoid bigger
-	 * overhead we cache some unused ones here.
-	 *
-	 * The maximum number of cached entries is controlled by const variables
-	 * on top of reorderbuffer.c
-	 */
-
-	/* cached ReorderBufferTupleBufs */
-	slist_head	cached_tuplebufs;
-	Size		nr_cached_tuplebufs;
+	MemoryContext tup_context;
 
 	XLogRecPtr	current_restart_decoding_lsn;
 
diff --git a/src/include/utils/memutils.h b/src/include/utils/memutils.h
index 869c59dc85..ff8e5d7d79 100644
--- a/src/include/utils/memutils.h
+++ b/src/include/utils/memutils.h
@@ -155,6 +155,11 @@ extern MemoryContext SlabContextCreate(MemoryContext parent,
 					Size blockSize,
 					Size chunkSize);
 
+/* generation.c */
+extern MemoryContext GenerationContextCreate(MemoryContext parent,
+						const char *name,
+						Size blockSize);
+
 /*
  * Recommended default alloc parameters, suitable for "ordinary" contexts
  * that might hold quite a lot of data.

From b99661c2ff4eef923abd96d2a733556f5f64c2d6 Mon Sep 17 00:00:00 2001
From: Simon Riggs
Date: Thu, 23 Nov 2017 06:55:18 +1100
Subject: [PATCH 0589/1087] Tweak code for older compilers

Attempt to quiesce build farm

Author: Tomas Vondra
---
 src/backend/utils/mmgr/generation.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c
index 11a6a37a37..cdff20ff81 100644
--- a/src/backend/utils/mmgr/generation.c
+++ b/src/backend/utils/mmgr/generation.c
@@ -92,20 +92,20 @@ typedef struct GenerationContext
  * GenerationBlock is the header data for a block --- the usable space
  * within the block begins at the next alignment boundary.
  */
-typedef struct GenerationBlock
+struct GenerationBlock
 {
 	dlist_node	node;			/* doubly-linked list */
 	int			nchunks;		/* number of chunks in the block */
 	int			nfree;			/* number of free chunks */
 	char	   *freeptr;		/* start of free space in this block */
 	char	   *endptr;			/* end of space in this block */
-} GenerationBlock;
+};
 
 /*
  * GenerationChunk
  *		The prefix of each piece of memory in a GenerationBlock
  */
-typedef struct GenerationChunk
+struct GenerationChunk
 {
 	/* block owning this chunk */
 	void	   *block;
@@ -120,7 +120,7 @@ typedef struct GenerationChunk
 
 	GenerationContext *context; /* owning context */
 	/* there must not be any padding to reach a MAXALIGN boundary here! */
-} GenerationChunk;
+};
 
 
 /*

From 2393194c0dded22972f03dd53dcbf864dab8ebc6 Mon Sep 17 00:00:00 2001
From: Magnus Hagander
Date: Wed, 22 Nov 2017 21:51:55 +0100
Subject: [PATCH 0590/1087] Fix typo

Daniel Gustafsson
---
 src/bin/pg_upgrade/relfilenode.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/bin/pg_upgrade/relfilenode.c b/src/bin/pg_upgrade/relfilenode.c
index 06d3ed04a7..8c3f8ac332 100644
--- a/src/bin/pg_upgrade/relfilenode.c
+++ b/src/bin/pg_upgrade/relfilenode.c
@@ -194,7 +194,7 @@ transfer_relfile(FileNameMap *map, const char *type_suffix, bool vm_must_add_fro
 	/*
 	 * Now copy/link any related segments as well. Remember, PG breaks large
 	 * files into 1GB segments, the first segment has no extension, subsequent
-	 * segments are named relfilenode.1, relfilenode.2, relfilenode.3. copied.
+	 * segments are named relfilenode.1, relfilenode.2, relfilenode.3.
 	 */
 	for (segno = 0;; segno++)
 	{

From de0aca6a82af9c04cb4634d091ab065763fd4d5a Mon Sep 17 00:00:00 2001
From: Noah Misch
Date: Wed, 22 Nov 2017 20:18:15 -0800
Subject: [PATCH 0591/1087] Build src/test/isolation during "make" and "make
 install".

This hack closes a race condition in "make -j check-world" and "make -j
installcheck-world".  Back-patch to v10, before which these parallel
invocations had worse problems.
Discussion: https://postgr.es/m/20171106080752.GA1298146@rfd.leadboat.com
---
 src/Makefile                | 1 +
 src/test/isolation/Makefile | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/src/Makefile b/src/Makefile
index 380da92c75..febbcede7d 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -28,6 +28,7 @@ SUBDIRS = \
 	pl \
 	makefiles \
 	test/regress \
+	test/isolation \
 	test/perl
 
 # There are too many interdependencies between the subdirectories, so
diff --git a/src/test/isolation/Makefile b/src/test/isolation/Makefile
index 8eb4969e9b..efbdc40e1d 100644
--- a/src/test/isolation/Makefile
+++ b/src/test/isolation/Makefile
@@ -15,6 +15,13 @@ OBJS = specparse.o isolationtester.o $(WIN32RES)
 
 all: isolationtester$(X) pg_isolation_regress$(X)
 
+# Though we don't install these binaries, build them during installation
+# (including temp-install).  Otherwise, "make -j check-world" and "make -j
+# installcheck-world" would spawn multiple, concurrent builds in this
+# directory.  Later builds would overwrite files while earlier builds are
+# reading them, causing occasional failures.
+install: | all
+
 submake-regress:
 	$(MAKE) -C $(top_builddir)/src/test/regress pg_regress.o

From 2f8d6369e60a244f28e0c93b8a02e73758322915 Mon Sep 17 00:00:00 2001
From: Fujii Masao
Date: Thu, 23 Nov 2017 16:46:42 +0900
Subject: [PATCH 0592/1087] doc: mention wal_receiver_status_interval as GUC
 affecting logical rep worker.

The wal_receiver_timeout, wal_receiver_status_interval and
wal_retrieve_retry_interval configuration parameters all affect the logical
replication worker, but previously wal_receiver_status_interval was the only
one of them not mentioned as such in the doc.

Back-patch to v10 where logical rep was added.

Author: Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/CAD21AoBUnuH_UsnKXyPCsCR7EAMamW0sSb6a7=WgiQRpnMAp5w@mail.gmail.com
---
 doc/src/sgml/config.sgml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fc1752fb3f..7059dd4e5f 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3410,7 +3410,8 @@ ANY num_sync ( 
-    The functions shown in  provide
+    The functions shown in  provide
     write access to files on the machine hosting the server. (See also the
-    functions in , which
+    functions in , which
     provide read-only access.)  Only files within the database
     cluster directory can be accessed, but either a relative or absolute path
     is allowable.
@@ -107,18 +107,18 @@
     pg_logdir_ls returns the start timestamps and path
-    names of all the log files in the
-    directory. The  parameter must have its
+    names of all the log files in the
+    directory. The  parameter must have its
     default setting (postgresql-%Y-%m-%d_%H%M%S.log) to use this
     function.
 
     The functions shown
-    in  are deprecated
+    in  are deprecated
     and should not be used in new applications; instead use those shown
-    in 
-    and . These functions are
+    in 
+    and . These functions are
     provided in adminpack only for compatibility with
     old versions of pgAdmin.
diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml
index bf87df4dcb..ae5f3fac75 100644
--- a/doc/src/sgml/advanced.sgml
+++ b/doc/src/sgml/advanced.sgml
@@ -18,12 +18,12 @@
    This chapter will on occasion refer to examples found in <xref
-   linkend="tutorial-sql"> to change or improve them, so it will be
+   linkend="tutorial-sql"/> to change or improve them, so it will be
    useful to have read that chapter. Some examples from this chapter can
    also be found in advanced.sql in the tutorial directory.
This file also contains some sample data to load, which is not - repeated here. (Refer to for + repeated here. (Refer to for how to use the file.) @@ -37,7 +37,7 @@ - Refer back to the queries in . + Refer back to the queries in . Suppose the combined listing of weather records and city location is of particular interest to your application, but you do not want to type the query each time you need it. You can create a @@ -82,7 +82,7 @@ SELECT * FROM myview; Recall the weather and cities tables from . Consider the following problem: You + linkend="tutorial-sql"/>. Consider the following problem: You want to make sure that no one can insert rows in the weather table that do not have a matching entry in the cities table. This is called @@ -129,7 +129,7 @@ DETAIL: Key (city)=(Berkeley) is not present in table "cities". The behavior of foreign keys can be finely tuned to your application. We will not go beyond this simple example in this - tutorial, but just refer you to + tutorial, but just refer you to for more information. Making correct use of foreign keys will definitely improve the quality of your database applications, so you are strongly encouraged to learn about them. @@ -447,7 +447,7 @@ FROM empsalary; There are options to define the window frame in other ways, but this tutorial does not cover them. See - for details. + for details. Here is an example using sum: @@ -554,10 +554,10 @@ SELECT sum(salary) OVER w, avg(salary) OVER w More details about window functions can be found in - , - , - , and the - reference page. + , + , + , and the + reference page. @@ -692,7 +692,7 @@ SELECT name, altitude Although inheritance is frequently useful, it has not been integrated with unique constraints or foreign keys, which limits its usefulness. - See for more detail. + See for more detail. diff --git a/doc/src/sgml/amcheck.sgml b/doc/src/sgml/amcheck.sgml index 0dd68f0ba1..852e260c09 100644 --- a/doc/src/sgml/amcheck.sgml +++ b/doc/src/sgml/amcheck.sgml @@ -31,7 +31,7 @@ index scans themselves, which may be user-defined operator class code. For example, B-Tree index verification relies on comparisons made with one or more B-Tree support function 1 routines. See for details of operator class support + linkend="xindex-support"/> for details of operator class support functions. @@ -192,7 +192,7 @@ ORDER BY c.relpages DESC LIMIT 10; index that is ordered using an affected collation, simply because indexed values might happen to have the same absolute ordering regardless of the behavioral inconsistency. See - and for + and for further details about how PostgreSQL uses operating system locales and collations. @@ -210,7 +210,7 @@ ORDER BY c.relpages DESC LIMIT 10; logical inconsistency to be introduced. One obvious testing strategy is to call amcheck functions continuously when running the standard regression tests. See for details on running the tests. + linkend="regress-run"/> for details on running the tests. @@ -263,7 +263,7 @@ ORDER BY c.relpages DESC LIMIT 10; There is no general method of repairing problems that amcheck detects. An explanation for the root cause of an invariant violation should be sought. may play a useful role in diagnosing + linkend="pageinspect"/> may play a useful role in diagnosing corruption that amcheck detects. A REINDEX may not be effective in repairing corruption. 
diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml index d49901c690..53f8049df3 100644 --- a/doc/src/sgml/arch-dev.sgml +++ b/doc/src/sgml/arch-dev.sgml @@ -7,7 +7,7 @@ Author This chapter originated as part of - , Stefan Simkovics' + , Stefan Simkovics' Master's Thesis prepared at Vienna University of Technology under the direction of O.Univ.Prof.Dr. Georg Gottlob and Univ.Ass. Mag. Katrin Seyr. @@ -136,7 +136,7 @@ The client process can be any program that understands the PostgreSQL protocol described in - . Many clients are based on the + . Many clients are based on the C-language library libpq, but several independent implementations of the protocol exist, such as the Java JDBC driver. @@ -317,7 +317,7 @@ The query rewriter is discussed in some detail in - , so there is no need to cover it here. + , so there is no need to cover it here. We will only point out that both the input and the output of the rewriter are query trees, that is, there is no change in the representation or level of semantic detail in the trees. Rewriting @@ -347,8 +347,8 @@ involving large numbers of join operations. In order to determine a reasonable (not necessarily optimal) query plan in a reasonable amount of time, PostgreSQL uses a Genetic - Query Optimizer (see ) when the number of joins - exceeds a threshold (see ). + Query Optimizer (see ) when the number of joins + exceeds a threshold (see ). @@ -438,7 +438,7 @@ - If the query uses fewer than + If the query uses fewer than relations, a near-exhaustive search is conducted to find the best join sequence. The planner preferentially considers joins between any two relations for which there exist a corresponding join clause in the @@ -454,7 +454,7 @@ When geqo_threshold is exceeded, the join sequences considered are determined by heuristics, as described - in . Otherwise the process is the same. + in . Otherwise the process is the same. diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 9187f6e02e..f4d4a610ef 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -128,7 +128,7 @@ CREATE TABLE tictactoe ( (These kinds of array constants are actually only a special case of the generic type constants discussed in . The constant is initially + linkend="sql-syntax-constants-generic"/>. The constant is initially treated as a string and passed to the array input conversion routine. An explicit type specification might be necessary.) @@ -192,7 +192,7 @@ INSERT INTO sal_emp expressions; for instance, string literals are single quoted, instead of double quoted as they would be in an array literal. The ARRAY constructor syntax is discussed in more detail in - . + . @@ -616,7 +616,7 @@ SELECT * FROM sal_emp WHERE pay_by_quarter[1] = 10000 OR However, this quickly becomes tedious for large arrays, and is not helpful if the size of the array is unknown. An alternative method is - described in . The above + described in . The above query could be replaced by: @@ -644,7 +644,7 @@ SELECT * FROM WHERE pay_by_quarter[s] = 10000; - This function is described in . + This function is described in . @@ -657,8 +657,8 @@ SELECT * FROM sal_emp WHERE pay_by_quarter && ARRAY[10000]; This and other array operators are further described in - . It can be accelerated by an appropriate - index, as described in . + . It can be accelerated by an appropriate + index, as described in . @@ -755,7 +755,7 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 or backslashes disables this and allows the literal string value NULL to be entered. 
Also, for backward compatibility with pre-8.2 versions of PostgreSQL, the configuration parameter can be turned + linkend="guc-array-nulls"/> configuration parameter can be turned off to suppress recognition of NULL as a NULL. @@ -797,7 +797,7 @@ INSERT ... VALUES (E'{"\\\\","\\""}'); with a data type whose input routine also treated backslashes specially, bytea for example, we might need as many as eight backslashes in the command to get one backslash into the stored array element.) - Dollar quoting (see ) can be + Dollar quoting (see ) can be used to avoid the need to double backslashes. @@ -805,7 +805,7 @@ INSERT ... VALUES (E'{"\\\\","\\""}'); The ARRAY constructor syntax (see - ) is often easier to work + ) is often easier to work with than the array-literal syntax when writing array values in SQL commands. In ARRAY, individual element values are written the same way they would be written when not members of an array. diff --git a/doc/src/sgml/auth-delay.sgml b/doc/src/sgml/auth-delay.sgml index 9221d2dfb6..bd3ef7128d 100644 --- a/doc/src/sgml/auth-delay.sgml +++ b/doc/src/sgml/auth-delay.sgml @@ -18,7 +18,7 @@ In order to function, this module must be loaded via - in postgresql.conf. + in postgresql.conf. diff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml index 240098c82f..08b67f2600 100644 --- a/doc/src/sgml/auto-explain.sgml +++ b/doc/src/sgml/auto-explain.sgml @@ -10,7 +10,7 @@ The auto_explain module provides a means for logging execution plans of slow statements automatically, without - having to run + having to run by hand. This is especially helpful for tracking down un-optimized queries in large applications. @@ -25,8 +25,8 @@ LOAD 'auto_explain'; (You must be superuser to do that.) More typical usage is to preload it into some or all sessions by including auto_explain in - or - in + or + in postgresql.conf. Then you can track unexpectedly slow queries no matter when they happen. Of course there is a price in overhead for that. diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml index 39bb25c8e2..9d8e69056f 100644 --- a/doc/src/sgml/backup.sgml +++ b/doc/src/sgml/backup.sgml @@ -32,7 +32,7 @@ commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program - for this purpose. The basic usage of this + for this purpose. The basic usage of this command is: pg_dump dbname > outfile @@ -79,7 +79,7 @@ pg_dump dbname > PGUSER. Remember that pg_dump connections are subject to the normal client authentication mechanisms (which are described in ). + linkend="client-authentication"/>). @@ -120,9 +120,9 @@ psql dbname < dbname). psql supports options similar to pg_dump for specifying the database server to connect to and the user name to use. See - the reference page for more information. + the reference page for more information. Non-text file dumps are restored using the utility. + linkend="app-pgrestore"/> utility. @@ -178,13 +178,13 @@ pg_dump -h host1 dbname | After restoring a backup, it is wise to run on each + linkend="sql-analyze"/> on each database so the query optimizer has useful statistics; - see - and for more information. + see + and for more information. For more advice on how to load large amounts of data into PostgreSQL efficiently, refer to . + linkend="populate"/>. @@ -196,7 +196,7 @@ pg_dump -h host1 dbname | and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). 
To support convenient dumping of the entire contents of a database - cluster, the program is provided. + cluster, the program is provided. pg_dumpall backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is: @@ -308,8 +308,8 @@ pg_dump -Fc dbname > dbname filename - See the and reference pages for details. + See the and reference pages for details. @@ -345,7 +345,7 @@ pg_dump -j num -F d -f An alternative backup strategy is to directly copy the files that PostgreSQL uses to store the data in the database; - explains where these files + explains where these files are located. You can use whatever method you prefer for doing file system backups; for example: @@ -369,7 +369,7 @@ tar -cf backup.tar /usr/local/pgsql/data an atomic snapshot of the state of the file system, but also because of internal buffering within the server). Information about stopping the server can be found in - . Needless to say, you + . Needless to say, you also need to shut down the server before restoring the data. @@ -428,10 +428,10 @@ tar -cf backup.tar /usr/local/pgsql/data If simultaneous snapshots are not possible, one option is to shut down the database server long enough to establish all the frozen snapshots. Another option is to perform a continuous archiving base backup () because such backups are immune to file + linkend="backup-base-backup"/>) because such backups are immune to file system changes during the backup. This requires enabling continuous archiving just during the backup process; restore is done using - continuous archive recovery (). + continuous archive recovery (). @@ -591,11 +591,11 @@ tar -cf backup.tar /usr/local/pgsql/data - To enable WAL archiving, set the + To enable WAL archiving, set the configuration parameter to replica or higher, - to on, + to on, and specify the shell command to use in the configuration parameter. In practice + linkend="guc-archive-command"/> configuration parameter. In practice these settings will always be placed in the postgresql.conf file. In archive_command, @@ -705,7 +705,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 than through SQL operations. You might wish to keep the configuration files in a location that will be backed up by your regular file system backup procedures. See - for how to relocate the + for how to relocate the configuration files. @@ -715,7 +715,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 where it does so), there could be a long delay between the completion of a transaction and its safe recording in archive storage. To put a limit on how old unarchived data can be, you can set - to force the server to switch + to force the server to switch to a new WAL segment file at least that often. Note that archived files that are archived early due to a forced switch are still the same length as completely full files. It is therefore unwise to set a very @@ -729,13 +729,13 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 pg_switch_wal if you want to ensure that a just-finished transaction is archived as soon as possible. Other utility functions related to WAL management are listed in . + linkend="functions-admin-backup-table"/>. When wal_level is minimal some SQL commands are optimized to avoid WAL logging, as described in . If archiving or streaming replication were + linkend="populate-pitr"/>. 
If archiving or streaming replication were turned on during execution of one of these statements, WAL would not contain enough information for archive recovery. (Crash recovery is unaffected.) For this reason, wal_level can only be changed at @@ -753,11 +753,11 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 The easiest way to perform a base backup is to use the - tool. It can create + tool. It can create a base backup either as regular files or as a tar archive. If more - flexibility than can provide is + flexibility than can provide is required, you can also make a base backup using the low level API - (see ). + (see ). @@ -791,7 +791,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 The backup history file is just a small text file. It contains the - label string you gave to , as well as + label string you gave to , as well as the starting and ending times and WAL segments of the backup. If you used the label to identify the associated dump file, then the archived history file is enough to tell you which dump file to @@ -814,7 +814,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 The procedure for making a base backup using the low level APIs contains a few more steps than - the method, but is relatively + the method, but is relatively simple. It is very important that these steps are executed in sequence, and that the success of a step is verified before proceeding to the next step. @@ -830,7 +830,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 A non-exclusive low level backup is one that allows other concurrent backups to be running (both those started using the same backup API and those started using - ). + ). @@ -859,7 +859,7 @@ SELECT pg_start_backup('label', false, false); required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval (see the configuration parameter - ). This is + ). This is usually what you want, because it minimizes the impact on query processing. If you want to start the backup as soon as possible, change the second parameter to true, which will @@ -879,7 +879,7 @@ SELECT pg_start_backup('label', false, false); pg_dumpall). It is neither necessary nor desirable to stop normal operation of the database while you do this. See - for things to + for things to consider during this backup. @@ -989,7 +989,7 @@ SELECT pg_start_backup('label'); required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval (see the configuration parameter - ). This is + ). This is usually what you want, because it minimizes the impact on query processing. If you want to start the backup as soon as possible, use: @@ -1007,7 +1007,7 @@ SELECT pg_start_backup('label', true); pg_dumpall). It is neither necessary nor desirable to stop normal operation of the database while you do this. See - for things to + for things to consider during this backup. @@ -1119,7 +1119,7 @@ SELECT pg_stop_backup(); pg_snapshots/, pg_stat_tmp/, and pg_subtrans/ (but not the directories themselves) can be omitted from the backup as they will be initialized on postmaster startup. - If is set and is under the data + If is set and is under the data directory then the contents of that directory can also be omitted. @@ -1221,7 +1221,7 @@ SELECT pg_stop_backup(); Create a recovery command file recovery.conf in the cluster - data directory (see ). 
You might + data directory (see ). You might also want to temporarily modify pg_hba.conf to prevent ordinary users from connecting until you are sure the recovery was successful. @@ -1310,7 +1310,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' at the start of recovery for a file named something like 00000001.history. This is also normal and does not indicate a problem in simple recovery situations; see - for discussion. + for discussion. @@ -1440,7 +1440,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' As with base backups, the easiest way to produce a standalone - hot backup is to use the + hot backup is to use the tool. If you include the -X parameter when calling it, all the write-ahead log required to use the backup will be included in the backup automatically, and no special action is @@ -1548,7 +1548,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' When using an archive_command script, it's desirable - to enable . + to enable . Any messages written to stderr from the script will then appear in the database server log, allowing complex configurations to be diagnosed easily if they fail. @@ -1567,7 +1567,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' - If a + If a command is executed while a base backup is being taken, and then the template database that the CREATE DATABASE copied is modified while the base backup is still in progress, it is @@ -1580,7 +1580,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' - + commands are WAL-logged with the literal absolute path, and will therefore be replayed as tablespace creations with the same absolute path. This might be undesirable if the log is being @@ -1603,8 +1603,8 @@ archive_command = 'local_backup_script.sh "%p" "%f"' your system hardware and software, the risk of partial writes might be small enough to ignore, in which case you can significantly reduce the total volume of archived logs by turning off page - snapshots using the - parameter. (Read the notes and warnings in + snapshots using the + parameter. (Read the notes and warnings in before you do so.) Turning off page snapshots does not prevent use of the logs for PITR operations. An area for future development is to compress archived WAL data by removing diff --git a/doc/src/sgml/bgworker.sgml b/doc/src/sgml/bgworker.sgml index 0b092f6e49..4bc2b696b3 100644 --- a/doc/src/sgml/bgworker.sgml +++ b/doc/src/sgml/bgworker.sgml @@ -286,6 +286,6 @@ typedef struct BackgroundWorker The maximum number of registered background workers is limited by - . + . diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml index b7483df4c0..23c0e05ed6 100644 --- a/doc/src/sgml/brin.sgml +++ b/doc/src/sgml/brin.sgml @@ -95,7 +95,7 @@ The core PostgreSQL distribution includes the BRIN operator classes shown in - . + . @@ -590,7 +590,7 @@ typedef struct BrinOpcInfo To write an operator class for a data type that implements a totally ordered set, it is possible to use the minmax support procedures alongside the corresponding operators, as shown in - . + . All operator class members (procedures and operators) are mandatory. @@ -648,7 +648,7 @@ typedef struct BrinOpcInfo To write an operator class for a complex data type which has values included within another type, it's possible to use the inclusion support procedures alongside the corresponding operators, as shown - in . It requires + in . It requires only a single additional function, which can be written in any language. More functions can be defined for additional functionality. 
All operators are optional. Some operators require other operators, as shown as @@ -821,7 +821,7 @@ typedef struct BrinOpcInfo additional data types to be supported by defining extra sets of operators. Inclusion operator class operator strategies are dependent on another operator strategy as shown in - , or the same + , or the same operator strategy as themselves. They require the dependency operator to be defined with the STORAGE data type as the left-hand-side argument and the other supported data type to be the diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index ef60a58631..da881a7737 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -27,7 +27,7 @@ Overview - lists the system catalogs. + lists the system catalogs. More detailed documentation of each catalog follows below. @@ -567,8 +567,8 @@ New aggregate functions are registered with the - command. See for more information about + linkend="sql-createaggregate"/> + command. See for more information about writing aggregate functions and the meaning of the transition functions, etc. @@ -588,7 +588,7 @@ relation access methods. There is one row for each access method supported by the system. Currently, only indexes have access methods. The requirements for index - access methods are discussed in detail in . + access methods are discussed in detail in .
@@ -649,7 +649,7 @@ methods. That data is now only directly visible at the C code level. However, pg_index_column_has_property() and related functions have been added to allow SQL queries to inspect index access - method properties; see . + method properties; see . @@ -1034,7 +1034,7 @@ attstattarget controls the level of detail of statistics accumulated for this column by - . + . A zero value indicates that no statistics should be collected. A negative value says to use the system default statistics target. The exact meaning of positive values is data type-dependent. @@ -1270,7 +1270,7 @@ - contains detailed information about user and + contains detailed information about user and privilege management. @@ -1356,7 +1356,7 @@ bool Role bypasses every row level security policy, see - for more information. + for more information. @@ -1964,8 +1964,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -2015,7 +2015,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_collation describes the available collations, which are essentially mappings from an SQL name to operating system locale categories. - See for more information. + See for more information.
@@ -2424,7 +2424,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_conversion describes - encoding conversion procedures. See + encoding conversion procedures. See for more information. @@ -2516,8 +2516,8 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_database stores information about the available databases. Databases are created with the command. - Consult for details about the meaning + linkend="sql-createdatabase"/> command. + Consult for details about the meaning of some of the parameters. @@ -2675,8 +2675,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -3053,7 +3053,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_description stores optional descriptions (comments) for each database object. Descriptions can be manipulated - with the command and viewed with + with the command and viewed with psql's \d commands. Descriptions of many built-in system objects are provided in the initial contents of pg_description. @@ -3208,7 +3208,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_event_trigger stores event triggers. - See for more information. + See for more information.
@@ -3258,7 +3258,7 @@ SCRAM-SHA-256$<iteration count>:&l char - Controls in which modes + Controls in which modes the event trigger fires. O = trigger fires in origin and local modes, D = trigger is disabled, @@ -3291,7 +3291,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_extension stores information - about the installed extensions. See + about the installed extensions. See for details about extensions. @@ -3463,8 +3463,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -3559,8 +3559,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -4011,8 +4011,8 @@ SCRAM-SHA-256$<iteration count>:&l The initial access privileges; see - and - + and + for details @@ -4034,8 +4034,8 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_language registers languages in which you can write functions or stored procedures. - See - and for more information about language handlers. + See + and for more information about language handlers.
@@ -4117,7 +4117,7 @@ SCRAM-SHA-256$<iteration count>:&l This references a function that is responsible for executing inline anonymous code blocks - ( blocks). + ( blocks). Zero if inline blocks are not supported. @@ -4139,8 +4139,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -4279,8 +4279,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -4346,8 +4346,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -4377,7 +4377,7 @@ SCRAM-SHA-256$<iteration count>:&l - Operator classes are described at length in . + Operator classes are described at length in .
@@ -4481,8 +4481,8 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_operator stores information about operators. - See - and for more information. + See + and for more information.
@@ -4639,7 +4639,7 @@ SCRAM-SHA-256$<iteration count>:&l - Operator families are described at length in . + Operator families are described at length in .
@@ -5040,8 +5040,8 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_proc stores information about functions (or procedures). - See - and for more information. + See + and for more information. @@ -5106,7 +5106,7 @@ SCRAM-SHA-256$<iteration count>:&l float4 Estimated execution cost (in units of - ); if proretset, + ); if proretset, this is cost per row returned @@ -5130,7 +5130,7 @@ SCRAM-SHA-256$<iteration count>:&l regproc pg_proc.oid Calls to this function can be simplified by this other function - (see ) + (see ) @@ -5359,8 +5359,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -5390,7 +5390,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_publication contains all publications created in the database. For more on publications see - . + .
@@ -5475,7 +5475,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_publication_rel contains the mapping between relations and publications in the database. This is a - many-to-many mapping. See also + many-to-many mapping. See also for a more user-friendly view of this information. @@ -5605,7 +5605,7 @@ SCRAM-SHA-256$<iteration count>:&l The pg_replication_origin catalog contains all replication origins created. For more on replication origins - see . + see .
@@ -5705,7 +5705,7 @@ SCRAM-SHA-256$<iteration count>:&l char - Controls in which modes + Controls in which modes the rule fires. O = rule fires in origin and local modes, D = rule is disabled, @@ -5765,8 +5765,8 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_seclabel stores security labels on database objects. Security labels can be manipulated - with the command. For an easier - way to view security labels, see . + with the command. For an easier + way to view security labels, see . @@ -6093,7 +6093,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_shdescription stores optional descriptions (comments) for shared database objects. Descriptions can be - manipulated with the command and viewed with + manipulated with the command and viewed with psql's \d commands. @@ -6160,8 +6160,8 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_shseclabel stores security labels on shared database objects. Security labels can be manipulated - with the command. For an easier - way to view security labels, see . + with the command. For an easier + way to view security labels, see . @@ -6228,7 +6228,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_statistic stores statistical data about the contents of the database. Entries are - created by + created by and subsequently used by the query planner. Note that all the statistical data is inherently approximate, even assuming that it is up-to-date. @@ -6408,7 +6408,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_statistic_ext holds extended planner statistics. Each row in this catalog corresponds to a statistics object - created with . + created with .
@@ -6521,7 +6521,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_subscription contains all existing logical replication subscriptions. For more information about logical - replication see . + replication see . @@ -6616,7 +6616,7 @@ SCRAM-SHA-256$<iteration count>:&l Array of subscribed publication names. These reference the publications on the publisher server. For more on publications - see . + see . @@ -6758,8 +6758,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -6788,7 +6788,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_transform stores information about transforms, which are a mechanism to adapt data types to procedural - languages. See for more information. + languages. See for more information.
@@ -6856,7 +6856,7 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_trigger stores triggers on tables and views. - See + See for more information. @@ -6914,7 +6914,7 @@ SCRAM-SHA-256$<iteration count>:&l char - Controls in which modes + Controls in which modes the trigger fires. O = trigger fires in origin and local modes, D = trigger is disabled, @@ -7066,7 +7066,7 @@ SCRAM-SHA-256$<iteration count>:&l PostgreSQL's text search features are - described at length in . + described at length in .
@@ -7141,7 +7141,7 @@ SCRAM-SHA-256$<iteration count>:&l PostgreSQL's text search features are - described at length in . + described at length in .
@@ -7212,7 +7212,7 @@ SCRAM-SHA-256$<iteration count>:&l PostgreSQL's text search features are - described at length in . + described at length in .
@@ -7295,7 +7295,7 @@ SCRAM-SHA-256$<iteration count>:&l PostgreSQL's text search features are - described at length in . + described at length in .
@@ -7392,7 +7392,7 @@ SCRAM-SHA-256$<iteration count>:&l PostgreSQL's text search features are - described at length in . + described at length in .
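The text-search catalogs referenced in the hunks above can be browsed directly, for example:

    SELECT cfgname FROM pg_ts_config;
    SELECT dictname FROM pg_ts_dict;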
@@ -7461,9 +7461,9 @@ SCRAM-SHA-256$<iteration count>:&l The catalog pg_type stores information about data types. Base types and enum types (scalar types) are created with - , and + , and domains with - . + . A composite type is automatically created for each table in the database, to represent the row structure of the table. It is also possible to create composite types with CREATE TYPE AS. @@ -7567,7 +7567,7 @@ SCRAM-SHA-256$<iteration count>:&l typcategory is an arbitrary classification of data types that is used by the parser to determine which implicit casts should be preferred. - See . + See . @@ -7871,8 +7871,8 @@ SCRAM-SHA-256$<iteration count>:&l Access privileges; see - and - + and + for details @@ -7881,7 +7881,7 @@ SCRAM-SHA-256$<iteration count>:&l
- lists the system-defined values + lists the system-defined values of typcategory. Any future additions to this list will also be upper-case ASCII letters. All other ASCII characters are reserved for user-defined categories. @@ -8043,7 +8043,7 @@ SCRAM-SHA-256$<iteration count>:&l - The information schema () provides + The information schema () provides an alternative set of views which overlap the functionality of the system views. Since the information schema is SQL-standard whereas the views described here are PostgreSQL-specific, @@ -8052,11 +8052,11 @@ SCRAM-SHA-256$<iteration count>:&l - lists the system views described here. + lists the system views described here. More detailed documentation of each view follows below. There are some additional views that provide access to the results of the statistics collector; they are described in . + linkend="monitoring-stats-views-table"/>. @@ -8389,7 +8389,7 @@ SCRAM-SHA-256$<iteration count>:&l be used by software packages that want to interface to PostgreSQL to facilitate finding the required header files and libraries. It provides the same basic information as the - PostgreSQL client + PostgreSQL client application. @@ -8440,7 +8440,7 @@ SCRAM-SHA-256$<iteration count>:&l - via the + via the statement in SQL @@ -8448,14 +8448,14 @@ SCRAM-SHA-256$<iteration count>:&l via the Bind message in the frontend/backend protocol, as - described in + described in via the Server Programming Interface (SPI), as described in - + @@ -8648,7 +8648,7 @@ SCRAM-SHA-256$<iteration count>:&l
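Of the three routes to a prepared statement enumerated above, the SQL-level one is the easiest to try interactively; a minimal sketch (the statement name is arbitrary):

    PREPARE fooplan (int) AS SELECT $1 + 1;
    EXECUTE fooplan(41);
    SELECT name, statement FROM pg_prepared_statements;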
- See for more information about the various + See for more information about the various ways to change run-time parameters. @@ -8813,7 +8813,7 @@ SCRAM-SHA-256$<iteration count>:&l
- See for more information about + See for more information about client authentication configuration. @@ -8890,7 +8890,7 @@ SCRAM-SHA-256$<iteration count>:&l The view pg_locks provides access to information about the locks held by active processes within the - database server. See for more discussion + database server. See for more discussion of locking. @@ -9053,7 +9053,7 @@ SCRAM-SHA-256$<iteration count>:&l text Name of the lock mode held or desired by this process (see and ) + linkend="locking-tables"/> and ) granted @@ -9164,7 +9164,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx queues, nor information about which processes are parallel workers running on behalf of which other client sessions. It is better to use the pg_blocking_pids() function - (see ) to identify which + (see ) to identify which process(es) a waiting process is blocked behind. @@ -9369,7 +9369,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The pg_prepared_statements view displays all the prepared statements that are available in the current - session. See for more information about prepared + session. See for more information about prepared statements. @@ -9377,7 +9377,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx pg_prepared_statements contains one row for each prepared statement. Rows are added to the view when a new prepared statement is created and removed when a prepared statement - is released (for example, via the command). + is released (for example, via the command). @@ -9457,7 +9457,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The view pg_prepared_xacts displays information about transactions that are currently prepared for two-phase - commit (see for details). + commit (see for details). @@ -9601,7 +9601,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The pg_replication_origin_status view contains information about how far replay for a certain origin has progressed. For more on replication origins - see . + see .
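One way to apply the pg_blocking_pids() advice in the pg_locks hunk above is to combine it with pg_stat_activity:

    SELECT pid, pg_blocking_pids(pid) AS blocked_by
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0;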
@@ -9670,7 +9670,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx For more on replication slots, - see and . + see and .
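A quick look at the slot catalog discussed above:

    SELECT slot_name, slot_type, active FROM pg_replication_slots;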
@@ -9917,7 +9917,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx Role bypasses every row level security policy, see - for more information. + for more information. @@ -10203,8 +10203,8 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The view pg_settings provides access to run-time parameters of the server. It is essentially an alternative - interface to the - and commands. + interface to the + and commands. It also provides access to some facts about each parameter that are not directly available from SHOW, such as minimum and maximum values. @@ -10441,7 +10441,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx - See for more information about the various + See for more information about the various ways to change these parameters. @@ -10449,7 +10449,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The pg_settings view cannot be inserted into or deleted from, but it can be updated. An UPDATE applied to a row of pg_settings is equivalent to executing - the command on that named + the command on that named parameter. The change only affects the value used by the current session. If an UPDATE is issued within a transaction that is later aborted, the effects of the UPDATE command @@ -10543,7 +10543,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx User bypasses every row level security policy, see - for more information. + for more information. @@ -10763,7 +10763,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The maximum number of entries in the array fields can be controlled on a column-by-column basis using the ALTER TABLE SET STATISTICS command, or globally by setting the - run-time parameter. + run-time parameter. @@ -10858,7 +10858,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx The view pg_timezone_abbrevs provides a list of time zone abbreviations that are currently recognized by the datetime input routines. The contents of this view change when the - run-time parameter is modified. + run-time parameter is modified.
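The UPDATE-on-pg_settings equivalence described a few hunks above looks like this in practice:

    UPDATE pg_settings SET setting = 'off' WHERE name = 'enable_seqscan';
    -- same session-local effect as: SET enable_seqscan TO off;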
@@ -10895,7 +10895,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx While most timezone abbreviations represent fixed offsets from UTC, there are some that have historically varied in value - (see for more information). + (see for more information). In such cases this view presents their current meaning. @@ -11025,7 +11025,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx bool User bypasses every row level security policy, see - for more information. + for more information. diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index ce395e115a..dc3fd34a62 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -15,8 +15,8 @@ Using the locale features of the operating system to provide locale-specific collation order, number formatting, translated messages, and other aspects. - This is covered in and - . + This is covered in and + . @@ -25,7 +25,7 @@ Providing a number of different character sets to support storing text in all kinds of languages, and providing character set translation between client and server. - This is covered in . + This is covered in . @@ -146,7 +146,7 @@ initdb --locale=sv_SE the sort order of indexes, so they must be kept fixed, or indexes on text columns would become corrupt. (But you can alleviate this restriction using collations, as discussed - in .) + in .) The default values for these categories are determined when initdb is run, and those values are used when new databases are created, unless @@ -157,7 +157,7 @@ initdb --locale=sv_SE The other locale categories can be changed whenever desired by setting the server configuration parameters that have the same name as the locale categories (see for details). The values + linkend="runtime-config-client-format"/> for details). The values that are chosen by initdb are actually only written into the configuration file postgresql.conf to serve as defaults when the server is started. If you remove these @@ -267,10 +267,10 @@ initdb --locale=sv_SE with LIKE clauses under a non-C locale, several custom operator classes exist. These allow the creation of an index that performs a strict character-by-character comparison, ignoring - locale comparison rules. Refer to + locale comparison rules. Refer to for more information. Another approach is to create indexes using the C collation, as discussed in - . + . @@ -316,7 +316,7 @@ initdb --locale=sv_SE PostgreSQL speak their preferred language well. If messages in your language are currently not available or not fully translated, your assistance would be appreciated. If you want to - help, refer to or write to the developers' + help, refer to or write to the developers' mailing list. @@ -524,7 +524,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; these under one concept than to create another infrastructure for setting LC_CTYPE per expression.) Also, a libc collation - is tied to a character set encoding (see ). + is tied to a character set encoding (see ). The same collation name may exist for different encodings. @@ -605,7 +605,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; for LC_COLLATE and LC_CTYPE, or if new locales are installed in the operating system after the database system was initialized, then a new collation may be created using - the command. + the command. New operating system locales can also be imported en masse using the pg_import_system_collations() function. 
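The LIKE-index workarounds mentioned in the charset hunks above, sketched with hypothetical table, column, and index names:

    CREATE INDEX test_like_idx ON test (col text_pattern_ops);
    -- or, per the alternative the text suggests, an index under the C collation:
    CREATE INDEX test_c_idx ON test (col COLLATE "C");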
@@ -702,7 +702,7 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; If the standard and predefined collations are not sufficient, users can create their own collation objects using the SQL - command . + command . @@ -730,7 +730,7 @@ CREATE COLLATION german (provider = libc, locale = 'de_DE'); defined in the operating system when the database instance is initialized, it is not often necessary to manually create new ones. Reasons might be if a different naming system is desired (in which case - see also ) or if the operating system has + see also ) or if the operating system has been upgraded to provide new locale definitions (in which case see also pg_import_system_collations()). @@ -871,7 +871,7 @@ CREATE COLLATION german (provider = libc, locale = 'de_DE'); Copying Collations - The command can also be used to + The command can also be used to create a new collation from an existing collation, which can be useful to be able to use operating-system-independent collation names in applications, create compatibility names, or use an ICU-provided collation @@ -924,7 +924,7 @@ CREATE COLLATION french FROM "fr-x-icu"; Supported Character Sets - shows the character sets available + shows the character sets available for use in PostgreSQL. @@ -1392,7 +1392,7 @@ CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE= database. When copying any other database, the encoding and locale settings cannot be changed from those of the source database, because that might result in corrupt data. For more information see - . + . @@ -1449,7 +1449,7 @@ $ psql -l character set combinations. The conversion information is stored in the pg_conversion system catalog. PostgreSQL comes with some predefined conversions, as shown in . You can create a new + linkend="multibyte-translation-table"/>. You can create a new conversion using the SQL command CREATE CONVERSION. @@ -1763,7 +1763,7 @@ $ psql -l - libpq () has functions to control the client encoding. + libpq () has functions to control the client encoding. @@ -1812,7 +1812,7 @@ RESET client_encoding; Using the configuration variable . If the + linkend="guc-client-encoding"/>. If the client_encoding variable is set, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 99921ba079..c8a1bc79aa 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -13,13 +13,13 @@ wants to connect as, much the same way one logs into a Unix computer as a particular user. Within the SQL environment the active database user name determines access privileges to database objects — see - for more information. Therefore, it is + for more information. Therefore, it is essential to restrict which database users can connect. - As explained in , + As explained in , PostgreSQL actually does privilege management in terms of roles. In this chapter, we consistently use database user to mean role with the @@ -70,7 +70,7 @@ pg_hba.conf file is installed when the data directory is initialized by initdb. It is possible to place the authentication configuration file elsewhere, - however; see the configuration parameter. + however; see the configuration parameter. 
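Rounding off the client-encoding hunks in the charset.sgml diff above: the encoding can also be changed and checked per session, e.g.:

    SET client_encoding TO 'LATIN1';
    SHOW client_encoding;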
@@ -136,7 +136,7 @@ hostnossl database user Remote TCP/IP connections will not be possible unless the server is started with an appropriate value for the - configuration parameter, + configuration parameter, since the default behavior is to listen for TCP/IP connections only on the local loopback address localhost. @@ -157,8 +157,8 @@ hostnossl database user To make use of this option the server must be built with SSL support. Furthermore, SSL must be enabled - by setting the configuration parameter (see - for more information). + by setting the configuration parameter (see + for more information). Otherwise, the hostssl record is ignored except for logging a warning that it cannot match any connections. @@ -381,7 +381,7 @@ hostnossl database user Specifies the authentication method to use when a connection matches this record. The possible choices are summarized here; details - are in . + are in . @@ -393,7 +393,7 @@ hostnossl database user PostgreSQL database server to login as any PostgreSQL user they wish, without the need for a password or any other authentication. See for details. + linkend="auth-trust"/> for details. @@ -416,7 +416,7 @@ hostnossl database user Perform SCRAM-SHA-256 authentication to verify the user's - password. See for details. + password. See for details. @@ -426,7 +426,7 @@ hostnossl database user Perform SCRAM-SHA-256 or MD5 authentication to verify the - user's password. See + user's password. See for details. @@ -440,7 +440,7 @@ hostnossl database user authentication. Since the password is sent in clear text over the network, this should not be used on untrusted networks. - See for details. + See for details. @@ -451,7 +451,7 @@ hostnossl database user Use GSSAPI to authenticate the user. This is only available for TCP/IP connections. See for details. + linkend="gssapi-auth"/> for details. @@ -462,7 +462,7 @@ hostnossl database user Use SSPI to authenticate the user. This is only available on Windows. See for details. + linkend="sspi-auth"/> for details. @@ -477,7 +477,7 @@ hostnossl database user Ident authentication can only be used on TCP/IP connections. When specified for local connections, peer authentication will be used instead. - See for details. + See for details. @@ -489,7 +489,7 @@ hostnossl database user Obtain the client's operating system user name from the operating system and check if it matches the requested database user name. This is only available for local connections. - See for details. + See for details. @@ -499,7 +499,7 @@ hostnossl database user Authenticate using an LDAP server. See for details. + linkend="auth-ldap"/> for details. @@ -509,7 +509,7 @@ hostnossl database user Authenticate using a RADIUS server. See for details. + linkend="auth-radius"/> for details. @@ -519,7 +519,7 @@ hostnossl database user Authenticate using SSL client certificates. See - for details. + for details. @@ -530,7 +530,7 @@ hostnossl database user Authenticate using the Pluggable Authentication Modules (PAM) service provided by the operating system. See for details. + linkend="auth-pam"/> for details. @@ -540,7 +540,7 @@ hostnossl database user Authenticate using the BSD Authentication service provided by the - operating system. See for details. + operating system. See for details. @@ -638,7 +638,7 @@ hostnossl database user Some examples of pg_hba.conf entries are shown in - . See the next section for details on the + . See the next section for details on the different authentication methods. 
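Alongside the pg_hba.conf examples referenced above, PostgreSQL 10 can show the parsed rules from SQL via the pg_hba_file_rules view; a sketch:

    SELECT line_number, type, database, user_name, auth_method
    FROM pg_hba_file_rules;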
@@ -763,7 +763,7 @@ local db1,db2,@demodbs all md5 pg_ident.confpg_ident.conf and is stored in the cluster's data directory. (It is possible to place the map file - elsewhere, however; see the + elsewhere, however; see the configuration parameter.) The ident map file contains lines of the general form: @@ -790,7 +790,7 @@ local db1,db2,@demodbs all md5 If the system-username field starts with a slash (/), the remainder of the field is treated as a regular expression. - (See for details of + (See for details of PostgreSQL's regular expression syntax.) The regular expression can include a single capture, or parenthesized subexpression, which can then be referenced in the database-username @@ -828,8 +828,8 @@ mymap /^(.*)@otherdomain\.com$ guest A pg_ident.conf file that could be used in conjunction with the pg_hba.conf file in is shown in . In this example, anyone + linkend="example-pg-hba.conf"/> is shown in . In this example, anyone logged in to a machine on the 192.168 network that does not have the operating system user name bryanh, ann, or robert would not be granted access. Unix user @@ -885,7 +885,7 @@ omicron bryanh guest1 Unix-domain socket file using file-system permissions. To do this, set the unix_socket_permissions (and possibly unix_socket_group) configuration parameters as - described in . Or you + described in . Or you could set the unix_socket_directories configuration parameter to place the socket file in a suitably restricted directory. @@ -965,7 +965,7 @@ omicron bryanh guest1 The md5 method cannot be used with - the feature. + the feature. @@ -998,8 +998,8 @@ omicron bryanh guest1 separate from operating system user passwords. The password for each database user is stored in the pg_authid system catalog. Passwords can be managed with the SQL commands - and - , + and + , e.g., CREATE ROLE foo WITH LOGIN PASSWORD 'secret', or the psql command \password. @@ -1011,7 +1011,7 @@ omicron bryanh guest1 The availability of the different password-based authentication methods depends on how a user's password on the server is encrypted (or hashed, more accurately). This is controlled by the configuration - parameter at the time the + parameter at the time the password is set. If a password was encrypted using the scram-sha-256 setting, then it can be used for the authentication methods scram-sha-256 @@ -1061,7 +1061,7 @@ omicron bryanh guest1 GSSAPI support has to be enabled when PostgreSQL is built; - see for more information. + see for more information. @@ -1072,7 +1072,7 @@ omicron bryanh guest1 The PostgreSQL server will accept any principal that is included in the keytab used by the server, but care needs to be taken to specify the correct principal details when making the connection from the client using the krbsrvname connection parameter. (See - also .) The installation default can be + also .) The installation default can be changed from the default postgres at build time using ./configure --with-krb-srvnam=whatever. In most environments, @@ -1112,9 +1112,9 @@ omicron bryanh guest1 Make sure that your server keytab file is readable (and preferably only readable, not writable) by the PostgreSQL - server account. (See also .) The location + server account. (See also .) The location of the key file is specified by the configuration + linkend="guc-krb-server-keyfile"/> configuration parameter. The default is /usr/local/pgsql/etc/krb5.keytab (or whatever directory was specified as sysconfdir at build time). 
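A sketch of the password-hashing behavior described in the password hunks above, with a hypothetical role name (requires a server that understands scram-sha-256):

    SET password_encryption = 'scram-sha-256';
    ALTER ROLE app_user WITH PASSWORD 'correct-horse-battery-staple';
    -- pg_authid.rolpassword now holds a SCRAM-SHA-256 verifier, not an MD5 hash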
@@ -1138,7 +1138,7 @@ omicron bryanh guest1 database user name fred, principal fred@EXAMPLE.COM would be able to connect. To also allow principal fred/users.example.com@EXAMPLE.COM, use a user name - map, as described in . + map, as described in . @@ -1150,7 +1150,7 @@ omicron bryanh guest1 If set to 0, the realm name from the authenticated user principal is stripped off before being passed through the user name mapping - (). This is discouraged and is + (). This is discouraged and is primarily available for backwards compatibility, as it is not secure in multi-realm environments unless krb_realm is also used. It is recommended to @@ -1166,7 +1166,7 @@ omicron bryanh guest1 Allows for mapping between system and database user names. See - for details. For a GSSAPI/Kerberos + for details. For a GSSAPI/Kerberos principal, such as username@EXAMPLE.COM (or, less commonly, username/hostbased@EXAMPLE.COM), the user name used for mapping is @@ -1217,7 +1217,7 @@ omicron bryanh guest1 When using Kerberos authentication, SSPI works the same way - GSSAPI does; see + GSSAPI does; see for details. @@ -1231,7 +1231,7 @@ omicron bryanh guest1 If set to 0, the realm name from the authenticated user principal is stripped off before being passed through the user name mapping - (). This is discouraged and is + (). This is discouraged and is primarily available for backwards compatibility, as it is not secure in multi-realm environments unless krb_realm is also used. It is recommended to @@ -1284,7 +1284,7 @@ omicron bryanh guest1 Allows for mapping between system and database user names. See - for details. For a SSPI/Kerberos + for details. For a SSPI/Kerberos principal, such as username@EXAMPLE.COM (or, less commonly, username/hostbased@EXAMPLE.COM), the user name used for mapping is @@ -1329,7 +1329,7 @@ omicron bryanh guest1 When ident is specified for a local (non-TCP/IP) connection, - peer authentication (see ) will be + peer authentication (see ) will be used instead. @@ -1342,7 +1342,7 @@ omicron bryanh guest1 Allows for mapping between system and database user names. See - for details. + for details. @@ -1415,7 +1415,7 @@ omicron bryanh guest1 Allows for mapping between system and database user names. See - for details. + for details. @@ -1828,7 +1828,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse Allows for mapping between system and database user names. See - for details. + for details. diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 7059dd4e5f..3060597011 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -170,7 +170,7 @@ shared_buffers = 128MB postgresql.auto.confpostgresql.auto.conf, which has the same format as postgresql.conf but should never be edited manually. This file holds settings provided through - the command. This file is automatically + the command. This file is automatically read whenever postgresql.conf is, and its settings take effect in the same way. Settings in postgresql.auto.conf override those in postgresql.conf. @@ -191,7 +191,7 @@ shared_buffers = 128MB PostgreSQL provides three SQL commands to establish configuration defaults. - The already-mentioned command + The already-mentioned command provides a SQL-accessible means of changing global defaults; it is functionally equivalent to editing postgresql.conf. 
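The ALTER SYSTEM route just described, together with the per-database and per-role commands covered immediately below, sketched with hypothetical database, schema, and role names (work_mem is chosen because the new default takes effect on reload):

    ALTER SYSTEM SET work_mem = '64MB';       -- written to postgresql.auto.conf
    SELECT pg_reload_conf();
    ALTER DATABASE mydb SET search_path = myschema, public;
    ALTER ROLE reporter IN DATABASE mydb SET work_mem = '128MB';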
In addition, there are two commands that allow setting of defaults @@ -201,14 +201,14 @@ shared_buffers = 128MB - The command allows global + The command allows global settings to be overridden on a per-database basis. - The command allows both global and + The command allows both global and per-database settings to be overridden with user-specific values. @@ -232,7 +232,7 @@ shared_buffers = 128MB - The command allows inspection of the + The command allows inspection of the current value of all parameters. The corresponding function is current_setting(setting_name text). @@ -240,7 +240,7 @@ shared_buffers = 128MB - The command allows modification of the + The command allows modification of the current value of those parameters that can be set locally to a session; it has no effect on other sessions. The corresponding function is @@ -266,7 +266,7 @@ shared_buffers = 128MB - Using on this view, specifically + Using on this view, specifically updating the setting column, is the equivalent of issuing SET commands. For example, the equivalent of @@ -470,7 +470,7 @@ include_dir 'conf.d' already mentioned, PostgreSQL uses two other manually-edited configuration files, which control client authentication (their use is discussed in ). By default, all three + linkend="client-authentication"/>). By default, all three configuration files are stored in the database cluster's data directory. The parameters described in this section allow the configuration files to be placed elsewhere. (Doing so can ease @@ -535,7 +535,7 @@ include_dir 'conf.d' Specifies the configuration file for user name mapping (customarily called pg_ident.conf). This parameter can only be set at server start. - See also . + See also . @@ -625,7 +625,7 @@ include_dir 'conf.d' The default value is localhost, which allows only local TCP/IP loopback connections to be made. While client authentication () allows fine-grained control + linkend="client-authentication"/>) allows fine-grained control over who can access the server, listen_addresses controls which interfaces accept connection attempts, which can help prevent repeated malicious connection requests on @@ -685,7 +685,7 @@ include_dir 'conf.d' Determines the number of connection slots that are reserved for connections by PostgreSQL - superusers. At most + superusers. At most connections can ever be active simultaneously. Whenever the number of active concurrent connections is at least max_connections minus @@ -794,7 +794,7 @@ include_dir 'conf.d' This access control mechanism is independent of the one - described in . + described in . @@ -959,7 +959,7 @@ include_dir 'conf.d' Enables SSL connections. Please read - before using this. + before using this. This parameter can only be set in the postgresql.conf file or on the server command line. The default is off. @@ -1180,8 +1180,8 @@ include_dir 'conf.d' - When a password is specified in or - , this parameter determines the algorithm + When a password is specified in or + , this parameter determines the algorithm to use to encrypt the password. The default value is md5, which stores the password as an MD5 hash (on is also accepted, as alias for md5). Setting this parameter to @@ -1190,7 +1190,7 @@ include_dir 'conf.d' Note that older clients might lack support for the SCRAM authentication mechanism, and hence not work with passwords encrypted with - SCRAM-SHA-256. See for more details. + SCRAM-SHA-256. See for more details. @@ -1228,7 +1228,7 @@ include_dir 'conf.d' Sets the location of the Kerberos server key file. 
See - + for details. This parameter can only be set in the postgresql.conf file or on the server command line. @@ -1376,7 +1376,7 @@ include_dir 'conf.d' The use of huge pages results in smaller page tables and less CPU time spent on memory management, increasing performance. For more details, - see . + see . @@ -1428,7 +1428,7 @@ include_dir 'conf.d' Sets the maximum number of transactions that can be in the prepared state simultaneously (see ). + linkend="sql-prepare-transaction"/>). Setting this parameter to zero (which is the default) disables the prepared-transaction feature. This parameter can only be set at server start. @@ -1439,7 +1439,7 @@ include_dir 'conf.d' should be set to zero to prevent accidental creation of prepared transactions. If you are using prepared transactions, you will probably want max_prepared_transactions to be at - least as large as , so that every + least as large as , so that every session can have a prepared transaction pending. @@ -1497,10 +1497,10 @@ include_dir 'conf.d' Note that when autovacuum runs, up to - times this memory + times this memory may be allocated, so be careful not to set the default value too high. It may be useful to control for this by separately - setting . + setting . @@ -1515,7 +1515,7 @@ include_dir 'conf.d' Specifies the maximum amount of memory to be used by each autovacuum worker process. It defaults to -1, indicating that - the value of should + the value of should be used instead. The setting has no effect on the behavior of VACUUM when run in other contexts. @@ -1649,8 +1649,8 @@ include_dir 'conf.d' Cost-based Vacuum Delay - During the execution of - and + During the execution of + and commands, the system maintains an internal counter that keeps track of the estimated cost of the various I/O operations that are performed. When the accumulated @@ -1893,7 +1893,7 @@ include_dir 'conf.d' the OS writes data back in larger batches in the background. Often that will result in greatly reduced transaction latency, but there also are some cases, especially with workloads that are bigger than - , but smaller than the OS's page + , but smaller than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, and @@ -1962,7 +1962,7 @@ include_dir 'conf.d' The default is 1 on supported systems, otherwise 0. This value can be overridden for tables in a particular tablespace by setting the tablespace parameter of the same name (see - ). + ). @@ -1988,8 +1988,8 @@ include_dir 'conf.d' When changing this value, consider also adjusting - and - . + and + . @@ -2005,8 +2005,8 @@ include_dir 'conf.d' Sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Parallel workers are taken from the pool of processes established by - , limited by - . Note that the requested + , limited by + . Note that the requested number of workers may not actually be available at run time. If this occurs, the plan will run with fewer workers than expected, which may be inefficient. The default value is 2. Setting this value to 0 @@ -2020,7 +2020,7 @@ include_dir 'conf.d' system as an additional user session. This should be taken into account when choosing a value for this setting, as well as when configuring other settings that control resource utilization, such - as . Resource limits such as + as . 
Resource limits such as work_mem are applied individually to each worker, which means the total utilization may be much higher across all processes than it would normally be for any single process. @@ -2031,7 +2031,7 @@ include_dir 'conf.d' For more information on parallel query, see - . + . @@ -2047,9 +2047,9 @@ include_dir 'conf.d' Sets the maximum number of workers that the system can support for parallel queries. The default value is 8. When increasing or decreasing this value, consider also adjusting - . + . Also, note that a setting for this value which is higher than - will have no effect, + will have no effect, since parallel workers are taken from the pool of worker processes established by that setting. @@ -2072,7 +2072,7 @@ include_dir 'conf.d' checkpoint, or when the OS writes data back in larger batches in the background. Often that will result in greatly reduced transaction latency, but there also are some cases, especially with workloads - that are bigger than , but smaller + that are bigger than , but smaller than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, @@ -2148,7 +2148,7 @@ include_dir 'conf.d' For additional information on tuning these settings, - see . + see . @@ -2176,7 +2176,7 @@ include_dir 'conf.d' In minimal level, WAL-logging of some bulk operations can be safely skipped, which can make those - operations much faster (see ). + operations much faster (see ). Operations in which this optimization can be applied include: CREATE TABLE AS @@ -2188,7 +2188,7 @@ include_dir 'conf.d' But minimal WAL does not contain enough information to reconstruct the data from a base backup and the WAL logs, so replica or higher must be used to enable WAL archiving - () and streaming replication. + () and streaming replication. In logical level, the same information is logged as @@ -2218,7 +2218,7 @@ include_dir 'conf.d' If this parameter is on, the PostgreSQL server will try to make sure that updates are physically written to disk, by issuing fsync() system calls or various - equivalent methods (see ). + equivalent methods (see ). This ensures that the database cluster can recover to a consistent state after an operating system or hardware crash. @@ -2254,7 +2254,7 @@ include_dir 'conf.d' - In many situations, turning off + In many situations, turning off for noncritical transactions can provide much of the potential performance benefit of turning off fsync, without the attendant risks of data corruption. @@ -2264,7 +2264,7 @@ include_dir 'conf.d' fsync can only be set in the postgresql.conf file or on the server command line. If you turn this parameter off, also consider turning off - . + . @@ -2285,8 +2285,8 @@ include_dir 'conf.d' is on. When off, there can be a delay between when success is reported to the client and when the transaction is really guaranteed to be safe against a server crash. (The maximum - delay is three times .) Unlike - , setting this parameter to off + delay is three times .) Unlike + , setting this parameter to off does not create any risk of database inconsistency: an operating system or database crash might result in some recent allegedly-committed transactions being lost, but @@ -2294,10 +2294,10 @@ include_dir 'conf.d' been aborted cleanly. So, turning synchronous_commit off can be a useful alternative when performance is more important than exact certainty about the durability of a transaction. For more - discussion see . 
+ discussion see . - If is non-empty, this + If is non-empty, this parameter also controls whether or not transaction commits will wait for their WAL records to be replicated to the standby server(s). When set to on, commits will wait until replies @@ -2389,7 +2389,7 @@ include_dir 'conf.d' necessary to change this setting or other aspects of your system configuration in order to create a crash-safe configuration or achieve optimal performance. - These aspects are discussed in . + These aspects are discussed in . This parameter can only be set in the postgresql.conf file or on the server command line. @@ -2432,7 +2432,7 @@ include_dir 'conf.d' Turning off this parameter does not affect use of WAL archiving for point-in-time recovery (PITR) - (see ). + (see ). @@ -2480,7 +2480,7 @@ include_dir 'conf.d' When this parameter is on, the PostgreSQL server compresses a full page image written to WAL when - is on or during a base backup. + is on or during a base backup. A compressed page image will be decompressed during WAL replay. The default value is off. Only superusers can change this setting. @@ -2505,7 +2505,7 @@ include_dir 'conf.d' The amount of shared memory used for WAL data that has not yet been written to disk. The default setting of -1 selects a size equal to - 1/32nd (about 3%) of , but not less + 1/32nd (about 3%) of , but not less than 64kB nor more than the size of one WAL segment, typically 16MB. This value can be set manually if the automatic choice is too large or too small, @@ -2682,7 +2682,7 @@ include_dir 'conf.d' checkpoint, or when the OS writes data back in larger batches in the background. Often that will result in greatly reduced transaction latency, but there also are some cases, especially with workloads - that are bigger than , but smaller + that are bigger than , but smaller than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between 0, which disables forced writeback, @@ -2772,14 +2772,14 @@ include_dir 'conf.d' When archive_mode is enabled, completed WAL segments are sent to archive storage by setting - . In addition to off, + . In addition to off, to disable, there are two modes: on, and always. During normal operation, there is no difference between the two modes, but when set to always the WAL archiver is enabled also during archive recovery or standby mode. In always mode, all files restored from the archive or streamed with streaming replication will be archived (again). See - for details. + for details. archive_mode and archive_command are @@ -2809,7 +2809,7 @@ include_dir 'conf.d' Use %% to embed an actual % character in the command. It is important for the command to return a zero exit status only if it succeeds. For more information see - . + . This parameter can only be set in the postgresql.conf @@ -2836,7 +2836,7 @@ include_dir 'conf.d' - The is only invoked for + The is only invoked for completed WAL segments. Hence, if your server generates little WAL traffic (or has slack periods where it does so), there could be a long delay between the completion of a transaction and its safe @@ -2872,10 +2872,10 @@ include_dir 'conf.d' These settings control the behavior of the built-in streaming replication feature (see - ). Servers will be either a + ). Servers will be either a Master or a Standby server. Masters can send data, while Standby(s) are always receivers of replicated data. 
When cascading replication - (see ) is used, Standby server(s) + (see ) is used, Standby server(s) can also be senders, as well as receivers. Parameters are mainly for Sending and Standby servers, though some parameters have meaning only on the Master server. Settings may vary @@ -2909,7 +2909,7 @@ include_dir 'conf.d' processes). The default is 10. The value 0 means replication is disabled. WAL sender processes count towards the total number of connections, so the parameter cannot be set higher than - . Abrupt streaming client + . Abrupt streaming client disconnection might cause an orphaned connection slot until a timeout is reached, so this parameter should be set slightly higher than the maximum number of expected clients so disconnected @@ -2930,7 +2930,7 @@ include_dir 'conf.d' Specifies the maximum number of replication slots - (see ) that the server + (see ) that the server can support. The default is 10. This parameter can only be set at server start. wal_level must be set @@ -3021,9 +3021,9 @@ include_dir 'conf.d' These parameters can be set on the master/primary server that is to send replication data to one or more standby servers. Note that in addition to these parameters, - must be set appropriately on the master + must be set appropriately on the master server, and optionally WAL archiving can be enabled as - well (see ). + well (see ). The values of these parameters on standby servers are irrelevant, although you may wish to set them there in preparation for the possibility of a standby becoming the master. @@ -3041,7 +3041,7 @@ include_dir 'conf.d' Specifies a list of standby servers that can support synchronous replication, as described in - . + . There will be one or more active synchronous standbys; transactions waiting for commit will be allowed to proceed after these standby servers confirm receipt of their data. @@ -3148,7 +3148,7 @@ ANY num_sync ( parameter to + parameter to local or off. @@ -3172,7 +3172,7 @@ ANY num_sync ( . This allows + servers, as described in . This allows more time for queries on the standby to complete without incurring conflicts due to early cleanup of rows. However, since the value is measured in terms of number of write transactions occurring on the @@ -3215,7 +3215,7 @@ ANY num_sync ( . + recovery, as described in . The default value is on. This parameter can only be set at server start. It only has effect during archive recovery or in standby mode. @@ -3234,7 +3234,7 @@ ANY num_sync ( . + . max_standby_archive_delay applies when WAL data is being read from WAL archive (and is therefore not current). The default is 30 seconds. Units are milliseconds if not specified. @@ -3265,7 +3265,7 @@ ANY num_sync ( . + . max_standby_streaming_delay applies when WAL data is being received via streaming replication. The default is 30 seconds. Units are milliseconds if not specified. @@ -3484,10 +3484,10 @@ ANY num_sync ( ), - running manually, increasing + constants (see ), + running manually, increasing the value of the configuration parameter, + linkend="guc-default-statistics-target"/> configuration parameter, and increasing the amount of statistics collected for specific columns using ALTER TABLE SET STATISTICS. @@ -3579,7 +3579,7 @@ ANY num_sync ( ). + types (see ). The default is on. @@ -3745,7 +3745,7 @@ ANY num_sync ( ). + (see ). @@ -3762,7 +3762,7 @@ ANY num_sync ( ). + (see ). @@ -3960,7 +3960,7 @@ ANY num_sync ( . + For more information see . @@ -4124,7 +4124,7 @@ ANY num_sync ( . + query planner, refer to . 
@@ -4180,7 +4180,7 @@ SELECT * FROM parent WHERE key = 2400; - Refer to for + Refer to for more information on using constraint exclusion and partitioning. @@ -4219,13 +4219,13 @@ SELECT * FROM parent WHERE key = 2400; resulting FROM list would have no more than this many items. Smaller values reduce planning time but might yield inferior query plans. The default is eight. - For more information see . + For more information see . - Setting this value to or more + Setting this value to or more may trigger use of the GEQO planner, resulting in non-optimal - plans. See . + plans. See . @@ -4255,13 +4255,13 @@ SELECT * FROM parent WHERE key = 2400; the optimal join order, advanced users can elect to temporarily set this variable to 1, and then specify the join order they desire explicitly. - For more information see . + For more information see . - Setting this value to or more + Setting this value to or more may trigger use of the GEQO planner, resulting in non-optimal - plans. See . + plans. See . @@ -4386,8 +4386,8 @@ SELECT * FROM parent WHERE key = 2400; log entries are output in comma separated value (CSV) format, which is convenient for loading logs into programs. - See for details. - must be enabled to generate + See for details. + must be enabled to generate CSV-format log output. @@ -4420,7 +4420,7 @@ csvlog log/postgresql.csv log_destination. PostgreSQL can log to syslog facilities LOCAL0 through LOCAL7 (see ), but the default + linkend="guc-syslog-facility"/>), but the default syslog configuration on most platforms will discard all such messages. You will need to add something like: @@ -4435,7 +4435,7 @@ local0.* /var/log/postgresql register an event source and its library with the operating system so that the Windows Event Viewer can display event log messages cleanly. - See for details. + See for details. @@ -4522,7 +4522,7 @@ local0.* /var/log/postgresql file names. (Note that if there are any time-zone-dependent %-escapes, the computation is done in the zone specified - by .) + by .) The supported %-escapes are similar to those listed in the Open Group's strftime @@ -4576,7 +4576,7 @@ local0.* /var/log/postgresql server owner can read or write the log files. The other commonly useful setting is 0640, allowing members of the owner's group to read the files. Note however that to make use of such a - setting, you'll need to alter to + setting, you'll need to alter to store the files somewhere outside the cluster data directory. In any case, it's unwise to make the log files world-readable, since they might contain sensitive data. @@ -4897,13 +4897,13 @@ local0.* /var/log/postgresql When using this option together with - , + , the text of statements that are logged because of log_statement will not be repeated in the duration log message. If you are not using syslog, it is recommended that you log the PID or session ID using - + so that you can link the statement message to the later duration message using the process ID or session ID. @@ -4914,7 +4914,7 @@ local0.* /var/log/postgresql - explains the message + explains the message severity levels used by PostgreSQL. If logging output is sent to syslog or Windows' eventlog, the severity levels are translated @@ -5019,7 +5019,7 @@ local0.* /var/log/postgresql It is typically set by an application upon connection to the server. The name will be displayed in the pg_stat_activity view and included in CSV log entries. It can also be included in regular - log entries via the parameter. + log entries via the parameter. 
Only printable ASCII characters may be used in the application_name value. Other characters will be replaced with question marks (?). @@ -5051,8 +5051,8 @@ local0.* /var/log/postgresql These messages are emitted at LOG message level, so by default they will appear in the server log but will not be sent to the client. You can change that by adjusting - and/or - . + and/or + . These parameters are off by default. @@ -5159,7 +5159,7 @@ local0.* /var/log/postgresql The difference between setting this option and setting - to zero is that + to zero is that exceeding log_min_duration_statement forces the text of the query to be logged, but this option doesn't. Thus, if log_duration is on and @@ -5187,7 +5187,7 @@ local0.* /var/log/postgresql the logging of DETAIL, HINT, QUERY, and CONTEXT error information. VERBOSE output includes the SQLSTATE error - code (see also ) and the source code file name, function name, + code (see also ) and the source code file name, function name, and line number that generated the error. Only superusers can change this setting. @@ -5397,7 +5397,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Controls whether a log message is produced when a session waits - longer than to acquire a + longer than to acquire a lock. This is useful in determining if lock waits are causing poor performance. The default is off. Only superusers can change this setting. @@ -5459,7 +5459,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Causes each replication command to be logged in the server log. - See for more information about + See for more information about replication command. The default value is off. Only superusers can change this setting. @@ -5496,12 +5496,12 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Sets the time zone used for timestamps written in the server log. - Unlike , this value is cluster-wide, + Unlike , this value is cluster-wide, so that all sessions will report timestamps consistently. The built-in default is GMT, but that is typically overridden in postgresql.conf; initdb will install a setting there corresponding to its system environment. - See for more information. + See for more information. This parameter can only be set in the postgresql.conf file or on the server command line. @@ -5641,7 +5641,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; These settings control how process titles of server processes are modified. Process titles are typically viewed using programs like ps or, on Windows, Process Explorer. - See for details. + See for details. @@ -5697,7 +5697,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; When statistics collection is enabled, the data that is produced can be accessed via the pg_stat and pg_statio family of system views. - Refer to for more information. + Refer to for more information. @@ -5766,12 +5766,12 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Enables timing of database I/O calls. This parameter is off by default, because it will repeatedly query the operating system for the current time, which may cause significant overhead on some - platforms. You can use the tool to + platforms. You can use the tool to measure the overhead of timing on your system. I/O timing information is - displayed in , in the output of - when the BUFFERS option is - used, and by . Only superusers can + displayed in , in the output of + when the BUFFERS option is + used, and by . Only superusers can change this setting. 
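With the timing-overhead caveat above in mind, track_io_timing is easy to observe in EXPLAIN output (superuser required; the table name is hypothetical):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM orders;
    -- the plan now carries "I/O Timings" lines alongside the buffer counts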
@@ -5878,10 +5878,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; These settings control the behavior of the autovacuum - feature. Refer to for more information. + feature. Refer to for more information. Note that many of these settings can be overridden on a per-table basis; see . + endterm="sql-createtable-storage-parameters-title"/>. @@ -5896,7 +5896,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Controls whether the server should run the autovacuum launcher daemon. This is on by default; however, - must also be enabled for + must also be enabled for autovacuum to work. This parameter can only be set in the postgresql.conf file or on the server command line; however, autovacuuming can be @@ -5906,7 +5906,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Note that even when this parameter is disabled, the system will launch autovacuum processes if necessary to prevent transaction ID wraparound. See for more information. + linkend="vacuum-for-wraparound"/> for more information. @@ -6071,7 +6071,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; This parameter can only be set at server start, but the setting can be reduced for individual tables by changing table storage parameters. - For more information see . + For more information see . @@ -6099,7 +6099,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; 400 million multixacts. This parameter can only be set at server start, but the setting can be reduced for individual tables by changing table storage parameters. - For more information see . + For more information see . @@ -6114,7 +6114,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Specifies the cost delay value that will be used in automatic VACUUM operations. If -1 is specified, the regular - value will be used. + value will be used. The default value is 20 milliseconds. This parameter can only be set in the postgresql.conf file or on the server command line; @@ -6135,7 +6135,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Specifies the cost limit value that will be used in automatic VACUUM operations. If -1 is specified (which is the default), the regular - value will be used. Note that + value will be used. Note that the value is distributed proportionally among the running autovacuum workers, if there is more than one, so that the sum of the limits for each worker does not exceed the value of this variable. @@ -6230,7 +6230,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The current effective value of the search path can be examined via the SQL function current_schemas - (see ). + (see ). This is not quite the same as examining the value of search_path, since current_schemas shows how the items @@ -6238,7 +6238,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - For more information on schema handling, see . + For more information on schema handling, see . @@ -6264,7 +6264,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; For more information on row security policies, - see . + see . @@ -6295,7 +6295,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; This variable is not used for temporary tables; for them, - is consulted instead. + is consulted instead. @@ -6306,7 +6306,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; For more information on tablespaces, - see . + see . @@ -6355,7 +6355,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - See also . + See also . 
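The current_schemas() distinction drawn in the search_path hunk above, in a two-line sketch (the schema name is hypothetical):

    SET search_path TO myschema, public;
    SELECT current_schemas(true);   -- implicitly-searched schemas such as pg_catalog appear here too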
@@ -6370,7 +6370,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; This parameter is normally on. When set to off, it disables validation of the function body string during . Disabling validation avoids side + linkend="sql-createfunction"/>. Disabling validation avoids side effects of the validation process and avoids false positives due to problems such as forward references. Set this parameter to off before loading functions on behalf of other @@ -6400,8 +6400,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - Consult and for more information. + Consult and for more information. @@ -6424,7 +6424,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - Consult for more information. + Consult for more information. @@ -6458,7 +6458,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; - Consult for more information. + Consult for more information. @@ -6477,7 +6477,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; superuser privilege and results in discarding any previously cached query plans. Possible values are origin (the default), replica and local. - See for + See for more information. @@ -6553,7 +6553,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; longer than the specified duration in milliseconds. This allows any locks held by that session to be released and the connection slot to be reused; it also allows tuples visible only to this transaction to be vacuumed. See - for more details about this. + for more details about this. The default value of 0 disables this feature. @@ -6577,11 +6577,11 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; tuples. The default is 150 million transactions. Although users can set this value anywhere from zero to two billions, VACUUM will silently limit the effective value to 95% of - , so that a + , so that a periodical manual VACUUM has a chance to run before an anti-wraparound autovacuum is launched for the table. For more information see - . + . @@ -6600,10 +6600,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The default is 50 million transactions. Although users can set this value anywhere from zero to one billion, VACUUM will silently limit the effective value to half - the value of , so + the value of , so that there is not an unreasonably short time between forced autovacuums. For more information see . + linkend="vacuum-for-wraparound"/>. @@ -6624,10 +6624,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; tuples. The default is 150 million multixacts. Although users can set this value anywhere from zero to two billions, VACUUM will silently limit the effective value to 95% of - , so that a + , so that a periodical manual VACUUM has a chance to run before an anti-wraparound is launched for the table. - For more information see . + For more information see . @@ -6646,10 +6646,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; is 5 million multixacts. Although users can set this value anywhere from zero to one billion, VACUUM will silently limit the effective value to half - the value of , + the value of , so that there is not an unreasonably short time between forced autovacuums. - For more information see . + For more information see . @@ -6665,7 +6665,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Sets the output format for values of type bytea. Valid values are hex (the default) and escape (the traditional PostgreSQL - format). See for more + format). 
See for more information. The bytea type always accepts both formats on input, regardless of this setting. @@ -6687,7 +6687,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; base64 and hex, which are both defined in the XML Schema standard. The default is base64. For further information about - XML-related functions, see . + XML-related functions, see . @@ -6717,7 +6717,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Sets whether DOCUMENT or CONTENT is implicit when converting between XML and character string values. See for a description of this. Valid + linkend="datatype-xml"/> for a description of this. Valid values are DOCUMENT and CONTENT. The default is CONTENT. @@ -6748,7 +6748,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; The default is four megabytes (4MB). This setting can be overridden for individual GIN indexes by changing index storage parameters. - See and + See and for more information. @@ -6780,7 +6780,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; and European are synonyms for DMY; the keywords US, NonEuro, and NonEuropean are synonyms for MDY. See - for more information. The + for more information. The built-in default is ISO, MDY, but initdb will initialize the configuration file with a setting that corresponds to the @@ -6802,7 +6802,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; output matching SQL standard interval literals. The value postgres (which is the default) will produce output matching PostgreSQL releases prior to 8.4 - when the + when the parameter was set to ISO. The value postgres_verbose will produce output matching PostgreSQL releases prior to 8.4 @@ -6815,7 +6815,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; The IntervalStyle parameter also affects the interpretation of ambiguous interval input. See - for more information. + for more information. @@ -6833,7 +6833,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; The built-in default is GMT, but that is typically overridden in postgresql.conf; initdb will install a setting there corresponding to its system environment. - See for more information. + See for more information. @@ -6852,7 +6852,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; which is a collection that works in most of the world; there are also 'Australia' and 'India', and other collections can be defined for a particular installation. - See for more information. + See for more information. @@ -6880,7 +6880,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; partially-significant digits; this is especially useful for dumping float data that needs to be restored exactly. Or it can be set negative to suppress unwanted digits. - See also . + See also . @@ -6897,7 +6897,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; Sets the client-side encoding (character set). The default is to use the database encoding. The character sets supported by the PostgreSQL - server are described in . + server are described in . @@ -6911,7 +6911,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; Sets the language in which messages are displayed. Acceptable - values are system-dependent; see for + values are system-dependent; see for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way. @@ -6945,7 +6945,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; Sets the locale to use for formatting monetary amounts, for example with the to_char family of functions. Acceptable values are system-dependent; see for more information. 
If this variable is + linkend="locale"/> for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way. @@ -6964,7 +6964,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; Sets the locale to use for formatting numbers, for example with the to_char family of functions. Acceptable values are system-dependent; see for more information. If this variable is + linkend="locale"/> for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way. @@ -6983,7 +6983,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; Sets the locale to use for formatting dates and times, for example with the to_char family of functions. Acceptable values are system-dependent; see for more information. If this variable is + linkend="locale"/> for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way. @@ -7002,7 +7002,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; Selects the text search configuration that is used by those variants of the text search functions that do not have an explicit argument specifying the configuration. - See for further information. + See for further information. The built-in default is pg_catalog.simple, but initdb will initialize the configuration file with a setting that corresponds to the @@ -7067,7 +7067,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; This variable specifies one or more shared libraries that are to be preloaded at connection start. It contains a comma-separated list of library names, where each name - is interpreted as for the command. + is interpreted as for the command. Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. The parameter value only takes effect at the start of the connection. @@ -7101,7 +7101,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; However, unless a module is specifically designed to be used in this way by non-superusers, this is usually not the right setting to use. Look - at instead. + at instead. @@ -7118,7 +7118,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; This variable specifies one or more shared libraries that are to be preloaded at connection start. It contains a comma-separated list of library names, where each name - is interpreted as for the command. + is interpreted as for the command. Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. The parameter value only takes effect at the start of the connection. @@ -7132,7 +7132,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; performance-measurement libraries to be loaded into specific sessions without an explicit LOAD command being given. For - example, could be enabled for all + example, could be enabled for all sessions under a given user name by setting this parameter with ALTER ROLE SET. Also, this parameter can be changed without restarting the server (but changes only take effect when a new @@ -7141,7 +7141,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; - Unlike , there is no large + Unlike , there is no large performance advantage to loading a library at session start rather than when it is first used. There is some advantage, however, when connection pooling is used. 
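As a quick illustration of the session_preload_libraries behavior described above: the auto_explain case mentioned in the hunk could be wired up as below. This is a sketch, not part of the patch; the role name reporting_user is hypothetical.

    -- new sessions started by this role preload auto_explain; no server restart needed
    ALTER ROLE reporting_user SET session_preload_libraries = 'auto_explain';

With a connection pooler in front, the library is loaded once per pooled backend rather than at first use, which is the modest advantage the paragraph above alludes to.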
@@ -7160,7 +7160,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; This variable specifies one or more shared libraries to be preloaded at server start. It contains a comma-separated list of library names, where each name - is interpreted as for the command. + is interpreted as for the command. Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. This parameter can only be set at server start. If a specified @@ -7183,7 +7183,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; parameter is recommended only for libraries that will be used in most sessions. Also, changing this parameter requires a server restart, so this is not the right setting to use for short-term debugging tasks, - say. Use for that + say. Use for that instead. @@ -7269,7 +7269,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Soft upper limit of the size of the set returned by GIN index scans. For more - information see . + information see . @@ -7317,7 +7317,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' - When is set, + When is set, this parameter also determines the length of time to wait before a log message is issued about the lock wait. If you are trying to investigate locking delays you might want to set a shorter than @@ -7336,8 +7336,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' The shared lock table tracks locks on max_locks_per_transaction * ( + ) objects (e.g., tables); + linkend="guc-max-connections"/> + ) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions @@ -7368,8 +7368,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' The shared predicate lock table tracks locks on max_pred_locks_per_transaction * ( + ) objects (e.g., tables); + linkend="guc-max-connections"/> + ) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions @@ -7396,7 +7396,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' predicate-locked before the lock is promoted to covering the whole relation. Values greater than or equal to zero mean an absolute limit, while negative values - mean divided by + mean divided by the absolute value of this setting. The default is -2, which keeps the behavior from previous versions of PostgreSQL. This parameter can only be set in the postgresql.conf @@ -7589,7 +7589,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' - See for more information. + See for more information. @@ -7607,7 +7607,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' output of EXPLAIN as well as the results of functions like pg_get_viewdef. See also the option of - and . + and . @@ -7630,7 +7630,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' parameter to determine how string literals will be processed. The presence of this parameter can also be taken as an indication that the escape string syntax (E'...') is supported. - Escape string syntax () + Escape string syntax () should be used if an application desires backslashes to be treated as escape characters. 
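Stepping back to the lock-table sizing hunks above: the formula max_locks_per_transaction * (max_connections + max_prepared_transactions) is easiest to see with numbers. A sketch assuming the stock defaults of this era (64, 100, and 0 respectively):

    SHOW max_locks_per_transaction;   -- 64, assuming defaults
    SHOW max_connections;             -- 100
    SHOW max_prepared_transactions;   -- 0
    -- shared lock table capacity: 64 * (100 + 0) = 6400 lockable objects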
@@ -7710,7 +7710,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' - Refer to for related information. + Refer to for related information. @@ -7788,9 +7788,9 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Reports the size of a disk block. It is determined by the value of BLCKSZ when building the server. The default value is 8192 bytes. The meaning of some configuration - variables (such as ) is + variables (such as ) is influenced by block_size. See for information. + linkend="runtime-config-resource"/> for information. @@ -7804,7 +7804,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Reports whether data checksums are enabled for this cluster. - See for more information. + See for more information. @@ -7853,7 +7853,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Reports the locale in which sorting of textual data is done. - See for more information. + See for more information. This value is determined when a database is created. @@ -7868,7 +7868,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Reports the locale that determines character classifications. - See for more information. + See for more information. This value is determined when a database is created. Ordinarily this will be the same as lc_collate, but for special applications it might be set differently. @@ -7953,7 +7953,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Reports the database encoding (character set). It is determined when the database is created. Ordinarily, clients need only be concerned with the value of . + linkend="guc-client-encoding"/>. @@ -8012,7 +8012,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Reports the number of blocks (pages) in a WAL segment file. The total size of a WAL segment file in bytes is equal to wal_segment_size multiplied by wal_block_size; - by default this is 16MB. See for + by default this is 16MB. See for more information. @@ -8140,8 +8140,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Generates a great amount of debugging output for the LISTEN and NOTIFY - commands. or - must be + commands. or + must be DEBUG1 or lower to send this output to the client or server logs, respectively. @@ -8158,7 +8158,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' Enables logging of recovery-related debugging output that otherwise would not be logged. This parameter allows the user to override the - normal setting of , but only for + normal setting of , but only for specific messages. This is intended for use in debugging Hot Standby. Valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, and @@ -8401,7 +8401,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) - Only has effect if are enabled. + Only has effect if are enabled. Detection of a checksum failure during a read normally causes @@ -8452,7 +8452,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) For convenience there are also single letter command-line option switches available for some parameters. They are described in - . Some of these + . Some of these options exist for historical reasons, and their presence as a single-letter option does not necessarily indicate an endorsement to use the option heavily. 
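The preset options touched above (block_size, data_checksums, wal_segment_size, and so on) are read-only, but they can be inspected like any other parameter. A minimal check, with outputs assuming a stock build:

    SHOW block_size;       -- 8192 unless BLCKSZ was changed at build time
    SHOW data_checksums;   -- on only if the cluster was initialized with checksums
    SHOW server_encoding;  -- fixed when the database was created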
diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml index 7dd203e9cd..0622227bee 100644 --- a/doc/src/sgml/contrib.sgml +++ b/doc/src/sgml/contrib.sgml @@ -16,14 +16,14 @@ This appendix covers extensions and other server plug-in modules found in - contrib. covers utility + contrib. covers utility programs. When building from the source distribution, these components are not built automatically, unless you build the "world" target - (see ). + (see ). You can build and install all of them by running: make @@ -55,7 +55,7 @@ To make use of one of these modules, after you have installed the code you need to register the new SQL objects in the database system. In PostgreSQL 9.1 and later, this is done by executing - a command. In a fresh database, + a command. In a fresh database, you can simply do @@ -89,16 +89,16 @@ CREATE EXTENSION module_name FROM unpackaged; This will update the pre-9.1 objects of the module into a proper extension object. Future updates to the module will be - managed by . + managed by . For more information about extension updates, see - . + . Note, however, that some of these modules are not extensions in this sense, but are loaded into the server in some other way, for instance by way of - . See the documentation of each + . See the documentation of each module for details. @@ -163,7 +163,7 @@ pages. This appendix and the previous one contain information regarding the modules that can be found in the contrib directory of the - PostgreSQL distribution. See for + PostgreSQL distribution. See for more information about the contrib section in general and server extensions and plug-ins found in contrib specifically. @@ -184,7 +184,7 @@ pages. This section covers PostgreSQL client applications in contrib. They can be run from anywhere, independent of where the database server resides. See - also for information about client + also for information about client applications that part of the core PostgreSQL distribution. @@ -200,7 +200,7 @@ pages. This section covers PostgreSQL server-related applications in contrib. They are typically run on the host where the database server resides. See also for information about server applications that + linkend="reference-server"/> for information about server applications that part of the core PostgreSQL distribution. diff --git a/doc/src/sgml/cube.sgml b/doc/src/sgml/cube.sgml index 46d8e4eb8f..b995dc7e2a 100644 --- a/doc/src/sgml/cube.sgml +++ b/doc/src/sgml/cube.sgml @@ -16,7 +16,7 @@ Syntax - shows the valid external + shows the valid external representations for the cube type. x, y, etc. denote floating-point numbers. @@ -106,7 +106,7 @@ Usage - shows the operators provided for + shows the operators provided for type cube. @@ -268,7 +268,7 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; - shows the available functions. + shows the available functions.
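Tying the contrib.sgml and cube.sgml hunks together: once a module's files are installed on the server, registering and exercising cube might look like the following. A sketch only; the literal follows the external representations the cube syntax table describes.

    CREATE EXTENSION cube;
    SELECT '(1,2,3)'::cube;   -- a 3-dimensional point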
diff --git a/doc/src/sgml/custom-scan.sgml b/doc/src/sgml/custom-scan.sgml index a46641674f..24631f5f40 100644 --- a/doc/src/sgml/custom-scan.sgml +++ b/doc/src/sgml/custom-scan.sgml @@ -123,7 +123,7 @@ Plan *(*PlanCustomPath) (PlannerInfo *root, Convert a custom path to a finished plan. The return value will generally be a CustomScan object, which the callback must allocate and - initialize. See for more details. + initialize. See for more details. diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index 3d46098263..9aa9b28f3e 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -16,11 +16,11 @@ PostgreSQL has a rich set of native data types available to users. Users can add new types to PostgreSQL using the command. + linkend="sql-createtype"/> command. - shows all the built-in general-purpose data + shows all the built-in general-purpose data types. Most of the alternative names listed in the Aliases column are the names used internally by PostgreSQL for historical reasons. In @@ -336,7 +336,7 @@ Numeric types consist of two-, four-, and eight-byte integers, four- and eight-byte floating-point numbers, and selectable-precision - decimals. lists the + decimals. lists the available types. @@ -424,9 +424,9 @@ The syntax of constants for the numeric types is described in - . The numeric types have a + . The numeric types have a full set of corresponding arithmetic operators and - functions. Refer to for more + functions. Refer to for more information. The following sections describe the types in detail. @@ -559,7 +559,7 @@ NUMERIC The maximum allowed precision when explicitly specified in the type declaration is 1000; NUMERIC without a specified precision is subject to the limits described in . + linkend="datatype-numeric-table"/>. @@ -728,7 +728,7 @@ FROM generate_series(-3.5, 3.5, 1) as x; - The setting controls the + The setting controls the number of extra significant digits included when a floating point value is converted to text for output. With the default value of 0, the output is the same on every platform @@ -841,7 +841,7 @@ FROM generate_series(-3.5, 3.5, 1) as x; This section describes a PostgreSQL-specific way to create an autoincrementing column. Another way is to use the SQL-standard - identity column feature, described at . + identity column feature, described at . @@ -888,7 +888,7 @@ ALTER SEQUENCE tablename_nextval() in + See nextval() in for details. @@ -929,8 +929,8 @@ ALTER SEQUENCE tablename_ The money type stores a currency amount with a fixed fractional precision; see . The fractional precision is - determined by the database's setting. + linkend="datatype-money-table"/>. The fractional precision is + determined by the database's setting. The range shown in the table assumes there are two fractional digits. Input is accepted in a variety of formats, including integer and floating-point literals, as well as typical @@ -1063,7 +1063,7 @@ SELECT '52093.89'::money::numeric::float8;
- shows the + shows the general-purpose character types available in PostgreSQL. @@ -1166,12 +1166,12 @@ SELECT '52093.89'::money::numeric::float8; - Refer to for information about - the syntax of string literals, and to + Refer to for information about + the syntax of string literals, and to for information about available operators and functions. The database character set determines the character set used to store textual values; for more information on character set support, - refer to . + refer to . @@ -1180,7 +1180,7 @@ SELECT '52093.89'::money::numeric::float8; CREATE TABLE test1 (a character(4)); INSERT INTO test1 VALUES ('ok'); -SELECT a, char_length(a) FROM test1; -- +SELECT a, char_length(a) FROM test1; -- a | char_length ------+------------- @@ -1206,7 +1206,7 @@ SELECT b, char_length(b) FROM test2; The char_length function is discussed in - . + . @@ -1215,7 +1215,7 @@ SELECT b, char_length(b) FROM test2; There are two other fixed-length character types in PostgreSQL, shown in . The name + linkend="datatype-character-special-table"/>. The name type exists only for the storage of identifiers in the internal system catalogs and is not intended for use by the general user. Its length is currently defined as 64 bytes (63 usable characters plus @@ -1269,7 +1269,7 @@ SELECT b, char_length(b) FROM test2; The bytea data type allows storage of binary strings; - see . + see . @@ -1313,7 +1313,7 @@ SELECT b, char_length(b) FROM test2; input and output: PostgreSQL's historical escape format, and hex format. Both of these are always accepted on input. The output format depends - on the configuration parameter ; + on the configuration parameter ; the default is hex. (Note that the hex format was introduced in PostgreSQL 9.0; earlier versions and some tools don't understand it.) @@ -1384,7 +1384,7 @@ SELECT E'\\xDEADBEEF'; literal using escape string syntax). Backslash itself (octet value 92) can alternatively be represented by double backslashes. - + shows the characters that must be escaped, and gives the alternative escape sequences where applicable. @@ -1443,14 +1443,14 @@ SELECT E'\\xDEADBEEF'; The requirement to escape non-printable octets varies depending on locale settings. In some instances you can get away with leaving them unescaped. Note that the result in each of the examples - in was exactly one octet in + in was exactly one octet in length, even though the output representation is sometimes more than one character. The reason multiple backslashes are required, as shown - in , is that an input + in , is that an input string written as a string literal must pass through two parse phases in the PostgreSQL server. The first backslash of each pair is interpreted as an escape @@ -1467,7 +1467,7 @@ SELECT E'\\xDEADBEEF'; to a single octet with a decimal value of 1. Note that the single-quote character is not treated specially by bytea, so it follows the normal rules for string literals. (See also - .) + .) @@ -1477,7 +1477,7 @@ SELECT E'\\xDEADBEEF'; Most printable octets are represented by their standard representation in the client character set. The octet with decimal value 92 (backslash) is doubled in the output. - Details are in . + Details are in .
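A small demonstration of the two bytea output formats discussed above; the same stored value renders differently depending on bytea_output:

    SET bytea_output = 'hex';
    SELECT '\xDEADBEEF'::bytea;   -- \xdeadbeef
    SET bytea_output = 'escape';
    SELECT '\xDEADBEEF'::bytea;   -- \336\255\276\357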
@@ -1571,12 +1571,12 @@ SELECT E'\\xDEADBEEF'; PostgreSQL supports the full set of SQL date and time types, shown in . The operations available + linkend="datatype-datetime-table"/>. The operations available on these data types are described in - . + . Dates are counted according to the Gregorian calendar, even in years before that calendar was introduced (see for more information). + linkend="datetime-units-history"/> for more information).
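The intro above defers the operations on these types to the functions chapter; two classic examples of the sort it has in mind, with results shown as comments:

    SELECT date '2001-09-28' + 7;   -- 2001-10-05, date plus days
    SELECT timestamp '2001-09-29 03:00' - timestamp '2001-09-27 12:00';
                                    -- interval '1 day 15:00:00'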
@@ -1716,7 +1716,7 @@ MINUTE TO SECOND traditional POSTGRES, and others. For some formats, ordering of day, month, and year in date input is ambiguous and there is support for specifying the expected - ordering of these fields. Set the parameter + ordering of these fields. Set the parameter to MDY to select month-day-year interpretation, DMY to select day-month-year interpretation, or YMD to select year-month-day interpretation. @@ -1726,7 +1726,7 @@ MINUTE TO SECOND PostgreSQL is more flexible in handling date/time input than the SQL standard requires. - See + See for the exact parsing rules of date/time input and for the recognized text fields including months, days of the week, and time zones. @@ -1735,7 +1735,7 @@ MINUTE TO SECOND Remember that any date or time literal input needs to be enclosed in single quotes, like text strings. Refer to - for more + for more information. SQL requires the following syntax @@ -1759,7 +1759,7 @@ MINUTE TO SECOND - shows some possible + shows some possible inputs for the date type. @@ -1872,8 +1872,8 @@ MINUTE TO SECOND Valid input for these types consists of a time of day followed by an optional time zone. (See - and .) If a time zone is + linkend="datatype-datetime-time-table"/> + and .) If a time zone is specified in the input for time without time zone, it is silently ignored. You can also specify a date but it will be ignored, except when you use a time zone name that involves a @@ -1993,7 +1993,7 @@ MINUTE TO SECOND
- Refer to for more information on how + Refer to for more information on how to specify time zones. @@ -2074,7 +2074,7 @@ January 8 04:05:06 1999 PST time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's - parameter, and is converted to UTC using the + parameter, and is converted to UTC using the offset for the timezone zone.
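To make the conversion rules above concrete, a hedged sketch using the IANA zone America/New_York (any installed zone name would do):

    SELECT timestamptz '2004-10-19 10:23:54+02';
    -- stored as UTC, displayed per the session's TimeZone setting
    SELECT timestamptz '2004-10-19 10:23:54+02' AT TIME ZONE 'America/New_York';
    -- 2004-10-19 04:23:54, a timestamp without time zone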
@@ -2084,7 +2084,7 @@ January 8 04:05:06 1999 PST current timezone zone, and displayed as local time in that zone. To see the time in another time zone, either change timezone or use the AT TIME ZONE construct - (see ). + (see ). @@ -2112,7 +2112,7 @@ January 8 04:05:06 1999 PST PostgreSQL supports several special date/time input values for convenience, as shown in . The values + linkend="datatype-datetime-special-table"/>. The values infinity and -infinity are specially represented inside the system and will be displayed unchanged; but the others are simply notational shorthands @@ -2186,7 +2186,7 @@ January 8 04:05:06 1999 PST CURRENT_TIMESTAMP, LOCALTIME, LOCALTIMESTAMP. The latter four accept an optional subsecond precision specification. (See .) Note that these are + linkend="functions-datetime-current"/>.) Note that these are SQL functions and are not recognized in data input strings. @@ -2218,7 +2218,7 @@ January 8 04:05:06 1999 PST SQL standard requires the use of the ISO 8601 format. The name of the SQL output format is a historical accident.) shows examples of each + linkend="datatype-datetime-output-table"/> shows examples of each output style. The output of the date and time types is generally only the date or time part in accordance with the given examples. However, the @@ -2275,9 +2275,9 @@ January 8 04:05:06 1999 PST In the SQL and POSTGRES styles, day appears before month if DMY field ordering has been specified, otherwise month appears before day. - (See + (See for how this setting also affects interpretation of input values.) - shows examples. + shows examples. @@ -2313,7 +2313,7 @@ January 8 04:05:06 1999 PST The date/time style can be selected by the user using the SET datestyle command, the parameter in the + linkend="guc-datestyle"/> parameter in the postgresql.conf configuration file, or the PGDATESTYLE environment variable on the server or client. @@ -2321,7 +2321,7 @@ January 8 04:05:06 1999 PST The formatting function to_char - (see ) is also available as + (see ) is also available as a more flexible way to format date/time output. @@ -2391,7 +2391,7 @@ January 8 04:05:06 1999 PST All timezone-aware dates and times are stored internally in UTC. They are converted to local time - in the zone specified by the configuration + in the zone specified by the configuration parameter before being displayed to the client. @@ -2404,7 +2404,7 @@ January 8 04:05:06 1999 PST A full time zone name, for example America/New_York. The recognized time zone names are listed in the pg_timezone_names view (see ). + linkend="view-pg-timezone-names"/>). PostgreSQL uses the widely-used IANA time zone data for this purpose, so the same time zone names are also recognized by much other software. @@ -2417,9 +2417,9 @@ January 8 04:05:06 1999 PST contrast to full time zone names which can imply a set of daylight savings transition-date rules as well. The recognized abbreviations are listed in the pg_timezone_abbrevs view (see ). You cannot set the - configuration parameters or - to a time + linkend="view-pg-timezone-abbrevs"/>). You cannot set the + configuration parameters or + to a time zone abbreviation, but you can use abbreviations in date/time input values and with the AT TIME ZONE operator. @@ -2499,13 +2499,13 @@ January 8 04:05:06 1999 PST they are obtained from configuration files stored under .../share/timezone/ and .../share/timezonesets/ of the installation directory - (see ). + (see ). 
- The configuration parameter can + The configuration parameter can be set in the file postgresql.conf, or in any of the - other standard ways described in . + other standard ways described in . There are also some special ways to set it: @@ -2556,7 +2556,7 @@ January 8 04:05:06 1999 PST of the different units are implicitly added with appropriate sign accounting. ago negates all the fields. This syntax is also used for interval output, if - is set to + is set to postgres_verbose. @@ -2582,7 +2582,7 @@ P quantityunit The string must start with a P, and may include a T that introduces the time-of-day units. The available unit abbreviations are given in . Units may be + linkend="datatype-interval-iso8601-units"/>. Units may be omitted, and may be specified in any order, but units smaller than a day must appear after T. In particular, the meaning of M depends on whether it is before or after @@ -2696,7 +2696,7 @@ P years-months- - shows some examples + shows some examples of valid interval input. @@ -2751,7 +2751,7 @@ P years-months- postgres_verbose, or iso_8601, using the command SET intervalstyle. The default is the postgres format. - shows examples of each + shows examples of each output style. @@ -2768,7 +2768,7 @@ P years-months- The output of the postgres style matches the output of PostgreSQL releases prior to 8.4 when the - parameter was set to ISO. + parameter was set to ISO. @@ -2846,7 +2846,7 @@ P years-months- PostgreSQL provides the standard SQL type boolean; - see . + see . The boolean type can have several states: true, false, and a third state, unknown, which is represented by the @@ -2902,7 +2902,7 @@ P years-months- - shows that + shows that boolean values are output using the letters t and f. @@ -2954,7 +2954,7 @@ SELECT * FROM test1 WHERE a; Enum types are created using the command, + linkend="sql-createtype"/> command, for example: @@ -3087,7 +3087,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays Geometric data types represent two-dimensional spatial - objects. shows the geometric + objects. shows the geometric types available in PostgreSQL. @@ -3158,7 +3158,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays A rich set of functions and operators is available to perform various geometric operations such as scaling, translation, rotation, and determining - intersections. They are explained in . + intersections. They are explained in . @@ -3410,11 +3410,11 @@ SELECT person.name, holidays.num_weeks FROM person, holidays PostgreSQL offers data types to store IPv4, IPv6, and MAC - addresses, as shown in . It + addresses, as shown in . It is better to use these types instead of plain text types to store network addresses, because these types offer input error checking and specialized - operators and functions (see ). + operators and functions (see ).
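Before the examples table that follows, a minimal illustration of the input error checking the paragraph above credits to the network address types; the addresses are hypothetical:

    SELECT '192.168.0.1/24'::inet;   -- accepted: host address with netmask
    SELECT '192.168.0.256'::inet;    -- rejected: invalid input syntax for type inet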
@@ -3526,7 +3526,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays - shows some examples. + shows some examples.
@@ -3809,10 +3809,10 @@ SELECT macaddr8_set7bit('08:00:2b:01:02:03'); Refer to for information about the syntax + linkend="sql-syntax-bit-strings"/> for information about the syntax of bit string constants. Bit-logical operators and string manipulation functions are available; see . + linkend="functions-bitstring"/>. @@ -3840,7 +3840,7 @@ SELECT * FROM test; A bit string value requires 1 byte for each group of 8 bits, plus 5 or 8 bytes overhead depending on the length of the string (but long values may be compressed or moved out-of-line, as explained - in for character strings). + in for character strings). @@ -3865,8 +3865,8 @@ SELECT * FROM test; The tsvector type represents a document in a form optimized for text search; the tsquery type similarly represents a text query. - provides a detailed explanation of this - facility, and summarizes the + provides a detailed explanation of this + facility, and summarizes the related functions and operators. @@ -3881,7 +3881,7 @@ SELECT * FROM test; A tsvector value is a sorted list of distinct lexemes, which are words that have been normalized to merge different variants of the same word - (see for details). Sorting and + (see for details). Sorting and duplicate-elimination are done automatically during input, as shown in this example: @@ -3975,7 +3975,7 @@ SELECT to_tsvector('english', 'The Fat Rats'); 'fat':2 'rat':3 - Again, see for more detail. + Again, see for more detail. @@ -4140,9 +4140,9 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11 functions for UUIDs, but the core database does not include any function for generating UUIDs, because no single algorithm is well suited for every application. The module + linkend="uuid-ossp"/> module provides functions that implement several standard algorithms. - The module also provides a generation + The module also provides a generation function for random UUIDs. Alternatively, UUIDs could be generated by client applications or other libraries invoked through a server-side function. @@ -4161,7 +4161,7 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11 advantage over storing XML data in a text field is that it checks the input values for well-formedness, and there are support functions to perform type-safe operations on it; see . Use of this data type requires the + linkend="functions-xml"/>. Use of this data type requires the installation to have been built with configure --with-libxml. @@ -4267,7 +4267,7 @@ SET xmloption TO { DOCUMENT | CONTENT }; results to the client (which is the normal mode), PostgreSQL converts all character data passed between the client and the server and vice versa to the character encoding of the respective - end; see . This includes string + end; see . This includes string representations of XML values, such as in the above examples. This would ordinarily mean that encoding declarations contained in XML data can become invalid as the character data is converted @@ -4408,7 +4408,7 @@ INSERT INTO mytable VALUES(-1); -- fails - For additional information see . + For additional information see . @@ -4473,14 +4473,14 @@ INSERT INTO mytable VALUES(-1); -- fails PostgreSQL as primary keys for various system tables. OIDs are not added to user-created tables, unless WITH OIDS is specified when the table is - created, or the + created, or the configuration variable is enabled. Type oid represents an object identifier. There are also several alias types for oid: regproc, regprocedure, regoper, regoperator, regclass, regtype, regrole, regnamespace, regconfig, and regdictionary. 
- shows an overview. + shows an overview. @@ -4677,7 +4677,7 @@ SELECT * FROM pg_attribute (The system columns are further explained in .) + linkend="ddl-system-columns"/>.) @@ -4795,7 +4795,7 @@ SELECT * FROM pg_attribute useful in situations where a function's behavior does not correspond to simply taking or returning a value of a specific SQL data type. lists the existing + linkend="datatype-pseudotypes-table"/> lists the existing pseudo-types. @@ -4818,33 +4818,33 @@ SELECT * FROM pg_attribute anyelement Indicates that a function accepts any data type - (see ). + (see ). anyarray Indicates that a function accepts any array data type - (see ). + (see ). anynonarray Indicates that a function accepts any non-array data type - (see ). + (see ). anyenum Indicates that a function accepts any enum data type - (see and - ). + (see and + ). anyrange Indicates that a function accepts any range data type - (see and - ). + (see and + ). diff --git a/doc/src/sgml/datetime.sgml b/doc/src/sgml/datetime.sgml index a533bbf8d2..d269aa4cc5 100644 --- a/doc/src/sgml/datetime.sgml +++ b/doc/src/sgml/datetime.sgml @@ -180,7 +180,7 @@ Date/Time Key Words - shows the tokens that are + shows the tokens that are recognized as names of months. @@ -247,7 +247,7 @@
- shows the tokens that are + shows the tokens that are recognized as names of days of the week. @@ -294,7 +294,7 @@ - shows the tokens that serve + shows the tokens that serve various modifier purposes. @@ -349,7 +349,7 @@ Since timezone abbreviations are not well standardized, PostgreSQL provides a means to customize the set of abbreviations accepted by the server. The - run-time parameter + run-time parameter determines the active set of abbreviations. While this parameter can be altered by any database user, the possible values for it are under the control of the database administrator — they diff --git a/doc/src/sgml/dblink.sgml b/doc/src/sgml/dblink.sgml index 12928e8bd3..4c07f886aa 100644 --- a/doc/src/sgml/dblink.sgml +++ b/doc/src/sgml/dblink.sgml @@ -14,7 +14,7 @@ - See also , which provides roughly the same + See also , which provides roughly the same functionality using a more modern and standards-compliant infrastructure. @@ -58,8 +58,8 @@ dblink_connect(text connname, text connstr) returns text server. It is recommended to use the foreign-data wrapper dblink_fdw when defining the foreign server. See the example below, as well as - and - . + and + . @@ -84,7 +84,7 @@ dblink_connect(text connname, text connstr) returns text libpq-style connection info string, for example hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres password=mypasswd. - For details see . + For details see . Alternatively, the name of a foreign server. @@ -1340,7 +1340,7 @@ dblink_get_notify(text connname) returns setof (notify_name text, be_pid int, ex the unnamed connection, or on a named connection if specified. To receive notifications via dblink, LISTEN must first be issued, using dblink_exec. - For details see and . + For details see and . diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index daba66c187..e6f50ec819 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -39,7 +39,7 @@ SQL does not make any guarantees about the order of the rows in a table. When a table is read, the rows will appear in an unspecified order, unless sorting is explicitly requested. This is covered in . Furthermore, SQL does not assign unique + linkend="queries"/>. Furthermore, SQL does not assign unique identifiers to rows, so it is possible to have several completely identical rows in a table. This is a consequence of the mathematical model that underlies SQL but is usually not desirable. @@ -64,7 +64,7 @@ built-in data types that fit many applications. Users can also define their own data types. Most built-in data types have obvious names and semantics, so we defer a detailed explanation to . Some of the frequently used data types are + linkend="datatype"/>. Some of the frequently used data types are integer for whole numbers, numeric for possibly fractional numbers, text for character strings, date for dates, time for @@ -79,7 +79,7 @@ To create a table, you use the aptly named command. + linkend="sql-createtable"/> command. In this command you specify at least a name for the new table, the names of the columns and the data type of each column. For example: @@ -95,7 +95,7 @@ CREATE TABLE my_first_table ( text; the second column has the name second_column and the type integer. The table and column names follow the identifier syntax explained - in . The type names are + in . The type names are usually also identifiers, but there are some exceptions. Note that the column list is comma-separated and surrounded by parentheses. 
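Looping back to the dblink hunks above, a minimal end-to-end sketch built from the pieces the documentation names (the connection string is taken verbatim from the text; the channel name is hypothetical):

    SELECT dblink_connect('myconn',
        'hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres password=mypasswd');
    SELECT dblink_exec('myconn', 'LISTEN my_channel');
    SELECT * FROM dblink_get_notify('myconn');   -- (notify_name, be_pid, extra)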
@@ -139,7 +139,7 @@ CREATE TABLE products ( If you no longer need a table, you can remove it using the command. + linkend="sql-droptable"/> command. For example: DROP TABLE my_first_table; @@ -155,7 +155,7 @@ DROP TABLE products; If you need to modify a table that already exists, see later in this chapter. + linkend="ddl-alter"/> later in this chapter. @@ -163,7 +163,7 @@ DROP TABLE products; tables. The remainder of this chapter is concerned with adding features to the table definition to ensure data integrity, security, or convenience. If you are eager to fill your tables with - data now you can skip ahead to and read the + data now you can skip ahead to and read the rest of this chapter later. @@ -181,7 +181,7 @@ DROP TABLE products; columns will be filled with their respective default values. A data manipulation command can also request explicitly that a column be set to its default value, without having to know what that value is. - (Details about data manipulation commands are in .) + (Details about data manipulation commands are in .) @@ -220,7 +220,7 @@ CREATE TABLE products (
where the nextval() function supplies successive values from a sequence object (see ). This arrangement is sufficiently common + linkend="functions-sequence"/>). This arrangement is sufficiently common that there's a special shorthand for it: CREATE TABLE products ( @@ -229,7 +229,7 @@ CREATE TABLE products ( ); The SERIAL shorthand is discussed further in . + linkend="datatype-serial"/>. @@ -876,9 +876,9 @@ CREATE TABLE order_items ( More information about updating and deleting data is in . Also see the description of foreign key constraint + linkend="dml"/>. Also see the description of foreign key constraint syntax in the reference documentation for - . + . @@ -948,10 +948,10 @@ CREATE TABLE circles ( The object identifier (object ID) of a row. This column is only present if the table was created using WITH - OIDS, or if the + OIDS
, or if the configuration variable was set at the time. This column is of type oid (same name as the column); see for more information about the type. + linkend="datatype-oid"/> for more information about the type. @@ -966,7 +966,7 @@ CREATE TABLE circles ( The OID of the table containing this row. This column is particularly handy for queries that select from inheritance - hierarchies (see ), since without it, + hierarchies (see ), since without it, it's difficult to tell which individual table a row came from. The tableoid can be joined against the oid column of @@ -1100,7 +1100,7 @@ CREATE TABLE circles ( Transaction identifiers are also 32-bit quantities. In a long-lived database it is possible for transaction IDs to wrap around. This is not a fatal problem given appropriate maintenance - procedures; see for details. It is + procedures; see for details. It is unwise, however, to depend on the uniqueness of transaction IDs over the long term (more than one billion transactions). @@ -1167,7 +1167,7 @@ CREATE TABLE circles ( All these actions are performed using the - + command, whose reference page contains details beyond those given here. @@ -1238,7 +1238,7 @@ ALTER TABLE products DROP COLUMN description; ALTER TABLE products DROP COLUMN description CASCADE; - See for a description of the general + See for a description of the general mechanism behind this. @@ -1446,7 +1446,7 @@ ALTER TABLE products RENAME TO items; object vary depending on the object's type (table, function, etc). For complete information on the different types of privileges supported by PostgreSQL, refer to the - reference + reference page. The following sections and chapters will also show you how those privileges are used. @@ -1459,7 +1459,7 @@ ALTER TABLE products RENAME TO items; An object can be assigned to a new owner with an ALTER command of the appropriate kind for the object, e.g. . Superusers can always do + linkend="sql-altertable"/>. Superusers can always do this; ordinary roles can only do it if they are both the current owner of the object (or a member of the owning role) and a member of the new owning role. @@ -1482,7 +1482,7 @@ GRANT UPDATE ON accounts TO joe; be used to grant a privilege to every role on the system. Also, group roles can be set up to help manage privileges when there are many users of a database — for details see - . + . @@ -1506,8 +1506,8 @@ REVOKE ALL ON accounts FROM PUBLIC; the right to grant it in turn to others. If the grant option is subsequently revoked then all who received the privilege from that recipient (directly or through a chain of grants) will lose the - privilege. For details see the and - reference pages. + privilege. For details see the and + reference pages. @@ -1524,7 +1524,7 @@ REVOKE ALL ON accounts FROM PUBLIC; In addition to the SQL-standard privilege - system available through , + system available through , tables can have row security policies that restrict, on a per-user basis, which rows can be returned by normal queries or inserted, updated, or deleted by data modification commands. @@ -1584,11 +1584,11 @@ REVOKE ALL ON accounts FROM PUBLIC; - Policies are created using the - command, altered using the command, - and dropped using the command. To + Policies are created using the + command, altered using the command, + and dropped using the command. To enable and disable row security for a given table, use the - command. + command. @@ -1829,7 +1829,7 @@ UPDATE 0 not being applied. 
For example, when taking a backup, it could be disastrous if row security silently caused some rows to be omitted from the backup. In such a situation, you can set the - configuration parameter + configuration parameter to off. This does not in itself bypass row security; what it does is throw an error if any query's results would get filtered by a policy. The reason for the error can then be investigated and @@ -1951,8 +1951,8 @@ SELECT * FROM information WHERE group_id = 2 FOR UPDATE; - For additional details see - and . + For additional details see + and . @@ -2034,7 +2034,7 @@ SELECT * FROM information WHERE group_id = 2 FOR UPDATE; - To create a schema, use the + To create a schema, use the command. Give the schema a name of your choice. For example: @@ -2099,7 +2099,7 @@ DROP SCHEMA myschema; DROP SCHEMA myschema CASCADE; - See for a description of the general + See for a description of the general mechanism behind this. @@ -2112,7 +2112,7 @@ CREATE SCHEMA schema_name AUTHORIZATION You can even omit the schema name, in which case the schema name will be the same as the user name. See for how this can be useful. + linkend="ddl-schemas-patterns"/> for how this can be useful. @@ -2242,7 +2242,7 @@ SET search_path TO myschema; - See also for other ways to manipulate + See also for other ways to manipulate the schema search path. @@ -2297,7 +2297,7 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; public means every user. In the first sense it is an identifier, in the second sense it is a key word, hence the different capitalization; recall the - guidelines from .) + guidelines from .) @@ -2483,7 +2483,7 @@ SELECT name, altitude Given the sample data from the PostgreSQL - tutorial (see ), this returns: + tutorial (see ), this returns: name | altitude @@ -2602,7 +2602,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); capitals table, but this does not happen: INSERT always inserts into exactly the table specified. In some cases it is possible to redirect the insertion - using a rule (see ). However that does not + using a rule (see ). However that does not help for the above case because the cities table does not contain the column state, and so the command will be rejected before the rule can be applied. @@ -2633,11 +2633,11 @@ VALUES ('Albany', NULL, NULL, 'NY'); Table inheritance is typically established when the child table is created, using the INHERITS clause of the - + statement. Alternatively, a table which is already defined in a compatible way can have a new parent relationship added, using the INHERIT - variant of . + variant of . To do this the new child table must already include columns with the same names and types as the columns of the parent. It must also include check constraints with the same names and check expressions as those of the @@ -2645,7 +2645,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); NO INHERIT variant of ALTER TABLE. Dynamically adding and removing inheritance links like this can be useful when the inheritance relationship is being used for table - partitioning (see ). + partitioning (see ). @@ -2665,11 +2665,11 @@ VALUES ('Albany', NULL, NULL, 'NY'); if they are inherited from any parent tables. If you wish to remove a table and all of its descendants, one easy way is to drop the parent table with the - CASCADE option (see ). + CASCADE option (see ). - will + will propagate any changes in column data definitions and check constraints down the inheritance hierarchy. 
Again, dropping columns that are depended on by other tables is only possible when using @@ -2687,7 +2687,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); that the data is (also) in the parent table. But the capitals table could not be updated directly without an additional grant. In a similar way, the parent table's row - security policies (see ) are applied to + security policies (see ) are applied to rows coming from child tables during an inherited query. A child table's policies, if any, are applied only when it is the table explicitly named in the query; and in that case, any policies attached to its parent(s) are @@ -2695,7 +2695,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); - Foreign tables (see ) can also + Foreign tables (see ) can also be part of inheritance hierarchies, either as parent or child tables, just as regular tables can be. If a foreign table is part of an inheritance hierarchy then any operations not supported by @@ -2719,7 +2719,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); typically only work on individual, physical tables and do not support recursing over inheritance hierarchies. The respective behavior of each individual command is documented in its reference - page (). + page (). @@ -2923,7 +2923,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); called sub-partitioning. Partitions may have their own indexes, constraints and default values, distinct from those of other partitions. Indexes must be created separately for each partition. See - for more details on creating partitioned + for more details on creating partitioned tables and partitions. @@ -2932,7 +2932,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); vice versa. However, it is possible to add a regular or partitioned table containing data as a partition of a partitioned table, or remove a partition from a partitioned table turning it into a standalone table; - see to learn more about the + see to learn more about the ATTACH PARTITION and DETACH PARTITION sub-commands. @@ -2948,7 +2948,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); inheritance with regular tables. Since a partition hierarchy consisting of the partitioned table and its partitions is still an inheritance hierarchy, all the normal rules of inheritance apply as described in - with some exceptions, most notably: + with some exceptions, most notably: @@ -2999,7 +2999,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); Partitions can also be foreign tables - (see ), + (see ), although these have some limitations that normal tables do not. For example, data inserted into the partitioned table is not routed to foreign table partitions. @@ -3158,7 +3158,7 @@ CREATE INDEX ON measurement_y2008m01 (logdate); - Ensure that the + Ensure that the configuration parameter is not disabled in postgresql.conf. If it is, queries will not be optimized as desired. @@ -3595,7 +3595,7 @@ DO INSTEAD - Ensure that the + Ensure that the configuration parameter is not disabled in postgresql.conf. If it is, queries will not be optimized as desired. 
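Given the measurement partitioning examples above, constraint exclusion is easy to verify: the EXPLAIN statement below is the one the documentation itself uses, and with exclusion working the plan should omit partitions whose constraints rule them out.

    SET constraint_exclusion = partition;   -- the default, shown for clarity
    EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';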
@@ -3806,7 +3806,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; The default (and recommended) setting of - is actually neither + is actually neither on nor off, but an intermediate setting called partition, which causes the technique to be applied only to queries that are likely to be working on partitioned @@ -3889,10 +3889,10 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; library that can communicate with an external data source, hiding the details of connecting to the data source and obtaining data from it. There are some foreign data wrappers available as contrib - modules; see . Other kinds of foreign data + modules; see . Other kinds of foreign data wrappers might be found as third party products. If none of the existing foreign data wrappers suit your needs, you can write your own; see . + linkend="fdwhandler"/>. @@ -3918,11 +3918,11 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; For additional information, see - , - , - , - , and - . + , + , + , + , and + . @@ -3966,7 +3966,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; Detailed information on - these topics appears in . + these topics appears in . @@ -3996,7 +3996,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; PostgreSQL makes sure that you cannot drop objects that other objects still depend on. For example, attempting to drop the products table we considered in , with the orders table depending on + linkend="ddl-constraints-fk"/>, with the orders table depending on it, would result in an error message like this: DROP TABLE products; @@ -4066,7 +4066,7 @@ CREATE FUNCTION get_color_note (rainbow) RETURNS text AS LANGUAGE SQL; - (See for an explanation of SQL-language + (See for an explanation of SQL-language functions.) PostgreSQL will be aware that the get_color_note function depends on the rainbow type: dropping the type would force dropping the function, because its diff --git a/doc/src/sgml/dfunc.sgml b/doc/src/sgml/dfunc.sgml index 7ef996b51f..dfefa9e686 100644 --- a/doc/src/sgml/dfunc.sgml +++ b/doc/src/sgml/dfunc.sgml @@ -226,7 +226,7 @@ gcc -G -o foo.so foo.o - Refer back to about where the + Refer back to about where the server expects to find the shared library files. diff --git a/doc/src/sgml/dict-int.sgml b/doc/src/sgml/dict-int.sgml index 04cf14a73d..c15cbd0e4d 100644 --- a/doc/src/sgml/dict-int.sgml +++ b/doc/src/sgml/dict-int.sgml @@ -71,7 +71,7 @@ mydb# select ts_lexize('intdict', '12345678'); but real-world usage will involve including it in a text search - configuration as described in . + configuration as described in . That might look like this: diff --git a/doc/src/sgml/dict-xsyn.sgml b/doc/src/sgml/dict-xsyn.sgml index bf4965c36f..256aff7c58 100644 --- a/doc/src/sgml/dict-xsyn.sgml +++ b/doc/src/sgml/dict-xsyn.sgml @@ -135,7 +135,7 @@ mydb=# SELECT ts_lexize('xsyn', 'syn1'); Real-world usage will involve including it in a text search - configuration as described in . + configuration as described in . That might look like this: diff --git a/doc/src/sgml/diskusage.sgml b/doc/src/sgml/diskusage.sgml index ba23084354..3708e5f3d8 100644 --- a/doc/src/sgml/diskusage.sgml +++ b/doc/src/sgml/diskusage.sgml @@ -20,18 +20,18 @@ stored. If the table has any columns with potentially-wide values, there also might be a TOAST file associated with the table, which is used to store values too wide to fit comfortably in the main - table (see ). 
There will be one valid index + table (see ). There will be one valid index on the TOAST table, if present. There also might be indexes associated with the base table. Each table and index is stored in a separate disk file — possibly more than one file, if the file would exceed one gigabyte. Naming conventions for these files are described - in . + in . You can monitor disk space in three ways: - using the SQL functions listed in , - using the module, or + using the SQL functions listed in , + using the module, or using manual inspection of the system catalogs. The SQL functions are the easiest to use and are generally recommended. The remainder of this section shows how to do it by inspection of the @@ -124,7 +124,7 @@ ORDER BY relpages DESC; If you cannot free up additional space on the disk by deleting other things, you can move some of the database files to other file systems by making use of tablespaces. See for more information about that. + linkend="manage-ag-tablespaces"/> for more information about that. diff --git a/doc/src/sgml/dml.sgml b/doc/src/sgml/dml.sgml index bc016d3cae..1e05c84fd1 100644 --- a/doc/src/sgml/dml.sgml +++ b/doc/src/sgml/dml.sgml @@ -33,10 +33,10 @@ - To create a new row, use the + To create a new row, use the command. The command requires the table name and column values. For - example, consider the products table from : + example, consider the products table from : CREATE TABLE products ( product_no integer, @@ -107,16 +107,16 @@ INSERT INTO products (product_no, name, price) WHERE release_date = 'today'; This provides the full power of the SQL query mechanism () for computing the rows to be inserted. + linkend="queries"/>) for computing the rows to be inserted. When inserting a lot of data at the same time, considering using - the command. - It is not as flexible as the + the command. + It is not as flexible as the command, but is more efficient. Refer - to for more information on improving + to for more information on improving bulk loading performance. @@ -141,7 +141,7 @@ INSERT INTO products (product_no, name, price) - To update existing rows, use the + To update existing rows, use the command. This requires three pieces of information: @@ -160,7 +160,7 @@ INSERT INTO products (product_no, name, price) - Recall from that SQL does not, in general, + Recall from that SQL does not, in general, provide a unique identifier for rows. Therefore it is not always possible to directly specify which row to update. Instead, you specify which conditions a row must meet in order to @@ -203,7 +203,7 @@ UPDATE products SET price = price * 1.10; this does not create any ambiguity. Of course, the WHERE condition does not have to be an equality test. Many other operators are - available (see ). But the expression + available (see ). But the expression needs to evaluate to a Boolean result. @@ -243,7 +243,7 @@ UPDATE mytable SET a = 5, b = 3, c = 1 WHERE a > 0; - You use the + You use the command to remove rows; the syntax is very similar to the UPDATE command. For instance, to remove all rows from the products table that have a price of 10, use: @@ -296,7 +296,7 @@ DELETE FROM products; The allowed contents of a RETURNING clause are the same as a SELECT command's output list - (see ). It can contain column + (see ). It can contain column names of the command's target table, or value expressions using those columns. A common shorthand is RETURNING *, which selects all columns of the target table in order. 
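The RETURNING discussion above reads most naturally with the products table used throughout the chapter; this mirrors the price-increase example and returns the updated rows in one round trip:

    UPDATE products SET price = price * 1.10
      WHERE price <= 99.99
      RETURNING name, price AS new_price;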
@@ -340,7 +340,7 @@ DELETE FROM products - If there are triggers () on the target table, + If there are triggers () on the target table, the data available to RETURNING is the row as modified by the triggers. Thus, inspecting columns computed by triggers is another common use-case for RETURNING. diff --git a/doc/src/sgml/docguide.sgml b/doc/src/sgml/docguide.sgml index 3a5b88ca1c..090ca95835 100644 --- a/doc/src/sgml/docguide.sgml +++ b/doc/src/sgml/docguide.sgml @@ -47,23 +47,11 @@ The documentation sources are written in DocBook, which is a markup language - superficially similar to HTML. Both of these - languages are applications of the Standard Generalized - Markup Language, SGML, which is - essentially a language for describing other languages. In what - follows, the terms DocBook and SGML are both + defined in XML. In what + follows, the terms DocBook and XML are both used, but technically they are not interchangeable. - - - The PostgreSQL documentation is currently being transitioned from DocBook - SGML and DSSSL style sheets to DocBook XML and XSLT style sheets. Be - careful to look at the instructions relating to the PostgreSQL version you - are dealing with, as the procedures and required tools will change. - - - DocBook allows an author to specify the structure and content of a technical document without worrying @@ -97,19 +85,8 @@ This is the definition of DocBook itself. We currently use version 4.2; you cannot use later or earlier versions. You need - the SGML and the XML variant of - the DocBook DTD of the same version. These will usually be in separate - packages. - - - - - - ISO 8879 character entities - - - These are required by DocBook SGML but are distributed separately - because they are maintained by ISO. + the XML variant of the DocBook DTD, not + the SGML variant. @@ -130,17 +107,6 @@ - - OpenSP - - - This is the base package of SGML processing. Note - that we no longer need OpenJade, the DSSSL - processor, only the OpenSP package for converting SGML to XML. - - - - Libxml2 for xmllint @@ -201,7 +167,7 @@ To install the required packages, use: -yum install docbook-dtds docbook-style-xsl fop libxslt opensp +yum install docbook-dtds docbook-style-xsl fop libxslt @@ -209,41 +175,10 @@ yum install docbook-dtds docbook-style-xsl fop libxslt opensp Installation on FreeBSD - - The FreeBSD Documentation Project is itself a heavy user of - DocBook, so it comes as no surprise that there is a full set of - ports of the documentation tools available on - FreeBSD. The following ports need to be installed to build the - documentation on FreeBSD. - - - textproc/docbook-sgml - - - textproc/docbook-xml - - - textproc/docbook-xsl - - - textproc/dsssl-docbook-modular - - - textproc/libxslt - - - textproc/fop - - - textproc/opensp - - - - To install the required packages with pkg, use: -pkg install docbook-sgml docbook-xml docbook-xsl fop libxslt opensp +pkg install docbook-xml docbook-xsl fop libxslt @@ -268,7 +203,7 @@ pkg install docbook-sgml docbook-xml docbook-xsl fop libxslt opensp available for Debian GNU/Linux. 
To install, simply use: -apt-get install docbook docbook-xml docbook-xsl fop libxml2-utils opensp xsltproc +apt-get install docbook-xml docbook-xsl fop libxml2-utils xsltproc @@ -277,117 +212,21 @@ apt-get install docbook docbook-xml docbook-xsl fop libxml2-utils opensp xsltpro macOS - If you use MacPorts, the following will get you set up: - -sudo port install docbook-sgml-4.2 docbook-xml-4.2 docbook-xsl fop libxslt opensp - + On macOS, you can build the HTML and man documentation without installing + anything extra. If you want to build PDFs or want to install a local copy + of DocBook, you can get those from your preferred package manager. - - - - Manual Installation from Source - The manual installation process of the DocBook tools is somewhat - complex, so if you have pre-built packages available, use them. - We describe here only a standard setup, with reasonably standard - installation paths, and no fancy features. For - details, you should study the documentation of the respective - package, and read SGML introductory material. - - - - Installing OpenSP - - - The installation of OpenSP offers a GNU-style - ./configure; make; make install build process. - Details can be found in the OpenSP source distribution. In a nutshell: - -./configure --enable-default-catalog=/usr/local/etc/sgml/catalog -make -make install - - Be sure to remember where you put the default catalog; you - will need it below. You can also leave it off, but then you will have to - set the environment variable SGML_CATALOG_FILES to point - to the file whenever you use any programs from OpenSP later on. (This - method is also an option if OpenSP is already installed and you want to - install the rest of the toolchain locally.) - - - - - Installing the <productname>DocBook</productname> <acronym>DTD</acronym> Kit - - - - - Obtain the - DocBook V4.2 distribution. - - - - - - Create the directory - /usr/local/share/sgml/docbook-4.2 and change - to it. (The exact location is irrelevant, but this one is - reasonable within the layout we are following here.) - -$ mkdir /usr/local/share/sgml/docbook-4.2 -$ cd /usr/local/share/sgml/docbook-4.2 - - - - - - - Unpack the archive: - -$ unzip -a ...../docbook-4.2.zip - - (The archive will unpack its files into the current directory.) - - - - - - Edit the file - /usr/local/share/sgml/catalog (or whatever - you told jade during installation) and put a line like this - into it: + If you use MacPorts, the following will get you set up: -CATALOG "docbook-4.2/docbook.cat" +sudo port install docbook-xml-4.2 docbook-xsl fop - - - - - - Download the - ISO 8879 character entities archive, unpack it, and put the - files in the same directory you put the DocBook files in: - -$ cd /usr/local/share/sgml/docbook-4.2 -$ unzip ...../ISOEnts.zip - - - - - - - Run the following command in the directory with the DocBook and ISO files: + If you use Homebrew, use this: -perl -pi -e 's/iso-(.*).gml/ISO\1/g' docbook.cat +brew install docbook docbook-xsl fop - (This fixes a mixup between the names used in the DocBook - catalog file and the actual names of the ISO character entity - files.) - - - - + @@ -400,26 +239,14 @@ perl -pi -e 's/iso-(.*).gml/ISO\1/g' docbook.cat Check the output near the end of the run, it should look something like this: - -checking for onsgmls... onsgmls -checking for DocBook V4.2... yes -checking for dbtoepub... dbtoepub checking for xmllint... xmllint +checking for DocBook XML V4.2... yes +checking for dbtoepub... dbtoepub checking for xsltproc... xsltproc -checking for osx... 
osx checking for fop... fop - - If neither onsgmls nor - nsgmls were found then some of the following tests - will be skipped. nsgmls is part of the OpenSP - package. You can pass the environment variable - NSGMLS to configure to point - to the programs if they are not found automatically. If - DocBook V4.2 was not found then you did not install - the DocBook DTD kit in a place where OpenSP can find it, or you have - not set up the catalog files correctly. See the installation hints - above. + If xmllint was not found then some of the following + tests will be skipped. @@ -464,9 +291,7 @@ checking for fop... fop We use the DocBook XSL stylesheets to convert DocBook refentry pages to *roff output suitable for man - pages. The man pages are also distributed as a tar archive, - similar to the HTML version. To create the man - pages, use the commands: + pages. To create the man pages, use the command: doc/src/sgml$ make man @@ -536,7 +361,7 @@ ADDITIONAL_FLAGS='-Xmx1000m' The installation instructions are also distributed as plain text, in case they are needed in a situation where better reading tools are not available. The INSTALL file - corresponds to , with some minor + corresponds to , with some minor changes to account for the different context. To recreate the file, change to the directory doc/src/sgml and enter make INSTALL. diff --git a/doc/src/sgml/earthdistance.sgml b/doc/src/sgml/earthdistance.sgml index 1bdcf64629..1f3ea6aa6e 100644 --- a/doc/src/sgml/earthdistance.sgml +++ b/doc/src/sgml/earthdistance.sgml @@ -56,7 +56,7 @@ The provided functions are shown - in . + in . @@ -150,7 +150,7 @@ A single operator is provided, shown - in . + in .
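As a usage sketch (the coordinates are made up): the cube-based functions work in meters, while the point-based operator yields statute miles and takes the longitude as the first component of each point.

-- great-circle distance in meters, cube-based functions
SELECT earth_distance(ll_to_earth(37.8, -122.3),
                      ll_to_earth(34.1, -118.2));

-- distance in statute miles, point-based operator (longitude first)
SELECT point(-122.3, 37.8) <@> point(-118.2, 34.1);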
diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index bc3d080774..d1872c1a5c 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -31,7 +31,7 @@ specially marked sections. To build the program, the source code (*.pgc) is first passed through the embedded SQL preprocessor, which converts it to an ordinary C program (*.c), and afterwards it can be processed by a C - compiler. (For details about the compiling and linking see ). + compiler. (For details about the compiling and linking see ). Converted ECPG applications call functions in the libpq library through the embedded SQL library (ecpglib), and communicate with the PostgreSQL server using the normal frontend-backend protocol. @@ -397,9 +397,9 @@ EXEC SQL COMMIT; row can also be executed using EXEC SQL directly. To handle result sets with multiple rows, an application has to use a cursor; - see below. (As a special case, an + see below. (As a special case, an application can fetch multiple rows at once into an array host - variable; see .) + variable; see .) @@ -422,7 +422,7 @@ EXEC SQL SHOW search_path INTO :var; :something are host variables, that is, they refer to variables in the C program. They are explained in . + linkend="ecpg-variables"/>. @@ -452,8 +452,8 @@ EXEC SQL COMMIT; For more details about declaration of the cursor, - see , and - see for FETCH command + see , and + see for FETCH command details. @@ -477,7 +477,7 @@ EXEC SQL COMMIT; interface also supports autocommit of transactions (similar to psql's default behavior) via the command-line option to ecpg (see ) or via the EXEC SQL SET AUTOCOMMIT TO + linkend="app-ecpg"/>) or via the EXEC SQL SET AUTOCOMMIT TO ON statement. In autocommit mode, each command is automatically committed unless it is inside an explicit transaction block. This mode can be explicitly turned off using EXEC @@ -617,8 +617,8 @@ EXEC SQL DEALLOCATE PREPARE name; For more details about PREPARE, - see . Also - see for more details about using + see . Also + see for more details about using placeholders and input parameters. @@ -628,7 +628,7 @@ EXEC SQL DEALLOCATE PREPARE name; Using Host Variables - In you saw how you can execute SQL + In you saw how you can execute SQL statements from an embedded SQL program. Some of those statements only used fixed values and did not provide a way to insert user-supplied values into statements or have the program process @@ -646,7 +646,7 @@ EXEC SQL DEALLOCATE PREPARE name; Another way to exchange values between PostgreSQL backends and ECPG applications is the use of SQL descriptors, described - in . + in . @@ -812,11 +812,11 @@ do directly. Other PostgreSQL data types, such as timestamp and numeric can only be accessed through special library functions; see - . + . - shows which PostgreSQL + shows which PostgreSQL data types correspond to which C data types. When you wish to send or receive a value of a given PostgreSQL data type, you should declare a C variable of the corresponding C data type in @@ -851,12 +851,12 @@ do decimal - decimalThis type can only be accessed through special library functions; see . + decimalThis type can only be accessed through special library functions; see . numeric - numeric + numeric @@ -901,17 +901,17 @@ do timestamp - timestamp + timestamp interval - interval + interval date - date + date @@ -1002,7 +1002,7 @@ struct varchar_var { int len; char arr[180]; } var; structure. 
Applications deal with these types by declaring host variables in special types and accessing them using functions in the pgtypes library. The pgtypes library, described in detail - in contains basic functions to deal + in contains basic functions to deal with those types, such that you do not need to send a query to the SQL server just for adding an interval to a time stamp, for example. @@ -1011,7 +1011,7 @@ The following subsections describe these special data types. For more details about pgtypes library functions, - see . + see . @@ -1062,7 +1062,7 @@ ts = 2010-06-27 18:03:56.949343 program has to include pgtypes_date.h, declare a host variable as the date type and convert a DATE value into a text form using PGTYPESdate_to_asc() function. For more details about the - pgtypes library functions, see . + pgtypes library functions, see . @@ -1117,7 +1117,7 @@ EXEC SQL END DECLARE SECTION; allocating some memory space on the heap, and accessing the variable using the pgtypes library functions. For more details about the pgtypes library functions, - see . + see . @@ -1193,7 +1193,7 @@ EXEC SQL END DECLARE SECTION; There are two use cases for arrays as host variables. The first is a way to store some text string in char[] or VARCHAR[], as - explained in . The second use case is to + explained in . The second use case is to retrieve multiple rows from a query result without using a cursor. Without an array, to process a query result consisting of multiple rows, it is required to use a cursor and @@ -1378,7 +1378,7 @@ EXEC SQL TYPE serial_t IS long; You can declare pointers to the most common types. Note however that you cannot use pointers as target variables of queries - without auto-allocation. See + without auto-allocation. See for more information on auto-allocation. @@ -1520,7 +1520,7 @@ while (1) Another workaround is to store arrays in their external string representation in host variables of type char[] or VARCHAR[]. For more details about this - representation, see . Note that + representation, see . Note that this means that the array cannot be accessed naturally as an array in the host program (without further processing that parses the text representation). @@ -1578,7 +1578,7 @@ EXEC SQL CLOSE cur1; To enhance this example, the host variables to store values in the FETCH command can be gathered into one structure. For more details about the host variable in the - structure form, see . + structure form, see . To switch to the structure, the example can be modified as below. The two host variables, intval and textval, become members of @@ -1659,12 +1659,12 @@ while (1) Here is an example using the data type complex from - the example in . The external string + the example in . The external string representation of that type is (%lf,%lf), which is defined in the complex_in() and complex_out() functions - in . The following example inserts the + in . The following example inserts the complex type values (1,1) and (3,3) into the columns a and b, and select @@ -1875,7 +1875,7 @@ EXEC SQL EXECUTE mystmt INTO :v1, :v2, :v3 USING 37; If a query is expected to return more than one result row, a cursor should be used, as in the following example. - (See for more details about the + (See for more details about the cursor.) EXEC SQL BEGIN DECLARE SECTION; @@ -1941,7 +1941,7 @@ free(out); The numeric Type The numeric type offers the ability to do calculations with arbitrary precision.
See - for the equivalent type in the + for the equivalent type in the PostgreSQL server. Because of the arbitrary precision this variable needs to be able to expand and shrink dynamically. That's why you can only create numeric variables on the heap, by means of the @@ -2264,7 +2264,7 @@ int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst); The date Type The date type in C enables your programs to deal with data of the SQL type - date. See for the equivalent type in the + date. See for the equivalent type in the PostgreSQL server. @@ -2303,7 +2303,7 @@ date PGTYPESdate_from_asc(char *str, char **endptr); currently no variable to change that within ECPG. - shows the allowed input formats. + shows the allowed input formats.
Valid Input Formats for <function>PGTYPESdate_from_asc</function> @@ -2558,7 +2558,7 @@ int PGTYPESdate_fmt_asc(date dDate, char *fmtstring, char *outbuf); All other characters are copied 1:1 to the output string. - indicates a few possible formats. This will give + indicates a few possible formats. This will give you an idea of how to use this function. All output lines are based on the same date: November 23, 1959. @@ -2649,7 +2649,7 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); day. - indicates a few possible formats. This will give + indicates a few possible formats. This will give you an idea of how to use this function.
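The same format-mask idea appears server-side in the SQL function to_date, which can be a convenient way to sanity-check a mask before wiring it into C code; this is only an analogue, not part of the pgtypes API, and assuming the usual template patterns it should behave like this:

SELECT to_date('23/11/1959', 'DD/MM/YYYY');        -- 1959-11-23
SELECT to_date('November 23, 1959', 'Month DD, YYYY');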
@@ -2741,7 +2741,7 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); The timestamp Type The timestamp type in C enables your programs to deal with data of the SQL - type timestamp. See for the equivalent + type timestamp. See for the equivalent type in the PostgreSQL server. @@ -2766,7 +2766,7 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); The function returns the parsed timestamp on success. On error, PGTYPESInvalidTimestamp is returned and errno is - set to PGTYPES_TS_BAD_TIMESTAMP. See for important notes on this value. + set to PGTYPES_TS_BAD_TIMESTAMP. See for important notes on this value. In general, the input string can contain any combination of an allowed @@ -2777,7 +2777,7 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); specifiers are silently discarded. - contains a few examples for input strings. + contains a few examples for input strings.
Valid Input Formats for <function>PGTYPEStimestamp_from_asc</function> @@ -3217,7 +3217,7 @@ int PGTYPEStimestamp_defmt_asc(char *str, char *fmt, timestamp *d); This is the reverse function to . See the documentation there in + linkend="pgtypestimestampfmtasc"/>. See the documentation there in order to find out about the possible formatting mask entries. @@ -3270,7 +3270,7 @@ int PGTYPEStimestamp_sub_interval(timestamp *tin, interval *span, timestamp *tou The interval Type The interval type in C enables your programs to deal with data of the SQL - type interval. See for the equivalent + type interval. See for the equivalent type in the PostgreSQL server. @@ -3364,7 +3364,7 @@ int PGTYPESinterval_copy(interval *intvlsrc, interval *intvldest); PGTYPESdecimal_free). There are a lot of other functions that deal with the decimal type in the Informix compatibility mode described in . + linkend="ecpg-informix-compat"/>. The following functions can be used to work with the decimal type and are @@ -3632,7 +3632,7 @@ EXEC SQL DESCRIBE stmt1 INTO SQL DESCRIPTOR mydesc; so using DESCRIPTOR and SQL DESCRIPTOR produced named SQL Descriptor Areas. Now it is mandatory, omitting the SQL keyword produces SQLDA Descriptor Areas, - see . + see . @@ -3853,7 +3853,7 @@ EXEC SQL FETCH 3 FROM mycursor INTO DESCRIPTOR mysqlda; Note that the SQL keyword is omitted. The paragraphs about the use cases of the INTO and USING - keywords in also apply here with an addition. + keywords in also apply here with an addition. In a DESCRIBE statement the DESCRIPTOR keyword can be completely omitted if the INTO keyword is used: @@ -4038,7 +4038,7 @@ typedef struct sqlvar_struct sqlvar_t; Points to the data. The format of the data is described - in . + in . @@ -4447,7 +4447,7 @@ main(void) The whole program is shown - in . + in . @@ -5016,7 +5016,7 @@ sqlstate: 42P01 SQLSTATE error codes; therefore a high degree of consistency can be achieved by using this error code scheme throughout all applications. For further information see - . + . @@ -5037,7 +5037,7 @@ sqlstate: 42P01 SQLSTATE is also listed. There is, however, no one-to-one or one-to-many mapping between the two schemes (indeed it is many-to-many), so you should consult the global - SQLSTATE listing in + SQLSTATE listing in in each case. @@ -5767,7 +5767,7 @@ ECPG = ecpg The complete syntax of the ecpg command is - detailed in . + detailed in . @@ -5835,7 +5835,7 @@ ECPG = ecpg ECPGtransactionStatus(const char *connection_name) returns the current transaction status of the given connection identified by connection_name. - See and libpq's PQtransactionStatus() for details about the returned status codes. + See and libpq's PQtransactionStatus() for details about the returned status codes. @@ -5867,8 +5867,8 @@ ECPG = ecpg For more details about the ECPGget_PGconn(), see - . For information about the large - object function interface, see . + . For information about the large + object function interface, see . @@ -5878,7 +5878,7 @@ ECPG = ecpg - shows an example program that + shows an example program that illustrates how to create, write, and read a large object in an ECPG application. @@ -5997,7 +5997,7 @@ main(void) A safe way to use the embedded SQL code in a C++ application is hiding the ECPG calls in a C module, which the C++ application code calls into to access the database, and linking that together with - the rest of the C++ code. See + the rest of the C++ code. See about that. 
@@ -6252,7 +6252,7 @@ c++ test_cpp.o test_mod.o -lecpg -o test_cpp This section describes all SQL commands that are specific to embedded SQL. Also refer to the SQL commands listed - in , which can also be used in + in , which can also be used in embedded SQL, unless stated otherwise. @@ -6320,9 +6320,9 @@ EXEC SQL ALLOCATE DESCRIPTOR mydesc; See Also - - - + + + @@ -6539,8 +6539,8 @@ EXEC SQL END DECLARE SECTION; See Also - - + + @@ -6604,9 +6604,9 @@ EXEC SQL DEALLOCATE DESCRIPTOR mydesc; See Also - - - + + + @@ -6668,8 +6668,8 @@ DECLARE cursor_name [ BINARY ] [ IN query - A or - command which will provide the + A or + command which will provide the rows to be returned by the cursor. @@ -6678,7 +6678,7 @@ DECLARE cursor_name [ BINARY ] [ IN For the meaning of the cursor options, - see . + see . @@ -6715,9 +6715,9 @@ EXEC SQL DECLARE cur1 CURSOR FOR stmt1; See Also - - - + + + @@ -6805,8 +6805,8 @@ EXEC SQL DEALLOCATE DESCRIPTOR mydesc; See Also - - + + @@ -6915,8 +6915,8 @@ main(void) See Also - - + + @@ -7056,7 +7056,7 @@ GET DESCRIPTOR descriptor_name VALU A token identifying which item of information about a column - to retrieve. See for + to retrieve. See for a list of supported items. @@ -7164,8 +7164,8 @@ d_data = testdb See Also - - + + @@ -7258,8 +7258,8 @@ EXEC SQL OPEN :curname1; See Also - - + + @@ -7282,8 +7282,8 @@ PREPARE name FROM PREPARE prepares a statement dynamically specified as a string for execution. This is different from the - direct SQL statement , which can also - be used in embedded programs. The + direct SQL statement , which can also + be used in embedded programs. The command is used to execute either kind of prepared statement. @@ -7338,7 +7338,7 @@ EXEC SQL EXECUTE foo USING SQL DESCRIPTOR indesc INTO SQL DESCRIPTOR outdesc; See Also - + @@ -7445,8 +7445,8 @@ EXEC SQL SET CONNECTION = con1; See Also - - + + @@ -7520,7 +7520,7 @@ SET DESCRIPTOR descriptor_name VALU A token identifying which item of information to set in the - descriptor. See for a + descriptor. See for a list of supported items. @@ -7561,8 +7561,8 @@ EXEC SQL SET DESCRIPTOR indesc VALUE 2 INDICATOR = :val2null, DATA = :val2; See Also - - + + @@ -7796,7 +7796,7 @@ WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } ac Parameters - See for a description of the + See for a description of the parameters. @@ -7979,7 +7979,7 @@ EXEC SQL CLOSE DATABASE; Informix-compatible SQLDA Descriptor Areas Informix-compatible mode supports a different structure than the one described in - . See below: + . See below: struct sqlvar_compat { @@ -8653,7 +8653,7 @@ void rtoday(date *d); that it sets to the current date. - Internally this function uses the + Internally this function uses the function. @@ -8678,7 +8678,7 @@ int rjulmdy(date d, short mdy[3]); The function always returns 0 at the moment. - Internally the function uses the + Internally the function uses the function. @@ -8748,7 +8748,7 @@ int rdefmtdate(date *d, char *fmt, char *str); Internally this function is implemented to use the function. See the reference there for a + linkend="pgtypesdatedefmtasc"/> function. See the reference there for a table of example input. @@ -8771,7 +8771,7 @@ int rfmtdate(date d, char *fmt, char *str); On success, 0 is returned and a negative value if an error occurred. - Internally this function uses the + Internally this function uses the function, see the reference there for examples. @@ -8795,7 +8795,7 @@ int rmdyjul(short mdy[3], date *d); Internally the function is implemented to use the function . 
+ linkend="pgtypesdatemdyjul"/>. @@ -8851,7 +8851,7 @@ int rdayofweek(date d); Internally the function is implemented to use the function . + linkend="pgtypesdatedayofweek"/>. @@ -8889,7 +8889,7 @@ int dtcvasc(char *str, timestamp *ts); Internally this function uses the function. See the reference there + linkend="pgtypestimestampfromasc"/> function. See the reference there for a table with example inputs. @@ -8911,7 +8911,7 @@ dtcvfmtasc(char *inbuf, char *fmtstr, timestamp *dtvalue) This function is implemented by means of the function. See the documentation + linkend="pgtypestimestampdefmtasc"/> function. See the documentation there for a list of format specifiers that can be used. @@ -8983,7 +8983,7 @@ int dttofmtasc(timestamp *ts, char *output, int str_len, char *fmtstr); Internally, this function uses the function. See the reference there for + linkend="pgtypestimestampfmtasc"/> function. See the reference there for information on what format mask specifiers can be used. @@ -9289,7 +9289,7 @@ int risnull(int t, char *ptr); The function receives the type of the variable to test (t) as well a pointer to this variable (ptr). Note that the latter needs to be cast to a char*. See the function for a list of possible variable types. + linkend="rsetnull"/> for a list of possible variable types. Here is an example of how to use this function: diff --git a/doc/src/sgml/errcodes.sgml b/doc/src/sgml/errcodes.sgml index 61ad3e00e9..6fd16f643e 100644 --- a/doc/src/sgml/errcodes.sgml +++ b/doc/src/sgml/errcodes.sgml @@ -32,7 +32,7 @@ - lists all the error codes defined in + lists all the error codes defined in PostgreSQL &version;. (Some are not actually used at present, but are defined by the SQL standard.) The error classes are also shown. For each error class there is a @@ -66,9 +66,9 @@ <productname>PostgreSQL</productname> Error Codes - - - + + + diff --git a/doc/src/sgml/event-trigger.sgml b/doc/src/sgml/event-trigger.sgml index c16ff338a3..0a8860490a 100644 --- a/doc/src/sgml/event-trigger.sgml +++ b/doc/src/sgml/event-trigger.sgml @@ -8,7 +8,7 @@ - To supplement the trigger mechanism discussed in , + To supplement the trigger mechanism discussed in , PostgreSQL also provides event triggers. Unlike regular triggers, which are attached to a single table and capture only DML events, event triggers are global to a particular database and are capable of @@ -57,7 +57,7 @@ operations that took place, use the set-returning function pg_event_trigger_ddl_commands() from the ddl_command_end event trigger code (see - ). Note that the trigger fires + ). Note that the trigger fires after the actions have taken place (but before the transaction commits), and thus the system catalogs can be read as already changed. @@ -68,7 +68,7 @@ database objects. To list the objects that have been dropped, use the set-returning function pg_event_trigger_dropped_objects() from the sql_drop event trigger code (see - ). Note that + ). Note that the trigger is executed after the objects have been deleted from the system catalogs, so it's not possible to look them up anymore. @@ -96,11 +96,11 @@ For a complete list of commands supported by the event trigger mechanism, - see . + see . - Event triggers are created using the command . + Event triggers are created using the command . In order to create an event trigger, you must first create a function with the special return type event_trigger. 
This function need not (and may not) return a value; the return type serves merely as @@ -125,7 +125,7 @@ Event Trigger Firing Matrix - lists all commands + lists all commands for which event triggers are supported. @@ -953,7 +953,7 @@ typedef struct EventTriggerData Describes the event for which the function is called, one of "ddl_command_start", "ddl_command_end", "sql_drop", "table_rewrite". - See for the meaning of these + See for the meaning of these events. @@ -1003,7 +1003,7 @@ typedef struct EventTriggerData The event trigger definition associated the function with the ddl_command_start event. The effect is that all DDL commands (with the exceptions mentioned - in ) are prevented from running. + in ) are prevented from running. @@ -1037,7 +1037,7 @@ noddl(PG_FUNCTION_ARGS) - After you have compiled the source code (see ), + After you have compiled the source code (see ), declare the function and the triggers: CREATE FUNCTION noddl() RETURNS event_trigger diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml index e819010875..5f1bb70e97 100644 --- a/doc/src/sgml/extend.sgml +++ b/doc/src/sgml/extend.sgml @@ -15,32 +15,32 @@ - functions (starting in ) + functions (starting in ) - aggregates (starting in ) + aggregates (starting in ) - data types (starting in ) + data types (starting in ) - operators (starting in ) + operators (starting in ) - operator classes for indexes (starting in ) + operator classes for indexes (starting in ) - packages of related objects (starting in ) + packages of related objects (starting in ) @@ -132,14 +132,14 @@ types through functions provided by the user and only understands the behavior of such types to the extent that the user describes them. - The built-in base types are described in . + The built-in base types are described in . Enumerated (enum) types can be considered as a subcategory of base types. The main difference is that they can be created using just SQL commands, without any low-level programming. - Refer to for more information. + Refer to for more information. @@ -157,25 +157,25 @@ type is automatically created for each base type, composite type, range type, and domain type. But there are no arrays of arrays. So far as the type system is concerned, multi-dimensional arrays are the same as - one-dimensional arrays. Refer to for more + one-dimensional arrays. Refer to for more information. Composite types, or row types, are created whenever the user creates a table. It is also possible to use to + linkend="sql-createtype"/> to define a stand-alone composite type with no associated table. A composite type is simply a list of types with associated field names. A value of a composite type is a row or - record of field values. Refer to + record of field values. Refer to for more information. A range type can hold two values of the same type, which are the lower and upper bounds of the range. Range types are user-created, although - a few built-in ones exist. Refer to + a few built-in ones exist. Refer to for more information. @@ -188,8 +188,8 @@ is interchangeable with its underlying type. However, a domain can have constraints that restrict its valid values to a subset of what the underlying type would allow. Domains are created using - the SQL command . - Refer to for more information. + the SQL command . + Refer to for more information. @@ -202,7 +202,7 @@ container types, but they can be used to declare the argument and result types of functions. 
This provides a mechanism within the type system to identify special classes of functions. lists the existing + linkend="datatype-pseudotypes-table"/> lists the existing pseudo-types. @@ -300,7 +300,7 @@ A variadic function (one taking a variable number of arguments, as in - ) can be + ) can be polymorphic: this is accomplished by declaring its last parameter as VARIADIC anyarray. For purposes of argument matching and determining the actual result type, such a function behaves @@ -337,7 +337,7 @@ of the extension itself. If the extension includes C code, there will typically also be a shared library file into which the C code has been built. Once you have these files, a simple - command loads the objects into + command loads the objects into your database. @@ -346,7 +346,7 @@ SQL script to load a bunch of loose objects into your database, is that PostgreSQL will then understand that the objects of the extension go together. You can - drop all the objects with a single + drop all the objects with a single command (no need to maintain a separate uninstall script). Even more useful, pg_dump knows that it should not dump the individual member objects of the extension — it will @@ -366,7 +366,7 @@ by pg_dump. Such a change is usually only sensible if you concurrently make the same change in the extension's script file. (But there are special provisions for tables containing configuration - data; see .) + data; see .) In production situations, it's generally better to create an extension update script to perform changes to extension member objects. @@ -405,7 +405,7 @@ The kinds of SQL objects that can be members of an extension are shown in - the description of . Notably, objects + the description of . Notably, objects that are database-cluster-wide, such as databases, roles, and tablespaces, cannot be extension members since an extension is only known within one database. (Although an extension script is not prohibited from creating @@ -438,7 +438,7 @@ - The command relies on a control + The command relies on a control file for each extension, which must be named the same as the extension with a suffix of .control, and must be placed in the installation's SHAREDIR/extension directory. There @@ -499,7 +499,7 @@ when initially creating an extension, but not during extension updates (since that might override user-added comments). Alternatively, the extension's comment can be set by writing - a command in the script file. + a command in the script file. @@ -562,7 +562,7 @@ its contained objects into a different schema after initial creation of the extension. The default is false, i.e. the extension is not relocatable. - See for more information. + See for more information. @@ -576,7 +576,7 @@ and not any other. The schema parameter is consulted only when initially creating an extension, not during extension updates. - See for more information. + See for more information. @@ -609,7 +609,7 @@ comments) by the extension mechanism. This provision is commonly used to throw an error if the script file is fed to psql rather than being loaded via CREATE EXTENSION (see example - script in ). + script in ). Without that, users might accidentally load the extension's contents as loose objects rather than as an extension, a state of affairs that's a bit tedious to recover from. 
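In practice the top of a script file therefore looks something like this minimal sketch, here for a hypothetical extension called pair:

-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION pair" to load this file. \quit

CREATE TYPE pair AS (k text, v text);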
@@ -687,7 +687,7 @@ In all cases, the script file will be executed with - initially set to point to the target + initially set to point to the target schema; that is, CREATE EXTENSION does the equivalent of this: @@ -1031,14 +1031,14 @@ include $(PGXS) This makefile relies on PGXS, which is described - in . The command make install + in . The command make install will install the control and script files into the correct directory as reported by pg_config. Once the files are installed, use the - command to load the objects into + command to load the objects into any particular database. diff --git a/doc/src/sgml/external-projects.sgml b/doc/src/sgml/external-projects.sgml index 03fd18aeb8..89147817ec 100644 --- a/doc/src/sgml/external-projects.sgml +++ b/doc/src/sgml/external-projects.sgml @@ -40,7 +40,7 @@ All other language interfaces are external projects and are distributed - separately. includes a list of + separately. includes a list of some of these projects. Note that some of these packages might not be released under the same license as PostgreSQL. For more information on each language interface, including licensing terms, refer to @@ -170,7 +170,7 @@ In addition, there are a number of procedural languages that are developed and maintained outside the core PostgreSQL - distribution. lists some of these + distribution. lists some of these packages. Note that some of these projects might not be released under the same license as PostgreSQL. For more information on each procedural language, including licensing information, refer to its website @@ -238,7 +238,7 @@ just like features that are built in. The contrib/ directory shipped with the source code contains several extensions, which are described in - . Other extensions are developed + . Other extensions are developed independently, like PostGIS. Even PostgreSQL replication solutions can be developed diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml index 4250a03f16..a2f8137713 100644 --- a/doc/src/sgml/fdwhandler.sgml +++ b/doc/src/sgml/fdwhandler.sgml @@ -22,7 +22,7 @@ The foreign data wrappers included in the standard distribution are good references when trying to write your own. Look into the contrib subdirectory of the source tree. - The reference page also has + The reference page also has some useful details. @@ -43,7 +43,7 @@ a validator function. Both functions must be written in a compiled language such as C, using the version-1 interface. For details on C language calling conventions and dynamic loading, - see . + see . @@ -57,7 +57,7 @@ returning the special pseudo-type fdw_handler. The callback functions are plain C functions and are not visible or callable at the SQL level. The callback functions are described in - . + . @@ -126,7 +126,7 @@ GetForeignRelSize(PlannerInfo *root, - See for additional information. + See for additional information. @@ -157,7 +157,7 @@ GetForeignPaths(PlannerInfo *root, - See for additional information. + See for additional information. @@ -193,7 +193,7 @@ GetForeignPlan(PlannerInfo *root, - See for additional information. + See for additional information. @@ -341,7 +341,7 @@ GetForeignJoinPaths(PlannerInfo *root, - See for additional information. + See for additional information. @@ -388,7 +388,7 @@ GetForeignUpperPaths(PlannerInfo *root, - See for additional information. + See for additional information. @@ -477,7 +477,7 @@ PlanForeignModify(PlannerInfo *root, - See for additional information. + See for additional information. 
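Stepping back to the SQL level for a moment: everything in this chapter hangs off the handler function, which is registered roughly as follows (the function, wrapper, and library names are hypothetical):

CREATE FUNCTION my_fdw_handler() RETURNS fdw_handler
    AS '$libdir/my_fdw' LANGUAGE C STRICT;

CREATE FOREIGN DATA WRAPPER my_fdw
    HANDLER my_fdw_handler;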
@@ -759,7 +759,7 @@ PlanDirectModify(PlannerInfo *root, - See for additional information. + See for additional information. @@ -872,7 +872,7 @@ EndDirectModify(ForeignScanState *node); If an FDW wishes to support late row locking (as described - in ), it must provide the following + in ), it must provide the following callback functions: @@ -905,7 +905,7 @@ GetForeignRowMarkType(RangeTblEntry *rte, - See for more information. + See for more information. @@ -964,7 +964,7 @@ RefetchForeignRow(EState *estate, - See for more information. + See for more information. @@ -1093,7 +1093,7 @@ AnalyzeForeignTable(Relation relation, BlockNumber *totalpages); - This function is called when is executed on + This function is called when is executed on a foreign table. If the FDW can collect statistics for this foreign table, it should return true, and provide a pointer to a function that will collect sample rows from the table in @@ -1139,10 +1139,10 @@ ImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid); Obtain a list of foreign table creation commands. This function is - called when executing , and is + called when executing , and is passed the parse tree for that statement, as well as the OID of the foreign server to use. It should return a list of C strings, each of - which must contain a command. + which must contain a command. These strings will be parsed and executed by the core server. @@ -1605,7 +1605,7 @@ GetForeignServerByName(const char *name, bool missing_ok); PlanForeignModify and the other callbacks described in - are designed around the assumption + are designed around the assumption that the foreign relation will be scanned in the usual way and then individual row updates will be driven by a local ModifyTable plan node. This approach is necessary for the general case where an @@ -1616,7 +1616,7 @@ GetForeignServerByName(const char *name, bool missing_ok); compete against the ModifyTable approach. This approach could also be used to implement remote SELECT FOR UPDATE, rather than using the row locking callbacks described in - . Keep in mind that a path + . Keep in mind that a path inserted into UPPERREL_FINAL is responsible for implementing all behavior of the query. @@ -1676,7 +1676,7 @@ GetForeignServerByName(const char *name, bool missing_ok); By default, PostgreSQL ignores locking considerations when interfacing to FDWs, but an FDW can perform early locking without any explicit support from the core code. The API functions described - in , which were added + in , which were added in PostgreSQL 9.5, allow an FDW to use late locking if it wishes. @@ -1720,7 +1720,7 @@ GetForeignServerByName(const char *name, bool missing_ok); again perform early locking by fetching tuples with the equivalent of SELECT FOR UPDATE/SHARE. To perform late locking instead, provide the callback functions defined - in . + in . In GetForeignRowMarkType, select rowmark option ROW_MARK_EXCLUSIVE, ROW_MARK_NOKEYEXCLUSIVE, ROW_MARK_SHARE, or ROW_MARK_KEYSHARE depending diff --git a/doc/src/sgml/file-fdw.sgml b/doc/src/sgml/file-fdw.sgml index 88aefb8ef0..e2598a07da 100644 --- a/doc/src/sgml/file-fdw.sgml +++ b/doc/src/sgml/file-fdw.sgml @@ -13,7 +13,7 @@ files in the server's file system, or to execute programs on the server and read their output. The data file or program output must be in a format that can be read by COPY FROM; - see for details. + see for details. Access to data files is currently read-only. 
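A minimal setup sketch (the file name and column layout are hypothetical):

CREATE EXTENSION file_fdw;

CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE sales_log (
    sale_date date,
    amount    numeric
) SERVER files
OPTIONS (filename '/var/log/sales.csv', format 'csv', header 'true');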
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 698daf69ea..4dd9d029e6 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -15,7 +15,7 @@ PostgreSQL provides a large number of functions and operators for the built-in data types. Users can also define their own functions and operators, as described in - . The + . The psql commands \df and \do can be used to list all available functions and operators, respectively. @@ -176,7 +176,7 @@ The operators AND and OR are commutative, that is, you can switch the left and right operand without affecting the result. But see for more information about the + linkend="syntax-express-eval"/> for more information about the order of evaluation of subexpressions. @@ -191,7 +191,7 @@ The usual comparison operators are available, as shown in . + linkend="functions-comparison-op-table"/>.
@@ -258,7 +258,7 @@ There are also some comparison predicates, as shown in . These behave much like + linkend="functions-comparison-pred-table"/>. These behave much like operators, but have special syntax mandated by the SQL standard. @@ -455,7 +455,7 @@ returns true if expression evaluates to the null value. It is highly recommended that these applications be modified to comply with the SQL standard. However, if that - cannot be done the + cannot be done the configuration variable is available. If it is enabled, PostgreSQL will convert x = NULL clauses to x IS NULL. @@ -536,7 +536,7 @@ Some comparison-related functions are also available, as shown in . + linkend="functions-comparison-func-table"/>.
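A few illustrative cases, combining the predicates and functions above (expected results in comments; num_nonnulls is one of the comparison-related functions just mentioned):

SELECT 2 BETWEEN 1 AND 3;                 -- true
SELECT NULL = NULL;                       -- null, not true
SELECT NULL IS NOT DISTINCT FROM NULL;    -- true
SELECT num_nonnulls(1, NULL, 2);          -- 2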
@@ -591,7 +591,7 @@ - shows the available mathematical operators. + shows the available mathematical operators.
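For instance, a few of the operators from that table (expected results in comments):

SELECT 5 % 4;       -- 1, modulo
SELECT 2 ^ 10;      -- 1024, exponentiation
SELECT |/ 25.0;     -- 5, square root
SELECT 91 & 15;     -- 11, bitwise AND on integers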
@@ -736,11 +736,11 @@ the others are available for all numeric data types. The bitwise operators are also available for the bit string types bit and bit varying, as - shown in . + shown in . - shows the available + shows the available mathematical functions. In the table, dp indicates double precision. Many of these functions are provided in multiple forms with different argument types. @@ -1093,7 +1093,7 @@
- shows functions for + shows functions for generating random numbers. @@ -1139,11 +1139,11 @@ The characteristics of the values returned by random() depend on the system implementation. It is not suitable for cryptographic - applications; see module for an alternative. + applications; see module for an alternative. - Finally, shows the + Finally, shows the available trigonometric functions. All trigonometric functions take arguments and return values of type double precision. Each of the trigonometric functions comes in @@ -1328,10 +1328,10 @@ SQL defines some string functions that use key words, rather than commas, to separate arguments. Details are in - . + . PostgreSQL also provides versions of these functions that use the regular function invocation syntax - (see ). + (see ). @@ -1343,7 +1343,7 @@ caused surprising behaviors. However, the string concatenation operator (||) still accepts non-string input, so long as at least one input is of a string type, as shown in . For other cases, insert an explicit + linkend="functions-string-sql"/>. For other cases, insert an explicit coercion to text if you need to duplicate the previous behavior. @@ -1504,7 +1504,7 @@ text Extract substring matching POSIX regular expression. See - for more information on pattern + for more information on pattern matching. substring('Thomas' from '...$') @@ -1516,7 +1516,7 @@ text Extract substring matching SQL regular expression. - See for more information on + See for more information on pattern matching. substring('Thomas' from '%#"o_a#"_' for '#') @@ -1577,8 +1577,8 @@ Additional string manipulation functions are available and are - listed in . Some of them are used internally to implement the - SQL-standard string functions listed in . + listed in . Some of them are used internally to implement the + SQL-standard string functions listed in . @@ -1702,7 +1702,7 @@ string must be valid in this encoding. Conversions can be defined by CREATE CONVERSION. Also there are some predefined conversions. See for available conversions. + linkend="conversion-names"/> for available conversions. convert('text_in_utf8', 'UTF8', 'LATIN1') text_in_utf8 represented in Latin-1 @@ -1792,7 +1792,7 @@ Format arguments according to a format string. This function is similar to the C function sprintf. - See . + See . format('Hello %s, %1$s', 'World') Hello World, World @@ -1968,7 +1968,7 @@ Quotes are added only if necessary (i.e., if the string contains non-identifier characters or would be case-folded). Embedded quotes are properly doubled. - See also . + See also . quote_ident('Foo bar') "Foo bar" @@ -1989,7 +1989,7 @@ Note that quote_literal returns null on null input; if the argument might be null, quote_nullable is often more suitable. - See also . + See also . quote_literal(E'O\'Reilly') 'O''Reilly' @@ -2019,7 +2019,7 @@ in an SQL statement string; or, if the argument is null, return NULL. Embedded single-quotes and backslashes are properly doubled. - See also . + See also . quote_nullable(NULL) NULL @@ -2048,7 +2048,7 @@ Return captured substring(s) resulting from the first match of a POSIX regular expression to the string. See - for more information. + for more information. regexp_match('foobarbequebaz', '(bar)(beque)') {bar,beque} @@ -2065,7 +2065,7 @@ Return captured substring(s) resulting from matching a POSIX regular expression to the string. See - for more information. + for more information. 
regexp_matches('foobarbequebaz', 'ba.', 'g') {bar}{baz} (2 rows) @@ -2081,7 +2081,7 @@ text Replace substring(s) matching a POSIX regular expression. See - for more information. + for more information. regexp_replace('Thomas', '.[mN]a.', 'M') ThM @@ -2097,7 +2097,7 @@ text[] Split string using a POSIX regular expression as - the delimiter. See for more + the delimiter. See for more information. regexp_split_to_array('hello world', E'\\s+') @@ -2114,7 +2114,7 @@ setof text Split string using a POSIX regular expression as - the delimiter. See for more + the delimiter. See for more information. regexp_split_to_table('hello world', E'\\s+') @@ -2339,7 +2339,7 @@ format functions are variadic, so it is possible to pass the values to be concatenated or formatted as an array marked with the VARIADIC keyword (see ). The array's elements are + linkend="xfunc-sql-variadic-functions"/>). The array's elements are treated as if they were separate ordinary arguments to the function. If the variadic array argument is NULL, concat and concat_ws return NULL, but @@ -2348,7 +2348,7 @@ See also the aggregate function string_agg in - . + .
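For instance (the products table is assumed for the aggregate example; expected results in comments):

SELECT concat_ws(', ', 'a', NULL, 'b');          -- a, b
SELECT concat(VARIADIC ARRAY['x', 'y', 'z']);    -- xyz
SELECT string_agg(name, ', ' ORDER BY name) FROM products;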
@@ -3351,7 +3351,7 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); The %I and %L format specifiers are particularly useful for safely constructing dynamic SQL statements. See - . + . @@ -3375,10 +3375,10 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); SQL defines some string functions that use key words, rather than commas, to separate arguments. Details are in - . + . PostgreSQL also provides versions of these functions that use the regular function invocation syntax - (see ). + (see ). @@ -3498,10 +3498,10 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); Additional binary string manipulation functions are available and - are listed in . Some + are listed in . Some of them are used internally to implement the SQL-standard string functions listed in . + linkend="functions-binarystring-sql"/>.
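Returning for a moment to the %I and %L specifiers mentioned above, a quick sketch of safe dynamic SQL construction (the identifier and literal are made up):

SELECT format('SELECT * FROM %I WHERE note = %L',
              'my table', 'O''Reilly');
-- result: SELECT * FROM "my table" WHERE note = 'O''Reilly'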
@@ -3688,8 +3688,8 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); See also the aggregate function string_agg in - and the large object functions - in . + and the large object functions + in . @@ -3707,7 +3707,7 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); manipulating bit strings, that is values of the types bit and bit varying. Aside from the usual comparison operators, the operators - shown in can be used. + shown in can be used. Bit string operands of &, |, and # must be of equal length. When bit shifting, the original length of the string is preserved, as shown @@ -3935,9 +3935,9 @@ cast(-44 as bit(12)) 111111010100 - If you have turned off, + If you have turned off, any backslashes you write in literal string constants will need to be - doubled. See for more information. + doubled. See for more information. @@ -4144,7 +4144,7 @@ substring('foobar' from '#"o_b#"%' for '#') NULL - lists the available + lists the available operators for pattern matching using POSIX regular expressions. @@ -4277,7 +4277,7 @@ substring('foobar' from 'o(.)b') o matching, while flag g specifies replacement of each matching substring rather than only the first one. Supported flags (though not g) are - described in . + described in . @@ -4311,7 +4311,7 @@ regexp_replace('foobarbaz', 'b(..)', E'X\\1Y', 'g') The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Supported flags are described - in . + in . @@ -4353,7 +4353,7 @@ SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1]; subexpressions of the pattern, just as described above for regexp_match. regexp_matches accepts all the flags shown - in , plus + in , plus the g flag which commands it to return all matches, not just the first one. @@ -4407,7 +4407,7 @@ SELECT col1, (SELECT regexp_matches(col2, '(bar)(beque)')) FROM tab; The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. regexp_split_to_table supports the flags described in - . + . @@ -4513,7 +4513,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; PostgreSQL always initially presumes that a regular expression follows the ARE rules. However, the more limited ERE or BRE rules can be chosen by prepending an embedded option - to the RE pattern, as described in . + to the RE pattern, as described in . This can be useful for compatibility with applications that expect exactly the POSIX 1003.2 rules. @@ -4539,9 +4539,9 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Without a quantifier, it matches a match for the atom. With a quantifier, it can match some number of matches of the atom. An atom can be any of the possibilities - shown in . + shown in . The possible quantifiers and their meanings are shown in - . + . @@ -4549,7 +4549,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; specific conditions are met. A constraint can be used where an atom could be used, except it cannot be followed by a quantifier. The simple constraints are shown in - ; + ; some more constraints are described later. 
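A few concrete calls showing the operator and flag behavior described above (expected results in comments):

SELECT 'abcd' ~ 'a.c';                                     -- true
SELECT regexp_replace('foobarbaz', 'b..', 'X', 'g');       -- fooXX
SELECT regexp_split_to_array('the quick brown', E'\\s+');  -- {the,quick,brown}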
@@ -4589,7 +4589,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; [chars] a bracket expression, matching any one of the chars (see - for more detail) + for more detail) @@ -4603,7 +4603,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; \c where c is alphanumeric (possibly followed by other characters) - is an escape, see + is an escape, see (AREs only; in EREs and BREs, this matches c) @@ -4630,9 +4630,9 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; - If you have turned off, + If you have turned off, any backslashes you write in literal string constants will need to be - doubled. See for more information. + doubled. See for more information. @@ -4727,7 +4727,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; same possibilities as their corresponding normal (greedy) counterparts, but prefer the smallest number rather than the largest number of matches. - See for more detail. + See for more detail. @@ -4795,7 +4795,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Lookahead and lookbehind constraints cannot contain back - references (see ), + references (see ), and all parentheses within them are considered non-capturing. @@ -4926,27 +4926,27 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; Character-entry escapes exist to make it easier to specify non-printing and other inconvenient characters in REs. They are - shown in . + shown in . Class-shorthand escapes provide shorthands for certain commonly-used character classes. They are - shown in . + shown in . A constraint escape is a constraint, matching the empty string if specific conditions are met, written as an escape. They are - shown in . + shown in . A back reference (\n) matches the same string matched by the previous parenthesized subexpression specified by the number n - (see ). For example, + (see ). For example, ([bc])\1 matches bb or cc but not bc or cb. The subexpression must entirely precede the back reference in the RE. @@ -5167,7 +5167,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; \A matches only at the beginning of the string - (see for how this differs from + (see for how this differs from ^) @@ -5195,7 +5195,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; \Z matches only at the end of the string - (see for how this differs from + (see for how this differs from $) @@ -5284,7 +5284,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; a regex operator, or the flags parameter to a regex function. The available option letters are - shown in . + shown in . Note that these same option letters are used in the flags parameters of regex functions. 
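To illustrate the escapes and back references covered above (written as E'' strings with doubled backslashes, matching the earlier examples):

SELECT 'bb' ~ E'([bc])\\1';    -- true, the back reference matches b again
SELECT 'bc' ~ E'([bc])\\1';    -- false
SELECT regexp_matches('2017-08-17', E'(\\d{4})-(\\d{2})-(\\d{2})');
-- {2017,08,17}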
@@ -5319,7 +5319,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; i case-insensitive matching (see - ) (overrides operator type) + ) (overrides operator type) @@ -5330,13 +5330,13 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; n newline-sensitive matching (see - ) + ) p partial newline-sensitive matching (see - ) + ) @@ -5358,7 +5358,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; w inverse partial newline-sensitive (weird) matching - (see ) + (see ) @@ -5735,7 +5735,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); provide a powerful set of tools for converting various data types (date/time, integer, floating point, numeric) to formatted strings and for converting from formatted strings to specific data types. - lists them. + lists them. These functions all follow a common calling convention: the first argument is the value to be formatted and the second argument is a template that defines the output or input format. @@ -5829,7 +5829,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); There is also a single-argument to_timestamp - function; see . + function; see . @@ -5857,7 +5857,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); - shows the + shows the template patterns available for formatting date and time values. @@ -6087,7 +6087,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); behavior. For example, FMMonth is the Month pattern with the FM modifier. - shows the + shows the modifier patterns for date/time formatting. @@ -6125,7 +6125,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); TM prefix translation mode (print localized day and month names based on - ) + ) TMMonth @@ -6291,7 +6291,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); specifications like YYYY-MM-DD (IYYY-IDDD) can be useful. But avoid writing something like IYYY-MM-DD; that would yield surprising results near the start of the year. - (See for more + (See for more information.) @@ -6345,7 +6345,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); - shows the + shows the template patterns available for formatting numeric values. @@ -6447,8 +6447,8 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); The pattern characters S, L, D, and G represent the sign, currency symbol, decimal point, and thousands separator characters defined by the current locale - (see - and ). The pattern characters period + (see + and ). The pattern characters period and comma represent those exact characters, with the meanings of decimal point and thousands separator, regardless of locale. @@ -6535,7 +6535,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); behavior. For example, FM99.99 is the 99.99 pattern with the FM modifier. - shows the + shows the modifier patterns for numeric formatting. @@ -6570,7 +6570,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}');
- shows some + shows some examples of the use of the to_char function. @@ -6747,15 +6747,15 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Date/Time Functions and Operators - shows the available + shows the available functions for date/time value processing, with details appearing in the following subsections. illustrates the behaviors of + linkend="operators-datetime-table"/> illustrates the behaviors of the basic arithmetic operators (+, *, etc.). For formatting functions, refer to - . You should be familiar with + . You should be familiar with the background information on date/time data types from . + linkend="datatype-datetime"/>. @@ -6943,7 +6943,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); timestamp with time zone Current date and time (changes during statement execution); - see + see @@ -6958,7 +6958,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); date Current date; - see + see @@ -6973,7 +6973,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); time with time zone Current time of day; - see + see @@ -6988,7 +6988,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); timestamp with time zone Current date and time (start of current transaction); - see + see @@ -7003,7 +7003,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); double precision Get subfield (equivalent to extract); - see + see date_part('hour', timestamp '2001-02-16 20:38:40') 20 @@ -7013,7 +7013,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); date_part(text, interval) double precision Get subfield (equivalent to - extract); see + extract); see date_part('month', interval '2 years 3 months') 3 @@ -7027,7 +7027,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); date_trunc(text, timestamp) timestamp - Truncate to specified precision; see also + Truncate to specified precision; see also date_trunc('hour', timestamp '2001-02-16 20:38:40') 2001-02-16 20:00:00 @@ -7036,7 +7036,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); date_trunc(text, interval) interval - Truncate to specified precision; see also + Truncate to specified precision; see also date_trunc('hour', interval '2 days 3 hours 40 minutes') 2 days 03:00:00 @@ -7051,7 +7051,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); timestamp)
double precision - Get subfield; see + Get subfield; see extract(hour from timestamp '2001-02-16 20:38:40') 20 @@ -7061,7 +7061,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); extract(field from interval) double precision - Get subfield; see + Get subfield; see extract(month from interval '2 years 3 months') 3 @@ -7144,7 +7144,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); time Current time of day; - see + see @@ -7159,7 +7159,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); timestamp Current date and time (start of current transaction); - see + see @@ -7293,7 +7293,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); timestamp with time zone Current date and time (start of current transaction); - see + see @@ -7308,7 +7308,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); timestamp with time zone Current date and time (start of current statement); - see + see @@ -7324,7 +7324,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); text Current date and time (like clock_timestamp, but as a text string); - see + see @@ -7339,7 +7339,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); timestamp with time zone Current date and time (start of current transaction); - see + see @@ -7886,7 +7886,7 @@ SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); The extract function is primarily intended for computational processing. For formatting date/time values for - display, see . + display, see . @@ -7986,7 +7986,7 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); The AT TIME ZONE construct allows conversions of time stamps to different time zones. shows its + linkend="functions-datetime-zoneconvert-table"/> shows its variants. @@ -8035,7 +8035,7 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); specified either as a text string (e.g., 'PST') or as an interval (e.g., INTERVAL '-08:00'). In the text case, a time zone name can be specified in any of the ways - described in . + described in . @@ -8279,10 +8279,10 @@ SELECT pg_sleep_until('tomorrow 03:00'); Enum Support Functions - For enum types (described in ), + For enum types (described in ), there are several functions that allow cleaner programming without hard-coding particular values of an enum type. - These are listed in . The examples + These are listed in . The examples assume an enum type created as: @@ -8379,9 +8379,9 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple lseg, line, path, polygon, and circle have a large set of native support functions and operators, shown in , , and . + linkend="functions-geometry-op-table"/>, , and . @@ -8912,7 +8912,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple Network Address Functions and Operators - shows the operators + shows the operators available for the cidr and inet types. The operators <<, <<=, >>, @@ -9024,7 +9024,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - shows the functions + shows the functions available for use with the cidr and inet types. The abbrev, host, and text @@ -9225,7 +9225,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - shows the functions + shows the functions available for use with the macaddr type. The function trunc(macaddr) returns a MAC address with the last 3 bytes set to zero. 
This can be used to @@ -9270,7 +9270,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - shows the functions + shows the functions available for use with the macaddr8 type. The function trunc(macaddr8) returns a MAC address with the last 5 bytes set to zero. This can be used to @@ -9342,11 +9342,11 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple - , - and - + , + and + summarize the functions and operators that are provided - for full text searching. See for a detailed + for full text searching. See for a detailed explanation of PostgreSQL's text search facility. @@ -9797,14 +9797,14 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple All the text search functions that accept an optional regconfig argument will use the configuration specified by - + when that argument is omitted. The functions in - + are listed separately because they are not usually used in everyday text searching operations. They are helpful for development and debugging of new text search configurations. @@ -9910,7 +9910,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple The functions and function-like expressions described in this section operate on values of type xml. Check for information about the xml + linkend="datatype-xml"/> for information about the xml type. The function-like expressions xmlparse and xmlserialize for converting to and from type xml are not repeated here. Use of most of these @@ -10107,7 +10107,7 @@ SELECT xmlelement(name foo, xmlattributes('xyz' as bar), and & will be converted to entities. Binary data (data type bytea) will be represented in base64 or hex encoding, depending on the setting of the configuration parameter - . The particular behavior for + . The particular behavior for individual data types is expected to evolve in order to align the SQL and PostgreSQL data types with the XML Schema specification, at which point a more precise description will appear. @@ -10249,7 +10249,7 @@ SELECT xmlroot(xmlparse(document 'abc'), input values to the aggregate function call, much like xmlconcat does, except that concatenation occurs across rows rather than across expressions in a single row. - See for additional information + See for additional information about aggregate functions. @@ -10269,7 +10269,7 @@ SELECT xmlagg(x) FROM test; To determine the order of the concatenation, an ORDER BY clause may be added to the aggregate call as described in - . For example: + . For example: IS DOCUMENT
returns true if the argument XML value is a proper XML document, false if it is not (that is, it is a content fragment), or null if the argument is - null. See about the difference + null. See about the difference between documents and content fragments. @@ -10391,7 +10391,7 @@ SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF 'Tor xml_is_well_formed_document checks for a well-formed document, while xml_is_well_formed_content checks for well-formed content. xml_is_well_formed does - the former if the configuration + the former if the configuration parameter is set to DOCUMENT, or the latter if it is set to CONTENT. This means that xml_is_well_formed is useful for seeing whether @@ -10975,7 +10975,7 @@ table2-mapping As an example of using the output produced by these functions, - shows an XSLT stylesheet that + shows an XSLT stylesheet that converts the output of table_to_xml_and_xmlschema to an HTML document containing a tabular rendition of the table data. In a @@ -11044,9 +11044,9 @@ table2-mapping - shows the operators that + shows the operators that are available for use with the two JSON data types (see ). + linkend="datatype-json"/>). @@ -11127,18 +11127,18 @@ table2-mapping The standard comparison operators shown in are available for + linkend="functions-comparison-op-table"/> are available for jsonb, but not for json. They follow the ordering rules for B-tree operations outlined at . + linkend="json-indexing"/>. Some further operators also exist only for jsonb, as shown - in . + in . Many of these operators can be indexed by jsonb operator classes. For a full description of jsonb containment and existence semantics, see . + linkend="json-containment"/>. describes how these operators can be used to effectively index jsonb data. @@ -11240,7 +11240,7 @@ table2-mapping - shows the functions that are + shows the functions that are available for creating json and jsonb values. (There are no equivalent functions for jsonb, of the row_to_json and array_to_json functions. However, the to_jsonb @@ -11394,7 +11394,7 @@ table2-mapping - The extension has a cast + The extension has a cast from hstore to json, so that hstore values converted via the JSON creation functions will be represented as JSON objects, not as primitive string values. @@ -11402,7 +11402,7 @@ table2-mapping - shows the functions that + shows the functions that are available for processing json and jsonb values. @@ -11843,7 +11843,7 @@ table2-mapping JSON strings to the appropriate single character. This is a non-issue if the input is type jsonb, because the conversion was already done; but for json input, this may result in throwing an error, - as noted in . + as noted in . @@ -11902,7 +11902,7 @@ table2-mapping - See also for the aggregate + See also for the aggregate function json_agg which aggregates record values as JSON, and the aggregate function json_object_agg which aggregates pairs of values @@ -11935,10 +11935,10 @@ table2-mapping This section describes functions for operating on sequence objects, also called sequence generators or just sequences. Sequence objects are special single-row tables created with . + linkend="sql-createsequence"/>. Sequence objects are commonly used to generate unique identifiers for rows of a table. The sequence functions, listed in , provide simple, multiuser-safe + linkend="functions-sequence-table"/>, provide simple, multiuser-safe methods for obtaining successive sequence values from sequence objects. 
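A minimal sketch of how these sequence functions fit together (the sequence name is illustrative):

CREATE SEQUENCE myseq;
SELECT nextval('myseq');      -- 1
SELECT nextval('myseq');      -- 2
SELECT currval('myseq');      -- 2, the value this session last obtained from myseq
SELECT setval('myseq', 42);   -- the next nextval('myseq') will return 43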
@@ -12003,7 +12003,7 @@ nextval('myschema.foo') operates on myschema.foosame as above nextval('foo') searches search path for foo - See for more information about + See for more information about regclass. @@ -12061,7 +12061,7 @@ nextval('foo'::text) foo is looked up at If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using - special parameters in the command; + special parameters in the command; see its command reference page for more information. @@ -12262,7 +12262,7 @@ SELECT a, The data types of all the result expressions must be convertible to a single output type. - See for more details. + See for more details. @@ -12316,7 +12316,7 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; - As described in , there are various + As described in , there are various situations in which subexpressions of an expression are evaluated at different times, so that the principle that CASE evaluates only necessary subexpressions is not ironclad. For @@ -12419,7 +12419,7 @@ SELECT NULLIF(value, '(none)') ... largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which will be the type of the result - (see for details). NULL values + (see for details). NULL values in the list are ignored. The result will be NULL only if all the expressions evaluate to NULL. @@ -12437,7 +12437,7 @@ SELECT NULLIF(value, '(none)') ... Array Functions and Operators - shows the operators + shows the operators available for array types. @@ -12561,14 +12561,14 @@ SELECT NULLIF(value, '(none)') ... - See for more details about array operator - behavior. See for more details about + See for more details about array operator + behavior. See for more details about which operators support indexed operations. - shows the functions - available for use with array types. See + shows the functions + available for use with array types. See for more information and examples of the use of these functions. @@ -12843,7 +12843,7 @@ SELECT NULLIF(value, '(none)') ... setof anyelement, anyelement [, ...] expand multiple arrays (possibly of different types) to a set of rows. This is only allowed in the FROM clause; see - + unnest(ARRAY[1,2],ARRAY['foo','bar','baz']) 1 foo 2 bar @@ -12899,7 +12899,7 @@ NULL baz(3 rows) - See also about the aggregate + See also about the aggregate function array_agg for use with arrays. @@ -12908,11 +12908,11 @@ NULL baz(3 rows) Range Functions and Operators - See for an overview of range types. + See for an overview of range types. - shows the operators + shows the operators available for range types. @@ -13087,7 +13087,7 @@ NULL baz(3 rows) - shows the functions + shows the functions available for use with range types. @@ -13238,18 +13238,18 @@ NULL baz(3 rows) Aggregate functions compute a single result from a set of input values. The built-in general-purpose aggregate - functions are listed in + functions are listed in and statistical aggregates in . + linkend="functions-aggregate-statistics-table"/>. The built-in within-group ordered-set aggregate functions - are listed in + are listed in while the built-in within-group hypothetical-set ones are in . Grouping operations, + linkend="functions-hypothetical-table"/>. Grouping operations, which are closely related to aggregate functions, are listed in - . + . The special syntax considerations for aggregate - functions are explained in . 
- Consult for additional introductory + functions are explained in . + Consult for additional introductory information. @@ -13597,7 +13597,7 @@ NULL baz(3 rows) xml No - concatenation of XML values (see also ) + concatenation of XML values (see also ) @@ -13669,7 +13669,7 @@ SELECT count(*) FROM sometable; depending on the order of the input values. This ordering is unspecified by default, but can be controlled by writing an ORDER BY clause within the aggregate call, as shown in - . + . Alternatively, supplying the input values from a sorted subquery will usually work. For example: @@ -13683,7 +13683,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; - shows + shows aggregate functions typically used in statistical analysis. (These are separated out merely to avoid cluttering the listing of more-commonly-used aggregates.) Where the description mentions @@ -14102,7 +14102,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
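To make the statistical aggregates concrete, a hypothetical two-column numeric table suffices (sales(price, qty) is an assumption, not a shipped example):

SELECT corr(price, qty)       AS correlation,
       regr_slope(price, qty) AS slope,
       stddev_samp(price)     AS sample_stddev
FROM sales;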
- shows some + shows some aggregate functions that use the ordered-set aggregate syntax. These functions are sometimes referred to as inverse distribution functions. @@ -14252,7 +14252,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; - All the aggregates listed in + All the aggregates listed in ignore null values in their sorted input. For those that take a fraction parameter, the fraction value must be between 0 and 1; an error is thrown if not. However, a null fraction value @@ -14266,9 +14266,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; Each of the aggregates listed in - is associated with a + is associated with a window function of the same name defined in - . In each case, the aggregate result + . In each case, the aggregate result is the value that the associated window function would have returned for the hypothetical row constructed from args, if such a row had been added to the sorted @@ -14433,7 +14433,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; Grouping operations are used in conjunction with grouping sets (see - ) to distinguish result rows. The + ) to distinguish result rows. The arguments to the GROUPING operation are not actually evaluated, but they must match exactly expressions given in the GROUP BY clause of the associated query level. Bits are assigned with the rightmost @@ -14477,14 +14477,14 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; Window functions provide the ability to perform calculations across sets of rows that are related to the current query - row. See for an introduction to this - feature, and for syntax + row. See for an introduction to this + feature, and for syntax details. The built-in window functions are listed in - . Note that these functions + . Note that these functions must be invoked using window function syntax, i.e., an OVER clause is required. @@ -14494,7 +14494,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; general-purpose or statistical aggregate (i.e., not ordered-set or hypothetical-set aggregates) can be used as a window function; see - for a list of the built-in aggregates. + for a list of the built-in aggregates. Aggregate functions act as window functions only when an OVER clause follows the call; otherwise they act as non-window aggregates and return a single row for the entire set. @@ -14706,7 +14706,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; All of the functions listed in - depend on the sort ordering + depend on the sort ordering specified by the ORDER BY clause of the associated window definition. Rows that are not distinct when considering only the ORDER BY columns are said to be peers. @@ -14723,7 +14723,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; sometimes also nth_value. You can redefine the frame by adding a suitable frame specification (RANGE or ROWS) to the OVER clause. - See for more information + See for more information about frame specifications. @@ -14887,7 +14887,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The left-hand side of this form of IN is a row constructor, - as described in . + as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. 
The left-hand expressions are @@ -14943,7 +14943,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The left-hand side of this form of NOT IN is a row constructor, - as described in . + as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are @@ -15008,7 +15008,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The left-hand side of this form of ANY is a row constructor, - as described in . + as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are @@ -15024,7 +15024,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); - See for details about the meaning + See for details about the meaning of a row constructor comparison. @@ -15064,7 +15064,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The left-hand side of this form of ALL is a row constructor, - as described in . + as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are @@ -15080,7 +15080,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); - See for details about the meaning + See for details about the meaning of a row constructor comparison. @@ -15099,7 +15099,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The left-hand side is a row constructor, - as described in . + as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. Furthermore, the subquery cannot return more than one row. (If it returns zero rows, @@ -15108,7 +15108,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); - See for details about the meaning + See for details about the meaning of a row constructor comparison. @@ -15327,7 +15327,7 @@ AND Each side is a row constructor, - as described in . + as described in . The two row values must have the same number of fields. Each side is evaluated and they are compared row-wise. Row constructor comparisons are allowed when the operator is @@ -15419,8 +15419,8 @@ AND result depends on comparing two NULL values or a NULL and a non-NULL. PostgreSQL does this only when comparing the results of two row constructors (as in - ) or comparing a row constructor - to the output of a subquery (as in ). + ) or comparing a row constructor + to the output of a subquery (as in ). In other contexts where two composite-type values are compared, two NULL field values are considered equal, and a NULL is considered larger than a non-NULL. This is necessary in order to have consistent sorting @@ -15441,7 +15441,7 @@ AND class, or is the negator of the = member of a B-tree operator class.) The default behavior of the above operators is the same as for IS [ NOT ] DISTINCT FROM for row constructors (see - ). + ). @@ -15481,10 +15481,10 @@ AND This section describes functions that possibly return more than one row. The most widely used functions in this class are series generating - functions, as detailed in and - . Other, more specialized + functions, as detailed in and + . Other, more specialized set-returning functions are described elsewhere in this manual. - See for ways to combine multiple + See for ways to combine multiple set-returning functions. 
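The series-generating and other set-returning functions compose naturally in the FROM clause; a small sketch:

SELECT * FROM generate_series(2, 8, 2) AS g(n);                         -- rows 2, 4, 6, 8
SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS t(elem, n); -- pairs each element with its position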
@@ -15738,14 +15738,14 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); System Information Functions - shows several + shows several functions that extract session and system information. In addition to the functions listed in this section, there are a number of functions related to the statistics system that also provide system - information. See for more + information. See for more information. @@ -15910,7 +15910,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); version() text - PostgreSQL version information. See also for a machine-readable version. + PostgreSQL version information. See also for a machine-readable version. @@ -15988,11 +15988,11 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); The session_user is normally the user who initiated the current database connection; but superusers can change this setting - with . + with . The current_user is the user identifier that is applicable for permission checking. Normally it is equal to the session user, but it can be changed with - . + . It also changes during the execution of functions with the attribute SECURITY DEFINER. In Unix parlance, the session user is the real user and @@ -16111,7 +16111,7 @@ SET search_path TO schema , sc pg_current_logfile returns, as text, the path of the log file(s) currently in use by the logging collector. - The path includes the directory + The path includes the directory and the log file name. Log collection must be enabled or the return value is NULL. When multiple log files exist, each in a different format, pg_current_logfile called @@ -16122,7 +16122,7 @@ SET search_path TO schema , sc either csvlog or stderr as the value of the optional parameter. The return value is NULL when the log format requested is not a configured - . The + . The pg_current_logfiles reflects the contents of the current_logfiles file. @@ -16160,7 +16160,7 @@ SET search_path TO schema , sc fraction of the total available space for notifications currently occupied by notifications that are waiting to be processed, as a double in the range 0-1. - See and + See and for more information. @@ -16186,7 +16186,7 @@ SET search_path TO schema , sc running a SERIALIZABLE transaction blocks a SERIALIZABLE READ ONLY DEFERRABLE transaction from acquiring a snapshot until the latter determines that it is safe to avoid - taking any predicate locks. See for + taking any predicate locks. See for more information about serializable and deferrable transactions. Frequent calls to this function could have some impact on database performance, because it needs access to the predicate lock manager's shared @@ -16200,10 +16200,10 @@ SET search_path TO schema , sc version returns a string describing the PostgreSQL server's version. You can also - get this information from or - for a machine-readable version, . + get this information from or + for a machine-readable version, . Software developers should use server_version_num - (available since 8.2) or instead + (available since 8.2) or instead of parsing the text version. @@ -16213,9 +16213,9 @@ SET search_path TO schema , sc - lists functions that + lists functions that allow the user to query object access privileges programmatically. - See for more information about + See for more information about privileges. @@ -16569,7 +16569,7 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') are analogous to has_table_privilege. 
When specifying a function by a text string rather than by OID, the allowed input is the same as for the regprocedure data type - (see ). + (see ). The desired access privilege type must evaluate to EXECUTE. An example is: @@ -16631,7 +16631,7 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); are analogous to has_table_privilege. When specifying a type by a text string rather than by OID, the allowed input is the same as for the regtype data type - (see ). + (see ). The desired access privilege type must evaluate to USAGE. @@ -16659,7 +16659,7 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); - shows functions that + shows functions that determine whether a certain object is visible in the current schema search path. For example, a table is said to be visible if its @@ -16957,7 +16957,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); - lists functions that + lists functions that extract information from the system catalogs. @@ -17250,7 +17250,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); second parameter, being just a column name, is treated as double-quoted and has its case preserved. The function returns a value suitably formatted for passing to sequence functions - (see ). A typical use is in reading the + (see ). A typical use is in reading the current value of a sequence for an identity or serial column, for example: SELECT currval(pg_get_serial_sequence('sometable', 'id')); @@ -17270,9 +17270,9 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); property. NULL is returned if the property name is not known or does not apply to the particular object, or if the OID or column number does not identify a valid object. Refer to - for column properties, - for index properties, and - for access method properties. + for column properties, + for index properties, and + for access method properties. (Note that extension access methods can define additional property names for their indexes.) @@ -17423,7 +17423,7 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); value that is passed to it. This can be helpful for troubleshooting or dynamically constructing SQL queries. The function is declared as returning regtype, which is an OID alias type (see - ); this means that it is the same as an + ); this means that it is the same as an OID for comparison purposes but displays as a type name. For example: SELECT pg_typeof(33); @@ -17496,7 +17496,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); - lists functions related to + lists functions related to database object identification and addressing. @@ -17603,8 +17603,8 @@ SELECT collation for ('foo' COLLATE "de_DE"); - The functions shown in - extract comments previously stored with the + The functions shown in + extract comments previously stored with the command. A null value is returned if no comment could be found for the specified parameters. @@ -17701,7 +17701,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); - The functions shown in + The functions shown in provide server transaction information in an exportable form. The main use of these functions is to determine which transactions were committed between two snapshots. @@ -17767,7 +17767,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); The data type used by these functions, txid_snapshot, stores information about transaction ID visibility at a particular moment in time. Its components are - described in . + described in . 
@@ -17843,11 +17843,11 @@ SELECT collation for ('foo' COLLATE "de_DE"); - The functions shown in + The functions shown in provide information about transactions that have been already committed. These functions mainly provide information about when the transactions were committed. They only provide useful data when - configuration option is enabled + configuration option is enabled and only for transactions that were committed after it was enabled. @@ -17881,13 +17881,13 @@ SELECT collation for ('foo' COLLATE "de_DE");
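With track_commit_timestamp enabled, the commit-timestamp functions documented here can be exercised as follows (mytable is hypothetical):

SELECT pg_xact_commit_timestamp(xmin) FROM mytable LIMIT 1;  -- commit time of the transaction that produced the row
SELECT * FROM pg_last_committed_xact();                      -- xid and commit time of the latest committed transaction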
- The functions shown in + The functions shown in print information initialized during initdb, such as the catalog version. They also show information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. They provide most of the same information, from the same source, as - , although in a form better suited + , although in a form better suited to SQL functions. @@ -17949,7 +17949,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); pg_control_checkpoint returns a record, shown in - + @@ -18060,7 +18060,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); pg_control_system returns a record, shown in - +
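Record-returning functions such as pg_control_checkpoint are easiest to consume by selecting individual fields, for example:

SELECT checkpoint_lsn, redo_lsn, timeline_id FROM pg_control_checkpoint();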
@@ -18101,7 +18101,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); pg_control_init returns a record, shown in - +
@@ -18182,7 +18182,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); pg_control_recovery returns a record, shown in - +
@@ -18240,7 +18240,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); Configuration Settings Functions - shows the functions + shows the functions available to query and alter run-time configuration parameters. @@ -18356,7 +18356,7 @@ SELECT set_config('log_statement_stats', 'off', false); The functions shown in send control signals to + linkend="functions-admin-signal-table"/> send control signals to other server processes. Use of these functions is restricted to superusers by default but access may be granted to others using GRANT, with noted exceptions. @@ -18491,7 +18491,7 @@ SELECT set_config('log_statement_stats', 'off', false); The functions shown in assist in making on-line backups. + linkend="functions-admin-backup-table"/> assist in making on-line backups. These functions cannot be executed during recovery (except pg_is_in_backup, pg_backup_start_time and pg_wal_lsn_diff). @@ -18674,7 +18674,7 @@ postgres=# select pg_start_backup('label_goes_here'); pg_create_restore_point creates a named write-ahead log record that can be used as recovery target, and returns the corresponding write-ahead log location. The given name can then be used with - to specify the point up to which + to specify the point up to which recovery will proceed. Avoid creating multiple restore points with the same name, since recovery will stop at the first one whose name matches the recovery target. @@ -18719,12 +18719,12 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_wal_lsn_diff calculates the difference in bytes between two write-ahead log locations. It can be used with pg_stat_replication or some functions shown in - to get the replication lag. + to get the replication lag. For details about proper usage of these functions, see - . + . @@ -18747,7 +18747,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The functions shown in provide information + linkend="functions-recovery-info-table"/> provide information about the current status of the standby. These functions may be executed both during recovery and in normal running. @@ -18828,7 +18828,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The functions shown in control the progress of recovery. + linkend="functions-recovery-control-table"/> control the progress of recovery. These functions may be executed only during recovery. @@ -18919,8 +18919,8 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Snapshots are exported with the pg_export_snapshot function, - shown in , and - imported with the command. + shown in , and + imported with the command.
@@ -18953,11 +18953,11 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); COMMITTED transactions, since in REPEATABLE READ and higher isolation levels, transactions use the same snapshot throughout their lifetime. Once a transaction has exported any snapshots, it cannot - be prepared with . + be prepared with . - See for details of how to use an + See for details of how to use an exported snapshot. @@ -18967,25 +18967,25 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The functions shown - in are for + in are for controlling and interacting with replication features. - See , - , and - + See , + , and + for information about the underlying features. Use of these functions is restricted to superusers. Many of these functions have equivalent commands in the replication - protocol; see . + protocol; see . The functions described in - , - , and - + , + , and + are also relevant for replication. @@ -19018,7 +19018,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); the LSN is reserved on first connection from a streaming replication client. Streaming changes from a physical slot is only possible with the streaming-replication protocol — - see . The optional third + see . The optional third parameter, temporary, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by current session. Temporary slots are also @@ -19386,7 +19386,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Database Object Management Functions - The functions shown in calculate + The functions shown in calculate the disk space usage of database objects. @@ -19598,13 +19598,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); 'fsm' returns the size of the Free Space Map - (see ) associated with the relation. + (see ) associated with the relation. 'vm' returns the size of the Visibility Map - (see ) associated with the relation. + (see ) associated with the relation. @@ -19656,7 +19656,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - The functions shown in assist + The functions shown in assist in identifying the specific disk files associated with database objects. @@ -19714,7 +19714,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); pg_relation_filenode accepts the OID or name of a table, index, sequence, or toast table, and returns the filenode number currently assigned to it. The filenode is the base component of the file - name(s) used for the relation (see + name(s) used for the relation (see for more information). For most tables the result is the same as pg_class.relfilenode, but for certain system catalogs relfilenode is zero and this function must @@ -19737,7 +19737,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - lists functions used to manage + lists functions used to manage collations. @@ -19775,7 +19775,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); operating system. If this is different from the value in pg_collation.collversion, then objects depending on the collation might need to be rebuilt. See also - . + . @@ -19783,7 +19783,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); catalog pg_collation based on all the locales it finds in the operating system. This is what initdb uses; - see for more details. If additional + see for more details. 
If additional locales are installed into the operating system later on, this function can be run again to add collations for the new locales. Locales that match existing entries in pg_collation will be skipped. @@ -19817,7 +19817,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); - shows the functions + shows the functions available for index maintenance tasks. These functions cannot be executed during recovery. Use of these functions is restricted to superusers and the owner @@ -19882,7 +19882,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Note that if the argument is a GIN index built with the fastupdate option disabled, no cleanup happens and the return value is 0, because the index doesn't have a pending list. - Please see and + Please see and for details of the pending list and fastupdate option. @@ -19893,7 +19893,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The functions shown in provide native access to + linkend="functions-admin-genfile-table"/> provide native access to files on the machine hosting the server. Only files within the database cluster directory and the log_directory can be accessed. Use a relative path for files in the cluster directory, @@ -20064,9 +20064,9 @@ SELECT (pg_stat_file('filename')).modification; Advisory Lock Functions - The functions shown in + The functions shown in manage advisory locks. For details about proper use of these functions, - see . + see .
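A sketch of typical session-level advisory lock use (the key 12345 is an arbitrary application-chosen value):

SELECT pg_advisory_lock(12345);       -- waits until the session-level lock is granted
SELECT pg_try_advisory_lock(12345);   -- from another session: returns false instead of waiting
SELECT pg_advisory_unlock(12345);     -- session-level locks persist across transactions until released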
@@ -20392,7 +20392,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); For more information about creating triggers, see - . + . @@ -20406,7 +20406,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); For more information about event triggers, - see . + see . @@ -20645,7 +20645,7 @@ CREATE EVENT TRIGGER test_event_trigger_for_drops The functions shown in - + provide information about a table for which a table_rewrite event has just been called. If called in any other context, an error is raised. diff --git a/doc/src/sgml/geqo.sgml b/doc/src/sgml/geqo.sgml index 0f91272c54..5120dfbb42 100644 --- a/doc/src/sgml/geqo.sgml +++ b/doc/src/sgml/geqo.sgml @@ -237,7 +237,7 @@ choices made during both the initial population selection and subsequent mutation of the best candidates. To avoid surprising changes of the selected plan, each run of the GEQO algorithm restarts its - random number generator with the current + random number generator with the current parameter setting. As long as geqo_seed and the other GEQO parameters are kept fixed, the same plan will be generated for a given query (and other planner inputs such as statistics). To experiment @@ -320,13 +320,13 @@ - + - + diff --git a/doc/src/sgml/gin.sgml b/doc/src/sgml/gin.sgml index 95d6278d28..cc7cd1ed2c 100644 --- a/doc/src/sgml/gin.sgml +++ b/doc/src/sgml/gin.sgml @@ -68,8 +68,8 @@ The core PostgreSQL distribution includes the GIN operator classes shown in - . - (Some of the optional modules described in + . + (Some of the optional modules described in provide additional GIN operator classes.) @@ -127,7 +127,7 @@ Of the two operator classes for type jsonb, jsonb_ops is the default. jsonb_path_ops supports fewer operators but offers better performance for those operators. - See for details. + See for details. @@ -182,7 +182,7 @@ query is the value on the right-hand side of an indexable operator whose left-hand side is the indexed column. n is the strategy number of the operator within the - operator class (see ). + operator class (see ). Often, extractQuery will need to consult n to determine the data type of query and the method it should use to extract key values. @@ -406,7 +406,7 @@ provide the comparePartial method, and its extractQuery method must set the pmatch parameter when a partial-match query is encountered. See - for details. + for details. @@ -466,7 +466,7 @@ When the table is vacuumed or autoanalyzed, or when gin_clean_pending_list function is called, or if the pending list becomes larger than - , the entries are moved to the + , the entries are moved to the main GIN data structure using the same bulk insert techniques used during initial index creation. This greatly improves GIN index update speed, even counting the additional @@ -488,7 +488,7 @@ If consistent response time is more important than update speed, use of pending entries can be disabled by turning off the fastupdate storage parameter for a - GIN index. See + GIN index. See for details. @@ -531,14 +531,14 @@ As of PostgreSQL 8.4, this advice is less necessary since delayed indexing is used (see for details). But for very large updates + linkend="gin-fast-update"/> for details). But for very large updates it may still be best to drop and recreate the index. 
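Where consistent lookup latency matters more than update speed, the pending-list machinery described above can be managed explicitly (the index name is hypothetical):

ALTER INDEX my_gin_idx SET (fastupdate = off);   -- future entries bypass the pending list
SELECT gin_clean_pending_list('my_gin_idx');     -- flush already-queued entries; returns the number of pending-list pages removed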
- + Build time for a GIN index is very sensitive to @@ -549,7 +549,7 @@ - + During a series of insertions into an existing GIN @@ -574,7 +574,7 @@ - + The primary goal of developing GIN indexes was @@ -631,7 +631,7 @@ The core PostgreSQL distribution includes the GIN operator classes previously shown in - . + . The following contrib modules also contain GIN operator classes: diff --git a/doc/src/sgml/gist.sgml b/doc/src/sgml/gist.sgml index f2f9ca0853..44a3b2c03c 100644 --- a/doc/src/sgml/gist.sgml +++ b/doc/src/sgml/gist.sgml @@ -46,8 +46,8 @@ The core PostgreSQL distribution includes the GiST operator classes shown in - . - (Some of the optional modules described in + . + (Some of the optional modules described in provide additional GiST operator classes.) @@ -985,7 +985,7 @@ my_fetch(PG_FUNCTION_ARGS) By default, a GiST index build switches to the buffering method when the - index size reaches . It can + index size reaches . It can be manually turned on or off by the buffering parameter to the CREATE INDEX command. The default behavior is good for most cases, but turning buffering off might speed up the build somewhat if the input diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml index 6c0679b0a8..46bf198a2a 100644 --- a/doc/src/sgml/high-availability.sgml +++ b/doc/src/sgml/high-availability.sgml @@ -100,7 +100,7 @@ Shared hardware functionality is common in network storage devices. Using a network file system is also possible, though care must be taken that the file system has full POSIX behavior (see ). One significant limitation of this + linkend="creating-cluster-nfs"/>). One significant limitation of this method is that if the shared disk array fails or becomes corrupt, the primary and standby servers are both nonfunctional. Another issue is that the standby server should never access the shared storage while @@ -151,9 +151,9 @@ protocol to make nodes agree on a serializable transactional order. A standby server can be implemented using file-based log shipping - () or streaming replication (see - ), or a combination of both. For - information on hot standby, see . + () or streaming replication (see + ), or a combination of both. For + information on hot standby, see . @@ -169,8 +169,8 @@ protocol to make nodes agree on a serializable transactional order. individual tables to be replicated. Logical replication doesn't require a particular server to be designated as a master or a replica but allows data to flow in multiple directions. For more information on logical - replication, see . Through the - logical decoding interface (), + replication, see . Through the + logical decoding interface (), third-party extensions can also provide similar functionality. @@ -224,8 +224,8 @@ protocol to make nodes agree on a serializable transactional order. standby servers via master-standby replication, not by the replication middleware. Care must also be taken that all transactions either commit or abort on all servers, perhaps - using two-phase commit ( - and ). + using two-phase commit ( + and ). Pgpool-II and Continuent Tungsten are examples of this type of replication. @@ -272,8 +272,8 @@ protocol to make nodes agree on a serializable transactional order. PostgreSQL does not offer this type of replication, though PostgreSQL two-phase commit ( and ) + linkend="sql-prepare-transaction"/> and ) can be used to implement this in application code or middleware. @@ -295,7 +295,7 @@ protocol to make nodes agree on a serializable transactional order. 
- summarizes + summarizes the capabilities of the various solutions listed above. @@ -522,7 +522,7 @@ protocol to make nodes agree on a serializable transactional order. varies according to the transaction rate of the primary server. Record-based log shipping is more granular and streams WAL changes incrementally over a network connection (see ). + linkend="streaming-replication"/>). @@ -534,7 +534,7 @@ protocol to make nodes agree on a serializable transactional order. archive_timeout parameter, which can be set as low as a few seconds. However such a low setting will substantially increase the bandwidth required for file shipping. - Streaming replication (see ) + Streaming replication (see ) allows a much smaller window of data loss. @@ -547,7 +547,7 @@ protocol to make nodes agree on a serializable transactional order. rollforward will take considerably longer, so that technique only offers a solution for disaster recovery, not high availability. A standby server can also be used for read-only queries, in which case - it is called a Hot Standby server. See for + it is called a Hot Standby server. See for more information. @@ -585,7 +585,7 @@ protocol to make nodes agree on a serializable transactional order. associated with tablespaces will be passed across unmodified, so both primary and standby servers must have the same mount paths for tablespaces if that feature is used. Keep in mind that if - + is executed on the primary, any new mount point needed for it must be created on the primary and all standby servers before the command is executed. Hardware need not be exactly the same, but experience shows @@ -618,7 +618,7 @@ protocol to make nodes agree on a serializable transactional order. In standby mode, the server continuously applies WAL received from the master server. The standby server can read WAL from a WAL archive - (see ) or directly from the master + (see ) or directly from the master over a TCP connection (streaming replication). The standby server will also attempt to restore any WAL found in the standby cluster's pg_wal directory. That typically happens after a server @@ -657,7 +657,7 @@ protocol to make nodes agree on a serializable transactional order. Set up continuous archiving on the primary to an archive directory accessible from the standby, as described - in . The archive location should be + in . The archive location should be accessible from the standby even when the master is down, i.e. it should reside on the standby server itself or another trusted server, not on the master server. @@ -676,7 +676,7 @@ protocol to make nodes agree on a serializable transactional order. - Take a base backup as described in + Take a base backup as described in to bootstrap the standby server. @@ -686,7 +686,7 @@ protocol to make nodes agree on a serializable transactional order. To set up the standby server, restore the base backup taken from primary - server (see ). Create a recovery + server (see ). Create a recovery command file recovery.conf in the standby's cluster data directory, and turn on standby_mode. Set restore_command to a simple command to copy files from @@ -701,7 +701,7 @@ protocol to make nodes agree on a serializable transactional order. Do not use pg_standby or similar tools with the built-in standby mode described here. restore_command should return immediately if the file does not exist; the server will retry the command again if - necessary. See + necessary. See for using tools like pg_standby. 
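Pulling the preceding steps together, a minimal recovery.conf for the archive-based standby might look like this (the archive path is a placeholder):

standby_mode = 'on'
restore_command = 'cp /path/to/archive/%f %p'   # cp fails at once when the file is absent, which is the behavior the server expects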
@@ -724,11 +724,11 @@ protocol to make nodes agree on a serializable transactional order. If you're using a WAL archive, its size can be minimized using the parameter to remove files that are no + linkend="archive-cleanup-command"/> parameter to remove files that are no longer required by the standby server. The pg_archivecleanup utility is designed specifically to be used with archive_cleanup_command in typical single-standby - configurations, see . + configurations, see . Note however, that if you're using the archive for backup purposes, you need to retain files needed to recover from at least the latest base backup, even if they're no longer needed by the standby. @@ -768,7 +768,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' Streaming replication is asynchronous by default - (see ), in which case there is + (see ), in which case there is a small delay between committing a transaction in the primary and the changes becoming visible in the standby. This delay is however much smaller than with file-based log shipping, typically under one second @@ -791,27 +791,27 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' To use streaming replication, set up a file-based log-shipping standby - server as described in . The step that + server as described in . The step that turns a file-based log-shipping standby into streaming replication standby is setting primary_conninfo setting in the recovery.conf file to point to the primary server. Set - and authentication options + and authentication options (see pg_hba.conf) on the primary so that the standby server can connect to the replication pseudo-database on the primary - server (see ). + server (see ). On systems that support the keepalive socket option, setting - , - and - helps the primary promptly + , + and + helps the primary promptly notice a broken connection. Set the maximum number of concurrent connections from the standby servers - (see for details). + (see for details). @@ -882,15 +882,15 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' standby. These locations can be retrieved using pg_current_wal_lsn on the primary and pg_last_wal_receive_lsn on the standby, - respectively (see and - for details). + respectively (see and + for details). The last WAL receive location in the standby is also displayed in the process status of the WAL receiver process, displayed using the - ps command (see for details). + ps command (see for details). You can retrieve a list of WAL sender processes via the - view. Large differences between + view. Large differences between pg_current_wal_lsn and the view's sent_lsn field might indicate that the master server is under heavy load, while differences between sent_lsn and @@ -899,7 +899,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' On a hot standby, the status of the WAL receiver process can be retrieved - via the view. A large + via the view. A large difference between pg_last_wal_replay_lsn and the view's received_lsn indicates that WAL is being received faster than it can be replayed. @@ -922,9 +922,9 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' In lieu of using replication slots, it is possible to prevent the removal - of old WAL segments using , or by + of old WAL segments using , or by storing the segments in an archive using - . + . 
However, these methods often result in retaining more WAL segments than required, whereas replication slots retain only the number of segments known to be needed. An advantage of these methods is that they bound @@ -932,8 +932,8 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' to do this using replication slots. - Similarly, - and provide protection against + Similarly, + and provide protection against relevant rows being removed by vacuum, but the former provides no protection during any time period when the standby is not connected, and the latter often needs to be set to a high value to provide adequate @@ -952,8 +952,8 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' Slots can be created and dropped either via the streaming replication - protocol (see ) or via SQL - functions (see ). + protocol (see ) or via SQL + functions (see ). @@ -1017,7 +1017,7 @@ primary_slot_name = 'node_a_slot' Cascading replication is currently asynchronous. Synchronous replication - (see ) settings have no effect on + (see ) settings have no effect on cascading replication at present. @@ -1034,7 +1034,7 @@ primary_slot_name = 'node_a_slot' To use cascading replication, set up the cascading standby so that it can accept replication connections (that is, set - and , + and , and configure host-based authentication). You will also need to set primary_conninfo in the downstream @@ -1109,11 +1109,11 @@ primary_slot_name = 'node_a_slot' Once streaming replication has been configured, configuring synchronous replication requires only one additional configuration step: - must be set to + must be set to a non-empty value. synchronous_commit must also be set to on, but since this is the default value, typically no change is - required. (See and - .) + required. (See and + .) This configuration will cause each commit to wait for confirmation that the standby has written the commit record to durable storage. @@ -1451,7 +1451,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' and might stay down. To return to normal operation, a standby server must be recreated, either on the former primary system when it comes up, or on a third, - possibly new, system. The utility can be + possibly new, system. The utility can be used to speed up this process on large clusters. Once complete, the primary and standby can be considered to have switched roles. Some people choose to use a third @@ -1491,7 +1491,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' This was the only option available in versions 8.4 and below. In this setup, set standby_mode off, because you are implementing the polling required for standby operation yourself. See the - module for a reference + module for a reference implementation of this. @@ -1551,7 +1551,7 @@ if (!triggered) A working example of a waiting restore_command is provided - in the module. It + in the module. It should be used as a reference on how to correctly implement the logic described above. It can also be extended as needed to support specific configurations and environments. @@ -1592,17 +1592,17 @@ if (!triggered) Set up continuous archiving from the primary to a WAL archive directory on the standby server. Ensure that - , - and - + , + and + are set appropriately on the primary - (see ). + (see ). Make a base backup of the primary server (see ), and load this data onto the standby. + linkend="backup-base-backup"/>), and load this data onto the standby. 
@@ -1610,7 +1610,7 @@ if (!triggered) Begin recovery on the standby server from the local WAL archive, using a recovery.conf that specifies a restore_command that waits as described - previously (see ). + previously (see ). @@ -1644,7 +1644,7 @@ if (!triggered) An external program can call the pg_walfile_name_offset() - function (see ) + function (see ) to find out the file name and the exact byte offset within it of the current end of WAL. It can then access the WAL file directly and copy the data from the last known end of WAL through the current end @@ -1663,7 +1663,7 @@ if (!triggered) Starting with PostgreSQL version 9.0, you can use - streaming replication (see ) to + streaming replication (see ) to achieve the same benefits with less effort. @@ -1697,7 +1697,7 @@ if (!triggered) User's Overview - When the parameter is set to true on a + When the parameter is set to true on a standby server, it will begin accepting connections once the recovery has brought the system to a consistent state. All such connections are strictly read-only; not even temporary tables may be written. @@ -1713,7 +1713,7 @@ if (!triggered) made by that transaction will be visible to any new snapshots taken on the standby. Snapshots may be taken at the start of each query or at the start of each transaction, depending on the current transaction isolation - level. For more details, see . + level. For more details, see . @@ -1891,7 +1891,7 @@ if (!triggered) Users will be able to tell whether their session is read-only by issuing SHOW transaction_read_only. In addition, a set of - functions () allow users to + functions () allow users to access information about the standby server. These allow you to write programs that are aware of the current state of the database. These can be used to monitor the progress of recovery, or to allow you to @@ -1986,8 +1986,8 @@ if (!triggered) When a conflicting query is short, it's typically desirable to allow it to complete by delaying WAL application for a little bit; but a long delay in WAL application is usually not desirable. So the cancel mechanism has - parameters, and , that define the maximum + parameters, and , that define the maximum allowed delay in WAL application. Conflicting queries will be canceled once it has taken longer than the relevant delay setting to apply any newly-received WAL data. There are two parameters so that different delay @@ -2082,7 +2082,7 @@ if (!triggered) - Another option is to increase + Another option is to increase on the primary server, so that dead rows will not be cleaned up as quickly as they normally would be. This will allow more time for queries to execute before they are canceled on the standby, without having to set @@ -2189,8 +2189,8 @@ LOG: database system is ready to accept read only connections It is important that the administrator select appropriate settings for - and . The best choices vary + and . The best choices vary depending on business priorities. For example if the server is primarily tasked as a High Availability server, then you will want low delay settings, perhaps even zero, though that is a very aggressive setting. If @@ -2382,23 +2382,23 @@ LOG: database system is ready to accept read only connections Various parameters have been mentioned above in - and - . + and + . - On the primary, parameters and - can be used. - and - have no effect if set on + On the primary, parameters and + can be used. + and + have no effect if set on the primary. - On the standby, parameters , - and - can be used. 
- has no effect + On the standby, parameters , + and + can be used. + has no effect as long as the server remains in standby mode, though it will become relevant if the standby becomes primary. @@ -2452,8 +2452,8 @@ LOG: database system is ready to accept read only connections The Serializable transaction isolation level is not yet available in hot - standby. (See and - for details.) + standby. (See and + for details.) An attempt to set a transaction to the serializable isolation level in hot standby mode will generate an error. diff --git a/doc/src/sgml/history.sgml b/doc/src/sgml/history.sgml index b59e65bb20..59bfdb6055 100644 --- a/doc/src/sgml/history.sgml +++ b/doc/src/sgml/history.sgml @@ -31,12 +31,12 @@ Office (ARO), the National Science Foundation (NSF), and ESL, Inc. The implementation of POSTGRES began in 1986. The initial - concepts for the system were presented in , + concepts for the system were presented in , and the definition of the initial data model appeared in . The design of the rule system at that time was - described in . The rationale and + linkend="rowe87"/>. The design of the rule system at that time was + described in . The rationale and architecture of the storage manager were detailed in . + linkend="ston87b"/>. @@ -44,10 +44,10 @@ releases since then. The first demoware system became operational in 1987 and was shown at the 1988 ACM-SIGMOD Conference. Version 1, described in - , was released to a few external users in + , was released to a few external users in June 1989. In response to a critique of the first rule system - (), the rule system was redesigned (), and Version 2 was released in June 1990 with + (), the rule system was redesigned (), and Version 2 was released in June 1990 with the new rule system. Version 3 appeared in 1991 and added support for multiple storage managers, an improved query executor, and a rewritten rule system. For the most part, subsequent releases @@ -216,7 +216,7 @@ Details about what has happened in PostgreSQL since - then can be found in . + then can be found in . diff --git a/doc/src/sgml/hstore.sgml b/doc/src/sgml/hstore.sgml index 0264e4e532..94ccd1201e 100644 --- a/doc/src/sgml/hstore.sgml +++ b/doc/src/sgml/hstore.sgml @@ -70,7 +70,7 @@ key => NULL constant, then any single-quote characters and (depending on the setting of the standard_conforming_strings configuration parameter) backslash characters need to be escaped correctly. See - for more on the handling of string + for more on the handling of string constants. @@ -87,8 +87,8 @@ key => NULL The operators provided by the hstore module are - shown in , the functions - in . + shown in , the functions + in .
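To make the operator and function tables referenced above concrete, here is a minimal sketch; it assumes the hstore extension is installed and uses a hypothetical table h:

-- Store key/value pairs and query them back (assumes CREATE EXTENSION hstore).
CREATE TABLE h (attrs hstore);
INSERT INTO h VALUES ('color => red, size => small');

SELECT attrs -> 'color' FROM h;   -- fetch the value for a key: 'red'
SELECT attrs ? 'size' FROM h;     -- test key existence: true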
@@ -629,7 +629,7 @@ ALTER TABLE tablename ALTER hstorecol TYPE hstore USING hstorecol || ''; extensions for PL/Python are called hstore_plpythonu, hstore_plpython2u, and hstore_plpython3u - (see for the PL/Python naming + (see for the PL/Python naming convention). If you use them, hstore values are mapped to Python dictionaries. diff --git a/doc/src/sgml/indexam.sgml b/doc/src/sgml/indexam.sgml index a94b188f22..a7f6c8dc6a 100644 --- a/doc/src/sgml/indexam.sgml +++ b/doc/src/sgml/indexam.sgml @@ -22,7 +22,7 @@ pages so that they can use the regular storage manager and buffer manager to access the index contents. (All the existing index access methods furthermore use the standard page layout described in , and most use the same format for index + linkend="storage-page-layout"/>, and most use the same format for index tuple headers; but these decisions are not forced on an access method.) @@ -31,7 +31,7 @@ tuple identifiers, or TIDs, of row versions (tuples) in the index's parent table. A TID consists of a block number and an item number within that block (see ). This is sufficient + linkend="storage-page-layout"/>). This is sufficient information to fetch a particular row version from the table. Indexes are not directly aware that under MVCC, there might be multiple extant versions of the same logical row; to an index, each tuple is @@ -52,8 +52,8 @@ system catalog. The pg_am entry specifies a name and a handler function for the access method. These entries can be created and deleted using the - and - SQL commands. + and + SQL commands. @@ -71,7 +71,7 @@ functions for the access method, which do all of the real work to access indexes. These support functions are plain C functions and are not visible or callable at the SQL level. The support functions are described - in . + in . @@ -153,7 +153,7 @@ typedef struct IndexAmRoutine These entries allow the planner to determine what kinds of query qualifications can be used with indexes of this access method. Operator families and classes are described - in , which is prerequisite material for reading + in , which is prerequisite material for reading this chapter. @@ -177,7 +177,7 @@ typedef struct IndexAmRoutine Some of the flag fields of IndexAmRoutine have nonobvious implications. The requirements of amcanunique - are discussed in . + are discussed in . The amcanmulticol flag asserts that the access method supports multicolumn indexes, while amoptionalkey asserts that it allows scans @@ -271,7 +271,7 @@ aminsert (Relation indexRelation, amcanunique flag is true) then checkUnique indicates the type of uniqueness check to perform. This varies depending on whether the unique constraint is - deferrable; see for details. + deferrable; see for details. Normally the access method only needs the heapRelation parameter when performing uniqueness checking (since then it will have to look into the heap to verify tuple liveness). @@ -386,7 +386,7 @@ amcostestimate (PlannerInfo *root, double *indexCorrelation); Estimate the costs of an index scan. This function is described fully - in , below. + in , below. @@ -480,7 +480,7 @@ amvalidate (Oid opclassoid); The purpose of an index, of course, is to support scans for tuples matching an indexable WHERE condition, often called a qualifier or scan key. The semantics of - index scanning are described more fully in , + index scanning are described more fully in , below. An index access method can support plain index scans, bitmap index scans, or both. 
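At the SQL level, the pg_am entry described earlier is created by naming the handler function. A sketch, under the assumption that a handler function my_index_am_handler has already been written in C and declared to return index_am_handler; both names are hypothetical:

-- Register a new index access method backed by a handler function.
CREATE ACCESS METHOD my_index_am TYPE INDEX HANDLER my_index_am_handler;

-- Unregister it again; any indexes built with it must be dropped first.
DROP ACCESS METHOD my_index_am;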
The scan-related functions that an index access method must or may provide are: @@ -594,7 +594,7 @@ amgetbitmap (IndexScanDesc scan, amgetbitmap and amgettuple cannot be used in the same index scan; there are other restrictions too when using amgetbitmap, as explained - in . + in . @@ -852,7 +852,7 @@ amparallelrescan (IndexScanDesc scan); index tuples. Finally, amgetbitmap does not guarantee any locking of the returned tuples, with implications - spelled out in . + spelled out in . @@ -901,7 +901,7 @@ amparallelrescan (IndexScanDesc scan); A new heap entry is made before making its index entries. (Therefore a concurrent index scan is likely to fail to see the heap entry. This is okay because the index reader would be uninterested in an - uncommitted row anyway. But see .) + uncommitted row anyway. But see .) diff --git a/doc/src/sgml/indices.sgml b/doc/src/sgml/indices.sgml index 248ed7e8eb..0818196e76 100644 --- a/doc/src/sgml/indices.sgml +++ b/doc/src/sgml/indices.sgml @@ -77,7 +77,7 @@ CREATE INDEX test1_id_index ON test1 (id); than a sequential table scan. But you might have to run the ANALYZE command regularly to update statistics to allow the query planner to make educated decisions. - See for information about + See for information about how to find out whether an index is used and when and why the planner might choose not to use an index. @@ -99,7 +99,7 @@ CREATE INDEX test1_id_index ON test1 (id); It is possible to allow writes to occur in parallel with index creation, but there are several caveats to be aware of — for more information see . + endterm="sql-createindex-concurrently-title"/>. @@ -161,7 +161,7 @@ CREATE INDEX test1_id_index ON test1 (id); col LIKE '%bar'. However, if your database does not use the C locale you will need to create the index with a special operator class to support indexing of pattern-matching queries; see - below. It is also possible to use + below. It is also possible to use B-tree indexes for ILIKE and ~*, but only if the pattern starts with non-alphabetic characters, i.e., characters that are not affected by @@ -226,13 +226,13 @@ CREATE INDEX name ON table && - (See for the meaning of + (See for the meaning of these operators.) The GiST operator classes included in the standard distribution are - documented in . + documented in . Many other GiST operator classes are available in the contrib collection or as separate - projects. For more information see . + projects. For more information see . @@ -244,7 +244,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; which finds the ten places closest to a given target point. The ability to do this is again dependent on the particular operator class being used. - In , operators that can be + In , operators that can be used in this way are listed in the column Ordering Operators. @@ -274,11 +274,11 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; >^ - (See for the meaning of + (See for the meaning of these operators.) The SP-GiST operator classes included in the standard distribution are - documented in . - For more information see . + documented in . + For more information see . @@ -313,13 +313,13 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; && - (See for the meaning of + (See for the meaning of these operators.) The GIN operator classes included in the standard distribution are - documented in . + documented in . Many other GIN operator classes are available in the contrib collection or as separate - projects. 
For more information see . + projects. For more information see . @@ -351,8 +351,8 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; The BRIN operator classes included in the standard distribution are - documented in . - For more information see . + documented in . + For more information see . @@ -454,8 +454,8 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); an index on a single column is sufficient and saves space and time. Indexes with more than three columns are unlikely to be helpful unless the usage of the table is extremely stylized. See also - and - for some discussion of the + and + for some discussion of the merits of different index configurations. @@ -609,7 +609,7 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); process the queries that use both columns. You could also create a multicolumn index on (x, y). This index would typically be more efficient than index combination for queries involving both - columns, but as discussed in , it + columns, but as discussed in , it would be almost useless for queries involving only y, so it should not be the only index. A combination of the multicolumn index and a separate index on y would serve reasonably well. For @@ -762,7 +762,7 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name)); index at all. This reduces the size of the index, which will speed up those queries that do use the index. It will also speed up many table update operations because the index does not need to be - updated in all cases. shows a + updated in all cases. shows a possible application of this idea. @@ -827,7 +827,7 @@ WHERE client_ip = inet '192.168.100.23'; Another possible use for a partial index is to exclude values from the index that the typical query workload is not interested in; this is shown in . This results in the same + linkend="indexes-partial-ex2"/>. This results in the same advantages as listed above, but it prevents the uninteresting values from being accessed via that index, even if an index scan might be profitable in that @@ -878,7 +878,7 @@ SELECT * FROM orders WHERE order_nr = 3501; - also illustrates that the + also illustrates that the indexed column and the column used in the predicate do not need to match. PostgreSQL supports partial indexes with arbitrary predicates, so long as only columns of the @@ -909,7 +909,7 @@ SELECT * FROM orders WHERE order_nr = 3501; A third possible use for partial indexes does not require the index to be used in queries at all. The idea here is to create a unique index over a subset of a table, as in . This enforces uniqueness + linkend="indexes-partial-ex3"/>. This enforces uniqueness among the rows that satisfy the index predicate, without constraining those that do not. @@ -962,8 +962,8 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) More information about partial indexes can be found in , , and . + linkend="ston89b"/>, , and . @@ -1157,7 +1157,7 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); the index, the table rows they reference might be anywhere in the heap. The heap-access portion of an index scan thus involves a lot of random access into the heap, which can be slow, particularly on traditional - rotating media. (As described in , + rotating media. (As described in , bitmap scans try to alleviate this cost by doing the heap accesses in sorted order, but that only goes so far.) @@ -1212,7 +1212,7 @@ SELECT x FROM tab WHERE x = 'key' AND z < 42; is physically possible. 
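As an aside, whether the planner actually chooses an index-only scan for such a query is easy to check with EXPLAIN; a sketch reusing the hypothetical table tab and its columns from the text:

CREATE INDEX tab_x_y ON tab (x, y);

-- When an index-only scan is used, the plan contains an Index Only Scan node.
EXPLAIN SELECT y FROM tab WHERE x = 'key';

Whether the executor can also skip the heap visibility checks is a separate question, taken up next.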
But there is an additional requirement for any table scan in PostgreSQL: it must verify that each retrieved row be visible to the query's MVCC snapshot, as - discussed in . Visibility information is not stored + discussed in . Visibility information is not stored in index entries, only in heap entries; so at first glance it would seem that every row retrieval would require a heap access anyway. And this is indeed the case, if the table row has been modified recently. However, @@ -1289,7 +1289,7 @@ SELECT f(x) FROM tab WHERE f(x) < 1; Partial indexes also have interesting interactions with index-only scans. - Consider the partial index shown in : + Consider the partial index shown in : CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) WHERE success; @@ -1325,11 +1325,11 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; maintenance or tuning, it is still important to check which indexes are actually used by the real-life query workload. Examining index usage for an individual query is done with the - + command; its application for this purpose is - illustrated in . + illustrated in . It is also possible to gather overall statistics about index usage - in a running server, as described in . + in a running server, as described in . @@ -1343,7 +1343,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; - Always run + Always run first. This command collects statistics about the distribution of the values in the table. This information is required to estimate the number of rows @@ -1353,8 +1353,8 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; almost certain to be inaccurate. Examining an application's index usage without having run ANALYZE is therefore a lost cause. - See - and for more information. + See + and for more information. @@ -1386,7 +1386,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; When indexes are not used, it can be useful for testing to force their use. There are run-time parameters that can turn off - various plan types (see ). + various plan types (see ). For instance, turning off sequential scans (enable_seqscan) and nested-loop joins (enable_nestloop), which are the most basic plans, @@ -1417,11 +1417,11 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; per-row costs of each plan node times the selectivity estimate of the plan node. The costs estimated for the plan nodes can be adjusted via run-time parameters (described in ). + linkend="runtime-config-query-constants"/>). An inaccurate selectivity estimate is due to insufficient statistics. It might be possible to improve this by tuning the statistics-gathering parameters (see - ). + ). diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml index 58c54254d7..99b0ea8519 100644 --- a/doc/src/sgml/information_schema.sgml +++ b/doc/src/sgml/information_schema.sgml @@ -579,7 +579,7 @@
- See also under , a similarly + See also under , a similarly structured view, for further information on some of the columns. @@ -776,7 +776,7 @@ sql_identifier The specific name of the function. See for more information. + linkend="infoschema-routines"/> for more information. @@ -895,7 +895,7 @@ identifies which character set the available collations are applicable to. In PostgreSQL, there is only one character set per database (see explanation - in ), so this view does + in ), so this view does not provide much useful information.
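As an illustration, the specific-name mechanism can be observed by querying the routines view directly; a sketch:

-- specific_name disambiguates overloaded functions sharing a routine_name.
SELECT routine_name, specific_name
FROM information_schema.routines
WHERE routine_schema = 'public'
ORDER BY routine_name;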
@@ -1178,7 +1178,7 @@ that use data types owned by a currently enabled role. Note that in PostgreSQL, built-in data types behave like user-defined types, so they are included here as well. See - also for details. + also for details.
@@ -3134,7 +3134,7 @@ ORDER BY c.ordinal_position; sql_identifier The specific name of the function. See for more information. + linkend="infoschema-routines"/> for more information. @@ -3594,7 +3594,7 @@ ORDER BY c.ordinal_position; sql_identifier The specific name of the function. See for more information. + linkend="infoschema-routines"/> for more information. @@ -3930,7 +3930,7 @@ ORDER BY c.ordinal_position; sql_identifier The specific name of the function. See for more information. + linkend="infoschema-routines"/> for more information. @@ -4762,7 +4762,7 @@ ORDER BY c.ordinal_position; The table sql_features contains information about which formal features defined in the SQL standard are supported by PostgreSQL. This is the - same information that is presented in . + same information that is presented in . There you can also find some additional background information. @@ -4998,7 +4998,7 @@ ORDER BY c.ordinal_position; The table sql_packages contains information about which feature packages defined in the SQL standard are supported by PostgreSQL. Refer to for background information on feature packages. + linkend="features"/> for background information on feature packages.
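An illustrative query against that table:

-- List the SQL-standard features this server reports as supported.
SELECT feature_id, feature_name
FROM information_schema.sql_features
WHERE is_supported = 'YES'
ORDER BY feature_id;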
@@ -5586,7 +5586,7 @@ ORDER BY c.ordinal_position; sql_identifier The specific name of the function. See for more information. + linkend="infoschema-routines"/> for more information. @@ -5891,9 +5891,9 @@ ORDER BY c.ordinal_position; USAGE privileges granted on user-defined types to a currently enabled role or by a currently enabled role. There is one row for each combination of type, grantor, and grantee. This view shows only - composite types (see under + composite types (see under for why); see - for domain privileges. + for domain privileges.
@@ -6068,7 +6068,7 @@ ORDER BY c.ordinal_position; differentiate between these. Other user-defined types such as base types and enums, which are PostgreSQL extensions, are not shown here. For domains, - see instead. + see instead.
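For example, a sketch of a query against this view (per the note above, it will list composite types only):

SELECT user_defined_type_schema, user_defined_type_name
FROM information_schema.user_defined_types;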
@@ -6522,7 +6522,7 @@ ORDER BY c.ordinal_position; sql_identifier The specific name of the function. See for more information. + linkend="infoschema-routines"/> for more information. diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml index 029e1dbc28..99e9c7b78e 100644 --- a/doc/src/sgml/install-windows.sgml +++ b/doc/src/sgml/install-windows.sgml @@ -37,8 +37,8 @@ Building using MinGW or Cygwin uses the normal build system, see - and the specific notes in - and . + and the specific notes in + and . To produce native 64 bit binaries in these environments, use the tools from MinGW-w64. These tools can also be used to cross-compile for 32 bit and 64 bit Windows @@ -457,7 +457,7 @@ $ENV{CONFIG}="Debug"; For more information about the regression tests, see - . + . diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index f8e1d60356..a922819fce 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -54,7 +54,7 @@ su - postgres In general, a modern Unix-compatible platform should be able to run PostgreSQL. The platforms that had received specific testing at the - time of release are listed in + time of release are listed in below. In the doc subdirectory of the distribution there are several platform-specific FAQ documents you might wish to consult if you are having trouble. @@ -193,7 +193,7 @@ su - postgres required version is Python 2.4. Python 3 is supported if it's version 3.1 or later; but see - + when using Python 3. @@ -262,7 +262,7 @@ su - postgres To build the PostgreSQL documentation, there is a separate set of requirements; see - . + . @@ -358,7 +358,7 @@ su - postgres You can also get the source directly from the version control repository, see - . + . @@ -835,8 +835,8 @@ su - postgres Build with LDAPLDAP support for authentication and connection parameter lookup (see - and - for more information). On Unix, + and + for more information). On Unix, this requires the OpenLDAP package to be installed. On Windows, the default WinLDAP library is used. configure will check for the required @@ -855,7 +855,7 @@ su - postgres for systemdsystemd service notifications. This improves integration if the server binary is started under systemd but has no impact - otherwise; see for more + otherwise; see for more information. libsystemd and the associated header files need to be installed to be able to use this option. @@ -901,7 +901,7 @@ su - postgres - Build the module + Build the module (which provides functions to generate UUIDs), using the specified UUID library.UUID LIBRARY must be one of: @@ -968,7 +968,7 @@ su - postgres Use libxslt when building the - + module. xml2 relies on this library to perform XSL transformations of XML. @@ -1084,7 +1084,7 @@ su - postgres has no support for strong random numbers on the platform. A source of random numbers is needed for some authentication protocols, as well as some routines in the - + module. disables functionality that requires cryptographically strong random numbers, and substitutes a weak pseudo-random-number-generator for the generation of @@ -1188,7 +1188,7 @@ su - postgres code coverage testing instrumentation. When run, they generate files in the build directory with code coverage metrics. - See + See for more information. This option is for use only with GCC and when doing development work. @@ -1249,7 +1249,7 @@ su - postgres Compiles PostgreSQL with support for the dynamic tracing tool DTrace. - See + See for more information. 
@@ -1285,7 +1285,7 @@ su - postgres Enable tests using the Perl TAP tools. This requires a Perl installation and the Perl module IPC::Run. - See for more information. + See for more information. @@ -1442,7 +1442,7 @@ su - postgres whether Python 2 or 3 is specified here (or otherwise implicitly chosen) determines which variant of the PL/Python language becomes available. See - + for more information. @@ -1569,7 +1569,7 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. make check (This won't work as root; do it as an unprivileged user.) - See for + See for detailed information about interpreting the test results. You can repeat this test at any later time by issuing the same command. @@ -1581,7 +1581,7 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. If you are upgrading an existing system be sure to read - , + , which has instructions about upgrading a cluster. @@ -1593,7 +1593,7 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. make install This will install files into the directories that were specified - in . Make sure that you have appropriate + in . Make sure that you have appropriate permissions to write into that area. Normally you need to do this step as root. Alternatively, you can create the target directories in advance and arrange for appropriate permissions to @@ -1727,7 +1727,7 @@ export LD_LIBRARY_PATH setenv LD_LIBRARY_PATH /usr/local/pgsql/lib Replace /usr/local/pgsql/lib with whatever you set - to in . + to in . You should put these commands into a shell start-up file such as /etc/profile or ~/.bash_profile. Some good information about the caveats associated with this method can @@ -1793,7 +1793,7 @@ libpq.so.2.1: cannot open shared object file: No such file or directory If you installed into /usr/local/pgsql or some other location that is not searched for programs by default, you should add /usr/local/pgsql/bin (or whatever you set - to in ) + to in ) into your PATH. Strictly speaking, this is not necessary, but it will make the use of PostgreSQL much more convenient. @@ -1873,7 +1873,7 @@ export MANPATH Other Unix-like systems may also work but are not currently being tested. In most cases, all CPU architectures supported by a given operating system will work. Look in - below to see if + below to see if there is information specific to your operating system, particularly if using an older system. @@ -1895,8 +1895,8 @@ export MANPATH This section documents additional platform-specific issues regarding the installation and setup of PostgreSQL. Be sure to read the installation instructions, and in - particular as well. Also, - check regarding the + particular as well. Also, + check regarding the interpretation of regression test results. @@ -2247,7 +2247,7 @@ ERROR: could not load library "/opt/dbs/pgsql/lib/plperl.so": Bad address PostgreSQL can be built using Cygwin, a Linux-like environment for Windows, but that method is inferior to the native Windows build - (see ) and + (see ) and running a server under Cygwin is no longer recommended. @@ -2441,7 +2441,7 @@ PHSS_30849 s700_800 u2comp/be/plugin library Patch Microsoft's Visual C++ compiler suite. The MinGW build variant uses the normal build system described in this chapter; the Visual C++ build works completely differently - and is described in . + and is described in . It is a fully native build and uses no additional software like MinGW. A ready-made installer is available on the main PostgreSQL web site. 
@@ -2602,7 +2602,7 @@ LIBOBJS = snprintf.o Using DTrace for Tracing PostgreSQL - Yes, using DTrace is possible. See for + Yes, using DTrace is possible. See for further information. diff --git a/doc/src/sgml/intarray.sgml b/doc/src/sgml/intarray.sgml index 0cf9d5f3f6..b633cf3677 100644 --- a/doc/src/sgml/intarray.sgml +++ b/doc/src/sgml/intarray.sgml @@ -29,8 +29,8 @@ The functions provided by the intarray module - are shown in , the operators - in . + are shown in , the operators + in .
diff --git a/doc/src/sgml/intro.sgml b/doc/src/sgml/intro.sgml index 2fb19725f0..3038826311 100644 --- a/doc/src/sgml/intro.sgml +++ b/doc/src/sgml/intro.sgml @@ -23,13 +23,13 @@ - is an informal introduction for new users. + is an informal introduction for new users. - documents the SQL query + documents the SQL query language environment, including data types and functions, as well as user-level performance tuning. Every PostgreSQL user should read this. @@ -38,7 +38,7 @@ - describes the installation and + describes the installation and administration of the server. Everyone who runs a PostgreSQL server, be it for private use or for others, should read this part. @@ -47,7 +47,7 @@ - describes the programming + describes the programming interfaces for PostgreSQL client programs. @@ -56,7 +56,7 @@ - contains information for + contains information for advanced users about the extensibility capabilities of the server. Topics include user-defined data types and functions. @@ -65,7 +65,7 @@ - contains reference information about + contains reference information about SQL commands, client and server programs. This part supports the other parts with structured information sorted by command or program. @@ -74,7 +74,7 @@ - contains assorted information that might be of + contains assorted information that might be of use to PostgreSQL developers. diff --git a/doc/src/sgml/isn.sgml b/doc/src/sgml/isn.sgml index 329b7b2c54..34d37ede01 100644 --- a/doc/src/sgml/isn.sgml +++ b/doc/src/sgml/isn.sgml @@ -25,7 +25,7 @@ Data Types - shows the data types provided by + shows the data types provided by the isn module. @@ -222,7 +222,7 @@ The isn module provides the standard comparison operators, plus B-tree and hash indexing support for all these data types. In - addition there are several specialized functions; shown in . + addition there are several specialized functions; shown in . In this table, isn means any one of the module's data types. diff --git a/doc/src/sgml/json.sgml b/doc/src/sgml/json.sgml index 05ecef2ffc..731b469613 100644 --- a/doc/src/sgml/json.sgml +++ b/doc/src/sgml/json.sgml @@ -18,7 +18,7 @@ the JSON data types have the advantage of enforcing that each stored value is valid according to the JSON rules. There are also assorted JSON-specific functions and operators available for data stored - in these data types; see . + in these data types; see . @@ -82,7 +82,7 @@ Many of the JSON processing functions described - in will convert Unicode escapes to + in will convert Unicode escapes to regular characters, and will therefore throw the same types of errors just described even if their input is of type json not jsonb. The fact that the json input function does @@ -98,7 +98,7 @@ When converting textual JSON input into jsonb, the primitive types described by RFC 7159 are effectively mapped onto native PostgreSQL types, as shown - in . + in . Therefore, there are some minor additional constraints on what constitutes valid jsonb data that do not apply to the json type, nor to JSON in the abstract, corresponding @@ -380,7 +380,7 @@ SELECT doc->'site_name' FROM websites The various containment and existence operators, along with all other JSON operators and functions are documented - in . + in . @@ -404,7 +404,7 @@ SELECT doc->'site_name' FROM websites and ?| operators and path/value-exists operator @>. (For details of the semantics that these operators - implement, see .) + implement, see .) 
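To make those operators concrete, a minimal sketch using literal values only:

SELECT '{"a": 1, "b": 2}'::jsonb ? 'a';                 -- key existence: true
SELECT '{"a": 1, "b": 2}'::jsonb ?| array['x', 'b'];    -- any key exists: true
SELECT '{"a": 1, "b": 2}'::jsonb @> '{"a": 1}'::jsonb;  -- containment: true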
An example of creating an index with this operator class is: CREATE INDEX idxgin ON api USING GIN (jdoc); @@ -465,7 +465,7 @@ CREATE INDEX idxgintags ON api USING GIN ((jdoc -> 'tags')); operator ? to the indexed expression jdoc -> 'tags'. (More information on expression indexes can be found in .) + linkend="indexes-expressional"/>.) Another approach to querying is to exploit containment, for example: diff --git a/doc/src/sgml/keywords.sgml b/doc/src/sgml/keywords.sgml index 01bc9b47b1..2dba7adedf 100644 --- a/doc/src/sgml/keywords.sgml +++ b/doc/src/sgml/keywords.sgml @@ -9,10 +9,10 @@ - lists all tokens that are key words + lists all tokens that are key words in the SQL standard and in PostgreSQL &version;. Background information can be found in . + linkend="sql-syntax-identifiers"/>. (For space reasons, only the latest two versions of the SQL standard, and SQL-92 for historical comparison, are included. The differences between those and the other intermediate standard versions are small.) @@ -45,7 +45,7 @@ - In in the column for + In in the column for PostgreSQL we classify as non-reserved those key words that are explicitly known to the parser but are allowed as column or table names. @@ -69,7 +69,7 @@ It is important to understand before studying that the fact that a key word is not + linkend="keywords-table"/> that the fact that a key word is not reserved in PostgreSQL does not mean that the feature related to the word is not implemented. Conversely, the presence of a key word does not indicate the existence of a feature. diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 694921f212..4703309254 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -25,15 +25,15 @@ those written for C++, Perl, Python, Tcl and ECPG. So some aspects of libpq's behavior will be important to you if you use one of those packages. In particular, - , - and - + , + and + describe behavior that is visible to the user of any application that uses libpq. - Some short programs are included at the end of this chapter () to show how + Some short programs are included at the end of this chapter () to show how to write programs that use libpq. There are also several complete examples of libpq applications in the directory src/test/examples in the source code distribution. @@ -118,7 +118,7 @@ PGconn *PQconnectdbParams(const char * const *keywords, The currently recognized parameter key words are listed in - . + . @@ -128,7 +128,7 @@ PGconn *PQconnectdbParams(const char * const *keywords, dbname is expanded this way, any subsequent dbname value is processed as plain database name. More details on the possible connection string formats appear in - . + . @@ -140,7 +140,7 @@ PGconn *PQconnectdbParams(const char * const *keywords, If any parameter is NULL or an empty string, the corresponding - environment variable (see ) is checked. + environment variable (see ) is checked. If the environment variable is not set either, then the indicated built-in defaults are used. @@ -176,7 +176,7 @@ PGconn *PQconnectdb(const char *conninfo); The passed string can be empty to use all default parameters, or it can contain one or more parameter settings separated by whitespace, or it can contain a URI. - See for details. + See for details. @@ -289,7 +289,7 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn); The hostaddr and host parameters are used appropriately to ensure that name and reverse name queries are not made. See the documentation of - these parameters in for details. 
+ these parameters in for details. @@ -802,7 +802,7 @@ host=localhost port=5432 dbname=mydb connect_timeout=10 The recognized parameter key words are listed in . + linkend="libpq-paramkeywords"/>. @@ -847,7 +847,7 @@ postgresql:///mydb?host=localhost&port=5433 Any connection parameters not corresponding to key words listed in are ignored and a warning message about them + linkend="libpq-paramkeywords"/> are ignored and a warning message about them is sent to stderr. @@ -867,7 +867,7 @@ postgresql://[2001:db8::1234]/database The host component is interpreted as described for the parameter . In particular, a Unix-domain socket + linkend="libpq-connect-host"/>. In particular, a Unix-domain socket connection is chosen if the host part is either empty or starts with a slash, otherwise a TCP/IP connection is initiated. Note, however, that the slash is a reserved character in the hierarchical part of the URI. So, to @@ -954,7 +954,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname A comma-separated list of host names is also accepted, in which case each host name in the list is tried in order. See - for details. + for details. @@ -1006,13 +1006,13 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname is not the name of the server at network address hostaddr. Also, note that host rather than hostaddr is used to identify the connection in a password file (see - ). + ). A comma-separated list of hostaddrs is also accepted, in which case each host in the list is tried in order. See - for details. + for details. Without either a host name or host address, @@ -1044,7 +1044,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname The database name. Defaults to be the same as the user name. In certain contexts, the value is checked for extended - formats; see for more details on + formats; see for more details on those. @@ -1075,7 +1075,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Specifies the name of the file used to store passwords - (see ). + (see ). Defaults to ~/.pgpass, or %APPDATA%\postgresql\pgpass.conf on Microsoft Windows. (No error is reported if this file does not exist.) @@ -1125,7 +1125,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname separate command-line arguments, unless escaped with a backslash (\); write \\ to represent a literal backslash. For a detailed discussion of the available - options, consult . + options, consult . @@ -1134,7 +1134,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname application_name - Specifies a value for the + Specifies a value for the configuration parameter. @@ -1145,7 +1145,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Specifies a fallback value for the configuration parameter. + linkend="guc-application-name"/> configuration parameter. This value will be used if no value has been given for application_name via a connection parameter or the PGAPPNAME environment variable. Specifying @@ -1295,7 +1295,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname - See for a detailed description of how + See for a detailed description of how these options work. @@ -1430,7 +1430,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname to ensure that you are connected to a server run by a trusted user.) This option is only supported on platforms for which the peer authentication method is implemented; see - . + . @@ -1442,7 +1442,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Kerberos service name to use when authenticating with GSSAPI. This must match the service name specified in the server configuration for Kerberos authentication to succeed. (See also - .) + .) 
@@ -1465,7 +1465,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname Service name to use for additional parameters. It specifies a service name in pg_service.conf that holds additional connection parameters. This allows applications to specify only a service name so connection parameters - can be centrally maintained. See . + can be centrally maintained. See . @@ -2225,7 +2225,7 @@ PGresult *PQexec(PGconn *conn, const char *command); PQexec call are processed in a single transaction, unless there are explicit BEGIN/COMMIT commands included in the query string to divide it into multiple - transactions. (See + transactions. (See for more details about how the server handles multi-query strings.) Note however that the returned PGresult structure describes only the result @@ -2447,7 +2447,7 @@ PGresult *PQprepare(PGconn *conn, PQprepare creates a prepared statement for later execution with PQexecPrepared. This feature allows commands to be executed repeatedly without being parsed and - planned each time; see for details. + planned each time; see for details. PQprepare is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0. @@ -2489,10 +2489,10 @@ PGresult *PQprepare(PGconn *conn, Prepared statements for use with PQexecPrepared can also - be created by executing SQL + be created by executing SQL statements. Also, although there is no libpq function for deleting a prepared statement, the SQL statement + linkend="sql-deallocate"/> statement can be used for that purpose. @@ -2746,7 +2746,7 @@ ExecStatusType PQresultStatus(const PGresult *res); The PGresult contains a single result tuple from the current command. This status occurs only when single-row mode has been selected for the query - (see ). + (see ). @@ -2770,7 +2770,7 @@ ExecStatusType PQresultStatus(const PGresult *res); never be returned directly by PQexec or other query execution functions; results of this kind are instead passed to the notice processor (see ). + linkend="libpq-notice-processing"/>). @@ -2941,7 +2941,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); front-end applications to perform specific operations (such as error handling) in response to a particular database error. For a list of the possible SQLSTATE codes, see . This field is not localizable, + linkend="errcodes-appendix"/>. This field is not localizable, and is always present. @@ -3118,7 +3118,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode); The fields for schema name, table name, column name, data type name, and constraint name are supplied only for a limited number of error - types; see . Do not assume that + types; see . Do not assume that the presence of any of these fields guarantees the presence of another field. Core error sources observe the interrelationships noted above, but user-defined functions may use these fields in other @@ -4075,7 +4075,7 @@ unsigned char *PQescapeByteaConn(PGconn *conn, bytea literal in an SQL statement. PQescapeByteaConn escapes bytes using either hex encoding or backslash escaping. See for more information. + linkend="datatype-binary"/> for more information. @@ -4508,7 +4508,7 @@ PGresult *PQgetResult(PGconn *conn); Another frequently-desired feature that can be obtained with PQsendQuery and PQgetResult is retrieving large query results a row at a time. This is discussed - in . + in . @@ -4600,14 +4600,14 @@ int PQisBusy(PGconn *conn); PQgetResult if PQisBusy returns false (0). It can also call PQnotifies to detect NOTIFY messages (see ). 
+ linkend="libpq-notify"/>). A client that uses PQsendQuery/PQgetResult can also attempt to cancel a command that is still being processed - by the server; see . But regardless of + by the server; see . But regardless of the return value of PQcancel, the application must continue with the normal result-reading sequence using PQgetResult. A successful cancellation will @@ -4753,7 +4753,7 @@ int PQflush(PGconn *conn); (or a sibling function). This mode selection is effective only for the currently executing query. Then call PQgetResult repeatedly, until it returns null, as documented in . If the query returns any rows, they are returned + linkend="libpq-async"/>. If the query returns any rows, they are returned as individual PGresult objects, which look like normal query results except for having status code PGRES_SINGLE_TUPLE instead of @@ -5119,7 +5119,7 @@ typedef struct pgNotify - gives a sample program that illustrates + gives a sample program that illustrates the use of asynchronous notification. @@ -5242,7 +5242,7 @@ typedef struct pgNotify 0 indicates the overall copy format is textual (rows separated by newlines, columns separated by separator characters, etc). 1 indicates the overall copy format is binary. See for more information. + linkend="sql-copy"/> for more information. @@ -5322,7 +5322,7 @@ int PQputCopyData(PGconn *conn, into buffer loads of any convenient size. Buffer-load boundaries have no semantic significance when sending. The contents of the data stream must match the data format expected by the - COPY command; see for details. + COPY command; see for details. @@ -5982,7 +5982,7 @@ char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, version 10, and will not work correctly with older server versions. If algorithm is NULL, this function will query the server for the current value of the - setting. That can block, and + setting. That can block, and will fail if the current transaction is aborted, or if the connection is busy executing another query. If you wish to use the default algorithm for the server but want to avoid blocking, query @@ -6072,7 +6072,7 @@ PGresult *PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status); Fires a PGEVT_RESULTCREATE event (see ) for each event procedure registered in the + linkend="libpq-events"/>) for each event procedure registered in the PGresult object. Returns non-zero for success, zero if any event procedure fails. @@ -7004,7 +7004,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGHOST PGHOST behaves the same as the connection parameter. + linkend="libpq-connect-host"/> connection parameter. @@ -7014,7 +7014,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGHOSTADDR PGHOSTADDR behaves the same as the connection parameter. + linkend="libpq-connect-hostaddr"/> connection parameter. This can be set instead of or in addition to PGHOST to avoid DNS lookup overhead. @@ -7026,7 +7026,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGPORT PGPORT behaves the same as the connection parameter. + linkend="libpq-connect-port"/> connection parameter. @@ -7036,7 +7036,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGDATABASE PGDATABASE behaves the same as the connection parameter. + linkend="libpq-connect-dbname"/> connection parameter. @@ -7046,7 +7046,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGUSER PGUSER behaves the same as the connection parameter. + linkend="libpq-connect-user"/> connection parameter. 
@@ -7056,12 +7056,12 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGPASSWORD PGPASSWORD behaves the same as the connection parameter. + linkend="libpq-connect-password"/> connection parameter. Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using a password file - (see ). + (see ). @@ -7071,7 +7071,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGPASSFILE PGPASSFILE behaves the same as the connection parameter. + linkend="libpq-connect-passfile"/> connection parameter. @@ -7081,7 +7081,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSERVICE PGSERVICE behaves the same as the connection parameter. + linkend="libpq-connect-service"/> connection parameter. @@ -7093,7 +7093,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSERVICEFILE specifies the name of the per-user connection service file. If not set, it defaults to ~/.pg_service.conf - (see ). + (see ). @@ -7103,7 +7103,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGOPTIONS PGOPTIONS behaves the same as the connection parameter. + linkend="libpq-connect-options"/> connection parameter. @@ -7113,7 +7113,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGAPPNAME PGAPPNAME behaves the same as the connection parameter. + linkend="libpq-connect-application-name"/> connection parameter. @@ -7123,7 +7123,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSSLMODE PGSSLMODE behaves the same as the connection parameter. + linkend="libpq-connect-sslmode"/> connection parameter. @@ -7133,7 +7133,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGREQUIRESSL PGREQUIRESSL behaves the same as the connection parameter. + linkend="libpq-connect-requiressl"/> connection parameter. This environment variable is deprecated in favor of the PGSSLMODE variable; setting both variables suppresses the effect of this one. @@ -7146,7 +7146,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSSLCOMPRESSION PGSSLCOMPRESSION behaves the same as the connection parameter. + linkend="libpq-connect-sslcompression"/> connection parameter. @@ -7156,7 +7156,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSSLCERT PGSSLCERT behaves the same as the connection parameter. + linkend="libpq-connect-sslcert"/> connection parameter. @@ -7166,7 +7166,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSSLKEY PGSSLKEY behaves the same as the connection parameter. + linkend="libpq-connect-sslkey"/> connection parameter. @@ -7176,7 +7176,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSSLROOTCERT PGSSLROOTCERT behaves the same as the connection parameter. + linkend="libpq-connect-sslrootcert"/> connection parameter. @@ -7186,7 +7186,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGSSLCRL PGSSLCRL behaves the same as the connection parameter. + linkend="libpq-connect-sslcrl"/> connection parameter. @@ -7196,7 +7196,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGREQUIREPEER PGREQUIREPEER behaves the same as the connection parameter. + linkend="libpq-connect-requirepeer"/> connection parameter. @@ -7206,7 +7206,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGKRBSRVNAME PGKRBSRVNAME behaves the same as the connection parameter. 
+ linkend="libpq-connect-krbsrvname"/> connection parameter. @@ -7216,7 +7216,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGGSSLIB PGGSSLIB behaves the same as the connection parameter. + linkend="libpq-connect-gsslib"/> connection parameter. @@ -7226,7 +7226,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGCONNECT_TIMEOUT PGCONNECT_TIMEOUT behaves the same as the connection parameter. + linkend="libpq-connect-connect-timeout"/> connection parameter. @@ -7236,7 +7236,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGCLIENTENCODING PGCLIENTENCODING behaves the same as the connection parameter. + linkend="libpq-connect-client-encoding"/> connection parameter. @@ -7246,7 +7246,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) PGTARGETSESSIONATTRS PGTARGETSESSIONATTRS behaves the same as the connection parameter. + linkend="libpq-connect-target-session-attrs"/> connection parameter. @@ -7255,8 +7255,8 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) The following environment variables can be used to specify default behavior for each PostgreSQL session. (See - also the - and + also the + and commands for ways to set default behavior on a per-user or per-database basis.) @@ -7293,7 +7293,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) - Refer to the SQL command + Refer to the SQL command for information on correct values for these environment variables. @@ -7348,7 +7348,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) %APPDATA% refers to the Application Data subdirectory in the user's profile). Alternatively, a password file can be specified - using the connection parameter + using the connection parameter or the environment variable PGPASSFILE. @@ -7422,7 +7422,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) The file uses an INI file format where the section name is the service name and the parameters are connection - parameters; see for a list. For + parameters; see for a list. For example: # comment @@ -7456,7 +7456,7 @@ user=admin LDAP connection parameter lookup uses the connection service file pg_service.conf (see ). A line in a + linkend="libpq-pgservice"/>). A line in a pg_service.conf stanza that starts with ldap:// will be recognized as an LDAP URL and an LDAP query will be performed. The result must be a list of @@ -7528,7 +7528,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) PostgreSQL has native support for using SSL connections to encrypt client/server communications for increased - security. See for details about the server-side + security. See for details about the server-side SSL functionality. @@ -7643,7 +7643,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) file, then its parent authority's certificate, and so on up to a certificate authority, root or intermediate, that is trusted by the server, i.e. signed by a certificate in the server's root CA file - (). + (). @@ -7728,7 +7728,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) All SSL options carry overhead in the form of encryption and key-exchange, so there is a trade-off that has to be made between performance - and security. + and security. illustrates the risks the different sslmode values protect against, and what statement they make about security and overhead. 
@@ -7828,7 +7828,7 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) SSL Client File Usage - summarizes the files that are + summarizes the files that are relevant to the SSL setup on the client. @@ -8027,7 +8027,7 @@ int PQisthreadsafe(); PGresult objects are normally read-only after creation, and so can be passed around freely between threads. However, if you use any of the PGresult-modifying functions described in - or , it's up + or , it's up to you to avoid concurrent operations on the same PGresult, too. diff --git a/doc/src/sgml/lo.sgml b/doc/src/sgml/lo.sgml index 8d8ee82722..ab8d192bc1 100644 --- a/doc/src/sgml/lo.sgml +++ b/doc/src/sgml/lo.sgml @@ -102,7 +102,7 @@ CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image If you already have, or suspect you have, orphaned large objects, see the - module to help + module to help you clean them up. It's a good idea to run vacuumlo occasionally as a back-stop to the lo_manage trigger. diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index e11c8e0f8b..086cb8dbe8 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -83,8 +83,8 @@ As of PostgreSQL 9.0, large objects have an owner and a set of access permissions, which can be managed using - and - . + and + . SELECT privileges are required to read a large object, and UPDATE privileges are required to write or @@ -92,7 +92,7 @@ Only the large object's owner (or a database superuser) can delete, comment on, or change the owner of a large object. To adjust this behavior for compatibility with prior releases, see the - run-time parameter. + run-time parameter. @@ -301,7 +301,7 @@ int lo_open(PGconn *conn, Oid lobjId, int mode); checks were instead performed at the first actual read or write call using the descriptor.) These privilege checks can be disabled with the - run-time parameter. + run-time parameter. @@ -539,7 +539,7 @@ int lo_unlink(PGconn *conn, Oid lobjId); Server-side functions tailored for manipulating large objects from SQL are - listed in . + listed in .
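As an illustration of that server-side interface, a sketch reusing the image table from this chapter's examples; the file path is illustrative, and, as discussed just below, use of these functions is typically restricted:

-- Import a file readable by the server into a new large object.
CREATE TABLE image (name text, raster oid);
INSERT INTO image (name, raster)
    VALUES ('beautiful image', lo_import('/etc/motd'));

-- Delete the large object once it is no longer referenced.
SELECT lo_unlink(raster) FROM image WHERE name = 'beautiful image';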
@@ -656,7 +656,7 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image - It is possible to use of the + It is possible to use of the server-side lo_import and lo_export functions to non-superusers, but careful consideration of the security implications is required. A @@ -688,7 +688,7 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image Example Program - is a sample program which shows how the large object + is a sample program which shows how the large object interface in libpq can be used. Parts of the program are commented out but are left in the source for the reader's diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml index 676ab1f5ad..75551d8ee1 100644 --- a/doc/src/sgml/logical-replication.sgml +++ b/doc/src/sgml/logical-replication.sgml @@ -8,7 +8,7 @@ changes, based upon their replication identity (usually a primary key). We use the term logical in contrast to physical replication, which uses exact block addresses and byte-by-byte replication. PostgreSQL supports both - mechanisms concurrently, see . Logical + mechanisms concurrently, see . Logical replication allows fine-grained control over both data replication and security. @@ -126,7 +126,7 @@ fallback if no other solution is possible. If a replica identity other than full is set on the publisher side, a replica identity comprising the same or fewer columns must also be set on the subscriber - side. See for details on + side. See for details on how to set the replica identity. If a table without a replica identity is added to a publication that replicates UPDATE or DELETE operations then @@ -140,13 +140,13 @@ - A publication is created using the + A publication is created using the command and may later be altered or dropped using corresponding commands. The individual tables can be added and removed dynamically using - . Both the ADD + . Both the ADD TABLE and DROP TABLE operations are transactional; so the table will start or stop replicating at the correct snapshot once the transaction has committed. @@ -179,14 +179,14 @@ Each subscription will receive changes via one replication slot (see - ). Additional temporary + ). Additional temporary replication slots may be required for the initial data synchronization of pre-existing table data. A logical replication subscription can be a standby for synchronous - replication (see ). The standby + replication (see ). The standby name is by default the subscription name. An alternative name can be specified as application_name in the connection information of the subscription. @@ -200,10 +200,10 @@ - The subscription is added using and + The subscription is added using and can be stopped/resumed at any time using the - command and removed using - . + command and removed using + . @@ -375,7 +375,7 @@ - Large objects (see ) are not replicated. + Large objects (see ) are not replicated. There is no workaround for that, other than storing data in normal tables. @@ -409,13 +409,13 @@ Logical replication is built with an architecture similar to physical - streaming replication (see ). It is + streaming replication (see ). It is implemented by walsender and apply processes. The walsender process starts logical decoding (described - in ) of the WAL and loads the standard + in ) of the WAL and loads the standard logical decoding plugin (pgoutput). The plugin transforms the changes read from WAL to the logical replication protocol - (see ) and filters the data + (see ) and filters the data according to the publication specification. 
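For concreteness, the two halves of this pipeline are wired up with the commands discussed above; a sketch in which mypub, mysub, the users table, and the connection string are all hypothetical:

-- On the publisher:
CREATE PUBLICATION mypub FOR TABLE users;

-- On the subscriber:
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=192.168.1.50 port=5432 user=foo dbname=foodb'
    PUBLICATION mypub;

With these in place, the walsender and apply processes described here do the rest.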
The data is then continuously transferred using the streaming replication protocol to the apply worker, which maps the data to local tables and applies the individual changes as @@ -461,7 +461,7 @@ physical streaming replication, the monitoring on a publication node is similar to monitoring of a physical replication master - (see ). + (see ). diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml index 3b268c3f3c..6bab1b9b32 100644 --- a/doc/src/sgml/logicaldecoding.sgml +++ b/doc/src/sgml/logicaldecoding.sgml @@ -24,17 +24,17 @@ by INSERT and the new row version created by UPDATE. Availability of old row versions for UPDATE and DELETE depends on - the configured replica identity (see ). + the configured replica identity (see ). Changes can be consumed either using the streaming replication protocol - (see and - ), or by calling functions - via SQL (see ). It is also possible + (see and + ), or by calling functions + via SQL (see ). It is also possible to write additional methods of consuming the output of a replication slot without modifying core code - (see ). + (see ). @@ -47,8 +47,8 @@ Before you can use logical decoding, you must set - to logical and - to at least 1. Then, you + to logical and + to at least 1. Then, you should connect to the target database (in the example below, postgres) as a superuser. @@ -146,10 +146,10 @@ postgres=# SELECT pg_drop_replication_slot('regression_slot'); The following example shows how logical decoding is controlled over the streaming replication protocol, using the - program included in the PostgreSQL + program included in the PostgreSQL distribution. This requires that client authentication is set up to allow replication connections - (see ) and + (see ) and that max_wal_senders is set sufficiently high to allow an additional connection. @@ -208,7 +208,7 @@ $ pg_recvlogical -d postgres --slot test --drop-slot PostgreSQL also has streaming replication slots - (see ), but they are used somewhat + (see ), but they are used somewhat differently there. @@ -272,9 +272,9 @@ $ pg_recvlogical -d postgres --slot test --drop-slot Exported Snapshots When a new replication slot is created using the streaming replication - interface (see ), a + interface (see ), a snapshot is exported - (see ), which will show + (see ), which will show exactly the state of the database after which all changes will be included in the change stream. This can be used to create a new replica by using SET TRANSACTION @@ -313,11 +313,11 @@ $ pg_recvlogical -d postgres --slot test --drop-slot are used to create, drop, and stream changes from a replication slot, respectively. These commands are only available over a replication connection; they cannot be used via SQL. - See for details on these commands. + See for details on these commands. - The command can be used to control + The command can be used to control logical decoding over a streaming replication connection. (It uses these commands internally.) @@ -327,12 +327,12 @@ $ pg_recvlogical -d postgres --slot test --drop-slot Logical Decoding <acronym>SQL</acronym> Interface - See for detailed documentation on + See for detailed documentation on the SQL-level API for interacting with logical decoding. - Synchronous replication (see ) is + Synchronous replication (see ) is only supported on replication slots used over the streaming replication interface. The function interface and additional, non-core interfaces do not support synchronous replication. 
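A minimal round trip through that SQL-level API, using the same regression_slot and the test_decoding example plugin mentioned above:

-- Create a logical slot bound to an output plugin.
SELECT * FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');

-- Consume whatever changes have been decoded so far.
SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);

-- Clean up, releasing the slot's retained WAL.
SELECT pg_drop_replication_slot('regression_slot');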
@@ -489,7 +489,7 @@ typedef struct OutputPluginOptions output_type has to either be set to OUTPUT_PLUGIN_TEXTUAL_OUTPUT or OUTPUT_PLUGIN_BINARY_OUTPUT. See also - . + . @@ -576,8 +576,8 @@ typedef void (*LogicalDecodeChangeCB) (struct LogicalDecodingContext *ctx, Only changes in user defined tables that are not unlogged - (see ) and not temporary - (see ) can be extracted using + (see ) and not temporary + (see ) can be extracted using logical decoding. @@ -685,7 +685,7 @@ OutputPluginWrite(ctx, true); src/backend/replication/logical/logicalfuncs.c. Essentially, three functions need to be provided: one to read WAL, one to prepare writing output, and one to write the output - (see ). + (see ). @@ -698,9 +698,9 @@ OutputPluginWrite(ctx, true); replication solutions with the same user interface as synchronous replication for streaming replication. To do this, the streaming replication interface - (see ) must be used to stream out + (see ) must be used to stream out data. Clients have to send Standby status update (F) - (see ) messages, just like streaming + (see ) messages, just like streaming replication clients do. diff --git a/doc/src/sgml/ltree.sgml b/doc/src/sgml/ltree.sgml index 602d9403f7..ea362f8a1d 100644 --- a/doc/src/sgml/ltree.sgml +++ b/doc/src/sgml/ltree.sgml @@ -183,7 +183,7 @@ Europe & Russia*@ & !Transportation <, >, <=, >=. Comparison sorts in the order of a tree traversal, with the children of a node sorted by label text. In addition, the specialized - operators shown in are available. + operators shown in are available.
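For example, the hierarchy operators behave as follows (values illustrative):

    SELECT 'Top.Science.Astronomy'::ltree <@ 'Top.Science'::ltree;  -- true: left is a descendant of right
    SELECT 'Top.Science'::ltree @> 'Top.Science.Astronomy'::ltree;  -- true: left is an ancestor of right
    SELECT 'Top.Science.Astronomy'::ltree ~ '*.Astronomy'::lquery;  -- true: lquery pattern match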
@@ -362,7 +362,7 @@ Europe & Russia*@ & !Transportation - The available functions are shown in . + The available functions are shown in .
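A few of these functions in action (values illustrative):

    SELECT nlevel('Top.Science.Astronomy');                      -- 3
    SELECT subpath('Top.Science.Astronomy', 0, 2);               -- Top.Science
    SELECT lca('Top.Science.Astronomy', 'Top.Science.Physics');  -- Top.Science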
@@ -672,7 +672,7 @@ ltreetest=> SELECT ins_label(path,2,'Space') FROM test WHERE path <@ 'Top. the ltree type for PL/Python. The extensions are called ltree_plpythonu, ltree_plpython2u, and ltree_plpython3u - (see for the PL/Python naming + (see for the PL/Python naming convention). If you install these transforms and specify them when creating a function, ltree values are mapped to Python lists. (The reverse is currently not supported, however.) diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml index 1a379058a2..4a68ec3b40 100644 --- a/doc/src/sgml/maintenance.sgml +++ b/doc/src/sgml/maintenance.sgml @@ -28,20 +28,20 @@ after a catastrophe (disk failure, fire, mistakenly dropping a critical table, etc.). The backup and recovery mechanisms available in PostgreSQL are discussed at length in - . + . The other main category of maintenance task is periodic vacuuming of the database. This activity is discussed in - . Closely related to this is updating + . Closely related to this is updating the statistics that will be used by the query planner, as discussed in - . + . Another task that might need periodic attention is log file management. - This is discussed in . + This is discussed in . @@ -70,7 +70,7 @@ PostgreSQL databases require periodic maintenance known as vacuuming. For many installations, it is sufficient to let vacuuming be performed by the autovacuum - daemon, which is described in . You might + daemon, which is described in . You might need to adjust the autovacuuming parameters described there to obtain best results for your situation. Some database administrators will want to supplement or replace the daemon's activities with manually-managed @@ -87,7 +87,7 @@ PostgreSQL's - command has to + command has to process each table on a regular basis for several reasons: @@ -140,7 +140,7 @@ traffic, which can cause poor performance for other active sessions. There are configuration parameters that can be adjusted to reduce the performance impact of background vacuuming — see - . + . @@ -156,7 +156,7 @@ UPDATE or DELETE of a row does not immediately remove the old version of the row. This approach is necessary to gain the benefits of multiversion - concurrency control (MVCC, see ): the row version + concurrency control (MVCC, see ): the row version must not be deleted while it is still potentially visible to other transactions. But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it occupies must then be @@ -217,7 +217,7 @@ their busiest tables as often as once every few minutes.) If you have multiple databases in a cluster, don't forget to VACUUM each one; the program might be helpful. + linkend="app-vacuumdb"/> might be helpful. @@ -227,9 +227,9 @@ massive update or delete activity. If you have such a table and you need to reclaim the excess disk space it occupies, you will need to use VACUUM FULL, or alternatively - + or one of the table-rewriting variants of - . + . These commands rewrite an entire new copy of the table and build new indexes for it. All these options require exclusive lock. Note that they also temporarily use extra disk space approximately equal to the size @@ -242,7 +242,7 @@ If you have a table whose entire contents are deleted on a periodic basis, consider doing it with - rather + rather than using DELETE followed by VACUUM. 
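For a hypothetical log table, that is:

    -- Instead of:
    DELETE FROM log_table;
    VACUUM log_table;
    -- simply:
    TRUNCATE log_table;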
TRUNCATE removes the entire content of the table immediately, without requiring a @@ -269,7 +269,7 @@ The PostgreSQL query planner relies on statistical information about the contents of tables in order to generate good plans for queries. These statistics are gathered by - the command, + the command, which can be invoked by itself or as an optional step in VACUUM. It is important to have reasonably accurate statistics, otherwise poor choices of plans might @@ -323,7 +323,7 @@ clauses and have highly irregular data distributions might require a finer-grain data histogram than other columns. See ALTER TABLE SET STATISTICS, or change the database-wide default using the configuration parameter. + linkend="guc-default-statistics-target"/> configuration parameter. @@ -453,7 +453,7 @@ - + controls how old an XID value has to be before rows bearing that XID will be frozen. Increasing this setting may avoid unnecessary work if the rows that would otherwise be frozen will soon be modified again, @@ -471,7 +471,7 @@ Periodically, VACUUM will perform an aggressive vacuum, skipping only those pages which contain neither dead rows nor any unfrozen XID or MXID values. - + controls when VACUUM does that: all-visible but not all-frozen pages are scanned if the number of transactions that have passed since the last such scan is greater than vacuum_freeze_table_age minus @@ -488,7 +488,7 @@ that, data loss could result. To ensure that this does not happen, autovacuum is invoked on any table that might contain unfrozen rows with XIDs older than the age specified by the configuration parameter . (This will happen even if + linkend="guc-autovacuum-freeze-max-age"/>. (This will happen even if autovacuum is disabled.) @@ -636,7 +636,7 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. execute commands once it has gone into the safety shutdown mode, the only way to do this is to stop the server and start the server in single-user mode to execute VACUUM. The shutdown mode is not enforced - in single-user mode. See the reference + in single-user mode. See the reference page for details about using single-user mode. @@ -673,13 +673,13 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. Whenever VACUUM scans any part of a table, it will replace any multixact ID it encounters which is older than - + by a different value, which can be the zero value, a single transaction ID, or a newer multixact ID. For each table, pg_class.relminmxid stores the oldest possible multixact ID still appearing in any tuple of that table. If this value is older than - , an aggressive + , an aggressive vacuum is forced. As discussed in the previous section, an aggressive vacuum means that only those pages which are known to be all-frozen will be skipped. mxid_age() can be used on @@ -697,7 +697,7 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. As a safety device, an aggressive vacuum scan will occur for any table whose multixact-age is greater than - . Aggressive + . Aggressive vacuum scans will also occur progressively for all tables, starting with those that have the oldest multixact-age, if the amount of used member storage space exceeds the amount 50% of the addressable storage space. @@ -723,7 +723,7 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. tables that have had a large number of inserted, updated or deleted tuples. These checks use the statistics collection facility; therefore, autovacuum cannot be used unless is set to true. 
+ linkend="guc-track-counts"/> is set to true. In the default configuration, autovacuuming is enabled and the related configuration parameters are appropriately set. @@ -734,17 +734,17 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. autovacuum launcher, which is in charge of starting autovacuum worker processes for all databases. The launcher will distribute the work across time, attempting to start one - worker within each database every + worker within each database every seconds. (Therefore, if the installation has N databases, a new worker will be launched every autovacuum_naptime/N seconds.) - A maximum of worker processes + A maximum of worker processes are allowed to run at the same time. If there are more than autovacuum_max_workers databases to be processed, the next database will be processed as soon as the first worker finishes. Each worker process will check each table within its database and execute VACUUM and/or ANALYZE as needed. - can be set to monitor + can be set to monitor autovacuum workers' activity. @@ -756,13 +756,13 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. available. There is no limit on how many workers might be in a single database, but workers do try to avoid repeating work that has already been done by other workers. Note that the number of running - workers does not count towards or - limits. + workers does not count towards or + limits. Tables whose relfrozenxid value is more than - transactions old are always + transactions old are always vacuumed (this also applies to those tables whose freeze max age has been modified via storage parameters; see below). Otherwise, if the number of tuples obsoleted since the last @@ -772,9 +772,9 @@ HINT: Stop the postmaster and vacuum that database in single-user mode. vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples where the vacuum base threshold is - , + , the vacuum scale factor is - , + , and the number of tuples is pg_class.reltuples. The number of obsolete tuples is obtained from the statistics @@ -808,16 +808,16 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu postgresql.conf, but it is possible to override them (and many other autovacuum control parameters) on a per-table basis; see for more information. + endterm="sql-createtable-storage-parameters-title"/> for more information. If a setting has been changed via a table's storage parameters, that value is used when processing that table; otherwise the global settings are - used. See for more details on + used. See for more details on the global settings. When multiple workers are running, the autovacuum cost delay parameters - (see ) are + (see ) are balanced among all the running workers, so that the total I/O impact on the system is the same regardless of the number of workers actually running. However, any workers processing tables whose @@ -838,7 +838,7 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu In some situations it is worthwhile to rebuild indexes periodically - with the command or a series of individual + with the command or a series of individual rebuilding steps. @@ -868,16 +868,16 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu - can be used safely and easily in all cases. + can be used safely and easily in all cases. 
But since the command requires an exclusive table lock, it is often preferable to execute an index rebuild with a sequence of creation and replacement steps. Index types that support - with the CONCURRENTLY + with the CONCURRENTLY option can instead be recreated that way. If that is successful and the resulting index is valid, the original index can then be replaced by - the newly built one using a combination of - and . When an index is used to enforce - uniqueness or other constraints, might + the newly built one using a combination of + and . When an index is used to enforce + uniqueness or other constraints, might be necessary to swap the existing constraint with one enforced by the new index. Review this alternate multistep rebuild approach carefully before using it as there are limitations on which @@ -922,7 +922,7 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu setting the configuration parameter logging_collector to true in postgresql.conf. The control parameters for this program are described in . You can also use this approach + linkend="runtime-config-logging-where"/>. You can also use this approach to capture the log data in machine readable CSV (comma-separated values) format. diff --git a/doc/src/sgml/manage-ag.sgml b/doc/src/sgml/manage-ag.sgml index f005538220..0154064e50 100644 --- a/doc/src/sgml/manage-ag.sgml +++ b/doc/src/sgml/manage-ag.sgml @@ -49,20 +49,20 @@ resources, they should be put in the same database but possibly into separate schemas. Schemas are a purely logical structure and who can access what is managed by the privilege system. More information about - managing schemas is in . + managing schemas is in . Databases are created with the CREATE DATABASE command - (see ) and destroyed with the + (see ) and destroyed with the DROP DATABASE command - (see ). + (see ). To determine the set of existing databases, examine the pg_database system catalog, for example SELECT datname FROM pg_database; - The program's \l meta-command + The program's \l meta-command and command-line option are also useful for listing the existing databases. @@ -83,12 +83,12 @@ SELECT datname FROM pg_database; In order to create a database, the PostgreSQL server must be up and running (see ). + linkend="server-start"/>). Databases are created with the SQL command - : + : CREATE DATABASE name; @@ -101,7 +101,7 @@ CREATE DATABASE name; The creation of databases is a restricted operation. See for how to grant permission. + linkend="role-attributes"/> for how to grant permission. @@ -110,7 +110,7 @@ CREATE DATABASE name; question remains how the first database at any given site can be created. The first database is always created by the initdb command when the data storage area is - initialized. (See .) This + initialized. (See .) This database is called postgres.postgres So to create the first ordinary database you can connect to @@ -127,7 +127,7 @@ CREATE DATABASE name; propagated to all subsequently created databases. Because of this, avoid creating objects in template1 unless you want them propagated to every newly created database. More details - appear in . + appear in . @@ -142,14 +142,14 @@ createdb dbname createdb does no magic. It connects to the postgres database and issues the CREATE DATABASE command, exactly as described above. - The reference page contains the invocation + The reference page contains the invocation details. Note that createdb without any arguments will create a database with the current user name. 
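The multistep index rebuild described above might look like the following sketch, for a hypothetical index idx on column col of table tab:

    CREATE INDEX CONCURRENTLY idx_new ON tab (col);  -- build a replacement without blocking writes
    DROP INDEX CONCURRENTLY idx;                     -- once idx_new is valid
    ALTER INDEX idx_new RENAME TO idx;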
- contains information about + contains information about how to restrict who can connect to a given database. @@ -283,7 +283,7 @@ createdb -T template0 dbname Database Configuration - Recall from that the + Recall from that the PostgreSQL server provides a large number of run-time configuration variables. You can set database-specific default values for many of these settings. @@ -315,7 +315,7 @@ ALTER DATABASE mydb SET geqo TO off; Databases are destroyed with the command - :DROP DATABASE + :DROP DATABASE DROP DATABASE name; @@ -337,7 +337,7 @@ DROP DATABASE name; For convenience, there is also a shell program to drop - databases, :dropdb + databases, :dropdb dropdb dbname @@ -396,7 +396,7 @@ dropdb dbname To define a tablespace, use the + linkend="sql-createtablespace"/> command, for example:CREATE TABLESPACE: CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/data'; @@ -438,7 +438,7 @@ CREATE TABLE foo(i int) TABLESPACE space1; - Alternatively, use the parameter: + Alternatively, use the parameter: SET default_tablespace = space1; CREATE TABLE foo(i int); @@ -450,7 +450,7 @@ CREATE TABLE foo(i int); - There is also a parameter, which + There is also a parameter, which determines the placement of temporary tables and indexes, as well as temporary files that are used for purposes such as sorting large data sets. This can be a list of tablespace names, rather than only one, @@ -490,7 +490,7 @@ CREATE TABLE foo(i int); To remove an empty tablespace, use the + linkend="sql-droptablespace"/> command. @@ -501,7 +501,7 @@ CREATE TABLE foo(i int); SELECT spcname FROM pg_tablespace; - The program's \db meta-command + The program's \db meta-command is also useful for listing the existing tablespaces. diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index 6f8203355e..8d461c8145 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -27,8 +27,8 @@ ps, top, iostat, and vmstat. Also, once one has identified a poorly-performing query, further investigation might be needed using - PostgreSQL's command. - discusses EXPLAIN + PostgreSQL's command. + discusses EXPLAIN and other methods for understanding the behavior of an individual query. @@ -92,7 +92,7 @@ postgres: user database - If has been configured the + If has been configured the cluster name will also be shown in ps output: $ psql -c 'SHOW cluster_name' @@ -108,7 +108,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - If you have turned off then the + If you have turned off then the activity indicator is not updated; the process title is set only once when a new process is launched. On some platforms this saves a measurable amount of per-command overhead; on others it's insignificant. @@ -161,27 +161,27 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Since collection of statistics adds some overhead to query execution, the system can be configured to collect or not collect information. This is controlled by configuration parameters that are normally set in - postgresql.conf. (See for + postgresql.conf. (See for details about setting configuration parameters.) - The parameter enables monitoring + The parameter enables monitoring of the current command being executed by any server process. - The parameter controls whether + The parameter controls whether statistics are collected about table and index accesses. - The parameter enables tracking of + The parameter enables tracking of usage of user-defined functions. 
- The parameter enables monitoring + The parameter enables monitoring of block read and write times. @@ -189,7 +189,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Normally these parameters are set in postgresql.conf so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the command. (To prevent + linkend="sql-set"/> command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with SET.) @@ -199,7 +199,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser The statistics collector transmits the collected information to other PostgreSQL processes through temporary files. These files are stored in the directory named by the - parameter, + parameter, pg_stat_tmp by default. For better performance, stats_temp_directory can be pointed at a RAM-based file system, decreasing physical I/O requirements. @@ -217,13 +217,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Several predefined views, listed in , are available to show + linkend="monitoring-stats-dynamic-views-table"/>, are available to show the current state of the system. There are also several other views, listed in , available to show the results + linkend="monitoring-stats-views-table"/>, available to show the results of statistics collection. Alternatively, one can build custom views using the underlying statistics functions, as discussed - in . + in . @@ -288,7 +288,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser One row per server process, showing information related to the current activity of that process, such as state and current query. - See for details. + See for details. @@ -296,7 +296,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser pg_stat_replicationpg_stat_replication One row per WAL sender process, showing statistics about replication to that sender's connected standby server. - See for details. + See for details. @@ -304,7 +304,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser pg_stat_wal_receiverpg_stat_wal_receiver Only one row, showing statistics about the WAL receiver from that receiver's connected server. - See for details. + See for details. @@ -312,7 +312,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser pg_stat_subscriptionpg_stat_subscription At least one row per subscription, showing information about the subscription workers. - See for details. + See for details. @@ -320,7 +320,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser pg_stat_sslpg_stat_ssl One row per connection (regular and replication), showing information about SSL used on this connection. - See for details. + See for details. @@ -328,7 +328,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser pg_stat_progress_vacuumpg_stat_progress_vacuum One row for each backend (including autovacuum worker processes) running VACUUM, showing current progress. - See . + See . @@ -352,7 +352,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser pg_stat_archiverpg_stat_archiver One row only, showing statistics about the WAL archiver process's activity. See - for details. + for details. @@ -360,14 +360,14 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser pg_stat_bgwriterpg_stat_bgwriter One row only, showing statistics about the background writer process's activity. See - for details. + for details. 
pg_stat_databasepg_stat_database One row per database, showing database-wide statistics. See - for details. + for details. @@ -376,7 +376,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser One row per database, showing database-wide statistics about query cancels due to conflict with recovery on standby servers. - See for details. + See for details. @@ -385,7 +385,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser One row for each table in the current database, showing statistics about accesses to that specific table. - See for details. + See for details. @@ -427,7 +427,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser One row for each index in the current database, showing statistics about accesses to that specific index. - See for details. + See for details. @@ -448,7 +448,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser One row for each table in the current database, showing statistics about I/O on that specific table. - See for details. + See for details. @@ -469,7 +469,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser One row for each index in the current database, showing statistics about I/O on that specific index. - See for details. + See for details. @@ -490,7 +490,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser One row for each sequence in the current database, showing statistics about I/O on that specific sequence. - See for details. + See for details. @@ -512,7 +512,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser One row for each tracked function, showing statistics about executions of that function. See - for details. + for details. @@ -609,7 +609,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when is enabled. + linkend="guc-log-hostname"/> is enabled. @@ -731,7 +731,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser wait_event text Wait event name if backend is currently waiting, otherwise NULL. - See for details. + See for details. @@ -772,7 +772,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser disabled: This state is reported if is disabled in this backend. + linkend="guc-track-activities"/> is disabled in this backend. @@ -796,7 +796,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 characters; this value can be changed via the parameter - . + . @@ -1683,7 +1683,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when is enabled. + linkend="guc-log-hostname"/> is enabled. @@ -1704,7 +1704,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i backend_xmin xid This standby's xmin horizon reported - by . + by . state @@ -2347,7 +2347,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i bigint Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see - for details.) + for details.) 
@@ -2356,7 +2356,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the - setting. + setting. @@ -2365,7 +2365,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and - regardless of the setting. + regardless of the setting. @@ -2942,7 +2942,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i The pg_stat_user_functions view will contain one row for each tracked function, showing statistics about executions of - that function. The parameter + that function. The parameter controls exactly which functions are tracked. @@ -2967,7 +2967,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i Additional functions related to statistics collection are listed in . + linkend="monitoring-stats-funcs-table"/>.
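As a quick illustration of the statistics views listed above, the following query shows all non-idle backends together with any wait events:

    SELECT pid, state, wait_event_type, wait_event, query
    FROM pg_stat_activity
    WHERE state <> 'idle';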
@@ -3074,7 +3074,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i Sometimes it may be more convenient to obtain just a subset of this information. In such cases, an older set of per-backend statistics access functions can be used; these are shown in . + linkend="monitoring-stats-backend-funcs-table"/>. These access functions use a backend ID number, which ranges from one to the number of currently active backends. The function pg_stat_get_backend_idset provides a @@ -3162,7 +3162,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, pg_stat_get_backend_wait_event_type(integer) text Wait event type name if backend is currently waiting, otherwise NULL. - See for details. + See for details. @@ -3170,7 +3170,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, pg_stat_get_backend_wait_event(integer) text Wait event name if backend is currently waiting, otherwise NULL. - See for details. + See for details. @@ -3230,9 +3230,9 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, Details of the pg_locks view appear in - . + . For more information on locking and managing concurrency with - PostgreSQL, refer to . + PostgreSQL, refer to . @@ -3296,7 +3296,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, phase text - Current processing phase of vacuum. See . + Current processing phase of vacuum. See . @@ -3343,7 +3343,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on - . + . @@ -3390,7 +3390,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, VACUUM is currently vacuuming the indexes. If a table has any indexes, this will happen at least once per vacuum, after the heap has been completely scanned. It may happen multiple times per vacuum - if is insufficient to + if is insufficient to store the number of dead tuples found. @@ -3478,7 +3478,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, explicitly tell the configure script to make the probes available in PostgreSQL. To include DTrace support specify to configure. See for further information. + linkend="install-procedure"/> for further information. @@ -3487,8 +3487,8 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, A number of standard probes are provided in the source code, - as shown in ; - + as shown in ; + shows the types used in the probes. More probes can certainly be added to enhance PostgreSQL's observability. @@ -3752,7 +3752,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, (ForkNumber, BlockNumber, Oid, Oid, Oid) Probe that fires when a server process begins to write a dirty buffer. (If this happens often, it implies that - is too + is too small or the background writer control parameters need adjustment.) arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs @@ -3770,7 +3770,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, Probe that fires when a server process begins to write a dirty WAL buffer because no more WAL buffer space is available. (If this happens often, it implies that - is too small.) + is too small.) wal-buffer-write-dirty-done diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml index a0ca2851e5..24613e3c75 100644 --- a/doc/src/sgml/mvcc.sgml +++ b/doc/src/sgml/mvcc.sgml @@ -165,7 +165,7 @@ transaction isolation level The SQL standard and PostgreSQL-implemented transaction isolation levels - are described in . + are described in .
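For example, a sketch of running a single transaction at a stricter isolation level (the SET TRANSACTION command used here is described just below; the table is illustrative):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT sum(balance) FROM accounts;  -- sees one consistent snapshot
    COMMIT;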
@@ -286,7 +286,7 @@ To set the transaction isolation level of a transaction, use the - command . + command . @@ -296,8 +296,8 @@ made to a sequence (and therefore the counter of a column declared using serial) are immediately visible to all other transactions and are not rolled back if the transaction - that made the changes aborts. See - and . + that made the changes aborts. See + and . @@ -461,7 +461,7 @@ COMMIT; even though they are not yet committed.) This is a stronger guarantee than is required by the SQL standard for this isolation level, and prevents all of the phenomena described - in except for serialization + in except for serialization anomalies. As mentioned above, this is specifically allowed by the standard, which only describes the minimum protections each isolation level must @@ -748,7 +748,7 @@ ERROR: could not serialize access due to read/write dependencies among transact Don't leave connections dangling idle in transaction longer than necessary. The configuration parameter - may be used to + may be used to automatically disconnect lingering sessions. @@ -765,9 +765,9 @@ ERROR: could not serialize access due to read/write dependencies among transact locks into a single relation-level predicate lock because the predicate lock table is short of memory, an increase in the rate of serialization failures may occur. You can avoid this by increasing - , - , and/or - . + , + , and/or + . @@ -775,8 +775,8 @@ ERROR: could not serialize access due to read/write dependencies among transact A sequential scan will always necessitate a relation-level predicate lock. This can result in an increased rate of serialization failures. It may be helpful to encourage the use of index scans by reducing - and/or increasing - . Be sure to weigh any decrease + and/or increasing + . Be sure to weigh any decrease in transaction rollbacks and restarts against any overall change in query execution time. @@ -811,7 +811,7 @@ ERROR: could not serialize access due to read/write dependencies among transact server, use the pg_locks system view. For more information on monitoring the status of the lock - manager subsystem, refer to . + manager subsystem, refer to . @@ -826,14 +826,14 @@ ERROR: could not serialize access due to read/write dependencies among transact which they are used automatically by PostgreSQL. You can also acquire any of these locks explicitly with the command . + linkend="sql-lock"/>. Remember that all of these lock modes are table-level locks, even if the name contains the word row; the names of the lock modes are historical. To some extent the names reflect the typical usage of each lock mode — but the semantics are all the same. The only real difference between one lock mode and another is the set of lock modes with - which each conflicts (see ). + which each conflicts (see ). Two transactions cannot hold locks of conflicting modes on the same table at the same time. (However, a transaction never conflicts with itself. For example, it might acquire @@ -929,7 +929,7 @@ ERROR: could not serialize access due to read/write dependencies among transact CREATE STATISTICS and ALTER TABLE VALIDATE and other ALTER TABLE variants (for full details see - ). + ). @@ -972,7 +972,7 @@ ERROR: could not serialize access due to read/write dependencies among transact Acquired by CREATE COLLATION, CREATE TRIGGER, and many forms of - ALTER TABLE (see ). + ALTER TABLE (see ). @@ -1053,9 +1053,9 @@ ERROR: could not serialize access due to read/write dependencies among transact
Conflicting Lock Modes - - - + + + Requested Lock Mode @@ -1173,7 +1173,7 @@ ERROR: could not serialize access due to read/write dependencies among transact In addition to table-level locks, there are row-level locks, which are listed as below with the contexts in which they are used automatically by PostgreSQL. See - for a complete table of + for a complete table of row-level lock conflicts. Note that a transaction can hold conflicting locks on the same row, even in different subtransactions; but other than that, two transactions can never hold conflicting locks @@ -1208,7 +1208,7 @@ ERROR: could not serialize access due to read/write dependencies among transact SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see - . + . The FOR UPDATE lock mode @@ -1286,9 +1286,9 @@ ERROR: could not serialize access due to read/write dependencies among transact
Conflicting Row-level Locks - - - + + + Requested Lock Mode @@ -1495,8 +1495,8 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222; Both advisory locks and regular locks are stored in a shared memory pool whose size is defined by the configuration variables - and - . + and + . Care must be taken not to exhaust this memory or the server will be unable to grant any locks at all. This imposes an upper limit on the number of advisory locks @@ -1529,7 +1529,7 @@ SELECT pg_advisory_lock(q.id) FROM The functions provided to manipulate advisory locks are described in - . + . @@ -1565,7 +1565,7 @@ SELECT pg_advisory_lock(q.id) FROM - As mentioned in , Serializable + As mentioned in , Serializable transactions are just Repeatable Read transactions which add nonblocking monitoring for dangerous patterns of read/write conflicts. When a pattern is detected which could cause a cycle in the apparent @@ -1598,13 +1598,13 @@ SELECT pg_advisory_lock(q.id) FROM - See for performance suggestions. + See for performance suggestions. This level of integrity protection using Serializable transactions - does not yet extend to hot standby mode (). + does not yet extend to hot standby mode (). Because of that, those using hot standby may want to use Repeatable Read and explicit locking on the master. @@ -1687,8 +1687,8 @@ SELECT pg_advisory_lock(q.id) FROM Caveats - Some DDL commands, currently only and the - table-rewriting forms of , are not + Some DDL commands, currently only and the + table-rewriting forms of , are not MVCC-safe. This means that after the truncation or rewrite commits, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the DDL command committed. This will only be an @@ -1705,7 +1705,7 @@ SELECT pg_advisory_lock(q.id) FROM Support for the Serializable transaction isolation level has not yet been added to Hot Standby replication targets (described in - ). The strictest isolation level currently + ). The strictest isolation level currently supported in hot standby mode is Repeatable Read. While performing all permanent database writes within Serializable transactions on the master will ensure that all standbys will eventually reach a consistent diff --git a/doc/src/sgml/nls.sgml b/doc/src/sgml/nls.sgml index f312b5bfb5..16e25abd31 100644 --- a/doc/src/sgml/nls.sgml +++ b/doc/src/sgml/nls.sgml @@ -272,7 +272,7 @@ msgstr "Die Datei %2$s hat %1$u Zeichen." open file %s) should probably not start with a capital letter (if your language distinguishes letter case) or end with a period (if your language uses punctuation marks). - It might help to read . + It might help to read . diff --git a/doc/src/sgml/oid2name.sgml b/doc/src/sgml/oid2name.sgml index 4ab2cf1a85..dd875281c8 100644 --- a/doc/src/sgml/oid2name.sgml +++ b/doc/src/sgml/oid2name.sgml @@ -30,7 +30,7 @@ oid2name is a utility program that helps administrators to examine the file structure used by PostgreSQL. To make use of it, you need to be familiar with the database file structure, which is described in - . + . diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml index 6aac506942..f15a9233cb 100644 --- a/doc/src/sgml/parallel.sgml +++ b/doc/src/sgml/parallel.sgml @@ -64,10 +64,10 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; worker processes equal to the number of workers chosen by the planner. The number of background workers that the planner will consider using is limited to at most - . The total number + . 
The total number of background workers that can exist at any one time is limited by both - and - . Therefore, it is possible for a + and + . Therefore, it is possible for a parallel query to run with fewer workers than planned, or even with no workers at all. The optimal plan may depend on the number of workers that are available, so this can result in poor query performance. If this @@ -118,7 +118,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; - must be set to a + must be set to a value which is greater than zero. This is a special case of the more general principle that no more workers should be used than the number configured via max_parallel_workers_per_gather. @@ -127,7 +127,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; - must be set to a + must be set to a value other than none. Parallel query requires dynamic shared memory in order to pass data between cooperating processes. @@ -178,7 +178,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; Most system-defined functions are PARALLEL SAFE, but user-defined functions are marked PARALLEL UNSAFE by default. See the discussion of - . + . @@ -215,7 +215,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; No background workers can be obtained because of the limitation that the total number of background workers cannot exceed - . + . @@ -223,7 +223,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; No background workers can be obtained because of the limitation that the total number of background workers launched for purposes of - parallel query cannot exceed . + parallel query cannot exceed . @@ -236,7 +236,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; send such a message, this can only occur when using a client that does not rely on libpq. If this is a frequent occurrence, it may be a good idea to set - to zero in + to zero in sessions where it is likely, so as to avoid generating query plans that may be suboptimal when run serially. @@ -374,7 +374,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; must be safe for parallelism and must have a combine function. If the aggregate has a transition state of type internal, it must have serialization and deserialization - functions. See for more details. + functions. See for more details. Parallel aggregation is not supported if any aggregate function call contains DISTINCT or ORDER BY clause and is also not supported for ordered set aggregates or when the query involves @@ -389,15 +389,15 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; If a query that is expected to do so does not produce a parallel plan, - you can try reducing or - . Of course, this plan may turn + you can try reducing or + . Of course, this plan may turn out to be slower than the serial plan which the planner preferred, but this will not always be the case. If you don't get a parallel plan even with very small values of these settings (e.g. after setting them both to zero), there may be some reason why the query planner is unable to generate a parallel plan for your query. See - and - for information on why this may be + and + for information on why this may be the case. @@ -473,11 +473,11 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%'; where it conceivably be done, we do not try, since this would be expensive and error-prone. Instead, all user-defined functions are assumed to be parallel unsafe unless otherwise marked. 
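For example, using the marking syntax explained next, a simple function might be declared safe for use in parallel plans (the function itself is a placeholder):

    CREATE FUNCTION add_one(i integer) RETURNS integer
        AS 'SELECT i + 1'
        LANGUAGE sql IMMUTABLE PARALLEL SAFE;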
When using - or - , markings can be set by specifying + or + , markings can be set by specifying PARALLEL SAFE, PARALLEL RESTRICTED, or PARALLEL UNSAFE as appropriate. When using - , the + , the PARALLEL option can be specified with SAFE, RESTRICTED, or UNSAFE as the corresponding value. diff --git a/doc/src/sgml/passwordcheck.sgml b/doc/src/sgml/passwordcheck.sgml index d034f8887f..3db169b4c1 100644 --- a/doc/src/sgml/passwordcheck.sgml +++ b/doc/src/sgml/passwordcheck.sgml @@ -10,15 +10,15 @@ The passwordcheck module checks users' passwords whenever they are set with - or - . + or + . If a password is considered too weak, it will be rejected and the command will terminate with an error. To enable this module, add '$libdir/passwordcheck' - to in + to in postgresql.conf, then restart the server. @@ -49,7 +49,7 @@ For this reason, passwordcheck is not recommended if your security requirements are high. It is more secure to use an external authentication method such as GSSAPI - (see ) than to rely on + (see ) than to rely on passwords within the database. diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml index fed5c956a2..a51fd1c7dc 100644 --- a/doc/src/sgml/perform.sgml +++ b/doc/src/sgml/perform.sgml @@ -31,7 +31,7 @@ plan to match the query structure and the properties of the data is absolutely critical for good performance, so the system includes a complex planner that tries to choose good plans. - You can use the command + You can use the command to see what query plan the planner creates for any query. Plan-reading is an art that requires some experience to master, but this section attempts to cover the basics. @@ -132,9 +132,9 @@ EXPLAIN SELECT * FROM tenk1; The costs are measured in arbitrary units determined by the planner's - cost parameters (see ). + cost parameters (see ). Traditional practice is to measure the costs in units of disk page - fetches; that is, is conventionally + fetches; that is, is conventionally set to 1.0 and the other cost parameters are set relative to that. The examples in this section are run with the default cost parameters. @@ -182,8 +182,8 @@ SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1'; you will find that tenk1 has 358 disk pages and 10000 rows. The estimated cost is computed as (disk pages read * - ) + (rows scanned * - ). By default, + ) + (rows scanned * + ). By default, seq_page_cost is 1.0 and cpu_tuple_cost is 0.01, so the estimated cost is (358 * 1.0) + (10000 * 0.01) = 458. @@ -209,7 +209,7 @@ EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 7000; WHERE clause. However, the scan will still have to visit all 10000 rows, so the cost hasn't decreased; in fact it has gone up a bit (by 10000 * , to be exact) to reflect the extra CPU + linkend="guc-cpu-operator-cost"/>, to be exact) to reflect the extra CPU time spent checking the WHERE condition. @@ -508,9 +508,9 @@ WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2; One way to look at variant plans is to force the planner to disregard whatever strategy it thought was the cheapest, using the enable/disable - flags described in . + flags described in . (This is a crude tool, but useful. See - also .) + also .) For example, if we're unconvinced that sequential-scan-and-sort is the best way to deal with table onek in the previous example, we could try @@ -828,7 +828,7 @@ EXPLAIN UPDATE parent SET f2 = f2 + 1 WHERE f1 = 101; Second, the measurement overhead added by EXPLAIN ANALYZE can be significant, especially on machines with slow gettimeofday() operating-system calls. 
You can use the - tool to measure the overhead of timing + tool to measure the overhead of timing on your system. @@ -1032,7 +1032,7 @@ WHERE tablename = 'road'; arrays for each column, can be set on a column-by-column basis using the ALTER TABLE SET STATISTICS command, or globally by setting the - configuration variable. + configuration variable. The default limit is presently 100 entries. Raising the limit might allow more accurate planner estimates to be made, particularly for columns with irregular data distributions, at the price of consuming @@ -1043,7 +1043,7 @@ WHERE tablename = 'road'; Further details about the planner's use of statistics can be found in - . + . @@ -1087,7 +1087,7 @@ WHERE tablename = 'road'; Statistics objects are created using - , which see for more details. + , which see for more details. Creation of such an object merely creates a catalog entry expressing interest in the statistics. Actual data collection is performed by ANALYZE (either a manual command, or background @@ -1323,7 +1323,7 @@ SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; PostgreSQL planner will switch from exhaustive search to a genetic probabilistic search through a limited number of possibilities. (The switch-over threshold is - set by the run-time + set by the run-time parameter.) The genetic search takes less time, but it won't necessarily find the best possible plan. @@ -1379,7 +1379,7 @@ SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); To force the planner to follow the join order laid out by explicit JOINs, - set the run-time parameter to 1. + set the run-time parameter to 1. (Other possible values are discussed below.) @@ -1436,8 +1436,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - and + and are similarly named because they do almost the same thing: one controls when the planner will flatten out subqueries, and the other controls when it will flatten out explicit joins. Typically @@ -1488,7 +1488,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Use <command>COPY</command> - Use to load + Use to load all the rows in one command, instead of using a series of INSERT commands. The COPY command is optimized for loading large numbers of rows; it is less @@ -1500,7 +1500,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; If you cannot use COPY, it might help to use to create a + linkend="sql-prepare"/> to create a prepared INSERT statement, and then use EXECUTE as many times as required. This avoids some of the overhead of repeatedly parsing and planning @@ -1523,7 +1523,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; needs to be written, because in case of an error, the files containing the newly loaded data will be removed anyway. However, this consideration only applies when - is minimal as all commands + is minimal as all commands must write WAL otherwise. @@ -1581,7 +1581,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Increase <varname>maintenance_work_mem</varname> - Temporarily increasing the + Temporarily increasing the configuration variable when loading large amounts of data can lead to improved performance. This will help to speed up CREATE INDEX commands and ALTER TABLE ADD FOREIGN KEY commands. @@ -1594,7 +1594,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Increase <varname>max_wal_size</varname> - Temporarily increasing the + Temporarily increasing the configuration variable can also make large data loads faster. 
This is because loading a large amount of data into PostgreSQL will @@ -1617,9 +1617,9 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; new base backup after the load has completed than to process a large amount of incremental WAL data. To prevent incremental WAL logging while loading, disable archiving and streaming replication, by setting - to minimal, - to off, and - to zero. + to minimal, + to off, and + to zero. But note that changing these settings requires a server restart. @@ -1668,7 +1668,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; Whenever you have significantly altered the distribution of data - within a table, running is strongly recommended. This + within a table, running is strongly recommended. This includes bulk loading large amounts of data into the table. Running ANALYZE (or VACUUM ANALYZE) ensures that the planner has up-to-date statistics about the @@ -1677,8 +1677,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; performance on any tables with inaccurate or nonexistent statistics. Note that if the autovacuum daemon is enabled, it might run ANALYZE automatically; see - - and for more information. + + and for more information. @@ -1779,8 +1779,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; maintenance_work_mem; rather, you'd do that while manually recreating indexes and foreign keys afterwards. And don't forget to ANALYZE when you're done; see - - and for more information. + + and for more information. @@ -1816,14 +1816,14 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - Turn off ; there is no need to flush + Turn off ; there is no need to flush data to disk. - Turn off ; there might be no + Turn off ; there might be no need to force WAL writes to disk on every commit. This setting does risk transaction loss (though not data corruption) in case of a crash of the database. @@ -1832,15 +1832,15 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; - Turn off ; there is no need + Turn off ; there is no need to guard against partial page writes. - Increase and ; this reduces the frequency + Increase and ; this reduces the frequency of checkpoints, but increases the storage requirements of /pg_wal. diff --git a/doc/src/sgml/pgbuffercache.sgml b/doc/src/sgml/pgbuffercache.sgml index 18ac781d0d..faf5a3115d 100644 --- a/doc/src/sgml/pgbuffercache.sgml +++ b/doc/src/sgml/pgbuffercache.sgml @@ -33,7 +33,7 @@ The <structname>pg_buffercache</structname> View - The definitions of the columns exposed by the view are shown in . + The definitions of the columns exposed by the view are shown in .
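A typical query against the view, along the lines of the example in the full documentation, shows which relations of the current database occupy the most shared buffers (requires the extension to be installed and appropriate privileges):

    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
         JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;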
diff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml index 80595e193b..25341d684c 100644 --- a/doc/src/sgml/pgcrypto.sgml +++ b/doc/src/sgml/pgcrypto.sgml @@ -40,7 +40,7 @@ digest(data bytea, type text) returns bytea sha384 and sha512. If pgcrypto was built with OpenSSL, more algorithms are available, as detailed in - . + . @@ -129,7 +129,7 @@ hmac(data bytea, key text, type text) returns bytea - lists the algorithms + lists the algorithms supported by the crypt() function. @@ -247,7 +247,7 @@ gen_salt(type text [, iter_count integer ]) returns text — which is somewhat impractical. If the iter_count parameter is omitted, the default iteration count is used. Allowed values for iter_count depend on the algorithm and - are shown in . + are shown in .
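Putting these functions together, hashing and then verifying a password might look like this sketch (the literal password is illustrative):

    -- Hash with a random Blowfish salt:
    SELECT crypt('secret', gen_salt('bf')) AS pw_hash;

    -- Verify: crypt() reuses the salt embedded in the stored hash,
    -- so a matching password reproduces the stored value exactly.
    SELECT stored.pw_hash = crypt('secret', stored.pw_hash) AS matches
    FROM (SELECT crypt('secret', gen_salt('bf')) AS pw_hash) AS stored;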
@@ -292,7 +292,7 @@ gen_salt(type text [, iter_count integer ]) returns text - gives an overview of the relative slowness + gives an overview of the relative slowness of different hashing algorithms. The table shows how much time it would take to try all combinations of characters in an 8-character password, assuming diff --git a/doc/src/sgml/pgprewarm.sgml b/doc/src/sgml/pgprewarm.sgml index e0a6d0503f..51afc5df3f 100644 --- a/doc/src/sgml/pgprewarm.sgml +++ b/doc/src/sgml/pgprewarm.sgml @@ -13,7 +13,7 @@ or the PostgreSQL buffer cache. Prewarming can be performed manually using the pg_prewarm function, or can be performed automatically by including pg_prewarm in - . In the latter case, the + . In the latter case, the system will run a background worker which periodically records the contents of shared buffers in a file called autoprewarm.blocks and will, using 2 background workers, reload those same blocks after a restart. diff --git a/doc/src/sgml/pgrowlocks.sgml b/doc/src/sgml/pgrowlocks.sgml index 57dcf6beb2..39de41a03c 100644 --- a/doc/src/sgml/pgrowlocks.sgml +++ b/doc/src/sgml/pgrowlocks.sgml @@ -33,7 +33,7 @@ pgrowlocks(text) returns setof record The parameter is the name of a table. The result is a set of records, with one row for each locked row within the table. The output columns - are shown in . + are shown in .
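For example, to list the currently locked rows of a hypothetical table accounts:

    SELECT * FROM pgrowlocks('accounts');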
diff --git a/doc/src/sgml/pgstandby.sgml b/doc/src/sgml/pgstandby.sgml index 7feba8cdd6..2cc58fe356 100644 --- a/doc/src/sgml/pgstandby.sgml +++ b/doc/src/sgml/pgstandby.sgml @@ -41,7 +41,7 @@ restore_command, which is needed to turn a standard archive recovery into a warm standby operation. Other configuration is required as well, all of which is described in the main - server manual (see ). + server manual (see ). @@ -180,7 +180,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' Set the number of seconds (up to 60, default 5) to sleep between tests to see if the WAL file to be restored is available in the archive yet. The default setting is not necessarily - recommended; consult for discussion. + recommended; consult for discussion. @@ -216,7 +216,7 @@ restore_command = 'pg_standby archiveDir %f %p %r' after which a fast failover will be performed. A setting of zero (the default) means wait forever. The default setting is not necessarily recommended; - consult for discussion. + consult for discussion. @@ -388,7 +388,7 @@ recovery_end_command = 'del C:\pgsql.trigger.5442' See Also - + diff --git a/doc/src/sgml/pgstatstatements.sgml b/doc/src/sgml/pgstatstatements.sgml index 4b15a268cd..c0217ed485 100644 --- a/doc/src/sgml/pgstatstatements.sgml +++ b/doc/src/sgml/pgstatstatements.sgml @@ -14,7 +14,7 @@ The module must be loaded by adding pg_stat_statements to - in + in postgresql.conf, because it requires additional shared memory. This means that a server restart is needed to add or remove the module. @@ -38,7 +38,7 @@ contains one row for each distinct database ID, user ID and query ID (up to the maximum number of distinct statements that the module can track). The columns of the view are shown in - . + .
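Once the module is active, a common use is to list the statements consuming the most total execution time:

    SELECT query, calls, total_time, rows
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 5;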
@@ -207,7 +207,7 @@ Total time the statement spent reading blocks, in milliseconds - (if is enabled, otherwise zero) + (if is enabled, otherwise zero) @@ -217,7 +217,7 @@ Total time the statement spent writing blocks, in milliseconds - (if is enabled, otherwise zero) + (if is enabled, otherwise zero) diff --git a/doc/src/sgml/pgstattuple.sgml b/doc/src/sgml/pgstattuple.sgml index 04a4423dc5..b17b3c59e0 100644 --- a/doc/src/sgml/pgstattuple.sgml +++ b/doc/src/sgml/pgstattuple.sgml @@ -55,7 +55,7 @@ dead_tuple_percent | 0.69 free_space | 8932 free_percent | 1.95 - The output columns are described in . + The output columns are described in .
@@ -509,7 +509,7 @@ dead_tuple_percent | 0 approx_free_space | 11996 approx_free_percent | 2.09 - The output columns are described in . + The output columns are described in . diff --git a/doc/src/sgml/pgtrgm.sgml b/doc/src/sgml/pgtrgm.sgml index 7903dc3d82..338ef30fbc 100644 --- a/doc/src/sgml/pgtrgm.sgml +++ b/doc/src/sgml/pgtrgm.sgml @@ -58,8 +58,8 @@ The functions provided by the pg_trgm module - are shown in , the operators - in . + are shown in , the operators + in .
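For instance (the result is approximate and the table name illustrative):

    SELECT similarity('word', 'two words');    -- about 0.36
    SELECT t FROM test_trgm WHERE t % 'word';  -- rows whose similarity to 'word' exceeds the threshold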
diff --git a/doc/src/sgml/planstats.sgml b/doc/src/sgml/planstats.sgml index ee081308a9..ef643ad064 100644 --- a/doc/src/sgml/planstats.sgml +++ b/doc/src/sgml/planstats.sgml @@ -5,7 +5,7 @@ This chapter builds on the material covered in and to show some + linkend="using-explain"/> and to show some additional details about how the planner uses the system statistics to estimate the number of rows each part of a query might return. This is a significant part of the planning process, @@ -49,7 +49,7 @@ EXPLAIN SELECT * FROM tenk1; How the planner determines the cardinality of tenk1 - is covered in , but is repeated here for + is covered in , but is repeated here for completeness. The number of pages and rows is looked up in pg_class: @@ -468,7 +468,7 @@ INSERT INTO t SELECT i % 100, i % 100 FROM generate_series(1, 10000) s(i); ANALYZE t; - As explained in , the planner can determine + As explained in , the planner can determine cardinality of t using the number of pages and rows obtained from pg_class: diff --git a/doc/src/sgml/plhandler.sgml b/doc/src/sgml/plhandler.sgml index 95e7dc9fc0..363f84b9f3 100644 --- a/doc/src/sgml/plhandler.sgml +++ b/doc/src/sgml/plhandler.sgml @@ -29,7 +29,7 @@ special pseudo-type identifies the function as a call handler and prevents it from being called directly in SQL commands. For more details on C language calling conventions and dynamic loading, - see . + see . @@ -144,7 +144,7 @@ plsample_call_handler(PG_FUNCTION_ARGS) After having compiled the handler function into a loadable module - (see ), the following commands then + (see ), the following commands then register the sample procedural language: CREATE FUNCTION plsample_call_handler() RETURNS language_handler @@ -162,9 +162,9 @@ CREATE LANGUAGE plsample are a validator and an inline handler. A validator can be provided to allow language-specific checking to be done during - . + . An inline handler can be provided to allow the language to support - anonymous code blocks executed via the command. + anonymous code blocks executed via the command. @@ -191,7 +191,7 @@ CREATE LANGUAGE plsample Validator functions should typically honor the parameter: if it is turned off then + linkend="guc-check-function-bodies"/> parameter: if it is turned off then any expensive or context-sensitive checking should be skipped. If the language provides for code execution at compilation time, the validator must suppress checks that would induce such execution. In particular, @@ -230,7 +230,7 @@ CREATE LANGUAGE plsample as well as the CREATE LANGUAGE command itself, into an extension so that a simple CREATE EXTENSION command is sufficient to install the language. See - for information about writing + for information about writing extensions. @@ -238,7 +238,7 @@ CREATE LANGUAGE plsample The procedural languages included in the standard distribution are good references when trying to write your own language handler. Look into the src/pl subdirectory of the source tree. - The + The reference page also has some useful details. diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml index dfffa4077f..33e39d85e4 100644 --- a/doc/src/sgml/plperl.sgml +++ b/doc/src/sgml/plperl.sgml @@ -41,7 +41,7 @@ Users of source packages must specially enable the build of PL/Perl during the installation process. (Refer to for more information.) Users of + linkend="installation"/> for more information.) Users of binary packages might find PL/Perl in a separate subpackage. 
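A hedged sketch of the pg_class lookup described in the planner-statistics chapter above; tenk1 is the regression-test table those examples use, so the queries assume a regression database:

    SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1';
    EXPLAIN SELECT * FROM tenk1;   -- the row estimate derives from the values above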
@@ -51,7 +51,7 @@ To create a function in the PL/Perl language, use the standard - + syntax: @@ -69,7 +69,7 @@ $$ LANGUAGE plperl; PL/Perl also supports anonymous code blocks called with the - statement: + statement: DO $$ @@ -99,11 +99,11 @@ $$ LANGUAGE plperl; The syntax of the CREATE FUNCTION command requires the function body to be written as a string constant. It is usually most convenient to use dollar quoting (see ) for the string constant. + linkend="sql-syntax-dollar-quoting"/>) for the string constant. If you choose to use escape string syntax E'', you must double any single quote marks (') and backslashes (\) used in the body of the function - (see ). + (see ). @@ -686,9 +686,9 @@ SELECT release_hosts_query(); priority levels. Whether messages of a particular priority are reported to the client, written to the server log, or both is controlled by the - and - configuration - variables. See for more + and + configuration + variables. See for more information. @@ -792,7 +792,7 @@ SELECT release_hosts_query(); Returns the contents of the referenced array as a string in array literal format - (see ). + (see ). Returns the argument value unaltered if it's not a reference to an array. The delimiter used between elements of the array literal defaults to ", " if a delimiter is not specified or is undef. @@ -828,7 +828,7 @@ SELECT release_hosts_query(); Returns the contents of the referenced array as a string in array constructor format - (see ). + (see ). Individual values are quoted using quote_nullable. Returns the argument value, quoted using quote_nullable, if it's not a reference to an array. @@ -1336,7 +1336,7 @@ DO 'elog(WARNING, join ", ", sort keys %INC)' LANGUAGE plperl; Initialization will happen in the postmaster if the plperl library is - included in , in which + included in , in which case extra consideration should be given to the risk of destabilizing the postmaster. The principal reason for making use of this feature is that Perl modules loaded by plperl.on_init need be diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index 7323c2f67d..6d14b34448 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -125,14 +125,14 @@ It is also possible to declare a PL/pgSQL function as returning record, which means that the result is a row type whose columns are determined by specification in the - calling query, as discussed in . + calling query, as discussed in . PL/pgSQL functions can be declared to accept a variable number of arguments by using the VARIADIC marker. This works exactly the same way as for SQL functions, as discussed in - . + . @@ -141,8 +141,8 @@ anyelement, anyarray, anynonarray, anyenum, and anyrange. The actual data types handled by a polymorphic function can vary from call to - call, as discussed in . - An example is shown in . + call, as discussed in . + An example is shown in . @@ -170,8 +170,8 @@ Specific examples appear in - and - . + and + . @@ -181,7 +181,7 @@ Functions written in PL/pgSQL are defined - to the server by executing commands. + to the server by executing commands. Such a command would normally look like, say, CREATE FUNCTION somefunc(integer, text) RETURNS integer @@ -190,7 +190,7 @@ LANGUAGE plpgsql; The function body is simply a string literal so far as CREATE FUNCTION is concerned. It is often helpful to use dollar quoting - (see ) to write the function + (see ) to write the function body, rather than the normal single quote syntax. 
Without dollar quoting, any single quotes or backslashes in the function body must be escaped by doubling them. Almost all the examples in this chapter use dollar-quoted @@ -289,7 +289,7 @@ $$ LANGUAGE plpgsql; of any PL/pgSQL function. This block provides the declarations of the function's parameters (if any), as well as some special variables such as FOUND (see - ). The outer block is + ). The outer block is labeled with the function's name, meaning that parameters and special variables can be qualified with the function's name. @@ -308,7 +308,7 @@ $$ LANGUAGE plpgsql; However, a block containing an EXCEPTION clause effectively forms a subtransaction that can be rolled back without affecting the outer transaction. For more about that see . + linkend="plpgsql-error-trapping"/>. @@ -356,7 +356,7 @@ arow RECORD; assigned to after initialization, so that its value will remain constant for the duration of the block. The COLLATE option specifies a collation to use for the - variable (see ). + variable (see ). If NOT NULL is specified, an assignment of a null value results in a run-time error. All variables declared as NOT NULL @@ -491,7 +491,7 @@ END; $$ LANGUAGE plpgsql; - As discussed in , this + As discussed in , this effectively creates an anonymous record type for the function's results. If a RETURNS clause is given, it must say RETURNS record. @@ -523,9 +523,9 @@ $$ LANGUAGE plpgsql; or anyrange), a special parameter $0 is created. Its data type is the actual return type of the function, as deduced from the actual input types (see ). + linkend="extend-types-polymorphic"/>). This allows the function to access its actual return type - as shown in . + as shown in . $0 is initialized to null and can be modified by the function, so it can be used to hold the return value if desired, though that is not required. $0 can also be @@ -740,7 +740,7 @@ SELECT merge_fields(t.*) FROM table1 t WHERE ... ; When a PL/pgSQL function has one or more parameters of collatable data types, a collation is identified for each function call depending on the collations assigned to the actual - arguments, as described in . If a collation is + arguments, as described in . If a collation is successfully identified (i.e., there are no conflicts of implicit collations among the arguments) then all the collatable parameters are treated as having that collation implicitly. This will affect the @@ -841,7 +841,7 @@ SELECT expression to the main SQL engine. While forming the SELECT command, any occurrences of PL/pgSQL variable names are replaced by parameters, as discussed in detail in - . + . This allows the query plan for the SELECT to be prepared just once and then reused for subsequent evaluations with different values of the variables. Thus, what @@ -861,7 +861,7 @@ PREPARE statement_name(integer, integer) AS SELECT $1 parameter values. Normally these details are not important to a PL/pgSQL user, but they are useful to know when trying to diagnose a problem. - More information appears in . + More information appears in . @@ -874,8 +874,8 @@ PREPARE statement_name(integer, integer) AS SELECT $1 PL/pgSQL. Anything not recognized as one of these statement types is presumed to be an SQL command and is sent to the main database engine to execute, - as described in - and . + as described in + and . @@ -900,7 +900,7 @@ PREPARE statement_name(integer, integer) AS SELECT $1 If the expression's result data type doesn't match the variable's data type, the value will be coerced as though by an assignment cast - (see ). 
If no assignment cast is known + (see ). If no assignment cast is known for the pair of data types involved, the PL/pgSQL interpreter will attempt to convert the result value textually, that is by applying the result type's output function followed by the variable @@ -933,14 +933,14 @@ my_record.user_id := 20; in the command text is treated as a parameter, and then the current value of the variable is provided as the parameter value at run time. This is exactly like the processing described earlier - for expressions; for details see . + for expressions; for details see . When executing a SQL command in this way, PL/pgSQL may cache and re-use the execution plan for the command, as discussed in - . + . @@ -966,7 +966,7 @@ PERFORM query; and the plan is cached in the same way. Also, the special variable FOUND is set to true if the query produced at least one row, or false if it produced no rows (see - ). + ). @@ -1067,7 +1067,7 @@ DELETE ... RETURNING expressions INTO STRIC well-defined unless you've used ORDER BY.) Any result rows after the first row are discarded. You can check the special FOUND variable (see - ) to + ) to determine whether a row was returned: @@ -1147,7 +1147,7 @@ CONTEXT: PL/pgSQL function get_userid(text) line 6 at SQL statement To handle cases where you need to process multiple result rows - from a SQL query, see . + from a SQL query, see . @@ -1161,7 +1161,7 @@ CONTEXT: PL/pgSQL function get_userid(text) line 6 at SQL statement that will involve different tables or different data types each time they are executed. PL/pgSQL's normal attempts to cache plans for commands (as discussed in - ) will not work in such + ) will not work in such scenarios. To handle this sort of problem, the EXECUTE statement is provided: @@ -1283,7 +1283,7 @@ EXECUTE format('SELECT count(*) FROM %I ' The PL/pgSQL EXECUTE statement is not related to the - SQL + SQL statement supported by the PostgreSQL server. The server's EXECUTE statement cannot be used directly within @@ -1319,7 +1319,7 @@ EXECUTE format('SELECT count(*) FROM %I ' of single quotes. The recommended method for quoting fixed text in your function body is dollar quoting. (If you have legacy code that does not use dollar quoting, please refer to the - overview in , which can save you + overview in , which can save you some effort when translating said code to a more reasonable scheme.) @@ -1347,7 +1347,7 @@ EXECUTE 'UPDATE tbl SET ' This example demonstrates the use of the quote_ident and quote_literal functions (see ). For safety, expressions containing column + linkend="functions-string"/>). For safety, expressions containing column or table identifiers should be passed through quote_ident before insertion in a dynamic query. Expressions containing values that should be literal strings in the @@ -1394,7 +1394,7 @@ EXECUTE 'UPDATE tbl SET ' (At present, IS NOT DISTINCT FROM is handled much less efficiently than =, so don't do this unless you must. - See for + See for more information on nulls and IS DISTINCT.) @@ -1420,7 +1420,7 @@ EXECUTE 'UPDATE tbl SET ' Dynamic SQL statements can also be safely constructed using the format function (see ). For example: + linkend="functions-string"/>). 
For example: EXECUTE format('UPDATE tbl SET %I = %L ' 'WHERE key = %L', colname, newvalue, keyvalue); @@ -1442,7 +1442,7 @@ EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname) A much larger example of a dynamic command and EXECUTE can be seen in , which builds and executes a + linkend="plpgsql-porting-ex2"/>, which builds and executes a CREATE FUNCTION command to define a new function. @@ -1461,12 +1461,12 @@ GET CURRENT DIAGNOSTICS variableCURRENT is a noise word (but see also GET STACKED - DIAGNOSTICS in ). + DIAGNOSTICS in ). Each item is a key word identifying a status value to be assigned to the specified variable (which should be of the right data type to receive it). The currently available status items are shown - in . Colon-equal + in . Colon-equal (:=) can be used instead of the SQL-standard = token. An example: @@ -1503,7 +1503,7 @@ GET DIAGNOSTICS integer_var = ROW_COUNT; PG_CONTEXT text line(s) of text describing the current call stack - (see ) + (see ) @@ -1856,7 +1856,7 @@ SELECT * FROM get_available_flightid(CURRENT_DATE); allow users to define set-returning functions that do not have this limitation. Currently, the point at which data begins being written to disk is controlled by the - + configuration variable. Administrators who have sufficient memory to store larger result sets in memory should consider increasing this parameter. @@ -2440,8 +2440,8 @@ $$ LANGUAGE plpgsql; PL/pgSQL variables are substituted into the query text, and the query plan is cached for possible re-use, as discussed in - detail in and - . + detail in and + . @@ -2465,7 +2465,7 @@ END LOOP label ; Another way to specify the query whose results should be iterated through is to declare it as a cursor. This is described in - . + . @@ -2605,7 +2605,7 @@ END; The condition names can be any of - those shown in . A category + those shown in . A category name matches any error within its category. The special condition name OTHERS matches every error type except QUERY_CANCELED and ASSERT_FAILURE. @@ -2729,7 +2729,7 @@ SELECT merge_db(1, 'dennis'); Within an exception handler, the special variable SQLSTATE contains the error code that corresponds to - the exception that was raised (refer to + the exception that was raised (refer to for a list of possible error codes). The special variable SQLERRM contains the error message associated with the exception. These variables are undefined outside exception handlers. @@ -2748,7 +2748,7 @@ GET STACKED DIAGNOSTICS variable { = | := } variable (which should be of the right data type to receive it). The currently available status items are shown - in . + in .
@@ -2811,7 +2811,7 @@ GET STACKED DIAGNOSTICS variable { = | := } PG_EXCEPTION_CONTEXT text line(s) of text describing the call stack at the time of the - exception (see ) + exception (see ) @@ -2847,7 +2847,7 @@ END; The GET DIAGNOSTICS command, previously described - in , retrieves information + in , retrieves information about current execution state (whereas the GET STACKED DIAGNOSTICS command discussed above reports information about the execution state as of a previous error). Its PG_CONTEXT @@ -2978,7 +2978,7 @@ DECLARE Bound cursor variables can also be used without explicitly opening the cursor, via the FOR statement described in - . + . @@ -3031,7 +3031,7 @@ OPEN unbound_cursorvar NO refcursor variable). The query is specified as a string expression, in the same way as in the EXECUTE command. As usual, this gives flexibility so the query plan can vary - from one run to the next (see ), + from one run to the next (see ), and it also means that variable substitution is not done on the command string. As with EXECUTE, parameter values can be inserted into the dynamic command via @@ -3082,7 +3082,7 @@ OPEN bound_cursorvar ( := to separate it from the argument expression. Similar to calling - functions, described in , it + functions, described in , it is also allowed to mix positional and named notation. @@ -3160,7 +3160,7 @@ FETCH direction { FROM | IN } The direction clause can be any of the - variants allowed in the SQL + variants allowed in the SQL command except the ones that can fetch more than one row; namely, it can be NEXT, @@ -3212,7 +3212,7 @@ MOVE direction { FROM | IN } < The direction clause can be any of the - variants allowed in the SQL + variants allowed in the SQL command, namely NEXT, PRIOR, @@ -3255,7 +3255,7 @@ DELETE FROM table WHERE CURRENT OF curso restrictions on what the cursor's query can be (in particular, no grouping) and it's best to use FOR UPDATE in the cursor. For more information see the - + reference page. @@ -3422,7 +3422,7 @@ END LOOP label ; expressions must appear if and only if the cursor was declared to take arguments. These values will be substituted in the query, in just the same way as during an OPEN (see ). + linkend="plpgsql-open-bound-cursor"/>). @@ -3475,9 +3475,9 @@ RAISE ; priority levels. Whether messages of a particular priority are reported to the client, written to the server log, or both is controlled by the - and - configuration - variables. See for more + and + configuration + variables. See for more information. @@ -3541,7 +3541,7 @@ RAISE NOTICE 'Calling cs_create_job(%)', v_job_id; ERRCODE Specifies the error code (SQLSTATE) to report, either by condition - name, as shown in , or directly as a + name, as shown in , or directly as a five-character SQLSTATE code. @@ -3928,7 +3928,7 @@ ASSERT condition , - shows an example of a + shows an example of a trigger procedure in PL/pgSQL. @@ -3981,7 +3981,7 @@ CREATE TRIGGER emp_stamp BEFORE INSERT OR UPDATE ON emp Another way to log changes to a table involves creating a new table that holds a row for each insert, update, or delete that occurs. This approach can be thought of as auditing changes to a table. - shows an example of an + shows an example of an audit trigger procedure in PL/pgSQL. @@ -4038,7 +4038,7 @@ AFTER INSERT OR UPDATE OR DELETE ON emp approach still records the full audit trail of changes to the table, but also presents a simplified view of the audit trail, showing just the last modified timestamp derived from the audit trail for each entry. 
- shows an example + shows an example of an audit trigger on a view in PL/pgSQL. @@ -4118,7 +4118,7 @@ INSTEAD OF INSERT OR UPDATE OR DELETE ON emp_view times. This technique is commonly used in Data Warehousing, where the tables of measured or observed data (called fact tables) might be extremely large. - shows an example of a + shows an example of a trigger procedure in PL/pgSQL that maintains a summary table for a fact table in a data warehouse. @@ -4272,7 +4272,7 @@ SELECT * FROM sales_summary_bytime; statement. The CREATE TRIGGER command assigns names to one or both transition tables, and then the function can refer to those names as though they were read-only temporary tables. - shows an example. + shows an example. @@ -4280,7 +4280,7 @@ SELECT * FROM sales_summary_bytime; This example produces the same results as - , but instead of using a + , but instead of using a trigger that fires for every row, it uses a trigger that fires once per statement, after collecting the relevant information in a transition table. This can be significantly faster than the row-trigger approach @@ -4383,7 +4383,7 @@ CREATE TRIGGER emp_audit_del - shows an example of an + shows an example of an event trigger procedure in PL/pgSQL. @@ -4482,7 +4482,7 @@ INSERT INTO dest (col) SELECT foo + bar FROM src; In the above example, src.foo would be an unambiguous reference to the table column. To create an unambiguous reference to a variable, declare it in a labeled block and use the block's label - (see ). For example, + (see ). For example, <<block>> DECLARE @@ -4575,7 +4575,7 @@ $$ LANGUAGE plpgsql; to EXECUTE or one of its variants. If you need to insert a varying value into such a command, do so as part of constructing the string value, or use USING, as illustrated in - . + . @@ -4636,7 +4636,7 @@ $$ LANGUAGE plpgsql; this will happen only if the execution plan is not very sensitive to the values of the PL/pgSQL variables referenced in it. If it is, generating a plan each time is a net win. See for more information about the behavior of + linkend="sql-prepare"/> for more information about the behavior of prepared statements. @@ -4796,7 +4796,7 @@ $$ LANGUAGE plpgsql; easily find yourself needing half a dozen or more adjacent quote marks. It's recommended that you instead write the function body as a dollar-quoted string literal (see ). In the dollar-quoting + linkend="sql-syntax-dollar-quoting"/>). In the dollar-quoting approach, you never double any quote marks, but instead take care to choose a different dollar-quoting delimiter for each level of nesting you need. For example, you might write the CREATE @@ -4907,7 +4907,7 @@ a_output := a_output || $$ AND name LIKE 'foobar'$$ accounts for 8 quotation marks) and this is adjacent to the end of that string constant (2 more). You will probably only need that if you are writing a function that generates other functions, as in - . + . For example: a_output := a_output || '' if v_'' || @@ -5029,7 +5029,7 @@ CREATE FUNCTION to PL/pgSQL's plpgsql.variable_conflict = use_column behavior, which is not the default, - as explained in . + as explained in . It's often best to avoid such ambiguities in the first place, but if you have to port a large amount of code that depends on this behavior, setting variable_conflict may be the @@ -5042,7 +5042,7 @@ CREATE FUNCTION In PostgreSQL the function body must be written as a string literal. Therefore you need to use dollar quoting or escape single quotes in the function body. (See .) + linkend="plpgsql-quote-tips"/>.) 
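A minimal sketch of the safe dynamic-SQL pattern discussed above, using format() with %I for an interpolated identifier; the table name is hypothetical, and a data value would be passed separately with USING rather than spliced into the string:

    DO $$
    DECLARE
        tabname text := 'pg_class';   -- hypothetical target table
        result  bigint;
    BEGIN
        EXECUTE format('SELECT count(*) FROM %I', tabname) INTO result;
        RAISE NOTICE 'rows: %', result;
    END
    $$;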
@@ -5080,7 +5080,7 @@ CREATE FUNCTION from the first number to the second, requiring the loop bounds to be swapped when porting. This incompatibility is unfortunate but is unlikely to be changed. (See .) + linkend="plpgsql-integer-for"/>.) @@ -5108,7 +5108,7 @@ CREATE FUNCTION Porting Examples - shows how to port a simple + shows how to port a simple function from PL/SQL to PL/pgSQL. @@ -5197,7 +5197,7 @@ $$ LANGUAGE plpgsql; - shows how to port a + shows how to port a function that creates another function and how to handle the ensuing quoting problems. @@ -5292,12 +5292,12 @@ $func$ LANGUAGE plpgsql; - shows how to port a function + shows how to port a function with OUT parameters and string manipulation. PostgreSQL does not have a built-in instr function, but you can create one using a combination of other - functions. In there is a + functions. In there is a PL/pgSQL implementation of instr that you can use to make your porting easier. @@ -5406,7 +5406,7 @@ SELECT * FROM cs_parse_url('http://foobar.com/query.cgi?baz'); - shows how to port a procedure + shows how to port a procedure that uses numerous features that are specific to Oracle. @@ -5419,14 +5419,14 @@ SELECT * FROM cs_parse_url('http://foobar.com/query.cgi?baz'); CREATE OR REPLACE PROCEDURE cs_create_job(v_job_id IN INTEGER) IS a_running_job_count INTEGER; - PRAGMA AUTONOMOUS_TRANSACTION; -- + PRAGMA AUTONOMOUS_TRANSACTION; -- BEGIN - LOCK TABLE cs_jobs IN EXCLUSIVE MODE; -- + LOCK TABLE cs_jobs IN EXCLUSIVE MODE; -- SELECT count(*) INTO a_running_job_count FROM cs_jobs WHERE end_stamp IS NULL; IF a_running_job_count > 0 THEN - COMMIT; -- free lock + COMMIT; -- free lock raise_application_error(-20000, 'Unable to create a new job: a job is currently running.'); END IF; @@ -5493,7 +5493,7 @@ BEGIN SELECT count(*) INTO a_running_job_count FROM cs_jobs WHERE end_stamp IS NULL; IF a_running_job_count > 0 THEN - RAISE EXCEPTION 'Unable to create a new job: a job is currently running'; -- + RAISE EXCEPTION 'Unable to create a new job: a job is currently running'; -- END IF; DELETE FROM cs_active_job; @@ -5502,7 +5502,7 @@ BEGIN BEGIN INSERT INTO cs_jobs (job_id, start_stamp) VALUES (v_job_id, now()); EXCEPTION - WHEN unique_violation THEN -- + WHEN unique_violation THEN -- -- don't worry if it already exists END; END; @@ -5522,7 +5522,7 @@ $$ LANGUAGE plpgsql; The exception names supported by PL/pgSQL are different from Oracle's. The set of built-in exception names - is much larger (see ). There + is much larger (see ). There is not currently a way to declare user-defined exception names, although you can throw user-chosen SQLSTATE values instead. @@ -5588,7 +5588,7 @@ END; PL/SQL version, but you have to remember to use quote_literal and quote_ident as described in . Constructs of the + linkend="plpgsql-statements-executing-dyn"/>. Constructs of the type EXECUTE 'SELECT * FROM $1'; will not work reliably unless you use these functions. @@ -5603,7 +5603,7 @@ END; the function always returns the same result when given the same arguments) and strictness (whether the function returns null if any argument is null). Consult the + linkend="sql-createfunction"/> reference page for details. diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml index 043225fc47..ec5f671632 100644 --- a/doc/src/sgml/plpython.sgml +++ b/doc/src/sgml/plpython.sgml @@ -15,7 +15,7 @@ To install PL/Python in a particular database, use CREATE EXTENSION plpythonu (but - see also ). + see also ). 
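A minimal sketch of defining and calling a PL/Python function, modeled on the pymax example referenced later in this chapter (plpythonu is the Python 2 variant discussed above):

    CREATE EXTENSION plpythonu;

    CREATE FUNCTION pymax (a integer, b integer)
      RETURNS integer
    AS $$
      if a > b:
        return a
      return b
    $$ LANGUAGE plpythonu;

    SELECT pymax(1, 2);   -- returns 2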
@@ -103,7 +103,7 @@ The built variant depends on which Python version was found during the installation or which version was explicitly set using the PYTHON environment variable; - see . To make both variants of + see . To make both variants of PL/Python available in one installation, the source tree has to be configured and built twice. @@ -186,7 +186,7 @@ Functions in PL/Python are declared via the - standard syntax: + standard syntax: CREATE FUNCTION funcname (argument-list) @@ -420,7 +420,7 @@ $$ LANGUAGE plpythonu; sortas="PL/Python">in PL/Python is passed to a function, the argument value will appear as None in Python. For example, the function definition of pymax - shown in will return the wrong answer for null + shown in will return the wrong answer for null inputs. We could add STRICT to the function definition to make PostgreSQL do something more reasonable: if a null value is passed, the function will not be called at all, @@ -774,7 +774,7 @@ SELECT * FROM multiout_simple_setof(3); PL/Python also supports anonymous code blocks called with the - statement: + statement: DO $$ @@ -1056,16 +1056,16 @@ rv = plan.execute(["name"], 5) Query parameters and result row fields are converted between PostgreSQL - and Python data types as described in . + and Python data types as described in . When you prepare a plan using the PL/Python module it is automatically - saved. Read the SPI documentation () for a + saved. Read the SPI documentation () for a description of what this means. In order to make effective use of this across function calls one needs to use one of the persistent storage dictionaries SD or GD (see - ). For example: + ). For example: CREATE FUNCTION usesavedplan() RETURNS trigger AS $$ if "plan" in SD: @@ -1190,7 +1190,7 @@ $$ LANGUAGE plpythonu; The actual class of the exception being raised corresponds to the specific condition that caused the error. Refer - to for a list of possible + to for a list of possible conditions. The module plpy.spiexceptions defines an exception class for each PostgreSQL condition, deriving @@ -1241,7 +1241,7 @@ $$ LANGUAGE plpythonu; Recovering from errors caused by database access as described in - can lead to an undesirable + can lead to an undesirable situation where some operations succeed before one of them fails, and after recovering from that error the data is left in an inconsistent state. PL/Python offers a solution to this problem in @@ -1391,9 +1391,9 @@ $$ LANGUAGE plpythonu; The other functions only generate messages of different priority levels. Whether messages of a particular priority are reported to the client, written to the server log, or both is controlled by the - and - configuration - variables. See for more information. + and + configuration + variables. See for more information. @@ -1442,9 +1442,9 @@ PL/Python function "raise_custom_exception" plpy.quote_nullable(string), and plpy.quote_ident(string). They are equivalent to the built-in quoting functions described in . They are useful when constructing + linkend="functions-string"/>. They are useful when constructing ad-hoc queries. 
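Before the quoting example that follows, a hedged sketch of the explicit-subtransaction pattern described earlier in this chapter (the accounts and operations tables are hypothetical; the except syntax is Python 2, matching plpythonu):

    CREATE FUNCTION transfer_funds() RETURNS void AS $$
    try:
        with plpy.subtransaction():
            plpy.execute("UPDATE accounts SET balance = balance - 100 WHERE account_name = 'joe'")
            plpy.execute("UPDATE accounts SET balance = balance + 100 WHERE account_name = 'mary'")
    except plpy.SPIError, e:
        result = "error transferring funds: %s" % e.args
    else:
        result = "funds transferred correctly"
    plpy.execute("INSERT INTO operations (result) VALUES ('%s')" % result)
    $$ LANGUAGE plpythonu;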
A PL/Python equivalent of dynamic SQL from would be: + linkend="plpgsql-quote-literal-example"/> would be: plpy.execute("UPDATE tbl SET %s = %s WHERE key = %s" % ( plpy.quote_ident(colname), diff --git a/doc/src/sgml/pltcl.sgml b/doc/src/sgml/pltcl.sgml index e74321a06e..0646a8ba0b 100644 --- a/doc/src/sgml/pltcl.sgml +++ b/doc/src/sgml/pltcl.sgml @@ -79,7 +79,7 @@ To create a function in the PL/Tcl language, use - the standard syntax: + the standard syntax: CREATE FUNCTION funcname (argument-types) RETURNS return-type AS $$ @@ -483,7 +483,7 @@ $$ LANGUAGE pltcl; executed within a SQL subtransaction. If the script returns an error, that entire subtransaction is rolled back before returning the error out to the surrounding Tcl code. - See for more details and an + See for more details and an example. @@ -559,10 +559,10 @@ SELECT 'doesn''t' AS ret priority levels. Whether messages of a particular priority are reported to the client, written to the server log, or both is controlled by the - and - configuration - variables. See - and + and + configuration + variables. See + and for more information. @@ -888,7 +888,7 @@ CREATE EVENT TRIGGER tcl_a_snitch ON ddl_command_start EXECUTE PROCEDURE tclsnit Fields SQLSTATE, condition, and message are always supplied (the first two represent the error code and condition name as shown - in ). + in ). Fields that may be present include detail, hint, context, schema, table, column, @@ -929,7 +929,7 @@ if {[catch { spi_exec $sql_command }]} { Recovering from errors caused by database access as described in - can lead to an undesirable + can lead to an undesirable situation where some operations succeed before one of them fails, and after recovering from that error the data is left in an inconsistent state. PL/Tcl offers a solution to this problem in diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml index 265effbe48..54b5e98a0e 100644 --- a/doc/src/sgml/postgres-fdw.sgml +++ b/doc/src/sgml/postgres-fdw.sgml @@ -15,7 +15,7 @@ The functionality provided by this module overlaps substantially - with the functionality of the older module. + with the functionality of the older module. But postgres_fdw provides more transparent and standards-compliant syntax for accessing remote tables, and can give better performance in many cases. @@ -27,12 +27,12 @@ Install the postgres_fdw extension using . + linkend="sql-createextension"/>. - Create a foreign server object, using , + Create a foreign server object, using , to represent each remote database you want to connect to. Specify connection information, except user and password, as options of the server object. @@ -40,7 +40,7 @@ - Create a user mapping, using , for + Create a user mapping, using , for each database user you want to allow to access each foreign server. Specify the remote user name and password to use as user and password options of the @@ -49,8 +49,8 @@ - Create a foreign table, using - or , + Create a foreign table, using + or , for each remote table you want to access. The columns of the foreign table must match the referenced remote table. You can, however, use table and/or column names different from the remote table's, if you @@ -101,7 +101,7 @@ A foreign server using the postgres_fdw foreign data wrapper can have the same options that libpq accepts in - connection strings, as described in , + connection strings, as described in , except that these options are not allowed: @@ -254,7 +254,7 @@ fdw_tuple_cost to the cost estimates. 
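A hedged sketch of adjusting the estimation options just described, for the foreign_server object defined in the example later in this section (option values are strings; the numbers are illustrative only):

    ALTER SERVER foreign_server
      OPTIONS (ADD use_remote_estimate 'true',
               ADD fdw_startup_cost '200',
               ADD fdw_tuple_cost '0.05');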
This local estimation is unlikely to be very accurate unless local copies of the remote table's statistics are available. Running - on the foreign table is the way to update + on the foreign table is the way to update the local statistics; this will perform a scan of the remote table and then calculate and store statistics just as though the table were local. Keeping local statistics can be a useful way to reduce per-query planning @@ -359,7 +359,7 @@ postgres_fdw is able to import foreign table definitions - using . This command creates + using . This command creates foreign table definitions on the local server that match tables or views present on the remote server. If the remote tables to be imported have columns of user-defined data types, the local server must have @@ -423,7 +423,7 @@ So if you wish to import CHECK constraints, you must do so manually, and you should verify the semantics of each one carefully. For more detail about the treatment of CHECK constraints on - foreign tables, see . + foreign tables, see . @@ -528,7 +528,7 @@ In the remote sessions opened by postgres_fdw, - the parameter is set to + the parameter is set to just pg_catalog, so that only built-in objects are visible without schema qualification. This is not an issue for queries generated by postgres_fdw itself, because it always @@ -538,7 +538,7 @@ any functions used in that view will be executed with the restricted search path. It is recommended to schema-qualify all names in such functions, or else attach SET search_path options - (see ) to such functions + (see ) to such functions to establish their expected search path environment. @@ -548,22 +548,22 @@ - is set to UTC + is set to UTC - is set to ISO + is set to ISO - is set to postgres + is set to postgres - is set to 3 for remote + is set to 3 for remote servers 9.0 and newer and is set to 2 for older versions @@ -612,7 +612,7 @@ CREATE EXTENSION postgres_fdw; - Then create a foreign server using . + Then create a foreign server using . In this example we wish to connect to a PostgreSQL server on host 192.83.123.89 listening on port 5432. The database to which the connection is made @@ -626,7 +626,7 @@ CREATE SERVER foreign_server - A user mapping, defined with , is + A user mapping, defined with , is needed as well to identify the role that will be used on the remote server: @@ -639,7 +639,7 @@ CREATE USER MAPPING FOR local_user Now it is possible to create a foreign table with - . In this example we + . In this example we wish to access the table named some_schema.some_table on the remote server. The local name for it will be foreign_table: @@ -658,7 +658,7 @@ CREATE FOREIGN TABLE foreign_table ( Column names must match as well, unless you attach column_name options to the individual columns to show how they are named in the remote table. - In many cases, use of is + In many cases, use of is preferable to constructing foreign table definitions manually. diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml index f8a6c48a57..041afdbd86 100644 --- a/doc/src/sgml/postgres.sgml +++ b/doc/src/sgml/postgres.sgml @@ -1,6 +1,8 @@ - %version; @@ -42,11 +44,11 @@ After you have worked through this tutorial you might want to move - on to reading to gain a more formal knowledge - of the SQL language, or for + on to reading to gain a more formal knowledge + of the SQL language, or for information about developing applications for PostgreSQL. Those who set up and - manage their own server should also read . 
+ manage their own server should also read . @@ -80,14 +82,14 @@ chapters individually as they choose. The information in this part is presented in a narrative fashion in topical units. Readers looking for a complete description of a particular command - should see . + should see . Readers of this part should know how to connect to a PostgreSQL database and issue SQL commands. Readers that are unfamiliar with - these issues are encouraged to read + these issues are encouraged to read first. SQL commands are typically entered using the PostgreSQL interactive terminal psql, but other programs that have @@ -130,7 +132,7 @@ self-contained and can be read individually as desired. The information in this part is presented in a narrative fashion in topical units. Readers looking for a complete description of a - particular command should see . + particular command should see . @@ -140,8 +142,8 @@ The rest of this part is about tuning and management; that material assumes that the reader is familiar with the general use of the PostgreSQL database system. Readers are - encouraged to look at and for additional information. + encouraged to look at and for additional information. @@ -174,10 +176,10 @@ with PostgreSQL. Each of these chapters can be read independently. Note that there are many other programming interfaces for client programs that are distributed separately and - contain their own documentation ( + contain their own documentation ( lists some of the more popular ones). Readers of this part should be familiar with using SQL commands to manipulate - and query the database (see ) and of course + and query the database (see ) and of course with the programming language that the interface uses. @@ -203,7 +205,7 @@ PostgreSQL distribution as well as general issues concerning server-side programming languages. It is essential to read at least the earlier sections of (covering functions) before diving into the + linkend="extend"/> (covering functions) before diving into the material about server-side programming languages. diff --git a/doc/src/sgml/problems.sgml b/doc/src/sgml/problems.sgml index edceec3381..7d14789e51 100644 --- a/doc/src/sgml/problems.sgml +++ b/doc/src/sgml/problems.sgml @@ -170,7 +170,7 @@ form of the message. In psql, say \set VERBOSITY verbose beforehand. If you are extracting the message from the server log, set the run-time parameter - to verbose so that all + to verbose so that all details are logged. diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 999fa06018..8174e3defa 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -207,7 +207,7 @@ This section describes the message flow and the semantics of each message type. (Details of the exact representation of each message - appear in .) There are + appear in .) There are several different sub-protocols depending on the state of the connection: start-up, query, function call, COPY, and termination. There are also special @@ -383,7 +383,7 @@ SASLInitialResponse with the name of the selected mechanism, and the first part of the SASL data stream in response to this. If further messages are needed, the server will respond with - AuthenticationSASLContinue. See + AuthenticationSASLContinue. See for details. @@ -478,9 +478,9 @@ This message informs the frontend about the current (initial) setting of backend parameters, such as or . + linkend="guc-client-encoding"/> or . 
The frontend can ignore this message, or record the settings - for its future use; see for + for its future use; see for more details. The frontend should not respond to this message, but should continue listening for a ReadyForQuery message. @@ -564,7 +564,7 @@ The backend is ready to copy data from the frontend to a - table; see . + table; see . @@ -574,7 +574,7 @@ The backend is ready to copy data from a table to the - frontend; see . + frontend; see . @@ -654,7 +654,7 @@ normally consists of RowDescription, zero or more DataRow messages, and then CommandComplete. COPY to or from the frontend invokes special protocol - as described in . + as described in . All other query types normally produce only a CommandComplete message. @@ -691,7 +691,7 @@ A frontend must be prepared to accept ErrorResponse and NoticeResponse messages whenever it is expecting any other type of - message. See also concerning messages + message. See also concerning messages that the backend might generate due to outside events. @@ -1198,7 +1198,7 @@ SELCT 1/0; It is possible for NoticeResponse and ParameterStatus messages to be interspersed between CopyData messages; frontends must handle these cases, and should be prepared for other asynchronous message types as well (see - ). Otherwise, any message type other than + ). Otherwise, any message type other than CopyData or CopyDone may be treated as terminating copy-out mode. @@ -1221,7 +1221,7 @@ SELCT 1/0; until a Sync message is received, and then issue ReadyForQuery and return to normal processing. The frontend should treat receipt of ErrorResponse as terminating the copy in both directions; no CopyDone should be sent - in this case. See for more + in this case. See for more information on the subprotocol transmitted over copy-both mode. @@ -1435,7 +1435,7 @@ SELCT 1/0; communication security in environments where attackers might be able to capture the session traffic. For more information on encrypting PostgreSQL sessions with - SSL, see . + SSL, see . @@ -1635,7 +1635,7 @@ of true tells the backend to go into walsender mode, wherein small set of replication commands can be issued instead of SQL statements. Only the simple query protocol can be used in walsender mode. Replication commands are logged in the server log when - is enabled. + is enabled. Passing database as the value instructs walsender to connect to the database specified in the dbname parameter, which will allow the connection to be used for logical replication from that database. @@ -1649,8 +1649,8 @@ the connection to be used for logical replication from that database. psql "dbname=postgres replication=database" -c "IDENTIFY_SYSTEM;" However, it is often more useful to use - (for physical replication) or - (for logical replication). + (for physical replication) or + (for logical replication). @@ -1728,7 +1728,7 @@ The commands accepted in walsender mode are: Requests the server to send the current setting of a run-time parameter. - This is similar to the SQL command . + This is similar to the SQL command . @@ -1737,7 +1737,7 @@ The commands accepted in walsender mode are: The name of a run-time parameter. Available parameters are documented - in . + in . @@ -1792,7 +1792,7 @@ The commands accepted in walsender mode are: Create a physical or logical replication - slot. See for more about + slot. See for more about replication slots. @@ -1801,7 +1801,7 @@ The commands accepted in walsender mode are: The name of the slot to create. Must be a valid replication slot - name (see ). + name (see ). 
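A minimal sketch of creating a logical slot over a replication connection, as described above; myslot is a hypothetical slot name, and test_decoding is the example output plugin shipped in contrib:

    psql "dbname=postgres replication=database" \
         -c "CREATE_REPLICATION_SLOT myslot LOGICAL test_decoding;"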
@@ -1811,7 +1811,7 @@ The commands accepted in walsender mode are: The name of the output plugin used for logical decoding - (see ). + (see ). @@ -2378,7 +2378,7 @@ The commands accepted in walsender mode are: Sets the label of the backup. If none is specified, a backup label of base backup will be used. The quoting rules for the label are the same as a standard SQL string with - turned on. + turned on. @@ -2642,7 +2642,7 @@ The commands accepted in walsender mode are: The individual protocol messages are discussed in the following subsections. Individual messages are described in - . + . @@ -4006,7 +4006,7 @@ CopyInResponse (B) characters, etc). 1 indicates the overall copy format is binary (similar to DataRow format). - See + See for more information. @@ -4080,7 +4080,7 @@ CopyOutResponse (B) is textual (rows separated by newlines, columns separated by separator characters, etc). 1 indicates the overall copy format is binary (similar to DataRow - format). See for more information. + format). See for more information. @@ -4153,7 +4153,7 @@ CopyBothResponse (B) is textual (rows separated by newlines, columns separated by separator characters, etc). 1 indicates the overall copy format is binary (similar to DataRow - format). See for more information. + format). See for more information. @@ -4394,7 +4394,7 @@ ErrorResponse (B) A code identifying the field type; if zero, this is the message terminator and no string follows. The presently defined field types are listed in - . + . Since more field types might be added in future, frontends should silently ignore fields of unrecognized type. @@ -4886,7 +4886,7 @@ NoticeResponse (B) A code identifying the field type; if zero, this is the message terminator and no string follows. The presently defined field types are listed in - . + . Since more field types might be added in future, frontends should silently ignore fields of unrecognized type. @@ -5757,7 +5757,7 @@ StartupMessage (F) true, false, or database, and the default is false. See - for details. + for details. @@ -5919,7 +5919,7 @@ message. Code: the SQLSTATE code for the error (see ). Not localizable. Always present. + linkend="errcodes-appendix"/>). Not localizable. Always present. @@ -6124,7 +6124,7 @@ message. The fields for schema name, table name, column name, data type name, and constraint name are supplied only for a limited number of error types; - see . Frontends should not assume that + see . Frontends should not assume that the presence of any of these fields guarantees the presence of another field. Core error sources observe the interrelationships noted above, but user-defined functions may use these fields in other ways. In the same @@ -6149,7 +6149,7 @@ not line breaks. This section describes the detailed format of each logical replication message. These messages are returned either by the replication slot SQL interface or are sent by a walsender. In case of a walsender they are encapsulated inside the replication -protocol WAL messages as described in +protocol WAL messages as described in and generally obey same message flow as physical replication. diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml index 52cc37a1d6..19a7d2e18b 100644 --- a/doc/src/sgml/queries.sgml +++ b/doc/src/sgml/queries.sgml @@ -24,7 +24,7 @@ The process of retrieving or the command to retrieve data from a database is called a query. In SQL the - command is + command is used to specify queries. 
The general syntax of the SELECT command is @@ -59,7 +59,7 @@ SELECT a, b + c FROM table1; (assuming that b and c are of a numerical data type). - See for more details. + See for more details. @@ -110,7 +110,7 @@ SELECT random(); The <literal>FROM</literal> Clause - The derives a + The derives a table from one or more other tables given in a comma-separated table reference list. @@ -589,7 +589,7 @@ SELECT * FROM my_table AS m WHERE my_table.a > 5; -- wrong SELECT * FROM people AS mother JOIN people AS child ON mother.id = child.mother_id; Additionally, an alias is required if the table reference is a - subquery (see ). + subquery (see ). @@ -640,7 +640,7 @@ SELECT a.* FROM (my_table AS a JOIN your_table AS b ON ...) AS c Subqueries specifying a derived table must be enclosed in parentheses and must be assigned a table - alias name (as in ). For + alias name (as in ). For example: FROM (SELECT * FROM table1) AS alias_name @@ -662,7 +662,7 @@ FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow')) Again, a table alias is required. Assigning alias names to the columns of the VALUES list is optional, but is good practice. - For more information see . + For more information see . @@ -713,7 +713,7 @@ ROWS FROM( function_call , ... The special table function UNNEST may be called with any number of array parameters, and it returns a corresponding number of columns, as if UNNEST - () had been called on each parameter + () had been called on each parameter separately and combined using the ROWS FROM construct. @@ -795,8 +795,8 @@ SELECT * AS t1(proname name, prosrc text) WHERE proname LIKE 'bytea%'; - The function - (part of the module) executes + The function + (part of the module) executes a remote query. It is declared to return record since it might be used for any kind of query. The actual column set must be specified in the calling query so @@ -908,12 +908,12 @@ WHERE pname IS NULL; The syntax of the is + endterm="sql-where-title"/> is WHERE search_condition where search_condition is any value - expression (see ) that + expression (see ) that returns a value of type boolean. @@ -1014,7 +1014,7 @@ SELECT select_list - The is + The is used to group together those rows in a table that have the same values in all the columns listed. The order in which the columns are listed does not matter. The effect is to combine each set @@ -1066,7 +1066,7 @@ SELECT select_list Here sum is an aggregate function that computes a single value over the entire group. More information about the available aggregate functions can be found in . + linkend="functions-aggregate"/>. @@ -1074,7 +1074,7 @@ SELECT select_list Grouping without aggregate expressions effectively calculates the set of distinct values in a column. This can also be achieved using the DISTINCT clause (see ). + linkend="queries-distinct"/>). @@ -1236,7 +1236,7 @@ SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit References to the grouping columns or expressions are replaced by null values in result rows for grouping sets in which those columns do not appear. To distinguish which grouping a particular output - row resulted from, see . + row resulted from, see . @@ -1366,9 +1366,9 @@ GROUP BY GROUPING SETS ( If the query contains any window functions (see - , - and - ), these functions are evaluated + , + and + ), these functions are evaluated after any grouping, aggregation, and HAVING filtering is performed. 
That is, if the query uses any aggregates, GROUP BY, or HAVING, then the rows seen by the window functions @@ -1430,7 +1430,7 @@ GROUP BY GROUPING SETS ( The simplest kind of select list is * which emits all columns that the table expression produces. Otherwise, a select list is a comma-separated list of value expressions (as - defined in ). For instance, it + defined in ). For instance, it could be a list of column names: SELECT a, b, c FROM ... @@ -1438,7 +1438,7 @@ SELECT a, b, c FROM ... The columns names a, b, and c are either the actual names of the columns of tables referenced in the FROM clause, or the aliases given to them as - explained in . The name + explained in . The name space available in the select list is the same as in the WHERE clause, unless grouping is used, in which case it is the same as in the HAVING clause. @@ -1455,7 +1455,7 @@ SELECT tbl1.a, tbl2.a, tbl1.b FROM ... SELECT tbl1.*, tbl2.a FROM ... - See for more about + See for more about the table_name.* notation. @@ -1499,7 +1499,7 @@ SELECT a AS value, b + c AS sum FROM ... The AS keyword is optional, but only if the new column name does not match any PostgreSQL keyword (see ). To avoid an accidental match to + linkend="sql-keywords-appendix"/>). To avoid an accidental match to a keyword, you can double-quote the column name. For example, VALUE is a keyword, so this does not work: @@ -1518,7 +1518,7 @@ SELECT a "value", b + c AS sum FROM ... The naming of output columns here is different from that done in the FROM clause (see ). It is possible + linkend="queries-table-aliases"/>). It is possible to rename the same column twice, but the name assigned in the select list is the one that will be passed on. @@ -1663,7 +1663,7 @@ SELECT DISTINCT ON (expression , union compatible, which means that they return the same number of columns and the corresponding columns have compatible data types, as - described in . + described in . @@ -1861,7 +1861,7 @@ VALUES ( expression [, ...] ) [, .. of columns in the table), and corresponding entries in each list must have compatible data types. The actual data type assigned to each column of the result is determined using the same rules as for UNION - (see ). + (see ). @@ -1912,7 +1912,7 @@ SELECT select_list FROM table_expression - For more information see . + For more information see . @@ -2261,7 +2261,7 @@ SELECT * FROM moved_rows; Data-modifying statements in WITH usually have - RETURNING clauses (see ), + RETURNING clauses (see ), as shown in the example above. It is the output of the RETURNING clause, not the target table of the data-modifying statement, that forms the temporary @@ -2317,7 +2317,7 @@ DELETE FROM parts each other and with the main query. Therefore, when using data-modifying statements in WITH, the order in which the specified updates actually happen is unpredictable. All the statements are executed with - the same snapshot (see ), so they + the same snapshot (see ), so they cannot see one another's effects on the target tables. This alleviates the effects of the unpredictability of the actual order of row updates, and means that RETURNING data is the only way to diff --git a/doc/src/sgml/query.sgml b/doc/src/sgml/query.sgml index b139c34577..c0889743c4 100644 --- a/doc/src/sgml/query.sgml +++ b/doc/src/sgml/query.sgml @@ -12,7 +12,7 @@ tutorial is only intended to give you an introduction and is in no way a complete tutorial on SQL. Numerous books have been written on SQL, including and . + linkend="melt93"/> and . 
You should be aware that some PostgreSQL language features are extensions to the standard. @@ -267,7 +267,7 @@ COPY weather FROM '/home/user/weather.txt'; where the file name for the source file must be available on the machine running the backend process, not the client, since the backend process reads the file directly. You can read more about the - COPY command in . + COPY command in . @@ -754,7 +754,7 @@ SELECT city, max(temp_lo) SELECT city, max(temp_lo) FROM weather - WHERE city LIKE 'S%' -- + WHERE city LIKE 'S%' -- GROUP BY city HAVING max(temp_lo) < 40; @@ -762,7 +762,7 @@ SELECT city, max(temp_lo) The LIKE operator does pattern matching and - is explained in . + is explained in . diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml index ef2bff9cd9..3a034d9b06 100644 --- a/doc/src/sgml/rangetypes.sgml +++ b/doc/src/sgml/rangetypes.sgml @@ -65,7 +65,7 @@ In addition, you can define your own range types; - see for more information. + see for more information. @@ -94,8 +94,8 @@ SELECT int4range(10, 20) * int4range(15, 25); SELECT isempty(numrange(1, 5)); - See - and for complete lists of + See + and for complete lists of operators and functions on range types. @@ -117,7 +117,7 @@ SELECT isempty(numrange(1, 5)); represented by (. Likewise, an inclusive upper bound is represented by ], while an exclusive upper bound is represented by ). - (See for more details.) + (See for more details.) @@ -214,7 +214,7 @@ empty These rules are very similar to those for writing field values in - composite-type literals. See for + composite-type literals. See for additional commentary. @@ -406,7 +406,7 @@ SELECT '[11:10, 23:00]'::timerange; - See for more information about creating + See for more information about creating range types. @@ -435,7 +435,7 @@ CREATE INDEX reservation_idx ON reservation USING GIST (during); -|-, &<, and &> - (see for more information). + (see for more information). diff --git a/doc/src/sgml/recovery-config.sgml b/doc/src/sgml/recovery-config.sgml index ca37ab5187..92825fdf19 100644 --- a/doc/src/sgml/recovery-config.sgml +++ b/doc/src/sgml/recovery-config.sgml @@ -58,7 +58,7 @@ to truncate the archive to just the minimum required to support restarting from the current restore. %r is typically only used by warm-standby configurations - (see ). + (see ). Write %% to embed an actual % character. @@ -99,7 +99,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows may be safely removed. This information can be used to truncate the archive to just the minimum required to support restart from the current restore. - The module + The module is often used in archive_cleanup_command for single-standby configurations, for example: archive_cleanup_command = 'pg_archivecleanup /mnt/server/archivedir %r' @@ -107,7 +107,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows same archive directory, you will need to ensure that you do not delete WAL files until they are no longer needed by any of the servers. archive_cleanup_command would typically be used in a - warm-standby configuration (see ). + warm-standby configuration (see ). Write %% to embed an actual % character in the command. @@ -133,7 +133,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows recovery_end_command is to provide a mechanism for cleanup following replication or recovery. Any %r is replaced by the name of the file containing the - last valid restart point, like in . + last valid restart point, like in . 
If the command returns a nonzero exit status then a warning log @@ -209,7 +209,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows This parameter specifies the time stamp up to which recovery will proceed. The precise stopping point is also influenced by - . + . @@ -229,7 +229,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows The transactions that will be recovered are those that committed before (and optionally including) the specified one. The precise stopping point is also influenced by - . + . @@ -244,7 +244,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows This parameter specifies the LSN of the write-ahead log location up to which recovery will proceed. The precise stopping point is also - influenced by . This + influenced by . This parameter is parsed using the system data type pg_lsn. @@ -270,9 +270,9 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows Specifies whether to stop just after the specified recovery target (true), or just before the recovery target (false). - Applies when , - , or - is specified. + Applies when , + , or + is specified. This setting controls whether transactions having exactly the target WAL location (LSN), commit time, or transaction ID, respectively, will be included in the recovery. Default is true. @@ -296,7 +296,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows a standby server. Other than that you only need to set this parameter in complex re-recovery situations, where you need to return to a state that itself was reached after a point-in-time recovery. - See for discussion. + See for discussion. @@ -323,7 +323,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows is the most desirable point for recovery. The paused state can be resumed by using pg_wal_replay_resume() (see - ), which then + ), which then causes recovery to end. If this recovery target is not the desired stopping point, then shut down the server, change the recovery target settings to a later target and restart to @@ -344,7 +344,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows This setting has no effect if no recovery target is set. - If is not enabled, a setting of + If is not enabled, a setting of pause will act the same as shutdown. @@ -386,9 +386,9 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows Specifies a connection string to be used for the standby server to connect with the primary. This string is in the format - described in . If any option is + described in . If any option is unspecified in this string, then the corresponding environment - variable (see ) is checked. If the + variable (see ) is checked. If the environment variable is not set either, then defaults are used. @@ -398,7 +398,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows the same as the standby server's default. Also specify a user name corresponding to a suitably-privileged role on the primary (see - ). + ). A password needs to be provided too, if the primary demands password authentication. It can be provided in the primary_conninfo string, or in a separate @@ -423,7 +423,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows Optionally specifies an existing replication slot to be used when connecting to the primary via streaming replication to control resource removal on the upstream node - (see ). + (see ). This setting has no effect if primary_conninfo is not set. 
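[Editor's note: taken together, the recovery parameters documented in the hunks above combine into a small recovery.conf. The following is an illustrative sketch only; the archive path, host name, slot name, and target timestamp are invented, and the parameter set must match your own setup.]

# Point-in-time recovery up to (but not including) a known-good moment:
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
recovery_target_time = '2017-08-17 11:00:00'
recovery_target_inclusive = false
recovery_target_action = 'pause'

# Alternatively, for a streaming standby (invented host and slot names):
# primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
# primary_slot_name = 'standby_1'

[Pausing at the target lets you inspect the restored data before pg_wal_replay_resume() ends recovery, as the text above describes.]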
diff --git a/doc/src/sgml/ref/abort.sgml b/doc/src/sgml/ref/abort.sgml index d341595785..21799d2a83 100644 --- a/doc/src/sgml/ref/abort.sgml +++ b/doc/src/sgml/ref/abort.sgml @@ -33,7 +33,7 @@ ABORT [ WORK | TRANSACTION ] all the updates made by the transaction to be discarded. This command is identical in behavior to the standard SQL command - , + , and is present only for historical reasons. @@ -58,7 +58,7 @@ ABORT [ WORK | TRANSACTION ] Notes - Use to + Use to successfully terminate a transaction. @@ -92,9 +92,9 @@ ABORT; See Also - - - + + + diff --git a/doc/src/sgml/ref/alter_aggregate.sgml b/doc/src/sgml/ref/alter_aggregate.sgml index e00e726ad8..2ad3e0440b 100644 --- a/doc/src/sgml/ref/alter_aggregate.sgml +++ b/doc/src/sgml/ref/alter_aggregate.sgml @@ -142,7 +142,7 @@ ALTER AGGREGATE name ( aggregate_signatu The recommended syntax for referencing an ordered-set aggregate is to write ORDER BY between the direct and aggregated argument specifications, in the same style as in - . However, it will also work to + . However, it will also work to omit ORDER BY and just run the direct and aggregated argument specifications into a single list. In this abbreviated form, if VARIADIC "any" was used in both the direct and @@ -195,8 +195,8 @@ ALTER AGGREGATE mypercentile(float8, integer) SET SCHEMA myschema; See Also - - + + diff --git a/doc/src/sgml/ref/alter_collation.sgml b/doc/src/sgml/ref/alter_collation.sgml index c7ad7437e8..b51b3a2564 100644 --- a/doc/src/sgml/ref/alter_collation.sgml +++ b/doc/src/sgml/ref/alter_collation.sgml @@ -94,7 +94,7 @@ ALTER COLLATION name SET SCHEMA new_sche Update the collation's version. See below. + endterm="sql-altercollation-notes-title"/> below. @@ -176,8 +176,8 @@ ALTER COLLATION "en_US" OWNER TO joe; See Also - - + + diff --git a/doc/src/sgml/ref/alter_conversion.sgml b/doc/src/sgml/ref/alter_conversion.sgml index 08ed5e28fb..c42bd8b3e4 100644 --- a/doc/src/sgml/ref/alter_conversion.sgml +++ b/doc/src/sgml/ref/alter_conversion.sgml @@ -120,8 +120,8 @@ ALTER CONVERSION iso_8859_1_to_utf8 OWNER TO joe; See Also - - + + diff --git a/doc/src/sgml/ref/alter_database.sgml b/doc/src/sgml/ref/alter_database.sgml index 1e09b5df1d..7db878cf53 100644 --- a/doc/src/sgml/ref/alter_database.sgml +++ b/doc/src/sgml/ref/alter_database.sgml @@ -188,7 +188,7 @@ ALTER DATABASE name RESET ALL - See and + See and for more information about allowed parameter names and values. @@ -203,7 +203,7 @@ ALTER DATABASE name RESET ALL It is also possible to tie a session default to a specific role rather than to a database; see - . + . Role-specific settings override database-specific ones if there is a conflict. @@ -234,10 +234,10 @@ ALTER DATABASE test SET enable_indexscan TO off; See Also - - - - + + + + diff --git a/doc/src/sgml/ref/alter_default_privileges.sgml b/doc/src/sgml/ref/alter_default_privileges.sgml index bc7401f845..ab2c35b4dd 100644 --- a/doc/src/sgml/ref/alter_default_privileges.sgml +++ b/doc/src/sgml/ref/alter_default_privileges.sgml @@ -106,7 +106,7 @@ REVOKE [ GRANT OPTION FOR ] - As explained under , + As explained under , the default privileges for any object type normally grant all grantable permissions to the object owner, and may grant some privileges to PUBLIC as well. 
However, this behavior can be changed by @@ -150,8 +150,8 @@ REVOKE [ GRANT OPTION FOR ] This parameter, and all the other parameters in abbreviated_grant_or_revoke, act as described under - or - , + or + , except that one is setting permissions for a whole class of objects rather than specific named objects. @@ -165,11 +165,11 @@ REVOKE [ GRANT OPTION FOR ] Notes - Use 's \ddp command + Use 's \ddp command to obtain information about existing assignments of default privileges. The meaning of the privilege values is the same as explained for \dp under - . + . @@ -226,8 +226,8 @@ ALTER DEFAULT PRIVILEGES FOR ROLE admin REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC; See Also - - + + diff --git a/doc/src/sgml/ref/alter_domain.sgml b/doc/src/sgml/ref/alter_domain.sgml index 9cd044de54..85253e209b 100644 --- a/doc/src/sgml/ref/alter_domain.sgml +++ b/doc/src/sgml/ref/alter_domain.sgml @@ -80,7 +80,7 @@ ALTER DOMAIN name This form adds a new constraint to a domain using the same syntax as - . + . When a new constraint is added to a domain, all columns using that domain will be checked against the newly added constraint. These checks can be suppressed by adding the new constraint using the @@ -214,7 +214,7 @@ ALTER DOMAIN name Automatically drop objects that depend on the constraint, and in turn all objects that depend on those objects - (see ). + (see ). @@ -342,8 +342,8 @@ ALTER DOMAIN zipcode SET SCHEMA customers; See Also - - + + diff --git a/doc/src/sgml/ref/alter_event_trigger.sgml b/doc/src/sgml/ref/alter_event_trigger.sgml index b913ac9a5b..61919f7845 100644 --- a/doc/src/sgml/ref/alter_event_trigger.sgml +++ b/doc/src/sgml/ref/alter_event_trigger.sgml @@ -78,7 +78,7 @@ ALTER EVENT TRIGGER name RENAME TO These forms configure the firing of event triggers. A disabled trigger is still known to the system, but is not executed when its triggering - event occurs. See also . + event occurs. See also . @@ -98,8 +98,8 @@ ALTER EVENT TRIGGER name RENAME TO See Also - - + + diff --git a/doc/src/sgml/ref/alter_extension.sgml b/doc/src/sgml/ref/alter_extension.sgml index c2b0669c38..e54925507e 100644 --- a/doc/src/sgml/ref/alter_extension.sgml +++ b/doc/src/sgml/ref/alter_extension.sgml @@ -119,7 +119,7 @@ ALTER EXTENSION name DROP - See for more information about these + See for more information about these operations. @@ -323,8 +323,8 @@ ALTER EXTENSION hstore ADD FUNCTION populate_record(anyelement, hstore); See Also - - + + diff --git a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml index 21bc83e512..14f3d616e7 100644 --- a/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/alter_foreign_data_wrapper.sgml @@ -180,8 +180,8 @@ ALTER FOREIGN DATA WRAPPER dbi VALIDATOR bob.myvalidator; See Also - - + + diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml index df3d6d0696..f266be0c37 100644 --- a/doc/src/sgml/ref/alter_foreign_table.sgml +++ b/doc/src/sgml/ref/alter_foreign_table.sgml @@ -72,7 +72,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form adds a new column to the foreign table, using the same syntax as - . + . Unlike the case when adding a column to a regular table, nothing happens to the underlying storage: this action simply declares that some new column is now accessible through the foreign table. @@ -134,8 +134,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form sets the per-column statistics-gathering target for subsequent - operations. 
- See the similar form of + operations. + See the similar form of for more details. @@ -147,7 +147,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form sets or resets per-attribute options. - See the similar form of + See the similar form of for more details. @@ -160,7 +160,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form sets the storage mode for a column. - See the similar form of + See the similar form of for more details. Note that the storage mode has no effect unless the table's foreign-data wrapper chooses to pay attention to it. @@ -173,7 +173,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form adds a new constraint to a foreign table, using the same - syntax as . + syntax as . Currently only CHECK constraints are supported. @@ -182,7 +182,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name.) + in .) If the constraint is marked NOT VALID, then it isn't assumed to hold, but is only recorded for possible future use. @@ -217,7 +217,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name These forms configure the firing of trigger(s) belonging to the foreign - table. See the similar form of for more + table. See the similar form of for more details. @@ -228,7 +228,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form adds an oid system column to the - table (see ). + table (see ). It does nothing if the table already has OIDs. Unless the table's foreign-data wrapper supports OIDs, this column will simply read as zeroes. @@ -261,7 +261,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name This form adds the target foreign table as a new child of the specified parent table. - See the similar form of + See the similar form of for more details. @@ -433,7 +433,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name). + (see ). @@ -525,7 +525,7 @@ ALTER FOREIGN TABLE [ IF EXISTS ] name - Refer to for a further description of valid + Refer to for a further description of valid parameters. @@ -571,8 +571,8 @@ ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'v See Also - - + + diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml index fd35e98a88..196d2dde0c 100644 --- a/doc/src/sgml/ref/alter_function.sgml +++ b/doc/src/sgml/ref/alter_function.sgml @@ -171,7 +171,7 @@ ALTER FUNCTION name [ ( [ [ + is assumed automatically. See for more information. @@ -185,7 +185,7 @@ ALTER FUNCTION name [ ( [ [ for details. + See for details. @@ -198,7 +198,7 @@ ALTER FUNCTION name [ ( [ [ for more information about + conformance. See for more information about this capability. @@ -210,7 +210,7 @@ ALTER FUNCTION name [ ( [ [ for details. + See for details. @@ -220,7 +220,7 @@ ALTER FUNCTION name [ ( [ [ for more information about + See for more information about this capability. @@ -232,7 +232,7 @@ ALTER FUNCTION name [ ( [ [ for more information. + See for more information. @@ -243,7 +243,7 @@ ALTER FUNCTION name [ ( [ [ for more information. + function. See for more information. @@ -266,8 +266,8 @@ ALTER FUNCTION name [ ( [ [ and - + See and + for more information about allowed parameter names and values. @@ -357,8 +357,8 @@ ALTER FUNCTION check_password(text) RESET search_path; See Also - - + + diff --git a/doc/src/sgml/ref/alter_group.sgml b/doc/src/sgml/ref/alter_group.sgml index 172a62a6f7..39cc2b88cf 100644 --- a/doc/src/sgml/ref/alter_group.sgml +++ b/doc/src/sgml/ref/alter_group.sgml @@ -50,14 +50,14 @@ ALTER GROUP group_name RENAME TO group for this purpose.) 
These variants are effectively equivalent to granting or revoking membership in the role named as the group; so the preferred way to do this is to use - or - . + or + . The third variant changes the name of the group. This is exactly equivalent to renaming the role with - . + . @@ -125,9 +125,9 @@ ALTER GROUP workers DROP USER beth; See Also - - - + + + diff --git a/doc/src/sgml/ref/alter_index.sgml b/doc/src/sgml/ref/alter_index.sgml index 5d0b792e50..e54237272c 100644 --- a/doc/src/sgml/ref/alter_index.sgml +++ b/doc/src/sgml/ref/alter_index.sgml @@ -70,7 +70,7 @@ ALTER INDEX ALL IN TABLESPACE name this command, use ALTER DATABASE or explicit ALTER INDEX invocations instead if desired. See also - . + . @@ -91,11 +91,11 @@ ALTER INDEX ALL IN TABLESPACE name This form changes one or more index-method-specific storage parameters for the index. See - + for details on the available parameters. Note that the index contents will not be modified immediately by this command; depending on the parameter you might need to rebuild the index with - + to get the desired effects. @@ -117,16 +117,16 @@ ALTER INDEX ALL IN TABLESPACE name This form sets the per-column statistics-gathering target for - subsequent operations, though can + subsequent operations, though can be used only on index columns that are defined as an expression. Since expressions lack a unique name, we refer to them using the ordinal number of the index column. The target can be set in the range 0 to 10000; alternatively, set it to -1 to revert to using the system default statistics - target (). + target (). For more information on the use of statistics by the PostgreSQL query planner, refer to - . + . @@ -225,7 +225,7 @@ ALTER INDEX ALL IN TABLESPACE name These operations are also possible using - . + . ALTER INDEX is in fact just an alias for the forms of ALTER TABLE that apply to indexes. @@ -290,8 +290,8 @@ ALTER INDEX coord_idx ALTER COLUMN 3 SET STATISTICS 1000; See Also - - + + diff --git a/doc/src/sgml/ref/alter_language.sgml b/doc/src/sgml/ref/alter_language.sgml index 389824e3d2..eac63dec13 100644 --- a/doc/src/sgml/ref/alter_language.sgml +++ b/doc/src/sgml/ref/alter_language.sgml @@ -83,8 +83,8 @@ ALTER [ PROCEDURAL ] LANGUAGE name OWNER TO { See Also - - + + diff --git a/doc/src/sgml/ref/alter_large_object.sgml b/doc/src/sgml/ref/alter_large_object.sgml index 0fbb8d5b62..f4a9c9e2a5 100644 --- a/doc/src/sgml/ref/alter_large_object.sgml +++ b/doc/src/sgml/ref/alter_large_object.sgml @@ -73,7 +73,7 @@ ALTER LARGE OBJECT large_object_oid See Also - + diff --git a/doc/src/sgml/ref/alter_materialized_view.sgml b/doc/src/sgml/ref/alter_materialized_view.sgml index f41b5058ff..03e3df1ffd 100644 --- a/doc/src/sgml/ref/alter_materialized_view.sgml +++ b/doc/src/sgml/ref/alter_materialized_view.sgml @@ -78,7 +78,7 @@ ALTER MATERIALIZED VIEW ALL IN TABLESPACE nameALTER MATERIALIZED VIEW are a subset of those available for ALTER TABLE, and have the same meaning when used for - materialized views. See the descriptions for + materialized views. See the descriptions for for details. 
@@ -177,9 +177,9 @@ ALTER MATERIALIZED VIEW foo RENAME TO bar; See Also - - - + + + diff --git a/doc/src/sgml/ref/alter_opclass.sgml b/doc/src/sgml/ref/alter_opclass.sgml index e69bcf2dd7..59a64caa4f 100644 --- a/doc/src/sgml/ref/alter_opclass.sgml +++ b/doc/src/sgml/ref/alter_opclass.sgml @@ -116,9 +116,9 @@ ALTER OPERATOR CLASS name USING - - - + + + diff --git a/doc/src/sgml/ref/alter_operator.sgml b/doc/src/sgml/ref/alter_operator.sgml index 4c6f75efff..b3bfa9ccbe 100644 --- a/doc/src/sgml/ref/alter_operator.sgml +++ b/doc/src/sgml/ref/alter_operator.sgml @@ -154,8 +154,8 @@ ALTER OPERATOR && (_int4, _int4) SET (RESTRICT = _int_contsel, JOIN = _i See Also - - + + diff --git a/doc/src/sgml/ref/alter_opfamily.sgml b/doc/src/sgml/ref/alter_opfamily.sgml index f327267ff8..3c0922c645 100644 --- a/doc/src/sgml/ref/alter_opfamily.sgml +++ b/doc/src/sgml/ref/alter_opfamily.sgml @@ -62,7 +62,7 @@ ALTER OPERATOR FAMILY name USING .) + instead; see .) PostgreSQL will allow loose members of a family to be dropped from the family at any time, but members of an operator class cannot be dropped without dropping the whole class and @@ -88,7 +88,7 @@ ALTER OPERATOR FAMILY name USING for further information. + Refer to for further information. @@ -349,11 +349,11 @@ ALTER OPERATOR FAMILY integer_ops USING btree DROP See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/alter_policy.sgml b/doc/src/sgml/ref/alter_policy.sgml index a49f2fc5a5..a1c720a956 100644 --- a/doc/src/sgml/ref/alter_policy.sgml +++ b/doc/src/sgml/ref/alter_policy.sgml @@ -105,7 +105,7 @@ ALTER POLICY name ON The USING expression for the policy. - See for details. + See for details. @@ -115,7 +115,7 @@ ALTER POLICY name ON The WITH CHECK expression for the policy. - See for details. + See for details. @@ -135,8 +135,8 @@ ALTER POLICY name ON See Also - - + + diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml index 5557f9b231..534e598d93 100644 --- a/doc/src/sgml/ref/alter_publication.sgml +++ b/doc/src/sgml/ref/alter_publication.sgml @@ -52,7 +52,7 @@ ALTER PUBLICATION name RENAME TO The fourth variant of this command listed in the synopsis can change all of the publication properties specified in - . Properties not mentioned in the + . Properties not mentioned in the command retain their previous settings. @@ -101,7 +101,7 @@ ALTER PUBLICATION name RENAME TO This clause alters publication parameters originally set by - . See there for more information. + . See there for more information. @@ -156,10 +156,10 @@ ALTER PUBLICATION mypublication ADD TABLE users, departments; See Also - - - - + + + + diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml index c135364d4e..573a3e80f7 100644 --- a/doc/src/sgml/ref/alter_role.sgml +++ b/doc/src/sgml/ref/alter_role.sgml @@ -62,11 +62,11 @@ ALTER ROLE { role_specification | A The first variant of this command listed in the synopsis can change many of the role attributes that can be specified in - . + . (All the possible attributes are covered, except that there are no options for adding or removing memberships; use - and - for that.) + and + for that.) Attributes not mentioned in the command retain their previous settings. Database superusers can change any of these settings for any role. Roles having CREATEROLE privilege can change any of these @@ -102,8 +102,8 @@ ALTER ROLE { role_specification | A default, overriding whatever setting is present in postgresql.conf or has been received from the postgres command line. 
This only happens at login time; executing - or - does not cause new + or + does not cause new configuration values to be set. Settings set for all databases are overridden by database-specific settings attached to a role. Settings for specific databases or specific roles override @@ -173,7 +173,7 @@ ALTER ROLE { role_specification | A These clauses alter attributes originally set by - . For more information, see the + . For more information, see the CREATE ROLE reference page. @@ -217,14 +217,14 @@ ALTER ROLE { role_specification | A Role-specific variable settings take effect only at login; - and - + and + do not process role-specific variable settings. - See and for more information about allowed + See and for more information about allowed parameter names and values. @@ -236,14 +236,14 @@ ALTER ROLE { role_specification | A Notes - Use - to add new roles, and to remove a role. + Use + to add new roles, and to remove a role. ALTER ROLE cannot change a role's memberships. - Use and - + Use and + to do that. @@ -251,7 +251,7 @@ ALTER ROLE { role_specification | A Caution must be exercised when specifying an unencrypted password with this command. The password will be transmitted to the server in cleartext, and it might also be logged in the client's command - history or the server log. + history or the server log. contains a command \password that can be used to change a role's password without exposing the cleartext password. @@ -260,7 +260,7 @@ ALTER ROLE { role_specification | A It is also possible to tie a session default to a specific database rather than to a role; see - . + . If there is a conflict, database-role-specific settings override role-specific ones, which in turn override database-specific ones. @@ -311,7 +311,7 @@ ALTER ROLE miriam CREATEROLE CREATEDB; Give a role a non-default setting of the - parameter: + parameter: ALTER ROLE worker_bee SET maintenance_work_mem = 100000; @@ -320,7 +320,7 @@ ALTER ROLE worker_bee SET maintenance_work_mem = 100000; Give a role a non-default, database-specific setting of the - parameter: + parameter: ALTER ROLE fred IN DATABASE devel SET client_min_messages = DEBUG; @@ -340,10 +340,10 @@ ALTER ROLE fred IN DATABASE devel SET client_min_messages = DEBUG; See Also - - - - + + + + diff --git a/doc/src/sgml/ref/alter_rule.sgml b/doc/src/sgml/ref/alter_rule.sgml index f8833feee7..c20bfb35e1 100644 --- a/doc/src/sgml/ref/alter_rule.sgml +++ b/doc/src/sgml/ref/alter_rule.sgml @@ -97,8 +97,8 @@ ALTER RULE notify_all ON emp RENAME TO notify_me; See Also - - + + diff --git a/doc/src/sgml/ref/alter_schema.sgml b/doc/src/sgml/ref/alter_schema.sgml index dc91420954..2937214026 100644 --- a/doc/src/sgml/ref/alter_schema.sgml +++ b/doc/src/sgml/ref/alter_schema.sgml @@ -92,8 +92,8 @@ ALTER SCHEMA name OWNER TO { new_ownerSee Also - - + + diff --git a/doc/src/sgml/ref/alter_sequence.sgml b/doc/src/sgml/ref/alter_sequence.sgml index 655b35c6fc..bfd20af6d3 100644 --- a/doc/src/sgml/ref/alter_sequence.sgml +++ b/doc/src/sgml/ref/alter_sequence.sgml @@ -343,8 +343,8 @@ ALTER SEQUENCE serial RESTART WITH 105; See Also - - + + diff --git a/doc/src/sgml/ref/alter_server.sgml b/doc/src/sgml/ref/alter_server.sgml index 53529abff7..17e55b093e 100644 --- a/doc/src/sgml/ref/alter_server.sgml +++ b/doc/src/sgml/ref/alter_server.sgml @@ -136,8 +136,8 @@ ALTER SERVER foo VERSION '8.4' OPTIONS (SET host 'baz'); See Also - - + + diff --git a/doc/src/sgml/ref/alter_statistics.sgml b/doc/src/sgml/ref/alter_statistics.sgml index d7b012fd54..58c7ed020d 100644 --- 
a/doc/src/sgml/ref/alter_statistics.sgml +++ b/doc/src/sgml/ref/alter_statistics.sgml @@ -109,8 +109,8 @@ ALTER STATISTICS name SET SCHEMA See Also - - + + diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml index 7e0240d696..6dfb2e4d3e 100644 --- a/doc/src/sgml/ref/alter_subscription.sgml +++ b/doc/src/sgml/ref/alter_subscription.sgml @@ -38,7 +38,7 @@ ALTER SUBSCRIPTION name RENAME TO < ALTER SUBSCRIPTION can change most of the subscription properties that can be specified - in . + in . @@ -68,7 +68,7 @@ ALTER SUBSCRIPTION name RENAME TO < This clause alters the connection property originally set by - . See there for more + . See there for more information. @@ -79,7 +79,7 @@ ALTER SUBSCRIPTION name RENAME TO < Changes list of subscribed publications. See - for more information. + for more information. By default this command will also act like REFRESH PUBLICATION. @@ -162,7 +162,7 @@ ALTER SUBSCRIPTION name RENAME TO < This clause alters parameters originally set by - . See there for more + . See there for more information. The allowed options are slot_name and synchronous_commit @@ -220,10 +220,10 @@ ALTER SUBSCRIPTION mysub DISABLE; See Also - - - - + + + + diff --git a/doc/src/sgml/ref/alter_system.sgml b/doc/src/sgml/ref/alter_system.sgml index 887c4392dd..5e41f7f644 100644 --- a/doc/src/sgml/ref/alter_system.sgml +++ b/doc/src/sgml/ref/alter_system.sgml @@ -70,7 +70,7 @@ ALTER SYSTEM RESET ALL Name of a settable configuration parameter. Available parameters are - documented in . + documented in . @@ -94,13 +94,13 @@ ALTER SYSTEM RESET ALL Notes - This command can't be used to set , + This command can't be used to set , nor parameters that are not allowed in postgresql.conf (e.g., preset options). - See for other ways to set the parameters. + See for other ways to set the parameters. @@ -135,8 +135,8 @@ ALTER SYSTEM RESET wal_level; See Also - - + + diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 92db00f52d..7bcf242846 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -120,7 +120,7 @@ ALTER TABLE [ IF EXISTS ] name This form adds a new column to the table, using the same syntax as - . If IF NOT EXISTS + . If IF NOT EXISTS is specified and a column already exists with this name, no error is thrown. @@ -209,7 +209,7 @@ ALTER TABLE [ IF EXISTS ] name These forms change whether a column is an identity column or change the generation attribute of an existing identity column. - See for details. + See for details. @@ -227,7 +227,7 @@ ALTER TABLE [ IF EXISTS ] name These forms alter the sequence that underlies an existing identity column. sequence_option is an option - supported by such + supported by such as INCREMENT BY. @@ -239,13 +239,13 @@ ALTER TABLE [ IF EXISTS ] name This form sets the per-column statistics-gathering target for subsequent - operations. + operations. The target can be set in the range 0 to 10000; alternatively, set it to -1 to revert to using the system default statistics - target (). + target (). For more information on the use of statistics by the PostgreSQL query planner, refer to - . + . SET STATISTICS acquires a @@ -263,7 +263,7 @@ ALTER TABLE [ IF EXISTS ] name defined per-attribute options are n_distinct and n_distinct_inherited, which override the number-of-distinct-values estimates made by subsequent - + operations. 
n_distinct affects the statistics for the table itself, while n_distinct_inherited affects the statistics gathered for the table plus its inheritance children. When set to a @@ -281,7 +281,7 @@ ALTER TABLE [ IF EXISTS ] name until query planning time. Specify a value of 0 to revert to estimating the number of distinct values normally. For more information on the use of statistics by the PostgreSQL query - planner, refer to . + planner, refer to . Changing per-attribute options acquires a @@ -315,7 +315,7 @@ ALTER TABLE [ IF EXISTS ] name at the penalty of increased storage space. Note that SET STORAGE doesn't itself change anything in the table, it just sets the strategy to be pursued during future table updates. - See for more information. + See for more information. @@ -325,7 +325,7 @@ ALTER TABLE [ IF EXISTS ] name This form adds a new constraint to a table using the same syntax as - , plus the option NOT + , plus the option NOT VALID, which is currently only allowed for foreign key and CHECK constraints. If the constraint is marked NOT VALID, the @@ -457,7 +457,7 @@ ALTER TABLE [ IF EXISTS ] name of course the integrity of the constraint cannot be guaranteed if the triggers are not executed. The trigger firing mechanism is also affected by the configuration - variable . Simply enabled + variable . Simply enabled triggers will fire when the replication role is origin (the default) or local. Triggers configured as ENABLE REPLICA will only fire if the session is in replica @@ -494,7 +494,7 @@ ALTER TABLE [ IF EXISTS ] name even if row level security is disabled - in this case, the policies will NOT be applied and the policies will be ignored. See also - . + . @@ -509,7 +509,7 @@ ALTER TABLE [ IF EXISTS ] name disabled (the default) then row level security will not be applied when the user is the table owner. See also - . + . @@ -519,7 +519,7 @@ ALTER TABLE [ IF EXISTS ] name This form selects the default index for future - + operations. It does not actually re-cluster the table. @@ -533,7 +533,7 @@ ALTER TABLE [ IF EXISTS ] name This form removes the most recently used - + index specification from the table. This affects future cluster operations that don't specify an index. @@ -548,7 +548,7 @@ ALTER TABLE [ IF EXISTS ] name This form adds an oid system column to the - table (see ). + table (see ). It does nothing if the table already has OIDs. @@ -593,7 +593,7 @@ ALTER TABLE [ IF EXISTS ] name information_schema relations are not considered part of the system catalogs and will be moved. See also - . + . @@ -603,7 +603,7 @@ ALTER TABLE [ IF EXISTS ] name This form changes the table from unlogged to logged or vice-versa - (see ). It cannot be applied + (see ). It cannot be applied to a temporary table. @@ -615,12 +615,12 @@ ALTER TABLE [ IF EXISTS ] name This form changes one or more storage parameters for the table. See + endterm="sql-createtable-storage-parameters-title"/> for details on the available parameters. Note that the table contents will not be modified immediately by this command; depending on the parameter you might need to rewrite the table to get the desired effects. That can be done with VACUUM - FULL, or one of the forms + FULL, or one of the forms of ALTER TABLE that forces a table rewrite. For planner related parameters, changes will take effect from the next time the table is locked so currently executing queries will not be @@ -789,7 +789,7 @@ ALTER TABLE [ IF EXISTS ] name A partition using FOR VALUES uses same syntax for partition_bound_spec as - . 
The partition bound specification + . The partition bound specification must correspond to the partitioning strategy and partition key of the target table. The table to be attached must have all the same columns as the target table and no more; moreover, the column types must also @@ -821,7 +821,7 @@ ALTER TABLE [ IF EXISTS ] name If the new partition is a foreign table, nothing is done to verify that all the rows in the foreign table obey the partition constraint. - (See the discussion in about + (See the discussion in about constraints on the foreign table.) @@ -972,7 +972,7 @@ ALTER TABLE [ IF EXISTS ] name Automatically drop objects that depend on the dropped column or constraint (for example, views referencing the column), and in turn all objects that depend on those objects - (see ). + (see ). @@ -1099,7 +1099,7 @@ ALTER TABLE [ IF EXISTS ] name The partition bound specification for a new partition. Refer to - for more details on the syntax of the same. + for more details on the syntax of the same. @@ -1177,7 +1177,7 @@ ALTER TABLE [ IF EXISTS ] name The rewriting forms of ALTER TABLE are not MVCC-safe. After a table rewrite, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the rewrite - occurred. See for more details. + occurred. See for more details. @@ -1239,8 +1239,8 @@ ALTER TABLE [ IF EXISTS ] name - Refer to for a further description of valid - parameters. has further information on + Refer to for a further description of valid + parameters. has further information on inheritance. @@ -1472,7 +1472,7 @@ ALTER TABLE measurement See Also - + diff --git a/doc/src/sgml/ref/alter_tablespace.sgml b/doc/src/sgml/ref/alter_tablespace.sgml index 4d6f011e2f..acec33469f 100644 --- a/doc/src/sgml/ref/alter_tablespace.sgml +++ b/doc/src/sgml/ref/alter_tablespace.sgml @@ -88,9 +88,9 @@ ALTER TABLESPACE name RESET ( , - , - ). This may be useful if + same name (see , + , + ). This may be useful if one tablespace is located on a disk which is faster or slower than the remainder of the I/O subsystem. @@ -130,8 +130,8 @@ ALTER TABLESPACE index_space OWNER TO mary; See Also - - + + diff --git a/doc/src/sgml/ref/alter_trigger.sgml b/doc/src/sgml/ref/alter_trigger.sgml index 4b4dacbf28..6cf789a67a 100644 --- a/doc/src/sgml/ref/alter_trigger.sgml +++ b/doc/src/sgml/ref/alter_trigger.sgml @@ -90,7 +90,7 @@ ALTER TRIGGER name ON The ability to temporarily enable or disable a trigger is provided by - , not by + , not by ALTER TRIGGER, because ALTER TRIGGER has no convenient way to express the option of enabling or disabling all of a table's triggers at once. 
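[Editor's note: a quick sketch of the division of labor the ALTER TRIGGER notes describe, reusing the emp_stamp trigger from the page's own example; the new trigger name is invented.]

ALTER TRIGGER emp_stamp ON emp RENAME TO emp_audit_stamp;  -- renaming stays with ALTER TRIGGER
ALTER TABLE emp DISABLE TRIGGER emp_audit_stamp;           -- enabling/disabling goes through ALTER TABLE
ALTER TABLE emp DISABLE TRIGGER ALL;                       -- which can also act on all triggers at once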
@@ -126,7 +126,7 @@ ALTER TRIGGER emp_stamp ON emp DEPENDS ON EXTENSION emplib; See Also - + diff --git a/doc/src/sgml/ref/alter_tsconfig.sgml b/doc/src/sgml/ref/alter_tsconfig.sgml index 630927c15b..ebe0b94b27 100644 --- a/doc/src/sgml/ref/alter_tsconfig.sgml +++ b/doc/src/sgml/ref/alter_tsconfig.sgml @@ -182,8 +182,8 @@ ALTER TEXT SEARCH CONFIGURATION my_config See Also - - + + diff --git a/doc/src/sgml/ref/alter_tsdictionary.sgml b/doc/src/sgml/ref/alter_tsdictionary.sgml index 75a8b1dac6..b29865e11e 100644 --- a/doc/src/sgml/ref/alter_tsdictionary.sgml +++ b/doc/src/sgml/ref/alter_tsdictionary.sgml @@ -163,8 +163,8 @@ ALTER TEXT SEARCH DICTIONARY my_dict ( dummy ); See Also - - + + diff --git a/doc/src/sgml/ref/alter_tsparser.sgml b/doc/src/sgml/ref/alter_tsparser.sgml index c71faeec05..9edff4b71a 100644 --- a/doc/src/sgml/ref/alter_tsparser.sgml +++ b/doc/src/sgml/ref/alter_tsparser.sgml @@ -86,8 +86,8 @@ ALTER TEXT SEARCH PARSER name SET SCHEMA See Also - - + + diff --git a/doc/src/sgml/ref/alter_tstemplate.sgml b/doc/src/sgml/ref/alter_tstemplate.sgml index 210baa7125..5d3c826533 100644 --- a/doc/src/sgml/ref/alter_tstemplate.sgml +++ b/doc/src/sgml/ref/alter_tstemplate.sgml @@ -86,8 +86,8 @@ ALTER TEXT SEARCH TEMPLATE name SET SCHEMA See Also - - + + diff --git a/doc/src/sgml/ref/alter_type.sgml b/doc/src/sgml/ref/alter_type.sgml index 7c32f0c5d5..9127dfd88d 100644 --- a/doc/src/sgml/ref/alter_type.sgml +++ b/doc/src/sgml/ref/alter_type.sgml @@ -52,7 +52,7 @@ ALTER TYPE name RENAME VALUE This form adds a new attribute to a composite type, using the same syntax as - . + . @@ -368,8 +368,8 @@ ALTER TYPE colors RENAME VALUE 'purple' TO 'mauve'; See Also - - + + diff --git a/doc/src/sgml/ref/alter_user.sgml b/doc/src/sgml/ref/alter_user.sgml index 8e03510bd4..8f50f43089 100644 --- a/doc/src/sgml/ref/alter_user.sgml +++ b/doc/src/sgml/ref/alter_user.sgml @@ -56,7 +56,7 @@ ALTER USER { role_specification | A ALTER USER is now an alias for - . + . @@ -74,7 +74,7 @@ ALTER USER { role_specification | A See Also - + diff --git a/doc/src/sgml/ref/alter_user_mapping.sgml b/doc/src/sgml/ref/alter_user_mapping.sgml index eecff388cb..7a9b5a188a 100644 --- a/doc/src/sgml/ref/alter_user_mapping.sgml +++ b/doc/src/sgml/ref/alter_user_mapping.sgml @@ -116,8 +116,8 @@ ALTER USER MAPPING FOR bob SERVER foo OPTIONS (SET password 'public'); See Also - - + + diff --git a/doc/src/sgml/ref/alter_view.sgml b/doc/src/sgml/ref/alter_view.sgml index f33519bd79..2e9edc1975 100644 --- a/doc/src/sgml/ref/alter_view.sgml +++ b/doc/src/sgml/ref/alter_view.sgml @@ -194,8 +194,8 @@ INSERT INTO a_view(id) VALUES(2); -- ts will receive the current time See Also - - + + diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml index bc33f0fa23..83b07a0300 100644 --- a/doc/src/sgml/ref/analyze.sgml +++ b/doc/src/sgml/ref/analyze.sgml @@ -111,7 +111,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns In the default PostgreSQL configuration, - the autovacuum daemon (see ) + the autovacuum daemon (see ) takes care of automatic analyzing of tables when they are first loaded with data, and as they change throughout regular operation. When autovacuum is disabled, @@ -119,7 +119,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns + strategy for read-mostly databases is to run and ANALYZE once a day during a low-usage time of day. (This will not be sufficient if there is heavy update activity.) @@ -139,7 +139,7 @@ ANALYZE [ VERBOSE ] [ table_and_columns. + linkend="maintenance"/>. 
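[Editor's note: as a concrete instance of the per-column statistics target and manual analyze pass discussed above, here is a minimal sketch; the table and column names are invented and the target value of 500 is arbitrary.]

-- Raise the sampling detail for one column, then re-collect statistics:
ALTER TABLE measurement ALTER COLUMN reading SET STATISTICS 500;
ANALYZE VERBOSE measurement;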
@@ -150,7 +150,7 @@ ANALYZE [ VERBOSE ] [ table_and_columnsANALYZE is run, even if the actual table contents did not change. This might result in small changes in the planner's estimated costs shown by - . + . In rare situations, this non-determinism will cause the planner's choices of query plans to change after ANALYZE is run. To avoid this, raise the amount of statistics collected by @@ -159,10 +159,10 @@ ANALYZE [ VERBOSE ] [ table_and_columns The extent of analysis can be controlled by adjusting the - configuration variable, or + configuration variable, or on a column-by-column basis by setting the per-column statistics target with ALTER TABLE ... ALTER COLUMN ... SET - STATISTICS (see ). + STATISTICS (see ). The target value sets the maximum number of entries in the most-common-value list and the maximum number of bins in the histogram. The default target value @@ -192,7 +192,7 @@ ANALYZE [ VERBOSE ] [ table_and_columnsALTER TABLE ... ALTER COLUMN ... SET (n_distinct = ...) - (see ). + (see ). @@ -233,10 +233,10 @@ ANALYZE [ VERBOSE ] [ table_and_columnsSee Also - - - - + + + + diff --git a/doc/src/sgml/ref/begin.sgml b/doc/src/sgml/ref/begin.sgml index 45f85aea34..c23bbfb4e7 100644 --- a/doc/src/sgml/ref/begin.sgml +++ b/doc/src/sgml/ref/begin.sgml @@ -38,8 +38,8 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_modeBEGIN initiates a transaction block, that is, all statements after a BEGIN command will be executed in a single transaction until an explicit or is given. + linkend="sql-commit"/> or is given. By default (without BEGIN), PostgreSQL executes transactions in autocommit mode, that is, each @@ -60,7 +60,7 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_mode If the isolation level, read/write mode, or deferrable mode is specified, the new transaction has those characteristics, as if - + was executed. @@ -81,7 +81,7 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_mode - Refer to for information on the meaning + Refer to for information on the meaning of the other parameters to this statement. @@ -90,13 +90,13 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_modeNotes - has the same functionality + has the same functionality as BEGIN. - Use or - + Use or + to terminate a transaction block. @@ -104,7 +104,7 @@ BEGIN [ WORK | TRANSACTION ] [ transaction_modeBEGIN when already inside a transaction block will provoke a warning message. The state of the transaction is not affected. To nest transactions within a transaction block, use savepoints - (see ). + (see ). @@ -131,7 +131,7 @@ BEGIN; BEGIN is a PostgreSQL language extension. It is equivalent to the SQL-standard command - , whose reference page + , whose reference page contains additional compatibility information. @@ -152,10 +152,10 @@ BEGIN; See Also - - - - + + + + diff --git a/doc/src/sgml/ref/checkpoint.sgml b/doc/src/sgml/ref/checkpoint.sgml index a8f3186d8c..dfcadcf402 100644 --- a/doc/src/sgml/ref/checkpoint.sgml +++ b/doc/src/sgml/ref/checkpoint.sgml @@ -29,7 +29,7 @@ CHECKPOINT A checkpoint is a point in the write-ahead log sequence at which all data files have been updated to reflect the information in the log. All data files will be flushed to disk. Refer to - for more details about what happens + for more details about what happens during a checkpoint. @@ -37,14 +37,14 @@ CHECKPOINT The CHECKPOINT command forces an immediate checkpoint when the command is issued, without waiting for a regular checkpoint scheduled by the system (controlled by the settings in - ). + ). 
CHECKPOINT is not intended for use during normal operation. If executed during recovery, the CHECKPOINT command - will force a restartpoint (see ) + will force a restartpoint (see ) rather than writing a new checkpoint. diff --git a/doc/src/sgml/ref/close.sgml b/doc/src/sgml/ref/close.sgml index 7ecc0cc463..e464df1965 100644 --- a/doc/src/sgml/ref/close.sgml +++ b/doc/src/sgml/ref/close.sgml @@ -84,7 +84,7 @@ CLOSE { name | ALL } PostgreSQL does not have an explicit OPEN cursor statement; a cursor is considered open when it is declared. Use the - + statement to declare a cursor. @@ -124,9 +124,9 @@ CLOSE liahona; See Also - - - + + + diff --git a/doc/src/sgml/ref/cluster.sgml b/doc/src/sgml/ref/cluster.sgml index 1210b5dffb..4da60d8d56 100644 --- a/doc/src/sgml/ref/cluster.sgml +++ b/doc/src/sgml/ref/cluster.sgml @@ -57,7 +57,7 @@ CLUSTER [VERBOSE] CLUSTER table_name reclusters the table using the same index as before. You can also use the CLUSTER or SET WITHOUT CLUSTER - forms of to set the index to be used for + forms of to set the index to be used for future cluster operations, or to clear any previous setting. @@ -148,18 +148,18 @@ CLUSTER [VERBOSE] as double the table size, plus the index sizes. This method is often faster than the index scan method, but if the disk space requirement is intolerable, you can disable this choice by temporarily setting to off. + linkend="guc-enable-sort"/> to off. - It is advisable to set to + It is advisable to set to a reasonably large value (but not more than the amount of RAM you can dedicate to the CLUSTER operation) before clustering. Because the planner records statistics about the ordering of - tables, it is advisable to run + tables, it is advisable to run on the newly clustered table. Otherwise, the planner might make poor choices of query plans. @@ -221,7 +221,7 @@ CLUSTER index_name ON See Also - + diff --git a/doc/src/sgml/ref/clusterdb.sgml b/doc/src/sgml/ref/clusterdb.sgml index d2d4b52f48..ed343dd7da 100644 --- a/doc/src/sgml/ref/clusterdb.sgml +++ b/doc/src/sgml/ref/clusterdb.sgml @@ -60,7 +60,7 @@ PostgreSQL documentation clusterdb is a wrapper around the SQL - command . + command . There is no effective difference between clustering databases via this utility and via other methods for accessing the server. @@ -279,7 +279,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). @@ -289,8 +289,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment @@ -325,7 +325,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/comment.sgml b/doc/src/sgml/ref/comment.sgml index d705792a45..7d66c1a34c 100644 --- a/doc/src/sgml/ref/comment.sgml +++ b/doc/src/sgml/ref/comment.sgml @@ -105,7 +105,7 @@ COMMENT ON the same built-in functions that psql uses, namely obj_description, col_description, and shobj_description - (see ). + (see ). diff --git a/doc/src/sgml/ref/commit.sgml b/doc/src/sgml/ref/commit.sgml index e41d6ff3cf..b2e8d5d180 100644 --- a/doc/src/sgml/ref/commit.sgml +++ b/doc/src/sgml/ref/commit.sgml @@ -55,7 +55,7 @@ COMMIT [ WORK | TRANSACTION ] Notes - Use to + Use to abort a transaction. 
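[Editor's note: the usual COMMIT/ROLLBACK pattern the notes above refer to, sketched with an invented table.]

BEGIN;
UPDATE accounts SET balance = balance - 100.00 WHERE name = 'alice';
COMMIT;   -- make the update permanent and visible to other sessions
-- Issuing ROLLBACK (or its historical alias ABORT) instead of COMMIT
-- would have discarded the update.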
@@ -89,8 +89,8 @@ COMMIT; See Also - - + + diff --git a/doc/src/sgml/ref/commit_prepared.sgml b/doc/src/sgml/ref/commit_prepared.sgml index c200a3e573..d938b65bbe 100644 --- a/doc/src/sgml/ref/commit_prepared.sgml +++ b/doc/src/sgml/ref/commit_prepared.sgml @@ -99,8 +99,8 @@ COMMIT PREPARED 'foobar'; See Also - - + + diff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml index eb91ad971d..af2a0e91b9 100644 --- a/doc/src/sgml/ref/copy.sgml +++ b/doc/src/sgml/ref/copy.sgml @@ -112,9 +112,9 @@ COPY { table_name [ ( query - A , , - , or - command whose results are to be + A , , + , or + command whose results are to be copied. Note that parentheses are required around the query. diff --git a/doc/src/sgml/ref/create_access_method.sgml b/doc/src/sgml/ref/create_access_method.sgml index 1bb1a79bd2..851c5e63be 100644 --- a/doc/src/sgml/ref/create_access_method.sgml +++ b/doc/src/sgml/ref/create_access_method.sgml @@ -78,7 +78,7 @@ CREATE ACCESS METHOD name for INDEX access methods, it must be index_am_handler. The C-level API that the handler function must implement varies depending on the type of access method. - The index access method API is described in . + The index access method API is described in . @@ -109,9 +109,9 @@ CREATE ACCESS METHOD heptree TYPE INDEX HANDLER heptree_handler; See Also - - - + + + diff --git a/doc/src/sgml/ref/create_aggregate.sgml b/doc/src/sgml/ref/create_aggregate.sgml index 4a8cee8057..a4aaae876e 100644 --- a/doc/src/sgml/ref/create_aggregate.sgml +++ b/doc/src/sgml/ref/create_aggregate.sgml @@ -91,7 +91,7 @@ CREATE AGGREGATE name ( CREATE AGGREGATE defines a new aggregate function. Some basic and commonly-used aggregate functions are included with the distribution; they are documented in . If one defines new types or needs + linkend="functions-aggregate"/>. If one defines new types or needs an aggregate function not already provided, then CREATE AGGREGATE can be used to provide the desired features. @@ -110,7 +110,7 @@ CREATE AGGREGATE name ( the name and input data type(s) of every ordinary function in the same schema. This behavior is identical to overloading of ordinary function names - (see ). + (see ). @@ -199,7 +199,7 @@ CREATE AGGREGATE name ( An aggregate can optionally support moving-aggregate mode, - as described in . This requires + as described in . This requires specifying the MSFUNC, MINVFUNC, and MSTYPE parameters, and optionally the MSPACE, MFINALFUNC, @@ -228,7 +228,7 @@ CREATE AGGREGATE name ( An aggregate can optionally support partial aggregation, - as described in . + as described in . This requires specifying the COMBINEFUNC parameter. If the state_data_type is internal, it's usually also appropriate to provide the @@ -379,7 +379,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The planner uses this value to estimate the memory required for a grouped aggregate query. The planner will consider using hash aggregation for such a query only if the hash table is estimated to fit - in ; therefore, large values of this + in ; therefore, large values of this parameter discourage use of hash aggregation. @@ -426,7 +426,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; that does not modify its arguments. READ_ONLY indicates it does not; the other two values indicate that it may change the transition state value. See below for more detail. The + endterm="sql-createaggregate-notes-title"/> below for more detail. The default is READ_ONLY, except for ordered-set aggregates, for which the default is READ_WRITE. 
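[Editor's note: as an anchor for the many CREATE AGGREGATE parameters above, here is the classic average aggregate in the style of the extensibility chapter's examples. float8_accum and float8_avg are built-in support functions; the name my_avg is chosen here to avoid the built-in avg.]

CREATE AGGREGATE my_avg (float8)
(
    sfunc     = float8_accum,   -- state transition function
    stype     = float8[],       -- transition state data type
    finalfunc = float8_avg,     -- computes the result from the final state
    initcond  = '{0,0,0}'       -- initial state: count, sum, sum of squares
);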
@@ -623,7 +623,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; The meanings of PARALLEL SAFE, PARALLEL RESTRICTED, and PARALLEL UNSAFE are the same as - in . An aggregate will not be + in . An aggregate will not be considered for parallelization if it is marked PARALLEL UNSAFE (which is the default!) or PARALLEL RESTRICTED. Note that the parallel-safety markings of the aggregate's support @@ -773,7 +773,7 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; Examples - See . + See . @@ -791,8 +791,8 @@ SELECT col FROM tab ORDER BY col USING sortop LIMIT 1; See Also - - + + diff --git a/doc/src/sgml/ref/create_cast.sgml b/doc/src/sgml/ref/create_cast.sgml index cd4565e336..84317047c2 100644 --- a/doc/src/sgml/ref/create_cast.sgml +++ b/doc/src/sgml/ref/create_cast.sgml @@ -153,7 +153,7 @@ SELECT CAST ( 2 AS numeric ) + 4.0; ambiguity that cannot be avoided as above. The parser has a fallback heuristic based on type categories and preferred types that can help to provide desired behavior in such cases. See - for + for more information. @@ -301,7 +301,7 @@ SELECT CAST ( 2 AS numeric ) + 4.0; Notes - Use to remove user-defined casts. + Use to remove user-defined casts. @@ -412,9 +412,9 @@ CREATE CAST (bigint AS int4) WITH FUNCTION int4(bigint) AS ASSIGNMENT; See Also - , - , - + , + , + diff --git a/doc/src/sgml/ref/create_collation.sgml b/doc/src/sgml/ref/create_collation.sgml index cc76b04027..5bc9af5499 100644 --- a/doc/src/sgml/ref/create_collation.sgml +++ b/doc/src/sgml/ref/create_collation.sgml @@ -138,7 +138,7 @@ CREATE COLLATION [ IF NOT EXISTS ] name FROM - See also for how to handle + See also for how to handle collation version mismatches. @@ -167,13 +167,13 @@ CREATE COLLATION [ IF NOT EXISTS ] name FROM - See for more information on how to create collations. + See for more information on how to create collations. When using the libc collation provider, the locale must be applicable to the current database encoding. - See for the precise rules. + See for the precise rules. @@ -223,8 +223,8 @@ CREATE COLLATION german FROM "de_DE"; See Also - - + + diff --git a/doc/src/sgml/ref/create_conversion.sgml b/doc/src/sgml/ref/create_conversion.sgml index 44475eb30e..4ddbcfacef 100644 --- a/doc/src/sgml/ref/create_conversion.sgml +++ b/doc/src/sgml/ref/create_conversion.sgml @@ -161,9 +161,9 @@ CREATE CONVERSION myconv FOR 'UTF8' TO 'LATIN1' FROM myfunc; See Also - - - + + + diff --git a/doc/src/sgml/ref/create_database.sgml b/doc/src/sgml/ref/create_database.sgml index f63f1f92ac..b2c9e241c2 100644 --- a/doc/src/sgml/ref/create_database.sgml +++ b/doc/src/sgml/ref/create_database.sgml @@ -45,7 +45,7 @@ CREATE DATABASE name To create a database, you must be a superuser or have the special CREATEDB privilege. - See . + See . @@ -106,7 +106,7 @@ CREATE DATABASE name to use the default encoding (namely, the encoding of the template database). The character sets supported by the PostgreSQL server are described in - . See below for + . See below for additional restrictions. @@ -143,7 +143,7 @@ CREATE DATABASE name template database's tablespace. This tablespace will be the default tablespace used for objects created in this database. See - + for more information. @@ -203,17 +203,17 @@ CREATE DATABASE name - Use to remove a database. + Use to remove a database. - The program is a + The program is a wrapper program around this command, provided for convenience. 
Database-level configuration parameters (set via ) are not copied from the template + linkend="sql-alterdatabase"/>) are not copied from the template database. @@ -226,7 +226,7 @@ CREATE DATABASE name DATABASE will fail if any other connection exists when it starts; otherwise, new connections to the template database are locked out until CREATE DATABASE completes. - See for more information. + See for more information. @@ -328,8 +328,8 @@ CREATE DATABASE music2 See Also - - + + diff --git a/doc/src/sgml/ref/create_domain.sgml b/doc/src/sgml/ref/create_domain.sgml index d38914e288..49d5304330 100644 --- a/doc/src/sgml/ref/create_domain.sgml +++ b/doc/src/sgml/ref/create_domain.sgml @@ -255,8 +255,8 @@ CREATE TABLE us_snail_addy ( See Also - - + + diff --git a/doc/src/sgml/ref/create_event_trigger.sgml b/doc/src/sgml/ref/create_event_trigger.sgml index 42cd065612..396d82118e 100644 --- a/doc/src/sgml/ref/create_event_trigger.sgml +++ b/doc/src/sgml/ref/create_event_trigger.sgml @@ -36,7 +36,7 @@ CREATE EVENT TRIGGER name Whenever the designated event occurs and the WHEN condition associated with the trigger, if any, is satisfied, the trigger function will be executed. For a general introduction to event triggers, see - . The user who creates an event trigger + . The user who creates an event trigger becomes its owner. @@ -60,7 +60,7 @@ CREATE EVENT TRIGGER name The name of the event that triggers a call to the given function. - See for more information + See for more information on event names. @@ -113,7 +113,7 @@ CREATE EVENT TRIGGER name Event triggers are disabled in single-user mode (see ). If an erroneous event trigger disables the + linkend="app-postgres"/>). If an erroneous event trigger disables the database so much that you can't even drop the trigger, restart in single-user mode and you'll be able to do that. @@ -154,9 +154,9 @@ CREATE EVENT TRIGGER abort_ddl ON ddl_command_start See Also - - - + + + diff --git a/doc/src/sgml/ref/create_extension.sgml b/doc/src/sgml/ref/create_extension.sgml index 3e0f849f5b..36837f927d 100644 --- a/doc/src/sgml/ref/create_extension.sgml +++ b/doc/src/sgml/ref/create_extension.sgml @@ -195,7 +195,7 @@ CREATE EXTENSION [ IF NOT EXISTS ] extension_name For information about writing new extensions, see - . + . @@ -234,8 +234,8 @@ CREATE EXTENSION hstore SCHEMA public FROM unpackaged; See Also - - + + diff --git a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml index d9a1c18735..0fcba18a34 100644 --- a/doc/src/sgml/ref/create_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/create_foreign_data_wrapper.sgml @@ -172,11 +172,11 @@ CREATE FOREIGN DATA WRAPPER mywrapper See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml index 212c62ae1b..37a45b26db 100644 --- a/doc/src/sgml/ref/create_foreign_table.sgml +++ b/doc/src/sgml/ref/create_foreign_table.sgml @@ -131,7 +131,7 @@ CHECK ( expression ) [ NO INHERIT ] The data type of the column. This can include array specifiers. For more information on the data types supported by PostgreSQL, refer to . + linkend="datatype"/>. @@ -155,7 +155,7 @@ CHECK ( expression ) [ NO INHERIT ] tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables. See the similar form of - for more details. + for more details. 
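[Editor's note: a sketch of the CREATE FOREIGN TABLE pieces just described, assuming a postgres_fdw server named film_server has already been defined with CREATE SERVER; the options shown are postgres_fdw's, and other wrappers take different ones.]

CREATE FOREIGN TABLE films_remote (
    code  char(5) NOT NULL,
    title varchar(40) NOT NULL
)
SERVER film_server
OPTIONS (schema_name 'public', table_name 'films');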
@@ -252,7 +252,7 @@ CHECK ( expression ) [ NO INHERIT ] The name of an existing foreign server to use for the foreign table. For details on defining a server, see . + linkend="sql-createserver"/>. @@ -361,11 +361,11 @@ CREATE FOREIGN TABLE measurement_y2016m07 See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_function.sgml b/doc/src/sgml/ref/create_function.sgml index 970dc13359..75331165fe 100644 --- a/doc/src/sgml/ref/create_function.sgml +++ b/doc/src/sgml/ref/create_function.sgml @@ -141,7 +141,7 @@ CREATE [ OR REPLACE ] FUNCTION name of an input argument is just extra documentation, so far as the function itself is concerned; but you can use input argument names when calling a function to improve readability (see ). In any case, the name + linkend="sql-syntax-calling-funcs"/>). In any case, the name of an output argument is significant, because it defines the column name in the result row type. (If you omit the name for an output argument, the system will choose a default column name.) @@ -269,7 +269,7 @@ CREATE [ OR REPLACE ] FUNCTION Lists which transforms a call to the function should apply. Transforms convert between SQL types and language-specific data types; - see . Procedural language + see . Procedural language implementations usually have hardcoded knowledge of the built-in types, so those don't need to be listed here. If a procedural language implementation does not know how to handle a type and no transform is @@ -338,7 +338,7 @@ CREATE [ OR REPLACE ] FUNCTION - For additional details see . + For additional details see . @@ -363,7 +363,7 @@ CREATE [ OR REPLACE ] FUNCTION In addition, functions which do not take arguments or which are not passed any arguments from the security barrier view or table do not have to be marked as leakproof to be executed before security conditions. See - and . + and . This option can only be set by the superuser. @@ -455,7 +455,7 @@ CREATE [ OR REPLACE ] FUNCTION A positive number giving the estimated execution cost for the function, - in units of . If the function + in units of . If the function returns a set, this is the cost per returned row. If the cost is not specified, 1 unit is assumed for C-language and internal functions, and 100 units for functions in all other languages. Larger values @@ -504,8 +504,8 @@ CREATE [ OR REPLACE ] FUNCTION - See and - + See and + for more information about allowed parameter names and values. @@ -523,7 +523,7 @@ CREATE [ OR REPLACE ] FUNCTION It is often helpful to use dollar quoting (see ) to write the function definition + linkend="sql-syntax-dollar-quoting"/>) to write the function definition string, rather than the normal single quote syntax. Without dollar quoting, any single quotes or backslashes in the function definition must be escaped by doubling them. @@ -543,7 +543,7 @@ CREATE [ OR REPLACE ] FUNCTION the SQL function. The string obj_file is the name of the shared library file containing the compiled C function, and is interpreted - as for the command. The string + as for the command. The string link_symbol is the function's link symbol, that is, the name of the function in the C language source code. If the link symbol is omitted, it is assumed @@ -598,7 +598,7 @@ CREATE [ OR REPLACE ] FUNCTION - Refer to for further information on writing + Refer to for further information on writing functions. @@ -681,7 +681,7 @@ CREATE FUNCTION foo(int, int default 42) ... Here are some trivial examples to help you get started. For more - information and examples, see . 
+ information and examples, see . CREATE FUNCTION add(integer, integer) RETURNS integer AS 'select $1 + $2;' @@ -750,7 +750,7 @@ SELECT * FROM dup(42); Because a SECURITY DEFINER function is executed with the privileges of the user that owns it, care is needed to ensure that the function cannot be misused. For security, - should be set to exclude any schemas + should be set to exclude any schemas writable by untrusted users. This prevents malicious users from creating objects (e.g., tables, functions, and operators) that mask objects intended to be used by the function. @@ -795,7 +795,7 @@ $$ LANGUAGE plpgsql Another point to keep in mind is that by default, execute privilege is granted to PUBLIC for newly created functions - (see for more + (see for more information). Frequently you will wish to restrict use of a security definer function to only some users. To do that, you must revoke the default PUBLIC privileges and then grant execute @@ -843,11 +843,11 @@ COMMIT; See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_group.sgml b/doc/src/sgml/ref/create_group.sgml index 0382349404..1b8e76e326 100644 --- a/doc/src/sgml/ref/create_group.sgml +++ b/doc/src/sgml/ref/create_group.sgml @@ -46,7 +46,7 @@ CREATE GROUP name [ [ WITH ] CREATE GROUP is now an alias for - . + . @@ -63,7 +63,7 @@ CREATE GROUP name [ [ WITH ] See Also - + diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index 92c0090dfd..025537575b 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -72,7 +72,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] WHERE with UNIQUE to enforce uniqueness over a subset of a - table. See for more discussion. + table. See for more discussion. @@ -121,7 +121,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] . + endterm="sql-createindex-concurrently-title"/>. @@ -259,7 +259,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] The name of an index-method-specific storage parameter. See - + for details. @@ -270,8 +270,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] The tablespace in which to create the index. If not specified, - is consulted, or - for indexes on temporary + is consulted, or + for indexes on temporary tables. @@ -331,10 +331,10 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] Determines whether the buffering build technique described in - is used to build the index. With + is used to build the index. With OFF it is disabled, with ON it is enabled, and with AUTO it is initially disabled, but turned on - on-the-fly once the index size reaches . The default is AUTO. + on-the-fly once the index size reaches . The default is AUTO. @@ -350,10 +350,10 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] This setting controls usage of the fast update technique described in - . It is a Boolean parameter: + . It is a Boolean parameter: ON enables fast update, OFF disables it. (Alternative spellings of ON and OFF are - allowed as described in .) The + allowed as described in .) The default is ON. @@ -374,7 +374,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] gin_pending_list_limit - Custom parameter. + Custom parameter. This value is specified in kilobytes. @@ -391,7 +391,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] Defines the number of table blocks that make up one block range for - each entry of a BRIN index (see + each entry of a BRIN index (see for more details). 
The default is 128. @@ -450,7 +450,7 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] ) predating the second + that have a snapshot (see ) predating the second scan to terminate. Then finally the index can be marked ready for use, and the CREATE INDEX command terminates. Even then, however, the index may not be immediately usable for queries: @@ -515,7 +515,7 @@ Indexes: Notes - See for information about when indexes can + See for information about when indexes can be used, when they are not used, and in which particular situations they can be useful. @@ -541,8 +541,8 @@ Indexes: type either by absolute value or by real part. We could do this by defining two operator classes for the data type and then selecting the proper class when making an index. More information about - operator classes is in and in . + operator classes is in and in . @@ -562,14 +562,14 @@ Indexes: For most index methods, the speed of creating an index is - dependent on the setting of . + dependent on the setting of . Larger values will reduce the time needed for index creation, so long as you don't make it larger than the amount of memory really available, which would drive the machine into swapping. - Use + Use to remove an index. @@ -675,8 +675,8 @@ CREATE INDEX CONCURRENTLY sales_quantity_index ON sales_table (quantity); See Also - - + + diff --git a/doc/src/sgml/ref/create_language.sgml b/doc/src/sgml/ref/create_language.sgml index 92dae40ecc..6bb69cf0ef 100644 --- a/doc/src/sgml/ref/create_language.sgml +++ b/doc/src/sgml/ref/create_language.sgml @@ -41,7 +41,7 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE + therefore be installed with not CREATE LANGUAGE. Direct use of CREATE LANGUAGE should now be confined to extension installation scripts. If you have a bare @@ -55,7 +55,7 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE + functions written in the language. Refer to for more information about language handlers. @@ -178,7 +178,7 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE inline_handler is the name of a previously registered function that will be called to execute an anonymous code block - ( command) + ( command) in this language. If no inline_handler function is specified, the language does not support anonymous code @@ -230,12 +230,12 @@ CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE to drop procedural languages. + Use to drop procedural languages. The system catalog pg_language (see ) records information about the + linkend="catalog-pg-language"/>) records information about the currently installed languages. Also, the psql command \dL lists the installed languages. @@ -313,11 +313,11 @@ CREATE LANGUAGE plsample See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_materialized_view.sgml b/doc/src/sgml/ref/create_materialized_view.sgml index 8dd138f816..eed4273c4b 100644 --- a/doc/src/sgml/ref/create_materialized_view.sgml +++ b/doc/src/sgml/ref/create_materialized_view.sgml @@ -92,11 +92,11 @@ CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] table_name This clause specifies optional storage parameters for the new materialized view; see for more + endterm="sql-createtable-storage-parameters-title"/> for more information. All parameters supported for CREATE TABLE are also supported for CREATE MATERIALIZED VIEW with the exception of OIDS. - See for more information. + See for more information. 
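Putting the CREATE MATERIALIZED VIEW clauses above together, a hedged sketch (the relation and tablespace names are invented):

    CREATE MATERIALIZED VIEW order_summary
        WITH (fillfactor = 70, autovacuum_enabled = false)  -- CREATE TABLE storage parameters
        TABLESPACE fastspace                                -- optional tablespace
        AS SELECT customer_id, sum(amount) AS total
             FROM orders
            GROUP BY customer_id
        WITH NO DATA;                                       -- leave unpopulated until the first REFRESH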
@@ -107,7 +107,7 @@ CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] table_name The tablespace_name is the name of the tablespace in which the new materialized view is to be created. - If not specified, is consulted. + If not specified, is consulted. @@ -116,8 +116,8 @@ CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] table_name query - A , TABLE, - or command. This query will run within a + A , TABLE, + or command. This query will run within a security-restricted operation; in particular, calls to functions that themselves create temporary tables will fail. @@ -152,11 +152,11 @@ CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] table_name See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_opclass.sgml b/doc/src/sgml/ref/create_opclass.sgml index 882100583e..0714aeca7c 100644 --- a/doc/src/sgml/ref/create_opclass.sgml +++ b/doc/src/sgml/ref/create_opclass.sgml @@ -77,7 +77,7 @@ CREATE OPERATOR CLASS name [ DEFAUL - Refer to for further information. + Refer to for further information. @@ -283,7 +283,7 @@ CREATE OPERATOR CLASS name [ DEFAUL The following example command defines a GiST index operator class for the data type _int4 (array of int4). See the - module for the complete example. + module for the complete example. @@ -319,10 +319,10 @@ CREATE OPERATOR CLASS gist__int_ops See Also - - - - + + + + diff --git a/doc/src/sgml/ref/create_operator.sgml b/doc/src/sgml/ref/create_operator.sgml index 774616e244..35f2f46985 100644 --- a/doc/src/sgml/ref/create_operator.sgml +++ b/doc/src/sgml/ref/create_operator.sgml @@ -101,7 +101,7 @@ CREATE OPERATOR name ( The other clauses specify optional operator optimization clauses. - Their meaning is detailed in . + Their meaning is detailed in . @@ -228,13 +228,13 @@ COMMUTATOR = OPERATOR(myschema.===) , Notes - Refer to for further information. + Refer to for further information. It is not possible to specify an operator's lexical precedence in CREATE OPERATOR, because the parser's precedence behavior - is hard-wired. See for precedence details. + is hard-wired. See for precedence details. @@ -248,8 +248,8 @@ COMMUTATOR = OPERATOR(myschema.===) , - Use to delete user-defined operators - from a database. Use to modify operators in a + Use to delete user-defined operators + from a database. Use to modify operators in a database. @@ -288,9 +288,9 @@ CREATE OPERATOR === ( See Also - - - + + + diff --git a/doc/src/sgml/ref/create_opfamily.sgml b/doc/src/sgml/ref/create_opfamily.sgml index 0953e238ce..ba612c2f2b 100644 --- a/doc/src/sgml/ref/create_opfamily.sgml +++ b/doc/src/sgml/ref/create_opfamily.sgml @@ -64,7 +64,7 @@ CREATE OPERATOR FAMILY name USING < - Refer to for further information. + Refer to for further information. @@ -108,11 +108,11 @@ CREATE OPERATOR FAMILY name USING < See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_policy.sgml b/doc/src/sgml/ref/create_policy.sgml index 64d3a6baa6..c30506abc2 100644 --- a/doc/src/sgml/ref/create_policy.sgml +++ b/doc/src/sgml/ref/create_policy.sgml @@ -515,7 +515,7 @@ AND Additional discussion and practical examples can be found - in . + in . @@ -533,9 +533,9 @@ AND See Also - - - + + + diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml index 55771d1d31..bfe12d5f41 100644 --- a/doc/src/sgml/ref/create_publication.sgml +++ b/doc/src/sgml/ref/create_publication.sgml @@ -41,7 +41,7 @@ CREATE PUBLICATION name A publication is essentially a group of tables whose data changes are intended to be replicated through logical replication. 
See - for details about how + for details about how publications fit into the logical replication setup. @@ -212,8 +212,8 @@ CREATE PUBLICATION insert_only FOR TABLE mydata See Also - - + + diff --git a/doc/src/sgml/ref/create_role.sgml b/doc/src/sgml/ref/create_role.sgml index 7c050a3add..9c3b6978af 100644 --- a/doc/src/sgml/ref/create_role.sgml +++ b/doc/src/sgml/ref/create_role.sgml @@ -53,8 +53,8 @@ CREATE ROLE name [ [ WITH ] user, a group, or both depending on how it is used. Refer to - and for information about managing + and for information about managing users and authentication. You must have CREATEROLE privilege or be a database superuser to use this command. @@ -157,7 +157,7 @@ CREATE ROLE name [ [ WITH ] NOLOGIN is the default, except when CREATE ROLE is invoked through its alternative spelling - . + . @@ -237,7 +237,7 @@ CREATE ROLE name [ [ WITH ] ENCRYPTED keyword has no effect, but is accepted for backwards compatibility. The method of encryption is determined - by the configuration parameter . + by the configuration parameter . If the presented password string is already in MD5-encrypted or SCRAM-encrypted format, then it is stored as-is regardless of password_encryption (since the system cannot decrypt @@ -329,8 +329,8 @@ CREATE ROLE name [ [ WITH ] Notes - Use to - change the attributes of a role, and + Use to + change the attributes of a role, and to remove a role. All the attributes specified by CREATE ROLE can be modified by later ALTER ROLE commands. @@ -339,8 +339,8 @@ CREATE ROLE name [ [ WITH ] The preferred way to add and remove members of roles that are being used as groups is to use - and - . + and + . @@ -358,7 +358,7 @@ CREATE ROLE name [ [ WITH ] CREATEDB privilege does not immediately grant the ability to create databases, even if INHERIT is set; it would be necessary to become that role via - before + before creating a database. @@ -385,7 +385,7 @@ CREATE ROLE name [ [ WITH ] PostgreSQL includes a program that has + linkend="app-createuser"/> that has the same functionality as CREATE ROLE (in fact, it calls this command) but can be run from the command shell. @@ -402,8 +402,8 @@ CREATE ROLE name [ [ WITH ] , however, transmits - the password encrypted. Also, + linkend="app-createuser"/>, however, transmits + the password encrypted. Also, contains a command \password that can be used to safely change the password later. @@ -480,12 +480,12 @@ CREATE ROLE name [ WITH ADMIN See Also - - - - - - + + + + + + diff --git a/doc/src/sgml/ref/create_rule.sgml b/doc/src/sgml/ref/create_rule.sgml index c6403c0530..dbf4c93784 100644 --- a/doc/src/sgml/ref/create_rule.sgml +++ b/doc/src/sgml/ref/create_rule.sgml @@ -55,7 +55,7 @@ CREATE [ OR REPLACE ] RULE name AS transformation happens before the execution of the command starts. If you actually want an operation that fires independently for each physical row, you probably want to use a trigger, not a rule. - More information about the rules system is in . + More information about the rules system is in . @@ -101,7 +101,7 @@ CREATE [ OR REPLACE ] RULE name AS A view that is simple enough to be automatically updatable (see ) does not require a user-created rule in + linkend="sql-createview"/>) does not require a user-created rule in order to be updatable. While you can create an explicit rule anyway, the automatic update transformation will generally outperform an explicit rule. 
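To make the contrast with explicit rules concrete, a minimal sketch of an automatically updatable view (table and column names are invented); no rule or INSTEAD OF trigger is required:

    CREATE VIEW active_accounts AS
        SELECT id, name, balance
          FROM accounts
         WHERE active;                 -- single base table, no aggregates: auto-updatable

    UPDATE active_accounts
       SET balance = balance - 100.00
     WHERE id = 42;                    -- rewritten into an UPDATE on accounts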
@@ -109,7 +109,7 @@ CREATE [ OR REPLACE ] RULE name AS Another alternative worth considering is to use INSTEAD OF - triggers (see ) in place of rules. + triggers (see ) in place of rules. @@ -297,8 +297,8 @@ UPDATE mytable SET name = 'foo' WHERE id = 42; See Also - - + + diff --git a/doc/src/sgml/ref/create_schema.sgml b/doc/src/sgml/ref/create_schema.sgml index ed856e21e4..ffbe1ba3bc 100644 --- a/doc/src/sgml/ref/create_schema.sgml +++ b/doc/src/sgml/ref/create_schema.sgml @@ -219,8 +219,8 @@ CREATE VIEW hollywood.winners AS See Also - - + + diff --git a/doc/src/sgml/ref/create_sequence.sgml b/doc/src/sgml/ref/create_sequence.sgml index 0cea9a49ce..3e0d339c85 100644 --- a/doc/src/sgml/ref/create_sequence.sgml +++ b/doc/src/sgml/ref/create_sequence.sgml @@ -56,7 +56,7 @@ CREATE [ TEMPORARY | TEMP ] SEQUENCE [ IF NOT EXISTS ] . + . @@ -383,8 +383,8 @@ END; See Also - - + + diff --git a/doc/src/sgml/ref/create_server.sgml b/doc/src/sgml/ref/create_server.sgml index e13636a268..eb4ca890c2 100644 --- a/doc/src/sgml/ref/create_server.sgml +++ b/doc/src/sgml/ref/create_server.sgml @@ -122,9 +122,9 @@ CREATE SERVER [IF NOT EXISTS] server_nameNotes - When using the module, + When using the module, a foreign server's name can be used - as an argument of the + as an argument of the function to indicate the connection parameters. It is necessary to have the USAGE privilege on the foreign server to be able to use it in this way. @@ -140,7 +140,7 @@ CREATE SERVER [IF NOT EXISTS] server_name CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'foo', dbname 'foodb', port '5432'); - See for more details. + See for more details. @@ -156,11 +156,11 @@ CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'foo', db See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_statistics.sgml b/doc/src/sgml/ref/create_statistics.sgml index bb99d8e785..382a1eb64b 100644 --- a/doc/src/sgml/ref/create_statistics.sgml +++ b/doc/src/sgml/ref/create_statistics.sgml @@ -86,8 +86,8 @@ CREATE STATISTICS [ IF NOT EXISTS ] statistics_na dependency statistics. If this clause is omitted, all supported statistics kinds are included in the statistics object. - For more information, see - and . + For more information, see + and . @@ -178,8 +178,8 @@ EXPLAIN ANALYZE SELECT * FROM t1 WHERE (a = 1) AND (b = 0); See Also - - + + diff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml index 2a1514a5ac..49e4841188 100644 --- a/doc/src/sgml/ref/create_subscription.sgml +++ b/doc/src/sgml/ref/create_subscription.sgml @@ -50,8 +50,8 @@ CREATE SUBSCRIPTION subscription_name Additional info about subscriptions and logical replication as a whole - can is available at and - . + can is available at and + . @@ -74,7 +74,7 @@ CREATE SUBSCRIPTION subscription_name The connection string to the publisher. For details - see . + see . @@ -152,7 +152,7 @@ CREATE SUBSCRIPTION subscription_name The value of this parameter overrides the - setting. The default + setting. The default value is off. @@ -217,7 +217,7 @@ CREATE SUBSCRIPTION subscription_nameNotes - See for details on + See for details on how to configure access control between the subscription and the publication instance. 
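A hedged end-to-end sketch of the publication and subscription statements described above (the host, database, role, and table names are invented):

    -- On the publisher:
    CREATE PUBLICATION mypub FOR TABLE users, departments;

    -- On the subscriber, with a libpq connection string to the publisher:
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=198.51.100.7 port=5432 dbname=foo user=repuser'
        PUBLICATION mypub;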
@@ -281,10 +281,10 @@ CREATE SUBSCRIPTION mysub See Also - - - - + + + + diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index 3bc155a775..a0c9a6d257 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -187,7 +187,7 @@ WITH ( MODULUS numeric_literal, REM This presently makes no difference in PostgreSQL and is deprecated; see . + endterm="sql-createtable-compatibility-title"/>. @@ -198,7 +198,7 @@ WITH ( MODULUS numeric_literal, REM If specified, the table is created as an unlogged table. Data written to unlogged tables is not written to the write-ahead log (see ), which makes them considerably faster than ordinary + linkend="wal"/>), which makes them considerably faster than ordinary tables. However, they are not crash-safe: an unlogged table is automatically truncated after a crash or unclean shutdown. The contents of an unlogged table are also not replicated to standby servers. @@ -296,7 +296,7 @@ WITH ( MODULUS numeric_literal, REM are valid values of the corresponding partition key columns for this partition, whereas those in the TO list are not. Note that this statement must be understood according to the - rules of row-wise comparison (). + rules of row-wise comparison (). For example, given PARTITION BY RANGE (x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2, @@ -438,7 +438,7 @@ WITH ( MODULUS numeric_literal, REM The data type of the column. This can include array specifiers. For more information on the data types supported by PostgreSQL, refer to . + linkend="datatype"/>. @@ -532,7 +532,7 @@ WITH ( MODULUS numeric_literal, REM specified explicitly, the default operator class of the appropriate type will be used; if no default operator class exists, an error will be raised. When hash partitioning is used, the operator class used - must implement support function 2 (see + must implement support function 2 (see for details). @@ -607,7 +607,7 @@ WITH ( MODULUS numeric_literal, REM default behavior is to exclude STORAGE settings, resulting in the copied columns in the new table having type-specific default settings. For more on STORAGE settings, see - . + . Comments for the copied columns, constraints, and indexes @@ -749,7 +749,7 @@ WITH ( MODULUS numeric_literal, REM only accepted if the INSERT statement specifies OVERRIDING SYSTEM VALUE. If BY DEFAULT is specified, then the user-specified value takes - precedence. See for details. (In + precedence. See for details. (In the COPY command, user-specified values are always used regardless of this setting.) @@ -757,7 +757,7 @@ WITH ( MODULUS numeric_literal, REM The optional sequence_options clause can be used to override the options of the sequence. - See for details. + See for details. @@ -832,7 +832,7 @@ WITH ( MODULUS numeric_literal, REM constraints that are more general than simple equality. For example, you can specify a constraint that no two rows in the table contain overlapping circles - (see ) by using the + (see ) by using the && operator. @@ -840,18 +840,18 @@ WITH ( MODULUS numeric_literal, REM Exclusion constraints are implemented using an index, so each specified operator must be associated with an appropriate operator class - (see ) for the index access + (see ) for the index access method index_method. The operators are required to be commutative. Each exclude_element can optionally specify an operator class and/or ordering options; these are described fully under - . + . 
The access method must support amgettuple (see ); at present this means GIN + linkend="indexam"/>); at present this means GIN cannot be used. Although it's allowed, there is little point in using B-tree or hash indexes with an exclusion constraint, because this does nothing that an ordinary unique constraint doesn't do better. @@ -1001,7 +1001,7 @@ WITH ( MODULUS numeric_literal, REM constraint that is not deferrable will be checked immediately after every command. Checking of constraints that are deferrable can be postponed until the end of the transaction - (using the command). + (using the command). NOT DEFERRABLE is the default. Currently, only UNIQUE, PRIMARY KEY, EXCLUDE, and @@ -1025,7 +1025,7 @@ WITH ( MODULUS numeric_literal, REM statement. This is the default. If the constraint is INITIALLY DEFERRED, it is checked only at the end of the transaction. The constraint check time can be - altered with the command. + altered with the command. @@ -1036,14 +1036,14 @@ WITH ( MODULUS numeric_literal, REM This clause specifies optional storage parameters for a table or index; see for more + endterm="sql-createtable-storage-parameters-title"/> for more information. The WITH clause for a table can also include OIDS=TRUE (or just OIDS) to specify that rows of the new table should have OIDs (object identifiers) assigned to them, or OIDS=FALSE to specify that the rows should not have OIDs. If OIDS is not specified, the default setting depends upon - the configuration parameter. + the configuration parameter. (If the new table inherits from any tables that have OIDs, then OIDS=TRUE is forced even if the command says OIDS=FALSE.) @@ -1063,7 +1063,7 @@ WITH ( MODULUS numeric_literal, REM To remove OIDs from a table after it has been created, use . + linkend="sql-altertable"/>. @@ -1106,7 +1106,7 @@ WITH ( MODULUS numeric_literal, REM All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic is done + linkend="sql-truncate"/> is done at each commit. @@ -1132,8 +1132,8 @@ WITH ( MODULUS numeric_literal, REM The tablespace_name is the name of the tablespace in which the new table is to be created. If not specified, - is consulted, or - if the table is temporary. + is consulted, or + if the table is temporary. @@ -1146,8 +1146,8 @@ WITH ( MODULUS numeric_literal, REM associated with a UNIQUE, PRIMARY KEY, or EXCLUDE constraint will be created. If not specified, - is consulted, or - if the table is temporary. + is consulted, or + if the table is temporary. @@ -1166,13 +1166,13 @@ WITH ( MODULUS numeric_literal, REM for tables, and for indexes associated with a UNIQUE, PRIMARY KEY, or EXCLUDE constraint. Storage parameters for - indexes are documented in . + indexes are documented in . The storage parameters currently available for tables are listed below. For many of these parameters, as shown, there is an additional parameter with the same name prefixed with toast., which controls the behavior of the table's secondary TOAST table, if any - (see for more information about TOAST). + (see for more information about TOAST). If a table parameter value is set and the equivalent toast. parameter is not, the TOAST table will use the table's parameter value. @@ -1229,7 +1229,7 @@ WITH ( MODULUS numeric_literal, REM scan of this table. If not set, the system will determine a value based on the relation size. The actual number of workers chosen by the planner may be less, for example due to - the setting of . + the setting of . 
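For example, a sketch that pins the planner's parallel-worker estimate for one table rather than letting it be derived from relation size (the table name is invented):

    CREATE TABLE measurements (
        ts      timestamptz,
        reading numeric
    ) WITH (parallel_workers = 4);        -- override the size-based default

    ALTER TABLE measurements
        SET (parallel_workers = 2);       -- can be adjusted later, or RESET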
@@ -1241,12 +1241,12 @@ WITH ( MODULUS numeric_literal, REM Enables or disables the autovacuum daemon for a particular table. If true, the autovacuum daemon will perform automatic VACUUM and/or ANALYZE operations on this table following the rules - discussed in . + discussed in . If false, this table will not be autovacuumed, except to prevent - transaction ID wraparound. See for + transaction ID wraparound. See for more about wraparound prevention. Note that the autovacuum daemon does not run at all (except to prevent - transaction ID wraparound) if the + transaction ID wraparound) if the parameter is false; setting individual tables' storage parameters does not override that. Therefore there is seldom much point in explicitly setting this storage parameter to true, only @@ -1259,7 +1259,7 @@ WITH ( MODULUS numeric_literal, REM autovacuum_vacuum_threshold, toast.autovacuum_vacuum_threshold (integer) - Per-table value for + Per-table value for parameter. @@ -1269,7 +1269,7 @@ WITH ( MODULUS numeric_literal, REM autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor (float4) - Per-table value for + Per-table value for parameter. @@ -1279,7 +1279,7 @@ WITH ( MODULUS numeric_literal, REM autovacuum_analyze_threshold (integer) - Per-table value for + Per-table value for parameter. @@ -1289,7 +1289,7 @@ WITH ( MODULUS numeric_literal, REM autovacuum_analyze_scale_factor (float4) - Per-table value for + Per-table value for parameter. @@ -1299,7 +1299,7 @@ WITH ( MODULUS numeric_literal, REM autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay (integer) - Per-table value for + Per-table value for parameter. @@ -1309,7 +1309,7 @@ WITH ( MODULUS numeric_literal, REM autovacuum_vacuum_cost_limit, toast.autovacuum_vacuum_cost_limit (integer) - Per-table value for + Per-table value for parameter. @@ -1319,11 +1319,11 @@ WITH ( MODULUS numeric_literal, REM autovacuum_freeze_min_age, toast.autovacuum_freeze_min_age (integer) - Per-table value for + Per-table value for parameter. Note that autovacuum will ignore per-table autovacuum_freeze_min_age parameters that are larger than half the - system-wide setting. + system-wide setting. @@ -1332,7 +1332,7 @@ WITH ( MODULUS numeric_literal, REM autovacuum_freeze_max_age, toast.autovacuum_freeze_max_age (integer) - Per-table value for + Per-table value for parameter. Note that autovacuum will ignore per-table autovacuum_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller). @@ -1344,7 +1344,7 @@ WITH ( MODULUS numeric_literal, REM autovacuum_freeze_table_age, toast.autovacuum_freeze_table_age (integer) - Per-table value for + Per-table value for parameter. @@ -1354,11 +1354,11 @@ WITH ( MODULUS numeric_literal, REM autovacuum_multixact_freeze_min_age, toast.autovacuum_multixact_freeze_min_age (integer) - Per-table value for + Per-table value for parameter. Note that autovacuum will ignore per-table autovacuum_multixact_freeze_min_age parameters that are larger than half the - system-wide + system-wide setting. @@ -1369,7 +1369,7 @@ WITH ( MODULUS numeric_literal, REM Per-table value - for parameter. + for parameter. Note that autovacuum will ignore per-table autovacuum_multixact_freeze_max_age parameters that are larger than the system-wide setting (it can only be set @@ -1383,7 +1383,7 @@ WITH ( MODULUS numeric_literal, REM Per-table value - for parameter. + for parameter. 
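A sketch of per-table autovacuum tuning using the toast. variants mentioned above (the table name and thresholds are invented, not recommendations):

    ALTER TABLE bulk_log SET (
        autovacuum_vacuum_scale_factor       = 0.02,  -- vacuum after 2% of rows change
        autovacuum_vacuum_threshold          = 1000,
        toast.autovacuum_vacuum_scale_factor = 0.05   -- separate setting for the TOAST table
    );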
@@ -1392,7 +1392,7 @@ WITH ( MODULUS numeric_literal, REM log_autovacuum_min_duration, toast.log_autovacuum_min_duration (integer) - Per-table value for + Per-table value for parameter. @@ -1404,7 +1404,7 @@ WITH ( MODULUS numeric_literal, REM Declare the table as an additional catalog table for purposes of logical replication. See - for details. + for details. This parameter cannot be set for TOAST tables. @@ -1445,7 +1445,7 @@ WITH ( MODULUS numeric_literal, REM index for each unique constraint and primary key constraint to enforce uniqueness. Thus, it is not necessary to create an index explicitly for primary key columns. (See for more information.) + linkend="sql-createindex"/> for more information.) @@ -2006,11 +2006,11 @@ CREATE TABLE cities_partdef See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml index 89ca82baa5..527138e787 100644 --- a/doc/src/sgml/ref/create_table_as.sgml +++ b/doc/src/sgml/ref/create_table_as.sgml @@ -63,7 +63,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI Ignored for compatibility. Use of these keywords is deprecated; - refer to for details. + refer to for details. @@ -75,7 +75,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI If specified, the table is created as a temporary table. - Refer to for details. + Refer to for details. @@ -85,7 +85,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI If specified, the table is created as an unlogged table. - Refer to for details. + Refer to for details. @@ -95,7 +95,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI Do not throw an error if a relation with the same name already exists. - A notice is issued in this case. Refer to + A notice is issued in this case. Refer to for details. @@ -126,13 +126,13 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI This clause specifies optional storage parameters for the new table; see for more + endterm="sql-createtable-storage-parameters-title"/> for more information. The WITH clause can also include OIDS=TRUE (or just OIDS) to specify that rows of the new table should have OIDs (object identifiers) assigned to them, or OIDS=FALSE to specify that the rows should not have OIDs. - See for more information. + See for more information. @@ -175,7 +175,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic is done + linkend="sql-truncate"/> is done at each commit. @@ -201,8 +201,8 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI The tablespace_name is the name of the tablespace in which the new table is to be created. If not specified, - is consulted, or - if the table is temporary. + is consulted, or + if the table is temporary. @@ -211,9 +211,9 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI query - A , TABLE, or - command, or an command that runs a + A , TABLE, or + command, or an command that runs a prepared SELECT, TABLE, or VALUES query. @@ -239,7 +239,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI This command is functionally similar to , but it is + linkend="sql-selectinto"/>, but it is preferred since it is less likely to be confused with other uses of the SELECT INTO syntax. 
Furthermore, CREATE TABLE AS offers a superset of the functionality offered @@ -250,7 +250,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI The CREATE TABLE AS command allows the user to explicitly specify whether OIDs should be included. If the presence of OIDs is not explicitly specified, - the configuration variable is + the configuration variable is used. @@ -317,7 +317,7 @@ CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS PostgreSQL handles temporary tables in a way rather different from the standard; see - + for details. @@ -343,12 +343,12 @@ CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS See Also - - - - - - + + + + + + diff --git a/doc/src/sgml/ref/create_tablespace.sgml b/doc/src/sgml/ref/create_tablespace.sgml index ed9635ef40..18fa5f0ebf 100644 --- a/doc/src/sgml/ref/create_tablespace.sgml +++ b/doc/src/sgml/ref/create_tablespace.sgml @@ -54,7 +54,7 @@ CREATE TABLESPACE tablespace_name A tablespace cannot be used independently of the cluster in which it - is defined; see . + is defined; see . @@ -109,9 +109,9 @@ CREATE TABLESPACE tablespace_name Setting either value for a particular tablespace will override the planner's usual estimate of the cost of reading pages from tables in that tablespace, as established by the configuration parameters of the - same name (see , - , - ). This may be useful if + same name (see , + , + ). This may be useful if one tablespace is located on a disk which is faster or slower than the remainder of the I/O subsystem. @@ -164,11 +164,11 @@ CREATE TABLESPACE indexspace OWNER genevieve LOCATION '/data/indexes'; See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/create_transform.sgml b/doc/src/sgml/ref/create_transform.sgml index dfb83a76da..17cb1932cf 100644 --- a/doc/src/sgml/ref/create_transform.sgml +++ b/doc/src/sgml/ref/create_transform.sgml @@ -144,7 +144,7 @@ CREATE [ OR REPLACE ] TRANSFORM FOR type_name LANGUAG Notes - Use to remove transforms. + Use to remove transforms. @@ -201,10 +201,10 @@ CREATE TRANSFORM FOR hstore LANGUAGE plpythonu ( See Also - , - , - , - + , + , + , + diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index 9e97c364ef..ad7f9efb55 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -170,7 +170,7 @@ CREATE [ CONSTRAINT ] TRIGGER name When the CONSTRAINT option is specified, this command creates a constraint trigger. This is the same as a regular trigger except that the timing of the trigger firing can be adjusted using - . + . Constraint triggers must be AFTER ROW triggers on plain tables (not foreign tables). They can be fired either at the end of the statement causing the triggering @@ -208,7 +208,7 @@ CREATE [ CONSTRAINT ] TRIGGER name - Refer to for more information about triggers. + Refer to for more information about triggers. @@ -302,7 +302,7 @@ UPDATE OF column_name1 [, column_name2 The default timing of the trigger. - See the documentation for details of + See the documentation for details of these constraint options. This can only be specified for constraint triggers. @@ -432,7 +432,7 @@ UPDATE OF column_name1 [, column_name2 - Use to remove a trigger. + Use to remove a trigger. @@ -605,7 +605,7 @@ CREATE TRIGGER paired_items_update - contains a complete example of a trigger + contains a complete example of a trigger function written in C. 
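The same kind of trigger can also be written in PL/pgSQL rather than C; a minimal sketch (table and column names are invented):

    CREATE FUNCTION stamp_row() RETURNS trigger AS $$
    BEGIN
        NEW.updated_at := now();   -- modify the row before it is stored
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER stamp
        BEFORE INSERT OR UPDATE ON documents
        FOR EACH ROW EXECUTE PROCEDURE stamp_row();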
@@ -707,10 +707,10 @@ CREATE TRIGGER paired_items_update See Also - - - - + + + + diff --git a/doc/src/sgml/ref/create_tsconfig.sgml b/doc/src/sgml/ref/create_tsconfig.sgml index 49634b4362..390d3cfe51 100644 --- a/doc/src/sgml/ref/create_tsconfig.sgml +++ b/doc/src/sgml/ref/create_tsconfig.sgml @@ -57,7 +57,7 @@ CREATE TEXT SEARCH CONFIGURATION name - Refer to for further information. + Refer to for further information. @@ -119,8 +119,8 @@ CREATE TEXT SEARCH CONFIGURATION nameSee Also - - + + diff --git a/doc/src/sgml/ref/create_tsdictionary.sgml b/doc/src/sgml/ref/create_tsdictionary.sgml index 20a01765b7..0608104821 100644 --- a/doc/src/sgml/ref/create_tsdictionary.sgml +++ b/doc/src/sgml/ref/create_tsdictionary.sgml @@ -50,7 +50,7 @@ CREATE TEXT SEARCH DICTIONARY name - Refer to for further information. + Refer to for further information. @@ -134,8 +134,8 @@ CREATE TEXT SEARCH DICTIONARY my_russian ( See Also - - + + diff --git a/doc/src/sgml/ref/create_tsparser.sgml b/doc/src/sgml/ref/create_tsparser.sgml index 8e5e1b0b48..088d92323a 100644 --- a/doc/src/sgml/ref/create_tsparser.sgml +++ b/doc/src/sgml/ref/create_tsparser.sgml @@ -55,7 +55,7 @@ CREATE TEXT SEARCH PARSER name ( - Refer to for further information. + Refer to for further information. @@ -146,8 +146,8 @@ CREATE TEXT SEARCH PARSER name ( See Also - - + + diff --git a/doc/src/sgml/ref/create_tstemplate.sgml b/doc/src/sgml/ref/create_tstemplate.sgml index 0340e1ab1f..5b82d5564b 100644 --- a/doc/src/sgml/ref/create_tstemplate.sgml +++ b/doc/src/sgml/ref/create_tstemplate.sgml @@ -56,7 +56,7 @@ CREATE TEXT SEARCH TEMPLATE name ( - Refer to for further information. + Refer to for further information. @@ -119,8 +119,8 @@ CREATE TEXT SEARCH TEMPLATE name ( See Also - - + + diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml index 1b409ad22f..fa9b520b24 100644 --- a/doc/src/sgml/ref/create_type.sgml +++ b/doc/src/sgml/ref/create_type.sgml @@ -116,7 +116,7 @@ CREATE TYPE name The second form of CREATE TYPE creates an enumerated - (enum) type, as described in . + (enum) type, as described in . Enum types take a list of one or more quoted labels, each of which must be less than NAMEDATALEN bytes long (64 bytes in a standard PostgreSQL build). @@ -128,7 +128,7 @@ CREATE TYPE name The third form of CREATE TYPE creates a new - range type, as described in . + range type, as described in . @@ -148,7 +148,7 @@ CREATE TYPE name function must take one argument of the range type being defined, and return a value of the same type. This is used to convert range values to a canonical form, when applicable. See for more information. Creating a + linkend="rangetypes-defining"/> for more information. Creating a canonical function is a bit tricky, since it must be defined before the range type can be declared. To do this, you must first create a shell type, which is a @@ -167,7 +167,7 @@ CREATE TYPE name and return a double precision value representing the difference between the two given values. While this is optional, providing it allows much greater efficiency of GiST indexes on columns of - the range type. See for more + the range type. See for more information. @@ -330,7 +330,7 @@ CREATE TYPE name by setting typlen to -1.) The internal representation of all variable-length types must start with a 4-byte integer giving the total length of this value of the type. (Note that the length field is often - encoded, as described in ; it's unwise + encoded, as described in ; it's unwise to access it directly.) 
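A hedged sketch of the range-type form described above, supplying a subtype_diff function so that GiST indexes can measure distances (the helper function is written out here rather than assuming a built-in):

    CREATE FUNCTION float8_diff(float8, float8) RETURNS float8
        AS 'SELECT $1 - $2' LANGUAGE sql IMMUTABLE;

    CREATE TYPE floatrange AS RANGE (
        subtype      = float8,
        subtype_diff = float8_diff
    );

    SELECT '[1.5,3.5)'::floatrange @> 2.0::float8;  -- element containment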
@@ -373,7 +373,7 @@ CREATE TYPE name All storage values other than plain imply that the functions of the data type can handle values that have been toasted, as described - in and . + in and . The specific other value given merely determines the default TOAST storage strategy for columns of a toastable data type; users can pick other strategies for individual columns using ALTER TABLE @@ -404,7 +404,7 @@ CREATE TYPE name category. The parser will prefer casting to preferred types (but only from other types within the same category) when this rule is helpful in resolving overloaded functions or operators. For more details see . For types that have no implicit casts to or from any + linkend="typeconv"/>. For types that have no implicit casts to or from any other types, it is sufficient to leave these settings at the defaults. However, for a group of related types that have implicit casts, it is often helpful to mark them all as belonging to a category and select one or two @@ -709,7 +709,7 @@ CREATE TYPE name The category code (a single ASCII character) for this type. The default is 'U' for user-defined type. Other standard category codes can be found in - . You may also choose + . You may also choose other ASCII characters in order to create custom categories. @@ -924,7 +924,7 @@ CREATE TABLE big_objs ( More examples, including suitable input and output functions, are - in . + in . @@ -951,10 +951,10 @@ CREATE TABLE big_objs ( See Also - - - - + + + + diff --git a/doc/src/sgml/ref/create_user.sgml b/doc/src/sgml/ref/create_user.sgml index 13dfd64c6d..a51dc50c97 100644 --- a/doc/src/sgml/ref/create_user.sgml +++ b/doc/src/sgml/ref/create_user.sgml @@ -49,7 +49,7 @@ CREATE USER name [ [ WITH ] CREATE USER is now an alias for - . + . The only difference is that when the command is spelled CREATE USER, LOGIN is assumed by default, whereas NOLOGIN is assumed when @@ -72,7 +72,7 @@ CREATE USER name [ [ WITH ] See Also - + diff --git a/doc/src/sgml/ref/create_user_mapping.sgml b/doc/src/sgml/ref/create_user_mapping.sgml index d18cc91a00..c2f5278f62 100644 --- a/doc/src/sgml/ref/create_user_mapping.sgml +++ b/doc/src/sgml/ref/create_user_mapping.sgml @@ -122,10 +122,10 @@ CREATE USER MAPPING FOR bob SERVER foo OPTIONS (user 'bob', password 'secret'); See Also - - - - + + + + diff --git a/doc/src/sgml/ref/create_view.sgml b/doc/src/sgml/ref/create_view.sgml index e52a4b85a7..b325c1be50 100644 --- a/doc/src/sgml/ref/create_view.sgml +++ b/doc/src/sgml/ref/create_view.sgml @@ -133,7 +133,7 @@ CREATE VIEW [ schema . ] view_namecascaded, and is equivalent to specifying WITH [ CASCADED | LOCAL ] CHECK OPTION (see below). This option can be changed on existing views using . + linkend="sql-alterview"/>. @@ -143,7 +143,7 @@ CREATE VIEW [ schema . ] view_name This should be used if the view is intended to provide row-level - security. See for full details. + security. See for full details. @@ -156,8 +156,8 @@ CREATE VIEW [ schema . ] view_namequery - A or - command + A or + command which will provide the columns and rows of the view. @@ -241,7 +241,7 @@ CREATE VIEW [ schema . ] view_nameNotes - Use the + Use the statement to drop views. @@ -264,7 +264,7 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; Access to tables referenced in the view is determined by permissions of the view owner. In some cases, this can be used to provide secure but restricted access to the underlying tables. However, not all views are - secure against tampering; see for + secure against tampering; see for details. 
Functions called in the view are treated the same as if they had been called directly from the query using the view. Therefore the user of a view must have permissions to call all functions used by the view. @@ -360,7 +360,7 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; security_barrier property then all the view's WHERE conditions (and any conditions using operators which are marked as LEAKPROOF) will always be evaluated before any conditions that a user of the view has - added. See for full details. Note that, + added. See for full details. Note that, due to this, rows which are not ultimately returned (because they do not pass the user's WHERE conditions) may still end up being locked. EXPLAIN can be used to see which conditions are @@ -375,8 +375,8 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; creating INSTEAD OF triggers on the view, which must convert attempted inserts, etc. on the view into appropriate actions on other tables. For more information see . Another possibility is to create rules - (see ), but in practice triggers are + linkend="sql-createtrigger"/>. Another possibility is to create rules + (see ), but in practice triggers are easier to understand and use correctly. @@ -386,7 +386,7 @@ CREATE VIEW vista AS SELECT text 'Hello World' AS hello; view. In addition the view's owner must have the relevant privileges on the underlying base relations, but the user performing the update does not need any permissions on the underlying base relations (see - ). + ). @@ -490,9 +490,9 @@ UNION ALL See Also - - - + + + diff --git a/doc/src/sgml/ref/createdb.sgml b/doc/src/sgml/ref/createdb.sgml index 265d14e149..2658efeb1a 100644 --- a/doc/src/sgml/ref/createdb.sgml +++ b/doc/src/sgml/ref/createdb.sgml @@ -48,7 +48,7 @@ PostgreSQL documentation createdb is a wrapper around the - SQL command . + SQL command . There is no effective difference between creating databases via this utility and via other methods for accessing the server. @@ -115,7 +115,7 @@ PostgreSQL documentation Specifies the character encoding scheme to be used in this database. The character sets supported by the PostgreSQL server are described in - . + . @@ -199,7 +199,7 @@ PostgreSQL documentation The options , , , , and correspond to options of the underlying - SQL command ; see there for more information + SQL command ; see there for more information about them. @@ -327,7 +327,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). @@ -337,8 +337,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment @@ -376,8 +376,8 @@ PostgreSQL documentation See Also - - + + diff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml index f3c50c4113..22ee99f2cc 100644 --- a/doc/src/sgml/ref/createuser.sgml +++ b/doc/src/sgml/ref/createuser.sgml @@ -49,7 +49,7 @@ PostgreSQL documentation createuser is a wrapper around the - SQL command . + SQL command . There is no effective difference between creating users via this utility and via other methods for accessing the server. @@ -274,7 +274,7 @@ PostgreSQL documentation The new user will have the REPLICATION privilege, which is described more fully in the documentation for . + linkend="sql-createrole"/>. 
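The SQL that this utility issues for the replication option looks roughly like the following (the role name and password are invented):

    CREATE ROLE replicator WITH LOGIN REPLICATION PASSWORD 'secret';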
@@ -285,7 +285,7 @@ PostgreSQL documentation The new user will not have the REPLICATION privilege, which is described more fully in the documentation for . + linkend="sql-createrole"/>. @@ -405,7 +405,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). @@ -415,8 +415,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment @@ -479,8 +479,8 @@ PostgreSQL documentation See Also - - + + diff --git a/doc/src/sgml/ref/deallocate.sgml b/doc/src/sgml/ref/deallocate.sgml index 4e23c6e091..3875e5069e 100644 --- a/doc/src/sgml/ref/deallocate.sgml +++ b/doc/src/sgml/ref/deallocate.sgml @@ -41,7 +41,7 @@ DEALLOCATE [ PREPARE ] { name | ALL For more information on prepared statements, see . + linkend="sql-prepare"/>. @@ -91,8 +91,8 @@ DEALLOCATE [ PREPARE ] { name | ALL See Also - - + + diff --git a/doc/src/sgml/ref/declare.sgml b/doc/src/sgml/ref/declare.sgml index a70e2466e5..648c2950f4 100644 --- a/doc/src/sgml/ref/declare.sgml +++ b/doc/src/sgml/ref/declare.sgml @@ -39,7 +39,7 @@ DECLARE name [ BINARY ] [ INSENSITI can be used to retrieve a small number of rows at a time out of a larger query. After the cursor is created, rows are fetched from it using - . + . @@ -47,7 +47,7 @@ DECLARE name [ BINARY ] [ INSENSITI This page describes usage of cursors at the SQL command level. If you are trying to use cursors inside a PL/pgSQL function, the rules are different — - see . + see . @@ -100,7 +100,7 @@ DECLARE name [ BINARY ] [ INSENSITI used to retrieve rows in a nonsequential fashion. The default is to allow scrolling in some cases; this is not the same as specifying SCROLL. See for details. + endterm="sql-declare-notes-title"/> for details. @@ -124,8 +124,8 @@ DECLARE name [ BINARY ] [ INSENSITI query - A or - command + A or + command which will provide the rows to be returned by the cursor. @@ -183,9 +183,9 @@ DECLARE name [ BINARY ] [ INSENSITI PostgreSQL reports an error if such a command is used outside a transaction block. Use - and - - (or ) + and + + (or ) to define a transaction block. @@ -230,7 +230,7 @@ DECLARE name [ BINARY ] [ INSENSITI Scrollable and WITH HOLD cursors may give unexpected results if they invoke any volatile functions (see ). When a previously fetched row is + linkend="xfunc-volatility"/>). When a previously fetched row is re-fetched, the functions might be re-executed, perhaps leading to results different from the first time. One workaround for such cases is to declare the cursor WITH HOLD and commit the @@ -244,7 +244,7 @@ DECLARE name [ BINARY ] [ INSENSITI If the cursor's query includes FOR UPDATE or FOR SHARE, then returned rows are locked at the time they are first fetched, in the same way as for a regular - command with + command with these options. In addition, the returned rows will be the most up-to-date versions; therefore these options provide the equivalent of what the SQL standard @@ -309,7 +309,7 @@ DECLARE name [ BINARY ] [ INSENSITI DECLARE liahona CURSOR FOR SELECT * FROM films; - See for more + See for more examples of cursor usage. 
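Continuing the liahona example above inside a transaction block, rows can then be pulled from the cursor in batches:

    BEGIN;
    DECLARE liahona CURSOR FOR SELECT * FROM films;
    FETCH FORWARD 5 FROM liahona;   -- first five rows
    FETCH FORWARD 5 FROM liahona;   -- next five
    CLOSE liahona;
    COMMIT;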
@@ -341,9 +341,9 @@ DECLARE liahona CURSOR FOR SELECT * FROM films; See Also - - - + + + diff --git a/doc/src/sgml/ref/delete.sgml b/doc/src/sgml/ref/delete.sgml index d7869efd9a..df8cea48cf 100644 --- a/doc/src/sgml/ref/delete.sgml +++ b/doc/src/sgml/ref/delete.sgml @@ -41,7 +41,7 @@ DELETE FROM [ ONLY ] table_name [ * - provides a + provides a faster mechanism to remove all rows from a table. @@ -82,7 +82,7 @@ DELETE FROM [ ONLY ] table_name [ * The WITH clause allows you to specify one or more subqueries that can be referenced by name in the DELETE - query. See and + query. See and for details. @@ -123,7 +123,7 @@ DELETE FROM [ ONLY ] table_name [ * A list of table expressions, allowing columns from other tables to appear in the WHERE condition. This is similar to the list of tables that can be specified in the of a + linkend="sql-from" endterm="sql-from-title"/> of a SELECT statement; for example, an alias for the table name can be specified. Do not repeat the target table in the using_list, @@ -153,7 +153,7 @@ DELETE FROM [ ONLY ] table_name [ * query on the DELETE's target table. Note that WHERE CURRENT OF cannot be specified together with a Boolean condition. See - + for more information about using cursors with WHERE CURRENT OF. @@ -283,7 +283,7 @@ DELETE FROM tasks WHERE CURRENT OF c_tasks; See Also - + diff --git a/doc/src/sgml/ref/discard.sgml b/doc/src/sgml/ref/discard.sgml index 063342494f..6b909b7232 100644 --- a/doc/src/sgml/ref/discard.sgml +++ b/doc/src/sgml/ref/discard.sgml @@ -60,7 +60,7 @@ DISCARD { ALL | PLANS | SEQUENCES | TEMPORARY | TEMP } including currval()/lastval() information and any preallocated sequence values that have not yet been returned by nextval(). - (See for a description of + (See for a description of preallocated sequence values.) diff --git a/doc/src/sgml/ref/do.sgml b/doc/src/sgml/ref/do.sgml index c14dff0a28..061218b135 100644 --- a/doc/src/sgml/ref/do.sgml +++ b/doc/src/sgml/ref/do.sgml @@ -122,7 +122,7 @@ END$$; See Also - + diff --git a/doc/src/sgml/ref/drop_access_method.sgml b/doc/src/sgml/ref/drop_access_method.sgml index aa5d2505c7..a908a64b74 100644 --- a/doc/src/sgml/ref/drop_access_method.sgml +++ b/doc/src/sgml/ref/drop_access_method.sgml @@ -64,7 +64,7 @@ DROP ACCESS METHOD [ IF EXISTS ] name). + (see ). @@ -104,7 +104,7 @@ DROP ACCESS METHOD heptree; See Also - + diff --git a/doc/src/sgml/ref/drop_aggregate.sgml b/doc/src/sgml/ref/drop_aggregate.sgml index 1e5b3e8bd4..ba74f4f5eb 100644 --- a/doc/src/sgml/ref/drop_aggregate.sgml +++ b/doc/src/sgml/ref/drop_aggregate.sgml @@ -110,7 +110,7 @@ DROP AGGREGATE [ IF EXISTS ] name ( aggr Automatically drop objects that depend on the aggregate function (such as views using it), and in turn all objects that depend on those objects - (see ). + (see ). @@ -132,7 +132,7 @@ DROP AGGREGATE [ IF EXISTS ] name ( aggr Alternative syntaxes for referencing ordered-set aggregates - are described under . + are described under . 
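For the ordered-set case mentioned above, the direct and aggregated argument lists are written out explicitly; a sketch (the aggregate name is invented):

    DROP AGGREGATE myrank(VARIADIC "any" ORDER BY VARIADIC "any");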
@@ -176,8 +176,8 @@ DROP AGGREGATE myavg(integer), myavg(bigint); See Also - - + + diff --git a/doc/src/sgml/ref/drop_cast.sgml b/doc/src/sgml/ref/drop_cast.sgml index 6e0d741637..9f42867e12 100644 --- a/doc/src/sgml/ref/drop_cast.sgml +++ b/doc/src/sgml/ref/drop_cast.sgml @@ -107,7 +107,7 @@ DROP CAST (text AS int); See Also - + diff --git a/doc/src/sgml/ref/drop_collation.sgml b/doc/src/sgml/ref/drop_collation.sgml index 03df0d17b1..89b975aac3 100644 --- a/doc/src/sgml/ref/drop_collation.sgml +++ b/doc/src/sgml/ref/drop_collation.sgml @@ -62,7 +62,7 @@ DROP COLLATION [ IF EXISTS ] name [ CASCADE | RESTRIC Automatically drop objects that depend on the collation, and in turn all objects that depend on those objects - (see ). + (see ). @@ -103,8 +103,8 @@ DROP COLLATION german; See Also - - + + diff --git a/doc/src/sgml/ref/drop_conversion.sgml b/doc/src/sgml/ref/drop_conversion.sgml index d5cf18c3e9..131e3cbc0b 100644 --- a/doc/src/sgml/ref/drop_conversion.sgml +++ b/doc/src/sgml/ref/drop_conversion.sgml @@ -96,8 +96,8 @@ DROP CONVERSION myname; See Also - - + + diff --git a/doc/src/sgml/ref/drop_database.sgml b/doc/src/sgml/ref/drop_database.sgml index bbf3fd3776..3ac06c984a 100644 --- a/doc/src/sgml/ref/drop_database.sgml +++ b/doc/src/sgml/ref/drop_database.sgml @@ -78,7 +78,7 @@ DROP DATABASE [ IF EXISTS ] name This command cannot be executed while connected to the target database. Thus, it might be more convenient to use the program - instead, + instead, which is a wrapper around this command. @@ -95,7 +95,7 @@ DROP DATABASE [ IF EXISTS ] name See Also - + diff --git a/doc/src/sgml/ref/drop_domain.sgml b/doc/src/sgml/ref/drop_domain.sgml index 59379e8234..b18faf3917 100644 --- a/doc/src/sgml/ref/drop_domain.sgml +++ b/doc/src/sgml/ref/drop_domain.sgml @@ -64,7 +64,7 @@ DROP DOMAIN [ IF EXISTS ] name [, . Automatically drop objects that depend on the domain (such as table columns), and in turn all objects that depend on those objects - (see ). + (see ). @@ -106,8 +106,8 @@ DROP DOMAIN box; See Also - - + + diff --git a/doc/src/sgml/ref/drop_event_trigger.sgml b/doc/src/sgml/ref/drop_event_trigger.sgml index a773170fa6..137884cc83 100644 --- a/doc/src/sgml/ref/drop_event_trigger.sgml +++ b/doc/src/sgml/ref/drop_event_trigger.sgml @@ -65,7 +65,7 @@ DROP EVENT TRIGGER [ IF EXISTS ] name Automatically drop objects that depend on the trigger, and in turn all objects that depend on those objects - (see ). + (see ). @@ -107,8 +107,8 @@ DROP EVENT TRIGGER snitch; See Also - - + + diff --git a/doc/src/sgml/ref/drop_extension.sgml b/doc/src/sgml/ref/drop_extension.sgml index bb296df17f..5e507dec92 100644 --- a/doc/src/sgml/ref/drop_extension.sgml +++ b/doc/src/sgml/ref/drop_extension.sgml @@ -68,7 +68,7 @@ DROP EXTENSION [ IF EXISTS ] name [ Automatically drop objects that depend on the extension, and in turn all objects that depend on those objects - (see ). + (see ). @@ -115,8 +115,8 @@ DROP EXTENSION hstore; See Also - - + + diff --git a/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml b/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml index 8e8968ab1a..e53178b8c7 100644 --- a/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml +++ b/doc/src/sgml/ref/drop_foreign_data_wrapper.sgml @@ -65,7 +65,7 @@ DROP FOREIGN DATA WRAPPER [ IF EXISTS ] name). + (see ). 
@@ -106,8 +106,8 @@ DROP FOREIGN DATA WRAPPER dbi; See Also - - + + diff --git a/doc/src/sgml/ref/drop_foreign_table.sgml b/doc/src/sgml/ref/drop_foreign_table.sgml index b12de03e65..b29169e0d3 100644 --- a/doc/src/sgml/ref/drop_foreign_table.sgml +++ b/doc/src/sgml/ref/drop_foreign_table.sgml @@ -60,7 +60,7 @@ DROP FOREIGN TABLE [ IF EXISTS ] name Automatically drop objects that depend on the foreign table (such as views), and in turn all objects that depend on those objects - (see ). + (see ). @@ -104,8 +104,8 @@ DROP FOREIGN TABLE films, distributors; See Also - - + + diff --git a/doc/src/sgml/ref/drop_function.sgml b/doc/src/sgml/ref/drop_function.sgml index 05b405dda1..eda1a59c84 100644 --- a/doc/src/sgml/ref/drop_function.sgml +++ b/doc/src/sgml/ref/drop_function.sgml @@ -110,7 +110,7 @@ DROP FUNCTION [ IF EXISTS ] name [ Automatically drop objects that depend on the function (such as operators or triggers), and in turn all objects that depend on those objects - (see ). + (see ). @@ -183,8 +183,8 @@ DROP FUNCTION update_employee_salaries(); See Also - - + + diff --git a/doc/src/sgml/ref/drop_group.sgml b/doc/src/sgml/ref/drop_group.sgml index 7844db0f7d..47d4a72121 100644 --- a/doc/src/sgml/ref/drop_group.sgml +++ b/doc/src/sgml/ref/drop_group.sgml @@ -30,7 +30,7 @@ DROP GROUP [ IF EXISTS ] name [, .. DROP GROUP is now an alias for - . + . @@ -46,7 +46,7 @@ DROP GROUP [ IF EXISTS ] name [, .. See Also - + diff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml index fd235a0c27..2a8ca5bf68 100644 --- a/doc/src/sgml/ref/drop_index.sgml +++ b/doc/src/sgml/ref/drop_index.sgml @@ -86,7 +86,7 @@ DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] name Automatically drop objects that depend on the index, and in turn all objects that depend on those objects - (see ). + (see ). @@ -128,7 +128,7 @@ DROP INDEX title_idx; See Also - + diff --git a/doc/src/sgml/ref/drop_language.sgml b/doc/src/sgml/ref/drop_language.sgml index 350375baef..976f3a0ce1 100644 --- a/doc/src/sgml/ref/drop_language.sgml +++ b/doc/src/sgml/ref/drop_language.sgml @@ -38,7 +38,7 @@ DROP [ PROCEDURAL ] LANGUAGE [ IF EXISTS ] name As of PostgreSQL 9.1, most procedural languages have been made into extensions, and should - therefore be removed with + therefore be removed with not DROP LANGUAGE. @@ -76,7 +76,7 @@ DROP [ PROCEDURAL ] LANGUAGE [ IF EXISTS ] name). + (see ). @@ -118,8 +118,8 @@ DROP LANGUAGE plsample; See Also - - + + diff --git a/doc/src/sgml/ref/drop_materialized_view.sgml b/doc/src/sgml/ref/drop_materialized_view.sgml index b115aceb38..c8f3bc5b0d 100644 --- a/doc/src/sgml/ref/drop_materialized_view.sgml +++ b/doc/src/sgml/ref/drop_materialized_view.sgml @@ -66,7 +66,7 @@ DROP MATERIALIZED VIEW [ IF EXISTS ] name). + (see ). @@ -107,9 +107,9 @@ DROP MATERIALIZED VIEW order_summary; See Also - - - + + + diff --git a/doc/src/sgml/ref/drop_opclass.sgml b/doc/src/sgml/ref/drop_opclass.sgml index 53a40ff73e..4cb4cc35f7 100644 --- a/doc/src/sgml/ref/drop_opclass.sgml +++ b/doc/src/sgml/ref/drop_opclass.sgml @@ -80,7 +80,7 @@ DROP OPERATOR CLASS [ IF EXISTS ] name Automatically drop objects that depend on the operator class (such as indexes), and in turn all objects that depend on those objects - (see ). + (see ). 
@@ -140,9 +140,9 @@ DROP OPERATOR CLASS widget_ops USING btree;
  <title>See Also</title>
-   <member><xref linkend="sql-alteropclass"></member>
-   <member><xref linkend="sql-createopclass"></member>
-   <member><xref linkend="sql-dropopfamily"></member>
+   <member><xref linkend="sql-alteropclass"/></member>
+   <member><xref linkend="sql-createopclass"/></member>
+   <member><xref linkend="sql-dropopfamily"/></member>
diff --git a/doc/src/sgml/ref/drop_operator.sgml b/doc/src/sgml/ref/drop_operator.sgml
index b10bed09cc..2dff050ecf 100644
--- a/doc/src/sgml/ref/drop_operator.sgml
+++ b/doc/src/sgml/ref/drop_operator.sgml
@@ -85,7 +85,7 @@ DROP OPERATOR [ IF EXISTS ] name (
   Automatically drop objects that depend on the operator (such as views using it), and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -146,8 +146,8 @@ DROP OPERATOR ~ (none, bit), ! (bigint, none);
  <title>See Also</title>
-   <member><xref linkend="sql-createoperator"></member>
-   <member><xref linkend="sql-alteroperator"></member>
+   <member><xref linkend="sql-createoperator"/></member>
+   <member><xref linkend="sql-alteroperator"/></member>
diff --git a/doc/src/sgml/ref/drop_opfamily.sgml b/doc/src/sgml/ref/drop_opfamily.sgml
index eb92664d85..c1bcd0d1df 100644
--- a/doc/src/sgml/ref/drop_opfamily.sgml
+++ b/doc/src/sgml/ref/drop_opfamily.sgml
@@ -81,7 +81,7 @@ DROP OPERATOR FAMILY [ IF EXISTS ] name
   Automatically drop objects that depend on the operator family, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -127,11 +127,11 @@ DROP OPERATOR FAMILY float_ops USING btree;
  <title>See Also</title>
-   <member><xref linkend="sql-alteropfamily"></member>
-   <member><xref linkend="sql-createopfamily"></member>
-   <member><xref linkend="sql-alteropclass"></member>
-   <member><xref linkend="sql-createopclass"></member>
-   <member><xref linkend="sql-dropopclass"></member>
+   <member><xref linkend="sql-alteropfamily"/></member>
+   <member><xref linkend="sql-createopfamily"/></member>
+   <member><xref linkend="sql-alteropclass"/></member>
+   <member><xref linkend="sql-createopclass"/></member>
+   <member><xref linkend="sql-dropopclass"/></member>
diff --git a/doc/src/sgml/ref/drop_owned.sgml b/doc/src/sgml/ref/drop_owned.sgml
index 1eb054dee6..4c66da2b34 100644
--- a/doc/src/sgml/ref/drop_owned.sgml
+++ b/doc/src/sgml/ref/drop_owned.sgml
@@ -57,7 +57,7 @@ DROP OWNED BY { name | CURRENT_USER
   Automatically drop objects that depend on the affected objects, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -90,7 +90,7 @@ DROP OWNED BY { name | CURRENT_USER
-   The <xref linkend="sql-reassign-owned"> command is an alternative that
+   The <xref linkend="sql-reassign-owned"/> command is an alternative that
   reassigns the ownership of all the database objects owned by one or more roles. However, REASSIGN OWNED does not deal with privileges for other objects.
@@ -101,7 +101,7 @@ DROP OWNED BY { name | CURRENT_USER
-   See <xref linkend="role-removal"> for more discussion.
+   See <xref linkend="role-removal"/> for more discussion.
@@ -118,8 +118,8 @@ DROP OWNED BY { name | CURRENT_USER
  <title>See Also</title>
-   <member><xref linkend="sql-reassign-owned"></member>
-   <member><xref linkend="sql-droprole"></member>
+   <member><xref linkend="sql-reassign-owned"/></member>
+   <member><xref linkend="sql-droprole"/></member>
diff --git a/doc/src/sgml/ref/drop_policy.sgml b/doc/src/sgml/ref/drop_policy.sgml
index 2bc1e25f9c..9297ade113 100644
--- a/doc/src/sgml/ref/drop_policy.sgml
+++ b/doc/src/sgml/ref/drop_policy.sgml
@@ -111,8 +111,8 @@ DROP POLICY p1 ON my_table;
  <title>See Also</title>
-   <member><xref linkend="sql-createpolicy"></member>
-   <member><xref linkend="sql-alterpolicy"></member>
+   <member><xref linkend="sql-createpolicy"/></member>
+   <member><xref linkend="sql-alterpolicy"/></member>
diff --git a/doc/src/sgml/ref/drop_publication.sgml b/doc/src/sgml/ref/drop_publication.sgml
index 3195c040bb..5dcbb837d1 100644
--- a/doc/src/sgml/ref/drop_publication.sgml
+++ b/doc/src/sgml/ref/drop_publication.sgml
@@ -98,8 +98,8 @@ DROP PUBLICATION mypublication;
  <title>See Also</title>
-   <member><xref linkend="sql-createpublication"></member>
-   <member><xref linkend="sql-alterpublication"></member>
+   <member><xref linkend="sql-createpublication"/></member>
+   <member><xref linkend="sql-alterpublication"/></member>
diff --git a/doc/src/sgml/ref/drop_role.sgml b/doc/src/sgml/ref/drop_role.sgml
index 413d1870d4..13079f3e1f 100644
--- a/doc/src/sgml/ref/drop_role.sgml
+++ b/doc/src/sgml/ref/drop_role.sgml
@@ -40,8 +40,8 @@ DROP ROLE [ IF EXISTS ] name [, ...
   of the cluster; an error will be raised if so. Before dropping the role, you must drop all the objects it owns (or reassign their ownership) and revoke any privileges the role has been granted on other objects.
-   The <xref linkend="sql-reassign-owned"> and
-   <xref linkend="sql-drop-owned"> commands can be useful for this purpose; see
+   The <xref linkend="sql-reassign-owned"/> and
+   <xref linkend="sql-drop-owned"/> commands can be useful for this purpose; see
   for more discussion.
@@ -83,7 +83,7 @@ DROP ROLE [ IF EXISTS ] name [, ...
   PostgreSQL includes a program <xref
-   linkend="app-dropuser"> that has the
+   linkend="app-dropuser"/> that has the
   same functionality as this command (in fact, it calls this command) but can be run from the command shell.
@@ -113,9 +113,9 @@ DROP ROLE jonathan;
  <title>See Also</title>
-   <member><xref linkend="sql-createrole"></member>
-   <member><xref linkend="sql-alterrole"></member>
-   <member><xref linkend="sql-set-role"></member>
+   <member><xref linkend="sql-createrole"/></member>
+   <member><xref linkend="sql-alterrole"/></member>
+   <member><xref linkend="sql-set-role"/></member>
diff --git a/doc/src/sgml/ref/drop_rule.sgml b/doc/src/sgml/ref/drop_rule.sgml
index 6955016c27..cc5d00e4dc 100644
--- a/doc/src/sgml/ref/drop_rule.sgml
+++ b/doc/src/sgml/ref/drop_rule.sgml
@@ -73,7 +73,7 @@ DROP RULE [ IF EXISTS ] name ON
   Automatically drop objects that depend on the rule, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -115,8 +115,8 @@ DROP RULE newrule ON mytable;
  <title>See Also</title>
-   <member><xref linkend="sql-createrule"></member>
-   <member><xref linkend="sql-alterrule"></member>
+   <member><xref linkend="sql-createrule"/></member>
+   <member><xref linkend="sql-alterrule"/></member>
diff --git a/doc/src/sgml/ref/drop_schema.sgml b/doc/src/sgml/ref/drop_schema.sgml
index 2e608b2b20..4d5d9f55df 100644
--- a/doc/src/sgml/ref/drop_schema.sgml
+++ b/doc/src/sgml/ref/drop_schema.sgml
@@ -69,7 +69,7 @@ DROP SCHEMA [ IF EXISTS ] name [, .
   Automatically drop objects (tables, functions, etc.) that are contained in the schema, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -123,8 +123,8 @@ DROP SCHEMA mystuff CASCADE;
  <title>See Also</title>
-   <member><xref linkend="sql-alterschema"></member>
-   <member><xref linkend="sql-createschema"></member>
+   <member><xref linkend="sql-alterschema"/></member>
+   <member><xref linkend="sql-createschema"/></member>
diff --git a/doc/src/sgml/ref/drop_sequence.sgml b/doc/src/sgml/ref/drop_sequence.sgml
index be30f8f810..387c98edbc 100644
--- a/doc/src/sgml/ref/drop_sequence.sgml
+++ b/doc/src/sgml/ref/drop_sequence.sgml
@@ -63,7 +63,7 @@ DROP SEQUENCE [ IF EXISTS ] name [,
   Automatically drop objects that depend on the sequence, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -107,8 +107,8 @@ DROP SEQUENCE serial;
  <title>See Also</title>
-   <member><xref linkend="sql-altersequence"></member>
-   <member><xref linkend="sql-createsequence"></member>
+   <member><xref linkend="sql-altersequence"/></member>
+   <member><xref linkend="sql-createsequence"/></member>
diff --git a/doc/src/sgml/ref/drop_server.sgml b/doc/src/sgml/ref/drop_server.sgml
index fa941e8cd2..f83a661b3e 100644
--- a/doc/src/sgml/ref/drop_server.sgml
+++ b/doc/src/sgml/ref/drop_server.sgml
@@ -65,7 +65,7 @@ DROP SERVER [ IF EXISTS ] name [, .
   Automatically drop objects that depend on the server (such as user mappings), and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -106,8 +106,8 @@ DROP SERVER IF EXISTS foo;
  <title>See Also</title>
-   <member><xref linkend="sql-createserver"></member>
-   <member><xref linkend="sql-alterserver"></member>
+   <member><xref linkend="sql-createserver"/></member>
+   <member><xref linkend="sql-alterserver"/></member>
diff --git a/doc/src/sgml/ref/drop_statistics.sgml b/doc/src/sgml/ref/drop_statistics.sgml
index b34d070d50..f58c3d6d22 100644
--- a/doc/src/sgml/ref/drop_statistics.sgml
+++ b/doc/src/sgml/ref/drop_statistics.sgml
@@ -88,8 +88,8 @@ DROP STATISTICS IF EXISTS
  <title>See Also</title>
-   <member><xref linkend="sql-alterstatistics"></member>
-   <member><xref linkend="sql-createstatistics"></member>
+   <member><xref linkend="sql-alterstatistics"/></member>
+   <member><xref linkend="sql-createstatistics"/></member>
diff --git a/doc/src/sgml/ref/drop_subscription.sgml b/doc/src/sgml/ref/drop_subscription.sgml
index 5ab2f9eae8..adbdeafb4e 100644
--- a/doc/src/sgml/ref/drop_subscription.sgml
+++ b/doc/src/sgml/ref/drop_subscription.sgml
@@ -91,7 +91,7 @@ DROP SUBSCRIPTION [ IF EXISTS ] name
-   also <xref linkend="logical-replication-subscription-slot">.
+   also <xref linkend="logical-replication-subscription-slot"/>.
@@ -123,8 +123,8 @@ DROP SUBSCRIPTION mysub;
  <title>See Also</title>
-   <member><xref linkend="sql-createsubscription"></member>
-   <member><xref linkend="sql-altersubscription"></member>
+   <member><xref linkend="sql-createsubscription"/></member>
+   <member><xref linkend="sql-altersubscription"/></member>
diff --git a/doc/src/sgml/ref/drop_table.sgml b/doc/src/sgml/ref/drop_table.sgml
index b215369910..bf8996d198 100644
--- a/doc/src/sgml/ref/drop_table.sgml
+++ b/doc/src/sgml/ref/drop_table.sgml
@@ -32,8 +32,8 @@ DROP TABLE [ IF EXISTS ] name [, ..
   DROP TABLE removes tables from the database. Only the table owner, the schema owner, and superuser can drop a table. To empty a table of rows
-   without destroying the table, use <xref linkend="sql-delete">
-   or <xref linkend="sql-truncate">.
+   without destroying the table, use <xref linkend="sql-delete"/>
+   or <xref linkend="sql-truncate"/>.
@@ -77,7 +77,7 @@ DROP TABLE [ IF EXISTS ] name [, ..
   Automatically drop objects that depend on the table (such as views), and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -121,8 +121,8 @@ DROP TABLE films, distributors;
  <title>See Also</title>
-   <member><xref linkend="sql-altertable"></member>
-   <member><xref linkend="sql-createtable"></member>
+   <member><xref linkend="sql-altertable"/></member>
+   <member><xref linkend="sql-createtable"/></member>
diff --git a/doc/src/sgml/ref/drop_tablespace.sgml b/doc/src/sgml/ref/drop_tablespace.sgml
index b761a4d92e..047e4e0481 100644
--- a/doc/src/sgml/ref/drop_tablespace.sgml
+++ b/doc/src/sgml/ref/drop_tablespace.sgml
@@ -38,7 +38,7 @@ DROP TABLESPACE [ IF EXISTS ] name
   dropped.
   It is possible that objects in other databases might still reside in the tablespace even if no objects in the current database are using the tablespace. Also, if the tablespace is listed in the <xref
-   linkend="guc-temp-tablespaces"> setting of any active session, the
+   linkend="guc-temp-tablespaces"/> setting of any active session, the
   DROP might fail due to temporary files residing in the tablespace.
@@ -102,8 +102,8 @@ DROP TABLESPACE mystuff;
  <title>See Also</title>
-   <member><xref linkend="sql-createtablespace"></member>
-   <member><xref linkend="sql-altertablespace"></member>
+   <member><xref linkend="sql-createtablespace"/></member>
+   <member><xref linkend="sql-altertablespace"/></member>
diff --git a/doc/src/sgml/ref/drop_transform.sgml b/doc/src/sgml/ref/drop_transform.sgml
index c7de707fcc..582e782219 100644
--- a/doc/src/sgml/ref/drop_transform.sgml
+++ b/doc/src/sgml/ref/drop_transform.sgml
@@ -76,7 +76,7 @@ DROP TRANSFORM [ IF EXISTS ] FOR type_name LANGUAGE <
   Automatically drop objects that depend on the transform, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -110,7 +110,7 @@ DROP TRANSFORM FOR hstore LANGUAGE plpythonu;
   This form of DROP TRANSFORM is a PostgreSQL extension. See <xref
-   linkend="sql-createtransform"> for details.
+   linkend="sql-createtransform"/> for details.
@@ -118,7 +118,7 @@
  <title>See Also</title>
-   <member><xref linkend="sql-createtransform"></member>
+   <member><xref linkend="sql-createtransform"/></member>
diff --git a/doc/src/sgml/ref/drop_trigger.sgml b/doc/src/sgml/ref/drop_trigger.sgml
index 118f38f3f4..728541e557 100644
--- a/doc/src/sgml/ref/drop_trigger.sgml
+++ b/doc/src/sgml/ref/drop_trigger.sgml
@@ -75,7 +75,7 @@ DROP TRIGGER [ IF EXISTS ] name ON
   Automatically drop objects that depend on the trigger, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -120,7 +120,7 @@ DROP TRIGGER if_dist_exists ON films;
  <title>See Also</title>
-   <member><xref linkend="sql-createtrigger"></member>
+   <member><xref linkend="sql-createtrigger"/></member>
diff --git a/doc/src/sgml/ref/drop_tsconfig.sgml b/doc/src/sgml/ref/drop_tsconfig.sgml
index b7acf46ff7..9eec4bab53 100644
--- a/doc/src/sgml/ref/drop_tsconfig.sgml
+++ b/doc/src/sgml/ref/drop_tsconfig.sgml
@@ -66,7 +66,7 @@ DROP TEXT SEARCH CONFIGURATION [ IF EXISTS ] name
   Automatically drop objects that depend on the text search configuration, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -113,8 +113,8 @@ DROP TEXT SEARCH CONFIGURATION my_english;
  <title>See Also</title>
-   <member><xref linkend="sql-altertsconfig"></member>
-   <member><xref linkend="sql-createtsconfig"></member>
+   <member><xref linkend="sql-altertsconfig"/></member>
+   <member><xref linkend="sql-createtsconfig"/></member>
diff --git a/doc/src/sgml/ref/drop_tsdictionary.sgml b/doc/src/sgml/ref/drop_tsdictionary.sgml
index b670f55ff2..8d22cfdc75 100644
--- a/doc/src/sgml/ref/drop_tsdictionary.sgml
+++ b/doc/src/sgml/ref/drop_tsdictionary.sgml
@@ -66,7 +66,7 @@ DROP TEXT SEARCH DICTIONARY [ IF EXISTS ] name
   Automatically drop objects that depend on the text search dictionary, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -112,8 +112,8 @@ DROP TEXT SEARCH DICTIONARY english;
  <title>See Also</title>
-   <member><xref linkend="sql-altertsdictionary"></member>
-   <member><xref linkend="sql-createtsdictionary"></member>
+   <member><xref linkend="sql-altertsdictionary"/></member>
+   <member><xref linkend="sql-createtsdictionary"/></member>
diff --git a/doc/src/sgml/ref/drop_tsparser.sgml b/doc/src/sgml/ref/drop_tsparser.sgml
index dea9b2b1bd..2b647ccaa6 100644
--- a/doc/src/sgml/ref/drop_tsparser.sgml
+++ b/doc/src/sgml/ref/drop_tsparser.sgml
@@ -64,7 +64,7 @@ DROP TEXT SEARCH PARSER [ IF EXISTS ] name
   Automatically drop objects that depend on the text search parser, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -110,8 +110,8 @@ DROP TEXT SEARCH PARSER my_parser;
  <title>See Also</title>
-   <member><xref linkend="sql-altertsparser"></member>
-   <member><xref linkend="sql-createtsparser"></member>
+   <member><xref linkend="sql-altertsparser"/></member>
+   <member><xref linkend="sql-createtsparser"/></member>
diff --git a/doc/src/sgml/ref/drop_tstemplate.sgml b/doc/src/sgml/ref/drop_tstemplate.sgml
index 244af48191..972b90a85c 100644
--- a/doc/src/sgml/ref/drop_tstemplate.sgml
+++ b/doc/src/sgml/ref/drop_tstemplate.sgml
@@ -65,7 +65,7 @@ DROP TEXT SEARCH TEMPLATE [ IF EXISTS ] name
   Automatically drop objects that depend on the text search template, and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -111,8 +111,8 @@ DROP TEXT SEARCH TEMPLATE thesaurus;
  <title>See Also</title>
-   <member><xref linkend="sql-altertstemplate"></member>
-   <member><xref linkend="sql-createtstemplate"></member>
+   <member><xref linkend="sql-altertstemplate"/></member>
+   <member><xref linkend="sql-createtstemplate"/></member>
diff --git a/doc/src/sgml/ref/drop_type.sgml b/doc/src/sgml/ref/drop_type.sgml
index 37449ed19f..9e555c0624 100644
--- a/doc/src/sgml/ref/drop_type.sgml
+++ b/doc/src/sgml/ref/drop_type.sgml
@@ -64,7 +64,7 @@ DROP TYPE [ IF EXISTS ] name [, ...
   Automatically drop objects that depend on the type (such as table columns, functions, and operators), and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -108,8 +108,8 @@ DROP TYPE box;
  <title>See Also</title>
-   <member><xref linkend="sql-altertype"></member>
-   <member><xref linkend="sql-createtype"></member>
+   <member><xref linkend="sql-altertype"/></member>
+   <member><xref linkend="sql-createtype"/></member>
diff --git a/doc/src/sgml/ref/drop_user.sgml b/doc/src/sgml/ref/drop_user.sgml
index 0e680f30f6..37ab856125 100644
--- a/doc/src/sgml/ref/drop_user.sgml
+++ b/doc/src/sgml/ref/drop_user.sgml
@@ -30,7 +30,7 @@ DROP USER [ IF EXISTS ] name [, ...
   DROP USER is simply an alternate spelling of
-   <xref linkend="sql-droprole">.
+   <xref linkend="sql-droprole"/>.
@@ -48,7 +48,7 @@ DROP USER [ IF EXISTS ] name [, ...
  <title>See Also</title>
-   <member><xref linkend="sql-droprole"></member>
+   <member><xref linkend="sql-droprole"/></member>
diff --git a/doc/src/sgml/ref/drop_user_mapping.sgml b/doc/src/sgml/ref/drop_user_mapping.sgml
index 393a1eadcf..7cb09f1166 100644
--- a/doc/src/sgml/ref/drop_user_mapping.sgml
+++ b/doc/src/sgml/ref/drop_user_mapping.sgml
@@ -102,8 +102,8 @@ DROP USER MAPPING IF EXISTS FOR bob SERVER foo;
  <title>See Also</title>
-   <member><xref linkend="sql-createusermapping"></member>
-   <member><xref linkend="sql-alterusermapping"></member>
+   <member><xref linkend="sql-createusermapping"/></member>
+   <member><xref linkend="sql-alterusermapping"/></member>
diff --git a/doc/src/sgml/ref/drop_view.sgml b/doc/src/sgml/ref/drop_view.sgml
index 47e55bffb4..a1c550ec3e 100644
--- a/doc/src/sgml/ref/drop_view.sgml
+++ b/doc/src/sgml/ref/drop_view.sgml
@@ -64,7 +64,7 @@ DROP VIEW [ IF EXISTS ] name [, ...
   Automatically drop objects that depend on the view (such as other views), and in turn all objects that depend on those objects
-   (see <xref linkend="ddl-depend">).
+   (see <xref linkend="ddl-depend"/>).
@@ -106,8 +106,8 @@ DROP VIEW kinds;
  <title>See Also</title>
-   <member><xref linkend="sql-createview"></member>
-   <member><xref linkend="sql-alterview"></member>
+   <member><xref linkend="sql-createview"/></member>
+   <member><xref linkend="sql-alterview"/></member>
diff --git a/doc/src/sgml/ref/dropdb.sgml b/doc/src/sgml/ref/dropdb.sgml
index f7ca0877b1..38f38f01ce 100644
--- a/doc/src/sgml/ref/dropdb.sgml
+++ b/doc/src/sgml/ref/dropdb.sgml
@@ -41,7 +41,7 @@ PostgreSQL documentation
   dropdb is a wrapper around the
-   SQL command <xref linkend="sql-dropdatabase">.
+   SQL command <xref linkend="sql-dropdatabase"/>.
   There is no effective difference between dropping databases via this utility and via other methods for accessing the server.
@@ -233,7 +233,7 @@ PostgreSQL documentation
   This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq
-   (see <xref linkend="libpq-envars">).
+   (see <xref linkend="libpq-envars"/>).
@@ -243,8 +243,8 @@ PostgreSQL documentation
  <title>Diagnostics</title>
-   In case of difficulty, see <xref linkend="sql-dropdatabase">
-   and <xref linkend="app-psql"> for
+   In case of difficulty, see <xref linkend="sql-dropdatabase"/>
+   and <xref linkend="app-psql"/> for
   discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment
@@ -283,8 +283,8 @@ Are you sure? (y/n) y
  <title>See Also</title>
-   <member><xref linkend="app-createdb"></member>
-   <member><xref linkend="sql-dropdatabase"></member>
+   <member><xref linkend="app-createdb"/></member>
+   <member><xref linkend="sql-dropdatabase"/></member>
diff --git a/doc/src/sgml/ref/dropuser.sgml b/doc/src/sgml/ref/dropuser.sgml
index 4c6a8bdb40..3d4e4b37b3 100644
--- a/doc/src/sgml/ref/dropuser.sgml
+++ b/doc/src/sgml/ref/dropuser.sgml
@@ -42,7 +42,7 @@ PostgreSQL documentation
   dropuser is a wrapper around the
-   SQL command <xref linkend="sql-droprole">.
+   SQL command <xref linkend="sql-droprole"/>.
   There is no effective difference between dropping users via this utility and via other methods for accessing the server.
@@ -225,7 +225,7 @@ PostgreSQL documentation
   This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq
-   (see <xref linkend="libpq-envars">).
+   (see <xref linkend="libpq-envars"/>).
@@ -235,8 +235,8 @@ PostgreSQL documentation
  <title>Diagnostics</title>
-   In case of difficulty, see <xref linkend="sql-droprole">
-   and <xref linkend="app-psql"> for
+   In case of difficulty, see <xref linkend="sql-droprole"/>
+   and <xref linkend="app-psql"/> for
   discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment
@@ -275,8 +275,8 @@ Are you sure?
(y/n) y See Also - - + + diff --git a/doc/src/sgml/ref/ecpg-ref.sgml b/doc/src/sgml/ref/ecpg-ref.sgml index 18e7ed526a..7bfee805a7 100644 --- a/doc/src/sgml/ref/ecpg-ref.sgml +++ b/doc/src/sgml/ref/ecpg-ref.sgml @@ -51,7 +51,7 @@ PostgreSQL documentation This reference page does not describe the embedded SQL language. - See for more information on that topic. + See for more information on that topic. @@ -235,7 +235,7 @@ PostgreSQL documentation The value of either of these directories that is appropriate for the installation can be found out using . + linkend="app-pgconfig"/>. diff --git a/doc/src/sgml/ref/end.sgml b/doc/src/sgml/ref/end.sgml index 4904980dab..7523315f34 100644 --- a/doc/src/sgml/ref/end.sgml +++ b/doc/src/sgml/ref/end.sgml @@ -33,7 +33,7 @@ END [ WORK | TRANSACTION ] made by the transaction become visible to others and are guaranteed to be durable if a crash occurs. This command is a PostgreSQL extension - that is equivalent to . + that is equivalent to . @@ -57,7 +57,7 @@ END [ WORK | TRANSACTION ] Notes - Use to + Use to abort a transaction. @@ -83,7 +83,7 @@ END; END is a PostgreSQL extension that provides functionality equivalent to , which is + linkend="sql-commit"/>, which is specified in the SQL standard. @@ -92,9 +92,9 @@ END; See Also - - - + + + diff --git a/doc/src/sgml/ref/execute.sgml b/doc/src/sgml/ref/execute.sgml index 113a07a3ce..aab1f4b7e0 100644 --- a/doc/src/sgml/ref/execute.sgml +++ b/doc/src/sgml/ref/execute.sgml @@ -52,7 +52,7 @@ EXECUTE name [ ( For more information on the creation and usage of prepared statements, - see . + see . @@ -95,8 +95,8 @@ EXECUTE name [ ( Examples Examples are given in the section of the documentation. + endterm="sql-prepare-examples-title"/> section of the documentation. @@ -115,8 +115,8 @@ EXECUTE name [ ( See Also - - + + diff --git a/doc/src/sgml/ref/explain.sgml b/doc/src/sgml/ref/explain.sgml index a32c150511..8dc0d7038a 100644 --- a/doc/src/sgml/ref/explain.sgml +++ b/doc/src/sgml/ref/explain.sgml @@ -260,7 +260,7 @@ ROLLBACK; The command's result is a textual description of the plan selected for the statement, optionally annotated with execution statistics. - describes the information provided. + describes the information provided. @@ -276,7 +276,7 @@ ROLLBACK; the autovacuum daemon will take care of that automatically. But if a table has recently had substantial changes in its contents, you might need to do a manual - rather than wait for autovacuum to catch up + rather than wait for autovacuum to catch up with the changes. @@ -450,7 +450,7 @@ EXPLAIN ANALYZE EXECUTE query(100, 200); See Also - + diff --git a/doc/src/sgml/ref/fetch.sgml b/doc/src/sgml/ref/fetch.sgml index aa2e8f64b8..5ef63f0058 100644 --- a/doc/src/sgml/ref/fetch.sgml +++ b/doc/src/sgml/ref/fetch.sgml @@ -99,7 +99,7 @@ FETCH [ direction [ FROM | IN ] ] < This page describes usage of cursors at the SQL command level. If you are trying to use cursors inside a PL/pgSQL function, the rules are different — - see . + see . @@ -335,9 +335,9 @@ FETCH count - + is used to define a cursor. Use - + to change cursor position without retrieving data. @@ -410,9 +410,9 @@ COMMIT WORK; See Also - - - + + + diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml index 475c85b835..a5e895d09d 100644 --- a/doc/src/sgml/ref/grant.sgml +++ b/doc/src/sgml/ref/grant.sgml @@ -178,7 +178,7 @@ GRANT role_name [, ...] TO + command. @@ -190,14 +190,14 @@ GRANT role_name [, ...] 
TO SELECT - Allows from + Allows from any column, or the specific columns listed, of the specified table, view, or sequence. Also allows the use of - TO. + TO. This privilege is also needed to reference existing column values in - or - . + or + . For sequences, this privilege also allows the use of the currval function. For large objects, this privilege allows the object to be read. @@ -209,11 +209,11 @@ GRANT role_name [, ...] TO INSERT - Allows of a new + Allows of a new row into the specified table. If specific columns are listed, only those columns may be assigned to in the INSERT command (other columns will therefore receive default values). - Also allows FROM. + Also allows FROM. @@ -222,7 +222,7 @@ GRANT role_name [, ...] TO UPDATE - Allows of any + Allows of any column, or the specific columns listed, of the specified table. (In practice, any nontrivial UPDATE command will require SELECT privilege as well, since it must reference table @@ -244,7 +244,7 @@ GRANT role_name [, ...] TO DELETE - Allows of a row + Allows of a row from the specified table. (In practice, any nontrivial DELETE command will require SELECT privilege as well, since it must reference table @@ -257,7 +257,7 @@ GRANT role_name [, ...] TO TRUNCATE - Allows on + Allows on the specified table. @@ -269,7 +269,7 @@ GRANT role_name [, ...] TO Allows creation of a foreign key constraint referencing the specified table, or specified column(s) of the table. (See the - statement.) + statement.) @@ -279,7 +279,7 @@ GRANT role_name [, ...] TO Allows the creation of a trigger on the specified table. (See the - statement.) + statement.) @@ -433,7 +433,7 @@ GRANT role_name [, ...] TO Notes - The command is used + The command is used to revoke access privileges. @@ -515,7 +515,7 @@ GRANT role_name [, ...] TO - Use 's \dp command + Use 's \dp command to obtain information about existing privileges for tables and columns. For example: @@ -682,8 +682,8 @@ GRANT admins TO joe; See Also - - + + diff --git a/doc/src/sgml/ref/import_foreign_schema.sgml b/doc/src/sgml/ref/import_foreign_schema.sgml index 66c8462a5a..f07f757ac6 100644 --- a/doc/src/sgml/ref/import_foreign_schema.sgml +++ b/doc/src/sgml/ref/import_foreign_schema.sgml @@ -159,8 +159,8 @@ IMPORT FOREIGN SCHEMA foreign_films LIMIT TO (actors, directors) See Also - - + + diff --git a/doc/src/sgml/ref/initdb.sgml b/doc/src/sgml/ref/initdb.sgml index 9a02bc1dbb..585665f161 100644 --- a/doc/src/sgml/ref/initdb.sgml +++ b/doc/src/sgml/ref/initdb.sgml @@ -100,12 +100,12 @@ PostgreSQL documentation default for all locale categories, including collation order and character set classes. All server locale values (lc_*) can be displayed via SHOW ALL. - More details can be found in . + More details can be found in . To alter the default encoding, use the . - More details can be found in . + More details can be found in . @@ -183,7 +183,7 @@ PostgreSQL documentation unless you override it there. The default is derived from the locale, or SQL_ASCII if that does not work. The character sets supported by the PostgreSQL server are described - in . + in . @@ -209,7 +209,7 @@ PostgreSQL documentation Sets the default locale for the database cluster. If this option is not specified, the locale is inherited from the environment that initdb runs in. Locale - support is described in . + support is described in . @@ -281,7 +281,7 @@ PostgreSQL documentation Sets the default text search configuration. - See for further information. + See for further information. 
@@ -442,7 +442,7 @@ PostgreSQL documentation Specifies the default time zone of the created database cluster. The value should be a full time zone name - (see ). + (see ). @@ -451,7 +451,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). @@ -469,8 +469,8 @@ PostgreSQL documentation See Also - - + + diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml index f13fad4dd6..134092fa9c 100644 --- a/doc/src/sgml/ref/insert.sgml +++ b/doc/src/sgml/ref/insert.sgml @@ -78,7 +78,7 @@ INSERT INTO table_name [ AS ON CONFLICT can be used to specify an alternative action to raising a unique constraint or exclusion constraint violation error. (See below.) + endterm="sql-on-conflict-title"/> below.) @@ -145,7 +145,7 @@ INSERT INTO table_name [ AS The WITH clause allows you to specify one or more subqueries that can be referenced by name in the INSERT - query. See and + query. See and for details. @@ -270,7 +270,7 @@ INSERT INTO table_name [ AS A query (SELECT statement) that supplies the rows to be inserted. Refer to the - + statement for a description of the syntax. @@ -754,7 +754,7 @@ INSERT INTO distributors (did, dname) VALUES (10, 'Conrad International') Possible limitations of the query clause are documented under - . + . diff --git a/doc/src/sgml/ref/listen.sgml b/doc/src/sgml/ref/listen.sgml index f4c25efbd0..ecc1fb82de 100644 --- a/doc/src/sgml/ref/listen.sgml +++ b/doc/src/sgml/ref/listen.sgml @@ -65,7 +65,7 @@ LISTEN channel - + contains a more extensive discussion of the use of LISTEN and NOTIFY. @@ -129,8 +129,8 @@ Asynchronous notification "virtual" received from server process with PID 8448. See Also - - + + diff --git a/doc/src/sgml/ref/load.sgml b/doc/src/sgml/ref/load.sgml index 14886199de..506699ef6f 100644 --- a/doc/src/sgml/ref/load.sgml +++ b/doc/src/sgml/ref/load.sgml @@ -40,10 +40,10 @@ LOAD 'filename' The library file name is typically given as just a bare file name, which is sought in the server's library search path (set - by ). Alternatively it can be + by ). Alternatively it can be given as a full path name. In either case the platform's standard shared library file name extension may be omitted. - See for more information on this topic. + See for more information on this topic. @@ -74,7 +74,7 @@ LOAD 'filename' See Also - + diff --git a/doc/src/sgml/ref/lock.sgml b/doc/src/sgml/ref/lock.sgml index 49e2933938..b2c7203919 100644 --- a/doc/src/sgml/ref/lock.sgml +++ b/doc/src/sgml/ref/lock.sgml @@ -98,7 +98,7 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] More information about the lock modes and locking strategies can be - found in . + found in . @@ -132,7 +132,7 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] The lock mode specifies which locks this lock conflicts with. - Lock modes are described in . + Lock modes are described in . @@ -174,9 +174,9 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] PostgreSQL reports an error if LOCK is used outside a transaction block. Use - and - - (or ) + and + + (or ) to define a transaction block. @@ -189,9 +189,9 @@ LOCK [ TABLE ] [ ONLY ] name [ * ] mind that all the lock modes have identical semantics so far as LOCK TABLE is concerned, differing only in the rules about which modes conflict with which. For information on how to - acquire an actual row-level lock, see + acquire an actual row-level lock, see and the in the SELECT + endterm="sql-for-update-share-title"/> in the SELECT reference documentation. 
@@ -236,7 +236,7 @@ COMMIT WORK; There is no LOCK TABLE in the SQL standard, which instead uses SET TRANSACTION to specify concurrency levels on transactions. PostgreSQL supports that too; - see for details. + see for details. diff --git a/doc/src/sgml/ref/move.sgml b/doc/src/sgml/ref/move.sgml index 50533e5e0e..4c7d1dca39 100644 --- a/doc/src/sgml/ref/move.sgml +++ b/doc/src/sgml/ref/move.sgml @@ -60,7 +60,7 @@ MOVE [ direction [ FROM | IN ] ] The parameters for the MOVE command are identical to those of the FETCH command; refer to - + for details on syntax and usage. @@ -116,9 +116,9 @@ COMMIT WORK; See Also - - - + + + diff --git a/doc/src/sgml/ref/notify.sgml b/doc/src/sgml/ref/notify.sgml index 481163634f..e0e125a2a2 100644 --- a/doc/src/sgml/ref/notify.sgml +++ b/doc/src/sgml/ref/notify.sgml @@ -168,7 +168,7 @@ NOTIFY channel [ , The function pg_notification_queue_usage returns the fraction of the queue that is currently occupied by pending notifications. - See for more information. + See for more information. A transaction that has executed NOTIFY cannot be @@ -226,8 +226,8 @@ Asynchronous notification "foo" with payload "payload" received from server proc See Also - - + + diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml index 167d523f5d..e8921b1bb4 100644 --- a/doc/src/sgml/ref/pg_basebackup.sgml +++ b/doc/src/sgml/ref/pg_basebackup.sgml @@ -34,9 +34,9 @@ PostgreSQL documentation pg_basebackup is used to take base backups of a running PostgreSQL database cluster. These are taken without affecting other clients to the database, and can be used - both for point-in-time recovery (see ) + both for point-in-time recovery (see ) and as the starting point for a log shipping or streaming replication standby - servers (see ). + servers (see ). @@ -45,17 +45,17 @@ PostgreSQL documentation out of backup mode automatically. Backups are always taken of the entire database cluster; it is not possible to back up individual databases or database objects. For individual database backups, a tool such as - must be used. + must be used. The backup is made over a regular PostgreSQL connection, and uses the replication protocol. The connection must be made with a superuser or a user having REPLICATION - permissions (see ), + permissions (see ), and pg_hba.conf must explicitly permit the replication connection. The server must also be configured - with set high enough to leave at least + with set high enough to leave at least one session available for the backup and one for WAL streaming (if used). @@ -69,9 +69,9 @@ PostgreSQL documentation pg_basebackup can make a base backup from not only the master but also the standby. To take a backup from the standby, set up the standby so that it can accept replication connections (that is, set - max_wal_senders and , + max_wal_senders and , and configure host-based authentication). - You will also need to enable on the master. + You will also need to enable on the master. @@ -299,7 +299,7 @@ PostgreSQL documentation The write-ahead log files are collected at the end of the backup. Therefore, it is necessary for the - parameter to be set high + parameter to be set high enough that the log is not removed before the end of the backup. If the log has been rotated when it's time to transfer it, the backup will fail and be unusable. @@ -320,7 +320,7 @@ PostgreSQL documentation open a second connection to the server and start streaming the write-ahead log in parallel while running the backup. 
Therefore, it will use up two connections configured by the - parameter. As long as the + parameter. As long as the client can keep up with write-ahead log received, using this mode requires no extra write-ahead logs to be saved on the master. @@ -377,7 +377,7 @@ PostgreSQL documentation - Sets checkpoint mode to fast (immediate) or spread (default) (see ). + Sets checkpoint mode to fast (immediate) or spread (default) (see ). @@ -531,7 +531,7 @@ PostgreSQL documentation Specifies parameters used to connect to the server, as a connection - string. See for more information. + string. See for more information. The option is called --dbname for consistency with other @@ -667,7 +667,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, uses the environment variables supported by libpq - (see ). + (see ). @@ -691,7 +691,7 @@ PostgreSQL documentation symbolic links used for tablespaces are preserved. Symbolic links pointing to certain directories known to PostgreSQL are copied as empty directories. Other symbolic links and special device files are skipped. - See for the precise details. + See for the precise details. @@ -768,7 +768,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/pg_ctl-ref.sgml b/doc/src/sgml/ref/pg_ctl-ref.sgml index f930c7e245..7eb5dd320c 100644 --- a/doc/src/sgml/ref/pg_ctl-ref.sgml +++ b/doc/src/sgml/ref/pg_ctl-ref.sgml @@ -140,7 +140,7 @@ PostgreSQL documentation pg_ctl is a utility for initializing a PostgreSQL database cluster, starting, stopping, or restarting the PostgreSQL - database server (), or displaying the + database server (), or displaying the status of a running server. Although the server can be started manually, pg_ctl encapsulates tasks such as redirecting log output and properly detaching from the terminal @@ -153,7 +153,7 @@ PostgreSQL documentation PostgreSQL database cluster, that is, a collection of databases that will be managed by a single server instance. This mode invokes the initdb - command. See for details. + command. See for details. @@ -475,7 +475,7 @@ PostgreSQL documentation default is PostgreSQL. Note that this only controls messages sent from pg_ctl itself; once started, the server will use the event source specified - by its parameter. Should the server + by its parameter. Should the server fail very early in startup, before that parameter has been set, it might also log using the default event source name PostgreSQL. @@ -567,12 +567,12 @@ PostgreSQL documentation pg_ctl, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). For additional variables that affect the server, - see . + see . @@ -691,8 +691,8 @@ pg_ctl: server is running (PID: 13718) See Also - - + + diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml index 57272e33bf..08cad68199 100644 --- a/doc/src/sgml/ref/pg_dump.sgml +++ b/doc/src/sgml/ref/pg_dump.sgml @@ -48,7 +48,7 @@ PostgreSQL documentation pg_dump only dumps a single database. To backup global objects that are common to all databases in a cluster, such as roles - and tablespaces, use . + and tablespaces, use . @@ -56,7 +56,7 @@ PostgreSQL documentation dumps are plain-text files containing the SQL commands required to reconstruct the database to the state it was in at the time it was saved. To restore from such a script, feed it to . Script files + linkend="app-psql"/>. 
Script files can be used to reconstruct the database even on other machines and other architectures; with some modifications, even on other SQL database products. @@ -64,7 +64,7 @@ PostgreSQL documentation The alternative archive file formats must be used with - to rebuild the database. They + to rebuild the database. They allow pg_restore to be selective about what is restored, or even to reorder the items prior to being restored. @@ -316,7 +316,7 @@ PostgreSQL documentation can write their data at the same time. pg_dump will open njobs - + 1 connections to the database, so make sure your + + 1 connections to the database, so make sure your setting is high enough to accommodate all connections. @@ -375,11 +375,11 @@ PostgreSQL documentation schema parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see ), + linkend="app-psql-patterns" endterm="app-psql-patterns-title"/>), so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see - . + . @@ -526,11 +526,11 @@ PostgreSQL documentation table parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see ), + linkend="app-psql-patterns" endterm="app-psql-patterns-title"/>), so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see - . + . @@ -712,11 +712,11 @@ PostgreSQL documentation This option is relevant only when dumping the contents of a table which has row security. By default, pg_dump will set - to off, to ensure + to off, to ensure that all data is dumped from the table. If the user does not have sufficient privileges to bypass row security, then an error is thrown. This parameter instructs pg_dump to set - to on instead, allowing the user + to on instead, allowing the user to dump the parts of the contents of the table that they have access to. @@ -933,7 +933,7 @@ PostgreSQL documentation states; but do this by waiting for a point in the transaction stream at which no anomalies can be present, so that there isn't a risk of the dump failing or causing other transactions to roll back with a - serialization_failure. See + serialization_failure. See for more information about transaction isolation and concurrency control. @@ -965,12 +965,12 @@ PostgreSQL documentation Use the specified synchronized snapshot when making a dump of the database (see - for more + for more details). This option is useful when needing to synchronize the dump with - a logical replication slot (see ) + a logical replication slot (see ) or with a concurrent session. @@ -1050,7 +1050,7 @@ PostgreSQL documentation with a valid URI prefix (postgresql:// or postgres://), it is treated as a - conninfo string. See for more information. + conninfo string. See for more information. @@ -1171,7 +1171,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). @@ -1184,7 +1184,7 @@ PostgreSQL documentation SELECT statements. If you have problems running pg_dump, make sure you are able to select information from the database using, for example, . Also, any default connection settings and environment + linkend="app-psql"/>. 
Also, any default connection settings and environment variables used by the libpq front-end library will apply. @@ -1229,11 +1229,11 @@ CREATE DATABASE foo WITH TEMPLATE template0; does not contain the statistics used by the optimizer to make query planning decisions. Therefore, it is wise to run ANALYZE after restoring from a dump file - to ensure optimal performance; see - and for more information. + to ensure optimal performance; see + and for more information. The dump file also does not contain any ALTER DATABASE ... SET commands; - these settings are dumped by , + these settings are dumped by , along with database users and other installation-wide settings. @@ -1374,7 +1374,7 @@ CREATE DATABASE foo WITH TEMPLATE template0; To specify an upper-case or mixed-case name in and related switches, you need to double-quote the name; else it will be folded to lower case (see ). But + linkend="app-psql-patterns" endterm="app-psql-patterns-title"/>). But double quotes are special to the shell, so in turn they must be quoted. Thus, to dump a single table with a mixed-case name, you need something like @@ -1389,9 +1389,9 @@ CREATE DATABASE foo WITH TEMPLATE template0; See Also - - - + + + diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml index ce6b895da2..5196a211b1 100644 --- a/doc/src/sgml/ref/pg_dumpall.sgml +++ b/doc/src/sgml/ref/pg_dumpall.sgml @@ -35,8 +35,8 @@ PostgreSQL documentation (dumping) all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to to restore the databases. It does this by - calling for each database in a cluster. + linkend="app-psql"/> to restore the databases. It does this by + calling for each database in a cluster. pg_dumpall also dumps global objects that are common to all databases. (pg_dump does not save these objects.) @@ -64,7 +64,7 @@ PostgreSQL documentation database). If you use password authentication it will ask for a password each time. It is convenient to have a ~/.pgpass file in such cases. See for more information. + linkend="libpq-pgpass"/> for more information. @@ -495,7 +495,7 @@ PostgreSQL documentation Specifies parameters used to connect to the server, as a connection - string. See for more information. + string. See for more information. The option is called --dbname for consistency with other @@ -642,7 +642,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). @@ -699,7 +699,7 @@ PostgreSQL documentation See Also - Check for details on possible + Check for details on possible error conditions. diff --git a/doc/src/sgml/ref/pg_isready.sgml b/doc/src/sgml/ref/pg_isready.sgml index f140c82079..9567b57ebe 100644 --- a/doc/src/sgml/ref/pg_isready.sgml +++ b/doc/src/sgml/ref/pg_isready.sgml @@ -55,7 +55,7 @@ PostgreSQL documentation (postgresql:// or postgres://), it is treated as a conninfo string. See for more information. + linkend="libpq-connstring"/> for more information. @@ -162,7 +162,7 @@ PostgreSQL documentation pg_isready, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). 
diff --git a/doc/src/sgml/ref/pg_receivewal.sgml b/doc/src/sgml/ref/pg_receivewal.sgml index 4e2e0cb44c..e3f2ce1fcb 100644 --- a/doc/src/sgml/ref/pg_receivewal.sgml +++ b/doc/src/sgml/ref/pg_receivewal.sgml @@ -36,15 +36,15 @@ PostgreSQL documentation log is streamed using the streaming replication protocol, and is written to a local directory of files. This directory can be used as the archive location for doing a restore using point-in-time recovery (see - ). + ). pg_receivewal streams the write-ahead log in real time as it's being generated on the server, and does not wait - for segments to complete like does. + for segments to complete like does. For this reason, it is not necessary to set - when using + when using pg_receivewal. @@ -60,9 +60,9 @@ PostgreSQL documentation PostgreSQL connection and uses the replication protocol. The connection must be made with a superuser or a user having REPLICATION permissions (see - ), and pg_hba.conf + ), and pg_hba.conf must permit the replication connection. The server must also be - configured with set high enough to + configured with set high enough to leave at least one session available for the stream. @@ -172,7 +172,7 @@ PostgreSQL documentation Require pg_receivewal to use an existing - replication slot (see ). + replication slot (see ). When this option is used, pg_receivewal will report a flush position to the server, indicating when each segment has been synchronized to disk so that the server can remove that segment if it @@ -244,7 +244,7 @@ PostgreSQL documentation Specifies parameters used to connect to the server, as a connection - string. See for more information. + string. See for more information. The option is called --dbname for consistency with other @@ -405,7 +405,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, uses the environment variables supported by libpq - (see ). + (see ). @@ -415,11 +415,11 @@ PostgreSQL documentation When using pg_receivewal instead of - as the main WAL backup method, it is + as the main WAL backup method, it is strongly recommended to use replication slots. Otherwise, the server is free to recycle or remove write-ahead log files before they are backed up, because it does not have any information, either - from or the replication slots, about + from or the replication slots, about how far the WAL stream has been archived. Note, however, that a replication slot will fill up the server's disk space if the receiver does not keep up with fetching the WAL data. @@ -443,7 +443,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/pg_recvlogical.sgml b/doc/src/sgml/ref/pg_recvlogical.sgml index 86f660070f..a79ca20084 100644 --- a/doc/src/sgml/ref/pg_recvlogical.sgml +++ b/doc/src/sgml/ref/pg_recvlogical.sgml @@ -35,8 +35,8 @@ PostgreSQL documentation It creates a replication-mode connection, so it is subject to the same - constraints as , plus those for logical - replication (see ). + constraints as , plus those for logical + replication (see ). @@ -182,8 +182,8 @@ PostgreSQL documentation In mode, start replication from the given LSN. For details on the effect of this, see the documentation - in - and . Ignored in other modes. + in + and . Ignored in other modes. @@ -226,7 +226,7 @@ PostgreSQL documentation When creating a slot, use the specified logical decoding output - plugin. See . This option has no + plugin. See . This option has no effect if the slot already exists. 
@@ -238,7 +238,7 @@ PostgreSQL documentation This option has the same effect as the option of the same name - in . See the description there. + in . See the description there. @@ -279,7 +279,7 @@ PostgreSQL documentation The database to connect to. See the description of the actions for what this means in detail. This can be a libpq connection string; - see for more information. Defaults + see for more information. Defaults to user name. @@ -395,7 +395,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, uses the environment variables supported by libpq - (see ). + (see ). @@ -403,7 +403,7 @@ PostgreSQL documentation Examples - See for an example. + See for an example. @@ -411,7 +411,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/pg_resetwal.sgml b/doc/src/sgml/ref/pg_resetwal.sgml index 0c30addd30..43b58a49e6 100644 --- a/doc/src/sgml/ref/pg_resetwal.sgml +++ b/doc/src/sgml/ref/pg_resetwal.sgml @@ -292,7 +292,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml index 2b0a334025..9946b94e84 100644 --- a/doc/src/sgml/ref/pg_restore.sgml +++ b/doc/src/sgml/ref/pg_restore.sgml @@ -36,7 +36,7 @@ pg_restore is a utility for restoring a PostgreSQL database from an archive - created by in one of the non-plain-text + created by in one of the non-plain-text formats. It will issue the commands necessary to reconstruct the database to the state it was in at the time it was saved. The archive files also allow pg_restore to @@ -531,11 +531,11 @@ This option is relevant only when restoring the contents of a table which has row security. By default, pg_restore will set - to off, to ensure + to off, to ensure that all data is restored in to the table. If the user does not have sufficient privileges to bypass row security, then an error is thrown. This parameter instructs pg_restore to set - to on instead, allowing the user to attempt to restore + to on instead, allowing the user to attempt to restore the contents of the table with row security enabled. This might still fail if the user does not have the right to insert the rows from the dump into the table. @@ -801,7 +801,7 @@ This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). However, it does not read + (see ). However, it does not read PGDATABASE when a database name is not supplied. @@ -817,7 +817,7 @@ internally executes SQL statements. If you have problems running pg_restore, make sure you are able to select information from the database using, for - example, . Also, any default connection + example, . Also, any default connection settings and environment variables used by the libpq front-end library will apply. @@ -867,15 +867,15 @@ CREATE DATABASE foo WITH TEMPLATE template0; - See also the documentation for details on + See also the documentation for details on limitations of pg_dump. Once restored, it is wise to run ANALYZE on each restored table so the optimizer has useful statistics; see - and - for more information. + and + for more information. 
@@ -976,9 +976,9 @@ CREATE DATABASE foo WITH TEMPLATE template0; See Also - - - + + + diff --git a/doc/src/sgml/ref/pg_rewind.sgml b/doc/src/sgml/ref/pg_rewind.sgml index 4bafdfe581..8e49249826 100644 --- a/doc/src/sgml/ref/pg_rewind.sgml +++ b/doc/src/sgml/ref/pg_rewind.sgml @@ -89,10 +89,10 @@ PostgreSQL documentation pg_rewind requires that the target server either has - the option enabled + the option enabled in postgresql.conf or data checksums enabled when the cluster was initialized with initdb. Neither of these - are currently on by default. + are currently on by default. must also be set to on, but is enabled by default. @@ -194,7 +194,7 @@ PostgreSQL documentation When option is used, pg_rewind also uses the environment variables - supported by libpq (see ). + supported by libpq (see ). diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml index 40049c51e5..35f974f8c1 100644 --- a/doc/src/sgml/ref/pg_waldump.sgml +++ b/doc/src/sgml/ref/pg_waldump.sgml @@ -230,7 +230,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/pgarchivecleanup.sgml b/doc/src/sgml/ref/pgarchivecleanup.sgml index 65ba3df928..4117a4392c 100644 --- a/doc/src/sgml/ref/pgarchivecleanup.sgml +++ b/doc/src/sgml/ref/pgarchivecleanup.sgml @@ -31,7 +31,7 @@ pg_archivecleanup is designed to be used as an archive_cleanup_command to clean up WAL file archives when - running as a standby server (see ). + running as a standby server (see ). pg_archivecleanup can also be used as a standalone program to clean WAL file archives. @@ -47,7 +47,7 @@ archive_cleanup_command = 'pg_archivecleanup archivelocation - When used within , all WAL files + When used within , all WAL files logically preceding the value of the %r argument will be removed from archivelocation. This minimizes the number of files that need to be retained, while preserving crash-restart capability. Use of @@ -194,7 +194,7 @@ archive_cleanup_command = 'pg_archivecleanup -d /mnt/standby/archive %r 2>>clean See Also - + diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index f6e93c3ade..94b495e606 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -473,7 +473,7 @@ pgbench options d prepared: use extended query protocol with prepared statements. - The default is simple query protocol. (See + The default is simple query protocol. (See for more information.) @@ -854,7 +854,7 @@ pgbench options d explained above, or by the meta commands explained below. In addition to any variables preset by command-line options, there are a few variables that are preset automatically, listed in - . A value specified for these + . A value specified for these variables using takes precedence over the automatic presets. Once set, a variable's value can be inserted into a SQL command by writing @@ -1000,7 +1000,7 @@ pgbench options d Built-In Functions - The functions listed in are built + The functions listed in are built into pgbench and may be used in expressions appearing in \set. diff --git a/doc/src/sgml/ref/pgtestfsync.sgml b/doc/src/sgml/ref/pgtestfsync.sgml index 811438ceaf..501157cb36 100644 --- a/doc/src/sgml/ref/pgtestfsync.sgml +++ b/doc/src/sgml/ref/pgtestfsync.sgml @@ -28,7 +28,7 @@ pg_test_fsync is intended to give you a reasonable - idea of what the fastest is on your + idea of what the fastest is on your specific system, as well as supplying diagnostic information in the event of an identified I/O problem. 
However, differences shown by @@ -37,7 +37,7 @@ are not speed-limited by their write-ahead logs. pg_test_fsync reports average file sync operation time in microseconds for each wal_sync_method, which can also be used to - inform efforts to optimize the value of . + inform efforts to optimize the value of . @@ -107,7 +107,7 @@ See Also - + diff --git a/doc/src/sgml/ref/pgtesttiming.sgml b/doc/src/sgml/ref/pgtesttiming.sgml index 966546747e..545a934cf8 100644 --- a/doc/src/sgml/ref/pgtesttiming.sgml +++ b/doc/src/sgml/ref/pgtesttiming.sgml @@ -294,7 +294,7 @@ Histogram of timing durations: See Also - + diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index 8785a3ded2..055eac31a0 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -287,7 +287,7 @@ make prefix=/usr/local/pgsql.new install pg_upgrade will connect to the old and new servers several times, so you might want to set authentication to peer in pg_hba.conf or use a ~/.pgpass file - (see ). + (see ). @@ -321,7 +321,7 @@ NET STOP postgresql-&majorversion; If you are upgrading standby servers using methods outlined in section , verify that the old standby + linkend="pgupgrade-step-replicas"/>, verify that the old standby servers are caught up by running pg_controldata against the old primary and standby clusters. Verify that the Latest checkpoint location values match in all clusters. @@ -404,7 +404,7 @@ pg_upgrade.exe If an error occurs while restoring the database schema, pg_upgrade will - exit and you will have to revert to the old cluster as outlined in + exit and you will have to revert to the old cluster as outlined in below. To try pg_upgrade again, you will need to modify the old cluster so the pg_upgrade schema restore succeeds. If the problem is a contrib module, you might need to uninstall the contrib module from @@ -418,8 +418,8 @@ pg_upgrade.exe If you used link mode and have Streaming Replication (see ) or Log-Shipping (see ) standby servers, you can follow these steps to + linkend="streaming-replication"/>) or Log-Shipping (see ) standby servers, you can follow these steps to quickly upgrade them. You will not be running pg_upgrade on the standby servers, but rather rsync on the primary. Do not start any servers yet. @@ -730,7 +730,7 @@ psql --username=postgres --file=script.sql postgres is necessary because rsync only has file modification-time granularity of one second.) You might want to exclude some files, e.g. postmaster.pid, as documented in . If your file system supports + linkend="backup-lowlevel-base-backup"/>. If your file system supports file system snapshots or copy-on-write file copies, you can use that to make a backup of the old cluster and tablespaces, though the snapshot and copies must be created simultaneously or while the database server @@ -743,10 +743,10 @@ psql --username=postgres --file=script.sql postgres See Also - - - - + + + + diff --git a/doc/src/sgml/ref/postgres-ref.sgml b/doc/src/sgml/ref/postgres-ref.sgml index b62e626e7a..53dc05a78f 100644 --- a/doc/src/sgml/ref/postgres-ref.sgml +++ b/doc/src/sgml/ref/postgres-ref.sgml @@ -51,8 +51,8 @@ PostgreSQL documentation option or the PGDATA environment variable; there is no default. Typically, or PGDATA points directly to the data area directory - created by . Other possible file layouts are - discussed in . + created by . Other possible file layouts are + discussed in . @@ -65,7 +65,7 @@ PostgreSQL documentation The postgres command can also be called in single-user mode. 
The primary use for this mode is during - bootstrapping by . Sometimes it is used + bootstrapping by . Sometimes it is used for debugging or disaster recovery; note that running a single-user server is not truly suitable for debugging the server, since no realistic interprocess communication and locking will happen. @@ -87,7 +87,7 @@ PostgreSQL documentation postgres accepts the following command-line arguments. For a detailed discussion of the options consult . You can save typing most of these + linkend="runtime-config"/>. You can save typing most of these options by setting up a configuration file. Some (safe) options can also be set from the connecting client in an application-dependent way to apply only for that session. For @@ -109,7 +109,7 @@ PostgreSQL documentation processes. The default value of this parameter is chosen automatically by initdb. Specifying this option is equivalent to setting the - configuration parameter. + configuration parameter. @@ -120,7 +120,7 @@ PostgreSQL documentation Sets a named run-time parameter. The configuration parameters supported by PostgreSQL are - described in . Most of the + described in . Most of the other command line options are in fact short forms of such a parameter assignment. can appear multiple times to set multiple parameters. @@ -142,9 +142,9 @@ PostgreSQL documentation This option is meant for other programs that interact with a server - instance, such as , to query configuration + instance, such as , to query configuration parameter values. User-facing applications should instead use or the pg_settings view. + linkend="sql-show"/> or the pg_settings view. @@ -169,7 +169,7 @@ PostgreSQL documentation Specifies the file system location of the database configuration files. See - for details. + for details. @@ -181,7 +181,7 @@ PostgreSQL documentation Sets the default date style to European, that is DMY ordering of input date fields. This also causes the day to be printed before the month in certain date output formats. - See for more information. + See for more information. @@ -193,7 +193,7 @@ PostgreSQL documentation Disables fsync calls for improved performance, at the risk of data corruption in the event of a system crash. Specifying this option is equivalent to - disabling the configuration + disabling the configuration parameter. Read the detailed documentation before using this! @@ -213,7 +213,7 @@ PostgreSQL documentation server. Defaults to listening only on localhost. Specifying this option is equivalent to setting the configuration parameter. + linkend="guc-listen-addresses"/> configuration parameter. @@ -230,7 +230,7 @@ PostgreSQL documentation This option is deprecated since it does not allow access to the - full functionality of . + full functionality of . It's usually better to set listen_addresses directly. @@ -249,7 +249,7 @@ PostgreSQL documentation The default value is normally /tmp, but that can be changed at build time. Specifying this option is equivalent to setting the configuration parameter. + linkend="guc-unix-socket-directories"/> configuration parameter. @@ -262,7 +262,7 @@ PostgreSQL documentation PostgreSQL must have been compiled with support for SSL for this option to be available. For more information on using SSL, - refer to . + refer to . @@ -275,7 +275,7 @@ PostgreSQL documentation server will accept. The default value of this parameter is chosen automatically by initdb. Specifying this option is equivalent to setting the - configuration parameter. + configuration parameter. 
@@ -341,7 +341,7 @@ PostgreSQL documentation Specifies the amount of memory to be used by internal sorts and hashes before resorting to temporary disk files. See the description of the work_mem configuration parameter in . + linkend="runtime-config-resource-memory"/>. @@ -531,7 +531,7 @@ PostgreSQL documentation The following options only apply to the single-user mode (see ). + endterm="app-postgres-single-user-title"/>). @@ -620,7 +620,7 @@ PostgreSQL documentation - Default value of the run-time + Default value of the run-time parameter. (The use of this environment variable is deprecated.) @@ -646,11 +646,11 @@ PostgreSQL documentation A failure message mentioning semget or shmget probably indicates you need to configure your kernel to provide adequate shared memory and semaphores. For more - discussion see . You might be able + discussion see . You might be able to postpone reconfiguring your kernel by decreasing to reduce the shared memory + linkend="guc-shared-buffers"/> to reduce the shared memory consumption of PostgreSQL, and/or by reducing - to reduce the semaphore + to reduce the semaphore consumption. @@ -689,7 +689,7 @@ PostgreSQL documentation Notes - The utility command can be used to + The utility command can be used to start and shut down the postgres server safely and comfortably. @@ -726,7 +726,7 @@ PostgreSQL documentation to the process running that command. To terminate a backend process cleanly, send SIGTERM to that process. See also pg_cancel_backend and pg_terminate_backend - in for the SQL-callable equivalents + in for the SQL-callable equivalents of these two actions. @@ -857,8 +857,8 @@ PostgreSQL documentation See Also - , - + , + diff --git a/doc/src/sgml/ref/postmaster.sgml b/doc/src/sgml/ref/postmaster.sgml index ec11ec65f5..311510a44d 100644 --- a/doc/src/sgml/ref/postmaster.sgml +++ b/doc/src/sgml/ref/postmaster.sgml @@ -38,7 +38,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/prepare.sgml b/doc/src/sgml/ref/prepare.sgml index fb91ef8d50..704fb5e4ab 100644 --- a/doc/src/sgml/ref/prepare.sgml +++ b/doc/src/sgml/ref/prepare.sgml @@ -55,7 +55,7 @@ PREPARE name [ ( EXECUTE statement. Refer to for more + linkend="sql-execute"/> for more information about that. @@ -66,7 +66,7 @@ PREPARE name [ ( command. + manually cleaned up using the command. @@ -154,7 +154,7 @@ PREPARE name [ ( To examine the query plan PostgreSQL is using - for a prepared statement, use , e.g. + for a prepared statement, use , e.g. EXPLAIN EXECUTE. If a generic plan is in use, it will contain parameter symbols $n, while a custom plan will have the @@ -166,7 +166,7 @@ PREPARE name [ ( For more information on query planning and the statistics collected by PostgreSQL for that purpose, see - the + the documentation. @@ -176,7 +176,7 @@ PREPARE name [ ( changes + statement. Also, if the value of changes from one use to the next, the statement will be re-parsed using the new search_path. (This latter behavior is new as of PostgreSQL 9.3.) These rules make use of a @@ -240,8 +240,8 @@ EXECUTE usrrptplan(1, current_date); See Also - - + + diff --git a/doc/src/sgml/ref/prepare_transaction.sgml b/doc/src/sgml/ref/prepare_transaction.sgml index 990546a8c7..d958f7a06f 100644 --- a/doc/src/sgml/ref/prepare_transaction.sgml +++ b/doc/src/sgml/ref/prepare_transaction.sgml @@ -39,8 +39,8 @@ PREPARE TRANSACTION transaction_id Once prepared, a transaction can later be committed or rolled back - with - or , + with + or , respectively. 
Those commands can be issued from any session, not only the one that executed the original transaction. @@ -93,7 +93,7 @@ PREPARE TRANSACTION transaction_id This command must be used inside a transaction block. Use to start one. + linkend="sql-begin"/> to start one. @@ -128,7 +128,7 @@ PREPARE TRANSACTION transaction_id This will interfere with the ability of VACUUM to reclaim storage, and in extreme cases could cause the database to shut down to prevent transaction ID wraparound (see ). Keep in mind also that the transaction + linkend="vacuum-for-wraparound"/>). Keep in mind also that the transaction continues to hold whatever locks it held. The intended usage of the feature is that a prepared transaction will normally be committed or rolled back as soon as an external transaction manager has verified that @@ -139,7 +139,7 @@ PREPARE TRANSACTION transaction_id If you have not set up an external transaction manager to track prepared transactions and ensure they get closed out promptly, it is best to keep the prepared-transaction feature disabled by setting - to zero. This will + to zero. This will prevent accidental creation of prepared transactions that might then be forgotten and eventually cause problems. @@ -173,8 +173,8 @@ PREPARE TRANSACTION 'foobar'; See Also - - + + diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index e520cdf3ba..fce7e3a585 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -125,7 +125,7 @@ echo '\x \\ SELECT * FROM foo;' | psql if the string contains multiple SQL commands, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple - transactions. (See + transactions. (See for more details about how the server handles multi-query strings.) Also, psql only prints the result of the last SQL command in the string. @@ -167,7 +167,7 @@ EOF (postgresql:// or postgres://), it is treated as a conninfo string. See for more information. + linkend="libpq-connstring"/> for more information. @@ -662,9 +662,9 @@ EOF PGDATABASE, PGHOST, PGPORT and/or PGUSER to appropriate values. (For additional environment variables, see .) It is also convenient to have a + linkend="libpq-envars"/>.) It is also convenient to have a ~/.pgpass file to avoid regularly having to type in - passwords. See for more information. + passwords. See for more information. @@ -678,8 +678,8 @@ $ psql "service=myservice sslmode=require" $ psql postgresql://dbmaster:5433/mydb?sslmode=require This way you can also use LDAP for connection - parameter lookup as described in . - See for more information on all the + parameter lookup as described in . + See for more information on all the available connection options. @@ -730,8 +730,8 @@ testdb=> Whenever a command is executed, psql also polls for asynchronous notification events generated by - and - . + and + . @@ -779,7 +779,7 @@ testdb=> If an unquoted colon (:) followed by a psql variable name appears within an argument, it is replaced by the variable's value, as described in . + linkend="app-psql-interpolation" endterm="app-psql-interpolation-title"/>. The forms :'variable_name' and :"variable_name" described there work as well. @@ -864,7 +864,7 @@ testdb=> Establishes a new connection to a PostgreSQL server. The connection parameters to use can be specified either using a positional syntax, or using conninfo connection - strings as detailed in . + strings as detailed in . @@ -958,7 +958,7 @@ testdb=> Performs a frontend (client) copy. 
This is an operation that - runs an SQL + runs an SQL command, but instead of the server reading or writing the specified file, psql reads or writes the file and @@ -995,9 +995,9 @@ testdb=> The syntax of this command is similar to that of the - SQL + SQL command. All options other than the data source/destination are - as specified for . + as specified for . Because of this, special parsing rules apply to the \copy meta-command. Unlike most other meta-commands, the entire remainder of the line is always taken to be the arguments of \copy, @@ -1116,7 +1116,7 @@ testdb=> also shown. For foreign tables, the associated foreign server is shown as well. (Matching the pattern is defined in - + below.) @@ -1255,7 +1255,7 @@ testdb=> Descriptions for objects can be created with the + linkend="sql-comment"/> SQL command. @@ -1292,10 +1292,10 @@ testdb=> - The command is used to set + The command is used to set default access privileges. The meaning of the privilege display is explained under - . + . @@ -1606,11 +1606,11 @@ testdb=> - The and - + The and + commands are used to set access privileges. The meaning of the privilege display is explained under - . + . @@ -1629,8 +1629,8 @@ testdb=> - The and - + The and + commands are used to define per-role and per-database configuration settings. @@ -1770,7 +1770,7 @@ testdb=> See under for how to configure and + endterm="app-psql-environment-title"/> for how to configure and customize your editor. @@ -1844,7 +1844,7 @@ Tue Oct 26 21:40:57 CEST 1999 See under for how to configure and + endterm="app-psql-environment-title"/> for how to configure and customize your editor. @@ -2027,7 +2027,7 @@ CREATE INDEX Sends the current query buffer to the server and stores the query's output into psql variables (see ). + linkend="app-psql-variables" endterm="app-psql-variables-title"/>). The query to be executed must return exactly one row. Each column of the row is stored into a separate variable, named the same as the column. For example: @@ -2832,7 +2832,7 @@ lo_import 152801 Illustrations of how these different formats look can be seen in the section. + endterm="app-psql-examples-title"/> section. @@ -2918,7 +2918,7 @@ lo_import 152801 Valid variable names can contain letters, digits, and underscores. See the section below for details. + endterm="app-psql-variables-title"/> below for details. Variable names are case-sensitive. @@ -2927,13 +2927,13 @@ lo_import 152801 control psql's behavior or are automatically set to reflect connection state. These variables are documented in , below. + endterm="app-psql-variables-title"/>, below. This command is unrelated to the SQL - command . + command . @@ -3071,7 +3071,7 @@ testdb=> \setenv LESS -imx4F cannot be unset; instead, an \unset command is interpreted as setting them to their default values. See , below. + endterm="app-psql-variables-title"/>, below. @@ -3216,7 +3216,7 @@ select 1\; select 2\; select 3; The server executes such a request as a single transaction, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple - transactions. (See + transactions. (See for more details about how the server handles multi-query strings.) psql prints only the last query result it receives for each request; in this example, although all @@ -3295,7 +3295,7 @@ select 1\; select 2\; select 3; Advanced users can use regular-expression notations such as character classes, for example [0-9] to match any digit. All regular expression special characters work as specified in - , except for . 
which + , except for . which is taken as a separator as mentioned above, * which is translated to the regular-expression notation .*, ? which is translated to ., and @@ -3348,7 +3348,7 @@ bar This works in both regular SQL commands and meta-commands; there is more detail in , below. + endterm="app-psql-interpolation-title"/>, below. @@ -3743,7 +3743,7 @@ bar These specify what the prompts psql issues should look like. See below. + endterm="app-psql-prompting-title"/> below. @@ -3825,7 +3825,7 @@ bar SQLSTATE - The error code (see ) associated + The error code (see ) associated with the last SQL query's failure, or 00000 if it succeeded. @@ -4119,7 +4119,7 @@ testdb=> INSERT INTO my_table VALUES (:'content'); The value of the psql variable name. See the section for details. + endterm="app-psql-variables-title"/> for details. @@ -4230,7 +4230,7 @@ $endif - Default connection parameters (see ). + Default connection parameters (see ). @@ -4346,7 +4346,7 @@ PSQL_EDITOR_LINENUMBER_ARG='--line ' This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). diff --git a/doc/src/sgml/ref/reassign_owned.sgml b/doc/src/sgml/ref/reassign_owned.sgml index e29b88292b..0fffd6088a 100644 --- a/doc/src/sgml/ref/reassign_owned.sgml +++ b/doc/src/sgml/ref/reassign_owned.sgml @@ -82,7 +82,7 @@ REASSIGN OWNED BY { old_role | CURR - The command is an alternative that + The command is an alternative that simply drops all the database objects owned by one or more roles. @@ -94,7 +94,7 @@ REASSIGN OWNED BY { old_role | CURR - See for more discussion. + See for more discussion. @@ -112,9 +112,9 @@ REASSIGN OWNED BY { old_role | CURR See Also - - - + + + diff --git a/doc/src/sgml/ref/refresh_materialized_view.sgml b/doc/src/sgml/ref/refresh_materialized_view.sgml index e2ee836efb..9cf01a25a5 100644 --- a/doc/src/sgml/ref/refresh_materialized_view.sgml +++ b/doc/src/sgml/ref/refresh_materialized_view.sgml @@ -93,7 +93,7 @@ REFRESH MATERIALIZED VIEW [ CONCURRENTLY ] name While the default index for future - + operations is retained, REFRESH MATERIALIZED VIEW does not order the generated rows based on this property. If you want the data to be ordered upon generation, you must use an ORDER BY @@ -135,9 +135,9 @@ REFRESH MATERIALIZED VIEW annual_statistics_basis WITH NO DATA; See Also - - - + + + diff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml index 2e053c4c24..79f6931c6a 100644 --- a/doc/src/sgml/ref/reindex.sgml +++ b/doc/src/sgml/ref/reindex.sgml @@ -52,7 +52,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } REINDEX provides a way to reduce the space consumption of the index by writing a new version of the index without the dead pages. See for more information. + linkend="routine-reindex"/> for more information. @@ -192,7 +192,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } REINDEX SYSTEM to select reconstruction of all system indexes in the database. Then quit the single-user server session and restart the regular server. - See the reference page for more + See the reference page for more information about how to interact with the single-user server interface. diff --git a/doc/src/sgml/ref/reindexdb.sgml b/doc/src/sgml/ref/reindexdb.sgml index a7cc9c2d94..1273dad807 100644 --- a/doc/src/sgml/ref/reindexdb.sgml +++ b/doc/src/sgml/ref/reindexdb.sgml @@ -93,7 +93,7 @@ PostgreSQL documentation reindexdb is a wrapper around the SQL - command . + command . 
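A few representative invocations of the underlying command, following the synopsis above (object names are hypothetical):

REINDEX INDEX my_index;             -- rebuild a single index
REINDEX TABLE my_table;             -- rebuild all indexes of a table
REINDEX (VERBOSE) TABLE my_table;   -- additionally report each index as it is rebuilt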
There is no effective difference between reindexing databases via this utility and via other methods for accessing the server. @@ -347,7 +347,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). @@ -357,8 +357,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment @@ -377,7 +377,7 @@ PostgreSQL documentation times to the PostgreSQL server, asking for a password each time. It is convenient to have a ~/.pgpass file in such cases. See for more information. + linkend="libpq-pgpass"/> for more information. @@ -405,7 +405,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/release_savepoint.sgml b/doc/src/sgml/ref/release_savepoint.sgml index 7e629176b7..39665d28ef 100644 --- a/doc/src/sgml/ref/release_savepoint.sgml +++ b/doc/src/sgml/ref/release_savepoint.sgml @@ -42,7 +42,7 @@ RELEASE [ SAVEPOINT ] savepoint_name Destroying a savepoint makes it unavailable as a rollback point, but it has no other user visible behavior. It does not undo the effects of commands executed after the savepoint was established. - (To do that, see .) + (To do that, see .) Destroying a savepoint when it is no longer needed allows the system to reclaim some resources earlier than transaction end. @@ -120,11 +120,11 @@ COMMIT; See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/reset.sgml b/doc/src/sgml/ref/reset.sgml index bf3f5226ec..95599072e7 100644 --- a/doc/src/sgml/ref/reset.sgml +++ b/doc/src/sgml/ref/reset.sgml @@ -36,7 +36,7 @@ RESET ALL SET configuration_parameter TO DEFAULT - Refer to for + Refer to for details. @@ -49,7 +49,7 @@ SET configuration_parameter TO DEFA from defining it as the value that the parameter had at session start, because if the value came from the configuration file, it will be reset to whatever is specified by the configuration file now. - See for details. + See for details. @@ -67,8 +67,8 @@ SET configuration_parameter TO DEFA Name of a settable run-time parameter. Available parameters are - documented in and on the - reference page. + documented in and on the + reference page. @@ -106,8 +106,8 @@ RESET timezone; See Also - - + + diff --git a/doc/src/sgml/ref/revoke.sgml b/doc/src/sgml/ref/revoke.sgml index e3e3f2ffc3..4d133a782b 100644 --- a/doc/src/sgml/ref/revoke.sgml +++ b/doc/src/sgml/ref/revoke.sgml @@ -122,7 +122,7 @@ REVOKE [ ADMIN OPTION FOR ] - See the description of the command for + See the description of the command for the meaning of the privilege types. @@ -178,9 +178,9 @@ REVOKE [ ADMIN OPTION FOR ] Notes - Use 's \dp command to + Use 's \dp command to display the privileges granted on existing tables and columns. See for information about the + linkend="sql-grant"/> for information about the format. For non-table objects there are other \d commands that can display their privileges. @@ -282,7 +282,7 @@ REVOKE admins FROM joe; Compatibility - The compatibility notes of the command + The compatibility notes of the command apply analogously to REVOKE. 
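As a quick illustration of basic usage (the table name is illustrative; joe matches the earlier example):

GRANT SELECT, UPDATE ON films TO joe;
REVOKE UPDATE ON films FROM joe;             -- joe retains SELECT
REVOKE ALL PRIVILEGES ON films FROM PUBLIC;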
The keyword RESTRICT or CASCADE is required according to the standard, but PostgreSQL @@ -294,7 +294,7 @@ REVOKE admins FROM joe; See Also - + diff --git a/doc/src/sgml/ref/rollback.sgml b/doc/src/sgml/ref/rollback.sgml index 1f99343b08..3cafb848a9 100644 --- a/doc/src/sgml/ref/rollback.sgml +++ b/doc/src/sgml/ref/rollback.sgml @@ -54,7 +54,7 @@ ROLLBACK [ WORK | TRANSACTION ] Notes - Use to + Use to successfully terminate a transaction. @@ -88,9 +88,9 @@ ROLLBACK; See Also - - - + + + diff --git a/doc/src/sgml/ref/rollback_prepared.sgml b/doc/src/sgml/ref/rollback_prepared.sgml index d7468f78d7..08821a6652 100644 --- a/doc/src/sgml/ref/rollback_prepared.sgml +++ b/doc/src/sgml/ref/rollback_prepared.sgml @@ -99,8 +99,8 @@ ROLLBACK PREPARED 'foobar'; See Also - - + + diff --git a/doc/src/sgml/ref/rollback_to.sgml b/doc/src/sgml/ref/rollback_to.sgml index 1957cace11..4d5647a302 100644 --- a/doc/src/sgml/ref/rollback_to.sgml +++ b/doc/src/sgml/ref/rollback_to.sgml @@ -64,7 +64,7 @@ ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_nameNotes - Use to destroy a savepoint + Use to destroy a savepoint without discarding the effects of commands executed after it was established. @@ -148,11 +148,11 @@ COMMIT; See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/savepoint.sgml b/doc/src/sgml/ref/savepoint.sgml index 6fa11a7358..87243b1d20 100644 --- a/doc/src/sgml/ref/savepoint.sgml +++ b/doc/src/sgml/ref/savepoint.sgml @@ -64,8 +64,8 @@ SAVEPOINT savepoint_name Notes - Use to - rollback to a savepoint. Use + Use to + rollback to a savepoint. Use to destroy a savepoint, keeping the effects of commands executed after it was established. @@ -127,11 +127,11 @@ COMMIT; See Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/security_label.sgml b/doc/src/sgml/ref/security_label.sgml index ce5a1c1975..d52113e035 100644 --- a/doc/src/sgml/ref/security_label.sgml +++ b/doc/src/sgml/ref/security_label.sgml @@ -208,7 +208,7 @@ SECURITY LABEL FOR selinux ON TABLE mytable IS 'system_u:object_r:sepgsql_table_ See Also - + src/test/modules/dummy_seclabel diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml index 3aab3fd8a7..8a3e86b6db 100644 --- a/doc/src/sgml/ref/select.sgml +++ b/doc/src/sgml/ref/select.sgml @@ -94,7 +94,7 @@ TABLE [ ONLY ] table_name [ * ] in the FROM list. A WITH query that is referenced more than once in FROM is computed only once. - (See below.) + (See below.) @@ -104,7 +104,7 @@ TABLE [ ONLY ] table_name [ * ] (Each element in the FROM list is a real or virtual table.) If more than one element is specified in the FROM list, they are cross-joined together. - (See below.) + (See below.) @@ -113,7 +113,7 @@ TABLE [ ONLY ] table_name [ * ] If the WHERE clause is specified, all rows that do not satisfy the condition are eliminated from the output. (See below.) + endterm="sql-where-title"/> below.) @@ -125,8 +125,8 @@ TABLE [ ONLY ] table_name [ * ] values, and the results of aggregate functions are computed. If the HAVING clause is present, it eliminates groups that do not satisfy the given condition. (See - and - below.) + and + below.) @@ -135,7 +135,7 @@ TABLE [ ONLY ] table_name [ * ] The actual output rows are computed using the SELECT output expressions for each selected row or row group. (See - + below.) @@ -146,7 +146,7 @@ TABLE [ ONLY ] table_name [ * ] match on all the specified expressions. SELECT ALL (the default) will return all candidate rows, including duplicates. (See below.) + endterm="sql-distinct-title"/> below.) 
@@ -167,9 +167,9 @@ TABLE [ ONLY ] table_name [ * ] eliminating duplicate rows. Notice that DISTINCT is the default behavior here, even though ALL is the default for SELECT itself. (See - , , and - below.) + , , and + below.) @@ -179,7 +179,7 @@ TABLE [ ONLY ] table_name [ * ] returned rows are sorted in the specified order. If ORDER BY is not given, the rows are returned in whatever order the system finds fastest to produce. (See - below.) + below.) @@ -188,7 +188,7 @@ TABLE [ ONLY ] table_name [ * ] If the LIMIT (or FETCH FIRST) or OFFSET clause is specified, the SELECT statement only returns a subset of the result rows. (See below.) + linkend="sql-limit" endterm="sql-limit-title"/> below.) @@ -199,7 +199,7 @@ TABLE [ ONLY ] table_name [ * ] is specified, the SELECT statement locks the selected rows against concurrent updates. (See below.) + endterm="sql-for-update-share-title"/> below.) @@ -258,7 +258,7 @@ TABLE [ ONLY ] table_name [ * ] is permitted per query. Recursive data-modifying statements are not supported, but you can use the results of a recursive SELECT query in - a data-modifying statement. See for + a data-modifying statement. See for an example. @@ -291,7 +291,7 @@ TABLE [ ONLY ] table_name [ * ] - See for additional information. + See for additional information. @@ -410,7 +410,7 @@ TABLE [ ONLY ] table_name [ * ] sub-SELECT must be surrounded by parentheses, and an alias must be provided for it. A - command + command can also be used here. @@ -714,7 +714,7 @@ GROUP BY grouping_element [, ...] equivalent to constructing a UNION ALL between subqueries with the individual grouping sets as their GROUP BY clauses. For further details on the handling - of grouping sets see . + of grouping sets see . @@ -725,7 +725,7 @@ GROUP BY grouping_element [, ...] the selected rows.) The set of rows fed to each aggregate function can be further filtered by attaching a FILTER clause to the aggregate function - call; see for more information. When + call; see for more information. When a FILTER clause is present, only those rows matching it are included in the input to that aggregate function. @@ -747,7 +747,7 @@ GROUP BY grouping_element [, ...] evaluating any scalar expressions in the HAVING clause or SELECT list. This means that, for example, a CASE expression cannot be used to skip evaluation of - an aggregate function; see . + an aggregate function; see . @@ -834,7 +834,7 @@ WINDOW window_name AS ( The elements of the PARTITION BY list are interpreted in much the same fashion as elements of a - , except that + , except that they are always simple expressions and never the name or number of an output column. Another difference is that these expressions can contain aggregate @@ -846,7 +846,7 @@ WINDOW window_name AS ( Similarly, the elements of the ORDER BY list are interpreted in much the same fashion as elements of an - , except that + , except that the expressions are always taken as simple expressions and never the name or number of an output column. @@ -920,8 +920,8 @@ UNBOUNDED FOLLOWING The purpose of a WINDOW clause is to specify the behavior of window functions appearing in the query's - or - . These functions + or + . These functions can reference the WINDOW clause entries by name in their OVER clauses. A WINDOW clause entry does not have to be referenced anywhere, however; if it is not @@ -941,9 +941,9 @@ UNBOUNDED FOLLOWING Window functions are described in detail in - , - , and - . + , + , and + . @@ -969,7 +969,7 @@ UNBOUNDED FOLLOWING after the column's expression. 
(You can omit AS, but only if the desired output name does not match any PostgreSQL keyword (see ). For protection against possible + linkend="sql-keywords-appendix"/>). For protection against possible future keyword additions, it is recommended that you always either write AS or double-quote the output name.) If you do not specify a column name, a name is chosen automatically @@ -1311,8 +1311,8 @@ SELECT name FROM distributors ORDER BY code; a COLLATE clause in the expression, for example ORDER BY mycolumn COLLATE "en_US". - For more information see and - . + For more information see and + . @@ -1425,7 +1425,7 @@ KEY SHARE For more information on each row-level lock mode, refer to - . + . @@ -1441,8 +1441,8 @@ KEY SHARE Note that NOWAIT and SKIP LOCKED apply only to the row-level lock(s) — the required ROW SHARE table-level lock is still taken in the ordinary way (see - ). You can use - + ). You can use + with the NOWAIT option first, if you need to acquire the table-level lock without waiting. @@ -1791,7 +1791,7 @@ SELECT distance, employee_name FROM employee_recursive; an initial condition, followed by UNION, followed by the recursive part of the query. Be sure that the recursive part of the query will eventually return no tuples, or - else the query will loop indefinitely. (See + else the query will loop indefinitely. (See for more examples.) @@ -2001,7 +2001,7 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; used by MySQL. The SQL:2008 standard has introduced the clauses OFFSET ... FETCH {FIRST|NEXT} ... for the same functionality, as shown above - in . This + in . This syntax is also used by IBM DB2. (Applications written for Oracle frequently use a workaround involving the automatically diff --git a/doc/src/sgml/ref/select_into.sgml b/doc/src/sgml/ref/select_into.sgml index a5b6ac9245..6c1a25f5ed 100644 --- a/doc/src/sgml/ref/select_into.sgml +++ b/doc/src/sgml/ref/select_into.sgml @@ -60,7 +60,7 @@ SELECT [ ALL | DISTINCT [ ON ( expression If specified, the table is created as a temporary table. Refer - to for details. + to for details. @@ -70,7 +70,7 @@ SELECT [ ALL | DISTINCT [ ON ( expression If specified, the table is created as an unlogged table. Refer - to for details. + to for details. @@ -87,7 +87,7 @@ SELECT [ ALL | DISTINCT [ ON ( expression All other parameters are described in detail under . + linkend="sql-select"/>. @@ -95,7 +95,7 @@ SELECT [ ALL | DISTINCT [ ON ( expressionNotes - is functionally similar to + is functionally similar to SELECT INTO. CREATE TABLE AS is the recommended syntax, since this form of SELECT INTO is not available in ECPG @@ -107,7 +107,7 @@ SELECT [ ALL | DISTINCT [ ON ( expression To add OIDs to the table created by SELECT INTO, - enable the configuration + enable the configuration variable. Alternatively, CREATE TABLE AS can be used with the WITH OIDS clause. @@ -132,8 +132,8 @@ SELECT * INTO films_recent FROM films WHERE date_prod >= '2002-01-01'; The SQL standard uses SELECT INTO to represent selecting values into scalar variables of a host program, rather than creating a new table. This indeed is the usage found - in ECPG (see ) and - PL/pgSQL (see ). + in ECPG (see ) and + PL/pgSQL (see ). The PostgreSQL usage of SELECT INTO to represent table creation is historical. 
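For comparison, the same table creation in both spellings:

-- historical form, as in the example above:
SELECT * INTO films_recent FROM films WHERE date_prod >= '2002-01-01';
-- equivalent spelling:
DROP TABLE films_recent;
CREATE TABLE films_recent AS
  SELECT * FROM films WHERE date_prod >= '2002-01-01';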
It is best to use CREATE TABLE AS for this purpose in @@ -145,7 +145,7 @@ SELECT * INTO films_recent FROM films WHERE date_prod >= '2002-01-01'; See Also - + diff --git a/doc/src/sgml/ref/set.sgml b/doc/src/sgml/ref/set.sgml index 4bc8108765..63f312e812 100644 --- a/doc/src/sgml/ref/set.sgml +++ b/doc/src/sgml/ref/set.sgml @@ -32,7 +32,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone The SET command changes run-time configuration parameters. Many of the run-time parameters listed in - can be changed on-the-fly with + can be changed on-the-fly with SET. (But some require superuser privileges to change, and others cannot be changed after server or session start.) @@ -67,7 +67,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone If SET LOCAL is used within a function that has a SET option for the same variable (see - ), + ), the effects of the SET LOCAL command disappear at function exit; that is, the value in effect when the function was called is restored anyway. This allows SET LOCAL to be used for @@ -122,7 +122,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone Name of a settable run-time parameter. Available parameters are - documented in and below. + documented in and below. @@ -145,7 +145,7 @@ SET [ SESSION | LOCAL ] TIME ZONE { timezone Besides the configuration parameters documented in , there are a few that can only be + linkend="runtime-config"/>, there are a few that can only be adjusted using the SET command or that have a special syntax: @@ -253,7 +253,7 @@ SELECT setseed(value); - See for more information + See for more information about time zones. @@ -267,7 +267,7 @@ SELECT setseed(value); The function set_config provides equivalent - functionality; see . + functionality; see . Also, it is possible to UPDATE the pg_settings system view to perform the equivalent of SET. @@ -323,8 +323,8 @@ SET TIME ZONE 'Europe/Rome'; See Also - - + + diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml index 351e953f75..0ef6eb9a9c 100644 --- a/doc/src/sgml/ref/set_role.sgml +++ b/doc/src/sgml/ref/set_role.sgml @@ -48,7 +48,7 @@ RESET ROLE The SESSION and LOCAL modifiers act the same - as for the regular + as for the regular command. @@ -82,7 +82,7 @@ RESET ROLE SET ROLE has effects comparable to - , but the privilege + , but the privilege checks involved are quite different. Also, SET SESSION AUTHORIZATION determines which roles are allowable for later SET ROLE commands, whereas changing @@ -92,7 +92,7 @@ RESET ROLE SET ROLE does not process session variables as specified by - the role's settings; this only happens during + the role's settings; this only happens during login. @@ -142,7 +142,7 @@ SELECT SESSION_USER, CURRENT_USER; See Also - + diff --git a/doc/src/sgml/ref/set_session_auth.sgml b/doc/src/sgml/ref/set_session_auth.sgml index 45fa378e18..37b9ff8b18 100644 --- a/doc/src/sgml/ref/set_session_auth.sgml +++ b/doc/src/sgml/ref/set_session_auth.sgml @@ -41,7 +41,7 @@ RESET SESSION AUTHORIZATION identifier is normally equal to the session user identifier, but might change temporarily in the context of SECURITY DEFINER functions and similar mechanisms; it can also be changed by - . + . The current user identifier is relevant for permission checking. @@ -54,7 +54,7 @@ RESET SESSION AUTHORIZATION The SESSION and LOCAL modifiers act the same - as for the regular + as for the regular command. 
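A brief sketch of the SESSION/LOCAL distinction (the schema name is hypothetical):

SET search_path TO public;               -- SESSION is the default scope
BEGIN;
SET LOCAL search_path TO report_schema;  -- effective only until COMMIT or ROLLBACK
SHOW search_path;                        -- report_schema
COMMIT;
SHOW search_path;                        -- public again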
@@ -120,7 +120,7 @@ SELECT SESSION_USER, CURRENT_USER; See Also - + diff --git a/doc/src/sgml/ref/set_transaction.sgml b/doc/src/sgml/ref/set_transaction.sgml index 3ab1e6f771..43b1c6c892 100644 --- a/doc/src/sgml/ref/set_transaction.sgml +++ b/doc/src/sgml/ref/set_transaction.sgml @@ -119,7 +119,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa INSERT, DELETE, UPDATE, FETCH, or COPY) of a transaction has been executed. See - for more information about transaction + for more information about transaction isolation and concurrency control. @@ -156,7 +156,7 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa transaction to run with the same snapshot as an existing transaction. The pre-existing transaction must have exported its snapshot with the pg_export_snapshot function (see ). That function returns a + linkend="functions-snapshot-synchronization"/>). That function returns a snapshot identifier, which must be given to SET TRANSACTION SNAPSHOT to specify which snapshot is to be imported. The identifier must be written as a string literal in this command, for example @@ -199,13 +199,13 @@ SET SESSION CHARACTERISTICS AS TRANSACTION transa The session default transaction modes can also be set by setting the - configuration parameters , - , and - . + configuration parameters , + , and + . (In fact SET SESSION CHARACTERISTICS is just a verbose equivalent for setting these variables with SET.) This means the defaults can be set in the configuration file, via - ALTER DATABASE, etc. Consult + ALTER DATABASE, etc. Consult for more information. diff --git a/doc/src/sgml/ref/show.sgml b/doc/src/sgml/ref/show.sgml index 53b47ac3d8..945b0491b1 100644 --- a/doc/src/sgml/ref/show.sgml +++ b/doc/src/sgml/ref/show.sgml @@ -38,7 +38,7 @@ SHOW ALL libpq or a libpq-based application), or through command-line flags when starting the postgres server. See for details. + linkend="runtime-config"/> for details. @@ -51,8 +51,8 @@ SHOW ALL The name of a run-time parameter. Available parameters are - documented in and on the reference page. In + documented in and on the reference page. In addition, there are a few parameters that can be shown but not set: @@ -129,7 +129,7 @@ SHOW ALL The function current_setting produces - equivalent output; see . + equivalent output; see . Also, the pg_settings system view produces the same information. @@ -192,8 +192,8 @@ SHOW ALL; See Also - - + + diff --git a/doc/src/sgml/ref/start_transaction.sgml b/doc/src/sgml/ref/start_transaction.sgml index 605fda5357..d6cd1d4177 100644 --- a/doc/src/sgml/ref/start_transaction.sgml +++ b/doc/src/sgml/ref/start_transaction.sgml @@ -37,8 +37,8 @@ START TRANSACTION [ transaction_mode This command begins a new transaction block. If the isolation level, read/write mode, or deferrable mode is specified, the new transaction has those - characteristics, as if was executed. This is the same - as the command. + characteristics, as if was executed. This is the same + as the command. @@ -46,7 +46,7 @@ START TRANSACTION [ transaction_modeParameters - Refer to for information on the meaning + Refer to for information on the meaning of the parameters to this statement. @@ -79,7 +79,7 @@ START TRANSACTION [ transaction_mode - See also the compatibility section of . + See also the compatibility section of . 
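To illustrate the ordering rule (the transaction mode must be set before any query or data-modifying statement of the transaction; the table name is illustrative):

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM films;   -- first statement, already runs at REPEATABLE READ
COMMIT;
-- or establish a session-wide default instead:
SET default_transaction_isolation = 'repeatable read';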
@@ -87,11 +87,11 @@ START TRANSACTION [ transaction_modeSee Also - - - - - + + + + + diff --git a/doc/src/sgml/ref/truncate.sgml b/doc/src/sgml/ref/truncate.sgml index 6892516987..c1e42376ab 100644 --- a/doc/src/sgml/ref/truncate.sgml +++ b/doc/src/sgml/ref/truncate.sgml @@ -144,7 +144,7 @@ TRUNCATE [ TABLE ] [ ONLY ] name [ TRUNCATE is not MVCC-safe. After truncation, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the truncation occurred. - See for more details. + See for more details. @@ -225,7 +225,7 @@ TRUNCATE othertable CASCADE; See Also - + diff --git a/doc/src/sgml/ref/unlisten.sgml b/doc/src/sgml/ref/unlisten.sgml index 1bffac3cb2..687bf485c9 100644 --- a/doc/src/sgml/ref/unlisten.sgml +++ b/doc/src/sgml/ref/unlisten.sgml @@ -40,7 +40,7 @@ UNLISTEN { channel | * } - + contains a more extensive discussion of the use of LISTEN and NOTIFY. @@ -126,8 +126,8 @@ NOTIFY virtual; See Also - - + + diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml index 0e99aa9739..c0d0f7134d 100644 --- a/doc/src/sgml/ref/update.sgml +++ b/doc/src/sgml/ref/update.sgml @@ -81,7 +81,7 @@ UPDATE [ ONLY ] table_name [ * ] [ The WITH clause allows you to specify one or more subqueries that can be referenced by name in the UPDATE - query. See and + query. See and for details. @@ -171,7 +171,7 @@ UPDATE [ ONLY ] table_name [ * ] [ to appear in the WHERE condition and the update expressions. This is similar to the list of tables that can be specified in the of a SELECT + endterm="sql-from-title"/> of a SELECT statement. Note that the target table must not appear in the from_list, unless you intend a self-join (in which case it must appear with an alias in the from_list). @@ -200,7 +200,7 @@ UPDATE [ ONLY ] table_name [ * ] [ query on the UPDATE's target table. Note that WHERE CURRENT OF cannot be specified together with a Boolean condition. See - + for more information about using cursors with WHERE CURRENT OF. diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml index 7ecd08977c..b760e8ede1 100644 --- a/doc/src/sgml/ref/vacuum.sgml +++ b/doc/src/sgml/ref/vacuum.sgml @@ -61,7 +61,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ + for more details about its processing. @@ -113,8 +113,8 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ and - parameters + and + parameters set to zero. Aggressive freezing is always performed when the table is rewritten, so this option is redundant when FULL is specified. @@ -215,7 +215,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ for details. + structure. See for details. @@ -243,14 +243,14 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ for details. + See for details. PostgreSQL includes an autovacuum facility which can automate routine vacuum maintenance. For more information about automatic and manual vacuuming, see - . + . @@ -278,9 +278,9 @@ VACUUM (VERBOSE, ANALYZE) onek; See Also - - - + + + diff --git a/doc/src/sgml/ref/vacuumdb.sgml b/doc/src/sgml/ref/vacuumdb.sgml index 2d47d8c1f1..955a17a849 100644 --- a/doc/src/sgml/ref/vacuumdb.sgml +++ b/doc/src/sgml/ref/vacuumdb.sgml @@ -62,7 +62,7 @@ PostgreSQL documentation vacuumdb is a wrapper around the SQL - command . + command . There is no effective difference between vacuuming and analyzing databases via this utility and via other methods for accessing the server. 
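Referring back to the UPDATE reference page above, a sketch of the from_list usage (table and column names are hypothetical):

UPDATE employees e
   SET sales_count = sales_count + 1
  FROM accounts a
 WHERE a.sales_person = e.id
   AND a.name = 'Acme Corporation';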
@@ -146,7 +146,7 @@ PostgreSQL documentation vacuumdb will open njobs connections to the - database, so make sure your + database, so make sure your setting is high enough to accommodate all connections. @@ -372,7 +372,7 @@ PostgreSQL documentation This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq - (see ). + (see ). @@ -382,8 +382,8 @@ PostgreSQL documentation Diagnostics - In case of difficulty, see - and for + In case of difficulty, see + and for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment @@ -402,7 +402,7 @@ PostgreSQL documentation times to the PostgreSQL server, asking for a password each time. It is convenient to have a ~/.pgpass file in such cases. See for more information. + linkend="libpq-pgpass"/> for more information. @@ -439,7 +439,7 @@ PostgreSQL documentation See Also - + diff --git a/doc/src/sgml/ref/values.sgml b/doc/src/sgml/ref/values.sgml index 6b8083fc9d..849220b120 100644 --- a/doc/src/sgml/ref/values.sgml +++ b/doc/src/sgml/ref/values.sgml @@ -44,7 +44,7 @@ VALUES ( expression [, ...] ) [, .. number of elements. The data types of the resulting table's columns are determined by combining the explicit or inferred types of the expressions appearing in that column, using the same rules as for UNION - (see ). + (see ). @@ -85,7 +85,7 @@ VALUES ( expression [, ...] ) [, .. rows. This expression can refer to the columns of the VALUES result as column1, column2, etc. For more details see - . + . @@ -95,7 +95,7 @@ VALUES ( expression [, ...] ) [, .. A sorting operator. For details see - . + . @@ -105,7 +105,7 @@ VALUES ( expression [, ...] ) [, .. The maximum number of rows to return. For details see - . + . @@ -116,7 +116,7 @@ VALUES ( expression [, ...] ) [, .. The number of rows to skip before starting to return rows. For details see - . + . @@ -233,7 +233,7 @@ WHERE ip_address IN (VALUES('192.168.0.1'::inet), ('192.168.0.10'), ('192.168.1. VALUES conforms to the SQL standard. LIMIT and OFFSET are PostgreSQL extensions; see also - under . + under . @@ -241,8 +241,8 @@ WHERE ip_address IN (VALUES('192.168.0.1'::inet), ('192.168.0.10'), ('192.168.1. See Also - - + + diff --git a/doc/src/sgml/reference.sgml b/doc/src/sgml/reference.sgml index 9000b3aaaa..d20eaa87e7 100644 --- a/doc/src/sgml/reference.sgml +++ b/doc/src/sgml/reference.sgml @@ -264,7 +264,7 @@ PostgreSQL server applications and support utilities. These commands can only be run usefully on the host where the database server resides. Other utility programs - are listed in . + are listed in . diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml index e83edf96ec..53716a029f 100644 --- a/doc/src/sgml/regress.sgml +++ b/doc/src/sgml/regress.sgml @@ -52,7 +52,7 @@ make check or otherwise a note about which tests failed. See below before assuming that a + linkend="regress-evaluation"/> below before assuming that a failure represents a serious problem. @@ -97,9 +97,9 @@ make MAX_CONNECTIONS=10 check Running the Tests Against an Existing Installation - To run the tests after installation (see ), + To run the tests after installation (see ), initialize a data area and start the - server as explained in , then type: + server as explained in , then type: make installcheck @@ -193,7 +193,7 @@ make installcheck-world Tests of client programs under src/bin. See - also . + also . 
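Referring back to the VALUES reference page above, a minimal standalone use with column aliases:

SELECT * FROM (VALUES (1, 'one'), (2, 'two'), (3, 'three')) AS t (num, letter);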
@@ -371,7 +371,7 @@ make standbycheck for a given test, but inspection of the output convinces you that the result is valid, you can add a new comparison file to silence the failure report in future test runs. See - for details. + for details. @@ -521,7 +521,7 @@ exclusion of those that don't. If the errors test results in a server crash at the select infinite_recurse() command, it means that the platform's limit on process stack size is smaller than the - parameter indicates. This + parameter indicates. This can be fixed by running the server under a higher stack size limit (4MB is recommended with the default value of max_stack_depth). If you are unable to do that, an diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 30d602a053..18323b349c 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -12,7 +12,7 @@ This release contains a variety of fixes from 10.0. For information about new features in major release 10, see - . + . @@ -781,8 +781,8 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Migration to Version 10 - A dump/restore using , or use of , is required for those wishing to migrate data + A dump/restore using , or use of , is required for those wishing to migrate data from any previous release. @@ -893,7 +893,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 functions' periods. In addition, set-returning functions are now disallowed within CASE and COALESCE constructs. For more information - see . + see . @@ -1003,7 +1003,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2017-01-04 [9a4d51077] Make wal streaming the default mode for pg_basebackup --> - Make stream the + Make stream the WAL needed to restore the backup by default (Magnus Hagander) @@ -1043,7 +1043,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2017-01-14 [05cd12ed5] pg_ctl: Change default to wait for all actions --> - Make all actions wait + Make all actions wait for completion by default (Peter Eisentraut) @@ -1058,7 +1058,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2017-03-27 [3371e4d9b] Change default of log_directory to 'log' --> - Change the default value of the + Change the default value of the server parameter from pg_log to log (Andreas Karlsson) @@ -1069,7 +1069,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2017-07-31 [c0a15e07c] Always use 2048 bit DH parameters for OpenSSL ephemeral --> - Add configuration option to + Add configuration option to specify file name for custom OpenSSL DH parameters (Heikki Linnakangas) @@ -1098,7 +1098,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 DH parameters longer than 1024 bits, and hence will not be able to connect over SSL. If it's necessary to support such old clients, you can use custom 1024-bit DH parameters instead of the compiled-in - defaults. See . + defaults. See . @@ -1112,7 +1112,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 - The server parameter + The server parameter no longer supports off or plain. The UNENCRYPTED option is no longer supported in CREATE/ALTER USER ... PASSWORD. Similarly, the @@ -1129,8 +1129,8 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2017-02-15 [51ee6f316] Replace min_parallel_relation_size with two new GUCs. 
--> - Add - and server + Add + and server parameters to control parallel queries (Amit Kapila, Robert Haas) @@ -1146,7 +1146,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 --> Don't downcase unquoted text - within and related + within and related server parameters (QL Zhuo) @@ -1205,8 +1205,8 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 This functionality has been replaced by new server - parameters - and , which are easier to use + parameters + and , which are easier to use and more similar to features available in other PLs. @@ -1398,14 +1398,14 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2016-12-05 [2b959d495] Reduce the default for max_worker_processes back to 8. --> - Add server parameter + Add server parameter to limit the number of worker processes that can be used for query parallelism (Julien Rouhaud) This parameter can be set lower than to reserve worker processes + linkend="guc-max-worker-processes"/> to reserve worker processes for purposes other than parallel queries. @@ -1548,7 +1548,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 For example, changing a table's setting can now be done + linkend="guc-effective-io-concurrency"/> setting can now be done with a more lightweight lock. @@ -1565,8 +1565,8 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Lock promotion can now be controlled through two new server parameters, and - . + linkend="guc-max-pred-locks-per-relation"/> and + . @@ -1768,7 +1768,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2016-10-17 [7d3235ba4] By default, set log_line_prefix = '%m [%p] '. --> - Change the default value of + Change the default value of to include current timestamp (with milliseconds) and the process ID in each line of postmaster log output (Christoph Berg) @@ -1845,12 +1845,12 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Reduce message verbosity of lower-numbered debug levels controlled by - (Robert Haas) + (Robert Haas) This also changes the verbosity of debug levels. + linkend="guc-client-min-messages"/> debug levels. @@ -1957,7 +1957,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2016-09-28 [babe05bc2] Turn password_encryption GUC into an enum. --> - Change the server parameter + Change the server parameter from boolean to enum (Michael Paquier) @@ -2034,7 +2034,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 --> Make the maximum value of effectively unlimited + linkend="guc-bgwriter-lru-maxpages"/> effectively unlimited (Jim Nasby) @@ -2085,7 +2085,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2017-03-14 [bb4a39637] hash: Support WAL consistency checking. --> - Add server parameter + Add server parameter to add details to WAL that can be sanity-checked on the standby (Kuntal Ghosh, Robert Haas) @@ -2106,7 +2106,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 A larger WAL segment size allows for fewer - invocations and fewer + invocations and fewer WAL files to manage. 
@@ -2150,7 +2150,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Allow waiting for commit acknowledgement from standby servers irrespective of the order they appear in (Masahiko Sawada) + linkend="guc-synchronous-standby-names"/> (Masahiko Sawada) @@ -2174,9 +2174,9 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Specifically, the defaults were changed for , , - , and to make them suitable for these usages + linkend="guc-wal-level"/>, , + , and to make them suitable for these usages out-of-the-box. @@ -2194,7 +2194,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Previously pg_hba.conf's replication connection lines were commented out by default. This is particularly useful for - . + . @@ -2497,7 +2497,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 This information is also included in output. + linkend="guc-log-autovacuum-min-duration"/> output. @@ -2830,8 +2830,8 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2017-03-07 [0d2b1f305] Invent start_proc parameters for PL/Tcl. --> - Add server parameters - and , to allow initialization + Add server parameters + and , to allow initialization functions to be called on PL/Tcl startup (Tom Lane) @@ -2939,7 +2939,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Client Applications - <xref linkend="app-psql"> + <xref linkend="app-psql"/> @@ -3080,7 +3080,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 - <xref linkend="pgbench"> + <xref linkend="pgbench"/> @@ -3258,7 +3258,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 - <xref linkend="app-pgbasebackup"> + <xref linkend="app-pgbasebackup"/> @@ -3331,7 +3331,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 - <application><xref linkend="app-pg-ctl"></application> + <application><xref linkend="app-pg-ctl"/></application> @@ -3340,7 +3340,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 2016-09-21 [e7010ce47] pg_ctl: Add wait option to promote action --> - Add wait option for 's + Add wait option for 's promote operation (Peter Eisentraut) diff --git a/doc/src/sgml/release-7.4.sgml b/doc/src/sgml/release-7.4.sgml index bdbfe8e006..a67945a42b 100644 --- a/doc/src/sgml/release-7.4.sgml +++ b/doc/src/sgml/release-7.4.sgml @@ -12,7 +12,7 @@ This release contains a variety of fixes from 7.4.29. For information about new features in the 7.4 major release, see - . + . @@ -27,7 +27,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.26, - see . + see . @@ -146,7 +146,7 @@ This release contains a variety of fixes from 7.4.28. For information about new features in the 7.4 major release, see - . + . @@ -161,7 +161,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.26, - see . + see . @@ -290,7 +290,7 @@ This release contains a variety of fixes from 7.4.27. For information about new features in the 7.4 major release, see - . + . @@ -305,7 +305,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.26, - see . + see . @@ -409,7 +409,7 @@ This release contains a variety of fixes from 7.4.26. For information about new features in the 7.4 major release, see - . + . @@ -418,7 +418,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.26, - see . + see . 
@@ -529,7 +529,7 @@ This release contains a variety of fixes from 7.4.25. For information about new features in the 7.4 major release, see - . + . @@ -540,7 +540,7 @@ However, if you have any hash indexes on interval columns, you must REINDEX them after updating to 7.4.26. Also, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -659,7 +659,7 @@ This release contains a variety of fixes from 7.4.24. For information about new features in the 7.4 major release, see - . + . @@ -668,7 +668,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -739,7 +739,7 @@ This release contains a variety of fixes from 7.4.23. For information about new features in the 7.4 major release, see - . + . @@ -748,7 +748,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -825,7 +825,7 @@ This release contains a variety of fixes from 7.4.22. For information about new features in the 7.4 major release, see - . + . @@ -834,7 +834,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -909,7 +909,7 @@ This release contains a variety of fixes from 7.4.21. For information about new features in the 7.4 major release, see - . + . @@ -918,7 +918,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -985,7 +985,7 @@ This release contains one serious bug fix over 7.4.20. For information about new features in the 7.4 major release, see - . + . @@ -994,7 +994,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1039,7 +1039,7 @@ This release contains a variety of fixes from 7.4.19. For information about new features in the 7.4 major release, see - . + . @@ -1048,7 +1048,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1185,7 +1185,7 @@ This release contains a variety of fixes from 7.4.18, including fixes for significant security issues. For information about new features in the 7.4 major release, see - . + . @@ -1194,7 +1194,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1338,7 +1338,7 @@ This release contains fixes from 7.4.17. For information about new features in the 7.4 major release, see - . + . @@ -1347,7 +1347,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1415,7 +1415,7 @@ This release contains fixes from 7.4.16, including a security fix. For information about new features in the 7.4 major release, see - . + . @@ -1424,7 +1424,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1486,7 +1486,7 @@ This release contains a variety of fixes from 7.4.15, including a security fix. For information about new features in the 7.4 major release, see - . + . @@ -1495,7 +1495,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . 
@@ -1556,7 +1556,7 @@ This release contains a variety of fixes from 7.4.14. For information about new features in the 7.4 major release, see - . + . @@ -1565,7 +1565,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1645,7 +1645,7 @@ This release contains a variety of fixes from 7.4.13. For information about new features in the 7.4 major release, see - . + . @@ -1654,7 +1654,7 @@ A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1693,7 +1693,7 @@ ANYARRAY This release contains a variety of fixes from 7.4.12, including patches for extremely serious security issues. For information about new features in the 7.4 major release, see - . + . @@ -1702,7 +1702,7 @@ ANYARRAY A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1799,7 +1799,7 @@ Fuhr) This release contains a variety of fixes from 7.4.11. For information about new features in the 7.4 major release, see - . + . @@ -1808,7 +1808,7 @@ Fuhr) A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.11, - see . + see . @@ -1862,7 +1862,7 @@ and isinf during configure (Tom) This release contains a variety of fixes from 7.4.10. For information about new features in the 7.4 major release, see - . + . @@ -1871,7 +1871,7 @@ and isinf during configure (Tom) A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.8, - see . + see . Also, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or plperl issues described below. @@ -1929,7 +1929,7 @@ what's actually returned by the query (Joe) This release contains a variety of fixes from 7.4.9. For information about new features in the 7.4 major release, see - . + . @@ -1938,7 +1938,7 @@ what's actually returned by the query (Joe) A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.8, - see . + see . @@ -1982,7 +1982,7 @@ table has been dropped This release contains a variety of fixes from 7.4.8. For information about new features in the 7.4 major release, see - . + . @@ -1991,7 +1991,7 @@ table has been dropped A dump/restore is not required for those running 7.4.X. However, if you are upgrading from a version earlier than 7.4.8, - see . + see . @@ -2052,7 +2052,7 @@ code This release contains a variety of fixes from 7.4.7, including several security-related issues. For information about new features in the 7.4 major release, see - . + . @@ -2236,7 +2236,7 @@ holder of the lock released it within a very narrow window. This release contains a variety of fixes from 7.4.6, including several security-related issues. For information about new features in the 7.4 major release, see - . + . @@ -2295,7 +2295,7 @@ GMT This release contains a variety of fixes from 7.4.5. For information about new features in the 7.4 major release, see - . + . @@ -2367,7 +2367,7 @@ ECPG prepare statement This release contains one serious bug fix over 7.4.4. For information about new features in the 7.4 major release, see - . + . @@ -2405,7 +2405,7 @@ still worth a re-release. The bug does not exist in pre-7.4 releases. This release contains a variety of fixes from 7.4.3. 
For information about new features in the 7.4 major release, see - . + . @@ -2457,7 +2457,7 @@ aggregate plan This release contains a variety of fixes from 7.4.2. For information about new features in the 7.4 major release, see - . + . @@ -2515,7 +2515,7 @@ names from outer query levels. This release contains a variety of fixes from 7.4.1. For information about new features in the 7.4 major release, see - . + . @@ -2658,7 +2658,7 @@ inconveniences associated with the i/I problem. This release contains a variety of fixes from 7.4. For information about new features in the 7.4 major release, see - . + . diff --git a/doc/src/sgml/release-8.0.sgml b/doc/src/sgml/release-8.0.sgml index 46ca87e93a..6171e0d1ee 100644 --- a/doc/src/sgml/release-8.0.sgml +++ b/doc/src/sgml/release-8.0.sgml @@ -12,7 +12,7 @@ This release contains a variety of fixes from 8.0.25. For information about new features in the 8.0 major release, see - . + . @@ -27,7 +27,7 @@ A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.22, - see . + see . @@ -216,7 +216,7 @@ This release contains a variety of fixes from 8.0.24. For information about new features in the 8.0 major release, see - . + . @@ -231,7 +231,7 @@ A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.22, - see . + see . @@ -376,7 +376,7 @@ This release contains a variety of fixes from 8.0.23. For information about new features in the 8.0 major release, see - . + . @@ -391,7 +391,7 @@ A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.22, - see . + see . @@ -553,7 +553,7 @@ This release contains a variety of fixes from 8.0.22. For information about new features in the 8.0 major release, see - . + . @@ -562,7 +562,7 @@ A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.22, - see . + see . @@ -708,7 +708,7 @@ This release contains a variety of fixes from 8.0.21. For information about new features in the 8.0 major release, see - . + . @@ -719,7 +719,7 @@ However, if you have any hash indexes on interval columns, you must REINDEX them after updating to 8.0.22. Also, if you are upgrading from a version earlier than 8.0.6, - see . + see . @@ -872,7 +872,7 @@ This release contains a variety of fixes from 8.0.20. For information about new features in the 8.0 major release, see - . + . @@ -881,7 +881,7 @@ A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.6, - see . + see . @@ -952,7 +952,7 @@ This release contains a variety of fixes from 8.0.19. For information about new features in the 8.0 major release, see - . + . @@ -961,7 +961,7 @@ A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.6, - see . + see . @@ -1038,7 +1038,7 @@ This release contains a variety of fixes from 8.0.18. For information about new features in the 8.0 major release, see - . + . @@ -1047,7 +1047,7 @@ A dump/restore is not required for those running 8.0.X. However, if you are upgrading from a version earlier than 8.0.6, - see . + see . @@ -1149,7 +1149,7 @@ This release contains a variety of fixes from 8.0.17. For information about new features in the 8.0 major release, see - . + . @@ -1158,7 +1158,7 @@ A dump/restore is not required for those running 8.0.X. 
diff --git a/doc/src/sgml/release-8.1.sgml b/doc/src/sgml/release-8.1.sgml
index 6827afd7e0..44a30892fd 100644
--- a/doc/src/sgml/release-8.1.sgml
+++ b/doc/src/sgml/release-8.1.sgml
diff --git a/doc/src/sgml/release-8.2.sgml b/doc/src/sgml/release-8.2.sgml
index 39666e665b..51239f9b9d 100644
--- a/doc/src/sgml/release-8.2.sgml
+++ b/doc/src/sgml/release-8.2.sgml
diff --git a/doc/src/sgml/release-8.3.sgml b/doc/src/sgml/release-8.3.sgml
index 844f796179..2e10bc4982 100644
--- a/doc/src/sgml/release-8.3.sgml
+++ b/doc/src/sgml/release-8.3.sgml
diff --git a/doc/src/sgml/release-8.4.sgml b/doc/src/sgml/release-8.4.sgml
index 521048ad93..6e6efa3bd1 100644
--- a/doc/src/sgml/release-8.4.sgml
+++ b/doc/src/sgml/release-8.4.sgml
diff --git a/doc/src/sgml/release-9.0.sgml b/doc/src/sgml/release-9.0.sgml
index d5b3239c30..9e90f5a7f3 100644
--- a/doc/src/sgml/release-9.0.sgml
+++ b/doc/src/sgml/release-9.0.sgml
diff --git a/doc/src/sgml/release-9.1.sgml b/doc/src/sgml/release-9.1.sgml
index 92948a4ad0..ad5998e495 100644
--- a/doc/src/sgml/release-9.1.sgml
+++ b/doc/src/sgml/release-9.1.sgml
diff --git a/doc/src/sgml/release-9.2.sgml b/doc/src/sgml/release-9.2.sgml
index e2da35bcd4..1f2240f158 100644
--- a/doc/src/sgml/release-9.2.sgml
+++ b/doc/src/sgml/release-9.2.sgml
diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml
index ed0e292d9a..3c540bcc26 100644
--- a/doc/src/sgml/release-9.3.sgml
+++ b/doc/src/sgml/release-9.3.sgml
@@ -626,7 +626,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix code for setting on + Fix code for setting on Solaris (Tom Lane) @@ -879,7 +879,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + does not have = minimum (Bruce Momjian) @@ -976,7 +976,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; This release contains a variety of fixes from 9.3.16. For information about new features in the 9.3 major release, see - . + . @@ -993,7 +993,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Also, if you are upgrading from a version earlier than 9.3.16, - see . + see . @@ -1028,7 +1028,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; By itself, this patch will only fix the behavior in newly initdb'd databases. If you wish to apply this change in an existing database, follow the corrected procedure shown in the changelog entry for - CVE-2017-7547, in . + CVE-2017-7547, in . @@ -1373,7 +1373,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; will use the current and historical DST transition dates of the US/Eastern zone. If you don't want that, remove the posixrules file, or replace it with a copy of some - other zone file (see ). Note that + other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1395,7 +1395,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; This release contains a variety of fixes from 9.3.15. For information about new features in the 9.3 major release, see - . + . @@ -1413,7 +1413,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Also, if you are upgrading from a version earlier than 9.3.15, - see . + see . @@ -1448,7 +1448,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Previously, this was skipped when + Previously, this was skipped when = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1509,7 +1509,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Previously, non-default settings - of could result in broken + of could result in broken indexes. @@ -1843,7 +1843,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; This release contains a variety of fixes from 9.3.14. For information about new features in the 9.3 major release, see - . + . @@ -1861,7 +1861,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Also, if you are upgrading from a version earlier than 9.3.9, - see . + see . @@ -1937,7 +1937,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix EXPLAIN to emit valid XML when - is on (Markus Winand) + is on (Markus Winand) @@ -2172,7 +2172,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; This release contains a variety of fixes from 9.3.13. For information about new features in the 9.3 major release, see - . + . @@ -2184,7 +2184,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; However, if you are upgrading from a version earlier than 9.3.9, - see . + see . @@ -2662,7 +2662,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; This release contains a variety of fixes from 9.3.12. 
For information about new features in the 9.3 major release, see - . + . @@ -2674,7 +2674,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; However, if you are upgrading from a version earlier than 9.3.9, - see . + see . @@ -2892,7 +2892,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 This release contains a variety of fixes from 9.3.11. For information about new features in the 9.3 major release, see - . + . @@ -2904,7 +2904,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 However, if you are upgrading from a version earlier than 9.3.9, - see . + see . @@ -3111,7 +3111,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 This release contains a variety of fixes from 9.3.10. For information about new features in the 9.3 major release, see - . + . @@ -3123,7 +3123,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 However, if you are upgrading from a version earlier than 9.3.9, - see . + see . @@ -3734,7 +3734,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 This release contains a variety of fixes from 9.3.9. For information about new features in the 9.3 major release, see - . + . @@ -3746,7 +3746,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 However, if you are upgrading from a version earlier than 9.3.9, - see . + see . @@ -4363,7 +4363,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 This release contains a small number of fixes from 9.3.8. For information about new features in the 9.3 major release, see - . + . @@ -4381,7 +4381,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Also, if you are upgrading from a version earlier than 9.3.7, - see . + see . @@ -4438,10 +4438,10 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 Otherwise, for each table that has pg_class.relminmxid equal to 1, VACUUM that table with - both - and set to + both + and set to zero. (You can use the vacuum cost delay parameters described - in to reduce + in to reduce the performance consequences for concurrent sessions.) You must use PostgreSQL 9.3.5 or later to perform this step. @@ -4512,7 +4512,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 This release contains a small number of fixes from 9.3.7. For information about new features in the 9.3 major release, see - . + . @@ -4524,7 +4524,7 @@ Branch: REL9_2_STABLE [37f30b251] 2016-04-18 13:19:52 -0400 However, if you are upgrading from a version earlier than 9.3.7, - see . + see . @@ -4620,7 +4620,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 This release contains a variety of fixes from 9.3.6. For information about new features in the 9.3 major release, see - . + . @@ -4638,7 +4638,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 Also, if you are upgrading from a version earlier than 9.3.6, - see . + see . @@ -5210,7 +5210,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 This release contains a variety of fixes from 9.3.5. For information about new features in the 9.3 major release, see - . + . @@ -5231,7 +5231,7 @@ Branch: REL9_0_STABLE [4dddf8552] 2015-05-21 20:41:55 -0400 Also, if you are upgrading from a version earlier than 9.3.5, - see . + see . @@ -7095,7 +7095,7 @@ Branch: REL9_0_STABLE [b6391f587] 2014-10-04 14:18:43 -0400 This release contains a variety of fixes from 9.3.4. For information about new features in the 9.3 major release, see - . + . 
@@ -7115,7 +7115,7 @@ Branch: REL9_0_STABLE [b6391f587] 2014-10-04 14:18:43 -0400 Also, if you are upgrading from a version earlier than 9.3.4, - see . + see . @@ -7577,7 +7577,7 @@ Branch: REL9_1_STABLE [dd1a5b09b] 2014-06-24 13:30:41 +0300 Prevent foreign tables from being created with OIDS - when is true + when is true (Etsuro Fujita) @@ -7859,7 +7859,7 @@ Branch: REL8_4_STABLE [30e434bdf] 2014-04-05 12:41:40 -0400 On Windows, allow new sessions to absorb values of PGC_BACKEND - parameters (such as ) from the + parameters (such as ) from the configuration file (Amit Kapila) @@ -8215,7 +8215,7 @@ Branch: REL8_4_STABLE [c51da696b] 2014-07-19 15:01:45 -0400 This release contains a variety of fixes from 9.3.3. For information about new features in the 9.3 major release, see - . + . @@ -8234,7 +8234,7 @@ Branch: REL8_4_STABLE [c51da696b] 2014-07-19 15:01:45 -0400 Also, if you are upgrading from a version earlier than 9.3.3, - see . + see . @@ -8480,7 +8480,7 @@ Branch: REL9_3_STABLE [dcd1131c8] 2014-03-06 21:40:50 +0200 walsender failed to send ping messages to the client if it was constantly busy sending WAL data; but it expected to see ping responses despite that, and would therefore disconnect - once elapsed. + once elapsed. @@ -8536,7 +8536,7 @@ Branch: REL9_3_STABLE [5a7e75849] 2014-02-20 10:46:54 +0200 - Add read-only parameter to + Add read-only parameter to display whether page checksums are enabled (Heikki Linnakangas) @@ -8677,7 +8677,7 @@ Branch: REL8_4_STABLE [6e6c2c2e1] 2014-03-15 13:36:57 -0400 This release contains a variety of fixes from 9.3.2. For information about new features in the 9.3 major release, see - . + . @@ -8706,7 +8706,7 @@ Branch: REL8_4_STABLE [6e6c2c2e1] 2014-03-15 13:36:57 -0400 Also, if you are upgrading from a version earlier than 9.3.2, - see . + see . @@ -8985,9 +8985,9 @@ Branch: REL9_3_STABLE [fb47de2be] 2014-02-13 19:30:30 -0300 freezing parameters were used for multixact IDs too; but since the consumption rates of transaction IDs and multixact IDs can be quite different, this did not work very well. Introduce new settings - , - , and - + , + , and + to control when to freeze multixacts. @@ -10139,7 +10139,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 This release contains a variety of fixes from 9.3.1. For information about new features in the 9.3 major release, see - . + . @@ -10157,7 +10157,7 @@ Branch: REL8_4_STABLE [c0c2d62ac] 2014-02-14 21:59:56 -0500 Also, if you are upgrading from a version earlier than 9.3.1, - see . + see . @@ -10618,7 +10618,7 @@ SELECT ... FROM tab1 x CROSS JOIN (tab2 x CROSS JOIN tab3 y) z This release contains a variety of fixes from 9.3.0. For information about new features in the 9.3 major release, see - . + . diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index d8b6b1777c..4ecf90d691 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -12,7 +12,7 @@ This release contains a variety of fixes from 9.4.14. For information about new features in the 9.4 major release, see - . + . @@ -24,7 +24,7 @@ However, if you are upgrading from a version earlier than 9.4.13, - see . + see . @@ -290,7 +290,7 @@ This release contains a small number of fixes from 9.4.13. For information about new features in the 9.4 major release, see - . + . @@ -302,7 +302,7 @@ However, if you are upgrading from a version earlier than 9.4.13, - see . + see . @@ -477,7 +477,7 @@ CREATE OR REPLACE VIEW table_privileges AS This release contains a variety of fixes from 9.4.12. 
For information about new features in the 9.4 major release, see - . + . @@ -494,7 +494,7 @@ CREATE OR REPLACE VIEW table_privileges AS Also, if you are upgrading from a version earlier than 9.4.12, - see . + see . @@ -706,7 +706,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix code for setting on + Fix code for setting on Solaris (Tom Lane) @@ -1039,7 +1039,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + does not have = minimum (Bruce Momjian) @@ -1150,7 +1150,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 This release contains a variety of fixes from 9.4.11. For information about new features in the 9.4 major release, see - . + . @@ -1172,7 +1172,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 Also, if you are upgrading from a version earlier than 9.4.11, - see . + see . @@ -1206,7 +1206,7 @@ Branch: REL9_4_STABLE [23a2b818f] 2017-08-05 14:56:40 -0700 By itself, this patch will only fix the behavior in newly initdb'd databases. If you wish to apply this change in an existing database, follow the corrected procedure shown in the changelog entry for - CVE-2017-7547, in . + CVE-2017-7547, in . @@ -1636,7 +1636,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 will use the current and historical DST transition dates of the US/Eastern zone. If you don't want that, remove the posixrules file, or replace it with a copy of some - other zone file (see ). Note that + other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1658,7 +1658,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 This release contains a variety of fixes from 9.4.10. For information about new features in the 9.4 major release, see - . + . @@ -1676,7 +1676,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Also, if you are upgrading from a version earlier than 9.4.10, - see . + see . @@ -1724,7 +1724,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 - Previously, this was skipped when + Previously, this was skipped when = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1797,7 +1797,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Previously, non-default settings - of could result in broken + of could result in broken indexes. @@ -2191,7 +2191,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 This release contains a variety of fixes from 9.4.9. For information about new features in the 9.4 major release, see - . + . @@ -2209,7 +2209,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Also, if you are upgrading from a version earlier than 9.4.6, - see . + see . @@ -2304,7 +2304,7 @@ Branch: REL9_2_STABLE [fb50c38e9] 2017-04-17 13:52:42 -0400 Fix EXPLAIN to emit valid XML when - is on (Markus Winand) + is on (Markus Winand) @@ -2657,7 +2657,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 This release contains a variety of fixes from 9.4.8. For information about new features in the 9.4 major release, see - . + . @@ -2669,7 +2669,7 @@ Branch: REL9_4_STABLE [10ad15f48] 2016-09-01 11:45:16 -0400 However, if you are upgrading from a version earlier than 9.4.6, - see . + see . 
@@ -3179,7 +3179,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 This release contains a variety of fixes from 9.4.7. For information about new features in the 9.4 major release, see - . + . @@ -3191,7 +3191,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 However, if you are upgrading from a version earlier than 9.4.6, - see . + see . @@ -3442,7 +3442,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 This release contains a variety of fixes from 9.4.6. For information about new features in the 9.4 major release, see - . + . @@ -3454,7 +3454,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 However, if you are upgrading from a version earlier than 9.4.6, - see . + see . @@ -3504,7 +3504,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 - Ignore parameter until + Ignore parameter until recovery has reached a consistent state (Michael Paquier) @@ -3713,7 +3713,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 This release contains a variety of fixes from 9.4.5. For information about new features in the 9.4 major release, see - . + . @@ -3731,7 +3731,7 @@ Branch: REL9_1_STABLE [de887cc8a] 2016-05-25 19:39:49 -0400 Also, if you are upgrading from a version earlier than 9.4.4, - see . + see . @@ -4810,7 +4810,7 @@ Branch: REL9_1_STABLE [386dcd539] 2015-12-11 19:08:40 -0500 This release contains a variety of fixes from 9.4.4. For information about new features in the 9.4 major release, see - . + . @@ -4822,7 +4822,7 @@ Branch: REL9_1_STABLE [386dcd539] 2015-12-11 19:08:40 -0500 However, if you are upgrading from a version earlier than 9.4.4, - see . + see . @@ -6333,7 +6333,7 @@ Branch: REL9_0_STABLE [47ac95f37] 2015-10-02 19:16:37 -0400 This release contains a small number of fixes from 9.4.3. For information about new features in the 9.4 major release, see - . + . @@ -6351,7 +6351,7 @@ Branch: REL9_0_STABLE [47ac95f37] 2015-10-02 19:16:37 -0400 Also, if you are upgrading from a version earlier than 9.4.2, - see . + see . @@ -6414,10 +6414,10 @@ Branch: REL9_3_STABLE [2a9b01928] 2015-06-05 09:34:15 -0400 Otherwise, for each table that has pg_class.relminmxid equal to 1, VACUUM that table with - both - and set to + both + and set to zero. (You can use the vacuum cost delay parameters described - in to reduce + in to reduce the performance consequences for concurrent sessions.) @@ -6514,7 +6514,7 @@ Branch: REL9_3_STABLE [d3fdec6ae] 2015-06-03 11:58:47 -0400 This release contains a small number of fixes from 9.4.2. For information about new features in the 9.4 major release, see - . + . @@ -6526,7 +6526,7 @@ Branch: REL9_3_STABLE [d3fdec6ae] 2015-06-03 11:58:47 -0400 However, if you are upgrading from a version earlier than 9.4.2, - see . + see . @@ -6657,7 +6657,7 @@ Branch: REL9_0_STABLE [b06649b7f] 2015-05-26 22:15:00 -0400 This release contains a variety of fixes from 9.4.1. For information about new features in the 9.4 major release, see - . + . @@ -6675,7 +6675,7 @@ Branch: REL9_0_STABLE [b06649b7f] 2015-05-26 22:15:00 -0400 Also, if you are upgrading from a version earlier than 9.4.1, - see . + see . @@ -7938,7 +7938,7 @@ Branch: REL9_0_STABLE [3c3749a3b] 2015-05-15 19:36:20 -0400 This release contains a variety of fixes from 9.4.0. For information about new features in the 9.4 major release, see - . + . 
@@ -8768,14 +8768,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add new SQL command + Add new SQL command for changing postgresql.conf configuration file entries - Reduce lock strength for some + Reduce lock strength for some commands @@ -8815,8 +8815,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Migration to Version 9.4 - A dump/restore using , or use - of , is required for those wishing to migrate + A dump/restore using , or use + of , is required for those wishing to migrate data from any previous release. @@ -8850,7 +8850,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Previously such values were rendered according to the current - setting; but many JSON processors + setting; but many JSON processors require timestamps to be in ISO 8601 format. If necessary, the previous behavior can be obtained by explicitly casting the datetime value to text before passing it to the JSON conversion @@ -8957,7 +8957,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - now also discards sequence-related state + now also discards sequence-related state (Fabrízio de Royes Mello, Robert Haas) @@ -9043,7 +9043,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 User commands that did their own quote preservation might need adjustment. This is likely to be an issue for commands used in - , , + , , and COPY TO/FROM PROGRAM. @@ -9112,7 +9112,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Change empty arrays returned by the module + Change empty arrays returned by the module to be zero-dimensional arrays (Bruce Momjian) @@ -9127,7 +9127,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - now uses + now uses or to specify the user name (Bruce Momjian) @@ -9188,7 +9188,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This reduces the likelihood of leaving orphaned child processes - behind after shutdown, as well + behind after shutdown, as well as ensuring that crash recovery can proceed if some child processes have become stuck. @@ -9202,7 +9202,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make properly report dead but + Make properly report dead but not-yet-removable rows to the statistics collector (Hari Babu) @@ -9225,9 +9225,9 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Indexes upgraded via will work fine + Indexes upgraded via will work fine but will still be in the old, larger GIN format. - Use to recreate old GIN indexes in the + Use to recreate old GIN indexes in the new format. @@ -9316,7 +9316,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Attempt to freeze tuples when tables are rewritten with or or VACUUM FULL (Robert Haas, Andres Freund) @@ -9328,7 +9328,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve speed of with default with default nextval() columns (Simon Riggs) @@ -9352,7 +9352,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Reduce memory allocated by PL/pgSQL - blocks (Tom Lane) + blocks (Tom Lane) @@ -9399,7 +9399,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add system view to + Add system view to report WAL archiver activity (Gabriele Bartolini) @@ -9408,13 +9408,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add n_mod_since_analyze columns to - and related system views + and related system views (Mark Kirkwood) These columns expose the system's estimate of the number of changed - tuples since the table's last . 
This + tuples since the table's last . This estimate drives decisions about when to auto-analyze. @@ -9422,9 +9422,9 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add backend_xid and backend_xmin - columns to the system view , + columns to the system view , and a backend_xmin column to - (Christian Kruse) + (Christian Kruse) @@ -9447,14 +9447,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This allows use of Elliptic Curve keys for server authentication. Such keys are faster and have better security than RSA keys. The new configuration parameter - + controls which curve is used for ECDH. - Improve the default setting + Improve the default setting (Marko Kreen) @@ -9467,17 +9467,17 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Previously, the order specified by + Previously, the order specified by was usually ignored in favor of client-side defaults, which are not configurable in most PostgreSQL clients. If desired, the old behavior can be restored via the new configuration - parameter . + parameter . - Make show SSL + Make show SSL encryption information (Andreas Kunert) @@ -9500,7 +9500,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add new SQL command + Add new SQL command for changing postgresql.conf configuration file entries (Amit Kapila) @@ -9513,7 +9513,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add configuration parameter + Add configuration parameter to control the amount of memory used by autovacuum workers (Peter Geoghegan) @@ -9521,7 +9521,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add parameter to allow using huge + Add parameter to allow using huge memory pages on Linux (Christian Kruse, Richard Poole, Abhijit Menon-Sen) @@ -9533,7 +9533,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add parameter + Add parameter to limit the number of background workers (Robert Haas) @@ -9545,12 +9545,12 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add superuser-only + Add superuser-only parameter to load libraries at session start (Peter Eisentraut) - In contrast to , this + In contrast to , this parameter can load any shared library, not just those in the $libdir/plugins directory. 
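For illustration of the parameter this entry describes, a minimal sketch of one way a superuser might use it; the role name is hypothetical, and auto_explain is the standard contrib module often loaded this way:

    -- Preload a shared library into every new session of one role.
    -- session_preload_libraries is superuser-only; the setting takes
    -- effect at the start of each subsequent session.
    ALTER ROLE app_user SET session_preload_libraries = 'auto_explain';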
@@ -9558,7 +9558,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add parameter to enable WAL + Add parameter to enable WAL logging of hint-bit changes (Sawada Masahiko) @@ -9571,8 +9571,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Increase the default settings of - and by four times (Bruce + Increase the default settings of + and by four times (Bruce Momjian) @@ -9584,7 +9584,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Increase the default setting of + linkend="guc-effective-cache-size"/> to 4GB (Bruce Momjian, Tom Lane) @@ -9592,7 +9592,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow printf-style space padding to be - specified in (David Rowley) + specified in (David Rowley) @@ -9606,7 +9606,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Show PIDs of lock holders and waiters and improve - information about relations in + information about relations in log messages (Christian Kruse) @@ -9626,7 +9626,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 On Windows, make SQL_ASCII-encoded databases and server - processes (e.g., ) emit messages in + processes (e.g., ) emit messages in the character encoding of the server's Windows user locale (Alexander Law, Noah Misch) @@ -9664,7 +9664,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add recovery parameter + Add recovery parameter to delay replication (Robert Haas, Fabrízio de Royes Mello, Simon Riggs) @@ -9677,7 +9677,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add + Add option to stop WAL recovery as soon as a consistent state is reached (MauMau, Heikki Linnakangas) @@ -9764,7 +9764,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add new setting + Add new setting to enable logical change-set encoding in WAL (Andres Freund) @@ -9789,14 +9789,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add application to receive + Add application to receive logical-decoding data (Andres Freund) - Add module to illustrate logical + Add module to illustrate logical decoding at the SQL level (Andres Freund) @@ -9836,7 +9836,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Allow to have + Allow to have an empty target list (Tom Lane) @@ -9906,7 +9906,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="sql-explain"> + <xref linkend="sql-explain"/> @@ -9979,7 +9979,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - This is controlled with the new + This is controlled with the new clause WITH CHECK OPTION. @@ -10013,22 +10013,22 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow moving groups of objects from one tablespace to another using the ALL IN TABLESPACE ... SET TABLESPACE form of - , , or - (Stephen Frost) + , , or + (Stephen Frost) Allow changing foreign key constraint deferrability - via ... ALTER + via ... ALTER CONSTRAINT (Simon Riggs) - Reduce lock strength for some + Reduce lock strength for some commands (Simon Riggs, Noah Misch, Robert Haas) @@ -10046,18 +10046,18 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Allow tablespace options to be set - in (Vik Fearing) + in (Vik Fearing) Formerly these options could only be set - via . + via . 
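A minimal sketch of the syntax the CREATE TABLESPACE entry above describes; the tablespace name and directory are hypothetical, and random_page_cost is one of the storage options that formerly required a separate ALTER TABLESPACE step:

    -- Set a per-tablespace planner option at creation time.
    CREATE TABLESPACE fast_ssd
        LOCATION '/mnt/ssd/pgdata'
        WITH (random_page_cost = 1.1);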
- Allow to define the estimated + Allow to define the estimated size of the aggregate's transition state data (Hadi Moshayedi) @@ -10544,14 +10544,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add option + Add option to specify role membership (Christopher Browne) - Add + Add option to analyze in stages of increasing granularity (Peter Eisentraut) @@ -10571,7 +10571,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make throw error for incorrect locale + Make throw error for incorrect locale settings, rather than silently falling back to a default choice (Tom Lane) @@ -10579,7 +10579,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Make return exit code 4 for + Make return exit code 4 for an inaccessible data directory (Amit Kapila, Bruce Momjian) @@ -10593,7 +10593,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 On Windows, ensure that a non-absolute path specification is interpreted relative - to 's current directory + to 's current directory (Kumar Rajeev Rastogi) @@ -10621,7 +10621,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="app-psql"> + <xref linkend="app-psql"/> @@ -10753,13 +10753,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="app-pgdump"> + <xref linkend="app-pgdump"/> - Allow options + Allow options , , and to be specified multiple times (Heikki Linnakangas) @@ -10779,8 +10779,8 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 This change prevents unnecessary errors when removing old objects. The new option - for , , - and is only available + for , , + and is only available when is also specified. @@ -10790,7 +10790,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="app-pgbasebackup"> + <xref linkend="app-pgbasebackup"/> @@ -11081,7 +11081,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add extension to preload relation data + Add extension to preload relation data into the shared buffer cache at server start (Robert Haas) @@ -11093,19 +11093,19 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Add UUID random number generator - gen_random_uuid() to + gen_random_uuid() to (Oskari Saarenmaa) This allows creation of version 4 UUIDs without - requiring installation of . + requiring installation of . 
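As a short sketch of the function named in the entry above, assuming the pgcrypto extension is available in the installation:

    -- gen_random_uuid() returns a version 4 (random) UUID,
    -- with no dependency on the uuid-ossp library.
    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    SELECT gen_random_uuid();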
- Allow to work with + Allow to work with the BSD or e2fsprogs UUID libraries, not only the OSSP UUID library (Matteo Beccati) @@ -11120,21 +11120,21 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Add option to to include trigger + Add option to to include trigger execution time (Horiguchi Kyotaro) - Fix to not report rows from + Fix to not report rows from uncommitted transactions as dead (Robert Haas) - Make functions + Make functions use regclass-type arguments (Satoshi Nagayasu) @@ -11146,14 +11146,14 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Improve consistency of output to honor + Improve consistency of output to honor snapshot rules more consistently (Robert Haas) - Improve 's choice of trigrams for indexed + Improve 's choice of trigrams for indexed regular expression searches (Alexander Korotkov) @@ -11173,7 +11173,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Store data more compactly (Stas Kelvich) + Store data more compactly (Stas Kelvich) @@ -11184,7 +11184,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - Reduce client-side memory usage by using + Reduce client-side memory usage by using a cursor (Andrew Dunstan) @@ -11192,13 +11192,13 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 Dramatically reduce memory consumption - in (Bruce Momjian) + in (Bruce Momjian) - Pass 's user name () option to + Pass 's user name () option to generated analyze scripts (Bruce Momjian) @@ -11206,7 +11206,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="pgbench"> + <xref linkend="pgbench"/> @@ -11247,7 +11247,7 @@ Branch: REL9_4_STABLE [c2b06ab17] 2015-01-30 22:45:58 -0500 - <xref linkend="pgstatstatements"> + <xref linkend="pgstatstatements"/> diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index a1e68ba283..d79d953d51 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -12,7 +12,7 @@ This release contains a variety of fixes from 9.5.9. For information about new features in the 9.5 major release, see - . + . @@ -28,7 +28,7 @@ Also, if you are upgrading from a version earlier than 9.5.8, - see . + see . @@ -373,7 +373,7 @@ This release contains a small number of fixes from 9.5.8. For information about new features in the 9.5 major release, see - . + . @@ -385,7 +385,7 @@ However, if you are upgrading from a version earlier than 9.5.8, - see . + see . @@ -550,7 +550,7 @@ CREATE OR REPLACE VIEW table_privileges AS This release contains a variety of fixes from 9.5.7. For information about new features in the 9.5 major release, see - . + . @@ -567,7 +567,7 @@ CREATE OR REPLACE VIEW table_privileges AS Also, if you are upgrading from a version earlier than 9.5.7, - see . + see . @@ -792,7 +792,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; - Fix code for setting on + Fix code for setting on Solaris (Tom Lane) @@ -1132,7 +1132,7 @@ UPDATE pg_database SET datallowconn = false WHERE datname = 'template0'; Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + does not have = minimum (Bruce Momjian) @@ -1257,7 +1257,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 This release contains a variety of fixes from 9.5.6. For information about new features in the 9.5 major release, see - . + . @@ -1279,7 +1279,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 Also, if you are upgrading from a version earlier than 9.5.6, - see . + see . 
@@ -1313,7 +1313,7 @@ Branch: REL9_2_STABLE [1188b9b2c] 2017-08-02 15:07:21 -0400 By itself, this patch will only fix the behavior in newly initdb'd databases. If you wish to apply this change in an existing database, follow the corrected procedure shown in the changelog entry for - CVE-2017-7547, in . + CVE-2017-7547, in . @@ -1804,7 +1804,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 will use the current and historical DST transition dates of the US/Eastern zone. If you don't want that, remove the posixrules file, or replace it with a copy of some - other zone file (see ). Note that + other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -1826,7 +1826,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 This release contains a variety of fixes from 9.5.5. For information about new features in the 9.5 major release, see - . + . @@ -1844,7 +1844,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 Also, if you are upgrading from a version earlier than 9.5.5, - see . + see . @@ -1904,7 +1904,7 @@ Branch: REL9_3_STABLE [3aa16b117] 2017-05-06 22:17:35 -0400 - Previously, this was skipped when + Previously, this was skipped when = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -1982,7 +1982,7 @@ Branch: REL9_4_STABLE [30e3cb307] 2016-11-17 13:31:30 -0300 Previously, non-default settings - of could result in broken + of could result in broken indexes. @@ -2488,7 +2488,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 This release contains a variety of fixes from 9.5.4. For information about new features in the 9.5 major release, see - . + . @@ -2506,7 +2506,7 @@ Branch: REL9_2_STABLE [60314e28e] 2016-12-13 19:08:09 -0600 Also, if you are upgrading from a version earlier than 9.5.2, - see . + see . @@ -2677,7 +2677,7 @@ Branch: REL9_4_STABLE [566afa15c] 2016-08-24 22:20:01 -0400 Fix EXPLAIN to emit valid XML when - is on (Markus Winand) + is on (Markus Winand) @@ -2802,7 +2802,7 @@ Branch: REL9_1_STABLE [7e01c8ef3] 2016-08-14 15:06:02 -0400 - With turned on, old + With turned on, old commit timestamps became inaccessible after a clean server restart. @@ -3312,7 +3312,7 @@ Branch: REL9_1_STABLE [380dad29d] 2016-09-02 17:29:32 -0400 This release contains a variety of fixes from 9.5.3. For information about new features in the 9.5 major release, see - . + . @@ -3324,7 +3324,7 @@ Branch: REL9_1_STABLE [380dad29d] 2016-09-02 17:29:32 -0400 However, if you are upgrading from a version earlier than 9.5.2, - see . + see . @@ -4385,7 +4385,7 @@ Branch: REL9_1_STABLE [a44388ffe] 2016-08-05 12:59:02 -0400 This release contains a variety of fixes from 9.5.2. For information about new features in the 9.5 major release, see - . + . @@ -4397,7 +4397,7 @@ Branch: REL9_1_STABLE [a44388ffe] 2016-08-05 12:59:02 -0400 However, if you are upgrading from a version earlier than 9.5.2, - see . + see . @@ -4484,7 +4484,7 @@ Branch: REL9_5_STABLE [81deadd31] 2016-04-21 23:17:36 -0400 --> Fix corner-case parser failures occurring - when is turned on + when is turned on (Tom Lane) @@ -4886,7 +4886,7 @@ Branch: REL9_1_STABLE [bfc39da64] 2016-05-05 20:09:32 -0400 This release contains a variety of fixes from 9.5.1. For information about new features in the 9.5 major release, see - . + . 
@@ -5170,7 +5170,7 @@ Branch: REL9_4_STABLE [a9613ee69] 2016-03-06 02:43:26 +0900 - Ignore parameter until + Ignore parameter until recovery has reached a consistent state (Michael Paquier) @@ -5597,7 +5597,7 @@ Branch: REL9_1_STABLE [e5fd35cc5] 2016-03-25 19:03:54 -0400 This release contains a variety of fixes from 9.5.0. For information about new features in the 9.5 major release, see - . + . @@ -6116,8 +6116,8 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 Migration to Version 9.5 - A dump/restore using , or use - of , is required for those wishing to migrate + A dump/restore using , or use + of , is required for those wishing to migrate data from any previous release. @@ -6150,7 +6150,7 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 before they had inconsistent precedence, behaving like NOT with respect to their left operand but like their base operator with respect to their right operand. The new configuration - parameter can be + parameter can be enabled to warn about queries in which these precedence changes result in different parsing choices. @@ -6161,7 +6161,7 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 2015-03-31 [0badb06] Bruce ..: pg_ctl: change default shutdown mode from 'sma.. --> - Change 's default shutdown mode from + Change 's default shutdown mode from smart to fast (Bruce Momjian) @@ -6229,8 +6229,8 @@ Branch: REL9_1_STABLE [6887d72d0] 2016-02-05 10:59:39 -0500 --> Replace configuration parameter checkpoint_segments - with - and (Heikki Linnakangas) + with + and (Heikki Linnakangas) @@ -6409,14 +6409,14 @@ max_wal_size = (3 * checkpoint_segments) * 16MB Add GUC and storage parameter to set the maximum size of GIN pending list. --> - Add configuration parameter + Add configuration parameter to control the size of GIN pending lists (Fujii Masao) This value can also be set on a per-index basis as an index storage parameter. Previously the pending-list size was controlled - by , which was awkward because + by , which was awkward because appropriate values for work_mem are often much too large for this purpose. @@ -6654,7 +6654,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-06-29 [51adcaa] Andres..: Add cluster_name GUC which is included in proce.. --> - Add new configuration parameter + Add new configuration parameter (Thomas Munro) @@ -6673,7 +6673,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Prevent non-superusers from changing on connection startup (Fujii Masao) + linkend="guc-log-disconnections"/> on connection startup (Fujii Masao) @@ -6768,8 +6768,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Replace configuration parameter checkpoint_segments - with - and (Heikki Linnakangas) + with + and (Heikki Linnakangas) @@ -6812,7 +6812,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. Allow recording of transaction commit time stamps when configuration parameter + linkend="guc-track-commit-timestamp"/> is enabled (Álvaro Herrera, Petr Jelínek) @@ -6828,7 +6828,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-12-22 [584e35d] Peter ..: Change local_preload_libraries to PGC_USERSET --> - Allow to be set + Allow to be set by ALTER ROLE SET (Peter Eisentraut, Kyotaro Horiguchi) @@ -6849,7 +6849,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-06-20 [3bdcf6a] Andres..: Don't allow to disable backend assertions via t.. 
--> - Make configuration parameter + Make configuration parameter read-only (Andres Freund) @@ -6866,7 +6866,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-10-18 [7feaccc] Peter ..: Allow setting effective_io_concurrency even on.. --> - Allow setting on + Allow setting on systems where it has no effect (Peter Eisentraut) @@ -6977,7 +6977,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2015-05-15 [ffd3774] Heikki..: Add archive_mode='always' option. --> - Add new value + Add new value always to allow standbys to always archive received WAL files (Fujii Masao) @@ -6989,7 +6989,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add configuration - parameter to + parameter to control WAL read retry after failure (Alexey Vasiliev, Michael Paquier) @@ -7011,7 +7011,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature reduces WAL volume, at the cost of more CPU time spent on WAL logging and WAL replay. It is controlled by a new - configuration parameter , which + configuration parameter , which currently is off by default. @@ -7032,14 +7032,14 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add configuration parameter + linkend="guc-log-replication-commands"/> to log replication commands (Fujii Masao) By default, replication commands, e.g. IDENTIFY_SYSTEM, - are not logged, even when is set + are not logged, even when is set to all. @@ -7219,7 +7219,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="sql-reindex"> + <xref linkend="sql-reindex"/> @@ -7347,8 +7347,8 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. This feature is now supported in - , , - , , + , , + , , and ALTER object OWNER TO commands. @@ -7436,7 +7436,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. 2014-07-10 [59efda3] Tom Lane: Implement IMPORT FOREIGN SCHEMA. --> - Add support for + Add support for (Ronan Dunklau, Michael Paquier, Tom Lane) @@ -8190,7 +8190,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="app-psql"> + <xref linkend="app-psql"/> @@ -8289,7 +8289,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. --> Add psql tab completion when setting the - variable (Jeff Janes) + variable (Jeff Janes) @@ -8406,7 +8406,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="app-pgdump"> + <xref linkend="app-pgdump"/> @@ -8474,7 +8474,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="app-pg-ctl"> + <xref linkend="app-pg-ctl"/> @@ -8525,7 +8525,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="pgupgrade"> + <xref linkend="pgupgrade"/> @@ -8580,7 +8580,7 @@ Add GUC and storage parameter to set the maximum size of GIN pending list. - <xref linkend="pgbench"> + <xref linkend="pgbench"/> diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index 65df3113c2..ce040f1a5a 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -12,7 +12,7 @@ This release contains a variety of fixes from 9.6.5. For information about new features in the 9.6 major release, see - . + . @@ -28,7 +28,7 @@ Also, if you are upgrading from a version earlier than 9.6.4, - see . + see . 
@@ -591,7 +591,7 @@ Branch: REL9_2_STABLE [1317d1301] 2017-10-23 17:54:09 -0400 This release contains a small number of fixes from 9.6.4. For information about new features in the 9.6 major release, see - . + . @@ -603,7 +603,7 @@ Branch: REL9_2_STABLE [1317d1301] 2017-10-23 17:54:09 -0400 However, if you are upgrading from a version earlier than 9.6.4, - see . + see . @@ -870,7 +870,7 @@ Branch: REL9_5_STABLE [a784d5f21] 2017-08-09 12:06:14 -0400 This release contains a variety of fixes from 9.6.3. For information about new features in the 9.6 major release, see - . + . @@ -887,7 +887,7 @@ Branch: REL9_5_STABLE [a784d5f21] 2017-08-09 12:06:14 -0400 Also, if you are upgrading from a version earlier than 9.6.3, - see . + see . @@ -1217,7 +1217,7 @@ Branch: REL9_3_STABLE [cc154d9a0] 2017-06-28 12:30:16 -0400 Branch: REL9_2_STABLE [5e7447132] 2017-06-28 12:30:16 -0400 --> - Fix code for setting on + Fix code for setting on Solaris (Tom Lane) @@ -1950,7 +1950,7 @@ Branch: REL9_2_STABLE [65beccae5] 2017-06-20 13:20:02 -0400 --> Fix pg_upgrade to ensure that the ending WAL record - does not have = minimum + does not have = minimum (Bruce Momjian) @@ -2151,7 +2151,7 @@ Branch: REL9_4_STABLE [9c3f502b4] 2017-07-16 11:27:15 -0400 This release contains a variety of fixes from 9.6.2. For information about new features in the 9.6 major release, see - . + . @@ -2173,7 +2173,7 @@ Branch: REL9_4_STABLE [9c3f502b4] 2017-07-16 11:27:15 -0400 Also, if you are upgrading from a version earlier than 9.6.2, - see . + see . @@ -2216,7 +2216,7 @@ Branch: REL9_2_STABLE [99cbb0bd9] 2017-05-08 07:24:28 -0700 By itself, this patch will only fix the behavior in newly initdb'd databases. If you wish to apply this change in an existing database, follow the corrected procedure shown in the changelog entry for - CVE-2017-7547, in . + CVE-2017-7547, in . @@ -3181,7 +3181,7 @@ Branch: REL9_2_STABLE [82e7d3dfd] 2017-05-07 11:57:41 -0400 will use the current and historical DST transition dates of the US/Eastern zone. If you don't want that, remove the posixrules file, or replace it with a copy of some - other zone file (see ). Note that + other zone file (see ). Note that due to caching, you may need to restart the server to get such changes to take effect. @@ -3203,7 +3203,7 @@ Branch: REL9_2_STABLE [82e7d3dfd] 2017-05-07 11:57:41 -0400 This release contains a variety of fixes from 9.6.1. For information about new features in the 9.6 major release, see - . + . @@ -3221,7 +3221,7 @@ Branch: REL9_2_STABLE [82e7d3dfd] 2017-05-07 11:57:41 -0400 Also, if you are upgrading from a version earlier than 9.6.1, - see . + see . @@ -3312,7 +3312,7 @@ Branch: REL9_2_STABLE [a00ac6299] 2016-12-08 14:19:25 -0500 - Previously, this was skipped when + Previously, this was skipped when = minimal, but actually it's necessary even in that case to ensure that the unlogged table is properly reset to empty after a crash. @@ -3396,7 +3396,7 @@ Branch: REL9_6_STABLE [6c75fb6b3] 2016-12-17 02:25:47 +0900 --> Disallow setting the num_sync field to zero in - (Fujii Masao) + (Fujii Masao) @@ -3475,7 +3475,7 @@ Branch: REL9_2_STABLE [05975ab0a] 2016-11-23 13:45:56 -0500 Previously, non-default settings - of could result in broken + of could result in broken indexes. @@ -4620,7 +4620,7 @@ Branch: REL9_2_STABLE [ef878cc2c] 2017-01-30 11:41:09 -0500 This release contains a variety of fixes from 9.6.0. For information about new features in the 9.6 major release, see - . + . 
@@ -4781,7 +4781,7 @@ Branch: REL9_2_STABLE [f17c26dbd] 2016-10-20 17:18:14 -0400 --> Fix EXPLAIN to emit valid XML when - is on (Markus Winand) + is on (Markus Winand) @@ -4899,7 +4899,7 @@ Branch: REL9_5_STABLE [7a2fa5774] 2016-10-24 09:38:28 -0300 - With turned on, old + With turned on, old commit timestamps became inaccessible after a clean server restart. @@ -5275,8 +5275,8 @@ Branch: REL9_1_STABLE [22cf97635] 2016-10-19 17:57:06 -0400 Migration to Version 9.6 - A dump/restore using , or use of , is required for those wishing to migrate data + A dump/restore using , or use of , is required for those wishing to migrate data from any previous release. @@ -5593,12 +5593,12 @@ and many others in the same vein Parallel query execution is not (yet) enabled by default. To allow it, set the new configuration - parameter to a + parameter to a value larger than zero. Additional control over use of parallelism is available through other new configuration parameters - , - , , and + , + , , and min_parallel_relation_size. @@ -5628,7 +5628,7 @@ and many others in the same vein --> Allow GIN index builds to - make effective use of + make effective use of settings larger than 1 GB (Robert Abraham, Teodor Sigaev) @@ -5991,7 +5991,7 @@ and many others in the same vein time can thus cause considerable table bloat because space cannot be recycled. This feature allows setting a time-based limit, via the new configuration parameter - , on how long an + , on how long an MVCC snapshot is guaranteed to be valid. After that, dead tuples are candidates for removal. A transaction using an outdated snapshot will get an error if it attempts to read a page @@ -6094,10 +6094,10 @@ and many others in the same vein The new configuration parameters , , , and control this behavior. + linkend="guc-backend-flush-after"/>, , , and control this behavior. @@ -6249,7 +6249,7 @@ and many others in the same vein 2016-08-17 [9b33c7e80] Disable update_process_title by default on Windows --> - Disable by default on + Disable by default on Windows (Takayuki Tsunakawa) @@ -6497,7 +6497,7 @@ and many others in the same vein This behavior is controlled by the new configuration parameter - . It can + . It can be useful to prevent forgotten transactions from holding locks or preventing vacuum cleanup for too long. @@ -6509,7 +6509,7 @@ and many others in the same vein --> Raise the maximum allowed value - of to 24 hours (Simon Riggs) + of to 24 hours (Simon Riggs) @@ -6530,7 +6530,7 @@ and many others in the same vein 2015-09-07 [b1e1862a1] Coordinate log_line_prefix options 'm' and 'n' to share --> - Add option %n to + Add option %n to print the current time in Unix epoch form, with milliseconds (Tomas Vondra, Jeff Davis) @@ -6542,8 +6542,8 @@ and many others in the same vein 2016-03-16 [fc201dfd9] Add syslog_split_messages parameter --> - Add and configuration parameters + Add and configuration parameters to provide more control over the message format when logging to syslog (Peter Eisentraut) @@ -6555,7 +6555,7 @@ and many others in the same vein --> Merge the archive and hot_standby values - of the configuration parameter + of the configuration parameter into a single new value replica (Peter Eisentraut) @@ -6717,7 +6717,7 @@ XXX this is pending backpatch, may need to remove The number of standby servers that must acknowledge a commit before it is considered complete is now configurable as part of - the parameter. + the parameter. 
@@ -6727,7 +6727,7 @@ XXX this is pending backpatch, may need to remove --> Add new setting remote_apply for configuration - parameter (Thomas Munro) + parameter (Thomas Munro) @@ -7871,7 +7871,7 @@ This commit is also listed under psql and PL/pgSQL - <xref linkend="app-psql"> + <xref linkend="app-psql"/> @@ -8073,7 +8073,7 @@ This commit is also listed under libpq and PL/pgSQL - <xref linkend="pgbench"> + <xref linkend="pgbench"/> diff --git a/doc/src/sgml/release-old.sgml b/doc/src/sgml/release-old.sgml index e95e5cac24..d55209d85b 100644 --- a/doc/src/sgml/release-old.sgml +++ b/doc/src/sgml/release-old.sgml @@ -26,7 +26,7 @@ A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.13, - see . + see . @@ -131,7 +131,7 @@ A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.13, - see . + see . @@ -193,7 +193,7 @@ A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.13, - see . + see . @@ -249,7 +249,7 @@ A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.13, - see . + see . @@ -310,7 +310,7 @@ A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.13, - see . + see . @@ -366,7 +366,7 @@ A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.13, - see . + see . @@ -409,7 +409,7 @@ A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.13, - see . + see . @@ -500,7 +500,7 @@ Fuhr) A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.13, - see . + see . @@ -557,7 +557,7 @@ and isinf during configure (Tom) A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.10, - see . + see . Also, you might need to REINDEX indexes on textual columns after updating, if you are affected by the locale or plperl issues described below. @@ -619,7 +619,7 @@ what's actually returned by the query (Joe) A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.10, - see . + see . @@ -666,7 +666,7 @@ table has been dropped A dump/restore is not required for those running 7.3.X. However, if you are upgrading from a version earlier than 7.3.10, - see . + see . diff --git a/doc/src/sgml/replication-origins.sgml b/doc/src/sgml/replication-origins.sgml index 317ca9a1df..a03ce76e2e 100644 --- a/doc/src/sgml/replication-origins.sgml +++ b/doc/src/sgml/replication-origins.sgml @@ -83,7 +83,7 @@ replication and inefficiencies. Replication origins provide an optional mechanism to recognize and prevent that. When configured using the functions referenced in the previous paragraph, every change and transaction passed to - output plugin callbacks (see ) + output plugin callbacks (see ) generated by the session is tagged with the replication origin of the generating session. This allows treating them differently in the output plugin, e.g. ignoring all but locally-originating rows. 
Additionally diff --git a/doc/src/sgml/rowtypes.sgml b/doc/src/sgml/rowtypes.sgml index f80d44bc75..3f24293175 100644 --- a/doc/src/sgml/rowtypes.sgml +++ b/doc/src/sgml/rowtypes.sgml @@ -129,7 +129,7 @@ CREATE TABLE inventory_item ( (These constants are actually only a special case of the generic type constants discussed in . The constant is initially + linkend="sql-syntax-constants-generic"/>. The constant is initially treated as a string and passed to the composite-type input conversion routine. An explicit type specification might be necessary to tell which type to convert the constant to.) @@ -152,7 +152,7 @@ ROW('', 42, NULL) ('', 42, NULL) The ROW expression syntax is discussed in more detail in . + linkend="sql-syntax-row-constructors"/>. @@ -204,7 +204,7 @@ SELECT (my_func(...)).field FROM ... The special field name * means all fields, as - further explained in . + further explained in . @@ -384,7 +384,7 @@ SELECT * FROM inventory_item c ORDER BY ROW(c.*); All of these ORDER BY clauses specify the row's composite value, resulting in sorting the rows according to the rules described - in . However, + in . However, if inventory_item contained a column named c, the first case would be different from the others, as it would mean to sort by that column only. Given the column @@ -517,7 +517,7 @@ INSERT ... VALUES (E'("\\"\\\\")'); with a data type whose input routine also treated backslashes specially, bytea for example, we might need as many as eight backslashes in the command to get one backslash into the stored composite field.) - Dollar quoting (see ) can be + Dollar quoting (see ) can be used to avoid the need to double backslashes. diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index 819f2a8294..2074fcca8e 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -29,8 +29,8 @@ execution. It is very powerful, and can be used for many things such as query language procedures, views, and versions. The theoretical foundations and the power of this rule system are - also discussed in and . + also discussed in and . @@ -172,7 +172,7 @@ to allow the executor to find the row to be deleted. (CTID is added when the result relation is an ordinary table. If it is a view, a whole-row variable is added instead, - as described in .) + as described in .) @@ -823,7 +823,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; the base relation in the appropriate way. Views that are simple enough for this are called automatically updatable. For detailed information on the kinds of view that can - be automatically updated, see . + be automatically updated, see . @@ -862,7 +862,7 @@ SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a; UPDATE, and DELETE commands on a view. These rules will rewrite the command, typically into a command that updates one or more tables, rather than views. That is the topic - of . + of . diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index c8bc684c0e..a2ebd3e21c 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -60,7 +60,7 @@ during initialization is called template1. As the name suggests, this will be used as a template for subsequently created databases; it should not be - used for actual work. (See for + used for actual work. (See for information about creating new databases within a cluster.) @@ -73,7 +73,7 @@ /usr/local/pgsql/data or /var/lib/pgsql/data are popular. 
To initialize a database cluster, use the command ,initdb which is + linkend="app-initdb"/>,initdb which is installed with PostgreSQL. The desired file system location of your database cluster is indicated by the option, for example: @@ -95,14 +95,14 @@ Alternatively, you can run initdb via - the + the programpg_ctl like so: $ pg_ctl -D /usr/local/pgsql/data initdb This may be more intuitive if you are using pg_ctl for starting and stopping the - server (see ), so + server (see ), so that pg_ctl would be the sole command you use for managing the database server instance. @@ -158,7 +158,7 @@ postgres$ initdb -D /usr/local/pgsql/data before you start the server for the first time. (Other reasonable approaches include using peer authentication or file system permissions to restrict connections. See for more information.) + linkend="client-authentication"/> for more information.) @@ -167,7 +167,7 @@ postgres$ initdb -D /usr/local/pgsql/data Normally, it will just take the locale settings in the environment and apply them to the initialized database. It is possible to specify a different locale for the database; more information about - that can be found in . The default sort order used + that can be found in . The default sort order used within the particular database cluster is set by initdb, and while you can create new databases using different sort order, the order used in the template databases that initdb @@ -180,7 +180,7 @@ postgres$ initdb -D /usr/local/pgsql/data initdb also sets the default character set encoding for the database cluster. Normally this should be chosen to match the - locale setting. For details see . + locale setting. For details see . @@ -284,21 +284,21 @@ $ postgres -D /usr/local/pgsql/data >logfile 2>&1 &stdout and stderr output somewhere, as shown above. It will help for auditing purposes and to diagnose problems. (See for a more thorough discussion of log + linkend="logfile-maintenance"/> for a more thorough discussion of log file handling.) The postgres program also takes a number of other command-line options. For more information, see the - reference page - and below. + reference page + and below. This shell syntax can get tedious quickly. Therefore the wrapper program - pg_ctl + pg_ctl is provided to simplify some tasks. For example: pg_ctl start -l logfile @@ -494,7 +494,7 @@ DETAIL: Failed system call was shmget(key=5440001, size=4011376640, 03600). mean that you do not have System-V-style shared memory support configured into your kernel at all. As a temporary workaround, you can try starting the server with a smaller-than-normal number of - buffers (). You will eventually want + buffers (). You will eventually want to reconfigure your kernel to increase the allowed shared memory size. You might also see this message when trying to start multiple servers on the same machine, if their total space requested @@ -513,7 +513,7 @@ DETAIL: Failed system call was semget(5440126, 17, 03600). PostgreSQL wants to create. As above, you might be able to work around the problem by starting the server with a reduced number of allowed connections - (), but you'll eventually want to + (), but you'll eventually want to increase the kernel limit. @@ -526,7 +526,7 @@ DETAIL: Failed system call was semget(5440126, 17, 03600). Details about configuring System V - IPC facilities are given in . + IPC facilities are given in . @@ -574,7 +574,7 @@ psql: could not connect to server: No such file or directory does not mean that the server got your connection request and rejected it. 
That case will produce a different message, as shown in .) Other error messages + linkend="client-authentication-problems"/>.) Other error messages such as Connection timed out might indicate more fundamental problems, like lack of network connectivity. @@ -648,9 +648,9 @@ psql: could not connect to server: No such file or directory the server will refuse to start and should leave an instructive error message describing the problem and what to do about it. (See also .) The relevant kernel + linkend="server-start-failures"/>.) The relevant kernel parameters are named consistently across different systems; gives an overview. The methods to set + linkend="sysvipc-parameters"/> gives an overview. The methods to set them, however, vary. Suggestions for some platforms are given below. @@ -756,9 +756,9 @@ psql: could not connect to server: No such file or directory When using System V semaphores, PostgreSQL uses one semaphore per allowed connection - (), allowed autovacuum worker process - () and allowed background - process (), in sets of 16. + (), allowed autovacuum worker process + () and allowed background + process (), in sets of 16. Each such set will also contain a 17th semaphore which contains a magic number, to detect collision with semaphore sets used by @@ -768,7 +768,7 @@ psql: could not connect to server: No such file or directory autovacuum_max_workers plus max_worker_processes, plus one extra for each 16 allowed connections plus workers (see the formula in ). The parameter SEMMNI + linkend="sysvipc-parameters"/>). The parameter SEMMNI determines the limit on the number of semaphore sets that can exist on the system at one time. Hence this parameter must be at least ceil((max_connections + autovacuum_max_workers + max_worker_processes + 5) / 16). @@ -800,9 +800,9 @@ psql: could not connect to server: No such file or directory When using POSIX semaphores, the number of semaphores needed is the same as for System V, that is one semaphore per allowed connection - (), allowed autovacuum worker process - () and allowed background - process (). + (), allowed autovacuum worker process + () and allowed background + process (). On the platforms where this option is preferred, there is no specific kernel limit on the number of POSIX semaphores. @@ -1321,7 +1321,7 @@ default:\ processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the system-wide limit, you can set PostgreSQL's configuration parameter to + linkend="guc-max-files-per-process"/> configuration parameter to limit the consumption of open files. @@ -1465,7 +1465,7 @@ export PG_OOM_ADJUST_VALUE=0 Using huge pages reduces overhead when using large contiguous chunks of memory, as PostgreSQL does, particularly when - using large values of . To use this + using large values of . To use this feature in PostgreSQL you need a kernel with CONFIG_HUGETLBFS=y and CONFIG_HUGETLB_PAGE=y. You will also have to adjust @@ -1517,7 +1517,7 @@ $ grep Huge /proc/meminfo The default behavior for huge pages in PostgreSQL is to use them when possible and to fall back to normal pages when failing. To enforce the use of huge - pages, you can set + pages, you can set to on in postgresql.conf. Note that with this setting PostgreSQL will fail to start if not enough huge pages are available. @@ -1600,7 +1600,7 @@ $ grep Huge /proc/meminfo - The program provides a convenient + The program provides a convenient interface for sending these signals to shut down the server. 
Alternatively, you can send the signal directly using kill on non-Windows systems. @@ -1629,7 +1629,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` To terminate an individual session while allowing other sessions to continue, use pg_terminate_backend() (see ) or send a + linkend="functions-admin-signal-table"/>) or send a SIGTERM signal to the child process associated with the session. @@ -1680,7 +1680,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`. Replication methods are + faster method is . Replication methods are also available, as discussed below. @@ -1688,7 +1688,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`); pay particular attention to the section + linkend="release"/>); pay particular attention to the section labeled "Migration". If you are upgrading across several major versions, be sure to read the release notes for each intervening version. @@ -1794,7 +1794,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`/usr/local/pgsql/data/pg_hba.conf (or equivalent) to disallow access from everyone except you. - See for additional information on + See for additional information on access control. @@ -1813,7 +1813,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` To make the backup, you can use the pg_dumpall command from the version you are currently running; see for more details. For best + linkend="backup-dump-all"/> for more details. For best results, however, try to use the pg_dumpall command from PostgreSQL &version;, since this version contains bug fixes and improvements over older @@ -1839,7 +1839,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` /etc/rc.d/init.d/postgresql stop - See for details about starting and + See for details about starting and stopping the server. @@ -1863,7 +1863,7 @@ $ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid` Install the new version of PostgreSQL as - outlined in . + outlined in . @@ -1923,7 +1923,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 Upgrading Data via <application>pg_upgrade</application> - The module allows an installation to + The module allows an installation to be migrated in-place from one major PostgreSQL version to another. Upgrades can be performed in minutes, particularly with mode. It requires steps similar to @@ -1975,7 +1975,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 One way to prevent spoofing of local connections is to use a Unix domain socket directory () that has write permission only + linkend="guc-unix-socket-directories"/>) that has write permission only for a trusted local user. This prevents a malicious user from creating their own socket file in that directory. If you are concerned that some applications might still reference /tmp for the @@ -1997,11 +1997,11 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 SSL certificates and make sure that clients check the server's certificate. To do that, the server must be configured to accept only hostssl connections () and have SSL key and certificate files - (). The TCP client must connect using + linkend="auth-pg-hba-conf"/>) and have SSL key and certificate files + (). The TCP client must connect using sslmode=verify-ca or verify-full and have the appropriate root certificate - file installed (). + file installed (). @@ -2042,7 +2042,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - The module allows certain fields to be + The module allows certain fields to be stored encrypted. This is useful if only some of the data is sensitive. 
The client supplies the decryption key and the data is decrypted @@ -2171,19 +2171,19 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 for increased security. This requires that OpenSSL is installed on both client and server systems and that support in PostgreSQL is - enabled at build time (see ). + enabled at build time (see ). With SSL support compiled in, the PostgreSQL server can be started with SSL enabled by setting the parameter - to on in + to on in postgresql.conf. The server will listen for both normal and SSL connections on the same TCP port, and will negotiate with any connecting client on whether to use SSL. By default, this is at the client's option; see about how to set up the server to require + linkend="auth-pg-hba-conf"/> about how to set up the server to require use of SSL for some or all connections. @@ -2201,7 +2201,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 and authentication algorithms, of varying strength. While a list of ciphers can be specified in the OpenSSL configuration file, you can specify ciphers specifically for use by - the database server by modifying in + the database server by modifying in postgresql.conf. @@ -2221,8 +2221,8 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 and private key must exist. By default, these files are expected to be named server.crt and server.key, respectively, in the server's data directory, but other names and locations can be specified - using the configuration parameters - and . + using the configuration parameters + and . @@ -2264,12 +2264,12 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 To require the client to supply a trusted certificate, place certificates of the certificate authorities (CAs) you trust in a file named root.crt in the data - directory, set the parameter in + directory, set the parameter in postgresql.conf to root.crt, and add the authentication option clientcert=1 to the appropriate hostssl line(s) in pg_hba.conf. A certificate will then be requested from the client during - SSL connection startup. (See for a + SSL connection startup. (See for a description of how to set up certificates on the client.) The server will verify that the client's certificate is signed by one of the trusted certificate authorities. @@ -2280,7 +2280,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 root.crt, the file must also contain certificate chains to their root CAs. Certificate Revocation List (CRL) entries - are also checked if the parameter is set. + are also checked if the parameter is set. (See @@ -2308,7 +2308,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 If you are setting up client certificates, you may wish to use the cert authentication method, so that the certificates control user authentication as well as providing connection security. - See for details. (It is not necessary to + See for details. (It is not necessary to specify clientcert=1 explicitly when using the cert authentication method.) @@ -2318,7 +2318,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 SSL Server File Usage - summarizes the files that are + summarizes the files that are relevant to the SSL setup on the server. (The shown file names are default names. The locally configured names could be different.) 
@@ -2337,27 +2337,27 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - ($PGDATA/server.crt) + ($PGDATA/server.crt) server certificate sent to client to indicate server's identity - ($PGDATA/server.key) + ($PGDATA/server.key) server private key proves server certificate was sent by the owner; does not indicate certificate owner is trustworthy - + trusted certificate authorities checks that client certificate is signed by a trusted certificate authority - + certificates revoked by certificate authorities client certificate must not be on this list @@ -2532,7 +2532,7 @@ ssh -L 63333:db.foo.com:5432 joe@shell.foo.com To specify a different event source name (see - ), use the /n + ), use the /n and /i options: regsvr32 /n /i:event_source_name pgsql_library_directory/pgevent.dll @@ -2550,7 +2550,7 @@ ssh -L 63333:db.foo.com:5432 joe@shell.foo.com To enable event logging in the database server, modify - to include + to include eventlog in postgresql.conf. diff --git a/doc/src/sgml/seg.sgml b/doc/src/sgml/seg.sgml index c7e9b5f4af..d07329f5d1 100644 --- a/doc/src/sgml/seg.sgml +++ b/doc/src/sgml/seg.sgml @@ -86,13 +86,13 @@ test=> select '6.25 .. 6.50'::seg as "pH"; Optional certainty indicators (<, > or ~) can be stored as well. (Certainty indicators are ignored by all the built-in operators, however.) - gives an overview of allowed - representations; shows some + gives an overview of allowed + representations; shows some examples. - In , x, y, and + In , x, y, and delta denote floating-point numbers. x and y, but not delta, can be preceded by a certainty indicator. @@ -237,7 +237,7 @@ test=> select '6.25 .. 6.50'::seg as "pH"; The seg module includes a GiST index operator class for seg values. - The operators supported by the GiST operator class are shown in . + The operators supported by the GiST operator class are shown in .
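The seg hunks above touch the module's GiST support; as a minimal sketch of how that operator class is used (the extension ships in contrib, and the table and data here are illustrative):

    CREATE EXTENSION seg;

    -- store pH readings with error bounds, as in the examples above
    CREATE TABLE measurements (id int, ph seg);
    INSERT INTO measurements VALUES (1, '6.25 .. 6.50'), (2, '~7.0');

    -- the GiST operator class lets an index serve interval queries
    CREATE INDEX measurements_ph_idx ON measurements USING gist (ph);
    SELECT * FROM measurements WHERE ph && '6.0 .. 6.3'::seg;  -- overlaps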
diff --git a/doc/src/sgml/sepgsql.sgml b/doc/src/sgml/sepgsql.sgml index c6c89a389d..273efc6e27 100644 --- a/doc/src/sgml/sepgsql.sgml +++ b/doc/src/sgml/sepgsql.sgml @@ -17,7 +17,7 @@ The current implementation has significant limitations, and does not enforce mandatory access control for all actions. See - . + . @@ -51,7 +51,7 @@ - The statement allows assignment of + The statement allows assignment of a security label to a database object. @@ -93,7 +93,7 @@ Policy from config file: targeted To use this module, you must include sepgsql - in the parameter in + in the parameter in postgresql.conf. The module will not function correctly if loaded in any other manner. Once the module is loaded, you should execute sepgsql.sql in each database. @@ -161,7 +161,7 @@ $ for DBNAME in template0 template1 postgres; do First, set up sepgsql in a working database - according to the instructions in . + according to the instructions in . Note that the current operating system user must be able to connect to the database as superuser without password authentication. @@ -215,7 +215,7 @@ unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 - See for details on adjusting your + See for details on adjusting your working domain, if necessary. @@ -448,7 +448,7 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; - additionally requires + additionally requires getattr permission for the source or template database. @@ -506,7 +506,7 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; - Using on an object additionally + Using on an object additionally requires relabelfrom permission for the object in conjunction with its old security label and relabelto permission for the object in conjunction with its new security label. @@ -641,7 +641,7 @@ ERROR: SELinux: security policy violation Miscellaneous - We reject the command across the board, because + We reject the command across the board, because any module loaded could easily circumvent security policy enforcement. @@ -651,7 +651,7 @@ ERROR: SELinux: security policy violation Sepgsql Functions - shows the available functions. + shows the available functions.
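The sepgsql hunks above mention both the preload requirement and the SECURITY LABEL statement; a sketch of the two steps (the table name and the SELinux security context shown are illustrative, and the command only succeeds on a system where the module and SELinux are actually set up):

    # postgresql.conf: the module only functions when preloaded
    shared_preload_libraries = 'sepgsql'

    -- assign a label to an object via the selinux provider
    SECURITY LABEL FOR selinux ON TABLE customer
        IS 'system_u:object_r:sepgsql_table_t:s0';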
diff --git a/doc/src/sgml/sourcerepo.sgml b/doc/src/sgml/sourcerepo.sgml index b5618d7166..8729a8450d 100644 --- a/doc/src/sgml/sourcerepo.sgml +++ b/doc/src/sgml/sourcerepo.sgml @@ -22,7 +22,7 @@ flex, and Perl. These tools are not needed to build from a distribution tarball, because the files that these tools are used to build are included in the tarball. Other tool requirements - are the same as shown in . + are the same as shown in . diff --git a/doc/src/sgml/sources.sgml b/doc/src/sgml/sources.sgml index 4c777de16f..8870ee938a 100644 --- a/doc/src/sgml/sources.sgml +++ b/doc/src/sgml/sources.sgml @@ -207,7 +207,7 @@ ereport(ERROR, n is the integer value that determines which plural form is needed, and the remaining arguments are formatted according to the selected format string. For more information see - . + . @@ -234,7 +234,7 @@ ereport(ERROR, errdetail_plural(const char *fmt_singular, const char *fmt_plural, unsigned long n, ...) is like errdetail, but with support for various plural forms of the message. - For more information see . + For more information see . @@ -255,7 +255,7 @@ ereport(ERROR, *fmt_plural, unsigned long n, ...) is like errdetail_log, but with support for various plural forms of the message. - For more information see . + For more information see . @@ -404,7 +404,7 @@ ereport(level, (errmsg_internal("format string", ...))); Advice about writing good error messages can be found in - . + . @@ -853,7 +853,7 @@ BETTER: unrecognized node type: 42 Keep in mind that error message texts need to be translated into other - languages. Follow the guidelines in + languages. Follow the guidelines in to avoid making life difficult for translators. diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml index c56a354483..139c8ed8f7 100644 --- a/doc/src/sgml/spgist.sgml +++ b/doc/src/sgml/spgist.sgml @@ -59,7 +59,7 @@ The core PostgreSQL distribution includes the SP-GiST operator classes shown in - . + .
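Before the internals described next, it may help to see the user-facing side of SP-GiST; a minimal sketch, assuming the built-in SP-GiST support for the point type and an illustrative table:

    CREATE TABLE landmarks (name text, location point);
    CREATE INDEX landmarks_loc_idx ON landmarks USING spgist (location);

    -- containment test that an SP-GiST quad-tree index can serve
    SELECT name FROM landmarks WHERE location <@ box '((0,0),(10,10))';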
@@ -206,7 +206,7 @@ in a radix tree the node label could be the next character of the string value. (Alternatively, an operator class can omit the node labels, if it works with a fixed set of nodes for all inner tuples; - see .) + see .) Optionally, an inner tuple can have a prefix value that describes all its members. In a radix tree this could be the common prefix of the represented strings. The prefix value is not necessarily @@ -303,7 +303,7 @@ typedef struct spgConfigOut longValuesOK should be set true only when the attType is of variable length and the operator class is capable of segmenting long values by repeated suffixing - (see ). + (see ). @@ -392,7 +392,7 @@ typedef struct spgChooseOut zero for the root level. allTheSame is true if the current inner tuple is marked as containing multiple equivalent nodes - (see ). + (see ). hasPrefix is true if the current inner tuple contains a prefix; if so, prefixDatum is its value. @@ -547,7 +547,7 @@ typedef struct spgPickSplitOut allTheSame to signify that this has happened. The choose and inner_consistent functions must take suitable care with such inner tuples. - See for more information. + See for more information. @@ -557,7 +557,7 @@ typedef struct spgPickSplitOut value has been supplied. In this case the point of the operation is to strip off a prefix and produce a new, shorter leaf datum value. The call will be repeated until a leaf datum short enough to fit on - a page has been produced. See for + a page has been produced. See for more information. @@ -638,7 +638,7 @@ typedef struct spgInnerConsistentOut allTheSame is true if the current inner tuple is marked all-the-same; in this case all the nodes have the same label (if any) and so either all or none of them match the query - (see ). + (see ). hasPrefix is true if the current inner tuple contains a prefix; if so, prefixDatum is its value. @@ -817,7 +817,7 @@ typedef struct spgLeafConsistentOut tuple must divide the set of leaf values into more than one node group. If the operator class's picksplit function fails to do that, the SP-GiST core resorts to - extraordinary measures described in . + extraordinary measures described in . @@ -894,7 +894,7 @@ typedef struct spgLeafConsistentOut The PostgreSQL source distribution includes several examples of index operator classes for SP-GiST, - as described in . Look + as described in . Look into src/backend/access/spgist/ and src/backend/utils/adt/ to see the code. diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml index e2b44c5fa1..350f0863e9 100644 --- a/doc/src/sgml/spi.sgml +++ b/doc/src/sgml/spi.sgml @@ -887,11 +887,11 @@ SPIPlanPtr SPI_prepare(const char * command, int changes + statement. Also, if the value of changes from one use to the next, the statement will be re-parsed using the new search_path. (This latter behavior is new as of PostgreSQL 9.3.) See for more information about the behavior of prepared + linkend="sql-prepare"/> for more information about the behavior of prepared statements. @@ -2308,7 +2308,7 @@ void SPI_scroll_cursor_fetch(Portal portal, FetchDirectio Notes - See the SQL command + See the SQL command for details of the interpretation of the direction and count parameters. @@ -2409,7 +2409,7 @@ void SPI_scroll_cursor_move(Portal portal, FetchDirection Notes - See the SQL command + See the SQL command for details of the interpretation of the direction and count parameters. @@ -4396,7 +4396,7 @@ INSERT INTO a SELECT * FROM a; that were processed by the command. 
You can find more complex examples for SPI in the source tree in src/test/regress/regress.c and in the - module. + module. @@ -4462,7 +4462,7 @@ execq(text *sql, int cnt) This is how you declare the function after having compiled it into - a shared library (details are in .): + a shared library (details are in .): CREATE FUNCTION execq(text, integer) RETURNS int8 diff --git a/doc/src/sgml/standalone-install.xml b/doc/src/sgml/standalone-install.xml index 49d94c5187..62582effed 100644 --- a/doc/src/sgml/standalone-install.xml +++ b/doc/src/sgml/standalone-install.xml @@ -14,10 +14,10 @@ in the stand-alone version. PostgreSQL using this source code distribution. - - - - + + + + Getting Started @@ -162,6 +162,6 @@ postgres$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data - - + + diff --git a/doc/src/sgml/start.sgml b/doc/src/sgml/start.sgml index 4a6f746d20..5b73557835 100644 --- a/doc/src/sgml/start.sgml +++ b/doc/src/sgml/start.sgml @@ -29,7 +29,7 @@ If you are installing PostgreSQL - yourself, then refer to + yourself, then refer to for instructions on installation, and return to this guide when the installation is complete. Be sure to follow closely the section about setting up the appropriate environment @@ -194,7 +194,7 @@ createdb: could not connect to database postgres: FATAL: role "joe" does not ex administrator has not created a PostgreSQL user account for you. (PostgreSQL user accounts are distinct from operating system user accounts.) If you are the administrator, see - for help creating accounts. You will need to + for help creating accounts. You will need to become the operating system user under which PostgreSQL was installed (usually postgres) to create the first user account. It could also be that you were assigned a @@ -268,7 +268,7 @@ createdb: database creation failed: ERROR: permission denied to create database More about createdb and dropdb can - be found in and + be found in and respectively. @@ -308,7 +308,7 @@ createdb: database creation failed: ERROR: permission denied to create database Writing a custom application, using one of the several available language bindings. These possibilities are discussed - further in . + further in . @@ -402,7 +402,7 @@ mydb=# command shell. (For more internal commands, type \? at the psql prompt.) The full capabilities of psql are documented in - . In this tutorial we will not use these + . In this tutorial we will not use these features explicitly, but you can use them yourself when it is helpful. diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml index 0f9bddf7ab..128b19cbc9 100644 --- a/doc/src/sgml/storage.sgml +++ b/doc/src/sgml/storage.sgml @@ -29,7 +29,7 @@ managed by different server instances, can exist on the same machine. The PGDATA directory contains several subdirectories and control -files, as shown in . In addition to +files, as shown in . In addition to these required items, the cluster configuration files postgresql.conf, pg_hba.conf, and pg_ident.conf are traditionally stored in @@ -197,14 +197,14 @@ for temporary relations, the file name is of the form is the backend ID of the backend which created the file, and FFF is the filenode number. In either case, in addition to the main file (a/k/a main fork), each table and index has a free space map (see ), which stores information about free space available in +linkend="storage-fsm"/>), which stores information about free space available in the relation. The free space map is stored in a file named with the filenode number plus the suffix _fsm. 
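The free space map just described can also be inspected from SQL; a small sketch, assuming the pg_freespacemap contrib module is installed and an illustrative table name:

    CREATE EXTENSION pg_freespacemap;
    -- one row per page: block number and bytes recorded as free
    SELECT * FROM pg_freespace('mytable');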
Tables also have a visibility map, stored in a fork with the suffix _vm, to track which pages are known to have no dead tuples. The visibility map is -described further in . Unlogged tables and indexes +described further in . Unlogged tables and indexes have a third fork, known as the initialization fork, which is stored in a fork -with the suffix _init (see ). +with the suffix _init (see ). @@ -240,12 +240,12 @@ associated TOAST table, which is used for out-of-line sto field values that are too large to keep in the table rows proper. pg_class.reltoastrelid links from a table to its TOAST table, if any. -See for more information. +See for more information. The contents of tables and indexes are discussed further in -. +. @@ -335,7 +335,7 @@ actually consist of a four-byte length word and contents until after it's been detoasted. (This is normally done by invoking PG_DETOAST_DATUM before doing anything with an input value, but in some cases more efficient approaches are possible. -See for more detail.) +See for more detail.) @@ -376,12 +376,12 @@ associated with, the table containing the TOAST pointer datum itself. These on-disk pointer datums are created by the TOAST management code (in access/heap/tuptoaster.c) when a tuple to be stored on disk is too large to be stored as-is. -Further details appear in . +Further details appear in . Alternatively, a TOAST pointer datum can contain a pointer to out-of-line data that appears elsewhere in memory. Such datums are necessarily short-lived, and will never appear on-disk, but they are very useful for avoiding copying and redundant processing of large data values. -Further details appear in . +Further details appear in . @@ -611,7 +611,7 @@ at the root. See src/backend/storage/freespace/README for more details on how the FSM is structured, and how it's updated and searched. -The module +The module can be used to examine the information stored in free space maps. @@ -657,7 +657,7 @@ cleared by any data-modifying operations on a page. -The module can be used to examine the +The module can be used to examine the information stored in the visibility map. @@ -718,7 +718,7 @@ within the index, depending on the index access method. - shows the overall layout of a page. + shows the overall layout of a page. There are five parts to each page. @@ -774,9 +774,9 @@ data. Empty in ordinary tables. The first 24 bytes of each page consists of a page header (PageHeaderData). Its format is detailed in . The first field tracks the most + linkend="pageheaderdata-table"/>. The first field tracks the most recent WAL entry related to this page. The second field contains - the page checksum if are + the page checksum if are enabled. Next is a 2-byte field containing flag bits. This is followed by three 2-byte integer fields (pd_lower, pd_upper, and @@ -918,7 +918,7 @@ data. Empty in ordinary tables. header (occupying 23 bytes on most machines), followed by an optional null bitmap, an optional object ID field, and the user data. The header is detailed - in . The actual user data + in . The actual user data (columns of the row) begins at the offset indicated by t_hoff, which must always be a multiple of the MAXALIGN distance for the platform. @@ -1032,7 +1032,7 @@ data. Empty in ordinary tables. struct varlena, which includes the total length of the stored value and some flag bits. Depending on the flags, the data can be either inline or in a TOAST table; - it might be compressed, too (see ). + it might be compressed, too (see ). 
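The relfilenode and TOAST relationships covered in these storage.sgml hunks can be observed directly from SQL; a sketch using an illustrative table name:

    -- where the main fork lives, relative to the data directory
    SELECT pg_relation_filepath('mytable');

    -- the linked TOAST table, plus per-fork sizes
    SELECT relname, relfilenode, reltoastrelid,
           pg_relation_size('mytable', 'main') AS main_bytes,
           pg_relation_size('mytable', 'fsm')  AS fsm_bytes
    FROM pg_class WHERE relname = 'mytable';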
diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml index e4012cc182..a938a21334 100644 --- a/doc/src/sgml/syntax.sgml +++ b/doc/src/sgml/syntax.sgml @@ -75,7 +75,7 @@ INSERT INTO MY_TABLE VALUES (3, 'hi there'); a SET token to appear in a certain position, and this particular variation of INSERT also requires a VALUES in order to be complete. The - precise syntax rules for each command are described in . + precise syntax rules for each command are described in . @@ -109,7 +109,7 @@ INSERT INTO MY_TABLE VALUES (3, 'hi there'); same lexical structure, meaning that one cannot know whether a token is an identifier or a key word without knowing the language. A complete list of key words can be found in . + linkend="sql-keywords-appendix"/>. @@ -353,7 +353,7 @@ SELECT 'foo' 'bar'; Within an escape string, a backslash character (\) begins a C-like backslash escape sequence, in which the combination of backslash and following character(s) represent a special byte - value, as shown in . + value, as shown in .
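A short illustration of the string-constant forms covered in this hunk, assuming standard_conforming_strings is on (the default since 9.1):

    SELECT E'first line\nsecond line';  -- escape string: \n is a newline
    SELECT 'back\slash';                -- regular string: backslash is literal
    SELECT U&'d\0061t\+000061';         -- Unicode escapes, yields 'data'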
@@ -430,7 +430,7 @@ SELECT 'foo' 'bar'; valid characters in the server character set encoding. When the server encoding is UTF-8, then the Unicode escapes or the alternative Unicode escape syntax, explained - in , should be used + in , should be used instead. (The alternative would be doing the UTF-8 encoding by hand and writing out the bytes, which would be very cumbersome.) @@ -451,7 +451,7 @@ SELECT 'foo' 'bar'; If the configuration parameter - is off, + is off, then PostgreSQL recognizes backslash escapes in both regular and escape string constants. However, as of PostgreSQL 9.1, the default is on, meaning @@ -466,8 +466,8 @@ SELECT 'foo' 'bar'; In addition to standard_conforming_strings, the configuration - parameters and - govern treatment of backslashes + parameters and + govern treatment of backslashes in string constants. @@ -539,7 +539,7 @@ U&'d!0061t!+000061' UESCAPE '!' Also, the Unicode escape syntax for string constants only works when the configuration - parameter is + parameter is turned on. This is because otherwise this syntax could confuse clients that parse the SQL statements to the point that it could lead to SQL injections and similar security issues. If the @@ -772,14 +772,14 @@ CAST ( 'string' AS type ) typename ( 'string' ) but not all type names can be used in this way; see for details. + linkend="sql-syntax-type-casts"/> for details. The ::, CAST(), and function-call syntaxes can also be used to specify run-time type conversions of arbitrary expressions, as discussed in . To avoid syntactic ambiguity, the + linkend="sql-syntax-type-casts"/>. To avoid syntactic ambiguity, the type 'string' syntax can only be used to specify the type of a simple literal constant. Another restriction on the @@ -885,7 +885,7 @@ CAST ( 'string' AS type ) Brackets ([]) are used to select the elements - of an array. See for more information + of an array. See for more information on arrays. @@ -909,7 +909,7 @@ CAST ( 'string' AS type ) The colon (:) is used to select slices from arrays. (See .) In certain SQL dialects (such as Embedded + linkend="arrays"/>.) In certain SQL dialects (such as Embedded SQL), the colon is used to prefix variable names. @@ -980,7 +980,7 @@ CAST ( 'string' AS type ) - shows the precedence and + shows the precedence and associativity of the operators in PostgreSQL. Most operators have the same precedence and are left-associative. The precedence and associativity of the operators is hard-wired @@ -1126,7 +1126,7 @@ SELECT (5 !) - 6; SELECT 3 OPERATOR(pg_catalog.+) 4; the OPERATOR construct is taken to have the default precedence - shown in for + shown in for any other operator. This is true no matter which specific operator appears inside OPERATOR(). @@ -1148,7 +1148,7 @@ SELECT 3 OPERATOR(pg_catalog.+) 4; change behavior without any parsing error being reported. If you are concerned about whether these changes have silently broken something, you can test your application with the configuration - parameter turned on + parameter turned on to see if any warnings are logged. @@ -1290,13 +1290,13 @@ SELECT 3 OPERATOR(pg_catalog.+) 4; be classified as an expression but do not follow any general syntax rules. These generally have the semantics of a function or operator and are explained in the appropriate location in . An example is the IS NULL + linkend="functions"/>. An example is the IS NULL clause. We have already discussed constants in . The following sections discuss + linkend="sql-syntax-constants"/>. The following sections discuss the remaining options. 
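The several cast notations discussed earlier in these syntax.sgml hunks are equivalent for a simple literal; a quick sketch:

    SELECT CAST('42' AS integer);  -- SQL-standard syntax
    SELECT '42'::integer;          -- historical PostgreSQL syntax
    SELECT integer '42';           -- type 'string' literal form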
@@ -1319,7 +1319,7 @@ SELECT 3 OPERATOR(pg_catalog.+) 4; table (possibly qualified with a schema name), or an alias for a table defined by means of a FROM clause. The correlation name and separating dot can be omitted if the column name - is unique across all the tables being used in the current query. (See also .) + is unique across all the tables being used in the current query. (See also .) @@ -1402,7 +1402,7 @@ $1[10:42] The parentheses in the last example are required. - See for more about arrays. + See for more about arrays. @@ -1455,7 +1455,7 @@ $1.somecolumn (compositecol).* This notation behaves differently depending on context; - see for details. + see for details. @@ -1475,7 +1475,7 @@ $1.somecolumn expression operator (unary postfix operator) where the operator token follows the syntax - rules of , or is one of the + rules of , or is one of the key words AND, OR, and NOT, or is a qualified operator name in the form: @@ -1483,7 +1483,7 @@ $1.somecolumn Which particular operators exist and whether they are unary or binary depends on what operators have been - defined by the system or the user. + defined by the system or the user. describes the built-in operators. @@ -1514,13 +1514,13 @@ sqrt(2) - The list of built-in functions is in . + The list of built-in functions is in . Other functions can be added by the user. The arguments can optionally have names attached. - See for details. + See for details. @@ -1532,7 +1532,7 @@ sqrt(2) interchangeable. This behavior is not SQL-standard but is provided in PostgreSQL because it allows use of functions to emulate computed fields. For more information see - . + . @@ -1622,7 +1622,7 @@ sqrt(2) such an aggregate, the optional order_by_clause can be used to specify the desired ordering. The order_by_clause has the same syntax as for a query-level ORDER BY clause, as - described in , except that its expressions + described in , except that its expressions are always just expressions and cannot be output-column names or numbers. For example: @@ -1736,7 +1736,7 @@ FROM generate_series(1,10) AS s(i); The predefined aggregate functions are described in . Other aggregate functions can be added + linkend="functions-aggregate"/>. Other aggregate functions can be added by the user. @@ -1750,8 +1750,8 @@ FROM generate_series(1,10) AS s(i); When an aggregate expression appears in a subquery (see - and - ), the aggregate is normally + and + ), the aggregate is normally evaluated over the rows of the subquery. But an exception occurs if the aggregate's arguments (and filter_clause if any) contain only outer-level variables: @@ -1830,7 +1830,7 @@ UNBOUNDED FOLLOWING Alternatively, a full window_definition can be given within parentheses, using the same syntax as for defining a named window in the WINDOW clause; see the - reference page for details. It's worth + reference page for details. It's worth pointing out that OVER wname is not exactly equivalent to OVER (wname ...); the latter implies copying and modifying the window definition, and will be rejected if the referenced window @@ -1921,7 +1921,7 @@ UNBOUNDED FOLLOWING The built-in window functions are described in . Other window functions can be added by + linkend="functions-window-table"/>. Other window functions can be added by the user. Also, any built-in or user-defined general-purpose or statistical aggregate can be used as a window function. (Ordered-set and hypothetical-set aggregates cannot presently be used as window functions.) 
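A compact illustration of the window-function syntax discussed above, including a named window referenced via OVER; the empsalary table is illustrative:

    SELECT depname, empno, salary,
           rank()      OVER w,
           avg(salary) OVER w   -- running average under the window's ordering
    FROM empsalary
    WINDOW w AS (PARTITION BY depname ORDER BY salary DESC)
    ORDER BY depname, salary DESC;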
@@ -1944,9 +1944,9 @@ UNBOUNDED FOLLOWING More information about window functions can be found in - , - , and - . + , + , and + . @@ -1984,7 +1984,7 @@ CAST ( expression AS type represents a run-time type conversion. The cast will succeed only if a suitable type conversion operation has been defined. Notice that this is subtly different from the use of casts with constants, as shown in - . A cast applied to an + . A cast applied to an unadorned string literal represents the initial assignment of a type to a literal constant value, and so it will succeed for any type (if the contents of the string literal are acceptable input syntax for the @@ -2028,7 +2028,7 @@ CAST ( expression AS type syntax is nothing more than a direct invocation of the underlying conversion function. Obviously, this is not something that a portable application should rely on. For further details see - . + . @@ -2079,7 +2079,7 @@ SELECT * FROM tbl WHERE a > 'foo' COLLATE "C"; arguments, and an explicit COLLATE clause will override the collations of all other arguments. (Attaching non-matching COLLATE clauses to more than one argument, however, is an - error. For more details see .) + error. For more details see .) Thus, this gives the same result as the previous example: SELECT * FROM tbl WHERE a COLLATE "C" > 'foo'; @@ -2104,7 +2104,7 @@ SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C"; A scalar subquery is an ordinary SELECT query in parentheses that returns exactly one - row with one column. (See for information about writing queries.) + row with one column. (See for information about writing queries.) The SELECT query is executed and the single returned value is used in the surrounding value expression. It is an error to use a query that @@ -2113,7 +2113,7 @@ SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C"; there is no error; the scalar result is taken to be null.) The subquery can refer to variables from the surrounding query, which will act as constants during any one evaluation of the subquery. - See also for other expressions involving subqueries. + See also for other expressions involving subqueries. @@ -2156,7 +2156,7 @@ SELECT ARRAY[1,2,3+4]; By default, the array element type is the common type of the member expressions, determined using the same rules as for UNION or - CASE constructs (see ). + CASE constructs (see ). You can override this by explicitly casting the array constructor to the desired type, for example: @@ -2168,7 +2168,7 @@ SELECT ARRAY[1,2,22.7]::integer[]; This has the same effect as casting each expression to the array element type individually. - For more on casting, see . + For more on casting, see . @@ -2259,7 +2259,7 @@ SELECT ARRAY(SELECT ARRAY[i, i*2] FROM generate_series(1,5) AS a(i)); The subscripts of an array value built with ARRAY always begin with one. For more information about arrays, see - . + . @@ -2300,7 +2300,7 @@ SELECT ROW(1,2.5,'this is a test'); rowvalue.*, which will be expanded to a list of the elements of the row value, just as occurs when the .* syntax is used at the top level - of a SELECT list (see ). + of a SELECT list (see ). For example, if table t has columns f1 and f2, these are the same: @@ -2372,9 +2372,9 @@ SELECT ROW(1,2.5,'this is a test') = ROW(1, 3, 'not the same'); SELECT ROW(table.*) IS NULL FROM table; -- detect all-null rows - For more detail see . + For more detail see . Row constructors can also be used in connection with subqueries, - as discussed in . + as discussed in . 
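One subtlety of the row-wise IS NULL test shown above is that it examines every field, so a partly-null row fails both tests; a quick sketch:

    SELECT ROW(1, NULL) IS NULL     AS all_null,   -- false
           ROW(1, NULL) IS NOT NULL AS none_null;  -- also false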
@@ -2422,7 +2422,7 @@ SELECT somefunc() OR true; When it is essential to force evaluation order, a CASE - construct (see ) can be + construct (see ) can be used. For example, this is an untrustworthy way of trying to avoid division by zero in a WHERE clause: @@ -2442,7 +2442,7 @@ SELECT ... WHERE CASE WHEN x > 0 THEN y/x > 1.5 ELSE false END; CASE is not a cure-all for such issues, however. One limitation of the technique illustrated above is that it does not prevent early evaluation of constant subexpressions. - As described in , functions and + As described in , functions and operators marked IMMUTABLE can be evaluated when the query is planned rather than when it is executed. Thus for example @@ -2547,7 +2547,7 @@ LANGUAGE SQL IMMUTABLE STRICT; b inputs will be concatenated, and forced to either upper or lower case depending on the uppercase parameter. The remaining details of this function - definition are not important here (see for + definition are not important here (see for more information). diff --git a/doc/src/sgml/tablefunc.sgml b/doc/src/sgml/tablefunc.sgml index 6f1d3df34d..007e9c62f5 100644 --- a/doc/src/sgml/tablefunc.sgml +++ b/doc/src/sgml/tablefunc.sgml @@ -18,7 +18,7 @@ Functions Provided - shows the functions provided + shows the functions provided by the tablefunc module. @@ -646,7 +646,7 @@ connectby(text relname, text keyid_fld, text parent_keyid_fld - explains the + explains the parameters. diff --git a/doc/src/sgml/tablesample-method.sgml b/doc/src/sgml/tablesample-method.sgml index 9ac28ceb4c..b84b7ba885 100644 --- a/doc/src/sgml/tablesample-method.sgml +++ b/doc/src/sgml/tablesample-method.sgml @@ -33,7 +33,7 @@ method_name(internal) RETURNS tsm_handler type TsmRoutine, which contains pointers to support functions for the sampling method. These support functions are plain C functions and are not visible or callable at the SQL level. The support functions are - described in . + described in . diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml index 7b4912dd5e..4dc52ec983 100644 --- a/doc/src/sgml/textsearch.sgml +++ b/doc/src/sgml/textsearch.sgml @@ -160,12 +160,12 @@ A data type tsvector is provided for storing preprocessed documents, along with a type tsquery for representing processed - queries (). There are many + queries (). There are many functions and operators available for these data types - (), the most important of which is + (), the most important of which is the match operator @@, which we introduce in - . Full text searches can be accelerated - using indexes (). + . Full text searches can be accelerated + using indexes (). @@ -264,7 +264,7 @@ SELECT 'fat & cow'::tsquery @@ 'a fat cat sat on a mat and ate a fat rat'::t text, any more than a tsvector is. A tsquery contains search terms, which must be already-normalized lexemes, and may combine multiple terms using AND, OR, NOT, and FOLLOWED BY operators. - (For syntax details see .) There are + (For syntax details see .) There are functions to_tsquery, plainto_tsquery, and phraseto_tsquery that are helpful in converting user-written text into a proper @@ -420,7 +420,7 @@ SELECT phraseto_tsquery('the cats ate the rats'); During installation an appropriate configuration is selected and - is set accordingly + is set accordingly in postgresql.conf. If you are using the same text search configuration for the entire cluster you can use the value in postgresql.conf. 
To use different configurations @@ -530,7 +530,7 @@ WHERE to_tsvector(body) @@ to_tsquery('friend'); This query will use the configuration set by . + linkend="guc-default-text-search-config"/>. @@ -565,7 +565,7 @@ LIMIT 10; We can create a GIN index () to speed up text searches: + linkend="textsearch-indexes"/>) to speed up text searches: CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body)); @@ -573,9 +573,9 @@ CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body)); Notice that the 2-argument version of to_tsvector is used. Only text search functions that specify a configuration name can - be used in expression indexes (). + be used in expression indexes (). This is because the index contents must be unaffected by . If they were affected, the + linkend="guc-default-text-search-config"/>. If they were affected, the index contents might be inconsistent because different entries could contain tsvectors that were created with different text search configurations, and there would be no way to guess which was which. It @@ -653,7 +653,7 @@ LIMIT 10; representation, it is necessary to create a trigger to keep the tsvector column current anytime title or body changes. - explains how to do that. + explains how to do that. @@ -665,7 +665,7 @@ LIMIT 10; searches will be faster, since it will not be necessary to redo the to_tsvector calls to verify index matches. (This is more important when using a GiST index than a GIN index; see .) The expression-index approach is + linkend="textsearch-indexes"/>.) The expression-index approach is simpler to set up, however, and it requires less disk space since the tsvector representation is not stored explicitly. @@ -732,14 +732,14 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); The to_tsvector function internally calls a parser which breaks the document text into tokens and assigns a type to each token. For each token, a list of - dictionaries () is consulted, + dictionaries () is consulted, where the list can vary depending on the token type. The first dictionary that recognizes the token emits one or more normalized lexemes to represent the token. For example, rats became rat because one of the dictionaries recognized that the word rats is a plural form of rat. Some words are recognized as - stop words (), which + stop words (), which causes them to be ignored since they occur too frequently to be useful in searching. In our example these are a, on, and it. @@ -749,7 +749,7 @@ SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats'); (Space symbols), meaning space tokens will never be indexed. The choices of parser, dictionaries and which types of tokens to index are determined by the selected text search configuration (). It is possible to have + linkend="textsearch-configuration"/>). It is possible to have many different configurations in the same database, and predefined configurations are available for various languages. In our example we used the default configuration english for the @@ -785,7 +785,7 @@ UPDATE tt SET ti = of each lexeme in the finished tsvector, and then merged the labeled tsvector values using the tsvector concatenation operator ||. ( gives details about these + linkend="textsearch-manipulate-tsvector"/> gives details about these operations.) @@ -823,7 +823,7 @@ to_tsquery( config using parentheses. In other words, the input to to_tsquery must already follow the general rules for tsquery input, as described in . 
The difference is that while basic + linkend="datatype-tsquery"/>. The difference is that while basic tsquery input takes the tokens at face value, to_tsquery normalizes each token into a lexeme using the specified or default configuration, and discards any tokens that are @@ -1030,7 +1030,7 @@ SELECT phraseto_tsquery('english', 'The Fat & Rats:C'); its calculation. Therefore, it ignores any stripped lexemes in the tsvector. If there are no unstripped lexemes in the input, the result will be zero. (See for more information + linkend="textsearch-manipulate-tsvector"/> for more information about the strip function and positional information in tsvectors.) @@ -1333,7 +1333,7 @@ query.', Manipulating Documents - showed how raw textual + showed how raw textual documents can be converted into tsvector values. PostgreSQL also provides functions and operators that can be used to manipulate documents that are already @@ -1455,7 +1455,7 @@ query.', A full list of tsvector-related functions is available - in . + in . @@ -1464,7 +1464,7 @@ query.', Manipulating Queries - showed how raw textual + showed how raw textual queries can be converted into tsquery values. PostgreSQL also provides functions and operators that can be used to manipulate queries that are already @@ -1651,7 +1651,7 @@ SELECT querytree(to_tsquery('!defined')); (e.g., new york, big apple, nyc, gotham) or narrow the search to direct the user to some hot topic. There is some overlap in functionality between this feature - and thesaurus dictionaries (). + and thesaurus dictionaries (). However, you can modify a set of rewrite rules on-the-fly without reindexing, whereas updating a thesaurus requires reindexing to be effective. @@ -1962,7 +1962,7 @@ LIMIT 10; The built-in parser is named pg_catalog.default. - It recognizes 23 token types, shown in . + It recognizes 23 token types, shown in .
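Both the token-type list and the parser's output can be examined at the SQL level; for example:

    SELECT * FROM ts_token_type('default');  -- lists the 23 token types
    SELECT tokid, token
    FROM ts_parse('default', 'http://example.com/stuff/index.html');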
@@ -2295,7 +2295,7 @@ ALTER TEXT SEARCH CONFIGURATION astro_en end where it'd be useless. Filtering dictionaries are useful to partially normalize words to simplify the task of later dictionaries. For example, a filtering dictionary could be used to remove accents from accented - letters, as is done by the module. + letters, as is done by the module. @@ -2453,7 +2453,7 @@ SELECT ts_lexize('public.simple_dict','The'); This dictionary template is used to create dictionaries that replace a word with a synonym. Phrases are not supported (use the thesaurus - template () for that). A synonym + template () for that). A synonym dictionary can be used to overcome linguistic problems, for example, to prevent an English stemmer dictionary from reducing the word Paris to pari. It is enough to have a Paris paris line in the @@ -2511,7 +2511,7 @@ SELECT * FROM ts_debug('english', 'Paris'); to_tsvector(), but when it is used in to_tsquery(), the result will be a query item with the prefix match marker (see - ). + ). For example, suppose we have these entries in $SHAREDIR/tsearch_data/synonym_sample.syn: @@ -3042,7 +3042,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( to_tsvector or to_tsquery needs a text search configuration to perform its processing. The configuration parameter - + specifies the name of the default configuration, which is the one used by text search functions if an explicit configuration parameter is omitted. @@ -3055,7 +3055,7 @@ CREATE TEXT SEARCH DICTIONARY english_stem ( you can create custom configurations easily. To facilitate management of text search objects, a set of SQL commands is available, and there are several psql commands that display information - about text search objects (). + about text search objects (). @@ -3324,7 +3324,7 @@ SELECT * FROM ts_debug('public.english','The Brightest supernovaes'); The word The was recognized by the english_ispell dictionary as a stop word () and will not be indexed. + linkend="textsearch-stopwords"/>) and will not be indexed. The spaces are discarded too, since the configuration provides no dictionaries at all for them. @@ -3604,7 +3604,7 @@ SELECT plainto_tsquery('supernovae stars'); Note that GIN index build time can often be improved - by increasing , while + by increasing , while GiST index build time is not sensitive to that parameter. @@ -3614,7 +3614,7 @@ SELECT plainto_tsquery('supernovae stars'); allows the implementation of very fast searches with online update. Partitioning can be done at the database level using table inheritance, or by distributing documents over - servers and collecting search results using the + servers and collecting search results using the module. The latter is possible because ranking functions use only local information. diff --git a/doc/src/sgml/trigger.sgml b/doc/src/sgml/trigger.sgml index b0e160acf6..bf5d3f9088 100644 --- a/doc/src/sgml/trigger.sgml +++ b/doc/src/sgml/trigger.sgml @@ -11,10 +11,10 @@ This chapter provides general information about writing trigger functions. Trigger functions can be written in most of the available procedural languages, including - PL/pgSQL (), - PL/Tcl (), - PL/Perl (), and - PL/Python (). + PL/pgSQL (), + PL/Tcl (), + PL/Perl (), and + PL/Python (). After reading this chapter, you should consult the chapter for your favorite procedural language to find out the language-specific details of writing a trigger in it. @@ -76,7 +76,7 @@ Once a suitable trigger function has been created, the trigger is established with - . + . 
The same trigger function can be used for multiple triggers. @@ -397,8 +397,8 @@ Further information about data visibility rules can be found in - . The example in contains a demonstration of these rules. + . The example in contains a demonstration of these rules. @@ -715,7 +715,7 @@ typedef struct Trigger To allow queries issued through SPI to reference transition tables, see - . + . @@ -835,7 +835,7 @@ trigf(PG_FUNCTION_ARGS) After you have compiled the source code (see ), declare the function and the triggers: + linkend="dfunc"/>), declare the function and the triggers: CREATE FUNCTION trigf() RETURNS trigger AS 'filename' @@ -921,7 +921,7 @@ DELETE 2 There are more complex examples in src/test/regress/regress.c and - in . + in . diff --git a/doc/src/sgml/tsm-system-rows.sgml b/doc/src/sgml/tsm-system-rows.sgml index 8504ee1281..3dcd948ff8 100644 --- a/doc/src/sgml/tsm-system-rows.sgml +++ b/doc/src/sgml/tsm-system-rows.sgml @@ -10,7 +10,7 @@ The tsm_system_rows module provides the table sampling method SYSTEM_ROWS, which can be used in - the TABLESAMPLE clause of a + the TABLESAMPLE clause of a command. diff --git a/doc/src/sgml/tsm-system-time.sgml b/doc/src/sgml/tsm-system-time.sgml index 525292bb7c..fd8e999544 100644 --- a/doc/src/sgml/tsm-system-time.sgml +++ b/doc/src/sgml/tsm-system-time.sgml @@ -10,7 +10,7 @@ The tsm_system_time module provides the table sampling method SYSTEM_TIME, which can be used in - the TABLESAMPLE clause of a + the TABLESAMPLE clause of a command. diff --git a/doc/src/sgml/typeconv.sgml b/doc/src/sgml/typeconv.sgml index 5c99e3adaf..cd7de8fe3f 100644 --- a/doc/src/sgml/typeconv.sgml +++ b/doc/src/sgml/typeconv.sgml @@ -26,7 +26,7 @@ can be tailored by using explicit type conversion. This chapter introduces the PostgreSQL type conversion mechanisms and conventions. -Refer to the relevant sections in and +Refer to the relevant sections in and for more information on specific data types and allowed functions and operators. @@ -139,7 +139,7 @@ and for the GREATEST and LEAST functio The system catalogs store information about which conversions, or casts, exist between which data types, and how to perform those conversions. Additional casts can be added by the user -with the +with the command. (This is usually done in conjunction with defining new data types. The set of casts between built-in types has been carefully crafted and is best not @@ -158,7 +158,7 @@ Data types are divided into several basic type categories, including boolean, numeric, string, bitstring, datetime, timespan, geometric, network, and -user-defined. (For a list see ; +user-defined. (For a list see ; but note it is also possible to create custom type categories.) Within each category there can be one or more preferred types, which are preferred when there is a choice of possible types. With careful selection @@ -213,7 +213,7 @@ should use this new function and no longer do implicit conversion to use the old Note that this procedure is indirectly affected by the precedence of the operators involved, since that will determine which sub-expressions are taken to be the inputs of which operators. - See for more information. + See for more information. @@ -225,7 +225,7 @@ Select the operators to be considered from the pg_operator system catalog. If a non-schema-qualified operator name was used (the usual case), the operators considered are those with the matching name and argument count that are -visible in the current search path (see ). +visible in the current search path (see ). 
If a qualified operator name was given, only operators in the specified schema are considered. @@ -490,9 +490,9 @@ could possibly accept an integer array on the left-hand side are array inclusion (anyarray <@ anyarray) and range inclusion (anyelement <@ anyrange). Since none of these polymorphic pseudo-types (see ) are considered preferred, the parser cannot +linkend="datatype-pseudo"/>) are considered preferred, the parser cannot resolve the ambiguity on that basis. -However, tells +However, tells it to assume that the unknown-type literal is of the same type as the other input, that is, integer array. Now only one of the two operators can match, so array inclusion is selected. (Had range inclusion been selected, we would @@ -519,10 +519,10 @@ SELECT * FROM mytable WHERE val = 'foo'; This query will not use the custom operator. The parser will first see if there is a mytext = mytext operator -(), which there is not; +(), which there is not; then it will consider the domain's base type text, and see if there is a text = text operator -(), which there is; +(), which there is; so it resolves the unknown-type literal as text and uses the text = text operator. The only way to get the custom operator to be used is to explicitly cast @@ -564,7 +564,7 @@ Select the functions to be considered from the pg_proc system catalog. If a non-schema-qualified function name was used, the functions considered are those with the matching name and argument count that are -visible in the current search path (see ). +visible in the current search path (see ). If a qualified function name was given, only functions in the specified schema are considered. @@ -633,7 +633,7 @@ the function call is treated as a form of CAST specification. in cases where there is not an actual cast function. If there is a cast function, it is conventionally named after its output type, and so there is no need to have a special case. See - + for additional commentary. @@ -846,7 +846,7 @@ Check for an exact match with the target. Otherwise, try to convert the expression to the target type. This is possible if an assignment cast between the two types is registered in the -pg_cast catalog (see ). +pg_cast catalog (see ). Alternatively, if the expression is an unknown-type literal, the contents of the literal string will be fed to the input conversion routine for the target type. diff --git a/doc/src/sgml/user-manag.sgml b/doc/src/sgml/user-manag.sgml index 94fcdf9829..ae15efed95 100644 --- a/doc/src/sgml/user-manag.sgml +++ b/doc/src/sgml/user-manag.sgml @@ -24,7 +24,7 @@ This chapter describes how to create and manage roles. More information about the effects of role privileges on various - database objects can be found in . + database objects can be found in . @@ -52,7 +52,7 @@ maintain a correspondence, but this is not required. Database roles are global across a database cluster installation (and not per individual database). To create a role use the SQL command: + linkend="sql-createrole"/> SQL command: CREATE ROLE name; @@ -61,7 +61,7 @@ CREATE ROLE name; double-quoted. (In practice, you will usually want to add additional options, such as LOGIN, to the command. More details appear below.) 
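For example, a minimal login role might be created like this (the role name and password are illustrative):

    CREATE ROLE alice LOGIN PASSWORD 'change-me';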
To remove an existing role, use the analogous - command: + command: DROP ROLE name; @@ -76,8 +76,8 @@ DROP ROLE name; - For convenience, the programs - and are provided as wrappers + For convenience, the programs + and are provided as wrappers around these SQL commands that can be called from the shell command line: @@ -92,7 +92,7 @@ dropuser name SELECT rolname FROM pg_roles; - The program's \du meta-command + The program's \du meta-command is also useful for listing the existing roles. @@ -126,7 +126,7 @@ SELECT rolname FROM pg_roles; The set of database roles a given client connection can connect as is determined by the client authentication setup, as explained in - . (Thus, a client is not + . (Thus, a client is not limited to connect as the role matching its operating system user, just as a person's login name need not match his or her real name.) Since the role @@ -240,8 +240,8 @@ CREATE USER name; A role's attributes can be modified after creation with ALTER ROLE.ALTER ROLE - See the reference pages for the - and commands for details. + See the reference pages for the + and commands for details. @@ -257,7 +257,7 @@ CREATE USER name; A role can also have role-specific defaults for many of the run-time configuration settings described in . For example, if for some reason you + linkend="runtime-config"/>. For example, if for some reason you want to disable index scans (hint: not a good idea) anytime you connect, you can use: @@ -303,8 +303,8 @@ CREATE ROLE name; Once the group role exists, you can add and remove members using the - and - commands: + and + commands: GRANT group_role TO role1, ... ; REVOKE group_role FROM role1, ... ; @@ -319,7 +319,7 @@ REVOKE group_role FROM role1 The members of a group role can use the privileges of the role in two ways. First, every member of a group can explicitly do - to + to temporarily become the group role. In this state, the database session has access to the privileges of the group role rather than the original login role, and any database objects created are @@ -403,7 +403,7 @@ RESET ROLE; To destroy a group role, use : + linkend="sql-droprole"/>: DROP ROLE name; @@ -418,7 +418,7 @@ DROP ROLE name; Because roles can own database objects and can hold privileges to access other objects, dropping a role is often not just a matter of a - quick . Any objects owned by the role must + quick . Any objects owned by the role must first be dropped or reassigned to other owners; and any permissions granted to the role must be revoked. @@ -429,7 +429,7 @@ DROP ROLE name; ALTER TABLE bobs_table OWNER TO alice; - Alternatively, the command can be + Alternatively, the command can be used to reassign ownership of all objects owned by the role-to-be-dropped to a single other role. Because REASSIGN OWNED cannot access objects in other databases, it is necessary to run it in each database @@ -442,7 +442,7 @@ ALTER TABLE bobs_table OWNER TO alice; Once any valuable objects have been transferred to new owners, any remaining objects owned by the role-to-be-dropped can be dropped with - the command. Again, this command cannot + the command. Again, this command cannot access objects in other databases, so it is necessary to run it in each database that contains objects owned by the role. Also, DROP OWNED will not drop entire databases or tablespaces, so it is @@ -499,7 +499,7 @@ DROP ROLE doomed_role; - The default roles are described in . + The default roles are described in . 
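For example (the role name alice below is hypothetical, not part of the
patch), granting one of the default roles works like any other role grant:

GRANT pg_monitor TO alice;

after which alice can use the monitoring views and functions that the
pg_monitor role carries.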
Note that the specific permissions for each of the default roles may change in the future as additional capabilities are added. Administrators should monitor the release notes for changes. diff --git a/doc/src/sgml/uuid-ossp.sgml b/doc/src/sgml/uuid-ossp.sgml index b1c1cd6f0a..b3b816c372 100644 --- a/doc/src/sgml/uuid-ossp.sgml +++ b/doc/src/sgml/uuid-ossp.sgml @@ -17,7 +17,7 @@ <literal>uuid-ossp</literal> Functions - shows the functions available to + shows the functions available to generate UUIDs. The relevant standards ITU-T Rec. X.667, ISO/IEC 9834-8:2005, and RFC 4122 specify four algorithms for generating UUIDs, identified by the @@ -64,7 +64,7 @@ This function generates a version 3 UUID in the given namespace using the specified input name. The namespace should be one of the special constants produced by the uuid_ns_*() functions shown - in . (It could be any UUID in theory.) The name is an identifier + in . (It could be any UUID in theory.) The name is an identifier in the selected namespace. @@ -186,7 +186,7 @@ SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org'); If you only need randomly-generated (version 4) UUIDs, consider using the gen_random_uuid() function - from the module instead. + from the module instead. diff --git a/doc/src/sgml/vacuumlo.sgml b/doc/src/sgml/vacuumlo.sgml index 190ed9880b..0b4dfc2b17 100644 --- a/doc/src/sgml/vacuumlo.sgml +++ b/doc/src/sgml/vacuumlo.sgml @@ -37,7 +37,7 @@ If you use this, you may also be interested in the lo_manage - trigger in the module. + trigger in the module. lo_manage is useful to try to avoid creating orphaned LOs in the first place. @@ -61,7 +61,7 @@ Remove no more than limit large objects per transaction (default 1000). Since the server acquires a lock per LO removed, removing too many LOs in one transaction risks exceeding - . Set the limit to + . Set the limit to zero if you want all removals done in a single transaction. diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml index f9febe916f..f4bc2d4161 100644 --- a/doc/src/sgml/wal.sgml +++ b/doc/src/sgml/wal.sgml @@ -35,7 +35,7 @@ frequently requested disk blocks and combines disk writes. Fortunately, all operating systems give applications a way to force writes from the buffer cache to disk, and PostgreSQL uses those - features. (See the parameter + features. (See the parameter to adjust how this is done.) @@ -133,7 +133,7 @@ (BBU) disk controllers. In such setups, the synchronize command forces all data from the controller cache to the disks, eliminating much of the benefit of the BBU. You can run the - program to see + program to see if you are affected. If you are affected, the performance benefits of the BBU can be regained by turning off write barriers in the file system or reconfiguring the disk controller, if that is @@ -174,7 +174,7 @@ restore partially-written pages from WAL. If you have file-system software that prevents partial page writes (e.g., ZFS), you can turn off this page imaging by turning off the parameter. Battery-Backed Unit + linkend="guc-full-page-writes"/> parameter. Battery-Backed Unit (BBU) disk controllers do not prevent partial page writes unless they guarantee that data is written to the BBU as full (8kB) pages. @@ -290,7 +290,7 @@ WAL also makes it possible to support on-line backup and point-in-time recovery, as described in . By archiving the WAL data we can support + linkend="continuous-archiving"/>. 
By archiving the WAL data we can support reverting to any time instant covered by the available WAL data: we simply install a prior physical backup of the database, and replay the WAL log just as far as the desired time. What's more, @@ -367,7 +367,7 @@ transactions running concurrently. This allows flexible trade-offs between performance and certainty of transaction durability. The commit mode is controlled by the user-settable parameter - , which can be changed in any of + , which can be changed in any of the ways that a configuration parameter can be set. The mode used for any one transaction depends on the value of synchronous_commit when transaction commit begins. @@ -390,7 +390,7 @@ The duration of the risk window is limited because a background process (the WAL writer) flushes unwritten WAL records to disk - every milliseconds. + every milliseconds. The actual maximum duration of the risk window is three times wal_writer_delay because the WAL writer is designed to favor writing whole pages at a time during busy periods. @@ -405,7 +405,7 @@ Asynchronous commit provides behavior different from setting - = off. + = off. fsync is a server-wide setting that will alter the behavior of all transactions. It disables all logic within PostgreSQL that attempts to synchronize @@ -419,7 +419,7 @@ - also sounds very similar to + also sounds very similar to asynchronous commit, but it is actually a synchronous commit method (in fact, commit_delay is ignored during an asynchronous commit). commit_delay causes a delay @@ -439,7 +439,7 @@ There are several WAL-related configuration parameters that affect database performance. This section explains their use. - Consult for general information about + Consult for general information about setting server configuration parameters. @@ -472,15 +472,15 @@ The server's checkpointer process automatically performs a checkpoint every so often. A checkpoint is begun every seconds, or if - is about to be exceeded, + linkend="guc-checkpoint-timeout"/> seconds, or if + is about to be exceeded, whichever comes first. The default settings are 5 minutes and 1 GB, respectively. If no WAL has been written since the previous checkpoint, new checkpoints will be skipped even if checkpoint_timeout has passed. (If WAL archiving is being used and you want to put a lower limit on how often files are archived in order to bound potential data loss, you should - adjust the parameter rather than the + adjust the parameter rather than the checkpoint parameters.) It is also possible to force a checkpoint by using the SQL command CHECKPOINT. @@ -492,7 +492,7 @@ more often. This allows faster after-crash recovery, since less work will need to be redone. However, one must balance this against the increased cost of flushing dirty data pages more often. If - is set (as is the default), there is + is set (as is the default), there is another factor to consider. To ensure data page consistency, the first modification of a data page after each checkpoint results in logging the entire page content. In that case, @@ -507,7 +507,7 @@ extra subsequent WAL traffic as discussed above. It is therefore wise to set the checkpointing parameters high enough so that checkpoints don't happen too often. As a simple sanity check on your checkpointing - parameters, you can set the + parameters, you can set the parameter. 
If checkpoints happen closer together than
 checkpoint_warning seconds, a message will be output to the
 server log recommending increasing
@@ -523,7 +523,7 @@
 To avoid flooding the I/O system with a burst of page writes,
 writing dirty buffers during a checkpoint is spread over a period of time.
 That period is controlled by
- , which is
+ , which is
 given as a fraction of the checkpoint interval.
 The I/O rate is adjusted so that the checkpoint finishes when the
 given fraction of
@@ -546,14 +546,14 @@


- On Linux and POSIX platforms
+ On Linux and POSIX platforms
 allows to force the OS that pages written by the
 checkpoint should be flushed to disk after a configurable number of
 bytes. Otherwise, these pages may be kept in the OS's page
 cache, inducing a stall when fsync is issued at the end of a
 checkpoint. This setting will often help to reduce transaction latency,
 but it also can have an adverse effect on performance; particularly for
 workloads that are bigger than
- , but smaller than the OS's page cache.
+ , but smaller than the OS's page cache.
@@ -578,14 +578,14 @@

 Independently of max_wal_size,
- + 1 most recent WAL files are
+ + 1 most recent WAL files are
 kept at all times. Also, if WAL archiving is used, old segments can not be
 removed or recycled until they are archived. If WAL archiving cannot keep up
 with the pace that WAL is generated, or if archive_command
 fails repeatedly, old WAL files will accumulate in pg_wal
 until the situation is resolved. A slow or failed standby server that
 uses a replication slot will have the same effect (see
- ).
+ ).
@@ -629,21 +629,21 @@
 not occur often
 enough to prevent XLogInsertRecord from having to do writes.
 On such systems
 one should increase the number of WAL buffers by
- modifying the parameter. When
- is set and the system is very busy,
+ modifying the parameter. When
+ is set and the system is very busy,
 setting wal_buffers higher will help smooth response times
 during the period immediately following each checkpoint.


- The parameter defines for how many
+ The parameter defines for how many
 microseconds a group commit leader process will sleep after acquiring a
 lock within XLogFlush, while group commit followers queue up
 behind the leader. This delay allows other server processes to add their
 commit records to the WAL buffers so that all of them will
 be flushed by the leader's eventual sync operation. No sleep
- will occur if is not enabled, or if fewer
- than other sessions are currently
+ will occur if is not enabled, or if fewer
+ than other sessions are currently
 in active transactions; this avoids sleeping when it's unlikely that
 any other session will commit soon. Note that on some platforms, the
 resolution of a sleep request is ten milliseconds, so that any nonzero
@@ -661,7 +661,7 @@
 be chosen intelligently. The higher that cost is, the more effective
 commit_delay is expected to be in increasing transaction
 throughput, up to a point. The program can be used to measure the average time
+ linkend="pgtestfsync"/> program can be used to measure the average time
 in microseconds that a single WAL flush operation takes. A value of
 half of the average time the program reports it takes to flush after a
 single 8kB write operation is often the most effective setting for
@@ -698,7 +698,7 @@


- The parameter determines how
+ The parameter determines how
 PostgreSQL will ask the kernel to force
 WAL updates out to disk.
All the options should be the same in terms of reliability, with @@ -706,13 +706,13 @@ force a flush of the disk cache even when other options do not do so. However, it's quite platform-specific which one will be the fastest. You can test the speeds of different options using the program. + linkend="pgtestfsync"/> program. Note that this parameter is irrelevant if fsync has been turned off. - Enabling the configuration parameter + Enabling the configuration parameter (provided that PostgreSQL has been compiled with support for it) will result in each XLogInsertRecord and XLogFlush @@ -733,7 +733,7 @@ required from the administrator except ensuring that the disk-space requirements for the WAL logs are met, and that any necessary tuning is done (see ). + linkend="wal-configuration"/>). @@ -781,7 +781,7 @@ irrecoverable data corruption. Administrators should try to ensure that disks holding PostgreSQL's WAL log files do not make such false reports. - (See .) + (See .) @@ -793,7 +793,7 @@ scanning forward from the log location indicated in the checkpoint record. Because the entire content of data pages is saved in the log on the first page modification after a checkpoint (assuming - is not disabled), all pages + is not disabled), all pages changed since the checkpoint will be restored to a consistent state. diff --git a/doc/src/sgml/xaggr.sgml b/doc/src/sgml/xaggr.sgml index f99dbb6510..1514e5c388 100644 --- a/doc/src/sgml/xaggr.sgml +++ b/doc/src/sgml/xaggr.sgml @@ -141,7 +141,7 @@ CREATE AGGREGATE avg (float8) For further details see the - + command. @@ -161,8 +161,8 @@ CREATE AGGREGATE avg (float8) Aggregate functions can optionally support moving-aggregate mode, which allows substantially faster execution of aggregate functions within windows with moving frame starting points. - (See - and for information about use of + (See + and for information about use of aggregate functions as window functions.) The basic idea is that in addition to a normal forward transition function, the aggregate provides an inverse @@ -290,7 +290,7 @@ FROM (VALUES (1, 1.0e20::float8), Aggregate functions can use polymorphic state transition functions or final functions, so that the same functions can be used to implement multiple aggregates. - See + See for an explanation of polymorphic functions. Going a step further, the aggregate function itself can be specified with polymorphic input type(s) and state type, allowing a single @@ -384,7 +384,7 @@ CREATE AGGREGATE array_agg (anynonarray) An aggregate function can be made to accept a varying number of arguments by declaring its last argument as a VARIADIC array, in much the same fashion as for regular functions; see - . The aggregate's transition + . The aggregate's transition function(s) must have the same array type as their last argument. The transition function(s) typically would also be marked VARIADIC, but this is not strictly required. @@ -393,7 +393,7 @@ CREATE AGGREGATE array_agg (anynonarray) Variadic aggregates are easily misused in connection with - the ORDER BY option (see ), + the ORDER BY option (see ), since the parser cannot tell whether the wrong number of actual arguments have been given in such a combination. Keep in mind that everything to the right of ORDER BY is a sort key, not an argument to the @@ -490,7 +490,7 @@ SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households; Also, because the final function performs the sort, it is not possible to continue adding input rows by executing the transition function again later. 
This means the final function is not READ_ONLY; - it must be declared in + it must be declared in as READ_WRITE, or as SHARABLE if it's possible for additional final-function calls to make use of the already-sorted state. @@ -636,7 +636,7 @@ if (AggCheckCallContext(fcinfo, NULL)) (While aggregate transition functions are always allowed to modify the transition value in-place, aggregate final functions are generally discouraged from doing so; if they do so, the behavior must be declared - when creating the aggregate. See + when creating the aggregate. See for more detail.) @@ -644,7 +644,7 @@ if (AggCheckCallContext(fcinfo, NULL)) The second argument of AggCheckCallContext can be used to retrieve the memory context in which aggregate state values are being kept. This is useful for transition functions that wish to use expanded - objects (see ) as their state values. + objects (see ) as their state values. On first call, the transition function should return an expanded object whose memory context is a child of the aggregate state context, and then keep returning the same expanded object on subsequent calls. See diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index 9bdb72cd98..508ee7a96c 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -16,24 +16,24 @@ query language functions (functions written in - SQL) () + SQL) () procedural language functions (functions written in, for example, PL/pgSQL or PL/Tcl) - () + () - internal functions () + internal functions () - C-language functions () + C-language functions () @@ -63,7 +63,7 @@ Throughout this chapter, it can be useful to look at the reference - page of the command to + page of the command to understand the examples better. Some examples from this chapter can be found in funcs.sql and funcs.c in the src/tutorial @@ -162,11 +162,11 @@ SELECT clean_emp(); The syntax of the CREATE FUNCTION command requires the function body to be written as a string constant. It is usually most convenient to use dollar quoting (see ) for the string constant. + linkend="sql-syntax-dollar-quoting"/>) for the string constant. If you choose to use regular single-quoted string constant syntax, you must double single quote marks (') and backslashes (\) (assuming escape string syntax) in the body of - the function (see ). + the function (see ). @@ -430,7 +430,7 @@ SELECT name, double_salary(emp) AS dream WHERE emp.cubicle ~= point '(2,1)'; but this usage is deprecated since it's easy to get confused. - (See for details about these + (See for details about these two notations for the composite value of a table row.) @@ -536,7 +536,7 @@ SELECT * FROM new_emp(); The second way is described more fully in . + linkend="xfunc-sql-table-functions"/>. @@ -574,7 +574,7 @@ SELECT name(new_emp()); None - As explained in , the field notation and + As explained in , the field notation and functional notation are equivalent. @@ -621,7 +621,7 @@ SELECT add_em(3,7); This is not essentially different from the version of add_em - shown in . The real value of + shown in . The real value of output parameters is that they provide a convenient way of defining functions that return several columns. For example, @@ -762,7 +762,7 @@ SELECT mleast(VARIADIC ARRAY[]::numeric[]); The array element parameters generated from a variadic parameter are treated as not having any names of their own. 
This means it is not possible to call a variadic function using named arguments (), except when you specify + linkend="sql-syntax-calling-funcs"/>), except when you specify VARIADIC. For example, this will work: @@ -950,7 +950,7 @@ SELECT * FROM sum_n_product_with_tab(10); set-returning function multiple times, with the parameters for each invocation coming from successive rows of a table or subquery. The preferred way to do this is to use the LATERAL key word, - which is described in . + which is described in . Here is an example using a set-returning function to enumerate elements of a tree structure: @@ -1197,7 +1197,7 @@ $$ LANGUAGE SQL; return the polymorphic types anyelement, anyarray, anynonarray, anyenum, and anyrange. See for a more detailed + linkend="extend-types-polymorphic"/> for a more detailed explanation of polymorphic functions. Here is a polymorphic function make_array that builds up an array from two arbitrary data type elements: @@ -1311,7 +1311,7 @@ SELECT concat_values('|', 1, 4, 2); When a SQL function has one or more parameters of collatable data types, a collation is identified for each function call depending on the collations assigned to the actual arguments, as described in . If a collation is successfully identified + linkend="collation"/>. If a collation is successfully identified (i.e., there are no conflicts of implicit collations among the arguments) then all the collatable parameters are treated as having that collation implicitly. This will affect the behavior of collation-sensitive @@ -1384,7 +1384,7 @@ CREATE FUNCTION test(smallint, double precision) RETURNS ... it is not immediately clear which function would be called with some trivial input like test(1, 1.5). The currently implemented resolution rules are described in - , but it is unwise to design a system that subtly + , but it is unwise to design a system that subtly relies on this behavior. @@ -1457,7 +1457,7 @@ CREATE FUNCTION test(int, int) RETURNS int Every function has a volatility classification, with the possibilities being VOLATILE, STABLE, or IMMUTABLE. VOLATILE is the default if the - + command does not specify a category. The volatility category is a promise to the optimizer about the behavior of the function: @@ -1539,7 +1539,7 @@ CREATE FUNCTION test(int, int) RETURNS int been made by the SQL command that is calling the function. A VOLATILE function will see such changes, a STABLE or IMMUTABLE function will not. This behavior is implemented - using the snapshotting behavior of MVCC (see ): + using the snapshotting behavior of MVCC (see ): STABLE and IMMUTABLE functions use a snapshot established as of the start of the calling query, whereas VOLATILE functions obtain a fresh snapshot at the start of @@ -1577,7 +1577,7 @@ CREATE FUNCTION test(int, int) RETURNS int A common error is to label a function IMMUTABLE when its results depend on a configuration parameter. For example, a function that manipulates timestamps might well have results that depend on the - setting. For safety, such functions should + setting. For safety, such functions should be labeled STABLE instead. @@ -1606,7 +1606,7 @@ CREATE FUNCTION test(int, int) RETURNS int Procedural languages aren't built into the PostgreSQL server; they are offered by loadable modules. - See and following chapters for more + See and following chapters for more information. 
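Tying the volatility discussion above to a concrete case, a minimal sketch
(the function name is hypothetical) of a function whose result depends on
the TimeZone setting, and which therefore must be labeled STABLE rather
than IMMUTABLE, is:

CREATE FUNCTION local_date(ts timestamptz) RETURNS date AS
$$ SELECT ts::date $$ LANGUAGE SQL STABLE;

The timestamptz-to-date cast consults the current TimeZone setting, so
marking this function IMMUTABLE would be incorrect even though it looks
like a pure computation.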
@@ -1630,7 +1630,7 @@ CREATE FUNCTION test(int, int) RETURNS int Normally, all internal functions present in the server are declared during the initialization of the database cluster - (see ), + (see ), but a user could use CREATE FUNCTION to create additional alias names for an internal function. Internal functions are declared in CREATE FUNCTION @@ -1726,7 +1726,7 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision If the name does not contain a directory part, the file is searched for in the path specified by the configuration variable - .dynamic_library_path + .dynamic_library_path @@ -1775,7 +1775,7 @@ CREATE FUNCTION square_root(double precision) RETURNS double precision PostgreSQL will not compile a C function automatically. The object file must be compiled before it is referenced in a CREATE - FUNCTION command. See for additional + FUNCTION command. See for additional information. @@ -1899,7 +1899,7 @@ typedef int int4; means XX bits. Note therefore also that the C type int8 is 1 byte in size. The SQL type int8 is called int64 in C. See also - .) + .) @@ -1952,7 +1952,7 @@ typedef struct value. If you do so you are likely to corrupt on-disk data, since the pointer you are given might point directly into a disk buffer. The sole exception to this rule is explained in - . + . @@ -1998,7 +1998,7 @@ memcpy(destination->data, buffer, 40); - specifies which C type + specifies which C type corresponds to which SQL type when writing a C-language function that uses a built-in type of PostgreSQL. The Defined In column gives the header file that @@ -2433,10 +2433,10 @@ CREATE FUNCTION concat_text(text, text) RETURNS text Finally, the version-1 function call conventions make it possible - to return set results () and - implement trigger functions () and + to return set results () and + implement trigger functions () and procedural-language call handlers (). For more details + linkend="plhandler"/>). For more details see src/backend/utils/fmgr/README in the source distribution. @@ -2477,7 +2477,7 @@ CREATE FUNCTION concat_text(text, text) RETURNS text Compiling and linking your code so that it can be dynamically loaded into PostgreSQL always - requires special flags. See for a + requires special flags. See for a detailed explanation of how to do it for your particular operating system. @@ -2486,7 +2486,7 @@ CREATE FUNCTION concat_text(text, text) RETURNS text Remember to define a magic block for your shared library, - as described in . + as described in . @@ -3122,7 +3122,7 @@ CREATE OR REPLACE FUNCTION retcomposite(IN integer, IN integer, return the polymorphic types anyelement, anyarray, anynonarray, anyenum, and anyrange. - See for a more detailed explanation + See for a more detailed explanation of polymorphic functions. When function arguments or return types are defined as polymorphic types, the function author cannot know in advance what data type it will be called with, or @@ -3268,7 +3268,7 @@ CREATE FUNCTION make_array(anyelement) RETURNS anyarray Add-ins can reserve LWLocks and an allocation of shared memory on server startup. The add-in's shared library must be preloaded by specifying it in - shared_preload_libraries. + shared_preload_libraries. Shared memory is reserved by calling: void RequestAddinShmemSpace(int size) diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index dce68dd4ac..2b4298065c 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -36,7 +36,7 @@ described in pg_am. 
It is possible to add a new index access method by writing the necessary code and then creating a row in pg_am — but that is - beyond the scope of this chapter (see ). + beyond the scope of this chapter (see ). @@ -101,7 +101,7 @@ The B-tree index method defines five strategies, shown in . + linkend="xindex-btree-strat-table"/>.
@@ -140,7 +140,7 @@ Hash indexes support only equality comparisons, and so they use only one - strategy, shown in . + strategy, shown in .
@@ -168,7 +168,7 @@ however it likes. As an example, several of the built-in GiST index operator classes index two-dimensional geometric objects, providing the R-tree strategies shown in - . Four of these are true + . Four of these are true two-dimensional tests (overlaps, same, contains, contained by); four of them consider only the X direction; and the other four provide the same tests in the Y direction. @@ -242,7 +242,7 @@ class interpret the strategy numbers according to the operator class's definition. As an example, the strategy numbers used by the built-in operator classes for points are shown in . + linkend="xindex-spgist-point-strat-table"/>.
@@ -289,7 +289,7 @@ each operator class interpret the strategy numbers according to the operator class's definition. As an example, the strategy numbers used by the built-in operator class for arrays are shown in - . + .
@@ -328,7 +328,7 @@ of each operator class interpret the strategy numbers according to the operator class's definition. As an example, the strategy numbers used by the built-in Minmax operator classes are shown in - . + .
@@ -372,7 +372,7 @@ level of a WHERE clause to be used with an index. (Some index access methods also support ordering operators, which typically don't return Boolean values; that feature is discussed - in .) + in .) @@ -403,7 +403,7 @@ B-trees require a single support function, and allow a second one to be supplied at the operator class author's option, as shown in . + linkend="xindex-btree-support-table"/>.
@@ -438,7 +438,7 @@ Hash indexes require one support function, and allow a second one to be supplied at the operator class author's option, as shown in . + linkend="xindex-hash-support-table"/>.
@@ -469,8 +469,8 @@ GiST indexes have nine support functions, two of which are optional, - as shown in . - (For more information see .) + as shown in . + (For more information see .)
@@ -541,8 +541,8 @@ SP-GiST indexes require five support functions, as - shown in . - (For more information see .) + shown in . + (For more information see .)
@@ -589,8 +589,8 @@ GIN indexes have six support functions, three of which are optional, - as shown in . - (For more information see .) + as shown in . + (For more information see .)
@@ -655,9 +655,9 @@ BRIN indexes have four basic support functions, as shown in - ; those basic functions + ; those basic functions may require additional support functions to be provided. - (For more information see .) + (For more information see .)
@@ -726,7 +726,7 @@ operators that sort complex numbers in absolute value order, so we choose the name complex_abs_ops. First, we need a set of operators. The procedure for defining operators was - discussed in . For an operator class on + discussed in . For an operator class on B-trees, the operators we require are: diff --git a/doc/src/sgml/xml2.sgml b/doc/src/sgml/xml2.sgml index 35e1ccb7a1..0a0f13d02d 100644 --- a/doc/src/sgml/xml2.sgml +++ b/doc/src/sgml/xml2.sgml @@ -34,7 +34,7 @@ Description of Functions - shows the functions provided by this module. + shows the functions provided by this module. These functions provide straightforward XML parsing and XPath queries. All arguments are of type text, so for brevity that is not shown. @@ -211,7 +211,7 @@ xpath_table(text key, text document, text relation, text xpaths, text criteria) table. The primary key field from the original document table is returned as the first column of the result so that the result set can readily be used in joins. The parameters are described in - . + .
diff --git a/doc/src/sgml/xoper.sgml b/doc/src/sgml/xoper.sgml index 4b0716951a..2aa7cf9b64 100644 --- a/doc/src/sgml/xoper.sgml +++ b/doc/src/sgml/xoper.sgml @@ -32,7 +32,7 @@ Here is an example of creating an operator for adding two complex numbers. We assume we've already created the definition of type - complex (see ). First we need a + complex (see ). First we need a function that does the work, then we can define the operator: diff --git a/doc/src/sgml/xplang.sgml b/doc/src/sgml/xplang.sgml index 60d0cc6190..4b52210459 100644 --- a/doc/src/sgml/xplang.sgml +++ b/doc/src/sgml/xplang.sgml @@ -27,15 +27,15 @@ There are currently four procedural languages available in the standard PostgreSQL distribution: - PL/pgSQL (), - PL/Tcl (), - PL/Perl (), and - PL/Python (). + PL/pgSQL (), + PL/Tcl (), + PL/Perl (), and + PL/Python (). There are additional procedural languages available that are not - included in the core distribution. + included in the core distribution. has information about finding them. In addition other languages can be defined by users; the basics of developing a new procedural - language are covered in . + language are covered in . @@ -79,7 +79,7 @@ The shared object for the language handler must be compiled and installed into an appropriate library directory. This works in the same way as building and installing modules with regular user-defined C - functions does; see . Often, the language + functions does; see . Often, the language handler will depend on an external library that provides the actual programming language engine; if so, that must be installed as well. @@ -105,7 +105,7 @@ CREATE FUNCTION handler_function_name() Optionally, the language handler can provide an inline handler function that executes anonymous code blocks - ( commands) + ( commands) written in this language. If an inline handler function is provided by the language, declare it with a command like @@ -165,7 +165,7 @@ CREATE TRUSTED PROCEDURAL LANGUAGE - shows how the manual + shows how the manual installation procedure would work with the language PL/Perl. diff --git a/doc/src/sgml/xtypes.sgml b/doc/src/sgml/xtypes.sgml index 2f90c1d42c..29747a0873 100644 --- a/doc/src/sgml/xtypes.sgml +++ b/doc/src/sgml/xtypes.sgml @@ -9,7 +9,7 @@ - As described in , + As described in , PostgreSQL can be extended to support new data types. This section describes how to define new base types, which are data types defined below the level of the SQL @@ -246,7 +246,7 @@ CREATE TYPE complex ( For further details see the description of the - command. + command. @@ -259,7 +259,7 @@ CREATE TYPE complex ( If the values of your data type vary in size (in internal form), it's usually desirable to make the data type TOAST-able (see ). You should do this even if the values are always + linkend="storage-toast"/>). You should do this even if the values are always too small to be compressed or stored externally, because TOAST can save space on small data too, by reducing header overhead. diff --git a/src/Makefile.global.in b/src/Makefile.global.in index 27ec54a417..d980f81046 100644 --- a/src/Makefile.global.in +++ b/src/Makefile.global.in @@ -406,8 +406,6 @@ STRIP_SHARED_LIB = @STRIP_SHARED_LIB@ DBTOEPUB = @DBTOEPUB@ FOP = @FOP@ -NSGMLS = @NSGMLS@ -OSX = @OSX@ XMLLINT = @XMLLINT@ XSLTPROC = @XSLTPROC@ From 07bd77b95a7846de2b193d1574951436d5783800 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 23 Nov 2017 17:02:15 -0500 Subject: [PATCH 0594/1087] Ensure sizeof(GenerationChunk) is maxaligned. Per buildfarm. 
Also improve some comments. --- src/backend/utils/mmgr/generation.c | 41 +++++++++++++++++------------ 1 file changed, 24 insertions(+), 17 deletions(-) diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c index cdff20ff81..2ede8e1be9 100644 --- a/src/backend/utils/mmgr/generation.c +++ b/src/backend/utils/mmgr/generation.c @@ -9,7 +9,7 @@ * Portions Copyright (c) 2017, PostgreSQL Global Development Group * * IDENTIFICATION - * src/backend/utils/mmgr/Generation.c + * src/backend/utils/mmgr/generation.c * * * This memory context is based on the assumption that the chunks are freed @@ -21,8 +21,8 @@ * The memory context uses a very simple approach to free space management. * Instead of a complex global freelist, each block tracks a number * of allocated and freed chunks. Freed chunks are not reused, and once all - * chunks on a block are freed, the whole block is thrown away. When the - * chunks allocated on the same block have similar lifespan, this works + * chunks in a block are freed, the whole block is thrown away. When the + * chunks allocated in the same block have similar lifespan, this works * very well and is very cheap. * * The current implementation only uses a fixed block size - maybe it should @@ -38,15 +38,15 @@ #include "postgres.h" +#include "lib/ilist.h" #include "utils/memdebug.h" #include "utils/memutils.h" -#include "lib/ilist.h" #define Generation_BLOCKHDRSZ MAXALIGN(sizeof(GenerationBlock)) #define Generation_CHUNKHDRSZ sizeof(GenerationChunk) -/* Portion of Generation_CHUNKHDRSZ examined outside Generation.c. */ +/* Portion of Generation_CHUNKHDRSZ examined outside generation.c. */ #define Generation_CHUNK_PUBLIC \ (offsetof(GenerationChunk, size) + sizeof(Size)) @@ -65,36 +65,35 @@ typedef struct GenerationChunk GenerationChunk; typedef void *GenerationPointer; /* - * GenerationContext is a simple memory context not reusing allocated chunks, and - * freeing blocks once all chunks are freed. + * GenerationContext is a simple memory context not reusing allocated chunks, + * and freeing blocks once all chunks are freed. */ typedef struct GenerationContext { MemoryContextData header; /* Standard memory-context fields */ - /* Generationerational context parameters */ + /* Generational context parameters */ Size blockSize; /* block size */ GenerationBlock *block; /* current (most recently allocated) block */ dlist_head blocks; /* list of blocks */ - } GenerationContext; /* * GenerationBlock - * A GenerationBlock is the unit of memory that is obtained by Generation.c + * GenerationBlock is the unit of memory that is obtained by generation.c * from malloc(). It contains one or more GenerationChunks, which are * the units requested by palloc() and freed by pfree(). GenerationChunks * cannot be returned to malloc() individually, instead pfree() - * updates a free counter on a block and when all chunks on a block - * are freed the whole block is returned to malloc(). + * updates the free counter of the block and when all chunks in a block + * are free the whole block is returned to malloc(). * - * GenerationBloc is the header data for a block --- the usable space + * GenerationBlock is the header data for a block --- the usable space * within the block begins at the next alignment boundary. 
*/ struct GenerationBlock { - dlist_node node; /* doubly-linked list */ + dlist_node node; /* doubly-linked list of blocks */ int nchunks; /* number of chunks in the block */ int nfree; /* number of free chunks */ char *freeptr; /* start of free space in this block */ @@ -103,7 +102,7 @@ struct GenerationBlock /* * GenerationChunk - * The prefix of each piece of memory in an GenerationBlock + * The prefix of each piece of memory in a GenerationBlock */ struct GenerationChunk { @@ -116,9 +115,17 @@ struct GenerationChunk /* when debugging memory usage, also store actual requested size */ /* this is zero in a free chunk */ Size requested_size; -#endif /* MEMORY_CONTEXT_CHECKING */ +#define GENERATIONCHUNK_RAWSIZE (SIZEOF_VOID_P * 2 + SIZEOF_SIZE_T * 2) +#else +#define GENERATIONCHUNK_RAWSIZE (SIZEOF_VOID_P * 2 + SIZEOF_SIZE_T) +#endif /* MEMORY_CONTEXT_CHECKING */ + + /* ensure proper alignment by adding padding if needed */ +#if (GENERATIONCHUNK_RAWSIZE % MAXIMUM_ALIGNOF) != 0 + char padding[MAXIMUM_ALIGNOF - (GENERATIONCHUNK_RAWSIZE % MAXIMUM_ALIGNOF)]; +#endif - GenerationContext *context; /* owning context */ + GenerationContext *context; /* owning context */ /* there must not be any padding to reach a MAXALIGN boundary here! */ }; From 59b71c6fe6ca89566f40439bcdff94a2f5b39a92 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 23 Nov 2017 17:13:09 -0800 Subject: [PATCH 0595/1087] Fix handling of NULLs returned by aggregate combine functions. When strict aggregate combine functions, used in multi-stage/parallel aggregation, returned NULL, we didn't check for that, invoking the combine function with NULL the next round, despite it being strict. The equivalent code invoking normal transition functions has a check for that situation, which did not get copied in a7de3dc5c346. Fix the bug by adding the equivalent check. Based on a quick look I could not find any strict combine functions in core actually returning NULL, and it doesn't seem very likely external users have done so. So this isn't likely to have caused issues in practice. Add tests verifying transition / combine functions returning NULL is tested. Reported-By: Andres Freund Author: Andres Freund Discussion: https://postgr.es/m/20171121033642.7xvmjqrl4jdaaat3@alap3.anarazel.de Backpatch: 9.6, where parallel aggregation was introduced --- src/backend/executor/nodeAgg.c | 11 ++++ src/test/regress/expected/aggregates.out | 71 ++++++++++++++++++++++++ src/test/regress/sql/aggregates.sql | 63 +++++++++++++++++++++ 3 files changed, 145 insertions(+) diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index d26ce0847a..da6ef1a94c 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -1246,6 +1246,17 @@ advance_combine_function(AggState *aggstate, pergroupstate->noTransValue = false; return; } + + if (pergroupstate->transValueIsNull) + { + /* + * Don't call a strict function with NULL inputs. Note it is + * possible to get here despite the above tests, if the combinefn + * is strict *and* returned a NULL on a prior cycle. If that + * happens we will propagate the NULL all the way to the end. 
+ */ + return; + } } /* We run the combine functions in per-input-tuple memory context */ diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out index 3408cf3333..f8c42f911b 100644 --- a/src/test/regress/expected/aggregates.out +++ b/src/test/regress/expected/aggregates.out @@ -1993,3 +1993,74 @@ NOTICE: sum_transfn called with 4 (1 row) rollback; +-- test that the aggregate transition logic correctly handles +-- transition / combine functions returning NULL +-- First test the case of a normal transition function returning NULL +BEGIN; +CREATE FUNCTION balkifnull(int8, int4) +RETURNS int8 +STRICT +LANGUAGE plpgsql AS $$ +BEGIN + IF $1 IS NULL THEN + RAISE 'erroneously called with NULL argument'; + END IF; + RETURN NULL; +END$$; +CREATE AGGREGATE balk( + BASETYPE = int4, + SFUNC = balkifnull(int8, int4), + STYPE = int8, + "PARALLEL" = SAFE, + INITCOND = '0'); +SELECT balk(1) FROM tenk1; + balk +------ + +(1 row) + +ROLLBACK; +-- Secondly test the case of a parallel aggregate combiner function +-- returning NULL. For that use normal transition function, but a +-- combiner function returning NULL. +BEGIN ISOLATION LEVEL REPEATABLE READ; +CREATE FUNCTION balkifnull(int8, int8) +RETURNS int8 +PARALLEL SAFE +STRICT +LANGUAGE plpgsql AS $$ +BEGIN + IF $1 IS NULL THEN + RAISE 'erroneously called with NULL argument'; + END IF; + RETURN NULL; +END$$; +CREATE AGGREGATE balk( + BASETYPE = int4, + SFUNC = int4_sum(int8, int4), + STYPE = int8, + COMBINEFUNC = balkifnull(int8, int8), + "PARALLEL" = SAFE, + INITCOND = '0' +); +-- force use of parallelism +ALTER TABLE tenk1 set (parallel_workers = 4); +SET LOCAL parallel_setup_cost=0; +SET LOCAL max_parallel_workers_per_gather=4; +EXPLAIN (COSTS OFF) SELECT balk(1) FROM tenk1; + QUERY PLAN +-------------------------------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 4 + -> Partial Aggregate + -> Parallel Index Only Scan using tenk1_thous_tenthous on tenk1 +(5 rows) + +SELECT balk(1) FROM tenk1; + balk +------ + +(1 row) + +ROLLBACK; diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql index 55c8528fd5..1bfc5e649c 100644 --- a/src/test/regress/sql/aggregates.sql +++ b/src/test/regress/sql/aggregates.sql @@ -843,3 +843,66 @@ create aggregate my_half_sum(int4) select my_sum(one),my_half_sum(one) from (values(1),(2),(3),(4)) t(one); rollback; + + +-- test that the aggregate transition logic correctly handles +-- transition / combine functions returning NULL + +-- First test the case of a normal transition function returning NULL +BEGIN; +CREATE FUNCTION balkifnull(int8, int4) +RETURNS int8 +STRICT +LANGUAGE plpgsql AS $$ +BEGIN + IF $1 IS NULL THEN + RAISE 'erroneously called with NULL argument'; + END IF; + RETURN NULL; +END$$; + +CREATE AGGREGATE balk( + BASETYPE = int4, + SFUNC = balkifnull(int8, int4), + STYPE = int8, + "PARALLEL" = SAFE, + INITCOND = '0'); + +SELECT balk(1) FROM tenk1; + +ROLLBACK; + +-- Secondly test the case of a parallel aggregate combiner function +-- returning NULL. For that use normal transition function, but a +-- combiner function returning NULL. 
+BEGIN ISOLATION LEVEL REPEATABLE READ; +CREATE FUNCTION balkifnull(int8, int8) +RETURNS int8 +PARALLEL SAFE +STRICT +LANGUAGE plpgsql AS $$ +BEGIN + IF $1 IS NULL THEN + RAISE 'erroneously called with NULL argument'; + END IF; + RETURN NULL; +END$$; + +CREATE AGGREGATE balk( + BASETYPE = int4, + SFUNC = int4_sum(int8, int4), + STYPE = int8, + COMBINEFUNC = balkifnull(int8, int8), + "PARALLEL" = SAFE, + INITCOND = '0' +); + +-- force use of parallelism +ALTER TABLE tenk1 set (parallel_workers = 4); +SET LOCAL parallel_setup_cost=0; +SET LOCAL max_parallel_workers_per_gather=4; + +EXPLAIN (COSTS OFF) SELECT balk(1) FROM tenk1; +SELECT balk(1) FROM tenk1; + +ROLLBACK; From 84c4313c6f6a3f67ccc2296e5a7674dee1979e7a Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Thu, 23 Nov 2017 20:22:04 -0800 Subject: [PATCH 0596/1087] Support linking with MinGW-built Perl. This is necessary for ActivePerl 5.18 onwards and for Strawberry Perl. It is not sufficient for 32-bit builds with newer Visual Studio; these fail with error LINK2026. Back-patch to 9.3 (all supported versions). Reported by Victor Wagner. Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home --- config/perl.m4 | 17 ++++++++++++----- configure | 17 ++++++++++++----- src/pl/plperl/plperl.h | 8 ++++++++ src/tools/msvc/Mkvcbuild.pm | 5 +++-- 4 files changed, 35 insertions(+), 12 deletions(-) diff --git a/config/perl.m4 b/config/perl.m4 index 8c21d0fb39..76b1a92e3a 100644 --- a/config/perl.m4 +++ b/config/perl.m4 @@ -83,12 +83,19 @@ AC_DEFUN([PGAC_CHECK_PERL_EMBED_LDFLAGS], [AC_REQUIRE([PGAC_PATH_PERL]) AC_MSG_CHECKING(for flags to link embedded Perl) if test "$PORTNAME" = "win32" ; then -perl_lib=`basename $perl_archlibexp/CORE/perl[[5-9]]*.lib .lib` -test -e "$perl_archlibexp/CORE/$perl_lib.lib" && perl_embed_ldflags="-L$perl_archlibexp/CORE -l$perl_lib" + perl_lib=`basename $perl_archlibexp/CORE/perl[[5-9]]*.lib .lib` + if test -e "$perl_archlibexp/CORE/$perl_lib.lib"; then + perl_embed_ldflags="-L$perl_archlibexp/CORE -l$perl_lib" + else + perl_lib=`basename $perl_archlibexp/CORE/libperl[[5-9]]*.a .a | sed 's/^lib//'` + if test -e "$perl_archlibexp/CORE/lib$perl_lib.a"; then + perl_embed_ldflags="-L$perl_archlibexp/CORE -l$perl_lib" + fi + fi else -pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts` -pgac_tmp2=`$PERL -MConfig -e 'print $Config{ccdlflags}'` -perl_embed_ldflags=`echo X"$pgac_tmp1" | sed -e "s/^X//" -e "s%$pgac_tmp2%%" -e ["s/ -arch [-a-zA-Z0-9_]*//g"]` + pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts` + pgac_tmp2=`$PERL -MConfig -e 'print $Config{ccdlflags}'` + perl_embed_ldflags=`echo X"$pgac_tmp1" | sed -e "s/^X//" -e "s%$pgac_tmp2%%" -e ["s/ -arch [-a-zA-Z0-9_]*//g"]` fi AC_SUBST(perl_embed_ldflags)dnl if test -z "$perl_embed_ldflags" ; then diff --git a/configure b/configure index 3203473f87..6c4d743b35 100755 --- a/configure +++ b/configure @@ -7818,12 +7818,19 @@ $as_echo "$perl_embed_ccflags" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for flags to link embedded Perl" >&5 $as_echo_n "checking for flags to link embedded Perl... 
" >&6; } if test "$PORTNAME" = "win32" ; then -perl_lib=`basename $perl_archlibexp/CORE/perl[5-9]*.lib .lib` -test -e "$perl_archlibexp/CORE/$perl_lib.lib" && perl_embed_ldflags="-L$perl_archlibexp/CORE -l$perl_lib" + perl_lib=`basename $perl_archlibexp/CORE/perl[5-9]*.lib .lib` + if test -e "$perl_archlibexp/CORE/$perl_lib.lib"; then + perl_embed_ldflags="-L$perl_archlibexp/CORE -l$perl_lib" + else + perl_lib=`basename $perl_archlibexp/CORE/libperl[5-9]*.a .a | sed 's/^lib//'` + if test -e "$perl_archlibexp/CORE/lib$perl_lib.a"; then + perl_embed_ldflags="-L$perl_archlibexp/CORE -l$perl_lib" + fi + fi else -pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts` -pgac_tmp2=`$PERL -MConfig -e 'print $Config{ccdlflags}'` -perl_embed_ldflags=`echo X"$pgac_tmp1" | sed -e "s/^X//" -e "s%$pgac_tmp2%%" -e "s/ -arch [-a-zA-Z0-9_]*//g"` + pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts` + pgac_tmp2=`$PERL -MConfig -e 'print $Config{ccdlflags}'` + perl_embed_ldflags=`echo X"$pgac_tmp1" | sed -e "s/^X//" -e "s%$pgac_tmp2%%" -e "s/ -arch [-a-zA-Z0-9_]*//g"` fi if test -z "$perl_embed_ldflags" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 diff --git a/src/pl/plperl/plperl.h b/src/pl/plperl/plperl.h index c4e06d089f..aac95f8d2c 100644 --- a/src/pl/plperl/plperl.h +++ b/src/pl/plperl/plperl.h @@ -42,6 +42,14 @@ #undef vsnprintf #endif +/* + * ActivePerl 5.18 and later are MinGW-built, and their headers use GCC's + * __inline__. Translate to something MSVC recognizes. + */ +#ifdef _MSC_VER +#define __inline__ inline +#endif + /* * Get the basic Perl API. We use PERL_NO_GET_CONTEXT mode so that our code diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm index 686c7369f6..4c2e12e228 100644 --- a/src/tools/msvc/Mkvcbuild.pm +++ b/src/tools/msvc/Mkvcbuild.pm @@ -615,9 +615,10 @@ sub mkvcbuild } } $plperl->AddReference($postgres); - my $perl_path = $solution->{options}->{perl} . '\lib\CORE\perl*.lib'; + my $perl_path = $solution->{options}->{perl} . '\lib\CORE\*perl*'; + # ActivePerl 5.16 provided perl516.lib; 5.18 provided libperl518.a my @perl_libs = - grep { /perl\d+.lib$/ } glob($perl_path); + grep { /perl\d+\.lib$|libperl\d+\.a$/ } glob($perl_path); if (@perl_libs == 1) { $plperl->AddLibrary($perl_libs[0]); From e842791b0f99dd2005fc2d1754755a632514e5e5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 24 Nov 2017 00:29:20 -0500 Subject: [PATCH 0597/1087] Fix unstable regression test added by commits 59b71c6fe et al. The query didn't really have a preferred index, leading to platform- specific choices of which one to use. Adjust it to make sure tenk1_hundred is always chosen. Per buildfarm. 
--- src/test/regress/expected/aggregates.out | 12 ++++++------ src/test/regress/sql/aggregates.sql | 6 +++--- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out index f8c42f911b..dbce7d3e8b 100644 --- a/src/test/regress/expected/aggregates.out +++ b/src/test/regress/expected/aggregates.out @@ -2013,7 +2013,7 @@ CREATE AGGREGATE balk( STYPE = int8, "PARALLEL" = SAFE, INITCOND = '0'); -SELECT balk(1) FROM tenk1; +SELECT balk(hundred) FROM tenk1; balk ------ @@ -2047,17 +2047,17 @@ CREATE AGGREGATE balk( ALTER TABLE tenk1 set (parallel_workers = 4); SET LOCAL parallel_setup_cost=0; SET LOCAL max_parallel_workers_per_gather=4; -EXPLAIN (COSTS OFF) SELECT balk(1) FROM tenk1; - QUERY PLAN --------------------------------------------------------------------------------- +EXPLAIN (COSTS OFF) SELECT balk(hundred) FROM tenk1; + QUERY PLAN +------------------------------------------------------------------------- Finalize Aggregate -> Gather Workers Planned: 4 -> Partial Aggregate - -> Parallel Index Only Scan using tenk1_thous_tenthous on tenk1 + -> Parallel Index Only Scan using tenk1_hundred on tenk1 (5 rows) -SELECT balk(1) FROM tenk1; +SELECT balk(hundred) FROM tenk1; balk ------ diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql index 1bfc5e649c..6c9b86a616 100644 --- a/src/test/regress/sql/aggregates.sql +++ b/src/test/regress/sql/aggregates.sql @@ -868,7 +868,7 @@ CREATE AGGREGATE balk( "PARALLEL" = SAFE, INITCOND = '0'); -SELECT balk(1) FROM tenk1; +SELECT balk(hundred) FROM tenk1; ROLLBACK; @@ -902,7 +902,7 @@ ALTER TABLE tenk1 set (parallel_workers = 4); SET LOCAL parallel_setup_cost=0; SET LOCAL max_parallel_workers_per_gather=4; -EXPLAIN (COSTS OFF) SELECT balk(1) FROM tenk1; -SELECT balk(1) FROM tenk1; +EXPLAIN (COSTS OFF) SELECT balk(hundred) FROM tenk1; +SELECT balk(hundred) FROM tenk1; ROLLBACK; From 87c2a17fee784c7e1004ba3d3c5d8147da676783 Mon Sep 17 00:00:00 2001 From: Dean Rasheed Date: Fri, 24 Nov 2017 12:01:18 +0000 Subject: [PATCH 0598/1087] Doc: add a summary table to the CREATE POLICY docs. This table summarizes which RLS policy expressions apply to each command type, and whether they apply to the old or new tuples (or both), which saves reading through a lot of text. Rod Taylor, hacked on by me. Reviewed by Fabien Coelho. Discussion: https://postgr.es/m/CAHz80e4HxJShm6m9ZWFrHW=pgd2KP=RZmfFnEccujtPMiAOW5Q@mail.gmail.com --- doc/src/sgml/ref/create_policy.sgml | 104 +++++++++++++++++++++++++++- 1 file changed, 103 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/create_policy.sgml b/doc/src/sgml/ref/create_policy.sgml index c30506abc2..b4e66b11b8 100644 --- a/doc/src/sgml/ref/create_policy.sgml +++ b/doc/src/sgml/ref/create_policy.sgml @@ -73,7 +73,10 @@ CREATE POLICY name ON Policies can be applied for specific commands or for specific roles. The default for newly created policies is that they apply for all commands and - roles, unless otherwise specified. + roles, unless otherwise specified. Multiple policies may apply to a single + command; see below for more details. + summarizes how the different types + of policy apply to specific commands. @@ -391,6 +394,105 @@ CREATE POLICY name ON + +
+ Policies Applied by Command Type + + + + + + + Command + SELECT/ALL policy + INSERT/ALL policy + UPDATE/ALL policy + DELETE/ALL policy + + + USING expression + WITH CHECK expression + USING expression + WITH CHECK expression + USING expression + + + + + SELECT + Existing row + + + + + + + SELECT FOR UPDATE/SHARE + Existing row + + Existing row + + + + + INSERT + + New row + + + + + + INSERT ... RETURNING + + New row + + + If read access is required to the existing or new row (for example, + a WHERE or RETURNING clause + that refers to columns from the relation). + + + + New row + + + + + + UPDATE + + Existing & new rows + + + + Existing row + New row + + + + DELETE + + Existing row + + + + + + Existing row + + + ON CONFLICT DO UPDATE + Existing & new rows + + Existing row + New row + + + + +
+ From 26329ad8dcc78eb651ab61f6b17c4f5f7bb77323 Mon Sep 17 00:00:00 2001 From: Dean Rasheed Date: Fri, 24 Nov 2017 13:59:25 +0000 Subject: [PATCH 0599/1087] Fix broken XML in CREATE POLICY sgml. Commit 87c2a17fee failed to close some tags (necessary now that the SGML docs are in fact XML). --- doc/src/sgml/ref/create_policy.sgml | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/doc/src/sgml/ref/create_policy.sgml b/doc/src/sgml/ref/create_policy.sgml index b4e66b11b8..85c74e8e82 100644 --- a/doc/src/sgml/ref/create_policy.sgml +++ b/doc/src/sgml/ref/create_policy.sgml @@ -75,7 +75,7 @@ CREATE POLICY name ON summarizes how the different types + summarizes how the different types of policy apply to specific commands.
@@ -398,9 +398,9 @@ CREATE POLICY name ON Policies Applied by Command Type - - - + + + Command @@ -463,7 +463,7 @@ CREATE POLICY name ON UPDATE Existing & new rows - + Existing row @@ -474,7 +474,7 @@ CREATE POLICY name ON DELETE Existing row - + From 9c55391f0f277318c754f89950e65363ede4136e Mon Sep 17 00:00:00 2001 From: Dean Rasheed Date: Fri, 24 Nov 2017 14:14:40 +0000 Subject: [PATCH 0600/1087] RLS comment fixes. The comments in get_policies_for_relation() say that CREATE POLICY does not support defining restrictive policies. This is no longer true, starting from PG10. --- src/backend/rewrite/rowsecurity.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/src/backend/rewrite/rowsecurity.c b/src/backend/rewrite/rowsecurity.c index a0cd6b1075..5bd33f7ba4 100644 --- a/src/backend/rewrite/rowsecurity.c +++ b/src/backend/rewrite/rowsecurity.c @@ -408,11 +408,7 @@ get_policies_for_relation(Relation relation, CmdType cmd, Oid user_id, *permissive_policies = NIL; *restrictive_policies = NIL; - /* - * First find all internal policies for the relation. CREATE POLICY does - * not currently support defining restrictive policies, so for now all - * internal policies are permissive. - */ + /* First find all internal policies for the relation. */ foreach(item, relation->rd_rsdesc->policies) { bool cmd_matches = false; @@ -450,7 +446,7 @@ get_policies_for_relation(Relation relation, CmdType cmd, Oid user_id, } /* - * Add this policy to the list of permissive policies if it applies to + * Add this policy to the relevant list of policies if it applies to * the specified role. */ if (cmd_matches && check_role_for_policy(policy->roles, user_id)) From cc3c4af4a948e5c5be22afe769bab41235c574e5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 24 Nov 2017 13:43:34 -0500 Subject: [PATCH 0601/1087] Fix bug in generation.c's valgrind support. This doesn't look like the last such bug, but it's one that the test_decoding regression test is tripping over. Per buildfarm. Tomas Vondra Discussion: https://postgr.es/m/c903f275-2150-fa52-64bf-dca7b53ebf8d@fuzzy.cz --- src/backend/utils/mmgr/generation.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c index 2ede8e1be9..a748ee266c 100644 --- a/src/backend/utils/mmgr/generation.c +++ b/src/backend/utils/mmgr/generation.c @@ -409,9 +409,14 @@ GenerationAlloc(MemoryContext context, Size size) chunk = (GenerationChunk *) block->freeptr; + /* Prepare to initialize the chunk header. */ + VALGRIND_MAKE_MEM_UNDEFINED(chunk, Generation_CHUNKHDRSZ); + block->nchunks += 1; block->freeptr += (Generation_CHUNKHDRSZ + chunk_size); + Assert(block->freeptr <= block->endptr); + chunk->block = block; chunk->context = set; From f65d21b258085bdc8ef2cc282ab1ff12da9c595c Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 24 Nov 2017 15:50:22 -0500 Subject: [PATCH 0602/1087] Mostly-cosmetic improvements in memory chunk header alignment coding. Add commentary about what we're doing and why. Apply the method used for padding in GenerationChunk to AllocChunkData, replacing the rather ad-hoc solution used in commit 7e3aa03b4. Reorder fields in GenerationChunk so that the padding calculation will work even if sizeof(size_t) is different from sizeof(void *) --- likely that will never happen, but we don't need the assumption if we do it like this. Improve static assertions about alignment. In passing, fix a couple of oversights in the "large chunk" path in GenerationAlloc(). 
Discussion: https://postgr.es/m/E1eHa4J-0006hI-Q8@gemulon.postgresql.org --- src/backend/utils/mmgr/aset.c | 25 ++++++++++--- src/backend/utils/mmgr/generation.c | 58 ++++++++++++++--------------- src/backend/utils/mmgr/slab.c | 21 ++++++++--- 3 files changed, 62 insertions(+), 42 deletions(-) diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c index 7033042e2d..0b017a4031 100644 --- a/src/backend/utils/mmgr/aset.c +++ b/src/backend/utils/mmgr/aset.c @@ -157,6 +157,14 @@ typedef struct AllocBlockData /* * AllocChunk * The prefix of each piece of memory in an AllocBlock + * + * Note: to meet the memory context APIs, the payload area of the chunk must + * be maxaligned, and the "aset" link must be immediately adjacent to the + * payload area (cf. GetMemoryChunkContext). We simplify matters for this + * module by requiring sizeof(AllocChunkData) to be maxaligned, and then + * we can ensure things work by adding any required alignment padding before + * the "aset" field. There is a static assertion below that the alignment + * is done correctly. */ typedef struct AllocChunkData { @@ -166,15 +174,19 @@ typedef struct AllocChunkData /* when debugging memory usage, also store actual requested size */ /* this is zero in a free chunk */ Size requested_size; -#if MAXIMUM_ALIGNOF > 4 && SIZEOF_VOID_P == 4 - Size padding; -#endif +#define ALLOCCHUNK_RAWSIZE (SIZEOF_SIZE_T * 2 + SIZEOF_VOID_P) +#else +#define ALLOCCHUNK_RAWSIZE (SIZEOF_SIZE_T + SIZEOF_VOID_P) #endif /* MEMORY_CONTEXT_CHECKING */ + /* ensure proper alignment by adding padding if needed */ +#if (ALLOCCHUNK_RAWSIZE % MAXIMUM_ALIGNOF) != 0 + char padding[MAXIMUM_ALIGNOF - ALLOCCHUNK_RAWSIZE % MAXIMUM_ALIGNOF]; +#endif + /* aset is the owning aset if allocated, or the freelist link if free */ void *aset; - /* there must not be any padding to reach a MAXALIGN boundary here! */ } AllocChunkData; @@ -327,8 +339,11 @@ AllocSetContextCreate(MemoryContext parent, { AllocSet set; + /* Assert we padded AllocChunkData properly */ + StaticAssertStmt(ALLOC_CHUNKHDRSZ == MAXALIGN(ALLOC_CHUNKHDRSZ), + "sizeof(AllocChunkData) is not maxaligned"); StaticAssertStmt(offsetof(AllocChunkData, aset) + sizeof(MemoryContext) == - MAXALIGN(sizeof(AllocChunkData)), + ALLOC_CHUNKHDRSZ, "padding calculation in AllocChunkData is wrong"); /* diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c index a748ee266c..8dd0a35095 100644 --- a/src/backend/utils/mmgr/generation.c +++ b/src/backend/utils/mmgr/generation.c @@ -46,19 +46,6 @@ #define Generation_BLOCKHDRSZ MAXALIGN(sizeof(GenerationBlock)) #define Generation_CHUNKHDRSZ sizeof(GenerationChunk) -/* Portion of Generation_CHUNKHDRSZ examined outside generation.c. */ -#define Generation_CHUNK_PUBLIC \ - (offsetof(GenerationChunk, size) + sizeof(Size)) - -/* Portion of Generation_CHUNKHDRSZ excluding trailing padding. */ -#ifdef MEMORY_CONTEXT_CHECKING -#define Generation_CHUNK_USED \ - (offsetof(GenerationChunk, requested_size) + sizeof(Size)) -#else -#define Generation_CHUNK_USED \ - (offsetof(GenerationChunk, size) + sizeof(Size)) -#endif - typedef struct GenerationBlock GenerationBlock; /* forward reference */ typedef struct GenerationChunk GenerationChunk; @@ -103,28 +90,35 @@ struct GenerationBlock /* * GenerationChunk * The prefix of each piece of memory in a GenerationBlock + * + * Note: to meet the memory context APIs, the payload area of the chunk must + * be maxaligned, and the "context" link must be immediately adjacent to the + * payload area (cf. 
GetMemoryChunkContext). We simplify matters for this + * module by requiring sizeof(GenerationChunk) to be maxaligned, and then + * we can ensure things work by adding any required alignment padding before + * the pointer fields. There is a static assertion below that the alignment + * is done correctly. */ struct GenerationChunk { - /* block owning this chunk */ - void *block; - /* size is always the size of the usable space in the chunk */ Size size; #ifdef MEMORY_CONTEXT_CHECKING /* when debugging memory usage, also store actual requested size */ /* this is zero in a free chunk */ Size requested_size; -#define GENERATIONCHUNK_RAWSIZE (SIZEOF_VOID_P * 2 + SIZEOF_SIZE_T * 2) + +#define GENERATIONCHUNK_RAWSIZE (SIZEOF_SIZE_T * 2 + SIZEOF_VOID_P * 2) #else -#define GENERATIONCHUNK_RAWSIZE (SIZEOF_VOID_P * 2 + SIZEOF_SIZE_T) +#define GENERATIONCHUNK_RAWSIZE (SIZEOF_SIZE_T + SIZEOF_VOID_P * 2) #endif /* MEMORY_CONTEXT_CHECKING */ /* ensure proper alignment by adding padding if needed */ #if (GENERATIONCHUNK_RAWSIZE % MAXIMUM_ALIGNOF) != 0 - char padding[MAXIMUM_ALIGNOF - (GENERATIONCHUNK_RAWSIZE % MAXIMUM_ALIGNOF)]; + char padding[MAXIMUM_ALIGNOF - GENERATIONCHUNK_RAWSIZE % MAXIMUM_ALIGNOF]; #endif + GenerationBlock *block; /* block owning this chunk */ GenerationContext *context; /* owning context */ /* there must not be any padding to reach a MAXALIGN boundary here! */ }; @@ -210,8 +204,11 @@ GenerationContextCreate(MemoryContext parent, { GenerationContext *set; + /* Assert we padded GenerationChunk properly */ + StaticAssertStmt(Generation_CHUNKHDRSZ == MAXALIGN(Generation_CHUNKHDRSZ), + "sizeof(GenerationChunk) is not maxaligned"); StaticAssertStmt(offsetof(GenerationChunk, context) + sizeof(MemoryContext) == - MAXALIGN(sizeof(GenerationChunk)), + Generation_CHUNKHDRSZ, "padding calculation in GenerationChunk is wrong"); /* @@ -318,7 +315,6 @@ GenerationAlloc(MemoryContext context, Size size) GenerationContext *set = (GenerationContext *) context; GenerationBlock *block; GenerationChunk *chunk; - Size chunk_size = MAXALIGN(size); /* is it an over-sized chunk? if yes, allocate special block */ @@ -338,6 +334,7 @@ GenerationAlloc(MemoryContext context, Size size) block->freeptr = block->endptr = ((char *) block) + blksize; chunk = (GenerationChunk *) (((char *) block) + Generation_BLOCKHDRSZ); + chunk->block = block; chunk->context = set; chunk->size = chunk_size; @@ -356,17 +353,16 @@ GenerationAlloc(MemoryContext context, Size size) /* add the block to the list of allocated blocks */ dlist_push_head(&set->blocks, &block->node); - GenerationAllocInfo(set, chunk); - /* - * Chunk header public fields remain DEFINED. The requested - * allocation itself can be NOACCESS or UNDEFINED; our caller will - * soon make it UNDEFINED. Make extra space at the end of the chunk, - * if any, NOACCESS. + * Chunk header fields remain DEFINED. The requested allocation + * itself can be NOACCESS or UNDEFINED; our caller will soon make it + * UNDEFINED. Make extra space at the end of the chunk, if any, + * NOACCESS. 
*/ - VALGRIND_MAKE_MEM_NOACCESS((char *) chunk + Generation_CHUNK_PUBLIC, - chunk_size + Generation_CHUNKHDRSZ - Generation_CHUNK_PUBLIC); + VALGRIND_MAKE_MEM_NOACCESS((char *) chunk + Generation_CHUNKHDRSZ + size, + chunk_size - size); + GenerationAllocInfo(set, chunk); return GenerationChunkGetPointer(chunk); } @@ -442,8 +438,8 @@ GenerationAlloc(MemoryContext context, Size size) /* * GenerationFree - * Update number of chunks on the block, and if all chunks on the block - * are freeed then discard the block. + * Update number of chunks in the block, and if all chunks in the block + * are now free then discard the block. */ static void GenerationFree(MemoryContext context, void *pointer) diff --git a/src/backend/utils/mmgr/slab.c b/src/backend/utils/mmgr/slab.c index 35de6b6d82..ee2175278d 100644 --- a/src/backend/utils/mmgr/slab.c +++ b/src/backend/utils/mmgr/slab.c @@ -91,12 +91,18 @@ typedef struct SlabBlock /* * SlabChunk - * The prefix of each piece of memory in an SlabBlock + * The prefix of each piece of memory in a SlabBlock + * + * Note: to meet the memory context APIs, the payload area of the chunk must + * be maxaligned, and the "slab" link must be immediately adjacent to the + * payload area (cf. GetMemoryChunkContext). Since we support no machines on + * which MAXALIGN is more than twice sizeof(void *), this happens without any + * special hacking in this struct declaration. But there is a static + * assertion below that the alignment is done correctly. */ typedef struct SlabChunk { - /* block owning this chunk */ - void *block; + SlabBlock *block; /* block owning this chunk */ SlabContext *slab; /* owning context */ /* there must not be any padding to reach a MAXALIGN boundary here! */ } SlabChunk; @@ -190,8 +196,11 @@ SlabContextCreate(MemoryContext parent, Size freelistSize; SlabContext *slab; + /* Assert we padded SlabChunk properly */ + StaticAssertStmt(sizeof(SlabChunk) == MAXALIGN(sizeof(SlabChunk)), + "sizeof(SlabChunk) is not maxaligned"); StaticAssertStmt(offsetof(SlabChunk, slab) + sizeof(MemoryContext) == - MAXALIGN(sizeof(SlabChunk)), + sizeof(SlabChunk), "padding calculation in SlabChunk is wrong"); /* Make sure the linked list node fits inside a freed chunk */ @@ -199,7 +208,7 @@ SlabContextCreate(MemoryContext parent, chunkSize = sizeof(int); /* chunk, including SLAB header (both addresses nicely aligned) */ - fullChunkSize = MAXALIGN(sizeof(SlabChunk) + MAXALIGN(chunkSize)); + fullChunkSize = sizeof(SlabChunk) + MAXALIGN(chunkSize); /* Make sure the block can store at least one chunk. */ if (blockSize - sizeof(SlabBlock) < fullChunkSize) @@ -443,7 +452,7 @@ SlabAlloc(MemoryContext context, Size size) /* Prepare to initialize the chunk header. */ VALGRIND_MAKE_MEM_UNDEFINED(chunk, sizeof(SlabChunk)); - chunk->block = (void *) block; + chunk->block = block; chunk->slab = slab; #ifdef MEMORY_CONTEXT_CHECKING From 0f2458ff5f970cade04313f1a10fe01d02f888b7 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 24 Nov 2017 19:28:19 -0500 Subject: [PATCH 0603/1087] Improve valgrind logic in aset.c, and fix multiple issues in generation.c. Revise aset.c so that all the "private" fields of chunk headers are marked NOACCESS when outside the module, improving on the previous coding which protected only requested_size. Fix a couple of corner case bugs, such as failing to re-protect the header during a failure exit from AllocSetRealloc, and wrong padding-size calculation for an oversize allocation request. 
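The invariant this establishes is that the private header fields are NOACCESS whenever control is outside the module, so every function that reads them must bracket the access: mark the header DEFINED on entry, and re-protect it on every exit path, including failure exits. Schematically, continuing the hypothetical DemoChunk sketch from above (the VALGRIND_* macros come from memdebug.h and compile to no-ops in non-valgrind builds):

	#define DEMOCHUNK_PRIVATE_LEN	offsetof(DemoChunk, owner)

	static Size
	demo_get_chunk_space(DemoChunk *chunk)
	{
		Size		result;

		/* Allow access to private part of chunk header. */
		VALGRIND_MAKE_MEM_DEFINED(chunk, DEMOCHUNK_PRIVATE_LEN);

		result = chunk->size + sizeof(DemoChunk);

		/* Disallow external access again before returning. */
		VALGRIND_MAKE_MEM_NOACCESS(chunk, DEMOCHUNK_PRIVATE_LEN);

		return result;
	}

The corner-case bugs mentioned above were exactly failures of this discipline: an early return that skipped the final NOACCESS call leaves the header readable to code outside the module, defeating the check.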
Apply the same design to generation.c, and also fix several bugs therein that I found by dint of hacking the code to use generation.c as the standard allocator and then running the core regression tests with it. Notably, we have to track the actual size of each block, else the wipe_mem call in GenerationReset clears the wrong amount of memory for an oversize-chunk block; and GenerationCheck needs a way of identifying freed chunks that isn't fooled by palloc(0). I chose to fix the latter by resetting the context pointer to NULL in a freed chunk, roughly like what happens in a freed aset.c chunk. Discussion: https://postgr.es/m/E1eHa4J-0006hI-Q8@gemulon.postgresql.org --- src/backend/utils/mmgr/aset.c | 121 +++++++++++++------ src/backend/utils/mmgr/generation.c | 173 ++++++++++++++++++---------- 2 files changed, 192 insertions(+), 102 deletions(-) diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c index 0b017a4031..1bd1c34fde 100644 --- a/src/backend/utils/mmgr/aset.c +++ b/src/backend/utils/mmgr/aset.c @@ -190,6 +190,14 @@ typedef struct AllocChunkData /* there must not be any padding to reach a MAXALIGN boundary here! */ } AllocChunkData; +/* + * Only the "aset" field should be accessed outside this module. + * We keep the rest of an allocated chunk's header marked NOACCESS when using + * valgrind. But note that chunk headers that are in a freelist are kept + * accessible, for simplicity. + */ +#define ALLOCCHUNK_PRIVATE_LEN offsetof(AllocChunkData, aset) + /* * AllocPointerIsValid * True iff pointer is valid allocation pointer. @@ -572,6 +580,10 @@ AllocSetDelete(MemoryContext context) * No request may exceed: * MAXALIGN_DOWN(SIZE_MAX) - ALLOC_BLOCKHDRSZ - ALLOC_CHUNKHDRSZ * All callers use a much-lower limit. + * + * Note: when using valgrind, it doesn't matter how the returned allocation + * is marked, as mcxt.c will set it to UNDEFINED. In some paths we will + * return space that is marked NOACCESS - AllocSetRealloc has to beware! */ static void * AllocSetAlloc(MemoryContext context, Size size) @@ -603,7 +615,6 @@ AllocSetAlloc(MemoryContext context, Size size) chunk->aset = set; chunk->size = chunk_size; #ifdef MEMORY_CONTEXT_CHECKING - /* Valgrind: Will be made NOACCESS below. */ chunk->requested_size = size; /* set mark to catch clobber of "unused" space */ if (size < chunk_size) @@ -635,14 +646,12 @@ AllocSetAlloc(MemoryContext context, Size size) AllocAllocInfo(set, chunk); - /* - * Chunk's metadata fields remain DEFINED. The requested allocation - * itself can be NOACCESS or UNDEFINED; our caller will soon make it - * UNDEFINED. Make extra space at the end of the chunk, if any, - * NOACCESS. - */ - VALGRIND_MAKE_MEM_NOACCESS((char *) chunk + ALLOC_CHUNKHDRSZ, - chunk_size - ALLOC_CHUNKHDRSZ); + /* Ensure any padding bytes are marked NOACCESS. */ + VALGRIND_MAKE_MEM_NOACCESS((char *) AllocChunkGetPointer(chunk) + size, + chunk_size - size); + + /* Disallow external access to private part of chunk header. */ + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); return AllocChunkGetPointer(chunk); } @@ -664,10 +673,7 @@ AllocSetAlloc(MemoryContext context, Size size) chunk->aset = (void *) set; #ifdef MEMORY_CONTEXT_CHECKING - /* Valgrind: Free list requested_size should be DEFINED. 
*/ chunk->requested_size = size; - VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size, - sizeof(chunk->requested_size)); /* set mark to catch clobber of "unused" space */ if (size < chunk->size) set_sentinel(AllocChunkGetPointer(chunk), size); @@ -678,6 +684,14 @@ AllocSetAlloc(MemoryContext context, Size size) #endif AllocAllocInfo(set, chunk); + + /* Ensure any padding bytes are marked NOACCESS. */ + VALGRIND_MAKE_MEM_NOACCESS((char *) AllocChunkGetPointer(chunk) + size, + chunk->size - size); + + /* Disallow external access to private part of chunk header. */ + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); + return AllocChunkGetPointer(chunk); } @@ -831,8 +845,6 @@ AllocSetAlloc(MemoryContext context, Size size) chunk->size = chunk_size; #ifdef MEMORY_CONTEXT_CHECKING chunk->requested_size = size; - VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size, - sizeof(chunk->requested_size)); /* set mark to catch clobber of "unused" space */ if (size < chunk->size) set_sentinel(AllocChunkGetPointer(chunk), size); @@ -843,6 +855,14 @@ AllocSetAlloc(MemoryContext context, Size size) #endif AllocAllocInfo(set, chunk); + + /* Ensure any padding bytes are marked NOACCESS. */ + VALGRIND_MAKE_MEM_NOACCESS((char *) AllocChunkGetPointer(chunk) + size, + chunk_size - size); + + /* Disallow external access to private part of chunk header. */ + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); + return AllocChunkGetPointer(chunk); } @@ -856,11 +876,12 @@ AllocSetFree(MemoryContext context, void *pointer) AllocSet set = (AllocSet) context; AllocChunk chunk = AllocPointerGetChunk(pointer); + /* Allow access to private part of chunk header. */ + VALGRIND_MAKE_MEM_DEFINED(chunk, ALLOCCHUNK_PRIVATE_LEN); + AllocFreeInfo(set, chunk); #ifdef MEMORY_CONTEXT_CHECKING - VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size, - sizeof(chunk->requested_size)); /* Test for someone scribbling on unused space in chunk */ if (chunk->requested_size < chunk->size) if (!sentinel_ok(pointer, chunk->requested_size)) @@ -935,11 +956,14 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size) { AllocSet set = (AllocSet) context; AllocChunk chunk = AllocPointerGetChunk(pointer); - Size oldsize = chunk->size; + Size oldsize; + + /* Allow access to private part of chunk header. */ + VALGRIND_MAKE_MEM_DEFINED(chunk, ALLOCCHUNK_PRIVATE_LEN); + + oldsize = chunk->size; #ifdef MEMORY_CONTEXT_CHECKING - VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size, - sizeof(chunk->requested_size)); /* Test for someone scribbling on unused space in chunk */ if (chunk->requested_size < oldsize) if (!sentinel_ok(pointer, chunk->requested_size)) @@ -965,8 +989,6 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size) #endif chunk->requested_size = size; - VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size, - sizeof(chunk->requested_size)); /* * If this is an increase, mark any newly-available part UNDEFINED. @@ -993,6 +1015,9 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size) VALGRIND_MAKE_MEM_DEFINED(pointer, size); #endif + /* Disallow external access to private part of chunk header. */ + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); + return pointer; } @@ -1023,7 +1048,11 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size) blksize = chksize + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ; block = (AllocBlock) realloc(block, blksize); if (block == NULL) + { + /* Disallow external access to private part of chunk header. 
*/ + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); return NULL; + } block->freeptr = block->endptr = ((char *) block) + blksize; /* Update pointers since block has likely been moved */ @@ -1053,8 +1082,6 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size) oldsize - chunk->requested_size); chunk->requested_size = size; - VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size, - sizeof(chunk->requested_size)); /* set mark to catch clobber of "unused" space */ if (size < chunk->size) @@ -1069,9 +1096,12 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size) VALGRIND_MAKE_MEM_DEFINED(pointer, oldsize); #endif - /* Make any trailing alignment padding NOACCESS. */ + /* Ensure any padding bytes are marked NOACCESS. */ VALGRIND_MAKE_MEM_NOACCESS((char *) pointer + size, chksize - size); + /* Disallow external access to private part of chunk header. */ + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); + return pointer; } else @@ -1094,14 +1124,19 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size) /* leave immediately if request was not completed */ if (newPointer == NULL) + { + /* Disallow external access to private part of chunk header. */ + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); return NULL; + } /* - * AllocSetAlloc() just made the region NOACCESS. Change it to - * UNDEFINED for the moment; memcpy() will then transfer definedness - * from the old allocation to the new. If we know the old allocation, - * copy just that much. Otherwise, make the entire old chunk defined - * to avoid errors as we copy the currently-NOACCESS trailing bytes. + * AllocSetAlloc() may have returned a region that is still NOACCESS. + * Change it to UNDEFINED for the moment; memcpy() will then transfer + * definedness from the old allocation to the new. If we know the old + * allocation, copy just that much. Otherwise, make the entire old + * chunk defined to avoid errors as we copy the currently-NOACCESS + * trailing bytes. */ VALGRIND_MAKE_MEM_UNDEFINED(newPointer, size); #ifdef MEMORY_CONTEXT_CHECKING @@ -1129,8 +1164,12 @@ static Size AllocSetGetChunkSpace(MemoryContext context, void *pointer) { AllocChunk chunk = AllocPointerGetChunk(pointer); + Size result; - return chunk->size + ALLOC_CHUNKHDRSZ; + VALGRIND_MAKE_MEM_DEFINED(chunk, ALLOCCHUNK_PRIVATE_LEN); + result = chunk->size + ALLOC_CHUNKHDRSZ; + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); + return result; } /* @@ -1267,13 +1306,11 @@ AllocSetCheck(MemoryContext context) Size chsize, dsize; + /* Allow access to private part of chunk header. */ + VALGRIND_MAKE_MEM_DEFINED(chunk, ALLOCCHUNK_PRIVATE_LEN); + chsize = chunk->size; /* aligned chunk size */ - VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size, - sizeof(chunk->requested_size)); dsize = chunk->requested_size; /* real data */ - if (dsize > 0) /* not on a free list */ - VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size, - sizeof(chunk->requested_size)); /* * Check chunk size @@ -1294,20 +1331,28 @@ AllocSetCheck(MemoryContext context) /* * If chunk is allocated, check for correct aset pointer. (If it's * free, the aset is the freelist pointer, which we can't check as - * easily...) + * easily...) Note this is an incomplete test, since palloc(0) + * produces an allocated chunk with requested_size == 0. 
*/ if (dsize > 0 && chunk->aset != (void *) set) elog(WARNING, "problem in alloc set %s: bogus aset link in block %p, chunk %p", name, block, chunk); /* - * Check for overwrite of "unallocated" space in chunk + * Check for overwrite of padding space in an allocated chunk. */ - if (dsize > 0 && dsize < chsize && + if (chunk->aset == (void *) set && dsize < chsize && !sentinel_ok(chunk, ALLOC_CHUNKHDRSZ + dsize)) elog(WARNING, "problem in alloc set %s: detected write past chunk end in block %p, chunk %p", name, block, chunk); + /* + * If chunk is allocated, disallow external access to private part + * of chunk header. + */ + if (chunk->aset == (void *) set) + VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN); + blk_data += chsize; nchunks++; diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c index 8dd0a35095..31342ad69b 100644 --- a/src/backend/utils/mmgr/generation.c +++ b/src/backend/utils/mmgr/generation.c @@ -60,7 +60,7 @@ typedef struct GenerationContext MemoryContextData header; /* Standard memory-context fields */ /* Generational context parameters */ - Size blockSize; /* block size */ + Size blockSize; /* standard block size */ GenerationBlock *block; /* current (most recently allocated) block */ dlist_head blocks; /* list of blocks */ @@ -81,6 +81,7 @@ typedef struct GenerationContext struct GenerationBlock { dlist_node node; /* doubly-linked list of blocks */ + Size blksize; /* allocated size of this block */ int nchunks; /* number of chunks in the block */ int nfree; /* number of free chunks */ char *freeptr; /* start of free space in this block */ @@ -119,10 +120,17 @@ struct GenerationChunk #endif GenerationBlock *block; /* block owning this chunk */ - GenerationContext *context; /* owning context */ + GenerationContext *context; /* owning context, or NULL if freed chunk */ /* there must not be any padding to reach a MAXALIGN boundary here! */ }; +/* + * Only the "context" field should be accessed outside this module. + * We keep the rest of an allocated chunk's header marked NOACCESS when using + * valgrind. But note that freed chunk headers are kept accessible, for + * simplicity. + */ +#define GENERATIONCHUNK_PRIVATE_LEN offsetof(GenerationChunk, context) /* * GenerationIsValid @@ -231,7 +239,6 @@ GenerationContextCreate(MemoryContext parent, name); set->blockSize = blockSize; - set->block = NULL; return (MemoryContext) set; } @@ -245,6 +252,7 @@ GenerationInit(MemoryContext context) { GenerationContext *set = (GenerationContext *) context; + set->block = NULL; dlist_init(&set->blocks); } @@ -274,9 +282,8 @@ GenerationReset(MemoryContext context) dlist_delete(miter.cur); - /* Normal case, release the block */ #ifdef CLOBBER_FREED_MEMORY - wipe_mem(block, set->blockSize); + wipe_mem(block, block->blksize); #endif free(block); @@ -290,14 +297,15 @@ GenerationReset(MemoryContext context) /* * GenerationDelete * Frees all memory which is allocated in the given set, in preparation - * for deletion of the set. We simply call GenerationReset() which does all the - * dirty work. + * for deletion of the set. We simply call GenerationReset() which does + * all the dirty work. 
*/ static void GenerationDelete(MemoryContext context) { - /* just reset (although not really necessary) */ + /* just reset to release all the GenerationBlocks */ GenerationReset(context); + /* we are not responsible for deleting the context node itself */ } /* @@ -308,6 +316,10 @@ GenerationDelete(MemoryContext context) * No request may exceed: * MAXALIGN_DOWN(SIZE_MAX) - Generation_BLOCKHDRSZ - Generation_CHUNKHDRSZ * All callers use a much-lower limit. + * + * Note: when using valgrind, it doesn't matter how the returned allocation + * is marked, as mcxt.c will set it to UNDEFINED. In some paths we will + * return space that is marked NOACCESS - GenerationRealloc has to beware! */ static void * GenerationAlloc(MemoryContext context, Size size) @@ -327,6 +339,7 @@ GenerationAlloc(MemoryContext context, Size size) return NULL; /* block with a single (used) chunk */ + block->blksize = blksize; block->nchunks = 1; block->nfree = 0; @@ -339,7 +352,6 @@ GenerationAlloc(MemoryContext context, Size size) chunk->size = chunk_size; #ifdef MEMORY_CONTEXT_CHECKING - /* Valgrind: Will be made NOACCESS below. */ chunk->requested_size = size; /* set mark to catch clobber of "unused" space */ if (size < chunk_size) @@ -353,21 +365,20 @@ GenerationAlloc(MemoryContext context, Size size) /* add the block to the list of allocated blocks */ dlist_push_head(&set->blocks, &block->node); - /* - * Chunk header fields remain DEFINED. The requested allocation - * itself can be NOACCESS or UNDEFINED; our caller will soon make it - * UNDEFINED. Make extra space at the end of the chunk, if any, - * NOACCESS. - */ - VALGRIND_MAKE_MEM_NOACCESS((char *) chunk + Generation_CHUNKHDRSZ + size, + GenerationAllocInfo(set, chunk); + + /* Ensure any padding bytes are marked NOACCESS. */ + VALGRIND_MAKE_MEM_NOACCESS((char *) GenerationChunkGetPointer(chunk) + size, chunk_size - size); - GenerationAllocInfo(set, chunk); + /* Disallow external access to private part of chunk header. */ + VALGRIND_MAKE_MEM_NOACCESS(chunk, GENERATIONCHUNK_PRIVATE_LEN); + return GenerationChunkGetPointer(chunk); } /* - * Not an over-sized chunk. Is there enough space on the current block? If + * Not an over-sized chunk. Is there enough space in the current block? If * not, allocate a new "regular" block. */ block = set->block; @@ -382,6 +393,7 @@ GenerationAlloc(MemoryContext context, Size size) if (block == NULL) return NULL; + block->blksize = blksize; block->nchunks = 0; block->nfree = 0; @@ -414,15 +426,11 @@ GenerationAlloc(MemoryContext context, Size size) Assert(block->freeptr <= block->endptr); chunk->block = block; - chunk->context = set; chunk->size = chunk_size; #ifdef MEMORY_CONTEXT_CHECKING - /* Valgrind: Free list requested_size should be DEFINED. */ chunk->requested_size = size; - VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size, - sizeof(chunk->requested_size)); /* set mark to catch clobber of "unused" space */ if (size < chunk->size) set_sentinel(GenerationChunkGetPointer(chunk), size); @@ -433,6 +441,14 @@ GenerationAlloc(MemoryContext context, Size size) #endif GenerationAllocInfo(set, chunk); + + /* Ensure any padding bytes are marked NOACCESS. */ + VALGRIND_MAKE_MEM_NOACCESS((char *) GenerationChunkGetPointer(chunk) + size, + chunk_size - size); + + /* Disallow external access to private part of chunk header. 
*/ + VALGRIND_MAKE_MEM_NOACCESS(chunk, GENERATIONCHUNK_PRIVATE_LEN); + return GenerationChunkGetPointer(chunk); } @@ -446,11 +462,14 @@ GenerationFree(MemoryContext context, void *pointer) { GenerationContext *set = (GenerationContext *) context; GenerationChunk *chunk = GenerationPointerGetChunk(pointer); - GenerationBlock *block = chunk->block; + GenerationBlock *block; + + /* Allow access to private part of chunk header. */ + VALGRIND_MAKE_MEM_DEFINED(chunk, GENERATIONCHUNK_PRIVATE_LEN); + + block = chunk->block; #ifdef MEMORY_CONTEXT_CHECKING - VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size, - sizeof(chunk->requested_size)); /* Test for someone scribbling on unused space in chunk */ if (chunk->requested_size < chunk->size) if (!sentinel_ok(pointer, chunk->requested_size)) @@ -462,8 +481,11 @@ GenerationFree(MemoryContext context, void *pointer) wipe_mem(pointer, chunk->size); #endif + /* Reset context to NULL in freed chunks */ + chunk->context = NULL; + #ifdef MEMORY_CONTEXT_CHECKING - /* Reset requested_size to 0 in chunks that are on freelist */ + /* Reset requested_size to 0 in freed chunks */ chunk->requested_size = 0; #endif @@ -472,7 +494,7 @@ GenerationFree(MemoryContext context, void *pointer) Assert(block->nchunks > 0); Assert(block->nfree <= block->nchunks); - /* If there are still allocated chunks on the block, we're done. */ + /* If there are still allocated chunks in the block, we're done. */ if (block->nfree < block->nchunks) return; @@ -501,11 +523,14 @@ GenerationRealloc(MemoryContext context, void *pointer, Size size) GenerationContext *set = (GenerationContext *) context; GenerationChunk *chunk = GenerationPointerGetChunk(pointer); GenerationPointer newPointer; - Size oldsize = chunk->size; + Size oldsize; + + /* Allow access to private part of chunk header. */ + VALGRIND_MAKE_MEM_DEFINED(chunk, GENERATIONCHUNK_PRIVATE_LEN); + + oldsize = chunk->size; #ifdef MEMORY_CONTEXT_CHECKING - VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size, - sizeof(chunk->requested_size)); /* Test for someone scribbling on unused space in chunk */ if (chunk->requested_size < oldsize) if (!sentinel_ok(pointer, chunk->requested_size)) @@ -517,7 +542,7 @@ GenerationRealloc(MemoryContext context, void *pointer, Size size) * Maybe the allocated area already is >= the new size. (In particular, * we always fall out here if the requested size is a decrease.) * - * This memory context is not use the power-of-2 chunk sizing and instead + * This memory context does not use power-of-2 chunk sizing and instead * carves the chunks to be as small as possible, so most repalloc() calls * will end up in the palloc/memcpy/pfree branch. * @@ -536,8 +561,6 @@ GenerationRealloc(MemoryContext context, void *pointer, Size size) #endif chunk->requested_size = size; - VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size, - sizeof(chunk->requested_size)); /* * If this is an increase, mark any newly-available part UNDEFINED. @@ -564,6 +587,9 @@ GenerationRealloc(MemoryContext context, void *pointer, Size size) VALGRIND_MAKE_MEM_DEFINED(pointer, size); #endif + /* Disallow external access to private part of chunk header. */ + VALGRIND_MAKE_MEM_NOACCESS(chunk, GENERATIONCHUNK_PRIVATE_LEN); + return pointer; } @@ -572,14 +598,19 @@ GenerationRealloc(MemoryContext context, void *pointer, Size size) /* leave immediately if request was not completed */ if (newPointer == NULL) + { + /* Disallow external access to private part of chunk header. 
*/ + VALGRIND_MAKE_MEM_NOACCESS(chunk, GENERATIONCHUNK_PRIVATE_LEN); return NULL; + } /* - * GenerationSetAlloc() just made the region NOACCESS. Change it to UNDEFINED - * for the moment; memcpy() will then transfer definedness from the old - * allocation to the new. If we know the old allocation, copy just that - * much. Otherwise, make the entire old chunk defined to avoid errors as - * we copy the currently-NOACCESS trailing bytes. + * GenerationAlloc() may have returned a region that is still NOACCESS. + * Change it to UNDEFINED for the moment; memcpy() will then transfer + * definedness from the old allocation to the new. If we know the old + * allocation, copy just that much. Otherwise, make the entire old chunk + * defined to avoid errors as we copy the currently-NOACCESS trailing + * bytes. */ VALGRIND_MAKE_MEM_UNDEFINED(newPointer, size); #ifdef MEMORY_CONTEXT_CHECKING @@ -606,13 +637,17 @@ static Size GenerationGetChunkSpace(MemoryContext context, void *pointer) { GenerationChunk *chunk = GenerationPointerGetChunk(pointer); + Size result; - return chunk->size + Generation_CHUNKHDRSZ; + VALGRIND_MAKE_MEM_DEFINED(chunk, GENERATIONCHUNK_PRIVATE_LEN); + result = chunk->size + Generation_CHUNKHDRSZ; + VALGRIND_MAKE_MEM_NOACCESS(chunk, GENERATIONCHUNK_PRIVATE_LEN); + return result; } /* * GenerationIsEmpty - * Is an Generation empty of any allocated space? + * Is a GenerationContext empty of any allocated space? */ static bool GenerationIsEmpty(MemoryContext context) @@ -638,13 +673,11 @@ GenerationStats(MemoryContext context, int level, bool print, MemoryContextCounters *totals) { GenerationContext *set = (GenerationContext *) context; - Size nblocks = 0; Size nchunks = 0; Size nfreechunks = 0; Size totalspace = 0; Size freespace = 0; - dlist_iter iter; dlist_foreach(iter, &set->blocks) @@ -654,7 +687,7 @@ GenerationStats(MemoryContext context, int level, bool print, nblocks++; nchunks += block->nchunks; nfreechunks += block->nfree; - totalspace += set->blockSize; + totalspace += block->blksize; freespace += (block->endptr - block->freeptr); } @@ -700,13 +733,16 @@ GenerationCheck(MemoryContext context) /* walk all blocks in this context */ dlist_foreach(iter, &gen->blocks) { + GenerationBlock *block = dlist_container(GenerationBlock, node, iter.cur); int nfree, nchunks; char *ptr; - GenerationBlock *block = dlist_container(GenerationBlock, node, iter.cur); - /* We can't free more chunks than allocated. */ - if (block->nfree <= block->nchunks) + /* + * nfree > nchunks is surely wrong, and we don't expect to see + * equality either, because such a block should have gotten freed. + */ + if (block->nfree >= block->nchunks) elog(WARNING, "problem in Generation %s: number of free chunks %d in block %p exceeds %d allocated", name, block->nfree, block, block->nchunks); @@ -719,37 +755,39 @@ GenerationCheck(MemoryContext context) { GenerationChunk *chunk = (GenerationChunk *) ptr; + /* Allow access to private part of chunk header. */ + VALGRIND_MAKE_MEM_DEFINED(chunk, GENERATIONCHUNK_PRIVATE_LEN); + /* move to the next chunk */ ptr += (chunk->size + Generation_CHUNKHDRSZ); + nchunks += 1; + /* chunks have both block and context pointers, so check both */ if (chunk->block != block) elog(WARNING, "problem in Generation %s: bogus block link in block %p, chunk %p", name, block, chunk); - if (chunk->context != gen) + /* + * Check for valid context pointer. Note this is an incomplete + * test, since palloc(0) produces an allocated chunk with + * requested_size == 0. 
+ */ + if ((chunk->requested_size > 0 && chunk->context != gen) || + (chunk->context != gen && chunk->context != NULL)) elog(WARNING, "problem in Generation %s: bogus context link in block %p, chunk %p", name, block, chunk); - nchunks += 1; + /* now make sure the chunk size is correct */ + if (chunk->size < chunk->requested_size || + chunk->size != MAXALIGN(chunk->size)) + elog(WARNING, "problem in Generation %s: bogus chunk size in block %p, chunk %p", + name, block, chunk); - /* if requested_size==0, the chunk was freed */ - if (chunk->requested_size > 0) + /* is chunk allocated? */ + if (chunk->context != NULL) { - /* if the chunk was not freed, we can trigger valgrind checks */ - VALGRIND_MAKE_MEM_DEFINED(&chunk->requested_size, - sizeof(chunk->requested_size)); - - /* we're in a no-freelist branch */ - VALGRIND_MAKE_MEM_NOACCESS(&chunk->requested_size, - sizeof(chunk->requested_size)); - - /* now make sure the chunk size is correct */ - if (chunk->size != MAXALIGN(chunk->requested_size)) - elog(WARNING, "problem in Generation %s: bogus chunk size in block %p, chunk %p", - name, block, chunk); - - /* there might be sentinel (thanks to alignment) */ + /* check sentinel, but only in allocated blocks */ if (chunk->requested_size < chunk->size && !sentinel_ok(chunk, Generation_CHUNKHDRSZ + chunk->requested_size)) elog(WARNING, "problem in Generation %s: detected write past chunk end in block %p, chunk %p", @@ -757,6 +795,13 @@ GenerationCheck(MemoryContext context) } else nfree += 1; + + /* + * If chunk is allocated, disallow external access to private part + * of chunk header. + */ + if (chunk->context != NULL) + VALGRIND_MAKE_MEM_NOACCESS(chunk, GENERATIONCHUNK_PRIVATE_LEN); } /* From b10967eddf964f8c0a11060cf3f366bbdd1235f6 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Sat, 25 Nov 2017 10:49:17 -0500 Subject: [PATCH 0604/1087] Avoid projecting tuples unnecessarily in Gather and Gather Merge. It's most often the case that the target list for the Gather (Merge) node matches the target list supplied by the underlying plan node; when this is so, we can avoid the overhead of projecting. This depends on commit f455e1125e2588d4cd4fc663c6a10da4e003a3b5 for proper functioning. Idea by Andres Freund. Patch by me. Review by Amit Kapila. 
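The test for "no projection needed" is a per-column walk over the plan's target list, checking that entry k is exactly a Var for input column k with a matching type. Condensed from the tlist_matches_tupdesc logic moved in this patch (a simplified sketch: the typmod special case and the dropped-column and hasoid checks of the real function are omitted here):

	static bool
	demo_tlist_is_trivial(List *tlist, Index varno, TupleDesc tupdesc)
	{
		ListCell   *tlist_item = list_head(tlist);
		int			attrno;

		for (attrno = 1; attrno <= tupdesc->natts; attrno++)
		{
			Form_pg_attribute att_tup = TupleDescAttr(tupdesc, attrno - 1);
			Var		   *var;

			if (tlist_item == NULL)
				return false;	/* tlist too short */
			var = (Var *) ((TargetEntry *) lfirst(tlist_item))->expr;
			if (!var || !IsA(var, Var))
				return false;	/* tlist item not a Var */
			if (var->varno != varno ||
				var->varattno != attrno ||
				var->vartype != att_tup->atttypid)
				return false;	/* wrong column or type mismatch */
			tlist_item = lnext(tlist_item);
		}
		/* the tlist must not be longer than the input either */
		return tlist_item == NULL;
	}

When the test succeeds, ExecConditionalAssignProjectionInfo stores a NULL ps_ProjInfo, and ExecGather / ExecGatherMerge can hand back the slot from the underlying plan untouched instead of calling ExecProject().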
Discussion: http://postgr.es/m/CA+TgmoZ0ZL=cesZFq8c9NnfK6bqy-wwUd3_74iYGodYrSoQ7Fw@mail.gmail.com --- src/backend/executor/execScan.c | 75 ++---------------------- src/backend/executor/execUtils.c | 80 ++++++++++++++++++++++++++ src/backend/executor/nodeGather.c | 16 ++++-- src/backend/executor/nodeGatherMerge.c | 24 ++++---- src/include/executor/executor.h | 2 + 5 files changed, 110 insertions(+), 87 deletions(-) diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c index 5dfc49deb9..837abc0f01 100644 --- a/src/backend/executor/execScan.c +++ b/src/backend/executor/execScan.c @@ -23,8 +23,6 @@ #include "utils/memutils.h" -static bool tlist_matches_tupdesc(PlanState *ps, List *tlist, Index varno, TupleDesc tupdesc); - /* * ExecScanFetch -- check interrupts & fetch next potential tuple @@ -237,8 +235,9 @@ void ExecAssignScanProjectionInfo(ScanState *node) { Scan *scan = (Scan *) node->ps.plan; + TupleDesc tupdesc = node->ss_ScanTupleSlot->tts_tupleDescriptor; - ExecAssignScanProjectionInfoWithVarno(node, scan->scanrelid); + ExecConditionalAssignProjectionInfo(&node->ps, tupdesc, scan->scanrelid); } /* @@ -248,75 +247,9 @@ ExecAssignScanProjectionInfo(ScanState *node) void ExecAssignScanProjectionInfoWithVarno(ScanState *node, Index varno) { - Scan *scan = (Scan *) node->ps.plan; - - if (tlist_matches_tupdesc(&node->ps, - scan->plan.targetlist, - varno, - node->ss_ScanTupleSlot->tts_tupleDescriptor)) - node->ps.ps_ProjInfo = NULL; - else - ExecAssignProjectionInfo(&node->ps, - node->ss_ScanTupleSlot->tts_tupleDescriptor); -} - -static bool -tlist_matches_tupdesc(PlanState *ps, List *tlist, Index varno, TupleDesc tupdesc) -{ - int numattrs = tupdesc->natts; - int attrno; - bool hasoid; - ListCell *tlist_item = list_head(tlist); - - /* Check the tlist attributes */ - for (attrno = 1; attrno <= numattrs; attrno++) - { - Form_pg_attribute att_tup = TupleDescAttr(tupdesc, attrno - 1); - Var *var; - - if (tlist_item == NULL) - return false; /* tlist too short */ - var = (Var *) ((TargetEntry *) lfirst(tlist_item))->expr; - if (!var || !IsA(var, Var)) - return false; /* tlist item not a Var */ - /* if these Asserts fail, planner messed up */ - Assert(var->varno == varno); - Assert(var->varlevelsup == 0); - if (var->varattno != attrno) - return false; /* out of order */ - if (att_tup->attisdropped) - return false; /* table contains dropped columns */ - - /* - * Note: usually the Var's type should match the tupdesc exactly, but - * in situations involving unions of columns that have different - * typmods, the Var may have come from above the union and hence have - * typmod -1. This is a legitimate situation since the Var still - * describes the column, just not as exactly as the tupdesc does. We - * could change the planner to prevent it, but it'd then insert - * projection steps just to convert from specific typmod to typmod -1, - * which is pretty silly. - */ - if (var->vartype != att_tup->atttypid || - (var->vartypmod != att_tup->atttypmod && - var->vartypmod != -1)) - return false; /* type mismatch */ - - tlist_item = lnext(tlist_item); - } - - if (tlist_item) - return false; /* tlist too long */ - - /* - * If the plan context requires a particular hasoid setting, then that has - * to match, too. 
- */ - if (ExecContextForcesOids(ps, &hasoid) && - hasoid != tupdesc->tdhasoid) - return false; + TupleDesc tupdesc = node->ss_ScanTupleSlot->tts_tupleDescriptor; - return true; + ExecConditionalAssignProjectionInfo(&node->ps, tupdesc, varno); } /* diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index e8c06c7605..876439835a 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -56,6 +56,7 @@ #include "utils/typcache.h" +static bool tlist_matches_tupdesc(PlanState *ps, List *tlist, Index varno, TupleDesc tupdesc); static void ShutdownExprContext(ExprContext *econtext, bool isCommit); @@ -503,6 +504,85 @@ ExecAssignProjectionInfo(PlanState *planstate, } +/* ---------------- + * ExecConditionalAssignProjectionInfo + * + * as ExecAssignProjectionInfo, but store NULL rather than building projection + * info if no projection is required + * ---------------- + */ +void +ExecConditionalAssignProjectionInfo(PlanState *planstate, TupleDesc inputDesc, + Index varno) +{ + if (tlist_matches_tupdesc(planstate, + planstate->plan->targetlist, + varno, + inputDesc)) + planstate->ps_ProjInfo = NULL; + else + ExecAssignProjectionInfo(planstate, inputDesc); +} + +static bool +tlist_matches_tupdesc(PlanState *ps, List *tlist, Index varno, TupleDesc tupdesc) +{ + int numattrs = tupdesc->natts; + int attrno; + bool hasoid; + ListCell *tlist_item = list_head(tlist); + + /* Check the tlist attributes */ + for (attrno = 1; attrno <= numattrs; attrno++) + { + Form_pg_attribute att_tup = TupleDescAttr(tupdesc, attrno - 1); + Var *var; + + if (tlist_item == NULL) + return false; /* tlist too short */ + var = (Var *) ((TargetEntry *) lfirst(tlist_item))->expr; + if (!var || !IsA(var, Var)) + return false; /* tlist item not a Var */ + /* if these Asserts fail, planner messed up */ + Assert(var->varno == varno); + Assert(var->varlevelsup == 0); + if (var->varattno != attrno) + return false; /* out of order */ + if (att_tup->attisdropped) + return false; /* table contains dropped columns */ + + /* + * Note: usually the Var's type should match the tupdesc exactly, but + * in situations involving unions of columns that have different + * typmods, the Var may have come from above the union and hence have + * typmod -1. This is a legitimate situation since the Var still + * describes the column, just not as exactly as the tupdesc does. We + * could change the planner to prevent it, but it'd then insert + * projection steps just to convert from specific typmod to typmod -1, + * which is pretty silly. + */ + if (var->vartype != att_tup->atttypid || + (var->vartypmod != att_tup->atttypmod && + var->vartypmod != -1)) + return false; /* type mismatch */ + + tlist_item = lnext(tlist_item); + } + + if (tlist_item) + return false; /* tlist too long */ + + /* + * If the plan context requires a particular hasoid setting, then that has + * to match, too. + */ + if (ExecContextForcesOids(ps, &hasoid) && + hasoid != tupdesc->tdhasoid) + return false; + + return true; +} + /* ---------------- * ExecFreeExprContext * diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 30885e6f5c..212612b535 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -103,12 +103,6 @@ ExecInitGather(Gather *node, EState *estate, int eflags) outerNode = outerPlan(node); outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags); - /* - * Initialize result tuple type and projection info. 
- */ - ExecAssignResultTypeFromTL(&gatherstate->ps); - ExecAssignProjectionInfo(&gatherstate->ps, NULL); - /* * Initialize funnel slot to same tuple descriptor as outer plan. */ @@ -117,6 +111,12 @@ ExecInitGather(Gather *node, EState *estate, int eflags) tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid); ExecSetSlotDescriptor(gatherstate->funnel_slot, tupDesc); + /* + * Initialize result tuple type and projection info. + */ + ExecAssignResultTypeFromTL(&gatherstate->ps); + ExecConditionalAssignProjectionInfo(&gatherstate->ps, tupDesc, OUTER_VAR); + return gatherstate; } @@ -221,6 +221,10 @@ ExecGather(PlanState *pstate) if (TupIsNull(slot)) return NULL; + /* If no projection is required, we're done. */ + if (node->ps.ps_ProjInfo == NULL) + return slot; + /* * Form the result tuple using ExecProject(), and return it. */ diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index d81462e72b..166f2064ff 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -115,11 +115,20 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) outerNode = outerPlan(node); outerPlanState(gm_state) = ExecInitNode(outerNode, estate, eflags); + /* + * Store the tuple descriptor into gather merge state, so we can use it + * while initializing the gather merge slots. + */ + if (!ExecContextForcesOids(outerPlanState(gm_state), &hasoid)) + hasoid = false; + tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid); + gm_state->tupDesc = tupDesc; + /* * Initialize result tuple type and projection info. */ ExecAssignResultTypeFromTL(&gm_state->ps); - ExecAssignProjectionInfo(&gm_state->ps, NULL); + ExecConditionalAssignProjectionInfo(&gm_state->ps, tupDesc, OUTER_VAR); /* * initialize sort-key information @@ -151,15 +160,6 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) } } - /* - * Store the tuple descriptor into gather merge state, so we can use it - * while initializing the gather merge slots. - */ - if (!ExecContextForcesOids(outerPlanState(gm_state), &hasoid)) - hasoid = false; - tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid); - gm_state->tupDesc = tupDesc; - /* Now allocate the workspace for gather merge */ gather_merge_setup(gm_state); @@ -257,6 +257,10 @@ ExecGatherMerge(PlanState *pstate) if (TupIsNull(slot)) return NULL; + /* If no projection is required, we're done. */ + if (node->ps.ps_ProjInfo == NULL) + return slot; + /* * Form the result tuple using ExecProject(), and return it. */ diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index bee4ebf269..b5578f5855 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -485,6 +485,8 @@ extern void ExecAssignResultTypeFromTL(PlanState *planstate); extern TupleDesc ExecGetResultType(PlanState *planstate); extern void ExecAssignProjectionInfo(PlanState *planstate, TupleDesc inputDesc); +extern void ExecConditionalAssignProjectionInfo(PlanState *planstate, + TupleDesc inputDesc, Index varno); extern void ExecFreeExprContext(PlanState *planstate); extern void ExecAssignScanType(ScanState *scanstate, TupleDesc tupDesc); extern void ExecAssignScanTypeFromOuterPlan(ScanState *scanstate); From df3a66e282b66971b3c08896176b95f745466a64 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 25 Nov 2017 11:48:09 -0500 Subject: [PATCH 0605/1087] Improve planner's handling of set-returning functions in grouping columns. 
Improve query_is_distinct_for() to accept SRFs in the targetlist when we can prove distinctness from a DISTINCT clause. In that case the de-duplication will surely happen after SRF expansion, so the proof still works. Continue to punt in the case where we'd try to prove distinctness from GROUP BY (or, in the future, source relations). To do that, we'd have to determine whether the SRFs were in the grouping columns or elsewhere in the tlist, and it still doesn't seem worth the trouble. But this trivial change allows us to recognize that "SELECT DISTINCT unnest(foo) FROM ..." produces unique-ified output, which seems worth having. Also, fix estimate_num_groups() to consider the possibility of SRFs in the grouping columns. Its failure to do so was masked before v10 because grouping_planner() scaled up plan rowcount estimates by the estimated SRF multiplier after performing grouping. That doesn't happen anymore, which is more correct, but it means we need an adjustment in the estimate for the number of groups. Failure to do this leads to an underestimate for the number of output rows of subqueries like "SELECT DISTINCT unnest(foo)" compared to what 9.6 and earlier estimated, thus breaking plan choices in some cases. Per report from Dmitry Shalashov. Back-patch to v10 to avoid degraded plan choices compared to previous releases. Discussion: https://postgr.es/m/CAKPeCUGAeHgoh5O=SvcQxREVkoX7UdeJUMj1F5=aBNvoTa+O8w@mail.gmail.com --- src/backend/optimizer/plan/analyzejoins.c | 28 +++++++++++------------ src/backend/utils/adt/selfuncs.c | 27 ++++++++++++++++++++++ 2 files changed, 41 insertions(+), 14 deletions(-) diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c index 5b0da14748..5783f90b62 100644 --- a/src/backend/optimizer/plan/analyzejoins.c +++ b/src/backend/optimizer/plan/analyzejoins.c @@ -744,8 +744,8 @@ rel_is_distinct_for(PlannerInfo *root, RelOptInfo *rel, List *clause_list) bool query_supports_distinctness(Query *query) { - /* we don't cope with SRFs, see comment below */ - if (query->hasTargetSRFs) + /* SRFs break distinctness except with DISTINCT, see below */ + if (query->hasTargetSRFs && query->distinctClause == NIL) return false; /* check for features we can prove distinctness with */ @@ -786,21 +786,11 @@ query_is_distinct_for(Query *query, List *colnos, List *opids) Assert(list_length(colnos) == list_length(opids)); - /* - * A set-returning function in the query's targetlist can result in - * returning duplicate rows, if the SRF is evaluated after the - * de-duplication step; so we play it safe and say "no" if there are any - * SRFs. (We could be certain that it's okay if SRFs appear only in the - * specified columns, since those must be evaluated before de-duplication; - * but it doesn't presently seem worth the complication to check that.) - */ - if (query->hasTargetSRFs) - return false; - /* * DISTINCT (including DISTINCT ON) guarantees uniqueness if all the * columns in the DISTINCT clause appear in colnos and operator semantics - * match. + * match. This is true even if there are SRFs in the DISTINCT columns or + * elsewhere in the tlist. */ if (query->distinctClause) { @@ -819,6 +809,16 @@ query_is_distinct_for(Query *query, List *colnos, List *opids) return true; } + /* + * Otherwise, a set-returning function in the query's targetlist can + * result in returning duplicate rows, despite any grouping that might + * occur before tlist evaluation. 
(If all tlist SRFs are within GROUP BY + * columns, it would be safe because they'd be expanded before grouping. + * But it doesn't currently seem worth the effort to check for that.) + */ + if (query->hasTargetSRFs) + return false; + /* * Similarly, GROUP BY without GROUPING SETS guarantees uniqueness if all * the grouped columns appear in colnos and operator semantics match. diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c index 4bbb4a850e..edff6da410 100644 --- a/src/backend/utils/adt/selfuncs.c +++ b/src/backend/utils/adt/selfuncs.c @@ -3361,6 +3361,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows, List **pgset) { List *varinfos = NIL; + double srf_multiplier = 1.0; double numdistinct; ListCell *l; int i; @@ -3394,6 +3395,7 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows, foreach(l, groupExprs) { Node *groupexpr = (Node *) lfirst(l); + double this_srf_multiplier; VariableStatData vardata; List *varshere; ListCell *l2; @@ -3402,6 +3404,21 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows, if (pgset && !list_member_int(*pgset, i++)) continue; + /* + * Set-returning functions in grouping columns are a bit problematic. + * The code below will effectively ignore their SRF nature and come up + * with a numdistinct estimate as though they were scalar functions. + * We compensate by scaling up the end result by the largest SRF + * rowcount estimate. (This will be an overestimate if the SRF + * produces multiple copies of any output value, but it seems best to + * assume the SRF's outputs are distinct. In any case, it's probably + * pointless to worry too much about this without much better + * estimates for SRF output rowcounts than we have today.) + */ + this_srf_multiplier = expression_returns_set_rows(groupexpr); + if (srf_multiplier < this_srf_multiplier) + srf_multiplier = this_srf_multiplier; + /* Short-circuit for expressions returning boolean */ if (exprType(groupexpr) == BOOLOID) { @@ -3467,9 +3484,15 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows, */ if (varinfos == NIL) { + /* Apply SRF multiplier as we would do in the long path */ + numdistinct *= srf_multiplier; + /* Round off */ + numdistinct = ceil(numdistinct); /* Guard against out-of-range answers */ if (numdistinct > input_rows) numdistinct = input_rows; + if (numdistinct < 1.0) + numdistinct = 1.0; return numdistinct; } @@ -3638,6 +3661,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows, varinfos = newvarinfos; } while (varinfos != NIL); + /* Now we can account for the effects of any SRFs */ + numdistinct *= srf_multiplier; + + /* Round off */ numdistinct = ceil(numdistinct); /* Guard against out-of-range answers */ From ab97aaac8f058f2e16ef08655d185db20bc241d3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 25 Nov 2017 13:19:43 -0500 Subject: [PATCH 0606/1087] Update buffile.h/.c comments for removal of non-temp option. Commit 11e264517 removed BufFile's isTemp flag, thereby eliminating the possibility of resurrecting BufFileCreate(). But it left that function in place, as well as a bunch of comments describing how things worked for the non-temp-file case. At best, that's now a source of confusion. So remove the long-since-commented-out function and change relevant comments. I (tgl) wanted to rename BufFileCreateTemp() to BufFileCreate(), but that seems not to be the consensus position, so leave it as-is. 
In passing, fix commit f0828b2fc's failure to update BufFileSeek's comment to match the change of its argument type from long to off_t. (I think that might actually have been intentional at the time, but now that 64-bit off_t is nearly universal, it looks anachronistic.) Thomas Munro and Tom Lane Discussion: https://postgr.es/m/E1eFVyl-0008J1-RO@gemulon.postgresql.org --- src/backend/storage/file/buffile.c | 38 +++++++----------------------- src/include/storage/buffile.h | 2 +- 2 files changed, 10 insertions(+), 30 deletions(-) diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c index b527d38b05..06bf2fadbf 100644 --- a/src/backend/storage/file/buffile.c +++ b/src/backend/storage/file/buffile.c @@ -1,7 +1,7 @@ /*------------------------------------------------------------------------- * * buffile.c - * Management of large buffered files, primarily temporary files. + * Management of large buffered temporary files. * * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California @@ -20,8 +20,8 @@ * of opening/closing file descriptors. * * Note that BufFile structs are allocated with palloc(), and therefore - * will go away automatically at transaction end. If the underlying - * virtual File is made with OpenTemporaryFile, then all resources for + * will go away automatically at query/transaction end. Since the underlying + * virtual Files are made with OpenTemporaryFile, all resources for * the file are certain to be cleaned up even if processing is aborted * by ereport(ERROR). The data structures required are made in the * palloc context that was current when the BufFile was created, and @@ -45,8 +45,8 @@ /* * We break BufFiles into gigabyte-sized segments, regardless of RELSEG_SIZE. - * The reason is that we'd like large temporary BufFiles to be spread across - * multiple tablespaces when available. + * The reason is that we'd like large BufFiles to be spread across multiple + * tablespaces when available. */ #define MAX_PHYSICAL_FILESIZE 0x40000000 #define BUFFILE_SEG_SIZE (MAX_PHYSICAL_FILESIZE / BLCKSZ) @@ -175,21 +175,6 @@ BufFileCreateTemp(bool interXact) return file; } -#ifdef NOT_USED -/* - * Create a BufFile and attach it to an already-opened virtual File. - * - * This is comparable to fdopen() in stdio. This is the only way at present - * to attach a BufFile to a non-temporary file. Note that BufFiles created - * in this way CANNOT be expanded into multiple files. - */ -BufFile * -BufFileCreate(File file) -{ - return makeBufFile(file); -} -#endif - /* * Close a BufFile * @@ -202,7 +187,7 @@ BufFileClose(BufFile *file) /* flush any unwritten data */ BufFileFlush(file); - /* close the underlying file(s) (with delete if it's a temp file) */ + /* close and delete the underlying file(s) */ for (i = 0; i < file->numFiles; i++) FileClose(file->files[i]); /* release the buffer space */ @@ -225,10 +210,6 @@ BufFileLoadBuffer(BufFile *file) /* * Advance to next component file if necessary and possible. - * - * This path can only be taken if there is more than one component, so it - * won't interfere with reading a non-temp file that is over - * MAX_PHYSICAL_FILESIZE. */ if (file->curOffset >= MAX_PHYSICAL_FILESIZE && file->curFile + 1 < file->numFiles) @@ -298,8 +279,7 @@ BufFileDumpBuffer(BufFile *file) } /* - * Enforce per-file size limit only for temp files, else just try to - * write as much as asked... + * Determine how much we need to write into this file. 
*/ bytestowrite = file->nbytes - wpos; availbytes = MAX_PHYSICAL_FILESIZE - file->curOffset; @@ -471,8 +451,8 @@ BufFileFlush(BufFile *file) * BufFileSeek * * Like fseek(), except that target position needs two values in order to - * work when logical filesize exceeds maximum value representable by long. - * We do not support relative seeks across more than LONG_MAX, however. + * work when logical filesize exceeds maximum value representable by off_t. + * We do not support relative seeks across more than that, however. * * Result is 0 if OK, EOF if not. Logical position is not moved if an * impossible seek is attempted. diff --git a/src/include/storage/buffile.h b/src/include/storage/buffile.h index fafcb3f089..640908717d 100644 --- a/src/include/storage/buffile.h +++ b/src/include/storage/buffile.h @@ -1,7 +1,7 @@ /*------------------------------------------------------------------------- * * buffile.h - * Management of large buffered files, primarily temporary files. + * Management of large buffered temporary files. * * The BufFile routines provide a partial replacement for stdio atop * virtual file descriptors managed by fd.c. Currently they only support From 9b63c13f0a213bfb38bb70a3df3f28cc1f496b30 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 25 Nov 2017 14:15:48 -0500 Subject: [PATCH 0607/1087] Repair failure with SubPlans in multi-row VALUES lists. When nodeValuesscan.c was written, it was impossible to have a SubPlan in VALUES --- any sub-SELECT there would have to be uncorrelated and thereby would produce an InitPlan instead. We therefore took a shortcut in the logic that throws away a ValuesScan's per-row expression evaluation data structures. This was broken by the introduction of LATERAL however; a sub-SELECT containing a lateral reference produces a correlated SubPlan. The cleanest fix for this would be to give up the optimization of discarding the expression eval state. But that still seems pretty unappetizing for long VALUES lists. It seems to work to just prevent the subexpressions from hooking into the ValuesScan node's subPlan list, so let's do that and see how well it works. (If this breaks, due to additional connections between the subexpressions and the outer query structures, we might consider compromises like throwing away data only for VALUES rows not containing SubPlans.) Per bug #14924 from Christian Duta. Back-patch to 9.3 where LATERAL was introduced. Discussion: https://postgr.es/m/20171124120836.1463.5310@wrigleys.postgresql.org --- src/backend/executor/nodeValuesscan.c | 21 ++++++++++++----- src/test/regress/expected/subselect.out | 30 +++++++++++++++++++++++++ src/test/regress/sql/subselect.sql | 10 +++++++++ 3 files changed, 56 insertions(+), 5 deletions(-) diff --git a/src/backend/executor/nodeValuesscan.c b/src/backend/executor/nodeValuesscan.c index 1a72bfe160..47ba9faa78 100644 --- a/src/backend/executor/nodeValuesscan.c +++ b/src/backend/executor/nodeValuesscan.c @@ -92,6 +92,7 @@ ValuesNext(ValuesScanState *node) if (exprlist) { MemoryContext oldContext; + List *oldsubplans; List *exprstatelist; Datum *values; bool *isnull; @@ -114,12 +115,22 @@ ValuesNext(ValuesScanState *node) oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory); /* - * Pass NULL, not my plan node, because we don't want anything in this - * transient state linking into permanent state. The only possibility - * is a SubPlan, and there shouldn't be any (any subselects in the - * VALUES list should be InitPlans). 
From d3f4e8a8a78be60f3a971f1f8ef6acc2d0576e5f Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Sat, 25 Nov 2017 14:42:10 -0500
Subject: [PATCH 0608/1087] Avoid formally-undefined use of memcpy() in hstoreUniquePairs().

hstoreUniquePairs() often called memcpy with equal source and destination
pointers.  Although this is almost surely harmless in practice, it's
undefined according to the letter of the C standard.  Some versions of
valgrind will complain about it, and some versions of libc as well
(cf. commit ad520ec4a).  Tweak the code to avoid doing that.

Noted by Tomas Vondra.  Back-patch to all supported versions because
of the hazard of libc assertions.

Discussion: https://postgr.es/m/bf84d940-90d4-de91-19dd-612e011007f4@fuzzy.cz
---
 contrib/hstore/hstore_io.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c
index e999a8e12c..7a741a779c 100644
--- a/contrib/hstore/hstore_io.c
+++ b/contrib/hstore/hstore_io.c
@@ -340,7 +340,8 @@ hstoreUniquePairs(Pairs *a, int32 l, int32 *buflen)
 			{
 				*buflen += res->keylen + ((res->isnull) ? 0 : res->vallen);
 				res++;
-				memcpy(res, ptr, sizeof(Pairs));
+				if (res != ptr)
+					memcpy(res, ptr, sizeof(Pairs));
 			}
 			ptr++;
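The guard added here is plain ISO C hygiene rather than anything
PostgreSQL-specific: memcpy's contract covers only non-overlapping
objects, and a self-copy overlaps completely.  A minimal freestanding
sketch of the same pattern, with hypothetical names:

#include <string.h>

struct pair
{
	int			keylen;
	int			vallen;
};

static void
copy_pair(struct pair *dst, const struct pair *src)
{
	/*
	 * memcpy with dst == src is formally undefined even though most
	 * implementations tolerate it, so skip the call when the element
	 * is already in place.
	 */
	if (dst != src)
		memcpy(dst, src, sizeof(struct pair));
}

Using memmove instead would also be well-defined, but the explicit
pointer test documents the intent and avoids the call entirely in the
common already-in-place case.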
From 7cce222c965dec93327692dbac9c340a6220afe9 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Sat, 25 Nov 2017 15:30:11 -0500
Subject: [PATCH 0609/1087] Replace raw timezone source data with IANA's new compact format.

Traditionally IANA has distributed their timezone data in pure source
form, replete with extensive historical comments.  As of release 2017c,
they've added a compact single-file format that omits comments and
abbreviates command keywords.  This form is way shorter than the pure
source, even before considering its allegedly better compressibility.
Hence, let's distribute the data in that form rather than pure source.

I'm pushing this now, rather than at the next timezone database update,
so that it's easy to confirm that this data file produces compiled zic
output that's identical to what we were getting before.

Discussion: https://postgr.es/m/1915.1511210334@sss.pgh.pa.us
---
 src/timezone/Makefile          |    6 +-
 src/timezone/README            |   19 +-
 src/timezone/data/africa       | 1225 ----------
 src/timezone/data/antarctica   |  340 ---
 src/timezone/data/asia         | 3142 ------------------------
 src/timezone/data/australasia  | 1793 --------------
 src/timezone/data/backward     |  128 -
 src/timezone/data/backzone     |  675 ------
 src/timezone/data/etcetera     |   78 -
 src/timezone/data/europe       | 3845 -----------------------------
 src/timezone/data/factory      |   10 -
 src/timezone/data/northamerica | 3419 --------------------------
 src/timezone/data/pacificnew   |   27 -
 src/timezone/data/southamerica | 1793 --------------
 src/timezone/data/systemv      |   37 -
 src/timezone/data/tzdata.zi    | 4146 ++++++++++++++++++++++++++++++++
 16 files changed, 4160 insertions(+), 16523 deletions(-)
 delete mode 100644 src/timezone/data/africa
 delete mode 100644 src/timezone/data/antarctica
 delete mode 100644 src/timezone/data/asia
 delete mode 100644 src/timezone/data/australasia
 delete mode 100644 src/timezone/data/backward
 delete mode 100644 src/timezone/data/backzone
 delete mode 100644 src/timezone/data/etcetera
 delete mode 100644 src/timezone/data/europe
 delete mode 100644 src/timezone/data/factory
 delete mode 100644 src/timezone/data/northamerica
 delete mode 100644 src/timezone/data/pacificnew
 delete mode 100644 src/timezone/data/southamerica
 delete mode 100644 src/timezone/data/systemv
 create mode 100644 src/timezone/data/tzdata.zi

diff --git a/src/timezone/Makefile b/src/timezone/Makefile
index bed5727f53..87493da8b3 100644
--- a/src/timezone/Makefile
+++ b/src/timezone/Makefile
@@ -21,10 +21,8 @@ OBJS= localtime.o strftime.o pgtz.o
 # files needed to build zic utility program
 ZICOBJS= zic.o $(WIN32RES)
 
-# timezone data files
-TZDATA = africa antarctica asia australasia europe northamerica southamerica \
-	pacificnew etcetera factory backward systemv
-TZDATAFILES = $(TZDATA:%=$(srcdir)/data/%)
+# we now distribute the timezone data as a single file
+TZDATAFILES = $(srcdir)/data/tzdata.zi
 
 # which zone should determine the DST rules (not the specific UTC offset!)
 # for POSIX-style timezone specs
diff --git a/src/timezone/README b/src/timezone/README
index fc93e940f9..7b5573183d 100644
--- a/src/timezone/README
+++ b/src/timezone/README
@@ -2,10 +2,12 @@ src/timezone/README
 
 This is a PostgreSQL adapted version of the IANA timezone library from
 
-	http://www.iana.org/time-zones
+	https://www.iana.org/time-zones
 
-The latest versions of both the tzdata and tzcode tarballs are normally
-available right from that page.
Historical versions can be found +The latest version of the timezone data and library source code is +available right from that page. It's best to get the merged file +tzdb-NNNNX.tar.lz, since the other archive formats omit tzdata.zi. +Historical versions, as well as release announcements, can be found elsewhere on the site. Since time zone rules change frequently in some parts of the world, @@ -17,11 +19,14 @@ changes that might affect interpretation of the data files. Time Zone data ============== -The data files under data/ are an exact copy of the latest tzdata set, -except that we omit some files that are not of interest for our purposes. +We distribute the time zone source data as-is under src/timezone/data/. +Currently, we distribute just the abbreviated single-file format +"tzdata.zi", to reduce the size of our tarballs as well as churn +in our git repo. Feeding that file to zic produces the same compiled +output as feeding the bulkier individual data files would do. -While the files under data/ can just be duplicated when updating, manual -effort is needed to update the time zone abbreviation lists under tznames/. +While data/tzdata.zi can just be duplicated when updating, manual effort +is needed to update the time zone abbreviation lists under tznames/. These need to be changed whenever new abbreviations are invented or the UTC offset associated with an existing abbreviation changes. To detect if this has happened, after installing new files under data/ do diff --git a/src/timezone/data/africa b/src/timezone/data/africa deleted file mode 100644 index 3a60bc27d0..0000000000 --- a/src/timezone/data/africa +++ /dev/null @@ -1,1225 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# This file is by no means authoritative; if you think you know better, -# go ahead and edit the file (and please send any changes to -# tz@iana.org for general use in the future). For more, please see -# the file CONTRIBUTING in the tz distribution. - -# From Paul Eggert (2017-02-20): -# -# Unless otherwise specified, the source for data through 1990 is: -# Thomas G. Shanks and Rique Pottenger, The International Atlas (6th edition), -# San Diego: ACS Publications, Inc. (2003). -# Unfortunately this book contains many errors and cites no sources. -# -# Many years ago Gwillim Law wrote that a good source -# for time zone data was the International Air Transport -# Association's Standard Schedules Information Manual (IATA SSIM), -# published semiannually. Law sent in several helpful summaries -# of the IATA's data after 1990. Except where otherwise noted, -# IATA SSIM is the source for entries after 1990. -# -# Another source occasionally used is Edward W. Whitman, World Time Differences, -# Whitman Publishing Co, 2 Niagara Av, Ealing, London (undated), which -# I found in the UCLA library. -# -# For data circa 1899, a common source is: -# Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94. -# https://www.jstor.org/stable/1774359 -# -# A reliable and entertaining source about time zones is -# Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997). -# -# European-style abbreviations are commonly used along the Mediterranean. -# For sub-Saharan Africa abbreviations were less standardized. 
-# Previous editions of this database used WAT, CAT, SAT, and EAT -# for UT +00 through +03, respectively, -# but in 1997 Mark R V Murray reported that -# 'SAST' is the official abbreviation for +02 in the country of South Africa, -# 'CAT' is commonly used for +02 in countries north of South Africa, and -# 'WAT' is probably the best name for +01, as the common phrase for -# the area that includes Nigeria is "West Africa". -# -# To summarize, the following abbreviations seemed to have some currency: -# +00 GMT Greenwich Mean Time -# +02 CAT Central Africa Time -# +02 SAST South Africa Standard Time -# and Murray suggested the following abbreviation: -# +01 WAT West Africa Time -# Murray's suggestion seems to have caught on in news reports and the like. -# I vaguely recall 'WAT' also being used for -01 in the past but -# cannot now come up with solid citations. -# -# I invented the following abbreviations; corrections are welcome! -# +02 WAST West Africa Summer Time -# +03 CAST Central Africa Summer Time (no longer used) -# +03 SAST South Africa Summer Time (no longer used) -# +03 EAT East Africa Time -# 'EAT' also seems to have caught on; the others are rare but are paired -# with better-attested non-DST abbreviations. - -# Algeria -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Algeria 1916 only - Jun 14 23:00s 1:00 S -Rule Algeria 1916 1919 - Oct Sun>=1 23:00s 0 - -Rule Algeria 1917 only - Mar 24 23:00s 1:00 S -Rule Algeria 1918 only - Mar 9 23:00s 1:00 S -Rule Algeria 1919 only - Mar 1 23:00s 1:00 S -Rule Algeria 1920 only - Feb 14 23:00s 1:00 S -Rule Algeria 1920 only - Oct 23 23:00s 0 - -Rule Algeria 1921 only - Mar 14 23:00s 1:00 S -Rule Algeria 1921 only - Jun 21 23:00s 0 - -Rule Algeria 1939 only - Sep 11 23:00s 1:00 S -Rule Algeria 1939 only - Nov 19 1:00 0 - -Rule Algeria 1944 1945 - Apr Mon>=1 2:00 1:00 S -Rule Algeria 1944 only - Oct 8 2:00 0 - -Rule Algeria 1945 only - Sep 16 1:00 0 - -Rule Algeria 1971 only - Apr 25 23:00s 1:00 S -Rule Algeria 1971 only - Sep 26 23:00s 0 - -Rule Algeria 1977 only - May 6 0:00 1:00 S -Rule Algeria 1977 only - Oct 21 0:00 0 - -Rule Algeria 1978 only - Mar 24 1:00 1:00 S -Rule Algeria 1978 only - Sep 22 3:00 0 - -Rule Algeria 1980 only - Apr 25 0:00 1:00 S -Rule Algeria 1980 only - Oct 31 2:00 0 - -# Shanks & Pottenger give 0:09:20 for Paris Mean Time; go with Howse's -# more precise 0:09:21. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Algiers 0:12:12 - LMT 1891 Mar 15 0:01 - 0:09:21 - PMT 1911 Mar 11 # Paris Mean Time - 0:00 Algeria WE%sT 1940 Feb 25 2:00 - 1:00 Algeria CE%sT 1946 Oct 7 - 0:00 - WET 1956 Jan 29 - 1:00 - CET 1963 Apr 14 - 0:00 Algeria WE%sT 1977 Oct 21 - 1:00 Algeria CE%sT 1979 Oct 26 - 0:00 Algeria WE%sT 1981 May - 1:00 - CET - -# Angola -# Benin -# See Africa/Lagos. - -# Botswana -# See Africa/Maputo. - -# Burkina Faso -# See Africa/Abidjan. - -# Burundi -# See Africa/Maputo. - -# Cameroon -# See Africa/Lagos. - -# Cape Verde / Cabo Verde -# -# Shanks gives 1907 for the transition to +02. -# Perhaps the 1911-05-26 Portuguese decree -# https://dre.pt/pdf1sdip/1911/05/12500/23132313.pdf -# merely made it official? -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Atlantic/Cape_Verde -1:34:04 - LMT 1907 # Praia - -2:00 - -02 1942 Sep - -2:00 1:00 -01 1945 Oct 15 - -2:00 - -02 1975 Nov 25 2:00 - -1:00 - -01 - -# Central African Republic -# See Africa/Lagos. 
- -# Chad -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Ndjamena 1:00:12 - LMT 1912 # N'Djamena - 1:00 - WAT 1979 Oct 14 - 1:00 1:00 WAST 1980 Mar 8 - 1:00 - WAT - -# Comoros -# See Africa/Nairobi. - -# Democratic Republic of the Congo -# See Africa/Lagos for the western part and Africa/Maputo for the eastern. - -# Republic of the Congo -# See Africa/Lagos. - -# Côte d'Ivoire / Ivory Coast -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Abidjan -0:16:08 - LMT 1912 - 0:00 - GMT -Link Africa/Abidjan Africa/Bamako # Mali -Link Africa/Abidjan Africa/Banjul # Gambia -Link Africa/Abidjan Africa/Conakry # Guinea -Link Africa/Abidjan Africa/Dakar # Senegal -Link Africa/Abidjan Africa/Freetown # Sierra Leone -Link Africa/Abidjan Africa/Lome # Togo -Link Africa/Abidjan Africa/Nouakchott # Mauritania -Link Africa/Abidjan Africa/Ouagadougou # Burkina Faso -Link Africa/Abidjan Africa/Sao_Tome # São Tomé and Príncipe -Link Africa/Abidjan Atlantic/St_Helena # St Helena - -# Djibouti -# See Africa/Nairobi. - -############################################################################### - -# Egypt - -# Milne says Cairo used 2:05:08.9, the local mean time of the Abbasizeh -# observatory; round to nearest. Milne also says that the official time for -# Egypt was mean noon at the Great Pyramid, 2:04:30.5, but apparently this -# did not apply to Cairo, Alexandria, or Port Said. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Egypt 1940 only - Jul 15 0:00 1:00 S -Rule Egypt 1940 only - Oct 1 0:00 0 - -Rule Egypt 1941 only - Apr 15 0:00 1:00 S -Rule Egypt 1941 only - Sep 16 0:00 0 - -Rule Egypt 1942 1944 - Apr 1 0:00 1:00 S -Rule Egypt 1942 only - Oct 27 0:00 0 - -Rule Egypt 1943 1945 - Nov 1 0:00 0 - -Rule Egypt 1945 only - Apr 16 0:00 1:00 S -Rule Egypt 1957 only - May 10 0:00 1:00 S -Rule Egypt 1957 1958 - Oct 1 0:00 0 - -Rule Egypt 1958 only - May 1 0:00 1:00 S -Rule Egypt 1959 1981 - May 1 1:00 1:00 S -Rule Egypt 1959 1965 - Sep 30 3:00 0 - -Rule Egypt 1966 1994 - Oct 1 3:00 0 - -Rule Egypt 1982 only - Jul 25 1:00 1:00 S -Rule Egypt 1983 only - Jul 12 1:00 1:00 S -Rule Egypt 1984 1988 - May 1 1:00 1:00 S -Rule Egypt 1989 only - May 6 1:00 1:00 S -Rule Egypt 1990 1994 - May 1 1:00 1:00 S -# IATA (after 1990) says transitions are at 0:00. -# Go with IATA starting in 1995, except correct 1995 entry from 09-30 to 09-29. - -# From Alexander Krivenyshev (2011-04-20): -# "...Egypt's interim cabinet decided on Wednesday to cancel daylight -# saving time after a poll posted on its website showed the majority of -# Egyptians would approve the cancellation." -# -# Egypt to cancel daylight saving time -# http://www.almasryalyoum.com/en/node/407168 -# or -# http://www.worldtimezone.com/dst_news/dst_news_egypt04.html -Rule Egypt 1995 2010 - Apr lastFri 0:00s 1:00 S -Rule Egypt 1995 2005 - Sep lastThu 24:00 0 - -# From Steffen Thorsen (2006-09-19): -# The Egyptian Gazette, issue 41,090 (2006-09-18), page 1, reports: -# Egypt will turn back clocks by one hour at the midnight of Thursday -# after observing the daylight saving time since May. -# http://news.gom.com.eg/gazette/pdf/2006/09/18/01.pdf -Rule Egypt 2006 only - Sep 21 24:00 0 - -# From Dirk Losch (2007-08-14): -# I received a mail from an airline which says that the daylight -# saving time in Egypt will end in the night of 2007-09-06 to 2007-09-07. 
-# From Jesper Nørgaard Welen (2007-08-15): [The following agree:] -# http://www.nentjes.info/Bill/bill5.htm -# https://www.timeanddate.com/worldclock/city.html?n=53 -# From Steffen Thorsen (2007-09-04): The official information...: -# http://www.sis.gov.eg/En/EgyptOnline/Miscellaneous/000002/0207000000000000001580.htm -Rule Egypt 2007 only - Sep Thu>=1 24:00 0 - -# From Abdelrahman Hassan (2007-09-06): -# Due to the Hijri (lunar Islamic calendar) year being 11 days shorter -# than the year of the Gregorian calendar, Ramadan shifts earlier each -# year. This year it will be observed September 13 (September is quite -# hot in Egypt), and the idea is to make fasting easier for workers by -# shifting business hours one hour out of daytime heat. Consequently, -# unless discontinued, next DST may end Thursday 28 August 2008. -# From Paul Eggert (2007-08-17): -# For lack of better info, assume the new rule is last Thursday in August. - -# From Petr Machata (2009-04-06): -# The following appeared in Red Hat bugzilla[1] (edited): -# -# > $ zdump -v /usr/share/zoneinfo/Africa/Cairo | grep 2009 -# > /usr/share/zoneinfo/Africa/Cairo Thu Apr 23 21:59:59 2009 UTC = Thu = -# Apr 23 -# > 23:59:59 2009 EET isdst=0 gmtoff=7200 -# > /usr/share/zoneinfo/Africa/Cairo Thu Apr 23 22:00:00 2009 UTC = Fri = -# Apr 24 -# > 01:00:00 2009 EEST isdst=1 gmtoff=10800 -# > /usr/share/zoneinfo/Africa/Cairo Thu Aug 27 20:59:59 2009 UTC = Thu = -# Aug 27 -# > 23:59:59 2009 EEST isdst=1 gmtoff=10800 -# > /usr/share/zoneinfo/Africa/Cairo Thu Aug 27 21:00:00 2009 UTC = Thu = -# Aug 27 -# > 23:00:00 2009 EET isdst=0 gmtoff=7200 -# -# > end date should be Thu Sep 24 2009 (Last Thursday in September at 23:59= -# :59) -# > http://support.microsoft.com/kb/958729/ -# -# timeanddate[2] and another site I've found[3] also support that. -# -# [1] https://bugzilla.redhat.com/show_bug.cgi?id=492263 -# [2] https://www.timeanddate.com/worldclock/clockchange.html?n=53 -# [3] https://wwp.greenwichmeantime.com/time-zone/africa/egypt/ - -# From Arthur David Olson (2009-04-20): -# In 2009 (and for the next several years), Ramadan ends before the fourth -# Thursday in September; Egypt is expected to revert to the last Thursday -# in September. - -# From Steffen Thorsen (2009-08-11): -# We have been able to confirm the August change with the Egyptian Cabinet -# Information and Decision Support Center: -# https://www.timeanddate.com/news/time/egypt-dst-ends-2009.html -# -# The Middle East News Agency -# https://www.mena.org.eg/index.aspx -# also reports "Egypt starts winter time on August 21" -# today in article numbered "71, 11/08/2009 12:25 GMT." -# Only the title above is available without a subscription to their service, -# and can be found by searching for "winter" in their search engine -# (at least today). - -# From Alexander Krivenyshev (2010-07-20): -# According to News from Egypt - Al-Masry Al-Youm Egypt's cabinet has -# decided that Daylight Saving Time will not be used in Egypt during -# Ramadan. 
-# -# Arabic translation: -# "Clocks to go back during Ramadan - and then forward again" -# http://www.almasryalyoum.com/en/news/clocks-go-back-during-ramadan-and-then-forward-again -# http://www.worldtimezone.com/dst_news/dst_news_egypt02.html - -# From Ahmad El-Dardiry (2014-05-07): -# Egypt is to change back to Daylight system on May 15 -# http://english.ahram.org.eg/NewsContent/1/64/100735/Egypt/Politics-/Egypts-government-to-reapply-daylight-saving-time-.aspx - -# From Gunther Vermier (2014-05-13): -# our Egypt office confirms that the change will be at 15 May "midnight" (24:00) - -# From Imed Chihi (2014-06-04): -# We have finally "located" a precise official reference about the DST changes -# in Egypt. The Ministers Cabinet decision is explained at -# http://www.cabinet.gov.eg/Media/CabinetMeetingsDetails.aspx?id=347 ... -# [T]his (Arabic) site is not accessible outside Egypt, but the page ... -# translates into: "With regard to daylight saving time, it is scheduled to -# take effect at exactly twelve o'clock this evening, Thursday, 15 MAY 2014, -# to be suspended by twelve o'clock on the evening of Thursday, 26 JUN 2014, -# and re-established again at the end of the month of Ramadan, at twelve -# o'clock on the evening of Thursday, 31 JUL 2014." This statement has been -# reproduced by other (more accessible) sites[, e.g.,]... -# http://elgornal.net/news/news.aspx?id=4699258 - -# From Paul Eggert (2014-06-04): -# Sarah El Deeb and Lee Keath of AP report that the Egyptian government says -# the change is because of blackouts in Cairo, even though Ahram Online (cited -# above) says DST had no affect on electricity consumption. There is -# no information about when DST will end this fall. See: -# http://abcnews.go.com/International/wireStory/el-sissi-pushes-egyptians-line-23614833 - -# From Steffen Thorsen (2015-04-08): -# Egypt will start DST on midnight after Thursday, April 30, 2015. -# This is based on a law (no 35) from May 15, 2014 saying it starts the last -# Thursday of April.... Clocks will still be turned back for Ramadan, but -# dates not yet announced.... -# http://almogaz.com/news/weird-news/2015/04/05/1947105 ... -# https://www.timeanddate.com/news/time/egypt-starts-dst-2015.html - -# From Ahmed Nazmy (2015-04-20): -# Egypt's ministers cabinet just announced ... that it will cancel DST at -# least for 2015. -# -# From Tim Parenti (2015-04-20): -# http://english.ahram.org.eg/WriterArticles/NewsContentP/1/128195/Egypt/No-daylight-saving-this-summer-Egypts-prime-minist.aspx -# "Egypt's cabinet agreed on Monday not to switch clocks for daylight saving -# time this summer, and carry out studies on the possibility of canceling the -# practice altogether in future years." -# -# From Paul Eggert (2015-04-24): -# Yesterday the office of Egyptian President El-Sisi announced his -# decision to abandon DST permanently. See Ahram Online 2015-04-24. -# http://english.ahram.org.eg/NewsContent/1/64/128509/Egypt/Politics-/Sisi-cancels-daylight-saving-time-in-Egypt.aspx - -# From Steffen Thorsen (2016-04-29): -# Egypt will have DST from July 7 until the end of October.... 
-# http://english.ahram.org.eg/NewsContentP/1/204655/Egypt/Daylight-savings-time-returning-to-Egypt-on--July.aspx -# From Mina Samuel (2016-07-04): -# Egyptian government took the decision to cancel the DST, - -Rule Egypt 2008 only - Aug lastThu 24:00 0 - -Rule Egypt 2009 only - Aug 20 24:00 0 - -Rule Egypt 2010 only - Aug 10 24:00 0 - -Rule Egypt 2010 only - Sep 9 24:00 1:00 S -Rule Egypt 2010 only - Sep lastThu 24:00 0 - -Rule Egypt 2014 only - May 15 24:00 1:00 S -Rule Egypt 2014 only - Jun 26 24:00 0 - -Rule Egypt 2014 only - Jul 31 24:00 1:00 S -Rule Egypt 2014 only - Sep lastThu 24:00 0 - - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Cairo 2:05:09 - LMT 1900 Oct - 2:00 Egypt EE%sT - -# Equatorial Guinea -# See Africa/Lagos. - -# Eritrea -# Ethiopia -# See Africa/Nairobi. - -# Gabon -# See Africa/Lagos. - -# Gambia -# See Africa/Abidjan. - -# Ghana -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -# Whitman says DST was observed from 1931 to "the present"; -# Shanks & Pottenger say 1936 to 1942; -# and September 1 to January 1 is given by: -# Scott Keltie J, Epstein M (eds), The Statesman's Year-Book, -# 57th ed. Macmillan, London (1920), OCLC 609408015, pp xxviii. -# For lack of better info, assume DST was observed from 1920 to 1942. -Rule Ghana 1920 1942 - Sep 1 0:00 0:20 GHST -Rule Ghana 1920 1942 - Dec 31 0:00 0 GMT -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Accra -0:00:52 - LMT 1918 - 0:00 Ghana GMT/+0020 - -# Guinea -# See Africa/Abidjan. - -# Guinea-Bissau -# -# Shanks gives 1911-05-26 for the transition to WAT, -# evidently confusing the date of the Portuguese decree -# https://dre.pt/pdf1sdip/1911/05/12500/23132313.pdf -# with the date that it took effect, namely 1912-01-01. -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Bissau -1:02:20 - LMT 1912 Jan 1 - -1:00 - -01 1975 - 0:00 - GMT - -# Kenya -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Nairobi 2:27:16 - LMT 1928 Jul - 3:00 - EAT 1930 - 2:30 - +0230 1940 - 2:45 - +0245 1960 - 3:00 - EAT -Link Africa/Nairobi Africa/Addis_Ababa # Ethiopia -Link Africa/Nairobi Africa/Asmara # Eritrea -Link Africa/Nairobi Africa/Dar_es_Salaam # Tanzania -Link Africa/Nairobi Africa/Djibouti -Link Africa/Nairobi Africa/Kampala # Uganda -Link Africa/Nairobi Africa/Mogadishu # Somalia -Link Africa/Nairobi Indian/Antananarivo # Madagascar -Link Africa/Nairobi Indian/Comoro -Link Africa/Nairobi Indian/Mayotte - -# Lesotho -# See Africa/Johannesburg. - -# Liberia -# -# From Paul Eggert (2017-03-02): -# -# The Nautical Almanac for the Year 1970, p 264, is the source for -0:44:30. -# -# In 1972 Liberia was the last country to switch from a UTC offset -# that was not a multiple of 15 or 20 minutes. The 1972 change was on -# 1972-01-07, according to an entry dated 1972-01-04 on p 330 of: -# Presidential Papers: First year of the administration of -# President William R. Tolbert, Jr., July 23, 1971-July 31, 1972. -# Monrovia: Executive Mansion. -# -# Use the abbreviation "MMT" before 1972, as the more-accurate numeric -# abbreviation "-004430" would be one byte over the POSIX limit. -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Monrovia -0:43:08 - LMT 1882 - -0:43:08 - MMT 1919 Mar # Monrovia Mean Time - -0:44:30 - MMT 1972 Jan 7 # approximately MMT - 0:00 - GMT - -############################################################################### - -# Libya - -# From Even Scharning (2012-11-10): -# Libya set their time one hour back at 02:00 on Saturday November 10. 
-# https://www.libyaherald.com/2012/11/04/clocks-to-go-back-an-hour-on-saturday/ -# Here is an official source [in Arabic]: http://ls.ly/fb6Yc -# -# Steffen Thorsen forwarded a translation (2012-11-10) in -# https://mm.icann.org/pipermail/tz/2012-November/018451.html -# -# From Tim Parenti (2012-11-11): -# Treat the 2012-11-10 change as a zone change from UTC+2 to UTC+1. -# The DST rules planned for 2013 and onward roughly mirror those of Europe -# (either two days before them or five days after them, so as to fall on -# lastFri instead of lastSun). - -# From Even Scharning (2013-10-25): -# The scheduled end of DST in Libya on Friday, October 25, 2013 was -# cancelled yesterday.... -# https://www.libyaherald.com/2013/10/24/correction-no-time-change-tomorrow/ -# -# From Paul Eggert (2013-10-25): -# For now, assume they're reverting to the pre-2012 rules of permanent UT +02. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Libya 1951 only - Oct 14 2:00 1:00 S -Rule Libya 1952 only - Jan 1 0:00 0 - -Rule Libya 1953 only - Oct 9 2:00 1:00 S -Rule Libya 1954 only - Jan 1 0:00 0 - -Rule Libya 1955 only - Sep 30 0:00 1:00 S -Rule Libya 1956 only - Jan 1 0:00 0 - -Rule Libya 1982 1984 - Apr 1 0:00 1:00 S -Rule Libya 1982 1985 - Oct 1 0:00 0 - -Rule Libya 1985 only - Apr 6 0:00 1:00 S -Rule Libya 1986 only - Apr 4 0:00 1:00 S -Rule Libya 1986 only - Oct 3 0:00 0 - -Rule Libya 1987 1989 - Apr 1 0:00 1:00 S -Rule Libya 1987 1989 - Oct 1 0:00 0 - -Rule Libya 1997 only - Apr 4 0:00 1:00 S -Rule Libya 1997 only - Oct 4 0:00 0 - -Rule Libya 2013 only - Mar lastFri 1:00 1:00 S -Rule Libya 2013 only - Oct lastFri 2:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Tripoli 0:52:44 - LMT 1920 - 1:00 Libya CE%sT 1959 - 2:00 - EET 1982 - 1:00 Libya CE%sT 1990 May 4 -# The 1996 and 1997 entries are from Shanks & Pottenger; -# the IATA SSIM data entries contain some obvious errors. - 2:00 - EET 1996 Sep 30 - 1:00 Libya CE%sT 1997 Oct 4 - 2:00 - EET 2012 Nov 10 2:00 - 1:00 Libya CE%sT 2013 Oct 25 2:00 - 2:00 - EET - -# Madagascar -# See Africa/Nairobi. - -# Malawi -# See Africa/Maputo. - -# Mali -# Mauritania -# See Africa/Abidjan. - -# Mauritius - -# From Steffen Thorsen (2008-06-25): -# Mauritius plans to observe DST from 2008-11-01 to 2009-03-31 on a trial -# basis.... -# It seems that Mauritius observed daylight saving time from 1982-10-10 to -# 1983-03-20 as well, but that was not successful.... -# https://www.timeanddate.com/news/time/mauritius-daylight-saving-time.html - -# From Alex Krivenyshev (2008-06-25): -# http://economicdevelopment.gov.mu/portal/site/Mainhomepage/menuitem.a42b24128104d9845dabddd154508a0c/?content_id=0a7cee8b5d69a110VgnVCM1000000a04a8c0RCRD - -# From Arthur David Olson (2008-06-30): -# The www.timeanddate.com article cited by Steffen Thorsen notes that "A -# final decision has yet to be made on the times that daylight saving -# would begin and end on these dates." As a place holder, use midnight. - -# From Paul Eggert (2008-06-30): -# Follow Thorsen on DST in 1982/1983, instead of Shanks & Pottenger. - -# From Steffen Thorsen (2008-07-10): -# According to -# http://www.lexpress.mu/display_article.php?news_id=111216 -# (in French), Mauritius will start and end their DST a few days earlier -# than previously announced (2008-11-01 to 2009-03-31). The new start -# date is 2008-10-26 at 02:00 and the new end date is 2009-03-27 (no time -# given, but it is probably at either 2 or 3 wall clock time). 
-# -# A little strange though, since the article says that they moved the date -# to align itself with Europe and USA which also change time on that date, -# but that means they have not paid attention to what happened in -# USA/Canada last year (DST ends first Sunday in November). I also wonder -# why that they end on a Friday, instead of aligning with Europe which -# changes two days later. - -# From Alex Krivenyshev (2008-07-11): -# Seems that English language article "The revival of daylight saving -# time: Energy conservation?"- No. 16578 (07/11/2008) was originally -# published on Monday, June 30, 2008... -# -# I guess that article in French "Le gouvernement avance l'introduction -# de l'heure d'été" stating that DST in Mauritius starting on October 26 -# and ending on March 27, 2009 is the most recent one.... -# http://www.worldtimezone.com/dst_news/dst_news_mauritius02.html - -# From Riad M. Hossen Ally (2008-08-03): -# The Government of Mauritius weblink -# http://www.gov.mu/portal/site/pmosite/menuitem.4ca0efdee47462e7440a600248a521ca/?content_id=4728ca68b2a5b110VgnVCM1000000a04a8c0RCRD -# Cabinet Decision of July 18th, 2008 states as follows: -# -# 4. ...Cabinet has agreed to the introduction into the National Assembly -# of the Time Bill which provides for the introduction of summer time in -# Mauritius. The summer time period which will be of one hour ahead of -# the standard time, will be aligned with that in Europe and the United -# States of America. It will start at two o'clock in the morning on the -# last Sunday of October and will end at two o'clock in the morning on -# the last Sunday of March the following year. The summer time for the -# year 2008-2009 will, therefore, be effective as from 26 October 2008 -# and end on 29 March 2009. - -# From Ed Maste (2008-10-07): -# THE TIME BILL (No. XXVII of 2008) Explanatory Memorandum states the -# beginning / ending of summer time is 2 o'clock standard time in the -# morning of the last Sunday of October / last Sunday of March. -# http://www.gov.mu/portal/goc/assemblysite/file/bill2708.pdf - -# From Steffen Thorsen (2009-06-05): -# According to several sources, Mauritius will not continue to observe -# DST the coming summer... -# -# Some sources, in French: -# http://www.defimedia.info/news/946/Rashid-Beebeejaun-:-%C2%AB-L%E2%80%99heure-d%E2%80%99%C3%A9t%C3%A9-ne-sera-pas-appliqu%C3%A9e-cette-ann%C3%A9e-%C2%BB -# http://lexpress.mu/Story/3398~Beebeejaun---Les-objectifs-d-%C3%A9conomie-d-%C3%A9nergie-de-l-heure-d-%C3%A9t%C3%A9-ont-%C3%A9t%C3%A9-atteints- -# -# Our wrap-up: -# https://www.timeanddate.com/news/time/mauritius-dst-will-not-repeat.html - -# From Arthur David Olson (2009-07-11): -# The "mauritius-dst-will-not-repeat" wrapup includes this: -# "The trial ended on March 29, 2009, when the clocks moved back by one hour -# at 2am (or 02:00) local time..." - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Mauritius 1982 only - Oct 10 0:00 1:00 S -Rule Mauritius 1983 only - Mar 21 0:00 0 - -Rule Mauritius 2008 only - Oct lastSun 2:00 1:00 S -Rule Mauritius 2009 only - Mar lastSun 2:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis - 4:00 Mauritius +04/+05 -# Agalega Is, Rodriguez -# no information; probably like Indian/Mauritius - -# Mayotte -# See Africa/Nairobi. - -# Morocco -# See the 'europe' file for Spanish Morocco (Africa/Ceuta). 
- -# From Alex Krivenyshev (2008-05-09): -# Here is an article that Morocco plan to introduce Daylight Saving Time between -# 1 June, 2008 and 27 September, 2008. -# -# "... Morocco is to save energy by adjusting its clock during summer so it will -# be one hour ahead of GMT between 1 June and 27 September, according to -# Communication Minister and Government Spokesman, Khalid Naciri...." -# -# http://www.worldtimezone.com/dst_news/dst_news_morocco01.html -# http://en.afrik.com/news11892.html - -# From Alex Krivenyshev (2008-05-09): -# The Morocco time change can be confirmed on Morocco web site Maghreb Arabe -# Presse: -# http://www.map.ma/eng/sections/box3/morocco_shifts_to_da/view -# -# Morocco shifts to daylight time on June 1st through September 27, Govt. -# spokesman. - -# From Patrice Scattolin (2008-05-09): -# According to this article: -# https://www.avmaroc.com/actualite/heure-dete-comment-a127896.html -# (and republished here: ) -# the changes occur at midnight: -# -# Saturday night May 31st at midnight (which in French is to be -# interpreted as the night between Saturday and Sunday) -# Sunday night the 28th at midnight -# -# Seeing that the 28th is Monday, I am guessing that she intends to say -# the midnight of the 28th which is the midnight between Sunday and -# Monday, which jives with other sources that say that it's inclusive -# June 1st to Sept 27th. -# -# The decision was taken by decree *2-08-224 *but I can't find the decree -# published on the web. -# -# It's also confirmed here: -# http://www.maroc.ma/NR/exeres/FACF141F-D910-44B0-B7FA-6E03733425D1.htm -# on a government portal as being between June 1st and Sept 27th (not yet -# posted in English). -# -# The following Google query will generate many relevant hits: -# https://www.google.com/search?hl=en&q=Conseil+de+gouvernement+maroc+heure+avance&btnG=Search - -# From Steffen Thorsen (2008-08-27): -# Morocco will change the clocks back on the midnight between August 31 -# and September 1. They originally planned to observe DST to near the end -# of September: -# -# One article about it (in French): -# http://www.menara.ma/fr/Actualites/Maroc/Societe/ci.retour_a_l_heure_gmt_a_partir_du_dimanche_31_aout_a_minuit_officiel_.default -# -# We have some further details posted here: -# https://www.timeanddate.com/news/time/morocco-ends-dst-early-2008.html - -# From Steffen Thorsen (2009-03-17): -# Morocco will observe DST from 2009-06-01 00:00 to 2009-08-21 00:00 according -# to many sources, such as -# http://news.marweb.com/morocco/entertainment/morocco-daylight-saving.html -# http://www.medi1sat.ma/fr/depeche.aspx?idp=2312 -# (French) -# -# Our summary: -# https://www.timeanddate.com/news/time/morocco-starts-dst-2009.html - -# From Alexander Krivenyshev (2009-03-17): -# Here is a link to official document from Royaume du Maroc Premier Ministre, -# Ministère de la Modernisation des Secteurs Publics -# -# Under Article 1 of Royal Decree No. 455-67 of Act 23 safar 1387 (2 June 1967) -# concerning the amendment of the legal time, the Ministry of Modernization of -# Public Sectors announced that the official time in the Kingdom will be -# advanced 60 minutes from Sunday 31 May 2009 at midnight. 
-# -# http://www.mmsp.gov.ma/francais/Actualites_fr/PDF_Actualites_Fr/HeureEte_FR.pdf -# http://www.worldtimezone.com/dst_news/dst_news_morocco03.html - -# From Steffen Thorsen (2010-04-13): -# Several news media in Morocco report that the Ministry of Modernization -# of Public Sectors has announced that Morocco will have DST from -# 2010-05-02 to 2010-08-08. -# -# Example: -# http://www.lavieeco.com/actualites/4099-le-maroc-passera-a-l-heure-d-ete-gmt1-le-2-mai.html -# (French) -# Our page: -# https://www.timeanddate.com/news/time/morocco-starts-dst-2010.html - -# From Dan Abitol (2011-03-30): -# ...Rules for Africa/Casablanca are the following (24h format) -# The 3rd April 2011 at 00:00:00, [it] will be 3rd April 01:00:00 -# The 31st July 2011 at 00:59:59, [it] will be 31st July 00:00:00 -# ...Official links of change in morocco -# The change was broadcast on the FM Radio -# I ve called ANRT (telecom regulations in Morocco) at -# +212.537.71.84.00 -# http://www.anrt.net.ma/fr/ -# They said that -# http://www.map.ma/fr/sections/accueil/l_heure_legale_au_ma/view -# is the official publication to look at. -# They said that the decision was already taken. -# -# More articles in the press -# https://www.yabiladi.com/articles/details/5058/secret-l-heure-d-ete-maroc-leve.html -# http://www.lematin.ma/Actualite/Express/Article.asp?id=148923 -# http://www.lavieeco.com/actualite/Le-Maroc-passe-sur-GMT%2B1-a-partir-de-dim - -# From Petr Machata (2011-03-30): -# They have it written in English here: -# http://www.map.ma/eng/sections/home/morocco_to_spring_fo/view -# -# It says there that "Morocco will resume its standard time on July 31, -# 2011 at midnight." Now they don't say whether they mean midnight of -# wall clock time (i.e. 11pm UTC), but that's what I would assume. It has -# also been like that in the past. - -# From Alexander Krivenyshev (2012-03-09): -# According to Infomédiaire web site from Morocco (infomediaire.ma), -# on March 9, 2012, (in French) Heure légale: -# Le Maroc adopte officiellement l'heure d'été -# http://www.infomediaire.ma/news/maroc/heure-l%C3%A9gale-le-maroc-adopte-officiellement-lheure-d%C3%A9t%C3%A9 -# Governing Council adopted draft decree, that Morocco DST starts on -# the last Sunday of March (March 25, 2012) and ends on -# last Sunday of September (September 30, 2012) -# except the month of Ramadan. -# or (brief) -# http://www.worldtimezone.com/dst_news/dst_news_morocco06.html - -# From Arthur David Olson (2012-03-10): -# The infomediaire.ma source indicates that the system is to be in -# effect every year. It gives 03H00 as the "fall back" time of day; -# it lacks a "spring forward" time of day; assume 2:00 XXX. -# Wait on specifying the Ramadan exception for details about -# start date, start time of day, end date, and end time of day XXX. - -# From Christophe Tropamer (2012-03-16): -# Seen Morocco change again: -# http://www.le2uminutes.com/actualite.php -# "...à partir du dernier dimanche d'avril et non fins mars, -# comme annoncé précédemment." - -# From Milamber Space Network (2012-07-17): -# The official return to GMT is announced by the Moroccan government: -# http://www.mmsp.gov.ma/fr/actualites.aspx?id=288 [in French] -# -# Google translation, lightly edited: -# Back to the standard time of the Kingdom (GMT) -# Pursuant to Decree No. 2-12-126 issued on 26 Jumada (I) 1433 (April 18, -# 2012) and in accordance with the order of Mr. President of the -# Government No. 
3-47-12 issued on 24 Sha'ban (11 July 2012), the Ministry -# of Public Service and Administration Modernization announces the return -# of the legal time of the Kingdom (GMT) from Friday, July 20, 2012 until -# Monday, August 20, 2012. So the time will be delayed by 60 minutes from -# 3:00 am Friday, July 20, 2012 and will again be advanced by 60 minutes -# August 20, 2012 from 2:00 am. - -# From Paul Eggert (2013-03-06): -# Morocco's daylight-saving transitions due to Ramadan seem to be -# announced a bit in advance. On 2012-07-11 the Moroccan government -# announced that year's Ramadan daylight-saving transitions would be -# 2012-07-20 and 2012-08-20; see -# http://www.mmsp.gov.ma/fr/actualites.aspx?id=288 - -# From Andrew Paprocki (2013-07-02): -# Morocco announced that the year's Ramadan daylight-savings -# transitions would be 2013-07-07 and 2013-08-10; see: -# http://www.maroc.ma/en/news/morocco-suspends-daylight-saving-time-july-7-aug10 - -# From Steffen Thorsen (2013-09-28): -# Morocco extends DST by one month, on very short notice, just 1 day -# before it was going to end. There is a new decree (2.13.781) for -# this, where DST from now on goes from last Sunday of March at 02:00 -# to last Sunday of October at 03:00, similar to EU rules. Official -# source (French): -# http://www.maroc.gov.ma/fr/actualites/lhoraire-dete-gmt1-maintenu-jusquau-27-octobre-2013 -# Another source (specifying the time for start and end in the decree): -# http://www.lemag.ma/Heure-d-ete-au-Maroc-jusqu-au-27-octobre_a75620.html - -# From Sebastien Willemijns (2014-03-18): -# http://www.afriquinfos.com/articles/2014/3/18/maroc-heure-dete-avancez-tous-horloges-247891.asp - -# From Milamber Space Network (2014-06-05): -# The Moroccan government has recently announced that the country will return -# to standard time at 03:00 on Saturday, June 28, 2014 local time.... DST -# will resume again at 02:00 on Saturday, August 2, 2014.... -# http://www.mmsp.gov.ma/fr/actualites.aspx?id=586 - -# From Milamber (2015-06-08): -# (Google Translation) The hour will thus be delayed 60 minutes -# Sunday, June 14 at 3:00, the ministry said in a statement, adding -# that the time will be advanced again 60 minutes Sunday, July 19, -# 2015 at 2:00. The move comes under 2.12.126 Decree of 26 Jumada I -# 1433 (18 April 2012) and the decision of the Head of Government of -# 16 N. 3-29-15 Chaaban 1435 (4 June 2015). -# Source (french): -# https://lnt.ma/le-maroc-reculera-dune-heure-le-dimanche-14-juin/ -# -# From Milamber (2015-06-09): -# http://www.mmsp.gov.ma/fr/actualites.aspx?id=863 -# -# From Michael Deckers (2015-06-09): -# [The gov.ma announcement] would (probably) make the switch on 2015-07-19 go -# from 03:00 to 04:00 rather than from 02:00 to 03:00, as in the patch.... -# I think the patch is correct and the quoted text is wrong; the text in -# agrees -# with the patch. - -# From Paul Eggert (2015-06-08): -# For now, guess that later spring and fall transitions will use 2015's rules, -# and guess that Morocco will switch to standard time at 03:00 the last -# Sunday before Ramadan, and back to DST at 02:00 the first Sunday after -# Ramadan. To implement this, transition dates for 2016 through 2037 were -# determined by running the following program under GNU Emacs 24.3, with the -# results integrated by hand into the table below. 
-# (let ((islamic-year 1437)) -# (require 'cal-islam) -# (while (< islamic-year 1460) -# (let ((a (calendar-islamic-to-absolute (list 9 1 islamic-year))) -# (b (calendar-islamic-to-absolute (list 10 1 islamic-year))) -# (sunday 0)) -# (while (/= sunday (mod (setq a (1- a)) 7))) -# (while (/= sunday (mod b 7)) -# (setq b (1+ b))) -# (setq a (calendar-gregorian-from-absolute a)) -# (setq b (calendar-gregorian-from-absolute b)) -# (insert -# (format -# (concat "Rule\tMorocco\t%d\tonly\t-\t%s\t%2d\t 3:00\t0\t-\n" -# "Rule\tMorocco\t%d\tonly\t-\t%s\t%2d\t 2:00\t1:00\tS\n") -# (car (cdr (cdr a))) (calendar-month-name (car a) t) (car (cdr a)) -# (car (cdr (cdr b))) (calendar-month-name (car b) t) (car (cdr b))))) -# (setq islamic-year (+ 1 islamic-year)))) - -# RULE NAME FROM TO TYPE IN ON AT SAVE LETTER/S - -Rule Morocco 1939 only - Sep 12 0:00 1:00 S -Rule Morocco 1939 only - Nov 19 0:00 0 - -Rule Morocco 1940 only - Feb 25 0:00 1:00 S -Rule Morocco 1945 only - Nov 18 0:00 0 - -Rule Morocco 1950 only - Jun 11 0:00 1:00 S -Rule Morocco 1950 only - Oct 29 0:00 0 - -Rule Morocco 1967 only - Jun 3 12:00 1:00 S -Rule Morocco 1967 only - Oct 1 0:00 0 - -Rule Morocco 1974 only - Jun 24 0:00 1:00 S -Rule Morocco 1974 only - Sep 1 0:00 0 - -Rule Morocco 1976 1977 - May 1 0:00 1:00 S -Rule Morocco 1976 only - Aug 1 0:00 0 - -Rule Morocco 1977 only - Sep 28 0:00 0 - -Rule Morocco 1978 only - Jun 1 0:00 1:00 S -Rule Morocco 1978 only - Aug 4 0:00 0 - -Rule Morocco 2008 only - Jun 1 0:00 1:00 S -Rule Morocco 2008 only - Sep 1 0:00 0 - -Rule Morocco 2009 only - Jun 1 0:00 1:00 S -Rule Morocco 2009 only - Aug 21 0:00 0 - -Rule Morocco 2010 only - May 2 0:00 1:00 S -Rule Morocco 2010 only - Aug 8 0:00 0 - -Rule Morocco 2011 only - Apr 3 0:00 1:00 S -Rule Morocco 2011 only - Jul 31 0:00 0 - -Rule Morocco 2012 2013 - Apr lastSun 2:00 1:00 S -Rule Morocco 2012 only - Jul 20 3:00 0 - -Rule Morocco 2012 only - Aug 20 2:00 1:00 S -Rule Morocco 2012 only - Sep 30 3:00 0 - -Rule Morocco 2013 only - Jul 7 3:00 0 - -Rule Morocco 2013 only - Aug 10 2:00 1:00 S -Rule Morocco 2013 max - Oct lastSun 3:00 0 - -Rule Morocco 2014 2021 - Mar lastSun 2:00 1:00 S -Rule Morocco 2014 only - Jun 28 3:00 0 - -Rule Morocco 2014 only - Aug 2 2:00 1:00 S -Rule Morocco 2015 only - Jun 14 3:00 0 - -Rule Morocco 2015 only - Jul 19 2:00 1:00 S -Rule Morocco 2016 only - Jun 5 3:00 0 - -Rule Morocco 2016 only - Jul 10 2:00 1:00 S -Rule Morocco 2017 only - May 21 3:00 0 - -Rule Morocco 2017 only - Jul 2 2:00 1:00 S -Rule Morocco 2018 only - May 13 3:00 0 - -Rule Morocco 2018 only - Jun 17 2:00 1:00 S -Rule Morocco 2019 only - May 5 3:00 0 - -Rule Morocco 2019 only - Jun 9 2:00 1:00 S -Rule Morocco 2020 only - Apr 19 3:00 0 - -Rule Morocco 2020 only - May 24 2:00 1:00 S -Rule Morocco 2021 only - Apr 11 3:00 0 - -Rule Morocco 2021 only - May 16 2:00 1:00 S -Rule Morocco 2022 only - May 8 2:00 1:00 S -Rule Morocco 2023 only - Apr 23 2:00 1:00 S -Rule Morocco 2024 only - Apr 14 2:00 1:00 S -Rule Morocco 2025 only - Apr 6 2:00 1:00 S -Rule Morocco 2026 max - Mar lastSun 2:00 1:00 S -Rule Morocco 2036 only - Oct 19 3:00 0 - -Rule Morocco 2037 only - Oct 4 3:00 0 - - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Casablanca -0:30:20 - LMT 1913 Oct 26 - 0:00 Morocco WE%sT 1984 Mar 16 - 1:00 - CET 1986 - 0:00 Morocco WE%sT - -# Western Sahara -# -# From Gwillim Law (2013-10-22): -# A correspondent who is usually well informed about time zone matters -# ... says that Western Sahara observes daylight saving time, just as -# Morocco does. 
-# -# From Paul Eggert (2013-10-23): -# Assume that this has been true since Western Sahara switched to GMT, -# since most of it was then controlled by Morocco. - -Zone Africa/El_Aaiun -0:52:48 - LMT 1934 Jan # El Aaiún - -1:00 - -01 1976 Apr 14 - 0:00 Morocco WE%sT - -# Mozambique -# -# Shanks gives 1903-03-01 for the transition to CAT. -# Perhaps the 1911-05-26 Portuguese decree -# https://dre.pt/pdf1sdip/1911/05/12500/23132313.pdf -# merely made it official? -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Maputo 2:10:20 - LMT 1903 Mar - 2:00 - CAT -Link Africa/Maputo Africa/Blantyre # Malawi -Link Africa/Maputo Africa/Bujumbura # Burundi -Link Africa/Maputo Africa/Gaborone # Botswana -Link Africa/Maputo Africa/Harare # Zimbabwe -Link Africa/Maputo Africa/Kigali # Rwanda -Link Africa/Maputo Africa/Lubumbashi # E Dem. Rep. of Congo -Link Africa/Maputo Africa/Lusaka # Zambia - - -# Namibia - -# From Arthur David Olson (2017-08-09): -# The text of the "Namibia Time Act, 1994" is available online at -# www.lac.org.na/laws/1994/811.pdf -# and includes this nugget: -# Notwithstanding the provisions of subsection (2) of section 1, the -# first winter period after the commencement of this Act shall -# commence at OOhOO on Monday 21 March 1994 and shall end at 02h00 on -# Sunday 4 September 1994. - -# From Petronella Sibeene (2007-03-30): -# http://allafrica.com/stories/200703300178.html -# While the entire country changes its time, Katima Mulilo and other -# settlements in Caprivi unofficially will not because the sun there -# rises and sets earlier compared to other regions. Chief of -# Forecasting Riaan van Zyl explained that the far eastern parts of -# the country are close to 40 minutes earlier in sunrise than the rest -# of the country. -# -# From Paul Eggert (2017-02-22): -# Although the Zambezi Region (formerly known as Caprivi) informally -# observes Botswana time, we have no details about historical practice. -# In the meantime people there can use Africa/Gaborone. -# See: Immanuel S. The Namibian. 2017-02-23. -# https://www.namibian.com.na/51480/read/Time-change-divides-lawmakers - -# From Steffen Thorsen (2017-08-09): -# Namibia is going to change their time zone to what is now their DST: -# https://www.newera.com.na/2017/02/23/namibias-winter-time-might-be-repealed/ -# This video is from the government decision: -# https://www.nbc.na/news/na-passes-namibia-time-bill-repealing-1994-namibia-time-act.8665 -# We have made the assumption so far that they will change their time zone at -# the same time they would normally start DST, the first Sunday in September: -# https://www.timeanddate.com/news/time/namibia-new-time-zone.html - -# RULE NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Namibia 1994 only - Mar 21 0:00 0 - -Rule Namibia 1994 2016 - Sep Sun>=1 2:00 1:00 S -Rule Namibia 1995 2017 - Apr Sun>=1 2:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Windhoek 1:08:24 - LMT 1892 Feb 8 - 1:30 - +0130 1903 Mar - 2:00 - SAST 1942 Sep 20 2:00 - 2:00 1:00 SAST 1943 Mar 21 2:00 - 2:00 - SAST 1990 Mar 21 # independence - 2:00 - CAT 1994 Mar 21 0:00 - 1:00 Namibia WA%sT 2017 Sep 3 2:00 - 2:00 - CAT - -# Niger -# See Africa/Lagos. - -# Nigeria -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Lagos 0:13:36 - LMT 1919 Sep - 1:00 - WAT -Link Africa/Lagos Africa/Bangui # Central African Republic -Link Africa/Lagos Africa/Brazzaville # Rep. of the Congo -Link Africa/Lagos Africa/Douala # Cameroon -Link Africa/Lagos Africa/Kinshasa # Dem. Rep. 
of the Congo (west) -Link Africa/Lagos Africa/Libreville # Gabon -Link Africa/Lagos Africa/Luanda # Angola -Link Africa/Lagos Africa/Malabo # Equatorial Guinea -Link Africa/Lagos Africa/Niamey # Niger -Link Africa/Lagos Africa/Porto-Novo # Benin - -# Réunion -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Indian/Reunion 3:41:52 - LMT 1911 Jun # Saint-Denis - 4:00 - +04 -# -# Crozet Islands also observes Réunion time; see the 'antarctica' file. -# -# Scattered Islands (Îles Éparses) administered from Réunion are as follows. -# The following information about them is taken from -# Îles Éparses (, 1997-07-22, -# in French; no longer available as of 1999-08-17). -# We have no info about their time zone histories. -# -# Bassas da India - uninhabited -# Europa Island - inhabited from 1905 to 1910 by two families -# Glorioso Is - inhabited until at least 1958 -# Juan de Nova - uninhabited -# Tromelin - inhabited until at least 1958 - -# Rwanda -# See Africa/Maputo. - -# St Helena -# See Africa/Abidjan. -# The other parts of the St Helena territory are similar: -# Tristan da Cunha: on GMT, say Whitman and the CIA -# Ascension: on GMT, say the USNO (1995-12-21) and the CIA -# Gough (scientific station since 1955; sealers wintered previously): -# on GMT, says the CIA -# Inaccessible, Nightingale: uninhabited - -# São Tomé and Príncipe -# Senegal -# See Africa/Abidjan. - -# Seychelles -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Indian/Mahe 3:41:48 - LMT 1906 Jun # Victoria - 4:00 - +04 -# From Paul Eggert (2001-05-30): -# Aldabra, Farquhar, and Desroches, originally dependencies of the -# Seychelles, were transferred to the British Indian Ocean Territory -# in 1965 and returned to Seychelles control in 1976. We don't know -# whether this affected their time zone, so omit this for now. -# Possibly the islands were uninhabited. - -# Sierra Leone -# See Africa/Abidjan. - -# Somalia -# See Africa/Nairobi. - -# South Africa -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule SA 1942 1943 - Sep Sun>=15 2:00 1:00 - -Rule SA 1943 1944 - Mar Sun>=15 2:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Johannesburg 1:52:00 - LMT 1892 Feb 8 - 1:30 - SAST 1903 Mar - 2:00 SA SAST -Link Africa/Johannesburg Africa/Maseru # Lesotho -Link Africa/Johannesburg Africa/Mbabane # Swaziland -# -# Marion and Prince Edward Is -# scientific station since 1947 -# no information - -# Sudan - -# From -# Sudan News Agency (2000-01-13), -# also reported by Michaël De Beukelaer-Dossche via Steffen Thorsen: -# Clocks will be moved ahead for 60 minutes all over the Sudan as of noon -# Saturday.... This was announced Thursday by Caretaker State Minister for -# Manpower Abdul-Rahman Nur-Eddin. - -# From Ahmed Atyya, National Telecommunications Corp. (NTC), Sudan (2017-10-17): -# ... the Republic of Sudan is going to change the time zone from (GMT+3:00) -# to (GMT+ 2:00) starting from Wednesday 1 November 2017. -# -# From Paul Eggert (2017-10-18): -# A scanned copy (in Arabic) of Cabinet Resolution No. 
352 for the -# year 2017 can be found as an attachment in email today from Yahia -# Abdalla of NTC, archived at: -# https://mm.icann.org/pipermail/tz/2017-October/025333.html - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Sudan 1970 only - May 1 0:00 1:00 S -Rule Sudan 1970 1985 - Oct 15 0:00 0 - -Rule Sudan 1971 only - Apr 30 0:00 1:00 S -Rule Sudan 1972 1985 - Apr lastSun 0:00 1:00 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Khartoum 2:10:08 - LMT 1931 - 2:00 Sudan CA%sT 2000 Jan 15 12:00 - 3:00 - EAT 2017 Nov 1 - 2:00 - CAT - -# South Sudan -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Juba 2:06:28 - LMT 1931 - 2:00 Sudan CA%sT 2000 Jan 15 12:00 - 3:00 - EAT - -# Swaziland -# See Africa/Johannesburg. - -# Tanzania -# See Africa/Nairobi. - -# Togo -# See Africa/Abidjan. - -# Tunisia - -# From Gwillim Law (2005-04-30): -# My correspondent, Risto Nykänen, has alerted me to another adoption of DST, -# this time in Tunisia. According to Yahoo France News -# , in a story attributed to AP -# and dated 2005-04-26, "Tunisia has decided to advance its official time by -# one hour, starting on Sunday, May 1. Henceforth, Tunisian time will be -# UTC+2 instead of UTC+1. The change will take place at 23:00 UTC next -# Saturday." (My translation) -# -# From Oscar van Vlijmen (2005-05-02): -# La Presse, the first national daily newspaper ... -# http://www.lapresse.tn/archives/archives280405/actualites/lheure.html -# ... DST for 2005: on: Sun May 1 0h standard time, off: Fri Sept. 30, -# 1h standard time. -# -# From Atef Loukil (2006-03-28): -# The daylight saving time will be the same each year: -# Beginning : the last Sunday of March at 02:00 -# Ending : the last Sunday of October at 03:00 ... -# http://www.tap.info.tn/en/index.php?option=com_content&task=view&id=1188&Itemid=50 - -# From Steffen Thorsen (2009-03-16): -# According to several news sources, Tunisia will not observe DST this year. -# (Arabic) -# http://www.elbashayer.com/?page=viewn&nid=42546 -# https://www.babnet.net/kiwidetail-15295.asp -# -# We have also confirmed this with the US embassy in Tunisia. -# We have a wrap-up about this on the following page: -# https://www.timeanddate.com/news/time/tunisia-cancels-dst-2009.html - -# From Alexander Krivenyshev (2009-03-17): -# Here is a link to Tunis Afrique Presse News Agency -# -# Standard time to be kept the whole year long (tap.info.tn): -# -# (in English) -# http://www.tap.info.tn/en/index.php?option=com_content&task=view&id=26813&Itemid=157 -# -# (in Arabic) -# http://www.tap.info.tn/ar/index.php?option=com_content&task=view&id=61240&Itemid=1 - -# From Arthur David Olson (2009-03-18): -# The Tunis Afrique Presse News Agency notice contains this: "This measure is -# due to the fact that the fasting month of Ramadan coincides with the period -# concerned by summer time. Therefore, the standard time will be kept -# unchanged the whole year long." So foregoing DST seems to be an exception -# (albeit one that may be repeated in the future). - -# From Alexander Krivenyshev (2010-03-27): -# According to some news reports Tunis confirmed not to use DST in 2010 -# -# (translation): -# "The Tunisian government has decided to abandon DST, which was scheduled on -# Sunday... -# Tunisian authorities had suspended the DST for the first time last year also -# coincided with the month of Ramadan..." 
-# -# (in Arabic) -# http://www.moheet.com/show_news.aspx?nid=358861&pg=1 -# http://www.almadenahnews.com/newss/news.php?c=118&id=38036 -# http://www.worldtimezone.com/dst_news/dst_news_tunis02.html - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Tunisia 1939 only - Apr 15 23:00s 1:00 S -Rule Tunisia 1939 only - Nov 18 23:00s 0 - -Rule Tunisia 1940 only - Feb 25 23:00s 1:00 S -Rule Tunisia 1941 only - Oct 6 0:00 0 - -Rule Tunisia 1942 only - Mar 9 0:00 1:00 S -Rule Tunisia 1942 only - Nov 2 3:00 0 - -Rule Tunisia 1943 only - Mar 29 2:00 1:00 S -Rule Tunisia 1943 only - Apr 17 2:00 0 - -Rule Tunisia 1943 only - Apr 25 2:00 1:00 S -Rule Tunisia 1943 only - Oct 4 2:00 0 - -Rule Tunisia 1944 1945 - Apr Mon>=1 2:00 1:00 S -Rule Tunisia 1944 only - Oct 8 0:00 0 - -Rule Tunisia 1945 only - Sep 16 0:00 0 - -Rule Tunisia 1977 only - Apr 30 0:00s 1:00 S -Rule Tunisia 1977 only - Sep 24 0:00s 0 - -Rule Tunisia 1978 only - May 1 0:00s 1:00 S -Rule Tunisia 1978 only - Oct 1 0:00s 0 - -Rule Tunisia 1988 only - Jun 1 0:00s 1:00 S -Rule Tunisia 1988 1990 - Sep lastSun 0:00s 0 - -Rule Tunisia 1989 only - Mar 26 0:00s 1:00 S -Rule Tunisia 1990 only - May 1 0:00s 1:00 S -Rule Tunisia 2005 only - May 1 0:00s 1:00 S -Rule Tunisia 2005 only - Sep 30 1:00s 0 - -Rule Tunisia 2006 2008 - Mar lastSun 2:00s 1:00 S -Rule Tunisia 2006 2008 - Oct lastSun 2:00s 0 - - -# Shanks & Pottenger give 0:09:20 for Paris Mean Time; go with Howse's -# more precise 0:09:21. -# Shanks & Pottenger say the 1911 switch was on Mar 9; go with Howse's Mar 11. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Africa/Tunis 0:40:44 - LMT 1881 May 12 - 0:09:21 - PMT 1911 Mar 11 # Paris Mean Time - 1:00 Tunisia CE%sT - -# Uganda -# See Africa/Nairobi. - -# Zambia -# Zimbabwe -# See Africa/Maputo. diff --git a/src/timezone/data/antarctica b/src/timezone/data/antarctica deleted file mode 100644 index d9c132a30f..0000000000 --- a/src/timezone/data/antarctica +++ /dev/null @@ -1,340 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# From Paul Eggert (1999-11-15): -# To keep things manageable, we list only locations occupied year-round; see -# COMNAP - Stations and Bases -# http://www.comnap.aq/comnap/comnap.nsf/P/Stations/ -# and -# Summary of the Peri-Antarctic Islands (1998-07-23) -# http://www.spri.cam.ac.uk/bob/periant.htm -# for information. -# Unless otherwise specified, we have no time zone information. - -# FORMAT is '-00' and GMTOFF is 0 for locations while uninhabited. - -# Argentina - year-round bases -# Belgrano II, Confin Coast, -770227-0343737, since 1972-02-05 -# Carlini, Potter Cove, King George Island, -6414-0602320, since 1982-01 -# Esperanza, Hope Bay, -6323-05659, since 1952-12-17 -# Marambio, -6414-05637, since 1969-10-29 -# Orcadas, Laurie I, -6016-04444, since 1904-02-22 -# San Martín, Barry I, -6808-06706, since 1951-03-21 -# (except 1960-03 / 1976-03-21) - -# Australia - territories -# Heard Island, McDonald Islands (uninhabited) -# previously sealers and scientific personnel wintered -# Margaret Turner reports -# https://web.archive.org/web/20021204222245/http://www.dstc.qut.edu.au/DST/marg/daylight.html -# (1999-09-30) that they're UT +05, with no DST; -# presumably this is when they have visitors. 
-# -# year-round bases -# Casey, Bailey Peninsula, -6617+11032, since 1969 -# Davis, Vestfold Hills, -6835+07759, since 1957-01-13 -# (except 1964-11 - 1969-02) -# Mawson, Holme Bay, -6736+06253, since 1954-02-13 - -# From Steffen Thorsen (2009-03-11): -# Three Australian stations in Antarctica have changed their time zone: -# Casey moved from UTC+8 to UTC+11 -# Davis moved from UTC+7 to UTC+5 -# Mawson moved from UTC+6 to UTC+5 -# The changes occurred on 2009-10-18 at 02:00 (local times). -# -# Government source: (Australian Antarctic Division) -# http://www.aad.gov.au/default.asp?casid=37079 -# -# We have more background information here: -# https://www.timeanddate.com/news/time/antarctica-new-times.html - -# From Steffen Thorsen (2010-03-10): -# We got these changes from the Australian Antarctic Division: ... -# -# - Casey station reverted to its normal time of UTC+8 on 5 March 2010. -# The change to UTC+11 is being considered as a regular summer thing but -# has not been decided yet. -# -# - Davis station will revert to its normal time of UTC+7 at 10 March 2010 -# 20:00 UTC. -# -# - Mawson station stays on UTC+5. -# -# Background: -# https://www.timeanddate.com/news/time/antartica-time-changes-2010.html - -# From Steffen Thorsen (2016-10-28): -# Australian Antarctica Division informed us that Casey changed time -# zone to UTC+11 in "the morning of 22nd October 2016". - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Antarctica/Casey 0 - -00 1969 - 8:00 - +08 2009 Oct 18 2:00 - 11:00 - +11 2010 Mar 5 2:00 - 8:00 - +08 2011 Oct 28 2:00 - 11:00 - +11 2012 Feb 21 17:00u - 8:00 - +08 2016 Oct 22 - 11:00 - +11 -Zone Antarctica/Davis 0 - -00 1957 Jan 13 - 7:00 - +07 1964 Nov - 0 - -00 1969 Feb - 7:00 - +07 2009 Oct 18 2:00 - 5:00 - +05 2010 Mar 10 20:00u - 7:00 - +07 2011 Oct 28 2:00 - 5:00 - +05 2012 Feb 21 20:00u - 7:00 - +07 -Zone Antarctica/Mawson 0 - -00 1954 Feb 13 - 6:00 - +06 2009 Oct 18 2:00 - 5:00 - +05 -# References: -# Casey Weather (1998-02-26) -# http://www.antdiv.gov.au/aad/exop/sfo/casey/casey_aws.html -# Davis Station, Antarctica (1998-02-26) -# http://www.antdiv.gov.au/aad/exop/sfo/davis/video.html -# Mawson Station, Antarctica (1998-02-25) -# http://www.antdiv.gov.au/aad/exop/sfo/mawson/video.html - -# Belgium - year-round base -# Princess Elisabeth, Queen Maud Land, -713412+0231200, since 2007 - -# Brazil - year-round base -# Ferraz, King George Island, -6205+05824, since 1983/4 - -# Bulgaria - year-round base -# St. Kliment Ohridski, Livingston Island, -623829-0602153, since 1988 - -# Chile - year-round bases and towns -# Escudero, South Shetland Is, -621157-0585735, since 1994 -# Frei Montalva, King George Island, -6214-05848, since 1969-03-07 -# O'Higgins, Antarctic Peninsula, -6319-05704, since 1948-02 -# Prat, -6230-05941 -# Villa Las Estrellas (a town), around the Frei base, since 1984-04-09 -# These locations employ Region of Magallanes time; use -# TZ='America/Punta_Arenas'. - -# China - year-round bases -# Great Wall, King George Island, -6213-05858, since 1985-02-20 -# Zhongshan, Larsemann Hills, Prydz Bay, -6922+07623, since 1989-02-26 - -# France - year-round bases (also see "France & Italy") -# -# From Antoine Leca (1997-01-20): -# Time data entries are from Nicole Pailleau at the IFRTP -# (French Institute for Polar Research and Technology). -# She confirms that French Southern Territories and Terre Adélie bases -# don't observe daylight saving time, even if Terre Adélie supplies came -# from Tasmania. 
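
The Casey, Davis, and Mawson stanzas above pack several back-and-forth offset changes into a few lines, including the "-00" FORMAT used while a station is unoccupied. A quick way to sanity-check such stanzas against a compiled copy of this database is to probe instants on either side of each documented transition. The sketch below uses Python's standard zoneinfo module (Python 3.9 or later); it assumes a reasonably current system tzdata, and its output will track whatever tzdata version is installed.

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; reads the compiled IANA tzdb

casey = ZoneInfo("Antarctica/Casey")

# Probe just before and after the 2009-10-18 and 2010-03-05 changes noted
# above, plus a pre-1969 instant that should report the uninhabited "-00".
for day in ("1950-01-01", "2009-10-17", "2009-10-19",
            "2010-03-04", "2010-03-06"):
    t = datetime.fromisoformat(day + "T00:00:00+00:00")
    local = t.astimezone(casey)
    print(day, local.tzname(), local.utcoffset())

With current tzdata this should print -00, +08, +11, +11, and +08 for the five probes, matching the Casey stanza.
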
-# -# French Southern Territories with year-round inhabitants -# -# Alfred Faure, Possession Island, Crozet Islands, -462551+0515152, since 1964; -# sealing & whaling stations operated variously 1802/1911+; -# see Indian/Reunion. -# -# Martin-de-Viviès, Amsterdam Island, -374105+0773155, since 1950 -# Port-aux-Français, Kerguelen Islands, -492110+0701303, since 1951; -# whaling & sealing station operated 1908/1914, 1920/1929, and 1951/1956 -# -# St Paul Island - near Amsterdam, uninhabited -# fishing stations operated variously 1819/1931 -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Indian/Kerguelen 0 - -00 1950 # Port-aux-Français - 5:00 - +05 -# -# year-round base in the main continent -# Dumont d'Urville, Île des Pétrels, -6640+14001, since 1956-11 -# (2005-12-05) -# -# Another base at Port-Martin, 50km east, began operation in 1947. -# It was destroyed by fire on 1952-01-14. -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Antarctica/DumontDUrville 0 - -00 1947 - 10:00 - +10 1952 Jan 14 - 0 - -00 1956 Nov - 10:00 - +10 - -# France & Italy - year-round base -# Concordia, -750600+1232000, since 2005 - -# Germany - year-round base -# Neumayer III, -704080-0081602, since 2009 - -# India - year-round bases -# Bharati, -692428+0761114, since 2012 -# Maitri, -704558+0114356, since 1989 - -# Italy - year-round base (also see "France & Italy") -# Zuchelli, Terra Nova Bay, -744140+1640647, since 1986 - -# Japan - year-round bases -# Syowa (also known as Showa), -690022+0393524, since 1957 -# -# From Hideyuki Suzuki (1999-02-06): -# In all Japanese stations, +0300 is used as the standard time. -# -# Syowa station, which is the first antarctic station of Japan, -# was established on 1957-01-29. Since Syowa station is still the main -# station of Japan, it's appropriate for the principal location. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Antarctica/Syowa 0 - -00 1957 Jan 29 - 3:00 - +03 -# See: -# NIPR Antarctic Research Activities (1999-08-17) -# http://www.nipr.ac.jp/english/ara01.html - -# S Korea - year-round base -# Jang Bogo, Terra Nova Bay, -743700+1641205 since 2014 -# King Sejong, King George Island, -6213-05847, since 1988 - -# New Zealand - claims -# Balleny Islands (never inhabited) -# Scott Island (never inhabited) -# -# year-round base -# Scott Base, Ross Island, since 1957-01. -# See Pacific/Auckland. - -# Norway - territories -# Bouvet (never inhabited) -# -# claims -# Peter I Island (never inhabited) -# -# year-round base -# Troll, Queen Maud Land, -720041+0023206, since 2005-02-12 -# -# From Paul-Inge Flakstad (2014-03-10): -# I recently had a long dialog about this with the developer of timegenie.com. -# In the absence of specific dates, he decided to choose some likely ones: -# GMT +1 - From March 1 to the last Sunday in March -# GMT +2 - From the last Sunday in March until the last Sunday in October -# GMT +1 - From the last Sunday in October until November 7 -# GMT +0 - From November 7 until March 1 -# The dates for switching to and from UTC+0 will probably not be absolutely -# correct, but they should be quite close to the actual dates. -# -# From Paul Eggert (2014-03-21): -# The CET-switching Troll rules require zic from tz 2014b or later, so as -# suggested by Bengt-Inge Larsson comment them out for now, and approximate -# with only UTC and CEST. Uncomment them when 2014b is more prevalent. 
-#
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-#Rule Troll 2005 max - Mar 1 1:00u 1:00 +01
-Rule Troll 2005 max - Mar lastSun 1:00u 2:00 +02
-#Rule Troll 2005 max - Oct lastSun 1:00u 1:00 +01
-#Rule Troll 2004 max - Nov 7 1:00u 0:00 +00
-# Remove the following line when uncommenting the above '#Rule' lines.
-Rule Troll 2004 max - Oct lastSun 1:00u 0:00 +00
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Antarctica/Troll 0 - -00 2005 Feb 12
- 0:00 Troll %s
-
-# Poland - year-round base
-# Arctowski, King George Island, -620945-0582745, since 1977
-
-# Romania - year-round base
-# Law-Racoviță, Larsemann Hills, -692319+0762251, since 1986
-
-# Russia - year-round bases
-# Bellingshausen, King George Island, -621159-0585337, since 1968-02-22
-# Mirny, Davis coast, -6633+09301, since 1956-02
-# Molodezhnaya, Alasheyev Bay, -6740+04551,
-# year-round from 1962-02 to 1999-07-01
-# Novolazarevskaya, Queen Maud Land, -7046+01150,
-# year-round from 1960/61 to 1992
-
-# Vostok, since 1957-12-16, temporarily closed 1994-02/1994-11
-# From Craig Mundell (1994-12-15):
-# http://quest.arc.nasa.gov/antarctica/QA/computers/Directions,Time,ZIP
-# Vostok, which is one of the Russian stations, is set on the same
-# time as Moscow, Russia.
-#
-# From Lee Hotz (2001-03-08):
-# I queried the folks at Columbia who spent the summer at Vostok and this is
-# what they had to say about time there:
-# "in the US Camp (East Camp) we have been on New Zealand (McMurdo)
-# time, which is 12 hours ahead of GMT. The Russian Station Vostok was
-# 6 hours behind that (although only 2 miles away, i.e. 6 hours ahead
-# of GMT). This is a time zone I think two hours east of Moscow. The
-# natural time zone is in between the two: 8 hours ahead of GMT."
-#
-# From Paul Eggert (2001-05-04):
-# This seems to be hopelessly confusing, so I asked Lee Hotz about it
-# in person. He said that some Antarctic locations set their local
-# time so that noon is the warmest part of the day, and that this
-# changes during the year and does not necessarily correspond to mean
-# solar noon. So the Vostok time might have been whatever the clocks
-# happened to be during their visit. So we still don't really know what time
-# it is at Vostok. But we'll guess +06.
-#
-Zone Antarctica/Vostok 0 - -00 1957 Dec 16
- 6:00 - +06
-
-# S Africa - year-round bases
-# Marion Island, -4653+03752
-# SANAE IV, Vesleskarvet, Queen Maud Land, -714022-0025026, since 1997
-
-# Ukraine - year-round base
-# Vernadsky (formerly Faraday), Galindez Island, -651445-0641526, since 1954
-
-# United Kingdom
-#
-# British Antarctic Territories (BAT) claims
-# South Orkney Islands
-# scientific station from 1903
-# whaling station at Signy I 1920/1926
-# South Shetland Islands
-#
-# year-round bases
-# Bird Island, South Georgia, -5400-03803, since 1983
-# Deception Island, -6259-06034, whaling station 1912/1931,
-# scientific station 1943/1967,
-# previously sealers and a scientific expedition wintered by accident,
-# and a garrison was deployed briefly
-# Halley, Coates Land, -7535-02604, since 1956-01-06
-# Halley is on a moving ice shelf and is periodically relocated
-# so that it is never more than 10km from its nominal location.
-# Rothera, Adelaide Island, -6734-6808, since 1976-12-01
-#
-# From Paul Eggert (2002-10-22)
-# says Rothera is -03 all year.
-# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Antarctica/Rothera 0 - -00 1976 Dec 1 - -3:00 - -03 - -# Uruguay - year round base -# Artigas, King George Island, -621104-0585107 - -# USA - year-round bases -# -# Palmer, Anvers Island, since 1965 (moved 2 miles in 1968) -# See 'southamerica' for Antarctica/Palmer, since it uses South American DST. -# -# McMurdo Station, Ross Island, since 1955-12 -# Amundsen-Scott South Pole Station, continuously occupied since 1956-11-20 -# -# From Chris Carrier (1996-06-27): -# Siple, the first commander of the South Pole station, -# stated that he would have liked to have kept GMT at the station, -# but that he found it more convenient to keep GMT+12 -# as supplies for the station were coming from McMurdo Sound, -# which was on GMT+12 because New Zealand was on GMT+12 all year -# at that time (1957). (Source: Siple's book 90 Degrees South.) -# -# From Susan Smith -# http://www.cybertours.com/whs/pole10.html -# (1995-11-13 16:24:56 +1300, no longer available): -# We use the same time as McMurdo does. -# And they use the same time as Christchurch, NZ does.... -# One last quirk about South Pole time. -# All the electric clocks are usually wrong. -# Something about the generators running at 60.1hertz or something -# makes all of the clocks run fast. So every couple of days, -# we have to go around and set them back 5 minutes or so. -# Maybe if we let them run fast all of the time, we'd get to leave here sooner!! -# -# See 'australasia' for Antarctica/McMurdo. diff --git a/src/timezone/data/asia b/src/timezone/data/asia deleted file mode 100644 index ac39af351e..0000000000 --- a/src/timezone/data/asia +++ /dev/null @@ -1,3142 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# This file is by no means authoritative; if you think you know better, -# go ahead and edit the file (and please send any changes to -# tz@iana.org for general use in the future). For more, please see -# the file CONTRIBUTING in the tz distribution. - -# From Paul Eggert (2017-01-13): -# -# Unless otherwise specified, the source for data through 1990 is: -# Thomas G. Shanks and Rique Pottenger, The International Atlas (6th edition), -# San Diego: ACS Publications, Inc. (2003). -# Unfortunately this book contains many errors and cites no sources. -# -# Many years ago Gwillim Law wrote that a good source -# for time zone data was the International Air Transport -# Association's Standard Schedules Information Manual (IATA SSIM), -# published semiannually. Law sent in several helpful summaries -# of the IATA's data after 1990. Except where otherwise noted, -# IATA SSIM is the source for entries after 1990. -# -# Another source occasionally used is Edward W. Whitman, World Time Differences, -# Whitman Publishing Co, 2 Niagara Av, Ealing, London (undated), which -# I found in the UCLA library. -# -# For data circa 1899, a common source is: -# Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94. -# https://www.jstor.org/stable/1774359 -# -# For Russian data circa 1919, a source is: -# Byalokoz EL. New Counting of Time in Russia since July 1, 1919. -# (See the 'europe' file for a fuller citation.) -# -# A reliable and entertaining source about time zones is -# Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997). 
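
A footnote to the antarctica file just above: the South Pole entry says Amundsen-Scott keeps McMurdo time, i.e. New Zealand time, and Antarctica/McMurdo itself lives in the 'australasia' file. In a compiled database McMurdo should therefore track Pacific/Auckland through both halves of the year. A minimal check, again with Python's zoneinfo (Python 3.9+, results depending on the installed tzdata):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# January is New Zealand summer (expect +13), July is winter (expect +12);
# the two zones should agree at both probes.
for month in (1, 7):
    t = datetime(2017, month, 1, tzinfo=timezone.utc)
    print(month,
          t.astimezone(ZoneInfo("Antarctica/McMurdo")).utcoffset(),
          t.astimezone(ZoneInfo("Pacific/Auckland")).utcoffset())
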
-# -# The following alphabetic abbreviations appear in these tables: -# std dst -# LMT Local Mean Time -# 2:00 EET EEST Eastern European Time -# 2:00 IST IDT Israel -# 5:30 IST India -# 7:00 WIB west Indonesia (Waktu Indonesia Barat) -# 8:00 WITA central Indonesia (Waktu Indonesia Tengah) -# 8:00 CST China -# 8:30 KST KDT Korea when at +0830 -# 9:00 WIT east Indonesia (Waktu Indonesia Timur) -# 9:00 JST JDT Japan -# 9:00 KST KDT Korea when at +09 -# 9:30 ACST Australian Central Standard Time -# Otherwise, these tables typically use numeric abbreviations like +03 -# and +0330 for integer hour and minute UTC offsets. Although earlier -# editions invented alphabetic time zone abbreviations for every -# offset, this did not reflect common practice. -# -# See the 'europe' file for Russia and Turkey in Asia. - -# From Guy Harris: -# Incorporates data for Singapore from Robert Elz' asia 1.1, as well as -# additional information from Tom Yap, Sun Microsystems Intercontinental -# Technical Support (including a page from the Official Airline Guide - -# Worldwide Edition). - -############################################################################### - -# These rules are stolen from the 'europe' file. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule EUAsia 1981 max - Mar lastSun 1:00u 1:00 S -Rule EUAsia 1979 1995 - Sep lastSun 1:00u 0 - -Rule EUAsia 1996 max - Oct lastSun 1:00u 0 - -Rule E-EurAsia 1981 max - Mar lastSun 0:00 1:00 S -Rule E-EurAsia 1979 1995 - Sep lastSun 0:00 0 - -Rule E-EurAsia 1996 max - Oct lastSun 0:00 0 - -Rule RussiaAsia 1981 1984 - Apr 1 0:00 1:00 S -Rule RussiaAsia 1981 1983 - Oct 1 0:00 0 - -Rule RussiaAsia 1984 1995 - Sep lastSun 2:00s 0 - -Rule RussiaAsia 1985 2010 - Mar lastSun 2:00s 1:00 S -Rule RussiaAsia 1996 2010 - Oct lastSun 2:00s 0 - - -# Afghanistan -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Kabul 4:36:48 - LMT 1890 - 4:00 - +04 1945 - 4:30 - +0430 - -# Armenia -# From Paul Eggert (2006-03-22): -# Shanks & Pottenger have Yerevan switching to 3:00 (with Russian DST) -# in spring 1991, then to 4:00 with no DST in fall 1995, then -# readopting Russian DST in 1997. Go with Shanks & Pottenger, even -# when they disagree with others. Edgar Der-Danieliantz -# reported (1996-05-04) that Yerevan probably wouldn't use DST -# in 1996, though it did use DST in 1995. IATA SSIM (1991/1998) reports that -# Armenia switched from 3:00 to 4:00 in 1998 and observed DST after 1991, -# but started switching at 3:00s in 1998. - -# From Arthur David Olson (2011-06-15): -# While Russia abandoned DST in 2011, Armenia may choose to -# follow Russia's "old" rules. - -# From Alexander Krivenyshev (2012-02-10): -# According to News Armenia, on Feb 9, 2012, -# http://newsarmenia.ru/society/20120209/42609695.html -# -# The Armenia National Assembly adopted final reading of Amendments to the -# Law "On procedure of calculation time on the territory of the Republic of -# Armenia" according to which Armenia [is] abolishing Daylight Saving Time. 
-# or -# (brief) -# http://www.worldtimezone.com/dst_news/dst_news_armenia03.html -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Armenia 2011 only - Mar lastSun 2:00s 1:00 S -Rule Armenia 2011 only - Oct lastSun 2:00s 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Yerevan 2:58:00 - LMT 1924 May 2 - 3:00 - +03 1957 Mar - 4:00 RussiaAsia +04/+05 1991 Mar 31 2:00s - 3:00 RussiaAsia +03/+04 1995 Sep 24 2:00s - 4:00 - +04 1997 - 4:00 RussiaAsia +04/+05 2011 - 4:00 Armenia +04/+05 - -# Azerbaijan - -# From Rustam Aliyev of the Azerbaijan Internet Forum (2005-10-23): -# According to the resolution of Cabinet of Ministers, 1997 -# From Paul Eggert (2015-09-17): It was Resolution No. 21 (1997-03-17). -# http://code.az/files/daylight_res.pdf - -# From Steffen Thorsen (2016-03-17): -# ... the Azerbaijani Cabinet of Ministers has cancelled switching to -# daylight saving time.... -# https://www.azernews.az/azerbaijan/94137.html -# http://vestnikkavkaza.net/news/Azerbaijani-Cabinet-of-Ministers-cancels-daylight-saving-time.html -# http://en.apa.az/xeber_azerbaijan_abolishes_daylight_savings_ti_240862.html - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Azer 1997 2015 - Mar lastSun 4:00 1:00 S -Rule Azer 1997 2015 - Oct lastSun 5:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Baku 3:19:24 - LMT 1924 May 2 - 3:00 - +03 1957 Mar - 4:00 RussiaAsia +04/+05 1991 Mar 31 2:00s - 3:00 RussiaAsia +03/+04 1992 Sep lastSun 2:00s - 4:00 - +04 1996 - 4:00 EUAsia +04/+05 1997 - 4:00 Azer +04/+05 - -# Bahrain -# See Asia/Qatar. - -# Bangladesh -# From Alexander Krivenyshev (2009-05-13): -# According to newspaper Asian Tribune (May 6, 2009) Bangladesh may introduce -# Daylight Saving Time from June 16 to Sept 30 -# -# Bangladesh to introduce daylight saving time likely from June 16 -# http://www.asiantribune.com/?q=node/17288 -# http://www.worldtimezone.com/dst_news/dst_news_bangladesh02.html -# -# "... Bangladesh government has decided to switch daylight saving time from -# June -# 16 till September 30 in a bid to ensure maximum use of daylight to cope with -# crippling power crisis. " -# -# The switch will remain in effect from June 16 to Sept 30 (2009) but if -# implemented the next year, it will come in force from April 1, 2010 - -# From Steffen Thorsen (2009-06-02): -# They have finally decided now, but changed the start date to midnight between -# the 19th and 20th, and they have not set the end date yet. -# -# Some sources: -# https://in.reuters.com/article/southAsiaNews/idINIndia-40017620090601 -# http://bdnews24.com/details.php?id=85889&cid=2 -# -# Our wrap-up: -# https://www.timeanddate.com/news/time/bangladesh-daylight-saving-2009.html - -# From A. N. M. Kamrus Saadat (2009-06-15): -# Finally we've got the official mail regarding DST start time where DST start -# time is mentioned as Jun 19 2009, 23:00 from BTRC (Bangladesh -# Telecommunication Regulatory Commission). -# -# No DST end date has been announced yet. - -# From Alexander Krivenyshev (2009-09-25): -# Bangladesh won't go back to Standard Time from October 1, 2009, -# instead it will continue DST measure till the cabinet makes a fresh decision. 
-# -# Following report by same newspaper-"The Daily Star Friday": -# "DST change awaits cabinet decision-Clock won't go back by 1-hr from Oct 1" -# http://www.thedailystar.net/newDesign/news-details.php?nid=107021 -# http://www.worldtimezone.com/dst_news/dst_news_bangladesh04.html - -# From Steffen Thorsen (2009-10-13): -# IANS (Indo-Asian News Service) now reports: -# Bangladesh has decided that the clock advanced by an hour to make -# maximum use of daylight hours as an energy saving measure would -# "continue for an indefinite period." -# -# One of many places where it is published: -# http://www.thaindian.com/newsportal/business/bangladesh-to-continue-indefinitely-with-advanced-time_100259987.html - -# From Alexander Krivenyshev (2009-12-24): -# According to Bangladesh newspaper "The Daily Star," -# Bangladesh will change its clock back to Standard Time on Dec 31, 2009. -# -# Clock goes back 1-hr on Dec 31 night. -# http://www.thedailystar.net/newDesign/news-details.php?nid=119228 -# http://www.worldtimezone.com/dst_news/dst_news_bangladesh05.html -# -# "...The government yesterday decided to put the clock back by one hour -# on December 31 midnight and the new time will continue until March 31, -# 2010 midnight. The decision came at a cabinet meeting at the Prime -# Minister's Office last night..." - -# From Alexander Krivenyshev (2010-03-22): -# According to Bangladesh newspaper "The Daily Star," -# Cabinet cancels Daylight Saving Time -# http://www.thedailystar.net/newDesign/latest_news.php?nid=22817 -# http://www.worldtimezone.com/dst_news/dst_news_bangladesh06.html - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Dhaka 2009 only - Jun 19 23:00 1:00 S -Rule Dhaka 2009 only - Dec 31 24:00 0 - - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Dhaka 6:01:40 - LMT 1890 - 5:53:20 - HMT 1941 Oct # Howrah Mean Time? - 6:30 - +0630 1942 May 15 - 5:30 - +0530 1942 Sep - 6:30 - +0630 1951 Sep 30 - 6:00 - +06 2009 - 6:00 Dhaka +06/+07 - -# Bhutan -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Thimphu 5:58:36 - LMT 1947 Aug 15 # or Thimbu - 5:30 - +0530 1987 Oct - 6:00 - +06 - -# British Indian Ocean Territory -# Whitman and the 1995 CIA time zone map say 5:00, but the -# 1997 and later maps say 6:00. Assume the switch occurred in 1996. -# We have no information as to when standard time was introduced; -# assume it occurred in 1907, the same year as Mauritius (which -# then contained the Chagos Archipelago). -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Indian/Chagos 4:49:40 - LMT 1907 - 5:00 - +05 1996 - 6:00 - +06 - -# Brunei -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Brunei 7:39:40 - LMT 1926 Mar # Bandar Seri Begawan - 7:30 - +0730 1933 - 8:00 - +08 - -# Burma / Myanmar - -# Milne says 6:24:40 was the meridian of the time ball observatory at Rangoon. - -# From Paul Eggert (2017-04-20): -# Page 27 of Reed & Low (cited for Asia/Kolkata) says "Rangoon local time is -# used upon the railways and telegraphs of Burma, and is 6h. 24m. 47s. ahead -# of Greenwich." This refers to the period before Burma's transition to +0630, -# a transition for which Shanks is the only source. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Yangon 6:24:47 - LMT 1880 # or Rangoon - 6:24:47 - RMT 1920 # Rangoon local time - 6:30 - +0630 1942 May - 9:00 - +09 1945 May 3 - 6:30 - +0630 - -# Cambodia -# See Asia/Bangkok. - - -# China - -# From Guy Harris: -# People's Republic of China. Yes, they really have only one time zone. - -# From Bob Devine (1988-01-28): -# No they don't. 
See TIME mag, 1986-02-17 p.52. Even though -# China is across 4 physical time zones, before Feb 1, 1986 only the -# Peking (Beijing) time zone was recognized. Since that date, China -# has two of 'em - Peking's and Ürümqi (named after the capital of -# the Xinjiang Uyghur Autonomous Region). I don't know about DST for it. -# -# . . .I just deleted the DST table and this editor makes it too -# painful to suck in another copy. So, here is what I have for -# DST start/end dates for Peking's time zone (info from AP): -# -# 1986 May 4 - Sept 14 -# 1987 mid-April - ?? - -# From U. S. Naval Observatory (1989-01-19): -# CHINA 8 H AHEAD OF UTC ALL OF CHINA, INCL TAIWAN -# CHINA 9 H AHEAD OF UTC APR 17 - SEP 10 - -# From Paul Eggert (2008-02-11): -# Jim Mann, "A clumsy embrace for another western custom: China on daylight -# time - sort of", Los Angeles Times, 1986-05-05 ... [says] that China began -# observing daylight saving time in 1986. - -# From Paul Eggert (2014-06-30): -# Shanks & Pottenger have China switching to a single time zone in 1980, but -# this doesn't seem to be correct. They also write that China observed summer -# DST from 1986 through 1991, which seems to match the above commentary, so -# go with them for DST rules as follows: -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Shang 1940 only - Jun 3 0:00 1:00 D -Rule Shang 1940 1941 - Oct 1 0:00 0 S -Rule Shang 1941 only - Mar 16 0:00 1:00 D -Rule PRC 1986 only - May 4 0:00 1:00 D -Rule PRC 1986 1991 - Sep Sun>=11 0:00 0 S -Rule PRC 1987 1991 - Apr Sun>=10 0:00 1:00 D - -# From Anthony Fok (2001-12-20): -# BTW, I did some research on-line and found some info regarding these five -# historic timezones from some Taiwan websites. And yes, there are official -# Chinese names for these locales (before 1949). -# -# From Jesper Nørgaard Welen (2006-07-14): -# I have investigated the timezones around 1970 on the -# https://www.astro.com/atlas site [with provinces and county -# boundaries summarized below].... A few other exceptions were two -# counties on the Sichuan side of the Xizang-Sichuan border, -# counties Dege and Baiyu which lies on the Sichuan side and are -# therefore supposed to be GMT+7, Xizang region being GMT+6, but Dege -# county is GMT+8 according to astro.com while Baiyu county is GMT+6 -# (could be true), for the moment I am assuming that those two -# counties are mistakes in the astro.com data. - -# From Paul Eggert (2017-01-05): -# Alois Treindl kindly sent me translations of the following two sources: -# -# (1) -# Guo Qingsheng (National Time-Service Center, CAS, Xi'an 710600, China) -# Beijing Time at the Beginning of the PRC -# China Historical Materials of Science and Technology -# (Zhongguo ke ji shi liao, 中国科技史料), Vol. 24, No. 1 (2003) -# It gives evidence that at the beginning of the PRC, Beijing time was -# officially apparent solar time! However, Guo also says that the -# evidence is dubious, as the relevant institute of astronomy had not -# been taken over by the PRC yet. It's plausible that apparent solar -# time was announced but never implemented, and that people continued -# to use UT+8. As the Shanghai radio station (and I presume the -# observatory) was still under control of French missionaries, it -# could well have ignored any such mandate. 
-#
-# (2)
-# Guo Qing-sheng (Shaanxi Astronomical Observatory, CAS, Xi'an 710600, China)
-# A Study on the Standard Time Changes for the Past 100 Years in China
-# [undated and unknown publication location]
-# It says several things:
-# * The Qing dynasty used local apparent solar time throughout China.
-# * The Republic of China instituted Beijing mean solar time effective
-# the official calendar book of 1914.
-# * The French Concession in Shanghai set up signal stations in
-# French docks in the 1890s, controlled by Xujiahui (Zikawei)
-# Observatory and set to local mean time.
-# * "From the end of the 19th century" it changed to UT+8.
-# * Chinese Customs (by then reduced to a tool of foreign powers)
-# eventually standardized on this time for all ports, and it
-# became used by railways as well.
-# * In 1918 the Central Observatory proposed dividing China into
-# five time zones (see below for details). This caught on
-# at first only in coastal areas observing UT+8.
-# * During WWII all of China was in theory at UT+7. In practice
-# this was ignored in the west, and I presume was ignored in
-# Japanese-occupied territory.
-# * Japanese-occupied Manchuria was at UT+9, i.e., Japan time.
-# * The five-zone plan was resurrected after WWII and officially put into
-# place (with some modifications) in March 1948. It's not clear
-# how well it was observed in areas under Nationalist control.
-# * The People's Liberation Army used UT+8 during the civil war.
-#
-# An AP article "Shanghai Internat'l Area Little Changed" in the
-# Lewiston (ME) Daily Sun (1939-05-29), p 17, said "Even the time is
-# different - the occupied districts going by Tokyo time, an hour
-# ahead of that prevailing in the rest of Shanghai." Guess that the
-# Xujiahui Observatory was under French control and stuck with UT +08.
-#
-# In earlier versions of this file, China had many separate Zone entries, but
-# this was based on what were apparently incorrect data in Shanks & Pottenger.
-# This has now been simplified to the two entries Asia/Shanghai and
-# Asia/Urumqi, with the others being links for backward compatibility.
-# Proposed in 1918 and theoretically in effect until 1949 (although in practice
-# mainly observed in coastal areas), the five zones were:
-#
-# Changbai Time ("Long-white Time", Long-white = Heilongjiang area) UT +08:30
-# Now part of Asia/Shanghai; its pre-1970 times are not recorded here.
-# Heilongjiang (except Mohe county), Jilin
-#
-# Zhongyuan Time ("Central plain Time") UT +08
-# Now part of Asia/Shanghai.
-# most of China
-# Milne gives 8:05:43.2 for Xujiahui Observatory time; round to nearest.
-# Guo says Shanghai switched to UT +08 "from the end of the 19th century".
-#
-# Long-shu Time (probably as Long and Shu were two names of the area) UT +07
-# Now part of Asia/Shanghai; its pre-1970 times are not recorded here.
-# Guangxi, Guizhou, Hainan, Ningxia, Sichuan, Shaanxi, and Yunnan;
-# most of Gansu; west Inner Mongolia; east Qinghai; and the Guangdong
-# counties Deqing, Enping, Kaiping, Luoding, Taishan, Xinxing,
-# Yangchun, Yangjiang, Yu'nan, and Yunfu.
-#
-# Xin-zang Time ("Xinjiang-Tibet Time") UT +06
-# This region is now part of either Asia/Urumqi or Asia/Shanghai with
-# current boundaries uncertain; times before 1970 for areas that
-# disagree with Ürümqi or Shanghai are not recorded here.
-# The Gansu counties Aksay, Anxi, Dunhuang, Subei; west Qinghai; -# the Guangdong counties Xuwen, Haikang, Suixi, Lianjiang, -# Zhanjiang, Wuchuan, Huazhou, Gaozhou, Maoming, Dianbai, and Xinyi; -# east Tibet, including Lhasa, Chamdo, Shigaise, Jimsar, Shawan and Hutubi; -# east Xinjiang, including Ürümqi, Turpan, Karamay, Korla, Minfeng, Jinghe, -# Wusu, Qiemo, Xinyan, Wulanwusu, Jinghe, Yumin, Tacheng, Tuoli, Emin, -# Shihezi, Changji, Yanqi, Heshuo, Tuokexun, Tulufan, Shanshan, Hami, -# Fukang, Kuitun, Kumukuli, Miquan, Qitai, and Turfan. -# -# Kunlun Time UT +05:30 -# This region is now in the same status as Xin-zang Time (see above). -# West Tibet, including Pulan, Aheqi, Shufu, Shule; -# West Xinjiang, including Aksu, Atushi, Yining, Hetian, Cele, Luopu, Nileke, -# Zhaosu, Tekesi, Gongliu, Chabuchaer, Huocheng, Bole, Pishan, Suiding, -# and Yarkand. - -# From Luther Ma (2009-10-17): -# Almost all (>99.9%) ethnic Chinese (properly ethnic Han) living in -# Xinjiang use Chinese Standard Time. Some are aware of Xinjiang time, -# but have no need of it. All planes, trains, and schools function on -# what is called "Beijing time." When Han make an appointment in Chinese -# they implicitly use Beijing time. -# -# On the other hand, ethnic Uyghurs, who make up about half the -# population of Xinjiang, typically use "Xinjiang time" which is two -# hours behind Beijing time, or UT +06. The government of the Xinjiang -# Uyghur Autonomous Region, (XAUR, or just Xinjiang for short) as well as -# local governments such as the Ürümqi city government use both times in -# publications, referring to what is popularly called Xinjiang time as -# "Ürümqi time." When Uyghurs make an appointment in the Uyghur language -# they almost invariably use Xinjiang time. -# -# (Their ethnic Han compatriots would typically have no clue of its -# widespread use, however, because so extremely few of them are fluent in -# Uyghur, comparable to the number of Anglo-Americans fluent in Navajo.) -# -# (...As with the rest of China there was a brief interval ending in 1990 -# or 1991 when summer time was in use. The confusion was severe, with -# the province not having dual times but four times in use at the same -# time. Some areas remained on standard Xinjiang time or Beijing time and -# others moving their clocks ahead.) - -# From Luther Ma (2009-11-19): -# With the risk of being redundant to previous answers these are the most common -# English "transliterations" (w/o using non-English symbols): -# -# 1. Wulumuqi... -# 2. Kashi... -# 3. Urumqi... -# 4. Kashgar... -# ... -# 5. It seems that Uyghurs in Ürümqi has been using Xinjiang since at least the -# 1960's. I know of one Han, now over 50, who grew up in the surrounding -# countryside and used Xinjiang time as a child. -# -# 6. Likewise for Kashgar and the rest of south Xinjiang I don't know of any -# start date for Xinjiang time. -# -# Without having access to local historical records, nor the ability to legally -# publish them, I would go with October 1, 1949, when Xinjiang became the Uyghur -# Autonomous Region under the PRC. (Before that Uyghurs, of course, would also -# not be using Beijing time, but some local time.) 
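
The practical upshot of the Xinjiang discussion above is that the database carries two entries for China whose offsets differ by two hours year-round: Asia/Shanghai at UT +08 and Asia/Urumqi at UT +06. A short illustration with Python's zoneinfo (Python 3.9+, subject to the installed tzdata):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Render one instant in Beijing time and in Xinjiang/Ürümqi time; the
# wall-clock readings should differ by the two hours described above.
t = datetime(2017, 8, 17, 12, 0, tzinfo=timezone.utc)
for name in ("Asia/Shanghai", "Asia/Urumqi"):
    print(name, t.astimezone(ZoneInfo(name)).strftime("%Y-%m-%d %H:%M %z"))

This should print 20:00 +0800 for Asia/Shanghai and 18:00 +0600 for Asia/Urumqi.
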
- -# From David Cochrane (2014-03-26): -# Just a confirmation that Ürümqi time was implemented in Ürümqi on 1 Feb 1986: -# https://content.time.com/time/magazine/article/0,9171,960684,00.html - -# From Luther Ma (2014-04-22): -# I have interviewed numerous people of various nationalities and from -# different localities in Xinjiang and can confirm the information in Guo's -# report regarding Xinjiang, as well as the Time article reference by David -# Cochrane. Whether officially recognized or not (and both are officially -# recognized), two separate times have been in use in Xinjiang since at least -# the Cultural Revolution: Xinjiang Time (XJT), aka Ürümqi Time or local time; -# and Beijing Time. There is no confusion in Xinjiang as to which name refers -# to which time. Both are widely used in the province, although in some -# population groups might be use one to the exclusion of the other. The only -# problem is that computers and smart phones list Ürümqi (or Kashgar) as -# having the same time as Beijing. - -# From Paul Eggert (2014-06-30): -# In the early days of the PRC, Tibet was given its own time zone (UT +06) -# but this was withdrawn in 1959 and never reinstated; see Tubten Khétsun, -# Memories of life in Lhasa under Chinese Rule, Columbia U Press, ISBN -# 978-0231142861 (2008), translator's introduction by Matthew Akester, p x. -# As this is before our 1970 cutoff, Tibet doesn't need a separate zone. -# -# Xinjiang Time is well-documented as being officially recognized. E.g., see -# "The Working-Calendar for The Xinjiang Uygur Autonomous Region Government" -# (2014-04-22). -# Unfortunately, we have no good records of time in Xinjiang before 1986. -# During the 20th century parts of Xinjiang were ruled by the Qing dynasty, -# the Republic of China, various warlords, the First and Second East Turkestan -# Republics, the Soviet Union, the Kuomintang, and the People's Republic of -# China, and tracking down all these organizations' timekeeping rules would be -# quite a trick. Approximate this lost history by a transition from LMT to -# UT +06 at the start of 1928, the year of accession of the warlord Jin Shuren, -# which happens to be the date given by Shanks & Pottenger (no doubt as a -# guess) as the transition from LMT. Ignore the usage of +08 before -# 1986-02-01 under the theory that the transition date to +08 is unknown and -# that the sort of users who prefer Asia/Urumqi now typically ignored the -# +08 mandate back then. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# Beijing time, used throughout China; represented by Shanghai. -Zone Asia/Shanghai 8:05:43 - LMT 1901 - 8:00 Shang C%sT 1949 - 8:00 PRC C%sT -# Xinjiang time, used by many in western China; represented by Ürümqi / Ürümchi -# / Wulumuqi. (Please use Asia/Shanghai if you prefer Beijing time.) -Zone Asia/Urumqi 5:50:20 - LMT 1928 - 6:00 - +06 - - -# Hong Kong (Xianggang) - -# Milne gives 7:36:41.7; round this. - -# From Lee Yiu Chung (2009-10-24): -# I found there are some mistakes for the...DST rule for Hong -# Kong. [According] to the DST record from Hong Kong Observatory (actually, -# it is not [an] observatory, but the official meteorological agency of HK, -# and also serves as the official timing agency), there are some missing -# and incorrect rules. Although the exact switch over time is missing, I -# think 3:30 is correct. 
The official DST record for Hong Kong can be
-# obtained from
-# http://www.hko.gov.hk/gts/time/Summertime.htm
-
-# From Arthur David Olson (2009-10-28):
-# Here are the dates given at
-# http://www.hko.gov.hk/gts/time/Summertime.htm
-# as of 2009-10-28:
-# Year Period
-# 1941 1 Apr to 30 Sep
-# 1942 Whole year
-# 1943 Whole year
-# 1944 Whole year
-# 1945 Whole year
-# 1946 20 Apr to 1 Dec
-# 1947 13 Apr to 30 Dec
-# 1948 2 May to 31 Oct
-# 1949 3 Apr to 30 Oct
-# 1950 2 Apr to 29 Oct
-# 1951 1 Apr to 28 Oct
-# 1952 6 Apr to 25 Oct
-# 1953 5 Apr to 1 Nov
-# 1954 21 Mar to 31 Oct
-# 1955 20 Mar to 6 Nov
-# 1956 18 Mar to 4 Nov
-# 1957 24 Mar to 3 Nov
-# 1958 23 Mar to 2 Nov
-# 1959 22 Mar to 1 Nov
-# 1960 20 Mar to 6 Nov
-# 1961 19 Mar to 5 Nov
-# 1962 18 Mar to 4 Nov
-# 1963 24 Mar to 3 Nov
-# 1964 22 Mar to 1 Nov
-# 1965 18 Apr to 17 Oct
-# 1966 17 Apr to 16 Oct
-# 1967 16 Apr to 22 Oct
-# 1968 21 Apr to 20 Oct
-# 1969 20 Apr to 19 Oct
-# 1970 19 Apr to 18 Oct
-# 1971 18 Apr to 17 Oct
-# 1972 16 Apr to 22 Oct
-# 1973 22 Apr to 21 Oct
-# 1973/74 30 Dec 73 to 20 Oct 74
-# 1975 20 Apr to 19 Oct
-# 1976 18 Apr to 17 Oct
-# 1977 Nil
-# 1978 Nil
-# 1979 13 May to 21 Oct
-# 1980 to Now Nil
-# The page does not give start or end times of day.
-# The page does not give a start date for 1942.
-# The page does not give an end date for 1945.
-# The Japanese occupation of Hong Kong began on 1941-12-25.
-# The Japanese surrender of Hong Kong was signed 1945-09-15.
-# For lack of anything better, use start of those days as the transition times.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule HK 1941 only - Apr 1 3:30 1:00 S
-Rule HK 1941 only - Sep 30 3:30 0 -
-Rule HK 1946 only - Apr 20 3:30 1:00 S
-Rule HK 1946 only - Dec 1 3:30 0 -
-Rule HK 1947 only - Apr 13 3:30 1:00 S
-Rule HK 1947 only - Dec 30 3:30 0 -
-Rule HK 1948 only - May 2 3:30 1:00 S
-Rule HK 1948 1951 - Oct lastSun 3:30 0 -
-Rule HK 1952 only - Oct 25 3:30 0 -
-Rule HK 1949 1953 - Apr Sun>=1 3:30 1:00 S
-Rule HK 1953 only - Nov 1 3:30 0 -
-Rule HK 1954 1964 - Mar Sun>=18 3:30 1:00 S
-Rule HK 1954 only - Oct 31 3:30 0 -
-Rule HK 1955 1964 - Nov Sun>=1 3:30 0 -
-Rule HK 1965 1976 - Apr Sun>=16 3:30 1:00 S
-Rule HK 1965 1976 - Oct Sun>=16 3:30 0 -
-Rule HK 1973 only - Dec 30 3:30 1:00 S
-Rule HK 1979 only - May Sun>=8 3:30 1:00 S
-Rule HK 1979 only - Oct Sun>=16 3:30 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Asia/Hong_Kong 7:36:42 - LMT 1904 Oct 30
- 8:00 HK HK%sT 1941 Dec 25
- 9:00 - JST 1945 Sep 15
- 8:00 HK HK%sT
-
-###############################################################################
-
-# Taiwan
-
-# From smallufo (2010-04-03):
-# According to Taiwan's CWB [Central Weather Bureau],
-# http://www.cwb.gov.tw/V6/astronomy/cdata/summert.htm
-# Taipei has DST in 1979 between July 1st and Sep 30.
-
-# From Yu-Cheng Chuang (2013-07-12):
-# On Dec 28, 1895, the Meiji Emperor announced Ordinance No. 167 of
-# Meiji Year 28 "The clause about standard time", mentioned that
-# Taiwan and Penghu Islands, as well as Yaeyama and Miyako Islands
-# (both in Okinawa) adopt the Western Standard Time which is based on
-# 120E. The adoption began from Jan 1, 1896. The original text can be
-# found on Wikisource:
-# https://ja.wikisource.org/wiki/標準時ニ關スル件_(公布時)
-# ... This could be the first adoption of time zone in Taiwan, because
-# during the Qing Dynasty, it seems that there was no time zone
-# declared officially.
-#
-# Later, in the beginning of World War II, on Sep 25, 1937, the Showa
-# Emperor announced Ordinance No.
529 of Showa Year 12 "The clause of -# revision in the ordinance No. 167 of Meiji year 28 about standard -# time", in which abolished the adoption of Western Standard Time in -# western islands (listed above), which means the whole Japan -# territory, including later occupations, adopt Japan Central Time -# (UTC+9). The adoption began on Oct 1, 1937. The original text can -# be found on Wikisource: -# https://ja.wikisource.org/wiki/明治二十八年勅令第百六十七號標準時ニ關スル件中改正ノ件 -# -# That is, the time zone of Taipei switched to UTC+9 on Oct 1, 1937. - -# From Yu-Cheng Chuang (2014-07-02): -# I've found more evidence about when the time zone was switched from UTC+9 -# back to UTC+8 after WW2. I believe it was on Sep 21, 1945. In a document -# during Japanese era [1] in which the officer told the staff to change time -# zone back to Western Standard Time (UTC+8) on Sep 21. And in another -# history page of National Cheng Kung University [2], on Sep 21 there is a -# note "from today, switch back to Western Standard Time". From these two -# materials, I believe that the time zone change happened on Sep 21. And -# today I have found another monthly journal called "The Astronomical Herald" -# from The Astronomical Society of Japan [3] in which it mentioned the fact -# that: -# -# 1. Standard Time of the Country (Japan) was adopted on Jan 1, 1888, using -# the time at 135E (GMT+9) -# -# 2. Standard Time of the Country was renamed to Central Standard Time, on Jan -# 1, 1898, and on the same day, the new territories Taiwan and Penghu islands, -# as well as Yaeyama and Miyako islands, adopted a new time zone called -# Western Standard Time, which is in GMT+8. -# -# 3. Western Standard Time was deprecated on Sep 30, 1937. From then all the -# territories of Japan adopted the same time zone, which is Central Standard -# Time. -# -# [1] Academica Historica, Taiwan: -# http://163.29.208.22:8080/govsaleShowImage/connect_img.php?s=00101738900090036&e=00101738900090037 -# [2] Nat'l Cheng Kung University 70th Anniversary Special Site: -# http://www.ncku.edu.tw/~ncku70/menu/001/01_01.htm -# [3] Yukio Niimi, The Standard Time in Japan (1997), p.475: -# http://www.asj.or.jp/geppou/archive_open/1997/pdf/19971001c.pdf - -# Yu-Cheng Chuang (2014-07-03): -# I finally have found the real official gazette about changing back to -# Western Standard Time on Sep 21 in Taiwan. It's Taiwan Governor-General -# Bulletin No. 386 in Showa 20 years (1945), published on Sep 19, 1945. [1] ... -# [It] abolishes Bulletin No. 207 in Showa 12 years (1937), which is a local -# bulletin in Taiwan for that Ordinance No. 529. It also mentioned that 1am on -# Sep 21, 1945 will be 12am on Sep 21. I think this bulletin is much more -# official than the one I mentioned in my first mail, because it's from the -# top-level government in Taiwan. If you're going to quote any resource, this -# would be a good one. -# [1] Taiwan Governor-General Gazette, No. 1018, Sep 19, 1945: -# http://db2.th.gov.tw/db2/view/viewImg.php?imgcode=0072031018a&num=19&bgn=019&end=019&otherImg=&type=gener - -# From Yu-Cheng Chuang (2014-07-02): -# In 1946, DST in Taiwan was from May 15 and ended on Sep 30. The info from -# Central Weather Bureau website was not correct. -# -# Original Bulletin: -# http://subtpg.tpg.gov.tw/og/image2.asp?f=03502F0AKM1AF -# http://subtpg.tpg.gov.tw/og/image2.asp?f=0350300AKM1B0 (cont.) -# -# In 1947, DST in Taiwan was expanded to Oct 31. 
There is a backup of that -# telegram announcement from Taiwan Province Government: -# -# http://subtpg.tpg.gov.tw/og/image2.asp?f=0360310AKZ431 -# -# Here is a brief translation: -# -# The Summer Time this year is adopted from midnight Apr 15 until Sep 20 -# midnight. To save (energy?) consumption, we're expanding Summer Time -# adoption till Oct 31 midnight. -# -# The Central Weather Bureau website didn't mention that, however it can -# be found from historical government announcement database. - -# From Paul Eggert (2014-07-03): -# As per Yu-Cheng Chuang, say that Taiwan was at UT +09 from 1937-10-01 -# until 1945-09-21 at 01:00, overriding Shanks & Pottenger. -# Likewise, use Yu-Cheng Chuang's data for DST in Taiwan. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Taiwan 1946 only - May 15 0:00 1:00 D -Rule Taiwan 1946 only - Oct 1 0:00 0 S -Rule Taiwan 1947 only - Apr 15 0:00 1:00 D -Rule Taiwan 1947 only - Nov 1 0:00 0 S -Rule Taiwan 1948 1951 - May 1 0:00 1:00 D -Rule Taiwan 1948 1951 - Oct 1 0:00 0 S -Rule Taiwan 1952 only - Mar 1 0:00 1:00 D -Rule Taiwan 1952 1954 - Nov 1 0:00 0 S -Rule Taiwan 1953 1959 - Apr 1 0:00 1:00 D -Rule Taiwan 1955 1961 - Oct 1 0:00 0 S -Rule Taiwan 1960 1961 - Jun 1 0:00 1:00 D -Rule Taiwan 1974 1975 - Apr 1 0:00 1:00 D -Rule Taiwan 1974 1975 - Oct 1 0:00 0 S -Rule Taiwan 1979 only - Jul 1 0:00 1:00 D -Rule Taiwan 1979 only - Oct 1 0:00 0 S - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# Taipei or Taibei or T'ai-pei -Zone Asia/Taipei 8:06:00 - LMT 1896 Jan 1 - 8:00 - CST 1937 Oct 1 - 9:00 - JST 1945 Sep 21 1:00 - 8:00 Taiwan C%sT - -# Macau (Macao, Aomen) -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Macau 1961 1962 - Mar Sun>=16 3:30 1:00 D -Rule Macau 1961 1964 - Nov Sun>=1 3:30 0 S -Rule Macau 1963 only - Mar Sun>=16 0:00 1:00 D -Rule Macau 1964 only - Mar Sun>=16 3:30 1:00 D -Rule Macau 1965 only - Mar Sun>=16 0:00 1:00 D -Rule Macau 1965 only - Oct 31 0:00 0 S -Rule Macau 1966 1971 - Apr Sun>=16 3:30 1:00 D -Rule Macau 1966 1971 - Oct Sun>=16 3:30 0 S -Rule Macau 1972 1974 - Apr Sun>=15 0:00 1:00 D -Rule Macau 1972 1973 - Oct Sun>=15 0:00 0 S -Rule Macau 1974 1977 - Oct Sun>=15 3:30 0 S -Rule Macau 1975 1977 - Apr Sun>=15 3:30 1:00 D -Rule Macau 1978 1980 - Apr Sun>=15 0:00 1:00 D -Rule Macau 1978 1980 - Oct Sun>=15 0:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Macau 7:34:20 - LMT 1912 Jan 1 - 8:00 Macau C%sT - - -############################################################################### - -# Cyprus - -# Milne says the Eastern Telegraph Company used 2:14:00. Stick with LMT. -# IATA SSIM (1998-09) has Cyprus using EU rules for the first time. - -# From Paul Eggert (2016-09-09): -# Yesterday's Cyprus Mail reports that Northern Cyprus followed Turkey's -# lead and switched from +02/+03 to +03 year-round. -# http://cyprus-mail.com/2016/09/08/two-time-zones-cyprus-turkey-will-not-turn-clocks-back-next-month/ -# -# From Even Scharning (2016-10-31): -# Looks like the time zone split in Cyprus went through last night. -# http://cyprus-mail.com/2016/10/30/cyprus-new-division-two-time-zones-now-reality/ - -# From Paul Eggert (2017-10-18): -# Northern Cyprus will reinstate winter time on October 29, thus -# staying in sync with the rest of Cyprus. See: Anastasiou A. -# Cyprus to remain united in time. Cyprus Mail 2017-10-17. 
-# https://cyprus-mail.com/2017/10/17/cyprus-remain-united-time/ - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Cyprus 1975 only - Apr 13 0:00 1:00 S -Rule Cyprus 1975 only - Oct 12 0:00 0 - -Rule Cyprus 1976 only - May 15 0:00 1:00 S -Rule Cyprus 1976 only - Oct 11 0:00 0 - -Rule Cyprus 1977 1980 - Apr Sun>=1 0:00 1:00 S -Rule Cyprus 1977 only - Sep 25 0:00 0 - -Rule Cyprus 1978 only - Oct 2 0:00 0 - -Rule Cyprus 1979 1997 - Sep lastSun 0:00 0 - -Rule Cyprus 1981 1998 - Mar lastSun 0:00 1:00 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Nicosia 2:13:28 - LMT 1921 Nov 14 - 2:00 Cyprus EE%sT 1998 Sep - 2:00 EUAsia EE%sT -Zone Asia/Famagusta 2:15:48 - LMT 1921 Nov 14 - 2:00 Cyprus EE%sT 1998 Sep - 2:00 EUAsia EE%sT 2016 Sep 8 - 3:00 - +03 2017 Oct 29 1:00u - 2:00 EUAsia EE%sT - -# Classically, Cyprus belongs to Asia; e.g. see Herodotus, Histories, I.72. -# However, for various reasons many users expect to find it under Europe. -Link Asia/Nicosia Europe/Nicosia - -# Georgia -# From Paul Eggert (1994-11-19): -# Today's _Economist_ (p 60) reports that Georgia moved its clocks forward -# an hour recently, due to a law proposed by Zurab Murvanidze, -# an MP who went on a hunger strike for 11 days to force discussion about it! -# We have no details, but we'll guess they didn't move the clocks back in fall. -# -# From Mathew Englander, quoting AP (1996-10-23 13:05-04): -# Instead of putting back clocks at the end of October, Georgia -# will stay on daylight savings time this winter to save energy, -# President Eduard Shevardnadze decreed Wednesday. -# -# From the BBC via Joseph S. Myers (2004-06-27): -# -# Georgia moved closer to Western Europe on Sunday... The former Soviet -# republic has changed its time zone back to that of Moscow. As a result it -# is now just four hours ahead of Greenwich Mean Time, rather than five hours -# ahead. The switch was decreed by the pro-Western president of Georgia, -# Mikheil Saakashvili, who said the change was partly prompted by the process -# of integration into Europe. - -# From Teimuraz Abashidze (2005-11-07): -# Government of Georgia ... decided to NOT CHANGE daylight savings time on -# [Oct.] 30, as it was done before during last more than 10 years. -# Currently, we are in fact GMT +4:00, as before 30 October it was GMT -# +3:00.... The problem is, there is NO FORMAL LAW or governmental document -# about it. As far as I can find, I was told, that there is no document, -# because we just DIDN'T ISSUE document about switching to winter time.... -# I don't know what can be done, especially knowing that some years ago our -# DST rules where changed THREE TIMES during one month. - -# Milne 1899 says Tbilisi (Tiflis) time was 2:59:05.7. -# Byalokoz 1919 says Georgia was 2:59:11. -# Go with Byalokoz. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Tbilisi 2:59:11 - LMT 1880 - 2:59:11 - TBMT 1924 May 2 # Tbilisi Mean Time - 3:00 - +03 1957 Mar - 4:00 RussiaAsia +04/+05 1991 Mar 31 2:00s - 3:00 RussiaAsia +03/+04 1992 - 3:00 E-EurAsia +03/+04 1994 Sep lastSun - 4:00 E-EurAsia +04/+05 1996 Oct lastSun - 4:00 1:00 +05 1997 Mar lastSun - 4:00 E-EurAsia +04/+05 2004 Jun 27 - 3:00 RussiaAsia +03/+04 2005 Mar lastSun 2:00 - 4:00 - +04 - -# East Timor - -# See Indonesia for the 1945 transition. 
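
Looking back at the Cyprus stanzas above: Asia/Famagusta left the EU rules for year-round +03 on 2016-09-08 and rejoined them on 2017-10-29, while Asia/Nicosia stayed on EU rules throughout, so the two zones should differ only inside that window. A minimal check with Python's zoneinfo (Python 3.9+; tzdata 2017c or later is assumed, since older databases lack the 2017 rejoin):

from datetime import datetime
from zoneinfo import ZoneInfo

# Probe before, during, and after the 2016-2017 Cyprus split.
for day in ("2016-06-01", "2017-01-15", "2018-01-15"):
    t = datetime.fromisoformat(day + "T12:00:00+00:00")
    print(day, *(t.astimezone(ZoneInfo(z)).utcoffset()
                 for z in ("Asia/Nicosia", "Asia/Famagusta")))

Only the middle probe, winter 2017, should show a difference: +02 for Nicosia against +03 for Famagusta.
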
- -# From João Carrascalão, brother of the former governor of East Timor, in -# East Timor may be late for its millennium -# (1999-12-26/31): -# Portugal tried to change the time forward in 1974 because the sun -# rises too early but the suggestion raised a lot of problems with the -# Timorese and I still don't think it would work today because it -# conflicts with their way of life. - -# From Paul Eggert (2000-12-04): -# We don't have any record of the above attempt. -# Most likely our records are incomplete, but we have no better data. - -# From Manoel de Almeida e Silva, Deputy Spokesman for the UN Secretary-General -# http://www.hri.org/news/world/undh/2000/00-08-16.undh.html -# (2000-08-16): -# The Cabinet of the East Timor Transition Administration decided -# today to advance East Timor's time by one hour. The time change, -# which will be permanent, with no seasonal adjustment, will happen at -# midnight on Saturday, September 16. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Dili 8:22:20 - LMT 1912 Jan 1 - 8:00 - +08 1942 Feb 21 23:00 - 9:00 - +09 1976 May 3 - 8:00 - +08 2000 Sep 17 0:00 - 9:00 - +09 - -# India - -# From Ian P. Beacock, in "A brief history of (modern) time", The Atlantic -# https://www.theatlantic.com/technology/archive/2015/12/the-creation-of-modern-time/421419/ -# (2015-12-22): -# In January 1906, several thousand cotton-mill workers rioted on the -# outskirts of Bombay.... They were protesting the proposed abolition of -# local time in favor of Indian Standard Time.... Journalists called this -# dispute the "Battle of the Clocks." It lasted nearly half a century. - -# From Paul Eggert (2017-04-20): -# Good luck trying to nail down old timekeeping records in India. -# "... in the nineteenth century ... Madras Observatory took its magnetic -# measurements on Göttingen time, its meteorological measurements on Madras -# (local) time, dropped its time ball on Greenwich (ocean navigator's) time, -# and distributed civil (local time)." -- Bartky IR. Selling the true time: -# 19th-century timekeeping in america. Stanford U Press (2000), 247 note 19. -# "A more potent cause of resistance to the general adoption of the present -# standard time lies in the fact that it is Madras time. The citizen of -# Bombay, proud of being 'primus in Indis' and of Calcutta, equally proud of -# his city being the Capital of India, and - for a part of the year - the Seat -# of the Supreme Government, alike look down on Madras, and refuse to change -# the time they are using, for that of what they regard as a benighted -# Presidency; while Madras, having for long given the standard time to the -# rest of India, would resist the adoption of any other Indian standard in its -# place." -- Oldham RD. On Time in India: a suggestion for its improvement. -# Proceedings of the Asiatic Society of Bengal (April 1899), 49-55. -# -# "In 1870 ... Madras time - 'now used by the telegraph and regulated from the -# only government observatory' - was suggested as a standard railway time, -# first to be adopted on the Great Indian Peninsular Railway (GIPR).... -# Calcutta, Bombay, and Karachi, were to be allowed to continue with their -# local time for civil purposes." - Prasad R. Tracks of Change: Railways and -# Everyday Life in Colonial India. Cambridge University Press (2016), 145. -# -# Reed S, Low F. The Indian Year Book 1936-37. Bennett, Coleman, pp 27-8. 
-# https://archive.org/details/in.ernet.dli.2015.282212 -# This lists +052110 as Madras local time used in railways, and says that on -# 1906-01-01 railways and telegraphs in India switched to +0530. Some -# municipalities retained their former time, and the time in Calcutta -# continued to depend on whether you were at the railway station or at -# government offices. Government time was at +055320 (according to Shanks) or -# at +0554 (according to the Indian Year Book). Railway time is more -# appropriate for our purposes, as it was better documented, it is what we do -# elsewhere (e.g., Europe/London before 1880), and after 1906 it was -# consistent in the region now identified by Asia/Kolkata. So, use railway -# time for 1870-1941. Shanks is our only (and dubious) source for the -# 1941-1945 data. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Kolkata 5:53:28 - LMT 1854 Jun 28 # Kolkata - 5:53:20 - HMT 1870 # Howrah Mean Time? - 5:21:10 - MMT 1906 Jan 1 # Madras local time - 5:30 - IST 1941 Oct - 5:30 1:00 +0630 1942 May 15 - 5:30 - IST 1942 Sep - 5:30 1:00 +0630 1945 Oct 15 - 5:30 - IST -# Since 1970 the following are like Asia/Kolkata: -# Andaman Is -# Lakshadweep (Laccadive, Minicoy and Amindivi Is) -# Nicobar Is - -# Indonesia -# -# From Paul Eggert (2014-09-06): -# The 1876 Report of the Secretary of the [US] Navy, p 306 says that Batavia -# civil time was 7:07:12.5; round to even for Jakarta. -# -# From Gwillim Law (2001-05-28), overriding Shanks & Pottenger: -# http://www.sumatera-inc.com/go_to_invest/about_indonesia.asp#standtime -# says that Indonesia's time zones changed on 1988-01-01. Looking at some -# time zone maps, I think that must refer to Western Borneo (Kalimantan Barat -# and Kalimantan Tengah) switching from UTC+8 to UTC+7. -# -# From Paul Eggert (2007-03-10): -# Here is another correction to Shanks & Pottenger. -# JohnTWB writes that Japanese forces did not surrender control in -# Indonesia until 1945-09-01 00:00 at the earliest (in Jakarta) and -# other formal surrender ceremonies were September 9, 11, and 13, plus -# September 12 for the regional surrender to Mountbatten in Singapore. -# These would be the earliest possible times for a change. -# Régimes horaires pour le monde entier, by Henri Le Corre, (Éditions -# Traditionnelles, 1987, Paris) says that Java and Madura switched -# from UT +09 to +07:30 on 1945-09-23, and gives 1944-09-01 for Jayapura -# (Hollandia). For now, assume all Indonesian locations other than Jayapura -# switched on 1945-09-23. -# -# From Paul Eggert (2013-08-11): -# Normally the tz database uses English-language abbreviations, but in -# Indonesia it's typical to use Indonesian-language abbreviations even -# when writing in English. For example, see the English-language -# summary published by the Time and Frequency Laboratory of the -# Research Center for Calibration, Instrumentation and Metrology, -# Indonesia, (2006-09-29). -# The time zone abbreviations and UT offsets are: -# -# WIB - +07 - Waktu Indonesia Barat (Indonesia western time) -# WITA - +08 - Waktu Indonesia Tengah (Indonesia central time) -# WIT - +09 - Waktu Indonesia Timur (Indonesia eastern time) -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# Java, Sumatra -Zone Asia/Jakarta 7:07:12 - LMT 1867 Aug 10 -# Shanks & Pottenger say the next transition was at 1924 Jan 1 0:13, -# but this must be a typo. 
- 7:07:12 - BMT 1923 Dec 31 23:47:12 # Batavia - 7:20 - +0720 1932 Nov - 7:30 - +0730 1942 Mar 23 - 9:00 - +09 1945 Sep 23 - 7:30 - +0730 1948 May - 8:00 - +08 1950 May - 7:30 - +0730 1964 - 7:00 - WIB -# west and central Borneo -Zone Asia/Pontianak 7:17:20 - LMT 1908 May - 7:17:20 - PMT 1932 Nov # Pontianak MT - 7:30 - +0730 1942 Jan 29 - 9:00 - +09 1945 Sep 23 - 7:30 - +0730 1948 May - 8:00 - +08 1950 May - 7:30 - +0730 1964 - 8:00 - WITA 1988 Jan 1 - 7:00 - WIB -# Sulawesi, Lesser Sundas, east and south Borneo -Zone Asia/Makassar 7:57:36 - LMT 1920 - 7:57:36 - MMT 1932 Nov # Macassar MT - 8:00 - +08 1942 Feb 9 - 9:00 - +09 1945 Sep 23 - 8:00 - WITA -# Maluku Islands, West Papua, Papua -Zone Asia/Jayapura 9:22:48 - LMT 1932 Nov - 9:00 - +09 1944 Sep 1 - 9:30 - +0930 1964 - 9:00 - WIT - -# Iran - -# From Roozbeh Pournader (2003-03-15): -# This is an English translation of what I just found (originally in Persian). -# The Gregorian dates in brackets are mine: -# -# Official Newspaper No. 13548-1370/6/25 [1991-09-16] -# No. 16760/T233 H 1370/6/10 [1991-09-01] -# -# The Rule About Change of the Official Time of the Country -# -# The Board of Ministers, in the meeting dated 1370/5/23 [1991-08-14], -# based on the suggestion number 2221/D dated 1370/4/22 [1991-07-13] -# of the Country's Organization for Official and Employment Affairs, -# and referring to the law for equating the working hours of workers -# and officers in the whole country dated 1359/4/23 [1980-07-14], and -# for synchronizing the official times of the country, agreed that: -# -# The official time of the country will should move forward one hour -# at the 24[:00] hours of the first day of Farvardin and should return -# to its previous state at the 24[:00] hours of the 30th day of -# Shahrivar. -# -# First Deputy to the President - Hassan Habibi -# -# From personal experience, that agrees with what has been followed -# for at least the last 5 years. Before that, for a few years, the -# date used was the first Thursday night of Farvardin and the last -# Thursday night of Shahrivar, but I can't give exact dates.... -# -# From Roozbeh Pournader (2005-04-05): -# The text of the Iranian law, in effect since 1925, clearly mentions -# that the true solar year is the measure, and there is no arithmetic -# leap year calculation involved. There has never been any serious -# plan to change that law.... -# -# From Paul Eggert (2006-03-22): -# Go with Shanks & Pottenger before Sept. 1991, and with Pournader thereafter. -# I used Ed Reingold's cal-persia in GNU Emacs 21.2 to check Persian dates, -# stopping after 2037 when 32-bit time_t's overflow. -# That cal-persia used Birashk's approximation, which disagrees with the solar -# calendar predictions for the year 2025, so I corrected those dates by hand. -# -# From Oscar van Vlijmen (2005-03-30), writing about future -# discrepancies between cal-persia and the Iranian calendar: -# For 2091 solar-longitude-after yields 2091-03-20 08:40:07.7 UT for -# the vernal equinox and that gets so close to 12:00 some local -# Iranian time that the definition of the correct location needs to be -# known exactly, amongst other factors. 2157 is even closer: -# 2157-03-20 08:37:15.5 UT. But the Gregorian year 2025 should give -# no interpretation problem whatsoever. By the way, another instant -# in the near future where there will be a discrepancy between -# arithmetical and astronomical Iranian calendars will be in 2058: -# vernal equinox on 2058-03-20 09:03:05.9 UT. 
The Java version of -# Reingold's/Dershowitz' calculator gives correctly the Gregorian date -# 2058-03-21 for 1 Farvardin 1437 (astronomical). -# -# From Steffen Thorsen (2006-03-22): -# Several of my users have reported that Iran will not observe DST anymore: -# http://www.irna.ir/en/news/view/line-17/0603193812164948.htm -# -# From Reuters (2007-09-16), with a heads-up from Jesper Nørgaard Welen: -# ... the Guardian Council ... approved a law on Sunday to re-introduce -# daylight saving time ... -# https://uk.reuters.com/article/oilRpt/idUKBLA65048420070916 -# -# From Roozbeh Pournader (2007-11-05): -# This is quoted from Official Gazette of the Islamic Republic of -# Iran, Volume 63, No. 18242, dated Tuesday 1386/6/24 -# [2007-10-16]. I am doing the best translation I can:... -# The official time of the country will be moved forward for one hour -# on the 24 hours of the first day of the month of Farvardin and will -# be changed back to its previous state on the 24 hours of the -# thirtieth day of Shahrivar. -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Iran 1978 1980 - Mar 21 0:00 1:00 D -Rule Iran 1978 only - Oct 21 0:00 0 S -Rule Iran 1979 only - Sep 19 0:00 0 S -Rule Iran 1980 only - Sep 23 0:00 0 S -Rule Iran 1991 only - May 3 0:00 1:00 D -Rule Iran 1992 1995 - Mar 22 0:00 1:00 D -Rule Iran 1991 1995 - Sep 22 0:00 0 S -Rule Iran 1996 only - Mar 21 0:00 1:00 D -Rule Iran 1996 only - Sep 21 0:00 0 S -Rule Iran 1997 1999 - Mar 22 0:00 1:00 D -Rule Iran 1997 1999 - Sep 22 0:00 0 S -Rule Iran 2000 only - Mar 21 0:00 1:00 D -Rule Iran 2000 only - Sep 21 0:00 0 S -Rule Iran 2001 2003 - Mar 22 0:00 1:00 D -Rule Iran 2001 2003 - Sep 22 0:00 0 S -Rule Iran 2004 only - Mar 21 0:00 1:00 D -Rule Iran 2004 only - Sep 21 0:00 0 S -Rule Iran 2005 only - Mar 22 0:00 1:00 D -Rule Iran 2005 only - Sep 22 0:00 0 S -Rule Iran 2008 only - Mar 21 0:00 1:00 D -Rule Iran 2008 only - Sep 21 0:00 0 S -Rule Iran 2009 2011 - Mar 22 0:00 1:00 D -Rule Iran 2009 2011 - Sep 22 0:00 0 S -Rule Iran 2012 only - Mar 21 0:00 1:00 D -Rule Iran 2012 only - Sep 21 0:00 0 S -Rule Iran 2013 2015 - Mar 22 0:00 1:00 D -Rule Iran 2013 2015 - Sep 22 0:00 0 S -Rule Iran 2016 only - Mar 21 0:00 1:00 D -Rule Iran 2016 only - Sep 21 0:00 0 S -Rule Iran 2017 2019 - Mar 22 0:00 1:00 D -Rule Iran 2017 2019 - Sep 22 0:00 0 S -Rule Iran 2020 only - Mar 21 0:00 1:00 D -Rule Iran 2020 only - Sep 21 0:00 0 S -Rule Iran 2021 2023 - Mar 22 0:00 1:00 D -Rule Iran 2021 2023 - Sep 22 0:00 0 S -Rule Iran 2024 only - Mar 21 0:00 1:00 D -Rule Iran 2024 only - Sep 21 0:00 0 S -Rule Iran 2025 2027 - Mar 22 0:00 1:00 D -Rule Iran 2025 2027 - Sep 22 0:00 0 S -Rule Iran 2028 2029 - Mar 21 0:00 1:00 D -Rule Iran 2028 2029 - Sep 21 0:00 0 S -Rule Iran 2030 2031 - Mar 22 0:00 1:00 D -Rule Iran 2030 2031 - Sep 22 0:00 0 S -Rule Iran 2032 2033 - Mar 21 0:00 1:00 D -Rule Iran 2032 2033 - Sep 21 0:00 0 S -Rule Iran 2034 2035 - Mar 22 0:00 1:00 D -Rule Iran 2034 2035 - Sep 22 0:00 0 S -# -# The following rules are approximations starting in the year 2038. -# These are the best post-2037 approximations available, given the -# restrictions of a single rule using a Gregorian-based data format. -# At some point this table will need to be extended, though quite -# possibly Iran will change the rules first. 
-Rule Iran 2036 max - Mar 21 0:00 1:00 D -Rule Iran 2036 max - Sep 21 0:00 0 S - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Tehran 3:25:44 - LMT 1916 - 3:25:44 - TMT 1946 # Tehran Mean Time - 3:30 - +0330 1977 Nov - 4:00 Iran +04/+05 1979 - 3:30 Iran +0330/+0430 - - -# Iraq -# -# From Jonathan Lennox (2000-06-12): -# An article in this week's Economist ("Inside the Saddam-free zone", p. 50 in -# the U.S. edition) on the Iraqi Kurds contains a paragraph: -# "The three northern provinces ... switched their clocks this spring and -# are an hour ahead of Baghdad." -# -# But Rives McDow (2000-06-18) quotes a contact in Iraqi-Kurdistan as follows: -# In the past, some Kurdish nationalists, as a protest to the Iraqi -# Government, did not adhere to daylight saving time. They referred -# to daylight saving as Saddam time. But, as of today, the time zone -# in Iraqi-Kurdistan is on standard time with Baghdad, Iraq. -# -# So we'll ignore the Economist's claim. - -# From Steffen Thorsen (2008-03-10): -# The cabinet in Iraq abolished DST last week, according to the following -# news sources (in Arabic): -# http://www.aljeeran.net/wesima_articles/news-20080305-98602.html -# http://www.aswataliraq.info/look/article.tpl?id=2047&IdLanguage=17&IdPublication=4&NrArticle=71743&NrIssue=1&NrSection=10 -# -# We have published a short article in English about the change: -# https://www.timeanddate.com/news/time/iraq-dumps-daylight-saving.html - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Iraq 1982 only - May 1 0:00 1:00 D -Rule Iraq 1982 1984 - Oct 1 0:00 0 S -Rule Iraq 1983 only - Mar 31 0:00 1:00 D -Rule Iraq 1984 1985 - Apr 1 0:00 1:00 D -Rule Iraq 1985 1990 - Sep lastSun 1:00s 0 S -Rule Iraq 1986 1990 - Mar lastSun 1:00s 1:00 D -# IATA SSIM (1991/1996) says Apr 1 12:01am UTC; guess the ':01' is a typo. -# Shanks & Pottenger say Iraq did not observe DST 1992/1997; ignore this. -# -Rule Iraq 1991 2007 - Apr 1 3:00s 1:00 D -Rule Iraq 1991 2007 - Oct 1 3:00s 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Baghdad 2:57:40 - LMT 1890 - 2:57:36 - BMT 1918 # Baghdad Mean Time? - 3:00 - +03 1982 May - 3:00 Iraq +03/+04 - - -############################################################################### - -# Israel - -# From Ephraim Silverberg (2001-01-11): -# -# I coined "IST/IDT" circa 1988. Until then there were three -# different abbreviations in use: -# -# JST Jerusalem Standard Time [Danny Braniss, Hebrew University] -# IZT Israel Zonal (sic) Time [Prof. Haim Papo, Technion] -# EEST Eastern Europe Standard Time [used by almost everyone else] -# -# Since timezones should be called by country and not capital cities, -# I ruled out JST. As Israel is in Asia Minor and not Eastern Europe, -# EEST was equally unacceptable. Since "zonal" was not compatible with -# any other timezone abbreviation, I felt that 'IST' was the way to go -# and, indeed, it has received almost universal acceptance in timezone -# settings in Israeli computers. -# -# In any case, I am happy to share timezone abbreviations with India, -# high on my favorite-country list (and not only because my wife's -# family is from India). 
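-
-# The Zion rule lines below are largely one-off transitions taken year by
-# year from the sources quoted in this section. A quick way to spot-check
-# them (a sketch only, assuming Python 3.9+ with the standard zoneinfo
-# module and a tzdata build that keeps pre-1970 transitions; some builds
-# truncate early data) is to probe a date inside a given DST span, e.g.
-# the unusual 1948 double daylight time (SAVE 2:00, LETTER/S "DD", which
-# the Asia/Jerusalem format I%sT renders as "IDDT"):
-#
-#   from datetime import datetime
-#   from zoneinfo import ZoneInfo
-#
-#   jerusalem = ZoneInfo("Asia/Jerusalem")
-#   t = datetime(1948, 7, 1, tzinfo=jerusalem)
-#   print(t.utcoffset(), t.tzname())  # expected: 4:00:00 IDDT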
- -# From Shanks & Pottenger: -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Zion 1940 only - Jun 1 0:00 1:00 D -Rule Zion 1942 1944 - Nov 1 0:00 0 S -Rule Zion 1943 only - Apr 1 2:00 1:00 D -Rule Zion 1944 only - Apr 1 0:00 1:00 D -Rule Zion 1945 only - Apr 16 0:00 1:00 D -Rule Zion 1945 only - Nov 1 2:00 0 S -Rule Zion 1946 only - Apr 16 2:00 1:00 D -Rule Zion 1946 only - Nov 1 0:00 0 S -Rule Zion 1948 only - May 23 0:00 2:00 DD -Rule Zion 1948 only - Sep 1 0:00 1:00 D -Rule Zion 1948 1949 - Nov 1 2:00 0 S -Rule Zion 1949 only - May 1 0:00 1:00 D -Rule Zion 1950 only - Apr 16 0:00 1:00 D -Rule Zion 1950 only - Sep 15 3:00 0 S -Rule Zion 1951 only - Apr 1 0:00 1:00 D -Rule Zion 1951 only - Nov 11 3:00 0 S -Rule Zion 1952 only - Apr 20 2:00 1:00 D -Rule Zion 1952 only - Oct 19 3:00 0 S -Rule Zion 1953 only - Apr 12 2:00 1:00 D -Rule Zion 1953 only - Sep 13 3:00 0 S -Rule Zion 1954 only - Jun 13 0:00 1:00 D -Rule Zion 1954 only - Sep 12 0:00 0 S -Rule Zion 1955 only - Jun 11 2:00 1:00 D -Rule Zion 1955 only - Sep 11 0:00 0 S -Rule Zion 1956 only - Jun 3 0:00 1:00 D -Rule Zion 1956 only - Sep 30 3:00 0 S -Rule Zion 1957 only - Apr 29 2:00 1:00 D -Rule Zion 1957 only - Sep 22 0:00 0 S -Rule Zion 1974 only - Jul 7 0:00 1:00 D -Rule Zion 1974 only - Oct 13 0:00 0 S -Rule Zion 1975 only - Apr 20 0:00 1:00 D -Rule Zion 1975 only - Aug 31 0:00 0 S -Rule Zion 1985 only - Apr 14 0:00 1:00 D -Rule Zion 1985 only - Sep 15 0:00 0 S -Rule Zion 1986 only - May 18 0:00 1:00 D -Rule Zion 1986 only - Sep 7 0:00 0 S -Rule Zion 1987 only - Apr 15 0:00 1:00 D -Rule Zion 1987 only - Sep 13 0:00 0 S - -# From Avigdor Finkelstein (2014-03-05): -# I check the Parliament (Knesset) records and there it's stated that the -# [1988] transition should take place on Saturday night, when the Sabbath -# ends and changes to Sunday. -Rule Zion 1988 only - Apr 10 0:00 1:00 D -Rule Zion 1988 only - Sep 4 0:00 0 S - -# From Ephraim Silverberg -# (1997-03-04, 1998-03-16, 1998-12-28, 2000-01-17, 2000-07-25, 2004-12-22, -# and 2005-02-17): - -# According to the Office of the Secretary General of the Ministry of -# Interior, there is NO set rule for Daylight-Savings/Standard time changes. -# One thing is entrenched in law, however: that there must be at least 150 -# days of daylight savings time annually. From 1993-1998, the change to -# daylight savings time was on a Friday morning from midnight IST to -# 1 a.m IDT; up until 1998, the change back to standard time was on a -# Saturday night from midnight daylight savings time to 11 p.m. standard -# time. 1996 is an exception to this rule where the change back to standard -# time took place on Sunday night instead of Saturday night to avoid -# conflicts with the Jewish New Year. In 1999, the change to -# daylight savings time was still on a Friday morning but from -# 2 a.m. IST to 3 a.m. IDT; furthermore, the change back to standard time -# was also on a Friday morning from 2 a.m. IDT to 1 a.m. IST for -# 1999 only. In the year 2000, the change to daylight savings time was -# similar to 1999, but although the change back will be on a Friday, it -# will take place from 1 a.m. IDT to midnight IST. Starting in 2001, all -# changes to/from will take place at 1 a.m. old time, but now there is no -# rule as to what day of the week it will take place in as the start date -# (except in 2003) is the night after the Passover Seder (i.e. 
the eve -# of the 16th of Nisan in the lunar Hebrew calendar) and the end date -# (except in 2002) is three nights before Yom Kippur [Day of Atonement] -# (the eve of the 7th of Tishrei in the lunar Hebrew calendar). - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Zion 1989 only - Apr 30 0:00 1:00 D -Rule Zion 1989 only - Sep 3 0:00 0 S -Rule Zion 1990 only - Mar 25 0:00 1:00 D -Rule Zion 1990 only - Aug 26 0:00 0 S -Rule Zion 1991 only - Mar 24 0:00 1:00 D -Rule Zion 1991 only - Sep 1 0:00 0 S -Rule Zion 1992 only - Mar 29 0:00 1:00 D -Rule Zion 1992 only - Sep 6 0:00 0 S -Rule Zion 1993 only - Apr 2 0:00 1:00 D -Rule Zion 1993 only - Sep 5 0:00 0 S - -# The dates for 1994-1995 were obtained from Office of the Spokeswoman for the -# Ministry of Interior, Jerusalem, Israel. The spokeswoman can be reached by -# calling the office directly at 972-2-6701447 or 972-2-6701448. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Zion 1994 only - Apr 1 0:00 1:00 D -Rule Zion 1994 only - Aug 28 0:00 0 S -Rule Zion 1995 only - Mar 31 0:00 1:00 D -Rule Zion 1995 only - Sep 3 0:00 0 S - -# The dates for 1996 were determined by the Minister of Interior of the -# time, Haim Ramon. The official announcement regarding 1996-1998 -# (with the dates for 1997-1998 no longer being relevant) can be viewed at: -# -# ftp://ftp.cs.huji.ac.il/pub/tz/announcements/1996-1998.ramon.ps.gz -# -# The dates for 1997-1998 were altered by his successor, Rabbi Eli Suissa. -# -# The official announcements for the years 1997-1999 can be viewed at: -# -# ftp://ftp.cs.huji.ac.il/pub/tz/announcements/YYYY.ps.gz -# -# where YYYY is the relevant year. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Zion 1996 only - Mar 15 0:00 1:00 D -Rule Zion 1996 only - Sep 16 0:00 0 S -Rule Zion 1997 only - Mar 21 0:00 1:00 D -Rule Zion 1997 only - Sep 14 0:00 0 S -Rule Zion 1998 only - Mar 20 0:00 1:00 D -Rule Zion 1998 only - Sep 6 0:00 0 S -Rule Zion 1999 only - Apr 2 2:00 1:00 D -Rule Zion 1999 only - Sep 3 2:00 0 S - -# The Knesset Interior Committee has changed the dates for 2000 for -# the third time in just over a year and have set new dates for the -# years 2001-2004 as well. -# -# The official announcement for the start date of 2000 can be viewed at: -# -# ftp://ftp.cs.huji.ac.il/pub/tz/announcements/2000-start.ps.gz -# -# The official announcement for the end date of 2000 and the dates -# for the years 2001-2004 can be viewed at: -# -# ftp://ftp.cs.huji.ac.il/pub/tz/announcements/2000-2004.ps.gz - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Zion 2000 only - Apr 14 2:00 1:00 D -Rule Zion 2000 only - Oct 6 1:00 0 S -Rule Zion 2001 only - Apr 9 1:00 1:00 D -Rule Zion 2001 only - Sep 24 1:00 0 S -Rule Zion 2002 only - Mar 29 1:00 1:00 D -Rule Zion 2002 only - Oct 7 1:00 0 S -Rule Zion 2003 only - Mar 28 1:00 1:00 D -Rule Zion 2003 only - Oct 3 1:00 0 S -Rule Zion 2004 only - Apr 7 1:00 1:00 D -Rule Zion 2004 only - Sep 22 1:00 0 S - -# The proposed law agreed upon by the Knesset Interior Committee on -# 2005-02-14 is that, for 2005 and beyond, DST starts at 02:00 the -# last Friday before April 2nd (i.e. the last Friday in March or April -# 1st itself if it falls on a Friday) and ends at 02:00 on the Saturday -# night _before_ the fast of Yom Kippur. 
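-#
-# "The last Friday before April 2nd" is exactly zic's "Mar Fri>=26": the
-# first Friday falling on or after March 26, which can be April 1 itself.
-# A small worked check (a sketch only, using nothing beyond Python's
-# standard datetime module):
-#
-#   from datetime import date, timedelta
-#
-#   def last_friday_before(d):
-#       d -= timedelta(days=1)        # strictly before d
-#       while d.weekday() != 4:       # Monday=0 ... Friday=4
-#           d -= timedelta(days=1)
-#       return d
-#
-#   for year in (2005, 2006, 2010):
-#       print(year, last_friday_before(date(year, 4, 2)))
-#   # 2005-04-01, 2006-03-31, 2010-03-26, matching the Rules below.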
-# -# Those who can read Hebrew can view the announcement at: -# -# ftp://ftp.cs.huji.ac.il/pub/tz/announcements/2005+beyond.ps - -# From Paul Eggert (2012-10-26): -# I used Ephraim Silverberg's dst-israel.el program -# (2005-02-20) -# along with Ed Reingold's cal-hebrew in GNU Emacs 21.4, -# to generate the transitions from 2005 through 2012. -# (I replaced "lastFri" with "Fri>=26" by hand.) -# The spring transitions all correspond to the following Rule: -# -# Rule Zion 2005 2012 - Mar Fri>=26 2:00 1:00 D -# -# but older zic implementations (e.g., Solaris 8) do not support -# "Fri>=26" to mean April 1 in years like 2005, so for now we list the -# springtime transitions explicitly. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Zion 2005 only - Apr 1 2:00 1:00 D -Rule Zion 2005 only - Oct 9 2:00 0 S -Rule Zion 2006 2010 - Mar Fri>=26 2:00 1:00 D -Rule Zion 2006 only - Oct 1 2:00 0 S -Rule Zion 2007 only - Sep 16 2:00 0 S -Rule Zion 2008 only - Oct 5 2:00 0 S -Rule Zion 2009 only - Sep 27 2:00 0 S -Rule Zion 2010 only - Sep 12 2:00 0 S -Rule Zion 2011 only - Apr 1 2:00 1:00 D -Rule Zion 2011 only - Oct 2 2:00 0 S -Rule Zion 2012 only - Mar Fri>=26 2:00 1:00 D -Rule Zion 2012 only - Sep 23 2:00 0 S - -# From Ephraim Silverberg (2013-06-27): -# On June 23, 2013, the Israeli government approved changes to the -# Time Decree Law. The next day, the changes passed the First Reading -# in the Knesset. The law is expected to pass the Second and Third -# (final) Readings by the beginning of September 2013. -# -# As of 2013, DST starts at 02:00 on the Friday before the last Sunday -# in March. DST ends at 02:00 on the last Sunday of October. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Zion 2013 max - Mar Fri>=23 2:00 1:00 D -Rule Zion 2013 max - Oct lastSun 2:00 0 S - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Jerusalem 2:20:54 - LMT 1880 - 2:20:40 - JMT 1918 # Jerusalem Mean Time? - 2:00 Zion I%sT - - - -############################################################################### - -# Japan - -# '9:00' and 'JST' is from Guy Harris. - -# From Paul Eggert (1995-03-06): -# Today's _Asahi Evening News_ (page 4) reports that Japan had -# daylight saving between 1948 and 1951, but "the system was discontinued -# because the public believed it would lead to longer working hours." - -# From Mayumi Negishi in the 2005-08-10 Japan Times: -# http://www.japantimes.co.jp/cgi-bin/getarticle.pl5?nn20050810f2.htm -# Occupation authorities imposed daylight-saving time on Japan on -# [1948-05-01].... But lack of prior debate and the execution of -# daylight-saving time just three days after the bill was passed generated -# deep hatred of the concept.... The Diet unceremoniously passed a bill to -# dump the unpopular system in October 1951, less than a month after the San -# Francisco Peace Treaty was signed. (A government poll in 1951 showed 53% -# of the Japanese wanted to scrap daylight-saving time, as opposed to 30% who -# wanted to keep it.) - -# From Paul Eggert (2006-03-22): -# Shanks & Pottenger write that DST in Japan during those years was as follows: -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Japan 1948 only - May Sun>=1 2:00 1:00 D -Rule Japan 1948 1951 - Sep Sat>=8 2:00 0 S -Rule Japan 1949 only - Apr Sun>=1 2:00 1:00 D -Rule Japan 1950 1951 - May Sun>=1 2:00 1:00 D -# but the only locations using it (for birth certificates, presumably, since -# their audience is astrologers) were US military bases. 
For now, assume -# that for most purposes daylight-saving time was observed; otherwise, what -# would have been the point of the 1951 poll? - -# From Hideyuki Suzuki (1998-11-09): -# 'Tokyo' usually stands for the former location of Tokyo Astronomical -# Observatory: 139 degrees 44' 40.90" E (9h 18m 58.727s), -# 35 degrees 39' 16.0" N. -# This data is from 'Rika Nenpyou (Chronological Scientific Tables) 1996' -# edited by National Astronomical Observatory of Japan.... -# JST (Japan Standard Time) has been used since 1888-01-01 00:00 (JST). -# The law is enacted on 1886-07-07. - -# From Hideyuki Suzuki (1998-11-16): -# The ordinance No. 51 (1886) established "standard time" in Japan, -# which stands for the time on 135 degrees E. -# In the ordinance No. 167 (1895), "standard time" was renamed to "central -# standard time". And the same ordinance also established "western standard -# time", which stands for the time on 120 degrees E.... But "western standard -# time" was abolished in the ordinance No. 529 (1937). In the ordinance No. -# 167, there is no mention regarding for what place western standard time is -# standard.... -# -# I wrote "ordinance" above, but I don't know how to translate. -# In Japanese it's "chokurei", which means ordinance from emperor. - -# From Yu-Cheng Chuang (2013-07-12): -# ...the Meiji Emperor announced Ordinance No. 167 of Meiji Year 28 "The clause -# about standard time" ... The adoption began from Jan 1, 1896. -# https://ja.wikisource.org/wiki/標準時ニ關スル件_(公布時) -# -# ...the Showa Emperor announced Ordinance No. 529 of Showa Year 12 ... which -# means the whole Japan territory, including later occupations, adopt Japan -# Central Time (UTC+9). The adoption began on Oct 1, 1937. -# https://ja.wikisource.org/wiki/明治二十八年勅令第百六十七號標準時ニ關スル件中改正ノ件 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Tokyo 9:18:59 - LMT 1887 Dec 31 15:00u - 9:00 Japan J%sT -# Since 1938, all Japanese possessions have been like Asia/Tokyo. - -# Jordan -# -# From -# Jordan Week (1999-07-01) via Steffen Thorsen (1999-09-09): -# Clocks in Jordan were forwarded one hour on Wednesday at midnight, -# in accordance with the government's decision to implement summer time -# all year round. -# -# From -# Jordan Week (1999-09-30) via Steffen Thorsen (1999-11-09): -# Winter time starts today Thursday, 30 September. Clocks will be turned back -# by one hour. This is the latest government decision and it's final! -# The decision was taken because of the increase in working hours in -# government's departments from six to seven hours. -# -# From Paul Eggert (2005-11-22): -# Starting 2003 transitions are from Steffen Thorsen's web site timeanddate.com. -# -# From Steffen Thorsen (2005-11-23): -# For Jordan I have received multiple independent user reports every year -# about DST end dates, as the end-rule is different every year. -# -# From Steffen Thorsen (2006-10-01), after a heads-up from Hilal Malawi: -# http://www.petranews.gov.jo/nepras/2006/Sep/05/4000.htm -# "Jordan will switch to winter time on Friday, October 27". -# - -# From Steffen Thorsen (2009-04-02): -# This single one might be good enough, (2009-03-24, Arabic): -# http://petra.gov.jo/Artical.aspx?Lng=2&Section=8&Artical=95279 -# -# Google's translation: -# -# > The Council of Ministers decided in 2002 to adopt the principle of timely -# > submission of the summer at 60 minutes as of midnight on the last Thursday -# > of the month of March of each year. -# -# So - this means the midnight between Thursday and Friday since 2002. 
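-
-# zic writes that instant as "Mar lastThu 24:00" in the Jordan rules
-# below: 24:00 on the last Thursday of March, i.e. the first moment of
-# the following Friday. A spot-check of the resulting spring-forward (a
-# sketch only, assuming Python 3.9+ with the standard zoneinfo module):
-#
-#   from datetime import datetime
-#   from zoneinfo import ZoneInfo
-#
-#   amman = ZoneInfo("Asia/Amman")
-#   # In 2012 the last Thursday of March was the 29th, so clocks jumped
-#   # from 24:00 Thursday (00:00 Friday) to 01:00 Friday, March 30.
-#   print(datetime(2012, 3, 29, 23, tzinfo=amman).utcoffset())  # 2:00:00
-#   print(datetime(2012, 3, 30, 1, tzinfo=amman).utcoffset())   # 3:00:00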
- -# From Arthur David Olson (2009-04-06): -# We still have Jordan switching to DST on Thursdays in 2000 and 2001. - -# From Steffen Thorsen (2012-10-25): -# Yesterday the government in Jordan announced that they will not -# switch back to standard time this winter, so the will stay on DST -# until about the same time next year (at least). -# http://www.petra.gov.jo/Public_News/Nws_NewsDetails.aspx?NewsID=88950 - -# From Steffen Thorsen (2013-12-11): -# Jordan Times and other sources say that Jordan is going back to -# UTC+2 on 2013-12-19 at midnight: -# http://jordantimes.com/govt-decides-to-switch-back-to-wintertime -# Official, in Arabic: -# http://www.petra.gov.jo/public_news/Nws_NewsDetails.aspx?Menu_ID=&Site_Id=2&lang=1&NewsID=133230&CatID=14 -# ... Our background/permalink about it -# https://www.timeanddate.com/news/time/jordan-reverses-dst-decision.html -# ... -# http://www.petra.gov.jo/Public_News/Nws_NewsDetails.aspx?lang=2&site_id=1&NewsID=133313&Type=P -# ... says midnight for the coming one and 1:00 for the ones in the future -# (and they will use DST again next year, using the normal schedule). - -# From Paul Eggert (2013-12-11): -# As Steffen suggested, consider the past 21-month experiment to be DST. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Jordan 1973 only - Jun 6 0:00 1:00 S -Rule Jordan 1973 1975 - Oct 1 0:00 0 - -Rule Jordan 1974 1977 - May 1 0:00 1:00 S -Rule Jordan 1976 only - Nov 1 0:00 0 - -Rule Jordan 1977 only - Oct 1 0:00 0 - -Rule Jordan 1978 only - Apr 30 0:00 1:00 S -Rule Jordan 1978 only - Sep 30 0:00 0 - -Rule Jordan 1985 only - Apr 1 0:00 1:00 S -Rule Jordan 1985 only - Oct 1 0:00 0 - -Rule Jordan 1986 1988 - Apr Fri>=1 0:00 1:00 S -Rule Jordan 1986 1990 - Oct Fri>=1 0:00 0 - -Rule Jordan 1989 only - May 8 0:00 1:00 S -Rule Jordan 1990 only - Apr 27 0:00 1:00 S -Rule Jordan 1991 only - Apr 17 0:00 1:00 S -Rule Jordan 1991 only - Sep 27 0:00 0 - -Rule Jordan 1992 only - Apr 10 0:00 1:00 S -Rule Jordan 1992 1993 - Oct Fri>=1 0:00 0 - -Rule Jordan 1993 1998 - Apr Fri>=1 0:00 1:00 S -Rule Jordan 1994 only - Sep Fri>=15 0:00 0 - -Rule Jordan 1995 1998 - Sep Fri>=15 0:00s 0 - -Rule Jordan 1999 only - Jul 1 0:00s 1:00 S -Rule Jordan 1999 2002 - Sep lastFri 0:00s 0 - -Rule Jordan 2000 2001 - Mar lastThu 0:00s 1:00 S -Rule Jordan 2002 2012 - Mar lastThu 24:00 1:00 S -Rule Jordan 2003 only - Oct 24 0:00s 0 - -Rule Jordan 2004 only - Oct 15 0:00s 0 - -Rule Jordan 2005 only - Sep lastFri 0:00s 0 - -Rule Jordan 2006 2011 - Oct lastFri 0:00s 0 - -Rule Jordan 2013 only - Dec 20 0:00 0 - -Rule Jordan 2014 max - Mar lastThu 24:00 1:00 S -Rule Jordan 2014 max - Oct lastFri 0:00s 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Amman 2:23:44 - LMT 1931 - 2:00 Jordan EE%sT - - -# Kazakhstan - -# From Kazakhstan Embassy's News Bulletin No. 11 -# (2005-03-21): -# The Government of Kazakhstan passed a resolution March 15 abolishing -# daylight saving time citing lack of economic benefits and health -# complications coupled with a decrease in productivity. -# -# From Branislav Kojic (in Astana) via Gwillim Law (2005-06-28): -# ... what happened was that the former Kazakhstan Eastern time zone -# was "blended" with the Central zone. Therefore, Kazakhstan now has -# two time zones, and difference between them is one hour. The zone -# closer to UTC is the former Western zone (probably still called the -# same), encompassing four provinces in the west: Aqtöbe, Atyraū, -# Mangghystaū, and West Kazakhstan. The other zone encompasses -# everything else.... 
I guess that would make Kazakhstan time zones
-# de jure UTC+5 and UTC+6 respectively.
-
-# From Stepan Golosunov (2016-03-27):
-# Review of the linked documents from http://adilet.zan.kz/
-# produced the following data for post-1991 Kazakhstan:
-#
-# 0. Act of the Cabinet of Ministers of the USSR
-# from 1991-02-04 No. 20
-# http://pravo.gov.ru/proxy/ips/?docbody=&nd=102010545
-# removed the extra hour ("decree time") on the territory of the USSR
-# starting with the last Sunday of March 1991.
-# It also allowed (but not mandated) Kazakh SSR, Kirghiz SSR, Tajik SSR,
-# Turkmen SSR and Uzbek SSR to not have "summer" time.
-#
-# The 1992-01-13 act also refers to the act of the Cabinet of Ministers
-# of the Kazakh SSR from 1991-03-20 No. 170 "About the act of the Cabinet
-# of Ministers of the USSR from 1991-02-04 No. 20" but I didn't find its
-# text.
-#
-# According to Izvestia newspaper No. 68 (23334) from 1991-03-20
-# (page 6; available at http://libinfo.org/newsr/newsr2574.djvu via
-# http://libinfo.org/index.php?id=58564) on 1991-03-31 at 2:00 during
-# transition to "summer" time:
-# Republic of Georgia, Latvian SSR, Lithuanian SSR, SSR Moldova,
-# Estonian SSR; Komi ASSR; Kaliningrad oblast; Nenets autonomous okrug
-# were to move clocks 1 hour forward.
-# Kazakh SSR (excluding Uralsk oblast); Republic of Kyrgyzstan, Tajik
-# SSR; Andijan, Jizzakh, Namangan, Sirdarya, Tashkent, Fergana oblasts
-# of the Uzbek SSR were to move clocks 1 hour backwards.
-# Other territories were to not move clocks.
-# When the "summer" time would end on 1991-09-29, clocks were to be
-# moved 1 hour backwards on the territory of the USSR excluding
-# Kazakhstan, Kirghizia, Uzbekistan, Turkmenia, Tajikistan.
-#
-# Apparently there were last-minute changes. Apparently Kazakh act No. 170
-# was one such change.
-#
-# https://ru.wikipedia.org/wiki/Декретное_время
-# claims that Sovetskaya Rossiya newspaper on 1991-03-29 published that
-# Nenets autonomous okrug, Komi and Kazakhstan (excluding Uralsk oblast)
-# were to not move clocks and Uralsk oblast was to move clocks
-# forward; on 1991-09-29 Kazakhstan was to move clocks backwards.
-# (Probably there were changes even after that publication. There is an
-# article claiming that Kaliningrad oblast decided on 1991-03-29 to not
-# move clocks.)
-#
-# This implies that on 1991-03-31 Asia/Oral remained on +04/+05 while
-# the rest of Kazakhstan switched from +06/+07 to +05/+06 or from +05/+06
-# to +04/+05. It's unclear how Qyzylorda oblast moved into the fifth
-# time belt. (By switching from +04/+05 to +05/+06 on 1991-09-29?) ...
-#
-# 1. Act of the Cabinet of Ministers of the Republic of Kazakhstan
-# from 1992-01-13 No. 28
-# http://adilet.zan.kz/rus/docs/P920000028_
-# (text includes modification from the 1996 act)
-# introduced new rules for calculation of time, mirroring Russian
-# 1992-01-08 act. It specified that time would be calculated
-# according to time belts plus extra hour ("decree time"), moved clocks
-# on the whole territory of Kazakhstan 1 hour forward on 1992-01-19 at
-# 2:00, specified DST rules. It acknowledged that Kazakhstan was
-# located in the fourth and the fifth time belts and specified the
-# border between them to be located east of Qostanay and Aktyubinsk
-# oblasts (notably including Turgai and Qyzylorda oblasts into the fifth
-# time belt).
-# -# This means switch on 1992-01-19 at 2:00 from +04/+05 to +05/+06 for -# Asia/Aqtau, Asia/Aqtobe, Asia/Oral, Atyraū and Qostanay oblasts; from -# +05/+06 to +06/+07 for Asia/Almaty and Asia/Qyzylorda (and Arkalyk).... -# -# 2. Act of the Cabinet of Ministers of the Republic of Kazakhstan -# from 1992-03-27 No. 284 -# http://adilet.zan.kz/rus/docs/P920000284_ -# cancels extra hour ("decree time") for Uralsk and Qyzylorda oblasts -# since the last Sunday of March 1992, while keeping them in the fourth -# and the fifth time belts respectively. -# -# 3. Order of the Prime Minister of the Republic of Kazakhstan -# from 1994-09-23 No. 384 -# http://adilet.zan.kz/rus/docs/R940000384_ -# cancels the extra hour ("decree time") on the territory of Mangghystaū -# oblast since the last Sunday of September 1994 (saying that time on -# the territory would correspond to the third time belt as a -# result).... -# -# 4. Act of the Government of the Republic of Kazakhstan -# from 1996-05-08 No. 575 -# http://adilet.zan.kz/rus/docs/P960000575_ -# amends the 1992-01-13 act to end summer time in October instead -# of September, mirroring identical Russian change from 1996-04-23 act. -# -# 5. Act of the Government of the Republic of Kazakhstan -# from 1999-03-26 No. 305 -# http://adilet.zan.kz/rus/docs/P990000305_ -# cancels the extra hour ("decree time") for Atyraū oblast since the -# last Sunday of March 1999 while retaining the oblast in the fourth -# time belt. -# -# This means change from +05/+06 to +04/+05.... -# -# 6. Act of the Government of the Republic of Kazakhstan -# from 2000-11-23 No. 1749 -# http://adilet.zan.kz/rus/archive/docs/P000001749_/23.11.2000 -# replaces the previous five documents. -# -# The only changes I noticed are in definition of the border between the -# fourth and the fifth time belts. They account for changes in spelling -# and administrative division (splitting of Turgai oblast in 1997 -# probably changed time in territories incorporated into Qostanay oblast -# (including Arkalyk) from +06/+07 to +05/+06) and move Qyzylorda oblast -# from being in the fifth time belt and not using decree time into the -# fourth time belt (no change in practice). -# -# 7. Act of the Government of the Republic of Kazakhstan -# from 2003-12-29 No. 1342 -# http://adilet.zan.kz/rus/docs/P030001342_ -# modified the 2000-11-23 act. No relevant changes, apparently. -# -# 8. Act of the Government of the Republic of Kazakhstan -# from 2004-07-20 No. 775 -# http://adilet.zan.kz/rus/archive/docs/P040000775_/20.07.2004 -# modified the 2000-11-23 act to move Qostanay and Qyzylorda oblasts into -# the fifth time belt and add Aktobe oblast to the list of regions not -# using extra hour ("decree time"), leaving Kazakhstan with only 2 time -# zones (+04/+05 and +06/+07). The changes were to be implemented -# during DST transitions in 2004 and 2005 but the acts got radically -# amended before implementation happened. -# -# 9. Act of the Government of the Republic of Kazakhstan -# from 2004-09-15 No. 1059 -# http://adilet.zan.kz/rus/docs/P040001059_ -# modified the 2000-11-23 act to remove exceptions from the "decree time" -# (leaving Kazakhstan in +05/+06 and +06/+07 zones), amended the -# 2004-07-20 act to implement changes for Atyraū, West Kazakhstan, -# Qostanay, Qyzylorda and Mangghystaū oblasts by not moving clocks -# during the 2004 transition to "winter" time. 
-# -# This means transition from +04/+05 to +05/+06 for Atyraū oblast (no -# zone currently), Asia/Oral, Asia/Aqtau and transition from +05/+06 to -# +06/+07 for Qostanay oblast (Qostanay and Arkalyk, no zones currently) -# and Asia/Qyzylorda on 2004-10-31 at 3:00.... -# -# 10. Act of the Government of the Republic of Kazakhstan -# from 2005-03-15 No. 231 -# http://adilet.zan.kz/rus/docs/P050000231_ -# removes DST provisions from the 2000-11-23 act, removes most of the -# (already implemented) provisions from the 2004-07-20 and 2004-09-15 -# acts, comes into effect 10 days after official publication. -# The only practical effect seems to be the abolition of the summer -# time. -# -# Unamended version of the act of the Government of the Russian Federation -# No. 23 from 1992-01-08 [See 'europe' file for details]. -# Kazakh 1992-01-13 act appears to provide the same rules and 1992-03-27 -# act was to be enacted on the last Sunday of March 1992. - -# From Stepan Golosunov (2016-11-08): -# Turgai reorganization should affect only southern part of Qostanay -# oblast. Which should probably be separated into Asia/Arkalyk zone. -# (There were also 1970, 1988 and 1990 Turgai oblast reorganizations -# according to wikipedia.) -# -# [For Qostanay] http://www.ng.kz/gazeta/195/hranit/ -# suggests that clocks were to be moved 40 minutes backwards on -# 1920-01-01 to the fourth time belt. But I do not understand -# how that could happen.... -# -# [For Atyrau and Oral] 1919 decree -# (http://www.worldtimezone.com/dst_news/dst_news_russia-1919-02-08.html -# and in Byalokoz) lists Ural river (plus 10 versts on its left bank) in -# the third time belt (before 1930 this means +03). - -# From Paul Eggert (2016-12-06): -# The tables below reflect Golosunov's remarks, with exceptions as noted. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# -# Almaty (formerly Alma-Ata), representing most locations in Kazakhstan -# This includes KZ-AKM, KZ-ALA, KZ-ALM, KZ-AST, KZ-BAY, KZ-VOS, KZ-ZHA, -# KZ-KAR, KZ-SEV, KZ-PAV, and KZ-YUZ. -Zone Asia/Almaty 5:07:48 - LMT 1924 May 2 # or Alma-Ata - 5:00 - +05 1930 Jun 21 - 6:00 RussiaAsia +06/+07 1991 Mar 31 2:00s - 5:00 RussiaAsia +05/+06 1992 Jan 19 2:00s - 6:00 RussiaAsia +06/+07 2004 Oct 31 2:00s - 6:00 - +06 -# Qyzylorda (aka Kyzylorda, Kizilorda, Kzyl-Orda, etc.) (KZ-KZY) -# This currently includes Qostanay (aka Kostanay, Kustanay) (KZ-KUS); -# see comments below. -Zone Asia/Qyzylorda 4:21:52 - LMT 1924 May 2 - 4:00 - +04 1930 Jun 21 - 5:00 - +05 1981 Apr 1 - 5:00 1:00 +06 1981 Oct 1 - 6:00 - +06 1982 Apr 1 - 5:00 RussiaAsia +05/+06 1991 Mar 31 2:00s - 4:00 RussiaAsia +04/+05 1991 Sep 29 2:00s - 5:00 RussiaAsia +05/+06 1992 Jan 19 2:00s - 6:00 RussiaAsia +06/+07 1992 Mar 29 2:00s - 5:00 RussiaAsia +05/+06 2004 Oct 31 2:00s - 6:00 - +06 -# The following zone is like Asia/Qyzylorda except for being one -# hour earlier from 1991-09-29 to 1992-03-29. The 1991/2 rules for -# Qostanay are unclear partly because of the 1997 Turgai -# reorganization, so this zone is commented out for now. 
-#Zone Asia/Qostanay 4:14:20 - LMT 1924 May 2 -# 4:00 - +04 1930 Jun 21 -# 5:00 - +05 1981 Apr 1 -# 5:00 1:00 +06 1981 Oct 1 -# 6:00 - +06 1982 Apr 1 -# 5:00 RussiaAsia +05/+06 1991 Mar 31 2:00s -# 4:00 RussiaAsia +04/+05 1992 Jan 19 2:00s -# 5:00 RussiaAsia +05/+06 2004 Oct 31 2:00s -# 6:00 - +06 -# -# Aqtöbe (aka Aktobe, formerly Aktyubinsk) (KZ-AKT) -Zone Asia/Aqtobe 3:48:40 - LMT 1924 May 2 - 4:00 - +04 1930 Jun 21 - 5:00 - +05 1981 Apr 1 - 5:00 1:00 +06 1981 Oct 1 - 6:00 - +06 1982 Apr 1 - 5:00 RussiaAsia +05/+06 1991 Mar 31 2:00s - 4:00 RussiaAsia +04/+05 1992 Jan 19 2:00s - 5:00 RussiaAsia +05/+06 2004 Oct 31 2:00s - 5:00 - +05 -# Mangghystaū (KZ-MAN) -# Aqtau was not founded until 1963, but it represents an inhabited region, -# so include time stamps before 1963. -Zone Asia/Aqtau 3:21:04 - LMT 1924 May 2 - 4:00 - +04 1930 Jun 21 - 5:00 - +05 1981 Oct 1 - 6:00 - +06 1982 Apr 1 - 5:00 RussiaAsia +05/+06 1991 Mar 31 2:00s - 4:00 RussiaAsia +04/+05 1992 Jan 19 2:00s - 5:00 RussiaAsia +05/+06 1994 Sep 25 2:00s - 4:00 RussiaAsia +04/+05 2004 Oct 31 2:00s - 5:00 - +05 -# Atyraū (KZ-ATY) is like Mangghystaū except it switched from -# +04/+05 to +05/+06 in spring 1999, not fall 1994. -Zone Asia/Atyrau 3:27:44 - LMT 1924 May 2 - 3:00 - +03 1930 Jun 21 - 5:00 - +05 1981 Oct 1 - 6:00 - +06 1982 Apr 1 - 5:00 RussiaAsia +05/+06 1991 Mar 31 2:00s - 4:00 RussiaAsia +04/+05 1992 Jan 19 2:00s - 5:00 RussiaAsia +05/+06 1999 Mar 28 2:00s - 4:00 RussiaAsia +04/+05 2004 Oct 31 2:00s - 5:00 - +05 -# West Kazakhstan (KZ-ZAP) -# From Paul Eggert (2016-03-18): -# The 1989 transition is from USSR act No. 227 (1989-03-14). -Zone Asia/Oral 3:25:24 - LMT 1924 May 2 # or Ural'sk - 3:00 - +03 1930 Jun 21 - 5:00 - +05 1981 Apr 1 - 5:00 1:00 +06 1981 Oct 1 - 6:00 - +06 1982 Apr 1 - 5:00 RussiaAsia +05/+06 1989 Mar 26 2:00s - 4:00 RussiaAsia +04/+05 1992 Jan 19 2:00s - 5:00 RussiaAsia +05/+06 1992 Mar 29 2:00s - 4:00 RussiaAsia +04/+05 2004 Oct 31 2:00s - 5:00 - +05 - -# Kyrgyzstan (Kirgizstan) -# Transitions through 1991 are from Shanks & Pottenger. - -# From Paul Eggert (2005-08-15): -# According to an article dated today in the Kyrgyzstan Development Gateway -# http://eng.gateway.kg/cgi-bin/page.pl?id=1&story_name=doc9979.shtml -# Kyrgyzstan is canceling the daylight saving time system. I take the article -# to mean that they will leave their clocks at 6 hours ahead of UTC. -# From Malik Abdugaliev (2005-09-21): -# Our government cancels daylight saving time 6th of August 2005. -# From 2005-08-12 our GMT-offset is +6, w/o any daylight saving. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Kyrgyz 1992 1996 - Apr Sun>=7 0:00s 1:00 S -Rule Kyrgyz 1992 1996 - Sep lastSun 0:00 0 - -Rule Kyrgyz 1997 2005 - Mar lastSun 2:30 1:00 S -Rule Kyrgyz 1997 2004 - Oct lastSun 2:30 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Bishkek 4:58:24 - LMT 1924 May 2 - 5:00 - +05 1930 Jun 21 - 6:00 RussiaAsia +06/+07 1991 Mar 31 2:00s - 5:00 RussiaAsia +05/+06 1991 Aug 31 2:00 - 5:00 Kyrgyz +05/+06 2005 Aug 12 - 6:00 - +06 - -############################################################################### - -# Korea (North and South) - -# From Annie I. Bang (2006-07-10): -# http://www.koreaherald.com/view.php?ud=200607100012 -# Korea ran a daylight saving program from 1949-61 but stopped it -# during the 1950-53 Korean War. The system was temporarily enforced -# between 1987 and 1988 ... 
- -# From Sanghyuk Jung (2014-10-29): -# https://mm.icann.org/pipermail/tz/2014-October/021830.html -# According to the Korean Wikipedia -# https://ko.wikipedia.org/wiki/한국_표준시 -# [oldid=12896437 2014-09-04 08:03 UTC] -# DST in Republic of Korea was as follows.... And I checked old -# newspapers in Korean, all articles correspond with data in Wikipedia. -# For example, the article in 1948 (Korean Language) proved that DST -# started at June 1 in that year. For another example, the article in -# 1988 said that DST started at 2:00 AM in that year. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule ROK 1948 only - Jun 1 0:00 1:00 D -Rule ROK 1948 only - Sep 13 0:00 0 S -Rule ROK 1949 only - Apr 3 0:00 1:00 D -Rule ROK 1949 1951 - Sep Sun>=8 0:00 0 S -Rule ROK 1950 only - Apr 1 0:00 1:00 D -Rule ROK 1951 only - May 6 0:00 1:00 D -Rule ROK 1955 only - May 5 0:00 1:00 D -Rule ROK 1955 only - Sep 9 0:00 0 S -Rule ROK 1956 only - May 20 0:00 1:00 D -Rule ROK 1956 only - Sep 30 0:00 0 S -Rule ROK 1957 1960 - May Sun>=1 0:00 1:00 D -Rule ROK 1957 1960 - Sep Sun>=18 0:00 0 S -Rule ROK 1987 1988 - May Sun>=8 2:00 1:00 D -Rule ROK 1987 1988 - Oct Sun>=8 3:00 0 S - -# From Paul Eggert (2016-08-23): -# The Korean Wikipedia entry gives the following sources for UT offsets: -# -# 1908: Official Journal Article No. 3994 (decree No. 5) -# 1912: Governor-General of Korea Official Gazette Issue No. 367 -# (Announcement No. 338) -# 1954: Presidential Decree No. 876 (1954-03-17) -# 1961: Law No. 676 (1961-08-07) -# -# (Another source "1987: Law No. 3919 (1986-12-31)" was in the 2014-10-30 -# edition of the Korean Wikipedia entry.) -# -# I guessed that time zone abbreviations through 1945 followed the same -# rules as discussed under Taiwan, with nominal switches from JST to KST -# when the respective cities were taken over by the Allies after WWII. -# -# For Pyongyang, guess no changes from World War II until 2015, as we -# have no information otherwise. - -# From Steffen Thorsen (2015-08-07): -# According to many news sources, North Korea is going to change to -# the 8:30 time zone on August 15, one example: -# http://www.bbc.com/news/world-asia-33815049 -# -# From Paul Eggert (2015-08-15): -# Bells rang out midnight (00:00) Friday as part of the celebrations. See: -# Talmadge E. North Korea celebrates new time zone, 'Pyongyang Time' -# http://news.yahoo.com/north-korea-celebrates-time-zone-pyongyang-time-164038128.html -# There is no common English-language abbreviation for this time zone. -# Use KST, as that's what we already use for 1954-1961 in ROK. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Seoul 8:27:52 - LMT 1908 Apr 1 - 8:30 - KST 1912 Jan 1 - 9:00 - JST 1945 Sep 8 - 9:00 - KST 1954 Mar 21 - 8:30 ROK K%sT 1961 Aug 10 - 9:00 ROK K%sT -Zone Asia/Pyongyang 8:23:00 - LMT 1908 Apr 1 - 8:30 - KST 1912 Jan 1 - 9:00 - JST 1945 Aug 24 - 9:00 - KST 2015 Aug 15 00:00 - 8:30 - KST - -############################################################################### - -# Kuwait -# See Asia/Riyadh. - -# Laos -# See Asia/Bangkok. 
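-
-# The Asia/Pyongyang table above encodes one of the handful of half-hour
-# offsets in this file. A spot-check of the 2015-08-15 change (a sketch
-# only, assuming Python 3.9+ with the standard zoneinfo module and a
-# tzdata release from August 2015 or later):
-#
-#   from datetime import datetime
-#   from zoneinfo import ZoneInfo
-#
-#   pyongyang = ZoneInfo("Asia/Pyongyang")
-#   print(datetime(2015, 8, 14, 12, tzinfo=pyongyang).utcoffset())  # 9:00:00
-#   print(datetime(2015, 8, 15, 12, tzinfo=pyongyang).utcoffset())  # 8:30:00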
- - -# Lebanon -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Lebanon 1920 only - Mar 28 0:00 1:00 S -Rule Lebanon 1920 only - Oct 25 0:00 0 - -Rule Lebanon 1921 only - Apr 3 0:00 1:00 S -Rule Lebanon 1921 only - Oct 3 0:00 0 - -Rule Lebanon 1922 only - Mar 26 0:00 1:00 S -Rule Lebanon 1922 only - Oct 8 0:00 0 - -Rule Lebanon 1923 only - Apr 22 0:00 1:00 S -Rule Lebanon 1923 only - Sep 16 0:00 0 - -Rule Lebanon 1957 1961 - May 1 0:00 1:00 S -Rule Lebanon 1957 1961 - Oct 1 0:00 0 - -Rule Lebanon 1972 only - Jun 22 0:00 1:00 S -Rule Lebanon 1972 1977 - Oct 1 0:00 0 - -Rule Lebanon 1973 1977 - May 1 0:00 1:00 S -Rule Lebanon 1978 only - Apr 30 0:00 1:00 S -Rule Lebanon 1978 only - Sep 30 0:00 0 - -Rule Lebanon 1984 1987 - May 1 0:00 1:00 S -Rule Lebanon 1984 1991 - Oct 16 0:00 0 - -Rule Lebanon 1988 only - Jun 1 0:00 1:00 S -Rule Lebanon 1989 only - May 10 0:00 1:00 S -Rule Lebanon 1990 1992 - May 1 0:00 1:00 S -Rule Lebanon 1992 only - Oct 4 0:00 0 - -Rule Lebanon 1993 max - Mar lastSun 0:00 1:00 S -Rule Lebanon 1993 1998 - Sep lastSun 0:00 0 - -Rule Lebanon 1999 max - Oct lastSun 0:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Beirut 2:22:00 - LMT 1880 - 2:00 Lebanon EE%sT - -# Malaysia -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule NBorneo 1935 1941 - Sep 14 0:00 0:20 TS # one-Third Summer -Rule NBorneo 1935 1941 - Dec 14 0:00 0 - -# -# peninsular Malaysia -# taken from Mok Ly Yng (2003-10-30) -# http://www.math.nus.edu.sg/aslaksen/teaching/timezone.html -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Kuala_Lumpur 6:46:46 - LMT 1901 Jan 1 - 6:55:25 - SMT 1905 Jun 1 # Singapore M.T. - 7:00 - +07 1933 Jan 1 - 7:00 0:20 +0720 1936 Jan 1 - 7:20 - +0720 1941 Sep 1 - 7:30 - +0730 1942 Feb 16 - 9:00 - +09 1945 Sep 12 - 7:30 - +0730 1982 Jan 1 - 8:00 - +08 -# Sabah & Sarawak -# From Paul Eggert (2014-08-12): -# The data entries here are mostly from Shanks & Pottenger, but the 1942, 1945 -# and 1982 transition dates are from Mok Ly Yng. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Kuching 7:21:20 - LMT 1926 Mar - 7:30 - +0730 1933 - 8:00 NBorneo +08/+0820 1942 Feb 16 - 9:00 - +09 1945 Sep 12 - 8:00 - +08 - -# Maldives -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Indian/Maldives 4:54:00 - LMT 1880 # Male - 4:54:00 - MMT 1960 # Male Mean Time - 5:00 - +05 - -# Mongolia - -# Shanks & Pottenger say that Mongolia has three time zones, but -# The USNO (1995-12-21) and the CIA map Standard Time Zones of the World -# (2005-03) both say that it has just one. - -# From Oscar van Vlijmen (1999-12-11): -# General Information Mongolia -# (1999-09) -# "Time: Mongolia has two time zones. Three westernmost provinces of -# Bayan-Ölgii, Uvs, and Hovd are one hour earlier than the capital city, and -# the rest of the country follows the Ulaanbaatar time, which is UTC/GMT plus -# eight hours." - -# From Rives McDow (1999-12-13): -# Mongolia discontinued the use of daylight savings time in 1999; 1998 -# being the last year it was implemented. The dates of implementation I am -# unsure of, but most probably it was similar to Russia, except for the time -# of implementation may have been different.... -# Some maps in the past have indicated that there was an additional time -# zone in the eastern part of Mongolia, including the provinces of Dornod, -# Sükhbaatar, and possibly Khentii. - -# From Paul Eggert (1999-12-15): -# Naming and spelling is tricky in Mongolia. 
-# We'll use Hovd (also spelled Chovd and Khovd) to represent the west zone; -# the capital of the Hovd province is sometimes called Hovd, sometimes Dund-Us, -# and sometimes Jirgalanta (with variant spellings), but the name Hovd -# is good enough for our purposes. - -# From Rives McDow (2001-05-13): -# In addition to Mongolia starting daylight savings as reported earlier -# (adopted DST on 2001-04-27 02:00 local time, ending 2001-09-28), -# there are three time zones. -# -# Provinces [at 7:00]: Bayan-Ölgii, Uvs, Khovd, Zavkhan, Govi-Altai -# Provinces [at 8:00]: Khövsgöl, Bulgan, Arkhangai, Khentii, Töv, -# Bayankhongor, Övörkhangai, Dundgovi, Dornogovi, Ömnögovi -# Provinces [at 9:00]: Dornod, Sükhbaatar -# -# [The province of Selenge is omitted from the above lists.] - -# From Ganbold Ts., Ulaanbaatar (2004-04-17): -# Daylight saving occurs at 02:00 local time last Saturday of March. -# It will change back to normal at 02:00 local time last Saturday of -# September.... As I remember this rule was changed in 2001. -# -# From Paul Eggert (2004-04-17): -# For now, assume Rives McDow's informant got confused about Friday vs -# Saturday, and that his 2001 dates should have 1 added to them. - -# From Paul Eggert (2005-07-26): -# We have wildly conflicting information about Mongolia's time zones. -# Bill Bonnet (2005-05-19) reports that the US Embassy in Ulaanbaatar says -# there is only one time zone and that DST is observed, citing Microsoft -# Windows XP as the source. Risto Nykänen (2005-05-16) reports that -# travelmongolia.org says there are two time zones (UT +07, +08) with no DST. -# Oscar van Vlijmen (2005-05-20) reports that the Mongolian Embassy in -# Washington, DC says there are two time zones, with DST observed. -# He also found -# http://ubpost.mongolnews.mn/index.php?subaction=showcomments&id=1111634894&archive=&start_from=&ucat=1& -# which also says that there is DST, and which has a comment by "Toddius" -# (2005-03-31 06:05 +0700) saying "Mongolia actually has 3.5 time zones. -# The West (OLGII) is +7 GMT, most of the country is ULAT is +8 GMT -# and some Eastern provinces are +9 GMT but Sükhbaatar Aimag is SUHK +8.5 GMT. -# The SUKH timezone is new this year, it is one of the few things the -# parliament passed during the tumultuous winter session." -# For now, let's ignore this information, until we have more confirmation. - -# From Ganbold Ts. (2007-02-26): -# Parliament of Mongolia has just changed the daylight-saving rule in February. -# They decided not to adopt daylight-saving time.... -# http://www.mongolnews.mn/index.php?module=unuudur&sec=view&id=15742 - -# From Deborah Goldsmith (2008-03-30): -# We received a bug report claiming that the tz database UTC offset for -# Asia/Choibalsan (GMT+09:00) is incorrect, and that it should be GMT -# +08:00 instead. Different sources appear to disagree with the tz -# database on this, e.g.: -# -# https://www.timeanddate.com/worldclock/city.html?n=1026 -# http://www.worldtimeserver.com/current_time_in_MN.aspx -# -# both say GMT+08:00. 
- -# From Steffen Thorsen (2008-03-31): -# eznis airways, which operates several domestic flights, has a flight -# schedule here: -# http://www.eznis.com/Container.jsp?id=112 -# (click the English flag for English) -# -# There it appears that flights between Choibalsan and Ulaanbaatar arrive -# about 1:35 - 1:50 hours later in local clock time, no matter the -# direction, while Ulaanbaatar-Khovd takes 2 hours in the Eastern -# direction and 3:35 back, which indicates that Ulaanbaatar and Khovd are -# in different time zones (like we know about), while Choibalsan and -# Ulaanbaatar are in the same time zone (correction needed). - -# From Arthur David Olson (2008-05-19): -# Assume that Choibalsan is indeed offset by 8:00. -# XXX--in the absence of better information, assume that transition -# was at the start of 2008-03-31 (the day of Steffen Thorsen's report); -# this is almost surely wrong. - -# From Ganbold Tsagaankhuu (2015-03-10): -# It seems like yesterday Mongolian Government meeting has concluded to use -# daylight saving time in Mongolia.... Starting at 2:00AM of last Saturday of -# March 2015, daylight saving time starts. And 00:00AM of last Saturday of -# September daylight saving time ends. Source: -# http://zasag.mn/news/view/8969 - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Mongol 1983 1984 - Apr 1 0:00 1:00 S -Rule Mongol 1983 only - Oct 1 0:00 0 - -# Shanks & Pottenger and IATA SSIM say 1990s switches occurred at 00:00, -# but McDow says the 2001 switches occurred at 02:00. Also, IATA SSIM -# (1996-09) says 1996-10-25. Go with Shanks & Pottenger through 1998. -# -# Shanks & Pottenger say that the Sept. 1984 through Sept. 1990 switches -# in Choibalsan (more precisely, in Dornod and Sükhbaatar) took place -# at 02:00 standard time, not at 00:00 local time as in the rest of -# the country. That would be odd, and possibly is a result of their -# correction of 02:00 (in the previous edition) not being done correctly -# in the latest edition; so ignore it for now. - -# From Ganbold Tsagaankhuu (2017-02-09): -# Mongolian Government meeting has concluded today to cancel daylight -# saving time adoption in Mongolia. Source: http://zasag.mn/news/view/16192 - -Rule Mongol 1985 1998 - Mar lastSun 0:00 1:00 S -Rule Mongol 1984 1998 - Sep lastSun 0:00 0 - -# IATA SSIM (1999-09) says Mongolia no longer observes DST. -Rule Mongol 2001 only - Apr lastSat 2:00 1:00 S -Rule Mongol 2001 2006 - Sep lastSat 2:00 0 - -Rule Mongol 2002 2006 - Mar lastSat 2:00 1:00 S -Rule Mongol 2015 2016 - Mar lastSat 2:00 1:00 S -Rule Mongol 2015 2016 - Sep lastSat 0:00 0 - - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# Hovd, a.k.a. Chovd, Dund-Us, Dzhargalant, Khovd, Jirgalanta -Zone Asia/Hovd 6:06:36 - LMT 1905 Aug - 6:00 - +06 1978 - 7:00 Mongol +07/+08 -# Ulaanbaatar, a.k.a. Ulan Bataar, Ulan Bator, Urga -Zone Asia/Ulaanbaatar 7:07:32 - LMT 1905 Aug - 7:00 - +07 1978 - 8:00 Mongol +08/+09 -# Choibalsan, a.k.a. Bajan Tümen, Bajan Tumen, Chojbalsan, -# Choybalsan, Sanbejse, Tchoibalsan -Zone Asia/Choibalsan 7:38:00 - LMT 1905 Aug - 7:00 - +07 1978 - 8:00 - +08 1983 Apr - 9:00 Mongol +09/+10 2008 Mar 31 - 8:00 Mongol +08/+09 - -# Nepal -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Kathmandu 5:41:16 - LMT 1920 - 5:30 - +0530 1986 - 5:45 - +0545 - -# Oman -# See Asia/Dubai. 
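-
-# The flight-schedule deduction above (Choibalsan sharing Ulaanbaatar's
-# offset since 2008, Hovd an hour behind) and Nepal's +0545 can both be
-# spot-checked against the compiled data (a sketch only, assuming
-# Python 3.9+ with the standard zoneinfo module):
-#
-#   from datetime import datetime
-#   from zoneinfo import ZoneInfo
-#
-#   for name in ("Asia/Hovd", "Asia/Ulaanbaatar", "Asia/Choibalsan",
-#                "Asia/Kathmandu"):
-#       z = ZoneInfo(name)
-#       print(name, datetime(2009, 7, 1, tzinfo=z).utcoffset())
-#   # expected: 7:00:00, 8:00:00, 8:00:00, and 5:45:00 respectively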
- -# Pakistan - -# From Rives McDow (2002-03-13): -# I have been advised that Pakistan has decided to adopt dst on a -# TRIAL basis for one year, starting 00:01 local time on April 7, 2002 -# and ending at 00:01 local time October 6, 2002. This is what I was -# told, but I believe that the actual time of change may be 00:00; the -# 00:01 was to make it clear which day it was on. - -# From Paul Eggert (2002-03-15): -# Jesper Nørgaard found this URL: -# http://www.pak.gov.pk/public/news/app/app06_dec.htm -# (dated 2001-12-06) which says that the Cabinet adopted a scheme "to -# advance the clocks by one hour on the night between the first -# Saturday and Sunday of April and revert to the original position on -# 15th October each year". This agrees with McDow's 04-07 at 00:00, -# but disagrees about the October transition, and makes it sound like -# it's not on a trial basis. Also, the "between the first Saturday -# and Sunday of April" phrase, if taken literally, means that the -# transition takes place at 00:00 on the first Sunday on or after 04-02. - -# From Paul Eggert (2003-02-09): -# DAWN reported on 2002-10-05 -# that 2002 DST ended that day at midnight. Go with McDow for now. - -# From Steffen Thorsen (2003-03-14): -# According to http://www.dawn.com/2003/03/07/top15.htm -# there will be no DST in Pakistan this year: -# -# ISLAMABAD, March 6: Information and Media Development Minister Sheikh -# Rashid Ahmed on Thursday said the cabinet had reversed a previous -# decision to advance clocks by one hour in summer and put them back by -# one hour in winter with the aim of saving light hours and energy. -# -# The minister told a news conference that the experiment had rather -# shown 8 per cent higher consumption of electricity. - -# From Alex Krivenyshev (2008-05-15): -# -# Here is an article that Pakistan plan to introduce Daylight Saving Time -# on June 1, 2008 for 3 months. -# -# "... The federal cabinet on Wednesday announced a new conservation plan to -# help reduce load shedding by approving the closure of commercial centres at -# 9pm and moving clocks forward by one hour for the next three months. ...." -# -# http://www.worldtimezone.com/dst_news/dst_news_pakistan01.html -# http://www.dailytimes.com.pk/default.asp?page=2008%5C05%5C15%5Cstory_15-5-2008_pg1_4 - -# From Arthur David Olson (2008-05-19): -# XXX--midnight transitions is a guess; 2008 only is a guess. - -# From Alexander Krivenyshev (2008-08-28): -# Pakistan government has decided to keep the watches one-hour advanced -# for another 2 months - plan to return to Standard Time on October 31 -# instead of August 31. -# -# http://www.worldtimezone.com/dst_news/dst_news_pakistan02.html -# http://dailymailnews.com/200808/28/news/dmbrn03.html - -# From Alexander Krivenyshev (2009-04-08): -# Based on previous media reports that "... proposed plan to -# advance clocks by one hour from May 1 will cause disturbance -# to the working schedules rather than bringing discipline in -# official working." -# http://www.thenews.com.pk/daily_detail.asp?id=171280 -# -# recent news that instead of May 2009 - Pakistan plan to -# introduce DST from April 15, 2009 -# -# FYI: Associated Press Of Pakistan -# April 08, 2009 -# Cabinet okays proposal to advance clocks by one hour from April 15 -# http://www.app.com.pk/en_/index.php?option=com_content&task=view&id=73043&Itemid=1 -# http://www.worldtimezone.com/dst_news/dst_news_pakistan05.html -# -# .... 
-# The Federal Cabinet on Wednesday approved the proposal to -# advance clocks in the country by one hour from April 15 to -# conserve energy" - -# From Steffen Thorsen (2009-09-17): -# "The News International," Pakistan reports that: "The Federal -# Government has decided to restore the previous time by moving the -# clocks backward by one hour from October 1. A formal announcement to -# this effect will be made after the Prime Minister grants approval in -# this regard." -# http://www.thenews.com.pk/updates.asp?id=87168 - -# From Alexander Krivenyshev (2009-09-28): -# According to Associated Press Of Pakistan, it is confirmed that -# Pakistan clocks across the country would be turned back by an hour from -# October 1, 2009. -# -# "Clocks to go back one hour from 1 Oct" -# http://www.app.com.pk/en_/index.php?option=com_content&task=view&id=86715&Itemid=2 -# http://www.worldtimezone.com/dst_news/dst_news_pakistan07.htm -# -# From Steffen Thorsen (2009-09-29): -# Now they seem to have changed their mind, November 1 is the new date: -# http://www.thenews.com.pk/top_story_detail.asp?Id=24742 -# "The country's clocks will be reversed by one hour on November 1. -# Officials of Federal Ministry for Interior told this to Geo News on -# Monday." -# -# And more importantly, it seems that these dates will be kept every year: -# "It has now been decided that clocks will be wound forward by one hour -# on April 15 and reversed by an hour on November 1 every year without -# obtaining prior approval, the officials added." -# -# We have confirmed this year's end date with both with the Ministry of -# Water and Power and the Pakistan Electric Power Company: -# https://www.timeanddate.com/news/time/pakistan-ends-dst09.html - -# From Christoph Göhre (2009-10-01): -# [T]he German Consulate General in Karachi reported me today that Pakistan -# will go back to standard time on 1st of November. - -# From Steffen Thorsen (2010-03-26): -# Steffen Thorsen wrote: -# > On Thursday (2010-03-25) it was announced that DST would start in -# > Pakistan on 2010-04-01. -# > -# > Then today, the president said that they might have to revert the -# > decision if it is not supported by the parliament. So at the time -# > being, it seems unclear if DST will be actually observed or not - but -# > April 1 could be a more likely date than April 15. -# Now, it seems that the decision to not observe DST in final: -# -# "Govt Withdraws Plan To Advance Clocks" -# http://www.apakistannews.com/govt-withdraws-plan-to-advance-clocks-172041 -# -# "People laud PM's announcement to end DST" -# http://www.app.com.pk/en_/index.php?option=com_content&task=view&id=99374&Itemid=2 - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Pakistan 2002 only - Apr Sun>=2 0:00 1:00 S -Rule Pakistan 2002 only - Oct Sun>=2 0:00 0 - -Rule Pakistan 2008 only - Jun 1 0:00 1:00 S -Rule Pakistan 2008 2009 - Nov 1 0:00 0 - -Rule Pakistan 2009 only - Apr 15 0:00 1:00 S - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Karachi 4:28:12 - LMT 1907 - 5:30 - +0530 1942 Sep - 5:30 1:00 +0630 1945 Oct 15 - 5:30 - +0530 1951 Sep 30 - 5:00 - +05 1971 Mar 26 - 5:00 Pakistan PK%sT # Pakistan Time - -# Palestine - -# From Amos Shapir (1998-02-15): -# -# From 1917 until 1948-05-15, all of Palestine, including the parts now -# known as the Gaza Strip and the West Bank, was under British rule. -# Therefore the rules given for Israel for that period, apply there too... 
-# -# The Gaza Strip was under Egyptian rule between 1948-05-15 until 1967-06-05 -# (except a short occupation by Israel from 1956-11 till 1957-03, but no -# time zone was affected then). It was never formally annexed to Egypt, -# though. -# -# The rest of Palestine was under Jordanian rule at that time, formally -# annexed in 1950 as the West Bank (and the word "Trans" was dropped from -# the country's previous name of "the Hashemite Kingdom of the -# Trans-Jordan"). So the rules for Jordan for that time apply. Major -# towns in that area are Nablus (Shchem), El-Halil (Hebron), Ramallah, and -# East Jerusalem. -# -# Both areas were occupied by Israel in June 1967, but not annexed (except -# for East Jerusalem). They were on Israel time since then; there might -# have been a Military Governor's order about time zones, but I'm not aware -# of any (such orders may have been issued semi-annually whenever summer -# time was in effect, but maybe the legal aspect of time was just neglected). -# -# The Palestinian Authority was established in 1993, and got hold of most -# towns in the West Bank and Gaza by 1995. I know that in order to -# demonstrate...independence, they have been switching to -# summer time and back on a different schedule than Israel's, but I don't -# know when this was started, or what algorithm is used (most likely the -# Jordanian one). -# -# To summarize, the table should probably look something like that: -# -# Area \ when | 1918-1947 | 1948-1967 | 1967-1995 | 1996- -# ------------+-----------+-----------+-----------+----------- -# Israel | Zion | Zion | Zion | Zion -# West bank | Zion | Jordan | Zion | Jordan -# Gaza | Zion | Egypt | Zion | Jordan -# -# I guess more info may be available from the PA's web page (if/when they -# have one). - -# From Paul Eggert (2006-03-22): -# Shanks & Pottenger write that Gaza did not observe DST until 1957, but go -# with Shapir and assume that it observed DST from 1940 through 1947, -# and that it used Jordanian rules starting in 1996. -# We don't yet need a separate entry for the West Bank, since -# the only differences between it and Gaza that we know about -# occurred before our cutoff date of 1970. -# However, as we get more information, we may need to add entries -# for parts of the West Bank as they transitioned from Israel's rules -# to Palestine's rules. - -# From IINS News Service - Israel - 1998-03-23 10:38:07 Israel time, -# forwarded by Ephraim Silverberg: -# -# Despite the fact that Israel changed over to daylight savings time -# last week, the PLO Authority (PA) has decided not to turn its clocks -# one-hour forward at this time. As a sign of independence from Israeli rule, -# the PA has decided to implement DST in April. - -# From Paul Eggert (1999-09-20): -# Daoud Kuttab writes in Holiday havoc -# http://www.jpost.com/com/Archive/22.Apr.1999/Opinion/Article-2.html -# (Jerusalem Post, 1999-04-22) that -# the Palestinian National Authority changed to DST on 1999-04-15. -# I vaguely recall that they switch back in October (sorry, forgot the source). -# For now, let's assume that the spring switch was at 24:00, -# and that they switch at 0:00 on the 3rd Fridays of April and October. - -# From Paul Eggert (2005-11-22): -# Starting 2004 transitions are from Steffen Thorsen's web site timeanddate.com. - -# From Steffen Thorsen (2005-11-23): -# A user from Gaza reported that Gaza made the change early because of -# the Ramadan. 
Next year Ramadan will be even earlier, so I think -# there is a good chance next year's end date will be around two weeks -# earlier - the same goes for Jordan. - -# From Steffen Thorsen (2006-08-17): -# I was informed by a user in Bethlehem that in Bethlehem it started the -# same day as Israel, and after checking with other users in the area, I -# was informed that they started DST one day after Israel. I was not -# able to find any authoritative sources at the time, nor details if -# Gaza changed as well, but presumed Gaza to follow the same rules as -# the West Bank. - -# From Steffen Thorsen (2006-09-26): -# according to the Palestine News Network (2006-09-19): -# http://english.pnn.ps/index.php?option=com_content&task=view&id=596&Itemid=5 -# > The Council of Ministers announced that this year its winter schedule -# > will begin early, as of midnight Thursday. It is also time to turn -# > back the clocks for winter. Friday will begin an hour late this week. -# I guess it is likely that next year's date will be moved as well, -# because of the Ramadan. - -# From Jesper Nørgaard Welen (2007-09-18): -# According to Steffen Thorsen's web site the Gaza Strip and the rest of the -# Palestinian territories left DST early on 13.th. of September at 2:00. - -# From Paul Eggert (2007-09-20): -# My understanding is that Gaza and the West Bank disagree even over when -# the weekend is (Thursday+Friday versus Friday+Saturday), so I'd be a bit -# surprised if they agreed about DST. But for now, assume they agree. -# For lack of better information, predict that future changes will be -# the 2nd Thursday of September at 02:00. - -# From Alexander Krivenyshev (2008-08-28): -# Here is an article, that Mideast running on different clocks at Ramadan. -# -# Gaza Strip (as Egypt) ended DST at midnight Thursday (Aug 28, 2008), while -# the West Bank will end Daylight Saving Time at midnight Sunday (Aug 31, 2008). -# -# http://www.guardian.co.uk/world/feedarticle/7759001 -# http://www.abcnews.go.com/International/wireStory?id=5676087 -# http://www.worldtimezone.com/dst_news/dst_news_gazastrip01.html - -# From Alexander Krivenyshev (2009-03-26): -# According to the Palestine News Network (arabic.pnn.ps), Palestinian -# government decided to start Daylight Time on Thursday night March -# 26 and continue until the night of 27 September 2009. -# -# (in Arabic) -# http://arabic.pnn.ps/index.php?option=com_content&task=view&id=50850 -# -# (English translation) -# http://www.worldtimezone.com/dst_news/dst_news_westbank01.html - -# From Steffen Thorsen (2009-08-31): -# Palestine's Council of Ministers announced that they will revert back to -# winter time on Friday, 2009-09-04. -# -# One news source: -# http://www.safa.ps/ara/?action=showdetail&seid=4158 -# (Palestinian press agency, Arabic), -# Google translate: "Decided that the Palestinian government in Ramallah -# headed by Salam Fayyad, the start of work in time for the winter of -# 2009, starting on Friday approved the fourth delay Sept. clock sixty -# minutes per hour as of Friday morning." -# -# We are not sure if Gaza will do the same, last year they had a different -# end date, we will keep this page updated: -# https://www.timeanddate.com/news/time/westbank-gaza-dst-2009.html - -# From Alexander Krivenyshev (2009-09-02): -# Seems that Gaza Strip will go back to Winter Time same date as West Bank. -# -# According to Palestinian Ministry Of Interior, West Bank and Gaza Strip plan -# to change time back to Standard time on September 4, 2009. 
-
-#
-# "Winter time unite the West Bank and Gaza"
-# (from Palestinian National Authority):
-# http://www.moi.gov.ps/en/?page=633167343250594025&nid=11505
-# http://www.worldtimezone.com/dst_news/dst_news_gazastrip02.html
-
-# From Alexander Krivenyshev (2010-03-19):
-# According to Voice of Palestine DST will last for 191 days, from March
-# 26, 2010 till "the last Sunday before the tenth day of Tishri
-# (October), each year" (October 03, 2010?)
-#
-# http://palvoice.org/forums/showthread.php?t=245697
-# (in Arabic)
-# http://www.worldtimezone.com/dst_news/dst_news_westbank03.html
-
-# From Steffen Thorsen (2010-03-24):
-# ...Ma'an News Agency reports that Hamas cabinet has decided it will
-# start one day later, at 12:01am. Not sure if they really mean 12:01am or
-# noon though:
-#
-# http://www.maannews.net/eng/ViewDetails.aspx?ID=271178
-# (Ma'an News Agency)
-# "At 12:01am Friday, clocks in Israel and the West Bank will change to
-# 1:01am, while Gaza clocks will change at 12:01am Saturday morning."
-
-# From Steffen Thorsen (2010-08-11):
-# According to several sources, including
-# http://www.maannews.net/eng/ViewDetails.aspx?ID=306795
-# the clocks were set back one hour at 2010-08-11 00:00:00 local time in
-# Gaza and the West Bank.
-# Some more background info:
-# https://www.timeanddate.com/news/time/westbank-gaza-end-dst-2010.html
-
-# From Steffen Thorsen (2011-08-26):
-# Gaza and the West Bank did go back to standard time in the beginning of
-# August, and will now enter daylight saving time again on 2011-08-30
-# 00:00 (so two periods of DST in 2011). The pause was because of
-# Ramadan.
-#
-# http://www.maannews.net/eng/ViewDetails.aspx?ID=416217
-# Additional info:
-# https://www.timeanddate.com/news/time/palestine-dst-2011.html
-
-# From Alexander Krivenyshev (2011-08-27):
-# According to the article in The Jerusalem Post:
-# "...Earlier this month, the Palestinian government in the West Bank decided to
-# move to standard time for 30 days, during Ramadan. The Palestinians in the
-# Gaza Strip accepted the change and also moved their clocks one hour back.
-# The Hamas government said on Saturday that it won't observe summertime after
-# the Muslim feast of Id al-Fitr, which begins on Tuesday..."
-# ...
-# https://www.jpost.com/MiddleEast/Article.aspx?id=235650
-# http://www.worldtimezone.com/dst_news/dst_news_gazastrip05.html
-
-# From Steffen Thorsen (2011-09-30):
-# West Bank did end Daylight Saving Time this morning/midnight (2011-09-30
-# 00:00).
-# So West Bank and Gaza now have the same time again.
-#
-# Many sources, including:
-# http://www.maannews.net/eng/ViewDetails.aspx?ID=424808
-
-# From Steffen Thorsen (2012-03-26):
-# Palestinian news sources tell that both Gaza and West Bank will start DST
-# on Friday (Thursday midnight, 2012-03-29 24:00).
-# Some of many sources in Arabic:
-# http://www.samanews.com/index.php?act=Show&id=122638
-#
-# http://safa.ps/details/news/74352/%D8%A8%D8%AF%D8%A1-%D8%A7%D9%84%D8%AA%D9%88%D9%82%D9%8A%D8%AA-%D8%A7%D9%84%D8%B5%D9%8A%D9%81%D9%8A-%D8%A8%D8%A7%D9%84%D8%B6%D9%81%D8%A9-%D9%88%D8%BA%D8%B2%D8%A9-%D9%84%D9%8A%D9%84%D8%A9-%D8%A7%D9%84%D8%AC%D9%85%D8%B9%D8%A9.html
-#
-# Our brief summary:
-# https://www.timeanddate.com/news/time/gaza-west-bank-dst-2012.html
-
-# From Steffen Thorsen (2013-03-26):
-# The following news sources tells that Palestine will "start daylight saving
-# time from midnight on Friday, March 29, 2013" (translated).
-# [These are in Arabic and are for Gaza and for Ramallah, respectively.] -# http://www.samanews.com/index.php?act=Show&id=154120 -# http://safa.ps/details/news/99844/%D8%B1%D8%A7%D9%85-%D8%A7%D9%84%D9%84%D9%87-%D8%A8%D8%AF%D8%A1-%D8%A7%D9%84%D8%AA%D9%88%D9%82%D9%8A%D8%AA-%D8%A7%D9%84%D8%B5%D9%8A%D9%81%D9%8A-29-%D8%A7%D9%84%D8%AC%D8%A7%D8%B1%D9%8A.html - -# From Steffen Thorsen (2013-09-24): -# The Gaza and West Bank are ending DST Thursday at midnight -# (2013-09-27 00:00:00) (one hour earlier than last year...). -# This source in English, says "that winter time will go into effect -# at midnight on Thursday in the West Bank and Gaza Strip": -# http://english.wafa.ps/index.php?action=detail&id=23246 -# official source...: -# http://www.palestinecabinet.gov.ps/ar/Views/ViewDetails.aspx?pid=1252 - -# From Steffen Thorsen (2015-03-03): -# Sources such as http://www.alquds.com/news/article/view/id/548257 -# and https://www.raya.ps/ar/news/890705.html say Palestine areas will -# start DST on 2015-03-28 00:00 which is one day later than expected. -# -# From Paul Eggert (2015-03-03): -# https://www.timeanddate.com/time/change/west-bank/ramallah?year=2014 -# says that the fall 2014 transition was Oct 23 at 24:00. - -# From Hannah Kreitem (2016-03-09): -# http://www.palestinecabinet.gov.ps/WebSite/ar/ViewDetails?ID=31728 -# [Google translation]: "The Council also decided to start daylight -# saving in Palestine as of one o'clock on Saturday morning, -# 2016-03-26, to provide the clock 60 minutes ahead." -# -# From Paul Eggert (2016-03-12): -# Predict spring transitions on March's last Saturday at 01:00 from now on. - -# From Sharef Mustafa (2016-10-19): -# [T]he Palestinian cabinet decision (Mar 8th 2016) published on -# http://www.palestinecabinet.gov.ps/WebSite/Upload/Decree/GOV_17/16032016134830.pdf -# states that summer time will end on Oct 29th at 01:00. -# -# From Tim Parenti (2016-10-19): -# Predict fall transitions on October's last Saturday at 01:00 from now on. -# This is consistent with the 2016 transition as well as our spring -# predictions. 
-
-#
-# From Paul Eggert (2016-10-19):
-# It's also consistent with predictions in the following URLs today:
-# https://www.timeanddate.com/time/change/gaza-strip/gaza
-# https://www.timeanddate.com/time/change/west-bank/hebron
-
-# The rules for Egypt are stolen from the 'africa' file.
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule EgyptAsia 1957 only - May 10 0:00 1:00 S
-Rule EgyptAsia 1957 1958 - Oct 1 0:00 0 -
-Rule EgyptAsia 1958 only - May 1 0:00 1:00 S
-Rule EgyptAsia 1959 1967 - May 1 1:00 1:00 S
-Rule EgyptAsia 1959 1965 - Sep 30 3:00 0 -
-Rule EgyptAsia 1966 only - Oct 1 3:00 0 -
-
-Rule Palestine 1999 2005 - Apr Fri>=15 0:00 1:00 S
-Rule Palestine 1999 2003 - Oct Fri>=15 0:00 0 -
-Rule Palestine 2004 only - Oct 1 1:00 0 -
-Rule Palestine 2005 only - Oct 4 2:00 0 -
-Rule Palestine 2006 2007 - Apr 1 0:00 1:00 S
-Rule Palestine 2006 only - Sep 22 0:00 0 -
-Rule Palestine 2007 only - Sep Thu>=8 2:00 0 -
-Rule Palestine 2008 2009 - Mar lastFri 0:00 1:00 S
-Rule Palestine 2008 only - Sep 1 0:00 0 -
-Rule Palestine 2009 only - Sep Fri>=1 1:00 0 -
-Rule Palestine 2010 only - Mar 26 0:00 1:00 S
-Rule Palestine 2010 only - Aug 11 0:00 0 -
-Rule Palestine 2011 only - Apr 1 0:01 1:00 S
-Rule Palestine 2011 only - Aug 1 0:00 0 -
-Rule Palestine 2011 only - Aug 30 0:00 1:00 S
-Rule Palestine 2011 only - Sep 30 0:00 0 -
-Rule Palestine 2012 2014 - Mar lastThu 24:00 1:00 S
-Rule Palestine 2012 only - Sep 21 1:00 0 -
-Rule Palestine 2013 only - Sep Fri>=21 0:00 0 -
-Rule Palestine 2014 2015 - Oct Fri>=21 0:00 0 -
-Rule Palestine 2015 only - Mar lastFri 24:00 1:00 S
-Rule Palestine 2016 max - Mar lastSat 1:00 1:00 S
-Rule Palestine 2016 max - Oct lastSat 1:00 0 -
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Asia/Gaza 2:17:52 - LMT 1900 Oct
- 2:00 Zion EET/EEST 1948 May 15
- 2:00 EgyptAsia EE%sT 1967 Jun 5
- 2:00 Zion I%sT 1996
- 2:00 Jordan EE%sT 1999
- 2:00 Palestine EE%sT 2008 Aug 29 0:00
- 2:00 - EET 2008 Sep
- 2:00 Palestine EE%sT 2010
- 2:00 - EET 2010 Mar 27 0:01
- 2:00 Palestine EE%sT 2011 Aug 1
- 2:00 - EET 2012
- 2:00 Palestine EE%sT
-
-Zone Asia/Hebron 2:20:23 - LMT 1900 Oct
- 2:00 Zion EET/EEST 1948 May 15
- 2:00 EgyptAsia EE%sT 1967 Jun 5
- 2:00 Zion I%sT 1996
- 2:00 Jordan EE%sT 1999
- 2:00 Palestine EE%sT
-
-# Paracel Is
-# no information
-
-# Philippines
-# On 1844-08-16, Narciso Clavería, governor-general of the
-# Philippines, issued a proclamation announcing that 1844-12-30 was to
-# be immediately followed by 1845-01-01; see R.H. van Gent's
-# History of the International Date Line
-# https://www.staff.science.uu.nl/~gent0113/idl/idl_philippines.htm
-# The rest of the data entries are from Shanks & Pottenger.
-
-# From Jesper Nørgaard Welen (2006-04-26):
-# ... claims that Philippines had DST last time in 1990:
-# http://story.philippinetimes.com/p.x/ct/9/id/145be20cc6b121c0/cid/3e5bbccc730d258c/
-# [a story dated 2006-04-25 by Cris Larano of Dow Jones Newswires,
-# but no details]
-
-# From Paul Eggert (2014-08-14):
-# The following source says DST may be instituted November-January and again
-# March-June, but this is not definite. It also says DST was last proclaimed
-# during the Ramos administration (1992-1998); but again, no details.
-# Carcamo D. PNoy urged to declare use of daylight saving time.
-# Philippine Star 2014-08-05 -# http://www.philstar.com/headlines/2014/08/05/1354152/pnoy-urged-declare-use-daylight-saving-time - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Phil 1936 only - Nov 1 0:00 1:00 S -Rule Phil 1937 only - Feb 1 0:00 0 - -Rule Phil 1954 only - Apr 12 0:00 1:00 S -Rule Phil 1954 only - Jul 1 0:00 0 - -Rule Phil 1978 only - Mar 22 0:00 1:00 S -Rule Phil 1978 only - Sep 21 0:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Manila -15:56:00 - LMT 1844 Dec 31 - 8:04:00 - LMT 1899 May 11 - 8:00 Phil +08/+09 1942 May - 9:00 - +09 1944 Nov - 8:00 Phil +08/+09 - -# Qatar -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Qatar 3:26:08 - LMT 1920 # Al Dawhah / Doha - 4:00 - +04 1972 Jun - 3:00 - +03 -Link Asia/Qatar Asia/Bahrain - -# Saudi Arabia -# -# From Paul Eggert (2014-07-15): -# Time in Saudi Arabia and other countries in the Arabian peninsula was not -# standardized until relatively recently; we don't know when, and possibly it -# has never been made official. Richard P Hunt, in "Islam city yielding to -# modern times", New York Times (1961-04-09), p 20, wrote that only airlines -# observed standard time, and that people in Jeddah mostly observed quasi-solar -# time, doing so by setting their watches at sunrise to 6 o'clock (or to 12 -# o'clock for "Arab" time). -# -# The TZ database cannot represent quasi-solar time; airline time is the best -# we can do. The 1946 foreign air news digest of the U.S. Civil Aeronautics -# Board (OCLC 42299995) reported that the "... Arabian Government, inaugurated -# a weekly Dhahran-Cairo service, via the Saudi Arabian cities of Riyadh and -# Jidda, on March 14, 1947". Shanks & Pottenger guessed 1950; go with the -# earlier date. -# -# Shanks & Pottenger also state that until 1968-05-01 Saudi Arabia had two -# time zones; the other zone, at UT +04, was in the far eastern part of -# the country. Ignore this, as it's before our 1970 cutoff. -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Riyadh 3:06:52 - LMT 1947 Mar 14 - 3:00 - +03 -Link Asia/Riyadh Asia/Aden # Yemen -Link Asia/Riyadh Asia/Kuwait - -# Singapore -# taken from Mok Ly Yng (2003-10-30) -# http://www.math.nus.edu.sg/aslaksen/teaching/timezone.html -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Singapore 6:55:25 - LMT 1901 Jan 1 - 6:55:25 - SMT 1905 Jun 1 # Singapore M.T. - 7:00 - +07 1933 Jan 1 - 7:00 0:20 +0720 1936 Jan 1 - 7:20 - +0720 1941 Sep 1 - 7:30 - +0730 1942 Feb 16 - 9:00 - +09 1945 Sep 12 - 7:30 - +0730 1982 Jan 1 - 8:00 - +08 - -# Spratly Is -# no information - -# Sri Lanka - -# From Paul Eggert (2013-02-21): -# Milne says "Madras mean time use from May 1, 1898. Prior to this Colombo -# mean time, 5h. 4m. 21.9s. F., was used." But 5:04:21.9 differs considerably -# from Colombo's meridian 5:19:24, so for now ignore Milne and stick with -# Shanks and Pottenger. - -# From Paul Eggert (1996-09-03): -# "Sri Lanka advances clock by an hour to avoid blackout" -# (, 1996-05-24, -# no longer available as of 1999-08-17) -# reported "the country's standard time will be put forward by one hour at -# midnight Friday (1830 GMT) 'in the light of the present power crisis'." -# -# From Dharmasiri Senanayake, Sri Lanka Media Minister (1996-10-24), as quoted -# by Shamindra in Daily News - Hot News Section -# (1996-10-26): -# With effect from 12.30 a.m. on 26th October 1996 -# Sri Lanka will be six (06) hours ahead of GMT. 
- -# From Jesper Nørgaard Welen (2006-04-14), quoting Sri Lanka News Online -# (2006-04-13): -# 0030 hrs on April 15, 2006 (midnight of April 14, 2006 +30 minutes) -# at present, become 2400 hours of April 14, 2006 (midnight of April 14, 2006). - -# From Peter Apps and Ranga Sirila of Reuters (2006-04-12) in: -# http://today.reuters.co.uk/news/newsArticle.aspx?type=scienceNews&storyID=2006-04-12T172228Z_01_COL295762_RTRIDST_0_SCIENCE-SRILANKA-TIME-DC.XML -# [The Tamil Tigers] never accepted the original 1996 time change and simply -# kept their clocks set five and a half hours ahead of Greenwich Mean -# Time (GMT), in line with neighbor India. -# From Paul Eggert (2006-04-18): -# People who live in regions under Tamil control can use [TZ='Asia/Kolkata'], -# as that zone has agreed with the Tamil areas since our cutoff date of 1970. - -# From Sadika Sumanapala (2016-10-19): -# According to http://www.sltime.org (maintained by Measurement Units, -# Standards & Services Department, Sri Lanka) abbreviation for Sri Lanka -# standard time is SLST. -# -# From Paul Eggert (2016-10-18): -# "SLST" seems to be reasonably recent and rarely-used outside time -# zone nerd sources. I searched Google News and found three uses of -# it in the International Business Times of India in February and -# March of this year when discussing cricket match times, but nothing -# since then (though there has been a lot of cricket) and nothing in -# other English-language news sources. Our old abbreviation "LKT" is -# even worse. For now, let's use a numeric abbreviation; we can -# switch to "SLST" if it catches on. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Colombo 5:19:24 - LMT 1880 - 5:19:32 - MMT 1906 # Moratuwa Mean Time - 5:30 - +0530 1942 Jan 5 - 5:30 0:30 +06 1942 Sep - 5:30 1:00 +0630 1945 Oct 16 2:00 - 5:30 - +0530 1996 May 25 0:00 - 6:30 - +0630 1996 Oct 26 0:30 - 6:00 - +06 2006 Apr 15 0:30 - 5:30 - +0530 - -# Syria -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Syria 1920 1923 - Apr Sun>=15 2:00 1:00 S -Rule Syria 1920 1923 - Oct Sun>=1 2:00 0 - -Rule Syria 1962 only - Apr 29 2:00 1:00 S -Rule Syria 1962 only - Oct 1 2:00 0 - -Rule Syria 1963 1965 - May 1 2:00 1:00 S -Rule Syria 1963 only - Sep 30 2:00 0 - -Rule Syria 1964 only - Oct 1 2:00 0 - -Rule Syria 1965 only - Sep 30 2:00 0 - -Rule Syria 1966 only - Apr 24 2:00 1:00 S -Rule Syria 1966 1976 - Oct 1 2:00 0 - -Rule Syria 1967 1978 - May 1 2:00 1:00 S -Rule Syria 1977 1978 - Sep 1 2:00 0 - -Rule Syria 1983 1984 - Apr 9 2:00 1:00 S -Rule Syria 1983 1984 - Oct 1 2:00 0 - -Rule Syria 1986 only - Feb 16 2:00 1:00 S -Rule Syria 1986 only - Oct 9 2:00 0 - -Rule Syria 1987 only - Mar 1 2:00 1:00 S -Rule Syria 1987 1988 - Oct 31 2:00 0 - -Rule Syria 1988 only - Mar 15 2:00 1:00 S -Rule Syria 1989 only - Mar 31 2:00 1:00 S -Rule Syria 1989 only - Oct 1 2:00 0 - -Rule Syria 1990 only - Apr 1 2:00 1:00 S -Rule Syria 1990 only - Sep 30 2:00 0 - -Rule Syria 1991 only - Apr 1 0:00 1:00 S -Rule Syria 1991 1992 - Oct 1 0:00 0 - -Rule Syria 1992 only - Apr 8 0:00 1:00 S -Rule Syria 1993 only - Mar 26 0:00 1:00 S -Rule Syria 1993 only - Sep 25 0:00 0 - -# IATA SSIM (1998-02) says 1998-04-02; -# (1998-09) says 1999-03-29 and 1999-09-29; (1999-02) says 1999-04-02, -# 2000-04-02, and 2001-04-02; (1999-09) says 2000-03-31 and 2001-03-31; -# (2006) says 2006-03-31 and 2006-09-22; -# for now ignore all these claims and go with Shanks & Pottenger, -# except for the 2006-09-22 claim (which seems right for Ramadan). 
-Rule Syria 1994 1996 - Apr 1 0:00 1:00 S -Rule Syria 1994 2005 - Oct 1 0:00 0 - -Rule Syria 1997 1998 - Mar lastMon 0:00 1:00 S -Rule Syria 1999 2006 - Apr 1 0:00 1:00 S -# From Stephen Colebourne (2006-09-18): -# According to IATA data, Syria will change DST on 21st September [21:00 UTC] -# this year [only].... This is probably related to Ramadan, like Egypt. -Rule Syria 2006 only - Sep 22 0:00 0 - -# From Paul Eggert (2007-03-29): -# Today the AP reported "Syria will switch to summertime at midnight Thursday." -# http://www.iht.com/articles/ap/2007/03/29/africa/ME-GEN-Syria-Time-Change.php -Rule Syria 2007 only - Mar lastFri 0:00 1:00 S -# From Jesper Nørgaard (2007-10-27): -# The sister center ICARDA of my work CIMMYT is confirming that Syria DST will -# not take place 1st November at 0:00 o'clock but 1st November at 24:00 or -# rather Midnight between Thursday and Friday. This does make more sense than -# having it between Wednesday and Thursday (two workdays in Syria) since the -# weekend in Syria is not Saturday and Sunday, but Friday and Saturday. So now -# it is implemented at midnight of the last workday before weekend... -# -# From Steffen Thorsen (2007-10-27): -# Jesper Nørgaard Welen wrote: -# -# > "Winter local time in Syria will be observed at midnight of Thursday 1 -# > November 2007, and the clock will be put back 1 hour." -# -# I found confirmation on this in this gov.sy-article (Arabic): -# http://wehda.alwehda.gov.sy/_print_veiw.asp?FileName=12521710520070926111247 -# -# which using Google's translate tools says: -# Council of Ministers also approved the commencement of work on -# identifying the winter time as of Friday, 2/11/2007 where the 60th -# minute delay at midnight Thursday 1/11/2007. -Rule Syria 2007 only - Nov Fri>=1 0:00 0 - - -# From Stephen Colebourne (2008-03-17): -# For everyone's info, I saw an IATA time zone change for [Syria] for -# this month (March 2008) in the last day or so.... -# Country Time Standard --- DST Start --- --- DST End --- DST -# Name Zone Variation Time Date Time Date -# Variation -# Syrian Arab -# Republic SY +0200 2200 03APR08 2100 30SEP08 +0300 -# 2200 02APR09 2100 30SEP09 +0300 -# 2200 01APR10 2100 30SEP10 +0300 - -# From Arthur David Olson (2008-03-17): -# Here's a link to English-language coverage by the Syrian Arab News -# Agency (SANA)... -# http://www.sana.sy/eng/21/2008/03/11/165173.htm -# ...which reads (in part) "The Cabinet approved the suggestion of the -# Ministry of Electricity to begin daylight savings time on Friday April -# 4th, advancing clocks one hour ahead on midnight of Thursday April 3rd." -# Since Syria is two hours east of UTC, the 2200 and 2100 transition times -# shown above match up with midnight in Syria. - -# From Arthur David Olson (2008-03-18): -# My best guess at a Syrian rule is "the Friday nearest April 1"; -# coding that involves either using a "Mar Fri>=29" construct that old time zone -# compilers can't handle or having multiple Rules (a la Israel). -# For now, use "Apr Fri>=1", and go with IATA on a uniform Sep 30 end. - -# From Steffen Thorsen (2008-10-07): -# Syria has now officially decided to end DST on 2008-11-01 this year, -# according to the following article in the Syrian Arab News Agency (SANA). -# -# The article is in Arabic, and seems to tell that they will go back to -# winter time on 2008-11-01 at 00:00 local daylight time (delaying/setting -# clocks back 60 minutes). 
-# -# http://sana.sy/ara/2/2008/10/07/195459.htm - -# From Steffen Thorsen (2009-03-19): -# Syria will start DST on 2009-03-27 00:00 this year according to many sources, -# two examples: -# -# http://www.sana.sy/eng/21/2009/03/17/217563.htm -# (English, Syrian Arab News # Agency) -# http://thawra.alwehda.gov.sy/_View_news2.asp?FileName=94459258720090318012209 -# (Arabic, gov-site) -# -# We have not found any sources saying anything about when DST ends this year. -# -# Our summary -# https://www.timeanddate.com/news/time/syria-dst-starts-march-27-2009.html - -# From Steffen Thorsen (2009-10-27): -# The Syrian Arab News Network on 2009-09-29 reported that Syria will -# revert back to winter (standard) time on midnight between Thursday -# 2009-10-29 and Friday 2009-10-30: -# http://www.sana.sy/ara/2/2009/09/29/247012.htm (Arabic) - -# From Arthur David Olson (2009-10-28): -# We'll see if future DST switching times turn out to be end of the last -# Thursday of the month or the start of the last Friday of the month or -# something else. For now, use the start of the last Friday. - -# From Steffen Thorsen (2010-03-17): -# The "Syrian News Station" reported on 2010-03-16 that the Council of -# Ministers has decided that Syria will start DST on midnight Thursday -# 2010-04-01: (midnight between Thursday and Friday): -# http://sns.sy/sns/?path=news/read/11421 (Arabic) - -# From Steffen Thorsen (2012-03-26): -# Today, Syria's government announced that they will start DST early on Friday -# (00:00). This is a bit earlier than the past two years. -# -# From Syrian Arab News Agency, in Arabic: -# http://www.sana.sy/ara/2/2012/03/26/408215.htm -# -# Our brief summary: -# https://www.timeanddate.com/news/time/syria-dst-2012.html - -# From Arthur David Olson (2012-03-27): -# Assume last Friday in March going forward XXX. - -Rule Syria 2008 only - Apr Fri>=1 0:00 1:00 S -Rule Syria 2008 only - Nov 1 0:00 0 - -Rule Syria 2009 only - Mar lastFri 0:00 1:00 S -Rule Syria 2010 2011 - Apr Fri>=1 0:00 1:00 S -Rule Syria 2012 max - Mar lastFri 0:00 1:00 S -Rule Syria 2009 max - Oct lastFri 0:00 0 - - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Damascus 2:25:12 - LMT 1920 # Dimashq - 2:00 Syria EE%sT - -# Tajikistan -# From Shanks & Pottenger. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Dushanbe 4:35:12 - LMT 1924 May 2 - 5:00 - +05 1930 Jun 21 - 6:00 RussiaAsia +06/+07 1991 Mar 31 2:00s - 5:00 1:00 +05/+06 1991 Sep 9 2:00s - 5:00 - +05 - -# Thailand -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Bangkok 6:42:04 - LMT 1880 - 6:42:04 - BMT 1920 Apr # Bangkok Mean Time - 7:00 - +07 -Link Asia/Bangkok Asia/Phnom_Penh # Cambodia -Link Asia/Bangkok Asia/Vientiane # Laos - -# Turkmenistan -# From Shanks & Pottenger. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Ashgabat 3:53:32 - LMT 1924 May 2 # or Ashkhabad - 4:00 - +04 1930 Jun 21 - 5:00 RussiaAsia +05/+06 1991 Mar 31 2:00 - 4:00 RussiaAsia +04/+05 1992 Jan 19 2:00 - 5:00 - +05 - -# United Arab Emirates -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Dubai 3:41:12 - LMT 1920 - 4:00 - +04 -Link Asia/Dubai Asia/Muscat # Oman - -# Uzbekistan -# Byalokoz 1919 says Uzbekistan was 4:27:53. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Samarkand 4:27:53 - LMT 1924 May 2 - 4:00 - +04 1930 Jun 21 - 5:00 - +05 1981 Apr 1 - 5:00 1:00 +06 1981 Oct 1 - 6:00 - +06 1982 Apr 1 - 5:00 RussiaAsia +05/+06 1992 - 5:00 - +05 -# Milne says Tashkent was 4:37:10.8; round to nearest. 
-Zone Asia/Tashkent 4:37:11 - LMT 1924 May 2 - 5:00 - +05 1930 Jun 21 - 6:00 RussiaAsia +06/+07 1991 Mar 31 2:00 - 5:00 RussiaAsia +05/+06 1992 - 5:00 - +05 - -# Vietnam - -# From Paul Eggert (2014-10-04): -# Milne gives 7:16:56 for the meridian of Saigon in 1899, as being -# used in Lower Laos, Cambodia, and Annam. But this is quite a ways -# from Saigon's location. For now, ignore this and stick with Shanks -# and Pottenger for LMT before 1906. - -# From Arthur David Olson (2008-03-18): -# The English-language name of Vietnam's most populous city is "Ho Chi Minh -# City"; use Ho_Chi_Minh below to avoid a name of more than 14 characters. - -# From Paul Eggert (2014-10-21) after a heads-up from Trần Ngọc Quân: -# Trần Tiến Bình's authoritative book "Lịch Việt Nam: thế kỷ XX-XXI (1901-2100)" -# (Nhà xuất bản Văn Hoá - Thông Tin, Hanoi, 2005), pp 49-50, -# is quoted verbatim in: -# http://www.thoigian.com.vn/?mPage=P80D01 -# is translated by Brian Inglis in: -# https://mm.icann.org/pipermail/tz/2014-October/021654.html -# and is the basis for the information below. -# -# The 1906 transition was effective July 1 and standardized Indochina to -# Phù Liễn Observatory, legally 104 deg. 17'17" east of Paris. -# It's unclear whether this meant legal Paris Mean Time (00:09:21) or -# the Paris Meridian (2 deg. 20'14.03" E); the former yields 07:06:30.1333... -# and the latter 07:06:29.333... so either way it rounds to 07:06:30, -# which is used below even though the modern-day Phù Liễn Observatory -# is closer to 07:06:31. Abbreviate Phù Liễn Mean Time as PLMT. -# -# The following transitions occurred in Indochina in general (before 1954) -# and in South Vietnam in particular (after 1954): -# To 07:00 on 1911-05-01. -# To 08:00 on 1942-12-31 at 23:00. -# To 09:00 in 1945-03-14 at 23:00. -# To 07:00 on 1945-09-02 in Vietnam. -# To 08:00 on 1947-04-01 in French-controlled Indochina. -# To 07:00 on 1955-07-01 in South Vietnam. -# To 08:00 on 1959-12-31 at 23:00 in South Vietnam. -# To 07:00 on 1975-06-13 in South Vietnam. -# -# Trần cites the following sources; it's unclear which supplied the info above. -# -# Hoàng Xuân Hãn: "Lịch và lịch Việt Nam". Tập san Khoa học Xã hội, -# No. 9, Paris, February 1982. -# -# Lê Thành Lân: "Lịch và niên biểu lịch sử hai mươi thế kỷ (0001-2010)", -# NXB Thống kê, Hanoi, 2000. -# -# Lê Thành Lân: "Lịch hai thế kỷ (1802-2010) và các lịch vĩnh cửu", -# NXB Thuận Hoá, Huế, 1995. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Asia/Ho_Chi_Minh 7:06:40 - LMT 1906 Jul 1 - 7:06:30 - PLMT 1911 May 1 # Phù Liễn MT - 7:00 - +07 1942 Dec 31 23:00 - 8:00 - +08 1945 Mar 14 23:00 - 9:00 - +09 1945 Sep 2 - 7:00 - +07 1947 Apr 1 - 8:00 - +08 1955 Jul 1 - 7:00 - +07 1959 Dec 31 23:00 - 8:00 - +08 1975 Jun 13 - 7:00 - +07 - -# Yemen -# See Asia/Riyadh. diff --git a/src/timezone/data/australasia b/src/timezone/data/australasia deleted file mode 100644 index 5f7c86dda6..0000000000 --- a/src/timezone/data/australasia +++ /dev/null @@ -1,1793 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# This file also includes Pacific islands. - -# Notes are at the end of this file - -############################################################################### - -# Australia - -# Please see the notes below for the controversy about "EST" versus "AEST" etc. 
- -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Aus 1917 only - Jan 1 0:01 1:00 D -Rule Aus 1917 only - Mar 25 2:00 0 S -Rule Aus 1942 only - Jan 1 2:00 1:00 D -Rule Aus 1942 only - Mar 29 2:00 0 S -Rule Aus 1942 only - Sep 27 2:00 1:00 D -Rule Aus 1943 1944 - Mar lastSun 2:00 0 S -Rule Aus 1943 only - Oct 3 2:00 1:00 D -# Go with Whitman and the Australian National Standards Commission, which -# says W Australia didn't use DST in 1943/1944. Ignore Whitman's claim that -# 1944/1945 was just like 1943/1944. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# Northern Territory -Zone Australia/Darwin 8:43:20 - LMT 1895 Feb - 9:00 - ACST 1899 May - 9:30 Aus AC%sT -# Western Australia -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule AW 1974 only - Oct lastSun 2:00s 1:00 D -Rule AW 1975 only - Mar Sun>=1 2:00s 0 S -Rule AW 1983 only - Oct lastSun 2:00s 1:00 D -Rule AW 1984 only - Mar Sun>=1 2:00s 0 S -Rule AW 1991 only - Nov 17 2:00s 1:00 D -Rule AW 1992 only - Mar Sun>=1 2:00s 0 S -Rule AW 2006 only - Dec 3 2:00s 1:00 D -Rule AW 2007 2009 - Mar lastSun 2:00s 0 S -Rule AW 2007 2008 - Oct lastSun 2:00s 1:00 D -Zone Australia/Perth 7:43:24 - LMT 1895 Dec - 8:00 Aus AW%sT 1943 Jul - 8:00 AW AW%sT -Zone Australia/Eucla 8:35:28 - LMT 1895 Dec - 8:45 Aus +0845/+0945 1943 Jul - 8:45 AW +0845/+0945 - -# Queensland -# -# From Alex Livingston (1996-11-01): -# I have heard or read more than once that some resort islands off the coast -# of Queensland chose to keep observing daylight-saving time even after -# Queensland ceased to. -# -# From Paul Eggert (1996-11-22): -# IATA SSIM (1993-02/1994-09) say that the Holiday Islands (Hayman, Lindeman, -# Hamilton) observed DST for two years after the rest of Queensland stopped. -# Hamilton is the largest, but there is also a Hamilton in Victoria, -# so use Lindeman. -# -# From J William Piggott (2016-02-20): -# There is no location named Holiday Islands in Queensland Australia; holiday -# islands is a colloquial term used globally. Hayman and Lindeman are at the -# north and south extremes of the Whitsunday Islands archipelago, and -# Hamilton is in between; it is reasonable to believe that this time zone -# applies to all of the Whitsundays. 
-# http://www.australia.gov.au/about-australia/australian-story/austn-islands -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule AQ 1971 only - Oct lastSun 2:00s 1:00 D -Rule AQ 1972 only - Feb lastSun 2:00s 0 S -Rule AQ 1989 1991 - Oct lastSun 2:00s 1:00 D -Rule AQ 1990 1992 - Mar Sun>=1 2:00s 0 S -Rule Holiday 1992 1993 - Oct lastSun 2:00s 1:00 D -Rule Holiday 1993 1994 - Mar Sun>=1 2:00s 0 S -Zone Australia/Brisbane 10:12:08 - LMT 1895 - 10:00 Aus AE%sT 1971 - 10:00 AQ AE%sT -Zone Australia/Lindeman 9:55:56 - LMT 1895 - 10:00 Aus AE%sT 1971 - 10:00 AQ AE%sT 1992 Jul - 10:00 Holiday AE%sT - -# South Australia -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule AS 1971 1985 - Oct lastSun 2:00s 1:00 D -Rule AS 1986 only - Oct 19 2:00s 1:00 D -Rule AS 1987 2007 - Oct lastSun 2:00s 1:00 D -Rule AS 1972 only - Feb 27 2:00s 0 S -Rule AS 1973 1985 - Mar Sun>=1 2:00s 0 S -Rule AS 1986 1990 - Mar Sun>=15 2:00s 0 S -Rule AS 1991 only - Mar 3 2:00s 0 S -Rule AS 1992 only - Mar 22 2:00s 0 S -Rule AS 1993 only - Mar 7 2:00s 0 S -Rule AS 1994 only - Mar 20 2:00s 0 S -Rule AS 1995 2005 - Mar lastSun 2:00s 0 S -Rule AS 2006 only - Apr 2 2:00s 0 S -Rule AS 2007 only - Mar lastSun 2:00s 0 S -Rule AS 2008 max - Apr Sun>=1 2:00s 0 S -Rule AS 2008 max - Oct Sun>=1 2:00s 1:00 D -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Australia/Adelaide 9:14:20 - LMT 1895 Feb - 9:00 - ACST 1899 May - 9:30 Aus AC%sT 1971 - 9:30 AS AC%sT - -# Tasmania -# -# From Paul Eggert (2005-08-16): -# http://www.bom.gov.au/climate/averages/tables/dst_times.shtml -# says King Island didn't observe DST from WWII until late 1971. -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule AT 1967 only - Oct Sun>=1 2:00s 1:00 D -Rule AT 1968 only - Mar lastSun 2:00s 0 S -Rule AT 1968 1985 - Oct lastSun 2:00s 1:00 D -Rule AT 1969 1971 - Mar Sun>=8 2:00s 0 S -Rule AT 1972 only - Feb lastSun 2:00s 0 S -Rule AT 1973 1981 - Mar Sun>=1 2:00s 0 S -Rule AT 1982 1983 - Mar lastSun 2:00s 0 S -Rule AT 1984 1986 - Mar Sun>=1 2:00s 0 S -Rule AT 1986 only - Oct Sun>=15 2:00s 1:00 D -Rule AT 1987 1990 - Mar Sun>=15 2:00s 0 S -Rule AT 1987 only - Oct Sun>=22 2:00s 1:00 D -Rule AT 1988 1990 - Oct lastSun 2:00s 1:00 D -Rule AT 1991 1999 - Oct Sun>=1 2:00s 1:00 D -Rule AT 1991 2005 - Mar lastSun 2:00s 0 S -Rule AT 2000 only - Aug lastSun 2:00s 1:00 D -Rule AT 2001 max - Oct Sun>=1 2:00s 1:00 D -Rule AT 2006 only - Apr Sun>=1 2:00s 0 S -Rule AT 2007 only - Mar lastSun 2:00s 0 S -Rule AT 2008 max - Apr Sun>=1 2:00s 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Australia/Hobart 9:49:16 - LMT 1895 Sep - 10:00 - AEST 1916 Oct 1 2:00 - 10:00 1:00 AEDT 1917 Feb - 10:00 Aus AE%sT 1967 - 10:00 AT AE%sT -Zone Australia/Currie 9:35:28 - LMT 1895 Sep - 10:00 - AEST 1916 Oct 1 2:00 - 10:00 1:00 AEDT 1917 Feb - 10:00 Aus AE%sT 1971 Jul - 10:00 AT AE%sT - -# Victoria -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule AV 1971 1985 - Oct lastSun 2:00s 1:00 D -Rule AV 1972 only - Feb lastSun 2:00s 0 S -Rule AV 1973 1985 - Mar Sun>=1 2:00s 0 S -Rule AV 1986 1990 - Mar Sun>=15 2:00s 0 S -Rule AV 1986 1987 - Oct Sun>=15 2:00s 1:00 D -Rule AV 1988 1999 - Oct lastSun 2:00s 1:00 D -Rule AV 1991 1994 - Mar Sun>=1 2:00s 0 S -Rule AV 1995 2005 - Mar lastSun 2:00s 0 S -Rule AV 2000 only - Aug lastSun 2:00s 1:00 D -Rule AV 2001 2007 - Oct lastSun 2:00s 1:00 D -Rule AV 2006 only - Apr Sun>=1 2:00s 0 S -Rule AV 2007 only - Mar lastSun 2:00s 0 S -Rule AV 2008 max - Apr Sun>=1 2:00s 0 S -Rule AV 2008 max - Oct Sun>=1 2:00s 1:00 D -# Zone NAME GMTOFF RULES FORMAT [UNTIL] 
-Zone Australia/Melbourne 9:39:52 - LMT 1895 Feb - 10:00 Aus AE%sT 1971 - 10:00 AV AE%sT - -# New South Wales -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule AN 1971 1985 - Oct lastSun 2:00s 1:00 D -Rule AN 1972 only - Feb 27 2:00s 0 S -Rule AN 1973 1981 - Mar Sun>=1 2:00s 0 S -Rule AN 1982 only - Apr Sun>=1 2:00s 0 S -Rule AN 1983 1985 - Mar Sun>=1 2:00s 0 S -Rule AN 1986 1989 - Mar Sun>=15 2:00s 0 S -Rule AN 1986 only - Oct 19 2:00s 1:00 D -Rule AN 1987 1999 - Oct lastSun 2:00s 1:00 D -Rule AN 1990 1995 - Mar Sun>=1 2:00s 0 S -Rule AN 1996 2005 - Mar lastSun 2:00s 0 S -Rule AN 2000 only - Aug lastSun 2:00s 1:00 D -Rule AN 2001 2007 - Oct lastSun 2:00s 1:00 D -Rule AN 2006 only - Apr Sun>=1 2:00s 0 S -Rule AN 2007 only - Mar lastSun 2:00s 0 S -Rule AN 2008 max - Apr Sun>=1 2:00s 0 S -Rule AN 2008 max - Oct Sun>=1 2:00s 1:00 D -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Australia/Sydney 10:04:52 - LMT 1895 Feb - 10:00 Aus AE%sT 1971 - 10:00 AN AE%sT -Zone Australia/Broken_Hill 9:25:48 - LMT 1895 Feb - 10:00 - AEST 1896 Aug 23 - 9:00 - ACST 1899 May - 9:30 Aus AC%sT 1971 - 9:30 AN AC%sT 2000 - 9:30 AS AC%sT - -# Lord Howe Island -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule LH 1981 1984 - Oct lastSun 2:00 1:00 D -Rule LH 1982 1985 - Mar Sun>=1 2:00 0 S -Rule LH 1985 only - Oct lastSun 2:00 0:30 D -Rule LH 1986 1989 - Mar Sun>=15 2:00 0 S -Rule LH 1986 only - Oct 19 2:00 0:30 D -Rule LH 1987 1999 - Oct lastSun 2:00 0:30 D -Rule LH 1990 1995 - Mar Sun>=1 2:00 0 S -Rule LH 1996 2005 - Mar lastSun 2:00 0 S -Rule LH 2000 only - Aug lastSun 2:00 0:30 D -Rule LH 2001 2007 - Oct lastSun 2:00 0:30 D -Rule LH 2006 only - Apr Sun>=1 2:00 0 S -Rule LH 2007 only - Mar lastSun 2:00 0 S -Rule LH 2008 max - Apr Sun>=1 2:00 0 S -Rule LH 2008 max - Oct Sun>=1 2:00 0:30 D -Zone Australia/Lord_Howe 10:36:20 - LMT 1895 Feb - 10:00 - AEST 1981 Mar - 10:30 LH +1030/+1130 1985 Jul - 10:30 LH +1030/+11 - -# Australian miscellany -# -# Ashmore Is, Cartier -# no indigenous inhabitants; only seasonal caretakers -# no times are set -# -# Coral Sea Is -# no indigenous inhabitants; only meteorologists -# no times are set -# -# Macquarie -# Permanent occupation (scientific station) 1911-1915 and since 25 March 1948; -# sealing and penguin oil station operated Nov 1899 to Apr 1919. See the -# Tasmania Parks & Wildlife Service history of sealing at Macquarie Island -# http://www.parks.tas.gov.au/index.aspx?base=1828 -# http://www.parks.tas.gov.au/index.aspx?base=1831 -# Guess that it was like Australia/Hobart while inhabited before 2010. -# -# From Steffen Thorsen (2010-03-10): -# We got these changes from the Australian Antarctic Division: -# - Macquarie Island will stay on UTC+11 for winter and therefore not -# switch back from daylight savings time when other parts of Australia do -# on 4 April. -# -# From Arthur David Olson (2013-05-23): -# The 1919 transition is overspecified below so pre-2013 zics -# will produce a binary file with an [A]EST-type as the first 32-bit type; -# this is required for correct handling of times before 1916 by -# pre-2013 versions of localtime. 
-Zone Antarctica/Macquarie 0 - -00 1899 Nov - 10:00 - AEST 1916 Oct 1 2:00 - 10:00 1:00 AEDT 1917 Feb - 10:00 Aus AE%sT 1919 Apr 1 0:00s - 0 - -00 1948 Mar 25 - 10:00 Aus AE%sT 1967 - 10:00 AT AE%sT 2010 Apr 4 3:00 - 11:00 - +11 - -# Christmas -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Indian/Christmas 7:02:52 - LMT 1895 Feb - 7:00 - +07 - -# Cocos (Keeling) Is -# These islands were ruled by the Ross family from about 1830 to 1978. -# We don't know when standard time was introduced; for now, we guess 1900. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Indian/Cocos 6:27:40 - LMT 1900 - 6:30 - +0630 - - -# Fiji - -# Milne gives 11:55:44 for Suva. - -# From Alexander Krivenyshev (2009-11-10): -# According to Fiji Broadcasting Corporation, Fiji plans to re-introduce DST -# from November 29th 2009 to April 25th 2010. -# -# "Daylight savings to commence this month" -# http://www.radiofiji.com.fj/fullstory.php?id=23719 -# http://www.worldtimezone.com/dst_news/dst_news_fiji01.html - -# From Steffen Thorsen (2009-11-10): -# The Fiji Government has posted some more details about the approved -# amendments: -# http://www.fiji.gov.fj/publish/page_16198.shtml - -# From Steffen Thorsen (2010-03-03): -# The Cabinet in Fiji has decided to end DST about a month early, on -# 2010-03-28 at 03:00. -# The plan is to observe DST again, from 2010-10-24 to sometime in March -# 2011 (last Sunday a good guess?). -# -# Official source: -# http://www.fiji.gov.fj/index.php?option=com_content&view=article&id=1096:3310-cabinet-approves-change-in-daylight-savings-dates&catid=49:cabinet-releases&Itemid=166 -# -# A bit more background info here: -# https://www.timeanddate.com/news/time/fiji-dst-ends-march-2010.html - -# From Alexander Krivenyshev (2010-10-24): -# According to Radio Fiji and Fiji Times online, Fiji will end DST 3 -# weeks earlier than expected - on March 6, 2011, not March 27, 2011... -# Here is confirmation from Government of the Republic of the Fiji Islands, -# Ministry of Information (fiji.gov.fj) web site: -# http://www.fiji.gov.fj/index.php?option=com_content&view=article&id=2608:daylight-savings&catid=71:press-releases&Itemid=155 -# http://www.worldtimezone.com/dst_news/dst_news_fiji04.html - -# From Steffen Thorsen (2011-10-03): -# Now the dates have been confirmed, and at least our start date -# assumption was correct (end date was one week wrong). -# -# http://www.fiji.gov.fj/index.php?option=com_content&view=article&id=4966:daylight-saving-starts-in-fiji&catid=71:press-releases&Itemid=155 -# which says -# Members of the public are reminded to change their time to one hour in -# advance at 2am to 3am on October 23, 2011 and one hour back at 3am to -# 2am on February 26 next year. - -# From Ken Rylander (2011-10-24) -# Another change to the Fiji DST end date. In the TZ database the end date for -# Fiji DST 2012, is currently Feb 26. This has been changed to Jan 22. -# -# http://www.fiji.gov.fj/index.php?option=com_content&view=article&id=5017:amendments-to-daylight-savings&catid=71:press-releases&Itemid=155 -# states: -# -# The end of daylight saving scheduled initially for the 26th of February 2012 -# has been brought forward to the 22nd of January 2012. -# The commencement of daylight saving will remain unchanged and start -# on the 23rd of October, 2011. 
- -# From the Fiji Government Online Portal (2012-08-21) via Steffen Thorsen: -# The Minister for Labour, Industrial Relations and Employment Mr Jone Usamate -# today confirmed that Fiji will start daylight savings at 2 am on Sunday 21st -# October 2012 and end at 3 am on Sunday 20th January 2013. -# http://www.fiji.gov.fj/index.php?option=com_content&view=article&id=6702&catid=71&Itemid=155 - -# From the Fijian Government Media Center (2013-08-30) via David Wheeler: -# Fiji will start daylight savings on Sunday 27th October, 2013 ... -# move clocks forward by one hour from 2am -# http://www.fiji.gov.fj/Media-Center/Press-Releases/DAYLIGHT-SAVING-STARTS-ON-SUNDAY,-27th-OCTOBER-201.aspx - -# From Steffen Thorsen (2013-01-10): -# Fiji will end DST on 2014-01-19 02:00: -# http://www.fiji.gov.fj/Media-Center/Press-Releases/DAYLIGHT-SAVINGS-TO-END-THIS-MONTH-%281%29.aspx - -# From Ken Rylander (2014-10-20): -# DST will start Nov. 2 this year. -# http://www.fiji.gov.fj/Media-Center/Press-Releases/DAYLIGHT-SAVING-STARTS-ON-SUNDAY,-NOVEMBER-2ND.aspx - -# From a government order dated 2015-08-26 and published as Legal Notice No. 77 -# in the Government of Fiji Gazette Supplement No. 24 (2015-08-28), -# via Ken Rylander (2015-09-02): -# the daylight saving period is 1 hour in advance of the standard time -# commencing at 2.00 am on Sunday 1st November, 2015 and ending at -# 3.00 am on Sunday 17th January, 2016. - -# From Raymond Kumar (2016-10-04): -# http://www.fiji.gov.fj/Media-Center/Press-Releases/DAYLIGHT-SAVING-STARTS-ON-6th-NOVEMBER,-2016.aspx -# "Fiji's daylight savings will begin on Sunday, 6 November 2016, when -# clocks go forward an hour at 2am to 3am.... Daylight Saving will -# end at 3.00am on Sunday 15th January 2017." - -# From Paul Eggert (2017-08-21): -# Dominic Fok writes (2017-08-20) that DST ends 2018-01-14, citing -# Extraordinary Government of Fiji Gazette Supplement No. 21 (2017-08-27), -# [Legal Notice No. 41] of an order of the previous day by J Usamate. -# For now, guess DST from 02:00 the first Sunday in November to 03:00 -# the first Sunday on or after January 14. Although ad hoc, it matches -# transitions since late 2014 and seems more likely to match future -# practice than guessing no DST. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Fiji 1998 1999 - Nov Sun>=1 2:00 1:00 S -Rule Fiji 1999 2000 - Feb lastSun 3:00 0 - -Rule Fiji 2009 only - Nov 29 2:00 1:00 S -Rule Fiji 2010 only - Mar lastSun 3:00 0 - -Rule Fiji 2010 2013 - Oct Sun>=21 2:00 1:00 S -Rule Fiji 2011 only - Mar Sun>=1 3:00 0 - -Rule Fiji 2012 2013 - Jan Sun>=18 3:00 0 - -Rule Fiji 2014 only - Jan Sun>=18 2:00 0 - -Rule Fiji 2014 max - Nov Sun>=1 2:00 1:00 S -Rule Fiji 2015 max - Jan Sun>=14 3:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Fiji 11:55:44 - LMT 1915 Oct 26 # Suva - 12:00 Fiji +12/+13 - -# French Polynesia -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Gambier -8:59:48 - LMT 1912 Oct # Rikitea - -9:00 - -09 -Zone Pacific/Marquesas -9:18:00 - LMT 1912 Oct - -9:30 - -0930 -Zone Pacific/Tahiti -9:58:16 - LMT 1912 Oct # Papeete - -10:00 - -10 -# Clipperton (near North America) is administered from French Polynesia; -# it is uninhabited. 
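
The Fiji rules above rely on the complementary ON form "Sun>=N", the first Sunday falling on or after day N; Paul Eggert's 2017 note predicts ends on "the first Sunday on or after January 14" for exactly this reason. A companion sketch to the earlier Mongolia example, again with a hypothetical helper rather than code from zic:

#include <stdio.h>
#include <time.h>

/*
 * Resolve a zic-style "Sun>=14" ON field: the first Sunday falling on or
 * after the 14th.  Hypothetical helper for illustration; not zic code.
 */
static int
first_weekday_on_or_after(int year, int mon1, int day, int wday)
{
	struct tm tm = {0};

	tm.tm_year = year - 1900;
	tm.tm_mon = mon1 - 1;	/* tm_mon is 0-based */
	tm.tm_mday = day;
	tm.tm_hour = 12;		/* stay clear of any local DST edge */
	tm.tm_isdst = -1;
	mktime(&tm);			/* fills in tm_wday */

	/* Step forward to the requested weekday, possibly the day itself. */
	return tm.tm_mday + (wday - tm.tm_wday + 7) % 7;
}

int
main(void)
{
	/* "Rule Fiji 2015 max - Jan Sun>=14 3:00 0 -" */
	printf("2017: Jan %d\n", first_weekday_on_or_after(2017, 1, 14, 0));	/* 15 */
	printf("2018: Jan %d\n", first_weekday_on_or_after(2018, 1, 14, 0));	/* 14 */
	return 0;
}

For "Rule Fiji 2015 max - Jan Sun>=14" this gives January 15 in 2017 and January 14 in 2018, matching the gazetted end dates quoted above.
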
- -# Guam -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Guam -14:21:00 - LMT 1844 Dec 31 - 9:39:00 - LMT 1901 # Agana - 10:00 - GST 2000 Dec 23 # Guam - 10:00 - ChST # Chamorro Standard Time -Link Pacific/Guam Pacific/Saipan # N Mariana Is - -# Kiribati -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Tarawa 11:32:04 - LMT 1901 # Bairiki - 12:00 - +12 -Zone Pacific/Enderbury -11:24:20 - LMT 1901 - -12:00 - -12 1979 Oct - -11:00 - -11 1995 - 13:00 - +13 -Zone Pacific/Kiritimati -10:29:20 - LMT 1901 - -10:40 - -1040 1979 Oct - -10:00 - -10 1995 - 14:00 - +14 - -# N Mariana Is -# See Pacific/Guam. - -# Marshall Is -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Majuro 11:24:48 - LMT 1901 - 11:00 - +11 1969 Oct - 12:00 - +12 -Zone Pacific/Kwajalein 11:09:20 - LMT 1901 - 11:00 - +11 1969 Oct - -12:00 - -12 1993 Aug 20 - 12:00 - +12 - -# Micronesia -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Chuuk 10:07:08 - LMT 1901 - 10:00 - +10 -Zone Pacific/Pohnpei 10:32:52 - LMT 1901 # Kolonia - 11:00 - +11 -Zone Pacific/Kosrae 10:51:56 - LMT 1901 - 11:00 - +11 1969 Oct - 12:00 - +12 1999 - 11:00 - +11 - -# Nauru -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Nauru 11:07:40 - LMT 1921 Jan 15 # Uaobe - 11:30 - +1130 1942 Mar 15 - 9:00 - +09 1944 Aug 15 - 11:30 - +1130 1979 May - 12:00 - +12 - -# New Caledonia -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule NC 1977 1978 - Dec Sun>=1 0:00 1:00 S -Rule NC 1978 1979 - Feb 27 0:00 0 - -Rule NC 1996 only - Dec 1 2:00s 1:00 S -# Shanks & Pottenger say the following was at 2:00; go with IATA. -Rule NC 1997 only - Mar 2 2:00s 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Noumea 11:05:48 - LMT 1912 Jan 13 # Nouméa - 11:00 NC +11/+12 - - -############################################################################### - -# New Zealand - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule NZ 1927 only - Nov 6 2:00 1:00 S -Rule NZ 1928 only - Mar 4 2:00 0 M -Rule NZ 1928 1933 - Oct Sun>=8 2:00 0:30 S -Rule NZ 1929 1933 - Mar Sun>=15 2:00 0 M -Rule NZ 1934 1940 - Apr lastSun 2:00 0 M -Rule NZ 1934 1940 - Sep lastSun 2:00 0:30 S -Rule NZ 1946 only - Jan 1 0:00 0 S -# Since 1957 Chatham has been 45 minutes ahead of NZ, but there's no -# convenient single notation for the date and time of this transition -# so we must duplicate the Rule lines. 
-Rule NZ 1974 only - Nov Sun>=1 2:00s 1:00 D -Rule Chatham 1974 only - Nov Sun>=1 2:45s 1:00 D -Rule NZ 1975 only - Feb lastSun 2:00s 0 S -Rule Chatham 1975 only - Feb lastSun 2:45s 0 S -Rule NZ 1975 1988 - Oct lastSun 2:00s 1:00 D -Rule Chatham 1975 1988 - Oct lastSun 2:45s 1:00 D -Rule NZ 1976 1989 - Mar Sun>=1 2:00s 0 S -Rule Chatham 1976 1989 - Mar Sun>=1 2:45s 0 S -Rule NZ 1989 only - Oct Sun>=8 2:00s 1:00 D -Rule Chatham 1989 only - Oct Sun>=8 2:45s 1:00 D -Rule NZ 1990 2006 - Oct Sun>=1 2:00s 1:00 D -Rule Chatham 1990 2006 - Oct Sun>=1 2:45s 1:00 D -Rule NZ 1990 2007 - Mar Sun>=15 2:00s 0 S -Rule Chatham 1990 2007 - Mar Sun>=15 2:45s 0 S -Rule NZ 2007 max - Sep lastSun 2:00s 1:00 D -Rule Chatham 2007 max - Sep lastSun 2:45s 1:00 D -Rule NZ 2008 max - Apr Sun>=1 2:00s 0 S -Rule Chatham 2008 max - Apr Sun>=1 2:45s 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Auckland 11:39:04 - LMT 1868 Nov 2 - 11:30 NZ NZ%sT 1946 Jan 1 - 12:00 NZ NZ%sT -Zone Pacific/Chatham 12:13:48 - LMT 1868 Nov 2 - 12:15 - +1215 1946 Jan 1 - 12:45 Chatham +1245/+1345 - -Link Pacific/Auckland Antarctica/McMurdo - -# Auckland Is -# uninhabited; Māori and Moriori, colonial settlers, pastoralists, sealers, -# and scientific personnel have wintered - -# Campbell I -# minor whaling stations operated 1909/1914 -# scientific station operated 1941/1995; -# previously whalers, sealers, pastoralists, and scientific personnel wintered -# was probably like Pacific/Auckland - -# Cook Is -# From Shanks & Pottenger: -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Cook 1978 only - Nov 12 0:00 0:30 HS -Rule Cook 1979 1991 - Mar Sun>=1 0:00 0 - -Rule Cook 1979 1990 - Oct lastSun 0:00 0:30 HS -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Rarotonga -10:39:04 - LMT 1901 # Avarua - -10:30 - -1030 1978 Nov 12 - -10:00 Cook -10/-0930 - -############################################################################### - - -# Niue -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Niue -11:19:40 - LMT 1901 # Alofi - -11:20 - -1120 1951 - -11:30 - -1130 1978 Oct 1 - -11:00 - -11 - -# Norfolk -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Norfolk 11:11:52 - LMT 1901 # Kingston - 11:12 - +1112 1951 - 11:30 - +1130 1974 Oct 27 02:00 - 11:30 1:00 +1230 1975 Mar 2 02:00 - 11:30 - +1130 2015 Oct 4 02:00 - 11:00 - +11 - -# Palau (Belau) -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Palau 8:57:56 - LMT 1901 # Koror - 9:00 - +09 - -# Papua New Guinea -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Port_Moresby 9:48:40 - LMT 1880 - 9:48:32 - PMMT 1895 # Port Moresby Mean Time - 10:00 - +10 -# -# From Paul Eggert (2014-10-13): -# Base the Bougainville entry on the Arawa-Kieta region, which appears to have -# the most people even though it was devastated in the Bougainville Civil War. -# -# Although Shanks gives 1942-03-15 / 1943-11-01 for UT +09, these dates -# are apparently rough guesswork from the starts of military campaigns. -# The World War II entries below are instead based on Arawa-Kieta. -# The Japanese occupied Kieta in July 1942, -# according to the Pacific War Online Encyclopedia -# https://pwencycl.kgbudge.com/B/o/Bougainville.htm -# and seem to have controlled it until their 1945-08-21 surrender. -# -# The Autonomous Region of Bougainville switched from UT +10 to +11 -# on 2014-12-28 at 02:00. They call +11 "Bougainville Standard Time". 
-# See: -# http://www.bougainville24.com/bougainville-issues/bougainville-gets-own-timezone/ -# -Zone Pacific/Bougainville 10:22:16 - LMT 1880 - 9:48:32 - PMMT 1895 - 10:00 - +10 1942 Jul - 9:00 - +09 1945 Aug 21 - 10:00 - +10 2014 Dec 28 2:00 - 11:00 - +11 - -# Pitcairn -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Pitcairn -8:40:20 - LMT 1901 # Adamstown - -8:30 - -0830 1998 Apr 27 0:00 - -8:00 - -08 - -# American Samoa -Zone Pacific/Pago_Pago 12:37:12 - LMT 1892 Jul 5 - -11:22:48 - LMT 1911 - -11:00 - SST # S=Samoa -Link Pacific/Pago_Pago Pacific/Midway # in US minor outlying islands - -# Samoa (formerly and also known as Western Samoa) - -# From Steffen Thorsen (2009-10-16): -# We have been in contact with the government of Samoa again, and received -# the following info: -# -# "Cabinet has now approved Daylight Saving to be effected next year -# commencing from the last Sunday of September 2010 and conclude first -# Sunday of April 2011." -# -# Background info: -# https://www.timeanddate.com/news/time/samoa-dst-plan-2009.html -# -# Samoa's Daylight Saving Time Act 2009 is available here, but does not -# contain any dates: -# http://www.parliament.gov.ws/documents/acts/Daylight%20Saving%20Act%20%202009%20%28English%29%20-%20Final%207-7-091.pdf - -# From Laupue Raymond Hughes (2010-10-07): -# Please see -# http://www.mcil.gov.ws -# the Ministry of Commerce, Industry and Labour (sideframe) "Last Sunday -# September 2010 (26/09/10) - adjust clocks forward from 12:00 midnight -# to 01:00am and First Sunday April 2011 (03/04/11) - adjust clocks -# backwards from 1:00am to 12:00am" - -# From Laupue Raymond Hughes (2011-03-07): -# [http://www.mcil.gov.ws/ftcd/daylight_saving_2011.pdf] -# -# ... when the standard time strikes the hour of four o'clock (4.00am -# or 0400 Hours) on the 2nd April 2011, then all instruments used to -# measure standard time are to be adjusted/changed to three o'clock -# (3:00am or 0300Hrs). - -# From David Zülke (2011-05-09): -# Subject: Samoa to move timezone from east to west of international date line -# -# http://www.morningstar.co.uk/uk/markets/newsfeeditem.aspx?id=138501958347963 - -# From Paul Eggert (2014-06-27): -# The International Date Line Act 2011 -# http://www.parliament.gov.ws/images/ACTS/International_Date_Line_Act__2011_-_Eng.pdf -# changed Samoa from UT -11 to +13, effective "12 o'clock midnight, on -# Thursday 29th December 2011". The International Date Line was adjusted -# accordingly. - -# From Laupue Raymond Hughes (2011-09-02): -# http://www.mcil.gov.ws/mcil_publications.html -# -# here is the official website publication for Samoa DST and dateline change -# -# DST -# Year End Time Start Time -# 2011 - - - - - - 24 September 3:00am to 4:00am -# 2012 01 April 4:00am to 3:00am - - - - - - -# -# Dateline Change skip Friday 30th Dec 2011 -# Thursday 29th December 2011 23:59:59 Hours -# Saturday 31st December 2011 00:00:00 Hours -# -# From Nicholas Pereira (2012-09-10): -# Daylight Saving Time commences on Sunday 30th September 2012 and -# ends on Sunday 7th of April 2013.... -# http://www.mcil.gov.ws/mcil_publications.html -# -# From Paul Eggert (2014-07-08): -# That web page currently lists transitions for 2012/3 and 2013/4. -# Assume the pattern instituted in 2012 will continue indefinitely. 
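The WS rules and the Pacific/Apia entry just below encode both the modern DST pattern and the 2011 jump across the International Date Line. Because DST was in force, the Zone line's "2011 Dec 29 24:00" wall-clock cutoff corresponds to 2011-12-30 10:00 UTC, at which point the offset leaps from -10 to +14. A minimal sketch of the effect using Python's zoneinfo (assuming Python 3.9+ and a tzdata that includes these rules):

from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

apia = ZoneInfo("Pacific/Apia")

# One second on either side of the -11/-10 -> +13/+14 switchover:
before = datetime(2011, 12, 30, 9, 59, 59, tzinfo=timezone.utc)
after = before + timedelta(seconds=1)
print(before.astimezone(apia))  # 2011-12-29 23:59:59-10:00
print(after.astimezone(apia))   # 2011-12-31 00:00:00+14:00; Dec 30 never occurs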
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule WS 2010 only - Sep lastSun 0:00 1 D
-Rule WS 2011 only - Apr Sat>=1 4:00 0 S
-Rule WS 2011 only - Sep lastSat 3:00 1 D
-Rule WS 2012 max - Apr Sun>=1 4:00 0 S
-Rule WS 2012 max - Sep lastSun 3:00 1 D
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Pacific/Apia 12:33:04 - LMT 1892 Jul 5
- -11:26:56 - LMT 1911
- -11:30 - -1130 1950
- -11:00 WS -11/-10 2011 Dec 29 24:00
- 13:00 WS +13/+14
-
-# Solomon Is
-# excludes Bougainville, for which see Papua New Guinea
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Pacific/Guadalcanal 10:39:48 - LMT 1912 Oct # Honiara
- 11:00 - +11
-
-# Tokelau
-#
-# From Gwillim Law (2011-12-29)
-# A correspondent informed me that Tokelau, like Samoa, will be skipping
-# December 31 this year ...
-#
-# From Steffen Thorsen (2012-07-25)
-# ... we double checked by calling hotels and offices based in Tokelau asking
-# about the time there, and they all told a time that agrees with UTC+13....
-# Shanks says UTC-10 from 1901 [but] ... there is a good chance the change
-# actually was to UTC-11 back then.
-#
-# From Paul Eggert (2012-07-25)
-# A Google Books snippet of Appendix to the Journals of the House of
-# Representatives of New Zealand, Session 1948, page 65, says Tokelau
-# was "11 hours slow on G.M.T." Go with Thorsen and assume Shanks & Pottenger
-# are off by an hour starting in 1901.
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Pacific/Fakaofo -11:24:56 - LMT 1901
- -11:00 - -11 2011 Dec 30
- 13:00 - +13
-
-# Tonga
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Tonga 1999 only - Oct 7 2:00s 1:00 S
-Rule Tonga 2000 only - Mar 19 2:00s 0 -
-Rule Tonga 2000 2001 - Nov Sun>=1 2:00 1:00 S
-Rule Tonga 2001 2002 - Jan lastSun 2:00 0 -
-Rule Tonga 2016 only - Nov Sun>=1 2:00 1:00 S
-Rule Tonga 2017 only - Jan Sun>=15 3:00 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Pacific/Tongatapu 12:19:20 - LMT 1901
- 12:20 - +1220 1941
- 13:00 - +13 1999
- 13:00 Tonga +13/+14
-
-# Tuvalu
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Pacific/Funafuti 11:56:52 - LMT 1901
- 12:00 - +12
-
-
-# US minor outlying islands
-
-# Howland, Baker
-# Howland was mined for guano by American companies 1857-1878 and British
-# 1886-1891; Baker was similar but exact dates are not known.
-# Inhabited by civilians 1935-1942; U.S. military bases 1943-1944;
-# uninhabited thereafter.
-# Howland observed Hawaii Standard Time (UT -10:30) in 1937;
-# see page 206 of Elgen M. Long and Marie K. Long,
-# Amelia Earhart: the Mystery Solved, Simon & Schuster (2000).
-# So most likely Howland and Baker observed Hawaii Time from 1935
-# until they were abandoned after the war.
-
-# Jarvis
-# Mined for guano by American companies 1857-1879 and British 1883?-1891?.
-# Inhabited by civilians 1935-1942; IGY scientific base 1957-1958;
-# uninhabited thereafter.
-# no information; was probably like Pacific/Kiritimati
-
-# Johnston
-#
-# From Paul Eggert (2017-02-10):
-# Sometimes Johnston kept Hawaii time, and sometimes it was an hour behind.
-# Details are uncertain. We have no data for Johnston after 1970, so
-# treat it like Hawaii for now. Since Johnston is now uninhabited,
-# its link to Pacific/Honolulu is in the 'backward' file.
-#
-# In his memoirs of June 6th to October 4, 1945
-# (2005), Herbert C. Bach writes,
-# "We started our letdown to Kwajalein Atoll and landed there at 5:00 AM
-# Johnston time, 1:30 AM Kwajalein time." This was in June 1945, and
-# confirms that Johnston kept the same time as Honolulu in summer 1945.
-# -# From Lyle McElhaney (2014-03-11): -# [W]hen JI was being used for that [atomic bomb] testing, the time being used -# was not Hawaiian time but rather the same time being used on the ships, -# which had a GMT offset of -11 hours. This apparently applied to at least the -# time from Operation Newsreel (Hardtack I/Teak shot, 1958-08-01) to the last -# Operation Fishbowl shot (Tightrope, 1962-11-04).... [See] Herman Hoerlin, -# "The United States High-Altitude Test Experience: A Review Emphasizing the -# Impact on the Environment", Los Alamos LA-6405, Oct 1976. -# https://www.fas.org/sgp/othergov/doe/lanl/docs1/00322994.pdf -# See the table on page 4 where he lists GMT and local times for the tests; a -# footnote for the JI tests reads that local time is "JI time = Hawaii Time -# Minus One Hour". - -# Kingman -# uninhabited - -# Midway -# See Pacific/Pago_Pago. - -# Palmyra -# uninhabited since World War II; was probably like Pacific/Kiritimati - -# Wake -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Wake 11:06:28 - LMT 1901 - 12:00 - +12 - - -# Vanuatu -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Vanuatu 1983 only - Sep 25 0:00 1:00 S -Rule Vanuatu 1984 1991 - Mar Sun>=23 0:00 0 - -Rule Vanuatu 1984 only - Oct 23 0:00 1:00 S -Rule Vanuatu 1985 1991 - Sep Sun>=23 0:00 1:00 S -Rule Vanuatu 1992 1993 - Jan Sun>=23 0:00 0 - -Rule Vanuatu 1992 only - Oct Sun>=23 0:00 1:00 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Efate 11:13:16 - LMT 1912 Jan 13 # Vila - 11:00 Vanuatu +11/+12 - -# Wallis and Futuna -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Wallis 12:15:20 - LMT 1901 - 12:00 - +12 - -############################################################################### - -# NOTES - -# This file is by no means authoritative; if you think you know better, -# go ahead and edit the file (and please send any changes to -# tz@iana.org for general use in the future). For more, please see -# the file CONTRIBUTING in the tz distribution. - -# From Paul Eggert (2017-02-10): -# -# Unless otherwise specified, the source for data through 1990 is: -# Thomas G. Shanks and Rique Pottenger, The International Atlas (6th edition), -# San Diego: ACS Publications, Inc. (2003). -# Unfortunately this book contains many errors and cites no sources. -# -# Many years ago Gwillim Law wrote that a good source -# for time zone data was the International Air Transport -# Association's Standard Schedules Information Manual (IATA SSIM), -# published semiannually. Law sent in several helpful summaries -# of the IATA's data after 1990. Except where otherwise noted, -# IATA SSIM is the source for entries after 1990. -# -# Another source occasionally used is Edward W. Whitman, World Time Differences, -# Whitman Publishing Co, 2 Niagara Av, Ealing, London (undated), which -# I found in the UCLA library. -# -# For data circa 1899, a common source is: -# Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94. -# https://www.jstor.org/stable/1774359 -# -# A reliable and entertaining source about time zones is -# Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997). -# -# The following abbreviations are from other sources. -# Corrections are welcome! 
-# std dst -# LMT Local Mean Time -# 8:00 AWST AWDT Western Australia -# 9:30 ACST ACDT Central Australia -# 10:00 AEST AEDT Eastern Australia -# 10:00 GST Guam through 2000 -# 10:00 ChST Chamorro -# 11:30 NZMT NZST New Zealand through 1945 -# 12:00 NZST NZDT New Zealand 1946-present -# -11:00 SST Samoa -# -10:00 HST Hawaii -# -# See the 'northamerica' file for Hawaii. -# See the 'southamerica' file for Easter I and the Galápagos Is. - -############################################################################### - -# Australia - -# From Paul Eggert (2014-06-30): -# Daylight saving time has long been controversial in Australia, pitting -# region against region, rural against urban, and local against global. -# For example, in her review of Graeme Davison's _The Unforgiving -# Minute: how Australians learned to tell the time_ (1993), Perth native -# Phillipa J Martyr wrote, "The section entitled 'Saving Daylight' was -# very informative, but was (as can, sadly, only be expected from a -# Melbourne-based study) replete with the usual chuckleheaded -# Queenslanders and straw-chewing yokels from the West prattling fables -# about fading curtains and crazed farm animals." -# Electronic Journal of Australian and New Zealand History (1997-03-03) -# http://www.jcu.edu.au/aff/history/reviews/davison.htm - -# From Paul Eggert (2005-12-08): -# Implementation Dates of Daylight Saving Time within Australia -# http://www.bom.gov.au/climate/averages/tables/dst_times.shtml -# summarizes daylight saving issues in Australia. - -# From Arthur David Olson (2005-12-12): -# Lawlink NSW:Daylight Saving in New South Wales -# http://www.lawlink.nsw.gov.au/lawlink/Corporate/ll_agdinfo.nsf/pages/community_relations_daylight_saving -# covers New South Wales in particular. - -# From John Mackin (1991-03-06): -# We in Australia have _never_ referred to DST as 'daylight' time. -# It is called 'summer' time. Now by a happy coincidence, 'summer' -# and 'standard' happen to start with the same letter; hence, the -# abbreviation does _not_ change... -# The legislation does not actually define abbreviations, at least -# in this State, but the abbreviation is just commonly taken to be the -# initials of the phrase, and the legislation here uniformly uses -# the phrase 'summer time' and does not use the phrase 'daylight -# time'. -# Announcers on the Commonwealth radio network, the ABC (for Australian -# Broadcasting Commission), use the phrases 'Eastern Standard Time' -# or 'Eastern Summer Time'. (Note, though, that as I say in the -# current australasia file, there is really no such thing.) Announcers -# on its overseas service, Radio Australia, use the same phrases -# prefixed by the word 'Australian' when referring to local times; -# time announcements on that service, naturally enough, are made in UTC. - -# From Paul Eggert (2014-06-30): -# -# Inspired by Mackin's remarks quoted above, earlier versions of this -# file used "EST" for both Eastern Standard Time and Eastern Summer -# Time in Australia, and similarly for "CST", "CWST", and "WST". -# However, these abbreviations were confusing and were not common -# practice among Australians, and there were justifiable complaints -# about them, so I attempted to survey current Australian usage. -# For the tz database, the full English phrase is not that important; -# what matters is the abbreviation. 
It's difficult to survey the web -# directly for abbreviation usage, as there are so many false hits for -# strings like "EST" and "EDT", so I looked for pages that defined an -# abbreviation for eastern or central DST in Australia, and got the -# following numbers of unique hits for the listed Google queries: -# -# 10 "Eastern Daylight Time AEST" site:au [some are false hits] -# 10 "Eastern Summer Time AEST" site:au -# 10 "Summer Time AEDT" site:au -# 13 "EDST Eastern Daylight Saving Time" site:au -# 18 "Summer Time ESST" site:au -# 28 "Eastern Daylight Saving Time EDST" site:au -# 39 "EDT Eastern Daylight Time" site:au [some are false hits] -# 53 "Eastern Daylight Time EDT" site:au [some are false hits] -# 54 "AEDT Australian Eastern Daylight Time" site:au -# 182 "Eastern Daylight Time AEDT" site:au -# -# 17 "Central Daylight Time CDT" site:au [some are false hits] -# 46 "Central Daylight Time ACDT" site:au -# -# I tried several other variants (e.g., "Eastern Summer Time EST") but -# they all returned fewer than 10 unique hits. I also looked for pages -# mentioning both "western standard time" and an abbreviation, since -# there is no WST in the US to generate false hits, and found: -# -# 156 "western standard time" AWST site:au -# 226 "western standard time" WST site:au -# -# I then surveyed the top ten newspapers in Australia by circulation as -# listed in Wikipedia, using Google queries like "AEDT site:heraldsun.com.au" -# and obtaining estimated counts from the initial page of search results. -# All ten papers greatly preferred "AEDT" to "EDT". The papers -# surveyed were the Herald Sun, The Daily Telegraph, The Courier-Mail, -# The Sydney Morning Herald, The West Australian, The Age, The Advertiser, -# The Australian, The Financial Review, and The Herald (Newcastle). -# -# I also searched for historical usage, to see whether abbreviations -# like "AEDT" are new. A Trove search -# found only one newspaper (The Canberra Times) with a house style -# dating back to the 1970s, I expect because other newspapers weren't -# fully indexed. The Canberra Times strongly preferred abbreviations -# like "AEDT". The first occurrence of "AEDT" was a World Weather -# column (1971-11-17, page 24), and of "ACDT" was a Scoreboard column -# (1993-01-24, p 16). The style was the typical usage but was not -# strictly enforced; for example, "Welcome to the twilight zones ..." -# (1994-10-29, p 1) uses the abbreviations AEST/AEDT, CST/CDT, and -# WST, and goes on to say, "The confusion and frustration some feel -# about the lack of uniformity among Australia's six states and two -# territories has prompted one group to form its very own political -# party -- the Sydney-based Daylight Saving Extension Party." -# -# I also surveyed federal government sources. They did not agree: -# -# The Australian Government (2014-03-26) -# http://australia.gov.au/about-australia/our-country/time -# (This document was produced by the Department of Finance.) 
-# AEST ACST AWST AEDT ACDT -# -# Bureau of Meteorology (2012-11-08) -# http://www.bom.gov.au/climate/averages/tables/daysavtm.shtml -# EST CST WST EDT CDT -# -# Civil Aviation Safety Authority (undated) -# http://services.casa.gov.au/outnback/inc/pages/episode3/episode-3_time_zones.shtml -# EST CST WST (no abbreviations given for DST) -# -# Geoscience Australia (2011-11-24) -# http://www.ga.gov.au/geodesy/astro/sunrise.jsp -# AEST ACST AWST AEDT ACDT -# -# Parliamentary Library (2008-11-10) -# https://www.aph.gov.au/binaries/library/pubs/rp/2008-09/09rp14.pdf -# EST CST WST preferred for standard time; AEST AEDT ACST ACDT also used -# -# The Transport Safety Bureau has an extensive series of accident reports, -# and investigators seem to use whatever abbreviation they like. -# Googling site:atsb.gov.au found the following number of unique hits: -# 311 "ESuT", 195 "EDT", 26 "AEDT", 83 "CSuT", 46 "CDT". -# "_SuT" tended to appear in older reports, and "A_DT" tended to -# appear in reports of events with international implications. -# -# From the above it appears that there is a working consensus in -# Australia to use trailing "DT" for daylight saving time; although -# some sources use trailing "SST" or "ST" or "SuT" they are by far in -# the minority. The case for leading "A" is weaker, but since it -# seems to be preferred in the overall web and is preferred in all -# the leading newspaper websites and in many government departments, -# it has a stronger case than omitting the leading "A". The current -# version of the database therefore uses abbreviations like "AEST" and -# "AEDT" for Australian time zones. - -# From Paul Eggert (1995-12-19): -# Shanks & Pottenger report 2:00 for all autumn changes in Australia and NZ. -# Mark Prior writes that his newspaper -# reports that NSW's fall 1995 change will occur at 2:00, -# but Robert Elz says it's been 3:00 in Victoria since 1970 -# and perhaps the newspaper's '2:00' is referring to standard time. -# For now we'll continue to assume 2:00s for changes since 1960. - -# From Eric Ulevik (1998-01-05): -# -# Here are some URLs to Australian time legislation. These URLs are stable, -# and should probably be included in the data file. There are probably more -# relevant entries in this database. -# -# NSW (including LHI and Broken Hill): -# Standard Time Act 1987 (updated 1995-04-04) -# https://www.austlii.edu.au/au/legis/nsw/consol_act/sta1987137/index.html -# ACT -# Standard Time and Summer Time Act 1972 -# https://www.austlii.edu.au/au/legis/act/consol_act/stasta1972279/index.html -# SA -# Standard Time Act, 1898 -# https://www.austlii.edu.au/au/legis/sa/consol_act/sta1898137/index.html - -# From David Grosz (2005-06-13): -# It was announced last week that Daylight Saving would be extended by -# one week next year to allow for the 2006 Commonwealth Games. -# Daylight Saving is now to end for next year only on the first Sunday -# in April instead of the last Sunday in March. -# -# From Gwillim Law (2005-06-14): -# I did some Googling and found that all of those states (and territory) plan -# to extend DST together in 2006. -# ACT: http://www.cmd.act.gov.au/mediareleases/fileread.cfm?file=86.txt -# New South Wales: http://www.thecouriermail.news.com.au/common/story_page/0,5936,15538869%255E1702,00.html -# South Australia: http://www.news.com.au/story/0,10117,15555031-1246,00.html -# Tasmania: http://www.media.tas.gov.au/release.php?id=14772 -# Victoria: I wasn't able to find anything separate, but the other articles -# allude to it. 
-# But not Queensland
-# http://www.news.com.au/story/0,10117,15564030-1248,00.html
-
-# Northern Territory
-
-# From George Shepherd via Simon Woodhead via Robert Elz (1991-03-06):
-# # The NORTHERN TERRITORY.. [ Courtesy N.T. Dept of the Chief Minister ]
-# # [ Nov 1990 ]
-# # N.T. have never utilised any DST due to sub-tropical/tropical location.
-# ...
-# Zone Australia/North 9:30 - CST
-
-# From Bradley White (1991-03-04):
-# A recent excerpt from an Australian newspaper...
-# the Northern Territory do[es] not have daylight saving.
-
-# Western Australia
-
-# From George Shepherd via Simon Woodhead via Robert Elz (1991-03-06):
-# # The state of WESTERN AUSTRALIA.. [ Courtesy W.A. dept Premier+Cabinet ]
-# # [ Nov 1990 ]
-# # W.A. suffers from a great deal of public and political opposition to
-# # DST in principle. A bill is brought before parliament in most years, but
-# # usually defeated either in the upper house, or in party caucus
-# # before reaching parliament.
-# ...
-# Zone Australia/West 8:00 AW %sST
-# ...
-# Rule AW 1974 only - Oct lastSun 2:00 1:00 D
-# Rule AW 1975 only - Mar Sun>=1 3:00 0 W
-# Rule AW 1983 only - Oct lastSun 2:00 1:00 D
-# Rule AW 1984 only - Mar Sun>=1 3:00 0 W
-
-# From Bradley White (1991-03-04):
-# A recent excerpt from an Australian newspaper...
-# Western Australia...do[es] not have daylight saving.
-
-# From John D. Newman via Bradley White (1991-11-02):
-# Western Australia is still on "winter time". Some DH in Sydney
-# rang me at home a few days ago at 6.00am. (He had just arrived at
-# work at 9.00am.)
-# W.A. is switching to Summer Time on Nov 17th just to confuse
-# everybody again.
-
-# From Arthur David Olson (1992-03-08):
-# The 1992 ending date used in the rules is a best guess;
-# it matches what was used in the past.
-
-# The Australian Bureau of Meteorology FAQ
-# http://www.bom.gov.au/faq/faqgen.htm
-# (1999-09-27) writes that Giles Meteorological Station uses
-# South Australian time even though it's located in Western Australia.
-
-# Queensland
-# From George Shepherd via Simon Woodhead via Robert Elz (1991-03-06):
-# # The state of QUEENSLAND.. [ Courtesy Qld. Dept Premier Econ&Trade Devel ]
-# # [ Dec 1990 ]
-# ...
-# Zone Australia/Queensland 10:00 AQ %sST
-# ...
-# Rule AQ 1971 only - Oct lastSun 2:00 1:00 D
-# Rule AQ 1972 only - Feb lastSun 3:00 0 E
-# Rule AQ 1989 max - Oct lastSun 2:00 1:00 D
-# Rule AQ 1990 max - Mar Sun>=1 3:00 0 E
-
-# From Bradley White (1989-12-24):
-# "Australia/Queensland" now observes daylight time (i.e. from
-# October 1989).
-
-# From Bradley White (1991-03-04):
-# A recent excerpt from an Australian newspaper...
-# ...Queensland...[has] agreed to end daylight saving
-# at 3am tomorrow (March 3)...
-
-# From John Mackin (1991-03-06):
-# I can certainly confirm for my part that Daylight Saving in NSW did in fact
-# end on Sunday, 3 March. I don't know at what hour, though. (It surprised
-# me.)
-
-# From Bradley White (1992-03-08):
-# ...there was recently a referendum in Queensland which resulted
-# in the experimental daylight saving system being abandoned. So, ...
-# ...
-# Rule QLD 1989 1991 - Oct lastSun 2:00 1:00 D
-# Rule QLD 1990 1992 - Mar Sun>=1 3:00 0 S
-# ...
-
-# From Arthur David Olson (1992-03-08):
-# The chosen rules are the union of the 1971/1972 change and the 1989-1992 changes.
-
-# From Christopher Hunt (2006-11-21), after an advance warning
-# from Jesper Nørgaard Welen (2006-11-01):
-# WA are trialing DST for three years.
-# http://www.parliament.wa.gov.au/parliament/bills.nsf/9A1B183144403DA54825721200088DF1/$File/Bill175-1B.pdf - -# From Rives McDow (2002-04-09): -# The most interesting region I have found consists of three towns on the -# southern coast.... South Australia observes daylight saving time; Western -# Australia does not. The two states are one and a half hours apart. The -# residents decided to forget about this nonsense of changing the clock so -# much and set the local time 20 hours and 45 minutes from the -# international date line, or right in the middle of the time of South -# Australia and Western Australia.... -# -# From Paul Eggert (2002-04-09): -# This is confirmed by the section entitled -# "What's the deal with time zones???" in -# http://www.earthsci.unimelb.edu.au/~awatkins/null.html -# -# From Alex Livingston (2006-12-07): -# ... it was just on four years ago that I drove along the Eyre Highway, -# which passes through eastern Western Australia close to the southern -# coast of the continent. -# -# I paid particular attention to the time kept there. There can be no -# dispute that UTC+08:45 was considered "the time" from the border -# village just inside the border with South Australia to as far west -# as just east of Caiguna. There can also be no dispute that Eucla is -# the largest population centre in this zone.... -# -# Now that Western Australia is observing daylight saving, the -# question arose whether this part of the state would follow suit. I -# just called the border village and confirmed that indeed they have, -# meaning that they are now observing UTC+09:45. -# -# (2006-12-09): -# I personally doubt that either experimentation with daylight saving -# in WA or its introduction in SA had anything to do with the genesis -# of this time zone. My hunch is that it's been around since well -# before 1975. I remember seeing it noted on road maps decades ago. - -# From Paul Eggert (2006-12-15): -# For lack of better info, assume the tradition dates back to the -# introduction of standard time in 1895. - - -# southeast Australia -# -# From Paul Eggert (2007-07-23): -# Starting autumn 2008 Victoria, NSW, South Australia, Tasmania and the ACT -# end DST the first Sunday in April and start DST the first Sunday in October. -# http://www.theage.com.au/news/national/daylight-savings-to-span-six-months/2007/06/27/1182623966703.html - - -# South Australia - -# From Bradley White (1991-03-04): -# A recent excerpt from an Australian newspaper... -# ...South Australia...[has] agreed to end daylight saving -# at 3am tomorrow (March 3)... - -# From George Shepherd via Simon Woodhead via Robert Elz (1991-03-06): -# # The state of SOUTH AUSTRALIA....[ Courtesy of S.A. Dept of Labour ] -# # [ Nov 1990 ] -# ... -# Zone Australia/South 9:30 AS %sST -# ... -# Rule AS 1971 max - Oct lastSun 2:00 1:00 D -# Rule AS 1972 1985 - Mar Sun>=1 3:00 0 C -# Rule AS 1986 1990 - Mar Sun>=15 3:00 0 C -# Rule AS 1991 max - Mar Sun>=1 3:00 0 C - -# From Bradley White (1992-03-11): -# Recent correspondence with a friend in Adelaide -# contained the following exchange: "Due to the Adelaide Festival, -# South Australia delays setting back our clocks for a few weeks." - -# From Robert Elz (1992-03-13): -# I heard that apparently (or at least, it appears that) -# South Aus will have an extra 3 weeks daylight saving every even -# numbered year (from 1990). That's when the Adelaide Festival -# is on... - -# From Robert Elz (1992-03-16, 00:57:07 +1000): -# DST didn't end in Adelaide today (yesterday).... 
-# But whether it's "4th Sunday" or "2nd last Sunday" I have no idea whatever... -# (it's just as likely to be "the Sunday we pick for this year"...). - -# From Bradley White (1994-04-11): -# If Sun, 15 March, 1992 was at +1030 as kre asserts, but yet Sun, 20 March, -# 1994 was at +0930 as John Connolly's customer seems to assert, then I can -# only conclude that the actual rule is more complicated.... - -# From John Warburton (1994-10-07): -# The new Daylight Savings dates for South Australia ... -# was gazetted in the Government Hansard on Sep 26 1994.... -# start on last Sunday in October and end in last sunday in March. - -# From Paul Eggert (2007-07-23): -# See "southeast Australia" above for 2008 and later. - -# Tasmania - -# The rules for 1967 through 1991 were reported by George Shepherd -# via Simon Woodhead via Robert Elz (1991-03-06): -# # The state of TASMANIA.. [Courtesy Tasmanian Dept of Premier + Cabinet ] -# # [ Nov 1990 ] - -# From Bill Hart via Guy Harris (1991-10-10): -# Oh yes, the new daylight savings rules are uniquely tasmanian, we have -# 6 weeks a year now when we are out of sync with the rest of Australia -# (but nothing new about that). - -# From Alex Livingston (1999-10-04): -# I heard on the ABC (Australian Broadcasting Corporation) radio news on the -# (long) weekend that Tasmania, which usually goes its own way in this regard, -# has decided to join with most of NSW, the ACT, and most of Victoria -# (Australia) and start daylight saving on the last Sunday in August in 2000 -# instead of the first Sunday in October. - -# Sim Alam (2000-07-03) reported a legal citation for the 2000/2001 rules: -# http://www.thelaw.tas.gov.au/fragview/42++1968+GS3A@EN+2000070300 - -# From Paul Eggert (2007-07-23): -# See "southeast Australia" above for 2008 and later. - -# Victoria - -# The rules for 1971 through 1991 were reported by George Shepherd -# via Simon Woodhead via Robert Elz (1991-03-06): -# # The state of VICTORIA.. [ Courtesy of Vic. Dept of Premier + Cabinet ] -# # [ Nov 1990 ] - -# From Scott Harrington (2001-08-29): -# On KQED's "City Arts and Lectures" program last night I heard an -# interesting story about daylight savings time. Dr. John Heilbron was -# discussing his book "The Sun in the Church: Cathedrals as Solar -# Observatories"[1], and in particular the Shrine of Remembrance[2] located -# in Melbourne, Australia. -# -# Apparently the shrine's main purpose is a beam of sunlight which -# illuminates a special spot on the floor at the 11th hour of the 11th day -# of the 11th month (Remembrance Day) every year in memory of Australia's -# fallen WWI soldiers. And if you go there on Nov. 11, at 11am local time, -# you will indeed see the sunbeam illuminate the special spot at the -# expected time. -# -# However, that is only because of some special mirror contraption that had -# to be employed, since due to daylight savings time, the true solar time of -# the remembrance moment occurs one hour later (or earlier?). Perhaps -# someone with more information on this jury-rig can tell us more. -# -# [1] http://www.hup.harvard.edu/catalog/HEISUN.html -# [2] http://www.shrine.org.au - -# From Paul Eggert (2007-07-23): -# See "southeast Australia" above for 2008 and later. - -# New South Wales - -# From Arthur David Olson: -# New South Wales and subjurisdictions have their own ideas of a fun time. -# Based on law library research by John Mackin, -# who notes: -# In Australia, time is not legislated federally, but rather by the -# individual states. 
Thus, while such terms as "Eastern Standard Time" -# [I mean, of course, Australian EST, not any other kind] are in common -# use, _they have NO REAL MEANING_, as they are not defined in the -# legislation. This is very important to understand. -# I have researched New South Wales time only... - -# From Eric Ulevik (1999-05-26): -# DST will start in NSW on the last Sunday of August, rather than the usual -# October in 2000. See: Matthew Moore, -# Two months more daylight saving, Sydney Morning Herald (1999-05-26). -# http://www.smh.com.au/news/9905/26/pageone/pageone4.html - -# From Paul Eggert (1999-09-27): -# See the following official NSW source: -# Daylight Saving in New South Wales. -# http://dir.gis.nsw.gov.au/cgi-bin/genobject/document/other/daylightsaving/tigGmZ -# -# Narrabri Shire (NSW) council has announced it will ignore the extension of -# daylight saving next year. See: -# Narrabri Council to ignore daylight saving -# http://abc.net.au/news/regionals/neweng/monthly/regeng-22jul1999-1.htm -# (1999-07-22). For now, we'll wait to see if this really happens. -# -# Victoria will follow NSW. See: -# Vic to extend daylight saving (1999-07-28) -# http://abc.net.au/local/news/olympics/1999/07/item19990728112314_1.htm -# -# However, South Australia rejected the DST request. See: -# South Australia rejects Olympics daylight savings request (1999-07-19) -# http://abc.net.au/news/olympics/1999/07/item19990719151754_1.htm -# -# Queensland also will not observe DST for the Olympics. See: -# Qld says no to daylight savings for Olympics -# http://abc.net.au/news/olympics/1999/06/item19990601114608_1.htm -# (1999-06-01), which quotes Queensland Premier Peter Beattie as saying -# "Look you've got to remember in my family when this came up last time -# I voted for it, my wife voted against it and she said to me it's all very -# well for you, you don't have to worry about getting the children out of -# bed, getting them to school, getting them to sleep at night. -# I've been through all this argument domestically...my wife rules." -# -# Broken Hill will stick with South Australian time in 2000. See: -# Broken Hill to be behind the times (1999-07-21) -# http://abc.net.au/news/regionals/brokenh/monthly/regbrok-21jul1999-6.htm - -# IATA SSIM (1998-09) says that the spring 2000 change for Australian -# Capital Territory, New South Wales except Lord Howe Island and Broken -# Hill, and Victoria will be August 27, presumably due to the Sydney Olympics. - -# From Eric Ulevik, referring to Sydney's Sun Herald (2000-08-13), page 29: -# The Queensland Premier Peter Beattie is encouraging northern NSW -# towns to use Queensland time. - -# From Paul Eggert (2007-07-23): -# See "southeast Australia" above for 2008 and later. - -# Yancowinna - -# From John Mackin (1989-01-04): -# 'Broken Hill' means the County of Yancowinna. - -# From George Shepherd via Simon Woodhead via Robert Elz (1991-03-06): -# # YANCOWINNA.. [ Confirmation courtesy of Broken Hill Postmaster ] -# # [ Dec 1990 ] -# ... -# # Yancowinna uses Central Standard Time, despite [its] location on the -# # New South Wales side of the S.A. border. Most business and social dealings -# # are with CST zones, therefore CST is legislated by local government -# # although the switch to Summer Time occurs in line with N.S.W. There have -# # been years when this did not apply, but the historical data is not -# # presently available. -# Zone Australia/Yancowinna 9:30 AY %sST -# ... 
-# Rule AY 1971 1985 - Oct lastSun 2:00 1:00 D -# Rule AY 1972 only - Feb lastSun 3:00 0 C -# [followed by other Rules] - -# Lord Howe Island - -# From George Shepherd via Simon Woodhead via Robert Elz (1991-03-06): -# LHI... [ Courtesy of Pauline Van Winsen ] -# [ Dec 1990 ] -# Lord Howe Island is located off the New South Wales coast, and is half an -# hour ahead of NSW time. - -# From James Lonergan, Secretary, Lord Howe Island Board (2000-01-27): -# Lord Howe Island summer time in 2000/2001 will commence on the same -# date as the rest of NSW (i.e. 2000-08-27). For your information the -# Lord Howe Island Board (controlling authority for the Island) is -# seeking the community's views on various options for summer time -# arrangements on the Island, e.g. advance clocks by 1 full hour -# instead of only 30 minutes. [Dependent] on the wishes of residents -# the Board may approach the NSW government to change the existing -# arrangements. The starting date for summer time on the Island will -# however always coincide with the rest of NSW. - -# From James Lonergan, Secretary, Lord Howe Island Board (2000-10-25): -# Lord Howe Island advances clocks by 30 minutes during DST in NSW and retards -# clocks by 30 minutes when DST finishes. Since DST was most recently -# introduced in NSW, the "changeover" time on the Island has been 02:00 as -# shown on clocks on LHI. I guess this means that for 30 minutes at the start -# of DST, LHI is actually 1 hour ahead of the rest of NSW. - -# From Paul Eggert (2006-03-22): -# For Lord Howe dates we use Shanks & Pottenger through 1989, and -# Lonergan thereafter. For times we use Lonergan. - -# From Paul Eggert (2007-07-23): -# See "southeast Australia" above for 2008 and later. - -# From Steffen Thorsen (2009-04-28): -# According to the official press release, South Australia's extended daylight -# saving period will continue with the same rules as used during the 2008-2009 -# summer (southern hemisphere). -# -# From -# http://www.safework.sa.gov.au/uploaded_files/DaylightDatesSet.pdf -# The extended daylight saving period that South Australia has been trialling -# for over the last year is now set to be ongoing. -# Daylight saving will continue to start on the first Sunday in October each -# year and finish on the first Sunday in April the following year. -# Industrial Relations Minister, Paul Caica, says this provides South Australia -# with a consistent half hour time difference with NSW, Victoria, Tasmania and -# the ACT for all 52 weeks of the year... -# -# We have a wrap-up here: -# https://www.timeanddate.com/news/time/south-australia-extends-dst.html -############################################################################### - -# New Zealand - -# From Mark Davies (1990-10-03): -# the 1989/90 year was a trial of an extended "daylight saving" period. -# This trial was deemed successful and the extended period adopted for -# subsequent years (with the addition of a further week at the start). -# source - phone call to Ministry of Internal Affairs Head Office. - -# From George Shepherd via Simon Woodhead via Robert Elz (1991-03-06): -# # The Country of New Zealand (Australia's east island -) Gee they hate that! -# # or is Australia the west island of N.Z. -# # [ courtesy of Geoff Tribble.. Auckland N.Z. ] -# # [ Nov 1990 ] -# ... -# Rule NZ 1974 1988 - Oct lastSun 2:00 1:00 D -# Rule NZ 1989 max - Oct Sun>=1 2:00 1:00 D -# Rule NZ 1975 1989 - Mar Sun>=1 3:00 0 S -# Rule NZ 1990 max - Mar lastSun 3:00 0 S -# ... 
-# Zone NZ 12:00 NZ NZ%sT # New Zealand
-# Zone NZ-CHAT 12:45 - NZ-CHAT # Chatham Island
-
-# From Arthur David Olson (1992-03-08):
-# The chosen rules use the Davies October 8 values for the start of DST in 1989
-# rather than the October 1 value.
-
-# From Paul Eggert (1995-12-19):
-# Shanks & Pottenger report 2:00 for all autumn changes in Australia and NZ.
-# Robert Uzgalis writes that the New Zealand Daylight
-# Savings Time Order in Council dated 1990-06-18 specifies 2:00 standard
-# time on both the first Sunday in October and the third Sunday in March.
-# As with Australia, we'll assume the tradition is 2:00s, not 2:00.
-#
-# From Paul Eggert (2006-03-22):
-# The Department of Internal Affairs (DIA) maintains a brief history,
-# as does Carol Squires; see tz-link.htm for the full references.
-# Use these sources in preference to Shanks & Pottenger.
-#
-# For Chatham, IATA SSIM (1991/1999) gives the NZ rules but with
-# transitions at 2:45 local standard time; this confirms that Chatham
-# is always exactly 45 minutes ahead of Auckland.
-
-# From Colin Sharples (2007-04-30):
-# DST will now start on the last Sunday in September, and end on the
-# first Sunday in April. The changes take effect this year, meaning
-# that DST will begin on 2007-09-30 and end on 2008-04-06.
-# http://www.dia.govt.nz/diawebsite.nsf/wpg_URL/Services-Daylight-Saving-Daylight-saving-to-be-extended
-
-# From Paul Eggert (2014-07-14):
-# Chatham Island time was formally standardized on 1957-01-01 by
-# New Zealand's Standard Time Amendment Act 1956 (1956-10-26).
-# https://www.austlii.edu.au/nz/legis/hist_act/staa19561956n100244.pdf
-# According to Google Books snippet view, a speaker in the New Zealand
-# parliamentary debates in 1956 said "Clause 78 makes provision for standard
-# time in the Chatham Islands. The time there is 45 minutes in advance of New
-# Zealand time. I understand that is the time they keep locally, anyhow."
-# For now, assume this practice goes back to the introduction of standard time
-# in New Zealand, as this would make Chatham Islands time almost exactly match
-# LMT back when New Zealand was at UT +11:30; also, assume Chatham Islands did
-# not observe New Zealand's prewar DST.
-
-###############################################################################
-
-
-# Fiji
-
-# Howse writes (p 153) that in 1879 the British governor of Fiji
-# enacted an ordinance standardizing the islands on Antipodean Time
-# instead of the American system (which was one day behind).
-
-# From Rives McDow (1998-10-08):
-# Fiji will introduce DST effective 0200 local time, 1998-11-01
-# until 0300 local time 1999-02-28. Each year the DST period will
-# be from the first Sunday in November until the last Sunday in February.
-
-# From Paul Eggert (2000-01-08):
-# IATA SSIM (1999-09) says DST ends 0100 local time. Go with McDow.
-
-# From the BBC World Service in
-# http://news.bbc.co.uk/2/hi/asia-pacific/205226.stm (1998-10-31 16:03 UTC):
-# The Fijian government says the main reasons for the time change is to
-# improve productivity and reduce road accidents.... [T]he move is also
-# intended to boost Fiji's ability to attract tourists to witness the dawning
-# of the new millennium.
-
-# http://www.fiji.gov.fj/press/2000_09/2000_09_13-05.shtml (2000-09-13)
-# reports that Fiji has discontinued DST.
-
-
-# Kiribati
-
-# From Paul Eggert (1996-01-22):
-# Today's _Wall Street Journal_ (page 1) reports that Kiribati
-# "declared it the same day [throughout] the country as of Jan.
1, 1995" -# as part of the competition to be first into the 21st century. - - -# Kwajalein - -# In comp.risks 14.87 (26 August 1993), Peter Neumann writes: -# I wonder what happened in Kwajalein, where there was NO Friday, -# 1993-08-20. Thursday night at midnight Kwajalein switched sides with -# respect to the International Date Line, to rejoin its fellow islands, -# going from 11:59 p.m. Thursday to 12:00 m. Saturday in a blink. - - -# N Mariana Is, Guam - -# Howse writes (p 153) "The Spaniards, on the other hand, reached the -# Philippines and the Ladrones from America," and implies that the Ladrones -# (now called the Marianas) kept American date for quite some time. -# For now, we assume the Ladrones switched at the same time as the Philippines; -# see Asia/Manila. - -# US Public Law 106-564 (2000-12-23) made UT +10 the official standard time, -# under the name "Chamorro Standard Time". There is no official abbreviation, -# but Congressman Robert A. Underwood, author of the bill that became law, -# wrote in a press release (2000-12-27) that he will seek the use of "ChST". - - -# Micronesia - -# Alan Eugene Davis writes (1996-03-16), -# "I am certain, having lived there for the past decade, that 'Truk' -# (now properly known as Chuuk) ... is in the time zone GMT+10." -# -# Shanks & Pottenger write that Truk switched from UT +10 to +11 -# on 1978-10-01; ignore this for now. - -# From Paul Eggert (1999-10-29): -# The Federated States of Micronesia Visitors Board writes in -# The Federated States of Micronesia - Visitor Information (1999-01-26) -# http://www.fsmgov.org/info/clocks.html -# that Truk and Yap are UT +10, and Ponape and Kosrae are +11. -# We don't know when Kosrae switched from +12; assume January 1 for now. - - -# Midway - -# From Charles T O'Connor, KMTH DJ (1956), -# quoted in the KTMH section of the Radio Heritage Collection -# (2002-12-31): -# For the past two months we've been on what is known as Daylight -# Saving Time. This time has put us on air at 5am in the morning, -# your time down there in New Zealand. Starting September 2, 1956 -# we'll again go back to Standard Time. This'll mean that we'll go to -# air at 6am your time. -# -# From Paul Eggert (2003-03-23): -# We don't know the date of that quote, but we'll guess they -# started DST on June 3. Possibly DST was observed other years -# in Midway, but we have no record of it. - -# Norfolk - -# From Alexander Krivenyshev (2015-09-23): -# Norfolk Island will change ... from +1130 to +1100: -# https://www.comlaw.gov.au/Details/F2015L01483/Explanatory%20Statement/Text -# ... at 12.30 am (by legal time in New South Wales) on 4 October 2015. -# http://www.norfolkisland.gov.nf/nia/MediaRelease/Media%20Release%20Norfolk%20Island%20Standard%20Time%20Change.pdf - -# From Paul Eggert (2015-09-23): -# Transitions before 2015 are from timeanddate.com, which consulted -# the Norfolk Island Museum and the Australian Bureau of Meteorology's -# Norfolk Island station, and found no record of Norfolk observing DST -# other than in 1974/5. See: -# https://www.timeanddate.com/time/australia/norfolk-island.html - -# Pitcairn - -# From Rives McDow (1999-11-08): -# A Proclamation was signed by the Governor of Pitcairn on the 27th March 1998 -# with regard to Pitcairn Standard Time. The Proclamation is as follows. -# -# The local time for general purposes in the Islands shall be -# Co-ordinated Universal time minus 8 hours and shall be known -# as Pitcairn Standard Time. -# -# ... 
I have also seen Pitcairn listed as UTC minus 9 hours in several
-# references, and can only assume that this was an error in interpretation
-# somehow in light of this proclamation.
-
-# From Rives McDow (1999-11-09):
-# The Proclamation regarding Pitcairn time came into effect on 27 April 1998
-# ... at midnight.
-
-# From Howie Phelps (1999-11-10), who talked to a Pitcairner via shortwave:
-# Betty Christian told me yesterday that their local time is the same as
-# Pacific Standard Time. They used to be 1/2 hour different from us here in
-# Sacramento but it was changed a couple of years ago.
-
-
-# (Western) Samoa and American Samoa
-
-# Howse writes (p 153) that after the 1879 standardization on Antipodean
-# time by the British governor of Fiji, the King of Samoa decided to change
-# "the date in his kingdom from the Antipodean to the American system,
-# ordaining - by a masterpiece of diplomatic flattery - that
-# the Fourth of July should be celebrated twice in that year."
-# This happened in 1892, according to the Evening News (Sydney) of 1892-07-20.
-# https://www.staff.science.uu.nl/~gent0113/idl/idl.htm
-
-# Although Shanks & Pottenger says they both switched to UT -11:30
-# in 1911, and to -11 in 1950, many earlier sources give -11
-# for American Samoa, e.g., the US National Bureau of Standards
-# circular "Standard Time Throughout the World", 1932.
-# Assume American Samoa switched to -11 in 1911, not 1950,
-# and that after 1950 they agreed until (western) Samoa skipped a
-# day in 2011. Assume also that the Samoas follow the US and New
-# Zealand's "ST"/"DT" style of daylight-saving abbreviations.
-
-
-# Tonga
-
-# From Paul Eggert (1996-01-22):
-# Today's _Wall Street Journal_ (p 1) reports that "Tonga has been plotting
-# to sneak ahead of [New Zealanders] by introducing daylight-saving time."
-# Since Kiribati has moved the Date Line it's not clear what Tonga will do.
-
-# Don Mundell writes in the 1997-02-20 Tonga Chronicle
-# How Tonga became 'The Land where Time Begins':
-# http://www.tongatapu.net.to/tonga/homeland/timebegins.htm
-#
-# Until 1941 Tonga maintained a standard time 50 minutes ahead of NZST
-# 12 hours and 20 minutes ahead of GMT. When New Zealand adjusted its
-# standard time in 1940s, Tonga had the choice of subtracting from its
-# local time to come on the same standard time as New Zealand or of
-# advancing its time to maintain the differential of 13 degrees
-# (approximately 50 minutes ahead of New Zealand time).
-#
-# Because His Majesty King Tāufaʻāhau Tupou IV, then Crown Prince
-# Tungī, preferred to ensure Tonga's title as the land where time
-# begins, the Legislative Assembly approved the latter change.
-#
-# But some of the older, more conservative members from the outer
-# islands objected. "If at midnight on Dec. 31, we move ahead 40
-# minutes, as your Royal Highness wishes, what becomes of the 40
-# minutes we have lost?"
-#
-# The Crown Prince presented an unanswerable argument: "Remember that
-# on the World Day of Prayer, you would be the first people on Earth
-# to say your prayers in the morning."
-
-# From Paul Eggert (2006-03-22):
-# Shanks & Pottenger say the transition was on 1968-10-01; go with Mundell.
-
-# From Eric Ulevik (1999-05-03):
-# Tonga's director of tourism, who is also secretary of the National Millennium
-# Committee, has a plan to get Tonga back in front.
-# He has proposed a one-off move to tropical daylight saving for Tonga from -# October to March, which has won approval in principle from the Tongan -# Government. - -# From Steffen Thorsen (1999-09-09): -# * Tonga will introduce DST in November -# -# I was given this link by John Letts: -# http://news.bbc.co.uk/hi/english/world/asia-pacific/newsid_424000/424764.stm -# -# I have not been able to find exact dates for the transition in November -# yet. By reading this article it seems like Fiji will be 14 hours ahead -# of UTC as well, but as far as I know Fiji will only be 13 hours ahead -# (12 + 1 hour DST). - -# From Arthur David Olson (1999-09-20): -# According to : -# "Daylight Savings Time will take effect on Oct. 2 through April 15, 2000 -# and annually thereafter from the first Saturday in October through the -# third Saturday of April. Under the system approved by Privy Council on -# Sept. 10, clocks must be turned ahead one hour on the opening day and -# set back an hour on the closing date." -# Alas, no indication of the time of day. - -# From Rives McDow (1999-10-06): -# Tonga started its Daylight Saving on Saturday morning October 2nd at 0200am. -# Daylight Saving ends on April 16 at 0300am which is Sunday morning. - -# From Steffen Thorsen (2000-10-31): -# Back in March I found a notice on the website http://www.tongaonline.com -# that Tonga changed back to standard time one month early, on March 19 -# instead of the original reported date April 16. Unfortunately, the article -# is no longer available on the site, and I did not make a copy of the -# text, and I have forgotten to report it here. -# (Original URL was ) - -# From Rives McDow (2000-12-01): -# Tonga is observing DST as of 2000-11-04 and will stop on 2001-01-27. - -# From Sione Moala-Mafi (2001-09-20) via Rives McDow: -# At 2:00am on the first Sunday of November, the standard time in the Kingdom -# shall be moved forward by one hour to 3:00am. At 2:00am on the last Sunday -# of January the standard time in the Kingdom shall be moved backward by one -# hour to 1:00am. - -# From Pulu ʻAnau (2002-11-05): -# The law was for 3 years, supposedly to get renewed. It wasn't. - -# From Pulu ʻAnau (2016-10-27): -# http://mic.gov.to/news-today/press-releases/6375-daylight-saving-set-to-run-from-6-november-2016-to-15-january-2017 -# Cannot find anyone who knows the rules, has seen the duration or has seen -# the cabinet decision, but it appears we are following Fiji's rule set. -# -# From Tim Parenti (2016-10-26): -# Assume Tonga will observe DST from the first Sunday in November at 02:00 -# through the third Sunday in January at 03:00, like Fiji, for now. - -# From David Wade (2017-10-18): -# In August government was disolved by the King. The current prime minister -# continued in office in care taker mode. It is easy to see that few -# decisions will be made until elections 16th November. -# -# From Paul Eggert (2017-10-18): -# For now, guess that DST is discontinued. That's what the IATA is guessing. - - -# Wake - -# From Vernice Anderson, Personal Secretary to Philip Jessup, -# US Ambassador At Large (oral history interview, 1971-02-02): -# -# Saturday, the 14th [of October, 1950] - ... The time was all the -# more confusing at that point, because we had crossed the -# International Date Line, thus getting two Sundays. Furthermore, we -# discovered that Wake Island had two hours of daylight saving time -# making calculation of time in Washington difficult if not almost -# impossible. 
-# -# https://www.trumanlibrary.org/oralhist/andrsonv.htm - -# From Paul Eggert (2003-03-23): -# We have no other report of DST in Wake Island, so omit this info for now. - -############################################################################### - -# The International Date Line - -# From Gwillim Law (2000-01-03): -# -# The International Date Line is not defined by any international standard, -# convention, or treaty. Mapmakers are free to draw it as they please. -# Reputable mapmakers will simply ensure that every point of land appears on -# the correct side of the IDL, according to the date legally observed there. -# -# When Kiribati adopted a uniform date in 1995, thereby moving the Phoenix and -# Line Islands to the west side of the IDL (or, if you prefer, moving the IDL -# to the east side of the Phoenix and Line Islands), I suppose that most -# mapmakers redrew the IDL following the boundary of Kiribati. Even that line -# has a rather arbitrary nature. The straight-line boundaries between Pacific -# island nations that are shown on many maps are based on an international -# convention, but are not legally binding national borders.... The date is -# governed by the IDL; therefore, even on the high seas, there may be some -# places as late as fourteen hours later than UTC. And, since the IDL is not -# an international standard, there are some places on the high seas where the -# correct date is ambiguous. - -# From Wikipedia (2005-08-31): -# Before 1920, all ships kept local apparent time on the high seas by setting -# their clocks at night or at the morning sight so that, given the ship's -# speed and direction, it would be 12 o'clock when the Sun crossed the ship's -# meridian (12 o'clock = local apparent noon). During 1917, at the -# Anglo-French Conference on Time-keeping at Sea, it was recommended that all -# ships, both military and civilian, should adopt hourly standard time zones -# on the high seas. Whenever a ship was within the territorial waters of any -# nation it would use that nation's standard time. The captain was permitted -# to change his ship's clocks at a time of his choice following his ship's -# entry into another zone time - he often chose midnight. These zones were -# adopted by all major fleets between 1920 and 1925 but not by many -# independent merchant ships until World War II. - -# From Paul Eggert, using references suggested by Oscar van Vlijmen -# (2005-03-20): -# -# The American Practical Navigator (2002) -# http://pollux.nss.nima.mil/pubs/pubs_j_apn_sections.html?rid=187 -# talks only about the 180-degree meridian with respect to ships in -# international waters; it ignores the international date line. diff --git a/src/timezone/data/backward b/src/timezone/data/backward deleted file mode 100644 index 2141f0d579..0000000000 --- a/src/timezone/data/backward +++ /dev/null @@ -1,128 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# This file provides links between current names for time zones -# and their old names. Many names changed in late 1993. 
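Each Link line below is a pure alias: the second name resolves to the first name's data, so both names always report the same offsets and abbreviations. An illustrative check with Python's zoneinfo (assuming Python 3.9+ and a tzdata built with these 'backward' links, which is the usual default):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 'US/Eastern' is linked to 'America/New_York'; any instant agrees.
probe = datetime(2017, 8, 17, 12, tzinfo=timezone.utc)
old = probe.astimezone(ZoneInfo("US/Eastern"))
new = probe.astimezone(ZoneInfo("America/New_York"))
print(old.utcoffset() == new.utcoffset(), old.tzname())  # True EDT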
- -# Link TARGET LINK-NAME -Link Africa/Nairobi Africa/Asmera -Link Africa/Abidjan Africa/Timbuktu -Link America/Argentina/Catamarca America/Argentina/ComodRivadavia -Link America/Adak America/Atka -Link America/Argentina/Buenos_Aires America/Buenos_Aires -Link America/Argentina/Catamarca America/Catamarca -Link America/Atikokan America/Coral_Harbour -Link America/Argentina/Cordoba America/Cordoba -Link America/Tijuana America/Ensenada -Link America/Indiana/Indianapolis America/Fort_Wayne -Link America/Indiana/Indianapolis America/Indianapolis -Link America/Argentina/Jujuy America/Jujuy -Link America/Indiana/Knox America/Knox_IN -Link America/Kentucky/Louisville America/Louisville -Link America/Argentina/Mendoza America/Mendoza -Link America/Toronto America/Montreal -Link America/Rio_Branco America/Porto_Acre -Link America/Argentina/Cordoba America/Rosario -Link America/Tijuana America/Santa_Isabel -Link America/Denver America/Shiprock -Link America/Port_of_Spain America/Virgin -Link Pacific/Auckland Antarctica/South_Pole -Link Asia/Ashgabat Asia/Ashkhabad -Link Asia/Kolkata Asia/Calcutta -Link Asia/Shanghai Asia/Chongqing -Link Asia/Shanghai Asia/Chungking -Link Asia/Dhaka Asia/Dacca -Link Asia/Shanghai Asia/Harbin -Link Asia/Urumqi Asia/Kashgar -Link Asia/Kathmandu Asia/Katmandu -Link Asia/Macau Asia/Macao -Link Asia/Yangon Asia/Rangoon -Link Asia/Ho_Chi_Minh Asia/Saigon -Link Asia/Jerusalem Asia/Tel_Aviv -Link Asia/Thimphu Asia/Thimbu -Link Asia/Makassar Asia/Ujung_Pandang -Link Asia/Ulaanbaatar Asia/Ulan_Bator -Link Atlantic/Faroe Atlantic/Faeroe -Link Europe/Oslo Atlantic/Jan_Mayen -Link Australia/Sydney Australia/ACT -Link Australia/Sydney Australia/Canberra -Link Australia/Lord_Howe Australia/LHI -Link Australia/Sydney Australia/NSW -Link Australia/Darwin Australia/North -Link Australia/Brisbane Australia/Queensland -Link Australia/Adelaide Australia/South -Link Australia/Hobart Australia/Tasmania -Link Australia/Melbourne Australia/Victoria -Link Australia/Perth Australia/West -Link Australia/Broken_Hill Australia/Yancowinna -Link America/Rio_Branco Brazil/Acre -Link America/Noronha Brazil/DeNoronha -Link America/Sao_Paulo Brazil/East -Link America/Manaus Brazil/West -Link America/Halifax Canada/Atlantic -Link America/Winnipeg Canada/Central -# This line is commented out, as the name exceeded the 14-character limit -# and was an unused misnomer. 
-#Link America/Regina Canada/East-Saskatchewan -Link America/Toronto Canada/Eastern -Link America/Edmonton Canada/Mountain -Link America/St_Johns Canada/Newfoundland -Link America/Vancouver Canada/Pacific -Link America/Regina Canada/Saskatchewan -Link America/Whitehorse Canada/Yukon -Link America/Santiago Chile/Continental -Link Pacific/Easter Chile/EasterIsland -Link America/Havana Cuba -Link Africa/Cairo Egypt -Link Europe/Dublin Eire -Link Europe/London Europe/Belfast -Link Europe/Chisinau Europe/Tiraspol -Link Europe/London GB -Link Europe/London GB-Eire -Link Etc/GMT GMT+0 -Link Etc/GMT GMT-0 -Link Etc/GMT GMT0 -Link Etc/GMT Greenwich -Link Asia/Hong_Kong Hongkong -Link Atlantic/Reykjavik Iceland -Link Asia/Tehran Iran -Link Asia/Jerusalem Israel -Link America/Jamaica Jamaica -Link Asia/Tokyo Japan -Link Pacific/Kwajalein Kwajalein -Link Africa/Tripoli Libya -Link America/Tijuana Mexico/BajaNorte -Link America/Mazatlan Mexico/BajaSur -Link America/Mexico_City Mexico/General -Link Pacific/Auckland NZ -Link Pacific/Chatham NZ-CHAT -Link America/Denver Navajo -Link Asia/Shanghai PRC -Link Pacific/Honolulu Pacific/Johnston -Link Pacific/Pohnpei Pacific/Ponape -Link Pacific/Pago_Pago Pacific/Samoa -Link Pacific/Chuuk Pacific/Truk -Link Pacific/Chuuk Pacific/Yap -Link Europe/Warsaw Poland -Link Europe/Lisbon Portugal -Link Asia/Taipei ROC -Link Asia/Seoul ROK -Link Asia/Singapore Singapore -Link Europe/Istanbul Turkey -Link Etc/UCT UCT -Link America/Anchorage US/Alaska -Link America/Adak US/Aleutian -Link America/Phoenix US/Arizona -Link America/Chicago US/Central -Link America/Indiana/Indianapolis US/East-Indiana -Link America/New_York US/Eastern -Link Pacific/Honolulu US/Hawaii -Link America/Indiana/Knox US/Indiana-Starke -Link America/Detroit US/Michigan -Link America/Denver US/Mountain -Link America/Los_Angeles US/Pacific -Link Pacific/Pago_Pago US/Samoa -Link Etc/UTC UTC -Link Etc/UTC Universal -Link Europe/Moscow W-SU -Link Etc/UTC Zulu diff --git a/src/timezone/data/backzone b/src/timezone/data/backzone deleted file mode 100644 index 32bd0f1061..0000000000 --- a/src/timezone/data/backzone +++ /dev/null @@ -1,675 +0,0 @@ -# Zones that go back beyond the scope of the tz database - -# This file is in the public domain. - -# This file is by no means authoritative; if you think you know -# better, go ahead and edit it (and please send any changes to -# tz@iana.org for general use in the future). For more, please see -# the file CONTRIBUTING in the tz distribution. - - -# From Paul Eggert (2014-10-31): - -# This file contains data outside the normal scope of the tz database, -# in that its zones do not differ from normal tz zones after 1970. -# Links in this file point to zones in this file, superseding links in -# the file 'backward'. - -# Although zones in this file may be of some use for analyzing -# pre-1970 time stamps, they are less reliable, cover only a tiny -# sliver of the pre-1970 era, and cannot feasibly be improved to cover -# most of the era. Because the zones are out of normal scope for the -# database, less effort is put into maintaining this file. Many of -# the zones were formerly in other source files, but were removed or -# replaced by links as their data entries were questionable and/or they -# differed from other zones only in pre-1970 time stamps. - -# Unless otherwise specified, the source for data through 1990 is: -# Thomas G. Shanks and Rique Pottenger, The International Atlas (6th edition), -# San Diego: ACS Publications, Inc. (2003). 
-# Unfortunately this book contains many errors and cites no sources. - -# This file is not intended to be compiled standalone, as it -# assumes rules from other files. In the tz distribution, use -# 'make PACKRATDATA=backzone zones' to compile and install this file. - -# Zones are sorted by zone name. Each zone is preceded by the -# name of the country that the zone is in, along with any other -# commentary and rules associated with the entry. -# -# As explained in the zic man page, the zone columns are: -# Zone NAME GMTOFF RULES FORMAT [UNTIL] - -# Ethiopia -# From Paul Eggert (2014-07-31): -# Like the Swahili of Kenya and Tanzania, many Ethiopians keep a -# 12-hour clock starting at our 06:00, so their "8 o'clock" is our -# 02:00 or 14:00. Keep this in mind when you ask the time in Amharic. -# -# Shanks & Pottenger write that Ethiopia had six narrowly-spaced time -# zones between 1870 and 1890, that they merged to 38E50 (2:35:20) in -# 1890, and that they switched to 3:00 on 1936-05-05. Perhaps 38E50 -# was for Adis Dera. Quite likely the Shanks data entries are wrong -# anyway. -Zone Africa/Addis_Ababa 2:34:48 - LMT 1870 - 2:35:20 - ADMT 1936 May 5 # Adis Dera MT - 3:00 - EAT - -# Eritrea -Zone Africa/Asmara 2:35:32 - LMT 1870 - 2:35:32 - AMT 1890 # Asmara Mean Time - 2:35:20 - ADMT 1936 May 5 # Adis Dera MT - 3:00 - EAT -Link Africa/Asmara Africa/Asmera - -# Mali (southern) -Zone Africa/Bamako -0:32:00 - LMT 1912 - 0:00 - GMT 1934 Feb 26 - -1:00 - -01 1960 Jun 20 - 0:00 - GMT - -# Central African Republic -Zone Africa/Bangui 1:14:20 - LMT 1912 - 1:00 - WAT - -# Gambia -Zone Africa/Banjul -1:06:36 - LMT 1912 - -1:06:36 - BMT 1935 # Banjul Mean Time - -1:00 - -01 1964 - 0:00 - GMT - -# Malawi -Zone Africa/Blantyre 2:20:00 - LMT 1903 Mar - 2:00 - CAT - -# Republic of the Congo -Zone Africa/Brazzaville 1:01:08 - LMT 1912 - 1:00 - WAT - -# Burundi -Zone Africa/Bujumbura 1:57:28 - LMT 1890 - 2:00 - CAT - -# Guinea -Zone Africa/Conakry -0:54:52 - LMT 1912 - 0:00 - GMT 1934 Feb 26 - -1:00 - -01 1960 - 0:00 - GMT - -# Senegal -Zone Africa/Dakar -1:09:44 - LMT 1912 - -1:00 - -01 1941 Jun - 0:00 - GMT - -# Tanzania -Zone Africa/Dar_es_Salaam 2:37:08 - LMT 1931 - 3:00 - EAT 1948 - 2:45 - +0245 1961 - 3:00 - EAT - -# Djibouti -Zone Africa/Djibouti 2:52:36 - LMT 1911 Jul - 3:00 - EAT - -# Cameroon -# Whitman says they switched to 1:00 in 1920; go with Shanks & Pottenger. -Zone Africa/Douala 0:38:48 - LMT 1912 - 1:00 - WAT -# Sierra Leone -# From Paul Eggert (2014-08-12): -# The following table is from Shanks & Pottenger, but it can't be right. -# Whitman gives Mar 31 - Aug 31 for 1931 on. -# The International Hydrographic Bulletin, 1932-33, p 63 says that -# Sierra Leone would advance its clocks by 20 minutes on 1933-10-01. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule SL 1935 1942 - Jun 1 0:00 0:40 -0020 -Rule SL 1935 1942 - Oct 1 0:00 0 -01 -Rule SL 1957 1962 - Jun 1 0:00 1:00 +01 -Rule SL 1957 1962 - Sep 1 0:00 0 GMT -Zone Africa/Freetown -0:53:00 - LMT 1882 - -0:53:00 - FMT 1913 Jun # Freetown Mean Time - -1:00 SL %s 1957 - 0:00 SL GMT/+01 - -# Botswana -# From Paul Eggert (2013-02-21): -# Milne says they were regulated by the Cape Town Signal in 1899; -# assume they switched to 2:00 when Cape Town did. 
-Zone Africa/Gaborone 1:43:40 - LMT 1885 - 1:30 - SAST 1903 Mar - 2:00 - CAT 1943 Sep 19 2:00 - 2:00 1:00 CAST 1944 Mar 19 2:00 - 2:00 - CAT - -# Zimbabwe -Zone Africa/Harare 2:04:12 - LMT 1903 Mar - 2:00 - CAT - -# South Sudan -Zone Africa/Juba 2:06:24 - LMT 1931 - 2:00 Sudan CA%sT 2000 Jan 15 12:00 - 3:00 - EAT - -# Uganda -Zone Africa/Kampala 2:09:40 - LMT 1928 Jul - 3:00 - EAT 1930 - 2:30 - +0230 1948 - 2:45 - +0245 1957 - 3:00 - EAT - -# Rwanda -Zone Africa/Kigali 2:00:16 - LMT 1935 Jun - 2:00 - CAT - -# Democratic Republic of the Congo (west) -Zone Africa/Kinshasa 1:01:12 - LMT 1897 Nov 9 - 1:00 - WAT - -# Gabon -Zone Africa/Libreville 0:37:48 - LMT 1912 - 1:00 - WAT - -# Togo -Zone Africa/Lome 0:04:52 - LMT 1893 - 0:00 - GMT - -# Angola -# -# Shanks gives 1911-05-26 for the transition to WAT, -# evidently confusing the date of the Portuguese decree -# https://dre.pt/pdf1sdip/1911/05/12500/23132313.pdf -# with the date that it took effect, namely 1912-01-01. -# -Zone Africa/Luanda 0:52:56 - LMT 1892 - 0:52:04 - LMT 1912 Jan 1 # Luanda Mean Time? - 1:00 - WAT - -# Democratic Republic of the Congo (east) -Zone Africa/Lubumbashi 1:49:52 - LMT 1897 Nov 9 - 2:00 - CAT - -# Zambia -Zone Africa/Lusaka 1:53:08 - LMT 1903 Mar - 2:00 - CAT - -# Equatorial Guinea -# -# Although Shanks says that Malabo switched from UT +00 to +01 on 1963-12-15, -# a Google Books search says that London Calling, Issues 432-465 (1948), p 19, -# says that Spanish Guinea was at +01 back then. The Shanks data entries -# are most likely wrong, but we have nothing better; use them here for now. -# -Zone Africa/Malabo 0:35:08 - LMT 1912 - 0:00 - GMT 1963 Dec 15 - 1:00 - WAT - -# Lesotho -Zone Africa/Maseru 1:50:00 - LMT 1903 Mar - 2:00 - SAST 1943 Sep 19 2:00 - 2:00 1:00 SAST 1944 Mar 19 2:00 - 2:00 - SAST - -# Swaziland -Zone Africa/Mbabane 2:04:24 - LMT 1903 Mar - 2:00 - SAST - -# Somalia -Zone Africa/Mogadishu 3:01:28 - LMT 1893 Nov - 3:00 - EAT 1931 - 2:30 - +0230 1957 - 3:00 - EAT - -# Niger -Zone Africa/Niamey 0:08:28 - LMT 1912 - -1:00 - -01 1934 Feb 26 - 0:00 - GMT 1960 - 1:00 - WAT - -# Mauritania -Zone Africa/Nouakchott -1:03:48 - LMT 1912 - 0:00 - GMT 1934 Feb 26 - -1:00 - -01 1960 Nov 28 - 0:00 - GMT - -# Burkina Faso -Zone Africa/Ouagadougou -0:06:04 - LMT 1912 - 0:00 - GMT - -# Benin -# Whitman says they switched to 1:00 in 1946, not 1934; -# go with Shanks & Pottenger. -Zone Africa/Porto-Novo 0:10:28 - LMT 1912 Jan 1 - 0:00 - GMT 1934 Feb 26 - 1:00 - WAT - -# São Tomé and Príncipe -Zone Africa/Sao_Tome 0:26:56 - LMT 1884 - -0:36:32 - LMT 1912 # Lisbon Mean Time - 0:00 - GMT - -# Mali (northern) -Zone Africa/Timbuktu -0:12:04 - LMT 1912 - 0:00 - GMT - -# Anguilla -Zone America/Anguilla -4:12:16 - LMT 1912 Mar 2 - -4:00 - AST - -# Antigua and Barbuda -Zone America/Antigua -4:07:12 - LMT 1912 Mar 2 - -5:00 - EST 1951 - -4:00 - AST - -# Chubut, Argentina -# The name "Comodoro Rivadavia" exceeds the 14-byte POSIX limit. 
-Zone America/Argentina/ComodRivadavia -4:30:00 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1991 Mar 3 - -4:00 - -04 1991 Oct 20 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 - -03 2004 Jun 1 - -4:00 - -04 2004 Jun 20 - -3:00 - -03 - -# Aruba -Zone America/Aruba -4:40:24 - LMT 1912 Feb 12 # Oranjestad - -4:30 - -0430 1965 - -4:00 - AST - -# Cayman Is -Zone America/Cayman -5:25:32 - LMT 1890 # Georgetown - -5:07:11 - KMT 1912 Feb # Kingston Mean Time - -5:00 - EST - -# Canada -Zone America/Coral_Harbour -5:32:40 - LMT 1884 - -5:00 NT_YK E%sT 1946 - -5:00 - EST - -# Dominica -Zone America/Dominica -4:05:36 - LMT 1911 Jul 1 0:01 # Roseau - -4:00 - AST - -# Baja California -# See 'northamerica' for why this entry is here rather than there. -Zone America/Ensenada -7:46:28 - LMT 1922 Jan 1 0:13:32 - -8:00 - PST 1927 Jun 10 23:00 - -7:00 - MST 1930 Nov 16 - -8:00 - PST 1942 Apr - -7:00 - MST 1949 Jan 14 - -8:00 - PST 1996 - -8:00 Mexico P%sT - -# Grenada -Zone America/Grenada -4:07:00 - LMT 1911 Jul # St George's - -4:00 - AST - -# Guadeloupe -Zone America/Guadeloupe -4:06:08 - LMT 1911 Jun 8 # Pointe-à-Pitre - -4:00 - AST - -# Canada -# -# From Paul Eggert (2015-03-24): -# Since 1970 most of Quebec has been like Toronto; see -# America/Toronto. However, earlier versions of the tz database -# mistakenly relied on data from Shanks & Pottenger saying that Quebec -# differed from Ontario after 1970, and the following rules and zone -# were created for most of Quebec from the incorrect Shanks & -# Pottenger data. The post-1970 entries have been corrected, but the -# pre-1970 entries are unchecked and probably have errors. -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Mont 1917 only - Mar 25 2:00 1:00 D -Rule Mont 1917 only - Apr 24 0:00 0 S -Rule Mont 1919 only - Mar 31 2:30 1:00 D -Rule Mont 1919 only - Oct 25 2:30 0 S -Rule Mont 1920 only - May 2 2:30 1:00 D -Rule Mont 1920 1922 - Oct Sun>=1 2:30 0 S -Rule Mont 1921 only - May 1 2:00 1:00 D -Rule Mont 1922 only - Apr 30 2:00 1:00 D -Rule Mont 1924 only - May 17 2:00 1:00 D -Rule Mont 1924 1926 - Sep lastSun 2:30 0 S -Rule Mont 1925 1926 - May Sun>=1 2:00 1:00 D -Rule Mont 1927 1937 - Apr lastSat 24:00 1:00 D -Rule Mont 1927 1937 - Sep lastSat 24:00 0 S -Rule Mont 1938 1940 - Apr lastSun 0:00 1:00 D -Rule Mont 1938 1939 - Sep lastSun 0:00 0 S -Rule Mont 1946 1973 - Apr lastSun 2:00 1:00 D -Rule Mont 1945 1948 - Sep lastSun 2:00 0 S -Rule Mont 1949 1950 - Oct lastSun 2:00 0 S -Rule Mont 1951 1956 - Sep lastSun 2:00 0 S -Rule Mont 1957 1973 - Oct lastSun 2:00 0 S -Zone America/Montreal -4:54:16 - LMT 1884 - -5:00 Mont E%sT 1918 - -5:00 Canada E%sT 1919 - -5:00 Mont E%sT 1942 Feb 9 2:00s - -5:00 Canada E%sT 1946 - -5:00 Mont E%sT 1974 - -5:00 Canada E%sT - -# Montserrat -# From Paul Eggert (2006-03-22): -# In 1995 volcanic eruptions forced evacuation of Plymouth, the capital. -# world.gazetteer.com says Cork Hill is the most populous location now. -Zone America/Montserrat -4:08:52 - LMT 1911 Jul 1 0:01 # Cork Hill - -4:00 - AST - -# Argentina -# This entry was intended for the following areas, but has been superseded by -# more detailed zones. 
-# Santa Fe (SF), Entre Ríos (ER), Corrientes (CN), Misiones (MN), Chaco (CC), -# Formosa (FM), La Pampa (LP), Chubut (CH) -Zone America/Rosario -4:02:40 - LMT 1894 Nov - -4:16:44 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1991 Jul - -3:00 - -03 1999 Oct 3 0:00 - -4:00 Arg -04/-03 2000 Mar 3 0:00 - -3:00 - -03 - -# St Kitts-Nevis -Zone America/St_Kitts -4:10:52 - LMT 1912 Mar 2 # Basseterre - -4:00 - AST - -# St Lucia -Zone America/St_Lucia -4:04:00 - LMT 1890 # Castries - -4:04:00 - CMT 1912 # Castries Mean Time - -4:00 - AST - -# Virgin Is -Zone America/St_Thomas -4:19:44 - LMT 1911 Jul # Charlotte Amalie - -4:00 - AST - -# St Vincent and the Grenadines -Zone America/St_Vincent -4:04:56 - LMT 1890 # Kingstown - -4:04:56 - KMT 1912 # Kingstown Mean Time - -4:00 - AST - -# British Virgin Is -Zone America/Tortola -4:18:28 - LMT 1911 Jul # Road Town - -4:00 - AST - -# McMurdo, Ross Island, since 1955-12 -Zone Antarctica/McMurdo 0 - -00 1956 - 12:00 NZ NZ%sT -Link Antarctica/McMurdo Antarctica/South_Pole - -# Yemen -# Milne says 2:59:54 was the meridian of the saluting battery at Aden, -# and that Yemen was at 1:55:56, the meridian of the Hagia Sophia. -Zone Asia/Aden 2:59:54 - LMT 1950 - 3:00 - +03 - -# Bahrain -Zone Asia/Bahrain 3:22:20 - LMT 1920 # Manamah - 4:00 - +04 1972 Jun - 3:00 - +03 - -# India -# -# From Paul Eggert (2014-09-06): -# The 1876 Report of the Secretary of the [US] Navy, p 305 says that Madras -# civil time was 5:20:57.3. -# -# From Paul Eggert (2014-08-21): -# In tomorrow's The Hindu, Nitya Menon reports that India had two civil time -# zones starting in 1884, one in Bombay and one in Calcutta, and that railways -# used a third time zone based on Madras time (80 deg. 18'30" E). Also, -# in 1881 Bombay briefly switched to Madras time, but switched back. See: -# http://www.thehindu.com/news/cities/chennai/madras-375-when-madras-clocked-the-time/article6339393.ece -#Zone Asia/Chennai [not enough info to complete] - -# China -# Long-shu Time (probably due to Long and Shu being two names of that area) -# Guangxi, Guizhou, Hainan, Ningxia, Sichuan, Shaanxi, and Yunnan; -# most of Gansu; west Inner Mongolia; west Qinghai; and the Guangdong -# counties Deqing, Enping, Kaiping, Luoding, Taishan, Xinxing, -# Yangchun, Yangjiang, Yu'nan, and Yunfu. -Zone Asia/Chongqing 7:06:20 - LMT 1928 # or Chungking - 7:00 - +07 1980 May - 8:00 PRC C%sT -Link Asia/Chongqing Asia/Chungking - -# Vietnam -# From Paul Eggert (2014-10-13): -# See Asia/Ho_Chi_Minh for the source for this data. -# Trần's book says the 1954-55 transition to 07:00 in Hanoi was in -# October 1954, with exact date and time unspecified. -Zone Asia/Hanoi 7:03:24 - LMT 1906 Jul 1 - 7:06:30 - PLMT 1911 May 1 - 7:00 - +07 1942 Dec 31 23:00 - 8:00 - +08 1945 Mar 14 23:00 - 9:00 - +09 1945 Sep 2 - 7:00 - +07 1947 Apr 1 - 8:00 - +08 1954 Oct - 7:00 - +07 - -# China -# Changbai Time ("Long-white Time", Long-white = Heilongjiang area) -# Heilongjiang (except Mohe county), Jilin -Zone Asia/Harbin 8:26:44 - LMT 1928 # or Haerbin - 8:30 - +0830 1932 Mar - 8:00 - CST 1940 - 9:00 - +09 1966 May - 8:30 - +0830 1980 May - 8:00 PRC C%sT - -# far west China -Zone Asia/Kashgar 5:03:56 - LMT 1928 # or Kashi or Kaxgar - 5:30 - +0530 1940 - 5:00 - +05 1980 May - 8:00 PRC C%sT - -# Kuwait -Zone Asia/Kuwait 3:11:56 - LMT 1950 - 3:00 - +03 - - -# Oman -# Milne says 3:54:24 was the meridian of the Muscat Tidal Observatory. 
-Zone Asia/Muscat 3:54:24 - LMT 1920 - 4:00 - +04 - -# India -# From Paul Eggert (2014-08-11), after a heads-up from Stephen Colebourne: -# According to a Portuguese decree (1911-05-26) -# https://dre.pt/pdf1sdip/1911/05/12500/23132313.pdf -# Portuguese India switched to UT +05 on 1912-01-01. -#Zone Asia/Panaji [not enough info to complete] - -# Cambodia -# From Paul Eggert (2014-10-11): -# See Asia/Ho_Chi_Minh for the source for most of this data. Also, guess -# (1) Cambodia reverted to UT +07 on 1945-09-02, when Vietnam did, and -# (2) they also reverted to +07 on 1953-11-09, the date of independence. -# These guesses are probably wrong but they're better than guessing no -# transitions there. -Zone Asia/Phnom_Penh 6:59:40 - LMT 1906 Jul 1 - 7:06:30 - PLMT 1911 May 1 - 7:00 - +07 1942 Dec 31 23:00 - 8:00 - +08 1945 Mar 14 23:00 - 9:00 - +09 1945 Sep 2 - 7:00 - +07 1947 Apr 1 - 8:00 - +08 1953 Nov 9 - 7:00 - +07 - -# Israel -Zone Asia/Tel_Aviv 2:19:04 - LMT 1880 - 2:21 - JMT 1918 - 2:00 Zion I%sT - -# Laos -# From Paul Eggert (2014-10-11): -# See Asia/Ho_Chi_Minh for the source for most of this data. -# Trần's book says that Laos reverted to UT +07 on 1955-04-15. -# Also, guess that Laos reverted to +07 on 1945-09-02, when Vietnam did; -# this is probably wrong but it's better than guessing no transition. -Zone Asia/Vientiane 6:50:24 - LMT 1906 Jul 1 - 7:06:30 - PLMT 1911 May 1 - 7:00 - +07 1942 Dec 31 23:00 - 8:00 - +08 1945 Mar 14 23:00 - 9:00 - +09 1945 Sep 2 - 7:00 - +07 1947 Apr 1 - 8:00 - +08 1955 Apr 15 - 7:00 - +07 - -# Jan Mayen -# From Whitman: -Zone Atlantic/Jan_Mayen -1:00 - -01 - -# St Helena -Zone Atlantic/St_Helena -0:22:48 - LMT 1890 # Jamestown - -0:22:48 - JMT 1951 # Jamestown Mean Time - 0:00 - GMT - -# Northern Ireland -Zone Europe/Belfast -0:23:40 - LMT 1880 Aug 2 - -0:25:21 - DMT 1916 May 21 2:00 - # DMT = Dublin/Dunsink MT - -0:25:21 1:00 IST 1916 Oct 1 2:00s - # IST = Irish Summer Time - 0:00 GB-Eire %s 1968 Oct 27 - 1:00 - BST 1971 Oct 31 2:00u - 0:00 GB-Eire %s 1996 - 0:00 EU GMT/BST - -# Guernsey -# Data from Joseph S. Myers -# https://mm.icann.org/pipermail/tz/2013-September/019883.html -# References to be added -# LMT is for Town Church, St. Peter Port, 49 degrees 27'17"N 2 degrees 32'10"W -Zone Europe/Guernsey -0:10:09 - LMT 1913 Jun 18 - 0:00 GB-Eire %s 1940 Jul 2 - 1:00 C-Eur CE%sT 1945 May 8 - 0:00 GB-Eire %s 1968 Oct 27 - 1:00 - BST 1971 Oct 31 2:00u - 0:00 GB-Eire %s 1996 - 0:00 EU GMT/BST - -# Isle of Man -# -# From Lester Caine (2013-09-04): -# The Isle of Man legislation is now on-line at -# , starting with the original Statutory -# Time Act in 1883 and including additional confirmation of some of -# the dates of the 'Summer Time' orders originating at -# Westminster. There is a little uncertainty as to the starting date -# of the first summer time in 1916 which may have been announced a -# couple of days late. There is still a substantial number of -# documents to work through, but it is thought that every GB change -# was also implemented on the island. -# -# AT4 of 1883 - The Statutory Time et cetera Act 1883 - -# LMT Location - 54.1508N -4.4814E - Tynwald Hill ( Manx parliament ) -Zone Europe/Isle_of_Man -0:17:55 - LMT 1883 Mar 30 0:00s - 0:00 GB-Eire %s 1968 Oct 27 - 1:00 - BST 1971 Oct 31 2:00u - 0:00 GB-Eire %s 1996 - 0:00 EU GMT/BST - -# Jersey -# Data from Joseph S. Myers -# https://mm.icann.org/pipermail/tz/2013-September/019883.html -# References to be added -# LMT is for Parish Church, St. 
Helier, 49 degrees 11'0.57"N 2 degrees 6'24.33"W -Zone Europe/Jersey -0:08:26 - LMT 1898 Jun 11 16:00u - 0:00 GB-Eire %s 1940 Jul 2 - 1:00 C-Eur CE%sT 1945 May 8 - 0:00 GB-Eire %s 1968 Oct 27 - 1:00 - BST 1971 Oct 31 2:00u - 0:00 GB-Eire %s 1996 - 0:00 EU GMT/BST - -# Slovenia -Zone Europe/Ljubljana 0:58:04 - LMT 1884 - 1:00 - CET 1941 Apr 18 23:00 - 1:00 C-Eur CE%sT 1945 May 8 2:00s - 1:00 1:00 CEST 1945 Sep 16 2:00s - 1:00 - CET 1982 Nov 27 - 1:00 EU CE%sT - -# Bosnia and Herzegovina -Zone Europe/Sarajevo 1:13:40 - LMT 1884 - 1:00 - CET 1941 Apr 18 23:00 - 1:00 C-Eur CE%sT 1945 May 8 2:00s - 1:00 1:00 CEST 1945 Sep 16 2:00s - 1:00 - CET 1982 Nov 27 - 1:00 EU CE%sT - -# Macedonia -Zone Europe/Skopje 1:25:44 - LMT 1884 - 1:00 - CET 1941 Apr 18 23:00 - 1:00 C-Eur CE%sT 1945 May 8 2:00s - 1:00 1:00 CEST 1945 Sep 16 2:00s - 1:00 - CET 1982 Nov 27 - 1:00 EU CE%sT - -# Moldova / Transnistria -Zone Europe/Tiraspol 1:58:32 - LMT 1880 - 1:55 - CMT 1918 Feb 15 # Chisinau MT - 1:44:24 - BMT 1931 Jul 24 # Bucharest MT - 2:00 Romania EE%sT 1940 Aug 15 - 2:00 1:00 EEST 1941 Jul 17 - 1:00 C-Eur CE%sT 1944 Aug 24 - 3:00 Russia MSK/MSD 1991 Mar 31 2:00 - 2:00 Russia EE%sT 1992 Jan 19 2:00 - 3:00 Russia MSK/MSD - -# Liechtenstein -Zone Europe/Vaduz 0:38:04 - LMT 1894 Jun - 1:00 - CET 1981 - 1:00 EU CE%sT - -# Croatia -Zone Europe/Zagreb 1:03:52 - LMT 1884 - 1:00 - CET 1941 Apr 18 23:00 - 1:00 C-Eur CE%sT 1945 May 8 2:00s - 1:00 1:00 CEST 1945 Sep 16 2:00s - 1:00 - CET 1982 Nov 27 - 1:00 EU CE%sT - -# Madagascar -Zone Indian/Antananarivo 3:10:04 - LMT 1911 Jul - 3:00 - EAT 1954 Feb 27 23:00s - 3:00 1:00 EAST 1954 May 29 23:00s - 3:00 - EAT - -# Comoros -Zone Indian/Comoro 2:53:04 - LMT 1911 Jul # Moroni, Gran Comoro - 3:00 - EAT - -# Mayotte -Zone Indian/Mayotte 3:00:56 - LMT 1911 Jul # Mamoutzou - 3:00 - EAT - -# US minor outlying islands -Zone Pacific/Johnston -10:00 - HST - -# US minor outlying islands -# -# From Mark Brader (2005-01-23): -# [Fallacies and Fantasies of Air Transport History, by R.E.G. Davies, -# published 1994 by Paladwr Press, McLean, VA, USA; ISBN 0-9626483-5-3] -# reproduced a Pan American Airways timetable from 1936, for their weekly -# "Orient Express" flights between San Francisco and Manila, and connecting -# flights to Chicago and the US East Coast. As it uses some time zone -# designations that I've never seen before:.... -# Fri. 6:30A Lv. HONOLOLU (Pearl Harbor), H.I. H.L.T. Ar. 5:30P Sun. -# " 3:00P Ar. MIDWAY ISLAND . . . . . . . . . M.L.T. Lv. 6:00A " -# -Zone Pacific/Midway -11:49:28 - LMT 1901 - -11:00 - -11 1956 Jun 3 - -11:00 1:00 -10 1956 Sep 2 - -11:00 - -11 - -# N Mariana Is -Zone Pacific/Saipan -14:17:00 - LMT 1844 Dec 31 - 9:43:00 - LMT 1901 - 9:00 - +09 1969 Oct - 10:00 - +10 2000 Dec 23 - 10:00 - ChST # Chamorro Standard Time diff --git a/src/timezone/data/etcetera b/src/timezone/data/etcetera deleted file mode 100644 index f5fa4c94b4..0000000000 --- a/src/timezone/data/etcetera +++ /dev/null @@ -1,78 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# These entries are mostly present for historical reasons, so that -# people in areas not otherwise covered by the tz files could "zic -l" -# to a time zone that was right for their area. These days, the -# tz files cover almost all the inhabited world, and the only practical -# need now for the entries that are not on UTC are for ships at sea -# that cannot use POSIX TZ settings. 
- -# Starting with POSIX 1003.1-2001, the entries below are all -# unnecessary as settings for the TZ environment variable. E.g., -# instead of TZ='Etc/GMT+4' one can use the POSIX setting TZ='<-04>+4'. -# -# Do not use a POSIX TZ setting like TZ='GMT+4', which is four hours -# behind GMT but uses the completely misleading abbreviation "GMT". - -Zone Etc/GMT 0 - GMT -Zone Etc/UTC 0 - UTC -Zone Etc/UCT 0 - UCT - -# The following link uses older naming conventions, -# but it belongs here, not in the file 'backward', -# as functions like gmtime load the "GMT" file to handle leap seconds properly. -# We want this to work even on installations that omit the other older names. -Link Etc/GMT GMT - -Link Etc/UTC Etc/Universal -Link Etc/UTC Etc/Zulu - -Link Etc/GMT Etc/Greenwich -Link Etc/GMT Etc/GMT-0 -Link Etc/GMT Etc/GMT+0 -Link Etc/GMT Etc/GMT0 - -# Be consistent with POSIX TZ settings in the Zone names, -# even though this is the opposite of what many people expect. -# POSIX has positive signs west of Greenwich, but many people expect -# positive signs east of Greenwich. For example, TZ='Etc/GMT+4' uses -# the abbreviation "-04" and corresponds to 4 hours behind UT -# (i.e. west of Greenwich) even though many people would expect it to -# mean 4 hours ahead of UT (i.e. east of Greenwich). - -# Earlier incarnations of this package were not POSIX-compliant, -# and had lines such as -# Zone GMT-12 -12 - GMT-1200 -# We did not want things to change quietly if someone accustomed to the old -# way does a -# zic -l GMT-12 -# so we moved the names into the Etc subdirectory. -# Also, the time zone abbreviations are now compatible with %z. - -Zone Etc/GMT-14 14 - +14 -Zone Etc/GMT-13 13 - +13 -Zone Etc/GMT-12 12 - +12 -Zone Etc/GMT-11 11 - +11 -Zone Etc/GMT-10 10 - +10 -Zone Etc/GMT-9 9 - +09 -Zone Etc/GMT-8 8 - +08 -Zone Etc/GMT-7 7 - +07 -Zone Etc/GMT-6 6 - +06 -Zone Etc/GMT-5 5 - +05 -Zone Etc/GMT-4 4 - +04 -Zone Etc/GMT-3 3 - +03 -Zone Etc/GMT-2 2 - +02 -Zone Etc/GMT-1 1 - +01 -Zone Etc/GMT+1 -1 - -01 -Zone Etc/GMT+2 -2 - -02 -Zone Etc/GMT+3 -3 - -03 -Zone Etc/GMT+4 -4 - -04 -Zone Etc/GMT+5 -5 - -05 -Zone Etc/GMT+6 -6 - -06 -Zone Etc/GMT+7 -7 - -07 -Zone Etc/GMT+8 -8 - -08 -Zone Etc/GMT+9 -9 - -09 -Zone Etc/GMT+10 -10 - -10 -Zone Etc/GMT+11 -11 - -11 -Zone Etc/GMT+12 -12 - -12 diff --git a/src/timezone/data/europe b/src/timezone/data/europe deleted file mode 100644 index 5b3b4e52fa..0000000000 --- a/src/timezone/data/europe +++ /dev/null @@ -1,3845 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# This file is by no means authoritative; if you think you know better, -# go ahead and edit the file (and please send any changes to -# tz@iana.org for general use in the future). For more, please see -# the file CONTRIBUTING in the tz distribution. - -# From Paul Eggert (2017-02-10): -# -# Unless otherwise specified, the source for data through 1990 is: -# Thomas G. Shanks and Rique Pottenger, The International Atlas (6th edition), -# San Diego: ACS Publications, Inc. (2003). -# Unfortunately this book contains many errors and cites no sources. -# -# Many years ago Gwillim Law wrote that a good source -# for time zone data was the International Air Transport -# Association's Standard Schedules Information Manual (IATA SSIM), -# published semiannually. Law sent in several helpful summaries -# of the IATA's data after 1990. Except where otherwise noted, -# IATA SSIM is the source for entries after 1990. 
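An editorial illustration of the sign reversal described in the 'etcetera' entries above (not part of the tz distribution or of the PostgreSQL sources; it assumes a POSIX C library with POSIX.1-2001 angle-bracket TZ support and an installed tz database):

	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>

	/* Etc/GMT+4 means four hours *behind* UT: POSIX puts positive
	 * signs west of Greenwich. */
	static void show(const char *tz)
	{
		time_t now = time(NULL);
		struct tm tm;
		char buf[32];

		setenv("TZ", tz, 1);
		tzset();
		localtime_r(&now, &tm);
		strftime(buf, sizeof buf, "%Z %z", &tm);
		printf("%-12s -> %s\n", tz, buf);
	}

	int main(void)
	{
		show("Etc/GMT+4");	/* zone file form */
		show("<-04>+4");	/* equivalent pure-POSIX TZ string */
		return 0;
	}

Both settings should report "-04 -0400", i.e. four hours behind UT, despite the "+4" in the zone name.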
-# -# A reliable and entertaining source about time zones is -# Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997). -# -# Except where otherwise noted, Shanks & Pottenger is the source for -# entries through 1991, and IATA SSIM is the source for entries afterwards. -# -# Other sources occasionally used include: -# -# Edward W. Whitman, World Time Differences, -# Whitman Publishing Co, 2 Niagara Av, Ealing, London (undated), -# which I found in the UCLA library. -# -# William Willett, The Waste of Daylight, 19th edition -# -# [PDF] (1914-03) -# -# Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94 -# . He writes: -# "It is requested that corrections and additions to these tables -# may be sent to Mr. John Milne, Royal Geographical Society, -# Savile Row, London." Nowadays please email them to tz@iana.org. -# -# Byalokoz EL. New Counting of Time in Russia since July 1, 1919. -# This Russian-language source was consulted by Vladimir Karpinsky; see -# https://mm.icann.org/pipermail/tz/2014-August/021320.html -# The full Russian citation is: -# Бялокоз, Евгений Людвигович. Новый счет времени в течении суток -# введенный декретом Совета народных комиссаров для всей России с 1-го -# июля 1919 г. / Изд. 2-е Междуведомственной комиссии. - Петроград: -# Десятая гос. тип., 1919. -# http://resolver.gpntb.ru/purl?docushare/dsweb/Get/Resource-2011/Byalokoz__E.L.__Novyy__schet__vremeni__v__techenie__sutok__izd__2(1).pdf -# -# Brazil's Divisão Serviço da Hora (DSHO), -# History of Summer Time -# -# (1998-09-21, in Portuguese) -# -# I invented the abbreviations marked '*' in the following table; -# the rest are variants of the "xMT" pattern for a city's mean time, -# or are from other sources. Corrections are welcome! -# std dst 2dst -# LMT Local Mean Time -# -4:00 AST ADT Atlantic -# 0:00 GMT BST BDST Greenwich, British Summer -# 0:00 GMT IST Greenwich, Irish Summer -# 0:00 WET WEST WEMT Western Europe -# 0:19:32.13 AMT* NST* Amsterdam, Netherlands Summer (1835-1937) -# 1:00 BST British Standard (1968-1971) -# 1:00 CET CEST CEMT Central Europe -# 1:00:14 SET Swedish (1879-1899) -# 1:36:34 RMT* LST* Riga, Latvian Summer (1880-1926)* -# 2:00 EET EEST Eastern Europe -# 3:00 MSK MSD MDST* Moscow - -# From Peter Ilieve (1994-12-04), -# The original six [EU members]: Belgium, France, (West) Germany, Italy, -# Luxembourg, the Netherlands. -# Plus, from 1 Jan 73: Denmark, Ireland, United Kingdom. -# Plus, from 1 Jan 81: Greece. -# Plus, from 1 Jan 86: Spain, Portugal. -# Plus, from 1 Jan 95: Austria, Finland, Sweden. (Norway negotiated terms for -# entry but in a referendum on 28 Nov 94 the people voted No by 52.2% to 47.8% -# on a turnout of 88.6%. This was almost the same result as Norway's previous -# referendum in 1972, they are the only country to have said No twice. -# Referendums in the other three countries voted Yes.) -# ... -# Estonia ... uses EU dates but not at 01:00 GMT, they use midnight GMT. -# I don't think they know yet what they will do from 1996 onwards. -# ... -# There shouldn't be any [current members who are not using EU rules]. -# A Directive has the force of law, member states are obliged to enact -# national law to implement it. The only contentious issue was the -# different end date for the UK and Ireland, and this was always allowed -# in the Directive. 
- - -############################################################################### - -# Britain (United Kingdom) and Ireland (Eire) - -# From Peter Ilieve (1994-07-06): -# -# On 17 Jan 1994 the Independent, a UK quality newspaper, had a piece about -# historical vistas along the Thames in west London. There was a photo -# and a sketch map showing some of the sightlines involved. One paragraph -# of the text said: -# -# 'An old stone obelisk marking a forgotten terrestrial meridian stands -# beside the river at Kew. In the 18th century, before time and longitude -# was standardised by the Royal Observatory in Greenwich, scholars observed -# this stone and the movement of stars from Kew Observatory nearby. They -# made their calculations and set the time for the Horse Guards and Parliament, -# but now the stone is obscured by scrubwood and can only be seen by walking -# along the towpath within a few yards of it.' -# -# I have a one inch to one mile map of London and my estimate of the stone's -# position is 51 degrees 28' 30" N, 0 degrees 18' 45" W. The longitude should -# be within about +-2". The Ordnance Survey grid reference is TQ172761. -# -# [This yields GMTOFF = -0:01:15 for London LMT in the 18th century.] - -# From Paul Eggert (1993-11-18): -# -# Howse writes that Britain was the first country to use standard time. -# The railways cared most about the inconsistencies of local mean time, -# and it was they who forced a uniform time on the country. -# The original idea was credited to Dr. William Hyde Wollaston (1766-1828) -# and was popularized by Abraham Follett Osler (1808-1903). -# The first railway to adopt London time was the Great Western Railway -# in November 1840; other railways followed suit, and by 1847 most -# (though not all) railways used London time. On 1847-09-22 the -# Railway Clearing House, an industry standards body, recommended that GMT be -# adopted at all stations as soon as the General Post Office permitted it. -# The transition occurred on 12-01 for the L&NW, the Caledonian, -# and presumably other railways; the January 1848 Bradshaw's lists many -# railways as using GMT. By 1855 the vast majority of public -# clocks in Britain were set to GMT (though some, like the great clock -# on Tom Tower at Christ Church, Oxford, were fitted with two minute hands, -# one for local time and one for GMT). The last major holdout was the legal -# system, which stubbornly stuck to local time for many years, leading -# to oddities like polls opening at 08:13 and closing at 16:13. -# The legal system finally switched to GMT when the Statutes (Definition -# of Time) Act took effect; it received the Royal Assent on 1880-08-02. -# -# In the tables below, we condense this complicated story into a single -# transition date for London, namely 1847-12-01. We don't know as much -# about Dublin, so we use 1880-08-02, the legal transition time. - -# From Paul Eggert (2014-07-19): -# The ancients had no need for daylight saving, as they kept time -# informally or via hours whose length depended on the time of year. -# Daylight saving time in its modern sense was invented by the -# New Zealand entomologist George Vernon Hudson (1867-1946), -# whose day job as a postal clerk led him to value -# after-hours daylight in which to pursue his research. -# In 1895 he presented a paper to the Wellington Philosophical Society -# that proposed a two-hour daylight-saving shift. See: -# Hudson GV. On seasonal time-adjustment in countries south of lat. 30 deg. 
-# Transactions and Proceedings of the New Zealand Institute. 1895;28:734 -# http://rsnz.natlib.govt.nz/volume/rsnz_28/rsnz_28_00_006110.html -# Although some interest was expressed in New Zealand, his proposal -# did not find its way into law and eventually it was almost forgotten. -# -# In England, DST was independently reinvented by William Willett (1857-1915), -# a London builder and member of the Royal Astronomical Society -# who circulated a pamphlet "The Waste of Daylight" (1907) -# that proposed advancing clocks 20 minutes on each of four Sundays in April, -# and retarding them by the same amount on four Sundays in September. -# A bill was drafted in 1909 and introduced in Parliament several times, -# but it met with ridicule and opposition, especially from farming interests. -# Later editions of the pamphlet proposed one-hour summer time, and -# it was eventually adopted as a wartime measure in 1916. -# See: Summer Time Arrives Early, The Times (2000-05-18). -# A monument to Willett was unveiled on 1927-05-21, in an open space in -# a 45-acre wood near Chislehurst, Kent that was purchased by popular -# subscription and open to the public. On the south face of the monolith, -# designed by G. W. Miller, is the William Willett Memorial Sundial, -# which is permanently set to Summer Time. - -# From Winston Churchill (1934-04-28): -# It is one of the paradoxes of history that we should owe the boon of -# summer time, which gives every year to the people of this country -# between 160 and 170 hours more daylight leisure, to a war which -# plunged Europe into darkness for four years, and shook the -# foundations of civilization throughout the world. -# -- "A Silent Toast to William Willett", Pictorial Weekly; -# republished in Finest Hour (Spring 2002) 1(114):26 -# https://www.winstonchurchill.org/publications/finest-hour/finest-hour-114/a-silent-toast-to-william-willett-by-winston-s-churchill - -# From Paul Eggert (2015-08-08): -# The OED Supplement says that the English originally said "Daylight Saving" -# when they were debating the adoption of DST in 1908; but by 1916 this -# term appears only in quotes taken from DST's opponents, whereas the -# proponents (who eventually won the argument) are quoted as using "Summer". -# The term "Summer Time" was introduced by Herbert Samuel, Home Secretary; see: -# Viscount Samuel. Leisure in a Democracy. Cambridge University Press -# ISBN 978-1-107-49471-8 (1949, reissued 2015), p 8. - -# From Arthur David Olson (1989-01-19): -# A source at the British Information Office in New York avers that it's -# known as "British" Summer Time in all parts of the United Kingdom. - -# Date: 4 Jan 89 08:57:25 GMT (Wed) -# From: Jonathan Leffler -# [British Summer Time] is fixed annually by Act of Parliament. -# If you can predict what Parliament will do, you should be in -# politics making a fortune, not computing. - -# From Chris Carrier (1996-06-14): -# I remember reading in various wartime issues of the London Times the -# acronym BDST for British Double Summer Time. Look for the published -# time of sunrise and sunset in The Times, when BDST was in effect, and -# if you find a zone reference it will say, "All times B.D.S.T." - -# From Joseph S. Myers (1999-09-02): -# ... some military cables (WO 219/4100 - this is a copy from the -# main SHAEF archives held in the US National Archives, SHAEF/5252/8/516) -# agree that the usage is BDST (this appears in a message dated 17 Feb 1945). - -# From Joseph S. 
Myers (2000-10-03): -# On 18th April 1941, Sir Stephen Tallents of the BBC wrote to Sir -# Alexander Maxwell of the Home Office asking whether there was any -# official designation; the reply of the 21st was that there wasn't -# but he couldn't think of anything better than the "Double British -# Summer Time" that the BBC had been using informally. -# https://www.polyomino.org.uk/british-time/bbc-19410418.png -# https://www.polyomino.org.uk/british-time/ho-19410421.png - -# From Sir Alexander Maxwell in the above-mentioned letter (1941-04-21): -# [N]o official designation has as far as I know been adopted for the time -# which is to be introduced in May.... -# I cannot think of anything better than "Double British Summer Time" -# which could not be said to run counter to any official description. - -# From Paul Eggert (2000-10-02): -# Howse writes (p 157) 'DBST' too, but 'BDST' seems to have been common -# and follows the more usual convention of putting the location name first, -# so we use 'BDST'. - -# Peter Ilieve (1998-04-19) described at length -# the history of summer time legislation in the United Kingdom. -# Since 1998 Joseph S. Myers has been updating -# and extending this list, which can be found in -# https://www.polyomino.org.uk/british-time/ - -# From Joseph S. Myers (1998-01-06): -# -# The legal time in the UK outside of summer time is definitely GMT, not UTC; -# see Lord Tanlaw's speech -# https://www.publications.parliament.uk/pa/ld199798/ldhansrd/vo970611/text/70611-10.htm#70611-10_head0 -# (Lords Hansard 11 June 1997 columns 964 to 976). - -# From Paul Eggert (2006-03-22): -# -# For lack of other data, follow Shanks & Pottenger for Eire in 1940-1948. -# -# Given Ilieve and Myers's data, the following claims by Shanks & Pottenger -# are incorrect: -# * Wales did not switch from GMT to daylight saving time until -# 1921 Apr 3, when they began to conform with the rest of Great Britain. -# Actually, Wales was identical after 1880. -# * Eire had two transitions on 1916 Oct 1. -# It actually just had one transition. -# * Northern Ireland used single daylight saving time throughout WW II. -# Actually, it conformed to Britain. -# * GB-Eire changed standard time to 1 hour ahead of GMT on 1968-02-18. -# Actually, that date saw the usual switch to summer time. -# Standard time was not changed until 1968-10-27 (the clocks didn't change). -# -# Here is another incorrect claim by Shanks & Pottenger: -# * Jersey, Guernsey, and the Isle of Man did not switch from GMT -# to daylight saving time until 1921 Apr 3, when they began to -# conform with Great Britain. -# S.R.&O. 1916, No. 382 and HO 45/10811/312364 (quoted above) say otherwise. -# -# The following claim by Shanks & Pottenger is possible though doubtful; -# we'll ignore it for now. -# * Dublin's 1971-10-31 switch was at 02:00, even though London's was 03:00. -# -# -# Whitman says Dublin Mean Time was -0:25:21, which is more precise than -# Shanks & Pottenger. -# Perhaps this was Dunsink Observatory Time, as Dunsink Observatory -# (8 km NW of Dublin's center) seemingly was to Dublin as Greenwich was -# to London. For example: -# -# "Timeball on the ballast office is down. Dunsink time." -# -- James Joyce, Ulysses - -# "Countess Markievicz ... claimed that the [1916] abolition of Dublin Mean Time -# was among various actions undertaken by the 'English' government that -# would 'put the whole country into the SF (Sinn Féin) camp'. She claimed -# Irish 'public feeling (was) outraged by forcing of English time on us'." 
-# -- Parsons M. Dublin lost its time zone - and 25 minutes - after 1916 Rising.
-# Irish Times 2014-10-27.
-# https://www.irishtimes.com/news/politics/dublin-lost-its-time-zone-and-25-minutes-after-1916-rising-1.1977411

-# From Joseph S. Myers (2005-01-26):
-# Irish laws are available online at www.irishstatutebook.ie.
-# These include various acts relating to legal time, for example:
-#
-# ZZA13Y1923.html ZZA12Y1924.html ZZA8Y1925.html ZZSIV20PG1267.html
-#
-# ZZSI71Y1947.html ZZSI128Y1948.html ZZSI23Y1949.html ZZSI41Y1950.html
-# ZZSI27Y1951.html ZZSI73Y1952.html
-#
-# ZZSI11Y1961.html ZZSI232Y1961.html ZZSI182Y1962.html
-# ZZSI167Y1963.html ZZSI257Y1964.html ZZSI198Y1967.html
-# ZZA23Y1968.html ZZA17Y1971.html
-#
-# ZZSI67Y1981.html ZZSI212Y1982.html ZZSI45Y1986.html
-# ZZSI264Y1988.html ZZSI52Y1990.html ZZSI371Y1992.html
-# ZZSI395Y1994.html ZZSI484Y1997.html ZZSI506Y2001.html
-#
-# [These are all relative to the root, e.g., the first is
-# www.irishstatutebook.ie/ZZA13Y1923.html.]
-#
-# (These are those I found, but there could be more. In any case these
-# should allow various updates to the comments in the europe file to cover
-# the laws applicable in Ireland.)
-#
-# (Note that the time in the Republic of Ireland since 1968 has been defined
-# in terms of standard time being GMT+1 with a period of winter time when it
-# is GMT, rather than standard time being GMT with a period of summer time
-# being GMT+1.)

-# From Paul Eggert (1999-03-28):
-# Clive Feather (1997-03-31)
-# reports that Folkestone (Cheriton) Shuttle Terminal uses Concession Time
-# (CT), equivalent to French civil time.
-# Julian Hill (1998-09-30) reports that
-# trains between Dollands Moor (the freight facility next door)
-# and Frethun run in CT.
-# My admittedly uninformed guess is that the terminal has two authorities,
-# the French concession operators and the British civil authorities,
-# and that the time depends on who you're talking to.
-# If, say, the British police were called to the station for some reason,
-# I would expect the official police report to use GMT/BST and not CET/CEST.
-# This is a borderline case, but for now let's stick to GMT/BST.

-# From an anonymous contributor (1996-06-02):
-# The law governing time in Ireland is under Statutory Instrument SI 395/94,
-# which gives force to European Union 7th Council Directive No. 94/21/EC.
-# Under this directive, the Minister for Justice in Ireland makes appropriate
-# regulations. I spoke this morning with the Secretary of the Department of
-# Justice (tel +353 1 678 9711) who confirmed to me that the correct name is
-# "Irish Summer Time", abbreviated to "IST".

-# Michael Deckers (2017-06-01) gave the following URLs for Ireland's
-# Summer Time Act, 1925 and Summer Time Orders, 1926 and 1947:
-# http://www.irishstatutebook.ie/eli/1925/act/8/enacted/en/print.html
-# http://www.irishstatutebook.ie/eli/1926/sro/919/made/en/print.html
-# http://www.irishstatutebook.ie/eli/1947/sro/71/made/en/print.html

-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-# Summer Time Act, 1916
-Rule GB-Eire 1916 only - May 21 2:00s 1:00 BST
-Rule GB-Eire 1916 only - Oct 1 2:00s 0 GMT
-# S.R.&O. 1917, No. 358
-Rule GB-Eire 1917 only - Apr 8 2:00s 1:00 BST
-Rule GB-Eire 1917 only - Sep 17 2:00s 0 GMT
-# S.R.&O. 1918, No. 274
-Rule GB-Eire 1918 only - Mar 24 2:00s 1:00 BST
-Rule GB-Eire 1918 only - Sep 30 2:00s 0 GMT
-# S.R.&O. 1919, No. 297
-Rule GB-Eire 1919 only - Mar 30 2:00s 1:00 BST
-Rule GB-Eire 1919 only - Sep 29 2:00s 0 GMT
-# S.R.&O. 1920, No. 458
-Rule GB-Eire 1920 only - Mar 28 2:00s 1:00 BST
-# S.R.&O.
1920, No. 1844 -Rule GB-Eire 1920 only - Oct 25 2:00s 0 GMT -# S.R.&O. 1921, No. 363 -Rule GB-Eire 1921 only - Apr 3 2:00s 1:00 BST -Rule GB-Eire 1921 only - Oct 3 2:00s 0 GMT -# S.R.&O. 1922, No. 264 -Rule GB-Eire 1922 only - Mar 26 2:00s 1:00 BST -Rule GB-Eire 1922 only - Oct 8 2:00s 0 GMT -# The Summer Time Act, 1922 -Rule GB-Eire 1923 only - Apr Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1923 1924 - Sep Sun>=16 2:00s 0 GMT -Rule GB-Eire 1924 only - Apr Sun>=9 2:00s 1:00 BST -Rule GB-Eire 1925 1926 - Apr Sun>=16 2:00s 1:00 BST -# The Summer Time Act, 1925 -Rule GB-Eire 1925 1938 - Oct Sun>=2 2:00s 0 GMT -Rule GB-Eire 1927 only - Apr Sun>=9 2:00s 1:00 BST -Rule GB-Eire 1928 1929 - Apr Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1930 only - Apr Sun>=9 2:00s 1:00 BST -Rule GB-Eire 1931 1932 - Apr Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1933 only - Apr Sun>=9 2:00s 1:00 BST -Rule GB-Eire 1934 only - Apr Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1935 only - Apr Sun>=9 2:00s 1:00 BST -Rule GB-Eire 1936 1937 - Apr Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1938 only - Apr Sun>=9 2:00s 1:00 BST -Rule GB-Eire 1939 only - Apr Sun>=16 2:00s 1:00 BST -# S.R.&O. 1939, No. 1379 -Rule GB-Eire 1939 only - Nov Sun>=16 2:00s 0 GMT -# S.R.&O. 1940, No. 172 and No. 1883 -Rule GB-Eire 1940 only - Feb Sun>=23 2:00s 1:00 BST -# S.R.&O. 1941, No. 476 -Rule GB-Eire 1941 only - May Sun>=2 1:00s 2:00 BDST -Rule GB-Eire 1941 1943 - Aug Sun>=9 1:00s 1:00 BST -# S.R.&O. 1942, No. 506 -Rule GB-Eire 1942 1944 - Apr Sun>=2 1:00s 2:00 BDST -# S.R.&O. 1944, No. 932 -Rule GB-Eire 1944 only - Sep Sun>=16 1:00s 1:00 BST -# S.R.&O. 1945, No. 312 -Rule GB-Eire 1945 only - Apr Mon>=2 1:00s 2:00 BDST -Rule GB-Eire 1945 only - Jul Sun>=9 1:00s 1:00 BST -# S.R.&O. 1945, No. 1208 -Rule GB-Eire 1945 1946 - Oct Sun>=2 2:00s 0 GMT -Rule GB-Eire 1946 only - Apr Sun>=9 2:00s 1:00 BST -# The Summer Time Act, 1947 -Rule GB-Eire 1947 only - Mar 16 2:00s 1:00 BST -Rule GB-Eire 1947 only - Apr 13 1:00s 2:00 BDST -Rule GB-Eire 1947 only - Aug 10 1:00s 1:00 BST -Rule GB-Eire 1947 only - Nov 2 2:00s 0 GMT -# Summer Time Order, 1948 (S.I. 1948/495) -Rule GB-Eire 1948 only - Mar 14 2:00s 1:00 BST -Rule GB-Eire 1948 only - Oct 31 2:00s 0 GMT -# Summer Time Order, 1949 (S.I. 1949/373) -Rule GB-Eire 1949 only - Apr 3 2:00s 1:00 BST -Rule GB-Eire 1949 only - Oct 30 2:00s 0 GMT -# Summer Time Order, 1950 (S.I. 1950/518) -# Summer Time Order, 1951 (S.I. 1951/430) -# Summer Time Order, 1952 (S.I. 1952/451) -Rule GB-Eire 1950 1952 - Apr Sun>=14 2:00s 1:00 BST -Rule GB-Eire 1950 1952 - Oct Sun>=21 2:00s 0 GMT -# revert to the rules of the Summer Time Act, 1925 -Rule GB-Eire 1953 only - Apr Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1953 1960 - Oct Sun>=2 2:00s 0 GMT -Rule GB-Eire 1954 only - Apr Sun>=9 2:00s 1:00 BST -Rule GB-Eire 1955 1956 - Apr Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1957 only - Apr Sun>=9 2:00s 1:00 BST -Rule GB-Eire 1958 1959 - Apr Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1960 only - Apr Sun>=9 2:00s 1:00 BST -# Summer Time Order, 1961 (S.I. 1961/71) -# Summer Time (1962) Order, 1961 (S.I. 1961/2465) -# Summer Time Order, 1963 (S.I. 1963/81) -Rule GB-Eire 1961 1963 - Mar lastSun 2:00s 1:00 BST -Rule GB-Eire 1961 1968 - Oct Sun>=23 2:00s 0 GMT -# Summer Time (1964) Order, 1963 (S.I. 1963/2101) -# Summer Time Order, 1964 (S.I. 1964/1201) -# Summer Time Order, 1967 (S.I. 1967/1148) -Rule GB-Eire 1964 1967 - Mar Sun>=19 2:00s 1:00 BST -# Summer Time Order, 1968 (S.I. 
1968/117) -Rule GB-Eire 1968 only - Feb 18 2:00s 1:00 BST -# The British Standard Time Act, 1968 -# (no summer time) -# The Summer Time Act, 1972 -Rule GB-Eire 1972 1980 - Mar Sun>=16 2:00s 1:00 BST -Rule GB-Eire 1972 1980 - Oct Sun>=23 2:00s 0 GMT -# Summer Time Order, 1980 (S.I. 1980/1089) -# Summer Time Order, 1982 (S.I. 1982/1673) -# Summer Time Order, 1986 (S.I. 1986/223) -# Summer Time Order, 1988 (S.I. 1988/931) -Rule GB-Eire 1981 1995 - Mar lastSun 1:00u 1:00 BST -Rule GB-Eire 1981 1989 - Oct Sun>=23 1:00u 0 GMT -# Summer Time Order, 1989 (S.I. 1989/985) -# Summer Time Order, 1992 (S.I. 1992/1729) -# Summer Time Order 1994 (S.I. 1994/2798) -Rule GB-Eire 1990 1995 - Oct Sun>=22 1:00u 0 GMT -# Summer Time Order 1997 (S.I. 1997/2982) -# See EU for rules starting in 1996. -# -# Use Europe/London for Jersey, Guernsey, and the Isle of Man. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Europe/London -0:01:15 - LMT 1847 Dec 1 0:00s - 0:00 GB-Eire %s 1968 Oct 27 - 1:00 - BST 1971 Oct 31 2:00u - 0:00 GB-Eire %s 1996 - 0:00 EU GMT/BST -Link Europe/London Europe/Jersey -Link Europe/London Europe/Guernsey -Link Europe/London Europe/Isle_of_Man - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Europe/Dublin -0:25:00 - LMT 1880 Aug 2 - -0:25:21 - DMT 1916 May 21 2:00s # Dublin MT - -0:25:21 1:00 IST 1916 Oct 1 2:00s - 0:00 GB-Eire %s 1921 Dec 6 # independence - 0:00 GB-Eire GMT/IST 1940 Feb 25 2:00s - 0:00 1:00 IST 1946 Oct 6 2:00s - 0:00 - GMT 1947 Mar 16 2:00s - 0:00 1:00 IST 1947 Nov 2 2:00s - 0:00 - GMT 1948 Apr 18 2:00s - 0:00 GB-Eire GMT/IST 1968 Oct 27 - 1:00 - IST 1971 Oct 31 2:00u - 0:00 GB-Eire GMT/IST 1996 - 0:00 EU GMT/IST - -############################################################################### - -# Europe - -# EU rules are for the European Union, previously known as the EC, EEC, -# Common Market, etc. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule EU 1977 1980 - Apr Sun>=1 1:00u 1:00 S -Rule EU 1977 only - Sep lastSun 1:00u 0 - -Rule EU 1978 only - Oct 1 1:00u 0 - -Rule EU 1979 1995 - Sep lastSun 1:00u 0 - -Rule EU 1981 max - Mar lastSun 1:00u 1:00 S -Rule EU 1996 max - Oct lastSun 1:00u 0 - -# The most recent directive covers the years starting in 2002. See: -# Directive 2000/84/EC of the European Parliament and of the Council -# of 19 January 2001 on summer-time arrangements. -# http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000L0084:EN:NOT - -# W-Eur differs from EU only in that W-Eur uses standard time. -Rule W-Eur 1977 1980 - Apr Sun>=1 1:00s 1:00 S -Rule W-Eur 1977 only - Sep lastSun 1:00s 0 - -Rule W-Eur 1978 only - Oct 1 1:00s 0 - -Rule W-Eur 1979 1995 - Sep lastSun 1:00s 0 - -Rule W-Eur 1981 max - Mar lastSun 1:00s 1:00 S -Rule W-Eur 1996 max - Oct lastSun 1:00s 0 - - -# Older C-Eur rules are for convenience in the tables. -# From 1977 on, C-Eur differs from EU only in that C-Eur uses standard time. -Rule C-Eur 1916 only - Apr 30 23:00 1:00 S -Rule C-Eur 1916 only - Oct 1 1:00 0 - -Rule C-Eur 1917 1918 - Apr Mon>=15 2:00s 1:00 S -Rule C-Eur 1917 1918 - Sep Mon>=15 2:00s 0 - -Rule C-Eur 1940 only - Apr 1 2:00s 1:00 S -Rule C-Eur 1942 only - Nov 2 2:00s 0 - -Rule C-Eur 1943 only - Mar 29 2:00s 1:00 S -Rule C-Eur 1943 only - Oct 4 2:00s 0 - -Rule C-Eur 1944 1945 - Apr Mon>=1 2:00s 1:00 S -# Whitman gives 1944 Oct 7; go with Shanks & Pottenger. 
-Rule C-Eur 1944 only - Oct 2 2:00s 0 - -# From Jesper Nørgaard Welen (2008-07-13): -# -# I found what is probably a typo of 2:00 which should perhaps be 2:00s -# in the C-Eur rule from tz database version 2008d (this part was -# corrected in version 2008d). The circumstantial evidence is simply the -# tz database itself, as seen below: -# -# Zone Europe/Paris 0:09:21 - LMT 1891 Mar 15 0:01 -# 0:00 France WE%sT 1945 Sep 16 3:00 -# -# Zone Europe/Monaco 0:29:32 - LMT 1891 Mar 15 -# 0:00 France WE%sT 1945 Sep 16 3:00 -# -# Zone Europe/Belgrade 1:22:00 - LMT 1884 -# 1:00 1:00 CEST 1945 Sep 16 2:00s -# -# Rule France 1945 only - Sep 16 3:00 0 - -# Rule Belgium 1945 only - Sep 16 2:00s 0 - -# Rule Neth 1945 only - Sep 16 2:00s 0 - -# -# The rule line to be changed is: -# -# Rule C-Eur 1945 only - Sep 16 2:00 0 - -# -# It seems that Paris, Monaco, Rule France, Rule Belgium all agree on -# 2:00 standard time, e.g. 3:00 local time. However there are no -# countries that use C-Eur rules in September 1945, so the only items -# affected are apparently these fictitious zones that translate acronyms -# CET and MET: -# -# Zone CET 1:00 C-Eur CE%sT -# Zone MET 1:00 C-Eur ME%sT -# -# It this is right then the corrected version would look like: -# -# Rule C-Eur 1945 only - Sep 16 2:00s 0 - -# -# A small step for mankind though 8-) -Rule C-Eur 1945 only - Sep 16 2:00s 0 - -Rule C-Eur 1977 1980 - Apr Sun>=1 2:00s 1:00 S -Rule C-Eur 1977 only - Sep lastSun 2:00s 0 - -Rule C-Eur 1978 only - Oct 1 2:00s 0 - -Rule C-Eur 1979 1995 - Sep lastSun 2:00s 0 - -Rule C-Eur 1981 max - Mar lastSun 2:00s 1:00 S -Rule C-Eur 1996 max - Oct lastSun 2:00s 0 - - -# E-Eur differs from EU only in that E-Eur switches at midnight local time. -Rule E-Eur 1977 1980 - Apr Sun>=1 0:00 1:00 S -Rule E-Eur 1977 only - Sep lastSun 0:00 0 - -Rule E-Eur 1978 only - Oct 1 0:00 0 - -Rule E-Eur 1979 1995 - Sep lastSun 0:00 0 - -Rule E-Eur 1981 max - Mar lastSun 0:00 1:00 S -Rule E-Eur 1996 max - Oct lastSun 0:00 0 - - - -# Daylight saving time for Russia and the Soviet Union -# -# The 1917-1921 decree URLs are from Alexander Belopolsky (2016-08-23). - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Russia 1917 only - Jul 1 23:00 1:00 MST # Moscow Summer Time -# -# Decree No. 142 (1917-12-22) http://istmat.info/node/28137 -Rule Russia 1917 only - Dec 28 0:00 0 MMT # Moscow Mean Time -# -# Decree No. 497 (1918-05-30) http://istmat.info/node/30001 -Rule Russia 1918 only - May 31 22:00 2:00 MDST # Moscow Double Summer Time -Rule Russia 1918 only - Sep 16 1:00 1:00 MST -# -# Decree No. 258 (1919-05-29) http://istmat.info/node/37949 -Rule Russia 1919 only - May 31 23:00 2:00 MDST -# -Rule Russia 1919 only - Jul 1 0:00u 1:00 MSD -Rule Russia 1919 only - Aug 16 0:00 0 MSK -# -# Decree No. 63 (1921-02-03) http://istmat.info/node/45840 -Rule Russia 1921 only - Feb 14 23:00 1:00 MSD -# -# Decree No. 121 (1921-03-07) http://istmat.info/node/45949 -Rule Russia 1921 only - Mar 20 23:00 2:00 +05 -# -Rule Russia 1921 only - Sep 1 0:00 1:00 MSD -Rule Russia 1921 only - Oct 1 0:00 0 - -# Act No. 925 of the Council of Ministers of the USSR (1980-10-24): -Rule Russia 1981 1984 - Apr 1 0:00 1:00 S -Rule Russia 1981 1983 - Oct 1 0:00 0 - -# Act No. 967 of the Council of Ministers of the USSR (1984-09-13), repeated in -# Act No. 
227 of the Council of Ministers of the USSR (1989-03-14): -Rule Russia 1984 1995 - Sep lastSun 2:00s 0 - -Rule Russia 1985 2010 - Mar lastSun 2:00s 1:00 S -# -Rule Russia 1996 2010 - Oct lastSun 2:00s 0 - -# As described below, Russia's 2014 change affects Zone data, not Rule data. - -# From Stepan Golosunov (2016-03-07): -# Wikipedia and other sources refer to the Act of the Council of -# Ministers of the USSR from 1988-01-04 No. 5 and the Act of the -# Council of Ministers of the USSR from 1989-03-14 No. 227. -# -# I did not find full texts of these acts. For the 1989 one we have -# title at https://base.garant.ru/70754136/ : -# "About change in calculation of time on the territories of -# Lithuanian SSR, Latvian SSR and Estonian SSR, Astrakhan, -# Kaliningrad, Kirov, Kuybyshev, Ulyanovsk and Uralsk oblasts". -# And http://astrozet.net/files/Zones/DOC/RU/1980-925.txt appears to -# contain quotes from both acts: Since last Sunday of March 1988 rules -# of the second time belt are installed in Volgograd and Saratov -# oblasts. Since last Sunday of March 1989: -# a) Lithuanian SSR, Latvian SSR, Estonian SSR, Kaliningrad oblast: -# second time belt rules without extra hour (Moscow-1); -# b) Astrakhan, Kirov, Kuybyshev, Ulyanovsk oblasts: second time belt -# rules (Moscow time) -# c) Uralsk oblast: third time belt rules (Moscow+1). - -# From Stepan Golosunov (2016-03-27): -# Unamended version of the act of the -# Government of the Russian Federation No. 23 from 08.01.1992 -# http://pravo.gov.ru/proxy/ips/?docbody=&nd=102014034&rdk=0 -# says that every year clocks were to be moved forward on last Sunday -# of March at 2 hours and moved backwards on last Sunday of September -# at 3 hours. It was amended in 1996 to replace September with October. - -# From Alexander Krivenyshev (2011-06-14): -# According to Kremlin press service, Russian President Dmitry Medvedev -# signed a federal law "On calculation of time" on June 9, 2011. -# According to the law Russia is abolishing daylight saving time. -# -# Medvedev signed a law "On the Calculation of Time" (in russian): -# http://bmockbe.ru/events/?ID=7583 -# -# Medvedev signed a law on the calculation of the time (in russian): -# https://www.regnum.ru/news/polit/1413906.html - -# From Arthur David Olson (2011-06-15): -# Take "abolishing daylight saving time" to mean that time is now considered -# to be standard. - -# These are for backward compatibility with older versions. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone WET 0:00 EU WE%sT -Zone CET 1:00 C-Eur CE%sT -Zone MET 1:00 C-Eur ME%sT -Zone EET 2:00 EU EE%sT - -# Previous editions of this database used abbreviations like MET DST -# for Central European Summer Time, but this didn't agree with common usage. - -# From Markus Kuhn (1996-07-12): -# The official German names ... are -# -# Mitteleuropäische Zeit (MEZ) = UTC+01:00 -# Mitteleuropäische Sommerzeit (MESZ) = UTC+02:00 -# -# as defined in the German Time Act (Gesetz über die Zeitbestimmung (ZeitG), -# 1978-07-25, Bundesgesetzblatt, Jahrgang 1978, Teil I, S. 1110-1111).... -# I wrote ... to the German Federal Physical-Technical Institution -# -# Physikalisch-Technische Bundesanstalt (PTB) -# Laboratorium 4.41 "Zeiteinheit" -# Postfach 3345 -# D-38023 Braunschweig -# phone: +49 531 592-0 -# -# ... I received today an answer letter from Dr. Peter Hetzel, head of the PTB -# department for time and frequency transmission. 
-# PTB translates MEZ and MESZ into English as
-#
-# Central European Time (CET) = UTC+01:00
-# Central European Summer Time (CEST) = UTC+02:00
-
-
-# Albania
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Albania 1940 only - Jun 16 0:00 1:00 S
-Rule Albania 1942 only - Nov 2 3:00 0 -
-Rule Albania 1943 only - Mar 29 2:00 1:00 S
-Rule Albania 1943 only - Apr 10 3:00 0 -
-Rule Albania 1974 only - May 4 0:00 1:00 S
-Rule Albania 1974 only - Oct 2 0:00 0 -
-Rule Albania 1975 only - May 1 0:00 1:00 S
-Rule Albania 1975 only - Oct 2 0:00 0 -
-Rule Albania 1976 only - May 2 0:00 1:00 S
-Rule Albania 1976 only - Oct 3 0:00 0 -
-Rule Albania 1977 only - May 8 0:00 1:00 S
-Rule Albania 1977 only - Oct 2 0:00 0 -
-Rule Albania 1978 only - May 6 0:00 1:00 S
-Rule Albania 1978 only - Oct 1 0:00 0 -
-Rule Albania 1979 only - May 5 0:00 1:00 S
-Rule Albania 1979 only - Sep 30 0:00 0 -
-Rule Albania 1980 only - May 3 0:00 1:00 S
-Rule Albania 1980 only - Oct 4 0:00 0 -
-Rule Albania 1981 only - Apr 26 0:00 1:00 S
-Rule Albania 1981 only - Sep 27 0:00 0 -
-Rule Albania 1982 only - May 2 0:00 1:00 S
-Rule Albania 1982 only - Oct 3 0:00 0 -
-Rule Albania 1983 only - Apr 18 0:00 1:00 S
-Rule Albania 1983 only - Oct 1 0:00 0 -
-Rule Albania 1984 only - Apr 1 0:00 1:00 S
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Tirane 1:19:20 - LMT 1914
- 1:00 - CET 1940 Jun 16
- 1:00 Albania CE%sT 1984 Jul
- 1:00 EU CE%sT
-
-# Andorra
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Andorra 0:06:04 - LMT 1901
- 0:00 - WET 1946 Sep 30
- 1:00 - CET 1985 Mar 31 2:00
- 1:00 EU CE%sT
-
-# Austria
-
-# Milne says Vienna time was 1:05:21.
-
-# From Paul Eggert (2006-03-22): Shanks & Pottenger give 1918-06-16 and
-# 1945-11-18, but the Austrian Federal Office of Metrology and
-# Surveying (BEV) gives 1918-09-16 and for Vienna gives the "alleged"
-# date of 1945-04-12 with no time. For the 1980-04-06 transition
-# Shanks & Pottenger give 02:00, the BEV 00:00. Go with the BEV,
-# and guess 02:00 for 1945-04-12.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Austria 1920 only - Apr 5 2:00s 1:00 S
-Rule Austria 1920 only - Sep 13 2:00s 0 -
-Rule Austria 1946 only - Apr 14 2:00s 1:00 S
-Rule Austria 1946 1948 - Oct Sun>=1 2:00s 0 -
-Rule Austria 1947 only - Apr 6 2:00s 1:00 S
-Rule Austria 1948 only - Apr 18 2:00s 1:00 S
-Rule Austria 1980 only - Apr 6 0:00 1:00 S
-Rule Austria 1980 only - Sep 28 0:00 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Vienna 1:05:21 - LMT 1893 Apr
- 1:00 C-Eur CE%sT 1920
- 1:00 Austria CE%sT 1940 Apr 1 2:00s
- 1:00 C-Eur CE%sT 1945 Apr 2 2:00s
- 1:00 1:00 CEST 1945 Apr 12 2:00s
- 1:00 - CET 1946
- 1:00 Austria CE%sT 1981
- 1:00 EU CE%sT
-
-# Belarus
-#
-# From Stepan Golosunov (2016-07-02):
-# http://www.lawbelarus.com/repub/sub30/texf9611.htm
-# (Act of the Cabinet of Ministers of the Republic of Belarus from
-# 1992-03-25 No. 157) ... says clocks were to be moved forward at 2:00
-# on last Sunday of March and backward at 3:00 on last Sunday of September
-# (the same as previous USSR and contemporary Russian regulations).
-#
-# From Yauhen Kharuzhy (2011-09-16):
-# By latest Belarus government act Europe/Minsk timezone was changed to
-# GMT+3 without DST (was GMT+2 with DST).
-#
-# Sources (Russian language):
-# http://www.belta.by/ru/all_news/society/V-Belarusi-otmenjaetsja-perexod-na-sezonnoe-vremja_i_572952.html
-# http://naviny.by/rubrics/society/2011/09/16/ic_articles_116_175144/
-# https://news.tut.by/society/250578.html
-#
-# From Alexander Bokovoy (2014-10-09):
-# Belarussian government decided against changing to winter time....
-# http://eng.belta.by/all_news/society/Belarus-decides-against-adjusting-time-in-Russias-wake_i_76335.html
-#
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Minsk 1:50:16 - LMT 1880
- 1:50 - MMT 1924 May 2 # Minsk Mean Time
- 2:00 - EET 1930 Jun 21
- 3:00 - MSK 1941 Jun 28
- 1:00 C-Eur CE%sT 1944 Jul 3
- 3:00 Russia MSK/MSD 1990
- 3:00 - MSK 1991 Mar 31 2:00s
- 2:00 Russia EE%sT 2011 Mar 27 2:00s
- 3:00 - +03
-
-# Belgium
-#
-# From Paul Eggert (1997-07-02):
-# Entries from 1918 through 1991 are taken from:
-# Annuaire de L'Observatoire Royal de Belgique,
-# Avenue Circulaire, 3, B-1180 BRUXELLES, CLVIIe année, 1991
-# (Imprimerie HAYEZ, s.p.r.l., Rue Fin, 4, 1080 BRUXELLES, MCMXC),
-# pp 8-9.
-# LMT before 1892 was 0:17:30, according to the official journal of Belgium:
-# Moniteur Belge, Samedi 30 Avril 1892, N.121.
-# Thanks to Pascal Delmoitie for these references.
-# The 1918 rules are listed for completeness; they apply to unoccupied Belgium.
-# Assume Brussels switched to WET in 1918 when the armistice took effect.
-#
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Belgium 1918 only - Mar 9 0:00s 1:00 S
-Rule Belgium 1918 1919 - Oct Sat>=1 23:00s 0 -
-Rule Belgium 1919 only - Mar 1 23:00s 1:00 S
-Rule Belgium 1920 only - Feb 14 23:00s 1:00 S
-Rule Belgium 1920 only - Oct 23 23:00s 0 -
-Rule Belgium 1921 only - Mar 14 23:00s 1:00 S
-Rule Belgium 1921 only - Oct 25 23:00s 0 -
-Rule Belgium 1922 only - Mar 25 23:00s 1:00 S
-Rule Belgium 1922 1927 - Oct Sat>=1 23:00s 0 -
-Rule Belgium 1923 only - Apr 21 23:00s 1:00 S
-Rule Belgium 1924 only - Mar 29 23:00s 1:00 S
-Rule Belgium 1925 only - Apr 4 23:00s 1:00 S
-# DSH writes that a royal decree of 1926-02-22 specified the Sun following 3rd
-# Sat in Apr (except if it's Easter, in which case it's one Sunday earlier),
-# to Sun following 1st Sat in Oct, and that a royal decree of 1928-09-15
-# changed the transition times to 02:00 GMT.
-Rule Belgium 1926 only - Apr 17 23:00s 1:00 S
-Rule Belgium 1927 only - Apr 9 23:00s 1:00 S
-Rule Belgium 1928 only - Apr 14 23:00s 1:00 S
-Rule Belgium 1928 1938 - Oct Sun>=2 2:00s 0 -
-Rule Belgium 1929 only - Apr 21 2:00s 1:00 S
-Rule Belgium 1930 only - Apr 13 2:00s 1:00 S
-Rule Belgium 1931 only - Apr 19 2:00s 1:00 S
-Rule Belgium 1932 only - Apr 3 2:00s 1:00 S
-Rule Belgium 1933 only - Mar 26 2:00s 1:00 S
-Rule Belgium 1934 only - Apr 8 2:00s 1:00 S
-Rule Belgium 1935 only - Mar 31 2:00s 1:00 S
-Rule Belgium 1936 only - Apr 19 2:00s 1:00 S
-Rule Belgium 1937 only - Apr 4 2:00s 1:00 S
-Rule Belgium 1938 only - Mar 27 2:00s 1:00 S
-Rule Belgium 1939 only - Apr 16 2:00s 1:00 S
-Rule Belgium 1939 only - Nov 19 2:00s 0 -
-Rule Belgium 1940 only - Feb 25 2:00s 1:00 S
-Rule Belgium 1944 only - Sep 17 2:00s 0 -
-Rule Belgium 1945 only - Apr 2 2:00s 1:00 S
-Rule Belgium 1945 only - Sep 16 2:00s 0 -
-Rule Belgium 1946 only - May 19 2:00s 1:00 S
-Rule Belgium 1946 only - Oct 7 2:00s 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Brussels 0:17:30 - LMT 1880
- 0:17:30 - BMT 1892 May 1 12:00 # Brussels MT
- 0:00 - WET 1914 Nov 8
- 1:00 - CET 1916 May 1 0:00
- 1:00 C-Eur CE%sT 1918 Nov 11 11:00u
- 0:00 Belgium WE%sT 1940 May 20 2:00s
- 1:00 C-Eur CE%sT 1944 Sep 3
- 1:00 Belgium CE%sT 1977
- 1:00 EU CE%sT
-
-# Bosnia and Herzegovina
-# See Europe/Belgrade.
-
-# Bulgaria
-#
-# From Plamen Simenov via Steffen Thorsen (1999-09-09):
-# A document of Government of Bulgaria (No. 94/1997) says:
-# EET -> EETDST is in 03:00 Local time in last Sunday of March ...
-# EETDST -> EET is in 04:00 Local time in last Sunday of October
-#
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Bulg 1979 only - Mar 31 23:00 1:00 S
-Rule Bulg 1979 only - Oct 1 1:00 0 -
-Rule Bulg 1980 1982 - Apr Sat>=1 23:00 1:00 S
-Rule Bulg 1980 only - Sep 29 1:00 0 -
-Rule Bulg 1981 only - Sep 27 2:00 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Sofia 1:33:16 - LMT 1880
- 1:56:56 - IMT 1894 Nov 30 # Istanbul MT?
- 2:00 - EET 1942 Nov 2 3:00
- 1:00 C-Eur CE%sT 1945
- 1:00 - CET 1945 Apr 2 3:00
- 2:00 - EET 1979 Mar 31 23:00
- 2:00 Bulg EE%sT 1982 Sep 26 3:00
- 2:00 C-Eur EE%sT 1991
- 2:00 E-Eur EE%sT 1997
- 2:00 EU EE%sT
-
-# Croatia
-# See Europe/Belgrade.
-
-# Cyprus
-# Please see the 'asia' file for Asia/Nicosia.
-
-# Czech Republic / Czechia
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Czech 1945 only - Apr 8 2:00s 1:00 S
-Rule Czech 1945 only - Nov 18 2:00s 0 -
-Rule Czech 1946 only - May 6 2:00s 1:00 S
-Rule Czech 1946 1949 - Oct Sun>=1 2:00s 0 -
-Rule Czech 1947 only - Apr 20 2:00s 1:00 S
-Rule Czech 1948 only - Apr 18 2:00s 1:00 S
-Rule Czech 1949 only - Apr 9 2:00s 1:00 S
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Prague 0:57:44 - LMT 1850
- 0:57:44 - PMT 1891 Oct # Prague Mean Time
- 1:00 C-Eur CE%sT 1944 Sep 17 2:00s
- 1:00 Czech CE%sT 1979
- 1:00 EU CE%sT
-# Use Europe/Prague also for Slovakia.
-
-# Denmark, Faroe Islands, and Greenland
-
-# From Jesper Nørgaard Welen (2005-04-26):
-# http://www.hum.aau.dk/~poe/tid/tine/DanskTid.htm says that the law
-# [introducing standard time] was in effect from 1894-01-01....
-# The page http://www.retsinfo.dk/_GETDOCI_/ACCN/A18930008330-REGL
-# confirms this, and states that the law was put forth 1893-03-29.
-#
-# The EU treaty with effect from 1973:
-# http://www.retsinfo.dk/_GETDOCI_/ACCN/A19722110030-REGL
-#
-# This provoked a new law from 1974 to make possible summer time changes
-# in subsequent decrees with the law
-# http://www.retsinfo.dk/_GETDOCI_/ACCN/A19740022330-REGL
-#
-# It seems however that no decree was set forward until 1980. I have
-# not found any decree, but in another related law, the effecting DST
-# changes are stated explicitly to be from 1980-04-06 at 02:00 to
-# 1980-09-28 at 02:00. If this is true, this differs slightly from
-# the EU rule in that DST runs to 02:00, not 03:00. We don't know
-# when Denmark began using the EU rule correctly, but we have only
-# confirmation of the 1980-time, so I presume it was correct in 1981:
-# The law is about the management of the extra hour, concerning
-# working hours reported and effect on obligatory-rest rules (which
-# was suspended on that night):
-# http://www.retsinfo.dk/_GETDOCI_/ACCN/C19801120554-REGL
-
-# From Jesper Nørgaard Welen (2005-06-11):
-# The Herning Folkeblad (1980-09-26) reported that the night between
-# Saturday and Sunday the clock is set back from three to two.
-
-# From Paul Eggert (2005-06-11):
-# Hence the "02:00" of the 1980 law refers to standard time, not
-# wall-clock time, and so the EU rules were in effect in 1980.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Denmark 1916 only - May 14 23:00 1:00 S
-Rule Denmark 1916 only - Sep 30 23:00 0 -
-Rule Denmark 1940 only - May 15 0:00 1:00 S
-Rule Denmark 1945 only - Apr 2 2:00s 1:00 S
-Rule Denmark 1945 only - Aug 15 2:00s 0 -
-Rule Denmark 1946 only - May 1 2:00s 1:00 S
-Rule Denmark 1946 only - Sep 1 2:00s 0 -
-Rule Denmark 1947 only - May 4 2:00s 1:00 S
-Rule Denmark 1947 only - Aug 10 2:00s 0 -
-Rule Denmark 1948 only - May 9 2:00s 1:00 S
-Rule Denmark 1948 only - Aug 8 2:00s 0 -
-#
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Copenhagen 0:50:20 - LMT 1890
- 0:50:20 - CMT 1894 Jan 1 # Copenhagen MT
- 1:00 Denmark CE%sT 1942 Nov 2 2:00s
- 1:00 C-Eur CE%sT 1945 Apr 2 2:00
- 1:00 Denmark CE%sT 1980
- 1:00 EU CE%sT
-Zone Atlantic/Faroe -0:27:04 - LMT 1908 Jan 11 # Tórshavn
- 0:00 - WET 1981
- 0:00 EU WE%sT
-#
-# From Paul Eggert (2004-10-31):
-# During World War II, Germany maintained secret manned weather stations in
-# East Greenland and Franz Josef Land, but we don't know their time zones.
-# My source for this is Wilhelm Dege's book mentioned under Svalbard.
-#
-# From Paul Eggert (2006-03-22):
-# Greenland joined the EU as part of Denmark, obtained home rule on 1979-05-01,
-# and left the EU on 1985-02-01. It therefore should have been using EU
-# rules at least through 1984. Shanks & Pottenger say Scoresbysund and Godthåb
-# used C-Eur rules after 1980, but IATA SSIM (1991/1996) says they use EU
-# rules since at least 1991. Assume EU rules since 1980.
-
-# From Gwillim Law (2001-06-06), citing
-# (2001-03-15),
-# and with translations corrected by Steffen Thorsen:
-#
-# Greenland has four local times, and the relation to UTC
-# is according to the following time line:
-#
-# The military zone near Thule UTC-4
-# Standard Greenland time UTC-3
-# Scoresbysund UTC-1
-# Danmarkshavn UTC
-#
-# In the military area near Thule and in Danmarkshavn DST will not be
-# introduced.
-
-# From Rives McDow (2001-11-01):
-#
-# I correspond regularly with the Dansk Polarcenter, and wrote them at
-# the time to clarify the situation in Thule. Unfortunately, I have
-# not heard back from them regarding my recent letter. [But I have
-# info from earlier correspondence.]
-#
-# According to the center, a very small local time zone around Thule
-# Air Base keeps the time according to UTC-4, implementing daylight
-# savings using North America rules, changing the time at 02:00 local time....
-#
-# The east coast of Greenland north of the community of Scoresbysund
-# uses UTC in the same way as in Iceland, year round, with no dst.
-# There are just a few stations on this coast, including the
-# Danmarkshavn ICAO weather station mentioned in your September 29th
-# email. The other stations are two sledge patrol stations in
-# Mestersvig and Daneborg, the air force base at Station Nord, and the
-# DPC research station at Zackenberg.
-#
-# Scoresbysund and two small villages nearby keep time UTC-1 and use
-# the same daylight savings time period as in West Greenland (Godthåb).
-#
-# The rest of Greenland, including Godthåb (this area, although it
-# includes central Greenland, is known as west Greenland), keeps time
-# UTC-3, with daylight savings methods according to European rules.
-#
-# It is common procedure to use UTC 0 in the wilderness of East and
-# North Greenland, because it is mainly Icelandic aircraft operators
-# maintaining traffic in these areas. However, the official status of
-# this area is that it sticks with Godthåb time. This area might be
-# considered a dual time zone in some respects because of this.
-
-# From Rives McDow (2001-11-19):
-# I heard back from someone stationed at Thule; the time change took place
-# there at 2:00 AM.
-
-# From Paul Eggert (2006-03-22):
-# From 1997 on the CIA map shows Danmarkshavn on GMT;
-# the 1995 map as like Godthåb.
-# For lack of better info, assume they were like Godthåb before 1996.
-# startkart.no says Thule does not observe DST, but this is clearly an error,
-# so go with Shanks & Pottenger for Thule transitions until this year.
-# For 2007 on assume Thule will stay in sync with US DST rules.
-
-# From J William Piggott (2016-02-20):
-# "Greenland north of the community of Scoresbysund" is officially named
-# "National Park" by Executive Order:
-# http://naalakkersuisut.gl/~/media/Nanoq/Files/Attached%20Files/Engelske-tekster/Legislation/Executive%20Order%20National%20Park.rtf
-# It is their only National Park.
-#
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Thule 1991 1992 - Mar lastSun 2:00 1:00 D
-Rule Thule 1991 1992 - Sep lastSun 2:00 0 S
-Rule Thule 1993 2006 - Apr Sun>=1 2:00 1:00 D
-Rule Thule 1993 2006 - Oct lastSun 2:00 0 S
-Rule Thule 2007 max - Mar Sun>=8 2:00 1:00 D
-Rule Thule 2007 max - Nov Sun>=1 2:00 0 S
-#
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone America/Danmarkshavn -1:14:40 - LMT 1916 Jul 28
- -3:00 - -03 1980 Apr 6 2:00
- -3:00 EU -03/-02 1996
- 0:00 - GMT
-Zone America/Scoresbysund -1:27:52 - LMT 1916 Jul 28 # Ittoqqortoormiit
- -2:00 - -02 1980 Apr 6 2:00
- -2:00 C-Eur -02/-01 1981 Mar 29
- -1:00 EU -01/+00
-Zone America/Godthab -3:26:56 - LMT 1916 Jul 28 # Nuuk
- -3:00 - -03 1980 Apr 6 2:00
- -3:00 EU -03/-02
-Zone America/Thule -4:35:08 - LMT 1916 Jul 28 # Pituffik air base
- -4:00 Thule A%sT
-
-# Estonia
-#
-# From Paul Eggert (2016-03-18):
-# The 1989 transition is from USSR act No. 227 (1989-03-14).
-#
-# From Peter Ilieve (1994-10-15):
-# A relative in Tallinn confirms the accuracy of the data for 1989 onwards
-# [through 1994] and gives the legal authority for it,
-# a regulation of the Government of Estonia, No. 111 of 1989....
-#
-# From Peter Ilieve (1996-10-28):
-# [IATA SSIM (1992/1996) claims that the Baltic republics switch at 01:00s,
-# but a relative confirms that Estonia still switches at 02:00s, writing:]
-# "I do not [know] exactly but there are some little different
-# (confusing) rules for International Air and Railway Transport Schedules
-# conversion in Sunday connected with end of summer time in Estonia....
-# A discussion is running about the summer time efficiency and effect on
-# human physiology. It seems that Estonia maybe will not change to
-# summer time next spring."
-
-# From Peter Ilieve (1998-11-04), heavily edited:
-# The 1998-09-22 Estonian time law
-# http://trip.rk.ee/cgi-bin/thw?${BASE}=akt&${OOHTML}=rtd&TA=1998&TO=1&AN=1390
-# refers to the Eighth Directive and cites the association agreement between
-# the EU and Estonia, ratified by the Estonian law (RT II 1995, 22-27, 120).
-#
-# I also asked [my relative] whether they use any standard abbreviation
-# for their standard and summer times. He says no, they use "suveaeg"
-# (summer time) and "talveaeg" (winter time).
-
-# From The Baltic Times (1999-09-09)
-# via Steffen Thorsen:
-# This year will mark the last time Estonia shifts to summer time,
-# a council of the ruling coalition announced Sept. 6....
-# But what this could mean for Estonia's chances of joining the European
-# Union are still unclear. In 1994, the EU declared summer time compulsory
-# for all member states until 2001. Brussels has yet to decide what to do
-# after that.
-
-# From Mart Oruaas (2000-01-29):
-# Regulation No. 301 (1999-10-12) obsoletes previous regulation
-# No. 206 (1998-09-22) and thus sticks Estonia to +02:00 GMT for all
-# the year round. The regulation is effective 1999-11-01.
-
-# From Toomas Soome (2002-02-21):
-# The Estonian government has changed once again timezone politics.
-# Now we are using again EU rules.
-#
-# From Urmet Jänes (2002-03-28):
-# The legislative reference is Government decree No. 84 on 2002-02-21.
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Tallinn 1:39:00 - LMT 1880
- 1:39:00 - TMT 1918 Feb # Tallinn Mean Time
- 1:00 C-Eur CE%sT 1919 Jul
- 1:39:00 - TMT 1921 May
- 2:00 - EET 1940 Aug 6
- 3:00 - MSK 1941 Sep 15
- 1:00 C-Eur CE%sT 1944 Sep 22
- 3:00 Russia MSK/MSD 1989 Mar 26 2:00s
- 2:00 1:00 EEST 1989 Sep 24 2:00s
- 2:00 C-Eur EE%sT 1998 Sep 22
- 2:00 EU EE%sT 1999 Oct 31 4:00
- 2:00 - EET 2002 Feb 21
- 2:00 EU EE%sT
-
-# Finland
-
-# From Hannu Strang (1994-09-25 06:03:37 UTC):
-# Well, here in Helsinki we're just changing from summer time to regular one,
-# and it's supposed to change at 4am...
-
-# From Janne Snabb (2010-07-15):
-#
-# I noticed that the Finland data is not accurate for years 1981 and 1982.
-# During these two first trial years the DST adjustment was made one hour
-# earlier than in forthcoming years. Starting 1983 the adjustment was made
-# according to the central European standards.
-#
-# This is documented in Heikki Oja: Aikakirja 2007, published by The Almanac
-# Office of University of Helsinki, ISBN 952-10-3221-9, available online (in
-# Finnish) at
-# https://almanakka.helsinki.fi/aikakirja/Aikakirja2007kokonaan.pdf
-#
-# Page 105 (56 in PDF version) has a handy table of all past daylight savings
-# transitions. It is easy enough to interpret without Finnish skills.
-#
-# This is also confirmed by Finnish Broadcasting Company's archive at:
-# http://www.yle.fi/elavaarkisto/?s=s&g=1&ag=5&t=&a=3401
-#
-# The news clip from 1981 says that "the time between 2 and 3 o'clock does not
-# exist tonight."
-
-# From Konstantin Hyppönen (2014-06-13):
-# [Heikki Oja's book Aikakirja 2013]
-# https://almanakka.helsinki.fi/images/aikakirja/Aikakirja2013kokonaan.pdf
-# pages 104-105, including a scan from a newspaper published on Apr 2 1942
-# say that ... [o]n Apr 2 1942, 24 o'clock (which means Apr 3 1942,
-# 00:00), clocks were moved one hour forward. The newspaper
-# mentions "on the night from Thursday to Friday"....
-# On Oct 4 1942, clocks were moved at 1:00 one hour backwards.
-#
-# From Paul Eggert (2014-06-14):
-# Go with Oja over Shanks.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Finland 1942 only - Apr 2 24:00 1:00 S
-Rule Finland 1942 only - Oct 4 1:00 0 -
-Rule Finland 1981 1982 - Mar lastSun 2:00 1:00 S
-Rule Finland 1981 1982 - Sep lastSun 3:00 0 -
-
-# Milne says Helsinki (Helsingfors) time was 1:39:49.2 (official document);
-# round to nearest.
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Helsinki 1:39:49 - LMT 1878 May 31
- 1:39:49 - HMT 1921 May # Helsinki Mean Time
- 2:00 Finland EE%sT 1983
- 2:00 EU EE%sT
-
-# Åland Is
-Link Europe/Helsinki Europe/Mariehamn
-
-
-# France
-
-# From Ciro Discepolo (2000-12-20):
-#
-# Henri Le Corre, Régimes horaires pour le monde entier, Éditions
-# Traditionnelles - Paris 2 books, 1993
-#
-# Gabriel, Traité de l'heure dans le monde, Guy Trédaniel,
-# Paris, 1991
-#
-# Françoise Gauquelin, Problèmes de l'heure résolus en astrologie,
-# Guy Trédaniel, Paris 1987
-
-
-#
-# Shanks & Pottenger seem to use '24:00' ambiguously; resolve it with Whitman.
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule France 1916 only - Jun 14 23:00s 1:00 S
-Rule France 1916 1919 - Oct Sun>=1 23:00s 0 -
-Rule France 1917 only - Mar 24 23:00s 1:00 S
-Rule France 1918 only - Mar 9 23:00s 1:00 S
-Rule France 1919 only - Mar 1 23:00s 1:00 S
-Rule France 1920 only - Feb 14 23:00s 1:00 S
-Rule France 1920 only - Oct 23 23:00s 0 -
-Rule France 1921 only - Mar 14 23:00s 1:00 S
-Rule France 1921 only - Oct 25 23:00s 0 -
-Rule France 1922 only - Mar 25 23:00s 1:00 S
-# DSH writes that a law of 1923-05-24 specified 3rd Sat in Apr at 23:00 to 1st
-# Sat in Oct at 24:00; and that in 1930, because of Easter, the transitions
-# were Apr 12 and Oct 5. Go with Shanks & Pottenger.
-Rule France 1922 1938 - Oct Sat>=1 23:00s 0 -
-Rule France 1923 only - May 26 23:00s 1:00 S
-Rule France 1924 only - Mar 29 23:00s 1:00 S
-Rule France 1925 only - Apr 4 23:00s 1:00 S
-Rule France 1926 only - Apr 17 23:00s 1:00 S
-Rule France 1927 only - Apr 9 23:00s 1:00 S
-Rule France 1928 only - Apr 14 23:00s 1:00 S
-Rule France 1929 only - Apr 20 23:00s 1:00 S
-Rule France 1930 only - Apr 12 23:00s 1:00 S
-Rule France 1931 only - Apr 18 23:00s 1:00 S
-Rule France 1932 only - Apr 2 23:00s 1:00 S
-Rule France 1933 only - Mar 25 23:00s 1:00 S
-Rule France 1934 only - Apr 7 23:00s 1:00 S
-Rule France 1935 only - Mar 30 23:00s 1:00 S
-Rule France 1936 only - Apr 18 23:00s 1:00 S
-Rule France 1937 only - Apr 3 23:00s 1:00 S
-Rule France 1938 only - Mar 26 23:00s 1:00 S
-Rule France 1939 only - Apr 15 23:00s 1:00 S
-Rule France 1939 only - Nov 18 23:00s 0 -
-Rule France 1940 only - Feb 25 2:00 1:00 S
-# The French rules for 1941-1944 were not used in Paris, but Shanks & Pottenger
-# write that they were used in Monaco and in many French locations.
-# Le Corre writes that the upper limit of the free zone was Arnéguy, Orthez,
-# Mont-de-Marsan, Bazas, Langon, Lamothe-Montravel, Marœuil, La
-# Rochefoucauld, Champagne-Mouton, La Roche-Posay, La Haye-Descartes,
-# Loches, Montrichard, Vierzon, Bourges, Moulins, Digoin,
-# Paray-le-Monial, Montceau-les-Mines, Chalon-sur-Saône, Arbois,
-# Dole, Morez, St-Claude, and Collonges (Haute-Savoie).
-Rule France 1941 only - May 5 0:00 2:00 M # Midsummer
-# Shanks & Pottenger say this transition occurred at Oct 6 1:00,
-# but go with Denis Excoffier (1997-12-12),
-# who quotes the Ephémérides astronomiques for 1998 from Bureau des Longitudes
-# as saying 5/10/41 22hUT.
-Rule France 1941 only - Oct 6 0:00 1:00 S
-Rule France 1942 only - Mar 9 0:00 2:00 M
-Rule France 1942 only - Nov 2 3:00 1:00 S
-Rule France 1943 only - Mar 29 2:00 2:00 M
-Rule France 1943 only - Oct 4 3:00 1:00 S
-Rule France 1944 only - Apr 3 2:00 2:00 M
-Rule France 1944 only - Oct 8 1:00 1:00 S
-Rule France 1945 only - Apr 2 2:00 2:00 M
-Rule France 1945 only - Sep 16 3:00 0 -
-# Shanks & Pottenger give Mar 28 2:00 and Sep 26 3:00;
-# go with Excoffier's 28/3/76 0hUT and 25/9/76 23hUT.
-Rule France 1976 only - Mar 28 1:00 1:00 S
-Rule France 1976 only - Sep 26 1:00 0 -
-# Shanks & Pottenger give 0:09:20 for Paris Mean Time, and Whitman 0:09:05,
-# but Howse quotes the actual French legislation as saying 0:09:21.
-# Go with Howse. Howse writes that the time in France was officially based
-# on PMT-0:09:21 until 1978-08-09, when the time base finally switched to UTC.
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Paris 0:09:21 - LMT 1891 Mar 15 0:01
- 0:09:21 - PMT 1911 Mar 11 0:01 # Paris MT
-# Shanks & Pottenger give 1940 Jun 14 0:00; go with Excoffier and Le Corre.
- 0:00 France WE%sT 1940 Jun 14 23:00
-# Le Corre says Paris stuck with occupied-France time after the liberation;
-# go with Shanks & Pottenger.
- 1:00 C-Eur CE%sT 1944 Aug 25
- 0:00 France WE%sT 1945 Sep 16 3:00
- 1:00 France CE%sT 1977
- 1:00 EU CE%sT
-
-# Germany
-
-# From Markus Kuhn (1998-09-29):
-# The German time zone web site by the Physikalisch-Technische
-# Bundesanstalt contains DST information back to 1916.
-# [See tz-link.htm for the URL.]
-
-# From Jörg Schilling (2002-10-23):
-# In 1945, Berlin was switched to Moscow Summer time (GMT+4) by
-# https://www.dhm.de/lemo/html/biografien/BersarinNikolai/
-# General [Nikolai] Bersarin.
-
-# From Paul Eggert (2003-03-08):
-# http://www.parlament-berlin.de/pds-fraktion.nsf/727459127c8b66ee8525662300459099/defc77cb784f180ac1256c2b0030274b/$FILE/bersarint.pdf
-# says that Bersarin issued an order to use Moscow time on May 20.
-# However, Moscow did not observe daylight saving in 1945, so
-# this was equivalent to UT +03, not +04.
-
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Germany 1946 only - Apr 14 2:00s 1:00 S
-Rule Germany 1946 only - Oct 7 2:00s 0 -
-Rule Germany 1947 1949 - Oct Sun>=1 2:00s 0 -
-# http://www.ptb.de/de/org/4/44/441/salt.htm says the following transition
-# occurred at 3:00 MEZ, not the 2:00 MEZ given in Shanks & Pottenger.
-# Go with the PTB.
-Rule Germany 1947 only - Apr 6 3:00s 1:00 S
-Rule Germany 1947 only - May 11 2:00s 2:00 M
-Rule Germany 1947 only - Jun 29 3:00 1:00 S
-Rule Germany 1948 only - Apr 18 2:00s 1:00 S
-Rule Germany 1949 only - Apr 10 2:00s 1:00 S
-
-Rule SovietZone 1945 only - May 24 2:00 2:00 M # Midsummer
-Rule SovietZone 1945 only - Sep 24 3:00 1:00 S
-Rule SovietZone 1945 only - Nov 18 2:00s 0 -
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Berlin 0:53:28 - LMT 1893 Apr
- 1:00 C-Eur CE%sT 1945 May 24 2:00
- 1:00 SovietZone CE%sT 1946
- 1:00 Germany CE%sT 1980
- 1:00 EU CE%sT
-
-# From Tobias Conradi (2011-09-12):
-# Büsingen , surrounded by the Swiss canton
-# Schaffhausen, did not start observing DST in 1980 as the rest of DE
-# (West Germany at that time) and DD (East Germany at that time) did.
-# DD merged into DE, the area is currently covered by code DE in ISO 3166-1,
-# which in turn is covered by the zone Europe/Berlin.
-#
-# Source for the time in Büsingen 1980:
-# http://www.srf.ch/player/video?id=c012c029-03b7-4c2b-9164-aa5902cd58d3
-
-# From Arthur David Olson (2012-03-03):
-# Büsingen and Zurich have shared clocks since 1970.
-
-Link Europe/Zurich Europe/Busingen
-
-# Georgia
-# Please see the "asia" file for Asia/Tbilisi.
-# Herodotus (Histories, IV.45) says Georgia north of the Phasis (now Rioni)
-# is in Europe. Our reference location Tbilisi is in the Asian part.
-
-# Gibraltar
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Gibraltar -0:21:24 - LMT 1880 Aug 2 0:00s
- 0:00 GB-Eire %s 1957 Apr 14 2:00
- 1:00 - CET 1982
- 1:00 EU CE%sT
-
-# Greece
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-# Whitman gives 1932 Jul 5 - Nov 1; go with Shanks & Pottenger.
-Rule Greece 1932 only - Jul 7 0:00 1:00 S
-Rule Greece 1932 only - Sep 1 0:00 0 -
-# Whitman gives 1941 Apr 25 - ?; go with Shanks & Pottenger.
-Rule Greece 1941 only - Apr 7 0:00 1:00 S
-# Whitman gives 1942 Feb 2 - ?; go with Shanks & Pottenger.
-Rule Greece 1942 only - Nov 2 3:00 0 -
-Rule Greece 1943 only - Mar 30 0:00 1:00 S
-Rule Greece 1943 only - Oct 4 0:00 0 -
-# Whitman gives 1944 Oct 3 - Oct 31; go with Shanks & Pottenger.
-Rule Greece 1952 only - Jul 1 0:00 1:00 S
-Rule Greece 1952 only - Nov 2 0:00 0 -
-Rule Greece 1975 only - Apr 12 0:00s 1:00 S
-Rule Greece 1975 only - Nov 26 0:00s 0 -
-Rule Greece 1976 only - Apr 11 2:00s 1:00 S
-Rule Greece 1976 only - Oct 10 2:00s 0 -
-Rule Greece 1977 1978 - Apr Sun>=1 2:00s 1:00 S
-Rule Greece 1977 only - Sep 26 2:00s 0 -
-Rule Greece 1978 only - Sep 24 4:00 0 -
-Rule Greece 1979 only - Apr 1 9:00 1:00 S
-Rule Greece 1979 only - Sep 29 2:00 0 -
-Rule Greece 1980 only - Apr 1 0:00 1:00 S
-Rule Greece 1980 only - Sep 28 0:00 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Athens 1:34:52 - LMT 1895 Sep 14
- 1:34:52 - AMT 1916 Jul 28 0:01 # Athens MT
- 2:00 Greece EE%sT 1941 Apr 30
- 1:00 Greece CE%sT 1944 Apr 4
- 2:00 Greece EE%sT 1981
- # Shanks & Pottenger say it switched to C-Eur in 1981;
- # go with EU instead, since Greece joined it on Jan 1.
- 2:00 EU EE%sT
-
-# Hungary
-# From Paul Eggert (2014-07-15):
-# Dates for 1916-1945 are taken from:
-# Oross A. Jelen a múlt jövője: a nyári időszámítás Magyarországon 1916-1945.
-# National Archives of Hungary (2012-10-29).
-# http://mnl.gov.hu/a_het_dokumentuma/a_nyari_idoszamitas_magyarorszagon_19161945.html
-# This source does not always give times, which are taken from Shanks
-# & Pottenger (which disagree about the dates).
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Hungary 1918 only - Apr 1 3:00 1:00 S
-Rule Hungary 1918 only - Sep 16 3:00 0 -
-Rule Hungary 1919 only - Apr 15 3:00 1:00 S
-Rule Hungary 1919 only - Nov 24 3:00 0 -
-Rule Hungary 1945 only - May 1 23:00 1:00 S
-Rule Hungary 1945 only - Nov 1 0:00 0 -
-Rule Hungary 1946 only - Mar 31 2:00s 1:00 S
-Rule Hungary 1946 1949 - Oct Sun>=1 2:00s 0 -
-Rule Hungary 1947 1949 - Apr Sun>=4 2:00s 1:00 S
-Rule Hungary 1950 only - Apr 17 2:00s 1:00 S
-Rule Hungary 1950 only - Oct 23 2:00s 0 -
-Rule Hungary 1954 1955 - May 23 0:00 1:00 S
-Rule Hungary 1954 1955 - Oct 3 0:00 0 -
-Rule Hungary 1956 only - Jun Sun>=1 0:00 1:00 S
-Rule Hungary 1956 only - Sep lastSun 0:00 0 -
-Rule Hungary 1957 only - Jun Sun>=1 1:00 1:00 S
-Rule Hungary 1957 only - Sep lastSun 3:00 0 -
-Rule Hungary 1980 only - Apr 6 1:00 1:00 S
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Budapest 1:16:20 - LMT 1890 Oct
- 1:00 C-Eur CE%sT 1918
- 1:00 Hungary CE%sT 1941 Apr 8
- 1:00 C-Eur CE%sT 1945
- 1:00 Hungary CE%sT 1980 Sep 28 2:00s
- 1:00 EU CE%sT
-
-# Iceland
-#
-# From Adam David (1993-11-06):
-# The name of the timezone in Iceland for system / mail / news purposes is GMT.
-#
-# (1993-12-05):
-# This material is paraphrased from the 1988 edition of the University of
-# Iceland Almanak.
-#
-# From January 1st, 1908 the whole of Iceland was standardised at 1 hour
-# behind GMT. Previously, local mean solar time was used in different parts
-# of Iceland, the almanak had been based on Reykjavik mean solar time which
-# was 1 hour and 28 minutes behind GMT.
-#
-# "first day of winter" referred to [below] means the first day of the 26 weeks
-# of winter, according to the old icelandic calendar that dates back to the
-# time the norsemen first settled Iceland. The first day of winter is always
-# Saturday, but is not dependent on the Julian or Gregorian calendars.
-#
-# (1993-12-10):
-# I have a reference from the Oxford Icelandic-English dictionary for the
-# beginning of winter, which ties it to the ecclesiastical calendar (and thus
-# to the julian/gregorian calendar) over the period in question.
-# the winter begins on the Saturday next before St. Luke's day
-# (old style), or on St. Luke's day, if a Saturday.
-# St. Luke's day ought to be traceable from ecclesiastical sources. "old style"
-# might be a reference to the Julian calendar as opposed to Gregorian, or it
-# might mean something else (???).
-#
-# From Paul Eggert (2014-11-22):
-# The information below is taken from the 1988 Almanak; see
-# http://www.almanak.hi.is/klukkan.html
-#
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Iceland 1917 1919 - Feb 19 23:00 1:00 S
-Rule Iceland 1917 only - Oct 21 1:00 0 -
-Rule Iceland 1918 1919 - Nov 16 1:00 0 -
-Rule Iceland 1921 only - Mar 19 23:00 1:00 S
-Rule Iceland 1921 only - Jun 23 1:00 0 -
-Rule Iceland 1939 only - Apr 29 23:00 1:00 S
-Rule Iceland 1939 only - Oct 29 2:00 0 -
-Rule Iceland 1940 only - Feb 25 2:00 1:00 S
-Rule Iceland 1940 1941 - Nov Sun>=2 1:00s 0 -
-Rule Iceland 1941 1942 - Mar Sun>=2 1:00s 1:00 S
-# 1943-1946 - first Sunday in March until first Sunday in winter
-Rule Iceland 1943 1946 - Mar Sun>=1 1:00s 1:00 S
-Rule Iceland 1942 1948 - Oct Sun>=22 1:00s 0 -
-# 1947-1967 - first Sunday in April until first Sunday in winter
-Rule Iceland 1947 1967 - Apr Sun>=1 1:00s 1:00 S
-# 1949 and 1967 Oct transitions delayed by 1 week
-Rule Iceland 1949 only - Oct 30 1:00s 0 -
-Rule Iceland 1950 1966 - Oct Sun>=22 1:00s 0 -
-Rule Iceland 1967 only - Oct 29 1:00s 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Atlantic/Reykjavik -1:28 - LMT 1908
- -1:00 Iceland -01/+00 1968 Apr 7 1:00s
- 0:00 - GMT
-
-# Italy
-#
-# From Paul Eggert (2001-03-06):
-# Sicily and Sardinia each had their own time zones from 1866 to 1893,
-# called Palermo Time (+00:53:28) and Cagliari Time (+00:36:32).
-# During World War II, German-controlled Italy used German time.
-# But these events all occurred before the 1970 cutoff,
-# so record only the time in Rome.
-#
-# From Michael Deckers (2016-10-24):
-# http://www.ac-ilsestante.it/MERIDIANE/ora_legale quotes a law of 1893-08-10
-# ... [translated as] "The preceding dispositions will enter into
-# force at the instant at which, according to the time specified in
-# the 1st article, the 1st of November 1893 will begin...."
-#
-# From Pierpaolo Bernardi (2016-10-20):
-# The authoritative source for time in Italy is the national metrological
-# institute, which has a summary page of historical DST data at
-# http://www.inrim.it/res/tf/ora_legale_i.shtml
-# (2016-10-24):
-# http://www.renzobaldini.it/le-ore-legali-in-italia/
-# has still different data for 1944. It divides Italy in two, as
-# there were effectively two governments at the time, north of Gothic
-# Line German controlled territory, official government RSI, and south
-# of the Gothic Line, controlled by allied armies.
-#
-# From Brian Inglis (2016-10-23):
-# Viceregal LEGISLATIVE DECREE. 14 September 1944, no. 219.
-# Restoration of Standard Time. (044U0219) (OJ 62 of 30.9.1944) ...
-# Given the R. law decreed on 1944-03-29, no. 92, by which standard time is
-# advanced to sixty minutes later starting at hour two on 1944-04-02; ...
-# Starting at hour three on the date 1944-09-17 standard time will be resumed.
-#
-# From Paul Eggert (2016-10-27):
-# Go with INRiM for DST rules, except as corrected by Inglis for 1944
-# for the Kingdom of Italy. This is consistent with Renzo Baldini.
-# Model Rome's occupation by using C-Eur rules from 1943-09-10
-# to 1944-06-04; although Rome was an open city during this period, it
-# was effectively controlled by Germany.
-#
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Italy 1916 only - Jun 3 24:00 1:00 S
-Rule Italy 1916 1917 - Sep 30 24:00 0 -
-Rule Italy 1917 only - Mar 31 24:00 1:00 S
-Rule Italy 1918 only - Mar 9 24:00 1:00 S
-Rule Italy 1918 only - Oct 6 24:00 0 -
-Rule Italy 1919 only - Mar 1 24:00 1:00 S
-Rule Italy 1919 only - Oct 4 24:00 0 -
-Rule Italy 1920 only - Mar 20 24:00 1:00 S
-Rule Italy 1920 only - Sep 18 24:00 0 -
-Rule Italy 1940 only - Jun 14 24:00 1:00 S
-Rule Italy 1942 only - Nov 2 2:00s 0 -
-Rule Italy 1943 only - Mar 29 2:00s 1:00 S
-Rule Italy 1943 only - Oct 4 2:00s 0 -
-Rule Italy 1944 only - Apr 2 2:00s 1:00 S
-Rule Italy 1944 only - Sep 17 2:00s 0 -
-Rule Italy 1945 only - Apr 2 2:00 1:00 S
-Rule Italy 1945 only - Sep 15 1:00 0 -
-Rule Italy 1946 only - Mar 17 2:00s 1:00 S
-Rule Italy 1946 only - Oct 6 2:00s 0 -
-Rule Italy 1947 only - Mar 16 0:00s 1:00 S
-Rule Italy 1947 only - Oct 5 0:00s 0 -
-Rule Italy 1948 only - Feb 29 2:00s 1:00 S
-Rule Italy 1948 only - Oct 3 2:00s 0 -
-Rule Italy 1966 1968 - May Sun>=22 0:00s 1:00 S
-Rule Italy 1966 only - Sep 24 24:00 0 -
-Rule Italy 1967 1969 - Sep Sun>=22 0:00s 0 -
-Rule Italy 1969 only - Jun 1 0:00s 1:00 S
-Rule Italy 1970 only - May 31 0:00s 1:00 S
-Rule Italy 1970 only - Sep lastSun 0:00s 0 -
-Rule Italy 1971 1972 - May Sun>=22 0:00s 1:00 S
-Rule Italy 1971 only - Sep lastSun 0:00s 0 -
-Rule Italy 1972 only - Oct 1 0:00s 0 -
-Rule Italy 1973 only - Jun 3 0:00s 1:00 S
-Rule Italy 1973 1974 - Sep lastSun 0:00s 0 -
-Rule Italy 1974 only - May 26 0:00s 1:00 S
-Rule Italy 1975 only - Jun 1 0:00s 1:00 S
-Rule Italy 1975 1977 - Sep lastSun 0:00s 0 -
-Rule Italy 1976 only - May 30 0:00s 1:00 S
-Rule Italy 1977 1979 - May Sun>=22 0:00s 1:00 S
-Rule Italy 1978 only - Oct 1 0:00s 0 -
-Rule Italy 1979 only - Sep 30 0:00s 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Rome 0:49:56 - LMT 1866 Sep 22
- 0:49:56 - RMT 1893 Oct 31 23:49:56 # Rome Mean
- 1:00 Italy CE%sT 1943 Sep 10
- 1:00 C-Eur CE%sT 1944 Jun 4
- 1:00 Italy CE%sT 1980
- 1:00 EU CE%sT
-
-Link Europe/Rome Europe/Vatican
-Link Europe/Rome Europe/San_Marino
-
-# Latvia
-
-# From Liene Kanepe (1998-09-17):
-
-# I asked about this matter Scientific Secretary of the Institute of Astronomy
-# of The University of Latvia Dr. paed Mr. Ilgonis Vilks. I also searched the
-# correct data in juridical acts and I found some juridical documents about
-# changes in the counting of time in Latvia from 1981....
-#
-# Act No. 35 of the Council of Ministers of Latvian SSR of 1981-01-22 ...
-# according to the Act No. 925 of the Council of Ministers of USSR of 1980-10-24
-# ...: all year round the time of 2nd time zone + 1 hour, in addition turning
-# the hands of the clock 1 hour forward on 1 April at 00:00 (GMT 31 March 21:00)
-# and 1 hour backward on the 1 October at 00:00 (GMT 30 September 20:00).
-#
-# Act No. 592 of the Council of Ministers of Latvian SSR of 1984-09-24 ...
-# according to the Act No. 967 of the Council of Ministers of USSR of 1984-09-13
-# ...: all year round the time of 2nd time zone + 1 hour, in addition turning
-# the hands of the clock 1 hour forward on the last Sunday of March at 02:00
-# (GMT 23:00 on the previous day) and 1 hour backward on the last Sunday of
-# September at 03:00 (GMT 23:00 on the previous day).
-#
-# Act No. 81 of the Council of Ministers of Latvian SSR of 1989-03-22 ...
-# according to the Act No. 227 of the Council of Ministers of USSR of 1989-03-14
-# ...: since the last Sunday of March 1989 in Lithuanian SSR, Latvian SSR,
-# Estonian SSR and Kaliningrad region of Russian Federation all year round the
-# time of 2nd time zone (Moscow time minus one hour). On the territory of Latvia
-# transition to summer time is performed on the last Sunday of March at 02:00
-# (GMT 00:00), turning the hands of the clock 1 hour forward. The end of
-# daylight saving time is performed on the last Sunday of September at 03:00
-# (GMT 00:00), turning the hands of the clock 1 hour backward. Exception is
-# 1989-03-26, when we must not turn the hands of the clock....
-#
-# The Regulations of the Cabinet of Ministers of the Republic of Latvia of
-# 1997-01-21 on transition to Summer time ... established the same order of
-# daylight savings time settings as in the States of the European Union.
-
-# From Andrei Ivanov (2000-03-06):
-# This year Latvia will not switch to Daylight Savings Time (as specified in
-# The Regulations of the Cabinet of Ministers of the Rep. of Latvia of
-# 29-Feb-2000 (No. 79) ,
-# in Latvian for subscribers only).
-
-# From RFE/RL Newsline
-# http://www.rferl.org/newsline/2001/01/3-CEE/cee-030101.html
-# (2001-01-03), noted after a heads-up by Rives McDow:
-# The Latvian government on 2 January decided that the country will
-# institute daylight-saving time this spring, LETA reported.
-# Last February the three Baltic states decided not to turn back their
-# clocks one hour in the spring....
-# Minister of Economy Aigars Kalvītis noted that Latvia had too few
-# daylight hours and thus decided to comply with a draft European
-# Commission directive that provides for instituting daylight-saving
-# time in EU countries between 2002 and 2006. The Latvian government
-# urged Lithuania and Estonia to adopt a similar time policy, but it
-# appears that they will not do so....
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Latvia 1989 1996 - Mar lastSun 2:00s 1:00 S
-Rule Latvia 1989 1996 - Sep lastSun 2:00s 0 -
-
-# Milne 1899 says Riga was 1:36:28 (Polytechnique House time).
-# Byalokoz 1919 says Latvia was 1:36:34.
-# Go with Byalokoz.
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Riga 1:36:34 - LMT 1880
- 1:36:34 - RMT 1918 Apr 15 2:00 # Riga MT
- 1:36:34 1:00 LST 1918 Sep 16 3:00 # Latvian ST
- 1:36:34 - RMT 1919 Apr 1 2:00
- 1:36:34 1:00 LST 1919 May 22 3:00
- 1:36:34 - RMT 1926 May 11
- 2:00 - EET 1940 Aug 5
- 3:00 - MSK 1941 Jul
- 1:00 C-Eur CE%sT 1944 Oct 13
- 3:00 Russia MSK/MSD 1989 Mar lastSun 2:00s
- 2:00 1:00 EEST 1989 Sep lastSun 2:00s
- 2:00 Latvia EE%sT 1997 Jan 21
- 2:00 EU EE%sT 2000 Feb 29
- 2:00 - EET 2001 Jan 2
- 2:00 EU EE%sT
-
-# Liechtenstein
-
-# From Paul Eggert (2013-09-09):
-# Shanks & Pottenger say Vaduz is like Zurich.
-
-# From Alois Treindl (2013-09-18):
-# http://www.eliechtensteinensia.li/LIJ/1978/1938-1978/1941.pdf
-# ... confirms on p. 6 that Liechtenstein followed Switzerland in 1941 and 1942.
-# I ... translate only the last two paragraphs:
-# ... during second world war, in the years 1941 and 1942, Liechtenstein
-# introduced daylight saving time, adapting to Switzerland. From 1943 on
-# central European time was in force throughout the year.
-# From a report of the duke's government to the high council,
-# regarding the introduction of a time law, of 31 May 1977.
-
-Link Europe/Zurich Europe/Vaduz
-
-
-# Lithuania
-
-# From Paul Eggert (2016-03-18):
-# The 1989 transition is from USSR act No. 227 (1989-03-14).
-
-# From Paul Eggert (1996-11-22):
-# IATA SSIM (1992/1996) says Lithuania uses W-Eur rules, but since it is
-# known to be wrong about Estonia and Latvia, assume it's wrong here too.
-
-# From Marius Gedminas (1998-08-07):
-# I would like to inform that in this year Lithuanian time zone
-# (Europe/Vilnius) was changed.
-
-# From ELTA No. 972 (2582) (1999-09-29) ,
-# via Steffen Thorsen:
-# Lithuania has shifted back to the second time zone (GMT plus two hours)
-# to be valid here starting from October 31,
-# as decided by the national government on Wednesday....
-# The Lithuanian government also announced plans to consider a
-# motion to give up shifting to summer time in spring, as it was
-# already done by Estonia.
-
-# From the Fact File, Lithuanian State Department of Tourism
-# (2000-03-27):
-# Local time is GMT+2 hours ..., no daylight saving.
-
-# From a user via Klaus Marten (2003-02-07):
-# As a candidate for membership of the European Union, Lithuania will
-# observe Summer Time in 2003, changing its clocks at the times laid
-# down in EU Directive 2000/84 of 19.I.01 (i.e. at the same times as its
-# neighbour Latvia). The text of the Lithuanian government Order of
-# 7.XI.02 to this effect can be found at
-# http://www.lrvk.lt/nut/11/n1749.htm
-
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Vilnius 1:41:16 - LMT 1880
- 1:24:00 - WMT 1917 # Warsaw Mean Time
- 1:35:36 - KMT 1919 Oct 10 # Kaunas Mean Time
- 1:00 - CET 1920 Jul 12
- 2:00 - EET 1920 Oct 9
- 1:00 - CET 1940 Aug 3
- 3:00 - MSK 1941 Jun 24
- 1:00 C-Eur CE%sT 1944 Aug
- 3:00 Russia MSK/MSD 1989 Mar 26 2:00s
- 2:00 Russia EE%sT 1991 Sep 29 2:00s
- 2:00 C-Eur EE%sT 1998
- 2:00 - EET 1998 Mar 29 1:00u
- 1:00 EU CE%sT 1999 Oct 31 1:00u
- 2:00 - EET 2003 Jan 1
- 2:00 EU EE%sT
-
-# Luxembourg
-# Whitman disagrees with most of these dates in minor ways;
-# go with Shanks & Pottenger.
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Lux 1916 only - May 14 23:00 1:00 S
-Rule Lux 1916 only - Oct 1 1:00 0 -
-Rule Lux 1917 only - Apr 28 23:00 1:00 S
-Rule Lux 1917 only - Sep 17 1:00 0 -
-Rule Lux 1918 only - Apr Mon>=15 2:00s 1:00 S
-Rule Lux 1918 only - Sep Mon>=15 2:00s 0 -
-Rule Lux 1919 only - Mar 1 23:00 1:00 S
-Rule Lux 1919 only - Oct 5 3:00 0 -
-Rule Lux 1920 only - Feb 14 23:00 1:00 S
-Rule Lux 1920 only - Oct 24 2:00 0 -
-Rule Lux 1921 only - Mar 14 23:00 1:00 S
-Rule Lux 1921 only - Oct 26 2:00 0 -
-Rule Lux 1922 only - Mar 25 23:00 1:00 S
-Rule Lux 1922 only - Oct Sun>=2 1:00 0 -
-Rule Lux 1923 only - Apr 21 23:00 1:00 S
-Rule Lux 1923 only - Oct Sun>=2 2:00 0 -
-Rule Lux 1924 only - Mar 29 23:00 1:00 S
-Rule Lux 1924 1928 - Oct Sun>=2 1:00 0 -
-Rule Lux 1925 only - Apr 5 23:00 1:00 S
-Rule Lux 1926 only - Apr 17 23:00 1:00 S
-Rule Lux 1927 only - Apr 9 23:00 1:00 S
-Rule Lux 1928 only - Apr 14 23:00 1:00 S
-Rule Lux 1929 only - Apr 20 23:00 1:00 S
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Luxembourg 0:24:36 - LMT 1904 Jun
- 1:00 Lux CE%sT 1918 Nov 25
- 0:00 Lux WE%sT 1929 Oct 6 2:00s
- 0:00 Belgium WE%sT 1940 May 14 3:00
- 1:00 C-Eur WE%sT 1944 Sep 18 3:00
- 1:00 Belgium CE%sT 1977
- 1:00 EU CE%sT
-
-# Macedonia
-# See Europe/Belgrade.
-
-# Malta
-#
-# From Paul Eggert (2016-10-21):
-# Assume 1900-1972 was like Rome, overriding Shanks.
-#
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Malta 1973 only - Mar 31 0:00s 1:00 S
-Rule Malta 1973 only - Sep 29 0:00s 0 -
-Rule Malta 1974 only - Apr 21 0:00s 1:00 S
-Rule Malta 1974 only - Sep 16 0:00s 0 -
-Rule Malta 1975 1979 - Apr Sun>=15 2:00 1:00 S
-Rule Malta 1975 1980 - Sep Sun>=15 2:00 0 -
-Rule Malta 1980 only - Mar 31 2:00 1:00 S
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Malta 0:58:04 - LMT 1893 Nov 2 0:00s # Valletta
- 1:00 Italy CE%sT 1973 Mar 31
- 1:00 Malta CE%sT 1981
- 1:00 EU CE%sT
-
-# Moldova
-
-# From Stepan Golosunov (2016-03-07):
-# the act of the government of the Republic of Moldova Nr. 132 from 1990-05-04
-# http://lex.justice.md/viewdoc.php?action=view&view=doc&id=298782&lang=2
-# ... says that since 1990-05-06 on the territory of the Moldavian SSR
-# time would be calculated as the standard time of the second time belt
-# plus one hour of the "summer" time. To implement that clocks would be
-# adjusted one hour backwards at 1990-05-06 2:00. After that "summer"
-# time would be cancelled last Sunday of September at 3:00 and
-# reintroduced last Sunday of March at 2:00.
-
-# From Paul Eggert (2006-03-22):
-# A previous version of this database followed Shanks & Pottenger, who write
-# that Tiraspol switched to Moscow time on 1992-01-19 at 02:00.
-# However, this is most likely an error, as Moldova declared independence
-# on 1991-08-27 (the 1992-01-19 date is that of a Russian decree).
-# In early 1992 there was large-scale interethnic violence in the area
-# and it's possible that some Russophones continued to observe Moscow time.
-# But [two people] separately reported via
-# Jesper Nørgaard that as of 2001-01-24 Tiraspol was like Chisinau.
-# The Tiraspol entry has therefore been removed for now.
-#
-# From Alexander Krivenyshev (2011-10-17):
-# Pridnestrovian Moldavian Republic (PMR, also known as
-# "Pridnestrovie") has abolished seasonal clock change (no transition
-# to the Winter Time).
-#
-# News (in Russian):
-# http://www.kyivpost.ua/russia/news/pridnestrove-otkazalos-ot-perehoda-na-zimnee-vremya-30954.html
-# http://www.allmoldova.com/moldova-news/1249064116.html
-#
-# The substance of this change (reinstatement of the Tiraspol entry)
-# is from a patch from Petr Machata (2011-10-17)
-#
-# From Tim Parenti (2011-10-19)
-# In addition, being situated at +4651+2938 would give Tiraspol
-# a pre-1880 LMT offset of 1:58:32.
-#
-# (which agrees with the earlier entry that had been removed)
-#
-# From Alexander Krivenyshev (2011-10-26)
-# NO need to divide Moldova into two timezones at this point.
-# As of today, Transnistria (Pridnestrovie)- Tiraspol reversed its own
-# decision to abolish DST this winter.
-# Following Moldova and neighboring Ukraine- Transnistria (Pridnestrovie)-
-# Tiraspol will go back to winter time on October 30, 2011.
-# News from Moldova (in russian):
-# https://ru.publika.md/link_317061.html
-
-# From Roman Tudos (2015-07-02):
-# http://lex.justice.md/index.php?action=view&view=doc&lang=1&id=355077
-# From Paul Eggert (2015-07-01):
-# The abovementioned official link to IGO1445-868/2014 states that
-# 2014-10-26's fallback transition occurred at 03:00 local time. Also,
-# https://www.trm.md/en/social/la-30-martie-vom-trece-la-ora-de-vara
-# says the 2014-03-30 spring-forward transition was at 02:00 local time.
-# Guess that since 1997 Moldova has switched one hour before the EU.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Moldova 1997 max - Mar lastSun 2:00 1:00 S
-Rule Moldova 1997 max - Oct lastSun 3:00 0 -
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Chisinau 1:55:20 - LMT 1880
- 1:55 - CMT 1918 Feb 15 # Chisinau MT
- 1:44:24 - BMT 1931 Jul 24 # Bucharest MT
- 2:00 Romania EE%sT 1940 Aug 15
- 2:00 1:00 EEST 1941 Jul 17
- 1:00 C-Eur CE%sT 1944 Aug 24
- 3:00 Russia MSK/MSD 1990 May 6 2:00
- 2:00 Russia EE%sT 1992
- 2:00 E-Eur EE%sT 1997
-# See Romania commentary for the guessed 1997 transition to EU rules.
- 2:00 Moldova EE%sT
-
-# Monaco
-# Shanks & Pottenger give 0:09:20 for Paris Mean Time; go with Howse's
-# more precise 0:09:21.
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Monaco 0:29:32 - LMT 1891 Mar 15
- 0:09:21 - PMT 1911 Mar 11 # Paris Mean Time
- 0:00 France WE%sT 1945 Sep 16 3:00
- 1:00 France CE%sT 1977
- 1:00 EU CE%sT
-
-# Montenegro
-# See Europe/Belgrade.
-
-# Netherlands
-
-# Howse writes that the Netherlands' railways used GMT between 1892 and 1940,
-# but for other purposes the Netherlands used Amsterdam mean time.
-
-# However, Robert H. van Gent writes (2001-04-01):
-# Howse's statement is only correct up to 1909. From 1909-05-01 (00:00:00
-# Amsterdam mean time) onwards, the whole of the Netherlands (including
-# the Dutch railways) was required by law to observe Amsterdam mean time
-# (19 minutes 32.13 seconds ahead of GMT). This had already been the
-# common practice (except for the railways) for many decades but it was
-# not until 1909 when the Dutch government finally defined this by law.
-# On 1937-07-01 this was changed to 20 minutes (exactly) ahead of GMT and
-# was generally known as Dutch Time ("Nederlandse Tijd").
-#
-# (2001-04-08):
-# 1892-05-01 was the date when the Dutch railways were by law required to
-# observe GMT while the remainder of the Netherlands adhered to the common
-# practice of following Amsterdam mean time.
-#
-# (2001-04-09):
-# In 1835 the authorities of the province of North Holland requested the
-# municipal authorities of the towns and cities in the province to observe
-# Amsterdam mean time but I do not know in how many cases this request was
-# actually followed.
-#
-# From 1852 onwards the Dutch telegraph offices were by law required to
-# observe Amsterdam mean time. As the time signals from the observatory of
-# Leiden were also distributed by the telegraph system, I assume that most
-# places linked up with the telegraph (and railway) system automatically
-# adopted Amsterdam mean time.
-#
-# Although the early Dutch railway companies initially observed a variety
-# of times, most of them had adopted Amsterdam mean time by 1858 but it
-# was not until 1866 when they were all required by law to observe
-# Amsterdam mean time.
-
-# The data entries before 1945 are taken from
-# https://www.staff.science.uu.nl/~gent0113/wettijd/wettijd.htm
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Neth 1916 only - May 1 0:00 1:00 NST # Netherlands Summer Time
-Rule Neth 1916 only - Oct 1 0:00 0 AMT # Amsterdam Mean Time
-Rule Neth 1917 only - Apr 16 2:00s 1:00 NST
-Rule Neth 1917 only - Sep 17 2:00s 0 AMT
-Rule Neth 1918 1921 - Apr Mon>=1 2:00s 1:00 NST
-Rule Neth 1918 1921 - Sep lastMon 2:00s 0 AMT
-Rule Neth 1922 only - Mar lastSun 2:00s 1:00 NST
-Rule Neth 1922 1936 - Oct Sun>=2 2:00s 0 AMT
-Rule Neth 1923 only - Jun Fri>=1 2:00s 1:00 NST
-Rule Neth 1924 only - Mar lastSun 2:00s 1:00 NST
-Rule Neth 1925 only - Jun Fri>=1 2:00s 1:00 NST
-# From 1926 through 1939 DST began 05-15, except that it was delayed by a week
-# in years when 05-15 fell in the Pentecost weekend.
-Rule Neth 1926 1931 - May 15 2:00s 1:00 NST
-Rule Neth 1932 only - May 22 2:00s 1:00 NST
-Rule Neth 1933 1936 - May 15 2:00s 1:00 NST
-Rule Neth 1937 only - May 22 2:00s 1:00 NST
-Rule Neth 1937 only - Jul 1 0:00 1:00 S
-Rule Neth 1937 1939 - Oct Sun>=2 2:00s 0 -
-Rule Neth 1938 1939 - May 15 2:00s 1:00 S
-Rule Neth 1945 only - Apr 2 2:00s 1:00 S
-Rule Neth 1945 only - Sep 16 2:00s 0 -
-#
-# Amsterdam Mean Time was +00:19:32.13 exactly, but the .13 is omitted
-# below because the current format requires GMTOFF to be an integer.
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Amsterdam 0:19:32 - LMT 1835
- 0:19:32 Neth %s 1937 Jul 1
- 0:20 Neth +0020/+0120 1940 May 16 0:00
- 1:00 C-Eur CE%sT 1945 Apr 2 2:00
- 1:00 Neth CE%sT 1977
- 1:00 EU CE%sT
-
-# Norway
-# http://met.no/met/met_lex/q_u/sommertid.html (2004-01) agrees with Shanks &
-# Pottenger.
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Norway 1916 only - May 22 1:00 1:00 S
-Rule Norway 1916 only - Sep 30 0:00 0 -
-Rule Norway 1945 only - Apr 2 2:00s 1:00 S
-Rule Norway 1945 only - Oct 1 2:00s 0 -
-Rule Norway 1959 1964 - Mar Sun>=15 2:00s 1:00 S
-Rule Norway 1959 1965 - Sep Sun>=15 2:00s 0 -
-Rule Norway 1965 only - Apr 25 2:00s 1:00 S
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Oslo 0:43:00 - LMT 1895 Jan 1
- 1:00 Norway CE%sT 1940 Aug 10 23:00
- 1:00 C-Eur CE%sT 1945 Apr 2 2:00
- 1:00 Norway CE%sT 1980
- 1:00 EU CE%sT
-
-# Svalbard & Jan Mayen
-
-# From Steffen Thorsen (2001-05-01):
-# Although I could not find it explicitly, it seems that Jan Mayen and
-# Svalbard have been using the same time as Norway at least since the
-# time they were declared as parts of Norway. Svalbard was declared
-# as a part of Norway by law of 1925-07-17 no 11, section 4 and Jan
-# Mayen by law of 1930-02-27 no 2, section 2. (From
-# and
-# ). The law/regulation
-# for normal/standard time in Norway is from 1894-06-29 no 1 (came
-# into operation on 1895-01-01) and Svalbard/Jan Mayen seem to be a
-# part of this law since 1925/1930. (From
-# ) I have not been
-# able to find if Jan Mayen used a different time zone (e.g. -0100)
-# before 1930. Jan Mayen has only been "inhabited" since 1921 by
-# Norwegian meteorologists and maybe used the same time as Norway ever
-# since 1921. Svalbard (Arctic/Longyearbyen) has been inhabited since
-# before 1895, and therefore probably changed the local time somewhere
-# between 1895 and 1925 (inclusive).
-
-# From Paul Eggert (2013-09-04):
-#
-# Actually, Jan Mayen was never occupied by Germany during World War II,
-# so it must have diverged from Oslo time during the war, as Oslo was
-# keeping Berlin time.
-#
-# says that the meteorologists
-# burned down their station in 1940 and left the island, but returned in
-# 1941 with a small Norwegian garrison and continued operations despite
-# frequent air attacks from Germans. In 1943 the Americans established a
-# radiolocating station on the island, called "Atlantic City". Possibly
-# the UT offset changed during the war, but I think it unlikely that
-# Jan Mayen used German daylight-saving rules.
-#
-# Svalbard is more complicated, as it was raided in August 1941 by an
-# Allied party that evacuated the civilian population to England (says
-# ). The Svalbard FAQ
-# says that the Germans were
-# expelled on 1942-05-14. However, small parties of Germans did return,
-# and according to Wilhelm Dege's book "War North of 80" (1954)
-# http://www.ucalgary.ca/UofC/departments/UP/1-55238/1-55238-110-2.html
-# the German armed forces at the Svalbard weather station code-named
-# Haudegen did not surrender to the Allies until September 1945.
-#
-# All these events predate our cutoff date of 1970, so use Europe/Oslo
-# for these regions.
-Link Europe/Oslo Arctic/Longyearbyen
-
-# Poland
-
-# The 1919 dates and times can be found in Tygodnik Urzędowy nr 1 (1919-03-20),
-# pp 1-2.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Poland 1918 1919 - Sep 16 2:00s 0 -
-Rule Poland 1919 only - Apr 15 2:00s 1:00 S
-Rule Poland 1944 only - Apr 3 2:00s 1:00 S
-# Whitman gives 1944 Nov 30; go with Shanks & Pottenger.
-Rule Poland 1944 only - Oct 4 2:00 0 -
-# For 1944-1948 Whitman gives the previous day; go with Shanks & Pottenger.
-Rule Poland 1945 only - Apr 29 0:00 1:00 S
-Rule Poland 1945 only - Nov 1 0:00 0 -
-# For 1946 on the source is Kazimierz Borkowski,
-# Toruń Center for Astronomy, Dept. of Radio Astronomy, Nicolaus Copernicus U.,
-# https://www.astro.uni.torun.pl/~kb/Artykuly/U-PA/Czas2.htm#tth_tAb1
-# Thanks to Przemysław Augustyniak (2005-05-28) for this reference.
-# He also gives these further references:
-# Mon Pol nr 13, poz 162 (1995)
-# Druk nr 2180 (2003)
-Rule Poland 1946 only - Apr 14 0:00s 1:00 S
-Rule Poland 1946 only - Oct 7 2:00s 0 -
-Rule Poland 1947 only - May 4 2:00s 1:00 S
-Rule Poland 1947 1949 - Oct Sun>=1 2:00s 0 -
-Rule Poland 1948 only - Apr 18 2:00s 1:00 S
-Rule Poland 1949 only - Apr 10 2:00s 1:00 S
-Rule Poland 1957 only - Jun 2 1:00s 1:00 S
-Rule Poland 1957 1958 - Sep lastSun 1:00s 0 -
-Rule Poland 1958 only - Mar 30 1:00s 1:00 S
-Rule Poland 1959 only - May 31 1:00s 1:00 S
-Rule Poland 1959 1961 - Oct Sun>=1 1:00s 0 -
-Rule Poland 1960 only - Apr 3 1:00s 1:00 S
-Rule Poland 1961 1964 - May lastSun 1:00s 1:00 S
-Rule Poland 1962 1964 - Sep lastSun 1:00s 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Warsaw 1:24:00 - LMT 1880
- 1:24:00 - WMT 1915 Aug 5 # Warsaw Mean Time
- 1:00 C-Eur CE%sT 1918 Sep 16 3:00
- 2:00 Poland EE%sT 1922 Jun
- 1:00 Poland CE%sT 1940 Jun 23 2:00
- 1:00 C-Eur CE%sT 1944 Oct
- 1:00 Poland CE%sT 1977
- 1:00 W-Eur CE%sT 1988
- 1:00 EU CE%sT
-
-# Portugal
-#
-# From Paul Eggert (2014-08-11), after a heads-up from Stephen Colebourne:
-# According to a Portuguese decree (1911-05-26)
-# https://dre.pt/application/dir/pdf1sdip/1911/05/12500/23132313.pdf
-# Lisbon was at -0:36:44.68, but switched to GMT on 1912-01-01 at 00:00.
-# Round the old offset to -0:36:45. This agrees with Willett but disagrees
-# with Shanks, who says the transition occurred on 1911-05-24 at 00:00 for
-# Europe/Lisbon, Atlantic/Azores, and Atlantic/Madeira.
-# -# From Rui Pedro Salgueiro (1992-11-12): -# Portugal has recently (September, 27) changed timezone -# (from WET to MET or CET) to harmonize with EEC. -# -# Martin Bruckmann (1996-02-29) reports via Peter Ilieve -# that Portugal is reverting to 0:00 by not moving its clocks this spring. -# The new Prime Minister was fed up with getting up in the dark in the winter. -# -# From Paul Eggert (1996-11-12): -# IATA SSIM (1991-09) reports several 1991-09 and 1992-09 transitions -# at 02:00u, not 01:00u. Assume that these are typos. -# IATA SSIM (1991/1992) reports that the Azores were at -1:00. -# IATA SSIM (1993-02) says +0:00; later issues (through 1996-09) say -1:00. -# Guess that the Azores changed to EU rules in 1992 (since that's when Portugal -# harmonized with the EU), and that they stayed +0:00 that winter. -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -# DSH writes that despite Decree 1,469 (1915), the change to the clocks was not -# done every year, depending on what Spain did, because of railroad schedules. -# Go with Shanks & Pottenger. -Rule Port 1916 only - Jun 17 23:00 1:00 S -# Whitman gives 1916 Oct 31; go with Shanks & Pottenger. -Rule Port 1916 only - Nov 1 1:00 0 - -Rule Port 1917 only - Feb 28 23:00s 1:00 S -Rule Port 1917 1921 - Oct 14 23:00s 0 - -Rule Port 1918 only - Mar 1 23:00s 1:00 S -Rule Port 1919 only - Feb 28 23:00s 1:00 S -Rule Port 1920 only - Feb 29 23:00s 1:00 S -Rule Port 1921 only - Feb 28 23:00s 1:00 S -Rule Port 1924 only - Apr 16 23:00s 1:00 S -Rule Port 1924 only - Oct 14 23:00s 0 - -Rule Port 1926 only - Apr 17 23:00s 1:00 S -Rule Port 1926 1929 - Oct Sat>=1 23:00s 0 - -Rule Port 1927 only - Apr 9 23:00s 1:00 S -Rule Port 1928 only - Apr 14 23:00s 1:00 S -Rule Port 1929 only - Apr 20 23:00s 1:00 S -Rule Port 1931 only - Apr 18 23:00s 1:00 S -# Whitman gives 1931 Oct 8; go with Shanks & Pottenger. -Rule Port 1931 1932 - Oct Sat>=1 23:00s 0 - -Rule Port 1932 only - Apr 2 23:00s 1:00 S -Rule Port 1934 only - Apr 7 23:00s 1:00 S -# Whitman gives 1934 Oct 5; go with Shanks & Pottenger. -Rule Port 1934 1938 - Oct Sat>=1 23:00s 0 - -# Shanks & Pottenger give 1935 Apr 30; go with Whitman. -Rule Port 1935 only - Mar 30 23:00s 1:00 S -Rule Port 1936 only - Apr 18 23:00s 1:00 S -# Whitman gives 1937 Apr 2; go with Shanks & Pottenger. -Rule Port 1937 only - Apr 3 23:00s 1:00 S -Rule Port 1938 only - Mar 26 23:00s 1:00 S -Rule Port 1939 only - Apr 15 23:00s 1:00 S -# Whitman gives 1939 Oct 7; go with Shanks & Pottenger. -Rule Port 1939 only - Nov 18 23:00s 0 - -Rule Port 1940 only - Feb 24 23:00s 1:00 S -# Shanks & Pottenger give 1940 Oct 7; go with Whitman. -Rule Port 1940 1941 - Oct 5 23:00s 0 - -Rule Port 1941 only - Apr 5 23:00s 1:00 S -Rule Port 1942 1945 - Mar Sat>=8 23:00s 1:00 S -Rule Port 1942 only - Apr 25 22:00s 2:00 M # Midsummer -Rule Port 1942 only - Aug 15 22:00s 1:00 S -Rule Port 1942 1945 - Oct Sat>=24 23:00s 0 - -Rule Port 1943 only - Apr 17 22:00s 2:00 M -Rule Port 1943 1945 - Aug Sat>=25 22:00s 1:00 S -Rule Port 1944 1945 - Apr Sat>=21 22:00s 2:00 M -Rule Port 1946 only - Apr Sat>=1 23:00s 1:00 S -Rule Port 1946 only - Oct Sat>=1 23:00s 0 - -Rule Port 1947 1949 - Apr Sun>=1 2:00s 1:00 S -Rule Port 1947 1949 - Oct Sun>=1 2:00s 0 - -# Shanks & Pottenger say DST was observed in 1950; go with Whitman. -# Whitman gives Oct lastSun for 1952 on; go with Shanks & Pottenger. 
-Rule Port 1951 1965 - Apr Sun>=1 2:00s 1:00 S -Rule Port 1951 1965 - Oct Sun>=1 2:00s 0 - -Rule Port 1977 only - Mar 27 0:00s 1:00 S -Rule Port 1977 only - Sep 25 0:00s 0 - -Rule Port 1978 1979 - Apr Sun>=1 0:00s 1:00 S -Rule Port 1978 only - Oct 1 0:00s 0 - -Rule Port 1979 1982 - Sep lastSun 1:00s 0 - -Rule Port 1980 only - Mar lastSun 0:00s 1:00 S -Rule Port 1981 1982 - Mar lastSun 1:00s 1:00 S -Rule Port 1983 only - Mar lastSun 2:00s 1:00 S -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Europe/Lisbon -0:36:45 - LMT 1884 - -0:36:45 - LMT 1912 Jan 1 # Lisbon Mean Time - 0:00 Port WE%sT 1966 Apr 3 2:00 - 1:00 - CET 1976 Sep 26 1:00 - 0:00 Port WE%sT 1983 Sep 25 1:00s - 0:00 W-Eur WE%sT 1992 Sep 27 1:00s - 1:00 EU CE%sT 1996 Mar 31 1:00u - 0:00 EU WE%sT -# This Zone can be simplified once we assume zic %z. -Zone Atlantic/Azores -1:42:40 - LMT 1884 # Ponta Delgada - -1:54:32 - HMT 1912 Jan 1 # Horta Mean Time - -2:00 Port -02/-01 1942 Apr 25 22:00s - -2:00 Port +00 1942 Aug 15 22:00s - -2:00 Port -02/-01 1943 Apr 17 22:00s - -2:00 Port +00 1943 Aug 28 22:00s - -2:00 Port -02/-01 1944 Apr 22 22:00s - -2:00 Port +00 1944 Aug 26 22:00s - -2:00 Port -02/-01 1945 Apr 21 22:00s - -2:00 Port +00 1945 Aug 25 22:00s - -2:00 Port -02/-01 1966 Apr 3 2:00 - -1:00 Port -01/+00 1983 Sep 25 1:00s - -1:00 W-Eur -01/+00 1992 Sep 27 1:00s - 0:00 EU WE%sT 1993 Mar 28 1:00u - -1:00 EU -01/+00 -# This Zone can be simplified once we assume zic %z. -Zone Atlantic/Madeira -1:07:36 - LMT 1884 # Funchal - -1:07:36 - FMT 1912 Jan 1 # Funchal Mean Time - -1:00 Port -01/+00 1942 Apr 25 22:00s - -1:00 Port +01 1942 Aug 15 22:00s - -1:00 Port -01/+00 1943 Apr 17 22:00s - -1:00 Port +01 1943 Aug 28 22:00s - -1:00 Port -01/+00 1944 Apr 22 22:00s - -1:00 Port +01 1944 Aug 26 22:00s - -1:00 Port -01/+00 1945 Apr 21 22:00s - -1:00 Port +01 1945 Aug 25 22:00s - -1:00 Port -01/+00 1966 Apr 3 2:00 - 0:00 Port WE%sT 1983 Sep 25 1:00s - 0:00 EU WE%sT - -# Romania -# -# From Paul Eggert (1999-10-07): -# Nine O'clock -# (1998-10-23) reports that the switch occurred at -# 04:00 local time in fall 1998. For lack of better info, -# assume that Romania and Moldova switched to EU rules in 1997, -# the same year as Bulgaria. -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Romania 1932 only - May 21 0:00s 1:00 S -Rule Romania 1932 1939 - Oct Sun>=1 0:00s 0 - -Rule Romania 1933 1939 - Apr Sun>=2 0:00s 1:00 S -Rule Romania 1979 only - May 27 0:00 1:00 S -Rule Romania 1979 only - Sep lastSun 0:00 0 - -Rule Romania 1980 only - Apr 5 23:00 1:00 S -Rule Romania 1980 only - Sep lastSun 1:00 0 - -Rule Romania 1991 1993 - Mar lastSun 0:00s 1:00 S -Rule Romania 1991 1993 - Sep lastSun 0:00s 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Europe/Bucharest 1:44:24 - LMT 1891 Oct - 1:44:24 - BMT 1931 Jul 24 # Bucharest MT - 2:00 Romania EE%sT 1981 Mar 29 2:00s - 2:00 C-Eur EE%sT 1991 - 2:00 Romania EE%sT 1994 - 2:00 E-Eur EE%sT 1997 - 2:00 EU EE%sT - - -# Russia - -# From Alexander Krivenyshev (2011-09-15): -# Based on last Russian Government Decree No. 725 on August 31, 2011 -# (Government document -# http://www.government.ru/gov/results/16355/print/ -# in Russian) -# there are few corrections have to be made for some Russian time zones... -# All updated Russian Time Zones were placed in table and translated to English -# by WorldTimeZone.com at the link below: -# http://www.worldtimezone.com/dst_news/dst_news_russia36.htm - -# From Sanjeev Gupta (2011-09-27): -# Scans of [Decree No. 
23 of January 8, 1992] are available at: -# http://government.consultant.ru/page.aspx?1223966 -# They are in Cyrillic letters (presumably Russian). - -# From Arthur David Olson (2012-05-09): -# Regarding the instant when clocks in time-zone-shifting parts of Russia -# changed in September 2011: -# -# One source is -# http://government.ru/gov/results/16355/ -# which, according to translate.google.com, begins "Decree of August 31, -# 2011 No. 725" and contains no other dates or "effective date" information. -# -# Another source is -# https://rg.ru/2011/09/06/chas-zona-dok.html -# which, according to translate.google.com, begins "Resolution of the -# Government of the Russian Federation on August 31, 2011 N 725" and also -# contains "Date first official publication: September 6, 2011 Posted on: -# in the 'RG' - Federal Issue No. 5573 September 6, 2011" but which -# does not contain any "effective date" information. -# -# Another source is -# https://en.wikipedia.org/wiki/Oymyakonsky_District#cite_note-RuTime-7 -# which, in note 8, contains "Resolution No. 725 of August 31, 2011... -# Effective as of after 7 days following the day of the official publication" -# but which does not contain any reference to September 6, 2011. -# -# The Wikipedia article refers to -# http://base.consultant.ru/cons/cgi/online.cgi?req=doc;base=LAW;n=118896 -# which seems to copy the text of the government.ru page. -# -# Tobias Conradi combines Wikipedia's -# "as of after 7 days following the day of the official publication" -# with www.rg.ru's "Date of first official publication: September 6, 2011" to -# get September 13, 2011 as the cutover date (unusually, a Tuesday, as Tobias -# Conradi notes). -# -# None of the sources indicates a time of day for changing clocks. -# -# Go with 2011-09-13 0:00s. - -# From Alexander Krivenyshev (2014-07-01): -# According to the Russian news (ITAR-TASS News Agency) -# http://en.itar-tass.com/russia/738562 -# the State Duma has approved ... the draft bill on returning to -# winter time standard and return Russia 11 time zones. The new -# regulations will come into effect on October 26, 2014 at 02:00 ... -# http://asozd2.duma.gov.ru/main.nsf/%28Spravka%29?OpenAgent&RN=431985-6&02 -# Here is a link where we put together table (based on approved Bill N -# 431985-6) with proposed 11 Russian time zones and corresponding -# areas/cities/administrative centers in the Russian Federation (in English): -# http://www.worldtimezone.com/dst_news/dst_news_russia65.html -# -# From Alexander Krivenyshev (2014-07-22): -# Putin signed the Federal Law 431985-6 ... (in Russian) -# http://itar-tass.com/obschestvo/1333711 -# http://www.pravo.gov.ru:8080/page.aspx?111660 -# http://www.kremlin.ru/acts/46279 -# From October 26, 2014 the new Russian time zone map will look like this: -# http://www.worldtimezone.com/dst_news/dst_news_russia-map-2014-07.html - -# From Paul Eggert (2006-03-22): -# Moscow time zone abbreviations after 1919-07-01, and Moscow rules after 1991, -# are from Andrey A. Chernov. The rest is from Shanks & Pottenger, -# except we follow Chernov's report that 1992 DST transitions were Sat -# 23:00, not Sun 02:00s. -# -# From Stanislaw A. Kuzikowski (1994-06-29): -# But now it is some months since Novosibirsk is 3 hours ahead of Moscow! -# I do not know why they have decided to make this change; -# as far as I remember it was done exactly during winter->summer switching -# so we (Novosibirsk) simply did not switch. -# -# From Andrey A. 
Chernov (1996-10-04): -# 'MSK' and 'MSD' were born and used initially on Moscow computers with -# UNIX-like OSes by several developer groups (e.g. Demos group, Kiae group).... -# The next step was the UUCP network, the Relcom predecessor -# (used mainly for mail), and MSK/MSD was actively used there. -# -# From Chris Carrier (1996-10-30): -# According to a friend of mine who rode the Trans-Siberian Railroad from -# Moscow to Irkutsk in 1995, public air and rail transport in Russia ... -# still follows Moscow time, no matter where in Russia it is located. -# -# For Grozny, Chechnya, we have the following story from -# John Daniszewski, "Scavengers in the Rubble", Los Angeles Times (2001-02-07): -# News - often false - is spread by word of mouth. A rumor that it was -# time to move the clocks back put this whole city out of sync with -# the rest of Russia for two weeks - even soldiers stationed here began -# enforcing curfew at the wrong time. -# -# From Gwillim Law (2001-06-05): -# There's considerable evidence that Sakhalin Island used to be in -# UTC+11, and has changed to UTC+10, in this decade. I start with the -# SSIM, which listed Yuzhno-Sakhalinsk in zone RU10 along with Magadan -# until February 1997, and then in RU9 with Khabarovsk and Vladivostok -# since September 1997.... Although the Kuril Islands are -# administratively part of Sakhalin oblast', they appear to have -# remained on UTC+11 along with Magadan. - -# From Tim Parenti (2014-07-06): -# The comments detailing the coverage of each Russian zone are meant to assist -# with maintenance only and represent our best guesses as to which regions -# are covered by each zone. They are not meant to be taken as an authoritative -# listing. The region codes listed come from -# https://en.wikipedia.org/w/?title=Federal_subjects_of_Russia&oldid=611810498 -# and are used for convenience only; no guarantees are made regarding their -# future stability. ISO 3166-2:RU codes are also listed for first-level -# divisions where available. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] - - -# From Tim Parenti (2014-07-03): -# Europe/Kaliningrad covers... -# 39 RU-KGD Kaliningrad Oblast - -# From Paul Eggert (2016-03-18): -# The 1989 transition is from USSR act No. 227 (1989-03-14). - -# From Stepan Golosunov (2016-03-07): -# http://www.rgo.ru/ru/kaliningradskoe-oblastnoe-otdelenie/ob-otdelenii/publikacii/kak-nam-zhilos-bez-letnego-vremeni -# confirms that the 1989 change to Moscow-1 was implemented. -# (The article, though, is misattributed to 1990 while saying that -# summer->winter transition would be done on the 24 of September. But -# 1990-09-24 was Monday, while 1989-09-24 was Sunday as expected.) -# ... -# http://www.kaliningradka.ru/site_pc/cherez/index.php?ELEMENT_ID=40091 -# says that Kaliningrad switched to Moscow-1 on 1989-03-26, avoided -# at the last moment switch to Moscow-1 on 1991-03-31, switched to -# Moscow on 1991-11-03, switched to Moscow-1 on 1992-01-19. - -Zone Europe/Kaliningrad 1:22:00 - LMT 1893 Apr - 1:00 C-Eur CE%sT 1945 - 2:00 Poland CE%sT 1946 - 3:00 Russia MSK/MSD 1989 Mar 26 2:00s - 2:00 Russia EE%sT 2011 Mar 27 2:00s - 3:00 - +03 2014 Oct 26 2:00s - 2:00 - EET - - -# From Paul Eggert (2016-02-21), per Tim Parenti (2014-07-03) and -# Oscar van Vlijmen (2001-08-25): -# Europe/Moscow covers... 
-# 01 RU-AD Adygea, Republic of -# 05 RU-DA Dagestan, Republic of -# 06 RU-IN Ingushetia, Republic of -# 07 RU-KB Kabardino-Balkar Republic -# 08 RU-KL Kalmykia, Republic of -# 09 RU-KC Karachay-Cherkess Republic -# 10 RU-KR Karelia, Republic of -# 11 RU-KO Komi Republic -# 12 RU-ME Mari El Republic -# 13 RU-MO Mordovia, Republic of -# 15 RU-SE North Ossetia-Alania, Republic of -# 16 RU-TA Tatarstan, Republic of -# 20 RU-CE Chechen Republic -# 21 RU-CU Chuvash Republic -# 23 RU-KDA Krasnodar Krai -# 26 RU-STA Stavropol Krai -# 29 RU-ARK Arkhangelsk Oblast -# 31 RU-BEL Belgorod Oblast -# 32 RU-BRY Bryansk Oblast -# 33 RU-VLA Vladimir Oblast -# 35 RU-VLG Vologda Oblast -# 36 RU-VOR Voronezh Oblast -# 37 RU-IVA Ivanovo Oblast -# 40 RU-KLU Kaluga Oblast -# 44 RU-KOS Kostroma Oblast -# 46 RU-KRS Kursk Oblast -# 47 RU-LEN Leningrad Oblast -# 48 RU-LIP Lipetsk Oblast -# 50 RU-MOS Moscow Oblast -# 51 RU-MUR Murmansk Oblast -# 52 RU-NIZ Nizhny Novgorod Oblast -# 53 RU-NGR Novgorod Oblast -# 57 RU-ORL Oryol Oblast -# 58 RU-PNZ Penza Oblast -# 60 RU-PSK Pskov Oblast -# 61 RU-ROS Rostov Oblast -# 62 RU-RYA Ryazan Oblast -# 67 RU-SMO Smolensk Oblast -# 68 RU-TAM Tambov Oblast -# 69 RU-TVE Tver Oblast -# 71 RU-TUL Tula Oblast -# 76 RU-YAR Yaroslavl Oblast -# 77 RU-MOW Moscow -# 78 RU-SPE Saint Petersburg -# 83 RU-NEN Nenets Autonomous Okrug - -# From Paul Eggert (2016-08-23): -# The Soviets switched to UT-based time in 1919. Decree No. 59 -# (1919-02-08) http://istmat.info/node/35567 established UT-based time -# zones, and Decree No. 147 (1919-03-29) http://istmat.info/node/35854 -# specified a transition date of 1919-07-01, apparently at 00:00 UT. -# No doubt only the Soviet-controlled regions switched on that date; -# later transitions to UT-based time in other parts of Russia are -# taken from what appear to be guesses by Shanks. -# (Thanks to Alexander Belopolsky for pointers to the decrees.) - -# From Stepan Golosunov (2016-03-07): -# 11. Regions-violators, 1981-1982. -# Wikipedia refers to -# http://maps.monetonos.ru/maps/raznoe/Old_Maps/Old_Maps/Articles/022/3_1981.html -# http://besp.narod.ru/nauka_1981_3.htm -# -# The second link provides two articles scanned from the Nauka i Zhizn -# magazine No. 3, 1981 and a scan of the short article attributed to -# the Trud newspaper from February 1982. The first link provides the -# same Nauka i Zhizn articles converted to the text form (but misses -# time belt changes map). -# -# The second Nauka i Zhizn article says that in addition to -# introduction of summer time on 1981-04-01 there are some time belt -# border changes on 1981-10-01, mostly affecting Nenets Autonomous -# Okrug, Krasnoyarsk Krai, Yakutia, Magadan Oblast and Chukotka -# according to the provided map (colored one). In addition to that -# "time violators" (regions which were not using rules of the time -# belts in which they were located) would not be moving off the DST on -# 1981-10-01 to restore the decree time usage. (Komi ASSR was -# supposed to repeat that move in October 1982 to account for the 2 -# hour difference.) Map depicting "time violators" before 1981-10-01 -# is also provided. -# -# The article from Trud says that 1981-10-01 changes caused problems -# and some territories would be moved to pre-1981-10-01 time by not -# moving to summer time on 1982-04-01. 
Namely: Dagestan, -# Kabardino-Balkar, Kalmyk, Komi, Mari, Mordovian, North Ossetian, -# Tatar, Chechen-Ingush and Chuvash ASSR, Krasnodar and Stavropol -# krais, Arkhangelsk, Vladimir, Vologda, Voronezh, Gorky, Ivanovo, -# Kostroma, Lipetsk, Penza, Rostov, Ryazan, Tambov, Tyumen and -# Yaroslavl oblasts, Nenets and Evenk autonomous okrugs, Khatangsky -# district of Taymyr Autonomous Okrug. As a result Evenk Autonomous -# Okrug and Khatangsky district of Taymyr Autonomous Okrug would end -# up on Moscow+4, Tyumen Oblast on Moscow+2 and the rest on Moscow -# time. -# -# http://astrozet.net/files/Zones/DOC/RU/1980-925.txt -# attributes the 1982 changes to the Act of the Council of Ministers -# of the USSR No. 126 from 18.02.1982. 1980-925.txt also adds -# Udmurtia to the list of affected territories and lists Khatangsky -# district separately from Taymyr Autonomous Okrug. Probably erroneously. -# -# The affected territories are currently listed under Europe/Moscow, -# Asia/Yekaterinburg and Asia/Krasnoyarsk. -# -# 12. Udmurtia -# The fact that Udmurtia is depicted as a violator in the Nauka i -# Zhizn article hints at Izhevsk being on different time from -# Kuybyshev before 1981-10-01. Udmurtia is not mentioned in the 1989 act. -# http://astrozet.net/files/Zones/DOC/RU/1980-925.txt -# implies Udmurtia was on Moscow time after 1982-04-01. -# Wikipedia implies Udmurtia being on Moscow+1 until 1991. -# -# ... -# -# All Russian zones are supposed to have by default a -1 change at -# 1991-03-31 2:00 (cancellation of the decree time in the USSR) and a +1 -# change at 1992-01-19 2:00 (restoration of the decree time in Russia). -# -# There were some exceptions, though. -# Wikipedia says newspapers listed Astrakhan, Saratov, Kirov, Volgograd, -# Izhevsk, Grozny, Kazan and Samara as such exceptions for the 1992 -# change. (Different newspapers providing different lists. And some -# lists found in the internet are quite wild.) -# -# And apparently some exceptions were reverted in the last moment. -# http://www.kaliningradka.ru/site_pc/cherez/index.php?ELEMENT_ID=40091 -# says that Kaliningrad decided not to be an exception 2 days before the -# 1991-03-31 switch and one person at -# https://izhevsk.ru/forum_light_message/50/682597-m8369040.html -# says he remembers that Samara opted out of the 1992-01-19 exception -# 2 days before the switch. -# -# -# From Paul Eggert (2016-03-18): -# Given the above, we appear to be missing some Zone entries for the -# chaotic early 1980s in Russia. It's not clear what these entries -# should be. For now, sweep this under the rug and just document the -# time in Moscow. - -# From Vladimir Karpinsky (2014-07-08): -# LMT in Moscow (before Jul 3, 1916) is 2:30:17, that was defined by Moscow -# Observatory (coordinates: 55 deg. 45'29.70", 37 deg. 34'05.30").... -# LMT in Moscow since Jul 3, 1916 is 2:31:01 as a result of new standard. -# (The info is from the book by Byalokoz ... p. 18.) -# The time in St. Petersburg as capital of Russia was defined by -# Pulkov observatory, near St. Petersburg. In 1916 LMT Moscow -# was synchronized with LMT St. Petersburg (+30 minutes), (Pulkov observatory -# coordinates: 59 deg. 46'18.70", 30 deg. 19'40.70") so 30 deg. 19'40.70" > -# 2h01m18.7s = 2:01:19. LMT Moscow = LMT St.Petersburg + 30m 2:01:19 + 0:30 = -# 2:31:19 ... -# -# From Paul Eggert (2014-07-08): -# Milne does not list Moscow, but suggests that its time might be listed in -# Résumés mensuels et annuels des observations météorologiques (1895). 
-# Presumably this is OCLC 85825704, a journal published with parallel text in -# Russian and French. This source has not been located; go with Karpinsky. - -Zone Europe/Moscow 2:30:17 - LMT 1880 - 2:30:17 - MMT 1916 Jul 3 # Moscow Mean Time - 2:31:19 Russia %s 1919 Jul 1 0:00u - 3:00 Russia %s 1921 Oct - 3:00 Russia MSK/MSD 1922 Oct - 2:00 - EET 1930 Jun 21 - 3:00 Russia MSK/MSD 1991 Mar 31 2:00s - 2:00 Russia EE%sT 1992 Jan 19 2:00s - 3:00 Russia MSK/MSD 2011 Mar 27 2:00s - 4:00 - MSK 2014 Oct 26 2:00s - 3:00 - MSK - - -# From Paul Eggert (2016-12-06): -# Europe/Simferopol covers Crimea. - -Zone Europe/Simferopol 2:16:24 - LMT 1880 - 2:16 - SMT 1924 May 2 # Simferopol Mean T - 2:00 - EET 1930 Jun 21 - 3:00 - MSK 1941 Nov - 1:00 C-Eur CE%sT 1944 Apr 13 - 3:00 Russia MSK/MSD 1990 - 3:00 - MSK 1990 Jul 1 2:00 - 2:00 - EET 1992 -# Central Crimea used Moscow time 1994/1997. -# -# From Paul Eggert (2006-03-22): -# The _Economist_ (1994-05-28, p 45) reports that central Crimea switched -# from Kiev to Moscow time sometime after the January 1994 elections. -# Shanks (1999) says "date of change uncertain", but implies that it happened -# sometime between the 1994 DST switches. Shanks & Pottenger simply say -# 1994-09-25 03:00, but that can't be right. For now, guess it -# changed in May. - 2:00 E-Eur EE%sT 1994 May -# From IATA SSIM (1994/1997), which also says that Kerch is still like Kiev. - 3:00 E-Eur MSK/MSD 1996 Mar 31 0:00s - 3:00 1:00 MSD 1996 Oct 27 3:00s -# IATA SSIM (1997-09) says Crimea switched to EET/EEST. -# Assume it happened in March by not changing the clocks. - 3:00 Russia MSK/MSD 1997 - 3:00 - MSK 1997 Mar lastSun 1:00u -# From Alexander Krivenyshev (2014-03-17): -# time change at 2:00 (2am) on March 30, 2014 -# https://vz.ru/news/2014/3/17/677464.html -# From Paul Eggert (2014-03-30): -# Simferopol and Sevastopol reportedly changed their central town clocks -# late the previous day, but this appears to have been ceremonial -# and the discrepancies are small enough to not worry about. - 2:00 EU EE%sT 2014 Mar 30 2:00 - 4:00 - MSK 2014 Oct 26 2:00s - 3:00 - MSK - - -# From Paul Eggert (2016-03-18): -# Europe/Astrakhan covers: -# 30 RU-AST Astrakhan Oblast -# -# The 1989 transition is from USSR act No. 227 (1989-03-14). - -# From Alexander Krivenyshev (2016-01-12): -# On February 10, 2016 Astrakhan Oblast got approval by the Federation -# Council to change its time zone to UTC+4 (from current UTC+3 Moscow time).... -# This Federal Law shall enter into force on 27 March 2016 at 02:00. -# From Matt Johnson (2016-03-09): -# http://publication.pravo.gov.ru/Document/View/0001201602150056 - -Zone Europe/Astrakhan 3:12:12 - LMT 1924 May - 3:00 - +03 1930 Jun 21 - 4:00 Russia +04/+05 1989 Mar 26 2:00s - 3:00 Russia +03/+04 1991 Mar 31 2:00s - 4:00 - +04 1992 Mar 29 2:00s - 3:00 Russia +03/+04 2011 Mar 27 2:00s - 4:00 - +04 2014 Oct 26 2:00s - 3:00 - +03 2016 Mar 27 2:00s - 4:00 - +04 - -# From Paul Eggert (2016-11-11): -# Europe/Volgograd covers: -# 34 RU-VGG Volgograd Oblast -# The 1988 transition is from USSR act No. 5 (1988-01-04). - -Zone Europe/Volgograd 2:57:40 - LMT 1920 Jan 3 - 3:00 - +03 1930 Jun 21 - 4:00 - +04 1961 Nov 11 - 4:00 Russia +04/+05 1988 Mar 27 2:00s - 3:00 Russia +03/+04 1991 Mar 31 2:00s - 4:00 - +04 1992 Mar 29 2:00s - 3:00 Russia +03/+04 2011 Mar 27 2:00s - 4:00 - +04 2014 Oct 26 2:00s - 3:00 - +03 - -# From Paul Eggert (2016-11-11): -# Europe/Saratov covers: -# 64 RU-SAR Saratov Oblast - -# From Yuri Konotopov (2016-11-11): -# Dec 4, 2016 02:00 UTC+3.... 
Saratov Region's local time will be ... UTC+4. -# From Stepan Golosunov (2016-11-11): -# ... Byalokoz listed Saratov on 03:04:18. -# From Stepan Golosunov (2016-11-22): -# http://publication.pravo.gov.ru/Document/View/0001201611220031 - -Zone Europe/Saratov 3:04:18 - LMT 1919 Jul 1 0:00u - 3:00 - +03 1930 Jun 21 - 4:00 Russia +04/+05 1988 Mar 27 2:00s - 3:00 Russia +03/+04 1991 Mar 31 2:00s - 4:00 - +04 1992 Mar 29 2:00s - 3:00 Russia +03/+04 2011 Mar 27 2:00s - 4:00 - +04 2014 Oct 26 2:00s - 3:00 - +03 2016 Dec 4 2:00s - 4:00 - +04 - -# From Paul Eggert (2016-03-18): -# Europe/Kirov covers: -# 43 RU-KIR Kirov Oblast -# The 1989 transition is from USSR act No. 227 (1989-03-14). -# -Zone Europe/Kirov 3:18:48 - LMT 1919 Jul 1 0:00u - 3:00 - +03 1930 Jun 21 - 4:00 Russia +04/+05 1989 Mar 26 2:00s - 3:00 Russia +03/+04 1991 Mar 31 2:00s - 4:00 - +04 1992 Mar 29 2:00s - 3:00 Russia +03/+04 2011 Mar 27 2:00s - 4:00 - +04 2014 Oct 26 2:00s - 3:00 - +03 - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2001-08-25): -# Europe/Samara covers... -# 18 RU-UD Udmurt Republic -# 63 RU-SAM Samara Oblast - -# From Paul Eggert (2016-03-18): -# Byalokoz 1919 says Samara was 3:20:20. -# The 1989 transition is from USSR act No. 227 (1989-03-14). - -Zone Europe/Samara 3:20:20 - LMT 1919 Jul 1 0:00u - 3:00 - +03 1930 Jun 21 - 4:00 - +04 1935 Jan 27 - 4:00 Russia +04/+05 1989 Mar 26 2:00s - 3:00 Russia +03/+04 1991 Mar 31 2:00s - 2:00 Russia +02/+03 1991 Sep 29 2:00s - 3:00 - +03 1991 Oct 20 3:00 - 4:00 Russia +04/+05 2010 Mar 28 2:00s - 3:00 Russia +03/+04 2011 Mar 27 2:00s - 4:00 - +04 - -# From Paul Eggert (2016-03-18): -# Europe/Ulyanovsk covers: -# 73 RU-ULY Ulyanovsk Oblast - -# The 1989 transition is from USSR act No. 227 (1989-03-14). - -# From Alexander Krivenyshev (2016-02-17): -# Ulyanovsk ... on their way to change time zones by March 27, 2016 at 2am. -# Ulyanovsk Oblast ... from MSK to MSK+1 (UTC+3 to UTC+4) ... -# 920582-6 ... 02/17/2016 The State Duma passed the bill in the first reading. -# From Matt Johnson (2016-03-09): -# http://publication.pravo.gov.ru/Document/View/0001201603090051 - -Zone Europe/Ulyanovsk 3:13:36 - LMT 1919 Jul 1 0:00u - 3:00 - +03 1930 Jun 21 - 4:00 Russia +04/+05 1989 Mar 26 2:00s - 3:00 Russia +03/+04 1991 Mar 31 2:00s - 2:00 Russia +02/+03 1992 Jan 19 2:00s - 3:00 Russia +03/+04 2011 Mar 27 2:00s - 4:00 - +04 2014 Oct 26 2:00s - 3:00 - +03 2016 Mar 27 2:00s - 4:00 - +04 - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2001-08-25): -# Asia/Yekaterinburg covers... -# 02 RU-BA Bashkortostan, Republic of -# 90 RU-PER Perm Krai -# 45 RU-KGN Kurgan Oblast -# 56 RU-ORE Orenburg Oblast -# 66 RU-SVE Sverdlovsk Oblast -# 72 RU-TYU Tyumen Oblast -# 74 RU-CHE Chelyabinsk Oblast -# 86 RU-KHM Khanty-Mansi Autonomous Okrug - Yugra -# 89 RU-YAN Yamalo-Nenets Autonomous Okrug -# -# Note: Effective 2005-12-01, (59) Perm Oblast and (81) Komi-Permyak -# Autonomous Okrug merged to form (90, RU-PER) Perm Krai. - -# Milne says Yekaterinburg was 4:02:32.9; round to nearest. -# Byalokoz 1919 says its provincial time was based on Perm, at 3:45:05. -# Assume it switched on 1916-07-03, the time of the new standard. -# The 1919 and 1930 transitions are from Shanks. 
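Offset histories like the ones tabulated here are easy to sanity-check against a compiled tzdata installation. A small Python sketch using the standard zoneinfo module (Python 3.9+); the printed values assume the host's tzdata carries the 2014-10-26 change described above:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    yekt = ZoneInfo("Asia/Yekaterinburg")
    for d in (datetime(2014, 10, 25, 12), datetime(2014, 10, 27, 12)):
        print(d.date(), d.replace(tzinfo=yekt).utcoffset())
    # Expected: 6:00:00 on 2014-10-25 and 5:00:00 on 2014-10-27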
- -Zone Asia/Yekaterinburg 4:02:33 - LMT 1916 Jul 3 - 3:45:05 - PMT 1919 Jul 15 4:00 - 4:00 - +04 1930 Jun 21 - 5:00 Russia +05/+06 1991 Mar 31 2:00s - 4:00 Russia +04/+05 1992 Jan 19 2:00s - 5:00 Russia +05/+06 2011 Mar 27 2:00s - 6:00 - +06 2014 Oct 26 2:00s - 5:00 - +05 - - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2001-08-25): -# Asia/Omsk covers... -# 55 RU-OMS Omsk Oblast - -# Byalokoz 1919 says Omsk was 4:53:30. - -Zone Asia/Omsk 4:53:30 - LMT 1919 Nov 14 - 5:00 - +05 1930 Jun 21 - 6:00 Russia +06/+07 1991 Mar 31 2:00s - 5:00 Russia +05/+06 1992 Jan 19 2:00s - 6:00 Russia +06/+07 2011 Mar 27 2:00s - 7:00 - +07 2014 Oct 26 2:00s - 6:00 - +06 - -# From Paul Eggert (2016-02-22): -# Asia/Barnaul covers: -# 04 RU-AL Altai Republic -# 22 RU-ALT Altai Krai - -# Data before 1991 are from Shanks & Pottenger. - -# From Stepan Golosunov (2016-03-07): -# Letter of Bank of Russia from 1995-05-25 -# http://www.bestpravo.ru/rossijskoje/lj-akty/y3a.htm -# suggests that Altai Republic transitioned to Moscow+3 on -# 1995-05-28. -# -# https://regnum.ru/news/society/1957270.html -# has some historical data for Altai Krai: -# before 1957: west part on UTC+6, east on UTC+7 -# after 1957: UTC+7 -# since 1995: UTC+6 -# http://barnaul.rusplt.ru/index/pochemu_altajskij_kraj_okazalsja_v_neprivychnom_chasovom_pojase-17648.html -# confirms that and provides more details including 1995-05-28 transition date. - -# From Alexander Krivenyshev (2016-02-17): -# Altai Krai and Altai Republic on their way to change time zones -# by March 27, 2016 at 2am.... -# Altai Republic / Gorno-Altaysk MSK+3 to MSK+4 (UTC+6 to UTC+7) ... -# Altai Krai / Barnaul MSK+3 to MSK+4 (UTC+6 to UTC+7) -# From Matt Johnson (2016-03-09): -# http://publication.pravo.gov.ru/Document/View/0001201603090043 -# http://publication.pravo.gov.ru/Document/View/0001201603090038 - -Zone Asia/Barnaul 5:35:00 - LMT 1919 Dec 10 - 6:00 - +06 1930 Jun 21 - 7:00 Russia +07/+08 1991 Mar 31 2:00s - 6:00 Russia +06/+07 1992 Jan 19 2:00s - 7:00 Russia +07/+08 1995 May 28 - 6:00 Russia +06/+07 2011 Mar 27 2:00s - 7:00 - +07 2014 Oct 26 2:00s - 6:00 - +06 2016 Mar 27 2:00s - 7:00 - +07 - -# From Paul Eggert (2016-03-18): -# Asia/Novosibirsk covers: -# 54 RU-NVS Novosibirsk Oblast - -# From Stepan Golosunov (2016-05-30): -# http://asozd2.duma.gov.ru/main.nsf/(Spravka)?OpenAgent&RN=1085784-6 -# moves Novosibirsk oblast from UTC+6 to UTC+7. -# From Stepan Golosunov (2016-07-04): -# The law was signed yesterday and published today on -# http://publication.pravo.gov.ru/Document/View/0001201607040064 - -Zone Asia/Novosibirsk 5:31:40 - LMT 1919 Dec 14 6:00 - 6:00 - +06 1930 Jun 21 - 7:00 Russia +07/+08 1991 Mar 31 2:00s - 6:00 Russia +06/+07 1992 Jan 19 2:00s - 7:00 Russia +07/+08 1993 May 23 # say Shanks & P. - 6:00 Russia +06/+07 2011 Mar 27 2:00s - 7:00 - +07 2014 Oct 26 2:00s - 6:00 - +06 2016 Jul 24 2:00s - 7:00 - +07 - -# From Paul Eggert (2016-03-18): -# Asia/Tomsk covers: -# 70 RU-TOM Tomsk Oblast - -# From Stepan Golosunov (2016-03-24): -# Byalokoz listed Tomsk at 5:39:51. - -# From Stanislaw A. Kuzikowski (1994-06-29): -# Tomsk is still 4 hours ahead of Moscow. - -# From Stepan Golosunov (2016-03-19): -# http://pravo.gov.ru/proxy/ips/?docbody=&nd=102075743 -# (fifth time belt being UTC+5+1(decree time) -# / UTC+5+1(decree time)+1(summer time)) ... 
-# Note that time belts (numbered from 2 (Moscow) to 12 according to their -# GMT/UTC offset and having too many exceptions like regions formally -# belonging to one belt but using time from another) were replaced -# with time zones in 2011 with different numbering (there was a -# 2-hour gap between second and third zones in 2011-2014). - -# From Stepan Golosunov (2016-04-12): -# http://asozd2.duma.gov.ru/main.nsf/(SpravkaNew)?OpenAgent&RN=1006865-6 -# This bill was approved in the first reading today. It moves Tomsk oblast -# from UTC+6 to UTC+7 and is supposed to come into effect on 2016-05-29 at -# 2:00. The bill needs to be approved in the second and the third readings by -# the State Duma, approved by the Federation Council, signed by the President -# and published to become a law. Minor changes in the text are to be expected -# before the second reading (references need to be updated to account for the -# recent changes). -# -# Judging by the ultra-short one-day amendments period, recent similar laws, -# the State Duma schedule and the Federation Council schedule -# http://www.duma.gov.ru/legislative/planning/day-shedule/por_vesna_2016/ -# http://council.gov.ru/activity/meetings/schedule/63303 -# I speculate that the final text of the bill will be proposed tomorrow, the -# bill will be approved in the second and the third readings on Friday, -# approved by the Federation Council on 2016-04-20, signed by the President and -# published as a law around 2016-04-26. - -# From Matt Johnson (2016-04-26): -# http://publication.pravo.gov.ru/Document/View/0001201604260048 - -Zone Asia/Tomsk 5:39:51 - LMT 1919 Dec 22 - 6:00 - +06 1930 Jun 21 - 7:00 Russia +07/+08 1991 Mar 31 2:00s - 6:00 Russia +06/+07 1992 Jan 19 2:00s - 7:00 Russia +07/+08 2002 May 1 3:00 - 6:00 Russia +06/+07 2011 Mar 27 2:00s - 7:00 - +07 2014 Oct 26 2:00s - 6:00 - +06 2016 May 29 2:00s - 7:00 - +07 - - -# From Tim Parenti (2014-07-03): -# Asia/Novokuznetsk covers... -# 42 RU-KEM Kemerovo Oblast - -# From Alexander Krivenyshev (2009-10-13): -# Kemerovo oblast' (Kemerovo region) in Russia will change current time zone on -# March 28, 2010: -# from current Russia Zone 6 - Krasnoyarsk Time Zone (KRA) UTC +0700 -# to Russia Zone 5 - Novosibirsk Time Zone (NOV) UTC +0600 -# -# This is according to Government of Russia decree No. 740, on September -# 14, 2009 "Application in the territory of the Kemerovo region the Fifth -# time zone." ("Russia Zone 5" or old "USSR Zone 5" is GMT +0600) -# -# Russian Government web site (Russian language) -# http://www.government.ru/content/governmentactivity/rfgovernmentdecisions/archive/2009/09/14/991633.htm -# or Russian-English translation by WorldTimeZone.com with reference -# map to local region and new Russia Time Zone map after March 28, 2010 -# http://www.worldtimezone.com/dst_news/dst_news_russia03.html -# -# Thus, when Russia will switch to DST on the night of March 28, 2010 -# Kemerovo region (Kemerovo oblast') will not change the clock. - -# From Tim Parenti (2014-07-02), per Alexander Krivenyshev (2014-07-02): -# The Kemerovo region will remain at UTC+7 through the 2014-10-26 change, thus -# realigning itself with KRAT. - -Zone Asia/Novokuznetsk 5:48:48 - LMT 1924 May 1 - 6:00 - +06 1930 Jun 21 - 7:00 Russia +07/+08 1991 Mar 31 2:00s - 6:00 Russia +06/+07 1992 Jan 19 2:00s - 7:00 Russia +07/+08 2010 Mar 28 2:00s - 6:00 Russia +06/+07 2011 Mar 27 2:00s - 7:00 - +07 - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2001-08-25): -# Asia/Krasnoyarsk covers... 
-# 17 RU-TY Tuva Republic -# 19 RU-KK Khakassia, Republic of -# 24 RU-KYA Krasnoyarsk Krai -# -# Note: Effective 2007-01-01, (88) Evenk Autonomous Okrug and (84) Taymyr -# Autonomous Okrug were merged into (24, RU-KYA) Krasnoyarsk Krai. - -# Byalokoz 1919 says Krasnoyarsk was 6:11:26. - -Zone Asia/Krasnoyarsk 6:11:26 - LMT 1920 Jan 6 - 6:00 - +06 1930 Jun 21 - 7:00 Russia +07/+08 1991 Mar 31 2:00s - 6:00 Russia +06/+07 1992 Jan 19 2:00s - 7:00 Russia +07/+08 2011 Mar 27 2:00s - 8:00 - +08 2014 Oct 26 2:00s - 7:00 - +07 - - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2001-08-25): -# Asia/Irkutsk covers... -# 03 RU-BU Buryatia, Republic of -# 38 RU-IRK Irkutsk Oblast -# -# Note: Effective 2008-01-01, (85) Ust-Orda Buryat Autonomous Okrug was -# merged into (38, RU-IRK) Irkutsk Oblast. - -# Milne 1899 says Irkutsk was 6:57:15. -# Byalokoz 1919 says Irkutsk was 6:57:05. -# Go with Byalokoz. - -Zone Asia/Irkutsk 6:57:05 - LMT 1880 - 6:57:05 - IMT 1920 Jan 25 # Irkutsk Mean Time - 7:00 - +07 1930 Jun 21 - 8:00 Russia +08/+09 1991 Mar 31 2:00s - 7:00 Russia +07/+08 1992 Jan 19 2:00s - 8:00 Russia +08/+09 2011 Mar 27 2:00s - 9:00 - +09 2014 Oct 26 2:00s - 8:00 - +08 - - -# From Tim Parenti (2014-07-06): -# Asia/Chita covers... -# 92 RU-ZAB Zabaykalsky Krai -# -# Note: Effective 2008-03-01, (75) Chita Oblast and (80) Agin-Buryat -# Autonomous Okrug merged to form (92, RU-ZAB) Zabaykalsky Krai. - -# From Alexander Krivenyshev (2016-01-02): -# [The] time zone in the Trans-Baikal Territory (Zabaykalsky Krai) - -# Asia/Chita [is changing] from UTC+8 to UTC+9. Effective date will -# be March 27, 2016 at 2:00am.... -# http://publication.pravo.gov.ru/Document/View/0001201512300107 - -Zone Asia/Chita 7:33:52 - LMT 1919 Dec 15 - 8:00 - +08 1930 Jun 21 - 9:00 Russia +09/+10 1991 Mar 31 2:00s - 8:00 Russia +08/+09 1992 Jan 19 2:00s - 9:00 Russia +09/+10 2011 Mar 27 2:00s - 10:00 - +10 2014 Oct 26 2:00s - 8:00 - +08 2016 Mar 27 2:00 - 9:00 - +09 - - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2009-11-29): -# Asia/Yakutsk covers... -# 28 RU-AMU Amur Oblast -# -# ...and parts of (14, RU-SA) Sakha (Yakutia) Republic: -# 14-02 **** Aldansky District -# 14-04 **** Amginsky District -# 14-05 **** Anabarsky District -# 14-06 **** Bulunsky District -# 14-07 **** Verkhnevilyuysky District -# 14-10 **** Vilyuysky District -# 14-11 **** Gorny District -# 14-12 **** Zhigansky District -# 14-13 **** Kobyaysky District -# 14-14 **** Lensky District -# 14-15 **** Megino-Kangalassky District -# 14-16 **** Mirninsky District -# 14-18 **** Namsky District -# 14-19 **** Neryungrinsky District -# 14-21 **** Nyurbinsky District -# 14-23 **** Olenyoksky District -# 14-24 **** Olyokminsky District -# 14-26 **** Suntarsky District -# 14-27 **** Tattinsky District -# 14-29 **** Ust-Aldansky District -# 14-32 **** Khangalassky District -# 14-33 **** Churapchinsky District -# 14-34 **** Eveno-Bytantaysky National District - -# From Tim Parenti (2014-07-03): -# Our commentary seems to have lost mention of (14-19) Neryungrinsky District. -# Since the surrounding districts of Sakha are all YAKT, assume this is, too. -# Also assume its history has been the same as the rest of Asia/Yakutsk. - -# Byalokoz 1919 says Yakutsk was 8:38:58. 
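Byalokoz's 8:38:58 is simply local mean time for Yakutsk's longitude: the Earth turns 15 degrees per hour, so 15 seconds of arc correspond to one second of time. A hedged Python check; the coordinate 129 deg 44' 30" E is an assumption chosen to reproduce the quoted figure, not a surveyed value:

    def lmt(deg, arcmin=0, arcsec=0):
        # Local mean time offset east of Greenwich, as h:mm:ss.
        seconds = round((deg * 3600 + arcmin * 60 + arcsec) / 15)
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{h}:{m:02d}:{s:02d}"

    print(lmt(129, 44, 30))   # 8:38:58, matching Byalokoz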
- -Zone Asia/Yakutsk 8:38:58 - LMT 1919 Dec 15 - 8:00 - +08 1930 Jun 21 - 9:00 Russia +09/+10 1991 Mar 31 2:00s - 8:00 Russia +08/+09 1992 Jan 19 2:00s - 9:00 Russia +09/+10 2011 Mar 27 2:00s - 10:00 - +10 2014 Oct 26 2:00s - 9:00 - +09 - - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2009-11-29): -# Asia/Vladivostok covers... -# 25 RU-PRI Primorsky Krai -# 27 RU-KHA Khabarovsk Krai -# 79 RU-YEV Jewish Autonomous Oblast -# -# ...and parts of (14, RU-SA) Sakha (Yakutia) Republic: -# 14-09 **** Verkhoyansky District -# 14-31 **** Ust-Yansky District - -# Milne 1899 says Vladivostok was 8:47:33.5. -# Byalokoz 1919 says Vladivostok was 8:47:31. -# Go with Byalokoz. - -Zone Asia/Vladivostok 8:47:31 - LMT 1922 Nov 15 - 9:00 - +09 1930 Jun 21 - 10:00 Russia +10/+11 1991 Mar 31 2:00s - 9:00 Russia +09/+10 1992 Jan 19 2:00s - 10:00 Russia +10/+11 2011 Mar 27 2:00s - 11:00 - +11 2014 Oct 26 2:00s - 10:00 - +10 - - -# From Tim Parenti (2014-07-03): -# Asia/Khandyga covers parts of (14, RU-SA) Sakha (Yakutia) Republic: -# 14-28 **** Tomponsky District -# 14-30 **** Ust-Maysky District - -# From Arthur David Olson (2012-05-09): -# Tomponskij and Ust'-Majskij switched from Vladivostok time to Yakutsk time -# in 2011. - -# From Paul Eggert (2012-11-25): -# Shanks and Pottenger (2003) has Khandyga on Yakutsk time. -# Make a wild guess that it switched to Vladivostok time in 2004. -# This transition is no doubt wrong, but we have no better info. - -Zone Asia/Khandyga 9:02:13 - LMT 1919 Dec 15 - 8:00 - +08 1930 Jun 21 - 9:00 Russia +09/+10 1991 Mar 31 2:00s - 8:00 Russia +08/+09 1992 Jan 19 2:00s - 9:00 Russia +09/+10 2004 - 10:00 Russia +10/+11 2011 Mar 27 2:00s - 11:00 - +11 2011 Sep 13 0:00s # Decree 725? - 10:00 - +10 2014 Oct 26 2:00s - 9:00 - +09 - - -# From Tim Parenti (2014-07-03): -# Asia/Sakhalin covers... -# 65 RU-SAK Sakhalin Oblast -# ...with the exception of: -# 65-11 **** Severo-Kurilsky District (North Kuril Islands) - -# From Matt Johnson (2016-02-22): -# Asia/Sakhalin is moving (in entirety) from UTC+10 to UTC+11 ... -# (2016-03-09): -# http://publication.pravo.gov.ru/Document/View/0001201603090044 - -# The Zone name should be Asia/Yuzhno-Sakhalinsk, but that's too long. -Zone Asia/Sakhalin 9:30:48 - LMT 1905 Aug 23 - 9:00 - +09 1945 Aug 25 - 11:00 Russia +11/+12 1991 Mar 31 2:00s # Sakhalin T - 10:00 Russia +10/+11 1992 Jan 19 2:00s - 11:00 Russia +11/+12 1997 Mar lastSun 2:00s - 10:00 Russia +10/+11 2011 Mar 27 2:00s - 11:00 - +11 2014 Oct 26 2:00s - 10:00 - +10 2016 Mar 27 2:00s - 11:00 - +11 - - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2009-11-29): -# Asia/Magadan covers... -# 49 RU-MAG Magadan Oblast - -# From Tim Parenti (2014-07-06), per Alexander Krivenyshev (2014-07-02): -# Magadan Oblast is moving from UTC+12 to UTC+10 on 2014-10-26; however, -# several districts of Sakha Republic as well as Severo-Kurilsky District of -# the Sakhalin Oblast (also known as the North Kuril Islands), represented -# until now by Asia/Magadan, will instead move to UTC+11. These regions will -# need their own zone. - -# From Alexander Krivenyshev (2016-03-27): -# ... draft bill 948300-6 to change its time zone from UTC+10 to UTC+11 ... -# will take ... effect ... on April 24, 2016 at 2 o'clock -# -# From Matt Johnson (2016-04-05): -# ... signed by the President today ... 
-# http://publication.pravo.gov.ru/Document/View/0001201604050038 - -Zone Asia/Magadan 10:03:12 - LMT 1924 May 2 - 10:00 - +10 1930 Jun 21 # Magadan Time - 11:00 Russia +11/+12 1991 Mar 31 2:00s - 10:00 Russia +10/+11 1992 Jan 19 2:00s - 11:00 Russia +11/+12 2011 Mar 27 2:00s - 12:00 - +12 2014 Oct 26 2:00s - 10:00 - +10 2016 Apr 24 2:00s - 11:00 - +11 - - -# From Tim Parenti (2014-07-06): -# Asia/Srednekolymsk covers parts of (14, RU-SA) Sakha (Yakutia) Republic: -# 14-01 **** Abyysky District -# 14-03 **** Allaikhovsky District -# 14-08 **** Verkhnekolymsky District -# 14-17 **** Momsky District -# 14-20 **** Nizhnekolymsky District -# 14-25 **** Srednekolymsky District -# -# ...and parts of (65, RU-SAK) Sakhalin Oblast: -# 65-11 **** Severo-Kurilsky District (North Kuril Islands) - -# From Tim Parenti (2014-07-02): -# Oymyakonsky District of Sakha Republic (represented by Ust-Nera), along with -# most of Sakhalin Oblast (represented by Sakhalin) will be moving to UTC+10 on -# 2014-10-26 to stay aligned with VLAT/SAKT; however, Severo-Kurilsky District -# of the Sakhalin Oblast (also known as the North Kuril Islands, represented by -# Severo-Kurilsk) will remain on UTC+11. - -# From Tim Parenti (2014-07-06): -# Assume North Kuril Islands have history like Magadan before 2011-03-27. -# There is a decent chance this is wrong, in which case a new zone -# Asia/Severo-Kurilsk would become necessary. -# -# Srednekolymsk and Zyryanka are the most populous places amongst these -# districts, but have very similar populations. In fact, Wikipedia currently -# lists them both as having 3528 people, exactly 1668 males and 1860 females -# each! (Yikes!) -# https://en.wikipedia.org/w/?title=Srednekolymsky_District&oldid=603435276 -# https://en.wikipedia.org/w/?title=Verkhnekolymsky_District&oldid=594378493 -# Assume this is a mistake, albeit an amusing one. -# -# Looking at censuses, the populations of the two municipalities seem to have -# fluctuated recently. Zyryanka was more populous than Srednekolymsk in the -# 1989 and 2002 censuses, but Srednekolymsk was more populous in the most -# recent (2010) census, 3525 to 3170. (See pages 195 and 197 of -# http://www.gks.ru/free_doc/new_site/perepis2010/croc/Documents/Vol1/pub-01-05.pdf -# in Russian.) In addition, Srednekolymsk appears to be a much older -# settlement and the population of Zyryanka seems to be declining. -# Go with Srednekolymsk. - -Zone Asia/Srednekolymsk 10:14:52 - LMT 1924 May 2 - 10:00 - +10 1930 Jun 21 - 11:00 Russia +11/+12 1991 Mar 31 2:00s - 10:00 Russia +10/+11 1992 Jan 19 2:00s - 11:00 Russia +11/+12 2011 Mar 27 2:00s - 12:00 - +12 2014 Oct 26 2:00s - 11:00 - +11 - - -# From Tim Parenti (2014-07-03): -# Asia/Ust-Nera covers parts of (14, RU-SA) Sakha (Yakutia) Republic: -# 14-22 **** Oymyakonsky District - -# From Arthur David Olson (2012-05-09): -# Ojmyakonskij [and the Kuril Islands] switched from -# Magadan time to Vladivostok time in 2011. -# -# From Tim Parenti (2014-07-06), per Alexander Krivenyshev (2014-07-02): -# It's unlikely that any of the Kuril Islands were involved in such a switch, -# as the South and Middle Kurils have been on UTC+11 (SAKT) with the rest of -# Sakhalin Oblast since at least 2011-09, and the North Kurils have been on -# UTC+12 since at least then, too. 
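Claims about which offsets a region observed, and when, can be spot-checked by scanning a zone for changes. A rough Python sketch: it steps one day at a time, so it reports the first day a change is visible rather than the exact instant, and it assumes the host tzdata includes this history:

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("Asia/Ust-Nera")
    t = datetime(2011, 1, 1, tzinfo=timezone.utc)
    prev = t.astimezone(tz).utcoffset()
    while t.year < 2012:
        t += timedelta(days=1)
        cur = t.astimezone(tz).utcoffset()
        if cur != prev:
            print(t.date(), prev, "->", cur)
            prev = cur
    # Expect two changes in 2011: +11 -> +12 in March and +12 -> +11 in September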
- -Zone Asia/Ust-Nera 9:32:54 - LMT 1919 Dec 15 - 8:00 - +08 1930 Jun 21 - 9:00 Russia +09/+10 1981 Apr 1 - 11:00 Russia +11/+12 1991 Mar 31 2:00s - 10:00 Russia +10/+11 1992 Jan 19 2:00s - 11:00 Russia +11/+12 2011 Mar 27 2:00s - 12:00 - +12 2011 Sep 13 0:00s # Decree 725? - 11:00 - +11 2014 Oct 26 2:00s - 10:00 - +10 - - -# From Tim Parenti (2014-07-03), per Oscar van Vlijmen (2001-08-25): -# Asia/Kamchatka covers... -# 91 RU-KAM Kamchatka Krai -# -# Note: Effective 2007-07-01, (41) Kamchatka Oblast and (82) Koryak -# Autonomous Okrug merged to form (91, RU-KAM) Kamchatka Krai. - -# The Zone name should be Asia/Petropavlovsk-Kamchatski or perhaps -# Asia/Petropavlovsk-Kamchatsky, but these are too long. -Zone Asia/Kamchatka 10:34:36 - LMT 1922 Nov 10 - 11:00 - +11 1930 Jun 21 - 12:00 Russia +12/+13 1991 Mar 31 2:00s - 11:00 Russia +11/+12 1992 Jan 19 2:00s - 12:00 Russia +12/+13 2010 Mar 28 2:00s - 11:00 Russia +11/+12 2011 Mar 27 2:00s - 12:00 - +12 - - -# From Tim Parenti (2014-07-03): -# Asia/Anadyr covers... -# 87 RU-CHU Chukotka Autonomous Okrug - -Zone Asia/Anadyr 11:49:56 - LMT 1924 May 2 - 12:00 - +12 1930 Jun 21 - 13:00 Russia +13/+14 1982 Apr 1 0:00s - 12:00 Russia +12/+13 1991 Mar 31 2:00s - 11:00 Russia +11/+12 1992 Jan 19 2:00s - 12:00 Russia +12/+13 2010 Mar 28 2:00s - 11:00 Russia +11/+12 2011 Mar 27 2:00s - 12:00 - +12 - - -# San Marino -# See Europe/Rome. - -# Serbia -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Europe/Belgrade 1:22:00 - LMT 1884 - 1:00 - CET 1941 Apr 18 23:00 - 1:00 C-Eur CE%sT 1945 - 1:00 - CET 1945 May 8 2:00s - 1:00 1:00 CEST 1945 Sep 16 2:00s -# Metod Koželj reports that the legal date of -# transition to EU rules was 1982-11-27, for all of Yugoslavia at the time. -# Shanks & Pottenger don't give as much detail, so go with Koželj. - 1:00 - CET 1982 Nov 27 - 1:00 EU CE%sT -Link Europe/Belgrade Europe/Ljubljana # Slovenia -Link Europe/Belgrade Europe/Podgorica # Montenegro -Link Europe/Belgrade Europe/Sarajevo # Bosnia and Herzegovina -Link Europe/Belgrade Europe/Skopje # Macedonia -Link Europe/Belgrade Europe/Zagreb # Croatia - -# Slovakia -Link Europe/Prague Europe/Bratislava - -# Slovenia -# See Europe/Belgrade. - -# Spain -# -# From Paul Eggert (2016-12-14): -# -# The source for Europe/Madrid before 2013 is: -# Planesas P. La hora oficial en España y sus cambios. -# Anuario del Observatorio Astronómico de Madrid (2013, in Spanish). -# http://astronomia.ign.es/rknowsys-theme/images/webAstro/paginas/documentos/Anuario/lahoraoficialenespana.pdf -# As this source says that historical time in the Canaries is obscure, -# and it does not discuss Ceuta, stick with Shanks for now for that data. -# -# In the 1918 and 1919 fallback transitions in Spain, the clock for -# the hour-longer day officially kept going after midnight, so that -# the repeated instances of that day's 00:00 hour were 24 hours apart, -# with a fallback transition from the second occurrence of 00:59... to -# the next day's 00:00. Our data format cannot represent this -# directly, and instead repeats the first hour of the next day, with a -# fallback transition from the next day's 00:59... to 00:00. - -# From Michael Deckers (2016-12-15): -# The Royal Decree of 1900-06-26 quoted by Planesas, online at -# https://www.boe.es/datos/pdfs/BOE//1900/209/A00383-00384.pdf -# says in its article 5 (my translation): -# These dispositions will enter into force beginning with the -# instant at which, according to the time indicated in article 1, -# the 1st day of January of 1901 will begin. 
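In other words, the changeover happened at the instant 1901-01-01 00:00 of the new Greenwich-based time, which on Madrid's old local mean time (-0:14:44) reads 1900-12-31 23:45:16; that is exactly the UNTIL value on the Europe/Madrid LMT line below. A quick Python check (assuming the host tzdata was compiled with the usual full pre-1970 history; note that Python prints negative offsets in its "-1 day, ..." form):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    madrid = ZoneInfo("Europe/Madrid")
    print(datetime(1900, 12, 31, 12, tzinfo=madrid).utcoffset())  # -1 day, 23:45:16, i.e. -0:14:44
    print(datetime(1901, 1, 1, 12, tzinfo=madrid).utcoffset())    # 0:00:00 after the changeover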
- -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Spain 1918 only - Apr 15 23:00 1:00 S -Rule Spain 1918 1919 - Oct 6 24:00s 0 - -Rule Spain 1919 only - Apr 6 23:00 1:00 S -Rule Spain 1924 only - Apr 16 23:00 1:00 S -Rule Spain 1924 only - Oct 4 24:00s 0 - -Rule Spain 1926 only - Apr 17 23:00 1:00 S -Rule Spain 1926 1929 - Oct Sat>=1 24:00s 0 - -Rule Spain 1927 only - Apr 9 23:00 1:00 S -Rule Spain 1928 only - Apr 15 0:00 1:00 S -Rule Spain 1929 only - Apr 20 23:00 1:00 S -# Republican Spain during the civil war; it controlled Madrid until 1939-03-28. -Rule Spain 1937 only - Jun 16 23:00 1:00 S -Rule Spain 1937 only - Oct 2 24:00s 0 - -Rule Spain 1938 only - Apr 2 23:00 1:00 S -Rule Spain 1938 only - Apr 30 23:00 2:00 M -Rule Spain 1938 only - Oct 2 24:00 1:00 S -# The following rules are for unified Spain again. -# -# Planesas does not say what happened in Madrid between its fall on -# 1939-03-28 and the Nationalist spring-forward transition on -# 1939-04-15. For lack of better info, assume Madrid's clocks did not -# change during that period. -# -# The first rule is commented out, as it is redundant for Republican Spain. -#Rule Spain 1939 only - Apr 15 23:00 1:00 S -Rule Spain 1939 only - Oct 7 24:00s 0 - -Rule Spain 1942 only - May 2 23:00 1:00 S -Rule Spain 1942 only - Sep 1 1:00 0 - -Rule Spain 1943 1946 - Apr Sat>=13 23:00 1:00 S -Rule Spain 1943 1944 - Oct Sun>=1 1:00 0 - -Rule Spain 1945 1946 - Sep lastSun 1:00 0 - -Rule Spain 1949 only - Apr 30 23:00 1:00 S -Rule Spain 1949 only - Oct 2 1:00 0 - -Rule Spain 1974 1975 - Apr Sat>=12 23:00 1:00 S -Rule Spain 1974 1975 - Oct Sun>=1 1:00 0 - -Rule Spain 1976 only - Mar 27 23:00 1:00 S -Rule Spain 1976 1977 - Sep lastSun 1:00 0 - -Rule Spain 1977 only - Apr 2 23:00 1:00 S -Rule Spain 1978 only - Apr 2 2:00s 1:00 S -Rule Spain 1978 only - Oct 1 2:00s 0 - -# Nationalist Spain during the civil war -#Rule NatSpain 1937 only - May 22 23:00 1:00 S -#Rule NatSpain 1937 1938 - Oct Sat>=1 24:00s 0 - -#Rule NatSpain 1938 only - Mar 26 23:00 1:00 S -# The following rules are copied from Morocco from 1967 through 1978. -Rule SpainAfrica 1967 only - Jun 3 12:00 1:00 S -Rule SpainAfrica 1967 only - Oct 1 0:00 0 - -Rule SpainAfrica 1974 only - Jun 24 0:00 1:00 S -Rule SpainAfrica 1974 only - Sep 1 0:00 0 - -Rule SpainAfrica 1976 1977 - May 1 0:00 1:00 S -Rule SpainAfrica 1976 only - Aug 1 0:00 0 - -Rule SpainAfrica 1977 only - Sep 28 0:00 0 - -Rule SpainAfrica 1978 only - Jun 1 0:00 1:00 S -Rule SpainAfrica 1978 only - Aug 4 0:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Europe/Madrid -0:14:44 - LMT 1900 Dec 31 23:45:16 - 0:00 Spain WE%sT 1940 Mar 16 23:00 - 1:00 Spain CE%sT 1979 - 1:00 EU CE%sT -Zone Africa/Ceuta -0:21:16 - LMT 1900 Dec 31 23:38:44 - 0:00 - WET 1918 May 6 23:00 - 0:00 1:00 WEST 1918 Oct 7 23:00 - 0:00 - WET 1924 - 0:00 Spain WE%sT 1929 - 0:00 SpainAfrica WE%sT 1984 Mar 16 - 1:00 - CET 1986 - 1:00 EU CE%sT -Zone Atlantic/Canary -1:01:36 - LMT 1922 Mar # Las Palmas de Gran C. - -1:00 - -01 1946 Sep 30 1:00 - 0:00 - WET 1980 Apr 6 0:00s - 0:00 1:00 WEST 1980 Sep 28 1:00u - 0:00 EU WE%sT -# IATA SSIM (1996-09) says the Canaries switch at 2:00u, not 1:00u. -# Ignore this for now, as the Canaries are part of the EU. 
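The Spain tables above mix all three flavors of AT/UNTIL times: bare wall-clock values (23:00), standard-time values (24:00s, 2:00s), and universal-time values (1:00u). Converting any of them to UT is plain offset arithmetic; a sketch assuming a hypothetical zone at standard offset +1:00 that is currently observing 1:00 of DST:

    from datetime import timedelta

    std, dst = timedelta(hours=1), timedelta(hours=1)
    at = timedelta(hours=2)   # a "2:00" value from an AT or UNTIL field
    print(at - std - dst)     # wall "2:00"      -> 0:00:00 UT
    print(at - std)           # standard "2:00s" -> 1:00:00 UT
    print(at)                 # universal "2:00u" -> 2:00:00 UT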
- -# Sweden - -# From Ivan Nilsson (2001-04-13), superseding Shanks & Pottenger: -# -# The law "Svensk författningssamling 1878, no 14" about standard time in 1879: -# From the beginning of 1879 (that is 01-01 00:00) the time for all -# places in the country is "the mean solar time for the meridian at -# three degrees, or twelve minutes of time, to the west of the -# meridian of the Observatory of Stockholm". The law is dated 1878-05-31. -# -# The observatory at that time had the meridian 18 degrees 03' 30" -# eastern longitude = 01:12:14 in time. Less 12 minutes gives the -# national standard time as 01:00:14 ahead of GMT.... -# -# About the beginning of CET in Sweden. The lawtext ("Svensk -# författningssamling 1899, no 44") states, that "from the beginning -# of 1900... ... the same as the mean solar time for the meridian at -# the distance of one hour of time from the meridian of the English -# observatory at Greenwich, or at 12 minutes 14 seconds to the west -# from the meridian of the Observatory of Stockholm". The law is dated -# 1899-06-16. In short: At 1900-01-01 00:00:00 the new standard time -# in Sweden is 01:00:00 ahead of GMT. -# -# 1916: The lawtext ("Svensk författningssamling 1916, no 124") states -# that "1916-05-15 is considered to begin one hour earlier". It is -# pretty obvious that at 05-14 23:00 the clocks are set to 05-15 00:00.... -# Further the law says, that "1916-09-30 is considered to end one hour later". -# -# The laws regulating [DST] are available on the site of the Swedish -# Parliament beginning with 1985 - the laws regulating 1980/1984 are -# not available on the site (to my knowledge they are only available -# in Swedish): (type -# "sommartid" without the quotes in the field "Fritext" and then click -# the Sök-button). -# -# (2001-05-13): -# -# I have now found a newspaper stating that at 1916-10-01 01:00 -# summertime the church-clocks etc were set back one hour to show -# 1916-10-01 00:00 standard time. The article also reports that some -# people thought the switch to standard time would take place already -# at 1916-10-01 00:00 summer time, but they had to wait for another -# hour before the event took place. -# -# Source: The newspaper "Dagens Nyheter", 1916-10-01, page 7 upper left. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Europe/Stockholm 1:12:12 - LMT 1879 Jan 1 - 1:00:14 - SET 1900 Jan 1 # Swedish Time - 1:00 - CET 1916 May 14 23:00 - 1:00 1:00 CEST 1916 Oct 1 1:00 - 1:00 - CET 1980 - 1:00 EU CE%sT - -# Switzerland -# From Howse: -# By the end of the 18th century clocks and watches became commonplace -# and their performance improved enormously. Communities began to keep -# mean time in preference to apparent time - Geneva from 1780 .... -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -# From Whitman (who writes "Midnight?"): -# Rule Swiss 1940 only - Nov 2 0:00 1:00 S -# Rule Swiss 1940 only - Dec 31 0:00 0 - -# From Shanks & Pottenger: -# Rule Swiss 1941 1942 - May Sun>=1 2:00 1:00 S -# Rule Swiss 1941 1942 - Oct Sun>=1 0:00 0 - - -# From Alois Treindl (2008-12-17): -# I have researched the DST usage in Switzerland during the 1940ies. -# -# As I wrote in an earlier message, I suspected the current tzdata values -# to be wrong. This is now verified. -# -# I have found copies of the original ruling by the Swiss Federal -# government, in 'Eidgenössische Gesetzessammlung 1941 and 1942' (Swiss -# federal law collection)... 
-# -# DST began on Monday 5 May 1941, 1:00 am by shifting the clocks to 2:00 am -# DST ended on Monday 6 Oct 1941, 2:00 am by shifting the clocks to 1:00 am. -# -# DST began on Monday, 4 May 1942 at 01:00 am -# DST ended on Monday, 5 Oct 1942 at 02:00 am -# -# There was no DST in 1940, I have checked the law collection carefully. -# It is also indicated by the fact that the 1942 entry in the law -# collection points back to 1941 as a reference, but no reference to any -# other years are made. -# -# Newspaper articles I have read in the archives on 6 May 1941 reported -# about the introduction of DST (Sommerzeit in German) during the previous -# night as an absolute novelty, because this was the first time that such -# a thing had happened in Switzerland. -# -# I have also checked 1916, because one book source (Gabriel, Traité de -# l'heure dans le monde) claims that Switzerland had DST in 1916. This is -# false, no official document could be found. Probably Gabriel got misled -# by references to Germany, which introduced DST in 1916 for the first time. -# -# The tzdata rules for Switzerland must be changed to: -# Rule Swiss 1941 1942 - May Mon>=1 1:00 1:00 S -# Rule Swiss 1941 1942 - Oct Mon>=1 2:00 0 - -# -# The 1940 rules must be deleted. -# -# One further detail for Switzerland, which is probably out of scope for -# most users of tzdata: The [Europe/Zurich zone] ... -# describes all of Switzerland correctly, with the exception of -# the Canton de Genève (Geneva, Genf). Between 1848 and 1894 Geneva did not -# follow Bern Mean Time but kept its own local mean time. -# To represent this, an extra zone would be needed. -# -# From Alois Treindl (2013-09-11): -# The Federal regulations say -# https://www.admin.ch/opc/de/classified-compilation/20071096/index.html -# ... the meridian for Bern mean time ... is 7 degrees 26' 22.50". -# Expressed in time, it is 0h29m45.5s. - -# From Pierre-Yves Berger (2013-09-11): -# the "Circulaire du conseil fédéral" (December 11 1893) -# http://www.amtsdruckschriften.bar.admin.ch/viewOrigDoc.do?id=10071353 -# clearly states that the [1894-06-01] change should be done at midnight -# but if no one is present after 11 at night, could be postponed until one -# hour before the beginning of service. - -# From Paul Eggert (2013-09-11): -# Round BMT to the nearest even second, 0:29:46. -# -# We can find no reliable source for Shanks's assertion that all of Switzerland -# except Geneva switched to Bern Mean Time at 00:00 on 1848-09-12. This book: -# -# Jakob Messerli. Gleichmässig, pünktlich, schnell. Zeiteinteilung und -# Zeitgebrauch in der Schweiz im 19. Jahrhundert. Chronos, Zurich 1995, -# ISBN 3-905311-68-2, OCLC 717570797. -# -# suggests that the transition was more gradual, and that the Swiss did not -# agree about civil time during the transition. The timekeeping it gives the -# most detail for is postal and telegraph time: here, federal legislation (the -# "Bundesgesetz über die Erstellung von elektrischen Telegraphen") passed on -# 1851-11-23, and an official implementation notice was published 1853-07-16 -# (Bundesblatt 1853, Bd. II, S. 859). On p 72 Messerli writes that in -# practice since July 1853 Bernese time was used in "all postal and telegraph -# offices in Switzerland from Geneva to St. Gallen and Basel to Chiasso" -# (Google translation). For now, model this transition as occurring on -# 1853-07-16, though it probably occurred at some other date in Zurich, and -# legal civil time probably changed at still some other transition date. 
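Treindl's meridian and the rounding above are easy to verify: 15 seconds of arc equal one second of time, so 7 deg 26' 22.50" works out to 1785.5 seconds, and rounding the half-second to the nearest even second gives the 0:29:46 used in the BMT line below. A one-line check in Python, whose round() also rounds halves to even:

    arcsec = 7 * 3600 + 26 * 60 + 22.5   # Bern meridian in seconds of arc
    t = arcsec / 15                      # 1785.5 seconds of time = 0h29m45.5s
    print(round(t))                      # 1786 s = 0:29:46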
- -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Swiss 1941 1942 - May Mon>=1 1:00 1:00 S -Rule Swiss 1941 1942 - Oct Mon>=1 2:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Europe/Zurich 0:34:08 - LMT 1853 Jul 16 # See above comment. - 0:29:46 - BMT 1894 Jun # Bern Mean Time - 1:00 Swiss CE%sT 1981 - 1:00 EU CE%sT - -# Turkey - -# From Kıvanç Yazan (2016-09-25): -# 1) For 1986-2006, DST started at 01:00 local and ended at 02:00 local, with -# no exceptions. -# 2) 1994's lastSun was overridden with Mar 20 ... -# Here are official papers: -# http://www.resmigazete.gov.tr/arsiv/19032.pdf - page 2 for 1986 -# http://www.resmigazete.gov.tr/arsiv/19400.pdf - page 4 for 1987 -# http://www.resmigazete.gov.tr/arsiv/19752.pdf - page 15 for 1988 -# http://www.resmigazete.gov.tr/arsiv/20102.pdf - page 6 for 1989 -# http://www.resmigazete.gov.tr/arsiv/20464.pdf - page 1 for 1990 - 1992 -# http://www.resmigazete.gov.tr/arsiv/21531.pdf - page 15 for 1993 - 1995 -# http://www.resmigazete.gov.tr/arsiv/21879.pdf - page 1 for overriding 1994 -# http://www.resmigazete.gov.tr/arsiv/22588.pdf - page 1 for 1996, 1997 -# http://www.resmigazete.gov.tr/arsiv/23286.pdf - page 10 for 1998 - 2000 -# http://www.resmigazete.gov.tr/eskiler/2001/03/20010324.htm#2 - for 2001 -# http://www.resmigazete.gov.tr/eskiler/2002/03/20020316.htm#2 - for 2002-2006 -# From Paul Eggert (2016-09-25): -# Prefer the above sources to Shanks & Pottenger for time stamps after 1985. - -# From Steffen Thorsen (2007-03-09): -# Starting 2007 though, it seems that they are adopting EU's 1:00 UTC -# start/end time, according to the following page (2007-03-07): -# http://www.ntvmsnbc.com/news/402029.asp -# The official document is located here - it is in Turkish...: -# http://rega.basbakanlik.gov.tr/eskiler/2007/03/20070307-7.htm -# I was able to locate the following seemingly official document -# (on a non-government server though) describing dates between 2002 and 2006: -# http://www.alomaliye.com/bkk_2002_3769.htm - -# From Gökdeniz Karadağ (2011-03-10): -# According to the articles linked below, Turkey will change into summer -# time zone (GMT+3) on March 28, 2011 at 3:00 a.m. instead of March 27. -# This change is due to a nationwide exam on 27th. -# https://www.worldbulletin.net/?aType=haber&ArticleID=70872 -# Turkish: -# https://www.hurriyet.com.tr/yaz-saati-uygulamasi-bir-gun-ileri-alindi-17230464 - -# From Faruk Pasin (2014-02-14): -# The DST for Turkey has been changed for this year because of the -# Turkish Local election.... -# http://www.sabah.com.tr/Ekonomi/2014/02/12/yaz-saatinde-onemli-degisiklik -# ... so Turkey will move clocks forward one hour on March 31 at 3:00 a.m. -# From Randal L. Schwartz (2014-04-15): -# Having landed on a flight from the states to Istanbul (via AMS) on March 31, -# I can tell you that NOBODY (even the airlines) respected this timezone DST -# change delay. Maybe the word just didn't get out in time. -# From Paul Eggert (2014-06-15): -# The press reported massive confusion, as election officials obeyed the rule -# change but cell phones (and airline baggage systems) did not. See: -# Kostidis M. Eventful elections in Turkey. Balkan News Agency -# http://www.balkaneu.com/eventful-elections-turkey/ 2014-03-30. -# I guess the best we can do is document the official time. - -# From Fatih (2015-09-29): -# It's officially announced now by the Ministry of Energy. 
-# Turkey delays winter time to 8th of November 04:00
-# http://www.aa.com.tr/tr/turkiye/yaz-saati-uygulamasi-8-kasimda-sona-erecek/362217
-#
-# From BBC News (2015-10-25):
-# Confused Turks are asking "what's the time?" after automatic clocks defied a
-# government decision ... "For the next two weeks #Turkey is on EEST... Erdogan
-# Engineered Standard Time," said Twitter user @aysekarahasan.
-# http://www.bbc.com/news/world-europe-34631326
-
-# From Burak AYDIN (2016-09-08):
-# Turkey will stay in Daylight Saving Time even in winter....
-# http://www.resmigazete.gov.tr/eskiler/2016/09/20160908-2.pdf
-#
-# From Paul Eggert (2016-09-07):
-# The change is permanent, so this is the new standard time in Turkey.
-# It takes effect today, which is not much notice.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Turkey 1916 only - May 1 0:00 1:00 S
-Rule Turkey 1916 only - Oct 1 0:00 0 -
-Rule Turkey 1920 only - Mar 28 0:00 1:00 S
-Rule Turkey 1920 only - Oct 25 0:00 0 -
-Rule Turkey 1921 only - Apr 3 0:00 1:00 S
-Rule Turkey 1921 only - Oct 3 0:00 0 -
-Rule Turkey 1922 only - Mar 26 0:00 1:00 S
-Rule Turkey 1922 only - Oct 8 0:00 0 -
-# Whitman gives 1923 Apr 28 - Sep 16 and no DST in 1924-1925;
-# go with Shanks & Pottenger.
-Rule Turkey 1924 only - May 13 0:00 1:00 S
-Rule Turkey 1924 1925 - Oct 1 0:00 0 -
-Rule Turkey 1925 only - May 1 0:00 1:00 S
-Rule Turkey 1940 only - Jun 30 0:00 1:00 S
-Rule Turkey 1940 only - Oct 5 0:00 0 -
-Rule Turkey 1940 only - Dec 1 0:00 1:00 S
-Rule Turkey 1941 only - Sep 21 0:00 0 -
-Rule Turkey 1942 only - Apr 1 0:00 1:00 S
-# Whitman omits the next two transitions and gives 1945 Oct 1;
-# go with Shanks & Pottenger.
-Rule Turkey 1942 only - Nov 1 0:00 0 -
-Rule Turkey 1945 only - Apr 2 0:00 1:00 S
-Rule Turkey 1945 only - Oct 8 0:00 0 -
-Rule Turkey 1946 only - Jun 1 0:00 1:00 S
-Rule Turkey 1946 only - Oct 1 0:00 0 -
-Rule Turkey 1947 1948 - Apr Sun>=16 0:00 1:00 S
-Rule Turkey 1947 1950 - Oct Sun>=2 0:00 0 -
-Rule Turkey 1949 only - Apr 10 0:00 1:00 S
-Rule Turkey 1950 only - Apr 19 0:00 1:00 S
-Rule Turkey 1951 only - Apr 22 0:00 1:00 S
-Rule Turkey 1951 only - Oct 8 0:00 0 -
-Rule Turkey 1962 only - Jul 15 0:00 1:00 S
-Rule Turkey 1962 only - Oct 8 0:00 0 -
-Rule Turkey 1964 only - May 15 0:00 1:00 S
-Rule Turkey 1964 only - Oct 1 0:00 0 -
-Rule Turkey 1970 1972 - May Sun>=2 0:00 1:00 S
-Rule Turkey 1970 1972 - Oct Sun>=2 0:00 0 -
-Rule Turkey 1973 only - Jun 3 1:00 1:00 S
-Rule Turkey 1973 only - Nov 4 3:00 0 -
-Rule Turkey 1974 only - Mar 31 2:00 1:00 S
-Rule Turkey 1974 only - Nov 3 5:00 0 -
-Rule Turkey 1975 only - Mar 30 0:00 1:00 S
-Rule Turkey 1975 1976 - Oct lastSun 0:00 0 -
-Rule Turkey 1976 only - Jun 1 0:00 1:00 S
-Rule Turkey 1977 1978 - Apr Sun>=1 0:00 1:00 S
-Rule Turkey 1977 only - Oct 16 0:00 0 -
-Rule Turkey 1979 1980 - Apr Sun>=1 3:00 1:00 S
-Rule Turkey 1979 1982 - Oct Mon>=11 0:00 0 -
-Rule Turkey 1981 1982 - Mar lastSun 3:00 1:00 S
-Rule Turkey 1983 only - Jul 31 0:00 1:00 S
-Rule Turkey 1983 only - Oct 2 0:00 0 -
-Rule Turkey 1985 only - Apr 20 0:00 1:00 S
-Rule Turkey 1985 only - Sep 28 0:00 0 -
-Rule Turkey 1986 1993 - Mar lastSun 1:00s 1:00 S
-Rule Turkey 1986 1995 - Sep lastSun 1:00s 0 -
-Rule Turkey 1994 only - Mar 20 1:00s 1:00 S
-Rule Turkey 1995 2006 - Mar lastSun 1:00s 1:00 S
-Rule Turkey 1996 2006 - Oct lastSun 1:00s 0 -
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone Europe/Istanbul 1:55:52 - LMT 1880
- 1:56:56 - IMT 1910 Oct # Istanbul Mean Time?
- 2:00 Turkey EE%sT 1978 Oct 15 - 3:00 Turkey +03/+04 1985 Apr 20 - 2:00 Turkey EE%sT 2007 - 2:00 EU EE%sT 2011 Mar 27 1:00u - 2:00 - EET 2011 Mar 28 1:00u - 2:00 EU EE%sT 2014 Mar 30 1:00u - 2:00 - EET 2014 Mar 31 1:00u - 2:00 EU EE%sT 2015 Oct 25 1:00u - 2:00 1:00 EEST 2015 Nov 8 1:00u - 2:00 EU EE%sT 2016 Sep 7 - 3:00 - +03 -Link Europe/Istanbul Asia/Istanbul # Istanbul is in both continents. - -# Ukraine -# -# From Igor Karpov, who works for the Ukrainian Ministry of Justice, -# via Garrett Wollman (2003-01-27): -# BTW, I've found the official document on this matter. It's government -# regulations No. 509, May 13, 1996. In my poor translation it says: -# "Time in Ukraine is set to second timezone (Kiev time). Each last Sunday -# of March at 3am the time is changing to 4am and each last Sunday of -# October the time at 4am is changing to 3am" - -# From Alexander Krivenyshev (2011-09-20): -# On September 20, 2011 the deputies of the Verkhovna Rada agreed to -# abolish the transfer clock to winter time. -# -# Bill No. 8330 of MP from the Party of Regions Oleg Nadoshi got -# approval from 266 deputies. -# -# Ukraine abolishes transfer back to the winter time (in Russian) -# http://news.mail.ru/politics/6861560/ -# -# The Ukrainians will no longer change the clock (in Russian) -# http://www.segodnya.ua/news/14290482.html -# -# Deputies cancelled the winter time (in Russian) -# https://www.pravda.com.ua/rus/news/2011/09/20/6600616/ -# -# From Philip Pizzey (2011-10-18): -# Today my Ukrainian colleagues have informed me that the -# Ukrainian parliament have decided that they will go to winter -# time this year after all. -# -# From Udo Schwedt (2011-10-18): -# As far as I understand, the recent change to the Ukrainian time zone -# (Europe/Kiev) to introduce permanent daylight saving time (similar -# to Russia) was reverted today: -# http://portal.rada.gov.ua/rada/control/en/publish/article/info_left?art_id=287324&cat_id=105995 -# -# Also reported by Alexander Bokovoy (2011-10-18) who also noted: -# The law documents themselves are at -# http://w1.c1.rada.gov.ua/pls/zweb_n/webproc4_1?id=&pf3511=41484 - -# From Vladimir in Moscow via Alois Treindl re Kiev time 1991/2 (2014-02-28): -# First in Ukraine they changed Time zone from UTC+3 to UTC+2 with DST: -# 03 25 1990 02:00 -03.00 1 Time Zone 3 with DST -# 07 01 1990 02:00 -02.00 1 Time Zone 2 with DST -# * Ukrainian Government's Resolution of 18.06.1990, No. 134. -# http://search.ligazakon.ua/l_doc2.nsf/link1/T001500.html -# -# They did not end DST in September, 1990 (according to the law, -# "summer time" was still in action): -# 09 30 1990 03:00 -02.00 1 Time Zone 2 with DST -# * Ukrainian Government's Resolution of 21.09.1990, No. 272. -# http://search.ligazakon.ua/l_doc2.nsf/link1/KP900272.html -# -# Again no change in March, 1991 ("summer time" in action): -# 03 31 1991 02:00 -02.00 1 Time Zone 2 with DST -# -# DST ended in September 1991 ("summer time" ended): -# 09 29 1991 03:00 -02.00 0 Time Zone 2, no DST -# * Ukrainian Government's Resolution of 25.09.1991, No. 225. -# http://www.uazakon.com/documents/date_21/pg_iwgdoc.htm -# This is an answer. -# -# Since 1992 they had normal DST procedure: -# 03 29 1992 02:00 -02.00 1 DST started -# 09 27 1992 03:00 -02.00 0 DST ended -# * Ukrainian Government's Resolution of 20.03.1992, No. 139. -# http://www.uazakon.com/documents/date_8u/pg_grcasa.htm - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# Most of Ukraine since 1970 has been like Kiev. 
-# "Kyiv" is the transliteration of the Ukrainian name, but -# "Kiev" is more common in English. -Zone Europe/Kiev 2:02:04 - LMT 1880 - 2:02:04 - KMT 1924 May 2 # Kiev Mean Time - 2:00 - EET 1930 Jun 21 - 3:00 - MSK 1941 Sep 20 - 1:00 C-Eur CE%sT 1943 Nov 6 - 3:00 Russia MSK/MSD 1990 Jul 1 2:00 - 2:00 1:00 EEST 1991 Sep 29 3:00 - 2:00 E-Eur EE%sT 1995 - 2:00 EU EE%sT -# Ruthenia used CET 1990/1991. -# "Uzhhorod" is the transliteration of the Rusyn/Ukrainian pronunciation, but -# "Uzhgorod" is more common in English. -Zone Europe/Uzhgorod 1:29:12 - LMT 1890 Oct - 1:00 - CET 1940 - 1:00 C-Eur CE%sT 1944 Oct - 1:00 1:00 CEST 1944 Oct 26 - 1:00 - CET 1945 Jun 29 - 3:00 Russia MSK/MSD 1990 - 3:00 - MSK 1990 Jul 1 2:00 - 1:00 - CET 1991 Mar 31 3:00 - 2:00 - EET 1992 - 2:00 E-Eur EE%sT 1995 - 2:00 EU EE%sT -# Zaporozh'ye and eastern Lugansk oblasts observed DST 1990/1991. -# "Zaporizhia" is the transliteration of the Ukrainian name, but -# "Zaporozh'ye" is more common in English. Use the common English -# spelling, except omit the apostrophe as it is not allowed in -# portable Posix file names. -Zone Europe/Zaporozhye 2:20:40 - LMT 1880 - 2:20 - +0220 1924 May 2 - 2:00 - EET 1930 Jun 21 - 3:00 - MSK 1941 Aug 25 - 1:00 C-Eur CE%sT 1943 Oct 25 - 3:00 Russia MSK/MSD 1991 Mar 31 2:00 - 2:00 E-Eur EE%sT 1995 - 2:00 EU EE%sT - -# Vatican City -# See Europe/Rome. - -############################################################################### - -# One source shows that Bulgaria, Cyprus, Finland, and Greece observe DST from -# the last Sunday in March to the last Sunday in September in 1986. -# The source shows Romania changing a day later than everybody else. -# -# According to Bernard Sieloff's source, Poland is in the MET time zone but -# uses the WE DST rules. The Western USSR uses EET+1 and ME DST rules. -# Bernard Sieloff's source claims Romania switches on the same day, but at -# 00:00 standard time (i.e., 01:00 DST). It also claims that Turkey -# switches on the same day, but switches on at 01:00 standard time -# and off at 00:00 standard time (i.e., 01:00 DST) - -# ... -# Date: Wed, 28 Jan 87 16:56:27 -0100 -# From: Tom Hofmann -# ... -# -# ...the European time rules are...standardized since 1981, when -# most European countries started DST. Before that year, only -# a few countries (UK, France, Italy) had DST, each according -# to own national rules. In 1981, however, DST started on -# 'Apr firstSun', and not on 'Mar lastSun' as in the following -# years... -# But also since 1981 there are some more national exceptions -# than listed in 'europe': Switzerland, for example, joined DST -# one year later, Denmark ended DST on 'Oct 1' instead of 'Sep -# lastSun' in 1981 - I don't know how they handle now. -# -# Finally, DST ist always from 'Apr 1' to 'Oct 1' in the -# Soviet Union (as far as I know). -# -# Tom Hofmann, Scientific Computer Center, CIBA-GEIGY AG, -# 4002 Basle, Switzerland -# ... - -# ... -# Date: Wed, 4 Feb 87 22:35:22 +0100 -# From: Dik T. Winter -# ... -# -# The information from Tom Hofmann is (as far as I know) not entirely correct. -# After a request from chongo at amdahl I tried to retrieve all information -# about DST in Europe. I was able to find all from about 1969. -# -# ...standardization on DST in Europe started in about 1977 with switches on -# first Sunday in April and last Sunday in September... -# In 1981 UK joined Europe insofar that -# the starting day for both shifted to last Sunday in March. 
And from 1982 -# the whole of Europe used DST, with switch dates April 1 and October 1 in -# the Sov[i]et Union. In 1985 the SU reverted to standard Europe[a]n switch -# dates... -# -# It should also be remembered that time-zones are not constants; e.g. -# Portugal switched in 1976 from MET (or CET) to WET with DST... -# Note also that though there were rules for switch dates not -# all countries abided to these dates, and many individual deviations -# occurred, though not since 1982 I believe. Another note: it is always -# assumed that DST is 1 hour ahead of normal time, this need not be the -# case; at least in the Netherlands there have been times when DST was 2 hours -# in advance of normal time. -# -# ... -# dik t. winter, cwi, amsterdam, nederland -# ... - -# From Bob Devine (1988-01-28): -# ... -# Greece: Last Sunday in April to last Sunday in September (iffy on dates). -# Since 1978. Change at midnight. -# ... -# Monaco: has same DST as France. -# ... diff --git a/src/timezone/data/factory b/src/timezone/data/factory deleted file mode 100644 index 75fa4a11c3..0000000000 --- a/src/timezone/data/factory +++ /dev/null @@ -1,10 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# For distributors who don't want to put time zone specification in -# their installation procedures. Users that run 'date' will get the -# time zone abbreviation "-00", indicating that the actual time zone -# is unknown. - -# Zone NAME GMTOFF RULES FORMAT -Zone Factory 0 - -00 diff --git a/src/timezone/data/northamerica b/src/timezone/data/northamerica deleted file mode 100644 index e5d3eca41c..0000000000 --- a/src/timezone/data/northamerica +++ /dev/null @@ -1,3419 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# also includes Central America and the Caribbean - -# This file is by no means authoritative; if you think you know better, -# go ahead and edit the file (and please send any changes to -# tz@iana.org for general use in the future). For more, please see -# the file CONTRIBUTING in the tz distribution. - -# From Paul Eggert (1999-03-22): -# A reliable and entertaining source about time zones is -# Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997). - -############################################################################### - -# United States - -# From Paul Eggert (1999-03-31): -# Howse writes (pp 121-125) that time zones were invented by -# Professor Charles Ferdinand Dowd (1825-1904), -# Principal of Temple Grove Ladies' Seminary (Saratoga Springs, NY). -# His pamphlet "A System of National Time for Railroads" (1870) -# was the result of his proposals at the Convention of Railroad Trunk Lines -# in New York City (1869-10). His 1870 proposal was based on Washington, DC, -# but in 1872-05 he moved the proposed origin to Greenwich. - -# From Paul Eggert (2016-09-21): -# Dowd's proposal left many details unresolved, such as where to draw -# lines between time zones. The key individual who made time zones -# work in the US was William Frederick Allen - railway engineer, -# managing editor of the Travelers' Guide, and secretary of the -# General Time Convention, a railway standardization group. 
Allen -# spent months in dialogs with scientific and railway leaders, -# developed a workable plan to institute time zones, and presented it -# to the General Time Convention on 1883-04-11, saying that his plan -# meant "local time would be practically abolished" - a plus for -# railway scheduling. By the next convention on 1883-10-11 nearly all -# railroads had agreed and it took effect on 1883-11-18 at 12:00. -# That Sunday was called the "day of two noons", as the eastern parts -# of the new zones observed noon twice. Allen witnessed the -# transition in New York City, writing: -# -# I heard the bells of St. Paul's strike on the old time. Four -# minutes later, obedient to the electrical signal from the Naval -# Observatory ... the time-ball made its rapid descent, the chimes -# of old Trinity rang twelve measured strokes, and local time was -# abandoned, probably forever. -# -# Most of the US soon followed suit. See: -# Bartky IR. The adoption of standard time. Technol Cult 1989 Jan;30(1):25-56. -# http://dx.doi.org/10.2307/3105430 - -# From Paul Eggert (2005-04-16): -# That 1883 transition occurred at 12:00 new time, not at 12:00 old time. -# See p 46 of David Prerau, Seize the daylight, Thunder's Mouth Press (2005). - -# From Paul Eggert (2006-03-22): -# A good source for time zone historical data in the US is -# Thomas G. Shanks, The American Atlas (5th edition), -# San Diego: ACS Publications, Inc. (1991). -# Make sure you have the errata sheet; the book is somewhat useless without it. -# It is the source for most of the pre-1991 US entries below. - -# From Paul Eggert (2001-03-06): -# Daylight Saving Time was first suggested as a joke by Benjamin Franklin -# in his whimsical essay "An Economical Project for Diminishing the Cost -# of Light" published in the Journal de Paris (1784-04-26). -# Not everyone is happy with the results: -# -# I don't really care how time is reckoned so long as there is some -# agreement about it, but I object to being told that I am saving -# daylight when my reason tells me that I am doing nothing of the kind. -# I even object to the implication that I am wasting something -# valuable if I stay in bed after the sun has risen. As an admirer -# of moonlight I resent the bossy insistence of those who want to -# reduce my time for enjoying it. At the back of the Daylight Saving -# scheme I detect the bony, blue-fingered hand of Puritanism, eager -# to push people into bed earlier, and get them up earlier, to make -# them healthy, wealthy and wise in spite of themselves. -# -# -- Robertson Davies, The diary of Samuel Marchbanks, -# Clarke, Irwin (1947), XIX, Sunday -# -# For more about the first ten years of DST in the United States, see -# Robert Garland, Ten years of daylight saving from the Pittsburgh standpoint -# (Carnegie Library of Pittsburgh, 1927). -# http://www.clpgh.org/exhibit/dst.html -# -# Shanks says that DST was called "War Time" in the US in 1918 and 1919. -# However, DST was imposed by the Standard Time Act of 1918, which -# was the first nationwide legal time standard, and apparently -# time was just called "Standard Time" or "Daylight Saving Time". - -# From Arthur David Olson: -# US Daylight Saving Time ended on the last Sunday of *October* in 1974. -# See, for example, the front page of the Saturday, 1974-10-26 -# and Sunday, 1974-10-27 editions of the Washington Post. 
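The Rule tables below give transition days as ON fields such as "lastSun" and "Sun>=8" rather than fixed dates. As a sanity check on how such a field resolves to a calendar day, here is a short C sketch (the helpers are hypothetical, written for this illustration, and are not code from zic, which performs this resolution internally when compiling the tables). It confirms that "Oct lastSun" in 1974 is the Sunday 1974-10-27 cited just above, and that "Mar Sun>=8" in the Rule US table below yields 2007-03-11.

#include <stdio.h>

/* Day of week by Sakamoto's method; 0 = Sunday. */
static int dow(int y, int m, int d)
{
    static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};

    if (m < 3)
        y--;
    return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
}

/* Resolve "wd>=lo" (e.g. Sun>=8): first day on/after lo falling on wd. */
static int on_or_after(int y, int m, int wd, int lo)
{
    int d = lo;

    while (dow(y, m, d) != wd)
        d++;
    return d;
}

/* Resolve "lastWD" (e.g. lastSun) in a month of mdays days. */
static int last_weekday(int y, int m, int wd, int mdays)
{
    int d = mdays;

    while (dow(y, m, d) != wd)
        d--;
    return d;
}

int main(void)
{
    printf("1974 Oct lastSun -> %d\n", last_weekday(1974, 10, 0, 31)); /* 27 */
    printf("2007 Mar Sun>=8  -> %d\n", on_or_after(2007, 3, 0, 8));    /* 11 */
    return 0;
}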
- -# From Arthur David Olson: -# Before the Uniform Time Act of 1966 took effect in 1967, observance of -# Daylight Saving Time in the US was by local option, except during wartime. - -# From Arthur David Olson (2000-09-25): -# Last night I heard part of a rebroadcast of a 1945 Arch Oboler radio drama. -# In the introduction, Oboler spoke of "Eastern Peace Time." -# An AltaVista search turned up: -# https://web.archive.org/web/20000926032210/http://rowayton.org/rhs/hstaug45.html -# "When the time is announced over the radio now, it is 'Eastern Peace -# Time' instead of the old familiar 'Eastern War Time.' Peace is wonderful." -# (August 1945) by way of confirmation. -# -# From Paul Eggert (2017-09-23): -# This was the V-J Day issue of the Clamdigger, a Rowayton, CT newsletter. - -# From Joseph Gallant citing -# George H. Douglas, _The Early Days of Radio Broadcasting_ (1987): -# At 7 P.M. (Eastern War Time) [on 1945-08-14], the networks were set -# to switch to London for Attlee's address, but the American people -# never got to hear his speech live. According to one press account, -# CBS' Bob Trout was first to announce the word of Japan's surrender, -# but a few seconds later, NBC, ABC and Mutual also flashed the word -# of surrender, all of whom interrupting the bells of Big Ben in -# London which were to precede Mr. Attlee's speech. - -# From Paul Eggert (2003-02-09): It was Robert St John, not Bob Trout. From -# Myrna Oliver's obituary of St John on page B16 of today's Los Angeles Times: -# -# ... a war-weary U.S. clung to radios, awaiting word of Japan's surrender. -# Any announcement from Asia would reach St. John's New York newsroom on a -# wire service teletype machine, which had prescribed signals for major news. -# Associated Press, for example, would ring five bells before spewing out -# typed copy of an important story, and 10 bells for news "of transcendental -# importance." -# -# On Aug. 14, stalling while talking steadily into the NBC networks' open -# microphone, St. John heard five bells and waited only to hear a sixth bell, -# before announcing confidently: "Ladies and gentlemen, World War II is over. -# The Japanese have agreed to our surrender terms." -# -# He had scored a 20-second scoop on other broadcasters. - -# From Arthur David Olson (2005-08-22): -# Paul has been careful to use the "US" rules only in those locations -# that are part of the United States; this reflects the real scope of -# U.S. government action. So even though the "US" rules have changed -# in the latest release, other countries won't be affected. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule US 1918 1919 - Mar lastSun 2:00 1:00 D -Rule US 1918 1919 - Oct lastSun 2:00 0 S -Rule US 1942 only - Feb 9 2:00 1:00 W # War -Rule US 1945 only - Aug 14 23:00u 1:00 P # Peace -Rule US 1945 only - Sep lastSun 2:00 0 S -Rule US 1967 2006 - Oct lastSun 2:00 0 S -Rule US 1967 1973 - Apr lastSun 2:00 1:00 D -Rule US 1974 only - Jan 6 2:00 1:00 D -Rule US 1975 only - Feb 23 2:00 1:00 D -Rule US 1976 1986 - Apr lastSun 2:00 1:00 D -Rule US 1987 2006 - Apr Sun>=1 2:00 1:00 D -Rule US 2007 max - Mar Sun>=8 2:00 1:00 D -Rule US 2007 max - Nov Sun>=1 2:00 0 S - -# From Arthur David Olson, 2005-12-19 -# We generate the files specified below to guard against old files with -# obsolete information being left in the time zone binary directory. -# We limit the list to names that have appeared in previous versions of -# this time zone package. 
-# We do these as separate Zones rather than as Links to avoid problems if -# a particular place changes whether it observes DST. -# We put these specifications here in the northamerica file both to -# increase the chances that they'll actually get compiled and to -# avoid the need to duplicate the US rules in another file. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone EST -5:00 - EST -Zone MST -7:00 - MST -Zone HST -10:00 - HST -Zone EST5EDT -5:00 US E%sT -Zone CST6CDT -6:00 US C%sT -Zone MST7MDT -7:00 US M%sT -Zone PST8PDT -8:00 US P%sT - -# From U. S. Naval Observatory (1989-01-19): -# USA EASTERN 5 H BEHIND UTC NEW YORK, WASHINGTON -# USA EASTERN 4 H BEHIND UTC APR 3 - OCT 30 -# USA CENTRAL 6 H BEHIND UTC CHICAGO, HOUSTON -# USA CENTRAL 5 H BEHIND UTC APR 3 - OCT 30 -# USA MOUNTAIN 7 H BEHIND UTC DENVER -# USA MOUNTAIN 6 H BEHIND UTC APR 3 - OCT 30 -# USA PACIFIC 8 H BEHIND UTC L.A., SAN FRANCISCO -# USA PACIFIC 7 H BEHIND UTC APR 3 - OCT 30 -# USA ALASKA STD 9 H BEHIND UTC MOST OF ALASKA (AKST) -# USA ALASKA STD 8 H BEHIND UTC APR 3 - OCT 30 (AKDT) -# USA ALEUTIAN 10 H BEHIND UTC ISLANDS WEST OF 170W -# USA " 9 H BEHIND UTC APR 3 - OCT 30 -# USA HAWAII 10 H BEHIND UTC -# USA BERING 11 H BEHIND UTC SAMOA, MIDWAY - -# From Arthur David Olson (1989-01-21): -# The above dates are for 1988. -# Note the "AKST" and "AKDT" abbreviations, the claim that there's -# no DST in Samoa, and the claim that there is DST in Alaska and the -# Aleutians. - -# From Arthur David Olson (1988-02-13): -# Legal standard time zone names, from United States Code (1982 Edition and -# Supplement III), Title 15, Chapter 6, Section 260 and forward. First, names -# up to 1967-04-01 (when most provisions of the Uniform Time Act of 1966 -# took effect), as explained in sections 263 and 261: -# (none) -# United States standard eastern time -# United States standard mountain time -# United States standard central time -# United States standard Pacific time -# (none) -# United States standard Alaska time -# (none) -# Next, names from 1967-04-01 until 1983-11-30 (the date for -# public law 98-181): -# Atlantic standard time -# eastern standard time -# central standard time -# mountain standard time -# Pacific standard time -# Yukon standard time -# Alaska-Hawaii standard time -# Bering standard time -# And after 1983-11-30: -# Atlantic standard time -# eastern standard time -# central standard time -# mountain standard time -# Pacific standard time -# Alaska standard time -# Hawaii-Aleutian standard time -# Samoa standard time -# The law doesn't give abbreviations. -# -# From Paul Eggert (2016-12-19): -# Here are URLs for the 1918 and 1966 legislation: -# http://uscode.house.gov/statviewer.htm?volume=40&page=451 -# http://uscode.house.gov/statviewer.htm?volume=80&page=108 -# Although the 1918 names were officially "United States Standard -# Eastern Time" and similarly for "Central", "Mountain", "Pacific", -# and "Alaska", in practice "Standard" was placed just before "Time", -# as codified in 1966. In practice, Alaska time was abbreviated "AST" -# before 1968. Summarizing the 1967 name changes: -# 1918 names 1967 names -# -08 Standard Pacific Time (PST) Pacific standard time (PST) -# -09 (unofficial) Yukon (YST) Yukon standard time (YST) -# -10 Standard Alaska Time (AST) Alaska-Hawaii standard time (AHST) -# -11 (unofficial) Nome (NST) Bering standard time (BST) -# -# From Paul Eggert (2000-01-08), following a heads-up from Rives McDow: -# Public law 106-564 (2000-12-23) introduced ... 
"Chamorro Standard Time" -# for time in Guam and the Northern Marianas. See the file "australasia". -# -# From Paul Eggert (2015-04-17): -# HST and HDT are standardized abbreviations for Hawaii-Aleutian -# standard and daylight times. See section 9.47 (p 234) of the -# U.S. Government Printing Office Style Manual (2008) -# https://www.gpo.gov/fdsys/pkg/GPO-STYLEMANUAL-2008/pdf/GPO-STYLEMANUAL-2008.pdf - -# From Arthur David Olson, 2005-08-09 -# The following was signed into law on 2005-08-08. -# -# H.R. 6, Energy Policy Act of 2005, SEC. 110. DAYLIGHT SAVINGS. -# (a) Amendment.--Section 3(a) of the Uniform Time Act of 1966 (15 -# U.S.C. 260a(a)) is amended-- -# (1) by striking "first Sunday of April" and inserting "second -# Sunday of March"; and -# (2) by striking "last Sunday of October" and inserting "first -# Sunday of November'. -# (b) Effective Date.--Subsection (a) shall take effect 1 year after the -# date of enactment of this Act or March 1, 2007, whichever is later. -# (c) Report to Congress.--Not later than 9 months after the effective -# date stated in subsection (b), the Secretary shall report to Congress -# on the impact of this section on energy consumption in the United -# States. -# (d) Right to Revert.--Congress retains the right to revert the -# Daylight Saving Time back to the 2005 time schedules once the -# Department study is complete. - -# US eastern time, represented by New York - -# Connecticut, Delaware, District of Columbia, most of Florida, -# Georgia, southeast Indiana (Dearborn and Ohio counties), eastern Kentucky -# (except America/Kentucky/Louisville below), Maine, Maryland, Massachusetts, -# New Hampshire, New Jersey, New York, North Carolina, Ohio, -# Pennsylvania, Rhode Island, South Carolina, eastern Tennessee, -# Vermont, Virginia, West Virginia - -# From Dave Cantor (2004-11-02): -# Early this summer I had the occasion to visit the Mount Washington -# Observatory weather station atop (of course!) Mount Washington [, NH].... -# One of the staff members said that the station was on Eastern Standard Time -# and didn't change their clocks for Daylight Saving ... so that their -# reports will always have times which are 5 hours behind UTC. - -# From Paul Eggert (2005-08-26): -# According to today's Huntsville Times -# http://www.al.com/news/huntsvilletimes/index.ssf?/base/news/1125047783228320.xml&coll=1 -# a few towns on Alabama's "eastern border with Georgia, such as Phenix City -# in Russell County, Lanett in Chambers County and some towns in Lee County, -# set their watches and clocks on Eastern time." It quotes H.H. "Bubba" -# Roberts, city administrator in Phenix City. as saying "We are in the Central -# time zone, but we do go by the Eastern time zone because so many people work -# in Columbus." -# -# From Paul Eggert (2017-02-22): -# Four cities are involved. The two not mentioned above are Smiths Station -# and Valley. Barbara Brooks, Valley's assistant treasurer, heard it started -# because West Point Pepperell textile mills were in Alabama while the -# corporate office was in Georgia, and residents voted to keep Eastern -# time even after the mills closed. See: Kazek K. Did you know which -# Alabama towns are in a different time zone? al.com 2017-02-06. 
-# http://www.al.com/living/index.ssf/2017/02/do_you_know_which_alabama_town.html

-# From Paul Eggert (2014-09-06):
-# Monthly Notices of the Royal Astronomical Society 44, 4 (1884-02-08), 208
-# says that New York City Hall time was 3 minutes 58.4 seconds fast of
-# Eastern time (i.e., -4:56:01.6) just before the 1883 switch. Round to the
-# nearest second.

-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER
-Rule NYC 1920 only - Mar lastSun 2:00 1:00 D
-Rule NYC 1920 only - Oct lastSun 2:00 0 S
-Rule NYC 1921 1966 - Apr lastSun 2:00 1:00 D
-Rule NYC 1921 1954 - Sep lastSun 2:00 0 S
-Rule NYC 1955 1966 - Oct lastSun 2:00 0 S
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone America/New_York -4:56:02 - LMT 1883 Nov 18 12:03:58
- -5:00 US E%sT 1920
- -5:00 NYC E%sT 1942
- -5:00 US E%sT 1946
- -5:00 NYC E%sT 1967
- -5:00 US E%sT

-# US central time, represented by Chicago

-# Alabama, Arkansas, Florida panhandle (Bay, Calhoun, Escambia,
-# Gulf, Holmes, Jackson, Okaloosa, Santa Rosa, Walton, and
-# Washington counties), Illinois, western Indiana
-# (Gibson, Jasper, Lake, LaPorte, Newton, Porter, Posey, Spencer,
-# Vanderburgh, and Warrick counties), Iowa, most of Kansas, western
-# Kentucky, Louisiana, Minnesota, Mississippi, Missouri, eastern
-# Nebraska, eastern North Dakota, Oklahoma, eastern South Dakota,
-# western Tennessee, most of Texas, Wisconsin

-# From Larry M. Smith (2006-04-26) re Wisconsin:
-# https://docs.legis.wisconsin.gov/statutes/statutes/175.pdf
-# is currently enforced at the 01:00 time of change. Because the local
-# "bar time" in the state corresponds to 02:00, a number of citations
-# are issued for the "sale of class 'B' alcohol after prohibited
-# hours" within the deviated hour of this change every year....
-#
-# From Douglas R. Bomberg (2007-03-12):
-# Wisconsin has enacted (nearly eleventh-hour) legislation to get WI
-# Statu[t]e 175 closer in synch with the US Congress' intent....
-# https://docs.legis.wisconsin.gov/2007/related/acts/3

-# From an email administrator of the City of Fort Pierre, SD (2015-12-21):
-# Fort Pierre is technically located in the Mountain time zone as is
-# the rest of Stanley County. Most of Stanley County and Fort Pierre
-# uses the Central time zone due to doing most of their business in
-# Pierre so it simplifies schedules. I have lived in Stanley County
-# all my life and it has been that way since I can remember. (43 years!)
-#
-# From Paul Eggert (2015-12-25):
-# Assume this practice predates 1970, so Fort Pierre can use America/Chicago.

-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER
-Rule Chicago 1920 only - Jun 13 2:00 1:00 D
-Rule Chicago 1920 1921 - Oct lastSun 2:00 0 S
-Rule Chicago 1921 only - Mar lastSun 2:00 1:00 D
-Rule Chicago 1922 1966 - Apr lastSun 2:00 1:00 D
-Rule Chicago 1922 1954 - Sep lastSun 2:00 0 S
-Rule Chicago 1955 1966 - Oct lastSun 2:00 0 S
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone America/Chicago -5:50:36 - LMT 1883 Nov 18 12:09:24
- -6:00 US C%sT 1920
- -6:00 Chicago C%sT 1936 Mar 1 2:00
- -5:00 - EST 1936 Nov 15 2:00
- -6:00 Chicago C%sT 1942
- -6:00 US C%sT 1946
- -6:00 Chicago C%sT 1967
- -6:00 US C%sT
-# Oliver County, ND switched from mountain to central time on 1992-10-25.
-Zone America/North_Dakota/Center -6:45:12 - LMT 1883 Nov 18 12:14:48
- -7:00 US M%sT 1992 Oct 25 2:00
- -6:00 US C%sT
-# Morton County, ND, switched from mountain to central time on
-# 2003-10-26, except for the area around Mandan which was already central time.
-# See .
-# Officially this switch also included part of Sioux County, and -# Jones, Mellette, and Todd Counties in South Dakota; -# but in practice these other counties were already observing central time. -# See . -Zone America/North_Dakota/New_Salem -6:45:39 - LMT 1883 Nov 18 12:14:21 - -7:00 US M%sT 2003 Oct 26 2:00 - -6:00 US C%sT - -# From Josh Findley (2011-01-21): -# ...it appears that Mercer County, North Dakota, changed from the -# mountain time zone to the central time zone at the last transition from -# daylight-saving to standard time (on Nov. 7, 2010): -# https://www.gpo.gov/fdsys/pkg/FR-2010-09-29/html/2010-24376.htm -# http://www.bismarcktribune.com/news/local/article_1eb1b588-c758-11df-b472-001cc4c03286.html - -# From Andy Lipscomb (2011-01-24): -# ...according to the Census Bureau, the largest city is Beulah (although -# it's commonly referred to as Beulah-Hazen, with Hazen being the next -# largest city in Mercer County). Google Maps places Beulah's city hall -# at 47 degrees 15' 51" N, 101 degrees 46' 40" W, which yields an offset -# of 6h47'07". - -Zone America/North_Dakota/Beulah -6:47:07 - LMT 1883 Nov 18 12:12:53 - -7:00 US M%sT 2010 Nov 7 2:00 - -6:00 US C%sT - -# US mountain time, represented by Denver -# -# Colorado, far western Kansas, Montana, western -# Nebraska, Nevada border (Jackpot, Owyhee, and Mountain City), -# New Mexico, southwestern North Dakota, -# western South Dakota, far western Texas (El Paso County, Hudspeth County, -# and Pine Springs and Nickel Creek in Culberson County), Utah, Wyoming -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Denver 1920 1921 - Mar lastSun 2:00 1:00 D -Rule Denver 1920 only - Oct lastSun 2:00 0 S -Rule Denver 1921 only - May 22 2:00 0 S -Rule Denver 1965 1966 - Apr lastSun 2:00 1:00 D -Rule Denver 1965 1966 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Denver -6:59:56 - LMT 1883 Nov 18 12:00:04 - -7:00 US M%sT 1920 - -7:00 Denver M%sT 1942 - -7:00 US M%sT 1946 - -7:00 Denver M%sT 1967 - -7:00 US M%sT - -# US Pacific time, represented by Los Angeles -# -# California, northern Idaho (Benewah, Bonner, Boundary, Clearwater, -# Kootenai, Latah, Lewis, Nez Perce, and Shoshone counties, Idaho county -# north of the Salmon River, and the towns of Burgdorf and Warren), -# Nevada (except West Wendover), Oregon (except the northern 3/4 of -# Malheur county), and Washington - -# From Paul Eggert (2016-08-20): -# In early February 1948, in response to California's electricity shortage, -# PG&E changed power frequency from 60 to 59.5 Hz during daylight hours, -# causing electric clocks to lose six minutes per day. (This did not change -# legal time, and is not part of the data here.) See: -# Ross SA. An energy crisis from the past: Northern California in 1948. -# Working Paper No. 8, Institute of Governmental Studies, UC Berkeley, -# 1973-11. https://escholarship.org/uc/item/8x22k30c -# -# In another measure to save electricity, DST was instituted from 1948-03-14 -# at 02:01 to 1949-01-16 at 02:00, with the governor having the option to move -# the fallback transition earlier. See pages 3-4 of: -# http://clerk.assembly.ca.gov/sites/clerk.assembly.ca.gov/files/archive/Statutes/1948/48Vol1_Chapters.pdf -# -# In response: -# -# Governor Warren received a torrent of objecting mail, and it is not too much -# to speculate that the objections to Daylight Saving Time were one important -# factor in the defeat of the Dewey-Warren Presidential ticket in California. 
-# -- Ross, p 25 -# -# On December 8 the governor exercised the option, setting the date to January 1 -# (LA Times 1948-12-09). The transition time was 02:00 (LA Times 1949-01-01). -# -# Despite the controversy, in 1949 California voters approved Proposition 12, -# which established DST from April's last Sunday at 01:00 until September's -# last Sunday at 02:00. This was amended by 1962's Proposition 6, which changed -# the fall-back date to October's last Sunday. See: -# https://repository.uchastings.edu/cgi/viewcontent.cgi?article=1501&context=ca_ballot_props -# https://repository.uchastings.edu/cgi/viewcontent.cgi?article=1636&context=ca_ballot_props -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule CA 1948 only - Mar 14 2:01 1:00 D -Rule CA 1949 only - Jan 1 2:00 0 S -Rule CA 1950 1966 - Apr lastSun 1:00 1:00 D -Rule CA 1950 1961 - Sep lastSun 2:00 0 S -Rule CA 1962 1966 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Los_Angeles -7:52:58 - LMT 1883 Nov 18 12:07:02 - -8:00 US P%sT 1946 - -8:00 CA P%sT 1967 - -8:00 US P%sT - -# Alaska -# AK%sT is the modern abbreviation for -09 per USNO. -# -# From Paul Eggert (2017-06-15): -# Howse writes that Alaska switched from the Julian to the Gregorian calendar, -# and from east-of-GMT to west-of-GMT days, when the US bought it from Russia. -# On Friday, 1867-10-18 (Gregorian), at precisely 15:30 local time, the -# Russian forts and fleet at Sitka fired salutes to mark the ceremony of -# formal transfer. See the Sacramento Daily Union (1867-11-14), p 3, col 2. -# https://cdnc.ucr.edu/cgi-bin/cdnc?a=d&d=SDU18671114.2.12.1 -# Sitka workers did not change their calendars until Sunday, 1867-10-20, -# and so celebrated two Sundays that week. See: Ahllund T (tr Hallamaa P). -# From the memoirs of a Finnish workman. Alaska History. 2006 Fall;21(2):1-25. -# http://alaskahistoricalsociety.org/wp-content/uploads/2016/12/Ahllund-2006-Memoirs-of-a-Finnish-Workman.pdf -# Include only the time zone part of this transition, ignoring the switch -# from Julian to Gregorian, since we can't represent the Julian calendar. -# -# As far as we know, of the locations mentioned below only Sitka was -# permanently inhabited in 1867 by anyone using either calendar. -# (Yakutat was colonized by the Russians in 1799, but the settlement was -# destroyed in 1805 by a Yakutat-kon war party.) Many of Alaska's inhabitants -# were unaware of the US acquisition of Alaska, much less of any calendar or -# time change. However, the Russian-influenced part of Alaska did observe -# Russian time, and it is more accurate to model this than to ignore it. -# The database format requires an exact transition time; use the Russian -# salute as a somewhat-arbitrary time for the formal transfer of control for -# all of Alaska. Sitka's UTC offset is -9:01:13; adjust its 15:30 to the -# local times of other Alaskan locations so that they change simultaneously. - -# From Paul Eggert (2014-07-18): -# One opinion of the early-1980s turmoil in Alaska over time zones and -# daylight saving time appeared as graffiti on a Juneau airport wall: -# "Welcome to Juneau. Please turn your watch back to the 19th century." -# See: Turner W. Alaska's four time zones now two. NY Times 1983-11-01. -# http://www.nytimes.com/1983/11/01/us/alaska-s-four-time-zones-now-two.html -# -# Steve Ferguson (2011-01-31) referred to the following source: -# Norris F. Keeping time in Alaska: national directives, local response. -# Alaska History 2001;16(1-2). 
-# http://alaskahistoricalsociety.org/discover-alaska/glimpses-of-the-past/keeping-time-in-alaska/ - -# From Arthur David Olson (2011-02-01): -# Here's database-relevant material from the 2001 "Alaska History" article: -# -# On September 20 [1979]...DOT...officials decreed that on April 27, -# 1980, Juneau and other nearby communities would move to Yukon Time. -# Sitka, Petersburg, Wrangell, and Ketchikan, however, would remain on -# Pacific Time. -# -# ...on September 22, 1980, DOT Secretary Neil E. Goldschmidt rescinded the -# Department's September 1979 decision. Juneau and other communities in -# northern Southeast reverted to Pacific Time on October 26. -# -# On October 28 [1983]...the Metlakatla Indian Community Council voted -# unanimously to keep the reservation on Pacific Time. -# -# According to DOT official Joanne Petrie, Indian reservations are not -# bound to follow time zones imposed by neighboring jurisdictions. -# -# (The last is consistent with how the database now handles the Navajo -# Nation.) - -# From Arthur David Olson (2011-02-09): -# I just spoke by phone with a staff member at the Metlakatla Indian -# Community office (using contact information available at -# http://www.commerce.state.ak.us/dca/commdb/CIS.cfm?Comm_Boro_name=Metlakatla -# It's shortly after 1:00 here on the east coast of the United States; -# the staffer said it was shortly after 10:00 there. When I asked whether -# that meant they were on Pacific time, they said no - they were on their -# own time. I asked about daylight saving; they said it wasn't used. I -# did not inquire about practices in the past. - -# From Arthur David Olson (2011-08-17): -# For lack of better information, assume that Metlakatla's -# abandonment of use of daylight saving resulted from the 1983 vote. - -# From Steffen Thorsen (2015-11-09): -# It seems Metlakatla did go off PST on Sunday, November 1, changing -# their time to AKST and are going to follow Alaska's DST, switching -# between AKST and AKDT from now on.... 
-# https://www.krbd.org/2015/10/30/annette-island-times-they-are-a-changing/ - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Juneau 15:02:19 - LMT 1867 Oct 19 15:33:32 - -8:57:41 - LMT 1900 Aug 20 12:00 - -8:00 - PST 1942 - -8:00 US P%sT 1946 - -8:00 - PST 1969 - -8:00 US P%sT 1980 Apr 27 2:00 - -9:00 US Y%sT 1980 Oct 26 2:00 - -8:00 US P%sT 1983 Oct 30 2:00 - -9:00 US Y%sT 1983 Nov 30 - -9:00 US AK%sT -Zone America/Sitka 14:58:47 - LMT 1867 Oct 19 15:30 - -9:01:13 - LMT 1900 Aug 20 12:00 - -8:00 - PST 1942 - -8:00 US P%sT 1946 - -8:00 - PST 1969 - -8:00 US P%sT 1983 Oct 30 2:00 - -9:00 US Y%sT 1983 Nov 30 - -9:00 US AK%sT -Zone America/Metlakatla 15:13:42 - LMT 1867 Oct 19 15:44:55 - -8:46:18 - LMT 1900 Aug 20 12:00 - -8:00 - PST 1942 - -8:00 US P%sT 1946 - -8:00 - PST 1969 - -8:00 US P%sT 1983 Oct 30 2:00 - -8:00 - PST 2015 Nov 1 2:00 - -9:00 US AK%sT -Zone America/Yakutat 14:41:05 - LMT 1867 Oct 19 15:12:18 - -9:18:55 - LMT 1900 Aug 20 12:00 - -9:00 - YST 1942 - -9:00 US Y%sT 1946 - -9:00 - YST 1969 - -9:00 US Y%sT 1983 Nov 30 - -9:00 US AK%sT -Zone America/Anchorage 14:00:24 - LMT 1867 Oct 19 14:31:37 - -9:59:36 - LMT 1900 Aug 20 12:00 - -10:00 - AST 1942 - -10:00 US A%sT 1967 Apr - -10:00 - AHST 1969 - -10:00 US AH%sT 1983 Oct 30 2:00 - -9:00 US Y%sT 1983 Nov 30 - -9:00 US AK%sT -Zone America/Nome 12:58:22 - LMT 1867 Oct 19 13:29:35 - -11:01:38 - LMT 1900 Aug 20 12:00 - -11:00 - NST 1942 - -11:00 US N%sT 1946 - -11:00 - NST 1967 Apr - -11:00 - BST 1969 - -11:00 US B%sT 1983 Oct 30 2:00 - -9:00 US Y%sT 1983 Nov 30 - -9:00 US AK%sT -Zone America/Adak 12:13:22 - LMT 1867 Oct 19 12:44:35 - -11:46:38 - LMT 1900 Aug 20 12:00 - -11:00 - NST 1942 - -11:00 US N%sT 1946 - -11:00 - NST 1967 Apr - -11:00 - BST 1969 - -11:00 US B%sT 1983 Oct 30 2:00 - -10:00 US AH%sT 1983 Nov 30 - -10:00 US H%sT -# The following switches don't quite make our 1970 cutoff. -# -# Shanks writes that part of southwest Alaska (e.g. Aniak) -# switched from -11:00 to -10:00 on 1968-09-22 at 02:00, -# and another part (e.g. Akiak) made the same switch five weeks later. -# -# From David Flater (2004-11-09): -# In e-mail, 2004-11-02, Ray Hudson, historian/liaison to the Unalaska -# Historic Preservation Commission, provided this information, which -# suggests that Unalaska deviated from statutory time from early 1967 -# possibly until 1983: -# -# Minutes of the Unalaska City Council Meeting, January 10, 1967: -# "Except for St. Paul and Akutan, Unalaska is the only important -# location not on Alaska Standard Time. The following resolution was -# made by William Robinson and seconded by Henry Swanson: Be it -# resolved that the City of Unalaska hereby goes to Alaska Standard -# Time as of midnight Friday, January 13, 1967 (1 A.M. Saturday, -# January 14, Alaska Standard Time.) This resolution was passed with -# three votes for and one against." - -# Hawaii - -# From Arthur David Olson (2010-12-09): -# "Hawaiian Time" by Robert C. Schmitt and Doak C. Cox appears on pages 207-225 -# of volume 26 of The Hawaiian Journal of History (1992). 
As of 2010-12-09, -# the article is available at -# https://evols.library.manoa.hawaii.edu/bitstream/10524/239/2/JL26215.pdf -# and indicates that standard time was adopted effective noon, January -# 13, 1896 (page 218), that in "1933, the Legislature decreed daylight -# saving for the period between the last Sunday of each April and the -# last Sunday of each September, but less than a month later repealed the -# act," (page 220), that year-round daylight saving time was in effect -# from 1942-02-09 to 1945-09-30 (page 221, with no time of day given for -# when clocks changed) and that clocks were changed by 30 minutes -# effective the second Sunday of June, 1947 (page 219, with no time of -# day given for when clocks changed). A footnote for the 1933 changes -# cites Session Laws of Hawaii 1933, "Act. 90 (approved 26 Apr. 1933) -# and Act 163 (approved 21 May 1933)." - -# From Arthur David Olson (2011-01-19): -# The following is from "Laws of the Territory of Hawaii Passed by the -# Seventeenth Legislature: Regular Session 1933," available (as of -# 2011-01-19) at American University's Pence Law Library. Page 85: "Act -# 90...At 2 o'clock ante meridian of the last Sunday in April of each -# year, the standard time of this Territory shall be advanced one -# hour...This Act shall take effect upon its approval. Approved this 26th -# day of April, A. D. 1933. LAWRENCE M JUDD, Governor of the Territory of -# Hawaii." Page 172: "Act 163...Act 90 of the Session Laws of 1933 is -# hereby repealed...This Act shall take effect upon its approval, upon -# which date the standard time of this Territory shall be restored to -# that existing immediately prior to the taking effect of said Act 90. -# Approved this 21st day of May, A. D. 1933. LAWRENCE M. JUDD, Governor -# of the Territory of Hawaii." -# -# Note that 1933-05-21 was a Sunday. -# We're left to guess the time of day when Act 163 was approved; guess noon. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Pacific/Honolulu -10:31:26 - LMT 1896 Jan 13 12:00 - -10:30 - HST 1933 Apr 30 2:00 - -10:30 1:00 HDT 1933 May 21 12:00 - -10:30 - HST 1942 Feb 9 2:00 - -10:30 1:00 HDT 1945 Sep 30 2:00 - -10:30 - HST 1947 Jun 8 2:00 - -10:00 - HST - -# Now we turn to US areas that have diverged from the consensus since 1970. - -# Arizona mostly uses MST. - -# From Paul Eggert (2002-10-20): -# -# The information in the rest of this paragraph is derived from the -# Daylight Saving Time web page -# (2002-01-23) -# maintained by the Arizona State Library, Archives and Public Records. -# Between 1944-01-01 and 1944-04-01 the State of Arizona used standard -# time, but by federal law railroads, airlines, bus lines, military -# personnel, and some engaged in interstate commerce continued to -# observe war (i.e., daylight saving) time. The 1944-03-17 Phoenix -# Gazette says that was the date the law changed, and that 04-01 was -# the date the state's clocks would change. In 1945 the State of -# Arizona used standard time all year, again with exceptions only as -# mandated by federal law. Arizona observed DST in 1967, but Arizona -# Laws 1968, ch. 183 (effective 1968-03-21) repealed DST. -# -# Shanks says the 1944 experiment came to an end on 1944-03-17. -# Go with the Arizona State Library instead. 
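A note on the odd-looking times of day in the 1883 Nov 18 UNTIL columns of this file (12:03:58 for New_York above, 11:31:42 for Phoenix just below): the cutover to railroad standard time happened at 12:00 of the *new* standard time, and the UNTIL column is expressed in the clock time then in effect, i.e. the old local mean time. A small C sketch of that arithmetic, an editorial illustration assuming only the offsets printed in the Zone lines themselves:

#include <stdio.h>

/* The instant "12:00 new standard time", re-expressed in the old LMT:
 * local = 12:00 + (LMT offset - standard offset). */
static void cutover(const char *name, int lmt, int std)
{
    int local = 12 * 3600 + (lmt - std);

    printf("%-8s %d:%02d:%02d LMT\n", name,
           local / 3600, local / 60 % 60, local % 60);
}

int main(void)
{
    cutover("New_York", -(4 * 3600 + 56 * 60 + 2), -5 * 3600);  /* 12:03:58 */
    cutover("Chicago", -(5 * 3600 + 50 * 60 + 36), -6 * 3600);  /* 12:09:24 */
    cutover("Phoenix", -(7 * 3600 + 28 * 60 + 18), -7 * 3600);  /* 11:31:42 */
    return 0;
}

Locations east of their new zone meridian (New York, Chicago) thus saw noon twice that Sunday, the "day of two noons" described earlier, while locations west of it (Phoenix) cut over shortly before their first local noon.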
- -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Phoenix -7:28:18 - LMT 1883 Nov 18 11:31:42 - -7:00 US M%sT 1944 Jan 1 0:01 - -7:00 - MST 1944 Apr 1 0:01 - -7:00 US M%sT 1944 Oct 1 0:01 - -7:00 - MST 1967 - -7:00 US M%sT 1968 Mar 21 - -7:00 - MST -# From Arthur David Olson (1988-02-13): -# A writer from the Inter Tribal Council of Arizona, Inc., -# notes in private correspondence dated 1987-12-28 that "Presently, only the -# Navajo Nation participates in the Daylight Saving Time policy, due to its -# large size and location in three states." (The "only" means that other -# tribal nations don't use DST.) -# -# From Paul Eggert (2013-08-26): -# See America/Denver for a zone appropriate for the Navajo Nation. - -# Southern Idaho (Ada, Adams, Bannock, Bear Lake, Bingham, Blaine, -# Boise, Bonneville, Butte, Camas, Canyon, Caribou, Cassia, Clark, -# Custer, Elmore, Franklin, Fremont, Gem, Gooding, Jefferson, Jerome, -# Lemhi, Lincoln, Madison, Minidoka, Oneida, Owyhee, Payette, Power, -# Teton, Twin Falls, Valley, Washington counties, and the southern -# quarter of Idaho county) and eastern Oregon (most of Malheur County) -# switched four weeks late in 1974. -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Boise -7:44:49 - LMT 1883 Nov 18 12:15:11 - -8:00 US P%sT 1923 May 13 2:00 - -7:00 US M%sT 1974 - -7:00 - MST 1974 Feb 3 2:00 - -7:00 US M%sT - -# Indiana -# -# For a map of Indiana's time zone regions, see: -# https://en.wikipedia.org/wiki/Time_in_Indiana -# -# From Paul Eggert (2007-08-17): -# Since 1970, most of Indiana has been like America/Indiana/Indianapolis, -# with the following exceptions: -# -# - Gibson, Jasper, Lake, LaPorte, Newton, Porter, Posey, Spencer, -# Vanderburgh, and Warrick counties have been like America/Chicago. -# -# - Dearborn and Ohio counties have been like America/New_York. -# -# - Clark, Floyd, and Harrison counties have been like -# America/Kentucky/Louisville. -# -# - Crawford, Daviess, Dubois, Knox, Martin, Perry, Pike, Pulaski, Starke, -# and Switzerland counties have their own time zone histories as noted below. -# -# Shanks partitioned Indiana into 345 regions, each with its own time history, -# and wrote "Even newspaper reports present contradictory information." -# Those Hoosiers! Such a flighty and changeable people! -# Fortunately, most of the complexity occurred before our cutoff date of 1970. -# -# Other than Indianapolis, the Indiana place names are so nondescript -# that they would be ambiguous if we left them at the 'America' level. -# So we reluctantly put them all in a subdirectory 'America/Indiana'. - -# From Paul Eggert (2014-06-26): -# https://www.federalregister.gov/articles/2006/01/20/06-563/standard-time-zone-boundary-in-the-state-of-indiana -# says "DOT is relocating the time zone boundary in Indiana to move Starke, -# Pulaski, Knox, Daviess, Martin, Pike, Dubois, and Perry Counties from the -# Eastern Time Zone to the Central Time Zone.... The effective date of -# this rule is 2 a.m. EST Sunday, April 2, 2006, which is the -# changeover date from standard time to Daylight Saving Time." -# Strictly speaking, this meant the affected counties changed their -# clocks twice that night, but this obviously was in error. The intent -# was that 01:59:59 EST be followed by 02:00:00 CDT. - -# From Gwillim Law (2007-02-10): -# The Associated Press has been reporting that Pulaski County, Indiana is -# going to switch from Central to Eastern Time on March 11, 2007.... 
-# http://www.indystar.com/apps/pbcs.dll/article?AID=/20070207/LOCAL190108/702070524/0/LOCAL - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Indianapolis 1941 only - Jun 22 2:00 1:00 D -Rule Indianapolis 1941 1954 - Sep lastSun 2:00 0 S -Rule Indianapolis 1946 1954 - Apr lastSun 2:00 1:00 D -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Indiana/Indianapolis -5:44:38 - LMT 1883 Nov 18 12:15:22 - -6:00 US C%sT 1920 - -6:00 Indianapolis C%sT 1942 - -6:00 US C%sT 1946 - -6:00 Indianapolis C%sT 1955 Apr 24 2:00 - -5:00 - EST 1957 Sep 29 2:00 - -6:00 - CST 1958 Apr 27 2:00 - -5:00 - EST 1969 - -5:00 US E%sT 1971 - -5:00 - EST 2006 - -5:00 US E%sT -# -# Eastern Crawford County, Indiana, left its clocks alone in 1974, -# as well as from 1976 through 2005. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Marengo 1951 only - Apr lastSun 2:00 1:00 D -Rule Marengo 1951 only - Sep lastSun 2:00 0 S -Rule Marengo 1954 1960 - Apr lastSun 2:00 1:00 D -Rule Marengo 1954 1960 - Sep lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Indiana/Marengo -5:45:23 - LMT 1883 Nov 18 12:14:37 - -6:00 US C%sT 1951 - -6:00 Marengo C%sT 1961 Apr 30 2:00 - -5:00 - EST 1969 - -5:00 US E%sT 1974 Jan 6 2:00 - -6:00 1:00 CDT 1974 Oct 27 2:00 - -5:00 US E%sT 1976 - -5:00 - EST 2006 - -5:00 US E%sT -# -# Daviess, Dubois, Knox, and Martin Counties, Indiana, -# switched from eastern to central time in April 2006, then switched back -# in November 2007. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Vincennes 1946 only - Apr lastSun 2:00 1:00 D -Rule Vincennes 1946 only - Sep lastSun 2:00 0 S -Rule Vincennes 1953 1954 - Apr lastSun 2:00 1:00 D -Rule Vincennes 1953 1959 - Sep lastSun 2:00 0 S -Rule Vincennes 1955 only - May 1 0:00 1:00 D -Rule Vincennes 1956 1963 - Apr lastSun 2:00 1:00 D -Rule Vincennes 1960 only - Oct lastSun 2:00 0 S -Rule Vincennes 1961 only - Sep lastSun 2:00 0 S -Rule Vincennes 1962 1963 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Indiana/Vincennes -5:50:07 - LMT 1883 Nov 18 12:09:53 - -6:00 US C%sT 1946 - -6:00 Vincennes C%sT 1964 Apr 26 2:00 - -5:00 - EST 1969 - -5:00 US E%sT 1971 - -5:00 - EST 2006 Apr 2 2:00 - -6:00 US C%sT 2007 Nov 4 2:00 - -5:00 US E%sT -# -# Perry County, Indiana, switched from eastern to central time in April 2006. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Perry 1946 only - Apr lastSun 2:00 1:00 D -Rule Perry 1946 only - Sep lastSun 2:00 0 S -Rule Perry 1953 1954 - Apr lastSun 2:00 1:00 D -Rule Perry 1953 1959 - Sep lastSun 2:00 0 S -Rule Perry 1955 only - May 1 0:00 1:00 D -Rule Perry 1956 1963 - Apr lastSun 2:00 1:00 D -Rule Perry 1960 only - Oct lastSun 2:00 0 S -Rule Perry 1961 only - Sep lastSun 2:00 0 S -Rule Perry 1962 1963 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Indiana/Tell_City -5:47:03 - LMT 1883 Nov 18 12:12:57 - -6:00 US C%sT 1946 - -6:00 Perry C%sT 1964 Apr 26 2:00 - -5:00 - EST 1969 - -5:00 US E%sT 1971 - -5:00 - EST 2006 Apr 2 2:00 - -6:00 US C%sT -# -# Pike County, Indiana moved from central to eastern time in 1977, -# then switched back in 2006, then switched back again in 2007. 
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Pike 1955 only - May 1 0:00 1:00 D -Rule Pike 1955 1960 - Sep lastSun 2:00 0 S -Rule Pike 1956 1964 - Apr lastSun 2:00 1:00 D -Rule Pike 1961 1964 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Indiana/Petersburg -5:49:07 - LMT 1883 Nov 18 12:10:53 - -6:00 US C%sT 1955 - -6:00 Pike C%sT 1965 Apr 25 2:00 - -5:00 - EST 1966 Oct 30 2:00 - -6:00 US C%sT 1977 Oct 30 2:00 - -5:00 - EST 2006 Apr 2 2:00 - -6:00 US C%sT 2007 Nov 4 2:00 - -5:00 US E%sT -# -# Starke County, Indiana moved from central to eastern time in 1991, -# then switched back in 2006. -# From Arthur David Olson (1991-10-28): -# An article on page A3 of the Sunday, 1991-10-27 Washington Post -# notes that Starke County switched from Central time to Eastern time as of -# 1991-10-27. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Starke 1947 1961 - Apr lastSun 2:00 1:00 D -Rule Starke 1947 1954 - Sep lastSun 2:00 0 S -Rule Starke 1955 1956 - Oct lastSun 2:00 0 S -Rule Starke 1957 1958 - Sep lastSun 2:00 0 S -Rule Starke 1959 1961 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Indiana/Knox -5:46:30 - LMT 1883 Nov 18 12:13:30 - -6:00 US C%sT 1947 - -6:00 Starke C%sT 1962 Apr 29 2:00 - -5:00 - EST 1963 Oct 27 2:00 - -6:00 US C%sT 1991 Oct 27 2:00 - -5:00 - EST 2006 Apr 2 2:00 - -6:00 US C%sT -# -# Pulaski County, Indiana, switched from eastern to central time in -# April 2006 and then switched back in March 2007. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Pulaski 1946 1960 - Apr lastSun 2:00 1:00 D -Rule Pulaski 1946 1954 - Sep lastSun 2:00 0 S -Rule Pulaski 1955 1956 - Oct lastSun 2:00 0 S -Rule Pulaski 1957 1960 - Sep lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Indiana/Winamac -5:46:25 - LMT 1883 Nov 18 12:13:35 - -6:00 US C%sT 1946 - -6:00 Pulaski C%sT 1961 Apr 30 2:00 - -5:00 - EST 1969 - -5:00 US E%sT 1971 - -5:00 - EST 2006 Apr 2 2:00 - -6:00 US C%sT 2007 Mar 11 2:00 - -5:00 US E%sT -# -# Switzerland County, Indiana, did not observe DST from 1973 through 2005. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Indiana/Vevay -5:40:16 - LMT 1883 Nov 18 12:19:44 - -6:00 US C%sT 1954 Apr 25 2:00 - -5:00 - EST 1969 - -5:00 US E%sT 1973 - -5:00 - EST 2006 - -5:00 US E%sT - -# Part of Kentucky left its clocks alone in 1974. -# This also includes Clark, Floyd, and Harrison counties in Indiana. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Louisville 1921 only - May 1 2:00 1:00 D -Rule Louisville 1921 only - Sep 1 2:00 0 S -Rule Louisville 1941 1961 - Apr lastSun 2:00 1:00 D -Rule Louisville 1941 only - Sep lastSun 2:00 0 S -Rule Louisville 1946 only - Jun 2 2:00 0 S -Rule Louisville 1950 1955 - Sep lastSun 2:00 0 S -Rule Louisville 1956 1960 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Kentucky/Louisville -5:43:02 - LMT 1883 Nov 18 12:16:58 - -6:00 US C%sT 1921 - -6:00 Louisville C%sT 1942 - -6:00 US C%sT 1946 - -6:00 Louisville C%sT 1961 Jul 23 2:00 - -5:00 - EST 1968 - -5:00 US E%sT 1974 Jan 6 2:00 - -6:00 1:00 CDT 1974 Oct 27 2:00 - -5:00 US E%sT -# -# Wayne County, Kentucky -# -# From Lake Cumberland LIFE -# http://www.lake-cumberland.com/life/archive/news990129time.shtml -# (1999-01-29) via WKYM-101.7: -# Clinton County has joined Wayne County in asking the DoT to change from -# the Central to the Eastern time zone.... The Wayne County government made -# the same request in December. 
And while Russell County officials have not -# taken action, the majority of respondents to a poll conducted there in -# August indicated they would like to change to "fast time" also. -# The three Lake Cumberland counties are the farthest east of any U.S. -# location in the Central time zone. -# -# From Rich Wales (2000-08-29): -# After prolonged debate, and despite continuing deep differences of opinion, -# Wayne County (central Kentucky) is switching from Central (-0600) to Eastern -# (-0500) time. They won't "fall back" this year. See Sara Shipley, -# The difference an hour makes, Nando Times (2000-08-29 15:33 -0400). -# -# From Paul Eggert (2001-07-16): -# The final rule was published in the -# Federal Register 65, 160 (2000-08-17), pp 50154-50158. -# https://www.gpo.gov/fdsys/pkg/FR-2000-08-17/html/00-20854.htm -# -Zone America/Kentucky/Monticello -5:39:24 - LMT 1883 Nov 18 12:20:36 - -6:00 US C%sT 1946 - -6:00 - CST 1968 - -6:00 US C%sT 2000 Oct 29 2:00 - -5:00 US E%sT - - -# From Rives McDow (2000-08-30): -# Here ... are all the changes in the US since 1985. -# Kearny County, KS (put all of county on central; -# previously split between MST and CST) ... 1990-10 -# Starke County, IN (from CST to EST) ... 1991-10 -# Oliver County, ND (from MST to CST) ... 1992-10 -# West Wendover, NV (from PST TO MST) ... 1999-10 -# Wayne County, KY (from CST to EST) ... 2000-10 -# -# From Paul Eggert (2001-07-17): -# We don't know where the line used to be within Kearny County, KS, -# so omit that change for now. -# See America/Indiana/Knox for the Starke County, IN change. -# See America/North_Dakota/Center for the Oliver County, ND change. -# West Wendover, NV officially switched from Pacific to mountain time on -# 1999-10-31. See the -# Federal Register 64, 203 (1999-10-21), pp 56705-56707. -# https://www.gpo.gov/fdsys/pkg/FR-1999-10-21/html/99-27240.htm -# However, the Federal Register says that West Wendover already operated -# on mountain time, and the rule merely made this official; -# hence a separate tz entry is not needed. - -# Michigan -# -# From Bob Devine (1988-01-28): -# Michigan didn't observe DST from 1968 to 1973. -# -# From Paul Eggert (1999-03-31): -# Shanks writes that Michigan started using standard time on 1885-09-18, -# but Howse writes (pp 124-125, referring to Popular Astronomy, 1901-01) -# that Detroit kept -# -# local time until 1900 when the City Council decreed that clocks should -# be put back twenty-eight minutes to Central Standard Time. Half the -# city obeyed, half refused. After considerable debate, the decision -# was rescinded and the city reverted to Sun time. A derisive offer to -# erect a sundial in front of the city hall was referred to the -# Committee on Sewers. Then, in 1905, Central time was adopted -# by city vote. -# -# This story is too entertaining to be false, so go with Howse over Shanks. -# -# From Paul Eggert (2001-03-06): -# Garland (1927) writes "Cleveland and Detroit advanced their clocks -# one hour in 1914." This change is not in Shanks. We have no more -# info, so omit this for now. -# -# From Paul Eggert (2017-07-26): -# Although Shanks says Detroit observed DST in 1967 from 06-14 00:01 -# until 10-29 00:01, I now see multiple reports that this is incorrect. -# For example, according to a 50-year anniversary report about the 1967 -# Detroit riots and a major-league doubleheader on 1967-07-23, "By the time -# the last fly ball of the doubleheader settled into the glove of leftfielder -# Lenny Green, it was after 7 p.m. 
Detroit did not observe daylight saving -# time, so light was already starting to fail. Twilight was made even deeper -# by billowing columns of smoke that ascended in an unbroken wall north of the -# ballpark." See: Dow B. Detroit '67: As violence unfolded, Tigers played two -# at home vs. Yankees. Detroit Free Press 2017-07-23. -# https://www.freep.com/story/sports/mlb/tigers/2017/07/23/detroit-tigers-1967-riot-new-york-yankees/499951001/ -# -# Most of Michigan observed DST from 1973 on, but was a bit late in 1975. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Detroit 1948 only - Apr lastSun 2:00 1:00 D -Rule Detroit 1948 only - Sep lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Detroit -5:32:11 - LMT 1905 - -6:00 - CST 1915 May 15 2:00 - -5:00 - EST 1942 - -5:00 US E%sT 1946 - -5:00 Detroit E%sT 1973 - -5:00 US E%sT 1975 - -5:00 - EST 1975 Apr 27 2:00 - -5:00 US E%sT -# -# Dickinson, Gogebic, Iron, and Menominee Counties, Michigan, -# switched from EST to CST/CDT in 1973. -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER -Rule Menominee 1946 only - Apr lastSun 2:00 1:00 D -Rule Menominee 1946 only - Sep lastSun 2:00 0 S -Rule Menominee 1966 only - Apr lastSun 2:00 1:00 D -Rule Menominee 1966 only - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Menominee -5:50:27 - LMT 1885 Sep 18 12:00 - -6:00 US C%sT 1946 - -6:00 Menominee C%sT 1969 Apr 27 2:00 - -5:00 - EST 1973 Apr 29 2:00 - -6:00 US C%sT - -# Navassa -# administered by the US Fish and Wildlife Service -# claimed by US under the provisions of the 1856 Guano Islands Act -# also claimed by Haiti -# occupied 1857/1900 by the Navassa Phosphate Co -# US lighthouse 1917/1996-09 -# currently uninhabited -# see Mark Fineman, "An Isle Rich in Guano and Discord", -# _Los Angeles Times_ (1998-11-10), A1, A10; it cites -# Jimmy Skaggs, _The Great Guano Rush_ (1994). - -################################################################################ - - -# From Paul Eggert (2017-02-10): -# -# Unless otherwise specified, the source for data through 1990 is: -# Thomas G. Shanks and Rique Pottenger, The International Atlas (6th edition), -# San Diego: ACS Publications, Inc. (2003). -# Unfortunately this book contains many errors and cites no sources. -# -# Many years ago Gwillim Law wrote that a good source -# for time zone data was the International Air Transport -# Association's Standard Schedules Information Manual (IATA SSIM), -# published semiannually. Law sent in several helpful summaries -# of the IATA's data after 1990. Except where otherwise noted, -# IATA SSIM is the source for entries after 1990. -# -# Other sources occasionally used include: -# -# Edward W. Whitman, World Time Differences, -# Whitman Publishing Co, 2 Niagara Av, Ealing, London (undated), -# which I found in the UCLA library. -# -# William Willett, The Waste of Daylight, 19th edition -# -# [PDF] (1914-03) -# -# Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94 -# . -# -# See the 'europe' file for Greenland. - -# Canada - -# From Alain LaBonté (1994-11-14): -# I post here the time zone abbreviations standardized in Canada -# for both English and French in the CAN/CSA-Z234.4-89 standard.... 
-#
-# UTC    Standard time    Daylight saving time
-# offset French  English  French  English
-# -2:30  -       -        HAT     NDT
-# -3     -       -        HAA     ADT
-# -3:30  HNT     NST      -       -
-# -4     HNA     AST      HAE     EDT
-# -5     HNE     EST      HAC     CDT
-# -6     HNC     CST      HAR     MDT
-# -7     HNR     MST      HAP     PDT
-# -8     HNP     PST      HAY     YDT
-# -9     HNY     YST      -       -
-#
-# HN: Heure Normale       ST: Standard Time
-# HA: Heure Avancée       DT: Daylight saving Time
-#
-# A: de l'Atlantique      Atlantic
-# C: du Centre            Central
-# E: de l'Est             Eastern
-# M:                      Mountain
-# N:                      Newfoundland
-# P: du Pacifique         Pacific
-# R: des Rocheuses
-# T: de Terre-Neuve
-# Y: du Yukon             Yukon
-#
-# From Paul Eggert (1994-11-22):
-# Alas, this sort of thing must be handled by localization software.
-
-# Unless otherwise specified, the data entries for Canada are all from Shanks
-# & Pottenger.
-
-# From Chris Walton (2006-04-01, 2006-04-25, 2006-06-26, 2007-01-31,
-# 2007-03-01):
-# The British Columbia government announced yesterday that it will
-# adjust daylight savings next year to align with changes in the
-# U.S. and the rest of Canada....
-# https://archive.news.gov.bc.ca/releases/news_releases_2005-2009/2006AG0014-000330.htm
-# ...
-# Nova Scotia
-# Daylight saving time will be extended by four weeks starting in 2007....
-# https://www.novascotia.ca/just/regulations/rg2/2006/ma1206.pdf
-#
-# [For New Brunswick] the new legislation dictates that the time change is to
-# be done at 02:00 instead of 00:01.
-# https://www.gnb.ca/0062/acts/BBA-2006/Chap-19.pdf
-# ...
-# Manitoba has traditionally changed the clock every fall at 03:00.
-# As of 2006, the transition is to take place one hour earlier at 02:00.
-# https://web2.gov.mb.ca/laws/statutes/ccsm/o030e.php
-# ...
-# [Alberta, Ontario, Quebec] will follow US rules.
-# http://www.qp.gov.ab.ca/documents/spring/CH03_06.CFM
-# http://www.e-laws.gov.on.ca/DBLaws/Source/Regs/English/2006/R06111_e.htm
-# http://www2.publicationsduquebec.gouv.qc.ca/dynamicSearch/telecharge.php?type=5&file=2006C39A.PDF
-# ...
-# P.E.I. will follow US rules....
-# http://www.assembly.pe.ca/bills/pdf_chapter/62/3/chapter-41.pdf
-# ...
-# Province of Newfoundland and Labrador....
-# http://www.hoa.gov.nl.ca/hoa/bills/Bill0634.htm
-# ...
-# Yukon
-# https://www.gov.yk.ca/legislation/regs/oic2006_127.pdf
-# ...
-# N.W.T. will follow US rules. Whoever maintains the government web site
-# does not seem to believe in bookmarks. To see the news release, click the
-# following link and search for "Daylight Savings Time Change". Press the
-# "Daylight Savings Time Change" link; it will fire off a popup using
-# JavaScript.
-# http://www.exec.gov.nt.ca/currentnews/currentPR.asp?mode=archive
-# ...
-# Nunavut
-# An amendment to the Interpretation Act was registered on February 19/2007....
-# http://action.attavik.ca/home/justice-gn/attach/2007/gaz02part2.pdf
-
-# From Paul Eggert (2014-10-18):
-# H. David Matthews and Mary Vincent's map
-# "It's about TIME", _Canadian Geographic_ (September-October 1998)
-# http://www.canadiangeographic.ca/Magazine/SO98/alacarte.asp
-# contains detailed boundaries for regions observing nonstandard
-# time and daylight saving time arrangements in Canada circa 1998.
-#
-# National Research Council Canada maintains info about time zones and DST.
-# https://www.nrc-cnrc.gc.ca/eng/services/time/time_zones.html
-# https://www.nrc-cnrc.gc.ca/eng/services/time/faq/index.html#Q5
-# Its unofficial information is often taken from Matthews and Vincent.
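The CAN/CSA-Z234.4-89 table quoted above is presentation data rather than zone history, which is presumably why Eggert's 1994 note says it belongs in localization software. As a minimal sketch of the kind of lookup he means (the struct and variable names here are invented for illustration; the rows are transcribed from the quoted table, with "-" where the standard defines no abbreviation):

    #include <stdio.h>

    /* One row of the CAN/CSA-Z234.4-89 table quoted above: a UTC offset
       in minutes plus the French/English standard and daylight
       abbreviations in effect at that offset. */
    struct can_abbr {
        int offset_min;
        const char *std_fr, *std_en, *dst_fr, *dst_en;
    };

    static const struct can_abbr can_table[] = {
        {-150, "-",   "-",   "HAT", "NDT"},
        {-180, "-",   "-",   "HAA", "ADT"},
        {-210, "HNT", "NST", "-",   "-"},
        {-240, "HNA", "AST", "HAE", "EDT"},
        {-300, "HNE", "EST", "HAC", "CDT"},
        {-360, "HNC", "CST", "HAR", "MDT"},
        {-420, "HNR", "MST", "HAP", "PDT"},
        {-480, "HNP", "PST", "HAY", "YDT"},
        {-540, "HNY", "YST", "-",   "-"},
    };

    int main(void)
    {
        int offset = -240;      /* e.g. Atlantic standard time, UTC-4 */
        for (size_t i = 0; i < sizeof can_table / sizeof can_table[0]; i++)
            if (can_table[i].offset_min == offset)
                printf("UTC%+d:%02d  std %s/%s  dst %s/%s\n",
                       offset / 60, -offset % 60,
                       can_table[i].std_en, can_table[i].std_fr,
                       can_table[i].dst_en, can_table[i].dst_fr);
        return 0;
    }

A real implementation would key on zone and language through a message catalog rather than on the raw offset, since several zones share one offset; the point of Eggert's note is exactly that such pairings do not belong in the tz rules themselves.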
-
-# From Paul Eggert (2006-06-27):
-# For now, assume all of DST-observing Canada will fall into line with the
-# new US DST rules.
-
-# From Chris Walton (2011-12-01)
-# In the first of Tammy Hardwick's articles
-# http://www.ilovecreston.com/?p=articles&t=spec&ar=260
-# she quotes the Friday November 1/1918 edition of the Creston Review.
-# The quote includes these two statements:
-# 'Sunday the CPR went back to the old system of time...'
-# '... The daylight saving scheme was dropped all over Canada at the same time,'
-# These statements refer to a transition from daylight time to standard time
-# that occurred nationally on Sunday October 27/1918. This transition was
-# also documented in the Saturday October 26/1918 edition of the Toronto Star.
-
-# In light of that evidence, we alter the date from the earlier-believed
-# Oct 31 to Oct 27, 1918 (and Sunday is a more likely transition day
-# than Thursday) in all Canadian rulesets.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule Canada 1918 only - Apr 14 2:00 1:00 D
-Rule Canada 1918 only - Oct 27 2:00 0 S
-Rule Canada 1942 only - Feb 9 2:00 1:00 W # War
-Rule Canada 1945 only - Aug 14 23:00u 1:00 P # Peace
-Rule Canada 1945 only - Sep 30 2:00 0 S
-Rule Canada 1974 1986 - Apr lastSun 2:00 1:00 D
-Rule Canada 1974 2006 - Oct lastSun 2:00 0 S
-Rule Canada 1987 2006 - Apr Sun>=1 2:00 1:00 D
-Rule Canada 2007 max - Mar Sun>=8 2:00 1:00 D
-Rule Canada 2007 max - Nov Sun>=1 2:00 0 S
-
-
-# Newfoundland and Labrador
-
-# From Paul Eggert (2017-10-14):
-# Legally Labrador should observe Newfoundland time; see:
-# McLeod J. Labrador time - legal or not? St. John's Telegram, 2017-10-07
-# http://www.thetelegram.com/news/local/labrador-time--legal-or-not-154860/
-# Matthews and Vincent (1998) write that the only part of Labrador
-# that follows the rules is the southeast corner, including Port Hope
-# Simpson and Mary's Harbour, but excluding, say, Black Tickle.
-
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
-Rule StJohns 1917 only - Apr 8 2:00 1:00 D
-Rule StJohns 1917 only - Sep 17 2:00 0 S
-# Whitman gives 1919 Apr 5 and 1920 Apr 5; go with Shanks & Pottenger.
-Rule StJohns 1919 only - May 5 23:00 1:00 D
-Rule StJohns 1919 only - Aug 12 23:00 0 S
-# For 1931-1935 Whitman gives Apr same date; go with Shanks & Pottenger.
-Rule StJohns 1920 1935 - May Sun>=1 23:00 1:00 D
-Rule StJohns 1920 1935 - Oct lastSun 23:00 0 S
-# For 1936-1941 Whitman gives May Sun>=8 and Oct Sun>=1; go with Shanks &
-# Pottenger.
-Rule StJohns 1936 1941 - May Mon>=9 0:00 1:00 D
-Rule StJohns 1936 1941 - Oct Mon>=2 0:00 0 S
-# Whitman gives the following transitions:
-# 1942 03-01/12-31, 1943 05-30/09-05, 1944 07-10/09-02, 1945 01-01/10-07
-# but go with Shanks & Pottenger and assume they used Canadian rules.
-# For 1946-9 Whitman gives May 5,4,9,1 - Oct 1,5,3,2, and for 1950 he gives
-# Apr 30 - Sep 24; go with Shanks & Pottenger.
-Rule StJohns 1946 1950 - May Sun>=8 2:00 1:00 D
-Rule StJohns 1946 1950 - Oct Sun>=2 2:00 0 S
-Rule StJohns 1951 1986 - Apr lastSun 2:00 1:00 D
-Rule StJohns 1951 1959 - Sep lastSun 2:00 0 S
-Rule StJohns 1960 1986 - Oct lastSun 2:00 0 S
-# From Paul Eggert (2000-10-02):
-# INMS (2000-09-12) says that, since 1988 at least, Newfoundland switches
-# at 00:01 local time. For now, assume it started in 1987.
-
-# From Michael Pelley (2011-09-12):
-# We received today, Monday, September 12, 2011, notification that the
-# changes to the Newfoundland Standard Time Act have been proclaimed.
-# The change in the Act stipulates that the change from Daylight Savings -# Time to Standard Time and from Standard Time to Daylight Savings Time -# now occurs at 2:00AM. -# ... -# http://www.assembly.nl.ca/legislation/sr/annualstatutes/2011/1106.chp.htm -# ... -# MICHAEL PELLEY | Manager of Enterprise Architecture - Solution Delivery -# Office of the Chief Information Officer -# Executive Council -# Government of Newfoundland & Labrador - -Rule StJohns 1987 only - Apr Sun>=1 0:01 1:00 D -Rule StJohns 1987 2006 - Oct lastSun 0:01 0 S -Rule StJohns 1988 only - Apr Sun>=1 0:01 2:00 DD -Rule StJohns 1989 2006 - Apr Sun>=1 0:01 1:00 D -Rule StJohns 2007 2011 - Mar Sun>=8 0:01 1:00 D -Rule StJohns 2007 2010 - Nov Sun>=1 0:01 0 S -# -# St John's has an apostrophe, but Posix file names can't have apostrophes. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/St_Johns -3:30:52 - LMT 1884 - -3:30:52 StJohns N%sT 1918 - -3:30:52 Canada N%sT 1919 - -3:30:52 StJohns N%sT 1935 Mar 30 - -3:30 StJohns N%sT 1942 May 11 - -3:30 Canada N%sT 1946 - -3:30 StJohns N%sT 2011 Nov - -3:30 Canada N%sT - -# most of east Labrador - -# The name 'Happy Valley-Goose Bay' is too long; use 'Goose Bay'. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Goose_Bay -4:01:40 - LMT 1884 # Happy Valley-Goose Bay - -3:30:52 - NST 1918 - -3:30:52 Canada N%sT 1919 - -3:30:52 - NST 1935 Mar 30 - -3:30 - NST 1936 - -3:30 StJohns N%sT 1942 May 11 - -3:30 Canada N%sT 1946 - -3:30 StJohns N%sT 1966 Mar 15 2:00 - -4:00 StJohns A%sT 2011 Nov - -4:00 Canada A%sT - - -# west Labrador, Nova Scotia, Prince Edward I - -# From Brian Inglis (2015-07-20): -# From the historical weather station records available at: -# https://weatherspark.com/history/28351/1971/Sydney-Nova-Scotia-Canada -# Sydney shares the same time history as Glace Bay, so was -# likely to be the same across the island.... -# Sydney, as the capital and most populous location, or Cape Breton, would -# have been better names for the zone had we known this in 1996. - -# From Paul Eggert (2015-07-20): -# Shanks & Pottenger write that since 1970 most of this region has been like -# Halifax. Many locales did not observe peacetime DST until 1972; -# the Cape Breton area, represented by Glace Bay, is the largest we know of -# (Glace Bay was perhaps not the best name choice but no point changing now). -# Shanks & Pottenger also write that Liverpool, NS was the only town -# in Canada to observe DST in 1971 but not 1970; for now we'll assume -# this is a typo. 
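An aside on the LMT offsets and cutover instants in the zones above, which the file never spells out: the pre-standard GMTOFF values (such as St_Johns's -3:30:52) are simply each settlement's longitude converted to mean solar time at 4 minutes of clock per degree, and the odd-looking 1883 Nov 18 UNTIL times in the Indiana and Kentucky zones earlier all appear to work out to 18:00 UT, i.e. noon of the incoming standard time expressed in the old LMT. A minimal sketch of the longitude arithmetic, assuming the zone.tab coordinate of 52 deg 43 min W for St John's:

    #include <stdio.h>

    int main(void)
    {
        /* Longitude of St John's per zone.tab: 52 deg 43 min W.  Mean
           solar time runs 4 minutes (240 seconds) of clock per degree
           of longitude, so LMT is the longitude converted to seconds. */
        double west_deg = 52.0 + 43.0 / 60.0;
        int behind = (int)(west_deg * 240.0 + 0.5); /* seconds behind UTC */

        printf("LMT offset: -%d:%02d:%02d\n",       /* -3:30:52, as above */
               behind / 3600, behind % 3600 / 60, behind % 60);
        return 0;
    }

The same arithmetic run in reverse on, say, Indianapolis's -5:44:38 plus its UNTIL of 12:15:22 gives exactly 18:00 UT, which is 12:00 on the new Central standard clock: the "day of two noons" cutover.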
- -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Halifax 1916 only - Apr 1 0:00 1:00 D -Rule Halifax 1916 only - Oct 1 0:00 0 S -Rule Halifax 1920 only - May 9 0:00 1:00 D -Rule Halifax 1920 only - Aug 29 0:00 0 S -Rule Halifax 1921 only - May 6 0:00 1:00 D -Rule Halifax 1921 1922 - Sep 5 0:00 0 S -Rule Halifax 1922 only - Apr 30 0:00 1:00 D -Rule Halifax 1923 1925 - May Sun>=1 0:00 1:00 D -Rule Halifax 1923 only - Sep 4 0:00 0 S -Rule Halifax 1924 only - Sep 15 0:00 0 S -Rule Halifax 1925 only - Sep 28 0:00 0 S -Rule Halifax 1926 only - May 16 0:00 1:00 D -Rule Halifax 1926 only - Sep 13 0:00 0 S -Rule Halifax 1927 only - May 1 0:00 1:00 D -Rule Halifax 1927 only - Sep 26 0:00 0 S -Rule Halifax 1928 1931 - May Sun>=8 0:00 1:00 D -Rule Halifax 1928 only - Sep 9 0:00 0 S -Rule Halifax 1929 only - Sep 3 0:00 0 S -Rule Halifax 1930 only - Sep 15 0:00 0 S -Rule Halifax 1931 1932 - Sep Mon>=24 0:00 0 S -Rule Halifax 1932 only - May 1 0:00 1:00 D -Rule Halifax 1933 only - Apr 30 0:00 1:00 D -Rule Halifax 1933 only - Oct 2 0:00 0 S -Rule Halifax 1934 only - May 20 0:00 1:00 D -Rule Halifax 1934 only - Sep 16 0:00 0 S -Rule Halifax 1935 only - Jun 2 0:00 1:00 D -Rule Halifax 1935 only - Sep 30 0:00 0 S -Rule Halifax 1936 only - Jun 1 0:00 1:00 D -Rule Halifax 1936 only - Sep 14 0:00 0 S -Rule Halifax 1937 1938 - May Sun>=1 0:00 1:00 D -Rule Halifax 1937 1941 - Sep Mon>=24 0:00 0 S -Rule Halifax 1939 only - May 28 0:00 1:00 D -Rule Halifax 1940 1941 - May Sun>=1 0:00 1:00 D -Rule Halifax 1946 1949 - Apr lastSun 2:00 1:00 D -Rule Halifax 1946 1949 - Sep lastSun 2:00 0 S -Rule Halifax 1951 1954 - Apr lastSun 2:00 1:00 D -Rule Halifax 1951 1954 - Sep lastSun 2:00 0 S -Rule Halifax 1956 1959 - Apr lastSun 2:00 1:00 D -Rule Halifax 1956 1959 - Sep lastSun 2:00 0 S -Rule Halifax 1962 1973 - Apr lastSun 2:00 1:00 D -Rule Halifax 1962 1973 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Halifax -4:14:24 - LMT 1902 Jun 15 - -4:00 Halifax A%sT 1918 - -4:00 Canada A%sT 1919 - -4:00 Halifax A%sT 1942 Feb 9 2:00s - -4:00 Canada A%sT 1946 - -4:00 Halifax A%sT 1974 - -4:00 Canada A%sT -Zone America/Glace_Bay -3:59:48 - LMT 1902 Jun 15 - -4:00 Canada A%sT 1953 - -4:00 Halifax A%sT 1954 - -4:00 - AST 1972 - -4:00 Halifax A%sT 1974 - -4:00 Canada A%sT - -# New Brunswick - -# From Paul Eggert (2007-01-31): -# The Time Definition Act -# says they changed at 00:01 through 2006, and -# makes it -# clear that this was the case since at least 1993. -# For now, assume it started in 1993. 
- -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Moncton 1933 1935 - Jun Sun>=8 1:00 1:00 D -Rule Moncton 1933 1935 - Sep Sun>=8 1:00 0 S -Rule Moncton 1936 1938 - Jun Sun>=1 1:00 1:00 D -Rule Moncton 1936 1938 - Sep Sun>=1 1:00 0 S -Rule Moncton 1939 only - May 27 1:00 1:00 D -Rule Moncton 1939 1941 - Sep Sat>=21 1:00 0 S -Rule Moncton 1940 only - May 19 1:00 1:00 D -Rule Moncton 1941 only - May 4 1:00 1:00 D -Rule Moncton 1946 1972 - Apr lastSun 2:00 1:00 D -Rule Moncton 1946 1956 - Sep lastSun 2:00 0 S -Rule Moncton 1957 1972 - Oct lastSun 2:00 0 S -Rule Moncton 1993 2006 - Apr Sun>=1 0:01 1:00 D -Rule Moncton 1993 2006 - Oct lastSun 0:01 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Moncton -4:19:08 - LMT 1883 Dec 9 - -5:00 - EST 1902 Jun 15 - -4:00 Canada A%sT 1933 - -4:00 Moncton A%sT 1942 - -4:00 Canada A%sT 1946 - -4:00 Moncton A%sT 1973 - -4:00 Canada A%sT 1993 - -4:00 Moncton A%sT 2007 - -4:00 Canada A%sT - -# Quebec - -# From Paul Eggert (2015-03-24): -# See America/Toronto for most of Quebec, including Montreal. -# -# Matthews and Vincent (1998) also write that Quebec east of the -63 -# meridian is supposed to observe AST, but residents as far east as -# Natashquan use EST/EDT, and residents east of Natashquan use AST. -# The Quebec department of justice writes in -# "The situation in Minganie and Basse-Côte-Nord" -# http://www.justice.gouv.qc.ca/english/publications/generale/temps-minganie-a.htm -# that the coastal strip from just east of Natashquan to Blanc-Sablon -# observes Atlantic standard time all year round. -# https://www.assnat.qc.ca/Media/Process.aspx?MediaId=ANQ.Vigie.Bll.DocumentGenerique_8845en -# says this common practice was codified into law as of 2007. -# For lack of better info, guess this practice began around 1970, contra to -# Shanks & Pottenger who have this region observing AST/ADT. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Blanc-Sablon -3:48:28 - LMT 1884 - -4:00 Canada A%sT 1970 - -4:00 - AST - -# Ontario - -# From Paul Eggert (2006-07-09): -# Shanks & Pottenger write that since 1970 most of Ontario has been like -# Toronto. -# Thunder Bay skipped DST in 1973. -# Many smaller locales did not observe peacetime DST until 1974; -# Nipigon (EST) and Rainy River (CST) are the largest that we know of. -# Far west Ontario is like Winnipeg; far east Quebec is like Halifax. - -# From Mark Brader (2003-07-26): -# [According to the Toronto Star] Orillia, Ontario, adopted DST -# effective Saturday, 1912-06-22, 22:00; the article mentions that -# Port Arthur (now part of Thunder Bay, Ontario) as well as Moose Jaw -# have already done so. In Orillia DST was to run until Saturday, -# 1912-08-31 (no time mentioned), but it was met with considerable -# hostility from certain segments of the public, and was revoked after -# only two weeks - I copied it as Saturday, 1912-07-07, 22:00, but -# presumably that should be -07-06. (1912-06-19, -07-12; also letters -# earlier in June). -# -# Kenora, Ontario, was to abandon DST on 1914-06-01 (-05-21). -# -# From Paul Eggert (2017-07-08): -# For more on Orillia, see: Daubs K. Bold attempt at daylight saving -# time became a comic failure in Orillia. Toronto Star 2017-07-08. 
-# https://www.thestar.com/news/insight/2017/07/08/bold-attempt-at-daylight-saving-time-became-a-comic-failure-in-orillia.html - -# From Paul Eggert (1997-10-17): -# Mark Brader writes that an article in the 1997-10-14 Toronto Star -# says that Atikokan, Ontario currently does not observe DST, -# but will vote on 11-10 whether to use EST/EDT. -# He also writes that the Ontario Time Act (1990, Chapter T.9) -# http://www.gov.on.ca/MBS/english/publications/statregs/conttext.html -# says that Ontario east of 90W uses EST/EDT, and west of 90W uses CST/CDT. -# Officially Atikokan is therefore on CST/CDT, and most likely this report -# concerns a non-official time observed as a matter of local practice. -# -# From Paul Eggert (2000-10-02): -# Matthews and Vincent (1998) write that Atikokan, Pickle Lake, and -# New Osnaburgh observe CST all year, that Big Trout Lake observes -# CST/CDT, and that Upsala and Shebandowan observe EST/EDT, all in -# violation of the official Ontario rules. -# -# From Paul Eggert (2006-07-09): -# Chris Walton (2006-07-06) mentioned an article by Stephanie MacLellan in the -# 2005-07-21 Chronicle-Journal, which said: -# -# The clocks in Atikokan stay set on standard time year-round. -# This means they spend about half the time on central time and -# the other half on eastern time. -# -# For the most part, the system works, Mayor Dennis Brown said. -# -# "The majority of businesses in Atikokan deal more with Eastern -# Canada, but there are some that deal with Western Canada," he -# said. "I don't see any changes happening here." -# -# Walton also writes "Supposedly Pickle Lake and Mishkeegogamang -# [New Osnaburgh] follow the same practice." - -# From Garry McKinnon (2006-07-14) via Chris Walton: -# I chatted with a member of my board who has an outstanding memory -# and a long history in Atikokan (and in the telecom industry) and he -# can say for certain that Atikokan has been practicing the current -# time keeping since 1952, at least. - -# From Paul Eggert (2006-07-17): -# Shanks & Pottenger say that Atikokan has agreed with Rainy River -# ever since standard time was introduced, but the information from -# McKinnon sounds more authoritative. For now, assume that Atikokan -# switched to EST immediately after WWII era daylight saving time -# ended. This matches the old (less-populous) America/Coral_Harbour -# entry since our cutoff date of 1970, so we can move -# America/Coral_Harbour to the 'backward' file. - -# From Mark Brader (2010-03-06): -# -# Currently the database has: -# -# # Ontario -# -# # From Paul Eggert (2006-07-09): -# # Shanks & Pottenger write that since 1970 most of Ontario has been like -# # Toronto. -# # Thunder Bay skipped DST in 1973. -# # Many smaller locales did not observe peacetime DST until 1974; -# # Nipigon (EST) and Rainy River (CST) are the largest that we know of. -# -# In the (Toronto) Globe and Mail for Saturday, 1955-09-24, in the bottom -# right corner of page 1, it says that Toronto will return to standard -# time at 2 am Sunday morning (which agrees with the database), and that: -# -# The one-hour setback will go into effect throughout most of Ontario, -# except in areas like Windsor which remains on standard time all year. -# -# Windsor is, of course, a lot larger than Nipigon. -# -# I only came across this incidentally. I don't know if Windsor began -# observing DST when Detroit did, or in 1974, or on some other date. 
-# -# By the way, the article continues by noting that: -# -# Some cities in the United States have pushed the deadline back -# three weeks and will change over from daylight saving in October. - -# From Arthur David Olson (2010-07-17): -# -# "Standard Time and Time Zones in Canada" appeared in -# The Journal of The Royal Astronomical Society of Canada, -# volume 26, number 2 (February 1932) and, as of 2010-07-17, -# was available at -# http://adsabs.harvard.edu/full/1932JRASC..26...49S -# -# It includes the text below (starting on page 57): -# -# A list of the places in Canada using daylight saving time would -# require yearly revision. From information kindly furnished by -# the provincial governments and by the postmasters in many cities -# and towns, it is found that the following places used daylight sav- -# ing in 1930. The information for the province of Quebec is definite, -# for the other provinces only approximate: -# -# Province Daylight saving time used -# Prince Edward Island Not used. -# Nova Scotia In Halifax only. -# New Brunswick In St. John only. -# Quebec In the following places: -# Montreal Lachine -# Quebec Mont-Royal -# Lévis Iberville -# St. Lambert Cap de la Madelèine -# Verdun Loretteville -# Westmount Richmond -# Outremont St. Jérôme -# Longueuil Greenfield Park -# Arvida Waterloo -# Chambly-Canton Beaulieu -# Melbourne La Tuque -# St. Théophile Buckingham -# Ontario Used generally in the cities and towns along -# the southerly part of the province. Not -# used in the northwesterly part. -# Manitoba Not used. -# Saskatchewan In Regina only. -# Alberta Not used. -# British Columbia Not used. -# -# With some exceptions, the use of daylight saving may be said to be limited -# to those cities and towns lying between Quebec city and Windsor, Ont. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Toronto 1919 only - Mar 30 23:30 1:00 D -Rule Toronto 1919 only - Oct 26 0:00 0 S -Rule Toronto 1920 only - May 2 2:00 1:00 D -Rule Toronto 1920 only - Sep 26 0:00 0 S -Rule Toronto 1921 only - May 15 2:00 1:00 D -Rule Toronto 1921 only - Sep 15 2:00 0 S -Rule Toronto 1922 1923 - May Sun>=8 2:00 1:00 D -# Shanks & Pottenger say 1923-09-19; assume it's a typo and that "-16" -# was meant. -Rule Toronto 1922 1926 - Sep Sun>=15 2:00 0 S -Rule Toronto 1924 1927 - May Sun>=1 2:00 1:00 D -# The 1927-to-1939 rules can be expressed more simply as -# Rule Toronto 1927 1937 - Sep Sun>=25 2:00 0 S -# Rule Toronto 1928 1937 - Apr Sun>=25 2:00 1:00 D -# Rule Toronto 1938 1940 - Apr lastSun 2:00 1:00 D -# Rule Toronto 1938 1939 - Sep lastSun 2:00 0 S -# The rules below avoid use of Sun>=25 -# (which pre-2004 versions of zic cannot handle). 
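Before the rewritten rules below, it may help to spell out the ON-column notation the comment is describing: "Sun>=25" picks the first Sunday on or after the 25th, which can notionally spill past the end of the month (the case old zic reportedly could not handle), while "lastSun" always stays inside it; the two agree exactly when the month's last Sunday falls on or after the 25th. A minimal sketch of both computations using Sakamoto's day-of-week formula (illustrative only, not zic's actual code):

    #include <stdio.h>

    static int dow(int y, int m, int d)          /* 0 = Sunday */
    {
        static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
        if (m < 3)
            y--;
        return (y + y/4 - y/100 + y/400 + t[m-1] + d) % 7;
    }

    static int days_in(int y, int m)
    {
        static const int n[] = {31,28,31,30,31,30,31,31,30,31,30,31};
        return n[m-1] + (m == 2 && (y%4 == 0 && (y%100 != 0 || y%400 == 0)));
    }

    static int sun_ge(int y, int m, int d)       /* "Sun>=d"; may exceed */
    {                                            /* the month's length,  */
        while (dow(y, m, d) != 0)                /* spilling into the    */
            d++;                                 /* next month           */
        return d;
    }

    static int last_sun(int y, int m)            /* "lastSun" */
    {
        int d = days_in(y, m);
        while (dow(y, m, d) != 0)
            d--;
        return d;
    }

    int main(void)
    {
        /* 1927: last Sunday of September is the 25th, so the two agree. */
        printf("1927: Sun>=25 -> Sep %d, lastSun -> Sep %d\n",
               sun_ge(1927, 9, 25), last_sun(1927, 9));
        /* 1933: last Sunday is Sep 24, so Sun>=25 spills to "Sep 31",
           i.e. Oct 1 -- which is why the rules below carry an explicit
           "Toronto 1933 only Oct 1" line instead. */
        printf("1933: Sun>=25 -> Sep %d, lastSun -> Sep %d\n",
               sun_ge(1933, 9, 25), last_sun(1933, 9));
        return 0;
    }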
-Rule Toronto 1927 1932 - Sep lastSun 2:00 0 S -Rule Toronto 1928 1931 - Apr lastSun 2:00 1:00 D -Rule Toronto 1932 only - May 1 2:00 1:00 D -Rule Toronto 1933 1940 - Apr lastSun 2:00 1:00 D -Rule Toronto 1933 only - Oct 1 2:00 0 S -Rule Toronto 1934 1939 - Sep lastSun 2:00 0 S -Rule Toronto 1945 1946 - Sep lastSun 2:00 0 S -Rule Toronto 1946 only - Apr lastSun 2:00 1:00 D -Rule Toronto 1947 1949 - Apr lastSun 0:00 1:00 D -Rule Toronto 1947 1948 - Sep lastSun 0:00 0 S -Rule Toronto 1949 only - Nov lastSun 0:00 0 S -Rule Toronto 1950 1973 - Apr lastSun 2:00 1:00 D -Rule Toronto 1950 only - Nov lastSun 2:00 0 S -Rule Toronto 1951 1956 - Sep lastSun 2:00 0 S -# Shanks & Pottenger say Toronto ended DST a week early in 1971, -# namely on 1971-10-24, but Mark Brader wrote (2003-05-31) that this -# is wrong, and that he had confirmed it by checking the 1971-10-30 -# Toronto Star, which said that DST was ending 1971-10-31 as usual. -Rule Toronto 1957 1973 - Oct lastSun 2:00 0 S - -# From Paul Eggert (2003-07-27): -# Willett (1914-03) writes (p. 17) "In the Cities of Fort William, and -# Port Arthur, Ontario, the principle of the Bill has been in -# operation for the past three years, and in the City of Moose Jaw, -# Saskatchewan, for one year." - -# From David Bryan via Tory Tronrud, Director/Curator, -# Thunder Bay Museum (2003-11-12): -# There is some suggestion, however, that, by-law or not, daylight -# savings time was being practiced in Fort William and Port Arthur -# before 1909.... [I]n 1910, the line between the Eastern and Central -# Time Zones was permanently moved about two hundred miles west to -# include the Thunder Bay area.... When Canada adopted daylight -# savings time in 1916, Fort William and Port Arthur, having done so -# already, did not change their clocks.... During the Second World -# War,... [t]he cities agreed to implement DST during the summer -# months for the remainder of the war years. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Toronto -5:17:32 - LMT 1895 - -5:00 Canada E%sT 1919 - -5:00 Toronto E%sT 1942 Feb 9 2:00s - -5:00 Canada E%sT 1946 - -5:00 Toronto E%sT 1974 - -5:00 Canada E%sT -Zone America/Thunder_Bay -5:57:00 - LMT 1895 - -6:00 - CST 1910 - -5:00 - EST 1942 - -5:00 Canada E%sT 1970 - -5:00 Toronto E%sT 1973 - -5:00 - EST 1974 - -5:00 Canada E%sT -Zone America/Nipigon -5:53:04 - LMT 1895 - -5:00 Canada E%sT 1940 Sep 29 - -5:00 1:00 EDT 1942 Feb 9 2:00s - -5:00 Canada E%sT -Zone America/Rainy_River -6:18:16 - LMT 1895 - -6:00 Canada C%sT 1940 Sep 29 - -6:00 1:00 CDT 1942 Feb 9 2:00s - -6:00 Canada C%sT -Zone America/Atikokan -6:06:28 - LMT 1895 - -6:00 Canada C%sT 1940 Sep 29 - -6:00 1:00 CDT 1942 Feb 9 2:00s - -6:00 Canada C%sT 1945 Sep 30 2:00 - -5:00 - EST - - -# Manitoba - -# From Rob Douglas (2006-04-06): -# the old Manitoba Time Act - as amended by Bill 2, assented to -# March 27, 1987 ... said ... -# "between two o'clock Central Standard Time in the morning of -# the first Sunday of April of each year and two o'clock Central -# Standard Time in the morning of the last Sunday of October next -# following, one hour in advance of Central Standard Time."... -# I believe that the English legislation [of the old time act] had -# been assented to (March 22, 1967).... -# Also, as far as I can tell, there was no order-in-council varying -# the time of Daylight Saving Time for 2005 and so the provisions of -# the 1987 version would apply - the changeover was at 2:00 Central -# Standard Time (i.e. not until 3:00 Central Daylight Time). 
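The 02:00-versus-02:00s distinction in the Manitoba note above is the AT column's suffix: a bare time (or "w") is local wall-clock time, "s" is local standard time, and "u" is UT, as in the Canada rule "Aug 14 23:00u" earlier. A minimal sketch of the conversion to UT (illustrative, not zic's actual code; offsets are in minutes):

    #include <stdio.h>

    /* Convert a Rule's AT value to UT, given the zone's standard offset
       from UT and the amount of DST ("save") in effect at the moment of
       the transition. */
    static int at_to_ut(int at_min, char suffix, int stdoff_min, int save_min)
    {
        switch (suffix) {
        case 'u':                     /* already UT */
            return at_min;
        case 's':                     /* local standard time */
            return at_min - stdoff_min;
        default:                      /* 'w' or none: wall clock */
            return at_min - stdoff_min - save_min;
        }
    }

    int main(void)
    {
        int cst = -6 * 60;            /* Central standard, UTC-6 */
        /* 2:00s on CST is 8:00 UT whether or not DST is in effect... */
        printf("2:00s -> %d:00 UT\n", at_to_ut(2*60, 's', cst, 60) / 60);
        /* ...while a plain 2:00 at the end of CDT is 7:00 UT, one hour
           earlier on the UTC axis -- the 2:00 CST vs 3:00 CDT point
           made in the Manitoba note above. */
        printf("2:00w -> %d:00 UT\n", at_to_ut(2*60, 'w', cst, 60) / 60);
        return 0;
    }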
- -# From Paul Eggert (2006-04-10): -# Shanks & Pottenger say Manitoba switched at 02:00 (not 02:00s) -# starting 1966. Since 02:00s is clearly correct for 1967 on, assume -# it was also 02:00s in 1966. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Winn 1916 only - Apr 23 0:00 1:00 D -Rule Winn 1916 only - Sep 17 0:00 0 S -Rule Winn 1918 only - Apr 14 2:00 1:00 D -Rule Winn 1918 only - Oct 27 2:00 0 S -Rule Winn 1937 only - May 16 2:00 1:00 D -Rule Winn 1937 only - Sep 26 2:00 0 S -Rule Winn 1942 only - Feb 9 2:00 1:00 W # War -Rule Winn 1945 only - Aug 14 23:00u 1:00 P # Peace -Rule Winn 1945 only - Sep lastSun 2:00 0 S -Rule Winn 1946 only - May 12 2:00 1:00 D -Rule Winn 1946 only - Oct 13 2:00 0 S -Rule Winn 1947 1949 - Apr lastSun 2:00 1:00 D -Rule Winn 1947 1949 - Sep lastSun 2:00 0 S -Rule Winn 1950 only - May 1 2:00 1:00 D -Rule Winn 1950 only - Sep 30 2:00 0 S -Rule Winn 1951 1960 - Apr lastSun 2:00 1:00 D -Rule Winn 1951 1958 - Sep lastSun 2:00 0 S -Rule Winn 1959 only - Oct lastSun 2:00 0 S -Rule Winn 1960 only - Sep lastSun 2:00 0 S -Rule Winn 1963 only - Apr lastSun 2:00 1:00 D -Rule Winn 1963 only - Sep 22 2:00 0 S -Rule Winn 1966 1986 - Apr lastSun 2:00s 1:00 D -Rule Winn 1966 2005 - Oct lastSun 2:00s 0 S -Rule Winn 1987 2005 - Apr Sun>=1 2:00s 1:00 D -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Winnipeg -6:28:36 - LMT 1887 Jul 16 - -6:00 Winn C%sT 2006 - -6:00 Canada C%sT - - -# Saskatchewan - -# From Mark Brader (2003-07-26): -# The first actual adoption of DST in Canada was at the municipal -# level. As the [Toronto] Star put it (1912-06-07), "While people -# elsewhere have long been talking of legislation to save daylight, -# the city of Moose Jaw [Saskatchewan] has acted on its own hook." -# DST in Moose Jaw began on Saturday, 1912-06-01 (no time mentioned: -# presumably late evening, as below), and would run until "the end of -# the summer". The discrepancy between municipal time and railroad -# time was noted. - -# From Paul Eggert (2003-07-27): -# Willett (1914-03) notes that DST "has been in operation ... in the -# City of Moose Jaw, Saskatchewan, for one year." - -# From Paul Eggert (2006-03-22): -# Shanks & Pottenger say that since 1970 this region has mostly been as Regina. -# Some western towns (e.g. Swift Current) switched from MST/MDT to CST in 1972. -# Other western towns (e.g. Lloydminster) are like Edmonton. -# Matthews and Vincent (1998) write that Denare Beach and Creighton -# are like Winnipeg, in violation of Saskatchewan law. - -# From W. Jones (1992-11-06): -# The. . .below is based on information I got from our law library, the -# provincial archives, and the provincial Community Services department. -# A precise history would require digging through newspaper archives, and -# since you didn't say what you wanted, I didn't bother. -# -# Saskatchewan is split by a time zone meridian (105W) and over the years -# the boundary became pretty ragged as communities near it reevaluated -# their affiliations in one direction or the other. In 1965 a provincial -# referendum favoured legislating common time practices. -# -# On 15 April 1966 the Time Act (c. 
T-14, Revised Statutes of -# Saskatchewan 1978) was proclaimed, and established that the eastern -# part of Saskatchewan would use CST year round, that districts in -# northwest Saskatchewan would by default follow CST but could opt to -# follow Mountain Time rules (thus 1 hour difference in the winter and -# zero in the summer), and that districts in southwest Saskatchewan would -# by default follow MT but could opt to follow CST. -# -# It took a few years for the dust to settle (I know one story of a town -# on one time zone having its school in another, such that a mom had to -# serve her family lunch in two shifts), but presently it seems that only -# a few towns on the border with Alberta (e.g. Lloydminster) follow MT -# rules any more; all other districts appear to have used CST year round -# since sometime in the 1960s. - -# From Chris Walton (2006-06-26): -# The Saskatchewan time act which was last updated in 1996 is about 30 pages -# long and rather painful to read. -# http://www.qp.gov.sk.ca/documents/English/Statutes/Statutes/T14.pdf - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Regina 1918 only - Apr 14 2:00 1:00 D -Rule Regina 1918 only - Oct 27 2:00 0 S -Rule Regina 1930 1934 - May Sun>=1 0:00 1:00 D -Rule Regina 1930 1934 - Oct Sun>=1 0:00 0 S -Rule Regina 1937 1941 - Apr Sun>=8 0:00 1:00 D -Rule Regina 1937 only - Oct Sun>=8 0:00 0 S -Rule Regina 1938 only - Oct Sun>=1 0:00 0 S -Rule Regina 1939 1941 - Oct Sun>=8 0:00 0 S -Rule Regina 1942 only - Feb 9 2:00 1:00 W # War -Rule Regina 1945 only - Aug 14 23:00u 1:00 P # Peace -Rule Regina 1945 only - Sep lastSun 2:00 0 S -Rule Regina 1946 only - Apr Sun>=8 2:00 1:00 D -Rule Regina 1946 only - Oct Sun>=8 2:00 0 S -Rule Regina 1947 1957 - Apr lastSun 2:00 1:00 D -Rule Regina 1947 1957 - Sep lastSun 2:00 0 S -Rule Regina 1959 only - Apr lastSun 2:00 1:00 D -Rule Regina 1959 only - Oct lastSun 2:00 0 S -# -Rule Swift 1957 only - Apr lastSun 2:00 1:00 D -Rule Swift 1957 only - Oct lastSun 2:00 0 S -Rule Swift 1959 1961 - Apr lastSun 2:00 1:00 D -Rule Swift 1959 only - Oct lastSun 2:00 0 S -Rule Swift 1960 1961 - Sep lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Regina -6:58:36 - LMT 1905 Sep - -7:00 Regina M%sT 1960 Apr lastSun 2:00 - -6:00 - CST -Zone America/Swift_Current -7:11:20 - LMT 1905 Sep - -7:00 Canada M%sT 1946 Apr lastSun 2:00 - -7:00 Regina M%sT 1950 - -7:00 Swift M%sT 1972 Apr lastSun 2:00 - -6:00 - CST - - -# Alberta - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Edm 1918 1919 - Apr Sun>=8 2:00 1:00 D -Rule Edm 1918 only - Oct 27 2:00 0 S -Rule Edm 1919 only - May 27 2:00 0 S -Rule Edm 1920 1923 - Apr lastSun 2:00 1:00 D -Rule Edm 1920 only - Oct lastSun 2:00 0 S -Rule Edm 1921 1923 - Sep lastSun 2:00 0 S -Rule Edm 1942 only - Feb 9 2:00 1:00 W # War -Rule Edm 1945 only - Aug 14 23:00u 1:00 P # Peace -Rule Edm 1945 only - Sep lastSun 2:00 0 S -Rule Edm 1947 only - Apr lastSun 2:00 1:00 D -Rule Edm 1947 only - Sep lastSun 2:00 0 S -Rule Edm 1967 only - Apr lastSun 2:00 1:00 D -Rule Edm 1967 only - Oct lastSun 2:00 0 S -Rule Edm 1969 only - Apr lastSun 2:00 1:00 D -Rule Edm 1969 only - Oct lastSun 2:00 0 S -Rule Edm 1972 1986 - Apr lastSun 2:00 1:00 D -Rule Edm 1972 2006 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Edmonton -7:33:52 - LMT 1906 Sep - -7:00 Edm M%sT 1987 - -7:00 Canada M%sT - - -# British Columbia - -# From Paul Eggert (2006-03-22): -# Shanks & Pottenger write that since 1970 most of this region has -# been like Vancouver. 
-# Dawson Creek uses MST. Much of east BC is like Edmonton.
-# Matthews and Vincent (1998) write that Creston is like Dawson Creek.
-
-# It seems, though, that this (re: Creston) is not entirely correct:
-
-# From Chris Walton (2011-12-01):
-# There are two areas within the Canadian province of British Columbia
-# that do not currently observe daylight saving:
-# a) The Creston Valley (includes the town of Creston and surrounding area)
-# b) The eastern half of the Peace River Regional District
-# (includes the cities of Dawson Creek and Fort St. John)
-
-# Earlier this year I stumbled across a detailed article about the time
-# keeping history of Creston; it was written by Tammy Hardwick who is the
-# manager of the Creston & District Museum. The article was written in May 2009.
-# http://www.ilovecreston.com/?p=articles&t=spec&ar=260
-# According to the article, Creston has not changed its clocks since June 1918.
-# i.e. Creston has been stuck on UTC-7 for 93 years.
-# Dawson Creek, on the other hand, changed its clocks as recently as April 1972.
-
-# Unfortunately the exact date for the time change in June 1918 remains
-# unknown and will be difficult to ascertain. I e-mailed Tammy a few months
-# ago to ask if Sunday June 2 was a reasonable guess. She said it was just
-# as plausible as any other date (in June). She also said that after writing
-# the article she had discovered another time change in 1916; this is the
-# subject of another article which she wrote in October 2010.
-# http://www.creston.museum.bc.ca/index.php?module=comments&uop=view_comment&cm+id=56
-
-# Here is a summary of the three clock change events in Creston's history:
-# 1. 1884 or 1885: adoption of Mountain Standard Time (GMT-7)
-#    Exact date unknown
-# 2. Oct 1916: switch to Pacific Standard Time (GMT-8)
-#    Exact date in October unknown; Sunday October 1 is a reasonable guess.
-# 3. June 1918: switch to Pacific Daylight Time (GMT-7)
-#    Exact date in June unknown; Sunday June 2 is a reasonable guess.
-# note 1:
-# On Oct 27/1918 when daylight saving ended in the rest of Canada,
-# Creston did not change its clocks.
-# note 2:
-# During WWII when the Federal Government legislated a mandatory clock change,
-# Creston did not oblige.
-# note 3:
-# There is no guarantee that Creston will remain on Mountain Standard Time
-# (UTC-7) forever.
-# The subject was debated at least once this year by the town Council.
-# http://www.bclocalnews.com/kootenay_rockies/crestonvalleyadvance/news/116760809.html
-
-# During a period of WWII, summer time (daylight saving) was mandatory in Canada.
-# In Creston, that was handled by shifting the area to PST (-8:00) then applying
-# summer time to cause the offset to be -7:00, the same as it had been before
-# the change. It can be argued that the timezone abbreviation during this
-# period should be PDT rather than MST, but that doesn't seem important enough
-# (to anyone) to further complicate the rules.
-
-# The transition dates (and times) are guesses.
-
-# From Matt Johnson (2015-09-21):
-# Fort Nelson, BC, Canada will cancel DST this year. So while previously they
-# were aligned with America/Vancouver, they're now aligned with
-# America/Dawson_Creek.
-# http://www.northernrockies.ca/EN/meta/news/archives/2015/northern-rockies-time-change.html
-#
-# From Tim Parenti (2015-09-23):
-# This requires a new zone for the Northern Rockies Regional Municipality,
-# America/Fort_Nelson. The resolution of 2014-12-08 was reached following a
-# 2014-11-15 poll with nearly 75% support.
Effectively, the municipality has -# been on MST (-0700) like Dawson Creek since it advanced its clocks on -# 2015-03-08. -# -# From Paul Eggert (2015-09-23): -# Shanks says Fort Nelson did not observe DST in 1946, unlike Vancouver. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Vanc 1918 only - Apr 14 2:00 1:00 D -Rule Vanc 1918 only - Oct 27 2:00 0 S -Rule Vanc 1942 only - Feb 9 2:00 1:00 W # War -Rule Vanc 1945 only - Aug 14 23:00u 1:00 P # Peace -Rule Vanc 1945 only - Sep 30 2:00 0 S -Rule Vanc 1946 1986 - Apr lastSun 2:00 1:00 D -Rule Vanc 1946 only - Oct 13 2:00 0 S -Rule Vanc 1947 1961 - Sep lastSun 2:00 0 S -Rule Vanc 1962 2006 - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Vancouver -8:12:28 - LMT 1884 - -8:00 Vanc P%sT 1987 - -8:00 Canada P%sT -Zone America/Dawson_Creek -8:00:56 - LMT 1884 - -8:00 Canada P%sT 1947 - -8:00 Vanc P%sT 1972 Aug 30 2:00 - -7:00 - MST -Zone America/Fort_Nelson -8:10:47 - LMT 1884 - -8:00 Vanc P%sT 1946 - -8:00 - PST 1947 - -8:00 Vanc P%sT 1987 - -8:00 Canada P%sT 2015 Mar 8 2:00 - -7:00 - MST -Zone America/Creston -7:46:04 - LMT 1884 - -7:00 - MST 1916 Oct 1 - -8:00 - PST 1918 Jun 2 - -7:00 - MST - -# Northwest Territories, Nunavut, Yukon - -# From Paul Eggert (2006-03-22): -# Dawson switched to PST in 1973. Inuvik switched to MST in 1979. -# Mathew Englander (1996-10-07) gives the following refs: -# * 1967. Paragraph 28(34)(g) of the Interpretation Act, S.C. 1967-68, -# c. 7 defines Yukon standard time as UTC-9.... -# see Interpretation Act, R.S.C. 1985, c. I-21, s. 35(1). -# [https://www.canlii.org/en/ca/laws/stat/rsc-1985-c-i-21/latest/rsc-1985-c-i-21.html] -# * C.O. 1973/214 switched Yukon to PST on 1973-10-28 00:00. -# * O.I.C. 1980/02 established DST. -# * O.I.C. 1987/056 changed DST to Apr firstSun 2:00 to Oct lastSun 2:00. - -# From Brian Inglis (2015-04-14): -# -# I tried to trace the history of Yukon time and found the following -# regulations, giving the reference title and URL if found, regulation name, -# and relevant quote if available. Each regulation specifically revokes its -# predecessor. The final reference is to the current Interpretation Act -# authorizing and resulting from these regulatory changes. -# -# Only recent regulations were retrievable via Yukon government site search or -# index, and only some via Canadian legal sources. Other sources used include -# articles titled "Standard Time and Time Zones in Canada" from JRASC via ADS -# Abstracts, cited by ADO for 1932 ..., and updated versions from 1958 and -# 1970 quoted below; each article includes current extracts from provincial -# and territorial ST and DST regulations at the end, summaries and details of -# standard times and daylight saving time at many locations across Canada, -# with time zone maps, tables and calculations for Canadian Sunrise, Sunset, -# and LMST; they also cover many countries and global locations, with a chart -# and table showing current Universal Time offsets, and may be useful as -# another source of information for 1970 and earlier. -# -# * Standard Time and Time Zones in Canada; Smith, C.C.; JRASC, Vol. 26, -# pp.49-77; February 1932; SAO/NASA Astrophysics Data System (ADS) -# http://adsabs.harvard.edu/abs/1932JRASC..26...49S from p.75: -# Yukon Interpretation Ordinance -# Yukon standard time is the local mean time at the one hundred and -# thirty-fifth meridian. -# -# * Standard Time and Time Zones in Canada; Smith, C.C.; Thomson, Malcolm M.; -# JRASC, Vol. 
52, pp.193-223; October 1958; SAO/NASA Astrophysics Data System -# (ADS) http://adsabs.harvard.edu/abs/1958JRASC..52..193S from pp.220-1: -# Yukon Interpretation Ordinance, 1955, Chap. 16. -# -# (1) Subject to this section, standard time shall be reckoned as nine -# hours behind Greenwich Time and called Yukon Standard Time. -# -# (2) Notwithstanding subsection (1), the Commissioner may make regulations -# varying the manner of reckoning standard time. -# -# * Yukon Territory Commissioner's Order 1966-20 Interpretation Ordinance -# http://? - no online source found -# -# * Standard Time and Time Zones in Canada; Thomson, Malcolm M.; JRASC, -# Vol. 64, pp.129-162; June 1970; SAO/NASA Astrophysics Data System (ADS) -# http://adsabs.harvard.edu/abs/1970JRASC..64..129T from p.156: Yukon -# Territory Commissioner's Order 1967-59 Interpretation Ordinance ... -# -# 1. Commissioner's Order 1966-20 dated at Whitehorse in the Yukon -# Territory on 27th January, 1966, is hereby revoked. -# -# 2. Yukon (East) Standard Time as defined by section 36 of the -# Interpretation Ordinance from and after mid-night on the 28th day of May, -# 1967 shall be reckoned in the same manner as Pacific Standard Time, that -# is to say, eight hours behind Greenwich Time in the area of the Yukon -# Territory lying east of the 138th degree longitude west. -# -# 3. In the remainder of the Territory, lying west of the 138th degree -# longitude west, Yukon (West) Standard Time shall be reckoned as nine -# hours behind Greenwich Time. -# -# * Yukon Standard Time defined as Pacific Standard Time, YCO 1973/214 -# https://www.canlii.org/en/yk/laws/regu/yco-1973-214/latest/yco-1973-214.html -# C.O. 1973/214 INTERPRETATION ACT ... -# -# 1. Effective October 28, 1973 Commissioner's Order 1967/59 is hereby -# revoked. -# -# 2. Yukon Standard Time as defined by section 36 of the Interpretation -# Act from and after midnight on the twenty-eighth day of October, 1973 -# shall be reckoned in the same manner as Pacific Standard Time, that is -# to say eight hours behind Greenwich Time. -# -# * O.I.C. 1980/02 INTERPRETATION ACT -# http://? - no online source found -# -# * Yukon Daylight Saving Time, YOIC 1987/56 -# https://www.canlii.org/en/yk/laws/regu/yoic-1987-56/latest/yoic-1987-56.html -# O.I.C. 1987/056 INTERPRETATION ACT ... -# -# In every year between -# (a) two o'clock in the morning in the first Sunday in April, and -# (b) two o'clock in the morning in the last Sunday in October, -# Standard Time shall be reckoned as seven hours behind Greenwich Time and -# called Yukon Daylight Saving Time. -# ... -# Dated ... 9th day of March, A.D., 1987. -# -# * Yukon Daylight Saving Time 2006, YOIC 2006/127 -# https://www.canlii.org/en/yk/laws/regu/yoic-2006-127/latest/yoic-2006-127.html -# O.I.C. 2006/127 INTERPRETATION ACT ... -# -# 1. In Yukon each year the time for general purposes shall be 7 hours -# behind Greenwich mean time during the period commencing at two o'clock -# in the forenoon on the second Sunday of March and ending at two o'clock -# in the forenoon on the first Sunday of November and shall be called -# Yukon Daylight Saving Time. -# -# 2. Order-in-Council 1987/56 is revoked. -# -# 3. This order comes into force January 1, 2007. -# -# * Interpretation Act, RSY 2002, c 125 -# https://www.canlii.org/en/yk/laws/stat/rsy-2002-c-125/latest/rsy-2002-c-125.html - -# From Rives McDow (1999-09-04): -# Nunavut ... moved ... to incorporate the whole territory into one time zone. -# Nunavut moves to single time zone Oct. 
31 -# http://www.nunatsiaq.com/nunavut/nvt90903_13.html -# -# From Antoine Leca (1999-09-06): -# We then need to create a new timezone for the Kitikmeot region of Nunavut -# to differentiate it from the Yellowknife region. - -# From Paul Eggert (1999-09-20): -# Basic Facts: The New Territory -# http://www.nunavut.com/basicfacts/english/basicfacts_1territory.html -# (1999) reports that Pangnirtung operates on eastern time, -# and that Coral Harbour does not observe DST. We don't know when -# Pangnirtung switched to eastern time; we'll guess 1995. - -# From Rives McDow (1999-11-08): -# On October 31, when the rest of Nunavut went to Central time, -# Pangnirtung wobbled. Here is the result of their wobble: -# -# The following businesses and organizations in Pangnirtung use Central Time: -# -# First Air, Power Corp, Nunavut Construction, Health Center, RCMP, -# Eastern Arctic National Parks, A & D Specialist -# -# The following businesses and organizations in Pangnirtung use Eastern Time: -# -# Hamlet office, All other businesses, Both schools, Airport operator -# -# This has made for an interesting situation there, which warranted the news. -# No one there that I spoke with seems concerned, or has plans to -# change the local methods of keeping time, as it evidently does not -# really interfere with any activities or make things difficult locally. -# They plan to celebrate New Year's turn-over twice, one hour apart, -# so it appears that the situation will last at least that long. -# The Nunavut Intergovernmental Affairs hopes that they will "come to -# their senses", but the locals evidently don't see any problem with -# the current state of affairs. - -# From Michaela Rodrigue, writing in the -# Nunatsiaq News (1999-11-19): -# http://www.nunatsiaqonline.ca/archives/nunavut991130/nvt91119_17.html -# Clyde River, Pangnirtung and Sanikiluaq now operate with two time zones, -# central - or Nunavut time - for government offices, and eastern time -# for municipal offices and schools.... Igloolik [was similar but then] -# made the switch to central time on Saturday, Nov. 6. - -# From Paul Eggert (2000-10-02): -# Matthews and Vincent (1998) say the following, but we lack histories -# for these potential new Zones. -# -# The Canadian Forces station at Alert uses Eastern Time while the -# handful of residents at the Eureka weather station [in the Central -# zone] skip daylight savings. Baffin Island, which is crossed by the -# Central, Eastern and Atlantic Time zones only uses Eastern Time. -# Gjoa Haven, Taloyoak and Pelly Bay all use Mountain instead of -# Central Time and Southampton Island [in the Central zone] is not -# required to use daylight savings. - -# From -# Nunavut now has two time zones (2000-11-10): -# The Nunavut government would allow its employees in Kugluktuk and -# Cambridge Bay to operate on central time year-round, putting them -# one hour behind the rest of Nunavut for six months during the winter. -# At the end of October the two communities had rebelled against -# Nunavut's unified time zone, refusing to shift to eastern time with -# the rest of the territory for the winter. Cambridge Bay remained on -# central time, while Kugluktuk, even farther west, reverted to -# mountain time, which they had used before the advent of Nunavut's -# unified time zone in 1999. -# -# From Rives McDow (2001-01-20), quoting the Nunavut government: -# The preceding decision came into effect at midnight, Saturday Nov 4, 2000. 
- -# From Paul Eggert (2000-12-04): -# Let's just keep track of the official times for now. - -# From Rives McDow (2001-03-07): -# The premier of Nunavut has issued a ministerial statement advising -# that effective 2001-04-01, the territory of Nunavut will revert -# back to three time zones (mountain, central, and eastern). Of the -# cities in Nunavut, Coral Harbor is the only one that I know of that -# has said it will not observe dst, staying on EST year round. I'm -# checking for more info, and will get back to you if I come up with -# more. -# [Also see (2001-03-09).] - -# From Gwillim Law (2005-05-21): -# According to ... -# http://www.canadiangeographic.ca/Magazine/SO98/geomap.asp -# (from a 1998 Canadian Geographic article), the de facto and de jure time -# for Southampton Island (at the north end of Hudson Bay) is UTC-5 all year -# round. Using Google, it's easy to find other websites that confirm this. -# I wasn't able to find how far back this time regimen goes, but since it -# predates the creation of Nunavut, it probably goes back many years.... -# The Inuktitut name of Coral Harbour is Sallit, but it's rarely used. -# -# From Paul Eggert (2014-10-17): -# For lack of better information, assume that Southampton Island observed -# daylight saving only during wartime. Gwillim Law's email also -# mentioned maps now maintained by National Research Council Canada; -# see above for an up-to-date link. - -# From Chris Walton (2007-03-01): -# ... the community of Resolute (located on Cornwallis Island in -# Nunavut) moved from Central Time to Eastern Time last November. -# Basically the community did not change its clocks at the end of -# daylight saving.... -# http://www.nnsl.com/frames/newspapers/2006-11/nov13_06none.html - -# From Chris Walton (2011-03-21): -# Back in 2007 I initiated the creation of a new "zone file" for Resolute -# Bay. Resolute Bay is a small community located about 900km north of -# the Arctic Circle. The zone file was required because Resolute Bay had -# decided to use UTC-5 instead of UTC-6 for the winter of 2006-2007. -# -# According to new information which I received last week, Resolute Bay -# went back to using UTC-6 in the winter of 2007-2008... -# -# On March 11/2007 most of Canada went onto daylight saving. On March -# 14/2007 I phoned the Resolute Bay hamlet office to do a "time check." I -# talked to somebody that was both knowledgeable and helpful. I was able -# to confirm that Resolute Bay was still operating on UTC-5. It was -# explained to me that Resolute Bay had been on the Eastern Time zone -# (EST) in the winter, and was now back on the Central Time zone (CDT). -# i.e. the time zone had changed twice in the last year but the clocks -# had not moved. The residents had to know which time zone they were in -# so they could follow the correct TV schedule... -# -# On Nov 02/2008 most of Canada went onto standard time. On Nov 03/2008 I -# phoned the Resolute Bay hamlet office...[D]ue to the challenging nature -# of the phone call, I decided to seek out an alternate source of -# information. I found an e-mail address for somebody by the name of -# Stephanie Adams whose job was listed as "Inns North Support Officer for -# Arctic Co-operatives." I was under the impression that Stephanie lived -# and worked in Resolute Bay... -# -# On March 14/2011 I phoned the hamlet office again. 
I was told that -# Resolute Bay had been using Central Standard Time over the winter of -# 2010-2011 and that the clocks had therefore been moved one hour ahead -# on March 13/2011. The person I talked to was aware that Resolute Bay -# had previously experimented with Eastern Standard Time but he could not -# tell me when the practice had stopped. -# -# On March 17/2011 I searched the Web to find an e-mail address of -# somebody that might be able to tell me exactly when Resolute Bay went -# off Eastern Standard Time. I stumbled on the name "Aziz Kheraj." Aziz -# used to be the mayor of Resolute Bay and he apparently owns half the -# businesses including "South Camp Inn." This website has some info on -# Aziz: -# http://www.uphere.ca/node/493 -# -# I sent Aziz an e-mail asking when Resolute Bay had stopped using -# Eastern Standard Time. -# -# Aziz responded quickly with this: "hi, The time was not changed for the -# 1 year only, the following year, the community went back to the old way -# of "spring ahead-fall behind" currently we are zulu plus 5 hrs and in -# the winter Zulu plus 6 hrs" -# -# This of course conflicted with everything I had ascertained in November 2008. -# -# I sent Aziz a copy of my 2008 e-mail exchange with Stephanie. Aziz -# responded with this: "Hi, Stephanie lives in Winnipeg. I live here, You -# may want to check with the weather office in Resolute Bay or do a -# search on the weather through Env. Canada. web site" -# -# If I had realized the Stephanie did not live in Resolute Bay I would -# never have contacted her. I now believe that all the information I -# obtained in November 2008 should be ignored... -# I apologize for reporting incorrect information in 2008. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule NT_YK 1918 only - Apr 14 2:00 1:00 D -Rule NT_YK 1918 only - Oct 27 2:00 0 S -Rule NT_YK 1919 only - May 25 2:00 1:00 D -Rule NT_YK 1919 only - Nov 1 0:00 0 S -Rule NT_YK 1942 only - Feb 9 2:00 1:00 W # War -Rule NT_YK 1945 only - Aug 14 23:00u 1:00 P # Peace -Rule NT_YK 1945 only - Sep 30 2:00 0 S -Rule NT_YK 1965 only - Apr lastSun 0:00 2:00 DD -Rule NT_YK 1965 only - Oct lastSun 2:00 0 S -Rule NT_YK 1980 1986 - Apr lastSun 2:00 1:00 D -Rule NT_YK 1980 2006 - Oct lastSun 2:00 0 S -Rule NT_YK 1987 2006 - Apr Sun>=1 2:00 1:00 D -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# aka Panniqtuuq -Zone America/Pangnirtung 0 - -00 1921 # trading post est. - -4:00 NT_YK A%sT 1995 Apr Sun>=1 2:00 - -5:00 Canada E%sT 1999 Oct 31 2:00 - -6:00 Canada C%sT 2000 Oct 29 2:00 - -5:00 Canada E%sT -# formerly Frobisher Bay -Zone America/Iqaluit 0 - -00 1942 Aug # Frobisher Bay est. - -5:00 NT_YK E%sT 1999 Oct 31 2:00 - -6:00 Canada C%sT 2000 Oct 29 2:00 - -5:00 Canada E%sT -# aka Qausuittuq -Zone America/Resolute 0 - -00 1947 Aug 31 # Resolute founded - -6:00 NT_YK C%sT 2000 Oct 29 2:00 - -5:00 - EST 2001 Apr 1 3:00 - -6:00 Canada C%sT 2006 Oct 29 2:00 - -5:00 - EST 2007 Mar 11 3:00 - -6:00 Canada C%sT -# aka Kangiqiniq -Zone America/Rankin_Inlet 0 - -00 1957 # Rankin Inlet founded - -6:00 NT_YK C%sT 2000 Oct 29 2:00 - -5:00 - EST 2001 Apr 1 3:00 - -6:00 Canada C%sT -# aka Iqaluktuuttiaq -Zone America/Cambridge_Bay 0 - -00 1920 # trading post est.? - -7:00 NT_YK M%sT 1999 Oct 31 2:00 - -6:00 Canada C%sT 2000 Oct 29 2:00 - -5:00 - EST 2000 Nov 5 0:00 - -6:00 - CST 2001 Apr 1 3:00 - -7:00 Canada M%sT -Zone America/Yellowknife 0 - -00 1935 # Yellowknife founded? 
- -7:00 NT_YK M%sT 1980 - -7:00 Canada M%sT -Zone America/Inuvik 0 - -00 1953 # Inuvik founded - -8:00 NT_YK P%sT 1979 Apr lastSun 2:00 - -7:00 NT_YK M%sT 1980 - -7:00 Canada M%sT -Zone America/Whitehorse -9:00:12 - LMT 1900 Aug 20 - -9:00 NT_YK Y%sT 1967 May 28 0:00 - -8:00 NT_YK P%sT 1980 - -8:00 Canada P%sT -Zone America/Dawson -9:17:40 - LMT 1900 Aug 20 - -9:00 NT_YK Y%sT 1973 Oct 28 0:00 - -8:00 NT_YK P%sT 1980 - -8:00 Canada P%sT - - -############################################################################### - -# Mexico - -# From Paul Eggert (2014-12-07): -# The Investigation and Analysis Service of the -# Mexican Library of Congress (MLoC) has published a -# history of Mexican local time (in Spanish) -# http://www.diputados.gob.mx/bibliot/publica/inveyana/polisoc/horver/index.htm -# -# Here are the discrepancies between Shanks & Pottenger (S&P) and the MLoC. -# (In all cases we go with the MLoC.) -# S&P report that Baja was at -8:00 in 1922/1923. -# S&P say the 1930 transition in Baja was 1930-11-16. -# S&P report no DST during summer 1931. -# S&P report a transition at 1932-03-30 23:00, not 1932-04-01. - -# From Gwillim Law (2001-02-20): -# There are some other discrepancies between the Decrees page and the -# tz database. I think they can best be explained by supposing that -# the researchers who prepared the Decrees page failed to find some of -# the relevant documents. - -# From Alan Perry (1996-02-15): -# A guy from our Mexico subsidiary finally found the Presidential Decree -# outlining the timezone changes in Mexico. -# -# ------------- Begin Forwarded Message ------------- -# -# I finally got my hands on the Official Presidential Decree that sets up the -# rules for the DST changes. The rules are: -# -# 1. The country is divided in 3 timezones: -# - Baja California Norte (the Mexico/BajaNorte TZ) -# - Baja California Sur, Nayarit, Sinaloa and Sonora (the Mexico/BajaSur TZ) -# - The rest of the country (the Mexico/General TZ) -# -# 2. From the first Sunday in April at 2:00 AM to the last Sunday in October -# at 2:00 AM, the times in each zone are as follows: -# BajaNorte: GMT+7 -# BajaSur: GMT+6 -# General: GMT+5 -# -# 3. The rest of the year, the times are as follows: -# BajaNorte: GMT+8 -# BajaSur: GMT+7 -# General: GMT+6 -# -# The Decree was published in Mexico's Official Newspaper on January 4th. -# -# -------------- End Forwarded Message -------------- -# From Paul Eggert (1996-06-12): -# For an English translation of the decree, see -# "Diario Oficial: Time Zone Changeover" (1996-01-04). -# http://mexico-travel.com/extra/timezone_eng.html - -# From Rives McDow (1998-10-08): -# The State of Quintana Roo has reverted back to central STD and DST times -# (i.e. UTC -0600 and -0500 as of 1998-08-02). - -# From Rives McDow (2000-01-10): -# Effective April 4, 1999 at 2:00 AM local time, Sonora changed to the time -# zone 5 hours from the International Date Line, and will not observe daylight -# savings time so as to stay on the same time zone as the southern part of -# Arizona year round. - -# From Jesper Nørgaard, translating -# (2001-01-17): -# In Oaxaca, the 55.000 teachers from the Section 22 of the National -# Syndicate of Education Workers, refuse to apply daylight saving each -# year, so that the more than 10,000 schools work at normal hour the -# whole year. - -# From Gwillim Law (2001-01-19): -# ... says -# (translated):... 
-# January 17, 2001 - The Energy Secretary, Ernesto Martens, announced
-# that Summer Time will be reduced from seven to five months, starting
-# this year....
-# http://www.publico.com.mx/scripts/texto3.asp?action=pagina&pag=21&pos=p&secc=naci&date=01/17/2001
-# [translated], says "summer time will ... take effect on the first Sunday
-# in May, and end on the last Sunday of September."
-
-# From Arthur David Olson (2001-01-25):
-# The 2001-01-24 traditional Washington Post contained the page one
-# story "Timely Issue Divides Mexicans."...
-# http://www.washingtonpost.com/wp-dyn/articles/A37383-2001Jan23.html
-# ... Mexico City Mayor López Obrador "...is threatening to keep
-# Mexico City and its 20 million residents on a different time than
-# the rest of the country..." In particular, López Obrador would abolish
-# observation of Daylight Saving Time.
-
-# Official statute published by the Energy Department
-# http://www.conae.gob.mx/ahorro/decretohorver2001.html#decre
-# (2001-02-01) shows Baja and Chihuahua as still using US DST rules,
-# and Sonora with no DST. This was reported by Jesper Nørgaard (2001-02-03).
-
-# From Paul Eggert (2001-03-03):
-#
-# http://www.latimes.com/news/nation/20010303/t000018766.html
-# James F. Smith writes in today's LA Times
-# * Sonora will continue to observe standard time.
-# * Last week Mexico City's mayor Andrés Manuel López Obrador decreed that
-# the Federal District will not adopt DST.
-# * 4 of 16 district leaders announced they'll ignore the decree.
-# * The decree does not affect federal-controlled facilities including
-# the airport, banks, hospitals, and schools.
-#
-# For now we'll assume that the Federal District will bow to federal rules.
-
-# From Jesper Nørgaard (2001-04-01):
-# I found some references to the Mexican application of daylight
-# saving, which modifies what I had already sent you, stating earlier
-# that a number of northern Mexican states would go on daylight
-# saving. The modification reverts this to only cover Baja California
-# (Norte), while all other states (except Sonora, who has no daylight
-# saving all year) will follow the original decree of president
-# Vicente Fox, starting daylight saving May 6, 2001 and ending
-# September 30, 2001.
-# References: "Diario de Monterrey"
-# Palabra (2001-03-31)
-
-# From Reuters (2001-09-04):
-# Mexico's Supreme Court on Tuesday declared that daylight savings was
-# unconstitutional in Mexico City, creating the possibility the
-# capital will be in a different time zone from the rest of the nation
-# next year.... The Supreme Court's ruling takes effect at 2:00
-# a.m. (0800 GMT) on Sept. 30, when Mexico is scheduled to revert to
-# standard time. "This is so residents of the Federal District are not
-# subject to unexpected time changes," a statement from the court said.
-
-# From Jesper Nørgaard Welen (2002-03-12):
-# ... consulting my local grocery store(!) and my coworkers, they all insisted
-# that a new decision had been made to reinstate US style DST in Mexico....
-# http://www.conae.gob.mx/ahorro/horaver2001_m1_2002.html (2002-02-20)
-# confirms this. Sonora as usual is the only state where DST is not applied.
-
-# From Steffen Thorsen (2009-12-28):
-#
-# Steffen Thorsen wrote:
-# > Mexico's House of Representatives has approved a proposal for northern
-# > Mexico's border cities to share the same daylight saving schedule as
-# > the United States.
-# Now this has passed both the Congress and the Senate, so starting from -# 2010, some border regions will be the same: -# http://www.signonsandiego.com/news/2009/dec/28/clocks-will-match-both-sides-border/ -# http://www.elmananarey.com/diario/noticia/nacional/noticias/empatan_horario_de_frontera_con_eu/621939 -# (Spanish) -# -# Could not find the new law text, but the proposed law text changes are here: -# http://gaceta.diputados.gob.mx/Gaceta/61/2009/dic/20091210-V.pdf -# (Gaceta Parlamentaria) -# -# There is also a list of the votes here: -# http://gaceta.diputados.gob.mx/Gaceta/61/2009/dic/V2-101209.html -# -# Our page: -# https://www.timeanddate.com/news/time/north-mexico-dst-change.html - -# From Arthur David Olson (2010-01-20): -# The page -# http://dof.gob.mx/nota_detalle.php?codigo=5127480&fecha=06/01/2010 -# includes this text: -# En los municipios fronterizos de Tijuana y Mexicali en Baja California; -# Juárez y Ojinaga en Chihuahua; Acuña y Piedras Negras en Coahuila; -# Anáhuac en Nuevo León; y Nuevo Laredo, Reynosa y Matamoros en -# Tamaulipas, la aplicación de este horario estacional surtirá efecto -# desde las dos horas del segundo domingo de marzo y concluirá a las dos -# horas del primer domingo de noviembre. -# En los municipios fronterizos que se encuentren ubicados en la franja -# fronteriza norte en el territorio comprendido entre la línea -# internacional y la línea paralela ubicada a una distancia de veinte -# kilómetros, así como la Ciudad de Ensenada, Baja California, hacia el -# interior del país, la aplicación de este horario estacional surtirá -# efecto desde las dos horas del segundo domingo de marzo y concluirá a -# las dos horas del primer domingo de noviembre. - -# From Steffen Thorsen (2014-12-08), translated by Gwillim Law: -# The Mexican state of Quintana Roo will likely change to EST in 2015. -# -# http://www.unioncancun.mx/articulo/2014/12/04/medio-ambiente/congreso-aprueba-una-hora-mas-de-sol-en-qroo -# "With this change, the time conflict that has existed between the municipios -# of Quintana Roo and the municipio of Felipe Carrillo Puerto may come to an -# end. The latter declared itself in rebellion 15 years ago when a time change -# was initiated in Mexico, and since then it has refused to change its time -# zone along with the rest of the country." -# -# From Steffen Thorsen (2015-01-14), translated by Gwillim Law: -# http://sipse.com/novedades/confirman-aplicacion-de-nueva-zona-horaria-para-quintana-roo-132331.html -# "...the new time zone will come into effect at two o'clock on the first Sunday -# of February, when we will have to advance the clock one hour from its current -# time..." -# Also, the new zone will not use DST. -# -# From Carlos Raúl Perasso (2015-02-02): -# The decree that modifies the Mexican Hour System Law has finally -# been published at the Diario Oficial de la Federación -# http://www.dof.gob.mx/nota_detalle.php?codigo=5380123&fecha=31/01/2015 -# It establishes 5 zones for Mexico: -# 1- Zona Centro (Central Zone): Corresponds to longitude 90 W, -# includes most of Mexico, excluding what's mentioned below. -# 2- Zona Pacífico (Pacific Zone): Longitude 105 W, includes the -# states of Baja California Sur; Chihuahua; Nayarit (excluding Bahía -# de Banderas which lies in Central Zone); Sinaloa and Sonora. -# 3- Zona Noroeste (Northwest Zone): Longitude 120 W, includes the -# state of Baja California. -# 4- Zona Sureste (Southeast Zone): Longitude 75 W, includes the state -# of Quintana Roo. 
-# 5- The islands, reefs and keys shall take their timezone from the -# longitude they are located at. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Mexico 1939 only - Feb 5 0:00 1:00 D -Rule Mexico 1939 only - Jun 25 0:00 0 S -Rule Mexico 1940 only - Dec 9 0:00 1:00 D -Rule Mexico 1941 only - Apr 1 0:00 0 S -Rule Mexico 1943 only - Dec 16 0:00 1:00 W # War -Rule Mexico 1944 only - May 1 0:00 0 S -Rule Mexico 1950 only - Feb 12 0:00 1:00 D -Rule Mexico 1950 only - Jul 30 0:00 0 S -Rule Mexico 1996 2000 - Apr Sun>=1 2:00 1:00 D -Rule Mexico 1996 2000 - Oct lastSun 2:00 0 S -Rule Mexico 2001 only - May Sun>=1 2:00 1:00 D -Rule Mexico 2001 only - Sep lastSun 2:00 0 S -Rule Mexico 2002 max - Apr Sun>=1 2:00 1:00 D -Rule Mexico 2002 max - Oct lastSun 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# Quintana Roo; represented by Cancún -Zone America/Cancun -5:47:04 - LMT 1922 Jan 1 0:12:56 - -6:00 - CST 1981 Dec 23 - -5:00 Mexico E%sT 1998 Aug 2 2:00 - -6:00 Mexico C%sT 2015 Feb 1 2:00 - -5:00 - EST -# Campeche, Yucatán; represented by Mérida -Zone America/Merida -5:58:28 - LMT 1922 Jan 1 0:01:32 - -6:00 - CST 1981 Dec 23 - -5:00 - EST 1982 Dec 2 - -6:00 Mexico C%sT -# Coahuila, Nuevo León, Tamaulipas (near US border) -# This includes the following municipalities: -# in Coahuila: Ocampo, Acuña, Zaragoza, Jiménez, Piedras Negras, Nava, -# Guerrero, Hidalgo. -# in Nuevo León: Anáhuac, Los Aldama. -# in Tamaulipas: Nuevo Laredo, Guerrero, Mier, Miguel Alemán, Camargo, -# Gustavo Díaz Ordaz, Reynosa, Río Bravo, Valle Hermoso, Matamoros. -# See: Inicia mañana Horario de Verano en zona fronteriza, El Universal, -# 2016-03-12 -# http://www.eluniversal.com.mx/articulo/estados/2016/03/12/inicia-manana-horario-de-verano-en-zona-fronteriza -Zone America/Matamoros -6:40:00 - LMT 1921 Dec 31 23:20:00 - -6:00 - CST 1988 - -6:00 US C%sT 1989 - -6:00 Mexico C%sT 2010 - -6:00 US C%sT -# Durango; Coahuila, Nuevo León, Tamaulipas (away from US border) -Zone America/Monterrey -6:41:16 - LMT 1921 Dec 31 23:18:44 - -6:00 - CST 1988 - -6:00 US C%sT 1989 - -6:00 Mexico C%sT -# Central Mexico -Zone America/Mexico_City -6:36:36 - LMT 1922 Jan 1 0:23:24 - -7:00 - MST 1927 Jun 10 23:00 - -6:00 - CST 1930 Nov 15 - -7:00 - MST 1931 May 1 23:00 - -6:00 - CST 1931 Oct - -7:00 - MST 1932 Apr 1 - -6:00 Mexico C%sT 2001 Sep 30 2:00 - -6:00 - CST 2002 Feb 20 - -6:00 Mexico C%sT -# Chihuahua (near US border) -# This includes the municipalities of Janos, Ascensión, Juárez, Guadalupe, -# Práxedis G Guerrero, Coyame del Sotol, Ojinaga, and Manuel Benavides. -# (See the 2016-03-12 El Universal source mentioned above.) 
-Zone America/Ojinaga -6:57:40 - LMT 1922 Jan 1 0:02:20 - -7:00 - MST 1927 Jun 10 23:00 - -6:00 - CST 1930 Nov 15 - -7:00 - MST 1931 May 1 23:00 - -6:00 - CST 1931 Oct - -7:00 - MST 1932 Apr 1 - -6:00 - CST 1996 - -6:00 Mexico C%sT 1998 - -6:00 - CST 1998 Apr Sun>=1 3:00 - -7:00 Mexico M%sT 2010 - -7:00 US M%sT -# Chihuahua (away from US border) -Zone America/Chihuahua -7:04:20 - LMT 1921 Dec 31 23:55:40 - -7:00 - MST 1927 Jun 10 23:00 - -6:00 - CST 1930 Nov 15 - -7:00 - MST 1931 May 1 23:00 - -6:00 - CST 1931 Oct - -7:00 - MST 1932 Apr 1 - -6:00 - CST 1996 - -6:00 Mexico C%sT 1998 - -6:00 - CST 1998 Apr Sun>=1 3:00 - -7:00 Mexico M%sT -# Sonora -Zone America/Hermosillo -7:23:52 - LMT 1921 Dec 31 23:36:08 - -7:00 - MST 1927 Jun 10 23:00 - -6:00 - CST 1930 Nov 15 - -7:00 - MST 1931 May 1 23:00 - -6:00 - CST 1931 Oct - -7:00 - MST 1932 Apr 1 - -6:00 - CST 1942 Apr 24 - -7:00 - MST 1949 Jan 14 - -8:00 - PST 1970 - -7:00 Mexico M%sT 1999 - -7:00 - MST - -# From Alexander Krivenyshev (2010-04-21): -# According to news, Bahía de Banderas (Mexican state of Nayarit) -# changed time zone UTC-7 to new time zone UTC-6 on April 4, 2010 (to -# share the same time zone as nearby city Puerto Vallarta, Jalisco). -# -# (Spanish) -# Bahía de Banderas homologa su horario al del centro del -# país, a partir de este domingo -# http://www.nayarit.gob.mx/notes.asp?id=20748 -# -# Bahía de Banderas homologa su horario con el del Centro del -# País -# http://www.bahiadebanderas.gob.mx/principal/index.php?option=com_content&view=article&id=261:bahia-de-banderas-homologa-su-horario-con-el-del-centro-del-pais&catid=42:comunicacion-social&Itemid=50 -# -# (English) -# Puerto Vallarta and Bahía de Banderas: One Time Zone -# http://virtualvallarta.com/puertovallarta/puertovallarta/localnews/2009-12-03-Puerto-Vallarta-and-Bahia-de-Banderas-One-Time-Zone.shtml -# http://www.worldtimezone.com/dst_news/dst_news_mexico08.html -# -# "Mexico's Senate approved the amendments to the Mexican Schedule System that -# will allow Bahía de Banderas and Puerto Vallarta to share the same time -# zone ..." -# Baja California Sur, Nayarit, Sinaloa - -# From Arthur David Olson (2010-05-01): -# Use "Bahia_Banderas" to keep the name to fourteen characters. 
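The Bahía de Banderas change described above is easy to sanity-check against any compiled zone database. A minimal sketch, assuming a Python 3.9+ interpreter whose zoneinfo data carries America/Bahia_Banderas; the expected offsets follow from the zone table below:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Bahía de Banderas kept Mazatlán time (MST, UTC-7) until 2010-04-04
    # 2:00; it then joined central time, and since Mexico's DST also began
    # that day, the post-change offset is CDT (UTC-5).
    tz = ZoneInfo("America/Bahia_Banderas")
    print(datetime(2010, 4, 3, 12, 0, tzinfo=tz).utcoffset())  # UTC-7
    print(datetime(2010, 4, 5, 12, 0, tzinfo=tz).utcoffset())  # UTC-5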
- -# Mazatlán -Zone America/Mazatlan -7:05:40 - LMT 1921 Dec 31 23:54:20 - -7:00 - MST 1927 Jun 10 23:00 - -6:00 - CST 1930 Nov 15 - -7:00 - MST 1931 May 1 23:00 - -6:00 - CST 1931 Oct - -7:00 - MST 1932 Apr 1 - -6:00 - CST 1942 Apr 24 - -7:00 - MST 1949 Jan 14 - -8:00 - PST 1970 - -7:00 Mexico M%sT - -# Bahía de Banderas -Zone America/Bahia_Banderas -7:01:00 - LMT 1921 Dec 31 23:59:00 - -7:00 - MST 1927 Jun 10 23:00 - -6:00 - CST 1930 Nov 15 - -7:00 - MST 1931 May 1 23:00 - -6:00 - CST 1931 Oct - -7:00 - MST 1932 Apr 1 - -6:00 - CST 1942 Apr 24 - -7:00 - MST 1949 Jan 14 - -8:00 - PST 1970 - -7:00 Mexico M%sT 2010 Apr 4 2:00 - -6:00 Mexico C%sT - -# Baja California -Zone America/Tijuana -7:48:04 - LMT 1922 Jan 1 0:11:56 - -7:00 - MST 1924 - -8:00 - PST 1927 Jun 10 23:00 - -7:00 - MST 1930 Nov 15 - -8:00 - PST 1931 Apr 1 - -8:00 1:00 PDT 1931 Sep 30 - -8:00 - PST 1942 Apr 24 - -8:00 1:00 PWT 1945 Aug 14 23:00u - -8:00 1:00 PPT 1945 Nov 12 # Peace - -8:00 - PST 1948 Apr 5 - -8:00 1:00 PDT 1949 Jan 14 - -8:00 - PST 1954 - -8:00 CA P%sT 1961 - -8:00 - PST 1976 - -8:00 US P%sT 1996 - -8:00 Mexico P%sT 2001 - -8:00 US P%sT 2002 Feb 20 - -8:00 Mexico P%sT 2010 - -8:00 US P%sT -# From Paul Eggert (2006-03-22): -# Formerly there was an America/Ensenada zone, which differed from -# America/Tijuana only in that it did not observe DST from 1976 -# through 1995. This was as per Shanks (1999). But Shanks & Pottenger say -# Ensenada did not observe DST from 1948 through 1975. Guy Harris reports -# that the 1987 OAG says "Only Ensenada, Mexicali, San Felipe and -# Tijuana observe DST," which agrees with Shanks & Pottenger but implies that -# DST-observance was a town-by-town matter back then. This concerns -# data after 1970 so most likely there should be at least one Zone -# other than America/Tijuana for Baja, but it's not clear yet what its -# name or contents should be. -# -# From Paul Eggert (2015-10-08): -# Formerly there was an America/Santa_Isabel zone, but this appears to -# have come from a misreading of -# http://dof.gob.mx/nota_detalle.php?codigo=5127480&fecha=06/01/2010 -# It has been moved to the 'backward' file. -# -# -# Revillagigedo Is -# no information - -############################################################################### - -# Anguilla -# Antigua and Barbuda -# See America/Port_of_Spain. - -# Bahamas -# -# For 1899 Milne gives -5:09:29.5; round that. -# -# From Sue Williams (2006-12-07): -# The Bahamas announced about a month ago that they plan to change their DST -# rules to sync with the U.S. starting in 2007.... -# http://www.jonesbahamas.com/?c=45&a=10412 - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Bahamas 1964 1975 - Oct lastSun 2:00 0 S -Rule Bahamas 1964 1975 - Apr lastSun 2:00 1:00 D -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Nassau -5:09:30 - LMT 1912 Mar 2 - -5:00 Bahamas E%sT 1976 - -5:00 US E%sT - -# Barbados - -# For 1899 Milne gives -3:58:29.2; round that. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Barb 1977 only - Jun 12 2:00 1:00 D -Rule Barb 1977 1978 - Oct Sun>=1 2:00 0 S -Rule Barb 1978 1980 - Apr Sun>=15 2:00 1:00 D -Rule Barb 1979 only - Sep 30 2:00 0 S -Rule Barb 1980 only - Sep 25 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Barbados -3:58:29 - LMT 1924 # Bridgetown - -3:58:29 - BMT 1932 # Bridgetown Mean Time - -4:00 Barb A%sT - -# Belize -# Whitman entirely disagrees with Shanks; go with Shanks & Pottenger. 
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Belize 1918 1942 - Oct Sun>=2 0:00 0:30 -0530 -Rule Belize 1919 1943 - Feb Sun>=9 0:00 0 CST -Rule Belize 1973 only - Dec 5 0:00 1:00 CDT -Rule Belize 1974 only - Feb 9 0:00 0 CST -Rule Belize 1982 only - Dec 18 0:00 1:00 CDT -Rule Belize 1983 only - Feb 12 0:00 0 CST -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Belize -5:52:48 - LMT 1912 Apr - -6:00 Belize %s - -# Bermuda - -# For 1899 Milne gives -4:19:18.3 as the meridian of the clock tower, -# Bermuda dockyard, Ireland I; round that. - -# From Dan Jones, reporting in The Royal Gazette (2006-06-26): - -# Next year, however, clocks in the US will go forward on the second Sunday -# in March, until the first Sunday in November. And, after the Time Zone -# (Seasonal Variation) Bill 2006 was passed in the House of Assembly on -# Friday, the same thing will happen in Bermuda. -# http://www.theroyalgazette.com/apps/pbcs.dll/article?AID=/20060529/NEWS/105290135 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Atlantic/Bermuda -4:19:18 - LMT 1930 Jan 1 2:00 # Hamilton - -4:00 - AST 1974 Apr 28 2:00 - -4:00 Canada A%sT 1976 - -4:00 US A%sT - -# Cayman Is -# See America/Panama. - -# Costa Rica - -# Milne gives -5:36:13.3 as San José mean time; round to nearest. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule CR 1979 1980 - Feb lastSun 0:00 1:00 D -Rule CR 1979 1980 - Jun Sun>=1 0:00 0 S -Rule CR 1991 1992 - Jan Sat>=15 0:00 1:00 D -# IATA SSIM (1991-09) says the following was at 1:00; -# go with Shanks & Pottenger. -Rule CR 1991 only - Jul 1 0:00 0 S -Rule CR 1992 only - Mar 15 0:00 0 S -# There are too many San Josés elsewhere, so we'll use 'Costa Rica'. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Costa_Rica -5:36:13 - LMT 1890 # San José - -5:36:13 - SJMT 1921 Jan 15 # San José Mean Time - -6:00 CR C%sT -# Coco -# no information; probably like America/Costa_Rica - -# Cuba - -# From Paul Eggert (2013-02-21): -# Milne gives -5:28:50.45 for the observatory at Havana, -5:29:23.57 -# for the port, and -5:30 for meteorological observations. -# For now, stick with Shanks & Pottenger. - -# From Arthur David Olson (1999-03-29): -# The 1999-03-28 exhibition baseball game held in Havana, Cuba, between -# the Cuban National Team and the Baltimore Orioles was carried live on -# the Orioles Radio Network, including affiliate WTOP in Washington, DC. -# During the game, play-by-play announcer Jim Hunter noted that -# "We'll be losing two hours of sleep...Cuba switched to Daylight Saving -# Time today." (The "two hour" remark referred to losing one hour of -# sleep on 1999-03-28 - when the announcers were in Cuba as it switched -# to DST - and one more hour on 1999-04-04 - when the announcers will have -# returned to Baltimore, which switches on that date.) - -# From Steffen Thorsen (2013-11-11): -# DST start in Cuba in 2004 ... does not follow the same rules as the -# years before. The correct date should be Sunday 2004-03-28 00:00 ... -# https://web.archive.org/web/20040402060750/http://www.granma.cu/espanol/2004/marzo/sab27/reloj.html - -# From Evert van der Veer via Steffen Thorsen (2004-10-28): -# Cuba is not going back to standard time this year. -# From Paul Eggert (2006-03-22): -# http://www.granma.cu/ingles/2004/septiembre/juev30/41medid-i.html -# says that it's due to a problem at the Antonio Guiteras -# thermoelectric plant, and says "This October there will be no return -# to normal hours (after daylight saving time)". 
-# For now, let's assume that it's a temporary measure.
-
-# From Carlos A. Carnero Delgado (2005-11-12):
-# This year (just like in 2004-2005) there's no change in time zone
-# adjustment in Cuba. We will stay in daylight saving time:
-# http://www.granma.cu/espanol/2005/noviembre/mier9/horario.html
-
-# From Jesper Nørgaard Welen (2006-10-21):
-# An article in GRANMA INTERNACIONAL claims that Cuba will end
-# the 3 years of permanent DST next weekend, see
-# http://www.granma.cu/ingles/2006/octubre/lun16/43horario.html
-# "On Saturday night, October 28 going into Sunday, October 29, at 01:00,
-# watches should be set back one hour - going back to 00:00 hours - returning
-# to the normal schedule....
-
-# From Paul Eggert (2007-03-02):
-# , dated yesterday,
-# says Cuban clocks will advance at midnight on March 10.
-# For lack of better information, assume Cuba will use US rules,
-# except that it switches at midnight standard time as usual.
-#
-# From Steffen Thorsen (2007-10-25):
-# Carlos Alberto Fonseca Arauz informed me that Cuba will end DST one week
-# earlier - on the last Sunday of October, just like in 2006.
-#
-# He supplied these references:
-#
-# http://www.prensalatina.com.mx/article.asp?ID=%7B4CC32C1B-A9F7-42FB-8A07-8631AFC923AF%7D&language=ES
-# http://actualidad.terra.es/sociedad/articulo/cuba_llama_ahorrar_energia_cambio_1957044.htm
-#
-# From Alex Krivenyshev (2007-10-25):
-# Here is also article from Granma (Cuba):
-#
-# Regirá el Horario Normal desde el próximo domingo 28 de octubre
-# http://www.granma.cubaweb.cu/2007/10/24/nacional/artic07.html
-#
-# http://www.worldtimezone.com/dst_news/dst_news_cuba03.html
-
-# From Arthur David Olson (2008-03-09):
-# I'm in Maryland which is now observing United States Eastern Daylight
-# Time. At 9:44 local time I used RealPlayer to listen to
-# http://media.enet.cu/radioreloj
-# a Cuban information station, and heard
-# the time announced as "ocho cuarenta y cuatro" ("eight forty-four"),
-# indicating that Cuba is still on standard time.
-
-# From Steffen Thorsen (2008-03-12):
-# It seems that Cuba will start DST on Sunday, 2008-03-16...
-# It was announced yesterday, according to this source (in Spanish):
-# http://www.nnc.cubaweb.cu/marzo-2008/cien-1-11-3-08.htm
-#
-# Some more background information is posted here:
-# https://www.timeanddate.com/news/time/cuba-starts-dst-march-16.html
-#
-# The article also says that Cuba has been observing DST since 1963,
-# while Shanks (and tzdata) has 1965 as the first date (except in the
-# 1940's). Many other web pages in Cuba also claim that it has been
-# observed since 1963, but with the exception of 1970 - an exception
-# which is not present in tzdata/Shanks. So there is a chance we need to
-# change some historic records as well.
-#
-# One example:
-# http://www.radiohc.cu/espanol/noticias/mar07/11mar/hor.htm
-
-# From Jesper Nørgaard Welen (2008-03-13):
-# The Cuban time change has just been confirmed on the most authoritative
-# web site, the Granma. Please check out
-# http://www.granma.cubaweb.cu/2008/03/13/nacional/artic10.html
-#
-# Basically as expected after Steffen Thorsen's information, the change
-# will take place midnight between Saturday and Sunday.
-
-# From Arthur David Olson (2008-03-12):
-# Assume Sun>=15 (third Sunday) going forward.
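For readers not steeped in zic syntax: an ON field such as "Sun>=15" denotes the first Sunday falling on or after the 15th, which is always the third Sunday of the month. A throwaway helper (illustrative only, not part of the tz tools) makes the arithmetic concrete:

    import calendar
    import datetime

    def on_or_after(year, month, weekday, day):
        """Resolve a zic-style "Sun>=day" ON field: the first date in
        year/month whose weekday (Mon=0 .. Sun=6) is on or after day."""
        start = datetime.date(year, month, day)
        return start + datetime.timedelta(days=(weekday - start.weekday()) % 7)

    # Cuba's "Mar Sun>=15" for 2008 resolves to Sunday 2008-03-16:
    print(on_or_after(2008, 3, calendar.SUNDAY, 15))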
-
-# From Alexander Krivenyshev (2009-03-04)
-# According to the Radio Reloj - Cuba will start Daylight Saving Time on
-# midnight between Saturday, March 07, 2009 and Sunday, March 08, 2009 -
-# not on midnight March 14 / March 15 as previously thought.
-#
-# http://www.worldtimezone.com/dst_news/dst_news_cuba05.html
-# (in Spanish)
-
-# From Arthur David Olson (2009-03-09)
-# I listened over the Internet to
-# http://media.enet.cu/radioreloj
-# this morning; when it was 10:05 a. m. here in Bethesda, Maryland the
-# time was announced as "diez cinco" - the same time as here, indicating
-# that Cuba has indeed switched to DST. Assume second Sunday from 2009 forward.
-
-# From Steffen Thorsen (2011-03-08):
-# Granma announced that Cuba is going to start DST on 2011-03-20 00:00:00
-# this year. Nothing about the end date known so far (if that has
-# changed at all).
-#
-# Source:
-# http://granma.co.cu/2011/03/08/nacional/artic01.html
-#
-# Our info:
-# https://www.timeanddate.com/news/time/cuba-starts-dst-2011.html
-#
-# From Steffen Thorsen (2011-10-30)
-# Cuba will end DST two weeks later this year. Instead of going back
-# tonight, it has been delayed to 2011-11-13 at 01:00.
-#
-# One source (Spanish)
-# http://www.radioangulo.cu/noticias/cuba/17105-cuba-restablecera-el-horario-del-meridiano-de-greenwich.html
-#
-# Our page:
-# https://www.timeanddate.com/news/time/cuba-time-changes-2011.html
-#
-# From Steffen Thorsen (2012-03-01)
-# According to Radio Reloj, Cuba will start DST on Midnight between March
-# 31 and April 1.
-#
-# Radio Reloj has the following info (Spanish):
-# http://www.radioreloj.cu/index.php/noticias-radio-reloj/71-miscelaneas/7529-cuba-aplicara-el-horario-de-verano-desde-el-1-de-abril
-#
-# Our info on it:
-# https://www.timeanddate.com/news/time/cuba-starts-dst-2012.html
-
-# From Steffen Thorsen (2012-11-03):
-# Radio Reloj and many other sources report that Cuba is changing back
-# to standard time on 2012-11-04:
-# http://www.radioreloj.cu/index.php/noticias-radio-reloj/36-nacionales/9961-regira-horario-normal-en-cuba-desde-el-domingo-cuatro-de-noviembre
-# From Paul Eggert (2012-11-03):
-# For now, assume the future rule is first Sunday in November.
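That "first Sunday in November" guess can be checked against any zoneinfo build containing the resulting rules; note that the 0:00s suffix in the AT column below means the transition is reckoned in local standard time rather than wall-clock time. A quick sketch, again assuming Python 3.9+ with IANA data available:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # 2012-11-04 was the first Sunday in November; Havana should be on
    # CDT (UTC-4) the day before and CST (UTC-5) the day after.
    havana = ZoneInfo("America/Havana")
    print(datetime(2012, 11, 3, 12, 0, tzinfo=havana).utcoffset())  # UTC-4
    print(datetime(2012, 11, 5, 12, 0, tzinfo=havana).utcoffset())  # UTC-5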
- -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Cuba 1928 only - Jun 10 0:00 1:00 D -Rule Cuba 1928 only - Oct 10 0:00 0 S -Rule Cuba 1940 1942 - Jun Sun>=1 0:00 1:00 D -Rule Cuba 1940 1942 - Sep Sun>=1 0:00 0 S -Rule Cuba 1945 1946 - Jun Sun>=1 0:00 1:00 D -Rule Cuba 1945 1946 - Sep Sun>=1 0:00 0 S -Rule Cuba 1965 only - Jun 1 0:00 1:00 D -Rule Cuba 1965 only - Sep 30 0:00 0 S -Rule Cuba 1966 only - May 29 0:00 1:00 D -Rule Cuba 1966 only - Oct 2 0:00 0 S -Rule Cuba 1967 only - Apr 8 0:00 1:00 D -Rule Cuba 1967 1968 - Sep Sun>=8 0:00 0 S -Rule Cuba 1968 only - Apr 14 0:00 1:00 D -Rule Cuba 1969 1977 - Apr lastSun 0:00 1:00 D -Rule Cuba 1969 1971 - Oct lastSun 0:00 0 S -Rule Cuba 1972 1974 - Oct 8 0:00 0 S -Rule Cuba 1975 1977 - Oct lastSun 0:00 0 S -Rule Cuba 1978 only - May 7 0:00 1:00 D -Rule Cuba 1978 1990 - Oct Sun>=8 0:00 0 S -Rule Cuba 1979 1980 - Mar Sun>=15 0:00 1:00 D -Rule Cuba 1981 1985 - May Sun>=5 0:00 1:00 D -Rule Cuba 1986 1989 - Mar Sun>=14 0:00 1:00 D -Rule Cuba 1990 1997 - Apr Sun>=1 0:00 1:00 D -Rule Cuba 1991 1995 - Oct Sun>=8 0:00s 0 S -Rule Cuba 1996 only - Oct 6 0:00s 0 S -Rule Cuba 1997 only - Oct 12 0:00s 0 S -Rule Cuba 1998 1999 - Mar lastSun 0:00s 1:00 D -Rule Cuba 1998 2003 - Oct lastSun 0:00s 0 S -Rule Cuba 2000 2003 - Apr Sun>=1 0:00s 1:00 D -Rule Cuba 2004 only - Mar lastSun 0:00s 1:00 D -Rule Cuba 2006 2010 - Oct lastSun 0:00s 0 S -Rule Cuba 2007 only - Mar Sun>=8 0:00s 1:00 D -Rule Cuba 2008 only - Mar Sun>=15 0:00s 1:00 D -Rule Cuba 2009 2010 - Mar Sun>=8 0:00s 1:00 D -Rule Cuba 2011 only - Mar Sun>=15 0:00s 1:00 D -Rule Cuba 2011 only - Nov 13 0:00s 0 S -Rule Cuba 2012 only - Apr 1 0:00s 1:00 D -Rule Cuba 2012 max - Nov Sun>=1 0:00s 0 S -Rule Cuba 2013 max - Mar Sun>=8 0:00s 1:00 D - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Havana -5:29:28 - LMT 1890 - -5:29:36 - HMT 1925 Jul 19 12:00 # Havana MT - -5:00 Cuba C%sT - -# Dominica -# See America/Port_of_Spain. - -# Dominican Republic - -# From Steffen Thorsen (2000-10-30): -# Enrique Morales reported to me that the Dominican Republic has changed the -# time zone to Eastern Standard Time as of Sunday 29 at 2 am.... -# http://www.listin.com.do/antes/261000/republica/princi.html - -# From Paul Eggert (2000-12-04): -# That URL (2000-10-26, in Spanish) says they planned to use US-style DST. - -# From Rives McDow (2000-12-01): -# Dominican Republic changed its mind and presidential decree on Tuesday, -# November 28, 2000, with a new decree. On Sunday, December 3 at 1:00 AM the -# Dominican Republic will be reverting to 8 hours from the International Date -# Line, and will not be using DST in the foreseeable future. The reason they -# decided to use DST was to be in synch with Puerto Rico, who was also going -# to implement DST. When Puerto Rico didn't implement DST, the president -# decided to revert. - - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule DR 1966 only - Oct 30 0:00 1:00 EDT -Rule DR 1967 only - Feb 28 0:00 0 EST -Rule DR 1969 1973 - Oct lastSun 0:00 0:30 -0430 -Rule DR 1970 only - Feb 21 0:00 0 EST -Rule DR 1971 only - Jan 20 0:00 0 EST -Rule DR 1972 1974 - Jan 21 0:00 0 EST -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Santo_Domingo -4:39:36 - LMT 1890 - -4:40 - SDMT 1933 Apr 1 12:00 # S. Dom. 
MT - -5:00 DR %s 1974 Oct 27 - -4:00 - AST 2000 Oct 29 2:00 - -5:00 US E%sT 2000 Dec 3 1:00 - -4:00 - AST - -# El Salvador - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Salv 1987 1988 - May Sun>=1 0:00 1:00 D -Rule Salv 1987 1988 - Sep lastSun 0:00 0 S -# There are too many San Salvadors elsewhere, so use America/El_Salvador -# instead of America/San_Salvador. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/El_Salvador -5:56:48 - LMT 1921 # San Salvador - -6:00 Salv C%sT - -# Grenada -# Guadeloupe -# St Barthélemy -# St Martin (French part) -# See America/Port_of_Spain. - -# Guatemala -# -# From Gwillim Law (2006-04-22), after a heads-up from Oscar van Vlijmen: -# Diario Co Latino, at -# , -# says in an article dated 2006-04-19 that the Guatemalan government had -# decided on that date to advance official time by 60 minutes, to lessen the -# impact of the elevated cost of oil.... Daylight saving time will last from -# 2006-04-29 24:00 (Guatemalan standard time) to 2006-09-30 (time unspecified). -# From Paul Eggert (2006-06-22): -# The Ministry of Energy and Mines, press release CP-15/2006 -# (2006-04-19), says DST ends at 24:00. See -# http://www.sieca.org.gt/Sitio_publico/Energeticos/Doc/Medidas/Cambio_Horario_Nac_190406.pdf - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Guat 1973 only - Nov 25 0:00 1:00 D -Rule Guat 1974 only - Feb 24 0:00 0 S -Rule Guat 1983 only - May 21 0:00 1:00 D -Rule Guat 1983 only - Sep 22 0:00 0 S -Rule Guat 1991 only - Mar 23 0:00 1:00 D -Rule Guat 1991 only - Sep 7 0:00 0 S -Rule Guat 2006 only - Apr 30 0:00 1:00 D -Rule Guat 2006 only - Oct 1 0:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Guatemala -6:02:04 - LMT 1918 Oct 5 - -6:00 Guat C%sT - -# Haiti -# From Gwillim Law (2005-04-15): -# Risto O. Nykänen wrote me that Haiti is now on DST. -# I searched for confirmation, and I found a press release -# on the Web page of the Haitian Consulate in Chicago (2005-03-31), -# . Translated from French, it says: -# -# "The Prime Minister's Communication Office notifies the public in general -# and the press in particular that, following a decision of the Interior -# Ministry and the Territorial Collectivities [I suppose that means the -# provinces], Haiti will move to Eastern Daylight Time in the night from next -# Saturday the 2nd to Sunday the 3rd. -# -# "Consequently, the Prime Minister's Communication Office wishes to inform -# the population that the country's clocks will be set forward one hour -# starting at midnight. This provision will hold until the last Saturday in -# October 2005. -# -# "Port-au-Prince, March 31, 2005" -# -# From Steffen Thorsen (2006-04-04): -# I have been informed by users that Haiti observes DST this year like -# last year, so the current "only" rule for 2005 might be changed to a -# "max" rule or to last until 2006. (Who knows if they will observe DST -# next year or if they will extend their DST like US/Canada next year). -# -# I have found this article about it (in French): -# http://www.haitipressnetwork.com/news.cfm?articleID=7612 -# -# The reason seems to be an energy crisis. - -# From Stephen Colebourne (2007-02-22): -# Some IATA info: Haiti won't be having DST in 2007. - -# From Steffen Thorsen (2012-03-11): -# According to several news sources, Haiti will observe DST this year, -# apparently using the same start and end date as USA/Canada. -# So this means they have already changed their time. 
-# -# http://www.alterpresse.org/spip.php?article12510 -# http://radiovision2000haiti.net/home/?p=13253 -# -# From Arthur David Olson (2012-03-11): -# The alterpresse.org source seems to show a US-style leap from 2:00 a.m. to -# 3:00 a.m. rather than the traditional Haitian jump at midnight. -# Assume a US-style fall back as well. - -# From Steffen Thorsen (2013-03-10): -# It appears that Haiti is observing DST this year as well, same rules -# as US/Canada. They did it last year as well, and it looks like they -# are going to observe DST every year now... -# -# http://radiovision2000haiti.net/public/haiti-avis-changement-dheure-dimanche/ -# http://www.canalplushaiti.net/?p=6714 - -# From Steffen Thorsen (2016-03-12): -# Jean Antoine, editor of www.haiti-reference.com informed us that Haiti -# are not going on DST this year. Several other resources confirm this: ... -# https://www.radiotelevisioncaraibes.com/presse/heure_d_t_pas_de_changement_d_heure_pr_vu_pour_cet_ann_e.html -# https://www.vantbefinfo.com/changement-dheure-pas-pour-haiti/ -# http://news.anmwe.com/haiti-lheure-nationale-ne-sera-ni-avancee-ni-reculee-cette-annee/ - -# From Steffen Thorsen (2017-03-12): -# We have received 4 mails from different people telling that Haiti -# has started DST again today, and this source seems to confirm that, -# I have not been able to find a more authoritative source: -# https://www.haitilibre.com/en/news-20319-haiti-notices-time-change-in-haiti.html - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Haiti 1983 only - May 8 0:00 1:00 D -Rule Haiti 1984 1987 - Apr lastSun 0:00 1:00 D -Rule Haiti 1983 1987 - Oct lastSun 0:00 0 S -# Shanks & Pottenger say AT is 2:00, but IATA SSIM (1991/1997) says 1:00s. -# Go with IATA. -Rule Haiti 1988 1997 - Apr Sun>=1 1:00s 1:00 D -Rule Haiti 1988 1997 - Oct lastSun 1:00s 0 S -Rule Haiti 2005 2006 - Apr Sun>=1 0:00 1:00 D -Rule Haiti 2005 2006 - Oct lastSun 0:00 0 S -Rule Haiti 2012 2015 - Mar Sun>=8 2:00 1:00 D -Rule Haiti 2012 2015 - Nov Sun>=1 2:00 0 S -Rule Haiti 2017 max - Mar Sun>=8 2:00 1:00 D -Rule Haiti 2017 max - Nov Sun>=1 2:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Port-au-Prince -4:49:20 - LMT 1890 - -4:49 - PPMT 1917 Jan 24 12:00 # P-a-P MT - -5:00 Haiti E%sT - -# Honduras -# Shanks & Pottenger say 1921 Jan 1; go with Whitman's more precise Apr 1. - -# From Paul Eggert (2006-05-05): -# worldtimezone.com reports a 2006-05-02 Spanish-language AP article -# saying Honduras will start using DST midnight Saturday, effective 4 -# months until September. La Tribuna reported today -# that Manuel Zelaya, the president -# of Honduras, refused to back down on this. - -# From Jesper Nørgaard Welen (2006-08-08): -# It seems that Honduras has returned from DST to standard time this Monday at -# 00:00 hours (prolonging Sunday to 25 hours duration). -# http://www.worldtimezone.com/dst_news/dst_news_honduras04.html - -# From Paul Eggert (2006-08-08): -# Also see Diario El Heraldo, The country returns to standard time (2006-08-08). -# http://www.elheraldo.hn/nota.php?nid=54941&sec=12 -# It mentions executive decree 18-2006. - -# From Steffen Thorsen (2006-08-17): -# Honduras will observe DST from 2007 to 2009, exact dates are not -# published, I have located this authoritative source: -# http://www.presidencia.gob.hn/noticia.aspx?nId=47 - -# From Steffen Thorsen (2007-03-30): -# http://www.laprensahn.com/pais_nota.php?id04962=7386 -# So it seems that Honduras will not enter DST this year.... 
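The Rule lines that follow use zic's fixed nine-column layout (NAME FROM TO TYPE IN ON AT SAVE LETTER/S). A disposable parser sketch for that layout (illustrative only; the real parser lives in zic):

    RULE_FIELDS = ("name", "from", "to", "type",
                   "in", "on", "at", "save", "letters")

    def parse_rule(line):
        """Split one zic 'Rule' line into its nine named fields."""
        tokens = line.split()
        if len(tokens) != 10 or tokens[0] != "Rule":
            raise ValueError("not a Rule line: %r" % line)
        return dict(zip(RULE_FIELDS, tokens[1:]))

    # Honduras's one-off 2006 daylight saving rule from the table below:
    print(parse_rule("Rule Hond 2006 only - May Sun>=1 0:00 1:00 D"))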
- -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Hond 1987 1988 - May Sun>=1 0:00 1:00 D -Rule Hond 1987 1988 - Sep lastSun 0:00 0 S -Rule Hond 2006 only - May Sun>=1 0:00 1:00 D -Rule Hond 2006 only - Aug Mon>=1 0:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Tegucigalpa -5:48:52 - LMT 1921 Apr - -6:00 Hond C%sT -# -# Great Swan I ceded by US to Honduras in 1972 - -# Jamaica -# Shanks & Pottenger give -5:07:12, but Milne records -5:07:10.41 from an -# unspecified official document, and says "This time is used throughout the -# island". Go with Milne. Round to the nearest second as required by zic. -# -# Shanks & Pottenger give April 28 for the 1974 spring-forward transition, but -# Lance Neita writes that Prime Minister Michael Manley decreed it January 5. -# Assume Neita meant Jan 6 02:00, the same as the US. Neita also writes that -# Manley's supporters associated this act with Manley's nickname "Joshua" -# (recall that in the Bible the sun stood still at Joshua's request), -# and with the Rod of Correction which Manley said he had received from -# Haile Selassie, Emperor of Ethiopia. See: -# Neita L. The politician in all of us. Jamaica Observer 2014-09-20 -# http://www.jamaicaobserver.com/columns/The-politician-in-all-of-us_17573647 -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Jamaica -5:07:11 - LMT 1890 # Kingston - -5:07:11 - KMT 1912 Feb # Kingston Mean Time - -5:00 - EST 1974 - -5:00 US E%sT 1984 - -5:00 - EST - -# Martinique -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Martinique -4:04:20 - LMT 1890 # Fort-de-France - -4:04:20 - FFMT 1911 May # Fort-de-France MT - -4:00 - AST 1980 Apr 6 - -4:00 1:00 ADT 1980 Sep 28 - -4:00 - AST - -# Montserrat -# See America/Port_of_Spain. - -# Nicaragua -# -# This uses Shanks & Pottenger for times before 2005. -# -# From Steffen Thorsen (2005-04-12): -# I've got reports from 8 different people that Nicaragua just started -# DST on Sunday 2005-04-10, in order to save energy because of -# expensive petroleum. The exact end date for DST is not yet -# announced, only "September" but some sites also say "mid-September". -# Some background information is available on the President's official site: -# http://www.presidencia.gob.ni/Presidencia/Files_index/Secretaria/Notas%20de%20Prensa/Presidente/2005/ABRIL/Gobierno-de-nicaragua-adelanta-hora-oficial-06abril.htm -# The Decree, no 23-2005 is available here: -# http://www.presidencia.gob.ni/buscador_gaceta/BD/DECRETOS/2005/Decreto%2023-2005%20Se%20adelanta%20en%20una%20hora%20en%20todo%20el%20territorio%20nacional%20apartir%20de%20las%2024horas%20del%2009%20de%20Abril.pdf -# -# From Paul Eggert (2005-05-01): -# The decree doesn't say anything about daylight saving, but for now let's -# assume that it is daylight saving.... -# -# From Gwillim Law (2005-04-21): -# The Associated Press story on the time change, which can be found at -# http://www.lapalmainteractivo.com/guias/content/gen/ap/America_Latina/AMC_GEN_NICARAGUA_HORA.html -# and elsewhere, says (fifth paragraph, translated from Spanish): "The last -# time that a change of clocks was applied to save energy was in the year 2000 -# during the Arnoldo Alemán administration."... -# The northamerica file says that Nicaragua has been on UTC-6 continuously -# since December 1998. I wasn't able to find any details of Nicaraguan time -# changes in 2000. Perhaps a note could be added to the northamerica file, to -# the effect that we have indirect evidence that DST was observed in 2000. 
-# -# From Jesper Nørgaard Welen (2005-11-02): -# Nicaragua left DST the 2005-10-02 at 00:00 (local time). -# http://www.presidencia.gob.ni/presidencia/files_index/secretaria/comunicados/2005/septiembre/26septiembre-cambio-hora.htm -# (2005-09-26) -# -# From Jesper Nørgaard Welen (2006-05-05): -# http://www.elnuevodiario.com.ni/2006/05/01/nacionales/18410 -# (my informal translation) -# By order of the president of the republic, Enrique Bolaños, Nicaragua -# advanced by sixty minutes their official time, yesterday at 2 in the -# morning, and will stay that way until 30th of September. -# -# From Jesper Nørgaard Welen (2006-09-30): -# http://www.presidencia.gob.ni/buscador_gaceta/BD/DECRETOS/2006/D-063-2006P-PRN-Cambio-Hora.pdf -# My informal translation runs: -# The natural sun time is restored in all the national territory, in that the -# time is returned one hour at 01:00 am of October 1 of 2006. -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Nic 1979 1980 - Mar Sun>=16 0:00 1:00 D -Rule Nic 1979 1980 - Jun Mon>=23 0:00 0 S -Rule Nic 2005 only - Apr 10 0:00 1:00 D -Rule Nic 2005 only - Oct Sun>=1 0:00 0 S -Rule Nic 2006 only - Apr 30 2:00 1:00 D -Rule Nic 2006 only - Oct Sun>=1 1:00 0 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Managua -5:45:08 - LMT 1890 - -5:45:12 - MMT 1934 Jun 23 # Managua Mean Time? - -6:00 - CST 1973 May - -5:00 - EST 1975 Feb 16 - -6:00 Nic C%sT 1992 Jan 1 4:00 - -5:00 - EST 1992 Sep 24 - -6:00 - CST 1993 - -5:00 - EST 1997 - -6:00 Nic C%sT - -# Panama -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Panama -5:18:08 - LMT 1890 - -5:19:36 - CMT 1908 Apr 22 # Colón Mean Time - -5:00 - EST -Link America/Panama America/Cayman - -# Puerto Rico -# There are too many San Juans elsewhere, so we'll use 'Puerto_Rico'. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Puerto_Rico -4:24:25 - LMT 1899 Mar 28 12:00 # San Juan - -4:00 - AST 1942 May 3 - -4:00 US A%sT 1946 - -4:00 - AST - -# St Kitts-Nevis -# St Lucia -# See America/Port_of_Spain. - -# St Pierre and Miquelon -# There are too many St Pierres elsewhere, so we'll use 'Miquelon'. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Miquelon -3:44:40 - LMT 1911 May 15 # St Pierre - -4:00 - AST 1980 May - -3:00 - -03 1987 - -3:00 Canada -03/-02 - -# St Vincent and the Grenadines -# See America/Port_of_Spain. - -# Turks and Caicos -# -# From Chris Dunn in -# https://bugs.debian.org/415007 -# (2007-03-15): In the Turks & Caicos Islands (America/Grand_Turk) the -# daylight saving dates for time changes have been adjusted to match -# the recent U.S. change of dates. -# -# From Brian Inglis (2007-04-28): -# http://www.turksandcaicos.tc/calendar/index.htm [2007-04-26] -# there is an entry for Nov 4 "Daylight Savings Time Ends 2007" and three -# rows before that there is an out of date entry for Oct: -# "Eastern Standard Times Begins 2007 -# Clocks are set back one hour at 2:00 a.m. local Daylight Saving Time" -# indicating that the normal ET rules are followed. -# -# From Paul Eggert (2014-08-19): -# The 2014-08-13 Cabinet meeting decided to stay on UT -04 year-round. See: -# http://tcweeklynews.com/daylight-savings-time-to-be-maintained-p5353-127.htm -# Model this as a switch from EST/EDT to AST ... -# From Chris Walton (2014-11-04): -# ... the TCI government appears to have delayed the switch to -# "permanent daylight saving time" by one year.... 
-# http://tcweeklynews.com/time-change-to-go-ahead-this-november-p5437-127.htm -# -# From the Turks & Caicos Cabinet (2017-07-20), heads-up from Steffen Thorsen: -# ... agreed to the reintroduction in TCI of Daylight Saving Time (DST) -# during the summer months and Standard Time, also known as Local -# Time, during the winter months with effect from April 2018 ... -# https://www.gov.uk/government/news/turks-and-caicos-post-cabinet-meeting-statement--3 -# -# From Paul Eggert (2017-08-26): -# The date of effect of the spring 2018 change appears to be March 11, -# which makes more sense. See: Hamilton D. Time change back -# by March 2018 for TCI. Magnetic Media. 2017-08-25. -# http://magneticmediatv.com/2017/08/time-change-back-by-march-2018-for-tci/ -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Grand_Turk -4:44:32 - LMT 1890 - -5:07:11 - KMT 1912 Feb # Kingston Mean Time - -5:00 - EST 1979 - -5:00 US E%sT 2015 Nov Sun>=1 2:00 - -4:00 - AST 2018 Mar 11 3:00 - -5:00 US E%sT - -# British Virgin Is -# Virgin Is -# See America/Port_of_Spain. - - -# Local Variables: -# coding: utf-8 -# End: diff --git a/src/timezone/data/pacificnew b/src/timezone/data/pacificnew deleted file mode 100644 index 734943486b..0000000000 --- a/src/timezone/data/pacificnew +++ /dev/null @@ -1,27 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# From Arthur David Olson (1989-04-05): -# On 1989-04-05, the U. S. House of Representatives passed (238-154) a bill -# establishing "Pacific Presidential Election Time"; it was not acted on -# by the Senate or signed into law by the President. -# You might want to change the "PE" (Presidential Election) below to -# "Q" (Quadrennial) to maintain three-character zone abbreviations. -# If you're really conservative, you might want to change it to "D". -# Avoid "L" (Leap Year), which won't be true in 2100. - -# If Presidential Election Time is ever established, replace "XXXX" below -# with the year the law takes effect and uncomment the "##" lines. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -## Rule Twilite XXXX max - Apr Sun>=1 2:00 1:00 D -## Rule Twilite XXXX max uspres Oct lastSun 2:00 1:00 PE -## Rule Twilite XXXX max uspres Nov Sun>=7 2:00 0 S -## Rule Twilite XXXX max nonpres Oct lastSun 2:00 0 S - -# Zone NAME GMTOFF RULES/SAVE FORMAT [UNTIL] -## Zone America/Los_Angeles-PET -8:00 US P%sT XXXX -## -8:00 Twilite P%sT - -# For now... -Link America/Los_Angeles US/Pacific-New ## diff --git a/src/timezone/data/southamerica b/src/timezone/data/southamerica deleted file mode 100644 index bbae226156..0000000000 --- a/src/timezone/data/southamerica +++ /dev/null @@ -1,1793 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# This file is by no means authoritative; if you think you know better, -# go ahead and edit the file (and please send any changes to -# tz@iana.org for general use in the future). For more, please see -# the file CONTRIBUTING in the tz distribution. - -# From Paul Eggert (2016-12-05): -# -# Unless otherwise specified, the source for data through 1990 is: -# Thomas G. Shanks and Rique Pottenger, The International Atlas (6th edition), -# San Diego: ACS Publications, Inc. (2003). -# Unfortunately this book contains many errors and cites no sources. 
-# -# Many years ago Gwillim Law wrote that a good source -# for time zone data was the International Air Transport -# Association's Standard Schedules Information Manual (IATA SSIM), -# published semiannually. Law sent in several helpful summaries -# of the IATA's data after 1990. Except where otherwise noted, -# IATA SSIM is the source for entries after 1990. -# -# For data circa 1899, a common source is: -# Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94. -# https://www.jstor.org/stable/1774359 -# -# These tables use numeric abbreviations like -03 and -0330 for -# integer hour and minute UTC offsets. Although earlier editions used -# alphabetic time zone abbreviations, these abbreviations were -# invented and did not reflect common practice. - -############################################################################### - -############################################################################### - -# Argentina - -# From Bob Devine (1988-01-28): -# Argentina: first Sunday in October to first Sunday in April since 1976. -# Double Summer time from 1969 to 1974. Switches at midnight. - -# From U. S. Naval Observatory (1988-01-19): -# ARGENTINA 3 H BEHIND UTC - -# From Hernan G. Otero (1995-06-26): -# I am sending modifications to the Argentine time zone table... -# AR was chosen because they are the ISO letters that represent Argentina. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Arg 1930 only - Dec 1 0:00 1:00 S -Rule Arg 1931 only - Apr 1 0:00 0 - -Rule Arg 1931 only - Oct 15 0:00 1:00 S -Rule Arg 1932 1940 - Mar 1 0:00 0 - -Rule Arg 1932 1939 - Nov 1 0:00 1:00 S -Rule Arg 1940 only - Jul 1 0:00 1:00 S -Rule Arg 1941 only - Jun 15 0:00 0 - -Rule Arg 1941 only - Oct 15 0:00 1:00 S -Rule Arg 1943 only - Aug 1 0:00 0 - -Rule Arg 1943 only - Oct 15 0:00 1:00 S -Rule Arg 1946 only - Mar 1 0:00 0 - -Rule Arg 1946 only - Oct 1 0:00 1:00 S -Rule Arg 1963 only - Oct 1 0:00 0 - -Rule Arg 1963 only - Dec 15 0:00 1:00 S -Rule Arg 1964 1966 - Mar 1 0:00 0 - -Rule Arg 1964 1966 - Oct 15 0:00 1:00 S -Rule Arg 1967 only - Apr 2 0:00 0 - -Rule Arg 1967 1968 - Oct Sun>=1 0:00 1:00 S -Rule Arg 1968 1969 - Apr Sun>=1 0:00 0 - -Rule Arg 1974 only - Jan 23 0:00 1:00 S -Rule Arg 1974 only - May 1 0:00 0 - -Rule Arg 1988 only - Dec 1 0:00 1:00 S -# -# From Hernan G. Otero (1995-06-26): -# These corrections were contributed by InterSoft Argentina S.A., -# obtaining the data from the: -# Talleres de Hidrografía Naval Argentina -# (Argentine Naval Hydrography Institute) -Rule Arg 1989 1993 - Mar Sun>=1 0:00 0 - -Rule Arg 1989 1992 - Oct Sun>=15 0:00 1:00 S -# -# From Hernan G. Otero (1995-06-26): -# From this moment on, the law that mandated the daylight saving -# time corrections was derogated and no more modifications -# to the time zones (for daylight saving) are now made. -# -# From Rives McDow (2000-01-10): -# On October 3, 1999, 0:00 local, Argentina implemented daylight savings time, -# which did not result in the switch of a time zone, as they stayed 9 hours -# from the International Date Line. -Rule Arg 1999 only - Oct Sun>=1 0:00 1:00 S -# From Paul Eggert (2007-12-28): -# DST was set to expire on March 5, not March 3, but since it was converted -# to standard time on March 3 it's more convenient for us to pretend that -# it ended on March 3. 
-Rule Arg 2000 only - Mar 3 0:00 0 - -# -# From Peter Gradelski via Steffen Thorsen (2000-03-01): -# We just checked with our São Paulo office and they say the government of -# Argentina decided not to become one of the countries that go on or off DST. -# So Buenos Aires should be -3 hours from GMT at all times. -# -# From Fabián L. Arce Jofré (2000-04-04): -# The law that claimed DST for Argentina was derogated by President Fernando -# de la Rúa on March 2, 2000, because it would make people spend more energy -# in the winter time, rather than less. The change took effect on March 3. -# -# From Mariano Absatz (2001-06-06): -# one of the major newspapers here in Argentina said that the 1999 -# Timezone Law (which never was effectively applied) will (would?) be -# in effect.... The article is at -# http://ar.clarin.com/diario/2001-06-06/e-01701.htm -# ... The Law itself is "Ley No. 25155", sanctioned on 1999-08-25, enacted -# 1999-09-17, and published 1999-09-21. The official publication is at: -# http://www.boletin.jus.gov.ar/BON/Primera/1999/09-Septiembre/21/PDF/BO21-09-99LEG.PDF -# Regretfully, you have to subscribe (and pay) for the on-line version.... -# -# (2001-06-12): -# the timezone for Argentina will not change next Sunday. -# Apparently it will do so on Sunday 24th.... -# http://ar.clarin.com/diario/2001-06-12/s-03501.htm -# -# (2001-06-25): -# Last Friday (yes, the last working day before the date of the change), the -# Senate annulled the 1999 law that introduced the changes later postponed. -# http://www.clarin.com.ar/diario/2001-06-22/s-03601.htm -# It remains the vote of the Deputies..., but it will be the same.... -# This kind of things had always been done this way in Argentina. -# We are still -03:00 all year round in all of the country. -# -# From Steffen Thorsen (2007-12-21): -# A user (Leonardo Chaim) reported that Argentina will adopt DST.... -# all of the country (all Zone-entries) are affected. News reports like -# http://www.lanacion.com.ar/opinion/nota.asp?nota_id=973037 indicate -# that Argentina will use DST next year as well, from October to -# March, although exact rules are not given. -# -# From Jesper Nørgaard Welen (2007-12-26) -# The last hurdle of Argentina DST is over, the proposal was approved in -# the lower chamber too (Diputados) with a vote 192 for and 2 against. -# By the way thanks to Mariano Absatz and Daniel Mario Vega for the link to -# the original scanned proposal, where the dates and the zero hours are -# clear and unambiguous...This is the article about final approval: -# http://www.lanacion.com.ar/politica/nota.asp?nota_id=973996 -# -# From Paul Eggert (2007-12-22): -# For dates after mid-2008, the following rules are my guesses and -# are quite possibly wrong, but are more likely than no DST at all. - -# From Alexander Krivenyshev (2008-09-05): -# As per message from Carlos Alberto Fonseca Arauz (Nicaragua), -# Argentina will start DST on Sunday October 19, 2008. -# -# http://www.worldtimezone.com/dst_news/dst_news_argentina03.html -# http://www.impulsobaires.com.ar/nota.php?id=57832 (in spanish) - -# From Juan Manuel Docile in https://bugs.gentoo.org/240339 (2008-10-07) -# via Rodrigo Severo: -# Argentinian law No. 25.155 is no longer valid. -# http://www.infoleg.gov.ar/infolegInternet/anexos/60000-64999/60036/norma.htm -# The new one is law No. 26.350 -# http://www.infoleg.gov.ar/infolegInternet/anexos/135000-139999/136191/norma.htm -# So there is no summer time in Argentina for now. 
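As the next note records, DST did end up running for the 2008/2009 summer (2008-10-19 through 2009-03-15) and, per the 2009 note further below, was not renewed afterwards. A minimal zoneinfo check of that single season, assuming Python 3.9+ with current IANA data:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Buenos Aires was on summer time (UTC-2) in November 2008 but back
    # on UTC-3 by November 2009, when DST was not renewed.
    ba = ZoneInfo("America/Argentina/Buenos_Aires")
    print(datetime(2008, 11, 1, 12, 0, tzinfo=ba).utcoffset())  # UTC-2
    print(datetime(2009, 11, 1, 12, 0, tzinfo=ba).utcoffset())  # UTC-3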
- -# From Mariano Absatz (2008-10-20): -# Decree 1693/2008 applies Law 26.350 for the summer 2008/2009 establishing DST -# in Argentina from 2008-10-19 until 2009-03-15. -# http://www.boletinoficial.gov.ar/Bora.Portal/CustomControls/PdfContent.aspx?fp=16102008&pi=3&pf=4&s=0&sec=01 -# - -# Decree 1705/2008 excepting 12 Provinces from applying DST in the summer -# 2008/2009: Catamarca, La Rioja, Mendoza, Salta, San Juan, San Luis, La -# Pampa, Neuquén, Rio Negro, Chubut, Santa Cruz and Tierra del Fuego -# http://www.boletinoficial.gov.ar/Bora.Portal/CustomControls/PdfContent.aspx?fp=17102008&pi=1&pf=1&s=0&sec=01 -# -# Press release 235 dated Saturday October 18th, from the Government of the -# Province of Jujuy saying it will not apply DST either (even when it was not -# included in Decree 1705/2008). -# http://www.jujuy.gov.ar/index2/partes_prensa/18_10_08/235-181008.doc - -# From fullinet (2009-10-18): -# As announced in -# http://www.argentina.gob.ar/argentina/portal/paginas.dhtml?pagina=356 -# (an official .gob.ar) under title: "Sin Cambio de Hora" -# (English: "No hour change"). -# -# "Por el momento, el Gobierno Nacional resolvió no modificar la hora -# oficial, decisión que estaba en estudio para su implementación el -# domingo 18 de octubre. Desde el Ministerio de Planificación se anunció -# que la Argentina hoy, en estas condiciones meteorológicas, no necesita -# la modificación del huso horario, ya que 2009 nos encuentra con -# crecimiento en la producción y distribución energética." - -Rule Arg 2007 only - Dec 30 0:00 1:00 S -Rule Arg 2008 2009 - Mar Sun>=15 0:00 0 - -Rule Arg 2008 only - Oct Sun>=15 0:00 1:00 S - -# From Mariano Absatz (2004-05-21): -# Today it was officially published that the Province of Mendoza is changing -# its timezone this winter... starting tomorrow night.... -# http://www.gobernac.mendoza.gov.ar/boletin/pdf/20040521-27158-normas.pdf -# From Paul Eggert (2004-05-24): -# It's Law No. 7,210. This change is due to a public power emergency, so for -# now we'll assume it's for this year only. -# -# From Paul Eggert (2014-08-09): -# Hora de verano para la República Argentina -# http://buenasiembra.com.ar/esoterismo/astrologia/hora-de-verano-de-la-republica-argentina-27.html -# says that standard time in Argentina from 1894-10-31 -# to 1920-05-01 was -4:16:48.25. Go with this more-precise value -# over Shanks & Pottenger. -# -# From Mariano Absatz (2004-06-05): -# These media articles from a major newspaper mostly cover the current state: -# http://www.lanacion.com.ar/04/05/27/de_604825.asp -# http://www.lanacion.com.ar/04/05/28/de_605203.asp -# -# The following eight (8) provinces pulled clocks back to UTC-04:00 at -# midnight Monday May 31st. (that is, the night between 05/31 and 06/01). -# Apparently, all nine provinces would go back to UTC-03:00 at the same -# time in October 17th. -# -# Catamarca, Chubut, La Rioja, San Juan, San Luis, Santa Cruz, -# Tierra del Fuego, Tucumán. -# -# From Mariano Absatz (2004-06-14): -# ... this weekend, the Province of Tucumán decided it'd go back to UTC-03:00 -# yesterday midnight (that is, at 24:00 Saturday 12th), since the people's -# annoyance with the change is much higher than the power savings obtained.... -# -# From Gwillim Law (2004-06-14): -# http://www.lanacion.com.ar/04/06/10/de_609078.asp ... -# "The time change in Tierra del Fuego was a conflicted decision from -# the start. 
The government had decreed that the measure would take -# effect on June 1, but a normative error forced the new time to begin -# three days earlier, from a Saturday to a Sunday.... -# Our understanding was that the change was originally scheduled to take place -# on June 1 at 00:00 in Chubut, Santa Cruz, Tierra del Fuego (and some other -# provinces). Sunday was May 30, only two days earlier. So the article -# contains a contradiction. I would give more credence to the Saturday/Sunday -# date than the "three days earlier" phrase, and conclude that Tierra del -# Fuego set its clocks back at 2004-05-30 00:00. -# -# From Steffen Thorsen (2004-10-05): -# The previous law 7210 which changed the province of Mendoza's time zone -# back in May have been modified slightly in a new law 7277, which set the -# new end date to 2004-09-26 (original date was 2004-10-17). -# http://www.gobernac.mendoza.gov.ar/boletin/pdf/20040924-27244-normas.pdf -# -# From Mariano Absatz (2004-10-05): -# San Juan changed from UTC-03:00 to UTC-04:00 at midnight between -# Sunday, May 30th and Monday, May 31st. It changed back to UTC-03:00 -# at midnight between Saturday, July 24th and Sunday, July 25th.... -# http://www.sanjuan.gov.ar/prensa/archivo/000329.html -# http://www.sanjuan.gov.ar/prensa/archivo/000426.html -# http://www.sanjuan.gov.ar/prensa/archivo/000441.html - -# From Alex Krivenyshev (2008-01-17): -# Here are articles that Argentina Province San Luis is planning to end DST -# as earlier as upcoming Monday January 21, 2008 or February 2008: -# -# Provincia argentina retrasa reloj y marca diferencia con resto del país -# (Argentine Province delayed clock and mark difference with the rest of the -# country) -# http://cl.invertia.com/noticias/noticia.aspx?idNoticia=200801171849_EFE_ET4373&idtel -# -# Es inminente que en San Luis atrasen una hora los relojes -# (It is imminent in San Luis clocks one hour delay) -# https://www.lagaceta.com.ar/nota/253414/Economia/Es-inminente-que-en-San-Luis-atrasen-una-hora-los-relojes.html -# http://www.worldtimezone.com/dst_news/dst_news_argentina02.html - -# From Jesper Nørgaard Welen (2008-01-18): -# The page of the San Luis provincial government -# http://www.sanluis.gov.ar/notas.asp?idCanal=0&id=22812 -# confirms what Alex Krivenyshev has earlier sent to the tz -# emailing list about that San Luis plans to return to standard -# time much earlier than the rest of the country. It also -# confirms that upon request the provinces San Juan and Mendoza -# refused to follow San Luis in this change. -# -# The change is supposed to take place Monday the 21st at 0:00 -# hours. As far as I understand it if this goes ahead, we need -# a new timezone for San Luis (although there are also documented -# independent changes in the southamerica file of San Luis in -# 1990 and 1991 which has not been confirmed). - -# From Jesper Nørgaard Welen (2008-01-25): -# Unfortunately the below page has become defunct, about the San Luis -# time change. Perhaps because it now is part of a group of pages "Most -# important pages of 2008." -# -# You can use -# http://www.sanluis.gov.ar/notas.asp?idCanal=8141&id=22834 -# instead it seems. Or use "Buscador" from the main page of the San Luis -# government, and fill in "huso" and click OK, and you will get 3 pages -# from which the first one is identical to the above. 
-
-# From Mariano Absatz (2008-01-28):
-# I can confirm that the Province of San Luis (and so far only that
-# province) decided to go back to UTC-3 effective midnight Jan 20th 2008
-# (that is, Monday 21st at 0:00 is the time the clocks were delayed back
-# 1 hour), and they intend to keep UTC-3 as their timezone all year round
-# (that is, unless they change their mind any minute now).
-#
-# So we'll have to add yet another city to 'southamerica' (I think San
-# Luis city is the most populated city in the Province, so it'd be
-# America/Argentina/San_Luis... of course I can't remember if San Luis's
-# history of particular changes goes along with Mendoza or San Juan :-(
-# (I only remember not being able to collect hard facts about San Luis
-# back in 2004, when these provinces changed to UTC-4 for a few days, I
-# mailed them personally and never got an answer).
-
-# From Paul Eggert (2014-08-12):
-# Unless otherwise specified, data entries are from Shanks & Pottenger through
-# 1992, from the IATA otherwise. As noted below, Shanks & Pottenger say that
-# America/Cordoba split into 6 subregions during 1991/1992, one of which
-# was America/San_Luis, but we haven't verified this yet so for now we'll
-# keep America/Cordoba a single region rather than splitting it into the
-# other 5 subregions.
-
-# From Mariano Absatz (2009-03-13):
-# Yesterday (with our usual 2-day notice) the Province of San Luis
-# decided that next Sunday instead of "staying" @utc-03:00 they will go
-# to utc-04:00 until the second Saturday in October...
-#
-# The press release is at
-# http://www.sanluis.gov.ar/SL/Paginas/NoticiaDetalle.asp?TemaId=1&InfoPrensaId=3102
-# (I couldn't find the decree, but www.sanluis.gov.ar
-# is the official page for the Province Government.)
-#
-# There's also a note in only one of the major national papers ...
-# http://www.lanacion.com.ar/nota.asp?nota_id=1107912
-#
-# The press release says [quick and dirty translation]:
-# ... announced that next Sunday, at 00:00, Puntanos (the San Luis
-# inhabitants) will have to turn back one hour their clocks
-#
-# Since then, San Luis will establish its own Province timezone. Thus,
-# during 2009, this timezone change will run from 00:00 the third Sunday
-# in March until 24:00 of the second Saturday in October.
-
-# From Mariano Absatz (2009-10-16):
-# ...the Province of San Luis is a case in itself.
-#
-# The Law at
-# http://www.diputadossanluis.gov.ar/diputadosasp/paginas/verNorma.asp?NormaID=276
-# is ambiguous because it establishes a calendar from the 2nd Sunday in
-# October at 0:00 thru the 2nd Saturday in March at 24:00 and the
-# complement of that starting on the 2nd Sunday of March at 0:00 and
-# ending on the 2nd Saturday of October at 24:00.
-#
-# This clearly breaks every time the 1st of March or October is a Sunday.
-#
-# IMHO, the "spirit of the Law" is to make the changes at 0:00 on the 2nd
-# Sunday of October and March.
-#
-# The problem is that the changes in the rest of the Provinces that did
-# change in 2007/2008, were made according to the Federal Law and Decrees
-# that did so on the 3rd Sunday of October and March.
-#
-# In fact, San Luis actually switched from UTC-4 to UTC-3 last Sunday
-# (October 11th) at 0:00.
-#
-# So I guess a new set of rules, besides "Arg", must be made and the last
-# America/Argentina/San_Luis entries should change to use these...
-# ...
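-#
-# (A worked illustration of the ambiguity, not from the Law itself: zic's
-# "Sun>=8" notation, used by the SanLuis rules below, always selects the
-# month's second Sunday, since days 8-14 contain exactly one Sunday. In
-# most years the Law's two periods meet exactly, because the 2nd Saturday
-# at 24:00 coincides with the 2nd Sunday at 0:00. But when October 1
-# falls on a Sunday, the 2nd Sunday is Oct 8 while the 2nd Saturday is
-# Oct 14, so the two periods overlap from Oct 8 through Oct 14. A
-# hypothetical rule pair in zic syntax matching the "spirit of the Law"
-# would be:
-# Rule Example 2009 max - Oct Sun>=8 0:00 1:00 S
-# Rule Example 2010 max - Mar Sun>=8 0:00 0 -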
-
-# From Alexander Krivenyshev (2010-04-09):
-# According to news reports from El Diario de la República Province San
-# Luis, Argentina (standard time UTC-04) will keep Daylight Saving Time
-# after April 11, 2010 - will continue to have same time as rest of
-# Argentina (UTC-3) (no DST).
-#
-# Confirmaron la prórroga del huso horario de verano (Spanish)
-# ("The extension of the summer time zone was confirmed")
-# http://www.eldiariodelarepublica.com/index.php?option=com_content&task=view&id=29383&Itemid=9
-# or (some English translation):
-# http://www.worldtimezone.com/dst_news/dst_news_argentina08.html
-
-# From Mariano Absatz (2010-04-12):
-# yes...I can confirm this...and given that San Luis keeps calling
-# UTC-03:00 "summer time", we shouldn't just let San Luis go back to "Arg"
-# rules...San Luis is still using "Western Argentina Time" and it got
-# stuck on Summer daylight savings time even though the summer is over.
-
-# From Paul Eggert (2013-09-05):
-# Perhaps San Luis operates on the legal fiction that it is at -04
-# with perpetual summer time, but ordinary usage typically seems to
-# just say it's at -03; see, for example,
-# https://es.wikipedia.org/wiki/Hora_oficial_argentina
-# We've documented similar situations as being plain changes to
-# standard time, so let's do that here too. This does not change UTC
-# offsets, only tm_isdst and the time zone abbreviations. One minor
-# plus is that this silences a zic complaint that there's no POSIX TZ
-# setting for time stamps past 2038.
-
-# From Paul Eggert (2013-02-21):
-# Milne says Córdoba time was -4:16:48.2. Round to the nearest second.
-
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-#
-# Buenos Aires (BA), Capital Federal (CF),
-Zone America/Argentina/Buenos_Aires -3:53:48 - LMT 1894 Oct 31
- -4:16:48 - CMT 1920 May # Córdoba Mean Time
- -4:00 - -04 1930 Dec
- -4:00 Arg -04/-03 1969 Oct 5
- -3:00 Arg -03/-02 1999 Oct 3
- -4:00 Arg -04/-03 2000 Mar 3
- -3:00 Arg -03/-02
-#
-# Córdoba (CB), Santa Fe (SF), Entre Ríos (ER), Corrientes (CN), Misiones (MN),
-# Chaco (CC), Formosa (FM), Santiago del Estero (SE)
-#
-# Shanks & Pottenger also make the following claims, which we haven't verified:
-# - Formosa switched to -3:00 on 1991-01-07.
-# - Misiones switched to -3:00 on 1990-12-29.
-# - Chaco switched to -3:00 on 1991-01-04.
-# - Santiago del Estero switched to -4:00 on 1991-04-01,
-# then to -3:00 on 1991-04-26.
-# -Zone America/Argentina/Cordoba -4:16:48 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1991 Mar 3 - -4:00 - -04 1991 Oct 20 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 Arg -03/-02 -# -# Salta (SA), La Pampa (LP), Neuquén (NQ), Rio Negro (RN) -Zone America/Argentina/Salta -4:21:40 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1991 Mar 3 - -4:00 - -04 1991 Oct 20 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 Arg -03/-02 2008 Oct 18 - -3:00 - -03 -# -# Tucumán (TM) -Zone America/Argentina/Tucuman -4:20:52 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1991 Mar 3 - -4:00 - -04 1991 Oct 20 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 - -03 2004 Jun 1 - -4:00 - -04 2004 Jun 13 - -3:00 Arg -03/-02 -# -# La Rioja (LR) -Zone America/Argentina/La_Rioja -4:27:24 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1991 Mar 1 - -4:00 - -04 1991 May 7 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 - -03 2004 Jun 1 - -4:00 - -04 2004 Jun 20 - -3:00 Arg -03/-02 2008 Oct 18 - -3:00 - -03 -# -# San Juan (SJ) -Zone America/Argentina/San_Juan -4:34:04 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1991 Mar 1 - -4:00 - -04 1991 May 7 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 - -03 2004 May 31 - -4:00 - -04 2004 Jul 25 - -3:00 Arg -03/-02 2008 Oct 18 - -3:00 - -03 -# -# Jujuy (JY) -Zone America/Argentina/Jujuy -4:21:12 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1990 Mar 4 - -4:00 - -04 1990 Oct 28 - -4:00 1:00 -03 1991 Mar 17 - -4:00 - -04 1991 Oct 6 - -3:00 1:00 -02 1992 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 Arg -03/-02 2008 Oct 18 - -3:00 - -03 -# -# Catamarca (CT), Chubut (CH) -Zone America/Argentina/Catamarca -4:23:08 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1991 Mar 3 - -4:00 - -04 1991 Oct 20 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 - -03 2004 Jun 1 - -4:00 - -04 2004 Jun 20 - -3:00 Arg -03/-02 2008 Oct 18 - -3:00 - -03 -# -# Mendoza (MZ) -Zone America/Argentina/Mendoza -4:35:16 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1990 Mar 4 - -4:00 - -04 1990 Oct 15 - -4:00 1:00 -03 1991 Mar 1 - -4:00 - -04 1991 Oct 15 - -4:00 1:00 -03 1992 Mar 1 - -4:00 - -04 1992 Oct 18 - -3:00 Arg -03/-02 1999 Oct 3 - -4:00 Arg -04/-03 2000 Mar 3 - -3:00 - -03 2004 May 23 - -4:00 - -04 2004 Sep 26 - -3:00 Arg -03/-02 2008 Oct 18 - -3:00 - -03 -# -# San Luis (SL) - -Rule SanLuis 2008 2009 - Mar Sun>=8 0:00 0 - -Rule SanLuis 2007 2008 - Oct Sun>=8 0:00 1:00 S - -Zone America/Argentina/San_Luis -4:25:24 - LMT 1894 Oct 31 - -4:16:48 - CMT 1920 May - -4:00 - -04 1930 Dec - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1990 - -3:00 1:00 -02 1990 Mar 14 - -4:00 - -04 1990 Oct 15 - -4:00 1:00 -03 1991 Mar 1 - -4:00 - -04 1991 Jun 1 - -3:00 - -03 1999 Oct 3 - -4:00 1:00 -03 2000 Mar 3 - -3:00 - -03 2004 May 31 - -4:00 - -04 2004 Jul 25 - -3:00 Arg -03/-02 2008 Jan 21 - -4:00 
SanLuis -04/-03 2009 Oct 11
- -3:00 - -03
-#
-# Santa Cruz (SC)
-Zone America/Argentina/Rio_Gallegos -4:36:52 - LMT 1894 Oct 31
- -4:16:48 - CMT 1920 May
- -4:00 - -04 1930 Dec
- -4:00 Arg -04/-03 1969 Oct 5
- -3:00 Arg -03/-02 1999 Oct 3
- -4:00 Arg -04/-03 2000 Mar 3
- -3:00 - -03 2004 Jun 1
- -4:00 - -04 2004 Jun 20
- -3:00 Arg -03/-02 2008 Oct 18
- -3:00 - -03
-#
-# Tierra del Fuego, Antártida e Islas del Atlántico Sur (TF)
-Zone America/Argentina/Ushuaia -4:33:12 - LMT 1894 Oct 31
- -4:16:48 - CMT 1920 May
- -4:00 - -04 1930 Dec
- -4:00 Arg -04/-03 1969 Oct 5
- -3:00 Arg -03/-02 1999 Oct 3
- -4:00 Arg -04/-03 2000 Mar 3
- -3:00 - -03 2004 May 30
- -4:00 - -04 2004 Jun 20
- -3:00 Arg -03/-02 2008 Oct 18
- -3:00 - -03
-
-# Aruba
-Link America/Curacao America/Aruba
-
-# Bolivia
-# Zone NAME GMTOFF RULES FORMAT [UNTIL]
-Zone America/La_Paz -4:32:36 - LMT 1890
- -4:32:36 - CMT 1931 Oct 15 # Calamarca MT
- -4:32:36 1:00 BOST 1932 Mar 21 # Bolivia ST
- -4:00 - -04
-
-# Brazil
-
-# From Paul Eggert (1993-11-18):
-# The mayor of Rio recently attempted to change the time zone rules
-# just in his city, in order to leave more summer time for the tourist trade.
-# The rule change lasted only part of the day;
-# the federal government refused to follow the city's rules, and business
-# was in a chaos, so the mayor backed down that afternoon.
-
-# From IATA SSIM (1996-02):
-# _Only_ the following states in BR1 observe DST: Rio Grande do Sul (RS),
-# Santa Catarina (SC), Paraná (PR), São Paulo (SP), Rio de Janeiro (RJ),
-# Espírito Santo (ES), Minas Gerais (MG), Bahia (BA), Goiás (GO),
-# Distrito Federal (DF), Tocantins (TO), Sergipe [SE] and Alagoas [AL].
-# [The last three states are new to this issue of the IATA SSIM.]
-
-# From Gwillim Law (1996-10-07):
-# Geography, history (Tocantins was part of Goiás until 1989), and other
-# sources of time zone information lead me to believe that AL, SE, and TO were
-# always in BR1, and so the only change was whether or not they observed DST....
-# The earliest issue of the SSIM I have is 2/91. Each issue from then until
-# 9/95 says that DST is observed only in the ten states I quoted from 9/95,
-# along with Mato Grosso (MT) and Mato Grosso do Sul (MS), which are in BR2
-# (UTC-4).... The other two time zones given for Brazil are BR3, which is
-# UTC-5, no DST, and applies only in the state of Acre (AC); and BR4, which is
-# UTC-2, and applies to Fernando de Noronha (formerly FN, but I believe it's
-# become part of the state of Pernambuco). The boundary between BR1 and BR2
-# has never been clearly stated. They've simply been called East and West.
-# However, some conclusions can be drawn from another IATA manual: the Airline
-# Coding Directory, which lists close to 400 airports in Brazil. For each
-# airport it gives a time zone which is coded to the SSIM. From that
-# information, I'm led to conclude that the states of Amapá (AP), Ceará (CE),
-# Maranhão (MA), Paraíba (PB), Pernambuco (PE), Piauí (PI), and Rio Grande do
-# Norte (RN), and the eastern part of Pará (PA) are all in BR1 without DST.
-
-# From Marcos Tadeu (1998-09-27):
-# Brazilian official page
-
-# From Jesper Nørgaard (2000-11-03):
-# [For an official list of which regions in Brazil use which time zones, see:]
-# http://pcdsh01.on.br/Fusbr.htm
-# http://pcdsh01.on.br/Fusbrhv.htm
-
-# From Celso Doria via David Madeo (2002-10-09):
-# The reason for the delay this year has to do with elections in Brazil.
-#
-# Unlike in the United States, elections in Brazil are 100% computerized and
-# the results are known almost immediately. Yesterday, it was the first
-# round of the elections when 115 million Brazilians voted for President,
-# Governor, Senators, Federal Deputies, and State Deputies. Nobody is
-# counting (or re-counting) votes anymore and we know there will be a second
-# round for the Presidency and also for some Governors. The 2nd round will
-# take place on October 27th.
-#
-# The reason why the DST will only begin November 3rd is that the thousands
-# of electoral machines used cannot have their time changed, and since the
-# Constitution says the elections must begin at 8:00 AM and end at 5:00 PM,
-# the Government decided to postpone DST, instead of changing the Constitution
-# (maybe, for the next elections, it will be possible to change the clock)...
-
-# From Rodrigo Severo (2004-10-04):
-# It's just the biennial change made necessary by the much hyped, supposedly
-# modern Brazilian electronic voting machines which, apparently, can't deal
-# with a time change between the first and the second rounds of the elections.
-
-# From Steffen Thorsen (2007-09-20):
-# Brazil will start DST on 2007-10-14 00:00 and end on 2008-02-17 00:00:
-# http://www.mme.gov.br/site/news/detail.do;jsessionid=BBA06811AFCAAC28F0285210913513DA?newsId=13975
-
-# From Paul Schulze (2008-06-24):
-# ...by law number 11.662 of April 24, 2008 (published in the "Diario
-# Oficial da União"...) in Brazil there are changes in the timezones,
-# effective today (00:00am at June 24, 2008) as follows:
-#
-# a) The timezone UTC-5 is extinguished, with all the Acre state and the
-# part of the Amazonas state that had this timezone now being put to the
-# timezone UTC-4
-# b) The whole Pará state now is put at timezone UTC-3, instead of just
-# part of it, as was before.
-#
-# This change follows a proposal of senator Tiao Viana of Acre state, that
-# proposed it due to concerns about open television channels displaying
-# programs inappropriate to youths in the states that had the timezone
-# UTC-5 too early in the night. In the occasion, some more corrections
-# were proposed, trying to unify the timezones of any given state. This
-# change modifies timezone rules defined in decree 2.784 of 18 June,
-# 1913.
-
-# From Rodrigo Severo (2008-06-24):
-# Just correcting the URL:
-# https://www.in.gov.br/imprensa/visualiza/index.jsp?jornal=do&secao=1&pagina=1&data=25/04/2008
-#
-# As a result of the above Decree I believe the America/Rio_Branco
-# timezone shall be modified from UTC-5 to UTC-4 and a new timezone shall
-# be created to represent the...west side of the Pará State. I
-# suggest this new timezone be called Santarem as the most
-# important/populated city in the affected area.
-#
-# This new timezone would be the same as the Rio_Branco timezone up to
-# the 2008/06/24 change which would be to UTC-3 instead of UTC-4.
-
-# From Alex Krivenyshev (2008-06-24):
-# This is a quick reference page for New and Old Brazil Time Zones map.
-# http://www.worldtimezone.com/brazil-time-new-old.php
-#
-# - 4 time zones replaced by 3 time zones - eliminating time zone UTC-05
-# (state Acre and the part of the Amazonas will be UTC/GMT-04) - western
-# part of Pará state is moving to one timezone UTC-03 (from UTC-04).
-
-# From Paul Eggert (2002-10-10):
-# The official decrees referenced below are mostly taken from
-# Decretos sobre o Horário de Verão no Brasil.
-# http://pcdsh01.on.br/DecHV.html - -# From Steffen Thorsen (2008-08-29): -# As announced by the government and many newspapers in Brazil late -# yesterday, Brazil will start DST on 2008-10-19 (need to change rule) and -# it will end on 2009-02-15 (current rule for Brazil is fine). Based on -# past years experience with the elections, there was a good chance that -# the start was postponed to November, but it did not happen this year. -# -# It has not yet been posted to http://pcdsh01.on.br/DecHV.html -# -# An official page about it: -# http://www.mme.gov.br/site/news/detail.do?newsId=16722 -# Note that this link does not always work directly, but must be accessed -# by going to -# http://www.mme.gov.br/first -# -# One example link that works directly: -# http://jornale.com.br/index.php?option=com_content&task=view&id=13530&Itemid=54 -# (Portuguese) -# -# We have a written a short article about it as well: -# https://www.timeanddate.com/news/time/brazil-dst-2008-2009.html -# -# From Alexander Krivenyshev (2011-10-04): -# State Bahia will return to Daylight savings time this year after 8 years off. -# The announcement was made by Governor Jaques Wagner in an interview to a -# television station in Salvador. - -# In Portuguese: -# http://g1.globo.com/bahia/noticia/2011/10/governador-jaques-wagner-confirma-horario-de-verao-na-bahia.html -# https://noticias.terra.com.br/brasil/noticias/0,,OI5390887-EI8139,00-Bahia+volta+a+ter+horario+de+verao+apos+oito+anos.html - -# From Guilherme Bernardes Rodrigues (2011-10-07): -# There is news in the media, however there is still no decree about it. -# I just send a e-mail to Zulmira Brandao at http://pcdsh01.on.br/ the -# official agency about time in Brazil, and she confirmed that the old rule is -# still in force. - -# From Guilherme Bernardes Rodrigues (2011-10-14) -# It's official, the President signed a decree that includes Bahia in summer -# time. -# [ and in a second message (same day): ] -# I found the decree. -# -# DECRETO No. 7.584, DE 13 DE OUTUBRO DE 2011 -# Link : -# http://www.in.gov.br/visualiza/index.jsp?data=13/10/2011&jornal=1000&pagina=6&totalArquivos=6 - -# From Kelley Cook (2012-10-16): -# The governor of state of Bahia in Brazil announced on Thursday that -# due to public pressure, he is reversing the DST policy they implemented -# last year and will not be going to Summer Time on October 21st.... -# http://www.correio24horas.com.br/r/artigo/apos-pressoes-wagner-suspende-horario-de-verao-na-bahia - -# From Rodrigo Severo (2012-10-16): -# Tocantins state will have DST. -# https://noticias.terra.com.br/brasil/noticias/0,,OI6232536-EI306.html - -# From Steffen Thorsen (2013-09-20): -# Tocantins in Brazil is very likely not to observe DST from October.... -# http://conexaoto.com.br/2013/09/18/ministerio-confirma-que-tocantins-esta-fora-do-horario-de-verao-em-2013-mas-falta-publicacao-de-decreto -# We will keep this article updated when this is confirmed: -# https://www.timeanddate.com/news/time/brazil-starts-dst-2013.html - -# From Steffen Thorsen (2013-10-17): -# https://www.timeanddate.com/news/time/acre-amazonas-change-time-zone.html -# Senator Jorge Viana announced that Acre will change time zone on November 10. -# He did not specify the time of the change, nor if western parts of Amazonas -# will change as well. -# -# From Paul Eggert (2013-10-17): -# For now, assume western Amazonas will change as well. 
- -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -# Decree 20,466 (1931-10-01) -# Decree 21,896 (1932-01-10) -Rule Brazil 1931 only - Oct 3 11:00 1:00 S -Rule Brazil 1932 1933 - Apr 1 0:00 0 - -Rule Brazil 1932 only - Oct 3 0:00 1:00 S -# Decree 23,195 (1933-10-10) -# revoked DST. -# Decree 27,496 (1949-11-24) -# Decree 27,998 (1950-04-13) -Rule Brazil 1949 1952 - Dec 1 0:00 1:00 S -Rule Brazil 1950 only - Apr 16 1:00 0 - -Rule Brazil 1951 1952 - Apr 1 0:00 0 - -# Decree 32,308 (1953-02-24) -Rule Brazil 1953 only - Mar 1 0:00 0 - -# Decree 34,724 (1953-11-30) -# revoked DST. -# Decree 52,700 (1963-10-18) -# established DST from 1963-10-23 00:00 to 1964-02-29 00:00 -# in SP, RJ, GB, MG, ES, due to the prolongation of the drought. -# Decree 53,071 (1963-12-03) -# extended the above decree to all of the national territory on 12-09. -Rule Brazil 1963 only - Dec 9 0:00 1:00 S -# Decree 53,604 (1964-02-25) -# extended summer time by one day to 1964-03-01 00:00 (start of school). -Rule Brazil 1964 only - Mar 1 0:00 0 - -# Decree 55,639 (1965-01-27) -Rule Brazil 1965 only - Jan 31 0:00 1:00 S -Rule Brazil 1965 only - Mar 31 0:00 0 - -# Decree 57,303 (1965-11-22) -Rule Brazil 1965 only - Dec 1 0:00 1:00 S -# Decree 57,843 (1966-02-18) -Rule Brazil 1966 1968 - Mar 1 0:00 0 - -Rule Brazil 1966 1967 - Nov 1 0:00 1:00 S -# Decree 63,429 (1968-10-15) -# revoked DST. -# Decree 91,698 (1985-09-27) -Rule Brazil 1985 only - Nov 2 0:00 1:00 S -# Decree 92,310 (1986-01-21) -# Decree 92,463 (1986-03-13) -Rule Brazil 1986 only - Mar 15 0:00 0 - -# Decree 93,316 (1986-10-01) -Rule Brazil 1986 only - Oct 25 0:00 1:00 S -Rule Brazil 1987 only - Feb 14 0:00 0 - -# Decree 94,922 (1987-09-22) -Rule Brazil 1987 only - Oct 25 0:00 1:00 S -Rule Brazil 1988 only - Feb 7 0:00 0 - -# Decree 96,676 (1988-09-12) -# except for the states of AC, AM, PA, RR, RO, and AP (then a territory) -Rule Brazil 1988 only - Oct 16 0:00 1:00 S -Rule Brazil 1989 only - Jan 29 0:00 0 - -# Decree 98,077 (1989-08-21) -# with the same exceptions -Rule Brazil 1989 only - Oct 15 0:00 1:00 S -Rule Brazil 1990 only - Feb 11 0:00 0 - -# Decree 99,530 (1990-09-17) -# adopted by RS, SC, PR, SP, RJ, ES, MG, GO, MS, DF. -# Decree 99,629 (1990-10-19) adds BA, MT. -Rule Brazil 1990 only - Oct 21 0:00 1:00 S -Rule Brazil 1991 only - Feb 17 0:00 0 - -# Unnumbered decree (1991-09-25) -# adopted by RS, SC, PR, SP, RJ, ES, MG, BA, GO, MT, MS, DF. -Rule Brazil 1991 only - Oct 20 0:00 1:00 S -Rule Brazil 1992 only - Feb 9 0:00 0 - -# Unnumbered decree (1992-10-16) -# adopted by same states. -Rule Brazil 1992 only - Oct 25 0:00 1:00 S -Rule Brazil 1993 only - Jan 31 0:00 0 - -# Decree 942 (1993-09-28) -# adopted by same states, plus AM. -# Decree 1,252 (1994-09-22; -# web page corrected 2004-01-07) adopted by same states, minus AM. -# Decree 1,636 (1995-09-14) -# adopted by same states, plus MT and TO. -# Decree 1,674 (1995-10-13) -# adds AL, SE. -Rule Brazil 1993 1995 - Oct Sun>=11 0:00 1:00 S -Rule Brazil 1994 1995 - Feb Sun>=15 0:00 0 - -Rule Brazil 1996 only - Feb 11 0:00 0 - -# Decree 2,000 (1996-09-04) -# adopted by same states, minus AL, SE. -Rule Brazil 1996 only - Oct 6 0:00 1:00 S -Rule Brazil 1997 only - Feb 16 0:00 0 - -# From Daniel C. Sobral (1998-02-12): -# In 1997, the DS began on October 6. The stated reason was that -# because international television networks ignored Brazil's policy on DS, -# they bought the wrong times on satellite for coverage of Pope's visit. 
-# This year, the ending date of DS was postponed to March 1 -# to help dealing with the shortages of electric power. -# -# Decree 2,317 (1997-09-04), adopted by same states. -Rule Brazil 1997 only - Oct 6 0:00 1:00 S -# Decree 2,495 -# (1998-02-10) -Rule Brazil 1998 only - Mar 1 0:00 0 - -# Decree 2,780 (1998-09-11) -# adopted by the same states as before. -Rule Brazil 1998 only - Oct 11 0:00 1:00 S -Rule Brazil 1999 only - Feb 21 0:00 0 - -# Decree 3,150 -# (1999-08-23) adopted by same states. -# Decree 3,188 (1999-09-30) -# adds SE, AL, PB, PE, RN, CE, PI, MA and RR. -Rule Brazil 1999 only - Oct 3 0:00 1:00 S -Rule Brazil 2000 only - Feb 27 0:00 0 - -# Decree 3,592 (2000-09-06) -# adopted by the same states as before. -# Decree 3,630 (2000-10-13) -# repeals DST in PE and RR, effective 2000-10-15 00:00. -# Decree 3,632 (2000-10-17) -# repeals DST in SE, AL, PB, RN, CE, PI and MA, effective 2000-10-22 00:00. -# Decree 3,916 -# (2001-09-13) reestablishes DST in AL, CE, MA, PB, PE, PI, RN, SE. -Rule Brazil 2000 2001 - Oct Sun>=8 0:00 1:00 S -Rule Brazil 2001 2006 - Feb Sun>=15 0:00 0 - -# Decree 4,399 (2002-10-01) repeals DST in AL, CE, MA, PB, PE, PI, RN, SE. -# 4,399 -Rule Brazil 2002 only - Nov 3 0:00 1:00 S -# Decree 4,844 (2003-09-24; corrected 2003-09-26) repeals DST in BA, MT, TO. -# 4,844 -Rule Brazil 2003 only - Oct 19 0:00 1:00 S -# Decree 5,223 (2004-10-01) reestablishes DST in MT. -# 5,223 -Rule Brazil 2004 only - Nov 2 0:00 1:00 S -# Decree 5,539 (2005-09-19), -# adopted by the same states as before. -Rule Brazil 2005 only - Oct 16 0:00 1:00 S -# Decree 5,920 (2006-10-03), -# adopted by the same states as before. -Rule Brazil 2006 only - Nov 5 0:00 1:00 S -Rule Brazil 2007 only - Feb 25 0:00 0 - -# Decree 6,212 (2007-09-26), -# adopted by the same states as before. -Rule Brazil 2007 only - Oct Sun>=8 0:00 1:00 S -# From Frederico A. C. Neves (2008-09-10): -# According to this decree -# http://www.planalto.gov.br/ccivil_03/_Ato2007-2010/2008/Decreto/D6558.htm -# [t]he DST period in Brazil now on will be from the 3rd Oct Sunday to the -# 3rd Feb Sunday. There is an exception on the return date when this is -# the Carnival Sunday then the return date will be the next Sunday... -Rule Brazil 2008 max - Oct Sun>=15 0:00 1:00 S -Rule Brazil 2008 2011 - Feb Sun>=15 0:00 0 - -Rule Brazil 2012 only - Feb Sun>=22 0:00 0 - -Rule Brazil 2013 2014 - Feb Sun>=15 0:00 0 - -Rule Brazil 2015 only - Feb Sun>=22 0:00 0 - -Rule Brazil 2016 2022 - Feb Sun>=15 0:00 0 - -Rule Brazil 2023 only - Feb Sun>=22 0:00 0 - -Rule Brazil 2024 2025 - Feb Sun>=15 0:00 0 - -Rule Brazil 2026 only - Feb Sun>=22 0:00 0 - -Rule Brazil 2027 2033 - Feb Sun>=15 0:00 0 - -Rule Brazil 2034 only - Feb Sun>=22 0:00 0 - -Rule Brazil 2035 2036 - Feb Sun>=15 0:00 0 - -Rule Brazil 2037 only - Feb Sun>=22 0:00 0 - -# From Arthur David Olson (2008-09-29): -# The next is wrong in some years but is better than nothing. -Rule Brazil 2038 max - Feb Sun>=15 0:00 0 - - -# The latest ruleset listed above says that the following states observe DST: -# DF, ES, GO, MG, MS, MT, PR, RJ, RS, SC, SP. - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -# -# Fernando de Noronha (administratively part of PE) -Zone America/Noronha -2:09:40 - LMT 1914 - -2:00 Brazil -02/-01 1990 Sep 17 - -2:00 - -02 1999 Sep 30 - -2:00 Brazil -02/-01 2000 Oct 15 - -2:00 - -02 2001 Sep 13 - -2:00 Brazil -02/-01 2002 Oct 1 - -2:00 - -02 -# Other Atlantic islands have no permanent settlement. 
-# These include Trindade and Martim Vaz (administratively part of ES), -# Rocas Atoll (RN), and the St Peter and St Paul Archipelago (PE). -# Fernando de Noronha was a separate territory from 1942-09-02 to 1989-01-01; -# it also included the Penedos. -# -# Amapá (AP), east Pará (PA) -# East Pará includes Belém, Marabá, Serra Norte, and São Félix do Xingu. -# The division between east and west Pará is the river Xingu. -# In the north a very small part from the river Javary (now Jari I guess, -# the border with Amapá) to the Amazon, then to the Xingu. -Zone America/Belem -3:13:56 - LMT 1914 - -3:00 Brazil -03/-02 1988 Sep 12 - -3:00 - -03 -# -# west Pará (PA) -# West Pará includes Altamira, Óbidos, Prainha, Oriximiná, and Santarém. -Zone America/Santarem -3:38:48 - LMT 1914 - -4:00 Brazil -04/-03 1988 Sep 12 - -4:00 - -04 2008 Jun 24 0:00 - -3:00 - -03 -# -# Maranhão (MA), Piauí (PI), Ceará (CE), Rio Grande do Norte (RN), -# Paraíba (PB) -Zone America/Fortaleza -2:34:00 - LMT 1914 - -3:00 Brazil -03/-02 1990 Sep 17 - -3:00 - -03 1999 Sep 30 - -3:00 Brazil -03/-02 2000 Oct 22 - -3:00 - -03 2001 Sep 13 - -3:00 Brazil -03/-02 2002 Oct 1 - -3:00 - -03 -# -# Pernambuco (PE) (except Atlantic islands) -Zone America/Recife -2:19:36 - LMT 1914 - -3:00 Brazil -03/-02 1990 Sep 17 - -3:00 - -03 1999 Sep 30 - -3:00 Brazil -03/-02 2000 Oct 15 - -3:00 - -03 2001 Sep 13 - -3:00 Brazil -03/-02 2002 Oct 1 - -3:00 - -03 -# -# Tocantins (TO) -Zone America/Araguaina -3:12:48 - LMT 1914 - -3:00 Brazil -03/-02 1990 Sep 17 - -3:00 - -03 1995 Sep 14 - -3:00 Brazil -03/-02 2003 Sep 24 - -3:00 - -03 2012 Oct 21 - -3:00 Brazil -03/-02 2013 Sep - -3:00 - -03 -# -# Alagoas (AL), Sergipe (SE) -Zone America/Maceio -2:22:52 - LMT 1914 - -3:00 Brazil -03/-02 1990 Sep 17 - -3:00 - -03 1995 Oct 13 - -3:00 Brazil -03/-02 1996 Sep 4 - -3:00 - -03 1999 Sep 30 - -3:00 Brazil -03/-02 2000 Oct 22 - -3:00 - -03 2001 Sep 13 - -3:00 Brazil -03/-02 2002 Oct 1 - -3:00 - -03 -# -# Bahia (BA) -# There are too many Salvadors elsewhere, so use America/Bahia instead -# of America/Salvador. -Zone America/Bahia -2:34:04 - LMT 1914 - -3:00 Brazil -03/-02 2003 Sep 24 - -3:00 - -03 2011 Oct 16 - -3:00 Brazil -03/-02 2012 Oct 21 - -3:00 - -03 -# -# Goiás (GO), Distrito Federal (DF), Minas Gerais (MG), -# Espírito Santo (ES), Rio de Janeiro (RJ), São Paulo (SP), Paraná (PR), -# Santa Catarina (SC), Rio Grande do Sul (RS) -Zone America/Sao_Paulo -3:06:28 - LMT 1914 - -3:00 Brazil -03/-02 1963 Oct 23 0:00 - -3:00 1:00 -02 1964 - -3:00 Brazil -03/-02 -# -# Mato Grosso do Sul (MS) -Zone America/Campo_Grande -3:38:28 - LMT 1914 - -4:00 Brazil -04/-03 -# -# Mato Grosso (MT) -Zone America/Cuiaba -3:44:20 - LMT 1914 - -4:00 Brazil -04/-03 2003 Sep 24 - -4:00 - -04 2004 Oct 1 - -4:00 Brazil -04/-03 -# -# Rondônia (RO) -Zone America/Porto_Velho -4:15:36 - LMT 1914 - -4:00 Brazil -04/-03 1988 Sep 12 - -4:00 - -04 -# -# Roraima (RR) -Zone America/Boa_Vista -4:02:40 - LMT 1914 - -4:00 Brazil -04/-03 1988 Sep 12 - -4:00 - -04 1999 Sep 30 - -4:00 Brazil -04/-03 2000 Oct 15 - -4:00 - -04 -# -# east Amazonas (AM): Boca do Acre, Jutaí, Manaus, Floriano Peixoto -# The great circle line from Tabatinga to Porto Acre divides -# east from west Amazonas. 
-Zone America/Manaus -4:00:04 - LMT 1914 - -4:00 Brazil -04/-03 1988 Sep 12 - -4:00 - -04 1993 Sep 28 - -4:00 Brazil -04/-03 1994 Sep 22 - -4:00 - -04 -# -# west Amazonas (AM): Atalaia do Norte, Boca do Maoco, Benjamin Constant, -# Eirunepé, Envira, Ipixuna -Zone America/Eirunepe -4:39:28 - LMT 1914 - -5:00 Brazil -05/-04 1988 Sep 12 - -5:00 - -05 1993 Sep 28 - -5:00 Brazil -05/-04 1994 Sep 22 - -5:00 - -05 2008 Jun 24 0:00 - -4:00 - -04 2013 Nov 10 - -5:00 - -05 -# -# Acre (AC) -Zone America/Rio_Branco -4:31:12 - LMT 1914 - -5:00 Brazil -05/-04 1988 Sep 12 - -5:00 - -05 2008 Jun 24 0:00 - -4:00 - -04 2013 Nov 10 - -5:00 - -05 - -# Chile - -# From Paul Eggert (2015-04-03): -# Shanks & Pottenger says America/Santiago introduced standard time in -# 1890 and rounds its UTC offset to 70W40; guess that in practice this -# was the same offset as in 1916-1919. It also says Pacific/Easter -# standardized on 109W22 in 1890; assume this didn't change the clocks. -# -# Dates for America/Santiago from 1910 to 2004 are primarily from -# the following source, cited by Oscar van Vlijmen (2006-10-08): -# [1] Chile Law -# http://www.webexhibits.org/daylightsaving/chile.html -# This contains a copy of this official table: -# Cambios en la hora oficial de Chile desde 1900 (retrieved 2008-03-30) -# https://web.archive.org/web/20080330200901/http://www.horaoficial.cl/cambio.htm -# [1] needs several corrections, though. -# -# The first set of corrections is from: -# [2] History of the Official Time of Chile -# http://www.horaoficial.cl/ing/horaof_ing.html (retrieved 2012-03-06). See: -# https://web.archive.org/web/20120306042032/http://www.horaoficial.cl/ing/horaof_ing.html -# This is an English translation of: -# Historia de la hora oficial de Chile (retrieved 2012-10-24). See: -# https://web.archive.org/web/20121024234627/http://www.horaoficial.cl/horaof.htm -# A fancier Spanish version (requiring mouse-clicking) is at: -# http://www.horaoficial.cl/historia_hora.html -# Conflicts between [1] and [2] were resolved as follows: -# -# - [1] says the 1910 transition was Jan 1, [2] says Jan 10 and cites -# Boletín No. 1, Aviso No. 1 (1910). Go with [2]. -# -# - [1] says SMT was -4:42:45, [2] says Chile's official time from -# 1916 to 1919 was -4:42:46.3, the meridian of Chile's National -# Astronomical Observatory (OAN), then located in what is now -# Quinta Normal in Santiago. Go with [2], rounding it to -4:42:46. -# -# - [1] says the 1918 transition was Sep 1, [2] says Sep 10 and cites -# Boletín No. 22, Aviso No. 129/1918 (1918-08-23). Go with [2]. -# -# - [1] does not give times for transitions; assume they occur -# at midnight mainland time, the current common practice. However, -# go with [2]'s specification of 23:00 for the 1947-05-21 transition. -# -# Another correction to [1] is from Jesper Nørgaard Welen, who -# wrote (2006-10-08), "I think that there are some obvious mistakes in -# the suggested link from Oscar van Vlijmen,... for instance entry 66 -# says that GMT-4 ended 1990-09-12 while entry 67 only begins GMT-3 at -# 1990-09-15 (they should have been 1990-09-15 and 1990-09-16 -# respectively), but anyhow it clears up some doubts too." -# -# Data for Pacific/Easter from 1910 through 1967 come from Shanks & -# Pottenger. After that, for lack of better info assume -# Pacific/Easter is always two hours behind America/Santiago; -# this is known to work for DST transitions starting in 2008 and -# may well be true for earlier transitions. 
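-#
-# (A worked illustration, not from the sources above: the lockstep falls
-# out of the Chile rules below, whose AT times carry the "u" (Universal
-# Time) suffix and so take effect at the same instant in both zones.
-# At the 2008-03-30 03:00u transition, for example, Santiago moved from
-# -03 to -04 while Easter moved from -05 to -06, keeping Easter exactly
-# two hours behind the mainland.)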
- -# From Eduardo Krell (1995-10-19): -# The law says to switch to DST at midnight [24:00] on the second SATURDAY -# of October.... The law is the same for March and October. -# (1998-09-29): -# Because of the drought this year, the government decided to go into -# DST earlier (saturday 9/26 at 24:00). This is a one-time change only ... -# (unless there's another dry season next year, I guess). - -# From Julio I. Pacheco Troncoso (1999-03-18): -# Because of the same drought, the government decided to end DST later, -# on April 3, (one-time change). - -# From Germán Poo-Caamaño (2008-03-03): -# Due to drought, Chile extends Daylight Time in three weeks. This -# is one-time change (Saturday 3/29 at 24:00 for America/Santiago -# and Saturday 3/29 at 22:00 for Pacific/Easter) -# The Supreme Decree is located at -# http://www.shoa.cl/servicios/supremo316.pdf -# -# From José Miguel Garrido (2008-03-05): -# http://www.shoa.cl/noticias/2008/04hora/hora.htm - -# From Angel Chiang (2010-03-04): -# Subject: DST in Chile exceptionally extended to 3 April due to earthquake -# http://www.gobiernodechile.cl/viewNoticia.aspx?idArticulo=30098 -# -# From Arthur David Olson (2010-03-06): -# Angel Chiang's message confirmed by Julio Pacheco; Julio provided a patch. - -# From Glenn Eychaner (2011-03-28): -# http://diario.elmercurio.com/2011/03/28/_portada/_portada/noticias/7565897A-CA86-49E6-9E03-660B21A4883E.htm?id=3D{7565897A-CA86-49E6-9E03-660B21A4883E} -# In English: -# Chile's clocks will go back an hour this year on the 7th of May instead -# of this Saturday. They will go forward again the 3rd Saturday in -# August, not in October as they have since 1968. - -# From Mauricio Parada (2012-02-22), translated by Glenn Eychaner (2012-02-23): -# As stated in the website of the Chilean Energy Ministry -# http://www.minenergia.cl/ministerio/noticias/generales/gobierno-anuncia-fechas-de-cambio-de.html -# The Chilean Government has decided to postpone the entrance into winter time -# (to leave DST) from March 11 2012 to April 28th 2012.... -# Quote from the website communication: -# -# 6. For the year 2012, the dates of entry into winter time will be as follows: -# a. Saturday April 28, 2012, clocks should go back 60 minutes; that is, at -# 23:59:59, instead of passing to 0:00, the time should be adjusted to be 23:00 -# of the same day. -# b. Saturday, September 1, 2012, clocks should go forward 60 minutes; that is, -# at 23:59:59, instead of passing to 0:00, the time should be adjusted to be -# 01:00 on September 2. - -# From Steffen Thorsen (2013-02-15): -# According to several news sources, Chile has extended DST this year, -# they will end DST later and start DST earlier than planned. They -# hope to save energy. The new end date is 2013-04-28 00:00 and new -# start date is 2013-09-08 00:00.... -# http://www.gob.cl/informa/2013/02/15/gobierno-anuncia-fechas-de-cambio-de-hora-para-el-ano-2013.htm - -# From José Miguel Garrido (2014-02-19): -# Today appeared in the Diario Oficial a decree amending the time change -# dates to 2014. 
-# DST End: last Saturday of April 2014 (Sun 27 Apr 2014 03:00 UTC) -# DST Start: first Saturday of September 2014 (Sun 07 Sep 2014 04:00 UTC) -# http://www.diariooficial.interior.gob.cl//media/2014/02/19/do-20140219.pdf - -# From Eduardo Romero Urra (2015-03-03): -# Today has been published officially that Chile will use the DST time -# permanently until March 25 of 2017 -# http://www.diariooficial.interior.gob.cl/media/2015/03/03/1-large.jpg -# -# From Paul Eggert (2015-03-03): -# For now, assume that the extension will persist indefinitely. - -# From Juan Correa (2016-03-18): -# The decree regarding DST has been published in today's Official Gazette: -# http://www.diariooficial.interior.gob.cl/versiones-anteriores/do/20160318/ -# http://www.leychile.cl/Navegar?idNorma=1088502 -# It does consider the second Saturday of May and August as the dates -# for the transition; and it lists DST dates until 2019, but I think -# this scheme will stick. -# -# From Paul Eggert (2016-03-18): -# For now, assume the pattern holds for the indefinite future. -# The decree says transitions occur at 24:00; in practice this appears -# to mean 24:00 mainland time, not 24:00 local time, so that Easter -# Island is always two hours behind the mainland. - -# From Juan Correa (2016-12-04): -# Magallanes region ... will keep DST (UTC -3) all year round.... -# http://www.soychile.cl/Santiago/Sociedad/2016/12/04/433428/Bachelet-firmo-el-decreto-para-establecer-un-horario-unico-para-la-Region-de-Magallanes.aspx -# -# From Deborah Goldsmith (2017-01-19): -# http://www.diariooficial.interior.gob.cl/publicaciones/2017/01/17/41660/01/1169626.pdf -# From Paul Eggert (2017-01-19): -# The above says the Magallanes change expires 2019-05-11 at 24:00, -# so in theory, they will revert to -04/-03 after that, which means -# they will switch from -03 to -04 one hour after Santiago does that day. -# For now, assume that they will not revert. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Chile 1927 1931 - Sep 1 0:00 1:00 S -Rule Chile 1928 1932 - Apr 1 0:00 0 - -Rule Chile 1968 only - Nov 3 4:00u 1:00 S -Rule Chile 1969 only - Mar 30 3:00u 0 - -Rule Chile 1969 only - Nov 23 4:00u 1:00 S -Rule Chile 1970 only - Mar 29 3:00u 0 - -Rule Chile 1971 only - Mar 14 3:00u 0 - -Rule Chile 1970 1972 - Oct Sun>=9 4:00u 1:00 S -Rule Chile 1972 1986 - Mar Sun>=9 3:00u 0 - -Rule Chile 1973 only - Sep 30 4:00u 1:00 S -Rule Chile 1974 1987 - Oct Sun>=9 4:00u 1:00 S -Rule Chile 1987 only - Apr 12 3:00u 0 - -Rule Chile 1988 1990 - Mar Sun>=9 3:00u 0 - -Rule Chile 1988 1989 - Oct Sun>=9 4:00u 1:00 S -Rule Chile 1990 only - Sep 16 4:00u 1:00 S -Rule Chile 1991 1996 - Mar Sun>=9 3:00u 0 - -Rule Chile 1991 1997 - Oct Sun>=9 4:00u 1:00 S -Rule Chile 1997 only - Mar 30 3:00u 0 - -Rule Chile 1998 only - Mar Sun>=9 3:00u 0 - -Rule Chile 1998 only - Sep 27 4:00u 1:00 S -Rule Chile 1999 only - Apr 4 3:00u 0 - -Rule Chile 1999 2010 - Oct Sun>=9 4:00u 1:00 S -Rule Chile 2000 2007 - Mar Sun>=9 3:00u 0 - -# N.B.: the end of March 29 in Chile is March 30 in Universal time, -# which is used below in specifying the transition. 
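-# (Spelling that out as a reading of the rule below: the "u" suffix in
-# the AT column marks the time as Universal Time, so "Mar 30 3:00u"
-# means 2008-03-30 03:00 UT. Mainland clocks were then on -03, so they
-# read 00:00 on March 30, i.e. the very end of March 29 local time.)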
-Rule Chile 2008 only - Mar 30 3:00u 0 - -Rule Chile 2009 only - Mar Sun>=9 3:00u 0 - -Rule Chile 2010 only - Apr Sun>=1 3:00u 0 - -Rule Chile 2011 only - May Sun>=2 3:00u 0 - -Rule Chile 2011 only - Aug Sun>=16 4:00u 1:00 S -Rule Chile 2012 2014 - Apr Sun>=23 3:00u 0 - -Rule Chile 2012 2014 - Sep Sun>=2 4:00u 1:00 S -Rule Chile 2016 max - May Sun>=9 3:00u 0 - -Rule Chile 2016 max - Aug Sun>=9 4:00u 1:00 S -# IATA SSIM anomalies: (1992-02) says 1992-03-14; -# (1996-09) says 1998-03-08. Ignore these. -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Santiago -4:42:46 - LMT 1890 - -4:42:46 - SMT 1910 Jan 10 # Santiago Mean Time - -5:00 - -05 1916 Jul 1 - -4:42:46 - SMT 1918 Sep 10 - -4:00 - -04 1919 Jul 1 - -4:42:46 - SMT 1927 Sep 1 - -5:00 Chile -05/-04 1932 Sep 1 - -4:00 - -04 1942 Jun 1 - -5:00 - -05 1942 Aug 1 - -4:00 - -04 1946 Jul 15 - -4:00 1:00 -03 1946 Sep 1 # central Chile - -4:00 - -04 1947 Apr 1 - -5:00 - -05 1947 May 21 23:00 - -4:00 Chile -04/-03 -Zone America/Punta_Arenas -4:43:40 - LMT 1890 - -4:42:46 - SMT 1910 Jan 10 - -5:00 - -05 1916 Jul 1 - -4:42:46 - SMT 1918 Sep 10 - -4:00 - -04 1919 Jul 1 - -4:42:46 - SMT 1927 Sep 1 - -5:00 Chile -05/-04 1932 Sep 1 - -4:00 - -04 1942 Jun 1 - -5:00 - -05 1942 Aug 1 - -4:00 - -04 1947 Apr 1 - -5:00 - -05 1947 May 21 23:00 - -4:00 Chile -04/-03 2016 Dec 4 - -3:00 - -03 -Zone Pacific/Easter -7:17:28 - LMT 1890 - -7:17:28 - EMT 1932 Sep # Easter Mean Time - -7:00 Chile -07/-06 1982 Mar 14 3:00u # Easter Time - -6:00 Chile -06/-05 -# -# Salas y Gómez Island is uninhabited. -# Other Chilean locations, including Juan Fernández Is, Desventuradas Is, -# and Antarctic bases, are like America/Santiago. - -# Antarctic base using South American rules -# (See the file 'antarctica' for more.) -# -# Palmer, Anvers Island, since 1965 (moved 2 miles in 1968) -# -# From Ethan Dicks (1996-10-06): -# It keeps the same time as Punta Arenas, Chile, because, just like us -# and the South Pole, that's the other end of their supply line.... -# I verified with someone who was there that since 1980, -# Palmer has followed Chile. Prior to that, before the Falklands War, -# Palmer used to be supplied from Argentina. -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Antarctica/Palmer 0 - -00 1965 - -4:00 Arg -04/-03 1969 Oct 5 - -3:00 Arg -03/-02 1982 May - -4:00 Chile -04/-03 2016 Dec 4 - -3:00 - -03 - -# Colombia - -# Milne gives 4:56:16.4 for Bogotá time in 1899; round to nearest. He writes, -# "A variation of fifteen minutes in the public clocks of Bogota is not rare." - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule CO 1992 only - May 3 0:00 1:00 S -Rule CO 1993 only - Apr 4 0:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Bogota -4:56:16 - LMT 1884 Mar 13 - -4:56:16 - BMT 1914 Nov 23 # Bogotá Mean Time - -5:00 CO -05/-04 -# Malpelo, Providencia, San Andres -# no information; probably like America/Bogota - -# Curaçao - -# Milne gives 4:35:46.9 for Curaçao mean time; round to nearest. -# -# From Paul Eggert (2006-03-22): -# Shanks & Pottenger say that The Bottom and Philipsburg have been at -# -4:00 since standard time was introduced on 1912-03-02; and that -# Kralendijk and Rincon used Kralendijk Mean Time (-4:33:08) from -# 1912-02-02 to 1965-01-01. The former is dubious, since S&P also say -# Saba Island has been like Curaçao. -# This all predates our 1970 cutoff, though. 
-# -# By July 2007 Curaçao and St Maarten are planned to become -# associated states within the Netherlands, much like Aruba; -# Bonaire, Saba and St Eustatius would become directly part of the -# Netherlands as Kingdom Islands. This won't affect their time zones -# though, as far as we know. -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Curacao -4:35:47 - LMT 1912 Feb 12 # Willemstad - -4:30 - -0430 1965 - -4:00 - AST - -# From Arthur David Olson (2011-06-15): -# use links for places with new iso3166 codes. -# The name "Lower Prince's Quarter" is both longer than fourteen characters -# and contains an apostrophe; use "Lower_Princes" below. - -Link America/Curacao America/Lower_Princes # Sint Maarten -Link America/Curacao America/Kralendijk # Caribbean Netherlands - -# Ecuador -# -# Milne says the Central and South American Telegraph Company used -5:24:15. -# -# From Alois Treindl (2016-12-15): -# https://www.elcomercio.com/actualidad/hora-sixto-1993.html -# ... Whether the law applied also to Galápagos, I do not know. -# From Paul Eggert (2016-12-15): -# https://www.elcomercio.com/afull/modificacion-husohorario-ecuador-presidentes-decreto.html -# This says President Sixto Durán Ballén signed decree No. 285, which -# established DST from 1992-11-28 to 1993-02-05; it does not give transition -# times. The people called it "hora de Sixto" ("Sixto hour"). The change did -# not go over well; a popular song "Qué hora es" by Jaime Guevara had lyrics -# that included "Amanecía en mitad de la noche, los guaguas iban a clase sin -# sol" ("It was dawning in the middle of the night, the buses went to class -# without sun"). Although Ballén's campaign slogan was "Ni un paso atrás" -# (Not one step back), the clocks went back in 1993 and the experiment was not -# repeated. For now, assume transitions were at 00:00 local time country-wide. -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Ecuador 1992 only - Nov 28 0:00 1:00 S -Rule Ecuador 1993 only - Feb 5 0:00 0 - -# -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Guayaquil -5:19:20 - LMT 1890 - -5:14:00 - QMT 1931 # Quito Mean Time - -5:00 Ecuador -05/-04 -Zone Pacific/Galapagos -5:58:24 - LMT 1931 # Puerto Baquerizo Moreno - -5:00 - -05 1986 - -6:00 Ecuador -06/-05 - -# Falklands - -# From Paul Eggert (2006-03-22): -# Between 1990 and 2000 inclusive, Shanks & Pottenger and the IATA agree except -# the IATA gives 1996-09-08. Go with Shanks & Pottenger. - -# From Falkland Islands Government Office, London (2001-01-22) -# via Jesper Nørgaard: -# ... the clocks revert back to Local Mean Time at 2 am on Sunday 15 -# April 2001 and advance one hour to summer time at 2 am on Sunday 2 -# September. It is anticipated that the clocks will revert back at 2 -# am on Sunday 21 April 2002 and advance to summer time at 2 am on -# Sunday 1 September. - -# From Rives McDow (2001-02-13): -# -# I have communicated several times with people there, and the last -# time I had communications that was helpful was in 1998. Here is -# what was said then: -# -# "The general rule was that Stanley used daylight saving and the Camp -# did not. However for various reasons many people in the Camp have -# started to use daylight saving (known locally as 'Stanley Time') -# There is no rule as to who uses daylight saving - it is a matter of -# personal choice and so it is impossible to draw a map showing who -# uses it and who does not. Any list would be out of date as soon as -# it was produced. 
This year daylight saving ended on April 18/19th -# and started again on September 12/13th. I do not know what the rule -# is, but can find out if you like. We do not change at the same time -# as UK or Chile." -# -# I did have in my notes that the rule was "Second Saturday in Sep at -# 0:00 until third Saturday in Apr at 0:00". I think that this does -# not agree in some cases with Shanks; is this true? -# -# Also, there is no mention in the list that some areas in the -# Falklands do not use DST. I have found in my communications there -# that these areas are on the western half of East Falkland and all of -# West Falkland. Stanley is the only place that consistently observes -# DST. Again, as in other places in the world, the farmers don't like -# it. West Falkland is almost entirely sheep farmers. -# -# I know one lady there that keeps a list of which farm keeps DST and -# which doesn't each year. She runs a shop in Stanley, and says that -# the list changes each year. She uses it to communicate to her -# customers, catching them when they are home for lunch or dinner. - -# From Paul Eggert (2001-03-05): -# For now, we'll just record the time in Stanley, since we have no -# better info. - -# From Steffen Thorsen (2011-04-01): -# The Falkland Islands will not turn back clocks this winter, but stay on -# daylight saving time. -# -# One source: -# http://www.falklandnews.com/public/story.cfm?get=5914&source=3 -# -# We have gotten this confirmed by a clerk of the legislative assembly: -# Normally the clocks revert to Local Mean Time (UTC/GMT -4 hours) on the -# third Sunday of April at 0200hrs and advance to Summer Time (UTC/GMT -3 -# hours) on the first Sunday of September at 0200hrs. -# -# IMPORTANT NOTE: During 2011, on a trial basis, the Falkland Islands -# will not revert to local mean time, but clocks will remain on Summer -# time (UTC/GMT - 3 hours) throughout the whole of 2011. Any long term -# change to local time following the trial period will be notified. -# -# From Andrew Newman (2012-02-24) -# A letter from Justin McPhee, Chief Executive, -# Cable & Wireless Falkland Islands (dated 2012-02-22) -# states... -# The current Atlantic/Stanley entry under South America expects the -# clocks to go back to standard Falklands Time (FKT) on the 15th April. -# The database entry states that in 2011 Stanley was staying on fixed -# summer time on a trial basis only. FIG need to contact IANA and/or -# the maintainers of the database to inform them we're adopting -# the same policy this year and suggest recommendations for future years. -# -# For now we will assume permanent summer time for the Falklands -# until advised differently (to apply for 2012 and beyond, after the 2011 -# experiment was apparently successful.) 
-# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Falk 1937 1938 - Sep lastSun 0:00 1:00 S -Rule Falk 1938 1942 - Mar Sun>=19 0:00 0 - -Rule Falk 1939 only - Oct 1 0:00 1:00 S -Rule Falk 1940 1942 - Sep lastSun 0:00 1:00 S -Rule Falk 1943 only - Jan 1 0:00 0 - -Rule Falk 1983 only - Sep lastSun 0:00 1:00 S -Rule Falk 1984 1985 - Apr lastSun 0:00 0 - -Rule Falk 1984 only - Sep 16 0:00 1:00 S -Rule Falk 1985 2000 - Sep Sun>=9 0:00 1:00 S -Rule Falk 1986 2000 - Apr Sun>=16 0:00 0 - -Rule Falk 2001 2010 - Apr Sun>=15 2:00 0 - -Rule Falk 2001 2010 - Sep Sun>=1 2:00 1:00 S -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Atlantic/Stanley -3:51:24 - LMT 1890 - -3:51:24 - SMT 1912 Mar 12 # Stanley Mean Time - -4:00 Falk -04/-03 1983 May - -3:00 Falk -03/-02 1985 Sep 15 - -4:00 Falk -04/-03 2010 Sep 5 2:00 - -3:00 - -03 - -# French Guiana -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Cayenne -3:29:20 - LMT 1911 Jul - -4:00 - -04 1967 Oct - -3:00 - -03 - -# Guyana -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Guyana -3:52:40 - LMT 1915 Mar # Georgetown - -3:45 - -0345 1975 Jul 31 - -3:00 - -03 1991 -# IATA SSIM (1996-06) says -4:00. Assume a 1991 switch. - -4:00 - -04 - -# Paraguay -# -# From Paul Eggert (2006-03-22): -# Shanks & Pottenger say that spring transitions are 01:00 -> 02:00, -# and autumn transitions are 00:00 -> 23:00. Go with pre-1999 -# editions of Shanks, and with the IATA, who say transitions occur at 00:00. -# -# From Waldemar Villamayor-Venialbo (2013-09-20): -# No time of the day is established for the adjustment, so people normally -# adjust their clocks at 0 hour of the given dates. -# -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Para 1975 1988 - Oct 1 0:00 1:00 S -Rule Para 1975 1978 - Mar 1 0:00 0 - -Rule Para 1979 1991 - Apr 1 0:00 0 - -Rule Para 1989 only - Oct 22 0:00 1:00 S -Rule Para 1990 only - Oct 1 0:00 1:00 S -Rule Para 1991 only - Oct 6 0:00 1:00 S -Rule Para 1992 only - Mar 1 0:00 0 - -Rule Para 1992 only - Oct 5 0:00 1:00 S -Rule Para 1993 only - Mar 31 0:00 0 - -Rule Para 1993 1995 - Oct 1 0:00 1:00 S -Rule Para 1994 1995 - Feb lastSun 0:00 0 - -Rule Para 1996 only - Mar 1 0:00 0 - -# IATA SSIM (2000-02) says 1999-10-10; ignore this for now. -# From Steffen Thorsen (2000-10-02): -# I have three independent reports that Paraguay changed to DST this Sunday -# (10-01). -# -# Translated by Gwillim Law (2001-02-27) from -# Noticias, a daily paper in Asunción, Paraguay (2000-10-01): -# http://www.diarionoticias.com.py/011000/nacional/naciona1.htm -# Starting at 0:00 today, the clock will be set forward 60 minutes, in -# fulfillment of Decree No. 7,273 of the Executive Power.... The time change -# system has been operating for several years. Formerly there was a separate -# decree each year; the new law has the same effect, but permanently. Every -# year, the time will change on the first Sunday of October; likewise, the -# clock will be set back on the first Sunday of March. -# -Rule Para 1996 2001 - Oct Sun>=1 0:00 1:00 S -# IATA SSIM (1997-09) says Mar 1; go with Shanks & Pottenger. -Rule Para 1997 only - Feb lastSun 0:00 0 - -# Shanks & Pottenger say 1999-02-28; IATA SSIM (1999-02) says 1999-02-27, but -# (1999-09) reports no date; go with above sources and Gerd Knops (2001-02-27). -Rule Para 1998 2001 - Mar Sun>=1 0:00 0 - -# From Rives McDow (2002-02-28): -# A decree was issued in Paraguay (No. 16350) on 2002-02-26 that changed the -# dst method to be from the first Sunday in September to the first Sunday in -# April. 
-Rule Para 2002 2004 - Apr Sun>=1 0:00 0 - -Rule Para 2002 2003 - Sep Sun>=1 0:00 1:00 S -# -# From Jesper Nørgaard Welen (2005-01-02): -# There are several sources that claim that Paraguay made -# a timezone rule change in autumn 2004. -# From Steffen Thorsen (2005-01-05): -# Decree 1,867 (2004-03-05) -# From Carlos Raúl Perasso via Jesper Nørgaard Welen (2006-10-13) -# http://www.presidencia.gov.py/decretos/D1867.pdf -Rule Para 2004 2009 - Oct Sun>=15 0:00 1:00 S -Rule Para 2005 2009 - Mar Sun>=8 0:00 0 - -# From Carlos Raúl Perasso (2010-02-18): -# By decree number 3958 issued yesterday -# http://www.presidencia.gov.py/v1/wp-content/uploads/2010/02/decreto3958.pdf -# Paraguay changes its DST schedule, postponing the March rule to April and -# modifying the October date. The decree reads: -# ... -# Art. 1. It is hereby established that from the second Sunday of the month of -# April of this year (2010), the official time is to be set back 60 minutes, -# and that on the first Sunday of the month of October, it is to be set -# forward 60 minutes, in all the territory of the Paraguayan Republic. -# ... -Rule Para 2010 max - Oct Sun>=1 0:00 1:00 S -Rule Para 2010 2012 - Apr Sun>=8 0:00 0 - -# -# From Steffen Thorsen (2013-03-07): -# Paraguay will end DST on 2013-03-24 00:00.... -# http://www.ande.gov.py/interna.php?id=1075 -# -# From Carlos Raúl Perasso (2013-03-15): -# The change in Paraguay is now final. Decree number 10780 -# http://www.presidencia.gov.py/uploads/pdf/presidencia-3b86ff4b691c79d4f5927ca964922ec74772ce857c02ca054a52a37b49afc7fb.pdf -# From Carlos Raúl Perasso (2014-02-28): -# Decree 1264 can be found at: -# http://www.presidencia.gov.py/archivos/documentos/DECRETO1264_ey9r8zai.pdf -Rule Para 2013 max - Mar Sun>=22 0:00 0 - - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Asuncion -3:50:40 - LMT 1890 - -3:50:40 - AMT 1931 Oct 10 # Asunción Mean Time - -4:00 - -04 1972 Oct - -3:00 - -03 1974 Apr - -4:00 Para -04/-03 - -# Peru -# -# From Evelyn C. Leeper via Mark Brader (2003-10-26) -# : -# When we were in Peru in 1985-1986, they apparently switched over -# sometime between December 29 and January 3 while we were on the Amazon. -# -# From Paul Eggert (2006-03-22): -# Shanks & Pottenger don't have this transition. Assume 1986 was like 1987. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule Peru 1938 only - Jan 1 0:00 1:00 S -Rule Peru 1938 only - Apr 1 0:00 0 - -Rule Peru 1938 1939 - Sep lastSun 0:00 1:00 S -Rule Peru 1939 1940 - Mar Sun>=24 0:00 0 - -Rule Peru 1986 1987 - Jan 1 0:00 1:00 S -Rule Peru 1986 1987 - Apr 1 0:00 0 - -Rule Peru 1990 only - Jan 1 0:00 1:00 S -Rule Peru 1990 only - Apr 1 0:00 0 - -# IATA is ambiguous for 1993/1995; go with Shanks & Pottenger. -Rule Peru 1994 only - Jan 1 0:00 1:00 S -Rule Peru 1994 only - Apr 1 0:00 0 - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Lima -5:08:12 - LMT 1890 - -5:08:36 - LMT 1908 Jul 28 # Lima Mean Time? - -5:00 Peru -05/-04 - -# South Georgia -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone Atlantic/South_Georgia -2:26:08 - LMT 1890 # Grytviken - -2:00 - -02 - -# South Sandwich Is -# uninhabited; scientific personnel have wintered - -# Suriname -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Paramaribo -3:40:40 - LMT 1911 - -3:40:52 - PMT 1935 # Paramaribo Mean Time - -3:40:36 - PMT 1945 Oct # The capital moved? 
- -3:30 - -0330 1984 Oct - -3:00 - -03 - -# Trinidad and Tobago -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Port_of_Spain -4:06:04 - LMT 1912 Mar 2 - -4:00 - AST - -# These all agree with Trinidad and Tobago since 1970. -Link America/Port_of_Spain America/Anguilla -Link America/Port_of_Spain America/Antigua -Link America/Port_of_Spain America/Dominica -Link America/Port_of_Spain America/Grenada -Link America/Port_of_Spain America/Guadeloupe -Link America/Port_of_Spain America/Marigot # St Martin (French part) -Link America/Port_of_Spain America/Montserrat -Link America/Port_of_Spain America/St_Barthelemy # St Barthélemy -Link America/Port_of_Spain America/St_Kitts # St Kitts & Nevis -Link America/Port_of_Spain America/St_Lucia -Link America/Port_of_Spain America/St_Thomas # Virgin Islands (US) -Link America/Port_of_Spain America/St_Vincent -Link America/Port_of_Spain America/Tortola # Virgin Islands (UK) - -# Uruguay -# From Paul Eggert (1993-11-18): -# Uruguay wins the prize for the strangest peacetime manipulation of the rules. -# From Shanks & Pottenger: -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -# Whitman gives 1923 Oct 1; go with Shanks & Pottenger. -Rule Uruguay 1923 only - Oct 2 0:00 0:30 HS -Rule Uruguay 1924 1926 - Apr 1 0:00 0 - -Rule Uruguay 1924 1925 - Oct 1 0:00 0:30 HS -Rule Uruguay 1933 1935 - Oct lastSun 0:00 0:30 HS -# Shanks & Pottenger give 1935 Apr 1 0:00 & 1936 Mar 30 0:00; go with Whitman. -Rule Uruguay 1934 1936 - Mar Sat>=25 23:30s 0 - -Rule Uruguay 1936 only - Nov 1 0:00 0:30 HS -Rule Uruguay 1937 1941 - Mar lastSun 0:00 0 - -# Whitman gives 1937 Oct 3; go with Shanks & Pottenger. -Rule Uruguay 1937 1940 - Oct lastSun 0:00 0:30 HS -# Whitman gives 1941 Oct 24 - 1942 Mar 27, 1942 Dec 14 - 1943 Apr 13, -# and 1943 Apr 13 "to present time"; go with Shanks & Pottenger. -Rule Uruguay 1941 only - Aug 1 0:00 0:30 HS -Rule Uruguay 1942 only - Jan 1 0:00 0 - -Rule Uruguay 1942 only - Dec 14 0:00 1:00 S -Rule Uruguay 1943 only - Mar 14 0:00 0 - -Rule Uruguay 1959 only - May 24 0:00 1:00 S -Rule Uruguay 1959 only - Nov 15 0:00 0 - -Rule Uruguay 1960 only - Jan 17 0:00 1:00 S -Rule Uruguay 1960 only - Mar 6 0:00 0 - -Rule Uruguay 1965 1967 - Apr Sun>=1 0:00 1:00 S -Rule Uruguay 1965 only - Sep 26 0:00 0 - -Rule Uruguay 1966 1967 - Oct 31 0:00 0 - -Rule Uruguay 1968 1970 - May 27 0:00 0:30 HS -Rule Uruguay 1968 1970 - Dec 2 0:00 0 - -Rule Uruguay 1972 only - Apr 24 0:00 1:00 S -Rule Uruguay 1972 only - Aug 15 0:00 0 - -Rule Uruguay 1974 only - Mar 10 0:00 0:30 HS -Rule Uruguay 1974 only - Dec 22 0:00 1:00 S -Rule Uruguay 1976 only - Oct 1 0:00 0 - -Rule Uruguay 1977 only - Dec 4 0:00 1:00 S -Rule Uruguay 1978 only - Apr 1 0:00 0 - -Rule Uruguay 1979 only - Oct 1 0:00 1:00 S -Rule Uruguay 1980 only - May 1 0:00 0 - -Rule Uruguay 1987 only - Dec 14 0:00 1:00 S -Rule Uruguay 1988 only - Mar 14 0:00 0 - -Rule Uruguay 1988 only - Dec 11 0:00 1:00 S -Rule Uruguay 1989 only - Mar 12 0:00 0 - -Rule Uruguay 1989 only - Oct 29 0:00 1:00 S -# Shanks & Pottenger say no DST was observed in 1990/1 and 1991/2, -# and that 1992/3's DST was from 10-25 to 03-01. Go with IATA. -Rule Uruguay 1990 1992 - Mar Sun>=1 0:00 0 - -Rule Uruguay 1990 1991 - Oct Sun>=21 0:00 1:00 S -Rule Uruguay 1992 only - Oct 18 0:00 1:00 S -Rule Uruguay 1993 only - Feb 28 0:00 0 - -# From Eduardo Cota (2004-09-20): -# The Uruguayan government has decreed a change in the local time.... 
-# http://www.presidencia.gub.uy/decretos/2004091502.htm -Rule Uruguay 2004 only - Sep 19 0:00 1:00 S -# From Steffen Thorsen (2005-03-11): -# Uruguay's DST was scheduled to end on Sunday, 2005-03-13, but in order to -# save energy ... it was postponed two weeks.... -# http://www.presidencia.gub.uy/_Web/noticias/2005/03/2005031005.htm -Rule Uruguay 2005 only - Mar 27 2:00 0 - -# From Eduardo Cota (2005-09-27): -# http://www.presidencia.gub.uy/_Web/decretos/2005/09/CM%20119_09%2009%202005_00001.PDF -# This means that from 2005-10-09 at 02:00 local time, until 2006-03-12 at -# 02:00 local time, official time in Uruguay will be at GMT -2. -Rule Uruguay 2005 only - Oct 9 2:00 1:00 S -Rule Uruguay 2006 only - Mar 12 2:00 0 - -# From Jesper Nørgaard Welen (2006-09-06): -# http://www.presidencia.gub.uy/_web/decretos/2006/09/CM%20210_08%2006%202006_00001.PDF -# -# From Steffen Thorsen (2015-06-30): -# ... it looks like they will not be using DST the coming summer: -# http://www.elobservador.com.uy/gobierno-resolvio-que-no-habra-cambio-horario-verano-n656787 -# http://www.republica.com.uy/este-ano-no-se-modificara-el-huso-horario-en-uruguay/523760/ -# From Paul Eggert (2015-06-30): -# Apparently restaurateurs complained that DST caused people to go to the beach -# instead of out to dinner. -# From Pablo Camargo (2015-07-13): -# http://archivo.presidencia.gub.uy/sci/decretos/2015/06/cons_min_201.pdf -# [dated 2015-06-29; repeals Decree 311/006 dated 2006-09-04] -Rule Uruguay 2006 2014 - Oct Sun>=1 2:00 1:00 S -Rule Uruguay 2007 2015 - Mar Sun>=8 2:00 0 - - -# This Zone can be simplified once we assume zic %z. -Zone America/Montevideo -3:44:44 - LMT 1898 Jun 28 - -3:44:44 - MMT 1920 May 1 # Montevideo MT - -3:30 Uruguay -0330/-03 1942 Dec 14 - -3:00 Uruguay -03/-02 1968 - -3:00 Uruguay -03/-0230 1971 - -3:00 Uruguay -03/-02 1974 - -3:00 Uruguay -03/-0230 1974 Dec 22 - -3:00 Uruguay -03/-02 - -# Venezuela -# -# From Paul Eggert (2015-07-28): -# For the 1965 transition see Gaceta Oficial No. 27.619 (1964-12-15), p 205.533 -# http://www.pgr.gob.ve/dmdocuments/1964/27619.pdf -# -# From John Stainforth (2007-11-28): -# ... the change for Venezuela originally expected for 2007-12-31 has -# been brought forward to 2007-12-09. The official announcement was -# published today in the "Gaceta Oficial de la República Bolivariana -# de Venezuela, número 38.819" (official document for all laws or -# resolution publication) -# http://www.globovision.com/news.php?nid=72208 - -# From Alexander Krivenyshev (2016-04-15): -# https://actualidad.rt.com/actualidad/204758-venezuela-modificar-huso-horario-sequia-elnino -# -# From Paul Eggert (2016-04-15): -# Clocks advance 30 minutes on 2016-05-01 at 02:30.... -# "'Venezuela's new time-zone: hours without light, hours without water, -# hours of presidential broadcasts, hours of lines,' quipped comedian -# Jean Mary Curró ...". See: Cawthorne A, Kai D. Venezuela scraps -# half-hour time difference set by Chavez. Reuters 2016-04-15 14:50 -0400 -# https://www.reuters.com/article/us-venezuela-timezone-idUSKCN0XC2BE -# -# From Matt Johnson (2016-04-20): -# ... published in the official Gazette [2016-04-18], here: -# http://historico.tsj.gob.ve/gaceta_ext/abril/1842016/E-1842016-4551.pdf - -# Zone NAME GMTOFF RULES FORMAT [UNTIL] -Zone America/Caracas -4:27:44 - LMT 1890 - -4:27:40 - CMT 1912 Feb 12 # Caracas Mean Time? 
- -4:30 - -0430 1965 Jan 1 0:00 - -4:00 - -04 2007 Dec 9 3:00 - -4:30 - -0430 2016 May 1 2:30 - -4:00 - -04 diff --git a/src/timezone/data/systemv b/src/timezone/data/systemv deleted file mode 100644 index d9e2995756..0000000000 --- a/src/timezone/data/systemv +++ /dev/null @@ -1,37 +0,0 @@ -# This file is in the public domain, so clarified as of -# 2009-05-17 by Arthur David Olson. - -# Old rules, should the need arise. -# No attempt is made to handle Newfoundland, since it cannot be expressed -# using the System V "TZ" scheme (half-hour offset), or anything outside -# North America (no support for non-standard DST start/end dates), nor -# the changes in the DST rules in the US after 1976 (which occurred after -# the old rules were written). -# -# If you need the old rules, uncomment ## lines. -# Compile this *without* leap second correction for true conformance. - -# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S -Rule SystemV min 1973 - Apr lastSun 2:00 1:00 D -Rule SystemV min 1973 - Oct lastSun 2:00 0 S -Rule SystemV 1974 only - Jan 6 2:00 1:00 D -Rule SystemV 1974 only - Nov lastSun 2:00 0 S -Rule SystemV 1975 only - Feb 23 2:00 1:00 D -Rule SystemV 1975 only - Oct lastSun 2:00 0 S -Rule SystemV 1976 max - Apr lastSun 2:00 1:00 D -Rule SystemV 1976 max - Oct lastSun 2:00 0 S - -# Zone NAME GMTOFF RULES/SAVE FORMAT [UNTIL] -## Zone SystemV/AST4ADT -4:00 SystemV A%sT -## Zone SystemV/EST5EDT -5:00 SystemV E%sT -## Zone SystemV/CST6CDT -6:00 SystemV C%sT -## Zone SystemV/MST7MDT -7:00 SystemV M%sT -## Zone SystemV/PST8PDT -8:00 SystemV P%sT -## Zone SystemV/YST9YDT -9:00 SystemV Y%sT -## Zone SystemV/AST4 -4:00 - AST -## Zone SystemV/EST5 -5:00 - EST -## Zone SystemV/CST6 -6:00 - CST -## Zone SystemV/MST7 -7:00 - MST -## Zone SystemV/PST8 -8:00 - PST -## Zone SystemV/YST9 -9:00 - YST -## Zone SystemV/HST10 -10:00 - HST diff --git a/src/timezone/data/tzdata.zi b/src/timezone/data/tzdata.zi new file mode 100644 index 0000000000..a818947009 --- /dev/null +++ b/src/timezone/data/tzdata.zi @@ -0,0 +1,4146 @@ +# This zic input file is in the public domain. 
+R A 1916 o - Jun 14 23s 1 S +R A 1916 1919 - O Sun>=1 23s 0 - +R A 1917 o - Mar 24 23s 1 S +R A 1918 o - Mar 9 23s 1 S +R A 1919 o - Mar 1 23s 1 S +R A 1920 o - F 14 23s 1 S +R A 1920 o - O 23 23s 0 - +R A 1921 o - Mar 14 23s 1 S +R A 1921 o - Jun 21 23s 0 - +R A 1939 o - S 11 23s 1 S +R A 1939 o - N 19 1 0 - +R A 1944 1945 - Ap M>=1 2 1 S +R A 1944 o - O 8 2 0 - +R A 1945 o - S 16 1 0 - +R A 1971 o - Ap 25 23s 1 S +R A 1971 o - S 26 23s 0 - +R A 1977 o - May 6 0 1 S +R A 1977 o - O 21 0 0 - +R A 1978 o - Mar 24 1 1 S +R A 1978 o - S 22 3 0 - +R A 1980 o - Ap 25 0 1 S +R A 1980 o - O 31 2 0 - +Z Africa/Algiers 0:12:12 - LMT 1891 Mar 15 0:1 +0:9:21 - PMT 1911 Mar 11 +0 A WE%sT 1940 F 25 2 +1 A CE%sT 1946 O 7 +0 - WET 1956 Ja 29 +1 - CET 1963 Ap 14 +0 A WE%sT 1977 O 21 +1 A CE%sT 1979 O 26 +0 A WE%sT 1981 May +1 - CET +Z Atlantic/Cape_Verde -1:34:4 - LMT 1907 +-2 - -02 1942 S +-2 1 -01 1945 O 15 +-2 - -02 1975 N 25 2 +-1 - -01 +Z Africa/Ndjamena 1:0:12 - LMT 1912 +1 - WAT 1979 O 14 +1 1 WAST 1980 Mar 8 +1 - WAT +Z Africa/Abidjan -0:16:8 - LMT 1912 +0 - GMT +Li Africa/Abidjan Africa/Bamako +Li Africa/Abidjan Africa/Banjul +Li Africa/Abidjan Africa/Conakry +Li Africa/Abidjan Africa/Dakar +Li Africa/Abidjan Africa/Freetown +Li Africa/Abidjan Africa/Lome +Li Africa/Abidjan Africa/Nouakchott +Li Africa/Abidjan Africa/Ouagadougou +Li Africa/Abidjan Africa/Sao_Tome +Li Africa/Abidjan Atlantic/St_Helena +R B 1940 o - Jul 15 0 1 S +R B 1940 o - O 1 0 0 - +R B 1941 o - Ap 15 0 1 S +R B 1941 o - S 16 0 0 - +R B 1942 1944 - Ap 1 0 1 S +R B 1942 o - O 27 0 0 - +R B 1943 1945 - N 1 0 0 - +R B 1945 o - Ap 16 0 1 S +R B 1957 o - May 10 0 1 S +R B 1957 1958 - O 1 0 0 - +R B 1958 o - May 1 0 1 S +R B 1959 1981 - May 1 1 1 S +R B 1959 1965 - S 30 3 0 - +R B 1966 1994 - O 1 3 0 - +R B 1982 o - Jul 25 1 1 S +R B 1983 o - Jul 12 1 1 S +R B 1984 1988 - May 1 1 1 S +R B 1989 o - May 6 1 1 S +R B 1990 1994 - May 1 1 1 S +R B 1995 2010 - Ap lastF 0s 1 S +R B 1995 2005 - S lastTh 24 0 - +R B 2006 o - S 21 24 0 - +R B 2007 o - S Th>=1 24 0 - +R B 2008 o - Au lastTh 24 0 - +R B 2009 o - Au 20 24 0 - +R B 2010 o - Au 10 24 0 - +R B 2010 o - S 9 24 1 S +R B 2010 o - S lastTh 24 0 - +R B 2014 o - May 15 24 1 S +R B 2014 o - Jun 26 24 0 - +R B 2014 o - Jul 31 24 1 S +R B 2014 o - S lastTh 24 0 - +Z Africa/Cairo 2:5:9 - LMT 1900 O +2 B EE%sT +R C 1920 1942 - S 1 0 0:20 GHST +R C 1920 1942 - D 31 0 0 GMT +Z Africa/Accra -0:0:52 - LMT 1918 +0 C GMT/+0020 +Z Africa/Bissau -1:2:20 - LMT 1912 +-1 - -01 1975 +0 - GMT +Z Africa/Nairobi 2:27:16 - LMT 1928 Jul +3 - EAT 1930 +2:30 - +0230 1940 +2:45 - +0245 1960 +3 - EAT +Li Africa/Nairobi Africa/Addis_Ababa +Li Africa/Nairobi Africa/Asmara +Li Africa/Nairobi Africa/Dar_es_Salaam +Li Africa/Nairobi Africa/Djibouti +Li Africa/Nairobi Africa/Kampala +Li Africa/Nairobi Africa/Mogadishu +Li Africa/Nairobi Indian/Antananarivo +Li Africa/Nairobi Indian/Comoro +Li Africa/Nairobi Indian/Mayotte +Z Africa/Monrovia -0:43:8 - LMT 1882 +-0:43:8 - MMT 1919 Mar +-0:44:30 - MMT 1972 Ja 7 +0 - GMT +R D 1951 o - O 14 2 1 S +R D 1952 o - Ja 1 0 0 - +R D 1953 o - O 9 2 1 S +R D 1954 o - Ja 1 0 0 - +R D 1955 o - S 30 0 1 S +R D 1956 o - Ja 1 0 0 - +R D 1982 1984 - Ap 1 0 1 S +R D 1982 1985 - O 1 0 0 - +R D 1985 o - Ap 6 0 1 S +R D 1986 o - Ap 4 0 1 S +R D 1986 o - O 3 0 0 - +R D 1987 1989 - Ap 1 0 1 S +R D 1987 1989 - O 1 0 0 - +R D 1997 o - Ap 4 0 1 S +R D 1997 o - O 4 0 0 - +R D 2013 o - Mar lastF 1 1 S +R D 2013 o - O lastF 2 0 - +Z Africa/Tripoli 0:52:44 - LMT 1920 +1 D CE%sT 1959 +2 - EET 1982 +1 D 
CE%sT 1990 May 4 +2 - EET 1996 S 30 +1 D CE%sT 1997 O 4 +2 - EET 2012 N 10 2 +1 D CE%sT 2013 O 25 2 +2 - EET +R E 1982 o - O 10 0 1 S +R E 1983 o - Mar 21 0 0 - +R E 2008 o - O lastSun 2 1 S +R E 2009 o - Mar lastSun 2 0 - +Z Indian/Mauritius 3:50 - LMT 1907 +4 E +04/+05 +R F 1939 o - S 12 0 1 S +R F 1939 o - N 19 0 0 - +R F 1940 o - F 25 0 1 S +R F 1945 o - N 18 0 0 - +R F 1950 o - Jun 11 0 1 S +R F 1950 o - O 29 0 0 - +R F 1967 o - Jun 3 12 1 S +R F 1967 o - O 1 0 0 - +R F 1974 o - Jun 24 0 1 S +R F 1974 o - S 1 0 0 - +R F 1976 1977 - May 1 0 1 S +R F 1976 o - Au 1 0 0 - +R F 1977 o - S 28 0 0 - +R F 1978 o - Jun 1 0 1 S +R F 1978 o - Au 4 0 0 - +R F 2008 o - Jun 1 0 1 S +R F 2008 o - S 1 0 0 - +R F 2009 o - Jun 1 0 1 S +R F 2009 o - Au 21 0 0 - +R F 2010 o - May 2 0 1 S +R F 2010 o - Au 8 0 0 - +R F 2011 o - Ap 3 0 1 S +R F 2011 o - Jul 31 0 0 - +R F 2012 2013 - Ap lastSun 2 1 S +R F 2012 o - Jul 20 3 0 - +R F 2012 o - Au 20 2 1 S +R F 2012 o - S 30 3 0 - +R F 2013 o - Jul 7 3 0 - +R F 2013 o - Au 10 2 1 S +R F 2013 ma - O lastSun 3 0 - +R F 2014 2021 - Mar lastSun 2 1 S +R F 2014 o - Jun 28 3 0 - +R F 2014 o - Au 2 2 1 S +R F 2015 o - Jun 14 3 0 - +R F 2015 o - Jul 19 2 1 S +R F 2016 o - Jun 5 3 0 - +R F 2016 o - Jul 10 2 1 S +R F 2017 o - May 21 3 0 - +R F 2017 o - Jul 2 2 1 S +R F 2018 o - May 13 3 0 - +R F 2018 o - Jun 17 2 1 S +R F 2019 o - May 5 3 0 - +R F 2019 o - Jun 9 2 1 S +R F 2020 o - Ap 19 3 0 - +R F 2020 o - May 24 2 1 S +R F 2021 o - Ap 11 3 0 - +R F 2021 o - May 16 2 1 S +R F 2022 o - May 8 2 1 S +R F 2023 o - Ap 23 2 1 S +R F 2024 o - Ap 14 2 1 S +R F 2025 o - Ap 6 2 1 S +R F 2026 ma - Mar lastSun 2 1 S +R F 2036 o - O 19 3 0 - +R F 2037 o - O 4 3 0 - +Z Africa/Casablanca -0:30:20 - LMT 1913 O 26 +0 F WE%sT 1984 Mar 16 +1 - CET 1986 +0 F WE%sT +Z Africa/El_Aaiun -0:52:48 - LMT 1934 +-1 - -01 1976 Ap 14 +0 F WE%sT +Z Africa/Maputo 2:10:20 - LMT 1903 Mar +2 - CAT +Li Africa/Maputo Africa/Blantyre +Li Africa/Maputo Africa/Bujumbura +Li Africa/Maputo Africa/Gaborone +Li Africa/Maputo Africa/Harare +Li Africa/Maputo Africa/Kigali +Li Africa/Maputo Africa/Lubumbashi +Li Africa/Maputo Africa/Lusaka +R G 1994 o - Mar 21 0 0 - +R G 1994 2016 - S Sun>=1 2 1 S +R G 1995 2017 - Ap Sun>=1 2 0 - +Z Africa/Windhoek 1:8:24 - LMT 1892 F 8 +1:30 - +0130 1903 Mar +2 - SAST 1942 S 20 2 +2 1 SAST 1943 Mar 21 2 +2 - SAST 1990 Mar 21 +2 - CAT 1994 Mar 21 +1 G WA%sT 2017 S 3 2 +2 - CAT +Z Africa/Lagos 0:13:36 - LMT 1919 S +1 - WAT +Li Africa/Lagos Africa/Bangui +Li Africa/Lagos Africa/Brazzaville +Li Africa/Lagos Africa/Douala +Li Africa/Lagos Africa/Kinshasa +Li Africa/Lagos Africa/Libreville +Li Africa/Lagos Africa/Luanda +Li Africa/Lagos Africa/Malabo +Li Africa/Lagos Africa/Niamey +Li Africa/Lagos Africa/Porto-Novo +Z Indian/Reunion 3:41:52 - LMT 1911 Jun +4 - +04 +Z Indian/Mahe 3:41:48 - LMT 1906 Jun +4 - +04 +R H 1942 1943 - S Sun>=15 2 1 - +R H 1943 1944 - Mar Sun>=15 2 0 - +Z Africa/Johannesburg 1:52 - LMT 1892 F 8 +1:30 - SAST 1903 Mar +2 H SAST +Li Africa/Johannesburg Africa/Maseru +Li Africa/Johannesburg Africa/Mbabane +R I 1970 o - May 1 0 1 S +R I 1970 1985 - O 15 0 0 - +R I 1971 o - Ap 30 0 1 S +R I 1972 1985 - Ap lastSun 0 1 S +Z Africa/Khartoum 2:10:8 - LMT 1931 +2 I CA%sT 2000 Ja 15 12 +3 - EAT 2017 N +2 - CAT +Z Africa/Juba 2:6:28 - LMT 1931 +2 I CA%sT 2000 Ja 15 12 +3 - EAT +R J 1939 o - Ap 15 23s 1 S +R J 1939 o - N 18 23s 0 - +R J 1940 o - F 25 23s 1 S +R J 1941 o - O 6 0 0 - +R J 1942 o - Mar 9 0 1 S +R J 1942 o - N 2 3 0 - +R J 1943 o - Mar 29 2 1 S +R J 1943 o - Ap 17 2 
0 - +R J 1943 o - Ap 25 2 1 S +R J 1943 o - O 4 2 0 - +R J 1944 1945 - Ap M>=1 2 1 S +R J 1944 o - O 8 0 0 - +R J 1945 o - S 16 0 0 - +R J 1977 o - Ap 30 0s 1 S +R J 1977 o - S 24 0s 0 - +R J 1978 o - May 1 0s 1 S +R J 1978 o - O 1 0s 0 - +R J 1988 o - Jun 1 0s 1 S +R J 1988 1990 - S lastSun 0s 0 - +R J 1989 o - Mar 26 0s 1 S +R J 1990 o - May 1 0s 1 S +R J 2005 o - May 1 0s 1 S +R J 2005 o - S 30 1s 0 - +R J 2006 2008 - Mar lastSun 2s 1 S +R J 2006 2008 - O lastSun 2s 0 - +Z Africa/Tunis 0:40:44 - LMT 1881 May 12 +0:9:21 - PMT 1911 Mar 11 +1 J CE%sT +Z Antarctica/Casey 0 - -00 1969 +8 - +08 2009 O 18 2 +11 - +11 2010 Mar 5 2 +8 - +08 2011 O 28 2 +11 - +11 2012 F 21 17u +8 - +08 2016 O 22 +11 - +11 +Z Antarctica/Davis 0 - -00 1957 Ja 13 +7 - +07 1964 N +0 - -00 1969 F +7 - +07 2009 O 18 2 +5 - +05 2010 Mar 10 20u +7 - +07 2011 O 28 2 +5 - +05 2012 F 21 20u +7 - +07 +Z Antarctica/Mawson 0 - -00 1954 F 13 +6 - +06 2009 O 18 2 +5 - +05 +Z Indian/Kerguelen 0 - -00 1950 +5 - +05 +Z Antarctica/DumontDUrville 0 - -00 1947 +10 - +10 1952 Ja 14 +0 - -00 1956 N +10 - +10 +Z Antarctica/Syowa 0 - -00 1957 Ja 29 +3 - +03 +R K 2005 ma - Mar lastSun 1u 2 +02 +R K 2004 ma - O lastSun 1u 0 +00 +Z Antarctica/Troll 0 - -00 2005 F 12 +0 K %s +Z Antarctica/Vostok 0 - -00 1957 D 16 +6 - +06 +Z Antarctica/Rothera 0 - -00 1976 D +-3 - -03 +Z Asia/Kabul 4:36:48 - LMT 1890 +4 - +04 1945 +4:30 - +0430 +R L 2011 o - Mar lastSun 2s 1 S +R L 2011 o - O lastSun 2s 0 - +Z Asia/Yerevan 2:58 - LMT 1924 May 2 +3 - +03 1957 Mar +4 M +04/+05 1991 Mar 31 2s +3 M +03/+04 1995 S 24 2s +4 - +04 1997 +4 M +04/+05 2011 +4 L +04/+05 +R N 1997 2015 - Mar lastSun 4 1 S +R N 1997 2015 - O lastSun 5 0 - +Z Asia/Baku 3:19:24 - LMT 1924 May 2 +3 - +03 1957 Mar +4 M +04/+05 1991 Mar 31 2s +3 M +03/+04 1992 S lastSun 2s +4 - +04 1996 +4 O +04/+05 1997 +4 N +04/+05 +R P 2009 o - Jun 19 23 1 S +R P 2009 o - D 31 24 0 - +Z Asia/Dhaka 6:1:40 - LMT 1890 +5:53:20 - HMT 1941 O +6:30 - +0630 1942 May 15 +5:30 - +0530 1942 S +6:30 - +0630 1951 S 30 +6 - +06 2009 +6 P +06/+07 +Z Asia/Thimphu 5:58:36 - LMT 1947 Au 15 +5:30 - +0530 1987 O +6 - +06 +Z Indian/Chagos 4:49:40 - LMT 1907 +5 - +05 1996 +6 - +06 +Z Asia/Brunei 7:39:40 - LMT 1926 Mar +7:30 - +0730 1933 +8 - +08 +Z Asia/Yangon 6:24:47 - LMT 1880 +6:24:47 - RMT 1920 +6:30 - +0630 1942 May +9 - +09 1945 May 3 +6:30 - +0630 +R Q 1940 o - Jun 3 0 1 D +R Q 1940 1941 - O 1 0 0 S +R Q 1941 o - Mar 16 0 1 D +R R 1986 o - May 4 0 1 D +R R 1986 1991 - S Sun>=11 0 0 S +R R 1987 1991 - Ap Sun>=10 0 1 D +Z Asia/Shanghai 8:5:43 - LMT 1901 +8 Q C%sT 1949 +8 R C%sT +Z Asia/Urumqi 5:50:20 - LMT 1928 +6 - +06 +R S 1941 o - Ap 1 3:30 1 S +R S 1941 o - S 30 3:30 0 - +R S 1946 o - Ap 20 3:30 1 S +R S 1946 o - D 1 3:30 0 - +R S 1947 o - Ap 13 3:30 1 S +R S 1947 o - D 30 3:30 0 - +R S 1948 o - May 2 3:30 1 S +R S 1948 1951 - O lastSun 3:30 0 - +R S 1952 o - O 25 3:30 0 - +R S 1949 1953 - Ap Sun>=1 3:30 1 S +R S 1953 o - N 1 3:30 0 - +R S 1954 1964 - Mar Sun>=18 3:30 1 S +R S 1954 o - O 31 3:30 0 - +R S 1955 1964 - N Sun>=1 3:30 0 - +R S 1965 1976 - Ap Sun>=16 3:30 1 S +R S 1965 1976 - O Sun>=16 3:30 0 - +R S 1973 o - D 30 3:30 1 S +R S 1979 o - May Sun>=8 3:30 1 S +R S 1979 o - O Sun>=16 3:30 0 - +Z Asia/Hong_Kong 7:36:42 - LMT 1904 O 30 +8 S HK%sT 1941 D 25 +9 - JST 1945 S 15 +8 S HK%sT +R T 1946 o - May 15 0 1 D +R T 1946 o - O 1 0 0 S +R T 1947 o - Ap 15 0 1 D +R T 1947 o - N 1 0 0 S +R T 1948 1951 - May 1 0 1 D +R T 1948 1951 - O 1 0 0 S +R T 1952 o - Mar 1 0 1 D +R T 1952 1954 - N 1 0 0 S +R T 1953 1959 - Ap 1 0 
1 D +R T 1955 1961 - O 1 0 0 S +R T 1960 1961 - Jun 1 0 1 D +R T 1974 1975 - Ap 1 0 1 D +R T 1974 1975 - O 1 0 0 S +R T 1979 o - Jul 1 0 1 D +R T 1979 o - O 1 0 0 S +Z Asia/Taipei 8:6 - LMT 1896 +8 - CST 1937 O +9 - JST 1945 S 21 1 +8 T C%sT +R U 1961 1962 - Mar Sun>=16 3:30 1 D +R U 1961 1964 - N Sun>=1 3:30 0 S +R U 1963 o - Mar Sun>=16 0 1 D +R U 1964 o - Mar Sun>=16 3:30 1 D +R U 1965 o - Mar Sun>=16 0 1 D +R U 1965 o - O 31 0 0 S +R U 1966 1971 - Ap Sun>=16 3:30 1 D +R U 1966 1971 - O Sun>=16 3:30 0 S +R U 1972 1974 - Ap Sun>=15 0 1 D +R U 1972 1973 - O Sun>=15 0 0 S +R U 1974 1977 - O Sun>=15 3:30 0 S +R U 1975 1977 - Ap Sun>=15 3:30 1 D +R U 1978 1980 - Ap Sun>=15 0 1 D +R U 1978 1980 - O Sun>=15 0 0 S +Z Asia/Macau 7:34:20 - LMT 1912 +8 U C%sT +R V 1975 o - Ap 13 0 1 S +R V 1975 o - O 12 0 0 - +R V 1976 o - May 15 0 1 S +R V 1976 o - O 11 0 0 - +R V 1977 1980 - Ap Sun>=1 0 1 S +R V 1977 o - S 25 0 0 - +R V 1978 o - O 2 0 0 - +R V 1979 1997 - S lastSun 0 0 - +R V 1981 1998 - Mar lastSun 0 1 S +Z Asia/Nicosia 2:13:28 - LMT 1921 N 14 +2 V EE%sT 1998 S +2 O EE%sT +Z Asia/Famagusta 2:15:48 - LMT 1921 N 14 +2 V EE%sT 1998 S +2 O EE%sT 2016 S 8 +3 - +03 2017 O 29 1u +2 O EE%sT +Li Asia/Nicosia Europe/Nicosia +Z Asia/Tbilisi 2:59:11 - LMT 1880 +2:59:11 - TBMT 1924 May 2 +3 - +03 1957 Mar +4 M +04/+05 1991 Mar 31 2s +3 M +03/+04 1992 +3 W +03/+04 1994 S lastSun +4 W +04/+05 1996 O lastSun +4 1 +05 1997 Mar lastSun +4 W +04/+05 2004 Jun 27 +3 M +03/+04 2005 Mar lastSun 2 +4 - +04 +Z Asia/Dili 8:22:20 - LMT 1912 +8 - +08 1942 F 21 23 +9 - +09 1976 May 3 +8 - +08 2000 S 17 +9 - +09 +Z Asia/Kolkata 5:53:28 - LMT 1854 Jun 28 +5:53:20 - HMT 1870 +5:21:10 - MMT 1906 +5:30 - IST 1941 O +5:30 1 +0630 1942 May 15 +5:30 - IST 1942 S +5:30 1 +0630 1945 O 15 +5:30 - IST +Z Asia/Jakarta 7:7:12 - LMT 1867 Au 10 +7:7:12 - BMT 1923 D 31 23:47:12 +7:20 - +0720 1932 N +7:30 - +0730 1942 Mar 23 +9 - +09 1945 S 23 +7:30 - +0730 1948 May +8 - +08 1950 May +7:30 - +0730 1964 +7 - WIB +Z Asia/Pontianak 7:17:20 - LMT 1908 May +7:17:20 - PMT 1932 N +7:30 - +0730 1942 Ja 29 +9 - +09 1945 S 23 +7:30 - +0730 1948 May +8 - +08 1950 May +7:30 - +0730 1964 +8 - WITA 1988 +7 - WIB +Z Asia/Makassar 7:57:36 - LMT 1920 +7:57:36 - MMT 1932 N +8 - +08 1942 F 9 +9 - +09 1945 S 23 +8 - WITA +Z Asia/Jayapura 9:22:48 - LMT 1932 N +9 - +09 1944 S +9:30 - +0930 1964 +9 - WIT +R X 1978 1980 - Mar 21 0 1 D +R X 1978 o - O 21 0 0 S +R X 1979 o - S 19 0 0 S +R X 1980 o - S 23 0 0 S +R X 1991 o - May 3 0 1 D +R X 1992 1995 - Mar 22 0 1 D +R X 1991 1995 - S 22 0 0 S +R X 1996 o - Mar 21 0 1 D +R X 1996 o - S 21 0 0 S +R X 1997 1999 - Mar 22 0 1 D +R X 1997 1999 - S 22 0 0 S +R X 2000 o - Mar 21 0 1 D +R X 2000 o - S 21 0 0 S +R X 2001 2003 - Mar 22 0 1 D +R X 2001 2003 - S 22 0 0 S +R X 2004 o - Mar 21 0 1 D +R X 2004 o - S 21 0 0 S +R X 2005 o - Mar 22 0 1 D +R X 2005 o - S 22 0 0 S +R X 2008 o - Mar 21 0 1 D +R X 2008 o - S 21 0 0 S +R X 2009 2011 - Mar 22 0 1 D +R X 2009 2011 - S 22 0 0 S +R X 2012 o - Mar 21 0 1 D +R X 2012 o - S 21 0 0 S +R X 2013 2015 - Mar 22 0 1 D +R X 2013 2015 - S 22 0 0 S +R X 2016 o - Mar 21 0 1 D +R X 2016 o - S 21 0 0 S +R X 2017 2019 - Mar 22 0 1 D +R X 2017 2019 - S 22 0 0 S +R X 2020 o - Mar 21 0 1 D +R X 2020 o - S 21 0 0 S +R X 2021 2023 - Mar 22 0 1 D +R X 2021 2023 - S 22 0 0 S +R X 2024 o - Mar 21 0 1 D +R X 2024 o - S 21 0 0 S +R X 2025 2027 - Mar 22 0 1 D +R X 2025 2027 - S 22 0 0 S +R X 2028 2029 - Mar 21 0 1 D +R X 2028 2029 - S 21 0 0 S +R X 2030 2031 - Mar 22 0 1 D +R X 2030 2031 - S 22 0 0 S +R 
X 2032 2033 - Mar 21 0 1 D +R X 2032 2033 - S 21 0 0 S +R X 2034 2035 - Mar 22 0 1 D +R X 2034 2035 - S 22 0 0 S +R X 2036 ma - Mar 21 0 1 D +R X 2036 ma - S 21 0 0 S +Z Asia/Tehran 3:25:44 - LMT 1916 +3:25:44 - TMT 1946 +3:30 - +0330 1977 N +4 X +04/+05 1979 +3:30 X +0330/+0430 +R Y 1982 o - May 1 0 1 D +R Y 1982 1984 - O 1 0 0 S +R Y 1983 o - Mar 31 0 1 D +R Y 1984 1985 - Ap 1 0 1 D +R Y 1985 1990 - S lastSun 1s 0 S +R Y 1986 1990 - Mar lastSun 1s 1 D +R Y 1991 2007 - Ap 1 3s 1 D +R Y 1991 2007 - O 1 3s 0 S +Z Asia/Baghdad 2:57:40 - LMT 1890 +2:57:36 - BMT 1918 +3 - +03 1982 May +3 Y +03/+04 +R Z 1940 o - Jun 1 0 1 D +R Z 1942 1944 - N 1 0 0 S +R Z 1943 o - Ap 1 2 1 D +R Z 1944 o - Ap 1 0 1 D +R Z 1945 o - Ap 16 0 1 D +R Z 1945 o - N 1 2 0 S +R Z 1946 o - Ap 16 2 1 D +R Z 1946 o - N 1 0 0 S +R Z 1948 o - May 23 0 2 DD +R Z 1948 o - S 1 0 1 D +R Z 1948 1949 - N 1 2 0 S +R Z 1949 o - May 1 0 1 D +R Z 1950 o - Ap 16 0 1 D +R Z 1950 o - S 15 3 0 S +R Z 1951 o - Ap 1 0 1 D +R Z 1951 o - N 11 3 0 S +R Z 1952 o - Ap 20 2 1 D +R Z 1952 o - O 19 3 0 S +R Z 1953 o - Ap 12 2 1 D +R Z 1953 o - S 13 3 0 S +R Z 1954 o - Jun 13 0 1 D +R Z 1954 o - S 12 0 0 S +R Z 1955 o - Jun 11 2 1 D +R Z 1955 o - S 11 0 0 S +R Z 1956 o - Jun 3 0 1 D +R Z 1956 o - S 30 3 0 S +R Z 1957 o - Ap 29 2 1 D +R Z 1957 o - S 22 0 0 S +R Z 1974 o - Jul 7 0 1 D +R Z 1974 o - O 13 0 0 S +R Z 1975 o - Ap 20 0 1 D +R Z 1975 o - Au 31 0 0 S +R Z 1985 o - Ap 14 0 1 D +R Z 1985 o - S 15 0 0 S +R Z 1986 o - May 18 0 1 D +R Z 1986 o - S 7 0 0 S +R Z 1987 o - Ap 15 0 1 D +R Z 1987 o - S 13 0 0 S +R Z 1988 o - Ap 10 0 1 D +R Z 1988 o - S 4 0 0 S +R Z 1989 o - Ap 30 0 1 D +R Z 1989 o - S 3 0 0 S +R Z 1990 o - Mar 25 0 1 D +R Z 1990 o - Au 26 0 0 S +R Z 1991 o - Mar 24 0 1 D +R Z 1991 o - S 1 0 0 S +R Z 1992 o - Mar 29 0 1 D +R Z 1992 o - S 6 0 0 S +R Z 1993 o - Ap 2 0 1 D +R Z 1993 o - S 5 0 0 S +R Z 1994 o - Ap 1 0 1 D +R Z 1994 o - Au 28 0 0 S +R Z 1995 o - Mar 31 0 1 D +R Z 1995 o - S 3 0 0 S +R Z 1996 o - Mar 15 0 1 D +R Z 1996 o - S 16 0 0 S +R Z 1997 o - Mar 21 0 1 D +R Z 1997 o - S 14 0 0 S +R Z 1998 o - Mar 20 0 1 D +R Z 1998 o - S 6 0 0 S +R Z 1999 o - Ap 2 2 1 D +R Z 1999 o - S 3 2 0 S +R Z 2000 o - Ap 14 2 1 D +R Z 2000 o - O 6 1 0 S +R Z 2001 o - Ap 9 1 1 D +R Z 2001 o - S 24 1 0 S +R Z 2002 o - Mar 29 1 1 D +R Z 2002 o - O 7 1 0 S +R Z 2003 o - Mar 28 1 1 D +R Z 2003 o - O 3 1 0 S +R Z 2004 o - Ap 7 1 1 D +R Z 2004 o - S 22 1 0 S +R Z 2005 o - Ap 1 2 1 D +R Z 2005 o - O 9 2 0 S +R Z 2006 2010 - Mar F>=26 2 1 D +R Z 2006 o - O 1 2 0 S +R Z 2007 o - S 16 2 0 S +R Z 2008 o - O 5 2 0 S +R Z 2009 o - S 27 2 0 S +R Z 2010 o - S 12 2 0 S +R Z 2011 o - Ap 1 2 1 D +R Z 2011 o - O 2 2 0 S +R Z 2012 o - Mar F>=26 2 1 D +R Z 2012 o - S 23 2 0 S +R Z 2013 ma - Mar F>=23 2 1 D +R Z 2013 ma - O lastSun 2 0 S +Z Asia/Jerusalem 2:20:54 - LMT 1880 +2:20:40 - JMT 1918 +2 Z I%sT +R a 1948 o - May Sun>=1 2 1 D +R a 1948 1951 - S Sat>=8 2 0 S +R a 1949 o - Ap Sun>=1 2 1 D +R a 1950 1951 - May Sun>=1 2 1 D +Z Asia/Tokyo 9:18:59 - LMT 1887 D 31 15u +9 a J%sT +R b 1973 o - Jun 6 0 1 S +R b 1973 1975 - O 1 0 0 - +R b 1974 1977 - May 1 0 1 S +R b 1976 o - N 1 0 0 - +R b 1977 o - O 1 0 0 - +R b 1978 o - Ap 30 0 1 S +R b 1978 o - S 30 0 0 - +R b 1985 o - Ap 1 0 1 S +R b 1985 o - O 1 0 0 - +R b 1986 1988 - Ap F>=1 0 1 S +R b 1986 1990 - O F>=1 0 0 - +R b 1989 o - May 8 0 1 S +R b 1990 o - Ap 27 0 1 S +R b 1991 o - Ap 17 0 1 S +R b 1991 o - S 27 0 0 - +R b 1992 o - Ap 10 0 1 S +R b 1992 1993 - O F>=1 0 0 - +R b 1993 1998 - Ap F>=1 0 1 S +R b 1994 o - S 
F>=15 0 0 - +R b 1995 1998 - S F>=15 0s 0 - +R b 1999 o - Jul 1 0s 1 S +R b 1999 2002 - S lastF 0s 0 - +R b 2000 2001 - Mar lastTh 0s 1 S +R b 2002 2012 - Mar lastTh 24 1 S +R b 2003 o - O 24 0s 0 - +R b 2004 o - O 15 0s 0 - +R b 2005 o - S lastF 0s 0 - +R b 2006 2011 - O lastF 0s 0 - +R b 2013 o - D 20 0 0 - +R b 2014 ma - Mar lastTh 24 1 S +R b 2014 ma - O lastF 0s 0 - +Z Asia/Amman 2:23:44 - LMT 1931 +2 b EE%sT +Z Asia/Almaty 5:7:48 - LMT 1924 May 2 +5 - +05 1930 Jun 21 +6 M +06/+07 1991 Mar 31 2s +5 M +05/+06 1992 Ja 19 2s +6 M +06/+07 2004 O 31 2s +6 - +06 +Z Asia/Qyzylorda 4:21:52 - LMT 1924 May 2 +4 - +04 1930 Jun 21 +5 - +05 1981 Ap +5 1 +06 1981 O +6 - +06 1982 Ap +5 M +05/+06 1991 Mar 31 2s +4 M +04/+05 1991 S 29 2s +5 M +05/+06 1992 Ja 19 2s +6 M +06/+07 1992 Mar 29 2s +5 M +05/+06 2004 O 31 2s +6 - +06 +Z Asia/Aqtobe 3:48:40 - LMT 1924 May 2 +4 - +04 1930 Jun 21 +5 - +05 1981 Ap +5 1 +06 1981 O +6 - +06 1982 Ap +5 M +05/+06 1991 Mar 31 2s +4 M +04/+05 1992 Ja 19 2s +5 M +05/+06 2004 O 31 2s +5 - +05 +Z Asia/Aqtau 3:21:4 - LMT 1924 May 2 +4 - +04 1930 Jun 21 +5 - +05 1981 O +6 - +06 1982 Ap +5 M +05/+06 1991 Mar 31 2s +4 M +04/+05 1992 Ja 19 2s +5 M +05/+06 1994 S 25 2s +4 M +04/+05 2004 O 31 2s +5 - +05 +Z Asia/Atyrau 3:27:44 - LMT 1924 May 2 +3 - +03 1930 Jun 21 +5 - +05 1981 O +6 - +06 1982 Ap +5 M +05/+06 1991 Mar 31 2s +4 M +04/+05 1992 Ja 19 2s +5 M +05/+06 1999 Mar 28 2s +4 M +04/+05 2004 O 31 2s +5 - +05 +Z Asia/Oral 3:25:24 - LMT 1924 May 2 +3 - +03 1930 Jun 21 +5 - +05 1981 Ap +5 1 +06 1981 O +6 - +06 1982 Ap +5 M +05/+06 1989 Mar 26 2s +4 M +04/+05 1992 Ja 19 2s +5 M +05/+06 1992 Mar 29 2s +4 M +04/+05 2004 O 31 2s +5 - +05 +R c 1992 1996 - Ap Sun>=7 0s 1 S +R c 1992 1996 - S lastSun 0 0 - +R c 1997 2005 - Mar lastSun 2:30 1 S +R c 1997 2004 - O lastSun 2:30 0 - +Z Asia/Bishkek 4:58:24 - LMT 1924 May 2 +5 - +05 1930 Jun 21 +6 M +06/+07 1991 Mar 31 2s +5 M +05/+06 1991 Au 31 2 +5 c +05/+06 2005 Au 12 +6 - +06 +R d 1948 o - Jun 1 0 1 D +R d 1948 o - S 13 0 0 S +R d 1949 o - Ap 3 0 1 D +R d 1949 1951 - S Sun>=8 0 0 S +R d 1950 o - Ap 1 0 1 D +R d 1951 o - May 6 0 1 D +R d 1955 o - May 5 0 1 D +R d 1955 o - S 9 0 0 S +R d 1956 o - May 20 0 1 D +R d 1956 o - S 30 0 0 S +R d 1957 1960 - May Sun>=1 0 1 D +R d 1957 1960 - S Sun>=18 0 0 S +R d 1987 1988 - May Sun>=8 2 1 D +R d 1987 1988 - O Sun>=8 3 0 S +Z Asia/Seoul 8:27:52 - LMT 1908 Ap +8:30 - KST 1912 +9 - JST 1945 S 8 +9 - KST 1954 Mar 21 +8:30 d K%sT 1961 Au 10 +9 d K%sT +Z Asia/Pyongyang 8:23 - LMT 1908 Ap +8:30 - KST 1912 +9 - JST 1945 Au 24 +9 - KST 2015 Au 15 +8:30 - KST +R e 1920 o - Mar 28 0 1 S +R e 1920 o - O 25 0 0 - +R e 1921 o - Ap 3 0 1 S +R e 1921 o - O 3 0 0 - +R e 1922 o - Mar 26 0 1 S +R e 1922 o - O 8 0 0 - +R e 1923 o - Ap 22 0 1 S +R e 1923 o - S 16 0 0 - +R e 1957 1961 - May 1 0 1 S +R e 1957 1961 - O 1 0 0 - +R e 1972 o - Jun 22 0 1 S +R e 1972 1977 - O 1 0 0 - +R e 1973 1977 - May 1 0 1 S +R e 1978 o - Ap 30 0 1 S +R e 1978 o - S 30 0 0 - +R e 1984 1987 - May 1 0 1 S +R e 1984 1991 - O 16 0 0 - +R e 1988 o - Jun 1 0 1 S +R e 1989 o - May 10 0 1 S +R e 1990 1992 - May 1 0 1 S +R e 1992 o - O 4 0 0 - +R e 1993 ma - Mar lastSun 0 1 S +R e 1993 1998 - S lastSun 0 0 - +R e 1999 ma - O lastSun 0 0 - +Z Asia/Beirut 2:22 - LMT 1880 +2 e EE%sT +R f 1935 1941 - S 14 0 0:20 TS +R f 1935 1941 - D 14 0 0 - +Z Asia/Kuala_Lumpur 6:46:46 - LMT 1901 +6:55:25 - SMT 1905 Jun +7 - +07 1933 +7 0:20 +0720 1936 +7:20 - +0720 1941 S +7:30 - +0730 1942 F 16 +9 - +09 1945 S 12 +7:30 - +0730 1982 +8 - +08 +Z Asia/Kuching 
7:21:20 - LMT 1926 Mar +7:30 - +0730 1933 +8 f +08/+0820 1942 F 16 +9 - +09 1945 S 12 +8 - +08 +Z Indian/Maldives 4:54 - LMT 1880 +4:54 - MMT 1960 +5 - +05 +R g 1983 1984 - Ap 1 0 1 S +R g 1983 o - O 1 0 0 - +R g 1985 1998 - Mar lastSun 0 1 S +R g 1984 1998 - S lastSun 0 0 - +R g 2001 o - Ap lastSat 2 1 S +R g 2001 2006 - S lastSat 2 0 - +R g 2002 2006 - Mar lastSat 2 1 S +R g 2015 2016 - Mar lastSat 2 1 S +R g 2015 2016 - S lastSat 0 0 - +Z Asia/Hovd 6:6:36 - LMT 1905 Au +6 - +06 1978 +7 g +07/+08 +Z Asia/Ulaanbaatar 7:7:32 - LMT 1905 Au +7 - +07 1978 +8 g +08/+09 +Z Asia/Choibalsan 7:38 - LMT 1905 Au +7 - +07 1978 +8 - +08 1983 Ap +9 g +09/+10 2008 Mar 31 +8 g +08/+09 +Z Asia/Kathmandu 5:41:16 - LMT 1920 +5:30 - +0530 1986 +5:45 - +0545 +R h 2002 o - Ap Sun>=2 0 1 S +R h 2002 o - O Sun>=2 0 0 - +R h 2008 o - Jun 1 0 1 S +R h 2008 2009 - N 1 0 0 - +R h 2009 o - Ap 15 0 1 S +Z Asia/Karachi 4:28:12 - LMT 1907 +5:30 - +0530 1942 S +5:30 1 +0630 1945 O 15 +5:30 - +0530 1951 S 30 +5 - +05 1971 Mar 26 +5 h PK%sT +R i 1999 2005 - Ap F>=15 0 1 S +R i 1999 2003 - O F>=15 0 0 - +R i 2004 o - O 1 1 0 - +R i 2005 o - O 4 2 0 - +R i 2006 2007 - Ap 1 0 1 S +R i 2006 o - S 22 0 0 - +R i 2007 o - S Th>=8 2 0 - +R i 2008 2009 - Mar lastF 0 1 S +R i 2008 o - S 1 0 0 - +R i 2009 o - S F>=1 1 0 - +R i 2010 o - Mar 26 0 1 S +R i 2010 o - Au 11 0 0 - +R i 2011 o - Ap 1 0:1 1 S +R i 2011 o - Au 1 0 0 - +R i 2011 o - Au 30 0 1 S +R i 2011 o - S 30 0 0 - +R i 2012 2014 - Mar lastTh 24 1 S +R i 2012 o - S 21 1 0 - +R i 2013 o - S F>=21 0 0 - +R i 2014 2015 - O F>=21 0 0 - +R i 2015 o - Mar lastF 24 1 S +R i 2016 ma - Mar lastSat 1 1 S +R i 2016 ma - O lastSat 1 0 - +Z Asia/Gaza 2:17:52 - LMT 1900 O +2 Z EET/EEST 1948 May 15 +2 B EE%sT 1967 Jun 5 +2 Z I%sT 1996 +2 b EE%sT 1999 +2 i EE%sT 2008 Au 29 +2 - EET 2008 S +2 i EE%sT 2010 +2 - EET 2010 Mar 27 0:1 +2 i EE%sT 2011 Au +2 - EET 2012 +2 i EE%sT +Z Asia/Hebron 2:20:23 - LMT 1900 O +2 Z EET/EEST 1948 May 15 +2 B EE%sT 1967 Jun 5 +2 Z I%sT 1996 +2 b EE%sT 1999 +2 i EE%sT +R j 1936 o - N 1 0 1 S +R j 1937 o - F 1 0 0 - +R j 1954 o - Ap 12 0 1 S +R j 1954 o - Jul 1 0 0 - +R j 1978 o - Mar 22 0 1 S +R j 1978 o - S 21 0 0 - +Z Asia/Manila -15:56 - LMT 1844 D 31 +8:4 - LMT 1899 May 11 +8 j +08/+09 1942 May +9 - +09 1944 N +8 j +08/+09 +Z Asia/Qatar 3:26:8 - LMT 1920 +4 - +04 1972 Jun +3 - +03 +Li Asia/Qatar Asia/Bahrain +Z Asia/Riyadh 3:6:52 - LMT 1947 Mar 14 +3 - +03 +Li Asia/Riyadh Asia/Aden +Li Asia/Riyadh Asia/Kuwait +Z Asia/Singapore 6:55:25 - LMT 1901 +6:55:25 - SMT 1905 Jun +7 - +07 1933 +7 0:20 +0720 1936 +7:20 - +0720 1941 S +7:30 - +0730 1942 F 16 +9 - +09 1945 S 12 +7:30 - +0730 1982 +8 - +08 +Z Asia/Colombo 5:19:24 - LMT 1880 +5:19:32 - MMT 1906 +5:30 - +0530 1942 Ja 5 +5:30 0:30 +06 1942 S +5:30 1 +0630 1945 O 16 2 +5:30 - +0530 1996 May 25 +6:30 - +0630 1996 O 26 0:30 +6 - +06 2006 Ap 15 0:30 +5:30 - +0530 +R k 1920 1923 - Ap Sun>=15 2 1 S +R k 1920 1923 - O Sun>=1 2 0 - +R k 1962 o - Ap 29 2 1 S +R k 1962 o - O 1 2 0 - +R k 1963 1965 - May 1 2 1 S +R k 1963 o - S 30 2 0 - +R k 1964 o - O 1 2 0 - +R k 1965 o - S 30 2 0 - +R k 1966 o - Ap 24 2 1 S +R k 1966 1976 - O 1 2 0 - +R k 1967 1978 - May 1 2 1 S +R k 1977 1978 - S 1 2 0 - +R k 1983 1984 - Ap 9 2 1 S +R k 1983 1984 - O 1 2 0 - +R k 1986 o - F 16 2 1 S +R k 1986 o - O 9 2 0 - +R k 1987 o - Mar 1 2 1 S +R k 1987 1988 - O 31 2 0 - +R k 1988 o - Mar 15 2 1 S +R k 1989 o - Mar 31 2 1 S +R k 1989 o - O 1 2 0 - +R k 1990 o - Ap 1 2 1 S +R k 1990 o - S 30 2 0 - +R k 1991 o - Ap 1 0 1 S +R k 1991 1992 - O 1 0 
0 - +R k 1992 o - Ap 8 0 1 S +R k 1993 o - Mar 26 0 1 S +R k 1993 o - S 25 0 0 - +R k 1994 1996 - Ap 1 0 1 S +R k 1994 2005 - O 1 0 0 - +R k 1997 1998 - Mar lastM 0 1 S +R k 1999 2006 - Ap 1 0 1 S +R k 2006 o - S 22 0 0 - +R k 2007 o - Mar lastF 0 1 S +R k 2007 o - N F>=1 0 0 - +R k 2008 o - Ap F>=1 0 1 S +R k 2008 o - N 1 0 0 - +R k 2009 o - Mar lastF 0 1 S +R k 2010 2011 - Ap F>=1 0 1 S +R k 2012 ma - Mar lastF 0 1 S +R k 2009 ma - O lastF 0 0 - +Z Asia/Damascus 2:25:12 - LMT 1920 +2 k EE%sT +Z Asia/Dushanbe 4:35:12 - LMT 1924 May 2 +5 - +05 1930 Jun 21 +6 M +06/+07 1991 Mar 31 2s +5 1 +05/+06 1991 S 9 2s +5 - +05 +Z Asia/Bangkok 6:42:4 - LMT 1880 +6:42:4 - BMT 1920 Ap +7 - +07 +Li Asia/Bangkok Asia/Phnom_Penh +Li Asia/Bangkok Asia/Vientiane +Z Asia/Ashgabat 3:53:32 - LMT 1924 May 2 +4 - +04 1930 Jun 21 +5 M +05/+06 1991 Mar 31 2 +4 M +04/+05 1992 Ja 19 2 +5 - +05 +Z Asia/Dubai 3:41:12 - LMT 1920 +4 - +04 +Li Asia/Dubai Asia/Muscat +Z Asia/Samarkand 4:27:53 - LMT 1924 May 2 +4 - +04 1930 Jun 21 +5 - +05 1981 Ap +5 1 +06 1981 O +6 - +06 1982 Ap +5 M +05/+06 1992 +5 - +05 +Z Asia/Tashkent 4:37:11 - LMT 1924 May 2 +5 - +05 1930 Jun 21 +6 M +06/+07 1991 Mar 31 2 +5 M +05/+06 1992 +5 - +05 +Z Asia/Ho_Chi_Minh 7:6:40 - LMT 1906 Jul +7:6:30 - PLMT 1911 May +7 - +07 1942 D 31 23 +8 - +08 1945 Mar 14 23 +9 - +09 1945 S 2 +7 - +07 1947 Ap +8 - +08 1955 Jul +7 - +07 1959 D 31 23 +8 - +08 1975 Jun 13 +7 - +07 +R l 1917 o - Ja 1 0:1 1 D +R l 1917 o - Mar 25 2 0 S +R l 1942 o - Ja 1 2 1 D +R l 1942 o - Mar 29 2 0 S +R l 1942 o - S 27 2 1 D +R l 1943 1944 - Mar lastSun 2 0 S +R l 1943 o - O 3 2 1 D +Z Australia/Darwin 8:43:20 - LMT 1895 F +9 - ACST 1899 May +9:30 l AC%sT +R m 1974 o - O lastSun 2s 1 D +R m 1975 o - Mar Sun>=1 2s 0 S +R m 1983 o - O lastSun 2s 1 D +R m 1984 o - Mar Sun>=1 2s 0 S +R m 1991 o - N 17 2s 1 D +R m 1992 o - Mar Sun>=1 2s 0 S +R m 2006 o - D 3 2s 1 D +R m 2007 2009 - Mar lastSun 2s 0 S +R m 2007 2008 - O lastSun 2s 1 D +Z Australia/Perth 7:43:24 - LMT 1895 D +8 l AW%sT 1943 Jul +8 m AW%sT +Z Australia/Eucla 8:35:28 - LMT 1895 D +8:45 l +0845/+0945 1943 Jul +8:45 m +0845/+0945 +R n 1971 o - O lastSun 2s 1 D +R n 1972 o - F lastSun 2s 0 S +R n 1989 1991 - O lastSun 2s 1 D +R n 1990 1992 - Mar Sun>=1 2s 0 S +R o 1992 1993 - O lastSun 2s 1 D +R o 1993 1994 - Mar Sun>=1 2s 0 S +Z Australia/Brisbane 10:12:8 - LMT 1895 +10 l AE%sT 1971 +10 n AE%sT +Z Australia/Lindeman 9:55:56 - LMT 1895 +10 l AE%sT 1971 +10 n AE%sT 1992 Jul +10 o AE%sT +R p 1971 1985 - O lastSun 2s 1 D +R p 1986 o - O 19 2s 1 D +R p 1987 2007 - O lastSun 2s 1 D +R p 1972 o - F 27 2s 0 S +R p 1973 1985 - Mar Sun>=1 2s 0 S +R p 1986 1990 - Mar Sun>=15 2s 0 S +R p 1991 o - Mar 3 2s 0 S +R p 1992 o - Mar 22 2s 0 S +R p 1993 o - Mar 7 2s 0 S +R p 1994 o - Mar 20 2s 0 S +R p 1995 2005 - Mar lastSun 2s 0 S +R p 2006 o - Ap 2 2s 0 S +R p 2007 o - Mar lastSun 2s 0 S +R p 2008 ma - Ap Sun>=1 2s 0 S +R p 2008 ma - O Sun>=1 2s 1 D +Z Australia/Adelaide 9:14:20 - LMT 1895 F +9 - ACST 1899 May +9:30 l AC%sT 1971 +9:30 p AC%sT +R q 1967 o - O Sun>=1 2s 1 D +R q 1968 o - Mar lastSun 2s 0 S +R q 1968 1985 - O lastSun 2s 1 D +R q 1969 1971 - Mar Sun>=8 2s 0 S +R q 1972 o - F lastSun 2s 0 S +R q 1973 1981 - Mar Sun>=1 2s 0 S +R q 1982 1983 - Mar lastSun 2s 0 S +R q 1984 1986 - Mar Sun>=1 2s 0 S +R q 1986 o - O Sun>=15 2s 1 D +R q 1987 1990 - Mar Sun>=15 2s 0 S +R q 1987 o - O Sun>=22 2s 1 D +R q 1988 1990 - O lastSun 2s 1 D +R q 1991 1999 - O Sun>=1 2s 1 D +R q 1991 2005 - Mar lastSun 2s 0 S +R q 2000 o - Au lastSun 2s 1 D +R q 2001 
ma - O Sun>=1 2s 1 D +R q 2006 o - Ap Sun>=1 2s 0 S +R q 2007 o - Mar lastSun 2s 0 S +R q 2008 ma - Ap Sun>=1 2s 0 S +Z Australia/Hobart 9:49:16 - LMT 1895 S +10 - AEST 1916 O 1 2 +10 1 AEDT 1917 F +10 l AE%sT 1967 +10 q AE%sT +Z Australia/Currie 9:35:28 - LMT 1895 S +10 - AEST 1916 O 1 2 +10 1 AEDT 1917 F +10 l AE%sT 1971 Jul +10 q AE%sT +R r 1971 1985 - O lastSun 2s 1 D +R r 1972 o - F lastSun 2s 0 S +R r 1973 1985 - Mar Sun>=1 2s 0 S +R r 1986 1990 - Mar Sun>=15 2s 0 S +R r 1986 1987 - O Sun>=15 2s 1 D +R r 1988 1999 - O lastSun 2s 1 D +R r 1991 1994 - Mar Sun>=1 2s 0 S +R r 1995 2005 - Mar lastSun 2s 0 S +R r 2000 o - Au lastSun 2s 1 D +R r 2001 2007 - O lastSun 2s 1 D +R r 2006 o - Ap Sun>=1 2s 0 S +R r 2007 o - Mar lastSun 2s 0 S +R r 2008 ma - Ap Sun>=1 2s 0 S +R r 2008 ma - O Sun>=1 2s 1 D +Z Australia/Melbourne 9:39:52 - LMT 1895 F +10 l AE%sT 1971 +10 r AE%sT +R s 1971 1985 - O lastSun 2s 1 D +R s 1972 o - F 27 2s 0 S +R s 1973 1981 - Mar Sun>=1 2s 0 S +R s 1982 o - Ap Sun>=1 2s 0 S +R s 1983 1985 - Mar Sun>=1 2s 0 S +R s 1986 1989 - Mar Sun>=15 2s 0 S +R s 1986 o - O 19 2s 1 D +R s 1987 1999 - O lastSun 2s 1 D +R s 1990 1995 - Mar Sun>=1 2s 0 S +R s 1996 2005 - Mar lastSun 2s 0 S +R s 2000 o - Au lastSun 2s 1 D +R s 2001 2007 - O lastSun 2s 1 D +R s 2006 o - Ap Sun>=1 2s 0 S +R s 2007 o - Mar lastSun 2s 0 S +R s 2008 ma - Ap Sun>=1 2s 0 S +R s 2008 ma - O Sun>=1 2s 1 D +Z Australia/Sydney 10:4:52 - LMT 1895 F +10 l AE%sT 1971 +10 s AE%sT +Z Australia/Broken_Hill 9:25:48 - LMT 1895 F +10 - AEST 1896 Au 23 +9 - ACST 1899 May +9:30 l AC%sT 1971 +9:30 s AC%sT 2000 +9:30 p AC%sT +R t 1981 1984 - O lastSun 2 1 D +R t 1982 1985 - Mar Sun>=1 2 0 S +R t 1985 o - O lastSun 2 0:30 D +R t 1986 1989 - Mar Sun>=15 2 0 S +R t 1986 o - O 19 2 0:30 D +R t 1987 1999 - O lastSun 2 0:30 D +R t 1990 1995 - Mar Sun>=1 2 0 S +R t 1996 2005 - Mar lastSun 2 0 S +R t 2000 o - Au lastSun 2 0:30 D +R t 2001 2007 - O lastSun 2 0:30 D +R t 2006 o - Ap Sun>=1 2 0 S +R t 2007 o - Mar lastSun 2 0 S +R t 2008 ma - Ap Sun>=1 2 0 S +R t 2008 ma - O Sun>=1 2 0:30 D +Z Australia/Lord_Howe 10:36:20 - LMT 1895 F +10 - AEST 1981 Mar +10:30 t +1030/+1130 1985 Jul +10:30 t +1030/+11 +Z Antarctica/Macquarie 0 - -00 1899 N +10 - AEST 1916 O 1 2 +10 1 AEDT 1917 F +10 l AE%sT 1919 Ap 1 0s +0 - -00 1948 Mar 25 +10 l AE%sT 1967 +10 q AE%sT 2010 Ap 4 3 +11 - +11 +Z Indian/Christmas 7:2:52 - LMT 1895 F +7 - +07 +Z Indian/Cocos 6:27:40 - LMT 1900 +6:30 - +0630 +R u 1998 1999 - N Sun>=1 2 1 S +R u 1999 2000 - F lastSun 3 0 - +R u 2009 o - N 29 2 1 S +R u 2010 o - Mar lastSun 3 0 - +R u 2010 2013 - O Sun>=21 2 1 S +R u 2011 o - Mar Sun>=1 3 0 - +R u 2012 2013 - Ja Sun>=18 3 0 - +R u 2014 o - Ja Sun>=18 2 0 - +R u 2014 ma - N Sun>=1 2 1 S +R u 2015 ma - Ja Sun>=14 3 0 - +Z Pacific/Fiji 11:55:44 - LMT 1915 O 26 +12 u +12/+13 +Z Pacific/Gambier -8:59:48 - LMT 1912 O +-9 - -09 +Z Pacific/Marquesas -9:18 - LMT 1912 O +-9:30 - -0930 +Z Pacific/Tahiti -9:58:16 - LMT 1912 O +-10 - -10 +Z Pacific/Guam -14:21 - LMT 1844 D 31 +9:39 - LMT 1901 +10 - GST 2000 D 23 +10 - ChST +Li Pacific/Guam Pacific/Saipan +Z Pacific/Tarawa 11:32:4 - LMT 1901 +12 - +12 +Z Pacific/Enderbury -11:24:20 - LMT 1901 +-12 - -12 1979 O +-11 - -11 1995 +13 - +13 +Z Pacific/Kiritimati -10:29:20 - LMT 1901 +-10:40 - -1040 1979 O +-10 - -10 1995 +14 - +14 +Z Pacific/Majuro 11:24:48 - LMT 1901 +11 - +11 1969 O +12 - +12 +Z Pacific/Kwajalein 11:9:20 - LMT 1901 +11 - +11 1969 O +-12 - -12 1993 Au 20 +12 - +12 +Z Pacific/Chuuk 10:7:8 - LMT 1901 +10 - +10 +Z Pacific/Pohnpei 
10:32:52 - LMT 1901 +11 - +11 +Z Pacific/Kosrae 10:51:56 - LMT 1901 +11 - +11 1969 O +12 - +12 1999 +11 - +11 +Z Pacific/Nauru 11:7:40 - LMT 1921 Ja 15 +11:30 - +1130 1942 Mar 15 +9 - +09 1944 Au 15 +11:30 - +1130 1979 May +12 - +12 +R v 1977 1978 - D Sun>=1 0 1 S +R v 1978 1979 - F 27 0 0 - +R v 1996 o - D 1 2s 1 S +R v 1997 o - Mar 2 2s 0 - +Z Pacific/Noumea 11:5:48 - LMT 1912 Ja 13 +11 v +11/+12 +R w 1927 o - N 6 2 1 S +R w 1928 o - Mar 4 2 0 M +R w 1928 1933 - O Sun>=8 2 0:30 S +R w 1929 1933 - Mar Sun>=15 2 0 M +R w 1934 1940 - Ap lastSun 2 0 M +R w 1934 1940 - S lastSun 2 0:30 S +R w 1946 o - Ja 1 0 0 S +R w 1974 o - N Sun>=1 2s 1 D +R x 1974 o - N Sun>=1 2:45s 1 D +R w 1975 o - F lastSun 2s 0 S +R x 1975 o - F lastSun 2:45s 0 S +R w 1975 1988 - O lastSun 2s 1 D +R x 1975 1988 - O lastSun 2:45s 1 D +R w 1976 1989 - Mar Sun>=1 2s 0 S +R x 1976 1989 - Mar Sun>=1 2:45s 0 S +R w 1989 o - O Sun>=8 2s 1 D +R x 1989 o - O Sun>=8 2:45s 1 D +R w 1990 2006 - O Sun>=1 2s 1 D +R x 1990 2006 - O Sun>=1 2:45s 1 D +R w 1990 2007 - Mar Sun>=15 2s 0 S +R x 1990 2007 - Mar Sun>=15 2:45s 0 S +R w 2007 ma - S lastSun 2s 1 D +R x 2007 ma - S lastSun 2:45s 1 D +R w 2008 ma - Ap Sun>=1 2s 0 S +R x 2008 ma - Ap Sun>=1 2:45s 0 S +Z Pacific/Auckland 11:39:4 - LMT 1868 N 2 +11:30 w NZ%sT 1946 +12 w NZ%sT +Z Pacific/Chatham 12:13:48 - LMT 1868 N 2 +12:15 - +1215 1946 +12:45 x +1245/+1345 +Li Pacific/Auckland Antarctica/McMurdo +R y 1978 o - N 12 0 0:30 HS +R y 1979 1991 - Mar Sun>=1 0 0 - +R y 1979 1990 - O lastSun 0 0:30 HS +Z Pacific/Rarotonga -10:39:4 - LMT 1901 +-10:30 - -1030 1978 N 12 +-10 y -10/-0930 +Z Pacific/Niue -11:19:40 - LMT 1901 +-11:20 - -1120 1951 +-11:30 - -1130 1978 O +-11 - -11 +Z Pacific/Norfolk 11:11:52 - LMT 1901 +11:12 - +1112 1951 +11:30 - +1130 1974 O 27 2 +11:30 1 +1230 1975 Mar 2 2 +11:30 - +1130 2015 O 4 2 +11 - +11 +Z Pacific/Palau 8:57:56 - LMT 1901 +9 - +09 +Z Pacific/Port_Moresby 9:48:40 - LMT 1880 +9:48:32 - PMMT 1895 +10 - +10 +Z Pacific/Bougainville 10:22:16 - LMT 1880 +9:48:32 - PMMT 1895 +10 - +10 1942 Jul +9 - +09 1945 Au 21 +10 - +10 2014 D 28 2 +11 - +11 +Z Pacific/Pitcairn -8:40:20 - LMT 1901 +-8:30 - -0830 1998 Ap 27 +-8 - -08 +Z Pacific/Pago_Pago 12:37:12 - LMT 1892 Jul 5 +-11:22:48 - LMT 1911 +-11 - SST +Li Pacific/Pago_Pago Pacific/Midway +R z 2010 o - S lastSun 0 1 D +R z 2011 o - Ap Sat>=1 4 0 S +R z 2011 o - S lastSat 3 1 D +R z 2012 ma - Ap Sun>=1 4 0 S +R z 2012 ma - S lastSun 3 1 D +Z Pacific/Apia 12:33:4 - LMT 1892 Jul 5 +-11:26:56 - LMT 1911 +-11:30 - -1130 1950 +-11 z -11/-10 2011 D 29 24 +13 z +13/+14 +Z Pacific/Guadalcanal 10:39:48 - LMT 1912 O +11 - +11 +Z Pacific/Fakaofo -11:24:56 - LMT 1901 +-11 - -11 2011 D 30 +13 - +13 +R ! 1999 o - O 7 2s 1 S +R ! 2000 o - Mar 19 2s 0 - +R ! 2000 2001 - N Sun>=1 2 1 S +R ! 2001 2002 - Ja lastSun 2 0 - +R ! 2016 o - N Sun>=1 2 1 S +R ! 2017 o - Ja Sun>=15 3 0 - +Z Pacific/Tongatapu 12:19:20 - LMT 1901 +12:20 - +1220 1941 +13 - +13 1999 +13 ! 
+13/+14 +Z Pacific/Funafuti 11:56:52 - LMT 1901 +12 - +12 +Z Pacific/Wake 11:6:28 - LMT 1901 +12 - +12 +R $ 1983 o - S 25 0 1 S +R $ 1984 1991 - Mar Sun>=23 0 0 - +R $ 1984 o - O 23 0 1 S +R $ 1985 1991 - S Sun>=23 0 1 S +R $ 1992 1993 - Ja Sun>=23 0 0 - +R $ 1992 o - O Sun>=23 0 1 S +Z Pacific/Efate 11:13:16 - LMT 1912 Ja 13 +11 $ +11/+12 +Z Pacific/Wallis 12:15:20 - LMT 1901 +12 - +12 +R % 1916 o - May 21 2s 1 BST +R % 1916 o - O 1 2s 0 GMT +R % 1917 o - Ap 8 2s 1 BST +R % 1917 o - S 17 2s 0 GMT +R % 1918 o - Mar 24 2s 1 BST +R % 1918 o - S 30 2s 0 GMT +R % 1919 o - Mar 30 2s 1 BST +R % 1919 o - S 29 2s 0 GMT +R % 1920 o - Mar 28 2s 1 BST +R % 1920 o - O 25 2s 0 GMT +R % 1921 o - Ap 3 2s 1 BST +R % 1921 o - O 3 2s 0 GMT +R % 1922 o - Mar 26 2s 1 BST +R % 1922 o - O 8 2s 0 GMT +R % 1923 o - Ap Sun>=16 2s 1 BST +R % 1923 1924 - S Sun>=16 2s 0 GMT +R % 1924 o - Ap Sun>=9 2s 1 BST +R % 1925 1926 - Ap Sun>=16 2s 1 BST +R % 1925 1938 - O Sun>=2 2s 0 GMT +R % 1927 o - Ap Sun>=9 2s 1 BST +R % 1928 1929 - Ap Sun>=16 2s 1 BST +R % 1930 o - Ap Sun>=9 2s 1 BST +R % 1931 1932 - Ap Sun>=16 2s 1 BST +R % 1933 o - Ap Sun>=9 2s 1 BST +R % 1934 o - Ap Sun>=16 2s 1 BST +R % 1935 o - Ap Sun>=9 2s 1 BST +R % 1936 1937 - Ap Sun>=16 2s 1 BST +R % 1938 o - Ap Sun>=9 2s 1 BST +R % 1939 o - Ap Sun>=16 2s 1 BST +R % 1939 o - N Sun>=16 2s 0 GMT +R % 1940 o - F Sun>=23 2s 1 BST +R % 1941 o - May Sun>=2 1s 2 BDST +R % 1941 1943 - Au Sun>=9 1s 1 BST +R % 1942 1944 - Ap Sun>=2 1s 2 BDST +R % 1944 o - S Sun>=16 1s 1 BST +R % 1945 o - Ap M>=2 1s 2 BDST +R % 1945 o - Jul Sun>=9 1s 1 BST +R % 1945 1946 - O Sun>=2 2s 0 GMT +R % 1946 o - Ap Sun>=9 2s 1 BST +R % 1947 o - Mar 16 2s 1 BST +R % 1947 o - Ap 13 1s 2 BDST +R % 1947 o - Au 10 1s 1 BST +R % 1947 o - N 2 2s 0 GMT +R % 1948 o - Mar 14 2s 1 BST +R % 1948 o - O 31 2s 0 GMT +R % 1949 o - Ap 3 2s 1 BST +R % 1949 o - O 30 2s 0 GMT +R % 1950 1952 - Ap Sun>=14 2s 1 BST +R % 1950 1952 - O Sun>=21 2s 0 GMT +R % 1953 o - Ap Sun>=16 2s 1 BST +R % 1953 1960 - O Sun>=2 2s 0 GMT +R % 1954 o - Ap Sun>=9 2s 1 BST +R % 1955 1956 - Ap Sun>=16 2s 1 BST +R % 1957 o - Ap Sun>=9 2s 1 BST +R % 1958 1959 - Ap Sun>=16 2s 1 BST +R % 1960 o - Ap Sun>=9 2s 1 BST +R % 1961 1963 - Mar lastSun 2s 1 BST +R % 1961 1968 - O Sun>=23 2s 0 GMT +R % 1964 1967 - Mar Sun>=19 2s 1 BST +R % 1968 o - F 18 2s 1 BST +R % 1972 1980 - Mar Sun>=16 2s 1 BST +R % 1972 1980 - O Sun>=23 2s 0 GMT +R % 1981 1995 - Mar lastSun 1u 1 BST +R % 1981 1989 - O Sun>=23 1u 0 GMT +R % 1990 1995 - O Sun>=22 1u 0 GMT +Z Europe/London -0:1:15 - LMT 1847 D 1 0s +0 % %s 1968 O 27 +1 - BST 1971 O 31 2u +0 % %s 1996 +0 O GMT/BST +Li Europe/London Europe/Jersey +Li Europe/London Europe/Guernsey +Li Europe/London Europe/Isle_of_Man +Z Europe/Dublin -0:25 - LMT 1880 Au 2 +-0:25:21 - DMT 1916 May 21 2s +-0:25:21 1 IST 1916 O 1 2s +0 % %s 1921 D 6 +0 % GMT/IST 1940 F 25 2s +0 1 IST 1946 O 6 2s +0 - GMT 1947 Mar 16 2s +0 1 IST 1947 N 2 2s +0 - GMT 1948 Ap 18 2s +0 % GMT/IST 1968 O 27 +1 - IST 1971 O 31 2u +0 % GMT/IST 1996 +0 O GMT/IST +R O 1977 1980 - Ap Sun>=1 1u 1 S +R O 1977 o - S lastSun 1u 0 - +R O 1978 o - O 1 1u 0 - +R O 1979 1995 - S lastSun 1u 0 - +R O 1981 ma - Mar lastSun 1u 1 S +R O 1996 ma - O lastSun 1u 0 - +R & 1977 1980 - Ap Sun>=1 1s 1 S +R & 1977 o - S lastSun 1s 0 - +R & 1978 o - O 1 1s 0 - +R & 1979 1995 - S lastSun 1s 0 - +R & 1981 ma - Mar lastSun 1s 1 S +R & 1996 ma - O lastSun 1s 0 - +R ' 1916 o - Ap 30 23 1 S +R ' 1916 o - O 1 1 0 - +R ' 1917 1918 - Ap M>=15 2s 1 S +R ' 1917 1918 - S M>=15 2s 0 - +R ' 1940 o - Ap 1 2s 
1 S +R ' 1942 o - N 2 2s 0 - +R ' 1943 o - Mar 29 2s 1 S +R ' 1943 o - O 4 2s 0 - +R ' 1944 1945 - Ap M>=1 2s 1 S +R ' 1944 o - O 2 2s 0 - +R ' 1945 o - S 16 2s 0 - +R ' 1977 1980 - Ap Sun>=1 2s 1 S +R ' 1977 o - S lastSun 2s 0 - +R ' 1978 o - O 1 2s 0 - +R ' 1979 1995 - S lastSun 2s 0 - +R ' 1981 ma - Mar lastSun 2s 1 S +R ' 1996 ma - O lastSun 2s 0 - +R W 1977 1980 - Ap Sun>=1 0 1 S +R W 1977 o - S lastSun 0 0 - +R W 1978 o - O 1 0 0 - +R W 1979 1995 - S lastSun 0 0 - +R W 1981 ma - Mar lastSun 0 1 S +R W 1996 ma - O lastSun 0 0 - +R M 1917 o - Jul 1 23 1 MST +R M 1917 o - D 28 0 0 MMT +R M 1918 o - May 31 22 2 MDST +R M 1918 o - S 16 1 1 MST +R M 1919 o - May 31 23 2 MDST +R M 1919 o - Jul 1 0u 1 MSD +R M 1919 o - Au 16 0 0 MSK +R M 1921 o - F 14 23 1 MSD +R M 1921 o - Mar 20 23 2 +05 +R M 1921 o - S 1 0 1 MSD +R M 1921 o - O 1 0 0 - +R M 1981 1984 - Ap 1 0 1 S +R M 1981 1983 - O 1 0 0 - +R M 1984 1995 - S lastSun 2s 0 - +R M 1985 2010 - Mar lastSun 2s 1 S +R M 1996 2010 - O lastSun 2s 0 - +Z WET 0 O WE%sT +Z CET 1 ' CE%sT +Z MET 1 ' ME%sT +Z EET 2 O EE%sT +R ( 1940 o - Jun 16 0 1 S +R ( 1942 o - N 2 3 0 - +R ( 1943 o - Mar 29 2 1 S +R ( 1943 o - Ap 10 3 0 - +R ( 1974 o - May 4 0 1 S +R ( 1974 o - O 2 0 0 - +R ( 1975 o - May 1 0 1 S +R ( 1975 o - O 2 0 0 - +R ( 1976 o - May 2 0 1 S +R ( 1976 o - O 3 0 0 - +R ( 1977 o - May 8 0 1 S +R ( 1977 o - O 2 0 0 - +R ( 1978 o - May 6 0 1 S +R ( 1978 o - O 1 0 0 - +R ( 1979 o - May 5 0 1 S +R ( 1979 o - S 30 0 0 - +R ( 1980 o - May 3 0 1 S +R ( 1980 o - O 4 0 0 - +R ( 1981 o - Ap 26 0 1 S +R ( 1981 o - S 27 0 0 - +R ( 1982 o - May 2 0 1 S +R ( 1982 o - O 3 0 0 - +R ( 1983 o - Ap 18 0 1 S +R ( 1983 o - O 1 0 0 - +R ( 1984 o - Ap 1 0 1 S +Z Europe/Tirane 1:19:20 - LMT 1914 +1 - CET 1940 Jun 16 +1 ( CE%sT 1984 Jul +1 O CE%sT +Z Europe/Andorra 0:6:4 - LMT 1901 +0 - WET 1946 S 30 +1 - CET 1985 Mar 31 2 +1 O CE%sT +R ) 1920 o - Ap 5 2s 1 S +R ) 1920 o - S 13 2s 0 - +R ) 1946 o - Ap 14 2s 1 S +R ) 1946 1948 - O Sun>=1 2s 0 - +R ) 1947 o - Ap 6 2s 1 S +R ) 1948 o - Ap 18 2s 1 S +R ) 1980 o - Ap 6 0 1 S +R ) 1980 o - S 28 0 0 - +Z Europe/Vienna 1:5:21 - LMT 1893 Ap +1 ' CE%sT 1920 +1 ) CE%sT 1940 Ap 1 2s +1 ' CE%sT 1945 Ap 2 2s +1 1 CEST 1945 Ap 12 2s +1 - CET 1946 +1 ) CE%sT 1981 +1 O CE%sT +Z Europe/Minsk 1:50:16 - LMT 1880 +1:50 - MMT 1924 May 2 +2 - EET 1930 Jun 21 +3 - MSK 1941 Jun 28 +1 ' CE%sT 1944 Jul 3 +3 M MSK/MSD 1990 +3 - MSK 1991 Mar 31 2s +2 M EE%sT 2011 Mar 27 2s +3 - +03 +R * 1918 o - Mar 9 0s 1 S +R * 1918 1919 - O Sat>=1 23s 0 - +R * 1919 o - Mar 1 23s 1 S +R * 1920 o - F 14 23s 1 S +R * 1920 o - O 23 23s 0 - +R * 1921 o - Mar 14 23s 1 S +R * 1921 o - O 25 23s 0 - +R * 1922 o - Mar 25 23s 1 S +R * 1922 1927 - O Sat>=1 23s 0 - +R * 1923 o - Ap 21 23s 1 S +R * 1924 o - Mar 29 23s 1 S +R * 1925 o - Ap 4 23s 1 S +R * 1926 o - Ap 17 23s 1 S +R * 1927 o - Ap 9 23s 1 S +R * 1928 o - Ap 14 23s 1 S +R * 1928 1938 - O Sun>=2 2s 0 - +R * 1929 o - Ap 21 2s 1 S +R * 1930 o - Ap 13 2s 1 S +R * 1931 o - Ap 19 2s 1 S +R * 1932 o - Ap 3 2s 1 S +R * 1933 o - Mar 26 2s 1 S +R * 1934 o - Ap 8 2s 1 S +R * 1935 o - Mar 31 2s 1 S +R * 1936 o - Ap 19 2s 1 S +R * 1937 o - Ap 4 2s 1 S +R * 1938 o - Mar 27 2s 1 S +R * 1939 o - Ap 16 2s 1 S +R * 1939 o - N 19 2s 0 - +R * 1940 o - F 25 2s 1 S +R * 1944 o - S 17 2s 0 - +R * 1945 o - Ap 2 2s 1 S +R * 1945 o - S 16 2s 0 - +R * 1946 o - May 19 2s 1 S +R * 1946 o - O 7 2s 0 - +Z Europe/Brussels 0:17:30 - LMT 1880 +0:17:30 - BMT 1892 May 1 12 +0 - WET 1914 N 8 +1 - CET 1916 May +1 ' CE%sT 1918 N 11 11u +0 * WE%sT 1940 May 
20 2s +1 ' CE%sT 1944 S 3 +1 * CE%sT 1977 +1 O CE%sT +R + 1979 o - Mar 31 23 1 S +R + 1979 o - O 1 1 0 - +R + 1980 1982 - Ap Sat>=1 23 1 S +R + 1980 o - S 29 1 0 - +R + 1981 o - S 27 2 0 - +Z Europe/Sofia 1:33:16 - LMT 1880 +1:56:56 - IMT 1894 N 30 +2 - EET 1942 N 2 3 +1 ' CE%sT 1945 +1 - CET 1945 Ap 2 3 +2 - EET 1979 Mar 31 23 +2 + EE%sT 1982 S 26 3 +2 ' EE%sT 1991 +2 W EE%sT 1997 +2 O EE%sT +R , 1945 o - Ap 8 2s 1 S +R , 1945 o - N 18 2s 0 - +R , 1946 o - May 6 2s 1 S +R , 1946 1949 - O Sun>=1 2s 0 - +R , 1947 o - Ap 20 2s 1 S +R , 1948 o - Ap 18 2s 1 S +R , 1949 o - Ap 9 2s 1 S +Z Europe/Prague 0:57:44 - LMT 1850 +0:57:44 - PMT 1891 O +1 ' CE%sT 1944 S 17 2s +1 , CE%sT 1979 +1 O CE%sT +R . 1916 o - May 14 23 1 S +R . 1916 o - S 30 23 0 - +R . 1940 o - May 15 0 1 S +R . 1945 o - Ap 2 2s 1 S +R . 1945 o - Au 15 2s 0 - +R . 1946 o - May 1 2s 1 S +R . 1946 o - S 1 2s 0 - +R . 1947 o - May 4 2s 1 S +R . 1947 o - Au 10 2s 0 - +R . 1948 o - May 9 2s 1 S +R . 1948 o - Au 8 2s 0 - +Z Europe/Copenhagen 0:50:20 - LMT 1890 +0:50:20 - CMT 1894 +1 . CE%sT 1942 N 2 2s +1 ' CE%sT 1945 Ap 2 2 +1 . CE%sT 1980 +1 O CE%sT +Z Atlantic/Faroe -0:27:4 - LMT 1908 Ja 11 +0 - WET 1981 +0 O WE%sT +R / 1991 1992 - Mar lastSun 2 1 D +R / 1991 1992 - S lastSun 2 0 S +R / 1993 2006 - Ap Sun>=1 2 1 D +R / 1993 2006 - O lastSun 2 0 S +R / 2007 ma - Mar Sun>=8 2 1 D +R / 2007 ma - N Sun>=1 2 0 S +Z America/Danmarkshavn -1:14:40 - LMT 1916 Jul 28 +-3 - -03 1980 Ap 6 2 +-3 O -03/-02 1996 +0 - GMT +Z America/Scoresbysund -1:27:52 - LMT 1916 Jul 28 +-2 - -02 1980 Ap 6 2 +-2 ' -02/-01 1981 Mar 29 +-1 O -01/+00 +Z America/Godthab -3:26:56 - LMT 1916 Jul 28 +-3 - -03 1980 Ap 6 2 +-3 O -03/-02 +Z America/Thule -4:35:8 - LMT 1916 Jul 28 +-4 / A%sT +Z Europe/Tallinn 1:39 - LMT 1880 +1:39 - TMT 1918 F +1 ' CE%sT 1919 Jul +1:39 - TMT 1921 May +2 - EET 1940 Au 6 +3 - MSK 1941 S 15 +1 ' CE%sT 1944 S 22 +3 M MSK/MSD 1989 Mar 26 2s +2 1 EEST 1989 S 24 2s +2 ' EE%sT 1998 S 22 +2 O EE%sT 1999 O 31 4 +2 - EET 2002 F 21 +2 O EE%sT +R : 1942 o - Ap 2 24 1 S +R : 1942 o - O 4 1 0 - +R : 1981 1982 - Mar lastSun 2 1 S +R : 1981 1982 - S lastSun 3 0 - +Z Europe/Helsinki 1:39:49 - LMT 1878 May 31 +1:39:49 - HMT 1921 May +2 : EE%sT 1983 +2 O EE%sT +Li Europe/Helsinki Europe/Mariehamn +R ; 1916 o - Jun 14 23s 1 S +R ; 1916 1919 - O Sun>=1 23s 0 - +R ; 1917 o - Mar 24 23s 1 S +R ; 1918 o - Mar 9 23s 1 S +R ; 1919 o - Mar 1 23s 1 S +R ; 1920 o - F 14 23s 1 S +R ; 1920 o - O 23 23s 0 - +R ; 1921 o - Mar 14 23s 1 S +R ; 1921 o - O 25 23s 0 - +R ; 1922 o - Mar 25 23s 1 S +R ; 1922 1938 - O Sat>=1 23s 0 - +R ; 1923 o - May 26 23s 1 S +R ; 1924 o - Mar 29 23s 1 S +R ; 1925 o - Ap 4 23s 1 S +R ; 1926 o - Ap 17 23s 1 S +R ; 1927 o - Ap 9 23s 1 S +R ; 1928 o - Ap 14 23s 1 S +R ; 1929 o - Ap 20 23s 1 S +R ; 1930 o - Ap 12 23s 1 S +R ; 1931 o - Ap 18 23s 1 S +R ; 1932 o - Ap 2 23s 1 S +R ; 1933 o - Mar 25 23s 1 S +R ; 1934 o - Ap 7 23s 1 S +R ; 1935 o - Mar 30 23s 1 S +R ; 1936 o - Ap 18 23s 1 S +R ; 1937 o - Ap 3 23s 1 S +R ; 1938 o - Mar 26 23s 1 S +R ; 1939 o - Ap 15 23s 1 S +R ; 1939 o - N 18 23s 0 - +R ; 1940 o - F 25 2 1 S +R ; 1941 o - May 5 0 2 M +R ; 1941 o - O 6 0 1 S +R ; 1942 o - Mar 9 0 2 M +R ; 1942 o - N 2 3 1 S +R ; 1943 o - Mar 29 2 2 M +R ; 1943 o - O 4 3 1 S +R ; 1944 o - Ap 3 2 2 M +R ; 1944 o - O 8 1 1 S +R ; 1945 o - Ap 2 2 2 M +R ; 1945 o - S 16 3 0 - +R ; 1976 o - Mar 28 1 1 S +R ; 1976 o - S 26 1 0 - +Z Europe/Paris 0:9:21 - LMT 1891 Mar 15 0:1 +0:9:21 - PMT 1911 Mar 11 0:1 +0 ; WE%sT 1940 Jun 14 23 +1 ' CE%sT 1944 Au 25 +0 ; WE%sT 
1945 S 16 3 +1 ; CE%sT 1977 +1 O CE%sT +R < 1946 o - Ap 14 2s 1 S +R < 1946 o - O 7 2s 0 - +R < 1947 1949 - O Sun>=1 2s 0 - +R < 1947 o - Ap 6 3s 1 S +R < 1947 o - May 11 2s 2 M +R < 1947 o - Jun 29 3 1 S +R < 1948 o - Ap 18 2s 1 S +R < 1949 o - Ap 10 2s 1 S +R = 1945 o - May 24 2 2 M +R = 1945 o - S 24 3 1 S +R = 1945 o - N 18 2s 0 - +Z Europe/Berlin 0:53:28 - LMT 1893 Ap +1 ' CE%sT 1945 May 24 2 +1 = CE%sT 1946 +1 < CE%sT 1980 +1 O CE%sT +Li Europe/Zurich Europe/Busingen +Z Europe/Gibraltar -0:21:24 - LMT 1880 Au 2 0s +0 % %s 1957 Ap 14 2 +1 - CET 1982 +1 O CE%sT +R > 1932 o - Jul 7 0 1 S +R > 1932 o - S 1 0 0 - +R > 1941 o - Ap 7 0 1 S +R > 1942 o - N 2 3 0 - +R > 1943 o - Mar 30 0 1 S +R > 1943 o - O 4 0 0 - +R > 1952 o - Jul 1 0 1 S +R > 1952 o - N 2 0 0 - +R > 1975 o - Ap 12 0s 1 S +R > 1975 o - N 26 0s 0 - +R > 1976 o - Ap 11 2s 1 S +R > 1976 o - O 10 2s 0 - +R > 1977 1978 - Ap Sun>=1 2s 1 S +R > 1977 o - S 26 2s 0 - +R > 1978 o - S 24 4 0 - +R > 1979 o - Ap 1 9 1 S +R > 1979 o - S 29 2 0 - +R > 1980 o - Ap 1 0 1 S +R > 1980 o - S 28 0 0 - +Z Europe/Athens 1:34:52 - LMT 1895 S 14 +1:34:52 - AMT 1916 Jul 28 0:1 +2 > EE%sT 1941 Ap 30 +1 > CE%sT 1944 Ap 4 +2 > EE%sT 1981 +2 O EE%sT +R ? 1918 o - Ap 1 3 1 S +R ? 1918 o - S 16 3 0 - +R ? 1919 o - Ap 15 3 1 S +R ? 1919 o - N 24 3 0 - +R ? 1945 o - May 1 23 1 S +R ? 1945 o - N 1 0 0 - +R ? 1946 o - Mar 31 2s 1 S +R ? 1946 1949 - O Sun>=1 2s 0 - +R ? 1947 1949 - Ap Sun>=4 2s 1 S +R ? 1950 o - Ap 17 2s 1 S +R ? 1950 o - O 23 2s 0 - +R ? 1954 1955 - May 23 0 1 S +R ? 1954 1955 - O 3 0 0 - +R ? 1956 o - Jun Sun>=1 0 1 S +R ? 1956 o - S lastSun 0 0 - +R ? 1957 o - Jun Sun>=1 1 1 S +R ? 1957 o - S lastSun 3 0 - +R ? 1980 o - Ap 6 1 1 S +Z Europe/Budapest 1:16:20 - LMT 1890 O +1 ' CE%sT 1918 +1 ? CE%sT 1941 Ap 8 +1 ' CE%sT 1945 +1 ? 
CE%sT 1980 S 28 2s +1 O CE%sT +R @ 1917 1919 - F 19 23 1 S +R @ 1917 o - O 21 1 0 - +R @ 1918 1919 - N 16 1 0 - +R @ 1921 o - Mar 19 23 1 S +R @ 1921 o - Jun 23 1 0 - +R @ 1939 o - Ap 29 23 1 S +R @ 1939 o - O 29 2 0 - +R @ 1940 o - F 25 2 1 S +R @ 1940 1941 - N Sun>=2 1s 0 - +R @ 1941 1942 - Mar Sun>=2 1s 1 S +R @ 1943 1946 - Mar Sun>=1 1s 1 S +R @ 1942 1948 - O Sun>=22 1s 0 - +R @ 1947 1967 - Ap Sun>=1 1s 1 S +R @ 1949 o - O 30 1s 0 - +R @ 1950 1966 - O Sun>=22 1s 0 - +R @ 1967 o - O 29 1s 0 - +Z Atlantic/Reykjavik -1:28 - LMT 1908 +-1 @ -01/+00 1968 Ap 7 1s +0 - GMT +R [ 1916 o - Jun 3 24 1 S +R [ 1916 1917 - S 30 24 0 - +R [ 1917 o - Mar 31 24 1 S +R [ 1918 o - Mar 9 24 1 S +R [ 1918 o - O 6 24 0 - +R [ 1919 o - Mar 1 24 1 S +R [ 1919 o - O 4 24 0 - +R [ 1920 o - Mar 20 24 1 S +R [ 1920 o - S 18 24 0 - +R [ 1940 o - Jun 14 24 1 S +R [ 1942 o - N 2 2s 0 - +R [ 1943 o - Mar 29 2s 1 S +R [ 1943 o - O 4 2s 0 - +R [ 1944 o - Ap 2 2s 1 S +R [ 1944 o - S 17 2s 0 - +R [ 1945 o - Ap 2 2 1 S +R [ 1945 o - S 15 1 0 - +R [ 1946 o - Mar 17 2s 1 S +R [ 1946 o - O 6 2s 0 - +R [ 1947 o - Mar 16 0s 1 S +R [ 1947 o - O 5 0s 0 - +R [ 1948 o - F 29 2s 1 S +R [ 1948 o - O 3 2s 0 - +R [ 1966 1968 - May Sun>=22 0s 1 S +R [ 1966 o - S 24 24 0 - +R [ 1967 1969 - S Sun>=22 0s 0 - +R [ 1969 o - Jun 1 0s 1 S +R [ 1970 o - May 31 0s 1 S +R [ 1970 o - S lastSun 0s 0 - +R [ 1971 1972 - May Sun>=22 0s 1 S +R [ 1971 o - S lastSun 0s 0 - +R [ 1972 o - O 1 0s 0 - +R [ 1973 o - Jun 3 0s 1 S +R [ 1973 1974 - S lastSun 0s 0 - +R [ 1974 o - May 26 0s 1 S +R [ 1975 o - Jun 1 0s 1 S +R [ 1975 1977 - S lastSun 0s 0 - +R [ 1976 o - May 30 0s 1 S +R [ 1977 1979 - May Sun>=22 0s 1 S +R [ 1978 o - O 1 0s 0 - +R [ 1979 o - S 30 0s 0 - +Z Europe/Rome 0:49:56 - LMT 1866 S 22 +0:49:56 - RMT 1893 O 31 23:49:56 +1 [ CE%sT 1943 S 10 +1 ' CE%sT 1944 Jun 4 +1 [ CE%sT 1980 +1 O CE%sT +Li Europe/Rome Europe/Vatican +Li Europe/Rome Europe/San_Marino +R \ 1989 1996 - Mar lastSun 2s 1 S +R \ 1989 1996 - S lastSun 2s 0 - +Z Europe/Riga 1:36:34 - LMT 1880 +1:36:34 - RMT 1918 Ap 15 2 +1:36:34 1 LST 1918 S 16 3 +1:36:34 - RMT 1919 Ap 1 2 +1:36:34 1 LST 1919 May 22 3 +1:36:34 - RMT 1926 May 11 +2 - EET 1940 Au 5 +3 - MSK 1941 Jul +1 ' CE%sT 1944 O 13 +3 M MSK/MSD 1989 Mar lastSun 2s +2 1 EEST 1989 S lastSun 2s +2 \ EE%sT 1997 Ja 21 +2 O EE%sT 2000 F 29 +2 - EET 2001 Ja 2 +2 O EE%sT +Li Europe/Zurich Europe/Vaduz +Z Europe/Vilnius 1:41:16 - LMT 1880 +1:24 - WMT 1917 +1:35:36 - KMT 1919 O 10 +1 - CET 1920 Jul 12 +2 - EET 1920 O 9 +1 - CET 1940 Au 3 +3 - MSK 1941 Jun 24 +1 ' CE%sT 1944 Au +3 M MSK/MSD 1989 Mar 26 2s +2 M EE%sT 1991 S 29 2s +2 ' EE%sT 1998 +2 - EET 1998 Mar 29 1u +1 O CE%sT 1999 O 31 1u +2 - EET 2003 +2 O EE%sT +R ] 1916 o - May 14 23 1 S +R ] 1916 o - O 1 1 0 - +R ] 1917 o - Ap 28 23 1 S +R ] 1917 o - S 17 1 0 - +R ] 1918 o - Ap M>=15 2s 1 S +R ] 1918 o - S M>=15 2s 0 - +R ] 1919 o - Mar 1 23 1 S +R ] 1919 o - O 5 3 0 - +R ] 1920 o - F 14 23 1 S +R ] 1920 o - O 24 2 0 - +R ] 1921 o - Mar 14 23 1 S +R ] 1921 o - O 26 2 0 - +R ] 1922 o - Mar 25 23 1 S +R ] 1922 o - O Sun>=2 1 0 - +R ] 1923 o - Ap 21 23 1 S +R ] 1923 o - O Sun>=2 2 0 - +R ] 1924 o - Mar 29 23 1 S +R ] 1924 1928 - O Sun>=2 1 0 - +R ] 1925 o - Ap 5 23 1 S +R ] 1926 o - Ap 17 23 1 S +R ] 1927 o - Ap 9 23 1 S +R ] 1928 o - Ap 14 23 1 S +R ] 1929 o - Ap 20 23 1 S +Z Europe/Luxembourg 0:24:36 - LMT 1904 Jun +1 ] CE%sT 1918 N 25 +0 ] WE%sT 1929 O 6 2s +0 * WE%sT 1940 May 14 3 +1 ' WE%sT 1944 S 18 3 +1 * CE%sT 1977 +1 O CE%sT +R ^ 1973 o - Mar 31 0s 1 S +R ^ 1973 o - S 29 0s 0 - 
+R ^ 1974 o - Ap 21 0s 1 S +R ^ 1974 o - S 16 0s 0 - +R ^ 1975 1979 - Ap Sun>=15 2 1 S +R ^ 1975 1980 - S Sun>=15 2 0 - +R ^ 1980 o - Mar 31 2 1 S +Z Europe/Malta 0:58:4 - LMT 1893 N 2 0s +1 [ CE%sT 1973 Mar 31 +1 ^ CE%sT 1981 +1 O CE%sT +R _ 1997 ma - Mar lastSun 2 1 S +R _ 1997 ma - O lastSun 3 0 - +Z Europe/Chisinau 1:55:20 - LMT 1880 +1:55 - CMT 1918 F 15 +1:44:24 - BMT 1931 Jul 24 +2 ` EE%sT 1940 Au 15 +2 1 EEST 1941 Jul 17 +1 ' CE%sT 1944 Au 24 +3 M MSK/MSD 1990 May 6 2 +2 M EE%sT 1992 +2 W EE%sT 1997 +2 _ EE%sT +Z Europe/Monaco 0:29:32 - LMT 1891 Mar 15 +0:9:21 - PMT 1911 Mar 11 +0 ; WE%sT 1945 S 16 3 +1 ; CE%sT 1977 +1 O CE%sT +R { 1916 o - May 1 0 1 NST +R { 1916 o - O 1 0 0 AMT +R { 1917 o - Ap 16 2s 1 NST +R { 1917 o - S 17 2s 0 AMT +R { 1918 1921 - Ap M>=1 2s 1 NST +R { 1918 1921 - S lastM 2s 0 AMT +R { 1922 o - Mar lastSun 2s 1 NST +R { 1922 1936 - O Sun>=2 2s 0 AMT +R { 1923 o - Jun F>=1 2s 1 NST +R { 1924 o - Mar lastSun 2s 1 NST +R { 1925 o - Jun F>=1 2s 1 NST +R { 1926 1931 - May 15 2s 1 NST +R { 1932 o - May 22 2s 1 NST +R { 1933 1936 - May 15 2s 1 NST +R { 1937 o - May 22 2s 1 NST +R { 1937 o - Jul 1 0 1 S +R { 1937 1939 - O Sun>=2 2s 0 - +R { 1938 1939 - May 15 2s 1 S +R { 1945 o - Ap 2 2s 1 S +R { 1945 o - S 16 2s 0 - +Z Europe/Amsterdam 0:19:32 - LMT 1835 +0:19:32 { %s 1937 Jul +0:20 { +0020/+0120 1940 May 16 +1 ' CE%sT 1945 Ap 2 2 +1 { CE%sT 1977 +1 O CE%sT +R | 1916 o - May 22 1 1 S +R | 1916 o - S 30 0 0 - +R | 1945 o - Ap 2 2s 1 S +R | 1945 o - O 1 2s 0 - +R | 1959 1964 - Mar Sun>=15 2s 1 S +R | 1959 1965 - S Sun>=15 2s 0 - +R | 1965 o - Ap 25 2s 1 S +Z Europe/Oslo 0:43 - LMT 1895 +1 | CE%sT 1940 Au 10 23 +1 ' CE%sT 1945 Ap 2 2 +1 | CE%sT 1980 +1 O CE%sT +Li Europe/Oslo Arctic/Longyearbyen +R } 1918 1919 - S 16 2s 0 - +R } 1919 o - Ap 15 2s 1 S +R } 1944 o - Ap 3 2s 1 S +R } 1944 o - O 4 2 0 - +R } 1945 o - Ap 29 0 1 S +R } 1945 o - N 1 0 0 - +R } 1946 o - Ap 14 0s 1 S +R } 1946 o - O 7 2s 0 - +R } 1947 o - May 4 2s 1 S +R } 1947 1949 - O Sun>=1 2s 0 - +R } 1948 o - Ap 18 2s 1 S +R } 1949 o - Ap 10 2s 1 S +R } 1957 o - Jun 2 1s 1 S +R } 1957 1958 - S lastSun 1s 0 - +R } 1958 o - Mar 30 1s 1 S +R } 1959 o - May 31 1s 1 S +R } 1959 1961 - O Sun>=1 1s 0 - +R } 1960 o - Ap 3 1s 1 S +R } 1961 1964 - May lastSun 1s 1 S +R } 1962 1964 - S lastSun 1s 0 - +Z Europe/Warsaw 1:24 - LMT 1880 +1:24 - WMT 1915 Au 5 +1 ' CE%sT 1918 S 16 3 +2 } EE%sT 1922 Jun +1 } CE%sT 1940 Jun 23 2 +1 ' CE%sT 1944 O +1 } CE%sT 1977 +1 & CE%sT 1988 +1 O CE%sT +R ~ 1916 o - Jun 17 23 1 S +R ~ 1916 o - N 1 1 0 - +R ~ 1917 o - F 28 23s 1 S +R ~ 1917 1921 - O 14 23s 0 - +R ~ 1918 o - Mar 1 23s 1 S +R ~ 1919 o - F 28 23s 1 S +R ~ 1920 o - F 29 23s 1 S +R ~ 1921 o - F 28 23s 1 S +R ~ 1924 o - Ap 16 23s 1 S +R ~ 1924 o - O 14 23s 0 - +R ~ 1926 o - Ap 17 23s 1 S +R ~ 1926 1929 - O Sat>=1 23s 0 - +R ~ 1927 o - Ap 9 23s 1 S +R ~ 1928 o - Ap 14 23s 1 S +R ~ 1929 o - Ap 20 23s 1 S +R ~ 1931 o - Ap 18 23s 1 S +R ~ 1931 1932 - O Sat>=1 23s 0 - +R ~ 1932 o - Ap 2 23s 1 S +R ~ 1934 o - Ap 7 23s 1 S +R ~ 1934 1938 - O Sat>=1 23s 0 - +R ~ 1935 o - Mar 30 23s 1 S +R ~ 1936 o - Ap 18 23s 1 S +R ~ 1937 o - Ap 3 23s 1 S +R ~ 1938 o - Mar 26 23s 1 S +R ~ 1939 o - Ap 15 23s 1 S +R ~ 1939 o - N 18 23s 0 - +R ~ 1940 o - F 24 23s 1 S +R ~ 1940 1941 - O 5 23s 0 - +R ~ 1941 o - Ap 5 23s 1 S +R ~ 1942 1945 - Mar Sat>=8 23s 1 S +R ~ 1942 o - Ap 25 22s 2 M +R ~ 1942 o - Au 15 22s 1 S +R ~ 1942 1945 - O Sat>=24 23s 0 - +R ~ 1943 o - Ap 17 22s 2 M +R ~ 1943 1945 - Au Sat>=25 22s 1 S +R ~ 1944 1945 - Ap Sat>=21 22s 2 M +R ~ 1946 o 
- Ap Sat>=1 23s 1 S +R ~ 1946 o - O Sat>=1 23s 0 - +R ~ 1947 1949 - Ap Sun>=1 2s 1 S +R ~ 1947 1949 - O Sun>=1 2s 0 - +R ~ 1951 1965 - Ap Sun>=1 2s 1 S +R ~ 1951 1965 - O Sun>=1 2s 0 - +R ~ 1977 o - Mar 27 0s 1 S +R ~ 1977 o - S 25 0s 0 - +R ~ 1978 1979 - Ap Sun>=1 0s 1 S +R ~ 1978 o - O 1 0s 0 - +R ~ 1979 1982 - S lastSun 1s 0 - +R ~ 1980 o - Mar lastSun 0s 1 S +R ~ 1981 1982 - Mar lastSun 1s 1 S +R ~ 1983 o - Mar lastSun 2s 1 S +Z Europe/Lisbon -0:36:45 - LMT 1884 +-0:36:45 - LMT 1912 +0 ~ WE%sT 1966 Ap 3 2 +1 - CET 1976 S 26 1 +0 ~ WE%sT 1983 S 25 1s +0 & WE%sT 1992 S 27 1s +1 O CE%sT 1996 Mar 31 1u +0 O WE%sT +Z Atlantic/Azores -1:42:40 - LMT 1884 +-1:54:32 - HMT 1912 +-2 ~ -02/-01 1942 Ap 25 22s +-2 ~ +00 1942 Au 15 22s +-2 ~ -02/-01 1943 Ap 17 22s +-2 ~ +00 1943 Au 28 22s +-2 ~ -02/-01 1944 Ap 22 22s +-2 ~ +00 1944 Au 26 22s +-2 ~ -02/-01 1945 Ap 21 22s +-2 ~ +00 1945 Au 25 22s +-2 ~ -02/-01 1966 Ap 3 2 +-1 ~ -01/+00 1983 S 25 1s +-1 & -01/+00 1992 S 27 1s +0 O WE%sT 1993 Mar 28 1u +-1 O -01/+00 +Z Atlantic/Madeira -1:7:36 - LMT 1884 +-1:7:36 - FMT 1912 +-1 ~ -01/+00 1942 Ap 25 22s +-1 ~ +01 1942 Au 15 22s +-1 ~ -01/+00 1943 Ap 17 22s +-1 ~ +01 1943 Au 28 22s +-1 ~ -01/+00 1944 Ap 22 22s +-1 ~ +01 1944 Au 26 22s +-1 ~ -01/+00 1945 Ap 21 22s +-1 ~ +01 1945 Au 25 22s +-1 ~ -01/+00 1966 Ap 3 2 +0 ~ WE%sT 1983 S 25 1s +0 O WE%sT +R ` 1932 o - May 21 0s 1 S +R ` 1932 1939 - O Sun>=1 0s 0 - +R ` 1933 1939 - Ap Sun>=2 0s 1 S +R ` 1979 o - May 27 0 1 S +R ` 1979 o - S lastSun 0 0 - +R ` 1980 o - Ap 5 23 1 S +R ` 1980 o - S lastSun 1 0 - +R ` 1991 1993 - Mar lastSun 0s 1 S +R ` 1991 1993 - S lastSun 0s 0 - +Z Europe/Bucharest 1:44:24 - LMT 1891 O +1:44:24 - BMT 1931 Jul 24 +2 ` EE%sT 1981 Mar 29 2s +2 ' EE%sT 1991 +2 ` EE%sT 1994 +2 W EE%sT 1997 +2 O EE%sT +Z Europe/Kaliningrad 1:22 - LMT 1893 Ap +1 ' CE%sT 1945 +2 } CE%sT 1946 +3 M MSK/MSD 1989 Mar 26 2s +2 M EE%sT 2011 Mar 27 2s +3 - +03 2014 O 26 2s +2 - EET +Z Europe/Moscow 2:30:17 - LMT 1880 +2:30:17 - MMT 1916 Jul 3 +2:31:19 M %s 1919 Jul 1 0u +3 M %s 1921 O +3 M MSK/MSD 1922 O +2 - EET 1930 Jun 21 +3 M MSK/MSD 1991 Mar 31 2s +2 M EE%sT 1992 Ja 19 2s +3 M MSK/MSD 2011 Mar 27 2s +4 - MSK 2014 O 26 2s +3 - MSK +Z Europe/Simferopol 2:16:24 - LMT 1880 +2:16 - SMT 1924 May 2 +2 - EET 1930 Jun 21 +3 - MSK 1941 N +1 ' CE%sT 1944 Ap 13 +3 M MSK/MSD 1990 +3 - MSK 1990 Jul 1 2 +2 - EET 1992 +2 W EE%sT 1994 May +3 W MSK/MSD 1996 Mar 31 0s +3 1 MSD 1996 O 27 3s +3 M MSK/MSD 1997 +3 - MSK 1997 Mar lastSun 1u +2 O EE%sT 2014 Mar 30 2 +4 - MSK 2014 O 26 2s +3 - MSK +Z Europe/Astrakhan 3:12:12 - LMT 1924 May +3 - +03 1930 Jun 21 +4 M +04/+05 1989 Mar 26 2s +3 M +03/+04 1991 Mar 31 2s +4 - +04 1992 Mar 29 2s +3 M +03/+04 2011 Mar 27 2s +4 - +04 2014 O 26 2s +3 - +03 2016 Mar 27 2s +4 - +04 +Z Europe/Volgograd 2:57:40 - LMT 1920 Ja 3 +3 - +03 1930 Jun 21 +4 - +04 1961 N 11 +4 M +04/+05 1988 Mar 27 2s +3 M +03/+04 1991 Mar 31 2s +4 - +04 1992 Mar 29 2s +3 M +03/+04 2011 Mar 27 2s +4 - +04 2014 O 26 2s +3 - +03 +Z Europe/Saratov 3:4:18 - LMT 1919 Jul 1 0u +3 - +03 1930 Jun 21 +4 M +04/+05 1988 Mar 27 2s +3 M +03/+04 1991 Mar 31 2s +4 - +04 1992 Mar 29 2s +3 M +03/+04 2011 Mar 27 2s +4 - +04 2014 O 26 2s +3 - +03 2016 D 4 2s +4 - +04 +Z Europe/Kirov 3:18:48 - LMT 1919 Jul 1 0u +3 - +03 1930 Jun 21 +4 M +04/+05 1989 Mar 26 2s +3 M +03/+04 1991 Mar 31 2s +4 - +04 1992 Mar 29 2s +3 M +03/+04 2011 Mar 27 2s +4 - +04 2014 O 26 2s +3 - +03 +Z Europe/Samara 3:20:20 - LMT 1919 Jul 1 0u +3 - +03 1930 Jun 21 +4 - +04 1935 Ja 27 +4 M +04/+05 1989 Mar 26 2s +3 M 
+03/+04 1991 Mar 31 2s +2 M +02/+03 1991 S 29 2s +3 - +03 1991 O 20 3 +4 M +04/+05 2010 Mar 28 2s +3 M +03/+04 2011 Mar 27 2s +4 - +04 +Z Europe/Ulyanovsk 3:13:36 - LMT 1919 Jul 1 0u +3 - +03 1930 Jun 21 +4 M +04/+05 1989 Mar 26 2s +3 M +03/+04 1991 Mar 31 2s +2 M +02/+03 1992 Ja 19 2s +3 M +03/+04 2011 Mar 27 2s +4 - +04 2014 O 26 2s +3 - +03 2016 Mar 27 2s +4 - +04 +Z Asia/Yekaterinburg 4:2:33 - LMT 1916 Jul 3 +3:45:5 - PMT 1919 Jul 15 4 +4 - +04 1930 Jun 21 +5 M +05/+06 1991 Mar 31 2s +4 M +04/+05 1992 Ja 19 2s +5 M +05/+06 2011 Mar 27 2s +6 - +06 2014 O 26 2s +5 - +05 +Z Asia/Omsk 4:53:30 - LMT 1919 N 14 +5 - +05 1930 Jun 21 +6 M +06/+07 1991 Mar 31 2s +5 M +05/+06 1992 Ja 19 2s +6 M +06/+07 2011 Mar 27 2s +7 - +07 2014 O 26 2s +6 - +06 +Z Asia/Barnaul 5:35 - LMT 1919 D 10 +6 - +06 1930 Jun 21 +7 M +07/+08 1991 Mar 31 2s +6 M +06/+07 1992 Ja 19 2s +7 M +07/+08 1995 May 28 +6 M +06/+07 2011 Mar 27 2s +7 - +07 2014 O 26 2s +6 - +06 2016 Mar 27 2s +7 - +07 +Z Asia/Novosibirsk 5:31:40 - LMT 1919 D 14 6 +6 - +06 1930 Jun 21 +7 M +07/+08 1991 Mar 31 2s +6 M +06/+07 1992 Ja 19 2s +7 M +07/+08 1993 May 23 +6 M +06/+07 2011 Mar 27 2s +7 - +07 2014 O 26 2s +6 - +06 2016 Jul 24 2s +7 - +07 +Z Asia/Tomsk 5:39:51 - LMT 1919 D 22 +6 - +06 1930 Jun 21 +7 M +07/+08 1991 Mar 31 2s +6 M +06/+07 1992 Ja 19 2s +7 M +07/+08 2002 May 1 3 +6 M +06/+07 2011 Mar 27 2s +7 - +07 2014 O 26 2s +6 - +06 2016 May 29 2s +7 - +07 +Z Asia/Novokuznetsk 5:48:48 - LMT 1924 May +6 - +06 1930 Jun 21 +7 M +07/+08 1991 Mar 31 2s +6 M +06/+07 1992 Ja 19 2s +7 M +07/+08 2010 Mar 28 2s +6 M +06/+07 2011 Mar 27 2s +7 - +07 +Z Asia/Krasnoyarsk 6:11:26 - LMT 1920 Ja 6 +6 - +06 1930 Jun 21 +7 M +07/+08 1991 Mar 31 2s +6 M +06/+07 1992 Ja 19 2s +7 M +07/+08 2011 Mar 27 2s +8 - +08 2014 O 26 2s +7 - +07 +Z Asia/Irkutsk 6:57:5 - LMT 1880 +6:57:5 - IMT 1920 Ja 25 +7 - +07 1930 Jun 21 +8 M +08/+09 1991 Mar 31 2s +7 M +07/+08 1992 Ja 19 2s +8 M +08/+09 2011 Mar 27 2s +9 - +09 2014 O 26 2s +8 - +08 +Z Asia/Chita 7:33:52 - LMT 1919 D 15 +8 - +08 1930 Jun 21 +9 M +09/+10 1991 Mar 31 2s +8 M +08/+09 1992 Ja 19 2s +9 M +09/+10 2011 Mar 27 2s +10 - +10 2014 O 26 2s +8 - +08 2016 Mar 27 2 +9 - +09 +Z Asia/Yakutsk 8:38:58 - LMT 1919 D 15 +8 - +08 1930 Jun 21 +9 M +09/+10 1991 Mar 31 2s +8 M +08/+09 1992 Ja 19 2s +9 M +09/+10 2011 Mar 27 2s +10 - +10 2014 O 26 2s +9 - +09 +Z Asia/Vladivostok 8:47:31 - LMT 1922 N 15 +9 - +09 1930 Jun 21 +10 M +10/+11 1991 Mar 31 2s +9 M +09/+10 1992 Ja 19 2s +10 M +10/+11 2011 Mar 27 2s +11 - +11 2014 O 26 2s +10 - +10 +Z Asia/Khandyga 9:2:13 - LMT 1919 D 15 +8 - +08 1930 Jun 21 +9 M +09/+10 1991 Mar 31 2s +8 M +08/+09 1992 Ja 19 2s +9 M +09/+10 2004 +10 M +10/+11 2011 Mar 27 2s +11 - +11 2011 S 13 0s +10 - +10 2014 O 26 2s +9 - +09 +Z Asia/Sakhalin 9:30:48 - LMT 1905 Au 23 +9 - +09 1945 Au 25 +11 M +11/+12 1991 Mar 31 2s +10 M +10/+11 1992 Ja 19 2s +11 M +11/+12 1997 Mar lastSun 2s +10 M +10/+11 2011 Mar 27 2s +11 - +11 2014 O 26 2s +10 - +10 2016 Mar 27 2s +11 - +11 +Z Asia/Magadan 10:3:12 - LMT 1924 May 2 +10 - +10 1930 Jun 21 +11 M +11/+12 1991 Mar 31 2s +10 M +10/+11 1992 Ja 19 2s +11 M +11/+12 2011 Mar 27 2s +12 - +12 2014 O 26 2s +10 - +10 2016 Ap 24 2s +11 - +11 +Z Asia/Srednekolymsk 10:14:52 - LMT 1924 May 2 +10 - +10 1930 Jun 21 +11 M +11/+12 1991 Mar 31 2s +10 M +10/+11 1992 Ja 19 2s +11 M +11/+12 2011 Mar 27 2s +12 - +12 2014 O 26 2s +11 - +11 +Z Asia/Ust-Nera 9:32:54 - LMT 1919 D 15 +8 - +08 1930 Jun 21 +9 M +09/+10 1981 Ap +11 M +11/+12 1991 Mar 31 2s +10 M +10/+11 1992 Ja 19 2s +11 M +11/+12 2011 Mar 
27 2s +12 - +12 2011 S 13 0s +11 - +11 2014 O 26 2s +10 - +10 +Z Asia/Kamchatka 10:34:36 - LMT 1922 N 10 +11 - +11 1930 Jun 21 +12 M +12/+13 1991 Mar 31 2s +11 M +11/+12 1992 Ja 19 2s +12 M +12/+13 2010 Mar 28 2s +11 M +11/+12 2011 Mar 27 2s +12 - +12 +Z Asia/Anadyr 11:49:56 - LMT 1924 May 2 +12 - +12 1930 Jun 21 +13 M +13/+14 1982 Ap 1 0s +12 M +12/+13 1991 Mar 31 2s +11 M +11/+12 1992 Ja 19 2s +12 M +12/+13 2010 Mar 28 2s +11 M +11/+12 2011 Mar 27 2s +12 - +12 +Z Europe/Belgrade 1:22 - LMT 1884 +1 - CET 1941 Ap 18 23 +1 ' CE%sT 1945 +1 - CET 1945 May 8 2s +1 1 CEST 1945 S 16 2s +1 - CET 1982 N 27 +1 O CE%sT +Li Europe/Belgrade Europe/Ljubljana +Li Europe/Belgrade Europe/Podgorica +Li Europe/Belgrade Europe/Sarajevo +Li Europe/Belgrade Europe/Skopje +Li Europe/Belgrade Europe/Zagreb +Li Europe/Prague Europe/Bratislava +R AA 1918 o - Ap 15 23 1 S +R AA 1918 1919 - O 6 24s 0 - +R AA 1919 o - Ap 6 23 1 S +R AA 1924 o - Ap 16 23 1 S +R AA 1924 o - O 4 24s 0 - +R AA 1926 o - Ap 17 23 1 S +R AA 1926 1929 - O Sat>=1 24s 0 - +R AA 1927 o - Ap 9 23 1 S +R AA 1928 o - Ap 15 0 1 S +R AA 1929 o - Ap 20 23 1 S +R AA 1937 o - Jun 16 23 1 S +R AA 1937 o - O 2 24s 0 - +R AA 1938 o - Ap 2 23 1 S +R AA 1938 o - Ap 30 23 2 M +R AA 1938 o - O 2 24 1 S +R AA 1939 o - O 7 24s 0 - +R AA 1942 o - May 2 23 1 S +R AA 1942 o - S 1 1 0 - +R AA 1943 1946 - Ap Sat>=13 23 1 S +R AA 1943 1944 - O Sun>=1 1 0 - +R AA 1945 1946 - S lastSun 1 0 - +R AA 1949 o - Ap 30 23 1 S +R AA 1949 o - O 2 1 0 - +R AA 1974 1975 - Ap Sat>=12 23 1 S +R AA 1974 1975 - O Sun>=1 1 0 - +R AA 1976 o - Mar 27 23 1 S +R AA 1976 1977 - S lastSun 1 0 - +R AA 1977 o - Ap 2 23 1 S +R AA 1978 o - Ap 2 2s 1 S +R AA 1978 o - O 1 2s 0 - +R AB 1967 o - Jun 3 12 1 S +R AB 1967 o - O 1 0 0 - +R AB 1974 o - Jun 24 0 1 S +R AB 1974 o - S 1 0 0 - +R AB 1976 1977 - May 1 0 1 S +R AB 1976 o - Au 1 0 0 - +R AB 1977 o - S 28 0 0 - +R AB 1978 o - Jun 1 0 1 S +R AB 1978 o - Au 4 0 0 - +Z Europe/Madrid -0:14:44 - LMT 1900 D 31 23:45:16 +0 AA WE%sT 1940 Mar 16 23 +1 AA CE%sT 1979 +1 O CE%sT +Z Africa/Ceuta -0:21:16 - LMT 1900 D 31 23:38:44 +0 - WET 1918 May 6 23 +0 1 WEST 1918 O 7 23 +0 - WET 1924 +0 AA WE%sT 1929 +0 AB WE%sT 1984 Mar 16 +1 - CET 1986 +1 O CE%sT +Z Atlantic/Canary -1:1:36 - LMT 1922 Mar +-1 - -01 1946 S 30 1 +0 - WET 1980 Ap 6 0s +0 1 WEST 1980 S 28 1u +0 O WE%sT +Z Europe/Stockholm 1:12:12 - LMT 1879 +1:0:14 - SET 1900 +1 - CET 1916 May 14 23 +1 1 CEST 1916 O 1 1 +1 - CET 1980 +1 O CE%sT +R AC 1941 1942 - May M>=1 1 1 S +R AC 1941 1942 - O M>=1 2 0 - +Z Europe/Zurich 0:34:8 - LMT 1853 Jul 16 +0:29:46 - BMT 1894 Jun +1 AC CE%sT 1981 +1 O CE%sT +R AD 1916 o - May 1 0 1 S +R AD 1916 o - O 1 0 0 - +R AD 1920 o - Mar 28 0 1 S +R AD 1920 o - O 25 0 0 - +R AD 1921 o - Ap 3 0 1 S +R AD 1921 o - O 3 0 0 - +R AD 1922 o - Mar 26 0 1 S +R AD 1922 o - O 8 0 0 - +R AD 1924 o - May 13 0 1 S +R AD 1924 1925 - O 1 0 0 - +R AD 1925 o - May 1 0 1 S +R AD 1940 o - Jun 30 0 1 S +R AD 1940 o - O 5 0 0 - +R AD 1940 o - D 1 0 1 S +R AD 1941 o - S 21 0 0 - +R AD 1942 o - Ap 1 0 1 S +R AD 1942 o - N 1 0 0 - +R AD 1945 o - Ap 2 0 1 S +R AD 1945 o - O 8 0 0 - +R AD 1946 o - Jun 1 0 1 S +R AD 1946 o - O 1 0 0 - +R AD 1947 1948 - Ap Sun>=16 0 1 S +R AD 1947 1950 - O Sun>=2 0 0 - +R AD 1949 o - Ap 10 0 1 S +R AD 1950 o - Ap 19 0 1 S +R AD 1951 o - Ap 22 0 1 S +R AD 1951 o - O 8 0 0 - +R AD 1962 o - Jul 15 0 1 S +R AD 1962 o - O 8 0 0 - +R AD 1964 o - May 15 0 1 S +R AD 1964 o - O 1 0 0 - +R AD 1970 1972 - May Sun>=2 0 1 S +R AD 1970 1972 - O Sun>=2 0 0 - +R AD 1973 o - Jun 3 1 1 S 
+R AD 1973 o - N 4 3 0 - +R AD 1974 o - Mar 31 2 1 S +R AD 1974 o - N 3 5 0 - +R AD 1975 o - Mar 30 0 1 S +R AD 1975 1976 - O lastSun 0 0 - +R AD 1976 o - Jun 1 0 1 S +R AD 1977 1978 - Ap Sun>=1 0 1 S +R AD 1977 o - O 16 0 0 - +R AD 1979 1980 - Ap Sun>=1 3 1 S +R AD 1979 1982 - O M>=11 0 0 - +R AD 1981 1982 - Mar lastSun 3 1 S +R AD 1983 o - Jul 31 0 1 S +R AD 1983 o - O 2 0 0 - +R AD 1985 o - Ap 20 0 1 S +R AD 1985 o - S 28 0 0 - +R AD 1986 1993 - Mar lastSun 1s 1 S +R AD 1986 1995 - S lastSun 1s 0 - +R AD 1994 o - Mar 20 1s 1 S +R AD 1995 2006 - Mar lastSun 1s 1 S +R AD 1996 2006 - O lastSun 1s 0 - +Z Europe/Istanbul 1:55:52 - LMT 1880 +1:56:56 - IMT 1910 O +2 AD EE%sT 1978 O 15 +3 AD +03/+04 1985 Ap 20 +2 AD EE%sT 2007 +2 O EE%sT 2011 Mar 27 1u +2 - EET 2011 Mar 28 1u +2 O EE%sT 2014 Mar 30 1u +2 - EET 2014 Mar 31 1u +2 O EE%sT 2015 O 25 1u +2 1 EEST 2015 N 8 1u +2 O EE%sT 2016 S 7 +3 - +03 +Li Europe/Istanbul Asia/Istanbul +Z Europe/Kiev 2:2:4 - LMT 1880 +2:2:4 - KMT 1924 May 2 +2 - EET 1930 Jun 21 +3 - MSK 1941 S 20 +1 ' CE%sT 1943 N 6 +3 M MSK/MSD 1990 Jul 1 2 +2 1 EEST 1991 S 29 3 +2 W EE%sT 1995 +2 O EE%sT +Z Europe/Uzhgorod 1:29:12 - LMT 1890 O +1 - CET 1940 +1 ' CE%sT 1944 O +1 1 CEST 1944 O 26 +1 - CET 1945 Jun 29 +3 M MSK/MSD 1990 +3 - MSK 1990 Jul 1 2 +1 - CET 1991 Mar 31 3 +2 - EET 1992 +2 W EE%sT 1995 +2 O EE%sT +Z Europe/Zaporozhye 2:20:40 - LMT 1880 +2:20 - +0220 1924 May 2 +2 - EET 1930 Jun 21 +3 - MSK 1941 Au 25 +1 ' CE%sT 1943 O 25 +3 M MSK/MSD 1991 Mar 31 2 +2 W EE%sT 1995 +2 O EE%sT +R AE 1918 1919 - Mar lastSun 2 1 D +R AE 1918 1919 - O lastSun 2 0 S +R AE 1942 o - F 9 2 1 W +R AE 1945 o - Au 14 23u 1 P +R AE 1945 o - S lastSun 2 0 S +R AE 1967 2006 - O lastSun 2 0 S +R AE 1967 1973 - Ap lastSun 2 1 D +R AE 1974 o - Ja 6 2 1 D +R AE 1975 o - F 23 2 1 D +R AE 1976 1986 - Ap lastSun 2 1 D +R AE 1987 2006 - Ap Sun>=1 2 1 D +R AE 2007 ma - Mar Sun>=8 2 1 D +R AE 2007 ma - N Sun>=1 2 0 S +Z EST -5 - EST +Z MST -7 - MST +Z HST -10 - HST +Z EST5EDT -5 AE E%sT +Z CST6CDT -6 AE C%sT +Z MST7MDT -7 AE M%sT +Z PST8PDT -8 AE P%sT +R AF 1920 o - Mar lastSun 2 1 D +R AF 1920 o - O lastSun 2 0 S +R AF 1921 1966 - Ap lastSun 2 1 D +R AF 1921 1954 - S lastSun 2 0 S +R AF 1955 1966 - O lastSun 2 0 S +Z America/New_York -4:56:2 - LMT 1883 N 18 12:3:58 +-5 AE E%sT 1920 +-5 AF E%sT 1942 +-5 AE E%sT 1946 +-5 AF E%sT 1967 +-5 AE E%sT +R AG 1920 o - Jun 13 2 1 D +R AG 1920 1921 - O lastSun 2 0 S +R AG 1921 o - Mar lastSun 2 1 D +R AG 1922 1966 - Ap lastSun 2 1 D +R AG 1922 1954 - S lastSun 2 0 S +R AG 1955 1966 - O lastSun 2 0 S +Z America/Chicago -5:50:36 - LMT 1883 N 18 12:9:24 +-6 AE C%sT 1920 +-6 AG C%sT 1936 Mar 1 2 +-5 - EST 1936 N 15 2 +-6 AG C%sT 1942 +-6 AE C%sT 1946 +-6 AG C%sT 1967 +-6 AE C%sT +Z America/North_Dakota/Center -6:45:12 - LMT 1883 N 18 12:14:48 +-7 AE M%sT 1992 O 25 2 +-6 AE C%sT +Z America/North_Dakota/New_Salem -6:45:39 - LMT 1883 N 18 12:14:21 +-7 AE M%sT 2003 O 26 2 +-6 AE C%sT +Z America/North_Dakota/Beulah -6:47:7 - LMT 1883 N 18 12:12:53 +-7 AE M%sT 2010 N 7 2 +-6 AE C%sT +R AH 1920 1921 - Mar lastSun 2 1 D +R AH 1920 o - O lastSun 2 0 S +R AH 1921 o - May 22 2 0 S +R AH 1965 1966 - Ap lastSun 2 1 D +R AH 1965 1966 - O lastSun 2 0 S +Z America/Denver -6:59:56 - LMT 1883 N 18 12:0:4 +-7 AE M%sT 1920 +-7 AH M%sT 1942 +-7 AE M%sT 1946 +-7 AH M%sT 1967 +-7 AE M%sT +R AI 1948 o - Mar 14 2:1 1 D +R AI 1949 o - Ja 1 2 0 S +R AI 1950 1966 - Ap lastSun 1 1 D +R AI 1950 1961 - S lastSun 2 0 S +R AI 1962 1966 - O lastSun 2 0 S +Z America/Los_Angeles -7:52:58 - LMT 1883 N 
18 12:7:2 +-8 AE P%sT 1946 +-8 AI P%sT 1967 +-8 AE P%sT +Z America/Juneau 15:2:19 - LMT 1867 O 19 15:33:32 +-8:57:41 - LMT 1900 Au 20 12 +-8 - PST 1942 +-8 AE P%sT 1946 +-8 - PST 1969 +-8 AE P%sT 1980 Ap 27 2 +-9 AE Y%sT 1980 O 26 2 +-8 AE P%sT 1983 O 30 2 +-9 AE Y%sT 1983 N 30 +-9 AE AK%sT +Z America/Sitka 14:58:47 - LMT 1867 O 19 15:30 +-9:1:13 - LMT 1900 Au 20 12 +-8 - PST 1942 +-8 AE P%sT 1946 +-8 - PST 1969 +-8 AE P%sT 1983 O 30 2 +-9 AE Y%sT 1983 N 30 +-9 AE AK%sT +Z America/Metlakatla 15:13:42 - LMT 1867 O 19 15:44:55 +-8:46:18 - LMT 1900 Au 20 12 +-8 - PST 1942 +-8 AE P%sT 1946 +-8 - PST 1969 +-8 AE P%sT 1983 O 30 2 +-8 - PST 2015 N 1 2 +-9 AE AK%sT +Z America/Yakutat 14:41:5 - LMT 1867 O 19 15:12:18 +-9:18:55 - LMT 1900 Au 20 12 +-9 - YST 1942 +-9 AE Y%sT 1946 +-9 - YST 1969 +-9 AE Y%sT 1983 N 30 +-9 AE AK%sT +Z America/Anchorage 14:0:24 - LMT 1867 O 19 14:31:37 +-9:59:36 - LMT 1900 Au 20 12 +-10 - AST 1942 +-10 AE A%sT 1967 Ap +-10 - AHST 1969 +-10 AE AH%sT 1983 O 30 2 +-9 AE Y%sT 1983 N 30 +-9 AE AK%sT +Z America/Nome 12:58:22 - LMT 1867 O 19 13:29:35 +-11:1:38 - LMT 1900 Au 20 12 +-11 - NST 1942 +-11 AE N%sT 1946 +-11 - NST 1967 Ap +-11 - BST 1969 +-11 AE B%sT 1983 O 30 2 +-9 AE Y%sT 1983 N 30 +-9 AE AK%sT +Z America/Adak 12:13:22 - LMT 1867 O 19 12:44:35 +-11:46:38 - LMT 1900 Au 20 12 +-11 - NST 1942 +-11 AE N%sT 1946 +-11 - NST 1967 Ap +-11 - BST 1969 +-11 AE B%sT 1983 O 30 2 +-10 AE AH%sT 1983 N 30 +-10 AE H%sT +Z Pacific/Honolulu -10:31:26 - LMT 1896 Ja 13 12 +-10:30 - HST 1933 Ap 30 2 +-10:30 1 HDT 1933 May 21 12 +-10:30 - HST 1942 F 9 2 +-10:30 1 HDT 1945 S 30 2 +-10:30 - HST 1947 Jun 8 2 +-10 - HST +Z America/Phoenix -7:28:18 - LMT 1883 N 18 11:31:42 +-7 AE M%sT 1944 Ja 1 0:1 +-7 - MST 1944 Ap 1 0:1 +-7 AE M%sT 1944 O 1 0:1 +-7 - MST 1967 +-7 AE M%sT 1968 Mar 21 +-7 - MST +Z America/Boise -7:44:49 - LMT 1883 N 18 12:15:11 +-8 AE P%sT 1923 May 13 2 +-7 AE M%sT 1974 +-7 - MST 1974 F 3 2 +-7 AE M%sT +R AJ 1941 o - Jun 22 2 1 D +R AJ 1941 1954 - S lastSun 2 0 S +R AJ 1946 1954 - Ap lastSun 2 1 D +Z America/Indiana/Indianapolis -5:44:38 - LMT 1883 N 18 12:15:22 +-6 AE C%sT 1920 +-6 AJ C%sT 1942 +-6 AE C%sT 1946 +-6 AJ C%sT 1955 Ap 24 2 +-5 - EST 1957 S 29 2 +-6 - CST 1958 Ap 27 2 +-5 - EST 1969 +-5 AE E%sT 1971 +-5 - EST 2006 +-5 AE E%sT +R AK 1951 o - Ap lastSun 2 1 D +R AK 1951 o - S lastSun 2 0 S +R AK 1954 1960 - Ap lastSun 2 1 D +R AK 1954 1960 - S lastSun 2 0 S +Z America/Indiana/Marengo -5:45:23 - LMT 1883 N 18 12:14:37 +-6 AE C%sT 1951 +-6 AK C%sT 1961 Ap 30 2 +-5 - EST 1969 +-5 AE E%sT 1974 Ja 6 2 +-6 1 CDT 1974 O 27 2 +-5 AE E%sT 1976 +-5 - EST 2006 +-5 AE E%sT +R AL 1946 o - Ap lastSun 2 1 D +R AL 1946 o - S lastSun 2 0 S +R AL 1953 1954 - Ap lastSun 2 1 D +R AL 1953 1959 - S lastSun 2 0 S +R AL 1955 o - May 1 0 1 D +R AL 1956 1963 - Ap lastSun 2 1 D +R AL 1960 o - O lastSun 2 0 S +R AL 1961 o - S lastSun 2 0 S +R AL 1962 1963 - O lastSun 2 0 S +Z America/Indiana/Vincennes -5:50:7 - LMT 1883 N 18 12:9:53 +-6 AE C%sT 1946 +-6 AL C%sT 1964 Ap 26 2 +-5 - EST 1969 +-5 AE E%sT 1971 +-5 - EST 2006 Ap 2 2 +-6 AE C%sT 2007 N 4 2 +-5 AE E%sT +R AM 1946 o - Ap lastSun 2 1 D +R AM 1946 o - S lastSun 2 0 S +R AM 1953 1954 - Ap lastSun 2 1 D +R AM 1953 1959 - S lastSun 2 0 S +R AM 1955 o - May 1 0 1 D +R AM 1956 1963 - Ap lastSun 2 1 D +R AM 1960 o - O lastSun 2 0 S +R AM 1961 o - S lastSun 2 0 S +R AM 1962 1963 - O lastSun 2 0 S +Z America/Indiana/Tell_City -5:47:3 - LMT 1883 N 18 12:12:57 +-6 AE C%sT 1946 +-6 AM C%sT 1964 Ap 26 2 +-5 - EST 1969 +-5 AE E%sT 1971 +-5 - EST 2006 
Ap 2 2 +-6 AE C%sT +R AN 1955 o - May 1 0 1 D +R AN 1955 1960 - S lastSun 2 0 S +R AN 1956 1964 - Ap lastSun 2 1 D +R AN 1961 1964 - O lastSun 2 0 S +Z America/Indiana/Petersburg -5:49:7 - LMT 1883 N 18 12:10:53 +-6 AE C%sT 1955 +-6 AN C%sT 1965 Ap 25 2 +-5 - EST 1966 O 30 2 +-6 AE C%sT 1977 O 30 2 +-5 - EST 2006 Ap 2 2 +-6 AE C%sT 2007 N 4 2 +-5 AE E%sT +R AO 1947 1961 - Ap lastSun 2 1 D +R AO 1947 1954 - S lastSun 2 0 S +R AO 1955 1956 - O lastSun 2 0 S +R AO 1957 1958 - S lastSun 2 0 S +R AO 1959 1961 - O lastSun 2 0 S +Z America/Indiana/Knox -5:46:30 - LMT 1883 N 18 12:13:30 +-6 AE C%sT 1947 +-6 AO C%sT 1962 Ap 29 2 +-5 - EST 1963 O 27 2 +-6 AE C%sT 1991 O 27 2 +-5 - EST 2006 Ap 2 2 +-6 AE C%sT +R AP 1946 1960 - Ap lastSun 2 1 D +R AP 1946 1954 - S lastSun 2 0 S +R AP 1955 1956 - O lastSun 2 0 S +R AP 1957 1960 - S lastSun 2 0 S +Z America/Indiana/Winamac -5:46:25 - LMT 1883 N 18 12:13:35 +-6 AE C%sT 1946 +-6 AP C%sT 1961 Ap 30 2 +-5 - EST 1969 +-5 AE E%sT 1971 +-5 - EST 2006 Ap 2 2 +-6 AE C%sT 2007 Mar 11 2 +-5 AE E%sT +Z America/Indiana/Vevay -5:40:16 - LMT 1883 N 18 12:19:44 +-6 AE C%sT 1954 Ap 25 2 +-5 - EST 1969 +-5 AE E%sT 1973 +-5 - EST 2006 +-5 AE E%sT +R AQ 1921 o - May 1 2 1 D +R AQ 1921 o - S 1 2 0 S +R AQ 1941 1961 - Ap lastSun 2 1 D +R AQ 1941 o - S lastSun 2 0 S +R AQ 1946 o - Jun 2 2 0 S +R AQ 1950 1955 - S lastSun 2 0 S +R AQ 1956 1960 - O lastSun 2 0 S +Z America/Kentucky/Louisville -5:43:2 - LMT 1883 N 18 12:16:58 +-6 AE C%sT 1921 +-6 AQ C%sT 1942 +-6 AE C%sT 1946 +-6 AQ C%sT 1961 Jul 23 2 +-5 - EST 1968 +-5 AE E%sT 1974 Ja 6 2 +-6 1 CDT 1974 O 27 2 +-5 AE E%sT +Z America/Kentucky/Monticello -5:39:24 - LMT 1883 N 18 12:20:36 +-6 AE C%sT 1946 +-6 - CST 1968 +-6 AE C%sT 2000 O 29 2 +-5 AE E%sT +R AR 1948 o - Ap lastSun 2 1 D +R AR 1948 o - S lastSun 2 0 S +Z America/Detroit -5:32:11 - LMT 1905 +-6 - CST 1915 May 15 2 +-5 - EST 1942 +-5 AE E%sT 1946 +-5 AR E%sT 1973 +-5 AE E%sT 1975 +-5 - EST 1975 Ap 27 2 +-5 AE E%sT +R AS 1946 o - Ap lastSun 2 1 D +R AS 1946 o - S lastSun 2 0 S +R AS 1966 o - Ap lastSun 2 1 D +R AS 1966 o - O lastSun 2 0 S +Z America/Menominee -5:50:27 - LMT 1885 S 18 12 +-6 AE C%sT 1946 +-6 AS C%sT 1969 Ap 27 2 +-5 - EST 1973 Ap 29 2 +-6 AE C%sT +R AT 1918 o - Ap 14 2 1 D +R AT 1918 o - O 27 2 0 S +R AT 1942 o - F 9 2 1 W +R AT 1945 o - Au 14 23u 1 P +R AT 1945 o - S 30 2 0 S +R AT 1974 1986 - Ap lastSun 2 1 D +R AT 1974 2006 - O lastSun 2 0 S +R AT 1987 2006 - Ap Sun>=1 2 1 D +R AT 2007 ma - Mar Sun>=8 2 1 D +R AT 2007 ma - N Sun>=1 2 0 S +R AU 1917 o - Ap 8 2 1 D +R AU 1917 o - S 17 2 0 S +R AU 1919 o - May 5 23 1 D +R AU 1919 o - Au 12 23 0 S +R AU 1920 1935 - May Sun>=1 23 1 D +R AU 1920 1935 - O lastSun 23 0 S +R AU 1936 1941 - May M>=9 0 1 D +R AU 1936 1941 - O M>=2 0 0 S +R AU 1946 1950 - May Sun>=8 2 1 D +R AU 1946 1950 - O Sun>=2 2 0 S +R AU 1951 1986 - Ap lastSun 2 1 D +R AU 1951 1959 - S lastSun 2 0 S +R AU 1960 1986 - O lastSun 2 0 S +R AU 1987 o - Ap Sun>=1 0:1 1 D +R AU 1987 2006 - O lastSun 0:1 0 S +R AU 1988 o - Ap Sun>=1 0:1 2 DD +R AU 1989 2006 - Ap Sun>=1 0:1 1 D +R AU 2007 2011 - Mar Sun>=8 0:1 1 D +R AU 2007 2010 - N Sun>=1 0:1 0 S +Z America/St_Johns -3:30:52 - LMT 1884 +-3:30:52 AU N%sT 1918 +-3:30:52 AT N%sT 1919 +-3:30:52 AU N%sT 1935 Mar 30 +-3:30 AU N%sT 1942 May 11 +-3:30 AT N%sT 1946 +-3:30 AU N%sT 2011 N +-3:30 AT N%sT +Z America/Goose_Bay -4:1:40 - LMT 1884 +-3:30:52 - NST 1918 +-3:30:52 AT N%sT 1919 +-3:30:52 - NST 1935 Mar 30 +-3:30 - NST 1936 +-3:30 AU N%sT 1942 May 11 +-3:30 AT N%sT 1946 +-3:30 AU N%sT 1966 Mar 15 2 
+-4 AU A%sT 2011 N +-4 AT A%sT +R AV 1916 o - Ap 1 0 1 D +R AV 1916 o - O 1 0 0 S +R AV 1920 o - May 9 0 1 D +R AV 1920 o - Au 29 0 0 S +R AV 1921 o - May 6 0 1 D +R AV 1921 1922 - S 5 0 0 S +R AV 1922 o - Ap 30 0 1 D +R AV 1923 1925 - May Sun>=1 0 1 D +R AV 1923 o - S 4 0 0 S +R AV 1924 o - S 15 0 0 S +R AV 1925 o - S 28 0 0 S +R AV 1926 o - May 16 0 1 D +R AV 1926 o - S 13 0 0 S +R AV 1927 o - May 1 0 1 D +R AV 1927 o - S 26 0 0 S +R AV 1928 1931 - May Sun>=8 0 1 D +R AV 1928 o - S 9 0 0 S +R AV 1929 o - S 3 0 0 S +R AV 1930 o - S 15 0 0 S +R AV 1931 1932 - S M>=24 0 0 S +R AV 1932 o - May 1 0 1 D +R AV 1933 o - Ap 30 0 1 D +R AV 1933 o - O 2 0 0 S +R AV 1934 o - May 20 0 1 D +R AV 1934 o - S 16 0 0 S +R AV 1935 o - Jun 2 0 1 D +R AV 1935 o - S 30 0 0 S +R AV 1936 o - Jun 1 0 1 D +R AV 1936 o - S 14 0 0 S +R AV 1937 1938 - May Sun>=1 0 1 D +R AV 1937 1941 - S M>=24 0 0 S +R AV 1939 o - May 28 0 1 D +R AV 1940 1941 - May Sun>=1 0 1 D +R AV 1946 1949 - Ap lastSun 2 1 D +R AV 1946 1949 - S lastSun 2 0 S +R AV 1951 1954 - Ap lastSun 2 1 D +R AV 1951 1954 - S lastSun 2 0 S +R AV 1956 1959 - Ap lastSun 2 1 D +R AV 1956 1959 - S lastSun 2 0 S +R AV 1962 1973 - Ap lastSun 2 1 D +R AV 1962 1973 - O lastSun 2 0 S +Z America/Halifax -4:14:24 - LMT 1902 Jun 15 +-4 AV A%sT 1918 +-4 AT A%sT 1919 +-4 AV A%sT 1942 F 9 2s +-4 AT A%sT 1946 +-4 AV A%sT 1974 +-4 AT A%sT +Z America/Glace_Bay -3:59:48 - LMT 1902 Jun 15 +-4 AT A%sT 1953 +-4 AV A%sT 1954 +-4 - AST 1972 +-4 AV A%sT 1974 +-4 AT A%sT +R AW 1933 1935 - Jun Sun>=8 1 1 D +R AW 1933 1935 - S Sun>=8 1 0 S +R AW 1936 1938 - Jun Sun>=1 1 1 D +R AW 1936 1938 - S Sun>=1 1 0 S +R AW 1939 o - May 27 1 1 D +R AW 1939 1941 - S Sat>=21 1 0 S +R AW 1940 o - May 19 1 1 D +R AW 1941 o - May 4 1 1 D +R AW 1946 1972 - Ap lastSun 2 1 D +R AW 1946 1956 - S lastSun 2 0 S +R AW 1957 1972 - O lastSun 2 0 S +R AW 1993 2006 - Ap Sun>=1 0:1 1 D +R AW 1993 2006 - O lastSun 0:1 0 S +Z America/Moncton -4:19:8 - LMT 1883 D 9 +-5 - EST 1902 Jun 15 +-4 AT A%sT 1933 +-4 AW A%sT 1942 +-4 AT A%sT 1946 +-4 AW A%sT 1973 +-4 AT A%sT 1993 +-4 AW A%sT 2007 +-4 AT A%sT +Z America/Blanc-Sablon -3:48:28 - LMT 1884 +-4 AT A%sT 1970 +-4 - AST +R AX 1919 o - Mar 30 23:30 1 D +R AX 1919 o - O 26 0 0 S +R AX 1920 o - May 2 2 1 D +R AX 1920 o - S 26 0 0 S +R AX 1921 o - May 15 2 1 D +R AX 1921 o - S 15 2 0 S +R AX 1922 1923 - May Sun>=8 2 1 D +R AX 1922 1926 - S Sun>=15 2 0 S +R AX 1924 1927 - May Sun>=1 2 1 D +R AX 1927 1932 - S lastSun 2 0 S +R AX 1928 1931 - Ap lastSun 2 1 D +R AX 1932 o - May 1 2 1 D +R AX 1933 1940 - Ap lastSun 2 1 D +R AX 1933 o - O 1 2 0 S +R AX 1934 1939 - S lastSun 2 0 S +R AX 1945 1946 - S lastSun 2 0 S +R AX 1946 o - Ap lastSun 2 1 D +R AX 1947 1949 - Ap lastSun 0 1 D +R AX 1947 1948 - S lastSun 0 0 S +R AX 1949 o - N lastSun 0 0 S +R AX 1950 1973 - Ap lastSun 2 1 D +R AX 1950 o - N lastSun 2 0 S +R AX 1951 1956 - S lastSun 2 0 S +R AX 1957 1973 - O lastSun 2 0 S +Z America/Toronto -5:17:32 - LMT 1895 +-5 AT E%sT 1919 +-5 AX E%sT 1942 F 9 2s +-5 AT E%sT 1946 +-5 AX E%sT 1974 +-5 AT E%sT +Z America/Thunder_Bay -5:57 - LMT 1895 +-6 - CST 1910 +-5 - EST 1942 +-5 AT E%sT 1970 +-5 AX E%sT 1973 +-5 - EST 1974 +-5 AT E%sT +Z America/Nipigon -5:53:4 - LMT 1895 +-5 AT E%sT 1940 S 29 +-5 1 EDT 1942 F 9 2s +-5 AT E%sT +Z America/Rainy_River -6:18:16 - LMT 1895 +-6 AT C%sT 1940 S 29 +-6 1 CDT 1942 F 9 2s +-6 AT C%sT +Z America/Atikokan -6:6:28 - LMT 1895 +-6 AT C%sT 1940 S 29 +-6 1 CDT 1942 F 9 2s +-6 AT C%sT 1945 S 30 2 +-5 - EST +R AY 1916 o - Ap 23 0 1 D +R AY 1916 o - S 17 0 0 
S +R AY 1918 o - Ap 14 2 1 D +R AY 1918 o - O 27 2 0 S +R AY 1937 o - May 16 2 1 D +R AY 1937 o - S 26 2 0 S +R AY 1942 o - F 9 2 1 W +R AY 1945 o - Au 14 23u 1 P +R AY 1945 o - S lastSun 2 0 S +R AY 1946 o - May 12 2 1 D +R AY 1946 o - O 13 2 0 S +R AY 1947 1949 - Ap lastSun 2 1 D +R AY 1947 1949 - S lastSun 2 0 S +R AY 1950 o - May 1 2 1 D +R AY 1950 o - S 30 2 0 S +R AY 1951 1960 - Ap lastSun 2 1 D +R AY 1951 1958 - S lastSun 2 0 S +R AY 1959 o - O lastSun 2 0 S +R AY 1960 o - S lastSun 2 0 S +R AY 1963 o - Ap lastSun 2 1 D +R AY 1963 o - S 22 2 0 S +R AY 1966 1986 - Ap lastSun 2s 1 D +R AY 1966 2005 - O lastSun 2s 0 S +R AY 1987 2005 - Ap Sun>=1 2s 1 D +Z America/Winnipeg -6:28:36 - LMT 1887 Jul 16 +-6 AY C%sT 2006 +-6 AT C%sT +R AZ 1918 o - Ap 14 2 1 D +R AZ 1918 o - O 27 2 0 S +R AZ 1930 1934 - May Sun>=1 0 1 D +R AZ 1930 1934 - O Sun>=1 0 0 S +R AZ 1937 1941 - Ap Sun>=8 0 1 D +R AZ 1937 o - O Sun>=8 0 0 S +R AZ 1938 o - O Sun>=1 0 0 S +R AZ 1939 1941 - O Sun>=8 0 0 S +R AZ 1942 o - F 9 2 1 W +R AZ 1945 o - Au 14 23u 1 P +R AZ 1945 o - S lastSun 2 0 S +R AZ 1946 o - Ap Sun>=8 2 1 D +R AZ 1946 o - O Sun>=8 2 0 S +R AZ 1947 1957 - Ap lastSun 2 1 D +R AZ 1947 1957 - S lastSun 2 0 S +R AZ 1959 o - Ap lastSun 2 1 D +R AZ 1959 o - O lastSun 2 0 S +R Aa 1957 o - Ap lastSun 2 1 D +R Aa 1957 o - O lastSun 2 0 S +R Aa 1959 1961 - Ap lastSun 2 1 D +R Aa 1959 o - O lastSun 2 0 S +R Aa 1960 1961 - S lastSun 2 0 S +Z America/Regina -6:58:36 - LMT 1905 S +-7 AZ M%sT 1960 Ap lastSun 2 +-6 - CST +Z America/Swift_Current -7:11:20 - LMT 1905 S +-7 AT M%sT 1946 Ap lastSun 2 +-7 AZ M%sT 1950 +-7 Aa M%sT 1972 Ap lastSun 2 +-6 - CST +R Ab 1918 1919 - Ap Sun>=8 2 1 D +R Ab 1918 o - O 27 2 0 S +R Ab 1919 o - May 27 2 0 S +R Ab 1920 1923 - Ap lastSun 2 1 D +R Ab 1920 o - O lastSun 2 0 S +R Ab 1921 1923 - S lastSun 2 0 S +R Ab 1942 o - F 9 2 1 W +R Ab 1945 o - Au 14 23u 1 P +R Ab 1945 o - S lastSun 2 0 S +R Ab 1947 o - Ap lastSun 2 1 D +R Ab 1947 o - S lastSun 2 0 S +R Ab 1967 o - Ap lastSun 2 1 D +R Ab 1967 o - O lastSun 2 0 S +R Ab 1969 o - Ap lastSun 2 1 D +R Ab 1969 o - O lastSun 2 0 S +R Ab 1972 1986 - Ap lastSun 2 1 D +R Ab 1972 2006 - O lastSun 2 0 S +Z America/Edmonton -7:33:52 - LMT 1906 S +-7 Ab M%sT 1987 +-7 AT M%sT +R Ac 1918 o - Ap 14 2 1 D +R Ac 1918 o - O 27 2 0 S +R Ac 1942 o - F 9 2 1 W +R Ac 1945 o - Au 14 23u 1 P +R Ac 1945 o - S 30 2 0 S +R Ac 1946 1986 - Ap lastSun 2 1 D +R Ac 1946 o - O 13 2 0 S +R Ac 1947 1961 - S lastSun 2 0 S +R Ac 1962 2006 - O lastSun 2 0 S +Z America/Vancouver -8:12:28 - LMT 1884 +-8 Ac P%sT 1987 +-8 AT P%sT +Z America/Dawson_Creek -8:0:56 - LMT 1884 +-8 AT P%sT 1947 +-8 Ac P%sT 1972 Au 30 2 +-7 - MST +Z America/Fort_Nelson -8:10:47 - LMT 1884 +-8 Ac P%sT 1946 +-8 - PST 1947 +-8 Ac P%sT 1987 +-8 AT P%sT 2015 Mar 8 2 +-7 - MST +Z America/Creston -7:46:4 - LMT 1884 +-7 - MST 1916 O +-8 - PST 1918 Jun 2 +-7 - MST +R Ad 1918 o - Ap 14 2 1 D +R Ad 1918 o - O 27 2 0 S +R Ad 1919 o - May 25 2 1 D +R Ad 1919 o - N 1 0 0 S +R Ad 1942 o - F 9 2 1 W +R Ad 1945 o - Au 14 23u 1 P +R Ad 1945 o - S 30 2 0 S +R Ad 1965 o - Ap lastSun 0 2 DD +R Ad 1965 o - O lastSun 2 0 S +R Ad 1980 1986 - Ap lastSun 2 1 D +R Ad 1980 2006 - O lastSun 2 0 S +R Ad 1987 2006 - Ap Sun>=1 2 1 D +Z America/Pangnirtung 0 - -00 1921 +-4 Ad A%sT 1995 Ap Sun>=1 2 +-5 AT E%sT 1999 O 31 2 +-6 AT C%sT 2000 O 29 2 +-5 AT E%sT +Z America/Iqaluit 0 - -00 1942 Au +-5 Ad E%sT 1999 O 31 2 +-6 AT C%sT 2000 O 29 2 +-5 AT E%sT +Z America/Resolute 0 - -00 1947 Au 31 +-6 Ad C%sT 2000 O 29 2 +-5 - EST 2001 Ap 1 3 +-6 AT 
C%sT 2006 O 29 2 +-5 - EST 2007 Mar 11 3 +-6 AT C%sT +Z America/Rankin_Inlet 0 - -00 1957 +-6 Ad C%sT 2000 O 29 2 +-5 - EST 2001 Ap 1 3 +-6 AT C%sT +Z America/Cambridge_Bay 0 - -00 1920 +-7 Ad M%sT 1999 O 31 2 +-6 AT C%sT 2000 O 29 2 +-5 - EST 2000 N 5 +-6 - CST 2001 Ap 1 3 +-7 AT M%sT +Z America/Yellowknife 0 - -00 1935 +-7 Ad M%sT 1980 +-7 AT M%sT +Z America/Inuvik 0 - -00 1953 +-8 Ad P%sT 1979 Ap lastSun 2 +-7 Ad M%sT 1980 +-7 AT M%sT +Z America/Whitehorse -9:0:12 - LMT 1900 Au 20 +-9 Ad Y%sT 1967 May 28 +-8 Ad P%sT 1980 +-8 AT P%sT +Z America/Dawson -9:17:40 - LMT 1900 Au 20 +-9 Ad Y%sT 1973 O 28 +-8 Ad P%sT 1980 +-8 AT P%sT +R Ae 1939 o - F 5 0 1 D +R Ae 1939 o - Jun 25 0 0 S +R Ae 1940 o - D 9 0 1 D +R Ae 1941 o - Ap 1 0 0 S +R Ae 1943 o - D 16 0 1 W +R Ae 1944 o - May 1 0 0 S +R Ae 1950 o - F 12 0 1 D +R Ae 1950 o - Jul 30 0 0 S +R Ae 1996 2000 - Ap Sun>=1 2 1 D +R Ae 1996 2000 - O lastSun 2 0 S +R Ae 2001 o - May Sun>=1 2 1 D +R Ae 2001 o - S lastSun 2 0 S +R Ae 2002 ma - Ap Sun>=1 2 1 D +R Ae 2002 ma - O lastSun 2 0 S +Z America/Cancun -5:47:4 - LMT 1922 Ja 1 0:12:56 +-6 - CST 1981 D 23 +-5 Ae E%sT 1998 Au 2 2 +-6 Ae C%sT 2015 F 1 2 +-5 - EST +Z America/Merida -5:58:28 - LMT 1922 Ja 1 0:1:32 +-6 - CST 1981 D 23 +-5 - EST 1982 D 2 +-6 Ae C%sT +Z America/Matamoros -6:40 - LMT 1921 D 31 23:20 +-6 - CST 1988 +-6 AE C%sT 1989 +-6 Ae C%sT 2010 +-6 AE C%sT +Z America/Monterrey -6:41:16 - LMT 1921 D 31 23:18:44 +-6 - CST 1988 +-6 AE C%sT 1989 +-6 Ae C%sT +Z America/Mexico_City -6:36:36 - LMT 1922 Ja 1 0:23:24 +-7 - MST 1927 Jun 10 23 +-6 - CST 1930 N 15 +-7 - MST 1931 May 1 23 +-6 - CST 1931 O +-7 - MST 1932 Ap +-6 Ae C%sT 2001 S 30 2 +-6 - CST 2002 F 20 +-6 Ae C%sT +Z America/Ojinaga -6:57:40 - LMT 1922 Ja 1 0:2:20 +-7 - MST 1927 Jun 10 23 +-6 - CST 1930 N 15 +-7 - MST 1931 May 1 23 +-6 - CST 1931 O +-7 - MST 1932 Ap +-6 - CST 1996 +-6 Ae C%sT 1998 +-6 - CST 1998 Ap Sun>=1 3 +-7 Ae M%sT 2010 +-7 AE M%sT +Z America/Chihuahua -7:4:20 - LMT 1921 D 31 23:55:40 +-7 - MST 1927 Jun 10 23 +-6 - CST 1930 N 15 +-7 - MST 1931 May 1 23 +-6 - CST 1931 O +-7 - MST 1932 Ap +-6 - CST 1996 +-6 Ae C%sT 1998 +-6 - CST 1998 Ap Sun>=1 3 +-7 Ae M%sT +Z America/Hermosillo -7:23:52 - LMT 1921 D 31 23:36:8 +-7 - MST 1927 Jun 10 23 +-6 - CST 1930 N 15 +-7 - MST 1931 May 1 23 +-6 - CST 1931 O +-7 - MST 1932 Ap +-6 - CST 1942 Ap 24 +-7 - MST 1949 Ja 14 +-8 - PST 1970 +-7 Ae M%sT 1999 +-7 - MST +Z America/Mazatlan -7:5:40 - LMT 1921 D 31 23:54:20 +-7 - MST 1927 Jun 10 23 +-6 - CST 1930 N 15 +-7 - MST 1931 May 1 23 +-6 - CST 1931 O +-7 - MST 1932 Ap +-6 - CST 1942 Ap 24 +-7 - MST 1949 Ja 14 +-8 - PST 1970 +-7 Ae M%sT +Z America/Bahia_Banderas -7:1 - LMT 1921 D 31 23:59 +-7 - MST 1927 Jun 10 23 +-6 - CST 1930 N 15 +-7 - MST 1931 May 1 23 +-6 - CST 1931 O +-7 - MST 1932 Ap +-6 - CST 1942 Ap 24 +-7 - MST 1949 Ja 14 +-8 - PST 1970 +-7 Ae M%sT 2010 Ap 4 2 +-6 Ae C%sT +Z America/Tijuana -7:48:4 - LMT 1922 Ja 1 0:11:56 +-7 - MST 1924 +-8 - PST 1927 Jun 10 23 +-7 - MST 1930 N 15 +-8 - PST 1931 Ap +-8 1 PDT 1931 S 30 +-8 - PST 1942 Ap 24 +-8 1 PWT 1945 Au 14 23u +-8 1 PPT 1945 N 12 +-8 - PST 1948 Ap 5 +-8 1 PDT 1949 Ja 14 +-8 - PST 1954 +-8 AI P%sT 1961 +-8 - PST 1976 +-8 AE P%sT 1996 +-8 Ae P%sT 2001 +-8 AE P%sT 2002 F 20 +-8 Ae P%sT 2010 +-8 AE P%sT +R Af 1964 1975 - O lastSun 2 0 S +R Af 1964 1975 - Ap lastSun 2 1 D +Z America/Nassau -5:9:30 - LMT 1912 Mar 2 +-5 Af E%sT 1976 +-5 AE E%sT +R Ag 1977 o - Jun 12 2 1 D +R Ag 1977 1978 - O Sun>=1 2 0 S +R Ag 1978 1980 - Ap Sun>=15 2 1 D +R Ag 1979 o - S 30 2 0 S +R Ag 1980 o - S 
25 2 0 S +Z America/Barbados -3:58:29 - LMT 1924 +-3:58:29 - BMT 1932 +-4 Ag A%sT +R Ah 1918 1942 - O Sun>=2 0 0:30 -0530 +R Ah 1919 1943 - F Sun>=9 0 0 CST +R Ah 1973 o - D 5 0 1 CDT +R Ah 1974 o - F 9 0 0 CST +R Ah 1982 o - D 18 0 1 CDT +R Ah 1983 o - F 12 0 0 CST +Z America/Belize -5:52:48 - LMT 1912 Ap +-6 Ah %s +Z Atlantic/Bermuda -4:19:18 - LMT 1930 Ja 1 2 +-4 - AST 1974 Ap 28 2 +-4 AT A%sT 1976 +-4 AE A%sT +R Ai 1979 1980 - F lastSun 0 1 D +R Ai 1979 1980 - Jun Sun>=1 0 0 S +R Ai 1991 1992 - Ja Sat>=15 0 1 D +R Ai 1991 o - Jul 1 0 0 S +R Ai 1992 o - Mar 15 0 0 S +Z America/Costa_Rica -5:36:13 - LMT 1890 +-5:36:13 - SJMT 1921 Ja 15 +-6 Ai C%sT +R Aj 1928 o - Jun 10 0 1 D +R Aj 1928 o - O 10 0 0 S +R Aj 1940 1942 - Jun Sun>=1 0 1 D +R Aj 1940 1942 - S Sun>=1 0 0 S +R Aj 1945 1946 - Jun Sun>=1 0 1 D +R Aj 1945 1946 - S Sun>=1 0 0 S +R Aj 1965 o - Jun 1 0 1 D +R Aj 1965 o - S 30 0 0 S +R Aj 1966 o - May 29 0 1 D +R Aj 1966 o - O 2 0 0 S +R Aj 1967 o - Ap 8 0 1 D +R Aj 1967 1968 - S Sun>=8 0 0 S +R Aj 1968 o - Ap 14 0 1 D +R Aj 1969 1977 - Ap lastSun 0 1 D +R Aj 1969 1971 - O lastSun 0 0 S +R Aj 1972 1974 - O 8 0 0 S +R Aj 1975 1977 - O lastSun 0 0 S +R Aj 1978 o - May 7 0 1 D +R Aj 1978 1990 - O Sun>=8 0 0 S +R Aj 1979 1980 - Mar Sun>=15 0 1 D +R Aj 1981 1985 - May Sun>=5 0 1 D +R Aj 1986 1989 - Mar Sun>=14 0 1 D +R Aj 1990 1997 - Ap Sun>=1 0 1 D +R Aj 1991 1995 - O Sun>=8 0s 0 S +R Aj 1996 o - O 6 0s 0 S +R Aj 1997 o - O 12 0s 0 S +R Aj 1998 1999 - Mar lastSun 0s 1 D +R Aj 1998 2003 - O lastSun 0s 0 S +R Aj 2000 2003 - Ap Sun>=1 0s 1 D +R Aj 2004 o - Mar lastSun 0s 1 D +R Aj 2006 2010 - O lastSun 0s 0 S +R Aj 2007 o - Mar Sun>=8 0s 1 D +R Aj 2008 o - Mar Sun>=15 0s 1 D +R Aj 2009 2010 - Mar Sun>=8 0s 1 D +R Aj 2011 o - Mar Sun>=15 0s 1 D +R Aj 2011 o - N 13 0s 0 S +R Aj 2012 o - Ap 1 0s 1 D +R Aj 2012 ma - N Sun>=1 0s 0 S +R Aj 2013 ma - Mar Sun>=8 0s 1 D +Z America/Havana -5:29:28 - LMT 1890 +-5:29:36 - HMT 1925 Jul 19 12 +-5 Aj C%sT +R Ak 1966 o - O 30 0 1 EDT +R Ak 1967 o - F 28 0 0 EST +R Ak 1969 1973 - O lastSun 0 0:30 -0430 +R Ak 1970 o - F 21 0 0 EST +R Ak 1971 o - Ja 20 0 0 EST +R Ak 1972 1974 - Ja 21 0 0 EST +Z America/Santo_Domingo -4:39:36 - LMT 1890 +-4:40 - SDMT 1933 Ap 1 12 +-5 Ak %s 1974 O 27 +-4 - AST 2000 O 29 2 +-5 AE E%sT 2000 D 3 1 +-4 - AST +R Al 1987 1988 - May Sun>=1 0 1 D +R Al 1987 1988 - S lastSun 0 0 S +Z America/El_Salvador -5:56:48 - LMT 1921 +-6 Al C%sT +R Am 1973 o - N 25 0 1 D +R Am 1974 o - F 24 0 0 S +R Am 1983 o - May 21 0 1 D +R Am 1983 o - S 22 0 0 S +R Am 1991 o - Mar 23 0 1 D +R Am 1991 o - S 7 0 0 S +R Am 2006 o - Ap 30 0 1 D +R Am 2006 o - O 1 0 0 S +Z America/Guatemala -6:2:4 - LMT 1918 O 5 +-6 Am C%sT +R An 1983 o - May 8 0 1 D +R An 1984 1987 - Ap lastSun 0 1 D +R An 1983 1987 - O lastSun 0 0 S +R An 1988 1997 - Ap Sun>=1 1s 1 D +R An 1988 1997 - O lastSun 1s 0 S +R An 2005 2006 - Ap Sun>=1 0 1 D +R An 2005 2006 - O lastSun 0 0 S +R An 2012 2015 - Mar Sun>=8 2 1 D +R An 2012 2015 - N Sun>=1 2 0 S +R An 2017 ma - Mar Sun>=8 2 1 D +R An 2017 ma - N Sun>=1 2 0 S +Z America/Port-au-Prince -4:49:20 - LMT 1890 +-4:49 - PPMT 1917 Ja 24 12 +-5 An E%sT +R Ao 1987 1988 - May Sun>=1 0 1 D +R Ao 1987 1988 - S lastSun 0 0 S +R Ao 2006 o - May Sun>=1 0 1 D +R Ao 2006 o - Au M>=1 0 0 S +Z America/Tegucigalpa -5:48:52 - LMT 1921 Ap +-6 Ao C%sT +Z America/Jamaica -5:7:11 - LMT 1890 +-5:7:11 - KMT 1912 F +-5 - EST 1974 +-5 AE E%sT 1984 +-5 - EST +Z America/Martinique -4:4:20 - LMT 1890 +-4:4:20 - FFMT 1911 May +-4 - AST 1980 Ap 6 +-4 1 ADT 1980 S 28 +-4 - AST 
+R Ap 1979 1980 - Mar Sun>=16 0 1 D +R Ap 1979 1980 - Jun M>=23 0 0 S +R Ap 2005 o - Ap 10 0 1 D +R Ap 2005 o - O Sun>=1 0 0 S +R Ap 2006 o - Ap 30 2 1 D +R Ap 2006 o - O Sun>=1 1 0 S +Z America/Managua -5:45:8 - LMT 1890 +-5:45:12 - MMT 1934 Jun 23 +-6 - CST 1973 May +-5 - EST 1975 F 16 +-6 Ap C%sT 1992 Ja 1 4 +-5 - EST 1992 S 24 +-6 - CST 1993 +-5 - EST 1997 +-6 Ap C%sT +Z America/Panama -5:18:8 - LMT 1890 +-5:19:36 - CMT 1908 Ap 22 +-5 - EST +Li America/Panama America/Cayman +Z America/Puerto_Rico -4:24:25 - LMT 1899 Mar 28 12 +-4 - AST 1942 May 3 +-4 AE A%sT 1946 +-4 - AST +Z America/Miquelon -3:44:40 - LMT 1911 May 15 +-4 - AST 1980 May +-3 - -03 1987 +-3 AT -03/-02 +Z America/Grand_Turk -4:44:32 - LMT 1890 +-5:7:11 - KMT 1912 F +-5 - EST 1979 +-5 AE E%sT 2015 N Sun>=1 2 +-4 - AST 2018 Mar 11 3 +-5 AE E%sT +R Aq 1930 o - D 1 0 1 S +R Aq 1931 o - Ap 1 0 0 - +R Aq 1931 o - O 15 0 1 S +R Aq 1932 1940 - Mar 1 0 0 - +R Aq 1932 1939 - N 1 0 1 S +R Aq 1940 o - Jul 1 0 1 S +R Aq 1941 o - Jun 15 0 0 - +R Aq 1941 o - O 15 0 1 S +R Aq 1943 o - Au 1 0 0 - +R Aq 1943 o - O 15 0 1 S +R Aq 1946 o - Mar 1 0 0 - +R Aq 1946 o - O 1 0 1 S +R Aq 1963 o - O 1 0 0 - +R Aq 1963 o - D 15 0 1 S +R Aq 1964 1966 - Mar 1 0 0 - +R Aq 1964 1966 - O 15 0 1 S +R Aq 1967 o - Ap 2 0 0 - +R Aq 1967 1968 - O Sun>=1 0 1 S +R Aq 1968 1969 - Ap Sun>=1 0 0 - +R Aq 1974 o - Ja 23 0 1 S +R Aq 1974 o - May 1 0 0 - +R Aq 1988 o - D 1 0 1 S +R Aq 1989 1993 - Mar Sun>=1 0 0 - +R Aq 1989 1992 - O Sun>=15 0 1 S +R Aq 1999 o - O Sun>=1 0 1 S +R Aq 2000 o - Mar 3 0 0 - +R Aq 2007 o - D 30 0 1 S +R Aq 2008 2009 - Mar Sun>=15 0 0 - +R Aq 2008 o - O Sun>=15 0 1 S +Z America/Argentina/Buenos_Aires -3:53:48 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 Aq -03/-02 +Z America/Argentina/Cordoba -4:16:48 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1991 Mar 3 +-4 - -04 1991 O 20 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 Aq -03/-02 +Z America/Argentina/Salta -4:21:40 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1991 Mar 3 +-4 - -04 1991 O 20 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 Aq -03/-02 2008 O 18 +-3 - -03 +Z America/Argentina/Tucuman -4:20:52 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1991 Mar 3 +-4 - -04 1991 O 20 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 - -03 2004 Jun +-4 - -04 2004 Jun 13 +-3 Aq -03/-02 +Z America/Argentina/La_Rioja -4:27:24 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1991 Mar +-4 - -04 1991 May 7 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 - -03 2004 Jun +-4 - -04 2004 Jun 20 +-3 Aq -03/-02 2008 O 18 +-3 - -03 +Z America/Argentina/San_Juan -4:34:4 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1991 Mar +-4 - -04 1991 May 7 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 - -03 2004 May 31 +-4 - -04 2004 Jul 25 +-3 Aq -03/-02 2008 O 18 +-3 - -03 +Z America/Argentina/Jujuy -4:21:12 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1990 Mar 4 +-4 - -04 1990 O 28 +-4 1 -03 1991 Mar 17 +-4 - -04 1991 O 6 +-3 1 -02 1992 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 Aq -03/-02 2008 O 18 +-3 - -03 +Z America/Argentina/Catamarca -4:23:8 - LMT 1894 O 31 +-4:16:48 - CMT 1920 
May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1991 Mar 3 +-4 - -04 1991 O 20 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 - -03 2004 Jun +-4 - -04 2004 Jun 20 +-3 Aq -03/-02 2008 O 18 +-3 - -03 +Z America/Argentina/Mendoza -4:35:16 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1990 Mar 4 +-4 - -04 1990 O 15 +-4 1 -03 1991 Mar +-4 - -04 1991 O 15 +-4 1 -03 1992 Mar +-4 - -04 1992 O 18 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 - -03 2004 May 23 +-4 - -04 2004 S 26 +-3 Aq -03/-02 2008 O 18 +-3 - -03 +R Ar 2008 2009 - Mar Sun>=8 0 0 - +R Ar 2007 2008 - O Sun>=8 0 1 S +Z America/Argentina/San_Luis -4:25:24 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1990 +-3 1 -02 1990 Mar 14 +-4 - -04 1990 O 15 +-4 1 -03 1991 Mar +-4 - -04 1991 Jun +-3 - -03 1999 O 3 +-4 1 -03 2000 Mar 3 +-3 - -03 2004 May 31 +-4 - -04 2004 Jul 25 +-3 Aq -03/-02 2008 Ja 21 +-4 Ar -04/-03 2009 O 11 +-3 - -03 +Z America/Argentina/Rio_Gallegos -4:36:52 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 - -03 2004 Jun +-4 - -04 2004 Jun 20 +-3 Aq -03/-02 2008 O 18 +-3 - -03 +Z America/Argentina/Ushuaia -4:33:12 - LMT 1894 O 31 +-4:16:48 - CMT 1920 May +-4 - -04 1930 D +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1999 O 3 +-4 Aq -04/-03 2000 Mar 3 +-3 - -03 2004 May 30 +-4 - -04 2004 Jun 20 +-3 Aq -03/-02 2008 O 18 +-3 - -03 +Li America/Curacao America/Aruba +Z America/La_Paz -4:32:36 - LMT 1890 +-4:32:36 - CMT 1931 O 15 +-4:32:36 1 BOST 1932 Mar 21 +-4 - -04 +R As 1931 o - O 3 11 1 S +R As 1932 1933 - Ap 1 0 0 - +R As 1932 o - O 3 0 1 S +R As 1949 1952 - D 1 0 1 S +R As 1950 o - Ap 16 1 0 - +R As 1951 1952 - Ap 1 0 0 - +R As 1953 o - Mar 1 0 0 - +R As 1963 o - D 9 0 1 S +R As 1964 o - Mar 1 0 0 - +R As 1965 o - Ja 31 0 1 S +R As 1965 o - Mar 31 0 0 - +R As 1965 o - D 1 0 1 S +R As 1966 1968 - Mar 1 0 0 - +R As 1966 1967 - N 1 0 1 S +R As 1985 o - N 2 0 1 S +R As 1986 o - Mar 15 0 0 - +R As 1986 o - O 25 0 1 S +R As 1987 o - F 14 0 0 - +R As 1987 o - O 25 0 1 S +R As 1988 o - F 7 0 0 - +R As 1988 o - O 16 0 1 S +R As 1989 o - Ja 29 0 0 - +R As 1989 o - O 15 0 1 S +R As 1990 o - F 11 0 0 - +R As 1990 o - O 21 0 1 S +R As 1991 o - F 17 0 0 - +R As 1991 o - O 20 0 1 S +R As 1992 o - F 9 0 0 - +R As 1992 o - O 25 0 1 S +R As 1993 o - Ja 31 0 0 - +R As 1993 1995 - O Sun>=11 0 1 S +R As 1994 1995 - F Sun>=15 0 0 - +R As 1996 o - F 11 0 0 - +R As 1996 o - O 6 0 1 S +R As 1997 o - F 16 0 0 - +R As 1997 o - O 6 0 1 S +R As 1998 o - Mar 1 0 0 - +R As 1998 o - O 11 0 1 S +R As 1999 o - F 21 0 0 - +R As 1999 o - O 3 0 1 S +R As 2000 o - F 27 0 0 - +R As 2000 2001 - O Sun>=8 0 1 S +R As 2001 2006 - F Sun>=15 0 0 - +R As 2002 o - N 3 0 1 S +R As 2003 o - O 19 0 1 S +R As 2004 o - N 2 0 1 S +R As 2005 o - O 16 0 1 S +R As 2006 o - N 5 0 1 S +R As 2007 o - F 25 0 0 - +R As 2007 o - O Sun>=8 0 1 S +R As 2008 ma - O Sun>=15 0 1 S +R As 2008 2011 - F Sun>=15 0 0 - +R As 2012 o - F Sun>=22 0 0 - +R As 2013 2014 - F Sun>=15 0 0 - +R As 2015 o - F Sun>=22 0 0 - +R As 2016 2022 - F Sun>=15 0 0 - +R As 2023 o - F Sun>=22 0 0 - +R As 2024 2025 - F Sun>=15 0 0 - +R As 2026 o - F Sun>=22 0 0 - +R As 2027 2033 - F Sun>=15 0 0 - +R As 2034 o - F Sun>=22 0 0 - +R As 2035 2036 - F Sun>=15 0 0 - +R As 2037 o - F Sun>=22 0 0 - +R As 2038 ma - F Sun>=15 0 0 - +Z America/Noronha -2:9:40 - LMT 1914 +-2 As -02/-01 1990 S 17 +-2 - -02 1999 S 30 +-2 As 
-02/-01 2000 O 15 +-2 - -02 2001 S 13 +-2 As -02/-01 2002 O +-2 - -02 +Z America/Belem -3:13:56 - LMT 1914 +-3 As -03/-02 1988 S 12 +-3 - -03 +Z America/Santarem -3:38:48 - LMT 1914 +-4 As -04/-03 1988 S 12 +-4 - -04 2008 Jun 24 +-3 - -03 +Z America/Fortaleza -2:34 - LMT 1914 +-3 As -03/-02 1990 S 17 +-3 - -03 1999 S 30 +-3 As -03/-02 2000 O 22 +-3 - -03 2001 S 13 +-3 As -03/-02 2002 O +-3 - -03 +Z America/Recife -2:19:36 - LMT 1914 +-3 As -03/-02 1990 S 17 +-3 - -03 1999 S 30 +-3 As -03/-02 2000 O 15 +-3 - -03 2001 S 13 +-3 As -03/-02 2002 O +-3 - -03 +Z America/Araguaina -3:12:48 - LMT 1914 +-3 As -03/-02 1990 S 17 +-3 - -03 1995 S 14 +-3 As -03/-02 2003 S 24 +-3 - -03 2012 O 21 +-3 As -03/-02 2013 S +-3 - -03 +Z America/Maceio -2:22:52 - LMT 1914 +-3 As -03/-02 1990 S 17 +-3 - -03 1995 O 13 +-3 As -03/-02 1996 S 4 +-3 - -03 1999 S 30 +-3 As -03/-02 2000 O 22 +-3 - -03 2001 S 13 +-3 As -03/-02 2002 O +-3 - -03 +Z America/Bahia -2:34:4 - LMT 1914 +-3 As -03/-02 2003 S 24 +-3 - -03 2011 O 16 +-3 As -03/-02 2012 O 21 +-3 - -03 +Z America/Sao_Paulo -3:6:28 - LMT 1914 +-3 As -03/-02 1963 O 23 +-3 1 -02 1964 +-3 As -03/-02 +Z America/Campo_Grande -3:38:28 - LMT 1914 +-4 As -04/-03 +Z America/Cuiaba -3:44:20 - LMT 1914 +-4 As -04/-03 2003 S 24 +-4 - -04 2004 O +-4 As -04/-03 +Z America/Porto_Velho -4:15:36 - LMT 1914 +-4 As -04/-03 1988 S 12 +-4 - -04 +Z America/Boa_Vista -4:2:40 - LMT 1914 +-4 As -04/-03 1988 S 12 +-4 - -04 1999 S 30 +-4 As -04/-03 2000 O 15 +-4 - -04 +Z America/Manaus -4:0:4 - LMT 1914 +-4 As -04/-03 1988 S 12 +-4 - -04 1993 S 28 +-4 As -04/-03 1994 S 22 +-4 - -04 +Z America/Eirunepe -4:39:28 - LMT 1914 +-5 As -05/-04 1988 S 12 +-5 - -05 1993 S 28 +-5 As -05/-04 1994 S 22 +-5 - -05 2008 Jun 24 +-4 - -04 2013 N 10 +-5 - -05 +Z America/Rio_Branco -4:31:12 - LMT 1914 +-5 As -05/-04 1988 S 12 +-5 - -05 2008 Jun 24 +-4 - -04 2013 N 10 +-5 - -05 +R At 1927 1931 - S 1 0 1 S +R At 1928 1932 - Ap 1 0 0 - +R At 1968 o - N 3 4u 1 S +R At 1969 o - Mar 30 3u 0 - +R At 1969 o - N 23 4u 1 S +R At 1970 o - Mar 29 3u 0 - +R At 1971 o - Mar 14 3u 0 - +R At 1970 1972 - O Sun>=9 4u 1 S +R At 1972 1986 - Mar Sun>=9 3u 0 - +R At 1973 o - S 30 4u 1 S +R At 1974 1987 - O Sun>=9 4u 1 S +R At 1987 o - Ap 12 3u 0 - +R At 1988 1990 - Mar Sun>=9 3u 0 - +R At 1988 1989 - O Sun>=9 4u 1 S +R At 1990 o - S 16 4u 1 S +R At 1991 1996 - Mar Sun>=9 3u 0 - +R At 1991 1997 - O Sun>=9 4u 1 S +R At 1997 o - Mar 30 3u 0 - +R At 1998 o - Mar Sun>=9 3u 0 - +R At 1998 o - S 27 4u 1 S +R At 1999 o - Ap 4 3u 0 - +R At 1999 2010 - O Sun>=9 4u 1 S +R At 2000 2007 - Mar Sun>=9 3u 0 - +R At 2008 o - Mar 30 3u 0 - +R At 2009 o - Mar Sun>=9 3u 0 - +R At 2010 o - Ap Sun>=1 3u 0 - +R At 2011 o - May Sun>=2 3u 0 - +R At 2011 o - Au Sun>=16 4u 1 S +R At 2012 2014 - Ap Sun>=23 3u 0 - +R At 2012 2014 - S Sun>=2 4u 1 S +R At 2016 ma - May Sun>=9 3u 0 - +R At 2016 ma - Au Sun>=9 4u 1 S +Z America/Santiago -4:42:46 - LMT 1890 +-4:42:46 - SMT 1910 Ja 10 +-5 - -05 1916 Jul +-4:42:46 - SMT 1918 S 10 +-4 - -04 1919 Jul +-4:42:46 - SMT 1927 S +-5 At -05/-04 1932 S +-4 - -04 1942 Jun +-5 - -05 1942 Au +-4 - -04 1946 Jul 15 +-4 1 -03 1946 S +-4 - -04 1947 Ap +-5 - -05 1947 May 21 23 +-4 At -04/-03 +Z America/Punta_Arenas -4:43:40 - LMT 1890 +-4:42:46 - SMT 1910 Ja 10 +-5 - -05 1916 Jul +-4:42:46 - SMT 1918 S 10 +-4 - -04 1919 Jul +-4:42:46 - SMT 1927 S +-5 At -05/-04 1932 S +-4 - -04 1942 Jun +-5 - -05 1942 Au +-4 - -04 1947 Ap +-5 - -05 1947 May 21 23 +-4 At -04/-03 2016 D 4 +-3 - -03 +Z Pacific/Easter -7:17:28 - LMT 1890 +-7:17:28 - EMT 1932 
S +-7 At -07/-06 1982 Mar 14 3u +-6 At -06/-05 +Z Antarctica/Palmer 0 - -00 1965 +-4 Aq -04/-03 1969 O 5 +-3 Aq -03/-02 1982 May +-4 At -04/-03 2016 D 4 +-3 - -03 +R Au 1992 o - May 3 0 1 S +R Au 1993 o - Ap 4 0 0 - +Z America/Bogota -4:56:16 - LMT 1884 Mar 13 +-4:56:16 - BMT 1914 N 23 +-5 Au -05/-04 +Z America/Curacao -4:35:47 - LMT 1912 F 12 +-4:30 - -0430 1965 +-4 - AST +Li America/Curacao America/Lower_Princes +Li America/Curacao America/Kralendijk +R Av 1992 o - N 28 0 1 S +R Av 1993 o - F 5 0 0 - +Z America/Guayaquil -5:19:20 - LMT 1890 +-5:14 - QMT 1931 +-5 Av -05/-04 +Z Pacific/Galapagos -5:58:24 - LMT 1931 +-5 - -05 1986 +-6 Av -06/-05 +R Aw 1937 1938 - S lastSun 0 1 S +R Aw 1938 1942 - Mar Sun>=19 0 0 - +R Aw 1939 o - O 1 0 1 S +R Aw 1940 1942 - S lastSun 0 1 S +R Aw 1943 o - Ja 1 0 0 - +R Aw 1983 o - S lastSun 0 1 S +R Aw 1984 1985 - Ap lastSun 0 0 - +R Aw 1984 o - S 16 0 1 S +R Aw 1985 2000 - S Sun>=9 0 1 S +R Aw 1986 2000 - Ap Sun>=16 0 0 - +R Aw 2001 2010 - Ap Sun>=15 2 0 - +R Aw 2001 2010 - S Sun>=1 2 1 S +Z Atlantic/Stanley -3:51:24 - LMT 1890 +-3:51:24 - SMT 1912 Mar 12 +-4 Aw -04/-03 1983 May +-3 Aw -03/-02 1985 S 15 +-4 Aw -04/-03 2010 S 5 2 +-3 - -03 +Z America/Cayenne -3:29:20 - LMT 1911 Jul +-4 - -04 1967 O +-3 - -03 +Z America/Guyana -3:52:40 - LMT 1915 Mar +-3:45 - -0345 1975 Jul 31 +-3 - -03 1991 +-4 - -04 +R Ax 1975 1988 - O 1 0 1 S +R Ax 1975 1978 - Mar 1 0 0 - +R Ax 1979 1991 - Ap 1 0 0 - +R Ax 1989 o - O 22 0 1 S +R Ax 1990 o - O 1 0 1 S +R Ax 1991 o - O 6 0 1 S +R Ax 1992 o - Mar 1 0 0 - +R Ax 1992 o - O 5 0 1 S +R Ax 1993 o - Mar 31 0 0 - +R Ax 1993 1995 - O 1 0 1 S +R Ax 1994 1995 - F lastSun 0 0 - +R Ax 1996 o - Mar 1 0 0 - +R Ax 1996 2001 - O Sun>=1 0 1 S +R Ax 1997 o - F lastSun 0 0 - +R Ax 1998 2001 - Mar Sun>=1 0 0 - +R Ax 2002 2004 - Ap Sun>=1 0 0 - +R Ax 2002 2003 - S Sun>=1 0 1 S +R Ax 2004 2009 - O Sun>=15 0 1 S +R Ax 2005 2009 - Mar Sun>=8 0 0 - +R Ax 2010 ma - O Sun>=1 0 1 S +R Ax 2010 2012 - Ap Sun>=8 0 0 - +R Ax 2013 ma - Mar Sun>=22 0 0 - +Z America/Asuncion -3:50:40 - LMT 1890 +-3:50:40 - AMT 1931 O 10 +-4 - -04 1972 O +-3 - -03 1974 Ap +-4 Ax -04/-03 +R Ay 1938 o - Ja 1 0 1 S +R Ay 1938 o - Ap 1 0 0 - +R Ay 1938 1939 - S lastSun 0 1 S +R Ay 1939 1940 - Mar Sun>=24 0 0 - +R Ay 1986 1987 - Ja 1 0 1 S +R Ay 1986 1987 - Ap 1 0 0 - +R Ay 1990 o - Ja 1 0 1 S +R Ay 1990 o - Ap 1 0 0 - +R Ay 1994 o - Ja 1 0 1 S +R Ay 1994 o - Ap 1 0 0 - +Z America/Lima -5:8:12 - LMT 1890 +-5:8:36 - LMT 1908 Jul 28 +-5 Ay -05/-04 +Z Atlantic/South_Georgia -2:26:8 - LMT 1890 +-2 - -02 +Z America/Paramaribo -3:40:40 - LMT 1911 +-3:40:52 - PMT 1935 +-3:40:36 - PMT 1945 O +-3:30 - -0330 1984 O +-3 - -03 +Z America/Port_of_Spain -4:6:4 - LMT 1912 Mar 2 +-4 - AST +Li America/Port_of_Spain America/Anguilla +Li America/Port_of_Spain America/Antigua +Li America/Port_of_Spain America/Dominica +Li America/Port_of_Spain America/Grenada +Li America/Port_of_Spain America/Guadeloupe +Li America/Port_of_Spain America/Marigot +Li America/Port_of_Spain America/Montserrat +Li America/Port_of_Spain America/St_Barthelemy +Li America/Port_of_Spain America/St_Kitts +Li America/Port_of_Spain America/St_Lucia +Li America/Port_of_Spain America/St_Thomas +Li America/Port_of_Spain America/St_Vincent +Li America/Port_of_Spain America/Tortola +R Az 1923 o - O 2 0 0:30 HS +R Az 1924 1926 - Ap 1 0 0 - +R Az 1924 1925 - O 1 0 0:30 HS +R Az 1933 1935 - O lastSun 0 0:30 HS +R Az 1934 1936 - Mar Sat>=25 23:30s 0 - +R Az 1936 o - N 1 0 0:30 HS +R Az 1937 1941 - Mar lastSun 0 0 - +R Az 1937 1940 - O 
lastSun 0 0:30 HS +R Az 1941 o - Au 1 0 0:30 HS +R Az 1942 o - Ja 1 0 0 - +R Az 1942 o - D 14 0 1 S +R Az 1943 o - Mar 14 0 0 - +R Az 1959 o - May 24 0 1 S +R Az 1959 o - N 15 0 0 - +R Az 1960 o - Ja 17 0 1 S +R Az 1960 o - Mar 6 0 0 - +R Az 1965 1967 - Ap Sun>=1 0 1 S +R Az 1965 o - S 26 0 0 - +R Az 1966 1967 - O 31 0 0 - +R Az 1968 1970 - May 27 0 0:30 HS +R Az 1968 1970 - D 2 0 0 - +R Az 1972 o - Ap 24 0 1 S +R Az 1972 o - Au 15 0 0 - +R Az 1974 o - Mar 10 0 0:30 HS +R Az 1974 o - D 22 0 1 S +R Az 1976 o - O 1 0 0 - +R Az 1977 o - D 4 0 1 S +R Az 1978 o - Ap 1 0 0 - +R Az 1979 o - O 1 0 1 S +R Az 1980 o - May 1 0 0 - +R Az 1987 o - D 14 0 1 S +R Az 1988 o - Mar 14 0 0 - +R Az 1988 o - D 11 0 1 S +R Az 1989 o - Mar 12 0 0 - +R Az 1989 o - O 29 0 1 S +R Az 1990 1992 - Mar Sun>=1 0 0 - +R Az 1990 1991 - O Sun>=21 0 1 S +R Az 1992 o - O 18 0 1 S +R Az 1993 o - F 28 0 0 - +R Az 2004 o - S 19 0 1 S +R Az 2005 o - Mar 27 2 0 - +R Az 2005 o - O 9 2 1 S +R Az 2006 o - Mar 12 2 0 - +R Az 2006 2014 - O Sun>=1 2 1 S +R Az 2007 2015 - Mar Sun>=8 2 0 - +Z America/Montevideo -3:44:44 - LMT 1898 Jun 28 +-3:44:44 - MMT 1920 May +-3:30 Az -0330/-03 1942 D 14 +-3 Az -03/-02 1968 +-3 Az -03/-0230 1971 +-3 Az -03/-02 1974 +-3 Az -03/-0230 1974 D 22 +-3 Az -03/-02 +Z America/Caracas -4:27:44 - LMT 1890 +-4:27:40 - CMT 1912 F 12 +-4:30 - -0430 1965 +-4 - -04 2007 D 9 3 +-4:30 - -0430 2016 May 1 2:30 +-4 - -04 +Z Etc/GMT 0 - GMT +Z Etc/UTC 0 - UTC +Z Etc/UCT 0 - UCT +Li Etc/GMT GMT +Li Etc/UTC Etc/Universal +Li Etc/UTC Etc/Zulu +Li Etc/GMT Etc/Greenwich +Li Etc/GMT Etc/GMT-0 +Li Etc/GMT Etc/GMT+0 +Li Etc/GMT Etc/GMT0 +Z Etc/GMT-14 14 - +14 +Z Etc/GMT-13 13 - +13 +Z Etc/GMT-12 12 - +12 +Z Etc/GMT-11 11 - +11 +Z Etc/GMT-10 10 - +10 +Z Etc/GMT-9 9 - +09 +Z Etc/GMT-8 8 - +08 +Z Etc/GMT-7 7 - +07 +Z Etc/GMT-6 6 - +06 +Z Etc/GMT-5 5 - +05 +Z Etc/GMT-4 4 - +04 +Z Etc/GMT-3 3 - +03 +Z Etc/GMT-2 2 - +02 +Z Etc/GMT-1 1 - +01 +Z Etc/GMT+1 -1 - -01 +Z Etc/GMT+2 -2 - -02 +Z Etc/GMT+3 -3 - -03 +Z Etc/GMT+4 -4 - -04 +Z Etc/GMT+5 -5 - -05 +Z Etc/GMT+6 -6 - -06 +Z Etc/GMT+7 -7 - -07 +Z Etc/GMT+8 -8 - -08 +Z Etc/GMT+9 -9 - -09 +Z Etc/GMT+10 -10 - -10 +Z Etc/GMT+11 -11 - -11 +Z Etc/GMT+12 -12 - -12 +Li Africa/Nairobi Africa/Asmera +Li Africa/Abidjan Africa/Timbuktu +Li America/Argentina/Catamarca America/Argentina/ComodRivadavia +Li America/Adak America/Atka +Li America/Argentina/Buenos_Aires America/Buenos_Aires +Li America/Argentina/Catamarca America/Catamarca +Li America/Atikokan America/Coral_Harbour +Li America/Argentina/Cordoba America/Cordoba +Li America/Tijuana America/Ensenada +Li America/Indiana/Indianapolis America/Fort_Wayne +Li America/Indiana/Indianapolis America/Indianapolis +Li America/Argentina/Jujuy America/Jujuy +Li America/Indiana/Knox America/Knox_IN +Li America/Kentucky/Louisville America/Louisville +Li America/Argentina/Mendoza America/Mendoza +Li America/Toronto America/Montreal +Li America/Rio_Branco America/Porto_Acre +Li America/Argentina/Cordoba America/Rosario +Li America/Tijuana America/Santa_Isabel +Li America/Denver America/Shiprock +Li America/Port_of_Spain America/Virgin +Li Pacific/Auckland Antarctica/South_Pole +Li Asia/Ashgabat Asia/Ashkhabad +Li Asia/Kolkata Asia/Calcutta +Li Asia/Shanghai Asia/Chongqing +Li Asia/Shanghai Asia/Chungking +Li Asia/Dhaka Asia/Dacca +Li Asia/Shanghai Asia/Harbin +Li Asia/Urumqi Asia/Kashgar +Li Asia/Kathmandu Asia/Katmandu +Li Asia/Macau Asia/Macao +Li Asia/Yangon Asia/Rangoon +Li Asia/Ho_Chi_Minh Asia/Saigon +Li Asia/Jerusalem Asia/Tel_Aviv +Li Asia/Thimphu 
Asia/Thimbu +Li Asia/Makassar Asia/Ujung_Pandang +Li Asia/Ulaanbaatar Asia/Ulan_Bator +Li Atlantic/Faroe Atlantic/Faeroe +Li Europe/Oslo Atlantic/Jan_Mayen +Li Australia/Sydney Australia/ACT +Li Australia/Sydney Australia/Canberra +Li Australia/Lord_Howe Australia/LHI +Li Australia/Sydney Australia/NSW +Li Australia/Darwin Australia/North +Li Australia/Brisbane Australia/Queensland +Li Australia/Adelaide Australia/South +Li Australia/Hobart Australia/Tasmania +Li Australia/Melbourne Australia/Victoria +Li Australia/Perth Australia/West +Li Australia/Broken_Hill Australia/Yancowinna +Li America/Rio_Branco Brazil/Acre +Li America/Noronha Brazil/DeNoronha +Li America/Sao_Paulo Brazil/East +Li America/Manaus Brazil/West +Li America/Halifax Canada/Atlantic +Li America/Winnipeg Canada/Central +Li America/Toronto Canada/Eastern +Li America/Edmonton Canada/Mountain +Li America/St_Johns Canada/Newfoundland +Li America/Vancouver Canada/Pacific +Li America/Regina Canada/Saskatchewan +Li America/Whitehorse Canada/Yukon +Li America/Santiago Chile/Continental +Li Pacific/Easter Chile/EasterIsland +Li America/Havana Cuba +Li Africa/Cairo Egypt +Li Europe/Dublin Eire +Li Europe/London Europe/Belfast +Li Europe/Chisinau Europe/Tiraspol +Li Europe/London GB +Li Europe/London GB-Eire +Li Etc/GMT GMT+0 +Li Etc/GMT GMT-0 +Li Etc/GMT GMT0 +Li Etc/GMT Greenwich +Li Asia/Hong_Kong Hongkong +Li Atlantic/Reykjavik Iceland +Li Asia/Tehran Iran +Li Asia/Jerusalem Israel +Li America/Jamaica Jamaica +Li Asia/Tokyo Japan +Li Pacific/Kwajalein Kwajalein +Li Africa/Tripoli Libya +Li America/Tijuana Mexico/BajaNorte +Li America/Mazatlan Mexico/BajaSur +Li America/Mexico_City Mexico/General +Li Pacific/Auckland NZ +Li Pacific/Chatham NZ-CHAT +Li America/Denver Navajo +Li Asia/Shanghai PRC +Li Pacific/Honolulu Pacific/Johnston +Li Pacific/Pohnpei Pacific/Ponape +Li Pacific/Pago_Pago Pacific/Samoa +Li Pacific/Chuuk Pacific/Truk +Li Pacific/Chuuk Pacific/Yap +Li Europe/Warsaw Poland +Li Europe/Lisbon Portugal +Li Asia/Taipei ROC +Li Asia/Seoul ROK +Li Asia/Singapore Singapore +Li Europe/Istanbul Turkey +Li Etc/UCT UCT +Li America/Anchorage US/Alaska +Li America/Adak US/Aleutian +Li America/Phoenix US/Arizona +Li America/Chicago US/Central +Li America/Indiana/Indianapolis US/East-Indiana +Li America/New_York US/Eastern +Li Pacific/Honolulu US/Hawaii +Li America/Indiana/Knox US/Indiana-Starke +Li America/Detroit US/Michigan +Li America/Denver US/Mountain +Li America/Los_Angeles US/Pacific +Li Pacific/Pago_Pago US/Samoa +Li Etc/UTC UTC +Li Etc/UTC Universal +Li Europe/Moscow W-SU +Li Etc/UTC Zulu +Li America/Los_Angeles US/Pacific-New +Z Factory 0 - -00

From 6d4ae6a8e782d87ffb6aab62f75787b2722daa2d Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Sat, 25 Nov 2017 18:15:22 -0500
Subject: [PATCH 0610/1087] Update MSVC build process for new timezone data.

Missed this dependency in commits 7cce222c9 et al.
--- src/tools/msvc/Install.pm | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/src/tools/msvc/Install.pm b/src/tools/msvc/Install.pm index 18bded431f..d8ef98e4f7 100644 --- a/src/tools/msvc/Install.pm +++ b/src/tools/msvc/Install.pm @@ -381,8 +381,8 @@ sub GenerateTimezoneFiles my $mf = read_file("src/timezone/Makefile"); $mf =~ s{\\\r?\n}{}g; - $mf =~ /^TZDATA\s*:?=\s*(.*)$/m - || die "Could not find TZDATA line in timezone makefile\n"; + $mf =~ /^TZDATAFILES\s*:?=\s*(.*)$/m + || die "Could not find TZDATAFILES line in timezone makefile\n"; my @tzfiles = split /\s+/, $1; $mf =~ /^POSIXRULES\s*:?=\s*(.*)$/m @@ -397,7 +397,8 @@ sub GenerateTimezoneFiles foreach (@tzfiles) { my $tzfile = $_; - push(@args, "src/timezone/data/$tzfile"); + $tzfile =~ s|\$\(srcdir\)|src/timezone|; + push(@args, $tzfile); } system(@args); From 752714dd9de3d1b919d582ddaac96425a6cfa66d Mon Sep 17 00:00:00 2001 From: Joe Conway Date: Sun, 26 Nov 2017 09:49:40 -0800 Subject: [PATCH 0611/1087] Make has_sequence_privilege support WITH GRANT OPTION The various has_*_privilege() functions all support an optional WITH GRANT OPTION added to the supported privilege types to test whether the privilege is held with grant option. That is, all except has_sequence_privilege() variations. Fix that. Back-patch to all supported branches. Discussion: https://postgr.es/m/005147f6-8280-42e9-5a03-dd2c1e4397ef@joeconway.com --- src/backend/utils/adt/acl.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c index fa6b792d00..2f2758fe91 100644 --- a/src/backend/utils/adt/acl.c +++ b/src/backend/utils/adt/acl.c @@ -2242,8 +2242,11 @@ convert_sequence_priv_string(text *priv_type_text) { static const priv_map sequence_priv_map[] = { {"USAGE", ACL_USAGE}, + {"USAGE WITH GRANT OPTION", ACL_GRANT_OPTION_FOR(ACL_USAGE)}, {"SELECT", ACL_SELECT}, + {"SELECT WITH GRANT OPTION", ACL_GRANT_OPTION_FOR(ACL_SELECT)}, {"UPDATE", ACL_UPDATE}, + {"UPDATE WITH GRANT OPTION", ACL_GRANT_OPTION_FOR(ACL_UPDATE)}, {NULL, 0} }; From 8735978e7aebfbc499843630131c18d1f7346c79 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 26 Nov 2017 15:17:24 -0500 Subject: [PATCH 0612/1087] Pad XLogReaderState's main_data buffer more aggressively. Originally, we palloc'd this buffer just barely big enough to hold the largest xlog record seen so far. It turns out that that can result in valgrind complaints, because some compilers will emit code that assumes it can safely fetch padding bytes at the end of a struct, and those padding bytes were unallocated so far as aset.c was concerned. We can fix that by MAXALIGN'ing the palloc request size, ensuring that it is big enough to include any possible padding that might've been omitted from the on-disk record. An additional objection to the original coding is that it could result in many repeated palloc cycles, in the worst case where we see a series of gradually larger xlog records. We can ameliorate that cheaply by imposing a minimum buffer size that's large enough for most xlog records. BLCKSZ/2 was chosen after a bit of discussion. In passing, remove an obsolete comment in struct xl_heap_new_cid that the combocid field is free due to alignment considerations. Perhaps that was true at some point, but it's not now. Back-patch to 9.5 where this code came in. 
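The failure mode can be illustrated with a contrived sketch (not code from this patch): if only the "used" prefix of a struct is palloc'd, a whole-struct access may still touch the trailing padding:

    typedef struct { int32 a; char b; /* 3 trailing pad bytes */ } Demo;
    Demo *d = (Demo *) palloc(offsetof(Demo, b) + sizeof(char)); /* 5 bytes */
    Demo copy = *d;     /* the compiler may emit a single 8-byte load here,
                         * fetching 3 bytes beyond the allocation and
                         * drawing a valgrind complaint */

MAXALIGN'ing the request guarantees that any such padding lies inside the allocation.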
Discussion: https://postgr.es/m/E1eHa4J-0006hI-Q8@gemulon.postgresql.org --- src/backend/access/transam/xlogreader.c | 17 ++++++++++++++++- src/include/access/heapam_xlog.h | 8 +------- 2 files changed, 17 insertions(+), 8 deletions(-) diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c index aeaafedf0b..fa9432eb67 100644 --- a/src/backend/access/transam/xlogreader.c +++ b/src/backend/access/transam/xlogreader.c @@ -1279,7 +1279,22 @@ DecodeXLogRecord(XLogReaderState *state, XLogRecord *record, char **errormsg) { if (state->main_data) pfree(state->main_data); - state->main_data_bufsz = state->main_data_len; + + /* + * main_data_bufsz must be MAXALIGN'ed. In many xlog record + * types, we omit trailing struct padding on-disk to save a few + * bytes; but compilers may generate accesses to the xlog struct + * that assume that padding bytes are present. If the palloc + * request is not large enough to include such padding bytes then + * we'll get valgrind complaints due to otherwise-harmless fetches + * of the padding bytes. + * + * In addition, force the initial request to be reasonably large + * so that we don't waste time with lots of trips through this + * stanza. BLCKSZ / 2 seems like a good compromise choice. + */ + state->main_data_bufsz = MAXALIGN(Max(state->main_data_len, + BLCKSZ / 2)); state->main_data = palloc(state->main_data_bufsz); } memcpy(state->main_data, ptr, state->main_data_len); diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h index 81a6a395c4..5e4dee60c7 100644 --- a/src/include/access/heapam_xlog.h +++ b/src/include/access/heapam_xlog.h @@ -339,13 +339,7 @@ typedef struct xl_heap_new_cid TransactionId top_xid; CommandId cmin; CommandId cmax; - - /* - * don't really need the combocid since we have the actual values right in - * this struct, but the padding makes it free and its useful for - * debugging. - */ - CommandId combocid; + CommandId combocid; /* just for debugging */ /* * Store the relfilenode/ctid pair to facilitate lookups. From d5f965c257aed40d515e6b518422ac6e6982c46c Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Mon, 27 Nov 2017 09:24:14 +0100 Subject: [PATCH 0613/1087] Fix typo in comment Andreas Karlsson --- src/tools/msvc/config_default.pl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/tools/msvc/config_default.pl b/src/tools/msvc/config_default.pl index 4d69dc2a2e..d7a9fc5039 100644 --- a/src/tools/msvc/config_default.pl +++ b/src/tools/msvc/config_default.pl @@ -18,7 +18,7 @@ icu => undef, # --with-icu= nls => undef, # --enable-nls= tap_tests => undef, # --enable-tap-tests - tcl => undef, # --with-tls= + tcl => undef, # --with-tcl= perl => undef, # --with-perl python => undef, # --with-python= openssl => undef, # --with-openssl= From 59af8d4384ba5ae72986eab7e5cdc514a969aa05 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Mon, 27 Nov 2017 09:34:05 +0000 Subject: [PATCH 0614/1087] Pad XLogReaderState's per-buffer data_bufsz more aggressively. Originally, we palloc'd this buffer just barely big enough to hold the largest xlog backup block seen so far. We now MAXALIGN the palloc size. The original coding could result in many repeated palloc cycles, in the worst case where we see a series of gradually larger xlog records. We ameliorate that similarly to 8735978e7aebfbc499843630131c18d1f7346c79 by imposing a minimum buffer size of BLCKSZ.
Discussion: https://postgr.es/m/E1eHa4J-0006hI-Q8@gemulon.postgresql.org --- src/backend/access/transam/xlogreader.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c index fa9432eb67..0a75c36026 100644 --- a/src/backend/access/transam/xlogreader.c +++ b/src/backend/access/transam/xlogreader.c @@ -1264,7 +1264,13 @@ DecodeXLogRecord(XLogReaderState *state, XLogRecord *record, char **errormsg) { if (blk->data) pfree(blk->data); - blk->data_bufsz = blk->data_len; + + /* + * Force the initial request to be BLCKSZ so that we don't + * waste time with lots of trips through this stanza as a + * result of WAL compression. + */ + blk->data_bufsz = MAXALIGN(Max(blk->data_len, BLCKSZ)); blk->data = palloc(blk->data_bufsz); } memcpy(blk->data, ptr, blk->data_len); From 117469006bf525c6e8dc84cb9fcbdc4c1a050878 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Mon, 27 Nov 2017 09:51:51 +0000 Subject: [PATCH 0615/1087] Additional docs for toast_tuple_target changes --- doc/src/sgml/storage.sgml | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml index 128b19cbc9..c0e548fa5b 100644 --- a/doc/src/sgml/storage.sgml +++ b/doc/src/sgml/storage.sgml @@ -429,7 +429,7 @@ when a row value to be stored in a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The TOAST code will compress and/or move field values out-of-line until the row value is shorter than -TOAST_TUPLE_TARGET bytes (also normally 2 kB) +TOAST_TUPLE_TARGET bytes (also normally 2 kB, adjustable) or no more gains can be had. During an UPDATE operation, values of unchanged fields are normally preserved as-is; so an UPDATE of a row with out-of-line values incurs no TOAST costs if @@ -483,6 +483,11 @@ of that data type, but the strategy for a given table column can be altered with ALTER TABLE ... SET STORAGE.
+ +TOAST_TUPLE_TARGET can be adjusted for each table using +ALTER TABLE ... SET (toast_tuple_target = N) + + This scheme has a number of advantages compared to a more straightforward approach such as allowing row values to span pages. Assuming that queries are From 9a785ad573176b88a93563209980fbe80cd72163 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 27 Nov 2017 17:53:56 -0500 Subject: [PATCH 0616/1087] Fix creation of resjunk tlist entries for inherited mixed UPDATE/DELETE. rewriteTargetListUD's processing is dependent on the relkind of the query's target table. That was fine at the time it was made to act that way, even for queries on inheritance trees, because all tables in an inheritance tree would necessarily be plain tables. However, the 9.5 feature addition allowing some members of an inheritance tree to be foreign tables broke the assumption that rewriteTargetListUD's output tlist could be applied to all child tables with nothing more than column-number mapping. This led to visible failures if foreign child tables had row-level triggers, and would also break in cases where child tables belonged to FDWs that used methods other than CTID for row identification. To fix, delay running rewriteTargetListUD until after the planner has expanded inheritance, so that it is applied separately to the (already mapped) tlist for each child table. We can conveniently call it from preprocess_targetlist. Refactor associated code slightly to avoid the need to heap_open the target relation multiple times during preprocess_targetlist. (The APIs remain a bit ugly, particularly around the point of which steps scribble on parse->targetList and which don't. But avoiding such scribbling would require a change in FDW callback APIs, which is more pain than it's worth.) Also fix ExecModifyTable to ensure that "tupleid" is reset to NULL when we transition from rows providing a CTID to rows that don't. (That's really an independent bug, but it manifests in much the same cases.) Add a regression test checking one manifestation of this problem, which was that row-level triggers on a foreign child table did not work right. Back-patch to 9.5 where the problem was introduced. 
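A minimal sketch of the previously-failing shape (names here are illustrative — the "loopback" server and my_row_trigger() are placeholders, not parts of this patch; the postgres_fdw regression test added below exercises the real thing):

    CREATE TABLE parent (a int, b int);
    CREATE FOREIGN TABLE child (a int, b int)
        INHERITS (parent) SERVER loopback OPTIONS (table_name 'child_data');
    CREATE TRIGGER child_row_trig
        BEFORE UPDATE OR DELETE ON child
        FOR EACH ROW EXECUTE PROCEDURE my_row_trigger();
    UPDATE parent SET b = b + 1;  -- row-level trigger on the foreign child
                                  -- formerly received a wrongly-mapped row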
Etsuro Fujita, reviewed by Ildus Kurbangaliev and Ashutosh Bapat Discussion: https://postgr.es/m/20170514150525.0346ba72@postgrespro.ru --- .../postgres_fdw/expected/postgres_fdw.out | 57 ++++++++++ contrib/postgres_fdw/sql/postgres_fdw.sql | 18 ++++ doc/src/sgml/fdwhandler.sgml | 7 +- doc/src/sgml/rules.sgml | 8 +- src/backend/executor/nodeModifyTable.c | 3 +- src/backend/optimizer/plan/planner.c | 10 +- src/backend/optimizer/prep/preptlist.c | 101 +++++++++++------- src/backend/rewrite/rewriteHandler.c | 74 ++++++------- src/include/optimizer/prep.h | 5 +- src/include/rewrite/rewriteHandler.h | 3 + 10 files changed, 185 insertions(+), 101 deletions(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 4339bbf9df..1063d92825 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -7022,6 +7022,63 @@ update bar set f2 = f2 + 100 returning *; 7 | 277 (6 rows) +-- Test that UPDATE/DELETE with inherited target works with row-level triggers +CREATE TRIGGER trig_row_before +BEFORE UPDATE OR DELETE ON bar2 +FOR EACH ROW EXECUTE PROCEDURE trigger_data(23,'skidoo'); +CREATE TRIGGER trig_row_after +AFTER UPDATE OR DELETE ON bar2 +FOR EACH ROW EXECUTE PROCEDURE trigger_data(23,'skidoo'); +explain (verbose, costs off) +update bar set f2 = f2 + 100; + QUERY PLAN +-------------------------------------------------------------------------------------- + Update on public.bar + Update on public.bar + Foreign Update on public.bar2 + Remote SQL: UPDATE public.loct2 SET f2 = $2 WHERE ctid = $1 RETURNING f1, f2, f3 + -> Seq Scan on public.bar + Output: bar.f1, (bar.f2 + 100), bar.ctid + -> Foreign Scan on public.bar2 + Output: bar2.f1, (bar2.f2 + 100), bar2.f3, bar2.ctid, bar2.* + Remote SQL: SELECT f1, f2, f3, ctid FROM public.loct2 FOR UPDATE +(9 rows) + +update bar set f2 = f2 + 100; +NOTICE: trig_row_before(23, skidoo) BEFORE ROW UPDATE ON bar2 +NOTICE: OLD: (3,333,33),NEW: (3,433,33) +NOTICE: trig_row_before(23, skidoo) BEFORE ROW UPDATE ON bar2 +NOTICE: OLD: (4,344,44),NEW: (4,444,44) +NOTICE: trig_row_before(23, skidoo) BEFORE ROW UPDATE ON bar2 +NOTICE: OLD: (7,277,77),NEW: (7,377,77) +NOTICE: trig_row_after(23, skidoo) AFTER ROW UPDATE ON bar2 +NOTICE: OLD: (3,333,33),NEW: (3,433,33) +NOTICE: trig_row_after(23, skidoo) AFTER ROW UPDATE ON bar2 +NOTICE: OLD: (4,344,44),NEW: (4,444,44) +NOTICE: trig_row_after(23, skidoo) AFTER ROW UPDATE ON bar2 +NOTICE: OLD: (7,277,77),NEW: (7,377,77) +explain (verbose, costs off) +delete from bar where f2 < 400; + QUERY PLAN +--------------------------------------------------------------------------------------------- + Delete on public.bar + Delete on public.bar + Foreign Delete on public.bar2 + Remote SQL: DELETE FROM public.loct2 WHERE ctid = $1 RETURNING f1, f2, f3 + -> Seq Scan on public.bar + Output: bar.ctid + Filter: (bar.f2 < 400) + -> Foreign Scan on public.bar2 + Output: bar2.ctid, bar2.* + Remote SQL: SELECT f1, f2, f3, ctid FROM public.loct2 WHERE ((f2 < 400)) FOR UPDATE +(10 rows) + +delete from bar where f2 < 400; +NOTICE: trig_row_before(23, skidoo) BEFORE ROW DELETE ON bar2 +NOTICE: OLD: (7,377,77) +NOTICE: trig_row_after(23, skidoo) AFTER ROW DELETE ON bar2 +NOTICE: OLD: (7,377,77) +-- cleanup drop table foo cascade; NOTICE: drop cascades to foreign table foo2 drop table bar cascade; diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index ddfec7930d..09869578da 100644 --- 
a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -1656,6 +1656,24 @@ explain (verbose, costs off) update bar set f2 = f2 + 100 returning *; update bar set f2 = f2 + 100 returning *; +-- Test that UPDATE/DELETE with inherited target works with row-level triggers +CREATE TRIGGER trig_row_before +BEFORE UPDATE OR DELETE ON bar2 +FOR EACH ROW EXECUTE PROCEDURE trigger_data(23,'skidoo'); + +CREATE TRIGGER trig_row_after +AFTER UPDATE OR DELETE ON bar2 +FOR EACH ROW EXECUTE PROCEDURE trigger_data(23,'skidoo'); + +explain (verbose, costs off) +update bar set f2 = f2 + 100; +update bar set f2 = f2 + 100; + +explain (verbose, costs off) +delete from bar where f2 < 400; +delete from bar where f2 < 400; + +-- cleanup drop table foo cascade; drop table bar cascade; drop table loct1; diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml index a2f8137713..0ed3a47233 100644 --- a/doc/src/sgml/fdwhandler.sgml +++ b/doc/src/sgml/fdwhandler.sgml @@ -429,11 +429,14 @@ AddForeignUpdateTargets(Query *parsetree, wholerow, or wholerowN, as the core system can generate junk columns of these names. + If the extra expressions are more complex than simple Vars, they + must be run through eval_const_expressions + before adding them to the targetlist. - This function is called in the rewriter, not the planner, so the - information available is a bit different from that available to the + Although this function is called during planning, the + information provided is a bit different from that available to other planning routines. parsetree is the parse tree for the UPDATE or DELETE command, while target_rte and diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml index 2074fcca8e..3372b1ac2b 100644 --- a/doc/src/sgml/rules.sgml +++ b/doc/src/sgml/rules.sgml @@ -167,12 +167,12 @@ DELETE commands don't need a normal target list - because they don't produce any result. Instead, the rule system + because they don't produce any result. Instead, the planner adds a special CTID entry to the empty target list, to allow the executor to find the row to be deleted. (CTID is added when the result relation is an ordinary - table. If it is a view, a whole-row variable is added instead, - as described in .) + table. If it is a view, a whole-row variable is added instead, by + the rule system, as described in .) @@ -194,7 +194,7 @@ column = expression
part of the command. The planner will handle missing columns by inserting expressions that copy the values from the old row into the new one. Just as for DELETE, - the rule system adds a CTID or whole-row variable so that + a CTID or whole-row variable is added so that the executor can identify the old row to be updated. diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 503b89f606..201c607002 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -1576,7 +1576,7 @@ ExecModifyTable(PlanState *pstate) JunkFilter *junkfilter; TupleTableSlot *slot; TupleTableSlot *planSlot; - ItemPointer tupleid = NULL; + ItemPointer tupleid; ItemPointerData tuple_ctid; HeapTupleData oldtupdata; HeapTuple oldtuple; @@ -1699,6 +1699,7 @@ ExecModifyTable(PlanState *pstate) EvalPlanQualSetSlot(&node->mt_epqstate, planSlot); slot = planSlot; + tupleid = NULL; oldtuple = NULL; if (junkfilter != NULL) { diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index f6b8bbf5fa..ef2eaeac0a 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -1555,7 +1555,7 @@ grouping_planner(PlannerInfo *root, bool inheritance_update, double tuple_fraction) { Query *parse = root->parse; - List *tlist = parse->targetList; + List *tlist; int64 offset_est = 0; int64 count_est = 0; double limit_tuples = -1.0; @@ -1685,13 +1685,7 @@ grouping_planner(PlannerInfo *root, bool inheritance_update, } /* Preprocess targetlist */ - tlist = preprocess_targetlist(root, tlist); - - if (parse->onConflict) - parse->onConflict->onConflictSet = - preprocess_onconflict_targetlist(parse->onConflict->onConflictSet, - parse->resultRelation, - parse->rtable); + tlist = preprocess_targetlist(root); /* * We are now done hacking up the query's targetlist. Most of the diff --git a/src/backend/optimizer/prep/preptlist.c b/src/backend/optimizer/prep/preptlist.c index d7db32ebf5..3abea92335 100644 --- a/src/backend/optimizer/prep/preptlist.c +++ b/src/backend/optimizer/prep/preptlist.c @@ -4,20 +4,22 @@ * Routines to preprocess the parse tree target list * * For INSERT and UPDATE queries, the targetlist must contain an entry for - * each attribute of the target relation in the correct order. For all query + * each attribute of the target relation in the correct order. For UPDATE and + * DELETE queries, it must also contain junk tlist entries needed to allow the + * executor to identify the rows to be updated or deleted. For all query * types, we may need to add junk tlist entries for Vars used in the RETURNING * list and row ID information needed for SELECT FOR UPDATE locking and/or * EvalPlanQual checking. * - * The rewriter's rewriteTargetListIU and rewriteTargetListUD routines - * also do preprocessing of the targetlist. The division of labor between - * here and there is partially historical, but it's not entirely arbitrary. - * In particular, consider an UPDATE across an inheritance tree. What the - * rewriter does need be done only once (because it depends only on the - * properties of the parent relation). What's done here has to be done over - * again for each child relation, because it depends on the column list of - * the child, which might have more columns and/or a different column order - * than the parent. + * The query rewrite phase also does preprocessing of the targetlist (see + * rewriteTargetListIU). 
The division of labor between here and there is + * partially historical, but it's not entirely arbitrary. In particular, + * consider an UPDATE across an inheritance tree. What rewriteTargetListIU + * does need be done only once (because it depends only on the properties of + * the parent relation). What's done here has to be done over again for each + * child relation, because it depends on the properties of the child, which + * might be of a different relation type, or have more columns and/or a + * different column order than the parent. * * The fact that rewriteTargetListIU sorts non-resjunk tlist entries by column * position, which expand_targetlist depends on, violates the above comment @@ -47,11 +49,12 @@ #include "optimizer/var.h" #include "parser/parsetree.h" #include "parser/parse_coerce.h" +#include "rewrite/rewriteHandler.h" #include "utils/rel.h" static List *expand_targetlist(List *tlist, int command_type, - Index result_relation, List *range_table); + Index result_relation, Relation rel); /* @@ -59,36 +62,61 @@ static List *expand_targetlist(List *tlist, int command_type, * Driver for preprocessing the parse tree targetlist. * * Returns the new targetlist. + * + * As a side effect, if there's an ON CONFLICT UPDATE clause, its targetlist + * is also preprocessed (and updated in-place). */ List * -preprocess_targetlist(PlannerInfo *root, List *tlist) +preprocess_targetlist(PlannerInfo *root) { Query *parse = root->parse; int result_relation = parse->resultRelation; List *range_table = parse->rtable; CmdType command_type = parse->commandType; + RangeTblEntry *target_rte = NULL; + Relation target_relation = NULL; + List *tlist; ListCell *lc; /* - * Sanity check: if there is a result relation, it'd better be a real - * relation not a subquery. Else parser or rewriter messed up. + * If there is a result relation, open it so we can look for missing + * columns and so on. We assume that previous code already acquired at + * least AccessShareLock on the relation, so we need no lock here. */ if (result_relation) { - RangeTblEntry *rte = rt_fetch(result_relation, range_table); + target_rte = rt_fetch(result_relation, range_table); + + /* + * Sanity check: it'd better be a real relation not, say, a subquery. + * Else parser or rewriter messed up. + */ + if (target_rte->rtekind != RTE_RELATION) + elog(ERROR, "result relation must be a regular relation"); - if (rte->subquery != NULL || rte->relid == InvalidOid) - elog(ERROR, "subquery cannot be result relation"); + target_relation = heap_open(target_rte->relid, NoLock); } + else + Assert(command_type == CMD_SELECT); + + /* + * For UPDATE/DELETE, add any junk column(s) needed to allow the executor + * to identify the rows to be updated or deleted. Note that this step + * scribbles on parse->targetList, which is not very desirable, but we + * keep it that way to avoid changing APIs used by FDWs. + */ + if (command_type == CMD_UPDATE || command_type == CMD_DELETE) + rewriteTargetListUD(parse, target_rte, target_relation); /* * for heap_form_tuple to work, the targetlist must match the exact order * of the attributes. We also need to fill in any missing attributes. -ay * 10/94 */ + tlist = parse->targetList; if (command_type == CMD_INSERT || command_type == CMD_UPDATE) tlist = expand_targetlist(tlist, command_type, - result_relation, range_table); + result_relation, target_relation); /* * Add necessary junk columns for rowmarked rels. 
These values are needed @@ -193,19 +221,21 @@ preprocess_targetlist(PlannerInfo *root, List *tlist) list_free(vars); } - return tlist; -} + /* + * If there's an ON CONFLICT UPDATE clause, preprocess its targetlist too + * while we have the relation open. + */ + if (parse->onConflict) + parse->onConflict->onConflictSet = + expand_targetlist(parse->onConflict->onConflictSet, + CMD_UPDATE, + result_relation, + target_relation); -/* - * preprocess_onconflict_targetlist - * Process ON CONFLICT SET targetlist. - * - * Returns the new targetlist. - */ -List * -preprocess_onconflict_targetlist(List *tlist, int result_relation, List *range_table) -{ - return expand_targetlist(tlist, CMD_UPDATE, result_relation, range_table); + if (target_relation) + heap_close(target_relation, NoLock); + + return tlist; } @@ -223,11 +253,10 @@ preprocess_onconflict_targetlist(List *tlist, int result_relation, List *range_t */ static List * expand_targetlist(List *tlist, int command_type, - Index result_relation, List *range_table) + Index result_relation, Relation rel) { List *new_tlist = NIL; ListCell *tlist_item; - Relation rel; int attrno, numattrs; @@ -238,12 +267,8 @@ expand_targetlist(List *tlist, int command_type, * order; but we have to insert TLEs for any missing attributes. * * Scan the tuple description in the relation's relcache entry to make - * sure we have all the user attributes in the right order. We assume - * that the rewriter already acquired at least AccessShareLock on the - * relation, so we need no lock here. + * sure we have all the user attributes in the right order. */ - rel = heap_open(getrelid(result_relation, range_table), NoLock); - numattrs = RelationGetNumberOfAttributes(rel); for (attrno = 1; attrno <= numattrs; attrno++) @@ -386,8 +411,6 @@ expand_targetlist(List *tlist, int command_type, tlist_item = lnext(tlist_item); } - heap_close(rel, NoLock); - return new_tlist; } diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c index c2bc3ad4c5..e93552a8f3 100644 --- a/src/backend/rewrite/rewriteHandler.c +++ b/src/backend/rewrite/rewriteHandler.c @@ -72,8 +72,6 @@ static TargetEntry *process_matched_tle(TargetEntry *src_tle, static Node *get_assignment_input(Node *node); static void rewriteValuesRTE(RangeTblEntry *rte, Relation target_relation, List *attrnos); -static void rewriteTargetListUD(Query *parsetree, RangeTblEntry *target_rte, - Relation target_relation); static void markQueryForLocking(Query *qry, Node *jtnode, LockClauseStrength strength, LockWaitPolicy waitPolicy, bool pushedDown); @@ -707,6 +705,13 @@ adjustJoinTreeList(Query *parsetree, bool removert, int rt_index) * rewritten targetlist: an integer list of the assigned-to attnums, in * order of the original tlist's non-junk entries. This is needed for * processing VALUES RTEs. + * + * Note that for an inheritable UPDATE, this processing is only done once, + * using the parent relation as reference. It must not do anything that + * will not be correct when transposed to the child relation(s). (Step 4 + * is incorrect by this light, since child relations might have different + * column ordering, but the planner will fix things by re-sorting the tlist + * for each child.) */ static List * rewriteTargetListIU(List *targetList, @@ -1293,14 +1298,15 @@ rewriteValuesRTE(RangeTblEntry *rte, Relation target_relation, List *attrnos) * This function adds a "junk" TLE that is needed to allow the executor to * find the original row for the update or delete.
When the target relation * is a regular table, the junk TLE emits the ctid attribute of the original - * row. When the target relation is a view, there is no ctid, so we instead - * emit a whole-row Var that will contain the "old" values of the view row. - * If it's a foreign table, we let the FDW decide what to add. + * row. When the target relation is a foreign table, we let the FDW decide + * what to add. * - * For UPDATE queries, this is applied after rewriteTargetListIU. The - * ordering isn't actually critical at the moment. + * We used to do this during RewriteQuery(), but now that inheritance trees + * can contain a mix of regular and foreign tables, we must postpone it till + * planning, after the inheritance tree has been expanded. In that way we + * can do the right thing for each child table. */ -static void +void rewriteTargetListUD(Query *parsetree, RangeTblEntry *target_rte, Relation target_relation) { @@ -1358,19 +1364,6 @@ rewriteTargetListUD(Query *parsetree, RangeTblEntry *target_rte, attrname = "wholerow"; } } - else - { - /* - * Emit whole-row Var so that executor will have the "old" view row to - * pass to the INSTEAD OF trigger. - */ - var = makeWholeRowVar(target_rte, - parsetree->resultRelation, - 0, - false); - - attrname = "wholerow"; - } if (var != NULL) { @@ -1497,6 +1490,8 @@ ApplyRetrieveRule(Query *parsetree, parsetree->commandType == CMD_DELETE) { RangeTblEntry *newrte; + Var *var; + TargetEntry *tle; rte = rt_fetch(rt_index, parsetree->rtable); newrte = copyObject(rte); @@ -1527,6 +1522,20 @@ ApplyRetrieveRule(Query *parsetree, ChangeVarNodes((Node *) parsetree->returningList, rt_index, parsetree->resultRelation, 0); + /* + * To allow the executor to compute the original view row to pass + * to the INSTEAD OF trigger, we add a resjunk whole-row Var + * referencing the original RTE. This will later get expanded + * into a RowExpr computing all the OLD values of the view row. + */ + var = makeWholeRowVar(rte, rt_index, 0, false); + tle = makeTargetEntry((Expr *) var, + list_length(parsetree->targetList) + 1, + pstrdup("wholerow"), + true); + + parsetree->targetList = lappend(parsetree->targetList, tle); + /* Now, continue with expanding the original view RTE */ } else @@ -2966,26 +2975,6 @@ rewriteTargetView(Query *parsetree, Relation view) new_rte->securityQuals = view_rte->securityQuals; view_rte->securityQuals = NIL; - /* - * For UPDATE/DELETE, rewriteTargetListUD will have added a wholerow junk - * TLE for the view to the end of the targetlist, which we no longer need. - * Remove it to avoid unnecessary work when we process the targetlist. - * Note that when we recurse through rewriteQuery a new junk TLE will be - * added to allow the executor to find the proper row in the new target - * relation. (So, if we failed to do this, we might have multiple junk - * TLEs with the same name, which would be disastrous.) - */ - if (parsetree->commandType != CMD_INSERT) - { - TargetEntry *tle = (TargetEntry *) llast(parsetree->targetList); - - Assert(tle->resjunk); - Assert(IsA(tle->expr, Var) && - ((Var *) tle->expr)->varno == parsetree->resultRelation && - ((Var *) tle->expr)->varattno == 0); - parsetree->targetList = list_delete_ptr(parsetree->targetList, tle); - } - /* * Now update all Vars in the outer query that reference the view to * reference the appropriate column of the base relation instead. 
@@ -3347,11 +3336,10 @@ RewriteQuery(Query *parsetree, List *rewrite_events) parsetree->override, rt_entry_relation, parsetree->resultRelation, NULL); - rewriteTargetListUD(parsetree, rt_entry, rt_entry_relation); } else if (event == CMD_DELETE) { - rewriteTargetListUD(parsetree, rt_entry, rt_entry_relation); + /* Nothing to do here */ } else elog(ERROR, "unrecognized commandType: %d", (int) event); diff --git a/src/include/optimizer/prep.h b/src/include/optimizer/prep.h index 80fbfd6ea9..804c9e3b8b 100644 --- a/src/include/optimizer/prep.h +++ b/src/include/optimizer/prep.h @@ -38,10 +38,7 @@ extern Expr *canonicalize_qual(Expr *qual); /* * prototypes for preptlist.c */ -extern List *preprocess_targetlist(PlannerInfo *root, List *tlist); - -extern List *preprocess_onconflict_targetlist(List *tlist, - int result_relation, List *range_table); +extern List *preprocess_targetlist(PlannerInfo *root); extern PlanRowMark *get_plan_rowmark(List *rowmarks, Index rtindex); diff --git a/src/include/rewrite/rewriteHandler.h b/src/include/rewrite/rewriteHandler.h index 494fa29f10..86ae571eb1 100644 --- a/src/include/rewrite/rewriteHandler.h +++ b/src/include/rewrite/rewriteHandler.h @@ -23,6 +23,9 @@ extern void AcquireRewriteLocks(Query *parsetree, bool forUpdatePushedDown); extern Node *build_column_default(Relation rel, int attrno); +extern void rewriteTargetListUD(Query *parsetree, RangeTblEntry *target_rte, + Relation target_relation); + extern Query *get_view_query(Relation view); extern const char *view_query_is_auto_updatable(Query *viewquery, bool check_cols); From cb03fa33aeaea4775b9f3437a2240de4ac9cb630 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 27 Nov 2017 19:22:08 -0500 Subject: [PATCH 0617/1087] Fix assorted syscache lookup sloppiness in partition-related code. heap_drop_with_catalog and ATExecDetachPartition neglected to check for SearchSysCache failures, as noted in bugs #14927 and #14928 from Pan Bian. Such failures are pretty unlikely, since we should already have some sort of lock on the rel at these points, but it's neither a good idea nor per project style to omit a check for failure. Also, StorePartitionKey contained a syscache lookup that it never did anything with, including never releasing the result. Presumably the reason why we don't see refcount-leak complaints is that the lookup always fails; but in any case it's pretty useless, so remove it. All of these errors were evidently introduced by the relation partitioning feature. Back-patch to v10 where that came in. Amit Langote and Tom Lane Discussion: https://postgr.es/m/20171127090105.1463.3962@wrigleys.postgresql.org Discussion: https://postgr.es/m/20171127091341.1468.72696@wrigleys.postgresql.org --- src/backend/catalog/heap.c | 5 ++--- src/backend/commands/tablecmds.c | 3 +++ 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 9e14880b99..4319fc6b8c 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -1772,6 +1772,8 @@ heap_drop_with_catalog(Oid relid) * shared-cache-inval notice that will make them update their index lists. 
*/ tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid)); + if (!HeapTupleIsValid(tuple)) + elog(ERROR, "cache lookup failed for relation %u", relid); if (((Form_pg_class) GETSTRUCT(tuple))->relispartition) { parentOid = get_partition_parent(relid); @@ -3131,9 +3133,6 @@ StorePartitionKey(Relation rel, Assert(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); - tuple = SearchSysCache1(PARTRELID, - ObjectIdGetDatum(RelationGetRelid(rel))); - /* Copy the partition attribute numbers, opclass OIDs into arrays */ partattrs_vec = buildint2vector(partattrs, partnatts); partopclass_vec = buildoidvector(partopclass, partnatts); diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index d19846d005..d979ce266d 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -14111,6 +14111,9 @@ ATExecDetachPartition(Relation rel, RangeVar *name) classRel = heap_open(RelationRelationId, RowExclusiveLock); tuple = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(RelationGetRelid(partRel))); + if (!HeapTupleIsValid(tuple)) + elog(ERROR, "cache lookup failed for relation %u", + RelationGetRelid(partRel)); Assert(((Form_pg_class) GETSTRUCT(tuple))->relispartition); (void) SysCacheGetAttr(RELOID, tuple, Anum_pg_class_relpartbound, From 0772c152b9bd02baeca6920c3371fce95e8f13dc Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 27 Nov 2017 20:56:46 -0500 Subject: [PATCH 0618/1087] Mark some more functions as pg_attribute_noreturn(). Doing this suppresses Coverity warnings and might allow improved code in some cases. The prospects of that are not so bright as to warrant back-patching, though. Michael Paquier, per Coverity --- src/bin/initdb/initdb.c | 2 +- src/bin/pg_basebackup/pg_basebackup.c | 2 +- src/bin/pg_basebackup/pg_recvlogical.c | 2 +- src/test/isolation/isolationtester.c | 1 + 4 files changed, 4 insertions(+), 3 deletions(-) diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c index bb2bc065ef..ad0d0e2ac0 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -237,7 +237,7 @@ static char **filter_lines_with_token(char **lines, const char *token); static char **readfile(const char *path); static void writefile(char *path, char **lines); static FILE *popen_check(const char *command, const char *mode); -static void exit_nicely(void); +static void exit_nicely(void) pg_attribute_noreturn(); static char *get_id(void); static int get_encoding_id(const char *encoding_name); static void set_input(char **dest, const char *filename); diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c index a8715d912d..8427c97fe4 100644 --- a/src/bin/pg_basebackup/pg_basebackup.c +++ b/src/bin/pg_basebackup/pg_basebackup.c @@ -132,7 +132,7 @@ static PQExpBuffer recoveryconfcontents = NULL; /* Function headers */ static void usage(void); -static void disconnect_and_exit(int code); +static void disconnect_and_exit(int code) pg_attribute_noreturn(); static void verify_dir_is_empty_or_create(char *dirname, bool *created, bool *found); static void progress_report(int tablespacenum, const char *filename, bool force); diff --git a/src/bin/pg_basebackup/pg_recvlogical.c b/src/bin/pg_basebackup/pg_recvlogical.c index 3109d0f99f..c7893c10ca 100644 --- a/src/bin/pg_basebackup/pg_recvlogical.c +++ b/src/bin/pg_basebackup/pg_recvlogical.c @@ -64,7 +64,7 @@ static XLogRecPtr output_fsync_lsn = InvalidXLogRecPtr; static void usage(void); static void StreamLogicalLog(void); -static void disconnect_and_exit(int code); +static 
void disconnect_and_exit(int code) pg_attribute_noreturn(); static bool flushAndSendFeedback(PGconn *conn, TimestampTz *now); static void prepareToTerminate(PGconn *conn, XLogRecPtr endpos, bool keepalive, XLogRecPtr lsn); diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c index ba8082c980..4ecad038bd 100644 --- a/src/test/isolation/isolationtester.c +++ b/src/test/isolation/isolationtester.c @@ -32,6 +32,7 @@ static int nconns = 0; /* In dry run only output permutations to be run by the tester. */ static int dry_run = false; +static void exit_nicely(void) pg_attribute_noreturn(); static void run_testspec(TestSpec *testspec); static void run_all_permutations(TestSpec *testspec); static void run_all_permutations_recurse(TestSpec *testspec, int nsteps, From 487a0c1518af2f3ae2d05b7fd23d636d687f28f3 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 28 Nov 2017 08:18:00 -0500 Subject: [PATCH 0619/1087] Fix typo. Masahiko Sawada Discussion: http://postgr.es/m/CAD21AoCq_QG6UEo6yb-purmhLQjDLsCA2_B+8Wh3ah9P2-XmrQ@mail.gmail.com --- src/backend/access/transam/xact.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index c06fabca10..046898c619 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -1164,7 +1164,7 @@ RecordTransactionCommit(void) * vacuum. Hence we emit a bespoke record for the invalidations. We * don't want to use that in case a commit record is emitted, so they * happen synchronously with commits (besides not wanting to emit more - * WAL recoreds). + * WAL records). */ if (nmsgs != 0) { From 7b88d63a9122646ead60c1afffc248a31d4e457d Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 28 Nov 2017 10:51:01 -0500 Subject: [PATCH 0620/1087] Add null test to partition constraint for default range partitions. Non-default range partitions have a constraint which includes null tests, and both default and non-default list partitions also have a constraint which includes null tests, but for some reason this was missed for default range partitions. This could cause the partition constraint to evaluate to false for rows that were (correctly) routed to that partition by insert tuple routing, which could in turn cause constraint exclusion to prune the default partition in cases where it should not. Amit Langote, reviewed by Kyotaro Horiguchi Discussion: http://postgr.es/m/ba7aaeb1-4399-220e-70b4-62eade1522d0@lab.ntt.co.jp --- src/backend/catalog/partition.c | 29 +++++++++++++++---- src/test/regress/expected/inherit.out | 41 ++++++++++++++++++--------- src/test/regress/expected/update.out | 2 +- src/test/regress/sql/inherit.sql | 9 +++--- 4 files changed, 56 insertions(+), 25 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 9a44cceb22..e032c11ed4 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -2134,12 +2134,29 @@ get_qual_for_range(Relation parent, PartitionBoundSpec *spec, if (or_expr_args != NIL) { - /* OR all the non-default partition constraints; then negate it */ - result = lappend(result, - list_length(or_expr_args) > 1 - ? makeBoolExpr(OR_EXPR, or_expr_args, -1) - : linitial(or_expr_args)); - result = list_make1(makeBoolExpr(NOT_EXPR, result, -1)); + Expr *other_parts_constr; + + /* + * Combine the constraints obtained for non-default partitions + * using OR.
As requested, each of the OR's args doesn't include + * the NOT NULL test for partition keys (which is to avoid its + * useless repetition). Add the same now. + */ + other_parts_constr = + makeBoolExpr(AND_EXPR, + lappend(get_range_nulltest(key), + list_length(or_expr_args) > 1 + ? makeBoolExpr(OR_EXPR, or_expr_args, + -1) + : linitial(or_expr_args)), + -1); + + /* + * Finally, the default partition contains everything *NOT* + * contained in the non-default partitions. + */ + result = list_make1(makeBoolExpr(NOT_EXPR, + list_make1(other_parts_constr), -1)); } return result; diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out index c698faff2f..fac7b62f9c 100644 --- a/src/test/regress/expected/inherit.out +++ b/src/test/regress/expected/inherit.out @@ -1853,29 +1853,34 @@ drop table range_list_parted; -- check that constraint exclusion is able to cope with the partition -- constraint emitted for multi-column range partitioned tables create table mcrparted (a int, b int, c int) partition by range (a, abs(b), c); +create table mcrparted_def partition of mcrparted default; create table mcrparted0 partition of mcrparted for values from (minvalue, minvalue, minvalue) to (1, 1, 1); create table mcrparted1 partition of mcrparted for values from (1, 1, 1) to (10, 5, 10); create table mcrparted2 partition of mcrparted for values from (10, 5, 10) to (10, 10, 10); create table mcrparted3 partition of mcrparted for values from (11, 1, 1) to (20, 10, 10); create table mcrparted4 partition of mcrparted for values from (20, 10, 10) to (20, 20, 20); create table mcrparted5 partition of mcrparted for values from (20, 20, 20) to (maxvalue, maxvalue, maxvalue); -explain (costs off) select * from mcrparted where a = 0; -- scans mcrparted0 - QUERY PLAN ------------------------------- +explain (costs off) select * from mcrparted where a = 0; -- scans mcrparted0, mcrparted_def + QUERY PLAN +--------------------------------- Append -> Seq Scan on mcrparted0 Filter: (a = 0) -(3 rows) + -> Seq Scan on mcrparted_def + Filter: (a = 0) +(5 rows) -explain (costs off) select * from mcrparted where a = 10 and abs(b) < 5; -- scans mcrparted1 +explain (costs off) select * from mcrparted where a = 10 and abs(b) < 5; -- scans mcrparted1, mcrparted_def QUERY PLAN --------------------------------------------- Append -> Seq Scan on mcrparted1 Filter: ((a = 10) AND (abs(b) < 5)) -(3 rows) + -> Seq Scan on mcrparted_def + Filter: ((a = 10) AND (abs(b) < 5)) +(5 rows) -explain (costs off) select * from mcrparted where a = 10 and abs(b) = 5; -- scans mcrparted1, mcrparted2 +explain (costs off) select * from mcrparted where a = 10 and abs(b) = 5; -- scans mcrparted1, mcrparted2, mcrparted_def QUERY PLAN --------------------------------------------- Append @@ -1883,11 +1888,13 @@ explain (costs off) select * from mcrparted where a = 10 and abs(b) = 5; -- scan Filter: ((a = 10) AND (abs(b) = 5)) -> Seq Scan on mcrparted2 Filter: ((a = 10) AND (abs(b) = 5)) -(5 rows) + -> Seq Scan on mcrparted_def + Filter: ((a = 10) AND (abs(b) = 5)) +(7 rows) explain (costs off) select * from mcrparted where abs(b) = 5; -- scans all partitions - QUERY PLAN ------------------------------- + QUERY PLAN +--------------------------------- Append -> Seq Scan on mcrparted0 Filter: (abs(b) = 5) @@ -1899,7 +1906,9 @@ explain (costs off) select * from mcrparted where abs(b) = 5; -- scans all parti Filter: (abs(b) = 5) -> Seq Scan on mcrparted5 Filter: (abs(b) = 5) -(11 rows) + -> Seq Scan on mcrparted_def + Filter: (abs(b) 
= 5) +(13 rows) explain (costs off) select * from mcrparted where a > -1; -- scans all partitions QUERY PLAN @@ -1917,7 +1926,9 @@ explain (costs off) select * from mcrparted where a > -1; -- scans all partition Filter: (a > '-1'::integer) -> Seq Scan on mcrparted5 Filter: (a > '-1'::integer) -(13 rows) + -> Seq Scan on mcrparted_def + Filter: (a > '-1'::integer) +(15 rows) explain (costs off) select * from mcrparted where a = 20 and abs(b) = 10 and c > 10; -- scans mcrparted4 QUERY PLAN @@ -1927,7 +1938,7 @@ explain (costs off) select * from mcrparted where a = 20 and abs(b) = 10 and c > Filter: ((c > 10) AND (a = 20) AND (abs(b) = 10)) (3 rows) -explain (costs off) select * from mcrparted where a = 20 and c > 20; -- scans mcrparted3, mcrparte4, mcrparte5 +explain (costs off) select * from mcrparted where a = 20 and c > 20; -- scans mcrparted3, mcrparte4, mcrparte5, mcrparted_def QUERY PLAN ----------------------------------------- Append @@ -1937,7 +1948,9 @@ explain (costs off) select * from mcrparted where a = 20 and c > 20; -- scans mc Filter: ((c > 20) AND (a = 20)) -> Seq Scan on mcrparted5 Filter: ((c > 20) AND (a = 20)) -(7 rows) + -> Seq Scan on mcrparted_def + Filter: ((c > 20) AND (a = 20)) +(9 rows) drop table mcrparted; -- check that partitioned table Appends cope with being referenced in diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out index a4fe96112e..b69ceaa75e 100644 --- a/src/test/regress/expected/update.out +++ b/src/test/regress/expected/update.out @@ -227,7 +227,7 @@ create table part_def partition of range_parted default; a | text | | | | extended | | b | integer | | | | plain | | Partition of: range_parted DEFAULT -Partition constraint: (NOT (((a = 'a'::text) AND (b >= 1) AND (b < 10)) OR ((a = 'a'::text) AND (b >= 10) AND (b < 20)) OR ((a = 'b'::text) AND (b >= 1) AND (b < 10)) OR ((a = 'b'::text) AND (b >= 10) AND (b < 20)))) +Partition constraint: (NOT ((a IS NOT NULL) AND (b IS NOT NULL) AND (((a = 'a'::text) AND (b >= 1) AND (b < 10)) OR ((a = 'a'::text) AND (b >= 10) AND (b < 20)) OR ((a = 'b'::text) AND (b >= 1) AND (b < 10)) OR ((a = 'b'::text) AND (b >= 10) AND (b < 20))))) insert into range_parted values ('c', 9); -- ok diff --git a/src/test/regress/sql/inherit.sql b/src/test/regress/sql/inherit.sql index 169d0dc0f5..c71febffc2 100644 --- a/src/test/regress/sql/inherit.sql +++ b/src/test/regress/sql/inherit.sql @@ -664,19 +664,20 @@ drop table range_list_parted; -- check that constraint exclusion is able to cope with the partition -- constraint emitted for multi-column range partitioned tables create table mcrparted (a int, b int, c int) partition by range (a, abs(b), c); +create table mcrparted_def partition of mcrparted default; create table mcrparted0 partition of mcrparted for values from (minvalue, minvalue, minvalue) to (1, 1, 1); create table mcrparted1 partition of mcrparted for values from (1, 1, 1) to (10, 5, 10); create table mcrparted2 partition of mcrparted for values from (10, 5, 10) to (10, 10, 10); create table mcrparted3 partition of mcrparted for values from (11, 1, 1) to (20, 10, 10); create table mcrparted4 partition of mcrparted for values from (20, 10, 10) to (20, 20, 20); create table mcrparted5 partition of mcrparted for values from (20, 20, 20) to (maxvalue, maxvalue, maxvalue); -explain (costs off) select * from mcrparted where a = 0; -- scans mcrparted0 -explain (costs off) select * from mcrparted where a = 10 and abs(b) < 5; -- scans mcrparted1 -explain (costs off) select * from mcrparted 
where a = 10 and abs(b) = 5; -- scans mcrparted1, mcrparted2 +explain (costs off) select * from mcrparted where a = 0; -- scans mcrparted0, mcrparted_def +explain (costs off) select * from mcrparted where a = 10 and abs(b) < 5; -- scans mcrparted1, mcrparted_def +explain (costs off) select * from mcrparted where a = 10 and abs(b) = 5; -- scans mcrparted1, mcrparted2, mcrparted_def explain (costs off) select * from mcrparted where abs(b) = 5; -- scans all partitions explain (costs off) select * from mcrparted where a > -1; -- scans all partitions explain (costs off) select * from mcrparted where a = 20 and abs(b) = 10 and c > 10; -- scans mcrparted4 -explain (costs off) select * from mcrparted where a = 20 and c > 20; -- scans mcrparted3, mcrparte4, mcrparte5 +explain (costs off) select * from mcrparted where a = 20 and c > 20; -- scans mcrparted3, mcrparte4, mcrparte5, mcrparted_def drop table mcrparted; -- check that partitioned table Appends cope with being referenced in From e42e2f38907681c48c43f49c5ec9f9f41a9aa9a5 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 28 Nov 2017 11:28:05 -0500 Subject: [PATCH 0621/1087] PL/Python: Fix potential NULL pointer dereference After d0aa965c0a0ac2ff7906ae1b1dad50a7952efa56, one error path in PLy_spi_execute_fetch_result() could result in the variable "result" being dereferenced after being set to NULL. To fix that, just clear the resources right there and return early. Also add another SPI_freetuptable() call so that that is cleared in all error paths. discovered by John Naylor via scan-build --- src/pl/plpython/plpy_spi.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c index ade27f3924..c80ccf6129 100644 --- a/src/pl/plpython/plpy_spi.c +++ b/src/pl/plpython/plpy_spi.c @@ -361,7 +361,10 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) result = (PLyResultObject *) PLy_result_new(); if (!result) + { + SPI_freetuptable(tuptable); return NULL; + } Py_DECREF(result->status); result->status = PyInt_FromLong(status); @@ -414,7 +417,9 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) if (!result->rows) { Py_DECREF(result); - result = NULL; + MemoryContextDelete(cxt); + SPI_freetuptable(tuptable); + return NULL; } else { From c6755e233be1cccadd0884d952a2bb455fa0db1f Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 28 Nov 2017 11:39:16 -0500 Subject: [PATCH 0622/1087] Teach bitmap heap scan to cope with absence of a DSA. If we have a plan that uses parallelism but are unable to execute it using parallelism, for example due to a lack of available DSM segments, then the EState's es_query_dsa will be NULL. Parallel bitmap heap scan needs to fall back to a non-parallel scan in such cases. Patch by me, reviewed by Dilip Kumar Discussion: http://postgr.es/m/CAEepm=0kADK5inNf_KuemjX=HQ=PuTP0DykM--fO5jS5ePVFEA@mail.gmail.com --- src/backend/executor/nodeBitmapHeapscan.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c index 221391908c..eb5bbb57ef 100644 --- a/src/backend/executor/nodeBitmapHeapscan.c +++ b/src/backend/executor/nodeBitmapHeapscan.c @@ -1051,6 +1051,11 @@ ExecBitmapHeapInitializeDSM(BitmapHeapScanState *node, { ParallelBitmapHeapState *pstate; EState *estate = node->ss.ps.state; + dsa_area *dsa = node->ss.ps.state->es_query_dsa; + + /* If there's no DSA, there are no workers; initialize nothing. 
*/ + if (dsa == NULL) + return; pstate = shm_toc_allocate(pcxt->toc, node->pscan_len); @@ -1083,6 +1088,10 @@ ExecBitmapHeapReInitializeDSM(BitmapHeapScanState *node, ParallelBitmapHeapState *pstate = node->pstate; dsa_area *dsa = node->ss.ps.state->es_query_dsa; + /* If there's no DSA, there are no workers; do nothing. */ + if (dsa == NULL) + return; + pstate->state = BM_INITIAL; if (DsaPointerIsValid(pstate->tbmiterator)) @@ -1108,6 +1117,8 @@ ExecBitmapHeapInitializeWorker(BitmapHeapScanState *node, ParallelBitmapHeapState *pstate; Snapshot snapshot; + Assert(node->ss.ps.state->es_query_dsa != NULL); + pstate = shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, false); node->pstate = pstate; From 445dbd82a3192c6f4d15de012333943882020904 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 28 Nov 2017 12:05:52 -0500 Subject: [PATCH 0623/1087] Fix ReinitializeParallelDSM to tolerate finding no error queues. Commit d4663350646ca0c069a36d906155a0f7e3372eb7 changed things so that shm_toc_lookup would fail with an error rather than silently returning NULL in the hope that such failures would be reported in a useful way rather than via a system crash. However, it overlooked the fact that the lookup of PARALLEL_KEY_ERROR_QUEUE in ReinitializeParallelDSM is expected to fail when no DSM segment was created in the first place; in that case, we end up with a backend-private memory segment that still contains an entry for PARALLEL_KEY_FIXED but no others. Consequently a benign failure to initialize parallelism can escalate into an elog(ERROR); repair. Discussion: http://postgr.es/m/CA+Tgmob8LFw55DzH1QEREpBEA9RJ_W_amhBFCVZ6WMwUhVpOqg@mail.gmail.com --- src/backend/access/transam/parallel.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index 1f542ed8d8..d3431a7c30 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -428,9 +428,10 @@ ReinitializeParallelDSM(ParallelContext *pcxt) fps = shm_toc_lookup(pcxt->toc, PARALLEL_KEY_FIXED, false); fps->last_xlog_end = 0; - /* Recreate error queues. */ + /* Recreate error queues (if they exist). */ error_queue_space = - shm_toc_lookup(pcxt->toc, PARALLEL_KEY_ERROR_QUEUE, false); + shm_toc_lookup(pcxt->toc, PARALLEL_KEY_ERROR_QUEUE, true); + Assert(pcxt->nworkers == 0 || error_queue_space != NULL); for (i = 0; i < pcxt->nworkers; ++i) { char *start; From 62546b4357f2aec46bb896fdbddfc0904a2d7920 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 28 Nov 2017 13:55:39 -0500 Subject: [PATCH 0624/1087] Revert "PL/Python: Fix potential NULL pointer dereference" This reverts commit e42e2f38907681c48c43f49c5ec9f9f41a9aa9a5. It's not safe to return in the middle of a PG_TRY block, so this will have to be done differently. 
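The underlying rule, in sketch form: PG_TRY() arms a sigsetjmp() in the current stack frame and saves the previous exception stack, and PG_END_TRY() restores it. Returning in between leaves PG_exception_stack pointing into a dead frame (illustrative only, not the reverted code):

    PG_TRY();
    {
        if (!result)
            return NULL;    /* WRONG: PG_END_TRY() never runs, so a later
                             * elog(ERROR) would siglongjmp() into this
                             * already-popped stack frame */
    }
    PG_CATCH();
    {
        PG_RE_THROW();
    }
    PG_END_TRY();

The cleanup therefore has to be restructured so that control always falls out through PG_END_TRY().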
--- src/pl/plpython/plpy_spi.c | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-) diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c index c80ccf6129..ade27f3924 100644 --- a/src/pl/plpython/plpy_spi.c +++ b/src/pl/plpython/plpy_spi.c @@ -361,10 +361,7 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) result = (PLyResultObject *) PLy_result_new(); if (!result) - { - SPI_freetuptable(tuptable); return NULL; - } Py_DECREF(result->status); result->status = PyInt_FromLong(status); @@ -417,9 +414,7 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) if (!result->rows) { Py_DECREF(result); - MemoryContextDelete(cxt); - SPI_freetuptable(tuptable); - return NULL; + result = NULL; } else { From 2d7950f2222c97bd9d9f4d4edc1b59e6660c3621 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 28 Nov 2017 14:11:16 -0500 Subject: [PATCH 0625/1087] If a range-partitioned table has no default partition, reject null keys. Commit 4e5fe9ad19e14af360de7970caa8b150436c9dec introduced this problem. Also add a test so it doesn't get broken again. Report by Rushabh Lathia. Fix by Amit Langote. Reviewed by Rushabh Lathia and Amul Sul. Tweaked by me. Discussion: http://postgr.es/m/CAGPqQf0Y1iJyk4QJBdMf=pS9i6Q0JUMM_h5-qkR3OMJ-e04PyA@mail.gmail.com --- src/backend/catalog/partition.c | 5 ++--- src/test/regress/expected/insert.out | 4 ++++ src/test/regress/sql/insert.sql | 3 +++ 3 files changed, 9 insertions(+), 3 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index e032c11ed4..d62230554e 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -2553,11 +2553,10 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) */ for (i = 0; i < key->partnatts; i++) { - if (isnull[i] && - partition_bound_has_default(partdesc->boundinfo)) + if (isnull[i]) { range_partkey_has_null = true; - part_index = partdesc->boundinfo->default_index; + break; } } diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out index 7481bebd83..b7b37dbc39 100644 --- a/src/test/regress/expected/insert.out +++ b/src/test/regress/expected/insert.out @@ -659,6 +659,10 @@ create table mcrparted2 partition of mcrparted for values from (10, 6, minvalue) create table mcrparted3 partition of mcrparted for values from (11, 1, 1) to (20, 10, 10); create table mcrparted4 partition of mcrparted for values from (21, minvalue, minvalue) to (30, 20, maxvalue); create table mcrparted5 partition of mcrparted for values from (30, 21, 20) to (maxvalue, maxvalue, maxvalue); +-- null not allowed in range partition +insert into mcrparted values (null, null, null); +ERROR: no partition of relation "mcrparted" found for row +DETAIL: Partition key of the failing row contains (a, abs(b), c) = (null, null, null). 
-- routed to mcrparted0 insert into mcrparted values (0, 1, 1); insert into mcrparted0 values (0, 1, 1); diff --git a/src/test/regress/sql/insert.sql b/src/test/regress/sql/insert.sql index f22ab41ae3..310b818076 100644 --- a/src/test/regress/sql/insert.sql +++ b/src/test/regress/sql/insert.sql @@ -421,6 +421,9 @@ create table mcrparted3 partition of mcrparted for values from (11, 1, 1) to (20 create table mcrparted4 partition of mcrparted for values from (21, minvalue, minvalue) to (30, 20, maxvalue); create table mcrparted5 partition of mcrparted for values from (30, 21, 20) to (maxvalue, maxvalue, maxvalue); +-- null not allowed in range partition +insert into mcrparted values (null, null, null); + -- routed to mcrparted0 insert into mcrparted values (0, 1, 1); insert into mcrparted0 values (0, 1, 1); From cd482f295150f8311290b5234873917f2172a34a Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 28 Nov 2017 14:17:21 -0500 Subject: [PATCH 0626/1087] Fix wrong function name in comment. Rushabh Lathia Discussion: http://postgr.es/m/CAGPqQf2z5g+7YmGZSZgKoiFsaUB+63Rzmz8-5PQHuS6hd14FEg@mail.gmail.com --- src/backend/executor/execPartition.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index d275cefe1d..2fc411a9b5 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -472,7 +472,7 @@ FormPartitionKeyDatum(PartitionDispatch pd, } /* - * BuildSlotPartitionKeyDescription + * ExecBuildSlotPartitionKeyDescription * * This works very much like BuildIndexValueDescription() and is currently * used for building error messages when ExecFindPartition() fails to find From 414cd434ff681e5f499803458eae9d5bb32372a9 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Tue, 28 Nov 2017 23:25:47 -0300 Subject: [PATCH 0627/1087] Fix extstat collection when no stats are produced for a column This is a mistakenly placed conditional in bf2a691e02d7. 
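[Note, not part of the patch: reduced to its shape, the bug was that the IsAutoVacuumWorkerProcess() test guarded the "continue" along with the WARNING, so an autovacuum worker fell through with stats == NULL. A sketch of before and after, mirroring the hunk below; the ereport arguments are elided.]

    /* before: an autovacuum worker skipped the continue too */
    if (!stats && !IsAutoVacuumWorkerProcess())
    {
        ereport(WARNING, ...);
        continue;
    }
    /* ... and kept going here with stats == NULL */

    /* after: always skip the stats object; only the WARNING is conditional */
    if (!stats)
    {
        if (!IsAutoVacuumWorkerProcess())
            ereport(WARNING, ...);
        continue;
    }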
Reported by Justin Pryzby Discussion: https://postgr.es/m/20171117214352.GE25796@telsasoft.com --- src/backend/statistics/extended_stats.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c index db4987bde3..eeed56ff0a 100644 --- a/src/backend/statistics/extended_stats.c +++ b/src/backend/statistics/extended_stats.c @@ -95,15 +95,16 @@ BuildRelationExtStatistics(Relation onerel, double totalrows, */ stats = lookup_var_attr_stats(onerel, stat->columns, natts, vacattrstats); - if (!stats && !IsAutoVacuumWorkerProcess()) + if (!stats) { - ereport(WARNING, - (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), - errmsg("statistics object \"%s.%s\" could not be computed for relation \"%s.%s\"", - stat->schema, stat->name, - get_namespace_name(onerel->rd_rel->relnamespace), - RelationGetRelationName(onerel)), - errtable(onerel))); + if (!IsAutoVacuumWorkerProcess()) + ereport(WARNING, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("statistics object \"%s.%s\" could not be computed for relation \"%s.%s\"", + stat->schema, stat->name, + get_namespace_name(onerel->rd_rel->relnamespace), + RelationGetRelationName(onerel)), + errtable(onerel))); continue; } From 3848d21673e9dcb42d4bc1353043d7c7d6f550cd Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Wed, 29 Nov 2017 00:09:17 -0300 Subject: [PATCH 0628/1087] Make memset() use sizeof() rather than re-compute size This is simpler and more closely follows overwhelming precedent. Report and patch by Mark Dilger. Discussion: https://postgr.es/m/9A68FB88-5F45-4848-9926-8586E2D777D1@gmail.com --- src/backend/statistics/extended_stats.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c index eeed56ff0a..26c2aedd36 100644 --- a/src/backend/statistics/extended_stats.c +++ b/src/backend/statistics/extended_stats.c @@ -301,9 +301,9 @@ statext_store(Relation pg_stext, Oid statOid, bool nulls[Natts_pg_statistic_ext]; bool replaces[Natts_pg_statistic_ext]; - memset(nulls, 1, Natts_pg_statistic_ext * sizeof(bool)); - memset(replaces, 0, Natts_pg_statistic_ext * sizeof(bool)); - memset(values, 0, Natts_pg_statistic_ext * sizeof(Datum)); + memset(nulls, true, sizeof(nulls)); + memset(replaces, false, sizeof(replaces)); + memset(values, 0, sizeof(values)); /* * Construct a new pg_statistic_ext tuple, replacing the calculated stats. From 801386af62eac84c13feec5a643c120cf0ce33bd Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 28 Nov 2017 23:32:17 -0500 Subject: [PATCH 0629/1087] Clarify old comment about qual_is_pushdown_safe's handling of subplans. This comment glossed over the difference between initplans and subplans, but they are indeed different for our purposes here. --- src/backend/optimizer/path/allpaths.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 906d08ab37..44f6b03442 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -2840,9 +2840,11 @@ targetIsInAllPartitionLists(TargetEntry *tle, Query *query) * * Conditions checked here: * - * 1. The qual must not contain any subselects (mainly because I'm not sure - * it will work correctly: sublinks will already have been transformed into - * subplans in the qual, but not in the subquery). + * 1. 
The qual must not contain any SubPlans (mainly because I'm not sure + * it will work correctly: SubLinks will already have been transformed into + * SubPlans in the qual, but not in the subquery). Note that SubLinks that + * transform to initplans are safe, and will be accepted here because what + * we'll see in the qual is just a Param referencing the initplan output. * * 2. If unsafeVolatile is set, the qual must not contain any volatile * functions. From eaedf0df7197b21182f6c341a44e4fdaa3cd6ea6 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 29 Nov 2017 09:24:24 -0500 Subject: [PATCH 0630/1087] Update typedefs.list and re-run pgindent Discussion: http://postgr.es/m/CA+TgmoaA9=1RWKtBWpDaj+sF3Stgc8sHgf5z=KGtbjwPLQVDMA@mail.gmail.com --- contrib/fuzzystrmatch/fuzzystrmatch.c | 2 +- contrib/isn/isn.c | 2 +- src/backend/access/brin/brin.c | 4 +- src/backend/access/common/printsimple.c | 6 +- src/backend/access/transam/xlog.c | 14 +++-- src/backend/access/transam/xlogutils.c | 2 +- src/backend/catalog/partition.c | 39 ++++++------ src/backend/commands/copy.c | 6 +- src/backend/commands/vacuumlazy.c | 2 +- src/backend/executor/execPartition.c | 4 +- src/backend/executor/nodeModifyTable.c | 4 +- src/backend/libpq/auth-scram.c | 9 ++- src/backend/libpq/auth.c | 8 ++- src/backend/optimizer/plan/setrefs.c | 2 +- src/backend/optimizer/util/clauses.c | 2 +- src/backend/optimizer/util/pathnode.c | 8 +-- src/backend/optimizer/util/plancat.c | 6 +- src/backend/optimizer/util/relnode.c | 34 +++++----- src/backend/postmaster/bgworker.c | 2 +- src/backend/postmaster/postmaster.c | 8 +-- src/backend/replication/basebackup.c | 10 +-- .../replication/logical/reorderbuffer.c | 4 +- src/backend/storage/smgr/smgr.c | 6 +- src/backend/tcop/postgres.c | 18 +++--- src/backend/tsearch/spell.c | 2 +- src/backend/utils/adt/date.c | 2 +- src/backend/utils/adt/pgstatfuncs.c | 4 +- src/backend/utils/mmgr/generation.c | 62 +++++++++---------- src/backend/utils/sort/tuplesort.c | 7 +-- src/bin/initdb/initdb.c | 4 +- src/bin/pg_basebackup/pg_receivewal.c | 3 +- src/bin/pg_basebackup/streamutil.h | 6 +- src/bin/psql/describe.c | 2 +- src/include/access/hash.h | 2 +- src/include/catalog/partition.h | 2 +- src/include/executor/execParallel.h | 2 +- src/include/executor/execPartition.h | 10 +-- src/include/fe_utils/psqlscan_int.h | 2 +- src/include/lib/dshash.h | 2 +- src/include/lib/stringinfo.h | 2 +- src/include/libpq/scram.h | 6 +- src/include/nodes/relation.h | 2 +- src/include/utils/guc.h | 2 +- src/include/utils/memutils.h | 4 +- src/interfaces/ecpg/ecpglib/data.c | 3 +- src/interfaces/ecpg/ecpglib/execute.c | 2 +- src/interfaces/libpq/fe-auth-scram.c | 4 +- src/interfaces/libpq/fe-auth.c | 7 ++- src/interfaces/libpq/fe-auth.h | 10 +-- src/pl/plpgsql/src/pl_exec.c | 4 +- src/test/thread/thread_test.c | 8 +-- src/tools/pgindent/typedefs.list | 53 ++++++++++------ 52 files changed, 224 insertions(+), 197 deletions(-) diff --git a/contrib/fuzzystrmatch/fuzzystrmatch.c b/contrib/fuzzystrmatch/fuzzystrmatch.c index f6341c84cf..b108353c06 100644 --- a/contrib/fuzzystrmatch/fuzzystrmatch.c +++ b/contrib/fuzzystrmatch/fuzzystrmatch.c @@ -104,7 +104,7 @@ soundex_code(char letter) #define TH '0' static char Lookahead(char *word, int how_far); -static void _metaphone(char *word, int max_phonemes, char **phoned_word); +static void _metaphone(char *word, int max_phonemes, char **phoned_word); /* Metachar.h ... 
little bits about characters for metaphone */ diff --git a/contrib/isn/isn.c b/contrib/isn/isn.c index 0148f9549f..a33a86444b 100644 --- a/contrib/isn/isn.c +++ b/contrib/isn/isn.c @@ -925,7 +925,7 @@ string2ean(const char *str, bool errorOK, ean13 *result, * Exported routines. *---------------------------------------------------------*/ -void _PG_init(void); +void _PG_init(void); void _PG_init(void) diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c index cafc8fe7be..9fbb093b3c 100644 --- a/src/backend/access/brin/brin.c +++ b/src/backend/access/brin/brin.c @@ -1314,8 +1314,8 @@ brinsummarize(Relation index, Relation heapRel, BlockNumber pageRange, /* * Unless requested to summarize even a partial range, go away now if - * we think the next range is partial. Caller would pass true when - * it is typically run once bulk data loading is done + * we think the next range is partial. Caller would pass true when it + * is typically run once bulk data loading is done * (brin_summarize_new_values), and false when it is typically the * result of arbitrarily-scheduled maintenance command (vacuuming). */ diff --git a/src/backend/access/common/printsimple.c b/src/backend/access/common/printsimple.c index 872de7c3f4..f3e48b7e98 100644 --- a/src/backend/access/common/printsimple.c +++ b/src/backend/access/common/printsimple.c @@ -41,12 +41,12 @@ printsimple_startup(DestReceiver *self, int operation, TupleDesc tupdesc) Form_pg_attribute attr = TupleDescAttr(tupdesc, i); pq_sendstring(&buf, NameStr(attr->attname)); - pq_sendint32(&buf, 0); /* table oid */ - pq_sendint16(&buf, 0); /* attnum */ + pq_sendint32(&buf, 0); /* table oid */ + pq_sendint16(&buf, 0); /* attnum */ pq_sendint32(&buf, (int) attr->atttypid); pq_sendint16(&buf, attr->attlen); pq_sendint32(&buf, attr->atttypmod); - pq_sendint16(&buf, 0); /* format code */ + pq_sendint16(&buf, 0); /* format code */ } pq_endmessage(&buf); diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index e729180f82..fba201f659 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -6586,10 +6586,10 @@ StartupXLOG(void) else { /* - * We used to attempt to go back to a secondary checkpoint - * record here, but only when not in standby_mode. We now - * just fail if we can't read the last checkpoint because - * this allows us to simplify processing around checkpoints. + * We used to attempt to go back to a secondary checkpoint record + * here, but only when not in standby_mode. We now just fail if we + * can't read the last checkpoint because this allows us to + * simplify processing around checkpoints. 
*/ ereport(PANIC, (errmsg("could not locate a valid checkpoint record"))); @@ -8888,7 +8888,8 @@ CreateCheckPoint(int flags) (errmsg("concurrent write-ahead log activity while database system is shutting down"))); /* - * Remember the prior checkpoint's redo ptr for UpdateCheckPointDistanceEstimate() + * Remember the prior checkpoint's redo ptr for + * UpdateCheckPointDistanceEstimate() */ PriorRedoPtr = ControlFile->checkPointCopy.redo; @@ -9211,7 +9212,8 @@ CreateRestartPoint(int flags) CheckPointGuts(lastCheckPoint.redo, flags); /* - * Remember the prior checkpoint's redo ptr for UpdateCheckPointDistanceEstimate() + * Remember the prior checkpoint's redo ptr for + * UpdateCheckPointDistanceEstimate() */ PriorRedoPtr = ControlFile->checkPointCopy.redo; diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c index 3af6e19c98..cc14063daf 100644 --- a/src/backend/access/transam/xlogutils.c +++ b/src/backend/access/transam/xlogutils.c @@ -802,7 +802,7 @@ void XLogReadDetermineTimeline(XLogReaderState *state, XLogRecPtr wantPage, uint32 wantLength) { const XLogRecPtr lastReadPage = state->readSegNo * - state->wal_segment_size + state->readOff; + state->wal_segment_size + state->readOff; Assert(wantPage != InvalidXLogRecPtr && wantPage % XLOG_BLCKSZ == 0); Assert(wantLength <= XLOG_BLCKSZ); diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index d62230554e..2bf8117757 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -2134,7 +2134,7 @@ get_qual_for_range(Relation parent, PartitionBoundSpec *spec, if (or_expr_args != NIL) { - Expr *other_parts_constr; + Expr *other_parts_constr; /* * Combine the constraints obtained for non-default partitions @@ -2143,20 +2143,20 @@ get_qual_for_range(Relation parent, PartitionBoundSpec *spec, * useless repetition). Add the same now. */ other_parts_constr = - makeBoolExpr(AND_EXPR, - lappend(get_range_nulltest(key), - list_length(or_expr_args) > 1 - ? makeBoolExpr(OR_EXPR, or_expr_args, - -1) - : linitial(or_expr_args)), - -1); + makeBoolExpr(AND_EXPR, + lappend(get_range_nulltest(key), + list_length(or_expr_args) > 1 + ? makeBoolExpr(OR_EXPR, or_expr_args, + -1) + : linitial(or_expr_args)), + -1); /* * Finally, the default partition contains everything *NOT* * contained in the non-default partitions. */ result = list_make1(makeBoolExpr(NOT_EXPR, - list_make1(other_parts_constr), -1)); + list_make1(other_parts_constr), -1)); } return result; @@ -2502,9 +2502,9 @@ generate_partition_qual(Relation rel) int get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) { - int bound_offset; - int part_index = -1; - PartitionKey key = RelationGetPartitionKey(relation); + int bound_offset; + int part_index = -1; + PartitionKey key = RelationGetPartitionKey(relation); PartitionDesc partdesc = RelationGetPartitionDesc(relation); /* Route as appropriate based on partitioning strategy. 
*/ @@ -2513,8 +2513,8 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) case PARTITION_STRATEGY_HASH: { PartitionBoundInfo boundinfo = partdesc->boundinfo; - int greatest_modulus = get_greatest_modulus(boundinfo); - uint64 rowHash = compute_hash_value(key, values, isnull); + int greatest_modulus = get_greatest_modulus(boundinfo); + uint64 rowHash = compute_hash_value(key, values, isnull); part_index = boundinfo->indexes[rowHash % greatest_modulus]; } @@ -2548,8 +2548,7 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) /* * No range includes NULL, so this will be accepted by the - * default partition if there is one, and otherwise - * rejected. + * default partition if there is one, and otherwise rejected. */ for (i = 0; i < key->partnatts; i++) { @@ -2563,7 +2562,7 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) if (!range_partkey_has_null) { bound_offset = partition_bound_bsearch(key, - partdesc->boundinfo, + partdesc->boundinfo, values, false, &equal); @@ -2585,8 +2584,8 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) } /* - * part_index < 0 means we failed to find a partition of this parent. - * Use the default partition, if there is one. + * part_index < 0 means we failed to find a partition of this parent. Use + * the default partition, if there is one. */ if (part_index < 0) part_index = partdesc->boundinfo->default_index; @@ -3125,7 +3124,7 @@ satisfies_hash_partition(PG_FUNCTION_ARGS) bool variadic_typbyval; char variadic_typalign; FmgrInfo partsupfunc[PARTITION_MAX_KEYS]; - } ColumnsHashData; + } ColumnsHashData; Oid parentId; int modulus; int remainder; diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index d6b235ca79..13eb9e34ba 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -168,7 +168,7 @@ typedef struct CopyStateData PartitionDispatch *partition_dispatch_info; int num_dispatch; /* Number of entries in the above array */ int num_partitions; /* Number of members in the following arrays */ - ResultRelInfo **partitions; /* Per partition result relation pointers */ + ResultRelInfo **partitions; /* Per partition result relation pointers */ TupleConversionMap **partition_tupconv_maps; TupleTableSlot *partition_tuple_slot; TransitionCaptureState *transition_capture; @@ -360,7 +360,7 @@ SendCopyBegin(CopyState cstate) pq_sendbyte(&buf, format); /* overall format */ pq_sendint16(&buf, natts); for (i = 0; i < natts; i++) - pq_sendint16(&buf, format); /* per-column formats */ + pq_sendint16(&buf, format); /* per-column formats */ pq_endmessage(&buf); cstate->copy_dest = COPY_NEW_FE; } @@ -393,7 +393,7 @@ ReceiveCopyBegin(CopyState cstate) pq_sendbyte(&buf, format); /* overall format */ pq_sendint16(&buf, natts); for (i = 0; i < natts; i++) - pq_sendint16(&buf, format); /* per-column formats */ + pq_sendint16(&buf, format); /* per-column formats */ pq_endmessage(&buf); cstate->copy_dest = COPY_NEW_FE; cstate->fe_msgbuf = makeStringInfo(); diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c index 6587db77ac..20ce431e46 100644 --- a/src/backend/commands/vacuumlazy.c +++ b/src/backend/commands/vacuumlazy.c @@ -355,7 +355,7 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params, params->log_min_duration)) { StringInfoData buf; - char *msgfmt; + char *msgfmt; TimestampDifference(starttime, endtime, &secs, &usecs); diff --git a/src/backend/executor/execPartition.c 
b/src/backend/executor/execPartition.c index 2fc411a9b5..59a0ca4597 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -184,10 +184,10 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, parent = pd[0]; while (true) { - PartitionDesc partdesc; + PartitionDesc partdesc; TupleTableSlot *myslot = parent->tupslot; TupleConversionMap *map = parent->tupmap; - int cur_index = -1; + int cur_index = -1; rel = parent->reldesc; partdesc = RelationGetPartitionDesc(rel); diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 201c607002..1e3ece9b34 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -1518,8 +1518,8 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) if (mtstate->mt_partition_dispatch_info != NULL) { /* - * For tuple routing among partitions, we need TupleDescs based - * on the partition routing table. + * For tuple routing among partitions, we need TupleDescs based on + * the partition routing table. */ ResultRelInfo **resultRelInfos = mtstate->mt_partitions; diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c index 22103ce479..15c3857f57 100644 --- a/src/backend/libpq/auth-scram.c +++ b/src/backend/libpq/auth-scram.c @@ -791,6 +791,7 @@ read_client_first_message(scram_state *state, char *input) switch (*input) { case 'n': + /* * The client does not support channel binding or has simply * decided to not use it. In that case just let it go. @@ -805,6 +806,7 @@ read_client_first_message(scram_state *state, char *input) input++; break; case 'y': + /* * The client supports channel binding and thinks that the server * does not. In this case, the server must fail authentication if @@ -827,12 +829,13 @@ read_client_first_message(scram_state *state, char *input) input++; break; case 'p': + /* * The client requires channel binding. Channel binding type * follows, e.g., "p=tls-unique". */ { - char *channel_binding_type; + char *channel_binding_type; if (!state->ssl_in_use) { @@ -1139,8 +1142,8 @@ read_client_final_message(scram_state *state, char *input) b64_message[b64_message_len] = '\0'; /* - * Compare the value sent by the client with the value expected by - * the server. + * Compare the value sent by the client with the value expected by the + * server. */ if (strcmp(channel_binding, b64_message) != 0) ereport(ERROR, diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 2dd3328d71..19a91ca67d 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -100,7 +100,8 @@ static struct pam_conv pam_passw_conv = { NULL }; -static const char *pam_passwd = NULL; /* Workaround for Solaris 2.6 brokenness */ +static const char *pam_passwd = NULL; /* Workaround for Solaris 2.6 + * brokenness */ static Port *pam_port_cludge; /* Workaround for passing "Port *port" into * pam_passwd_conv_proc */ #endif /* USE_PAM */ @@ -914,6 +915,7 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) pfree(sasl_mechs); #ifdef USE_SSL + /* * Get data for channel binding. 
*/ @@ -2343,7 +2345,7 @@ CheckBSDAuth(Port *port, char *user) */ #ifdef USE_LDAP -static int errdetail_for_ldap(LDAP *ldap); +static int errdetail_for_ldap(LDAP *ldap); /* * Initialize a connection to the LDAP server, including setting up @@ -2514,7 +2516,7 @@ CheckLDAPAuth(Port *port) char *filter; LDAPMessage *search_message; LDAPMessage *entry; - char *attributes[] = { LDAP_NO_ATTRS, NULL }; + char *attributes[] = {LDAP_NO_ATTRS, NULL}; char *dn; char *c; int count; diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c index 28a7f7ec45..b5c41241d7 100644 --- a/src/backend/optimizer/plan/setrefs.c +++ b/src/backend/optimizer/plan/setrefs.c @@ -1795,7 +1795,7 @@ set_upper_references(PlannerInfo *root, Plan *plan, int rtoffset) static void set_param_references(PlannerInfo *root, Plan *plan) { - Assert(IsA(plan, Gather) || IsA(plan, GatherMerge)); + Assert(IsA(plan, Gather) ||IsA(plan, GatherMerge)); if (plan->lefttree->extParam) { diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index d14ef31eae..e5e2956564 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -1118,7 +1118,7 @@ is_parallel_safe(PlannerInfo *root, Node *node) foreach(l2, initsubplan->setParam) context.safe_param_ids = lcons_int(lfirst_int(l2), - context.safe_param_ids); + context.safe_param_ids); } } diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 68dee0f501..02bbbc0647 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -2412,7 +2412,7 @@ apply_projection_to_path(PlannerInfo *root, * workers can help project. But if there is something that is not * parallel-safe in the target expressions, then we can't. */ - if ((IsA(path, GatherPath) || IsA(path, GatherMergePath)) && + if ((IsA(path, GatherPath) ||IsA(path, GatherMergePath)) && is_parallel_safe(root, (Node *) target->exprs)) { /* @@ -2421,9 +2421,9 @@ apply_projection_to_path(PlannerInfo *root, * It seems unlikely at present that there could be any other * references to the subpath, but better safe than sorry. * - * Note that we don't change the parallel path's cost estimates; it might - * be appropriate to do so, to reflect the fact that the bulk of the - * target evaluation will happen in workers. + * Note that we don't change the parallel path's cost estimates; it + * might be appropriate to do so, to reflect the fact that the bulk of + * the target evaluation will happen in workers. */ if (IsA(path, GatherPath)) { diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index 9d35a41e22..199a2631a1 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -1825,7 +1825,7 @@ set_relation_partition_info(PlannerInfo *root, RelOptInfo *rel, Relation relation) { PartitionDesc partdesc; - PartitionKey partkey; + PartitionKey partkey; Assert(relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); @@ -1890,8 +1890,8 @@ find_partition_scheme(PlannerInfo *root, Relation relation) /* * Did not find matching partition scheme. Create one copying relevant - * information from the relcache. We need to copy the contents of the array - * since the relcache entry may not survive after we have closed the + * information from the relcache. We need to copy the contents of the + * array since the relcache entry may not survive after we have closed the * relation. 
*/ part_scheme = (PartitionScheme) palloc0(sizeof(PartitionSchemeData)); diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c index cb94c318a7..674cfc6b06 100644 --- a/src/backend/optimizer/util/relnode.c +++ b/src/backend/optimizer/util/relnode.c @@ -873,11 +873,11 @@ build_joinrel_tlist(PlannerInfo *root, RelOptInfo *joinrel, continue; /* - * Otherwise, anything in a baserel or joinrel targetlist ought to be a - * Var. Children of a partitioned table may have ConvertRowtypeExpr - * translating whole-row Var of a child to that of the parent. Children - * of an inherited table or subquery child rels can not directly - * participate in a join, so other kinds of nodes here. + * Otherwise, anything in a baserel or joinrel targetlist ought to be + * a Var. Children of a partitioned table may have ConvertRowtypeExpr + * translating whole-row Var of a child to that of the parent. + * Children of an inherited table or subquery child rels can not + * directly participate in a join, so other kinds of nodes here. */ if (IsA(var, Var)) { @@ -901,7 +901,7 @@ build_joinrel_tlist(PlannerInfo *root, RelOptInfo *joinrel, child_expr = (ConvertRowtypeExpr *) childvar; childvar = (Var *) child_expr->arg; } - Assert(IsA(childvar, Var) && childvar->varattno == 0); + Assert(IsA(childvar, Var) &&childvar->varattno == 0); baserel = find_base_rel(root, childvar->varno); ndx = 0 - baserel->min_attr; @@ -1666,18 +1666,19 @@ build_joinrel_partition_info(RelOptInfo *joinrel, RelOptInfo *outer_rel, partnatts = joinrel->part_scheme->partnatts; joinrel->partexprs = (List **) palloc0(sizeof(List *) * partnatts); joinrel->nullable_partexprs = - (List **) palloc0(sizeof(List *) *partnatts); + (List **) palloc0(sizeof(List *) * partnatts); /* * Construct partition keys for the join. * * An INNER join between two partitioned relations can be regarded as - * partitioned by either key expression. For example, A INNER JOIN B ON A.a = - * B.b can be regarded as partitioned on A.a or on B.b; they are equivalent. + * partitioned by either key expression. For example, A INNER JOIN B ON + * A.a = B.b can be regarded as partitioned on A.a or on B.b; they are + * equivalent. * * For a SEMI or ANTI join, the result can only be regarded as being - * partitioned in the same manner as the outer side, since the inner columns - * are not retained. + * partitioned in the same manner as the outer side, since the inner + * columns are not retained. * * An OUTER join like (A LEFT JOIN B ON A.a = B.b) may produce rows with * B.b NULL. These rows may not fit the partitioning conditions imposed on @@ -1686,11 +1687,12 @@ build_joinrel_partition_info(RelOptInfo *joinrel, RelOptInfo *outer_rel, * expressions from the OUTER side only. However, because all * commonly-used comparison operators are strict, the presence of nulls on * the outer side doesn't cause any problem; they can't match anything at - * future join levels anyway. Therefore, we track two sets of expressions: - * those that authentically partition the relation (partexprs) and those - * that partition the relation with the exception that extra nulls may be - * present (nullable_partexprs). When the comparison operator is strict, - * the latter is just as good as the former. + * future join levels anyway. Therefore, we track two sets of + * expressions: those that authentically partition the relation + * (partexprs) and those that partition the relation with the exception + * that extra nulls may be present (nullable_partexprs). 
When the + * comparison operator is strict, the latter is just as good as the + * former. */ for (cnt = 0; cnt < partnatts; cnt++) { diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c index 4a3c4b4cc9..f752a12713 100644 --- a/src/backend/postmaster/bgworker.c +++ b/src/backend/postmaster/bgworker.c @@ -1253,7 +1253,7 @@ GetBackgroundWorkerTypeByPid(pid_t pid) { int slotno; bool found = false; - static char result[BGW_MAXLEN]; + static char result[BGW_MAXLEN]; LWLockAcquire(BackgroundWorkerLock, LW_SHARED); diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index a3d4917173..94a4609371 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -2161,8 +2161,8 @@ ProcessStartupPacket(Port *port, bool SSLdone) /* * If the client requested a newer protocol version or if the client * requested any protocol options we didn't recognize, let them know - * the newest minor protocol version we do support and the names of any - * unrecognized options. + * the newest minor protocol version we do support and the names of + * any unrecognized options. */ if (PG_PROTOCOL_MINOR(proto) > PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST) || unrecognized_protocol_options != NIL) @@ -4316,8 +4316,8 @@ BackendInitialize(Port *port) * * postgres: walsender * - * To achieve that, we pass "walsender" as username and username - * as dbname to init_ps_display(). XXX: should add a new variant of + * To achieve that, we pass "walsender" as username and username as dbname + * to init_ps_display(). XXX: should add a new variant of * init_ps_display() to avoid abusing the parameters like this. */ if (am_walsender) diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c index cbcb3dbec3..ebb8fde3bc 100644 --- a/src/backend/replication/basebackup.c +++ b/src/backend/replication/basebackup.c @@ -278,7 +278,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir) /* Send CopyOutResponse message */ pq_beginmessage(&buf, 'H'); pq_sendbyte(&buf, 0); /* overall format */ - pq_sendint16(&buf, 0); /* natts */ + pq_sendint16(&buf, 0); /* natts */ pq_endmessage(&buf); if (ti->path == NULL) @@ -744,7 +744,7 @@ SendBackupHeader(List *tablespaces) pq_sendstring(&buf, "spcoid"); pq_sendint32(&buf, 0); /* table oid */ pq_sendint16(&buf, 0); /* attnum */ - pq_sendint32(&buf, OIDOID); /* type oid */ + pq_sendint32(&buf, OIDOID); /* type oid */ pq_sendint16(&buf, 4); /* typlen */ pq_sendint32(&buf, 0); /* typmod */ pq_sendint16(&buf, 0); /* format code */ @@ -774,10 +774,10 @@ SendBackupHeader(List *tablespaces) /* Send one datarow message */ pq_beginmessage(&buf, 'D'); - pq_sendint16(&buf, 3); /* number of columns */ + pq_sendint16(&buf, 3); /* number of columns */ if (ti->path == NULL) { - pq_sendint32(&buf, -1); /* Length = -1 ==> NULL */ + pq_sendint32(&buf, -1); /* Length = -1 ==> NULL */ pq_sendint32(&buf, -1); } else @@ -795,7 +795,7 @@ SendBackupHeader(List *tablespaces) if (ti->size >= 0) send_int8_string(&buf, ti->size / 1024); else - pq_sendint32(&buf, -1); /* NULL */ + pq_sendint32(&buf, -1); /* NULL */ pq_endmessage(&buf); } diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c index dc0ad5b0e7..fa95bab58e 100644 --- a/src/backend/replication/logical/reorderbuffer.c +++ b/src/backend/replication/logical/reorderbuffer.c @@ -246,8 +246,8 @@ ReorderBufferAllocate(void) sizeof(ReorderBufferTXN)); buffer->tup_context = 
GenerationContextCreate(new_ctx, - "Tuples", - SLAB_LARGE_BLOCK_SIZE); + "Tuples", + SLAB_LARGE_BLOCK_SIZE); hash_ctl.keysize = sizeof(TransactionId); hash_ctl.entrysize = sizeof(ReorderBufferTXNByIdEnt); diff --git a/src/backend/storage/smgr/smgr.c b/src/backend/storage/smgr/smgr.c index 5d5b7dd95e..ac4ce9ce62 100644 --- a/src/backend/storage/smgr/smgr.c +++ b/src/backend/storage/smgr/smgr.c @@ -601,7 +601,7 @@ smgrextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, char *buffer, bool skipFsync) { smgrsw[reln->smgr_which].smgr_extend(reln, forknum, blocknum, - buffer, skipFsync); + buffer, skipFsync); } /* @@ -648,7 +648,7 @@ smgrwrite(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, char *buffer, bool skipFsync) { smgrsw[reln->smgr_which].smgr_write(reln, forknum, blocknum, - buffer, skipFsync); + buffer, skipFsync); } @@ -661,7 +661,7 @@ smgrwriteback(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, BlockNumber nblocks) { smgrsw[reln->smgr_which].smgr_writeback(reln, forknum, blocknum, - nblocks); + nblocks); } /* diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 05c5c194ec..1ae9ac2d57 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -2374,8 +2374,8 @@ exec_describe_statement_message(const char *stmt_name) /* * First describe the parameters... */ - pq_beginmessage_reuse(&row_description_buf, 't'); /* parameter description - * message type */ + pq_beginmessage_reuse(&row_description_buf, 't'); /* parameter description + * message type */ pq_sendint16(&row_description_buf, psrc->num_params); for (i = 0; i < psrc->num_params; i++) @@ -2952,14 +2952,14 @@ ProcessInterrupts(void) /* * Don't allow query cancel interrupts while reading input from the * client, because we might lose sync in the FE/BE protocol. (Die - * interrupts are OK, because we won't read any further messages from - * the client in that case.) + * interrupts are OK, because we won't read any further messages from the + * client in that case.) */ if (QueryCancelPending && QueryCancelHoldoffCount != 0) { /* - * Re-arm InterruptPending so that we process the cancel request - * as soon as we're done reading the message. + * Re-arm InterruptPending so that we process the cancel request as + * soon as we're done reading the message. 
*/ InterruptPending = true; } @@ -4494,10 +4494,10 @@ ShowUsage(const char *title) appendStringInfo(&str, "!\t%ld kB max resident size\n", #if defined(__darwin__) - /* in bytes on macOS */ - r.ru_maxrss/1024 + /* in bytes on macOS */ + r.ru_maxrss / 1024 #else - /* in kilobytes on most other platforms */ + /* in kilobytes on most other platforms */ r.ru_maxrss #endif ); diff --git a/src/backend/tsearch/spell.c b/src/backend/tsearch/spell.c index e70e901066..9a09ffb20a 100644 --- a/src/backend/tsearch/spell.c +++ b/src/backend/tsearch/spell.c @@ -202,7 +202,7 @@ static int cmpspellaffix(const void *s1, const void *s2) { return strcmp((*(SPELL *const *) s1)->p.flag, - (*(SPELL *const *) s2)->p.flag); + (*(SPELL *const *) s2)->p.flag); } static int diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c index 04e737d080..307b5e8629 100644 --- a/src/backend/utils/adt/date.c +++ b/src/backend/utils/adt/date.c @@ -2231,7 +2231,7 @@ timetz_hash_extended(PG_FUNCTION_ARGS) Int64GetDatumFast(key->time), seed)); thash ^= DatumGetUInt64(hash_uint32_extended(key->zone, - DatumGetInt64(seed))); + DatumGetInt64(seed))); PG_RETURN_UINT64(thash); } diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c index 8d9e7c10ae..04cf209b5b 100644 --- a/src/backend/utils/adt/pgstatfuncs.c +++ b/src/backend/utils/adt/pgstatfuncs.c @@ -921,8 +921,8 @@ pg_stat_get_backend_activity(PG_FUNCTION_ARGS) int32 beid = PG_GETARG_INT32(0); PgBackendStatus *beentry; const char *activity; - char *clipped_activity; - text *ret; + char *clipped_activity; + text *ret; if ((beentry = pgstat_fetch_stat_beentry(beid)) == NULL) activity = ""; diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c index 31342ad69b..19390fa581 100644 --- a/src/backend/utils/mmgr/generation.c +++ b/src/backend/utils/mmgr/generation.c @@ -46,7 +46,7 @@ #define Generation_BLOCKHDRSZ MAXALIGN(sizeof(GenerationBlock)) #define Generation_CHUNKHDRSZ sizeof(GenerationChunk) -typedef struct GenerationBlock GenerationBlock; /* forward reference */ +typedef struct GenerationBlock GenerationBlock; /* forward reference */ typedef struct GenerationChunk GenerationChunk; typedef void *GenerationPointer; @@ -62,9 +62,9 @@ typedef struct GenerationContext /* Generational context parameters */ Size blockSize; /* standard block size */ - GenerationBlock *block; /* current (most recently allocated) block */ + GenerationBlock *block; /* current (most recently allocated) block */ dlist_head blocks; /* list of blocks */ -} GenerationContext; +} GenerationContext; /* * GenerationBlock @@ -155,7 +155,7 @@ static void GenerationDelete(MemoryContext context); static Size GenerationGetChunkSpace(MemoryContext context, void *pointer); static bool GenerationIsEmpty(MemoryContext context); static void GenerationStats(MemoryContext context, int level, bool print, - MemoryContextCounters *totals); + MemoryContextCounters *totals); #ifdef MEMORY_CONTEXT_CHECKING static void GenerationCheck(MemoryContext context); @@ -207,10 +207,10 @@ static MemoryContextMethods GenerationMethods = { */ MemoryContext GenerationContextCreate(MemoryContext parent, - const char *name, - Size blockSize) + const char *name, + Size blockSize) { - GenerationContext *set; + GenerationContext *set; /* Assert we padded GenerationChunk properly */ StaticAssertStmt(Generation_CHUNKHDRSZ == MAXALIGN(Generation_CHUNKHDRSZ), @@ -233,10 +233,10 @@ GenerationContextCreate(MemoryContext parent, /* Do the type-independent part of context 
creation */ set = (GenerationContext *) MemoryContextCreate(T_GenerationContext, - sizeof(GenerationContext), - &GenerationMethods, - parent, - name); + sizeof(GenerationContext), + &GenerationMethods, + parent, + name); set->blockSize = blockSize; @@ -250,7 +250,7 @@ GenerationContextCreate(MemoryContext parent, static void GenerationInit(MemoryContext context) { - GenerationContext *set = (GenerationContext *) context; + GenerationContext *set = (GenerationContext *) context; set->block = NULL; dlist_init(&set->blocks); @@ -266,7 +266,7 @@ GenerationInit(MemoryContext context) static void GenerationReset(MemoryContext context) { - GenerationContext *set = (GenerationContext *) context; + GenerationContext *set = (GenerationContext *) context; dlist_mutable_iter miter; AssertArg(GenerationIsValid(set)); @@ -324,9 +324,9 @@ GenerationDelete(MemoryContext context) static void * GenerationAlloc(MemoryContext context, Size size) { - GenerationContext *set = (GenerationContext *) context; - GenerationBlock *block; - GenerationChunk *chunk; + GenerationContext *set = (GenerationContext *) context; + GenerationBlock *block; + GenerationChunk *chunk; Size chunk_size = MAXALIGN(size); /* is it an over-sized chunk? if yes, allocate special block */ @@ -460,9 +460,9 @@ GenerationAlloc(MemoryContext context, Size size) static void GenerationFree(MemoryContext context, void *pointer) { - GenerationContext *set = (GenerationContext *) context; - GenerationChunk *chunk = GenerationPointerGetChunk(pointer); - GenerationBlock *block; + GenerationContext *set = (GenerationContext *) context; + GenerationChunk *chunk = GenerationPointerGetChunk(pointer); + GenerationBlock *block; /* Allow access to private part of chunk header. */ VALGRIND_MAKE_MEM_DEFINED(chunk, GENERATIONCHUNK_PRIVATE_LEN); @@ -474,7 +474,7 @@ GenerationFree(MemoryContext context, void *pointer) if (chunk->requested_size < chunk->size) if (!sentinel_ok(pointer, chunk->requested_size)) elog(WARNING, "detected write past chunk end in %s %p", - ((MemoryContext)set)->name, chunk); + ((MemoryContext) set)->name, chunk); #endif #ifdef CLOBBER_FREED_MEMORY @@ -520,9 +520,9 @@ GenerationFree(MemoryContext context, void *pointer) static void * GenerationRealloc(MemoryContext context, void *pointer, Size size) { - GenerationContext *set = (GenerationContext *) context; - GenerationChunk *chunk = GenerationPointerGetChunk(pointer); - GenerationPointer newPointer; + GenerationContext *set = (GenerationContext *) context; + GenerationChunk *chunk = GenerationPointerGetChunk(pointer); + GenerationPointer newPointer; Size oldsize; /* Allow access to private part of chunk header. 
*/ @@ -535,7 +535,7 @@ GenerationRealloc(MemoryContext context, void *pointer, Size size) if (chunk->requested_size < oldsize) if (!sentinel_ok(pointer, chunk->requested_size)) elog(WARNING, "detected write past chunk end in %s %p", - ((MemoryContext)set)->name, chunk); + ((MemoryContext) set)->name, chunk); #endif /* @@ -652,7 +652,7 @@ GenerationGetChunkSpace(MemoryContext context, void *pointer) static bool GenerationIsEmpty(MemoryContext context) { - GenerationContext *set = (GenerationContext *) context; + GenerationContext *set = (GenerationContext *) context; return dlist_is_empty(&set->blocks); } @@ -670,9 +670,9 @@ GenerationIsEmpty(MemoryContext context) */ static void GenerationStats(MemoryContext context, int level, bool print, - MemoryContextCounters *totals) + MemoryContextCounters *totals) { - GenerationContext *set = (GenerationContext *) context; + GenerationContext *set = (GenerationContext *) context; Size nblocks = 0; Size nchunks = 0; Size nfreechunks = 0; @@ -698,8 +698,8 @@ GenerationStats(MemoryContext context, int level, bool print, for (i = 0; i < level; i++) fprintf(stderr, " "); fprintf(stderr, - "Generation: %s: %zu total in %zd blocks (%zd chunks); %zu free (%zd chunks); %zu used\n", - ((MemoryContext)set)->name, totalspace, nblocks, nchunks, freespace, + "Generation: %s: %zu total in %zd blocks (%zd chunks); %zu free (%zd chunks); %zu used\n", + ((MemoryContext) set)->name, totalspace, nblocks, nchunks, freespace, nfreechunks, totalspace - freespace); } @@ -726,7 +726,7 @@ GenerationStats(MemoryContext context, int level, bool print, static void GenerationCheck(MemoryContext context) { - GenerationContext *gen = (GenerationContext *) context; + GenerationContext *gen = (GenerationContext *) context; char *name = context->name; dlist_iter iter; @@ -818,4 +818,4 @@ GenerationCheck(MemoryContext context) } } -#endif /* MEMORY_CONTEXT_CHECKING */ +#endif /* MEMORY_CONTEXT_CHECKING */ diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c index 34af8d6334..3c23ac75a0 100644 --- a/src/backend/utils/sort/tuplesort.c +++ b/src/backend/utils/sort/tuplesort.c @@ -1591,8 +1591,7 @@ puttuple_common(Tuplesortstate *state, SortTuple *tuple) case TSS_BUILDRUNS: /* - * Save the tuple into the unsorted array (there must be - * space) + * Save the tuple into the unsorted array (there must be space) */ state->memtuples[state->memtupcount++] = *tuple; @@ -2742,8 +2741,8 @@ dumptuples(Tuplesortstate *state, bool alltuples) int i; /* - * Nothing to do if we still fit in available memory and have array - * slots, unless this is the final call during initial run generation. + * Nothing to do if we still fit in available memory and have array slots, + * unless this is the final call during initial run generation. 
*/ if (state->memtupcount < state->memtupsize && !LACKMEM(state) && !alltuples) diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c index ad0d0e2ac0..ddc850db1c 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -148,7 +148,7 @@ static int wal_segment_size_mb; /* internal vars */ static const char *progname; -static int encodingid; +static int encodingid; static char *bki_file; static char *desc_file; static char *shdesc_file; @@ -239,7 +239,7 @@ static void writefile(char *path, char **lines); static FILE *popen_check(const char *command, const char *mode); static void exit_nicely(void) pg_attribute_noreturn(); static char *get_id(void); -static int get_encoding_id(const char *encoding_name); +static int get_encoding_id(const char *encoding_name); static void set_input(char **dest, const char *filename); static void check_input(char *path); static void write_version_file(const char *extrapath); diff --git a/src/bin/pg_basebackup/pg_receivewal.c b/src/bin/pg_basebackup/pg_receivewal.c index d801ea07fc..d6f0f3cb1a 100644 --- a/src/bin/pg_basebackup/pg_receivewal.c +++ b/src/bin/pg_basebackup/pg_receivewal.c @@ -496,7 +496,8 @@ main(int argc, char **argv) int c; int option_index; char *db_name; - uint32 hi, lo; + uint32 hi, + lo; progname = get_progname(argv[0]); set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_basebackup")); diff --git a/src/bin/pg_basebackup/streamutil.h b/src/bin/pg_basebackup/streamutil.h index 908fd68c2b..e9f1d1f536 100644 --- a/src/bin/pg_basebackup/streamutil.h +++ b/src/bin/pg_basebackup/streamutil.h @@ -33,9 +33,9 @@ extern PGconn *GetConnection(void); /* Replication commands */ extern bool CreateReplicationSlot(PGconn *conn, const char *slot_name, - const char *plugin, bool is_temporary, - bool is_physical, bool reserve_wal, - bool slot_exists_ok); + const char *plugin, bool is_temporary, + bool is_physical, bool reserve_wal, + bool slot_exists_ok); extern bool DropReplicationSlot(PGconn *conn, const char *slot_name); extern bool RunIdentifySystem(PGconn *conn, char **sysid, TimeLineID *starttli, diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index 99167104d4..804a84a0c9 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -2939,7 +2939,7 @@ describeOneTableDetails(const char *schemaname, } else { - char *partitioned_note; + char *partitioned_note; if (*PQgetvalue(result, i, 2) == RELKIND_PARTITIONED_TABLE) partitioned_note = ", PARTITIONED"; diff --git a/src/include/access/hash.h b/src/include/access/hash.h index e3135c1738..ccdf9dff4b 100644 --- a/src/include/access/hash.h +++ b/src/include/access/hash.h @@ -133,7 +133,7 @@ typedef struct HashScanPosData int itemIndex; /* current index in items[] */ HashScanPosItem items[MaxIndexTuplesPerPage]; /* MUST BE LAST */ -} HashScanPosData; +} HashScanPosData; #define HashScanPosIsPinned(scanpos) \ ( \ diff --git a/src/include/catalog/partition.h b/src/include/catalog/partition.h index 295e9d224e..2983cfa217 100644 --- a/src/include/catalog/partition.h +++ b/src/include/catalog/partition.h @@ -69,6 +69,6 @@ extern List *get_proposed_default_constraint(List *new_part_constaints); /* For tuple routing */ extern int get_partition_for_tuple(Relation relation, Datum *values, - bool *isnull); + bool *isnull); #endif /* PARTITION_H */ diff --git a/src/include/executor/execParallel.h b/src/include/executor/execParallel.h index 99a13f3b7d..f093f95232 100644 --- a/src/include/executor/execParallel.h +++ b/src/include/executor/execParallel.h @@ -28,7 +28,7 @@ typedef 
struct ParallelExecutorInfo BufferUsage *buffer_usage; /* points to bufusage area in DSM */ SharedExecutorInstrumentation *instrumentation; /* optional */ dsa_area *area; /* points to DSA area in DSM */ - dsa_pointer param_exec; /* serialized PARAM_EXEC parameters */ + dsa_pointer param_exec; /* serialized PARAM_EXEC parameters */ bool finished; /* set true by ExecParallelFinish */ /* These two arrays have pcxt->nworkers_launched entries: */ shm_mq_handle **tqueue; /* tuple queues for worker output */ diff --git a/src/include/executor/execPartition.h b/src/include/executor/execPartition.h index 64e5aab4eb..43ca9908aa 100644 --- a/src/include/executor/execPartition.h +++ b/src/include/executor/execPartition.h @@ -38,13 +38,13 @@ */ typedef struct PartitionDispatchData { - Relation reldesc; - PartitionKey key; - List *keystate; /* list of ExprState */ - PartitionDesc partdesc; + Relation reldesc; + PartitionKey key; + List *keystate; /* list of ExprState */ + PartitionDesc partdesc; TupleTableSlot *tupslot; TupleConversionMap *tupmap; - int *indexes; + int *indexes; } PartitionDispatchData; typedef struct PartitionDispatchData *PartitionDispatch; diff --git a/src/include/fe_utils/psqlscan_int.h b/src/include/fe_utils/psqlscan_int.h index e9b351756b..2653344701 100644 --- a/src/include/fe_utils/psqlscan_int.h +++ b/src/include/fe_utils/psqlscan_int.h @@ -143,6 +143,6 @@ extern void psqlscan_escape_variable(PsqlScanState state, const char *txt, int len, PsqlScanQuoteType quote); extern void psqlscan_test_variable(PsqlScanState state, - const char *txt, int len); + const char *txt, int len); #endif /* PSQLSCAN_INT_H */ diff --git a/src/include/lib/dshash.h b/src/include/lib/dshash.h index 362871bfe0..220553c0d9 100644 --- a/src/include/lib/dshash.h +++ b/src/include/lib/dshash.h @@ -81,7 +81,7 @@ extern void dshash_delete_entry(dshash_table *hash_table, void *entry); extern void dshash_release_lock(dshash_table *hash_table, void *entry); /* Convenience hash and compare functions wrapping memcmp and tag_hash. */ -extern int dshash_memcmp(const void *a, const void *b, size_t size, void *arg); +extern int dshash_memcmp(const void *a, const void *b, size_t size, void *arg); extern dshash_hash dshash_memhash(const void *v, size_t size, void *arg); /* Debugging support. */ diff --git a/src/include/lib/stringinfo.h b/src/include/lib/stringinfo.h index 01b845db44..906ae5e980 100644 --- a/src/include/lib/stringinfo.h +++ b/src/include/lib/stringinfo.h @@ -149,7 +149,7 @@ extern void appendBinaryStringInfo(StringInfo str, * if necessary. Does not ensure a trailing null-byte exists. 
*/ extern void appendBinaryStringInfoNT(StringInfo str, - const char *data, int datalen); + const char *data, int datalen); /*------------------------ * enlargeStringInfo diff --git a/src/include/libpq/scram.h b/src/include/libpq/scram.h index 99560d3d2f..91f1e0f2c7 100644 --- a/src/include/libpq/scram.h +++ b/src/include/libpq/scram.h @@ -15,7 +15,7 @@ /* Name of SCRAM mechanisms per IANA */ #define SCRAM_SHA256_NAME "SCRAM-SHA-256" -#define SCRAM_SHA256_PLUS_NAME "SCRAM-SHA-256-PLUS" /* with channel binding */ +#define SCRAM_SHA256_PLUS_NAME "SCRAM-SHA-256-PLUS" /* with channel binding */ /* Channel binding types */ #define SCRAM_CHANNEL_BINDING_TLS_UNIQUE "tls-unique" @@ -27,8 +27,8 @@ /* Routines dedicated to authentication */ extern void *pg_be_scram_init(const char *username, const char *shadow_pass, - bool ssl_in_use, const char *tls_finished_message, - size_t tls_finished_len); + bool ssl_in_use, const char *tls_finished_message, + size_t tls_finished_len); extern int pg_be_scram_exchange(void *opaq, char *input, int inputlen, char **output, int *outputlen, char **logdetail); diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 9e68e65cc6..51df8e9741 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -660,7 +660,7 @@ typedef struct RelOptInfo struct RelOptInfo **part_rels; /* Array of RelOptInfos of partitions, * stored in the same order of bounds */ List **partexprs; /* Non-nullable partition key expressions. */ - List **nullable_partexprs; /* Nullable partition key expressions. */ + List **nullable_partexprs; /* Nullable partition key expressions. */ } RelOptInfo; /* diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h index aa130cdb84..41335b8e75 100644 --- a/src/include/utils/guc.h +++ b/src/include/utils/guc.h @@ -245,7 +245,7 @@ extern bool log_btree_build_stats; extern PGDLLIMPORT bool check_function_bodies; extern bool default_with_oids; -extern bool session_auth_is_superuser; +extern bool session_auth_is_superuser; extern int log_min_error_statement; extern int log_min_messages; diff --git a/src/include/utils/memutils.h b/src/include/utils/memutils.h index ff8e5d7d79..d177b0cc8d 100644 --- a/src/include/utils/memutils.h +++ b/src/include/utils/memutils.h @@ -157,8 +157,8 @@ extern MemoryContext SlabContextCreate(MemoryContext parent, /* generation.c */ extern MemoryContext GenerationContextCreate(MemoryContext parent, - const char *name, - Size blockSize); + const char *name, + Size blockSize); /* * Recommended default alloc parameters, suitable for "ordinary" contexts diff --git a/src/interfaces/ecpg/ecpglib/data.c b/src/interfaces/ecpg/ecpglib/data.c index 8f901a83fb..871e2011f3 100644 --- a/src/interfaces/ecpg/ecpglib/data.c +++ b/src/interfaces/ecpg/ecpglib/data.c @@ -55,7 +55,8 @@ garbage_left(enum ARRAY_TYPE isarray, char **scan_length, enum COMPAT_MODE compa if (INFORMIX_MODE(compat) && **scan_length == '.') { /* skip invalid characters */ - do { + do + { (*scan_length)++; } while (isdigit((unsigned char) **scan_length)); } diff --git a/src/interfaces/ecpg/ecpglib/execute.c b/src/interfaces/ecpg/ecpglib/execute.c index 7776813d1b..7d6d7d0754 100644 --- a/src/interfaces/ecpg/ecpglib/execute.c +++ b/src/interfaces/ecpg/ecpglib/execute.c @@ -1109,7 +1109,7 @@ ecpg_build_params(struct statement *stmt) struct variable *var; int desc_counter = 0; int position = 0; - const char *value; + const char *value; bool std_strings = false; /* Get standard_conforming_strings setting. 
*/ diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index f2403147ca..97db0b1faa 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -491,9 +491,9 @@ build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) free(cbind_input); } else if (state->ssl_in_use) - appendPQExpBuffer(&buf, "c=eSws"); /* base64 of "y,," */ + appendPQExpBuffer(&buf, "c=eSws"); /* base64 of "y,," */ else - appendPQExpBuffer(&buf, "c=biws"); /* base64 of "n,," */ + appendPQExpBuffer(&buf, "c=biws"); /* base64 of "n,," */ if (PQExpBufferDataBroken(buf)) goto oom_error; diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c index 9d394919ef..f54ad8e0cc 100644 --- a/src/interfaces/libpq/fe-auth.c +++ b/src/interfaces/libpq/fe-auth.c @@ -534,7 +534,7 @@ pg_SASL_init(PGconn *conn, int payloadlen) */ if (conn->ssl_in_use && strcmp(mechanism_buf.data, SCRAM_SHA256_PLUS_NAME) == 0) - selected_mechanism = SCRAM_SHA256_PLUS_NAME; + selected_mechanism = SCRAM_SHA256_PLUS_NAME; else if (strcmp(mechanism_buf.data, SCRAM_SHA256_NAME) == 0 && !selected_mechanism) selected_mechanism = SCRAM_SHA256_NAME; @@ -569,6 +569,7 @@ pg_SASL_init(PGconn *conn, int payloadlen) } #ifdef USE_SSL + /* * Get data for channel binding. */ @@ -581,8 +582,8 @@ pg_SASL_init(PGconn *conn, int payloadlen) #endif /* - * Initialize the SASL state information with all the information - * gathered during the initial exchange. + * Initialize the SASL state information with all the information gathered + * during the initial exchange. * * Note: Only tls-unique is supported for the moment. */ diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h index 1525a52742..3e92410eae 100644 --- a/src/interfaces/libpq/fe-auth.h +++ b/src/interfaces/libpq/fe-auth.h @@ -24,11 +24,11 @@ extern char *pg_fe_getauthname(PQExpBuffer errorMessage); /* Prototypes for functions in fe-auth-scram.c */ extern void *pg_fe_scram_init(const char *username, - const char *password, - bool ssl_in_use, - const char *sasl_mechanism, - char *tls_finished_message, - size_t tls_finished_len); + const char *password, + bool ssl_in_use, + const char *sasl_mechanism, + char *tls_finished_message, + size_t tls_finished_len); extern void pg_fe_scram_free(void *opaq); extern void pg_fe_scram_exchange(void *opaq, char *input, int inputlen, char **output, int *outputlen, diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index 7a6dd15460..cf6120eea9 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -6617,8 +6617,8 @@ exec_save_simple_expr(PLpgSQL_expr *expr, CachedPlan *cplan) if (IsA(tle_expr, Const)) break; /* Otherwise, it had better be a Param or an outer Var */ - Assert(IsA(tle_expr, Param) || (IsA(tle_expr, Var) && - ((Var *) tle_expr)->varno == OUTER_VAR)); + Assert(IsA(tle_expr, Param) ||(IsA(tle_expr, Var) && + ((Var *) tle_expr)->varno == OUTER_VAR)); /* Descend to the child node */ plan = plan->lefttree; } diff --git a/src/test/thread/thread_test.c b/src/test/thread/thread_test.c index 282a95872c..2501ca22b1 100644 --- a/src/test/thread/thread_test.c +++ b/src/test/thread/thread_test.c @@ -80,23 +80,23 @@ static volatile int errno2_set = 0; #ifndef HAVE_STRERROR_R static char *strerror_p1; static char *strerror_p2; -static int strerror_threadsafe = 0; +static int strerror_threadsafe = 0; #endif #if !defined(WIN32) && !defined(HAVE_GETPWUID_R) static struct passwd *passwd_p1; static struct passwd 
*passwd_p2; -static int getpwuid_threadsafe = 0; +static int getpwuid_threadsafe = 0; #endif #if !defined(HAVE_GETADDRINFO) && !defined(HAVE_GETHOSTBYNAME_R) static struct hostent *hostent_p1; static struct hostent *hostent_p2; static char myhostname[MAXHOSTNAMELEN]; -static int gethostbyname_threadsafe = 0; +static int gethostbyname_threadsafe = 0; #endif -static int platform_is_threadsafe = 1; +static int platform_is_threadsafe = 1; int main(int argc, char *argv[]) diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index b422050a92..3e84720038 100644 --- a/src/tools/pgindent/typedefs.list +++ b/src/tools/pgindent/typedefs.list @@ -34,6 +34,9 @@ AfterTriggerEventList AfterTriggerShared AfterTriggerSharedData AfterTriggersData +AfterTriggersQueryData +AfterTriggersTableData +AfterTriggersTransData Agg AggClauseCosts AggInfo @@ -125,7 +128,6 @@ ArrayMetaState ArrayParseState ArrayRef ArrayRefState -ArrayRemapInfo ArrayType AsyncQueueControl AsyncQueueEntry @@ -143,7 +145,6 @@ AutoVacOpts AutoVacuumShmemStruct AutoVacuumWorkItem AutoVacuumWorkItemType -AutovacWorkItems AuxProcType BF_ctx BF_key @@ -274,6 +275,8 @@ BuiltinScript BulkInsertState CACHESIGN CAC_state +CCFastEqualFN +CCHashFN CEOUC_WAIT_MODE CFuncHashTabEntry CHAR @@ -302,8 +305,6 @@ CatCache CatCacheHeader CatalogId CatalogIndexState -CCHashFN -CCFastEqualFN ChangeVarNodes_context CheckPoint CheckPointStmt @@ -344,6 +345,7 @@ ColumnCompareData ColumnDef ColumnIOData ColumnRef +ColumnsHashData CombinationGenerator ComboCidEntry ComboCidEntryData @@ -632,9 +634,9 @@ FieldStore File FileFdwExecutionState FileFdwPlanState -FileName FileNameMap FindSplitData +FixedParallelExecutorState FixedParallelState FixedParamState FlagMode @@ -824,6 +826,10 @@ GatherMergeState GatherPath GatherState Gene +GenerationBlock +GenerationChunk +GenerationContext +GenerationPointer GenericCosts GenericXLogState GeqoPrivateData @@ -941,6 +947,7 @@ HashPageStat HashPath HashScanOpaque HashScanOpaqueData +HashScanPosData HashScanPosItem HashSkewBucket HashState @@ -1021,13 +1028,13 @@ InsertStmt Instrumentation Int128AggState Int8TransTypeData +IntRBTreeNode InternalDefaultACL InternalGrant Interval IntoClause InvalidationChunk InvalidationListHeader -InvertedWalkNextStep IpcMemoryId IpcMemoryKey IpcSemaphoreId @@ -1217,6 +1224,7 @@ MergeJoinClause MergeJoinState MergePath MergeScanSelCache +MetaCommand MinMaxAggInfo MinMaxAggPath MinMaxExpr @@ -1454,13 +1462,18 @@ PLpgSQL_var PLpgSQL_variable PLwdatum PLword +PLyArrayToOb PLyCursorObject PLyDatumToOb PLyDatumToObFunc PLyExceptionEntry PLyExecutionContext +PLyObToArray PLyObToDatum PLyObToDatumFunc +PLyObToDomain +PLyObToScalar +PLyObToTransform PLyObToTuple PLyObject_AsString_t PLyPlanObject @@ -1470,12 +1483,11 @@ PLyProcedureKey PLyResultObject PLySRFState PLySavedArgs +PLyScalarToOb PLySubtransactionData PLySubtransactionObject +PLyTransformToOb PLyTupleToOb -PLyTypeInfo -PLyTypeInput -PLyTypeOutput PLyUnicode_FromStringAndSize_t PMINIDUMP_CALLBACK_INFORMATION PMINIDUMP_EXCEPTION_INFORMATION @@ -1565,12 +1577,13 @@ PartitionDescData PartitionDispatch PartitionDispatchData PartitionElem -PartitionKey PartitionHashBound +PartitionKey PartitionListValue PartitionRangeBound PartitionRangeDatum PartitionRangeDatumKind +PartitionScheme PartitionSpec PartitionedChildRelInfo PasswordType @@ -1670,6 +1683,7 @@ Pool PopulateArrayContext PopulateArrayState PopulateRecordCache +PopulateRecordsetCache PopulateRecordsetState Port Portal @@ -1781,7 +1795,6 @@ RangeBox RangeFunction 
RangeIOData RangeQueryClause -RangeRemapInfo RangeSubselect RangeTableFunc RangeTableFuncCol @@ -1794,6 +1807,7 @@ RangeVar RangeVarGetRelidCallback RawColumnDefault RawStmt +ReInitializeDSMForeignScan_function ReScanForeignScan_function ReadBufPtrType ReadBufferMode @@ -1805,8 +1819,6 @@ RecheckForeignScan_function RecordCacheEntry RecordCompareData RecordIOData -RecordRemapInfo -RecordTypmodMap RecoveryTargetAction RecoveryTargetType RectBox @@ -1866,6 +1878,7 @@ ReorderBufferTupleCidEnt ReorderBufferTupleCidKey ReorderTuple RepOriginId +ReparameterizeForeignPathByChild_function ReplaceVarsFromTargetList_context ReplaceVarsNoMatchOption ReplicaIdentityStmt @@ -2020,9 +2033,10 @@ SharedInvalRelmapMsg SharedInvalSmgrMsg SharedInvalSnapshotMsg SharedInvalidationMessage -SharedRecordTableKey SharedRecordTableEntry +SharedRecordTableKey SharedRecordTypmodRegistry +SharedSortInfo SharedTypmodTableEntry ShellTypeInfo ShippableCacheEntry @@ -2297,9 +2311,10 @@ TupleHashEntryData TupleHashIterator TupleHashTable TupleQueueReader -TupleRemapClass -TupleRemapInfo TupleTableSlot +TuplesortInstrumentation +TuplesortMethod +TuplesortSpaceType Tuplesortstate Tuplestorestate TwoPhaseCallback @@ -2329,7 +2344,6 @@ UChar UCharIterator UCollator UConverter -UEnumeration UErrorCode UINT ULARGE_INTEGER @@ -2353,6 +2367,7 @@ UserOpts VacAttrStats VacAttrStatsP VacuumParams +VacuumRelation VacuumStmt Value ValuesScan @@ -2547,6 +2562,7 @@ bgworker_main_type binaryheap binaryheap_comparator bitmapword +bits16 bits32 bits8 bool @@ -2561,7 +2577,6 @@ check_network_data check_object_relabel_type check_password_hook_type check_ungrouped_columns_context -chkpass chr clock_t cmpEntriesArg @@ -2612,7 +2627,9 @@ dsa_pointer dsa_segment_header dsa_segment_index dsa_segment_map +dshash_compare_function dshash_hash +dshash_hash_function dshash_parameters dshash_partition dshash_table From cdddd5d40b8a8b37db18adda3912e029756d1e36 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 22 Aug 2017 20:05:49 -0400 Subject: [PATCH 0631/1087] Add compiler hints to PLy_elog() Decorate PLy_elog() in a similar way as elog(), to give compilers and static analyzers hints in which cases it does not return. Reviewed-by: John Naylor --- src/pl/plpython/plpy_elog.c | 2 +- src/pl/plpython/plpy_elog.h | 28 +++++++++++++++++++++++++++- 2 files changed, 28 insertions(+), 2 deletions(-) diff --git a/src/pl/plpython/plpy_elog.c b/src/pl/plpython/plpy_elog.c index bb864899f6..e244104fed 100644 --- a/src/pl/plpython/plpy_elog.c +++ b/src/pl/plpython/plpy_elog.c @@ -44,7 +44,7 @@ static bool set_string_attr(PyObject *obj, char *attrname, char *str); * in the context. */ void -PLy_elog(int elevel, const char *fmt,...) +PLy_elog_impl(int elevel, const char *fmt,...) { char *xmsg; char *tbmsg; diff --git a/src/pl/plpython/plpy_elog.h b/src/pl/plpython/plpy_elog.h index e73177d130..e4b30c3cca 100644 --- a/src/pl/plpython/plpy_elog.h +++ b/src/pl/plpython/plpy_elog.h @@ -10,7 +10,33 @@ extern PyObject *PLy_exc_error; extern PyObject *PLy_exc_fatal; extern PyObject *PLy_exc_spi_error; -extern void PLy_elog(int elevel, const char *fmt,...) pg_attribute_printf(2, 3); +/* + * PLy_elog() + * + * See comments at elog() about the compiler hinting. + */ +#ifdef HAVE__VA_ARGS +#ifdef HAVE__BUILTIN_CONSTANT_P +#define PLy_elog(elevel, ...) \ + do { \ + PLy_elog_impl(elevel, __VA_ARGS__); \ + if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \ + pg_unreachable(); \ + } while(0) +#else /* !HAVE__BUILTIN_CONSTANT_P */ +#define PLy_elog(elevel, ...) 
\ do { \ const int elevel_ = (elevel); \ PLy_elog_impl(elevel_, __VA_ARGS__); \ if (elevel_ >= ERROR) \ pg_unreachable(); \ } while(0) #endif /* HAVE__BUILTIN_CONSTANT_P */ #else /* !HAVE__VA_ARGS */ #define PLy_elog PLy_elog_impl #endif /* HAVE__VA_ARGS */ + +extern void PLy_elog_impl(int elevel, const char *fmt,...) pg_attribute_printf(2, 3); extern void PLy_exception_set(PyObject *exc, const char *fmt,...) pg_attribute_printf(2, 3); From c7f5c58e1c6bb250ff7c24970a05e033201be409 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 22 Aug 2017 20:05:49 -0400 Subject: [PATCH 0632/1087] PL/Python: Fix remaining scan-build warnings Apparently, scan-build thinks that proc->is_setof can change during PLy_exec_function(). To make it clearer, save the value in a local variable. Also add an assertion to clear another warning. Reviewed-by: John Naylor --- src/pl/plpython/plpy_exec.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c index 9d2341a4a3..fe217c6a2c 100644 --- a/src/pl/plpython/plpy_exec.c +++ b/src/pl/plpython/plpy_exec.c @@ -57,6 +57,7 @@ static void PLy_abort_open_subtransactions(int save_subxact_level); Datum PLy_exec_function(FunctionCallInfo fcinfo, PLyProcedure *proc) { + bool is_setof = proc->is_setof; Datum rv; PyObject *volatile plargs = NULL; PyObject *volatile plrv = NULL; @@ -73,7 +74,7 @@ PLy_exec_function(FunctionCallInfo fcinfo, PLyProcedure *proc) PG_TRY(); { - if (proc->is_setof) + if (is_setof) { /* First Call setup */ if (SRF_IS_FIRSTCALL()) @@ -93,6 +94,7 @@ PLy_exec_function(FunctionCallInfo fcinfo, PLyProcedure *proc) funcctx = SRF_PERCALL_SETUP(); Assert(funcctx != NULL); srfstate = (PLySRFState *) funcctx->user_fctx; + Assert(srfstate != NULL); } if (srfstate == NULL || srfstate->iter == NULL) @@ -125,7 +127,7 @@ PLy_exec_function(FunctionCallInfo fcinfo, PLyProcedure *proc) * We stay in the SPI context while doing this, because PyIter_Next() * calls back into Python code which might contain SPI calls. */ - if (proc->is_setof) + if (is_setof) { if (srfstate->iter == NULL) { From 8d4e70a63bf8772bbf5db620ef1e14761fbd2044 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 29 Nov 2017 15:19:07 -0500 Subject: [PATCH 0633/1087] Add extensive tests for partition pruning. Currently, partition pruning happens via constraint exclusion, but there are pending patches to replace that with a different and hopefully faster mechanism. To be sure that we don't change behavior without realizing it, add extensive test coverage. Note that not all of these behaviors are optimal; in some cases, partitions are not pruned even though it would be safe to do so. These tests therefore serve to memorialize the current state rather than the ideal state. Patches that improve things can update the test results as appropriate. Amit Langote, adjusted by me. Review and testing of the larger patch set of which this is a part by Ashutosh Bapat, David Rowley, Dilip Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Amul Sul, and Kyotaro Horiguchi.
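[Editor's note: a short illustration of the mechanism this commit memorializes. A hedged sketch: the query and plan it refers to come from the test file added below, but the constraint shown is a paraphrase of the implicit partition constraint PostgreSQL derives for a list partition, not text from the patch.]

-- Each partition carries an implicit partition constraint; for a
-- partition created with FOR VALUES IN ('a', 'd'), it is roughly
--   CHECK ((a IS NOT NULL) AND (a = ANY (ARRAY['a', 'd'])))
-- Constraint exclusion prunes by proving, partition by partition, that
-- this constraint contradicts the query's WHERE clause, so
explain (costs off) select * from lp where a = 'a';
-- plans an Append over lp_ad alone, as the expected output below shows.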
Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp --- src/test/regress/expected/partition_prune.out | 1095 +++++++++++++++++ src/test/regress/parallel_schedule | 2 +- src/test/regress/serial_schedule | 1 + src/test/regress/sql/partition_prune.sql | 155 +++ 4 files changed, 1252 insertions(+), 1 deletion(-) create mode 100644 src/test/regress/expected/partition_prune.out create mode 100644 src/test/regress/sql/partition_prune.sql diff --git a/src/test/regress/expected/partition_prune.out b/src/test/regress/expected/partition_prune.out new file mode 100644 index 0000000000..aabb0240a9 --- /dev/null +++ b/src/test/regress/expected/partition_prune.out @@ -0,0 +1,1095 @@ +-- +-- Test partitioning planner code +-- +create table lp (a char) partition by list (a); +create table lp_default partition of lp default; +create table lp_ef partition of lp for values in ('e', 'f'); +create table lp_ad partition of lp for values in ('a', 'd'); +create table lp_bc partition of lp for values in ('b', 'c'); +create table lp_g partition of lp for values in ('g'); +create table lp_null partition of lp for values in (null); +explain (costs off) select * from lp; + QUERY PLAN +------------------------------ + Append + -> Seq Scan on lp_ad + -> Seq Scan on lp_bc + -> Seq Scan on lp_ef + -> Seq Scan on lp_g + -> Seq Scan on lp_null + -> Seq Scan on lp_default +(7 rows) + +explain (costs off) select * from lp where a > 'a' and a < 'd'; + QUERY PLAN +----------------------------------------------------------- + Append + -> Seq Scan on lp_bc + Filter: ((a > 'a'::bpchar) AND (a < 'd'::bpchar)) + -> Seq Scan on lp_default + Filter: ((a > 'a'::bpchar) AND (a < 'd'::bpchar)) +(5 rows) + +explain (costs off) select * from lp where a > 'a' and a <= 'd'; + QUERY PLAN +------------------------------------------------------------ + Append + -> Seq Scan on lp_ad + Filter: ((a > 'a'::bpchar) AND (a <= 'd'::bpchar)) + -> Seq Scan on lp_bc + Filter: ((a > 'a'::bpchar) AND (a <= 'd'::bpchar)) + -> Seq Scan on lp_default + Filter: ((a > 'a'::bpchar) AND (a <= 'd'::bpchar)) +(7 rows) + +explain (costs off) select * from lp where a = 'a'; + QUERY PLAN +----------------------------------- + Append + -> Seq Scan on lp_ad + Filter: (a = 'a'::bpchar) +(3 rows) + +explain (costs off) select * from lp where 'a' = a; /* commuted */ + QUERY PLAN +----------------------------------- + Append + -> Seq Scan on lp_ad + Filter: ('a'::bpchar = a) +(3 rows) + +explain (costs off) select * from lp where a is not null; + QUERY PLAN +--------------------------------- + Append + -> Seq Scan on lp_ad + Filter: (a IS NOT NULL) + -> Seq Scan on lp_bc + Filter: (a IS NOT NULL) + -> Seq Scan on lp_ef + Filter: (a IS NOT NULL) + -> Seq Scan on lp_g + Filter: (a IS NOT NULL) + -> Seq Scan on lp_default + Filter: (a IS NOT NULL) +(11 rows) + +explain (costs off) select * from lp where a is null; + QUERY PLAN +----------------------------- + Append + -> Seq Scan on lp_null + Filter: (a IS NULL) +(3 rows) + +explain (costs off) select * from lp where a = 'a' or a = 'c'; + QUERY PLAN +---------------------------------------------------------- + Append + -> Seq Scan on lp_ad + Filter: ((a = 'a'::bpchar) OR (a = 'c'::bpchar)) + -> Seq Scan on lp_bc + Filter: ((a = 'a'::bpchar) OR (a = 'c'::bpchar)) +(5 rows) + +explain (costs off) select * from lp where a is not null and (a = 'a' or a = 'c'); + QUERY PLAN +-------------------------------------------------------------------------------- + Append + -> Seq Scan on lp_ad + Filter: ((a IS 
NOT NULL) AND ((a = 'a'::bpchar) OR (a = 'c'::bpchar))) + -> Seq Scan on lp_bc + Filter: ((a IS NOT NULL) AND ((a = 'a'::bpchar) OR (a = 'c'::bpchar))) +(5 rows) + +explain (costs off) select * from lp where a <> 'g'; + QUERY PLAN +------------------------------------ + Append + -> Seq Scan on lp_ad + Filter: (a <> 'g'::bpchar) + -> Seq Scan on lp_bc + Filter: (a <> 'g'::bpchar) + -> Seq Scan on lp_ef + Filter: (a <> 'g'::bpchar) + -> Seq Scan on lp_default + Filter: (a <> 'g'::bpchar) +(9 rows) + +explain (costs off) select * from lp where a <> 'a' and a <> 'd'; + QUERY PLAN +------------------------------------------------------------- + Append + -> Seq Scan on lp_bc + Filter: ((a <> 'a'::bpchar) AND (a <> 'd'::bpchar)) + -> Seq Scan on lp_ef + Filter: ((a <> 'a'::bpchar) AND (a <> 'd'::bpchar)) + -> Seq Scan on lp_g + Filter: ((a <> 'a'::bpchar) AND (a <> 'd'::bpchar)) + -> Seq Scan on lp_default + Filter: ((a <> 'a'::bpchar) AND (a <> 'd'::bpchar)) +(9 rows) + +explain (costs off) select * from lp where a not in ('a', 'd'); + QUERY PLAN +------------------------------------------------ + Append + -> Seq Scan on lp_bc + Filter: (a <> ALL ('{a,d}'::bpchar[])) + -> Seq Scan on lp_ef + Filter: (a <> ALL ('{a,d}'::bpchar[])) + -> Seq Scan on lp_g + Filter: (a <> ALL ('{a,d}'::bpchar[])) + -> Seq Scan on lp_default + Filter: (a <> ALL ('{a,d}'::bpchar[])) +(9 rows) + +-- collation matches the partitioning collation, pruning works +create table coll_pruning (a text collate "C") partition by list (a); +create table coll_pruning_a partition of coll_pruning for values in ('a'); +create table coll_pruning_b partition of coll_pruning for values in ('b'); +create table coll_pruning_def partition of coll_pruning default; +explain (costs off) select * from coll_pruning where a collate "C" = 'a' collate "C"; + QUERY PLAN +--------------------------------------------- + Append + -> Seq Scan on coll_pruning_a + Filter: (a = 'a'::text COLLATE "C") +(3 rows) + +-- collation doesn't match the partitioning collation, no pruning occurs +explain (costs off) select * from coll_pruning where a collate "POSIX" = 'a' collate "POSIX"; + QUERY PLAN +--------------------------------------------------------- + Append + -> Seq Scan on coll_pruning_a + Filter: ((a)::text = 'a'::text COLLATE "POSIX") + -> Seq Scan on coll_pruning_b + Filter: ((a)::text = 'a'::text COLLATE "POSIX") + -> Seq Scan on coll_pruning_def + Filter: ((a)::text = 'a'::text COLLATE "POSIX") +(7 rows) + +create table rlp (a int, b varchar) partition by range (a); +create table rlp_default partition of rlp default partition by list (a); +create table rlp_default_default partition of rlp_default default; +create table rlp_default_10 partition of rlp_default for values in (10); +create table rlp_default_30 partition of rlp_default for values in (30); +create table rlp_default_null partition of rlp_default for values in (null); +create table rlp1 partition of rlp for values from (minvalue) to (1); +create table rlp2 partition of rlp for values from (1) to (10); +create table rlp3 (b varchar, a int) partition by list (b varchar_ops); +create table rlp3_default partition of rlp3 default; +create table rlp3abcd partition of rlp3 for values in ('ab', 'cd'); +create table rlp3efgh partition of rlp3 for values in ('ef', 'gh'); +create table rlp3nullxy partition of rlp3 for values in (null, 'xy'); +alter table rlp attach partition rlp3 for values from (15) to (20); +create table rlp4 partition of rlp for values from (20) to (30) partition by range (a); 
+create table rlp4_default partition of rlp4 default; +create table rlp4_1 partition of rlp4 for values from (20) to (25); +create table rlp4_2 partition of rlp4 for values from (25) to (29); +create table rlp5 partition of rlp for values from (31) to (maxvalue) partition by range (a); +create table rlp5_default partition of rlp5 default; +create table rlp5_1 partition of rlp5 for values from (31) to (40); +explain (costs off) select * from rlp where a < 1; + QUERY PLAN +------------------------- + Append + -> Seq Scan on rlp1 + Filter: (a < 1) +(3 rows) + +explain (costs off) select * from rlp where 1 > a; /* commuted */ + QUERY PLAN +------------------------- + Append + -> Seq Scan on rlp1 + Filter: (1 > a) +(3 rows) + +explain (costs off) select * from rlp where a <= 1; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp1 + Filter: (a <= 1) + -> Seq Scan on rlp2 + Filter: (a <= 1) + -> Seq Scan on rlp_default_default + Filter: (a <= 1) +(7 rows) + +explain (costs off) select * from rlp where a = 1; + QUERY PLAN +------------------------- + Append + -> Seq Scan on rlp2 + Filter: (a = 1) +(3 rows) + +explain (costs off) select * from rlp where a = 1::bigint; /* same as above */ + QUERY PLAN +----------------------------------- + Append + -> Seq Scan on rlp2 + Filter: (a = '1'::bigint) +(3 rows) + +explain (costs off) select * from rlp where a = 1::numeric; /* no pruning */ + QUERY PLAN +----------------------------------------------- + Append + -> Seq Scan on rlp1 + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp2 + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp3abcd + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp3efgh + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp3nullxy + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp3_default + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp4_1 + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp4_2 + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp4_default + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp5_1 + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp5_default + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp_default_10 + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp_default_30 + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp_default_null + Filter: ((a)::numeric = '1'::numeric) + -> Seq Scan on rlp_default_default + Filter: ((a)::numeric = '1'::numeric) +(31 rows) + +explain (costs off) select * from rlp where a <= 10; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp1 + Filter: (a <= 10) + -> Seq Scan on rlp2 + Filter: (a <= 10) + -> Seq Scan on rlp_default_10 + Filter: (a <= 10) + -> Seq Scan on rlp_default_default + Filter: (a <= 10) +(9 rows) + +explain (costs off) select * from rlp where a > 10; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp3abcd + Filter: (a > 10) + -> Seq Scan on rlp3efgh + Filter: (a > 10) + -> Seq Scan on rlp3nullxy + Filter: (a > 10) + -> Seq Scan on rlp3_default + Filter: (a > 10) + -> Seq Scan on rlp4_1 + Filter: (a > 10) + -> Seq Scan on rlp4_2 + Filter: (a > 10) + -> Seq Scan on rlp4_default + Filter: (a > 10) + -> Seq Scan on rlp5_1 + Filter: (a > 10) + -> Seq Scan on rlp5_default + Filter: (a > 10) + -> Seq Scan on rlp_default_30 + Filter: (a > 10) + -> Seq Scan on rlp_default_default + Filter: (a > 10) +(23 rows) + +explain (costs off) select * from rlp where a < 
15; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp1 + Filter: (a < 15) + -> Seq Scan on rlp2 + Filter: (a < 15) + -> Seq Scan on rlp_default_10 + Filter: (a < 15) + -> Seq Scan on rlp_default_default + Filter: (a < 15) +(9 rows) + +explain (costs off) select * from rlp where a <= 15; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp1 + Filter: (a <= 15) + -> Seq Scan on rlp2 + Filter: (a <= 15) + -> Seq Scan on rlp3abcd + Filter: (a <= 15) + -> Seq Scan on rlp3efgh + Filter: (a <= 15) + -> Seq Scan on rlp3nullxy + Filter: (a <= 15) + -> Seq Scan on rlp3_default + Filter: (a <= 15) + -> Seq Scan on rlp_default_10 + Filter: (a <= 15) + -> Seq Scan on rlp_default_default + Filter: (a <= 15) +(17 rows) + +explain (costs off) select * from rlp where a > 15 and b = 'ab'; + QUERY PLAN +--------------------------------------------------------- + Append + -> Seq Scan on rlp3abcd + Filter: ((a > 15) AND ((b)::text = 'ab'::text)) + -> Seq Scan on rlp4_1 + Filter: ((a > 15) AND ((b)::text = 'ab'::text)) + -> Seq Scan on rlp4_2 + Filter: ((a > 15) AND ((b)::text = 'ab'::text)) + -> Seq Scan on rlp4_default + Filter: ((a > 15) AND ((b)::text = 'ab'::text)) + -> Seq Scan on rlp5_1 + Filter: ((a > 15) AND ((b)::text = 'ab'::text)) + -> Seq Scan on rlp5_default + Filter: ((a > 15) AND ((b)::text = 'ab'::text)) + -> Seq Scan on rlp_default_30 + Filter: ((a > 15) AND ((b)::text = 'ab'::text)) + -> Seq Scan on rlp_default_default + Filter: ((a > 15) AND ((b)::text = 'ab'::text)) +(17 rows) + +explain (costs off) select * from rlp where a = 16; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on rlp3abcd + Filter: (a = 16) + -> Seq Scan on rlp3efgh + Filter: (a = 16) + -> Seq Scan on rlp3nullxy + Filter: (a = 16) + -> Seq Scan on rlp3_default + Filter: (a = 16) +(9 rows) + +explain (costs off) select * from rlp where a = 16 and b in ('not', 'in', 'here'); + QUERY PLAN +---------------------------------------------------------------------------- + Append + -> Seq Scan on rlp3_default + Filter: ((a = 16) AND ((b)::text = ANY ('{not,in,here}'::text[]))) +(3 rows) + +explain (costs off) select * from rlp where a = 16 and b < 'ab'; + QUERY PLAN +--------------------------------------------------------- + Append + -> Seq Scan on rlp3_default + Filter: (((b)::text < 'ab'::text) AND (a = 16)) +(3 rows) + +explain (costs off) select * from rlp where a = 16 and b <= 'ab'; + QUERY PLAN +---------------------------------------------------------- + Append + -> Seq Scan on rlp3abcd + Filter: (((b)::text <= 'ab'::text) AND (a = 16)) + -> Seq Scan on rlp3_default + Filter: (((b)::text <= 'ab'::text) AND (a = 16)) +(5 rows) + +explain (costs off) select * from rlp where a = 16 and b is null; + QUERY PLAN +-------------------------------------------- + Append + -> Seq Scan on rlp3nullxy + Filter: ((b IS NULL) AND (a = 16)) +(3 rows) + +explain (costs off) select * from rlp where a = 16 and b is not null; + QUERY PLAN +------------------------------------------------ + Append + -> Seq Scan on rlp3abcd + Filter: ((b IS NOT NULL) AND (a = 16)) + -> Seq Scan on rlp3efgh + Filter: ((b IS NOT NULL) AND (a = 16)) + -> Seq Scan on rlp3nullxy + Filter: ((b IS NOT NULL) AND (a = 16)) + -> Seq Scan on rlp3_default + Filter: ((b IS NOT NULL) AND (a = 16)) +(9 rows) + +explain (costs off) select * from rlp where a is null; + QUERY PLAN +------------------------------------ + Append + -> Seq Scan on rlp_default_null + Filter: (a IS NULL) +(3 rows) + 
+explain (costs off) select * from rlp where a is not null; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp1 + Filter: (a IS NOT NULL) + -> Seq Scan on rlp2 + Filter: (a IS NOT NULL) + -> Seq Scan on rlp3abcd + Filter: (a IS NOT NULL) + -> Seq Scan on rlp3efgh + Filter: (a IS NOT NULL) + -> Seq Scan on rlp3nullxy + Filter: (a IS NOT NULL) + -> Seq Scan on rlp3_default + Filter: (a IS NOT NULL) + -> Seq Scan on rlp4_1 + Filter: (a IS NOT NULL) + -> Seq Scan on rlp4_2 + Filter: (a IS NOT NULL) + -> Seq Scan on rlp4_default + Filter: (a IS NOT NULL) + -> Seq Scan on rlp5_1 + Filter: (a IS NOT NULL) + -> Seq Scan on rlp5_default + Filter: (a IS NOT NULL) + -> Seq Scan on rlp_default_10 + Filter: (a IS NOT NULL) + -> Seq Scan on rlp_default_30 + Filter: (a IS NOT NULL) + -> Seq Scan on rlp_default_default + Filter: (a IS NOT NULL) +(29 rows) + +explain (costs off) select * from rlp where a > 30; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp5_1 + Filter: (a > 30) + -> Seq Scan on rlp5_default + Filter: (a > 30) + -> Seq Scan on rlp_default_default + Filter: (a > 30) +(7 rows) + +explain (costs off) select * from rlp where a = 30; /* only default is scanned */ + QUERY PLAN +---------------------------------- + Append + -> Seq Scan on rlp_default_30 + Filter: (a = 30) +(3 rows) + +explain (costs off) select * from rlp where a <= 31; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp1 + Filter: (a <= 31) + -> Seq Scan on rlp2 + Filter: (a <= 31) + -> Seq Scan on rlp3abcd + Filter: (a <= 31) + -> Seq Scan on rlp3efgh + Filter: (a <= 31) + -> Seq Scan on rlp3nullxy + Filter: (a <= 31) + -> Seq Scan on rlp3_default + Filter: (a <= 31) + -> Seq Scan on rlp4_1 + Filter: (a <= 31) + -> Seq Scan on rlp4_2 + Filter: (a <= 31) + -> Seq Scan on rlp4_default + Filter: (a <= 31) + -> Seq Scan on rlp5_1 + Filter: (a <= 31) + -> Seq Scan on rlp5_default + Filter: (a <= 31) + -> Seq Scan on rlp_default_10 + Filter: (a <= 31) + -> Seq Scan on rlp_default_30 + Filter: (a <= 31) + -> Seq Scan on rlp_default_default + Filter: (a <= 31) +(29 rows) + +explain (costs off) select * from rlp where a = 1 or a = 7; + QUERY PLAN +-------------------------------------- + Append + -> Seq Scan on rlp2 + Filter: ((a = 1) OR (a = 7)) +(3 rows) + +explain (costs off) select * from rlp where a = 1 or b = 'ab'; + QUERY PLAN +------------------------------------------------------- + Append + -> Seq Scan on rlp1 + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp2 + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp3abcd + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp4_1 + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp4_2 + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp4_default + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp5_1 + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp5_default + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp_default_10 + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp_default_30 + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp_default_null + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) + -> Seq Scan on rlp_default_default + Filter: ((a = 1) OR ((b)::text = 'ab'::text)) +(25 rows) + +explain (costs off) select * from rlp where a > 20 and a < 27; + QUERY PLAN +----------------------------------------- + Append + -> 
Seq Scan on rlp4_1 + Filter: ((a > 20) AND (a < 27)) + -> Seq Scan on rlp4_2 + Filter: ((a > 20) AND (a < 27)) + -> Seq Scan on rlp4_default + Filter: ((a > 20) AND (a < 27)) +(7 rows) + +explain (costs off) select * from rlp where a = 29; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on rlp4_default + Filter: (a = 29) +(3 rows) + +explain (costs off) select * from rlp where a >= 29; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on rlp4_default + Filter: (a >= 29) + -> Seq Scan on rlp5_1 + Filter: (a >= 29) + -> Seq Scan on rlp5_default + Filter: (a >= 29) + -> Seq Scan on rlp_default_30 + Filter: (a >= 29) + -> Seq Scan on rlp_default_default + Filter: (a >= 29) +(11 rows) + +-- redundant clauses are eliminated +explain (costs off) select * from rlp where a > 1 and a = 10; /* only default */ + QUERY PLAN +---------------------------------------- + Append + -> Seq Scan on rlp_default_10 + Filter: ((a > 1) AND (a = 10)) +(3 rows) + +explain (costs off) select * from rlp where a > 1 and a >=15; /* rlp3 onwards, including default */ + QUERY PLAN +----------------------------------------- + Append + -> Seq Scan on rlp3abcd + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp3efgh + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp3nullxy + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp3_default + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp4_1 + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp4_2 + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp4_default + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp5_1 + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp5_default + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp_default_30 + Filter: ((a > 1) AND (a >= 15)) + -> Seq Scan on rlp_default_default + Filter: ((a > 1) AND (a >= 15)) +(23 rows) + +explain (costs off) select * from rlp where a = 1 and a = 3; /* empty */ + QUERY PLAN +-------------------------- + Result + One-Time Filter: false +(2 rows) + +explain (costs off) select * from rlp where (a = 1 and a = 3) or (a > 1 and a = 15); + QUERY PLAN +------------------------------------------------------------------- + Append + -> Seq Scan on rlp2 + Filter: (((a = 1) AND (a = 3)) OR ((a > 1) AND (a = 15))) + -> Seq Scan on rlp3abcd + Filter: (((a = 1) AND (a = 3)) OR ((a > 1) AND (a = 15))) + -> Seq Scan on rlp3efgh + Filter: (((a = 1) AND (a = 3)) OR ((a > 1) AND (a = 15))) + -> Seq Scan on rlp3nullxy + Filter: (((a = 1) AND (a = 3)) OR ((a > 1) AND (a = 15))) + -> Seq Scan on rlp3_default + Filter: (((a = 1) AND (a = 3)) OR ((a > 1) AND (a = 15))) +(11 rows) + +-- multi-column keys +create table mc3p (a int, b int, c int) partition by range (a, abs(b), c); +create table mc3p_default partition of mc3p default; +create table mc3p0 partition of mc3p for values from (minvalue, minvalue, minvalue) to (1, 1, 1); +create table mc3p1 partition of mc3p for values from (1, 1, 1) to (10, 5, 10); +create table mc3p2 partition of mc3p for values from (10, 5, 10) to (10, 10, 10); +create table mc3p3 partition of mc3p for values from (10, 10, 10) to (10, 10, 20); +create table mc3p4 partition of mc3p for values from (10, 10, 20) to (10, maxvalue, maxvalue); +create table mc3p5 partition of mc3p for values from (11, 1, 1) to (20, 10, 10); +create table mc3p6 partition of mc3p for values from (20, 10, 10) to (20, 20, 20); +create table mc3p7 partition of mc3p for values from (20, 20, 20) to (maxvalue, maxvalue, maxvalue); +explain (costs off) select * from mc3p where 
a = 1; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: (a = 1) + -> Seq Scan on mc3p1 + Filter: (a = 1) + -> Seq Scan on mc3p_default + Filter: (a = 1) +(7 rows) + +explain (costs off) select * from mc3p where a = 1 and abs(b) < 1; + QUERY PLAN +-------------------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: ((a = 1) AND (abs(b) < 1)) + -> Seq Scan on mc3p_default + Filter: ((a = 1) AND (abs(b) < 1)) +(5 rows) + +explain (costs off) select * from mc3p where a = 1 and abs(b) = 1; + QUERY PLAN +-------------------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: ((a = 1) AND (abs(b) = 1)) + -> Seq Scan on mc3p1 + Filter: ((a = 1) AND (abs(b) = 1)) + -> Seq Scan on mc3p_default + Filter: ((a = 1) AND (abs(b) = 1)) +(7 rows) + +explain (costs off) select * from mc3p where a = 1 and abs(b) = 1 and c < 8; + QUERY PLAN +-------------------------------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: ((c < 8) AND (a = 1) AND (abs(b) = 1)) + -> Seq Scan on mc3p1 + Filter: ((c < 8) AND (a = 1) AND (abs(b) = 1)) + -> Seq Scan on mc3p_default + Filter: ((c < 8) AND (a = 1) AND (abs(b) = 1)) +(7 rows) + +explain (costs off) select * from mc3p where a = 10 and abs(b) between 5 and 35; + QUERY PLAN +----------------------------------------------------------------- + Append + -> Seq Scan on mc3p1 + Filter: ((a = 10) AND (abs(b) >= 5) AND (abs(b) <= 35)) + -> Seq Scan on mc3p2 + Filter: ((a = 10) AND (abs(b) >= 5) AND (abs(b) <= 35)) + -> Seq Scan on mc3p3 + Filter: ((a = 10) AND (abs(b) >= 5) AND (abs(b) <= 35)) + -> Seq Scan on mc3p4 + Filter: ((a = 10) AND (abs(b) >= 5) AND (abs(b) <= 35)) + -> Seq Scan on mc3p_default + Filter: ((a = 10) AND (abs(b) >= 5) AND (abs(b) <= 35)) +(11 rows) + +explain (costs off) select * from mc3p where a > 10; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on mc3p5 + Filter: (a > 10) + -> Seq Scan on mc3p6 + Filter: (a > 10) + -> Seq Scan on mc3p7 + Filter: (a > 10) + -> Seq Scan on mc3p_default + Filter: (a > 10) +(9 rows) + +explain (costs off) select * from mc3p where a >= 10; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on mc3p1 + Filter: (a >= 10) + -> Seq Scan on mc3p2 + Filter: (a >= 10) + -> Seq Scan on mc3p3 + Filter: (a >= 10) + -> Seq Scan on mc3p4 + Filter: (a >= 10) + -> Seq Scan on mc3p5 + Filter: (a >= 10) + -> Seq Scan on mc3p6 + Filter: (a >= 10) + -> Seq Scan on mc3p7 + Filter: (a >= 10) + -> Seq Scan on mc3p_default + Filter: (a >= 10) +(17 rows) + +explain (costs off) select * from mc3p where a < 10; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: (a < 10) + -> Seq Scan on mc3p1 + Filter: (a < 10) + -> Seq Scan on mc3p_default + Filter: (a < 10) +(7 rows) + +explain (costs off) select * from mc3p where a <= 10 and abs(b) < 10; + QUERY PLAN +----------------------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: ((a <= 10) AND (abs(b) < 10)) + -> Seq Scan on mc3p1 + Filter: ((a <= 10) AND (abs(b) < 10)) + -> Seq Scan on mc3p2 + Filter: ((a <= 10) AND (abs(b) < 10)) + -> Seq Scan on mc3p_default + Filter: ((a <= 10) AND (abs(b) < 10)) +(9 rows) + +explain (costs off) select * from mc3p where a = 11 and abs(b) = 0; + QUERY PLAN +--------------------------------------------- + Append + -> Seq Scan on mc3p_default + Filter: ((a = 11) AND (abs(b) = 0)) +(3 rows) + +explain (costs off) select * from mc3p where a = 20 and abs(b) = 10 and c = 100; + QUERY 
PLAN +------------------------------------------------------------ + Append + -> Seq Scan on mc3p6 + Filter: ((a = 20) AND (c = 100) AND (abs(b) = 10)) +(3 rows) + +explain (costs off) select * from mc3p where a > 20; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on mc3p7 + Filter: (a > 20) + -> Seq Scan on mc3p_default + Filter: (a > 20) +(5 rows) + +explain (costs off) select * from mc3p where a >= 20; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on mc3p5 + Filter: (a >= 20) + -> Seq Scan on mc3p6 + Filter: (a >= 20) + -> Seq Scan on mc3p7 + Filter: (a >= 20) + -> Seq Scan on mc3p_default + Filter: (a >= 20) +(9 rows) + +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1 and c = 1) or (a = 10 and abs(b) = 5 and c = 10) or (a > 11 and a < 20); + QUERY PLAN +--------------------------------------------------------------------------------------------------------------------------------- + Append + -> Seq Scan on mc3p1 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20))) + -> Seq Scan on mc3p2 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20))) + -> Seq Scan on mc3p5 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20))) + -> Seq Scan on mc3p_default + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20))) +(9 rows) + +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1 and c = 1) or (a = 10 and abs(b) = 5 and c = 10) or (a > 11 and a < 20) or a < 1; + QUERY PLAN +-------------------------------------------------------------------------------------------------------------------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1)) + -> Seq Scan on mc3p1 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1)) + -> Seq Scan on mc3p2 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1)) + -> Seq Scan on mc3p5 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1)) + -> Seq Scan on mc3p_default + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1)) +(11 rows) + +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1 and c = 1) or (a = 10 and abs(b) = 5 and c = 10) or (a > 11 and a < 20) or a < 1 or a = 1; + QUERY PLAN +------------------------------------------------------------------------------------------------------------------------------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1) OR (a = 1)) + -> Seq Scan on mc3p1 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1) OR (a = 1)) + -> Seq Scan on mc3p2 + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1) OR (a = 1)) + -> Seq Scan on mc3p5 + Filter: 
(((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1) OR (a = 1)) + -> Seq Scan on mc3p_default + Filter: (((a = 1) AND (abs(b) = 1) AND (c = 1)) OR ((a = 10) AND (abs(b) = 5) AND (c = 10)) OR ((a > 11) AND (a < 20)) OR (a < 1) OR (a = 1)) +(11 rows) + +explain (costs off) select * from mc3p where a = 1 or abs(b) = 1 or c = 1; + QUERY PLAN +------------------------------------------------------ + Append + -> Seq Scan on mc3p0 + Filter: ((a = 1) OR (abs(b) = 1) OR (c = 1)) + -> Seq Scan on mc3p1 + Filter: ((a = 1) OR (abs(b) = 1) OR (c = 1)) + -> Seq Scan on mc3p2 + Filter: ((a = 1) OR (abs(b) = 1) OR (c = 1)) + -> Seq Scan on mc3p4 + Filter: ((a = 1) OR (abs(b) = 1) OR (c = 1)) + -> Seq Scan on mc3p5 + Filter: ((a = 1) OR (abs(b) = 1) OR (c = 1)) + -> Seq Scan on mc3p6 + Filter: ((a = 1) OR (abs(b) = 1) OR (c = 1)) + -> Seq Scan on mc3p7 + Filter: ((a = 1) OR (abs(b) = 1) OR (c = 1)) + -> Seq Scan on mc3p_default + Filter: ((a = 1) OR (abs(b) = 1) OR (c = 1)) +(17 rows) + +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1) or (a = 10 and abs(b) = 10); + QUERY PLAN +------------------------------------------------------------------------------ + Append + -> Seq Scan on mc3p0 + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 10))) + -> Seq Scan on mc3p1 + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 10))) + -> Seq Scan on mc3p2 + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 10))) + -> Seq Scan on mc3p3 + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 10))) + -> Seq Scan on mc3p4 + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 10))) + -> Seq Scan on mc3p_default + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 10))) +(13 rows) + +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1) or (a = 10 and abs(b) = 9); + QUERY PLAN +----------------------------------------------------------------------------- + Append + -> Seq Scan on mc3p0 + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 9))) + -> Seq Scan on mc3p1 + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 9))) + -> Seq Scan on mc3p2 + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 9))) + -> Seq Scan on mc3p_default + Filter: (((a = 1) AND (abs(b) = 1)) OR ((a = 10) AND (abs(b) = 9))) +(9 rows) + +-- a simpler multi-column keys case +create table mc2p (a int, b int) partition by range (a, b); +create table mc2p_default partition of mc2p default; +create table mc2p0 partition of mc2p for values from (minvalue, minvalue) to (1, minvalue); +create table mc2p1 partition of mc2p for values from (1, minvalue) to (1, 1); +create table mc2p2 partition of mc2p for values from (1, 1) to (2, minvalue); +create table mc2p3 partition of mc2p for values from (2, minvalue) to (2, 1); +create table mc2p4 partition of mc2p for values from (2, 1) to (2, maxvalue); +create table mc2p5 partition of mc2p for values from (2, maxvalue) to (maxvalue, maxvalue); +explain (costs off) select * from mc2p where a < 2; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on mc2p0 + Filter: (a < 2) + -> Seq Scan on mc2p1 + Filter: (a < 2) + -> Seq Scan on mc2p2 + Filter: (a < 2) + -> Seq Scan on mc2p_default + Filter: (a < 2) +(9 rows) + +explain (costs off) select * from mc2p where a = 2 and b < 1; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on mc2p3 + Filter: ((b < 1) AND (a = 2)) 
+(3 rows) + +explain (costs off) select * from mc2p where a > 1; + QUERY PLAN +-------------------------------- + Append + -> Seq Scan on mc2p2 + Filter: (a > 1) + -> Seq Scan on mc2p3 + Filter: (a > 1) + -> Seq Scan on mc2p4 + Filter: (a > 1) + -> Seq Scan on mc2p5 + Filter: (a > 1) + -> Seq Scan on mc2p_default + Filter: (a > 1) +(11 rows) + +explain (costs off) select * from mc2p where a = 1 and b > 1; + QUERY PLAN +--------------------------------------- + Append + -> Seq Scan on mc2p2 + Filter: ((b > 1) AND (a = 1)) +(3 rows) + +-- boolean partitioning +create table boolpart (a bool) partition by list (a); +create table boolpart_default partition of boolpart default; +create table boolpart_t partition of boolpart for values in ('true'); +create table boolpart_f partition of boolpart for values in ('false'); +explain (costs off) select * from boolpart where a in (true, false); + QUERY PLAN +------------------------------------------------ + Append + -> Seq Scan on boolpart_f + Filter: (a = ANY ('{t,f}'::boolean[])) + -> Seq Scan on boolpart_t + Filter: (a = ANY ('{t,f}'::boolean[])) +(5 rows) + +explain (costs off) select * from boolpart where a = false; + QUERY PLAN +------------------------------------ + Append + -> Seq Scan on boolpart_f + Filter: (NOT a) + -> Seq Scan on boolpart_t + Filter: (NOT a) + -> Seq Scan on boolpart_default + Filter: (NOT a) +(7 rows) + +explain (costs off) select * from boolpart where not a = false; + QUERY PLAN +------------------------------------ + Append + -> Seq Scan on boolpart_f + Filter: a + -> Seq Scan on boolpart_t + Filter: a + -> Seq Scan on boolpart_default + Filter: a +(7 rows) + +explain (costs off) select * from boolpart where a is true or a is not true; + QUERY PLAN +-------------------------------------------------- + Append + -> Seq Scan on boolpart_f + Filter: ((a IS TRUE) OR (a IS NOT TRUE)) + -> Seq Scan on boolpart_t + Filter: ((a IS TRUE) OR (a IS NOT TRUE)) + -> Seq Scan on boolpart_default + Filter: ((a IS TRUE) OR (a IS NOT TRUE)) +(7 rows) + +explain (costs off) select * from boolpart where a is not true; + QUERY PLAN +------------------------------------ + Append + -> Seq Scan on boolpart_f + Filter: (a IS NOT TRUE) + -> Seq Scan on boolpart_t + Filter: (a IS NOT TRUE) + -> Seq Scan on boolpart_default + Filter: (a IS NOT TRUE) +(7 rows) + +explain (costs off) select * from boolpart where a is not true and a is not false; + QUERY PLAN +-------------------------------------------------------- + Append + -> Seq Scan on boolpart_f + Filter: ((a IS NOT TRUE) AND (a IS NOT FALSE)) + -> Seq Scan on boolpart_t + Filter: ((a IS NOT TRUE) AND (a IS NOT FALSE)) + -> Seq Scan on boolpart_default + Filter: ((a IS NOT TRUE) AND (a IS NOT FALSE)) +(7 rows) + +explain (costs off) select * from boolpart where a is unknown; + QUERY PLAN +------------------------------------ + Append + -> Seq Scan on boolpart_f + Filter: (a IS UNKNOWN) + -> Seq Scan on boolpart_t + Filter: (a IS UNKNOWN) + -> Seq Scan on boolpart_default + Filter: (a IS UNKNOWN) +(7 rows) + +explain (costs off) select * from boolpart where a is not unknown; + QUERY PLAN +------------------------------------ + Append + -> Seq Scan on boolpart_f + Filter: (a IS NOT UNKNOWN) + -> Seq Scan on boolpart_t + Filter: (a IS NOT UNKNOWN) + -> Seq Scan on boolpart_default + Filter: (a IS NOT UNKNOWN) +(7 rows) + +drop table lp, coll_pruning, rlp, mc3p, mc2p, boolpart; diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index 1a3ac4c1f9..892a214f2f 100644 
--- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -116,7 +116,7 @@ test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid c # ---------- # Another group of parallel tests # ---------- -test: identity partition_join reloptions hash_part +test: identity partition_join partition_prune reloptions hash_part # event triggers cannot run concurrently with any test that runs DDL test: event_trigger diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index a205e5d05c..15a1f861a9 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -180,6 +180,7 @@ test: with test: xml test: identity test: partition_join +test: partition_prune test: reloptions test: hash_part test: event_trigger diff --git a/src/test/regress/sql/partition_prune.sql b/src/test/regress/sql/partition_prune.sql new file mode 100644 index 0000000000..514f8e5ce1 --- /dev/null +++ b/src/test/regress/sql/partition_prune.sql @@ -0,0 +1,155 @@ +-- +-- Test partitioning planner code +-- +create table lp (a char) partition by list (a); +create table lp_default partition of lp default; +create table lp_ef partition of lp for values in ('e', 'f'); +create table lp_ad partition of lp for values in ('a', 'd'); +create table lp_bc partition of lp for values in ('b', 'c'); +create table lp_g partition of lp for values in ('g'); +create table lp_null partition of lp for values in (null); +explain (costs off) select * from lp; +explain (costs off) select * from lp where a > 'a' and a < 'd'; +explain (costs off) select * from lp where a > 'a' and a <= 'd'; +explain (costs off) select * from lp where a = 'a'; +explain (costs off) select * from lp where 'a' = a; /* commuted */ +explain (costs off) select * from lp where a is not null; +explain (costs off) select * from lp where a is null; +explain (costs off) select * from lp where a = 'a' or a = 'c'; +explain (costs off) select * from lp where a is not null and (a = 'a' or a = 'c'); +explain (costs off) select * from lp where a <> 'g'; +explain (costs off) select * from lp where a <> 'a' and a <> 'd'; +explain (costs off) select * from lp where a not in ('a', 'd'); + +-- collation matches the partitioning collation, pruning works +create table coll_pruning (a text collate "C") partition by list (a); +create table coll_pruning_a partition of coll_pruning for values in ('a'); +create table coll_pruning_b partition of coll_pruning for values in ('b'); +create table coll_pruning_def partition of coll_pruning default; +explain (costs off) select * from coll_pruning where a collate "C" = 'a' collate "C"; +-- collation doesn't match the partitioning collation, no pruning occurs +explain (costs off) select * from coll_pruning where a collate "POSIX" = 'a' collate "POSIX"; + +create table rlp (a int, b varchar) partition by range (a); +create table rlp_default partition of rlp default partition by list (a); +create table rlp_default_default partition of rlp_default default; +create table rlp_default_10 partition of rlp_default for values in (10); +create table rlp_default_30 partition of rlp_default for values in (30); +create table rlp_default_null partition of rlp_default for values in (null); +create table rlp1 partition of rlp for values from (minvalue) to (1); +create table rlp2 partition of rlp for values from (1) to (10); + +create table rlp3 (b varchar, a int) partition by list (b varchar_ops); +create table rlp3_default partition of rlp3 default; +create table rlp3abcd partition of 
rlp3 for values in ('ab', 'cd'); +create table rlp3efgh partition of rlp3 for values in ('ef', 'gh'); +create table rlp3nullxy partition of rlp3 for values in (null, 'xy'); +alter table rlp attach partition rlp3 for values from (15) to (20); + +create table rlp4 partition of rlp for values from (20) to (30) partition by range (a); +create table rlp4_default partition of rlp4 default; +create table rlp4_1 partition of rlp4 for values from (20) to (25); +create table rlp4_2 partition of rlp4 for values from (25) to (29); + +create table rlp5 partition of rlp for values from (31) to (maxvalue) partition by range (a); +create table rlp5_default partition of rlp5 default; +create table rlp5_1 partition of rlp5 for values from (31) to (40); + +explain (costs off) select * from rlp where a < 1; +explain (costs off) select * from rlp where 1 > a; /* commuted */ +explain (costs off) select * from rlp where a <= 1; +explain (costs off) select * from rlp where a = 1; +explain (costs off) select * from rlp where a = 1::bigint; /* same as above */ +explain (costs off) select * from rlp where a = 1::numeric; /* no pruning */ +explain (costs off) select * from rlp where a <= 10; +explain (costs off) select * from rlp where a > 10; +explain (costs off) select * from rlp where a < 15; +explain (costs off) select * from rlp where a <= 15; +explain (costs off) select * from rlp where a > 15 and b = 'ab'; +explain (costs off) select * from rlp where a = 16; +explain (costs off) select * from rlp where a = 16 and b in ('not', 'in', 'here'); +explain (costs off) select * from rlp where a = 16 and b < 'ab'; +explain (costs off) select * from rlp where a = 16 and b <= 'ab'; +explain (costs off) select * from rlp where a = 16 and b is null; +explain (costs off) select * from rlp where a = 16 and b is not null; +explain (costs off) select * from rlp where a is null; +explain (costs off) select * from rlp where a is not null; +explain (costs off) select * from rlp where a > 30; +explain (costs off) select * from rlp where a = 30; /* only default is scanned */ +explain (costs off) select * from rlp where a <= 31; +explain (costs off) select * from rlp where a = 1 or a = 7; +explain (costs off) select * from rlp where a = 1 or b = 'ab'; + +explain (costs off) select * from rlp where a > 20 and a < 27; +explain (costs off) select * from rlp where a = 29; +explain (costs off) select * from rlp where a >= 29; + +-- redundant clauses are eliminated +explain (costs off) select * from rlp where a > 1 and a = 10; /* only default */ +explain (costs off) select * from rlp where a > 1 and a >=15; /* rlp3 onwards, including default */ +explain (costs off) select * from rlp where a = 1 and a = 3; /* empty */ +explain (costs off) select * from rlp where (a = 1 and a = 3) or (a > 1 and a = 15); + +-- multi-column keys +create table mc3p (a int, b int, c int) partition by range (a, abs(b), c); +create table mc3p_default partition of mc3p default; +create table mc3p0 partition of mc3p for values from (minvalue, minvalue, minvalue) to (1, 1, 1); +create table mc3p1 partition of mc3p for values from (1, 1, 1) to (10, 5, 10); +create table mc3p2 partition of mc3p for values from (10, 5, 10) to (10, 10, 10); +create table mc3p3 partition of mc3p for values from (10, 10, 10) to (10, 10, 20); +create table mc3p4 partition of mc3p for values from (10, 10, 20) to (10, maxvalue, maxvalue); +create table mc3p5 partition of mc3p for values from (11, 1, 1) to (20, 10, 10); +create table mc3p6 partition of mc3p for values from (20, 10, 10) to 
(20, 20, 20); +create table mc3p7 partition of mc3p for values from (20, 20, 20) to (maxvalue, maxvalue, maxvalue); + +explain (costs off) select * from mc3p where a = 1; +explain (costs off) select * from mc3p where a = 1 and abs(b) < 1; +explain (costs off) select * from mc3p where a = 1 and abs(b) = 1; +explain (costs off) select * from mc3p where a = 1 and abs(b) = 1 and c < 8; +explain (costs off) select * from mc3p where a = 10 and abs(b) between 5 and 35; +explain (costs off) select * from mc3p where a > 10; +explain (costs off) select * from mc3p where a >= 10; +explain (costs off) select * from mc3p where a < 10; +explain (costs off) select * from mc3p where a <= 10 and abs(b) < 10; +explain (costs off) select * from mc3p where a = 11 and abs(b) = 0; +explain (costs off) select * from mc3p where a = 20 and abs(b) = 10 and c = 100; +explain (costs off) select * from mc3p where a > 20; +explain (costs off) select * from mc3p where a >= 20; +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1 and c = 1) or (a = 10 and abs(b) = 5 and c = 10) or (a > 11 and a < 20); +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1 and c = 1) or (a = 10 and abs(b) = 5 and c = 10) or (a > 11 and a < 20) or a < 1; +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1 and c = 1) or (a = 10 and abs(b) = 5 and c = 10) or (a > 11 and a < 20) or a < 1 or a = 1; +explain (costs off) select * from mc3p where a = 1 or abs(b) = 1 or c = 1; +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1) or (a = 10 and abs(b) = 10); +explain (costs off) select * from mc3p where (a = 1 and abs(b) = 1) or (a = 10 and abs(b) = 9); + +-- a simpler multi-column keys case +create table mc2p (a int, b int) partition by range (a, b); +create table mc2p_default partition of mc2p default; +create table mc2p0 partition of mc2p for values from (minvalue, minvalue) to (1, minvalue); +create table mc2p1 partition of mc2p for values from (1, minvalue) to (1, 1); +create table mc2p2 partition of mc2p for values from (1, 1) to (2, minvalue); +create table mc2p3 partition of mc2p for values from (2, minvalue) to (2, 1); +create table mc2p4 partition of mc2p for values from (2, 1) to (2, maxvalue); +create table mc2p5 partition of mc2p for values from (2, maxvalue) to (maxvalue, maxvalue); + +explain (costs off) select * from mc2p where a < 2; +explain (costs off) select * from mc2p where a = 2 and b < 1; +explain (costs off) select * from mc2p where a > 1; +explain (costs off) select * from mc2p where a = 1 and b > 1; + +-- boolean partitioning +create table boolpart (a bool) partition by list (a); +create table boolpart_default partition of boolpart default; +create table boolpart_t partition of boolpart for values in ('true'); +create table boolpart_f partition of boolpart for values in ('false'); + +explain (costs off) select * from boolpart where a in (true, false); +explain (costs off) select * from boolpart where a = false; +explain (costs off) select * from boolpart where not a = false; +explain (costs off) select * from boolpart where a is true or a is not true; +explain (costs off) select * from boolpart where a is not true; +explain (costs off) select * from boolpart where a is not true and a is not false; +explain (costs off) select * from boolpart where a is unknown; +explain (costs off) select * from boolpart where a is not unknown; + +drop table lp, coll_pruning, rlp, mc3p, mc2p, boolpart; From 84940644de931f331433b35e3a391822671f8c9c Mon Sep 17 00:00:00 2001 From: 
Robert Haas Date: Wed, 29 Nov 2017 17:06:14 -0500 Subject: [PATCH 0634/1087] New C function: bms_add_range This will be used by pending patches to improve partition pruning. Amit Langote and Kyotaro Horiguchi, per a suggestion from David Rowley. Review and testing of the larger patch set of which this is a part by Ashutosh Bapat, David Rowley, Dilip Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Amul Sul, and Kyotaro Horiguchi. Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp --- src/backend/nodes/bitmapset.c | 72 +++++++++++++++++++++++++++++++++++ src/include/nodes/bitmapset.h | 1 + 2 files changed, 73 insertions(+) diff --git a/src/backend/nodes/bitmapset.c b/src/backend/nodes/bitmapset.c index d4b82c6305..e5096e01a7 100644 --- a/src/backend/nodes/bitmapset.c +++ b/src/backend/nodes/bitmapset.c @@ -784,6 +784,78 @@ bms_add_members(Bitmapset *a, const Bitmapset *b) return result; } +/* + * bms_add_range + * Add members in the range of 'lower' to 'upper' to the set. + * + * Note this could also be done by calling bms_add_member in a loop, however, + * using this function will be faster when the range is large as we work + at the bitmapword level rather than at bit level. + */ +Bitmapset * +bms_add_range(Bitmapset *a, int lower, int upper) +{ + int lwordnum, + lbitnum, + uwordnum, + ushiftbits, + wordnum; + + if (lower < 0 || upper < 0) + elog(ERROR, "negative bitmapset member not allowed"); + if (lower > upper) + elog(ERROR, "lower range must not be above upper range"); + uwordnum = WORDNUM(upper); + + if (a == NULL) + { + a = (Bitmapset *) palloc0(BITMAPSET_SIZE(uwordnum + 1)); + a->nwords = uwordnum + 1; + } + + /* ensure we have enough words to store the upper bit */ + else if (uwordnum >= a->nwords) + { + int oldnwords = a->nwords; + int i; + + a = (Bitmapset *) repalloc(a, BITMAPSET_SIZE(uwordnum + 1)); + a->nwords = uwordnum + 1; + /* zero out the enlarged portion */ + for (i = oldnwords; i < a->nwords; i++) + a->words[i] = 0; + } + + wordnum = lwordnum = WORDNUM(lower); + + lbitnum = BITNUM(lower); + ushiftbits = BITS_PER_BITMAPWORD - (BITNUM(upper) + 1); + + /* + * Special case when lwordnum is the same as uwordnum we must perform the + * upper and lower masking on the word. + */ + if (lwordnum == uwordnum) + { + a->words[lwordnum] |= ~(bitmapword) (((bitmapword) 1 << lbitnum) - 1) + & (~(bitmapword) 0) >> ushiftbits; + } + else + { + /* turn on lbitnum and all bits left of it */ + a->words[wordnum++] |= ~(bitmapword) (((bitmapword) 1 << lbitnum) - 1); + + /* turn on all bits for any intermediate words */ + while (wordnum < uwordnum) + a->words[wordnum++] = ~(bitmapword) 0; + + /* turn on upper's bit and all bits right of it.
*/ + a->words[uwordnum] |= (~(bitmapword) 0) >> ushiftbits; + } + + return a; +} + /* * bms_int_members - like bms_intersect, but left input is recycled */ diff --git a/src/include/nodes/bitmapset.h b/src/include/nodes/bitmapset.h index aa3fb253c2..3b62a97775 100644 --- a/src/include/nodes/bitmapset.h +++ b/src/include/nodes/bitmapset.h @@ -90,6 +90,7 @@ extern bool bms_is_empty(const Bitmapset *a); extern Bitmapset *bms_add_member(Bitmapset *a, int x); extern Bitmapset *bms_del_member(Bitmapset *a, int x); extern Bitmapset *bms_add_members(Bitmapset *a, const Bitmapset *b); +extern Bitmapset *bms_add_range(Bitmapset *a, int lower, int upper); extern Bitmapset *bms_int_members(Bitmapset *a, const Bitmapset *b); extern Bitmapset *bms_del_members(Bitmapset *a, const Bitmapset *b); extern Bitmapset *bms_join(Bitmapset *a, Bitmapset *b); From fa330f9adf4e83c0707b0b1164e7bf09c9204b3d Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 29 Nov 2017 16:06:50 -0800 Subject: [PATCH 0635/1087] Add some regression tests that exercise hash join code. Although hash joins are already tested by many queries, these tests systematically cover the four different states we can reach as part of the strategy for respecting work_mem. Author: Thomas Munro Reviewed-By: Andres Freund --- src/test/regress/expected/join.out | 457 +++++++++++++++++++++++++++++ src/test/regress/sql/join.sql | 253 ++++++++++++++++ 2 files changed, 710 insertions(+) diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index f47449b1c4..f02ef3f978 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -5785,3 +5785,460 @@ where exists (select 1 from j3 (13 rows) drop table j3; +-- +-- exercises for the hash join code +-- +begin; +set local min_parallel_table_scan_size = 0; +set local parallel_setup_cost = 0; +-- Extract bucket and batch counts from an explain analyze plan. In +-- general we can't make assertions about how many batches (or +-- buckets) will be required because it can vary, but we can in some +-- special cases and we can check for growth. +create or replace function find_hash(node json) +returns json language plpgsql +as +$$ +declare + x json; + child json; +begin + if node->>'Node Type' = 'Hash' then + return node; + else + for child in select json_array_elements(node->'Plans') + loop + x := find_hash(child); + if x is not null then + return x; + end if; + end loop; + return null; + end if; +end; +$$; +create or replace function hash_join_batches(query text) +returns table (original int, final int) language plpgsql +as +$$ +declare + whole_plan json; + hash_node json; +begin + for whole_plan in + execute 'explain (analyze, format ''json'') ' || query + loop + hash_node := find_hash(json_extract_path(whole_plan, '0', 'Plan')); + original := hash_node->>'Original Hash Batches'; + final := hash_node->>'Hash Batches'; + return next; + end loop; +end; +$$; +-- Make a simple relation with well distributed keys and correctly +-- estimated size. +create table simple as + select generate_series(1, 20000) AS id, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'; +alter table simple set (parallel_workers = 2); +analyze simple; +-- Make a relation whose size we will under-estimate. We want stats +-- to say 1000 rows, but actually there are 20,000 rows. 
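[Editor's note: a hedged sketch of how the under-estimate below is manufactured; the statements summarized here all appear verbatim in the patch.]

-- The table is filled with 20,000 rows and analyzed, autovacuum is
-- disabled so the statistics are never refreshed, and the planner's row
-- estimate is then overwritten directly in the catalog:
--   alter table bigger_than_it_looks set (autovacuum_enabled = 'false');
--   update pg_class set reltuples = 1000
--     where relname = 'bigger_than_it_looks';
-- The hash table is sized for about 1,000 rows but receives 20,000,
-- which forces the number of batches to grow at execution time.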
+create table bigger_than_it_looks as + select generate_series(1, 20000) as id, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'; +alter table bigger_than_it_looks set (autovacuum_enabled = 'false'); +alter table bigger_than_it_looks set (parallel_workers = 2); +analyze bigger_than_it_looks; +update pg_class set reltuples = 1000 where relname = 'bigger_than_it_looks'; +-- Make a relation whose size we underestimate and that also has a +-- kind of skew that breaks our batching scheme. We want stats to say +-- 2 rows, but actually there are 20,000 rows with the same key. +create table extremely_skewed (id int, t text); +alter table extremely_skewed set (autovacuum_enabled = 'false'); +alter table extremely_skewed set (parallel_workers = 2); +analyze extremely_skewed; +insert into extremely_skewed + select 42 as id, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' + from generate_series(1, 20000); +update pg_class + set reltuples = 2, relpages = pg_relation_size('extremely_skewed') / 8192 + where relname = 'extremely_skewed'; +-- The "optimal" case: the hash table fits in memory; we plan for 1 +-- batch, we stick to that number, and peak memory usage stays within +-- our work_mem budget +-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +set local work_mem = '4MB'; +explain (costs off) + select count(*) from simple r join simple s using (id); + QUERY PLAN +---------------------------------------- + Aggregate + -> Hash Join + Hash Cond: (r.id = s.id) + -> Seq Scan on simple r + -> Hash + -> Seq Scan on simple s +(6 rows) + +select count(*) from simple r join simple s using (id); + count +------- + 20000 +(1 row) + +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + f | f +(1 row) + +rollback to settings; +-- parallel with parallel-oblivious hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '4MB'; +explain (costs off) + select count(*) from simple r join simple s using (id); + QUERY PLAN +------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 2 + -> Partial Aggregate + -> Hash Join + Hash Cond: (r.id = s.id) + -> Parallel Seq Scan on simple r + -> Hash + -> Seq Scan on simple s +(9 rows) + +select count(*) from simple r join simple s using (id); + count +------- + 20000 +(1 row) + +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + f | f +(1 row) + +rollback to settings; +-- The "good" case: batches required, but we plan the right number; we +-- plan for some number of batches, and we stick to that number, and +-- peak memory usage says within our work_mem budget +-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +set local work_mem = '128kB'; +explain (costs off) + select count(*) from simple r join simple s using (id); + QUERY PLAN +---------------------------------------- + Aggregate + -> Hash Join + Hash Cond: (r.id = s.id) + -> Seq Scan on simple r + -> Hash + -> Seq Scan on simple s +(6 rows) + +select count(*) from simple r join simple s using (id); + count +------- + 20000 +(1 row) + +select original > 1 as 
initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + t | f +(1 row) + +rollback to settings; +-- parallel with parallel-oblivious hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '128kB'; +explain (costs off) + select count(*) from simple r join simple s using (id); + QUERY PLAN +------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 2 + -> Partial Aggregate + -> Hash Join + Hash Cond: (r.id = s.id) + -> Parallel Seq Scan on simple r + -> Hash + -> Seq Scan on simple s +(9 rows) + +select count(*) from simple r join simple s using (id); + count +------- + 20000 +(1 row) + +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + t | f +(1 row) + +rollback to settings; +-- The "bad" case: during execution we need to increase number of +-- batches; in this case we plan for 1 batch, and increase at least a +-- couple of times, and peak memory usage stays within our work_mem +-- budget +-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +set local work_mem = '128kB'; +explain (costs off) + select count(*) FROM simple r JOIN bigger_than_it_looks s USING (id); + QUERY PLAN +------------------------------------------------------ + Aggregate + -> Hash Join + Hash Cond: (r.id = s.id) + -> Seq Scan on simple r + -> Hash + -> Seq Scan on bigger_than_it_looks s +(6 rows) + +select count(*) FROM simple r JOIN bigger_than_it_looks s USING (id); + count +------- + 20000 +(1 row) + +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) FROM simple r JOIN bigger_than_it_looks s USING (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + f | t +(1 row) + +rollback to settings; +-- parallel with parallel-oblivious hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '128kB'; +explain (costs off) + select count(*) from simple r join bigger_than_it_looks s using (id); + QUERY PLAN +------------------------------------------------------------------ + Finalize Aggregate + -> Gather + Workers Planned: 2 + -> Partial Aggregate + -> Hash Join + Hash Cond: (r.id = s.id) + -> Parallel Seq Scan on simple r + -> Hash + -> Seq Scan on bigger_than_it_looks s +(9 rows) + +select count(*) from simple r join bigger_than_it_looks s using (id); + count +------- + 20000 +(1 row) + +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join bigger_than_it_looks s using (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + f | t +(1 row) + +rollback to settings; +-- The "ugly" case: increasing the number of batches during execution +-- doesn't help, so stop trying to fit in work_mem and hope for the +-- best; in this case we plan for 1 batch, increases just once and +-- then stop increasing because that didn't help at all, so we blow +-- right through the work_mem budget and hope for the best... 
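Batch growth is futile in the "ugly" case for a simple reason: tuples are
routed to batches by bits of their hash values, and every row with the same
key has the same hash, so all 20,000 id=42 rows land in the same batch no
matter how many times the batch count doubles.  A freestanding illustration
(the mixer is a toy, not PostgreSQL's hash_any(), and assigning batches by
hash & (nbatch - 1) is a simplification):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy 32-bit hash mixer, for illustration only. */
    static uint32_t
    mix(uint32_t x)
    {
        x ^= x >> 16;
        x *= 0x85ebca6b;
        x ^= x >> 13;
        x *= 0xc2b2ae35;
        x ^= x >> 16;
        return x;
    }

    int
    main(void)
    {
        uint32_t    key = 42;   /* every extremely_skewed row has id = 42 */

        for (int nbatch = 1; nbatch <= 16; nbatch <<= 1)
            printf("nbatch = %2d: id=42 rows all fall in batch %u\n",
                   nbatch, mix(key) & (uint32_t) (nbatch - 1));
        return 0;
    }

Each doubling moves the skewed rows to a single batch without ever splitting
them, so the executor eventually stops trying to grow.  The serial and
parallel forms of this case follow.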
+-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +set local work_mem = '128kB'; +explain (costs off) + select count(*) from simple r join extremely_skewed s using (id); + QUERY PLAN +-------------------------------------------------- + Aggregate + -> Hash Join + Hash Cond: (r.id = s.id) + -> Seq Scan on simple r + -> Hash + -> Seq Scan on extremely_skewed s +(6 rows) + +select count(*) from simple r join extremely_skewed s using (id); + count +------- + 20000 +(1 row) + +select * from hash_join_batches( +$$ + select count(*) from simple r join extremely_skewed s using (id); +$$); + original | final +----------+------- + 1 | 2 +(1 row) + +rollback to settings; +-- parallel with parallel-oblivious hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '128kB'; +explain (costs off) + select count(*) from simple r join extremely_skewed s using (id); + QUERY PLAN +-------------------------------------------------------- + Aggregate + -> Gather + Workers Planned: 2 + -> Hash Join + Hash Cond: (r.id = s.id) + -> Parallel Seq Scan on simple r + -> Hash + -> Seq Scan on extremely_skewed s +(8 rows) + +select count(*) from simple r join extremely_skewed s using (id); + count +------- + 20000 +(1 row) + +select * from hash_join_batches( +$$ + select count(*) from simple r join extremely_skewed s using (id); +$$); + original | final +----------+------- + 1 | 2 +(1 row) + +rollback to settings; +-- A couple of other hash join tests unrelated to work_mem management. +-- A full outer join where every record is matched. +-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +explain (costs off) + select count(*) from simple r full outer join simple s using (id); + QUERY PLAN +---------------------------------------- + Aggregate + -> Hash Full Join + Hash Cond: (r.id = s.id) + -> Seq Scan on simple r + -> Hash + -> Seq Scan on simple s +(6 rows) + +select count(*) from simple r full outer join simple s using (id); + count +------- + 20000 +(1 row) + +rollback to settings; +-- parallelism not possible with parallel-oblivious outer hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +explain (costs off) + select count(*) from simple r full outer join simple s using (id); + QUERY PLAN +---------------------------------------- + Aggregate + -> Hash Full Join + Hash Cond: (r.id = s.id) + -> Seq Scan on simple r + -> Hash + -> Seq Scan on simple s +(6 rows) + +select count(*) from simple r full outer join simple s using (id); + count +------- + 20000 +(1 row) + +rollback to settings; +-- An full outer join where every record is not matched. 
+-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +explain (costs off) + select count(*) from simple r full outer join simple s on (r.id = 0 - s.id); + QUERY PLAN +---------------------------------------- + Aggregate + -> Hash Full Join + Hash Cond: ((0 - s.id) = r.id) + -> Seq Scan on simple s + -> Hash + -> Seq Scan on simple r +(6 rows) + +select count(*) from simple r full outer join simple s on (r.id = 0 - s.id); + count +------- + 40000 +(1 row) + +rollback to settings; +-- parallelism not possible with parallel-oblivious outer hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +explain (costs off) + select count(*) from simple r full outer join simple s on (r.id = 0 - s.id); + QUERY PLAN +---------------------------------------- + Aggregate + -> Hash Full Join + Hash Cond: ((0 - s.id) = r.id) + -> Seq Scan on simple s + -> Hash + -> Seq Scan on simple r +(6 rows) + +select count(*) from simple r full outer join simple s on (r.id = 0 - s.id); + count +------- + 40000 +(1 row) + +rollback to settings; +rollback; diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index d847d53653..cbb71c5b1b 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -1934,3 +1934,256 @@ where exists (select 1 from j3 and t1.unique1 < 1; drop table j3; + +-- +-- exercises for the hash join code +-- + +begin; + +set local min_parallel_table_scan_size = 0; +set local parallel_setup_cost = 0; + +-- Extract bucket and batch counts from an explain analyze plan. In +-- general we can't make assertions about how many batches (or +-- buckets) will be required because it can vary, but we can in some +-- special cases and we can check for growth. +create or replace function find_hash(node json) +returns json language plpgsql +as +$$ +declare + x json; + child json; +begin + if node->>'Node Type' = 'Hash' then + return node; + else + for child in select json_array_elements(node->'Plans') + loop + x := find_hash(child); + if x is not null then + return x; + end if; + end loop; + return null; + end if; +end; +$$; +create or replace function hash_join_batches(query text) +returns table (original int, final int) language plpgsql +as +$$ +declare + whole_plan json; + hash_node json; +begin + for whole_plan in + execute 'explain (analyze, format ''json'') ' || query + loop + hash_node := find_hash(json_extract_path(whole_plan, '0', 'Plan')); + original := hash_node->>'Original Hash Batches'; + final := hash_node->>'Hash Batches'; + return next; + end loop; +end; +$$; + +-- Make a simple relation with well distributed keys and correctly +-- estimated size. +create table simple as + select generate_series(1, 20000) AS id, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'; +alter table simple set (parallel_workers = 2); +analyze simple; + +-- Make a relation whose size we will under-estimate. We want stats +-- to say 1000 rows, but actually there are 20,000 rows. +create table bigger_than_it_looks as + select generate_series(1, 20000) as id, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'; +alter table bigger_than_it_looks set (autovacuum_enabled = 'false'); +alter table bigger_than_it_looks set (parallel_workers = 2); +analyze bigger_than_it_looks; +update pg_class set reltuples = 1000 where relname = 'bigger_than_it_looks'; + +-- Make a relation whose size we underestimate and that also has a +-- kind of skew that breaks our batching scheme. We want stats to say +-- 2 rows, but actually there are 20,000 rows with the same key. 
+create table extremely_skewed (id int, t text); +alter table extremely_skewed set (autovacuum_enabled = 'false'); +alter table extremely_skewed set (parallel_workers = 2); +analyze extremely_skewed; +insert into extremely_skewed + select 42 as id, 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' + from generate_series(1, 20000); +update pg_class + set reltuples = 2, relpages = pg_relation_size('extremely_skewed') / 8192 + where relname = 'extremely_skewed'; + +-- The "optimal" case: the hash table fits in memory; we plan for 1 +-- batch, we stick to that number, and peak memory usage stays within +-- our work_mem budget + +-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +set local work_mem = '4MB'; +explain (costs off) + select count(*) from simple r join simple s using (id); +select count(*) from simple r join simple s using (id); +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); +rollback to settings; + +-- parallel with parallel-oblivious hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '4MB'; +explain (costs off) + select count(*) from simple r join simple s using (id); +select count(*) from simple r join simple s using (id); +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); +rollback to settings; + +-- The "good" case: batches required, but we plan the right number; we +-- plan for some number of batches, and we stick to that number, and +-- peak memory usage says within our work_mem budget + +-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +set local work_mem = '128kB'; +explain (costs off) + select count(*) from simple r join simple s using (id); +select count(*) from simple r join simple s using (id); +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); +rollback to settings; + +-- parallel with parallel-oblivious hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '128kB'; +explain (costs off) + select count(*) from simple r join simple s using (id); +select count(*) from simple r join simple s using (id); +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); +rollback to settings; + +-- The "bad" case: during execution we need to increase number of +-- batches; in this case we plan for 1 batch, and increase at least a +-- couple of times, and peak memory usage stays within our work_mem +-- budget + +-- non-parallel +savepoint settings; +set local max_parallel_workers_per_gather = 0; +set local work_mem = '128kB'; +explain (costs off) + select count(*) FROM simple r JOIN bigger_than_it_looks s USING (id); +select count(*) FROM simple r JOIN bigger_than_it_looks s USING (id); +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) FROM simple r JOIN bigger_than_it_looks s USING (id); +$$); +rollback to settings; + +-- parallel with parallel-oblivious hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = 
'128kB';
+explain (costs off)
+  select count(*) from simple r join bigger_than_it_looks s using (id);
+select count(*) from simple r join bigger_than_it_looks s using (id);
+select original > 1 as initially_multibatch, final > original as increased_batches
+  from hash_join_batches(
+$$
+  select count(*) from simple r join bigger_than_it_looks s using (id);
+$$);
+rollback to settings;
+
+-- The "ugly" case: increasing the number of batches during execution
+-- doesn't help, so stop trying to fit in work_mem and hope for the
+-- best; in this case we plan for 1 batch, increases just once and
+-- then stop increasing because that didn't help at all, so we blow
+-- right through the work_mem budget and hope for the best...
+
+-- non-parallel
+savepoint settings;
+set local max_parallel_workers_per_gather = 0;
+set local work_mem = '128kB';
+explain (costs off)
+  select count(*) from simple r join extremely_skewed s using (id);
+select count(*) from simple r join extremely_skewed s using (id);
+select * from hash_join_batches(
+$$
+  select count(*) from simple r join extremely_skewed s using (id);
+$$);
+rollback to settings;
+
+-- parallel with parallel-oblivious hash join
+savepoint settings;
+set local max_parallel_workers_per_gather = 2;
+set local work_mem = '128kB';
+explain (costs off)
+  select count(*) from simple r join extremely_skewed s using (id);
+select count(*) from simple r join extremely_skewed s using (id);
+select * from hash_join_batches(
+$$
+  select count(*) from simple r join extremely_skewed s using (id);
+$$);
+rollback to settings;
+
+-- A couple of other hash join tests unrelated to work_mem management.
+
+-- A full outer join where every record is matched.
+
+-- non-parallel
+savepoint settings;
+set local max_parallel_workers_per_gather = 0;
+explain (costs off)
+  select count(*) from simple r full outer join simple s using (id);
+select count(*) from simple r full outer join simple s using (id);
+rollback to settings;
+
+-- parallelism not possible with parallel-oblivious outer hash join
+savepoint settings;
+set local max_parallel_workers_per_gather = 2;
+explain (costs off)
+  select count(*) from simple r full outer join simple s using (id);
+select count(*) from simple r full outer join simple s using (id);
+rollback to settings;
+
+-- An full outer join where every record is not matched.
+
+-- non-parallel
+savepoint settings;
+set local max_parallel_workers_per_gather = 0;
+explain (costs off)
+  select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);
+select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);
+rollback to settings;
+
+-- parallelism not possible with parallel-oblivious outer hash join
+savepoint settings;
+set local max_parallel_workers_per_gather = 2;
+explain (costs off)
+  select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);
+select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);
+rollback to settings;
+
+rollback;

From 1145acc70debacc34de01fac238defde543f4ed4 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Wed, 29 Nov 2017 17:07:16 -0800
Subject: [PATCH 0636/1087] Add a barrier primitive for synchronizing backends.

Provide support for dynamic or static parties of processes to wait for
all processes to reach a point in the code before continuing.

This is similar to the mechanism of the same name in POSIX threads and
MPI, though it has explicit phasing and dynamic party support like the
Java core library's Phaser. 
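For comparison, the POSIX mechanism mentioned above can be sketched in a
freestanding program; the worker count and phase bodies here are invented
for illustration, and the program builds with cc -pthread.  Note that
pthread_barrier_wait() returns PTHREAD_BARRIER_SERIAL_THREAD in exactly one
waiter, analogous to the election performed by the BarrierArriveAndWait()
added by this patch.

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4

    static pthread_barrier_t barrier;

    static void *
    worker(void *arg)
    {
        long    id = (long) arg;

        printf("worker %ld: phase A\n", id);
        pthread_barrier_wait(&barrier);     /* wait for all NWORKERS */
        printf("worker %ld: phase B\n", id);
        pthread_barrier_wait(&barrier);
        return NULL;
    }

    int
    main(void)
    {
        pthread_t   threads[NWORKERS];
        long        i;

        pthread_barrier_init(&barrier, NULL, NWORKERS);
        for (i = 0; i < NWORKERS; i++)
            pthread_create(&threads[i], NULL, worker, (void *) i);
        for (i = 0; i < NWORKERS; i++)
            pthread_join(threads[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }

Unlike pthread_barrier_t, the primitive added here also lets participants
attach and detach mid-run, which is what the Phaser comparison refers to.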
This will be used by an upcoming patch adding support for parallel hash joins. Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/CAEepm=2_y7oi01OjA_wLvYcWMc9_d=LaoxrY3eiROCZkB_qakA@mail.gmail.com --- src/backend/storage/ipc/Makefile | 2 +- src/backend/storage/ipc/barrier.c | 311 ++++++++++++++++++++++++++++++ src/include/storage/barrier.h | 45 +++++ 3 files changed, 357 insertions(+), 1 deletion(-) create mode 100644 src/backend/storage/ipc/barrier.c create mode 100644 src/include/storage/barrier.h diff --git a/src/backend/storage/ipc/Makefile b/src/backend/storage/ipc/Makefile index 8a55392ade..9dbdc26c9b 100644 --- a/src/backend/storage/ipc/Makefile +++ b/src/backend/storage/ipc/Makefile @@ -8,7 +8,7 @@ subdir = src/backend/storage/ipc top_builddir = ../../../.. include $(top_builddir)/src/Makefile.global -OBJS = dsm_impl.o dsm.o ipc.o ipci.o latch.o pmsignal.o procarray.o \ +OBJS = barrier.o dsm_impl.o dsm.o ipc.o ipci.o latch.o pmsignal.o procarray.o \ procsignal.o shmem.o shmqueue.o shm_mq.o shm_toc.o sinval.o \ sinvaladt.o standby.o diff --git a/src/backend/storage/ipc/barrier.c b/src/backend/storage/ipc/barrier.c new file mode 100644 index 0000000000..7dde932738 --- /dev/null +++ b/src/backend/storage/ipc/barrier.c @@ -0,0 +1,311 @@ +/*------------------------------------------------------------------------- + * + * barrier.c + * Barriers for synchronizing cooperating processes. + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * From Wikipedia[1]: "In parallel computing, a barrier is a type of + * synchronization method. A barrier for a group of threads or processes in + * the source code means any thread/process must stop at this point and cannot + * proceed until all other threads/processes reach this barrier." + * + * This implementation of barriers allows for static sets of participants + * known up front, or dynamic sets of participants which processes can join or + * leave at any time. In the dynamic case, a phase number can be used to + * track progress through a parallel algorithm, and may be necessary to + * synchronize with the current phase of a multi-phase algorithm when a new + * participant joins. In the static case, the phase number is used + * internally, but it isn't strictly necessary for client code to access it + * because the phase can only advance when the declared number of participants + * reaches the barrier, so client code should be in no doubt about the current + * phase of computation at all times. + * + * Consider a parallel algorithm that involves separate phases of computation + * A, B and C where the output of each phase is needed before the next phase + * can begin. + * + * In the case of a static barrier initialized with 4 participants, each + * participant works on phase A, then calls BarrierArriveAndWait to wait until + * all 4 participants have reached that point. When BarrierArriveAndWait + * returns control, each participant can work on B, and so on. Because the + * barrier knows how many participants to expect, the phases of computation + * don't need labels or numbers, since each process's program counter implies + * the current phase. Even if some of the processes are slow to start up and + * begin running phase A, the other participants are expecting them and will + * patiently wait at the barrier. 
The code could be written as follows:
+ *
+ *     perform_a();
+ *     BarrierArriveAndWait(&barrier, ...);
+ *     perform_b();
+ *     BarrierArriveAndWait(&barrier, ...);
+ *     perform_c();
+ *     BarrierArriveAndWait(&barrier, ...);
+ *
+ * If the number of participants is not known up front, then a dynamic barrier
+ * is needed and the number should be set to zero at initialization.  New
+ * complications arise because the number necessarily changes over time as
+ * participants attach and detach, and therefore phases B, C or even the end
+ * of processing may be reached before any given participant has started
+ * running and attached.  Therefore the client code must perform an initial
+ * test of the phase number after attaching, because it needs to find out
+ * which phase of the algorithm has been reached by any participants that are
+ * already attached in order to synchronize with that work.  Once the program
+ * counter or some other representation of current progress is synchronized
+ * with the barrier's phase, normal control flow can be used just as in the
+ * static case.  Our example could be written using a switch statement with
+ * cases that fall through, as follows:
+ *
+ *     phase = BarrierAttach(&barrier);
+ *     switch (phase)
+ *     {
+ *     case PHASE_A:
+ *         perform_a();
+ *         BarrierArriveAndWait(&barrier, ...);
+ *     case PHASE_B:
+ *         perform_b();
+ *         BarrierArriveAndWait(&barrier, ...);
+ *     case PHASE_C:
+ *         perform_c();
+ *         BarrierArriveAndWait(&barrier, ...);
+ *     }
+ *     BarrierDetach(&barrier);
+ *
+ * Static barriers behave similarly to POSIX's pthread_barrier_t.  Dynamic
+ * barriers behave similarly to Java's java.util.concurrent.Phaser.
+ *
+ * [1] https://en.wikipedia.org/wiki/Barrier_(computer_science)
+ *
+ * IDENTIFICATION
+ *	  src/backend/storage/ipc/barrier.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+#include "storage/barrier.h"
+
+static inline bool BarrierDetachImpl(Barrier *barrier, bool arrive);
+
+/*
+ * Initialize this barrier.  To use a static party size, provide the number
+ * of participants to wait for at each phase, indicating that that number of
+ * backends is implicitly attached.  To use a dynamic party size, specify
+ * zero here and then use BarrierAttach() and
+ * BarrierDetach()/BarrierArriveAndDetach() to register and deregister
+ * participants explicitly.
+ */
+void
+BarrierInit(Barrier *barrier, int participants)
+{
+	SpinLockInit(&barrier->mutex);
+	barrier->participants = participants;
+	barrier->arrived = 0;
+	barrier->phase = 0;
+	barrier->elected = 0;
+	barrier->static_party = participants > 0;
+	ConditionVariableInit(&barrier->condition_variable);
+}
+
+/*
+ * Arrive at this barrier, wait for all other attached participants to arrive
+ * too and then return.  Increments the current phase.  The caller must be
+ * attached.
+ *
+ * While waiting, pg_stat_activity shows a wait_event_class and wait_event
+ * controlled by the wait_event_info passed in, which should be a value from
+ * one of the WaitEventXXX enums defined in pgstat.h.
+ *
+ * Return true in one arbitrarily chosen participant.  Return false in all
+ * others.  The return code can be used to elect one participant to execute a
+ * phase of work that must be done serially while other participants wait. 
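+ *
+ * For example (a hypothetical usage sketch, not code added by this patch),
+ * the return value can elect one backend to build a shared data structure
+ * while the others simply wait for the following phase:
+ *
+ *     if (BarrierArriveAndWait(&barrier, wait_event_info))
+ *         build_shared_structure();    <-- runs in the elected backend only
+ *     BarrierArriveAndWait(&barrier, wait_event_info);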
+ */ +bool +BarrierArriveAndWait(Barrier *barrier, uint32 wait_event_info) +{ + bool release = false; + bool elected; + int start_phase; + int next_phase; + + SpinLockAcquire(&barrier->mutex); + start_phase = barrier->phase; + next_phase = start_phase + 1; + ++barrier->arrived; + if (barrier->arrived == barrier->participants) + { + release = true; + barrier->arrived = 0; + barrier->phase = next_phase; + barrier->elected = next_phase; + } + SpinLockRelease(&barrier->mutex); + + /* + * If we were the last expected participant to arrive, we can release our + * peers and return true to indicate that this backend has been elected to + * perform any serial work. + */ + if (release) + { + ConditionVariableBroadcast(&barrier->condition_variable); + + return true; + } + + /* + * Otherwise we have to wait for the last participant to arrive and + * advance the phase. + */ + elected = false; + ConditionVariablePrepareToSleep(&barrier->condition_variable); + for (;;) + { + /* + * We know that phase must either be start_phase, indicating that we + * need to keep waiting, or next_phase, indicating that the last + * participant that we were waiting for has either arrived or detached + * so that the next phase has begun. The phase cannot advance any + * further than that without this backend's participation, because + * this backend is attached. + */ + SpinLockAcquire(&barrier->mutex); + Assert(barrier->phase == start_phase || barrier->phase == next_phase); + release = barrier->phase == next_phase; + if (release && barrier->elected != next_phase) + { + /* + * Usually the backend that arrives last and releases the other + * backends is elected to return true (see above), so that it can + * begin processing serial work while it has a CPU timeslice. + * However, if the barrier advanced because someone detached, then + * one of the backends that is awoken will need to be elected. + */ + barrier->elected = barrier->phase; + elected = true; + } + SpinLockRelease(&barrier->mutex); + if (release) + break; + ConditionVariableSleep(&barrier->condition_variable, wait_event_info); + } + ConditionVariableCancelSleep(); + + return elected; +} + +/* + * Arrive at this barrier, but detach rather than waiting. Returns true if + * the caller was the last to detach. + */ +bool +BarrierArriveAndDetach(Barrier *barrier) +{ + return BarrierDetachImpl(barrier, true); +} + +/* + * Attach to a barrier. All waiting participants will now wait for this + * participant to call BarrierArriveAndWait(), BarrierDetach() or + * BarrierArriveAndDetach(). Return the current phase. + */ +int +BarrierAttach(Barrier *barrier) +{ + int phase; + + Assert(!barrier->static_party); + + SpinLockAcquire(&barrier->mutex); + ++barrier->participants; + phase = barrier->phase; + SpinLockRelease(&barrier->mutex); + + return phase; +} + +/* + * Detach from a barrier. This may release other waiters from BarrierWait and + * advance the phase if they were only waiting for this backend. Return true + * if this participant was the last to detach. + */ +bool +BarrierDetach(Barrier *barrier) +{ + return BarrierDetachImpl(barrier, false); +} + +/* + * Return the current phase of a barrier. The caller must be attached. + */ +int +BarrierPhase(Barrier *barrier) +{ + /* + * It is OK to read barrier->phase without locking, because it can't + * change without us (we are attached to it), and we executed a memory + * barrier when we either attached or participated in changing it last + * time. 
+ */ + return barrier->phase; +} + +/* + * Return an instantaneous snapshot of the number of participants currently + * attached to this barrier. For debugging purposes only. + */ +int +BarrierParticipants(Barrier *barrier) +{ + int participants; + + SpinLockAcquire(&barrier->mutex); + participants = barrier->participants; + SpinLockRelease(&barrier->mutex); + + return participants; +} + +/* + * Detach from a barrier. If 'arrive' is true then also increment the phase + * if there are no other participants. If there are other participants + * waiting, then the phase will be advanced and they'll be released if they + * were only waiting for the caller. Return true if this participant was the + * last to detach. + */ +static inline bool +BarrierDetachImpl(Barrier *barrier, bool arrive) +{ + bool release; + bool last; + + Assert(!barrier->static_party); + + SpinLockAcquire(&barrier->mutex); + Assert(barrier->participants > 0); + --barrier->participants; + + /* + * If any other participants are waiting and we were the last participant + * waited for, release them. If no other participants are waiting, but + * this is a BarrierArriveAndDetach() call, then advance the phase too. + */ + if ((arrive || barrier->participants > 0) && + barrier->arrived == barrier->participants) + { + release = true; + barrier->arrived = 0; + ++barrier->phase; + } + else + release = false; + + last = barrier->participants == 0; + SpinLockRelease(&barrier->mutex); + + if (release) + ConditionVariableBroadcast(&barrier->condition_variable); + + return last; +} diff --git a/src/include/storage/barrier.h b/src/include/storage/barrier.h new file mode 100644 index 0000000000..0abaa2b98b --- /dev/null +++ b/src/include/storage/barrier.h @@ -0,0 +1,45 @@ +/*------------------------------------------------------------------------- + * + * barrier.h + * Barriers for synchronizing cooperating processes. + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/storage/barrier.h + * + *------------------------------------------------------------------------- + */ +#ifndef BARRIER_H +#define BARRIER_H + +/* + * For the header previously known as "barrier.h", please include + * "port/atomics.h", which deals with atomics, compiler barriers and memory + * barriers. + */ + +#include "storage/condition_variable.h" +#include "storage/spin.h" + +typedef struct Barrier +{ + slock_t mutex; + int phase; /* phase counter */ + int participants; /* the number of participants attached */ + int arrived; /* the number of participants that have + * arrived */ + int elected; /* highest phase elected */ + bool static_party; /* used only for assertions */ + ConditionVariable condition_variable; +} Barrier; + +extern void BarrierInit(Barrier *barrier, int num_workers); +extern bool BarrierArriveAndWait(Barrier *barrier, uint32 wait_event_info); +extern bool BarrierArriveAndDetach(Barrier *barrier); +extern int BarrierAttach(Barrier *barrier); +extern bool BarrierDetach(Barrier *barrier); +extern int BarrierPhase(Barrier *barrier); +extern int BarrierParticipants(Barrier *barrier); + +#endif /* BARRIER_H */ From 7ca25b7de6aefa5537e0dbe56541bc41c0464f97 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 29 Nov 2017 22:00:29 -0500 Subject: [PATCH 0637/1087] Fix neqjoinsel's behavior for semi/anti join cases. 
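Before the details, the arithmetic at the heart of this fix can be sketched
with invented statistics (the fractions below are illustrative only, not
taken from the patch):

    #include <stdio.h>

    int
    main(void)
    {
        /* made-up stats for "lhs.x <> rhs.x" evaluated as a semijoin */
        double  match_frac = 0.99;  /* LHS fraction with an equal RHS match */
        double  nullfrac = 0.02;    /* LHS fraction that is NULL */

        /* old estimate: treat <> as the complement of =, i.e. 1 - eqjoinsel */
        printf("old: %.4f\n", 1.0 - match_frac);    /* 0.0100 -- near zero */

        /*
         * new estimate: if the RHS has at least two distinct values, every
         * non-null LHS row finds some unequal row, so only NULLs fail
         */
        printf("new: %.4f\n", 1.0 - nullfrac);      /* 0.9800 -- near one */
        return 0;
    }

The message below explains why the old formula was wrong for semi and anti
joins.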
Previously, this function estimated the selectivity as 1 minus eqjoinsel() for the negator equality operator, regardless of join type (I think there was an expectation that eqjoinsel would handle the join type). But actually this is completely wrong for semijoin cases: the fraction of the LHS that has a non-matching row is not one minus the fraction of the LHS that has a matching row. In reality a semijoin with <> will nearly always succeed: it can only fail when the RHS is empty, or it contains a single distinct value that is equal to the particular LHS value, or the LHS value is null. The only one of those things we should have much confidence in estimating is the fraction of LHS values that are null, so let's just take the selectivity as 1 minus outer nullfrac. Per coding convention, antijoin should be estimated the same as semijoin. Arguably this is a bug fix, but in view of the lack of field complaints and the risk of destabilizing plans, no back-patch. Thomas Munro, reviewed by Ashutosh Bapat Discussion: https://postgr.es/m/CAEepm=270ze2hVxWkJw-5eKzc3AB4C9KpH3L2kih75R5pdSogg@mail.gmail.com --- src/backend/utils/adt/selfuncs.c | 70 +++++++++++++++++++++++------- src/test/regress/expected/join.out | 22 ++++++++++ src/test/regress/sql/join.sql | 9 ++++ 3 files changed, 85 insertions(+), 16 deletions(-) diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c index edff6da410..ea95b8068d 100644 --- a/src/backend/utils/adt/selfuncs.c +++ b/src/backend/utils/adt/selfuncs.c @@ -2767,29 +2767,67 @@ neqjoinsel(PG_FUNCTION_ARGS) List *args = (List *) PG_GETARG_POINTER(2); JoinType jointype = (JoinType) PG_GETARG_INT16(3); SpecialJoinInfo *sjinfo = (SpecialJoinInfo *) PG_GETARG_POINTER(4); - Oid eqop; float8 result; - /* - * We want 1 - eqjoinsel() where the equality operator is the one - * associated with this != operator, that is, its negator. - */ - eqop = get_negator(operator); - if (eqop) + if (jointype == JOIN_SEMI || jointype == JOIN_ANTI) { - result = DatumGetFloat8(DirectFunctionCall5(eqjoinsel, - PointerGetDatum(root), - ObjectIdGetDatum(eqop), - PointerGetDatum(args), - Int16GetDatum(jointype), - PointerGetDatum(sjinfo))); + /* + * For semi-joins, if there is more than one distinct value in the RHS + * relation then every non-null LHS row must find a row to join since + * it can only be equal to one of them. We'll assume that there is + * always more than one distinct RHS value for the sake of stability, + * though in theory we could have special cases for empty RHS + * (selectivity = 0) and single-distinct-value RHS (selectivity = + * fraction of LHS that has the same value as the single RHS value). + * + * For anti-joins, if we use the same assumption that there is more + * than one distinct key in the RHS relation, then every non-null LHS + * row must be suppressed by the anti-join. + * + * So either way, the selectivity estimate should be 1 - nullfrac. + */ + VariableStatData leftvar; + VariableStatData rightvar; + bool reversed; + HeapTuple statsTuple; + double nullfrac; + + get_join_variables(root, args, sjinfo, &leftvar, &rightvar, &reversed); + statsTuple = reversed ? rightvar.statsTuple : leftvar.statsTuple; + if (HeapTupleIsValid(statsTuple)) + nullfrac = ((Form_pg_statistic) GETSTRUCT(statsTuple))->stanullfrac; + else + nullfrac = 0.0; + ReleaseVariableStats(leftvar); + ReleaseVariableStats(rightvar); + + result = 1.0 - nullfrac; } else { - /* Use default selectivity (should we raise an error instead?) 
*/ - result = DEFAULT_EQ_SEL; + /* + * We want 1 - eqjoinsel() where the equality operator is the one + * associated with this != operator, that is, its negator. + */ + Oid eqop = get_negator(operator); + + if (eqop) + { + result = DatumGetFloat8(DirectFunctionCall5(eqjoinsel, + PointerGetDatum(root), + ObjectIdGetDatum(eqop), + PointerGetDatum(args), + Int16GetDatum(jointype), + PointerGetDatum(sjinfo))); + } + else + { + /* Use default selectivity (should we raise an error instead?) */ + result = DEFAULT_EQ_SEL; + } + result = 1.0 - result; } - result = 1.0 - result; + PG_RETURN_FLOAT8(result); } diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index f02ef3f978..b7d1790097 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -1845,6 +1845,28 @@ SELECT '' AS "xxx", * | 1 | 4 | one | -1 (1 row) +-- +-- semijoin selectivity for <> +-- +explain (costs off) +select * from int4_tbl i4, tenk1 a +where exists(select * from tenk1 b + where a.twothousand = b.twothousand and a.fivethous <> b.fivethous) + and i4.f1 = a.tenthous; + QUERY PLAN +---------------------------------------------- + Hash Semi Join + Hash Cond: (a.twothousand = b.twothousand) + Join Filter: (a.fivethous <> b.fivethous) + -> Hash Join + Hash Cond: (a.tenthous = i4.f1) + -> Seq Scan on tenk1 a + -> Hash + -> Seq Scan on int4_tbl i4 + -> Hash + -> Seq Scan on tenk1 b +(10 rows) + -- -- More complicated constructs -- diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index cbb71c5b1b..c6d4a513e8 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -193,6 +193,15 @@ SELECT '' AS "xxx", * SELECT '' AS "xxx", * FROM J1_TBL LEFT JOIN J2_TBL USING (i) WHERE (i = 1); +-- +-- semijoin selectivity for <> +-- +explain (costs off) +select * from int4_tbl i4, tenk1 a +where exists(select * from tenk1 b + where a.twothousand = b.twothousand and a.fivethous <> b.fivethous) + and i4.f1 = a.tenthous; + -- -- More complicated constructs From e21a556e136973cea95852b91fe1d72c7626bc34 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Thu, 30 Nov 2017 00:57:22 -0800 Subject: [PATCH 0638/1087] Fix non-GNU makefiles for AIX make. Invoking the Makefile without an explicit target was building every possible target instead of just the "all" target. Back-patch to 9.3 (all supported versions). --- Makefile | 4 ++++ src/test/regress/Makefile | 5 +++++ 2 files changed, 9 insertions(+) diff --git a/Makefile b/Makefile index 4c68950e90..c400854cd3 100644 --- a/Makefile +++ b/Makefile @@ -11,6 +11,10 @@ # GNUmakefile won't exist yet, so we catch that case as well. +# AIX make defaults to building *every* target of the first rule. Start with +# a single-target, empty rule to make the other targets non-default. +all: + all check install installdirs installcheck installcheck-parallel uninstall clean distclean maintainer-clean dist distcheck world check-world install-world installcheck-world: @if [ ! -f GNUmakefile ] ; then \ echo "You need to run the 'configure' program first. See the file"; \ diff --git a/src/test/regress/Makefile b/src/test/regress/Makefile index 6409a485e8..7c665ff892 100644 --- a/src/test/regress/Makefile +++ b/src/test/regress/Makefile @@ -7,6 +7,11 @@ # GNU make uses a make file named "GNUmakefile" in preference to "Makefile" # if it exists. Postgres is shipped with a "GNUmakefile". + +# AIX make defaults to building *every* target of the first rule. 
Start with +# a single-target, empty rule to make the other targets non-default. +all: + all install clean check installcheck: @echo "You must use GNU make to use Postgres. It may be installed" @echo "on your system with the name 'gmake'." From 1761653bbb17447906c812c347b3fe284ce699cf Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 30 Nov 2017 09:50:10 -0500 Subject: [PATCH 0639/1087] Make create_unique_path manage memory like mark_dummy_rel. Put the unique path in the same context as the owning RelOptInfo, rather than the toplevel planner context. This is how this function worked originally, but commit f41803bb39bc2949db200116a609fd242d0ec221 changed it without explanation. mark_dummy_rel adopted the older (or newer?) technique in commit eca75a12a27d28b972fc269c1c8813cd8eb15441, which also featured a much better explanation of why it is correct. So, switch back to that technique here, with the same explanation given there. Although this fixes a possible memory leak when GEQO is in use, the leak is minor and probably nobody cares, so no back-patch. Ashutosh Bapat, reviewed by Tom Lane and by me Discussion: http://postgr.es/m/CAFjFpRcXkHHrXyD9BCvkgGJV4TnHG2SWJ0PhJfrDu3NAcQvh7g@mail.gmail.com --- src/backend/optimizer/util/pathnode.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 02bbbc0647..bc0841bf0b 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -1462,10 +1462,16 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath, return NULL; /* - * We must ensure path struct and subsidiary data are allocated in main - * planning context; otherwise GEQO memory management causes trouble. + * When called during GEQO join planning, we are in a short-lived memory + * context. We must make sure that the path and any subsidiary data + * structures created for a baserel survive the GEQO cycle, else the + * baserel is trashed for future GEQO cycles. On the other hand, when we + * are creating those for a joinrel during GEQO, we don't want them to + * clutter the main planning context. Upshot is that the best solution is + * to explicitly allocate memory in the same context the given RelOptInfo + * is in. */ - oldcontext = MemoryContextSwitchTo(root->planner_cxt); + oldcontext = MemoryContextSwitchTo(GetMemoryChunkContext(rel)); pathnode = makeNode(UniquePath); From e4128ee767df3c8c715eb08f8977647ae49dfb59 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 30 Nov 2017 08:46:13 -0500 Subject: [PATCH 0640/1087] SQL procedures This adds a new object type "procedure" that is similar to a function but does not have a return type and is invoked by the new CALL statement instead of SELECT or similar. This implementation is aligned with the SQL standard and compatible with or similar to other SQL implementations. This commit adds new commands CALL, CREATE/ALTER/DROP PROCEDURE, as well as ALTER/DROP ROUTINE that can refer to either a function or a procedure (or an aggregate function, as an extension to SQL). There is also support for procedures in various utility commands such as COMMENT and GRANT, as well as support in pg_dump and psql. Support for defining procedures is available in all the languages supplied by the core distribution. While this commit is mainly syntax sugar around existing functionality, future features will rely on having procedures as a separate object type. 
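A minimal client-side sketch of the new object type in use, via libpq; the
connection string and table are invented for illustration, and the procedure
reuses the insert_data name from the ALTER PROCEDURE examples later in this
patch.  Build with -lpq.

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    static void
    run(PGconn *conn, const char *sql)
    {
        PGresult   *res = PQexec(conn, sql);

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
    }

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return EXIT_FAILURE;
        }

        run(conn, "CREATE TABLE tbl (a int)");

        /* a procedure has no return type and is invoked with CALL */
        run(conn, "CREATE PROCEDURE insert_data(a integer, b integer) "
                  "LANGUAGE SQL "
                  "AS $$ INSERT INTO tbl VALUES (a); "
                  "INSERT INTO tbl VALUES (b); $$");
        run(conn, "CALL insert_data(1, 2)");

        PQfinish(conn);
        return EXIT_SUCCESS;
    }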
Reviewed-by: Andrew Dunstan --- doc/src/sgml/catalogs.sgml | 2 +- doc/src/sgml/ddl.sgml | 2 +- doc/src/sgml/ecpg.sgml | 4 +- doc/src/sgml/information_schema.sgml | 18 +- doc/src/sgml/plperl.sgml | 4 + doc/src/sgml/plpgsql.sgml | 17 +- doc/src/sgml/plpython.sgml | 6 +- doc/src/sgml/pltcl.sgml | 3 +- doc/src/sgml/ref/allfiles.sgml | 6 + .../sgml/ref/alter_default_privileges.sgml | 12 +- doc/src/sgml/ref/alter_extension.sgml | 12 +- doc/src/sgml/ref/alter_function.sgml | 2 + doc/src/sgml/ref/alter_procedure.sgml | 281 +++++++++++++++ doc/src/sgml/ref/alter_routine.sgml | 102 ++++++ doc/src/sgml/ref/call.sgml | 97 +++++ doc/src/sgml/ref/comment.sgml | 13 +- doc/src/sgml/ref/create_function.sgml | 10 +- doc/src/sgml/ref/create_procedure.sgml | 341 ++++++++++++++++++ doc/src/sgml/ref/drop_function.sgml | 2 + doc/src/sgml/ref/drop_procedure.sgml | 162 +++++++++ doc/src/sgml/ref/drop_routine.sgml | 94 +++++ doc/src/sgml/ref/grant.sgml | 25 +- doc/src/sgml/ref/revoke.sgml | 4 +- doc/src/sgml/ref/security_label.sgml | 12 +- doc/src/sgml/reference.sgml | 6 + doc/src/sgml/xfunc.sgml | 33 ++ src/backend/catalog/aclchk.c | 68 +++- src/backend/catalog/information_schema.sql | 25 +- src/backend/catalog/objectaddress.c | 19 +- src/backend/catalog/pg_proc.c | 3 +- src/backend/commands/aggregatecmds.c | 2 +- src/backend/commands/alter.c | 6 + src/backend/commands/dropcmds.c | 38 +- src/backend/commands/event_trigger.c | 14 + src/backend/commands/functioncmds.c | 164 ++++++++- src/backend/commands/opclasscmds.c | 4 +- src/backend/executor/functions.c | 15 +- src/backend/nodes/copyfuncs.c | 15 + src/backend/nodes/equalfuncs.c | 13 + src/backend/optimizer/util/clauses.c | 1 + src/backend/parser/gram.y | 255 ++++++++++++- src/backend/parser/parse_agg.c | 11 + src/backend/parser/parse_expr.c | 8 + src/backend/parser/parse_func.c | 201 +++++++---- src/backend/tcop/utility.c | 44 ++- src/backend/utils/adt/ruleutils.c | 6 + src/backend/utils/cache/lsyscache.c | 19 + src/bin/pg_dump/dumputils.c | 5 +- src/bin/pg_dump/pg_backup_archiver.c | 7 +- src/bin/pg_dump/pg_dump.c | 32 +- src/bin/pg_dump/t/002_pg_dump.pl | 38 ++ src/bin/psql/describe.c | 8 +- src/bin/psql/tab-complete.c | 77 +++- src/include/catalog/catversion.h | 2 +- src/include/commands/defrem.h | 3 +- src/include/nodes/nodes.h | 1 + src/include/nodes/parsenodes.h | 16 + src/include/parser/kwlist.h | 4 + src/include/parser/parse_func.h | 8 +- src/include/parser/parse_node.h | 3 +- src/include/utils/lsyscache.h | 1 + src/interfaces/ecpg/preproc/ecpg.tokens | 2 +- src/interfaces/ecpg/preproc/ecpg.trailer | 5 +- src/interfaces/ecpg/preproc/ecpg_keywords.c | 1 - src/pl/plperl/GNUmakefile | 2 +- src/pl/plperl/expected/plperl_call.out | 29 ++ src/pl/plperl/plperl.c | 8 +- src/pl/plperl/sql/plperl_call.sql | 36 ++ src/pl/plpgsql/src/pl_comp.c | 88 ++--- src/pl/plpgsql/src/pl_exec.c | 8 +- src/pl/plpython/Makefile | 1 + src/pl/plpython/expected/plpython_call.out | 35 ++ src/pl/plpython/plpy_exec.c | 14 +- src/pl/plpython/plpy_main.c | 10 +- src/pl/plpython/plpy_procedure.c | 5 +- src/pl/plpython/plpy_procedure.h | 3 +- src/pl/plpython/sql/plpython_call.sql | 41 +++ src/pl/tcl/Makefile | 2 +- src/pl/tcl/expected/pltcl_call.out | 29 ++ src/pl/tcl/pltcl.c | 13 +- src/pl/tcl/sql/pltcl_call.sql | 36 ++ .../regress/expected/create_procedure.out | 92 +++++ src/test/regress/expected/object_address.out | 15 +- src/test/regress/expected/plpgsql.out | 41 +++ src/test/regress/expected/polymorphism.out | 16 +- src/test/regress/expected/privileges.out | 128 ++++++- 
src/test/regress/parallel_schedule | 2 +- src/test/regress/serial_schedule | 1 + src/test/regress/sql/create_procedure.sql | 79 ++++ src/test/regress/sql/object_address.sql | 4 +- src/test/regress/sql/plpgsql.sql | 49 +++ src/test/regress/sql/privileges.sql | 55 ++- 92 files changed, 2951 insertions(+), 305 deletions(-) create mode 100644 doc/src/sgml/ref/alter_procedure.sgml create mode 100644 doc/src/sgml/ref/alter_routine.sgml create mode 100644 doc/src/sgml/ref/call.sgml create mode 100644 doc/src/sgml/ref/create_procedure.sgml create mode 100644 doc/src/sgml/ref/drop_procedure.sgml create mode 100644 doc/src/sgml/ref/drop_routine.sgml create mode 100644 src/pl/plperl/expected/plperl_call.out create mode 100644 src/pl/plperl/sql/plperl_call.sql create mode 100644 src/pl/plpython/expected/plpython_call.out create mode 100644 src/pl/plpython/sql/plpython_call.sql create mode 100644 src/pl/tcl/expected/pltcl_call.out create mode 100644 src/pl/tcl/sql/pltcl_call.sql create mode 100644 src/test/regress/expected/create_procedure.out create mode 100644 src/test/regress/sql/create_procedure.sql diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index da881a7737..3f02202caf 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -5241,7 +5241,7 @@ SCRAM-SHA-256$<iteration count>:&l prorettype oid pg_type.oid - Data type of the return value + Data type of the return value, or null for a procedure diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index e6f50ec819..9f583266de 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -3947,7 +3947,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; - Functions and operators + Functions, procedures, and operators diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index d1872c1a5c..5a8d1f1b95 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -4778,7 +4778,9 @@ EXEC SQL WHENEVER condition actionDO name (args) - Call the specified C functions with the specified arguments. + Call the specified C functions with the specified arguments. (This + use is different from the meaning of CALL + and DO in the normal PostgreSQL grammar.) diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml index 99b0ea8519..0faa72f1d3 100644 --- a/doc/src/sgml/information_schema.sgml +++ b/doc/src/sgml/information_schema.sgml @@ -3972,8 +3972,8 @@ ORDER BY c.ordinal_position; <literal>routines</literal> - The view routines contains all functions in the - current database. Only those functions are shown that the current + The view routines contains all functions and procedures in the + current database. Only those functions and procedures are shown that the current user has access to (by way of being the owner or having some privilege). @@ -4037,8 +4037,8 @@ ORDER BY c.ordinal_position; routine_type character_data - Always FUNCTION (In the future there might - be other types of routines.) + FUNCTION for a + function, PROCEDURE for a procedure @@ -4087,7 +4087,7 @@ ORDER BY c.ordinal_position; the view element_types), else USER-DEFINED (in that case, the type is identified in type_udt_name and associated - columns). + columns). Null for a procedure. @@ -4180,7 +4180,7 @@ ORDER BY c.ordinal_position; sql_identifier Name of the database that the return data type of the function - is defined in (always the current database) + is defined in (always the current database). Null for a procedure. 
@@ -4189,7 +4189,7 @@ ORDER BY c.ordinal_position; sql_identifier Name of the schema that the return data type of the function is - defined in + defined in. Null for a procedure. @@ -4197,7 +4197,7 @@ ORDER BY c.ordinal_position; type_udt_name sql_identifier - Name of the return data type of the function + Name of the return data type of the function. Null for a procedure. @@ -4314,7 +4314,7 @@ ORDER BY c.ordinal_position; If the function automatically returns null if any of its arguments are null, then YES, else - NO. + NO. Null for a procedure. diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml index 33e39d85e4..100162dead 100644 --- a/doc/src/sgml/plperl.sgml +++ b/doc/src/sgml/plperl.sgml @@ -67,6 +67,10 @@ $$ LANGUAGE plperl; as discussed below. + + In a PL/Perl procedure, any return value from the Perl code is ignored. + + PL/Perl also supports anonymous code blocks called with the statement: diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index 6d14b34448..7d23ed437e 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -156,7 +156,8 @@ Finally, a PL/pgSQL function can be declared to return - void if it has no useful return value. + void if it has no useful return value. (Alternatively, it + could be written as a procedure in that case.) @@ -1865,6 +1866,18 @@ SELECT * FROM get_available_flightid(CURRENT_DATE); + + Returning From a Procedure + + + A procedure does not have a return value. A procedure can therefore end + without a RETURN statement. If + a RETURN statement is desired to exit the code early, + then NULL must be returned. Returning any other value + will result in an error. + + + Conditionals @@ -5244,7 +5257,7 @@ show errors; Here is how this function would end up in PostgreSQL: -CREATE OR REPLACE FUNCTION cs_update_referrer_type_proc() RETURNS void AS $func$ +CREATE OR REPLACE PROCEDURE cs_update_referrer_type_proc() AS $func$ DECLARE referrer_keys CURSOR IS SELECT * FROM cs_referrer_keys diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml index ec5f671632..0dbeee1fa2 100644 --- a/doc/src/sgml/plpython.sgml +++ b/doc/src/sgml/plpython.sgml @@ -207,7 +207,11 @@ $$ LANGUAGE plpythonu; yield (in case of a result-set statement). If you do not provide a return value, Python returns the default None. PL/Python translates - Python's None into the SQL null value. + Python's None into the SQL null value. In a procedure, + the result from the Python code must be None (typically + achieved by ending the procedure without a return + statement or by using a return statement without + argument); otherwise, an error will be raised. diff --git a/doc/src/sgml/pltcl.sgml b/doc/src/sgml/pltcl.sgml index 0646a8ba0b..8018783b0a 100644 --- a/doc/src/sgml/pltcl.sgml +++ b/doc/src/sgml/pltcl.sgml @@ -97,7 +97,8 @@ $$ LANGUAGE pltcl; Tcl script as variables named 1 ... n. The result is returned from the Tcl code in the usual way, with - a return statement. + a return statement. In a procedure, the return value + from the Tcl code is ignored. diff --git a/doc/src/sgml/ref/allfiles.sgml b/doc/src/sgml/ref/allfiles.sgml index 01acc2ef9d..22e6893211 100644 --- a/doc/src/sgml/ref/allfiles.sgml +++ b/doc/src/sgml/ref/allfiles.sgml @@ -26,8 +26,10 @@ Complete list of usable sgml source files in this directory. + + @@ -48,6 +50,7 @@ Complete list of usable sgml source files in this directory. + @@ -75,6 +78,7 @@ Complete list of usable sgml source files in this directory. 
+ @@ -122,8 +126,10 @@ Complete list of usable sgml source files in this directory. + + diff --git a/doc/src/sgml/ref/alter_default_privileges.sgml b/doc/src/sgml/ref/alter_default_privileges.sgml index ab2c35b4dd..0c09f1db5c 100644 --- a/doc/src/sgml/ref/alter_default_privileges.sgml +++ b/doc/src/sgml/ref/alter_default_privileges.sgml @@ -39,7 +39,7 @@ GRANT { { USAGE | SELECT | UPDATE } TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] GRANT { EXECUTE | ALL [ PRIVILEGES ] } - ON FUNCTIONS + ON { FUNCTIONS | ROUTINES } TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | ALL [ PRIVILEGES ] } @@ -66,7 +66,7 @@ REVOKE [ GRANT OPTION FOR ] REVOKE [ GRANT OPTION FOR ] { EXECUTE | ALL [ PRIVILEGES ] } - ON FUNCTIONS + ON { FUNCTIONS | ROUTINES } FROM { [ GROUP ] role_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ] @@ -93,7 +93,13 @@ REVOKE [ GRANT OPTION FOR ] affect privileges assigned to already-existing objects.) Currently, only the privileges for schemas, tables (including views and foreign tables), sequences, functions, and types (including domains) can be - altered. + altered. For this command, functions include aggregates and procedures. + The words FUNCTIONS and ROUTINES are + equivalent in this command. (ROUTINES is preferred + going forward as the standard term for functions and procedures taken + together. In earlier PostgreSQL releases, only the + word FUNCTIONS was allowed. It is not possible to set + default privileges for functions and procedures separately.) diff --git a/doc/src/sgml/ref/alter_extension.sgml b/doc/src/sgml/ref/alter_extension.sgml index e54925507e..a2d405d6cd 100644 --- a/doc/src/sgml/ref/alter_extension.sgml +++ b/doc/src/sgml/ref/alter_extension.sgml @@ -45,6 +45,8 @@ ALTER EXTENSION name DROP object_name USING index_method | OPERATOR FAMILY object_name USING index_method | [ PROCEDURAL ] LANGUAGE object_name | + PROCEDURE procedure_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] | + ROUTINE routine_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] | SCHEMA object_name | SEQUENCE object_name | SERVER object_name | @@ -170,12 +172,14 @@ ALTER EXTENSION name DROP aggregate_name function_name operator_name + procedure_name + routine_name The name of an object to be added to or removed from the extension. Names of tables, aggregates, domains, foreign tables, functions, operators, - operator classes, operator families, sequences, text search objects, + operator classes, operator families, procedures, routines, sequences, text search objects, types, and views can be schema-qualified. @@ -204,7 +208,7 @@ ALTER EXTENSION name DROP - The mode of a function or aggregate + The mode of a function, procedure, or aggregate argument: IN, OUT, INOUT, or VARIADIC. If omitted, the default is IN. @@ -222,7 +226,7 @@ ALTER EXTENSION name DROP - The name of a function or aggregate argument. + The name of a function, procedure, or aggregate argument. Note that ALTER EXTENSION does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity. @@ -235,7 +239,7 @@ ALTER EXTENSION name DROP - The data type of a function or aggregate argument. + The data type of a function, procedure, or aggregate argument. 
diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml index 196d2dde0c..d8747e0748 100644 --- a/doc/src/sgml/ref/alter_function.sgml +++ b/doc/src/sgml/ref/alter_function.sgml @@ -359,6 +359,8 @@ ALTER FUNCTION check_password(text) RESET search_path; + + diff --git a/doc/src/sgml/ref/alter_procedure.sgml b/doc/src/sgml/ref/alter_procedure.sgml new file mode 100644 index 0000000000..dae80076d9 --- /dev/null +++ b/doc/src/sgml/ref/alter_procedure.sgml @@ -0,0 +1,281 @@ + + + + + ALTER PROCEDURE + + + + ALTER PROCEDURE + 7 + SQL - Language Statements + + + + ALTER PROCEDURE + change the definition of a procedure + + + + +ALTER PROCEDURE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + action [ ... ] [ RESTRICT ] +ALTER PROCEDURE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + RENAME TO new_name +ALTER PROCEDURE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER PROCEDURE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + SET SCHEMA new_schema +ALTER PROCEDURE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + DEPENDS ON EXTENSION extension_name + +where action is one of: + + [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER + SET configuration_parameter { TO | = } { value | DEFAULT } + SET configuration_parameter FROM CURRENT + RESET configuration_parameter + RESET ALL + + + + + Description + + + ALTER PROCEDURE changes the definition of a + procedure. + + + + You must own the procedure to use ALTER PROCEDURE. + To change a procedure's schema, you must also have CREATE + privilege on the new schema. + To alter the owner, you must also be a direct or indirect member of the new + owning role, and that role must have CREATE privilege on + the procedure's schema. (These restrictions enforce that altering the owner + doesn't do anything you couldn't do by dropping and recreating the procedure. + However, a superuser can alter ownership of any procedure anyway.) + + + + + Parameters + + + + name + + + The name (optionally schema-qualified) of an existing procedure. If no + argument list is specified, the name must be unique in its schema. + + + + + + argmode + + + + The mode of an argument: IN or VARIADIC. + If omitted, the default is IN. + + + + + + argname + + + + The name of an argument. + Note that ALTER PROCEDURE does not actually pay + any attention to argument names, since only the argument data + types are needed to determine the procedure's identity. + + + + + + argtype + + + + The data type(s) of the procedure's arguments (optionally + schema-qualified), if any. + + + + + + new_name + + + The new name of the procedure. + + + + + + new_owner + + + The new owner of the procedure. Note that if the procedure is + marked SECURITY DEFINER, it will subsequently + execute as the new owner. + + + + + + new_schema + + + The new schema for the procedure. + + + + + + extension_name + + + The name of the extension that the procedure is to depend on. + + + + + + EXTERNAL SECURITY INVOKER + EXTERNAL SECURITY DEFINER + + + + Change whether the procedure is a security definer or not. The + key word EXTERNAL is ignored for SQL + conformance. See for more information about + this capability. + + + + + + configuration_parameter + value + + + Add or change the assignment to be made to a configuration parameter + when the procedure is called. 
If + value is DEFAULT + or, equivalently, RESET is used, the procedure-local + setting is removed, so that the procedure executes with the value + present in its environment. Use RESET + ALL to clear all procedure-local settings. + SET FROM CURRENT saves the value of the parameter that + is current when ALTER PROCEDURE is executed as the value + to be applied when the procedure is entered. + + + + See and + + for more information about allowed parameter names and values. + + + + + + RESTRICT + + + + Ignored for conformance with the SQL standard. + + + + + + + + Examples + + + To rename the procedure insert_data with two arguments + of type integer to insert_record: + +ALTER PROCEDURE insert_data(integer, integer) RENAME TO insert_record; + + + + + To change the owner of the procedure insert_data with + two arguments of type integer to joe: + +ALTER PROCEDURE insert_data(integer, integer) OWNER TO joe; + + + + + To change the schema of the procedure insert_data with + two arguments of type integer + to accounting: + +ALTER PROCEDURE insert_data(integer, integer) SET SCHEMA accounting; + + + + + To mark the procedure insert_data(integer, integer) as + being dependent on the extension myext: + +ALTER PROCEDURE insert_data(integer, integer) DEPENDS ON EXTENSION myext; + + + + + To adjust the search path that is automatically set for a procedure: + +ALTER PROCEDURE check_password(text) SET search_path = admin, pg_temp; + + + + + To disable automatic setting of search_path for a procedure: + +ALTER PROCEDURE check_password(text) RESET search_path; + + The procedure will now execute with whatever search path is used by its + caller. + + + + + Compatibility + + + This statement is partially compatible with the ALTER + PROCEDURE statement in the SQL standard. The standard allows more + properties of a procedure to be modified, but does not provide the + ability to rename a procedure, make a procedure a security definer, + attach configuration parameter values to a procedure, + or change the owner, schema, or volatility of a procedure. The standard also + requires the RESTRICT key word, which is optional in + PostgreSQL. + + + + + See Also + + + + + + + + + diff --git a/doc/src/sgml/ref/alter_routine.sgml b/doc/src/sgml/ref/alter_routine.sgml new file mode 100644 index 0000000000..d1699691e1 --- /dev/null +++ b/doc/src/sgml/ref/alter_routine.sgml @@ -0,0 +1,102 @@ + + + + + ALTER ROUTINE + + + + ALTER ROUTINE + 7 + SQL - Language Statements + + + + ALTER ROUTINE + change the definition of a routine + + + + +ALTER ROUTINE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + action [ ... ] [ RESTRICT ] +ALTER ROUTINE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + RENAME TO new_name +ALTER ROUTINE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + OWNER TO { new_owner | CURRENT_USER | SESSION_USER } +ALTER ROUTINE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] + SET SCHEMA new_schema +ALTER ROUTINE name [ ( [ [ argmode ] [ argname ] argtype [, ...] 
] ) ] + DEPENDS ON EXTENSION extension_name + +where action is one of: + + IMMUTABLE | STABLE | VOLATILE | [ NOT ] LEAKPROOF + [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER + PARALLEL { UNSAFE | RESTRICTED | SAFE } + COST execution_cost + ROWS result_rows + SET configuration_parameter { TO | = } { value | DEFAULT } + SET configuration_parameter FROM CURRENT + RESET configuration_parameter + RESET ALL + + + + + Description + + + ALTER ROUTINE changes the definition of a routine, which + can be an aggregate function, a normal function, or a procedure. See + under , , + and for the description of the + parameters, more examples, and further details. + + + + + Examples + + + To rename the routine foo for type + integer to foobar: + +ALTER ROUTINE foo(integer) RENAME TO foobar; + + This command will work independent of whether foo is an + aggregate, function, or procedure. + + + + + Compatibility + + + This statement is partially compatible with the ALTER + ROUTINE statement in the SQL standard. See + under + and for more details. Allowing + routine names to refer to aggregate functions is + a PostgreSQL extension. + + + + + See Also + + + + + + + + + + Note that there is no CREATE ROUTINE command. + + + diff --git a/doc/src/sgml/ref/call.sgml b/doc/src/sgml/ref/call.sgml new file mode 100644 index 0000000000..2741d8d15e --- /dev/null +++ b/doc/src/sgml/ref/call.sgml @@ -0,0 +1,97 @@ + + + + + CALL + + + + CALL + 7 + SQL - Language Statements + + + + CALL + invoke a procedure + + + + +CALL name ( [ argument ] [ , ...] ) + + + + + Description + + + CALL executes a procedure. + + + + + Parameters + + + + name + + + The name (optionally schema-qualified) of the procedure. + + + + + + argument + + + An argument for the procedure call. + See for the full details on + function and procedure call syntax, including use of named parameters. + + + + + + + + Notes + + + The user must have EXECUTE privilege on the procedure in + order to be allowed to invoke it. + + + + To call a function (not a procedure), use SELECT instead. + + + + + Examples + +CALL do_db_maintenance(); + + + + + Compatibility + + + CALL conforms to the SQL standard. + + + + + See Also + + + + + + diff --git a/doc/src/sgml/ref/comment.sgml b/doc/src/sgml/ref/comment.sgml index 7d66c1a34c..965c5a40ad 100644 --- a/doc/src/sgml/ref/comment.sgml +++ b/doc/src/sgml/ref/comment.sgml @@ -46,8 +46,10 @@ COMMENT ON OPERATOR FAMILY object_name USING index_method | POLICY policy_name ON table_name | [ PROCEDURAL ] LANGUAGE object_name | + PROCEDURE procedure_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] | PUBLICATION object_name | ROLE object_name | + ROUTINE routine_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] | RULE rule_name ON table_name | SCHEMA object_name | SEQUENCE object_name | @@ -121,13 +123,15 @@ COMMENT ON function_name operator_name policy_name + procedure_name + routine_name rule_name trigger_name The name of the object to be commented. Names of tables, aggregates, collations, conversions, domains, foreign tables, functions, - indexes, operators, operator classes, operator families, sequences, + indexes, operators, operator classes, operator families, procedures, routines, sequences, statistics, text search objects, types, and views can be schema-qualified. When commenting on a column, relation_name must refer @@ -170,7 +174,7 @@ COMMENT ON argmode - The mode of a function or aggregate + The mode of a function, procedure, or aggregate argument: IN, OUT, INOUT, or VARIADIC. 
If omitted, the default is IN. @@ -187,7 +191,7 @@ COMMENT ON argname - The name of a function or aggregate argument. + The name of a function, procedure, or aggregate argument. Note that COMMENT does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity. @@ -199,7 +203,7 @@ COMMENT ON argtype - The data type of a function or aggregate argument. + The data type of a function, procedure, or aggregate argument. @@ -325,6 +329,7 @@ COMMENT ON OPERATOR - (NONE, integer) IS 'Unary minus'; COMMENT ON OPERATOR CLASS int4ops USING btree IS '4 byte integer operators for btrees'; COMMENT ON OPERATOR FAMILY integer_ops USING btree IS 'all integer operators for btrees'; COMMENT ON POLICY my_policy ON mytable IS 'Filter rows by users'; +COMMENT ON PROCEDURE my_proc (integer, integer) IS 'Runs a report'; COMMENT ON ROLE my_role IS 'Administration group for finance tables'; COMMENT ON RULE my_rule ON my_table IS 'Logs updates of employee records'; COMMENT ON SCHEMA my_schema IS 'Departmental data'; diff --git a/doc/src/sgml/ref/create_function.sgml b/doc/src/sgml/ref/create_function.sgml index 75331165fe..fd229d1193 100644 --- a/doc/src/sgml/ref/create_function.sgml +++ b/doc/src/sgml/ref/create_function.sgml @@ -55,9 +55,9 @@ CREATE [ OR REPLACE ] FUNCTION If a schema name is included, then the function is created in the specified schema. Otherwise it is created in the current schema. - The name of the new function must not match any existing function + The name of the new function must not match any existing function or procedure with the same input argument types in the same schema. However, - functions of different argument types can share a name (this is + functions and procedures of different argument types can share a name (this is called overloading). @@ -450,7 +450,7 @@ CREATE [ OR REPLACE ] FUNCTION - execution_cost + COST execution_cost @@ -466,7 +466,7 @@ CREATE [ OR REPLACE ] FUNCTION - result_rows + ROWS result_rows @@ -818,7 +818,7 @@ COMMIT; Compatibility - A CREATE FUNCTION command is defined in SQL:1999 and later. + A CREATE FUNCTION command is defined in the SQL standard. The PostgreSQL version is similar but not fully compatible. The attributes are not portable, neither are the different available languages. diff --git a/doc/src/sgml/ref/create_procedure.sgml b/doc/src/sgml/ref/create_procedure.sgml new file mode 100644 index 0000000000..d712043824 --- /dev/null +++ b/doc/src/sgml/ref/create_procedure.sgml @@ -0,0 +1,341 @@ + + + + + CREATE PROCEDURE + + + + CREATE PROCEDURE + 7 + SQL - Language Statements + + + + CREATE PROCEDURE + define a new procedure + + + + +CREATE [ OR REPLACE ] PROCEDURE + name ( [ [ argmode ] [ argname ] argtype [ { DEFAULT | = } default_expr ] [, ...] ] ) + { LANGUAGE lang_name + | TRANSFORM { FOR TYPE type_name } [, ... ] + | [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER + | SET configuration_parameter { TO value | = value | FROM CURRENT } + | AS 'definition' + | AS 'obj_file', 'link_symbol' + } ... + + + + + Description + + + CREATE PROCEDURE defines a new procedure. + CREATE OR REPLACE PROCEDURE will either create a + new procedure, or replace an existing definition. + To be able to define a procedure, the user must have the + USAGE privilege on the language. + + + + If a schema name is included, then the procedure is created in the + specified schema. Otherwise it is created in the current schema. 
+ The name of the new procedure must not match any existing procedure or function + with the same input argument types in the same schema. However, + procedures and functions of different argument types can share a name (this is + called overloading). + + + + To replace the current definition of an existing procedure, use + CREATE OR REPLACE PROCEDURE. It is not possible + to change the name or argument types of a procedure this way (if you + tried, you would actually be creating a new, distinct procedure). + + + + When CREATE OR REPLACE PROCEDURE is used to replace an + existing procedure, the ownership and permissions of the procedure + do not change. All other procedure properties are assigned the + values specified or implied in the command. You must own the procedure + to replace it (this includes being a member of the owning role). + + + + The user that creates the procedure becomes the owner of the procedure. + + + + To be able to create a procedure, you must have USAGE + privilege on the argument types. + + + + + Parameters + + + + name + + + + The name (optionally schema-qualified) of the procedure to create. + + + + + + argmode + + + + The mode of an argument: IN or VARIADIC. + If omitted, the default is IN. + + + + + + argname + + + + The name of an argument. + + + + + + argtype + + + + The data type(s) of the procedure's arguments (optionally + schema-qualified), if any. The argument types can be base, composite, + or domain types, or can reference the type of a table column. + + + Depending on the implementation language it might also be allowed + to specify pseudo-types such as cstring. + Pseudo-types indicate that the actual argument type is either + incompletely specified, or outside the set of ordinary SQL data types. + + + The type of a column is referenced by writing + table_name.column_name%TYPE. + Using this feature can sometimes help make a procedure independent of + changes to the definition of a table. + + + + + + default_expr + + + + An expression to be used as default value if the parameter is + not specified. The expression has to be coercible to the + argument type of the parameter. + All input parameters following a + parameter with a default value must have default values as well. + + + + + + lang_name + + + + The name of the language that the procedure is implemented in. + It can be sql, c, + internal, or the name of a user-defined + procedural language, e.g. plpgsql. Enclosing the + name in single quotes is deprecated and requires matching case. + + + + + + TRANSFORM { FOR TYPE type_name } [, ... ] } + + + + Lists which transforms a call to the procedure should apply. Transforms + convert between SQL types and language-specific data types; + see . Procedural language + implementations usually have hardcoded knowledge of the built-in types, + so those don't need to be listed here. If a procedural language + implementation does not know how to handle a type and no transform is + supplied, it will fall back to a default behavior for converting data + types, but this depends on the implementation. + + + + + + EXTERNAL SECURITY INVOKER + EXTERNAL SECURITY DEFINER + + + SECURITY INVOKER indicates that the procedure + is to be executed with the privileges of the user that calls it. + That is the default. SECURITY DEFINER + specifies that the procedure is to be executed with the + privileges of the user that owns it. 
+ + + + The key word EXTERNAL is allowed for SQL + conformance, but it is optional since, unlike in SQL, this feature + applies to all procedures not only external ones. + + + + + + configuration_parameter + value + + + The SET clause causes the specified configuration + parameter to be set to the specified value when the procedure is + entered, and then restored to its prior value when the procedure exits. + SET FROM CURRENT saves the value of the parameter that + is current when CREATE PROCEDURE is executed as the value + to be applied when the procedure is entered. + + + + If a SET clause is attached to a procedure, then + the effects of a SET LOCAL command executed inside the + procedure for the same variable are restricted to the procedure: the + configuration parameter's prior value is still restored at procedure exit. + However, an ordinary + SET command (without LOCAL) overrides the + SET clause, much as it would do for a previous SET + LOCAL command: the effects of such a command will persist after + procedure exit, unless the current transaction is rolled back. + + + + See and + + for more information about allowed parameter names and values. + + + + + + definition + + + + A string constant defining the procedure; the meaning depends on the + language. It can be an internal procedure name, the path to an + object file, an SQL command, or text in a procedural language. + + + + It is often helpful to use dollar quoting (see ) to write the procedure definition + string, rather than the normal single quote syntax. Without dollar + quoting, any single quotes or backslashes in the procedure definition must + be escaped by doubling them. + + + + + + + obj_file, link_symbol + + + + This form of the AS clause is used for + dynamically loadable C language procedures when the procedure name + in the C language source code is not the same as the name of + the SQL procedure. The string obj_file is the name of the shared + library file containing the compiled C procedure, and is interpreted + as for the command. The string + link_symbol is the + procedure's link symbol, that is, the name of the procedure in the C + language source code. If the link symbol is omitted, it is assumed + to be the same as the name of the SQL procedure being defined. + + + + When repeated CREATE PROCEDURE calls refer to + the same object file, the file is only loaded once per session. + To unload and + reload the file (perhaps during development), start a new session. + + + + + + + + + Notes + + + See for more details on function + creation that also apply to procedures. + + + + Use to execute a procedure. + + + + + Examples + + +CREATE PROCEDURE insert_data(a integer, b integer) +LANGUAGE SQL +AS $$ +INSERT INTO tbl VALUES (a); +INSERT INTO tbl VALUES (b); +$$; + +CALL insert_data(1, 2); + + + + + Compatibility + + + A CREATE PROCEDURE command is defined in the SQL + standard. The PostgreSQL version is similar but + not fully compatible. For details see + also . 
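A minimal sketch combining the options described above (all names are hypothetical); the attached search_path setting takes effect when the procedure is entered and is restored when it exits:

CREATE OR REPLACE PROCEDURE maintain_accounts()
LANGUAGE SQL
SECURITY DEFINER
SET search_path = admin, pg_temp
AS $$
-- admin.stale_accounts is a hypothetical table
DELETE FROM admin.stale_accounts;
$$;

CALL maintain_accounts();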
+ + + + + + See Also + + + + + + + + + + diff --git a/doc/src/sgml/ref/drop_function.sgml b/doc/src/sgml/ref/drop_function.sgml index eda1a59c84..127fdfe419 100644 --- a/doc/src/sgml/ref/drop_function.sgml +++ b/doc/src/sgml/ref/drop_function.sgml @@ -185,6 +185,8 @@ DROP FUNCTION update_employee_salaries(); + + diff --git a/doc/src/sgml/ref/drop_procedure.sgml b/doc/src/sgml/ref/drop_procedure.sgml new file mode 100644 index 0000000000..fef61b66ac --- /dev/null +++ b/doc/src/sgml/ref/drop_procedure.sgml @@ -0,0 +1,162 @@ + + + + + DROP PROCEDURE + + + + DROP PROCEDURE + 7 + SQL - Language Statements + + + + DROP PROCEDURE + remove a procedure + + + + +DROP PROCEDURE [ IF EXISTS ] name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] [, ...] + [ CASCADE | RESTRICT ] + + + + + Description + + + DROP PROCEDURE removes the definition of an existing + procedure. To execute this command the user must be the + owner of the procedure. The argument types to the + procedure must be specified, since several different procedures + can exist with the same name and different argument lists. + + + + + Parameters + + + + IF EXISTS + + + Do not throw an error if the procedure does not exist. A notice is issued + in this case. + + + + + + name + + + The name (optionally schema-qualified) of an existing procedure. If no + argument list is specified, the name must be unique in its schema. + + + + + + argmode + + + + The mode of an argument: IN or VARIADIC. + If omitted, the default is IN. + + + + + + argname + + + + The name of an argument. + Note that DROP PROCEDURE does not actually pay + any attention to argument names, since only the argument data + types are needed to determine the procedure's identity. + + + + + + argtype + + + + The data type(s) of the procedure's arguments (optionally + schema-qualified), if any. + + + + + + CASCADE + + + Automatically drop objects that depend on the procedure, + and in turn all objects that depend on those objects + (see ). + + + + + + RESTRICT + + + Refuse to drop the procedure if any objects depend on it. This + is the default. + + + + + + + + Examples + + +DROP PROCEDURE do_db_maintenance(); + + + + + Compatibility + + + This command conforms to the SQL standard, with + these PostgreSQL extensions: + + + The standard only allows one procedure to be dropped per command. + + + The IF EXISTS option + + + The ability to specify argument modes and names + + + + + + + See Also + + + + + + + + + + diff --git a/doc/src/sgml/ref/drop_routine.sgml b/doc/src/sgml/ref/drop_routine.sgml new file mode 100644 index 0000000000..5cd1a0f11e --- /dev/null +++ b/doc/src/sgml/ref/drop_routine.sgml @@ -0,0 +1,94 @@ + + + + + DROP ROUTINE + + + + DROP ROUTINE + 7 + SQL - Language Statements + + + + DROP ROUTINE + remove a routine + + + + +DROP ROUTINE [ IF EXISTS ] name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] [, ...] + [ CASCADE | RESTRICT ] + + + + + Description + + + DROP ROUTINE removes the definition of an existing + routine, which can be an aggregate function, a normal function, or a + procedure. See + under , , + and for the description of the + parameters, more examples, and further details. + + + + + Examples + + + To drop the routine foo for type + integer: + +DROP ROUTINE foo(integer); + + This command will work independent of whether foo is an + aggregate, function, or procedure. + + + + + Compatibility + + + This command conforms to the SQL standard, with + these PostgreSQL extensions: + + + The standard only allows one routine to be dropped per command. 
+ + + The IF EXISTS option + + + The ability to specify argument modes and names + + + Aggregate functions are an extension. + + + + + + + See Also + + + + + + + + + + Note that there is no CREATE ROUTINE command. + + + + diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml index a5e895d09d..ff64c7a3ba 100644 --- a/doc/src/sgml/ref/grant.sgml +++ b/doc/src/sgml/ref/grant.sgml @@ -55,8 +55,8 @@ GRANT { USAGE | ALL [ PRIVILEGES ] } TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { EXECUTE | ALL [ PRIVILEGES ] } - ON { FUNCTION function_name [ ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) ] [, ...] - | ALL FUNCTIONS IN SCHEMA schema_name [, ...] } + ON { { FUNCTION | PROCEDURE | ROUTINE } routine_name [ ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) ] [, ...] + | ALL { FUNCTIONS | PROCEDURES | ROUTINES } IN SCHEMA schema_name [, ...] } TO role_specification [, ...] [ WITH GRANT OPTION ] GRANT { USAGE | ALL [ PRIVILEGES ] } @@ -96,7 +96,7 @@ GRANT role_name [, ...] TO The GRANT command has two basic variants: one that grants privileges on a database object (table, column, view, foreign - table, sequence, database, foreign-data wrapper, foreign server, function, + table, sequence, database, foreign-data wrapper, foreign server, function, procedure, procedural language, schema, or tablespace), and one that grants membership in a role. These variants are similar in many ways, but they are different enough to be described separately. @@ -115,8 +115,11 @@ GRANT role_name [, ...] TO There is also an option to grant privileges on all objects of the same type within one or more schemas. This functionality is currently supported - only for tables, sequences, and functions (but note that ALL - TABLES is considered to include views and foreign tables). + only for tables, sequences, functions, and procedures. ALL + TABLES also affects views and foreign tables, just like the + specific-object GRANT command. ALL + FUNCTIONS also affects aggregate functions, but not procedures, + again just like the specific-object GRANT command. @@ -169,7 +172,7 @@ GRANT role_name [, ...] TO PUBLIC
are as follows: CONNECT and TEMPORARY (create temporary tables) privileges for databases; - EXECUTE privilege for functions; and + EXECUTE privilege for functions and procedures; and USAGE privilege for languages and data types (including domains). The object owner can, of course, REVOKE @@ -329,10 +332,12 @@ GRANT role_name [, ...] TO EXECUTE - Allows the use of the specified function and the use of any - operators that are implemented on top of the function. This is - the only type of privilege that is applicable to functions. - (This syntax works for aggregate functions, as well.) + Allows the use of the specified function or procedure and the use of + any operators that are implemented on top of the function. This is the + only type of privilege that is applicable to functions and procedures. + The FUNCTION syntax also works for aggregate + functions. Alternatively, use ROUTINE to refer to a function, + aggregate function, or procedure regardless of what it is. diff --git a/doc/src/sgml/ref/revoke.sgml b/doc/src/sgml/ref/revoke.sgml index 4d133a782b..7018202f14 100644 --- a/doc/src/sgml/ref/revoke.sgml +++ b/doc/src/sgml/ref/revoke.sgml @@ -70,8 +70,8 @@ REVOKE [ GRANT OPTION FOR ] REVOKE [ GRANT OPTION FOR ] { EXECUTE | ALL [ PRIVILEGES ] } - ON { FUNCTION function_name [ ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) ] [, ...] - | ALL FUNCTIONS IN SCHEMA schema_name [, ...] } + ON { { FUNCTION | PROCEDURE | ROUTINE } function_name [ ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) ] [, ...] + | ALL { FUNCTIONS | PROCEDURES | ROUTINES } IN SCHEMA schema_name [, ...] } FROM { [ GROUP ] role_name | PUBLIC } [, ...] [ CASCADE | RESTRICT ] diff --git a/doc/src/sgml/ref/security_label.sgml b/doc/src/sgml/ref/security_label.sgml index d52113e035..e9cfdec9f9 100644 --- a/doc/src/sgml/ref/security_label.sgml +++ b/doc/src/sgml/ref/security_label.sgml @@ -34,8 +34,10 @@ SECURITY LABEL [ FOR provider ] ON LARGE OBJECT large_object_oid | MATERIALIZED VIEW object_name | [ PROCEDURAL ] LANGUAGE object_name | + PROCEDURE procedure_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] | PUBLICATION object_name | ROLE object_name | + ROUTINE routine_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] | SCHEMA object_name | SEQUENCE object_name | SUBSCRIPTION object_name | @@ -93,10 +95,12 @@ SECURITY LABEL [ FOR provider ] ON table_name.column_name aggregate_name function_name + procedure_name + routine_name The name of the object to be labeled. Names of tables, - aggregates, domains, foreign tables, functions, sequences, types, and + aggregates, domains, foreign tables, functions, procedures, routines, sequences, types, and views can be schema-qualified. @@ -119,7 +123,7 @@ SECURITY LABEL [ FOR provider ] ON - The mode of a function or aggregate + The mode of a function, procedure, or aggregate argument: IN, OUT, INOUT, or VARIADIC. If omitted, the default is IN. @@ -137,7 +141,7 @@ SECURITY LABEL [ FOR provider ] ON - The name of a function or aggregate argument. + The name of a function, procedure, or aggregate argument. Note that SECURITY LABEL does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity. @@ -150,7 +154,7 @@ SECURITY LABEL [ FOR provider ] ON - The data type of a function or aggregate argument. + The data type of a function, procedure, or aggregate argument. 
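A minimal sketch of the extended GRANT/REVOKE syntax shown above (the role webuser, the schema accounting, and the procedure insert_data are hypothetical):

GRANT EXECUTE ON PROCEDURE insert_data(integer, integer) TO webuser;
-- ALL PROCEDURES covers procedures only; ALL ROUTINES would cover both
GRANT EXECUTE ON ALL PROCEDURES IN SCHEMA accounting TO webuser;
REVOKE EXECUTE ON ROUTINE insert_data(integer, integer) FROM webuser;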
diff --git a/doc/src/sgml/reference.sgml b/doc/src/sgml/reference.sgml index d20eaa87e7..d27fb414f7 100644 --- a/doc/src/sgml/reference.sgml +++ b/doc/src/sgml/reference.sgml @@ -54,8 +54,10 @@ &alterOperatorClass; &alterOperatorFamily; &alterPolicy; + &alterProcedure; &alterPublication; &alterRole; + &alterRoutine; &alterRule; &alterSchema; &alterSequence; @@ -76,6 +78,7 @@ &alterView; &analyze; &begin; + &call; &checkpoint; &close; &cluster; @@ -103,6 +106,7 @@ &createOperatorClass; &createOperatorFamily; &createPolicy; + &createProcedure; &createPublication; &createRole; &createRule; @@ -150,8 +154,10 @@ &dropOperatorFamily; &dropOwned; &dropPolicy; + &dropProcedure; &dropPublication; &dropRole; + &dropRoutine; &dropRule; &dropSchema; &dropSequence; diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml index 508ee7a96c..bbc3766cc2 100644 --- a/doc/src/sgml/xfunc.sgml +++ b/doc/src/sgml/xfunc.sgml @@ -72,6 +72,39 @@ + + User-defined Procedures + + + procedure + user-defined + + + + A procedure is a database object similar to a function. The difference is + that a procedure does not return a value, so there is no return type + declaration. While a function is called as part of a query or DML + command, a procedure is called explicitly using + the statement. + + + + The explanations on how to define user-defined functions in the rest of + this chapter apply to procedures as well, except that + the command is used instead, there is + no return type, and some other features such as strictness don't apply. + + + + Collectively, functions and procedures are also known + as routinesroutine. + There are commands such as + and that can operate on functions and + procedures without having to know which kind it is. Note, however, that + there is no CREATE ROUTINE command. 
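A minimal sketch of the distinction drawn above (the table log_tbl is hypothetical): a function declares a return type and is called within a query, while a procedure has no return type and is invoked with CALL:

CREATE FUNCTION add_em(a integer, b integer) RETURNS integer
    LANGUAGE SQL AS 'SELECT a + b';
SELECT add_em(1, 2);

CREATE PROCEDURE log_em(a integer, b integer)
    LANGUAGE SQL AS 'INSERT INTO log_tbl VALUES (a, b)';
CALL log_em(1, 2);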
+ + + Query Language (<acronym>SQL</acronym>) Functions diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c index ccde66a7dd..e481cf3d11 100644 --- a/src/backend/catalog/aclchk.c +++ b/src/backend/catalog/aclchk.c @@ -482,6 +482,14 @@ ExecuteGrantStmt(GrantStmt *stmt) all_privileges = ACL_ALL_RIGHTS_NAMESPACE; errormsg = gettext_noop("invalid privilege type %s for schema"); break; + case ACL_OBJECT_PROCEDURE: + all_privileges = ACL_ALL_RIGHTS_FUNCTION; + errormsg = gettext_noop("invalid privilege type %s for procedure"); + break; + case ACL_OBJECT_ROUTINE: + all_privileges = ACL_ALL_RIGHTS_FUNCTION; + errormsg = gettext_noop("invalid privilege type %s for routine"); + break; case ACL_OBJECT_TABLESPACE: all_privileges = ACL_ALL_RIGHTS_TABLESPACE; errormsg = gettext_noop("invalid privilege type %s for tablespace"); @@ -584,6 +592,8 @@ ExecGrantStmt_oids(InternalGrant *istmt) ExecGrant_ForeignServer(istmt); break; case ACL_OBJECT_FUNCTION: + case ACL_OBJECT_PROCEDURE: + case ACL_OBJECT_ROUTINE: ExecGrant_Function(istmt); break; case ACL_OBJECT_LANGUAGE: @@ -671,7 +681,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) ObjectWithArgs *func = (ObjectWithArgs *) lfirst(cell); Oid funcid; - funcid = LookupFuncWithArgs(func, false); + funcid = LookupFuncWithArgs(OBJECT_FUNCTION, func, false); objects = lappend_oid(objects, funcid); } break; @@ -709,6 +719,26 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, oid); } break; + case ACL_OBJECT_PROCEDURE: + foreach(cell, objnames) + { + ObjectWithArgs *func = (ObjectWithArgs *) lfirst(cell); + Oid procid; + + procid = LookupFuncWithArgs(OBJECT_PROCEDURE, func, false); + objects = lappend_oid(objects, procid); + } + break; + case ACL_OBJECT_ROUTINE: + foreach(cell, objnames) + { + ObjectWithArgs *func = (ObjectWithArgs *) lfirst(cell); + Oid routid; + + routid = LookupFuncWithArgs(OBJECT_ROUTINE, func, false); + objects = lappend_oid(objects, routid); + } + break; case ACL_OBJECT_TABLESPACE: foreach(cell, objnames) { @@ -785,19 +815,39 @@ objectsInSchemaToOids(GrantObjectType objtype, List *nspnames) objects = list_concat(objects, objs); break; case ACL_OBJECT_FUNCTION: + case ACL_OBJECT_PROCEDURE: + case ACL_OBJECT_ROUTINE: { - ScanKeyData key[1]; + ScanKeyData key[2]; + int keycount; Relation rel; HeapScanDesc scan; HeapTuple tuple; - ScanKeyInit(&key[0], + keycount = 0; + ScanKeyInit(&key[keycount++], Anum_pg_proc_pronamespace, BTEqualStrategyNumber, F_OIDEQ, ObjectIdGetDatum(namespaceId)); + /* + * When looking for functions, check for return type <>0. + * When looking for procedures, check for return type ==0. + * When looking for routines, don't check the return type. 
+ */ + if (objtype == ACL_OBJECT_FUNCTION) + ScanKeyInit(&key[keycount++], + Anum_pg_proc_prorettype, + BTEqualStrategyNumber, F_OIDNE, + InvalidOid); + else if (objtype == ACL_OBJECT_PROCEDURE) + ScanKeyInit(&key[keycount++], + Anum_pg_proc_prorettype, + BTEqualStrategyNumber, F_OIDEQ, + InvalidOid); + rel = heap_open(ProcedureRelationId, AccessShareLock); - scan = heap_beginscan_catalog(rel, 1, key); + scan = heap_beginscan_catalog(rel, keycount, key); while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL) { @@ -955,6 +1005,14 @@ ExecAlterDefaultPrivilegesStmt(ParseState *pstate, AlterDefaultPrivilegesStmt *s all_privileges = ACL_ALL_RIGHTS_FUNCTION; errormsg = gettext_noop("invalid privilege type %s for function"); break; + case ACL_OBJECT_PROCEDURE: + all_privileges = ACL_ALL_RIGHTS_FUNCTION; + errormsg = gettext_noop("invalid privilege type %s for procedure"); + break; + case ACL_OBJECT_ROUTINE: + all_privileges = ACL_ALL_RIGHTS_FUNCTION; + errormsg = gettext_noop("invalid privilege type %s for routine"); + break; case ACL_OBJECT_TYPE: all_privileges = ACL_ALL_RIGHTS_TYPE; errormsg = gettext_noop("invalid privilege type %s for type"); @@ -1423,7 +1481,7 @@ RemoveRoleFromObjectACL(Oid roleid, Oid classid, Oid objid) istmt.objtype = ACL_OBJECT_TYPE; break; case ProcedureRelationId: - istmt.objtype = ACL_OBJECT_FUNCTION; + istmt.objtype = ACL_OBJECT_ROUTINE; break; case LanguageRelationId: istmt.objtype = ACL_OBJECT_LANGUAGE; diff --git a/src/backend/catalog/information_schema.sql b/src/backend/catalog/information_schema.sql index 236f6be37e..360725d59a 100644 --- a/src/backend/catalog/information_schema.sql +++ b/src/backend/catalog/information_schema.sql @@ -1413,7 +1413,8 @@ CREATE VIEW routines AS CAST(current_database() AS sql_identifier) AS routine_catalog, CAST(n.nspname AS sql_identifier) AS routine_schema, CAST(p.proname AS sql_identifier) AS routine_name, - CAST('FUNCTION' AS character_data) AS routine_type, + CAST(CASE WHEN p.prorettype <> 0 THEN 'FUNCTION' ELSE 'PROCEDURE' END + AS character_data) AS routine_type, CAST(null AS sql_identifier) AS module_catalog, CAST(null AS sql_identifier) AS module_schema, CAST(null AS sql_identifier) AS module_name, @@ -1422,7 +1423,8 @@ CREATE VIEW routines AS CAST(null AS sql_identifier) AS udt_name, CAST( - CASE WHEN t.typelem <> 0 AND t.typlen = -1 THEN 'ARRAY' + CASE WHEN p.prorettype = 0 THEN NULL + WHEN t.typelem <> 0 AND t.typlen = -1 THEN 'ARRAY' WHEN nt.nspname = 'pg_catalog' THEN format_type(t.oid, null) ELSE 'USER-DEFINED' END AS character_data) AS data_type, @@ -1440,7 +1442,7 @@ CREATE VIEW routines AS CAST(null AS cardinal_number) AS datetime_precision, CAST(null AS character_data) AS interval_type, CAST(null AS cardinal_number) AS interval_precision, - CAST(current_database() AS sql_identifier) AS type_udt_catalog, + CAST(CASE WHEN p.prorettype <> 0 THEN current_database() END AS sql_identifier) AS type_udt_catalog, CAST(nt.nspname AS sql_identifier) AS type_udt_schema, CAST(t.typname AS sql_identifier) AS type_udt_name, CAST(null AS sql_identifier) AS scope_catalog, @@ -1462,7 +1464,8 @@ CREATE VIEW routines AS CAST('GENERAL' AS character_data) AS parameter_style, CAST(CASE WHEN p.provolatile = 'i' THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_deterministic, CAST('MODIFIES' AS character_data) AS sql_data_access, - CAST(CASE WHEN p.proisstrict THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_null_call, + CAST(CASE WHEN p.prorettype <> 0 THEN + CASE WHEN p.proisstrict THEN 'YES' ELSE 'NO' END END AS yes_or_no) AS 
is_null_call, CAST(null AS character_data) AS sql_path, CAST('YES' AS yes_or_no) AS schema_level_routine, CAST(0 AS cardinal_number) AS max_dynamic_result_sets, @@ -1503,13 +1506,15 @@ CREATE VIEW routines AS CAST(null AS cardinal_number) AS result_cast_maximum_cardinality, CAST(null AS sql_identifier) AS result_cast_dtd_identifier - FROM pg_namespace n, pg_proc p, pg_language l, - pg_type t, pg_namespace nt + FROM (pg_namespace n + JOIN pg_proc p ON n.oid = p.pronamespace + JOIN pg_language l ON p.prolang = l.oid) + LEFT JOIN + (pg_type t JOIN pg_namespace nt ON t.typnamespace = nt.oid) + ON p.prorettype = t.oid - WHERE n.oid = p.pronamespace AND p.prolang = l.oid - AND p.prorettype = t.oid AND t.typnamespace = nt.oid - AND (pg_has_role(p.proowner, 'USAGE') - OR has_function_privilege(p.oid, 'EXECUTE')); + WHERE (pg_has_role(p.proowner, 'USAGE') + OR has_function_privilege(p.oid, 'EXECUTE')); GRANT SELECT ON routines TO PUBLIC; diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c index 8d55c76fc4..9553675975 100644 --- a/src/backend/catalog/objectaddress.c +++ b/src/backend/catalog/objectaddress.c @@ -566,6 +566,9 @@ static const struct object_type_map { "function", OBJECT_FUNCTION }, + { + "procedure", OBJECT_PROCEDURE + }, /* OCLASS_TYPE */ { "type", OBJECT_TYPE @@ -884,13 +887,11 @@ get_object_address(ObjectType objtype, Node *object, address = get_object_address_type(objtype, castNode(TypeName, object), missing_ok); break; case OBJECT_AGGREGATE: - address.classId = ProcedureRelationId; - address.objectId = LookupAggWithArgs(castNode(ObjectWithArgs, object), missing_ok); - address.objectSubId = 0; - break; case OBJECT_FUNCTION: + case OBJECT_PROCEDURE: + case OBJECT_ROUTINE: address.classId = ProcedureRelationId; - address.objectId = LookupFuncWithArgs(castNode(ObjectWithArgs, object), missing_ok); + address.objectId = LookupFuncWithArgs(objtype, castNode(ObjectWithArgs, object), missing_ok); address.objectSubId = 0; break; case OBJECT_OPERATOR: @@ -2025,6 +2026,8 @@ pg_get_object_address(PG_FUNCTION_ARGS) */ if (type == OBJECT_AGGREGATE || type == OBJECT_FUNCTION || + type == OBJECT_PROCEDURE || + type == OBJECT_ROUTINE || type == OBJECT_OPERATOR || type == OBJECT_CAST || type == OBJECT_AMOP || @@ -2168,6 +2171,8 @@ pg_get_object_address(PG_FUNCTION_ARGS) objnode = (Node *) list_make2(name, args); break; case OBJECT_FUNCTION: + case OBJECT_PROCEDURE: + case OBJECT_ROUTINE: case OBJECT_AGGREGATE: case OBJECT_OPERATOR: { @@ -2253,6 +2258,8 @@ check_object_ownership(Oid roleid, ObjectType objtype, ObjectAddress address, break; case OBJECT_AGGREGATE: case OBJECT_FUNCTION: + case OBJECT_PROCEDURE: + case OBJECT_ROUTINE: if (!pg_proc_ownercheck(address.objectId, roleid)) aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, NameListToString((castNode(ObjectWithArgs, object))->objname)); @@ -4026,6 +4033,8 @@ getProcedureTypeDescription(StringInfo buffer, Oid procid) if (procForm->proisagg) appendStringInfoString(buffer, "aggregate"); + else if (procForm->prorettype == InvalidOid) + appendStringInfoString(buffer, "procedure"); else appendStringInfoString(buffer, "function"); diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c index 47916cfb54..7d05e4bdb2 100644 --- a/src/backend/catalog/pg_proc.c +++ b/src/backend/catalog/pg_proc.c @@ -857,7 +857,8 @@ fmgr_sql_validator(PG_FUNCTION_ARGS) /* Disallow pseudotype result */ /* except for RECORD, VOID, or polymorphic */ - if (get_typtype(proc->prorettype) == TYPTYPE_PSEUDO && + if 
(proc->prorettype && + get_typtype(proc->prorettype) == TYPTYPE_PSEUDO && proc->prorettype != RECORDOID && proc->prorettype != VOIDOID && !IsPolymorphicType(proc->prorettype)) diff --git a/src/backend/commands/aggregatecmds.c b/src/backend/commands/aggregatecmds.c index adc9877e79..2e2ee883e2 100644 --- a/src/backend/commands/aggregatecmds.c +++ b/src/backend/commands/aggregatecmds.c @@ -307,7 +307,7 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List interpret_function_parameter_list(pstate, args, InvalidOid, - true, /* is an aggregate */ + OBJECT_AGGREGATE, ¶meterTypes, &allParameterTypes, ¶meterModes, diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c index 4f8147907c..21e3f1efe1 100644 --- a/src/backend/commands/alter.c +++ b/src/backend/commands/alter.c @@ -378,6 +378,8 @@ ExecRenameStmt(RenameStmt *stmt) case OBJECT_OPCLASS: case OBJECT_OPFAMILY: case OBJECT_LANGUAGE: + case OBJECT_PROCEDURE: + case OBJECT_ROUTINE: case OBJECT_STATISTIC_EXT: case OBJECT_TSCONFIGURATION: case OBJECT_TSDICTIONARY: @@ -495,6 +497,8 @@ ExecAlterObjectSchemaStmt(AlterObjectSchemaStmt *stmt, case OBJECT_OPERATOR: case OBJECT_OPCLASS: case OBJECT_OPFAMILY: + case OBJECT_PROCEDURE: + case OBJECT_ROUTINE: case OBJECT_STATISTIC_EXT: case OBJECT_TSCONFIGURATION: case OBJECT_TSDICTIONARY: @@ -842,6 +846,8 @@ ExecAlterOwnerStmt(AlterOwnerStmt *stmt) case OBJECT_OPERATOR: case OBJECT_OPCLASS: case OBJECT_OPFAMILY: + case OBJECT_PROCEDURE: + case OBJECT_ROUTINE: case OBJECT_STATISTIC_EXT: case OBJECT_TABLESPACE: case OBJECT_TSDICTIONARY: diff --git a/src/backend/commands/dropcmds.c b/src/backend/commands/dropcmds.c index 2b30677d6f..7e6baa1928 100644 --- a/src/backend/commands/dropcmds.c +++ b/src/backend/commands/dropcmds.c @@ -26,6 +26,7 @@ #include "nodes/makefuncs.h" #include "parser/parse_type.h" #include "utils/builtins.h" +#include "utils/lsyscache.h" #include "utils/syscache.h" @@ -91,21 +92,12 @@ RemoveObjects(DropStmt *stmt) */ if (stmt->removeType == OBJECT_FUNCTION) { - Oid funcOid = address.objectId; - HeapTuple tup; - - tup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcOid)); - if (!HeapTupleIsValid(tup)) /* should not happen */ - elog(ERROR, "cache lookup failed for function %u", funcOid); - - if (((Form_pg_proc) GETSTRUCT(tup))->proisagg) + if (get_func_isagg(address.objectId)) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is an aggregate function", NameListToString(castNode(ObjectWithArgs, object)->objname)), errhint("Use DROP AGGREGATE to drop aggregate functions."))); - - ReleaseSysCache(tup); } /* Check permissions. 
*/ @@ -338,6 +330,32 @@ does_not_exist_skipping(ObjectType objtype, Node *object) } break; } + case OBJECT_PROCEDURE: + { + ObjectWithArgs *owa = castNode(ObjectWithArgs, object); + + if (!schema_does_not_exist_skipping(owa->objname, &msg, &name) && + !type_in_list_does_not_exist_skipping(owa->objargs, &msg, &name)) + { + msg = gettext_noop("procedure %s(%s) does not exist, skipping"); + name = NameListToString(owa->objname); + args = TypeNameListToString(owa->objargs); + } + break; + } + case OBJECT_ROUTINE: + { + ObjectWithArgs *owa = castNode(ObjectWithArgs, object); + + if (!schema_does_not_exist_skipping(owa->objname, &msg, &name) && + !type_in_list_does_not_exist_skipping(owa->objargs, &msg, &name)) + { + msg = gettext_noop("routine %s(%s) does not exist, skipping"); + name = NameListToString(owa->objname); + args = TypeNameListToString(owa->objargs); + } + break; + } case OBJECT_AGGREGATE: { ObjectWithArgs *owa = castNode(ObjectWithArgs, object); diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c index fa7d0d015a..a602c20b41 100644 --- a/src/backend/commands/event_trigger.c +++ b/src/backend/commands/event_trigger.c @@ -106,8 +106,10 @@ static event_trigger_support_data event_trigger_support[] = { {"OPERATOR CLASS", true}, {"OPERATOR FAMILY", true}, {"POLICY", true}, + {"PROCEDURE", true}, {"PUBLICATION", true}, {"ROLE", false}, + {"ROUTINE", true}, {"RULE", true}, {"SCHEMA", true}, {"SEQUENCE", true}, @@ -1103,8 +1105,10 @@ EventTriggerSupportsObjectType(ObjectType obtype) case OBJECT_OPERATOR: case OBJECT_OPFAMILY: case OBJECT_POLICY: + case OBJECT_PROCEDURE: case OBJECT_PUBLICATION: case OBJECT_PUBLICATION_REL: + case OBJECT_ROUTINE: case OBJECT_RULE: case OBJECT_SCHEMA: case OBJECT_SEQUENCE: @@ -1215,6 +1219,8 @@ EventTriggerSupportsGrantObjectType(GrantObjectType objtype) case ACL_OBJECT_LANGUAGE: case ACL_OBJECT_LARGEOBJECT: case ACL_OBJECT_NAMESPACE: + case ACL_OBJECT_PROCEDURE: + case ACL_OBJECT_ROUTINE: case ACL_OBJECT_TYPE: return true; @@ -2243,6 +2249,10 @@ stringify_grantobjtype(GrantObjectType objtype) return "LARGE OBJECT"; case ACL_OBJECT_NAMESPACE: return "SCHEMA"; + case ACL_OBJECT_PROCEDURE: + return "PROCEDURE"; + case ACL_OBJECT_ROUTINE: + return "ROUTINE"; case ACL_OBJECT_TABLESPACE: return "TABLESPACE"; case ACL_OBJECT_TYPE: @@ -2285,6 +2295,10 @@ stringify_adefprivs_objtype(GrantObjectType objtype) return "LARGE OBJECTS"; case ACL_OBJECT_NAMESPACE: return "SCHEMAS"; + case ACL_OBJECT_PROCEDURE: + return "PROCEDURES"; + case ACL_OBJECT_ROUTINE: + return "ROUTINES"; case ACL_OBJECT_TABLESPACE: return "TABLESPACES"; case ACL_OBJECT_TYPE: diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c index 7de844b2ca..2a9c90133d 100644 --- a/src/backend/commands/functioncmds.c +++ b/src/backend/commands/functioncmds.c @@ -51,6 +51,8 @@ #include "commands/alter.h" #include "commands/defrem.h" #include "commands/proclang.h" +#include "executor/execdesc.h" +#include "executor/executor.h" #include "miscadmin.h" #include "optimizer/var.h" #include "parser/parse_coerce.h" @@ -179,7 +181,7 @@ void interpret_function_parameter_list(ParseState *pstate, List *parameters, Oid languageOid, - bool is_aggregate, + ObjectType objtype, oidvector **parameterTypes, ArrayType **allParameterTypes, ArrayType **parameterModes, @@ -233,7 +235,7 @@ interpret_function_parameter_list(ParseState *pstate, errmsg("SQL function cannot accept shell type %s", TypeNameToString(t)))); /* We don't allow creating aggregates on shell 
types either */ - else if (is_aggregate) + else if (objtype == OBJECT_AGGREGATE) ereport(ERROR, (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), errmsg("aggregate cannot accept shell type %s", @@ -262,16 +264,28 @@ interpret_function_parameter_list(ParseState *pstate, if (t->setof) { - if (is_aggregate) + if (objtype == OBJECT_AGGREGATE) ereport(ERROR, (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), errmsg("aggregates cannot accept set arguments"))); + else if (objtype == OBJECT_PROCEDURE) + ereport(ERROR, + (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), + errmsg("procedures cannot accept set arguments"))); else ereport(ERROR, (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), errmsg("functions cannot accept set arguments"))); } + if (objtype == OBJECT_PROCEDURE) + { + if (fp->mode == FUNC_PARAM_OUT || fp->mode == FUNC_PARAM_INOUT) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + (errmsg("procedures cannot have OUT parameters")))); + } + /* handle input parameters */ if (fp->mode != FUNC_PARAM_OUT && fp->mode != FUNC_PARAM_TABLE) { @@ -451,6 +465,7 @@ interpret_function_parameter_list(ParseState *pstate, */ static bool compute_common_attribute(ParseState *pstate, + bool is_procedure, DefElem *defel, DefElem **volatility_item, DefElem **strict_item, @@ -463,6 +478,8 @@ compute_common_attribute(ParseState *pstate, { if (strcmp(defel->defname, "volatility") == 0) { + if (is_procedure) + goto procedure_error; if (*volatility_item) goto duplicate_error; @@ -470,6 +487,8 @@ compute_common_attribute(ParseState *pstate, } else if (strcmp(defel->defname, "strict") == 0) { + if (is_procedure) + goto procedure_error; if (*strict_item) goto duplicate_error; @@ -484,6 +503,8 @@ compute_common_attribute(ParseState *pstate, } else if (strcmp(defel->defname, "leakproof") == 0) { + if (is_procedure) + goto procedure_error; if (*leakproof_item) goto duplicate_error; @@ -495,6 +516,8 @@ compute_common_attribute(ParseState *pstate, } else if (strcmp(defel->defname, "cost") == 0) { + if (is_procedure) + goto procedure_error; if (*cost_item) goto duplicate_error; @@ -502,6 +525,8 @@ compute_common_attribute(ParseState *pstate, } else if (strcmp(defel->defname, "rows") == 0) { + if (is_procedure) + goto procedure_error; if (*rows_item) goto duplicate_error; @@ -509,6 +534,8 @@ compute_common_attribute(ParseState *pstate, } else if (strcmp(defel->defname, "parallel") == 0) { + if (is_procedure) + goto procedure_error; if (*parallel_item) goto duplicate_error; @@ -526,6 +553,13 @@ compute_common_attribute(ParseState *pstate, errmsg("conflicting or redundant options"), parser_errposition(pstate, defel->location))); return false; /* keep compiler quiet */ + +procedure_error: + ereport(ERROR, + (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), + errmsg("invalid attribute in procedure definition"), + parser_errposition(pstate, defel->location))); + return false; } static char @@ -603,6 +637,7 @@ update_proconfig_value(ArrayType *a, List *set_items) */ static void compute_attributes_sql_style(ParseState *pstate, + bool is_procedure, List *options, List **as, char **language, @@ -669,9 +704,15 @@ compute_attributes_sql_style(ParseState *pstate, (errcode(ERRCODE_SYNTAX_ERROR), errmsg("conflicting or redundant options"), parser_errposition(pstate, defel->location))); + if (is_procedure) + ereport(ERROR, + (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), + errmsg("invalid attribute in procedure definition"), + parser_errposition(pstate, defel->location))); windowfunc_item = defel; } else if 
(compute_common_attribute(pstate, + is_procedure, defel, &volatility_item, &strict_item, @@ -762,7 +803,7 @@ compute_attributes_sql_style(ParseState *pstate, *------------ */ static void -compute_attributes_with_style(ParseState *pstate, List *parameters, bool *isStrict_p, char *volatility_p) +compute_attributes_with_style(ParseState *pstate, bool is_procedure, List *parameters, bool *isStrict_p, char *volatility_p) { ListCell *pl; @@ -771,10 +812,22 @@ compute_attributes_with_style(ParseState *pstate, List *parameters, bool *isStri DefElem *param = (DefElem *) lfirst(pl); if (pg_strcasecmp(param->defname, "isstrict") == 0) + { + if (is_procedure) + ereport(ERROR, + (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), + errmsg("invalid attribute in procedure definition"), + parser_errposition(pstate, param->location))); *isStrict_p = defGetBoolean(param); + } else if (pg_strcasecmp(param->defname, "iscachable") == 0) { /* obsolete spelling of isImmutable */ + if (is_procedure) + ereport(ERROR, + (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), + errmsg("invalid attribute in procedure definition"), + parser_errposition(pstate, param->location))); if (defGetBoolean(param)) *volatility_p = PROVOLATILE_IMMUTABLE; } @@ -916,6 +969,7 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt) /* override attributes from explicit list */ compute_attributes_sql_style(pstate, + stmt->is_procedure, stmt->options, &as_clause, &language, &transformDefElem, &isWindowFunc, &volatility, @@ -990,7 +1044,7 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt) interpret_function_parameter_list(pstate, stmt->parameters, languageOid, - false, /* not an aggregate */ + stmt->is_procedure ? OBJECT_PROCEDURE : OBJECT_FUNCTION, ¶meterTypes, &allParameterTypes, ¶meterModes, @@ -999,7 +1053,14 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt) &variadicArgType, &requiredResultType); - if (stmt->returnType) + if (stmt->is_procedure) + { + Assert(!stmt->returnType); + + prorettype = InvalidOid; + returnsSet = false; + } + else if (stmt->returnType) { /* explicit RETURNS clause */ compute_return_type(stmt->returnType, languageOid, @@ -1045,7 +1106,7 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt) trftypes = NULL; } - compute_attributes_with_style(pstate, stmt->withClause, &isStrict, &volatility); + compute_attributes_with_style(pstate, stmt->is_procedure, stmt->withClause, &isStrict, &volatility); interpret_AS_clause(languageOid, language, funcname, as_clause, &prosrc_str, &probin_str); @@ -1168,6 +1229,7 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt) HeapTuple tup; Oid funcOid; Form_pg_proc procForm; + bool is_procedure; Relation rel; ListCell *l; DefElem *volatility_item = NULL; @@ -1182,7 +1244,7 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt) rel = heap_open(ProcedureRelationId, RowExclusiveLock); - funcOid = LookupFuncWithArgs(stmt->func, false); + funcOid = LookupFuncWithArgs(stmt->objtype, stmt->func, false); tup = SearchSysCacheCopy1(PROCOID, ObjectIdGetDatum(funcOid)); if (!HeapTupleIsValid(tup)) /* should not happen */ @@ -1201,12 +1263,15 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt) errmsg("\"%s\" is an aggregate function", NameListToString(stmt->func->objname)))); + is_procedure = (procForm->prorettype == InvalidOid); + /* Examine requested actions. 
*/ foreach(l, stmt->actions) { DefElem *defel = (DefElem *) lfirst(l); if (compute_common_attribute(pstate, + is_procedure, defel, &volatility_item, &strict_item, @@ -1472,7 +1537,7 @@ CreateCast(CreateCastStmt *stmt) { Form_pg_proc procstruct; - funcid = LookupFuncWithArgs(stmt->func, false); + funcid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->func, false); tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid)); if (!HeapTupleIsValid(tuple)) @@ -1853,7 +1918,7 @@ CreateTransform(CreateTransformStmt *stmt) */ if (stmt->fromsql) { - fromsqlfuncid = LookupFuncWithArgs(stmt->fromsql, false); + fromsqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->fromsql, false); if (!pg_proc_ownercheck(fromsqlfuncid, GetUserId())) aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, NameListToString(stmt->fromsql->objname)); @@ -1879,7 +1944,7 @@ CreateTransform(CreateTransformStmt *stmt) if (stmt->tosql) { - tosqlfuncid = LookupFuncWithArgs(stmt->tosql, false); + tosqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->tosql, false); if (!pg_proc_ownercheck(tosqlfuncid, GetUserId())) aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, NameListToString(stmt->tosql->objname)); @@ -2168,3 +2233,80 @@ ExecuteDoStmt(DoStmt *stmt) /* execute the inline handler */ OidFunctionCall1(laninline, PointerGetDatum(codeblock)); } + +/* + * Execute CALL statement + */ +void +ExecuteCallStmt(ParseState *pstate, CallStmt *stmt) +{ + List *targs; + ListCell *lc; + Node *node; + FuncExpr *fexpr; + int nargs; + int i; + AclResult aclresult; + FmgrInfo flinfo; + FunctionCallInfoData fcinfo; + + targs = NIL; + foreach(lc, stmt->funccall->args) + { + targs = lappend(targs, transformExpr(pstate, + (Node *) lfirst(lc), + EXPR_KIND_CALL)); + } + + node = ParseFuncOrColumn(pstate, + stmt->funccall->funcname, + targs, + pstate->p_last_srf, + stmt->funccall, + true, + stmt->funccall->location); + + fexpr = castNode(FuncExpr, node); + + aclresult = pg_proc_aclcheck(fexpr->funcid, GetUserId(), ACL_EXECUTE); + if (aclresult != ACLCHECK_OK) + aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(fexpr->funcid)); + InvokeFunctionExecuteHook(fexpr->funcid); + + nargs = list_length(fexpr->args); + + /* safety check; see ExecInitFunc() */ + if (nargs > FUNC_MAX_ARGS) + ereport(ERROR, + (errcode(ERRCODE_TOO_MANY_ARGUMENTS), + errmsg_plural("cannot pass more than %d argument to a procedure", + "cannot pass more than %d arguments to a procedure", + FUNC_MAX_ARGS, + FUNC_MAX_ARGS))); + + fmgr_info(fexpr->funcid, &flinfo); + InitFunctionCallInfoData(fcinfo, &flinfo, nargs, fexpr->inputcollid, NULL, NULL); + + i = 0; + foreach (lc, fexpr->args) + { + EState *estate; + ExprState *exprstate; + ExprContext *econtext; + Datum val; + bool isnull; + + estate = CreateExecutorState(); + exprstate = ExecPrepareExpr(lfirst(lc), estate); + econtext = CreateStandaloneExprContext(); + val = ExecEvalExprSwitchContext(exprstate, econtext, &isnull); + FreeExecutorState(estate); + + fcinfo.arg[i] = val; + fcinfo.argnull[i] = isnull; + + i++; + } + + FunctionCallInvoke(&fcinfo); +} diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c index 1641e68abe..35c7c67bf5 100644 --- a/src/backend/commands/opclasscmds.c +++ b/src/backend/commands/opclasscmds.c @@ -520,7 +520,7 @@ DefineOpClass(CreateOpClassStmt *stmt) errmsg("invalid procedure number %d," " must be between 1 and %d", item->number, maxProcNumber))); - funcOid = LookupFuncWithArgs(item->name, false); + funcOid = LookupFuncWithArgs(OBJECT_FUNCTION, item->name, false); #ifdef 
NOT_USED /* XXX this is unnecessary given the superuser check above */ /* Caller must own function */ @@ -894,7 +894,7 @@ AlterOpFamilyAdd(AlterOpFamilyStmt *stmt, Oid amoid, Oid opfamilyoid, errmsg("invalid procedure number %d," " must be between 1 and %d", item->number, maxProcNumber))); - funcOid = LookupFuncWithArgs(item->name, false); + funcOid = LookupFuncWithArgs(OBJECT_FUNCTION, item->name, false); #ifdef NOT_USED /* XXX this is unnecessary given the superuser check above */ /* Caller must own function */ diff --git a/src/backend/executor/functions.c b/src/backend/executor/functions.c index 98eb777421..3caa343723 100644 --- a/src/backend/executor/functions.c +++ b/src/backend/executor/functions.c @@ -390,6 +390,7 @@ sql_fn_post_column_ref(ParseState *pstate, ColumnRef *cref, Node *var) list_make1(param), pstate->p_last_srf, NULL, + false, cref->location); } @@ -658,7 +659,8 @@ init_sql_fcache(FmgrInfo *finfo, Oid collation, bool lazyEvalOK) fcache->rettype = rettype; /* Fetch the typlen and byval info for the result type */ - get_typlenbyval(rettype, &fcache->typlen, &fcache->typbyval); + if (rettype) + get_typlenbyval(rettype, &fcache->typlen, &fcache->typbyval); /* Remember whether we're returning setof something */ fcache->returnsSet = procedureStruct->proretset; @@ -1321,8 +1323,8 @@ fmgr_sql(PG_FUNCTION_ARGS) } else { - /* Should only get here for VOID functions */ - Assert(fcache->rettype == VOIDOID); + /* Should only get here for procedures and VOID functions */ + Assert(fcache->rettype == InvalidOid || fcache->rettype == VOIDOID); fcinfo->isnull = true; result = (Datum) 0; } @@ -1546,7 +1548,10 @@ check_sql_fn_retval(Oid func_id, Oid rettype, List *queryTreeList, if (modifyTargetList) *modifyTargetList = false; /* initialize for no change */ if (junkFilter) - *junkFilter = NULL; /* initialize in case of VOID result */ + *junkFilter = NULL; /* initialize in case of procedure/VOID result */ + + if (!rettype) + return false; /* * Find the last canSetTag query in the list. 
This isn't necessarily the @@ -1591,7 +1596,7 @@ check_sql_fn_retval(Oid func_id, Oid rettype, List *queryTreeList, else { /* Empty function body, or last statement is a utility command */ - if (rettype != VOIDOID) + if (rettype && rettype != VOIDOID) ereport(ERROR, (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), errmsg("return type mismatch in function declared to return %s", diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index d9ff8a7e51..aff9a62106 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -3210,6 +3210,16 @@ _copyClosePortalStmt(const ClosePortalStmt *from) return newnode; } +static CallStmt * +_copyCallStmt(const CallStmt *from) +{ + CallStmt *newnode = makeNode(CallStmt); + + COPY_NODE_FIELD(funccall); + + return newnode; +} + static ClusterStmt * _copyClusterStmt(const ClusterStmt *from) { @@ -3411,6 +3421,7 @@ _copyCreateFunctionStmt(const CreateFunctionStmt *from) COPY_NODE_FIELD(funcname); COPY_NODE_FIELD(parameters); COPY_NODE_FIELD(returnType); + COPY_SCALAR_FIELD(is_procedure); COPY_NODE_FIELD(options); COPY_NODE_FIELD(withClause); @@ -3435,6 +3446,7 @@ _copyAlterFunctionStmt(const AlterFunctionStmt *from) { AlterFunctionStmt *newnode = makeNode(AlterFunctionStmt); + COPY_SCALAR_FIELD(objtype); COPY_NODE_FIELD(func); COPY_NODE_FIELD(actions); @@ -5104,6 +5116,9 @@ copyObjectImpl(const void *from) case T_ClosePortalStmt: retval = _copyClosePortalStmt(from); break; + case T_CallStmt: + retval = _copyCallStmt(from); + break; case T_ClusterStmt: retval = _copyClusterStmt(from); break; diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 2866fd7b4a..2e869a9d5d 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -1201,6 +1201,14 @@ _equalClosePortalStmt(const ClosePortalStmt *a, const ClosePortalStmt *b) return true; } +static bool +_equalCallStmt(const CallStmt *a, const CallStmt *b) +{ + COMPARE_NODE_FIELD(funccall); + + return true; +} + static bool _equalClusterStmt(const ClusterStmt *a, const ClusterStmt *b) { @@ -1364,6 +1372,7 @@ _equalCreateFunctionStmt(const CreateFunctionStmt *a, const CreateFunctionStmt * COMPARE_NODE_FIELD(funcname); COMPARE_NODE_FIELD(parameters); COMPARE_NODE_FIELD(returnType); + COMPARE_SCALAR_FIELD(is_procedure); COMPARE_NODE_FIELD(options); COMPARE_NODE_FIELD(withClause); @@ -1384,6 +1393,7 @@ _equalFunctionParameter(const FunctionParameter *a, const FunctionParameter *b) static bool _equalAlterFunctionStmt(const AlterFunctionStmt *a, const AlterFunctionStmt *b) { + COMPARE_SCALAR_FIELD(objtype); COMPARE_NODE_FIELD(func); COMPARE_NODE_FIELD(actions); @@ -3246,6 +3256,9 @@ equal(const void *a, const void *b) case T_ClosePortalStmt: retval = _equalClosePortalStmt(a, b); break; + case T_CallStmt: + retval = _equalCallStmt(a, b); + break; case T_ClusterStmt: retval = _equalClusterStmt(a, b); break; diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index e5e2956564..6a2d5ad760 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -4401,6 +4401,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid, if (funcform->prolang != SQLlanguageId || funcform->prosecdef || funcform->proretset || + funcform->prorettype == InvalidOid || funcform->prorettype == RECORDOID || !heap_attisnull(func_tuple, Anum_pg_proc_proconfig) || funcform->pronargs != list_length(args)) diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index 
c301ca465d..ebfc94f896 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -253,7 +253,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); AlterCompositeTypeStmt AlterUserMappingStmt AlterRoleStmt AlterRoleSetStmt AlterPolicyStmt AlterDefaultPrivilegesStmt DefACLAction - AnalyzeStmt ClosePortalStmt ClusterStmt CommentStmt + AnalyzeStmt CallStmt ClosePortalStmt ClusterStmt CommentStmt ConstraintsSetStmt CopyStmt CreateAsStmt CreateCastStmt CreateDomainStmt CreateExtensionStmt CreateGroupStmt CreateOpClassStmt CreateOpFamilyStmt AlterOpFamilyStmt CreatePLangStmt @@ -611,7 +611,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); BACKWARD BEFORE BEGIN_P BETWEEN BIGINT BINARY BIT BOOLEAN_P BOTH BY - CACHE CALLED CASCADE CASCADED CASE CAST CATALOG_P CHAIN CHAR_P + CACHE CALL CALLED CASCADE CASCADED CASE CAST CATALOG_P CHAIN CHAR_P CHARACTER CHARACTERISTICS CHECK CHECKPOINT CLASS CLOSE CLUSTER COALESCE COLLATE COLLATION COLUMN COLUMNS COMMENT COMMENTS COMMIT COMMITTED CONCURRENTLY CONFIGURATION CONFLICT CONNECTION CONSTRAINT @@ -660,14 +660,14 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); PARALLEL PARSER PARTIAL PARTITION PASSING PASSWORD PLACING PLANS POLICY POSITION PRECEDING PRECISION PRESERVE PREPARE PREPARED PRIMARY - PRIOR PRIVILEGES PROCEDURAL PROCEDURE PROGRAM PUBLICATION + PRIOR PRIVILEGES PROCEDURAL PROCEDURE PROCEDURES PROGRAM PUBLICATION QUOTE RANGE READ REAL REASSIGN RECHECK RECURSIVE REF REFERENCES REFERENCING REFRESH REINDEX RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA RESET RESTART RESTRICT RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP - ROW ROWS RULE + ROUTINE ROUTINES ROW ROWS RULE SAVEPOINT SCHEMA SCHEMAS SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES SERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE SHOW @@ -845,6 +845,7 @@ stmt : | AlterTSDictionaryStmt | AlterUserMappingStmt | AnalyzeStmt + | CallStmt | CheckPointStmt | ClosePortalStmt | ClusterStmt @@ -940,6 +941,20 @@ stmt : { $$ = NULL; } ; +/***************************************************************************** + * + * CALL statement + * + *****************************************************************************/ + +CallStmt: CALL func_application + { + CallStmt *n = makeNode(CallStmt); + n->funccall = castNode(FuncCall, $2); + $$ = (Node *)n; + } + ; + /***************************************************************************** * * Create a new Postgres DBMS role @@ -4554,6 +4569,24 @@ AlterExtensionContentsStmt: n->object = (Node *) lcons(makeString($9), $7); $$ = (Node *)n; } + | ALTER EXTENSION name add_drop PROCEDURE function_with_argtypes + { + AlterExtensionContentsStmt *n = makeNode(AlterExtensionContentsStmt); + n->extname = $3; + n->action = $4; + n->objtype = OBJECT_PROCEDURE; + n->object = (Node *) $6; + $$ = (Node *)n; + } + | ALTER EXTENSION name add_drop ROUTINE function_with_argtypes + { + AlterExtensionContentsStmt *n = makeNode(AlterExtensionContentsStmt); + n->extname = $3; + n->action = $4; + n->objtype = OBJECT_ROUTINE; + n->object = (Node *) $6; + $$ = (Node *)n; + } | ALTER EXTENSION name add_drop SCHEMA name { AlterExtensionContentsStmt *n = makeNode(AlterExtensionContentsStmt); @@ -6436,6 +6469,22 @@ CommentStmt: n->comment = $8; $$ = (Node *) n; } + | COMMENT ON PROCEDURE function_with_argtypes IS comment_text + { + CommentStmt *n = makeNode(CommentStmt); + n->objtype = OBJECT_PROCEDURE; + n->object = (Node *) $4; + 
n->comment = $6; + $$ = (Node *) n; + } + | COMMENT ON ROUTINE function_with_argtypes IS comment_text + { + CommentStmt *n = makeNode(CommentStmt); + n->objtype = OBJECT_ROUTINE; + n->object = (Node *) $4; + n->comment = $6; + $$ = (Node *) n; + } | COMMENT ON RULE name ON any_name IS comment_text { CommentStmt *n = makeNode(CommentStmt); @@ -6614,6 +6663,26 @@ SecLabelStmt: n->label = $9; $$ = (Node *) n; } + | SECURITY LABEL opt_provider ON PROCEDURE function_with_argtypes + IS security_label + { + SecLabelStmt *n = makeNode(SecLabelStmt); + n->provider = $3; + n->objtype = OBJECT_PROCEDURE; + n->object = (Node *) $6; + n->label = $8; + $$ = (Node *) n; + } + | SECURITY LABEL opt_provider ON ROUTINE function_with_argtypes + IS security_label + { + SecLabelStmt *n = makeNode(SecLabelStmt); + n->provider = $3; + n->objtype = OBJECT_ROUTINE; + n->object = (Node *) $6; + n->label = $8; + $$ = (Node *) n; + } ; opt_provider: FOR NonReservedWord_or_Sconst { $$ = $2; } @@ -6977,6 +7046,22 @@ privilege_target: n->objs = $2; $$ = n; } + | PROCEDURE function_with_argtypes_list + { + PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); + n->targtype = ACL_TARGET_OBJECT; + n->objtype = ACL_OBJECT_PROCEDURE; + n->objs = $2; + $$ = n; + } + | ROUTINE function_with_argtypes_list + { + PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); + n->targtype = ACL_TARGET_OBJECT; + n->objtype = ACL_OBJECT_ROUTINE; + n->objs = $2; + $$ = n; + } | DATABASE name_list { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); @@ -7057,6 +7142,22 @@ privilege_target: n->objs = $5; $$ = n; } + | ALL PROCEDURES IN_P SCHEMA name_list + { + PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); + n->targtype = ACL_TARGET_ALL_IN_SCHEMA; + n->objtype = ACL_OBJECT_PROCEDURE; + n->objs = $5; + $$ = n; + } + | ALL ROUTINES IN_P SCHEMA name_list + { + PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); + n->targtype = ACL_TARGET_ALL_IN_SCHEMA; + n->objtype = ACL_OBJECT_ROUTINE; + n->objs = $5; + $$ = n; + } ; @@ -7213,6 +7314,7 @@ DefACLAction: defacl_privilege_target: TABLES { $$ = ACL_OBJECT_RELATION; } | FUNCTIONS { $$ = ACL_OBJECT_FUNCTION; } + | ROUTINES { $$ = ACL_OBJECT_FUNCTION; } | SEQUENCES { $$ = ACL_OBJECT_SEQUENCE; } | TYPES_P { $$ = ACL_OBJECT_TYPE; } | SCHEMAS { $$ = ACL_OBJECT_NAMESPACE; } @@ -7413,6 +7515,18 @@ CreateFunctionStmt: n->withClause = $7; $$ = (Node *)n; } + | CREATE opt_or_replace PROCEDURE func_name func_args_with_defaults + createfunc_opt_list + { + CreateFunctionStmt *n = makeNode(CreateFunctionStmt); + n->replace = $2; + n->funcname = $4; + n->parameters = $5; + n->returnType = NULL; + n->is_procedure = true; + n->options = $6; + $$ = (Node *)n; + } ; opt_or_replace: @@ -7830,7 +7944,7 @@ table_func_column_list: ; /***************************************************************************** - * ALTER FUNCTION + * ALTER FUNCTION / ALTER PROCEDURE / ALTER ROUTINE * * RENAME and OWNER subcommands are already provided by the generic * ALTER infrastructure, here we just specify alterations that can @@ -7841,6 +7955,23 @@ AlterFunctionStmt: ALTER FUNCTION function_with_argtypes alterfunc_opt_list opt_restrict { AlterFunctionStmt *n = makeNode(AlterFunctionStmt); + n->objtype = OBJECT_FUNCTION; + n->func = $3; + n->actions = $4; + $$ = (Node *) n; + } + | ALTER PROCEDURE function_with_argtypes alterfunc_opt_list opt_restrict + { + AlterFunctionStmt *n = makeNode(AlterFunctionStmt); + n->objtype = OBJECT_PROCEDURE; + n->func = $3; + n->actions = $4; + $$ = (Node *) n; + } + 
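+ /*
+  * Hypothetical statements these alternatives accept (object names
+  * are illustrative only):
+  *   ALTER FUNCTION f(int) STRICT;
+  *   ALTER PROCEDURE p(int) SET search_path TO public;
+  *   ALTER ROUTINE r(int) RESET ALL;
+  */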
| ALTER ROUTINE function_with_argtypes alterfunc_opt_list opt_restrict + { + AlterFunctionStmt *n = makeNode(AlterFunctionStmt); + n->objtype = OBJECT_ROUTINE; n->func = $3; n->actions = $4; $$ = (Node *) n; @@ -7865,6 +7996,8 @@ opt_restrict: * QUERY: * * DROP FUNCTION funcname (arg1, arg2, ...) [ RESTRICT | CASCADE ] + * DROP PROCEDURE procname (arg1, arg2, ...) [ RESTRICT | CASCADE ] + * DROP ROUTINE routname (arg1, arg2, ...) [ RESTRICT | CASCADE ] * DROP AGGREGATE aggname (arg1, ...) [ RESTRICT | CASCADE ] * DROP OPERATOR opname (leftoperand_typ, rightoperand_typ) [ RESTRICT | CASCADE ] * @@ -7891,6 +8024,46 @@ RemoveFuncStmt: n->concurrent = false; $$ = (Node *)n; } + | DROP PROCEDURE function_with_argtypes_list opt_drop_behavior + { + DropStmt *n = makeNode(DropStmt); + n->removeType = OBJECT_PROCEDURE; + n->objects = $3; + n->behavior = $4; + n->missing_ok = false; + n->concurrent = false; + $$ = (Node *)n; + } + | DROP PROCEDURE IF_P EXISTS function_with_argtypes_list opt_drop_behavior + { + DropStmt *n = makeNode(DropStmt); + n->removeType = OBJECT_PROCEDURE; + n->objects = $5; + n->behavior = $6; + n->missing_ok = true; + n->concurrent = false; + $$ = (Node *)n; + } + | DROP ROUTINE function_with_argtypes_list opt_drop_behavior + { + DropStmt *n = makeNode(DropStmt); + n->removeType = OBJECT_ROUTINE; + n->objects = $3; + n->behavior = $4; + n->missing_ok = false; + n->concurrent = false; + $$ = (Node *)n; + } + | DROP ROUTINE IF_P EXISTS function_with_argtypes_list opt_drop_behavior + { + DropStmt *n = makeNode(DropStmt); + n->removeType = OBJECT_ROUTINE; + n->objects = $5; + n->behavior = $6; + n->missing_ok = true; + n->concurrent = false; + $$ = (Node *)n; + } ; RemoveAggrStmt: @@ -8348,6 +8521,15 @@ RenameStmt: ALTER AGGREGATE aggregate_with_argtypes RENAME TO name n->missing_ok = true; $$ = (Node *)n; } + | ALTER PROCEDURE function_with_argtypes RENAME TO name + { + RenameStmt *n = makeNode(RenameStmt); + n->renameType = OBJECT_PROCEDURE; + n->object = (Node *) $3; + n->newname = $6; + n->missing_ok = false; + $$ = (Node *)n; + } | ALTER PUBLICATION name RENAME TO name { RenameStmt *n = makeNode(RenameStmt); @@ -8357,6 +8539,15 @@ RenameStmt: ALTER AGGREGATE aggregate_with_argtypes RENAME TO name n->missing_ok = false; $$ = (Node *)n; } + | ALTER ROUTINE function_with_argtypes RENAME TO name + { + RenameStmt *n = makeNode(RenameStmt); + n->renameType = OBJECT_ROUTINE; + n->object = (Node *) $3; + n->newname = $6; + n->missing_ok = false; + $$ = (Node *)n; + } | ALTER SCHEMA name RENAME TO name { RenameStmt *n = makeNode(RenameStmt); @@ -8736,6 +8927,22 @@ AlterObjectDependsStmt: n->extname = makeString($7); $$ = (Node *)n; } + | ALTER PROCEDURE function_with_argtypes DEPENDS ON EXTENSION name + { + AlterObjectDependsStmt *n = makeNode(AlterObjectDependsStmt); + n->objectType = OBJECT_PROCEDURE; + n->object = (Node *) $3; + n->extname = makeString($7); + $$ = (Node *)n; + } + | ALTER ROUTINE function_with_argtypes DEPENDS ON EXTENSION name + { + AlterObjectDependsStmt *n = makeNode(AlterObjectDependsStmt); + n->objectType = OBJECT_ROUTINE; + n->object = (Node *) $3; + n->extname = makeString($7); + $$ = (Node *)n; + } | ALTER TRIGGER name ON qualified_name DEPENDS ON EXTENSION name { AlterObjectDependsStmt *n = makeNode(AlterObjectDependsStmt); @@ -8851,6 +9058,24 @@ AlterObjectSchemaStmt: n->missing_ok = false; $$ = (Node *)n; } + | ALTER PROCEDURE function_with_argtypes SET SCHEMA name + { + AlterObjectSchemaStmt *n = makeNode(AlterObjectSchemaStmt); + n->objectType = 
OBJECT_PROCEDURE; + n->object = (Node *) $3; + n->newschema = $6; + n->missing_ok = false; + $$ = (Node *)n; + } + | ALTER ROUTINE function_with_argtypes SET SCHEMA name + { + AlterObjectSchemaStmt *n = makeNode(AlterObjectSchemaStmt); + n->objectType = OBJECT_ROUTINE; + n->object = (Node *) $3; + n->newschema = $6; + n->missing_ok = false; + $$ = (Node *)n; + } | ALTER TABLE relation_expr SET SCHEMA name { AlterObjectSchemaStmt *n = makeNode(AlterObjectSchemaStmt); @@ -9126,6 +9351,22 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec n->newowner = $9; $$ = (Node *)n; } + | ALTER PROCEDURE function_with_argtypes OWNER TO RoleSpec + { + AlterOwnerStmt *n = makeNode(AlterOwnerStmt); + n->objectType = OBJECT_PROCEDURE; + n->object = (Node *) $3; + n->newowner = $6; + $$ = (Node *)n; + } + | ALTER ROUTINE function_with_argtypes OWNER TO RoleSpec + { + AlterOwnerStmt *n = makeNode(AlterOwnerStmt); + n->objectType = OBJECT_ROUTINE; + n->object = (Node *) $3; + n->newowner = $6; + $$ = (Node *)n; + } | ALTER SCHEMA name OWNER TO RoleSpec { AlterOwnerStmt *n = makeNode(AlterOwnerStmt); @@ -14689,6 +14930,7 @@ unreserved_keyword: | BEGIN_P | BY | CACHE + | CALL | CALLED | CASCADE | CASCADED @@ -14848,6 +15090,7 @@ unreserved_keyword: | PRIVILEGES | PROCEDURAL | PROCEDURE + | PROCEDURES | PROGRAM | PUBLICATION | QUOTE @@ -14874,6 +15117,8 @@ unreserved_keyword: | ROLE | ROLLBACK | ROLLUP + | ROUTINE + | ROUTINES | ROWS | RULE | SAVEPOINT diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c index 64111f315e..4c4f4cdc3d 100644 --- a/src/backend/parser/parse_agg.c +++ b/src/backend/parser/parse_agg.c @@ -508,6 +508,14 @@ check_agglevels_and_constraints(ParseState *pstate, Node *expr) break; + case EXPR_KIND_CALL: + if (isAgg) + err = _("aggregate functions are not allowed in CALL arguments"); + else + err = _("grouping operations are not allowed in CALL arguments"); + + break; + /* * There is intentionally no default: case here, so that the * compiler will warn if we add a new ParseExprKind without @@ -883,6 +891,9 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc, case EXPR_KIND_PARTITION_EXPRESSION: err = _("window functions are not allowed in partition key expression"); break; + case EXPR_KIND_CALL: + err = _("window functions are not allowed in CALL arguments"); + break; /* * There is intentionally no default: case here, so that the diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c index 86d1da0677..29f9da796f 100644 --- a/src/backend/parser/parse_expr.c +++ b/src/backend/parser/parse_expr.c @@ -480,6 +480,7 @@ transformIndirection(ParseState *pstate, A_Indirection *ind) list_make1(result), last_srf, NULL, + false, location); if (newresult == NULL) unknown_attribute(pstate, result, strVal(n), location); @@ -629,6 +630,7 @@ transformColumnRef(ParseState *pstate, ColumnRef *cref) list_make1(node), pstate->p_last_srf, NULL, + false, cref->location); } break; @@ -676,6 +678,7 @@ transformColumnRef(ParseState *pstate, ColumnRef *cref) list_make1(node), pstate->p_last_srf, NULL, + false, cref->location); } break; @@ -736,6 +739,7 @@ transformColumnRef(ParseState *pstate, ColumnRef *cref) list_make1(node), pstate->p_last_srf, NULL, + false, cref->location); } break; @@ -1477,6 +1481,7 @@ transformFuncCall(ParseState *pstate, FuncCall *fn) targs, last_srf, fn, + false, fn->location); } @@ -1812,6 +1817,7 @@ transformSubLink(ParseState *pstate, SubLink *sublink) case EXPR_KIND_RETURNING: case EXPR_KIND_VALUES: 
case EXPR_KIND_VALUES_SINGLE: + case EXPR_KIND_CALL: /* okay */ break; case EXPR_KIND_CHECK_CONSTRAINT: @@ -3462,6 +3468,8 @@ ParseExprKindName(ParseExprKind exprKind) return "WHEN"; case EXPR_KIND_PARTITION_EXPRESSION: return "PARTITION BY"; + case EXPR_KIND_CALL: + return "CALL"; /* * There is intentionally no default: case here, so that the diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c index a11843332b..2f20516e76 100644 --- a/src/backend/parser/parse_func.c +++ b/src/backend/parser/parse_func.c @@ -71,7 +71,7 @@ static Node *ParseComplexProjection(ParseState *pstate, const char *funcname, */ Node * ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs, - Node *last_srf, FuncCall *fn, int location) + Node *last_srf, FuncCall *fn, bool proc_call, int location) { bool is_column = (fn == NULL); List *agg_order = (fn ? fn->agg_order : NIL); @@ -263,7 +263,7 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs, actual_arg_types[0], rettype, -1, COERCION_EXPLICIT, COERCE_EXPLICIT_CALL, location); } - else if (fdresult == FUNCDETAIL_NORMAL) + else if (fdresult == FUNCDETAIL_NORMAL || fdresult == FUNCDETAIL_PROCEDURE) { /* * Normal function found; was there anything indicating it must be an @@ -306,6 +306,26 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs, errmsg("OVER specified, but %s is not a window function nor an aggregate function", NameListToString(funcname)), parser_errposition(pstate, location))); + + if (fdresult == FUNCDETAIL_NORMAL && proc_call) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("%s is not a procedure", + func_signature_string(funcname, nargs, + argnames, + actual_arg_types)), + errhint("To call a function, use SELECT."), + parser_errposition(pstate, location))); + + if (fdresult == FUNCDETAIL_PROCEDURE && !proc_call) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("%s is a procedure", + func_signature_string(funcname, nargs, + argnames, + actual_arg_types)), + errhint("To call a procedure, use CALL."), + parser_errposition(pstate, location))); } else if (fdresult == FUNCDETAIL_AGGREGATE) { @@ -635,7 +655,7 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs, check_srf_call_placement(pstate, last_srf, location); /* build the appropriate output structure */ - if (fdresult == FUNCDETAIL_NORMAL) + if (fdresult == FUNCDETAIL_NORMAL || fdresult == FUNCDETAIL_PROCEDURE) { FuncExpr *funcexpr = makeNode(FuncExpr); @@ -1589,6 +1609,8 @@ func_get_detail(List *funcname, result = FUNCDETAIL_AGGREGATE; else if (pform->proiswindow) result = FUNCDETAIL_WINDOWFUNC; + else if (pform->prorettype == InvalidOid) + result = FUNCDETAIL_PROCEDURE; else result = FUNCDETAIL_NORMAL; ReleaseSysCache(ftup); @@ -1984,16 +2006,28 @@ LookupFuncName(List *funcname, int nargs, const Oid *argtypes, bool noError) /* * LookupFuncWithArgs - * Like LookupFuncName, but the argument types are specified by a - * ObjectWithArgs node. + * + * Like LookupFuncName, but the argument types are specified by an + * ObjectWithArgs node. Also, this function can check whether the result is a + * function, procedure, or aggregate, based on the objtype argument. Pass + * OBJECT_ROUTINE to accept any of them. + * + * For historical reasons, we also accept aggregates when looking for a + * function. 
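+ *
+ * Hypothetical calls, for illustration only (names invented, not from
+ * this patch): LookupFuncWithArgs(OBJECT_PROCEDURE, owa, false) raises
+ * an error unless "owa" resolves to a procedure, while
+ * LookupFuncWithArgs(OBJECT_ROUTINE, owa, true) accepts a function,
+ * procedure, or aggregate and returns InvalidOid instead of raising an
+ * error when nothing matches.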
*/ Oid -LookupFuncWithArgs(ObjectWithArgs *func, bool noError) +LookupFuncWithArgs(ObjectType objtype, ObjectWithArgs *func, bool noError) { Oid argoids[FUNC_MAX_ARGS]; int argcount; int i; ListCell *args_item; + Oid oid; + + Assert(objtype == OBJECT_AGGREGATE || + objtype == OBJECT_FUNCTION || + objtype == OBJECT_PROCEDURE || + objtype == OBJECT_ROUTINE); argcount = list_length(func->objargs); if (argcount > FUNC_MAX_ARGS) @@ -2013,90 +2047,100 @@ LookupFuncWithArgs(ObjectWithArgs *func, bool noError) args_item = lnext(args_item); } - return LookupFuncName(func->objname, func->args_unspecified ? -1 : argcount, argoids, noError); -} - -/* - * LookupAggWithArgs - * Find an aggregate function from a given ObjectWithArgs node. - * - * This is almost like LookupFuncWithArgs, but the error messages refer - * to aggregates rather than plain functions, and we verify that the found - * function really is an aggregate. - */ -Oid -LookupAggWithArgs(ObjectWithArgs *agg, bool noError) -{ - Oid argoids[FUNC_MAX_ARGS]; - int argcount; - int i; - ListCell *lc; - Oid oid; - HeapTuple ftup; - Form_pg_proc pform; - - argcount = list_length(agg->objargs); - if (argcount > FUNC_MAX_ARGS) - ereport(ERROR, - (errcode(ERRCODE_TOO_MANY_ARGUMENTS), - errmsg_plural("functions cannot have more than %d argument", - "functions cannot have more than %d arguments", - FUNC_MAX_ARGS, - FUNC_MAX_ARGS))); + /* + * When looking for a function or routine, we pass noError through to + * LookupFuncName and let it make any error messages. Otherwise, we make + * our own errors for the aggregate and procedure cases. + */ + oid = LookupFuncName(func->objname, func->args_unspecified ? -1 : argcount, argoids, + (objtype == OBJECT_FUNCTION || objtype == OBJECT_ROUTINE) ? noError : true); - i = 0; - foreach(lc, agg->objargs) + if (objtype == OBJECT_FUNCTION) { - TypeName *t = (TypeName *) lfirst(lc); - - argoids[i] = LookupTypeNameOid(NULL, t, noError); - i++; + /* Make sure it's a function, not a procedure */ + if (oid && get_func_rettype(oid) == InvalidOid) + { + if (noError) + return InvalidOid; + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("%s is not a function", + func_signature_string(func->objname, argcount, + NIL, argoids)))); + } } - - oid = LookupFuncName(agg->objname, argcount, argoids, true); - - if (!OidIsValid(oid)) + else if (objtype == OBJECT_PROCEDURE) { - if (noError) - return InvalidOid; - if (argcount == 0) - ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_FUNCTION), - errmsg("aggregate %s(*) does not exist", - NameListToString(agg->objname)))); - else + if (!OidIsValid(oid)) + { + if (noError) + return InvalidOid; + else if (func->args_unspecified) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("could not find a procedure named \"%s\"", + NameListToString(func->objname)))); + else + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("procedure %s does not exist", + func_signature_string(func->objname, argcount, + NIL, argoids)))); + } + + /* Make sure it's a procedure */ + if (get_func_rettype(oid) != InvalidOid) + { + if (noError) + return InvalidOid; ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_FUNCTION), - errmsg("aggregate %s does not exist", - func_signature_string(agg->objname, argcount, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("%s is not a procedure", + func_signature_string(func->objname, argcount, NIL, argoids)))); + } } - - /* Make sure it's an aggregate */ - ftup = SearchSysCache1(PROCOID, ObjectIdGetDatum(oid)); - if (!HeapTupleIsValid(ftup)) /* 
should not happen */ - elog(ERROR, "cache lookup failed for function %u", oid); - pform = (Form_pg_proc) GETSTRUCT(ftup); - - if (!pform->proisagg) + else if (objtype == OBJECT_AGGREGATE) { - ReleaseSysCache(ftup); - if (noError) - return InvalidOid; - /* we do not use the (*) notation for functions... */ - ereport(ERROR, - (errcode(ERRCODE_WRONG_OBJECT_TYPE), - errmsg("function %s is not an aggregate", - func_signature_string(agg->objname, argcount, - NIL, argoids)))); - } + if (!OidIsValid(oid)) + { + if (noError) + return InvalidOid; + else if (func->args_unspecified) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("could not find an aggregate named \"%s\"", + NameListToString(func->objname)))); + else if (argcount == 0) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("aggregate %s(*) does not exist", + NameListToString(func->objname)))); + else + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("aggregate %s does not exist", + func_signature_string(func->objname, argcount, + NIL, argoids)))); + } - ReleaseSysCache(ftup); + /* Make sure it's an aggregate */ + if (!get_func_isagg(oid)) + { + if (noError) + return InvalidOid; + /* we do not use the (*) notation for functions... */ + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("function %s is not an aggregate", + func_signature_string(func->objname, argcount, + NIL, argoids)))); + } + } return oid; } - /* * check_srf_call_placement * Verify that a set-returning function is called in a valid place, @@ -2236,6 +2280,9 @@ check_srf_call_placement(ParseState *pstate, Node *last_srf, int location) case EXPR_KIND_PARTITION_EXPRESSION: err = _("set-returning functions are not allowed in partition key expressions"); break; + case EXPR_KIND_CALL: + err = _("set-returning functions are not allowed in CALL arguments"); + break; /* * There is intentionally no default: case here, so that the diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index 82a707af7b..4da1f8f643 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -657,6 +657,10 @@ standard_ProcessUtility(PlannedStmt *pstmt, } break; + case T_CallStmt: + ExecuteCallStmt(pstate, castNode(CallStmt, parsetree)); + break; + case T_ClusterStmt: /* we choose to allow this during "read only" transactions */ PreventCommandDuringRecovery("CLUSTER"); @@ -1957,9 +1961,15 @@ AlterObjectTypeCommandTag(ObjectType objtype) case OBJECT_POLICY: tag = "ALTER POLICY"; break; + case OBJECT_PROCEDURE: + tag = "ALTER PROCEDURE"; + break; case OBJECT_ROLE: tag = "ALTER ROLE"; break; + case OBJECT_ROUTINE: + tag = "ALTER ROUTINE"; + break; case OBJECT_RULE: tag = "ALTER RULE"; break; @@ -2261,6 +2271,12 @@ CreateCommandTag(Node *parsetree) case OBJECT_FUNCTION: tag = "DROP FUNCTION"; break; + case OBJECT_PROCEDURE: + tag = "DROP PROCEDURE"; + break; + case OBJECT_ROUTINE: + tag = "DROP ROUTINE"; + break; case OBJECT_AGGREGATE: tag = "DROP AGGREGATE"; break; @@ -2359,7 +2375,20 @@ CreateCommandTag(Node *parsetree) break; case T_AlterFunctionStmt: - tag = "ALTER FUNCTION"; + switch (((AlterFunctionStmt *) parsetree)->objtype) + { + case OBJECT_FUNCTION: + tag = "ALTER FUNCTION"; + break; + case OBJECT_PROCEDURE: + tag = "ALTER PROCEDURE"; + break; + case OBJECT_ROUTINE: + tag = "ALTER ROUTINE"; + break; + default: + tag = "???"; + } break; case T_GrantStmt: @@ -2438,7 +2467,10 @@ CreateCommandTag(Node *parsetree) break; case T_CreateFunctionStmt: - tag = "CREATE FUNCTION"; + if (((CreateFunctionStmt *) 
parsetree)->is_procedure) + tag = "CREATE PROCEDURE"; + else + tag = "CREATE FUNCTION"; break; case T_IndexStmt: @@ -2493,6 +2525,10 @@ CreateCommandTag(Node *parsetree) tag = "LOAD"; break; + case T_CallStmt: + tag = "CALL"; + break; + case T_ClusterStmt: tag = "CLUSTER"; break; @@ -3116,6 +3152,10 @@ GetCommandLogLevel(Node *parsetree) lev = LOGSTMT_ALL; break; + case T_CallStmt: + lev = LOGSTMT_ALL; + break; + case T_ClusterStmt: lev = LOGSTMT_DDL; break; diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 06cf32f5d7..8514c21c40 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -2691,6 +2691,12 @@ pg_get_function_result(PG_FUNCTION_ARGS) if (!HeapTupleIsValid(proctup)) PG_RETURN_NULL(); + if (((Form_pg_proc) GETSTRUCT(proctup))->prorettype == InvalidOid) + { + ReleaseSysCache(proctup); + PG_RETURN_NULL(); + } + initStringInfo(&buf); print_function_rettype(&buf, proctup); diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c index 0ea2f2bc54..5211360777 100644 --- a/src/backend/utils/cache/lsyscache.c +++ b/src/backend/utils/cache/lsyscache.c @@ -1614,6 +1614,25 @@ func_parallel(Oid funcid) return result; } +/* + * get_func_isagg + * Given procedure id, return the function's proisagg field. + */ +bool +get_func_isagg(Oid funcid) +{ + HeapTuple tp; + bool result; + + tp = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid)); + if (!HeapTupleIsValid(tp)) + elog(ERROR, "cache lookup failed for function %u", funcid); + + result = ((Form_pg_proc) GETSTRUCT(tp))->proisagg; + ReleaseSysCache(tp); + return result; +} + /* * get_func_leakproof * Given procedure id, return the function's leakproof field. diff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c index 70d8f24d17..12290a1aae 100644 --- a/src/bin/pg_dump/dumputils.c +++ b/src/bin/pg_dump/dumputils.c @@ -33,7 +33,7 @@ static void AddAcl(PQExpBuffer aclbuf, const char *keyword, * name: the object name, in the form to use in the commands (already quoted) * subname: the sub-object name, if any (already quoted); NULL if none * type: the object type (as seen in GRANT command: must be one of - * TABLE, SEQUENCE, FUNCTION, LANGUAGE, SCHEMA, DATABASE, TABLESPACE, + * TABLE, SEQUENCE, FUNCTION, PROCEDURE, LANGUAGE, SCHEMA, DATABASE, TABLESPACE, * FOREIGN DATA WRAPPER, SERVER, or LARGE OBJECT) * acls: the ACL string fetched from the database * racls: the ACL string of any initial-but-now-revoked privileges @@ -524,6 +524,9 @@ do { \ else if (strcmp(type, "FUNCTION") == 0 || strcmp(type, "FUNCTIONS") == 0) CONVERT_PRIV('X', "EXECUTE"); + else if (strcmp(type, "PROCEDURE") == 0 || + strcmp(type, "PROCEDURES") == 0) + CONVERT_PRIV('X', "EXECUTE"); else if (strcmp(type, "LANGUAGE") == 0) CONVERT_PRIV('U', "USAGE"); else if (strcmp(type, "SCHEMA") == 0 || diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index ec2fa8b9b9..41741aefbc 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -2889,7 +2889,8 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) if (ropt->indexNames.head != NULL && (!(simple_string_list_member(&ropt->indexNames, te->tag)))) return 0; } - else if (strcmp(te->desc, "FUNCTION") == 0) + else if (strcmp(te->desc, "FUNCTION") == 0 || + strcmp(te->desc, "PROCEDURE") == 0) { if (!ropt->selFunction) return 0; @@ -3388,7 +3389,8 @@ _getObjectDescription(PQExpBuffer buf, TocEntry *te, ArchiveHandle *AH) 
strcmp(type, "FUNCTION") == 0 || strcmp(type, "OPERATOR") == 0 || strcmp(type, "OPERATOR CLASS") == 0 || - strcmp(type, "OPERATOR FAMILY") == 0) + strcmp(type, "OPERATOR FAMILY") == 0 || + strcmp(type, "PROCEDURE") == 0) { /* Chop "DROP " off the front and make a modifiable copy */ char *first = pg_strdup(te->dropStmt + 5); @@ -3560,6 +3562,7 @@ _printTocEntry(ArchiveHandle *AH, TocEntry *te, bool isData) strcmp(te->desc, "OPERATOR") == 0 || strcmp(te->desc, "OPERATOR CLASS") == 0 || strcmp(te->desc, "OPERATOR FAMILY") == 0 || + strcmp(te->desc, "PROCEDURE") == 0 || strcmp(te->desc, "PROCEDURAL LANGUAGE") == 0 || strcmp(te->desc, "SCHEMA") == 0 || strcmp(te->desc, "EVENT TRIGGER") == 0 || diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index d8fb356130..e6701aaa78 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -11349,6 +11349,7 @@ dumpFunc(Archive *fout, FuncInfo *finfo) char *funcargs; char *funciargs; char *funcresult; + bool is_procedure; char *proallargtypes; char *proargmodes; char *proargnames; @@ -11370,6 +11371,7 @@ dumpFunc(Archive *fout, FuncInfo *finfo) char **argnames = NULL; char **configitems = NULL; int nconfigitems = 0; + const char *keyword; int i; /* Skip if not to be dumped */ @@ -11513,7 +11515,11 @@ dumpFunc(Archive *fout, FuncInfo *finfo) { funcargs = PQgetvalue(res, 0, PQfnumber(res, "funcargs")); funciargs = PQgetvalue(res, 0, PQfnumber(res, "funciargs")); - funcresult = PQgetvalue(res, 0, PQfnumber(res, "funcresult")); + is_procedure = PQgetisnull(res, 0, PQfnumber(res, "funcresult")); + if (is_procedure) + funcresult = NULL; + else + funcresult = PQgetvalue(res, 0, PQfnumber(res, "funcresult")); proallargtypes = proargmodes = proargnames = NULL; } else @@ -11522,6 +11528,7 @@ dumpFunc(Archive *fout, FuncInfo *finfo) proargmodes = PQgetvalue(res, 0, PQfnumber(res, "proargmodes")); proargnames = PQgetvalue(res, 0, PQfnumber(res, "proargnames")); funcargs = funciargs = funcresult = NULL; + is_procedure = false; } if (PQfnumber(res, "protrftypes") != -1) protrftypes = PQgetvalue(res, 0, PQfnumber(res, "protrftypes")); @@ -11653,22 +11660,29 @@ dumpFunc(Archive *fout, FuncInfo *finfo) funcsig_tag = format_function_signature(fout, finfo, false); + keyword = is_procedure ? "PROCEDURE" : "FUNCTION"; + /* * DROP must be fully qualified in case same name appears in pg_catalog */ - appendPQExpBuffer(delqry, "DROP FUNCTION %s.%s;\n", + appendPQExpBuffer(delqry, "DROP %s %s.%s;\n", + keyword, fmtId(finfo->dobj.namespace->dobj.name), funcsig); - appendPQExpBuffer(q, "CREATE FUNCTION %s ", funcfullsig ? funcfullsig : + appendPQExpBuffer(q, "CREATE %s %s", + keyword, + funcfullsig ? funcfullsig : funcsig); - if (funcresult) - appendPQExpBuffer(q, "RETURNS %s", funcresult); + if (is_procedure) + ; + else if (funcresult) + appendPQExpBuffer(q, " RETURNS %s", funcresult); else { rettypename = getFormattedTypeName(fout, finfo->prorettype, zeroAsOpaque); - appendPQExpBuffer(q, "RETURNS %s%s", + appendPQExpBuffer(q, " RETURNS %s%s", (proretset[0] == 't') ? 
"SETOF " : "", rettypename); free(rettypename); @@ -11775,7 +11789,7 @@ dumpFunc(Archive *fout, FuncInfo *finfo) appendPQExpBuffer(q, "\n %s;\n", asPart->data); - appendPQExpBuffer(labelq, "FUNCTION %s", funcsig); + appendPQExpBuffer(labelq, "%s %s", keyword, funcsig); if (dopt->binary_upgrade) binary_upgrade_extension_member(q, &finfo->dobj, labelq->data); @@ -11786,7 +11800,7 @@ dumpFunc(Archive *fout, FuncInfo *finfo) finfo->dobj.namespace->dobj.name, NULL, finfo->rolname, false, - "FUNCTION", SECTION_PRE_DATA, + keyword, SECTION_PRE_DATA, q->data, delqry->data, NULL, NULL, 0, NULL, NULL); @@ -11803,7 +11817,7 @@ dumpFunc(Archive *fout, FuncInfo *finfo) finfo->dobj.catId, 0, finfo->dobj.dumpId); if (finfo->dobj.dump & DUMP_COMPONENT_ACL) - dumpACL(fout, finfo->dobj.catId, finfo->dobj.dumpId, "FUNCTION", + dumpACL(fout, finfo->dobj.catId, finfo->dobj.dumpId, keyword, funcsig, NULL, funcsig_tag, finfo->dobj.namespace->dobj.name, finfo->rolname, finfo->proacl, finfo->rproacl, diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl index fa3b56a426..7cf9bdadb2 100644 --- a/src/bin/pg_dump/t/002_pg_dump.pl +++ b/src/bin/pg_dump/t/002_pg_dump.pl @@ -3654,6 +3654,44 @@ section_data => 1, section_post_data => 1, }, }, + 'CREATE PROCEDURE dump_test.ptest1' => { + all_runs => 1, + create_order => 41, + create_sql => 'CREATE PROCEDURE dump_test.ptest1(a int) + LANGUAGE SQL AS $$ INSERT INTO dump_test.test_table (col1) VALUES (a) $$;', + regexp => qr/^ + \QCREATE PROCEDURE ptest1(a integer)\E + \n\s+\QLANGUAGE sql\E + \n\s+AS\ \$\$\Q INSERT INTO dump_test.test_table (col1) VALUES (a) \E\$\$; + /xm, + like => { + binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, + createdb => 1, + defaults => 1, + exclude_test_table => 1, + exclude_test_table_data => 1, + no_blobs => 1, + no_privs => 1, + no_owner => 1, + only_dump_test_schema => 1, + pg_dumpall_dbprivs => 1, + schema_only => 1, + section_pre_data => 1, + test_schema_plus_blobs => 1, + with_oids => 1, }, + unlike => { + column_inserts => 1, + data_only => 1, + exclude_dump_test_schema => 1, + only_dump_test_table => 1, + pg_dumpall_globals => 1, + pg_dumpall_globals_clean => 1, + role => 1, + section_data => 1, + section_post_data => 1, }, }, + 'CREATE TYPE dump_test.int42 populated' => { all_runs => 1, create_order => 42, diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index 804a84a0c9..3fc69c46c0 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -353,6 +353,7 @@ describeFunctions(const char *functypes, const char *pattern, bool verbose, bool " CASE\n" " WHEN p.proisagg THEN '%s'\n" " WHEN p.proiswindow THEN '%s'\n" + " WHEN p.prorettype = 0 THEN '%s'\n" " WHEN p.prorettype = 'pg_catalog.trigger'::pg_catalog.regtype THEN '%s'\n" " ELSE '%s'\n" " END as \"%s\"", @@ -361,8 +362,9 @@ describeFunctions(const char *functypes, const char *pattern, bool verbose, bool /* translator: "agg" is short for "aggregate" */ gettext_noop("agg"), gettext_noop("window"), + gettext_noop("proc"), gettext_noop("trigger"), - gettext_noop("normal"), + gettext_noop("func"), gettext_noop("Type")); else if (pset.sversion >= 80100) appendPQExpBuffer(&buf, @@ -407,7 +409,7 @@ describeFunctions(const char *functypes, const char *pattern, bool verbose, bool /* translator: "agg" is short for "aggregate" */ gettext_noop("agg"), gettext_noop("trigger"), - gettext_noop("normal"), + gettext_noop("func"), gettext_noop("Type")); else appendPQExpBuffer(&buf, @@ -424,7 +426,7 @@ describeFunctions(const char *functypes, 
const char *pattern, bool verbose, bool /* translator: "agg" is short for "aggregate" */ gettext_noop("agg"), gettext_noop("trigger"), - gettext_noop("normal"), + gettext_noop("func"), gettext_noop("Type")); if (verbose) diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index b3e3799c13..468e50aa31 100644 --- a/src/bin/psql/tab-complete.c +++ b/src/bin/psql/tab-complete.c @@ -397,7 +397,7 @@ static const SchemaQuery Query_for_list_of_functions = { /* catname */ "pg_catalog.pg_proc p", /* selcondition */ - NULL, + "p.prorettype <> 0", /* viscondition */ "pg_catalog.pg_function_is_visible(p.oid)", /* namespace */ @@ -423,6 +423,36 @@ static const SchemaQuery Query_for_list_of_indexes = { NULL }; +static const SchemaQuery Query_for_list_of_procedures = { + /* catname */ + "pg_catalog.pg_proc p", + /* selcondition */ + "p.prorettype = 0", + /* viscondition */ + "pg_catalog.pg_function_is_visible(p.oid)", + /* namespace */ + "p.pronamespace", + /* result */ + "pg_catalog.quote_ident(p.proname)", + /* qualresult */ + NULL +}; + +static const SchemaQuery Query_for_list_of_routines = { + /* catname */ + "pg_catalog.pg_proc p", + /* selcondition */ + NULL, + /* viscondition */ + "pg_catalog.pg_function_is_visible(p.oid)", + /* namespace */ + "p.pronamespace", + /* result */ + "pg_catalog.quote_ident(p.proname)", + /* qualresult */ + NULL +}; + static const SchemaQuery Query_for_list_of_sequences = { /* catname */ "pg_catalog.pg_class c", @@ -1032,8 +1062,10 @@ static const pgsql_thing_t words_after_create[] = { {"OWNED", NULL, NULL, THING_NO_CREATE | THING_NO_ALTER}, /* for DROP OWNED BY ... */ {"PARSER", Query_for_list_of_ts_parsers, NULL, THING_NO_SHOW}, {"POLICY", NULL, NULL}, + {"PROCEDURE", NULL, &Query_for_list_of_procedures}, {"PUBLICATION", Query_for_list_of_publications}, {"ROLE", Query_for_list_of_roles}, + {"ROUTINE", NULL, &Query_for_list_of_routines, THING_NO_CREATE}, {"RULE", "SELECT pg_catalog.quote_ident(rulename) FROM pg_catalog.pg_rules WHERE substring(pg_catalog.quote_ident(rulename),1,%d)='%s'"}, {"SCHEMA", Query_for_list_of_schemas}, {"SEQUENCE", NULL, &Query_for_list_of_sequences}, @@ -1407,7 +1439,7 @@ psql_completion(const char *text, int start, int end) /* Known command-starting keywords. */ static const char *const sql_commands[] = { - "ABORT", "ALTER", "ANALYZE", "BEGIN", "CHECKPOINT", "CLOSE", "CLUSTER", + "ABORT", "ALTER", "ANALYZE", "BEGIN", "CALL", "CHECKPOINT", "CLOSE", "CLUSTER", "COMMENT", "COMMIT", "COPY", "CREATE", "DEALLOCATE", "DECLARE", "DELETE FROM", "DISCARD", "DO", "DROP", "END", "EXECUTE", "EXPLAIN", "FETCH", "GRANT", "IMPORT", "INSERT", "LISTEN", "LOAD", "LOCK", @@ -1520,11 +1552,11 @@ psql_completion(const char *text, int start, int end) /* ALTER TABLE,INDEX,MATERIALIZED VIEW ALL IN TABLESPACE xxx OWNED BY xxx */ else if (TailMatches7("ALL", "IN", "TABLESPACE", MatchAny, "OWNED", "BY", MatchAny)) COMPLETE_WITH_CONST("SET TABLESPACE"); - /* ALTER AGGREGATE,FUNCTION */ - else if (Matches3("ALTER", "AGGREGATE|FUNCTION", MatchAny)) + /* ALTER AGGREGATE,FUNCTION,PROCEDURE,ROUTINE */ + else if (Matches3("ALTER", "AGGREGATE|FUNCTION|PROCEDURE|ROUTINE", MatchAny)) COMPLETE_WITH_CONST("("); - /* ALTER AGGREGATE,FUNCTION (...) */ - else if (Matches4("ALTER", "AGGREGATE|FUNCTION", MatchAny, MatchAny)) + /* ALTER AGGREGATE,FUNCTION,PROCEDURE,ROUTINE (...) 
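+	 * (hypothetical example: once "ALTER PROCEDURE p(int) " has been
+	 * typed, the completions offered below are OWNER TO, RENAME TO, and
+	 * SET SCHEMA; the name "p" is invented)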
*/ + else if (Matches4("ALTER", "AGGREGATE|FUNCTION|PROCEDURE|ROUTINE", MatchAny, MatchAny)) { if (ends_with(prev_wd, ')')) COMPLETE_WITH_LIST3("OWNER TO", "RENAME TO", "SET SCHEMA"); @@ -2145,6 +2177,11 @@ psql_completion(const char *text, int start, int end) /* ROLLBACK */ else if (Matches1("ROLLBACK")) COMPLETE_WITH_LIST4("WORK", "TRANSACTION", "TO SAVEPOINT", "PREPARED"); +/* CALL */ + else if (Matches1("CALL")) + COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_procedures, NULL); + else if (Matches2("CALL", MatchAny)) + COMPLETE_WITH_CONST("("); /* CLUSTER */ else if (Matches1("CLUSTER")) COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm, "UNION SELECT 'VERBOSE'"); @@ -2176,6 +2213,7 @@ psql_completion(const char *text, int start, int end) "SERVER", "INDEX", "LANGUAGE", "POLICY", "PUBLICATION", "RULE", "SCHEMA", "SEQUENCE", "STATISTICS", "SUBSCRIPTION", "TABLE", "TYPE", "VIEW", "MATERIALIZED VIEW", "COLUMN", "AGGREGATE", "FUNCTION", + "PROCEDURE", "ROUTINE", "OPERATOR", "TRIGGER", "CONSTRAINT", "DOMAIN", "LARGE OBJECT", "TABLESPACE", "TEXT SEARCH", "ROLE", NULL}; @@ -2685,7 +2723,7 @@ psql_completion(const char *text, int start, int end) "COLLATION|CONVERSION|DOMAIN|EXTENSION|LANGUAGE|PUBLICATION|SCHEMA|SEQUENCE|SERVER|SUBSCRIPTION|STATISTICS|TABLE|TYPE|VIEW", MatchAny) || Matches4("DROP", "ACCESS", "METHOD", MatchAny) || - (Matches4("DROP", "AGGREGATE|FUNCTION", MatchAny, MatchAny) && + (Matches4("DROP", "AGGREGATE|FUNCTION|PROCEDURE|ROUTINE", MatchAny, MatchAny) && ends_with(prev_wd, ')')) || Matches4("DROP", "EVENT", "TRIGGER", MatchAny) || Matches5("DROP", "FOREIGN", "DATA", "WRAPPER", MatchAny) || @@ -2694,9 +2732,9 @@ psql_completion(const char *text, int start, int end) COMPLETE_WITH_LIST2("CASCADE", "RESTRICT"); /* help completing some of the variants */ - else if (Matches3("DROP", "AGGREGATE|FUNCTION", MatchAny)) + else if (Matches3("DROP", "AGGREGATE|FUNCTION|PROCEDURE|ROUTINE", MatchAny)) COMPLETE_WITH_CONST("("); - else if (Matches4("DROP", "AGGREGATE|FUNCTION", MatchAny, "(")) + else if (Matches4("DROP", "AGGREGATE|FUNCTION|PROCEDURE|ROUTINE", MatchAny, "(")) COMPLETE_WITH_FUNCTION_ARG(prev2_wd); else if (Matches2("DROP", "FOREIGN")) COMPLETE_WITH_LIST2("DATA WRAPPER", "TABLE"); @@ -2893,10 +2931,12 @@ psql_completion(const char *text, int start, int end) * objects supported. 
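+	 * (hypothetical GRANT statements reachable through the new targets,
+	 * with invented object and role names:
+	 *   GRANT EXECUTE ON PROCEDURE p(int) TO alice;
+	 *   GRANT EXECUTE ON ALL ROUTINES IN SCHEMA s TO alice;)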
*/ if (HeadMatches3("ALTER", "DEFAULT", "PRIVILEGES")) - COMPLETE_WITH_LIST5("TABLES", "SEQUENCES", "FUNCTIONS", "TYPES", "SCHEMAS"); + COMPLETE_WITH_LIST7("TABLES", "SEQUENCES", "FUNCTIONS", "PROCEDURES", "ROUTINES", "TYPES", "SCHEMAS"); else COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tsvmf, " UNION SELECT 'ALL FUNCTIONS IN SCHEMA'" + " UNION SELECT 'ALL PROCEDURES IN SCHEMA'" + " UNION SELECT 'ALL ROUTINES IN SCHEMA'" " UNION SELECT 'ALL SEQUENCES IN SCHEMA'" " UNION SELECT 'ALL TABLES IN SCHEMA'" " UNION SELECT 'DATABASE'" @@ -2906,6 +2946,8 @@ psql_completion(const char *text, int start, int end) " UNION SELECT 'FUNCTION'" " UNION SELECT 'LANGUAGE'" " UNION SELECT 'LARGE OBJECT'" + " UNION SELECT 'PROCEDURE'" + " UNION SELECT 'ROUTINE'" " UNION SELECT 'SCHEMA'" " UNION SELECT 'SEQUENCE'" " UNION SELECT 'TABLE'" @@ -2913,7 +2955,10 @@ psql_completion(const char *text, int start, int end) " UNION SELECT 'TYPE'"); } else if (TailMatches4("GRANT|REVOKE", MatchAny, "ON", "ALL")) - COMPLETE_WITH_LIST3("FUNCTIONS IN SCHEMA", "SEQUENCES IN SCHEMA", + COMPLETE_WITH_LIST5("FUNCTIONS IN SCHEMA", + "PROCEDURES IN SCHEMA", + "ROUTINES IN SCHEMA", + "SEQUENCES IN SCHEMA", "TABLES IN SCHEMA"); else if (TailMatches4("GRANT|REVOKE", MatchAny, "ON", "FOREIGN")) COMPLETE_WITH_LIST2("DATA WRAPPER", "SERVER"); @@ -2934,6 +2979,10 @@ psql_completion(const char *text, int start, int end) COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_functions, NULL); else if (TailMatches1("LANGUAGE")) COMPLETE_WITH_QUERY(Query_for_list_of_languages); + else if (TailMatches1("PROCEDURE")) + COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_procedures, NULL); + else if (TailMatches1("ROUTINE")) + COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_routines, NULL); else if (TailMatches1("SCHEMA")) COMPLETE_WITH_QUERY(Query_for_list_of_schemas); else if (TailMatches1("SEQUENCE")) @@ -3163,7 +3212,7 @@ psql_completion(const char *text, int start, int end) static const char *const list_SECURITY_LABEL[] = {"TABLE", "COLUMN", "AGGREGATE", "DATABASE", "DOMAIN", "EVENT TRIGGER", "FOREIGN TABLE", "FUNCTION", "LARGE OBJECT", - "MATERIALIZED VIEW", "LANGUAGE", "PUBLICATION", "ROLE", "SCHEMA", + "MATERIALIZED VIEW", "LANGUAGE", "PUBLICATION", "PROCEDURE", "ROLE", "ROUTINE", "SCHEMA", "SEQUENCE", "SUBSCRIPTION", "TABLESPACE", "TYPE", "VIEW", NULL}; COMPLETE_WITH_LIST(list_SECURITY_LABEL); @@ -3233,8 +3282,8 @@ psql_completion(const char *text, int start, int end) /* Complete SET with "TO" */ else if (Matches2("SET", MatchAny)) COMPLETE_WITH_CONST("TO"); - /* Complete ALTER DATABASE|FUNCTION|ROLE|USER ... SET */ - else if (HeadMatches2("ALTER", "DATABASE|FUNCTION|ROLE|USER") && + /* Complete ALTER DATABASE|FUNCTION|PROCEDURE|ROLE|ROUTINE|USER ... 
SET */ + else if (HeadMatches2("ALTER", "DATABASE|FUNCTION|PROCEDURE|ROLE|ROUTINE|USER") && TailMatches2("SET", MatchAny)) COMPLETE_WITH_LIST2("FROM CURRENT", "TO"); /* Suggest possible variable values */ diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index a30ce6b81d..b13cf62bec 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201711171 +#define CATALOG_VERSION_NO 201711301 #endif diff --git a/src/include/commands/defrem.h b/src/include/commands/defrem.h index bfead9af3d..52cbf61ccb 100644 --- a/src/include/commands/defrem.h +++ b/src/include/commands/defrem.h @@ -59,12 +59,13 @@ extern void DropTransformById(Oid transformOid); extern void IsThereFunctionInNamespace(const char *proname, int pronargs, oidvector *proargtypes, Oid nspOid); extern void ExecuteDoStmt(DoStmt *stmt); +extern void ExecuteCallStmt(ParseState *pstate, CallStmt *stmt); extern Oid get_cast_oid(Oid sourcetypeid, Oid targettypeid, bool missing_ok); extern Oid get_transform_oid(Oid type_id, Oid lang_id, bool missing_ok); extern void interpret_function_parameter_list(ParseState *pstate, List *parameters, Oid languageOid, - bool is_aggregate, + ObjectType objtype, oidvector **parameterTypes, ArrayType **allParameterTypes, ArrayType **parameterModes, diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h index 03dc5307e8..c5b5115f5b 100644 --- a/src/include/nodes/nodes.h +++ b/src/include/nodes/nodes.h @@ -414,6 +414,7 @@ typedef enum NodeTag T_DropSubscriptionStmt, T_CreateStatsStmt, T_AlterCollationStmt, + T_CallStmt, /* * TAGS FOR PARSE TREE NODES (parsenodes.h) diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 34d6afc80f..2eaa6b2774 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -1642,9 +1642,11 @@ typedef enum ObjectType OBJECT_OPERATOR, OBJECT_OPFAMILY, OBJECT_POLICY, + OBJECT_PROCEDURE, OBJECT_PUBLICATION, OBJECT_PUBLICATION_REL, OBJECT_ROLE, + OBJECT_ROUTINE, OBJECT_RULE, OBJECT_SCHEMA, OBJECT_SEQUENCE, @@ -1856,6 +1858,8 @@ typedef enum GrantObjectType ACL_OBJECT_LANGUAGE, /* procedural language */ ACL_OBJECT_LARGEOBJECT, /* largeobject */ ACL_OBJECT_NAMESPACE, /* namespace */ + ACL_OBJECT_PROCEDURE, /* procedure */ + ACL_OBJECT_ROUTINE, /* routine */ ACL_OBJECT_TABLESPACE, /* tablespace */ ACL_OBJECT_TYPE /* type */ } GrantObjectType; @@ -2749,6 +2753,7 @@ typedef struct CreateFunctionStmt List *funcname; /* qualified name of function to create */ List *parameters; /* a list of FunctionParameter */ TypeName *returnType; /* the return type */ + bool is_procedure; List *options; /* a list of DefElem */ List *withClause; /* a list of DefElem */ } CreateFunctionStmt; @@ -2775,6 +2780,7 @@ typedef struct FunctionParameter typedef struct AlterFunctionStmt { NodeTag type; + ObjectType objtype; ObjectWithArgs *func; /* name and args of function */ List *actions; /* list of DefElem */ } AlterFunctionStmt; @@ -2799,6 +2805,16 @@ typedef struct InlineCodeBlock bool langIsTrusted; /* trusted property of the language */ } InlineCodeBlock; +/* ---------------------- + * CALL statement + * ---------------------- + */ +typedef struct CallStmt +{ + NodeTag type; + FuncCall *funccall; +} CallStmt; + /* ---------------------- * Alter Object Rename Statement * ---------------------- diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h index f50e45e886..a932400058 100644 --- 
a/src/include/parser/kwlist.h +++ b/src/include/parser/kwlist.h @@ -63,6 +63,7 @@ PG_KEYWORD("boolean", BOOLEAN_P, COL_NAME_KEYWORD) PG_KEYWORD("both", BOTH, RESERVED_KEYWORD) PG_KEYWORD("by", BY, UNRESERVED_KEYWORD) PG_KEYWORD("cache", CACHE, UNRESERVED_KEYWORD) +PG_KEYWORD("call", CALL, UNRESERVED_KEYWORD) PG_KEYWORD("called", CALLED, UNRESERVED_KEYWORD) PG_KEYWORD("cascade", CASCADE, UNRESERVED_KEYWORD) PG_KEYWORD("cascaded", CASCADED, UNRESERVED_KEYWORD) @@ -310,6 +311,7 @@ PG_KEYWORD("prior", PRIOR, UNRESERVED_KEYWORD) PG_KEYWORD("privileges", PRIVILEGES, UNRESERVED_KEYWORD) PG_KEYWORD("procedural", PROCEDURAL, UNRESERVED_KEYWORD) PG_KEYWORD("procedure", PROCEDURE, UNRESERVED_KEYWORD) +PG_KEYWORD("procedures", PROCEDURES, UNRESERVED_KEYWORD) PG_KEYWORD("program", PROGRAM, UNRESERVED_KEYWORD) PG_KEYWORD("publication", PUBLICATION, UNRESERVED_KEYWORD) PG_KEYWORD("quote", QUOTE, UNRESERVED_KEYWORD) @@ -340,6 +342,8 @@ PG_KEYWORD("right", RIGHT, TYPE_FUNC_NAME_KEYWORD) PG_KEYWORD("role", ROLE, UNRESERVED_KEYWORD) PG_KEYWORD("rollback", ROLLBACK, UNRESERVED_KEYWORD) PG_KEYWORD("rollup", ROLLUP, UNRESERVED_KEYWORD) +PG_KEYWORD("routine", ROUTINE, UNRESERVED_KEYWORD) +PG_KEYWORD("routines", ROUTINES, UNRESERVED_KEYWORD) PG_KEYWORD("row", ROW, COL_NAME_KEYWORD) PG_KEYWORD("rows", ROWS, UNRESERVED_KEYWORD) PG_KEYWORD("rule", RULE, UNRESERVED_KEYWORD) diff --git a/src/include/parser/parse_func.h b/src/include/parser/parse_func.h index b4b6084b1b..fccccd21ed 100644 --- a/src/include/parser/parse_func.h +++ b/src/include/parser/parse_func.h @@ -24,6 +24,7 @@ typedef enum FUNCDETAIL_NOTFOUND, /* no matching function */ FUNCDETAIL_MULTIPLE, /* too many matching functions */ FUNCDETAIL_NORMAL, /* found a matching regular function */ + FUNCDETAIL_PROCEDURE, /* found a matching procedure */ FUNCDETAIL_AGGREGATE, /* found a matching aggregate function */ FUNCDETAIL_WINDOWFUNC, /* found a matching window function */ FUNCDETAIL_COERCION /* it's a type coercion request */ @@ -31,7 +32,8 @@ typedef enum extern Node *ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs, - Node *last_srf, FuncCall *fn, int location); + Node *last_srf, FuncCall *fn, bool proc_call, + int location); extern FuncDetailCode func_get_detail(List *funcname, List *fargs, List *fargnames, @@ -62,10 +64,8 @@ extern const char *func_signature_string(List *funcname, int nargs, extern Oid LookupFuncName(List *funcname, int nargs, const Oid *argtypes, bool noError); -extern Oid LookupFuncWithArgs(ObjectWithArgs *func, +extern Oid LookupFuncWithArgs(ObjectType objtype, ObjectWithArgs *func, bool noError); -extern Oid LookupAggWithArgs(ObjectWithArgs *agg, - bool noError); extern void check_srf_call_placement(ParseState *pstate, Node *last_srf, int location); diff --git a/src/include/parser/parse_node.h b/src/include/parser/parse_node.h index f0e210ad8d..565bb3dc6c 100644 --- a/src/include/parser/parse_node.h +++ b/src/include/parser/parse_node.h @@ -67,7 +67,8 @@ typedef enum ParseExprKind EXPR_KIND_EXECUTE_PARAMETER, /* parameter value in EXECUTE */ EXPR_KIND_TRIGGER_WHEN, /* WHEN condition in CREATE TRIGGER */ EXPR_KIND_POLICY, /* USING or WITH CHECK expr in policy */ - EXPR_KIND_PARTITION_EXPRESSION /* PARTITION BY expression */ + EXPR_KIND_PARTITION_EXPRESSION, /* PARTITION BY expression */ + EXPR_KIND_CALL /* CALL argument */ } ParseExprKind; diff --git a/src/include/utils/lsyscache.h b/src/include/utils/lsyscache.h index 07208b56ce..b316cc594c 100644 --- a/src/include/utils/lsyscache.h +++ b/src/include/utils/lsyscache.h 
@@ -118,6 +118,7 @@ extern bool get_func_retset(Oid funcid); extern bool func_strict(Oid funcid); extern char func_volatile(Oid funcid); extern char func_parallel(Oid funcid); +extern bool get_func_isagg(Oid funcid); extern bool get_func_leakproof(Oid funcid); extern float4 get_func_cost(Oid funcid); extern float4 get_func_rows(Oid funcid); diff --git a/src/interfaces/ecpg/preproc/ecpg.tokens b/src/interfaces/ecpg/preproc/ecpg.tokens index 68ba925efe..1d613af02f 100644 --- a/src/interfaces/ecpg/preproc/ecpg.tokens +++ b/src/interfaces/ecpg/preproc/ecpg.tokens @@ -2,7 +2,7 @@ /* special embedded SQL tokens */ %token SQL_ALLOCATE SQL_AUTOCOMMIT SQL_BOOL SQL_BREAK - SQL_CALL SQL_CARDINALITY SQL_CONNECT + SQL_CARDINALITY SQL_CONNECT SQL_COUNT SQL_DATETIME_INTERVAL_CODE SQL_DATETIME_INTERVAL_PRECISION SQL_DESCRIBE diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer index f60a62099d..19dc781885 100644 --- a/src/interfaces/ecpg/preproc/ecpg.trailer +++ b/src/interfaces/ecpg/preproc/ecpg.trailer @@ -1460,13 +1460,13 @@ action : CONTINUE_P $$.command = NULL; $$.str = mm_strdup("continue"); } - | SQL_CALL name '(' c_args ')' + | CALL name '(' c_args ')' { $$.code = W_DO; $$.command = cat_str(4, $2, mm_strdup("("), $4, mm_strdup(")")); $$.str = cat2_str(mm_strdup("call"), mm_strdup($$.command)); } - | SQL_CALL name + | CALL name { $$.code = W_DO; $$.command = cat2_str($2, mm_strdup("()")); @@ -1482,7 +1482,6 @@ ECPGKeywords: ECPGKeywords_vanames { $$ = $1; } ; ECPGKeywords_vanames: SQL_BREAK { $$ = mm_strdup("break"); } - | SQL_CALL { $$ = mm_strdup("call"); } | SQL_CARDINALITY { $$ = mm_strdup("cardinality"); } | SQL_COUNT { $$ = mm_strdup("count"); } | SQL_DATETIME_INTERVAL_CODE { $$ = mm_strdup("datetime_interval_code"); } diff --git a/src/interfaces/ecpg/preproc/ecpg_keywords.c b/src/interfaces/ecpg/preproc/ecpg_keywords.c index 3b52b8f3a2..848b2d4849 100644 --- a/src/interfaces/ecpg/preproc/ecpg_keywords.c +++ b/src/interfaces/ecpg/preproc/ecpg_keywords.c @@ -33,7 +33,6 @@ static const ScanKeyword ECPGScanKeywords[] = { {"autocommit", SQL_AUTOCOMMIT, 0}, {"bool", SQL_BOOL, 0}, {"break", SQL_BREAK, 0}, - {"call", SQL_CALL, 0}, {"cardinality", SQL_CARDINALITY, 0}, {"connect", SQL_CONNECT, 0}, {"count", SQL_COUNT, 0}, diff --git a/src/pl/plperl/GNUmakefile b/src/pl/plperl/GNUmakefile index 91d1296b21..b829027d05 100644 --- a/src/pl/plperl/GNUmakefile +++ b/src/pl/plperl/GNUmakefile @@ -55,7 +55,7 @@ endif # win32 SHLIB_LINK = $(perl_embed_ldflags) REGRESS_OPTS = --dbname=$(PL_TESTDB) --load-extension=plperl --load-extension=plperlu -REGRESS = plperl plperl_lc plperl_trigger plperl_shared plperl_elog plperl_util plperl_init plperlu plperl_array +REGRESS = plperl plperl_lc plperl_trigger plperl_shared plperl_elog plperl_util plperl_init plperlu plperl_array plperl_call # if Perl can support two interpreters in one backend, # test plperl-and-plperlu cases ifneq ($(PERL),) diff --git a/src/pl/plperl/expected/plperl_call.out b/src/pl/plperl/expected/plperl_call.out new file mode 100644 index 0000000000..4bccfcb7c8 --- /dev/null +++ b/src/pl/plperl/expected/plperl_call.out @@ -0,0 +1,29 @@ +CREATE PROCEDURE test_proc1() +LANGUAGE plperl +AS $$ +undef; +$$; +CALL test_proc1(); +CREATE PROCEDURE test_proc2() +LANGUAGE plperl +AS $$ +return 5 +$$; +CALL test_proc2(); +CREATE TABLE test1 (a int); +CREATE PROCEDURE test_proc3(x int) +LANGUAGE plperl +AS $$ +spi_exec_query("INSERT INTO test1 VALUES ($_[0])"); +$$; +CALL test_proc3(55); +SELECT * FROM test1; + a 
+---- + 55 +(1 row) + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; +DROP TABLE test1; diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c index a57393fbdd..9f5313235f 100644 --- a/src/pl/plperl/plperl.c +++ b/src/pl/plperl/plperl.c @@ -1915,7 +1915,7 @@ plperl_inline_handler(PG_FUNCTION_ARGS) desc.fn_retistuple = false; desc.fn_retisset = false; desc.fn_retisarray = false; - desc.result_oid = VOIDOID; + desc.result_oid = InvalidOid; desc.nargs = 0; desc.reference = NULL; @@ -2481,7 +2481,7 @@ plperl_func_handler(PG_FUNCTION_ARGS) } retval = (Datum) 0; } - else + else if (prodesc->result_oid) { retval = plperl_sv_to_datum(perlret, prodesc->result_oid, @@ -2826,7 +2826,7 @@ compile_plperl_function(Oid fn_oid, bool is_trigger, bool is_event_trigger) * Get the required information for input conversion of the * return value. ************************************************************/ - if (!is_trigger && !is_event_trigger) + if (!is_trigger && !is_event_trigger && procStruct->prorettype) { Oid rettype = procStruct->prorettype; @@ -3343,7 +3343,7 @@ plperl_return_next_internal(SV *sv) tuplestore_puttuple(current_call_data->tuple_store, tuple); } - else + else if (prodesc->result_oid) { Datum ret[1]; bool isNull[1]; diff --git a/src/pl/plperl/sql/plperl_call.sql b/src/pl/plperl/sql/plperl_call.sql new file mode 100644 index 0000000000..bd2b63b418 --- /dev/null +++ b/src/pl/plperl/sql/plperl_call.sql @@ -0,0 +1,36 @@ +CREATE PROCEDURE test_proc1() +LANGUAGE plperl +AS $$ +undef; +$$; + +CALL test_proc1(); + + +CREATE PROCEDURE test_proc2() +LANGUAGE plperl +AS $$ +return 5 +$$; + +CALL test_proc2(); + + +CREATE TABLE test1 (a int); + +CREATE PROCEDURE test_proc3(x int) +LANGUAGE plperl +AS $$ +spi_exec_query("INSERT INTO test1 VALUES ($_[0])"); +$$; + +CALL test_proc3(55); + +SELECT * FROM test1; + + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; + +DROP TABLE test1; diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index d0afa59242..f459c02f7b 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -275,7 +275,6 @@ do_compile(FunctionCallInfo fcinfo, bool isnull; char *proc_source; HeapTuple typeTup; - Form_pg_type typeStruct; PLpgSQL_variable *var; PLpgSQL_rec *rec; int i; @@ -531,53 +530,58 @@ do_compile(FunctionCallInfo fcinfo, /* * Lookup the function's return type */ - typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(rettypeid)); - if (!HeapTupleIsValid(typeTup)) - elog(ERROR, "cache lookup failed for type %u", rettypeid); - typeStruct = (Form_pg_type) GETSTRUCT(typeTup); - - /* Disallow pseudotype result, except VOID or RECORD */ - /* (note we already replaced polymorphic types) */ - if (typeStruct->typtype == TYPTYPE_PSEUDO) + if (rettypeid) { - if (rettypeid == VOIDOID || - rettypeid == RECORDOID) - /* okay */ ; - else if (rettypeid == TRIGGEROID || rettypeid == EVTTRIGGEROID) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("trigger functions can only be called as triggers"))); - else - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("PL/pgSQL functions cannot return type %s", - format_type_be(rettypeid)))); - } + Form_pg_type typeStruct; - if (typeStruct->typrelid != InvalidOid || - rettypeid == RECORDOID) - function->fn_retistuple = true; - else - { - function->fn_retbyval = typeStruct->typbyval; - function->fn_rettyplen = typeStruct->typlen; + typeTup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(rettypeid)); + if 
(!HeapTupleIsValid(typeTup)) + elog(ERROR, "cache lookup failed for type %u", rettypeid); + typeStruct = (Form_pg_type) GETSTRUCT(typeTup); - /* - * install $0 reference, but only for polymorphic return - * types, and not when the return is specified through an - * output parameter. - */ - if (IsPolymorphicType(procStruct->prorettype) && - num_out_args == 0) + /* Disallow pseudotype result, except VOID or RECORD */ + /* (note we already replaced polymorphic types) */ + if (typeStruct->typtype == TYPTYPE_PSEUDO) { - (void) plpgsql_build_variable("$0", 0, - build_datatype(typeTup, - -1, - function->fn_input_collation), - true); + if (rettypeid == VOIDOID || + rettypeid == RECORDOID) + /* okay */ ; + else if (rettypeid == TRIGGEROID || rettypeid == EVTTRIGGEROID) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("trigger functions can only be called as triggers"))); + else + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("PL/pgSQL functions cannot return type %s", + format_type_be(rettypeid)))); + } + + if (typeStruct->typrelid != InvalidOid || + rettypeid == RECORDOID) + function->fn_retistuple = true; + else + { + function->fn_retbyval = typeStruct->typbyval; + function->fn_rettyplen = typeStruct->typlen; + + /* + * install $0 reference, but only for polymorphic return + * types, and not when the return is specified through an + * output parameter. + */ + if (IsPolymorphicType(procStruct->prorettype) && + num_out_args == 0) + { + (void) plpgsql_build_variable("$0", 0, + build_datatype(typeTup, + -1, + function->fn_input_collation), + true); + } } + ReleaseSysCache(typeTup); } - ReleaseSysCache(typeTup); break; case PLPGSQL_DML_TRIGGER: diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index cf6120eea9..ec480cb0ba 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -462,7 +462,7 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, estate.err_text = NULL; estate.err_stmt = (PLpgSQL_stmt *) (func->action); rc = exec_stmt_block(&estate, func->action); - if (rc != PLPGSQL_RC_RETURN) + if (rc != PLPGSQL_RC_RETURN && func->fn_rettype) { estate.err_stmt = NULL; estate.err_text = NULL; @@ -509,6 +509,12 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, } else if (!estate.retisnull) { + if (!func->fn_rettype) + { + ereport(ERROR, + (errmsg("cannot return a value from a procedure"))); + } + if (estate.retistuple) { /* diff --git a/src/pl/plpython/Makefile b/src/pl/plpython/Makefile index 7680d49cb6..cc91afebde 100644 --- a/src/pl/plpython/Makefile +++ b/src/pl/plpython/Makefile @@ -78,6 +78,7 @@ REGRESS = \ plpython_spi \ plpython_newline \ plpython_void \ + plpython_call \ plpython_params \ plpython_setof \ plpython_record \ diff --git a/src/pl/plpython/expected/plpython_call.out b/src/pl/plpython/expected/plpython_call.out new file mode 100644 index 0000000000..90785343b6 --- /dev/null +++ b/src/pl/plpython/expected/plpython_call.out @@ -0,0 +1,35 @@ +-- +-- Tests for procedures / CALL syntax +-- +CREATE PROCEDURE test_proc1() +LANGUAGE plpythonu +AS $$ +pass +$$; +CALL test_proc1(); +-- error: can't return non-None +CREATE PROCEDURE test_proc2() +LANGUAGE plpythonu +AS $$ +return 5 +$$; +CALL test_proc2(); +ERROR: PL/Python procedure did not return None +CONTEXT: PL/Python procedure "test_proc2" +CREATE TABLE test1 (a int); +CREATE PROCEDURE test_proc3(x int) +LANGUAGE plpythonu +AS $$ +plpy.execute("INSERT INTO test1 VALUES (%s)" % x) +$$; +CALL test_proc3(55); 
+SELECT * FROM test1; + a +---- + 55 +(1 row) + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; +DROP TABLE test1; diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c index fe217c6a2c..4594a08ead 100644 --- a/src/pl/plpython/plpy_exec.c +++ b/src/pl/plpython/plpy_exec.c @@ -199,12 +199,19 @@ PLy_exec_function(FunctionCallInfo fcinfo, PLyProcedure *proc) error_context_stack = &plerrcontext; /* - * If the function is declared to return void, the Python return value + * For a procedure or function declared to return void, the Python return value * must be None. For void-returning functions, we also treat a None * return value as a special "void datum" rather than NULL (as is the * case for non-void-returning functions). */ - if (proc->result.typoid == VOIDOID) + if (proc->is_procedure) + { + if (plrv != Py_None) + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("PL/Python procedure did not return None"))); + } + else if (proc->result.typoid == VOIDOID) { if (plrv != Py_None) ereport(ERROR, @@ -672,7 +679,8 @@ plpython_return_error_callback(void *arg) { PLyExecutionContext *exec_ctx = PLy_current_execution_context(); - if (exec_ctx->curr_proc) + if (exec_ctx->curr_proc && + !exec_ctx->curr_proc->is_procedure) errcontext("while creating return value"); } diff --git a/src/pl/plpython/plpy_main.c b/src/pl/plpython/plpy_main.c index 32d23ae5b6..695de30583 100644 --- a/src/pl/plpython/plpy_main.c +++ b/src/pl/plpython/plpy_main.c @@ -389,8 +389,14 @@ plpython_error_callback(void *arg) PLyExecutionContext *exec_ctx = PLy_current_execution_context(); if (exec_ctx->curr_proc) - errcontext("PL/Python function \"%s\"", - PLy_procedure_name(exec_ctx->curr_proc)); + { + if (exec_ctx->curr_proc->is_procedure) + errcontext("PL/Python procedure \"%s\"", + PLy_procedure_name(exec_ctx->curr_proc)); + else + errcontext("PL/Python function \"%s\"", + PLy_procedure_name(exec_ctx->curr_proc)); + } } static void diff --git a/src/pl/plpython/plpy_procedure.c b/src/pl/plpython/plpy_procedure.c index faa4977463..b7c24e356f 100644 --- a/src/pl/plpython/plpy_procedure.c +++ b/src/pl/plpython/plpy_procedure.c @@ -189,6 +189,7 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger) proc->fn_tid = procTup->t_self; proc->fn_readonly = (procStruct->provolatile != PROVOLATILE_VOLATILE); proc->is_setof = procStruct->proretset; + proc->is_procedure = (procStruct->prorettype == InvalidOid); proc->src = NULL; proc->argnames = NULL; proc->args = NULL; @@ -206,9 +207,9 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger) /* * get information required for output conversion of the return value, - * but only if this isn't a trigger. + * but only if this isn't a trigger or procedure. 
*/ - if (!is_trigger) + if (!is_trigger && procStruct->prorettype) { Oid rettype = procStruct->prorettype; HeapTuple rvTypeTup; diff --git a/src/pl/plpython/plpy_procedure.h b/src/pl/plpython/plpy_procedure.h index cd1b87fdc3..8968b5c92e 100644 --- a/src/pl/plpython/plpy_procedure.h +++ b/src/pl/plpython/plpy_procedure.h @@ -30,7 +30,8 @@ typedef struct PLyProcedure TransactionId fn_xmin; ItemPointerData fn_tid; bool fn_readonly; - bool is_setof; /* true, if procedure returns result set */ + bool is_setof; /* true, if function returns result set */ + bool is_procedure; PLyObToDatum result; /* Function result output conversion info */ PLyDatumToOb result_in; /* For converting input tuples in a trigger */ char *src; /* textual procedure code, after mangling */ diff --git a/src/pl/plpython/sql/plpython_call.sql b/src/pl/plpython/sql/plpython_call.sql new file mode 100644 index 0000000000..3fb74de5f0 --- /dev/null +++ b/src/pl/plpython/sql/plpython_call.sql @@ -0,0 +1,41 @@ +-- +-- Tests for procedures / CALL syntax +-- + +CREATE PROCEDURE test_proc1() +LANGUAGE plpythonu +AS $$ +pass +$$; + +CALL test_proc1(); + + +-- error: can't return non-None +CREATE PROCEDURE test_proc2() +LANGUAGE plpythonu +AS $$ +return 5 +$$; + +CALL test_proc2(); + + +CREATE TABLE test1 (a int); + +CREATE PROCEDURE test_proc3(x int) +LANGUAGE plpythonu +AS $$ +plpy.execute("INSERT INTO test1 VALUES (%s)" % x) +$$; + +CALL test_proc3(55); + +SELECT * FROM test1; + + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; + +DROP TABLE test1; diff --git a/src/pl/tcl/Makefile b/src/pl/tcl/Makefile index b8971d3cc8..6a92a9b6aa 100644 --- a/src/pl/tcl/Makefile +++ b/src/pl/tcl/Makefile @@ -28,7 +28,7 @@ DATA = pltcl.control pltcl--1.0.sql pltcl--unpackaged--1.0.sql \ pltclu.control pltclu--1.0.sql pltclu--unpackaged--1.0.sql REGRESS_OPTS = --dbname=$(PL_TESTDB) --load-extension=pltcl -REGRESS = pltcl_setup pltcl_queries pltcl_start_proc pltcl_subxact pltcl_unicode +REGRESS = pltcl_setup pltcl_queries pltcl_call pltcl_start_proc pltcl_subxact pltcl_unicode # Tcl on win32 ships with import libraries only for Microsoft Visual C++, # which are not compatible with mingw gcc. 
Therefore we need to build a diff --git a/src/pl/tcl/expected/pltcl_call.out b/src/pl/tcl/expected/pltcl_call.out new file mode 100644 index 0000000000..7221a37ad0 --- /dev/null +++ b/src/pl/tcl/expected/pltcl_call.out @@ -0,0 +1,29 @@ +CREATE PROCEDURE test_proc1() +LANGUAGE pltcl +AS $$ +unset +$$; +CALL test_proc1(); +CREATE PROCEDURE test_proc2() +LANGUAGE pltcl +AS $$ +return 5 +$$; +CALL test_proc2(); +CREATE TABLE test1 (a int); +CREATE PROCEDURE test_proc3(x int) +LANGUAGE pltcl +AS $$ +spi_exec "INSERT INTO test1 VALUES ($1)" +$$; +CALL test_proc3(55); +SELECT * FROM test1; + a +---- + 55 +(1 row) + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; +DROP TABLE test1; diff --git a/src/pl/tcl/pltcl.c b/src/pl/tcl/pltcl.c index 6d97ddc99b..e0792d93e1 100644 --- a/src/pl/tcl/pltcl.c +++ b/src/pl/tcl/pltcl.c @@ -146,6 +146,7 @@ typedef struct pltcl_proc_desc Oid result_typid; /* OID of fn's result type */ FmgrInfo result_in_func; /* input function for fn's result type */ Oid result_typioparam; /* param to pass to same */ + bool fn_is_procedure;/* true if this is a procedure */ bool fn_retisset; /* true if function returns a set */ bool fn_retistuple; /* true if function returns composite */ bool fn_retisdomain; /* true if function returns domain */ @@ -968,7 +969,7 @@ pltcl_func_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state, retval = (Datum) 0; fcinfo->isnull = true; } - else if (fcinfo->isnull) + else if (fcinfo->isnull && !prodesc->fn_is_procedure) { retval = InputFunctionCall(&prodesc->result_in_func, NULL, @@ -1026,11 +1027,13 @@ pltcl_func_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state, call_state); retval = HeapTupleGetDatum(tup); } - else + else if (!prodesc->fn_is_procedure) retval = InputFunctionCall(&prodesc->result_in_func, utf_u2e(Tcl_GetStringResult(interp)), prodesc->result_typioparam, -1); + else + retval = 0; return retval; } @@ -1506,7 +1509,9 @@ compile_pltcl_function(Oid fn_oid, Oid tgreloid, * Get the required information for input conversion of the * return value. 
************************************************************/ - if (!is_trigger && !is_event_trigger) + prodesc->fn_is_procedure = (procStruct->prorettype == InvalidOid); + + if (!is_trigger && !is_event_trigger && procStruct->prorettype) { Oid rettype = procStruct->prorettype; @@ -2199,7 +2204,7 @@ pltcl_returnnext(ClientData cdata, Tcl_Interp *interp, tuplestore_puttuple(call_state->tuple_store, tuple); } } - else + else if (!prodesc->fn_is_procedure) { Datum retval; bool isNull = false; diff --git a/src/pl/tcl/sql/pltcl_call.sql b/src/pl/tcl/sql/pltcl_call.sql new file mode 100644 index 0000000000..ef1f540f50 --- /dev/null +++ b/src/pl/tcl/sql/pltcl_call.sql @@ -0,0 +1,36 @@ +CREATE PROCEDURE test_proc1() +LANGUAGE pltcl +AS $$ +unset +$$; + +CALL test_proc1(); + + +CREATE PROCEDURE test_proc2() +LANGUAGE pltcl +AS $$ +return 5 +$$; + +CALL test_proc2(); + + +CREATE TABLE test1 (a int); + +CREATE PROCEDURE test_proc3(x int) +LANGUAGE pltcl +AS $$ +spi_exec "INSERT INTO test1 VALUES ($1)" +$$; + +CALL test_proc3(55); + +SELECT * FROM test1; + + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; + +DROP TABLE test1; diff --git a/src/test/regress/expected/create_procedure.out b/src/test/regress/expected/create_procedure.out new file mode 100644 index 0000000000..5538ef2f2b --- /dev/null +++ b/src/test/regress/expected/create_procedure.out @@ -0,0 +1,92 @@ +CALL nonexistent(); -- error +ERROR: function nonexistent() does not exist +LINE 1: CALL nonexistent(); + ^ +HINT: No function matches the given name and argument types. You might need to add explicit type casts. +CALL random(); -- error +ERROR: random() is not a procedure +LINE 1: CALL random(); + ^ +HINT: To call a function, use SELECT. +CREATE FUNCTION testfunc1(a int) RETURNS int LANGUAGE SQL AS $$ SELECT a $$; +CREATE TABLE cp_test (a int, b text); +CREATE PROCEDURE ptest1(x text) +LANGUAGE SQL +AS $$ +INSERT INTO cp_test VALUES (1, x); +$$; +SELECT ptest1('x'); -- error +ERROR: ptest1(unknown) is a procedure +LINE 1: SELECT ptest1('x'); + ^ +HINT: To call a procedure, use CALL. +CALL ptest1('a'); -- ok +\df ptest1 + List of functions + Schema | Name | Result data type | Argument data types | Type +--------+--------+------------------+---------------------+------ + public | ptest1 | | x text | proc +(1 row) + +SELECT * FROM cp_test ORDER BY a; + a | b +---+--- + 1 | a +(1 row) + +CREATE PROCEDURE ptest2() +LANGUAGE SQL +AS $$ +SELECT 5; +$$; +CALL ptest2(); +-- various error cases +CREATE PROCEDURE ptestx() LANGUAGE SQL WINDOW AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; +ERROR: invalid attribute in procedure definition +LINE 1: CREATE PROCEDURE ptestx() LANGUAGE SQL WINDOW AS $$ INSERT I... + ^ +CREATE PROCEDURE ptestx() LANGUAGE SQL STRICT AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; +ERROR: invalid attribute in procedure definition +LINE 1: CREATE PROCEDURE ptestx() LANGUAGE SQL STRICT AS $$ INSERT I... 
+ ^ +CREATE PROCEDURE ptestx(OUT a int) LANGUAGE SQL AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; +ERROR: procedures cannot have OUT parameters +ALTER PROCEDURE ptest1(text) STRICT; +ERROR: invalid attribute in procedure definition +LINE 1: ALTER PROCEDURE ptest1(text) STRICT; + ^ +ALTER FUNCTION ptest1(text) VOLATILE; -- error: not a function +ERROR: ptest1(text) is not a function +ALTER PROCEDURE testfunc1(int) VOLATILE; -- error: not a procedure +ERROR: testfunc1(integer) is not a procedure +ALTER PROCEDURE nonexistent() VOLATILE; +ERROR: procedure nonexistent() does not exist +DROP FUNCTION ptest1(text); -- error: not a function +ERROR: ptest1(text) is not a function +DROP PROCEDURE testfunc1(int); -- error: not a procedure +ERROR: testfunc1(integer) is not a procedure +DROP PROCEDURE nonexistent(); +ERROR: procedure nonexistent() does not exist +-- privileges +CREATE USER regress_user1; +GRANT INSERT ON cp_test TO regress_user1; +REVOKE EXECUTE ON PROCEDURE ptest1(text) FROM PUBLIC; +SET ROLE regress_user1; +CALL ptest1('a'); -- error +ERROR: permission denied for function ptest1 +RESET ROLE; +GRANT EXECUTE ON PROCEDURE ptest1(text) TO regress_user1; +SET ROLE regress_user1; +CALL ptest1('a'); -- ok +RESET ROLE; +-- ROUTINE syntax +ALTER ROUTINE testfunc1(int) RENAME TO testfunc1a; +ALTER ROUTINE testfunc1a RENAME TO testfunc1; +ALTER ROUTINE ptest1(text) RENAME TO ptest1a; +ALTER ROUTINE ptest1a RENAME TO ptest1; +DROP ROUTINE testfunc1(int); +-- cleanup +DROP PROCEDURE ptest1; +DROP PROCEDURE ptest2; +DROP TABLE cp_test; +DROP USER regress_user1; diff --git a/src/test/regress/expected/object_address.out b/src/test/regress/expected/object_address.out index 1fdadbc9ef..bfd9d54c11 100644 --- a/src/test/regress/expected/object_address.out +++ b/src/test/regress/expected/object_address.out @@ -29,6 +29,7 @@ CREATE DOMAIN addr_nsp.gendomain AS int4 CONSTRAINT domconstr CHECK (value > 0); CREATE FUNCTION addr_nsp.trig() RETURNS TRIGGER LANGUAGE plpgsql AS $$ BEGIN END; $$; CREATE TRIGGER t BEFORE INSERT ON addr_nsp.gentable FOR EACH ROW EXECUTE PROCEDURE addr_nsp.trig(); CREATE POLICY genpol ON addr_nsp.gentable; +CREATE PROCEDURE addr_nsp.proc(int4) LANGUAGE SQL AS $$ $$; CREATE SERVER "integer" FOREIGN DATA WRAPPER addr_fdw; CREATE USER MAPPING FOR regress_addr_user SERVER "integer"; ALTER DEFAULT PRIVILEGES FOR ROLE regress_addr_user IN SCHEMA public GRANT ALL ON TABLES TO regress_addr_user; @@ -88,7 +89,7 @@ BEGIN ('table'), ('index'), ('sequence'), ('view'), ('materialized view'), ('foreign table'), ('table column'), ('foreign table column'), - ('aggregate'), ('function'), ('type'), ('cast'), + ('aggregate'), ('function'), ('procedure'), ('type'), ('cast'), ('table constraint'), ('domain constraint'), ('conversion'), ('default value'), ('operator'), ('operator class'), ('operator family'), ('rule'), ('trigger'), ('text search parser'), ('text search dictionary'), @@ -171,6 +172,12 @@ WARNING: error for function,{addr_nsp,zwei},{}: function addr_nsp.zwei() does n WARNING: error for function,{addr_nsp,zwei},{integer}: function addr_nsp.zwei(integer) does not exist WARNING: error for function,{eins,zwei,drei},{}: cross-database references are not implemented: eins.zwei.drei WARNING: error for function,{eins,zwei,drei},{integer}: cross-database references are not implemented: eins.zwei.drei +WARNING: error for procedure,{eins},{}: procedure eins() does not exist +WARNING: error for procedure,{eins},{integer}: procedure eins(integer) does not exist +WARNING: error for 
procedure,{addr_nsp,zwei},{}: procedure addr_nsp.zwei() does not exist +WARNING: error for procedure,{addr_nsp,zwei},{integer}: procedure addr_nsp.zwei(integer) does not exist +WARNING: error for procedure,{eins,zwei,drei},{}: cross-database references are not implemented: eins.zwei.drei +WARNING: error for procedure,{eins,zwei,drei},{integer}: cross-database references are not implemented: eins.zwei.drei WARNING: error for type,{eins},{}: type "eins" does not exist WARNING: error for type,{eins},{integer}: type "eins" does not exist WARNING: error for type,{addr_nsp,zwei},{}: name list length must be exactly 1 @@ -371,6 +378,7 @@ WITH objects (type, name, args) AS (VALUES ('foreign table column', '{addr_nsp, genftable, a}', '{}'), ('aggregate', '{addr_nsp, genaggr}', '{int4}'), ('function', '{pg_catalog, pg_identify_object}', '{pg_catalog.oid, pg_catalog.oid, int4}'), + ('procedure', '{addr_nsp, proc}', '{int4}'), ('type', '{pg_catalog._int4}', '{}'), ('type', '{addr_nsp.gendomain}', '{}'), ('type', '{addr_nsp.gencomptype}', '{}'), @@ -431,6 +439,7 @@ SELECT (pg_identify_object(addr1.classid, addr1.objid, addr1.objsubid)).*, type | addr_nsp | gendomain | addr_nsp.gendomain | t function | pg_catalog | | pg_catalog.pg_identify_object(pg_catalog.oid,pg_catalog.oid,integer) | t aggregate | addr_nsp | | addr_nsp.genaggr(integer) | t + procedure | addr_nsp | | addr_nsp.proc(integer) | t sequence | addr_nsp | gentable_a_seq | addr_nsp.gentable_a_seq | t table | addr_nsp | gentable | addr_nsp.gentable | t table column | addr_nsp | gentable | addr_nsp.gentable.b | t @@ -469,7 +478,7 @@ SELECT (pg_identify_object(addr1.classid, addr1.objid, addr1.objsubid)).*, subscription | | addr_sub | addr_sub | t publication | | addr_pub | addr_pub | t publication relation | | | gentable in publication addr_pub | t -(46 rows) +(47 rows) --- --- Cleanup resources @@ -480,6 +489,6 @@ NOTICE: drop cascades to 4 other objects DROP PUBLICATION addr_pub; DROP SUBSCRIPTION addr_sub; DROP SCHEMA addr_nsp CASCADE; -NOTICE: drop cascades to 12 other objects +NOTICE: drop cascades to 13 other objects DROP OWNED BY regress_addr_user; DROP USER regress_addr_user; diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index bb3532676b..d6e5bc3353 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -6040,3 +6040,44 @@ END; $$ LANGUAGE plpgsql; ERROR: "x" is not a scalar variable LINE 3: GET DIAGNOSTICS x = ROW_COUNT; ^ +-- +-- Procedures +-- +CREATE PROCEDURE test_proc1() +LANGUAGE plpgsql +AS $$ +BEGIN + NULL; +END; +$$; +CALL test_proc1(); +-- error: can't return non-NULL +CREATE PROCEDURE test_proc2() +LANGUAGE plpgsql +AS $$ +BEGIN + RETURN 5; +END; +$$; +CALL test_proc2(); +ERROR: cannot return a value from a procedure +CONTEXT: PL/pgSQL function test_proc2() while casting return value to function's return type +CREATE TABLE proc_test1 (a int); +CREATE PROCEDURE test_proc3(x int) +LANGUAGE plpgsql +AS $$ +BEGIN + INSERT INTO proc_test1 VALUES (x); +END; +$$; +CALL test_proc3(55); +SELECT * FROM proc_test1; + a +---- + 55 +(1 row) + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; +DROP TABLE proc_test1; diff --git a/src/test/regress/expected/polymorphism.out b/src/test/regress/expected/polymorphism.out index 91cfb743b6..66e35a6a5c 100644 --- a/src/test/regress/expected/polymorphism.out +++ b/src/test/regress/expected/polymorphism.out @@ -915,10 +915,10 @@ select dfunc(); -- verify it lists properly \df 
dfunc - List of functions - Schema | Name | Result data type | Argument data types | Type ---------+-------+------------------+-----------------------------------------------------------+-------- - public | dfunc | integer | a integer DEFAULT 1, OUT sum integer, b integer DEFAULT 2 | normal + List of functions + Schema | Name | Result data type | Argument data types | Type +--------+-------+------------------+-----------------------------------------------------------+------ + public | dfunc | integer | a integer DEFAULT 1, OUT sum integer, b integer DEFAULT 2 | func (1 row) drop function dfunc(int, int); @@ -1083,10 +1083,10 @@ $$ select array_upper($1, 1) $$ language sql; ERROR: cannot remove parameter defaults from existing function HINT: Use DROP FUNCTION dfunc(integer[]) first. \df dfunc - List of functions - Schema | Name | Result data type | Argument data types | Type ---------+-------+------------------+-------------------------------------------------+-------- - public | dfunc | integer | VARIADIC a integer[] DEFAULT ARRAY[]::integer[] | normal + List of functions + Schema | Name | Result data type | Argument data types | Type +--------+-------+------------------+-------------------------------------------------+------ + public | dfunc | integer | VARIADIC a integer[] DEFAULT ARRAY[]::integer[] | func (1 row) drop function dfunc(a variadic int[]); diff --git a/src/test/regress/expected/privileges.out b/src/test/regress/expected/privileges.out index 771971a095..e6994f0490 100644 --- a/src/test/regress/expected/privileges.out +++ b/src/test/regress/expected/privileges.out @@ -651,13 +651,25 @@ GRANT USAGE ON LANGUAGE sql TO regress_user2; -- fail WARNING: no privileges were granted for "sql" CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 * $1;' LANGUAGE sql; CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 * $1;' LANGUAGE sql; -REVOKE ALL ON FUNCTION testfunc1(int), testfunc2(int) FROM PUBLIC; -GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regress_user2; +CREATE AGGREGATE testagg1(int) (sfunc = int4pl, stype = int4); +CREATE PROCEDURE testproc1(int) AS 'select $1;' LANGUAGE sql; +REVOKE ALL ON FUNCTION testfunc1(int), testfunc2(int), testagg1(int) FROM PUBLIC; +GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int), testagg1(int) TO regress_user2; +REVOKE ALL ON FUNCTION testproc1(int) FROM PUBLIC; -- fail, not a function +ERROR: testproc1(integer) is not a function +REVOKE ALL ON PROCEDURE testproc1(int) FROM PUBLIC; +GRANT EXECUTE ON PROCEDURE testproc1(int) TO regress_user2; GRANT USAGE ON FUNCTION testfunc1(int) TO regress_user3; -- semantic error ERROR: invalid privilege type USAGE for function +GRANT USAGE ON FUNCTION testagg1(int) TO regress_user3; -- semantic error +ERROR: invalid privilege type USAGE for function +GRANT USAGE ON PROCEDURE testproc1(int) TO regress_user3; -- semantic error +ERROR: invalid privilege type USAGE for procedure GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regress_user4; GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regress_user4; ERROR: function testfunc_nosuch(integer) does not exist +GRANT ALL PRIVILEGES ON FUNCTION testagg1(int) TO regress_user4; +GRANT ALL PRIVILEGES ON PROCEDURE testproc1(int) TO regress_user4; CREATE FUNCTION testfunc4(boolean) RETURNS text AS 'select col1 from atest2 where col2 = $1;' LANGUAGE sql SECURITY DEFINER; @@ -671,9 +683,20 @@ SELECT testfunc1(5), testfunc2(5); -- ok CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 * $1;' LANGUAGE sql; -- fail ERROR: 
permission denied for language sql +SELECT testagg1(x) FROM (VALUES (1), (2), (3)) _(x); -- ok + testagg1 +---------- + 6 +(1 row) + +CALL testproc1(6); -- ok SET SESSION AUTHORIZATION regress_user3; SELECT testfunc1(5); -- fail ERROR: permission denied for function testfunc1 +SELECT testagg1(x) FROM (VALUES (1), (2), (3)) _(x); -- fail +ERROR: permission denied for function testagg1 +CALL testproc1(6); -- fail +ERROR: permission denied for function testproc1 SELECT col1 FROM atest2 WHERE col2 = true; -- fail ERROR: permission denied for relation atest2 SELECT testfunc4(true); -- ok @@ -689,8 +712,19 @@ SELECT testfunc1(5); -- ok 10 (1 row) +SELECT testagg1(x) FROM (VALUES (1), (2), (3)) _(x); -- ok + testagg1 +---------- + 6 +(1 row) + +CALL testproc1(6); -- ok DROP FUNCTION testfunc1(int); -- fail ERROR: must be owner of function testfunc1 +DROP AGGREGATE testagg1(int); -- fail +ERROR: must be owner of function testagg1 +DROP PROCEDURE testproc1(int); -- fail +ERROR: must be owner of function testproc1 \c - DROP FUNCTION testfunc1(int); -- ok -- restore to sanity @@ -1537,22 +1571,54 @@ SELECT has_schema_privilege('regress_user2', 'testns5', 'CREATE'); -- no SET ROLE regress_user1; CREATE FUNCTION testns.foo() RETURNS int AS 'select 1' LANGUAGE sql; +CREATE AGGREGATE testns.agg1(int) (sfunc = int4pl, stype = int4); +CREATE PROCEDURE testns.bar() AS 'select 1' LANGUAGE sql; SELECT has_function_privilege('regress_user2', 'testns.foo()', 'EXECUTE'); -- no has_function_privilege ------------------------ f (1 row) -ALTER DEFAULT PRIVILEGES IN SCHEMA testns GRANT EXECUTE ON FUNCTIONS to public; +SELECT has_function_privilege('regress_user2', 'testns.agg1(int)', 'EXECUTE'); -- no + has_function_privilege +------------------------ + f +(1 row) + +SELECT has_function_privilege('regress_user2', 'testns.bar()', 'EXECUTE'); -- no + has_function_privilege +------------------------ + f +(1 row) + +ALTER DEFAULT PRIVILEGES IN SCHEMA testns GRANT EXECUTE ON ROUTINES to public; DROP FUNCTION testns.foo(); CREATE FUNCTION testns.foo() RETURNS int AS 'select 1' LANGUAGE sql; +DROP AGGREGATE testns.agg1(int); +CREATE AGGREGATE testns.agg1(int) (sfunc = int4pl, stype = int4); +DROP PROCEDURE testns.bar(); +CREATE PROCEDURE testns.bar() AS 'select 1' LANGUAGE sql; SELECT has_function_privilege('regress_user2', 'testns.foo()', 'EXECUTE'); -- yes has_function_privilege ------------------------ t (1 row) +SELECT has_function_privilege('regress_user2', 'testns.agg1(int)', 'EXECUTE'); -- yes + has_function_privilege +------------------------ + t +(1 row) + +SELECT has_function_privilege('regress_user2', 'testns.bar()', 'EXECUTE'); -- yes (counts as function here) + has_function_privilege +------------------------ + t +(1 row) + DROP FUNCTION testns.foo(); +DROP AGGREGATE testns.agg1(int); +DROP PROCEDURE testns.bar(); ALTER DEFAULT PRIVILEGES FOR ROLE regress_user1 REVOKE USAGE ON TYPES FROM public; CREATE DOMAIN testns.testdomain1 AS int; SELECT has_type_privilege('regress_user2', 'testns.testdomain1', 'USAGE'); -- no @@ -1631,12 +1697,26 @@ SELECT has_table_privilege('regress_user1', 'testns.t2', 'SELECT'); -- false (1 row) CREATE FUNCTION testns.testfunc(int) RETURNS int AS 'select 3 * $1;' LANGUAGE sql; +CREATE AGGREGATE testns.testagg(int) (sfunc = int4pl, stype = int4); +CREATE PROCEDURE testns.testproc(int) AS 'select 3' LANGUAGE sql; SELECT has_function_privilege('regress_user1', 'testns.testfunc(int)', 'EXECUTE'); -- true by default has_function_privilege ------------------------ t (1 row) +SELECT 
has_function_privilege('regress_user1', 'testns.testagg(int)', 'EXECUTE'); -- true by default + has_function_privilege +------------------------ + t +(1 row) + +SELECT has_function_privilege('regress_user1', 'testns.testproc(int)', 'EXECUTE'); -- true by default + has_function_privilege +------------------------ + t +(1 row) + REVOKE ALL ON ALL FUNCTIONS IN SCHEMA testns FROM PUBLIC; SELECT has_function_privilege('regress_user1', 'testns.testfunc(int)', 'EXECUTE'); -- false has_function_privilege @@ -1644,9 +1724,47 @@ SELECT has_function_privilege('regress_user1', 'testns.testfunc(int)', 'EXECUTE' f (1 row) +SELECT has_function_privilege('regress_user1', 'testns.testagg(int)', 'EXECUTE'); -- false + has_function_privilege +------------------------ + f +(1 row) + +SELECT has_function_privilege('regress_user1', 'testns.testproc(int)', 'EXECUTE'); -- still true, not a function + has_function_privilege +------------------------ + t +(1 row) + +REVOKE ALL ON ALL PROCEDURES IN SCHEMA testns FROM PUBLIC; +SELECT has_function_privilege('regress_user1', 'testns.testproc(int)', 'EXECUTE'); -- now false + has_function_privilege +------------------------ + f +(1 row) + +GRANT ALL ON ALL ROUTINES IN SCHEMA testns TO PUBLIC; +SELECT has_function_privilege('regress_user1', 'testns.testfunc(int)', 'EXECUTE'); -- true + has_function_privilege +------------------------ + t +(1 row) + +SELECT has_function_privilege('regress_user1', 'testns.testagg(int)', 'EXECUTE'); -- true + has_function_privilege +------------------------ + t +(1 row) + +SELECT has_function_privilege('regress_user1', 'testns.testproc(int)', 'EXECUTE'); -- true + has_function_privilege +------------------------ + t +(1 row) + \set VERBOSITY terse \\ -- suppress cascade details DROP SCHEMA testns CASCADE; -NOTICE: drop cascades to 3 other objects +NOTICE: drop cascades to 5 other objects \set VERBOSITY default -- Change owner of the schema & and rename of new schema owner \c - @@ -1729,8 +1847,10 @@ drop table dep_priv_test; -- clean up \c drop sequence x_seq; +DROP AGGREGATE testagg1(int); DROP FUNCTION testfunc2(int); DROP FUNCTION testfunc4(boolean); +DROP PROCEDURE testproc1(int); DROP VIEW atestv0; DROP VIEW atestv1; DROP VIEW atestv2; diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index 892a214f2f..e224977791 100644 --- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -53,7 +53,7 @@ test: copy copyselect copydml # ---------- # More groups of parallel tests # ---------- -test: create_misc create_operator +test: create_misc create_operator create_procedure # These depend on the above two test: create_index create_view diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index 15a1f861a9..9fc5f1a268 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -63,6 +63,7 @@ test: copyselect test: copydml test: create_misc test: create_operator +test: create_procedure test: create_index test: create_view test: create_aggregate diff --git a/src/test/regress/sql/create_procedure.sql b/src/test/regress/sql/create_procedure.sql new file mode 100644 index 0000000000..f09ba2ad30 --- /dev/null +++ b/src/test/regress/sql/create_procedure.sql @@ -0,0 +1,79 @@ +CALL nonexistent(); -- error +CALL random(); -- error + +CREATE FUNCTION testfunc1(a int) RETURNS int LANGUAGE SQL AS $$ SELECT a $$; + +CREATE TABLE cp_test (a int, b text); + +CREATE PROCEDURE ptest1(x text) +LANGUAGE SQL +AS $$ +INSERT INTO cp_test VALUES (1, x); 
+$$; + +SELECT ptest1('x'); -- error +CALL ptest1('a'); -- ok + +\df ptest1 + +SELECT * FROM cp_test ORDER BY a; + + +CREATE PROCEDURE ptest2() +LANGUAGE SQL +AS $$ +SELECT 5; +$$; + +CALL ptest2(); + + +-- various error cases + +CREATE PROCEDURE ptestx() LANGUAGE SQL WINDOW AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; +CREATE PROCEDURE ptestx() LANGUAGE SQL STRICT AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; +CREATE PROCEDURE ptestx(OUT a int) LANGUAGE SQL AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; + +ALTER PROCEDURE ptest1(text) STRICT; +ALTER FUNCTION ptest1(text) VOLATILE; -- error: not a function +ALTER PROCEDURE testfunc1(int) VOLATILE; -- error: not a procedure +ALTER PROCEDURE nonexistent() VOLATILE; + +DROP FUNCTION ptest1(text); -- error: not a function +DROP PROCEDURE testfunc1(int); -- error: not a procedure +DROP PROCEDURE nonexistent(); + + +-- privileges + +CREATE USER regress_user1; +GRANT INSERT ON cp_test TO regress_user1; +REVOKE EXECUTE ON PROCEDURE ptest1(text) FROM PUBLIC; +SET ROLE regress_user1; +CALL ptest1('a'); -- error +RESET ROLE; +GRANT EXECUTE ON PROCEDURE ptest1(text) TO regress_user1; +SET ROLE regress_user1; +CALL ptest1('a'); -- ok +RESET ROLE; + + +-- ROUTINE syntax + +ALTER ROUTINE testfunc1(int) RENAME TO testfunc1a; +ALTER ROUTINE testfunc1a RENAME TO testfunc1; + +ALTER ROUTINE ptest1(text) RENAME TO ptest1a; +ALTER ROUTINE ptest1a RENAME TO ptest1; + +DROP ROUTINE testfunc1(int); + + +-- cleanup + +DROP PROCEDURE ptest1; +DROP PROCEDURE ptest2; + +DROP TABLE cp_test; + +DROP USER regress_user1; diff --git a/src/test/regress/sql/object_address.sql b/src/test/regress/sql/object_address.sql index 63821b8008..55faa71edf 100644 --- a/src/test/regress/sql/object_address.sql +++ b/src/test/regress/sql/object_address.sql @@ -32,6 +32,7 @@ CREATE DOMAIN addr_nsp.gendomain AS int4 CONSTRAINT domconstr CHECK (value > 0); CREATE FUNCTION addr_nsp.trig() RETURNS TRIGGER LANGUAGE plpgsql AS $$ BEGIN END; $$; CREATE TRIGGER t BEFORE INSERT ON addr_nsp.gentable FOR EACH ROW EXECUTE PROCEDURE addr_nsp.trig(); CREATE POLICY genpol ON addr_nsp.gentable; +CREATE PROCEDURE addr_nsp.proc(int4) LANGUAGE SQL AS $$ $$; CREATE SERVER "integer" FOREIGN DATA WRAPPER addr_fdw; CREATE USER MAPPING FOR regress_addr_user SERVER "integer"; ALTER DEFAULT PRIVILEGES FOR ROLE regress_addr_user IN SCHEMA public GRANT ALL ON TABLES TO regress_addr_user; @@ -81,7 +82,7 @@ BEGIN ('table'), ('index'), ('sequence'), ('view'), ('materialized view'), ('foreign table'), ('table column'), ('foreign table column'), - ('aggregate'), ('function'), ('type'), ('cast'), + ('aggregate'), ('function'), ('procedure'), ('type'), ('cast'), ('table constraint'), ('domain constraint'), ('conversion'), ('default value'), ('operator'), ('operator class'), ('operator family'), ('rule'), ('trigger'), ('text search parser'), ('text search dictionary'), @@ -147,6 +148,7 @@ WITH objects (type, name, args) AS (VALUES ('foreign table column', '{addr_nsp, genftable, a}', '{}'), ('aggregate', '{addr_nsp, genaggr}', '{int4}'), ('function', '{pg_catalog, pg_identify_object}', '{pg_catalog.oid, pg_catalog.oid, int4}'), + ('procedure', '{addr_nsp, proc}', '{int4}'), ('type', '{pg_catalog._int4}', '{}'), ('type', '{addr_nsp.gendomain}', '{}'), ('type', '{addr_nsp.gencomptype}', '{}'), diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 6620ea6172..1c355132b7 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -4820,3 +4820,52 @@ BEGIN GET 
DIAGNOSTICS x = ROW_COUNT; RETURN; END; $$ LANGUAGE plpgsql; + + +-- +-- Procedures +-- + +CREATE PROCEDURE test_proc1() +LANGUAGE plpgsql +AS $$ +BEGIN + NULL; +END; +$$; + +CALL test_proc1(); + + +-- error: can't return non-NULL +CREATE PROCEDURE test_proc2() +LANGUAGE plpgsql +AS $$ +BEGIN + RETURN 5; +END; +$$; + +CALL test_proc2(); + + +CREATE TABLE proc_test1 (a int); + +CREATE PROCEDURE test_proc3(x int) +LANGUAGE plpgsql +AS $$ +BEGIN + INSERT INTO proc_test1 VALUES (x); +END; +$$; + +CALL test_proc3(55); + +SELECT * FROM proc_test1; + + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; + +DROP TABLE proc_test1; diff --git a/src/test/regress/sql/privileges.sql b/src/test/regress/sql/privileges.sql index a900ba2f84..ea8dd028cd 100644 --- a/src/test/regress/sql/privileges.sql +++ b/src/test/regress/sql/privileges.sql @@ -442,12 +442,21 @@ SET SESSION AUTHORIZATION regress_user1; GRANT USAGE ON LANGUAGE sql TO regress_user2; -- fail CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 * $1;' LANGUAGE sql; CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 * $1;' LANGUAGE sql; - -REVOKE ALL ON FUNCTION testfunc1(int), testfunc2(int) FROM PUBLIC; -GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regress_user2; +CREATE AGGREGATE testagg1(int) (sfunc = int4pl, stype = int4); +CREATE PROCEDURE testproc1(int) AS 'select $1;' LANGUAGE sql; + +REVOKE ALL ON FUNCTION testfunc1(int), testfunc2(int), testagg1(int) FROM PUBLIC; +GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int), testagg1(int) TO regress_user2; +REVOKE ALL ON FUNCTION testproc1(int) FROM PUBLIC; -- fail, not a function +REVOKE ALL ON PROCEDURE testproc1(int) FROM PUBLIC; +GRANT EXECUTE ON PROCEDURE testproc1(int) TO regress_user2; GRANT USAGE ON FUNCTION testfunc1(int) TO regress_user3; -- semantic error +GRANT USAGE ON FUNCTION testagg1(int) TO regress_user3; -- semantic error +GRANT USAGE ON PROCEDURE testproc1(int) TO regress_user3; -- semantic error GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regress_user4; GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regress_user4; +GRANT ALL PRIVILEGES ON FUNCTION testagg1(int) TO regress_user4; +GRANT ALL PRIVILEGES ON PROCEDURE testproc1(int) TO regress_user4; CREATE FUNCTION testfunc4(boolean) RETURNS text AS 'select col1 from atest2 where col2 = $1;' @@ -457,16 +466,24 @@ GRANT EXECUTE ON FUNCTION testfunc4(boolean) TO regress_user3; SET SESSION AUTHORIZATION regress_user2; SELECT testfunc1(5), testfunc2(5); -- ok CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 * $1;' LANGUAGE sql; -- fail +SELECT testagg1(x) FROM (VALUES (1), (2), (3)) _(x); -- ok +CALL testproc1(6); -- ok SET SESSION AUTHORIZATION regress_user3; SELECT testfunc1(5); -- fail +SELECT testagg1(x) FROM (VALUES (1), (2), (3)) _(x); -- fail +CALL testproc1(6); -- fail SELECT col1 FROM atest2 WHERE col2 = true; -- fail SELECT testfunc4(true); -- ok SET SESSION AUTHORIZATION regress_user4; SELECT testfunc1(5); -- ok +SELECT testagg1(x) FROM (VALUES (1), (2), (3)) _(x); -- ok +CALL testproc1(6); -- ok DROP FUNCTION testfunc1(int); -- fail +DROP AGGREGATE testagg1(int); -- fail +DROP PROCEDURE testproc1(int); -- fail \c - @@ -931,17 +948,29 @@ SELECT has_schema_privilege('regress_user2', 'testns5', 'CREATE'); -- no SET ROLE regress_user1; CREATE FUNCTION testns.foo() RETURNS int AS 'select 1' LANGUAGE sql; +CREATE AGGREGATE testns.agg1(int) (sfunc = int4pl, stype = int4); +CREATE PROCEDURE testns.bar() AS 'select 1' LANGUAGE sql; SELECT 
has_function_privilege('regress_user2', 'testns.foo()', 'EXECUTE'); -- no +SELECT has_function_privilege('regress_user2', 'testns.agg1(int)', 'EXECUTE'); -- no +SELECT has_function_privilege('regress_user2', 'testns.bar()', 'EXECUTE'); -- no -ALTER DEFAULT PRIVILEGES IN SCHEMA testns GRANT EXECUTE ON FUNCTIONS to public; +ALTER DEFAULT PRIVILEGES IN SCHEMA testns GRANT EXECUTE ON ROUTINES to public; DROP FUNCTION testns.foo(); CREATE FUNCTION testns.foo() RETURNS int AS 'select 1' LANGUAGE sql; +DROP AGGREGATE testns.agg1(int); +CREATE AGGREGATE testns.agg1(int) (sfunc = int4pl, stype = int4); +DROP PROCEDURE testns.bar(); +CREATE PROCEDURE testns.bar() AS 'select 1' LANGUAGE sql; SELECT has_function_privilege('regress_user2', 'testns.foo()', 'EXECUTE'); -- yes +SELECT has_function_privilege('regress_user2', 'testns.agg1(int)', 'EXECUTE'); -- yes +SELECT has_function_privilege('regress_user2', 'testns.bar()', 'EXECUTE'); -- yes (counts as function here) DROP FUNCTION testns.foo(); +DROP AGGREGATE testns.agg1(int); +DROP PROCEDURE testns.bar(); ALTER DEFAULT PRIVILEGES FOR ROLE regress_user1 REVOKE USAGE ON TYPES FROM public; @@ -995,12 +1024,28 @@ SELECT has_table_privilege('regress_user1', 'testns.t1', 'SELECT'); -- false SELECT has_table_privilege('regress_user1', 'testns.t2', 'SELECT'); -- false CREATE FUNCTION testns.testfunc(int) RETURNS int AS 'select 3 * $1;' LANGUAGE sql; +CREATE AGGREGATE testns.testagg(int) (sfunc = int4pl, stype = int4); +CREATE PROCEDURE testns.testproc(int) AS 'select 3' LANGUAGE sql; SELECT has_function_privilege('regress_user1', 'testns.testfunc(int)', 'EXECUTE'); -- true by default +SELECT has_function_privilege('regress_user1', 'testns.testagg(int)', 'EXECUTE'); -- true by default +SELECT has_function_privilege('regress_user1', 'testns.testproc(int)', 'EXECUTE'); -- true by default REVOKE ALL ON ALL FUNCTIONS IN SCHEMA testns FROM PUBLIC; SELECT has_function_privilege('regress_user1', 'testns.testfunc(int)', 'EXECUTE'); -- false +SELECT has_function_privilege('regress_user1', 'testns.testagg(int)', 'EXECUTE'); -- false +SELECT has_function_privilege('regress_user1', 'testns.testproc(int)', 'EXECUTE'); -- still true, not a function + +REVOKE ALL ON ALL PROCEDURES IN SCHEMA testns FROM PUBLIC; + +SELECT has_function_privilege('regress_user1', 'testns.testproc(int)', 'EXECUTE'); -- now false + +GRANT ALL ON ALL ROUTINES IN SCHEMA testns TO PUBLIC; + +SELECT has_function_privilege('regress_user1', 'testns.testfunc(int)', 'EXECUTE'); -- true +SELECT has_function_privilege('regress_user1', 'testns.testagg(int)', 'EXECUTE'); -- true +SELECT has_function_privilege('regress_user1', 'testns.testproc(int)', 'EXECUTE'); -- true \set VERBOSITY terse \\ -- suppress cascade details DROP SCHEMA testns CASCADE; @@ -1064,8 +1109,10 @@ drop table dep_priv_test; drop sequence x_seq; +DROP AGGREGATE testagg1(int); DROP FUNCTION testfunc2(int); DROP FUNCTION testfunc4(boolean); +DROP PROCEDURE testproc1(int); DROP VIEW atestv0; DROP VIEW atestv1; From 06ae669c9229270663d6c4953ceb3621e4bc2d5b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 30 Nov 2017 16:22:38 -0500 Subject: [PATCH 0641/1087] Remove extra word from comment. David Rowley, who also was the primary author of the patch that added this function; the attribution in my previous commit, 84940644de931f331433b35e3a391822671f8c9c, was incorrect due to sloppiness on my part. 
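As an aside for readers new to this API, here is a minimal sketch of the equivalence the corrected comment describes. This is not part of the patch; it assumes a backend translation unit with the usual headers, and bms_range_demo is a made-up demo function:

#include "postgres.h"
#include "nodes/bitmapset.h"

/* demo only: build the set {5 .. 250} two ways and compare */
static void
bms_range_demo(void)
{
	Bitmapset  *by_member = NULL;
	Bitmapset  *by_range = NULL;
	int			i;

	/* one call, and one bit set, per member */
	for (i = 5; i <= 250; i++)
		by_member = bms_add_member(by_member, i);

	/* fills whole bitmapwords at a time, so much cheaper for large ranges */
	by_range = bms_add_range(by_range, 5, 250);

	Assert(bms_equal(by_member, by_range));
}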
Discussion: http://postgr.es/m/CAKJS1f_0iSiLQsf_c06AzOWAc3eS6ePjjVQFpcFv3W-O5aktnQ@mail.gmail.com --- src/backend/nodes/bitmapset.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/nodes/bitmapset.c b/src/backend/nodes/bitmapset.c index e5096e01a7..ae30072a01 100644 --- a/src/backend/nodes/bitmapset.c +++ b/src/backend/nodes/bitmapset.c @@ -789,8 +789,8 @@ bms_add_members(Bitmapset *a, const Bitmapset *b) * Add members in the range of 'lower' to 'upper' to the set. * * Note this could also be done by calling bms_add_member in a loop, however, - * using this function will be faster when the range is large as we work with - * at the bitmapword level rather than at bit level. + * using this function will be faster when the range is large as we work at + * the bitmapword level rather than at bit level. */ Bitmapset * bms_add_range(Bitmapset *a, int lower, int upper) From 143b54d21d37804707c27edebdbd4410891da133 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 1 Dec 2017 09:21:34 -0500 Subject: [PATCH 0642/1087] pg_basebackup: Fix progress messages when writing to a file MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The progress messages print out \r to keep overwriting the same line on the screen. But this does not yield useful results when writing the output to a file. So in that case, print out \n instead. Author: Martín Marqués Reviewed-by: Arthur Zakirov --- src/bin/pg_basebackup/pg_basebackup.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c index 8427c97fe4..4fbca7d8e8 100644 --- a/src/bin/pg_basebackup/pg_basebackup.c +++ b/src/bin/pg_basebackup/pg_basebackup.c @@ -811,7 +811,10 @@ progress_report(int tablespacenum, const char *filename, bool force) totaldone_str, totalsize_str, percent, tablespacenum, tablespacecount); - fprintf(stderr, "\r"); + if (isatty(fileno(stderr))) + fprintf(stderr, "\r"); + else + fprintf(stderr, "\n"); } static int32 @@ -1796,7 +1799,13 @@ BaseBackup(void) progname); if (showprogress && !verbose) - fprintf(stderr, "waiting for checkpoint\r"); + { + fprintf(stderr, "waiting for checkpoint"); + if (isatty(fileno(stderr))) + fprintf(stderr, "\r"); + else + fprintf(stderr, "\n"); + } basebkp = psprintf("BASE_BACKUP LABEL '%s' %s %s %s %s %s %s", @@ -1929,7 +1938,8 @@ BaseBackup(void) if (showprogress) { progress_report(PQntuples(res), NULL, true); - fprintf(stderr, "\n"); /* Need to move to next line */ + if (isatty(fileno(stderr))) + fprintf(stderr, "\n"); /* Need to move to next line */ } PQclear(res); From 86ab28fbd19a6a0742a7f66e69a595b61eb13d00 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 22 Nov 2017 14:02:57 -0500 Subject: [PATCH 0643/1087] Check channel binding flag at end of SCRAM exchange We need to check whether the channel-binding flag encoded in the client-final-message is the same one sent in the client-first-message. 
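To restate the rule being enforced, a sketch follows. cbind_flag_matches is a hypothetical helper, not code from this patch; it relies on the fact that "biws" is base64("n,,") and "eSws" is base64("y,,"):

#include <stdbool.h>
#include <string.h>

/* hypothetical: does the echoed cbind-input agree with the original flag? */
static bool
cbind_flag_matches(char cbind_flag, const char *channel_binding)
{
	if (cbind_flag == 'n')		/* client does not support channel binding */
		return strcmp(channel_binding, "biws") == 0;	/* base64("n,,") */
	if (cbind_flag == 'y')		/* client supports it, server did not offer */
		return strcmp(channel_binding, "eSws") == 0;	/* base64("y,,") */
	return cbind_flag == 'p';	/* 'p': the real binding data is verified */
}

The point of the new check is that the flag cannot be silently downgraded between the two messages.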
Reviewed-by: Michael Paquier --- src/backend/libpq/auth-scram.c | 11 ++++++++--- src/interfaces/libpq/fe-auth-scram.c | 4 ++++ 2 files changed, 12 insertions(+), 3 deletions(-) diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c index 15c3857f57..d52a763457 100644 --- a/src/backend/libpq/auth-scram.c +++ b/src/backend/libpq/auth-scram.c @@ -110,6 +110,7 @@ typedef struct const char *username; /* username from startup packet */ + char cbind_flag; bool ssl_in_use; const char *tls_finished_message; size_t tls_finished_len; @@ -788,6 +789,7 @@ read_client_first_message(scram_state *state, char *input) * Read gs2-cbind-flag. (For details see also RFC 5802 Section 6 "Channel * Binding".) */ + state->cbind_flag = *input; switch (*input) { case 'n': @@ -1111,6 +1113,8 @@ read_client_final_message(scram_state *state, char *input) char *b64_message; int b64_message_len; + Assert(state->cbind_flag == 'p'); + /* * Fetch data appropriate for channel binding type */ @@ -1155,10 +1159,11 @@ read_client_final_message(scram_state *state, char *input) /* * If we are not using channel binding, the binding data is expected * to always be "biws", which is "n,," base64-encoded, or "eSws", - * which is "y,,". + * which is "y,,". We also have to check whether the flag is the same + * one that the client originally sent. */ - if (strcmp(channel_binding, "biws") != 0 && - strcmp(channel_binding, "eSws") != 0) + if (!(strcmp(channel_binding, "biws") == 0 && state->cbind_flag == 'n') && + !(strcmp(channel_binding, "eSws") == 0 && state->cbind_flag == 'y')) ereport(ERROR, (errcode(ERRCODE_PROTOCOL_VIOLATION), (errmsg("unexpected SCRAM channel-binding attribute in client-final-message")))); diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index 97db0b1faa..5b783bc313 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -437,6 +437,10 @@ build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) /* * Construct client-final-message-without-proof. We need to remember it * for verifying the server proof in the final step of authentication. + * + * The channel binding flag handling (p/y/n) must be consistent with + * build_client_first_message(), because the server will check that it's + * the same flag both times. */ if (strcmp(state->sasl_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) { From 59c8078744b5febf549c8b53171242cf667b87a1 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Dec 2017 10:01:50 -0500 Subject: [PATCH 0644/1087] Fix uninitialized memory reference. Without this, when partdesc->nparts == 0, we end up calling ExecBuildSlotPartitionKeyDescription without initializing values and isnull. Reported by Coverity via Michael Paquier. Patch by Michael Paquier, reviewed and revised by Amit Langote. 
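A reduced sketch of the hazard (hypothetical shape only; the real code is ExecFindPartition, and report_no_partition stands in for the ereport that builds the partition key description):

	Datum	values[PARTITION_MAX_KEYS];	/* stack garbage until filled */
	bool	isnull[PARTITION_MAX_KEYS];

	if (partdesc->nparts == 0)
		result = -1;			/* old code: bailed out here ... */
	else
	{
		FormPartitionKeyDatum(parent, slot, estate, values, isnull);
		result = get_partition_for_tuple(rel, values, isnull);
	}

	if (result < 0)
		report_no_partition(rel, values, isnull);	/* ... but this reads
													 * the arrays, so the
													 * early exit must not
													 * skip filling them */

Moving FormPartitionKeyDatum ahead of the nparts == 0 exit, as the patch does, ensures the error path reads only initialized memory.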
Discussion: http://postgr.es/m/CAB7nPqQ3mwkdMoPY-ocgTpPnjd8TKOadMxdTtMLvEzF8480Zfg@mail.gmail.com --- src/backend/executor/execPartition.c | 18 +++++++++++------- src/test/regress/expected/insert.out | 4 ++++ src/test/regress/sql/insert.sql | 4 ++++ 3 files changed, 19 insertions(+), 7 deletions(-) diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index 59a0ca4597..34875945e8 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -206,13 +206,6 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, slot = myslot; } - /* Quick exit */ - if (partdesc->nparts == 0) - { - result = -1; - break; - } - /* * Extract partition key from tuple. Expression evaluation machinery * that FormPartitionKeyDatum() invokes expects ecxt_scantuple to @@ -223,6 +216,17 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, */ ecxt->ecxt_scantuple = slot; FormPartitionKeyDatum(parent, slot, estate, values, isnull); + + /* + * Nothing for get_partition_for_tuple() to do if there are no + * partitions to begin with. + */ + if (partdesc->nparts == 0) + { + result = -1; + break; + } + cur_index = get_partition_for_tuple(rel, values, isnull); /* diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out index b7b37dbc39..dcbaad8e2f 100644 --- a/src/test/regress/expected/insert.out +++ b/src/test/regress/expected/insert.out @@ -165,6 +165,10 @@ create table range_parted ( a text, b int ) partition by range (a, (b+0)); +-- no partitions, so fail +insert into range_parted values ('a', 11); +ERROR: no partition of relation "range_parted" found for row +DETAIL: Partition key of the failing row contains (a, (b + 0)) = (a, 11). create table part1 partition of range_parted for values from ('a', 1) to ('a', 10); create table part2 partition of range_parted for values from ('a', 10) to ('a', 20); create table part3 partition of range_parted for values from ('b', 1) to ('b', 10); diff --git a/src/test/regress/sql/insert.sql b/src/test/regress/sql/insert.sql index 310b818076..0150b6bb0f 100644 --- a/src/test/regress/sql/insert.sql +++ b/src/test/regress/sql/insert.sql @@ -90,6 +90,10 @@ create table range_parted ( a text, b int ) partition by range (a, (b+0)); + +-- no partitions, so fail +insert into range_parted values ('a', 11); + create table part1 partition of range_parted for values from ('a', 1) to ('a', 10); create table part2 partition of range_parted for values from ('a', 10) to ('a', 20); create table part3 partition of range_parted for values from ('b', 1) to ('b', 10); From 1cbc17aaca82b2e262912da96c49b2e1d2f492e7 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Dec 2017 10:58:08 -0500 Subject: [PATCH 0645/1087] Try to exclude partitioned tables in toto. Ashutosh Bapat, reviewed by Jeevan Chalke. Comment by me. 
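Restating the new rule as a sketch (parent_constraints_usable is a hypothetical helper; the real test is inline in relation_excluded_by_constraints):

#include "postgres.h"
#include "catalog/pg_class.h"
#include "nodes/parsenodes.h"

/* hypothetical: may we test this RTE's constraints for exclusion? */
static bool
parent_constraints_usable(RangeTblEntry *rte)
{
	if (rte->rtekind != RTE_RELATION)
		return false;		/* only plain relations carry constraints */
	if (!rte->inh)
		return true;		/* a single relation: its constraints apply */

	/*
	 * An inheritance set: only a partitioned parent is guaranteed to
	 * pass its constraints down to every child; a regular inheritance
	 * parent may have constraints marked NO INHERIT that its children
	 * lack.
	 */
	return rte->relkind == RELKIND_PARTITIONED_TABLE;
}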
Discussion: http://postgr.es/m/CAFjFpRcuRaydz88CY_aQekmuvmN2A9ax5z0k=ppT+s8KS8xMRA@mail.gmail.com --- src/backend/optimizer/util/plancat.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index 199a2631a1..f7438714c4 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -1414,8 +1414,18 @@ relation_excluded_by_constraints(PlannerInfo *root, if (predicate_refuted_by(safe_restrictions, safe_restrictions, false)) return true; - /* Only plain relations have constraints */ - if (rte->rtekind != RTE_RELATION || rte->inh) + /* + * Only plain relations have constraints. In a partitioning hierarchy, + * but not with regular table inheritance, it's OK to assume that any + * constraints that hold for the parent also hold for every child; for + * instance, table inheritance allows the parent to have constraints + * marked NO INHERIT, but table partitioning does not. We choose to check + * whether the partitioning parents can be excluded here; doing so + * consumes some cycles, but potentially saves us the work of excluding + * each child individually. + */ + if (rte->rtekind != RTE_RELATION || + (rte->inh && rte->relkind != RELKIND_PARTITIONED_TABLE)) return false; /* From 87c37e3291cb75273ccdf4645b9472dd805c4493 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Dec 2017 12:53:21 -0500 Subject: [PATCH 0646/1087] Re-allow INSERT .. ON CONFLICT DO NOTHING on partitioned tables. Commit 8355a011a0124bdf7ccbada206a967d427039553 was reverted in f05230752d53c4aa74cffa9b699983bbb6bcb118, but this attempt is hopefully better-considered: we now pass the correct value to ExecOpenIndices, which should avoid the crash that we hit before. Amit Langote, reviewed by Simon Riggs and by me. Some final editing by me. Discussion: http://postgr.es/m/7ff1e8ec-dc39-96b1-7f47-ff5965dceeac@lab.ntt.co.jp --- doc/src/sgml/ddl.sgml | 13 +++++++++---- src/backend/commands/copy.c | 3 ++- src/backend/executor/execPartition.c | 15 ++++++++++----- src/backend/executor/nodeModifyTable.c | 3 ++- src/backend/parser/analyze.c | 8 -------- src/include/executor/execPartition.h | 3 ++- src/test/regress/expected/insert_conflict.out | 13 +++++++++++++ src/test/regress/sql/insert_conflict.sql | 13 +++++++++++++ 8 files changed, 51 insertions(+), 20 deletions(-) diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 9f583266de..b1167a40e6 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -3288,10 +3288,15 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 Using the ON CONFLICT clause with partitioned tables - will cause an error, because unique or exclusion constraints can only be - created on individual partitions. There is no support for enforcing - uniqueness (or an exclusion constraint) across an entire partitioning - hierarchy. + will cause an error if the conflict target is specified (see + for more details on how the clause + works). Therefore, it is not possible to specify + DO UPDATE as the alternative action, because + specifying the conflict target is mandatory in that case. On the other + hand, specifying DO NOTHING as the alternative action + works fine provided the conflict target is not specified. In that case, + unique constraints (or exclusion constraints) of the individual leaf + partitions are considered. 
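In executor terms, the crux of the fix is visible in the execPartition.c hunk below; as a sketch (not a literal excerpt), the decision it now makes when opening a leaf partition's indexes is:

	/* support speculative insertion, which ON CONFLICT requires,
	 * exactly when the plan actually carries that clause */
	bool		speculative = (mtstate != NULL &&
							   mtstate->mt_onconflict != ONCONFLICT_NONE);

	if (leaf_part_rri->ri_RelationDesc->rd_rel->relhasindex &&
		leaf_part_rri->ri_IndexRelationDescs == NULL)
		ExecOpenIndices(leaf_part_rri, speculative);

Passing a hard-coded false there, as the old code did, left the leaf's index state unprepared for speculative insertion, which is what made the earlier attempt crash.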
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 13eb9e34ba..bace390470 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -2478,7 +2478,8 @@ CopyFrom(CopyState cstate) int num_parted, num_partitions; - ExecSetupPartitionTupleRouting(cstate->rel, + ExecSetupPartitionTupleRouting(NULL, + cstate->rel, 1, estate, &partition_dispatch_info, diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index 34875945e8..d545af2b67 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -63,7 +63,8 @@ static char *ExecBuildSlotPartitionKeyDescription(Relation rel, * RowExclusiveLock mode upon return from this function. */ void -ExecSetupPartitionTupleRouting(Relation rel, +ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, + Relation rel, Index resultRTindex, EState *estate, PartitionDispatch **pd, @@ -133,13 +134,17 @@ ExecSetupPartitionTupleRouting(Relation rel, CheckValidResultRel(leaf_part_rri, CMD_INSERT); /* - * Open partition indices (remember we do not support ON CONFLICT in - * case of partitioned tables, so we do not need support information - * for speculative insertion) + * Open partition indices. The user may have asked to check for + * conflicts within this leaf partition and do "nothing" instead of + * throwing an error. Be prepared in that case by initializing the + * index information needed by ExecInsert() to perform speculative + * insertions. */ if (leaf_part_rri->ri_RelationDesc->rd_rel->relhasindex && leaf_part_rri->ri_IndexRelationDescs == NULL) - ExecOpenIndices(leaf_part_rri, false); + ExecOpenIndices(leaf_part_rri, + mtstate != NULL && + mtstate->mt_onconflict != ONCONFLICT_NONE); estate->es_leaf_result_relations = lappend(estate->es_leaf_result_relations, leaf_part_rri); diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 1e3ece9b34..afb83ed3ae 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -1953,7 +1953,8 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) int num_parted, num_partitions; - ExecSetupPartitionTupleRouting(rel, + ExecSetupPartitionTupleRouting(mtstate, + rel, node->nominalRelation, estate, &partition_dispatch_info, diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c index 757a4a8fd1..d680d2285c 100644 --- a/src/backend/parser/analyze.c +++ b/src/backend/parser/analyze.c @@ -847,16 +847,8 @@ transformInsertStmt(ParseState *pstate, InsertStmt *stmt) /* Process ON CONFLICT, if any. 
*/ if (stmt->onConflictClause) - { - /* Bail out if target relation is partitioned table */ - if (pstate->p_target_rangetblentry->relkind == RELKIND_PARTITIONED_TABLE) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("ON CONFLICT clause is not supported with partitioned tables"))); - qry->onConflict = transformOnConflictClause(pstate, stmt->onConflictClause); - } /* * If we have a RETURNING clause, we need to add the target relation to diff --git a/src/include/executor/execPartition.h b/src/include/executor/execPartition.h index 43ca9908aa..86a199d169 100644 --- a/src/include/executor/execPartition.h +++ b/src/include/executor/execPartition.h @@ -49,7 +49,8 @@ typedef struct PartitionDispatchData typedef struct PartitionDispatchData *PartitionDispatch; -extern void ExecSetupPartitionTupleRouting(Relation rel, +extern void ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, + Relation rel, Index resultRTindex, EState *estate, PartitionDispatch **pd, diff --git a/src/test/regress/expected/insert_conflict.out b/src/test/regress/expected/insert_conflict.out index 8d005fddd4..8fd2027d6a 100644 --- a/src/test/regress/expected/insert_conflict.out +++ b/src/test/regress/expected/insert_conflict.out @@ -786,3 +786,16 @@ select * from selfconflict; (3 rows) drop table selfconflict; +-- check that the following works: +-- insert into partitioned_table on conflict do nothing +create table parted_conflict_test (a int, b char) partition by list (a); +create table parted_conflict_test_1 partition of parted_conflict_test (b unique) for values in (1); +insert into parted_conflict_test values (1, 'a') on conflict do nothing; +insert into parted_conflict_test values (1, 'a') on conflict do nothing; +-- however, on conflict do update is not supported yet +insert into parted_conflict_test values (1) on conflict (b) do update set a = excluded.a; +ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification +-- but it works OK if we target the partition directly +insert into parted_conflict_test_1 values (1) on conflict (b) do +update set a = excluded.a; +drop table parted_conflict_test; diff --git a/src/test/regress/sql/insert_conflict.sql b/src/test/regress/sql/insert_conflict.sql index df3a9b59b5..32c647e3f8 100644 --- a/src/test/regress/sql/insert_conflict.sql +++ b/src/test/regress/sql/insert_conflict.sql @@ -471,3 +471,16 @@ commit; select * from selfconflict; drop table selfconflict; + +-- check that the following works: +-- insert into partitioned_table on conflict do nothing +create table parted_conflict_test (a int, b char) partition by list (a); +create table parted_conflict_test_1 partition of parted_conflict_test (b unique) for values in (1); +insert into parted_conflict_test values (1, 'a') on conflict do nothing; +insert into parted_conflict_test values (1, 'a') on conflict do nothing; +-- however, on conflict do update is not supported yet +insert into parted_conflict_test values (1) on conflict (b) do update set a = excluded.a; +-- but it works OK if we target the partition directly +insert into parted_conflict_test_1 values (1) on conflict (b) do +update set a = excluded.a; +drop table parted_conflict_test; From 950222780535e6d2ea560cd8b3db5308652ddd42 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Dec 2017 13:47:00 -0500 Subject: [PATCH 0647/1087] postgres_fdw: Fix test that didn't test what it claimed. 
Antonin Houska reported that the planner does consider pushing postgres_fdw_abs() to the remote side, which happens because we make it shippable earlier in the test case file. Jeevan Chalke provided this patch, which changes the join condition to use random(), which is not shippable, instead. Antonin reviewed the patch. Discussion: http://postgr.es/m/15265.1511985971@localhost --- .../postgres_fdw/expected/postgres_fdw.out | 24 ++++++++++--------- contrib/postgres_fdw/sql/postgres_fdw.sql | 2 +- 2 files changed, 14 insertions(+), 12 deletions(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 1063d92825..bce334811b 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -3160,21 +3160,23 @@ drop operator public.<^(int, int); -- Input relation to aggregate push down hook is not safe to pushdown and thus -- the aggregate cannot be pushed down to foreign server. explain (verbose, costs off) -select count(t1.c3) from ft1 t1, ft1 t2 where t1.c1 = postgres_fdw_abs(t1.c2); - QUERY PLAN ----------------------------------------------------------------------------------------------------------- +select count(t1.c3) from ft2 t1 left join ft2 t2 on (t1.c1 = random() * t2.c2); + QUERY PLAN +------------------------------------------------------------------------------------------- Aggregate Output: count(t1.c3) - -> Nested Loop + -> Nested Loop Left Join Output: t1.c3 - -> Foreign Scan on public.ft1 t2 - Remote SQL: SELECT NULL FROM "S 1"."T 1" + Join Filter: ((t1.c1)::double precision = (random() * (t2.c2)::double precision)) + -> Foreign Scan on public.ft2 t1 + Output: t1.c3, t1.c1 + Remote SQL: SELECT "C 1", c3 FROM "S 1"."T 1" -> Materialize - Output: t1.c3 - -> Foreign Scan on public.ft1 t1 - Output: t1.c3 - Remote SQL: SELECT c3 FROM "S 1"."T 1" WHERE (("C 1" = public.postgres_fdw_abs(c2))) -(11 rows) + Output: t2.c2 + -> Foreign Scan on public.ft2 t2 + Output: t2.c2 + Remote SQL: SELECT c2 FROM "S 1"."T 1" +(13 rows) -- Subquery in FROM clause having aggregate explain (verbose, costs off) diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 09869578da..1df1e3aad0 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -829,7 +829,7 @@ drop operator public.<^(int, int); -- Input relation to aggregate push down hook is not safe to pushdown and thus -- the aggregate cannot be pushed down to foreign server. explain (verbose, costs off) -select count(t1.c3) from ft1 t1, ft1 t2 where t1.c1 = postgres_fdw_abs(t1.c2); +select count(t1.c3) from ft2 t1 left join ft2 t2 on (t1.c1 = random() * t2.c2); -- Subquery in FROM clause having aggregate explain (verbose, costs off) From 35438e5763c3021e579472e4b0c4a4d6038570b4 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 1 Dec 2017 13:52:59 -0500 Subject: [PATCH 0648/1087] Minor code beautification in partition_bounds_equal. Use get_greatest_modulus more consistently, instead of doing the same thing in an ad-hoc manner in this one place. 
Ashutosh Bapat Discussion: http://postgr.es/m/CAFjFpReT9L4RCiJBKOyWC2=i02kv9uG2fx=4Fv7kFY2t0SPCgw@mail.gmail.com --- src/backend/catalog/partition.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 2bf8117757..dd4a8d3c02 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -751,15 +751,13 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval, if (b1->strategy == PARTITION_STRATEGY_HASH) { - int greatest_modulus; + int greatest_modulus = get_greatest_modulus(b1); /* * If two hash partitioned tables have different greatest moduli, - * their partition schemes don't match. For hash partitioned table, - * the greatest modulus is given by the last datum and number of - * partitions is given by ndatums. + * their partition schemes don't match. */ - if (b1->datums[b1->ndatums - 1][0] != b2->datums[b2->ndatums - 1][0]) + if (greatest_modulus != get_greatest_modulus(b2)) return false; /* @@ -773,7 +771,6 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval, * their indexes array will be same. So, it suffices to compare * indexes array. */ - greatest_modulus = get_greatest_modulus(b1); for (i = 0; i < greatest_modulus; i++) if (b1->indexes[i] != b2->indexes[i]) return false; From dc6c4c9dc2a111519b76b22daaaac86c5608223b Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 1 Dec 2017 16:30:56 -0800 Subject: [PATCH 0649/1087] Add infrastructure for sharing temporary files between backends. SharedFileSet allows temporary files to be created by one backend and then exported for read-only access by other backends, with clean-up managed by reference counting associated with a DSM segment. This includes changes to fd.c and buffile.c to support the new kind of temporary file. This will be used by an upcoming patch adding support for parallel hash joins. Author: Thomas Munro Reviewed-By: Peter Geoghegan, Andres Freund, Robert Haas, Rushabh Lathia Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com https://postgr.es/m/CAH2-WznJ_UgLux=_jTgCQ4yFz0iBntudsNKa1we3kN1BAG=88w@mail.gmail.com --- src/backend/storage/file/Makefile | 2 +- src/backend/storage/file/buffile.c | 205 +++++++++++- src/backend/storage/file/fd.c | 395 +++++++++++++++++++---- src/backend/storage/file/sharedfileset.c | 244 ++++++++++++++ src/include/storage/buffile.h | 7 + src/include/storage/fd.h | 11 +- src/include/storage/sharedfileset.h | 45 +++ src/tools/pgindent/typedefs.list | 1 + 8 files changed, 848 insertions(+), 62 deletions(-) create mode 100644 src/backend/storage/file/sharedfileset.c create mode 100644 src/include/storage/sharedfileset.h diff --git a/src/backend/storage/file/Makefile b/src/backend/storage/file/Makefile index d2198f2b93..ca6a0e4f7d 100644 --- a/src/backend/storage/file/Makefile +++ b/src/backend/storage/file/Makefile @@ -12,6 +12,6 @@ subdir = src/backend/storage/file top_builddir = ../../../.. 
include $(top_builddir)/src/Makefile.global -OBJS = fd.o buffile.o copydir.o reinit.o +OBJS = fd.o buffile.o copydir.o reinit.o sharedfileset.o include $(top_srcdir)/src/backend/common.mk diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c index 06bf2fadbf..fa9940da9b 100644 --- a/src/backend/storage/file/buffile.c +++ b/src/backend/storage/file/buffile.c @@ -31,12 +31,18 @@ * BufFile also supports temporary files that exceed the OS file size limit * (by opening multiple fd.c temporary files). This is an essential feature * for sorts and hashjoins on large amounts of data. + * + * BufFile supports temporary files that can be made read-only and shared with + * other backends, as infrastructure for parallel execution. Such files need + * to be created as a member of a SharedFileSet that all participants are + * attached to. *------------------------------------------------------------------------- */ #include "postgres.h" #include "executor/instrument.h" +#include "miscadmin.h" #include "pgstat.h" #include "storage/fd.h" #include "storage/buffile.h" @@ -70,6 +76,10 @@ struct BufFile bool isInterXact; /* keep open over transactions? */ bool dirty; /* does buffer need to be written? */ + bool readOnly; /* has the file been set to read only? */ + + SharedFileSet *fileset; /* space for segment files if shared */ + const char *name; /* name of this BufFile if shared */ /* * resowner is the ResourceOwner to use for underlying temp files. (We @@ -94,6 +104,7 @@ static void extendBufFile(BufFile *file); static void BufFileLoadBuffer(BufFile *file); static void BufFileDumpBuffer(BufFile *file); static int BufFileFlush(BufFile *file); +static File MakeNewSharedSegment(BufFile *file, int segment); /* @@ -117,6 +128,9 @@ makeBufFile(File firstfile) file->curOffset = 0L; file->pos = 0; file->nbytes = 0; + file->readOnly = false; + file->fileset = NULL; + file->name = NULL; return file; } @@ -134,7 +148,11 @@ extendBufFile(BufFile *file) oldowner = CurrentResourceOwner; CurrentResourceOwner = file->resowner; - pfile = OpenTemporaryFile(file->isInterXact); + if (file->fileset == NULL) + pfile = OpenTemporaryFile(file->isInterXact); + else + pfile = MakeNewSharedSegment(file, file->numFiles); + Assert(pfile >= 0); CurrentResourceOwner = oldowner; @@ -175,6 +193,189 @@ BufFileCreateTemp(bool interXact) return file; } +/* + * Build the name for a given segment of a given BufFile. + */ +static void +SharedSegmentName(char *name, const char *buffile_name, int segment) +{ + snprintf(name, MAXPGPATH, "%s.%d", buffile_name, segment); +} + +/* + * Create a new segment file backing a shared BufFile. + */ +static File +MakeNewSharedSegment(BufFile *buffile, int segment) +{ + char name[MAXPGPATH]; + File file; + + SharedSegmentName(name, buffile->name, segment); + file = SharedFileSetCreate(buffile->fileset, name); + + /* SharedFileSetCreate would've errored out */ + Assert(file > 0); + + return file; +} + +/* + * Create a BufFile that can be discovered and opened read-only by other + * backends that are attached to the same SharedFileSet using the same name. + * + * The naming scheme for shared BufFiles is left up to the calling code. The + * name will appear as part of one or more filenames on disk, and might + * provide clues to administrators about which subsystem is generating + * temporary file data. Since each SharedFileSet object is backed by one or + * more uniquely named temporary directory, names don't conflict with + * unrelated SharedFileSet objects. 
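+ * + * As an illustrative usage sketch (hypothetical names, not code from this + * patch): a backend that has created a SharedFileSet might do + * + * BufFile *f = BufFileCreateShared(fileset, "somename"); + * BufFileWrite(f, data, len); + * BufFileExportShared(f); + * + * after which any backend attached to the same SharedFileSet can read the + * data back via BufFileOpenShared(fileset, "somename").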
+ */ +BufFile * +BufFileCreateShared(SharedFileSet *fileset, const char *name) +{ + BufFile *file; + + file = (BufFile *) palloc(sizeof(BufFile)); + file->fileset = fileset; + file->name = pstrdup(name); + file->numFiles = 1; + file->files = (File *) palloc(sizeof(File)); + file->files[0] = MakeNewSharedSegment(file, 0); + file->offsets = (off_t *) palloc(sizeof(off_t)); + file->offsets[0] = 0L; + file->isInterXact = false; + file->dirty = false; + file->resowner = CurrentResourceOwner; + file->curFile = 0; + file->curOffset = 0L; + file->pos = 0; + file->nbytes = 0; + file->readOnly = false; + + return file; +} + +/* + * Open a file that was previously created in another backend (or this one) + * with BufFileCreateShared in the same SharedFileSet using the same name. + * The backend that created the file must have called BufFileClose() or + * BufFileExportShared() to make sure that it is ready to be opened by other + * backends and render it read-only. + */ +BufFile * +BufFileOpenShared(SharedFileSet *fileset, const char *name) +{ + BufFile *file = (BufFile *) palloc(sizeof(BufFile)); + char segment_name[MAXPGPATH]; + Size capacity = 16; + File *files = palloc(sizeof(File) * capacity); + int nfiles = 0; + + /* + * We don't know how many segments there are, so we'll probe the + * filesystem to find out. + */ + for (;;) + { + /* See if we need to expand our file segment array. */ + if (nfiles + 1 > capacity) + { + capacity *= 2; + files = repalloc(files, sizeof(File) * capacity); + } + /* Try to load a segment. */ + SharedSegmentName(segment_name, name, nfiles); + files[nfiles] = SharedFileSetOpen(fileset, segment_name); + if (files[nfiles] <= 0) + break; + ++nfiles; + + CHECK_FOR_INTERRUPTS(); + } + + /* + * If we didn't find any files at all, then no BufFile exists with this + * name. + */ + if (nfiles == 0) + return NULL; + + file->numFiles = nfiles; + file->files = files; + file->offsets = (off_t *) palloc0(sizeof(off_t) * nfiles); + file->isInterXact = false; + file->dirty = false; + file->resowner = CurrentResourceOwner; /* Unused, can't extend */ + file->curFile = 0; + file->curOffset = 0L; + file->pos = 0; + file->nbytes = 0; + file->readOnly = true; /* Can't write to files opened this way */ + file->fileset = fileset; + file->name = pstrdup(name); + + return file; +} + +/* + * Delete a BufFile that was created by BufFileCreateShared in the given + * SharedFileSet using the given name. + * + * It is not necessary to delete files explicitly with this function. It is + * provided only as a way to delete files proactively, rather than waiting for + * the SharedFileSet to be cleaned up. + * + * Only one backend should attempt to delete a given name, and should know + * that it exists and has been exported or closed. + */ +void +BufFileDeleteShared(SharedFileSet *fileset, const char *name) +{ + char segment_name[MAXPGPATH]; + int segment = 0; + bool found = false; + + /* + * We don't know how many segments the file has. We'll keep deleting + * until we run out. If we don't manage to find even an initial segment, + * raise an error. 
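+ * + * For example, deleting a BufFile named "f" unlinks the segment files f.0, + * f.1, ... in order, stopping at the first segment that turns out not to + * exist.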
+ */ + for (;;) + { + SharedSegmentName(segment_name, name, segment); + if (!SharedFileSetDelete(fileset, segment_name, true)) + break; + found = true; + ++segment; + + CHECK_FOR_INTERRUPTS(); + } + + if (!found) + elog(ERROR, "could not delete unknown shared BufFile \"%s\"", name); +} + +/* + * BufFileExportShared --- flush and make read-only, in preparation for sharing. + */ +void +BufFileExportShared(BufFile *file) +{ + /* Must be a file belonging to a SharedFileSet. */ + Assert(file->fileset != NULL); + + /* It's probably a bug if someone calls this twice. */ + Assert(!file->readOnly); + + BufFileFlush(file); + file->readOnly = true; +} + /* * Close a BufFile * @@ -390,6 +591,8 @@ BufFileWrite(BufFile *file, void *ptr, size_t size) size_t nwritten = 0; size_t nthistime; + Assert(!file->readOnly); + while (size > 0) { if (file->pos >= BLCKSZ) diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index aa2fe2c6c0..2e93e4ad63 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -39,6 +39,14 @@ * for a long time, like relation files. It is the caller's responsibility * to close them, there is no automatic mechanism in fd.c for that. * + * PathName(Create|Open|Delete)Temporary(File|Dir) are used to manage + * temporary files that have names so that they can be shared between + * backends. Such files are automatically closed and count against the + * temporary file limit of the backend that creates them, but unlike anonymous + * files they are not automatically deleted. See sharedfileset.c for a shared + * ownership mechanism that provides automatic cleanup for shared files when + * the last of a group of backends detaches. + * * AllocateFile, AllocateDir, OpenPipeStream and OpenTransientFile are * wrappers around fopen(3), opendir(3), popen(3) and open(2), respectively. * They behave like the corresponding native functions, except that the handle @@ -175,8 +183,9 @@ int max_safe_fds = 32; /* default if not changed */ #define FilePosIsUnknown(pos) ((pos) < 0) /* these are the assigned bits in fdstate below: */ -#define FD_TEMPORARY (1 << 0) /* T = delete when closed */ -#define FD_XACT_TEMPORARY (1 << 1) /* T = delete at eoXact */ +#define FD_DELETE_AT_CLOSE (1 << 0) /* T = delete when closed */ +#define FD_CLOSE_AT_EOXACT (1 << 1) /* T = close at eoXact */ +#define FD_TEMP_FILE_LIMIT (1 << 2) /* T = respect temp_file_limit */ typedef struct vfd { @@ -313,7 +322,7 @@ static struct dirent *ReadDirExtended(DIR *dir, const char *dirname, int elevel) static void AtProcExit_Files(int code, Datum arg); static void CleanupTempFiles(bool isProcExit); -static void RemovePgTempFilesInDir(const char *tmpdirname); +static void RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all); static void RemovePgTempRelationFiles(const char *tsdirname); static void RemovePgTempRelationFilesInDbspace(const char *dbspacedirname); static bool looks_like_temp_rel_name(const char *name); @@ -326,6 +335,7 @@ static void walkdir(const char *path, static void pre_sync_fname(const char *fname, bool isdir, int elevel); #endif static void datadir_fsync_fname(const char *fname, bool isdir, int elevel); +static void unlink_if_exists_fname(const char *fname, bool isdir, int elevel); static int fsync_fname_ext(const char *fname, bool isdir, bool ignore_perm, int elevel); static int fsync_parent_path(const char *fname, int elevel); @@ -1294,6 +1304,39 @@ FileAccess(File file) return 0; } +/* + * Called whenever a temporary file is deleted to report its size. 
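+ * The size is reported to the statistics collector via + * pgstat_report_tempfile(), and also to the server log when log_temp_files + * is enabled.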
+ */ +static void +ReportTemporaryFileUsage(const char *path, off_t size) +{ + pgstat_report_tempfile(size); + + if (log_temp_files >= 0) + { + if ((size / 1024) >= log_temp_files) + ereport(LOG, + (errmsg("temporary file: path \"%s\", size %lu", + path, (unsigned long) size))); + } +} + +/* + * Called to register a temporary file for automatic close. + * ResourceOwnerEnlargeFiles(CurrentResourceOwner) must have been called + * before the file was opened. + */ +static void +RegisterTemporaryFile(File file) +{ + ResourceOwnerRememberFile(CurrentResourceOwner, file); + VfdCache[file].resowner = CurrentResourceOwner; + + /* Backup mechanism for closing at end of xact. */ + VfdCache[file].fdstate |= FD_CLOSE_AT_EOXACT; + have_xact_temporary_files = true; +} + /* * Called when we get a shared invalidation message on some relation. */ @@ -1378,6 +1421,67 @@ PathNameOpenFilePerm(const char *fileName, int fileFlags, mode_t fileMode) return file; } +/* + * Create directory 'directory'. If necessary, create 'basedir', which must + * be the directory above it. This is designed for creating the top-level + * temporary directory on demand before creating a directory underneath it. + * Do nothing if the directory already exists. + * + * Directories created within the top-level temporary directory should begin + * with PG_TEMP_FILE_PREFIX, so that they can be identified as temporary and + * deleted at startup by RemovePgTempFiles(). Further subdirectories below + * that do not need any particular prefix. +*/ +void +PathNameCreateTemporaryDir(const char *basedir, const char *directory) +{ + if (mkdir(directory, S_IRWXU) < 0) + { + if (errno == EEXIST) + return; + + /* + * Failed. Try to create basedir first in case it's missing. Tolerate + * EEXIST to close a race against another process following the same + * algorithm. + */ + if (mkdir(basedir, S_IRWXU) < 0 && errno != EEXIST) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("cannot create temporary directory \"%s\": %m", + basedir))); + + /* Try again. */ + if (mkdir(directory, S_IRWXU) < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("cannot create temporary subdirectory \"%s\": %m", + directory))); + } +} + +/* + * Delete a directory and everything in it, if it exists. + */ +void +PathNameDeleteTemporaryDir(const char *dirname) +{ + struct stat statbuf; + + /* Silently ignore missing directory. */ + if (stat(dirname, &statbuf) != 0 && errno == ENOENT) + return; + + /* + * Currently, walkdir doesn't offer a way for our passed in function to + * maintain state. Perhaps it should, so that we could tell the caller + * whether this operation succeeded or failed. Since this operation is + * used in a cleanup path, we wouldn't actually behave differently: we'll + * just log failures. + */ + walkdir(dirname, unlink_if_exists_fname, false, LOG); +} + /* * Open a temporary file that will disappear when we close it. 
* @@ -1432,53 +1536,52 @@ OpenTemporaryFile(bool interXact) DEFAULTTABLESPACE_OID, true); - /* Mark it for deletion at close */ - VfdCache[file].fdstate |= FD_TEMPORARY; + /* Mark it for deletion at close and temporary file size limit */ + VfdCache[file].fdstate |= FD_DELETE_AT_CLOSE | FD_TEMP_FILE_LIMIT; /* Register it with the current resource owner */ if (!interXact) - { - VfdCache[file].fdstate |= FD_XACT_TEMPORARY; - - VfdCache[file].resowner = CurrentResourceOwner; - ResourceOwnerRememberFile(CurrentResourceOwner, file); - - /* ensure cleanup happens at eoxact */ - have_xact_temporary_files = true; - } + RegisterTemporaryFile(file); return file; } /* - * Open a temporary file in a specific tablespace. - * Subroutine for OpenTemporaryFile, which see for details. + * Return the path of the temp directory in a given tablespace. */ -static File -OpenTemporaryFileInTablespace(Oid tblspcOid, bool rejectError) +void +TempTablespacePath(char *path, Oid tablespace) { - char tempdirpath[MAXPGPATH]; - char tempfilepath[MAXPGPATH]; - File file; - /* * Identify the tempfile directory for this tablespace. * * If someone tries to specify pg_global, use pg_default instead. */ - if (tblspcOid == DEFAULTTABLESPACE_OID || - tblspcOid == GLOBALTABLESPACE_OID) - { - /* The default tablespace is {datadir}/base */ - snprintf(tempdirpath, sizeof(tempdirpath), "base/%s", - PG_TEMP_FILES_DIR); - } + if (tablespace == InvalidOid || + tablespace == DEFAULTTABLESPACE_OID || + tablespace == GLOBALTABLESPACE_OID) + snprintf(path, MAXPGPATH, "base/%s", PG_TEMP_FILES_DIR); else { /* All other tablespaces are accessed via symlinks */ - snprintf(tempdirpath, sizeof(tempdirpath), "pg_tblspc/%u/%s/%s", - tblspcOid, TABLESPACE_VERSION_DIRECTORY, PG_TEMP_FILES_DIR); + snprintf(path, MAXPGPATH, "pg_tblspc/%u/%s/%s", + tablespace, TABLESPACE_VERSION_DIRECTORY, + PG_TEMP_FILES_DIR); } +} + +/* + * Open a temporary file in a specific tablespace. + * Subroutine for OpenTemporaryFile, which see for details. + */ +static File +OpenTemporaryFileInTablespace(Oid tblspcOid, bool rejectError) +{ + char tempdirpath[MAXPGPATH]; + char tempfilepath[MAXPGPATH]; + File file; + + TempTablespacePath(tempdirpath, tblspcOid); /* * Generate a tempfile name that should be unique within the current @@ -1515,6 +1618,130 @@ OpenTemporaryFileInTablespace(Oid tblspcOid, bool rejectError) return file; } + +/* + * Create a new file. The directory containing it must already exist. Files + * created this way are subject to temp_file_limit and are automatically + * closed at end of transaction, but are not automatically deleted on close + * because they are intended to be shared between cooperating backends. + * + * If the file is inside the top-level temporary directory, its name should + * begin with PG_TEMP_FILE_PREFIX so that it can be identified as temporary + * and deleted at startup by RemovePgTempFiles(). Alternatively, it can be + * inside a directory created with PathnameCreateTemporaryDir(), in which case + * the prefix isn't needed. + */ +File +PathNameCreateTemporaryFile(const char *path, bool error_on_failure) +{ + File file; + + ResourceOwnerEnlargeFiles(CurrentResourceOwner); + + /* + * Open the file. Note: we don't use O_EXCL, in case there is an orphaned + * temp file that can be reused. 
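+ * O_TRUNC ensures that any orphan we do reuse starts out empty rather + * than containing stale data.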
+ */ + file = PathNameOpenFile(path, O_RDWR | O_CREAT | O_TRUNC | PG_BINARY); + if (file <= 0) + { + if (error_on_failure) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not create temporary file \"%s\": %m", + path))); + else + return file; + } + + /* Mark it for temp_file_limit accounting. */ + VfdCache[file].fdstate |= FD_TEMP_FILE_LIMIT; + + /* Register it for automatic close. */ + RegisterTemporaryFile(file); + + return file; +} + +/* + * Open a file that was created with PathNameCreateTemporaryFile, possibly in + * another backend. Files opened this way don't count against the + * temp_file_limit of the caller, are read-only and are automatically closed + * at the end of the transaction but are not deleted on close. + */ +File +PathNameOpenTemporaryFile(const char *path) +{ + File file; + + ResourceOwnerEnlargeFiles(CurrentResourceOwner); + + /* We open the file read-only. */ + file = PathNameOpenFile(path, O_RDONLY | PG_BINARY); + + /* If no such file, then we don't raise an error. */ + if (file <= 0 && errno != ENOENT) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not open temporary file \"%s\": %m", + path))); + + if (file > 0) + { + /* Register it for automatic close. */ + RegisterTemporaryFile(file); + } + + return file; +} + +/* + * Delete a file by pathname. Return true if the file existed, false if + * didn't. + */ +bool +PathNameDeleteTemporaryFile(const char *path, bool error_on_failure) +{ + struct stat filestats; + int stat_errno; + + /* Get the final size for pgstat reporting. */ + if (stat(path, &filestats) != 0) + stat_errno = errno; + else + stat_errno = 0; + + /* + * Unlike FileClose's automatic file deletion code, we tolerate + * non-existence to support BufFileDeleteShared which doesn't know how + * many segments it has to delete until it runs out. + */ + if (stat_errno == ENOENT) + return false; + + if (unlink(path) < 0) + { + if (errno != ENOENT) + ereport(error_on_failure ? ERROR : LOG, + (errcode_for_file_access(), + errmsg("cannot unlink temporary file \"%s\": %m", + path))); + return false; + } + + if (stat_errno == 0) + ReportTemporaryFileUsage(path, filestats.st_size); + else + { + errno = stat_errno; + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not stat file \"%s\": %m", path))); + } + + return true; +} + /* * close a file when done with it */ @@ -1543,10 +1770,17 @@ FileClose(File file) Delete(file); } + if (vfdP->fdstate & FD_TEMP_FILE_LIMIT) + { + /* Subtract its size from current usage (do first in case of error) */ + temporary_files_size -= vfdP->fileSize; + vfdP->fileSize = 0; + } + /* * Delete the file if it was temporary, and make a log entry if wanted */ - if (vfdP->fdstate & FD_TEMPORARY) + if (vfdP->fdstate & FD_DELETE_AT_CLOSE) { struct stat filestats; int stat_errno; @@ -1558,11 +1792,8 @@ FileClose(File file) * is arranged to ensure that the worst-case consequence is failing to * emit log message(s), not failing to attempt the unlink. 
*/ - vfdP->fdstate &= ~FD_TEMPORARY; + vfdP->fdstate &= ~FD_DELETE_AT_CLOSE; - /* Subtract its size from current usage (do first in case of error) */ - temporary_files_size -= vfdP->fileSize; - vfdP->fileSize = 0; /* first try the stat() */ if (stat(vfdP->fileName, &filestats)) @@ -1576,18 +1807,7 @@ FileClose(File file) /* and last report the stat results */ if (stat_errno == 0) - { - pgstat_report_tempfile(filestats.st_size); - - if (log_temp_files >= 0) - { - if ((filestats.st_size / 1024) >= log_temp_files) - ereport(LOG, - (errmsg("temporary file: path \"%s\", size %lu", - vfdP->fileName, - (unsigned long) filestats.st_size))); - } - } + ReportTemporaryFileUsage(vfdP->fileName, filestats.st_size); else { errno = stat_errno; @@ -1761,7 +1981,7 @@ FileWrite(File file, char *buffer, int amount, uint32 wait_event_info) * message if we do that. All current callers would just throw error * immediately anyway, so this is safe at present. */ - if (temp_file_limit >= 0 && (vfdP->fdstate & FD_TEMPORARY)) + if (temp_file_limit >= 0 && (vfdP->fdstate & FD_TEMP_FILE_LIMIT)) { off_t newPos; @@ -1814,7 +2034,7 @@ FileWrite(File file, char *buffer, int amount, uint32 wait_event_info) * get here in that state if we're not enforcing temporary_files_size, * so we don't care. */ - if (vfdP->fdstate & FD_TEMPORARY) + if (vfdP->fdstate & FD_TEMP_FILE_LIMIT) { off_t newPos = vfdP->seekPos; @@ -1985,7 +2205,7 @@ FileTruncate(File file, off_t offset, uint32 wait_event_info) if (returnCode == 0 && VfdCache[file].fileSize > offset) { /* adjust our state for truncation of a temp file */ - Assert(VfdCache[file].fdstate & FD_TEMPORARY); + Assert(VfdCache[file].fdstate & FD_TEMP_FILE_LIMIT); temporary_files_size -= VfdCache[file].fileSize - offset; VfdCache[file].fileSize = offset; } @@ -2593,6 +2813,24 @@ TempTablespacesAreSet(void) return (numTempTableSpaces >= 0); } +/* + * GetTempTablespaces + * + * Populate an array with the OIDs of the tablespaces that should be used for + * temporary files. Return the number that were copied into the output array. 
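+ * + * (This is how SharedFileSetInit() captures the temp_tablespaces setting, so + * that all participating backends agree on where segment files are placed.)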
+ */ +int +GetTempTablespaces(Oid *tableSpaces, int numSpaces) +{ + int i; + + Assert(TempTablespacesAreSet()); + for (i = 0; i < numTempTableSpaces && i < numSpaces; ++i) + tableSpaces[i] = tempTableSpaces[i]; + + return i; +} + /* * GetNextTempTableSpace * @@ -2696,7 +2934,8 @@ CleanupTempFiles(bool isProcExit) { unsigned short fdstate = VfdCache[i].fdstate; - if ((fdstate & FD_TEMPORARY) && VfdCache[i].fileName != NULL) + if (((fdstate & FD_DELETE_AT_CLOSE) || (fdstate & FD_CLOSE_AT_EOXACT)) && + VfdCache[i].fileName != NULL) { /* * If we're in the process of exiting a backend process, close @@ -2707,7 +2946,7 @@ CleanupTempFiles(bool isProcExit) */ if (isProcExit) FileClose(i); - else if (fdstate & FD_XACT_TEMPORARY) + else if (fdstate & FD_CLOSE_AT_EOXACT) { elog(WARNING, "temporary file %s not closed at end-of-transaction", @@ -2751,7 +2990,7 @@ RemovePgTempFiles(void) * First process temp files in pg_default ($PGDATA/base) */ snprintf(temp_path, sizeof(temp_path), "base/%s", PG_TEMP_FILES_DIR); - RemovePgTempFilesInDir(temp_path); + RemovePgTempFilesInDir(temp_path, false); RemovePgTempRelationFiles("base"); /* @@ -2767,7 +3006,7 @@ RemovePgTempFiles(void) snprintf(temp_path, sizeof(temp_path), "pg_tblspc/%s/%s/%s", spc_de->d_name, TABLESPACE_VERSION_DIRECTORY, PG_TEMP_FILES_DIR); - RemovePgTempFilesInDir(temp_path); + RemovePgTempFilesInDir(temp_path, false); snprintf(temp_path, sizeof(temp_path), "pg_tblspc/%s/%s", spc_de->d_name, TABLESPACE_VERSION_DIRECTORY); @@ -2785,9 +3024,15 @@ RemovePgTempFiles(void) #endif } -/* Process one pgsql_tmp directory for RemovePgTempFiles */ +/* + * Process one pgsql_tmp directory for RemovePgTempFiles. At the top level in + * each tablespace, this should be called with unlink_all = false, so that + * only files matching the temporary name prefix will be unlinked. When + * recursing it will be called with unlink_all = true to unlink everything + * under a top-level temporary directory. 
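+ * + * For example, a top-level directory such as + * base/pgsql_tmp/pgsql_tmp1234.0.sharedfileset (created for a SharedFileSet + * by PID 1234) is removed recursively, while top-level entries that do not + * start with "pgsql_tmp" are still just reported as unexpected.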
+ */ static void -RemovePgTempFilesInDir(const char *tmpdirname) +RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all) { DIR *temp_dir; struct dirent *temp_de; @@ -2813,10 +3058,25 @@ RemovePgTempFilesInDir(const char *tmpdirname) snprintf(rm_path, sizeof(rm_path), "%s/%s", tmpdirname, temp_de->d_name); - if (strncmp(temp_de->d_name, + if (unlink_all || + strncmp(temp_de->d_name, PG_TEMP_FILE_PREFIX, strlen(PG_TEMP_FILE_PREFIX)) == 0) - unlink(rm_path); /* note we ignore any error */ + { + struct stat statbuf; + + /* note that we ignore any error here and below */ + if (lstat(rm_path, &statbuf) < 0) + continue; + + if (S_ISDIR(statbuf.st_mode)) + { + RemovePgTempFilesInDir(rm_path, true); + rmdir(rm_path); + } + else + unlink(rm_path); + } else elog(LOG, "unexpected file found in temporary-files directory: \"%s\"", @@ -3152,6 +3412,23 @@ datadir_fsync_fname(const char *fname, bool isdir, int elevel) fsync_fname_ext(fname, isdir, true, elevel); } +static void +unlink_if_exists_fname(const char *fname, bool isdir, int elevel) +{ + if (isdir) + { + if (rmdir(fname) != 0 && errno != ENOENT) + ereport(elevel, + (errcode_for_file_access(), + errmsg("could not rmdir directory \"%s\": %m", fname))); + } + else + { + /* Use PathNameDeleteTemporaryFile to report filesize */ + PathNameDeleteTemporaryFile(fname, false); + } +} + /* * fsync_fname_ext -- Try to fsync a file or directory * diff --git a/src/backend/storage/file/sharedfileset.c b/src/backend/storage/file/sharedfileset.c new file mode 100644 index 0000000000..343b2283f0 --- /dev/null +++ b/src/backend/storage/file/sharedfileset.c @@ -0,0 +1,244 @@ +/*------------------------------------------------------------------------- + * + * sharedfileset.c + * Shared temporary file management. + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/backend/storage/file/sharedfileset.c + * + * SharedFileSets provide a temporary namespace (think directory) so that + * files can be discovered by name, and shared ownership semantics so that + * shared files survive until the last user detaches. + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#include "access/hash.h" +#include "catalog/pg_tablespace.h" +#include "commands/tablespace.h" +#include "miscadmin.h" +#include "storage/dsm.h" +#include "storage/sharedfileset.h" +#include "utils/builtins.h" + +static void SharedFileSetOnDetach(dsm_segment *segment, Datum datum); +static void SharedFileSetPath(char *path, SharedFileSet *fileset, Oid tablespace); +static void SharedFilePath(char *path, SharedFileSet *fileset, const char *name); +static Oid ChooseTablespace(const SharedFileSet *fileset, const char *name); + +/* + * Initialize a space for temporary files that can be opened for read-only + * access by other backends. Other backends must attach to it before + * accessing it. Associate this SharedFileSet with 'seg'. Any contained + * files will be deleted when the last backend detaches. + * + * Files will be distributed over the tablespaces configured in + * temp_tablespaces. + * + * Under the covers the set is one or more directories which will eventually + * be deleted when there are no backends attached. 
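+ * + * A hypothetical usage sketch (the DSM segment plumbing is up to the caller): + * + * SharedFileSetInit(fileset, seg); ... in the leader + * SharedFileSetAttach(fileset, seg); ... in each other participant + * file = SharedFileSetCreate(fileset, name); ... in any attached backend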
+ */ +void +SharedFileSetInit(SharedFileSet *fileset, dsm_segment *seg) +{ + static uint32 counter = 0; + + SpinLockInit(&fileset->mutex); + fileset->refcnt = 1; + fileset->creator_pid = MyProcPid; + fileset->number = counter; + counter = (counter + 1) % INT_MAX; + + /* Capture the tablespace OIDs so that all backends agree on them. */ + PrepareTempTablespaces(); + fileset->ntablespaces = + GetTempTablespaces(&fileset->tablespaces[0], + lengthof(fileset->tablespaces)); + if (fileset->ntablespaces == 0) + { + fileset->tablespaces[0] = DEFAULTTABLESPACE_OID; + fileset->ntablespaces = 1; + } + + /* Register our cleanup callback. */ + on_dsm_detach(seg, SharedFileSetOnDetach, PointerGetDatum(fileset)); +} + +/* + * Attach to a set of directories that was created with SharedFileSetInit. + */ +void +SharedFileSetAttach(SharedFileSet *fileset, dsm_segment *seg) +{ + bool success; + + SpinLockAcquire(&fileset->mutex); + if (fileset->refcnt == 0) + success = false; + else + { + ++fileset->refcnt; + success = true; + } + SpinLockRelease(&fileset->mutex); + + if (!success) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("could not attach to a SharedFileSet that is already destroyed"))); + + /* Register our cleanup callback. */ + on_dsm_detach(seg, SharedFileSetOnDetach, PointerGetDatum(fileset)); +} + +/* + * Create a new file in the given set. + */ +File +SharedFileSetCreate(SharedFileSet *fileset, const char *name) +{ + char path[MAXPGPATH]; + File file; + + SharedFilePath(path, fileset, name); + file = PathNameCreateTemporaryFile(path, false); + + /* If we failed, see if we need to create the directory on demand. */ + if (file <= 0) + { + char tempdirpath[MAXPGPATH]; + char filesetpath[MAXPGPATH]; + Oid tablespace = ChooseTablespace(fileset, name); + + TempTablespacePath(tempdirpath, tablespace); + SharedFileSetPath(filesetpath, fileset, tablespace); + PathNameCreateTemporaryDir(tempdirpath, filesetpath); + file = PathNameCreateTemporaryFile(path, true); + } + + return file; +} + +/* + * Open a file that was created with SharedFileSetCreate(), possibly in + * another backend. + */ +File +SharedFileSetOpen(SharedFileSet *fileset, const char *name) +{ + char path[MAXPGPATH]; + File file; + + SharedFilePath(path, fileset, name); + file = PathNameOpenTemporaryFile(path); + + return file; +} + +/* + * Delete a file that was created with PathNameCreateShared(). + * Return true if the file existed, false if didn't. + */ +bool +SharedFileSetDelete(SharedFileSet *fileset, const char *name, + bool error_on_failure) +{ + char path[MAXPGPATH]; + + SharedFilePath(path, fileset, name); + + return PathNameDeleteTemporaryFile(path, error_on_failure); +} + +/* + * Delete all files in the set. + */ +void +SharedFileSetDeleteAll(SharedFileSet *fileset) +{ + char dirpath[MAXPGPATH]; + int i; + + /* + * Delete the directory we created in each tablespace. Doesn't fail + * because we use this in error cleanup paths, but can generate LOG + * message on IO error. + */ + for (i = 0; i < fileset->ntablespaces; ++i) + { + SharedFileSetPath(dirpath, fileset, fileset->tablespaces[i]); + PathNameDeleteTemporaryDir(dirpath); + } +} + +/* + * Callback function that will be invoked when this backend detaches from a + * DSM segment holding a SharedFileSet that it has created or attached to. If + * we are the last to detach, then try to remove the directories and + * everything in them. We can't raise an error on failures, because this runs + * in error cleanup paths. 
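+ * Failures are reported at LOG level instead; see + * PathNameDeleteTemporaryDir().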
+ */ +static void +SharedFileSetOnDetach(dsm_segment *segment, Datum datum) +{ + bool unlink_all = false; + SharedFileSet *fileset = (SharedFileSet *) DatumGetPointer(datum); + + SpinLockAcquire(&fileset->mutex); + Assert(fileset->refcnt > 0); + if (--fileset->refcnt == 0) + unlink_all = true; + SpinLockRelease(&fileset->mutex); + + /* + * If we are the last to detach, we delete the directory in all + * tablespaces. Note that we are still actually attached for the rest of + * this function so we can safely access its data. + */ + if (unlink_all) + SharedFileSetDeleteAll(fileset); +} + +/* + * Build the path for the directory holding the files backing a SharedFileSet + * in a given tablespace. + */ +static void +SharedFileSetPath(char *path, SharedFileSet *fileset, Oid tablespace) +{ + char tempdirpath[MAXPGPATH]; + + TempTablespacePath(tempdirpath, tablespace); + snprintf(path, MAXPGPATH, "%s/%s%d.%u.sharedfileset", + tempdirpath, PG_TEMP_FILE_PREFIX, + fileset->creator_pid, fileset->number); +} + +/* + * Sorting hat to determine which tablespace a given shared temporary file + * belongs in. + */ +static Oid +ChooseTablespace(const SharedFileSet *fileset, const char *name) +{ + uint32 hash = hash_any((const unsigned char *) name, strlen(name)); + + return fileset->tablespaces[hash % fileset->ntablespaces]; +} + +/* + * Compute the full path of a file in a SharedFileSet. + */ +static void +SharedFilePath(char *path, SharedFileSet *fileset, const char *name) +{ + char dirpath[MAXPGPATH]; + + SharedFileSetPath(dirpath, fileset, ChooseTablespace(fileset, name)); + snprintf(path, MAXPGPATH, "%s/%s", dirpath, name); +} diff --git a/src/include/storage/buffile.h b/src/include/storage/buffile.h index 640908717d..c3d7a61b64 100644 --- a/src/include/storage/buffile.h +++ b/src/include/storage/buffile.h @@ -26,6 +26,8 @@ #ifndef BUFFILE_H #define BUFFILE_H +#include "storage/sharedfileset.h" + /* BufFile is an opaque type whose details are not known outside buffile.c. 
*/ typedef struct BufFile BufFile; @@ -42,4 +44,9 @@ extern int BufFileSeek(BufFile *file, int fileno, off_t offset, int whence); extern void BufFileTell(BufFile *file, int *fileno, off_t *offset); extern int BufFileSeekBlock(BufFile *file, long blknum); +extern BufFile *BufFileCreateShared(SharedFileSet *fileset, const char *name); +extern void BufFileExportShared(BufFile *file); +extern BufFile *BufFileOpenShared(SharedFileSet *fileset, const char *name); +extern void BufFileDeleteShared(SharedFileSet *fileset, const char *name); + #endif /* BUFFILE_H */ diff --git a/src/include/storage/fd.h b/src/include/storage/fd.h index 6ea26e63b8..9829281509 100644 --- a/src/include/storage/fd.h +++ b/src/include/storage/fd.h @@ -79,6 +79,14 @@ extern int FileGetRawDesc(File file); extern int FileGetRawFlags(File file); extern mode_t FileGetRawMode(File file); +/* Operations used for sharing named temporary files */ +extern File PathNameCreateTemporaryFile(const char *name, bool error_on_failure); +extern File PathNameOpenTemporaryFile(const char *name); +extern bool PathNameDeleteTemporaryFile(const char *name, bool error_on_failure); +extern void PathNameCreateTemporaryDir(const char *base, const char *name); +extern void PathNameDeleteTemporaryDir(const char *name); +extern void TempTablespacePath(char *path, Oid tablespace); + /* Operations that allow use of regular stdio --- USE WITH CAUTION */ extern FILE *AllocateFile(const char *name, const char *mode); extern int FreeFile(FILE *file); @@ -107,6 +115,7 @@ extern void set_max_safe_fds(void); extern void closeAllVfds(void); extern void SetTempTablespaces(Oid *tableSpaces, int numSpaces); extern bool TempTablespacesAreSet(void); +extern int GetTempTablespaces(Oid *tableSpaces, int numSpaces); extern Oid GetNextTempTableSpace(void); extern void AtEOXact_Files(void); extern void AtEOSubXact_Files(bool isCommit, SubTransactionId mySubid, @@ -124,7 +133,7 @@ extern int durable_unlink(const char *fname, int loglevel); extern int durable_link_or_rename(const char *oldfile, const char *newfile, int loglevel); extern void SyncDataDirectory(void); -/* Filename components for OpenTemporaryFile */ +/* Filename components */ #define PG_TEMP_FILES_DIR "pgsql_tmp" #define PG_TEMP_FILE_PREFIX "pgsql_tmp" diff --git a/src/include/storage/sharedfileset.h b/src/include/storage/sharedfileset.h new file mode 100644 index 0000000000..20651bb93b --- /dev/null +++ b/src/include/storage/sharedfileset.h @@ -0,0 +1,45 @@ +/*------------------------------------------------------------------------- + * + * sharedfileset.h + * Shared temporary file management. + * + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/storage/sharedfileset.h + * + *------------------------------------------------------------------------- + */ + +#ifndef SHAREDFILESET_H +#define SHAREDFILESET_H + +#include "storage/dsm.h" +#include "storage/fd.h" +#include "storage/spin.h" + +/* + * A set of temporary files that can be shared by multiple backends. + */ +typedef struct SharedFileSet +{ + pid_t creator_pid; /* PID of the creating process */ + uint32 number; /* per-PID identifier */ + slock_t mutex; /* mutex protecting the reference count */ + int refcnt; /* number of attached backends */ + int ntablespaces; /* number of tablespaces to use */ + Oid tablespaces[8]; /* OIDs of tablespaces to use. Assumes that + * it's rare to have more than eight temp + * tablespaces. 
*/ +} SharedFileSet; + +extern void SharedFileSetInit(SharedFileSet *fileset, dsm_segment *seg); +extern void SharedFileSetAttach(SharedFileSet *fileset, dsm_segment *seg); +extern File SharedFileSetCreate(SharedFileSet *fileset, const char *name); +extern File SharedFileSetOpen(SharedFileSet *fileset, const char *name); +extern bool SharedFileSetDelete(SharedFileSet *fileset, const char *name, + bool error_on_failure); +extern void SharedFileSetDeleteAll(SharedFileSet *fileset); + +#endif diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index 3e84720038..72eb9fd390 100644 --- a/src/tools/pgindent/typedefs.list +++ b/src/tools/pgindent/typedefs.list @@ -2026,6 +2026,7 @@ SharedBitmapState SharedDependencyObjectType SharedDependencyType SharedExecutorInstrumentation +SharedFileSet SharedInvalCatalogMsg SharedInvalCatcacheMsg SharedInvalRelcacheMsg From ec6a04005618c206163761e5739a8b90debd6b1e Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 1 Dec 2017 17:28:05 -0800 Subject: [PATCH 0650/1087] Adjust #ifdef EXEC_BACKEND RemovePgTempFilesInDir() call. Other callers were adjusted in the course of dc6c4c9dc2a111519b76b22daaaac86c5608223b. Per buildfarm. --- src/backend/storage/file/fd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index 2e93e4ad63..ecd6d85270 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -3020,7 +3020,7 @@ RemovePgTempFiles(void) * DataDir as well. */ #ifdef EXEC_BACKEND - RemovePgTempFilesInDir(PG_TEMP_FILES_DIR); + RemovePgTempFilesInDir(PG_TEMP_FILES_DIR, false); #endif } From a852cfe96752b25c2deaa2653cffd60c0ec82ead Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 3 Dec 2017 11:25:17 -0500 Subject: [PATCH 0651/1087] Fix uninitialized-variable compiler warning induced by commit e4128ee76. I'm a little bit astonished that anyone's compiler would have failed to complain about this. The compiler surely does not know that is_procedure means the function return value will be ignored. --- src/pl/plpython/plpy_exec.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c index 4594a08ead..1e0f3d9d3a 100644 --- a/src/pl/plpython/plpy_exec.c +++ b/src/pl/plpython/plpy_exec.c @@ -210,6 +210,8 @@ PLy_exec_function(FunctionCallInfo fcinfo, PLyProcedure *proc) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("PL/Python procedure did not return None"))); + fcinfo->isnull = false; + rv = (Datum) 0; } else if (proc->result.typoid == VOIDOID) { From 9f4992e2a9939a4c3d560c2ac58067861ee0029a Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 4 Dec 2017 10:33:09 -0500 Subject: [PATCH 0652/1087] Remove memory leak protection from Gather and Gather Merge nodes. Before commit 6b65a7fe62e129d5c2b85cd74d6a91d8f7564608, tqueue.c could perform tuple remapping and thus leak memory, which is why commit af33039317ddc4a0e38a02e2255c2bf453115fd2 made TupleQueueReaderNext run in a short-lived context. Now, however, tqueue.c has been reduced to a shadow of its former self, and there shouldn't be any chance of leaks any more. Accordingly, remove some tuple copying and memory context manipulation to speed up processing. Patch by me, reviewed by Amit Kapila. Some testing by Rafia Sabih. 
Discussion: http://postgr.es/m/CAA4eK1LSDydwrNjmYSNkfJ3ZivGSWH9SVswh6QpNzsMdj_oOQA@mail.gmail.com --- src/backend/executor/nodeGather.c | 14 ++------------ src/backend/executor/nodeGatherMerge.c | 10 +--------- src/backend/executor/tqueue.c | 2 ++ 3 files changed, 5 insertions(+), 21 deletions(-) diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 212612b535..a44cf8409a 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -131,7 +131,6 @@ static TupleTableSlot * ExecGather(PlanState *pstate) { GatherState *node = castNode(GatherState, pstate); - TupleTableSlot *fslot = node->funnel_slot; TupleTableSlot *slot; ExprContext *econtext; @@ -205,11 +204,8 @@ ExecGather(PlanState *pstate) /* * Reset per-tuple memory context to free any expression evaluation - * storage allocated in the previous tuple cycle. This will also clear - * any previous tuple returned by a TupleQueueReader; to make sure we - * don't leave a dangling pointer around, clear the working slot first. + * storage allocated in the previous tuple cycle. */ - ExecClearTuple(fslot); econtext = node->ps.ps_ExprContext; ResetExprContext(econtext); @@ -258,7 +254,6 @@ gather_getnext(GatherState *gatherstate) PlanState *outerPlan = outerPlanState(gatherstate); TupleTableSlot *outerTupleSlot; TupleTableSlot *fslot = gatherstate->funnel_slot; - MemoryContext tupleContext = gatherstate->ps.ps_ExprContext->ecxt_per_tuple_memory; HeapTuple tup; while (gatherstate->nreaders > 0 || gatherstate->need_to_scan_locally) @@ -267,12 +262,7 @@ gather_getnext(GatherState *gatherstate) if (gatherstate->nreaders > 0) { - MemoryContext oldContext; - - /* Run TupleQueueReaders in per-tuple context */ - oldContext = MemoryContextSwitchTo(tupleContext); tup = gather_readnext(gatherstate); - MemoryContextSwitchTo(oldContext); if (HeapTupleIsValid(tup)) { @@ -280,7 +270,7 @@ gather_getnext(GatherState *gatherstate) fslot, /* slot in which to store the tuple */ InvalidBuffer, /* buffer associated with this * tuple */ - false); /* slot should not pfree tuple */ + true); /* pfree tuple when done with it */ return fslot; } } diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 166f2064ff..4a8a59eabf 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -609,7 +609,7 @@ load_tuple_array(GatherMergeState *gm_state, int reader) &tuple_buffer->done); if (!HeapTupleIsValid(tuple)) break; - tuple_buffer->tuple[i] = heap_copytuple(tuple); + tuple_buffer->tuple[i] = tuple; tuple_buffer->nTuples++; } } @@ -673,7 +673,6 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) &tuple_buffer->done); if (!HeapTupleIsValid(tup)) return false; - tup = heap_copytuple(tup); /* * Attempt to read more tuples in nowait mode and store them in the @@ -703,20 +702,13 @@ gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, { TupleQueueReader *reader; HeapTuple tup; - MemoryContext oldContext; - MemoryContext tupleContext; /* Check for async events, particularly messages from workers. */ CHECK_FOR_INTERRUPTS(); /* Attempt to read a tuple. 
*/ reader = gm_state->reader[nreader - 1]; - - /* Run TupleQueueReaders in per-tuple context */ - tupleContext = gm_state->ps.ps_ExprContext->ecxt_per_tuple_memory; - oldContext = MemoryContextSwitchTo(tupleContext); tup = TupleQueueReaderNext(reader, nowait, done); - MemoryContextSwitchTo(oldContext); return tup; } diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c index 4a295c936b..0dcb911c3c 100644 --- a/src/backend/executor/tqueue.c +++ b/src/backend/executor/tqueue.c @@ -161,6 +161,8 @@ DestroyTupleQueueReader(TupleQueueReader *reader) * is set to true when there are no remaining tuples and otherwise to false. * * The returned tuple, if any, is allocated in CurrentMemoryContext. + * Note that this routine must not leak memory! (We used to allow that, + * but not any more.) * * Even when shm_mq_receive() returns SHM_MQ_WOULD_BLOCK, this can still * accumulate bytes from a partially-read message, so it's useful to call From ecc27d55f4c37a8485a7d0e1dae0eb5ef2bc886e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 4 Dec 2017 11:51:43 -0500 Subject: [PATCH 0653/1087] Support boolean columns in functional-dependency statistics. There's no good reason that the multicolumn stats stuff shouldn't work on booleans. But it looked only for "Var = pseudoconstant" clauses, and it will seldom find those for boolean Vars, since earlier phases of planning will fold "boolvar = true" or "boolvar = false" to just "boolvar" or "NOT boolvar" respectively. Improve dependencies_clauselist_selectivity() to recognize such clauses as equivalent to equality restrictions. This fixes a failure of the extended stats mechanism to apply in a case reported by Vitaliy Garnashevich. It's not a complete solution to his problem because the bitmap-scan costing code isn't consulting extended stats where it should, but that's surely an independent issue. In passing, improve some comments, get rid of a NumRelids() test that's redundant with the preceding bms_membership() test, and fix dependencies_clauselist_selectivity() so that estimatedclauses actually is a pure output argument as stated by its API contract. Back-patch to v10 where this code was introduced. Discussion: https://postgr.es/m/73a4936d-2814-dc08-ed0c-978f76f435b0@gmail.com --- src/backend/statistics/dependencies.c | 110 +++++++++++++++----------- 1 file changed, 63 insertions(+), 47 deletions(-) diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c index 9756fb83c0..ae0f304dba 100644 --- a/src/backend/statistics/dependencies.c +++ b/src/backend/statistics/dependencies.c @@ -736,91 +736,104 @@ pg_dependencies_send(PG_FUNCTION_ARGS) * dependency_is_compatible_clause * Determines if the clause is compatible with functional dependencies * - * Only OpExprs with two arguments using an equality operator are supported. - * When returning True attnum is set to the attribute number of the Var within - * the supported clause. - * - * Currently we only support Var = Const, or Const = Var. It may be possible - * to expand on this later. + * Only clauses that have the form of equality to a pseudoconstant, or can be + * interpreted that way, are currently accepted. Furthermore the variable + * part of the clause must be a simple Var belonging to the specified + * relation, whose attribute number we return in *attnum on success. 
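+ * + * For example, "x = 42" and "42 = x" are accepted, and for a boolean column + * b, bare "b" and "NOT b" are interpreted as "b = true" and "b = false" + * respectively; clauses such as "x < 42" or "x = y" are not accepted.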
*/ static bool dependency_is_compatible_clause(Node *clause, Index relid, AttrNumber *attnum) { RestrictInfo *rinfo = (RestrictInfo *) clause; + Var *var; if (!IsA(rinfo, RestrictInfo)) return false; - /* Pseudoconstants are not really interesting here. */ + /* Pseudoconstants are not interesting (they couldn't contain a Var) */ if (rinfo->pseudoconstant) return false; - /* clauses referencing multiple varnos are incompatible */ + /* Clauses referencing multiple, or no, varnos are incompatible */ if (bms_membership(rinfo->clause_relids) != BMS_SINGLETON) return false; if (is_opclause(rinfo->clause)) { + /* If it's an opclause, check for Var = Const or Const = Var. */ OpExpr *expr = (OpExpr *) rinfo->clause; - Var *var; - bool varonleft = true; - bool ok; - /* Only expressions with two arguments are considered compatible. */ + /* Only expressions with two arguments are candidates. */ if (list_length(expr->args) != 2) return false; - /* see if it actually has the right */ - ok = (NumRelids((Node *) expr) == 1) && - (is_pseudo_constant_clause(lsecond(expr->args)) || - (varonleft = false, - is_pseudo_constant_clause(linitial(expr->args)))); - - /* unsupported structure (two variables or so) */ - if (!ok) + /* Make sure non-selected argument is a pseudoconstant. */ + if (is_pseudo_constant_clause(lsecond(expr->args))) + var = linitial(expr->args); + else if (is_pseudo_constant_clause(linitial(expr->args))) + var = lsecond(expr->args); + else return false; /* - * If it's not "=" operator, just ignore the clause, as it's not + * If it's not an "=" operator, just ignore the clause, as it's not * compatible with functional dependencies. * * This uses the function for estimating selectivity, not the operator * directly (a bit awkward, but well ...). + * + * XXX this is pretty dubious; probably it'd be better to check btree + * or hash opclass membership, so as not to be fooled by custom + * selectivity functions, and to be more consistent with decisions + * elsewhere in the planner. */ if (get_oprrest(expr->opno) != F_EQSEL) return false; - var = (varonleft) ? linitial(expr->args) : lsecond(expr->args); - + /* OK to proceed with checking "var" */ + } + else if (not_clause((Node *) rinfo->clause)) + { /* - * We may ignore any RelabelType node above the operand. (There won't - * be more than one, since eval_const_expressions() has been applied - * already.) + * "NOT x" can be interpreted as "x = false", so get the argument and + * proceed with seeing if it's a suitable Var. */ - if (IsA(var, RelabelType)) - var = (Var *) ((RelabelType *) var)->arg; + var = (Var *) get_notclausearg(rinfo->clause); + } + else + { + /* + * A boolean expression "x" can be interpreted as "x = true", so + * proceed with seeing if it's a suitable Var. + */ + var = (Var *) rinfo->clause; + } - /* We only support plain Vars for now */ - if (!IsA(var, Var)) - return false; + /* + * We may ignore any RelabelType node above the operand. (There won't be + * more than one, since eval_const_expressions has been applied already.) 
+ */ + if (IsA(var, RelabelType)) + var = (Var *) ((RelabelType *) var)->arg; - /* Ensure var is from the correct relation */ - if (var->varno != relid) - return false; + /* We only support plain Vars for now */ + if (!IsA(var, Var)) + return false; - /* we also better ensure the Var is from the current level */ - if (var->varlevelsup > 0) - return false; + /* Ensure Var is from the correct relation */ + if (var->varno != relid) + return false; - /* Also skip system attributes (we don't allow stats on those). */ - if (!AttrNumberIsForUserDefinedAttr(var->varattno)) - return false; + /* We also better ensure the Var is from the current level */ + if (var->varlevelsup != 0) + return false; - *attnum = var->varattno; - return true; - } + /* Also ignore system attributes (we don't allow stats on those) */ + if (!AttrNumberIsForUserDefinedAttr(var->varattno)) + return false; - return false; + *attnum = var->varattno; + return true; } /* @@ -891,12 +904,12 @@ find_strongest_dependency(StatisticExtInfo *stats, MVDependencies *dependencies, /* * dependencies_clauselist_selectivity - * Return the estimated selectivity of the given clauses using - * functional dependency statistics, or 1.0 if no useful functional + * Return the estimated selectivity of (a subset of) the given clauses + * using functional dependency statistics, or 1.0 if no useful functional * dependency statistic exists. * * 'estimatedclauses' is an output argument that gets a bit set corresponding - * to the (zero-based) list index of clauses that are included in the + * to the (zero-based) list index of each clause that is included in the * estimated selectivity. * * Given equality clauses on attributes (a,b) we find the strongest dependency @@ -932,6 +945,9 @@ dependencies_clauselist_selectivity(PlannerInfo *root, AttrNumber *list_attnums; int listidx; + /* initialize output argument */ + *estimatedclauses = NULL; + /* check if there's any stats that might be useful for us. */ if (!has_stats_of_kind(rel->statlist, STATS_EXT_DEPENDENCIES)) return 1.0; From ab6eaee88420db58a948849d5a735997728d73a9 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 4 Dec 2017 15:23:36 -0500 Subject: [PATCH 0654/1087] When VACUUM or ANALYZE skips a concurrently dropped table, log it. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Hopefully, the additional logging will help avoid confusion that could otherwise result. Nathan Bossart, reviewed by Michael Paquier, Fabrízio Mello, and me --- doc/src/sgml/config.sgml | 4 +- src/backend/commands/analyze.c | 46 +++++++++-- src/backend/commands/vacuum.c | 49 ++++++++++-- .../expected/vacuum-concurrent-drop.out | 76 +++++++++++++++++++ src/test/isolation/isolation_schedule | 1 + .../specs/vacuum-concurrent-drop.spec | 45 +++++++++++ 6 files changed, 208 insertions(+), 13 deletions(-) create mode 100644 src/test/isolation/expected/vacuum-concurrent-drop.out create mode 100644 src/test/isolation/specs/vacuum-concurrent-drop.spec diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 3060597011..563ad1fc7f 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -5926,8 +5926,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; 250ms then all automatic vacuums and analyzes that run 250ms or longer will be logged. In addition, when this parameter is set to any value other than -1, a message will be - logged if an autovacuum action is skipped due to the existence of a - conflicting lock. 
Enabling this parameter can be helpful + logged if an autovacuum action is skipped due to a conflicting lock or a + concurrently dropped relation. Enabling this parameter can be helpful in tracking autovacuum activity. This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c index 760d19142e..f952b3c732 100644 --- a/src/backend/commands/analyze.c +++ b/src/backend/commands/analyze.c @@ -120,6 +120,7 @@ analyze_rel(Oid relid, RangeVar *relation, int options, int elevel; AcquireSampleRowsFunc acquirefunc = NULL; BlockNumber relpages = 0; + bool rel_lock = true; /* Select logging level */ if (options & VACOPT_VERBOSE) @@ -149,15 +150,50 @@ analyze_rel(Oid relid, RangeVar *relation, int options, else { onerel = NULL; - if (relation && - IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0) - ereport(LOG, + rel_lock = false; + } + + /* + * If we failed to open or lock the relation, emit a log message before + * exiting. + */ + if (!onerel) + { + /* + * If the RangeVar is not defined, we do not have enough information + * to provide a meaningful log statement. Chances are that + * analyze_rel's caller has intentionally not provided this + * information so that this logging is skipped, anyway. + */ + if (relation == NULL) + return; + + /* + * Determine the log level. For autovacuum logs, we emit a LOG if + * log_autovacuum_min_duration is not disabled. For manual ANALYZE, + * we emit a WARNING to match the log statements in the permissions + * checks. + */ + if (!IsAutoVacuumWorkerProcess()) + elevel = WARNING; + else if (params->log_min_duration >= 0) + elevel = LOG; + else + return; + + if (!rel_lock) + ereport(elevel, (errcode(ERRCODE_LOCK_NOT_AVAILABLE), errmsg("skipping analyze of \"%s\" --- lock not available", relation->relname))); - } - if (!onerel) + else + ereport(elevel, + (errcode(ERRCODE_UNDEFINED_TABLE), + errmsg("skipping analyze of \"%s\" --- relation no longer exists", + relation->relname))); + return; + } /* * Check permissions --- this should match vacuum's check! diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c index cbd6e9b161..4abe6b15e0 100644 --- a/src/backend/commands/vacuum.c +++ b/src/backend/commands/vacuum.c @@ -1330,6 +1330,7 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) Oid save_userid; int save_sec_context; int save_nestlevel; + bool rel_lock = true; Assert(params != NULL); @@ -1400,16 +1401,52 @@ vacuum_rel(Oid relid, RangeVar *relation, int options, VacuumParams *params) else { onerel = NULL; - if (relation && - IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0) - ereport(LOG, - (errcode(ERRCODE_LOCK_NOT_AVAILABLE), - errmsg("skipping vacuum of \"%s\" --- lock not available", - relation->relname))); + rel_lock = false; } + /* + * If we failed to open or lock the relation, emit a log message before + * exiting. + */ if (!onerel) { + int elevel = 0; + + /* + * Determine the log level. + * + * If the RangeVar is not defined, we do not have enough information + * to provide a meaningful log statement. Chances are that + * vacuum_rel's caller has intentionally not provided this information + * so that this logging is skipped, anyway. + * + * Otherwise, for autovacuum logs, we emit a LOG if + * log_autovacuum_min_duration is not disabled. 
For manual VACUUM, we + * emit a WARNING to match the log statements in the permission + * checks. + */ + if (relation != NULL) + { + if (!IsAutoVacuumWorkerProcess()) + elevel = WARNING; + else if (params->log_min_duration >= 0) + elevel = LOG; + } + + if (elevel != 0) + { + if (!rel_lock) + ereport(elevel, + (errcode(ERRCODE_LOCK_NOT_AVAILABLE), + errmsg("skipping vacuum of \"%s\" --- lock not available", + relation->relname))); + else + ereport(elevel, + (errcode(ERRCODE_UNDEFINED_TABLE), + errmsg("skipping vacuum of \"%s\" --- relation no longer exists", + relation->relname))); + } + PopActiveSnapshot(); CommitTransactionCommand(); return false; diff --git a/src/test/isolation/expected/vacuum-concurrent-drop.out b/src/test/isolation/expected/vacuum-concurrent-drop.out new file mode 100644 index 0000000000..72d80a1de1 --- /dev/null +++ b/src/test/isolation/expected/vacuum-concurrent-drop.out @@ -0,0 +1,76 @@ +Parsed test spec with 2 sessions + +starting permutation: lock vac_specified drop_and_commit +step lock: + BEGIN; + LOCK test1 IN SHARE MODE; + +step vac_specified: VACUUM test1, test2; +step drop_and_commit: + DROP TABLE test2; + COMMIT; + +WARNING: skipping vacuum of "test2" --- relation no longer exists +step vac_specified: <... completed> + +starting permutation: lock vac_all drop_and_commit +step lock: + BEGIN; + LOCK test1 IN SHARE MODE; + +step vac_all: VACUUM; +step drop_and_commit: + DROP TABLE test2; + COMMIT; + +step vac_all: <... completed> + +starting permutation: lock analyze_specified drop_and_commit +step lock: + BEGIN; + LOCK test1 IN SHARE MODE; + +step analyze_specified: ANALYZE test1, test2; +step drop_and_commit: + DROP TABLE test2; + COMMIT; + +WARNING: skipping analyze of "test2" --- relation no longer exists +step analyze_specified: <... completed> + +starting permutation: lock analyze_all drop_and_commit +step lock: + BEGIN; + LOCK test1 IN SHARE MODE; + +step analyze_all: ANALYZE; +step drop_and_commit: + DROP TABLE test2; + COMMIT; + +step analyze_all: <... completed> + +starting permutation: lock vac_analyze_specified drop_and_commit +step lock: + BEGIN; + LOCK test1 IN SHARE MODE; + +step vac_analyze_specified: VACUUM ANALYZE test1, test2; +step drop_and_commit: + DROP TABLE test2; + COMMIT; + +WARNING: skipping vacuum of "test2" --- relation no longer exists +step vac_analyze_specified: <... completed> + +starting permutation: lock vac_analyze_all drop_and_commit +step lock: + BEGIN; + LOCK test1 IN SHARE MODE; + +step vac_analyze_all: VACUUM ANALYZE; +step drop_and_commit: + DROP TABLE test2; + COMMIT; + +step vac_analyze_all: <... completed> diff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule index 32c965b2a0..e41b9164cd 100644 --- a/src/test/isolation/isolation_schedule +++ b/src/test/isolation/isolation_schedule @@ -62,3 +62,4 @@ test: sequence-ddl test: async-notify test: vacuum-reltuples test: timeouts +test: vacuum-concurrent-drop diff --git a/src/test/isolation/specs/vacuum-concurrent-drop.spec b/src/test/isolation/specs/vacuum-concurrent-drop.spec new file mode 100644 index 0000000000..31fc161e02 --- /dev/null +++ b/src/test/isolation/specs/vacuum-concurrent-drop.spec @@ -0,0 +1,45 @@ +# Test for log messages emitted by VACUUM and ANALYZE when a specified +# relation is concurrently dropped. +# +# This also verifies that log messages are not emitted for concurrently +# dropped relations that were not specified in the VACUUM or ANALYZE +# command. 
+ +setup +{ + CREATE TABLE test1 (a INT); + CREATE TABLE test2 (a INT); +} + +teardown +{ + DROP TABLE IF EXISTS test1; + DROP TABLE IF EXISTS test2; +} + +session "s1" +step "lock" +{ + BEGIN; + LOCK test1 IN SHARE MODE; +} +step "drop_and_commit" +{ + DROP TABLE test2; + COMMIT; +} + +session "s2" +step "vac_specified" { VACUUM test1, test2; } +step "vac_all" { VACUUM; } +step "analyze_specified" { ANALYZE test1, test2; } +step "analyze_all" { ANALYZE; } +step "vac_analyze_specified" { VACUUM ANALYZE test1, test2; } +step "vac_analyze_all" { VACUUM ANALYZE; } + +permutation "lock" "vac_specified" "drop_and_commit" +permutation "lock" "vac_all" "drop_and_commit" +permutation "lock" "analyze_specified" "drop_and_commit" +permutation "lock" "analyze_all" "drop_and_commit" +permutation "lock" "vac_analyze_specified" "drop_and_commit" +permutation "lock" "vac_analyze_all" "drop_and_commit" From 2069e6faa0f72eba968714b2260cd65436d0ef3a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 4 Dec 2017 17:02:52 -0500 Subject: [PATCH 0655/1087] Clean up assorted messiness around AllocateDir() usage. This patch fixes a couple of low-probability bugs that could lead to reporting an irrelevant errno value (and hence possibly a wrong SQLSTATE) concerning directory-open or file-open failures. It also fixes places where we took shortcuts in reporting such errors, either by using elog instead of ereport or by using ereport but forgetting to specify an errcode. And it eliminates a lot of just plain redundant error-handling code. In service of all this, export fd.c's formerly-static function ReadDirExtended, so that external callers can make use of the coding pattern dir = AllocateDir(path); while ((de = ReadDirExtended(dir, path, LOG)) != NULL) if they'd like to treat directory-open failures as mere LOG conditions rather than errors. Also fix FreeDir to be a no-op if we reach it with dir == NULL, as such a coding pattern would cause. Then, remove code at many call sites that was throwing an error or log message for AllocateDir failure, as ReadDir or ReadDirExtended can handle that job just fine. Aside from being a net code savings, this gets rid of a lot of not-quite-up-to-snuff reports, as mentioned above. (In some places these changes result in replacing a custom error message such as "could not open tablespace directory" with more generic wording "could not open directory", but it was agreed that the custom wording buys little as long as we report the directory name.) In some other call sites where we can't just remove code, change the error reports to be fully project-style-compliant. Also reorder code in restoreTwoPhaseData that was acquiring a lock between AllocateDir and ReadDir; in the unlikely but surely not impossible case that LWLockAcquire changes errno, AllocateDir failures would be misreported. There is no great value in opening the directory before acquiring TwoPhaseStateLock, so just do it in the other order. Also fix CheckXLogRemoved to guarantee that it preserves errno, as quite a number of call sites are implicitly assuming. (Again, it's unlikely but I think not impossible that errno could change during a SpinLockAcquire. If so, this function was broken for its own purposes as well as breaking callers.) And change a few places that were using not-per-project-style messages, such as "could not read directory" when "could not open directory" is more correct. 
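To spell out the ReadDirExtended coding pattern mentioned above a bit more
fully, a caller wanting LOG-and-continue semantics would look about like
this (a minimal sketch, not an excerpt from this patch; "path" and the
loop body are placeholders):

    DIR        *dir;
    struct dirent *de;

    dir = AllocateDir(path);
    while ((de = ReadDirExtended(dir, path, LOG)) != NULL)
    {
        /* ... process de->d_name ... */
    }
    FreeDir(dir);    /* safe: FreeDir is now a no-op when dir == NULL */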
Back-patch the exporting of ReadDirExtended, in case we have occasion to back-patch some fix that makes use of it; it's not needed right now but surely making it global is pretty harmless. Also back-patch the restoreTwoPhaseData and CheckXLogRemoved fixes. The rest of this is essentially cosmetic and need not get back-patched. Michael Paquier, with a bit of additional work by me Discussion: https://postgr.es/m/CAB7nPqRpOCxjiirHmebEFhXVTK7V5Jvw4bz82p7Oimtsm3TyZA@mail.gmail.com --- contrib/adminpack/adminpack.c | 2 +- src/backend/access/transam/twophase.c | 2 +- src/backend/access/transam/xlog.c | 31 +++++------- src/backend/access/transam/xlogfuncs.c | 4 +- src/backend/postmaster/pgarch.c | 5 -- src/backend/replication/basebackup.c | 7 ++- src/backend/storage/file/copydir.c | 8 --- src/backend/storage/file/fd.c | 69 ++++++++++++++------------ src/backend/storage/file/reinit.c | 35 +++++++------ src/backend/storage/ipc/dsm.c | 17 +------ src/backend/utils/adt/dbsize.c | 5 -- src/backend/utils/adt/genfile.c | 2 +- src/backend/utils/adt/misc.c | 15 ++---- src/backend/utils/cache/relcache.c | 16 +----- src/backend/utils/time/snapmgr.c | 26 +++++----- src/include/storage/fd.h | 2 + src/timezone/pgtz.c | 9 +--- 17 files changed, 102 insertions(+), 153 deletions(-) diff --git a/contrib/adminpack/adminpack.c b/contrib/adminpack/adminpack.c index f3f8e7f1e4..e46a64a898 100644 --- a/contrib/adminpack/adminpack.c +++ b/contrib/adminpack/adminpack.c @@ -319,7 +319,7 @@ pg_logdir_ls(PG_FUNCTION_ARGS) if (!fctx->dirdesc) ereport(ERROR, (errcode_for_file_access(), - errmsg("could not read directory \"%s\": %m", + errmsg("could not open directory \"%s\": %m", fctx->location))); funcctx->user_fctx = fctx; diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c index b715152e8d..321da9f5f6 100644 --- a/src/backend/access/transam/twophase.c +++ b/src/backend/access/transam/twophase.c @@ -1735,8 +1735,8 @@ restoreTwoPhaseData(void) DIR *cldir; struct dirent *clde; - cldir = AllocateDir(TWOPHASE_DIR); LWLockAcquire(TwoPhaseStateLock, LW_EXCLUSIVE); + cldir = AllocateDir(TWOPHASE_DIR); while ((clde = ReadDir(cldir, TWOPHASE_DIR)) != NULL) { if (strlen(clde->d_name) == 8 && diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index fba201f659..a11406c741 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -3764,10 +3764,16 @@ PreallocXlogFiles(XLogRecPtr endptr) * existed while the server has been running, as this function always * succeeds if no WAL segments have been removed since startup. * 'tli' is only used in the error message. + * + * Note: this function guarantees to keep errno unchanged on return. + * This supports callers that use this to possibly deliver a better + * error message about a missing file, while still being able to throw + * a normal file-access error afterwards, if this does return. 
*/ void CheckXLogRemoved(XLogSegNo segno, TimeLineID tli) { + int save_errno = errno; XLogSegNo lastRemovedSegNo; SpinLockAcquire(&XLogCtl->info_lck); @@ -3779,11 +3785,13 @@ CheckXLogRemoved(XLogSegNo segno, TimeLineID tli) char filename[MAXFNAMELEN]; XLogFileName(filename, tli, segno, wal_segment_size); + errno = save_errno; ereport(ERROR, (errcode_for_file_access(), errmsg("requested WAL segment %s has already been removed", filename))); } + errno = save_errno; } /* @@ -3837,13 +3845,6 @@ RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr) struct dirent *xlde; char lastoff[MAXFNAMELEN]; - xldir = AllocateDir(XLOGDIR); - if (xldir == NULL) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open write-ahead log directory \"%s\": %m", - XLOGDIR))); - /* * Construct a filename of the last segment to be kept. The timeline ID * doesn't matter, we ignore that in the comparison. (During recovery, @@ -3854,6 +3855,8 @@ RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr PriorRedoPtr, XLogRecPtr endptr) elog(DEBUG2, "attempting to remove WAL segments older than log file %s", lastoff); + xldir = AllocateDir(XLOGDIR); + while ((xlde = ReadDir(xldir, XLOGDIR)) != NULL) { /* Ignore files that are not XLOG segments */ @@ -3912,13 +3915,6 @@ RemoveNonParentXlogFiles(XLogRecPtr switchpoint, TimeLineID newTLI) XLByteToPrevSeg(switchpoint, endLogSegNo, wal_segment_size); - xldir = AllocateDir(XLOGDIR); - if (xldir == NULL) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open write-ahead log directory \"%s\": %m", - XLOGDIR))); - /* * Construct a filename of the last segment to be kept. */ @@ -3927,6 +3923,8 @@ RemoveNonParentXlogFiles(XLogRecPtr switchpoint, TimeLineID newTLI) elog(DEBUG2, "attempting to remove WAL segments newer than log file %s", switchseg); + xldir = AllocateDir(XLOGDIR); + while ((xlde = ReadDir(xldir, XLOGDIR)) != NULL) { /* Ignore files that are not XLOG segments */ @@ -4108,11 +4106,6 @@ CleanupBackupHistory(void) char path[MAXPGPATH + sizeof(XLOGDIR)]; xldir = AllocateDir(XLOGDIR); - if (xldir == NULL) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open write-ahead log directory \"%s\": %m", - XLOGDIR))); while ((xlde = ReadDir(xldir, XLOGDIR)) != NULL) { diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c index 443ccd6411..48d85c1ce5 100644 --- a/src/backend/access/transam/xlogfuncs.c +++ b/src/backend/access/transam/xlogfuncs.c @@ -89,7 +89,9 @@ pg_start_backup(PG_FUNCTION_ARGS) dir = AllocateDir("pg_tblspc"); if (!dir) ereport(ERROR, - (errmsg("could not open directory \"%s\": %m", "pg_tblspc"))); + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + "pg_tblspc"))); if (exclusive) { diff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c index 1c6cf83f8c..3d8c02e865 100644 --- a/src/backend/postmaster/pgarch.c +++ b/src/backend/postmaster/pgarch.c @@ -673,11 +673,6 @@ pgarch_readyXlog(char *xlog) snprintf(XLogArchiveStatusDir, MAXPGPATH, XLOGDIR "/archive_status"); rldir = AllocateDir(XLogArchiveStatusDir); - if (rldir == NULL) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open archive status directory \"%s\": %m", - XLogArchiveStatusDir))); while ((rlde = ReadDir(rldir, XLogArchiveStatusDir)) != NULL) { diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c index ebb8fde3bc..b264b69aef 100644 --- a/src/backend/replication/basebackup.c +++ 
b/src/backend/replication/basebackup.c @@ -367,9 +367,6 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir) XLogFileName(lastoff, ThisTimeLineID, endsegno, wal_segment_size); dir = AllocateDir("pg_wal"); - if (!dir) - ereport(ERROR, - (errmsg("could not open directory \"%s\": %m", "pg_wal"))); while ((de = ReadDir(dir, "pg_wal")) != NULL) { /* Does it look like a WAL segment, and is it in the range? */ @@ -713,7 +710,9 @@ SendBaseBackup(BaseBackupCmd *cmd) dir = AllocateDir("pg_tblspc"); if (!dir) ereport(ERROR, - (errmsg("could not open directory \"%s\": %m", "pg_tblspc"))); + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + "pg_tblspc"))); perform_base_backup(&opt, dir); diff --git a/src/backend/storage/file/copydir.c b/src/backend/storage/file/copydir.c index eae9f5a1f2..d169e9c8bb 100644 --- a/src/backend/storage/file/copydir.c +++ b/src/backend/storage/file/copydir.c @@ -47,10 +47,6 @@ copydir(char *fromdir, char *todir, bool recurse) errmsg("could not create directory \"%s\": %m", todir))); xldir = AllocateDir(fromdir); - if (xldir == NULL) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", fromdir))); while ((xlde = ReadDir(xldir, fromdir)) != NULL) { @@ -90,10 +86,6 @@ copydir(char *fromdir, char *todir, bool recurse) return; xldir = AllocateDir(todir); - if (xldir == NULL) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", todir))); while ((xlde = ReadDir(xldir, todir)) != NULL) { diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index ecd6d85270..0c435a24af 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -318,7 +318,6 @@ static int FileAccess(File file); static File OpenTemporaryFileInTablespace(Oid tblspcOid, bool rejectError); static bool reserveAllocatedDesc(void); static int FreeDesc(AllocateDesc *desc); -static struct dirent *ReadDirExtended(DIR *dir, const char *dirname, int elevel); static void AtProcExit_Files(int code, Datum arg); static void CleanupTempFiles(bool isProcExit); @@ -2587,6 +2586,10 @@ CloseTransientFile(int fd) * necessary to open the directory, and with closing it after an elog. * When done, call FreeDir rather than closedir. * + * Returns NULL, with errno set, on failure. Note that failure detection + * is commonly left to the following call of ReadDir or ReadDirExtended; + * see the comments for ReadDir. + * * Ideally this should be the *only* direct call of opendir() in the backend. */ DIR * @@ -2649,8 +2652,8 @@ AllocateDir(const char *dirname) * FreeDir(dir); * * since a NULL dir parameter is taken as indicating AllocateDir failed. - * (Make sure errno hasn't been changed since AllocateDir if you use this - * shortcut.) + * (Make sure errno isn't changed between AllocateDir and ReadDir if you + * use this shortcut.) * * The pathname passed to AllocateDir must be passed to this routine too, * but it is only used for error reporting. @@ -2662,10 +2665,15 @@ ReadDir(DIR *dir, const char *dirname) } /* - * Alternate version that allows caller to specify the elevel for any - * error report. If elevel < ERROR, returns NULL on any error. + * Alternate version of ReadDir that allows caller to specify the elevel + * for any error report (whether it's reporting an initial failure of + * AllocateDir or a subsequent directory read failure). + * + * If elevel < ERROR, returns NULL after any error. 
With the normal coding + * pattern, this will result in falling out of the loop immediately as + * though the directory contained no (more) entries. */ -static struct dirent * +struct dirent * ReadDirExtended(DIR *dir, const char *dirname, int elevel) { struct dirent *dent; @@ -2695,14 +2703,22 @@ ReadDirExtended(DIR *dir, const char *dirname, int elevel) /* * Close a directory opened with AllocateDir. * - * Note we do not check closedir's return value --- it is up to the caller - * to handle close errors. + * Returns closedir's return value (with errno set if it's not 0). + * Note we do not check the return value --- it is up to the caller + * to handle close errors if wanted. + * + * Does nothing if dir == NULL; we assume that directory open failure was + * already reported if desired. */ int FreeDir(DIR *dir) { int i; + /* Nothing to do if AllocateDir failed */ + if (dir == NULL) + return 0; + DO_DB(elog(LOG, "FreeDir: Allocated %d", numAllocatedDescs)); /* Remove dir from list of allocated dirs, if it's present */ @@ -3043,9 +3059,10 @@ RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all) { /* anything except ENOENT is fishy */ if (errno != ENOENT) - elog(LOG, - "could not open temporary-files directory \"%s\": %m", - tmpdirname); + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + tmpdirname))); return; } @@ -3099,9 +3116,10 @@ RemovePgTempRelationFiles(const char *tsdirname) { /* anything except ENOENT is fishy */ if (errno != ENOENT) - elog(LOG, - "could not open tablespace directory \"%s\": %m", - tsdirname); + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + tsdirname))); return; } @@ -3136,16 +3154,8 @@ RemovePgTempRelationFilesInDbspace(const char *dbspacedirname) char rm_path[MAXPGPATH * 2]; dbspace_dir = AllocateDir(dbspacedirname); - if (dbspace_dir == NULL) - { - /* we just saw this directory, so it really ought to be there */ - elog(LOG, - "could not open dbspace directory \"%s\": %m", - dbspacedirname); - return; - } - while ((de = ReadDir(dbspace_dir, dbspacedirname)) != NULL) + while ((de = ReadDirExtended(dbspace_dir, dbspacedirname, LOG)) != NULL) { if (!looks_like_temp_rel_name(de->d_name)) continue; @@ -3310,13 +3320,6 @@ walkdir(const char *path, struct dirent *de; dir = AllocateDir(path); - if (dir == NULL) - { - ereport(elevel, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", path))); - return; - } while ((de = ReadDirExtended(dir, path, elevel)) != NULL) { @@ -3356,9 +3359,11 @@ walkdir(const char *path, /* * It's important to fsync the destination directory itself as individual * file fsyncs don't guarantee that the directory entry for the file is - * synced. + * synced. However, skip this if AllocateDir failed; the action function + * might not be robust against that. 
*/ - (*action) (path, true, elevel); + if (dir) + (*action) (path, true, elevel); } diff --git a/src/backend/storage/file/reinit.c b/src/backend/storage/file/reinit.c index f331e7bc21..99c443c753 100644 --- a/src/backend/storage/file/reinit.c +++ b/src/backend/storage/file/reinit.c @@ -111,9 +111,10 @@ ResetUnloggedRelationsInTablespaceDir(const char *tsdirname, int op) { /* anything except ENOENT is fishy */ if (errno != ENOENT) - elog(LOG, - "could not open tablespace directory \"%s\": %m", - tsdirname); + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + tsdirname))); return; } @@ -164,9 +165,10 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) dbspace_dir = AllocateDir(dbspacedirname); if (dbspace_dir == NULL) { - elog(LOG, - "could not open dbspace directory \"%s\": %m", - dbspacedirname); + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + dbspacedirname))); return; } @@ -226,9 +228,10 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) dbspace_dir = AllocateDir(dbspacedirname); if (dbspace_dir == NULL) { - elog(LOG, - "could not open dbspace directory \"%s\": %m", - dbspacedirname); + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + dbspacedirname))); hash_destroy(hash); return; } @@ -296,9 +299,10 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) if (dbspace_dir == NULL) { /* we just saw this directory, so it really ought to be there */ - elog(LOG, - "could not open dbspace directory \"%s\": %m", - dbspacedirname); + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + dbspacedirname))); return; } @@ -349,9 +353,10 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) if (dbspace_dir == NULL) { /* we just saw this directory, so it really ought to be there */ - elog(LOG, - "could not open dbspace directory \"%s\": %m", - dbspacedirname); + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + dbspacedirname))); return; } diff --git a/src/backend/storage/ipc/dsm.c b/src/backend/storage/ipc/dsm.c index 36904d2676..a2efdb2c64 100644 --- a/src/backend/storage/ipc/dsm.c +++ b/src/backend/storage/ipc/dsm.c @@ -294,14 +294,9 @@ dsm_cleanup_for_mmap(void) DIR *dir; struct dirent *dent; - /* Open the directory; can't use AllocateDir in postmaster. */ - if ((dir = AllocateDir(PG_DYNSHMEM_DIR)) == NULL) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - PG_DYNSHMEM_DIR))); + /* Scan the directory for something with a name of the correct format. */ + dir = AllocateDir(PG_DYNSHMEM_DIR); - /* Scan for something with a name of the correct format. */ while ((dent = ReadDir(dir, PG_DYNSHMEM_DIR)) != NULL) { if (strncmp(dent->d_name, PG_DYNSHMEM_MMAP_FILE_PREFIX, @@ -315,17 +310,9 @@ dsm_cleanup_for_mmap(void) /* We found a matching file; so remove it. 
*/ if (unlink(buf) != 0) - { - int save_errno; - - save_errno = errno; - closedir(dir); - errno = save_errno; - ereport(ERROR, (errcode_for_file_access(), errmsg("could not remove file \"%s\": %m", buf))); - } } } diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c index 515e30d177..58c0b01bdc 100644 --- a/src/backend/utils/adt/dbsize.c +++ b/src/backend/utils/adt/dbsize.c @@ -110,11 +110,6 @@ calculate_database_size(Oid dbOid) /* Scan the non-default tablespaces */ snprintf(dirpath, MAXPGPATH, "pg_tblspc"); dirdesc = AllocateDir(dirpath); - if (!dirdesc) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open tablespace directory \"%s\": %m", - dirpath))); while ((direntry = ReadDir(dirdesc, dirpath)) != NULL) { diff --git a/src/backend/utils/adt/genfile.c b/src/backend/utils/adt/genfile.c index b3b9fc522d..04f1efbe4b 100644 --- a/src/backend/utils/adt/genfile.c +++ b/src/backend/utils/adt/genfile.c @@ -508,7 +508,7 @@ pg_ls_dir_files(FunctionCallInfo fcinfo, const char *dir) if (!fctx->dirdesc) ereport(ERROR, (errcode_for_file_access(), - errmsg("could not read directory \"%s\": %m", + errmsg("could not open directory \"%s\": %m", fctx->location))); funcctx->user_fctx = fctx; diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c index 1980ff5ac7..f53d411ad1 100644 --- a/src/backend/utils/adt/misc.c +++ b/src/backend/utils/adt/misc.c @@ -26,6 +26,7 @@ #include "catalog/pg_tablespace.h" #include "catalog/pg_type.h" #include "commands/dbcommands.h" +#include "commands/tablespace.h" #include "common/keywords.h" #include "funcapi.h" #include "miscadmin.h" @@ -425,9 +426,9 @@ pg_tablespace_databases(PG_FUNCTION_ARGS) while ((de = ReadDir(fctx->dirdesc, fctx->location)) != NULL) { - char *subdir; - DIR *dirdesc; Oid datOid = atooid(de->d_name); + char *subdir; + bool isempty; /* this test skips . 
and .., but is awfully weak */ if (!datOid) @@ -436,16 +437,10 @@ pg_tablespace_databases(PG_FUNCTION_ARGS) /* if database subdir is empty, don't report tablespace as used */ subdir = psprintf("%s/%s", fctx->location, de->d_name); - dirdesc = AllocateDir(subdir); - while ((de = ReadDir(dirdesc, subdir)) != NULL) - { - if (strcmp(de->d_name, ".") != 0 && strcmp(de->d_name, "..") != 0) - break; - } - FreeDir(dirdesc); + isempty = directory_is_empty(subdir); pfree(subdir); - if (!de) + if (isempty) continue; /* indeed, nothing in it */ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(datOid)); diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index 1908420d82..12a5f157c0 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -6119,14 +6119,8 @@ RelationCacheInitFileRemove(void) /* Scan the tablespace link directory to find non-default tablespaces */ dir = AllocateDir(tblspcdir); - if (dir == NULL) - { - elog(LOG, "could not open tablespace link directory \"%s\": %m", - tblspcdir); - return; - } - while ((de = ReadDir(dir, tblspcdir)) != NULL) + while ((de = ReadDirExtended(dir, tblspcdir, LOG)) != NULL) { if (strspn(de->d_name, "0123456789") == strlen(de->d_name)) { @@ -6150,14 +6144,8 @@ RelationCacheInitFileRemoveInDir(const char *tblspcpath) /* Scan the tablespace directory to find per-database directories */ dir = AllocateDir(tblspcpath); - if (dir == NULL) - { - elog(LOG, "could not open tablespace directory \"%s\": %m", - tblspcpath); - return; - } - while ((de = ReadDir(dir, tblspcpath)) != NULL) + while ((de = ReadDirExtended(dir, tblspcpath, LOG)) != NULL) { if (strspn(de->d_name, "0123456789") == strlen(de->d_name)) { diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c index addf87dc3b..0b032905a5 100644 --- a/src/backend/utils/time/snapmgr.c +++ b/src/backend/utils/time/snapmgr.c @@ -1619,27 +1619,25 @@ DeleteAllExportedSnapshotFiles(void) DIR *s_dir; struct dirent *s_de; - if (!(s_dir = AllocateDir(SNAPSHOT_EXPORT_DIR))) - { - /* - * We really should have that directory in a sane cluster setup. But - * then again if we don't, it's not fatal enough to make it FATAL. - * Since we're running in the postmaster, LOG is our best bet. - */ - elog(LOG, "could not open directory \"%s\": %m", SNAPSHOT_EXPORT_DIR); - return; - } + /* + * Problems in reading the directory, or unlinking files, are reported at + * LOG level. Since we're running in the startup process, ERROR level + * would prevent database start, and it's not important enough for that. 
+ */ + s_dir = AllocateDir(SNAPSHOT_EXPORT_DIR); - while ((s_de = ReadDir(s_dir, SNAPSHOT_EXPORT_DIR)) != NULL) + while ((s_de = ReadDirExtended(s_dir, SNAPSHOT_EXPORT_DIR, LOG)) != NULL) { if (strcmp(s_de->d_name, ".") == 0 || strcmp(s_de->d_name, "..") == 0) continue; snprintf(buf, sizeof(buf), SNAPSHOT_EXPORT_DIR "/%s", s_de->d_name); - /* Again, unlink failure is not worthy of FATAL */ - if (unlink(buf)) - elog(LOG, "could not unlink file \"%s\": %m", buf); + + if (unlink(buf) != 0) + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not remove file \"%s\": %m", buf))); } FreeDir(s_dir); diff --git a/src/include/storage/fd.h b/src/include/storage/fd.h index 9829281509..45dadf666f 100644 --- a/src/include/storage/fd.h +++ b/src/include/storage/fd.h @@ -98,6 +98,8 @@ extern int ClosePipeStream(FILE *file); /* Operations to allow use of the library routines */ extern DIR *AllocateDir(const char *dirname); extern struct dirent *ReadDir(DIR *dir, const char *dirname); +extern struct dirent *ReadDirExtended(DIR *dir, const char *dirname, + int elevel); extern int FreeDir(DIR *dir); /* Operations to allow use of a plain kernel FD, with automatic cleanup */ diff --git a/src/timezone/pgtz.c b/src/timezone/pgtz.c index a73dc6188b..4018310a5c 100644 --- a/src/timezone/pgtz.c +++ b/src/timezone/pgtz.c @@ -156,15 +156,8 @@ scan_directory_ci(const char *dirname, const char *fname, int fnamelen, struct dirent *direntry; dirdesc = AllocateDir(dirname); - if (!dirdesc) - { - ereport(LOG, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", dirname))); - return false; - } - while ((direntry = ReadDir(dirdesc, dirname)) != NULL) + while ((direntry = ReadDirExtended(dirdesc, dirname, LOG)) != NULL) { /* * Ignore . and .., plus any other "hidden" files. This is a security From 561885db05d3296082ce8750805b8ec322cf9aa1 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 4 Dec 2017 17:59:35 -0500 Subject: [PATCH 0656/1087] Improve error handling in RemovePgTempFiles(). Modify this function and its subsidiaries so that syscall failures are reported via ereport(LOG), rather than silently ignored as before. We don't want to throw a hard ERROR, as that would prevent database startup, and getting rid of leftover temporary files is not important enough for that. On the other hand, not reporting trouble at all seems like an odd choice not in line with current project norms, especially since any failure here is quite unexpected. On the same reasoning, adjust these functions' AllocateDir/ReadDir calls so that failure to scan a directory results in LOG not ERROR. I also removed the previous practice of silently ignoring ENOENT failures during directory opens --- there are some corner cases where that could happen given a previous database crash, but that seems like a bad excuse for ignoring a condition that isn't expected in most cases. A LOG message during postmaster start seems OK in such situations, and better than no output at all. In passing, make RemovePgTempRelationFiles' test for "is the file name all digits" look more like the way it's done elsewhere. 
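(For reference, the "all digits" idiom used elsewhere, and adopted in the
hunks below, is just

    if (strspn(de->d_name, "0123456789") != strlen(de->d_name))
        continue;        /* name is not all digits, so skip it */

in place of an open-coded isdigit() loop.)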
Discussion: https://postgr.es/m/19907.1512402254@sss.pgh.pa.us --- src/backend/storage/file/fd.c | 70 +++++++++++++++++------------------ 1 file changed, 35 insertions(+), 35 deletions(-) diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index 0c435a24af..5c7fd645ac 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -2994,6 +2994,10 @@ CleanupTempFiles(bool isProcExit) * the temp files for debugging purposes. This does however mean that * OpenTemporaryFile had better allow for collision with an existing temp * file name. + * + * NOTE: this function and its subroutines generally report syscall failures + * with ereport(LOG) and keep going. Removing temp files is not so critical + * that we should fail to start the database when we can't do it. */ void RemovePgTempFiles(void) @@ -3014,7 +3018,7 @@ RemovePgTempFiles(void) */ spc_dir = AllocateDir("pg_tblspc"); - while ((spc_de = ReadDir(spc_dir, "pg_tblspc")) != NULL) + while ((spc_de = ReadDirExtended(spc_dir, "pg_tblspc", LOG)) != NULL) { if (strcmp(spc_de->d_name, ".") == 0 || strcmp(spc_de->d_name, "..") == 0) @@ -3055,18 +3059,8 @@ RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all) char rm_path[MAXPGPATH * 2]; temp_dir = AllocateDir(tmpdirname); - if (temp_dir == NULL) - { - /* anything except ENOENT is fishy */ - if (errno != ENOENT) - ereport(LOG, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - tmpdirname))); - return; - } - while ((temp_de = ReadDir(temp_dir, tmpdirname)) != NULL) + while ((temp_de = ReadDirExtended(temp_dir, tmpdirname, LOG)) != NULL) { if (strcmp(temp_de->d_name, ".") == 0 || strcmp(temp_de->d_name, "..") == 0) @@ -3082,22 +3076,38 @@ RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all) { struct stat statbuf; - /* note that we ignore any error here and below */ if (lstat(rm_path, &statbuf) < 0) + { + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not stat file \"%s\": %m", rm_path))); continue; + } if (S_ISDIR(statbuf.st_mode)) { + /* recursively remove contents, then directory itself */ RemovePgTempFilesInDir(rm_path, true); - rmdir(rm_path); + + if (rmdir(rm_path) < 0) + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not remove directory \"%s\": %m", + rm_path))); } else - unlink(rm_path); + { + if (unlink(rm_path) < 0) + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not remove file \"%s\": %m", + rm_path))); + } } else - elog(LOG, - "unexpected file found in temporary-files directory: \"%s\"", - rm_path); + ereport(LOG, + (errmsg("unexpected file found in temporary-files directory: \"%s\"", + rm_path))); } FreeDir(temp_dir); @@ -3112,29 +3122,15 @@ RemovePgTempRelationFiles(const char *tsdirname) char dbspace_path[MAXPGPATH * 2]; ts_dir = AllocateDir(tsdirname); - if (ts_dir == NULL) - { - /* anything except ENOENT is fishy */ - if (errno != ENOENT) - ereport(LOG, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - tsdirname))); - return; - } - while ((de = ReadDir(ts_dir, tsdirname)) != NULL) + while ((de = ReadDirExtended(ts_dir, tsdirname, LOG)) != NULL) { - int i = 0; - /* * We're only interested in the per-database directories, which have * numeric names. Note that this code will also (properly) ignore "." * and "..". 
*/ - while (isdigit((unsigned char) de->d_name[i])) - ++i; - if (de->d_name[i] != '\0' || i == 0) + if (strspn(de->d_name, "0123456789") != strlen(de->d_name)) continue; snprintf(dbspace_path, sizeof(dbspace_path), "%s/%s", @@ -3163,7 +3159,11 @@ RemovePgTempRelationFilesInDbspace(const char *dbspacedirname) snprintf(rm_path, sizeof(rm_path), "%s/%s", dbspacedirname, de->d_name); - unlink(rm_path); /* note we ignore any error */ + if (unlink(rm_path) < 0) + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not remove file \"%s\": %m", + rm_path))); } FreeDir(dbspace_dir); From 066bc21c0e085e2642ff25cc665c4efad3669d6f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 4 Dec 2017 18:37:54 -0500 Subject: [PATCH 0657/1087] Simplify do_pg_start_backup's API by opening pg_tblspc internally. do_pg_start_backup() expects its callers to pass in an open DIR pointer for the pg_tblspc directory, but there's no apparent advantage in that. It complicates the callers without adding any flexibility, and there's no robustness advantage, since we surely have to be prepared for errors during the scan of pg_tblspc anyway. In fact, by holding an extra kernel resource during operations like the preliminary checkpoint, we might be making things a fraction more failure-prone not less. Hence, remove that argument and open the directory just for the duration of the actual scan. Discussion: https://postgr.es/m/28752.1512413887@sss.pgh.pa.us --- src/backend/access/transam/xlog.c | 6 ++++-- src/backend/access/transam/xlogfuncs.c | 15 ++------------- src/backend/replication/basebackup.c | 19 ++++--------------- src/include/access/xlog.h | 2 +- 4 files changed, 11 insertions(+), 31 deletions(-) diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index a11406c741..e46ee553d6 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -10206,7 +10206,7 @@ XLogFileNameP(TimeLineID tli, XLogSegNo segno) */ XLogRecPtr do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p, - StringInfo labelfile, DIR *tblspcdir, List **tablespaces, + StringInfo labelfile, List **tablespaces, StringInfo tblspcmapfile, bool infotbssize, bool needtblspcmapfile) { @@ -10297,6 +10297,7 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p, PG_ENSURE_ERROR_CLEANUP(pg_start_backup_callback, (Datum) BoolGetDatum(exclusive)); { bool gotUniqueStartpoint = false; + DIR *tblspcdir; struct dirent *de; tablespaceinfo *ti; int datadirpathlen; @@ -10428,6 +10429,7 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p, datadirpathlen = strlen(DataDir); /* Collect information about all tablespaces */ + tblspcdir = AllocateDir("pg_tblspc"); while ((de = ReadDir(tblspcdir, "pg_tblspc")) != NULL) { char fullpath[MAXPGPATH + 10]; @@ -10476,7 +10478,6 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p, appendStringInfoChar(&buflinkpath, *s++); } - /* * Relpath holds the relative path of the tablespace directory * when it's located within PGDATA, or NULL if it's located @@ -10511,6 +10512,7 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p, errmsg("tablespaces are not supported on this platform"))); #endif } + FreeDir(tblspcdir); /* * Construct backup label file diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c index 48d85c1ce5..c41428ea2a 100644 --- a/src/backend/access/transam/xlogfuncs.c +++ 
b/src/backend/access/transam/xlogfuncs.c @@ -75,7 +75,6 @@ pg_start_backup(PG_FUNCTION_ARGS) bool exclusive = PG_GETARG_BOOL(2); char *backupidstr; XLogRecPtr startpoint; - DIR *dir; SessionBackupState status = get_backup_status(); backupidstr = text_to_cstring(backupid); @@ -85,18 +84,10 @@ pg_start_backup(PG_FUNCTION_ARGS) (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), errmsg("a backup is already in progress in this session"))); - /* Make sure we can open the directory with tablespaces in it */ - dir = AllocateDir("pg_tblspc"); - if (!dir) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - "pg_tblspc"))); - if (exclusive) { startpoint = do_pg_start_backup(backupidstr, fast, NULL, NULL, - dir, NULL, NULL, false, true); + NULL, NULL, false, true); } else { @@ -112,13 +103,11 @@ pg_start_backup(PG_FUNCTION_ARGS) MemoryContextSwitchTo(oldcontext); startpoint = do_pg_start_backup(backupidstr, fast, NULL, label_file, - dir, NULL, tblspc_map_file, false, true); + NULL, tblspc_map_file, false, true); before_shmem_exit(nonexclusive_base_backup_cleanup, (Datum) 0); } - FreeDir(dir); - PG_RETURN_LSN(startpoint); } diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c index b264b69aef..cd7d391b2f 100644 --- a/src/backend/replication/basebackup.c +++ b/src/backend/replication/basebackup.c @@ -64,7 +64,7 @@ static int64 _tarWriteDir(const char *pathbuf, int basepathlen, struct stat *sta static void send_int8_string(StringInfoData *buf, int64 intval); static void SendBackupHeader(List *tablespaces); static void base_backup_cleanup(int code, Datum arg); -static void perform_base_backup(basebackup_options *opt, DIR *tblspcdir); +static void perform_base_backup(basebackup_options *opt); static void parse_basebackup_options(List *options, basebackup_options *opt); static void SendXlogRecPtrResult(XLogRecPtr ptr, TimeLineID tli); static int compareWalFileNames(const void *a, const void *b); @@ -188,7 +188,7 @@ base_backup_cleanup(int code, Datum arg) * clobbered by longjmp" from stupider versions of gcc. 
*/ static void -perform_base_backup(basebackup_options *opt, DIR *tblspcdir) +perform_base_backup(basebackup_options *opt) { XLogRecPtr startptr; TimeLineID starttli; @@ -207,7 +207,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir) tblspc_map_file = makeStringInfo(); startptr = do_pg_start_backup(opt->label, opt->fastcheckpoint, &starttli, - labelfile, tblspcdir, &tablespaces, + labelfile, &tablespaces, tblspc_map_file, opt->progress, opt->sendtblspcmapfile); @@ -690,7 +690,6 @@ parse_basebackup_options(List *options, basebackup_options *opt) void SendBaseBackup(BaseBackupCmd *cmd) { - DIR *dir; basebackup_options opt; parse_basebackup_options(cmd->options, &opt); @@ -706,17 +705,7 @@ SendBaseBackup(BaseBackupCmd *cmd) set_ps_display(activitymsg, false); } - /* Make sure we can open the directory with tablespaces in it */ - dir = AllocateDir("pg_tblspc"); - if (!dir) - ereport(ERROR, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - "pg_tblspc"))); - - perform_base_backup(&opt, dir); - - FreeDir(dir); + perform_base_backup(&opt); } static void diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h index 8fd6010ba0..dd7d8b5e40 100644 --- a/src/include/access/xlog.h +++ b/src/include/access/xlog.h @@ -310,7 +310,7 @@ typedef enum SessionBackupState } SessionBackupState; extern XLogRecPtr do_pg_start_backup(const char *backupidstr, bool fast, - TimeLineID *starttli_p, StringInfo labelfile, DIR *tblspcdir, + TimeLineID *starttli_p, StringInfo labelfile, List **tablespaces, StringInfo tblspcmapfile, bool infotbssize, bool needtblspcmapfile); extern XLogRecPtr do_pg_stop_backup(char *labelfile, bool waitforarchive, From e7cfb26fbc11ea94e93e443e2260e106b6daabdd Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 30 Nov 2017 14:46:22 -0500 Subject: [PATCH 0658/1087] Fix warnings from cpluspluscheck Fix warnings about "comparison between signed and unsigned integer expressions" in inline functions in header files by adding some casts. 
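For the record, the warning fires because buf->len is a signed int while
sizeof() yields an unsigned size_t, so the comparison would otherwise be
done in unsigned arithmetic; casting the sizeof() result to int, as in this
line sketched from the hunks below, keeps the whole comparison signed:

    Assert(buf->len + (int) sizeof(int32) <= buf->maxlen);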
--- src/include/libpq/pqformat.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/src/include/libpq/pqformat.h b/src/include/libpq/pqformat.h index 4de9e6dd21..9f56d184fd 100644 --- a/src/include/libpq/pqformat.h +++ b/src/include/libpq/pqformat.h @@ -51,7 +51,7 @@ pq_writeint8(StringInfoData *pg_restrict buf, int8 i) { int8 ni = i; - Assert(buf->len + sizeof(int8) <= buf->maxlen); + Assert(buf->len + (int) sizeof(int8) <= buf->maxlen); memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(int8)); buf->len += sizeof(int8); } @@ -65,7 +65,7 @@ pq_writeint16(StringInfoData *pg_restrict buf, int16 i) { int16 ni = pg_hton16(i); - Assert(buf->len + sizeof(int16) <= buf->maxlen); + Assert(buf->len + (int) sizeof(int16) <= buf->maxlen); memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(int16)); buf->len += sizeof(int16); } @@ -79,7 +79,7 @@ pq_writeint32(StringInfoData *pg_restrict buf, int32 i) { int32 ni = pg_hton32(i); - Assert(buf->len + sizeof(int32) <= buf->maxlen); + Assert(buf->len + (int) sizeof(int32) <= buf->maxlen); memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(int32)); buf->len += sizeof(int32); } @@ -93,7 +93,7 @@ pq_writeint64(StringInfoData *pg_restrict buf, int64 i) { int64 ni = pg_hton64(i); - Assert(buf->len + sizeof(int64) <= buf->maxlen); + Assert(buf->len + (int) sizeof(int64) <= buf->maxlen); memcpy((char *pg_restrict) (buf->data + buf->len), &ni, sizeof(int64)); buf->len += sizeof(int64); } From 8dc3c971a9d6db5ddc9f0a3c11a70308412d66c3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 4 Dec 2017 20:52:48 -0500 Subject: [PATCH 0659/1087] Treat directory open failures as hard errors in ResetUnloggedRelations(). Previously, this code just reported such problems at LOG level and kept going. The problem with this approach is that transient failures (e.g., ENFILE) could prevent us from resetting unlogged relations to empty, yet allow recovery to appear to complete successfully. That seems like a data corruption hazard large enough to treat such problems as reasons to fail startup. For the same reason, treat unlink failures for unlogged files as hard errors not just LOG messages. It's a little odd that we did it like that when file-level errors in other steps (copy_file, fsync_fname) are ERRORs. The sole case that I left alone is that ENOENT failure on a tablespace (not database) directory is not an error, though it will now be logged rather than just silently ignored. This is to cover the scenario where a previous DROP TABLESPACE removed the tablespace directory but failed before removing the pg_tblspc symlink. I'm not sure that that's very likely in practice, but that seems like the only real excuse for the old behavior here, so let's allow for it. (As coded, this will also allow ENOENT on $PGDATA/base/. But since we'll fail soon enough if that's gone, I don't think we need to complicate this code by distinguishing that from a true tablespace case.) 
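Condensed, the tablespace-level special case now reads as follows (this
merely restates the first hunk below; it is not additional logic):

    ts_dir = AllocateDir(tsdirname);

    if (ts_dir == NULL && errno == ENOENT)
    {
        /* tolerate a DROP TABLESPACE that crashed partway through */
        ereport(LOG,
                (errcode_for_file_access(),
                 errmsg("could not open directory \"%s\": %m",
                        tsdirname)));
        return;
    }

    /* any other open failure is reported, as ERROR, by ReadDir() */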
Discussion: https://postgr.es/m/21040.1512418508@sss.pgh.pa.us --- src/backend/storage/file/reinit.c | 109 +++++++++++------------------- 1 file changed, 39 insertions(+), 70 deletions(-) diff --git a/src/backend/storage/file/reinit.c b/src/backend/storage/file/reinit.c index 99c443c753..00678cb182 100644 --- a/src/backend/storage/file/reinit.c +++ b/src/backend/storage/file/reinit.c @@ -98,7 +98,9 @@ ResetUnloggedRelations(int op) MemoryContextDelete(tmpctx); } -/* Process one tablespace directory for ResetUnloggedRelations */ +/* + * Process one tablespace directory for ResetUnloggedRelations + */ static void ResetUnloggedRelationsInTablespaceDir(const char *tsdirname, int op) { @@ -107,29 +109,32 @@ ResetUnloggedRelationsInTablespaceDir(const char *tsdirname, int op) char dbspace_path[MAXPGPATH * 2]; ts_dir = AllocateDir(tsdirname); - if (ts_dir == NULL) + + /* + * If we get ENOENT on a tablespace directory, log it and return. This + * can happen if a previous DROP TABLESPACE crashed between removing the + * tablespace directory and removing the symlink in pg_tblspc. We don't + * really want to prevent database startup in that scenario, so let it + * pass instead. Any other type of error will be reported by ReadDir + * (causing a startup failure). + */ + if (ts_dir == NULL && errno == ENOENT) { - /* anything except ENOENT is fishy */ - if (errno != ENOENT) - ereport(LOG, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - tsdirname))); + ereport(LOG, + (errcode_for_file_access(), + errmsg("could not open directory \"%s\": %m", + tsdirname))); return; } while ((de = ReadDir(ts_dir, tsdirname)) != NULL) { - int i = 0; - /* * We're only interested in the per-database directories, which have * numeric names. Note that this code will also (properly) ignore "." * and "..". */ - while (isdigit((unsigned char) de->d_name[i])) - ++i; - if (de->d_name[i] != '\0' || i == 0) + if (strspn(de->d_name, "0123456789") != strlen(de->d_name)) continue; snprintf(dbspace_path, sizeof(dbspace_path), "%s/%s", @@ -140,7 +145,9 @@ ResetUnloggedRelationsInTablespaceDir(const char *tsdirname, int op) FreeDir(ts_dir); } -/* Process one per-dbspace directory for ResetUnloggedRelations */ +/* + * Process one per-dbspace directory for ResetUnloggedRelations + */ static void ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) { @@ -158,20 +165,9 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) */ if ((op & UNLOGGED_RELATION_CLEANUP) != 0) { - HTAB *hash = NULL; + HTAB *hash; HASHCTL ctl; - /* Open the directory. */ - dbspace_dir = AllocateDir(dbspacedirname); - if (dbspace_dir == NULL) - { - ereport(LOG, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - dbspacedirname))); - return; - } - /* * It's possible that someone could create a ton of unlogged relations * in the same database & tablespace, so we'd better use a hash table @@ -179,11 +175,13 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) * need to be reset. Otherwise, this cleanup operation would be * O(n^2). */ + memset(&ctl, 0, sizeof(ctl)); ctl.keysize = sizeof(unlogged_relation_entry); ctl.entrysize = sizeof(unlogged_relation_entry); hash = hash_create("unlogged hash", 32, &ctl, HASH_ELEM); /* Scan the directory. 
*/ + dbspace_dir = AllocateDir(dbspacedirname); while ((de = ReadDir(dbspace_dir, dbspacedirname)) != NULL) { ForkNumber forkNum; @@ -222,21 +220,9 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) } /* - * Now, make a second pass and remove anything that matches. First, - * reopen the directory. + * Now, make a second pass and remove anything that matches. */ dbspace_dir = AllocateDir(dbspacedirname); - if (dbspace_dir == NULL) - { - ereport(LOG, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - dbspacedirname))); - hash_destroy(hash); - return; - } - - /* Scan the directory. */ while ((de = ReadDir(dbspace_dir, dbspacedirname)) != NULL) { ForkNumber forkNum; @@ -266,15 +252,11 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) { snprintf(rm_path, sizeof(rm_path), "%s/%s", dbspacedirname, de->d_name); - - /* - * It's tempting to actually throw an error here, but since - * this code gets run during database startup, that could - * result in the database failing to start. (XXX Should we do - * it anyway?) - */ - if (unlink(rm_path)) - elog(LOG, "could not unlink file \"%s\": %m", rm_path); + if (unlink(rm_path) < 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not remove file \"%s\": %m", + rm_path))); else elog(DEBUG2, "unlinked file \"%s\"", rm_path); } @@ -294,19 +276,8 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) */ if ((op & UNLOGGED_RELATION_INIT) != 0) { - /* Open the directory. */ - dbspace_dir = AllocateDir(dbspacedirname); - if (dbspace_dir == NULL) - { - /* we just saw this directory, so it really ought to be there */ - ereport(LOG, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - dbspacedirname))); - return; - } - /* Scan the directory. */ + dbspace_dir = AllocateDir(dbspacedirname); while ((de = ReadDir(dbspace_dir, dbspacedirname)) != NULL) { ForkNumber forkNum; @@ -350,16 +321,6 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) * (especially the metadata ones) at once. */ dbspace_dir = AllocateDir(dbspacedirname); - if (dbspace_dir == NULL) - { - /* we just saw this directory, so it really ought to be there */ - ereport(LOG, - (errcode_for_file_access(), - errmsg("could not open directory \"%s\": %m", - dbspacedirname))); - return; - } - while ((de = ReadDir(dbspace_dir, dbspacedirname)) != NULL) { ForkNumber forkNum; @@ -388,6 +349,14 @@ ResetUnloggedRelationsInDbspaceDir(const char *dbspacedirname, int op) FreeDir(dbspace_dir); + /* + * Lastly, fsync the database directory itself, ensuring the + * filesystem remembers the file creations and deletions we've done. + * We don't bother with this during a call that does only + * UNLOGGED_RELATION_CLEANUP, because if recovery crashes before we + * get to doing UNLOGGED_RELATION_INIT, we'll redo the cleanup step + * too at the next startup attempt. + */ fsync_fname(dbspacedirname, true); } } From 28f8896af0765a05447f605c55fa9f1ab3b41150 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 1 Dec 2017 13:30:21 -0500 Subject: [PATCH 0660/1087] doc: Turn on generate.consistent.ids parameter This ensures that automatically generated HTML anchors don't change in every build. 
--- doc/src/sgml/stylesheet-common.xsl | 1 + 1 file changed, 1 insertion(+) diff --git a/doc/src/sgml/stylesheet-common.xsl b/doc/src/sgml/stylesheet-common.xsl index 658a5ac5e1..6d26e7e5c9 100644 --- a/doc/src/sgml/stylesheet-common.xsl +++ b/doc/src/sgml/stylesheet-common.xsl @@ -36,6 +36,7 @@ + 1 From c572599c65bfe0387563233faabecd2845073538 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 5 Dec 2017 09:23:57 -0500 Subject: [PATCH 0661/1087] Mark assorted variables PGDLLIMPORT. This makes life easier for extension authors who wish to support Windows. Brian Cloutier, slightly amended by me. Discussion: http://postgr.es/m/CAJCy68fscdNhmzFPS4kyO00CADkvXvEa-28H-OtENk-pa2OTWw@mail.gmail.com --- src/include/access/twophase.h | 2 +- src/include/commands/extension.h | 2 +- src/include/miscadmin.h | 10 +++++----- src/include/pgtime.h | 2 +- src/include/postmaster/postmaster.h | 4 ++-- src/include/storage/fd.h | 2 +- src/include/storage/proc.h | 4 ++-- src/include/tcop/dest.h | 3 ++- src/include/tcop/tcopprot.h | 2 +- src/include/utils/guc.h | 6 +++--- src/include/utils/snapmgr.h | 6 +++--- 11 files changed, 22 insertions(+), 21 deletions(-) diff --git a/src/include/access/twophase.h b/src/include/access/twophase.h index 54dec4eeaf..f5fbbea4b6 100644 --- a/src/include/access/twophase.h +++ b/src/include/access/twophase.h @@ -25,7 +25,7 @@ typedef struct GlobalTransactionData *GlobalTransaction; /* GUC variable */ -extern int max_prepared_xacts; +extern PGDLLIMPORT int max_prepared_xacts; extern Size TwoPhaseShmemSize(void); extern void TwoPhaseShmemInit(void); diff --git a/src/include/commands/extension.h b/src/include/commands/extension.h index 73bba3c784..a0dfae10c6 100644 --- a/src/include/commands/extension.h +++ b/src/include/commands/extension.h @@ -28,7 +28,7 @@ * them from the extension first. */ extern PGDLLIMPORT bool creating_extension; -extern Oid CurrentExtensionObject; +extern PGDLLIMPORT Oid CurrentExtensionObject; extern ObjectAddress CreateExtension(ParseState *pstate, CreateExtensionStmt *stmt); diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h index 3950054368..59da7a6091 100644 --- a/src/include/miscadmin.h +++ b/src/include/miscadmin.h @@ -150,14 +150,14 @@ extern PGDLLIMPORT bool IsUnderPostmaster; extern PGDLLIMPORT bool IsBackgroundWorker; extern PGDLLIMPORT bool IsBinaryUpgrade; -extern bool ExitOnAnyError; +extern PGDLLIMPORT bool ExitOnAnyError; extern PGDLLIMPORT char *DataDir; extern PGDLLIMPORT int NBuffers; -extern int MaxBackends; -extern int MaxConnections; -extern int max_worker_processes; +extern PGDLLIMPORT int MaxBackends; +extern PGDLLIMPORT int MaxConnections; +extern PGDLLIMPORT int max_worker_processes; extern int max_parallel_workers; extern PGDLLIMPORT int MyProcPid; @@ -238,7 +238,7 @@ extern PGDLLIMPORT int IntervalStyle; #define MAXTZLEN 10 /* max TZ name len, not counting tr. 
null */ extern bool enableFsync; -extern bool allowSystemTableMods; +extern PGDLLIMPORT bool allowSystemTableMods; extern PGDLLIMPORT int work_mem; extern PGDLLIMPORT int maintenance_work_mem; diff --git a/src/include/pgtime.h b/src/include/pgtime.h index 4fd8f75ef9..8a13d717e0 100644 --- a/src/include/pgtime.h +++ b/src/include/pgtime.h @@ -70,7 +70,7 @@ extern size_t pg_strftime(char *s, size_t max, const char *format, /* these functions and variables are in pgtz.c */ -extern pg_tz *session_timezone; +extern PGDLLIMPORT pg_tz *session_timezone; extern pg_tz *log_timezone; extern void pg_timezone_initialize(void); diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h index 0f85908b09..f5b863e710 100644 --- a/src/include/postmaster/postmaster.h +++ b/src/include/postmaster/postmaster.h @@ -16,7 +16,7 @@ /* GUC options */ extern bool EnableSSL; extern int ReservedBackends; -extern int PostPortNumber; +extern PGDLLIMPORT int PostPortNumber; extern int Unix_socket_permissions; extern char *Unix_socket_group; extern char *Unix_socket_directories; @@ -44,7 +44,7 @@ extern int postmaster_alive_fds[2]; #define POSTMASTER_FD_OWN 1 /* kept open by postmaster only */ #endif -extern const char *progname; +extern PGDLLIMPORT const char *progname; extern void PostmasterMain(int argc, char *argv[]) pg_attribute_noreturn(); extern void ClosePostmasterPorts(bool am_syslogger); diff --git a/src/include/storage/fd.h b/src/include/storage/fd.h index 45dadf666f..dc2eb35f06 100644 --- a/src/include/storage/fd.h +++ b/src/include/storage/fd.h @@ -50,7 +50,7 @@ typedef int File; /* GUC parameter */ -extern int max_files_per_process; +extern PGDLLIMPORT int max_files_per_process; /* * This is private to fd.c, but exported for save/restore_backend_variables() diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h index 205f484510..1d370500af 100644 --- a/src/include/storage/proc.h +++ b/src/include/storage/proc.h @@ -269,7 +269,7 @@ typedef struct PROC_HDR int startupBufferPinWaitBufId; } PROC_HDR; -extern PROC_HDR *ProcGlobal; +extern PGDLLIMPORT PROC_HDR *ProcGlobal; extern PGPROC *PreparedXactProcs; @@ -287,7 +287,7 @@ extern PGPROC *PreparedXactProcs; #define NUM_AUXILIARY_PROCS 4 /* configurable options */ -extern int DeadlockTimeout; +extern PGDLLIMPORT int DeadlockTimeout; extern int StatementTimeout; extern int LockTimeout; extern int IdleInTransactionSessionTimeout; diff --git a/src/include/tcop/dest.h b/src/include/tcop/dest.h index c990544a16..f31c06a9c0 100644 --- a/src/include/tcop/dest.h +++ b/src/include/tcop/dest.h @@ -129,7 +129,8 @@ struct _DestReceiver /* Private fields might appear beyond this point... 
*/ }; -extern DestReceiver *None_Receiver; /* permanent receiver for DestNone */ +extern PGDLLIMPORT DestReceiver *None_Receiver; /* permanent receiver for + * DestNone */ /* The primary destination management functions */ diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h index f8c535c91e..62c7f6c61c 100644 --- a/src/include/tcop/tcopprot.h +++ b/src/include/tcop/tcopprot.h @@ -45,7 +45,7 @@ typedef enum LOGSTMT_ALL /* log all statements */ } LogStmtLevel; -extern int log_statement; +extern PGDLLIMPORT int log_statement; extern List *pg_parse_query(const char *query_string); extern List *pg_analyze_and_rewrite(RawStmt *parsetree, diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h index 41335b8e75..26e0caaf59 100644 --- a/src/include/utils/guc.h +++ b/src/include/utils/guc.h @@ -248,8 +248,8 @@ extern bool default_with_oids; extern bool session_auth_is_superuser; extern int log_min_error_statement; -extern int log_min_messages; -extern int client_min_messages; +extern PGDLLIMPORT int log_min_messages; +extern PGDLLIMPORT int client_min_messages; extern int log_min_duration_statement; extern int log_temp_files; @@ -258,7 +258,7 @@ extern int temp_file_limit; extern int num_temp_buffers; extern char *cluster_name; -extern char *ConfigFileName; +extern PGDLLIMPORT char *ConfigFileName; extern char *HbaFileName; extern char *IdentFileName; extern char *external_pid_file; diff --git a/src/include/utils/snapmgr.h b/src/include/utils/snapmgr.h index fc64153780..8585194e78 100644 --- a/src/include/utils/snapmgr.h +++ b/src/include/utils/snapmgr.h @@ -56,10 +56,10 @@ extern TimestampTz GetOldSnapshotThresholdTimestamp(void); extern bool FirstSnapshotSet; -extern TransactionId TransactionXmin; -extern TransactionId RecentXmin; +extern PGDLLIMPORT TransactionId TransactionXmin; +extern PGDLLIMPORT TransactionId RecentXmin; extern PGDLLIMPORT TransactionId RecentGlobalXmin; -extern TransactionId RecentGlobalDataXmin; +extern PGDLLIMPORT TransactionId RecentGlobalDataXmin; extern Snapshot GetTransactionSnapshot(void); extern Snapshot GetLatestSnapshot(void); From ab3f008a2dc364cf7fb75de0a691fb0c61586c8e Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 5 Dec 2017 11:19:45 -0500 Subject: [PATCH 0662/1087] postgres_fdw: Judge password use by run-as user, not session user. This is a backward incompatibility which should be noted in the release notes for PostgreSQL 11. For security reasons, we require that a postgres_fdw foreign table use password authentication when accessing a remote server, so that an unprivileged user cannot usurp the server's credentials. Superusers are exempt from this requirement, because we assume they are entitled to usurp the server's credentials or, at least, can find some other way to do it. But what should happen when the foreign table is accessed by a view owned by a user different from the session user? Is it the view owner that must be a superuser in order to avoid the requirement of using a password, or the session user? Historically it was the latter, but this requirement makes it the former instead. This allows superusers to delegate to other users the right to select from a foreign table that doesn't use password authentication by creating a view over the foreign table and handing out rights to the view. It is also more consistent with the idea that access to a view should use the view owner's privileges rather than the session user's privileges. 
The upshot of this change is that a superuser selecting from a view created by a non-superuser may now get an error complaining that no password was used, while a non-superuser selecting from a view created by a superuser will no longer receive such an error. No documentation changes are present in this patch because the wording of the documentation already suggests that it works this way. We should perhaps adjust the documentation in the back-branches, but that's a task for another patch. Originally proposed by Jeff Janes, but with different semantics; adjusted to work like this by me per discussion. Discussion: http://postgr.es/m/CA+TgmoaY4HsVZJv5SqEjCKLDwtCTSwXzKpRftgj50wmMMBwciA@mail.gmail.com --- contrib/postgres_fdw/connection.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c index be4ec07cf9..4fbf043860 100644 --- a/contrib/postgres_fdw/connection.c +++ b/contrib/postgres_fdw/connection.c @@ -75,7 +75,7 @@ static bool xact_got_connection = false; /* prototypes of private functions */ static PGconn *connect_pg_server(ForeignServer *server, UserMapping *user); static void disconnect_pg_server(ConnCacheEntry *entry); -static void check_conn_params(const char **keywords, const char **values); +static void check_conn_params(const char **keywords, const char **values, UserMapping *user); static void configure_remote_session(PGconn *conn); static void do_sql_command(PGconn *conn, const char *sql); static void begin_remote_xact(ConnCacheEntry *entry); @@ -261,7 +261,7 @@ connect_pg_server(ForeignServer *server, UserMapping *user) keywords[n] = values[n] = NULL; /* verify connection parameters and make connection */ - check_conn_params(keywords, values); + check_conn_params(keywords, values, user); conn = PQconnectdbParams(keywords, values, false); if (!conn || PQstatus(conn) != CONNECTION_OK) @@ -276,7 +276,7 @@ connect_pg_server(ForeignServer *server, UserMapping *user) * otherwise, he's piggybacking on the postgres server's user * identity. See also dblink_security_check() in contrib/dblink. */ - if (!superuser() && !PQconnectionUsedPassword(conn)) + if (!superuser_arg(user->userid) && !PQconnectionUsedPassword(conn)) ereport(ERROR, (errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED), errmsg("password is required"), @@ -322,12 +322,12 @@ disconnect_pg_server(ConnCacheEntry *entry) * contrib/dblink.) */ static void -check_conn_params(const char **keywords, const char **values) +check_conn_params(const char **keywords, const char **values, UserMapping *user) { int i; /* no check required if superuser */ - if (superuser()) + if (superuser_arg(user->userid)) return; /* ok if params contain a non-empty password */ From 82c5c533d1f55dfbf3eacc61aff64fedc8915d78 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 5 Dec 2017 13:12:00 -0500 Subject: [PATCH 0663/1087] postgres_fdw: Fix failing regression test. Commit ab3f008a2dc364cf7fb75de0a691fb0c61586c8e broke this. Report by Stephen Frost. 
Discussion: http://postgr.es/m/20171205180342.GO4628@tamriel.snowman.net --- contrib/postgres_fdw/expected/postgres_fdw.out | 2 +- contrib/postgres_fdw/sql/postgres_fdw.sql | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index bce334811b..683d641fa7 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -2315,7 +2315,7 @@ SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5 (4 rows) -- check join pushdown in situations where multiple userids are involved -CREATE ROLE regress_view_owner; +CREATE ROLE regress_view_owner SUPERUSER; CREATE USER MAPPING FOR regress_view_owner SERVER loopback; GRANT SELECT ON ft4 TO regress_view_owner; GRANT SELECT ON ft5 TO regress_view_owner; diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 1df1e3aad0..3c3c5c705f 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -560,7 +560,7 @@ SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5 SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5.c1 = ft4.c1 WHERE ft4.c1 BETWEEN 10 and 30 ORDER BY ft5.c1, ft4.c1; -- check join pushdown in situations where multiple userids are involved -CREATE ROLE regress_view_owner; +CREATE ROLE regress_view_owner SUPERUSER; CREATE USER MAPPING FOR regress_view_owner SERVER loopback; GRANT SELECT ON ft4 TO regress_view_owner; GRANT SELECT ON ft5 TO regress_view_owner; From 5bcf389ecfd40daf92238e1abbff4fc4d3f18b33 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 5 Dec 2017 10:55:56 -0800 Subject: [PATCH 0664/1087] Fix EXPLAIN ANALYZE of hash join when the leader doesn't participate. If a hash join appears in a parallel query, there may be no hash table available for explain.c to inspect even though a hash table may have been built in other processes. This could happen either because parallel_leader_participation was set to off or because the leader happened to hit the end of the outer relation immediately (even though the complete relation is not empty) and decided not to build the hash table. Commit bf11e7ee introduced a way for workers to exchange instrumentation via the DSM segment for Sort nodes even though they are not parallel-aware. This commit does the same for Hash nodes, so that explain.c has a way to find instrumentation data from an arbitrary participant that actually built the hash table. 
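The selection rule can be sketched in isolation; the struct and function names below are invented for illustration and abbreviate the real per-worker entries that each worker copies into the DSM segment at shutdown:

    #include <stddef.h>

    /* Abbreviated stand-in for a worker's hash instrumentation entry. */
    typedef struct
    {
        int         nbuckets;
        int         nbatch;
        size_t      space_peak;
    } WorkerHashInstr;

    /*
     * Return the first worker entry whose nbatch is nonzero, i.e. the
     * first participant that actually built a hash table; NULL means no
     * participant built one, so EXPLAIN has nothing to show for this
     * Hash node.
     */
    static const WorkerHashInstr *
    find_hash_builder(const WorkerHashInstr *workers, int num_workers)
    {
        int         i;

        for (i = 0; i < num_workers; i++)
        {
            if (workers[i].nbatch > 0)
                return &workers[i];
        }
        return NULL;
    }

In the patch itself this scan lives in show_hash_info() in explain.c, operating on the SharedHashInfo copied back by ExecHashRetrieveInstrumentation().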
Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/CAEepm%3D3DUQC2-z252N55eOcZBer6DPdM%3DFzrxH9dZc5vYLsjaA%40mail.gmail.com --- src/backend/commands/explain.c | 60 +++++++++++----- src/backend/executor/execParallel.c | 43 +++++++++--- src/backend/executor/execProcnode.c | 3 + src/backend/executor/nodeHash.c | 104 ++++++++++++++++++++++++++++ src/include/executor/nodeHash.h | 9 +++ src/include/nodes/execnodes.h | 26 +++++++ src/test/regress/expected/join.out | 15 ++++ src/test/regress/sql/join.sql | 11 +++ 8 files changed, 245 insertions(+), 26 deletions(-) diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 447f69d044..7e4fbafc53 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -19,7 +19,7 @@ #include "commands/createas.h" #include "commands/defrem.h" #include "commands/prepare.h" -#include "executor/hashjoin.h" +#include "executor/nodeHash.h" #include "foreign/fdwapi.h" #include "nodes/extensible.h" #include "nodes/nodeFuncs.h" @@ -2379,34 +2379,62 @@ show_sort_info(SortState *sortstate, ExplainState *es) static void show_hash_info(HashState *hashstate, ExplainState *es) { - HashJoinTable hashtable; + HashInstrumentation *hinstrument = NULL; - hashtable = hashstate->hashtable; + /* + * In a parallel query, the leader process may or may not have run the + * hash join, and even if it did it may not have built a hash table due to + * timing (if it started late it might have seen no tuples in the outer + * relation and skipped building the hash table). Therefore we have to be + * prepared to get instrumentation data from a worker if there is no hash + * table. + */ + if (hashstate->hashtable) + { + hinstrument = (HashInstrumentation *) + palloc(sizeof(HashInstrumentation)); + ExecHashGetInstrumentation(hinstrument, hashstate->hashtable); + } + else if (hashstate->shared_info) + { + SharedHashInfo *shared_info = hashstate->shared_info; + int i; + + /* Find the first worker that built a hash table. 
*/ + for (i = 0; i < shared_info->num_workers; ++i) + { + if (shared_info->hinstrument[i].nbatch > 0) + { + hinstrument = &shared_info->hinstrument[i]; + break; + } + } + } - if (hashtable) + if (hinstrument) { - long spacePeakKb = (hashtable->spacePeak + 1023) / 1024; + long spacePeakKb = (hinstrument->space_peak + 1023) / 1024; if (es->format != EXPLAIN_FORMAT_TEXT) { - ExplainPropertyLong("Hash Buckets", hashtable->nbuckets, es); + ExplainPropertyLong("Hash Buckets", hinstrument->nbuckets, es); ExplainPropertyLong("Original Hash Buckets", - hashtable->nbuckets_original, es); - ExplainPropertyLong("Hash Batches", hashtable->nbatch, es); + hinstrument->nbuckets_original, es); + ExplainPropertyLong("Hash Batches", hinstrument->nbatch, es); ExplainPropertyLong("Original Hash Batches", - hashtable->nbatch_original, es); + hinstrument->nbatch_original, es); ExplainPropertyLong("Peak Memory Usage", spacePeakKb, es); } - else if (hashtable->nbatch_original != hashtable->nbatch || - hashtable->nbuckets_original != hashtable->nbuckets) + else if (hinstrument->nbatch_original != hinstrument->nbatch || + hinstrument->nbuckets_original != hinstrument->nbuckets) { appendStringInfoSpaces(es->str, es->indent * 2); appendStringInfo(es->str, "Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\n", - hashtable->nbuckets, - hashtable->nbuckets_original, - hashtable->nbatch, - hashtable->nbatch_original, + hinstrument->nbuckets, + hinstrument->nbuckets_original, + hinstrument->nbatch, + hinstrument->nbatch_original, spacePeakKb); } else @@ -2414,7 +2442,7 @@ show_hash_info(HashState *hashstate, ExplainState *es) appendStringInfoSpaces(es->str, es->indent * 2); appendStringInfo(es->str, "Buckets: %d Batches: %d Memory Usage: %ldkB\n", - hashtable->nbuckets, hashtable->nbatch, + hinstrument->nbuckets, hinstrument->nbatch, spacePeakKb); } } diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 53c5254be1..0aca00b0e6 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -29,6 +29,7 @@ #include "executor/nodeBitmapHeapscan.h" #include "executor/nodeCustom.h" #include "executor/nodeForeignscan.h" +#include "executor/nodeHash.h" #include "executor/nodeIndexscan.h" #include "executor/nodeIndexonlyscan.h" #include "executor/nodeSeqscan.h" @@ -259,8 +260,12 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e) ExecBitmapHeapEstimate((BitmapHeapScanState *) planstate, e->pcxt); break; + case T_HashState: + /* even when not parallel-aware, for EXPLAIN ANALYZE */ + ExecHashEstimate((HashState *) planstate, e->pcxt); + break; case T_SortState: - /* even when not parallel-aware */ + /* even when not parallel-aware, for EXPLAIN ANALYZE */ ExecSortEstimate((SortState *) planstate, e->pcxt); break; @@ -458,8 +463,12 @@ ExecParallelInitializeDSM(PlanState *planstate, ExecBitmapHeapInitializeDSM((BitmapHeapScanState *) planstate, d->pcxt); break; + case T_HashState: + /* even when not parallel-aware, for EXPLAIN ANALYZE */ + ExecHashInitializeDSM((HashState *) planstate, d->pcxt); + break; case T_SortState: - /* even when not parallel-aware */ + /* even when not parallel-aware, for EXPLAIN ANALYZE */ ExecSortInitializeDSM((SortState *) planstate, d->pcxt); break; @@ -872,8 +881,12 @@ ExecParallelReInitializeDSM(PlanState *planstate, ExecBitmapHeapReInitializeDSM((BitmapHeapScanState *) planstate, pcxt); break; + case T_HashState: + /* even when not parallel-aware, for EXPLAIN ANALYZE */ + 
ExecHashReInitializeDSM((HashState *) planstate, pcxt); + break; case T_SortState: - /* even when not parallel-aware */ + /* even when not parallel-aware, for EXPLAIN ANALYZE */ ExecSortReInitializeDSM((SortState *) planstate, pcxt); break; @@ -928,12 +941,18 @@ ExecParallelRetrieveInstrumentation(PlanState *planstate, planstate->worker_instrument->num_workers = instrumentation->num_workers; memcpy(&planstate->worker_instrument->instrument, instrument, ibytes); - /* - * Perform any node-type-specific work that needs to be done. Currently, - * only Sort nodes need to do anything here. - */ - if (IsA(planstate, SortState)) - ExecSortRetrieveInstrumentation((SortState *) planstate); + /* Perform any node-type-specific work that needs to be done. */ + switch (nodeTag(planstate)) + { + case T_SortState: + ExecSortRetrieveInstrumentation((SortState *) planstate); + break; + case T_HashState: + ExecHashRetrieveInstrumentation((HashState *) planstate); + break; + default: + break; + } return planstate_tree_walker(planstate, ExecParallelRetrieveInstrumentation, instrumentation); @@ -1160,8 +1179,12 @@ ExecParallelInitializeWorker(PlanState *planstate, ParallelWorkerContext *pwcxt) ExecBitmapHeapInitializeWorker((BitmapHeapScanState *) planstate, pwcxt); break; + case T_HashState: + /* even when not parallel-aware, for EXPLAIN ANALYZE */ + ExecHashInitializeWorker((HashState *) planstate, pwcxt); + break; case T_SortState: - /* even when not parallel-aware */ + /* even when not parallel-aware, for EXPLAIN ANALYZE */ ExecSortInitializeWorker((SortState *) planstate, pwcxt); break; diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c index c1aa5064c9..9befca9016 100644 --- a/src/backend/executor/execProcnode.c +++ b/src/backend/executor/execProcnode.c @@ -751,6 +751,9 @@ ExecShutdownNode(PlanState *node) case T_GatherMergeState: ExecShutdownGatherMerge((GatherMergeState *) node); break; + case T_HashState: + ExecShutdownHash((HashState *) node); + break; default: break; } diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index f7cd8fb347..6fe5d69d55 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -1637,6 +1637,110 @@ ExecHashRemoveNextSkewBucket(HashJoinTable hashtable) } } +/* + * Reserve space in the DSM segment for instrumentation data. + */ +void +ExecHashEstimate(HashState *node, ParallelContext *pcxt) +{ + size_t size; + + size = mul_size(pcxt->nworkers, sizeof(HashInstrumentation)); + size = add_size(size, offsetof(SharedHashInfo, hinstrument)); + shm_toc_estimate_chunk(&pcxt->estimator, size); + shm_toc_estimate_keys(&pcxt->estimator, 1); +} + +/* + * Set up a space in the DSM for all workers to record instrumentation data + * about their hash table. + */ +void +ExecHashInitializeDSM(HashState *node, ParallelContext *pcxt) +{ + size_t size; + + size = offsetof(SharedHashInfo, hinstrument) + + pcxt->nworkers * sizeof(HashInstrumentation); + node->shared_info = (SharedHashInfo *) shm_toc_allocate(pcxt->toc, size); + memset(node->shared_info, 0, size); + node->shared_info->num_workers = pcxt->nworkers; + shm_toc_insert(pcxt->toc, node->ps.plan->plan_node_id, + node->shared_info); +} + +/* + * Reset shared state before beginning a fresh scan. 
+ */ +void +ExecHashReInitializeDSM(HashState *node, ParallelContext *pcxt) +{ + if (node->shared_info != NULL) + { + memset(node->shared_info->hinstrument, 0, + node->shared_info->num_workers * sizeof(HashInstrumentation)); + } +} + +/* + * Locate the DSM space for hash table instrumentation data that we'll write + * to at shutdown time. + */ +void +ExecHashInitializeWorker(HashState *node, ParallelWorkerContext *pwcxt) +{ + SharedHashInfo *shared_info; + + shared_info = (SharedHashInfo *) + shm_toc_lookup(pwcxt->toc, node->ps.plan->plan_node_id, true); + node->hinstrument = &shared_info->hinstrument[ParallelWorkerNumber]; +} + +/* + * Copy instrumentation data from this worker's hash table (if it built one) + * to DSM memory so the leader can retrieve it. This must be done in an + * ExecShutdownHash() rather than ExecEndHash() because the latter runs after + * we've detached from the DSM segment. + */ +void +ExecShutdownHash(HashState *node) +{ + if (node->hinstrument && node->hashtable) + ExecHashGetInstrumentation(node->hinstrument, node->hashtable); +} + +/* + * Retrieve instrumentation data from workers before the DSM segment is + * detached, so that EXPLAIN can access it. + */ +void +ExecHashRetrieveInstrumentation(HashState *node) +{ + SharedHashInfo *shared_info = node->shared_info; + size_t size; + + /* Replace node->shared_info with a copy in backend-local memory. */ + size = offsetof(SharedHashInfo, hinstrument) + + shared_info->num_workers * sizeof(HashInstrumentation); + node->shared_info = palloc(size); + memcpy(node->shared_info, shared_info, size); +} + +/* + * Copy the instrumentation data from 'hashtable' into a HashInstrumentation + * struct. + */ +void +ExecHashGetInstrumentation(HashInstrumentation *instrument, + HashJoinTable hashtable) +{ + instrument->nbuckets = hashtable->nbuckets; + instrument->nbuckets_original = hashtable->nbuckets_original; + instrument->nbatch = hashtable->nbatch; + instrument->nbatch_original = hashtable->nbatch_original; + instrument->space_peak = hashtable->spacePeak; +} + /* * Allocate 'size' bytes from the currently active HashMemoryChunk */ diff --git a/src/include/executor/nodeHash.h b/src/include/executor/nodeHash.h index 3ae556fb6c..75d4c70f6f 100644 --- a/src/include/executor/nodeHash.h +++ b/src/include/executor/nodeHash.h @@ -14,6 +14,7 @@ #ifndef NODEHASH_H #define NODEHASH_H +#include "access/parallel.h" #include "nodes/execnodes.h" extern HashState *ExecInitHash(Hash *node, EState *estate, int eflags); @@ -48,5 +49,13 @@ extern void ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew, int *numbatches, int *num_skew_mcvs); extern int ExecHashGetSkewBucket(HashJoinTable hashtable, uint32 hashvalue); +extern void ExecHashEstimate(HashState *node, ParallelContext *pcxt); +extern void ExecHashInitializeDSM(HashState *node, ParallelContext *pcxt); +extern void ExecHashInitializeWorker(HashState *node, ParallelWorkerContext *pwcxt); +extern void ExecHashReInitializeDSM(HashState *node, ParallelContext *pcxt); +extern void ExecHashRetrieveInstrumentation(HashState *node); +extern void ExecShutdownHash(HashState *node); +extern void ExecHashGetInstrumentation(HashInstrumentation *instrument, + HashJoinTable hashtable); #endif /* NODEHASH_H */ diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index e05bc04f52..084d59ef83 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1980,6 +1980,29 @@ typedef struct GatherMergeState struct binaryheap *gm_heap; /* binary heap of 
slot indices */ } GatherMergeState; +/* ---------------- + * Values displayed by EXPLAIN ANALYZE + * ---------------- + */ +typedef struct HashInstrumentation +{ + int nbuckets; /* number of buckets at end of execution */ + int nbuckets_original; /* planned number of buckets */ + int nbatch; /* number of batches at end of execution */ + int nbatch_original; /* planned number of batches */ + size_t space_peak; /* peak memory usage in bytes */ +} HashInstrumentation; + +/* ---------------- + * Shared memory container for per-worker hash information + * ---------------- + */ +typedef struct SharedHashInfo +{ + int num_workers; + HashInstrumentation hinstrument[FLEXIBLE_ARRAY_MEMBER]; +} SharedHashInfo; + /* ---------------- * HashState information * ---------------- @@ -1990,6 +2013,9 @@ typedef struct HashState HashJoinTable hashtable; /* hash table for the hashjoin */ List *hashkeys; /* list of ExprState nodes */ /* hashkeys is same as parent's hj_InnerHashKeys */ + + SharedHashInfo *shared_info; /* one entry per worker */ + HashInstrumentation *hinstrument; /* this worker's entry */ } HashState; /* ---------------- diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index b7d1790097..001d96dc2d 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -6173,6 +6173,21 @@ $$); rollback to settings; -- A couple of other hash join tests unrelated to work_mem management. +-- Check that EXPLAIN ANALYZE has data even if the leader doesn't participate +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '4MB'; +set local parallel_leader_participation = off; +select * from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); + original | final +----------+------- + 1 | 1 +(1 row) + +rollback to settings; -- A full outer join where every record is matched. -- non-parallel savepoint settings; diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index c6d4a513e8..882601b338 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -2159,6 +2159,17 @@ rollback to settings; -- A couple of other hash join tests unrelated to work_mem management. +-- Check that EXPLAIN ANALYZE has data even if the leader doesn't participate +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '4MB'; +set local parallel_leader_participation = off; +select * from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); +rollback to settings; + -- A full outer join where every record is matched. -- non-parallel From 2c09a5c12a66087218c7f8cba269cd3de51b9b82 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 5 Dec 2017 14:35:33 -0500 Subject: [PATCH 0665/1087] Fix accumulation of parallel worker instrumentation. When a Gather or Gather Merge node is started and stopped multiple times, the old code wouldn't reset the shared state between executions, potentially resulting in dramatically inflated instrumentation data for nodes beneath it. (The per-worker instrumentation ended up OK, I think, but the overall totals were inflated.) Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila, reviewed and tweaked a bit by me.
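The double-counting is easy to reproduce in miniature. In this toy model (the names and the 100-row counts are invented), two workers each contribute 100 tuples per execution; clearing the shared counters before every execution gives the correct total, while hoisting the memset out of the loop reproduces the inflation:

    #include <stdio.h>
    #include <string.h>

    typedef struct
    {
        double      ntuples;
    } ToyInstr;

    int
    main(void)
    {
        ToyInstr    shared[2];      /* stand-in for the per-worker DSM array */
        ToyInstr    total = {0};
        int         exec,
                    w;

        for (exec = 0; exec < 2; exec++)
        {
            /*
             * The fix: reset shared state before every execution.  Hoist
             * this memset above the outer loop to see the bug: the total
             * comes out as 600 instead of the correct 400.
             */
            memset(shared, 0, sizeof(shared));

            for (w = 0; w < 2; w++)
                shared[w].ntuples += 100;   /* each worker returns 100 rows */

            for (w = 0; w < 2; w++)
                total.ntuples += shared[w].ntuples;
        }

        printf("aggregated tuples: %.0f\n", total.ntuples);
        return 0;
    }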
Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com --- src/backend/executor/execParallel.c | 51 ++++++++++++++----- src/test/regress/expected/select_parallel.out | 21 ++++++++ src/test/regress/sql/select_parallel.sql | 7 +++ 3 files changed, 66 insertions(+), 13 deletions(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 0aca00b0e6..ff5cff59b0 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -808,6 +808,19 @@ ExecParallelReinitialize(PlanState *planstate, /* Old workers must already be shut down */ Assert(pei->finished); + /* Clear the instrumentation space from the last round. */ + if (pei->instrumentation) + { + Instrumentation *instrument; + SharedExecutorInstrumentation *sh_instr; + int i; + + sh_instr = pei->instrumentation; + instrument = GetInstrumentationArray(sh_instr); + for (i = 0; i < sh_instr->num_workers * sh_instr->num_plan_nodes; ++i) + InstrInit(&instrument[i], pei->planstate->state->es_instrument); + } + /* Force parameters we're going to pass to workers to be evaluated. */ ExecEvalParamExecParams(sendParams, estate); @@ -925,21 +938,33 @@ ExecParallelRetrieveInstrumentation(PlanState *planstate, for (n = 0; n < instrumentation->num_workers; ++n) InstrAggNode(planstate->instrument, &instrument[n]); - /* - * Also store the per-worker detail. - * - * Worker instrumentation should be allocated in the same context as the - * regular instrumentation information, which is the per-query context. - * Switch into per-query memory context. - */ - oldcontext = MemoryContextSwitchTo(planstate->state->es_query_cxt); - ibytes = mul_size(instrumentation->num_workers, sizeof(Instrumentation)); - planstate->worker_instrument = - palloc(ibytes + offsetof(WorkerInstrumentation, instrument)); - MemoryContextSwitchTo(oldcontext); + if (!planstate->worker_instrument) + { + /* + * Allocate space for the per-worker detail. + * + * Worker instrumentation should be allocated in the same context as + * the regular instrumentation information, which is the per-query + * context. Switch into per-query memory context. + */ + oldcontext = MemoryContextSwitchTo(planstate->state->es_query_cxt); + ibytes = + mul_size(instrumentation->num_workers, sizeof(Instrumentation)); + planstate->worker_instrument = + palloc(ibytes + offsetof(WorkerInstrumentation, instrument)); + MemoryContextSwitchTo(oldcontext); + + for (n = 0; n < instrumentation->num_workers; ++n) + InstrInit(&planstate->worker_instrument->instrument[n], + planstate->state->es_instrument); + } planstate->worker_instrument->num_workers = instrumentation->num_workers; - memcpy(&planstate->worker_instrument->instrument, instrument, ibytes); + + /* Accumulate the per-worker detail. */ + for (n = 0; n < instrumentation->num_workers; ++n) + InstrAggNode(&planstate->worker_instrument->instrument[n], + &instrument[n]); /* Perform any node-type-specific work that needs to be done. 
*/ switch (nodeTag(planstate)) diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index d1d5b228ce..b748c98c91 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -378,7 +378,28 @@ select count(*) from bmscantest where a>1; 99999 (1 row) +-- test accumulation of stats for parallel node reset enable_seqscan; +alter table tenk2 set (parallel_workers = 0); +explain (analyze, timing off, summary off, costs off) + select count(*) from tenk1, tenk2 where tenk1.hundred > 1 + and tenk2.thousand=0; + QUERY PLAN +-------------------------------------------------------------------------- + Aggregate (actual rows=1 loops=1) + -> Nested Loop (actual rows=98000 loops=1) + -> Seq Scan on tenk2 (actual rows=10 loops=1) + Filter: (thousand = 0) + Rows Removed by Filter: 9990 + -> Gather (actual rows=9800 loops=10) + Workers Planned: 4 + Workers Launched: 4 + -> Parallel Seq Scan on tenk1 (actual rows=1960 loops=50) + Filter: (hundred > 1) + Rows Removed by Filter: 40 +(11 rows) + +alter table tenk2 reset (parallel_workers); reset enable_indexscan; reset enable_hashjoin; reset enable_mergejoin; diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index bb4e34adbe..00df92779c 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -149,7 +149,14 @@ insert into bmscantest select r, 'fooooooooooooooooooooooooooooooooooooooooooooo create index i_bmtest ON bmscantest(a); select count(*) from bmscantest where a>1; +-- test accumulation of stats for parallel node reset enable_seqscan; +alter table tenk2 set (parallel_workers = 0); +explain (analyze, timing off, summary off, costs off) + select count(*) from tenk1, tenk2 where tenk1.hundred > 1 + and tenk2.thousand=0; +alter table tenk2 reset (parallel_workers); + reset enable_indexscan; reset enable_hashjoin; reset enable_mergejoin; From 8097d189ccc2cd55a8bf189bd21cb196e3cfc385 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 5 Dec 2017 15:29:24 -0500 Subject: [PATCH 0666/1087] doc: Update memory requirements for FOP Reported-by: Dave Page --- doc/src/sgml/docguide.sgml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/docguide.sgml b/doc/src/sgml/docguide.sgml index 090ca95835..b4338e0e73 100644 --- a/doc/src/sgml/docguide.sgml +++ b/doc/src/sgml/docguide.sgml @@ -336,11 +336,11 @@ checking for fop... fop file ~/.foprc, for example: # FOP binary distribution -FOP_OPTS='-Xmx1000m' +FOP_OPTS='-Xmx1500m' # Debian -JAVA_ARGS='-Xmx1000m' +JAVA_ARGS='-Xmx1500m' # Red Hat -ADDITIONAL_FLAGS='-Xmx1000m' +ADDITIONAL_FLAGS='-Xmx1500m' There is a minimum amount of memory that is required, and to some extent more memory appears to make things a bit faster. On systems with very From ab72716778128fb63d54ac256adf7fe6820a1185 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 5 Dec 2017 17:28:39 -0500 Subject: [PATCH 0667/1087] Support Parallel Append plan nodes. When we create an Append node, we can spread out the workers over the subplans instead of piling on to each subplan one at a time, which should typically be a bit more efficient, both because the startup cost of any plan executed entirely by one worker is paid only once and also because of reduced contention. We can also construct Append plans using a mix of partial and non-partial subplans, which may allow for parallelism in places that otherwise couldn't support it. 
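The scheduling idea can be illustrated with a single-threaded toy model. This is a sketch only: the plan counts are made up, the shared cursor is really protected by an LWLock, and the executor's bookkeeping around the wrap-around sentinel is more careful than this. Non-partial subplans (which the patch sorts ahead of the partial ones, cheapest last) are claimed by exactly one worker each; partial subplans stay claimable so workers spread out across them:

    #include <stdbool.h>
    #include <stdio.h>

    #define NPLANS          5
    #define FIRST_PARTIAL   2   /* subplans 0,1 non-partial; 2..4 partial */

    static bool finished[NPLANS];
    static int  next_plan = 0;  /* shared cursor; LWLock-protected in PG */

    /*
     * Hand the calling worker its next subplan, or -1 when everything is
     * finished.  A non-partial subplan is marked finished as soon as one
     * worker claims it; a partial subplan stays available until some
     * worker runs it to completion.
     */
    static int
    choose_next_subplan(void)
    {
        int         probe = next_plan;

        while (finished[probe])
        {
            probe = (probe < NPLANS - 1) ? probe + 1 : FIRST_PARTIAL;
            if (probe == next_plan)
                return -1;      /* tried everything */
        }

        if (probe < FIRST_PARTIAL)
            finished[probe] = true;     /* non-partial: one worker only */
        next_plan = (probe < NPLANS - 1) ? probe + 1 : FIRST_PARTIAL;
        return probe;
    }

    int
    main(void)
    {
        int         picks[NPLANS] = {0};
        int         plan;

        /* Pretend each partial subplan is exhausted after two picks. */
        while ((plan = choose_next_subplan()) != -1)
        {
            printf("a worker runs subplan %d\n", plan);
            if (plan >= FIRST_PARTIAL && ++picks[plan] == 2)
                finished[plan] = true;
        }
        return 0;
    }

Running this prints subplans 0 and 1 once each, then cycles 2, 3, 4 until each is exhausted, which is the spreading-out behavior described above.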
Unfortunately, this patch doesn't handle the important case of parallelizing UNION ALL by running each branch in a separate worker; the executor infrastructure is added here, but more planner work is needed. Amit Khandekar, Robert Haas, Amul Sul, reviewed and tested by Ashutosh Bapat, Amit Langote, Rafia Sabih, Amit Kapila, and Rajkumar Raghuwanshi. Discussion: http://postgr.es/m/CAJ3gD9dy0K_E8r727heqXoBmWZ83HwLFwdcaSSmBQ1+S+vRuUQ@mail.gmail.com --- doc/src/sgml/config.sgml | 14 + doc/src/sgml/monitoring.sgml | 7 +- src/backend/executor/execParallel.c | 19 + src/backend/executor/nodeAppend.c | 331 +++++++++++++++--- src/backend/nodes/copyfuncs.c | 1 + src/backend/nodes/list.c | 38 ++ src/backend/nodes/outfuncs.c | 1 + src/backend/nodes/readfuncs.c | 1 + src/backend/optimizer/path/allpaths.c | 216 ++++++++++-- src/backend/optimizer/path/costsize.c | 164 +++++++++ src/backend/optimizer/path/joinrels.c | 3 +- src/backend/optimizer/plan/createplan.c | 10 +- src/backend/optimizer/plan/planner.c | 5 +- src/backend/optimizer/prep/prepunion.c | 7 +- src/backend/optimizer/util/pathnode.c | 88 ++++- src/backend/storage/lmgr/lwlock.c | 1 + src/backend/utils/misc/guc.c | 9 + src/backend/utils/misc/postgresql.conf.sample | 1 + src/include/executor/nodeAppend.h | 5 + src/include/nodes/execnodes.h | 14 +- src/include/nodes/pg_list.h | 3 + src/include/nodes/plannodes.h | 1 + src/include/nodes/relation.h | 6 + src/include/optimizer/cost.h | 2 + src/include/optimizer/pathnode.h | 9 +- src/include/storage/lwlock.h | 1 + src/test/regress/expected/inherit.out | 2 + src/test/regress/expected/select_parallel.out | 91 ++++- src/test/regress/expected/sysviews.out | 3 +- src/test/regress/sql/inherit.sql | 2 + src/test/regress/sql/select_parallel.sql | 33 +- 31 files changed, 959 insertions(+), 129 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 563ad1fc7f..9ae6861cd7 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -3633,6 +3633,20 @@ ANY num_sync ( + enable_parallel_append (boolean) + + enable_parallel_append configuration parameter + + + + + Enables or disables the query planner's use of parallel-aware + append plan types. The default is on. + + + + enable_partition_wise_join (boolean) diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index 8d461c8145..b6f80d9708 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -845,7 +845,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser - LWLock + LWLock ShmemIndexLock Waiting to find or allocate space in shared memory. @@ -1116,6 +1116,11 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser tbm Waiting for TBM shared iterator lock. + + parallel_append + Waiting to choose the next subplan during Parallel Append plan + execution. 
+ Lock relation diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index ff5cff59b0..558cb08b07 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -26,6 +26,7 @@ #include "executor/execExpr.h" #include "executor/execParallel.h" #include "executor/executor.h" +#include "executor/nodeAppend.h" #include "executor/nodeBitmapHeapscan.h" #include "executor/nodeCustom.h" #include "executor/nodeForeignscan.h" @@ -250,6 +251,11 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e) ExecForeignScanEstimate((ForeignScanState *) planstate, e->pcxt); break; + case T_AppendState: + if (planstate->plan->parallel_aware) + ExecAppendEstimate((AppendState *) planstate, + e->pcxt); + break; case T_CustomScanState: if (planstate->plan->parallel_aware) ExecCustomScanEstimate((CustomScanState *) planstate, @@ -453,6 +459,11 @@ ExecParallelInitializeDSM(PlanState *planstate, ExecForeignScanInitializeDSM((ForeignScanState *) planstate, d->pcxt); break; + case T_AppendState: + if (planstate->plan->parallel_aware) + ExecAppendInitializeDSM((AppendState *) planstate, + d->pcxt); + break; case T_CustomScanState: if (planstate->plan->parallel_aware) ExecCustomScanInitializeDSM((CustomScanState *) planstate, @@ -884,6 +895,10 @@ ExecParallelReInitializeDSM(PlanState *planstate, ExecForeignScanReInitializeDSM((ForeignScanState *) planstate, pcxt); break; + case T_AppendState: + if (planstate->plan->parallel_aware) + ExecAppendReInitializeDSM((AppendState *) planstate, pcxt); + break; case T_CustomScanState: if (planstate->plan->parallel_aware) ExecCustomScanReInitializeDSM((CustomScanState *) planstate, @@ -1194,6 +1209,10 @@ ExecParallelInitializeWorker(PlanState *planstate, ParallelWorkerContext *pwcxt) ExecForeignScanInitializeWorker((ForeignScanState *) planstate, pwcxt); break; + case T_AppendState: + if (planstate->plan->parallel_aware) + ExecAppendInitializeWorker((AppendState *) planstate, pwcxt); + break; case T_CustomScanState: if (planstate->plan->parallel_aware) ExecCustomScanInitializeWorker((CustomScanState *) planstate, diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c index 1d2fb35d55..246a0b2d85 100644 --- a/src/backend/executor/nodeAppend.c +++ b/src/backend/executor/nodeAppend.c @@ -61,51 +61,27 @@ #include "executor/nodeAppend.h" #include "miscadmin.h" -static TupleTableSlot *ExecAppend(PlanState *pstate); -static bool exec_append_initialize_next(AppendState *appendstate); - - -/* ---------------------------------------------------------------- - * exec_append_initialize_next - * - * Sets up the append state node for the "next" scan. - * - * Returns t iff there is a "next" scan to process. - * ---------------------------------------------------------------- - */ -static bool -exec_append_initialize_next(AppendState *appendstate) +/* Shared state for parallel-aware Append. */ +struct ParallelAppendState { - int whichplan; + LWLock pa_lock; /* mutual exclusion to choose next subplan */ + int pa_next_plan; /* next plan to choose by any worker */ /* - * get information from the append node + * pa_finished[i] should be true if no more workers should select subplan + * i. for a non-partial plan, this should be set to true as soon as a + * worker selects the plan; for a partial plan, it remains false until + * some worker executes the plan to completion. 
*/ - whichplan = appendstate->as_whichplan; + bool pa_finished[FLEXIBLE_ARRAY_MEMBER]; +}; - if (whichplan < 0) - { - /* - * if scanning in reverse, we start at the last scan in the list and - * then proceed back to the first.. in any case we inform ExecAppend - * that we are at the end of the line by returning false - */ - appendstate->as_whichplan = 0; - return false; - } - else if (whichplan >= appendstate->as_nplans) - { - /* - * as above, end the scan if we go beyond the last scan in our list.. - */ - appendstate->as_whichplan = appendstate->as_nplans - 1; - return false; - } - else - { - return true; - } -} +#define INVALID_SUBPLAN_INDEX -1 + +static TupleTableSlot *ExecAppend(PlanState *pstate); +static bool choose_next_subplan_locally(AppendState *node); +static bool choose_next_subplan_for_leader(AppendState *node); +static bool choose_next_subplan_for_worker(AppendState *node); /* ---------------------------------------------------------------- * ExecInitAppend @@ -185,10 +161,15 @@ ExecInitAppend(Append *node, EState *estate, int eflags) appendstate->ps.ps_ProjInfo = NULL; /* - * initialize to scan first subplan + * Parallel-aware append plans must choose the first subplan to execute by + * looking at shared memory, but non-parallel-aware append plans can + * always start with the first subplan. */ - appendstate->as_whichplan = 0; - exec_append_initialize_next(appendstate); + appendstate->as_whichplan = + appendstate->ps.plan->parallel_aware ? INVALID_SUBPLAN_INDEX : 0; + + /* If parallel-aware, this will be overridden later. */ + appendstate->choose_next_subplan = choose_next_subplan_locally; return appendstate; } @@ -204,6 +185,11 @@ ExecAppend(PlanState *pstate) { AppendState *node = castNode(AppendState, pstate); + /* If no subplan has been chosen, we must choose one before proceeding. */ + if (node->as_whichplan == INVALID_SUBPLAN_INDEX && + !node->choose_next_subplan(node)) + return ExecClearTuple(node->ps.ps_ResultTupleSlot); + for (;;) { PlanState *subnode; @@ -214,6 +200,7 @@ ExecAppend(PlanState *pstate) /* * figure out which subplan we are currently processing */ + Assert(node->as_whichplan >= 0 && node->as_whichplan < node->as_nplans); subnode = node->appendplans[node->as_whichplan]; /* @@ -231,19 +218,9 @@ ExecAppend(PlanState *pstate) return result; } - /* - * Go on to the "next" subplan in the appropriate direction. If no - * more subplans, return the empty slot set up for us by - * ExecInitAppend. - */ - if (ScanDirectionIsForward(node->ps.state->es_direction)) - node->as_whichplan++; - else - node->as_whichplan--; - if (!exec_append_initialize_next(node)) + /* choose new subplan; if none, we're done */ + if (!node->choose_next_subplan(node)) return ExecClearTuple(node->ps.ps_ResultTupleSlot); - - /* Else loop back and try to get a tuple from the new subplan */ } } @@ -298,6 +275,246 @@ ExecReScanAppend(AppendState *node) if (subnode->chgParam == NULL) ExecReScan(subnode); } - node->as_whichplan = 0; - exec_append_initialize_next(node); + + node->as_whichplan = + node->ps.plan->parallel_aware ? INVALID_SUBPLAN_INDEX : 0; +} + +/* ---------------------------------------------------------------- + * Parallel Append Support + * ---------------------------------------------------------------- + */ + +/* ---------------------------------------------------------------- + * ExecAppendEstimate + * + * Compute the amount of space we'll need in the parallel + * query DSM, and inform pcxt->estimator about our needs. 
+ * ---------------------------------------------------------------- + */ +void +ExecAppendEstimate(AppendState *node, + ParallelContext *pcxt) +{ + node->pstate_len = + add_size(offsetof(ParallelAppendState, pa_finished), + sizeof(bool) * node->as_nplans); + + shm_toc_estimate_chunk(&pcxt->estimator, node->pstate_len); + shm_toc_estimate_keys(&pcxt->estimator, 1); +} + + +/* ---------------------------------------------------------------- + * ExecAppendInitializeDSM + * + * Set up shared state for Parallel Append. + * ---------------------------------------------------------------- + */ +void +ExecAppendInitializeDSM(AppendState *node, + ParallelContext *pcxt) +{ + ParallelAppendState *pstate; + + pstate = shm_toc_allocate(pcxt->toc, node->pstate_len); + memset(pstate, 0, node->pstate_len); + LWLockInitialize(&pstate->pa_lock, LWTRANCHE_PARALLEL_APPEND); + shm_toc_insert(pcxt->toc, node->ps.plan->plan_node_id, pstate); + + node->as_pstate = pstate; + node->choose_next_subplan = choose_next_subplan_for_leader; +} + +/* ---------------------------------------------------------------- + * ExecAppendReInitializeDSM + * + * Reset shared state before beginning a fresh scan. + * ---------------------------------------------------------------- + */ +void +ExecAppendReInitializeDSM(AppendState *node, ParallelContext *pcxt) +{ + ParallelAppendState *pstate = node->as_pstate; + + pstate->pa_next_plan = 0; + memset(pstate->pa_finished, 0, sizeof(bool) * node->as_nplans); +} + +/* ---------------------------------------------------------------- + * ExecAppendInitializeWorker + * + * Copy relevant information from TOC into planstate, and initialize + * whatever is required to choose and execute the optimal subplan. + * ---------------------------------------------------------------- + */ +void +ExecAppendInitializeWorker(AppendState *node, ParallelWorkerContext *pwcxt) +{ + node->as_pstate = shm_toc_lookup(pwcxt->toc, node->ps.plan->plan_node_id, false); + node->choose_next_subplan = choose_next_subplan_for_worker; +} + +/* ---------------------------------------------------------------- + * choose_next_subplan_locally + * + * Choose next subplan for a non-parallel-aware Append, + * returning false if there are no more. + * ---------------------------------------------------------------- + */ +static bool +choose_next_subplan_locally(AppendState *node) +{ + int whichplan = node->as_whichplan; + + /* We should never see INVALID_SUBPLAN_INDEX in this case. */ + Assert(whichplan >= 0 && whichplan <= node->as_nplans); + + if (ScanDirectionIsForward(node->ps.state->es_direction)) + { + if (whichplan >= node->as_nplans - 1) + return false; + node->as_whichplan++; + } + else + { + if (whichplan <= 0) + return false; + node->as_whichplan--; + } + + return true; +} + +/* ---------------------------------------------------------------- + * choose_next_subplan_for_leader + * + * Try to pick a plan which doesn't commit us to doing much + * work locally, so that as much work as possible is done in + * the workers. Cheapest subplans are at the end. 
+ * ---------------------------------------------------------------- + */ +static bool +choose_next_subplan_for_leader(AppendState *node) +{ + ParallelAppendState *pstate = node->as_pstate; + Append *append = (Append *) node->ps.plan; + + /* Backward scan is not supported by parallel-aware plans */ + Assert(ScanDirectionIsForward(node->ps.state->es_direction)); + + LWLockAcquire(&pstate->pa_lock, LW_EXCLUSIVE); + + if (node->as_whichplan != INVALID_SUBPLAN_INDEX) + { + /* Mark just-completed subplan as finished. */ + node->as_pstate->pa_finished[node->as_whichplan] = true; + } + else + { + /* Start with last subplan. */ + node->as_whichplan = node->as_nplans - 1; + } + + /* Loop until we find a subplan to execute. */ + while (pstate->pa_finished[node->as_whichplan]) + { + if (node->as_whichplan == 0) + { + pstate->pa_next_plan = INVALID_SUBPLAN_INDEX; + node->as_whichplan = INVALID_SUBPLAN_INDEX; + LWLockRelease(&pstate->pa_lock); + return false; + } + node->as_whichplan--; + } + + /* If non-partial, immediately mark as finished. */ + if (node->as_whichplan < append->first_partial_plan) + node->as_pstate->pa_finished[node->as_whichplan] = true; + + LWLockRelease(&pstate->pa_lock); + + return true; +} + +/* ---------------------------------------------------------------- + * choose_next_subplan_for_worker + * + * Choose next subplan for a parallel-aware Append, returning + * false if there are no more. + * + * We start from the first plan and advance through the list; + * when we get back to the end, we loop back to the first + * nonpartial plan. This assigns the non-partial plans first + * in order of descending cost and then spreads out the + * workers as evenly as possible across the remaining partial + * plans. + * ---------------------------------------------------------------- + */ +static bool +choose_next_subplan_for_worker(AppendState *node) +{ + ParallelAppendState *pstate = node->as_pstate; + Append *append = (Append *) node->ps.plan; + + /* Backward scan is not supported by parallel-aware plans */ + Assert(ScanDirectionIsForward(node->ps.state->es_direction)); + + LWLockAcquire(&pstate->pa_lock, LW_EXCLUSIVE); + + /* Mark just-completed subplan as finished. */ + if (node->as_whichplan != INVALID_SUBPLAN_INDEX) + node->as_pstate->pa_finished[node->as_whichplan] = true; + + /* If all the plans are already done, we have nothing to do */ + if (pstate->pa_next_plan == INVALID_SUBPLAN_INDEX) + { + LWLockRelease(&pstate->pa_lock); + return false; + } + + /* Loop until we find a subplan to execute. */ + while (pstate->pa_finished[pstate->pa_next_plan]) + { + if (pstate->pa_next_plan < node->as_nplans - 1) + { + /* Advance to next plan. */ + pstate->pa_next_plan++; + } + else if (append->first_partial_plan < node->as_nplans) + { + /* Loop back to first partial plan. */ + pstate->pa_next_plan = append->first_partial_plan; + } + else + { + /* At last plan, no partial plans, arrange to bail out. */ + pstate->pa_next_plan = node->as_whichplan; + } + + if (pstate->pa_next_plan == node->as_whichplan) + { + /* We've tried everything! */ + pstate->pa_next_plan = INVALID_SUBPLAN_INDEX; + LWLockRelease(&pstate->pa_lock); + return false; + } + } + + /* Pick the plan we found, and advance pa_next_plan one more time. */ + node->as_whichplan = pstate->pa_next_plan++; + if (pstate->pa_next_plan >= node->as_nplans) + { + Assert(append->first_partial_plan < node->as_nplans); + pstate->pa_next_plan = append->first_partial_plan; + } + + /* If non-partial, immediately mark as finished. 
*/ + if (node->as_whichplan < append->first_partial_plan) + node->as_pstate->pa_finished[node->as_whichplan] = true; + + LWLockRelease(&pstate->pa_lock); + + return true; } diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index aff9a62106..b1515dd8e1 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -242,6 +242,7 @@ _copyAppend(const Append *from) */ COPY_NODE_FIELD(partitioned_rels); COPY_NODE_FIELD(appendplans); + COPY_SCALAR_FIELD(first_partial_plan); return newnode; } diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c index acaf4b5315..bee6244adc 100644 --- a/src/backend/nodes/list.c +++ b/src/backend/nodes/list.c @@ -1249,6 +1249,44 @@ list_copy_tail(const List *oldlist, int nskip) return newlist; } +/* + * Sort a list using qsort. A sorted list is built but the cells of the + * original list are re-used. The comparator function receives arguments of + * type ListCell ** + */ +List * +list_qsort(const List *list, list_qsort_comparator cmp) +{ + ListCell *cell; + int i; + int len = list_length(list); + ListCell **list_arr; + List *new_list; + + if (len == 0) + return NIL; + + i = 0; + list_arr = palloc(sizeof(ListCell *) * len); + foreach(cell, list) + list_arr[i++] = cell; + + qsort(list_arr, len, sizeof(ListCell *), cmp); + + new_list = (List *) palloc(sizeof(List)); + new_list->type = list->type; + new_list->length = len; + new_list->head = list_arr[0]; + new_list->tail = list_arr[len - 1]; + + for (i = 0; i < len - 1; i++) + list_arr[i]->next = list_arr[i + 1]; + + list_arr[len - 1]->next = NULL; + pfree(list_arr); + return new_list; +} + /* * Temporary compatibility functions * diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index c97ee24ade..b59a5219a7 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -399,6 +399,7 @@ _outAppend(StringInfo str, const Append *node) WRITE_NODE_FIELD(partitioned_rels); WRITE_NODE_FIELD(appendplans); + WRITE_INT_FIELD(first_partial_plan); } static void diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 7eb67fc040..0d17ae89b0 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -1600,6 +1600,7 @@ _readAppend(void) READ_NODE_FIELD(partitioned_rels); READ_NODE_FIELD(appendplans); + READ_INT_FIELD(first_partial_plan); READ_DONE(); } diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 44f6b03442..47986ba80a 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -101,7 +101,8 @@ static void generate_mergeappend_paths(PlannerInfo *root, RelOptInfo *rel, static Path *get_cheapest_parameterized_child_path(PlannerInfo *root, RelOptInfo *rel, Relids required_outer); -static List *accumulate_append_subpath(List *subpaths, Path *path); +static void accumulate_append_subpath(Path *path, + List **subpaths, List **special_subpaths); static void set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel, Index rti, RangeTblEntry *rte); static void set_function_pathlist(PlannerInfo *root, RelOptInfo *rel, @@ -1331,13 +1332,17 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, List *subpaths = NIL; bool subpaths_valid = true; List *partial_subpaths = NIL; + List *pa_partial_subpaths = NIL; + List *pa_nonpartial_subpaths = NIL; bool partial_subpaths_valid = true; + bool pa_subpaths_valid = enable_parallel_append; List *all_child_pathkeys = NIL; List *all_child_outers = NIL; ListCell *l; List 
*partitioned_rels = NIL; RangeTblEntry *rte; bool build_partitioned_rels = false; + double partial_rows = -1; if (IS_SIMPLE_REL(rel)) { @@ -1388,6 +1393,7 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, { RelOptInfo *childrel = lfirst(l); ListCell *lcp; + Path *cheapest_partial_path = NULL; /* * If we need to build partitioned_rels, accumulate the partitioned @@ -1408,18 +1414,69 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, * If not, there's no workable unparameterized path. */ if (childrel->cheapest_total_path->param_info == NULL) - subpaths = accumulate_append_subpath(subpaths, - childrel->cheapest_total_path); + accumulate_append_subpath(childrel->cheapest_total_path, + &subpaths, NULL); else subpaths_valid = false; /* Same idea, but for a partial plan. */ if (childrel->partial_pathlist != NIL) - partial_subpaths = accumulate_append_subpath(partial_subpaths, - linitial(childrel->partial_pathlist)); + { + cheapest_partial_path = linitial(childrel->partial_pathlist); + accumulate_append_subpath(cheapest_partial_path, + &partial_subpaths, NULL); + } else partial_subpaths_valid = false; + /* + * Same idea, but for a parallel append mixing partial and non-partial + * paths. + */ + if (pa_subpaths_valid) + { + Path *nppath = NULL; + + nppath = + get_cheapest_parallel_safe_total_inner(childrel->pathlist); + + if (cheapest_partial_path == NULL && nppath == NULL) + { + /* Neither a partial nor a parallel-safe path? Forget it. */ + pa_subpaths_valid = false; + } + else if (nppath == NULL || + (cheapest_partial_path != NULL && + cheapest_partial_path->total_cost < nppath->total_cost)) + { + /* Partial path is cheaper or the only option. */ + Assert(cheapest_partial_path != NULL); + accumulate_append_subpath(cheapest_partial_path, + &pa_partial_subpaths, + &pa_nonpartial_subpaths); + + } + else + { + /* + * Either we've got only a non-partial path, or we think that + * a single backend can execute the best non-partial path + * faster than all the parallel backends working together can + * execute the best partial path. + * + * It might make sense to be more aggressive here. Even if + * the best non-partial path is more expensive than the best + * partial path, it could still be better to choose the + * non-partial path if there are several such paths that can + * be given to different workers. For now, we don't try to + * figure that out. + */ + accumulate_append_subpath(nppath, + &pa_nonpartial_subpaths, + NULL); + } + } + /* * Collect lists of all the available path orderings and * parameterizations for all the children. We use these as a @@ -1491,11 +1548,13 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, * if we have zero or one live subpath due to constraint exclusion.) */ if (subpaths_valid) - add_path(rel, (Path *) create_append_path(rel, subpaths, NULL, 0, - partitioned_rels)); + add_path(rel, (Path *) create_append_path(rel, subpaths, NIL, + NULL, 0, false, + partitioned_rels, -1)); /* - * Consider an append of partial unordered, unparameterized partial paths. + * Consider an append of unordered, unparameterized partial paths. Make + * it parallel-aware if possible. */ if (partial_subpaths_valid) { @@ -1503,12 +1562,7 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, ListCell *lc; int parallel_workers = 0; - /* - * Decide on the number of workers to request for this append path. - * For now, we just use the maximum value from among the members. 
It - * might be useful to use a higher number if the Append node were - * smart enough to spread out the workers, but it currently isn't. - */ + /* Find the highest number of workers requested for any subpath. */ foreach(lc, partial_subpaths) { Path *path = lfirst(lc); @@ -1517,9 +1571,78 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, } Assert(parallel_workers > 0); + /* + * If the use of parallel append is permitted, always request at least + * log2(# of children) workers. We assume it can be useful to have + * extra workers in this case because they will be spread out across + * the children. The precise formula is just a guess, but we don't + * want to end up with a radically different answer for a table with N + * partitions vs. an unpartitioned table with the same data, so the + * use of some kind of log-scaling here seems to make some sense. + */ + if (enable_parallel_append) + { + parallel_workers = Max(parallel_workers, + fls(list_length(live_childrels))); + parallel_workers = Min(parallel_workers, + max_parallel_workers_per_gather); + } + Assert(parallel_workers > 0); + /* Generate a partial append path. */ - appendpath = create_append_path(rel, partial_subpaths, NULL, - parallel_workers, partitioned_rels); + appendpath = create_append_path(rel, NIL, partial_subpaths, NULL, + parallel_workers, + enable_parallel_append, + partitioned_rels, -1); + + /* + * Make sure any subsequent partial paths use the same row count + * estimate. + */ + partial_rows = appendpath->path.rows; + + /* Add the path. */ + add_partial_path(rel, (Path *) appendpath); + } + + /* + * Consider a parallel-aware append using a mix of partial and non-partial + * paths. (This only makes sense if there's at least one child which has + * a non-partial path that is substantially cheaper than any partial path; + * otherwise, we should use the append path added in the previous step.) + */ + if (pa_subpaths_valid && pa_nonpartial_subpaths != NIL) + { + AppendPath *appendpath; + ListCell *lc; + int parallel_workers = 0; + + /* + * Find the highest number of workers requested for any partial + * subpath. + */ + foreach(lc, pa_partial_subpaths) + { + Path *path = lfirst(lc); + + parallel_workers = Max(parallel_workers, path->parallel_workers); + } + + /* + * Same formula here as above. It's even more important in this + * instance because the non-partial paths won't contribute anything to + * the planned number of parallel workers.
+ */ + parallel_workers = Max(parallel_workers, + fls(list_length(live_childrels))); + parallel_workers = Min(parallel_workers, + max_parallel_workers_per_gather); + Assert(parallel_workers > 0); + + appendpath = create_append_path(rel, pa_nonpartial_subpaths, + pa_partial_subpaths, + NULL, parallel_workers, true, + partitioned_rels, partial_rows); add_partial_path(rel, (Path *) appendpath); } @@ -1567,13 +1690,14 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, subpaths_valid = false; break; } - subpaths = accumulate_append_subpath(subpaths, subpath); + accumulate_append_subpath(subpath, &subpaths, NULL); } if (subpaths_valid) add_path(rel, (Path *) - create_append_path(rel, subpaths, required_outer, 0, - partitioned_rels)); + create_append_path(rel, subpaths, NIL, + required_outer, 0, false, + partitioned_rels, -1)); } } @@ -1657,10 +1781,10 @@ generate_mergeappend_paths(PlannerInfo *root, RelOptInfo *rel, if (cheapest_startup != cheapest_total) startup_neq_total = true; - startup_subpaths = - accumulate_append_subpath(startup_subpaths, cheapest_startup); - total_subpaths = - accumulate_append_subpath(total_subpaths, cheapest_total); + accumulate_append_subpath(cheapest_startup, + &startup_subpaths, NULL); + accumulate_append_subpath(cheapest_total, + &total_subpaths, NULL); } /* ... and build the MergeAppend paths */ @@ -1756,7 +1880,7 @@ get_cheapest_parameterized_child_path(PlannerInfo *root, RelOptInfo *rel, /* * accumulate_append_subpath - * Add a subpath to the list being built for an Append or MergeAppend + * Add a subpath to the list being built for an Append or MergeAppend. * * It's possible that the child is itself an Append or MergeAppend path, in * which case we can "cut out the middleman" and just add its child paths to @@ -1767,26 +1891,53 @@ get_cheapest_parameterized_child_path(PlannerInfo *root, RelOptInfo *rel, * omitting a sort step, which seems fine: if the parent is to be an Append, * its result would be unsorted anyway, while if the parent is to be a * MergeAppend, there's no point in a separate sort on a child. + * + * Normally, either path is a partial path and subpaths is a list of partial + * paths, or else path is a non-partial plan and subpaths is a list of those. + * However, if path is a parallel-aware Append, then we add its partial path + * children to subpaths and the rest to special_subpaths. If the latter is + * NULL, we don't flatten the path at all (unless it contains only partial + * paths).
*/ -static List * -accumulate_append_subpath(List *subpaths, Path *path) +static void +accumulate_append_subpath(Path *path, List **subpaths, List **special_subpaths) { if (IsA(path, AppendPath)) { AppendPath *apath = (AppendPath *) path; - /* list_copy is important here to avoid sharing list substructure */ - return list_concat(subpaths, list_copy(apath->subpaths)); + if (!apath->path.parallel_aware || apath->first_partial_path == 0) + { + /* list_copy is important here to avoid sharing list substructure */ + *subpaths = list_concat(*subpaths, list_copy(apath->subpaths)); + return; + } + else if (special_subpaths != NULL) + { + List *new_special_subpaths; + + /* Split Parallel Append into partial and non-partial subpaths */ + *subpaths = list_concat(*subpaths, + list_copy_tail(apath->subpaths, + apath->first_partial_path)); + new_special_subpaths = + list_truncate(list_copy(apath->subpaths), + apath->first_partial_path); + *special_subpaths = list_concat(*special_subpaths, + new_special_subpaths); + } } else if (IsA(path, MergeAppendPath)) { MergeAppendPath *mpath = (MergeAppendPath *) path; /* list_copy is important here to avoid sharing list substructure */ - return list_concat(subpaths, list_copy(mpath->subpaths)); + *subpaths = list_concat(*subpaths, list_copy(mpath->subpaths)); + return; } - else - return lappend(subpaths, path); + + *subpaths = lappend(*subpaths, path); } /* @@ -1809,7 +1960,8 @@ set_dummy_rel_pathlist(RelOptInfo *rel) rel->pathlist = NIL; rel->partial_pathlist = NIL; - add_path(rel, (Path *) create_append_path(rel, NIL, NULL, 0, NIL)); + add_path(rel, (Path *) create_append_path(rel, NIL, NIL, NULL, + 0, false, NIL, -1)); /* * We set the cheapest path immediately, to ensure that IS_DUMMY_REL() diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index d11bf19e30..877827dcb5 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -128,6 +128,7 @@ bool enable_mergejoin = true; bool enable_hashjoin = true; bool enable_gathermerge = true; bool enable_partition_wise_join = false; +bool enable_parallel_append = true; typedef struct { @@ -160,6 +161,8 @@ static Selectivity get_foreign_key_join_selectivity(PlannerInfo *root, Relids inner_relids, SpecialJoinInfo *sjinfo, List **restrictlist); +static Cost append_nonpartial_cost(List *subpaths, int numpaths, + int parallel_workers); static void set_rel_width(PlannerInfo *root, RelOptInfo *rel); static double relation_byte_size(double tuples, int width); static double page_size(double tuples, int width); @@ -1741,6 +1744,167 @@ cost_sort(Path *path, PlannerInfo *root, path->total_cost = startup_cost + run_cost; } +/* + * append_nonpartial_cost + * Estimate the cost of the non-partial paths in a Parallel Append. + * The non-partial paths are assumed to be the first "numpaths" paths + * from the subpaths list, and to be in order of decreasing cost. + */ +static Cost +append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers) +{ + Cost *costarr; + int arrlen; + ListCell *l; + ListCell *cell; + int i; + int path_index; + int min_index; + int max_index; + + if (numpaths == 0) + return 0; + + /* + * Array length is number of workers or number of relevant paths, + * whichever is less. + */ + arrlen = Min(parallel_workers, numpaths); + costarr = (Cost *) palloc(sizeof(Cost) * arrlen); + + /* The first few paths will each be claimed by a different worker.
*/ + path_index = 0; + foreach(cell, subpaths) + { + Path *subpath = (Path *) lfirst(cell); + + if (path_index == arrlen) + break; + costarr[path_index++] = subpath->total_cost; + } + + /* + * Since subpaths are sorted by decreasing cost, the last one will have + * the minimum cost. + */ + min_index = arrlen - 1; + + /* + * For each of the remaining subpaths, add its cost to the array element + * with minimum cost. + */ + for_each_cell(l, cell) + { + Path *subpath = (Path *) lfirst(l); + int i; + + /* Consider only the non-partial paths */ + if (path_index++ == numpaths) + break; + + costarr[min_index] += subpath->total_cost; + + /* Update the new min cost array index */ + for (min_index = i = 0; i < arrlen; i++) + { + if (costarr[i] < costarr[min_index]) + min_index = i; + } + } + + /* Return the highest cost from the array */ + for (max_index = i = 0; i < arrlen; i++) + { + if (costarr[i] > costarr[max_index]) + max_index = i; + } + + return costarr[max_index]; +} + +/* + * cost_append + * Determines and returns the cost of an Append node. + * + * We charge nothing extra for the Append itself, which perhaps is too + * optimistic, but since it doesn't do any selection or projection, it is a + * pretty cheap node. + */ +void +cost_append(AppendPath *apath) +{ + ListCell *l; + + apath->path.startup_cost = 0; + apath->path.total_cost = 0; + + if (apath->subpaths == NIL) + return; + + if (!apath->path.parallel_aware) + { + Path *subpath = (Path *) linitial(apath->subpaths); + + /* + * Startup cost of non-parallel-aware Append is the startup cost of + * first subpath. + */ + apath->path.startup_cost = subpath->startup_cost; + + /* Compute rows and costs as sums of subplan rows and costs. */ + foreach(l, apath->subpaths) + { + Path *subpath = (Path *) lfirst(l); + + apath->path.rows += subpath->rows; + apath->path.total_cost += subpath->total_cost; + } + } + else /* parallel-aware */ + { + int i = 0; + double parallel_divisor = get_parallel_divisor(&apath->path); + + /* Calculate startup cost. */ + foreach(l, apath->subpaths) + { + Path *subpath = (Path *) lfirst(l); + + /* + * Append will start returning tuples when the child node having + * lowest startup cost is done setting up. We consider only the + * first few subplans that immediately get a worker assigned. + */ + if (i == 0) + apath->path.startup_cost = subpath->startup_cost; + else if (i < apath->path.parallel_workers) + apath->path.startup_cost = Min(apath->path.startup_cost, + subpath->startup_cost); + + /* + * Apply parallel divisor to non-partial subpaths. Also add the + * cost of partial paths to the total cost, but ignore non-partial + * paths for now. + */ + if (i < apath->first_partial_path) + apath->path.rows += subpath->rows / parallel_divisor; + else + { + apath->path.rows += subpath->rows; + apath->path.total_cost += subpath->total_cost; + } + + i++; + } + + /* Add cost for non-partial subpaths. */ + apath->path.total_cost += + append_nonpartial_cost(apath->subpaths, + apath->first_partial_path, + apath->path.parallel_workers); + } +} + /* * cost_merge_append * Determines and returns the cost of a MergeAppend node. 
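To make append_nonpartial_cost's greedy estimate concrete, here is a standalone sketch (not part of any patch in this series; the five subpath costs and the two-worker count are made-up example values) that re-implements the same arithmetic in simplified form. The first Min(workers, paths) subpaths seed one array slot each, every remaining subpath is added to the currently cheapest slot, and the estimate is the costliest slot at the end.

#include <stdio.h>

#define NPATHS		5
#define NWORKERS	2

int
main(void)
{
	/* Non-partial subpath total costs, sorted by decreasing cost. */
	double		costs[NPATHS] = {100.0, 80.0, 60.0, 30.0, 10.0};
	double		load[NWORKERS];
	double		result;
	int			i;
	int			j;

	/*
	 * Seed one slot per worker; assumes NWORKERS <= NPATHS, mirroring
	 * arrlen = Min(parallel_workers, numpaths) in the real function.
	 */
	for (i = 0; i < NWORKERS && i < NPATHS; i++)
		load[i] = costs[i];

	/* Every remaining path is added to the currently cheapest slot. */
	for (; i < NPATHS; i++)
	{
		int			min = 0;

		for (j = 1; j < NWORKERS; j++)
			if (load[j] < load[min])
				min = j;
		load[min] += costs[i];
	}

	/* The estimate is the costliest slot: {100+30+10, 80+60} -> 140.0 */
	result = load[0];
	for (j = 1; j < NWORKERS; j++)
		if (load[j] > result)
			result = load[j];

	printf("estimated non-partial cost: %.1f\n", result);
	return 0;
}

With these numbers both slots converge on 140, i.e. the planner charges the Append for the most heavily loaded worker rather than for the sum of all non-partial children.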
diff --git a/src/backend/optimizer/path/joinrels.c b/src/backend/optimizer/path/joinrels.c index 453f25964a..5e03f8bc21 100644 --- a/src/backend/optimizer/path/joinrels.c +++ b/src/backend/optimizer/path/joinrels.c @@ -1232,7 +1232,8 @@ mark_dummy_rel(RelOptInfo *rel) rel->partial_pathlist = NIL; /* Set up the dummy path */ - add_path(rel, (Path *) create_append_path(rel, NIL, NULL, 0, NIL)); + add_path(rel, (Path *) create_append_path(rel, NIL, NIL, NULL, + 0, false, NIL, -1)); /* Set or update cheapest_total_path and related fields */ set_cheapest(rel); diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index d4454779ee..f6c83d0477 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -203,7 +203,8 @@ static NamedTuplestoreScan *make_namedtuplestorescan(List *qptlist, List *qpqual Index scanrelid, char *enrname); static WorkTableScan *make_worktablescan(List *qptlist, List *qpqual, Index scanrelid, int wtParam); -static Append *make_append(List *appendplans, List *tlist, List *partitioned_rels); +static Append *make_append(List *appendplans, int first_partial_plan, + List *tlist, List *partitioned_rels); static RecursiveUnion *make_recursive_union(List *tlist, Plan *lefttree, Plan *righttree, @@ -1059,7 +1060,8 @@ create_append_plan(PlannerInfo *root, AppendPath *best_path) * parent-rel Vars it'll be asked to emit. */ - plan = make_append(subplans, tlist, best_path->partitioned_rels); + plan = make_append(subplans, best_path->first_partial_path, + tlist, best_path->partitioned_rels); copy_generic_path_info(&plan->plan, (Path *) best_path); @@ -5294,7 +5296,8 @@ make_foreignscan(List *qptlist, } static Append * -make_append(List *appendplans, List *tlist, List *partitioned_rels) +make_append(List *appendplans, int first_partial_plan, + List *tlist, List *partitioned_rels) { Append *node = makeNode(Append); Plan *plan = &node->plan; @@ -5305,6 +5308,7 @@ make_append(List *appendplans, List *tlist, List *partitioned_rels) plan->righttree = NULL; node->partitioned_rels = partitioned_rels; node->appendplans = appendplans; + node->first_partial_plan = first_partial_plan; return node; } diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index ef2eaeac0a..e8bc15c35d 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -3680,9 +3680,12 @@ create_grouping_paths(PlannerInfo *root, path = (Path *) create_append_path(grouped_rel, paths, + NIL, NULL, 0, - NIL); + false, + NIL, + -1); path->pathtarget = target; } else diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index f620243ab4..a24e8acfa6 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -590,8 +590,8 @@ generate_union_path(SetOperationStmt *op, PlannerInfo *root, /* * Append the child results together. */ - path = (Path *) create_append_path(result_rel, pathlist, NULL, 0, NIL); - + path = (Path *) create_append_path(result_rel, pathlist, NIL, + NULL, 0, false, NIL, -1); /* We have to manually jam the right tlist into the path; ick */ path->pathtarget = create_pathtarget(root, tlist); @@ -702,7 +702,8 @@ generate_nonunion_path(SetOperationStmt *op, PlannerInfo *root, /* * Append the child results together. 
*/ - path = (Path *) create_append_path(result_rel, pathlist, NULL, 0, NIL); + path = (Path *) create_append_path(result_rel, pathlist, NIL, + NULL, 0, false, NIL, -1); /* We have to manually jam the right tlist into the path; ick */ path->pathtarget = create_pathtarget(root, tlist); diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index bc0841bf0b..54126fbb6a 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -51,6 +51,8 @@ typedef enum #define STD_FUZZ_FACTOR 1.01 static List *translate_sub_tlist(List *tlist, int relid); +static int append_total_cost_compare(const void *a, const void *b); +static int append_startup_cost_compare(const void *a, const void *b); static List *reparameterize_pathlist_by_child(PlannerInfo *root, List *pathlist, RelOptInfo *child_rel); @@ -1208,44 +1210,50 @@ create_tidscan_path(PlannerInfo *root, RelOptInfo *rel, List *tidquals, * Note that we must handle subpaths = NIL, representing a dummy access path. */ AppendPath * -create_append_path(RelOptInfo *rel, List *subpaths, Relids required_outer, - int parallel_workers, List *partitioned_rels) +create_append_path(RelOptInfo *rel, + List *subpaths, List *partial_subpaths, + Relids required_outer, + int parallel_workers, bool parallel_aware, + List *partitioned_rels, double rows) { AppendPath *pathnode = makeNode(AppendPath); ListCell *l; + Assert(!parallel_aware || parallel_workers > 0); + pathnode->path.pathtype = T_Append; pathnode->path.parent = rel; pathnode->path.pathtarget = rel->reltarget; pathnode->path.param_info = get_appendrel_parampathinfo(rel, required_outer); - pathnode->path.parallel_aware = false; + pathnode->path.parallel_aware = parallel_aware; pathnode->path.parallel_safe = rel->consider_parallel; pathnode->path.parallel_workers = parallel_workers; pathnode->path.pathkeys = NIL; /* result is always considered unsorted */ pathnode->partitioned_rels = list_copy(partitioned_rels); - pathnode->subpaths = subpaths; /* - * We don't bother with inventing a cost_append(), but just do it here. - * - * Compute rows and costs as sums of subplan rows and costs. We charge - * nothing extra for the Append itself, which perhaps is too optimistic, - * but since it doesn't do any selection or projection, it is a pretty - * cheap node. + * For parallel append, non-partial paths are sorted by descending total + * costs. That way, the total time to finish all non-partial paths is + * minimized. Also, the partial paths are sorted by descending startup + * costs. There may be some paths that require startup work by a + * single worker. In such cases, it's better for workers to choose the + * expensive ones first, whereas the leader should choose the cheapest + * startup plan. */ - pathnode->path.rows = 0; - pathnode->path.startup_cost = 0; - pathnode->path.total_cost = 0; + if (pathnode->path.parallel_aware) + { + subpaths = list_qsort(subpaths, append_total_cost_compare); + partial_subpaths = list_qsort(partial_subpaths, + append_startup_cost_compare); + } + pathnode->first_partial_path = list_length(subpaths); + pathnode->subpaths = list_concat(subpaths, partial_subpaths); + foreach(l, subpaths) { Path *subpath = (Path *) lfirst(l); - pathnode->path.rows += subpath->rows; - - if (l == list_head(subpaths)) /* first node?
*/ - pathnode->path.startup_cost = subpath->startup_cost; - pathnode->path.total_cost += subpath->total_cost; pathnode->path.parallel_safe = pathnode->path.parallel_safe && subpath->parallel_safe; @@ -1253,9 +1261,53 @@ create_append_path(RelOptInfo *rel, List *subpaths, Relids required_outer, Assert(bms_equal(PATH_REQ_OUTER(subpath), required_outer)); } + Assert(!parallel_aware || pathnode->path.parallel_safe); + + cost_append(pathnode); + + /* If the caller provided a row estimate, override the computed value. */ + if (rows >= 0) + pathnode->path.rows = rows; + return pathnode; } +/* + * append_total_cost_compare + * list_qsort comparator for sorting append child paths by total_cost + */ +static int +append_total_cost_compare(const void *a, const void *b) +{ + Path *path1 = (Path *) lfirst(*(ListCell **) a); + Path *path2 = (Path *) lfirst(*(ListCell **) b); + + if (path1->total_cost > path2->total_cost) + return -1; + if (path1->total_cost < path2->total_cost) + return 1; + + return 0; +} + +/* + * append_startup_cost_compare + * list_qsort comparator for sorting append child paths by startup_cost + */ +static int +append_startup_cost_compare(const void *a, const void *b) +{ + Path *path1 = (Path *) lfirst(*(ListCell **) a); + Path *path2 = (Path *) lfirst(*(ListCell **) b); + + if (path1->startup_cost > path2->startup_cost) + return -1; + if (path1->startup_cost < path2->startup_cost) + return 1; + + return 0; +} + /* * create_merge_append_path * Creates a path corresponding to a MergeAppend plan, returning the diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c index e5c3e86709..46f5c4277d 100644 --- a/src/backend/storage/lmgr/lwlock.c +++ b/src/backend/storage/lmgr/lwlock.c @@ -517,6 +517,7 @@ RegisterLWLockTranches(void) LWLockRegisterTranche(LWTRANCHE_SESSION_TYPMOD_TABLE, "session_typmod_table"); LWLockRegisterTranche(LWTRANCHE_TBM, "tbm"); + LWLockRegisterTranche(LWTRANCHE_PARALLEL_APPEND, "parallel_append"); /* Register named tranches. 
*/ for (i = 0; i < NamedLWLockTrancheRequests; i++) diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 6dcd738be6..0f7a96d85c 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -920,6 +920,15 @@ static struct config_bool ConfigureNamesBool[] = false, NULL, NULL, NULL }, + { + {"enable_parallel_append", PGC_USERSET, QUERY_TUNING_METHOD, + gettext_noop("Enables the planner's use of parallel append plans."), + NULL + }, + &enable_parallel_append, + true, + NULL, NULL, NULL + }, { {"geqo", PGC_USERSET, QUERY_TUNING_GEQO, diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample index 16ffbbeea8..842cf3601a 100644 --- a/src/backend/utils/misc/postgresql.conf.sample +++ b/src/backend/utils/misc/postgresql.conf.sample @@ -296,6 +296,7 @@ #enable_material = on #enable_mergejoin = on #enable_nestloop = on +#enable_parallel_append = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on diff --git a/src/include/executor/nodeAppend.h b/src/include/executor/nodeAppend.h index 4e38a1380e..d42d50614c 100644 --- a/src/include/executor/nodeAppend.h +++ b/src/include/executor/nodeAppend.h @@ -14,10 +14,15 @@ #ifndef NODEAPPEND_H #define NODEAPPEND_H +#include "access/parallel.h" #include "nodes/execnodes.h" extern AppendState *ExecInitAppend(Append *node, EState *estate, int eflags); extern void ExecEndAppend(AppendState *node); extern void ExecReScanAppend(AppendState *node); +extern void ExecAppendEstimate(AppendState *node, ParallelContext *pcxt); +extern void ExecAppendInitializeDSM(AppendState *node, ParallelContext *pcxt); +extern void ExecAppendReInitializeDSM(AppendState *node, ParallelContext *pcxt); +extern void ExecAppendInitializeWorker(AppendState *node, ParallelWorkerContext *pwcxt); #endif /* NODEAPPEND_H */ diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 084d59ef83..1a35c5c9ad 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -21,6 +21,7 @@ #include "lib/pairingheap.h" #include "nodes/params.h" #include "nodes/plannodes.h" +#include "storage/spin.h" #include "utils/hsearch.h" #include "utils/queryenvironment.h" #include "utils/reltrigger.h" @@ -1000,13 +1001,22 @@ typedef struct ModifyTableState * whichplan which plan is being executed (0 .. 
n-1) * ---------------- */ -typedef struct AppendState + +struct AppendState; +typedef struct AppendState AppendState; +struct ParallelAppendState; +typedef struct ParallelAppendState ParallelAppendState; + +struct AppendState { PlanState ps; /* its first field is NodeTag */ PlanState **appendplans; /* array of PlanStates for my inputs */ int as_nplans; int as_whichplan; -} AppendState; + ParallelAppendState *as_pstate; /* parallel coordination info */ + Size pstate_len; /* size of parallel coordination info */ + bool (*choose_next_subplan) (AppendState *); +}; /* ---------------- * MergeAppendState information diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h index 667d5e269c..711db92576 100644 --- a/src/include/nodes/pg_list.h +++ b/src/include/nodes/pg_list.h @@ -269,6 +269,9 @@ extern void list_free_deep(List *list); extern List *list_copy(const List *list); extern List *list_copy_tail(const List *list, int nskip); +typedef int (*list_qsort_comparator) (const void *a, const void *b); +extern List *list_qsort(const List *list, list_qsort_comparator cmp); + /* * To ease migration to the new list API, a set of compatibility * macros are provided that reduce the impact of the list API changes diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h index 9b38d44ba0..02fb366680 100644 --- a/src/include/nodes/plannodes.h +++ b/src/include/nodes/plannodes.h @@ -248,6 +248,7 @@ typedef struct Append /* RT indexes of non-leaf tables in a partition tree */ List *partitioned_rels; List *appendplans; + int first_partial_plan; } Append; /* ---------------- diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 51df8e9741..1108b6a0ea 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -1255,6 +1255,9 @@ typedef struct CustomPath * AppendPath represents an Append plan, ie, successive execution of * several member plans. * + * For partial Append, 'subpaths' contains non-partial subpaths followed by + * partial subpaths. + * * Note: it is possible for "subpaths" to contain only one, or even no, * elements. These cases are optimized during create_append_plan. 
* In particular, an AppendPath with no subpaths is a "dummy" path that @@ -1266,6 +1269,9 @@ typedef struct AppendPath /* RT indexes of non-leaf tables in a partition tree */ List *partitioned_rels; List *subpaths; /* list of component Paths */ + + /* Index of first partial path in subpaths */ + int first_partial_path; } AppendPath; #define IS_DUMMY_PATH(p) \ diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h index 6c2317df39..5a1fbf97c3 100644 --- a/src/include/optimizer/cost.h +++ b/src/include/optimizer/cost.h @@ -68,6 +68,7 @@ extern bool enable_mergejoin; extern bool enable_hashjoin; extern bool enable_gathermerge; extern bool enable_partition_wise_join; +extern bool enable_parallel_append; extern int constraint_exclusion; extern double clamp_row_est(double nrows); @@ -106,6 +107,7 @@ extern void cost_sort(Path *path, PlannerInfo *root, List *pathkeys, Cost input_cost, double tuples, int width, Cost comparison_cost, int sort_mem, double limit_tuples); +extern void cost_append(AppendPath *path); extern void cost_merge_append(Path *path, PlannerInfo *root, List *pathkeys, int n_streams, Cost input_startup_cost, Cost input_total_cost, diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h index e9ed16ad32..99f65b44f2 100644 --- a/src/include/optimizer/pathnode.h +++ b/src/include/optimizer/pathnode.h @@ -14,6 +14,7 @@ #ifndef PATHNODE_H #define PATHNODE_H +#include "nodes/bitmapset.h" #include "nodes/relation.h" @@ -63,9 +64,11 @@ extern BitmapOrPath *create_bitmap_or_path(PlannerInfo *root, List *bitmapquals); extern TidPath *create_tidscan_path(PlannerInfo *root, RelOptInfo *rel, List *tidquals, Relids required_outer); -extern AppendPath *create_append_path(RelOptInfo *rel, List *subpaths, - Relids required_outer, int parallel_workers, - List *partitioned_rels); +extern AppendPath *create_append_path(RelOptInfo *rel, + List *subpaths, List *partial_subpaths, + Relids required_outer, + int parallel_workers, bool parallel_aware, + List *partitioned_rels, double rows); extern MergeAppendPath *create_merge_append_path(PlannerInfo *root, RelOptInfo *rel, List *subpaths, diff --git a/src/include/storage/lwlock.h b/src/include/storage/lwlock.h index 596fdadc63..460843d73e 100644 --- a/src/include/storage/lwlock.h +++ b/src/include/storage/lwlock.h @@ -216,6 +216,7 @@ typedef enum BuiltinTrancheIds LWTRANCHE_SESSION_RECORD_TABLE, LWTRANCHE_SESSION_TYPMOD_TABLE, LWTRANCHE_TBM, + LWTRANCHE_PARALLEL_APPEND, LWTRANCHE_FIRST_USER_DEFINED } BuiltinTrancheIds; diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out index fac7b62f9c..a79f891da7 100644 --- a/src/test/regress/expected/inherit.out +++ b/src/test/regress/expected/inherit.out @@ -1404,6 +1404,7 @@ select min(1-id) from matest0; reset enable_indexscan; set enable_seqscan = off; -- plan with fewest seqscans should be merge +set enable_parallel_append = off; -- Don't let parallel-append interfere explain (verbose, costs off) select * from matest0 order by 1-id; QUERY PLAN ------------------------------------------------------------------ @@ -1470,6 +1471,7 @@ select min(1-id) from matest0; (1 row) reset enable_seqscan; +reset enable_parallel_append; drop table matest0 cascade; NOTICE: drop cascades to 3 other objects DETAIL: drop cascades to table matest1 diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index b748c98c91..62ed719ccc 100644 --- a/src/test/regress/expected/select_parallel.out +++ 
b/src/test/regress/expected/select_parallel.out @@ -11,8 +11,88 @@ set parallel_setup_cost=0; set parallel_tuple_cost=0; set min_parallel_table_scan_size=0; set max_parallel_workers_per_gather=4; +-- Parallel Append with partial-subplans explain (costs off) - select count(*) from a_star; + select round(avg(aa)), sum(aa) from a_star; + QUERY PLAN +----------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 3 + -> Partial Aggregate + -> Parallel Append + -> Parallel Seq Scan on a_star + -> Parallel Seq Scan on b_star + -> Parallel Seq Scan on c_star + -> Parallel Seq Scan on d_star + -> Parallel Seq Scan on e_star + -> Parallel Seq Scan on f_star +(11 rows) + +select round(avg(aa)), sum(aa) from a_star; + round | sum +-------+----- + 14 | 355 +(1 row) + +-- Parallel Append with both partial and non-partial subplans +alter table c_star set (parallel_workers = 0); +alter table d_star set (parallel_workers = 0); +explain (costs off) + select round(avg(aa)), sum(aa) from a_star; + QUERY PLAN +----------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 3 + -> Partial Aggregate + -> Parallel Append + -> Seq Scan on d_star + -> Seq Scan on c_star + -> Parallel Seq Scan on a_star + -> Parallel Seq Scan on b_star + -> Parallel Seq Scan on e_star + -> Parallel Seq Scan on f_star +(11 rows) + +-- Parallel Append with only non-partial subplans +alter table a_star set (parallel_workers = 0); +alter table b_star set (parallel_workers = 0); +alter table e_star set (parallel_workers = 0); +alter table f_star set (parallel_workers = 0); +explain (costs off) + select round(avg(aa)), sum(aa) from a_star; + QUERY PLAN +-------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 3 + -> Partial Aggregate + -> Parallel Append + -> Seq Scan on d_star + -> Seq Scan on f_star + -> Seq Scan on e_star + -> Seq Scan on b_star + -> Seq Scan on c_star + -> Seq Scan on a_star +(11 rows) + +select round(avg(aa)), sum(aa) from a_star; + round | sum +-------+----- + 14 | 355 +(1 row) + +-- Disable Parallel Append +alter table a_star reset (parallel_workers); +alter table b_star reset (parallel_workers); +alter table c_star reset (parallel_workers); +alter table d_star reset (parallel_workers); +alter table e_star reset (parallel_workers); +alter table f_star reset (parallel_workers); +set enable_parallel_append to off; +explain (costs off) + select round(avg(aa)), sum(aa) from a_star; QUERY PLAN ----------------------------------------------------- Finalize Aggregate @@ -28,12 +108,13 @@ explain (costs off) -> Parallel Seq Scan on f_star (11 rows) -select count(*) from a_star; - count -------- - 50 +select round(avg(aa)), sum(aa) from a_star; + round | sum +-------+----- + 14 | 355 (1 row) +reset enable_parallel_append; -- test with leader participation disabled set parallel_leader_participation = off; explain (costs off) diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out index cd1f7f301d..2b738aae7c 100644 --- a/src/test/regress/expected/sysviews.out +++ b/src/test/regress/expected/sysviews.out @@ -81,11 +81,12 @@ select name, setting from pg_settings where name like 'enable%'; enable_material | on enable_mergejoin | on enable_nestloop | on + enable_parallel_append | on enable_partition_wise_join | off enable_seqscan | on enable_sort | on enable_tidscan | on -(13 rows) +(14 rows) -- Test that the pg_timezone_names and pg_timezone_abbrevs views are -- 
more-or-less working. We can't test their contents in any great detail diff --git a/src/test/regress/sql/inherit.sql b/src/test/regress/sql/inherit.sql index c71febffc2..2e42ae115d 100644 --- a/src/test/regress/sql/inherit.sql +++ b/src/test/regress/sql/inherit.sql @@ -508,11 +508,13 @@ select min(1-id) from matest0; reset enable_indexscan; set enable_seqscan = off; -- plan with fewest seqscans should be merge +set enable_parallel_append = off; -- Don't let parallel-append interfere explain (verbose, costs off) select * from matest0 order by 1-id; select * from matest0 order by 1-id; explain (verbose, costs off) select min(1-id) from matest0; select min(1-id) from matest0; reset enable_seqscan; +reset enable_parallel_append; drop table matest0 cascade; diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 00df92779c..d3f2028468 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -15,9 +15,38 @@ set parallel_tuple_cost=0; set min_parallel_table_scan_size=0; set max_parallel_workers_per_gather=4; +-- Parallel Append with partial-subplans explain (costs off) - select count(*) from a_star; -select count(*) from a_star; + select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star; + +-- Parallel Append with both partial and non-partial subplans +alter table c_star set (parallel_workers = 0); +alter table d_star set (parallel_workers = 0); +explain (costs off) + select round(avg(aa)), sum(aa) from a_star; + +-- Parallel Append with only non-partial subplans +alter table a_star set (parallel_workers = 0); +alter table b_star set (parallel_workers = 0); +alter table e_star set (parallel_workers = 0); +alter table f_star set (parallel_workers = 0); +explain (costs off) + select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star; + +-- Disable Parallel Append +alter table a_star reset (parallel_workers); +alter table b_star reset (parallel_workers); +alter table c_star reset (parallel_workers); +alter table d_star reset (parallel_workers); +alter table e_star reset (parallel_workers); +alter table f_star reset (parallel_workers); +set enable_parallel_append to off; +explain (costs off) + select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star; +reset enable_parallel_append; -- test with leader participation disabled set parallel_leader_participation = off; From 7404704a0c5abf0510a9cd889bd867ce46bdfc4c Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 5 Dec 2017 18:53:32 -0500 Subject: [PATCH 0668/1087] Fix broken markup. --- doc/src/sgml/config.sgml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 9ae6861cd7..533faf060d 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -3636,13 +3636,13 @@ ANY num_sync ( enable_parallel_append (boolean) - enable_parallel_append configuration parameter + enable_parallel_append configuration parameter Enables or disables the query planner's use of parallel-aware - append plan types. The default is on. + append plan types. The default is on. From 51cff91c905e3b32c1f9b56d82d5c802b257b157 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 5 Dec 2017 21:02:47 -0500 Subject: [PATCH 0669/1087] doc: Flex is not a GNU package Remove the designation that Flex is a GNU package. 
Even though Bison is a GNU package, leave out the designation to not make the sentence unnecessarily complicated. Author: Pavan Maddamsetti --- doc/src/sgml/installation.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index a922819fce..141494c651 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -289,7 +289,7 @@ su - postgres yacc - GNU Flex and Bison + Flex and Bison are needed to build from a Git checkout, or if you changed the actual scanner and parser definition files. If you need them, be sure to get Flex 2.5.31 or later and From 979a36c3894db0a4b0d6b4b20fc861a0bbe3271c Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 5 Dec 2017 22:40:05 -0500 Subject: [PATCH 0670/1087] Adjust regression test cases added by commit ab7271677. I suppose it is a copy-and-paste error that this test doesn't actually test the "Parallel Append with both partial and non-partial subplans" case (EXPLAIN alone surely doesn't qualify as a test of executor behavior). Fix that. Also, add cosmetic aliases to make it possible to tell apart these otherwise-identical test cases in log_statement output. --- src/test/regress/expected/select_parallel.out | 12 +++++++++--- src/test/regress/sql/select_parallel.sql | 7 ++++--- 2 files changed, 13 insertions(+), 6 deletions(-) diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 62ed719ccc..ff00d47f65 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -29,7 +29,7 @@ explain (costs off) -> Parallel Seq Scan on f_star (11 rows) -select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star a1; round | sum -------+----- 14 | 355 @@ -55,6 +55,12 @@ explain (costs off) -> Parallel Seq Scan on f_star (11 rows) +select round(avg(aa)), sum(aa) from a_star a2; + round | sum +-------+----- + 14 | 355 +(1 row) + -- Parallel Append with only non-partial subplans alter table a_star set (parallel_workers = 0); alter table b_star set (parallel_workers = 0); @@ -77,7 +83,7 @@ explain (costs off) -> Seq Scan on a_star (11 rows) -select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star a3; round | sum -------+----- 14 | 355 @@ -108,7 +114,7 @@ explain (costs off) -> Parallel Seq Scan on f_star (11 rows) -select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star a4; round | sum -------+----- 14 | 355 diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index d3f2028468..1035d04d1a 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -18,13 +18,14 @@ set max_parallel_workers_per_gather=4; -- Parallel Append with partial-subplans explain (costs off) select round(avg(aa)), sum(aa) from a_star; -select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star a1; -- Parallel Append with both partial and non-partial subplans alter table c_star set (parallel_workers = 0); alter table d_star set (parallel_workers = 0); explain (costs off) select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star a2; -- Parallel Append with only non-partial subplans alter table a_star set (parallel_workers = 0); @@ -33,7 +34,7 @@ alter table e_star set (parallel_workers = 0); alter table f_star set (parallel_workers = 0); explain (costs off) 
select round(avg(aa)), sum(aa) from a_star; -select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star a3; -- Disable Parallel Append alter table a_star reset (parallel_workers); @@ -45,7 +46,7 @@ alter table f_star reset (parallel_workers); set enable_parallel_append to off; explain (costs off) select round(avg(aa)), sum(aa) from a_star; -select round(avg(aa)), sum(aa) from a_star; +select round(avg(aa)), sum(aa) from a_star a4; reset enable_parallel_append; -- test with leader participation disabled From 9c64ddd414855fb10e0355e887745270a4464c50 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 6 Dec 2017 08:42:50 -0500 Subject: [PATCH 0671/1087] Fix Parallel Append crash. Reported by Tom Lane and the buildfarm. Amul Sul and Amit Khandekar Discussion: http://postgr.es/m/17868.1512519318@sss.pgh.pa.us Discussion: http://postgr.es/m/CAJ3gD9cJQ4d-XhmZ6BqM9rMM2KDBfpkdgOAb4+psz56uBuMQ_A@mail.gmail.com --- src/backend/executor/nodeAppend.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c index 246a0b2d85..0e9371373c 100644 --- a/src/backend/executor/nodeAppend.c +++ b/src/backend/executor/nodeAppend.c @@ -506,8 +506,16 @@ choose_next_subplan_for_worker(AppendState *node) node->as_whichplan = pstate->pa_next_plan++; if (pstate->pa_next_plan >= node->as_nplans) { - Assert(append->first_partial_plan < node->as_nplans); - pstate->pa_next_plan = append->first_partial_plan; + if (append->first_partial_plan < node->as_nplans) + pstate->pa_next_plan = append->first_partial_plan; + else + { + /* + * We have only non-partial plans, and we already chose the last + * one; so arrange for the other workers to immediately bail out. + */ + pstate->pa_next_plan = INVALID_SUBPLAN_INDEX; + } } /* If non-partial, immediately mark as finished. */ From 28724fd90d2f85a0573a8107b48abad062a86d83 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 6 Dec 2017 08:49:30 -0500 Subject: [PATCH 0672/1087] Report failure to start a background worker. When a worker is flagged as BGW_NEVER_RESTART and we fail to start it, or if it is not marked BGW_NEVER_RESTART but is terminated before startup succeeds, what BgwHandleStatus should be reported? The previous code really hadn't considered this possibility (as indicated by the comments which ignore it completely) and would typically return BGWH_NOT_YET_STARTED, but that's not a good answer, because then there's no way for code using GetBackgroundWorkerPid() to tell the difference between a worker that has not started but will start later and a worker that has not started and will never be started. So, when this case happens, return BGWH_STOPPED instead. Update the comments to reflect this. The preceding fix by itself is insufficient to fix the problem, because the old code also didn't send a notification to the process identified in bgw_notify_pid when startup failed. That might've been technically correct under the theory that the status of the worker was BGWH_NOT_YET_STARTED, because the status would indeed not change when the worker failed to start, but now that we're more usefully reporting BGWH_STOPPED, a notification is needed. Without these fixes, code which starts background workers and then uses the recommended APIs to wait for those background workers to start would hang indefinitely if the postmaster failed to fork a worker. 
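To illustrate what "the recommended APIs" look like from the caller's side, here is a hedged sketch of a hypothetical extension (the my_ext names, the flag choices, and the error wording are illustrative assumptions, not taken from any shipped code):

#include "postgres.h"

#include "miscadmin.h"
#include "postmaster/bgworker.h"

/* Hypothetical caller: start one worker dynamically and wait for it. */
static void
start_my_ext_worker(void)
{
	BackgroundWorker worker;
	BackgroundWorkerHandle *handle;
	BgwHandleStatus status;
	pid_t		pid;

	memset(&worker, 0, sizeof(worker));
	worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
	worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
	worker.bgw_restart_time = BGW_NEVER_RESTART;
	snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_ext");
	snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_ext_main");
	snprintf(worker.bgw_name, BGW_MAXLEN, "my_ext worker");
	worker.bgw_notify_pid = MyProcPid;	/* ask for SIGUSR1 notifications */

	if (!RegisterDynamicBackgroundWorker(&worker, &handle))
		ereport(ERROR,
				(errmsg("out of background worker slots")));

	/*
	 * Before this fix, a failed fork could leave this call waiting forever;
	 * with it, the call returns BGWH_STOPPED and the caller can react.
	 */
	status = WaitForBackgroundWorkerStartup(handle, &pid);
	if (status != BGWH_STARTED)
		ereport(ERROR,
				(errmsg("background worker failed to start")));
}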
Amit Kapila and Robert Haas Discussion: http://postgr.es/m/CAA4eK1KDfKkvrjxsKJi3WPyceVi3dH1VCkbTJji2fuwKuB=3uw@mail.gmail.com --- src/backend/postmaster/bgworker.c | 23 +++++++++++++++-------- src/backend/postmaster/postmaster.c | 9 +++++++++ 2 files changed, 24 insertions(+), 8 deletions(-) diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c index f752a12713..88806ebe6b 100644 --- a/src/backend/postmaster/bgworker.c +++ b/src/backend/postmaster/bgworker.c @@ -1034,14 +1034,18 @@ RegisterDynamicBackgroundWorker(BackgroundWorker *worker, * Get the PID of a dynamically-registered background worker. * * If the worker is determined to be running, the return value will be - * BGWH_STARTED and *pidp will get the PID of the worker process. - * Otherwise, the return value will be BGWH_NOT_YET_STARTED if the worker - * hasn't been started yet, and BGWH_STOPPED if the worker was previously - * running but is no longer. + * BGWH_STARTED and *pidp will get the PID of the worker process. If the + * postmaster has not yet attempted to start the worker, the return value will + * be BGWH_NOT_YET_STARTED. Otherwise, the return value is BGWH_STOPPED. * - * In the latter case, the worker may be stopped temporarily (if it is - * configured for automatic restart and exited non-zero) or gone for - * good (if it exited with code 0 or if it is configured not to restart). + * BGWH_STOPPED can indicate either that the worker is temporarily stopped + * (because it is configured for automatic restart and exited non-zero), + * or that the worker is permanently stopped (because it exited with exit + * code 0, or was not configured for automatic restart), or even that the + * worker was unregistered without ever starting (either because startup + * failed and the worker is not configured for automatic restart, or because + * TerminateBackgroundWorker was used before the worker was successfully + * started). */ BgwHandleStatus GetBackgroundWorkerPid(BackgroundWorkerHandle *handle, pid_t *pidp) @@ -1066,8 +1070,11 @@ GetBackgroundWorkerPid(BackgroundWorkerHandle *handle, pid_t *pidp) * time, but we assume such changes are atomic. So the value we read * won't be garbage, but it might be out of date by the time the caller * examines it (but that's unavoidable anyway). + * + * The in_use flag could be in the process of changing from true to false, + * but if it is already false then it can't change further. */ - if (handle->generation != slot->generation) + if (handle->generation != slot->generation || !slot->in_use) pid = 0; else pid = slot->pid; diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index 94a4609371..17c7f7e78f 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -5909,7 +5909,16 @@ maybe_start_bgworkers(void) { if (rw->rw_worker.bgw_restart_time == BGW_NEVER_RESTART) { + int notify_pid; + + notify_pid = rw->rw_worker.bgw_notify_pid; + ForgetBackgroundWorker(&iter); + + /* Report worker is gone now. */ + if (notify_pid != 0) + kill(notify_pid, SIGUSR1); + continue; } From 0a3edbb3302173f8ac465570b6273392aa6e20b1 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 7 Dec 2017 11:09:49 -0500 Subject: [PATCH 0673/1087] Speed up isolation test for concurrent VACUUM/ANALYZE behavior. Per Tom Lane, the old test sometimes times out with CLOBBER_CACHE_ALWAYS. 
Nathan Bossart Discussion: http://postgr.es/m/28614.1512583046@sss.pgh.pa.us --- .../expected/vacuum-concurrent-drop.out | 54 +++++++++---------- .../specs/vacuum-concurrent-drop.spec | 30 +++++------ 2 files changed, 42 insertions(+), 42 deletions(-) diff --git a/src/test/isolation/expected/vacuum-concurrent-drop.out b/src/test/isolation/expected/vacuum-concurrent-drop.out index 72d80a1de1..bb92e80b92 100644 --- a/src/test/isolation/expected/vacuum-concurrent-drop.out +++ b/src/test/isolation/expected/vacuum-concurrent-drop.out @@ -3,74 +3,74 @@ Parsed test spec with 2 sessions starting permutation: lock vac_specified drop_and_commit step lock: BEGIN; - LOCK test1 IN SHARE MODE; + LOCK part1 IN SHARE MODE; -step vac_specified: VACUUM test1, test2; +step vac_specified: VACUUM part1, part2; step drop_and_commit: - DROP TABLE test2; + DROP TABLE part2; COMMIT; -WARNING: skipping vacuum of "test2" --- relation no longer exists +WARNING: skipping vacuum of "part2" --- relation no longer exists step vac_specified: <... completed> -starting permutation: lock vac_all drop_and_commit +starting permutation: lock vac_all_parts drop_and_commit step lock: BEGIN; - LOCK test1 IN SHARE MODE; + LOCK part1 IN SHARE MODE; -step vac_all: VACUUM; +step vac_all_parts: VACUUM parted; step drop_and_commit: - DROP TABLE test2; + DROP TABLE part2; COMMIT; -step vac_all: <... completed> +step vac_all_parts: <... completed> starting permutation: lock analyze_specified drop_and_commit step lock: BEGIN; - LOCK test1 IN SHARE MODE; + LOCK part1 IN SHARE MODE; -step analyze_specified: ANALYZE test1, test2; +step analyze_specified: ANALYZE part1, part2; step drop_and_commit: - DROP TABLE test2; + DROP TABLE part2; COMMIT; -WARNING: skipping analyze of "test2" --- relation no longer exists +WARNING: skipping analyze of "part2" --- relation no longer exists step analyze_specified: <... completed> -starting permutation: lock analyze_all drop_and_commit +starting permutation: lock analyze_all_parts drop_and_commit step lock: BEGIN; - LOCK test1 IN SHARE MODE; + LOCK part1 IN SHARE MODE; -step analyze_all: ANALYZE; +step analyze_all_parts: ANALYZE parted; step drop_and_commit: - DROP TABLE test2; + DROP TABLE part2; COMMIT; -step analyze_all: <... completed> +step analyze_all_parts: <... completed> starting permutation: lock vac_analyze_specified drop_and_commit step lock: BEGIN; - LOCK test1 IN SHARE MODE; + LOCK part1 IN SHARE MODE; -step vac_analyze_specified: VACUUM ANALYZE test1, test2; +step vac_analyze_specified: VACUUM ANALYZE part1, part2; step drop_and_commit: - DROP TABLE test2; + DROP TABLE part2; COMMIT; -WARNING: skipping vacuum of "test2" --- relation no longer exists +WARNING: skipping vacuum of "part2" --- relation no longer exists step vac_analyze_specified: <... completed> -starting permutation: lock vac_analyze_all drop_and_commit +starting permutation: lock vac_analyze_all_parts drop_and_commit step lock: BEGIN; - LOCK test1 IN SHARE MODE; + LOCK part1 IN SHARE MODE; -step vac_analyze_all: VACUUM ANALYZE; +step vac_analyze_all_parts: VACUUM ANALYZE parted; step drop_and_commit: - DROP TABLE test2; + DROP TABLE part2; COMMIT; -step vac_analyze_all: <... completed> +step vac_analyze_all_parts: <... 
completed> diff --git a/src/test/isolation/specs/vacuum-concurrent-drop.spec b/src/test/isolation/specs/vacuum-concurrent-drop.spec index 31fc161e02..cae4092667 100644 --- a/src/test/isolation/specs/vacuum-concurrent-drop.spec +++ b/src/test/isolation/specs/vacuum-concurrent-drop.spec @@ -7,39 +7,39 @@ setup { - CREATE TABLE test1 (a INT); - CREATE TABLE test2 (a INT); + CREATE TABLE parted (a INT) PARTITION BY LIST (a); + CREATE TABLE part1 PARTITION OF parted FOR VALUES IN (1); + CREATE TABLE part2 PARTITION OF parted FOR VALUES IN (2); } teardown { - DROP TABLE IF EXISTS test1; - DROP TABLE IF EXISTS test2; + DROP TABLE IF EXISTS parted; } session "s1" step "lock" { BEGIN; - LOCK test1 IN SHARE MODE; + LOCK part1 IN SHARE MODE; } step "drop_and_commit" { - DROP TABLE test2; + DROP TABLE part2; COMMIT; } session "s2" -step "vac_specified" { VACUUM test1, test2; } -step "vac_all" { VACUUM; } -step "analyze_specified" { ANALYZE test1, test2; } -step "analyze_all" { ANALYZE; } -step "vac_analyze_specified" { VACUUM ANALYZE test1, test2; } -step "vac_analyze_all" { VACUUM ANALYZE; } +step "vac_specified" { VACUUM part1, part2; } +step "vac_all_parts" { VACUUM parted; } +step "analyze_specified" { ANALYZE part1, part2; } +step "analyze_all_parts" { ANALYZE parted; } +step "vac_analyze_specified" { VACUUM ANALYZE part1, part2; } +step "vac_analyze_all_parts" { VACUUM ANALYZE parted; } permutation "lock" "vac_specified" "drop_and_commit" -permutation "lock" "vac_all" "drop_and_commit" +permutation "lock" "vac_all_parts" "drop_and_commit" permutation "lock" "analyze_specified" "drop_and_commit" -permutation "lock" "analyze_all" "drop_and_commit" +permutation "lock" "analyze_all_parts" "drop_and_commit" permutation "lock" "vac_analyze_specified" "drop_and_commit" -permutation "lock" "vac_analyze_all" "drop_and_commit" +permutation "lock" "vac_analyze_all_parts" "drop_and_commit" From 2d2d06b7e27e3177d5bef0061801c75946871db3 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 8 Dec 2017 09:18:18 -0500 Subject: [PATCH 0674/1087] Apply identity sequence values on COPY A COPY into a table should apply identity sequence values just like it does for ordinary defaults. This was previously forgotten, leading to null values being inserted, which in turn would fail because identity columns have not-null constraints. 
Author: Michael Paquier Reported-by: Steven Winfield Bug: #14952 --- src/backend/commands/copy.c | 16 ++++++++++++++-- src/test/regress/expected/identity.out | 13 +++++++++++++ src/test/regress/sql/identity.sql | 17 +++++++++++++++++ 3 files changed, 44 insertions(+), 2 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index bace390470..254be28ae4 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -23,6 +23,7 @@ #include "access/sysattr.h" #include "access/xact.h" #include "access/xlog.h" +#include "catalog/dependency.h" #include "catalog/pg_type.h" #include "commands/copy.h" #include "commands/defrem.h" @@ -3067,8 +3068,19 @@ BeginCopyFrom(ParseState *pstate, { /* attribute is NOT to be copied from input */ /* use default value if one exists */ - Expr *defexpr = (Expr *) build_column_default(cstate->rel, - attnum); + Expr *defexpr; + + if (att->attidentity) + { + NextValueExpr *nve = makeNode(NextValueExpr); + + nve->seqid = getOwnedSequence(RelationGetRelid(cstate->rel), + attnum); + nve->typeId = att->atttypid; + defexpr = (Expr *) nve; + } + else + defexpr = (Expr *) build_column_default(cstate->rel, attnum); if (defexpr != NULL) { diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out index 5fa585d6cc..174b420a04 100644 --- a/src/test/regress/expected/identity.out +++ b/src/test/regress/expected/identity.out @@ -153,6 +153,19 @@ SELECT * FROM itest2; 3 | (3 rows) +-- COPY tests +CREATE TABLE itest9 (a int GENERATED ALWAYS AS IDENTITY, b text, c bigint); +COPY itest9 FROM stdin; +COPY itest9 (b, c) FROM stdin; +SELECT * FROM itest9 ORDER BY c; + a | b | c +-----+------+----- + 100 | foo | 200 + 101 | bar | 201 + 1 | foo2 | 202 + 2 | bar2 | 203 +(4 rows) + -- DROP IDENTITY tests ALTER TABLE itest4 ALTER COLUMN a DROP IDENTITY; ALTER TABLE itest4 ALTER COLUMN a DROP IDENTITY; -- error diff --git a/src/test/regress/sql/identity.sql b/src/test/regress/sql/identity.sql index e1b5a074c9..6e76dd08c8 100644 --- a/src/test/regress/sql/identity.sql +++ b/src/test/regress/sql/identity.sql @@ -78,6 +78,23 @@ UPDATE itest2 SET a = DEFAULT WHERE a = 2; SELECT * FROM itest2; +-- COPY tests + +CREATE TABLE itest9 (a int GENERATED ALWAYS AS IDENTITY, b text, c bigint); + +COPY itest9 FROM stdin; +100 foo 200 +101 bar 201 +\. + +COPY itest9 (b, c) FROM stdin; +foo2 202 +bar2 203 +\. + +SELECT * FROM itest9 ORDER BY c; + + -- DROP IDENTITY tests ALTER TABLE itest4 ALTER COLUMN a DROP IDENTITY; From dd759b96ea8fa41b48541dd321c07d9a947f8de9 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 8 Dec 2017 11:20:50 -0500 Subject: [PATCH 0675/1087] In plpgsql, unify duplicate variables for record and row cases. plpgsql's function exec_move_row() handles assignment of a composite source value to either a PLpgSQL_rec or PLpgSQL_row target variable. Oddly, rather than taking a single target argument which it could do run-time type detection on, it was coded to take two separate arguments (only one of which is allowed to be non-NULL). This choice had then back-propagated into storing two separate target variables in various plpgsql statement nodes, with lots of duplicative coding and awkward interface logic to support that. Simplify matters by folding those pairs down to single variables, distinguishing the two cases only where we must ... which turns out to be only in exec_move_row itself. This is purely refactoring and should not change any behavior. In passing, remove unused field PLpgSQL_stmt_open.returntype. 
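In outline, the unified function can now detect the target kind at run time, roughly like this (a condensed sketch under the commit's stated design; exec_move_row_sketch is a placeholder name and the actual assignment logic is elided):

#include "plpgsql.h"

/*
 * Condensed sketch of the new convention: one target argument, with the
 * rec-versus-row decision made at run time rather than via two parameters.
 */
static void
exec_move_row_sketch(PLpgSQL_execstate *estate,
					 PLpgSQL_variable *target,
					 HeapTuple tup, TupleDesc tupdesc)
{
	if (target->dtype == PLPGSQL_DTYPE_REC)
	{
		PLpgSQL_rec *rec = (PLpgSQL_rec *) target;

		/* ... assign tup/tupdesc to the record variable ... */
	}
	else if (target->dtype == PLPGSQL_DTYPE_ROW)
	{
		PLpgSQL_row *row = (PLpgSQL_row *) target;

		/* ... distribute tup's fields into the row's member datums ... */
	}
	else
		elog(ERROR, "unsupported target");
}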
Discussion: https://postgr.es/m/11787.1512713374@sss.pgh.pa.us --- src/pl/plpgsql/src/pl_comp.c | 2 +- src/pl/plpgsql/src/pl_exec.c | 111 ++++++++++++------------------ src/pl/plpgsql/src/pl_funcs.c | 39 +++-------- src/pl/plpgsql/src/pl_gram.y | 126 ++++++++++++---------------------- src/pl/plpgsql/src/plpgsql.h | 22 ++---- 5 files changed, 106 insertions(+), 194 deletions(-) diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index f459c02f7b..1300ea6a52 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -545,7 +545,7 @@ do_compile(FunctionCallInfo fcinfo, { if (rettypeid == VOIDOID || rettypeid == RECORDOID) - /* okay */ ; + /* okay */ ; else if (rettypeid == TRIGGEROID || rettypeid == EVTTRIGGEROID) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index ec480cb0ba..1959d6dc42 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -272,8 +272,7 @@ static ParamListInfo setup_unshared_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr); static void plpgsql_param_fetch(ParamListInfo params, int paramid); static void exec_move_row(PLpgSQL_execstate *estate, - PLpgSQL_rec *rec, - PLpgSQL_row *row, + PLpgSQL_variable *target, HeapTuple tup, TupleDesc tupdesc); static HeapTuple make_tuple_from_row(PLpgSQL_execstate *estate, PLpgSQL_row *row, @@ -281,8 +280,7 @@ static HeapTuple make_tuple_from_row(PLpgSQL_execstate *estate, static HeapTuple get_tuple_from_datum(Datum value); static TupleDesc get_tupdesc_from_datum(Datum value); static void exec_move_row_from_datum(PLpgSQL_execstate *estate, - PLpgSQL_rec *rec, - PLpgSQL_row *row, + PLpgSQL_variable *target, Datum value); static char *convert_value_to_string(PLpgSQL_execstate *estate, Datum value, Oid valtype); @@ -425,13 +423,15 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, if (!fcinfo->argnull[i]) { /* Assign row value from composite datum */ - exec_move_row_from_datum(&estate, NULL, row, + exec_move_row_from_datum(&estate, + (PLpgSQL_variable *) row, fcinfo->arg[i]); } else { /* If arg is null, treat it as an empty row */ - exec_move_row(&estate, NULL, row, NULL, NULL); + exec_move_row(&estate, (PLpgSQL_variable *) row, + NULL, NULL); } /* clean up after exec_move_row() */ exec_eval_cleanup(&estate); @@ -2327,7 +2327,7 @@ exec_stmt_forc(PLpgSQL_execstate *estate, PLpgSQL_stmt_forc *stmt) set_args.sqlstmt = stmt->argquery; set_args.into = true; /* XXX historically this has not been STRICT */ - set_args.row = (PLpgSQL_row *) + set_args.target = (PLpgSQL_variable *) (estate->datums[curvar->cursor_explicit_argrow]); if (exec_stmt_execsql(estate, &set_args) != PLPGSQL_RC_OK) @@ -3755,8 +3755,7 @@ exec_stmt_execsql(PLpgSQL_execstate *estate, { SPITupleTable *tuptab = SPI_tuptable; uint64 n = SPI_processed; - PLpgSQL_rec *rec = NULL; - PLpgSQL_row *row = NULL; + PLpgSQL_variable *target; /* If the statement did not return a tuple table, complain */ if (tuptab == NULL) @@ -3764,13 +3763,8 @@ exec_stmt_execsql(PLpgSQL_execstate *estate, (errcode(ERRCODE_SYNTAX_ERROR), errmsg("INTO used with a command that cannot return data"))); - /* Determine if we assign to a record or a row */ - if (stmt->rec != NULL) - rec = (PLpgSQL_rec *) (estate->datums[stmt->rec->dno]); - else if (stmt->row != NULL) - row = (PLpgSQL_row *) (estate->datums[stmt->row->dno]); - else - elog(ERROR, "unsupported target"); + /* Fetch target's datum entry */ + target = (PLpgSQL_variable *) 
estate->datums[stmt->target->dno]; /* * If SELECT ... INTO specified STRICT, and the query didn't find @@ -3794,7 +3788,7 @@ exec_stmt_execsql(PLpgSQL_execstate *estate, errdetail ? errdetail_internal("parameters: %s", errdetail) : 0)); } /* set the target to NULL(s) */ - exec_move_row(estate, rec, row, NULL, tuptab->tupdesc); + exec_move_row(estate, target, NULL, tuptab->tupdesc); } else { @@ -3813,7 +3807,7 @@ exec_stmt_execsql(PLpgSQL_execstate *estate, errdetail ? errdetail_internal("parameters: %s", errdetail) : 0)); } /* Put the first result row into the target */ - exec_move_row(estate, rec, row, tuptab->vals[0], tuptab->tupdesc); + exec_move_row(estate, target, tuptab->vals[0], tuptab->tupdesc); } /* Clean up */ @@ -3946,8 +3940,7 @@ exec_stmt_dynexecute(PLpgSQL_execstate *estate, { SPITupleTable *tuptab = SPI_tuptable; uint64 n = SPI_processed; - PLpgSQL_rec *rec = NULL; - PLpgSQL_row *row = NULL; + PLpgSQL_variable *target; /* If the statement did not return a tuple table, complain */ if (tuptab == NULL) @@ -3955,13 +3948,8 @@ exec_stmt_dynexecute(PLpgSQL_execstate *estate, (errcode(ERRCODE_SYNTAX_ERROR), errmsg("INTO used with a command that cannot return data"))); - /* Determine if we assign to a record or a row */ - if (stmt->rec != NULL) - rec = (PLpgSQL_rec *) (estate->datums[stmt->rec->dno]); - else if (stmt->row != NULL) - row = (PLpgSQL_row *) (estate->datums[stmt->row->dno]); - else - elog(ERROR, "unsupported target"); + /* Fetch target's datum entry */ + target = (PLpgSQL_variable *) estate->datums[stmt->target->dno]; /* * If SELECT ... INTO specified STRICT, and the query didn't find @@ -3985,7 +3973,7 @@ exec_stmt_dynexecute(PLpgSQL_execstate *estate, errdetail ? errdetail_internal("parameters: %s", errdetail) : 0)); } /* set the target to NULL(s) */ - exec_move_row(estate, rec, row, NULL, tuptab->tupdesc); + exec_move_row(estate, target, NULL, tuptab->tupdesc); } else { @@ -4005,7 +3993,7 @@ exec_stmt_dynexecute(PLpgSQL_execstate *estate, } /* Put the first result row into the target */ - exec_move_row(estate, rec, row, tuptab->vals[0], tuptab->tupdesc); + exec_move_row(estate, target, tuptab->vals[0], tuptab->tupdesc); } /* clean up after exec_move_row() */ exec_eval_cleanup(estate); @@ -4163,7 +4151,7 @@ exec_stmt_open(PLpgSQL_execstate *estate, PLpgSQL_stmt_open *stmt) set_args.sqlstmt = stmt->argquery; set_args.into = true; /* XXX historically this has not been STRICT */ - set_args.row = (PLpgSQL_row *) + set_args.target = (PLpgSQL_variable *) (estate->datums[curvar->cursor_explicit_argrow]); if (exec_stmt_execsql(estate, &set_args) != PLPGSQL_RC_OK) @@ -4221,8 +4209,6 @@ static int exec_stmt_fetch(PLpgSQL_execstate *estate, PLpgSQL_stmt_fetch *stmt) { PLpgSQL_var *curvar; - PLpgSQL_rec *rec = NULL; - PLpgSQL_row *row = NULL; long how_many = stmt->how_many; SPITupleTable *tuptab; Portal portal; @@ -4269,16 +4255,7 @@ exec_stmt_fetch(PLpgSQL_execstate *estate, PLpgSQL_stmt_fetch *stmt) if (!stmt->is_move) { - /* ---------- - * Determine if we fetch into a record or a row - * ---------- - */ - if (stmt->rec != NULL) - rec = (PLpgSQL_rec *) (estate->datums[stmt->rec->dno]); - else if (stmt->row != NULL) - row = (PLpgSQL_row *) (estate->datums[stmt->row->dno]); - else - elog(ERROR, "unsupported target"); + PLpgSQL_variable *target; /* ---------- * Fetch 1 tuple from the cursor @@ -4292,10 +4269,11 @@ exec_stmt_fetch(PLpgSQL_execstate *estate, PLpgSQL_stmt_fetch *stmt) * Set the target appropriately. 
* ---------- */ + target = (PLpgSQL_variable *) estate->datums[stmt->target->dno]; if (n == 0) - exec_move_row(estate, rec, row, NULL, tuptab->tupdesc); + exec_move_row(estate, target, NULL, tuptab->tupdesc); else - exec_move_row(estate, rec, row, tuptab->vals[0], tuptab->tupdesc); + exec_move_row(estate, target, tuptab->vals[0], tuptab->tupdesc); exec_eval_cleanup(estate); SPI_freetuptable(tuptab); @@ -4514,7 +4492,8 @@ exec_assign_value(PLpgSQL_execstate *estate, if (isNull) { /* If source is null, just assign nulls to the row */ - exec_move_row(estate, NULL, row, NULL, NULL); + exec_move_row(estate, (PLpgSQL_variable *) row, + NULL, NULL); } else { @@ -4523,7 +4502,8 @@ exec_assign_value(PLpgSQL_execstate *estate, ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("cannot assign non-composite value to a row variable"))); - exec_move_row_from_datum(estate, NULL, row, value); + exec_move_row_from_datum(estate, (PLpgSQL_variable *) row, + value); } break; } @@ -4538,7 +4518,8 @@ exec_assign_value(PLpgSQL_execstate *estate, if (isNull) { /* If source is null, just assign nulls to the record */ - exec_move_row(estate, rec, NULL, NULL, NULL); + exec_move_row(estate, (PLpgSQL_variable *) rec, + NULL, NULL); } else { @@ -4547,7 +4528,8 @@ exec_assign_value(PLpgSQL_execstate *estate, ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("cannot assign non-composite value to a record variable"))); - exec_move_row_from_datum(estate, rec, NULL, value); + exec_move_row_from_datum(estate, (PLpgSQL_variable *) rec, + value); } break; } @@ -5341,22 +5323,14 @@ static int exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, Portal portal, bool prefetch_ok) { - PLpgSQL_rec *rec = NULL; - PLpgSQL_row *row = NULL; + PLpgSQL_variable *var; SPITupleTable *tuptab; bool found = false; int rc = PLPGSQL_RC_OK; uint64 n; - /* - * Determine if we assign to a record or a row - */ - if (stmt->rec != NULL) - rec = (PLpgSQL_rec *) (estate->datums[stmt->rec->dno]); - else if (stmt->row != NULL) - row = (PLpgSQL_row *) (estate->datums[stmt->row->dno]); - else - elog(ERROR, "unsupported target"); + /* Fetch loop variable's datum entry */ + var = (PLpgSQL_variable *) estate->datums[stmt->var->dno]; /* * Make sure the portal doesn't get closed by the user statements we @@ -5379,7 +5353,7 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, */ if (n == 0) { - exec_move_row(estate, rec, row, NULL, tuptab->tupdesc); + exec_move_row(estate, var, NULL, tuptab->tupdesc); exec_eval_cleanup(estate); } else @@ -5397,7 +5371,7 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, /* * Assign the tuple to the target */ - exec_move_row(estate, rec, row, tuptab->vals[i], tuptab->tupdesc); + exec_move_row(estate, var, tuptab->vals[i], tuptab->tupdesc); exec_eval_cleanup(estate); /* @@ -5949,16 +5923,17 @@ plpgsql_param_fetch(ParamListInfo params, int paramid) */ static void exec_move_row(PLpgSQL_execstate *estate, - PLpgSQL_rec *rec, - PLpgSQL_row *row, + PLpgSQL_variable *target, HeapTuple tup, TupleDesc tupdesc) { /* * Record is simple - just copy the tuple and its descriptor into the * record variable */ - if (rec != NULL) + if (target->dtype == PLPGSQL_DTYPE_REC) { + PLpgSQL_rec *rec = (PLpgSQL_rec *) target; + /* * Copy input first, just in case it is pointing at variable's value */ @@ -6027,8 +6002,9 @@ exec_move_row(PLpgSQL_execstate *estate, * If we have no tuple data at all, we'll assign NULL to all columns of * the row variable. 
*/ - if (row != NULL) + if (target->dtype == PLPGSQL_DTYPE_ROW) { + PLpgSQL_row *row = (PLpgSQL_row *) target; int td_natts = tupdesc ? tupdesc->natts : 0; int t_natts; int fnum; @@ -6195,8 +6171,7 @@ get_tupdesc_from_datum(Datum value) */ static void exec_move_row_from_datum(PLpgSQL_execstate *estate, - PLpgSQL_rec *rec, - PLpgSQL_row *row, + PLpgSQL_variable *target, Datum value) { HeapTupleHeader td = DatumGetHeapTupleHeader(value); @@ -6217,7 +6192,7 @@ exec_move_row_from_datum(PLpgSQL_execstate *estate, tmptup.t_data = td; /* Do the move */ - exec_move_row(estate, rec, row, &tmptup, tupdesc); + exec_move_row(estate, target, &tmptup, tupdesc); /* Release tupdesc usage count */ ReleaseTupleDesc(tupdesc); diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c index 23f54e1c21..be779b6fc4 100644 --- a/src/pl/plpgsql/src/pl_funcs.c +++ b/src/pl/plpgsql/src/pl_funcs.c @@ -1062,7 +1062,7 @@ static void dump_fors(PLpgSQL_stmt_fors *stmt) { dump_ind(); - printf("FORS %s ", (stmt->rec != NULL) ? stmt->rec->refname : stmt->row->refname); + printf("FORS %s ", stmt->var->refname); dump_expr(stmt->query); printf("\n"); @@ -1076,7 +1076,7 @@ static void dump_forc(PLpgSQL_stmt_forc *stmt) { dump_ind(); - printf("FORC %s ", stmt->rec->refname); + printf("FORC %s ", stmt->var->refname); printf("curvar=%d\n", stmt->curvar); dump_indent += 2; @@ -1174,15 +1174,11 @@ dump_fetch(PLpgSQL_stmt_fetch *stmt) dump_cursor_direction(stmt); dump_indent += 2; - if (stmt->rec != NULL) + if (stmt->target != NULL) { dump_ind(); - printf(" target = %d %s\n", stmt->rec->dno, stmt->rec->refname); - } - if (stmt->row != NULL) - { - dump_ind(); - printf(" target = %d %s\n", stmt->row->dno, stmt->row->refname); + printf(" target = %d %s\n", + stmt->target->dno, stmt->target->refname); } dump_indent -= 2; } @@ -1420,19 +1416,12 @@ dump_execsql(PLpgSQL_stmt_execsql *stmt) printf("\n"); dump_indent += 2; - if (stmt->rec != NULL) + if (stmt->target != NULL) { dump_ind(); printf(" INTO%s target = %d %s\n", stmt->strict ? " STRICT" : "", - stmt->rec->dno, stmt->rec->refname); - } - if (stmt->row != NULL) - { - dump_ind(); - printf(" INTO%s target = %d %s\n", - stmt->strict ? " STRICT" : "", - stmt->row->dno, stmt->row->refname); + stmt->target->dno, stmt->target->refname); } dump_indent -= 2; } @@ -1446,19 +1435,12 @@ dump_dynexecute(PLpgSQL_stmt_dynexecute *stmt) printf("\n"); dump_indent += 2; - if (stmt->rec != NULL) - { - dump_ind(); - printf(" INTO%s target = %d %s\n", - stmt->strict ? " STRICT" : "", - stmt->rec->dno, stmt->rec->refname); - } - if (stmt->row != NULL) + if (stmt->target != NULL) { dump_ind(); printf(" INTO%s target = %d %s\n", stmt->strict ? " STRICT" : "", - stmt->row->dno, stmt->row->refname); + stmt->target->dno, stmt->target->refname); } if (stmt->params != NIL) { @@ -1485,8 +1467,7 @@ static void dump_dynfors(PLpgSQL_stmt_dynfors *stmt) { dump_ind(); - printf("FORS %s EXECUTE ", - (stmt->rec != NULL) ? 
stmt->rec->refname : stmt->row->refname); + printf("FORS %s EXECUTE ", stmt->var->refname); dump_expr(stmt->query); printf("\n"); if (stmt->params != NIL) diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y index 94f1f58593..e802440b45 100644 --- a/src/pl/plpgsql/src/pl_gram.y +++ b/src/pl/plpgsql/src/pl_gram.y @@ -90,7 +90,7 @@ static PLpgSQL_stmt *make_case(int location, PLpgSQL_expr *t_expr, List *case_when_list, List *else_stmts); static char *NameOfDatum(PLwdatum *wdatum); static void check_assignable(PLpgSQL_datum *datum, int location); -static void read_into_target(PLpgSQL_rec **rec, PLpgSQL_row **row, +static void read_into_target(PLpgSQL_variable **target, bool *strict); static PLpgSQL_row *read_into_scalar_list(char *initial_name, PLpgSQL_datum *initial_datum, @@ -138,8 +138,7 @@ static void check_raise_parameters(PLpgSQL_stmt_raise *stmt); char *name; int lineno; PLpgSQL_datum *scalar; - PLpgSQL_rec *rec; - PLpgSQL_row *row; + PLpgSQL_datum *row; } forvariable; struct { @@ -1310,22 +1309,18 @@ for_control : for_variable K_IN new = palloc0(sizeof(PLpgSQL_stmt_dynfors)); new->cmd_type = PLPGSQL_STMT_DYNFORS; - if ($1.rec) + if ($1.row) { - new->rec = $1.rec; - check_assignable((PLpgSQL_datum *) new->rec, @1); - } - else if ($1.row) - { - new->row = $1.row; - check_assignable((PLpgSQL_datum *) new->row, @1); + new->var = (PLpgSQL_variable *) $1.row; + check_assignable($1.row, @1); } else if ($1.scalar) { /* convert single scalar to list */ - new->row = make_scalar_list1($1.name, $1.scalar, - $1.lineno, @1); - /* no need for check_assignable */ + new->var = (PLpgSQL_variable *) + make_scalar_list1($1.name, $1.scalar, + $1.lineno, @1); + /* make_scalar_list1 did check_assignable */ } else { @@ -1381,9 +1376,10 @@ for_control : for_variable K_IN "LOOP"); /* create loop's private RECORD variable */ - new->rec = plpgsql_build_record($1.name, - $1.lineno, - true); + new->var = (PLpgSQL_variable *) + plpgsql_build_record($1.name, + $1.lineno, + true); $$ = (PLpgSQL_stmt *) new; } @@ -1504,22 +1500,18 @@ for_control : for_variable K_IN new = palloc0(sizeof(PLpgSQL_stmt_fors)); new->cmd_type = PLPGSQL_STMT_FORS; - if ($1.rec) + if ($1.row) { - new->rec = $1.rec; - check_assignable((PLpgSQL_datum *) new->rec, @1); - } - else if ($1.row) - { - new->row = $1.row; - check_assignable((PLpgSQL_datum *) new->row, @1); + new->var = (PLpgSQL_variable *) $1.row; + check_assignable($1.row, @1); } else if ($1.scalar) { /* convert single scalar to list */ - new->row = make_scalar_list1($1.name, $1.scalar, - $1.lineno, @1); - /* no need for check_assignable */ + new->var = (PLpgSQL_variable *) + make_scalar_list1($1.name, $1.scalar, + $1.lineno, @1); + /* make_scalar_list1 did check_assignable */ } else { @@ -1558,32 +1550,26 @@ for_variable : T_DATUM { $$.name = NameOfDatum(&($1)); $$.lineno = plpgsql_location_to_lineno(@1); - if ($1.datum->dtype == PLPGSQL_DTYPE_ROW) + if ($1.datum->dtype == PLPGSQL_DTYPE_ROW || + $1.datum->dtype == PLPGSQL_DTYPE_REC) { $$.scalar = NULL; - $$.rec = NULL; - $$.row = (PLpgSQL_row *) $1.datum; - } - else if ($1.datum->dtype == PLPGSQL_DTYPE_REC) - { - $$.scalar = NULL; - $$.rec = (PLpgSQL_rec *) $1.datum; - $$.row = NULL; + $$.row = $1.datum; } else { int tok; $$.scalar = $1.datum; - $$.rec = NULL; $$.row = NULL; /* check for comma-separated list */ tok = yylex(); plpgsql_push_back_token(tok); if (tok == ',') - $$.row = read_into_scalar_list($$.name, - $$.scalar, - @1); + $$.row = (PLpgSQL_datum *) + read_into_scalar_list($$.name, + $$.scalar, + @1); 
} } | T_WORD @@ -1593,7 +1579,6 @@ for_variable : T_DATUM $$.name = $1.ident; $$.lineno = plpgsql_location_to_lineno(@1); $$.scalar = NULL; - $$.rec = NULL; $$.row = NULL; /* check for comma-separated list */ tok = yylex(); @@ -1620,15 +1605,10 @@ stmt_foreach_a : opt_loop_label K_FOREACH for_variable foreach_slice K_IN K_ARRA new->expr = $7; new->body = $8.stmts; - if ($3.rec) - { - new->varno = $3.rec->dno; - check_assignable((PLpgSQL_datum *) $3.rec, @3); - } - else if ($3.row) + if ($3.row) { new->varno = $3.row->dno; - check_assignable((PLpgSQL_datum *) $3.row, @3); + check_assignable($3.row, @3); } else if ($3.scalar) { @@ -1981,8 +1961,7 @@ stmt_dynexecute : K_EXECUTE new->query = expr; new->into = false; new->strict = false; - new->rec = NULL; - new->row = NULL; + new->target = NULL; new->params = NIL; /* @@ -1999,7 +1978,7 @@ stmt_dynexecute : K_EXECUTE if (new->into) /* multiple INTO */ yyerror("syntax error"); new->into = true; - read_into_target(&new->rec, &new->row, &new->strict); + read_into_target(&new->target, &new->strict); endtoken = yylex(); } else if (endtoken == K_USING) @@ -2107,11 +2086,10 @@ stmt_open : K_OPEN cursor_variable stmt_fetch : K_FETCH opt_fetch_direction cursor_variable K_INTO { PLpgSQL_stmt_fetch *fetch = $2; - PLpgSQL_rec *rec; - PLpgSQL_row *row; + PLpgSQL_variable *target; /* We have already parsed everything through the INTO keyword */ - read_into_target(&rec, &row, NULL); + read_into_target(&target, NULL); if (yylex() != ';') yyerror("syntax error"); @@ -2127,8 +2105,7 @@ stmt_fetch : K_FETCH opt_fetch_direction cursor_variable K_INTO parser_errposition(@1))); fetch->lineno = plpgsql_location_to_lineno(@1); - fetch->rec = rec; - fetch->row = row; + fetch->target = target; fetch->curvar = $3->dno; fetch->is_move = false; @@ -2842,8 +2819,7 @@ make_execsql_stmt(int firsttoken, int location) IdentifierLookup save_IdentifierLookup; PLpgSQL_stmt_execsql *execsql; PLpgSQL_expr *expr; - PLpgSQL_row *row = NULL; - PLpgSQL_rec *rec = NULL; + PLpgSQL_variable *target = NULL; int tok; int prev_tok; bool have_into = false; @@ -2907,7 +2883,7 @@ make_execsql_stmt(int firsttoken, int location) have_into = true; into_start_loc = yylloc; plpgsql_IdentifierLookup = IDENTIFIER_LOOKUP_NORMAL; - read_into_target(&rec, &row, &have_strict); + read_into_target(&target, &have_strict); plpgsql_IdentifierLookup = IDENTIFIER_LOOKUP_EXPR; } } @@ -2949,8 +2925,7 @@ make_execsql_stmt(int firsttoken, int location) execsql->sqlstmt = expr; execsql->into = have_into; execsql->strict = have_strict; - execsql->rec = rec; - execsql->row = row; + execsql->target = target; return (PLpgSQL_stmt *) execsql; } @@ -3341,13 +3316,12 @@ check_assignable(PLpgSQL_datum *datum, int location) * INTO keyword. 
*/ static void -read_into_target(PLpgSQL_rec **rec, PLpgSQL_row **row, bool *strict) +read_into_target(PLpgSQL_variable **target, bool *strict) { int tok; /* Set default results */ - *rec = NULL; - *row = NULL; + *target = NULL; if (strict) *strict = false; @@ -3368,22 +3342,11 @@ read_into_target(PLpgSQL_rec **rec, PLpgSQL_row **row, bool *strict) switch (tok) { case T_DATUM: - if (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_ROW) - { - check_assignable(yylval.wdatum.datum, yylloc); - *row = (PLpgSQL_row *) yylval.wdatum.datum; - - if ((tok = yylex()) == ',') - ereport(ERROR, - (errcode(ERRCODE_SYNTAX_ERROR), - errmsg("record or row variable cannot be part of multiple-item INTO list"), - parser_errposition(yylloc))); - plpgsql_push_back_token(tok); - } - else if (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_REC) + if (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_ROW || + yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_REC) { check_assignable(yylval.wdatum.datum, yylloc); - *rec = (PLpgSQL_rec *) yylval.wdatum.datum; + *target = (PLpgSQL_variable *) yylval.wdatum.datum; if ((tok = yylex()) == ',') ereport(ERROR, @@ -3394,8 +3357,9 @@ read_into_target(PLpgSQL_rec **rec, PLpgSQL_row **row, bool *strict) } else { - *row = read_into_scalar_list(NameOfDatum(&(yylval.wdatum)), - yylval.wdatum.datum, yylloc); + *target = (PLpgSQL_variable *) + read_into_scalar_list(NameOfDatum(&(yylval.wdatum)), + yylval.wdatum.datum, yylloc); } break; diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index 2b19948562..8448578537 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -549,8 +549,7 @@ typedef struct PLpgSQL_stmt_forq PLpgSQL_stmt_type cmd_type; int lineno; char *label; - PLpgSQL_rec *rec; - PLpgSQL_row *row; + PLpgSQL_variable *var; /* Loop variable (record or row) */ List *body; /* List of statements */ } PLpgSQL_stmt_forq; @@ -562,8 +561,7 @@ typedef struct PLpgSQL_stmt_fors PLpgSQL_stmt_type cmd_type; int lineno; char *label; - PLpgSQL_rec *rec; - PLpgSQL_row *row; + PLpgSQL_variable *var; /* Loop variable (record or row) */ List *body; /* List of statements */ /* end of fields that must match PLpgSQL_stmt_forq */ PLpgSQL_expr *query; @@ -577,8 +575,7 @@ typedef struct PLpgSQL_stmt_forc PLpgSQL_stmt_type cmd_type; int lineno; char *label; - PLpgSQL_rec *rec; - PLpgSQL_row *row; + PLpgSQL_variable *var; /* Loop variable (record or row) */ List *body; /* List of statements */ /* end of fields that must match PLpgSQL_stmt_forq */ int curvar; @@ -593,8 +590,7 @@ typedef struct PLpgSQL_stmt_dynfors PLpgSQL_stmt_type cmd_type; int lineno; char *label; - PLpgSQL_rec *rec; - PLpgSQL_row *row; + PLpgSQL_variable *var; /* Loop variable (record or row) */ List *body; /* List of statements */ /* end of fields that must match PLpgSQL_stmt_forq */ PLpgSQL_expr *query; @@ -624,7 +620,6 @@ typedef struct PLpgSQL_stmt_open int lineno; int curvar; int cursor_options; - PLpgSQL_row *returntype; PLpgSQL_expr *argquery; PLpgSQL_expr *query; PLpgSQL_expr *dynquery; @@ -638,8 +633,7 @@ typedef struct PLpgSQL_stmt_fetch { PLpgSQL_stmt_type cmd_type; int lineno; - PLpgSQL_rec *rec; /* target, as record or row */ - PLpgSQL_row *row; + PLpgSQL_variable *target; /* target (record or row) */ int curvar; /* cursor variable to fetch from */ FetchDirection direction; /* fetch direction */ long how_many; /* count, if constant (expr is NULL) */ @@ -750,8 +744,7 @@ typedef struct PLpgSQL_stmt_execsql * mod_stmt is set when we plan the query */ bool into; /* INTO supplied? 
*/
 bool strict; /* INTO STRICT flag */
- PLpgSQL_rec *rec; /* INTO target, if record */
- PLpgSQL_row *row; /* INTO target, if row */
+ PLpgSQL_variable *target; /* INTO target (record or row) */
 } PLpgSQL_stmt_execsql;

 /*
@@ -764,8 +757,7 @@ typedef struct PLpgSQL_stmt_dynexecute
 PLpgSQL_expr *query; /* string expression */
 bool into; /* INTO supplied? */
 bool strict; /* INTO STRICT flag */
- PLpgSQL_rec *rec; /* INTO target, if record */
- PLpgSQL_row *row; /* INTO target, if row */
+ PLpgSQL_variable *target; /* INTO target (record or row) */
 List *params; /* USING expressions */
 } PLpgSQL_stmt_dynexecute;

From af9f8b7ca343eefa33b693d7919d8f945aeee3e7 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Fri, 8 Dec 2017 11:16:23 -0500
Subject: [PATCH 0676/1087] Fix mistake in comment

Reported-by: Masahiko Sawada
---
 src/backend/access/transam/xlog.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index e46ee553d6..0791404263 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -7868,7 +7868,7 @@ CheckRecoveryConsistency(void)
 /*
  * Have we passed our safe starting point? Note that minRecoveryPoint is
  * known to be incorrectly set if ControlFile->backupEndRequired, until
- * the XLOG_BACKUP_RECORD arrives to advise us of the correct
+ * the XLOG_BACKUP_END arrives to advise us of the correct
  * minRecoveryPoint. All we know prior to that is that we're not
  * consistent yet.
  */

From 005ac298b1bdc3e9bd19e5ee2bcf7e320ebe4130 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Fri, 8 Dec 2017 12:13:04 -0500
Subject: [PATCH 0677/1087] Prohibit identity columns on typed tables and partitions
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Those cases currently crash and supporting them is more work than
originally thought, so we'll just prohibit these scenarios for now.
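For reference, the two rejected shapes look like this (a sketch with
invented names; the regression tests added below use itest_type and
itest_parent):

    CREATE TYPE ty AS (a int);
    CREATE TABLE t_of OF ty
      (a WITH OPTIONS GENERATED ALWAYS AS IDENTITY);  -- now an error

    CREATE TABLE parent (a int) PARTITION BY LIST (a);
    CREATE TABLE child PARTITION OF parent
      (a WITH OPTIONS GENERATED ALWAYS AS IDENTITY)
      FOR VALUES IN (1);                              -- now an error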
Author: Michael Paquier Reviewed-by: Amit Langote Reported-by: Мансур Галиев Bug: #14866 --- src/backend/parser/parse_utilcmd.c | 13 +++++++++++++ src/test/regress/expected/identity.out | 12 ++++++++++++ src/test/regress/sql/identity.sql | 16 ++++++++++++++++ 3 files changed, 41 insertions(+) diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 8461da490a..343e6b3738 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -92,6 +92,7 @@ typedef struct IndexStmt *pkey; /* PRIMARY KEY index, if any */ bool ispartitioned; /* true if table is partitioned */ PartitionBoundSpec *partbound; /* transformed FOR VALUES */ + bool ofType; /* true if statement contains OF typename */ } CreateStmtContext; /* State shared by transformCreateSchemaStmt and its subroutines */ @@ -240,6 +241,8 @@ transformCreateStmt(CreateStmt *stmt, const char *queryString) cxt.alist = NIL; cxt.pkey = NULL; cxt.ispartitioned = stmt->partspec != NULL; + cxt.partbound = stmt->partbound; + cxt.ofType = (stmt->ofTypename != NULL); /* * Notice that we allow OIDs here only for plain tables, even though @@ -662,6 +665,15 @@ transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column) Type ctype; Oid typeOid; + if (cxt->ofType) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("identity colums are not supported on typed tables"))); + if (cxt->partbound) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("identify columns are not supported on partitions"))); + ctype = typenameType(cxt->pstate, column->typeName, NULL); typeOid = HeapTupleGetOid(ctype); ReleaseSysCache(ctype); @@ -2697,6 +2709,7 @@ transformAlterTableStmt(Oid relid, AlterTableStmt *stmt, cxt.pkey = NULL; cxt.ispartitioned = (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); cxt.partbound = NULL; + cxt.ofType = false; /* * The only subtypes that currently require parse transformation handling diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out index 174b420a04..ddc6950593 100644 --- a/src/test/regress/expected/identity.out +++ b/src/test/regress/expected/identity.out @@ -346,3 +346,15 @@ SELECT * FROM itest8; RESET ROLE; DROP TABLE itest8; DROP USER regress_user1; +-- typed tables (currently not supported) +CREATE TYPE itest_type AS (f1 integer, f2 text, f3 bigint); +CREATE TABLE itest12 OF itest_type (f1 WITH OPTIONS GENERATED ALWAYS AS IDENTITY); -- error +ERROR: identity colums are not supported on typed tables +DROP TYPE itest_type CASCADE; +-- table partitions (currently not supported) +CREATE TABLE itest_parent (f1 date NOT NULL, f2 text, f3 bigint) PARTITION BY RANGE (f1); +CREATE TABLE itest_child PARTITION OF itest_parent ( + f3 WITH OPTIONS GENERATED ALWAYS AS IDENTITY +) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01'); -- error +ERROR: identify columns are not supported on partitions +DROP TABLE itest_parent; diff --git a/src/test/regress/sql/identity.sql b/src/test/regress/sql/identity.sql index 6e76dd08c8..1b2d11cf34 100644 --- a/src/test/regress/sql/identity.sql +++ b/src/test/regress/sql/identity.sql @@ -211,3 +211,19 @@ SELECT * FROM itest8; RESET ROLE; DROP TABLE itest8; DROP USER regress_user1; + + +-- typed tables (currently not supported) + +CREATE TYPE itest_type AS (f1 integer, f2 text, f3 bigint); +CREATE TABLE itest12 OF itest_type (f1 WITH OPTIONS GENERATED ALWAYS AS IDENTITY); -- error +DROP TYPE itest_type CASCADE; + + +-- table partitions (currently not supported) + +CREATE TABLE 
itest_parent (f1 date NOT NULL, f2 text, f3 bigint) PARTITION BY RANGE (f1); +CREATE TABLE itest_child PARTITION OF itest_parent ( + f3 WITH OPTIONS GENERATED ALWAYS AS IDENTITY +) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01'); -- error +DROP TABLE itest_parent; From 65a00f30352a3c0ab5615fac008735b103cfa5bb Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Fri, 8 Dec 2017 18:06:05 -0800 Subject: [PATCH 0678/1087] MSVC: Test whether 32-bit Perl needs -D_USE_32BIT_TIME_T. Commits 5a5c2feca3fd858e70ea348822595547e6fa6c15 and b5178c5d08ca59e30f9d9428fa6fdb2741794e65 introduced support for modern MSVC-built, 32-bit Perl, but they broke use of MinGW-built, 32-bit Perl distributions like Strawberry Perl and modern ActivePerl. Perl has no robust means to report whether it expects a -D_USE_32BIT_TIME_T ABI, so test this. Back-patch to 9.3 (all supported versions). The chief alternative was a heuristic of adding -D_USE_32BIT_TIME_T when $Config{gccversion} is nonempty. That banks on every gcc-built Perl using the same ABI. gcc could change its default ABI the way MSVC once did, and one could build Perl with gcc and the non-default ABI. The GNU make build system could benefit from a similar test, without which it does not support MSVC-built Perl. For now, just add a comment. Most users taking the special step of building Perl with MSVC probably build PostgreSQL with MSVC. Discussion: https://postgr.es/m/20171130041441.GA3161526@rfd.leadboat.com --- config/perl.m4 | 30 ++++--- src/tools/msvc/Mkvcbuild.pm | 170 ++++++++++++++++++++++++++++++------ 2 files changed, 158 insertions(+), 42 deletions(-) diff --git a/config/perl.m4 b/config/perl.m4 index 76b1a92e3a..caefb0705e 100644 --- a/config/perl.m4 +++ b/config/perl.m4 @@ -48,19 +48,23 @@ AC_DEFUN([PGAC_CHECK_PERL_CONFIGS], # PGAC_CHECK_PERL_EMBED_CCFLAGS # ----------------------------- -# We selectively extract stuff from $Config{ccflags}. We don't really need -# anything except -D switches, and other sorts of compiler switches can -# actively break things if Perl was compiled with a different compiler. -# Moreover, although Perl likes to put stuff like -D_LARGEFILE_SOURCE and -# -D_FILE_OFFSET_BITS=64 here, it would be fatal to try to compile PL/Perl -# to a different libc ABI than core Postgres uses. The available information -# says that all the symbols that affect Perl's own ABI begin with letters, -# so it should be sufficient to adopt -D switches for symbols not beginning -# with underscore. An exception is that we need to let through -# -D_USE_32BIT_TIME_T if it's present. (We probably could restrict that to -# only get through on Windows, but for the moment we let it through always.) -# For debugging purposes, let's have the configure output report the raw -# ccflags value as well as the set of flags we chose to adopt. +# We selectively extract stuff from $Config{ccflags}. For debugging purposes, +# let's have the configure output report the raw ccflags value as well as the +# set of flags we chose to adopt. We don't really need anything except -D +# switches, and other sorts of compiler switches can actively break things if +# Perl was compiled with a different compiler. Moreover, although Perl likes +# to put stuff like -D_LARGEFILE_SOURCE and -D_FILE_OFFSET_BITS=64 here, it +# would be fatal to try to compile PL/Perl to a different libc ABI than core +# Postgres uses. 
The available information says that most symbols that affect +# Perl's own ABI begin with letters, so it's almost sufficient to adopt -D +# switches for symbols not beginning with underscore. Some exceptions are the +# Windows-specific -D_USE_32BIT_TIME_T and -D__MINGW_USE_VC2005_COMPAT; see +# Mkvcbuild.pm for details. We absorb the former when Perl reports it. Perl +# never reports the latter, and we don't attempt to deduce when it's needed. +# Consequently, we don't support using MinGW to link to MSVC-built Perl. As +# of 2017, all supported ActivePerl and Strawberry Perl are MinGW-built. If +# that changes or an MSVC-built Perl distribution becomes prominent, we can +# revisit this limitation. AC_DEFUN([PGAC_CHECK_PERL_EMBED_CCFLAGS], [AC_REQUIRE([PGAC_PATH_PERL]) AC_MSG_CHECKING([for CFLAGS recommended by Perl]) diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm index 4c2e12e228..93f364a9f2 100644 --- a/src/tools/msvc/Mkvcbuild.pm +++ b/src/tools/msvc/Mkvcbuild.pm @@ -28,6 +28,7 @@ my $libpgcommon; my $libpgfeutils; my $postgres; my $libpq; +my @unlink_on_exit; # Set of variables for modules in contrib/ and src/test/modules/ my $contrib_defines = { 'refint' => 'REFINT_VERBOSE' }; @@ -517,34 +518,154 @@ sub mkvcbuild my $plperl = $solution->AddProject('plperl', 'dll', 'PLs', 'src/pl/plperl'); $plperl->AddIncludeDir($solution->{options}->{perl} . '/lib/CORE'); + $plperl->AddReference($postgres); + + my $perl_path = $solution->{options}->{perl} . '\lib\CORE\*perl*'; + + # ActivePerl 5.16 provided perl516.lib; 5.18 provided libperl518.a + my @perl_libs = + grep { /perl\d+\.lib$|libperl\d+\.a$/ } glob($perl_path); + if (@perl_libs == 1) + { + $plperl->AddLibrary($perl_libs[0]); + } + else + { + die +"could not identify perl library version matching pattern $perl_path\n"; + } # Add defines from Perl's ccflags; see PGAC_CHECK_PERL_EMBED_CCFLAGS my @perl_embed_ccflags; foreach my $f (split(" ", $Config{ccflags})) { - if ( $f =~ /^-D[^_]/ - || $f =~ /^-D_USE_32BIT_TIME_T/) + if ($f =~ /^-D[^_]/) { $f =~ s/\-D//; push(@perl_embed_ccflags, $f); } } - # Perl versions before 5.13.4 don't provide -D_USE_32BIT_TIME_T - # regardless of how they were built. On 32-bit Windows, assume - # such a version was built with a pre-MSVC-2005 compiler, and - # define the symbol anyway, so that we are compatible if we're - # being built with a later MSVC version. - push(@perl_embed_ccflags, '_USE_32BIT_TIME_T') - if $solution->{platform} eq 'Win32' - && $Config{PERL_REVISION} == 5 - && ($Config{PERL_VERSION} < 13 - || ( $Config{PERL_VERSION} == 13 - && $Config{PERL_SUBVERSION} < 4)); - - # Also, a hack to prevent duplicate definitions of uid_t/gid_t + # hack to prevent duplicate definitions of uid_t/gid_t push(@perl_embed_ccflags, 'PLPERL_HAVE_UID_GID'); + # Windows offers several 32-bit ABIs. Perl is sensitive to + # sizeof(time_t), one of the ABI dimensions. To get 32-bit time_t, + # use "cl -D_USE_32BIT_TIME_T" or plain "gcc". For 64-bit time_t, use + # "gcc -D__MINGW_USE_VC2005_COMPAT" or plain "cl". Before MSVC 2005, + # plain "cl" chose 32-bit time_t. PostgreSQL doesn't support building + # with pre-MSVC-2005 compilers, but it does support linking to Perl + # built with such a compiler. MSVC-built Perl 5.13.4 and later report + # -D_USE_32BIT_TIME_T in $Config{ccflags} if applicable, but + # MinGW-built Perl never reports -D_USE_32BIT_TIME_T despite typically + # needing it. 
Ignore the $Config{ccflags} opinion about
+ # -D_USE_32BIT_TIME_T, and use a runtime test to deduce the ABI Perl
+ # expects. Specifically, test use of PL_modglobal, which maps to a
+ # PerlInterpreter field whose position depends on sizeof(time_t).
+ if ($solution->{platform} eq 'Win32')
+ {
+ my $source_file = 'conftest.c';
+ my $obj = 'conftest.obj';
+ my $exe = 'conftest.exe';
+ my @conftest = ($source_file, $obj, $exe);
+ push @unlink_on_exit, @conftest;
+ unlink $source_file;
+ open my $o, '>', $source_file
+ || croak "Could not write to $source_file";
+ print $o '
+ /* compare to plperl.h */
+ #define __inline__ __inline
+ #define PERL_NO_GET_CONTEXT
+ #include <EXTERN.h>
+ #include <perl.h>
+
+ int
+ main(int argc, char **argv)
+ {
+ int dummy_argc = 1;
+ char *dummy_argv[1] = {""};
+ char *dummy_env[1] = {NULL};
+ static PerlInterpreter *interp;
+
+ PERL_SYS_INIT3(&dummy_argc, (char ***) &dummy_argv,
+ (char ***) &dummy_env);
+ interp = perl_alloc();
+ perl_construct(interp);
+ {
+ dTHX;
+ const char key[] = "dummy";
+
+ PL_exit_flags |= PERL_EXIT_DESTRUCT_END;
+ hv_store(PL_modglobal, key, sizeof(key) - 1, newSViv(1), 0);
+ return hv_fetch(PL_modglobal, key, sizeof(key) - 1, 0) == NULL;
+ }
+ }
+';
+ close $o;
+
+ # Build $source_file with a given #define, and return a true value
+ # if a run of the resulting binary exits successfully.
+ my $try_define = sub {
+ my $define = shift;
+
+ unlink $obj, $exe;
+ my @cmd = (
+ 'cl',
+ '-I' . $solution->{options}->{perl} . '/lib/CORE',
+ (map { "-D$_" } @perl_embed_ccflags, $define || ()),
+ $source_file,
+ '/link',
+ $perl_libs[0]);
+ my $compile_output = `@cmd 2>&1`;
+ -f $exe || die "Failed to build Perl test:\n$compile_output";
+
+ {
+
+ # Some builds exhibit runtime failure through Perl warning
+ # 'Can't spawn "conftest.exe"'; supress that.
+ no warnings;
+
+ # Disable error dialog boxes like we do in the postmaster.
+ # Here, we run code that triggers relevant errors.
+ use Win32API::File qw(SetErrorMode :SEM_);
+ my $oldmode = SetErrorMode(
+ SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX);
+ system(".\\$exe");
+ SetErrorMode($oldmode);
+ }
+
+ return !($? >> 8);
+ };
+
+ my $define_32bit_time = '_USE_32BIT_TIME_T';
+ my $ok_now = $try_define->(undef);
+ my $ok_32bit = $try_define->($define_32bit_time);
+ unlink @conftest;
+ if (!$ok_now && !$ok_32bit)
+ {
+
+ # Unsupported configuration. Since we used %Config from the
+ # Perl running the build scripts, this is expected if
+ # attempting to link with some other Perl.
+ die "Perl test fails with or without -D$define_32bit_time";
+ }
+ elsif ($ok_now && $ok_32bit)
+ {
+
+ # Resulting build may work, but it's especially important to
+ # verify with "vcregress plcheck". A refined test may avoid
+ # this outcome.
+ warn "Perl test passes with or without -D$define_32bit_time";
+ }
+ elsif ($ok_32bit)
+ {
+ push(@perl_embed_ccflags, $define_32bit_time);
+ } # else $ok_now, hence no flag required
+ }
+
+ print "CFLAGS recommended by Perl: $Config{ccflags}\n";
+ print "CFLAGS to compile embedded Perl: ",
+ (join ' ', map { "-D$_" } @perl_embed_ccflags), "\n";
 foreach my $f (@perl_embed_ccflags)
 {
 $plperl->AddDefine($f);
@@ -614,20 +735,6 @@ sub mkvcbuild
 die 'Failed to create plperl_opmask.h' . "\n";
 }
 }
- $plperl->AddReference($postgres);
- my $perl_path = $solution->{options}->{perl} .
'\lib\CORE\*perl*'; - # ActivePerl 5.16 provided perl516.lib; 5.18 provided libperl518.a - my @perl_libs = - grep { /perl\d+\.lib$|libperl\d+\.a$/ } glob($perl_path); - if (@perl_libs == 1) - { - $plperl->AddLibrary($perl_libs[0]); - } - else - { - die -"could not identify perl library version matching pattern $perl_path\n"; - } # Add transform module dependent on plperl my $hstore_plperl = AddTransformModule( @@ -956,4 +1063,9 @@ sub AdjustModule } } +END +{ + unlink @unlink_on_exit; +} + 1; From 7e0c574ee26ce0308b76166312788e909b555c23 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sat, 9 Dec 2017 00:58:55 -0800 Subject: [PATCH 0679/1087] MSVC 2012+: Permit linking to 32-bit, MinGW-built libraries. Notably, this permits linking to the 32-bit Perl binaries advertised on perl.org, namely Strawberry Perl and ActivePerl. This has a side effect of permitting linking to binaries built with obsolete MSVC versions. By default, MSVC 2012 and later require a "safe exception handler table" in each binary. MinGW-built, 32-bit DLLs lack the relevant exception handler metadata, so linking to them failed with error LNK2026. Restore the semantics of MSVC 2010, which omits the table from a given binary if some linker input lacks metadata. This has no effect on 64-bit builds or on MSVC 2010 and earlier. Back-patch to 9.3 (all supported versions). Reported by Victor Wagner. Discussion: https://postgr.es/m/20160326154321.7754ab8f@wagner.wagner.home --- src/tools/msvc/MSBuildProject.pm | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm index 7a287bd0bd..9ddccc7c55 100644 --- a/src/tools/msvc/MSBuildProject.pm +++ b/src/tools/msvc/MSBuildProject.pm @@ -320,6 +320,8 @@ sub WriteItemDefinitionGroup false .\\$cfgname\\$self->{name}\\$self->{name}.map false + + Console $targetmachine EOF From d8f632caec3fcc5eece9d53d7510322f11489fe4 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Sat, 9 Dec 2017 11:40:31 +0100 Subject: [PATCH 0680/1087] Fix typo Reported by Robins Tharakan --- src/backend/parser/parse_utilcmd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 343e6b3738..f67379f8ed 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -672,7 +672,7 @@ transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column) if (cxt->partbound) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("identify columns are not supported on partitions"))); + errmsg("identity columns are not supported on partitions"))); ctype = typenameType(cxt->pstate, column->typeName, NULL); typeOid = HeapTupleGetOid(ctype); From ce1468d02bdbbe3aa710463fa9faaf8cf865ad72 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Sat, 9 Dec 2017 13:45:06 +0100 Subject: [PATCH 0681/1087] Fix regression test output Missed this in the last commit. 
--- src/test/regress/expected/identity.out | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out index ddc6950593..87ef0d3b2a 100644 --- a/src/test/regress/expected/identity.out +++ b/src/test/regress/expected/identity.out @@ -356,5 +356,5 @@ CREATE TABLE itest_parent (f1 date NOT NULL, f2 text, f3 bigint) PARTITION BY RA CREATE TABLE itest_child PARTITION OF itest_parent ( f3 WITH OPTIONS GENERATED ALWAYS AS IDENTITY ) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01'); -- error -ERROR: identify columns are not supported on partitions +ERROR: identity columns are not supported on partitions DROP TABLE itest_parent; From 390d58135b22bc25229b524a60f69682182201d8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 9 Dec 2017 12:03:00 -0500 Subject: [PATCH 0682/1087] Fix plpgsql to reinitialize record variables at block re-entry. If one exits and re-enters a DECLARE ... BEGIN ... END block within a single execution of a plpgsql function, perhaps due to a surrounding loop, the declared variables are supposed to get re-initialized to null (or whatever their initializer is). But this failed to happen for variables of type "record", because while exec_stmt_block() expected such variables to be included in the block's initvarnos list, plpgsql_add_initdatums() only adds DTYPE_VAR variables to that list. This bug appears to have been there since the aboriginal addition of plpgsql to our tree. Fix by teaching plpgsql_add_initdatums() to include DTYPE_REC variables as well. (We don't need to consider other DTYPEs because they don't represent separately-stored values.) I failed to resist the temptation to make some nearby cosmetic adjustments, too. No back-patch, because there have not been field complaints, and it seems possible that somewhere out there someone has code depending on the incorrect behavior. In any case this change would have no impact on correctly-written code. Discussion: https://postgr.es/m/22994.1512800671@sss.pgh.pa.us --- src/pl/plpgsql/src/pl_comp.c | 12 +++++++++--- src/pl/plpgsql/src/pl_exec.c | 20 +++++++++----------- src/pl/plpgsql/src/plpgsql.h | 4 ++-- src/test/regress/expected/plpgsql.out | 27 +++++++++++++++++++++++++++ src/test/regress/sql/plpgsql.sql | 21 +++++++++++++++++++++ 5 files changed, 68 insertions(+), 16 deletions(-) diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index 1300ea6a52..2d7844bd9d 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -2384,14 +2384,14 @@ plpgsql_finish_datums(PLpgSQL_function *function) /* ---------- * plpgsql_add_initdatums Make an array of the datum numbers of - * all the simple VAR datums created since the last call + * all the initializable datums created since the last call * to this function. * * If varnos is NULL, we just forget any datum entries created since the * last call. * - * This is used around a DECLARE section to create a list of the VARs - * that have to be initialized at block entry. Note that VARs can also + * This is used around a DECLARE section to create a list of the datums + * that have to be initialized at block entry. Note that datums can also * be created elsewhere than DECLARE, eg by a FOR-loop, but it is then * the responsibility of special-purpose code to initialize them. 
* ---------- @@ -2402,11 +2402,16 @@ plpgsql_add_initdatums(int **varnos) int i; int n = 0; + /* + * The set of dtypes recognized here must match what exec_stmt_block() + * cares about (re)initializing at block entry. + */ for (i = datums_last; i < plpgsql_nDatums; i++) { switch (plpgsql_Datums[i]->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_REC: n++; break; @@ -2427,6 +2432,7 @@ plpgsql_add_initdatums(int **varnos) switch (plpgsql_Datums[i]->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_REC: (*varnos)[n++] = plpgsql_Datums[i]->dno; default: diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index 1959d6dc42..fa4d573e50 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -1184,7 +1184,6 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) { volatile int rc = -1; int i; - int n; /* * First initialize all variables declared in this block @@ -1193,13 +1192,17 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) for (i = 0; i < block->n_initvars; i++) { - n = block->initvarnos[i]; + int n = block->initvarnos[i]; + PLpgSQL_datum *datum = estate->datums[n]; - switch (estate->datums[n]->dtype) + /* + * The set of dtypes handled here must match plpgsql_add_initdatums(). + */ + switch (datum->dtype) { case PLPGSQL_DTYPE_VAR: { - PLpgSQL_var *var = (PLpgSQL_var *) (estate->datums[n]); + PLpgSQL_var *var = (PLpgSQL_var *) datum; /* * Free any old value, in case re-entering block, and @@ -1241,7 +1244,7 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) case PLPGSQL_DTYPE_REC: { - PLpgSQL_rec *rec = (PLpgSQL_rec *) (estate->datums[n]); + PLpgSQL_rec *rec = (PLpgSQL_rec *) datum; if (rec->freetup) { @@ -1258,13 +1261,8 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) } break; - case PLPGSQL_DTYPE_RECFIELD: - case PLPGSQL_DTYPE_ARRAYELEM: - break; - default: - elog(ERROR, "unrecognized dtype: %d", - estate->datums[n]->dtype); + elog(ERROR, "unrecognized dtype: %d", datum->dtype); } } diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index 8448578537..39bd82acd1 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -407,8 +407,8 @@ typedef struct PLpgSQL_stmt_block int lineno; char *label; List *body; /* List of statements */ - int n_initvars; - int *initvarnos; + int n_initvars; /* Length of initvarnos[] */ + int *initvarnos; /* dnos of variables declared in this block */ PLpgSQL_exception_block *exceptions; } PLpgSQL_stmt_block; diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index d6e5bc3353..29f9e86d56 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -5025,6 +5025,33 @@ select scope_test(); (1 row) drop function scope_test(); +-- Check that variables are reinitialized on block re-entry. +do $$ +begin + for i in 1..3 loop + declare + x int; + y int := i; + r record; + begin + if i = 1 then + x := 42; + r := row(i, i+1); + end if; + raise notice 'x = %', x; + raise notice 'y = %', y; + raise notice 'r = %', r; + end; + end loop; +end$$; +NOTICE: x = 42 +NOTICE: y = 1 +NOTICE: r = (1,2) +NOTICE: x = +NOTICE: y = 2 +ERROR: record "r" is not assigned yet +DETAIL: The tuple structure of a not-yet-assigned record is indeterminate. +CONTEXT: PL/pgSQL function inline_code_block line 15 at RAISE -- Check handling of conflicts between plpgsql vars and table columns. 
set plpgsql.variable_conflict = error; create function conflict_test() returns setof int8_tbl as $$ diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 1c355132b7..07b6fc8971 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -4014,6 +4014,27 @@ select scope_test(); drop function scope_test(); +-- Check that variables are reinitialized on block re-entry. + +do $$ +begin + for i in 1..3 loop + declare + x int; + y int := i; + r record; + begin + if i = 1 then + x := 42; + r := row(i, i+1); + end if; + raise notice 'x = %', x; + raise notice 'y = %', y; + raise notice 'r = %', r; + end; + end loop; +end$$; + -- Check handling of conflicts between plpgsql vars and table columns. set plpgsql.variable_conflict = error; From 9edc97b712e2f0ba041b40b4b2e2285d229f4fb0 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 10 Dec 2017 12:44:03 -0500 Subject: [PATCH 0683/1087] Stabilize output of new regression test case. The test added by commit 390d58135 turns out to have different output in CLOBBER_CACHE_ALWAYS builds: there's an extra CONTEXT line in the error message as a result of detecting the error at a different place. Possibly we should do something to make that more consistent. But as a stopgap measure to make the buildfarm green again, adjust the test to suppress CONTEXT entirely. We can revert this if we do something in the backend to eliminate the inconsistency. Discussion: https://postgr.es/m/31545.1512924904@sss.pgh.pa.us --- src/test/regress/expected/plpgsql.out | 4 ++-- src/test/regress/sql/plpgsql.sql | 2 ++ 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index 29f9e86d56..26f6e4394f 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -5026,6 +5026,7 @@ select scope_test(); drop function scope_test(); -- Check that variables are reinitialized on block re-entry. +\set VERBOSITY terse \\ -- needed for output stability do $$ begin for i in 1..3 loop @@ -5050,8 +5051,7 @@ NOTICE: r = (1,2) NOTICE: x = NOTICE: y = 2 ERROR: record "r" is not assigned yet -DETAIL: The tuple structure of a not-yet-assigned record is indeterminate. -CONTEXT: PL/pgSQL function inline_code_block line 15 at RAISE +\set VERBOSITY default -- Check handling of conflicts between plpgsql vars and table columns. set plpgsql.variable_conflict = error; create function conflict_test() returns setof int8_tbl as $$ diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 07b6fc8971..bb09b2d807 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -4016,6 +4016,7 @@ drop function scope_test(); -- Check that variables are reinitialized on block re-entry. +\set VERBOSITY terse \\ -- needed for output stability do $$ begin for i in 1..3 loop @@ -4034,6 +4035,7 @@ begin end; end loop; end$$; +\set VERBOSITY default -- Check handling of conflicts between plpgsql vars and table columns. From 01a0ca1bed02d6a375c6565a529555eefd2b4fe8 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 11 Dec 2017 12:48:40 -0500 Subject: [PATCH 0684/1087] Improve comment about PartitionBoundInfoData. Ashutosh Bapat, per discussion with Julien Rouhaund, who also reviewed this patch. 
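Concretely, the invariant being documented is that the stored bounds end
up sorted regardless of the order in which partitions are created; a
sketch with invented names:

    CREATE TABLE r (x int) PARTITION BY RANGE (x);
    CREATE TABLE r2 PARTITION OF r FOR VALUES FROM (10) TO (20);
    CREATE TABLE r1 PARTITION OF r FOR VALUES FROM (0) TO (10);
    -- PartitionBoundInfoData holds the bounds as 0, 10, 20, in increasing
    -- order per the key's operator class, not in creation order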
Discussion: http://postgr.es/m/CAFjFpReBR3ftK9C23LLCZY_TDXhhjB_dgE-L9+mfTnA=gkvdvQ@mail.gmail.com
---
 src/backend/catalog/partition.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index dd4a8d3c02..ef156e449e 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -72,6 +72,13 @@
 * of datum-tuples with 2 datums, modulus and remainder, corresponding to a
 * given partition.
 *
+ * The datums in datums array are arranged in increasing order as defined by
+ * functions qsort_partition_rbound_cmp(), qsort_partition_list_value_cmp() and
+ * qsort_partition_hbound_cmp() for range, list and hash partitioned tables
+ * respectively. For range and list partitions this simply means that the
+ * datums in the datums array are arranged in increasing order as defined by
+ * the partition key's operator classes and collations.
+ *
 * In the case of list partitioning, the indexes array stores one entry for
 * every datum, which is the index of the partition that accepts a given datum.
 * In case of range partitioning, it stores one entry per distinct range

From 7eb16ab17d5c01b293aad35f0843e5f3a9a64080 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Mon, 11 Dec 2017 16:33:20 -0500
Subject: [PATCH 0685/1087] Fix corner-case coredump in _SPI_error_callback().

I noticed that _SPI_execute_plan initially sets spierrcontext.arg = NULL,
and only fills it in some time later. If an error were to happen in
between, _SPI_error_callback would try to dereference the null pointer.
This is unlikely --- there's not much between those points except
push-snapshot calls --- but it's clearly not impossible. Tweak the
callback to do nothing if the pointer isn't set yet.

It's been like this for awhile, so back-patch to all supported branches.
---
 src/backend/executor/spi.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 2da1cac3e2..f3da2ddd08 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -2367,6 +2367,9 @@ _SPI_error_callback(void *arg)
 const char *query = (const char *) arg;
 int syntaxerrposition;

+ if (query == NULL) /* in case arg wasn't set yet */
+ return;
+
 /*
  * If there is a syntax error position, convert to internal syntax error;
  * otherwise treat the query as an item of context stack

From 4034db215b92c68ce55cf1c658d4ef7599ccc45a Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Mon, 11 Dec 2017 16:37:39 -0500
Subject: [PATCH 0686/1087] Fix comment

Reported-by: Noah Misch
---
 src/backend/executor/execReplication.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index fb538c0297..bd786a1be6 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -288,7 +288,7 @@ RelationFindReplTupleSeq(Relation rel, LockTupleMode lockmode,

 Assert(equalTupleDescs(desc, outslot->tts_tupleDescriptor));

- /* Start an index scan. */
+ /* Start a heap scan. */
 InitDirtySnapshot(snap);
 scan = heap_beginscan(rel, &snap, 0, NULL);

From c28aa157b86f756d53f2a6b715e23ca56f219b4f Mon Sep 17 00:00:00 2001
From: Teodor Sigaev
Date: Tue, 12 Dec 2017 14:59:27 +0300
Subject: [PATCH 0687/1087] Make pg_trgm tests independent of standard_conforming_strings.

The tests use regular expressions that contain backslashes.
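To see the hazard being guarded against, compare (an illustrative
snippet, not taken from the test file):

    SET standard_conforming_strings = off;
    SELECT 'a1' ~ 'a\w';  -- the literal parser eats the backslash, so the
                          -- pattern is really 'aw': false, with a warning
    SET standard_conforming_strings = on;
    SELECT 'a1' ~ 'a\w';  -- the regex engine sees \w as written: true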
--- contrib/pg_trgm/expected/pg_trgm.out | 3 +++ contrib/pg_trgm/sql/pg_trgm.sql | 4 ++++ 2 files changed, 7 insertions(+) diff --git a/contrib/pg_trgm/expected/pg_trgm.out b/contrib/pg_trgm/expected/pg_trgm.out index c3304b0ceb..6efc54356a 100644 --- a/contrib/pg_trgm/expected/pg_trgm.out +++ b/contrib/pg_trgm/expected/pg_trgm.out @@ -7,6 +7,9 @@ WHERE opc.oid >= 16384 AND NOT amvalidate(opc.oid); --------+--------- (0 rows) +--backslash is used in tests below, installcheck will fail if +--standard_conforming_string is off +set standard_conforming_strings=on; select show_trgm(''); show_trgm ----------- diff --git a/contrib/pg_trgm/sql/pg_trgm.sql b/contrib/pg_trgm/sql/pg_trgm.sql index fe8d0a7495..96ae542320 100644 --- a/contrib/pg_trgm/sql/pg_trgm.sql +++ b/contrib/pg_trgm/sql/pg_trgm.sql @@ -5,6 +5,10 @@ SELECT amname, opcname FROM pg_opclass opc LEFT JOIN pg_am am ON am.oid = opcmethod WHERE opc.oid >= 16384 AND NOT amvalidate(opc.oid); +--backslash is used in tests below, installcheck will fail if +--standard_conforming_string is off +set standard_conforming_strings=on; + select show_trgm(''); select show_trgm('(*&^$@%@'); select show_trgm('a b c'); From d329dc2ea4bfac84ec60ba14b96561a7508bb37b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 12 Dec 2017 10:52:15 -0500 Subject: [PATCH 0688/1087] Remove bug from OPTIMIZER_DEBUG code for partition-wise join. Etsuro Fujita, reviewed by Ashutosh Bapat Discussion: http://postgr.es/m/5A2A60E6.6000008@lab.ntt.co.jp --- src/backend/optimizer/path/allpaths.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 47986ba80a..0e8463e4a3 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -3457,7 +3457,7 @@ generate_partition_wise_join_paths(PlannerInfo *root, RelOptInfo *rel) set_cheapest(child_rel); #ifdef OPTIMIZER_DEBUG - debug_print_rel(root, rel); + debug_print_rel(root, child_rel); #endif live_children = lappend(live_children, child_rel); From 95b52351fe966c93791462274dfa7af7e50d2da1 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 12 Dec 2017 19:33:50 -0500 Subject: [PATCH 0689/1087] Remove obsolete comment. Commit 8b304b8b72b0a60f1968d39f01cf817c8df863ec removed replacement selection, but left behind this comment text. The optimization to which the comment refers is not relevant without replacement selection, because if we had so few tuples as to require only one tape, we would have just completed the sort in memory. Peter Geoghegan Discussion: http://postgr.es/m/CAH2-WznqupLA8CMjp+vqzoe0yXu0DYYbQSNZxmgN76tLnAOZ_w@mail.gmail.com --- src/backend/utils/sort/tuplesort.c | 5 ----- 1 file changed, 5 deletions(-) diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c index 3c23ac75a0..35eebad8e4 100644 --- a/src/backend/utils/sort/tuplesort.c +++ b/src/backend/utils/sort/tuplesort.c @@ -2459,11 +2459,6 @@ mergeruns(Tuplesortstate *state) * Use all the remaining memory we have available for read buffers among * the input tapes. * - * We do this only after checking for the case that we produced only one - * initial run, because there is no need to use a large read buffer when - * we're reading from a single tape. With one tape, the I/O pattern will - * be the same regardless of the buffer size. - * * We don't try to "rebalance" the memory among tapes, when we start a new * merge phase, even if some tapes are inactive in the new phase. 
     * would be hard, because logtape.c doesn't know where one run ends and

From 4d6ad31257adaf8a51e1c4377d96afa656d9165f Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Sun, 29 Oct 2017 22:13:54 -0700
Subject: [PATCH 0690/1087] Provide overflow-safe integer math inline functions.

It's not easy to get signed integer overflow checks correct and fast.
Therefore abstract the necessary infrastructure into a common header
providing addition, subtraction and multiplication for 16, 32, 64 bit
signed integers. The new inline functions aren't yet used, but a
followup commit will convert several open-coded overflow checks.

Author: Andres Freund, with some code stolen from Greg Stark
Reviewed-By: Robert Haas
Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de
---
 config/c-compiler.m4          |  22 ++++
 configure                     |  33 +++++
 configure.in                  |   4 +
 src/include/common/int.h      | 239 ++++++++++++++++++++++++++++++++++
 src/include/pg_config.h.in    |   3 +
 src/include/pg_config.h.win32 |   3 +
 6 files changed, 304 insertions(+)
 create mode 100644 src/include/common/int.h

diff --git a/config/c-compiler.m4 b/config/c-compiler.m4
index 492f6832cf..28c372cd32 100644
--- a/config/c-compiler.m4
+++ b/config/c-compiler.m4
@@ -299,6 +299,28 @@ fi])# PGAC_C_BUILTIN_CONSTANT_P
 
 
 
+# PGAC_C_BUILTIN_OP_OVERFLOW
+# -------------------------
+# Check if the C compiler understands __builtin_$op_overflow(),
+# and define HAVE__BUILTIN_OP_OVERFLOW if so.
+#
+# Check for the most complicated case, 64 bit multiplication, as a
+# proxy for all of the operations.
+AC_DEFUN([PGAC_C_BUILTIN_OP_OVERFLOW],
+[AC_CACHE_CHECK(for __builtin_mul_overflow, pgac_cv__builtin_op_overflow,
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],
+[PG_INT64_TYPE result;
+__builtin_mul_overflow((PG_INT64_TYPE) 1, (PG_INT64_TYPE) 2, &result);]
+)],
+[pgac_cv__builtin_op_overflow=yes],
+[pgac_cv__builtin_op_overflow=no])])
+if test x"$pgac_cv__builtin_op_overflow" = xyes ; then
+AC_DEFINE(HAVE__BUILTIN_OP_OVERFLOW, 1,
+          [Define to 1 if your compiler understands __builtin_$op_overflow.])
+fi])# PGAC_C_BUILTIN_OP_OVERFLOW
+
+
+
 # PGAC_C_BUILTIN_UNREACHABLE
 # --------------------------
 # Check if the C compiler understands __builtin_unreachable(),
diff --git a/configure b/configure
index 6c4d743b35..4a4f13314e 100755
--- a/configure
+++ b/configure
@@ -14472,6 +14472,39 @@ esac
 
 fi
 
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_mul_overflow" >&5
+$as_echo_n "checking for __builtin_mul_overflow... " >&6; }
+if ${pgac_cv__builtin_op_overflow+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+int
+main ()
+{
+PG_INT64_TYPE result;
+__builtin_mul_overflow((PG_INT64_TYPE) 1, (PG_INT64_TYPE) 2, &result);
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv__builtin_op_overflow=yes
+else
+  pgac_cv__builtin_op_overflow=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_op_overflow" >&5
+$as_echo "$pgac_cv__builtin_op_overflow" >&6; }
+if test x"$pgac_cv__builtin_op_overflow" = xyes ; then
+
+$as_echo "#define HAVE__BUILTIN_OP_OVERFLOW 1" >>confdefs.h
+
+fi
+
 # Check size of void *, size_t (enables tweaks for > 32bit address space)
 # The cast to long int works around a bug in the HP C Compiler
 # version HP92453-01 B.11.11.23709.GP, which incorrectly rejects
diff --git a/configure.in b/configure.in
index d9c4a50b4b..5245899109 100644
--- a/configure.in
+++ b/configure.in
@@ -1770,6 +1770,10 @@ if test $pgac_need_repl_snprintf = yes; then
   AC_LIBOBJ(snprintf)
 fi
 
+# has to be down here, rather than with the other builtins, because
+# the test uses PG_INT64_TYPE.
+PGAC_C_BUILTIN_OP_OVERFLOW
+
 # Check size of void *, size_t (enables tweaks for > 32bit address space)
 AC_CHECK_SIZEOF([void *])
 AC_CHECK_SIZEOF([size_t])
diff --git a/src/include/common/int.h b/src/include/common/int.h
new file mode 100644
index 0000000000..e44d42f7da
--- /dev/null
+++ b/src/include/common/int.h
@@ -0,0 +1,239 @@
+/*-------------------------------------------------------------------------
+ *
+ * int.h
+ *    Routines to perform integer math, while checking for overflows.
+ *
+ * The routines in this file are intended to be well defined C, without
+ * relying on compiler flags like -fwrapv.
+ *
+ * To reduce the overhead of these routines try to use compiler intrinsics
+ * where available. That's not that important for the 16, 32 bit cases, but
+ * the 64 bit cases can be considerably faster with intrinsics. In case no
+ * intrinsics are available 128 bit math is used where available.
+ *
+ * Copyright (c) 2017, PostgreSQL Global Development Group
+ *
+ * src/include/common/int.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_INT_H
+#define COMMON_INT_H
+
+/*
+ * If a + b overflows, return true, otherwise store the result of a + b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_add_s16_overflow(int16 a, int16 b, int16 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_add_overflow(a, b, result);
+#else
+    int32       res = (int32) a + (int32) b;
+
+    if (res > PG_INT16_MAX || res < PG_INT16_MIN)
+        return true;
+    *result = (int16) res;
+    return false;
+#endif
+}
+
+/*
+ * If a - b overflows, return true, otherwise store the result of a - b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_sub_s16_overflow(int16 a, int16 b, int16 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_sub_overflow(a, b, result);
+#else
+    int32       res = (int32) a - (int32) b;
+
+    if (res > PG_INT16_MAX || res < PG_INT16_MIN)
+        return true;
+    *result = (int16) res;
+    return false;
+#endif
+}
+
+/*
+ * If a * b overflows, return true, otherwise store the result of a * b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_mul_s16_overflow(int16 a, int16 b, int16 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_mul_overflow(a, b, result);
+#else
+    int32       res = (int32) a * (int32) b;
+
+    if (res > PG_INT16_MAX || res < PG_INT16_MIN)
+        return true;
+    *result = (int16) res;
+    return false;
+#endif
+}
+
+/*
+ * If a + b overflows, return true, otherwise store the result of a + b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_add_s32_overflow(int32 a, int32 b, int32 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_add_overflow(a, b, result);
+#else
+    int64       res = (int64) a + (int64) b;
+
+    if (res > PG_INT32_MAX || res < PG_INT32_MIN)
+        return true;
+    *result = (int32) res;
+    return false;
+#endif
+}
+
+/*
+ * If a - b overflows, return true, otherwise store the result of a - b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_sub_s32_overflow(int32 a, int32 b, int32 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_sub_overflow(a, b, result);
+#else
+    int64       res = (int64) a - (int64) b;
+
+    if (res > PG_INT32_MAX || res < PG_INT32_MIN)
+        return true;
+    *result = (int32) res;
+    return false;
+#endif
+}
+
+/*
+ * If a * b overflows, return true, otherwise store the result of a * b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_mul_s32_overflow(int32 a, int32 b, int32 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_mul_overflow(a, b, result);
+#else
+    int64       res = (int64) a * (int64) b;
+
+    if (res > PG_INT32_MAX || res < PG_INT32_MIN)
+        return true;
+    *result = (int32) res;
+    return false;
+#endif
+}
+
+/*
+ * If a + b overflows, return true, otherwise store the result of a + b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_add_s64_overflow(int64 a, int64 b, int64 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_add_overflow(a, b, result);
+#elif defined(HAVE_INT128)
+    int128      res = (int128) a + (int128) b;
+
+    if (res > PG_INT64_MAX || res < PG_INT64_MIN)
+        return true;
+    *result = (int64) res;
+    return false;
+#else
+    if ((a > 0 && b > 0 && a > PG_INT64_MAX - b) ||
+        (a < 0 && b < 0 && a < PG_INT64_MIN - b))
+        return true;
+    *result = a + b;
+    return false;
+#endif
+}
+
+/*
+ * If a - b overflows, return true, otherwise store the result of a - b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_sub_s64_overflow(int64 a, int64 b, int64 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_sub_overflow(a, b, result);
+#elif defined(HAVE_INT128)
+    int128      res = (int128) a - (int128) b;
+
+    if (res > PG_INT64_MAX || res < PG_INT64_MIN)
+        return true;
+    *result = (int64) res;
+    return false;
+#else
+    if ((a < 0 && b > 0 && a < PG_INT64_MIN + b) ||
+        (a > 0 && b < 0 && a > PG_INT64_MAX + b))
+        return true;
+    *result = a - b;
+    return false;
+#endif
+}
+
+/*
+ * If a * b overflows, return true, otherwise store the result of a * b into
+ * *result. The content of *result is implementation defined in case of
+ * overflow.
+ */
+static inline bool
+pg_mul_s64_overflow(int64 a, int64 b, int64 *result)
+{
+#if defined(HAVE__BUILTIN_OP_OVERFLOW)
+    return __builtin_mul_overflow(a, b, result);
+#elif defined(HAVE_INT128)
+    int128      res = (int128) a * (int128) b;
+
+    if (res > PG_INT64_MAX || res < PG_INT64_MIN)
+        return true;
+    *result = (int64) res;
+    return false;
+#else
+    /*
+     * Overflow can only happen if at least one value is outside the range
+     * sqrt(min)..sqrt(max) so check that first as the division can be quite a
+     * bit more expensive than the multiplication.
+     *
+     * Multiplying by 0 or 1 can't overflow of course and checking for 0
+     * separately avoids any risk of dividing by 0. Be careful about dividing
+     * INT_MIN by -1 also, note reversing the a and b to ensure we're always
+     * dividing it by a positive value.
+     *
+     */
+    if ((a > PG_INT32_MAX || a < PG_INT32_MIN ||
+         b > PG_INT32_MAX || b < PG_INT32_MIN) &&
+        a != 0 && a != 1 && b != 0 && b != 1 &&
+        ((a > 0 && b > 0 && a > PG_INT64_MAX / b) ||
+         (a > 0 && b < 0 && b < PG_INT64_MIN / a) ||
+         (a < 0 && b > 0 && a < PG_INT64_MIN / b) ||
+         (a < 0 && b < 0 && a < PG_INT64_MAX / b)))
+    {
+        return true;
+    }
+    *result = a * b;
+    return false;
+#endif
+}
+
+#endif                          /* COMMON_INT_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 84d59f12b2..0aa6be4666 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -690,6 +690,9 @@
 /* Define to 1 if your compiler understands __builtin_constant_p. */
 #undef HAVE__BUILTIN_CONSTANT_P
 
+/* Define to 1 if your compiler understands __builtin_$op_overflow. */
+#undef HAVE__BUILTIN_OP_OVERFLOW
+
 /* Define to 1 if your compiler understands __builtin_types_compatible_p. */
 #undef HAVE__BUILTIN_TYPES_COMPATIBLE_P
 
diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32
index e192d98c5a..22d19ed794 100644
--- a/src/include/pg_config.h.win32
+++ b/src/include/pg_config.h.win32
@@ -515,6 +515,9 @@
 /* Define to 1 if your compiler understands __builtin_constant_p. */
 /* #undef HAVE__BUILTIN_CONSTANT_P */
 
+/* Define to 1 if your compiler understands __builtin_$op_overflow. */
+/* #undef HAVE__BUILTIN_OP_OVERFLOW */
+
 /* Define to 1 if your compiler understands __builtin_types_compatible_p. */
 /* #undef HAVE__BUILTIN_TYPES_COMPATIBLE_P */
 
From 101c7ee3ee847bac970c74b73b4f2858484383e5 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Tue, 12 Dec 2017 16:32:31 -0800
Subject: [PATCH 0691/1087] Use new overflow-aware integer operations.

A previous commit added inline functions that provide fast(er) and
correct overflow checks for signed integer math. Use them in a
significant portion of backend code. There's more to touch in both
backend and frontend code, but these were the easily identifiable
cases.

The old overflow checks are noticeable in integer-heavy workloads.

A secondary benefit is that getting rid of overflow checks that rely on
signed integer overflow wrapping around will allow us to get rid of
-fwrapv in the future, a flag that in turn slows down other code.
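[Editorial note: a short sketch of the -fwrapv point above, using
hypothetical int32 variables a and b; the real call sites are in the diffs
below. The old idiom computes the signed result first and inspects it
afterwards, which is undefined behavior in C when the operation overflows,
so it is only safe while -fwrapv forces wraparound semantics:

    int32   r = a + b;              /* undefined behavior on overflow without -fwrapv */

    if (SAMESIGN(a, b) && !SAMESIGN(r, a))  /* compiler may assume this is unreachable */
        ereport(ERROR,
                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                 errmsg("integer out of range")));

The new helpers report overflow before any out-of-range result is
committed, so the check stays well defined even without -fwrapv:

    int32   r;

    if (pg_add_s32_overflow(a, b, &r))
        ereport(ERROR,
                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                 errmsg("integer out of range")));
]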
Author: Andres Freund Discussion: https://postgr.es/m/20171024103954.ztmatprlglz3rwke@alap3.anarazel.de --- contrib/btree_gist/btree_cash.c | 10 +- contrib/btree_gist/btree_int2.c | 10 +- contrib/btree_gist/btree_int4.c | 10 +- contrib/btree_gist/btree_int8.c | 10 +- contrib/btree_gist/btree_utils_num.h | 2 - src/backend/utils/adt/array_userfuncs.c | 13 +- src/backend/utils/adt/cash.c | 39 ++- src/backend/utils/adt/float.c | 9 +- src/backend/utils/adt/int.c | 200 +++---------- src/backend/utils/adt/int8.c | 377 ++++++------------------ src/backend/utils/adt/numeric.c | 41 +-- src/backend/utils/adt/oracle_compat.c | 18 +- src/backend/utils/adt/varbit.c | 4 +- src/backend/utils/adt/varlena.c | 13 +- 14 files changed, 217 insertions(+), 539 deletions(-) diff --git a/contrib/btree_gist/btree_cash.c b/contrib/btree_gist/btree_cash.c index 81131af4dc..18f45f2750 100644 --- a/contrib/btree_gist/btree_cash.c +++ b/contrib/btree_gist/btree_cash.c @@ -5,6 +5,7 @@ #include "btree_gist.h" #include "btree_utils_num.h" +#include "common/int.h" #include "utils/cash.h" typedef struct @@ -99,15 +100,14 @@ cash_dist(PG_FUNCTION_ARGS) Cash r; Cash ra; - r = a - b; - ra = Abs(r); - - /* Overflow check. */ - if (ra < 0 || (!SAMESIGN(a, b) && !SAMESIGN(r, a))) + if (pg_sub_s64_overflow(a, b, &r) || + r == INT64_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("money out of range"))); + ra = Abs(r); + PG_RETURN_CASH(ra); } diff --git a/contrib/btree_gist/btree_int2.c b/contrib/btree_gist/btree_int2.c index f343b8615f..c2af4cd566 100644 --- a/contrib/btree_gist/btree_int2.c +++ b/contrib/btree_gist/btree_int2.c @@ -5,6 +5,7 @@ #include "btree_gist.h" #include "btree_utils_num.h" +#include "common/int.h" typedef struct int16key { @@ -98,15 +99,14 @@ int2_dist(PG_FUNCTION_ARGS) int16 r; int16 ra; - r = a - b; - ra = Abs(r); - - /* Overflow check. */ - if (ra < 0 || (!SAMESIGN(a, b) && !SAMESIGN(r, a))) + if (pg_sub_s16_overflow(a, b, &r) || + r == INT16_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); + ra = Abs(r); + PG_RETURN_INT16(ra); } diff --git a/contrib/btree_gist/btree_int4.c b/contrib/btree_gist/btree_int4.c index 35bb442437..f2b6dec660 100644 --- a/contrib/btree_gist/btree_int4.c +++ b/contrib/btree_gist/btree_int4.c @@ -5,6 +5,7 @@ #include "btree_gist.h" #include "btree_utils_num.h" +#include "common/int.h" typedef struct int32key { @@ -99,15 +100,14 @@ int4_dist(PG_FUNCTION_ARGS) int32 r; int32 ra; - r = a - b; - ra = Abs(r); - - /* Overflow check. */ - if (ra < 0 || (!SAMESIGN(a, b) && !SAMESIGN(r, a))) + if (pg_sub_s32_overflow(a, b, &r) || + r == INT32_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); + ra = Abs(r); + PG_RETURN_INT32(ra); } diff --git a/contrib/btree_gist/btree_int8.c b/contrib/btree_gist/btree_int8.c index 91f2d032d1..16db0028b7 100644 --- a/contrib/btree_gist/btree_int8.c +++ b/contrib/btree_gist/btree_int8.c @@ -5,6 +5,7 @@ #include "btree_gist.h" #include "btree_utils_num.h" +#include "common/int.h" typedef struct int64key { @@ -99,15 +100,14 @@ int8_dist(PG_FUNCTION_ARGS) int64 r; int64 ra; - r = a - b; - ra = Abs(r); - - /* Overflow check. 
*/ - if (ra < 0 || (!SAMESIGN(a, b) && !SAMESIGN(r, a))) + if (pg_sub_s64_overflow(a, b, &r) || + r == INT64_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); + ra = Abs(r); + PG_RETURN_INT64(ra); } diff --git a/contrib/btree_gist/btree_utils_num.h b/contrib/btree_gist/btree_utils_num.h index 17561fa9e4..d7945f856c 100644 --- a/contrib/btree_gist/btree_utils_num.h +++ b/contrib/btree_gist/btree_utils_num.h @@ -89,8 +89,6 @@ typedef struct #define GET_FLOAT_DISTANCE(t, arg1, arg2) Abs( ((float8) *((const t *) (arg1))) - ((float8) *((const t *) (arg2))) ) -#define SAMESIGN(a,b) (((a) < 0) == ((b) < 0)) - /* * check to see if a float4/8 val has underflowed or overflowed * borrowed from src/backend/utils/adt/float.c diff --git a/src/backend/utils/adt/array_userfuncs.c b/src/backend/utils/adt/array_userfuncs.c index 87d79f3f98..bb70cba171 100644 --- a/src/backend/utils/adt/array_userfuncs.c +++ b/src/backend/utils/adt/array_userfuncs.c @@ -13,6 +13,7 @@ #include "postgres.h" #include "catalog/pg_type.h" +#include "common/int.h" #include "utils/array.h" #include "utils/builtins.h" #include "utils/lsyscache.h" @@ -118,15 +119,11 @@ array_append(PG_FUNCTION_ARGS) if (eah->ndims == 1) { /* append newelem */ - int ub; - lb = eah->lbound; dimv = eah->dims; - ub = dimv[0] + lb[0] - 1; - indx = ub + 1; - /* overflow? */ - if (indx < ub) + /* index of added elem is at lb[0] + (dimv[0] - 1) + 1 */ + if (pg_add_s32_overflow(lb[0], dimv[0], &indx)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -176,11 +173,9 @@ array_prepend(PG_FUNCTION_ARGS) { /* prepend newelem */ lb = eah->lbound; - indx = lb[0] - 1; lb0 = lb[0]; - /* overflow? */ - if (indx > lb[0]) + if (pg_sub_s32_overflow(lb0, 1, &indx)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); diff --git a/src/backend/utils/adt/cash.c b/src/backend/utils/adt/cash.c index 7bbc634bd2..c787dd3419 100644 --- a/src/backend/utils/adt/cash.c +++ b/src/backend/utils/adt/cash.c @@ -22,6 +22,7 @@ #include #include +#include "common/int.h" #include "libpq/pqformat.h" #include "utils/builtins.h" #include "utils/cash.h" @@ -199,20 +200,21 @@ cash_in(PG_FUNCTION_ARGS) for (; *s; s++) { - /* we look for digits as long as we have found less */ - /* than the required number of decimal places */ + /* + * We look for digits as long as we have found less than the required + * number of decimal places. 
+ */ if (isdigit((unsigned char) *s) && (!seen_dot || dec < fpoint)) { - Cash newvalue = (value * 10) - (*s - '0'); + int8 digit = *s - '0'; - if (newvalue / 10 != value) + if (pg_mul_s64_overflow(value, 10, &value) || + pg_sub_s64_overflow(value, digit, &value)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("value \"%s\" is out of range for type %s", str, "money"))); - value = newvalue; - if (seen_dot) dec++; } @@ -230,26 +232,23 @@ cash_in(PG_FUNCTION_ARGS) /* round off if there's another digit */ if (isdigit((unsigned char) *s) && *s >= '5') - value--; /* remember we build the value in the negative */ - - if (value > 0) - ereport(ERROR, - (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), - errmsg("value \"%s\" is out of range for type %s", - str, "money"))); + { + /* remember we build the value in the negative */ + if (pg_sub_s64_overflow(value, 1, &value)) + ereport(ERROR, + (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), + errmsg("value \"%s\" is out of range for type %s", + str, "money"))); + } /* adjust for less than required decimal places */ for (; dec < fpoint; dec++) { - Cash newvalue = value * 10; - - if (newvalue / 10 != value) + if (pg_mul_s64_overflow(value, 10, &value)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("value \"%s\" is out of range for type %s", str, "money"))); - - value = newvalue; } /* @@ -285,12 +284,12 @@ cash_in(PG_FUNCTION_ARGS) */ if (sgn > 0) { - result = -value; - if (result < 0) + if (value == PG_INT64_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("value \"%s\" is out of range for type %s", str, "money"))); + result = -value; } else result = value; diff --git a/src/backend/utils/adt/float.c b/src/backend/utils/adt/float.c index 18b3b949ac..be65aab1c9 100644 --- a/src/backend/utils/adt/float.c +++ b/src/backend/utils/adt/float.c @@ -20,6 +20,7 @@ #include #include "catalog/pg_type.h" +#include "common/int.h" #include "libpq/pqformat.h" #include "utils/array.h" #include "utils/builtins.h" @@ -3548,9 +3549,7 @@ width_bucket_float8(PG_FUNCTION_ARGS) result = 0; else if (operand >= bound2) { - result = count + 1; - /* check for overflow */ - if (result < count) + if (pg_add_s32_overflow(count, 1, &result)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -3564,9 +3563,7 @@ width_bucket_float8(PG_FUNCTION_ARGS) result = 0; else if (operand <= bound2) { - result = count + 1; - /* check for overflow */ - if (result < count) + if (pg_add_s32_overflow(count, 1, &result)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c index 4cd8960b3f..36ba86ca73 100644 --- a/src/backend/utils/adt/int.c +++ b/src/backend/utils/adt/int.c @@ -32,14 +32,12 @@ #include #include "catalog/pg_type.h" +#include "common/int.h" #include "funcapi.h" #include "libpq/pqformat.h" #include "utils/array.h" #include "utils/builtins.h" - -#define SAMESIGN(a,b) (((a) < 0) == ((b) < 0)) - #define Int2VectorSize(n) (offsetof(int2vector, values) + (n) * sizeof(int16)) typedef struct @@ -328,7 +326,7 @@ i4toi2(PG_FUNCTION_ARGS) { int32 arg1 = PG_GETARG_INT32(0); - if (arg1 < SHRT_MIN || arg1 > SHRT_MAX) + if (unlikely(arg1 < SHRT_MIN) || unlikely(arg1 > SHRT_MAX)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); @@ -598,15 +596,12 @@ Datum int4um(PG_FUNCTION_ARGS) { int32 arg = PG_GETARG_INT32(0); - int32 result; - result = 
-arg; - /* overflow check (needed for INT_MIN) */ - if (arg != 0 && SAMESIGN(result, arg)) + if (unlikely(arg == PG_INT32_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); - PG_RETURN_INT32(result); + PG_RETURN_INT32(-arg); } Datum @@ -624,14 +619,7 @@ int4pl(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int32 result; - result = arg1 + arg2; - - /* - * Overflow check. If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s32_overflow(arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -645,14 +633,7 @@ int4mi(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int32 result; - result = arg1 - arg2; - - /* - * Overflow check. If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s32_overflow(arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -666,24 +647,7 @@ int4mul(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int32 result; - result = arg1 * arg2; - - /* - * Overflow check. We basically check to see if result / arg2 gives arg1 - * again. There are two cases where this fails: arg2 = 0 (which cannot - * overflow) and arg1 = INT_MIN, arg2 = -1 (where the division itself will - * overflow and thus incorrectly match). - * - * Since the division is likely much more expensive than the actual - * multiplication, we'd like to skip it where possible. The best bang for - * the buck seems to be to check whether both inputs are in the int16 - * range; if so, no overflow is possible. - */ - if (!(arg1 >= (int32) SHRT_MIN && arg1 <= (int32) SHRT_MAX && - arg2 >= (int32) SHRT_MIN && arg2 <= (int32) SHRT_MAX) && - arg2 != 0 && - ((arg2 == -1 && arg1 < 0 && result < 0) || - result / arg2 != arg1)) + if (unlikely(pg_mul_s32_overflow(arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -714,12 +678,11 @@ int4div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - result = -arg1; - /* overflow check (needed for INT_MIN) */ - if (arg1 != 0 && SAMESIGN(result, arg1)) + if (unlikely(arg1 == PG_INT32_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); + result = -arg1; PG_RETURN_INT32(result); } @@ -736,9 +699,7 @@ int4inc(PG_FUNCTION_ARGS) int32 arg = PG_GETARG_INT32(0); int32 result; - result = arg + 1; - /* Overflow check */ - if (arg > 0 && result < 0) + if (unlikely(pg_add_s32_overflow(arg, 1, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -750,15 +711,12 @@ Datum int2um(PG_FUNCTION_ARGS) { int16 arg = PG_GETARG_INT16(0); - int16 result; - result = -arg; - /* overflow check (needed for SHRT_MIN) */ - if (arg != 0 && SAMESIGN(result, arg)) + if (unlikely(arg == PG_INT16_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); - PG_RETURN_INT16(result); + PG_RETURN_INT16(-arg); } Datum @@ -776,14 +734,7 @@ int2pl(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int16 result; - result = arg1 + arg2; - - /* - * Overflow check. 
If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s16_overflow(arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); @@ -797,14 +748,7 @@ int2mi(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int16 result; - result = arg1 - arg2; - - /* - * Overflow check. If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s16_overflow(arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); @@ -816,20 +760,14 @@ int2mul(PG_FUNCTION_ARGS) { int16 arg1 = PG_GETARG_INT16(0); int16 arg2 = PG_GETARG_INT16(1); - int32 result32; - - /* - * The most practical way to detect overflow is to do the arithmetic in - * int32 (so that the result can't overflow) and then do a range check. - */ - result32 = (int32) arg1 * (int32) arg2; + int16 result; - if (result32 < SHRT_MIN || result32 > SHRT_MAX) + if (unlikely(pg_mul_s16_overflow(arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); - PG_RETURN_INT16((int16) result32); + PG_RETURN_INT16(result); } Datum @@ -856,12 +794,11 @@ int2div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - result = -arg1; - /* overflow check (needed for SHRT_MIN) */ - if (arg1 != 0 && SAMESIGN(result, arg1)) + if (unlikely(arg1 == INT16_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); + result = -arg1; PG_RETURN_INT16(result); } @@ -879,14 +816,7 @@ int24pl(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int32 result; - result = arg1 + arg2; - - /* - * Overflow check. If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s32_overflow((int32) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -900,14 +830,7 @@ int24mi(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int32 result; - result = arg1 - arg2; - - /* - * Overflow check. If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s32_overflow((int32) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -921,20 +844,7 @@ int24mul(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int32 result; - result = arg1 * arg2; - - /* - * Overflow check. We basically check to see if result / arg2 gives arg1 - * again. There is one case where this fails: arg2 = 0 (which cannot - * overflow). - * - * Since the division is likely much more expensive than the actual - * multiplication, we'd like to skip it where possible. The best bang for - * the buck seems to be to check whether both inputs are in the int16 - * range; if so, no overflow is possible. 
- */ - if (!(arg2 >= (int32) SHRT_MIN && arg2 <= (int32) SHRT_MAX) && - result / arg2 != arg1) + if (unlikely(pg_mul_s32_overflow((int32) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -947,7 +857,7 @@ int24div(PG_FUNCTION_ARGS) int16 arg1 = PG_GETARG_INT16(0); int32 arg2 = PG_GETARG_INT32(1); - if (arg2 == 0) + if (unlikely(arg2 == 0)) { ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), @@ -967,14 +877,7 @@ int42pl(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int32 result; - result = arg1 + arg2; - - /* - * Overflow check. If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s32_overflow(arg1, (int32) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -988,14 +891,7 @@ int42mi(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int32 result; - result = arg1 - arg2; - - /* - * Overflow check. If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s32_overflow(arg1, (int32) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -1009,20 +905,7 @@ int42mul(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int32 result; - result = arg1 * arg2; - - /* - * Overflow check. We basically check to see if result / arg1 gives arg2 - * again. There is one case where this fails: arg1 = 0 (which cannot - * overflow). - * - * Since the division is likely much more expensive than the actual - * multiplication, we'd like to skip it where possible. The best bang for - * the buck seems to be to check whether both inputs are in the int16 - * range; if so, no overflow is possible. - */ - if (!(arg1 >= (int32) SHRT_MIN && arg1 <= (int32) SHRT_MAX) && - result / arg1 != arg2) + if (unlikely(pg_mul_s32_overflow(arg1, (int32) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -1036,7 +919,7 @@ int42div(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int32 result; - if (arg2 == 0) + if (unlikely(arg2 == 0)) { ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), @@ -1053,12 +936,11 @@ int42div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - result = -arg1; - /* overflow check (needed for INT_MIN) */ - if (arg1 != 0 && SAMESIGN(result, arg1)) + if (unlikely(arg1 == PG_INT32_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); + result = -arg1; PG_RETURN_INT32(result); } @@ -1075,7 +957,7 @@ int4mod(PG_FUNCTION_ARGS) int32 arg1 = PG_GETARG_INT32(0); int32 arg2 = PG_GETARG_INT32(1); - if (arg2 == 0) + if (unlikely(arg2 == 0)) { ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), @@ -1103,7 +985,7 @@ int2mod(PG_FUNCTION_ARGS) int16 arg1 = PG_GETARG_INT16(0); int16 arg2 = PG_GETARG_INT16(1); - if (arg2 == 0) + if (unlikely(arg2 == 0)) { ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), @@ -1136,12 +1018,11 @@ int4abs(PG_FUNCTION_ARGS) int32 arg1 = PG_GETARG_INT32(0); int32 result; - result = (arg1 < 0) ? 
-arg1 : arg1; - /* overflow check (needed for INT_MIN) */ - if (result < 0) + if (unlikely(arg1 == INT32_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); + result = (arg1 < 0) ? -arg1 : arg1; PG_RETURN_INT32(result); } @@ -1151,12 +1032,11 @@ int2abs(PG_FUNCTION_ARGS) int16 arg1 = PG_GETARG_INT16(0); int16 result; - result = (arg1 < 0) ? -arg1 : arg1; - /* overflow check (needed for SHRT_MIN) */ - if (result < 0) + if (unlikely(arg1 == INT16_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); + result = (arg1 < 0) ? -arg1 : arg1; PG_RETURN_INT16(result); } @@ -1381,11 +1261,11 @@ generate_series_step_int4(PG_FUNCTION_ARGS) if ((fctx->step > 0 && fctx->current <= fctx->finish) || (fctx->step < 0 && fctx->current >= fctx->finish)) { - /* increment current in preparation for next iteration */ - fctx->current += fctx->step; - - /* if next-value computation overflows, this is the final result */ - if (SAMESIGN(result, fctx->step) && !SAMESIGN(result, fctx->current)) + /* + * Increment current in preparation for next iteration. If next-value + * computation overflows, this is the final result. + */ + if (pg_add_s32_overflow(fctx->current, fctx->step, &fctx->current)) fctx->step = 0; /* do when there is more left to send */ diff --git a/src/backend/utils/adt/int8.c b/src/backend/utils/adt/int8.c index afa434cfee..bc8dad5c5d 100644 --- a/src/backend/utils/adt/int8.c +++ b/src/backend/utils/adt/int8.c @@ -17,6 +17,7 @@ #include #include +#include "common/int.h" #include "funcapi.h" #include "libpq/pqformat.h" #include "utils/int8.h" @@ -25,8 +26,6 @@ #define MAXINT8LEN 25 -#define SAMESIGN(a,b) (((a) < 0) == ((b) < 0)) - typedef struct { int64 current; @@ -56,11 +55,14 @@ scanint8(const char *str, bool errorOK, int64 *result) { const char *ptr = str; int64 tmp = 0; - int sign = 1; + bool neg = false; /* * Do our own scan, rather than relying on sscanf which might be broken * for long long. + * + * As INT64_MIN can't be stored as a positive 64 bit integer, accumulate + * value as a negative number. */ /* skip leading spaces */ @@ -71,72 +73,60 @@ scanint8(const char *str, bool errorOK, int64 *result) if (*ptr == '-') { ptr++; - - /* - * Do an explicit check for INT64_MIN. Ugly though this is, it's - * cleaner than trying to get the loop below to handle it portably. - */ - if (strncmp(ptr, "9223372036854775808", 19) == 0) - { - tmp = PG_INT64_MIN; - ptr += 19; - goto gotdigits; - } - sign = -1; + neg = true; } else if (*ptr == '+') ptr++; /* require at least one digit */ - if (!isdigit((unsigned char) *ptr)) + if (unlikely(!isdigit((unsigned char) *ptr))) { - if (errorOK) - return false; - else - ereport(ERROR, - (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), - errmsg("invalid input syntax for integer: \"%s\"", - str))); + goto invalid_syntax; } /* process digits */ while (*ptr && isdigit((unsigned char) *ptr)) { - int64 newtmp = tmp * 10 + (*ptr++ - '0'); - - if ((newtmp / 10) != tmp) /* overflow? 
*/ - { - if (errorOK) - return false; - else - ereport(ERROR, - (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), - errmsg("value \"%s\" is out of range for type %s", - str, "bigint"))); - } - tmp = newtmp; - } + int8 digit = (*ptr++ - '0'); -gotdigits: + if (unlikely(pg_mul_s64_overflow(tmp, 10, &tmp)) || + unlikely(pg_sub_s64_overflow(tmp, digit, &tmp))) + goto out_of_range; + } /* allow trailing whitespace, but not other trailing chars */ while (*ptr != '\0' && isspace((unsigned char) *ptr)) ptr++; - if (*ptr != '\0') + if (unlikely(*ptr != '\0')) + goto invalid_syntax; + + if (!neg) { - if (errorOK) - return false; - else - ereport(ERROR, - (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), - errmsg("invalid input syntax for integer: \"%s\"", - str))); + if (unlikely(tmp == INT64_MIN)) + goto out_of_range; + tmp = -tmp; } - - *result = (sign < 0) ? -tmp : tmp; + *result = tmp; return true; + +out_of_range: + if (errorOK) + return false; + else + ereport(ERROR, + (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), + errmsg("value \"%s\" is out of range for type %s", + str, "bigint"))); +invalid_syntax: + if (errorOK) + return false; + else + ereport(ERROR, + (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), + errmsg("invalid input syntax for integer: \"%s\"", + str))); } /* int8in() @@ -492,12 +482,11 @@ int8um(PG_FUNCTION_ARGS) int64 arg = PG_GETARG_INT64(0); int64 result; - result = -arg; - /* overflow check (needed for INT64_MIN) */ - if (arg != 0 && SAMESIGN(result, arg)) + if (unlikely(arg == PG_INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); + result = -arg; PG_RETURN_INT64(result); } @@ -516,14 +505,7 @@ int8pl(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 + arg2; - - /* - * Overflow check. If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s64_overflow(arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -537,14 +519,7 @@ int8mi(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 - arg2; - - /* - * Overflow check. If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s64_overflow(arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -558,28 +533,10 @@ int8mul(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 * arg2; - - /* - * Overflow check. We basically check to see if result / arg2 gives arg1 - * again. There are two cases where this fails: arg2 = 0 (which cannot - * overflow) and arg1 = INT64_MIN, arg2 = -1 (where the division itself - * will overflow and thus incorrectly match). - * - * Since the division is likely much more expensive than the actual - * multiplication, we'd like to skip it where possible. The best bang for - * the buck seems to be to check whether both inputs are in the int32 - * range; if so, no overflow is possible. 
- */ - if (arg1 != (int64) ((int32) arg1) || arg2 != (int64) ((int32) arg2)) - { - if (arg2 != 0 && - ((arg2 == -1 && arg1 < 0 && result < 0) || - result / arg2 != arg1)) - ereport(ERROR, - (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), - errmsg("bigint out of range"))); - } + if (unlikely(pg_mul_s64_overflow(arg1, arg2, &result))) + ereport(ERROR, + (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), + errmsg("bigint out of range"))); PG_RETURN_INT64(result); } @@ -607,12 +564,11 @@ int8div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - result = -arg1; - /* overflow check (needed for INT64_MIN) */ - if (arg1 != 0 && SAMESIGN(result, arg1)) + if (unlikely(arg1 == INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); + result = -arg1; PG_RETURN_INT64(result); } @@ -632,12 +588,11 @@ int8abs(PG_FUNCTION_ARGS) int64 arg1 = PG_GETARG_INT64(0); int64 result; - result = (arg1 < 0) ? -arg1 : arg1; - /* overflow check (needed for INT64_MIN) */ - if (result < 0) + if (unlikely(arg1 == INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); + result = (arg1 < 0) ? -arg1 : arg1; PG_RETURN_INT64(result); } @@ -650,7 +605,7 @@ int8mod(PG_FUNCTION_ARGS) int64 arg1 = PG_GETARG_INT64(0); int64 arg2 = PG_GETARG_INT64(1); - if (arg2 == 0) + if (unlikely(arg2 == 0)) { ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), @@ -687,16 +642,12 @@ int8inc(PG_FUNCTION_ARGS) if (AggCheckCallContext(fcinfo, NULL)) { int64 *arg = (int64 *) PG_GETARG_POINTER(0); - int64 result; - result = *arg + 1; - /* Overflow check */ - if (result < 0 && *arg > 0) + if (unlikely(pg_add_s64_overflow(*arg, 1, arg))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); - *arg = result; PG_RETURN_POINTER(arg); } else @@ -706,9 +657,7 @@ int8inc(PG_FUNCTION_ARGS) int64 arg = PG_GETARG_INT64(0); int64 result; - result = arg + 1; - /* Overflow check */ - if (result < 0 && arg > 0) + if (unlikely(pg_add_s64_overflow(arg, 1, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -731,16 +680,11 @@ int8dec(PG_FUNCTION_ARGS) if (AggCheckCallContext(fcinfo, NULL)) { int64 *arg = (int64 *) PG_GETARG_POINTER(0); - int64 result; - result = *arg - 1; - /* Overflow check */ - if (result > 0 && *arg < 0) + if (unlikely(pg_sub_s64_overflow(*arg, 1, arg))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); - - *arg = result; PG_RETURN_POINTER(arg); } else @@ -750,9 +694,7 @@ int8dec(PG_FUNCTION_ARGS) int64 arg = PG_GETARG_INT64(0); int64 result; - result = arg - 1; - /* Overflow check */ - if (result > 0 && arg < 0) + if (unlikely(pg_sub_s64_overflow(arg, 1, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -821,14 +763,7 @@ int84pl(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int64 result; - result = arg1 + arg2; - - /* - * Overflow check. If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s64_overflow(arg1, (int64) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -842,14 +777,7 @@ int84mi(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int64 result; - result = arg1 - arg2; - - /* - * Overflow check. 
If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s64_overflow(arg1, (int64) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -863,20 +791,7 @@ int84mul(PG_FUNCTION_ARGS) int32 arg2 = PG_GETARG_INT32(1); int64 result; - result = arg1 * arg2; - - /* - * Overflow check. We basically check to see if result / arg1 gives arg2 - * again. There is one case where this fails: arg1 = 0 (which cannot - * overflow). - * - * Since the division is likely much more expensive than the actual - * multiplication, we'd like to skip it where possible. The best bang for - * the buck seems to be to check whether both inputs are in the int32 - * range; if so, no overflow is possible. - */ - if (arg1 != (int64) ((int32) arg1) && - result / arg1 != arg2) + if (unlikely(pg_mul_s64_overflow(arg1, (int64) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -907,12 +822,11 @@ int84div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - result = -arg1; - /* overflow check (needed for INT64_MIN) */ - if (arg1 != 0 && SAMESIGN(result, arg1)) + if (unlikely(arg1 == INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); + result = -arg1; PG_RETURN_INT64(result); } @@ -930,14 +844,7 @@ int48pl(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 + arg2; - - /* - * Overflow check. If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s64_overflow((int64) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -951,14 +858,7 @@ int48mi(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 - arg2; - - /* - * Overflow check. If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s64_overflow((int64) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -972,20 +872,7 @@ int48mul(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 * arg2; - - /* - * Overflow check. We basically check to see if result / arg2 gives arg1 - * again. There is one case where this fails: arg2 = 0 (which cannot - * overflow). - * - * Since the division is likely much more expensive than the actual - * multiplication, we'd like to skip it where possible. The best bang for - * the buck seems to be to check whether both inputs are in the int32 - * range; if so, no overflow is possible. 
- */ - if (arg2 != (int64) ((int32) arg2) && - result / arg2 != arg1) + if (unlikely(pg_mul_s64_overflow((int64) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -998,7 +885,7 @@ int48div(PG_FUNCTION_ARGS) int32 arg1 = PG_GETARG_INT32(0); int64 arg2 = PG_GETARG_INT64(1); - if (arg2 == 0) + if (unlikely(arg2 == 0)) { ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), @@ -1018,14 +905,7 @@ int82pl(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int64 result; - result = arg1 + arg2; - - /* - * Overflow check. If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s64_overflow(arg1, (int64) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -1039,14 +919,7 @@ int82mi(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int64 result; - result = arg1 - arg2; - - /* - * Overflow check. If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s64_overflow(arg1, (int64) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -1060,20 +933,7 @@ int82mul(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int64 result; - result = arg1 * arg2; - - /* - * Overflow check. We basically check to see if result / arg1 gives arg2 - * again. There is one case where this fails: arg1 = 0 (which cannot - * overflow). - * - * Since the division is likely much more expensive than the actual - * multiplication, we'd like to skip it where possible. The best bang for - * the buck seems to be to check whether both inputs are in the int32 - * range; if so, no overflow is possible. - */ - if (arg1 != (int64) ((int32) arg1) && - result / arg1 != arg2) + if (unlikely(pg_mul_s64_overflow(arg1, (int64) arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -1087,7 +947,7 @@ int82div(PG_FUNCTION_ARGS) int16 arg2 = PG_GETARG_INT16(1); int64 result; - if (arg2 == 0) + if (unlikely(arg2 == 0)) { ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), @@ -1104,12 +964,11 @@ int82div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - result = -arg1; - /* overflow check (needed for INT64_MIN) */ - if (arg1 != 0 && SAMESIGN(result, arg1)) + if (unlikely(arg1 == INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); + result = -arg1; PG_RETURN_INT64(result); } @@ -1127,14 +986,7 @@ int28pl(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 + arg2; - - /* - * Overflow check. If the inputs are of different signs then their sum - * cannot overflow. If the inputs are of the same sign, their sum had - * better be that sign too. - */ - if (SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_add_s64_overflow((int64) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -1148,14 +1000,7 @@ int28mi(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 - arg2; - - /* - * Overflow check. 
If the inputs are of the same sign then their - * difference cannot overflow. If they are of different signs then the - * result should be of the same sign as the first input. - */ - if (!SAMESIGN(arg1, arg2) && !SAMESIGN(result, arg1)) + if (unlikely(pg_sub_s64_overflow((int64) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -1169,20 +1014,7 @@ int28mul(PG_FUNCTION_ARGS) int64 arg2 = PG_GETARG_INT64(1); int64 result; - result = arg1 * arg2; - - /* - * Overflow check. We basically check to see if result / arg2 gives arg1 - * again. There is one case where this fails: arg2 = 0 (which cannot - * overflow). - * - * Since the division is likely much more expensive than the actual - * multiplication, we'd like to skip it where possible. The best bang for - * the buck seems to be to check whether both inputs are in the int32 - * range; if so, no overflow is possible. - */ - if (arg2 != (int64) ((int32) arg2) && - result / arg2 != arg1) + if (unlikely(pg_mul_s64_overflow((int64) arg1, arg2, &result))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -1195,7 +1027,7 @@ int28div(PG_FUNCTION_ARGS) int16 arg1 = PG_GETARG_INT16(0); int64 arg2 = PG_GETARG_INT64(1); - if (arg2 == 0) + if (unlikely(arg2 == 0)) { ereport(ERROR, (errcode(ERRCODE_DIVISION_BY_ZERO), @@ -1287,17 +1119,13 @@ Datum int84(PG_FUNCTION_ARGS) { int64 arg = PG_GETARG_INT64(0); - int32 result; - - result = (int32) arg; - /* Test for overflow by reverse-conversion. */ - if ((int64) result != arg) + if (unlikely(arg < PG_INT32_MIN) || unlikely(arg > PG_INT32_MAX)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); - PG_RETURN_INT32(result); + PG_RETURN_INT32((int32) arg); } Datum @@ -1312,17 +1140,13 @@ Datum int82(PG_FUNCTION_ARGS) { int64 arg = PG_GETARG_INT64(0); - int16 result; - result = (int16) arg; - - /* Test for overflow by reverse-conversion. */ - if ((int64) result != arg) + if (unlikely(arg < PG_INT16_MIN) || unlikely(arg > PG_INT16_MAX)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); - PG_RETURN_INT16(result); + PG_RETURN_INT16((int16) arg); } Datum @@ -1348,18 +1172,15 @@ dtoi8(PG_FUNCTION_ARGS) /* Round arg to nearest integer (but it's still in float form) */ arg = rint(arg); - /* - * Does it fit in an int64? Avoid assuming that we have handy constants - * defined for the range boundaries, instead test for overflow by - * reverse-conversion. - */ - result = (int64) arg; - - if ((float8) result != arg) + if (unlikely(arg < (double) PG_INT64_MIN) || + unlikely(arg > (double) PG_INT64_MAX) || + unlikely(isnan(arg))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); + result = (int64) arg; + PG_RETURN_INT64(result); } @@ -1381,42 +1202,32 @@ Datum ftoi8(PG_FUNCTION_ARGS) { float4 arg = PG_GETARG_FLOAT4(0); - int64 result; float8 darg; /* Round arg to nearest integer (but it's still in float form) */ darg = rint(arg); - /* - * Does it fit in an int64? Avoid assuming that we have handy constants - * defined for the range boundaries, instead test for overflow by - * reverse-conversion. 
- */ - result = (int64) darg; - - if ((float8) result != darg) + if (unlikely(arg < (float4) PG_INT64_MIN) || + unlikely(arg > (float4) PG_INT64_MAX) || + unlikely(isnan(arg))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); - PG_RETURN_INT64(result); + PG_RETURN_INT64((int64) darg); } Datum i8tooid(PG_FUNCTION_ARGS) { int64 arg = PG_GETARG_INT64(0); - Oid result; - - result = (Oid) arg; - /* Test for overflow by reverse-conversion. */ - if ((int64) result != arg) + if (unlikely(arg < 0) || unlikely(arg > PG_UINT32_MAX)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("OID out of range"))); - PG_RETURN_OID(result); + PG_RETURN_OID((Oid) arg); } Datum @@ -1494,11 +1305,11 @@ generate_series_step_int8(PG_FUNCTION_ARGS) if ((fctx->step > 0 && fctx->current <= fctx->finish) || (fctx->step < 0 && fctx->current >= fctx->finish)) { - /* increment current in preparation for next iteration */ - fctx->current += fctx->step; - - /* if next-value computation overflows, this is the final result */ - if (SAMESIGN(result, fctx->step) && !SAMESIGN(result, fctx->current)) + /* + * Increment current in preparation for next iteration. If next-value + * computation overflows, this is the final result. + */ + if (pg_add_s64_overflow(fctx->current, fctx->step, &fctx->current)) fctx->step = 0; /* do when there is more left to send */ diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index 82e6f4483b..e9a6ca3535 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -28,6 +28,7 @@ #include "access/hash.h" #include "catalog/pg_type.h" +#include "common/int.h" #include "funcapi.h" #include "lib/hyperloglog.h" #include "libpq/pqformat.h" @@ -6169,8 +6170,7 @@ numericvar_to_int64(const NumericVar *var, int64 *result) int ndigits; int weight; int i; - int64 val, - oldval; + int64 val; bool neg; NumericVar rounded; @@ -6196,27 +6196,25 @@ numericvar_to_int64(const NumericVar *var, int64 *result) weight = rounded.weight; Assert(weight >= 0 && ndigits <= weight + 1); - /* Construct the result */ + /* + * Construct the result. To avoid issues with converting a value + * corresponding to INT64_MIN (which can't be represented as a positive 64 + * bit two's complement integer), accumulate value as a negative number. + */ digits = rounded.digits; neg = (rounded.sign == NUMERIC_NEG); - val = digits[0]; + val = -digits[0]; for (i = 1; i <= weight; i++) { - oldval = val; - val *= NBASE; - if (i < ndigits) - val += digits[i]; + if (unlikely(pg_mul_s64_overflow(val, NBASE, &val))) + { + free_var(&rounded); + return false; + } - /* - * The overflow check is a bit tricky because we want to accept - * INT64_MIN, which will overflow the positive accumulator. We can - * detect this case easily though because INT64_MIN is the only - * nonzero value for which -val == val (on a two's complement machine, - * anyway). - */ - if ((val / NBASE) != oldval) /* possible overflow? */ + if (i < ndigits) { - if (!neg || (-val) != val || val == 0 || oldval < 0) + if (unlikely(pg_sub_s64_overflow(val, digits[i], &val))) { free_var(&rounded); return false; @@ -6226,7 +6224,14 @@ numericvar_to_int64(const NumericVar *var, int64 *result) free_var(&rounded); - *result = neg ? 
-val : val; + if (!neg) + { + if (unlikely(val == INT64_MIN)) + return false; + val = -val; + } + *result = val; + return true; } diff --git a/src/backend/utils/adt/oracle_compat.c b/src/backend/utils/adt/oracle_compat.c index b82016500b..a5aa6a95aa 100644 --- a/src/backend/utils/adt/oracle_compat.c +++ b/src/backend/utils/adt/oracle_compat.c @@ -15,6 +15,7 @@ */ #include "postgres.h" +#include "common/int.h" #include "utils/builtins.h" #include "utils/formatting.h" #include "mb/pg_wchar.h" @@ -1045,19 +1046,12 @@ repeat(PG_FUNCTION_ARGS) count = 0; slen = VARSIZE_ANY_EXHDR(string); - tlen = VARHDRSZ + (count * slen); - /* Check for integer overflow */ - if (slen != 0 && count != 0) - { - int check = count * slen; - int check2 = check + VARHDRSZ; - - if ((check / slen) != count || check2 <= check) - ereport(ERROR, - (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), - errmsg("requested length too large"))); - } + if (unlikely(pg_mul_s32_overflow(count, slen, &tlen)) || + unlikely(pg_add_s32_overflow(tlen, VARHDRSZ, &tlen))) + ereport(ERROR, + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), + errmsg("requested length too large"))); result = (text *) palloc(tlen); diff --git a/src/backend/utils/adt/varbit.c b/src/backend/utils/adt/varbit.c index 478fab9bfc..6caa7e9b5e 100644 --- a/src/backend/utils/adt/varbit.c +++ b/src/backend/utils/adt/varbit.c @@ -17,6 +17,7 @@ #include "postgres.h" #include "access/htup_details.h" +#include "common/int.h" #include "libpq/pqformat.h" #include "nodes/nodeFuncs.h" #include "utils/array.h" @@ -1166,8 +1167,7 @@ bit_overlay(VarBit *t1, VarBit *t2, int sp, int sl) ereport(ERROR, (errcode(ERRCODE_SUBSTRING_ERROR), errmsg("negative substring length not allowed"))); - sp_pl_sl = sp + sl; - if (sp_pl_sl <= sl) + if (pg_add_s32_overflow(sp, sl, &sp_pl_sl)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c index 39b68dbc65..a84e845ad2 100644 --- a/src/backend/utils/adt/varlena.c +++ b/src/backend/utils/adt/varlena.c @@ -21,6 +21,7 @@ #include "access/tuptoaster.h" #include "catalog/pg_collation.h" #include "catalog/pg_type.h" +#include "common/int.h" #include "common/md5.h" #include "lib/hyperloglog.h" #include "libpq/pqformat.h" @@ -1047,8 +1048,7 @@ text_overlay(text *t1, text *t2, int sp, int sl) ereport(ERROR, (errcode(ERRCODE_SUBSTRING_ERROR), errmsg("negative substring length not allowed"))); - sp_pl_sl = sp + sl; - if (sp_pl_sl <= sl) + if (pg_add_s32_overflow(sp, sl, &sp_pl_sl)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -2950,8 +2950,7 @@ bytea_overlay(bytea *t1, bytea *t2, int sp, int sl) ereport(ERROR, (errcode(ERRCODE_SUBSTRING_ERROR), errmsg("negative substring length not allowed"))); - sp_pl_sl = sp + sl; - if (sp_pl_sl <= sl) + if (pg_add_s32_overflow(sp, sl, &sp_pl_sl)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -5279,13 +5278,13 @@ text_format_parse_digits(const char **ptr, const char *end_ptr, int *value) while (*cp >= '0' && *cp <= '9') { - int newval = val * 10 + (*cp - '0'); + int8 digit = (*cp - '0'); - if (newval / 10 != val) /* overflow? 
*/ + if (unlikely(pg_mul_s32_overflow(val, 10, &val)) || + unlikely(pg_add_s32_overflow(val, digit, &val))) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("number is out of range"))); - val = newval; ADVANCE_PARSE_POINTER(cp, end_ptr); found = true; } From 85abb5b297c5b318738f09345ae226f780b88e92 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 12 Dec 2017 17:19:44 -0800 Subject: [PATCH 0692/1087] Make PGAC_C_BUILTIN_OP_OVERFLOW link instead of just compiling. Otherwise the detection can spuriously report the symbol as available, because the compiler may just emit a reference to a non-existent symbol. --- config/c-compiler.m4 | 5 +++-- configure | 7 +++++-- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/config/c-compiler.m4 b/config/c-compiler.m4 index 28c372cd32..ed26644a48 100644 --- a/config/c-compiler.m4 +++ b/config/c-compiler.m4 @@ -305,10 +305,11 @@ fi])# PGAC_C_BUILTIN_CONSTANT_P # and define HAVE__BUILTIN_OP_OVERFLOW if so. # # Check for the most complicated case, 64 bit multiplication, as a -# proxy for all of the operations. +# proxy for all of the operations. Have to link to be sure to +# recognize a missing __builtin_mul_overflow. AC_DEFUN([PGAC_C_BUILTIN_OP_OVERFLOW], [AC_CACHE_CHECK(for __builtin_mul_overflow, pgac_cv__builtin_op_overflow, -[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([], +[AC_LINK_IFELSE([AC_LANG_PROGRAM([], [PG_INT64_TYPE result; __builtin_mul_overflow((PG_INT64_TYPE) 1, (PG_INT64_TYPE) 2, &result);] )], diff --git a/configure b/configure index 4a4f13314e..ca76ef0ab2 100755 --- a/configure +++ b/configure @@ -14472,6 +14472,8 @@ esac fi +# has to be down here, rather than with the other builtins, because +# the test uses PG_INT64_TYPE. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_mul_overflow" >&5 $as_echo_n "checking for __builtin_mul_overflow... " >&6; } if ${pgac_cv__builtin_op_overflow+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { PG_INT64_TYPE result; __builtin_mul_overflow((PG_INT64_TYPE) 1, (PG_INT64_TYPE) 2, &result); ; return 0; } _ACEOF -if ac_fn_c_try_compile "$LINENO"; then : pgac_cv__builtin_op_overflow=yes else pgac_cv__builtin_op_overflow=no fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +rm -f core conftest.err conftest.$ac_objext \ + conftest$ac_exeext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_op_overflow" >&5 $as_echo "$pgac_cv__builtin_op_overflow" >&6; } From 4c6744ed705df6f388371d044b87d1b4a60e9f80 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 5 Dec 2017 14:14:55 -0500 Subject: [PATCH 0693/1087] PL/Python: Fix potential NULL pointer dereference After d0aa965c0a0ac2ff7906ae1b1dad50a7952efa56, one error path in PLy_spi_execute_fetch_result() could result in the variable "result" being dereferenced after being set to NULL. Rearrange the code a bit to fix that. Also add another SPI_freetuptable() call so that the tuple table is cleared in all error paths.
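The shape of this fix generalizes: run the cleanup that every path needs first, and only afterwards test the allocation that may have failed, so that no path can dereference a pointer that has just been nulled. A minimal self-contained sketch of that discipline follows; all names in it are invented for illustration, and it is deliberately not the PL/Python code itself.

#include <stdio.h>
#include <stdlib.h>

struct result_t
{
	int		   *rows;			/* stands in for the PyList in result->rows */
};

static struct result_t *
make_result(size_t nrows)
{
	struct result_t *result = malloc(sizeof(struct result_t));

	if (!result)
		return NULL;			/* in the patch, this early exit must also
								 * free the SPI tuple table */

	result->rows = calloc(nrows, sizeof(int)); /* may fail, like PyList_New() */
	if (result->rows)
	{
		/* ... populate result->rows ... */
	}

	/* cleanup shared by success and failure runs here, unconditionally */

	if (!result->rows)			/* in case the allocation failed above */
	{
		free(result);
		result = NULL;
	}
	return result;
}

int
main(void)
{
	struct result_t *r = make_result(4);

	printf(r ? "ok\n" : "out of memory\n");
	if (r)
		free(r->rows);
	free(r);
	return 0;
}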
discovered by John Naylor via scan-build; ideas and review by Tom Lane --- src/pl/plpython/plpy_spi.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c index ade27f3924..0c623a9458 100644 --- a/src/pl/plpython/plpy_spi.c +++ b/src/pl/plpython/plpy_spi.c @@ -361,7 +361,10 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) result = (PLyResultObject *) PLy_result_new(); if (!result) + { + SPI_freetuptable(tuptable); return NULL; + } Py_DECREF(result->status); result->status = PyInt_FromLong(status); @@ -411,12 +414,7 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) Py_DECREF(result->rows); result->rows = PyList_New(rows); - if (!result->rows) - { - Py_DECREF(result); - result = NULL; - } - else + if (result->rows) { PLy_input_setup_tuple(&ininfo, tuptable->tupdesc, exec_ctx->curr_proc); @@ -455,6 +453,13 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) MemoryContextDelete(cxt); SPI_freetuptable(tuptable); + + /* in case PyList_New() failed above */ + if (!result->rows) + { + Py_DECREF(result); + result = NULL; + } } return (PyObject *) result; From f512a6e1323eefa843a063e466babb96d7bfb4ce Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 12 Dec 2017 18:15:22 -0800 Subject: [PATCH 0694/1087] Consistently use PG_INT(16|32|64)_(MIN|MAX). Per buildfarm animal woodlouse. --- contrib/btree_gist/btree_cash.c | 2 +- contrib/btree_gist/btree_int2.c | 2 +- contrib/btree_gist/btree_int4.c | 2 +- contrib/btree_gist/btree_int8.c | 2 +- src/backend/utils/adt/int.c | 6 +++--- src/backend/utils/adt/int8.c | 10 +++++----- src/backend/utils/adt/numeric.c | 2 +- 7 files changed, 13 insertions(+), 13 deletions(-) diff --git a/contrib/btree_gist/btree_cash.c b/contrib/btree_gist/btree_cash.c index 18f45f2750..894d0a2665 100644 --- a/contrib/btree_gist/btree_cash.c +++ b/contrib/btree_gist/btree_cash.c @@ -101,7 +101,7 @@ cash_dist(PG_FUNCTION_ARGS) Cash ra; if (pg_sub_s64_overflow(a, b, &r) || - r == INT64_MIN) + r == PG_INT64_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("money out of range"))); diff --git a/contrib/btree_gist/btree_int2.c b/contrib/btree_gist/btree_int2.c index c2af4cd566..7674e2d292 100644 --- a/contrib/btree_gist/btree_int2.c +++ b/contrib/btree_gist/btree_int2.c @@ -100,7 +100,7 @@ int2_dist(PG_FUNCTION_ARGS) int16 ra; if (pg_sub_s16_overflow(a, b, &r) || - r == INT16_MIN) + r == PG_INT16_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); diff --git a/contrib/btree_gist/btree_int4.c b/contrib/btree_gist/btree_int4.c index f2b6dec660..80005ab65d 100644 --- a/contrib/btree_gist/btree_int4.c +++ b/contrib/btree_gist/btree_int4.c @@ -101,7 +101,7 @@ int4_dist(PG_FUNCTION_ARGS) int32 ra; if (pg_sub_s32_overflow(a, b, &r) || - r == INT32_MIN) + r == PG_INT32_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); diff --git a/contrib/btree_gist/btree_int8.c b/contrib/btree_gist/btree_int8.c index 16db0028b7..b0fd3e1277 100644 --- a/contrib/btree_gist/btree_int8.c +++ b/contrib/btree_gist/btree_int8.c @@ -101,7 +101,7 @@ int8_dist(PG_FUNCTION_ARGS) int64 ra; if (pg_sub_s64_overflow(a, b, &r) || - r == INT64_MIN) + r == PG_INT64_MIN) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); diff --git a/src/backend/utils/adt/int.c
b/src/backend/utils/adt/int.c index 36ba86ca73..d6eb16a907 100644 --- a/src/backend/utils/adt/int.c +++ b/src/backend/utils/adt/int.c @@ -794,7 +794,7 @@ int2div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - if (unlikely(arg1 == INT16_MIN)) + if (unlikely(arg1 == PG_INT16_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); @@ -1018,7 +1018,7 @@ int4abs(PG_FUNCTION_ARGS) int32 arg1 = PG_GETARG_INT32(0); int32 result; - if (unlikely(arg1 == INT32_MIN)) + if (unlikely(arg1 == PG_INT32_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("integer out of range"))); @@ -1032,7 +1032,7 @@ int2abs(PG_FUNCTION_ARGS) int16 arg1 = PG_GETARG_INT16(0); int16 result; - if (unlikely(arg1 == INT16_MIN)) + if (unlikely(arg1 == PG_INT16_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("smallint out of range"))); diff --git a/src/backend/utils/adt/int8.c b/src/backend/utils/adt/int8.c index bc8dad5c5d..42bb6ebe2f 100644 --- a/src/backend/utils/adt/int8.c +++ b/src/backend/utils/adt/int8.c @@ -103,7 +103,7 @@ scanint8(const char *str, bool errorOK, int64 *result) if (!neg) { - if (unlikely(tmp == INT64_MIN)) + if (unlikely(tmp == PG_INT64_MIN)) goto out_of_range; tmp = -tmp; } @@ -564,7 +564,7 @@ int8div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - if (unlikely(arg1 == INT64_MIN)) + if (unlikely(arg1 == PG_INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -588,7 +588,7 @@ int8abs(PG_FUNCTION_ARGS) int64 arg1 = PG_GETARG_INT64(0); int64 result; - if (unlikely(arg1 == INT64_MIN)) + if (unlikely(arg1 == PG_INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -822,7 +822,7 @@ int84div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - if (unlikely(arg1 == INT64_MIN)) + if (unlikely(arg1 == PG_INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); @@ -964,7 +964,7 @@ int82div(PG_FUNCTION_ARGS) */ if (arg2 == -1) { - if (unlikely(arg1 == INT64_MIN)) + if (unlikely(arg1 == PG_INT64_MIN)) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("bigint out of range"))); diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index e9a6ca3535..9830988701 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -6226,7 +6226,7 @@ numericvar_to_int64(const NumericVar *var, int64 *result) if (!neg) { - if (unlikely(val == INT64_MIN)) + if (unlikely(val == PG_INT64_MIN)) return false; val = -val; } From 8e211f5391465bddda79e17e63c79dbc8c70e6d1 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 12 Dec 2017 23:32:43 -0800 Subject: [PATCH 0695/1087] Add float.h include to int8.c, for isnan(). port.h redirects isnan() to _isnan() on windows, which in turn is provided by float.h rather than math.h. Therefore include float.h as well. Per buildfarm.
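The underlying portability rule deserves a concrete illustration. The following is a hedged sketch only, not PostgreSQL's actual port header (whose exact preprocessor conditions may differ); it shows why a file that calls isnan() through such a redirection must pull in float.h and not just math.h.

#include <math.h>				/* declares isnan() on C99 systems */

#if defined(_MSC_VER) && !defined(isnan)
#include <float.h>				/* declares _isnan() on Windows */
#define isnan(x) _isnan(x)		/* illustrative stand-in for the port.h redirect */
#endif

int
main(void)
{
	double		d = 1.0;

	return isnan(d) ? 1 : 0;	/* expands to _isnan(d) under old MSVC */
}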
--- src/backend/utils/adt/int8.c | 1 + 1 file changed, 1 insertion(+) diff --git a/src/backend/utils/adt/int8.c b/src/backend/utils/adt/int8.c index 42bb6ebe2f..a8e2200852 100644 --- a/src/backend/utils/adt/int8.c +++ b/src/backend/utils/adt/int8.c @@ -14,6 +14,7 @@ #include "postgres.h" #include <ctype.h> +#include <float.h> /* for _isnan */ #include <limits.h> #include <math.h> From 3d8874224ff25de3ca4f9da8ce3118391bd6609e Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 13 Dec 2017 10:37:48 -0500 Subject: [PATCH 0696/1087] Fix crash when using CALL on an aggregate Author: Ashutosh Bapat Reported-by: Rushabh Lathia --- src/backend/parser/parse_func.c | 9 +++++++++ src/test/regress/expected/create_procedure.out | 9 +++++++++ src/test/regress/sql/create_procedure.sql | 3 +++ 3 files changed, 21 insertions(+) diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c index 2f20516e76..e6b085637b 100644 --- a/src/backend/parser/parse_func.c +++ b/src/backend/parser/parse_func.c @@ -336,6 +336,15 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs, Form_pg_aggregate classForm; int catDirectArgs; + if (proc_call) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_FUNCTION), + errmsg("%s is not a procedure", + func_signature_string(funcname, nargs, + argnames, + actual_arg_types)), + parser_errposition(pstate, location))); + tup = SearchSysCache1(AGGFNOID, ObjectIdGetDatum(funcid)); if (!HeapTupleIsValid(tup)) /* should not happen */ elog(ERROR, "cache lookup failed for aggregate %u", funcid); diff --git a/src/test/regress/expected/create_procedure.out b/src/test/regress/expected/create_procedure.out index 5538ef2f2b..e627d8ebbc 100644 --- a/src/test/regress/expected/create_procedure.out +++ b/src/test/regress/expected/create_procedure.out @@ -41,6 +41,15 @@ SELECT 5; $$; CALL ptest2(); -- various error cases +CALL version(); -- error: not a procedure +ERROR: version() is not a procedure +LINE 1: CALL version(); + ^ +HINT: To call a function, use SELECT. +CALL sum(1); -- error: not a procedure +ERROR: sum(integer) is not a procedure +LINE 1: CALL sum(1); + ^ CREATE PROCEDURE ptestx() LANGUAGE SQL WINDOW AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; ERROR: invalid attribute in procedure definition LINE 1: CREATE PROCEDURE ptestx() LANGUAGE SQL WINDOW AS $$ INSERT I... diff --git a/src/test/regress/sql/create_procedure.sql b/src/test/regress/sql/create_procedure.sql index f09ba2ad30..8c47b7e9ef 100644 --- a/src/test/regress/sql/create_procedure.sql +++ b/src/test/regress/sql/create_procedure.sql @@ -30,6 +30,9 @@ CALL ptest2(); -- various error cases +CALL version(); -- error: not a procedure +CALL sum(1); -- error: not a procedure + CREATE PROCEDURE ptestx() LANGUAGE SQL WINDOW AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; CREATE PROCEDURE ptestx() LANGUAGE SQL STRICT AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; CREATE PROCEDURE ptestx(OUT a int) LANGUAGE SQL AS $$ INSERT INTO cp_test VALUES (1, 'a') $$; From 632b03da31cbbf4d32193d35031d301bd50d2679 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 7 Dec 2017 14:03:29 -0500 Subject: [PATCH 0697/1087] Start a separate test suite for plpgsql The plpgsql.sql test file in the main regression tests is now by far the largest after numeric_big, making editing and managing the test cases very cumbersome. The other PLs have their own test suites split up into smaller files by topic. It would be nice to have that for plpgsql as well.
So, to get that started, set up test infrastructure in src/pl/plpgsql/src/ and split out the recently added procedure test cases into a new file there. That file now mirrors the test cases added to the other PLs, making managing those matching tests a bit easier too. msvc build system changes with help from Michael Paquier --- src/pl/plpgsql/src/.gitignore | 3 ++ src/pl/plpgsql/src/Makefile | 14 ++++++ src/pl/plpgsql/src/expected/plpgsql_call.out | 41 ++++++++++++++++ src/pl/plpgsql/src/sql/plpgsql_call.sql | 47 +++++++++++++++++++ src/test/regress/expected/plpgsql.out | 41 ---------------- src/test/regress/sql/plpgsql.sql | 49 -------------------- src/tools/msvc/vcregress.pl | 31 ++++++++----- 7 files changed, 125 insertions(+), 101 deletions(-) create mode 100644 src/pl/plpgsql/src/expected/plpgsql_call.out create mode 100644 src/pl/plpgsql/src/sql/plpgsql_call.sql diff --git a/src/pl/plpgsql/src/.gitignore b/src/pl/plpgsql/src/.gitignore index 92387fa3cb..ff6ac965fd 100644 --- a/src/pl/plpgsql/src/.gitignore +++ b/src/pl/plpgsql/src/.gitignore @@ -1,3 +1,6 @@ /pl_gram.c /pl_gram.h /plerrcodes.h +/log/ +/results/ +/tmp_check/ diff --git a/src/pl/plpgsql/src/Makefile b/src/pl/plpgsql/src/Makefile index 95348179ac..64991c3115 100644 --- a/src/pl/plpgsql/src/Makefile +++ b/src/pl/plpgsql/src/Makefile @@ -24,6 +24,8 @@ OBJS = pl_gram.o pl_handler.o pl_comp.o pl_exec.o \ DATA = plpgsql.control plpgsql--1.0.sql plpgsql--unpackaged--1.0.sql +REGRESS = plpgsql_call + all: all-lib # Shared library stuff @@ -65,6 +67,18 @@ pl_gram.c: BISONFLAGS += -d plerrcodes.h: $(top_srcdir)/src/backend/utils/errcodes.txt generate-plerrcodes.pl $(PERL) $(srcdir)/generate-plerrcodes.pl $< > $@ + +check: submake + $(pg_regress_check) $(REGRESS_OPTS) $(REGRESS) + +installcheck: submake + $(pg_regress_installcheck) $(REGRESS_OPTS) $(REGRESS) + +.PHONY: submake +submake: + $(MAKE) -C $(top_builddir)/src/test/regress pg_regress$(X) + + distprep: pl_gram.h pl_gram.c plerrcodes.h # pl_gram.c, pl_gram.h and plerrcodes.h are in the distribution tarball, diff --git a/src/pl/plpgsql/src/expected/plpgsql_call.out b/src/pl/plpgsql/src/expected/plpgsql_call.out new file mode 100644 index 0000000000..d0f35163bc --- /dev/null +++ b/src/pl/plpgsql/src/expected/plpgsql_call.out @@ -0,0 +1,41 @@ +-- +-- Tests for procedures / CALL syntax +-- +CREATE PROCEDURE test_proc1() +LANGUAGE plpgsql +AS $$ +BEGIN + NULL; +END; +$$; +CALL test_proc1(); +-- error: can't return non-NULL +CREATE PROCEDURE test_proc2() +LANGUAGE plpgsql +AS $$ +BEGIN + RETURN 5; +END; +$$; +CALL test_proc2(); +ERROR: cannot return a value from a procedure +CONTEXT: PL/pgSQL function test_proc2() while casting return value to function's return type +CREATE TABLE test1 (a int); +CREATE PROCEDURE test_proc3(x int) +LANGUAGE plpgsql +AS $$ +BEGIN + INSERT INTO test1 VALUES (x); +END; +$$; +CALL test_proc3(55); +SELECT * FROM test1; + a +---- + 55 +(1 row) + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; +DROP TABLE test1; diff --git a/src/pl/plpgsql/src/sql/plpgsql_call.sql b/src/pl/plpgsql/src/sql/plpgsql_call.sql new file mode 100644 index 0000000000..38fd220e8f --- /dev/null +++ b/src/pl/plpgsql/src/sql/plpgsql_call.sql @@ -0,0 +1,47 @@ +-- +-- Tests for procedures / CALL syntax +-- + +CREATE PROCEDURE test_proc1() +LANGUAGE plpgsql +AS $$ +BEGIN + NULL; +END; +$$; + +CALL test_proc1(); + + +-- error: can't return non-NULL +CREATE PROCEDURE test_proc2() +LANGUAGE plpgsql +AS $$ +BEGIN + RETURN 5; +END; +$$; + +CALL 
test_proc2(); + + +CREATE TABLE test1 (a int); + +CREATE PROCEDURE test_proc3(x int) +LANGUAGE plpgsql +AS $$ +BEGIN + INSERT INTO test1 VALUES (x); +END; +$$; + +CALL test_proc3(55); + +SELECT * FROM test1; + + +DROP PROCEDURE test_proc1; +DROP PROCEDURE test_proc2; +DROP PROCEDURE test_proc3; + +DROP TABLE test1; diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index 26f6e4394f..a2df411132 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -6067,44 +6067,3 @@ END; $$ LANGUAGE plpgsql; ERROR: "x" is not a scalar variable LINE 3: GET DIAGNOSTICS x = ROW_COUNT; ^ --- --- Procedures --- -CREATE PROCEDURE test_proc1() -LANGUAGE plpgsql -AS $$ -BEGIN - NULL; -END; -$$; -CALL test_proc1(); --- error: can't return non-NULL -CREATE PROCEDURE test_proc2() -LANGUAGE plpgsql -AS $$ -BEGIN - RETURN 5; -END; -$$; -CALL test_proc2(); -ERROR: cannot return a value from a procedure -CONTEXT: PL/pgSQL function test_proc2() while casting return value to function's return type -CREATE TABLE proc_test1 (a int); -CREATE PROCEDURE test_proc3(x int) -LANGUAGE plpgsql -AS $$ -BEGIN - INSERT INTO proc_test1 VALUES (x); -END; -$$; -CALL test_proc3(55); -SELECT * FROM proc_test1; - a ----- - 55 -(1 row) - -DROP PROCEDURE test_proc1; -DROP PROCEDURE test_proc2; -DROP PROCEDURE test_proc3; -DROP TABLE proc_test1; diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index bb09b2d807..02c8913801 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -4843,52 +4843,3 @@ BEGIN GET DIAGNOSTICS x = ROW_COUNT; RETURN; END; $$ LANGUAGE plpgsql; - - --- --- Procedures --- - -CREATE PROCEDURE test_proc1() -LANGUAGE plpgsql -AS $$ -BEGIN - NULL; -END; -$$; - -CALL test_proc1(); - - --- error: can't return non-NULL -CREATE PROCEDURE test_proc2() -LANGUAGE plpgsql -AS $$ -BEGIN - RETURN 5; -END; -$$; - -CALL test_proc2(); - - -CREATE TABLE proc_test1 (a int); - -CREATE PROCEDURE test_proc3(x int) -LANGUAGE plpgsql -AS $$ -BEGIN - INSERT INTO proc_test1 VALUES (x); -END; -$$; - -CALL test_proc3(55); - -SELECT * FROM proc_test1; - - -DROP PROCEDURE test_proc1; -DROP PROCEDURE test_proc2; -DROP PROCEDURE test_proc3; - -DROP TABLE proc_test1; diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl index 719fe83047..314f2c37d2 100644 --- a/src/tools/msvc/vcregress.pl +++ b/src/tools/msvc/vcregress.pl @@ -248,23 +248,32 @@ sub taptest sub plcheck { - chdir "../../pl"; + chdir "$topdir/src/pl"; - foreach my $pl (glob("*")) + foreach my $dir (glob("*/src *")) { - next unless -d "$pl/sql" && -d "$pl/expected"; - my $lang = $pl eq 'tcl' ? 
'pltcl' : $pl; + next unless -d "$dir/sql" && -d "$dir/expected"; + my $lang; + if ($dir eq 'plpgsql/src') { + $lang = 'plpgsql'; + } + elsif ($dir eq 'tcl') { + $lang = 'pltcl'; + } + else { + $lang = $dir; + } if ($lang eq 'plpython') { - next unless -d "../../$Config/plpython2"; + next unless -d "$topdir/$Config/plpython2"; $lang = 'plpythonu'; } else { - next unless -d "../../$Config/$lang"; + next unless -d "$topdir/$Config/$lang"; } my @lang_args = ("--load-extension=$lang"); - chdir $pl; + chdir $dir; my @tests = fetchTests(); if ($lang eq 'plperl') { @@ -285,16 +294,16 @@ sub plcheck "============================================================\n"; print "Checking $lang\n"; my @args = ( - "../../../$Config/pg_regress/pg_regress", - "--bindir=../../../$Config/psql", + "$topdir/$Config/pg_regress/pg_regress", + "--bindir=$topdir/$Config/psql", "--dbname=pl_regression", @lang_args, @tests); system(@args); my $status = $? >> 8; exit $status if $status; - chdir ".."; + chdir "$topdir/src/pl"; } - chdir "../../.."; + chdir "$topdir"; } sub subdircheck From 9fa6f00b1308dd10da4eca2f31ccbfc7b35bb461 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 13 Dec 2017 13:55:12 -0500 Subject: [PATCH 0698/1087] Rethink MemoryContext creation to improve performance. This patch makes a number of interrelated changes to reduce the overhead involved in creating/deleting memory contexts. The key ideas are: * Include the AllocSetContext header of an aset.c context in its first malloc request, rather than allocating it separately in TopMemoryContext. This means that we now always create an initial or "keeper" block in an aset, even if it never receives any allocation requests. * Create freelists in which we can save and recycle recently-destroyed asets (this idea is due to Robert Haas). * In the common case where the name of a context is a constant string, just store a pointer to it in the context header, rather than copying the string. The first change eliminates a palloc/pfree cycle per context, and also avoids bloat in TopMemoryContext, at the price that creating a context now involves a malloc/free cycle even if the context never receives any allocations. That would be a loser for some common usage patterns, but recycling short-lived contexts via the freelist eliminates that pain. Avoiding copying constant strings not only saves strlen() and strcpy() overhead, but is an essential part of the freelist optimization because it makes the context header size constant. Currently we make no attempt to use the freelist for contexts with non-constant names. (Perhaps someday we'll need to think harder about that, but in current usage, most contexts with custom names are long-lived anyway.) The freelist management in this initial commit is pretty simplistic, and we might want to refine it later --- but in common workloads that will never matter because the freelists will never get full anyway. To create a context with a non-constant name, one is now required to call AllocSetContextCreateExtended and specify the MEMCONTEXT_COPY_NAME option. AllocSetContextCreate becomes a wrapper macro, and it includes a test that will complain about non-string-literal context name parameters on gcc and similar compilers. 
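Two call sites make the revised creation API concrete. This is a usage sketch mirroring call sites visible in the diffs below; it assumes the surrounding backend headers and variables (for instance a valid relation pointer) and is not compilable on its own. It also anticipates the size-macro requirement discussed in the next paragraph.

/* Constant name: the (now macro) AllocSetContextCreate still works, and the
 * size abstraction macros are required at such call sites. The context name
 * here is invented for the example. */
MemoryContext mycxt = AllocSetContextCreate(CurrentMemoryContext,
											"MyProcessingContext",
											ALLOCSET_DEFAULT_SIZES);

/* Runtime-computed name: call the extended function and pass
 * MEMCONTEXT_COPY_NAME so the name is copied into context-lifespan storage. */
MemoryContext rulescxt =
	AllocSetContextCreateExtended(CacheMemoryContext,
								  RelationGetRelationName(relation),
								  MEMCONTEXT_COPY_NAME,
								  ALLOCSET_SMALL_SIZES);

Only the first form is a candidate for the context freelists described above: a copied name makes the header size variable, which is exactly why MEMCONTEXT_COPY_NAME contexts are excluded from recycling.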
An unfortunate side effect of making AllocSetContextCreate a macro is that one is now *required* to use the size parameter abstraction macros (ALLOCSET_DEFAULT_SIZES and friends) with it; the pre-9.6 habit of writing out individual size parameters no longer works unless you switch to AllocSetContextCreateExtended. Internally to the memory-context-related modules, the context creation APIs are simplified, removing the rather baroque original design whereby a context-type module called mcxt.c which then called back into the context-type module. That saved a bit of code duplication, but not much, and it prevented context-type modules from exercising control over the allocation of context headers. In passing, I converted the test-and-elog validation of aset size parameters into Asserts to save a few more cycles. The original thought was that callers might compute size parameters on the fly, but in practice nobody does that, so it's useless to expend cycles on checking those numbers in production builds. Also, mark the memory context method-pointer structs "const", just for cleanliness. Discussion: https://postgr.es/m/2264.1512870796@sss.pgh.pa.us --- contrib/amcheck/verify_nbtree.c | 4 +- src/backend/access/transam/xact.c | 11 +- src/backend/catalog/partition.c | 7 +- src/backend/commands/subscriptioncmds.c | 4 +- src/backend/lib/knapsack.c | 4 +- src/backend/replication/logical/launcher.c | 4 +- .../replication/logical/reorderbuffer.c | 3 + src/backend/replication/pgoutput/pgoutput.c | 4 +- src/backend/utils/cache/relcache.c | 29 +- src/backend/utils/cache/ts_cache.c | 7 +- src/backend/utils/hash/dynahash.c | 8 +- src/backend/utils/mmgr/README | 12 +- src/backend/utils/mmgr/aset.c | 370 ++++++++++++------ src/backend/utils/mmgr/generation.c | 81 ++-- src/backend/utils/mmgr/mcxt.c | 159 ++++---- src/backend/utils/mmgr/slab.c | 102 +++-- src/include/nodes/memnodes.h | 5 +- src/include/utils/memutils.h | 43 +- src/pl/plperl/plperl.c | 7 +- src/pl/plpython/plpy_procedure.c | 7 +- src/pl/tcl/pltcl.c | 9 +- 21 files changed, 539 insertions(+), 341 deletions(-) diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c index 868c14ec8f..adbbc44468 100644 --- a/contrib/amcheck/verify_nbtree.c +++ b/contrib/amcheck/verify_nbtree.c @@ -295,9 +295,7 @@ bt_check_every_level(Relation rel, bool readonly) /* Create context for page */ state->targetcontext = AllocSetContextCreate(CurrentMemoryContext, "amcheck context", - ALLOCSET_DEFAULT_MINSIZE, - ALLOCSET_DEFAULT_INITSIZE, - ALLOCSET_DEFAULT_MAXSIZE); + ALLOCSET_DEFAULT_SIZES); state->checkstrategy = GetAccessStrategy(BAS_BULKREAD); /* Get true root block from meta-page */ diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index 046898c619..e93d740b21 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -997,11 +997,12 @@ AtStart_Memory(void) */ if (TransactionAbortContext == NULL) TransactionAbortContext = - AllocSetContextCreate(TopMemoryContext, - "TransactionAbortContext", - 32 * 1024, - 32 * 1024, - 32 * 1024); + AllocSetContextCreateExtended(TopMemoryContext, + "TransactionAbortContext", + 0, + 32 * 1024, + 32 * 1024, + 32 * 1024); /* * We shouldn't have a transaction context already. 
diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index ef156e449e..5c4018e9f7 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -520,9 +520,10 @@ RelationBuildPartitionDesc(Relation rel) } /* Now build the actual relcache partition descriptor */ - rel->rd_pdcxt = AllocSetContextCreate(CacheMemoryContext, - RelationGetRelationName(rel), - ALLOCSET_DEFAULT_SIZES); + rel->rd_pdcxt = AllocSetContextCreateExtended(CacheMemoryContext, + RelationGetRelationName(rel), + MEMCONTEXT_COPY_NAME, + ALLOCSET_DEFAULT_SIZES); oldcxt = MemoryContextSwitchTo(rel->rd_pdcxt); result = (PartitionDescData *) palloc0(sizeof(PartitionDescData)); diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c index 086a6ef85e..a7f426d52b 100644 --- a/src/backend/commands/subscriptioncmds.c +++ b/src/backend/commands/subscriptioncmds.c @@ -259,9 +259,7 @@ publicationListToArray(List *publist) /* Create memory context for temporary allocations. */ memcxt = AllocSetContextCreate(CurrentMemoryContext, "publicationListToArray to array", - ALLOCSET_DEFAULT_MINSIZE, - ALLOCSET_DEFAULT_INITSIZE, - ALLOCSET_DEFAULT_MAXSIZE); + ALLOCSET_DEFAULT_SIZES); oldcxt = MemoryContextSwitchTo(memcxt); datums = (Datum *) palloc(sizeof(Datum) * list_length(publist)); diff --git a/src/backend/lib/knapsack.c b/src/backend/lib/knapsack.c index ddf2b9afa3..490c0cc73c 100644 --- a/src/backend/lib/knapsack.c +++ b/src/backend/lib/knapsack.c @@ -57,9 +57,7 @@ DiscreteKnapsack(int max_weight, int num_items, { MemoryContext local_ctx = AllocSetContextCreate(CurrentMemoryContext, "Knapsack", - ALLOCSET_SMALL_MINSIZE, - ALLOCSET_SMALL_INITSIZE, - ALLOCSET_SMALL_MAXSIZE); + ALLOCSET_SMALL_SIZES); MemoryContext oldctx = MemoryContextSwitchTo(local_ctx); double *values; Bitmapset **sets; diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c index a613ef4757..24be3cef53 100644 --- a/src/backend/replication/logical/launcher.c +++ b/src/backend/replication/logical/launcher.c @@ -925,9 +925,7 @@ ApplyLauncherMain(Datum main_arg) /* Use temporary context for the database list and worker info. */ subctx = AllocSetContextCreate(TopMemoryContext, "Logical Replication Launcher sublist", - ALLOCSET_DEFAULT_MINSIZE, - ALLOCSET_DEFAULT_INITSIZE, - ALLOCSET_DEFAULT_MAXSIZE); + ALLOCSET_DEFAULT_SIZES); oldctx = MemoryContextSwitchTo(subctx); /* search for subscriptions to start or stop. 
*/ diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c index fa95bab58e..5ac391dbda 100644 --- a/src/backend/replication/logical/reorderbuffer.c +++ b/src/backend/replication/logical/reorderbuffer.c @@ -237,16 +237,19 @@ ReorderBufferAllocate(void) buffer->change_context = SlabContextCreate(new_ctx, "Change", + 0, SLAB_DEFAULT_BLOCK_SIZE, sizeof(ReorderBufferChange)); buffer->txn_context = SlabContextCreate(new_ctx, "TXN", + 0, SLAB_DEFAULT_BLOCK_SIZE, sizeof(ReorderBufferTXN)); buffer->tup_context = GenerationContextCreate(new_ctx, "Tuples", + 0, SLAB_LARGE_BLOCK_SIZE); hash_ctl.keysize = sizeof(TransactionId); diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c index c3126545b4..550b156e2d 100644 --- a/src/backend/replication/pgoutput/pgoutput.c +++ b/src/backend/replication/pgoutput/pgoutput.c @@ -152,9 +152,7 @@ pgoutput_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt, /* Create our memory context for private allocations. */ data->context = AllocSetContextCreate(ctx->context, "logical replication output context", - ALLOCSET_DEFAULT_MINSIZE, - ALLOCSET_DEFAULT_INITSIZE, - ALLOCSET_DEFAULT_MAXSIZE); + ALLOCSET_DEFAULT_SIZES); ctx->output_plugin_private = data; diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index 12a5f157c0..3a9233ef3d 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -669,9 +669,10 @@ RelationBuildRuleLock(Relation relation) /* * Make the private context. Assume it'll not contain much data. */ - rulescxt = AllocSetContextCreate(CacheMemoryContext, - RelationGetRelationName(relation), - ALLOCSET_SMALL_SIZES); + rulescxt = AllocSetContextCreateExtended(CacheMemoryContext, + RelationGetRelationName(relation), + MEMCONTEXT_COPY_NAME, + ALLOCSET_SMALL_SIZES); relation->rd_rulescxt = rulescxt; /* @@ -984,9 +985,10 @@ RelationBuildPartitionKey(Relation relation) ReleaseSysCache(tuple); /* Success --- now copy to the cache memory */ - partkeycxt = AllocSetContextCreate(CacheMemoryContext, - RelationGetRelationName(relation), - ALLOCSET_SMALL_SIZES); + partkeycxt = AllocSetContextCreateExtended(CacheMemoryContext, + RelationGetRelationName(relation), + MEMCONTEXT_COPY_NAME, + ALLOCSET_SMALL_SIZES); relation->rd_partkeycxt = partkeycxt; oldcxt = MemoryContextSwitchTo(relation->rd_partkeycxt); relation->rd_partkey = copy_partition_key(key); @@ -1566,9 +1568,10 @@ RelationInitIndexAccessInfo(Relation relation) * a context, and not just a couple of pallocs, is so that we won't leak * any subsidiary info attached to fmgr lookup records. 
*/ - indexcxt = AllocSetContextCreate(CacheMemoryContext, - RelationGetRelationName(relation), - ALLOCSET_SMALL_SIZES); + indexcxt = AllocSetContextCreateExtended(CacheMemoryContext, + RelationGetRelationName(relation), + MEMCONTEXT_COPY_NAME, + ALLOCSET_SMALL_SIZES); relation->rd_indexcxt = indexcxt; /* @@ -5537,9 +5540,11 @@ load_relcache_init_file(bool shared) * prepare index info context --- parameters should match * RelationInitIndexAccessInfo */ - indexcxt = AllocSetContextCreate(CacheMemoryContext, - RelationGetRelationName(rel), - ALLOCSET_SMALL_SIZES); + indexcxt = + AllocSetContextCreateExtended(CacheMemoryContext, + RelationGetRelationName(rel), + MEMCONTEXT_COPY_NAME, + ALLOCSET_SMALL_SIZES); rel->rd_indexcxt = indexcxt; /* diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c index da5c8ea345..29cf93a4de 100644 --- a/src/backend/utils/cache/ts_cache.c +++ b/src/backend/utils/cache/ts_cache.c @@ -294,9 +294,10 @@ lookup_ts_dictionary_cache(Oid dictId) Assert(!found); /* it wasn't there a moment ago */ /* Create private memory context the first time through */ - saveCtx = AllocSetContextCreate(CacheMemoryContext, - NameStr(dict->dictname), - ALLOCSET_SMALL_SIZES); + saveCtx = AllocSetContextCreateExtended(CacheMemoryContext, + NameStr(dict->dictname), + MEMCONTEXT_COPY_NAME, + ALLOCSET_SMALL_SIZES); } else { diff --git a/src/backend/utils/hash/dynahash.c b/src/backend/utils/hash/dynahash.c index 71f5f0688a..c88efc3054 100644 --- a/src/backend/utils/hash/dynahash.c +++ b/src/backend/utils/hash/dynahash.c @@ -340,9 +340,11 @@ hash_create(const char *tabname, long nelem, HASHCTL *info, int flags) CurrentDynaHashCxt = info->hcxt; else CurrentDynaHashCxt = TopMemoryContext; - CurrentDynaHashCxt = AllocSetContextCreate(CurrentDynaHashCxt, - tabname, - ALLOCSET_DEFAULT_SIZES); + CurrentDynaHashCxt = + AllocSetContextCreateExtended(CurrentDynaHashCxt, + tabname, + MEMCONTEXT_COPY_NAME, + ALLOCSET_DEFAULT_SIZES); } /* Initialize the hash header, plus a copy of the table name */ diff --git a/src/backend/utils/mmgr/README b/src/backend/utils/mmgr/README index 296fa198dc..a42e568d5c 100644 --- a/src/backend/utils/mmgr/README +++ b/src/backend/utils/mmgr/README @@ -177,8 +177,7 @@ every other context is a direct or indirect child of this one. Allocating here is essentially the same as "malloc", because this context will never be reset or deleted. This is for stuff that should live forever, or for stuff that the controlling module will take care of deleting at the -appropriate time. An example is fd.c's tables of open files, as well as -the context management nodes for memory contexts themselves. Avoid +appropriate time. An example is fd.c's tables of open files. Avoid allocating stuff here unless really necessary, and especially avoid running with CurrentMemoryContext pointing here. @@ -420,11 +419,10 @@ a maximum block size. Selecting smaller values can prevent wastage of space in contexts that aren't expected to hold very much (an example is the relcache's per-relation contexts). -Also, it is possible to specify a minimum context size. If this -value is greater than zero then a block of that size will be grabbed -immediately upon context creation, and cleared but not released during -context resets. This feature is needed for ErrorContext (see above), -but will most likely not be used for other contexts. +Also, it is possible to specify a minimum context size, in case for some +reason that should be different from the initial size for additional +blocks. 
An aset.c context will always contain at least one block, +of size minContextSize if that is specified, otherwise initBlockSize. We expect that per-tuple contexts will be reset frequently and typically will not allocate very much space per tuple cycle. To make this usage diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c index 1bd1c34fde..1519da05d2 100644 --- a/src/backend/utils/mmgr/aset.c +++ b/src/backend/utils/mmgr/aset.c @@ -93,6 +93,9 @@ * * Blocks allocated to hold oversize chunks do not follow this rule, however; * they are just however big they need to be to hold that single chunk. + * + * Also, if a minContextSize is specified, the first block has that size, + * and then initBlockSize is used for the next one. *-------------------- */ @@ -113,7 +116,7 @@ typedef void *AllocPointer; * * Note: header.isReset means there is nothing for AllocSetReset to do. * This is different from the aset being physically empty (empty blocks list) - * because we may still have a keeper block. It's also different from the set + * because we will still have a keeper block. It's also different from the set * being logically empty, because we don't attempt to detect pfree'ing the * last active chunk. */ @@ -127,8 +130,11 @@ typedef struct AllocSetContext Size initBlockSize; /* initial block size */ Size maxBlockSize; /* maximum block size */ Size nextBlockSize; /* next block size to allocate */ + Size headerSize; /* allocated size of context header */ Size allocChunkLimit; /* effective chunk size limit */ - AllocBlock keeper; /* if not NULL, keep this block over resets */ + AllocBlock keeper; /* keep this block over resets */ + /* freelist this context could be put in, or -1 if not a candidate: */ + int freeListIndex; /* index in context_freelists[], or -1 */ } AllocSetContext; typedef AllocSetContext *AllocSet; @@ -215,13 +221,57 @@ typedef struct AllocChunkData #define AllocChunkGetPointer(chk) \ ((AllocPointer)(((char *)(chk)) + ALLOC_CHUNKHDRSZ)) +/* + * Rather than repeatedly creating and deleting memory contexts, we keep some + * freed contexts in freelists so that we can hand them out again with little + * work. Before putting a context in a freelist, we reset it so that it has + * only its initial malloc chunk and no others. To be a candidate for a + * freelist, a context must have the same minContextSize/initBlockSize as + * other contexts in the list; but its maxBlockSize is irrelevant since that + * doesn't affect the size of the initial chunk. Also, candidate contexts + * *must not* use MEMCONTEXT_COPY_NAME since that would make their header size + * variable. (We currently insist that all flags be zero, since other flags + * would likely make the contexts less interchangeable, too.) + * + * We currently provide one freelist for ALLOCSET_DEFAULT_SIZES contexts + * and one for ALLOCSET_SMALL_SIZES contexts; the latter works for + * ALLOCSET_START_SMALL_SIZES too, since only the maxBlockSize differs. + * + * Ordinarily, we re-use freelist contexts in last-in-first-out order, in + * hopes of improving locality of reference. But if there get to be too + * many contexts in the list, we'd prefer to drop the most-recently-created + * contexts in hopes of keeping the process memory map compact. + * We approximate that by simply deleting all existing entries when the list + * overflows, on the assumption that queries that allocate a lot of contexts + * will probably free them in more or less reverse order of allocation. 
+ * + * Contexts in a freelist are chained via their nextchild pointers. + */ +#define MAX_FREE_CONTEXTS 100 /* arbitrary limit on freelist length */ + +typedef struct AllocSetFreeList +{ + int num_free; /* current list length */ + AllocSetContext *first_free; /* list header */ +} AllocSetFreeList; + +/* context_freelists[0] is for default params, [1] for small params */ +static AllocSetFreeList context_freelists[2] = +{ + { + 0, NULL + }, + { + 0, NULL + } +}; + /* * These functions implement the MemoryContext API for AllocSet contexts. */ static void *AllocSetAlloc(MemoryContext context, Size size); static void AllocSetFree(MemoryContext context, void *pointer); static void *AllocSetRealloc(MemoryContext context, void *pointer, Size size); -static void AllocSetInit(MemoryContext context); static void AllocSetReset(MemoryContext context); static void AllocSetDelete(MemoryContext context); static Size AllocSetGetChunkSpace(MemoryContext context, void *pointer); @@ -236,11 +286,10 @@ static void AllocSetCheck(MemoryContext context); /* * This is the virtual function table for AllocSet contexts. */ -static MemoryContextMethods AllocSetMethods = { +static const MemoryContextMethods AllocSetMethods = { AllocSetAlloc, AllocSetFree, AllocSetRealloc, - AllocSetInit, AllocSetReset, AllocSetDelete, AllocSetGetChunkSpace, @@ -325,27 +374,35 @@ AllocSetFreeIndex(Size size) /* - * AllocSetContextCreate + * AllocSetContextCreateExtended * Create a new AllocSet context. * * parent: parent context, or NULL if top-level context * name: name of context (for debugging only, need not be unique) + * flags: bitmask of MEMCONTEXT_XXX option flags * minContextSize: minimum context size * initBlockSize: initial allocation block size * maxBlockSize: maximum allocation block size * - * Notes: the name string will be copied into context-lifespan storage. + * Notes: if flags & MEMCONTEXT_COPY_NAME, the name string will be copied into + * context-lifespan storage; otherwise, it had better be statically allocated. * Most callers should abstract the context size parameters using a macro - * such as ALLOCSET_DEFAULT_SIZES. + * such as ALLOCSET_DEFAULT_SIZES. (This is now *required* when going + * through the AllocSetContextCreate macro.) */ MemoryContext -AllocSetContextCreate(MemoryContext parent, - const char *name, - Size minContextSize, - Size initBlockSize, - Size maxBlockSize) +AllocSetContextCreateExtended(MemoryContext parent, + const char *name, + int flags, + Size minContextSize, + Size initBlockSize, + Size maxBlockSize) { + int freeListIndex; + Size headerSize; + Size firstBlockSize; AllocSet set; + AllocBlock block; /* Assert we padded AllocChunkData properly */ StaticAssertStmt(ALLOC_CHUNKHDRSZ == MAXALIGN(ALLOC_CHUNKHDRSZ), @@ -355,36 +412,125 @@ AllocSetContextCreate(MemoryContext parent, "padding calculation in AllocChunkData is wrong"); /* - * First, validate allocation parameters. (If we're going to throw an - * error, we should do so before the context is created, not after.) We - * somewhat arbitrarily enforce a minimum 1K block size. + * First, validate allocation parameters. Once these were regular runtime + * test and elog's, but in practice Asserts seem sufficient because nobody + * varies their parameters at runtime. We somewhat arbitrarily enforce a + * minimum 1K block size. 
*/ - if (initBlockSize != MAXALIGN(initBlockSize) || - initBlockSize < 1024) - elog(ERROR, "invalid initBlockSize for memory context: %zu", - initBlockSize); - if (maxBlockSize != MAXALIGN(maxBlockSize) || - maxBlockSize < initBlockSize || - !AllocHugeSizeIsValid(maxBlockSize)) /* must be safe to double */ - elog(ERROR, "invalid maxBlockSize for memory context: %zu", - maxBlockSize); - if (minContextSize != 0 && - (minContextSize != MAXALIGN(minContextSize) || - minContextSize <= ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ)) - elog(ERROR, "invalid minContextSize for memory context: %zu", - minContextSize); - - /* Do the type-independent part of context creation */ - set = (AllocSet) MemoryContextCreate(T_AllocSetContext, - sizeof(AllocSetContext), - &AllocSetMethods, - parent, - name); - - /* Save allocation parameters */ + Assert(initBlockSize == MAXALIGN(initBlockSize) && + initBlockSize >= 1024); + Assert(maxBlockSize == MAXALIGN(maxBlockSize) && + maxBlockSize >= initBlockSize && + AllocHugeSizeIsValid(maxBlockSize)); /* must be safe to double */ + Assert(minContextSize == 0 || + (minContextSize == MAXALIGN(minContextSize) && + minContextSize >= 1024 && + minContextSize <= maxBlockSize)); + + /* + * Check whether the parameters match either available freelist. We do + * not need to demand a match of maxBlockSize. + */ + if (flags == 0 && + minContextSize == ALLOCSET_DEFAULT_MINSIZE && + initBlockSize == ALLOCSET_DEFAULT_INITSIZE) + freeListIndex = 0; + else if (flags == 0 && + minContextSize == ALLOCSET_SMALL_MINSIZE && + initBlockSize == ALLOCSET_SMALL_INITSIZE) + freeListIndex = 1; + else + freeListIndex = -1; + + /* + * If a suitable freelist entry exists, just recycle that context. + */ + if (freeListIndex >= 0) + { + AllocSetFreeList *freelist = &context_freelists[freeListIndex]; + + if (freelist->first_free != NULL) + { + /* Remove entry from freelist */ + set = freelist->first_free; + freelist->first_free = (AllocSet) set->header.nextchild; + freelist->num_free--; + + /* Update its maxBlockSize; everything else should be OK */ + set->maxBlockSize = maxBlockSize; + + /* Reinitialize its header, installing correct name and parent */ + MemoryContextCreate((MemoryContext) set, + T_AllocSetContext, + set->headerSize, + sizeof(AllocSetContext), + &AllocSetMethods, + parent, + name, + flags); + + return (MemoryContext) set; + } + } + + /* Size of the memory context header, including name storage if needed */ + if (flags & MEMCONTEXT_COPY_NAME) + headerSize = MAXALIGN(sizeof(AllocSetContext) + strlen(name) + 1); + else + headerSize = MAXALIGN(sizeof(AllocSetContext)); + + /* Determine size of initial block */ + firstBlockSize = headerSize + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ; + if (minContextSize != 0) + firstBlockSize = Max(firstBlockSize, minContextSize); + else + firstBlockSize = Max(firstBlockSize, initBlockSize); + + /* + * Allocate the initial block. Unlike other aset.c blocks, it starts with + * the context header and its block header follows that. + */ + set = (AllocSet) malloc(firstBlockSize); + if (set == NULL) + { + if (TopMemoryContext) + MemoryContextStats(TopMemoryContext); + ereport(ERROR, + (errcode(ERRCODE_OUT_OF_MEMORY), + errmsg("out of memory"), + errdetail("Failed while creating memory context \"%s\".", + name))); + } + + /* + * Avoid writing code that can fail between here and MemoryContextCreate; + * we'd leak the header/initial block if we ereport in this stretch. 
+ */ + + /* Fill in the initial block's block header */ + block = (AllocBlock) (((char *) set) + headerSize); + block->aset = set; + block->freeptr = ((char *) block) + ALLOC_BLOCKHDRSZ; + block->endptr = ((char *) set) + firstBlockSize; + block->prev = NULL; + block->next = NULL; + + /* Mark unallocated space NOACCESS; leave the block header alone. */ + VALGRIND_MAKE_MEM_NOACCESS(block->freeptr, block->endptr - block->freeptr); + + /* Remember block as part of block list */ + set->blocks = block; + /* Mark block as not to be released at reset time */ + set->keeper = block; + + /* Finish filling in aset-specific parts of the context header */ + MemSetAligned(set->freelist, 0, sizeof(set->freelist)); + set->initBlockSize = initBlockSize; set->maxBlockSize = maxBlockSize; set->nextBlockSize = initBlockSize; + set->headerSize = headerSize; + set->freeListIndex = freeListIndex; /* * Compute the allocation chunk size limit for this context. It can't be @@ -410,64 +556,19 @@ AllocSetContextCreate(MemoryContext parent, (Size) ((maxBlockSize - ALLOC_BLOCKHDRSZ) / ALLOC_CHUNK_FRACTION)) set->allocChunkLimit >>= 1; - /* - * Grab always-allocated space, if requested - */ - if (minContextSize > 0) - { - Size blksize = minContextSize; - AllocBlock block; - - block = (AllocBlock) malloc(blksize); - if (block == NULL) - { - MemoryContextStats(TopMemoryContext); - ereport(ERROR, - (errcode(ERRCODE_OUT_OF_MEMORY), - errmsg("out of memory"), - errdetail("Failed while creating memory context \"%s\".", - name))); - } - block->aset = set; - block->freeptr = ((char *) block) + ALLOC_BLOCKHDRSZ; - block->endptr = ((char *) block) + blksize; - block->prev = NULL; - block->next = set->blocks; - if (block->next) - block->next->prev = block; - set->blocks = block; - /* Mark block as not to be released at reset time */ - set->keeper = block; - - /* Mark unallocated space NOACCESS; leave the block header alone. */ - VALGRIND_MAKE_MEM_NOACCESS(block->freeptr, - blksize - ALLOC_BLOCKHDRSZ); - } + /* Finally, do the type-independent part of context creation */ + MemoryContextCreate((MemoryContext) set, + T_AllocSetContext, + headerSize, + sizeof(AllocSetContext), + &AllocSetMethods, + parent, + name, + flags); return (MemoryContext) set; } -/* - * AllocSetInit - * Context-type-specific initialization routine. - * - * This is called by MemoryContextCreate() after setting up the - * generic MemoryContext fields and before linking the new context - * into the context tree. We must do whatever is needed to make the - * new context minimally valid for deletion. We must *not* risk - * failure --- thus, for example, allocating more memory is not cool. - * (AllocSetContextCreate can allocate memory when it gets control - * back, however.) - */ -static void -AllocSetInit(MemoryContext context) -{ - /* - * Since MemoryContextCreate already zeroed the context node, we don't - * have to do anything here: it's already OK. - */ -} - /* * AllocSetReset * Frees all memory which is allocated in the given set. @@ -475,9 +576,10 @@ AllocSetInit(MemoryContext context) * Actually, this routine has some discretion about what to do. * It should mark all allocated chunks freed, but it need not necessarily * give back all the resources the set owns. Our actual implementation is - * that we hang onto any "keeper" block specified for the set. In this way, - * we don't thrash malloc() when a context is repeatedly reset after small - * allocations, which is typical behavior for per-tuple contexts. 
+ * that we give back all but the "keeper" block (which we must keep, since + * it shares a malloc chunk with the context header). In this way, we don't + * thrash malloc() when a context is repeatedly reset after small allocations, + * which is typical behavior for per-tuple contexts. */ static void AllocSetReset(MemoryContext context) @@ -497,7 +599,7 @@ AllocSetReset(MemoryContext context) block = set->blocks; - /* New blocks list is either empty or just the keeper block */ + /* New blocks list will be just the keeper block */ set->blocks = set->keeper; while (block != NULL) @@ -540,7 +642,6 @@ AllocSetReset(MemoryContext context) * in preparation for deletion of the set. * * Unlike AllocSetReset, this *must* free all resources of the set. - * But note we are not responsible for deleting the context node itself. */ static void AllocSetDelete(MemoryContext context) @@ -555,11 +656,49 @@ AllocSetDelete(MemoryContext context) AllocSetCheck(context); #endif - /* Make it look empty, just in case... */ - MemSetAligned(set->freelist, 0, sizeof(set->freelist)); - set->blocks = NULL; - set->keeper = NULL; + /* + * If the context is a candidate for a freelist, put it into that freelist + * instead of destroying it. + */ + if (set->freeListIndex >= 0) + { + AllocSetFreeList *freelist = &context_freelists[set->freeListIndex]; + + /* + * Reset the context, if it needs it, so that we aren't hanging on to + * more than the initial malloc chunk. + */ + if (!context->isReset) + MemoryContextResetOnly(context); + + /* + * If the freelist is full, just discard what's already in it. See + * comments with context_freelists[]. + */ + if (freelist->num_free >= MAX_FREE_CONTEXTS) + { + while (freelist->first_free != NULL) + { + AllocSetContext *oldset = freelist->first_free; + + freelist->first_free = (AllocSetContext *) oldset->header.nextchild; + freelist->num_free--; + + /* All that remains is to free the header/initial block */ + free(oldset); + } + Assert(freelist->num_free == 0); + } + + /* Now add the just-deleted context to the freelist. */ + set->header.nextchild = (MemoryContext) freelist->first_free; + freelist->first_free = set; + freelist->num_free++; + + return; + } + /* Free all blocks, except the keeper which is part of context header */ while (block != NULL) { AllocBlock next = block->next; @@ -567,9 +706,15 @@ AllocSetDelete(MemoryContext context) #ifdef CLOBBER_FREED_MEMORY wipe_mem(block, block->freeptr - ((char *) block)); #endif - free(block); + + if (block != set->keeper) + free(block); + block = next; } + + /* Finally, free the context header, including the keeper block */ + free(set); } /* @@ -807,18 +952,6 @@ AllocSetAlloc(MemoryContext context, Size size) block->freeptr = ((char *) block) + ALLOC_BLOCKHDRSZ; block->endptr = ((char *) block) + blksize; - /* - * If this is the first block of the set, make it the "keeper" block. - * Formerly, a keeper block could only be created during context - * creation, but allowing it to happen here lets us have fast reset - * cycling even for contexts created with minContextSize = 0; that way - * we don't have to force space to be allocated in contexts that might - * never need any space. Don't mark an oversize block as a keeper, - * however. - */ - if (set->keeper == NULL && blksize == set->initBlockSize) - set->keeper = block; - /* Mark unallocated space NOACCESS. 
*/ VALGRIND_MAKE_MEM_NOACCESS(block->freeptr, blksize - ALLOC_BLOCKHDRSZ); @@ -1205,11 +1338,14 @@ AllocSetStats(MemoryContext context, int level, bool print, AllocSet set = (AllocSet) context; Size nblocks = 0; Size freechunks = 0; - Size totalspace = 0; + Size totalspace; Size freespace = 0; AllocBlock block; int fidx; + /* Include context header in totalspace */ + totalspace = set->headerSize; + for (block = set->blocks; block != NULL; block = block->next) { nblocks++; @@ -1264,7 +1400,7 @@ static void AllocSetCheck(MemoryContext context) { AllocSet set = (AllocSet) context; - char *name = set->header.name; + const char *name = set->header.name; AllocBlock prevblock; AllocBlock block; diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c index 19390fa581..10d0fc1f90 100644 --- a/src/backend/utils/mmgr/generation.c +++ b/src/backend/utils/mmgr/generation.c @@ -61,6 +61,7 @@ typedef struct GenerationContext /* Generational context parameters */ Size blockSize; /* standard block size */ + Size headerSize; /* allocated size of context header */ GenerationBlock *block; /* current (most recently allocated) block */ dlist_head blocks; /* list of blocks */ @@ -149,7 +150,6 @@ struct GenerationChunk static void *GenerationAlloc(MemoryContext context, Size size); static void GenerationFree(MemoryContext context, void *pointer); static void *GenerationRealloc(MemoryContext context, void *pointer, Size size); -static void GenerationInit(MemoryContext context); static void GenerationReset(MemoryContext context); static void GenerationDelete(MemoryContext context); static Size GenerationGetChunkSpace(MemoryContext context, void *pointer); @@ -164,11 +164,10 @@ static void GenerationCheck(MemoryContext context); /* * This is the virtual function table for Generation contexts. */ -static MemoryContextMethods GenerationMethods = { +static const MemoryContextMethods GenerationMethods = { GenerationAlloc, GenerationFree, GenerationRealloc, - GenerationInit, GenerationReset, GenerationDelete, GenerationGetChunkSpace, @@ -208,8 +207,10 @@ static MemoryContextMethods GenerationMethods = { MemoryContext GenerationContextCreate(MemoryContext parent, const char *name, + int flags, Size blockSize) { + Size headerSize; GenerationContext *set; /* Assert we padded GenerationChunk properly */ @@ -231,29 +232,51 @@ GenerationContextCreate(MemoryContext parent, elog(ERROR, "invalid blockSize for memory context: %zu", blockSize); - /* Do the type-independent part of context creation */ - set = (GenerationContext *) MemoryContextCreate(T_GenerationContext, - sizeof(GenerationContext), - &GenerationMethods, - parent, - name); + /* + * Allocate the context header. Unlike aset.c, we never try to combine + * this with the first regular block, since that would prevent us from + * freeing the first generation of allocations. + */ - set->blockSize = blockSize; + /* Size of the memory context header, including name storage if needed */ + if (flags & MEMCONTEXT_COPY_NAME) + headerSize = MAXALIGN(sizeof(GenerationContext) + strlen(name) + 1); + else + headerSize = MAXALIGN(sizeof(GenerationContext)); - return (MemoryContext) set; -} + set = (GenerationContext *) malloc(headerSize); + if (set == NULL) + { + MemoryContextStats(TopMemoryContext); + ereport(ERROR, + (errcode(ERRCODE_OUT_OF_MEMORY), + errmsg("out of memory"), + errdetail("Failed while creating memory context \"%s\".", + name))); + } -/* - * GenerationInit - * Context-type-specific initialization routine. 
- */ -static void -GenerationInit(MemoryContext context) -{ - GenerationContext *set = (GenerationContext *) context; + /* + * Avoid writing code that can fail between here and MemoryContextCreate; + * we'd leak the header if we ereport in this stretch. + */ + /* Fill in GenerationContext-specific header fields */ + set->blockSize = blockSize; + set->headerSize = headerSize; set->block = NULL; dlist_init(&set->blocks); + + /* Finally, do the type-independent part of context creation */ + MemoryContextCreate((MemoryContext) set, + T_GenerationContext, + headerSize, + sizeof(GenerationContext), + &GenerationMethods, + parent, + name, + flags); + + return (MemoryContext) set; } /* @@ -296,16 +319,15 @@ GenerationReset(MemoryContext context) /* * GenerationDelete - * Frees all memory which is allocated in the given set, in preparation - * for deletion of the set. We simply call GenerationReset() which does - * all the dirty work. + * Free all memory which is allocated in the given context. */ static void GenerationDelete(MemoryContext context) { - /* just reset to release all the GenerationBlocks */ + /* Reset to release all the GenerationBlocks */ GenerationReset(context); - /* we are not responsible for deleting the context node itself */ + /* And free the context header */ + free(context); } /* @@ -659,7 +681,7 @@ GenerationIsEmpty(MemoryContext context) /* * GenerationStats - * Compute stats about memory consumption of an Generation. + * Compute stats about memory consumption of a Generation context. * * level: recursion level (0 at top level); used for print indentation. * print: true to print stats to stderr. @@ -676,10 +698,13 @@ GenerationStats(MemoryContext context, int level, bool print, Size nblocks = 0; Size nchunks = 0; Size nfreechunks = 0; - Size totalspace = 0; + Size totalspace; Size freespace = 0; dlist_iter iter; + /* Include context header in totalspace */ + totalspace = set->headerSize; + dlist_foreach(iter, &set->blocks) { GenerationBlock *block = dlist_container(GenerationBlock, node, iter.cur); @@ -727,7 +752,7 @@ static void GenerationCheck(MemoryContext context) { GenerationContext *gen = (GenerationContext *) context; - char *name = context->name; + const char *name = context->name; dlist_iter iter; /* walk all blocks in this context */ diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c index c5c311fad3..97382a693c 100644 --- a/src/backend/utils/mmgr/mcxt.c +++ b/src/backend/utils/mmgr/mcxt.c @@ -91,9 +91,7 @@ MemoryContextInit(void) AssertState(TopMemoryContext == NULL); /* - * First, initialize TopMemoryContext, which will hold the MemoryContext - * nodes for all other contexts. (There is special-case code in - * MemoryContextCreate() to handle this call.) + * First, initialize TopMemoryContext, which is the parent of all others. */ TopMemoryContext = AllocSetContextCreate((MemoryContext) NULL, "TopMemoryContext", @@ -118,11 +116,12 @@ MemoryContextInit(void) * This should be the last step in this function, as elog.c assumes memory * management works once ErrorContext is non-null. */ - ErrorContext = AllocSetContextCreate(TopMemoryContext, - "ErrorContext", - 8 * 1024, - 8 * 1024, - 8 * 1024); + ErrorContext = AllocSetContextCreateExtended(TopMemoryContext, + "ErrorContext", + 0, + 8 * 1024, + 8 * 1024, + 8 * 1024); MemoryContextAllowInCriticalSection(ErrorContext, true); } @@ -191,10 +190,9 @@ MemoryContextResetChildren(MemoryContext context) * Delete a context and its descendants, and release all space * allocated therein. 
* - * The type-specific delete routine removes all subsidiary storage - * for the context, but we have to delete the context node itself, - * as well as recurse to get the children. We must also delink the - * node from its parent, if it has one. + * The type-specific delete routine removes all storage for the context, + * but we have to recurse to handle the children. + * We must also delink the context from its parent, if it has one. */ void MemoryContextDelete(MemoryContext context) @@ -205,7 +203,9 @@ MemoryContextDelete(MemoryContext context) /* And not CurrentMemoryContext, either */ Assert(context != CurrentMemoryContext); - MemoryContextDeleteChildren(context); + /* save a function call in common case where there are no children */ + if (context->firstchild != NULL) + MemoryContextDeleteChildren(context); /* * It's not entirely clear whether 'tis better to do this before or after @@ -223,8 +223,8 @@ MemoryContextDelete(MemoryContext context) MemoryContextSetParent(context, NULL); context->methods->delete_context(context); + VALGRIND_DESTROY_MEMPOOL(context); - pfree(context); } /* @@ -587,100 +587,85 @@ MemoryContextContains(MemoryContext context, void *pointer) return ptr_context == context; } -/*-------------------- +/* * MemoryContextCreate * Context-type-independent part of context creation. * * This is only intended to be called by context-type-specific * context creation routines, not by the unwashed masses. * - * The context creation procedure is a little bit tricky because - * we want to be sure that we don't leave the context tree invalid - * in case of failure (such as insufficient memory to allocate the - * context node itself). The procedure goes like this: - * 1. Context-type-specific routine first calls MemoryContextCreate(), - * passing the appropriate tag/size/methods values (the methods - * pointer will ordinarily point to statically allocated data). - * The parent and name parameters usually come from the caller. - * 2. MemoryContextCreate() attempts to allocate the context node, - * plus space for the name. If this fails we can ereport() with no - * damage done. - * 3. We fill in all of the type-independent MemoryContext fields. - * 4. We call the type-specific init routine (using the methods pointer). - * The init routine is required to make the node minimally valid - * with zero chance of failure --- it can't allocate more memory, - * for example. - * 5. Now we have a minimally valid node that can behave correctly - * when told to reset or delete itself. We link the node to its - * parent (if any), making the node part of the context tree. - * 6. We return to the context-type-specific routine, which finishes + * The memory context creation procedure goes like this: + * 1. Context-type-specific routine makes some initial space allocation, + * including enough space for the context header. If it fails, + * it can ereport() with no damage done. + * 2. Context-type-specific routine sets up all type-specific fields of + * the header (those beyond MemoryContextData proper), as well as any + * other management fields it needs to have a fully valid context. + * Usually, failure in this step is impossible, but if it's possible + * the initial space allocation should be freed before ereport'ing. + * 3. Context-type-specific routine calls MemoryContextCreate() to fill in + * the generic header fields and link the context into the context tree. + * 4. We return to the context-type-specific routine, which finishes * up type-specific initialization. 
This routine can now do things * that might fail (like allocate more memory), so long as it's * sure the node is left in a state that delete will handle. * - * This protocol doesn't prevent us from leaking memory if step 6 fails - * during creation of a top-level context, since there's no parent link - * in that case. However, if you run out of memory while you're building - * a top-level context, you might as well go home anyway... - * - * Normally, the context node and the name are allocated from - * TopMemoryContext (NOT from the parent context, since the node must - * survive resets of its parent context!). However, this routine is itself - * used to create TopMemoryContext! If we see that TopMemoryContext is NULL, - * we assume we are creating TopMemoryContext and use malloc() to allocate - * the node. + * node: the as-yet-uninitialized common part of the context header node. + * tag: NodeTag code identifying the memory context type. + * size: total size of context header including context-type-specific fields, + * as well as space for the context name if MEMCONTEXT_COPY_NAME is set. + * nameoffset: where within the "size" space to insert the context name. + * methods: context-type-specific methods (usually statically allocated). + * parent: parent context, or NULL if this will be a top-level context. + * name: name of context (for debugging only, need not be unique). + * flags: bitmask of MEMCONTEXT_XXX option flags. * - * Note that the name field of a MemoryContext does not point to - * separately-allocated storage, so it should not be freed at context - * deletion. - *-------------------- + * Context routines generally assume that MemoryContextCreate can't fail, + * so this can contain Assert but not elog/ereport. */ -MemoryContext -MemoryContextCreate(NodeTag tag, Size size, - MemoryContextMethods *methods, +void +MemoryContextCreate(MemoryContext node, + NodeTag tag, Size size, Size nameoffset, + const MemoryContextMethods *methods, MemoryContext parent, - const char *name) + const char *name, + int flags) { - MemoryContext node; - Size needed = size + strlen(name) + 1; - - /* creating new memory contexts is not allowed in a critical section */ + /* Creating new memory contexts is not allowed in a critical section */ Assert(CritSectionCount == 0); - /* Get space for node and name */ - if (TopMemoryContext != NULL) - { - /* Normal case: allocate the node in TopMemoryContext */ - node = (MemoryContext) MemoryContextAlloc(TopMemoryContext, - needed); - } - else - { - /* Special case for startup: use good ol' malloc */ - node = (MemoryContext) malloc(needed); - Assert(node != NULL); - } + /* Check size is sane */ + Assert(nameoffset >= sizeof(MemoryContextData)); + Assert((flags & MEMCONTEXT_COPY_NAME) ? 
+ size >= nameoffset + strlen(name) + 1 : + size >= nameoffset); - /* Initialize the node as best we can */ - MemSet(node, 0, size); + /* Initialize all standard fields of memory context header */ node->type = tag; + node->isReset = true; node->methods = methods; - node->parent = NULL; /* for the moment */ + node->parent = parent; node->firstchild = NULL; node->prevchild = NULL; - node->nextchild = NULL; - node->isReset = true; - node->name = ((char *) node) + size; - strcpy(node->name, name); + node->reset_cbs = NULL; - /* Type-specific routine finishes any other essential initialization */ - node->methods->init(node); + if (flags & MEMCONTEXT_COPY_NAME) + { + /* Insert context name into space reserved for it */ + char *namecopy = ((char *) node) + nameoffset; - /* OK to link node to parent (if any) */ - /* Could use MemoryContextSetParent here, but doesn't seem worthwhile */ + node->name = namecopy; + strcpy(namecopy, name); + } + else + { + /* Assume the passed-in name is statically allocated */ + node->name = name; + } + + /* OK to link node into context tree */ if (parent) { - node->parent = parent; node->nextchild = parent->firstchild; if (parent->firstchild != NULL) parent->firstchild->prevchild = node; @@ -688,11 +673,13 @@ MemoryContextCreate(NodeTag tag, Size size, /* inherit allowInCritSection flag from parent */ node->allowInCritSection = parent->allowInCritSection; } + else + { + node->nextchild = NULL; + node->allowInCritSection = false; + } VALGRIND_CREATE_MEMPOOL(node, 0, false); - - /* Return to type-specific creation routine to finish up */ - return node; } /* diff --git a/src/backend/utils/mmgr/slab.c b/src/backend/utils/mmgr/slab.c index ee2175278d..c01c77913a 100644 --- a/src/backend/utils/mmgr/slab.c +++ b/src/backend/utils/mmgr/slab.c @@ -67,6 +67,7 @@ typedef struct SlabContext Size chunkSize; /* chunk size */ Size fullChunkSize; /* chunk size including header and alignment */ Size blockSize; /* block size */ + Size headerSize; /* allocated size of context header */ int chunksPerBlock; /* number of chunks per block */ int minFreeChunks; /* min number of free chunks in any block */ int nblocks; /* number of blocks allocated */ @@ -126,7 +127,6 @@ typedef struct SlabChunk static void *SlabAlloc(MemoryContext context, Size size); static void SlabFree(MemoryContext context, void *pointer); static void *SlabRealloc(MemoryContext context, void *pointer, Size size); -static void SlabInit(MemoryContext context); static void SlabReset(MemoryContext context); static void SlabDelete(MemoryContext context); static Size SlabGetChunkSpace(MemoryContext context, void *pointer); @@ -140,11 +140,10 @@ static void SlabCheck(MemoryContext context); /* * This is the virtual function table for Slab contexts. */ -static MemoryContextMethods SlabMethods = { +static const MemoryContextMethods SlabMethods = { SlabAlloc, SlabFree, SlabRealloc, - SlabInit, SlabReset, SlabDelete, SlabGetChunkSpace, @@ -177,24 +176,30 @@ static MemoryContextMethods SlabMethods = { * Create a new Slab context. * * parent: parent context, or NULL if top-level context - * name: name of context (for debugging --- string will be copied) + * name: name of context (for debugging only, need not be unique) + * flags: bitmask of MEMCONTEXT_XXX option flags * blockSize: allocation block size * chunkSize: allocation chunk size * + * Notes: if flags & MEMCONTEXT_COPY_NAME, the name string will be copied into + * context-lifespan storage; otherwise, it had better be statically allocated. 
* The chunkSize may not exceed: * MAXALIGN_DOWN(SIZE_MAX) - MAXALIGN(sizeof(SlabBlock)) - SLAB_CHUNKHDRSZ - * */ MemoryContext SlabContextCreate(MemoryContext parent, const char *name, + int flags, Size blockSize, Size chunkSize) { int chunksPerBlock; Size fullChunkSize; Size freelistSize; + Size nameOffset; + Size headerSize; SlabContext *slab; + int i; /* Assert we padded SlabChunk properly */ StaticAssertStmt(sizeof(SlabChunk) == MAXALIGN(sizeof(SlabChunk)), @@ -211,7 +216,7 @@ SlabContextCreate(MemoryContext parent, fullChunkSize = sizeof(SlabChunk) + MAXALIGN(chunkSize); /* Make sure the block can store at least one chunk. */ - if (blockSize - sizeof(SlabBlock) < fullChunkSize) + if (blockSize < fullChunkSize + sizeof(SlabBlock)) elog(ERROR, "block size %zu for slab is too small for %zu chunks", blockSize, chunkSize); @@ -221,45 +226,58 @@ SlabContextCreate(MemoryContext parent, /* The freelist starts with 0, ends with chunksPerBlock. */ freelistSize = sizeof(dlist_head) * (chunksPerBlock + 1); - /* if we can't fit at least one chunk into the block, we're hosed */ - Assert(chunksPerBlock > 0); + /* + * Allocate the context header. Unlike aset.c, we never try to combine + * this with the first regular block; not worth the extra complication. + */ - /* make sure the chunks actually fit on the block */ - Assert((fullChunkSize * chunksPerBlock) + sizeof(SlabBlock) <= blockSize); + /* Size of the memory context header, including name storage if needed */ + nameOffset = offsetof(SlabContext, freelist) + freelistSize; + if (flags & MEMCONTEXT_COPY_NAME) + headerSize = nameOffset + strlen(name) + 1; + else + headerSize = nameOffset; - /* Do the type-independent part of context creation */ - slab = (SlabContext *) - MemoryContextCreate(T_SlabContext, - (offsetof(SlabContext, freelist) + freelistSize), - &SlabMethods, - parent, - name); + slab = (SlabContext *) malloc(headerSize); + if (slab == NULL) + { + MemoryContextStats(TopMemoryContext); + ereport(ERROR, + (errcode(ERRCODE_OUT_OF_MEMORY), + errmsg("out of memory"), + errdetail("Failed while creating memory context \"%s\".", + name))); + } - slab->blockSize = blockSize; + /* + * Avoid writing code that can fail between here and MemoryContextCreate; + * we'd leak the header if we ereport in this stretch. + */ + + /* Fill in SlabContext-specific header fields */ slab->chunkSize = chunkSize; slab->fullChunkSize = fullChunkSize; + slab->blockSize = blockSize; + slab->headerSize = headerSize; slab->chunksPerBlock = chunksPerBlock; - slab->nblocks = 0; slab->minFreeChunks = 0; - - return (MemoryContext) slab; -} - -/* - * SlabInit - * Context-type-specific initialization routine. - */ -static void -SlabInit(MemoryContext context) -{ - int i; - SlabContext *slab = castNode(SlabContext, context); - - Assert(slab); + slab->nblocks = 0; /* initialize the freelist slots */ for (i = 0; i < (slab->chunksPerBlock + 1); i++) dlist_init(&slab->freelist[i]); + + /* Finally, do the type-independent part of context creation */ + MemoryContextCreate((MemoryContext) slab, + T_SlabContext, + headerSize, + nameOffset, + &SlabMethods, + parent, + name, + flags); + + return (MemoryContext) slab; } /* @@ -308,14 +326,15 @@ SlabReset(MemoryContext context) /* * SlabDelete - * Frees all memory which is allocated in the given slab, in preparation - * for deletion of the slab. We simply call SlabReset(). + * Free all memory which is allocated in the given context. 
*/ static void SlabDelete(MemoryContext context) { - /* just reset the context */ + /* Reset to release all the SlabBlocks */ SlabReset(context); + /* And free the context header */ + free(context); } /* @@ -613,7 +632,7 @@ SlabIsEmpty(MemoryContext context) /* * SlabStats - * Compute stats about memory consumption of an Slab. + * Compute stats about memory consumption of a Slab context. * * level: recursion level (0 at top level); used for print indentation. * print: true to print stats to stderr. @@ -626,11 +645,12 @@ SlabStats(MemoryContext context, int level, bool print, SlabContext *slab = castNode(SlabContext, context); Size nblocks = 0; Size freechunks = 0; - Size totalspace = 0; + Size totalspace; Size freespace = 0; int i; - Assert(slab); + /* Include context header in totalspace */ + totalspace = slab->headerSize; for (i = 0; i <= slab->chunksPerBlock; i++) { @@ -682,7 +702,7 @@ SlabCheck(MemoryContext context) { int i; SlabContext *slab = castNode(SlabContext, context); - char *name = slab->header.name; + const char *name = slab->header.name; char *freechunks; Assert(slab); diff --git a/src/include/nodes/memnodes.h b/src/include/nodes/memnodes.h index e22d9fb178..c7eb1e72e9 100644 --- a/src/include/nodes/memnodes.h +++ b/src/include/nodes/memnodes.h @@ -57,7 +57,6 @@ typedef struct MemoryContextMethods /* call this free_p in case someone #define's free() */ void (*free_p) (MemoryContext context, void *pointer); void *(*realloc) (MemoryContext context, void *pointer, Size size); - void (*init) (MemoryContext context); void (*reset) (MemoryContext context); void (*delete_context) (MemoryContext context); Size (*get_chunk_space) (MemoryContext context, void *pointer); @@ -76,12 +75,12 @@ typedef struct MemoryContextData /* these two fields are placed here to minimize alignment wastage: */ bool isReset; /* T = no space alloced since last reset */ bool allowInCritSection; /* allow palloc in critical section */ - MemoryContextMethods *methods; /* virtual function table */ + const MemoryContextMethods *methods; /* virtual function table */ MemoryContext parent; /* NULL if no parent (toplevel context) */ MemoryContext firstchild; /* head of linked list of children */ MemoryContext prevchild; /* previous child of same parent */ MemoryContext nextchild; /* next child of same parent */ - char *name; /* context name (just for debugging) */ + const char *name; /* context name (just for debugging) */ MemoryContextCallback *reset_cbs; /* list of reset/delete callbacks */ } MemoryContextData; diff --git a/src/include/utils/memutils.h b/src/include/utils/memutils.h index d177b0cc8d..9c30eb76e9 100644 --- a/src/include/utils/memutils.h +++ b/src/include/utils/memutils.h @@ -132,10 +132,12 @@ GetMemoryChunkContext(void *pointer) * context creation. It's intended to be called from context-type- * specific creation routines, and noplace else. 
*/ -extern MemoryContext MemoryContextCreate(NodeTag tag, Size size, - MemoryContextMethods *methods, +extern void MemoryContextCreate(MemoryContext node, + NodeTag tag, Size size, Size nameoffset, + const MemoryContextMethods *methods, MemoryContext parent, - const char *name); + const char *name, + int flags); /* @@ -143,23 +145,48 @@ extern MemoryContext MemoryContextCreate(NodeTag tag, Size size, */ /* aset.c */ -extern MemoryContext AllocSetContextCreate(MemoryContext parent, - const char *name, - Size minContextSize, - Size initBlockSize, - Size maxBlockSize); +extern MemoryContext AllocSetContextCreateExtended(MemoryContext parent, + const char *name, + int flags, + Size minContextSize, + Size initBlockSize, + Size maxBlockSize); + +/* + * This backwards compatibility macro only works for constant context names, + * and you must specify block sizes with one of the abstraction macros below. + */ +#ifdef HAVE__BUILTIN_CONSTANT_P +#define AllocSetContextCreate(parent, name, allocparams) \ + (StaticAssertExpr(__builtin_constant_p(name), \ + "Use AllocSetContextCreateExtended with MEMCONTEXT_COPY_NAME for non-constant context names"), \ + AllocSetContextCreateExtended(parent, name, 0, allocparams)) +#else +#define AllocSetContextCreate(parent, name, allocparams) \ + AllocSetContextCreateExtended(parent, name, 0, allocparams) +#endif /* slab.c */ extern MemoryContext SlabContextCreate(MemoryContext parent, const char *name, + int flags, Size blockSize, Size chunkSize); /* generation.c */ extern MemoryContext GenerationContextCreate(MemoryContext parent, const char *name, + int flags, Size blockSize); +/* + * Flag option bits for FooContextCreate functions. + * In future, some of these might be relevant to only some context types. + * + * COPY_NAME: FooContextCreate's name argument is not a constant string + */ +#define MEMCONTEXT_COPY_NAME 0x0001 /* is passed name transient? */ + /* * Recommended default alloc parameters, suitable for "ordinary" contexts * that might hold quite a lot of data. diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c index 9f5313235f..41fd0ba421 100644 --- a/src/pl/plperl/plperl.c +++ b/src/pl/plperl/plperl.c @@ -2777,9 +2777,10 @@ compile_plperl_function(Oid fn_oid, bool is_trigger, bool is_event_trigger) /************************************************************ * Allocate a context that will hold all PG data for the procedure. ************************************************************/ - proc_cxt = AllocSetContextCreate(TopMemoryContext, - NameStr(procStruct->proname), - ALLOCSET_SMALL_SIZES); + proc_cxt = AllocSetContextCreateExtended(TopMemoryContext, + NameStr(procStruct->proname), + MEMCONTEXT_COPY_NAME, + ALLOCSET_SMALL_SIZES); /************************************************************ * Allocate and fill a new procedure description block. 
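For orientation, here is a minimal sketch of how callers choose between the two creation paths under the revised API, based on the declarations in memutils.h above (the proc_name variable is hypothetical; the function names, flag, and size macros are the ones this patch introduces):

    /* Constant name: the plain macro suffices; the name is not copied. */
    MemoryContext tmp_cxt = AllocSetContextCreate(TopMemoryContext,
                                                  "TempContext",
                                                  ALLOCSET_DEFAULT_SIZES);

    /* Transient name (e.g. built per procedure): request a copy, since
     * the context header may outlive the string passed in. */
    MemoryContext proc_cxt = AllocSetContextCreateExtended(TopMemoryContext,
                                                           proc_name,
                                                           MEMCONTEXT_COPY_NAME,
                                                           ALLOCSET_SMALL_SIZES);

The PL/Perl hunk above and the PL/Python and PL/Tcl hunks below follow exactly this second pattern, since procedure names are not compile-time constants.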
diff --git a/src/pl/plpython/plpy_procedure.c b/src/pl/plpython/plpy_procedure.c index b7c24e356f..990a33cc6d 100644 --- a/src/pl/plpython/plpy_procedure.c +++ b/src/pl/plpython/plpy_procedure.c @@ -166,9 +166,10 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger) } /* Create long-lived context that all procedure info will live in */ - cxt = AllocSetContextCreate(TopMemoryContext, - procName, - ALLOCSET_DEFAULT_SIZES); + cxt = AllocSetContextCreateExtended(TopMemoryContext, + procName, + MEMCONTEXT_COPY_NAME, + ALLOCSET_DEFAULT_SIZES); oldcxt = MemoryContextSwitchTo(cxt); diff --git a/src/pl/tcl/pltcl.c b/src/pl/tcl/pltcl.c index e0792d93e1..8069784151 100644 --- a/src/pl/tcl/pltcl.c +++ b/src/pl/tcl/pltcl.c @@ -146,7 +146,7 @@ typedef struct pltcl_proc_desc Oid result_typid; /* OID of fn's result type */ FmgrInfo result_in_func; /* input function for fn's result type */ Oid result_typioparam; /* param to pass to same */ - bool fn_is_procedure;/* true if this is a procedure */ + bool fn_is_procedure; /* true if this is a procedure */ bool fn_retisset; /* true if function returns a set */ bool fn_retistuple; /* true if function returns composite */ bool fn_retisdomain; /* true if function returns domain */ @@ -1471,9 +1471,10 @@ compile_pltcl_function(Oid fn_oid, Oid tgreloid, * Allocate a context that will hold all PG data for the procedure. * We use the internal proc name as the context name. ************************************************************/ - proc_cxt = AllocSetContextCreate(TopMemoryContext, - internal_proname, - ALLOCSET_SMALL_SIZES); + proc_cxt = AllocSetContextCreateExtended(TopMemoryContext, + internal_proname, + MEMCONTEXT_COPY_NAME, + ALLOCSET_SMALL_SIZES); /************************************************************ * Allocate and fill a new procedure description block. From 1d6fb35ad62968c8678d0321887e2b9ca8fe1a84 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 13 Dec 2017 15:19:28 -0500 Subject: [PATCH 0699/1087] Revert "Fix accumulation of parallel worker instrumentation." This reverts commit 2c09a5c12a66087218c7f8cba269cd3de51b9b82. Per further discussion, that doesn't seem to be the best possible fix. Discussion: http://postgr.es/m/CAA4eK1LW2aFKzY3=vwvc=t-juzPPVWP2uT1bpx_MeyEqnM+p8g@mail.gmail.com --- src/backend/executor/execParallel.c | 51 +++++-------------- src/test/regress/expected/select_parallel.out | 21 -------- src/test/regress/sql/select_parallel.sql | 7 --- 3 files changed, 13 insertions(+), 66 deletions(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 558cb08b07..d57cdbd4e1 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -819,19 +819,6 @@ ExecParallelReinitialize(PlanState *planstate, /* Old workers must already be shut down */ Assert(pei->finished); - /* Clear the instrumentation space from the last round. */ - if (pei->instrumentation) - { - Instrumentation *instrument; - SharedExecutorInstrumentation *sh_instr; - int i; - - sh_instr = pei->instrumentation; - instrument = GetInstrumentationArray(sh_instr); - for (i = 0; i < sh_instr->num_workers * sh_instr->num_plan_nodes; ++i) - InstrInit(&instrument[i], pei->planstate->state->es_instrument); - } - /* Force parameters we're going to pass to workers to be evaluated. 
*/ ExecEvalParamExecParams(sendParams, estate); @@ -953,33 +940,21 @@ ExecParallelRetrieveInstrumentation(PlanState *planstate, for (n = 0; n < instrumentation->num_workers; ++n) InstrAggNode(planstate->instrument, &instrument[n]); - if (!planstate->worker_instrument) - { - /* - * Allocate space for the per-worker detail. - * - * Worker instrumentation should be allocated in the same context as - * the regular instrumentation information, which is the per-query - * context. Switch into per-query memory context. - */ - oldcontext = MemoryContextSwitchTo(planstate->state->es_query_cxt); - ibytes = - mul_size(instrumentation->num_workers, sizeof(Instrumentation)); - planstate->worker_instrument = - palloc(ibytes + offsetof(WorkerInstrumentation, instrument)); - MemoryContextSwitchTo(oldcontext); - - for (n = 0; n < instrumentation->num_workers; ++n) - InstrInit(&planstate->worker_instrument->instrument[n], - planstate->state->es_instrument); - } + /* + * Also store the per-worker detail. + * + * Worker instrumentation should be allocated in the same context as the + * regular instrumentation information, which is the per-query context. + * Switch into per-query memory context. + */ + oldcontext = MemoryContextSwitchTo(planstate->state->es_query_cxt); + ibytes = mul_size(instrumentation->num_workers, sizeof(Instrumentation)); + planstate->worker_instrument = + palloc(ibytes + offsetof(WorkerInstrumentation, instrument)); + MemoryContextSwitchTo(oldcontext); planstate->worker_instrument->num_workers = instrumentation->num_workers; - - /* Accumulate the per-worker detail. */ - for (n = 0; n < instrumentation->num_workers; ++n) - InstrAggNode(&planstate->worker_instrument->instrument[n], - &instrument[n]); + memcpy(&planstate->worker_instrument->instrument, instrument, ibytes); /* Perform any node-type-specific work that needs to be done. 
*/ switch (nodeTag(planstate)) diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index ff00d47f65..86a55922c8 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -465,28 +465,7 @@ select count(*) from bmscantest where a>1; 99999 (1 row) --- test accumulation of stats for parallel node reset enable_seqscan; -alter table tenk2 set (parallel_workers = 0); -explain (analyze, timing off, summary off, costs off) - select count(*) from tenk1, tenk2 where tenk1.hundred > 1 - and tenk2.thousand=0; - QUERY PLAN --------------------------------------------------------------------------- - Aggregate (actual rows=1 loops=1) - -> Nested Loop (actual rows=98000 loops=1) - -> Seq Scan on tenk2 (actual rows=10 loops=1) - Filter: (thousand = 0) - Rows Removed by Filter: 9990 - -> Gather (actual rows=9800 loops=10) - Workers Planned: 4 - Workers Launched: 4 - -> Parallel Seq Scan on tenk1 (actual rows=1960 loops=50) - Filter: (hundred > 1) - Rows Removed by Filter: 40 -(11 rows) - -alter table tenk2 reset (parallel_workers); reset enable_indexscan; reset enable_hashjoin; reset enable_mergejoin; diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index 1035d04d1a..fb35ca3376 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -179,14 +179,7 @@ insert into bmscantest select r, 'fooooooooooooooooooooooooooooooooooooooooooooo create index i_bmtest ON bmscantest(a); select count(*) from bmscantest where a>1; --- test accumulation of stats for parallel node reset enable_seqscan; -alter table tenk2 set (parallel_workers = 0); -explain (analyze, timing off, summary off, costs off) - select count(*) from tenk1, tenk2 where tenk1.hundred > 1 - and tenk2.thousand=0; -alter table tenk2 reset (parallel_workers); - reset enable_indexscan; reset enable_hashjoin; reset enable_mergejoin; From 884a60840cd684dd7925e7a4f9bf10288c37694d Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 13 Dec 2017 16:09:00 -0500 Subject: [PATCH 0700/1087] Fix parallel index scan hang with deleted or half-dead pages. The previous coding forgot to release the scan before seizing it again, leading to a lockup. Report by Patrick Hemmer. Diagnosis by Thomas Munro. Patch by Amit Kapila. 
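In outline, the fix makes the deleted/half-dead-page branch release the parallel scan before the code loops around and tries to seize it again. A simplified sketch of the corrected shape (condensed from the _bt_readnextpage hunks below; not a verbatim excerpt):

    for (;;)
    {
        if (!P_IGNORE(opaque))
        {
            /* live page: try to read matching tuples from it */
            if (_bt_readpage(scan, dir, P_FIRSTDATAKEY(opaque)))
                break;
        }
        else if (scan->parallel_scan != NULL)
        {
            /* deleted or half-dead page: hand the next page to some
             * other worker now, or the seize below can wait forever */
            _bt_parallel_release(scan, opaque->btpo_next);
        }

        /* keep going: seize the scan to obtain the next page (omitted) */
    }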
Discussion: http://postgr.es/m/CAEepm=2xZUcOGP9V0O_G0=2P2wwXwPrkF=upWTCJSisUxMnuSg@mail.gmail.com
---
 src/backend/access/nbtree/nbtsearch.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 558113bd13..12d3f081b6 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -1486,6 +1486,11 @@ _bt_readnextpage(IndexScanDesc scan, BlockNumber blkno, ScanDirection dir)
 			if (_bt_readpage(scan, dir, P_FIRSTDATAKEY(opaque)))
 				break;
 		}
+		else if (scan->parallel_scan != NULL)
+		{
+			/* allow next page to be processed by parallel worker */
+			_bt_parallel_release(scan, opaque->btpo_next);
+		}

 		/* nope, keep going */
 		if (scan->parallel_scan != NULL)
@@ -1581,6 +1586,11 @@ _bt_readnextpage(IndexScanDesc scan, BlockNumber blkno, ScanDirection dir)
 			if (_bt_readpage(scan, dir, PageGetMaxOffsetNumber(page)))
 				break;
 		}
+		else if (scan->parallel_scan != NULL)
+		{
+			/* allow next page to be processed by parallel worker */
+			_bt_parallel_release(scan, BufferGetBlockNumber(so->currPos.buf));
+		}

 		/*
 		 * For parallel scans, get the last page scanned as it is quite

From 923e8dee88ada071fe41541e83f121ead4baf7f8 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Wed, 13 Dec 2017 12:51:32 -0800
Subject: [PATCH 0701/1087] Add defenses against pre-crash files to
 BufFileOpenShared().

Crash restarts currently don't clean up temporary files, as a debugging
aid.  If a left-over file happens to have the same name as a segment
file we're trying to create, we'll just truncate and reuse it, but
there is a problem: BufFileOpenShared() determines how many segment
files exist by trying to open .0, .1, .2, ... until it finds no more
files.  It might be confused by a junk file that has the next segment
number.

To defend against that, make sure we always create a gap after the
end file by unlinking the following name if it exists.

Also make it an error to try to open a BufFile that doesn't exist (has
no segment 0), so as not to encourage the development of client code
that depends on an interface that we can't reliably provide.

Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm%3D2jhCbC_GFQJaaDhWxLB4EXtT3vVd5czuRNaqF5CWSTog%40mail.gmail.com
---
 src/backend/storage/file/buffile.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c
index fa9940da9b..c6b210d137 100644
--- a/src/backend/storage/file/buffile.c
+++ b/src/backend/storage/file/buffile.c
@@ -211,6 +211,16 @@ MakeNewSharedSegment(BufFile *buffile, int segment)
 	char		name[MAXPGPATH];
 	File		file;

+	/*
+	 * It is possible that there are files left over from before a crash
+	 * restart with the same name.  In order for BufFileOpenShared()
+	 * not to get confused about how many segments there are, we'll unlink
+	 * the next segment number if it already exists.
+	 */
+	SharedSegmentName(name, buffile->name, segment + 1);
+	SharedFileSetDelete(buffile->fileset, name, true);
+
+	/* Create the new segment. */
 	SharedSegmentName(name, buffile->name, segment);
 	file = SharedFileSetCreate(buffile->fileset, name);

@@ -303,7 +313,9 @@ BufFileOpenShared(SharedFileSet *fileset, const char *name)
 	 * name.
 	 */
 	if (nfiles == 0)
-		return NULL;
+		ereport(ERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not open BufFile \"%s\"", name)));

 	file->numFiles = nfiles;
 	file->files = files;

From dbb3d6f0102e0aca7575ff864450fca57ac85517 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Wed, 13 Dec 2017 15:34:20 -0800
Subject: [PATCH 0702/1087] Add pg_attribute_always_inline.

Sometimes it is useful to be able to insist that the compiler inline a
function that its cost analysis would not normally choose to inline.
This can be useful for instantiating different variants of a function
that remove branches of code by constant folding.

Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=09rr65VN+cAV5FgyM_z=D77Xy8Fuc9CDDDYbq3pQUezg@mail.gmail.com
---
 src/include/c.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/include/c.h b/src/include/c.h
index a61428843a..11fcffbae3 100644
--- a/src/include/c.h
+++ b/src/include/c.h
@@ -146,6 +146,16 @@
 #define pg_attribute_noreturn()
 #endif

+/* GCC, Sunpro and XLC support always_inline via __attribute__ */
+#if defined(__GNUC__)
+#define pg_attribute_always_inline __attribute__((always_inline))
+/* msvc via a special keyword */
+#elif defined(_MSC_VER)
+#define pg_attribute_always_inline __forceinline
+#else
+#define pg_attribute_always_inline
+#endif
+
 /*
  * Forcing a function not to be inlined can be useful if it's the slow path of
  * a performance-critical function, or should be visible in profiles to allow

From 538d114f6d72cbc94122ab522e002e63359cff5b Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Wed, 13 Dec 2017 15:47:01 -0800
Subject: [PATCH 0703/1087] Allow executor nodes to change their ExecProcNode
 function.

In order for executor nodes to be able to change their ExecProcNode
function after ExecInitNode() has finished, provide
ExecSetExecProcNode(). This allows any wrapper functions that only
execProcnode.c knows about to be reinstalled.

The motivation for wanting to change ExecProcNode after ExecInitNode()
has finished is that it is not known until later whether parallel query
is available, so if a parallel variant is to be installed then
ExecInitNode() is too soon to decide.

Author: Thomas Munro
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAEepm=09rr65VN+cAV5FgyM_z=D77Xy8Fuc9CDDDYbq3pQUezg@mail.gmail.com
---
 src/backend/executor/execProcnode.c | 28 ++++++++++++++++++++++------
 src/include/executor/executor.h     |  1 +
 2 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 9befca9016..fcb8b56999 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -370,12 +370,7 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 			break;
 	}

-	/*
-	 * Add a wrapper around the ExecProcNode callback that checks stack depth
-	 * during the first execution.
-	 */
-	result->ExecProcNodeReal = result->ExecProcNode;
-	result->ExecProcNode = ExecProcNodeFirst;
+	ExecSetExecProcNode(result, result->ExecProcNode);

 	/*
 	 * Initialize any initPlans present in this node.  The planner put them in
@@ -401,6 +396,27 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 }


+/*
+ * If a node wants to change its ExecProcNode function after ExecInitNode()
+ * has finished, it should do so with this function.  That way any wrapper
+ * functions can be reinstalled, without the node having to know how that
+ * works.
+ */ +void +ExecSetExecProcNode(PlanState *node, ExecProcNodeMtd function) +{ + /* + * Add a wrapper around the ExecProcNode callback that checks stack depth + * during the first execution and maybe adds an instrumentation + * wrapper. When the callback is changed after execution has already begun + * that means we'll superflously execute ExecProcNodeFirst, but that seems + * ok. + */ + node->ExecProcNodeReal = function; + node->ExecProcNode = ExecProcNodeFirst; +} + + /* * ExecProcNode wrapper that performs some one-time checks, before calling * the relevant node method (possibly via an instrumentation wrapper). diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index b5578f5855..dea9216fd6 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -219,6 +219,7 @@ extern void EvalPlanQualEnd(EPQState *epqstate); * functions in execProcnode.c */ extern PlanState *ExecInitNode(Plan *node, EState *estate, int eflags); +extern void ExecSetExecProcNode(PlanState *node, ExecProcNodeMtd function); extern Node *MultiExecProcNode(PlanState *node); extern void ExecEndNode(PlanState *node); extern bool ExecShutdownNode(PlanState *node); From 1fcd0adeb38d6ef36066134bb3b44acc5a249a98 Mon Sep 17 00:00:00 2001 From: Teodor Sigaev Date: Thu, 14 Dec 2017 14:30:22 +0300 Subject: [PATCH 0704/1087] Add approximated Zipfian-distributed random generator to pgbench. Generator helps to make close to real-world tests. Author: Alik Khilazhev Reviewed-By: Fabien COELHO Discussion: https://www.postgresql.org/message-id/flat/BF3B6F54-68C3-417A-BFAB-FB4D66F2B410@postgrespro.ru --- doc/src/sgml/ref/pgbench.sgml | 29 +++ src/bin/pgbench/exprparse.y | 3 + src/bin/pgbench/pgbench.c | 193 ++++++++++++++++++- src/bin/pgbench/pgbench.h | 3 +- src/bin/pgbench/t/001_pgbench_with_server.pl | 38 +++- 5 files changed, 263 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index 94b495e606..4431fc3eb7 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -1092,6 +1092,14 @@ pgbench options d random_gaussian(1, 10, 2.5) an integer between 1 and 10 + + random_zipfian(lb, ub, parameter) + integer + Zipfian-distributed random integer in [lb, ub], + see below + random_zipfian(1, 10, 1.5) + an integer between 1 and 10 + sqrt(x) double @@ -1173,6 +1181,27 @@ f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) / of the Box-Muller transform. + + + random_zipfian generates an approximated bounded zipfian + distribution. For parameter in (0, 1), an + approximated algorithm is taken from + "Quickly Generating Billion-Record Synthetic Databases", + Jim Gray et al, SIGMOD 1994. For parameter + in (1, 1000), a rejection method is used, based on + "Non-Uniform Random Variate Generation", Luc Devroye, p. 550-551, + Springer 1986. The distribution is not defined when the parameter's + value is 1.0. The drawing performance is poor for parameter values + close and above 1.0 and on a small range. + + + parameter + defines how skewed the distribution is. The larger the parameter, the more + frequently values to the beginning of the interval are drawn. + The closer to 0 parameter is, + the flatter (more uniform) the access distribution. 
+ + diff --git a/src/bin/pgbench/exprparse.y b/src/bin/pgbench/exprparse.y index b3a2d9bfd3..25d5ad48e5 100644 --- a/src/bin/pgbench/exprparse.y +++ b/src/bin/pgbench/exprparse.y @@ -191,6 +191,9 @@ static const struct { "random_exponential", 3, PGBENCH_RANDOM_EXPONENTIAL }, + { + "random_zipfian", 3, PGBENCH_RANDOM_ZIPFIAN + }, /* keep as last array element */ { NULL, 0, 0 diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index bd96eae5e6..7ce6f607f5 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -95,7 +95,10 @@ static int pthread_join(pthread_t th, void **thread_return); #define LOG_STEP_SECONDS 5 /* seconds between log messages */ #define DEFAULT_NXACTS 10 /* default nxacts */ +#define ZIPF_CACHE_SIZE 15 /* cache cells number */ + #define MIN_GAUSSIAN_PARAM 2.0 /* minimum parameter for gauss */ +#define MAX_ZIPFIAN_PARAM 1000 /* maximum parameter for zipfian */ int nxacts = 0; /* number of transactions per client */ int duration = 0; /* duration in seconds */ @@ -330,6 +333,35 @@ typedef struct int ecnt; /* error count */ } CState; +/* + * Cache cell for zipfian_random call + */ +typedef struct +{ + /* cell keys */ + double s; /* s - parameter of zipfan_random function */ + int64 n; /* number of elements in range (max - min + 1) */ + + double harmonicn; /* generalizedHarmonicNumber(n, s) */ + double alpha; + double beta; + double eta; + + uint64 last_used; /* last used logical time */ +} ZipfCell; + +/* + * Zipf cache for zeta values + */ +typedef struct +{ + uint64 current; /* counter for LRU cache replacement algorithm */ + + int nb_cells; /* number of filled cells */ + int overflowCount; /* number of cache overflows */ + ZipfCell cells[ZIPF_CACHE_SIZE]; +} ZipfCache; + /* * Thread state */ @@ -342,6 +374,8 @@ typedef struct unsigned short random_state[3]; /* separate randomness for each thread */ int64 throttle_trigger; /* previous/next throttling (us) */ FILE *logfile; /* where to log, or NULL */ + ZipfCache zipf_cache; /* for thread-safe zipfian random number + * generation */ /* per thread collected stats */ instr_time start_time; /* thread start time */ @@ -746,6 +780,137 @@ getPoissonRand(TState *thread, int64 center) return (int64) (-log(uniform) * ((double) center) + 0.5); } +/* helper function for getZipfianRand */ +static double +generalizedHarmonicNumber(int64 n, double s) +{ + int i; + double ans = 0.0; + + for (i = n; i > 1; i--) + ans += pow(i, -s); + return ans + 1.0; +} + +/* set harmonicn and other parameters to cache cell */ +static void +zipfSetCacheCell(ZipfCell * cell, int64 n, double s) +{ + double harmonic2; + + cell->n = n; + cell->s = s; + + harmonic2 = generalizedHarmonicNumber(2, s); + cell->harmonicn = generalizedHarmonicNumber(n, s); + + cell->alpha = 1.0 / (1.0 - s); + cell->beta = pow(0.5, s); + cell->eta = (1.0 - pow(2.0 / n, 1.0 - s)) / (1.0 - harmonic2 / cell->harmonicn); +} + +/* + * search for cache cell with keys (n, s) + * and create new cell if it does not exist + */ +static ZipfCell * +zipfFindOrCreateCacheCell(ZipfCache * cache, int64 n, double s) +{ + int i, + least_recently_used = 0; + ZipfCell *cell; + + /* search cached cell for given parameters */ + for (i = 0; i < cache->nb_cells; i++) + { + cell = &cache->cells[i]; + if (cell->n == n && cell->s == s) + return &cache->cells[i]; + + if (cell->last_used < cache->cells[least_recently_used].last_used) + least_recently_used = i; + } + + /* create new one if it does not exist */ + if (cache->nb_cells < ZIPF_CACHE_SIZE) + i = cache->nb_cells++; + else 
+ { + /* replace LRU cell if cache is full */ + i = least_recently_used; + cache->overflowCount++; + } + + zipfSetCacheCell(&cache->cells[i], n, s); + + cache->cells[i].last_used = cache->current++; + return &cache->cells[i]; +} + +/* + * Computing zipfian using rejection method, based on + * "Non-Uniform Random Variate Generation", + * Luc Devroye, p. 550-551, Springer 1986. + */ +static int64 +computeIterativeZipfian(TState *thread, int64 n, double s) +{ + double b = pow(2.0, s - 1.0); + double x, + t, + u, + v; + + while (true) + { + /* random variates */ + u = pg_erand48(thread->random_state); + v = pg_erand48(thread->random_state); + + x = floor(pow(u, -1.0 / (s - 1.0))); + + t = pow(1.0 + 1.0 / x, s - 1.0); + /* reject if too large or out of bound */ + if (v * x * (t - 1.0) / (b - 1.0) <= t / b && x <= n) + break; + } + return (int64) x; +} + +/* + * Computing zipfian using harmonic numbers, based on algorithm described in + * "Quickly Generating Billion-Record Synthetic Databases", + * Jim Gray et al, SIGMOD 1994 + */ +static int64 +computeHarmonicZipfian(TState *thread, int64 n, double s) +{ + ZipfCell *cell = zipfFindOrCreateCacheCell(&thread->zipf_cache, n, s); + double uniform = pg_erand48(thread->random_state); + double uz = uniform * cell->harmonicn; + + if (uz < 1.0) + return 1; + if (uz < 1.0 + cell->beta) + return 2; + return 1 + (int64) (cell->n * pow(cell->eta * uniform - cell->eta + 1.0, cell->alpha)); +} + +/* random number generator: zipfian distribution from min to max inclusive */ +static int64 +getZipfianRand(TState *thread, int64 min, int64 max, double s) +{ + int64 n = max - min + 1; + + /* abort if parameter is invalid */ + Assert(s > 0.0 && s != 1.0 && s <= MAX_ZIPFIAN_PARAM); + + + return min - 1 + ((s > 1) + ? computeIterativeZipfian(thread, n, s) + : computeHarmonicZipfian(thread, n, s)); +} + /* * Initialize the given SimpleStats struct to all zeroes */ @@ -1303,7 +1468,6 @@ coerceToDouble(PgBenchValue *pval, double *dval) return true; } } - /* assign an integer value */ static void setIntValue(PgBenchValue *pv, int64 ival) @@ -1605,6 +1769,7 @@ evalFunc(TState *thread, CState *st, case PGBENCH_RANDOM: case PGBENCH_RANDOM_EXPONENTIAL: case PGBENCH_RANDOM_GAUSSIAN: + case PGBENCH_RANDOM_ZIPFIAN: { int64 imin, imax; @@ -1655,6 +1820,18 @@ evalFunc(TState *thread, CState *st, setIntValue(retval, getGaussianRand(thread, imin, imax, param)); } + else if (func == PGBENCH_RANDOM_ZIPFIAN) + { + if (param <= 0.0 || param == 1.0 || param > MAX_ZIPFIAN_PARAM) + { + fprintf(stderr, + "zipfian parameter must be in range (0, 1) U (1, %d]" + " (got %f)\n", MAX_ZIPFIAN_PARAM, param); + return false; + } + setIntValue(retval, + getZipfianRand(thread, imin, imax, param)); + } else /* exponential */ { if (param <= 0.0) @@ -3683,6 +3860,8 @@ printResults(TState *threads, StatsData *total, instr_time total_time, tps_include, tps_exclude; int64 ntx = total->cnt - total->skipped; + int i, + totalCacheOverflows = 0; time_include = INSTR_TIME_GET_DOUBLE(total_time); @@ -3710,6 +3889,15 @@ printResults(TState *threads, StatsData *total, instr_time total_time, printf("number of transactions actually processed: " INT64_FORMAT "\n", ntx); } + /* Report zipfian cache overflow */ + for (i = 0; i < nthreads; i++) + { + totalCacheOverflows += threads[i].zipf_cache.overflowCount; + } + if (totalCacheOverflows > 0) + { + printf("zipfian cache array overflowed %d time(s)\n", totalCacheOverflows); + } /* Remaining stats are nonsensical if we failed to execute any xacts */ if (total->cnt <= 0) 
@@ -4513,6 +4701,9 @@ main(int argc, char **argv) thread->random_state[2] = random(); thread->logfile = NULL; /* filled in later */ thread->latency_late = 0; + thread->zipf_cache.nb_cells = 0; + thread->zipf_cache.current = 0; + thread->zipf_cache.overflowCount = 0; initStats(&thread->stats, 0); nclients_dealt += thread->nstate; diff --git a/src/bin/pgbench/pgbench.h b/src/bin/pgbench/pgbench.h index fd428af274..83fee1ae74 100644 --- a/src/bin/pgbench/pgbench.h +++ b/src/bin/pgbench/pgbench.h @@ -75,7 +75,8 @@ typedef enum PgBenchFunction PGBENCH_SQRT, PGBENCH_RANDOM, PGBENCH_RANDOM_GAUSSIAN, - PGBENCH_RANDOM_EXPONENTIAL + PGBENCH_RANDOM_EXPONENTIAL, + PGBENCH_RANDOM_ZIPFIAN } PgBenchFunction; typedef struct PgBenchExpr PgBenchExpr; diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index c095881312..e3cdf28628 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -231,7 +231,8 @@ sub pgbench qr{command=18.: double 18\b}, qr{command=19.: double 19\b}, qr{command=20.: double 20\b}, - qr{command=21.: int 9223372036854775807\b}, ], + qr{command=21.: int 9223372036854775807\b}, + qr{command=23.: int [1-9]\b}, ], 'pgbench expressions', { '001_pgbench_expressions' => q{-- integer functions \set i1 debug(random(1, 100)) @@ -261,6 +262,8 @@ sub pgbench \set maxint debug(:minint - 1) -- reset a variable \set i1 0 +-- yet another integer function +\set id debug(random_zipfian(1, 9, 1.3)) } }); # backslash commands @@ -371,6 +374,14 @@ sub pgbench 0, [qr{exponential parameter must be greater }], q{\set i random_exponential(0, 10, 0.0)} ], + [ 'set zipfian param to 1', + 0, + [qr{zipfian parameter must be in range \(0, 1\) U \(1, \d+\]}], + q{\set i random_zipfian(0, 10, 1)} ], + [ 'set zipfian param too large', + 0, + [qr{zipfian parameter must be in range \(0, 1\) U \(1, \d+\]}], + q{\set i random_zipfian(0, 10, 1000000)} ], [ 'set non numeric value', 0, [qr{malformed variable "foo" value: "bla"}], q{\set i :foo + 1} ], [ 'set no expression', 1, [qr{syntax error}], q{\set i} ], @@ -412,6 +423,31 @@ sub pgbench { $n => $script }); } +# zipfian cache array overflow +pgbench( + '-t 1', 0, + [ qr{processed: 1/1}, qr{zipfian cache array overflowed 1 time\(s\)} ], + [ qr{^} ], + 'pgbench zipfian array overflow on random_zipfian', + { '001_pgbench_random_zipfian' => q{ +\set i random_zipfian(1, 100, 0.5) +\set i random_zipfian(2, 100, 0.5) +\set i random_zipfian(3, 100, 0.5) +\set i random_zipfian(4, 100, 0.5) +\set i random_zipfian(5, 100, 0.5) +\set i random_zipfian(6, 100, 0.5) +\set i random_zipfian(7, 100, 0.5) +\set i random_zipfian(8, 100, 0.5) +\set i random_zipfian(9, 100, 0.5) +\set i random_zipfian(10, 100, 0.5) +\set i random_zipfian(11, 100, 0.5) +\set i random_zipfian(12, 100, 0.5) +\set i random_zipfian(13, 100, 0.5) +\set i random_zipfian(14, 100, 0.5) +\set i random_zipfian(15, 100, 0.5) +\set i random_zipfian(16, 100, 0.5) +} }); + # throttling pgbench( '-t 100 -S --rate=100000 --latency-limit=1000000 -c 2 -n -r', From 0fedb4ea6946e72c5c51130446b59b083ba3dd21 Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Thu, 14 Dec 2017 11:13:14 -0500 Subject: [PATCH 0705/1087] Fix walsender timeouts when decoding a large transaction The logical slots have a fast code path for sending data so as not to impose too high a per message overhead. The fast path skips checks for interrupts and timeouts. 
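As a preview of the change (the full WalSndWriteData() diff is below), the fast path is now gated on an interrupt check and on being comfortably inside the timeout window with nothing left buffered:

    CHECK_FOR_INTERRUPTS();

    /* Try to flush pending output to the client */
    if (pq_flush_if_writable() != 0)
        WalSndShutdown();

    /* Try taking fast path unless we get too close to walsender timeout. */
    if (now < TimestampTzPlusMilliseconds(last_reply_timestamp,
                                          wal_sender_timeout / 2) &&
        !pq_is_send_pending())
        return;

    /* otherwise fall through to the slow path, which processes client
     * replies, enforces the timeout, and sends keepalives */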
However, the existing coding failed to consider the fact that a transaction with a large number of changes may take a very long time to be processed and sent to the client. This causes the walsender to ignore interrupts for potentially a long time and more importantly it will result in the walsender being killed due to timeout at the end of such a transaction. This commit changes the fast path to also check for interrupts and only allows calling the fast path when the last keepalive check happened less than half the walsender timeout ago. Otherwise the slower code path will be taken. Backpatched to 9.4 Petr Jelinek, reviewed by Kyotaro HORIGUCHI, Yura Sokolov, Craig Ringer and Robert Haas. Discussion: https://postgr.es/m/e082a56a-fd95-a250-3bae-0fff93832510@2ndquadrant.com --- src/backend/replication/walsender.c | 66 ++++++++++++++++------------- 1 file changed, 37 insertions(+), 29 deletions(-) diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index fa1db748b5..6a252fcf45 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -1151,6 +1151,8 @@ static void WalSndWriteData(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid, bool last_write) { + TimestampTz now; + /* output previously gathered data in a CopyData packet */ pq_putmessage_noblock('d', ctx->out->data, ctx->out->len); @@ -1160,23 +1162,54 @@ WalSndWriteData(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid, * several releases by streaming physical replication. */ resetStringInfo(&tmpbuf); - pq_sendint64(&tmpbuf, GetCurrentTimestamp()); + now = GetCurrentTimestamp(); + pq_sendint64(&tmpbuf, now); memcpy(&ctx->out->data[1 + sizeof(int64) + sizeof(int64)], tmpbuf.data, sizeof(int64)); - /* fast path */ + CHECK_FOR_INTERRUPTS(); + /* Try to flush pending output to the client */ if (pq_flush_if_writable() != 0) WalSndShutdown(); - if (!pq_is_send_pending()) + /* Try taking fast path unless we get too close to walsender timeout. */ + if (now < TimestampTzPlusMilliseconds(last_reply_timestamp, + wal_sender_timeout / 2) && + !pq_is_send_pending()) + { return; + } + /* If we have pending write here, go to slow path */ for (;;) { int wakeEvents; long sleeptime; - TimestampTz now; + + /* Check for input from the client */ + ProcessRepliesIfAny(); + + now = GetCurrentTimestamp(); + + /* die if timeout was reached */ + WalSndCheckTimeOut(now); + + /* Send keepalive if the time has come */ + WalSndKeepaliveIfNecessary(now); + + if (!pq_is_send_pending()) + break; + + sleeptime = WalSndComputeSleeptime(now); + + wakeEvents = WL_LATCH_SET | WL_POSTMASTER_DEATH | + WL_SOCKET_WRITEABLE | WL_SOCKET_READABLE | WL_TIMEOUT; + + /* Sleep until something happens or we time out */ + WaitLatchOrSocket(MyLatch, wakeEvents, + MyProcPort->sock, sleeptime, + WAIT_EVENT_WAL_SENDER_WRITE_DATA); /* * Emergency bailout if postmaster has died. This is to avoid the @@ -1198,34 +1231,9 @@ WalSndWriteData(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid, SyncRepInitConfig(); } - /* Check for input from the client */ - ProcessRepliesIfAny(); - /* Try to flush pending output to the client */ if (pq_flush_if_writable() != 0) WalSndShutdown(); - - /* If we finished clearing the buffered data, we're done here. 
*/ - if (!pq_is_send_pending()) - break; - - now = GetCurrentTimestamp(); - - /* die if timeout was reached */ - WalSndCheckTimeOut(now); - - /* Send keepalive if the time has come */ - WalSndKeepaliveIfNecessary(now); - - sleeptime = WalSndComputeSleeptime(now); - - wakeEvents = WL_LATCH_SET | WL_POSTMASTER_DEATH | - WL_SOCKET_WRITEABLE | WL_SOCKET_READABLE | WL_TIMEOUT; - - /* Sleep until something happens or we time out */ - WaitLatchOrSocket(MyLatch, wakeEvents, - MyProcPort->sock, sleeptime, - WAIT_EVENT_WAL_SENDER_WRITE_DATA); } /* reactivate latch so WalSndLoop knows to continue */ From 11b8f076c02b4ff0230430fb8d82c80acc450c90 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 14 Dec 2017 12:33:48 -0800 Subject: [PATCH 0706/1087] Fix a number of copy & paste comment errors in common/int.h. Author: Christoph Berg Discussion: https://postgr.es/m/20171214082808.GA5775@msg.df7cb.de --- src/include/common/int.h | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/src/include/common/int.h b/src/include/common/int.h index e44d42f7da..e6907c699f 100644 --- a/src/include/common/int.h +++ b/src/include/common/int.h @@ -41,7 +41,7 @@ pg_add_s16_overflow(int16 a, int16 b, int16 *result) } /* - * If a - b overflows, return true, otherwise store the result of a + b into + * If a - b overflows, return true, otherwise store the result of a - b into * *result. The content of *result is implementation defined in case of * overflow. */ @@ -61,7 +61,7 @@ pg_sub_s16_overflow(int16 a, int16 b, int16 *result) } /* - * If a * b overflows, return true, otherwise store the result of a + b into + * If a * b overflows, return true, otherwise store the result of a * b into * *result. The content of *result is implementation defined in case of * overflow. */ @@ -101,7 +101,7 @@ pg_add_s32_overflow(int32 a, int32 b, int32 *result) } /* - * If a - b overflows, return true, otherwise store the result of a + b into + * If a - b overflows, return true, otherwise store the result of a - b into * *result. The content of *result is implementation defined in case of * overflow. */ @@ -121,7 +121,7 @@ pg_sub_s32_overflow(int32 a, int32 b, int32 *result) } /* - * If a * b overflows, return true, otherwise store the result of a + b into + * If a * b overflows, return true, otherwise store the result of a * b into * *result. The content of *result is implementation defined in case of * overflow. */ @@ -167,7 +167,7 @@ pg_add_s64_overflow(int64 a, int64 b, int64 *result) } /* - * If a - b overflows, return true, otherwise store the result of a + b into + * If a - b overflows, return true, otherwise store the result of a - b into * *result. The content of *result is implementation defined in case of * overflow. */ @@ -193,7 +193,7 @@ pg_sub_s64_overflow(int64 a, int64 b, int64 *result) } /* - * If a * b overflows, return true, otherwise store the result of a + b into + * If a * b overflows, return true, otherwise store the result of a * b into * *result. The content of *result is implementation defined in case of * overflow. */ From 9220b00e57352fda988b187940f5d5ac4851a8bb Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 14 Dec 2017 17:19:27 -0500 Subject: [PATCH 0707/1087] Tighten configure's test for __builtin_constant_p(). Commit 9fa6f00b1 assumed that __builtin_constant_p("string literal") is TRUE, if the compiler has that function at all. Buildfarm results show that Sun Studio 12, at least, breaks that assumption. 
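Concretely, the tightened probe (see the configure diff below) compiles the following fragment, which is only valid C when __builtin_constant_p() folds to true for a string literal and to false for a plain static variable, because each array bound must be a compile-time constant:

    static int x;                                                  /* not a constant */
    static int y[__builtin_constant_p(x) ? x : 1];                 /* needs false => bound is 1 */
    static int z[__builtin_constant_p("string literal") ? 1 : x];  /* needs true  => bound is 1 */

If a compiler claims the string literal is non-constant, z's bound becomes the non-constant x, the compile fails, and configure leaves HAVE__BUILTIN_CONSTANT_P undefined.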
Removing that usage would leave us with no mechanical check for a very
fragile coding requirement, so instead teach configure to ignore
__builtin_constant_p() if it doesn't behave that way.

We could complicate matters by distinguishing three cases (no such
function, vs does, vs doesn't work for string literals); but for now,
that seems unnecessary because our other existing uses of this function
are just fairly minor optimizations of non-returning elog/ereport.  We
can live without that on the small population of compilers that act
this way.

Discussion: https://postgr.es/m/22997.1513264066@sss.pgh.pa.us
---
 config/c-compiler.m4 | 7 ++++++-
 configure            | 5 ++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/config/c-compiler.m4 b/config/c-compiler.m4
index ed26644a48..67323ade12 100644
--- a/config/c-compiler.m4
+++ b/config/c-compiler.m4
@@ -285,10 +285,15 @@ fi])# PGAC_C_BUILTIN_BSWAP64
 # -------------------------
 # Check if the C compiler understands __builtin_constant_p(),
 # and define HAVE__BUILTIN_CONSTANT_P if so.
+# We need __builtin_constant_p("string literal") to be true, but some older
+# compilers don't think that, so test for that case explicitly.
 AC_DEFUN([PGAC_C_BUILTIN_CONSTANT_P],
 [AC_CACHE_CHECK(for __builtin_constant_p, pgac_cv__builtin_constant_p,
 [AC_COMPILE_IFELSE([AC_LANG_SOURCE(
-[[static int x; static int y[__builtin_constant_p(x) ? x : 1];]]
+[[static int x;
+  static int y[__builtin_constant_p(x) ? x : 1];
+  static int z[__builtin_constant_p("string literal") ? 1 : x];
+]]
 )],
 [pgac_cv__builtin_constant_p=yes],
 [pgac_cv__builtin_constant_p=no])])
diff --git a/configure b/configure
index ca76ef0ab2..58eafd31c5 100755
--- a/configure
+++ b/configure
@@ -11901,7 +11901,10 @@ if ${pgac_cv__builtin_constant_p+:} false; then :
 else
   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
 /* end confdefs.h.  */
-static int x; static int y[__builtin_constant_p(x) ? x : 1];
+static int x;
+  static int y[__builtin_constant_p(x) ? x : 1];
+  static int z[__builtin_constant_p("string literal") ? 1 : x];
+
 _ACEOF
 if ac_fn_c_try_compile "$LINENO"; then :

From 9c2f0a6c3cc8bb85b78191579760dbe9fb7814ec Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Fri, 3 Nov 2017 07:52:29 -0700
Subject: [PATCH 0708/1087] Fix pruning of locked and updated tuples.

Previously it was possible that a tuple was not pruned during vacuum,
even though its update xmax (i.e. the updating xid in a multixact with
both key share lockers and an updater) was below the cutoff horizon.

As the freezing code assumed, rightly so, that that's not supposed to
happen, xmax would be preserved (as a member of a new multixact or
xmax directly).  That causes two problems: For one, the tuple is below
the xmin horizon, which can cause problems if the clog is truncated or
once there's an xid wraparound.  The bigger problem is that that will
break HOT chains, which in turn can lead to two breakages: First,
failing index lookups, which can e.g. lead to constraints being
violated.  Second, future hot prunes / vacuums can end up making
invisible tuples visible again.  There are other harmful scenarios.

Fix the problem by recognizing that tuples can be DEAD instead of
RECENTLY_DEAD, even if the multixactid has alive members, if the
update_xid is below the xmin horizon.  That's safe because newer
versions of the tuple will contain the locking xids.

A followup commit will harden the code somewhat against future similar
bugs and already corrupted data.
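The heart of the fix, condensed from the tqual.c diff below: when xmax is a multixact, the committed updater's xid is now compared against the horizon first, so a tuple whose updater precedes OldestXmin is reported DEAD even while lockers survive in the multixact (simplified sketch, not a verbatim excerpt):

    TransactionId xmax = HeapTupleGetUpdateXid(tuple);  /* not LOCKED_ONLY */

    if (TransactionIdIsInProgress(xmax))
        return HEAPTUPLE_DELETE_IN_PROGRESS;
    else if (TransactionIdDidCommit(xmax))
    {
        /* DEAD once the updater passes the horizon, regardless of any
         * remaining lockers -- they are also present in the newer tuple
         * version, so pruning this one is safe */
        if (!TransactionIdPrecedes(xmax, OldestXmin))
            return HEAPTUPLE_RECENTLY_DEAD;
        return HEAPTUPLE_DEAD;
    }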
Author: Andres Freund, with changes by Alvaro Herrera Reported-By: Daniel Wood Analyzed-By: Andres Freund, Alvaro Herrera, Robert Haas, Peter Geoghegan, Daniel Wood, Yi Wen Wong, Michael Paquier Reviewed-By: Alvaro Herrera, Robert Haas, Michael Paquier Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de Backpatch: 9.3- --- src/backend/utils/time/tqual.c | 57 ++++------ .../isolation/expected/freeze-the-dead.out | 107 ++++-------------- src/test/isolation/isolation_schedule | 1 + src/test/isolation/specs/freeze-the-dead.spec | 36 +++++- 4 files changed, 80 insertions(+), 121 deletions(-) diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c index a821e2eed1..2b218e07e6 100644 --- a/src/backend/utils/time/tqual.c +++ b/src/backend/utils/time/tqual.c @@ -1311,49 +1311,40 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin, if (tuple->t_infomask & HEAP_XMAX_IS_MULTI) { - TransactionId xmax; - - if (MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) - { - /* already checked above */ - Assert(!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)); - - xmax = HeapTupleGetUpdateXid(tuple); - - /* not LOCKED_ONLY, so it has to have an xmax */ - Assert(TransactionIdIsValid(xmax)); - - if (TransactionIdIsInProgress(xmax)) - return HEAPTUPLE_DELETE_IN_PROGRESS; - else if (TransactionIdDidCommit(xmax)) - /* there are still lockers around -- can't return DEAD here */ - return HEAPTUPLE_RECENTLY_DEAD; - /* updating transaction aborted */ - return HEAPTUPLE_LIVE; - } - - Assert(!(tuple->t_infomask & HEAP_XMAX_COMMITTED)); + TransactionId xmax = HeapTupleGetUpdateXid(tuple); - xmax = HeapTupleGetUpdateXid(tuple); + /* already checked above */ + Assert(!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask)); /* not LOCKED_ONLY, so it has to have an xmax */ Assert(TransactionIdIsValid(xmax)); - /* multi is not running -- updating xact cannot be */ - Assert(!TransactionIdIsInProgress(xmax)); - if (TransactionIdDidCommit(xmax)) + if (TransactionIdIsInProgress(xmax)) + return HEAPTUPLE_DELETE_IN_PROGRESS; + else if (TransactionIdDidCommit(xmax)) { + /* + * The multixact might still be running due to lockers. If the + * updater is below the xid horizon, we have to return DEAD + * regardless -- otherwise we could end up with a tuple where the + * updater has to be removed due to the horizon, but is not pruned + * away. It's not a problem to prune that tuple, because any + * remaining lockers will also be present in newer tuple versions. + */ if (!TransactionIdPrecedes(xmax, OldestXmin)) return HEAPTUPLE_RECENTLY_DEAD; - else - return HEAPTUPLE_DEAD; + + return HEAPTUPLE_DEAD; + } + else if (!MultiXactIdIsRunning(HeapTupleHeaderGetRawXmax(tuple), false)) + { + /* + * Not in Progress, Not Committed, so either Aborted or crashed. + * Mark the Xmax as invalid. + */ + SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, InvalidTransactionId); } - /* - * Not in Progress, Not Committed, so either Aborted or crashed. - * Remove the Xmax. 
- */ - SetHintBits(tuple, buffer, HEAP_XMAX_INVALID, InvalidTransactionId); return HEAPTUPLE_LIVE; } diff --git a/src/test/isolation/expected/freeze-the-dead.out b/src/test/isolation/expected/freeze-the-dead.out index dd045613f9..8e638f132f 100644 --- a/src/test/isolation/expected/freeze-the-dead.out +++ b/src/test/isolation/expected/freeze-the-dead.out @@ -1,101 +1,36 @@ -Parsed test spec with 2 sessions +Parsed test spec with 3 sessions -starting permutation: s1_update s1_commit s1_vacuum s2_key_share s2_commit +starting permutation: s1_begin s2_begin s3_begin s1_update s2_key_share s3_key_share s1_update s1_commit s2_commit s2_vacuum s1_selectone s3_commit s2_vacuum s1_selectall +step s1_begin: BEGIN; +step s2_begin: BEGIN; +step s3_begin: BEGIN; step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; -step s1_commit: COMMIT; -step s1_vacuum: VACUUM FREEZE tab_freeze; step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; id 3 -step s2_commit: COMMIT; - -starting permutation: s1_update s1_commit s2_key_share s1_vacuum s2_commit -step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; -step s1_commit: COMMIT; -step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; +step s3_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; id 3 -step s1_vacuum: VACUUM FREEZE tab_freeze; -step s2_commit: COMMIT; - -starting permutation: s1_update s1_commit s2_key_share s2_commit s1_vacuum step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; step s1_commit: COMMIT; -step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; -id - -3 step s2_commit: COMMIT; -step s1_vacuum: VACUUM FREEZE tab_freeze; +step s2_vacuum: VACUUM FREEZE tab_freeze; +step s1_selectone: + BEGIN; + SET LOCAL enable_seqscan = false; + SET LOCAL enable_bitmapscan = false; + SELECT * FROM tab_freeze WHERE id = 3; + COMMIT; -starting permutation: s1_update s2_key_share s1_commit s1_vacuum s2_commit -step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; -step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; -id +id name x -3 -step s1_commit: COMMIT; -step s1_vacuum: VACUUM FREEZE tab_freeze; -step s2_commit: COMMIT; - -starting permutation: s1_update s2_key_share s1_commit s2_commit s1_vacuum -step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; -step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; -id - -3 -step s1_commit: COMMIT; -step s2_commit: COMMIT; -step s1_vacuum: VACUUM FREEZE tab_freeze; - -starting permutation: s1_update s2_key_share s2_commit s1_commit s1_vacuum -step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; -step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; -id +3 333 2 +step s3_commit: COMMIT; +step s2_vacuum: VACUUM FREEZE tab_freeze; +step s1_selectall: SELECT * FROM tab_freeze ORDER BY name, id; +id name x -3 -step s2_commit: COMMIT; -step s1_commit: COMMIT; -step s1_vacuum: VACUUM FREEZE tab_freeze; - -starting permutation: s2_key_share s1_update s1_commit s1_vacuum s2_commit -step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; -id - -3 -step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; -step s1_commit: COMMIT; -step s1_vacuum: VACUUM FREEZE tab_freeze; -step s2_commit: COMMIT; - -starting permutation: s2_key_share s1_update s1_commit s2_commit s1_vacuum -step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; -id - -3 -step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3; -step 
s1_commit: COMMIT;
-step s2_commit: COMMIT;
-step s1_vacuum: VACUUM FREEZE tab_freeze;
-
-starting permutation: s2_key_share s1_update s2_commit s1_commit s1_vacuum
-step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE;
-id
-
-3
-step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3;
-step s2_commit: COMMIT;
-step s1_commit: COMMIT;
-step s1_vacuum: VACUUM FREEZE tab_freeze;
-
-starting permutation: s2_key_share s2_commit s1_update s1_commit s1_vacuum
-step s2_key_share: SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE;
-id
-
-3
-step s2_commit: COMMIT;
-step s1_update: UPDATE tab_freeze SET x = x + 1 WHERE id = 3;
-step s1_commit: COMMIT;
-step s1_vacuum: VACUUM FREEZE tab_freeze;
+1              111 0
+3              333 2
diff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule
index e41b9164cd..eb566ebb6c 100644
--- a/src/test/isolation/isolation_schedule
+++ b/src/test/isolation/isolation_schedule
@@ -44,6 +44,7 @@ test: update-locked-tuple
 test: propagate-lock-delete
 test: tuplelock-conflict
 test: tuplelock-update
+test: freeze-the-dead
 test: nowait
 test: nowait-2
 test: nowait-3
diff --git a/src/test/isolation/specs/freeze-the-dead.spec b/src/test/isolation/specs/freeze-the-dead.spec
index 3cd9965b2f..e24d7d5d11 100644
--- a/src/test/isolation/specs/freeze-the-dead.spec
+++ b/src/test/isolation/specs/freeze-the-dead.spec
@@ -16,12 +16,44 @@ teardown
 }

 session "s1"
-setup		{ BEGIN; }
+step "s1_begin"		{ BEGIN; }
 step "s1_update"	{ UPDATE tab_freeze SET x = x + 1 WHERE id = 3; }
 step "s1_commit"	{ COMMIT; }
 step "s1_vacuum"	{ VACUUM FREEZE tab_freeze; }
+step "s1_selectone"	{
+    BEGIN;
+    SET LOCAL enable_seqscan = false;
+    SET LOCAL enable_bitmapscan = false;
+    SELECT * FROM tab_freeze WHERE id = 3;
+    COMMIT;
+}
+step "s1_selectall"	{ SELECT * FROM tab_freeze ORDER BY name, id; }
+step "s1_reindex"	{ REINDEX TABLE tab_freeze; }

 session "s2"
-setup		{ BEGIN; }
+step "s2_begin"		{ BEGIN; }
 step "s2_key_share"	{ SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; }
 step "s2_commit"	{ COMMIT; }
+step "s2_vacuum"	{ VACUUM FREEZE tab_freeze; }
+
+session "s3"
+step "s3_begin"		{ BEGIN; }
+step "s3_key_share"	{ SELECT id FROM tab_freeze WHERE id = 3 FOR KEY SHARE; }
+step "s3_commit"	{ COMMIT; }
+step "s3_vacuum"	{ VACUUM FREEZE tab_freeze; }
+
+# This permutation verifies that a previous bug
+#     https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
+#     https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
+# is not reintroduced. We used to make wrong pruning / freezing
+# decisions for multixacts, which could lead to a) broken hot chains b)
+# dead rows being revived.
+permutation "s1_begin" "s2_begin" "s3_begin" # start transactions
+   "s1_update" "s2_key_share" "s3_key_share" # have xmax be a multi with an updater, updater being oldest xid
+   "s1_update" # create additional row version that has multis
+   "s1_commit" "s2_commit" # commit both updater and share locker
+   "s2_vacuum" # due to bug in freezing logic, we used to *not* prune updated row, and then froze it
+   "s1_selectone" # if hot chain is broken, the row can't be found via index scan
+   "s3_commit" # commit remaining open xact
+   "s2_vacuum" # pruning / freezing in broken hot chains would unset xmax, reviving rows
+   "s1_selectall" # show borkedness

From 699bf7d05c68734f800052829427c20674eb2c6b Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Mon, 13 Nov 2017 18:45:47 -0800
Subject: [PATCH 0709/1087] Perform a lot more sanity checks when freezing tuples.
The previous commit has shown that the sanity checks around freezing
aren't strong enough. Strengthening them seems especially important
because the existence of the bug has caused corruption that we don't
want to make even worse during future vacuum cycles.

The errors are emitted with ereport rather than elog, despite being
"should never happen" messages, so a proper error code is emitted. To
avoid superfluous translations, mark messages as internal.

Author: Andres Freund and Alvaro Herrera
Reviewed-By: Alvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de
Backpatch: 9.3-
---
 src/backend/access/heap/heapam.c      | 112 +++++++++++++++++++-----
 src/backend/access/heap/rewriteheap.c |   5 +-
 src/backend/commands/vacuumlazy.c     |  15 +++-
 src/include/access/heapam.h           |   5 +-
 src/include/access/heapam_xlog.h      |   2 +
 5 files changed, 114 insertions(+), 25 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 3acef279f4..54f1100ffd 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -6357,6 +6357,7 @@ heap_inplace_update(Relation relation, HeapTuple tuple)
  */
 static TransactionId
 FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
+				  TransactionId relfrozenxid, TransactionId relminmxid,
 				  TransactionId cutoff_xid, MultiXactId cutoff_multi,
 				  uint16 *flags)
 {
@@ -6383,16 +6384,26 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
 		*flags |= FRM_INVALIDATE_XMAX;
 		return InvalidTransactionId;
 	}
+	else if (MultiXactIdPrecedes(multi, relminmxid))
+		ereport(ERROR,
+				(errcode(ERRCODE_DATA_CORRUPTED),
+				 errmsg_internal("found multixact %u from before relminmxid %u",
+								 multi, relminmxid)));
 	else if (MultiXactIdPrecedes(multi, cutoff_multi))
 	{
 		/*
-		 * This old multi cannot possibly have members still running.  If it
-		 * was a locker only, it can be removed without any further
-		 * consideration; but if it contained an update, we might need to
-		 * preserve it.
+		 * This old multi cannot possibly have members still running, but
+		 * verify just in case.  If it was a locker only, it can be removed
+		 * without any further consideration; but if it contained an update, we
+		 * might need to preserve it.
 		 */
-		Assert(!MultiXactIdIsRunning(multi,
-									 HEAP_XMAX_IS_LOCKED_ONLY(t_infomask)));
+		if (MultiXactIdIsRunning(multi,
+								 HEAP_XMAX_IS_LOCKED_ONLY(t_infomask)))
+			ereport(ERROR,
+					(errcode(ERRCODE_DATA_CORRUPTED),
+					 errmsg_internal("multixact %u from before cutoff %u found to be still running",
+									 multi, cutoff_multi)));
+
 		if (HEAP_XMAX_IS_LOCKED_ONLY(t_infomask))
 		{
 			*flags |= FRM_INVALIDATE_XMAX;
@@ -6406,13 +6417,22 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
 			/* wasn't only a lock, xid needs to be valid */
 			Assert(TransactionIdIsValid(xid));

+			if (TransactionIdPrecedes(xid, relfrozenxid))
+				ereport(ERROR,
+						(errcode(ERRCODE_DATA_CORRUPTED),
+						 errmsg_internal("found update xid %u from before relfrozenxid %u",
+										 xid, relfrozenxid)));
+
 			/*
 			 * If the xid is older than the cutoff, it has to have aborted,
 			 * otherwise the tuple would have gotten pruned away.
*/ if (TransactionIdPrecedes(xid, cutoff_xid)) { - Assert(!TransactionIdDidCommit(xid)); + if (TransactionIdDidCommit(xid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("cannot freeze committed update xid %u", xid))); *flags |= FRM_INVALIDATE_XMAX; xid = InvalidTransactionId; /* not strictly necessary */ } @@ -6484,6 +6504,13 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, { TransactionId xid = members[i].xid; + Assert(TransactionIdIsValid(xid)); + if (TransactionIdPrecedes(xid, relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found update xid %u from before relfrozenxid %u", + xid, relfrozenxid))); + /* * It's an update; should we keep it? If the transaction is known * aborted or crashed then it's okay to ignore it, otherwise not. @@ -6512,18 +6539,26 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, update_committed = true; update_xid = xid; } - - /* - * Not in progress, not committed -- must be aborted or crashed; - * we can ignore it. - */ + else + { + /* + * Not in progress, not committed -- must be aborted or crashed; + * we can ignore it. + */ + } /* * Since the tuple wasn't marked HEAPTUPLE_DEAD by vacuum, the - * update Xid cannot possibly be older than the xid cutoff. + * update Xid cannot possibly be older than the xid cutoff. The + * presence of such a tuple would cause corruption, so be paranoid + * and check. */ - Assert(!TransactionIdIsValid(update_xid) || - !TransactionIdPrecedes(update_xid, cutoff_xid)); + if (TransactionIdIsValid(update_xid) && + TransactionIdPrecedes(update_xid, cutoff_xid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found update xid %u from before xid cutoff %u", + update_xid, cutoff_xid))); /* * If we determined that it's an Xid corresponding to an update @@ -6620,8 +6655,9 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask, * recovery. We really need to remove old xids. 
*/ bool -heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid, - TransactionId cutoff_multi, +heap_prepare_freeze_tuple(HeapTupleHeader tuple, + TransactionId relfrozenxid, TransactionId relminmxid, + TransactionId cutoff_xid, TransactionId cutoff_multi, xl_heap_freeze_tuple *frz, bool *totally_frozen_p) { bool changed = false; @@ -6638,8 +6674,20 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid, xid = HeapTupleHeaderGetXmin(tuple); if (TransactionIdIsNormal(xid)) { + if (TransactionIdPrecedes(xid, relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found xmin %u from before relfrozenxid %u", + xid, relfrozenxid))); + if (TransactionIdPrecedes(xid, cutoff_xid)) { + if (!TransactionIdDidCommit(xid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("uncommitted xmin %u from before xid cutoff %u needs to be frozen", + xid, cutoff_xid))); + frz->t_infomask |= HEAP_XMIN_FROZEN; changed = true; } @@ -6664,6 +6712,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid, uint16 flags; newxmax = FreezeMultiXactId(xid, tuple->t_infomask, + relfrozenxid, relminmxid, cutoff_xid, cutoff_multi, &flags); if (flags & FRM_INVALIDATE_XMAX) @@ -6713,8 +6762,28 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid, } else if (TransactionIdIsNormal(xid)) { + if (TransactionIdPrecedes(xid, relfrozenxid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("found xmax %u from before relfrozenxid %u", + xid, relfrozenxid))); + if (TransactionIdPrecedes(xid, cutoff_xid)) + { + /* + * If we freeze xmax, make absolutely sure that it's not an XID + * that is important. (Note, a lock-only xmax can be removed + * independent of committedness, since a committed lock holder has + * released the lock). + */ + if (!(tuple->t_infomask & HEAP_XMAX_LOCK_ONLY) && + TransactionIdDidCommit(xid)) + ereport(ERROR, + (errcode(ERRCODE_DATA_CORRUPTED), + errmsg_internal("cannot freeze committed xmax %u", + xid))); freeze_xmax = true; + } else totally_frozen = false; } @@ -6819,14 +6888,17 @@ heap_execute_freeze_tuple(HeapTupleHeader tuple, xl_heap_freeze_tuple *frz) * Useful for callers like CLUSTER that perform their own WAL logging. */ bool -heap_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid, - TransactionId cutoff_multi) +heap_freeze_tuple(HeapTupleHeader tuple, + TransactionId relfrozenxid, TransactionId relminmxid, + TransactionId cutoff_xid, TransactionId cutoff_multi) { xl_heap_freeze_tuple frz; bool do_freeze; bool tuple_totally_frozen; - do_freeze = heap_prepare_freeze_tuple(tuple, cutoff_xid, cutoff_multi, + do_freeze = heap_prepare_freeze_tuple(tuple, + relfrozenxid, relminmxid, + cutoff_xid, cutoff_multi, &frz, &tuple_totally_frozen); /* diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c index f93c194e18..7d163c9137 100644 --- a/src/backend/access/heap/rewriteheap.c +++ b/src/backend/access/heap/rewriteheap.c @@ -407,7 +407,10 @@ rewrite_heap_tuple(RewriteState state, * While we have our hands on the tuple, we may as well freeze any * eligible xmin or xmax, so that future VACUUM effort can be saved. 
*/ - heap_freeze_tuple(new_tuple->t_data, state->rs_freeze_xid, + heap_freeze_tuple(new_tuple->t_data, + state->rs_old_rel->rd_rel->relfrozenxid, + state->rs_old_rel->rd_rel->relminmxid, + state->rs_freeze_xid, state->rs_cutoff_multi); /* diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c index 20ce431e46..f95346acdb 100644 --- a/src/backend/commands/vacuumlazy.c +++ b/src/backend/commands/vacuumlazy.c @@ -467,6 +467,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, blkno; HeapTupleData tuple; char *relname; + TransactionId relfrozenxid = onerel->rd_rel->relfrozenxid; + TransactionId relminmxid = onerel->rd_rel->relminmxid; BlockNumber empty_pages, vacuumed_pages; double num_tuples, @@ -1004,6 +1006,13 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * tuple, we choose to keep it, because it'll be a lot * cheaper to get rid of it in the next pruning pass than * to treat it like an indexed tuple. + * + * If this were to happen for a tuple that actually needed + * to be deleted, we'd be in trouble, because it'd + * possibly leave a tuple below the relation's xmin + * horizon alive. heap_prepare_freeze_tuple() is prepared + * to detect that case and abort the transaction, + * preventing corruption. */ if (HeapTupleIsHotUpdated(&tuple) || HeapTupleIsHeapOnly(&tuple)) @@ -1095,8 +1104,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats, * Each non-removable tuple must be checked to see if it needs * freezing. Note we already have exclusive buffer lock. */ - if (heap_prepare_freeze_tuple(tuple.t_data, FreezeLimit, - MultiXactCutoff, &frozen[nfrozen], + if (heap_prepare_freeze_tuple(tuple.t_data, + relfrozenxid, relminmxid, + FreezeLimit, MultiXactCutoff, + &frozen[nfrozen], &tuple_totally_frozen)) frozen[nfrozen++].offset = offnum; diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h index 4e41024e92..f1366ed958 100644 --- a/src/include/access/heapam.h +++ b/src/include/access/heapam.h @@ -168,8 +168,9 @@ extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple, bool follow_update, Buffer *buffer, HeapUpdateFailureData *hufd); extern void heap_inplace_update(Relation relation, HeapTuple tuple); -extern bool heap_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid, - TransactionId cutoff_multi); +extern bool heap_freeze_tuple(HeapTupleHeader tuple, + TransactionId relfrozenxid, TransactionId relminmxid, + TransactionId cutoff_xid, TransactionId cutoff_multi); extern bool heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid, MultiXactId cutoff_multi, Buffer buf); extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple); diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h index 5e4dee60c7..38f7f63984 100644 --- a/src/include/access/heapam_xlog.h +++ b/src/include/access/heapam_xlog.h @@ -384,6 +384,8 @@ extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer, TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples, int ntuples); extern bool heap_prepare_freeze_tuple(HeapTupleHeader tuple, + TransactionId relfrozenxid, + TransactionId relminmxid, TransactionId cutoff_xid, TransactionId cutoff_multi, xl_heap_freeze_tuple *frz, From 997071691f66dfe92e97e6b4e3d29d153317be31 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 16 Dec 2017 11:32:49 -0500 Subject: [PATCH 0710/1087] Fix oversights in new plpgsql test suite infrastructure. 
Fix a couple of minor oversights in commit 632b03da3: the tests should be run in database "pl_regression" like the other PLs do, and we should clean up the tests' output cruft during "make clean". --- src/pl/plpgsql/src/Makefile | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/src/pl/plpgsql/src/Makefile b/src/pl/plpgsql/src/Makefile index 64991c3115..76ac247e57 100644 --- a/src/pl/plpgsql/src/Makefile +++ b/src/pl/plpgsql/src/Makefile @@ -24,6 +24,8 @@ OBJS = pl_gram.o pl_handler.o pl_comp.o pl_exec.o \ DATA = plpgsql.control plpgsql--1.0.sql plpgsql--unpackaged--1.0.sql +REGRESS_OPTS = --dbname=$(PL_TESTDB) + REGRESS = plpgsql_call all: all-lib @@ -85,6 +87,7 @@ distprep: pl_gram.h pl_gram.c plerrcodes.h # so they are not cleaned here. clean distclean: clean-lib rm -f $(OBJS) + rm -rf $(pg_regress_clean_files) -maintainer-clean: clean +maintainer-clean: distclean rm -f pl_gram.c pl_gram.h plerrcodes.h From c757a3da0af0e5eb636eeee2af6602d279162b0a Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sat, 16 Dec 2017 10:03:35 -0800 Subject: [PATCH 0711/1087] Avoid and detect SIGPIPE race in TAP tests. Don't write to stdin of a psql process that could have already exited with an authentication failure. Buildfarm members crake and mandrill have failed once by doing so. Ignore SIGPIPE in all TAP tests. Back-patch to v10, where these tests were introduced. Reviewed by Michael Paquier. Discussion: https://postgr.es/m/20171209210203.GC3362632@rfd.leadboat.com --- src/test/authentication/t/001_password.pl | 3 +-- src/test/authentication/t/002_saslprep.pl | 3 +-- src/test/perl/TestLib.pm | 4 ++++ 3 files changed, 6 insertions(+), 4 deletions(-) diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl index 2d3f674144..9340f2f1ab 100644 --- a/src/test/authentication/t/001_password.pl +++ b/src/test/authentication/t/001_password.pl @@ -44,8 +44,7 @@ sub test_role $status_string = 'success' if ($expected_res eq 0); - my $res = - $node->psql('postgres', 'SELECT 1', extra_params => [ '-U', $role ]); + my $res = $node->psql('postgres', undef, extra_params => [ '-U', $role ]); is($res, $expected_res, "authentication $status_string for method $method, role $role"); } diff --git a/src/test/authentication/t/002_saslprep.pl b/src/test/authentication/t/002_saslprep.pl index df9f85d6a9..e09273edd4 100644 --- a/src/test/authentication/t/002_saslprep.pl +++ b/src/test/authentication/t/002_saslprep.pl @@ -41,8 +41,7 @@ sub test_login $status_string = 'success' if ($expected_res eq 0); $ENV{"PGPASSWORD"} = $password; - my $res = - $node->psql('postgres', 'SELECT 1', extra_params => [ '-U', $role ]); + my $res = $node->psql('postgres', undef, extra_params => [ '-U', $role ]); is($res, $expected_res, "authentication $status_string for role $role with password $password" ); diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm index d1a2eb5883..60190400de 100644 --- a/src/test/perl/TestLib.pm +++ b/src/test/perl/TestLib.pm @@ -75,6 +75,10 @@ BEGIN INIT { + # Return EPIPE instead of killing the process with SIGPIPE. An affected + # test may still fail, but it's more likely to report useful facts. + $SIG{PIPE} = 'IGNORE'; + # Determine output directories, and create them. The base path is the # TESTDIR environment variable, which is normally set by the invoking # Makefile. 
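
As an illustrative aside (not part of the patch above), the behavior the
TAP-test change relies on is ordinary POSIX semantics, shown here as a
stand-alone C sketch: with SIGPIPE ignored, writing to a pipe whose reader
has gone away fails with EPIPE instead of killing the writer, so a test
can still report useful facts.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        int         fds[2];

        signal(SIGPIPE, SIG_IGN);   /* the C analogue of $SIG{PIPE} = 'IGNORE' */
        if (pipe(fds) != 0)
            return 1;
        close(fds[0]);              /* reader exits, like a psql that failed */
        if (write(fds[1], "x", 1) < 0)
            printf("write failed: %s\n", strerror(errno)); /* EPIPE, not death */
        return 0;
    }
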
From c04d35f442a8c4fd5a20103b31839ec52fce3046 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Sat, 16 Dec 2017 12:49:41 -0800
Subject: [PATCH 0712/1087] Try to detect runtime unavailability of __builtin_mul_overflow(int64).

On some systems the results of 64 bit __builtin_mul_overflow()
operations can be computed at compile time, but not at runtime. The
known cases are arm buildfarm animals using clang where the runtime
operation is implemented in an unavailable function.

Try to avoid compile-time computation by using volatile arguments to
__builtin_mul_overflow(). In that case we hopefully will get a link
error when unavailable, similar to what buildfarm animals dangomushi
and gull are reporting.

Author: Andres Freund
Discussion: https://postgr.es/m/20171213213754.pydkyjs6bt2hvsdb@alap3.anarazel.de
---
 config/c-compiler.m4 | 12 ++++++++----
 configure            |  4 +++-
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/config/c-compiler.m4 b/config/c-compiler.m4
index 67323ade12..b35436481c 100644
--- a/config/c-compiler.m4
+++ b/config/c-compiler.m4
@@ -310,13 +310,17 @@ fi])# PGAC_C_BUILTIN_CONSTANT_P
 # and define HAVE__BUILTIN_OP_OVERFLOW if so.
 #
 # Check for the most complicated case, 64 bit multiplication, as a
-# proxy for all of the operations.  Have to link to be sure to
-# recognize a missing __builtin_mul_overflow.
+# proxy for all of the operations.  Use volatile variables to avoid the
+# compiler computing result at compile time, even though the runtime
+# might not supply operation.  Have to link to be sure to recognize a
+# missing __builtin_mul_overflow.
 AC_DEFUN([PGAC_C_BUILTIN_OP_OVERFLOW],
 [AC_CACHE_CHECK(for __builtin_mul_overflow, pgac_cv__builtin_op_overflow,
 [AC_LINK_IFELSE([AC_LANG_PROGRAM([],
-[PG_INT64_TYPE result;
-__builtin_mul_overflow((PG_INT64_TYPE) 1, (PG_INT64_TYPE) 2, &result);]
+[PG_INT64_TYPE a = 1;
+PG_INT64_TYPE b = 1;
+PG_INT64_TYPE result;
+__builtin_mul_overflow(*(volatile PG_INT64_TYPE *) a, *(volatile PG_INT64_TYPE *) b, &result);]
 )],
 [pgac_cv__builtin_op_overflow=yes],
 [pgac_cv__builtin_op_overflow=no])])
diff --git a/configure b/configure
index 58eafd31c5..22ca4230fc 100755
--- a/configure
+++ b/configure
@@ -14488,8 +14488,10 @@ else
 int
 main ()
 {
+PG_INT64_TYPE a = 1;
+PG_INT64_TYPE b = 1;
 PG_INT64_TYPE result;
-__builtin_mul_overflow((PG_INT64_TYPE) 1, (PG_INT64_TYPE) 2, &result);
+__builtin_mul_overflow(*(volatile PG_INT64_TYPE *) a, *(volatile PG_INT64_TYPE *) b, &result);

   ;
   return 0;

From b31a9d7dd3bf8435fddf404c4b75236d0ea76d78 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Sun, 17 Dec 2017 00:41:41 -0500
Subject: [PATCH 0713/1087] Suppress compiler warning about no function
 return value.

Compilers that don't know that ereport(ERROR) doesn't return complained
about the new coding in scanint8() introduced by commit 101c7ee3e.
Tweak coding to avoid the warning.

Per buildfarm.
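
As an illustrative aside (not part of the patch), the warning pattern
being silenced looks like this; fail() is a hypothetical stand-in for
ereport(ERROR, ...), which such compilers do not know to be non-returning.
The cure, as in the diff below, is to keep a reachable return on every path.

    #include <stdbool.h>
    #include <stdlib.h>

    static void fail(void) { abort(); } /* stand-in for ereport(ERROR, ...) */

    /* May draw "control reaches end of non-void function": the compiler
     * cannot see that fail() never returns. */
    static bool check_v1(bool ok, bool error_ok)
    {
        if (ok)
            return true;
        if (error_ok)
            return false;
        fail();
    }

    /* Warning-free: the error path falls through to a real return. */
    static bool check_v2(bool ok, bool error_ok)
    {
        if (ok)
            return true;
        if (!error_ok)
            fail();
        return false;
    }

    int
    main(void)
    {
        return (check_v1(true, true) && check_v2(true, true)) ? 0 : 1;
    }
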
--- src/backend/utils/adt/int8.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/src/backend/utils/adt/int8.c b/src/backend/utils/adt/int8.c index a8e2200852..24c9150893 100644 --- a/src/backend/utils/adt/int8.c +++ b/src/backend/utils/adt/int8.c @@ -81,9 +81,7 @@ scanint8(const char *str, bool errorOK, int64 *result) /* require at least one digit */ if (unlikely(!isdigit((unsigned char) *ptr))) - { goto invalid_syntax; - } /* process digits */ while (*ptr && isdigit((unsigned char) *ptr)) @@ -108,26 +106,25 @@ scanint8(const char *str, bool errorOK, int64 *result) goto out_of_range; tmp = -tmp; } - *result = tmp; + *result = tmp; return true; out_of_range: - if (errorOK) - return false; - else + if (!errorOK) ereport(ERROR, (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), errmsg("value \"%s\" is out of range for type %s", str, "bigint"))); + return false; + invalid_syntax: - if (errorOK) - return false; - else + if (!errorOK) ereport(ERROR, (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION), errmsg("invalid input syntax for integer: \"%s\"", str))); + return false; } /* int8in() From c6d21d56f1a92b4762a22cbbb694b1e853165e70 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 17 Dec 2017 11:52:22 -0500 Subject: [PATCH 0714/1087] Try harder to detect unavailability of __builtin_mul_overflow(int64). Commit c04d35f44 didn't quite do the job here, because it still allowed the compiler to deduce that the function call could be optimized away. Prevent that by putting the arguments and results in global variables. Discussion: https://postgr.es/m/20171213213754.pydkyjs6bt2hvsdb@alap3.anarazel.de --- config/c-compiler.m4 | 17 +++++++++-------- configure | 9 +++++---- 2 files changed, 14 insertions(+), 12 deletions(-) diff --git a/config/c-compiler.m4 b/config/c-compiler.m4 index b35436481c..076656c77f 100644 --- a/config/c-compiler.m4 +++ b/config/c-compiler.m4 @@ -310,18 +310,19 @@ fi])# PGAC_C_BUILTIN_CONSTANT_P # and define HAVE__BUILTIN_OP_OVERFLOW if so. # # Check for the most complicated case, 64 bit multiplication, as a -# proxy for all of the operations. Use volatile variables to avoid the -# compiler computing result at compile time, even though the runtime -# might not supply operation. Have to link to be sure to recognize a -# missing __builtin_mul_overflow. +# proxy for all of the operations. To detect the case where the compiler +# knows the function but library support is missing, we must link not just +# compile, and store the results in global variables so the compiler doesn't +# optimize away the call. AC_DEFUN([PGAC_C_BUILTIN_OP_OVERFLOW], [AC_CACHE_CHECK(for __builtin_mul_overflow, pgac_cv__builtin_op_overflow, -[AC_LINK_IFELSE([AC_LANG_PROGRAM([], -[PG_INT64_TYPE a = 1; +[AC_LINK_IFELSE([AC_LANG_PROGRAM([ +PG_INT64_TYPE a = 1; PG_INT64_TYPE b = 1; PG_INT64_TYPE result; -__builtin_mul_overflow(*(volatile PG_INT64_TYPE *) a, *(volatile PG_INT64_TYPE *) b, &result);] -)], +int oflo; +], +[oflo = __builtin_mul_overflow(a, b, &result);])], [pgac_cv__builtin_op_overflow=yes], [pgac_cv__builtin_op_overflow=no])]) if test x"$pgac_cv__builtin_op_overflow" = xyes ; then diff --git a/configure b/configure index 22ca4230fc..d9b7b8d7ec 100755 --- a/configure +++ b/configure @@ -14485,14 +14485,15 @@ else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ -int -main () -{ PG_INT64_TYPE a = 1; PG_INT64_TYPE b = 1; PG_INT64_TYPE result; -__builtin_mul_overflow(*(volatile PG_INT64_TYPE *) a, *(volatile PG_INT64_TYPE *) b, &result); +int oflo; +int +main () +{ +oflo = __builtin_mul_overflow(a, b, &result); ; return 0; } From 7731c32087faf498db0562cc7e40d256ffc1750f Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Mon, 18 Dec 2017 11:24:55 +0100 Subject: [PATCH 0715/1087] Fix typo on comment Author: David Rowley --- src/backend/utils/adt/json.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c index baf1178995..fcce26ed2e 100644 --- a/src/backend/utils/adt/json.c +++ b/src/backend/utils/adt/json.c @@ -1940,7 +1940,7 @@ json_agg_transfn(PG_FUNCTION_ARGS) state->val_output_func, false); /* - * The transition type for array_agg() is declared to be "internal", which + * The transition type for json_agg() is declared to be "internal", which * is a pass-by-value type the same size as a pointer. So we can safely * pass the JsonAggState pointer through nodeAgg.c's machinations. */ From fd7c0fa732d97a4b4ebb58730e6244ea30d0a618 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 18 Dec 2017 12:17:37 -0500 Subject: [PATCH 0716/1087] Fix crashes on plans with multiple Gather (Merge) nodes. es_query_dsa turns out to be broken by design, because it supposes that there is only one DSA for the whole query, whereas there is actually one per Gather (Merge) node. For now, work around that problem by setting and clearing the pointer around the sections of code that might need it. It's probably a better idea to get rid of es_query_dsa altogether in favor of having each node keep track individually of which DSA is relevant, but that seems like more than we would want to back-patch. Thomas Munro, reviewed and tested by Andreas Seltenreich, Amit Kapila, and by me. Discussion: http://postgr.es/m/CAEepm=1U6as=brnVvMNixEV2tpi8NuyQoTmO8Qef0-VV+=7MDA@mail.gmail.com --- src/backend/executor/execParallel.c | 26 ++++++++++++++------------ src/backend/executor/nodeGather.c | 6 ++++++ src/backend/executor/nodeGatherMerge.c | 4 ++++ 3 files changed, 24 insertions(+), 12 deletions(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index d57cdbd4e1..6b6064637b 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -330,7 +330,7 @@ EstimateParamExecSpace(EState *estate, Bitmapset *params) * parameter array) and then the datum as serialized by datumSerialize(). */ static dsa_pointer -SerializeParamExecParams(EState *estate, Bitmapset *params) +SerializeParamExecParams(EState *estate, Bitmapset *params, dsa_area *area) { Size size; int nparams; @@ -341,8 +341,8 @@ SerializeParamExecParams(EState *estate, Bitmapset *params) /* Allocate enough space for the current parameter values. */ size = EstimateParamExecSpace(estate, params); - handle = dsa_allocate(estate->es_query_dsa, size); - start_address = dsa_get_address(estate->es_query_dsa, handle); + handle = dsa_allocate(area, size); + start_address = dsa_get_address(area, handle); /* First write the number of parameters as a 4-byte integer. */ nparams = bms_num_members(params); @@ -736,12 +736,6 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, LWTRANCHE_PARALLEL_QUERY_DSA, pcxt->seg); - /* - * Make the area available to executor nodes running in the leader. - * See also ParallelQueryMain which makes it available to workers. 
- */ - estate->es_query_dsa = pei->area; - /* * Serialize parameters, if any, using DSA storage. We don't dare use * the main parallel query DSM for this because we might relaunch @@ -750,7 +744,8 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, */ if (!bms_is_empty(sendParams)) { - pei->param_exec = SerializeParamExecParams(estate, sendParams); + pei->param_exec = SerializeParamExecParams(estate, sendParams, + pei->area); fpes->param_exec = pei->param_exec; } } @@ -763,7 +758,11 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, d.pcxt = pcxt; d.instrumentation = instrumentation; d.nnodes = 0; + + /* Install our DSA area while initializing the plan. */ + estate->es_query_dsa = pei->area; ExecParallelInitializeDSM(planstate, &d); + estate->es_query_dsa = NULL; /* * Make sure that the world hasn't shifted under our feet. This could @@ -832,19 +831,22 @@ ExecParallelReinitialize(PlanState *planstate, /* Free any serialized parameters from the last round. */ if (DsaPointerIsValid(fpes->param_exec)) { - dsa_free(estate->es_query_dsa, fpes->param_exec); + dsa_free(pei->area, fpes->param_exec); fpes->param_exec = InvalidDsaPointer; } /* Serialize current parameter values if required. */ if (!bms_is_empty(sendParams)) { - pei->param_exec = SerializeParamExecParams(estate, sendParams); + pei->param_exec = SerializeParamExecParams(estate, sendParams, + pei->area); fpes->param_exec = pei->param_exec; } /* Traverse plan tree and let each child node reset associated state. */ + estate->es_query_dsa = pei->area; ExecParallelReInitializeDSM(planstate, pei->pcxt); + estate->es_query_dsa = NULL; } /* diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index a44cf8409a..1697ae650d 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -277,7 +277,13 @@ gather_getnext(GatherState *gatherstate) if (gatherstate->need_to_scan_locally) { + EState *estate = gatherstate->ps.state; + + /* Install our DSA area while executing the plan. */ + estate->es_query_dsa = + gatherstate->pei ? gatherstate->pei->area : NULL; outerTupleSlot = ExecProcNode(outerPlan); + estate->es_query_dsa = NULL; if (!TupIsNull(outerTupleSlot)) return outerTupleSlot; diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 4a8a59eabf..a69777aa95 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -637,8 +637,12 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait) { PlanState *outerPlan = outerPlanState(gm_state); TupleTableSlot *outerTupleSlot; + EState *estate = gm_state->ps.state; + /* Install our DSA area while executing the plan. */ + estate->es_query_dsa = gm_state->pei ? gm_state->pei->area : NULL; outerTupleSlot = ExecProcNode(outerPlan); + estate->es_query_dsa = NULL; if (!TupIsNull(outerTupleSlot)) { From 56a95ee5118bf6d46e261b8d352f7dafac10587d Mon Sep 17 00:00:00 2001 From: Fujii Masao Date: Tue, 19 Dec 2017 03:46:14 +0900 Subject: [PATCH 0717/1087] Fix bug in cancellation of non-exclusive backup to avoid assertion failure. Previously an assertion failure occurred when pg_stop_backup() for non-exclusive backup was aborted while it's waiting for WAL files to be archived. This assertion failure happened in do_pg_abort_backup() which was called when a non-exclusive backup was canceled. do_pg_abort_backup() assumes that there is at least one non-exclusive backup running when it's called. 
But pg_stop_backup() can be canceled even after it marks the end of
non-exclusive backup (e.g., while waiting for WAL archiving). This broke
the assumption that do_pg_abort_backup() relies on, which caused an
assertion failure.

This commit changes do_pg_abort_backup() so that it does nothing when
non-exclusive backup has already been marked as completed. That is, the
assumption is also changed, and do_pg_abort_backup() can now handle even
the case where it's called when there is no running backup.

Backpatch to 9.6 where SQL-callable non-exclusive backup was added.

Author: Masahiko Sawada and Michael Paquier
Reviewed-By: Robert Haas and Fujii Masao
Discussion: https://www.postgresql.org/message-id/CAD21AoD2L1Fu2c==gnVASMyFAAaq3y-AQ2uEVj-zTCGFFjvmDg@mail.gmail.com
---
 src/backend/access/transam/xlog.c    | 36 ++++++++++++++++++++++++----
 src/backend/replication/basebackup.c |  5 ++--
 2 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 0791404263..3e9a12dacd 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -10628,13 +10628,20 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p,
 	/*
 	 * Mark that start phase has correctly finished for an exclusive backup.
 	 * Session-level locks are updated as well to reflect that state.
+	 *
+	 * Note that CHECK_FOR_INTERRUPTS() must not occur while updating
+	 * backup counters and session-level lock. Otherwise they can be
+	 * updated inconsistently, which might cause do_pg_abort_backup()
+	 * to fail.
 	 */
 	if (exclusive)
 	{
 		WALInsertLockAcquireExclusive();
 		XLogCtl->Insert.exclusiveBackupState = EXCLUSIVE_BACKUP_IN_PROGRESS;
-		WALInsertLockRelease();
+
+		/* Set session-level lock */
 		sessionBackupState = SESSION_BACKUP_EXCLUSIVE;
+		WALInsertLockRelease();
 	}
 	else
 		sessionBackupState = SESSION_BACKUP_NON_EXCLUSIVE;
@@ -10838,7 +10845,11 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)
 	}

 	/*
-	 * OK to update backup counters and forcePageWrites
+	 * OK to update backup counters, forcePageWrites and session-level lock.
+	 *
+	 * Note that CHECK_FOR_INTERRUPTS() must not occur while updating them.
+	 * Otherwise they can be updated inconsistently, which might cause
+	 * do_pg_abort_backup() to fail.
 	 */
 	WALInsertLockAcquireExclusive();
 	if (exclusive)
@@ -10862,11 +10873,20 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)
 	{
 		XLogCtl->Insert.forcePageWrites = false;
 	}
-	WALInsertLockRelease();

-	/* Clean up session-level lock */
+	/*
+	 * Clean up session-level lock.
+	 *
+	 * You might think that WALInsertLockRelease() can be called
+	 * before cleaning up session-level lock because session-level
+	 * lock doesn't need to be protected with WAL insertion lock.
+	 * But since CHECK_FOR_INTERRUPTS() can occur in it,
+	 * session-level lock must be cleaned up before it.
+	 */
 	sessionBackupState = SESSION_BACKUP_NONE;

+	WALInsertLockRelease();
+
 	/*
 	 * Read and parse the START WAL LOCATION line (this code is pretty crude,
 	 * but we are not expecting any variability in the file format).
@@ -11104,8 +11124,16 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)
 void
 do_pg_abort_backup(void)
 {
+	/*
+	 * Quick exit if session is not keeping around a non-exclusive backup
+	 * already started.
+	 */
+	if (sessionBackupState == SESSION_BACKUP_NONE)
+		return;
+
 	WALInsertLockAcquireExclusive();
 	Assert(XLogCtl->Insert.nonExclusiveBackups > 0);
+	Assert(sessionBackupState == SESSION_BACKUP_NON_EXCLUSIVE);
 	XLogCtl->Insert.nonExclusiveBackups--;

 	if (XLogCtl->Insert.exclusiveBackupState == EXCLUSIVE_BACKUP_NONE &&
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index cd7d391b2f..05ca95bac2 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -215,7 +215,7 @@ perform_base_backup(basebackup_options *opt)
 	 * Once do_pg_start_backup has been called, ensure that any failure causes
 	 * us to abort the backup so we don't "leak" a backup counter. For this
 	 * reason, *all* functionality between do_pg_start_backup() and
-	 * do_pg_stop_backup() should be inside the error cleanup block!
+	 * the end of do_pg_stop_backup() should be inside the error cleanup block!
 	 */
 	PG_ENSURE_ERROR_CLEANUP(base_backup_cleanup, (Datum) 0);
@@ -324,10 +324,11 @@ perform_base_backup(basebackup_options *opt)
 		else
 			pq_putemptymessage('c');	/* CopyDone */
 	}
+
+	endptr = do_pg_stop_backup(labelfile->data, !opt->nowait, &endtli);
 	}
 	PG_END_ENSURE_ERROR_CLEANUP(base_backup_cleanup, (Datum) 0);

-	endptr = do_pg_stop_backup(labelfile->data, !opt->nowait, &endtli);

 	if (opt->includewal)
 	{

From 53cba77b53f98255bfbba9d2612d1a5685feec52 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Mon, 18 Dec 2017 16:00:35 -0500
Subject: [PATCH 0718/1087] doc: Fix figures in example description

oversight in 244c8b466a743d1ec18a7d841bf42669699b3b56

Reported-by: Blaz Merela
---
 doc/src/sgml/perform.sgml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index a51fd1c7dc..c32e95a154 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -388,7 +388,7 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2;
    as a result of caching that's expected to occur during the repeated
    index scans on t2.)  The costs of the loop node are then set on the
    basis of the cost of the outer
-   scan, plus one repetition of the inner scan for each outer row (10 * 7.87,
+   scan, plus one repetition of the inner scan for each outer row (10 * 7.91,
    here), plus a little CPU time for join processing.

From 25d532698d74f4adb34f013f1a287a0029e31fb1 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Mon, 18 Dec 2017 16:59:10 -0500
Subject: [PATCH 0719/1087] Move SCRAM-related name definitions to
 scram-common.h

The SCRAM mechanism names and channel binding type names have so far
been obtained from scram.h by the libpq frontend code, even though that
header otherwise references a set of routines which are only used by
the backend. scram-common.h, on the contrary, is usable by both the
backend and libpq, so getting those names from there seems more
reasonable.
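
As an illustrative aside (not part of the patch), this is roughly why
frontend code wants these names. The chooser function and its arguments
are hypothetical; the constants are the ones defined below, and libpq's
real selection logic lives in fe-auth.c.

    #include <stdbool.h>

    #include "common/scram-common.h"

    static const char *
    choose_sasl_mechanism(bool ssl_in_use, bool server_offers_plus)
    {
        if (ssl_in_use && server_offers_plus)
            return SCRAM_SHA256_PLUS_NAME;  /* channel binding available */
        return SCRAM_SHA256_NAME;
    }
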
Author: Michael Paquier --- src/backend/libpq/auth.c | 1 + src/include/common/scram-common.h | 7 +++++++ src/include/libpq/scram.h | 7 ------- src/interfaces/libpq/fe-auth-scram.c | 1 - src/interfaces/libpq/fe-auth.c | 2 +- 5 files changed, 9 insertions(+), 9 deletions(-) diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 19a91ca67d..b7f9bb1669 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -26,6 +26,7 @@ #include "commands/user.h" #include "common/ip.h" #include "common/md5.h" +#include "common/scram-common.h" #include "libpq/auth.h" #include "libpq/crypt.h" #include "libpq/libpq.h" diff --git a/src/include/common/scram-common.h b/src/include/common/scram-common.h index 0c5ee04f26..857a60e71f 100644 --- a/src/include/common/scram-common.h +++ b/src/include/common/scram-common.h @@ -15,6 +15,13 @@ #include "common/sha2.h" +/* Name of SCRAM mechanisms per IANA */ +#define SCRAM_SHA256_NAME "SCRAM-SHA-256" +#define SCRAM_SHA256_PLUS_NAME "SCRAM-SHA-256-PLUS" /* with channel binding */ + +/* Channel binding types */ +#define SCRAM_CHANNEL_BINDING_TLS_UNIQUE "tls-unique" + /* Length of SCRAM keys (client and server) */ #define SCRAM_KEY_LEN PG_SHA256_DIGEST_LENGTH diff --git a/src/include/libpq/scram.h b/src/include/libpq/scram.h index 91f1e0f2c7..2c245813d6 100644 --- a/src/include/libpq/scram.h +++ b/src/include/libpq/scram.h @@ -13,13 +13,6 @@ #ifndef PG_SCRAM_H #define PG_SCRAM_H -/* Name of SCRAM mechanisms per IANA */ -#define SCRAM_SHA256_NAME "SCRAM-SHA-256" -#define SCRAM_SHA256_PLUS_NAME "SCRAM-SHA-256-PLUS" /* with channel binding */ - -/* Channel binding types */ -#define SCRAM_CHANNEL_BINDING_TLS_UNIQUE "tls-unique" - /* Status codes for message exchange */ #define SASL_EXCHANGE_CONTINUE 0 #define SASL_EXCHANGE_SUCCESS 1 diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index 5b783bc313..4cad93c24a 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -17,7 +17,6 @@ #include "common/base64.h" #include "common/saslprep.h" #include "common/scram-common.h" -#include "libpq/scram.h" #include "fe-auth.h" /* These are needed for getpid(), in the fallback implementation */ diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c index f54ad8e0cc..2cfdb7c125 100644 --- a/src/interfaces/libpq/fe-auth.c +++ b/src/interfaces/libpq/fe-auth.c @@ -39,8 +39,8 @@ #endif #include "common/md5.h" +#include "common/scram-common.h" #include "libpq-fe.h" -#include "libpq/scram.h" #include "fe-auth.h" From ab9e0e718acb9ded7e4c4b5cedc1d410690ea6ba Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Mon, 18 Dec 2017 14:23:19 -0800 Subject: [PATCH 0720/1087] Add shared tuplestores. SharedTuplestore allows multiple participants to write into it and then read the tuples back from it in parallel. Each reader receives partial results. For now it always uses disk files, but other buffering policies and other kinds of scans (ie each reader receives complete results) may be useful in future. The upcoming parallel hash join feature will use this facility. 
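
As an illustrative aside (not part of the patch), the intended call
sequence, pieced together from the function comments below, is roughly
this; error handling, shared-memory sizing via sts_estimate(), and the
scan loop are elided, and the helper itself is hypothetical.

    #include "postgres.h"

    #include "access/htup.h"
    #include "storage/sharedfileset.h"
    #include "utils/sharedtuplestore.h"

    static void
    sts_example(SharedTuplestore *sts, SharedFileSet *fileset,
                int nparticipants, int me, MinimalTuple tuple)
    {
        SharedTuplestoreAccessor *acc;

        /* One backend initializes; the others call sts_attach() instead. */
        acc = sts_initialize(sts, nparticipants, me,
                             0,     /* no per-tuple meta-data */
                             0,     /* flags */
                             fileset, "sts_example");

        sts_puttuple(acc, NULL, tuple); /* every writer may add tuples */
        sts_end_write(acc);             /* all writers must do this ... */

        sts_begin_parallel_scan(acc);   /* ... before anyone starts scanning */
        /* each reader then loops over sts_parallel_scan_next(acc, ...) */
        sts_end_parallel_scan(acc);
    }
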
Author: Thomas Munro Reviewed-By: Peter Geoghegan, Andres Freund, Robert Haas Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com --- src/backend/storage/lmgr/lwlock.c | 2 + src/backend/utils/sort/Makefile | 2 +- src/backend/utils/sort/sharedtuplestore.c | 633 ++++++++++++++++++++++ src/include/storage/lwlock.h | 1 + src/include/utils/sharedtuplestore.h | 60 ++ src/tools/pgindent/typedefs.list | 2 + 6 files changed, 699 insertions(+), 1 deletion(-) create mode 100644 src/backend/utils/sort/sharedtuplestore.c create mode 100644 src/include/utils/sharedtuplestore.h diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c index 46f5c4277d..eab98b0760 100644 --- a/src/backend/storage/lmgr/lwlock.c +++ b/src/backend/storage/lmgr/lwlock.c @@ -516,6 +516,8 @@ RegisterLWLockTranches(void) "session_record_table"); LWLockRegisterTranche(LWTRANCHE_SESSION_TYPMOD_TABLE, "session_typmod_table"); + LWLockRegisterTranche(LWTRANCHE_SHARED_TUPLESTORE, + "shared_tuplestore"); LWLockRegisterTranche(LWTRANCHE_TBM, "tbm"); LWLockRegisterTranche(LWTRANCHE_PARALLEL_APPEND, "parallel_append"); diff --git a/src/backend/utils/sort/Makefile b/src/backend/utils/sort/Makefile index 370b12cee6..d8c08ac25e 100644 --- a/src/backend/utils/sort/Makefile +++ b/src/backend/utils/sort/Makefile @@ -14,7 +14,7 @@ include $(top_builddir)/src/Makefile.global override CPPFLAGS := -I. -I$(srcdir) $(CPPFLAGS) -OBJS = logtape.o sortsupport.o tuplesort.o tuplestore.o +OBJS = logtape.o sharedtuplestore.o sortsupport.o tuplesort.o tuplestore.o tuplesort.o: qsort_tuple.c diff --git a/src/backend/utils/sort/sharedtuplestore.c b/src/backend/utils/sort/sharedtuplestore.c new file mode 100644 index 0000000000..2c8505672c --- /dev/null +++ b/src/backend/utils/sort/sharedtuplestore.c @@ -0,0 +1,633 @@ +/*------------------------------------------------------------------------- + * + * sharedtuplestore.c + * Simple mechanism for sharing tuples between backends. + * + * This module contains a shared temporary tuple storage mechanism providing + * a parallel-aware subset of the features of tuplestore.c. Multiple backends + * can write to a SharedTuplestore, and then multiple backends can later scan + * the stored tuples. Currently, the only scan type supported is a parallel + * scan where each backend reads an arbitrary subset of the tuples that were + * written. + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/backend/util/sort/sharedtuplestore.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres.h" + +#include "access/htup.h" +#include "access/htup_details.h" +#include "miscadmin.h" +#include "storage/buffile.h" +#include "storage/lwlock.h" +#include "storage/sharedfileset.h" +#include "utils/sharedtuplestore.h" + +#include + +/* + * The size of chunks, in pages. This is somewhat arbitrarily set to match + * the size of HASH_CHUNK, so that Parallel Hash obtains new chunks of tuples + * at approximately the same rate as it allocates new chunks of memory to + * insert them into. + */ +#define STS_CHUNK_PAGES 4 +#define STS_CHUNK_HEADER_SIZE offsetof(SharedTuplestoreChunk, data) +#define STS_CHUNK_DATA_SIZE (STS_CHUNK_PAGES * BLCKSZ - STS_CHUNK_HEADER_SIZE) + +/* Chunk written to disk. */ +typedef struct SharedTuplestoreChunk +{ + int ntuples; /* Number of tuples in this chunk. 
*/ + int overflow; /* If overflow, how many including this one? */ + char data[FLEXIBLE_ARRAY_MEMBER]; +} SharedTuplestoreChunk; + +/* Per-participant shared state. */ +typedef struct SharedTuplestoreParticipant +{ + LWLock lock; + BlockNumber read_page; /* Page number for next read. */ + BlockNumber npages; /* Number of pages written. */ + bool writing; /* Used only for assertions. */ +} SharedTuplestoreParticipant; + +/* The control object that lives in shared memory. */ +struct SharedTuplestore +{ + int nparticipants; /* Number of participants that can write. */ + int flags; /* Flag bits from SHARED_TUPLESTORE_XXX */ + size_t meta_data_size; /* Size of per-tuple header. */ + char name[NAMEDATALEN]; /* A name for this tuplestore. */ + + /* Followed by per-participant shared state. */ + SharedTuplestoreParticipant participants[FLEXIBLE_ARRAY_MEMBER]; +}; + +/* Per-participant state that lives in backend-local memory. */ +struct SharedTuplestoreAccessor +{ + int participant; /* My participant number. */ + SharedTuplestore *sts; /* The shared state. */ + SharedFileSet *fileset; /* The SharedFileSet holding files. */ + MemoryContext context; /* Memory context for buffers. */ + + /* State for reading. */ + int read_participant; /* The current participant to read from. */ + BufFile *read_file; /* The current file to read from. */ + int read_ntuples_available; /* The number of tuples in chunk. */ + int read_ntuples; /* How many tuples have we read from chunk? */ + size_t read_bytes; /* How many bytes have we read from chunk? */ + char *read_buffer; /* A buffer for loading tuples. */ + size_t read_buffer_size; + BlockNumber read_next_page; /* Lowest block we'll consider reading. */ + + /* State for writing. */ + SharedTuplestoreChunk *write_chunk; /* Buffer for writing. */ + BufFile *write_file; /* The current file to write to. */ + BlockNumber write_page; /* The next page to write to. */ + char *write_pointer; /* Current write pointer within chunk. */ + char *write_end; /* One past the end of the current chunk. */ +}; + +static void sts_filename(char *name, SharedTuplestoreAccessor *accessor, + int participant); + +/* + * Return the amount of shared memory required to hold SharedTuplestore for a + * given number of participants. + */ +size_t +sts_estimate(int participants) +{ + return offsetof(SharedTuplestore, participants) + + sizeof(SharedTuplestoreParticipant) * participants; +} + +/* + * Initialize a SharedTuplestore in existing shared memory. There must be + * space for sts_estimate(participants) bytes. If flags includes the value + * SHARED_TUPLESTORE_SINGLE_PASS, the files may in future be removed more + * eagerly (but this isn't yet implemented). + * + * Tuples that are stored may optionally carry a piece of fixed sized + * meta-data which will be retrieved along with the tuple. This is useful for + * the hash values used in multi-batch hash joins, but could have other + * applications. + * + * The caller must supply a SharedFileSet, which is essentially a directory + * that will be cleaned up automatically, and a name which must be unique + * across all SharedTuplestores created in the same SharedFileSet. 
+ */
+SharedTuplestoreAccessor *
+sts_initialize(SharedTuplestore *sts, int participants,
+			   int my_participant_number,
+			   size_t meta_data_size,
+			   int flags,
+			   SharedFileSet *fileset,
+			   const char *name)
+{
+	SharedTuplestoreAccessor *accessor;
+	int			i;
+
+	Assert(my_participant_number < participants);
+
+	sts->nparticipants = participants;
+	sts->meta_data_size = meta_data_size;
+	sts->flags = flags;
+
+	if (strlen(name) > sizeof(sts->name) - 1)
+		elog(ERROR, "SharedTuplestore name too long");
+	strcpy(sts->name, name);
+
+	/*
+	 * Limit meta-data so it + tuple size always fits into a single chunk.
+	 * sts_puttuple() and sts_read_tuple() could be made to support scenarios
+	 * where that's not the case, but it's not currently required. If so,
+	 * meta-data size probably should be made variable, too.
+	 */
+	if (meta_data_size + sizeof(uint32) >= STS_CHUNK_DATA_SIZE)
+		elog(ERROR, "meta-data too long");
+
+	for (i = 0; i < participants; ++i)
+	{
+		LWLockInitialize(&sts->participants[i].lock,
+						 LWTRANCHE_SHARED_TUPLESTORE);
+		sts->participants[i].read_page = 0;
+		sts->participants[i].writing = false;
+	}
+
+	accessor = palloc0(sizeof(SharedTuplestoreAccessor));
+	accessor->participant = my_participant_number;
+	accessor->sts = sts;
+	accessor->fileset = fileset;
+	accessor->context = CurrentMemoryContext;
+
+	return accessor;
+}
+
+/*
+ * Attach to a SharedTuplestore that has been initialized by another backend,
+ * so that this backend can read and write tuples.
+ */
+SharedTuplestoreAccessor *
+sts_attach(SharedTuplestore *sts,
+		   int my_participant_number,
+		   SharedFileSet *fileset)
+{
+	SharedTuplestoreAccessor *accessor;
+
+	Assert(my_participant_number < sts->nparticipants);
+
+	accessor = palloc0(sizeof(SharedTuplestoreAccessor));
+	accessor->participant = my_participant_number;
+	accessor->sts = sts;
+	accessor->fileset = fileset;
+	accessor->context = CurrentMemoryContext;
+
+	return accessor;
+}
+
+static void
+sts_flush_chunk(SharedTuplestoreAccessor *accessor)
+{
+	size_t		size;
+	size_t		written;
+
+	size = STS_CHUNK_PAGES * BLCKSZ;
+	written = BufFileWrite(accessor->write_file, accessor->write_chunk, size);
+	if (written != size)
+		ereport(ERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write to temporary file: %m")));
+	memset(accessor->write_chunk, 0, size);
+	accessor->write_pointer = &accessor->write_chunk->data[0];
+	accessor->sts->participants[accessor->participant].npages +=
+		STS_CHUNK_PAGES;
+}
+
+/*
+ * Finish writing tuples. This must be called by all backends that have
+ * written data before any backend begins reading it.
+ */
+void
+sts_end_write(SharedTuplestoreAccessor *accessor)
+{
+	if (accessor->write_file != NULL)
+	{
+		sts_flush_chunk(accessor);
+		BufFileClose(accessor->write_file);
+		pfree(accessor->write_chunk);
+		accessor->write_chunk = NULL;
+		accessor->write_file = NULL;
+		accessor->sts->participants[accessor->participant].writing = false;
+	}
+}
+
+/*
+ * Prepare to rescan. Only one participant must call this. After it returns,
+ * all participants may call sts_begin_parallel_scan() and then loop over
+ * sts_parallel_scan_next(). This function must not be called concurrently
+ * with a scan, and synchronization to avoid that is the caller's
+ * responsibility.
+ */
+void
+sts_reinitialize(SharedTuplestoreAccessor *accessor)
+{
+	int			i;
+
+	/*
+	 * Reset the shared read head for all participants' files.
+ */ + for (i = 0; i < accessor->sts->nparticipants; ++i) + { + accessor->sts->participants[i].read_page = 0; + } +} + +/* + * Begin scanning the contents in parallel. + */ +void +sts_begin_parallel_scan(SharedTuplestoreAccessor *accessor) +{ + int i PG_USED_FOR_ASSERTS_ONLY; + + /* End any existing scan that was in progress. */ + sts_end_parallel_scan(accessor); + + /* + * Any backend that might have written into this shared tuplestore must + * have called sts_end_write(), so that all buffers are flushed and the + * files have stopped growing. + */ + for (i = 0; i < accessor->sts->nparticipants; ++i) + Assert(!accessor->sts->participants[i].writing); + + /* + * We will start out reading the file that THIS backend wrote. There may + * be some caching locality advantage to that. + */ + accessor->read_participant = accessor->participant; + accessor->read_file = NULL; + accessor->read_next_page = 0; +} + +/* + * Finish a parallel scan, freeing associated backend-local resources. + */ +void +sts_end_parallel_scan(SharedTuplestoreAccessor *accessor) +{ + /* + * Here we could delete all files if SHARED_TUPLESTORE_SINGLE_PASS, but + * we'd probably need a reference count of current parallel scanners so we + * could safely do it only when the reference count reaches zero. + */ + if (accessor->read_file != NULL) + { + BufFileClose(accessor->read_file); + accessor->read_file = NULL; + } +} + +/* + * Write a tuple. If a meta-data size was provided to sts_initialize, then a + * pointer to meta data of that size must be provided. + */ +void +sts_puttuple(SharedTuplestoreAccessor *accessor, void *meta_data, + MinimalTuple tuple) +{ + size_t size; + + /* Do we have our own file yet? */ + if (accessor->write_file == NULL) + { + SharedTuplestoreParticipant *participant; + char name[MAXPGPATH]; + + /* Create one. Only this backend will write into it. */ + sts_filename(name, accessor, accessor->participant); + accessor->write_file = BufFileCreateShared(accessor->fileset, name); + + /* Set up the shared state for this backend's file. */ + participant = &accessor->sts->participants[accessor->participant]; + participant->writing = true; /* for assertions only */ + } + + /* Do we have space? */ + size = accessor->sts->meta_data_size + tuple->t_len; + if (accessor->write_pointer + size >= accessor->write_end) + { + if (accessor->write_chunk == NULL) + { + /* First time through. Allocate chunk. */ + accessor->write_chunk = (SharedTuplestoreChunk *) + MemoryContextAllocZero(accessor->context, + STS_CHUNK_PAGES * BLCKSZ); + accessor->write_chunk->ntuples = 0; + accessor->write_pointer = &accessor->write_chunk->data[0]; + accessor->write_end = (char *) + accessor->write_chunk + STS_CHUNK_PAGES * BLCKSZ; + } + else + { + /* See if flushing helps. */ + sts_flush_chunk(accessor); + } + + /* It may still not be enough in the case of a gigantic tuple. */ + if (accessor->write_pointer + size >= accessor->write_end) + { + size_t written; + + /* + * We'll write the beginning of the oversized tuple, and then + * write the rest in some number of 'overflow' chunks. + * + * sts_initialize() verifies that the size of the tuple + + * meta-data always fits into a chunk. Because the chunk has been + * flushed above, we can be sure to have all of a chunk's usable + * space available. + */ + Assert(accessor->write_pointer + accessor->sts->meta_data_size + + sizeof(uint32) < accessor->write_end); + + /* Write the meta-data as one chunk. 
 */
+        if (accessor->sts->meta_data_size > 0)
+            memcpy(accessor->write_pointer, meta_data,
+                   accessor->sts->meta_data_size);
+
+        /*
+         * Write as much of the tuple as we can fit. This includes the
+         * tuple's size at the start.
+         */
+        written = accessor->write_end - accessor->write_pointer -
+            accessor->sts->meta_data_size;
+        memcpy(accessor->write_pointer + accessor->sts->meta_data_size,
+               tuple, written);
+        ++accessor->write_chunk->ntuples;
+        size -= accessor->sts->meta_data_size;
+        size -= written;
+        /* Now write as many overflow chunks as we need for the rest. */
+        while (size > 0)
+        {
+            size_t written_this_chunk;
+
+            sts_flush_chunk(accessor);
+
+            /*
+             * How many overflow chunks to go? This will allow readers to
+             * skip all of them at once instead of reading each one.
+             */
+            accessor->write_chunk->overflow = (size + STS_CHUNK_DATA_SIZE - 1) /
+                STS_CHUNK_DATA_SIZE;
+            written_this_chunk =
+                Min(accessor->write_end - accessor->write_pointer, size);
+            memcpy(accessor->write_pointer, (char *) tuple + written,
+                   written_this_chunk);
+            accessor->write_pointer += written_this_chunk;
+            size -= written_this_chunk;
+            written += written_this_chunk;
+        }
+        return;
+    }
+}
+
+    /* Copy meta-data and tuple into buffer. */
+    if (accessor->sts->meta_data_size > 0)
+        memcpy(accessor->write_pointer, meta_data,
+               accessor->sts->meta_data_size);
+    memcpy(accessor->write_pointer + accessor->sts->meta_data_size, tuple,
+           tuple->t_len);
+    accessor->write_pointer += size;
+    ++accessor->write_chunk->ntuples;
+}
+
+static MinimalTuple
+sts_read_tuple(SharedTuplestoreAccessor *accessor, void *meta_data)
+{
+    MinimalTuple tuple;
+    uint32      size;
+    size_t      remaining_size;
+    size_t      this_chunk_size;
+    char       *destination;
+
+    /*
+     * We'll keep track of bytes read from this chunk so that we can detect an
+     * overflowing tuple and switch to reading overflow pages.
+ */ + if (accessor->sts->meta_data_size > 0) + { + if (BufFileRead(accessor->read_file, + meta_data, + accessor->sts->meta_data_size) != + accessor->sts->meta_data_size) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read from shared tuplestore temporary file"), + errdetail_internal("Short read while reading meta-data."))); + accessor->read_bytes += accessor->sts->meta_data_size; + } + if (BufFileRead(accessor->read_file, + &size, + sizeof(size)) != sizeof(size)) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read from shared tuplestore temporary file"), + errdetail_internal("Short read while reading size."))); + accessor->read_bytes += sizeof(size); + if (size > accessor->read_buffer_size) + { + size_t new_read_buffer_size; + + if (accessor->read_buffer != NULL) + pfree(accessor->read_buffer); + new_read_buffer_size = Max(size, accessor->read_buffer_size * 2); + accessor->read_buffer = + MemoryContextAlloc(accessor->context, new_read_buffer_size); + accessor->read_buffer_size = new_read_buffer_size; + } + remaining_size = size - sizeof(uint32); + this_chunk_size = Min(remaining_size, + BLCKSZ * STS_CHUNK_PAGES - accessor->read_bytes); + destination = accessor->read_buffer + sizeof(uint32); + if (BufFileRead(accessor->read_file, + destination, + this_chunk_size) != this_chunk_size) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read from shared tuplestore temporary file"), + errdetail_internal("Short read while reading tuple."))); + accessor->read_bytes += this_chunk_size; + remaining_size -= this_chunk_size; + destination += this_chunk_size; + ++accessor->read_ntuples; + + /* Check if we need to read any overflow chunks. */ + while (remaining_size > 0) + { + /* We are now positioned at the start of an overflow chunk. */ + SharedTuplestoreChunk chunk_header; + + if (BufFileRead(accessor->read_file, &chunk_header, STS_CHUNK_HEADER_SIZE) != + STS_CHUNK_HEADER_SIZE) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read from shared tuplestore temporary file"), + errdetail_internal("Short read while reading overflow chunk header."))); + accessor->read_bytes = STS_CHUNK_HEADER_SIZE; + if (chunk_header.overflow == 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("unexpected chunk in shared tuplestore temporary file"), + errdetail_internal("Expected overflow chunk."))); + accessor->read_next_page += STS_CHUNK_PAGES; + this_chunk_size = Min(remaining_size, + BLCKSZ * STS_CHUNK_PAGES - + STS_CHUNK_HEADER_SIZE); + if (BufFileRead(accessor->read_file, + destination, + this_chunk_size) != this_chunk_size) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read from shared tuplestore temporary file"), + errdetail_internal("Short read while reading tuple."))); + accessor->read_bytes += this_chunk_size; + remaining_size -= this_chunk_size; + destination += this_chunk_size; + + /* + * These will be used to count regular tuples following the oversized + * tuple that spilled into this overflow chunk. + */ + accessor->read_ntuples = 0; + accessor->read_ntuples_available = chunk_header.ntuples; + } + + tuple = (MinimalTuple) accessor->read_buffer; + tuple->t_len = size; + + return tuple; +} + +/* + * Get the next tuple in the current parallel scan. + */ +MinimalTuple +sts_parallel_scan_next(SharedTuplestoreAccessor *accessor, void *meta_data) +{ + SharedTuplestoreParticipant *p; + BlockNumber read_page; + bool eof; + + for (;;) + { + /* Can we read more tuples from the current chunk? 
*/ + if (accessor->read_ntuples < accessor->read_ntuples_available) + return sts_read_tuple(accessor, meta_data); + + /* Find the location of a new chunk to read. */ + p = &accessor->sts->participants[accessor->read_participant]; + + LWLockAcquire(&p->lock, LW_EXCLUSIVE); + /* We can skip directly past overflow pages we know about. */ + if (p->read_page < accessor->read_next_page) + p->read_page = accessor->read_next_page; + eof = p->read_page >= p->npages; + if (!eof) + { + /* Claim the next chunk. */ + read_page = p->read_page; + /* Advance the read head for the next reader. */ + p->read_page += STS_CHUNK_PAGES; + accessor->read_next_page = p->read_page; + } + LWLockRelease(&p->lock); + + if (!eof) + { + SharedTuplestoreChunk chunk_header; + + /* Make sure we have the file open. */ + if (accessor->read_file == NULL) + { + char name[MAXPGPATH]; + + sts_filename(name, accessor, accessor->read_participant); + accessor->read_file = + BufFileOpenShared(accessor->fileset, name); + } + + /* Seek and load the chunk header. */ + if (BufFileSeekBlock(accessor->read_file, read_page) != 0) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read from shared tuplestore temporary file"), + errdetail_internal("Could not seek to next block."))); + if (BufFileRead(accessor->read_file, &chunk_header, + STS_CHUNK_HEADER_SIZE) != STS_CHUNK_HEADER_SIZE) + ereport(ERROR, + (errcode_for_file_access(), + errmsg("could not read from shared tuplestore temporary file"), + errdetail_internal("Short read while reading chunk header."))); + + /* + * If this is an overflow chunk, we skip it and any following + * overflow chunks all at once. + */ + if (chunk_header.overflow > 0) + { + accessor->read_next_page = read_page + + chunk_header.overflow * STS_CHUNK_PAGES; + continue; + } + + accessor->read_ntuples = 0; + accessor->read_ntuples_available = chunk_header.ntuples; + accessor->read_bytes = STS_CHUNK_HEADER_SIZE; + + /* Go around again, so we can get a tuple from this chunk. */ + } + else + { + if (accessor->read_file != NULL) + { + BufFileClose(accessor->read_file); + accessor->read_file = NULL; + } + + /* + * Try the next participant's file. If we've gone full circle, + * we're done. + */ + accessor->read_participant = (accessor->read_participant + 1) % + accessor->sts->nparticipants; + if (accessor->read_participant == accessor->participant) + break; + accessor->read_next_page = 0; + + /* Go around again, so we can get a chunk from this file. */ + } + } + + return NULL; +} + +/* + * Create the name used for the BufFile that a given participant will write. 
+ */
+static void
+sts_filename(char *name, SharedTuplestoreAccessor *accessor, int participant)
+{
+    snprintf(name, MAXPGPATH, "%s.p%d", accessor->sts->name, participant);
+}
diff --git a/src/include/storage/lwlock.h b/src/include/storage/lwlock.h
index 460843d73e..a347ee4d7d 100644
--- a/src/include/storage/lwlock.h
+++ b/src/include/storage/lwlock.h
@@ -215,6 +215,7 @@ typedef enum BuiltinTrancheIds
 	LWTRANCHE_SESSION_DSA,
 	LWTRANCHE_SESSION_RECORD_TABLE,
 	LWTRANCHE_SESSION_TYPMOD_TABLE,
+	LWTRANCHE_SHARED_TUPLESTORE,
 	LWTRANCHE_TBM,
 	LWTRANCHE_PARALLEL_APPEND,
 	LWTRANCHE_FIRST_USER_DEFINED
diff --git a/src/include/utils/sharedtuplestore.h b/src/include/utils/sharedtuplestore.h
new file mode 100644
index 0000000000..49490ec414
--- /dev/null
+++ b/src/include/utils/sharedtuplestore.h
@@ -0,0 +1,60 @@
+/*-------------------------------------------------------------------------
+ *
+ * sharedtuplestore.h
+ *	  Simple mechanism for sharing tuples between backends.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/utils/sharedtuplestore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef SHAREDTUPLESTORE_H
+#define SHAREDTUPLESTORE_H
+
+#include "storage/fd.h"
+#include "storage/sharedfileset.h"
+
+struct SharedTuplestore;
+typedef struct SharedTuplestore SharedTuplestore;
+
+struct SharedTuplestoreAccessor;
+typedef struct SharedTuplestoreAccessor SharedTuplestoreAccessor;
+
+/*
+ * A flag indicating that the tuplestore will only be scanned once, so backing
+ * files can be unlinked early.
+ */
+#define SHARED_TUPLESTORE_SINGLE_PASS 0x01
+
+extern size_t sts_estimate(int participants);
+
+extern SharedTuplestoreAccessor *sts_initialize(SharedTuplestore *sts,
+               int participants,
+               int my_participant_number,
+               size_t meta_data_size,
+               int flags,
+               SharedFileSet *fileset,
+               const char *name);
+
+extern SharedTuplestoreAccessor *sts_attach(SharedTuplestore *sts,
+           int my_participant_number,
+           SharedFileSet *fileset);
+
+extern void sts_end_write(SharedTuplestoreAccessor *accessor);
+
+extern void sts_reinitialize(SharedTuplestoreAccessor *accessor);
+
+extern void sts_begin_parallel_scan(SharedTuplestoreAccessor *accessor);
+
+extern void sts_end_parallel_scan(SharedTuplestoreAccessor *accessor);
+
+extern void sts_puttuple(SharedTuplestoreAccessor *accessor,
+            void *meta_data,
+            MinimalTuple tuple);
+
+extern MinimalTuple sts_parallel_scan_next(SharedTuplestoreAccessor *accessor,
+                   void *meta_data);
+
+#endif							/* SHAREDTUPLESTORE_H */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 72eb9fd390..e308e20184 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2038,6 +2038,8 @@ SharedRecordTableEntry
 SharedRecordTableKey
 SharedRecordTypmodRegistry
 SharedSortInfo
+SharedTuplestore
+SharedTuplestoreAccessor
 SharedTypmodTableEntry
 ShellTypeInfo
 ShippableCacheEntry

From 4bbf110d2fb4f74b9385bd5a521f824dfa5f15ec Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Mon, 18 Dec 2017 18:05:24 -0500
Subject: [PATCH 0721/1087] Add libpq connection parameter "scram_channel_binding"

This parameter can be used to enforce the channel binding type used
during a SCRAM authentication.  This can be useful to check code paths
where an invalid channel binding type is used by a client and will be
even more useful to allow testing other channel binding types when they
are added.
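For example, a client could request or refuse channel binding with
connection strings like these (illustrative host and database values):

    "host=example.com dbname=test scram_channel_binding=tls-unique"
    "host=example.com dbname=test scram_channel_binding=''"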
The default value is tls-unique, which is what RFC 5802 specifies.
Clients can optionally specify an empty value, which has the effect of
not using channel binding and of using SCRAM-SHA-256 as the chosen SASL
mechanism.

More tests for SCRAM and channel binding are added to the SSL test
suite.

Author: Michael Paquier
---
 doc/src/sgml/libpq.sgml              | 24 ++++++++++++++++++++++++
 src/interfaces/libpq/fe-auth-scram.c | 20 +++++++++++++++-----
 src/interfaces/libpq/fe-auth.c       |  9 ++++++---
 src/interfaces/libpq/fe-auth.h       |  1 +
 src/interfaces/libpq/fe-connect.c    |  9 +++++++++
 src/interfaces/libpq/libpq-int.h     |  1 +
 src/test/ssl/t/002_scram.pl          | 14 +++++++++++++-
 7 files changed, 69 insertions(+), 9 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 4703309254..4e4645136c 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1222,6 +1222,30 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
+
+     scram_channel_binding
+
+
+      Specifies the channel binding type to use with SCRAM authentication.
+      The list of channel binding types supported by the server is listed in
+      .  An empty value specifies that
+      the client will not use channel binding.  The default value is
+      tls-unique.
+
+
+      Channel binding is only supported on SSL connections.  If the
+      connection is not using SSL, then this setting is ignored.
+
+
+      This parameter is mainly intended for protocol testing.  In normal
+      use, there should not be a need to choose a channel binding type other
+      than the default one.
+
+
+
 
      sslmode
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 4cad93c24a..b8f7a6b5be 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -93,6 +93,7 @@ pg_fe_scram_init(const char *username,
              const char *password,
              bool ssl_in_use,
              const char *sasl_mechanism,
+             const char *channel_binding_type,
              char *tls_finished_message,
              size_t tls_finished_len)
 {
@@ -112,17 +113,14 @@ pg_fe_scram_init(const char *username,
     state->tls_finished_message = tls_finished_message;
     state->tls_finished_len = tls_finished_len;
     state->sasl_mechanism = strdup(sasl_mechanism);
+    state->channel_binding_type = channel_binding_type;
+
     if (!state->sasl_mechanism)
     {
         free(state);
         return NULL;
     }
 
-    /*
-     * Store channel binding type.  Only one type is currently supported.
-     */
-    state->channel_binding_type = SCRAM_CHANNEL_BINDING_TLS_UNIQUE;
-
     /* Normalize the password with SASLprep, if possible */
     rc = pg_saslprep(password, &prep_password);
     if (rc == SASLPREP_OOM)
@@ -375,6 +373,15 @@ build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage)
         Assert(state->ssl_in_use);
         appendPQExpBuffer(&buf, "p=%s", state->channel_binding_type);
     }
+    else if (state->channel_binding_type == NULL ||
+             strlen(state->channel_binding_type) == 0)
+    {
+        /*
+         * The client has chosen not to advertise to the server that it
+         * supports channel binding.
+ */ + appendPQExpBuffer(&buf, "n"); + } else if (state->ssl_in_use) { /* @@ -493,6 +500,9 @@ build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) free(cbind_input); } + else if (state->channel_binding_type == NULL || + strlen(state->channel_binding_type) == 0) + appendPQExpBuffer(&buf, "c=biws"); /* base64 of "n,," */ else if (state->ssl_in_use) appendPQExpBuffer(&buf, "c=eSws"); /* base64 of "y,," */ else diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c index 2cfdb7c125..3340a9ad93 100644 --- a/src/interfaces/libpq/fe-auth.c +++ b/src/interfaces/libpq/fe-auth.c @@ -528,11 +528,13 @@ pg_SASL_init(PGconn *conn, int payloadlen) /* * Select the mechanism to use. Pick SCRAM-SHA-256-PLUS over anything - * else. Pick SCRAM-SHA-256 if nothing else has already been picked. - * If we add more mechanisms, a more refined priority mechanism might - * become necessary. + * else if a channel binding type is set. Pick SCRAM-SHA-256 if + * nothing else has already been picked. If we add more mechanisms, a + * more refined priority mechanism might become necessary. */ if (conn->ssl_in_use && + conn->scram_channel_binding && + strlen(conn->scram_channel_binding) > 0 && strcmp(mechanism_buf.data, SCRAM_SHA256_PLUS_NAME) == 0) selected_mechanism = SCRAM_SHA256_PLUS_NAME; else if (strcmp(mechanism_buf.data, SCRAM_SHA256_NAME) == 0 && @@ -591,6 +593,7 @@ pg_SASL_init(PGconn *conn, int payloadlen) password, conn->ssl_in_use, selected_mechanism, + conn->scram_channel_binding, tls_finished, tls_finished_len); if (!conn->sasl_state) diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h index 3e92410eae..db319ac071 100644 --- a/src/interfaces/libpq/fe-auth.h +++ b/src/interfaces/libpq/fe-auth.h @@ -27,6 +27,7 @@ extern void *pg_fe_scram_init(const char *username, const char *password, bool ssl_in_use, const char *sasl_mechanism, + const char *channel_binding_type, char *tls_finished_message, size_t tls_finished_len); extern void pg_fe_scram_free(void *opaq); diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c index 2c175a2a24..68fb9a124a 100644 --- a/src/interfaces/libpq/fe-connect.c +++ b/src/interfaces/libpq/fe-connect.c @@ -71,6 +71,7 @@ static int ldapServiceLookup(const char *purl, PQconninfoOption *options, #endif #include "common/ip.h" +#include "common/scram-common.h" #include "mb/pg_wchar.h" #include "port/pg_bswap.h" @@ -122,6 +123,7 @@ static int ldapServiceLookup(const char *purl, PQconninfoOption *options, #define DefaultOption "" #define DefaultAuthtype "" #define DefaultTargetSessionAttrs "any" +#define DefaultSCRAMChannelBinding SCRAM_CHANNEL_BINDING_TLS_UNIQUE #ifdef USE_SSL #define DefaultSSLMode "prefer" #else @@ -262,6 +264,11 @@ static const internalPQconninfoOption PQconninfoOptions[] = { "TCP-Keepalives-Count", "", 10, /* strlen(INT32_MAX) == 10 */ offsetof(struct pg_conn, keepalives_count)}, + {"scram_channel_binding", NULL, DefaultSCRAMChannelBinding, NULL, + "SCRAM-Channel-Binding", "D", + 21, /* sizeof("tls-server-end-point") == 21 */ + offsetof(struct pg_conn, scram_channel_binding)}, + /* * ssl options are allowed even without client SSL support because the * client can still handle SSL modes "disable" and "allow". 
Other @@ -3469,6 +3476,8 @@ freePGconn(PGconn *conn) free(conn->keepalives_interval); if (conn->keepalives_count) free(conn->keepalives_count); + if (conn->scram_channel_binding) + free(conn->scram_channel_binding); if (conn->sslmode) free(conn->sslmode); if (conn->sslcert) diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h index 8412ee8160..f6c1023f37 100644 --- a/src/interfaces/libpq/libpq-int.h +++ b/src/interfaces/libpq/libpq-int.h @@ -349,6 +349,7 @@ struct pg_conn * retransmits */ char *keepalives_count; /* maximum number of TCP keepalive * retransmits */ + char *scram_channel_binding; /* SCRAM channel binding type */ char *sslmode; /* SSL mode (require,prefer,allow,disable) */ char *sslcompression; /* SSL compression (0 or 1) */ char *sslkey; /* client key filename */ diff --git a/src/test/ssl/t/002_scram.pl b/src/test/ssl/t/002_scram.pl index 25f75bd52a..324b4888d4 100644 --- a/src/test/ssl/t/002_scram.pl +++ b/src/test/ssl/t/002_scram.pl @@ -4,7 +4,7 @@ use warnings; use PostgresNode; use TestLib; -use Test::More tests => 1; +use Test::More tests => 4; use ServerSetup; use File::Copy; @@ -34,5 +34,17 @@ $common_connstr = "user=ssltestuser dbname=trustdb sslmode=require hostaddr=$SERVERHOSTADDR"; +# Default settings test_connect_ok($common_connstr, '', "SCRAM authentication with default channel binding"); + +# Channel binding settings +test_connect_ok($common_connstr, + "scram_channel_binding=tls-unique", + "SCRAM authentication with tls-unique as channel binding"); +test_connect_ok($common_connstr, + "scram_channel_binding=''", + "SCRAM authentication without channel binding"); +test_connect_fails($common_connstr, + "scram_channel_binding=not-exists", + "SCRAM authentication with invalid channel binding"); From 09a65f5a284491d865f3f2ca5a996e597b9cf696 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 19 Dec 2017 10:21:54 -0500 Subject: [PATCH 0722/1087] Mark a few parallelism-related variables with PGDLLIMPORT. Discussion: http://postgr.es/m/CAEepm=2HzxAOKU6eCWTyvMwBy=fhGvbwDPM_fVps759tkyQSYQ@mail.gmail.com --- src/include/access/parallel.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/include/access/parallel.h b/src/include/access/parallel.h index f4db88294a..d381afb285 100644 --- a/src/include/access/parallel.h +++ b/src/include/access/parallel.h @@ -52,8 +52,8 @@ typedef struct ParallelWorkerContext } ParallelWorkerContext; extern volatile bool ParallelMessagePending; -extern int ParallelWorkerNumber; -extern bool InitializingParallelWorker; +extern PGDLLIMPORT int ParallelWorkerNumber; +extern PGDLLIMPORT bool InitializingParallelWorker; #define IsParallelWorker() (ParallelWorkerNumber >= 0) From 38fc54703ea4203a537c58332f697c546eaa4bcf Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 19 Dec 2017 10:34:35 -0500 Subject: [PATCH 0723/1087] Re-fix wrong costing of Sort under Gather Merge. Commit dc02c7bca4dccf7de278cdc6b3325a829e75b252 changed this call to create_sort_path() to take -1 rather than limit_tuples because, at that time, there was no way for a Sort beneath a Gather Merge to become a top-N sort. Later, commit 3452dc5240da43e833118484e1e9b4894d04431c provided a way for a Sort beneath a Gather Merge to become a top-N sort, but failed to revert the previous commit in the process. Do that. Report and analysis by Jeff Janes; patch by Thomas Munro; review by Amit Kapila and by me. 
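As an illustration (an editorial sketch; tab and x are placeholders):

    SELECT * FROM tab ORDER BY x LIMIT 10;

With the limit passed down again, a Sort planned beneath a Gather Merge
for such a query is costed as a bounded top-N sort, matching the
executor's ability (from commit 3452dc5240) to use a top-N heapsort that
retains only about LIMIT tuples per worker.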
Discussion: http://postgr.es/m/CAEepm=1BWtC34vUroA0Uqjw02MaqdUrW+d6WD85_k8SLyPiKHQ@mail.gmail.com --- src/backend/optimizer/plan/planner.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index e8bc15c35d..382791fadb 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -5059,7 +5059,7 @@ create_ordered_paths(PlannerInfo *root, ordered_rel, cheapest_partial_path, root->sort_pathkeys, - -1.0); + limit_tuples); total_groups = cheapest_partial_path->rows * cheapest_partial_path->parallel_workers; From 8526bcb2df76d5171b4f4d6dc7a97560a73a5eff Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 19 Dec 2017 12:21:56 -0500 Subject: [PATCH 0724/1087] Try again to fix accumulation of parallel worker instrumentation. When a Gather or Gather Merge node is started and stopped multiple times, accumulate instrumentation data only once, at the end, instead of after each execution, to avoid recording inflated totals. Commit 778e78ae9fa51e58f41cbdc72b293291d02d8984, the previous attempt at a fix, instead reset the state after every execution, which worked for the general instrumentation data but had problems for the additional instrumentation specific to Sort and Hash nodes. Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila, following a design proposal from Thomas Munro, with a comment tweak by me. Discussion: http://postgr.es/m/20171127175631.GA405@depesz.com --- src/backend/executor/execParallel.c | 26 ++++---- src/backend/executor/nodeHash.c | 13 ---- src/backend/executor/nodeSort.c | 17 ------ src/include/executor/nodeHash.h | 1 - src/include/executor/nodeSort.h | 1 - src/test/regress/expected/select_parallel.out | 59 ++++++++++++++++++- src/test/regress/sql/select_parallel.sql | 28 ++++++++- 7 files changed, 96 insertions(+), 49 deletions(-) diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 6b6064637b..02b5aa517b 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -899,12 +899,8 @@ ExecParallelReInitializeDSM(PlanState *planstate, pcxt); break; case T_HashState: - /* even when not parallel-aware, for EXPLAIN ANALYZE */ - ExecHashReInitializeDSM((HashState *) planstate, pcxt); - break; case T_SortState: - /* even when not parallel-aware, for EXPLAIN ANALYZE */ - ExecSortReInitializeDSM((SortState *) planstate, pcxt); + /* these nodes have DSM state, but no reinitialization is required */ break; default: @@ -977,7 +973,7 @@ ExecParallelRetrieveInstrumentation(PlanState *planstate, /* * Finish parallel execution. We wait for parallel workers to finish, and - * accumulate their buffer usage and instrumentation. + * accumulate their buffer usage. */ void ExecParallelFinish(ParallelExecutorInfo *pei) @@ -1023,23 +1019,23 @@ ExecParallelFinish(ParallelExecutorInfo *pei) for (i = 0; i < nworkers; i++) InstrAccumParallelQuery(&pei->buffer_usage[i]); - /* Finally, accumulate instrumentation, if any. */ - if (pei->instrumentation) - ExecParallelRetrieveInstrumentation(pei->planstate, - pei->instrumentation); - pei->finished = true; } /* - * Clean up whatever ParallelExecutorInfo resources still exist after - * ExecParallelFinish. We separate these routines because someone might - * want to examine the contents of the DSM after ExecParallelFinish and - * before calling this routine. 
+ * Accumulate instrumentation, and then clean up whatever ParallelExecutorInfo + * resources still exist after ExecParallelFinish. We separate these + * routines because someone might want to examine the contents of the DSM + * after ExecParallelFinish and before calling this routine. */ void ExecParallelCleanup(ParallelExecutorInfo *pei) { + /* Accumulate instrumentation, if any. */ + if (pei->instrumentation) + ExecParallelRetrieveInstrumentation(pei->planstate, + pei->instrumentation); + /* Free any serialized parameters. */ if (DsaPointerIsValid(pei->param_exec)) { diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index 6fe5d69d55..afd7384e94 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -1669,19 +1669,6 @@ ExecHashInitializeDSM(HashState *node, ParallelContext *pcxt) node->shared_info); } -/* - * Reset shared state before beginning a fresh scan. - */ -void -ExecHashReInitializeDSM(HashState *node, ParallelContext *pcxt) -{ - if (node->shared_info != NULL) - { - memset(node->shared_info->hinstrument, 0, - node->shared_info->num_workers * sizeof(HashInstrumentation)); - } -} - /* * Locate the DSM space for hash table instrumentation data that we'll write * to at shutdown time. diff --git a/src/backend/executor/nodeSort.c b/src/backend/executor/nodeSort.c index 73aa3715e6..d593378f74 100644 --- a/src/backend/executor/nodeSort.c +++ b/src/backend/executor/nodeSort.c @@ -396,23 +396,6 @@ ExecSortInitializeDSM(SortState *node, ParallelContext *pcxt) node->shared_info); } -/* ---------------------------------------------------------------- - * ExecSortReInitializeDSM - * - * Reset shared state before beginning a fresh scan. - * ---------------------------------------------------------------- - */ -void -ExecSortReInitializeDSM(SortState *node, ParallelContext *pcxt) -{ - /* If there's any instrumentation space, clear it for next time */ - if (node->shared_info != NULL) - { - memset(node->shared_info->sinstrument, 0, - node->shared_info->num_workers * sizeof(TuplesortInstrumentation)); - } -} - /* ---------------------------------------------------------------- * ExecSortInitializeWorker * diff --git a/src/include/executor/nodeHash.h b/src/include/executor/nodeHash.h index 75d4c70f6f..0974f1edc2 100644 --- a/src/include/executor/nodeHash.h +++ b/src/include/executor/nodeHash.h @@ -52,7 +52,6 @@ extern int ExecHashGetSkewBucket(HashJoinTable hashtable, uint32 hashvalue); extern void ExecHashEstimate(HashState *node, ParallelContext *pcxt); extern void ExecHashInitializeDSM(HashState *node, ParallelContext *pcxt); extern void ExecHashInitializeWorker(HashState *node, ParallelWorkerContext *pwcxt); -extern void ExecHashReInitializeDSM(HashState *node, ParallelContext *pcxt); extern void ExecHashRetrieveInstrumentation(HashState *node); extern void ExecShutdownHash(HashState *node); extern void ExecHashGetInstrumentation(HashInstrumentation *instrument, diff --git a/src/include/executor/nodeSort.h b/src/include/executor/nodeSort.h index cc61a9db69..627a04c3fd 100644 --- a/src/include/executor/nodeSort.h +++ b/src/include/executor/nodeSort.h @@ -26,7 +26,6 @@ extern void ExecReScanSort(SortState *node); /* parallel instrumentation support */ extern void ExecSortEstimate(SortState *node, ParallelContext *pcxt); extern void ExecSortInitializeDSM(SortState *node, ParallelContext *pcxt); -extern void ExecSortReInitializeDSM(SortState *node, ParallelContext *pcxt); extern void ExecSortInitializeWorker(SortState *node, 
ParallelWorkerContext *pwcxt); extern void ExecSortRetrieveInstrumentation(SortState *node); diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 86a55922c8..7824ca52ca 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -465,14 +465,71 @@ select count(*) from bmscantest where a>1; 99999 (1 row) +-- test accumulation of stats for parallel nodes reset enable_seqscan; +alter table tenk2 set (parallel_workers = 0); +explain (analyze, timing off, summary off, costs off) + select count(*) from tenk1, tenk2 where tenk1.hundred > 1 + and tenk2.thousand=0; + QUERY PLAN +-------------------------------------------------------------------------- + Aggregate (actual rows=1 loops=1) + -> Nested Loop (actual rows=98000 loops=1) + -> Seq Scan on tenk2 (actual rows=10 loops=1) + Filter: (thousand = 0) + Rows Removed by Filter: 9990 + -> Gather (actual rows=9800 loops=10) + Workers Planned: 4 + Workers Launched: 4 + -> Parallel Seq Scan on tenk1 (actual rows=1960 loops=50) + Filter: (hundred > 1) + Rows Removed by Filter: 40 +(11 rows) + +alter table tenk2 reset (parallel_workers); +reset work_mem; +create function explain_parallel_sort_stats() returns setof text +language plpgsql as +$$ +declare ln text; +begin + for ln in + explain (analyze, timing off, summary off, costs off) + select * from + (select ten from tenk1 where ten < 100 order by ten) ss + right join (values (1),(2),(3)) v(x) on true + loop + ln := regexp_replace(ln, 'Memory: \S*', 'Memory: xxx'); + return next ln; + end loop; +end; +$$; +select * from explain_parallel_sort_stats(); + explain_parallel_sort_stats +-------------------------------------------------------------------------- + Nested Loop Left Join (actual rows=30000 loops=1) + -> Values Scan on "*VALUES*" (actual rows=3 loops=1) + -> Gather Merge (actual rows=10000 loops=3) + Workers Planned: 4 + Workers Launched: 4 + -> Sort (actual rows=2000 loops=15) + Sort Key: tenk1.ten + Sort Method: quicksort Memory: xxx + Worker 0: Sort Method: quicksort Memory: xxx + Worker 1: Sort Method: quicksort Memory: xxx + Worker 2: Sort Method: quicksort Memory: xxx + Worker 3: Sort Method: quicksort Memory: xxx + -> Parallel Seq Scan on tenk1 (actual rows=2000 loops=15) + Filter: (ten < 100) +(14 rows) + reset enable_indexscan; reset enable_hashjoin; reset enable_mergejoin; reset enable_material; reset effective_io_concurrency; -reset work_mem; drop table bmscantest; +drop function explain_parallel_sort_stats(); -- test parallel merge join path. 
set enable_hashjoin to off; set enable_nestloop to off; diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql index fb35ca3376..b12ba0b74a 100644 --- a/src/test/regress/sql/select_parallel.sql +++ b/src/test/regress/sql/select_parallel.sql @@ -179,14 +179,40 @@ insert into bmscantest select r, 'fooooooooooooooooooooooooooooooooooooooooooooo create index i_bmtest ON bmscantest(a); select count(*) from bmscantest where a>1; +-- test accumulation of stats for parallel nodes reset enable_seqscan; +alter table tenk2 set (parallel_workers = 0); +explain (analyze, timing off, summary off, costs off) + select count(*) from tenk1, tenk2 where tenk1.hundred > 1 + and tenk2.thousand=0; +alter table tenk2 reset (parallel_workers); + +reset work_mem; +create function explain_parallel_sort_stats() returns setof text +language plpgsql as +$$ +declare ln text; +begin + for ln in + explain (analyze, timing off, summary off, costs off) + select * from + (select ten from tenk1 where ten < 100 order by ten) ss + right join (values (1),(2),(3)) v(x) on true + loop + ln := regexp_replace(ln, 'Memory: \S*', 'Memory: xxx'); + return next ln; + end loop; +end; +$$; +select * from explain_parallel_sort_stats(); + reset enable_indexscan; reset enable_hashjoin; reset enable_mergejoin; reset enable_material; reset effective_io_concurrency; -reset work_mem; drop table bmscantest; +drop function explain_parallel_sort_stats(); -- test parallel merge join path. set enable_hashjoin to off; From 7d3583ad9ae54b44119973a9d6d731c9cc74c86e Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 19 Dec 2017 15:26:09 -0500 Subject: [PATCH 0725/1087] Test instrumentation of Hash nodes with parallel query. Commit 8526bcb2df76d5171b4f4d6dc7a97560a73a5eff fixed bugs related to both Sort and Hash, but only added a test case for Sort. This adds a test case for Hash to match. Thomas Munro Discussion: http://postgr.es/m/CAEepm=2-LRnfwUBZDqQt+XAcd0af_ykNyyVvP3h1uB1AQ=e-eA@mail.gmail.com --- src/test/regress/expected/join.out | 106 +++++++++++++++++++++++++++++ src/test/regress/sql/join.sql | 60 ++++++++++++++++ 2 files changed, 166 insertions(+) diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index 001d96dc2d..9d3abf0ed0 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -6187,6 +6187,112 @@ $$); 1 | 1 (1 row) +rollback to settings; +-- Exercise rescans. We'll turn off parallel_leader_participation so +-- that we can check that instrumentation comes back correctly. 
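+-- (With the leader not participating, all of the rows must be processed by
+-- workers, so the numbers shown can only be right if the workers'
+-- instrumentation was accumulated correctly.)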
+create table foo as select generate_series(1, 3) as id, 'xxxxx'::text as t; +alter table foo set (parallel_workers = 0); +create table bar as select generate_series(1, 5000) as id, 'xxxxx'::text as t; +alter table bar set (parallel_workers = 2); +-- multi-batch with rescan, parallel-oblivious +savepoint settings; +set parallel_leader_participation = off; +set min_parallel_table_scan_size = 0; +set parallel_setup_cost = 0; +set parallel_tuple_cost = 0; +set max_parallel_workers_per_gather = 2; +set enable_material = off; +set enable_mergejoin = off; +set work_mem = '64kB'; +explain (costs off) + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; + QUERY PLAN +-------------------------------------------------------------------------- + Aggregate + -> Nested Loop Left Join + Join Filter: ((foo.id < (b1.id + 1)) AND (foo.id > (b1.id - 1))) + -> Seq Scan on foo + -> Gather + Workers Planned: 2 + -> Hash Join + Hash Cond: (b1.id = b2.id) + -> Parallel Seq Scan on bar b1 + -> Hash + -> Seq Scan on bar b2 +(11 rows) + +select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; + count +------- + 3 +(1 row) + +select final > 1 as multibatch + from hash_join_batches( +$$ + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +$$); + multibatch +------------ + t +(1 row) + +rollback to settings; +-- single-batch with rescan, parallel-oblivious +savepoint settings; +set parallel_leader_participation = off; +set min_parallel_table_scan_size = 0; +set parallel_setup_cost = 0; +set parallel_tuple_cost = 0; +set max_parallel_workers_per_gather = 2; +set enable_material = off; +set enable_mergejoin = off; +set work_mem = '4MB'; +explain (costs off) + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; + QUERY PLAN +-------------------------------------------------------------------------- + Aggregate + -> Nested Loop Left Join + Join Filter: ((foo.id < (b1.id + 1)) AND (foo.id > (b1.id - 1))) + -> Seq Scan on foo + -> Gather + Workers Planned: 2 + -> Hash Join + Hash Cond: (b1.id = b2.id) + -> Parallel Seq Scan on bar b1 + -> Hash + -> Seq Scan on bar b2 +(11 rows) + +select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; + count +------- + 3 +(1 row) + +select final > 1 as multibatch + from hash_join_batches( +$$ + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +$$); + multibatch +------------ + f +(1 row) + rollback to settings; -- A full outer join where every record is matched. -- non-parallel diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index 882601b338..0e933e00d5 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -2170,6 +2170,66 @@ $$ $$); rollback to settings; +-- Exercise rescans. We'll turn off parallel_leader_participation so +-- that we can check that instrumentation comes back correctly. 
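+-- (With the leader not participating, all of the rows must be processed by
+-- workers, so the numbers shown can only be right if the workers'
+-- instrumentation was accumulated correctly.)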
+
+create table foo as select generate_series(1, 3) as id, 'xxxxx'::text as t;
+alter table foo set (parallel_workers = 0);
+create table bar as select generate_series(1, 5000) as id, 'xxxxx'::text as t;
+alter table bar set (parallel_workers = 2);
+
+-- multi-batch with rescan, parallel-oblivious
+savepoint settings;
+set parallel_leader_participation = off;
+set min_parallel_table_scan_size = 0;
+set parallel_setup_cost = 0;
+set parallel_tuple_cost = 0;
+set max_parallel_workers_per_gather = 2;
+set enable_material = off;
+set enable_mergejoin = off;
+set work_mem = '64kB';
+explain (costs off)
+  select count(*) from foo
+    left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss
+    on foo.id < ss.id + 1 and foo.id > ss.id - 1;
+select count(*) from foo
+  left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss
+  on foo.id < ss.id + 1 and foo.id > ss.id - 1;
+select final > 1 as multibatch
+  from hash_join_batches(
+$$
+  select count(*) from foo
+    left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss
+    on foo.id < ss.id + 1 and foo.id > ss.id - 1;
+$$);
+rollback to settings;
+
+-- single-batch with rescan, parallel-oblivious
+savepoint settings;
+set parallel_leader_participation = off;
+set min_parallel_table_scan_size = 0;
+set parallel_setup_cost = 0;
+set parallel_tuple_cost = 0;
+set max_parallel_workers_per_gather = 2;
+set enable_material = off;
+set enable_mergejoin = off;
+set work_mem = '4MB';
+explain (costs off)
+  select count(*) from foo
+    left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss
+    on foo.id < ss.id + 1 and foo.id > ss.id - 1;
+select count(*) from foo
+  left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss
+  on foo.id < ss.id + 1 and foo.id > ss.id - 1;
+select final > 1 as multibatch
+  from hash_join_batches(
+$$
+  select count(*) from foo
+    left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss
+    on foo.id < ss.id + 1 and foo.id > ss.id - 1;
+$$);
+rollback to settings;
+
 -- A full outer join where every record is matched.
 -- non-parallel

From f94eec490b2671399c102b89c9fa0311aea3a39f Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Wed, 20 Dec 2017 17:21:55 -0500
Subject: [PATCH 0726/1087] When passing query strings to workers, pass the
 terminating \0.

Otherwise, when the query string is read, we might read trailing
garbage beyond the end, unless there happens to be a \0 there by good
luck.

Report and patch by Thomas Munro.  Reviewed by Rafia Sabih.

Discussion: http://postgr.es/m/CAEepm=2SJs7X+_vx8QoDu8d1SMEOxtLhxxLNzZun_BvNkuNhrw@mail.gmail.com
---
 src/backend/executor/execParallel.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c
index 02b5aa517b..604f4f5b61 100644
--- a/src/backend/executor/execParallel.c
+++ b/src/backend/executor/execParallel.c
@@ -597,7 +597,7 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate,
 
     /* Estimate space for query text. */
     query_len = strlen(estate->es_sourceText);
-    shm_toc_estimate_chunk(&pcxt->estimator, query_len);
+    shm_toc_estimate_chunk(&pcxt->estimator, query_len + 1);
     shm_toc_estimate_keys(&pcxt->estimator, 1);
 
     /* Estimate space for serialized PlannedStmt.
*/ @@ -672,8 +672,8 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, shm_toc_insert(pcxt->toc, PARALLEL_KEY_EXECUTOR_FIXED, fpes); /* Store query string */ - query_string = shm_toc_allocate(pcxt->toc, query_len); - memcpy(query_string, estate->es_sourceText, query_len); + query_string = shm_toc_allocate(pcxt->toc, query_len + 1); + memcpy(query_string, estate->es_sourceText, query_len + 1); shm_toc_insert(pcxt->toc, PARALLEL_KEY_QUERY_TEXT, query_string); /* Store serialized PlannedStmt. */ From 1804284042e659e7d16904e7bbb0ad546394b6a3 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 20 Dec 2017 23:39:21 -0800 Subject: [PATCH 0727/1087] Add parallel-aware hash joins. Introduce parallel-aware hash joins that appear in EXPLAIN plans as Parallel Hash Join with Parallel Hash. While hash joins could already appear in parallel queries, they were previously always parallel-oblivious and had a partial subplan only on the outer side, meaning that the work of the inner subplan was duplicated in every worker. After this commit, the planner will consider using a partial subplan on the inner side too, using the Parallel Hash node to divide the work over the available CPU cores and combine its results in shared memory. If the join needs to be split into multiple batches in order to respect work_mem, then workers process different batches as much as possible and then work together on the remaining batches. The advantages of a parallel-aware hash join over a parallel-oblivious hash join used in a parallel query are that it: * avoids wasting memory on duplicated hash tables * avoids wasting disk space on duplicated batch files * divides the work of building the hash table over the CPUs One disadvantage is that there is some communication between the participating CPUs which might outweigh the benefits of parallelism in the case of small hash tables. This is avoided by the planner's existing reluctance to supply partial plans for small scans, but it may be necessary to estimate synchronization costs in future if that situation changes. Another is that outer batch 0 must be written to disk if multiple batches are required. A potential future advantage of parallel-aware hash joins is that right and full outer joins could be supported, since there is a single set of matched bits for each hashtable, but that is not yet implemented. A new GUC enable_parallel_hash is defined to control the feature, defaulting to on. 
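As an illustration (an editorial sketch, with hypothetical tables t1 and
t2; the exact plan shape depends on costing and table sizes):

    EXPLAIN (COSTS OFF)
    SELECT count(*) FROM t1 JOIN t2 USING (id);

may now produce a plan of the form

    Finalize Aggregate
      ->  Gather
            ->  Partial Aggregate
                  ->  Parallel Hash Join
                        Hash Cond: (t1.id = t2.id)
                        ->  Parallel Seq Scan on t1
                        ->  Parallel Hash
                              ->  Parallel Seq Scan on t2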
Author: Thomas Munro Reviewed-By: Andres Freund, Robert Haas Tested-By: Rafia Sabih, Prabhat Sahu Discussion: https://postgr.es/m/CAEepm=2W=cOkiZxcg6qiFQP-dHUe09aqTrEMM7yJDrHMhDv_RA@mail.gmail.com https://postgr.es/m/CAEepm=37HKyJ4U6XOLi=JgfSHM3o6B-GaeO-6hkOmneTDkH+Uw@mail.gmail.com --- doc/src/sgml/config.sgml | 15 + doc/src/sgml/monitoring.sgml | 62 +- src/backend/executor/execParallel.c | 21 + src/backend/executor/execProcnode.c | 3 + src/backend/executor/nodeHash.c | 1659 ++++++++++++++++- src/backend/executor/nodeHashjoin.c | 617 +++++- src/backend/nodes/copyfuncs.c | 1 + src/backend/nodes/outfuncs.c | 1 + src/backend/nodes/readfuncs.c | 1 + src/backend/optimizer/path/costsize.c | 25 +- src/backend/optimizer/path/joinpath.c | 36 +- src/backend/optimizer/plan/createplan.c | 11 + src/backend/optimizer/util/pathnode.c | 5 +- src/backend/postmaster/pgstat.c | 45 + src/backend/utils/misc/guc.c | 10 +- src/backend/utils/misc/postgresql.conf.sample | 1 + src/include/executor/hashjoin.h | 175 +- src/include/executor/nodeHash.h | 24 +- src/include/executor/nodeHashjoin.h | 6 + src/include/nodes/execnodes.h | 6 + src/include/nodes/plannodes.h | 1 + src/include/nodes/relation.h | 2 + src/include/optimizer/cost.h | 4 +- src/include/optimizer/pathnode.h | 1 + src/include/pgstat.h | 15 + src/include/storage/lwlock.h | 1 + src/test/regress/expected/join.out | 304 ++- src/test/regress/expected/sysviews.out | 3 +- src/test/regress/sql/join.sql | 148 +- src/tools/pgindent/typedefs.list | 4 + 30 files changed, 3091 insertions(+), 116 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 533faf060d..e4a01699e4 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -3647,6 +3647,21 @@ ANY num_sync ( + enable_parallel_hash (boolean) + + enable_parallel_hash configuration parameter + + + + + Enables or disables the query planner's use of hash-join plan + types with parallel hash. Has no effect if hash-join plans are not + also enabled. The default is on. + + + + enable_partition_wise_join (boolean) diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index b6f80d9708..8a9793644f 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -1263,7 +1263,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Waiting in an extension. - IPC + IPC BgWorkerShutdown Waiting for background worker to shut down. @@ -1279,6 +1279,66 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser ExecuteGather Waiting for activity from child process when executing Gather node. + + Hash/Batch/Allocating + Waiting for an elected Parallel Hash participant to allocate a hash table. + + + Hash/Batch/Electing + Electing a Parallel Hash participant to allocate a hash table. + + + Hash/Batch/Loading + Waiting for other Parallel Hash participants to finish loading a hash table. + + + Hash/Build/Allocating + Waiting for an elected Parallel Hash participant to allocate the initial hash table. + + + Hash/Build/Electing + Electing a Parallel Hash participant to allocate the initial hash table. + + + Hash/Build/HashingInner + Waiting for other Parallel Hash participants to finish hashing the inner relation. + + + Hash/Build/HashingOuter + Waiting for other Parallel Hash participants to finish partitioning the outer relation. + + + Hash/GrowBatches/Allocating + Waiting for an elected Parallel Hash participant to allocate more batches. 
+ + + Hash/GrowBatches/Deciding + Electing a Parallel Hash participant to decide on future batch growth. + + + Hash/GrowBatches/Electing + Electing a Parallel Hash participant to allocate more batches. + + + Hash/GrowBatches/Finishing + Waiting for an elected Parallel Hash participant to decide on future batch growth. + + + Hash/GrowBatches/Repartitioning + Waiting for other Parallel Hash participants to finishing repartitioning. + + + Hash/GrowBuckets/Allocating + Waiting for an elected Parallel Hash participant to finish allocating more buckets. + + + Hash/GrowBuckets/Electing + Electing a Parallel Hash participant to allocate more buckets. + + + Hash/GrowBuckets/Reinserting + Waiting for other Parallel Hash participants to finish inserting tuples into new buckets. + LogicalSyncData Waiting for logical replication remote server to send data for initial table synchronization. diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index 604f4f5b61..b344d4b589 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -31,6 +31,7 @@ #include "executor/nodeCustom.h" #include "executor/nodeForeignscan.h" #include "executor/nodeHash.h" +#include "executor/nodeHashjoin.h" #include "executor/nodeIndexscan.h" #include "executor/nodeIndexonlyscan.h" #include "executor/nodeSeqscan.h" @@ -266,6 +267,11 @@ ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e) ExecBitmapHeapEstimate((BitmapHeapScanState *) planstate, e->pcxt); break; + case T_HashJoinState: + if (planstate->plan->parallel_aware) + ExecHashJoinEstimate((HashJoinState *) planstate, + e->pcxt); + break; case T_HashState: /* even when not parallel-aware, for EXPLAIN ANALYZE */ ExecHashEstimate((HashState *) planstate, e->pcxt); @@ -474,6 +480,11 @@ ExecParallelInitializeDSM(PlanState *planstate, ExecBitmapHeapInitializeDSM((BitmapHeapScanState *) planstate, d->pcxt); break; + case T_HashJoinState: + if (planstate->plan->parallel_aware) + ExecHashJoinInitializeDSM((HashJoinState *) planstate, + d->pcxt); + break; case T_HashState: /* even when not parallel-aware, for EXPLAIN ANALYZE */ ExecHashInitializeDSM((HashState *) planstate, d->pcxt); @@ -898,6 +909,11 @@ ExecParallelReInitializeDSM(PlanState *planstate, ExecBitmapHeapReInitializeDSM((BitmapHeapScanState *) planstate, pcxt); break; + case T_HashJoinState: + if (planstate->plan->parallel_aware) + ExecHashJoinReInitializeDSM((HashJoinState *) planstate, + pcxt); + break; case T_HashState: case T_SortState: /* these nodes have DSM state, but no reinitialization is required */ @@ -1196,6 +1212,11 @@ ExecParallelInitializeWorker(PlanState *planstate, ParallelWorkerContext *pwcxt) ExecBitmapHeapInitializeWorker((BitmapHeapScanState *) planstate, pwcxt); break; + case T_HashJoinState: + if (planstate->plan->parallel_aware) + ExecHashJoinInitializeWorker((HashJoinState *) planstate, + pwcxt); + break; case T_HashState: /* even when not parallel-aware, for EXPLAIN ANALYZE */ ExecHashInitializeWorker((HashState *) planstate, pwcxt); diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c index fcb8b56999..699dc69179 100644 --- a/src/backend/executor/execProcnode.c +++ b/src/backend/executor/execProcnode.c @@ -770,6 +770,9 @@ ExecShutdownNode(PlanState *node) case T_HashState: ExecShutdownHash((HashState *) node); break; + case T_HashJoinState: + ExecShutdownHashJoin((HashJoinState *) node); + break; default: break; } diff --git a/src/backend/executor/nodeHash.c 
b/src/backend/executor/nodeHash.c index afd7384e94..4284e8682a 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -10,6 +10,8 @@ * IDENTIFICATION * src/backend/executor/nodeHash.c * + * See note on parallelism in nodeHashjoin.c. + * *------------------------------------------------------------------------- */ /* @@ -25,6 +27,7 @@ #include #include "access/htup_details.h" +#include "access/parallel.h" #include "catalog/pg_statistic.h" #include "commands/tablespace.h" #include "executor/execdebug.h" @@ -32,6 +35,8 @@ #include "executor/nodeHash.h" #include "executor/nodeHashjoin.h" #include "miscadmin.h" +#include "pgstat.h" +#include "port/atomics.h" #include "utils/dynahash.h" #include "utils/memutils.h" #include "utils/lsyscache.h" @@ -40,6 +45,8 @@ static void ExecHashIncreaseNumBatches(HashJoinTable hashtable); static void ExecHashIncreaseNumBuckets(HashJoinTable hashtable); +static void ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable); +static void ExecParallelHashIncreaseNumBuckets(HashJoinTable hashtable); static void ExecHashBuildSkewHash(HashJoinTable hashtable, Hash *node, int mcvsToUse); static void ExecHashSkewTableInsert(HashJoinTable hashtable, @@ -49,6 +56,30 @@ static void ExecHashSkewTableInsert(HashJoinTable hashtable, static void ExecHashRemoveNextSkewBucket(HashJoinTable hashtable); static void *dense_alloc(HashJoinTable hashtable, Size size); +static HashJoinTuple ExecParallelHashTupleAlloc(HashJoinTable hashtable, + size_t size, + dsa_pointer *shared); +static void MultiExecPrivateHash(HashState *node); +static void MultiExecParallelHash(HashState *node); +static inline HashJoinTuple ExecParallelHashFirstTuple(HashJoinTable table, + int bucketno); +static inline HashJoinTuple ExecParallelHashNextTuple(HashJoinTable table, + HashJoinTuple tuple); +static inline void ExecParallelHashPushTuple(dsa_pointer_atomic *head, + HashJoinTuple tuple, + dsa_pointer tuple_shared); +static void ExecParallelHashJoinSetUpBatches(HashJoinTable hashtable, int nbatch); +static void ExecParallelHashEnsureBatchAccessors(HashJoinTable hashtable); +static void ExecParallelHashRepartitionFirst(HashJoinTable hashtable); +static void ExecParallelHashRepartitionRest(HashJoinTable hashtable); +static HashMemoryChunk ExecParallelHashPopChunkQueue(HashJoinTable table, + dsa_pointer *shared); +static bool ExecParallelHashTuplePrealloc(HashJoinTable hashtable, + int batchno, + size_t size); +static void ExecParallelHashMergeCounters(HashJoinTable hashtable); +static void ExecParallelHashCloseBatchAccessors(HashJoinTable hashtable); + /* ---------------------------------------------------------------- * ExecHash @@ -72,6 +103,39 @@ ExecHash(PlanState *pstate) */ Node * MultiExecHash(HashState *node) +{ + /* must provide our own instrumentation support */ + if (node->ps.instrument) + InstrStartNode(node->ps.instrument); + + if (node->parallel_state != NULL) + MultiExecParallelHash(node); + else + MultiExecPrivateHash(node); + + /* must provide our own instrumentation support */ + if (node->ps.instrument) + InstrStopNode(node->ps.instrument, node->hashtable->partialTuples); + + /* + * We do not return the hash table directly because it's not a subtype of + * Node, and so would violate the MultiExecProcNode API. Instead, our + * parent Hashjoin node is expected to know how to fish it out of our node + * state. Ugly but not really worth cleaning up, since Hashjoin knows + * quite a bit more about Hash besides that. 
+ */ + return NULL; +} + +/* ---------------------------------------------------------------- + * MultiExecPrivateHash + * + * parallel-oblivious version, building a backend-private + * hash table and (if necessary) batch files. + * ---------------------------------------------------------------- + */ +static void +MultiExecPrivateHash(HashState *node) { PlanState *outerNode; List *hashkeys; @@ -80,10 +144,6 @@ MultiExecHash(HashState *node) ExprContext *econtext; uint32 hashvalue; - /* must provide our own instrumentation support */ - if (node->ps.instrument) - InstrStartNode(node->ps.instrument); - /* * get state info from node */ @@ -138,18 +198,147 @@ MultiExecHash(HashState *node) if (hashtable->spaceUsed > hashtable->spacePeak) hashtable->spacePeak = hashtable->spaceUsed; - /* must provide our own instrumentation support */ - if (node->ps.instrument) - InstrStopNode(node->ps.instrument, hashtable->totalTuples); + hashtable->partialTuples = hashtable->totalTuples; +} + +/* ---------------------------------------------------------------- + * MultiExecParallelHash + * + * parallel-aware version, building a shared hash table and + * (if necessary) batch files using the combined effort of + * a set of co-operating backends. + * ---------------------------------------------------------------- + */ +static void +MultiExecParallelHash(HashState *node) +{ + ParallelHashJoinState *pstate; + PlanState *outerNode; + List *hashkeys; + HashJoinTable hashtable; + TupleTableSlot *slot; + ExprContext *econtext; + uint32 hashvalue; + Barrier *build_barrier; + int i; /* - * We do not return the hash table directly because it's not a subtype of - * Node, and so would violate the MultiExecProcNode API. Instead, our - * parent Hashjoin node is expected to know how to fish it out of our node - * state. Ugly but not really worth cleaning up, since Hashjoin knows - * quite a bit more about Hash besides that. + * get state info from node */ - return NULL; + outerNode = outerPlanState(node); + hashtable = node->hashtable; + + /* + * set expression context + */ + hashkeys = node->hashkeys; + econtext = node->ps.ps_ExprContext; + + /* + * Synchronize the parallel hash table build. At this stage we know that + * the shared hash table has been or is being set up by + * ExecHashTableCreate(), but we don't know if our peers have returned + * from there or are here in MultiExecParallelHash(), and if so how far + * through they are. To find out, we check the build_barrier phase then + * and jump to the right step in the build algorithm. + */ + pstate = hashtable->parallel_state; + build_barrier = &pstate->build_barrier; + Assert(BarrierPhase(build_barrier) >= PHJ_BUILD_ALLOCATING); + switch (BarrierPhase(build_barrier)) + { + case PHJ_BUILD_ALLOCATING: + + /* + * Either I just allocated the initial hash table in + * ExecHashTableCreate(), or someone else is doing that. Either + * way, wait for everyone to arrive here so we can proceed. + */ + BarrierArriveAndWait(build_barrier, WAIT_EVENT_HASH_BUILD_ALLOCATING); + /* Fall through. */ + + case PHJ_BUILD_HASHING_INNER: + + /* + * It's time to begin hashing, or if we just arrived here then + * hashing is already underway, so join in that effort. While + * hashing we have to be prepared to help increase the number of + * batches or buckets at any time, and if we arrived here when + * that was already underway we'll have to help complete that work + * immediately so that it's safe to access batches and buckets + * below. 
+ */ + if (PHJ_GROW_BATCHES_PHASE(BarrierAttach(&pstate->grow_batches_barrier)) != + PHJ_GROW_BATCHES_ELECTING) + ExecParallelHashIncreaseNumBatches(hashtable); + if (PHJ_GROW_BUCKETS_PHASE(BarrierAttach(&pstate->grow_buckets_barrier)) != + PHJ_GROW_BUCKETS_ELECTING) + ExecParallelHashIncreaseNumBuckets(hashtable); + ExecParallelHashEnsureBatchAccessors(hashtable); + ExecParallelHashTableSetCurrentBatch(hashtable, 0); + for (;;) + { + slot = ExecProcNode(outerNode); + if (TupIsNull(slot)) + break; + econtext->ecxt_innertuple = slot; + if (ExecHashGetHashValue(hashtable, econtext, hashkeys, + false, hashtable->keepNulls, + &hashvalue)) + ExecParallelHashTableInsert(hashtable, slot, hashvalue); + hashtable->partialTuples++; + } + BarrierDetach(&pstate->grow_buckets_barrier); + BarrierDetach(&pstate->grow_batches_barrier); + + /* + * Make sure that any tuples we wrote to disk are visible to + * others before anyone tries to load them. + */ + for (i = 0; i < hashtable->nbatch; ++i) + sts_end_write(hashtable->batches[i].inner_tuples); + + /* + * Update shared counters. We need an accurate total tuple count + * to control the empty table optimization. + */ + ExecParallelHashMergeCounters(hashtable); + + /* + * Wait for everyone to finish building and flushing files and + * counters. + */ + if (BarrierArriveAndWait(build_barrier, + WAIT_EVENT_HASH_BUILD_HASHING_INNER)) + { + /* + * Elect one backend to disable any further growth. Batches + * are now fixed. While building them we made sure they'd fit + * in our memory budget when we load them back in later (or we + * tried to do that and gave up because we detected extreme + * skew). + */ + pstate->growth = PHJ_GROWTH_DISABLED; + } + } + + /* + * We're not yet attached to a batch. We all agree on the dimensions and + * number of inner tuples (for the empty table optimization). + */ + hashtable->curbatch = -1; + hashtable->nbuckets = pstate->nbuckets; + hashtable->log2_nbuckets = my_log2(hashtable->nbuckets); + hashtable->totalTuples = pstate->total_tuples; + ExecParallelHashEnsureBatchAccessors(hashtable); + + /* + * The next synchronization point is in ExecHashJoin's HJ_BUILD_HASHTABLE + * case, which will bring the build phase to PHJ_BUILD_DONE (if it isn't + * there already). + */ + Assert(BarrierPhase(build_barrier) == PHJ_BUILD_HASHING_OUTER || + BarrierPhase(build_barrier) == PHJ_BUILD_DONE); } /* ---------------------------------------------------------------- @@ -240,12 +429,15 @@ ExecEndHash(HashState *node) * ---------------------------------------------------------------- */ HashJoinTable -ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls) +ExecHashTableCreate(HashState *state, List *hashOperators, bool keepNulls) { + Hash *node; HashJoinTable hashtable; Plan *outerNode; + size_t space_allowed; int nbuckets; int nbatch; + double rows; int num_skew_mcvs; int log2_nbuckets; int nkeys; @@ -258,10 +450,22 @@ ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls) * "outer" subtree of this node, but the inner relation of the hashjoin). * Compute the appropriate size of the hash table. */ + node = (Hash *) state->ps.plan; outerNode = outerPlan(node); - ExecChooseHashTableSize(outerNode->plan_rows, outerNode->plan_width, + /* + * If this is shared hash table with a partial plan, then we can't use + * outerNode->plan_rows to estimate its size. We need an estimate of the + * total number of rows across all copies of the partial plan. + */ + rows = node->plan.parallel_aware ? 
node->rows_total : outerNode->plan_rows; + + ExecChooseHashTableSize(rows, outerNode->plan_width, OidIsValid(node->skewTable), + state->parallel_state != NULL, + state->parallel_state != NULL ? + state->parallel_state->nparticipants - 1 : 0, + &space_allowed, &nbuckets, &nbatch, &num_skew_mcvs); /* nbuckets must be a power of 2 */ @@ -280,7 +484,7 @@ ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls) hashtable->nbuckets_optimal = nbuckets; hashtable->log2_nbuckets = log2_nbuckets; hashtable->log2_nbuckets_optimal = log2_nbuckets; - hashtable->buckets = NULL; + hashtable->buckets.unshared = NULL; hashtable->keepNulls = keepNulls; hashtable->skewEnabled = false; hashtable->skewBucket = NULL; @@ -293,16 +497,21 @@ ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls) hashtable->nbatch_outstart = nbatch; hashtable->growEnabled = true; hashtable->totalTuples = 0; + hashtable->partialTuples = 0; hashtable->skewTuples = 0; hashtable->innerBatchFile = NULL; hashtable->outerBatchFile = NULL; hashtable->spaceUsed = 0; hashtable->spacePeak = 0; - hashtable->spaceAllowed = work_mem * 1024L; + hashtable->spaceAllowed = space_allowed; hashtable->spaceUsedSkew = 0; hashtable->spaceAllowedSkew = hashtable->spaceAllowed * SKEW_WORK_MEM_PERCENT / 100; hashtable->chunks = NULL; + hashtable->current_chunk = NULL; + hashtable->parallel_state = state->parallel_state; + hashtable->area = state->ps.state->es_query_dsa; + hashtable->batches = NULL; #ifdef HJDEBUG printf("Hashjoin %p: initial nbatch = %d, nbuckets = %d\n", @@ -351,10 +560,11 @@ ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls) oldcxt = MemoryContextSwitchTo(hashtable->hashCxt); - if (nbatch > 1) + if (nbatch > 1 && hashtable->parallel_state == NULL) { /* - * allocate and initialize the file arrays in hashCxt + * allocate and initialize the file arrays in hashCxt (not needed for + * parallel case which uses shared tuplestores instead of raw files) */ hashtable->innerBatchFile = (BufFile **) palloc0(nbatch * sizeof(BufFile *)); @@ -365,23 +575,77 @@ ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls) PrepareTempTablespaces(); } - /* - * Prepare context for the first-scan space allocations; allocate the - * hashbucket array therein, and set each bucket "empty". - */ - MemoryContextSwitchTo(hashtable->batchCxt); + MemoryContextSwitchTo(oldcxt); - hashtable->buckets = (HashJoinTuple *) - palloc0(nbuckets * sizeof(HashJoinTuple)); + if (hashtable->parallel_state) + { + ParallelHashJoinState *pstate = hashtable->parallel_state; + Barrier *build_barrier; - /* - * Set up for skew optimization, if possible and there's a need for more - * than one batch. (In a one-batch join, there's no point in it.) - */ - if (nbatch > 1) - ExecHashBuildSkewHash(hashtable, node, num_skew_mcvs); + /* + * Attach to the build barrier. The corresponding detach operation is + * in ExecHashTableDetach. Note that we won't attach to the + * batch_barrier for batch 0 yet. We'll attach later and start it out + * in PHJ_BATCH_PROBING phase, because batch 0 is allocated up front + * and then loaded while hashing (the standard hybrid hash join + * algorithm), and we'll coordinate that using build_barrier. + */ + build_barrier = &pstate->build_barrier; + BarrierAttach(build_barrier); - MemoryContextSwitchTo(oldcxt); + /* + * So far we have no idea whether there are any other participants, + * and if so, what phase they are working on. 
The only thing we care + * about at this point is whether someone has already created the + * SharedHashJoinBatch objects and the hash table for batch 0. One + * backend will be elected to do that now if necessary. + */ + if (BarrierPhase(build_barrier) == PHJ_BUILD_ELECTING && + BarrierArriveAndWait(build_barrier, WAIT_EVENT_HASH_BUILD_ELECTING)) + { + pstate->nbatch = nbatch; + pstate->space_allowed = space_allowed; + pstate->growth = PHJ_GROWTH_OK; + + /* Set up the shared state for coordinating batches. */ + ExecParallelHashJoinSetUpBatches(hashtable, nbatch); + + /* + * Allocate batch 0's hash table up front so we can load it + * directly while hashing. + */ + pstate->nbuckets = nbuckets; + ExecParallelHashTableAlloc(hashtable, 0); + } + + /* + * The next Parallel Hash synchronization point is in + * MultiExecParallelHash(), which will progress it all the way to + * PHJ_BUILD_DONE. The caller must not return control from this + * executor node between now and then. + */ + } + else + { + /* + * Prepare context for the first-scan space allocations; allocate the + * hashbucket array therein, and set each bucket "empty". + */ + MemoryContextSwitchTo(hashtable->batchCxt); + + hashtable->buckets.unshared = (HashJoinTuple *) + palloc0(nbuckets * sizeof(HashJoinTuple)); + + /* + * Set up for skew optimization, if possible and there's a need for + * more than one batch. (In a one-batch join, there's no point in + * it.) + */ + if (nbatch > 1) + ExecHashBuildSkewHash(hashtable, node, num_skew_mcvs); + + MemoryContextSwitchTo(oldcxt); + } return hashtable; } @@ -399,6 +663,9 @@ ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls) void ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew, + bool try_combined_work_mem, + int parallel_workers, + size_t *space_allowed, int *numbuckets, int *numbatches, int *num_skew_mcvs) @@ -433,6 +700,16 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew, */ hash_table_bytes = work_mem * 1024L; + /* + * Parallel Hash tries to use the combined work_mem of all workers to + * avoid the need to batch. If that won't work, it falls back to work_mem + * per worker and tries to process batches in parallel. + */ + if (try_combined_work_mem) + hash_table_bytes += hash_table_bytes * parallel_workers; + + *space_allowed = hash_table_bytes; + /* * If skew optimization is possible, estimate the number of skew buckets * that will fit in the memory allowed, and decrement the assumed space @@ -478,7 +755,7 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew, * Note that both nbuckets and nbatch must be powers of 2 to make * ExecHashGetBucketAndBatch fast. */ - max_pointers = (work_mem * 1024L) / sizeof(HashJoinTuple); + max_pointers = *space_allowed / sizeof(HashJoinTuple); max_pointers = Min(max_pointers, MaxAllocSize / sizeof(HashJoinTuple)); /* If max_pointers isn't a power of 2, must round it down to one */ mppow2 = 1L << my_log2(max_pointers); @@ -510,6 +787,21 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew, int minbatch; long bucket_size; + /* + * If Parallel Hash with combined work_mem would still need multiple + * batches, we'll have to fall back to regular work_mem budget. + */ + if (try_combined_work_mem) + { + ExecChooseHashTableSize(ntuples, tupwidth, useskew, + false, parallel_workers, + space_allowed, + numbuckets, + numbatches, + num_skew_mcvs); + return; + } + /* * Estimate the number of buckets we'll want to have when work_mem is * entirely full. 
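+	 *
+	 * (A worked example of the combined-budget logic above, illustrative
+	 * numbers only: with work_mem = 4MB and parallel_workers = 3, the
+	 * first, try_combined_work_mem pass works with 4MB + 3 * 4MB = 16MB.
+	 * Only if the projected hash table exceeds 16MB do we recurse with
+	 * try_combined_work_mem = false and size batches against the plain
+	 * 4MB budget.)
+	 *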
 * Each bucket will contain a bucket pointer plus
@@ -564,14 +856,17 @@ ExecHashTableDestroy(HashJoinTable hashtable)
 	/*
 	 * Make sure all the temp files are closed.  We skip batch 0, since it
 	 * can't have any temp files (and the arrays might not even exist if
-	 * nbatch is only 1).
+	 * nbatch is only 1).  Parallel hash joins don't use these files.
 	 */
-	for (i = 1; i < hashtable->nbatch; i++)
+	if (hashtable->innerBatchFile != NULL)
 	{
-		if (hashtable->innerBatchFile[i])
-			BufFileClose(hashtable->innerBatchFile[i]);
-		if (hashtable->outerBatchFile[i])
-			BufFileClose(hashtable->outerBatchFile[i]);
+		for (i = 1; i < hashtable->nbatch; i++)
+		{
+			if (hashtable->innerBatchFile[i])
+				BufFileClose(hashtable->innerBatchFile[i]);
+			if (hashtable->outerBatchFile[i])
+				BufFileClose(hashtable->outerBatchFile[i]);
+		}
 	}
 
 	/* Release working memory (batchCxt is a child, so it goes away too) */
@@ -657,8 +952,9 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)
 		hashtable->nbuckets = hashtable->nbuckets_optimal;
 		hashtable->log2_nbuckets = hashtable->log2_nbuckets_optimal;
 
-		hashtable->buckets = repalloc(hashtable->buckets,
-									  sizeof(HashJoinTuple) * hashtable->nbuckets);
+		hashtable->buckets.unshared =
+			repalloc(hashtable->buckets.unshared,
+					 sizeof(HashJoinTuple) * hashtable->nbuckets);
 	}
 
 	/*
@@ -666,14 +962,15 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)
 	 * buckets now and not have to keep track which tuples in the buckets have
 	 * already been processed.  We will free the old chunks as we go.
 	 */
-	memset(hashtable->buckets, 0, sizeof(HashJoinTuple) * hashtable->nbuckets);
+	memset(hashtable->buckets.unshared, 0,
+		   sizeof(HashJoinTuple) * hashtable->nbuckets);
 	oldchunks = hashtable->chunks;
 	hashtable->chunks = NULL;
 
 	/* so, let's scan through the old chunks, and all tuples in each chunk */
 	while (oldchunks != NULL)
 	{
-		HashMemoryChunk nextchunk = oldchunks->next;
+		HashMemoryChunk nextchunk = oldchunks->next.unshared;
 
 		/* position within the buffer (up to oldchunks->used) */
 		size_t		idx = 0;
@@ -700,8 +997,8 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)
 			memcpy(copyTuple, hashTuple, hashTupleSize);
 
 			/* and add it back to the appropriate bucket */
-			copyTuple->next = hashtable->buckets[bucketno];
-			hashtable->buckets[bucketno] = copyTuple;
+			copyTuple->next.unshared = hashtable->buckets.unshared[bucketno];
+			hashtable->buckets.unshared[bucketno] = copyTuple;
 		}
 		else
 		{
@@ -750,6 +1047,380 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)
 	}
 }
 
+/*
+ * ExecParallelHashIncreaseNumBatches
+ *		Every participant attached to grow_batches_barrier must run this
+ *		function when it observes growth == PHJ_GROWTH_NEED_MORE_BATCHES.
+ */
+static void
+ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
+{
+	ParallelHashJoinState *pstate = hashtable->parallel_state;
+	int			i;
+
+	Assert(BarrierPhase(&pstate->build_barrier) == PHJ_BUILD_HASHING_INNER);
+
+	/*
+	 * It's unlikely, but we need to be prepared for new participants to show
+	 * up while we're in the middle of this operation so we need to switch on
+	 * barrier phase here.
+	 */
+	switch (PHJ_GROW_BATCHES_PHASE(BarrierPhase(&pstate->grow_batches_barrier)))
+	{
+		case PHJ_GROW_BATCHES_ELECTING:
+
+			/*
+			 * Elect one participant to prepare to grow the number of batches.
+			 * This involves reallocating or resetting the buckets of batch 0
+			 * in preparation for all participants to begin repartitioning the
+			 * tuples.
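+			 *
+			 * (The election relies on a documented Barrier property:
+			 * BarrierArriveAndWait() returns true in exactly one of the
+			 * attached participants each time the phase advances.  A
+			 * minimal sketch of the pattern, with hypothetical helpers:
+			 *
+			 *     if (BarrierArriveAndWait(&b, wait_event))
+			 *         do_serial_work();        [exactly one backend]
+			 *     use_shared_results();        [everyone]
+			 *
+			 * so no extra flag or lock is needed to pick a leader.)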
+			 */
+			if (BarrierArriveAndWait(&pstate->grow_batches_barrier,
+									 WAIT_EVENT_HASH_GROW_BATCHES_ELECTING))
+			{
+				dsa_pointer_atomic *buckets;
+				ParallelHashJoinBatch *old_batch0;
+				int			new_nbatch;
+				int			i;
+
+				/* Move the old batch out of the way. */
+				old_batch0 = hashtable->batches[0].shared;
+				pstate->old_batches = pstate->batches;
+				pstate->old_nbatch = hashtable->nbatch;
+				pstate->batches = InvalidDsaPointer;
+
+				/* Free this backend's old accessors. */
+				ExecParallelHashCloseBatchAccessors(hashtable);
+
+				/* Figure out how many batches to use. */
+				if (hashtable->nbatch == 1)
+				{
+					/*
+					 * We are going from single-batch to multi-batch.  We need
+					 * to switch from one large combined memory budget to the
+					 * regular work_mem budget.
+					 */
+					pstate->space_allowed = work_mem * 1024L;
+
+					/*
+					 * The combined work_mem of all participants wasn't
+					 * enough.  Therefore one batch per participant would be
+					 * approximately equivalent and would probably also be
+					 * insufficient.  So try two batches per participant,
+					 * rounded up to a power of two.
+					 */
+					new_nbatch = 1 << my_log2(pstate->nparticipants * 2);
+				}
+				else
+				{
+					/*
+					 * We were already multi-batched.  Try doubling the number
+					 * of batches.
+					 */
+					new_nbatch = hashtable->nbatch * 2;
+				}
+
+				/* Allocate new larger generation of batches. */
+				Assert(hashtable->nbatch == pstate->nbatch);
+				ExecParallelHashJoinSetUpBatches(hashtable, new_nbatch);
+				Assert(hashtable->nbatch == pstate->nbatch);
+
+				/* Replace or recycle batch 0's bucket array. */
+				if (pstate->old_nbatch == 1)
+				{
+					double		dtuples;
+					double		dbuckets;
+					int			new_nbuckets;
+
+					/*
+					 * We probably also need a smaller bucket array.  How many
+					 * tuples do we expect per batch, assuming we have only
+					 * half of them so far?  Normally we don't need to change
+					 * the bucket array's size, because the size of each batch
+					 * stays the same as we add more batches, but in this
+					 * special case we move from a large batch to many smaller
+					 * batches and it would be wasteful to keep the large
+					 * array.
+					 */
+					dtuples = (old_batch0->ntuples * 2.0) / new_nbatch;
+					dbuckets = ceil(dtuples / NTUP_PER_BUCKET);
+					dbuckets = Min(dbuckets,
+								   MaxAllocSize / sizeof(dsa_pointer_atomic));
+					new_nbuckets = (int) dbuckets;
+					new_nbuckets = Max(new_nbuckets, 1024);
+					new_nbuckets = 1 << my_log2(new_nbuckets);
+					dsa_free(hashtable->area, old_batch0->buckets);
+					hashtable->batches[0].shared->buckets =
+						dsa_allocate(hashtable->area,
+									 sizeof(dsa_pointer_atomic) * new_nbuckets);
+					buckets = (dsa_pointer_atomic *)
+						dsa_get_address(hashtable->area,
+										hashtable->batches[0].shared->buckets);
+					for (i = 0; i < new_nbuckets; ++i)
+						dsa_pointer_atomic_init(&buckets[i], InvalidDsaPointer);
+					pstate->nbuckets = new_nbuckets;
+				}
+				else
+				{
+					/* Recycle the existing bucket array. */
+					hashtable->batches[0].shared->buckets = old_batch0->buckets;
+					buckets = (dsa_pointer_atomic *)
+						dsa_get_address(hashtable->area, old_batch0->buckets);
+					for (i = 0; i < hashtable->nbuckets; ++i)
+						dsa_pointer_atomic_write(&buckets[i], InvalidDsaPointer);
+				}
+
+				/* Move all chunks to the work queue for parallel processing. */
+				pstate->chunk_work_queue = old_batch0->chunks;
+
+				/* Disable further growth temporarily while we're growing. */
+				pstate->growth = PHJ_GROWTH_DISABLED;
+			}
+			else
+			{
+				/* All other participants just flush their tuples to disk. */
+				ExecParallelHashCloseBatchAccessors(hashtable);
+			}
+			/* Fall through. */
+
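+		/*
+		 * (An aside on what repartitioning below relies on: batch and
+		 * bucket numbers are pure functions of the hash value and the
+		 * current dimensions.  A sketch of the mapping, following
+		 * ExecHashGetBucketAndBatch(); nbuckets and nbatch are powers of
+		 * two, so the two fields consume disjoint bits of the hash value:
+		 *
+		 *     bucketno = hashvalue & (nbuckets - 1);
+		 *     batchno = (hashvalue >> log2_nbuckets) & (nbatch - 1);
+		 *
+		 * Doubling nbatch therefore splits each old batch across new
+		 * batches without disturbing bucket assignment.)
+		 */
+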
+		case PHJ_GROW_BATCHES_ALLOCATING:
+			/* Wait for the above to be finished. */
+			BarrierArriveAndWait(&pstate->grow_batches_barrier,
+								 WAIT_EVENT_HASH_GROW_BATCHES_ALLOCATING);
+			/* Fall through. */
+
+		case PHJ_GROW_BATCHES_REPARTITIONING:
+			/* Make sure that we have the current dimensions and buckets. */
+			ExecParallelHashEnsureBatchAccessors(hashtable);
+			ExecParallelHashTableSetCurrentBatch(hashtable, 0);
+			/* Then partition, flush counters. */
+			ExecParallelHashRepartitionFirst(hashtable);
+			ExecParallelHashRepartitionRest(hashtable);
+			ExecParallelHashMergeCounters(hashtable);
+			/* Wait for the above to be finished. */
+			BarrierArriveAndWait(&pstate->grow_batches_barrier,
+								 WAIT_EVENT_HASH_GROW_BATCHES_REPARTITIONING);
+			/* Fall through. */
+
+		case PHJ_GROW_BATCHES_DECIDING:
+
+			/*
+			 * Elect one participant to clean up and decide whether further
+			 * repartitioning is needed, or should be disabled because it's
+			 * not helping.
+			 */
+			if (BarrierArriveAndWait(&pstate->grow_batches_barrier,
+									 WAIT_EVENT_HASH_GROW_BATCHES_DECIDING))
+			{
+				bool		space_exhausted = false;
+				bool		extreme_skew_detected = false;
+
+				/* Make sure that we have the current dimensions and buckets. */
+				ExecParallelHashEnsureBatchAccessors(hashtable);
+				ExecParallelHashTableSetCurrentBatch(hashtable, 0);
+
+				/* Are any of the new generation of batches exhausted? */
+				for (i = 0; i < hashtable->nbatch; ++i)
+				{
+					ParallelHashJoinBatch *batch = hashtable->batches[i].shared;
+
+					if (batch->space_exhausted ||
+						batch->estimated_size > pstate->space_allowed)
+					{
+						int			parent;
+
+						space_exhausted = true;
+
+						/*
+						 * Did this batch receive ALL of the tuples from its
+						 * parent batch?  That would indicate that further
+						 * repartitioning isn't going to help (the hash values
+						 * are probably all the same).
+						 */
+						parent = i % pstate->old_nbatch;
+						if (batch->ntuples == hashtable->batches[parent].shared->old_ntuples)
+							extreme_skew_detected = true;
+					}
+				}
+
+				/* Don't keep growing if it's not helping or we'd overflow. */
+				if (extreme_skew_detected || hashtable->nbatch >= INT_MAX / 2)
+					pstate->growth = PHJ_GROWTH_DISABLED;
+				else if (space_exhausted)
+					pstate->growth = PHJ_GROWTH_NEED_MORE_BATCHES;
+				else
+					pstate->growth = PHJ_GROWTH_OK;
+
+				/* Free the old batches in shared memory. */
+				dsa_free(hashtable->area, pstate->old_batches);
+				pstate->old_batches = InvalidDsaPointer;
+			}
+			/* Fall through. */
+
+		case PHJ_GROW_BATCHES_FINISHING:
+			/* Wait for the above to complete. */
+			BarrierArriveAndWait(&pstate->grow_batches_barrier,
+								 WAIT_EVENT_HASH_GROW_BATCHES_FINISHING);
+	}
+}
+
+/*
+ * Repartition the tuples currently loaded into memory for inner batch 0
+ * because the number of batches has been increased.  Some tuples are retained
+ * in memory and some are written out to a later batch.
+ */
+static void
+ExecParallelHashRepartitionFirst(HashJoinTable hashtable)
+{
+	dsa_pointer chunk_shared;
+	HashMemoryChunk chunk;
+
+	Assert(hashtable->nbatch == hashtable->parallel_state->nbatch);
+
+	while ((chunk = ExecParallelHashPopChunkQueue(hashtable, &chunk_shared)))
+	{
+		size_t		idx = 0;
+
+		/* Repartition all tuples in this chunk. */
+		while (idx < chunk->used)
+		{
+			HashJoinTuple hashTuple = (HashJoinTuple) (chunk->data + idx);
+			MinimalTuple tuple = HJTUPLE_MINTUPLE(hashTuple);
+			HashJoinTuple copyTuple;
+			dsa_pointer shared;
+			int			bucketno;
+			int			batchno;
+
+			ExecHashGetBucketAndBatch(hashtable, hashTuple->hashvalue,
+									  &bucketno, &batchno);
+
+			Assert(batchno < hashtable->nbatch);
+			if (batchno == 0)
+			{
+				/* It still belongs in batch 0.  Copy to a new chunk. */
+				copyTuple =
+					ExecParallelHashTupleAlloc(hashtable,
+											   HJTUPLE_OVERHEAD + tuple->t_len,
+											   &shared);
+				copyTuple->hashvalue = hashTuple->hashvalue;
+				memcpy(HJTUPLE_MINTUPLE(copyTuple), tuple, tuple->t_len);
+				ExecParallelHashPushTuple(&hashtable->buckets.shared[bucketno],
+										  copyTuple, shared);
+			}
+			else
+			{
+				size_t		tuple_size =
+				MAXALIGN(HJTUPLE_OVERHEAD + tuple->t_len);
+
+				/* It belongs in a later batch. */
+				hashtable->batches[batchno].estimated_size += tuple_size;
+				sts_puttuple(hashtable->batches[batchno].inner_tuples,
+							 &hashTuple->hashvalue, tuple);
+			}
+
+			/* Count this tuple. */
+			++hashtable->batches[0].old_ntuples;
+			++hashtable->batches[batchno].ntuples;
+
+			idx += MAXALIGN(HJTUPLE_OVERHEAD +
+							HJTUPLE_MINTUPLE(hashTuple)->t_len);
+		}
+
+		/* Free this chunk. */
+		dsa_free(hashtable->area, chunk_shared);
+
+		CHECK_FOR_INTERRUPTS();
+	}
+}
+
+/*
+ * Help repartition inner batches 1..n.
+ */
+static void
+ExecParallelHashRepartitionRest(HashJoinTable hashtable)
+{
+	ParallelHashJoinState *pstate = hashtable->parallel_state;
+	int			old_nbatch = pstate->old_nbatch;
+	SharedTuplestoreAccessor **old_inner_tuples;
+	ParallelHashJoinBatch *old_batches;
+	int			i;
+
+	/* Get our hands on the previous generation of batches. */
+	old_batches = (ParallelHashJoinBatch *)
+		dsa_get_address(hashtable->area, pstate->old_batches);
+	old_inner_tuples = palloc0(sizeof(SharedTuplestoreAccessor *) * old_nbatch);
+	for (i = 1; i < old_nbatch; ++i)
+	{
+		ParallelHashJoinBatch *shared =
+		NthParallelHashJoinBatch(old_batches, i);
+
+		old_inner_tuples[i] = sts_attach(ParallelHashJoinBatchInner(shared),
+										 ParallelWorkerNumber + 1,
+										 &pstate->fileset);
+	}
+
+	/* Join in the effort to repartition them. */
+	for (i = 1; i < old_nbatch; ++i)
+	{
+		MinimalTuple tuple;
+		uint32		hashvalue;
+
+		/* Scan one partition from the previous generation. */
+		sts_begin_parallel_scan(old_inner_tuples[i]);
+		while ((tuple = sts_parallel_scan_next(old_inner_tuples[i], &hashvalue)))
+		{
+			size_t		tuple_size = MAXALIGN(HJTUPLE_OVERHEAD + tuple->t_len);
+			int			bucketno;
+			int			batchno;
+
+			/* Decide which partition it goes to in the new generation. */
+			ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno,
+									  &batchno);
+
+			hashtable->batches[batchno].estimated_size += tuple_size;
+			++hashtable->batches[batchno].ntuples;
+			++hashtable->batches[i].old_ntuples;
+
+			/* Store the tuple in its new batch. */
+			sts_puttuple(hashtable->batches[batchno].inner_tuples,
+						 &hashvalue, tuple);
+
+			CHECK_FOR_INTERRUPTS();
+		}
+		sts_end_parallel_scan(old_inner_tuples[i]);
+	}
+
+	pfree(old_inner_tuples);
+}
+
+/*
+ * Transfer the backend-local per-batch counters to the shared totals.
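+ *
+ * (The pattern, restated briefly: the insert and repartition paths above
+ * bump counters in this backend's accessor array without any locking, and
+ * only this function folds them into the shared totals:
+ *
+ *     LWLockAcquire(&pstate->lock, LW_EXCLUSIVE);
+ *     batch->shared->ntuples += batch->ntuples;
+ *     batch->ntuples = 0;
+ *     [likewise for size, estimated_size and old_ntuples]
+ *     LWLockRelease(&pstate->lock);
+ *
+ * so the hot paths never take pstate->lock just to count tuples.)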
+ */ +static void +ExecParallelHashMergeCounters(HashJoinTable hashtable) +{ + ParallelHashJoinState *pstate = hashtable->parallel_state; + int i; + + LWLockAcquire(&pstate->lock, LW_EXCLUSIVE); + pstate->total_tuples = 0; + for (i = 0; i < hashtable->nbatch; ++i) + { + ParallelHashJoinBatchAccessor *batch = &hashtable->batches[i]; + + batch->shared->size += batch->size; + batch->shared->estimated_size += batch->estimated_size; + batch->shared->ntuples += batch->ntuples; + batch->shared->old_ntuples += batch->old_ntuples; + batch->size = 0; + batch->estimated_size = 0; + batch->ntuples = 0; + batch->old_ntuples = 0; + pstate->total_tuples += batch->shared->ntuples; + } + LWLockRelease(&pstate->lock); +} + /* * ExecHashIncreaseNumBuckets * increase the original number of buckets in order to reduce @@ -782,14 +1453,15 @@ ExecHashIncreaseNumBuckets(HashJoinTable hashtable) * ExecHashIncreaseNumBatches, but without all the copying into new * chunks) */ - hashtable->buckets = - (HashJoinTuple *) repalloc(hashtable->buckets, + hashtable->buckets.unshared = + (HashJoinTuple *) repalloc(hashtable->buckets.unshared, hashtable->nbuckets * sizeof(HashJoinTuple)); - memset(hashtable->buckets, 0, hashtable->nbuckets * sizeof(HashJoinTuple)); + memset(hashtable->buckets.unshared, 0, + hashtable->nbuckets * sizeof(HashJoinTuple)); /* scan through all tuples in all chunks to rebuild the hash table */ - for (chunk = hashtable->chunks; chunk != NULL; chunk = chunk->next) + for (chunk = hashtable->chunks; chunk != NULL; chunk = chunk->next.unshared) { /* process all tuples stored in this chunk */ size_t idx = 0; @@ -804,8 +1476,8 @@ ExecHashIncreaseNumBuckets(HashJoinTable hashtable) &bucketno, &batchno); /* add the tuple to the proper bucket */ - hashTuple->next = hashtable->buckets[bucketno]; - hashtable->buckets[bucketno] = hashTuple; + hashTuple->next.unshared = hashtable->buckets.unshared[bucketno]; + hashtable->buckets.unshared[bucketno] = hashTuple; /* advance index past the tuple */ idx += MAXALIGN(HJTUPLE_OVERHEAD + @@ -817,13 +1489,100 @@ ExecHashIncreaseNumBuckets(HashJoinTable hashtable) } } +static void +ExecParallelHashIncreaseNumBuckets(HashJoinTable hashtable) +{ + ParallelHashJoinState *pstate = hashtable->parallel_state; + int i; + HashMemoryChunk chunk; + dsa_pointer chunk_s; -/* - * ExecHashTableInsert - * insert a tuple into the hash table depending on the hash value - * it may just go to a temp file for later batches - * - * Note: the passed TupleTableSlot may contain a regular, minimal, or virtual + Assert(BarrierPhase(&pstate->build_barrier) == PHJ_BUILD_HASHING_INNER); + + /* + * It's unlikely, but we need to be prepared for new participants to show + * up while we're in the middle of this operation so we need to switch on + * barrier phase here. + */ + switch (PHJ_GROW_BUCKETS_PHASE(BarrierPhase(&pstate->grow_buckets_barrier))) + { + case PHJ_GROW_BUCKETS_ELECTING: + /* Elect one participant to prepare to increase nbuckets. */ + if (BarrierArriveAndWait(&pstate->grow_buckets_barrier, + WAIT_EVENT_HASH_GROW_BUCKETS_ELECTING)) + { + size_t size; + dsa_pointer_atomic *buckets; + + /* Double the size of the bucket array. 
*/
+				pstate->nbuckets *= 2;
+				size = pstate->nbuckets * sizeof(dsa_pointer_atomic);
+				hashtable->batches[0].shared->size += size / 2;
+				dsa_free(hashtable->area, hashtable->batches[0].shared->buckets);
+				hashtable->batches[0].shared->buckets =
+					dsa_allocate(hashtable->area, size);
+				buckets = (dsa_pointer_atomic *)
+					dsa_get_address(hashtable->area,
+									hashtable->batches[0].shared->buckets);
+				for (i = 0; i < pstate->nbuckets; ++i)
+					dsa_pointer_atomic_init(&buckets[i], InvalidDsaPointer);
+
+				/* Put the chunk list onto the work queue. */
+				pstate->chunk_work_queue = hashtable->batches[0].shared->chunks;
+
+				/* Clear the flag. */
+				pstate->growth = PHJ_GROWTH_OK;
+			}
+			/* Fall through. */
+
+		case PHJ_GROW_BUCKETS_ALLOCATING:
+			/* Wait for the above to complete. */
+			BarrierArriveAndWait(&pstate->grow_buckets_barrier,
+								 WAIT_EVENT_HASH_GROW_BUCKETS_ALLOCATING);
+			/* Fall through. */
+
+		case PHJ_GROW_BUCKETS_REINSERTING:
+			/* Reinsert all tuples into the hash table. */
+			ExecParallelHashEnsureBatchAccessors(hashtable);
+			ExecParallelHashTableSetCurrentBatch(hashtable, 0);
+			while ((chunk = ExecParallelHashPopChunkQueue(hashtable, &chunk_s)))
+			{
+				size_t		idx = 0;
+
+				while (idx < chunk->used)
+				{
+					HashJoinTuple hashTuple = (HashJoinTuple) (chunk->data + idx);
+					dsa_pointer shared = chunk_s + HASH_CHUNK_HEADER_SIZE + idx;
+					int			bucketno;
+					int			batchno;
+
+					ExecHashGetBucketAndBatch(hashtable, hashTuple->hashvalue,
+											  &bucketno, &batchno);
+					Assert(batchno == 0);
+
+					/* add the tuple to the proper bucket */
+					ExecParallelHashPushTuple(&hashtable->buckets.shared[bucketno],
+											  hashTuple, shared);
+
+					/* advance index past the tuple */
+					idx += MAXALIGN(HJTUPLE_OVERHEAD +
+									HJTUPLE_MINTUPLE(hashTuple)->t_len);
+				}
+
+				/* allow this loop to be cancellable */
+				CHECK_FOR_INTERRUPTS();
+			}
+			BarrierArriveAndWait(&pstate->grow_buckets_barrier,
+								 WAIT_EVENT_HASH_GROW_BUCKETS_REINSERTING);
+	}
+}
+
+/*
+ * ExecHashTableInsert
+ *		insert a tuple into the hash table depending on the hash value
+ *		it may just go to a temp file for later batches
+ *
+ * Note: the passed TupleTableSlot may contain a regular, minimal, or virtual
 * tuple; the minimal case in particular is certain to happen while reloading
 * tuples from batch files.  We could save some cycles in the regular-tuple
 * case by not forcing the slot contents into minimal form; not clear if it's
@@ -869,8 +1628,8 @@ ExecHashTableInsert(HashJoinTable hashtable,
 		HeapTupleHeaderClearMatch(HJTUPLE_MINTUPLE(hashTuple));
 
 		/* Push it onto the front of the bucket's list */
-		hashTuple->next = hashtable->buckets[bucketno];
-		hashtable->buckets[bucketno] = hashTuple;
+		hashTuple->next.unshared = hashtable->buckets.unshared[bucketno];
+		hashtable->buckets.unshared[bucketno] = hashTuple;
 
 		/*
 		 * Increase the (optimal) number of buckets if we just exceeded the
@@ -910,6 +1669,94 @@ ExecHashTableInsert(HashJoinTable hashtable,
 	}
 }
 
+/*
+ * ExecParallelHashTableInsert
+ *		insert a tuple into a shared hash table or shared batch tuplestore
+ */
+void
+ExecParallelHashTableInsert(HashJoinTable hashtable,
+							TupleTableSlot *slot,
+							uint32 hashvalue)
+{
+	MinimalTuple tuple = ExecFetchSlotMinimalTuple(slot);
+	dsa_pointer shared;
+	int			bucketno;
+	int			batchno;
+
+retry:
+	ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno, &batchno);
+
+	if (batchno == 0)
+	{
+		HashJoinTuple hashTuple;
+
+		/* Try to load it into memory.
*/ + Assert(BarrierPhase(&hashtable->parallel_state->build_barrier) == + PHJ_BUILD_HASHING_INNER); + hashTuple = ExecParallelHashTupleAlloc(hashtable, + HJTUPLE_OVERHEAD + tuple->t_len, + &shared); + if (hashTuple == NULL) + goto retry; + + /* Store the hash value in the HashJoinTuple header. */ + hashTuple->hashvalue = hashvalue; + memcpy(HJTUPLE_MINTUPLE(hashTuple), tuple, tuple->t_len); + + /* Push it onto the front of the bucket's list */ + ExecParallelHashPushTuple(&hashtable->buckets.shared[bucketno], + hashTuple, shared); + } + else + { + size_t tuple_size = MAXALIGN(HJTUPLE_OVERHEAD + tuple->t_len); + + Assert(batchno > 0); + + /* Try to preallocate space in the batch if necessary. */ + if (hashtable->batches[batchno].preallocated < tuple_size) + { + if (!ExecParallelHashTuplePrealloc(hashtable, batchno, tuple_size)) + goto retry; + } + + Assert(hashtable->batches[batchno].preallocated >= tuple_size); + hashtable->batches[batchno].preallocated -= tuple_size; + sts_puttuple(hashtable->batches[batchno].inner_tuples, &hashvalue, + tuple); + } + ++hashtable->batches[batchno].ntuples; +} + +/* + * Insert a tuple into the current hash table. Unlike + * ExecParallelHashTableInsert, this version is not prepared to send the tuple + * to other batches or to run out of memory, and should only be called with + * tuples that belong in the current batch once growth has been disabled. + */ +void +ExecParallelHashTableInsertCurrentBatch(HashJoinTable hashtable, + TupleTableSlot *slot, + uint32 hashvalue) +{ + MinimalTuple tuple = ExecFetchSlotMinimalTuple(slot); + HashJoinTuple hashTuple; + dsa_pointer shared; + int batchno; + int bucketno; + + ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno, &batchno); + Assert(batchno == hashtable->curbatch); + hashTuple = ExecParallelHashTupleAlloc(hashtable, + HJTUPLE_OVERHEAD + tuple->t_len, + &shared); + hashTuple->hashvalue = hashvalue; + memcpy(HJTUPLE_MINTUPLE(hashTuple), tuple, tuple->t_len); + HeapTupleHeaderClearMatch(HJTUPLE_MINTUPLE(hashTuple)); + ExecParallelHashPushTuple(&hashtable->buckets.shared[bucketno], + hashTuple, shared); +} + /* * ExecHashGetHashValue * Compute the hash value for a tuple @@ -1076,11 +1923,71 @@ ExecScanHashBucket(HashJoinState *hjstate, * otherwise scan the standard hashtable bucket. */ if (hashTuple != NULL) - hashTuple = hashTuple->next; + hashTuple = hashTuple->next.unshared; else if (hjstate->hj_CurSkewBucketNo != INVALID_SKEW_BUCKET_NO) hashTuple = hashtable->skewBucket[hjstate->hj_CurSkewBucketNo]->tuples; else - hashTuple = hashtable->buckets[hjstate->hj_CurBucketNo]; + hashTuple = hashtable->buckets.unshared[hjstate->hj_CurBucketNo]; + + while (hashTuple != NULL) + { + if (hashTuple->hashvalue == hashvalue) + { + TupleTableSlot *inntuple; + + /* insert hashtable's tuple into exec slot so ExecQual sees it */ + inntuple = ExecStoreMinimalTuple(HJTUPLE_MINTUPLE(hashTuple), + hjstate->hj_HashTupleSlot, + false); /* do not pfree */ + econtext->ecxt_innertuple = inntuple; + + /* reset temp memory each time to avoid leaks from qual expr */ + ResetExprContext(econtext); + + if (ExecQual(hjclauses, econtext)) + { + hjstate->hj_CurTuple = hashTuple; + return true; + } + } + + hashTuple = hashTuple->next.unshared; + } + + /* + * no match + */ + return false; +} + +/* + * ExecParallelScanHashBucket + * scan a hash bucket for matches to the current outer tuple + * + * The current outer tuple must be stored in econtext->ecxt_outertuple. 
+ * + * On success, the inner tuple is stored into hjstate->hj_CurTuple and + * econtext->ecxt_innertuple, using hjstate->hj_HashTupleSlot as the slot + * for the latter. + */ +bool +ExecParallelScanHashBucket(HashJoinState *hjstate, + ExprContext *econtext) +{ + ExprState *hjclauses = hjstate->hashclauses; + HashJoinTable hashtable = hjstate->hj_HashTable; + HashJoinTuple hashTuple = hjstate->hj_CurTuple; + uint32 hashvalue = hjstate->hj_CurHashValue; + + /* + * hj_CurTuple is the address of the tuple last returned from the current + * bucket, or NULL if it's time to start scanning a new bucket. + */ + if (hashTuple != NULL) + hashTuple = ExecParallelHashNextTuple(hashtable, hashTuple); + else + hashTuple = ExecParallelHashFirstTuple(hashtable, + hjstate->hj_CurBucketNo); while (hashTuple != NULL) { @@ -1104,7 +2011,7 @@ ExecScanHashBucket(HashJoinState *hjstate, } } - hashTuple = hashTuple->next; + hashTuple = ExecParallelHashNextTuple(hashtable, hashTuple); } /* @@ -1155,10 +2062,10 @@ ExecScanHashTableForUnmatched(HashJoinState *hjstate, ExprContext *econtext) * bucket. */ if (hashTuple != NULL) - hashTuple = hashTuple->next; + hashTuple = hashTuple->next.unshared; else if (hjstate->hj_CurBucketNo < hashtable->nbuckets) { - hashTuple = hashtable->buckets[hjstate->hj_CurBucketNo]; + hashTuple = hashtable->buckets.unshared[hjstate->hj_CurBucketNo]; hjstate->hj_CurBucketNo++; } else if (hjstate->hj_CurSkewBucketNo < hashtable->nSkewBuckets) @@ -1194,7 +2101,7 @@ ExecScanHashTableForUnmatched(HashJoinState *hjstate, ExprContext *econtext) return true; } - hashTuple = hashTuple->next; + hashTuple = hashTuple->next.unshared; } /* allow this loop to be cancellable */ @@ -1226,7 +2133,7 @@ ExecHashTableReset(HashJoinTable hashtable) oldcxt = MemoryContextSwitchTo(hashtable->batchCxt); /* Reallocate and reinitialize the hash bucket headers. */ - hashtable->buckets = (HashJoinTuple *) + hashtable->buckets.unshared = (HashJoinTuple *) palloc0(nbuckets * sizeof(HashJoinTuple)); hashtable->spaceUsed = 0; @@ -1250,7 +2157,8 @@ ExecHashTableResetMatchFlags(HashJoinTable hashtable) /* Reset all flags in the main table ... 
*/ for (i = 0; i < hashtable->nbuckets; i++) { - for (tuple = hashtable->buckets[i]; tuple != NULL; tuple = tuple->next) + for (tuple = hashtable->buckets.unshared[i]; tuple != NULL; + tuple = tuple->next.unshared) HeapTupleHeaderClearMatch(HJTUPLE_MINTUPLE(tuple)); } @@ -1260,7 +2168,7 @@ ExecHashTableResetMatchFlags(HashJoinTable hashtable) int j = hashtable->skewBucketNums[i]; HashSkewBucket *skewBucket = hashtable->skewBucket[j]; - for (tuple = skewBucket->tuples; tuple != NULL; tuple = tuple->next) + for (tuple = skewBucket->tuples; tuple != NULL; tuple = tuple->next.unshared) HeapTupleHeaderClearMatch(HJTUPLE_MINTUPLE(tuple)); } } @@ -1505,8 +2413,9 @@ ExecHashSkewTableInsert(HashJoinTable hashtable, HeapTupleHeaderClearMatch(HJTUPLE_MINTUPLE(hashTuple)); /* Push it onto the front of the skew bucket's list */ - hashTuple->next = hashtable->skewBucket[bucketNumber]->tuples; + hashTuple->next.unshared = hashtable->skewBucket[bucketNumber]->tuples; hashtable->skewBucket[bucketNumber]->tuples = hashTuple; + Assert(hashTuple != hashTuple->next.unshared); /* Account for space used, and back off if we've used too much */ hashtable->spaceUsed += hashTupleSize; @@ -1554,7 +2463,7 @@ ExecHashRemoveNextSkewBucket(HashJoinTable hashtable) hashTuple = bucket->tuples; while (hashTuple != NULL) { - HashJoinTuple nextHashTuple = hashTuple->next; + HashJoinTuple nextHashTuple = hashTuple->next.unshared; MinimalTuple tuple; Size tupleSize; @@ -1580,8 +2489,8 @@ ExecHashRemoveNextSkewBucket(HashJoinTable hashtable) memcpy(copyTuple, hashTuple, tupleSize); pfree(hashTuple); - copyTuple->next = hashtable->buckets[bucketno]; - hashtable->buckets[bucketno] = copyTuple; + copyTuple->next.unshared = hashtable->buckets.unshared[bucketno]; + hashtable->buckets.unshared[bucketno] = copyTuple; /* We have reduced skew space, but overall space doesn't change */ hashtable->spaceUsedSkew -= tupleSize; @@ -1760,11 +2669,11 @@ dense_alloc(HashJoinTable hashtable, Size size) if (hashtable->chunks != NULL) { newChunk->next = hashtable->chunks->next; - hashtable->chunks->next = newChunk; + hashtable->chunks->next.unshared = newChunk; } else { - newChunk->next = hashtable->chunks; + newChunk->next.unshared = hashtable->chunks; hashtable->chunks = newChunk; } @@ -1789,7 +2698,7 @@ dense_alloc(HashJoinTable hashtable, Size size) newChunk->used = size; newChunk->ntuples = 1; - newChunk->next = hashtable->chunks; + newChunk->next.unshared = hashtable->chunks; hashtable->chunks = newChunk; return newChunk->data; @@ -1803,3 +2712,601 @@ dense_alloc(HashJoinTable hashtable, Size size) /* return pointer to the start of the tuple memory */ return ptr; } + +/* + * Allocate space for a tuple in shared dense storage. This is equivalent to + * dense_alloc but for Parallel Hash using shared memory. + * + * While loading a tuple into shared memory, we might run out of memory and + * decide to repartition, or determine that the load factor is too high and + * decide to expand the bucket array, or discover that another participant has + * commanded us to help do that. Return NULL if number of buckets or batches + * has changed, indicating that the caller must retry (considering the + * possibility that the tuple no longer belongs in the same batch). 
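+ *
+ * (The caller-side contract, sketched after the pattern used by
+ * ExecParallelHashTableInsert() above:
+ *
+ * retry:
+ *     ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno, &batchno);
+ *     hashTuple = ExecParallelHashTupleAlloc(hashtable, size, &shared);
+ *     if (hashTuple == NULL)
+ *         goto retry;        [dimensions changed; recompute batchno]
+ *
+ * The helping work has already been done inside the failed call, so the
+ * retry can make progress.)
+ */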
+ */ +static HashJoinTuple +ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size, + dsa_pointer *shared) +{ + ParallelHashJoinState *pstate = hashtable->parallel_state; + dsa_pointer chunk_shared; + HashMemoryChunk chunk; + Size chunk_size; + HashJoinTuple result; + int curbatch = hashtable->curbatch; + + size = MAXALIGN(size); + + /* + * Fast path: if there is enough space in this backend's current chunk, + * then we can allocate without any locking. + */ + chunk = hashtable->current_chunk; + if (chunk != NULL && + size < HASH_CHUNK_THRESHOLD && + chunk->maxlen - chunk->used >= size) + { + + chunk_shared = hashtable->current_chunk_shared; + Assert(chunk == dsa_get_address(hashtable->area, chunk_shared)); + *shared = chunk_shared + HASH_CHUNK_HEADER_SIZE + chunk->used; + result = (HashJoinTuple) (chunk->data + chunk->used); + chunk->used += size; + + Assert(chunk->used <= chunk->maxlen); + Assert(result == dsa_get_address(hashtable->area, *shared)); + + return result; + } + + /* Slow path: try to allocate a new chunk. */ + LWLockAcquire(&pstate->lock, LW_EXCLUSIVE); + + /* + * Check if we need to help increase the number of buckets or batches. + */ + if (pstate->growth == PHJ_GROWTH_NEED_MORE_BATCHES || + pstate->growth == PHJ_GROWTH_NEED_MORE_BUCKETS) + { + ParallelHashGrowth growth = pstate->growth; + + hashtable->current_chunk = NULL; + LWLockRelease(&pstate->lock); + + /* Another participant has commanded us to help grow. */ + if (growth == PHJ_GROWTH_NEED_MORE_BATCHES) + ExecParallelHashIncreaseNumBatches(hashtable); + else if (growth == PHJ_GROWTH_NEED_MORE_BUCKETS) + ExecParallelHashIncreaseNumBuckets(hashtable); + + /* The caller must retry. */ + return NULL; + } + + /* Oversized tuples get their own chunk. */ + if (size > HASH_CHUNK_THRESHOLD) + chunk_size = size + HASH_CHUNK_HEADER_SIZE; + else + chunk_size = HASH_CHUNK_SIZE; + + /* Check if it's time to grow batches or buckets. */ + if (pstate->growth != PHJ_GROWTH_DISABLED) + { + Assert(curbatch == 0); + Assert(BarrierPhase(&pstate->build_barrier) == PHJ_BUILD_HASHING_INNER); + + /* + * Check if our space limit would be exceeded. To avoid choking on + * very large tuples or very low work_mem setting, we'll always allow + * each backend to allocate at least one chunk. + */ + if (hashtable->batches[0].at_least_one_chunk && + hashtable->batches[0].shared->size + + chunk_size > pstate->space_allowed) + { + pstate->growth = PHJ_GROWTH_NEED_MORE_BATCHES; + hashtable->batches[0].shared->space_exhausted = true; + LWLockRelease(&pstate->lock); + + return NULL; + } + + /* Check if our load factor limit would be exceeded. */ + if (hashtable->nbatch == 1) + { + hashtable->batches[0].shared->ntuples += hashtable->batches[0].ntuples; + hashtable->batches[0].ntuples = 0; + if (hashtable->batches[0].shared->ntuples + 1 > + hashtable->nbuckets * NTUP_PER_BUCKET && + hashtable->nbuckets < (INT_MAX / 2)) + { + pstate->growth = PHJ_GROWTH_NEED_MORE_BUCKETS; + LWLockRelease(&pstate->lock); + + return NULL; + } + } + } + + /* We are cleared to allocate a new chunk. */ + chunk_shared = dsa_allocate(hashtable->area, chunk_size); + hashtable->batches[curbatch].shared->size += chunk_size; + hashtable->batches[curbatch].at_least_one_chunk = true; + + /* Set up the chunk. 
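+	 *
+	 * (Bookkeeping note: the DSA pointer handed back mirrors the local
+	 * address computed from the same offsets, as in the fast path above:
+	 *
+	 *     *shared = chunk_shared + HASH_CHUNK_HEADER_SIZE + chunk->used;
+	 *     result = (HashJoinTuple) (chunk->data + chunk->used);
+	 *
+	 * which is what lets the Asserts cross-check both views of the tuple
+	 * with dsa_get_address().)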
+	 */
+	chunk = (HashMemoryChunk) dsa_get_address(hashtable->area, chunk_shared);
+	*shared = chunk_shared + HASH_CHUNK_HEADER_SIZE;
+	chunk->maxlen = chunk_size - HASH_CHUNK_HEADER_SIZE;
+	chunk->used = size;
+
+	/*
+	 * Push it onto the list of chunks, so that it can be found if we need to
+	 * increase the number of buckets or batches (batch 0 only) and later for
+	 * freeing the memory (all batches).
+	 */
+	chunk->next.shared = hashtable->batches[curbatch].shared->chunks;
+	hashtable->batches[curbatch].shared->chunks = chunk_shared;
+
+	if (size <= HASH_CHUNK_THRESHOLD)
+	{
+		/*
+		 * Make this the current chunk so that we can use the fast path to
+		 * fill the rest of it up in future calls.
+		 */
+		hashtable->current_chunk = chunk;
+		hashtable->current_chunk_shared = chunk_shared;
+	}
+	LWLockRelease(&pstate->lock);
+
+	Assert(chunk->data == dsa_get_address(hashtable->area, *shared));
+	result = (HashJoinTuple) chunk->data;
+
+	return result;
+}
+
+/*
+ * One backend needs to set up the shared batch state including tuplestores.
+ * Other backends will ensure they have correctly configured accessors by
+ * calling ExecParallelHashEnsureBatchAccessors().
+ */
+static void
+ExecParallelHashJoinSetUpBatches(HashJoinTable hashtable, int nbatch)
+{
+	ParallelHashJoinState *pstate = hashtable->parallel_state;
+	ParallelHashJoinBatch *batches;
+	MemoryContext oldcxt;
+	int			i;
+
+	Assert(hashtable->batches == NULL);
+
+	/* Allocate space. */
+	pstate->batches =
+		dsa_allocate0(hashtable->area,
+					  EstimateParallelHashJoinBatch(hashtable) * nbatch);
+	pstate->nbatch = nbatch;
+	batches = dsa_get_address(hashtable->area, pstate->batches);
+
+	/* Use hash join memory context. */
+	oldcxt = MemoryContextSwitchTo(hashtable->hashCxt);
+
+	/* Allocate this backend's accessor array. */
+	hashtable->nbatch = nbatch;
+	hashtable->batches = (ParallelHashJoinBatchAccessor *)
+		palloc0(sizeof(ParallelHashJoinBatchAccessor) * hashtable->nbatch);
+
+	/* Set up the shared state, tuplestores and backend-local accessors. */
+	for (i = 0; i < hashtable->nbatch; ++i)
+	{
+		ParallelHashJoinBatchAccessor *accessor = &hashtable->batches[i];
+		ParallelHashJoinBatch *shared = NthParallelHashJoinBatch(batches, i);
+		char		name[MAXPGPATH];
+
+		/*
+		 * All members of shared were zero-initialized.  We just need to set
+		 * up the Barrier.
+		 */
+		BarrierInit(&shared->batch_barrier, 0);
+		if (i == 0)
+		{
+			/* Batch 0 doesn't need to be loaded. */
+			BarrierAttach(&shared->batch_barrier);
+			while (BarrierPhase(&shared->batch_barrier) < PHJ_BATCH_PROBING)
+				BarrierArriveAndWait(&shared->batch_barrier, 0);
+			BarrierDetach(&shared->batch_barrier);
+		}
+
+		/* Initialize accessor state.  All members were zero-initialized. */
+		accessor->shared = shared;
+
+		/* Initialize the shared tuplestores. */
+		snprintf(name, sizeof(name), "i%dof%d", i, hashtable->nbatch);
+		accessor->inner_tuples =
+			sts_initialize(ParallelHashJoinBatchInner(shared),
+						   pstate->nparticipants,
+						   ParallelWorkerNumber + 1,
+						   sizeof(uint32),
+						   SHARED_TUPLESTORE_SINGLE_PASS,
+						   &pstate->fileset,
+						   name);
+		snprintf(name, sizeof(name), "o%dof%d", i, hashtable->nbatch);
+		accessor->outer_tuples =
+			sts_initialize(ParallelHashJoinBatchOuter(shared,
+													  pstate->nparticipants),
+						   pstate->nparticipants,
+						   ParallelWorkerNumber + 1,
+						   sizeof(uint32),
+						   SHARED_TUPLESTORE_SINGLE_PASS,
+						   &pstate->fileset,
+						   name);
+	}
+
+	MemoryContextSwitchTo(oldcxt);
+}
+
+/*
+ * Free the current set of ParallelHashJoinBatchAccessor objects.
+ */ +static void +ExecParallelHashCloseBatchAccessors(HashJoinTable hashtable) +{ + int i; + + for (i = 0; i < hashtable->nbatch; ++i) + { + /* Make sure no files are left open. */ + sts_end_write(hashtable->batches[i].inner_tuples); + sts_end_write(hashtable->batches[i].outer_tuples); + sts_end_parallel_scan(hashtable->batches[i].inner_tuples); + sts_end_parallel_scan(hashtable->batches[i].outer_tuples); + } + pfree(hashtable->batches); + hashtable->batches = NULL; +} + +/* + * Make sure this backend has up-to-date accessors for the current set of + * batches. + */ +static void +ExecParallelHashEnsureBatchAccessors(HashJoinTable hashtable) +{ + ParallelHashJoinState *pstate = hashtable->parallel_state; + ParallelHashJoinBatch *batches; + MemoryContext oldcxt; + int i; + + if (hashtable->batches != NULL) + { + if (hashtable->nbatch == pstate->nbatch) + return; + ExecParallelHashCloseBatchAccessors(hashtable); + } + + /* + * It's possible for a backend to start up very late so that the whole + * join is finished and the shm state for tracking batches has already + * been freed by ExecHashTableDetach(). In that case we'll just leave + * hashtable->batches as NULL so that ExecParallelHashJoinNewBatch() gives + * up early. + */ + if (!DsaPointerIsValid(pstate->batches)) + return; + + /* Use hash join memory context. */ + oldcxt = MemoryContextSwitchTo(hashtable->hashCxt); + + /* Allocate this backend's accessor array. */ + hashtable->nbatch = pstate->nbatch; + hashtable->batches = (ParallelHashJoinBatchAccessor *) + palloc0(sizeof(ParallelHashJoinBatchAccessor) * hashtable->nbatch); + + /* Find the base of the pseudo-array of ParallelHashJoinBatch objects. */ + batches = (ParallelHashJoinBatch *) + dsa_get_address(hashtable->area, pstate->batches); + + /* Set up the accessor array and attach to the tuplestores. */ + for (i = 0; i < hashtable->nbatch; ++i) + { + ParallelHashJoinBatchAccessor *accessor = &hashtable->batches[i]; + ParallelHashJoinBatch *shared = NthParallelHashJoinBatch(batches, i); + + accessor->shared = shared; + accessor->preallocated = 0; + accessor->done = false; + accessor->inner_tuples = + sts_attach(ParallelHashJoinBatchInner(shared), + ParallelWorkerNumber + 1, + &pstate->fileset); + accessor->outer_tuples = + sts_attach(ParallelHashJoinBatchOuter(shared, + pstate->nparticipants), + ParallelWorkerNumber + 1, + &pstate->fileset); + } + + MemoryContextSwitchTo(oldcxt); +} + +/* + * Allocate an empty shared memory hash table for a given batch. + */ +void +ExecParallelHashTableAlloc(HashJoinTable hashtable, int batchno) +{ + ParallelHashJoinBatch *batch = hashtable->batches[batchno].shared; + dsa_pointer_atomic *buckets; + int nbuckets = hashtable->parallel_state->nbuckets; + int i; + + batch->buckets = + dsa_allocate(hashtable->area, sizeof(dsa_pointer_atomic) * nbuckets); + buckets = (dsa_pointer_atomic *) + dsa_get_address(hashtable->area, batch->buckets); + for (i = 0; i < nbuckets; ++i) + dsa_pointer_atomic_init(&buckets[i], InvalidDsaPointer); +} + +/* + * If we are currently attached to a shared hash join batch, detach. If we + * are last to detach, clean up. + */ +void +ExecHashTableDetachBatch(HashJoinTable hashtable) +{ + if (hashtable->parallel_state != NULL && + hashtable->curbatch >= 0) + { + int curbatch = hashtable->curbatch; + ParallelHashJoinBatch *batch = hashtable->batches[curbatch].shared; + + /* Make sure any temporary files are closed. 
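+		 *
+		 * (Cleanup idiom used just below, for what it's worth:
+		 * BarrierArriveAndDetach() returns true only for the last
+		 * participant to leave, so exactly one backend runs the
+		 * free-shared-chunks-and-buckets branch:
+		 *
+		 *     if (BarrierArriveAndDetach(&batch->batch_barrier))
+		 *         [free batch->chunks and batch->buckets]
+		 *
+		 * Everyone else merely stops using the memory.)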
*/ + sts_end_parallel_scan(hashtable->batches[curbatch].inner_tuples); + sts_end_parallel_scan(hashtable->batches[curbatch].outer_tuples); + + /* Detach from the batch we were last working on. */ + if (BarrierArriveAndDetach(&batch->batch_barrier)) + { + /* + * Technically we shouldn't access the barrier because we're no + * longer attached, but since there is no way it's moving after + * this point it seems safe to make the following assertion. + */ + Assert(BarrierPhase(&batch->batch_barrier) == PHJ_BATCH_DONE); + + /* Free shared chunks and buckets. */ + while (DsaPointerIsValid(batch->chunks)) + { + HashMemoryChunk chunk = + dsa_get_address(hashtable->area, batch->chunks); + dsa_pointer next = chunk->next.shared; + + dsa_free(hashtable->area, batch->chunks); + batch->chunks = next; + } + if (DsaPointerIsValid(batch->buckets)) + { + dsa_free(hashtable->area, batch->buckets); + batch->buckets = InvalidDsaPointer; + } + } + ExecParallelHashUpdateSpacePeak(hashtable, curbatch); + /* Remember that we are not attached to a batch. */ + hashtable->curbatch = -1; + } +} + +/* + * Detach from all shared resources. If we are last to detach, clean up. + */ +void +ExecHashTableDetach(HashJoinTable hashtable) +{ + if (hashtable->parallel_state) + { + ParallelHashJoinState *pstate = hashtable->parallel_state; + int i; + + /* Make sure any temporary files are closed. */ + if (hashtable->batches) + { + for (i = 0; i < hashtable->nbatch; ++i) + { + sts_end_write(hashtable->batches[i].inner_tuples); + sts_end_write(hashtable->batches[i].outer_tuples); + sts_end_parallel_scan(hashtable->batches[i].inner_tuples); + sts_end_parallel_scan(hashtable->batches[i].outer_tuples); + } + } + + /* If we're last to detach, clean up shared memory. */ + if (BarrierDetach(&pstate->build_barrier)) + { + if (DsaPointerIsValid(pstate->batches)) + { + dsa_free(hashtable->area, pstate->batches); + pstate->batches = InvalidDsaPointer; + } + } + + hashtable->parallel_state = NULL; + } +} + +/* + * Get the first tuple in a given bucket identified by number. + */ +static inline HashJoinTuple +ExecParallelHashFirstTuple(HashJoinTable hashtable, int bucketno) +{ + HashJoinTuple tuple; + dsa_pointer p; + + Assert(hashtable->parallel_state); + p = dsa_pointer_atomic_read(&hashtable->buckets.shared[bucketno]); + tuple = (HashJoinTuple) dsa_get_address(hashtable->area, p); + + return tuple; +} + +/* + * Get the next tuple in the same bucket as 'tuple'. + */ +static inline HashJoinTuple +ExecParallelHashNextTuple(HashJoinTable hashtable, HashJoinTuple tuple) +{ + HashJoinTuple next; + + Assert(hashtable->parallel_state); + next = (HashJoinTuple) dsa_get_address(hashtable->area, tuple->next.shared); + + return next; +} + +/* + * Insert a tuple at the front of a chain of tuples in DSA memory atomically. + */ +static inline void +ExecParallelHashPushTuple(dsa_pointer_atomic *head, + HashJoinTuple tuple, + dsa_pointer tuple_shared) +{ + for (;;) + { + tuple->next.shared = dsa_pointer_atomic_read(head); + if (dsa_pointer_atomic_compare_exchange(head, + &tuple->next.shared, + tuple_shared)) + break; + } +} + +/* + * Prepare to work on a given batch. 
+ */ +void +ExecParallelHashTableSetCurrentBatch(HashJoinTable hashtable, int batchno) +{ + Assert(hashtable->batches[batchno].shared->buckets != InvalidDsaPointer); + + hashtable->curbatch = batchno; + hashtable->buckets.shared = (dsa_pointer_atomic *) + dsa_get_address(hashtable->area, + hashtable->batches[batchno].shared->buckets); + hashtable->nbuckets = hashtable->parallel_state->nbuckets; + hashtable->log2_nbuckets = my_log2(hashtable->nbuckets); + hashtable->current_chunk = NULL; + hashtable->current_chunk_shared = InvalidDsaPointer; + hashtable->batches[batchno].at_least_one_chunk = false; +} + +/* + * Take the next available chunk from the queue of chunks being worked on in + * parallel. Return NULL if there are none left. Otherwise return a pointer + * to the chunk, and set *shared to the DSA pointer to the chunk. + */ +static HashMemoryChunk +ExecParallelHashPopChunkQueue(HashJoinTable hashtable, dsa_pointer *shared) +{ + ParallelHashJoinState *pstate = hashtable->parallel_state; + HashMemoryChunk chunk; + + LWLockAcquire(&pstate->lock, LW_EXCLUSIVE); + if (DsaPointerIsValid(pstate->chunk_work_queue)) + { + *shared = pstate->chunk_work_queue; + chunk = (HashMemoryChunk) + dsa_get_address(hashtable->area, *shared); + pstate->chunk_work_queue = chunk->next.shared; + } + else + chunk = NULL; + LWLockRelease(&pstate->lock); + + return chunk; +} + +/* + * Increase the space preallocated in this backend for a given inner batch by + * at least a given amount. This allows us to track whether a given batch + * would fit in memory when loaded back in. Also increase the number of + * batches or buckets if required. + * + * This maintains a running estimation of how much space will be taken when we + * load the batch back into memory by simulating the way chunks will be handed + * out to workers. It's not perfectly accurate because the tuples will be + * packed into memory chunks differently by ExecParallelHashTupleAlloc(), but + * it should be pretty close. It tends to overestimate by a fraction of a + * chunk per worker since all workers gang up to preallocate during hashing, + * but workers tend to reload batches alone if there are enough to go around, + * leaving fewer partially filled chunks. This effect is bounded by + * nparticipants. + * + * Return false if the number of batches or buckets has changed, and the + * caller should reconsider which batch a given tuple now belongs in and call + * again. + */ +static bool +ExecParallelHashTuplePrealloc(HashJoinTable hashtable, int batchno, size_t size) +{ + ParallelHashJoinState *pstate = hashtable->parallel_state; + ParallelHashJoinBatchAccessor *batch = &hashtable->batches[batchno]; + size_t want = Max(size, HASH_CHUNK_SIZE - HASH_CHUNK_HEADER_SIZE); + + Assert(batchno > 0); + Assert(batchno < hashtable->nbatch); + + LWLockAcquire(&pstate->lock, LW_EXCLUSIVE); + + /* Has another participant commanded us to help grow? 
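+	 *
+	 * (A worked example of the estimate, assuming HASH_CHUNK_SIZE is the
+	 * 32kB defined in hashjoin.h: the first 100-byte tuple sent to a batch
+	 * reserves want = Max(100, HASH_CHUNK_SIZE - HASH_CHUNK_HEADER_SIZE),
+	 * charging roughly one whole 32kB chunk to estimated_size; subsequent
+	 * tuples for that batch then consume batch->preallocated without
+	 * retaking pstate->lock until the reservation runs out.)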
*/ + if (pstate->growth == PHJ_GROWTH_NEED_MORE_BATCHES || + pstate->growth == PHJ_GROWTH_NEED_MORE_BUCKETS) + { + ParallelHashGrowth growth = pstate->growth; + + LWLockRelease(&pstate->lock); + if (growth == PHJ_GROWTH_NEED_MORE_BATCHES) + ExecParallelHashIncreaseNumBatches(hashtable); + else if (growth == PHJ_GROWTH_NEED_MORE_BUCKETS) + ExecParallelHashIncreaseNumBuckets(hashtable); + + return false; + } + + if (pstate->growth != PHJ_GROWTH_DISABLED && + batch->at_least_one_chunk && + (batch->shared->estimated_size + size > pstate->space_allowed)) + { + /* + * We have determined that this batch would exceed the space budget if + * loaded into memory. Command all participants to help repartition. + */ + batch->shared->space_exhausted = true; + pstate->growth = PHJ_GROWTH_NEED_MORE_BATCHES; + LWLockRelease(&pstate->lock); + + return false; + } + + batch->at_least_one_chunk = true; + batch->shared->estimated_size += want + HASH_CHUNK_HEADER_SIZE; + batch->preallocated = want; + LWLockRelease(&pstate->lock); + + return true; +} + +/* + * Update this backend's copy of hashtable->spacePeak to account for a given + * batch. This is called at the end of hashing for batch 0, and then for each + * batch when it is done or discovered to be already done. The result is used + * for EXPLAIN output. + */ +void +ExecParallelHashUpdateSpacePeak(HashJoinTable hashtable, int batchno) +{ + size_t size; + + size = hashtable->batches[batchno].shared->size; + size += sizeof(dsa_pointer_atomic) * hashtable->nbuckets; + hashtable->spacePeak = Max(hashtable->spacePeak, size); +} diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c index ab1632cc13..5d1dc1f401 100644 --- a/src/backend/executor/nodeHashjoin.c +++ b/src/backend/executor/nodeHashjoin.c @@ -10,18 +10,112 @@ * IDENTIFICATION * src/backend/executor/nodeHashjoin.c * + * PARALLELISM + * + * Hash joins can participate in parallel query execution in several ways. A + * parallel-oblivious hash join is one where the node is unaware that it is + * part of a parallel plan. In this case, a copy of the inner plan is used to + * build a copy of the hash table in every backend, and the outer plan could + * either be built from a partial or complete path, so that the results of the + * hash join are correspondingly either partial or complete. A parallel-aware + * hash join is one that behaves differently, coordinating work between + * backends, and appears as Parallel Hash Join in EXPLAIN output. A Parallel + * Hash Join always appears with a Parallel Hash node. + * + * Parallel-aware hash joins use the same per-backend state machine to track + * progress through the hash join algorithm as parallel-oblivious hash joins. + * In a parallel-aware hash join, there is also a shared state machine that + * co-operating backends use to synchronize their local state machines and + * program counters. The shared state machine is managed with a Barrier IPC + * primitive. When all attached participants arrive at a barrier, the phase + * advances and all waiting participants are released. + * + * When a participant begins working on a parallel hash join, it must first + * figure out how much progress has already been made, because participants + * don't wait for each other to begin. For this reason there are switch + * statements at key points in the code where we have to synchronize our local + * state machine with the phase, and then jump to the correct part of the + * algorithm so that we can get started. 
+ * + * One barrier called build_barrier is used to coordinate the hashing phases. + * The phase is represented by an integer which begins at zero and increments + * one by one, but in the code it is referred to by symbolic names as follows: + * + * PHJ_BUILD_ELECTING -- initial state + * PHJ_BUILD_ALLOCATING -- one sets up the batches and table 0 + * PHJ_BUILD_HASHING_INNER -- all hash the inner rel + * PHJ_BUILD_HASHING_OUTER -- (multi-batch only) all hash the outer + * PHJ_BUILD_DONE -- building done, probing can begin + * + * While in the phase PHJ_BUILD_HASHING_INNER a separate pair of barriers may + * be used repeatedly as required to coordinate expansions in the number of + * batches or buckets. Their phases are as follows: + * + * PHJ_GROW_BATCHES_ELECTING -- initial state + * PHJ_GROW_BATCHES_ALLOCATING -- one allocates new batches + * PHJ_GROW_BATCHES_REPARTITIONING -- all repartition + * PHJ_GROW_BATCHES_FINISHING -- one cleans up, detects skew + * + * PHJ_GROW_BUCKETS_ELECTING -- initial state + * PHJ_GROW_BUCKETS_ALLOCATING -- one allocates new buckets + * PHJ_GROW_BUCKETS_REINSERTING -- all insert tuples + * + * If the planner got the number of batches and buckets right, those won't be + * necessary, but on the other hand we might finish up needing to expand the + * buckets or batches multiple times while hashing the inner relation to stay + * within our memory budget and load factor target. For that reason it's a + * separate pair of barriers using circular phases. + * + * The PHJ_BUILD_HASHING_OUTER phase is required only for multi-batch joins, + * because we need to divide the outer relation into batches up front in order + * to be able to process batches entirely independently. In contrast, the + * parallel-oblivious algorithm simply throws tuples 'forward' to 'later' + * batches whenever it encounters them while scanning and probing, which it + * can do because it processes batches in serial order. + * + * Once PHJ_BUILD_DONE is reached, backends then split up and process + * different batches, or gang up and work together on probing batches if there + * aren't enough to go around. For each batch there is a separate barrier + * with the following phases: + * + * PHJ_BATCH_ELECTING -- initial state + * PHJ_BATCH_ALLOCATING -- one allocates buckets + * PHJ_BATCH_LOADING -- all load the hash table from disk + * PHJ_BATCH_PROBING -- all probe + * PHJ_BATCH_DONE -- end + * + * Batch 0 is a special case, because it starts out in phase + * PHJ_BATCH_PROBING; populating batch 0's hash table is done during + * PHJ_BUILD_HASHING_INNER so we can skip loading. + * + * Initially we try to plan for a single-batch hash join using the combined + * work_mem of all participants to create a large shared hash table. If that + * turns out either at planning or execution time to be impossible then we + * fall back to regular work_mem sized hash tables. + * + * To avoid deadlocks, we never wait for any barrier unless it is known that + * all other backends attached to it are actively executing the node or have + * already arrived. Practically, that means that we never return a tuple + * while attached to a barrier, unless the barrier has reached its final + * state. In the slightly special case of the per-batch barrier, we return + * tuples while in PHJ_BATCH_PROBING phase, but that's OK because we use + * BarrierArriveAndDetach() to advance it to PHJ_BATCH_DONE without waiting. 
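+ *
+ * As a minimal illustrative sketch (helper names invented here, not part
+ * of the executor code), a participant attaching to build_barrier
+ * synchronizes its local state machine with the shared phase like this:
+ *
+ *     switch (BarrierAttach(build_barrier))
+ *     {
+ *         case PHJ_BUILD_ELECTING:
+ *             if (BarrierArriveAndWait(build_barrier,
+ *                                      WAIT_EVENT_HASH_BUILD_ELECTING))
+ *                 allocate_batches_and_buckets();   <- invented helper
+ *             -- fall through
+ *         case PHJ_BUILD_ALLOCATING:
+ *             BarrierArriveAndWait(build_barrier,
+ *                                  WAIT_EVENT_HASH_BUILD_ALLOCATING);
+ *             -- fall through
+ *         case PHJ_BUILD_HASHING_INNER:
+ *             help_hash_inner_relation();           <- invented helper
+ *             BarrierArriveAndWait(build_barrier,
+ *                                  WAIT_EVENT_HASH_BUILD_HASHING_INNER);
+ *             break;
+ *         default:
+ *             break;            -- too late to help; table already built
+ *     }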
+ * *------------------------------------------------------------------------- */ #include "postgres.h" #include "access/htup_details.h" +#include "access/parallel.h" #include "executor/executor.h" #include "executor/hashjoin.h" #include "executor/nodeHash.h" #include "executor/nodeHashjoin.h" #include "miscadmin.h" +#include "pgstat.h" #include "utils/memutils.h" +#include "utils/sharedtuplestore.h" /* @@ -42,24 +136,34 @@ static TupleTableSlot *ExecHashJoinOuterGetTuple(PlanState *outerNode, HashJoinState *hjstate, uint32 *hashvalue); +static TupleTableSlot *ExecParallelHashJoinOuterGetTuple(PlanState *outerNode, + HashJoinState *hjstate, + uint32 *hashvalue); static TupleTableSlot *ExecHashJoinGetSavedTuple(HashJoinState *hjstate, BufFile *file, uint32 *hashvalue, TupleTableSlot *tupleSlot); static bool ExecHashJoinNewBatch(HashJoinState *hjstate); +static bool ExecParallelHashJoinNewBatch(HashJoinState *hjstate); +static void ExecParallelHashJoinPartitionOuter(HashJoinState *node); /* ---------------------------------------------------------------- - * ExecHashJoin + * ExecHashJoinImpl * - * This function implements the Hybrid Hashjoin algorithm. + * This function implements the Hybrid Hashjoin algorithm. It is marked + * with an always-inline attribute so that ExecHashJoin() and + * ExecParallelHashJoin() can inline it. Compilers that respect the + * attribute should create versions specialized for parallel == true and + * parallel == false with unnecessary branches removed. * * Note: the relation we build hash table on is the "inner" * the other one is "outer". * ---------------------------------------------------------------- */ -static TupleTableSlot * /* return: a tuple or NULL */ -ExecHashJoin(PlanState *pstate) +pg_attribute_always_inline +static inline TupleTableSlot * +ExecHashJoinImpl(PlanState *pstate, bool parallel) { HashJoinState *node = castNode(HashJoinState, pstate); PlanState *outerNode; @@ -71,6 +175,7 @@ ExecHashJoin(PlanState *pstate) TupleTableSlot *outerTupleSlot; uint32 hashvalue; int batchno; + ParallelHashJoinState *parallel_state; /* * get information from HashJoin node @@ -81,6 +186,7 @@ ExecHashJoin(PlanState *pstate) outerNode = outerPlanState(node); hashtable = node->hj_HashTable; econtext = node->js.ps.ps_ExprContext; + parallel_state = hashNode->parallel_state; /* * Reset per-tuple memory context to free any expression evaluation @@ -138,6 +244,18 @@ ExecHashJoin(PlanState *pstate) /* no chance to not build the hash table */ node->hj_FirstOuterTupleSlot = NULL; } + else if (parallel) + { + /* + * The empty-outer optimization is not implemented for + * shared hash tables, because no one participant can + * determine that there are no outer tuples, and it's not + * yet clear that it's worth the synchronization overhead + * of reaching consensus to figure that out. So we have + * to build the hash table. + */ + node->hj_FirstOuterTupleSlot = NULL; + } else if (HJ_FILL_OUTER(node) || (outerNode->plan->startup_cost < hashNode->ps.plan->total_cost && !node->hj_OuterNotEmpty)) @@ -155,15 +273,19 @@ ExecHashJoin(PlanState *pstate) node->hj_FirstOuterTupleSlot = NULL; /* - * create the hash table + * Create the hash table. If using Parallel Hash, then + * whoever gets here first will create the hash table and any + * later arrivals will merely attach to it. 
*/ - hashtable = ExecHashTableCreate((Hash *) hashNode->ps.plan, + hashtable = ExecHashTableCreate(hashNode, node->hj_HashOperators, HJ_FILL_INNER(node)); node->hj_HashTable = hashtable; /* - * execute the Hash node, to build the hash table + * Execute the Hash node, to build the hash table. If using + * Parallel Hash, then we'll try to help hashing unless we + * arrived too late. */ hashNode->hashtable = hashtable; (void) MultiExecProcNode((PlanState *) hashNode); @@ -189,7 +311,34 @@ ExecHashJoin(PlanState *pstate) */ node->hj_OuterNotEmpty = false; - node->hj_JoinState = HJ_NEED_NEW_OUTER; + if (parallel) + { + Barrier *build_barrier; + + build_barrier = ¶llel_state->build_barrier; + Assert(BarrierPhase(build_barrier) == PHJ_BUILD_HASHING_OUTER || + BarrierPhase(build_barrier) == PHJ_BUILD_DONE); + if (BarrierPhase(build_barrier) == PHJ_BUILD_HASHING_OUTER) + { + /* + * If multi-batch, we need to hash the outer relation + * up front. + */ + if (hashtable->nbatch > 1) + ExecParallelHashJoinPartitionOuter(node); + BarrierArriveAndWait(build_barrier, + WAIT_EVENT_HASH_BUILD_HASHING_OUTER); + } + Assert(BarrierPhase(build_barrier) == PHJ_BUILD_DONE); + + /* Each backend should now select a batch to work on. */ + hashtable->curbatch = -1; + node->hj_JoinState = HJ_NEED_NEW_BATCH; + + continue; + } + else + node->hj_JoinState = HJ_NEED_NEW_OUTER; /* FALL THRU */ @@ -198,9 +347,14 @@ ExecHashJoin(PlanState *pstate) /* * We don't have an outer tuple, try to get the next one */ - outerTupleSlot = ExecHashJoinOuterGetTuple(outerNode, - node, - &hashvalue); + if (parallel) + outerTupleSlot = + ExecParallelHashJoinOuterGetTuple(outerNode, node, + &hashvalue); + else + outerTupleSlot = + ExecHashJoinOuterGetTuple(outerNode, node, &hashvalue); + if (TupIsNull(outerTupleSlot)) { /* end of batch, or maybe whole join */ @@ -240,10 +394,12 @@ ExecHashJoin(PlanState *pstate) * Need to postpone this outer tuple to a later batch. * Save it in the corresponding outer-batch file. */ + Assert(parallel_state == NULL); Assert(batchno > hashtable->curbatch); ExecHashJoinSaveTuple(ExecFetchSlotMinimalTuple(outerTupleSlot), hashvalue, &hashtable->outerBatchFile[batchno]); + /* Loop around, staying in HJ_NEED_NEW_OUTER state */ continue; } @@ -258,11 +414,23 @@ ExecHashJoin(PlanState *pstate) /* * Scan the selected hash bucket for matches to current outer */ - if (!ExecScanHashBucket(node, econtext)) + if (parallel) { - /* out of matches; check for possible outer-join fill */ - node->hj_JoinState = HJ_FILL_OUTER_TUPLE; - continue; + if (!ExecParallelScanHashBucket(node, econtext)) + { + /* out of matches; check for possible outer-join fill */ + node->hj_JoinState = HJ_FILL_OUTER_TUPLE; + continue; + } + } + else + { + if (!ExecScanHashBucket(node, econtext)) + { + /* out of matches; check for possible outer-join fill */ + node->hj_JoinState = HJ_FILL_OUTER_TUPLE; + continue; + } } /* @@ -362,8 +530,16 @@ ExecHashJoin(PlanState *pstate) /* * Try to advance to next batch. Done if there are no more. */ - if (!ExecHashJoinNewBatch(node)) - return NULL; /* end of join */ + if (parallel) + { + if (!ExecParallelHashJoinNewBatch(node)) + return NULL; /* end of parallel-aware join */ + } + else + { + if (!ExecHashJoinNewBatch(node)) + return NULL; /* end of parallel-oblivious join */ + } node->hj_JoinState = HJ_NEED_NEW_OUTER; break; @@ -374,6 +550,38 @@ ExecHashJoin(PlanState *pstate) } } +/* ---------------------------------------------------------------- + * ExecHashJoin + * + * Parallel-oblivious version. 
+ * ---------------------------------------------------------------- + */ +static TupleTableSlot * /* return: a tuple or NULL */ +ExecHashJoin(PlanState *pstate) +{ + /* + * On sufficiently smart compilers this should be inlined with the + * parallel-aware branches removed. + */ + return ExecHashJoinImpl(pstate, false); +} + +/* ---------------------------------------------------------------- + * ExecParallelHashJoin + * + * Parallel-aware version. + * ---------------------------------------------------------------- + */ +static TupleTableSlot * /* return: a tuple or NULL */ +ExecParallelHashJoin(PlanState *pstate) +{ + /* + * On sufficiently smart compilers this should be inlined with the + * parallel-oblivious branches removed. + */ + return ExecHashJoinImpl(pstate, true); +} + /* ---------------------------------------------------------------- * ExecInitHashJoin * @@ -400,6 +608,12 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags) hjstate = makeNode(HashJoinState); hjstate->js.ps.plan = (Plan *) node; hjstate->js.ps.state = estate; + + /* + * See ExecHashJoinInitializeDSM() and ExecHashJoinInitializeWorker() + * where this function may be replaced with a parallel version, if we + * managed to launch a parallel query. + */ hjstate->js.ps.ExecProcNode = ExecHashJoin; /* @@ -581,9 +795,9 @@ ExecEndHashJoin(HashJoinState *node) /* * ExecHashJoinOuterGetTuple * - * get the next outer tuple for hashjoin: either by - * executing the outer plan node in the first pass, or from - * the temp files for the hashjoin batches. + * get the next outer tuple for a parallel oblivious hashjoin: either by + * executing the outer plan node in the first pass, or from the temp + * files for the hashjoin batches. * * Returns a null slot if no more outer tuples (within the current batch). * @@ -661,6 +875,67 @@ ExecHashJoinOuterGetTuple(PlanState *outerNode, return NULL; } +/* + * ExecHashJoinOuterGetTuple variant for the parallel case. + */ +static TupleTableSlot * +ExecParallelHashJoinOuterGetTuple(PlanState *outerNode, + HashJoinState *hjstate, + uint32 *hashvalue) +{ + HashJoinTable hashtable = hjstate->hj_HashTable; + int curbatch = hashtable->curbatch; + TupleTableSlot *slot; + + /* + * In the Parallel Hash case we only run the outer plan directly for + * single-batch hash joins. Otherwise we have to go to batch files, even + * for batch 0. + */ + if (curbatch == 0 && hashtable->nbatch == 1) + { + slot = ExecProcNode(outerNode); + + while (!TupIsNull(slot)) + { + ExprContext *econtext = hjstate->js.ps.ps_ExprContext; + + econtext->ecxt_outertuple = slot; + if (ExecHashGetHashValue(hashtable, econtext, + hjstate->hj_OuterHashKeys, + true, /* outer tuple */ + HJ_FILL_OUTER(hjstate), + hashvalue)) + return slot; + + /* + * That tuple couldn't match because of a NULL, so discard it and + * continue with the next one. + */ + slot = ExecProcNode(outerNode); + } + } + else if (curbatch < hashtable->nbatch) + { + MinimalTuple tuple; + + tuple = sts_parallel_scan_next(hashtable->batches[curbatch].outer_tuples, + hashvalue); + if (tuple != NULL) + { + slot = ExecStoreMinimalTuple(tuple, + hjstate->hj_OuterTupleSlot, + false); + return slot; + } + else + ExecClearTuple(hjstate->hj_OuterTupleSlot); + } + + /* End of this batch */ + return NULL; +} + /* * ExecHashJoinNewBatch * switch to a new hashjoin batch @@ -803,6 +1078,135 @@ ExecHashJoinNewBatch(HashJoinState *hjstate) return true; } +/* + * Choose a batch to work on, and attach to it. 
Returns true if successful, + * false if there are no more batches. + */ +static bool +ExecParallelHashJoinNewBatch(HashJoinState *hjstate) +{ + HashJoinTable hashtable = hjstate->hj_HashTable; + int start_batchno; + int batchno; + + /* + * If we started up so late that the batch tracking array has been freed + * already by ExecHashTableDetach(), then we are finished. See also + * ExecParallelHashEnsureBatchAccessors(). + */ + if (hashtable->batches == NULL) + return false; + + /* + * If we were already attached to a batch, remember not to bother checking + * it again, and detach from it (possibly freeing the hash table if we are + * last to detach). + */ + if (hashtable->curbatch >= 0) + { + hashtable->batches[hashtable->curbatch].done = true; + ExecHashTableDetachBatch(hashtable); + } + + /* + * Search for a batch that isn't done. We use an atomic counter to start + * our search at a different batch in every participant when there are + * more batches than participants. + */ + batchno = start_batchno = + pg_atomic_fetch_add_u32(&hashtable->parallel_state->distributor, 1) % + hashtable->nbatch; + do + { + uint32 hashvalue; + MinimalTuple tuple; + TupleTableSlot *slot; + + if (!hashtable->batches[batchno].done) + { + SharedTuplestoreAccessor *inner_tuples; + Barrier *batch_barrier = + &hashtable->batches[batchno].shared->batch_barrier; + + switch (BarrierAttach(batch_barrier)) + { + case PHJ_BATCH_ELECTING: + + /* One backend allocates the hash table. */ + if (BarrierArriveAndWait(batch_barrier, + WAIT_EVENT_HASH_BATCH_ELECTING)) + ExecParallelHashTableAlloc(hashtable, batchno); + /* Fall through. */ + + case PHJ_BATCH_ALLOCATING: + /* Wait for allocation to complete. */ + BarrierArriveAndWait(batch_barrier, + WAIT_EVENT_HASH_BATCH_ALLOCATING); + /* Fall through. */ + + case PHJ_BATCH_LOADING: + /* Start (or join in) loading tuples. */ + ExecParallelHashTableSetCurrentBatch(hashtable, batchno); + inner_tuples = hashtable->batches[batchno].inner_tuples; + sts_begin_parallel_scan(inner_tuples); + while ((tuple = sts_parallel_scan_next(inner_tuples, + &hashvalue))) + { + slot = ExecStoreMinimalTuple(tuple, + hjstate->hj_HashTupleSlot, + false); + ExecParallelHashTableInsertCurrentBatch(hashtable, slot, + hashvalue); + } + sts_end_parallel_scan(inner_tuples); + BarrierArriveAndWait(batch_barrier, + WAIT_EVENT_HASH_BATCH_LOADING); + /* Fall through. */ + + case PHJ_BATCH_PROBING: + + /* + * This batch is ready to probe. Return control to + * caller. We stay attached to batch_barrier so that the + * hash table stays alive until everyone's finished + * probing it, but no participant is allowed to wait at + * this barrier again (or else a deadlock could occur). + * All attached participants must eventually call + * BarrierArriveAndDetach() so that the final phase + * PHJ_BATCH_DONE can be reached. + */ + ExecParallelHashTableSetCurrentBatch(hashtable, batchno); + sts_begin_parallel_scan(hashtable->batches[batchno].outer_tuples); + return true; + + case PHJ_BATCH_DONE: + + /* + * Already done. Detach and go around again (if any + * remain). + */ + BarrierDetach(batch_barrier); + + /* + * We didn't work on this batch, but we need to observe + * its size for EXPLAIN. 
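 * Each backend keeps its own copy of spacePeak, so every
 * participant records this batch's size even though it never
 * probed it; see ExecParallelHashUpdateSpacePeak in nodeHash.c.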
+ */ + ExecParallelHashUpdateSpacePeak(hashtable, batchno); + hashtable->batches[batchno].done = true; + hashtable->curbatch = -1; + break; + + default: + elog(ERROR, "unexpected batch phase %d", + BarrierPhase(batch_barrier)); + } + } + batchno = (batchno + 1) % hashtable->nbatch; + } while (batchno != start_batchno); + + return false; +} + /* * ExecHashJoinSaveTuple * save a tuple to a batch file. @@ -964,3 +1368,176 @@ ExecReScanHashJoin(HashJoinState *node) if (node->js.ps.lefttree->chgParam == NULL) ExecReScan(node->js.ps.lefttree); } + +void +ExecShutdownHashJoin(HashJoinState *node) +{ + if (node->hj_HashTable) + { + /* + * Detach from shared state before DSM memory goes away. This makes + * sure that we don't have any pointers into DSM memory by the time + * ExecEndHashJoin runs. + */ + ExecHashTableDetachBatch(node->hj_HashTable); + ExecHashTableDetach(node->hj_HashTable); + } +} + +static void +ExecParallelHashJoinPartitionOuter(HashJoinState *hjstate) +{ + PlanState *outerState = outerPlanState(hjstate); + ExprContext *econtext = hjstate->js.ps.ps_ExprContext; + HashJoinTable hashtable = hjstate->hj_HashTable; + TupleTableSlot *slot; + uint32 hashvalue; + int i; + + Assert(hjstate->hj_FirstOuterTupleSlot == NULL); + + /* Execute outer plan, writing all tuples to shared tuplestores. */ + for (;;) + { + slot = ExecProcNode(outerState); + if (TupIsNull(slot)) + break; + econtext->ecxt_outertuple = slot; + if (ExecHashGetHashValue(hashtable, econtext, + hjstate->hj_OuterHashKeys, + true, /* outer tuple */ + false, /* outer join, currently unsupported */ + &hashvalue)) + { + int batchno; + int bucketno; + + ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno, + &batchno); + sts_puttuple(hashtable->batches[batchno].outer_tuples, + &hashvalue, ExecFetchSlotMinimalTuple(slot)); + } + CHECK_FOR_INTERRUPTS(); + } + + /* Make sure all outer partitions are readable by any backend. */ + for (i = 0; i < hashtable->nbatch; ++i) + sts_end_write(hashtable->batches[i].outer_tuples); +} + +void +ExecHashJoinEstimate(HashJoinState *state, ParallelContext *pcxt) +{ + shm_toc_estimate_chunk(&pcxt->estimator, sizeof(ParallelHashJoinState)); + shm_toc_estimate_keys(&pcxt->estimator, 1); +} + +void +ExecHashJoinInitializeDSM(HashJoinState *state, ParallelContext *pcxt) +{ + int plan_node_id = state->js.ps.plan->plan_node_id; + HashState *hashNode; + ParallelHashJoinState *pstate; + + /* + * Disable shared hash table mode if we failed to create a real DSM + * segment, because that means that we don't have a DSA area to work with. + */ + if (pcxt->seg == NULL) + return; + + ExecSetExecProcNode(&state->js.ps, ExecParallelHashJoin); + + /* + * Set up the state needed to coordinate access to the shared hash + * table(s), using the plan node ID as the toc key. + */ + pstate = shm_toc_allocate(pcxt->toc, sizeof(ParallelHashJoinState)); + shm_toc_insert(pcxt->toc, plan_node_id, pstate); + + /* + * Set up the shared hash join state with no batches initially. + * ExecHashTableCreate() will prepare at least one later and set nbatch + * and space_allowed. 
+ */ + pstate->nbatch = 0; + pstate->space_allowed = 0; + pstate->batches = InvalidDsaPointer; + pstate->old_batches = InvalidDsaPointer; + pstate->nbuckets = 0; + pstate->growth = PHJ_GROWTH_OK; + pstate->chunk_work_queue = InvalidDsaPointer; + pg_atomic_init_u32(&pstate->distributor, 0); + pstate->nparticipants = pcxt->nworkers + 1; + pstate->total_tuples = 0; + LWLockInitialize(&pstate->lock, + LWTRANCHE_PARALLEL_HASH_JOIN); + BarrierInit(&pstate->build_barrier, 0); + BarrierInit(&pstate->grow_batches_barrier, 0); + BarrierInit(&pstate->grow_buckets_barrier, 0); + + /* Set up the space we'll use for shared temporary files. */ + SharedFileSetInit(&pstate->fileset, pcxt->seg); + + /* Initialize the shared state in the hash node. */ + hashNode = (HashState *) innerPlanState(state); + hashNode->parallel_state = pstate; +} + +/* ---------------------------------------------------------------- + * ExecHashJoinReInitializeDSM + * + * Reset shared state before beginning a fresh scan. + * ---------------------------------------------------------------- + */ +void +ExecHashJoinReInitializeDSM(HashJoinState *state, ParallelContext *cxt) +{ + int plan_node_id = state->js.ps.plan->plan_node_id; + ParallelHashJoinState *pstate = + shm_toc_lookup(cxt->toc, plan_node_id, false); + + /* + * It would be possible to reuse the shared hash table in single-batch + * cases by resetting and then fast-forwarding build_barrier to + * PHJ_BUILD_DONE and batch 0's batch_barrier to PHJ_BATCH_PROBING, but + * currently shared hash tables are already freed by now (by the last + * participant to detach from the batch). We could consider keeping it + * around for single-batch joins. We'd also need to adjust + * finalize_plan() so that it doesn't record a dummy dependency for + * Parallel Hash nodes, preventing the rescan optimization. For now we + * don't try. + */ + + /* Detach, freeing any remaining shared memory. */ + if (state->hj_HashTable != NULL) + { + ExecHashTableDetachBatch(state->hj_HashTable); + ExecHashTableDetach(state->hj_HashTable); + } + + /* Clear any shared batch files. */ + SharedFileSetDeleteAll(&pstate->fileset); + + /* Reset build_barrier to PHJ_BUILD_ELECTING so we can go around again. */ + BarrierInit(&pstate->build_barrier, 0); +} + +void +ExecHashJoinInitializeWorker(HashJoinState *state, + ParallelWorkerContext *pwcxt) +{ + HashState *hashNode; + int plan_node_id = state->js.ps.plan->plan_node_id; + ParallelHashJoinState *pstate = + shm_toc_lookup(pwcxt->toc, plan_node_id, false); + + /* Attach to the space for shared temporary files. */ + SharedFileSetAttach(&pstate->fileset, pwcxt->seg); + + /* Attach to the shared state in the hash node. 
*/ + hashNode = (HashState *) innerPlanState(state); + hashNode->parallel_state = pstate; + + ExecSetExecProcNode(&state->js.ps, ExecParallelHashJoin); +} diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index b1515dd8e1..84d717102d 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -1057,6 +1057,7 @@ _copyHash(const Hash *from) COPY_SCALAR_FIELD(skewTable); COPY_SCALAR_FIELD(skewColumn); COPY_SCALAR_FIELD(skewInherit); + COPY_SCALAR_FIELD(rows_total); return newnode; } diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index b59a5219a7..e468d7cc41 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -927,6 +927,7 @@ _outHash(StringInfo str, const Hash *node) WRITE_OID_FIELD(skewTable); WRITE_INT_FIELD(skewColumn); WRITE_BOOL_FIELD(skewInherit); + WRITE_FLOAT_FIELD(rows_total, "%.0f"); } static void diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 0d17ae89b0..1133c70a1c 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -2213,6 +2213,7 @@ _readHash(void) READ_OID_FIELD(skewTable); READ_INT_FIELD(skewColumn); READ_BOOL_FIELD(skewInherit); + READ_FLOAT_FIELD(rows_total); READ_DONE(); } diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 877827dcb5..c3daacd3ea 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -129,6 +129,7 @@ bool enable_hashjoin = true; bool enable_gathermerge = true; bool enable_partition_wise_join = false; bool enable_parallel_append = true; +bool enable_parallel_hash = true; typedef struct { @@ -3130,16 +3131,19 @@ initial_cost_hashjoin(PlannerInfo *root, JoinCostWorkspace *workspace, JoinType jointype, List *hashclauses, Path *outer_path, Path *inner_path, - JoinPathExtraData *extra) + JoinPathExtraData *extra, + bool parallel_hash) { Cost startup_cost = 0; Cost run_cost = 0; double outer_path_rows = outer_path->rows; double inner_path_rows = inner_path->rows; + double inner_path_rows_total = inner_path_rows; int num_hashclauses = list_length(hashclauses); int numbuckets; int numbatches; int num_skew_mcvs; + size_t space_allowed; /* unused */ /* cost of source data */ startup_cost += outer_path->startup_cost; @@ -3160,6 +3164,15 @@ initial_cost_hashjoin(PlannerInfo *root, JoinCostWorkspace *workspace, * inner_path_rows; run_cost += cpu_operator_cost * num_hashclauses * outer_path_rows; + /* + * If this is a parallel hash build, then the value we have for + * inner_rows_total currently refers only to the rows returned by each + * participant. For shared hash table size estimation, we need the total + * number, so we need to undo the division. + */ + if (parallel_hash) + inner_path_rows_total *= get_parallel_divisor(inner_path); + /* * Get hash table size that executor would use for inner relation. * @@ -3170,9 +3183,12 @@ initial_cost_hashjoin(PlannerInfo *root, JoinCostWorkspace *workspace, * XXX at some point it might be interesting to try to account for skew * optimization in the cost estimate, but for now, we don't. 
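 *
 * A rough worked illustration of the inner_rows_total adjustment above
 * (numbers invented; assumes parallel_leader_participation is on): with 2
 * planned workers, get_parallel_divisor() returns 2 + (1.0 - 0.3 * 2) =
 * 2.4, so a per-participant estimate of 10,000 inner rows is scaled back
 * up to roughly 24,000 rows before choosing the shared hash table's size.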
 */
- ExecChooseHashTableSize(inner_path_rows,
+ ExecChooseHashTableSize(inner_path_rows_total,
 inner_path->pathtarget->width,
 true, /* useskew */
+ parallel_hash, /* try_combined_work_mem */
+ outer_path->parallel_workers,
+ &space_allowed,
 &numbuckets,
 &numbatches,
 &num_skew_mcvs);
@@ -3204,6 +3220,7 @@ initial_cost_hashjoin(PlannerInfo *root, JoinCostWorkspace *workspace,
 workspace->run_cost = run_cost;
 workspace->numbuckets = numbuckets;
 workspace->numbatches = numbatches;
+ workspace->inner_rows_total = inner_path_rows_total;
 }
 
 /*
@@ -3226,6 +3243,7 @@ final_cost_hashjoin(PlannerInfo *root, HashPath *path,
 Path *inner_path = path->jpath.innerjoinpath;
 double outer_path_rows = outer_path->rows;
 double inner_path_rows = inner_path->rows;
+ double inner_path_rows_total = workspace->inner_rows_total;
 List *hashclauses = path->path_hashclauses;
 Cost startup_cost = workspace->startup_cost;
 Cost run_cost = workspace->run_cost;
@@ -3266,6 +3284,9 @@ final_cost_hashjoin(PlannerInfo *root, HashPath *path,
 /* mark the path with estimated # of batches */
 path->num_batches = numbatches;
 
+ /* store the total number of tuples (sum of partial row estimates) */
+ path->inner_rows_total = inner_path_rows_total;
+
 /* and compute the number of "virtual" buckets in the whole join */
 virtualbuckets = (double) numbuckets * (double) numbatches;
diff --git a/src/backend/optimizer/path/joinpath.c b/src/backend/optimizer/path/joinpath.c
index 02a630278f..e774130ac8 100644
--- a/src/backend/optimizer/path/joinpath.c
+++ b/src/backend/optimizer/path/joinpath.c
@@ -747,7 +747,7 @@ try_hashjoin_path(PlannerInfo *root,
 * never have any output pathkeys, per comments in create_hashjoin_path.
 */
 initial_cost_hashjoin(root, &workspace, jointype, hashclauses,
- outer_path, inner_path, extra);
+ outer_path, inner_path, extra, false);
 
 if (add_path_precheck(joinrel,
 workspace.startup_cost, workspace.total_cost,
@@ -761,6 +761,7 @@
 extra,
 outer_path,
 inner_path,
+ false, /* parallel_hash */
 extra->restrictlist,
 required_outer,
 hashclauses));
@@ -776,6 +777,10 @@
 * try_partial_hashjoin_path
 * Consider a partial hashjoin join path; if it appears useful, push it into
 * the joinrel's partial_pathlist via add_partial_path().
+ * The outer side is partial. If parallel_hash is true, then the inner path
+ * must be partial and will be run in parallel to create one or more shared
+ * hash tables; otherwise the inner path must be complete and a copy of it
+ * is run in every process to create separate identical private hash tables.
 */
 static void
 try_partial_hashjoin_path(PlannerInfo *root,
@@ -784,7 +789,8 @@
 Path *inner_path,
 List *hashclauses,
 JoinType jointype,
- JoinPathExtraData *extra)
+ JoinPathExtraData *extra,
+ bool parallel_hash)
 {
 JoinCostWorkspace workspace;
 
@@ -808,7 +814,7 @@ try_partial_hashjoin_path(PlannerInfo *root,
 * cost. Bail out right away if it looks terrible.
 */
 initial_cost_hashjoin(root, &workspace, jointype, hashclauses,
- outer_path, inner_path, extra);
+ outer_path, inner_path, extra, parallel_hash);
 
 if (!add_partial_path_precheck(joinrel, workspace.total_cost, NIL))
 return;
@@ -821,6 +827,7 @@
 extra,
 outer_path,
 inner_path,
+ parallel_hash,
 extra->restrictlist,
 NULL,
 hashclauses));
@@ -1839,6 +1846,10 @@ hash_inner_and_outer(PlannerInfo *root,
 * able to properly guarantee uniqueness.
Similarly, we can't handle * JOIN_FULL and JOIN_RIGHT, because they can produce false null * extended rows. Also, the resulting path must not be parameterized. + * We would be able to support JOIN_FULL and JOIN_RIGHT for Parallel + * Hash, since in that case we're back to a single hash table with a + * single set of match bits for each batch, but that will require + * figuring out a deadlock-free way to wait for the probe to finish. */ if (joinrel->consider_parallel && save_jointype != JOIN_UNIQUE_OUTER && @@ -1848,11 +1859,27 @@ hash_inner_and_outer(PlannerInfo *root, bms_is_empty(joinrel->lateral_relids)) { Path *cheapest_partial_outer; + Path *cheapest_partial_inner = NULL; Path *cheapest_safe_inner = NULL; cheapest_partial_outer = (Path *) linitial(outerrel->partial_pathlist); + /* + * Can we use a partial inner plan too, so that we can build a + * shared hash table in parallel? + */ + if (innerrel->partial_pathlist != NIL && enable_parallel_hash) + { + cheapest_partial_inner = + (Path *) linitial(innerrel->partial_pathlist); + try_partial_hashjoin_path(root, joinrel, + cheapest_partial_outer, + cheapest_partial_inner, + hashclauses, jointype, extra, + true /* parallel_hash */ ); + } + /* * Normally, given that the joinrel is parallel-safe, the cheapest * total inner path will also be parallel-safe, but if not, we'll @@ -1870,7 +1897,8 @@ hash_inner_and_outer(PlannerInfo *root, try_partial_hashjoin_path(root, joinrel, cheapest_partial_outer, cheapest_safe_inner, - hashclauses, jointype, extra); + hashclauses, jointype, extra, + false /* parallel_hash */ ); } } } diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index f6c83d0477..1a0d3a885f 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -4192,6 +4192,17 @@ create_hashjoin_plan(PlannerInfo *root, copy_plan_costsize(&hash_plan->plan, inner_plan); hash_plan->plan.startup_cost = hash_plan->plan.total_cost; + /* + * If parallel-aware, the executor will also need an estimate of the total + * number of rows expected from all participants so that it can size the + * shared hash table. 
+ */ + if (best_path->jpath.path.parallel_aware) + { + hash_plan->plan.parallel_aware = true; + hash_plan->rows_total = best_path->inner_rows_total; + } + join_plan = make_hashjoin(tlist, joinclauses, otherclauses, diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 54126fbb6a..2aee156ad3 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -2278,6 +2278,7 @@ create_mergejoin_path(PlannerInfo *root, * 'extra' contains various information about the join * 'outer_path' is the cheapest outer path * 'inner_path' is the cheapest inner path + * 'parallel_hash' to select Parallel Hash of inner path (shared hash table) * 'restrict_clauses' are the RestrictInfo nodes to apply at the join * 'required_outer' is the set of required outer rels * 'hashclauses' are the RestrictInfo nodes to use as hash clauses @@ -2291,6 +2292,7 @@ create_hashjoin_path(PlannerInfo *root, JoinPathExtraData *extra, Path *outer_path, Path *inner_path, + bool parallel_hash, List *restrict_clauses, Relids required_outer, List *hashclauses) @@ -2308,7 +2310,8 @@ create_hashjoin_path(PlannerInfo *root, extra->sjinfo, required_outer, &restrict_clauses); - pathnode->jpath.path.parallel_aware = false; + pathnode->jpath.path.parallel_aware = + joinrel->consider_parallel && parallel_hash; pathnode->jpath.path.parallel_safe = joinrel->consider_parallel && outer_path->parallel_safe && inner_path->parallel_safe; /* This is a foolish way to estimate parallel_workers, but for now... */ diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index 5c256ff8ab..b502c1bb9b 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -3586,6 +3586,51 @@ pgstat_get_wait_ipc(WaitEventIPC w) case WAIT_EVENT_EXECUTE_GATHER: event_name = "ExecuteGather"; break; + case WAIT_EVENT_HASH_BATCH_ALLOCATING: + event_name = "Hash/Batch/Allocating"; + break; + case WAIT_EVENT_HASH_BATCH_ELECTING: + event_name = "Hash/Batch/Electing"; + break; + case WAIT_EVENT_HASH_BATCH_LOADING: + event_name = "Hash/Batch/Loading"; + break; + case WAIT_EVENT_HASH_BUILD_ALLOCATING: + event_name = "Hash/Build/Allocating"; + break; + case WAIT_EVENT_HASH_BUILD_ELECTING: + event_name = "Hash/Build/Electing"; + break; + case WAIT_EVENT_HASH_BUILD_HASHING_INNER: + event_name = "Hash/Build/HashingInner"; + break; + case WAIT_EVENT_HASH_BUILD_HASHING_OUTER: + event_name = "Hash/Build/HashingOuter"; + break; + case WAIT_EVENT_HASH_GROW_BATCHES_ALLOCATING: + event_name = "Hash/GrowBatches/Allocating"; + break; + case WAIT_EVENT_HASH_GROW_BATCHES_DECIDING: + event_name = "Hash/GrowBatches/Deciding"; + break; + case WAIT_EVENT_HASH_GROW_BATCHES_ELECTING: + event_name = "Hash/GrowBatches/Electing"; + break; + case WAIT_EVENT_HASH_GROW_BATCHES_FINISHING: + event_name = "Hash/GrowBatches/Finishing"; + break; + case WAIT_EVENT_HASH_GROW_BATCHES_REPARTITIONING: + event_name = "Hash/GrowBatches/Repartitioning"; + break; + case WAIT_EVENT_HASH_GROW_BUCKETS_ALLOCATING: + event_name = "Hash/GrowBuckets/Allocating"; + break; + case WAIT_EVENT_HASH_GROW_BUCKETS_ELECTING: + event_name = "Hash/GrowBuckets/Electing"; + break; + case WAIT_EVENT_HASH_GROW_BUCKETS_REINSERTING: + event_name = "Hash/GrowBuckets/Reinserting"; + break; case WAIT_EVENT_LOGICAL_SYNC_DATA: event_name = "LogicalSyncData"; break; diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 0f7a96d85c..e32901d506 100644 --- a/src/backend/utils/misc/guc.c +++ 
b/src/backend/utils/misc/guc.c
@@ -929,7 +929,15 @@ static struct config_bool ConfigureNamesBool[] =
 true,
 NULL, NULL, NULL
 },
-
+ {
+ {"enable_parallel_hash", PGC_USERSET, QUERY_TUNING_METHOD,
+ gettext_noop("Enables the planner's use of parallel hash plans."),
+ NULL
+ },
+ &enable_parallel_hash,
+ true,
+ NULL, NULL, NULL
+ },
 {
 {"geqo", PGC_USERSET, QUERY_TUNING_GEQO,
 gettext_noop("Enables genetic query optimization."),
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 842cf3601a..69f40f04b0 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -301,6 +301,7 @@
 #enable_sort = on
 #enable_tidscan = on
 #enable_partition_wise_join = off
+#enable_parallel_hash = on
 
 # - Planner Cost Constants -
 
diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h
index 82acadf85b..d8c82d4e7c 100644
--- a/src/include/executor/hashjoin.h
+++ b/src/include/executor/hashjoin.h
@@ -15,7 +15,10 @@
 #define HASHJOIN_H
 
 #include "nodes/execnodes.h"
+#include "port/atomics.h"
+#include "storage/barrier.h"
 #include "storage/buffile.h"
+#include "storage/lwlock.h"
 
 /* ----------------------------------------------------------------
 * hash-join hash table structures
@@ -63,7 +66,12 @@
 typedef struct HashJoinTupleData
 {
- struct HashJoinTupleData *next; /* link to next tuple in same bucket */
+ /* link to next tuple in same bucket */
+ union
+ {
+ struct HashJoinTupleData *unshared;
+ dsa_pointer shared;
+ } next;
 uint32 hashvalue; /* tuple's hash code */
 /* Tuple data, in MinimalTuple format, follows on a MAXALIGN boundary */
 } HashJoinTupleData;
@@ -112,8 +120,12 @@ typedef struct HashMemoryChunkData
 size_t maxlen; /* size of the buffer holding the tuples */
 size_t used; /* number of buffer bytes already used */
 
- struct HashMemoryChunkData *next; /* pointer to the next chunk (linked
- * list) */
+ /* pointer to the next chunk (linked list) */
+ union
+ {
+ struct HashMemoryChunkData *unshared;
+ dsa_pointer shared;
+ } next;
 
 char data[FLEXIBLE_ARRAY_MEMBER]; /* buffer allocated at the end */
 } HashMemoryChunkData;
@@ -121,8 +133,148 @@ typedef struct HashMemoryChunkData
 typedef struct HashMemoryChunkData *HashMemoryChunk;
 
 #define HASH_CHUNK_SIZE (32 * 1024L)
+#define HASH_CHUNK_HEADER_SIZE (offsetof(HashMemoryChunkData, data))
 #define HASH_CHUNK_THRESHOLD (HASH_CHUNK_SIZE / 4)
 
+/*
+ * For each batch of a Parallel Hash Join, we have a ParallelHashJoinBatch
+ * object in shared memory to coordinate access to it. Since they are
+ * followed by variable-sized objects, they are arranged in contiguous memory
+ * but not accessed directly as an array.
+ */
+typedef struct ParallelHashJoinBatch
+{
+ dsa_pointer buckets; /* array of hash table buckets */
+ Barrier batch_barrier; /* synchronization for joining this batch */
+
+ dsa_pointer chunks; /* chunks of tuples loaded */
+ size_t size; /* size of buckets + chunks in memory */
+ size_t estimated_size; /* size of buckets + chunks while writing */
+ size_t ntuples; /* number of tuples loaded */
+ size_t old_ntuples; /* number of tuples before repartitioning */
+ bool space_exhausted;
+
+ /*
+ * Variable-sized SharedTuplestore objects follow this struct in memory.
+ * See the accessor macros below.
+ */
+} ParallelHashJoinBatch;
+
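+/*
+ * Illustrative layout sketch (orientation only; the accessor macros below
+ * do the real pointer arithmetic). With nparticipants = np, the batches
+ * array in DSA memory is laid out as repeating slices of
+ * EstimateParallelHashJoinBatch(hashtable) bytes:
+ *
+ *   base -> +--------------------------+ <- NthParallelHashJoinBatch(base, 0)
+ *           | ParallelHashJoinBatch 0  |
+ *           | inner SharedTuplestore   | <- ParallelHashJoinBatchInner(batch)
+ *           | outer SharedTuplestore   | <- ParallelHashJoinBatchOuter(batch, np)
+ *           +--------------------------+ <- NthParallelHashJoinBatch(base, 1)
+ *           | ParallelHashJoinBatch 1  |
+ *           | ...                      |
+ */
+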
+/* Accessor for inner batch tuplestore following a ParallelHashJoinBatch. */
+#define ParallelHashJoinBatchInner(batch) \
+ ((SharedTuplestore *) \
+ ((char *) (batch) + MAXALIGN(sizeof(ParallelHashJoinBatch))))
+
+/* Accessor for outer batch tuplestore following a ParallelHashJoinBatch. */
+#define ParallelHashJoinBatchOuter(batch, nparticipants) \
+ ((SharedTuplestore *) \
+ ((char *) ParallelHashJoinBatchInner(batch) + \
+ MAXALIGN(sts_estimate(nparticipants))))
+
+/* Total size of a ParallelHashJoinBatch and tuplestores. */
+#define EstimateParallelHashJoinBatch(hashtable) \
+ (MAXALIGN(sizeof(ParallelHashJoinBatch)) + \
+ MAXALIGN(sts_estimate((hashtable)->parallel_state->nparticipants)) * 2)
+
+/* Accessor for the nth ParallelHashJoinBatch given the base. */
+#define NthParallelHashJoinBatch(base, n) \
+ ((ParallelHashJoinBatch *) \
+ ((char *) (base) + \
+ EstimateParallelHashJoinBatch(hashtable) * (n)))
+
+/*
+ * Each backend requires a small amount of per-batch state to interact with
+ * each ParallelHashJoinBatch.
+ */
+typedef struct ParallelHashJoinBatchAccessor
+{
+ ParallelHashJoinBatch *shared; /* pointer to shared state */
+
+ /* Per-backend partial counters to reduce contention. */
+ size_t preallocated; /* pre-allocated space for this backend */
+ size_t ntuples; /* number of tuples */
+ size_t size; /* size of partition in memory */
+ size_t estimated_size; /* size of partition on disk */
+ size_t old_ntuples; /* how many tuples before repartitioning? */
+ bool at_least_one_chunk; /* has this backend allocated a chunk? */
+
+ bool done; /* flag to remember that a batch is done */
+ SharedTuplestoreAccessor *inner_tuples;
+ SharedTuplestoreAccessor *outer_tuples;
+} ParallelHashJoinBatchAccessor;
+
+/*
+ * While hashing the inner relation, any participant might determine that it's
+ * time to increase the number of buckets to reduce the load factor or batches
+ * to reduce the memory size. This is indicated by setting the growth flag to
+ * these values.
+ */
+typedef enum ParallelHashGrowth
+{
+ /* The current dimensions are sufficient. */
+ PHJ_GROWTH_OK,
+ /* The load factor is too high, so we need to add buckets. */
+ PHJ_GROWTH_NEED_MORE_BUCKETS,
+ /* The memory budget would be exhausted, so we need to repartition. */
+ PHJ_GROWTH_NEED_MORE_BATCHES,
+ /* Repartitioning didn't help last time, so don't try to do that again. */
+ PHJ_GROWTH_DISABLED
+} ParallelHashGrowth;
+
+/*
+ * The shared state used to coordinate a Parallel Hash Join. This is stored
+ * in the DSM segment.
+ */
+typedef struct ParallelHashJoinState
+{
+ dsa_pointer batches; /* array of ParallelHashJoinBatch */
+ dsa_pointer old_batches; /* previous generation during repartition */
+ int nbatch; /* number of batches now */
+ int old_nbatch; /* previous number of batches */
+ int nbuckets; /* number of buckets */
+ ParallelHashGrowth growth; /* control batch/bucket growth */
+ dsa_pointer chunk_work_queue; /* chunk work queue */
+ int nparticipants;
+ size_t space_allowed;
+ size_t total_tuples; /* total number of inner tuples */
+ LWLock lock; /* lock protecting the above */
+
+ Barrier build_barrier; /* synchronization for the build phases */
+ Barrier grow_batches_barrier;
+ Barrier grow_buckets_barrier;
+ pg_atomic_uint32 distributor; /* counter for load balancing */
+
+ SharedFileSet fileset; /* space for shared temporary files */
+} ParallelHashJoinState;
+
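+/*
+ * Note on phase numbers, with a worked example (illustrative only): the
+ * grow_batches_barrier and grow_buckets_barrier are reused each time the
+ * table is resized, so BarrierPhase() keeps counting upward. The
+ * PHJ_GROW_*_PHASE macros below map a raw phase back into the cycle; on a
+ * second trip through the grow-batches cycle BarrierPhase() might return
+ * 7, and PHJ_GROW_BATCHES_PHASE(7) == 7 % 5 == 2, i.e. REPARTITIONING.
+ */
+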
+/* The phases for building batches, used by build_barrier. */
+#define PHJ_BUILD_ELECTING 0
+#define PHJ_BUILD_ALLOCATING 1
+#define PHJ_BUILD_HASHING_INNER 2
+#define PHJ_BUILD_HASHING_OUTER 3
+#define PHJ_BUILD_DONE 4
+
+/* The phases for probing each batch, used by batch_barrier. */
+#define PHJ_BATCH_ELECTING 0
+#define PHJ_BATCH_ALLOCATING 1
+#define PHJ_BATCH_LOADING 2
+#define PHJ_BATCH_PROBING 3
+#define PHJ_BATCH_DONE 4
+
+/* The phases of batch growth while hashing, for grow_batches_barrier. */
+#define PHJ_GROW_BATCHES_ELECTING 0
+#define PHJ_GROW_BATCHES_ALLOCATING 1
+#define PHJ_GROW_BATCHES_REPARTITIONING 2
+#define PHJ_GROW_BATCHES_DECIDING 3
+#define PHJ_GROW_BATCHES_FINISHING 4
+#define PHJ_GROW_BATCHES_PHASE(n) ((n) % 5) /* circular phases */
+
+/* The phases of bucket growth while hashing, for grow_buckets_barrier. */
+#define PHJ_GROW_BUCKETS_ELECTING 0
+#define PHJ_GROW_BUCKETS_ALLOCATING 1
+#define PHJ_GROW_BUCKETS_REINSERTING 2
+#define PHJ_GROW_BUCKETS_PHASE(n) ((n) % 3) /* circular phases */
+
 typedef struct HashJoinTableData
 {
 int nbuckets; /* # buckets in the in-memory hash table */
@@ -133,8 +285,13 @@ typedef struct HashJoinTableData
 int log2_nbuckets_optimal; /* log2(nbuckets_optimal) */
 
 /* buckets[i] is head of list of tuples in i'th in-memory bucket */
- struct HashJoinTupleData **buckets;
- /* buckets array is per-batch storage, as are all the tuples */
+ union
+ {
+ /* unshared array is per-batch storage, as are all the tuples */
+ struct HashJoinTupleData **unshared;
+ /* shared array is per-query DSA area, as are all the tuples */
+ dsa_pointer_atomic *shared;
+ } buckets;
 
 bool keepNulls; /* true to store unmatchable NULL tuples */
 
@@ -153,6 +310,7 @@
 bool growEnabled; /* flag to shut off nbatch increases */
 
 double totalTuples; /* # tuples obtained from inner plan */
+ double partialTuples; /* # tuples obtained from inner plan by me */
 double skewTuples; /* # tuples inserted into skew buckets */
 
 /*
@@ -185,6 +343,13 @@
 /* used for dense allocation of tuples (into linked chunks) */
 HashMemoryChunk chunks; /* one list for the whole batch */
+
+ /* Shared and private state for Parallel Hash. */
+ HashMemoryChunk current_chunk; /* this backend's current chunk */
+ dsa_area *area; /* DSA area to allocate memory from */
+ ParallelHashJoinState *parallel_state;
+ ParallelHashJoinBatchAccessor *batches;
+ dsa_pointer current_chunk_shared;
 } HashJoinTableData;
 
 #endif /* HASHJOIN_H */
diff --git a/src/include/executor/nodeHash.h b/src/include/executor/nodeHash.h
index 0974f1edc2..84c166b395 100644
--- a/src/include/executor/nodeHash.h
+++ b/src/include/executor/nodeHash.h
@@ -17,17 +17,33 @@
 #include "access/parallel.h"
 #include "nodes/execnodes.h"
 
+struct SharedHashJoinBatch;
+
 extern HashState *ExecInitHash(Hash *node, EState *estate, int eflags);
 extern Node *MultiExecHash(HashState *node);
 extern void ExecEndHash(HashState *node);
 extern void ExecReScanHash(HashState *node);
 
-extern HashJoinTable ExecHashTableCreate(Hash *node, List *hashOperators,
+extern HashJoinTable ExecHashTableCreate(HashState *state, List *hashOperators,
 bool keepNulls);
+extern void ExecParallelHashTableAlloc(HashJoinTable hashtable,
+ int batchno);
 extern void ExecHashTableDestroy(HashJoinTable hashtable);
+extern void ExecHashTableDetach(HashJoinTable hashtable);
+extern void ExecHashTableDetachBatch(HashJoinTable hashtable);
+extern void ExecParallelHashTableSetCurrentBatch(HashJoinTable hashtable,
+ int batchno);
+extern void ExecParallelHashUpdateSpacePeak(HashJoinTable hashtable, int batchno);
+
 extern void ExecHashTableInsert(HashJoinTable hashtable,
 TupleTableSlot *slot,
 uint32 hashvalue);
+extern void ExecParallelHashTableInsert(HashJoinTable hashtable,
+ TupleTableSlot *slot,
+ uint32 hashvalue);
+extern void ExecParallelHashTableInsertCurrentBatch(HashJoinTable hashtable,
+ TupleTableSlot *slot,
+ uint32 hashvalue);
 extern bool ExecHashGetHashValue(HashJoinTable hashtable,
 ExprContext *econtext,
 List *hashkeys,
@@ -39,12 +55,16 @@ extern void ExecHashGetBucketAndBatch(HashJoinTable hashtable,
 int *bucketno,
 int *batchno);
 extern bool ExecScanHashBucket(HashJoinState *hjstate, ExprContext *econtext);
+extern bool ExecParallelScanHashBucket(HashJoinState *hjstate, ExprContext *econtext);
 extern void ExecPrepHashTableForUnmatched(HashJoinState *hjstate);
 extern bool ExecScanHashTableForUnmatched(HashJoinState *hjstate,
 ExprContext *econtext);
 extern void ExecHashTableReset(HashJoinTable hashtable);
 extern void ExecHashTableResetMatchFlags(HashJoinTable hashtable);
 extern void ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
+ bool try_combined_work_mem,
+ int parallel_workers,
+ size_t *space_allowed,
 int *numbuckets,
 int *numbatches,
 int *num_skew_mcvs);
@@ -55,6 +75,6 @@ extern void ExecHashInitializeWorker(HashState *node, ParallelWorkerContext *pwc
 extern void ExecHashRetrieveInstrumentation(HashState *node);
 extern void ExecShutdownHash(HashState *node);
 extern void ExecHashGetInstrumentation(HashInstrumentation *instrument,
- HashJoinTable hashtable);
+ HashJoinTable hashtable);
 
 #endif /* NODEHASH_H */
diff --git a/src/include/executor/nodeHashjoin.h b/src/include/executor/nodeHashjoin.h
index 7469bfbf60..8469085d7e 100644
--- a/src/include/executor/nodeHashjoin.h
+++ b/src/include/executor/nodeHashjoin.h
@@ -20,6 +20,12 @@ extern HashJoinState *ExecInitHashJoin(HashJoin *node, EState *estate,
 int eflags);
 extern void ExecEndHashJoin(HashJoinState *node);
 extern void ExecReScanHashJoin(HashJoinState *node);
+extern void ExecShutdownHashJoin(HashJoinState *node);
+extern void ExecHashJoinEstimate(HashJoinState *state, ParallelContext *pcxt);
+extern void
ExecHashJoinInitializeDSM(HashJoinState *state, ParallelContext *pcxt); +extern void ExecHashJoinReInitializeDSM(HashJoinState *state, ParallelContext *pcxt); +extern void ExecHashJoinInitializeWorker(HashJoinState *state, + ParallelWorkerContext *pwcxt); extern void ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue, BufFile **fileptr); diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 1a35c5c9ad..44d8c47d2c 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -25,6 +25,7 @@ #include "utils/hsearch.h" #include "utils/queryenvironment.h" #include "utils/reltrigger.h" +#include "utils/sharedtuplestore.h" #include "utils/sortsupport.h" #include "utils/tuplestore.h" #include "utils/tuplesort.h" @@ -43,6 +44,8 @@ struct ExprState; /* forward references in this file */ struct ExprContext; struct ExprEvalStep; /* avoid including execExpr.h everywhere */ +struct ParallelHashJoinState; + typedef Datum (*ExprStateEvalFunc) (struct ExprState *expression, struct ExprContext *econtext, bool *isNull); @@ -2026,6 +2029,9 @@ typedef struct HashState SharedHashInfo *shared_info; /* one entry per worker */ HashInstrumentation *hinstrument; /* this worker's entry */ + + /* Parallel hash state. */ + struct ParallelHashJoinState *parallel_state; } HashState; /* ---------------- diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h index 02fb366680..d763da647b 100644 --- a/src/include/nodes/plannodes.h +++ b/src/include/nodes/plannodes.h @@ -880,6 +880,7 @@ typedef struct Hash AttrNumber skewColumn; /* outer join key's column #, or zero */ bool skewInherit; /* is outer join rel an inheritance tree? */ /* all other info is in the parent HashJoin node */ + double rows_total; /* estimate total rows if parallel_aware */ } Hash; /* ---------------- diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 1108b6a0ea..3b9d303ce4 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -1464,6 +1464,7 @@ typedef struct HashPath JoinPath jpath; List *path_hashclauses; /* join clauses used for hashing */ int num_batches; /* number of batches expected */ + double inner_rows_total; /* total inner rows expected */ } HashPath; /* @@ -2315,6 +2316,7 @@ typedef struct JoinCostWorkspace /* private for cost_hashjoin code */ int numbuckets; int numbatches; + double inner_rows_total; } JoinCostWorkspace; #endif /* RELATION_H */ diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h index 5a1fbf97c3..27afc2eaeb 100644 --- a/src/include/optimizer/cost.h +++ b/src/include/optimizer/cost.h @@ -69,6 +69,7 @@ extern bool enable_hashjoin; extern bool enable_gathermerge; extern bool enable_partition_wise_join; extern bool enable_parallel_append; +extern bool enable_parallel_hash; extern int constraint_exclusion; extern double clamp_row_est(double nrows); @@ -153,7 +154,8 @@ extern void initial_cost_hashjoin(PlannerInfo *root, JoinType jointype, List *hashclauses, Path *outer_path, Path *inner_path, - JoinPathExtraData *extra); + JoinPathExtraData *extra, + bool parallel_hash); extern void final_cost_hashjoin(PlannerInfo *root, HashPath *path, JoinCostWorkspace *workspace, JoinPathExtraData *extra); diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h index 99f65b44f2..3ef12b323b 100644 --- a/src/include/optimizer/pathnode.h +++ b/src/include/optimizer/pathnode.h @@ -153,6 +153,7 @@ extern HashPath *create_hashjoin_path(PlannerInfo *root, JoinPathExtraData 
*extra, Path *outer_path, Path *inner_path, + bool parallel_hash, List *restrict_clauses, Relids required_outer, List *hashclauses); diff --git a/src/include/pgstat.h b/src/include/pgstat.h index 089b7c3a10..58f3a19bc6 100644 --- a/src/include/pgstat.h +++ b/src/include/pgstat.h @@ -803,6 +803,21 @@ typedef enum WAIT_EVENT_BGWORKER_STARTUP, WAIT_EVENT_BTREE_PAGE, WAIT_EVENT_EXECUTE_GATHER, + WAIT_EVENT_HASH_BATCH_ALLOCATING, + WAIT_EVENT_HASH_BATCH_ELECTING, + WAIT_EVENT_HASH_BATCH_LOADING, + WAIT_EVENT_HASH_BUILD_ALLOCATING, + WAIT_EVENT_HASH_BUILD_ELECTING, + WAIT_EVENT_HASH_BUILD_HASHING_INNER, + WAIT_EVENT_HASH_BUILD_HASHING_OUTER, + WAIT_EVENT_HASH_GROW_BATCHES_ELECTING, + WAIT_EVENT_HASH_GROW_BATCHES_FINISHING, + WAIT_EVENT_HASH_GROW_BATCHES_REPARTITIONING, + WAIT_EVENT_HASH_GROW_BATCHES_ALLOCATING, + WAIT_EVENT_HASH_GROW_BATCHES_DECIDING, + WAIT_EVENT_HASH_GROW_BUCKETS_ELECTING, + WAIT_EVENT_HASH_GROW_BUCKETS_REINSERTING, + WAIT_EVENT_HASH_GROW_BUCKETS_ALLOCATING, WAIT_EVENT_LOGICAL_SYNC_DATA, WAIT_EVENT_LOGICAL_SYNC_STATE_CHANGE, WAIT_EVENT_MQ_INTERNAL, diff --git a/src/include/storage/lwlock.h b/src/include/storage/lwlock.h index a347ee4d7d..97e4a0bbbd 100644 --- a/src/include/storage/lwlock.h +++ b/src/include/storage/lwlock.h @@ -211,6 +211,7 @@ typedef enum BuiltinTrancheIds LWTRANCHE_BUFFER_MAPPING, LWTRANCHE_LOCK_MANAGER, LWTRANCHE_PREDICATE_LOCK_MANAGER, + LWTRANCHE_PARALLEL_HASH_JOIN, LWTRANCHE_PARALLEL_QUERY_DSA, LWTRANCHE_SESSION_DSA, LWTRANCHE_SESSION_RECORD_TABLE, diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index 9d3abf0ed0..a7cfdf1f44 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -5884,6 +5884,9 @@ insert into extremely_skewed update pg_class set reltuples = 2, relpages = pg_relation_size('extremely_skewed') / 8192 where relname = 'extremely_skewed'; +-- Make a relation with a couple of enormous tuples. 
+create table wide as select generate_series(1, 2) as id, rpad('', 320000, 'x') as t; +alter table wide set (parallel_workers = 2); -- The "optimal" case: the hash table fits in memory; we plan for 1 -- batch, we stick to that number, and peak memory usage stays within -- our work_mem budget @@ -5924,6 +5927,7 @@ rollback to settings; savepoint settings; set local max_parallel_workers_per_gather = 2; set local work_mem = '4MB'; +set local enable_parallel_hash = off; explain (costs off) select count(*) from simple r join simple s using (id); QUERY PLAN @@ -5955,6 +5959,43 @@ $$); f | f (1 row) +rollback to settings; +-- parallel with parallel-aware hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '4MB'; +set local enable_parallel_hash = on; +explain (costs off) + select count(*) from simple r join simple s using (id); + QUERY PLAN +------------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 2 + -> Partial Aggregate + -> Parallel Hash Join + Hash Cond: (r.id = s.id) + -> Parallel Seq Scan on simple r + -> Parallel Hash + -> Parallel Seq Scan on simple s +(9 rows) + +select count(*) from simple r join simple s using (id); + count +------- + 20000 +(1 row) + +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + f | f +(1 row) + rollback to settings; -- The "good" case: batches required, but we plan the right number; we -- plan for some number of batches, and we stick to that number, and @@ -5996,6 +6037,7 @@ rollback to settings; savepoint settings; set local max_parallel_workers_per_gather = 2; set local work_mem = '128kB'; +set local enable_parallel_hash = off; explain (costs off) select count(*) from simple r join simple s using (id); QUERY PLAN @@ -6027,6 +6069,43 @@ $$); t | f (1 row) +rollback to settings; +-- parallel with parallel-aware hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '128kB'; +set local enable_parallel_hash = on; +explain (costs off) + select count(*) from simple r join simple s using (id); + QUERY PLAN +------------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 2 + -> Partial Aggregate + -> Parallel Hash Join + Hash Cond: (r.id = s.id) + -> Parallel Seq Scan on simple r + -> Parallel Hash + -> Parallel Seq Scan on simple s +(9 rows) + +select count(*) from simple r join simple s using (id); + count +------- + 20000 +(1 row) + +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + t | f +(1 row) + rollback to settings; -- The "bad" case: during execution we need to increase number of -- batches; in this case we plan for 1 batch, and increase at least a @@ -6069,6 +6148,7 @@ rollback to settings; savepoint settings; set local max_parallel_workers_per_gather = 2; set local work_mem = '128kB'; +set local enable_parallel_hash = off; explain (costs off) select count(*) from simple r join bigger_than_it_looks s using (id); QUERY PLAN @@ -6100,6 +6180,43 @@ $$); f | t (1 row) +rollback to settings; +-- parallel with parallel-aware hash join 
+savepoint settings; +set local max_parallel_workers_per_gather = 1; +set local work_mem = '192kB'; +set local enable_parallel_hash = on; +explain (costs off) + select count(*) from simple r join bigger_than_it_looks s using (id); + QUERY PLAN +--------------------------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 1 + -> Partial Aggregate + -> Parallel Hash Join + Hash Cond: (r.id = s.id) + -> Parallel Seq Scan on simple r + -> Parallel Hash + -> Parallel Seq Scan on bigger_than_it_looks s +(9 rows) + +select count(*) from simple r join bigger_than_it_looks s using (id); + count +------- + 20000 +(1 row) + +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join bigger_than_it_looks s using (id); +$$); + initially_multibatch | increased_batches +----------------------+------------------- + f | t +(1 row) + rollback to settings; -- The "ugly" case: increasing the number of batches during execution -- doesn't help, so stop trying to fit in work_mem and hope for the @@ -6142,6 +6259,7 @@ rollback to settings; savepoint settings; set local max_parallel_workers_per_gather = 2; set local work_mem = '128kB'; +set local enable_parallel_hash = off; explain (costs off) select count(*) from simple r join extremely_skewed s using (id); QUERY PLAN @@ -6171,6 +6289,42 @@ $$); 1 | 2 (1 row) +rollback to settings; +-- parallel with parallel-aware hash join +savepoint settings; +set local max_parallel_workers_per_gather = 1; +set local work_mem = '128kB'; +set local enable_parallel_hash = on; +explain (costs off) + select count(*) from simple r join extremely_skewed s using (id); + QUERY PLAN +----------------------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 1 + -> Partial Aggregate + -> Parallel Hash Join + Hash Cond: (r.id = s.id) + -> Parallel Seq Scan on simple r + -> Parallel Hash + -> Parallel Seq Scan on extremely_skewed s +(9 rows) + +select count(*) from simple r join extremely_skewed s using (id); + count +------- + 20000 +(1 row) + +select * from hash_join_batches( +$$ + select count(*) from simple r join extremely_skewed s using (id); +$$); + original | final +----------+------- + 1 | 4 +(1 row) + rollback to settings; -- A couple of other hash join tests unrelated to work_mem management. -- Check that EXPLAIN ANALYZE has data even if the leader doesn't participate @@ -6192,10 +6346,11 @@ rollback to settings; -- that we can check that instrumentation comes back correctly. 
create table foo as select generate_series(1, 3) as id, 'xxxxx'::text as t; alter table foo set (parallel_workers = 0); -create table bar as select generate_series(1, 5000) as id, 'xxxxx'::text as t; +create table bar as select generate_series(1, 10000) as id, 'xxxxx'::text as t; alter table bar set (parallel_workers = 2); -- multi-batch with rescan, parallel-oblivious savepoint settings; +set enable_parallel_hash = off; set parallel_leader_participation = off; set min_parallel_table_scan_size = 0; set parallel_setup_cost = 0; @@ -6246,6 +6401,7 @@ $$); rollback to settings; -- single-batch with rescan, parallel-oblivious savepoint settings; +set enable_parallel_hash = off; set parallel_leader_participation = off; set min_parallel_table_scan_size = 0; set parallel_setup_cost = 0; @@ -6293,6 +6449,108 @@ $$); f (1 row) +rollback to settings; +-- multi-batch with rescan, parallel-aware +savepoint settings; +set enable_parallel_hash = on; +set parallel_leader_participation = off; +set min_parallel_table_scan_size = 0; +set parallel_setup_cost = 0; +set parallel_tuple_cost = 0; +set max_parallel_workers_per_gather = 2; +set enable_material = off; +set enable_mergejoin = off; +set work_mem = '64kB'; +explain (costs off) + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; + QUERY PLAN +-------------------------------------------------------------------------- + Aggregate + -> Nested Loop Left Join + Join Filter: ((foo.id < (b1.id + 1)) AND (foo.id > (b1.id - 1))) + -> Seq Scan on foo + -> Gather + Workers Planned: 2 + -> Parallel Hash Join + Hash Cond: (b1.id = b2.id) + -> Parallel Seq Scan on bar b1 + -> Parallel Hash + -> Parallel Seq Scan on bar b2 +(11 rows) + +select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; + count +------- + 3 +(1 row) + +select final > 1 as multibatch + from hash_join_batches( +$$ + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +$$); + multibatch +------------ + t +(1 row) + +rollback to settings; +-- single-batch with rescan, parallel-aware +savepoint settings; +set enable_parallel_hash = on; +set parallel_leader_participation = off; +set min_parallel_table_scan_size = 0; +set parallel_setup_cost = 0; +set parallel_tuple_cost = 0; +set max_parallel_workers_per_gather = 2; +set enable_material = off; +set enable_mergejoin = off; +set work_mem = '4MB'; +explain (costs off) + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; + QUERY PLAN +-------------------------------------------------------------------------- + Aggregate + -> Nested Loop Left Join + Join Filter: ((foo.id < (b1.id + 1)) AND (foo.id > (b1.id - 1))) + -> Seq Scan on foo + -> Gather + Workers Planned: 2 + -> Parallel Hash Join + Hash Cond: (b1.id = b2.id) + -> Parallel Seq Scan on bar b1 + -> Parallel Hash + -> Parallel Seq Scan on bar b2 +(11 rows) + +select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; + count +------- + 3 +(1 row) + +select final > 1 as multibatch + from hash_join_batches( +$$ + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; 
+$$); + multibatch +------------ + f +(1 row) + rollback to settings; -- A full outer join where every record is matched. -- non-parallel @@ -6383,5 +6641,49 @@ select count(*) from simple r full outer join simple s on (r.id = 0 - s.id); 40000 (1 row) +rollback to settings; +-- exercise special code paths for huge tuples (note use of non-strict +-- expression and left join required to get the detoasted tuple into +-- the hash table) +-- parallel with parallel-aware hash join (hits ExecParallelHashLoadTuple and +-- sts_puttuple oversized tuple cases because it's multi-batch) +savepoint settings; +set max_parallel_workers_per_gather = 2; +set enable_parallel_hash = on; +set work_mem = '128kB'; +explain (costs off) + select length(max(s.t)) + from wide left join (select id, coalesce(t, '') || '' as t from wide) s using (id); + QUERY PLAN +---------------------------------------------------------------- + Finalize Aggregate + -> Gather + Workers Planned: 2 + -> Partial Aggregate + -> Parallel Hash Left Join + Hash Cond: (wide.id = wide_1.id) + -> Parallel Seq Scan on wide + -> Parallel Hash + -> Parallel Seq Scan on wide wide_1 +(9 rows) + +select length(max(s.t)) +from wide left join (select id, coalesce(t, '') || '' as t from wide) s using (id); + length +-------- + 320000 +(1 row) + +select final > 1 as multibatch + from hash_join_batches( +$$ + select length(max(s.t)) + from wide left join (select id, coalesce(t, '') || '' as t from wide) s using (id); +$$); + multibatch +------------ + t +(1 row) + rollback to settings; rollback; diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out index 2b738aae7c..c9c8f51e1c 100644 --- a/src/test/regress/expected/sysviews.out +++ b/src/test/regress/expected/sysviews.out @@ -82,11 +82,12 @@ select name, setting from pg_settings where name like 'enable%'; enable_mergejoin | on enable_nestloop | on enable_parallel_append | on + enable_parallel_hash | on enable_partition_wise_join | off enable_seqscan | on enable_sort | on enable_tidscan | on -(14 rows) +(15 rows) -- Test that the pg_timezone_names and pg_timezone_abbrevs views are -- more-or-less working. We can't test their contents in any great detail diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index 0e933e00d5..a6a452f960 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -2028,6 +2028,10 @@ update pg_class set reltuples = 2, relpages = pg_relation_size('extremely_skewed') / 8192 where relname = 'extremely_skewed'; +-- Make a relation with a couple of enormous tuples. 
+create table wide as select generate_series(1, 2) as id, rpad('', 320000, 'x') as t; +alter table wide set (parallel_workers = 2); + -- The "optimal" case: the hash table fits in memory; we plan for 1 -- batch, we stick to that number, and peak memory usage stays within -- our work_mem budget @@ -2050,6 +2054,22 @@ rollback to settings; savepoint settings; set local max_parallel_workers_per_gather = 2; set local work_mem = '4MB'; +set local enable_parallel_hash = off; +explain (costs off) + select count(*) from simple r join simple s using (id); +select count(*) from simple r join simple s using (id); +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); +rollback to settings; + +-- parallel with parallel-aware hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '4MB'; +set local enable_parallel_hash = on; explain (costs off) select count(*) from simple r join simple s using (id); select count(*) from simple r join simple s using (id); @@ -2082,6 +2102,22 @@ rollback to settings; savepoint settings; set local max_parallel_workers_per_gather = 2; set local work_mem = '128kB'; +set local enable_parallel_hash = off; +explain (costs off) + select count(*) from simple r join simple s using (id); +select count(*) from simple r join simple s using (id); +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join simple s using (id); +$$); +rollback to settings; + +-- parallel with parallel-aware hash join +savepoint settings; +set local max_parallel_workers_per_gather = 2; +set local work_mem = '128kB'; +set local enable_parallel_hash = on; explain (costs off) select count(*) from simple r join simple s using (id); select count(*) from simple r join simple s using (id); @@ -2115,6 +2151,22 @@ rollback to settings; savepoint settings; set local max_parallel_workers_per_gather = 2; set local work_mem = '128kB'; +set local enable_parallel_hash = off; +explain (costs off) + select count(*) from simple r join bigger_than_it_looks s using (id); +select count(*) from simple r join bigger_than_it_looks s using (id); +select original > 1 as initially_multibatch, final > original as increased_batches + from hash_join_batches( +$$ + select count(*) from simple r join bigger_than_it_looks s using (id); +$$); +rollback to settings; + +-- parallel with parallel-aware hash join +savepoint settings; +set local max_parallel_workers_per_gather = 1; +set local work_mem = '192kB'; +set local enable_parallel_hash = on; explain (costs off) select count(*) from simple r join bigger_than_it_looks s using (id); select count(*) from simple r join bigger_than_it_looks s using (id); @@ -2148,6 +2200,21 @@ rollback to settings; savepoint settings; set local max_parallel_workers_per_gather = 2; set local work_mem = '128kB'; +set local enable_parallel_hash = off; +explain (costs off) + select count(*) from simple r join extremely_skewed s using (id); +select count(*) from simple r join extremely_skewed s using (id); +select * from hash_join_batches( +$$ + select count(*) from simple r join extremely_skewed s using (id); +$$); +rollback to settings; + +-- parallel with parallel-aware hash join +savepoint settings; +set local max_parallel_workers_per_gather = 1; +set local work_mem = '128kB'; +set local enable_parallel_hash = on; explain (costs off) select count(*) 
from simple r join extremely_skewed s using (id); select count(*) from simple r join extremely_skewed s using (id); @@ -2175,11 +2242,12 @@ rollback to settings; create table foo as select generate_series(1, 3) as id, 'xxxxx'::text as t; alter table foo set (parallel_workers = 0); -create table bar as select generate_series(1, 5000) as id, 'xxxxx'::text as t; +create table bar as select generate_series(1, 10000) as id, 'xxxxx'::text as t; alter table bar set (parallel_workers = 2); -- multi-batch with rescan, parallel-oblivious savepoint settings; +set enable_parallel_hash = off; set parallel_leader_participation = off; set min_parallel_table_scan_size = 0; set parallel_setup_cost = 0; @@ -2206,6 +2274,61 @@ rollback to settings; -- single-batch with rescan, parallel-oblivious savepoint settings; +set enable_parallel_hash = off; +set parallel_leader_participation = off; +set min_parallel_table_scan_size = 0; +set parallel_setup_cost = 0; +set parallel_tuple_cost = 0; +set max_parallel_workers_per_gather = 2; +set enable_material = off; +set enable_mergejoin = off; +set work_mem = '4MB'; +explain (costs off) + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +select final > 1 as multibatch + from hash_join_batches( +$$ + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +$$); +rollback to settings; + +-- multi-batch with rescan, parallel-aware +savepoint settings; +set enable_parallel_hash = on; +set parallel_leader_participation = off; +set min_parallel_table_scan_size = 0; +set parallel_setup_cost = 0; +set parallel_tuple_cost = 0; +set max_parallel_workers_per_gather = 2; +set enable_material = off; +set enable_mergejoin = off; +set work_mem = '64kB'; +explain (costs off) + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +select final > 1 as multibatch + from hash_join_batches( +$$ + select count(*) from foo + left join (select b1.id, b1.t from bar b1 join bar b2 using (id)) ss + on foo.id < ss.id + 1 and foo.id > ss.id - 1; +$$); +rollback to settings; + +-- single-batch with rescan, parallel-aware +savepoint settings; +set enable_parallel_hash = on; set parallel_leader_participation = off; set min_parallel_table_scan_size = 0; set parallel_setup_cost = 0; @@ -2266,4 +2389,27 @@ explain (costs off) select count(*) from simple r full outer join simple s on (r.id = 0 - s.id); rollback to settings; +-- exercise special code paths for huge tuples (note use of non-strict +-- expression and left join required to get the detoasted tuple into +-- the hash table) + +-- parallel with parallel-aware hash join (hits ExecParallelHashLoadTuple and +-- sts_puttuple oversized tuple cases because it's multi-batch) +savepoint settings; +set max_parallel_workers_per_gather = 2; +set enable_parallel_hash = on; +set work_mem = '128kB'; +explain (costs off) + select length(max(s.t)) + from wide left join (select id, coalesce(t, '') || '' as t from wide) s using (id); +select length(max(s.t)) +from wide left join (select id, coalesce(t, '') || 
'' as t from wide) s using (id);
+select final > 1 as multibatch
+  from hash_join_batches(
+$$
+  select length(max(s.t))
+  from wide left join (select id, coalesce(t, '') || '' as t from wide) s using (id);
+$$);
+rollback to settings;
+
 rollback;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e308e20184..a92c62adde 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1542,6 +1542,10 @@ ParallelBitmapHeapState
 ParallelCompletionPtr
 ParallelContext
 ParallelExecutorInfo
+ParallelHashGrowth
+ParallelHashJoinBatch
+ParallelHashJoinBatchAccessor
+ParallelHashJoinState
 ParallelHeapScanDesc
 ParallelIndexScanDesc
 ParallelSlot

From 59d1e2b95a826869e2789ffe01e9e552148eefde Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 21 Dec 2017 09:09:04 -0500
Subject: [PATCH 0728/1087] Cancel CV sleep during subtransaction abort.

Generally, error recovery paths that need to do things like
LWLockReleaseAll and pgstat_report_wait_end also need to call
ConditionVariableCancelSleep, but AbortSubTransaction was missed.

Since subtransaction abort might destroy the DSM segment that contains
the ConditionVariable stored in cv_sleep_target, this can result in a
crash for anything using condition variables.

Reported and diagnosed by Andres Freund.

Discussion: http://postgr.es/m/20171221110048.rxk6464azzl5t2fi@alap3.anarazel.de
---
 src/backend/access/transam/xact.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index e93d740b21..d430e662e6 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -4717,6 +4717,9 @@ AbortSubTransaction(void)
 	/* Reset WAL record construction state */
 	XLogResetInsertion();
 
+	/* Cancel condition variable sleep */
+	ConditionVariableCancelSleep();
+
 	/*
 	 * Also clean up any open wait for lock, since the lock manager will choke
 	 * if we try to wait for another lock before doing this.

From c98c35cd084a25c6cf9b08c76de8b89facd75fe7 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Thu, 21 Dec 2017 10:56:57 -0500
Subject: [PATCH 0729/1087] Avoid putting build-location-dependent strings into generated files.

Various Perl scripts we use to generate files were in the habit of
printing things like "generated by $0" into their output files.  That
looks like a fine idea at first glance, but it results in
non-reproducible output, because in VPATH builds $0 won't be just the
name of the script file, but a full path for it.  We'd prefer that you
get identical results whether using VPATH or not, so this is a bad
thing.

Some of these places also printed their input file name(s), causing an
additional hazard of the same type.

Hence, establish a policy that thou shalt not print $0, nor input file
pathnames, into output files (they're still allowed in error messages,
though).  Instead just write the script name verbatim.  While we are at
it, we can make these annotations more useful by giving the script's
full relative path name within the PG source tree, eg instead of
Gen_fmgrtab.pl let's print src/backend/utils/Gen_fmgrtab.pl.

Not all of the changes made here actually affect any files shipped in
finished tarballs today, but it seems best to apply the policy
everyplace so that nobody copies unsafe code into places where it could
matter.
Christoph Berg and Tom Lane Discussion: https://postgr.es/m/20171215102223.GB31812@msg.df7cb.de --- src/backend/catalog/genbki.pl | 2 +- src/backend/utils/Gen_fmgrtab.pl | 9 +++------ src/backend/utils/mb/Unicode/UCS_to_BIG5.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_GB18030.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_SJIS.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_UHC.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_most.pl | 2 +- src/bin/psql/create_help.pl | 8 +++----- src/pl/plperl/text2macro.pl | 2 +- src/test/perl/TestLib.pm | 2 +- 17 files changed, 21 insertions(+), 26 deletions(-) diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl index 256a9c9c93..e4a0b8b2c7 100644 --- a/src/backend/catalog/genbki.pl +++ b/src/backend/catalog/genbki.pl @@ -340,7 +340,7 @@ * *** DO NOT EDIT THIS FILE! *** * ****************************** * - * It has been GENERATED by $0 + * It has been GENERATED by src/backend/catalog/genbki.pl * *------------------------------------------------------------------------- */ diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl index ee89d50784..26b428b11e 100644 --- a/src/backend/utils/Gen_fmgrtab.pl +++ b/src/backend/utils/Gen_fmgrtab.pl @@ -118,8 +118,7 @@ * *** DO NOT EDIT THIS FILE! *** * ****************************** * - * It has been GENERATED by $0 - * from $infile + * It has been GENERATED by src/backend/utils/Gen_fmgrtab.pl * *------------------------------------------------------------------------- */ @@ -153,8 +152,7 @@ * *** DO NOT EDIT THIS FILE! *** * ****************************** * - * It has been GENERATED by $0 - * from $infile + * It has been GENERATED by src/backend/utils/Gen_fmgrtab.pl * *------------------------------------------------------------------------- */ @@ -181,8 +179,7 @@ * *** DO NOT EDIT THIS FILE! 
*** * ****************************** * - * It has been GENERATED by $0 - * from $infile + * It has been GENERATED by src/backend/utils/Gen_fmgrtab.pl * *------------------------------------------------------------------------- */ diff --git a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl index 05822b2fce..f73e2a1f16 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl @@ -27,7 +27,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_BIG5.pl'; # Load BIG5.TXT my $all = &read_source("BIG5.TXT"); diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl index f7f94967d9..63f5e1b67a 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl @@ -16,7 +16,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl'; # Read the input diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl index 229003196b..4dac7d267f 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl @@ -10,7 +10,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl'; # first generate UTF-8 --> EUC_JIS_2004 table diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl index 5db663f362..77ce273171 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl @@ -14,7 +14,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl'; # Load JIS0212.TXT my $jis0212 = &read_source("JIS0212.TXT"); diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl index f0d978d3df..8e0126c7f5 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl @@ -19,7 +19,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl'; # Load the source file. 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl index 0bfcbd5bb2..61013e356a 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl @@ -20,7 +20,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl'; my $mapping = &read_source("CNS11643.TXT"); diff --git a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl index 4469cc7c9d..e9f816cfa8 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl @@ -16,7 +16,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_GB18030.pl'; # Read the input diff --git a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl index 7c6f526f0d..be10d5200e 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl @@ -18,7 +18,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl'; # Load the source file. diff --git a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl index d1b36c0d77..98a6ee7c7b 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl @@ -12,7 +12,7 @@ # first generate UTF-8 --> SHIFT_JIS_2004 table -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl'; my $in_file = "sjis-0213-2004-std.txt"; diff --git a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl index 746096449c..cc1edcc4aa 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl @@ -13,7 +13,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_SJIS.pl'; my $mapping = read_source("CP932.TXT"); diff --git a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl index 8d99ca302b..640a2ec2f7 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl @@ -16,7 +16,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_UHC.pl'; # Read the input diff --git a/src/backend/utils/mb/Unicode/UCS_to_most.pl b/src/backend/utils/mb/Unicode/UCS_to_most.pl index 1c3922e2ff..2d69748a93 100755 --- a/src/backend/utils/mb/Unicode/UCS_to_most.pl +++ b/src/backend/utils/mb/Unicode/UCS_to_most.pl @@ -18,7 +18,7 @@ use strict; use convutils; -my $this_script = $0; +my $this_script = 'src/backend/utils/mb/Unicode/UCS_to_most.pl'; my %filename = ( 'WIN866' => 'CP866.TXT', diff --git a/src/bin/psql/create_help.pl b/src/bin/psql/create_help.pl index cedb767b27..9fa1855878 100644 --- a/src/bin/psql/create_help.pl +++ b/src/bin/psql/create_help.pl @@ -51,8 +51,7 @@ * *** Do not change this file by hand. It is automatically * *** generated from the DocBook documentation. * - * generated by - * $^X $0 @ARGV + * generated by src/bin/psql/create_help.pl * */ @@ -76,8 +75,7 @@ * *** Do not change this file by hand. It is automatically * *** generated from the DocBook documentation. 
* - * generated by - * $^X $0 @ARGV + * generated by src/bin/psql/create_help.pl * */ @@ -131,7 +129,7 @@ my $nl_count = () = $cmdsynopsis =~ /\n/g; $cmdsynopsis =~ m!! - and die "$0:$file: null end tag not supported in synopsis\n"; + and die "$0: $file: null end tag not supported in synopsis\n"; $cmdsynopsis =~ s/%/%%/g; while ($cmdsynopsis =~ m!<(\w+)[^>]*>(.+?)]*>!) diff --git a/src/pl/plperl/text2macro.pl b/src/pl/plperl/text2macro.pl index e681fca21a..27c6ef7e42 100644 --- a/src/pl/plperl/text2macro.pl +++ b/src/pl/plperl/text2macro.pl @@ -40,7 +40,7 @@ =head1 DESCRIPTION print qq{ /* * DO NOT EDIT - THIS FILE IS AUTOGENERATED - CHANGES WILL BE LOST - * Written by $0 from @ARGV + * Generated by src/pl/plperl/text2macro.pl */ }; diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm index 60190400de..72826d5bad 100644 --- a/src/test/perl/TestLib.pm +++ b/src/test/perl/TestLib.pm @@ -66,7 +66,7 @@ BEGIN delete $ENV{PGPORT}; delete $ENV{PGHOST}; - $ENV{PGAPPNAME} = $0; + $ENV{PGAPPNAME} = basename($0); # Must be set early $windows_os = $Config{osname} eq 'MSWin32' || $Config{osname} eq 'msys'; From 9ef6aba1d3513829d9f77a4a91ca52f2e5719aef Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Thu, 21 Dec 2017 13:36:52 -0300 Subject: [PATCH 0730/1087] Fix typo --- src/backend/storage/ipc/procarray.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c index 37e12bd829..d87799cb96 100644 --- a/src/backend/storage/ipc/procarray.c +++ b/src/backend/storage/ipc/procarray.c @@ -2180,7 +2180,7 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly) * that value, it's guaranteed to be safe since it's computed by this * routine initially and has been enforced since. We can always use the * slot's general xmin horizon, but the catalog horizon is only usable - * when we only catalog data is going to be looked at. + * when only catalog data is going to be looked at. */ if (TransactionIdIsValid(procArray->replication_slot_xmin) && TransactionIdPrecedes(procArray->replication_slot_xmin, From 8a0596cb656e357c391cccf12854beb2e05f3901 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Thu, 21 Dec 2017 14:21:39 -0300 Subject: [PATCH 0731/1087] Get rid of copy_partition_key That function currently exists to avoid leaking memory in CacheMemoryContext in case of trouble while the partition key is being built, but there's a better way: allocate everything in a memcxt that goes away if the current (sub)transaction fails, and once the partition key is built and no further errors can occur, make the memcxt permanent by making it a child of CacheMemoryContext. 
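As a concrete illustration of that pattern, here is a minimal sketch; the
function name and details below are invented for the example, not taken
from the patch.  The idea is to build the object in a context that starts
out as a child of a transaction-lifespan context, and reparent it only
after construction has fully succeeded.

#include "postgres.h"
#include "utils/memutils.h"

/*
 * Illustrative sketch only (not from the patch): build a long-lived cache
 * entry in a memory context that starts life as a child of
 * CurTransactionContext.  Any elog(ERROR) thrown during construction then
 * frees everything automatically when the (sub)transaction aborts.  Once
 * the entry is complete and no further failure is possible, one call to
 * MemoryContextSetParent() makes the whole context permanent.
 */
static void *
build_cached_entry(Size size)
{
	MemoryContext entrycxt;
	MemoryContext oldcxt;
	void	   *entry;

	entrycxt = AllocSetContextCreate(CurTransactionContext,
									 "cached entry",
									 ALLOCSET_SMALL_SIZES);
	oldcxt = MemoryContextSwitchTo(entrycxt);

	/* All allocations here go into entrycxt; errors leak nothing. */
	entry = palloc0(size);

	MemoryContextSwitchTo(oldcxt);

	/* Success --- promote the context (and the entry) to cache lifetime. */
	MemoryContextSetParent(entrycxt, CacheMemoryContext);

	return entry;
}

MemoryContextSetParent simply relinks the context under a new parent in
constant time, so this retains the error-safety of the old
build-then-copy approach without a second pass over the data.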
Reviewed-by: Tom Lane Discussion: https://postgr.es/m/20171027172730.eh2domlkpn4ja62m@alvherre.pgsql --- src/backend/utils/cache/relcache.c | 71 ++++-------------------------- 1 file changed, 9 insertions(+), 62 deletions(-) diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index 3a9233ef3d..e2760daac4 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -262,7 +262,6 @@ static Relation AllocateRelationDesc(Form_pg_class relp); static void RelationParseRelOptions(Relation relation, HeapTuple tuple); static void RelationBuildTupleDesc(Relation relation); static void RelationBuildPartitionKey(Relation relation); -static PartitionKey copy_partition_key(PartitionKey fromkey); static Relation RelationBuildDesc(Oid targetRelId, bool insertIt); static void RelationInitPhysicalAddr(Relation relation); static void load_critical_index(Oid indexoid, Oid heapoid); @@ -847,6 +846,12 @@ RelationBuildPartitionKey(Relation relation) if (!HeapTupleIsValid(tuple)) return; + partkeycxt = AllocSetContextCreateExtended(CurTransactionContext, + RelationGetRelationName(relation), + MEMCONTEXT_COPY_NAME, + ALLOCSET_SMALL_SIZES); + oldcxt = MemoryContextSwitchTo(partkeycxt); + key = (PartitionKey) palloc0(sizeof(PartitionKeyData)); /* Fixed-length attributes */ @@ -984,71 +989,13 @@ RelationBuildPartitionKey(Relation relation) ReleaseSysCache(tuple); - /* Success --- now copy to the cache memory */ - partkeycxt = AllocSetContextCreateExtended(CacheMemoryContext, - RelationGetRelationName(relation), - MEMCONTEXT_COPY_NAME, - ALLOCSET_SMALL_SIZES); + /* Success --- make the relcache point to the newly constructed key */ + MemoryContextSetParent(partkeycxt, CacheMemoryContext); relation->rd_partkeycxt = partkeycxt; - oldcxt = MemoryContextSwitchTo(relation->rd_partkeycxt); - relation->rd_partkey = copy_partition_key(key); + relation->rd_partkey = key; MemoryContextSwitchTo(oldcxt); } -/* - * copy_partition_key - * - * The copy is allocated in the current memory context. 
- */ -static PartitionKey -copy_partition_key(PartitionKey fromkey) -{ - PartitionKey newkey; - int n; - - newkey = (PartitionKey) palloc(sizeof(PartitionKeyData)); - - newkey->strategy = fromkey->strategy; - newkey->partnatts = n = fromkey->partnatts; - - newkey->partattrs = (AttrNumber *) palloc(n * sizeof(AttrNumber)); - memcpy(newkey->partattrs, fromkey->partattrs, n * sizeof(AttrNumber)); - - newkey->partexprs = copyObject(fromkey->partexprs); - - newkey->partopfamily = (Oid *) palloc(n * sizeof(Oid)); - memcpy(newkey->partopfamily, fromkey->partopfamily, n * sizeof(Oid)); - - newkey->partopcintype = (Oid *) palloc(n * sizeof(Oid)); - memcpy(newkey->partopcintype, fromkey->partopcintype, n * sizeof(Oid)); - - newkey->partsupfunc = (FmgrInfo *) palloc(n * sizeof(FmgrInfo)); - memcpy(newkey->partsupfunc, fromkey->partsupfunc, n * sizeof(FmgrInfo)); - - newkey->partcollation = (Oid *) palloc(n * sizeof(Oid)); - memcpy(newkey->partcollation, fromkey->partcollation, n * sizeof(Oid)); - - newkey->parttypid = (Oid *) palloc(n * sizeof(Oid)); - memcpy(newkey->parttypid, fromkey->parttypid, n * sizeof(Oid)); - - newkey->parttypmod = (int32 *) palloc(n * sizeof(int32)); - memcpy(newkey->parttypmod, fromkey->parttypmod, n * sizeof(int32)); - - newkey->parttyplen = (int16 *) palloc(n * sizeof(int16)); - memcpy(newkey->parttyplen, fromkey->parttyplen, n * sizeof(int16)); - - newkey->parttypbyval = (bool *) palloc(n * sizeof(bool)); - memcpy(newkey->parttypbyval, fromkey->parttypbyval, n * sizeof(bool)); - - newkey->parttypalign = (char *) palloc(n * sizeof(bool)); - memcpy(newkey->parttypalign, fromkey->parttypalign, n * sizeof(char)); - - newkey->parttypcoll = (Oid *) palloc(n * sizeof(Oid)); - memcpy(newkey->parttypcoll, fromkey->parttypcoll, n * sizeof(Oid)); - - return newkey; -} - /* * equalRuleLocks * From 6719b238e8f0ea83c39d2b1728c7670cdaa34e06 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 21 Dec 2017 12:57:41 -0500 Subject: [PATCH 0732/1087] Rearrange execution of PARAM_EXTERN Params for plpgsql's benefit. This patch does three interrelated things: * Create a new expression execution step type EEOP_PARAM_CALLBACK and add the infrastructure needed for add-on modules to generate that. As discussed, the best control mechanism for that seems to be to add another hook function to ParamListInfo, which will be called by ExecInitExpr if it's supplied and a PARAM_EXTERN Param is found. For stand-alone expressions, we add a new entry point to allow the ParamListInfo to be specified directly, since it can't be retrieved from the parent plan node's EState. * Redesign the API for the ParamListInfo paramFetch hook so that the ParamExternData array can be entirely virtual. This also lets us get rid of ParamListInfo.paramMask, instead leaving it to the paramFetch hook to decide which param IDs should be accessible or not. plpgsql_param_fetch was already doing the identical masking check, so having callers do it too seemed redundant. While I was at it, I added a "speculative" flag to paramFetch that the planner can specify as TRUE to avoid unwanted failures. This solves an ancient problem for plpgsql that it couldn't provide values of non-DTYPE_VAR variables to the planner for fear of triggering premature "record not assigned yet" or "field not found" errors during planning. * Rework plpgsql to get rid of the need for "unshared" parameter lists, by dint of turning the single ParamListInfo per estate into a nearly read-only data structure that doesn't instantiate any per-variable data. 
Instead, the paramFetch hook controls access to per-variable data and can make the right decisions on the fly, replacing the cases that we used to need multiple ParamListInfos for. This might perhaps have been a performance loss on its own, but by using a paramCompile hook we can bypass plpgsql_param_fetch entirely during normal query execution. (It's now only called when, eg, we copy the ParamListInfo into a cursor portal. copyParamList() or SerializeParamList() effectively instantiate the virtual parameter array as a simple physical array without a paramFetch hook, which is what we want in those cases.) This allows reverting most of commit 6c82d8d1f, though I kept the cosmetic code-consolidation aspects of that (eg the assign_simple_var function). Performance testing shows this to be at worst a break-even change, and it can provide wins ranging up to 20% in test cases involving accesses to fields of "record" variables. The fact that values of such variables can now be exposed to the planner might produce wins in some situations, too, but I've not pursued that angle. In passing, remove the "parent" pointer from the arguments to ExecInitExprRec and related functions, instead storing that pointer in a transient field in ExprState. The ParamListInfo pointer for a stand-alone expression is handled the same way; we'd otherwise have had to add yet another recursively-passed-down argument in expression compilation. Discussion: https://postgr.es/m/32589.1513706441@sss.pgh.pa.us --- src/backend/commands/prepare.c | 3 +- src/backend/executor/execCurrent.c | 9 +- src/backend/executor/execExpr.c | 231 ++++++++---- src/backend/executor/execExprInterp.c | 17 +- src/backend/executor/functions.c | 3 +- src/backend/executor/spi.c | 3 +- src/backend/nodes/params.c | 64 ++-- src/backend/optimizer/util/clauses.c | 19 +- src/backend/tcop/postgres.c | 6 +- src/include/executor/execExpr.h | 23 +- src/include/executor/executor.h | 1 + src/include/nodes/execnodes.h | 21 +- src/include/nodes/params.h | 84 ++++- src/pl/plpgsql/src/pl_comp.c | 18 - src/pl/plpgsql/src/pl_exec.c | 508 ++++++++++++++------------ src/pl/plpgsql/src/plpgsql.h | 9 +- 16 files changed, 613 insertions(+), 406 deletions(-) diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c index be7222f003..6e90912ae1 100644 --- a/src/backend/commands/prepare.c +++ b/src/backend/commands/prepare.c @@ -399,10 +399,11 @@ EvaluateParams(PreparedStatement *pstmt, List *params, /* we have static list of params, so no hooks needed */ paramLI->paramFetch = NULL; paramLI->paramFetchArg = NULL; + paramLI->paramCompile = NULL; + paramLI->paramCompileArg = NULL; paramLI->parserSetup = NULL; paramLI->parserSetupArg = NULL; paramLI->numParams = num_params; - paramLI->paramMask = NULL; i = 0; foreach(l, exprstates) diff --git a/src/backend/executor/execCurrent.c b/src/backend/executor/execCurrent.c index a3e962ee67..eaeb3a2836 100644 --- a/src/backend/executor/execCurrent.c +++ b/src/backend/executor/execCurrent.c @@ -216,11 +216,14 @@ fetch_cursor_param_value(ExprContext *econtext, int paramId) if (paramInfo && paramId > 0 && paramId <= paramInfo->numParams) { - ParamExternData *prm = ¶mInfo->params[paramId - 1]; + ParamExternData *prm; + ParamExternData prmdata; /* give hook a chance in case parameter is dynamic */ - if (!OidIsValid(prm->ptype) && paramInfo->paramFetch != NULL) - paramInfo->paramFetch(paramInfo, paramId); + if (paramInfo->paramFetch != NULL) + prm = paramInfo->paramFetch(paramInfo, paramId, false, &prmdata); + else + prm = 
¶mInfo->params[paramId - 1]; if (OidIsValid(prm->ptype) && !prm->isnull) { diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index e0839616e1..55bb925191 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -55,22 +55,21 @@ typedef struct LastAttnumInfo } LastAttnumInfo; static void ExecReadyExpr(ExprState *state); -static void ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, +static void ExecInitExprRec(Expr *node, ExprState *state, Datum *resv, bool *resnull); -static void ExprEvalPushStep(ExprState *es, const ExprEvalStep *s); static void ExecInitFunc(ExprEvalStep *scratch, Expr *node, List *args, - Oid funcid, Oid inputcollid, PlanState *parent, + Oid funcid, Oid inputcollid, ExprState *state); static void ExecInitExprSlots(ExprState *state, Node *node); static bool get_last_attnums_walker(Node *node, LastAttnumInfo *info); static void ExecInitWholeRowVar(ExprEvalStep *scratch, Var *variable, - PlanState *parent); + ExprState *state); static void ExecInitArrayRef(ExprEvalStep *scratch, ArrayRef *aref, - PlanState *parent, ExprState *state, + ExprState *state, Datum *resv, bool *resnull); static bool isAssignmentIndirectionExpr(Expr *expr); static void ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest, - PlanState *parent, ExprState *state, + ExprState *state, Datum *resv, bool *resnull); @@ -122,12 +121,51 @@ ExecInitExpr(Expr *node, PlanState *parent) /* Initialize ExprState with empty step list */ state = makeNode(ExprState); state->expr = node; + state->parent = parent; + state->ext_params = NULL; /* Insert EEOP_*_FETCHSOME steps as needed */ ExecInitExprSlots(state, (Node *) node); /* Compile the expression proper */ - ExecInitExprRec(node, parent, state, &state->resvalue, &state->resnull); + ExecInitExprRec(node, state, &state->resvalue, &state->resnull); + + /* Finally, append a DONE step */ + scratch.opcode = EEOP_DONE; + ExprEvalPushStep(state, &scratch); + + ExecReadyExpr(state); + + return state; +} + +/* + * ExecInitExprWithParams: prepare a standalone expression tree for execution + * + * This is the same as ExecInitExpr, except that there is no parent PlanState, + * and instead we may have a ParamListInfo describing PARAM_EXTERN Params. 
+ */ +ExprState * +ExecInitExprWithParams(Expr *node, ParamListInfo ext_params) +{ + ExprState *state; + ExprEvalStep scratch; + + /* Special case: NULL expression produces a NULL ExprState pointer */ + if (node == NULL) + return NULL; + + /* Initialize ExprState with empty step list */ + state = makeNode(ExprState); + state->expr = node; + state->parent = NULL; + state->ext_params = ext_params; + + /* Insert EEOP_*_FETCHSOME steps as needed */ + ExecInitExprSlots(state, (Node *) node); + + /* Compile the expression proper */ + ExecInitExprRec(node, state, &state->resvalue, &state->resnull); /* Finally, append a DONE step */ scratch.opcode = EEOP_DONE; @@ -172,6 +210,9 @@ ExecInitQual(List *qual, PlanState *parent) state = makeNode(ExprState); state->expr = (Expr *) qual; + state->parent = parent; + state->ext_params = NULL; + /* mark expression as to be used with ExecQual() */ state->flags = EEO_FLAG_IS_QUAL; @@ -198,7 +239,7 @@ ExecInitQual(List *qual, PlanState *parent) Expr *node = (Expr *) lfirst(lc); /* first evaluate expression */ - ExecInitExprRec(node, parent, state, &state->resvalue, &state->resnull); + ExecInitExprRec(node, state, &state->resvalue, &state->resnull); /* then emit EEOP_QUAL to detect if it's false (or null) */ scratch.d.qualexpr.jumpdone = -1; @@ -314,6 +355,9 @@ ExecBuildProjectionInfo(List *targetList, projInfo->pi_state.tag.type = T_ExprState; state = &projInfo->pi_state; state->expr = (Expr *) targetList; + state->parent = parent; + state->ext_params = NULL; + state->resultslot = slot; /* Insert EEOP_*_FETCHSOME steps as needed */ @@ -398,7 +442,7 @@ ExecBuildProjectionInfo(List *targetList, * matter) can change between executions. We instead evaluate * into the ExprState's resvalue/resnull and then move. */ - ExecInitExprRec(tle->expr, parent, state, + ExecInitExprRec(tle->expr, state, &state->resvalue, &state->resnull); /* @@ -581,12 +625,11 @@ ExecReadyExpr(ExprState *state) * possibly recursing into sub-expressions of node. * * node - expression to evaluate - * parent - parent executor node (or NULL if a standalone expression) * state - ExprState to whose ->steps to append the necessary operations * resv / resnull - where to store the result of the node into */ static void -ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, +ExecInitExprRec(Expr *node, ExprState *state, Datum *resv, bool *resnull) { ExprEvalStep scratch; @@ -609,7 +652,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, if (variable->varattno == InvalidAttrNumber) { /* whole-row Var */ - ExecInitWholeRowVar(&scratch, variable, parent); + ExecInitWholeRowVar(&scratch, variable, state); } else if (variable->varattno <= 0) { @@ -674,6 +717,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, case T_Param: { Param *param = (Param *) node; + ParamListInfo params; switch (param->paramkind) { @@ -681,19 +725,41 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, scratch.opcode = EEOP_PARAM_EXEC; scratch.d.param.paramid = param->paramid; scratch.d.param.paramtype = param->paramtype; + ExprEvalPushStep(state, &scratch); break; case PARAM_EXTERN: - scratch.opcode = EEOP_PARAM_EXTERN; - scratch.d.param.paramid = param->paramid; - scratch.d.param.paramtype = param->paramtype; + + /* + * If we have a relevant ParamCompileHook, use it; + * otherwise compile a standard EEOP_PARAM_EXTERN + * step. ext_params, if supplied, takes precedence + * over info from the parent node's EState (if any). 
+ */ + if (state->ext_params) + params = state->ext_params; + else if (state->parent && + state->parent->state) + params = state->parent->state->es_param_list_info; + else + params = NULL; + if (params && params->paramCompile) + { + params->paramCompile(params, param, state, + resv, resnull); + } + else + { + scratch.opcode = EEOP_PARAM_EXTERN; + scratch.d.param.paramid = param->paramid; + scratch.d.param.paramtype = param->paramtype; + ExprEvalPushStep(state, &scratch); + } break; default: elog(ERROR, "unrecognized paramkind: %d", (int) param->paramkind); break; } - - ExprEvalPushStep(state, &scratch); break; } @@ -706,9 +772,9 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, scratch.d.aggref.astate = astate; astate->aggref = aggref; - if (parent && IsA(parent, AggState)) + if (state->parent && IsA(state->parent, AggState)) { - AggState *aggstate = (AggState *) parent; + AggState *aggstate = (AggState *) state->parent; aggstate->aggs = lcons(astate, aggstate->aggs); aggstate->numaggs++; @@ -728,14 +794,14 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, GroupingFunc *grp_node = (GroupingFunc *) node; Agg *agg; - if (!parent || !IsA(parent, AggState) || - !IsA(parent->plan, Agg)) + if (!state->parent || !IsA(state->parent, AggState) || + !IsA(state->parent->plan, Agg)) elog(ERROR, "GroupingFunc found in non-Agg plan node"); scratch.opcode = EEOP_GROUPING_FUNC; - scratch.d.grouping_func.parent = (AggState *) parent; + scratch.d.grouping_func.parent = (AggState *) state->parent; - agg = (Agg *) (parent->plan); + agg = (Agg *) (state->parent->plan); if (agg->groupingSets) scratch.d.grouping_func.clauses = grp_node->cols; @@ -753,9 +819,9 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, wfstate->wfunc = wfunc; - if (parent && IsA(parent, WindowAggState)) + if (state->parent && IsA(state->parent, WindowAggState)) { - WindowAggState *winstate = (WindowAggState *) parent; + WindowAggState *winstate = (WindowAggState *) state->parent; int nfuncs; winstate->funcs = lcons(wfstate, winstate->funcs); @@ -764,9 +830,10 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, winstate->numaggs++; /* for now initialize agg using old style expressions */ - wfstate->args = ExecInitExprList(wfunc->args, parent); + wfstate->args = ExecInitExprList(wfunc->args, + state->parent); wfstate->aggfilter = ExecInitExpr(wfunc->aggfilter, - parent); + state->parent); /* * Complain if the windowfunc's arguments contain any @@ -795,7 +862,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, { ArrayRef *aref = (ArrayRef *) node; - ExecInitArrayRef(&scratch, aref, parent, state, resv, resnull); + ExecInitArrayRef(&scratch, aref, state, resv, resnull); break; } @@ -805,7 +872,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, ExecInitFunc(&scratch, node, func->args, func->funcid, func->inputcollid, - parent, state); + state); ExprEvalPushStep(state, &scratch); break; } @@ -816,7 +883,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, ExecInitFunc(&scratch, node, op->args, op->opfuncid, op->inputcollid, - parent, state); + state); ExprEvalPushStep(state, &scratch); break; } @@ -827,7 +894,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, ExecInitFunc(&scratch, node, op->args, op->opfuncid, op->inputcollid, - parent, state); + state); /* * Change opcode of call instruction to EEOP_DISTINCT. 
@@ -849,7 +916,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, ExecInitFunc(&scratch, node, op->args, op->opfuncid, op->inputcollid, - parent, state); + state); /* * Change opcode of call instruction to EEOP_NULLIF. @@ -896,7 +963,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, opexpr->inputcollid, NULL, NULL); /* Evaluate scalar directly into left function argument */ - ExecInitExprRec(scalararg, parent, state, + ExecInitExprRec(scalararg, state, &fcinfo->arg[0], &fcinfo->argnull[0]); /* @@ -905,7 +972,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, * be overwritten by EEOP_SCALARARRAYOP, and will not be * passed to any other expression. */ - ExecInitExprRec(arrayarg, parent, state, resv, resnull); + ExecInitExprRec(arrayarg, state, resv, resnull); /* And perform the operation */ scratch.opcode = EEOP_SCALARARRAYOP; @@ -949,7 +1016,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, Expr *arg = (Expr *) lfirst(lc); /* Evaluate argument into our output variable */ - ExecInitExprRec(arg, parent, state, resv, resnull); + ExecInitExprRec(arg, state, resv, resnull); /* Perform the appropriate step type */ switch (boolexpr->boolop) @@ -1009,13 +1076,14 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, SubPlan *subplan = (SubPlan *) node; SubPlanState *sstate; - if (!parent) + if (!state->parent) elog(ERROR, "SubPlan found with no parent plan"); - sstate = ExecInitSubPlan(subplan, parent); + sstate = ExecInitSubPlan(subplan, state->parent); - /* add SubPlanState nodes to parent->subPlan */ - parent->subPlan = lappend(parent->subPlan, sstate); + /* add SubPlanState nodes to state->parent->subPlan */ + state->parent->subPlan = lappend(state->parent->subPlan, + sstate); scratch.opcode = EEOP_SUBPLAN; scratch.d.subplan.sstate = sstate; @@ -1029,10 +1097,10 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, AlternativeSubPlan *asplan = (AlternativeSubPlan *) node; AlternativeSubPlanState *asstate; - if (!parent) + if (!state->parent) elog(ERROR, "AlternativeSubPlan found with no parent plan"); - asstate = ExecInitAlternativeSubPlan(asplan, parent); + asstate = ExecInitAlternativeSubPlan(asplan, state->parent); scratch.opcode = EEOP_ALTERNATIVE_SUBPLAN; scratch.d.alternative_subplan.asstate = asstate; @@ -1046,7 +1114,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, FieldSelect *fselect = (FieldSelect *) node; /* evaluate row/record argument into result area */ - ExecInitExprRec(fselect->arg, parent, state, resv, resnull); + ExecInitExprRec(fselect->arg, state, resv, resnull); /* and extract field */ scratch.opcode = EEOP_FIELDSELECT; @@ -1083,7 +1151,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, *descp = NULL; /* emit code to evaluate the composite input value */ - ExecInitExprRec(fstore->arg, parent, state, resv, resnull); + ExecInitExprRec(fstore->arg, state, resv, resnull); /* next, deform the input tuple into our workspace */ scratch.opcode = EEOP_FIELDSTORE_DEFORM; @@ -1134,7 +1202,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, state->innermost_caseval = &values[fieldnum - 1]; state->innermost_casenull = &nulls[fieldnum - 1]; - ExecInitExprRec(e, parent, state, + ExecInitExprRec(e, state, &values[fieldnum - 1], &nulls[fieldnum - 1]); @@ -1158,7 +1226,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, /* relabel doesn't need to do anything at runtime */ RelabelType *relabel = (RelabelType *) 
node; - ExecInitExprRec(relabel->arg, parent, state, resv, resnull); + ExecInitExprRec(relabel->arg, state, resv, resnull); break; } @@ -1171,7 +1239,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, FunctionCallInfo fcinfo_in; /* evaluate argument into step's result area */ - ExecInitExprRec(iocoerce->arg, parent, state, resv, resnull); + ExecInitExprRec(iocoerce->arg, state, resv, resnull); /* * Prepare both output and input function calls, to be @@ -1228,7 +1296,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, ExprState *elemstate; /* evaluate argument into step's result area */ - ExecInitExprRec(acoerce->arg, parent, state, resv, resnull); + ExecInitExprRec(acoerce->arg, state, resv, resnull); resultelemtype = get_element_type(acoerce->resulttype); if (!OidIsValid(resultelemtype)) @@ -1244,10 +1312,13 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, */ elemstate = makeNode(ExprState); elemstate->expr = acoerce->elemexpr; + elemstate->parent = state->parent; + elemstate->ext_params = state->ext_params; + elemstate->innermost_caseval = (Datum *) palloc(sizeof(Datum)); elemstate->innermost_casenull = (bool *) palloc(sizeof(bool)); - ExecInitExprRec(acoerce->elemexpr, parent, elemstate, + ExecInitExprRec(acoerce->elemexpr, elemstate, &elemstate->resvalue, &elemstate->resnull); if (elemstate->steps_len == 1 && @@ -1290,7 +1361,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, ConvertRowtypeExpr *convert = (ConvertRowtypeExpr *) node; /* evaluate argument into step's result area */ - ExecInitExprRec(convert->arg, parent, state, resv, resnull); + ExecInitExprRec(convert->arg, state, resv, resnull); /* and push conversion step */ scratch.opcode = EEOP_CONVERT_ROWTYPE; @@ -1324,7 +1395,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, caseval = palloc(sizeof(Datum)); casenull = palloc(sizeof(bool)); - ExecInitExprRec(caseExpr->arg, parent, state, + ExecInitExprRec(caseExpr->arg, state, caseval, casenull); /* @@ -1375,7 +1446,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, state->innermost_casenull = casenull; /* evaluate condition into CASE's result variables */ - ExecInitExprRec(when->expr, parent, state, resv, resnull); + ExecInitExprRec(when->expr, state, resv, resnull); state->innermost_caseval = save_innermost_caseval; state->innermost_casenull = save_innermost_casenull; @@ -1390,7 +1461,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, * If WHEN result is true, evaluate THEN result, storing * it into the CASE's result variables. 
*/ - ExecInitExprRec(when->result, parent, state, resv, resnull); + ExecInitExprRec(when->result, state, resv, resnull); /* Emit JUMP step to jump to end of CASE's code */ scratch.opcode = EEOP_JUMP; @@ -1415,7 +1486,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, Assert(caseExpr->defresult); /* evaluate ELSE expr into CASE's result variables */ - ExecInitExprRec(caseExpr->defresult, parent, state, + ExecInitExprRec(caseExpr->defresult, state, resv, resnull); /* adjust jump targets */ @@ -1484,7 +1555,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, { Expr *e = (Expr *) lfirst(lc); - ExecInitExprRec(e, parent, state, + ExecInitExprRec(e, state, &scratch.d.arrayexpr.elemvalues[elemoff], &scratch.d.arrayexpr.elemnulls[elemoff]); elemoff++; @@ -1578,7 +1649,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, } /* Evaluate column expr into appropriate workspace slot */ - ExecInitExprRec(e, parent, state, + ExecInitExprRec(e, state, &scratch.d.row.elemvalues[i], &scratch.d.row.elemnulls[i]); i++; @@ -1667,9 +1738,9 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, */ /* evaluate left and right args directly into fcinfo */ - ExecInitExprRec(left_expr, parent, state, + ExecInitExprRec(left_expr, state, &fcinfo->arg[0], &fcinfo->argnull[0]); - ExecInitExprRec(right_expr, parent, state, + ExecInitExprRec(right_expr, state, &fcinfo->arg[1], &fcinfo->argnull[1]); scratch.opcode = EEOP_ROWCOMPARE_STEP; @@ -1738,7 +1809,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, Expr *e = (Expr *) lfirst(lc); /* evaluate argument, directly into result datum */ - ExecInitExprRec(e, parent, state, resv, resnull); + ExecInitExprRec(e, state, resv, resnull); /* if it's not null, skip to end of COALESCE expr */ scratch.opcode = EEOP_JUMP_IF_NOT_NULL; @@ -1820,7 +1891,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, { Expr *e = (Expr *) lfirst(lc); - ExecInitExprRec(e, parent, state, + ExecInitExprRec(e, state, &scratch.d.minmax.values[off], &scratch.d.minmax.nulls[off]); off++; @@ -1886,7 +1957,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, { Expr *e = (Expr *) lfirst(arg); - ExecInitExprRec(e, parent, state, + ExecInitExprRec(e, state, &scratch.d.xmlexpr.named_argvalue[off], &scratch.d.xmlexpr.named_argnull[off]); off++; @@ -1897,7 +1968,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, { Expr *e = (Expr *) lfirst(arg); - ExecInitExprRec(e, parent, state, + ExecInitExprRec(e, state, &scratch.d.xmlexpr.argvalue[off], &scratch.d.xmlexpr.argnull[off]); off++; @@ -1935,7 +2006,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, scratch.d.nulltest_row.argdesc = NULL; /* first evaluate argument into result variable */ - ExecInitExprRec(ntest->arg, parent, state, + ExecInitExprRec(ntest->arg, state, resv, resnull); /* then push the test of that argument */ @@ -1953,7 +2024,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, * and will get overwritten by the below EEOP_BOOLTEST_IS_* * step. 
*/ - ExecInitExprRec(btest->arg, parent, state, resv, resnull); + ExecInitExprRec(btest->arg, state, resv, resnull); switch (btest->booltesttype) { @@ -1990,7 +2061,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, { CoerceToDomain *ctest = (CoerceToDomain *) node; - ExecInitCoerceToDomain(&scratch, ctest, parent, state, + ExecInitCoerceToDomain(&scratch, ctest, state, resv, resnull); break; } @@ -2046,7 +2117,7 @@ ExecInitExprRec(Expr *node, PlanState *parent, ExprState *state, * Note that this potentially re-allocates es->steps, therefore no pointer * into that array may be used while the expression is still being built. */ -static void +void ExprEvalPushStep(ExprState *es, const ExprEvalStep *s) { if (es->steps_alloc == 0) @@ -2074,7 +2145,7 @@ ExprEvalPushStep(ExprState *es, const ExprEvalStep *s) */ static void ExecInitFunc(ExprEvalStep *scratch, Expr *node, List *args, Oid funcid, - Oid inputcollid, PlanState *parent, ExprState *state) + Oid inputcollid, ExprState *state) { int nargs = list_length(args); AclResult aclresult; @@ -2126,8 +2197,9 @@ ExecInitFunc(ExprEvalStep *scratch, Expr *node, List *args, Oid funcid, ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("set-valued function called in context that cannot accept a set"), - parent ? executor_errposition(parent->state, - exprLocation((Node *) node)) : 0)); + state->parent ? + executor_errposition(state->parent->state, + exprLocation((Node *) node)) : 0)); /* Build code to evaluate arguments directly into the fcinfo struct */ argno = 0; @@ -2148,7 +2220,7 @@ ExecInitFunc(ExprEvalStep *scratch, Expr *node, List *args, Oid funcid, } else { - ExecInitExprRec(arg, parent, state, + ExecInitExprRec(arg, state, &fcinfo->arg[argno], &fcinfo->argnull[argno]); } argno++; @@ -2260,8 +2332,10 @@ get_last_attnums_walker(Node *node, LastAttnumInfo *info) * The caller still has to push the step. */ static void -ExecInitWholeRowVar(ExprEvalStep *scratch, Var *variable, PlanState *parent) +ExecInitWholeRowVar(ExprEvalStep *scratch, Var *variable, ExprState *state) { + PlanState *parent = state->parent; + /* fill in all but the target */ scratch->opcode = EEOP_WHOLEROW; scratch->d.wholerow.var = variable; @@ -2331,7 +2405,7 @@ ExecInitWholeRowVar(ExprEvalStep *scratch, Var *variable, PlanState *parent) * Prepare evaluation of an ArrayRef expression. */ static void -ExecInitArrayRef(ExprEvalStep *scratch, ArrayRef *aref, PlanState *parent, +ExecInitArrayRef(ExprEvalStep *scratch, ArrayRef *aref, ExprState *state, Datum *resv, bool *resnull) { bool isAssignment = (aref->refassgnexpr != NULL); @@ -2355,7 +2429,7 @@ ExecInitArrayRef(ExprEvalStep *scratch, ArrayRef *aref, PlanState *parent, * be overwritten by the final EEOP_ARRAYREF_FETCH/ASSIGN step, which is * pushed last. */ - ExecInitExprRec(aref->refexpr, parent, state, resv, resnull); + ExecInitExprRec(aref->refexpr, state, resv, resnull); /* * If refexpr yields NULL, and it's a fetch, then result is NULL. We can @@ -2401,7 +2475,7 @@ ExecInitArrayRef(ExprEvalStep *scratch, ArrayRef *aref, PlanState *parent, arefstate->upperprovided[i] = true; /* Each subscript is evaluated into subscriptvalue/subscriptnull */ - ExecInitExprRec(e, parent, state, + ExecInitExprRec(e, state, &arefstate->subscriptvalue, &arefstate->subscriptnull); /* ... 
and then ARRAYREF_SUBSCRIPT saves it into step's workspace */ @@ -2434,7 +2508,7 @@ ExecInitArrayRef(ExprEvalStep *scratch, ArrayRef *aref, PlanState *parent, arefstate->lowerprovided[i] = true; /* Each subscript is evaluated into subscriptvalue/subscriptnull */ - ExecInitExprRec(e, parent, state, + ExecInitExprRec(e, state, &arefstate->subscriptvalue, &arefstate->subscriptnull); /* ... and then ARRAYREF_SUBSCRIPT saves it into step's workspace */ @@ -2488,7 +2562,7 @@ ExecInitArrayRef(ExprEvalStep *scratch, ArrayRef *aref, PlanState *parent, state->innermost_casenull = &arefstate->prevnull; /* evaluate replacement value into replacevalue/replacenull */ - ExecInitExprRec(aref->refassgnexpr, parent, state, + ExecInitExprRec(aref->refassgnexpr, state, &arefstate->replacevalue, &arefstate->replacenull); state->innermost_caseval = save_innermost_caseval; @@ -2566,8 +2640,7 @@ isAssignmentIndirectionExpr(Expr *expr) */ static void ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest, - PlanState *parent, ExprState *state, - Datum *resv, bool *resnull) + ExprState *state, Datum *resv, bool *resnull) { ExprEvalStep scratch2; DomainConstraintRef *constraint_ref; @@ -2587,7 +2660,7 @@ ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest, * if there's constraint failures there'll be errors, otherwise it's what * needs to be returned. */ - ExecInitExprRec(ctest->arg, parent, state, resv, resnull); + ExecInitExprRec(ctest->arg, state, resv, resnull); /* * Note: if the argument is of varlena type, it could be a R/W expanded @@ -2684,7 +2757,7 @@ ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest, state->innermost_domainnull = domainnull; /* evaluate check expression value */ - ExecInitExprRec(con->check_expr, parent, state, + ExecInitExprRec(con->check_expr, state, scratch->d.domaincheck.checkvalue, scratch->d.domaincheck.checknull); diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 6c4612dad4..0c3f66803f 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -335,6 +335,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) &&CASE_EEOP_BOOLTEST_IS_NOT_FALSE, &&CASE_EEOP_PARAM_EXEC, &&CASE_EEOP_PARAM_EXTERN, + &&CASE_EEOP_PARAM_CALLBACK, &&CASE_EEOP_CASE_TESTVAL, &&CASE_EEOP_MAKE_READONLY, &&CASE_EEOP_IOCOERCE, @@ -1047,6 +1048,13 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_NEXT(); } + EEO_CASE(EEOP_PARAM_CALLBACK) + { + /* allow an extension module to supply a PARAM_EXTERN value */ + op->d.cparam.paramfunc(state, op, econtext); + EEO_NEXT(); + } + EEO_CASE(EEOP_CASE_TESTVAL) { /* @@ -1967,11 +1975,14 @@ ExecEvalParamExtern(ExprState *state, ExprEvalStep *op, ExprContext *econtext) if (likely(paramInfo && paramId > 0 && paramId <= paramInfo->numParams)) { - ParamExternData *prm = ¶mInfo->params[paramId - 1]; + ParamExternData *prm; + ParamExternData prmdata; /* give hook a chance in case parameter is dynamic */ - if (!OidIsValid(prm->ptype) && paramInfo->paramFetch != NULL) - paramInfo->paramFetch(paramInfo, paramId); + if (paramInfo->paramFetch != NULL) + prm = paramInfo->paramFetch(paramInfo, paramId, false, &prmdata); + else + prm = ¶mInfo->params[paramId - 1]; if (likely(OidIsValid(prm->ptype))) { diff --git a/src/backend/executor/functions.c b/src/backend/executor/functions.c index 3caa343723..527f7d810f 100644 --- a/src/backend/executor/functions.c +++ b/src/backend/executor/functions.c @@ -914,10 +914,11 @@ 
postquel_sub_params(SQLFunctionCachePtr fcache, /* we have static list of params, so no hooks needed */ paramLI->paramFetch = NULL; paramLI->paramFetchArg = NULL; + paramLI->paramCompile = NULL; + paramLI->paramCompileArg = NULL; paramLI->parserSetup = NULL; paramLI->parserSetupArg = NULL; paramLI->numParams = nargs; - paramLI->paramMask = NULL; fcache->paramLI = paramLI; } else diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c index f3da2ddd08..977f317420 100644 --- a/src/backend/executor/spi.c +++ b/src/backend/executor/spi.c @@ -2259,10 +2259,11 @@ _SPI_convert_params(int nargs, Oid *argtypes, /* we have static list of params, so no hooks needed */ paramLI->paramFetch = NULL; paramLI->paramFetchArg = NULL; + paramLI->paramCompile = NULL; + paramLI->paramCompileArg = NULL; paramLI->parserSetup = NULL; paramLI->parserSetupArg = NULL; paramLI->numParams = nargs; - paramLI->paramMask = NULL; for (i = 0; i < nargs; i++) { diff --git a/src/backend/nodes/params.c b/src/backend/nodes/params.c index 51429af1e3..94acdf4e7b 100644 --- a/src/backend/nodes/params.c +++ b/src/backend/nodes/params.c @@ -48,32 +48,25 @@ copyParamList(ParamListInfo from) retval = (ParamListInfo) palloc(size); retval->paramFetch = NULL; retval->paramFetchArg = NULL; + retval->paramCompile = NULL; + retval->paramCompileArg = NULL; retval->parserSetup = NULL; retval->parserSetupArg = NULL; retval->numParams = from->numParams; - retval->paramMask = NULL; for (i = 0; i < from->numParams; i++) { - ParamExternData *oprm = &from->params[i]; + ParamExternData *oprm; ParamExternData *nprm = &retval->params[i]; + ParamExternData prmdata; int16 typLen; bool typByVal; - /* Ignore parameters we don't need, to save cycles and space. */ - if (from->paramMask != NULL && - !bms_is_member(i, from->paramMask)) - { - nprm->value = (Datum) 0; - nprm->isnull = true; - nprm->pflags = 0; - nprm->ptype = InvalidOid; - continue; - } - /* give hook a chance in case parameter is dynamic */ - if (!OidIsValid(oprm->ptype) && from->paramFetch != NULL) - from->paramFetch(from, i + 1); + if (from->paramFetch != NULL) + oprm = from->paramFetch(from, i + 1, false, &prmdata); + else + oprm = &from->params[i]; /* flat-copy the parameter info */ *nprm = *oprm; @@ -102,22 +95,19 @@ EstimateParamListSpace(ParamListInfo paramLI) for (i = 0; i < paramLI->numParams; i++) { - ParamExternData *prm = ¶mLI->params[i]; + ParamExternData *prm; + ParamExternData prmdata; Oid typeOid; int16 typLen; bool typByVal; - /* Ignore parameters we don't need, to save cycles and space. */ - if (paramLI->paramMask != NULL && - !bms_is_member(i, paramLI->paramMask)) - typeOid = InvalidOid; + /* give hook a chance in case parameter is dynamic */ + if (paramLI->paramFetch != NULL) + prm = paramLI->paramFetch(paramLI, i + 1, false, &prmdata); else - { - /* give hook a chance in case parameter is dynamic */ - if (!OidIsValid(prm->ptype) && paramLI->paramFetch != NULL) - paramLI->paramFetch(paramLI, i + 1); - typeOid = prm->ptype; - } + prm = ¶mLI->params[i]; + + typeOid = prm->ptype; sz = add_size(sz, sizeof(Oid)); /* space for type OID */ sz = add_size(sz, sizeof(uint16)); /* space for pflags */ @@ -171,22 +161,19 @@ SerializeParamList(ParamListInfo paramLI, char **start_address) /* Write each parameter in turn. */ for (i = 0; i < nparams; i++) { - ParamExternData *prm = ¶mLI->params[i]; + ParamExternData *prm; + ParamExternData prmdata; Oid typeOid; int16 typLen; bool typByVal; - /* Ignore parameters we don't need, to save cycles and space. 
*/ - if (paramLI->paramMask != NULL && - !bms_is_member(i, paramLI->paramMask)) - typeOid = InvalidOid; + /* give hook a chance in case parameter is dynamic */ + if (paramLI->paramFetch != NULL) + prm = paramLI->paramFetch(paramLI, i + 1, false, &prmdata); else - { - /* give hook a chance in case parameter is dynamic */ - if (!OidIsValid(prm->ptype) && paramLI->paramFetch != NULL) - paramLI->paramFetch(paramLI, i + 1); - typeOid = prm->ptype; - } + prm = ¶mLI->params[i]; + + typeOid = prm->ptype; /* Write type OID. */ memcpy(*start_address, &typeOid, sizeof(Oid)); @@ -237,10 +224,11 @@ RestoreParamList(char **start_address) paramLI = (ParamListInfo) palloc(size); paramLI->paramFetch = NULL; paramLI->paramFetchArg = NULL; + paramLI->paramCompile = NULL; + paramLI->paramCompileArg = NULL; paramLI->parserSetup = NULL; paramLI->parserSetupArg = NULL; paramLI->numParams = nparams; - paramLI->paramMask = NULL; for (i = 0; i < nparams; i++) { diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index 6a2d5ad760..9ca384db51 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -2513,14 +2513,27 @@ eval_const_expressions_mutator(Node *node, case T_Param: { Param *param = (Param *) node; + ParamListInfo paramLI = context->boundParams; /* Look to see if we've been given a value for this Param */ if (param->paramkind == PARAM_EXTERN && - context->boundParams != NULL && + paramLI != NULL && param->paramid > 0 && - param->paramid <= context->boundParams->numParams) + param->paramid <= paramLI->numParams) { - ParamExternData *prm = &context->boundParams->params[param->paramid - 1]; + ParamExternData *prm; + ParamExternData prmdata; + + /* + * Give hook a chance in case parameter is dynamic. Tell + * it that this fetch is speculative, so it should avoid + * erroring out if parameter is unavailable. 
+ */ + if (paramLI->paramFetch != NULL) + prm = paramLI->paramFetch(paramLI, param->paramid, + true, &prmdata); + else + prm = ¶mLI->params[param->paramid - 1]; if (OidIsValid(prm->ptype)) { diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 1ae9ac2d57..1b24dddbce 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -1646,10 +1646,11 @@ exec_bind_message(StringInfo input_message) /* we have static list of params, so no hooks needed */ params->paramFetch = NULL; params->paramFetchArg = NULL; + params->paramCompile = NULL; + params->paramCompileArg = NULL; params->parserSetup = NULL; params->parserSetupArg = NULL; params->numParams = numParams; - params->paramMask = NULL; for (paramno = 0; paramno < numParams; paramno++) { @@ -2211,6 +2212,9 @@ errdetail_params(ParamListInfo params) MemoryContext oldcontext; int paramno; + /* This code doesn't support dynamic param lists */ + Assert(params->paramFetch == NULL); + /* Make sure any trash is generated in MessageContext */ oldcontext = MemoryContextSwitchTo(MessageContext); diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h index 5bbb63a9d8..080252fad6 100644 --- a/src/include/executor/execExpr.h +++ b/src/include/executor/execExpr.h @@ -16,7 +16,8 @@ #include "nodes/execnodes.h" -/* forward reference to avoid circularity */ +/* forward references to avoid circularity */ +struct ExprEvalStep; struct ArrayRefState; /* Bits in ExprState->flags (see also execnodes.h for public flag bits): */ @@ -25,6 +26,11 @@ struct ArrayRefState; /* jump-threading is in use */ #define EEO_FLAG_DIRECT_THREADED (1 << 2) +/* Typical API for out-of-line evaluation subroutines */ +typedef void (*ExecEvalSubroutine) (ExprState *state, + struct ExprEvalStep *op, + ExprContext *econtext); + /* * Discriminator for ExprEvalSteps. 
* @@ -131,6 +137,7 @@ typedef enum ExprEvalOp /* evaluate PARAM_EXEC/EXTERN parameters */ EEOP_PARAM_EXEC, EEOP_PARAM_EXTERN, + EEOP_PARAM_CALLBACK, /* return CaseTestExpr value */ EEOP_CASE_TESTVAL, @@ -331,6 +338,15 @@ typedef struct ExprEvalStep Oid paramtype; /* OID of parameter's datatype */ } param; + /* for EEOP_PARAM_CALLBACK */ + struct + { + ExecEvalSubroutine paramfunc; /* add-on evaluation subroutine */ + void *paramarg; /* private data for same */ + int paramid; /* numeric ID for parameter */ + Oid paramtype; /* OID of parameter's datatype */ + } cparam; + /* for EEOP_CASE_TESTVAL/DOMAIN_TESTVAL */ struct { @@ -598,8 +614,11 @@ typedef struct ArrayRefState } ArrayRefState; -extern void ExecReadyInterpretedExpr(ExprState *state); +/* functions in execExpr.c */ +extern void ExprEvalPushStep(ExprState *es, const ExprEvalStep *s); +/* functions in execExprInterp.c */ +extern void ExecReadyInterpretedExpr(ExprState *state); extern ExprEvalOp ExecEvalStepOp(ExprState *state, ExprEvalStep *op); /* diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index dea9216fd6..2cc74da0ba 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -247,6 +247,7 @@ ExecProcNode(PlanState *node) * prototypes from functions in execExpr.c */ extern ExprState *ExecInitExpr(Expr *node, PlanState *parent); +extern ExprState *ExecInitExprWithParams(Expr *node, ParamListInfo ext_params); extern ExprState *ExecInitQual(List *qual, PlanState *parent); extern ExprState *ExecInitCheck(List *qual, PlanState *parent); extern List *ExecInitExprList(List *nodes, PlanState *parent); diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 44d8c47d2c..c9a5279dc5 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -33,6 +33,13 @@ #include "storage/condition_variable.h" +struct PlanState; /* forward references in this file */ +struct ParallelHashJoinState; +struct ExprState; +struct ExprContext; +struct ExprEvalStep; /* avoid including execExpr.h everywhere */ + + /* ---------------- * ExprState node * @@ -40,12 +47,6 @@ * It contains instructions (in ->steps) to evaluate the expression. * ---------------- */ -struct ExprState; /* forward references in this file */ -struct ExprContext; -struct ExprEvalStep; /* avoid including execExpr.h everywhere */ - -struct ParallelHashJoinState; - typedef Datum (*ExprStateEvalFunc) (struct ExprState *expression, struct ExprContext *econtext, bool *isNull); @@ -87,12 +88,16 @@ typedef struct ExprState Expr *expr; /* - * XXX: following only needed during "compilation", could be thrown away. + * XXX: following fields only needed during "compilation" (ExecInitExpr); + * could be thrown away afterwards. 
*/ int steps_len; /* number of steps currently */ int steps_alloc; /* allocated length of steps array */ + struct PlanState *parent; /* parent PlanState node, if any */ + ParamListInfo ext_params; /* for compiling PARAM_EXTERN nodes */ + Datum *innermost_caseval; bool *innermost_casenull; @@ -827,8 +832,6 @@ typedef struct DomainConstraintState * ---------------------------------------------------------------- */ -struct PlanState; - /* ---------------- * ExecProcNodeMtd * diff --git a/src/include/nodes/params.h b/src/include/nodes/params.h index 55219dab6e..b198db5ee4 100644 --- a/src/include/nodes/params.h +++ b/src/include/nodes/params.h @@ -16,16 +16,23 @@ /* Forward declarations, to avoid including other headers */ struct Bitmapset; +struct ExprState; +struct Param; struct ParseState; -/* ---------------- +/* * ParamListInfo * - * ParamListInfo arrays are used to pass parameters into the executor - * for parameterized plans. Each entry in the array defines the value - * to be substituted for a PARAM_EXTERN parameter. The "paramid" - * of a PARAM_EXTERN Param can range from 1 to numParams. + * ParamListInfo structures are used to pass parameters into the executor + * for parameterized plans. We support two basic approaches to supplying + * parameter values, the "static" way and the "dynamic" way. + * + * In the static approach, per-parameter data is stored in an array of + * ParamExternData structs appended to the ParamListInfo struct. + * Each entry in the array defines the value to be substituted for a + * PARAM_EXTERN parameter. The "paramid" of a PARAM_EXTERN Param + * can range from 1 to numParams. * * Although parameter numbers are normally consecutive, we allow * ptype == InvalidOid to signal an unused array entry. @@ -35,18 +42,47 @@ struct ParseState; * as a constant (i.e., generate a plan that works only for this value * of the parameter). * - * There are two hook functions that can be associated with a ParamListInfo - * array to support dynamic parameter handling. First, if paramFetch - * isn't null and the executor requires a value for an invalid parameter - * (one with ptype == InvalidOid), the paramFetch hook is called to give - * it a chance to fill in the parameter value. Second, a parserSetup - * hook can be supplied to re-instantiate the original parsing hooks if - * a query needs to be re-parsed/planned (as a substitute for supposing - * that the current ptype values represent a fixed set of parameter types). - + * In the dynamic approach, all access to parameter values is done through + * hook functions found in the ParamListInfo struct. In this case, + * the ParamExternData array is typically unused and not allocated; + * but the legal range of paramid is still 1 to numParams. + * * Although the data structure is really an array, not a list, we keep * the old typedef name to avoid unnecessary code changes. - * ---------------- + * + * There are 3 hook functions that can be associated with a ParamListInfo + * structure: + * + * If paramFetch isn't null, it is called to fetch the ParamExternData + * for a particular param ID, rather than accessing the relevant element + * of the ParamExternData array. This supports the case where the array + * isn't there at all, as well as cases where the data in the array + * might be obsolete or lazily evaluated. paramFetch must return the + * address of a ParamExternData struct describing the specified param ID; + * the convention above about ptype == InvalidOid signaling an invalid + * param ID still applies. 
The returned struct can either be placed in + * the "workspace" supplied by the caller, or it can be in storage + * controlled by the paramFetch hook if that's more convenient. + * (In either case, the struct is not expected to be long-lived.) + * If "speculative" is true, the paramFetch hook should not risk errors + * in trying to fetch the parameter value, and should report an invalid + * parameter instead. + * + * If paramCompile isn't null, then it controls what execExpr.c compiles + * for PARAM_EXTERN Param nodes --- typically, this hook would emit a + * EEOP_PARAM_CALLBACK step. This allows unnecessary work to be + * optimized away in compiled expressions. + * + * If parserSetup isn't null, then it is called to re-instantiate the + * original parsing hooks when a query needs to be re-parsed/planned. + * This is especially useful if the types of parameters might change + * from time to time, since it can replace the need to supply a fixed + * list of parameter types to the parser. + * + * Notice that the paramFetch and paramCompile hooks are actually passed + * the ParamListInfo struct's address; they can therefore access all + * three of the "arg" fields, and the distinction between paramFetchArg + * and paramCompileArg is rather arbitrary. */ #define PARAM_FLAG_CONST 0x0001 /* parameter is constant */ @@ -61,7 +97,13 @@ typedef struct ParamExternData typedef struct ParamListInfoData *ParamListInfo; -typedef void (*ParamFetchHook) (ParamListInfo params, int paramid); +typedef ParamExternData *(*ParamFetchHook) (ParamListInfo params, + int paramid, bool speculative, + ParamExternData *workspace); + +typedef void (*ParamCompileHook) (ParamListInfo params, struct Param *param, + struct ExprState *state, + Datum *resv, bool *resnull); typedef void (*ParserSetupHook) (struct ParseState *pstate, void *arg); @@ -69,10 +111,16 @@ typedef struct ParamListInfoData { ParamFetchHook paramFetch; /* parameter fetch hook */ void *paramFetchArg; + ParamCompileHook paramCompile; /* parameter compile hook */ + void *paramCompileArg; ParserSetupHook parserSetup; /* parser setup hook */ void *parserSetupArg; - int numParams; /* number of ParamExternDatas following */ - struct Bitmapset *paramMask; /* if non-NULL, can ignore omitted params */ + int numParams; /* nominal/maximum # of Params represented */ + + /* + * params[] may be of length zero if paramFetch is supplied; otherwise it + * must be of length numParams. + */ ParamExternData params[FLEXIBLE_ARRAY_MEMBER]; } ParamListInfoData; diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index 2d7844bd9d..e9eab17338 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -2350,35 +2350,17 @@ plpgsql_adddatum(PLpgSQL_datum *new) /* ---------- * plpgsql_finish_datums Copy completed datum info into function struct. - * - * This is also responsible for building resettable_datums, a bitmapset - * of the dnos of all ROW, REC, and RECFIELD datums in the function. 
* ---------- */ static void plpgsql_finish_datums(PLpgSQL_function *function) { - Bitmapset *resettable_datums = NULL; int i; function->ndatums = plpgsql_nDatums; function->datums = palloc(sizeof(PLpgSQL_datum *) * plpgsql_nDatums); for (i = 0; i < plpgsql_nDatums; i++) - { function->datums[i] = plpgsql_Datums[i]; - switch (function->datums[i]->dtype) - { - case PLPGSQL_DTYPE_ROW: - case PLPGSQL_DTYPE_REC: - case PLPGSQL_DTYPE_RECFIELD: - resettable_datums = bms_add_member(resettable_datums, i); - break; - - default: - break; - } - } - function->resettable_datums = resettable_datums; } diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index fa4d573e50..dd575e73f4 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -22,6 +22,7 @@ #include "access/tupconvert.h" #include "catalog/pg_proc.h" #include "catalog/pg_type.h" +#include "executor/execExpr.h" #include "executor/spi.h" #include "funcapi.h" #include "miscadmin.h" @@ -268,9 +269,18 @@ static int exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, Portal portal, bool prefetch_ok); static ParamListInfo setup_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr); -static ParamListInfo setup_unshared_param_list(PLpgSQL_execstate *estate, - PLpgSQL_expr *expr); -static void plpgsql_param_fetch(ParamListInfo params, int paramid); +static ParamExternData *plpgsql_param_fetch(ParamListInfo params, + int paramid, bool speculative, + ParamExternData *prm); +static void plpgsql_param_compile(ParamListInfo params, Param *param, + ExprState *state, + Datum *resv, bool *resnull); +static void plpgsql_param_eval_var(ExprState *state, ExprEvalStep *op, + ExprContext *econtext); +static void plpgsql_param_eval_var_ro(ExprState *state, ExprEvalStep *op, + ExprContext *econtext); +static void plpgsql_param_eval_non_var(ExprState *state, ExprEvalStep *op, + ExprContext *econtext); static void exec_move_row(PLpgSQL_execstate *estate, PLpgSQL_variable *target, HeapTuple tup, TupleDesc tupdesc); @@ -2346,9 +2356,9 @@ exec_stmt_forc(PLpgSQL_execstate *estate, PLpgSQL_stmt_forc *stmt) exec_prepare_plan(estate, query, curvar->cursor_options); /* - * Set up short-lived ParamListInfo + * Set up ParamListInfo for this query */ - paramLI = setup_unshared_param_list(estate, query); + paramLI = setup_param_list(estate, query); /* * Open the cursor (the paramlist will get copied into the portal) @@ -3440,17 +3450,16 @@ plpgsql_estate_setup(PLpgSQL_execstate *estate, estate->datums = palloc(sizeof(PLpgSQL_datum *) * estate->ndatums); /* caller is expected to fill the datums array */ - /* initialize ParamListInfo with one entry per datum, all invalid */ + /* initialize our ParamListInfo with appropriate hook functions */ estate->paramLI = (ParamListInfo) - palloc0(offsetof(ParamListInfoData, params) + - estate->ndatums * sizeof(ParamExternData)); + palloc(offsetof(ParamListInfoData, params)); estate->paramLI->paramFetch = plpgsql_param_fetch; estate->paramLI->paramFetchArg = (void *) estate; + estate->paramLI->paramCompile = plpgsql_param_compile; + estate->paramLI->paramCompileArg = NULL; /* not needed */ estate->paramLI->parserSetup = (ParserSetupHook) plpgsql_parser_setup; estate->paramLI->parserSetupArg = NULL; /* filled during use */ estate->paramLI->numParams = estate->ndatums; - estate->paramLI->paramMask = NULL; - estate->params_dirty = false; /* set up for use of appropriate simple-expression EState and cast hash */ if (simple_eval_estate) @@ -4169,12 +4178,12 @@ 
exec_stmt_open(PLpgSQL_execstate *estate, PLpgSQL_stmt_open *stmt) } /* - * Set up short-lived ParamListInfo + * Set up ParamListInfo for this query */ - paramLI = setup_unshared_param_list(estate, query); + paramLI = setup_param_list(estate, query); /* - * Open the cursor + * Open the cursor (the paramlist will get copied into the portal) */ portal = SPI_cursor_open_with_paramlist(curname, query->plan, paramLI, @@ -5268,15 +5277,15 @@ exec_run_select(PLpgSQL_execstate *estate, portalP == NULL ? CURSOR_OPT_PARALLEL_OK : 0); /* - * If a portal was requested, put the query into the portal + * Set up ParamListInfo to pass to executor + */ + paramLI = setup_param_list(estate, expr); + + /* + * If a portal was requested, put the query and paramlist into the portal */ if (portalP != NULL) { - /* - * Set up short-lived ParamListInfo - */ - paramLI = setup_unshared_param_list(estate, expr); - *portalP = SPI_cursor_open_with_paramlist(NULL, expr->plan, paramLI, estate->readonly_func); @@ -5287,11 +5296,6 @@ exec_run_select(PLpgSQL_execstate *estate, return SPI_OK_CURSOR; } - /* - * Set up ParamListInfo to pass to executor - */ - paramLI = setup_param_list(estate, expr); - /* * Execute the query */ @@ -5504,7 +5508,6 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate, ExprContext *econtext = estate->eval_econtext; LocalTransactionId curlxid = MyProc->lxid; CachedPlan *cplan; - ParamListInfo paramLI; void *save_setup_arg; MemoryContext oldcontext; @@ -5551,6 +5554,14 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate, *rettype = expr->expr_simple_type; *rettypmod = expr->expr_simple_typmod; + /* + * Set up ParamListInfo to pass to executor. For safety, save and restore + * estate->paramLI->parserSetupArg around our use of the param list. + */ + save_setup_arg = estate->paramLI->parserSetupArg; + + econtext->ecxt_param_list_info = setup_param_list(estate, expr); + /* * Prepare the expression for execution, if it's not been done already in * the current transaction. (This will be forced to happen if we called @@ -5559,7 +5570,9 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate, if (expr->expr_simple_lxid != curlxid) { oldcontext = MemoryContextSwitchTo(estate->simple_eval_estate->es_query_cxt); - expr->expr_simple_state = ExecInitExpr(expr->expr_simple_expr, NULL); + expr->expr_simple_state = + ExecInitExprWithParams(expr->expr_simple_expr, + econtext->ecxt_param_list_info); expr->expr_simple_in_use = false; expr->expr_simple_lxid = curlxid; MemoryContextSwitchTo(oldcontext); @@ -5578,21 +5591,6 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate, PushActiveSnapshot(GetTransactionSnapshot()); } - /* - * Set up ParamListInfo to pass to executor. We need an unshared list if - * it's going to include any R/W expanded-object pointer. For safety, - * save and restore estate->paramLI->parserSetupArg around our use of the - * param list. - */ - save_setup_arg = estate->paramLI->parserSetupArg; - - if (expr->rwparam >= 0) - paramLI = setup_unshared_param_list(estate, expr); - else - paramLI = setup_param_list(estate, expr); - - econtext->ecxt_param_list_info = paramLI; - /* * Mark expression as busy for the duration of the ExecEvalExpr call. */ @@ -5632,35 +5630,17 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate, /* * Create a ParamListInfo to pass to SPI * - * We share a single ParamListInfo array across all SPI calls made from this - * estate, except calls creating cursors, which use setup_unshared_param_list - * (see its comments for reasons why), and calls that pass a R/W expanded - * object pointer. 
A shared array is generally OK since any given slot in - * the array would need to contain the same current datum value no matter - * which query or expression we're evaluating; but of course that doesn't - * hold when a specific variable is being passed as a R/W pointer, because - * other expressions in the same function probably don't want to do that. - * - * Note that paramLI->parserSetupArg points to the specific PLpgSQL_expr - * being evaluated. This is not an issue for statement-level callers, but - * lower-level callers must save and restore estate->paramLI->parserSetupArg - * just in case there's an active evaluation at an outer call level. + * We use a single ParamListInfo struct for all SPI calls made from this + * estate; it contains no per-param data, just hook functions, so it's + * effectively read-only for SPI. * - * The general plan for passing parameters to SPI is that plain VAR datums - * always have valid images in the shared param list. This is ensured by - * assign_simple_var(), which also marks those params as PARAM_FLAG_CONST, - * allowing the planner to use those values in custom plans. However, non-VAR - * datums cannot conveniently be managed that way. For one thing, they could - * throw errors (for example "no such record field") and we do not want that - * to happen in a part of the expression that might never be evaluated at - * runtime. For another thing, exec_eval_datum() may return short-lived - * values stored in the estate's eval_mcontext, which will not necessarily - * survive to the next SPI operation. And for a third thing, ROW - * and RECFIELD datums' values depend on other datums, and we don't have a - * cheap way to track that. Therefore, param slots for non-VAR datum types - * are always reset here and then filled on-demand by plpgsql_param_fetch(). - * We can save a few cycles by not bothering with the reset loop unless at - * least one such param has actually been filled by plpgsql_param_fetch(). + * An exception from pure read-only-ness is that the parserSetupArg points + * to the specific PLpgSQL_expr being evaluated. This is not an issue for + * statement-level callers, but lower-level callers must save and restore + * estate->paramLI->parserSetupArg just in case there's an active evaluation + * at an outer call level. (A plausible alternative design would be to + * create a ParamListInfo struct for each PLpgSQL_expr, but for the moment + * that seems like a waste of memory.) */ static ParamListInfo setup_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr) @@ -5673,11 +5653,6 @@ setup_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr) */ Assert(expr->plan != NULL); - /* - * Expressions with R/W parameters can't use the shared param list. - */ - Assert(expr->rwparam == -1); - /* * We only need a ParamListInfo if the expression has parameters. In * principle we should test with bms_is_empty(), but we use a not-null @@ -5689,25 +5664,6 @@ setup_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr) /* Use the common ParamListInfo */ paramLI = estate->paramLI; - /* - * If any resettable parameters have been passed to the executor since - * last time, we need to reset those param slots to "invalid", for the - * reasons mentioned in the comment above. 
- */ - if (estate->params_dirty) - { - Bitmapset *resettable_datums = estate->func->resettable_datums; - int dno = -1; - - while ((dno = bms_next_member(resettable_datums, dno)) >= 0) - { - ParamExternData *prm = ¶mLI->params[dno]; - - prm->ptype = InvalidOid; - } - estate->params_dirty = false; - } - /* * Set up link to active expr where the hook functions can find it. * Callers must save and restore parserSetupArg if there is any chance @@ -5715,12 +5671,6 @@ setup_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr) */ paramLI->parserSetupArg = (void *) expr; - /* - * Allow parameters that aren't needed by this expression to be - * ignored. - */ - paramLI->paramMask = expr->paramnos; - /* * Also make sure this is set before parser hooks need it. There is * no need to save and restore, since the value is always correct once @@ -5741,115 +5691,25 @@ setup_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr) } /* - * Create an unshared, short-lived ParamListInfo to pass to SPI - * - * When creating a cursor, we do not use the shared ParamListInfo array - * but create a short-lived one that will contain only params actually - * referenced by the query. The reason for this is that copyParamList() will - * be used to copy the parameters into cursor-lifespan storage, and we don't - * want it to copy anything that's not used by the specific cursor; that - * could result in uselessly copying some large values. - * - * We also use this for expressions that are passing a R/W object pointer - * to some trusted function. We don't want the R/W pointer to get into the - * shared param list, where it could get passed to some less-trusted function. + * plpgsql_param_fetch paramFetch callback for dynamic parameter fetch * - * The result, if not NULL, is in the estate's eval_mcontext. + * We always use the caller's workspace to construct the returned struct. * - * XXX. Could we use ParamListInfo's new paramMask to avoid creating unshared - * parameter lists? - */ -static ParamListInfo -setup_unshared_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr) -{ - ParamListInfo paramLI; - - /* - * We must have created the SPIPlan already (hence, query text has been - * parsed/analyzed at least once); else we cannot rely on expr->paramnos. - */ - Assert(expr->plan != NULL); - - /* - * We only need a ParamListInfo if the expression has parameters. In - * principle we should test with bms_is_empty(), but we use a not-null - * test because it's faster. In current usage bits are never removed from - * expr->paramnos, only added, so this test is correct anyway. - */ - if (expr->paramnos) - { - int dno; - - /* initialize ParamListInfo with one entry per datum, all invalid */ - paramLI = (ParamListInfo) - eval_mcontext_alloc0(estate, - offsetof(ParamListInfoData, params) + - estate->ndatums * sizeof(ParamExternData)); - paramLI->paramFetch = plpgsql_param_fetch; - paramLI->paramFetchArg = (void *) estate; - paramLI->parserSetup = (ParserSetupHook) plpgsql_parser_setup; - paramLI->parserSetupArg = (void *) expr; - paramLI->numParams = estate->ndatums; - paramLI->paramMask = NULL; - - /* - * Instantiate values for "safe" parameters of the expression. We - * could skip this and leave them to be filled by plpgsql_param_fetch; - * but then the values would not be available for query planning, - * since the planner doesn't call the paramFetch hook. 
- */ - dno = -1; - while ((dno = bms_next_member(expr->paramnos, dno)) >= 0) - { - PLpgSQL_datum *datum = estate->datums[dno]; - - if (datum->dtype == PLPGSQL_DTYPE_VAR) - { - PLpgSQL_var *var = (PLpgSQL_var *) datum; - ParamExternData *prm = ¶mLI->params[dno]; - - if (dno == expr->rwparam) - prm->value = var->value; - else - prm->value = MakeExpandedObjectReadOnly(var->value, - var->isnull, - var->datatype->typlen); - prm->isnull = var->isnull; - prm->pflags = PARAM_FLAG_CONST; - prm->ptype = var->datatype->typoid; - } - } - - /* - * Also make sure this is set before parser hooks need it. There is - * no need to save and restore, since the value is always correct once - * set. (Should be set already, but let's be sure.) - */ - expr->func = estate->func; - } - else - { - /* - * Expression requires no parameters. Be sure we represent this case - * as a NULL ParamListInfo, so that plancache.c knows there is no - * point in a custom plan. - */ - paramLI = NULL; - } - return paramLI; -} - -/* - * plpgsql_param_fetch paramFetch callback for dynamic parameter fetch + * Note: this is no longer used during query execution. It is used during + * planning (with speculative == true) and when the ParamListInfo we supply + * to the executor is copied into a cursor portal or transferred to a + * parallel child process. */ -static void -plpgsql_param_fetch(ParamListInfo params, int paramid) +static ParamExternData * +plpgsql_param_fetch(ParamListInfo params, + int paramid, bool speculative, + ParamExternData *prm) { int dno; PLpgSQL_execstate *estate; PLpgSQL_expr *expr; PLpgSQL_datum *datum; - ParamExternData *prm; + bool ok = true; int32 prmtypmod; /* paramid's are 1-based, but dnos are 0-based */ @@ -5866,35 +5726,74 @@ plpgsql_param_fetch(ParamListInfo params, int paramid) /* * Since copyParamList() or SerializeParamList() will try to materialize - * every single parameter slot, it's important to do nothing when asked - * for a datum that's not supposed to be used by this SQL expression. - * Otherwise we risk failures in exec_eval_datum(), or copying a lot more - * data than necessary. + * every single parameter slot, it's important to return a dummy param + * when asked for a datum that's not supposed to be used by this SQL + * expression. Otherwise we risk failures in exec_eval_datum(), or + * copying a lot more data than necessary. */ if (!bms_is_member(dno, expr->paramnos)) - return; + ok = false; - if (params == estate->paramLI) + /* + * If the access is speculative, we prefer to return no data rather than + * to fail in exec_eval_datum(). Check the likely failure cases. + */ + else if (speculative) { - /* - * We need to mark the shared params array dirty if we're about to - * evaluate a resettable datum. 
- */ switch (datum->dtype) { + case PLPGSQL_DTYPE_VAR: + /* always safe */ + break; + case PLPGSQL_DTYPE_ROW: + /* should be safe in all interesting cases */ + break; + case PLPGSQL_DTYPE_REC: + { + PLpgSQL_rec *rec = (PLpgSQL_rec *) datum; + + if (!HeapTupleIsValid(rec->tup)) + ok = false; + break; + } + case PLPGSQL_DTYPE_RECFIELD: - estate->params_dirty = true; - break; + { + PLpgSQL_recfield *recfield = (PLpgSQL_recfield *) datum; + PLpgSQL_rec *rec; + int fno; + + rec = (PLpgSQL_rec *) (estate->datums[recfield->recparentno]); + if (!HeapTupleIsValid(rec->tup)) + ok = false; + else + { + fno = SPI_fnumber(rec->tupdesc, recfield->fieldname); + if (fno == SPI_ERROR_NOATTRIBUTE) + ok = false; + } + break; + } default: + ok = false; break; } } - /* OK, evaluate the value and store into the appropriate paramlist slot */ - prm = ¶ms->params[dno]; + /* Return "no such parameter" if not ok */ + if (!ok) + { + prm->value = (Datum) 0; + prm->isnull = true; + prm->pflags = 0; + prm->ptype = InvalidOid; + return prm; + } + + /* OK, evaluate the value and store into the return struct */ exec_eval_datum(estate, datum, &prm->ptype, &prmtypmod, &prm->value, &prm->isnull); @@ -5909,6 +5808,174 @@ plpgsql_param_fetch(ParamListInfo params, int paramid) prm->value = MakeExpandedObjectReadOnly(prm->value, prm->isnull, ((PLpgSQL_var *) datum)->datatype->typlen); + + return prm; +} + +/* + * plpgsql_param_compile paramCompile callback for plpgsql parameters + */ +static void +plpgsql_param_compile(ParamListInfo params, Param *param, + ExprState *state, + Datum *resv, bool *resnull) +{ + PLpgSQL_execstate *estate; + PLpgSQL_expr *expr; + int dno; + PLpgSQL_datum *datum; + ExprEvalStep scratch; + + /* fetch back the hook data */ + estate = (PLpgSQL_execstate *) params->paramFetchArg; + expr = (PLpgSQL_expr *) params->parserSetupArg; + + /* paramid's are 1-based, but dnos are 0-based */ + dno = param->paramid - 1; + Assert(dno >= 0 && dno < estate->ndatums); + + /* now we can access the target datum */ + datum = estate->datums[dno]; + + scratch.opcode = EEOP_PARAM_CALLBACK; + scratch.resvalue = resv; + scratch.resnull = resnull; + + /* Select appropriate eval function */ + if (datum->dtype == PLPGSQL_DTYPE_VAR) + { + if (dno != expr->rwparam && + ((PLpgSQL_var *) datum)->datatype->typlen == -1) + scratch.d.cparam.paramfunc = plpgsql_param_eval_var_ro; + else + scratch.d.cparam.paramfunc = plpgsql_param_eval_var; + } + else + scratch.d.cparam.paramfunc = plpgsql_param_eval_non_var; + + /* + * Note: it's tempting to use paramarg to store the estate pointer and + * thereby save an indirection or two in the eval functions. But that + * doesn't work because the compiled expression might be used with + * different estates for the same PL/pgSQL function. + */ + scratch.d.cparam.paramarg = NULL; + scratch.d.cparam.paramid = param->paramid; + scratch.d.cparam.paramtype = param->paramtype; + ExprEvalPushStep(state, &scratch); +} + +/* + * plpgsql_param_eval_var evaluation of EEOP_PARAM_CALLBACK step + * + * This is specialized to the case of DTYPE_VAR variables for which + * we do not need to invoke MakeExpandedObjectReadOnly. 
+ */ +static void +plpgsql_param_eval_var(ExprState *state, ExprEvalStep *op, + ExprContext *econtext) +{ + ParamListInfo params; + PLpgSQL_execstate *estate; + int dno = op->d.cparam.paramid - 1; + PLpgSQL_var *var; + + /* fetch back the hook data */ + params = econtext->ecxt_param_list_info; + estate = (PLpgSQL_execstate *) params->paramFetchArg; + Assert(dno >= 0 && dno < estate->ndatums); + + /* now we can access the target datum */ + var = (PLpgSQL_var *) estate->datums[dno]; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + + /* inlined version of exec_eval_datum() */ + *op->resvalue = var->value; + *op->resnull = var->isnull; + + /* safety check -- an assertion should be sufficient */ + Assert(var->datatype->typoid == op->d.cparam.paramtype); +} + +/* + * plpgsql_param_eval_var_ro evaluation of EEOP_PARAM_CALLBACK step + * + * This is specialized to the case of DTYPE_VAR variables for which + * we need to invoke MakeExpandedObjectReadOnly. + */ +static void +plpgsql_param_eval_var_ro(ExprState *state, ExprEvalStep *op, + ExprContext *econtext) +{ + ParamListInfo params; + PLpgSQL_execstate *estate; + int dno = op->d.cparam.paramid - 1; + PLpgSQL_var *var; + + /* fetch back the hook data */ + params = econtext->ecxt_param_list_info; + estate = (PLpgSQL_execstate *) params->paramFetchArg; + Assert(dno >= 0 && dno < estate->ndatums); + + /* now we can access the target datum */ + var = (PLpgSQL_var *) estate->datums[dno]; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + + /* + * Inlined version of exec_eval_datum() ... and while we're at it, force + * expanded datums to read-only. + */ + *op->resvalue = MakeExpandedObjectReadOnly(var->value, + var->isnull, + -1); + *op->resnull = var->isnull; + + /* safety check -- an assertion should be sufficient */ + Assert(var->datatype->typoid == op->d.cparam.paramtype); +} + +/* + * plpgsql_param_eval_non_var evaluation of EEOP_PARAM_CALLBACK step + * + * This handles all variable types except DTYPE_VAR. + */ +static void +plpgsql_param_eval_non_var(ExprState *state, ExprEvalStep *op, + ExprContext *econtext) +{ + ParamListInfo params; + PLpgSQL_execstate *estate; + int dno = op->d.cparam.paramid - 1; + PLpgSQL_datum *datum; + Oid datumtype; + int32 datumtypmod; + + /* fetch back the hook data */ + params = econtext->ecxt_param_list_info; + estate = (PLpgSQL_execstate *) params->paramFetchArg; + Assert(dno >= 0 && dno < estate->ndatums); + + /* now we can access the target datum */ + datum = estate->datums[dno]; + Assert(datum->dtype != PLPGSQL_DTYPE_VAR); + + exec_eval_datum(estate, datum, + &datumtype, &datumtypmod, + op->resvalue, op->resnull); + + /* safety check -- needed for, eg, record fields */ + if (unlikely(datumtype != op->d.cparam.paramtype)) + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("type of parameter %d (%s) does not match that when preparing the plan (%s)", + op->d.cparam.paramid, + format_type_be(datumtype), + format_type_be(op->d.cparam.paramtype)))); + + /* + * Currently, if the dtype isn't VAR, the value couldn't be a read/write + * expanded datum. + */ } @@ -6875,14 +6942,12 @@ plpgsql_subxact_cb(SubXactEvent event, SubTransactionId mySubid, * assign_simple_var --- assign a new value to any VAR datum. * * This should be the only mechanism for assignment to simple variables, - * lest we forget to update the paramLI image. + * lest we do the release of the old value incorrectly. 
*/ static void assign_simple_var(PLpgSQL_execstate *estate, PLpgSQL_var *var, Datum newvalue, bool isnull, bool freeable) { - ParamExternData *prm; - Assert(var->dtype == PLPGSQL_DTYPE_VAR); /* Free the old value if needed */ if (var->freeval) @@ -6898,15 +6963,6 @@ assign_simple_var(PLpgSQL_execstate *estate, PLpgSQL_var *var, var->value = newvalue; var->isnull = isnull; var->freeval = freeable; - /* And update the image in the common parameter list */ - prm = &estate->paramLI->params[var->dno]; - prm->value = MakeExpandedObjectReadOnly(newvalue, - isnull, - var->datatype->typlen); - prm->isnull = isnull; - /* these might be set already, but let's be sure */ - prm->pflags = PARAM_FLAG_CONST; - prm->ptype = var->datatype->typoid; } /* diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index 39bd82acd1..43d7d7db36 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -857,7 +857,6 @@ typedef struct PLpgSQL_function /* the datums representing the function's local variables */ int ndatums; PLpgSQL_datum **datums; - Bitmapset *resettable_datums; /* dnos of non-simple vars */ /* function body parsetree */ PLpgSQL_stmt_block *action; @@ -899,9 +898,13 @@ typedef struct PLpgSQL_execstate int ndatums; PLpgSQL_datum **datums; - /* we pass datums[i] to the executor, when needed, in paramLI->params[i] */ + /* + * paramLI is what we use to pass local variable values to the executor. + * It does not have a ParamExternData array; we just dynamically + * instantiate parameter data as needed. By convention, PARAM_EXTERN + * Params have paramid equal to the dno of the referenced local variable. + */ ParamListInfo paramLI; - bool params_dirty; /* T if any resettable datum has been passed */ /* EState to use for "simple" expression evaluation */ EState *simple_eval_estate; From cce1ecfc77385233664de661d6a1b8b3c5d3d3f5 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 21 Dec 2017 13:10:51 -0500 Subject: [PATCH 0733/1087] Adjust assertion in GetCurrentCommandId. currentCommandIdUsed is only used to skip redundant increments of the command counter, and CommandCounterIncrement() is categorically denied under parallelism anyway. Therefore, it's OK for GetCurrentCommandId() to mark the counter value used, as long as it happens in the leader, not a worker. Prior to commit e9baa5e9fa147e00a2466ab2c40eb99c8a700824, the slightly incorrect check didn't matter, but now it does. A test case added by commit 1804284042e659e7d16904e7bbb0ad546394b6a3 uncovered the problem by accident; it caused failures with force_parallel_mode=on/regress. Report and review by Andres Freund. Patch by me. Discussion: http://postgr.es/m/20171221143106.5lhtygohvmazli3x@alap3.anarazel.de --- src/backend/access/transam/xact.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index d430e662e6..b37510c24f 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -683,12 +683,12 @@ GetCurrentCommandId(bool used) if (used) { /* - * Forbid setting currentCommandIdUsed in parallel mode, because we - * have no provision for communicating this back to the master. We + * Forbid setting currentCommandIdUsed in a parallel worker, because + * we have no provision for communicating this back to the master. We * could relax this restriction when currentCommandIdUsed was already * true at the start of the parallel operation. 
*/ - Assert(CurrentTransactionState->parallelModeLevel == 0); + Assert(!IsParallelWorker()); currentCommandIdUsed = true; } return currentCommandId; From 9373baa0f764392c504df034afd2f6b178c29491 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Thu, 21 Dec 2017 19:07:32 -0300 Subject: [PATCH 0734/1087] Minor edits to catalog files and scripts This fixes a few typos and small mistakes; it also cleans a few minor stylistic issues. The biggest functional change is that Gen_fmgrtab.pl no longer knows the OID of language 'internal'. Author: John Naylor Discussion: https://postgr.es/m/CAJVSVGXAkwbk-A9QHHHf00N905kKisyQbaYwKqaRpze_gPXGfg@mail.gmail.com --- src/backend/catalog/Catalog.pm | 22 ++++----- src/backend/catalog/Makefile | 2 +- src/backend/catalog/genbki.pl | 55 ++++++++++++---------- src/backend/utils/Gen_fmgrtab.pl | 11 +++-- src/include/catalog/pg_partitioned_table.h | 2 +- src/include/catalog/pg_sequence.h | 10 ++++ src/include/catalog/pg_statistic.h | 10 ++-- src/include/catalog/pg_subscription_rel.h | 5 +- 8 files changed, 62 insertions(+), 55 deletions(-) diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm index 54f83533b6..80bd9771f1 100644 --- a/src/backend/catalog/Catalog.pm +++ b/src/backend/catalog/Catalog.pm @@ -16,11 +16,6 @@ package Catalog; use strict; use warnings; -require Exporter; -our @ISA = qw(Exporter); -our @EXPORT = (); -our @EXPORT_OK = qw(Catalogs SplitDataLine RenameTempFile FindDefinedSymbol); - # Call this function with an array of names of header files to parse. # Returns a nested data structure describing the data in the headers. sub Catalogs @@ -36,7 +31,8 @@ sub Catalogs 'int64' => 'int8', 'Oid' => 'oid', 'NameData' => 'name', - 'TransactionId' => 'xid'); + 'TransactionId' => 'xid', + 'XLogRecPtr' => 'pg_lsn'); foreach my $input_file (@_) { @@ -162,7 +158,7 @@ sub Catalogs /BKI_WITHOUT_OIDS/ ? ' without_oids' : ''; $catalog{rowtype_oid} = /BKI_ROWTYPE_OID\((\d+)\)/ ? " rowtype_oid $1" : ''; - $catalog{schema_macro} = /BKI_SCHEMA_MACRO/ ? 'True' : ''; + $catalog{schema_macro} = /BKI_SCHEMA_MACRO/ ? 
1 : 0; $declaring_attributes = 1; } elsif ($declaring_attributes) @@ -175,7 +171,7 @@ sub Catalogs } else { - my %row; + my %column; my ($atttype, $attname, $attopt) = split /\s+/, $_; die "parse error ($input_file)" unless $attname; if (exists $RENAME_ATTTYPE{$atttype}) @@ -188,18 +184,18 @@ sub Catalogs $atttype .= '[]'; # variable-length only } - $row{'type'} = $atttype; - $row{'name'} = $attname; + $column{type} = $atttype; + $column{name} = $attname; if (defined $attopt) { if ($attopt eq 'BKI_FORCE_NULL') { - $row{'forcenull'} = 1; + $column{forcenull} = 1; } elsif ($attopt eq 'BKI_FORCE_NOT_NULL') { - $row{'forcenotnull'} = 1; + $column{forcenotnull} = 1; } else { @@ -207,7 +203,7 @@ sub Catalogs "unknown column option $attopt on column $attname"; } } - push @{ $catalog{columns} }, \%row; + push @{ $catalog{columns} }, \%column; } } } diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile index fd33426bad..30ca509534 100644 --- a/src/backend/catalog/Makefile +++ b/src/backend/catalog/Makefile @@ -45,7 +45,7 @@ POSTGRES_BKI_SRCS = $(addprefix $(top_srcdir)/src/include/catalog/,\ pg_default_acl.h pg_init_privs.h pg_seclabel.h pg_shseclabel.h \ pg_collation.h pg_partitioned_table.h pg_range.h pg_transform.h \ pg_sequence.h pg_publication.h pg_publication_rel.h pg_subscription.h \ - pg_subscription_rel.h toasting.h indexing.h \ + pg_subscription_rel.h \ toasting.h indexing.h \ ) diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl index e4a0b8b2c7..5b5b04f41c 100644 --- a/src/backend/catalog/genbki.pl +++ b/src/backend/catalog/genbki.pl @@ -20,7 +20,7 @@ use warnings; my @input_files; -our @include_path; +my @include_path; my $output_path = ''; my $major_version; @@ -105,7 +105,7 @@ my %schemapg_entries; my @tables_needing_macros; my %regprocoids; -our @types; +my @types; # produce output, one catalog at a time foreach my $catname (@{ $catalogs->{names} }) @@ -124,7 +124,8 @@ my $first = 1; print $bki " (\n"; - foreach my $column (@{ $catalog->{columns} }) + my $schema = $catalog->{columns}; + foreach my $column (@$schema) { my $attname = $column->{name}; my $atttype = $column->{type}; @@ -150,8 +151,9 @@ } print $bki "\n )\n"; - # open it, unless bootstrap case (create bootstrap does this automatically) - if ($catalog->{bootstrap} eq '') + # Open it, unless bootstrap case (create bootstrap does this + # automatically) + if (!$catalog->{bootstrap}) { print $bki "open $catname\n"; } @@ -169,21 +171,23 @@ Catalog::SplitDataLine($row->{bki_values}); # Perform required substitutions on fields - foreach my $att (keys %bki_values) + foreach my $column (@$schema) { + my $attname = $column->{name}; + my $atttype = $column->{type}; # Substitute constant values we acquired above. # (It's intentional that this can apply to parts of a field). - $bki_values{$att} =~ s/\bPGUID\b/$BOOTSTRAP_SUPERUSERID/g; - $bki_values{$att} =~ s/\bPGNSP\b/$PG_CATALOG_NAMESPACE/g; + $bki_values{$attname} =~ s/\bPGUID\b/$BOOTSTRAP_SUPERUSERID/g; + $bki_values{$attname} =~ s/\bPGNSP\b/$PG_CATALOG_NAMESPACE/g; # Replace regproc columns' values with OIDs. # If we don't have a unique value to substitute, # just do nothing (regprocin will complain). 
- if ($bki_attr{$att}->{type} eq 'regproc') + if ($atttype eq 'regproc') { - my $procoid = $regprocoids{ $bki_values{$att} }; - $bki_values{$att} = $procoid + my $procoid = $regprocoids{ $bki_values{$attname} }; + $bki_values{$attname} = $procoid if defined($procoid) && $procoid ne 'MULTIPLE'; } } @@ -215,16 +219,17 @@ printf $bki "insert %s( %s )\n", $oid, join(' ', @bki_values{@attnames}); - # Write comments to postgres.description and postgres.shdescription + # Write comments to postgres.description and + # postgres.shdescription if (defined $row->{descr}) { - printf $descr "%s\t%s\t0\t%s\n", $row->{oid}, $catname, - $row->{descr}; + printf $descr "%s\t%s\t0\t%s\n", + $row->{oid}, $catname, $row->{descr}; } if (defined $row->{shdescr}) { - printf $shdescr "%s\t%s\t%s\n", $row->{oid}, $catname, - $row->{shdescr}; + printf $shdescr "%s\t%s\t%s\n", + $row->{oid}, $catname, $row->{shdescr}; } } } @@ -240,11 +245,10 @@ # Currently, all bootstrapped relations also need schemapg.h # entries, so skip if the relation isn't to be in schemapg.h. - next if $table->{schema_macro} ne 'True'; + next if !$table->{schema_macro}; $schemapg_entries{$table_name} = []; push @tables_needing_macros, $table_name; - my $is_bootstrap = $table->{bootstrap}; # Generate entries for user attributes. my $attnum = 0; @@ -259,7 +263,7 @@ $priornotnull &= ($row->{attnotnull} eq 't'); # If it's bootstrapped, put an entry in postgres.bki. - if ($is_bootstrap eq ' bootstrap') + if ($table->{bootstrap}) { bki_insert($row, @attnames); } @@ -268,15 +272,14 @@ $row = emit_schemapg_row($row, grep { $bki_attr{$_}{type} eq 'bool' } @attnames); - push @{ $schemapg_entries{$table_name} }, '{ ' - . join( - ', ', grep { defined $_ } - map $row->{$_}, @attnames) . ' }'; + push @{ $schemapg_entries{$table_name} }, + sprintf "{ %s }", + join(', ', grep { defined $_ } @{$row}{@attnames}); } # Generate entries for system attributes. # We only need postgres.bki entries, not schemapg.h entries. - if ($is_bootstrap eq ' bootstrap') + if ($table->{bootstrap}) { $attnum = 0; my @SYS_ATTRS = ( @@ -294,9 +297,9 @@ $row->{attnum} = $attnum; $row->{attstattarget} = '0'; - # some catalogs don't have oids + # Omit the oid column if the catalog doesn't have them next - if $table->{without_oids} eq ' without_oids' + if $table->{without_oids} && $row->{attname} eq 'oid'; bki_insert($row, @attnames); diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl index 26b428b11e..14c02f5b57 100644 --- a/src/backend/utils/Gen_fmgrtab.pl +++ b/src/backend/utils/Gen_fmgrtab.pl @@ -2,7 +2,8 @@ #------------------------------------------------------------------------- # # Gen_fmgrtab.pl -# Perl script that generates fmgroids.h and fmgrtab.c from pg_proc.h +# Perl script that generates fmgroids.h, fmgrprotos.h, and fmgrtab.c +# from pg_proc.h # # Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California @@ -56,6 +57,8 @@ my $FirstBootstrapObjectId = Catalog::FindDefinedSymbol('access/transam.h', \@include_path, 'FirstBootstrapObjectId'); +my $INTERNALlanguageId = + Catalog::FindDefinedSymbol('catalog/pg_language.h', \@include_path, 'INTERNALlanguageId'); # Read all the data from the include/catalog files. my $catalogs = Catalog::Catalogs($infile); @@ -77,8 +80,7 @@ @bki_values{@attnames} = Catalog::SplitDataLine($row->{bki_values}); # Select out just the rows for internal-language procedures. - # Note assumption here that INTERNALlanguageId is 12. 
- next if $bki_values{prolang} ne '12'; + next if $bki_values{prolang} ne $INTERNALlanguageId; push @fmgr, { oid => $row->{oid}, @@ -281,7 +283,8 @@ sub usage die <. EOM diff --git a/src/include/catalog/pg_partitioned_table.h b/src/include/catalog/pg_partitioned_table.h index 525e541f93..731147ecbf 100644 --- a/src/include/catalog/pg_partitioned_table.h +++ b/src/include/catalog/pg_partitioned_table.h @@ -10,7 +10,7 @@ * src/include/catalog/pg_partitioned_table.h * * NOTES - * the genbki.sh script reads this file and generates .bki + * the genbki.pl script reads this file and generates .bki * information from the DATA() statements. * *------------------------------------------------------------------------- diff --git a/src/include/catalog/pg_sequence.h b/src/include/catalog/pg_sequence.h index 8ae6b7143d..6de54bb665 100644 --- a/src/include/catalog/pg_sequence.h +++ b/src/include/catalog/pg_sequence.h @@ -1,3 +1,13 @@ +/* ------------------------------------------------------------------------- + * + * pg_sequence.h + * definition of the system "sequence" relation (pg_sequence) + * + * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * ------------------------------------------------------------------------- + */ #ifndef PG_SEQUENCE_H #define PG_SEQUENCE_H diff --git a/src/include/catalog/pg_statistic.h b/src/include/catalog/pg_statistic.h index 3713a56bbd..43128f1928 100644 --- a/src/include/catalog/pg_statistic.h +++ b/src/include/catalog/pg_statistic.h @@ -161,12 +161,10 @@ typedef FormData_pg_statistic *Form_pg_statistic; #define Anum_pg_statistic_stavalues5 26 /* - * Currently, five statistical slot "kinds" are defined by core PostgreSQL, - * as documented below. Additional "kinds" will probably appear in - * future to help cope with non-scalar datatypes. Also, custom data types - * can define their own "kind" codes by mutual agreement between a custom - * typanalyze routine and the selectivity estimation functions of the type's - * operators. + * Several statistical slot "kinds" are defined by core PostgreSQL, as + * documented below. Also, custom data types can define their own "kind" + * codes by mutual agreement between a custom typanalyze routine and the + * selectivity estimation functions of the type's operators. * * Code reading the pg_statistic relation should not assume that a particular * data "kind" will appear in any particular slot. 
Instead, search the
diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h
index 991ca9d552..57482972fb 100644
--- a/src/include/catalog/pg_subscription_rel.h
+++ b/src/include/catalog/pg_subscription_rel.h
@@ -23,15 +23,12 @@
  */
 #define SubscriptionRelRelationId	6102
 
-/* Workaround for genbki not knowing about XLogRecPtr */
-#define pg_lsn XLogRecPtr
-
 CATALOG(pg_subscription_rel,6102) BKI_WITHOUT_OIDS
 {
 	Oid			srsubid;		/* Oid of subscription */
 	Oid			srrelid;		/* Oid of relation */
 	char		srsubstate;		/* state of the relation in subscription */
-	pg_lsn		srsublsn;		/* remote lsn of the state change used for
+	XLogRecPtr	srsublsn;		/* remote lsn of the state change used for
 								 * synchronization coordination */
 } FormData_pg_subscription_rel;
 
From 854823fa334cb826eed50da751801d0693b10173 Mon Sep 17 00:00:00 2001
From: Teodor Sigaev
Date: Fri, 22 Dec 2017 13:33:16 +0300
Subject: [PATCH 0735/1087] Add optional compression method to SP-GiST

The patch allows the column type and the type of the value stored in
SP-GiST leaf tuples to differ. The main application of the feature is to
transform a complex column type into a simpler indexed type, or to
truncate overly long values; the transformation may be lossy. A simple
example: polygons are converted to their bounding boxes; an opclass doing
exactly that follows in a later commit.

Authors: me, Heikki Linnakangas, Alexander Korotkov, Nikita Glukhov
Reviewed-By: all authors + Darafei Praliaskouski
Discussions: https://www.postgresql.org/message-id/5447B3FF.2080406@sigaev.ru
https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru#54907069.1030506@sigaev.ru
---
 doc/src/sgml/spgist.sgml                | 92 +++++++++++++++++++------
 src/backend/access/spgist/spgdoinsert.c | 37 ++++++++--
 src/backend/access/spgist/spgscan.c     |  6 +-
 src/backend/access/spgist/spgutils.c    | 21 +++++-
 src/backend/access/spgist/spgvalidate.c | 50 +++++++++++++-
 src/include/access/spgist.h             |  5 +-
 src/include/access/spgist_private.h     |  8 ++-
 7 files changed, 182 insertions(+), 37 deletions(-)

diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml
index 139c8ed8f7..b4a8be476e 100644
--- a/doc/src/sgml/spgist.sgml
+++ b/doc/src/sgml/spgist.sgml
@@ -240,20 +240,22 @@
 
    There are five user-defined methods that an index operator class for
-   SP-GiST must provide. All five follow the convention
-   of accepting two internal arguments, the first of which is a
-   pointer to a C struct containing input values for the support method,
-   while the second argument is a pointer to a C struct where output values
-   must be placed. Four of the methods just return void, since
-   all their results appear in the output struct; but
+   SP-GiST must provide, and a sixth is optional. All five
+   mandatory methods follow the convention of accepting two internal
+   arguments, the first of which is a pointer to a C struct containing input
+   values for the support method, while the second argument is a pointer to a
+   C struct where output values must be placed. Four of the mandatory methods just
+   return void, since all their results appear in the output struct; but
   leaf_consistent additionally returns a boolean result.
   The methods must not modify any fields of their input structs. In all
   cases, the output struct is initialized to zeroes before calling the
-   user-defined method.
+   user-defined method. The optional sixth method, compress,
+   accepts the datum to be indexed as its only argument and returns a value
+   suitable for physical storage in a leaf tuple.
-   The five user-defined methods are:
+   The five mandatory user-defined methods are:
 
@@ -283,6 +285,7 @@ typedef struct spgConfigOut
 {
     Oid         prefixType;     /* Data type of inner-tuple prefixes */
     Oid         labelType;      /* Data type of inner-tuple node labels */
+    Oid         leafType;       /* Data type of leaf-tuple values */
     bool        canReturnData;  /* Opclass can reconstruct original data */
     bool        longValuesOK;   /* Opclass can cope with values > 1 page */
 } spgConfigOut;
@@ -305,6 +308,22 @@ typedef struct spgConfigOut
      class is capable of segmenting long values by repeated suffixing
      (see ).
 
+
+     leafType is typically the same as
+     attType. For reasons of backward
+     compatibility, the config method can
+     leave leafType uninitialized; that would
+     give the same effect as setting leafType equal
+     to attType. When attType
+     and leafType are different, the optional
+     method compress must be provided.
+     The compress method is responsible
+     for transforming datums to be indexed from attType
+     to leafType.
+     Note: both consistent functions will get scankeys
+     unchanged, without transformation using compress.
+
 
@@ -380,10 +399,16 @@ typedef struct spgChooseOut
 } spgChooseOut;
 
-     datum is the original datum that was to be inserted
-     into the index.
-     leafDatum is initially the same as
-     datum, but can change at lower levels of the tree
+     datum is the original datum of
+     spgConfigIn.attType
+     type that was to be inserted into the index.
+     leafDatum is a value of
+     spgConfigOut.leafType
+     type, which is initially the result of applying the
+     compress method to datum
+     when that method is provided, or the same value as
+     datum otherwise.
+     leafDatum can change at lower levels of the tree
      if the choose or picksplit methods
      change it. When the insertion search reaches a leaf page,
      the current value of leafDatum is what will be stored
@@ -418,7 +443,7 @@ typedef struct spgChooseOut
      Set levelAdd to the increment in
      level caused by descending through that node,
      or leave it as zero if the operator class does not use levels.
-     Set restDatum to equal datum
+     Set restDatum to equal leafDatum
      if the operator class does not modify datums from one level to the
      next, or otherwise set it to the modified value to be used as
      leafDatum at the next level.
@@ -509,7 +534,9 @@ typedef struct spgPickSplitOut
      nTuples is the number of leaf tuples provided.
-     datums is an array of their datum values.
+     datums is an array of their datum values of
+     spgConfigOut.leafType
+     type.
      level is the current level that all the leaf tuples
      share, which will become the level of the new inner tuple.
@@ -624,7 +651,8 @@ typedef struct spgInnerConsistentOut
      reconstructedValue is the value reconstructed for the
      parent tuple; it is (Datum) 0 at the root level or if the
      inner_consistent function did not provide a value at the
-     parent level.
+     parent level. reconstructedValue is always of
+     spgConfigOut.leafType type.
      traversalValue is a pointer to any traverse data
      passed down from the previous call of inner_consistent
      on the parent index tuple, or NULL at the root level.
@@ -659,6 +687,7 @@ typedef struct spgInnerConsistentOut
      necessarily so, so an array is used.)
      If value reconstruction is needed, set
      reconstructedValues to an array of the values
+     of spgConfigOut.leafType type
      reconstructed for each child node to be visited; otherwise, leave
      reconstructedValues as NULL.
     If it is desired to pass down additional out-of-band information
@@ -730,7 +759,8 @@ typedef struct spgLeafConsistentOut
      reconstructedValue is the value reconstructed for the
      parent tuple; it is (Datum) 0 at the root level or if the
      inner_consistent function did not provide a value at the
-     parent level.
+     parent level. reconstructedValue is always of
+     spgConfigOut.leafType type.
      traversalValue is a pointer to any traverse data
      passed down from the previous call of inner_consistent
      on the parent index tuple, or NULL at the root level.
@@ -739,16 +769,18 @@ typedef struct spgLeafConsistentOut
      returnData is true if reconstructed data is
      required for this query; this will only be so if the
      config function asserted canReturnData.
-     leafDatum is the key value stored in the current
-     leaf tuple.
+     leafDatum is the key value of
+     spgConfigOut.leafType
+     stored in the current leaf tuple.
 
      The function must return true if the leaf tuple matches the
      query, or false if not. In the true case, if
      returnData is true then
-     leafValue must be set to the value originally supplied
-     to be indexed for this leaf tuple. Also,
+     leafValue must be set to the value of
+     spgConfigIn.attType type
+     originally supplied to be indexed for this leaf tuple. Also,
      recheck may be set to true if the match
      is uncertain and so the operator(s) must be re-applied to the actual
      heap tuple to verify the match.
@@ -757,6 +789,26 @@ typedef struct spgLeafConsistentOut
 
+
+   The optional user-defined method is:
+
+
+
+
+   Datum compress(Datum in)
+
+
+     Converts the data item into a format suitable for physical storage in
+     a leaf tuple of an index page. It accepts a value of
+     spgConfigIn.attType
+     type and returns a value of
+     spgConfigOut.leafType
+     type. The output value must not be toasted.
+
+
+
+
 
   All the SP-GiST support methods are normally called in a short-lived
   memory context; that is, CurrentMemoryContext will be reset
diff --git a/src/backend/access/spgist/spgdoinsert.c b/src/backend/access/spgist/spgdoinsert.c
index a5f4c4059c..a8cb8c7bdc 100644
--- a/src/backend/access/spgist/spgdoinsert.c
+++ b/src/backend/access/spgist/spgdoinsert.c
@@ -1906,14 +1906,37 @@ spgdoinsert(Relation index, SpGistState *state,
 	procinfo = index_getprocinfo(index, 1, SPGIST_CHOOSE_PROC);
 
 	/*
-	 * Since we don't use index_form_tuple in this AM, we have to make sure
+	 * Prepare the leaf datum to insert.
+	 *
+	 * If an optional "compress" method is provided, then call it to form
+	 * the leaf datum from the input datum. Otherwise store the input datum
+	 * as is. Since we don't use index_form_tuple in this AM, we have to make sure
 	 * value to be inserted is not toasted; FormIndexDatum doesn't guarantee
-	 * that.
+	 * that. But we assume the "compress" method returns an untoasted value.
 	 */
-	if (!isnull && state->attType.attlen == -1)
-		datum = PointerGetDatum(PG_DETOAST_DATUM(datum));
+	if (!isnull)
+	{
+		if (OidIsValid(index_getprocid(index, 1, SPGIST_COMPRESS_PROC)))
+		{
+			FmgrInfo   *compressProcinfo = NULL;
+
+			compressProcinfo = index_getprocinfo(index, 1, SPGIST_COMPRESS_PROC);
+			leafDatum = FunctionCall1Coll(compressProcinfo,
+										  index->rd_indcollation[0],
+										  datum);
+		}
+		else
+		{
+			Assert(state->attLeafType.type == state->attType.type);
 
-	leafDatum = datum;
+			if (state->attType.attlen == -1)
+				leafDatum = PointerGetDatum(PG_DETOAST_DATUM(datum));
+			else
+				leafDatum = datum;
+		}
+	}
+	else
+		leafDatum = (Datum) 0;
 
	/*
	 * Compute space needed for a leaf tuple containing the given datum.
@@ -1923,7 +1946,7 @@ spgdoinsert(Relation index, SpGistState *state,
 	 */
 	if (!isnull)
 		leafSize = SGLTHDRSZ + sizeof(ItemIdData) +
-			SpGistGetTypeSize(&state->attType, leafDatum);
+			SpGistGetTypeSize(&state->attLeafType, leafDatum);
 	else
 		leafSize = SGDTSIZE + sizeof(ItemIdData);
 
@@ -2138,7 +2161,7 @@ spgdoinsert(Relation index, SpGistState *state,
 			{
 				leafDatum = out.result.matchNode.restDatum;
 				leafSize = SGLTHDRSZ + sizeof(ItemIdData) +
-					SpGistGetTypeSize(&state->attType, leafDatum);
+					SpGistGetTypeSize(&state->attLeafType, leafDatum);
 			}
 
 			/*
diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c
index 7965b5846d..c64a174143 100644
--- a/src/backend/access/spgist/spgscan.c
+++ b/src/backend/access/spgist/spgscan.c
@@ -40,7 +40,7 @@ typedef struct ScanStackEntry
 static void
 freeScanStackEntry(SpGistScanOpaque so, ScanStackEntry *stackEntry)
 {
-	if (!so->state.attType.attbyval &&
+	if (!so->state.attLeafType.attbyval &&
 		DatumGetPointer(stackEntry->reconstructedValue) != NULL)
 		pfree(DatumGetPointer(stackEntry->reconstructedValue));
 	if (stackEntry->traversalValue)
@@ -527,8 +527,8 @@ spgWalk(Relation index, SpGistScanOpaque so, bool scanWholeIndex,
 					if (out.reconstructedValues)
 						newEntry->reconstructedValue =
 							datumCopy(out.reconstructedValues[i],
-									  so->state.attType.attbyval,
-									  so->state.attType.attlen);
+									  so->state.attLeafType.attbyval,
+									  so->state.attLeafType.attlen);
 					else
 						newEntry->reconstructedValue = (Datum) 0;
 
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index bd5301f383..e571f0cce0 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -125,6 +125,22 @@ spgGetCache(Relation index)
 
 		/* Get the information we need about each relevant datatype */
 		fillTypeDesc(&cache->attType, atttype);
+
+		if (OidIsValid(cache->config.leafType) &&
+			cache->config.leafType != atttype)
+		{
+			if (!OidIsValid(index_getprocid(index, 1, SPGIST_COMPRESS_PROC)))
+				ereport(ERROR,
+						(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+						 errmsg("compress method must be defined when leaf type is different from input type")));
+
+			fillTypeDesc(&cache->attLeafType, cache->config.leafType);
+		}
+		else
+		{
+			cache->attLeafType = cache->attType;
+		}
+
 		fillTypeDesc(&cache->attPrefixType, cache->config.prefixType);
 		fillTypeDesc(&cache->attLabelType, cache->config.labelType);
 
@@ -164,6 +180,7 @@ initSpGistState(SpGistState *state, Relation index)
 	state->config = cache->config;
 	state->attType = cache->attType;
+	state->attLeafType = cache->attLeafType;
 	state->attPrefixType = cache->attPrefixType;
 	state->attLabelType = cache->attLabelType;
 
@@ -618,7 +635,7 @@ spgFormLeafTuple(SpGistState *state, ItemPointer heapPtr,
 	/* compute space needed (note result is already maxaligned) */
 	size = SGLTHDRSZ;
 	if (!isnull)
-		size += SpGistGetTypeSize(&state->attType, datum);
+		size += SpGistGetTypeSize(&state->attLeafType, datum);
 
	/*
	 * Ensure that we can replace the tuple with a dead tuple later.  This
@@ -634,7 +651,7 @@ spgFormLeafTuple(SpGistState *state, ItemPointer heapPtr,
 	tup->nextOffset = InvalidOffsetNumber;
 	tup->heapPtr = *heapPtr;
 	if (!isnull)
-		memcpyDatum(SGLTDATAPTR(tup), &state->attType, datum);
+		memcpyDatum(SGLTDATAPTR(tup), &state->attLeafType, datum);
 
 	return tup;
 }
diff --git a/src/backend/access/spgist/spgvalidate.c b/src/backend/access/spgist/spgvalidate.c
index 157cf2a028..440b3ce917 100644
--- a/src/backend/access/spgist/spgvalidate.c
+++ b/src/backend/access/spgist/spgvalidate.c
@@ -22,6 +22,7 @@
 #include "catalog/pg_opfamily.h"
 #include "catalog/pg_type.h"
 #include "utils/builtins.h"
+#include "utils/lsyscache.h"
 #include "utils/regproc.h"
 #include "utils/syscache.h"
 
@@ -52,6 +53,10 @@ spgvalidate(Oid opclassoid)
 	OpFamilyOpFuncGroup *opclassgroup;
 	int			i;
 	ListCell   *lc;
+	spgConfigIn configIn;
+	spgConfigOut configOut;
+	Oid			configOutLefttype = InvalidOid;
+	Oid			configOutRighttype = InvalidOid;
 
 	/* Fetch opclass information */
 	classtup = SearchSysCache1(CLAOID, ObjectIdGetDatum(opclassoid));
@@ -74,6 +79,7 @@ spgvalidate(Oid opclassoid)
 	/* Fetch all operators and support functions of the opfamily */
 	oprlist = SearchSysCacheList1(AMOPSTRATEGY, ObjectIdGetDatum(opfamilyoid));
 	proclist = SearchSysCacheList1(AMPROCNUM, ObjectIdGetDatum(opfamilyoid));
+	grouplist = identify_opfamily_groups(oprlist, proclist);
 
 	/* Check individual support functions */
 	for (i = 0; i < proclist->n_members; i++)
@@ -100,6 +106,40 @@ spgvalidate(Oid opclassoid)
 		switch (procform->amprocnum)
 		{
 			case SPGIST_CONFIG_PROC:
+				ok = check_amproc_signature(procform->amproc, VOIDOID, true,
+											2, 2, INTERNALOID, INTERNALOID);
+				configIn.attType = procform->amproclefttype;
+				memset(&configOut, 0, sizeof(configOut));
+
+				OidFunctionCall2(procform->amproc,
+								 PointerGetDatum(&configIn),
+								 PointerGetDatum(&configOut));
+
+				configOutLefttype = procform->amproclefttype;
+				configOutRighttype = procform->amprocrighttype;
+
+				/*
+				 * When the leaf and attribute types are the same, the
+				 * compress function is not required, and we set the
+				 * corresponding bit in functionset for the later
+				 * group consistency check.
+ */ + if (!OidIsValid(configOut.leafType) || + configOut.leafType == configIn.attType) + { + foreach(lc, grouplist) + { + OpFamilyOpFuncGroup *group = lfirst(lc); + + if (group->lefttype == procform->amproclefttype && + group->righttype == procform->amprocrighttype) + { + group->functionset |= + ((uint64) 1) << SPGIST_COMPRESS_PROC; + break; + } + } + } + break; case SPGIST_CHOOSE_PROC: case SPGIST_PICKSPLIT_PROC: case SPGIST_INNER_CONSISTENT_PROC: @@ -110,6 +150,15 @@ spgvalidate(Oid opclassoid) ok = check_amproc_signature(procform->amproc, BOOLOID, true, 2, 2, INTERNALOID, INTERNALOID); break; + case SPGIST_COMPRESS_PROC: + if (configOutLefttype != procform->amproclefttype || + configOutRighttype != procform->amprocrighttype) + ok = false; + else + ok = check_amproc_signature(procform->amproc, + configOut.leafType, true, + 1, 1, procform->amproclefttype); + break; default: ereport(INFO, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), @@ -178,7 +227,6 @@ spgvalidate(Oid opclassoid) } /* Now check for inconsistent groups of operators/functions */ - grouplist = identify_opfamily_groups(oprlist, proclist); opclassgroup = NULL; foreach(lc, grouplist) { diff --git a/src/include/access/spgist.h b/src/include/access/spgist.h index d1bc396e6d..06b1d88e5a 100644 --- a/src/include/access/spgist.h +++ b/src/include/access/spgist.h @@ -30,7 +30,9 @@ #define SPGIST_PICKSPLIT_PROC 3 #define SPGIST_INNER_CONSISTENT_PROC 4 #define SPGIST_LEAF_CONSISTENT_PROC 5 -#define SPGISTNProc 5 +#define SPGIST_COMPRESS_PROC 6 +#define SPGISTNRequiredProc 5 +#define SPGISTNProc 6 /* * Argument structs for spg_config method @@ -44,6 +46,7 @@ typedef struct spgConfigOut { Oid prefixType; /* Data type of inner-tuple prefixes */ Oid labelType; /* Data type of inner-tuple node labels */ + Oid leafType; /* Data type of leaf-tuple values */ bool canReturnData; /* Opclass can reconstruct original data */ bool longValuesOK; /* Opclass can cope with values > 1 page */ } spgConfigOut; diff --git a/src/include/access/spgist_private.h b/src/include/access/spgist_private.h index 1c4b321b6c..e55de9dc54 100644 --- a/src/include/access/spgist_private.h +++ b/src/include/access/spgist_private.h @@ -119,7 +119,8 @@ typedef struct SpGistState { spgConfigOut config; /* filled in by opclass config method */ - SpGistTypeDesc attType; /* type of input data and leaf values */ + SpGistTypeDesc attType; /* type of values to be indexed/restored */ + SpGistTypeDesc attLeafType; /* type of leaf-tuple values */ SpGistTypeDesc attPrefixType; /* type of inner-tuple prefix values */ SpGistTypeDesc attLabelType; /* type of node label values */ @@ -178,7 +179,8 @@ typedef struct SpGistCache { spgConfigOut config; /* filled in by opclass config method */ - SpGistTypeDesc attType; /* type of input data and leaf values */ + SpGistTypeDesc attType; /* type of values to be indexed/restored */ + SpGistTypeDesc attLeafType; /* type of leaf-tuple values */ SpGistTypeDesc attPrefixType; /* type of inner-tuple prefix values */ SpGistTypeDesc attLabelType; /* type of node label values */ @@ -300,7 +302,7 @@ typedef SpGistLeafTupleData *SpGistLeafTuple; #define SGLTHDRSZ MAXALIGN(sizeof(SpGistLeafTupleData)) #define SGLTDATAPTR(x) (((char *) (x)) + SGLTHDRSZ) -#define SGLTDATUM(x, s) ((s)->attType.attbyval ? \ +#define SGLTDATUM(x, s) ((s)->attLeafType.attbyval ? 
\ *(Datum *) SGLTDATAPTR(x) : \ PointerGetDatum(SGLTDATAPTR(x))) From c4c2885cbb1803f772e58f6db4c8951d8cd672cd Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 22 Dec 2017 12:08:06 -0500 Subject: [PATCH 0736/1087] Fix UNION/INTERSECT/EXCEPT over no columns. Since 9.4, we've allowed the syntax "select union select" and variants of that. However, the planner wasn't expecting a no-column set operation and ended up treating the set operation as if it were UNION ALL. Turns out it's trivial to fix in v10 and later; we just need to be careful about not generating a Sort node with no sort keys. However, since a weird corner case like this is never going to be exercised by developers, we'd better have thorough regression tests if we want to consider it supported. Per report from Victor Yegorov. Discussion: https://postgr.es/m/CAGnEbojGJrRSOgJwNGM7JSJZpVAf8xXcVPbVrGdhbVEHZ-BUMw@mail.gmail.com --- src/backend/optimizer/plan/createplan.c | 1 - src/backend/optimizer/prep/prepunion.c | 26 +++--- src/test/regress/expected/union.out | 115 ++++++++++++++++++++++++ src/test/regress/sql/union.sql | 43 +++++++++ 4 files changed, 168 insertions(+), 17 deletions(-) diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index 1a0d3a885f..1a9fd82900 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -6326,7 +6326,6 @@ make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree, * convert SortGroupClause list into arrays of attr indexes and equality * operators, as wanted by executor */ - Assert(numCols > 0); dupColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numCols); dupOperators = (Oid *) palloc(sizeof(Oid) * numCols); diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index a24e8acfa6..f87849ea47 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -711,10 +711,6 @@ generate_nonunion_path(SetOperationStmt *op, PlannerInfo *root, /* Identify the grouping semantics */ groupList = generate_setop_grouplist(op, tlist); - /* punt if nothing to group on (can this happen?) */ - if (groupList == NIL) - return path; - /* * Estimate number of distinct groups that we'll need hashtable entries * for; this is the size of the left-hand input for EXCEPT, or the smaller @@ -741,7 +737,7 @@ generate_nonunion_path(SetOperationStmt *op, PlannerInfo *root, dNumGroups, dNumOutputRows, (op->op == SETOP_INTERSECT) ? "INTERSECT" : "EXCEPT"); - if (!use_hash) + if (groupList && !use_hash) path = (Path *) create_sort_path(root, result_rel, path, @@ -864,10 +860,6 @@ make_union_unique(SetOperationStmt *op, Path *path, List *tlist, /* Identify the grouping semantics */ groupList = generate_setop_grouplist(op, tlist); - /* punt if nothing to group on (can this happen?) */ - if (groupList == NIL) - return path; - /* * XXX for the moment, take the number of distinct groups as equal to the * total input size, ie, the worst case. 
This is too conservative, but we @@ -898,13 +890,15 @@ make_union_unique(SetOperationStmt *op, Path *path, List *tlist, else { /* Sort and Unique */ - path = (Path *) create_sort_path(root, - result_rel, - path, - make_pathkeys_for_sortclauses(root, - groupList, - tlist), - -1.0); + if (groupList) + path = (Path *) + create_sort_path(root, + result_rel, + path, + make_pathkeys_for_sortclauses(root, + groupList, + tlist), + -1.0); /* We have to manually jam the right tlist into the path; ick */ path->pathtarget = create_pathtarget(root, tlist); path = (Path *) create_upper_unique_path(root, diff --git a/src/test/regress/expected/union.out b/src/test/regress/expected/union.out index ee26b163f7..92d427a690 100644 --- a/src/test/regress/expected/union.out +++ b/src/test/regress/expected/union.out @@ -552,6 +552,121 @@ SELECT q1 FROM int8_tbl EXCEPT (((SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1))) 4567890123456789 | -4567890123456789 (5 rows) +-- +-- Check behavior with empty select list (allowed since 9.4) +-- +select union select; +-- +(1 row) + +select intersect select; +-- +(1 row) + +select except select; +-- +(0 rows) + +-- check hashed implementation +set enable_hashagg = true; +set enable_sort = false; +explain (costs off) +select from generate_series(1,5) union select from generate_series(1,3); + QUERY PLAN +---------------------------------------------------------------- + HashAggregate + -> Append + -> Function Scan on generate_series + -> Function Scan on generate_series generate_series_1 +(4 rows) + +explain (costs off) +select from generate_series(1,5) intersect select from generate_series(1,3); + QUERY PLAN +---------------------------------------------------------------------- + HashSetOp Intersect + -> Append + -> Subquery Scan on "*SELECT* 1" + -> Function Scan on generate_series + -> Subquery Scan on "*SELECT* 2" + -> Function Scan on generate_series generate_series_1 +(6 rows) + +select from generate_series(1,5) union select from generate_series(1,3); +-- +(1 row) + +select from generate_series(1,5) union all select from generate_series(1,3); +-- +(8 rows) + +select from generate_series(1,5) intersect select from generate_series(1,3); +-- +(1 row) + +select from generate_series(1,5) intersect all select from generate_series(1,3); +-- +(3 rows) + +select from generate_series(1,5) except select from generate_series(1,3); +-- +(0 rows) + +select from generate_series(1,5) except all select from generate_series(1,3); +-- +(2 rows) + +-- check sorted implementation +set enable_hashagg = false; +set enable_sort = true; +explain (costs off) +select from generate_series(1,5) union select from generate_series(1,3); + QUERY PLAN +---------------------------------------------------------------- + Unique + -> Append + -> Function Scan on generate_series + -> Function Scan on generate_series generate_series_1 +(4 rows) + +explain (costs off) +select from generate_series(1,5) intersect select from generate_series(1,3); + QUERY PLAN +---------------------------------------------------------------------- + SetOp Intersect + -> Append + -> Subquery Scan on "*SELECT* 1" + -> Function Scan on generate_series + -> Subquery Scan on "*SELECT* 2" + -> Function Scan on generate_series generate_series_1 +(6 rows) + +select from generate_series(1,5) union select from generate_series(1,3); +-- +(1 row) + +select from generate_series(1,5) union all select from generate_series(1,3); +-- +(8 rows) + +select from generate_series(1,5) intersect select from generate_series(1,3); +-- +(1 row) + +select from 
generate_series(1,5) intersect all select from generate_series(1,3); +-- +(3 rows) + +select from generate_series(1,5) except select from generate_series(1,3); +-- +(0 rows) + +select from generate_series(1,5) except all select from generate_series(1,3); +-- +(2 rows) + +reset enable_hashagg; +reset enable_sort; -- -- Check handling of a case with unknown constants. We don't guarantee -- an undecorated constant will work in all cases, but historically this diff --git a/src/test/regress/sql/union.sql b/src/test/regress/sql/union.sql index c0317cccb4..eed7c8d34b 100644 --- a/src/test/regress/sql/union.sql +++ b/src/test/regress/sql/union.sql @@ -190,6 +190,49 @@ SELECT q1 FROM int8_tbl EXCEPT (((SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1))) (((((select * from int8_tbl))))); +-- +-- Check behavior with empty select list (allowed since 9.4) +-- + +select union select; +select intersect select; +select except select; + +-- check hashed implementation +set enable_hashagg = true; +set enable_sort = false; + +explain (costs off) +select from generate_series(1,5) union select from generate_series(1,3); +explain (costs off) +select from generate_series(1,5) intersect select from generate_series(1,3); + +select from generate_series(1,5) union select from generate_series(1,3); +select from generate_series(1,5) union all select from generate_series(1,3); +select from generate_series(1,5) intersect select from generate_series(1,3); +select from generate_series(1,5) intersect all select from generate_series(1,3); +select from generate_series(1,5) except select from generate_series(1,3); +select from generate_series(1,5) except all select from generate_series(1,3); + +-- check sorted implementation +set enable_hashagg = false; +set enable_sort = true; + +explain (costs off) +select from generate_series(1,5) union select from generate_series(1,3); +explain (costs off) +select from generate_series(1,5) intersect select from generate_series(1,3); + +select from generate_series(1,5) union select from generate_series(1,3); +select from generate_series(1,5) union all select from generate_series(1,3); +select from generate_series(1,5) intersect select from generate_series(1,3); +select from generate_series(1,5) intersect all select from generate_series(1,3); +select from generate_series(1,5) except select from generate_series(1,3); +select from generate_series(1,5) except all select from generate_series(1,3); + +reset enable_hashagg; +reset enable_sort; + -- -- Check handling of a case with unknown constants. We don't guarantee -- an undecorated constant will work in all cases, but historically this From 4e2970f8807f1ccfc8029979a70dc80ee102ce48 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Sun, 24 Dec 2017 02:57:55 -0800 Subject: [PATCH 0737/1087] Fix assert with side effects in the new PHJ code. Instead of asserting the assert just set the value to what it was supposed to test... Per coverity. 
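For context, a minimal illustration of the hazard (hypothetical code, not
part of the patch): an Assert body is compiled out entirely in non-assert
builds, so a side effect hidden inside it silently disappears there, while
in assert-enabled builds the assignment clobbers the variable and the
"check" passes for any nonzero value.

    #include <assert.h>

    static int nbatch = 4;

    static void broken(int expected)
    {
        /* BUG: "=" assigns instead of comparing; nbatch is overwritten,
         * and the whole statement vanishes when assertions are disabled. */
        assert(nbatch = expected);
    }

    static void fixed(int expected)
    {
        /* Correct: a side-effect-free comparison of the invariant. */
        assert(nbatch == expected);
    }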
---
 src/backend/executor/nodeHash.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 4284e8682a..0a519fae31 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -1275,7 +1275,7 @@ ExecParallelHashRepartitionFirst(HashJoinTable hashtable)
 	dsa_pointer chunk_shared;
 	HashMemoryChunk chunk;
 
-	Assert(hashtable->nbatch = hashtable->parallel_state->nbatch);
+	Assert(hashtable->nbatch == hashtable->parallel_state->nbatch);
 
 	while ((chunk = ExecParallelHashPopChunkQueue(hashtable, &chunk_shared)))
 	{

From ff963b393ca93a71d2f398c4c584b322cd351c2c Mon Sep 17 00:00:00 2001
From: Teodor Sigaev
Date: Mon, 25 Dec 2017 18:59:38 +0300
Subject: [PATCH 0738/1087] Add polygon opclass for SP-GiST

The polygon opclass uses the compress method feature of SP-GiST added
earlier. For now it is the only operator class that uses this feature.
SP-GiST actually indexes the bounding boxes of the input polygons, so some
of the supported operations are lossy. The opclass reuses most methods of
the corresponding SP-GiST opclass over boxes, treating bounding boxes as
points in 4D space.

Bump catalog version.

Authors: Nikita Glukhov, Alexander Korotkov with minor editing by me
Reviewed-By: all authors + Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/54907069.1030506@sigaev.ru
---
 doc/src/sgml/spgist.sgml                   |  18 ++
 src/backend/utils/adt/geo_ops.c            |   3 +-
 src/backend/utils/adt/geo_spgist.c         |  92 +++++++-
 src/include/catalog/catversion.h           |   2 +-
 src/include/catalog/pg_amop.h              |  16 ++
 src/include/catalog/pg_amproc.h            |   6 +
 src/include/catalog/pg_opclass.h           |   1 +
 src/include/catalog/pg_opfamily.h          |   1 +
 src/include/catalog/pg_proc.h              |   5 +
 src/include/utils/geo_decls.h              |   3 +-
 src/test/regress/expected/polygon.out      | 238 +++++++++++++++++++++
 src/test/regress/expected/sanity_check.out |   3 +
 src/test/regress/sql/polygon.sql           |  93 ++++++++
 13 files changed, 473 insertions(+), 8 deletions(-)

diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml
index b4a8be476e..51bb60c92a 100644
--- a/doc/src/sgml/spgist.sgml
+++ b/doc/src/sgml/spgist.sgml
@@ -130,6 +130,24 @@
       |&>
      
 
+    
+     poly_ops
+     polygon
+     
+      <<
+      &<
+      &&
+      &>
+      >>
+      ~=
+      @>
+      <@
+      &<|
+      <<|
+      |>>
+      |&>
+     
+    
+
     
      text_ops
      text
diff --git a/src/backend/utils/adt/geo_ops.c b/src/backend/utils/adt/geo_ops.c
index 9dbe5db2b2..f00ea54033 100644
--- a/src/backend/utils/adt/geo_ops.c
+++ b/src/backend/utils/adt/geo_ops.c
@@ -41,7 +41,6 @@ enum path_delim
 static int	point_inside(Point *p, int npts, Point *plist);
 static int	lseg_crossing(double x, double y, double px, double py);
 static BOX *box_construct(double x1, double x2, double y1, double y2);
-static BOX *box_copy(BOX *box);
 static BOX *box_fill(BOX *result, double x1, double x2, double y1, double y2);
 static bool box_ov(BOX *box1, BOX *box2);
 static double box_ht(BOX *box);
@@ -482,7 +481,7 @@ box_fill(BOX *result, double x1, double x2, double y1, double y2)
 
 /*		box_copy	-	copy a box
  */
-static BOX *
+BOX *
 box_copy(BOX *box)
 {
 	BOX		   *result = (BOX *) palloc(sizeof(BOX));
diff --git a/src/backend/utils/adt/geo_spgist.c b/src/backend/utils/adt/geo_spgist.c
index f6334bae14..a10543600f 100644
--- a/src/backend/utils/adt/geo_spgist.c
+++ b/src/backend/utils/adt/geo_spgist.c
@@ -391,7 +391,7 @@ spg_box_quad_choose(PG_FUNCTION_ARGS)
 	spgChooseIn *in = (spgChooseIn *) PG_GETARG_POINTER(0);
 	spgChooseOut *out = (spgChooseOut *) PG_GETARG_POINTER(1);
 	BOX		   *centroid = DatumGetBoxP(in->prefixDatum),
-			   *box = DatumGetBoxP(in->datum);
+			   *box = DatumGetBoxP(in->leafDatum);
 
 	out->resultType = spgMatchNode;
 	out->result.matchNode.restDatum = BoxPGetDatum(box);
@@ -473,6 +473,51 @@ spg_box_quad_picksplit(PG_FUNCTION_ARGS)
 	PG_RETURN_VOID();
 }
 
+/*
+ * Check if the result of the consistent method based on the bounding box
+ * is exact.
+ */
+static bool
+is_bounding_box_test_exact(StrategyNumber strategy)
+{
+	switch (strategy)
+	{
+		case RTLeftStrategyNumber:
+		case RTOverLeftStrategyNumber:
+		case RTOverRightStrategyNumber:
+		case RTRightStrategyNumber:
+		case RTOverBelowStrategyNumber:
+		case RTBelowStrategyNumber:
+		case RTAboveStrategyNumber:
+		case RTOverAboveStrategyNumber:
+			return true;
+
+		default:
+			return false;
+	}
+}
+
+/*
+ * Get the bounding box for a ScanKey.
+ */
+static BOX *
+spg_box_quad_get_scankey_bbox(ScanKey sk, bool *recheck)
+{
+	switch (sk->sk_subtype)
+	{
+		case BOXOID:
+			return DatumGetBoxP(sk->sk_argument);
+
+		case POLYGONOID:
+			if (recheck && !is_bounding_box_test_exact(sk->sk_strategy))
+				*recheck = true;
+			return &DatumGetPolygonP(sk->sk_argument)->boundbox;
+
+		default:
+			elog(ERROR, "unrecognized scankey subtype: %d", sk->sk_subtype);
+			return NULL;
+	}
+}
+
 /*
  * SP-GiST inner consistent function
  */
@@ -515,7 +560,11 @@ spg_box_quad_inner_consistent(PG_FUNCTION_ARGS)
 	centroid = getRangeBox(DatumGetBoxP(in->prefixDatum));
 	queries = (RangeBox **) palloc(in->nkeys * sizeof(RangeBox *));
 	for (i = 0; i < in->nkeys; i++)
-		queries[i] = getRangeBox(DatumGetBoxP(in->scankeys[i].sk_argument));
+	{
+		BOX		   *box = spg_box_quad_get_scankey_bbox(&in->scankeys[i], NULL);
+
+		queries[i] = getRangeBox(box);
+	}
 
 	/* Allocate enough memory for nodes */
 	out->nNodes = 0;
@@ -637,8 +686,10 @@ spg_box_quad_leaf_consistent(PG_FUNCTION_ARGS)
 	/* Perform the required comparison(s) */
 	for (i = 0; i < in->nkeys; i++)
 	{
-		StrategyNumber strategy = in->scankeys[i].sk_strategy;
-		Datum		query = in->scankeys[i].sk_argument;
+		StrategyNumber strategy = in->scankeys[i].sk_strategy;
+		BOX		   *box = spg_box_quad_get_scankey_bbox(&in->scankeys[i],
+														&out->recheck);
+		Datum		query = BoxPGetDatum(box);
 
 		switch (strategy)
 		{
@@ -713,3 +764,36 @@ spg_box_quad_leaf_consistent(PG_FUNCTION_ARGS)
 
 	PG_RETURN_BOOL(flag);
 }
+
+
+/*
+ * SP-GiST config function for 2-D types that are lossily represented by
+ * their bounding boxes
+ */
+Datum
+spg_bbox_quad_config(PG_FUNCTION_ARGS)
+{
+	spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1);
+
+	cfg->prefixType = BOXOID;	/* A type represented by its bounding box */
+	cfg->labelType = VOIDOID;	/* We don't need node labels.
*/ + cfg->leafType = BOXOID; + cfg->canReturnData = false; + cfg->longValuesOK = false; + + PG_RETURN_VOID(); +} + +/* + * SP-GiST compress function for polygons + */ +Datum +spg_poly_quad_compress(PG_FUNCTION_ARGS) +{ + POLYGON *polygon = PG_GETARG_POLYGON_P(0); + BOX *box; + + box = box_copy(&polygon->boundbox); + + PG_RETURN_BOX_P(box); +} diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index b13cf62bec..3934582efc 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201711301 +#define CATALOG_VERSION_NO 201712251 #endif diff --git a/src/include/catalog/pg_amop.h b/src/include/catalog/pg_amop.h index f850be490a..d8770798a6 100644 --- a/src/include/catalog/pg_amop.h +++ b/src/include/catalog/pg_amop.h @@ -857,6 +857,22 @@ DATA(insert ( 5000 603 603 10 s 2570 4000 0 )); DATA(insert ( 5000 603 603 11 s 2573 4000 0 )); DATA(insert ( 5000 603 603 12 s 2572 4000 0 )); +/* + * SP-GiST poly_ops (supports polygons) + */ +DATA(insert ( 5008 604 604 1 s 485 4000 0 )); +DATA(insert ( 5008 604 604 2 s 486 4000 0 )); +DATA(insert ( 5008 604 604 3 s 492 4000 0 )); +DATA(insert ( 5008 604 604 4 s 487 4000 0 )); +DATA(insert ( 5008 604 604 5 s 488 4000 0 )); +DATA(insert ( 5008 604 604 6 s 491 4000 0 )); +DATA(insert ( 5008 604 604 7 s 490 4000 0 )); +DATA(insert ( 5008 604 604 8 s 489 4000 0 )); +DATA(insert ( 5008 604 604 9 s 2575 4000 0 )); +DATA(insert ( 5008 604 604 10 s 2574 4000 0 )); +DATA(insert ( 5008 604 604 11 s 2577 4000 0 )); +DATA(insert ( 5008 604 604 12 s 2576 4000 0 )); + /* * GiST inet_ops */ diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h index 1c95846207..b25ad105fd 100644 --- a/src/include/catalog/pg_amproc.h +++ b/src/include/catalog/pg_amproc.h @@ -334,6 +334,12 @@ DATA(insert ( 5000 603 603 2 5013 )); DATA(insert ( 5000 603 603 3 5014 )); DATA(insert ( 5000 603 603 4 5015 )); DATA(insert ( 5000 603 603 5 5016 )); +DATA(insert ( 5008 604 604 1 5010 )); +DATA(insert ( 5008 604 604 2 5013 )); +DATA(insert ( 5008 604 604 3 5014 )); +DATA(insert ( 5008 604 604 4 5015 )); +DATA(insert ( 5008 604 604 5 5016 )); +DATA(insert ( 5008 604 604 6 5011 )); /* BRIN opclasses */ /* minmax bytea */ diff --git a/src/include/catalog/pg_opclass.h b/src/include/catalog/pg_opclass.h index 28dbc747d5..6aabc7279f 100644 --- a/src/include/catalog/pg_opclass.h +++ b/src/include/catalog/pg_opclass.h @@ -205,6 +205,7 @@ DATA(insert ( 4000 box_ops PGNSP PGUID 5000 603 t 0 )); DATA(insert ( 4000 quad_point_ops PGNSP PGUID 4015 600 t 0 )); DATA(insert ( 4000 kd_point_ops PGNSP PGUID 4016 600 f 0 )); DATA(insert ( 4000 text_ops PGNSP PGUID 4017 25 t 0 )); +DATA(insert ( 4000 poly_ops PGNSP PGUID 5008 604 t 603 )); DATA(insert ( 403 jsonb_ops PGNSP PGUID 4033 3802 t 0 )); DATA(insert ( 405 jsonb_ops PGNSP PGUID 4034 3802 t 0 )); DATA(insert ( 2742 jsonb_ops PGNSP PGUID 4036 3802 t 25 )); diff --git a/src/include/catalog/pg_opfamily.h b/src/include/catalog/pg_opfamily.h index 0d0ba7c66a..838812b932 100644 --- a/src/include/catalog/pg_opfamily.h +++ b/src/include/catalog/pg_opfamily.h @@ -186,5 +186,6 @@ DATA(insert OID = 4103 ( 3580 range_inclusion_ops PGNSP PGUID )); DATA(insert OID = 4082 ( 3580 pg_lsn_minmax_ops PGNSP PGUID )); DATA(insert OID = 4104 ( 3580 box_inclusion_ops PGNSP PGUID )); DATA(insert OID = 5000 ( 4000 box_ops PGNSP PGUID )); +DATA(insert OID = 5008 ( 4000 poly_ops PGNSP PGUID )); #endif /* PG_OPFAMILY_H */ diff --git 
a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index c969375981..830bab37ea 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -5335,6 +5335,11 @@ DESCR("SP-GiST support for quad tree over box");
 DATA(insert OID = 5016 (  spg_box_quad_leaf_consistent PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "2281 2281" _null_ _null_ _null_ _null_ _null_ spg_box_quad_leaf_consistent _null_ _null_ _null_ ));
 DESCR("SP-GiST support for quad tree over box");
 
+DATA(insert OID = 5010 (  spg_bbox_quad_config PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 2278 "2281 2281" _null_ _null_ _null_ _null_ _null_ spg_bbox_quad_config _null_ _null_ _null_ ));
+DESCR("SP-GiST support for quad tree over 2-D types represented by their bounding boxes");
+DATA(insert OID = 5011 (  spg_poly_quad_compress PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 603 "604" _null_ _null_ _null_ _null_ _null_ spg_poly_quad_compress _null_ _null_ _null_ ));
+DESCR("SP-GiST support for quad tree over polygons");
+
 /* replication slots */
 DATA(insert OID = 3779 (  pg_create_physical_replication_slot PGNSP PGUID 12 1 0 0 0 f f f f t f v u 3 0 2249 "19 16 16" "{19,16,16,19,3220}" "{i,i,i,o,o}" "{slot_name,immediately_reserve,temporary,slot_name,lsn}" _null_ _null_ pg_create_physical_replication_slot _null_ _null_ _null_ ));
 DESCR("create a physical replication slot");
diff --git a/src/include/utils/geo_decls.h b/src/include/utils/geo_decls.h
index 44c6381b85..c89e6c3d1c 100644
--- a/src/include/utils/geo_decls.h
+++ b/src/include/utils/geo_decls.h
@@ -178,9 +178,10 @@ typedef struct
  * in geo_ops.c
  */
 
-/* private point routines */
+/* private routines */
 extern double point_dt(Point *pt1, Point *pt2);
 extern double point_sl(Point *pt1, Point *pt2);
 extern double pg_hypot(double x, double y);
+extern BOX *box_copy(BOX *box);
 
 #endif							/* GEO_DECLS_H */
diff --git a/src/test/regress/expected/polygon.out b/src/test/regress/expected/polygon.out
index 2361274f9e..4a1f60427a 100644
--- a/src/test/regress/expected/polygon.out
+++ b/src/test/regress/expected/polygon.out
@@ -227,3 +227,241 @@ SELECT '(0,0)'::point <-> '((0,0),(1,2),(2,1))'::polygon as on_corner,
      0 |      0 |      0 | 1.4142135623731 |          3.2
 (1 row)
 
+--
+-- Test the SP-GiST index
+--
+CREATE TABLE quad_poly_tbl (id int, p polygon);
+INSERT INTO quad_poly_tbl
+	SELECT (x - 1) * 100 + y, polygon(circle(point(x * 10, y * 10), 1 + (x + y) % 10))
+	FROM generate_series(1, 100) x,
+		 generate_series(1, 100) y;
+INSERT INTO quad_poly_tbl
+	SELECT i, polygon '((200, 300),(210, 310),(230, 290))'
+	FROM generate_series(10001, 11000) AS i;
+INSERT INTO quad_poly_tbl
+	VALUES
+		(11001, NULL),
+		(11002, NULL),
+		(11003, NULL);
+CREATE INDEX quad_poly_tbl_idx ON quad_poly_tbl USING spgist(p);
+-- get reference results for ORDER BY distance from seq scan
+SET enable_seqscan = ON;
+SET enable_indexscan = OFF;
+SET enable_bitmapscan = OFF;
+CREATE TABLE quad_poly_tbl_ord_seq1 AS
+SELECT rank() OVER (ORDER BY p <-> point '123,456') n, p <-> point '123,456' dist, id
+FROM quad_poly_tbl;
+CREATE TABLE quad_poly_tbl_ord_seq2 AS
+SELECT rank() OVER (ORDER BY p <-> point '123,456') n, p <-> point '123,456' dist, id
+FROM quad_poly_tbl WHERE p <@ polygon '((300,300),(400,600),(600,500),(700,200))';
+-- check results from index scan
+SET enable_seqscan = OFF;
+SET enable_indexscan = OFF;
+SET enable_bitmapscan = ON;
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM quad_poly_tbl WHERE p << polygon '((300,300),(400,600),(600,500),(700,200))';
+                                       QUERY PLAN
+--------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p << '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p << '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p << polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 3890 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p &< polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +--------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p &< '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p &< '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p &< polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 7900 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p && polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +--------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p && '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p && '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p && polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 977 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p &> polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +--------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p &> '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p &> '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p &> polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 7000 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p >> polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +--------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p >> '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p >> '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p >> polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 2990 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p <<| polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +---------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p <<| '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p <<| '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p <<| polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 1890 +(1 
row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p &<| polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +---------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p &<| '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p &<| '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p &<| polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 6900 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p |&> polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +---------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p |&> '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p |&> '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p |&> polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 9000 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p |>> polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +---------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p |>> '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p |>> '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p |>> polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 3990 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p <@ polygon '((300,300),(400,600),(600,500),(700,200))'; + QUERY PLAN +--------------------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p <@ '((300,300),(400,600),(600,500),(700,200))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p <@ '((300,300),(400,600),(600,500),(700,200))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p <@ polygon '((300,300),(400,600),(600,500),(700,200))'; + count +------- + 831 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p @> polygon '((340,550),(343,552),(341,553))'; + QUERY PLAN +----------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p @> '((340,550),(343,552),(341,553))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p @> '((340,550),(343,552),(341,553))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p @> polygon '((340,550),(343,552),(341,553))'; + count +------- + 1 +(1 row) + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p ~= polygon '((200, 300),(210, 310),(230, 290))'; + QUERY PLAN +----------------------------------------------------------------------------- + Aggregate + -> Bitmap Heap Scan on quad_poly_tbl + Recheck Cond: (p ~= '((200,300),(210,310),(230,290))'::polygon) + -> Bitmap Index Scan on quad_poly_tbl_idx + Index Cond: (p ~= '((200,300),(210,310),(230,290))'::polygon) +(5 rows) + +SELECT count(*) FROM quad_poly_tbl WHERE p ~= polygon '((200, 
300),(210, 310),(230, 290))';
+ count 
+-------
+  1000
+(1 row)
+
+RESET enable_seqscan;
+RESET enable_indexscan;
+RESET enable_bitmapscan;
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index e996640593..ac0fb539e9 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -166,6 +166,9 @@ point_tbl|t
 polygon_tbl|t
 quad_box_tbl|t
 quad_point_tbl|t
+quad_poly_tbl|t
+quad_poly_tbl_ord_seq1|f
+quad_poly_tbl_ord_seq2|f
 radix_text_tbl|t
 ramp|f
 real_city|f
diff --git a/src/test/regress/sql/polygon.sql b/src/test/regress/sql/polygon.sql
index 7ac8079465..7e8cb08cd8 100644
--- a/src/test/regress/sql/polygon.sql
+++ b/src/test/regress/sql/polygon.sql
@@ -116,3 +116,96 @@ SELECT '(0,0)'::point <-> '((0,0),(1,2),(2,1))'::polygon as on_corner,
        '(2,2)'::point <-> '((0,0),(1,4),(3,1))'::polygon as inside,
        '(3,3)'::point <-> '((0,2),(2,0),(2,2))'::polygon as near_corner,
        '(4,4)'::point <-> '((0,0),(0,3),(4,0))'::polygon as near_segment;
+
+--
+-- Test the SP-GiST index
+--
+
+CREATE TABLE quad_poly_tbl (id int, p polygon);
+
+INSERT INTO quad_poly_tbl
+	SELECT (x - 1) * 100 + y, polygon(circle(point(x * 10, y * 10), 1 + (x + y) % 10))
+	FROM generate_series(1, 100) x,
+		 generate_series(1, 100) y;
+
+INSERT INTO quad_poly_tbl
+	SELECT i, polygon '((200, 300),(210, 310),(230, 290))'
+	FROM generate_series(10001, 11000) AS i;
+
+INSERT INTO quad_poly_tbl
+	VALUES
+		(11001, NULL),
+		(11002, NULL),
+		(11003, NULL);
+
+CREATE INDEX quad_poly_tbl_idx ON quad_poly_tbl USING spgist(p);
+
+-- get reference results for ORDER BY distance from seq scan
+SET enable_seqscan = ON;
+SET enable_indexscan = OFF;
+SET enable_bitmapscan = OFF;
+
+CREATE TABLE quad_poly_tbl_ord_seq1 AS
+SELECT rank() OVER (ORDER BY p <-> point '123,456') n, p <-> point '123,456' dist, id
+FROM quad_poly_tbl;
+
+CREATE TABLE quad_poly_tbl_ord_seq2 AS
+SELECT rank() OVER (ORDER BY p <-> point '123,456') n, p <-> point '123,456' dist, id
+FROM quad_poly_tbl WHERE p <@ polygon '((300,300),(400,600),(600,500),(700,200))';
+
+-- check results from index scan
+SET enable_seqscan = OFF;
+SET enable_indexscan = OFF;
+SET enable_bitmapscan = ON;
+
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM quad_poly_tbl WHERE p << polygon '((300,300),(400,600),(600,500),(700,200))';
+SELECT count(*) FROM quad_poly_tbl WHERE p << polygon '((300,300),(400,600),(600,500),(700,200))';
+
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM quad_poly_tbl WHERE p &< polygon '((300,300),(400,600),(600,500),(700,200))';
+SELECT count(*) FROM quad_poly_tbl WHERE p &< polygon '((300,300),(400,600),(600,500),(700,200))';
+
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM quad_poly_tbl WHERE p && polygon '((300,300),(400,600),(600,500),(700,200))';
+SELECT count(*) FROM quad_poly_tbl WHERE p && polygon '((300,300),(400,600),(600,500),(700,200))';
+
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM quad_poly_tbl WHERE p &> polygon '((300,300),(400,600),(600,500),(700,200))';
+SELECT count(*) FROM quad_poly_tbl WHERE p &> polygon '((300,300),(400,600),(600,500),(700,200))';
+
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM quad_poly_tbl WHERE p >> polygon '((300,300),(400,600),(600,500),(700,200))';
+SELECT count(*) FROM quad_poly_tbl WHERE p >> polygon '((300,300),(400,600),(600,500),(700,200))';
+
+EXPLAIN (COSTS OFF)
+SELECT count(*) FROM quad_poly_tbl WHERE p <<| polygon '((300,300),(400,600),(600,500),(700,200))';
+SELECT count(*) FROM quad_poly_tbl WHERE p <<| polygon
'((300,300),(400,600),(600,500),(700,200))'; + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p &<| polygon '((300,300),(400,600),(600,500),(700,200))'; +SELECT count(*) FROM quad_poly_tbl WHERE p &<| polygon '((300,300),(400,600),(600,500),(700,200))'; + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p |&> polygon '((300,300),(400,600),(600,500),(700,200))'; +SELECT count(*) FROM quad_poly_tbl WHERE p |&> polygon '((300,300),(400,600),(600,500),(700,200))'; + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p |>> polygon '((300,300),(400,600),(600,500),(700,200))'; +SELECT count(*) FROM quad_poly_tbl WHERE p |>> polygon '((300,300),(400,600),(600,500),(700,200))'; + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p <@ polygon '((300,300),(400,600),(600,500),(700,200))'; +SELECT count(*) FROM quad_poly_tbl WHERE p <@ polygon '((300,300),(400,600),(600,500),(700,200))'; + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p @> polygon '((340,550),(343,552),(341,553))'; +SELECT count(*) FROM quad_poly_tbl WHERE p @> polygon '((340,550),(343,552),(341,553))'; + +EXPLAIN (COSTS OFF) +SELECT count(*) FROM quad_poly_tbl WHERE p ~= polygon '((200, 300),(210, 310),(230, 290))'; +SELECT count(*) FROM quad_poly_tbl WHERE p ~= polygon '((200, 300),(210, 310),(230, 290))'; + +RESET enable_seqscan; +RESET enable_indexscan; +RESET enable_bitmapscan; From 0689dc3a235a12c58910fba325f0150979d0c81f Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 26 Dec 2017 10:21:27 -0500 Subject: [PATCH 0739/1087] Add includes to make header files self-contained --- src/include/executor/nodeHashjoin.h | 1 + src/include/utils/sharedtuplestore.h | 1 + 2 files changed, 2 insertions(+) diff --git a/src/include/executor/nodeHashjoin.h b/src/include/executor/nodeHashjoin.h index 8469085d7e..b46bcc1f68 100644 --- a/src/include/executor/nodeHashjoin.h +++ b/src/include/executor/nodeHashjoin.h @@ -14,6 +14,7 @@ #ifndef NODEHASHJOIN_H #define NODEHASHJOIN_H +#include "access/parallel.h" #include "nodes/execnodes.h" #include "storage/buffile.h" diff --git a/src/include/utils/sharedtuplestore.h b/src/include/utils/sharedtuplestore.h index 49490ec414..13642f6bfd 100644 --- a/src/include/utils/sharedtuplestore.h +++ b/src/include/utils/sharedtuplestore.h @@ -13,6 +13,7 @@ #ifndef SHAREDTUPLESTORE_H #define SHAREDTUPLESTORE_H +#include "access/htup.h" #include "storage/fd.h" #include "storage/sharedfileset.h" From a2c8e5cfdb9d82ae6d4bb8f37a4dc7cbeca63ec1 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 30 Aug 2016 12:00:00 -0400 Subject: [PATCH 0740/1087] Add support for static assertions in C++ This allows modules written in C++ to use or include header files that use StaticAssertStmt() etc. Reviewed-by: Tom Lane --- src/include/c.h | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/src/include/c.h b/src/include/c.h index 11fcffbae3..22535a7deb 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -754,6 +754,7 @@ typedef NameData *Name; * about a negative width for a struct bit-field. This will not include a * helpful error message, but it beats not getting an error at all. 
 */
+#ifndef __cplusplus
 #ifdef HAVE__STATIC_ASSERT
 #define StaticAssertStmt(condition, errmessage) \
 	do { _Static_assert(condition, errmessage); } while(0)
@@ -765,6 +766,19 @@ typedef NameData *Name;
 #define StaticAssertExpr(condition, errmessage) \
 	StaticAssertStmt(condition, errmessage)
 #endif							/* HAVE__STATIC_ASSERT */
+#else							/* C++ */
+#if defined(__cpp_static_assert) && __cpp_static_assert >= 200410
+#define StaticAssertStmt(condition, errmessage) \
+	static_assert(condition, errmessage)
+#define StaticAssertExpr(condition, errmessage) \
+	StaticAssertStmt(condition, errmessage)
+#else
+#define StaticAssertStmt(condition, errmessage) \
+	do { struct static_assert_struct { int static_assert_failure : (condition) ? 1 : -1; }; } while(0)
+#define StaticAssertExpr(condition, errmessage) \
+	({ StaticAssertStmt(condition, errmessage); })
+#endif
+#endif							/* C++ */
 
 /*

From ad337c76b6f454157982309089c3302fe77c9cbc Mon Sep 17 00:00:00 2001
From: Teodor Sigaev
Date: Wed, 27 Dec 2017 18:25:37 +0300
Subject: [PATCH 0741/1087] Update relation's stats in pg_class during vacuum full.

Hash indexes depend on estimates of the numbers of tuples and pages in a
relation; an incorrect value can cause the index to grow significantly.
VACUUM FULL recreates the heap and reindexes all indexes before the stats
are renewed, so the indexes were built from stale values. The patch fixes
that, so the indexes will see correct values.

Backpatch to v10 only, because earlier versions don't have a usable hash
index, and hash index growth is the only user-visible symptom.

Author: Amit Kapila
Reviewed-by: Ashutosh Sharma, me
Discussion: https://www.postgresql.org/message-id/flat/20171115232922.5tomkxnw3iq6jsg7@inml.weebeastie.net
---
 src/backend/commands/cluster.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 48f1e6e2ad..1c5669afa8 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -738,6 +738,9 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex, bool verbose,
 	Relation	NewHeap,
 				OldHeap,
 				OldIndex;
+	Relation	relRelation;
+	HeapTuple	reltup;
+	Form_pg_class relform;
 	TupleDesc	oldTupDesc;
 	TupleDesc	newTupDesc;
 	int			natts;
@@ -756,6 +759,7 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex, bool verbose,
 	double		num_tuples = 0,
 				tups_vacuumed = 0,
 				tups_recently_dead = 0;
+	BlockNumber num_pages;
 	int			elevel = verbose ? INFO : DEBUG2;
 	PGRUsage	ru0;
 
@@ -1079,6 +1083,8 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex, bool verbose,
 	/* Reset rd_toastoid just to be tidy --- it shouldn't be looked at again */
 	NewHeap->rd_toastoid = InvalidOid;
 
+	num_pages = RelationGetNumberOfBlocks(NewHeap);
+
 	/* Log what we did */
 	ereport(elevel,
 			(errmsg("\"%s\": found %.0f removable, %.0f nonremovable row versions in %u pages",
@@ -1098,6 +1104,30 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex, bool verbose,
 	index_close(OldIndex, NoLock);
 	heap_close(OldHeap, NoLock);
 	heap_close(NewHeap, NoLock);
+
+	/* Update pg_class to reflect the correct values of pages and tuples. */
+	relRelation = heap_open(RelationRelationId, RowExclusiveLock);
+
+	reltup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(OIDNewHeap));
+	if (!HeapTupleIsValid(reltup))
+		elog(ERROR, "cache lookup failed for relation %u", OIDNewHeap);
+	relform = (Form_pg_class) GETSTRUCT(reltup);
+
+	relform->relpages = num_pages;
+	relform->reltuples = num_tuples;
+
+	/* Don't update the stats for pg_class.  See swap_relation_files.
*/ + if (OIDOldHeap != RelationRelationId) + CatalogTupleUpdate(relRelation, &reltup->t_self, reltup); + else + CacheInvalidateRelcacheByTuple(reltup); + + /* Clean up. */ + heap_freetuple(reltup); + heap_close(relRelation, RowExclusiveLock); + + /* Make the update visible */ + CommandCounterIncrement(); } /* From 7a727c180aa3c3baba12957d4cbec7b022ba4be5 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 27 Dec 2017 10:24:33 -0800 Subject: [PATCH 0742/1087] Add pow(), aka power(), function to pgbench. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Raúl Marín Rodríguez, reviewed by Fabien Coelho and Michael Paquier, with a minor fix by me. Discussion: http://postgr.es/m/CAM6_UM4XiA14y9HnDqu9kAAOtwMhHZxW--q_ZACZW9Hsrsf-tg@mail.gmail.com --- doc/src/sgml/ref/pgbench.sgml | 7 +++++++ src/bin/pgbench/exprparse.y | 6 ++++++ src/bin/pgbench/pgbench.c | 18 ++++++++++++++++ src/bin/pgbench/pgbench.h | 3 ++- src/bin/pgbench/t/001_pgbench_with_server.pl | 22 +++++++++++++++++++- 5 files changed, 54 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index 4431fc3eb7..1519fe78ef 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -1069,6 +1069,13 @@ pgbench options d pi() 3.14159265358979323846 + + pow(x, y), power(x, y) + double + exponentiation + pow(2.0, 10), power(2.0, 10) + 1024.0 + random(lb, ub) integer diff --git a/src/bin/pgbench/exprparse.y b/src/bin/pgbench/exprparse.y index 25d5ad48e5..74ffe5e7a7 100644 --- a/src/bin/pgbench/exprparse.y +++ b/src/bin/pgbench/exprparse.y @@ -194,6 +194,12 @@ static const struct { "random_zipfian", 3, PGBENCH_RANDOM_ZIPFIAN }, + { + "pow", 2, PGBENCH_POW + }, + { + "power", 2, PGBENCH_POW + }, /* keep as last array element */ { NULL, 0, 0 diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index 7ce6f607f5..e065f7bedc 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -1850,6 +1850,24 @@ evalFunc(TState *thread, CState *st, return true; } + case PGBENCH_POW: + { + PgBenchValue *lval = &vargs[0]; + PgBenchValue *rval = &vargs[1]; + double ld, + rd; + + Assert(nargs == 2); + + if (!coerceToDouble(lval, &ld) || + !coerceToDouble(rval, &rd)) + return false; + + setDoubleValue(retval, pow(ld, rd)); + + return true; + } + default: /* cannot get here */ Assert(0); diff --git a/src/bin/pgbench/pgbench.h b/src/bin/pgbench/pgbench.h index 83fee1ae74..0e92882a4c 100644 --- a/src/bin/pgbench/pgbench.h +++ b/src/bin/pgbench/pgbench.h @@ -76,7 +76,8 @@ typedef enum PgBenchFunction PGBENCH_RANDOM, PGBENCH_RANDOM_GAUSSIAN, PGBENCH_RANDOM_EXPONENTIAL, - PGBENCH_RANDOM_ZIPFIAN + PGBENCH_RANDOM_ZIPFIAN, + PGBENCH_POW } PgBenchFunction; typedef struct PgBenchExpr PgBenchExpr; diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index e3cdf28628..9cbeb2fc11 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -232,7 +232,17 @@ sub pgbench qr{command=19.: double 19\b}, qr{command=20.: double 20\b}, qr{command=21.: int 9223372036854775807\b}, - qr{command=23.: int [1-9]\b}, ], + qr{command=23.: int [1-9]\b}, + qr{command=24.: double -27\b}, + qr{command=25.: double 1024\b}, + qr{command=26.: double 1\b}, + qr{command=27.: double 1\b}, + qr{command=28.: double -0.125\b}, + qr{command=29.: double -0.125\b}, + qr{command=30.: double -0.00032\b}, + qr{command=31.: double 8.50705917302346e\+37\b}, + 
qr{command=32.: double 1e\+30\b}, + ], 'pgbench expressions', { '001_pgbench_expressions' => q{-- integer functions \set i1 debug(random(1, 100)) @@ -264,6 +274,16 @@ sub pgbench \set i1 0 -- yet another integer function \set id debug(random_zipfian(1, 9, 1.3)) +--- pow and power +\set poweri debug(pow(-3,3)) +\set powerd debug(pow(2.0,10)) +\set poweriz debug(pow(0,0)) +\set powerdz debug(pow(0.0,0.0)) +\set powernegi debug(pow(-2,-3)) +\set powernegd debug(pow(-2.0,-3.0)) +\set powernegd2 debug(power(-5.0,-5.0)) +\set powerov debug(pow(9223372036854775807, 2)) +\set powerov2 debug(pow(10,30)) } }); # backslash commands From 62d02f39e72a2c030711a772f00f47f51262803c Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 27 Dec 2017 10:56:14 -0800 Subject: [PATCH 0743/1087] Fix race-under-concurrency in PathNameCreateTemporaryDir. Thomas Munro Discussion: http://postgr.es/m/CAEepm=1Vp1e3KtftLtw4B60ZV9teNeKu6HxoaaBptQMsRWjJbQ@mail.gmail.com --- src/backend/storage/file/fd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index 5c7fd645ac..f449ee5c51 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -1451,7 +1451,7 @@ PathNameCreateTemporaryDir(const char *basedir, const char *directory) basedir))); /* Try again. */ - if (mkdir(directory, S_IRWXU) < 0) + if (mkdir(directory, S_IRWXU) < 0 && errno != EEXIST) ereport(ERROR, (errcode_for_file_access(), errmsg("cannot create temporary subdirectory \"%s\": %m", From b726eaa37a59d0cae0be56457c9522db7288255d Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 27 Dec 2017 11:01:47 -0800 Subject: [PATCH 0744/1087] Remove incorrect apostrophe. Etsuro Fujita Discussion: http://postgr.es/m/5A4393AA.8000708@lab.ntt.co.jp --- contrib/postgres_fdw/postgres_fdw.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index fb65e2eb20..44db4497c4 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -5115,7 +5115,7 @@ conversion_error_callback(void *arg) /* * Target list can have Vars and expressions. For Vars, we can get - * it's relation, however for expressions we can't. Thus for + * its relation, however for expressions we can't. Thus for * expressions, just show generic context message. */ if (IsA(tle->expr, Var)) From be2343221fb74bde6b7445feeef32f7ea5cf2618 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Wed, 27 Dec 2017 18:01:37 -0300 Subject: [PATCH 0745/1087] Protect against hypothetical memory leaks in RelationGetPartitionKey Also, fix a comment that commit 8a0596cb656e made obsolete. Reported-by: Robert Haas Discussion: http://postgr.es/m/CA+TgmoYbpuUUUp2GhYNwWm0qkah39spiU7uOiNXLz20ASfKYoA@mail.gmail.com --- src/backend/utils/cache/relcache.c | 53 ++++++++++++++++-------------- src/include/access/hash.h | 2 +- 2 files changed, 30 insertions(+), 25 deletions(-) diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index e2760daac4..1d0cc6cb79 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -807,17 +807,16 @@ RelationBuildRuleLock(Relation relation) * RelationBuildPartitionKey * Build and attach to relcache partition key data of relation * - * Partitioning key data is stored in CacheMemoryContext to ensure it survives - * as long as the relcache. 
To avoid leaking memory in that context in case - * of an error partway through this function, we build the structure in the - * working context (which must be short-lived) and copy the completed - * structure into the cache memory. - * - * Also, since the structure being created here is sufficiently complex, we - * make a private child context of CacheMemoryContext for each relation that - * has associated partition key information. That means no complicated logic - * to free individual elements whenever the relcache entry is flushed - just - * delete the context. + * Partitioning key data is a complex structure; to avoid complicated logic to + * free individual elements whenever the relcache entry is flushed, we give it + * its own memory context, child of CacheMemoryContext, which can easily be + * deleted on its own. To avoid leaking memory in that context in case of an + * error partway through this function, the context is initially created as a + * child of CurTransactionContext and only re-parented to CacheMemoryContext + * at the end, when no further errors are possible. Also, we don't make this + * context the current context except in very brief code sections, out of fear + * that some of our callees allocate memory on their own which would be leaked + * permanently. */ static void RelationBuildPartitionKey(Relation relation) @@ -850,9 +849,9 @@ RelationBuildPartitionKey(Relation relation) RelationGetRelationName(relation), MEMCONTEXT_COPY_NAME, ALLOCSET_SMALL_SIZES); - oldcxt = MemoryContextSwitchTo(partkeycxt); - key = (PartitionKey) palloc0(sizeof(PartitionKeyData)); + key = (PartitionKey) MemoryContextAllocZero(partkeycxt, + sizeof(PartitionKeyData)); /* Fixed-length attributes */ form = (Form_pg_partitioned_table) GETSTRUCT(tuple); @@ -894,17 +893,20 @@ RelationBuildPartitionKey(Relation relation) /* * Run the expressions through const-simplification since the planner * will be comparing them to similarly-processed qual clause operands, - * and may fail to detect valid matches without this step. We don't - * need to bother with canonicalize_qual() though, because partition - * expressions are not full-fledged qualification clauses. + * and may fail to detect valid matches without this step; fix + * opfuncids while at it. We don't need to bother with + * canonicalize_qual() though, because partition expressions are not + * full-fledged qualification clauses. */ - expr = eval_const_expressions(NULL, (Node *) expr); + expr = eval_const_expressions(NULL, expr); + fix_opfuncids(expr); - /* May as well fix opfuncids too */ - fix_opfuncids((Node *) expr); - key->partexprs = (List *) expr; + oldcxt = MemoryContextSwitchTo(partkeycxt); + key->partexprs = (List *) copyObject(expr); + MemoryContextSwitchTo(oldcxt); } + oldcxt = MemoryContextSwitchTo(partkeycxt); key->partattrs = (AttrNumber *) palloc0(key->partnatts * sizeof(AttrNumber)); key->partopfamily = (Oid *) palloc0(key->partnatts * sizeof(Oid)); key->partopcintype = (Oid *) palloc0(key->partnatts * sizeof(Oid)); @@ -919,8 +921,9 @@ RelationBuildPartitionKey(Relation relation) key->parttypbyval = (bool *) palloc0(key->partnatts * sizeof(bool)); key->parttypalign = (char *) palloc0(key->partnatts * sizeof(char)); key->parttypcoll = (Oid *) palloc0(key->partnatts * sizeof(Oid)); + MemoryContextSwitchTo(oldcxt); - /* For the hash partitioning, an extended hash function will be used. */ + /* determine support function number to search for */ procnum = (key->strategy == PARTITION_STRATEGY_HASH) ? 
HASHEXTENDED_PROC : BTORDER_PROC;
@@ -952,7 +955,7 @@ RelationBuildPartitionKey(Relation relation)
 if (!OidIsValid(funcid))
 ereport(ERROR,
 (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
- errmsg("operator class \"%s\" of access method %s is missing support function %d for data type \"%s\"",
+ errmsg("operator class \"%s\" of access method %s is missing support function %d for type %s",
 NameStr(opclassform->opcname),
 (key->strategy == PARTITION_STRATEGY_HASH) ?
 "hash" : "btree",
@@ -989,11 +992,13 @@ RelationBuildPartitionKey(Relation relation)
 
 ReleaseSysCache(tuple);
 
- /* Success --- make the relcache point to the newly constructed key */
+ /*
+ * Success --- reparent our context and make the relcache point to the
+ * newly constructed key
+ */
 MemoryContextSetParent(partkeycxt, CacheMemoryContext);
 relation->rd_partkeycxt = partkeycxt;
 relation->rd_partkey = key;
- MemoryContextSwitchTo(oldcxt);
 }
 
 /*
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index ccdf9dff4b..179ac97f87 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -338,7 +338,7 @@ typedef HashMetaPageData *HashMetaPage;
 
 /*
 * When a new operator class is declared, we require that the user supply
- * us with an amproc procudure for hashing a key of the new type, returning
+ * us with an amproc procedure for hashing a key of the new type, returning
 * a 32-bit hash value. We call this the "standard" hash procedure. We
 * also allow an optional "extended" hash procedure which accepts a salt and
 * returns a 64-bit hash value. This is highly recommended but, for reasons

From f83040c62a78e784e6e33a6382a55925bfd66634 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Thu, 28 Dec 2017 02:41:53 -0800
Subject: [PATCH 0746/1087] Fix rare assertion failure in parallel hash join.

When a backend runs out of inner tuples to hash, it should detach from
grow_batches_barrier only after it has flushed all batches to disk and
merged counters, not before. Otherwise a concurrent backend in
ExecParallelHashIncreaseNumBatches() could stop waiting for this
backend and try to read tuples before they have been written. This
commit reorders those operations and should fix the assertion failures
seen occasionally on the build farm since commit
1804284042e659e7d16904e7bbb0ad546394b6a3.

Author: Thomas Munro
Discussion: https://postgr.es/m/E1eRwXy-0004IK-TO%40gemulon.postgresql.org
---
 src/backend/executor/nodeHash.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 0a519fae31..04eb3650aa 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -288,8 +288,6 @@ MultiExecParallelHash(HashState *node)
 ExecParallelHashTableInsert(hashtable, slot, hashvalue);
 hashtable->partialTuples++;
 }
- BarrierDetach(&pstate->grow_buckets_barrier);
- BarrierDetach(&pstate->grow_batches_barrier);
 
 /*
 * Make sure that any tuples we wrote to disk are visible to
@@ -304,6 +302,9 @@ MultiExecParallelHash(HashState *node)
 */
 ExecParallelHashMergeCounters(hashtable);
 
+ BarrierDetach(&pstate->grow_buckets_barrier);
+ BarrierDetach(&pstate->grow_batches_barrier);
+
 /*
 * Wait for everyone to finish building and flushing files and
 * counters. 
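The fix works because the BarrierDetach() calls are what release a peer
waiting in ExecParallelHashIncreaseNumBatches(), so all shared state must
be published before them. Here is a minimal standalone sketch of that
publish-before-announce rule, using POSIX threads rather than PostgreSQL's
Barrier API; all names are illustrative, not taken from the patch:

#include <pthread.h>
#include <stdio.h>

static int batches_flushed;     /* stands in for batch files on disk */
static int detached;            /* stands in for detaching the barrier */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

static void *
builder(void *arg)
{
	batches_flushed = 42;           /* 1. flush batches, merge counters */

	pthread_mutex_lock(&lock);      /* 2. only then announce completion */
	detached = 1;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int
main(void)
{
	pthread_t thread;

	pthread_create(&thread, NULL, builder, NULL);

	pthread_mutex_lock(&lock);      /* a peer waits for the announcement */
	while (!detached)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);

	printf("read %d batches\n", batches_flushed);   /* always sees 42 */
	pthread_join(thread, NULL);
	return 0;
}

Swapping the two halves of builder() recreates the reported hazard: the
waiting thread can be released while batches_flushed is still unset.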
From 0aa1d489ea756b96b6d5573692ae9cd5d143c2a5 Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Fri, 29 Dec 2017 08:40:18 -0500 Subject: [PATCH 0747/1087] Allow leading zero on exponents in pgbench test results Following commit 7a727c18 this is found to be necessary on at least some Windows platforms. per buildfarm. --- src/bin/pgbench/t/001_pgbench_with_server.pl | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 9cbeb2fc11..3dd080e6e6 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -240,8 +240,8 @@ sub pgbench qr{command=28.: double -0.125\b}, qr{command=29.: double -0.125\b}, qr{command=30.: double -0.00032\b}, - qr{command=31.: double 8.50705917302346e\+37\b}, - qr{command=32.: double 1e\+30\b}, + qr{command=31.: double 8.50705917302346e\+0?37\b}, + qr{command=32.: double 1e\+0?30\b}, ], 'pgbench expressions', { '001_pgbench_expressions' => q{-- integer functions From 2958a672b1fed35403b23c2b453aede9f7ef4b39 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Fri, 29 Dec 2017 14:01:25 +0000 Subject: [PATCH 0748/1087] Extend near-wraparound hints to include replication slots Author: Feike Steenbergen Reviewed-by: Michael Paquier --- doc/src/sgml/logicaldecoding.sgml | 8 +++++--- src/backend/access/transam/multixact.c | 12 ++++++------ src/backend/access/transam/varsup.c | 12 ++++++------ src/backend/commands/vacuum.c | 3 ++- 4 files changed, 19 insertions(+), 16 deletions(-) diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml index 6bab1b9b32..fa101937e5 100644 --- a/doc/src/sgml/logicaldecoding.sgml +++ b/doc/src/sgml/logicaldecoding.sgml @@ -248,16 +248,18 @@ $ pg_recvlogical -d postgres --slot test --drop-slot may consume changes from a slot at any given time. - + Replication slots persist across crashes and know nothing about the state of their consumer(s). They will prevent removal of required resources even when there is no connection using them. This consumes storage because neither required WAL nor required rows from the system catalogs can be removed by VACUUM as long as they are required by a replication - slot. So if a slot is no longer required it should be dropped. + slot. In extreme cases this could cause the database to shut down to prevent + transaction ID wraparound (see ). + So if a slot is no longer required it should be dropped. 
- + diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c index 0fb6bf2f02..ba01e94328 100644 --- a/src/backend/access/transam/multixact.c +++ b/src/backend/access/transam/multixact.c @@ -1000,14 +1000,14 @@ GetNewMultiXactId(int nmembers, MultiXactOffset *offset) errmsg("database is not accepting commands that generate new MultiXactIds to avoid wraparound data loss in database \"%s\"", oldest_datname), errhint("Execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); else ereport(ERROR, (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), errmsg("database is not accepting commands that generate new MultiXactIds to avoid wraparound data loss in database with OID %u", oldest_datoid), errhint("Execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); } /* @@ -1031,7 +1031,7 @@ GetNewMultiXactId(int nmembers, MultiXactOffset *offset) oldest_datname, multiWrapLimit - result), errhint("Execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); else ereport(WARNING, (errmsg_plural("database with OID %u must be vacuumed before %u more MultiXactId is used", @@ -1040,7 +1040,7 @@ GetNewMultiXactId(int nmembers, MultiXactOffset *offset) oldest_datoid, multiWrapLimit - result), errhint("Execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); } /* Re-acquire lock and start over */ @@ -2321,7 +2321,7 @@ SetMultiXactIdLimit(MultiXactId oldest_datminmxid, Oid oldest_datoid, oldest_datname, multiWrapLimit - curMulti), errhint("To avoid a database shutdown, execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); else ereport(WARNING, (errmsg_plural("database with OID %u must be vacuumed before %u more MultiXactId is used", @@ -2330,7 +2330,7 @@ SetMultiXactIdLimit(MultiXactId oldest_datminmxid, Oid oldest_datoid, oldest_datoid, multiWrapLimit - curMulti), errhint("To avoid a database shutdown, execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); } } diff --git a/src/backend/access/transam/varsup.c b/src/backend/access/transam/varsup.c index 702c8c957f..4f094e2e63 100644 --- a/src/backend/access/transam/varsup.c +++ b/src/backend/access/transam/varsup.c @@ -124,14 +124,14 @@ GetNewTransactionId(bool isSubXact) errmsg("database is not accepting commands to avoid wraparound data loss in database \"%s\"", oldest_datname), errhint("Stop the postmaster and vacuum that database in single-user mode.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need 
to commit or roll back old prepared transactions, or drop stale replication slots."))); else ereport(ERROR, (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), errmsg("database is not accepting commands to avoid wraparound data loss in database with OID %u", oldest_datoid), errhint("Stop the postmaster and vacuum that database in single-user mode.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); } else if (TransactionIdFollowsOrEquals(xid, xidWarnLimit)) { @@ -144,14 +144,14 @@ GetNewTransactionId(bool isSubXact) oldest_datname, xidWrapLimit - xid), errhint("To avoid a database shutdown, execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); else ereport(WARNING, (errmsg("database with OID %u must be vacuumed within %u transactions", oldest_datoid, xidWrapLimit - xid), errhint("To avoid a database shutdown, execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); } /* Re-acquire lock and start over */ @@ -403,14 +403,14 @@ SetTransactionIdLimit(TransactionId oldest_datfrozenxid, Oid oldest_datoid) oldest_datname, xidWrapLimit - curXid), errhint("To avoid a database shutdown, execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); else ereport(WARNING, (errmsg("database with OID %u must be vacuumed within %u transactions", oldest_datoid, xidWrapLimit - curXid), errhint("To avoid a database shutdown, execute a database-wide VACUUM in that database.\n" - "You might also need to commit or roll back old prepared transactions."))); + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); } } diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c index 4abe6b15e0..d5f3fa5a31 100644 --- a/src/backend/commands/vacuum.c +++ b/src/backend/commands/vacuum.c @@ -655,7 +655,8 @@ vacuum_set_xid_limits(Relation rel, { ereport(WARNING, (errmsg("oldest xmin is far in the past"), - errhint("Close open transactions soon to avoid wraparound problems."))); + errhint("Close open transactions soon to avoid wraparound problems.\n" + "You might also need to commit or roll back old prepared transactions, or drop stale replication slots."))); limit = *oldestXmin; } From 48c9f4926562278a2fd2b85e7486c6d11705f177 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Fri, 29 Dec 2017 14:30:33 +0000 Subject: [PATCH 0749/1087] Fix race condition when changing synchronous_standby_names A momentary window exists when synchronous_standby_names changes that allows commands issued after the change to continue to act as async until the change becomes visible. 
Remove the race by using a more appropriate test in syncrep.c

Author: Asim Rama Praveen and Ashwin Agrawal
Reported-by: Xin Zhang, Ashwin Agrawal, and Asim Rama Praveen
Reviewed-by: Michael Paquier, Masahiko Sawada
---
 src/backend/replication/syncrep.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/src/backend/replication/syncrep.c b/src/backend/replication/syncrep.c
index 8677235411..962772ef8f 100644
--- a/src/backend/replication/syncrep.c
+++ b/src/backend/replication/syncrep.c
@@ -156,11 +156,9 @@ SyncRepWaitForLSN(XLogRecPtr lsn, bool commit)
 mode = Min(SyncRepWaitMode, SYNC_REP_WAIT_FLUSH);
 
 /*
- * Fast exit if user has not requested sync replication, or there are no
- * sync replication standby names defined. Note that those standbys don't
- * need to be connected.
+ * Fast exit if user has not requested sync replication.
 */
- if (!SyncRepRequested() || !SyncStandbysDefined())
+ if (!SyncRepRequested())
 return;
 
 Assert(SHMQueueIsDetached(&(MyProc->syncRepLinks)));

From d02974e32e028fc078d8f5eca1d6a4516efb0aa6 Mon Sep 17 00:00:00 2001
From: Magnus Hagander
Date: Fri, 29 Dec 2017 16:19:51 +0100
Subject: [PATCH 0750/1087] Properly set base backup backends to active in
 pg_stat_activity

When walsenders were included in pg_stat_activity, only the ones
actually streaming WAL were listed as active. In particular, the
connections sending base backups were listed as being idle, which
means that a regular pg_basebackup would show up with one active and
one idle connection, when both were active.

This patch updates the code to set all walsenders to active while they
are working (including those doing very fast things like
IDENTIFY_SYSTEM), and then back to idle. Details about exactly what
they are doing are available in pg_stat_replication.

Patch by me, review by Michael Paquier and David Steele.
---
 src/backend/replication/walsender.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index 6a252fcf45..f2e886f99f 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -1504,6 +1504,9 @@ exec_replication_command(const char *cmd_string)
 initStringInfo(&reply_message);
 initStringInfo(&tmpbuf);
 
+ /* Report to pgstat that this process is running */
+ pgstat_report_activity(STATE_RUNNING, NULL);
+
 switch (cmd_node->type)
 {
 case T_IdentifySystemCmd:
@@ -1555,6 +1558,9 @@ exec_replication_command(const char *cmd_string)
 ereport(ERROR,
 (errmsg("cannot execute SQL commands in WAL sender for physical replication")));
 
+ /* Report to pgstat that this process is now idle */
+ pgstat_report_activity(STATE_IDLE, NULL);
+
 /* Tell the caller that this wasn't a WalSender command. */
 return false;
 
@@ -1570,6 +1576,9 @@ exec_replication_command(const char *cmd_string)
 /* Send CommandComplete message */
 EndCommand("SELECT", DestRemote);
 
+ /* Report to pgstat that this process is now idle */
+ pgstat_report_activity(STATE_IDLE, NULL);
+
 return true;
 }
 
@@ -2089,9 +2098,6 @@ WalSndLoop(WalSndSendDataCallback send_data)
 last_reply_timestamp = GetCurrentTimestamp();
 waiting_for_ping_response = false;
 
- /* Report to pgstat that this process is running */
- pgstat_report_activity(STATE_RUNNING, NULL);
-
 /*
 * Loop until we reach the end of this timeline or the client requests to
 * stop streaming. 
From 4717fdb14cf0a62ffe1b1023e1c5ea8866e34fa0 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Fri, 29 Dec 2017 12:26:29 -0800
Subject: [PATCH 0751/1087] Rely on executor utils to build targetlist for DML
 RETURNING.

This is useful because it gets rid of the sole direct user of
ExecAssignResultType(). A future commit will likely make use of that
and combine creating the targetlist with the initialization of the
result slot. But it seems like good code hygiene anyway.

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
---
 src/backend/executor/nodeModifyTable.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index afb83ed3ae..82cd4462a3 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -1828,7 +1828,6 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 int nplans = list_length(node->plans);
 ResultRelInfo *saved_resultRelInfo;
 ResultRelInfo *resultRelInfo;
- TupleDesc tupDesc;
 Plan *subplan;
 ListCell *l;
 int i;
@@ -2068,12 +2067,11 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 * Initialize result tuple slot and assign its rowtype using the first
 * RETURNING list. We assume the rest will look the same.
 */
- tupDesc = ExecTypeFromTL((List *) linitial(node->returningLists),
- false);
+ mtstate->ps.plan->targetlist = (List *) linitial(node->returningLists);
 
 /* Set up a slot for the output of the RETURNING projection(s) */
 ExecInitResultTupleSlot(estate, &mtstate->ps);
- ExecAssignResultType(&mtstate->ps, tupDesc);
+ ExecAssignResultTypeFromTL(&mtstate->ps);
 slot = mtstate->ps.ps_ResultTupleSlot;
 
 /* Need an econtext too */
@@ -2126,9 +2124,9 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 * We still must construct a dummy result tuple type, because InitPlan
 * expects one (maybe should change that?).
 */
- tupDesc = ExecTypeFromTL(NIL, false);
+ mtstate->ps.plan->targetlist = NIL;
 ExecInitResultTupleSlot(estate, &mtstate->ps);
- ExecAssignResultType(&mtstate->ps, tupDesc);
+ ExecAssignResultTypeFromTL(&mtstate->ps);
 mtstate->ps.ps_ExprContext = NULL;
 }
 

From b40933101ca622aa8a35b6fe07ace36effadf1c7 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Fri, 29 Dec 2017 12:38:15 -0800
Subject: [PATCH 0752/1087] Perform slot validity checks in a separate pass
 over expression.

This reduces code duplication a bit, but the primary benefit is that it
makes JITing expression evaluation easier. When doing so we can't, as
previously done in the interpreted case, really change the opcode
without recompiling. Nor do we want to just carry around unnecessary
branches to avoid re-checking over and over.

As a minor side-effect this makes ExecEvalStepOp() O(log(N)) rather
than O(N). 
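The O(log(N)) figure refers to resolving a computed-goto jump address back
to its opcode through a lookup table that is sorted once with qsort() and
probed with bsearch(), instead of scanning the dispatch table linearly. A
minimal standalone sketch of that idea in plain C, with illustrative
stand-in names rather than the patch's actual structs:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
	const void *addr;   /* jump target, as used with computed goto */
	int op;             /* the opcode enum value it belongs to */
} OpLookup;

static int
cmp_ptr(const void *a, const void *b)
{
	/* compare the addresses as integers to get a total order */
	uintptr_t pa = (uintptr_t) ((const OpLookup *) a)->addr;
	uintptr_t pb = (uintptr_t) ((const OpLookup *) b)->addr;

	return (pa < pb) ? -1 : (pa > pb) ? 1 : 0;
}

int
main(void)
{
	static const char target0, target1, target2;    /* stand-in jump targets */
	OpLookup table[] = {{&target2, 2}, {&target0, 0}, {&target1, 1}};
	OpLookup key = {&target1, -1};
	OpLookup *hit;

	/* sort once at initialization ... */
	qsort(table, 3, sizeof(OpLookup), cmp_ptr);
	/* ... then each lookup is a binary search rather than a linear scan */
	hit = bsearch(&key, table, 3, sizeof(OpLookup), cmp_ptr);
	printf("opcode = %d\n", hit ? hit->op : -1);    /* prints: opcode = 1 */
	return 0;
}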
Author: Andres Freund Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de --- src/backend/executor/execExpr.c | 7 +- src/backend/executor/execExprInterp.c | 277 ++++++++++++++------------ src/include/executor/execExpr.h | 14 +- src/include/nodes/execnodes.h | 3 + 4 files changed, 165 insertions(+), 136 deletions(-) diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index 55bb925191..2642b404ff 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -680,20 +680,19 @@ ExecInitExprRec(Expr *node, ExprState *state, /* regular user column */ scratch.d.var.attnum = variable->varattno - 1; scratch.d.var.vartype = variable->vartype; - /* select EEOP_*_FIRST opcode to force one-time checks */ switch (variable->varno) { case INNER_VAR: - scratch.opcode = EEOP_INNER_VAR_FIRST; + scratch.opcode = EEOP_INNER_VAR; break; case OUTER_VAR: - scratch.opcode = EEOP_OUTER_VAR_FIRST; + scratch.opcode = EEOP_OUTER_VAR; break; /* INDEX_VAR is handled by default case */ default: - scratch.opcode = EEOP_SCAN_VAR_FIRST; + scratch.opcode = EEOP_SCAN_VAR; break; } } diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 0c3f66803f..fa4ab30e99 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -95,8 +95,17 @@ */ #if defined(EEO_USE_COMPUTED_GOTO) +/* struct for jump target -> opcode lookup table */ +typedef struct ExprEvalOpLookup +{ + const void *opcode; + ExprEvalOp op; +} ExprEvalOpLookup; + /* to make dispatch_table accessible outside ExecInterpExpr() */ static const void **dispatch_table = NULL; +/* jump target -> opcode lookup table */ +static ExprEvalOpLookup reverse_dispatch_table[EEOP_LAST]; #define EEO_SWITCH() #define EEO_CASE(name) CASE_##name: @@ -137,11 +146,8 @@ static void ExecEvalRowNullInt(ExprState *state, ExprEvalStep *op, ExprContext *econtext, bool checkisnull); /* fast-path evaluation functions */ -static Datum ExecJustInnerVarFirst(ExprState *state, ExprContext *econtext, bool *isnull); static Datum ExecJustInnerVar(ExprState *state, ExprContext *econtext, bool *isnull); -static Datum ExecJustOuterVarFirst(ExprState *state, ExprContext *econtext, bool *isnull); static Datum ExecJustOuterVar(ExprState *state, ExprContext *econtext, bool *isnull); -static Datum ExecJustScanVarFirst(ExprState *state, ExprContext *econtext, bool *isnull); static Datum ExecJustScanVar(ExprState *state, ExprContext *econtext, bool *isnull); static Datum ExecJustConst(ExprState *state, ExprContext *econtext, bool *isnull); static Datum ExecJustAssignInnerVar(ExprState *state, ExprContext *econtext, bool *isnull); @@ -171,6 +177,14 @@ ExecReadyInterpretedExpr(ExprState *state) if (state->flags & EEO_FLAG_INTERPRETER_INITIALIZED) return; + /* + * First time through, check whether attribute matches Var. Might not be + * ok anymore, due to schema changes. We do that by setting up a callback + * that does checking on the first call, which then sets the evalfunc + * callback to the actual method of execution. 
+ */ + state->evalfunc = ExecInterpExprStillValid; + /* DIRECT_THREADED should not already be set */ Assert((state->flags & EEO_FLAG_DIRECT_THREADED) == 0); @@ -192,53 +206,53 @@ ExecReadyInterpretedExpr(ExprState *state) ExprEvalOp step1 = state->steps[1].opcode; if (step0 == EEOP_INNER_FETCHSOME && - step1 == EEOP_INNER_VAR_FIRST) + step1 == EEOP_INNER_VAR) { - state->evalfunc = ExecJustInnerVarFirst; + state->evalfunc_private = (void *) ExecJustInnerVar; return; } else if (step0 == EEOP_OUTER_FETCHSOME && - step1 == EEOP_OUTER_VAR_FIRST) + step1 == EEOP_OUTER_VAR) { - state->evalfunc = ExecJustOuterVarFirst; + state->evalfunc_private = (void *) ExecJustOuterVar; return; } else if (step0 == EEOP_SCAN_FETCHSOME && - step1 == EEOP_SCAN_VAR_FIRST) + step1 == EEOP_SCAN_VAR) { - state->evalfunc = ExecJustScanVarFirst; + state->evalfunc_private = (void *) ExecJustScanVar; return; } else if (step0 == EEOP_INNER_FETCHSOME && step1 == EEOP_ASSIGN_INNER_VAR) { - state->evalfunc = ExecJustAssignInnerVar; + state->evalfunc_private = (void *) ExecJustAssignInnerVar; return; } else if (step0 == EEOP_OUTER_FETCHSOME && step1 == EEOP_ASSIGN_OUTER_VAR) { - state->evalfunc = ExecJustAssignOuterVar; + state->evalfunc_private = (void *) ExecJustAssignOuterVar; return; } else if (step0 == EEOP_SCAN_FETCHSOME && step1 == EEOP_ASSIGN_SCAN_VAR) { - state->evalfunc = ExecJustAssignScanVar; + state->evalfunc_private = (void *) ExecJustAssignScanVar; return; } else if (step0 == EEOP_CASE_TESTVAL && step1 == EEOP_FUNCEXPR_STRICT && state->steps[0].d.casetest.value) { - state->evalfunc = ExecJustApplyFuncToCase; + state->evalfunc_private = (void *) ExecJustApplyFuncToCase; return; } } else if (state->steps_len == 2 && state->steps[0].opcode == EEOP_CONST) { - state->evalfunc = ExecJustConst; + state->evalfunc_private = (void *) ExecJustConst; return; } @@ -262,7 +276,7 @@ ExecReadyInterpretedExpr(ExprState *state) } #endif /* EEO_USE_COMPUTED_GOTO */ - state->evalfunc = ExecInterpExpr; + state->evalfunc_private = (void *) ExecInterpExpr; } @@ -293,11 +307,8 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) &&CASE_EEOP_INNER_FETCHSOME, &&CASE_EEOP_OUTER_FETCHSOME, &&CASE_EEOP_SCAN_FETCHSOME, - &&CASE_EEOP_INNER_VAR_FIRST, &&CASE_EEOP_INNER_VAR, - &&CASE_EEOP_OUTER_VAR_FIRST, &&CASE_EEOP_OUTER_VAR, - &&CASE_EEOP_SCAN_VAR_FIRST, &&CASE_EEOP_SCAN_VAR, &&CASE_EEOP_INNER_SYSVAR, &&CASE_EEOP_OUTER_SYSVAR, @@ -420,22 +431,6 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_NEXT(); } - EEO_CASE(EEOP_INNER_VAR_FIRST) - { - int attnum = op->d.var.attnum; - - /* - * First time through, check whether attribute matches Var. Might - * not be ok anymore, due to schema changes. 
- */ - CheckVarSlotCompatibility(innerslot, attnum + 1, op->d.var.vartype); - - /* Skip that check on subsequent evaluations */ - op->opcode = EEO_OPCODE(EEOP_INNER_VAR); - - /* FALL THROUGH to EEOP_INNER_VAR */ - } - EEO_CASE(EEOP_INNER_VAR) { int attnum = op->d.var.attnum; @@ -453,18 +448,6 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_NEXT(); } - EEO_CASE(EEOP_OUTER_VAR_FIRST) - { - int attnum = op->d.var.attnum; - - /* See EEOP_INNER_VAR_FIRST comments */ - - CheckVarSlotCompatibility(outerslot, attnum + 1, op->d.var.vartype); - op->opcode = EEO_OPCODE(EEOP_OUTER_VAR); - - /* FALL THROUGH to EEOP_OUTER_VAR */ - } - EEO_CASE(EEOP_OUTER_VAR) { int attnum = op->d.var.attnum; @@ -478,18 +461,6 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_NEXT(); } - EEO_CASE(EEOP_SCAN_VAR_FIRST) - { - int attnum = op->d.var.attnum; - - /* See EEOP_INNER_VAR_FIRST comments */ - - CheckVarSlotCompatibility(scanslot, attnum + 1, op->d.var.vartype); - op->opcode = EEO_OPCODE(EEOP_SCAN_VAR); - - /* FALL THROUGH to EEOP_SCAN_VAR */ - } - EEO_CASE(EEOP_SCAN_VAR) { int attnum = op->d.var.attnum; @@ -1556,6 +1527,78 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) return state->resvalue; } +/* + * Expression evaluation callback that performs extra checks before executing + * the expression. Declared extern so other methods of execution can use it + * too. + */ +Datum +ExecInterpExprStillValid(ExprState *state, ExprContext *econtext, bool *isNull) +{ + /* + * First time through, check whether attribute matches Var. Might + * not be ok anymore, due to schema changes. + */ + CheckExprStillValid(state, econtext); + + /* skip the check during further executions */ + state->evalfunc = (ExprStateEvalFunc) state->evalfunc_private; + + /* and actually execute */ + return state->evalfunc(state, econtext, isNull); +} + +/* + * Check that an expression is still valid in the face of potential schema + * changes since the plan has been created. + */ +void +CheckExprStillValid(ExprState *state, ExprContext *econtext) +{ + int i = 0; + TupleTableSlot *innerslot; + TupleTableSlot *outerslot; + TupleTableSlot *scanslot; + + innerslot = econtext->ecxt_innertuple; + outerslot = econtext->ecxt_outertuple; + scanslot = econtext->ecxt_scantuple; + + for (i = 0; i < state->steps_len;i++) + { + ExprEvalStep *op = &state->steps[i]; + + switch (ExecEvalStepOp(state, op)) + { + case EEOP_INNER_VAR: + { + int attnum = op->d.var.attnum; + + CheckVarSlotCompatibility(innerslot, attnum + 1, op->d.var.vartype); + break; + } + + case EEOP_OUTER_VAR: + { + int attnum = op->d.var.attnum; + + CheckVarSlotCompatibility(outerslot, attnum + 1, op->d.var.vartype); + break; + } + + case EEOP_SCAN_VAR: + { + int attnum = op->d.var.attnum; + + CheckVarSlotCompatibility(scanslot, attnum + 1, op->d.var.vartype); + break; + } + default: + break; + } + } +} + /* * Check whether a user attribute in a slot can be referenced by a Var * expression. 
This should succeed unless there have been schema changes @@ -1668,20 +1711,14 @@ ShutdownTupleDescRef(Datum arg) * Fast-path functions, for very simple expressions */ -/* Simple reference to inner Var, first time through */ +/* Simple reference to inner Var */ static Datum -ExecJustInnerVarFirst(ExprState *state, ExprContext *econtext, bool *isnull) +ExecJustInnerVar(ExprState *state, ExprContext *econtext, bool *isnull) { ExprEvalStep *op = &state->steps[1]; int attnum = op->d.var.attnum + 1; TupleTableSlot *slot = econtext->ecxt_innertuple; - /* See ExecInterpExpr()'s comments for EEOP_INNER_VAR_FIRST */ - - CheckVarSlotCompatibility(slot, attnum, op->d.var.vartype); - op->opcode = EEOP_INNER_VAR; /* just for cleanliness */ - state->evalfunc = ExecJustInnerVar; - /* * Since we use slot_getattr(), we don't need to implement the FETCHSOME * step explicitly, and we also needn't Assert that the attnum is in range @@ -1690,34 +1727,6 @@ ExecJustInnerVarFirst(ExprState *state, ExprContext *econtext, bool *isnull) return slot_getattr(slot, attnum, isnull); } -/* Simple reference to inner Var */ -static Datum -ExecJustInnerVar(ExprState *state, ExprContext *econtext, bool *isnull) -{ - ExprEvalStep *op = &state->steps[1]; - int attnum = op->d.var.attnum + 1; - TupleTableSlot *slot = econtext->ecxt_innertuple; - - /* See comments in ExecJustInnerVarFirst */ - return slot_getattr(slot, attnum, isnull); -} - -/* Simple reference to outer Var, first time through */ -static Datum -ExecJustOuterVarFirst(ExprState *state, ExprContext *econtext, bool *isnull) -{ - ExprEvalStep *op = &state->steps[1]; - int attnum = op->d.var.attnum + 1; - TupleTableSlot *slot = econtext->ecxt_outertuple; - - CheckVarSlotCompatibility(slot, attnum, op->d.var.vartype); - op->opcode = EEOP_OUTER_VAR; /* just for cleanliness */ - state->evalfunc = ExecJustOuterVar; - - /* See comments in ExecJustInnerVarFirst */ - return slot_getattr(slot, attnum, isnull); -} - /* Simple reference to outer Var */ static Datum ExecJustOuterVar(ExprState *state, ExprContext *econtext, bool *isnull) @@ -1726,23 +1735,7 @@ ExecJustOuterVar(ExprState *state, ExprContext *econtext, bool *isnull) int attnum = op->d.var.attnum + 1; TupleTableSlot *slot = econtext->ecxt_outertuple; - /* See comments in ExecJustInnerVarFirst */ - return slot_getattr(slot, attnum, isnull); -} - -/* Simple reference to scan Var, first time through */ -static Datum -ExecJustScanVarFirst(ExprState *state, ExprContext *econtext, bool *isnull) -{ - ExprEvalStep *op = &state->steps[1]; - int attnum = op->d.var.attnum + 1; - TupleTableSlot *slot = econtext->ecxt_scantuple; - - CheckVarSlotCompatibility(slot, attnum, op->d.var.vartype); - op->opcode = EEOP_SCAN_VAR; /* just for cleanliness */ - state->evalfunc = ExecJustScanVar; - - /* See comments in ExecJustInnerVarFirst */ + /* See comments in ExecJustInnerVar */ return slot_getattr(slot, attnum, isnull); } @@ -1754,7 +1747,7 @@ ExecJustScanVar(ExprState *state, ExprContext *econtext, bool *isnull) int attnum = op->d.var.attnum + 1; TupleTableSlot *slot = econtext->ecxt_scantuple; - /* See comments in ExecJustInnerVarFirst */ + /* See comments in ExecJustInnerVar */ return slot_getattr(slot, attnum, isnull); } @@ -1860,6 +1853,24 @@ ExecJustApplyFuncToCase(ExprState *state, ExprContext *econtext, bool *isnull) return d; } +#if defined(EEO_USE_COMPUTED_GOTO) +/* + * Comparator used when building address->opcode lookup table for + * ExecEvalStepOp() in the threaded dispatch case. 
+ */ +static int +dispatch_compare_ptr(const void* a, const void *b) +{ + const ExprEvalOpLookup *la = (const ExprEvalOpLookup *) a; + const ExprEvalOpLookup *lb = (const ExprEvalOpLookup *) b; + + if (la->opcode < lb->opcode) + return -1; + else if (la->opcode > lb->opcode) + return 1; + return 0; +} +#endif /* * Do one-time initialization of interpretation machinery. @@ -1870,8 +1881,25 @@ ExecInitInterpreter(void) #if defined(EEO_USE_COMPUTED_GOTO) /* Set up externally-visible pointer to dispatch table */ if (dispatch_table == NULL) + { + int i; + dispatch_table = (const void **) DatumGetPointer(ExecInterpExpr(NULL, NULL, NULL)); + + /* build reverse lookup table */ + for (i = 0; i < EEOP_LAST; i++) + { + reverse_dispatch_table[i].opcode = dispatch_table[i]; + reverse_dispatch_table[i].op = (ExprEvalOp) i; + } + + /* make it bsearch()able */ + qsort(reverse_dispatch_table, + EEOP_LAST /* nmembers */, + sizeof(ExprEvalOpLookup), + dispatch_compare_ptr); + } #endif } @@ -1880,10 +1908,6 @@ ExecInitInterpreter(void) * * When direct-threading is in use, ExprState->opcode isn't easily * decipherable. This function returns the appropriate enum member. - * - * This currently is only supposed to be used in paths that aren't critical - * performance-wise. If that changes, we could add an inverse dispatch_table - * that's sorted on the address, so a binary search can be performed. */ ExprEvalOp ExecEvalStepOp(ExprState *state, ExprEvalStep *op) @@ -1891,16 +1915,17 @@ ExecEvalStepOp(ExprState *state, ExprEvalStep *op) #if defined(EEO_USE_COMPUTED_GOTO) if (state->flags & EEO_FLAG_DIRECT_THREADED) { - int i; - - for (i = 0; i < EEOP_LAST; i++) - { - if ((void *) op->opcode == dispatch_table[i]) - { - return (ExprEvalOp) i; - } - } - elog(ERROR, "unknown opcode"); + ExprEvalOpLookup key; + ExprEvalOpLookup *res; + + key.opcode = (void *) op->opcode; + res = bsearch(&key, + reverse_dispatch_table, + EEOP_LAST /* nmembers */, + sizeof(ExprEvalOpLookup), + dispatch_compare_ptr); + Assert(res); /* unknown ops shouldn't get looked up */ + return res->op; } #endif return (ExprEvalOp) op->opcode; diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h index 080252fad6..511205b5ac 100644 --- a/src/include/executor/execExpr.h +++ b/src/include/executor/execExpr.h @@ -51,12 +51,8 @@ typedef enum ExprEvalOp EEOP_SCAN_FETCHSOME, /* compute non-system Var value */ - /* "FIRST" variants are used only the first time through */ - EEOP_INNER_VAR_FIRST, EEOP_INNER_VAR, - EEOP_OUTER_VAR_FIRST, EEOP_OUTER_VAR, - EEOP_SCAN_VAR_FIRST, EEOP_SCAN_VAR, /* compute system Var value */ @@ -67,8 +63,11 @@ typedef enum ExprEvalOp /* compute wholerow Var */ EEOP_WHOLEROW, - /* compute non-system Var value, assign it into ExprState's resultslot */ - /* (these are not used if _FIRST checks would be needed) */ + /* + * Compute non-system Var value, assign it into ExprState's + * resultslot. These are not used if a CheckVarSlotCompatibility() check + * would be needed. + */ EEOP_ASSIGN_INNER_VAR, EEOP_ASSIGN_OUTER_VAR, EEOP_ASSIGN_SCAN_VAR, @@ -621,6 +620,9 @@ extern void ExprEvalPushStep(ExprState *es, const ExprEvalStep *s); extern void ExecReadyInterpretedExpr(ExprState *state); extern ExprEvalOp ExecEvalStepOp(ExprState *state, ExprEvalStep *op); +extern Datum ExecInterpExprStillValid(ExprState *state, ExprContext *econtext, bool *isNull); +extern void CheckExprStillValid(ExprState *state, ExprContext *econtext); + /* * Non fast-path execution functions. 
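Among the new integer FOR loop tests below, the 2147483620..2147483647
by 10 cases exercise the overflow defense mentioned above: the loop must
stop cleanly when advancing the control variable would exceed the int32
range. A minimal standalone sketch of such a check, with illustrative
names rather than the actual exec_stmt_fori code:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Advance an int32 loop variable by a positive step, returning false when
 * the loop is done, including when the increment itself would overflow.
 * Illustrative only; not the actual exec_stmt_fori implementation.
 */
static bool
advance(int32_t *i, int32_t end, int32_t by)
{
	if (*i >= end)              /* the value just used was the last one */
		return false;
	if (*i > INT32_MAX - by)    /* *i + by would overflow int32 */
		return false;
	*i += by;
	return *i <= end;
}

int
main(void)
{
	int32_t i = 2147483620;

	do
		printf("i = %" PRId32 "\n", i);     /* 2147483620, ...630, ...640 */
	while (advance(&i, INT32_MAX, 10));
	return 0;
}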
These are externs instead of statics in * execExprInterp.c, because that allows them to be used by other methods of diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index c9a5279dc5..94351eafad 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -87,6 +87,9 @@ typedef struct ExprState /* original expression tree, for debugging only */ Expr *expr; + /* private state for an evalfunc */ + void *evalfunc_private; + /* * XXX: following fields only needed during "compilation" (ExecInitExpr); * could be thrown away afterwards. From 5303ffe71b4d28663e0881199bb1a5ea26217ce4 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 29 Dec 2017 18:32:32 -0300 Subject: [PATCH 0753/1087] Fix typo --- src/bin/pg_basebackup/pg_recvlogical.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/pg_basebackup/pg_recvlogical.c b/src/bin/pg_basebackup/pg_recvlogical.c index c7893c10ca..f3cb831d59 100644 --- a/src/bin/pg_basebackup/pg_recvlogical.c +++ b/src/bin/pg_basebackup/pg_recvlogical.c @@ -1037,7 +1037,7 @@ flushAndSendFeedback(PGconn *conn, TimestampTz *now) } /* - * Try to inform the server about of upcoming demise, but don't wait around or + * Try to inform the server about our upcoming demise, but don't wait around or * retry on failure. */ static void From dd2243f2ade43bcad8e615e6cf4286be250e374a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 31 Dec 2017 17:04:11 -0500 Subject: [PATCH 0754/1087] Improve regression tests' code coverage for plpgsql control structures. I noticed that our code coverage report showed considerable deficiency in test coverage for PL/pgSQL control statements. Notably, both exec_stmt_block and most of the loop control statements had very poor coverage of handling of return/exit/continue result codes from their child statements; and exec_stmt_fori was seriously lacking in feature coverage, having no test that exercised its BY or REVERSE features, nor verification that its overflow defenses work. Now that we have some infrastructure for plpgsql-specific test scripts, the natural thing to do is make a new script rather than further extend plpgsql.sql. So I created a new script plpgsql_control.sql with the charter to test plpgsql control structures, and moved a few existing tests there because they fell entirely under that charter. I then added new test cases that exercise the bits of code complained of above. Of the five kinds of loop statements, only exec_stmt_while's result code handling is fully exercised by these tests. That would be a deficiency as things stand, but a follow-on commit will merge the loop statements' result code handling into one implementation. So testing each usage of that implementation separately seems redundant. In passing, also add a couple test cases to plpgsql.sql to more fully exercise plpgsql's code related to expanded arrays --- I had thought that area was sufficiently covered already, but the coverage report showed a couple of un-executed code paths. 
Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us --- src/pl/plpgsql/src/Makefile | 2 +- .../plpgsql/src/expected/plpgsql_control.out | 672 ++++++++++++++++++ src/pl/plpgsql/src/sql/plpgsql_control.sql | 476 +++++++++++++ src/test/regress/expected/plpgsql.out | 476 +------------ src/test/regress/sql/plpgsql.sql | 310 +------- 5 files changed, 1171 insertions(+), 765 deletions(-) create mode 100644 src/pl/plpgsql/src/expected/plpgsql_control.out create mode 100644 src/pl/plpgsql/src/sql/plpgsql_control.sql diff --git a/src/pl/plpgsql/src/Makefile b/src/pl/plpgsql/src/Makefile index 76ac247e57..14a4d83584 100644 --- a/src/pl/plpgsql/src/Makefile +++ b/src/pl/plpgsql/src/Makefile @@ -26,7 +26,7 @@ DATA = plpgsql.control plpgsql--1.0.sql plpgsql--unpackaged--1.0.sql REGRESS_OPTS = --dbname=$(PL_TESTDB) -REGRESS = plpgsql_call +REGRESS = plpgsql_call plpgsql_control all: all-lib diff --git a/src/pl/plpgsql/src/expected/plpgsql_control.out b/src/pl/plpgsql/src/expected/plpgsql_control.out new file mode 100644 index 0000000000..73b23a35e5 --- /dev/null +++ b/src/pl/plpgsql/src/expected/plpgsql_control.out @@ -0,0 +1,672 @@ +-- +-- Tests for PL/pgSQL control structures +-- +-- integer FOR loop +do $$ +begin + -- basic case + for i in 1..3 loop + raise notice '1..3: i = %', i; + end loop; + -- with BY, end matches exactly + for i in 1..10 by 3 loop + raise notice '1..10 by 3: i = %', i; + end loop; + -- with BY, end does not match + for i in 1..11 by 3 loop + raise notice '1..11 by 3: i = %', i; + end loop; + -- zero iterations + for i in 1..0 by 3 loop + raise notice '1..0 by 3: i = %', i; + end loop; + -- REVERSE + for i in reverse 10..0 by 3 loop + raise notice 'reverse 10..0 by 3: i = %', i; + end loop; + -- potential overflow + for i in 2147483620..2147483647 by 10 loop + raise notice '2147483620..2147483647 by 10: i = %', i; + end loop; + -- potential overflow, reverse direction + for i in reverse -2147483620..-2147483647 by 10 loop + raise notice 'reverse -2147483620..-2147483647 by 10: i = %', i; + end loop; +end$$; +NOTICE: 1..3: i = 1 +NOTICE: 1..3: i = 2 +NOTICE: 1..3: i = 3 +NOTICE: 1..10 by 3: i = 1 +NOTICE: 1..10 by 3: i = 4 +NOTICE: 1..10 by 3: i = 7 +NOTICE: 1..10 by 3: i = 10 +NOTICE: 1..11 by 3: i = 1 +NOTICE: 1..11 by 3: i = 4 +NOTICE: 1..11 by 3: i = 7 +NOTICE: 1..11 by 3: i = 10 +NOTICE: reverse 10..0 by 3: i = 10 +NOTICE: reverse 10..0 by 3: i = 7 +NOTICE: reverse 10..0 by 3: i = 4 +NOTICE: reverse 10..0 by 3: i = 1 +NOTICE: 2147483620..2147483647 by 10: i = 2147483620 +NOTICE: 2147483620..2147483647 by 10: i = 2147483630 +NOTICE: 2147483620..2147483647 by 10: i = 2147483640 +NOTICE: reverse -2147483620..-2147483647 by 10: i = -2147483620 +NOTICE: reverse -2147483620..-2147483647 by 10: i = -2147483630 +NOTICE: reverse -2147483620..-2147483647 by 10: i = -2147483640 +-- BY can't be zero or negative +do $$ +begin + for i in 1..3 by 0 loop + raise notice '1..3 by 0: i = %', i; + end loop; +end$$; +ERROR: BY value of FOR loop must be greater than zero +CONTEXT: PL/pgSQL function inline_code_block line 3 at FOR with integer loop variable +do $$ +begin + for i in 1..3 by -1 loop + raise notice '1..3 by -1: i = %', i; + end loop; +end$$; +ERROR: BY value of FOR loop must be greater than zero +CONTEXT: PL/pgSQL function inline_code_block line 3 at FOR with integer loop variable +do $$ +begin + for i in reverse 1..3 by -1 loop + raise notice 'reverse 1..3 by -1: i = %', i; + end loop; +end$$; +ERROR: BY value of FOR loop must be greater than zero +CONTEXT: PL/pgSQL 
function inline_code_block line 3 at FOR with integer loop variable +-- CONTINUE statement +create table conttesttbl(idx serial, v integer); +insert into conttesttbl(v) values(10); +insert into conttesttbl(v) values(20); +insert into conttesttbl(v) values(30); +insert into conttesttbl(v) values(40); +create function continue_test1() returns void as $$ +declare _i integer = 0; _r record; +begin + raise notice '---1---'; + loop + _i := _i + 1; + raise notice '%', _i; + continue when _i < 10; + exit; + end loop; + + raise notice '---2---'; + <> + loop + _i := _i - 1; + loop + raise notice '%', _i; + continue lbl when _i > 0; + exit lbl; + end loop; + end loop; + + raise notice '---3---'; + <> + while _i < 10 loop + _i := _i + 1; + continue the_loop when _i % 2 = 0; + raise notice '%', _i; + end loop; + + raise notice '---4---'; + for _i in 1..10 loop + begin + -- applies to outer loop, not the nested begin block + continue when _i < 5; + raise notice '%', _i; + end; + end loop; + + raise notice '---5---'; + for _r in select * from conttesttbl loop + continue when _r.v <= 20; + raise notice '%', _r.v; + end loop; + + raise notice '---6---'; + for _r in execute 'select * from conttesttbl' loop + continue when _r.v <= 20; + raise notice '%', _r.v; + end loop; + + raise notice '---7---'; + <> + for _i in 1..3 loop + continue looplabel when _i = 2; + raise notice '%', _i; + end loop; + + raise notice '---8---'; + _i := 1; + while _i <= 3 loop + raise notice '%', _i; + _i := _i + 1; + continue when _i = 3; + end loop; + + raise notice '---9---'; + for _r in select * from conttesttbl order by v limit 1 loop + raise notice '%', _r.v; + continue; + end loop; + + raise notice '---10---'; + for _r in execute 'select * from conttesttbl order by v limit 1' loop + raise notice '%', _r.v; + continue; + end loop; + + raise notice '---11---'; + <> + for _i in 1..2 loop + raise notice 'outer %', _i; + <> + for _j in 1..3 loop + continue outerlooplabel when _j = 2; + raise notice 'inner %', _j; + end loop; + end loop; +end; $$ language plpgsql; +select continue_test1(); +NOTICE: ---1--- +NOTICE: 1 +NOTICE: 2 +NOTICE: 3 +NOTICE: 4 +NOTICE: 5 +NOTICE: 6 +NOTICE: 7 +NOTICE: 8 +NOTICE: 9 +NOTICE: 10 +NOTICE: ---2--- +NOTICE: 9 +NOTICE: 8 +NOTICE: 7 +NOTICE: 6 +NOTICE: 5 +NOTICE: 4 +NOTICE: 3 +NOTICE: 2 +NOTICE: 1 +NOTICE: 0 +NOTICE: ---3--- +NOTICE: 1 +NOTICE: 3 +NOTICE: 5 +NOTICE: 7 +NOTICE: 9 +NOTICE: ---4--- +NOTICE: 5 +NOTICE: 6 +NOTICE: 7 +NOTICE: 8 +NOTICE: 9 +NOTICE: 10 +NOTICE: ---5--- +NOTICE: 30 +NOTICE: 40 +NOTICE: ---6--- +NOTICE: 30 +NOTICE: 40 +NOTICE: ---7--- +NOTICE: 1 +NOTICE: 3 +NOTICE: ---8--- +NOTICE: 1 +NOTICE: 2 +NOTICE: 3 +NOTICE: ---9--- +NOTICE: 10 +NOTICE: ---10--- +NOTICE: 10 +NOTICE: ---11--- +NOTICE: outer 1 +NOTICE: inner 1 +NOTICE: outer 2 +NOTICE: inner 1 + continue_test1 +---------------- + +(1 row) + +-- should fail: CONTINUE is only legal inside a loop +create function continue_error1() returns void as $$ +begin + begin + continue; + end; +end; +$$ language plpgsql; +ERROR: CONTINUE cannot be used outside a loop +LINE 4: continue; + ^ +-- should fail: unlabeled EXIT is only legal inside a loop +create function exit_error1() returns void as $$ +begin + begin + exit; + end; +end; +$$ language plpgsql; +ERROR: EXIT cannot be used outside a loop, unless it has a label +LINE 4: exit; + ^ +-- should fail: no such label +create function continue_error2() returns void as $$ +begin + begin + loop + continue no_such_label; + end loop; + end; +end; +$$ language plpgsql; +ERROR: there is no 
label "no_such_label" attached to any block or loop enclosing this statement +LINE 5: continue no_such_label; + ^ +-- should fail: no such label +create function exit_error2() returns void as $$ +begin + begin + loop + exit no_such_label; + end loop; + end; +end; +$$ language plpgsql; +ERROR: there is no label "no_such_label" attached to any block or loop enclosing this statement +LINE 5: exit no_such_label; + ^ +-- should fail: CONTINUE can't reference the label of a named block +create function continue_error3() returns void as $$ +begin + <> + begin + loop + continue begin_block1; + end loop; + end; +end; +$$ language plpgsql; +ERROR: block label "begin_block1" cannot be used in CONTINUE +LINE 6: continue begin_block1; + ^ +-- On the other hand, EXIT *can* reference the label of a named block +create function exit_block1() returns void as $$ +begin + <> + begin + loop + exit begin_block1; + raise exception 'should not get here'; + end loop; + end; +end; +$$ language plpgsql; +select exit_block1(); + exit_block1 +------------- + +(1 row) + +-- verbose end block and end loop +create function end_label1() returns void as $$ +<> +begin + <> + for i in 1 .. 10 loop + raise notice 'i = %', i; + exit flbl1; + end loop flbl1; + <> + for j in 1 .. 10 loop + raise notice 'j = %', j; + exit flbl2; + end loop; +end blbl; +$$ language plpgsql; +select end_label1(); +NOTICE: i = 1 +NOTICE: j = 1 + end_label1 +------------ + +(1 row) + +-- should fail: undefined end label +create function end_label2() returns void as $$ +begin + for _i in 1 .. 10 loop + exit; + end loop flbl1; +end; +$$ language plpgsql; +ERROR: end label "flbl1" specified for unlabelled block +LINE 5: end loop flbl1; + ^ +-- should fail: end label does not match start label +create function end_label3() returns void as $$ +<> +begin + <> + for _i in 1 .. 10 loop + exit; + end loop outer_label; +end; +$$ language plpgsql; +ERROR: end label "outer_label" differs from block's label "inner_label" +LINE 7: end loop outer_label; + ^ +-- should fail: end label on a block without a start label +create function end_label4() returns void as $$ +<> +begin + for _i in 1 .. 
10 loop + exit; + end loop outer_label; +end; +$$ language plpgsql; +ERROR: end label "outer_label" specified for unlabelled block +LINE 6: end loop outer_label; + ^ +-- unlabeled exit matches no blocks +do $$ +begin +for i in 1..10 loop + <> + begin + begin -- unlabeled block + exit; + raise notice 'should not get here'; + end; + raise notice 'should not get here, either'; + end; + raise notice 'nor here'; +end loop; +raise notice 'should get here'; +end$$; +NOTICE: should get here +-- check exit out of an unlabeled block to a labeled one +do $$ +<> +begin + <> + begin + <> + begin + begin -- unlabeled block + exit innerblock; + raise notice 'should not get here'; + end; + raise notice 'should not get here, either'; + end; + raise notice 'nor here'; + end; + raise notice 'should get here'; +end$$; +NOTICE: should get here +-- unlabeled exit does match a while loop +do $$ +begin + <> + while 1 > 0 loop + <> + while 1 > 0 loop + <> + while 1 > 0 loop + exit; + raise notice 'should not get here'; + end loop; + raise notice 'should get here'; + exit outermostwhile; + raise notice 'should not get here, either'; + end loop; + raise notice 'nor here'; + end loop; + raise notice 'should get here, too'; +end$$; +NOTICE: should get here +NOTICE: should get here, too +-- check exit out of an unlabeled while to a labeled one +do $$ +begin + <> + while 1 > 0 loop + while 1 > 0 loop + exit outerwhile; + raise notice 'should not get here'; + end loop; + raise notice 'should not get here, either'; + end loop; + raise notice 'should get here'; +end$$; +NOTICE: should get here +-- continue to an outer while +do $$ +declare i int := 0; +begin + <> + while i < 2 loop + raise notice 'outermostwhile, i = %', i; + i := i + 1; + <> + while 1 > 0 loop + <> + while 1 > 0 loop + continue outermostwhile; + raise notice 'should not get here'; + end loop; + raise notice 'should not get here, either'; + end loop; + raise notice 'nor here'; + end loop; + raise notice 'out of outermostwhile, i = %', i; +end$$; +NOTICE: outermostwhile, i = 0 +NOTICE: outermostwhile, i = 1 +NOTICE: out of outermostwhile, i = 2 +-- return out of a while +create function return_from_while() returns int language plpgsql as $$ +declare i int := 0; +begin + while i < 10 loop + if i > 2 then + return i; + end if; + i := i + 1; + end loop; + return null; +end$$; +select return_from_while(); + return_from_while +------------------- + 3 +(1 row) + +-- using list of scalars in fori and fore stmts +create function for_vect() returns void as $proc$ +<>declare a integer; b varchar; c varchar; r record; +begin + -- fori + for i in 1 .. 
3 loop + raise notice '%', i; + end loop; + -- fore with record var + for r in select gs as aa, 'BB' as bb, 'CC' as cc from generate_series(1,4) gs loop + raise notice '% % %', r.aa, r.bb, r.cc; + end loop; + -- fore with single scalar + for a in select gs from generate_series(1,4) gs loop + raise notice '%', a; + end loop; + -- fore with multiple scalars + for a,b,c in select gs, 'BB','CC' from generate_series(1,4) gs loop + raise notice '% % %', a, b, c; + end loop; + -- using qualified names in fors, fore is enabled, disabled only for fori + for lbl.a, lbl.b, lbl.c in execute $$select gs, 'bb','cc' from generate_series(1,4) gs$$ loop + raise notice '% % %', a, b, c; + end loop; +end; +$proc$ language plpgsql; +select for_vect(); +NOTICE: 1 +NOTICE: 2 +NOTICE: 3 +NOTICE: 1 BB CC +NOTICE: 2 BB CC +NOTICE: 3 BB CC +NOTICE: 4 BB CC +NOTICE: 1 +NOTICE: 2 +NOTICE: 3 +NOTICE: 4 +NOTICE: 1 BB CC +NOTICE: 2 BB CC +NOTICE: 3 BB CC +NOTICE: 4 BB CC +NOTICE: 1 bb cc +NOTICE: 2 bb cc +NOTICE: 3 bb cc +NOTICE: 4 bb cc + for_vect +---------- + +(1 row) + +-- CASE statement +create or replace function case_test(bigint) returns text as $$ +declare a int = 10; + b int = 1; +begin + case $1 + when 1 then + return 'one'; + when 2 then + return 'two'; + when 3,4,3+5 then + return 'three, four or eight'; + when a then + return 'ten'; + when a+b, a+b+1 then + return 'eleven, twelve'; + end case; +end; +$$ language plpgsql immutable; +select case_test(1); + case_test +----------- + one +(1 row) + +select case_test(2); + case_test +----------- + two +(1 row) + +select case_test(3); + case_test +---------------------- + three, four or eight +(1 row) + +select case_test(4); + case_test +---------------------- + three, four or eight +(1 row) + +select case_test(5); -- fails +ERROR: case not found +HINT: CASE statement is missing ELSE part. +CONTEXT: PL/pgSQL function case_test(bigint) line 5 at CASE +select case_test(8); + case_test +---------------------- + three, four or eight +(1 row) + +select case_test(10); + case_test +----------- + ten +(1 row) + +select case_test(11); + case_test +---------------- + eleven, twelve +(1 row) + +select case_test(12); + case_test +---------------- + eleven, twelve +(1 row) + +select case_test(13); -- fails +ERROR: case not found +HINT: CASE statement is missing ELSE part. 
+CONTEXT: PL/pgSQL function case_test(bigint) line 5 at CASE +create or replace function catch() returns void as $$ +begin + raise notice '%', case_test(6); +exception + when case_not_found then + raise notice 'caught case_not_found % %', SQLSTATE, SQLERRM; +end +$$ language plpgsql; +select catch(); +NOTICE: caught case_not_found 20000 case not found + catch +------- + +(1 row) + +-- test the searched variant too, as well as ELSE +create or replace function case_test(bigint) returns text as $$ +declare a int = 10; +begin + case + when $1 = 1 then + return 'one'; + when $1 = a + 2 then + return 'twelve'; + else + return 'other'; + end case; +end; +$$ language plpgsql immutable; +select case_test(1); + case_test +----------- + one +(1 row) + +select case_test(2); + case_test +----------- + other +(1 row) + +select case_test(12); + case_test +----------- + twelve +(1 row) + +select case_test(13); + case_test +----------- + other +(1 row) + diff --git a/src/pl/plpgsql/src/sql/plpgsql_control.sql b/src/pl/plpgsql/src/sql/plpgsql_control.sql new file mode 100644 index 0000000000..61d6ca6451 --- /dev/null +++ b/src/pl/plpgsql/src/sql/plpgsql_control.sql @@ -0,0 +1,476 @@ +-- +-- Tests for PL/pgSQL control structures +-- + +-- integer FOR loop + +do $$ +begin + -- basic case + for i in 1..3 loop + raise notice '1..3: i = %', i; + end loop; + -- with BY, end matches exactly + for i in 1..10 by 3 loop + raise notice '1..10 by 3: i = %', i; + end loop; + -- with BY, end does not match + for i in 1..11 by 3 loop + raise notice '1..11 by 3: i = %', i; + end loop; + -- zero iterations + for i in 1..0 by 3 loop + raise notice '1..0 by 3: i = %', i; + end loop; + -- REVERSE + for i in reverse 10..0 by 3 loop + raise notice 'reverse 10..0 by 3: i = %', i; + end loop; + -- potential overflow + for i in 2147483620..2147483647 by 10 loop + raise notice '2147483620..2147483647 by 10: i = %', i; + end loop; + -- potential overflow, reverse direction + for i in reverse -2147483620..-2147483647 by 10 loop + raise notice 'reverse -2147483620..-2147483647 by 10: i = %', i; + end loop; +end$$; + +-- BY can't be zero or negative +do $$ +begin + for i in 1..3 by 0 loop + raise notice '1..3 by 0: i = %', i; + end loop; +end$$; + +do $$ +begin + for i in 1..3 by -1 loop + raise notice '1..3 by -1: i = %', i; + end loop; +end$$; + +do $$ +begin + for i in reverse 1..3 by -1 loop + raise notice 'reverse 1..3 by -1: i = %', i; + end loop; +end$$; + + +-- CONTINUE statement + +create table conttesttbl(idx serial, v integer); +insert into conttesttbl(v) values(10); +insert into conttesttbl(v) values(20); +insert into conttesttbl(v) values(30); +insert into conttesttbl(v) values(40); + +create function continue_test1() returns void as $$ +declare _i integer = 0; _r record; +begin + raise notice '---1---'; + loop + _i := _i + 1; + raise notice '%', _i; + continue when _i < 10; + exit; + end loop; + + raise notice '---2---'; + <> + loop + _i := _i - 1; + loop + raise notice '%', _i; + continue lbl when _i > 0; + exit lbl; + end loop; + end loop; + + raise notice '---3---'; + <> + while _i < 10 loop + _i := _i + 1; + continue the_loop when _i % 2 = 0; + raise notice '%', _i; + end loop; + + raise notice '---4---'; + for _i in 1..10 loop + begin + -- applies to outer loop, not the nested begin block + continue when _i < 5; + raise notice '%', _i; + end; + end loop; + + raise notice '---5---'; + for _r in select * from conttesttbl loop + continue when _r.v <= 20; + raise notice '%', _r.v; + end loop; + + raise notice '---6---'; + 
for _r in execute 'select * from conttesttbl' loop + continue when _r.v <= 20; + raise notice '%', _r.v; + end loop; + + raise notice '---7---'; + <> + for _i in 1..3 loop + continue looplabel when _i = 2; + raise notice '%', _i; + end loop; + + raise notice '---8---'; + _i := 1; + while _i <= 3 loop + raise notice '%', _i; + _i := _i + 1; + continue when _i = 3; + end loop; + + raise notice '---9---'; + for _r in select * from conttesttbl order by v limit 1 loop + raise notice '%', _r.v; + continue; + end loop; + + raise notice '---10---'; + for _r in execute 'select * from conttesttbl order by v limit 1' loop + raise notice '%', _r.v; + continue; + end loop; + + raise notice '---11---'; + <> + for _i in 1..2 loop + raise notice 'outer %', _i; + <> + for _j in 1..3 loop + continue outerlooplabel when _j = 2; + raise notice 'inner %', _j; + end loop; + end loop; +end; $$ language plpgsql; + +select continue_test1(); + +-- should fail: CONTINUE is only legal inside a loop +create function continue_error1() returns void as $$ +begin + begin + continue; + end; +end; +$$ language plpgsql; + +-- should fail: unlabeled EXIT is only legal inside a loop +create function exit_error1() returns void as $$ +begin + begin + exit; + end; +end; +$$ language plpgsql; + +-- should fail: no such label +create function continue_error2() returns void as $$ +begin + begin + loop + continue no_such_label; + end loop; + end; +end; +$$ language plpgsql; + +-- should fail: no such label +create function exit_error2() returns void as $$ +begin + begin + loop + exit no_such_label; + end loop; + end; +end; +$$ language plpgsql; + +-- should fail: CONTINUE can't reference the label of a named block +create function continue_error3() returns void as $$ +begin + <> + begin + loop + continue begin_block1; + end loop; + end; +end; +$$ language plpgsql; + +-- On the other hand, EXIT *can* reference the label of a named block +create function exit_block1() returns void as $$ +begin + <> + begin + loop + exit begin_block1; + raise exception 'should not get here'; + end loop; + end; +end; +$$ language plpgsql; + +select exit_block1(); + +-- verbose end block and end loop +create function end_label1() returns void as $$ +<> +begin + <> + for i in 1 .. 10 loop + raise notice 'i = %', i; + exit flbl1; + end loop flbl1; + <> + for j in 1 .. 10 loop + raise notice 'j = %', j; + exit flbl2; + end loop; +end blbl; +$$ language plpgsql; + +select end_label1(); + +-- should fail: undefined end label +create function end_label2() returns void as $$ +begin + for _i in 1 .. 10 loop + exit; + end loop flbl1; +end; +$$ language plpgsql; + +-- should fail: end label does not match start label +create function end_label3() returns void as $$ +<> +begin + <> + for _i in 1 .. 10 loop + exit; + end loop outer_label; +end; +$$ language plpgsql; + +-- should fail: end label on a block without a start label +create function end_label4() returns void as $$ +<> +begin + for _i in 1 .. 
10 loop + exit; + end loop outer_label; +end; +$$ language plpgsql; + +-- unlabeled exit matches no blocks +do $$ +begin +for i in 1..10 loop + <> + begin + begin -- unlabeled block + exit; + raise notice 'should not get here'; + end; + raise notice 'should not get here, either'; + end; + raise notice 'nor here'; +end loop; +raise notice 'should get here'; +end$$; + +-- check exit out of an unlabeled block to a labeled one +do $$ +<> +begin + <> + begin + <> + begin + begin -- unlabeled block + exit innerblock; + raise notice 'should not get here'; + end; + raise notice 'should not get here, either'; + end; + raise notice 'nor here'; + end; + raise notice 'should get here'; +end$$; + +-- unlabeled exit does match a while loop +do $$ +begin + <> + while 1 > 0 loop + <> + while 1 > 0 loop + <> + while 1 > 0 loop + exit; + raise notice 'should not get here'; + end loop; + raise notice 'should get here'; + exit outermostwhile; + raise notice 'should not get here, either'; + end loop; + raise notice 'nor here'; + end loop; + raise notice 'should get here, too'; +end$$; + +-- check exit out of an unlabeled while to a labeled one +do $$ +begin + <> + while 1 > 0 loop + while 1 > 0 loop + exit outerwhile; + raise notice 'should not get here'; + end loop; + raise notice 'should not get here, either'; + end loop; + raise notice 'should get here'; +end$$; + +-- continue to an outer while +do $$ +declare i int := 0; +begin + <> + while i < 2 loop + raise notice 'outermostwhile, i = %', i; + i := i + 1; + <> + while 1 > 0 loop + <> + while 1 > 0 loop + continue outermostwhile; + raise notice 'should not get here'; + end loop; + raise notice 'should not get here, either'; + end loop; + raise notice 'nor here'; + end loop; + raise notice 'out of outermostwhile, i = %', i; +end$$; + +-- return out of a while +create function return_from_while() returns int language plpgsql as $$ +declare i int := 0; +begin + while i < 10 loop + if i > 2 then + return i; + end if; + i := i + 1; + end loop; + return null; +end$$; + +select return_from_while(); + +-- using list of scalars in fori and fore stmts +create function for_vect() returns void as $proc$ +<>declare a integer; b varchar; c varchar; r record; +begin + -- fori + for i in 1 .. 
3 loop + raise notice '%', i; + end loop; + -- fore with record var + for r in select gs as aa, 'BB' as bb, 'CC' as cc from generate_series(1,4) gs loop + raise notice '% % %', r.aa, r.bb, r.cc; + end loop; + -- fore with single scalar + for a in select gs from generate_series(1,4) gs loop + raise notice '%', a; + end loop; + -- fore with multiple scalars + for a,b,c in select gs, 'BB','CC' from generate_series(1,4) gs loop + raise notice '% % %', a, b, c; + end loop; + -- using qualified names in fors, fore is enabled, disabled only for fori + for lbl.a, lbl.b, lbl.c in execute $$select gs, 'bb','cc' from generate_series(1,4) gs$$ loop + raise notice '% % %', a, b, c; + end loop; +end; +$proc$ language plpgsql; + +select for_vect(); + +-- CASE statement + +create or replace function case_test(bigint) returns text as $$ +declare a int = 10; + b int = 1; +begin + case $1 + when 1 then + return 'one'; + when 2 then + return 'two'; + when 3,4,3+5 then + return 'three, four or eight'; + when a then + return 'ten'; + when a+b, a+b+1 then + return 'eleven, twelve'; + end case; +end; +$$ language plpgsql immutable; + +select case_test(1); +select case_test(2); +select case_test(3); +select case_test(4); +select case_test(5); -- fails +select case_test(8); +select case_test(10); +select case_test(11); +select case_test(12); +select case_test(13); -- fails + +create or replace function catch() returns void as $$ +begin + raise notice '%', case_test(6); +exception + when case_not_found then + raise notice 'caught case_not_found % %', SQLSTATE, SQLERRM; +end +$$ language plpgsql; + +select catch(); + +-- test the searched variant too, as well as ELSE +create or replace function case_test(bigint) returns text as $$ +declare a int = 10; +begin + case + when $1 = 1 then + return 'one'; + when $1 = a + 2 then + return 'twelve'; + else + return 'other'; + end case; +end; +$$ language plpgsql immutable; + +select case_test(1); +select case_test(2); +select case_test(12); +select case_test(13); diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index a2df411132..4783807ae0 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -2706,339 +2706,6 @@ NOTICE: {10,20,30}; 20; xyz; xyzabc; (10,aaa,,30); (1 row) drop function raise_exprs(); --- continue statement -create table conttesttbl(idx serial, v integer); -insert into conttesttbl(v) values(10); -insert into conttesttbl(v) values(20); -insert into conttesttbl(v) values(30); -insert into conttesttbl(v) values(40); -create function continue_test1() returns void as $$ -declare _i integer = 0; _r record; -begin - raise notice '---1---'; - loop - _i := _i + 1; - raise notice '%', _i; - continue when _i < 10; - exit; - end loop; - - raise notice '---2---'; - <> - loop - _i := _i - 1; - loop - raise notice '%', _i; - continue lbl when _i > 0; - exit lbl; - end loop; - end loop; - - raise notice '---3---'; - <> - while _i < 10 loop - _i := _i + 1; - continue the_loop when _i % 2 = 0; - raise notice '%', _i; - end loop; - - raise notice '---4---'; - for _i in 1..10 loop - begin - -- applies to outer loop, not the nested begin block - continue when _i < 5; - raise notice '%', _i; - end; - end loop; - - raise notice '---5---'; - for _r in select * from conttesttbl loop - continue when _r.v <= 20; - raise notice '%', _r.v; - end loop; - - raise notice '---6---'; - for _r in execute 'select * from conttesttbl' loop - continue when _r.v <= 20; - raise notice '%', _r.v; - end loop; - - raise 
notice '---7---'; - for _i in 1..3 loop - raise notice '%', _i; - continue when _i = 3; - end loop; - - raise notice '---8---'; - _i := 1; - while _i <= 3 loop - raise notice '%', _i; - _i := _i + 1; - continue when _i = 3; - end loop; - - raise notice '---9---'; - for _r in select * from conttesttbl order by v limit 1 loop - raise notice '%', _r.v; - continue; - end loop; - - raise notice '---10---'; - for _r in execute 'select * from conttesttbl order by v limit 1' loop - raise notice '%', _r.v; - continue; - end loop; -end; $$ language plpgsql; -select continue_test1(); -NOTICE: ---1--- -NOTICE: 1 -NOTICE: 2 -NOTICE: 3 -NOTICE: 4 -NOTICE: 5 -NOTICE: 6 -NOTICE: 7 -NOTICE: 8 -NOTICE: 9 -NOTICE: 10 -NOTICE: ---2--- -NOTICE: 9 -NOTICE: 8 -NOTICE: 7 -NOTICE: 6 -NOTICE: 5 -NOTICE: 4 -NOTICE: 3 -NOTICE: 2 -NOTICE: 1 -NOTICE: 0 -NOTICE: ---3--- -NOTICE: 1 -NOTICE: 3 -NOTICE: 5 -NOTICE: 7 -NOTICE: 9 -NOTICE: ---4--- -NOTICE: 5 -NOTICE: 6 -NOTICE: 7 -NOTICE: 8 -NOTICE: 9 -NOTICE: 10 -NOTICE: ---5--- -NOTICE: 30 -NOTICE: 40 -NOTICE: ---6--- -NOTICE: 30 -NOTICE: 40 -NOTICE: ---7--- -NOTICE: 1 -NOTICE: 2 -NOTICE: 3 -NOTICE: ---8--- -NOTICE: 1 -NOTICE: 2 -NOTICE: 3 -NOTICE: ---9--- -NOTICE: 10 -NOTICE: ---10--- -NOTICE: 10 - continue_test1 ----------------- - -(1 row) - -drop function continue_test1(); -drop table conttesttbl; --- should fail: CONTINUE is only legal inside a loop -create function continue_error1() returns void as $$ -begin - begin - continue; - end; -end; -$$ language plpgsql; -ERROR: CONTINUE cannot be used outside a loop -LINE 4: continue; - ^ --- should fail: unlabeled EXIT is only legal inside a loop -create function exit_error1() returns void as $$ -begin - begin - exit; - end; -end; -$$ language plpgsql; -ERROR: EXIT cannot be used outside a loop, unless it has a label -LINE 4: exit; - ^ --- should fail: no such label -create function continue_error2() returns void as $$ -begin - begin - loop - continue no_such_label; - end loop; - end; -end; -$$ language plpgsql; -ERROR: there is no label "no_such_label" attached to any block or loop enclosing this statement -LINE 5: continue no_such_label; - ^ --- should fail: no such label -create function exit_error2() returns void as $$ -begin - begin - loop - exit no_such_label; - end loop; - end; -end; -$$ language plpgsql; -ERROR: there is no label "no_such_label" attached to any block or loop enclosing this statement -LINE 5: exit no_such_label; - ^ --- should fail: CONTINUE can't reference the label of a named block -create function continue_error3() returns void as $$ -begin - <> - begin - loop - continue begin_block1; - end loop; - end; -end; -$$ language plpgsql; -ERROR: block label "begin_block1" cannot be used in CONTINUE -LINE 6: continue begin_block1; - ^ --- On the other hand, EXIT *can* reference the label of a named block -create function exit_block1() returns void as $$ -begin - <> - begin - loop - exit begin_block1; - raise exception 'should not get here'; - end loop; - end; -end; -$$ language plpgsql; -select exit_block1(); - exit_block1 -------------- - -(1 row) - -drop function exit_block1(); --- verbose end block and end loop -create function end_label1() returns void as $$ -<> -begin - <> - for _i in 1 .. 10 loop - exit flbl1; - end loop flbl1; - <> - for _i in 1 .. 
10 loop - exit flbl2; - end loop; -end blbl; -$$ language plpgsql; -select end_label1(); - end_label1 ------------- - -(1 row) - -drop function end_label1(); --- should fail: undefined end label -create function end_label2() returns void as $$ -begin - for _i in 1 .. 10 loop - exit; - end loop flbl1; -end; -$$ language plpgsql; -ERROR: end label "flbl1" specified for unlabelled block -LINE 5: end loop flbl1; - ^ --- should fail: end label does not match start label -create function end_label3() returns void as $$ -<> -begin - <> - for _i in 1 .. 10 loop - exit; - end loop outer_label; -end; -$$ language plpgsql; -ERROR: end label "outer_label" differs from block's label "inner_label" -LINE 7: end loop outer_label; - ^ --- should fail: end label on a block without a start label -create function end_label4() returns void as $$ -<> -begin - for _i in 1 .. 10 loop - exit; - end loop outer_label; -end; -$$ language plpgsql; -ERROR: end label "outer_label" specified for unlabelled block -LINE 6: end loop outer_label; - ^ --- using list of scalars in fori and fore stmts -create function for_vect() returns void as $proc$ -<>declare a integer; b varchar; c varchar; r record; -begin - -- fori - for i in 1 .. 3 loop - raise notice '%', i; - end loop; - -- fore with record var - for r in select gs as aa, 'BB' as bb, 'CC' as cc from generate_series(1,4) gs loop - raise notice '% % %', r.aa, r.bb, r.cc; - end loop; - -- fore with single scalar - for a in select gs from generate_series(1,4) gs loop - raise notice '%', a; - end loop; - -- fore with multiple scalars - for a,b,c in select gs, 'BB','CC' from generate_series(1,4) gs loop - raise notice '% % %', a, b, c; - end loop; - -- using qualified names in fors, fore is enabled, disabled only for fori - for lbl.a, lbl.b, lbl.c in execute $$select gs, 'bb','cc' from generate_series(1,4) gs$$ loop - raise notice '% % %', a, b, c; - end loop; -end; -$proc$ language plpgsql; -select for_vect(); -NOTICE: 1 -NOTICE: 2 -NOTICE: 3 -NOTICE: 1 BB CC -NOTICE: 2 BB CC -NOTICE: 3 BB CC -NOTICE: 4 BB CC -NOTICE: 1 -NOTICE: 2 -NOTICE: 3 -NOTICE: 4 -NOTICE: 1 BB CC -NOTICE: 2 BB CC -NOTICE: 3 BB CC -NOTICE: 4 BB CC -NOTICE: 1 bb cc -NOTICE: 2 bb cc -NOTICE: 3 bb cc -NOTICE: 4 bb cc - for_vect ----------- - -(1 row) - -- regression test: verify that multiple uses of same plpgsql datum within -- a SQL command all get mapped to the same $n parameter. The return value -- of the SELECT is not important, we only care that it doesn't fail with @@ -4368,136 +4035,6 @@ NOTICE: column >>some column name<<, constraint >>some constraint name<<, type (1 row) drop function stacked_diagnostics_test(); --- test CASE statement -create or replace function case_test(bigint) returns text as $$ -declare a int = 10; - b int = 1; -begin - case $1 - when 1 then - return 'one'; - when 2 then - return 'two'; - when 3,4,3+5 then - return 'three, four or eight'; - when a then - return 'ten'; - when a+b, a+b+1 then - return 'eleven, twelve'; - end case; -end; -$$ language plpgsql immutable; -select case_test(1); - case_test ------------ - one -(1 row) - -select case_test(2); - case_test ------------ - two -(1 row) - -select case_test(3); - case_test ----------------------- - three, four or eight -(1 row) - -select case_test(4); - case_test ----------------------- - three, four or eight -(1 row) - -select case_test(5); -- fails -ERROR: case not found -HINT: CASE statement is missing ELSE part. 
-CONTEXT: PL/pgSQL function case_test(bigint) line 5 at CASE -select case_test(8); - case_test ----------------------- - three, four or eight -(1 row) - -select case_test(10); - case_test ------------ - ten -(1 row) - -select case_test(11); - case_test ----------------- - eleven, twelve -(1 row) - -select case_test(12); - case_test ----------------- - eleven, twelve -(1 row) - -select case_test(13); -- fails -ERROR: case not found -HINT: CASE statement is missing ELSE part. -CONTEXT: PL/pgSQL function case_test(bigint) line 5 at CASE -create or replace function catch() returns void as $$ -begin - raise notice '%', case_test(6); -exception - when case_not_found then - raise notice 'caught case_not_found % %', SQLSTATE, SQLERRM; -end -$$ language plpgsql; -select catch(); -NOTICE: caught case_not_found 20000 case not found - catch -------- - -(1 row) - --- test the searched variant too, as well as ELSE -create or replace function case_test(bigint) returns text as $$ -declare a int = 10; -begin - case - when $1 = 1 then - return 'one'; - when $1 = a + 2 then - return 'twelve'; - else - return 'other'; - end case; -end; -$$ language plpgsql immutable; -select case_test(1); - case_test ------------ - one -(1 row) - -select case_test(2); - case_test ------------ - other -(1 row) - -select case_test(12); - case_test ------------ - twelve -(1 row) - -select case_test(13); - case_test ------------ - other -(1 row) - -drop function catch(); -drop function case_test(bigint); -- test variadic functions create or replace function vari(variadic int[]) returns void as $$ @@ -5409,6 +4946,12 @@ create function consumes_rw_array(int[]) returns int language plpgsql as $$ begin return $1[1]; end; $$ stable; +select consumes_rw_array(returns_rw_array(42)); + consumes_rw_array +------------------- + 42 +(1 row) + -- bug #14174 explain (verbose, costs off) select i, a from @@ -5465,6 +5008,13 @@ select consumes_rw_array(a), a from 2 | {2,2} (2 rows) +do $$ +declare a int[] := array[1,2]; +begin + a := a || 3; + raise notice 'a = %', a; +end$$; +NOTICE: a = {1,2,3} -- -- Test access to call stack -- diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 02c8913801..768270d467 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -2285,241 +2285,6 @@ end;$$ language plpgsql; select raise_exprs(); drop function raise_exprs(); --- continue statement -create table conttesttbl(idx serial, v integer); -insert into conttesttbl(v) values(10); -insert into conttesttbl(v) values(20); -insert into conttesttbl(v) values(30); -insert into conttesttbl(v) values(40); - -create function continue_test1() returns void as $$ -declare _i integer = 0; _r record; -begin - raise notice '---1---'; - loop - _i := _i + 1; - raise notice '%', _i; - continue when _i < 10; - exit; - end loop; - - raise notice '---2---'; - <> - loop - _i := _i - 1; - loop - raise notice '%', _i; - continue lbl when _i > 0; - exit lbl; - end loop; - end loop; - - raise notice '---3---'; - <> - while _i < 10 loop - _i := _i + 1; - continue the_loop when _i % 2 = 0; - raise notice '%', _i; - end loop; - - raise notice '---4---'; - for _i in 1..10 loop - begin - -- applies to outer loop, not the nested begin block - continue when _i < 5; - raise notice '%', _i; - end; - end loop; - - raise notice '---5---'; - for _r in select * from conttesttbl loop - continue when _r.v <= 20; - raise notice '%', _r.v; - end loop; - - raise notice '---6---'; - for _r in execute 'select * from conttesttbl' loop - 
continue when _r.v <= 20; - raise notice '%', _r.v; - end loop; - - raise notice '---7---'; - for _i in 1..3 loop - raise notice '%', _i; - continue when _i = 3; - end loop; - - raise notice '---8---'; - _i := 1; - while _i <= 3 loop - raise notice '%', _i; - _i := _i + 1; - continue when _i = 3; - end loop; - - raise notice '---9---'; - for _r in select * from conttesttbl order by v limit 1 loop - raise notice '%', _r.v; - continue; - end loop; - - raise notice '---10---'; - for _r in execute 'select * from conttesttbl order by v limit 1' loop - raise notice '%', _r.v; - continue; - end loop; -end; $$ language plpgsql; - -select continue_test1(); - -drop function continue_test1(); -drop table conttesttbl; - --- should fail: CONTINUE is only legal inside a loop -create function continue_error1() returns void as $$ -begin - begin - continue; - end; -end; -$$ language plpgsql; - --- should fail: unlabeled EXIT is only legal inside a loop -create function exit_error1() returns void as $$ -begin - begin - exit; - end; -end; -$$ language plpgsql; - --- should fail: no such label -create function continue_error2() returns void as $$ -begin - begin - loop - continue no_such_label; - end loop; - end; -end; -$$ language plpgsql; - --- should fail: no such label -create function exit_error2() returns void as $$ -begin - begin - loop - exit no_such_label; - end loop; - end; -end; -$$ language plpgsql; - --- should fail: CONTINUE can't reference the label of a named block -create function continue_error3() returns void as $$ -begin - <> - begin - loop - continue begin_block1; - end loop; - end; -end; -$$ language plpgsql; - --- On the other hand, EXIT *can* reference the label of a named block -create function exit_block1() returns void as $$ -begin - <> - begin - loop - exit begin_block1; - raise exception 'should not get here'; - end loop; - end; -end; -$$ language plpgsql; - -select exit_block1(); -drop function exit_block1(); - --- verbose end block and end loop -create function end_label1() returns void as $$ -<> -begin - <> - for _i in 1 .. 10 loop - exit flbl1; - end loop flbl1; - <> - for _i in 1 .. 10 loop - exit flbl2; - end loop; -end blbl; -$$ language plpgsql; - -select end_label1(); -drop function end_label1(); - --- should fail: undefined end label -create function end_label2() returns void as $$ -begin - for _i in 1 .. 10 loop - exit; - end loop flbl1; -end; -$$ language plpgsql; - --- should fail: end label does not match start label -create function end_label3() returns void as $$ -<> -begin - <> - for _i in 1 .. 10 loop - exit; - end loop outer_label; -end; -$$ language plpgsql; - --- should fail: end label on a block without a start label -create function end_label4() returns void as $$ -<> -begin - for _i in 1 .. 10 loop - exit; - end loop outer_label; -end; -$$ language plpgsql; - --- using list of scalars in fori and fore stmts -create function for_vect() returns void as $proc$ -<>declare a integer; b varchar; c varchar; r record; -begin - -- fori - for i in 1 .. 
3 loop - raise notice '%', i; - end loop; - -- fore with record var - for r in select gs as aa, 'BB' as bb, 'CC' as cc from generate_series(1,4) gs loop - raise notice '% % %', r.aa, r.bb, r.cc; - end loop; - -- fore with single scalar - for a in select gs from generate_series(1,4) gs loop - raise notice '%', a; - end loop; - -- fore with multiple scalars - for a,b,c in select gs, 'BB','CC' from generate_series(1,4) gs loop - raise notice '% % %', a, b, c; - end loop; - -- using qualified names in fors, fore is enabled, disabled only for fori - for lbl.a, lbl.b, lbl.c in execute $$select gs, 'bb','cc' from generate_series(1,4) gs$$ loop - raise notice '% % %', a, b, c; - end loop; -end; -$proc$ language plpgsql; - -select for_vect(); - -- regression test: verify that multiple uses of same plpgsql datum within -- a SQL command all get mapped to the same $n parameter. The return value -- of the SELECT is not important, we only care that it doesn't fail with @@ -3580,72 +3345,6 @@ select stacked_diagnostics_test(); drop function stacked_diagnostics_test(); --- test CASE statement - -create or replace function case_test(bigint) returns text as $$ -declare a int = 10; - b int = 1; -begin - case $1 - when 1 then - return 'one'; - when 2 then - return 'two'; - when 3,4,3+5 then - return 'three, four or eight'; - when a then - return 'ten'; - when a+b, a+b+1 then - return 'eleven, twelve'; - end case; -end; -$$ language plpgsql immutable; - -select case_test(1); -select case_test(2); -select case_test(3); -select case_test(4); -select case_test(5); -- fails -select case_test(8); -select case_test(10); -select case_test(11); -select case_test(12); -select case_test(13); -- fails - -create or replace function catch() returns void as $$ -begin - raise notice '%', case_test(6); -exception - when case_not_found then - raise notice 'caught case_not_found % %', SQLSTATE, SQLERRM; -end -$$ language plpgsql; - -select catch(); - --- test the searched variant too, as well as ELSE -create or replace function case_test(bigint) returns text as $$ -declare a int = 10; -begin - case - when $1 = 1 then - return 'one'; - when $1 = a + 2 then - return 'twelve'; - else - return 'other'; - end case; -end; -$$ language plpgsql immutable; - -select case_test(1); -select case_test(2); -select case_test(12); -select case_test(13); - -drop function catch(); -drop function case_test(bigint); - -- test variadic functions create or replace function vari(variadic int[]) @@ -4278,6 +3977,8 @@ language plpgsql as $$ begin return $1[1]; end; $$ stable; +select consumes_rw_array(returns_rw_array(42)); + -- bug #14174 explain (verbose, costs off) select i, a from @@ -4300,6 +4001,13 @@ select consumes_rw_array(a), a from select consumes_rw_array(a), a from (values (returns_rw_array(1)), (returns_rw_array(2))) v(a); +do $$ +declare a int[] := array[1,2]; +begin + a := a || 3; + raise notice 'a = %', a; +end$$; + -- -- Test access to call stack From 3e724aac74e8325fe48dac8a30c2a7974eff7a14 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 31 Dec 2017 17:20:17 -0500 Subject: [PATCH 0755/1087] Merge coding of return/exit/continue cases in plpgsql's loop statements. plpgsql's five different loop control statements contained three distinct implementations of the same (or what ought to be the same, at least) logic for handling return/exit/continue result codes from their child statements. At best, that's trouble waiting to happen, and there seems no very good reason for the coding to be so different. 
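As a concrete illustration of the semantics all five loop types must share
(this sketch is for illustration only and is not part of the patch), a
labelled CONTINUE has to propagate out of any number of inner loops, of any
type, until it reaches the loop bearing that label:

    do $$
    begin
      <<outer_loop>>
      for i in 1..2 loop
        loop
          continue outer_loop;  -- must escape the inner loop first
        end loop;
        raise notice 'never reached';
      end loop;
      raise notice 'done';
    end$$;

Each loop statement must consume result codes addressed to itself and pass
all others up to its caller; that is the logic being unified here.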
Refactor so that all the common logic is expressed in a single macro. Discussion: https://postgr.es/m/26314.1514670401@sss.pgh.pa.us --- src/pl/plpgsql/src/pl_exec.c | 310 ++++++++++------------------------- 1 file changed, 90 insertions(+), 220 deletions(-) diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index dd575e73f4..cfc388498e 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -155,6 +155,80 @@ typedef struct /* cast_hash table entry */ static MemoryContext shared_cast_context = NULL; static HTAB *shared_cast_hash = NULL; +/* + * LOOP_RC_PROCESSING encapsulates common logic for looping statements to + * handle return/exit/continue result codes from the loop body statement(s). + * It's meant to be used like this: + * + * int rc = PLPGSQL_RC_OK; + * for (...) + * { + * ... + * rc = exec_stmts(estate, stmt->body); + * LOOP_RC_PROCESSING(stmt->label, break); + * ... + * } + * return rc; + * + * If execution of the loop should terminate, LOOP_RC_PROCESSING will execute + * "exit_action" (typically a "break" or "goto"), after updating "rc" to the + * value the current statement should return. If execution should continue, + * LOOP_RC_PROCESSING will do nothing except reset "rc" to PLPGSQL_RC_OK. + * + * estate and rc are implicit arguments to the macro. + * estate->exitlabel is examined and possibly updated. + */ +#define LOOP_RC_PROCESSING(looplabel, exit_action) \ + if (rc == PLPGSQL_RC_RETURN) \ + { \ + /* RETURN, so propagate RC_RETURN out */ \ + exit_action; \ + } \ + else if (rc == PLPGSQL_RC_EXIT) \ + { \ + if (estate->exitlabel == NULL) \ + { \ + /* unlabelled EXIT terminates this loop */ \ + rc = PLPGSQL_RC_OK; \ + exit_action; \ + } \ + else if ((looplabel) != NULL && \ + strcmp(looplabel, estate->exitlabel) == 0) \ + { \ + /* labelled EXIT matching this loop, so terminate loop */ \ + estate->exitlabel = NULL; \ + rc = PLPGSQL_RC_OK; \ + exit_action; \ + } \ + else \ + { \ + /* non-matching labelled EXIT, propagate RC_EXIT out */ \ + exit_action; \ + } \ + } \ + else if (rc == PLPGSQL_RC_CONTINUE) \ + { \ + if (estate->exitlabel == NULL) \ + { \ + /* unlabelled CONTINUE matches this loop, so continue in loop */ \ + rc = PLPGSQL_RC_OK; \ + } \ + else if ((looplabel) != NULL && \ + strcmp(looplabel, estate->exitlabel) == 0) \ + { \ + /* labelled CONTINUE matching this loop, so continue in loop */ \ + estate->exitlabel = NULL; \ + rc = PLPGSQL_RC_OK; \ + } \ + else \ + { \ + /* non-matching labelled CONTINUE, propagate RC_CONTINUE out */ \ + exit_action; \ + } \ + } \ + else \ + Assert(rc == PLPGSQL_RC_OK) + /************************************************************ * Local function forward declarations ************************************************************/ @@ -1476,7 +1550,9 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) estate->err_text = NULL; /* - * Handle the return code. + * Handle the return code. This is intentionally different from + * LOOP_RC_PROCESSING(): CONTINUE never matches a block, and EXIT matches + * a block only if there is a label match. */ switch (rc) { @@ -1486,11 +1562,6 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) return rc; case PLPGSQL_RC_EXIT: - - /* - * This is intentionally different from the handling of RC_EXIT - * for loops: to match a block, we require a match by label. 
- */ if (estate->exitlabel == NULL) return PLPGSQL_RC_EXIT; if (block->label == NULL) @@ -1948,45 +2019,16 @@ exec_stmt_case(PLpgSQL_execstate *estate, PLpgSQL_stmt_case *stmt) static int exec_stmt_loop(PLpgSQL_execstate *estate, PLpgSQL_stmt_loop *stmt) { + int rc = PLPGSQL_RC_OK; + for (;;) { - int rc = exec_stmts(estate, stmt->body); - - switch (rc) - { - case PLPGSQL_RC_OK: - break; - - case PLPGSQL_RC_EXIT: - if (estate->exitlabel == NULL) - return PLPGSQL_RC_OK; - if (stmt->label == NULL) - return PLPGSQL_RC_EXIT; - if (strcmp(stmt->label, estate->exitlabel) != 0) - return PLPGSQL_RC_EXIT; - estate->exitlabel = NULL; - return PLPGSQL_RC_OK; - - case PLPGSQL_RC_CONTINUE: - if (estate->exitlabel == NULL) - /* anonymous continue, so re-run the loop */ - break; - else if (stmt->label != NULL && - strcmp(stmt->label, estate->exitlabel) == 0) - /* label matches named continue, so re-run loop */ - estate->exitlabel = NULL; - else - /* label doesn't match named continue, so propagate upward */ - return PLPGSQL_RC_CONTINUE; - break; - - case PLPGSQL_RC_RETURN: - return rc; + rc = exec_stmts(estate, stmt->body); - default: - elog(ERROR, "unrecognized rc: %d", rc); - } + LOOP_RC_PROCESSING(stmt->label, break); } + + return rc; } @@ -1999,9 +2041,10 @@ exec_stmt_loop(PLpgSQL_execstate *estate, PLpgSQL_stmt_loop *stmt) static int exec_stmt_while(PLpgSQL_execstate *estate, PLpgSQL_stmt_while *stmt) { + int rc = PLPGSQL_RC_OK; + for (;;) { - int rc; bool value; bool isnull; @@ -2013,43 +2056,10 @@ exec_stmt_while(PLpgSQL_execstate *estate, PLpgSQL_stmt_while *stmt) rc = exec_stmts(estate, stmt->body); - switch (rc) - { - case PLPGSQL_RC_OK: - break; - - case PLPGSQL_RC_EXIT: - if (estate->exitlabel == NULL) - return PLPGSQL_RC_OK; - if (stmt->label == NULL) - return PLPGSQL_RC_EXIT; - if (strcmp(stmt->label, estate->exitlabel) != 0) - return PLPGSQL_RC_EXIT; - estate->exitlabel = NULL; - return PLPGSQL_RC_OK; - - case PLPGSQL_RC_CONTINUE: - if (estate->exitlabel == NULL) - /* anonymous continue, so re-run loop */ - break; - else if (stmt->label != NULL && - strcmp(stmt->label, estate->exitlabel) == 0) - /* label matches named continue, so re-run loop */ - estate->exitlabel = NULL; - else - /* label doesn't match named continue, propagate upward */ - return PLPGSQL_RC_CONTINUE; - break; - - case PLPGSQL_RC_RETURN: - return rc; - - default: - elog(ERROR, "unrecognized rc: %d", rc); - } + LOOP_RC_PROCESSING(stmt->label, break); } - return PLPGSQL_RC_OK; + return rc; } @@ -2163,50 +2173,7 @@ exec_stmt_fori(PLpgSQL_execstate *estate, PLpgSQL_stmt_fori *stmt) */ rc = exec_stmts(estate, stmt->body); - if (rc == PLPGSQL_RC_RETURN) - break; /* break out of the loop */ - else if (rc == PLPGSQL_RC_EXIT) - { - if (estate->exitlabel == NULL) - /* unlabelled exit, finish the current loop */ - rc = PLPGSQL_RC_OK; - else if (stmt->label != NULL && - strcmp(stmt->label, estate->exitlabel) == 0) - { - /* labelled exit, matches the current stmt's label */ - estate->exitlabel = NULL; - rc = PLPGSQL_RC_OK; - } - - /* - * otherwise, this is a labelled exit that does not match the - * current statement's label, if any: return RC_EXIT so that the - * EXIT continues to propagate up the stack. 
- */ - break; - } - else if (rc == PLPGSQL_RC_CONTINUE) - { - if (estate->exitlabel == NULL) - /* unlabelled continue, so re-run the current loop */ - rc = PLPGSQL_RC_OK; - else if (stmt->label != NULL && - strcmp(stmt->label, estate->exitlabel) == 0) - { - /* label matches named continue, so re-run loop */ - estate->exitlabel = NULL; - rc = PLPGSQL_RC_OK; - } - else - { - /* - * otherwise, this is a named continue that does not match the - * current statement's label, if any: return RC_CONTINUE so - * that the CONTINUE will propagate up the stack. - */ - break; - } - } + LOOP_RC_PROCESSING(stmt->label, break); /* * Increase/decrease loop value, unless it would overflow, in which @@ -2536,51 +2503,7 @@ exec_stmt_foreach_a(PLpgSQL_execstate *estate, PLpgSQL_stmt_foreach_a *stmt) */ rc = exec_stmts(estate, stmt->body); - /* Handle the return code */ - if (rc == PLPGSQL_RC_RETURN) - break; /* break out of the loop */ - else if (rc == PLPGSQL_RC_EXIT) - { - if (estate->exitlabel == NULL) - /* unlabelled exit, finish the current loop */ - rc = PLPGSQL_RC_OK; - else if (stmt->label != NULL && - strcmp(stmt->label, estate->exitlabel) == 0) - { - /* labelled exit, matches the current stmt's label */ - estate->exitlabel = NULL; - rc = PLPGSQL_RC_OK; - } - - /* - * otherwise, this is a labelled exit that does not match the - * current statement's label, if any: return RC_EXIT so that the - * EXIT continues to propagate up the stack. - */ - break; - } - else if (rc == PLPGSQL_RC_CONTINUE) - { - if (estate->exitlabel == NULL) - /* unlabelled continue, so re-run the current loop */ - rc = PLPGSQL_RC_OK; - else if (stmt->label != NULL && - strcmp(stmt->label, estate->exitlabel) == 0) - { - /* label matches named continue, so re-run loop */ - estate->exitlabel = NULL; - rc = PLPGSQL_RC_OK; - } - else - { - /* - * otherwise, this is a named continue that does not match the - * current statement's label, if any: return RC_CONTINUE so - * that the CONTINUE will propagate up the stack. - */ - break; - } - } + LOOP_RC_PROCESSING(stmt->label, break); MemoryContextSwitchTo(stmt_mcontext); } @@ -5381,60 +5304,7 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, */ rc = exec_stmts(estate, stmt->body); - if (rc != PLPGSQL_RC_OK) - { - if (rc == PLPGSQL_RC_EXIT) - { - if (estate->exitlabel == NULL) - { - /* unlabelled exit, so exit the current loop */ - rc = PLPGSQL_RC_OK; - } - else if (stmt->label != NULL && - strcmp(stmt->label, estate->exitlabel) == 0) - { - /* label matches this loop, so exit loop */ - estate->exitlabel = NULL; - rc = PLPGSQL_RC_OK; - } - - /* - * otherwise, we processed a labelled exit that does not - * match the current statement's label, if any; return - * RC_EXIT so that the EXIT continues to recurse upward. - */ - } - else if (rc == PLPGSQL_RC_CONTINUE) - { - if (estate->exitlabel == NULL) - { - /* unlabelled continue, so re-run the current loop */ - rc = PLPGSQL_RC_OK; - continue; - } - else if (stmt->label != NULL && - strcmp(stmt->label, estate->exitlabel) == 0) - { - /* label matches this loop, so re-run loop */ - estate->exitlabel = NULL; - rc = PLPGSQL_RC_OK; - continue; - } - - /* - * otherwise, we process a labelled continue that does not - * match the current statement's label, if any; return - * RC_CONTINUE so that the CONTINUE will propagate up the - * stack. - */ - } - - /* - * We're aborting the loop. Need a goto to get out of two - * levels of loop... 
- */ - goto loop_exit; - } + LOOP_RC_PROCESSING(stmt->label, goto loop_exit); } SPI_freetuptable(tuptab); From 6078770c1a6c247bb74742cb1b82733cce8afcab Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Sun, 31 Dec 2017 21:58:29 -0800 Subject: [PATCH 0756/1087] In tests, await an LSN no later than the recovery target. Otherwise, the test fails with "Timed out while waiting for standby to catch up". This happened rarely, perhaps only when autovacuum wrote WAL between our choosing the recovery target and choosing the LSN to await. Commit b26f7fa6ae2b4e5d64525b3d5bc66a0ddccd9e24 fixed one case of this. Fix two more. Back-patch to 9.6, which introduced the affected test. Discussion: https://postgr.es/m/20180101055227.GA2952815@rfd.leadboat.com --- src/test/recovery/t/003_recovery_targets.pl | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/src/test/recovery/t/003_recovery_targets.pl b/src/test/recovery/t/003_recovery_targets.pl index cc7c04b6cb..824fa4da52 100644 --- a/src/test/recovery/t/003_recovery_targets.pl +++ b/src/test/recovery/t/003_recovery_targets.pl @@ -5,8 +5,9 @@ use TestLib; use Test::More tests => 9; -# Create and test a standby from given backup, with a certain -# recovery target. +# Create and test a standby from given backup, with a certain recovery target. +# Choose $until_lsn later than the transaction commit that causes the row +# count to reach $num_rows, yet not later than the recovery target. sub test_recovery_standby { my $test_name = shift; @@ -70,9 +71,9 @@ sub test_recovery_standby # More data, with recovery target timestamp $node_master->safe_psql('postgres', "INSERT INTO tab_int VALUES (generate_series(2001,3000))"); -$ret = - $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn(), now();"); -my ($lsn3, $recovery_time) = split /\|/, $ret; +my $lsn3 = + $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();"); +my $recovery_time = $node_master->safe_psql('postgres', "SELECT now()"); # Even more data, this time with a recovery target name $node_master->safe_psql('postgres', @@ -86,10 +87,8 @@ sub test_recovery_standby # And now for a recovery target LSN $node_master->safe_psql('postgres', "INSERT INTO tab_int VALUES (generate_series(4001,5000))"); -my $recovery_lsn = +my $lsn5 = my $recovery_lsn = $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn()"); -my $lsn5 = - $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();"); $node_master->safe_psql('postgres', "INSERT INTO tab_int VALUES (generate_series(5001,6000))"); From 93ea78b17c4743c2b63edb5998fb5796ae57e289 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Mon, 1 Jan 2018 14:38:23 -0800 Subject: [PATCH 0757/1087] Fix EXPLAIN ANALYZE output for Parallel Hash. In a race case, EXPLAIN ANALYZE could fail to display correct nbatch and size information. Refactor so that participants report only on batches they worked on rather than trying to report on all of them, and teach explain.c to consider the HashInstrumentation object from all participants instead of picking the first one it can find. This should fix an occasional build farm failure in the "join" regression test. 
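To observe the output in question, a parallel hash join can be provoked with
something along these lines (the table name and cost settings here are
illustrative, and whether a Parallel Hash actually appears depends on
configuration):

    set parallel_setup_cost = 0;
    set parallel_tuple_cost = 0;
    set min_parallel_table_scan_size = 0;
    create table hj_demo as select g as id from generate_series(1, 100000) g;
    analyze hj_demo;
    explain (analyze, costs off)
      select count(*) from hj_demo t1 join hj_demo t2 using (id);

The Hash node's "Buckets: ... Batches: ... Memory Usage: ...kB" line is the
output being fixed; with this change it reflects the maximum across all
participants rather than whichever participant happened to be examined first.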
Author: Thomas Munro Reviewed-By: Andres Freund Discussion: https://postgr.es/m/30219.1514428346%40sss.pgh.pa.us --- src/backend/commands/explain.c | 79 +++++++++++++++++++---------- src/backend/executor/nodeHash.c | 27 ++++------ src/backend/executor/nodeHashjoin.c | 6 --- src/include/executor/nodeHash.h | 1 - 4 files changed, 62 insertions(+), 51 deletions(-) diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 7e4fbafc53..2156385ac8 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -2379,62 +2379,87 @@ show_sort_info(SortState *sortstate, ExplainState *es) static void show_hash_info(HashState *hashstate, ExplainState *es) { - HashInstrumentation *hinstrument = NULL; + HashInstrumentation hinstrument = {0}; /* * In a parallel query, the leader process may or may not have run the * hash join, and even if it did it may not have built a hash table due to * timing (if it started late it might have seen no tuples in the outer * relation and skipped building the hash table). Therefore we have to be - * prepared to get instrumentation data from a worker if there is no hash - * table. + * prepared to get instrumentation data from all participants. */ if (hashstate->hashtable) - { - hinstrument = (HashInstrumentation *) - palloc(sizeof(HashInstrumentation)); - ExecHashGetInstrumentation(hinstrument, hashstate->hashtable); - } - else if (hashstate->shared_info) + ExecHashGetInstrumentation(&hinstrument, hashstate->hashtable); + + /* + * Merge results from workers. In the parallel-oblivious case, the + * results from all participants should be identical, except where + * participants didn't run the join at all so have no data. In the + * parallel-aware case, we need to consider all the results. Each worker + * may have seen a different subset of batches and we want to find the + * highest memory usage for any one batch across all batches. + */ + if (hashstate->shared_info) { SharedHashInfo *shared_info = hashstate->shared_info; - int i; + int i; - /* Find the first worker that built a hash table. */ for (i = 0; i < shared_info->num_workers; ++i) { - if (shared_info->hinstrument[i].nbatch > 0) + HashInstrumentation *worker_hi = &shared_info->hinstrument[i]; + + if (worker_hi->nbatch > 0) { - hinstrument = &shared_info->hinstrument[i]; - break; + /* + * Every participant should agree on the buckets, so to be + * sure we have a value we'll just overwrite each time. + */ + hinstrument.nbuckets = worker_hi->nbuckets; + hinstrument.nbuckets_original = worker_hi->nbuckets_original; + + /* + * Normally every participant should agree on the number of + * batches too, but it's possible for a backend that started + * late and missed the whole join not to have the final nbatch + * number. So we'll take the largest number. + */ + hinstrument.nbatch = Max(hinstrument.nbatch, worker_hi->nbatch); + hinstrument.nbatch_original = worker_hi->nbatch_original; + + /* + * In a parallel-aware hash join, for now we report the + * maximum peak memory reported by any worker. 
+ */ + hinstrument.space_peak = + Max(hinstrument.space_peak, worker_hi->space_peak); } } } - if (hinstrument) + if (hinstrument.nbatch > 0) { - long spacePeakKb = (hinstrument->space_peak + 1023) / 1024; + long spacePeakKb = (hinstrument.space_peak + 1023) / 1024; if (es->format != EXPLAIN_FORMAT_TEXT) { - ExplainPropertyLong("Hash Buckets", hinstrument->nbuckets, es); + ExplainPropertyLong("Hash Buckets", hinstrument.nbuckets, es); ExplainPropertyLong("Original Hash Buckets", - hinstrument->nbuckets_original, es); - ExplainPropertyLong("Hash Batches", hinstrument->nbatch, es); + hinstrument.nbuckets_original, es); + ExplainPropertyLong("Hash Batches", hinstrument.nbatch, es); ExplainPropertyLong("Original Hash Batches", - hinstrument->nbatch_original, es); + hinstrument.nbatch_original, es); ExplainPropertyLong("Peak Memory Usage", spacePeakKb, es); } - else if (hinstrument->nbatch_original != hinstrument->nbatch || - hinstrument->nbuckets_original != hinstrument->nbuckets) + else if (hinstrument.nbatch_original != hinstrument.nbatch || + hinstrument.nbuckets_original != hinstrument.nbuckets) { appendStringInfoSpaces(es->str, es->indent * 2); appendStringInfo(es->str, "Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\n", - hinstrument->nbuckets, - hinstrument->nbuckets_original, - hinstrument->nbatch, - hinstrument->nbatch_original, + hinstrument.nbuckets, + hinstrument.nbuckets_original, + hinstrument.nbatch, + hinstrument.nbatch_original, spacePeakKb); } else @@ -2442,7 +2467,7 @@ show_hash_info(HashState *hashstate, ExplainState *es) appendStringInfoSpaces(es->str, es->indent * 2); appendStringInfo(es->str, "Buckets: %d Batches: %d Memory Usage: %ldkB\n", - hinstrument->nbuckets, hinstrument->nbatch, + hinstrument.nbuckets, hinstrument.nbatch, spacePeakKb); } } diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index 04eb3650aa..4e1a2806b5 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -3090,7 +3090,16 @@ ExecHashTableDetachBatch(HashJoinTable hashtable) batch->buckets = InvalidDsaPointer; } } - ExecParallelHashUpdateSpacePeak(hashtable, curbatch); + + /* + * Track the largest batch we've been attached to. Though each + * backend might see a different subset of batches, explain.c will + * scan the results from all backends to find the largest value. + */ + hashtable->spacePeak = + Max(hashtable->spacePeak, + batch->size + sizeof(dsa_pointer_atomic) * hashtable->nbuckets); + /* Remember that we are not attached to a batch. */ hashtable->curbatch = -1; } @@ -3295,19 +3304,3 @@ ExecParallelHashTuplePrealloc(HashJoinTable hashtable, int batchno, size_t size) return true; } - -/* - * Update this backend's copy of hashtable->spacePeak to account for a given - * batch. This is called at the end of hashing for batch 0, and then for each - * batch when it is done or discovered to be already done. The result is used - * for EXPLAIN output. - */ -void -ExecParallelHashUpdateSpacePeak(HashJoinTable hashtable, int batchno) -{ - size_t size; - - size = hashtable->batches[batchno].shared->size; - size += sizeof(dsa_pointer_atomic) * hashtable->nbuckets; - hashtable->spacePeak = Max(hashtable->spacePeak, size); -} diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c index 5d1dc1f401..817bcf0471 100644 --- a/src/backend/executor/nodeHashjoin.c +++ b/src/backend/executor/nodeHashjoin.c @@ -1186,12 +1186,6 @@ ExecParallelHashJoinNewBatch(HashJoinState *hjstate) * remain). 
*/ BarrierDetach(batch_barrier); - - /* - * We didn't work on this batch, but we need to observe - * its size for EXPLAIN. - */ - ExecParallelHashUpdateSpacePeak(hashtable, batchno); hashtable->batches[batchno].done = true; hashtable->curbatch = -1; break; diff --git a/src/include/executor/nodeHash.h b/src/include/executor/nodeHash.h index 84c166b395..367dfff018 100644 --- a/src/include/executor/nodeHash.h +++ b/src/include/executor/nodeHash.h @@ -33,7 +33,6 @@ extern void ExecHashTableDetach(HashJoinTable hashtable); extern void ExecHashTableDetachBatch(HashJoinTable hashtable); extern void ExecParallelHashTableSetCurrentBatch(HashJoinTable hashtable, int batchno); -void ExecParallelHashUpdateSpacePeak(HashJoinTable hashtable, int batchno); extern void ExecHashTableInsert(HashJoinTable hashtable, TupleTableSlot *slot, From 438036264a3b71eaf39b2d2eeca67237ed38ca51 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 26 Dec 2017 13:47:18 -0500 Subject: [PATCH 0758/1087] Don't cast between GinNullCategory and bool The original idea was that we could use an isNull-style bool array directly as a GinNullCategory array. However, the existing code already acknowledges that that doesn't really work, because of the possibility that bool as currently defined can have arbitrary bit patterns for true values. So it has to loop through the nullFlags array to set each bool value to an acceptable value. But if we are looping through the whole array anyway, we might as well build a proper GinNullCategory array instead and abandon the type casting. That makes the code much safer in case bool is ever changed to something else. Reviewed-by: Michael Paquier --- src/backend/access/gin/ginscan.c | 19 ++++++++----------- src/backend/access/gin/ginutil.c | 18 ++++++++---------- src/include/access/ginblock.h | 7 +++++-- 3 files changed, 21 insertions(+), 23 deletions(-) diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c index 7ceea7a741..77c0c577b5 100644 --- a/src/backend/access/gin/ginscan.c +++ b/src/backend/access/gin/ginscan.c @@ -295,6 +295,7 @@ ginNewScanKey(IndexScanDesc scan) bool *partial_matches = NULL; Pointer *extra_data = NULL; bool *nullFlags = NULL; + GinNullCategory *categories; int32 searchMode = GIN_SEARCH_MODE_DEFAULT; /* @@ -346,15 +347,12 @@ ginNewScanKey(IndexScanDesc scan) } /* - * If the extractQueryFn didn't create a nullFlags array, create one, - * assuming that everything's non-null. Otherwise, run through the - * array and make sure each value is exactly 0 or 1; this ensures - * binary compatibility with the GinNullCategory representation. While - * at it, detect whether any null keys are present. + * Create GinNullCategory representation. If the extractQueryFn + * didn't create a nullFlags array, we assume everything is non-null. + * While at it, detect whether any null keys are present. 
 */
-	if (nullFlags == NULL)
-		nullFlags = (bool *) palloc0(nQueryValues * sizeof(bool));
-	else
+	categories = (GinNullCategory *) palloc0(nQueryValues * sizeof(GinNullCategory));
+	if (nullFlags)
 	{
 		int32		j;
@@ -362,17 +360,16 @@
 		{
 			if (nullFlags[j])
 			{
-				nullFlags[j] = true;	/* not any other nonzero value */
+				categories[j] = GIN_CAT_NULL_KEY;
 				hasNullQuery = true;
 			}
 		}
 	}
-	/* now we can use the nullFlags as category codes */
 	ginFillScanKey(so, skey->sk_attno,
 				   skey->sk_strategy, searchMode,
 				   skey->sk_argument, nQueryValues,
-				   queryValues, (GinNullCategory *) nullFlags,
+				   queryValues, categories,
 				   partial_matches, extra_data);
 }

diff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c
index d9c6483437..41d4b4fb6f 100644
--- a/src/backend/access/gin/ginutil.c
+++ b/src/backend/access/gin/ginutil.c
@@ -529,19 +529,10 @@ ginExtractEntries(GinState *ginstate, OffsetNumber attnum,

 	/*
 	 * If the extractValueFn didn't create a nullFlags array, create one,
-	 * assuming that everything's non-null.  Otherwise, run through the array
-	 * and make sure each value is exactly 0 or 1; this ensures binary
-	 * compatibility with the GinNullCategory representation.
+	 * assuming that everything's non-null.
 	 */
 	if (nullFlags == NULL)
 		nullFlags = (bool *) palloc0(*nentries * sizeof(bool));
-	else
-	{
-		for (i = 0; i < *nentries; i++)
-			nullFlags[i] = (nullFlags[i] ? true : false);
-	}
-	/* now we can use the nullFlags as category codes */
-	*categories = (GinNullCategory *) nullFlags;

 	/*
 	 * If there's more than one key, sort and unique-ify.
@@ -600,6 +591,13 @@ ginExtractEntries(GinState *ginstate, OffsetNumber attnum,
 		pfree(keydata);
 	}

+	/*
+	 * Create GinNullCategory representation from nullFlags.
+	 */
+	*categories = (GinNullCategory *) palloc0(*nentries * sizeof(GinNullCategory));
+	for (i = 0; i < *nentries; i++)
+		(*categories)[i] = (nullFlags[i] ? GIN_CAT_NULL_KEY : GIN_CAT_NORM_KEY);
+
 	return entries;
 }

diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 114370c7d7..c3af3f0380 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -188,8 +188,11 @@ typedef struct

 /*
  * Category codes to distinguish placeholder nulls from ordinary NULL keys.
- * Note that the datatype size and the first two code values are chosen to be
- * compatible with the usual usage of bool isNull flags.
+ *
+ * The first two code values were chosen to be compatible with the usual usage
+ * of bool isNull flags.  However, casting between bool and GinNullCategory is
+ * risky because of the possibility of different bit patterns and type sizes,
+ * so it is no longer done.
  *
  * GIN_CAT_EMPTY_QUERY is never stored in the index; and notice that it is
  * chosen to sort before not after regular key values.

From 54eff5311d7c8e3d309774713b91e78067d2ad42 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera
Date: Tue, 2 Jan 2018 19:16:16 -0300
Subject: [PATCH 0759/1087] Fix deadlock hazard in CREATE INDEX CONCURRENTLY
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Multiple sessions doing CREATE INDEX CONCURRENTLY simultaneously are
supposed to be able to work in parallel, as evidenced by fixes in commit
c3d09b3bd23f specifically to support this case.  In reality, one of the
sessions would be aborted by a mysterious "deadlock detected" error.
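The symptom is simple to sketch (the table and index names here are made up;
the isolation spec added below is the actual reproducer):

    -- session 1
    create index concurrently tab_a_idx on tab_a (id);
    -- session 2, started while session 1 is still running
    create index concurrently tab_b_idx on tab_b (id);

Unpatched, one of the two sessions could fail with "deadlock detected" even
though the commands touch unrelated tables.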
Jeff Janes diagnosed that this is because of leftover snapshots used for system catalog scans -- this was broken by 8aa3e47510b9 keeping track of (registering) the catalog snapshot. To fix the deadlocks, it's enough to de-register that snapshot prior to waiting. Backpatch to 9.4, which introduced MVCC catalog scans. Include an isolationtester spec that 8 out of 10 times reproduces the deadlock with the unpatched code for me (Álvaro). Author: Jeff Janes Diagnosed-by: Jeff Janes Reported-by: Jeremy Finzel Discussion: https://postgr.es/m/CAMa1XUhHjCv8Qkx0WOr1Mpm_R4qxN26EibwCrj0Oor2YBUFUTg%40mail.gmail.com --- src/backend/commands/indexcmds.c | 3 ++ src/test/isolation/expected/multiple-cic.out | 19 ++++++++++ src/test/isolation/isolation_schedule | 1 + src/test/isolation/specs/multiple-cic.spec | 40 ++++++++++++++++++++ 4 files changed, 63 insertions(+) create mode 100644 src/test/isolation/expected/multiple-cic.out create mode 100644 src/test/isolation/specs/multiple-cic.spec diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index 97091dd9fb..ffa99aec16 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -856,11 +856,14 @@ DefineIndex(Oid relationId, * doing CREATE INDEX CONCURRENTLY, which would see our snapshot as one * they must wait for. But first, save the snapshot's xmin to use as * limitXmin for GetCurrentVirtualXIDs(). + * + * Our catalog snapshot could have the same effect, so drop that one too. */ limitXmin = snapshot->xmin; PopActiveSnapshot(); UnregisterSnapshot(snapshot); + InvalidateCatalogSnapshot(); /* * The index is now valid in the sense that it contains all currently diff --git a/src/test/isolation/expected/multiple-cic.out b/src/test/isolation/expected/multiple-cic.out new file mode 100644 index 0000000000..cc57940392 --- /dev/null +++ b/src/test/isolation/expected/multiple-cic.out @@ -0,0 +1,19 @@ +Parsed test spec with 2 sessions + +starting permutation: s2l s1i s2i +step s2l: SELECT pg_advisory_lock(281457); +pg_advisory_lock + + +step s1i: + CREATE INDEX CONCURRENTLY mcic_one_pkey ON mcic_one (id) + WHERE lck_shr(281457); + +step s2i: + CREATE INDEX CONCURRENTLY mcic_two_pkey ON mcic_two (id) + WHERE unlck(); + +step s1i: <... 
completed> +s1 + + diff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule index eb566ebb6c..befe676816 100644 --- a/src/test/isolation/isolation_schedule +++ b/src/test/isolation/isolation_schedule @@ -55,6 +55,7 @@ test: skip-locked-2 test: skip-locked-3 test: skip-locked-4 test: drop-index-concurrently-1 +test: multiple-cic test: alter-table-1 test: alter-table-2 test: alter-table-3 diff --git a/src/test/isolation/specs/multiple-cic.spec b/src/test/isolation/specs/multiple-cic.spec new file mode 100644 index 0000000000..a7ba4eb4fd --- /dev/null +++ b/src/test/isolation/specs/multiple-cic.spec @@ -0,0 +1,40 @@ +# Test multiple CREATE INDEX CONCURRENTLY working simultaneously + +setup +{ + CREATE TABLE mcic_one ( + id int + ); + CREATE TABLE mcic_two ( + id int + ); + CREATE FUNCTION lck_shr(bigint) RETURNS bool IMMUTABLE LANGUAGE plpgsql AS $$ + BEGIN PERFORM pg_advisory_lock_shared($1); RETURN true; END; + $$; + CREATE FUNCTION unlck() RETURNS bool IMMUTABLE LANGUAGE plpgsql AS $$ + BEGIN PERFORM pg_advisory_unlock_all(); RETURN true; END; + $$; +} +teardown +{ + DROP TABLE mcic_one, mcic_two; + DROP FUNCTION lck_shr(bigint); + DROP FUNCTION unlck(); +} + +session "s1" +step "s1i" { + CREATE INDEX CONCURRENTLY mcic_one_pkey ON mcic_one (id) + WHERE lck_shr(281457); + } +teardown { SELECT pg_advisory_unlock_all() AS "s1"; } + + +session "s2" +step "s2l" { SELECT pg_advisory_lock(281457); } +step "s2i" { + CREATE INDEX CONCURRENTLY mcic_two_pkey ON mcic_two (id) + WHERE unlck(); + } + +permutation "s2l" "s1i" "s2i" From 5dc692f78d3bee1e86d095a9e8d9242b44f78b01 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 2 Jan 2018 21:23:02 -0500 Subject: [PATCH 0760/1087] Ensure proper alignment of tuples in HashMemoryChunkData buffers. The previous coding relied (without any documentation) on the data[] member of HashMemoryChunkData being at a MAXALIGN'ed offset. If it was not, the tuples would not be maxaligned either, leading to failures on alignment-picky machines. While there seems to be no live bug on any platform we support, this is clearly pretty fragile: any addition to or rearrangement of the fields in HashMemoryChunkData could break it. Let's remove the hazard by getting rid of the data[] member and instead using pointer arithmetic with an explicitly maxalign'ed offset. Discussion: https://postgr.es/m/14483.1514938129@sss.pgh.pa.us --- src/backend/executor/nodeHash.c | 34 +++++++++++++++------------------ src/include/executor/hashjoin.h | 12 +++++++++--- 2 files changed, 24 insertions(+), 22 deletions(-) diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index 4e1a2806b5..38a84cc14c 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -979,7 +979,7 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable) /* process all tuples stored in this chunk (and then free it) */ while (idx < oldchunks->used) { - HashJoinTuple hashTuple = (HashJoinTuple) (oldchunks->data + idx); + HashJoinTuple hashTuple = (HashJoinTuple) (HASH_CHUNK_DATA(oldchunks) + idx); MinimalTuple tuple = HJTUPLE_MINTUPLE(hashTuple); int hashTupleSize = (HJTUPLE_OVERHEAD + tuple->t_len); int bucketno; @@ -1285,7 +1285,7 @@ ExecParallelHashRepartitionFirst(HashJoinTable hashtable) /* Repartition all tuples in this chunk. 
*/ while (idx < chunk->used) { - HashJoinTuple hashTuple = (HashJoinTuple) (chunk->data + idx); + HashJoinTuple hashTuple = (HashJoinTuple) (HASH_CHUNK_DATA(chunk) + idx); MinimalTuple tuple = HJTUPLE_MINTUPLE(hashTuple); HashJoinTuple copyTuple; dsa_pointer shared; @@ -1469,7 +1469,7 @@ ExecHashIncreaseNumBuckets(HashJoinTable hashtable) while (idx < chunk->used) { - HashJoinTuple hashTuple = (HashJoinTuple) (chunk->data + idx); + HashJoinTuple hashTuple = (HashJoinTuple) (HASH_CHUNK_DATA(chunk) + idx); int bucketno; int batchno; @@ -1552,7 +1552,7 @@ ExecParallelHashIncreaseNumBuckets(HashJoinTable hashtable) while (idx < chunk->used) { - HashJoinTuple hashTuple = (HashJoinTuple) (chunk->data + idx); + HashJoinTuple hashTuple = (HashJoinTuple) (HASH_CHUNK_DATA(chunk) + idx); dsa_pointer shared = chunk_s + HASH_CHUNK_HEADER_SIZE + idx; int bucketno; int batchno; @@ -2651,17 +2651,16 @@ dense_alloc(HashJoinTable hashtable, Size size) size = MAXALIGN(size); /* - * If tuple size is larger than of 1/4 of chunk size, allocate a separate - * chunk. + * If tuple size is larger than threshold, allocate a separate chunk. */ if (size > HASH_CHUNK_THRESHOLD) { /* allocate new chunk and put it at the beginning of the list */ newChunk = (HashMemoryChunk) MemoryContextAlloc(hashtable->batchCxt, - offsetof(HashMemoryChunkData, data) + size); + HASH_CHUNK_HEADER_SIZE + size); newChunk->maxlen = size; - newChunk->used = 0; - newChunk->ntuples = 0; + newChunk->used = size; + newChunk->ntuples = 1; /* * Add this chunk to the list after the first existing chunk, so that @@ -2678,10 +2677,7 @@ dense_alloc(HashJoinTable hashtable, Size size) hashtable->chunks = newChunk; } - newChunk->used += size; - newChunk->ntuples += 1; - - return newChunk->data; + return HASH_CHUNK_DATA(newChunk); } /* @@ -2693,7 +2689,7 @@ dense_alloc(HashJoinTable hashtable, Size size) { /* allocate new chunk and put it at the beginning of the list */ newChunk = (HashMemoryChunk) MemoryContextAlloc(hashtable->batchCxt, - offsetof(HashMemoryChunkData, data) + HASH_CHUNK_SIZE); + HASH_CHUNK_HEADER_SIZE + HASH_CHUNK_SIZE); newChunk->maxlen = HASH_CHUNK_SIZE; newChunk->used = size; @@ -2702,11 +2698,11 @@ dense_alloc(HashJoinTable hashtable, Size size) newChunk->next.unshared = hashtable->chunks; hashtable->chunks = newChunk; - return newChunk->data; + return HASH_CHUNK_DATA(newChunk); } /* There is enough space in the current chunk, let's add the tuple */ - ptr = hashtable->chunks->data + hashtable->chunks->used; + ptr = HASH_CHUNK_DATA(hashtable->chunks) + hashtable->chunks->used; hashtable->chunks->used += size; hashtable->chunks->ntuples += 1; @@ -2751,7 +2747,7 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size, chunk_shared = hashtable->current_chunk_shared; Assert(chunk == dsa_get_address(hashtable->area, chunk_shared)); *shared = chunk_shared + HASH_CHUNK_HEADER_SIZE + chunk->used; - result = (HashJoinTuple) (chunk->data + chunk->used); + result = (HashJoinTuple) (HASH_CHUNK_DATA(chunk) + chunk->used); chunk->used += size; Assert(chunk->used <= chunk->maxlen); @@ -2859,8 +2855,8 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size, } LWLockRelease(&pstate->lock); - Assert(chunk->data == dsa_get_address(hashtable->area, *shared)); - result = (HashJoinTuple) chunk->data; + Assert(HASH_CHUNK_DATA(chunk) == dsa_get_address(hashtable->area, *shared)); + result = (HashJoinTuple) HASH_CHUNK_DATA(chunk); return result; } diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h index 
d8c82d4e7c..be83500b9d 100644 --- a/src/include/executor/hashjoin.h +++ b/src/include/executor/hashjoin.h @@ -117,7 +117,7 @@ typedef struct HashSkewBucket typedef struct HashMemoryChunkData { int ntuples; /* number of tuples stored in this chunk */ - size_t maxlen; /* size of the buffer holding the tuples */ + size_t maxlen; /* size of the chunk's tuple buffer */ size_t used; /* number of buffer bytes already used */ /* pointer to the next chunk (linked list) */ @@ -127,13 +127,19 @@ typedef struct HashMemoryChunkData dsa_pointer shared; } next; - char data[FLEXIBLE_ARRAY_MEMBER]; /* buffer allocated at the end */ + /* + * The chunk's tuple buffer starts after the HashMemoryChunkData struct, + * at offset HASH_CHUNK_HEADER_SIZE (which must be maxaligned). Note that + * that offset is not included in "maxlen" or "used". + */ } HashMemoryChunkData; typedef struct HashMemoryChunkData *HashMemoryChunk; #define HASH_CHUNK_SIZE (32 * 1024L) -#define HASH_CHUNK_HEADER_SIZE (offsetof(HashMemoryChunkData, data)) +#define HASH_CHUNK_HEADER_SIZE MAXALIGN(sizeof(HashMemoryChunkData)) +#define HASH_CHUNK_DATA(hc) (((char *) (hc)) + HASH_CHUNK_HEADER_SIZE) +/* tuples exceeding HASH_CHUNK_THRESHOLD bytes are put in their own chunk */ #define HASH_CHUNK_THRESHOLD (HASH_CHUNK_SIZE / 4) /* From f9ccf92e16fc4d831d324c7f7ef347a0acdaef0a Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 2 Jan 2018 18:02:37 -0800 Subject: [PATCH 0761/1087] Simplify representation of aggregate transition values a bit. Previously aggregate transition values for hash and other forms of aggregation (i.e. sort and no group by) were represented differently. Hash based aggregation used a grouping set indexed array pointing to an array of transition values, whereas other forms of aggregation used one flattened array with the index being computed out of grouping set and transition offsets. That made upcoming changes hard, so represent both as grouping set indexed array of per-group data. As a nice side-effect this also makes aggregation slightly faster, because computing offsets with `transno + (setno * numTrans)` turns out not to be that cheap (too big for x86 lea for example). 
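To make the layout change concrete, here is a minimal, self-contained
sketch (invented names, not PostgreSQL source) of the old flattened
array versus the new grouping-set indexed array of per-group arrays:

	#include <stdlib.h>

	typedef struct PerGroupData { int dummy; } PerGroupData;

	int
	main(void)
	{
		int			numSets = 3, numTrans = 4;
		int			setno = 2, transno = 1;

		/* Old: one flattened allocation; every access computes
		 * "transno + (setno * numTrans)" (multiply plus add). */
		PerGroupData *flat =
			calloc(numSets * numTrans, sizeof(PerGroupData));
		PerGroupData *oldp = &flat[transno + (setno * numTrans)];

		/* New: one per-group array per grouping set; access is a
		 * plain double index, and a single grouping set can be
		 * reset by clearing just pergroups[setno]. */
		PerGroupData **pergroups =
			calloc(numSets, sizeof(PerGroupData *));
		for (int i = 0; i < numSets; i++)
			pergroups[i] = calloc(numTrans, sizeof(PerGroupData));
		PerGroupData *newp = &pergroups[setno][transno];

		(void) oldp;
		(void) newp;
		return 0;
	}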
Author: Andres Freund Discussion: https://postgr.es/m/20171128003121.nmxbm2ounxzb6n2t@alap3.anarazel.de --- src/backend/executor/nodeAgg.c | 110 ++++++++++++++++++--------------- src/include/nodes/execnodes.h | 6 +- 2 files changed, 65 insertions(+), 51 deletions(-) diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index da6ef1a94c..a3454e52f6 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -532,13 +532,14 @@ static void select_current_set(AggState *aggstate, int setno, bool is_hash); static void initialize_phase(AggState *aggstate, int newphase); static TupleTableSlot *fetch_input_tuple(AggState *aggstate); static void initialize_aggregates(AggState *aggstate, - AggStatePerGroup pergroup, + AggStatePerGroup *pergroups, int numReset); static void advance_transition_function(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroupstate); -static void advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, - AggStatePerGroup *pergroups); +static void advance_aggregates(AggState *aggstate, + AggStatePerGroup *sort_pergroups, + AggStatePerGroup *hash_pergroups); static void advance_combine_function(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroupstate); @@ -793,14 +794,16 @@ initialize_aggregate(AggState *aggstate, AggStatePerTrans pertrans, * If there are multiple grouping sets, we initialize only the first numReset * of them (the grouping sets are ordered so that the most specific one, which * is reset most often, is first). As a convenience, if numReset is 0, we - * reinitialize all sets. numReset is -1 to initialize a hashtable entry, in - * which case the caller must have used select_current_set appropriately. + * reinitialize all sets. + * + * NB: This cannot be used for hash aggregates, as for those the grouping set + * number has to be specified from further up. * * When called, CurrentMemoryContext should be the per-query context. */ static void initialize_aggregates(AggState *aggstate, - AggStatePerGroup pergroup, + AggStatePerGroup *pergroups, int numReset) { int transno; @@ -812,30 +815,18 @@ initialize_aggregates(AggState *aggstate, if (numReset == 0) numReset = numGroupingSets; - for (transno = 0; transno < numTrans; transno++) + for (setno = 0; setno < numReset; setno++) { - AggStatePerTrans pertrans = &transstates[transno]; - - if (numReset < 0) - { - AggStatePerGroup pergroupstate; + AggStatePerGroup pergroup = pergroups[setno]; - pergroupstate = &pergroup[transno]; + select_current_set(aggstate, setno, false); - initialize_aggregate(aggstate, pertrans, pergroupstate); - } - else + for (transno = 0; transno < numTrans; transno++) { - for (setno = 0; setno < numReset; setno++) - { - AggStatePerGroup pergroupstate; - - pergroupstate = &pergroup[transno + (setno * numTrans)]; + AggStatePerTrans pertrans = &transstates[transno]; + AggStatePerGroup pergroupstate = &pergroup[transno]; - select_current_set(aggstate, setno, false); - - initialize_aggregate(aggstate, pertrans, pergroupstate); - } + initialize_aggregate(aggstate, pertrans, pergroupstate); } } } @@ -976,7 +967,9 @@ advance_transition_function(AggState *aggstate, * When called, CurrentMemoryContext should be the per-query context. 
*/ static void -advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, AggStatePerGroup *pergroups) +advance_aggregates(AggState *aggstate, + AggStatePerGroup *sort_pergroups, + AggStatePerGroup *hash_pergroups) { int transno; int setno = 0; @@ -1019,7 +1012,7 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, AggStatePerGro { /* DISTINCT and/or ORDER BY case */ Assert(slot->tts_nvalid >= (pertrans->numInputs + inputoff)); - Assert(!pergroups); + Assert(!hash_pergroups); /* * If the transfn is strict, we want to check for nullity before @@ -1090,7 +1083,7 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, AggStatePerGro fcinfo->argnull[i + 1] = slot->tts_isnull[i + inputoff]; } - if (pergroup) + if (sort_pergroups) { /* advance transition states for ordered grouping */ @@ -1100,13 +1093,13 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, AggStatePerGro select_current_set(aggstate, setno, false); - pergroupstate = &pergroup[transno + (setno * numTrans)]; + pergroupstate = &sort_pergroups[setno][transno]; advance_transition_function(aggstate, pertrans, pergroupstate); } } - if (pergroups) + if (hash_pergroups) { /* advance transition states for hashed grouping */ @@ -1116,7 +1109,7 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup, AggStatePerGro select_current_set(aggstate, setno, true); - pergroupstate = &pergroups[setno][transno]; + pergroupstate = &hash_pergroups[setno][transno]; advance_transition_function(aggstate, pertrans, pergroupstate); } @@ -2095,12 +2088,25 @@ lookup_hash_entry(AggState *aggstate) if (isnew) { - entry->additional = (AggStatePerGroup) + AggStatePerGroup pergroup; + int transno; + + pergroup = (AggStatePerGroup) MemoryContextAlloc(perhash->hashtable->tablecxt, sizeof(AggStatePerGroupData) * aggstate->numtrans); - /* initialize aggregates for new tuple group */ - initialize_aggregates(aggstate, (AggStatePerGroup) entry->additional, - -1); + entry->additional = pergroup; + + /* + * Initialize aggregates for new tuple group, lookup_hash_entries() + * already has selected the relevant grouping set. + */ + for (transno = 0; transno < aggstate->numtrans; transno++) + { + AggStatePerTrans pertrans = &aggstate->pertrans[transno]; + AggStatePerGroup pergroupstate = &pergroup[transno]; + + initialize_aggregate(aggstate, pertrans, pergroupstate); + } } return entry; @@ -2184,7 +2190,7 @@ agg_retrieve_direct(AggState *aggstate) ExprContext *econtext; ExprContext *tmpcontext; AggStatePerAgg peragg; - AggStatePerGroup pergroup; + AggStatePerGroup *pergroups; AggStatePerGroup *hash_pergroups = NULL; TupleTableSlot *outerslot; TupleTableSlot *firstSlot; @@ -2207,7 +2213,7 @@ agg_retrieve_direct(AggState *aggstate) tmpcontext = aggstate->tmpcontext; peragg = aggstate->peragg; - pergroup = aggstate->pergroup; + pergroups = aggstate->pergroups; firstSlot = aggstate->ss.ss_ScanTupleSlot; /* @@ -2409,7 +2415,7 @@ agg_retrieve_direct(AggState *aggstate) /* * Initialize working state for a new input tuple group. 
*/ - initialize_aggregates(aggstate, pergroup, numReset); + initialize_aggregates(aggstate, pergroups, numReset); if (aggstate->grp_firstTuple != NULL) { @@ -2446,9 +2452,9 @@ agg_retrieve_direct(AggState *aggstate) hash_pergroups = NULL; if (DO_AGGSPLIT_COMBINE(aggstate->aggsplit)) - combine_aggregates(aggstate, pergroup); + combine_aggregates(aggstate, pergroups[0]); else - advance_aggregates(aggstate, pergroup, hash_pergroups); + advance_aggregates(aggstate, pergroups, hash_pergroups); /* Reset per-input-tuple context after each tuple */ ResetExprContext(tmpcontext); @@ -2512,7 +2518,7 @@ agg_retrieve_direct(AggState *aggstate) finalize_aggregates(aggstate, peragg, - pergroup + (currentSet * aggstate->numtrans)); + pergroups[currentSet]); /* * If there's no row to project right now, we must continue rather @@ -2756,7 +2762,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aggstate->curpertrans = NULL; aggstate->input_done = false; aggstate->agg_done = false; - aggstate->pergroup = NULL; + aggstate->pergroups = NULL; aggstate->grp_firstTuple = NULL; aggstate->sort_in = NULL; aggstate->sort_out = NULL; @@ -3052,13 +3058,16 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) if (node->aggstrategy != AGG_HASHED) { - AggStatePerGroup pergroup; + AggStatePerGroup *pergroups; + + pergroups = (AggStatePerGroup *) palloc0(sizeof(AggStatePerGroup) * + numGroupingSets); - pergroup = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) - * numaggs - * numGroupingSets); + for (i = 0; i < numGroupingSets; i++) + pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData) + * numaggs); - aggstate->pergroup = pergroup; + aggstate->pergroups = pergroups; } /* @@ -4086,8 +4095,11 @@ ExecReScanAgg(AggState *node) /* * Reset the per-group state (in particular, mark transvalues null) */ - MemSet(node->pergroup, 0, - sizeof(AggStatePerGroupData) * node->numaggs * numGroupingSets); + for (setno = 0; setno < numGroupingSets; setno++) + { + MemSet(node->pergroups[setno], 0, + sizeof(AggStatePerGroupData) * node->numaggs); + } /* reset to phase 1 */ initialize_phase(node, 1); diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 94351eafad..bbc3ec3f3f 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1852,13 +1852,15 @@ typedef struct AggState Tuplesortstate *sort_out; /* input is copied here for next phase */ TupleTableSlot *sort_slot; /* slot for sort results */ /* these fields are used in AGG_PLAIN and AGG_SORTED modes: */ - AggStatePerGroup pergroup; /* per-Aggref-per-group working state */ + AggStatePerGroup *pergroups; /* grouping set indexed array of per-group + * pointers */ HeapTuple grp_firstTuple; /* copy of first tuple of current group */ /* these fields are used in AGG_HASHED and AGG_MIXED modes: */ bool table_filled; /* hash table filled yet? 
*/ int num_hashes; AggStatePerHash perhash; - AggStatePerGroup *hash_pergroup; /* array of per-group pointers */ + AggStatePerGroup *hash_pergroup; /* grouping set indexed array of + * per-group pointers */ /* support for evaluation of agg input expressions: */ ProjectionInfo *combinedproj; /* projection machinery */ } AggState; From 9d4649ca49416111aee2c84b7e4441a0b7aa2fac Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Tue, 2 Jan 2018 23:30:12 -0500 Subject: [PATCH 0762/1087] Update copyright for 2018 Backpatch-through: certain files through 9.3 --- COPYRIGHT | 2 +- configure | 4 ++-- configure.in | 2 +- contrib/adminpack/adminpack.c | 2 +- contrib/amcheck/verify_nbtree.c | 2 +- contrib/auth_delay/auth_delay.c | 2 +- contrib/auto_explain/auto_explain.c | 2 +- contrib/bloom/blcost.c | 2 +- contrib/bloom/blinsert.c | 2 +- contrib/bloom/bloom.h | 2 +- contrib/bloom/blscan.c | 2 +- contrib/bloom/blutils.c | 2 +- contrib/bloom/blvacuum.c | 2 +- contrib/bloom/blvalidate.c | 2 +- contrib/dblink/dblink.c | 2 +- contrib/dict_int/dict_int.c | 2 +- contrib/dict_xsyn/dict_xsyn.c | 2 +- contrib/file_fdw/file_fdw.c | 2 +- contrib/fuzzystrmatch/fuzzystrmatch.c | 2 +- contrib/intarray/_int_selfuncs.c | 2 +- contrib/isn/isn.c | 2 +- contrib/isn/isn.h | 2 +- contrib/pageinspect/brinfuncs.c | 2 +- contrib/pageinspect/fsmfuncs.c | 2 +- contrib/pageinspect/ginfuncs.c | 2 +- contrib/pageinspect/hashfuncs.c | 2 +- contrib/pageinspect/heapfuncs.c | 2 +- contrib/pageinspect/pageinspect.h | 2 +- contrib/pageinspect/rawpage.c | 2 +- contrib/passwordcheck/passwordcheck.c | 2 +- contrib/pg_prewarm/autoprewarm.c | 2 +- contrib/pg_prewarm/pg_prewarm.c | 2 +- contrib/pg_stat_statements/pg_stat_statements.c | 2 +- contrib/pg_trgm/trgm_regexp.c | 2 +- contrib/pg_visibility/pg_visibility.c | 2 +- contrib/pgstattuple/pgstatapprox.c | 2 +- contrib/postgres_fdw/connection.c | 2 +- contrib/postgres_fdw/deparse.c | 2 +- contrib/postgres_fdw/option.c | 2 +- contrib/postgres_fdw/postgres_fdw.c | 2 +- contrib/postgres_fdw/postgres_fdw.h | 2 +- contrib/postgres_fdw/shippable.c | 2 +- contrib/sepgsql/database.c | 2 +- contrib/sepgsql/dml.c | 2 +- contrib/sepgsql/hooks.c | 2 +- contrib/sepgsql/label.c | 2 +- contrib/sepgsql/launcher | 2 +- contrib/sepgsql/proc.c | 2 +- contrib/sepgsql/relation.c | 2 +- contrib/sepgsql/schema.c | 2 +- contrib/sepgsql/selinux.c | 2 +- contrib/sepgsql/sepgsql.h | 2 +- contrib/sepgsql/uavc.c | 2 +- contrib/tablefunc/tablefunc.c | 2 +- contrib/tablefunc/tablefunc.h | 2 +- contrib/tcn/tcn.c | 2 +- contrib/test_decoding/test_decoding.c | 2 +- contrib/tsm_system_rows/tsm_system_rows.c | 2 +- contrib/tsm_system_time/tsm_system_time.c | 2 +- contrib/unaccent/unaccent.c | 2 +- contrib/uuid-ossp/uuid-ossp.c | 2 +- contrib/vacuumlo/vacuumlo.c | 2 +- doc/src/sgml/generate-errcodes-table.pl | 2 +- doc/src/sgml/legal.sgml | 6 +++--- doc/src/sgml/lobj.sgml | 2 +- src/backend/Makefile | 2 +- src/backend/access/brin/brin.c | 2 +- src/backend/access/brin/brin_inclusion.c | 2 +- src/backend/access/brin/brin_minmax.c | 2 +- src/backend/access/brin/brin_pageops.c | 2 +- src/backend/access/brin/brin_revmap.c | 2 +- src/backend/access/brin/brin_tuple.c | 2 +- src/backend/access/brin/brin_validate.c | 2 +- src/backend/access/brin/brin_xlog.c | 2 +- src/backend/access/common/bufmask.c | 2 +- src/backend/access/common/heaptuple.c | 2 +- src/backend/access/common/indextuple.c | 2 +- src/backend/access/common/printsimple.c | 2 +- src/backend/access/common/printtup.c | 2 +- src/backend/access/common/reloptions.c | 2 +- 
src/backend/access/common/scankey.c | 2 +- src/backend/access/common/session.c | 2 +- src/backend/access/common/tupconvert.c | 2 +- src/backend/access/common/tupdesc.c | 2 +- src/backend/access/gin/ginarrayproc.c | 2 +- src/backend/access/gin/ginbtree.c | 2 +- src/backend/access/gin/ginbulk.c | 2 +- src/backend/access/gin/gindatapage.c | 2 +- src/backend/access/gin/ginentrypage.c | 2 +- src/backend/access/gin/ginfast.c | 2 +- src/backend/access/gin/ginget.c | 2 +- src/backend/access/gin/gininsert.c | 2 +- src/backend/access/gin/ginlogic.c | 2 +- src/backend/access/gin/ginpostinglist.c | 2 +- src/backend/access/gin/ginscan.c | 2 +- src/backend/access/gin/ginutil.c | 2 +- src/backend/access/gin/ginvacuum.c | 2 +- src/backend/access/gin/ginvalidate.c | 2 +- src/backend/access/gin/ginxlog.c | 2 +- src/backend/access/gist/gist.c | 2 +- src/backend/access/gist/gistbuild.c | 2 +- src/backend/access/gist/gistbuildbuffers.c | 2 +- src/backend/access/gist/gistget.c | 2 +- src/backend/access/gist/gistproc.c | 2 +- src/backend/access/gist/gistscan.c | 2 +- src/backend/access/gist/gistsplit.c | 2 +- src/backend/access/gist/gistutil.c | 2 +- src/backend/access/gist/gistvacuum.c | 2 +- src/backend/access/gist/gistvalidate.c | 2 +- src/backend/access/gist/gistxlog.c | 2 +- src/backend/access/hash/hash.c | 2 +- src/backend/access/hash/hash_xlog.c | 2 +- src/backend/access/hash/hashfunc.c | 2 +- src/backend/access/hash/hashinsert.c | 2 +- src/backend/access/hash/hashovfl.c | 2 +- src/backend/access/hash/hashpage.c | 2 +- src/backend/access/hash/hashsearch.c | 2 +- src/backend/access/hash/hashsort.c | 2 +- src/backend/access/hash/hashutil.c | 2 +- src/backend/access/hash/hashvalidate.c | 2 +- src/backend/access/heap/heapam.c | 2 +- src/backend/access/heap/hio.c | 2 +- src/backend/access/heap/pruneheap.c | 2 +- src/backend/access/heap/rewriteheap.c | 2 +- src/backend/access/heap/syncscan.c | 2 +- src/backend/access/heap/tuptoaster.c | 2 +- src/backend/access/heap/visibilitymap.c | 2 +- src/backend/access/index/amapi.c | 2 +- src/backend/access/index/amvalidate.c | 2 +- src/backend/access/index/genam.c | 2 +- src/backend/access/index/indexam.c | 2 +- src/backend/access/nbtree/nbtcompare.c | 2 +- src/backend/access/nbtree/nbtinsert.c | 2 +- src/backend/access/nbtree/nbtpage.c | 2 +- src/backend/access/nbtree/nbtree.c | 2 +- src/backend/access/nbtree/nbtsearch.c | 2 +- src/backend/access/nbtree/nbtsort.c | 2 +- src/backend/access/nbtree/nbtutils.c | 2 +- src/backend/access/nbtree/nbtvalidate.c | 2 +- src/backend/access/nbtree/nbtxlog.c | 2 +- src/backend/access/rmgrdesc/brindesc.c | 2 +- src/backend/access/rmgrdesc/clogdesc.c | 2 +- src/backend/access/rmgrdesc/committsdesc.c | 2 +- src/backend/access/rmgrdesc/dbasedesc.c | 2 +- src/backend/access/rmgrdesc/genericdesc.c | 2 +- src/backend/access/rmgrdesc/gindesc.c | 2 +- src/backend/access/rmgrdesc/gistdesc.c | 2 +- src/backend/access/rmgrdesc/hashdesc.c | 2 +- src/backend/access/rmgrdesc/heapdesc.c | 2 +- src/backend/access/rmgrdesc/logicalmsgdesc.c | 2 +- src/backend/access/rmgrdesc/mxactdesc.c | 2 +- src/backend/access/rmgrdesc/nbtdesc.c | 2 +- src/backend/access/rmgrdesc/relmapdesc.c | 2 +- src/backend/access/rmgrdesc/replorigindesc.c | 2 +- src/backend/access/rmgrdesc/seqdesc.c | 2 +- src/backend/access/rmgrdesc/smgrdesc.c | 2 +- src/backend/access/rmgrdesc/spgdesc.c | 2 +- src/backend/access/rmgrdesc/standbydesc.c | 2 +- src/backend/access/rmgrdesc/tblspcdesc.c | 2 +- src/backend/access/rmgrdesc/xactdesc.c | 2 +- src/backend/access/rmgrdesc/xlogdesc.c | 2 
+- src/backend/access/spgist/spgdoinsert.c | 2 +- src/backend/access/spgist/spginsert.c | 2 +- src/backend/access/spgist/spgkdtreeproc.c | 2 +- src/backend/access/spgist/spgquadtreeproc.c | 2 +- src/backend/access/spgist/spgscan.c | 2 +- src/backend/access/spgist/spgtextproc.c | 2 +- src/backend/access/spgist/spgutils.c | 2 +- src/backend/access/spgist/spgvacuum.c | 2 +- src/backend/access/spgist/spgvalidate.c | 2 +- src/backend/access/spgist/spgxlog.c | 2 +- src/backend/access/tablesample/bernoulli.c | 2 +- src/backend/access/tablesample/system.c | 2 +- src/backend/access/tablesample/tablesample.c | 2 +- src/backend/access/transam/clog.c | 2 +- src/backend/access/transam/commit_ts.c | 2 +- src/backend/access/transam/generic_xlog.c | 2 +- src/backend/access/transam/multixact.c | 2 +- src/backend/access/transam/parallel.c | 2 +- src/backend/access/transam/slru.c | 2 +- src/backend/access/transam/subtrans.c | 2 +- src/backend/access/transam/timeline.c | 2 +- src/backend/access/transam/transam.c | 2 +- src/backend/access/transam/twophase.c | 2 +- src/backend/access/transam/twophase_rmgr.c | 2 +- src/backend/access/transam/varsup.c | 2 +- src/backend/access/transam/xact.c | 2 +- src/backend/access/transam/xlog.c | 2 +- src/backend/access/transam/xlogarchive.c | 2 +- src/backend/access/transam/xlogfuncs.c | 2 +- src/backend/access/transam/xloginsert.c | 2 +- src/backend/access/transam/xlogreader.c | 2 +- src/backend/access/transam/xlogutils.c | 2 +- src/backend/bootstrap/bootparse.y | 2 +- src/backend/bootstrap/bootscanner.l | 2 +- src/backend/bootstrap/bootstrap.c | 2 +- src/backend/catalog/Catalog.pm | 2 +- src/backend/catalog/aclchk.c | 2 +- src/backend/catalog/catalog.c | 2 +- src/backend/catalog/dependency.c | 2 +- src/backend/catalog/genbki.pl | 4 ++-- src/backend/catalog/heap.c | 2 +- src/backend/catalog/index.c | 2 +- src/backend/catalog/indexing.c | 2 +- src/backend/catalog/information_schema.sql | 2 +- src/backend/catalog/namespace.c | 2 +- src/backend/catalog/objectaccess.c | 2 +- src/backend/catalog/objectaddress.c | 2 +- src/backend/catalog/partition.c | 2 +- src/backend/catalog/pg_aggregate.c | 2 +- src/backend/catalog/pg_collation.c | 2 +- src/backend/catalog/pg_constraint.c | 2 +- src/backend/catalog/pg_conversion.c | 2 +- src/backend/catalog/pg_db_role_setting.c | 2 +- src/backend/catalog/pg_depend.c | 2 +- src/backend/catalog/pg_enum.c | 2 +- src/backend/catalog/pg_inherits.c | 2 +- src/backend/catalog/pg_largeobject.c | 2 +- src/backend/catalog/pg_namespace.c | 2 +- src/backend/catalog/pg_operator.c | 2 +- src/backend/catalog/pg_proc.c | 2 +- src/backend/catalog/pg_publication.c | 2 +- src/backend/catalog/pg_range.c | 2 +- src/backend/catalog/pg_shdepend.c | 2 +- src/backend/catalog/pg_subscription.c | 2 +- src/backend/catalog/pg_type.c | 2 +- src/backend/catalog/storage.c | 2 +- src/backend/catalog/system_views.sql | 2 +- src/backend/catalog/toasting.c | 2 +- src/backend/commands/aggregatecmds.c | 2 +- src/backend/commands/alter.c | 2 +- src/backend/commands/amcmds.c | 2 +- src/backend/commands/analyze.c | 2 +- src/backend/commands/async.c | 2 +- src/backend/commands/cluster.c | 2 +- src/backend/commands/collationcmds.c | 2 +- src/backend/commands/comment.c | 2 +- src/backend/commands/constraint.c | 2 +- src/backend/commands/conversioncmds.c | 2 +- src/backend/commands/copy.c | 2 +- src/backend/commands/createas.c | 2 +- src/backend/commands/dbcommands.c | 2 +- src/backend/commands/define.c | 2 +- src/backend/commands/discard.c | 2 +- src/backend/commands/dropcmds.c | 2 +- 
src/backend/commands/event_trigger.c | 2 +- src/backend/commands/explain.c | 2 +- src/backend/commands/extension.c | 2 +- src/backend/commands/foreigncmds.c | 2 +- src/backend/commands/functioncmds.c | 2 +- src/backend/commands/indexcmds.c | 2 +- src/backend/commands/lockcmds.c | 2 +- src/backend/commands/matview.c | 2 +- src/backend/commands/opclasscmds.c | 2 +- src/backend/commands/operatorcmds.c | 2 +- src/backend/commands/policy.c | 2 +- src/backend/commands/portalcmds.c | 2 +- src/backend/commands/prepare.c | 2 +- src/backend/commands/proclang.c | 2 +- src/backend/commands/publicationcmds.c | 2 +- src/backend/commands/schemacmds.c | 2 +- src/backend/commands/seclabel.c | 2 +- src/backend/commands/sequence.c | 2 +- src/backend/commands/statscmds.c | 2 +- src/backend/commands/subscriptioncmds.c | 2 +- src/backend/commands/tablecmds.c | 2 +- src/backend/commands/tablespace.c | 2 +- src/backend/commands/trigger.c | 2 +- src/backend/commands/tsearchcmds.c | 2 +- src/backend/commands/typecmds.c | 2 +- src/backend/commands/user.c | 2 +- src/backend/commands/vacuum.c | 2 +- src/backend/commands/vacuumlazy.c | 2 +- src/backend/commands/variable.c | 2 +- src/backend/commands/view.c | 2 +- src/backend/executor/execAmi.c | 2 +- src/backend/executor/execCurrent.c | 2 +- src/backend/executor/execExpr.c | 2 +- src/backend/executor/execExprInterp.c | 2 +- src/backend/executor/execGrouping.c | 2 +- src/backend/executor/execIndexing.c | 2 +- src/backend/executor/execJunk.c | 2 +- src/backend/executor/execMain.c | 2 +- src/backend/executor/execParallel.c | 2 +- src/backend/executor/execPartition.c | 2 +- src/backend/executor/execProcnode.c | 2 +- src/backend/executor/execReplication.c | 2 +- src/backend/executor/execSRF.c | 2 +- src/backend/executor/execScan.c | 2 +- src/backend/executor/execTuples.c | 2 +- src/backend/executor/execUtils.c | 2 +- src/backend/executor/functions.c | 2 +- src/backend/executor/instrument.c | 2 +- src/backend/executor/nodeAgg.c | 2 +- src/backend/executor/nodeAppend.c | 2 +- src/backend/executor/nodeBitmapAnd.c | 2 +- src/backend/executor/nodeBitmapHeapscan.c | 2 +- src/backend/executor/nodeBitmapIndexscan.c | 2 +- src/backend/executor/nodeBitmapOr.c | 2 +- src/backend/executor/nodeCtescan.c | 2 +- src/backend/executor/nodeCustom.c | 2 +- src/backend/executor/nodeForeignscan.c | 2 +- src/backend/executor/nodeFunctionscan.c | 2 +- src/backend/executor/nodeGather.c | 2 +- src/backend/executor/nodeGatherMerge.c | 2 +- src/backend/executor/nodeGroup.c | 2 +- src/backend/executor/nodeHash.c | 2 +- src/backend/executor/nodeHashjoin.c | 2 +- src/backend/executor/nodeIndexonlyscan.c | 2 +- src/backend/executor/nodeIndexscan.c | 2 +- src/backend/executor/nodeLimit.c | 2 +- src/backend/executor/nodeLockRows.c | 2 +- src/backend/executor/nodeMaterial.c | 2 +- src/backend/executor/nodeMergeAppend.c | 2 +- src/backend/executor/nodeMergejoin.c | 2 +- src/backend/executor/nodeModifyTable.c | 2 +- src/backend/executor/nodeNamedtuplestorescan.c | 2 +- src/backend/executor/nodeNestloop.c | 2 +- src/backend/executor/nodeProjectSet.c | 2 +- src/backend/executor/nodeRecursiveunion.c | 2 +- src/backend/executor/nodeResult.c | 2 +- src/backend/executor/nodeSamplescan.c | 2 +- src/backend/executor/nodeSeqscan.c | 2 +- src/backend/executor/nodeSetOp.c | 2 +- src/backend/executor/nodeSort.c | 2 +- src/backend/executor/nodeSubplan.c | 2 +- src/backend/executor/nodeSubqueryscan.c | 2 +- src/backend/executor/nodeTableFuncscan.c | 2 +- src/backend/executor/nodeTidscan.c | 2 +- 
src/backend/executor/nodeUnique.c | 2 +- src/backend/executor/nodeValuesscan.c | 2 +- src/backend/executor/nodeWindowAgg.c | 2 +- src/backend/executor/nodeWorktablescan.c | 2 +- src/backend/executor/spi.c | 2 +- src/backend/executor/tqueue.c | 2 +- src/backend/executor/tstoreReceiver.c | 2 +- src/backend/foreign/foreign.c | 2 +- src/backend/lib/binaryheap.c | 2 +- src/backend/lib/bipartite_match.c | 2 +- src/backend/lib/dshash.c | 2 +- src/backend/lib/hyperloglog.c | 2 +- src/backend/lib/ilist.c | 2 +- src/backend/lib/knapsack.c | 2 +- src/backend/lib/pairingheap.c | 2 +- src/backend/lib/rbtree.c | 2 +- src/backend/lib/stringinfo.c | 2 +- src/backend/libpq/auth-scram.c | 2 +- src/backend/libpq/auth.c | 2 +- src/backend/libpq/be-fsstubs.c | 2 +- src/backend/libpq/be-secure-openssl.c | 2 +- src/backend/libpq/be-secure.c | 2 +- src/backend/libpq/crypt.c | 2 +- src/backend/libpq/hba.c | 2 +- src/backend/libpq/ifaddr.c | 2 +- src/backend/libpq/pqcomm.c | 2 +- src/backend/libpq/pqformat.c | 2 +- src/backend/libpq/pqmq.c | 2 +- src/backend/libpq/pqsignal.c | 2 +- src/backend/main/main.c | 2 +- src/backend/nodes/bitmapset.c | 2 +- src/backend/nodes/copyfuncs.c | 2 +- src/backend/nodes/equalfuncs.c | 2 +- src/backend/nodes/extensible.c | 2 +- src/backend/nodes/list.c | 2 +- src/backend/nodes/makefuncs.c | 2 +- src/backend/nodes/nodeFuncs.c | 2 +- src/backend/nodes/nodes.c | 2 +- src/backend/nodes/outfuncs.c | 2 +- src/backend/nodes/params.c | 2 +- src/backend/nodes/print.c | 2 +- src/backend/nodes/read.c | 2 +- src/backend/nodes/readfuncs.c | 2 +- src/backend/nodes/tidbitmap.c | 2 +- src/backend/nodes/value.c | 2 +- src/backend/optimizer/geqo/geqo_copy.c | 2 +- src/backend/optimizer/geqo/geqo_eval.c | 2 +- src/backend/optimizer/geqo/geqo_main.c | 2 +- src/backend/optimizer/geqo/geqo_misc.c | 2 +- src/backend/optimizer/geqo/geqo_pool.c | 2 +- src/backend/optimizer/geqo/geqo_random.c | 2 +- src/backend/optimizer/geqo/geqo_selection.c | 2 +- src/backend/optimizer/path/allpaths.c | 2 +- src/backend/optimizer/path/clausesel.c | 2 +- src/backend/optimizer/path/costsize.c | 2 +- src/backend/optimizer/path/equivclass.c | 2 +- src/backend/optimizer/path/indxpath.c | 2 +- src/backend/optimizer/path/joinpath.c | 2 +- src/backend/optimizer/path/joinrels.c | 2 +- src/backend/optimizer/path/pathkeys.c | 2 +- src/backend/optimizer/path/tidpath.c | 2 +- src/backend/optimizer/plan/analyzejoins.c | 2 +- src/backend/optimizer/plan/createplan.c | 2 +- src/backend/optimizer/plan/initsplan.c | 2 +- src/backend/optimizer/plan/planagg.c | 2 +- src/backend/optimizer/plan/planmain.c | 2 +- src/backend/optimizer/plan/planner.c | 2 +- src/backend/optimizer/plan/setrefs.c | 2 +- src/backend/optimizer/plan/subselect.c | 2 +- src/backend/optimizer/prep/prepjointree.c | 2 +- src/backend/optimizer/prep/prepqual.c | 2 +- src/backend/optimizer/prep/preptlist.c | 2 +- src/backend/optimizer/prep/prepunion.c | 2 +- src/backend/optimizer/util/clauses.c | 2 +- src/backend/optimizer/util/joininfo.c | 2 +- src/backend/optimizer/util/orclauses.c | 2 +- src/backend/optimizer/util/pathnode.c | 2 +- src/backend/optimizer/util/placeholder.c | 2 +- src/backend/optimizer/util/plancat.c | 2 +- src/backend/optimizer/util/predtest.c | 2 +- src/backend/optimizer/util/relnode.c | 2 +- src/backend/optimizer/util/restrictinfo.c | 2 +- src/backend/optimizer/util/tlist.c | 2 +- src/backend/optimizer/util/var.c | 2 +- src/backend/parser/analyze.c | 2 +- src/backend/parser/check_keywords.pl | 2 +- src/backend/parser/gram.y | 2 +- 
src/backend/parser/parse_agg.c | 2 +- src/backend/parser/parse_clause.c | 2 +- src/backend/parser/parse_coerce.c | 2 +- src/backend/parser/parse_collate.c | 2 +- src/backend/parser/parse_cte.c | 2 +- src/backend/parser/parse_enr.c | 2 +- src/backend/parser/parse_expr.c | 2 +- src/backend/parser/parse_func.c | 2 +- src/backend/parser/parse_node.c | 2 +- src/backend/parser/parse_oper.c | 2 +- src/backend/parser/parse_param.c | 2 +- src/backend/parser/parse_relation.c | 2 +- src/backend/parser/parse_target.c | 2 +- src/backend/parser/parse_type.c | 2 +- src/backend/parser/parse_utilcmd.c | 2 +- src/backend/parser/parser.c | 2 +- src/backend/parser/scan.l | 2 +- src/backend/parser/scansup.c | 2 +- src/backend/port/atomics.c | 2 +- src/backend/port/dynloader/aix.h | 2 +- src/backend/port/dynloader/cygwin.h | 2 +- src/backend/port/dynloader/freebsd.c | 2 +- src/backend/port/dynloader/freebsd.h | 2 +- src/backend/port/dynloader/hpux.c | 2 +- src/backend/port/dynloader/hpux.h | 2 +- src/backend/port/dynloader/linux.c | 2 +- src/backend/port/dynloader/linux.h | 2 +- src/backend/port/dynloader/netbsd.c | 2 +- src/backend/port/dynloader/netbsd.h | 2 +- src/backend/port/dynloader/openbsd.c | 2 +- src/backend/port/dynloader/openbsd.h | 2 +- src/backend/port/dynloader/solaris.h | 2 +- src/backend/port/posix_sema.c | 2 +- src/backend/port/sysv_sema.c | 2 +- src/backend/port/sysv_shmem.c | 2 +- src/backend/port/tas/sunstudio_sparc.s | 2 +- src/backend/port/tas/sunstudio_x86.s | 2 +- src/backend/port/win32/crashdump.c | 2 +- src/backend/port/win32/mingwcompat.c | 2 +- src/backend/port/win32/signal.c | 2 +- src/backend/port/win32/socket.c | 2 +- src/backend/port/win32/timer.c | 2 +- src/backend/port/win32_sema.c | 2 +- src/backend/port/win32_shmem.c | 2 +- src/backend/postmaster/autovacuum.c | 2 +- src/backend/postmaster/bgworker.c | 2 +- src/backend/postmaster/bgwriter.c | 2 +- src/backend/postmaster/checkpointer.c | 2 +- src/backend/postmaster/fork_process.c | 2 +- src/backend/postmaster/pgarch.c | 2 +- src/backend/postmaster/pgstat.c | 2 +- src/backend/postmaster/postmaster.c | 2 +- src/backend/postmaster/startup.c | 2 +- src/backend/postmaster/syslogger.c | 2 +- src/backend/postmaster/walwriter.c | 2 +- src/backend/regex/regc_pg_locale.c | 2 +- src/backend/regex/regexport.c | 2 +- src/backend/regex/regprefix.c | 2 +- src/backend/replication/basebackup.c | 2 +- .../replication/libpqwalreceiver/libpqwalreceiver.c | 2 +- src/backend/replication/logical/decode.c | 2 +- src/backend/replication/logical/launcher.c | 2 +- src/backend/replication/logical/logical.c | 2 +- src/backend/replication/logical/logicalfuncs.c | 2 +- src/backend/replication/logical/message.c | 2 +- src/backend/replication/logical/origin.c | 2 +- src/backend/replication/logical/proto.c | 2 +- src/backend/replication/logical/relation.c | 2 +- src/backend/replication/logical/reorderbuffer.c | 2 +- src/backend/replication/logical/snapbuild.c | 2 +- src/backend/replication/logical/tablesync.c | 2 +- src/backend/replication/logical/worker.c | 2 +- src/backend/replication/pgoutput/pgoutput.c | 2 +- src/backend/replication/repl_gram.y | 2 +- src/backend/replication/repl_scanner.l | 2 +- src/backend/replication/slot.c | 2 +- src/backend/replication/slotfuncs.c | 2 +- src/backend/replication/syncrep.c | 2 +- src/backend/replication/syncrep_gram.y | 2 +- src/backend/replication/syncrep_scanner.l | 2 +- src/backend/replication/walreceiver.c | 2 +- src/backend/replication/walreceiverfuncs.c | 2 +- src/backend/replication/walsender.c | 2 +- 
src/backend/rewrite/rewriteDefine.c | 2 +- src/backend/rewrite/rewriteHandler.c | 2 +- src/backend/rewrite/rewriteManip.c | 2 +- src/backend/rewrite/rewriteRemove.c | 2 +- src/backend/rewrite/rewriteSupport.c | 2 +- src/backend/rewrite/rowsecurity.c | 2 +- src/backend/snowball/dict_snowball.c | 2 +- src/backend/snowball/snowball.sql.in | 2 +- src/backend/snowball/snowball_func.sql.in | 2 +- src/backend/statistics/dependencies.c | 2 +- src/backend/statistics/extended_stats.c | 2 +- src/backend/statistics/mvdistinct.c | 2 +- src/backend/storage/buffer/buf_init.c | 2 +- src/backend/storage/buffer/buf_table.c | 2 +- src/backend/storage/buffer/bufmgr.c | 2 +- src/backend/storage/buffer/freelist.c | 2 +- src/backend/storage/buffer/localbuf.c | 2 +- src/backend/storage/file/buffile.c | 2 +- src/backend/storage/file/copydir.c | 2 +- src/backend/storage/file/fd.c | 2 +- src/backend/storage/file/reinit.c | 2 +- src/backend/storage/file/sharedfileset.c | 2 +- src/backend/storage/freespace/freespace.c | 2 +- src/backend/storage/freespace/fsmpage.c | 2 +- src/backend/storage/freespace/indexfsm.c | 2 +- src/backend/storage/ipc/barrier.c | 2 +- src/backend/storage/ipc/dsm.c | 2 +- src/backend/storage/ipc/dsm_impl.c | 2 +- src/backend/storage/ipc/ipc.c | 2 +- src/backend/storage/ipc/ipci.c | 2 +- src/backend/storage/ipc/latch.c | 2 +- src/backend/storage/ipc/pmsignal.c | 2 +- src/backend/storage/ipc/procarray.c | 2 +- src/backend/storage/ipc/procsignal.c | 2 +- src/backend/storage/ipc/shm_mq.c | 2 +- src/backend/storage/ipc/shm_toc.c | 2 +- src/backend/storage/ipc/shmem.c | 2 +- src/backend/storage/ipc/shmqueue.c | 2 +- src/backend/storage/ipc/sinval.c | 2 +- src/backend/storage/ipc/sinvaladt.c | 2 +- src/backend/storage/ipc/standby.c | 2 +- src/backend/storage/large_object/inv_api.c | 2 +- src/backend/storage/lmgr/condition_variable.c | 2 +- src/backend/storage/lmgr/deadlock.c | 2 +- src/backend/storage/lmgr/generate-lwlocknames.pl | 2 +- src/backend/storage/lmgr/lmgr.c | 2 +- src/backend/storage/lmgr/lock.c | 2 +- src/backend/storage/lmgr/lwlock.c | 2 +- src/backend/storage/lmgr/predicate.c | 2 +- src/backend/storage/lmgr/proc.c | 2 +- src/backend/storage/lmgr/s_lock.c | 2 +- src/backend/storage/lmgr/spin.c | 2 +- src/backend/storage/page/bufpage.c | 2 +- src/backend/storage/page/checksum.c | 2 +- src/backend/storage/page/itemptr.c | 2 +- src/backend/storage/smgr/md.c | 2 +- src/backend/storage/smgr/smgr.c | 2 +- src/backend/storage/smgr/smgrtype.c | 2 +- src/backend/tcop/dest.c | 2 +- src/backend/tcop/fastpath.c | 2 +- src/backend/tcop/postgres.c | 2 +- src/backend/tcop/pquery.c | 2 +- src/backend/tcop/utility.c | 2 +- src/backend/tsearch/Makefile | 2 +- src/backend/tsearch/dict.c | 2 +- src/backend/tsearch/dict_ispell.c | 2 +- src/backend/tsearch/dict_simple.c | 2 +- src/backend/tsearch/dict_synonym.c | 2 +- src/backend/tsearch/dict_thesaurus.c | 2 +- src/backend/tsearch/regis.c | 2 +- src/backend/tsearch/spell.c | 2 +- src/backend/tsearch/to_tsany.c | 2 +- src/backend/tsearch/ts_locale.c | 2 +- src/backend/tsearch/ts_parse.c | 2 +- src/backend/tsearch/ts_selfuncs.c | 2 +- src/backend/tsearch/ts_typanalyze.c | 2 +- src/backend/tsearch/ts_utils.c | 2 +- src/backend/tsearch/wparser.c | 2 +- src/backend/tsearch/wparser_def.c | 2 +- src/backend/utils/Gen_dummy_probes.pl | 2 +- src/backend/utils/Gen_dummy_probes.sed | 2 +- src/backend/utils/Gen_fmgrtab.pl | 8 ++++---- src/backend/utils/adt/acl.c | 2 +- src/backend/utils/adt/amutils.c | 2 +- src/backend/utils/adt/array_expanded.c | 2 +- 
src/backend/utils/adt/array_selfuncs.c | 2 +- src/backend/utils/adt/array_typanalyze.c | 2 +- src/backend/utils/adt/array_userfuncs.c | 2 +- src/backend/utils/adt/arrayfuncs.c | 2 +- src/backend/utils/adt/arrayutils.c | 2 +- src/backend/utils/adt/ascii.c | 2 +- src/backend/utils/adt/bool.c | 2 +- src/backend/utils/adt/char.c | 2 +- src/backend/utils/adt/date.c | 2 +- src/backend/utils/adt/datetime.c | 2 +- src/backend/utils/adt/datum.c | 2 +- src/backend/utils/adt/dbsize.c | 2 +- src/backend/utils/adt/domains.c | 2 +- src/backend/utils/adt/encode.c | 2 +- src/backend/utils/adt/enum.c | 2 +- src/backend/utils/adt/expandeddatum.c | 2 +- src/backend/utils/adt/float.c | 2 +- src/backend/utils/adt/format_type.c | 2 +- src/backend/utils/adt/formatting.c | 2 +- src/backend/utils/adt/genfile.c | 2 +- src/backend/utils/adt/geo_ops.c | 2 +- src/backend/utils/adt/geo_selfuncs.c | 2 +- src/backend/utils/adt/geo_spgist.c | 2 +- src/backend/utils/adt/int.c | 2 +- src/backend/utils/adt/int8.c | 2 +- src/backend/utils/adt/json.c | 2 +- src/backend/utils/adt/jsonb.c | 2 +- src/backend/utils/adt/jsonb_gin.c | 2 +- src/backend/utils/adt/jsonb_op.c | 2 +- src/backend/utils/adt/jsonb_util.c | 2 +- src/backend/utils/adt/jsonfuncs.c | 2 +- src/backend/utils/adt/levenshtein.c | 2 +- src/backend/utils/adt/like.c | 2 +- src/backend/utils/adt/like_match.c | 2 +- src/backend/utils/adt/lockfuncs.c | 2 +- src/backend/utils/adt/mac.c | 2 +- src/backend/utils/adt/mac8.c | 2 +- src/backend/utils/adt/misc.c | 2 +- src/backend/utils/adt/nabstime.c | 2 +- src/backend/utils/adt/name.c | 2 +- src/backend/utils/adt/network_gist.c | 2 +- src/backend/utils/adt/network_selfuncs.c | 2 +- src/backend/utils/adt/network_spgist.c | 2 +- src/backend/utils/adt/numeric.c | 2 +- src/backend/utils/adt/numutils.c | 2 +- src/backend/utils/adt/oid.c | 2 +- src/backend/utils/adt/oracle_compat.c | 2 +- src/backend/utils/adt/orderedsetaggs.c | 2 +- src/backend/utils/adt/pg_locale.c | 2 +- src/backend/utils/adt/pg_lsn.c | 2 +- src/backend/utils/adt/pg_upgrade_support.c | 2 +- src/backend/utils/adt/pgstatfuncs.c | 2 +- src/backend/utils/adt/pseudotypes.c | 2 +- src/backend/utils/adt/quote.c | 2 +- src/backend/utils/adt/rangetypes.c | 2 +- src/backend/utils/adt/rangetypes_gist.c | 2 +- src/backend/utils/adt/rangetypes_selfuncs.c | 2 +- src/backend/utils/adt/rangetypes_spgist.c | 2 +- src/backend/utils/adt/rangetypes_typanalyze.c | 2 +- src/backend/utils/adt/regexp.c | 2 +- src/backend/utils/adt/regproc.c | 2 +- src/backend/utils/adt/ri_triggers.c | 2 +- src/backend/utils/adt/rowtypes.c | 2 +- src/backend/utils/adt/ruleutils.c | 2 +- src/backend/utils/adt/selfuncs.c | 2 +- src/backend/utils/adt/tid.c | 2 +- src/backend/utils/adt/timestamp.c | 2 +- src/backend/utils/adt/trigfuncs.c | 2 +- src/backend/utils/adt/tsginidx.c | 2 +- src/backend/utils/adt/tsgistidx.c | 2 +- src/backend/utils/adt/tsquery.c | 2 +- src/backend/utils/adt/tsquery_cleanup.c | 2 +- src/backend/utils/adt/tsquery_gist.c | 2 +- src/backend/utils/adt/tsquery_op.c | 2 +- src/backend/utils/adt/tsquery_rewrite.c | 2 +- src/backend/utils/adt/tsquery_util.c | 2 +- src/backend/utils/adt/tsrank.c | 2 +- src/backend/utils/adt/tsvector.c | 2 +- src/backend/utils/adt/tsvector_op.c | 2 +- src/backend/utils/adt/tsvector_parser.c | 2 +- src/backend/utils/adt/txid.c | 2 +- src/backend/utils/adt/uuid.c | 2 +- src/backend/utils/adt/varbit.c | 2 +- src/backend/utils/adt/varchar.c | 2 +- src/backend/utils/adt/varlena.c | 2 +- src/backend/utils/adt/version.c | 2 +- src/backend/utils/adt/windowfuncs.c | 
2 +- src/backend/utils/adt/xid.c | 2 +- src/backend/utils/adt/xml.c | 2 +- src/backend/utils/cache/attoptcache.c | 2 +- src/backend/utils/cache/catcache.c | 2 +- src/backend/utils/cache/evtcache.c | 2 +- src/backend/utils/cache/inval.c | 2 +- src/backend/utils/cache/lsyscache.c | 2 +- src/backend/utils/cache/plancache.c | 2 +- src/backend/utils/cache/relcache.c | 2 +- src/backend/utils/cache/relfilenodemap.c | 2 +- src/backend/utils/cache/relmapper.c | 2 +- src/backend/utils/cache/spccache.c | 2 +- src/backend/utils/cache/syscache.c | 2 +- src/backend/utils/cache/ts_cache.c | 2 +- src/backend/utils/cache/typcache.c | 2 +- src/backend/utils/errcodes.txt | 2 +- src/backend/utils/error/assert.c | 2 +- src/backend/utils/error/elog.c | 2 +- src/backend/utils/fmgr/dfmgr.c | 2 +- src/backend/utils/fmgr/fmgr.c | 2 +- src/backend/utils/fmgr/funcapi.c | 2 +- src/backend/utils/generate-errcodes.pl | 2 +- src/backend/utils/hash/dynahash.c | 2 +- src/backend/utils/hash/hashfn.c | 2 +- src/backend/utils/hash/pg_crc.c | 2 +- src/backend/utils/init/globals.c | 2 +- src/backend/utils/init/miscinit.c | 2 +- src/backend/utils/init/postinit.c | 2 +- src/backend/utils/mb/Unicode/Makefile | 2 +- src/backend/utils/mb/Unicode/UCS_to_BIG5.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_GB18030.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_SJIS.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_UHC.pl | 2 +- src/backend/utils/mb/Unicode/UCS_to_most.pl | 2 +- src/backend/utils/mb/Unicode/convutils.pm | 2 +- src/backend/utils/mb/conv.c | 2 +- .../mb/conversion_procs/ascii_and_mic/ascii_and_mic.c | 2 +- .../conversion_procs/cyrillic_and_mic/cyrillic_and_mic.c | 2 +- .../conversion_procs/euc2004_sjis2004/euc2004_sjis2004.c | 2 +- .../mb/conversion_procs/euc_cn_and_mic/euc_cn_and_mic.c | 2 +- .../mb/conversion_procs/euc_jp_and_sjis/euc_jp_and_sjis.c | 2 +- .../mb/conversion_procs/euc_kr_and_mic/euc_kr_and_mic.c | 2 +- .../mb/conversion_procs/euc_tw_and_big5/euc_tw_and_big5.c | 2 +- .../latin2_and_win1250/latin2_and_win1250.c | 2 +- .../mb/conversion_procs/latin_and_mic/latin_and_mic.c | 2 +- .../mb/conversion_procs/utf8_and_ascii/utf8_and_ascii.c | 2 +- .../mb/conversion_procs/utf8_and_big5/utf8_and_big5.c | 2 +- .../utf8_and_cyrillic/utf8_and_cyrillic.c | 2 +- .../conversion_procs/utf8_and_euc2004/utf8_and_euc2004.c | 2 +- .../mb/conversion_procs/utf8_and_euc_cn/utf8_and_euc_cn.c | 2 +- .../mb/conversion_procs/utf8_and_euc_jp/utf8_and_euc_jp.c | 2 +- .../mb/conversion_procs/utf8_and_euc_kr/utf8_and_euc_kr.c | 2 +- .../mb/conversion_procs/utf8_and_euc_tw/utf8_and_euc_tw.c | 2 +- .../conversion_procs/utf8_and_gb18030/utf8_and_gb18030.c | 2 +- .../utils/mb/conversion_procs/utf8_and_gbk/utf8_and_gbk.c | 2 +- .../conversion_procs/utf8_and_iso8859/utf8_and_iso8859.c | 2 +- .../utf8_and_iso8859_1/utf8_and_iso8859_1.c | 2 +- .../mb/conversion_procs/utf8_and_johab/utf8_and_johab.c | 2 +- .../mb/conversion_procs/utf8_and_sjis/utf8_and_sjis.c | 2 +- .../utf8_and_sjis2004/utf8_and_sjis2004.c | 2 +- .../utils/mb/conversion_procs/utf8_and_uhc/utf8_and_uhc.c | 2 +- .../utils/mb/conversion_procs/utf8_and_win/utf8_and_win.c | 2 +- src/backend/utils/mb/mbutils.c | 2 
+- src/backend/utils/misc/backend_random.c | 2 +- src/backend/utils/misc/guc-file.l | 2 +- src/backend/utils/misc/guc.c | 2 +- src/backend/utils/misc/help_config.c | 2 +- src/backend/utils/misc/pg_config.c | 2 +- src/backend/utils/misc/pg_controldata.c | 2 +- src/backend/utils/misc/pg_rusage.c | 2 +- src/backend/utils/misc/ps_status.c | 2 +- src/backend/utils/misc/queryenvironment.c | 2 +- src/backend/utils/misc/rls.c | 2 +- src/backend/utils/misc/sampling.c | 2 +- src/backend/utils/misc/superuser.c | 2 +- src/backend/utils/misc/timeout.c | 2 +- src/backend/utils/misc/tzparser.c | 2 +- src/backend/utils/mmgr/aset.c | 2 +- src/backend/utils/mmgr/dsa.c | 2 +- src/backend/utils/mmgr/freepage.c | 2 +- src/backend/utils/mmgr/generation.c | 2 +- src/backend/utils/mmgr/mcxt.c | 2 +- src/backend/utils/mmgr/memdebug.c | 2 +- src/backend/utils/mmgr/portalmem.c | 2 +- src/backend/utils/mmgr/slab.c | 2 +- src/backend/utils/probes.d | 2 +- src/backend/utils/resowner/resowner.c | 2 +- src/backend/utils/sort/logtape.c | 2 +- src/backend/utils/sort/sharedtuplestore.c | 2 +- src/backend/utils/sort/sortsupport.c | 2 +- src/backend/utils/sort/tuplesort.c | 2 +- src/backend/utils/sort/tuplestore.c | 2 +- src/backend/utils/time/combocid.c | 2 +- src/backend/utils/time/snapmgr.c | 2 +- src/backend/utils/time/tqual.c | 2 +- src/bin/Makefile | 2 +- src/bin/initdb/Makefile | 2 +- src/bin/initdb/findtimezone.c | 2 +- src/bin/initdb/initdb.c | 2 +- src/bin/pg_basebackup/Makefile | 2 +- src/bin/pg_basebackup/pg_basebackup.c | 2 +- src/bin/pg_basebackup/pg_receivewal.c | 2 +- src/bin/pg_basebackup/pg_recvlogical.c | 2 +- src/bin/pg_basebackup/receivelog.c | 2 +- src/bin/pg_basebackup/receivelog.h | 2 +- src/bin/pg_basebackup/streamutil.c | 2 +- src/bin/pg_basebackup/streamutil.h | 2 +- src/bin/pg_basebackup/walmethods.c | 2 +- src/bin/pg_basebackup/walmethods.h | 2 +- src/bin/pg_config/Makefile | 2 +- src/bin/pg_config/pg_config.c | 2 +- src/bin/pg_controldata/Makefile | 2 +- src/bin/pg_ctl/Makefile | 2 +- src/bin/pg_ctl/pg_ctl.c | 2 +- src/bin/pg_dump/Makefile | 2 +- src/bin/pg_dump/common.c | 2 +- src/bin/pg_dump/compress_io.c | 2 +- src/bin/pg_dump/compress_io.h | 2 +- src/bin/pg_dump/dumputils.c | 2 +- src/bin/pg_dump/dumputils.h | 2 +- src/bin/pg_dump/parallel.c | 2 +- src/bin/pg_dump/parallel.h | 2 +- src/bin/pg_dump/pg_backup_directory.c | 2 +- src/bin/pg_dump/pg_backup_utils.c | 2 +- src/bin/pg_dump/pg_backup_utils.h | 2 +- src/bin/pg_dump/pg_dump.c | 2 +- src/bin/pg_dump/pg_dump.h | 2 +- src/bin/pg_dump/pg_dump_sort.c | 2 +- src/bin/pg_dump/pg_dumpall.c | 2 +- src/bin/pg_resetwal/Makefile | 2 +- src/bin/pg_resetwal/pg_resetwal.c | 2 +- src/bin/pg_rewind/Makefile | 2 +- src/bin/pg_rewind/copy_fetch.c | 2 +- src/bin/pg_rewind/datapagemap.c | 2 +- src/bin/pg_rewind/datapagemap.h | 2 +- src/bin/pg_rewind/fetch.c | 2 +- src/bin/pg_rewind/fetch.h | 2 +- src/bin/pg_rewind/file_ops.c | 2 +- src/bin/pg_rewind/file_ops.h | 2 +- src/bin/pg_rewind/filemap.c | 2 +- src/bin/pg_rewind/filemap.h | 2 +- src/bin/pg_rewind/libpq_fetch.c | 2 +- src/bin/pg_rewind/logging.c | 2 +- src/bin/pg_rewind/logging.h | 2 +- src/bin/pg_rewind/parsexlog.c | 2 +- src/bin/pg_rewind/pg_rewind.c | 2 +- src/bin/pg_rewind/pg_rewind.h | 2 +- src/bin/pg_rewind/timeline.c | 2 +- src/bin/pg_upgrade/check.c | 2 +- src/bin/pg_upgrade/controldata.c | 2 +- src/bin/pg_upgrade/dump.c | 2 +- src/bin/pg_upgrade/exec.c | 2 +- src/bin/pg_upgrade/file.c | 2 +- src/bin/pg_upgrade/function.c | 2 +- src/bin/pg_upgrade/info.c | 2 +- src/bin/pg_upgrade/option.c | 2 
+-
 src/bin/pg_upgrade/parallel.c | 2 +-
 src/bin/pg_upgrade/pg_upgrade.c | 2 +-
 src/bin/pg_upgrade/pg_upgrade.h | 2 +-
 src/bin/pg_upgrade/relfilenode.c | 2 +-
 src/bin/pg_upgrade/server.c | 2 +-
 src/bin/pg_upgrade/tablespace.c | 2 +-
 src/bin/pg_upgrade/test.sh | 2 +-
 src/bin/pg_upgrade/util.c | 2 +-
 src/bin/pg_upgrade/version.c | 2 +-
 src/bin/pg_waldump/compat.c | 2 +-
 src/bin/pg_waldump/pg_waldump.c | 2 +-
 src/bin/pgbench/exprparse.y | 2 +-
 src/bin/pgbench/exprscan.l | 2 +-
 src/bin/pgbench/pgbench.c | 2 +-
 src/bin/pgbench/pgbench.h | 2 +-
 src/bin/pgevent/Makefile | 2 +-
 src/bin/psql/Makefile | 2 +-
 src/bin/psql/command.c | 2 +-
 src/bin/psql/command.h | 2 +-
 src/bin/psql/common.c | 2 +-
 src/bin/psql/common.h | 2 +-
 src/bin/psql/conditional.c | 2 +-
 src/bin/psql/conditional.h | 2 +-
 src/bin/psql/copy.c | 2 +-
 src/bin/psql/copy.h | 2 +-
 src/bin/psql/create_help.pl | 2 +-
 src/bin/psql/crosstabview.c | 2 +-
 src/bin/psql/crosstabview.h | 2 +-
 src/bin/psql/describe.c | 2 +-
 src/bin/psql/describe.h | 2 +-
 src/bin/psql/help.c | 4 ++--
 src/bin/psql/help.h | 2 +-
 src/bin/psql/input.c | 2 +-
 src/bin/psql/input.h | 2 +-
 src/bin/psql/large_obj.c | 2 +-
 src/bin/psql/large_obj.h | 2 +-
 src/bin/psql/mainloop.c | 2 +-
 src/bin/psql/mainloop.h | 2 +-
 src/bin/psql/prompt.c | 2 +-
 src/bin/psql/prompt.h | 2 +-
 src/bin/psql/psqlscanslash.h | 2 +-
 src/bin/psql/psqlscanslash.l | 2 +-
 src/bin/psql/settings.h | 2 +-
 src/bin/psql/startup.c | 2 +-
 src/bin/psql/stringutils.c | 2 +-
 src/bin/psql/stringutils.h | 2 +-
 src/bin/psql/tab-complete.c | 2 +-
 src/bin/psql/tab-complete.h | 2 +-
 src/bin/psql/variables.c | 2 +-
 src/bin/psql/variables.h | 2 +-
 src/bin/scripts/Makefile | 2 +-
 src/bin/scripts/clusterdb.c | 2 +-
 src/bin/scripts/common.c | 2 +-
 src/bin/scripts/common.h | 2 +-
 src/bin/scripts/createdb.c | 2 +-
 src/bin/scripts/createuser.c | 2 +-
 src/bin/scripts/dropdb.c | 2 +-
 src/bin/scripts/dropuser.c | 2 +-
 src/bin/scripts/pg_isready.c | 2 +-
 src/bin/scripts/reindexdb.c | 2 +-
 src/bin/scripts/vacuumdb.c | 2 +-
 src/common/base64.c | 2 +-
 src/common/config_info.c | 2 +-
 src/common/controldata_utils.c | 2 +-
 src/common/exec.c | 2 +-
 src/common/fe_memutils.c | 2 +-
 src/common/file_utils.c | 2 +-
 src/common/ip.c | 2 +-
 src/common/keywords.c | 2 +-
 src/common/md5.c | 2 +-
 src/common/pg_lzcompress.c | 2 +-
 src/common/pgfnames.c | 2 +-
 src/common/psprintf.c | 2 +-
 src/common/relpath.c | 2 +-
 src/common/restricted_token.c | 2 +-
 src/common/rmtree.c | 2 +-
 src/common/saslprep.c | 2 +-
 src/common/scram-common.c | 2 +-
 src/common/sha2.c | 2 +-
 src/common/sha2_openssl.c | 2 +-
 src/common/string.c | 2 +-
 src/common/unicode/generate-norm_test_table.pl | 4 ++--
 src/common/unicode/generate-unicode_norm_table.pl | 4 ++--
 src/common/unicode/norm_test.c | 2 +-
 src/common/unicode_norm.c | 2 +-
 src/common/username.c | 2 +-
 src/common/wait_error.c | 2 +-
 src/fe_utils/Makefile | 2 +-
 src/fe_utils/mbprint.c | 2 +-
 src/fe_utils/print.c | 2 +-
 src/fe_utils/psqlscan.l | 2 +-
 src/fe_utils/simple_list.c | 2 +-
 src/fe_utils/string_utils.c | 2 +-
 src/include/access/amapi.h | 2 +-
 src/include/access/amvalidate.h | 2 +-
 src/include/access/attnum.h | 2 +-
 src/include/access/brin.h | 2 +-
 src/include/access/brin_internal.h | 2 +-
 src/include/access/brin_page.h | 2 +-
 src/include/access/brin_pageops.h | 2 +-
 src/include/access/brin_revmap.h | 2 +-
 src/include/access/brin_tuple.h | 2 +-
 src/include/access/brin_xlog.h | 2 +-
 src/include/access/bufmask.h | 2 +-
 src/include/access/clog.h | 2 +-
 src/include/access/commit_ts.h | 2 +-
 src/include/access/genam.h | 2 +-
 src/include/access/generic_xlog.h | 2 +-
 src/include/access/gin.h | 2 +-
 src/include/access/gin_private.h | 2 +-
 src/include/access/ginblock.h | 2 +-
 src/include/access/ginxlog.h | 2 +-
 src/include/access/gist.h | 2 +-
 src/include/access/gist_private.h | 2 +-
 src/include/access/gistscan.h | 2 +-
 src/include/access/gistxlog.h | 2 +-
 src/include/access/hash.h | 2 +-
 src/include/access/hash_xlog.h | 2 +-
 src/include/access/heapam.h | 2 +-
 src/include/access/heapam_xlog.h | 2 +-
 src/include/access/hio.h | 2 +-
 src/include/access/htup.h | 2 +-
 src/include/access/htup_details.h | 2 +-
 src/include/access/itup.h | 2 +-
 src/include/access/multixact.h | 2 +-
 src/include/access/nbtree.h | 2 +-
 src/include/access/nbtxlog.h | 2 +-
 src/include/access/parallel.h | 2 +-
 src/include/access/printsimple.h | 2 +-
 src/include/access/printtup.h | 2 +-
 src/include/access/reloptions.h | 2 +-
 src/include/access/relscan.h | 2 +-
 src/include/access/rewriteheap.h | 2 +-
 src/include/access/rmgrlist.h | 2 +-
 src/include/access/sdir.h | 2 +-
 src/include/access/session.h | 2 +-
 src/include/access/skey.h | 2 +-
 src/include/access/slru.h | 2 +-
 src/include/access/spgist.h | 2 +-
 src/include/access/spgist_private.h | 2 +-
 src/include/access/spgxlog.h | 2 +-
 src/include/access/stratnum.h | 2 +-
 src/include/access/subtrans.h | 2 +-
 src/include/access/sysattr.h | 2 +-
 src/include/access/timeline.h | 2 +-
 src/include/access/transam.h | 2 +-
 src/include/access/tsmapi.h | 2 +-
 src/include/access/tupconvert.h | 2 +-
 src/include/access/tupdesc.h | 2 +-
 src/include/access/tupmacs.h | 2 +-
 src/include/access/tuptoaster.h | 2 +-
 src/include/access/twophase.h | 2 +-
 src/include/access/twophase_rmgr.h | 2 +-
 src/include/access/valid.h | 2 +-
 src/include/access/visibilitymap.h | 2 +-
 src/include/access/xact.h | 2 +-
 src/include/access/xlog.h | 2 +-
 src/include/access/xlog_internal.h | 2 +-
 src/include/access/xlogdefs.h | 2 +-
 src/include/access/xloginsert.h | 2 +-
 src/include/access/xlogreader.h | 2 +-
 src/include/access/xlogrecord.h | 2 +-
 src/include/access/xlogutils.h | 2 +-
 src/include/bootstrap/bootstrap.h | 2 +-
 src/include/c.h | 2 +-
 src/include/catalog/binary_upgrade.h | 2 +-
 src/include/catalog/catalog.h | 2 +-
 src/include/catalog/catversion.h | 2 +-
 src/include/catalog/dependency.h | 2 +-
 src/include/catalog/genbki.h | 2 +-
 src/include/catalog/heap.h | 2 +-
 src/include/catalog/index.h | 2 +-
 src/include/catalog/indexing.h | 2 +-
 src/include/catalog/namespace.h | 2 +-
 src/include/catalog/objectaccess.h | 2 +-
 src/include/catalog/objectaddress.h | 2 +-
 src/include/catalog/opfam_internal.h | 2 +-
 src/include/catalog/partition.h | 2 +-
 src/include/catalog/pg_aggregate.h | 2 +-
 src/include/catalog/pg_aggregate_fn.h | 2 +-
 src/include/catalog/pg_am.h | 2 +-
 src/include/catalog/pg_amop.h | 2 +-
 src/include/catalog/pg_amproc.h | 2 +-
 src/include/catalog/pg_attrdef.h | 2 +-
 src/include/catalog/pg_attribute.h | 2 +-
 src/include/catalog/pg_auth_members.h | 2 +-
 src/include/catalog/pg_authid.h | 2 +-
 src/include/catalog/pg_cast.h | 2 +-
 src/include/catalog/pg_class.h | 2 +-
 src/include/catalog/pg_collation.h | 2 +-
 src/include/catalog/pg_collation_fn.h | 2 +-
 src/include/catalog/pg_constraint.h | 2 +-
 src/include/catalog/pg_constraint_fn.h | 2 +-
 src/include/catalog/pg_control.h | 2 +-
 src/include/catalog/pg_conversion.h | 2 +-
 src/include/catalog/pg_conversion_fn.h | 2 +-
 src/include/catalog/pg_database.h | 2 +-
 src/include/catalog/pg_db_role_setting.h | 2 +-
 src/include/catalog/pg_default_acl.h | 2 +-
 src/include/catalog/pg_depend.h | 2 +-
 src/include/catalog/pg_description.h | 2 +-
 src/include/catalog/pg_enum.h | 2 +-
 src/include/catalog/pg_event_trigger.h | 2 +-
 src/include/catalog/pg_extension.h | 2 +-
 src/include/catalog/pg_foreign_data_wrapper.h | 2 +-
 src/include/catalog/pg_foreign_server.h | 2 +-
 src/include/catalog/pg_foreign_table.h | 2 +-
 src/include/catalog/pg_index.h | 2 +-
 src/include/catalog/pg_inherits.h | 2 +-
 src/include/catalog/pg_inherits_fn.h | 2 +-
 src/include/catalog/pg_init_privs.h | 2 +-
 src/include/catalog/pg_language.h | 2 +-
 src/include/catalog/pg_largeobject.h | 2 +-
 src/include/catalog/pg_largeobject_metadata.h | 2 +-
 src/include/catalog/pg_namespace.h | 2 +-
 src/include/catalog/pg_opclass.h | 2 +-
 src/include/catalog/pg_operator.h | 2 +-
 src/include/catalog/pg_operator_fn.h | 2 +-
 src/include/catalog/pg_opfamily.h | 2 +-
 src/include/catalog/pg_partitioned_table.h | 2 +-
 src/include/catalog/pg_pltemplate.h | 2 +-
 src/include/catalog/pg_policy.h | 2 +-
 src/include/catalog/pg_proc.h | 2 +-
 src/include/catalog/pg_proc_fn.h | 2 +-
 src/include/catalog/pg_publication.h | 2 +-
 src/include/catalog/pg_publication_rel.h | 2 +-
 src/include/catalog/pg_range.h | 2 +-
 src/include/catalog/pg_replication_origin.h | 2 +-
 src/include/catalog/pg_rewrite.h | 2 +-
 src/include/catalog/pg_seclabel.h | 2 +-
 src/include/catalog/pg_sequence.h | 2 +-
 src/include/catalog/pg_shdepend.h | 2 +-
 src/include/catalog/pg_shdescription.h | 2 +-
 src/include/catalog/pg_shseclabel.h | 2 +-
 src/include/catalog/pg_statistic.h | 2 +-
 src/include/catalog/pg_statistic_ext.h | 2 +-
 src/include/catalog/pg_subscription.h | 2 +-
 src/include/catalog/pg_subscription_rel.h | 2 +-
 src/include/catalog/pg_tablespace.h | 2 +-
 src/include/catalog/pg_transform.h | 2 +-
 src/include/catalog/pg_trigger.h | 2 +-
 src/include/catalog/pg_ts_config.h | 2 +-
 src/include/catalog/pg_ts_config_map.h | 2 +-
 src/include/catalog/pg_ts_dict.h | 2 +-
 src/include/catalog/pg_ts_parser.h | 2 +-
 src/include/catalog/pg_ts_template.h | 2 +-
 src/include/catalog/pg_type.h | 2 +-
 src/include/catalog/pg_type_fn.h | 2 +-
 src/include/catalog/pg_user_mapping.h | 2 +-
 src/include/catalog/storage.h | 2 +-
 src/include/catalog/storage_xlog.h | 2 +-
 src/include/catalog/toasting.h | 2 +-
 src/include/commands/alter.h | 2 +-
 src/include/commands/async.h | 2 +-
 src/include/commands/cluster.h | 2 +-
 src/include/commands/collationcmds.h | 2 +-
 src/include/commands/comment.h | 2 +-
 src/include/commands/conversioncmds.h | 2 +-
 src/include/commands/copy.h | 2 +-
 src/include/commands/createas.h | 2 +-
 src/include/commands/dbcommands.h | 2 +-
 src/include/commands/dbcommands_xlog.h | 2 +-
 src/include/commands/defrem.h | 2 +-
 src/include/commands/discard.h | 2 +-
 src/include/commands/event_trigger.h | 2 +-
 src/include/commands/explain.h | 2 +-
 src/include/commands/extension.h | 2 +-
 src/include/commands/lockcmds.h | 2 +-
 src/include/commands/matview.h | 2 +-
 src/include/commands/policy.h | 2 +-
 src/include/commands/portalcmds.h | 2 +-
 src/include/commands/prepare.h | 2 +-
 src/include/commands/progress.h | 2 +-
 src/include/commands/publicationcmds.h | 2 +-
 src/include/commands/schemacmds.h | 2 +-
 src/include/commands/seclabel.h | 2 +-
 src/include/commands/sequence.h | 2 +-
 src/include/commands/subscriptioncmds.h | 2 +-
 src/include/commands/tablecmds.h | 2 +-
 src/include/commands/tablespace.h | 2 +-
 src/include/commands/trigger.h | 2 +-
 src/include/commands/typecmds.h | 2 +-
 src/include/commands/vacuum.h | 2 +-
 src/include/commands/variable.h | 2 +-
 src/include/commands/view.h | 2 +-
 src/include/common/base64.h | 2 +-
 src/include/common/config_info.h | 2 +-
 src/include/common/controldata_utils.h | 2 +-
 src/include/common/fe_memutils.h | 2 +-
 src/include/common/file_utils.h | 2 +-
 src/include/common/int.h | 2 +-
 src/include/common/int128.h | 2 +-
 src/include/common/ip.h | 2 +-
 src/include/common/keywords.h | 2 +-
 src/include/common/md5.h | 2 +-
 src/include/common/relpath.h | 2 +-
 src/include/common/restricted_token.h | 2 +-
 src/include/common/saslprep.h | 2 +-
 src/include/common/scram-common.h | 2 +-
 src/include/common/sha2.h | 2 +-
 src/include/common/string.h | 2 +-
 src/include/common/unicode_norm.h | 2 +-
 src/include/common/unicode_norm_table.h | 2 +-
 src/include/common/username.h | 2 +-
 src/include/datatype/timestamp.h | 2 +-
 src/include/executor/execExpr.h | 2 +-
 src/include/executor/execParallel.h | 2 +-
 src/include/executor/execPartition.h | 2 +-
 src/include/executor/execdebug.h | 2 +-
 src/include/executor/execdesc.h | 2 +-
 src/include/executor/executor.h | 2 +-
 src/include/executor/functions.h | 2 +-
 src/include/executor/hashjoin.h | 2 +-
 src/include/executor/instrument.h | 2 +-
 src/include/executor/nodeAgg.h | 2 +-
 src/include/executor/nodeAppend.h | 2 +-
 src/include/executor/nodeBitmapAnd.h | 2 +-
 src/include/executor/nodeBitmapHeapscan.h | 2 +-
 src/include/executor/nodeBitmapIndexscan.h | 2 +-
 src/include/executor/nodeBitmapOr.h | 2 +-
 src/include/executor/nodeCtescan.h | 2 +-
 src/include/executor/nodeCustom.h | 2 +-
 src/include/executor/nodeForeignscan.h | 2 +-
 src/include/executor/nodeFunctionscan.h | 2 +-
 src/include/executor/nodeGather.h | 2 +-
 src/include/executor/nodeGatherMerge.h | 2 +-
 src/include/executor/nodeGroup.h | 2 +-
 src/include/executor/nodeHash.h | 2 +-
 src/include/executor/nodeHashjoin.h | 2 +-
 src/include/executor/nodeIndexonlyscan.h | 2 +-
 src/include/executor/nodeIndexscan.h | 2 +-
 src/include/executor/nodeLimit.h | 2 +-
 src/include/executor/nodeLockRows.h | 2 +-
 src/include/executor/nodeMaterial.h | 2 +-
 src/include/executor/nodeMergeAppend.h | 2 +-
 src/include/executor/nodeMergejoin.h | 2 +-
 src/include/executor/nodeModifyTable.h | 2 +-
 src/include/executor/nodeNamedtuplestorescan.h | 2 +-
 src/include/executor/nodeNestloop.h | 2 +-
 src/include/executor/nodeProjectSet.h | 2 +-
 src/include/executor/nodeRecursiveunion.h | 2 +-
 src/include/executor/nodeResult.h | 2 +-
 src/include/executor/nodeSamplescan.h | 2 +-
 src/include/executor/nodeSeqscan.h | 2 +-
 src/include/executor/nodeSetOp.h | 2 +-
 src/include/executor/nodeSort.h | 2 +-
 src/include/executor/nodeSubplan.h | 2 +-
 src/include/executor/nodeSubqueryscan.h | 2 +-
 src/include/executor/nodeTableFuncscan.h | 2 +-
 src/include/executor/nodeTidscan.h | 2 +-
 src/include/executor/nodeUnique.h | 2 +-
 src/include/executor/nodeValuesscan.h | 2 +-
 src/include/executor/nodeWindowAgg.h | 2 +-
 src/include/executor/nodeWorktablescan.h | 2 +-
 src/include/executor/spi.h | 2 +-
 src/include/executor/spi_priv.h | 2 +-
 src/include/executor/tablefunc.h | 2 +-
 src/include/executor/tqueue.h | 2 +-
 src/include/executor/tstoreReceiver.h | 2 +-
 src/include/executor/tuptable.h | 2 +-
 src/include/fe_utils/mbprint.h | 2 +-
 src/include/fe_utils/print.h | 2 +-
 src/include/fe_utils/psqlscan.h | 2 +-
 src/include/fe_utils/psqlscan_int.h | 2 +-
 src/include/fe_utils/simple_list.h | 2 +-
 src/include/fe_utils/string_utils.h | 2 +-
 src/include/fmgr.h | 2 +-
 src/include/foreign/fdwapi.h | 2 +-
 src/include/foreign/foreign.h | 2 +-
 src/include/funcapi.h | 2 +-
 src/include/getaddrinfo.h | 2 +-
 src/include/getopt_long.h | 2 +-
 src/include/lib/binaryheap.h | 2 +-
 src/include/lib/bipartite_match.h | 2 +-
 src/include/lib/dshash.h | 2 +-
 src/include/lib/hyperloglog.h | 2 +-
 src/include/lib/ilist.h | 2 +-
 src/include/lib/knapsack.h | 2 +-
 src/include/lib/pairingheap.h | 2 +-
 src/include/lib/rbtree.h | 2 +-
 src/include/lib/stringinfo.h | 2 +-
 src/include/libpq/auth.h | 2 +-
 src/include/libpq/be-fsstubs.h | 2 +-
 src/include/libpq/crypt.h | 2 +-
 src/include/libpq/ifaddr.h | 2 +-
 src/include/libpq/libpq-be.h | 2 +-
 src/include/libpq/libpq-fs.h | 2 +-
 src/include/libpq/libpq.h | 2 +-
 src/include/libpq/pqcomm.h | 2 +-
 src/include/libpq/pqformat.h | 2 +-
 src/include/libpq/pqmq.h | 2 +-
 src/include/libpq/pqsignal.h | 2 +-
 src/include/libpq/scram.h | 2 +-
 src/include/mb/pg_wchar.h | 2 +-
 src/include/miscadmin.h | 2 +-
 src/include/nodes/bitmapset.h | 2 +-
 src/include/nodes/execnodes.h | 2 +-
 src/include/nodes/extensible.h | 2 +-
 src/include/nodes/lockoptions.h | 2 +-
 src/include/nodes/makefuncs.h | 2 +-
 src/include/nodes/memnodes.h | 2 +-
 src/include/nodes/nodeFuncs.h | 2 +-
 src/include/nodes/nodes.h | 2 +-
 src/include/nodes/params.h | 2 +-
 src/include/nodes/parsenodes.h | 2 +-
 src/include/nodes/pg_list.h | 2 +-
 src/include/nodes/plannodes.h | 2 +-
 src/include/nodes/primnodes.h | 2 +-
 src/include/nodes/print.h | 2 +-
 src/include/nodes/readfuncs.h | 2 +-
 src/include/nodes/relation.h | 2 +-
 src/include/nodes/replnodes.h | 2 +-
 src/include/nodes/tidbitmap.h | 2 +-
 src/include/nodes/value.h | 2 +-
 src/include/optimizer/clauses.h | 2 +-
 src/include/optimizer/cost.h | 2 +-
 src/include/optimizer/geqo.h | 2 +-
 src/include/optimizer/geqo_copy.h | 2 +-
 src/include/optimizer/geqo_gene.h | 2 +-
 src/include/optimizer/geqo_misc.h | 2 +-
 src/include/optimizer/geqo_mutation.h | 2 +-
 src/include/optimizer/geqo_pool.h | 2 +-
 src/include/optimizer/geqo_random.h | 2 +-
 src/include/optimizer/geqo_recombination.h | 2 +-
 src/include/optimizer/geqo_selection.h | 2 +-
 src/include/optimizer/joininfo.h | 2 +-
 src/include/optimizer/orclauses.h | 2 +-
 src/include/optimizer/pathnode.h | 2 +-
 src/include/optimizer/paths.h | 2 +-
 src/include/optimizer/placeholder.h | 2 +-
 src/include/optimizer/plancat.h | 2 +-
 src/include/optimizer/planmain.h | 2 +-
 src/include/optimizer/planner.h | 2 +-
 src/include/optimizer/predtest.h | 2 +-
 src/include/optimizer/prep.h | 2 +-
 src/include/optimizer/restrictinfo.h | 2 +-
 src/include/optimizer/subselect.h | 2 +-
 src/include/optimizer/tlist.h | 2 +-
 src/include/optimizer/var.h | 2 +-
 src/include/parser/analyze.h | 2 +-
 src/include/parser/gramparse.h | 2 +-
 src/include/parser/kwlist.h | 2 +-
 src/include/parser/parse_agg.h | 2 +-
 src/include/parser/parse_clause.h | 2 +-
 src/include/parser/parse_coerce.h | 2 +-
 src/include/parser/parse_collate.h | 2 +-
 src/include/parser/parse_cte.h | 2 +-
 src/include/parser/parse_enr.h | 2 +-
 src/include/parser/parse_expr.h | 2 +-
 src/include/parser/parse_func.h | 2 +-
 src/include/parser/parse_node.h | 2 +-
 src/include/parser/parse_oper.h | 2 +-
 src/include/parser/parse_param.h | 2 +-
 src/include/parser/parse_relation.h | 2 +-
 src/include/parser/parse_target.h | 2 +-
 src/include/parser/parse_type.h | 2 +-
 src/include/parser/parse_utilcmd.h | 2 +-
 src/include/parser/parser.h | 2 +-
 src/include/parser/parsetree.h | 2 +-
 src/include/parser/scanner.h | 2 +-
 src/include/parser/scansup.h | 2 +-
 src/include/pg_config_manual.h | 2 +-
 src/include/pg_getopt.h | 2 +-
 src/include/pg_trace.h | 2 +-
 src/include/pgstat.h | 2 +-
 src/include/pgtar.h | 2 +-
 src/include/pgtime.h | 2 +-
 src/include/port.h | 2 +-
 src/include/port/atomics.h | 2 +-
 src/include/port/atomics/arch-arm.h | 2 +-
 src/include/port/atomics/arch-hppa.h | 2 +-
 src/include/port/atomics/arch-ia64.h | 2 +-
 src/include/port/atomics/arch-ppc.h | 2 +-
 src/include/port/atomics/arch-x86.h | 2 +-
 src/include/port/atomics/fallback.h | 2 +-
 src/include/port/atomics/generic-acc.h | 2 +-
 src/include/port/atomics/generic-gcc.h | 2 +-
 src/include/port/atomics/generic-msvc.h | 2 +-
 src/include/port/atomics/generic-sunpro.h | 2 +-
 src/include/port/atomics/generic-xlc.h | 2 +-
 src/include/port/atomics/generic.h | 2 +-
 src/include/port/pg_bswap.h | 2 +-
 src/include/port/pg_crc32c.h | 2 +-
 src/include/port/win32_port.h | 2 +-
 src/include/portability/instr_time.h | 2 +-
 src/include/portability/mem.h | 2 +-
 src/include/postgres.h | 2 +-
 src/include/postgres_fe.h | 2 +-
 src/include/postmaster/autovacuum.h | 2 +-
 src/include/postmaster/bgworker.h | 2 +-
 src/include/postmaster/bgworker_internals.h | 2 +-
 src/include/postmaster/bgwriter.h | 2 +-
 src/include/postmaster/fork_process.h | 2 +-
 src/include/postmaster/pgarch.h | 2 +-
 src/include/postmaster/postmaster.h | 2 +-
 src/include/postmaster/startup.h | 2 +-
 src/include/postmaster/syslogger.h | 2 +-
 src/include/postmaster/walwriter.h | 2 +-
 src/include/regex/regexport.h | 2 +-
 src/include/replication/basebackup.h | 2 +-
 src/include/replication/decode.h | 2 +-
 src/include/replication/logical.h | 2 +-
 src/include/replication/logicalfuncs.h | 2 +-
 src/include/replication/logicallauncher.h | 2 +-
 src/include/replication/logicalproto.h | 2 +-
 src/include/replication/logicalrelation.h | 2 +-
 src/include/replication/logicalworker.h | 2 +-
 src/include/replication/message.h | 2 +-
 src/include/replication/origin.h | 2 +-
 src/include/replication/output_plugin.h | 2 +-
 src/include/replication/pgoutput.h | 2 +-
 src/include/replication/reorderbuffer.h | 2 +-
 src/include/replication/slot.h | 2 +-
 src/include/replication/snapbuild.h | 2 +-
 src/include/replication/syncrep.h | 2 +-
 src/include/replication/walreceiver.h | 2 +-
 src/include/replication/walsender.h | 2 +-
 src/include/replication/walsender_private.h | 2 +-
 src/include/replication/worker_internal.h | 2 +-
 src/include/rewrite/prs2lock.h | 2 +-
 src/include/rewrite/rewriteDefine.h | 2 +-
 src/include/rewrite/rewriteHandler.h | 2 +-
 src/include/rewrite/rewriteManip.h | 2 +-
 src/include/rewrite/rewriteRemove.h | 2 +-
 src/include/rewrite/rewriteSupport.h | 2 +-
 src/include/rewrite/rowsecurity.h | 2 +-
 src/include/rusagestub.h | 2 +-
 src/include/snowball/header.h | 2 +-
 src/include/statistics/extended_stats_internal.h | 2 +-
 src/include/statistics/statistics.h | 2 +-
 src/include/storage/backendid.h | 2 +-
 src/include/storage/barrier.h | 2 +-
 src/include/storage/block.h | 2 +-
 src/include/storage/buf.h | 2 +-
 src/include/storage/buf_internals.h | 2 +-
 src/include/storage/buffile.h | 2 +-
 src/include/storage/bufmgr.h | 2 +-
 src/include/storage/bufpage.h | 2 +-
 src/include/storage/checksum.h | 2 +-
 src/include/storage/checksum_impl.h | 2 +-
 src/include/storage/condition_variable.h | 2 +-
 src/include/storage/copydir.h | 2 +-
 src/include/storage/dsm.h | 2 +-
 src/include/storage/dsm_impl.h | 2 +-
 src/include/storage/fd.h | 2 +-
 src/include/storage/freespace.h | 2 +-
 src/include/storage/fsm_internals.h | 2 +-
 src/include/storage/indexfsm.h | 2 +-
 src/include/storage/ipc.h | 2 +-
 src/include/storage/item.h | 2 +-
 src/include/storage/itemid.h | 2 +-
 src/include/storage/itemptr.h | 2 +-
 src/include/storage/large_object.h | 2 +-
 src/include/storage/latch.h | 2 +-
 src/include/storage/lmgr.h | 2 +-
 src/include/storage/lock.h | 2 +-
 src/include/storage/lockdefs.h | 2 +-
 src/include/storage/lwlock.h | 2 +-
 src/include/storage/off.h | 2 +-
 src/include/storage/pg_sema.h | 2 +-
 src/include/storage/pg_shmem.h | 2 +-
 src/include/storage/pmsignal.h | 2 +-
 src/include/storage/predicate.h | 2 +-
 src/include/storage/predicate_internals.h | 2 +-
 src/include/storage/proc.h | 2 +-
 src/include/storage/procarray.h | 2 +-
 src/include/storage/proclist.h | 2 +-
 src/include/storage/proclist_types.h | 2 +-
 src/include/storage/procsignal.h | 2 +-
 src/include/storage/reinit.h | 2 +-
 src/include/storage/relfilenode.h | 2 +-
 src/include/storage/s_lock.h | 2 +-
 src/include/storage/sharedfileset.h | 2 +-
 src/include/storage/shm_mq.h | 2 +-
 src/include/storage/shm_toc.h | 2 +-
 src/include/storage/shmem.h | 2 +-
 src/include/storage/sinval.h | 2 +-
 src/include/storage/sinvaladt.h | 2 +-
 src/include/storage/smgr.h | 2 +-
 src/include/storage/spin.h | 2 +-
 src/include/storage/standby.h | 2 +-
 src/include/storage/standbydefs.h | 2 +-
 src/include/tcop/deparse_utility.h | 2 +-
 src/include/tcop/dest.h | 2 +-
 src/include/tcop/fastpath.h | 2 +-
 src/include/tcop/pquery.h | 2 +-
 src/include/tcop/tcopprot.h | 2 +-
 src/include/tcop/utility.h | 2 +-
 src/include/tsearch/dicts/regis.h | 2 +-
 src/include/tsearch/dicts/spell.h | 2 +-
 src/include/tsearch/ts_cache.h | 2 +-
 src/include/tsearch/ts_locale.h | 2 +-
 src/include/tsearch/ts_public.h | 2 +-
 src/include/tsearch/ts_type.h | 2 +-
 src/include/tsearch/ts_utils.h | 2 +-
 src/include/utils/acl.h | 2 +-
 src/include/utils/aclchk_internal.h | 2 +-
 src/include/utils/array.h | 2 +-
 src/include/utils/arrayaccess.h | 2 +-
 src/include/utils/ascii.h | 2 +-
 src/include/utils/attoptcache.h | 2 +-
 src/include/utils/backend_random.h | 2 +-
 src/include/utils/builtins.h | 2 +-
 src/include/utils/bytea.h | 2 +-
 src/include/utils/catcache.h | 2 +-
 src/include/utils/combocid.h | 2 +-
 src/include/utils/date.h | 2 +-
 src/include/utils/datetime.h | 2 +-
 src/include/utils/datum.h | 2 +-
 src/include/utils/dsa.h | 2 +-
 src/include/utils/dynahash.h | 2 +-
 src/include/utils/dynamic_loader.h | 2 +-
 src/include/utils/elog.h | 2 +-
 src/include/utils/evtcache.h | 2 +-
 src/include/utils/expandeddatum.h | 2 +-
 src/include/utils/fmgrtab.h | 2 +-
 src/include/utils/formatting.h | 2 +-
 src/include/utils/freepage.h | 2 +-
 src/include/utils/geo_decls.h | 2 +-
 src/include/utils/guc.h | 2 +-
 src/include/utils/guc_tables.h | 2 +-
 src/include/utils/hashutils.h | 2 +-
 src/include/utils/help_config.h | 2 +-
 src/include/utils/hsearch.h | 2 +-
 src/include/utils/index_selfuncs.h | 2 +-
 src/include/utils/inet.h | 2 +-
 src/include/utils/int8.h | 2 +-
 src/include/utils/inval.h | 2 +-
 src/include/utils/json.h | 2 +-
 src/include/utils/jsonapi.h | 2 +-
 src/include/utils/jsonb.h | 2 +-
 src/include/utils/logtape.h | 2 +-
 src/include/utils/lsyscache.h | 2 +-
 src/include/utils/memdebug.h | 2 +-
 src/include/utils/memutils.h | 2 +-
 src/include/utils/nabstime.h | 2 +-
 src/include/utils/numeric.h | 2 +-
 src/include/utils/palloc.h | 2 +-
 src/include/utils/pg_crc.h | 2 +-
 src/include/utils/pg_locale.h | 2 +-
 src/include/utils/pg_lsn.h | 2 +-
 src/include/utils/pg_rusage.h | 2 +-
 src/include/utils/pidfile.h | 2 +-
 src/include/utils/plancache.h | 2 +-
 src/include/utils/portal.h | 2 +-
 src/include/utils/queryenvironment.h | 2 +-
 src/include/utils/rangetypes.h | 2 +-
 src/include/utils/regproc.h | 2 +-
 src/include/utils/rel.h | 2 +-
 src/include/utils/relcache.h | 2 +-
 src/include/utils/relfilenodemap.h | 2 +-
 src/include/utils/relmapper.h | 2 +-
 src/include/utils/relptr.h | 2 +-
 src/include/utils/reltrigger.h | 2 +-
 src/include/utils/resowner.h | 2 +-
 src/include/utils/resowner_private.h | 2 +-
 src/include/utils/rls.h | 2 +-
 src/include/utils/ruleutils.h | 2 +-
 src/include/utils/sampling.h | 2 +-
 src/include/utils/selfuncs.h | 2 +-
 src/include/utils/sharedtuplestore.h | 2 +-
 src/include/utils/snapmgr.h | 2 +-
 src/include/utils/snapshot.h | 2 +-
 src/include/utils/sortsupport.h | 2 +-
 src/include/utils/spccache.h | 2 +-
 src/include/utils/syscache.h | 2 +-
 src/include/utils/timeout.h | 2 +-
 src/include/utils/timestamp.h | 2 +-
 src/include/utils/tqual.h | 2 +-
 src/include/utils/tuplesort.h | 2 +-
 src/include/utils/tuplestore.h | 2 +-
 src/include/utils/typcache.h | 2 +-
 src/include/utils/tzparser.h | 2 +-
 src/include/utils/uuid.h | 2 +-
 src/include/utils/varbit.h | 2 +-
 src/include/utils/varlena.h | 2 +-
 src/include/utils/xml.h | 2 +-
 src/include/windowapi.h | 2 +-
 src/interfaces/ecpg/compatlib/Makefile | 2 +-
 src/interfaces/ecpg/ecpglib/Makefile | 2 +-
 src/interfaces/ecpg/ecpglib/pg_type.h | 2 +-
 src/interfaces/ecpg/pgtypeslib/Makefile | 2 +-
 src/interfaces/ecpg/preproc/Makefile | 2 +-
 src/interfaces/ecpg/preproc/check_rules.pl | 2 +-
 src/interfaces/ecpg/preproc/ecpg.c | 2 +-
 src/interfaces/ecpg/preproc/keywords.c | 2 +-
 src/interfaces/ecpg/preproc/parse.pl | 2 +-
 src/interfaces/ecpg/preproc/parser.c | 2 +-
 src/interfaces/ecpg/preproc/pgc.l | 2 +-
 src/interfaces/ecpg/test/pg_regress_ecpg.c | 2 +-
 src/interfaces/libpq/Makefile | 2 +-
 src/interfaces/libpq/fe-auth-scram.c | 2 +-
 src/interfaces/libpq/fe-auth.c | 2 +-
 src/interfaces/libpq/fe-auth.h | 2 +-
 src/interfaces/libpq/fe-connect.c | 2 +-
 src/interfaces/libpq/fe-exec.c | 2 +-
 src/interfaces/libpq/fe-lobj.c | 2 +-
 src/interfaces/libpq/fe-misc.c | 2 +-
 src/interfaces/libpq/fe-print.c | 2 +-
 src/interfaces/libpq/fe-protocol2.c | 2 +-
 src/interfaces/libpq/fe-protocol3.c | 2 +-
 src/interfaces/libpq/fe-secure-openssl.c | 2 +-
 src/interfaces/libpq/fe-secure.c | 2 +-
 src/interfaces/libpq/libpq-events.c | 2 +-
 src/interfaces/libpq/libpq-events.h | 2 +-
 src/interfaces/libpq/libpq-fe.h | 2 +-
 src/interfaces/libpq/libpq-int.h | 2 +-
 src/interfaces/libpq/libpq.rc.in | 2 +-
 src/interfaces/libpq/pqexpbuffer.c | 2 +-
 src/interfaces/libpq/pqexpbuffer.h | 2 +-
 src/interfaces/libpq/pthread-win32.c | 2 +-
 src/interfaces/libpq/test/uri-regress.c | 2 +-
 src/interfaces/libpq/win32.c | 2 +-
 src/pl/plperl/plperl.h | 2 +-
 src/pl/plpgsql/src/generate-plerrcodes.pl | 2 +-
 src/pl/plpgsql/src/pl_comp.c | 2 +-
 src/pl/plpgsql/src/pl_exec.c | 2 +-
 src/pl/plpgsql/src/pl_funcs.c | 2 +-
 src/pl/plpgsql/src/pl_gram.y | 2 +-
 src/pl/plpgsql/src/pl_handler.c | 2 +-
 src/pl/plpgsql/src/pl_scanner.c | 2 +-
 src/pl/plpgsql/src/plpgsql.h | 2 +-
 src/pl/plpython/generate-spiexceptions.pl | 2 +-
 src/pl/plpython/plpython.h | 2 +-
 src/pl/tcl/generate-pltclerrcodes.pl | 2 +-
 src/port/chklocale.c | 2 +-
 src/port/dirent.c | 2 +-
 src/port/dirmod.c | 2 +-
 src/port/fls.c | 2 +-
 src/port/fseeko.c | 2 +-
 src/port/getaddrinfo.c | 2 +-
 src/port/getpeereid.c | 2 +-
 src/port/getrusage.c | 2 +-
 src/port/isinf.c | 2 +-
 src/port/kill.c | 2 +-
 src/port/mkdtemp.c | 2 +-
 src/port/noblock.c | 2 +-
 src/port/open.c | 2 +-
 src/port/path.c | 2 +-
 src/port/pg_crc32c_choose.c | 2 +-
 src/port/pg_crc32c_sb8.c | 2 +-
 src/port/pg_crc32c_sse42.c | 2 +-
 src/port/pg_strong_random.c | 2 +-
 src/port/pgcheckdir.c | 2 +-
 src/port/pgsleep.c | 2 +-
 src/port/pgstrcasecmp.c | 2 +-
 src/port/pqsignal.c | 2 +-
 src/port/quotes.c | 2 +-
 src/port/random.c | 2 +-
 src/port/sprompt.c | 2 +-
 src/port/srandom.c | 2 +-
 src/port/strlcpy.c | 2 +-
 src/port/strnlen.c | 2 +-
 src/port/system.c | 2 +-
 src/port/thread.c | 2 +-
 src/port/unsetenv.c | 2 +-
 src/port/win32env.c | 2 +-
 src/port/win32error.c | 2 +-
 src/port/win32security.c | 2 +-
 src/port/win32setlocale.c | 2 +-
 src/port/win32ver.rc | 2 +-
 src/test/authentication/Makefile | 2 +-
 src/test/examples/testlo.c | 2 +-
 src/test/examples/testlo64.c | 2 +-
 src/test/isolation/isolation_main.c | 2 +-
 src/test/isolation/isolationtester.h | 2 +-
 src/test/isolation/specparse.y | 2 +-
 src/test/isolation/specscanner.l | 2 +-
 src/test/ldap/Makefile | 2 +-
 src/test/modules/dummy_seclabel/dummy_seclabel.c | 2 +-
 src/test/modules/test_ddl_deparse/test_ddl_deparse.c | 2 +-
 src/test/modules/test_parser/test_parser.c | 2 +-
 src/test/modules/test_rbtree/test_rbtree.c | 2 +-
 src/test/modules/test_rls_hooks/test_rls_hooks.c | 2 +-
 src/test/modules/test_rls_hooks/test_rls_hooks.h | 2 +-
 src/test/modules/test_shm_mq/setup.c | 2 +-
 src/test/modules/test_shm_mq/test.c | 2 +-
 src/test/modules/test_shm_mq/test_shm_mq.h | 2 +-
 src/test/modules/test_shm_mq/worker.c | 2 +-
 src/test/modules/worker_spi/worker_spi.c | 2 +-
 src/test/perl/Makefile | 2 +-
 src/test/recovery/Makefile | 2 +-
 src/test/regress/GNUmakefile | 2 +-
 src/test/regress/pg_regress.c | 2 +-
 src/test/regress/pg_regress.h | 2 +-
 src/test/regress/pg_regress_main.c | 2 +-
 src/test/regress/regress.c | 2 +-
 src/test/ssl/Makefile | 2 +-
 src/test/subscription/Makefile | 2 +-
 src/test/thread/Makefile | 2 +-
 src/test/thread/thread_test.c | 2 +-
 src/timezone/pgtz.c | 2 +-
 src/timezone/pgtz.h | 2 +-
 src/tools/check_bison_recursion.pl | 2 +-
 src/tools/copyright.pl | 2 +-
 src/tools/findoidjoins/Makefile | 2 +-
 src/tools/findoidjoins/findoidjoins.c | 2 +-
 src/tools/fix-old-flex-code.pl | 2 +-
 src/tools/ifaddrs/Makefile | 2 +-
 src/tools/testint128.c | 2 +-
 src/tools/version_stamp.pl | 2 +-
 src/tools/win32tzlist.pl | 2 +-
 src/tutorial/complex.source | 2 +-
 src/tutorial/syscat.source | 2 +-
 1638 files changed, 1648 insertions(+), 1648 deletions(-)

diff --git a/COPYRIGHT b/COPYRIGHT
index c320eccac0..33e6e4842a 100644
--- a/COPYRIGHT
+++ b/COPYRIGHT
@@ -1,7 +1,7 @@
 PostgreSQL Database Management System
 (formerly known as Postgres, then as Postgres95)

-Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group

 Portions Copyright (c) 1994, The Regents of the University of California

diff --git a/configure b/configure
index d9b7b8d7ec..82f332f545 100755
--- a/configure
+++ b/configure
@@ -11,7 +11,7 @@
 # This configure script is free software; the Free Software Foundation
 # gives unlimited permission to copy, distribute and modify it.
 #
-# Copyright (c) 1996-2017, PostgreSQL Global Development Group
+# Copyright (c) 1996-2018, PostgreSQL Global Development Group
 ## -------------------- ##
 ## M4sh Initialization. ##
 ## -------------------- ##
@@ -1635,7 +1635,7 @@
 Copyright (C) 2012 Free Software Foundation, Inc.
 This configure script is free software; the Free Software Foundation
 gives unlimited permission to copy, distribute and modify it.
-Copyright (c) 1996-2017, PostgreSQL Global Development Group
+Copyright (c) 1996-2018, PostgreSQL Global Development Group
 _ACEOF
   exit
 fi
diff --git a/configure.in b/configure.in
index 5245899109..4a20c9da96 100644
--- a/configure.in
+++ b/configure.in
@@ -23,7 +23,7 @@ m4_if(m4_defn([m4_PACKAGE_VERSION]), [2.69], [], [m4_fatal([Autoconf version 2.6
 Untested combinations of 'autoconf' and PostgreSQL versions are not
 recommended.  You can remove the check from 'configure.in' but it is then
 your responsibility whether the result works or not.])])
-AC_COPYRIGHT([Copyright (c) 1996-2017, PostgreSQL Global Development Group])
+AC_COPYRIGHT([Copyright (c) 1996-2018, PostgreSQL Global Development Group])
 AC_CONFIG_SRCDIR([src/backend/access/common/heaptuple.c])
 AC_CONFIG_AUX_DIR(config)
 AC_PREFIX_DEFAULT(/usr/local/pgsql)
diff --git a/contrib/adminpack/adminpack.c b/contrib/adminpack/adminpack.c
index e46a64a898..1785dee3a1 100644
--- a/contrib/adminpack/adminpack.c
+++ b/contrib/adminpack/adminpack.c
@@ -3,7 +3,7 @@
  * adminpack.c
  *
  *
- * Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * Author: Andreas Pflug
  *
diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c
index adbbc44468..da518daea3 100644
--- a/contrib/amcheck/verify_nbtree.c
+++ b/contrib/amcheck/verify_nbtree.c
@@ -9,7 +9,7 @@
  * verification).
  *
  *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/amcheck/verify_nbtree.c
diff --git a/contrib/auth_delay/auth_delay.c b/contrib/auth_delay/auth_delay.c
index cd12a86b99..ad047b365f 100644
--- a/contrib/auth_delay/auth_delay.c
+++ b/contrib/auth_delay/auth_delay.c
@@ -2,7 +2,7 @@
  *
  * auth_delay.c
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/auth_delay/auth_delay.c
diff --git a/contrib/auto_explain/auto_explain.c b/contrib/auto_explain/auto_explain.c
index edcb91542a..d146bf4bc9 100644
--- a/contrib/auto_explain/auto_explain.c
+++ b/contrib/auto_explain/auto_explain.c
@@ -3,7 +3,7 @@
  * auto_explain.c
  *
  *
- * Copyright (c) 2008-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2008-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/auto_explain/auto_explain.c
diff --git a/contrib/bloom/blcost.c b/contrib/bloom/blcost.c
index ba39f627fd..fa0f17a217 100644
--- a/contrib/bloom/blcost.c
+++ b/contrib/bloom/blcost.c
@@ -3,7 +3,7 @@
  * blcost.c
  *	  Cost estimate function for bloom indexes.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/bloom/blcost.c
diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c
index 1fcb281508..bfee244aa1 100644
--- a/contrib/bloom/blinsert.c
+++ b/contrib/bloom/blinsert.c
@@ -3,7 +3,7 @@
  * blinsert.c
  *	  Bloom index build and insert functions.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/bloom/blinsert.c
diff --git a/contrib/bloom/bloom.h b/contrib/bloom/bloom.h
index f3df1af781..3973ac75e8 100644
--- a/contrib/bloom/bloom.h
+++ b/contrib/bloom/bloom.h
@@ -3,7 +3,7 @@
  * bloom.h
  *	  Header for bloom index.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/bloom/bloom.h
diff --git a/contrib/bloom/blscan.c b/contrib/bloom/blscan.c
index b8fa2d0a71..0744d74de7 100644
--- a/contrib/bloom/blscan.c
+++ b/contrib/bloom/blscan.c
@@ -3,7 +3,7 @@
  * blscan.c
  *	  Bloom index scan functions.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/bloom/blscan.c
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67e0a..bbe7183207 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -3,7 +3,7 @@
  * blutils.c
  *	  Bloom index utilities.
  *
- * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1990-1993, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index b0e44330ff..7530a664ab 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -3,7 +3,7 @@
  * blvacuum.c
  *	  Bloom VACUUM functions.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/bloom/blvacuum.c
diff --git a/contrib/bloom/blvalidate.c b/contrib/bloom/blvalidate.c
index cb75d23199..7235f12307 100644
--- a/contrib/bloom/blvalidate.c
+++ b/contrib/bloom/blvalidate.c
@@ -3,7 +3,7 @@
  * blvalidate.c
  *	  Opclass validator for bloom.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/bloom/blvalidate.c
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index a7e22b378a..a6c897c319 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -9,7 +9,7 @@
  * Shridhar Daithankar
  *
  * contrib/dblink/dblink.c
- * Copyright (c) 2001-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2001-2018, PostgreSQL Global Development Group
  * ALL RIGHTS RESERVED;
  *
  * Permission to use, copy, modify, and distribute this software and its
diff --git a/contrib/dict_int/dict_int.c b/contrib/dict_int/dict_int.c
index 55427c4bc7..8b45532938 100644
--- a/contrib/dict_int/dict_int.c
+++ b/contrib/dict_int/dict_int.c
@@ -3,7 +3,7 @@
  * dict_int.c
  *	  Text search dictionary for integers
  *
- * Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2007-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/dict_int/dict_int.c
diff --git a/contrib/dict_xsyn/dict_xsyn.c b/contrib/dict_xsyn/dict_xsyn.c
index 977162951a..8a3abf7e3c 100644
--- a/contrib/dict_xsyn/dict_xsyn.c
+++ b/contrib/dict_xsyn/dict_xsyn.c
@@ -3,7 +3,7 @@
  * dict_xsyn.c
  *	  Extended synonym dictionary
  *
- * Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2007-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/dict_xsyn/dict_xsyn.c
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index 370cc365d6..cf0a3629bc 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/contrib/file_fdw/file_fdw.c
@@ -3,7 +3,7 @@
  * file_fdw.c
  *	  foreign-data wrapper for server-side flat files (or programs).
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/file_fdw/file_fdw.c
diff --git a/contrib/fuzzystrmatch/fuzzystrmatch.c b/contrib/fuzzystrmatch/fuzzystrmatch.c
index b108353c06..05774658dc 100644
--- a/contrib/fuzzystrmatch/fuzzystrmatch.c
+++ b/contrib/fuzzystrmatch/fuzzystrmatch.c
@@ -6,7 +6,7 @@
  * Joe Conway
  *
  * contrib/fuzzystrmatch/fuzzystrmatch.c
- * Copyright (c) 2001-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2001-2018, PostgreSQL Global Development Group
  * ALL RIGHTS RESERVED;
  *
  * metaphone()
diff --git a/contrib/intarray/_int_selfuncs.c b/contrib/intarray/_int_selfuncs.c
index acb87d10f0..4c3f60c1dd 100644
--- a/contrib/intarray/_int_selfuncs.c
+++ b/contrib/intarray/_int_selfuncs.c
@@ -3,7 +3,7 @@
  * _int_selfuncs.c
  *	  Functions for selectivity estimation of intarray operators
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/contrib/isn/isn.c b/contrib/isn/isn.c
index a33a86444b..897d83e0ca 100644
--- a/contrib/isn/isn.c
+++ b/contrib/isn/isn.c
@@ -4,7 +4,7 @@
  * PostgreSQL type definitions for ISNs (ISBN, ISMN, ISSN, EAN13, UPC)
  *
  * Author:	German Mendez Bravo (Kronuz)
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/isn/isn.c
diff --git a/contrib/isn/isn.h b/contrib/isn/isn.h
index e2c8a26234..29632d8518 100644
--- a/contrib/isn/isn.h
+++ b/contrib/isn/isn.h
@@ -4,7 +4,7 @@
  * PostgreSQL type definitions for ISNs (ISBN, ISMN, ISSN, EAN13, UPC)
  *
  * Author:	German Mendez Bravo (Kronuz)
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/isn/isn.h
diff --git a/contrib/pageinspect/brinfuncs.c b/contrib/pageinspect/brinfuncs.c
index 13da7616e7..f4f0dea860 100644
--- a/contrib/pageinspect/brinfuncs.c
+++ b/contrib/pageinspect/brinfuncs.c
@@ -2,7 +2,7 @@
 * brinfuncs.c
 *		Functions to investigate BRIN indexes
 *
- * Copyright (c) 2014-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2014-2018, PostgreSQL Global Development Group
 *
 * IDENTIFICATION
 *		contrib/pageinspect/brinfuncs.c
diff --git a/contrib/pageinspect/fsmfuncs.c b/contrib/pageinspect/fsmfuncs.c
index 615dab8b13..86e8075845 100644
--- a/contrib/pageinspect/fsmfuncs.c
+++ b/contrib/pageinspect/fsmfuncs.c
@@ -9,7 +9,7 @@
 * there's hardly any use case for using these without superuser-rights
 * anyway.
 *
- * Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2007-2018, PostgreSQL Global Development Group
 *
 * IDENTIFICATION
 *		contrib/pageinspect/fsmfuncs.c
diff --git a/contrib/pageinspect/ginfuncs.c b/contrib/pageinspect/ginfuncs.c
index f774495b6f..d42609c577 100644
--- a/contrib/pageinspect/ginfuncs.c
+++ b/contrib/pageinspect/ginfuncs.c
@@ -2,7 +2,7 @@
 * ginfuncs.c
 *		Functions to investigate the content of GIN indexes
 *
- * Copyright (c) 2014-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2014-2018, PostgreSQL Global Development Group
 *
 * IDENTIFICATION
 *		contrib/pageinspect/ginfuncs.c
diff --git a/contrib/pageinspect/hashfuncs.c b/contrib/pageinspect/hashfuncs.c
index dbe3b6ab04..3d0e3f9757 100644
--- a/contrib/pageinspect/hashfuncs.c
+++ b/contrib/pageinspect/hashfuncs.c
@@ -2,7 +2,7 @@
 * hashfuncs.c
 *		Functions to investigate the content of HASH indexes
 *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
 *
 * IDENTIFICATION
 *		contrib/pageinspect/hashfuncs.c
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index ca4d3f530f..088254453e 100644
--- a/contrib/pageinspect/heapfuncs.c
+++ b/contrib/pageinspect/heapfuncs.c
@@ -15,7 +15,7 @@
 * there's hardly any use case for using these without superuser-rights
 * anyway.
 *
- * Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2007-2018, PostgreSQL Global Development Group
 *
 * IDENTIFICATION
 *		contrib/pageinspect/heapfuncs.c
diff --git a/contrib/pageinspect/pageinspect.h b/contrib/pageinspect/pageinspect.h
index f49cf9e892..ab7d5d66cd 100644
--- a/contrib/pageinspect/pageinspect.h
+++ b/contrib/pageinspect/pageinspect.h
@@ -3,7 +3,7 @@
  * pageinspect.h
  *	  Common functions for pageinspect.
  *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/pageinspect/pageinspect.h
diff --git a/contrib/pageinspect/rawpage.c b/contrib/pageinspect/rawpage.c
index 25af22f453..3d4d4f6f93 100644
--- a/contrib/pageinspect/rawpage.c
+++ b/contrib/pageinspect/rawpage.c
@@ -5,7 +5,7 @@
 *
 * Access-method specific inspection functions are in separate files.
 *
- * Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2007-2018, PostgreSQL Global Development Group
 *
 * IDENTIFICATION
 *		contrib/pageinspect/rawpage.c
diff --git a/contrib/passwordcheck/passwordcheck.c b/contrib/passwordcheck/passwordcheck.c
index 64d43462f0..d3d9ff3676 100644
--- a/contrib/passwordcheck/passwordcheck.c
+++ b/contrib/passwordcheck/passwordcheck.c
@@ -3,7 +3,7 @@
  * passwordcheck.c
  *
  *
- * Copyright (c) 2009-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2009-2018, PostgreSQL Global Development Group
  *
  * Author: Laurenz Albe
  *
diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c
index 006c3153db..15e4a65ea6 100644
--- a/contrib/pg_prewarm/autoprewarm.c
+++ b/contrib/pg_prewarm/autoprewarm.c
@@ -16,7 +16,7 @@
  *		relevant database in turn.  The former keeps running after the
  *		initial prewarm is complete to update the dump file periodically.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/pg_prewarm/autoprewarm.c
diff --git a/contrib/pg_prewarm/pg_prewarm.c b/contrib/pg_prewarm/pg_prewarm.c
index fec62b1a54..4117bf6d2e 100644
--- a/contrib/pg_prewarm/pg_prewarm.c
+++ b/contrib/pg_prewarm/pg_prewarm.c
@@ -3,7 +3,7 @@
  * pg_prewarm.c
  *	  prewarming utilities
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/pg_prewarm/pg_prewarm.c
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 3de8333be2..928673498a 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -48,7 +48,7 @@
  * in the file to be read or written while holding only shared lock.
  *
  *
- * Copyright (c) 2008-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2008-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/pg_stat_statements/pg_stat_statements.c
diff --git a/contrib/pg_trgm/trgm_regexp.c b/contrib/pg_trgm/trgm_regexp.c
index 6593ac7ba5..547e7c094f 100644
--- a/contrib/pg_trgm/trgm_regexp.c
+++ b/contrib/pg_trgm/trgm_regexp.c
@@ -181,7 +181,7 @@
  * 7) Mark state 3 final because state 5 of source NFA is marked as final.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/contrib/pg_visibility/pg_visibility.c b/contrib/pg_visibility/pg_visibility.c
index 2cc9575d9f..944dea66fc 100644
--- a/contrib/pg_visibility/pg_visibility.c
+++ b/contrib/pg_visibility/pg_visibility.c
@@ -3,7 +3,7 @@
  * pg_visibility.c
  *	  display visibility map information and page-level visibility bits
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * contrib/pg_visibility/pg_visibility.c
  *-------------------------------------------------------------------------
diff --git a/contrib/pgstattuple/pgstatapprox.c b/contrib/pgstattuple/pgstatapprox.c
index 5bf06138a5..3cfbc08649 100644
--- a/contrib/pgstattuple/pgstatapprox.c
+++ b/contrib/pgstattuple/pgstatapprox.c
@@ -3,7 +3,7 @@
  * pgstatapprox.c
  *	  Bloat estimation functions
  *
- * Copyright (c) 2014-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2014-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/pgstattuple/pgstatapprox.c
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4fbf043860..00c926b983 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -3,7 +3,7 @@
  * connection.c
  *	  Connection management functions for postgres_fdw
  *
- * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/postgres_fdw/connection.c
diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c
index 0876589fe5..96f804a28d 100644
--- a/contrib/postgres_fdw/deparse.c
+++ b/contrib/postgres_fdw/deparse.c
@@ -24,7 +24,7 @@
  * with collations that match the remote table's columns, which we can
  * consider to be user error.
  *
- * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/postgres_fdw/deparse.c
diff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.c
index 67e1c59951..082f79ae04 100644
--- a/contrib/postgres_fdw/option.c
+++ b/contrib/postgres_fdw/option.c
@@ -3,7 +3,7 @@
  * option.c
  *	  FDW option handling for postgres_fdw
  *
- * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/postgres_fdw/option.c
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index 44db4497c4..7992ba5852 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -3,7 +3,7 @@
  * postgres_fdw.c
  *	  Foreign-data wrapper for remote PostgreSQL servers
  *
- * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/postgres_fdw/postgres_fdw.c
diff --git a/contrib/postgres_fdw/postgres_fdw.h b/contrib/postgres_fdw/postgres_fdw.h
index 788b003650..1ae809d2c6 100644
--- a/contrib/postgres_fdw/postgres_fdw.h
+++ b/contrib/postgres_fdw/postgres_fdw.h
@@ -3,7 +3,7 @@
  * postgres_fdw.h
  *	  Foreign-data wrapper for remote PostgreSQL servers
  *
- * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/postgres_fdw/postgres_fdw.h
diff --git a/contrib/postgres_fdw/shippable.c b/contrib/postgres_fdw/shippable.c
index 2ac0873caa..7f2ed0499c 100644
--- a/contrib/postgres_fdw/shippable.c
+++ b/contrib/postgres_fdw/shippable.c
@@ -13,7 +13,7 @@
  * functions or functions using nonportable collations.  Those considerations
  * need not be accounted for here.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/postgres_fdw/shippable.c
diff --git a/contrib/sepgsql/database.c b/contrib/sepgsql/database.c
index 8fc5a87e00..c641ec3565 100644
--- a/contrib/sepgsql/database.c
+++ b/contrib/sepgsql/database.c
@@ -4,7 +4,7 @@
  *
  * Routines corresponding to database objects
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/dml.c b/contrib/sepgsql/dml.c
index b643720e36..36cdb27a76 100644
--- a/contrib/sepgsql/dml.c
+++ b/contrib/sepgsql/dml.c
@@ -4,7 +4,7 @@
  *
  * Routines to handle DML permission checks
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/hooks.c b/contrib/sepgsql/hooks.c
index 5daa60c412..4249ed552c 100644
--- a/contrib/sepgsql/hooks.c
+++ b/contrib/sepgsql/hooks.c
@@ -4,7 +4,7 @@
  *
  * Entrypoints of the hooks in PostgreSQL, and dispatches the callbacks.
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/label.c b/contrib/sepgsql/label.c
index cbb9249be7..7554017923 100644
--- a/contrib/sepgsql/label.c
+++ b/contrib/sepgsql/label.c
@@ -4,7 +4,7 @@
  *
  * Routines to support SELinux labels (security context)
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/launcher b/contrib/sepgsql/launcher
index 0fc96ea0d4..45139f3750 100755
--- a/contrib/sepgsql/launcher
+++ b/contrib/sepgsql/launcher
@@ -2,7 +2,7 @@
 #
 # A wrapper script to launch psql command in regression test
 #
-# Copyright (c) 2010-2017, PostgreSQL Global Development Group
+# Copyright (c) 2010-2018, PostgreSQL Global Development Group
 #
 # -------------------------------------------------------------------------

diff --git a/contrib/sepgsql/proc.c b/contrib/sepgsql/proc.c
index 14faa5fac6..c6a817d7c5 100644
--- a/contrib/sepgsql/proc.c
+++ b/contrib/sepgsql/proc.c
@@ -4,7 +4,7 @@
  *
  * Routines corresponding to procedure objects
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/relation.c b/contrib/sepgsql/relation.c
index 228869a520..f0c22715aa 100644
--- a/contrib/sepgsql/relation.c
+++ b/contrib/sepgsql/relation.c
@@ -4,7 +4,7 @@
  *
  * Routines corresponding to relation/attribute objects
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/schema.c b/contrib/sepgsql/schema.c
index d418577b75..bc15a36a45 100644
--- a/contrib/sepgsql/schema.c
+++ b/contrib/sepgsql/schema.c
@@ -4,7 +4,7 @@
  *
  * Routines corresponding to schema objects
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/selinux.c b/contrib/sepgsql/selinux.c
index bf89e83dd6..47def00a46 100644
--- a/contrib/sepgsql/selinux.c
+++ b/contrib/sepgsql/selinux.c
@@ -5,7 +5,7 @@
  * Interactions between userspace and selinux in kernelspace,
  * using libselinux api.
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/sepgsql.h b/contrib/sepgsql/sepgsql.h
index d4bf0cd14a..99adfc522a 100644
--- a/contrib/sepgsql/sepgsql.h
+++ b/contrib/sepgsql/sepgsql.h
@@ -4,7 +4,7 @@
  *
  * Definitions corresponding to SE-PostgreSQL
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/sepgsql/uavc.c b/contrib/sepgsql/uavc.c
index f0915918db..ea276ee0cc 100644
--- a/contrib/sepgsql/uavc.c
+++ b/contrib/sepgsql/uavc.c
@@ -6,7 +6,7 @@
  * access control decisions recently used, and reduce number of kernel
  * invocations to avoid unnecessary performance hit.
  *
- * Copyright (c) 2011-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2011-2018, PostgreSQL Global Development Group
  *
  * -------------------------------------------------------------------------
  */
diff --git a/contrib/tablefunc/tablefunc.c b/contrib/tablefunc/tablefunc.c
index 7369c71351..59f90dc947 100644
--- a/contrib/tablefunc/tablefunc.c
+++ b/contrib/tablefunc/tablefunc.c
@@ -10,7 +10,7 @@
  * And contributors:
  * Nabil Sayegh
  *
- * Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * Permission to use, copy, modify, and distribute this software and its
  * documentation for any purpose, without fee, and without a written agreement
diff --git a/contrib/tablefunc/tablefunc.h b/contrib/tablefunc/tablefunc.h
index e88a5720fa..7d0773f82f 100644
--- a/contrib/tablefunc/tablefunc.h
+++ b/contrib/tablefunc/tablefunc.h
@@ -10,7 +10,7 @@
  * And contributors:
  * Nabil Sayegh
  *
- * Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * Permission to use, copy, modify, and distribute this software and its
  * documentation for any purpose, without fee, and without a written agreement
diff --git a/contrib/tcn/tcn.c b/contrib/tcn/tcn.c
index 88674901bb..41186fdd8f 100644
--- a/contrib/tcn/tcn.c
+++ b/contrib/tcn/tcn.c
@@ -3,7 +3,7 @@
  * tcn.c
  *	  triggered change notification support for PostgreSQL
  *
- * Portions Copyright (c) 2011-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2011-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index 135b3b7638..0f18afa852 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -3,7 +3,7 @@
  * test_decoding.c
  *	  example logical decoding output plugin
  *
- * Copyright (c) 2012-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2012-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/test_decoding/test_decoding.c
diff --git a/contrib/tsm_system_rows/tsm_system_rows.c b/contrib/tsm_system_rows/tsm_system_rows.c
index 544458ec91..83f841f0c2 100644
--- a/contrib/tsm_system_rows/tsm_system_rows.c
+++ b/contrib/tsm_system_rows/tsm_system_rows.c
@@ -17,7 +17,7 @@
  * won't visit blocks added after the first scan, but that is fine since
  * such blocks shouldn't contain any visible tuples anyway.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/contrib/tsm_system_time/tsm_system_time.c b/contrib/tsm_system_time/tsm_system_time.c
index af8d025414..f0c220aa4a 100644
--- a/contrib/tsm_system_time/tsm_system_time.c
+++ b/contrib/tsm_system_time/tsm_system_time.c
@@ -13,7 +13,7 @@
  * However, we do what we can to reduce surprising behavior by selecting
  * the sampling pattern just once per query, much as in tsm_system_rows.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/contrib/unaccent/unaccent.c b/contrib/unaccent/unaccent.c
index e68b098b78..82f9c7fcfe 100644
--- a/contrib/unaccent/unaccent.c
+++ b/contrib/unaccent/unaccent.c
@@ -3,7 +3,7 @@
  * unaccent.c
  *	  Text search unaccent dictionary
  *
- * Copyright (c) 2009-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2009-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  contrib/unaccent/unaccent.c
diff --git a/contrib/uuid-ossp/uuid-ossp.c b/contrib/uuid-ossp/uuid-ossp.c
index 151223a199..179305b954 100644
--- a/contrib/uuid-ossp/uuid-ossp.c
+++ b/contrib/uuid-ossp/uuid-ossp.c
@@ -2,7 +2,7 @@
  *
  * UUID generation functions using the BSD, E2FS or OSSP UUID library
  *
- * Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2007-2018, PostgreSQL Global Development Group
  *
  * Portions Copyright (c) 2009 Andrew Gierth
  *
diff --git a/contrib/vacuumlo/vacuumlo.c b/contrib/vacuumlo/vacuumlo.c
index a4d4553303..4074262b74 100644
--- a/contrib/vacuumlo/vacuumlo.c
+++ b/contrib/vacuumlo/vacuumlo.c
@@ -3,7 +3,7 @@
  * vacuumlo.c
  *	  This removes orphaned large objects from a database.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/doc/src/sgml/generate-errcodes-table.pl b/doc/src/sgml/generate-errcodes-table.pl
index e655703b5b..ebec43159e 100644
--- a/doc/src/sgml/generate-errcodes-table.pl
+++ b/doc/src/sgml/generate-errcodes-table.pl
@@ -1,7 +1,7 @@
 #!/usr/bin/perl
 #
 # Generate the errcodes-table.sgml file from errcodes.txt
-# Copyright (c) 2000-2017, PostgreSQL Global Development Group
+# Copyright (c) 2000-2018, PostgreSQL Global Development Group

 use warnings;
 use strict;
diff --git a/doc/src/sgml/legal.sgml b/doc/src/sgml/legal.sgml
index 67ef88b2ff..fd5cda30b7 100644
--- a/doc/src/sgml/legal.sgml
+++ b/doc/src/sgml/legal.sgml
@@ -1,9 +1,9 @@
-2017
+2018

- 1996-2017
+ 1996-2018
  The PostgreSQL Global Development Group

@@ -11,7 +11,7 @@
 Legal Notice

- PostgreSQL is Copyright © 1996-2017
+ PostgreSQL is Copyright © 1996-2018
  by the PostgreSQL Global Development Group.

diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml
index 086cb8dbe8..6b5aaebbbc 100644
--- a/doc/src/sgml/lobj.sgml
+++ b/doc/src/sgml/lobj.sgml
@@ -704,7 +704,7 @@ SELECT lo_export(image.raster, '/tmp/motd') FROM image
  * testlo.c
  *	  test using large objects with libpq
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/Makefile b/src/backend/Makefile
index aab676dbbd..4a28267339 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -2,7 +2,7 @@
 #
 # Makefile for the postgres backend
 #
-# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 # Portions Copyright (c) 1994, Regents of the University of California
 #
 # src/backend/Makefile
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 9fbb093b3c..f54968bfb5 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -4,7 +4,7 @@
  *
  * See src/backend/access/brin/README for details.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/access/brin/brin_inclusion.c b/src/backend/access/brin/brin_inclusion.c
index 449ce5ea4c..25428777f7 100644
--- a/src/backend/access/brin/brin_inclusion.c
+++ b/src/backend/access/brin/brin_inclusion.c
@@ -16,7 +16,7 @@
  * writing is the INET type, where IPv6 values cannot be merged with IPv4
  * values.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/access/brin/brin_minmax.c b/src/backend/access/brin/brin_minmax.c
index ce503972f8..0f6aa33a45 100644
--- a/src/backend/access/brin/brin_minmax.c
+++ b/src/backend/access/brin/brin_minmax.c
@@ -2,7 +2,7 @@
  * brin_minmax.c
  *		Implementation of Min/Max opclass for BRIN
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c
index 09db5c6f8f..60a7025ec5 100644
--- a/src/backend/access/brin/brin_pageops.c
+++ b/src/backend/access/brin/brin_pageops.c
@@ -2,7 +2,7 @@
  * brin_pageops.c
  *		Page-handling routines for BRIN indexes
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/access/brin/brin_revmap.c b/src/backend/access/brin/brin_revmap.c
index 03e53ce43e..f0dd72ac67 100644
--- a/src/backend/access/brin/brin_revmap.c
+++ b/src/backend/access/brin/brin_revmap.c
@@ -12,7 +12,7 @@
  * the metapage.  When the revmap needs to be expanded, all tuples on the
  * regular BRIN page at that block (if any) are moved out of the way.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/access/brin/brin_tuple.c b/src/backend/access/brin/brin_tuple.c
index 5c035fb203..00316b899c 100644
--- a/src/backend/access/brin/brin_tuple.c
+++ b/src/backend/access/brin/brin_tuple.c
@@ -23,7 +23,7 @@
  * Note the size of the null bitmask may not be the same as that of the
  * datum array.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/access/brin/brin_validate.c b/src/backend/access/brin/brin_validate.c
index b4acf2b6f3..35f6ccacce 100644
--- a/src/backend/access/brin/brin_validate.c
+++ b/src/backend/access/brin/brin_validate.c
@@ -3,7 +3,7 @@
  * brin_validate.c
  *	  Opclass validator for BRIN.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/access/brin/brin_xlog.c b/src/backend/access/brin/brin_xlog.c
index 645e516a52..b2871e78aa 100644
--- a/src/backend/access/brin/brin_xlog.c
+++ b/src/backend/access/brin/brin_xlog.c
@@ -2,7 +2,7 @@
  * brin_xlog.c
  *		XLog replay routines for BRIN indexes
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/access/common/bufmask.c b/src/backend/access/common/bufmask.c
index d880aef7ba..57021f6ca1 100644
--- a/src/backend/access/common/bufmask.c
+++ b/src/backend/access/common/bufmask.c
@@ -5,7 +5,7 @@
  * in a page which can be different when the WAL is generated
  * and when the WAL is applied.
  *
- * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * Contains common routines required for masking a page.
  *
diff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c
index a1a9d9905b..0a13251067 100644
--- a/src/backend/access/common/heaptuple.c
+++ b/src/backend/access/common/heaptuple.c
@@ -45,7 +45,7 @@
  * and we'd like to still refer to them via C struct offsets.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/access/common/indextuple.c b/src/backend/access/common/indextuple.c
index 138671410a..f7103e53bc 100644
--- a/src/backend/access/common/indextuple.c
+++ b/src/backend/access/common/indextuple.c
@@ -4,7 +4,7 @@
  *	   This file contains index tuple accessor and mutator routines,
  *	   as well as various tuple utilities.
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/common/printsimple.c b/src/backend/access/common/printsimple.c index f3e48b7e98..3c4d227712 100644 --- a/src/backend/access/common/printsimple.c +++ b/src/backend/access/common/printsimple.c @@ -8,7 +8,7 @@ * doesn't handle standalone backends or protocol versions other than * 3.0, because we don't need such handling for current applications. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/common/printtup.c b/src/backend/access/common/printtup.c index fc94e711b2..a1d4415704 100644 --- a/src/backend/access/common/printtup.c +++ b/src/backend/access/common/printtup.c @@ -5,7 +5,7 @@ * clients and standalone backends are supported here). * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c index aa9c0f1bb9..425bc5d06e 100644 --- a/src/backend/access/common/reloptions.c +++ b/src/backend/access/common/reloptions.c @@ -3,7 +3,7 @@ * reloptions.c * Core support for relation options (pg_class.reloptions) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/common/scankey.c b/src/backend/access/common/scankey.c index 13edca1f94..781516c56a 100644 --- a/src/backend/access/common/scankey.c +++ b/src/backend/access/common/scankey.c @@ -3,7 +3,7 @@ * scankey.c * scan key support code * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/common/session.c b/src/backend/access/common/session.c index 865999b063..617c3e1e50 100644 --- a/src/backend/access/common/session.c +++ b/src/backend/access/common/session.c @@ -12,7 +12,7 @@ * Currently this infrastructure is used to share: * - typemod registry for ephemeral row-types, i.e. BlessTupleDesc etc. * - * Portions Copyright (c) 2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2017-2018, PostgreSQL Global Development Group * * src/backend/access/common/session.c * diff --git a/src/backend/access/common/tupconvert.c b/src/backend/access/common/tupconvert.c index 3d1bc0635b..2d0d2f4b32 100644 --- a/src/backend/access/common/tupconvert.c +++ b/src/backend/access/common/tupconvert.c @@ -9,7 +9,7 @@ * executor's "junkfilter" routines, but these functions work on bare * HeapTuples rather than TupleTableSlots. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c index 9e37ca73a8..f07129ac53 100644 --- a/src/backend/access/common/tupdesc.c +++ b/src/backend/access/common/tupdesc.c @@ -3,7 +3,7 @@ * tupdesc.c * POSTGRES tuple descriptor support code * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/gin/ginarrayproc.c b/src/backend/access/gin/ginarrayproc.c index a5238c3af5..d0fa4adf87 100644 --- a/src/backend/access/gin/ginarrayproc.c +++ b/src/backend/access/gin/ginarrayproc.c @@ -4,7 +4,7 @@ * support functions for GIN's indexing of any array * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginbtree.c b/src/backend/access/gin/ginbtree.c index 1b920facc2..37070b3b72 100644 --- a/src/backend/access/gin/ginbtree.c +++ b/src/backend/access/gin/ginbtree.c @@ -4,7 +4,7 @@ * page utilities routines for the postgres inverted index access method. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginbulk.c b/src/backend/access/gin/ginbulk.c index c0857180b5..989c2a0c02 100644 --- a/src/backend/access/gin/ginbulk.c +++ b/src/backend/access/gin/ginbulk.c @@ -4,7 +4,7 @@ * routines for fast build of inverted index * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c index 9c6cba4825..f9daaba52e 100644 --- a/src/backend/access/gin/gindatapage.c +++ b/src/backend/access/gin/gindatapage.c @@ -4,7 +4,7 @@ * routines for handling GIN posting tree pages. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginentrypage.c b/src/backend/access/gin/ginentrypage.c index bf7b05107b..810769718f 100644 --- a/src/backend/access/gin/ginentrypage.c +++ b/src/backend/access/gin/ginentrypage.c @@ -4,7 +4,7 @@ * routines for handling GIN entry tree pages. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c index 95c8bd7b43..222a80bc4b 100644 --- a/src/backend/access/gin/ginfast.c +++ b/src/backend/access/gin/ginfast.c @@ -7,7 +7,7 @@ * transfer pending entries into the regular index structure. This * wins because bulk insertion is much more efficient than retail. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c index 1ecf97507d..6fe67f346d 100644 --- a/src/backend/access/gin/ginget.c +++ b/src/backend/access/gin/ginget.c @@ -4,7 +4,7 @@ * fetch tuples from a GIN scan. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/gininsert.c b/src/backend/access/gin/gininsert.c index 890b79c0ca..473cc3d6b3 100644 --- a/src/backend/access/gin/gininsert.c +++ b/src/backend/access/gin/gininsert.c @@ -4,7 +4,7 @@ * insert routines for the postgres inverted index access method. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginlogic.c b/src/backend/access/gin/ginlogic.c index 5b8ad9a25a..2c42d1aa91 100644 --- a/src/backend/access/gin/ginlogic.c +++ b/src/backend/access/gin/ginlogic.c @@ -24,7 +24,7 @@ * is used for.) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c index 8d2d31ac72..54c9caffe3 100644 --- a/src/backend/access/gin/ginpostinglist.c +++ b/src/backend/access/gin/ginpostinglist.c @@ -4,7 +4,7 @@ * routines for dealing with posting lists. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c index 77c0c577b5..8ade4311df 100644 --- a/src/backend/access/gin/ginscan.c +++ b/src/backend/access/gin/ginscan.c @@ -4,7 +4,7 @@ * routines to manage scans of inverted index relations * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c index 41d4b4fb6f..7bac7a1252 100644 --- a/src/backend/access/gin/ginutil.c +++ b/src/backend/access/gin/ginutil.c @@ -4,7 +4,7 @@ * Utility routines for the Postgres inverted index access method. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c index 394bc832a4..398532d80b 100644 --- a/src/backend/access/gin/ginvacuum.c +++ b/src/backend/access/gin/ginvacuum.c @@ -4,7 +4,7 @@ * delete & vacuum routines for the postgres GIN * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginvalidate.c b/src/backend/access/gin/ginvalidate.c index 4c8e563545..1035be4463 100644 --- a/src/backend/access/gin/ginvalidate.c +++ b/src/backend/access/gin/ginvalidate.c @@ -3,7 +3,7 @@ * ginvalidate.c * Opclass validator for GIN. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gin/ginxlog.c b/src/backend/access/gin/ginxlog.c index 1bf3f0a88a..a6e321d77e 100644 --- a/src/backend/access/gin/ginxlog.c +++ b/src/backend/access/gin/ginxlog.c @@ -4,7 +4,7 @@ * WAL replay logic for inverted index. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c index cf4b319b4e..aff969ead4 100644 --- a/src/backend/access/gist/gist.c +++ b/src/backend/access/gist/gist.c @@ -4,7 +4,7 @@ * interface routines for the postgres GiST index access method. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c index 2415f00e06..d22318a5f1 100644 --- a/src/backend/access/gist/gistbuild.c +++ b/src/backend/access/gist/gistbuild.c @@ -4,7 +4,7 @@ * build algorithm for GiST indexes implementation. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistbuildbuffers.c b/src/backend/access/gist/gistbuildbuffers.c index 88cee2028d..97033983e3 100644 --- a/src/backend/access/gist/gistbuildbuffers.c +++ b/src/backend/access/gist/gistbuildbuffers.c @@ -4,7 +4,7 @@ * node buffer management functions for GiST buffering build algorithm. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c index fb233a56d0..ca21cf7047 100644 --- a/src/backend/access/gist/gistget.c +++ b/src/backend/access/gist/gistget.c @@ -4,7 +4,7 @@ * fetch tuples from a GiST scan. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistproc.c b/src/backend/access/gist/gistproc.c index 78f3107555..97e6dc9910 100644 --- a/src/backend/access/gist/gistproc.c +++ b/src/backend/access/gist/gistproc.c @@ -7,7 +7,7 @@ * This gives R-tree behavior, with Guttman's poly-time split algorithm. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistscan.c b/src/backend/access/gist/gistscan.c index 058544e2ae..4d97ff1d5d 100644 --- a/src/backend/access/gist/gistscan.c +++ b/src/backend/access/gist/gistscan.c @@ -4,7 +4,7 @@ * routines to manage scans on GiST index relations * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistsplit.c b/src/backend/access/gist/gistsplit.c index 9efb16dd24..a7038cca67 100644 --- a/src/backend/access/gist/gistsplit.c +++ b/src/backend/access/gist/gistsplit.c @@ -15,7 +15,7 @@ * gistSplitByKey() is the entry point to this file. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistutil.c b/src/backend/access/gist/gistutil.c index d8d1c0acfc..55cccd247a 100644 --- a/src/backend/access/gist/gistutil.c +++ b/src/backend/access/gist/gistutil.c @@ -4,7 +4,7 @@ * utilities routines for the postgres GiST index access method. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c index 77d9d12f0b..95a0c54f63 100644 --- a/src/backend/access/gist/gistvacuum.c +++ b/src/backend/access/gist/gistvacuum.c @@ -4,7 +4,7 @@ * vacuuming routines for the postgres GiST index access method. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistvalidate.c b/src/backend/access/gist/gistvalidate.c index 42f91ac0c9..dd87dad386 100644 --- a/src/backend/access/gist/gistvalidate.c +++ b/src/backend/access/gist/gistvalidate.c @@ -3,7 +3,7 @@ * gistvalidate.c * Opclass validator for GiST. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/gist/gistxlog.c b/src/backend/access/gist/gistxlog.c index 7fd91ce640..1e09126978 100644 --- a/src/backend/access/gist/gistxlog.c +++ b/src/backend/access/gist/gistxlog.c @@ -4,7 +4,7 @@ * WAL replay logic for GiST. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c index 0fef60a858..718e2be1cd 100644 --- a/src/backend/access/hash/hash.c +++ b/src/backend/access/hash/hash.c @@ -3,7 +3,7 @@ * hash.c * Implementation of Margo Seltzer's Hashing package for postgres. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/hash/hash_xlog.c b/src/backend/access/hash/hash_xlog.c index f19f6fdfaf..b38208e61d 100644 --- a/src/backend/access/hash/hash_xlog.c +++ b/src/backend/access/hash/hash_xlog.c @@ -4,7 +4,7 @@ * WAL replay logic for hash index. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/hash/hashfunc.c b/src/backend/access/hash/hashfunc.c index 413e6b6ca3..1aa0b25d38 100644 --- a/src/backend/access/hash/hashfunc.c +++ b/src/backend/access/hash/hashfunc.c @@ -3,7 +3,7 @@ * hashfunc.c * Support functions for hash access method. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/hash/hashinsert.c b/src/backend/access/hash/hashinsert.c index dc08db97db..f668dcff0f 100644 --- a/src/backend/access/hash/hashinsert.c +++ b/src/backend/access/hash/hashinsert.c @@ -3,7 +3,7 @@ * hashinsert.c * Item insertion in hash tables for Postgres. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/hash/hashovfl.c b/src/backend/access/hash/hashovfl.c index c206e704d4..c9de1283dc 100644 --- a/src/backend/access/hash/hashovfl.c +++ b/src/backend/access/hash/hashovfl.c @@ -3,7 +3,7 @@ * hashovfl.c * Overflow page management code for the Postgres hash access method * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c index a50e35dfcb..e3c8721d29 100644 --- a/src/backend/access/hash/hashpage.c +++ b/src/backend/access/hash/hashpage.c @@ -3,7 +3,7 @@ * hashpage.c * Hash table page management code for the Postgres hash access method * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c index eeb04fe1c5..c692c5b32d 100644 --- a/src/backend/access/hash/hashsearch.c +++ b/src/backend/access/hash/hashsearch.c @@ -3,7 +3,7 @@ * hashsearch.c * search code for postgres hash tables * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/hash/hashsort.c b/src/backend/access/hash/hashsort.c index 41d615df8b..7d3790a473 100644 --- a/src/backend/access/hash/hashsort.c +++ b/src/backend/access/hash/hashsort.c @@ -14,7 +14,7 @@ * plenty of locality of access. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c index f2a1c5b6ab..4e485cc4b3 100644 --- a/src/backend/access/hash/hashutil.c +++ b/src/backend/access/hash/hashutil.c @@ -3,7 +3,7 @@ * hashutil.c * Utility code for Postgres hash implementation. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/hash/hashvalidate.c b/src/backend/access/hash/hashvalidate.c index 8b633c273a..513a3bbc4c 100644 --- a/src/backend/access/hash/hashvalidate.c +++ b/src/backend/access/hash/hashvalidate.c @@ -3,7 +3,7 @@ * hashvalidate.c * Opclass validator for hash. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index 54f1100ffd..dbc8f2d6c7 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -3,7 +3,7 @@ * heapam.c * heap access method code * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c index 13e3bdca50..0d7bc68339 100644 --- a/src/backend/access/heap/hio.c +++ b/src/backend/access/heap/hio.c @@ -3,7 +3,7 @@ * hio.c * POSTGRES heap access method input/output code. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c index 9f33e0ce07..f67d7d15df 100644 --- a/src/backend/access/heap/pruneheap.c +++ b/src/backend/access/heap/pruneheap.c @@ -3,7 +3,7 @@ * pruneheap.c * heap page pruning and HOT-chain management code * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c index 7d163c9137..7d466c2588 100644 --- a/src/backend/access/heap/rewriteheap.c +++ b/src/backend/access/heap/rewriteheap.c @@ -92,7 +92,7 @@ * heap's TOAST table will go through the normal bufmgr. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994-5, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/heap/syncscan.c b/src/backend/access/heap/syncscan.c index 20640cbbaf..054eb066e9 100644 --- a/src/backend/access/heap/syncscan.c +++ b/src/backend/access/heap/syncscan.c @@ -36,7 +36,7 @@ * ss_report_location - update current scan location * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c index c74945a52a..546f80f05c 100644 --- a/src/backend/access/heap/tuptoaster.c +++ b/src/backend/access/heap/tuptoaster.c @@ -4,7 +4,7 @@ * Support routines for external and compressed storage of * variable size attributes. * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/access/heap/visibilitymap.c b/src/backend/access/heap/visibilitymap.c index 4c2a13aeba..b251e69703 100644 --- a/src/backend/access/heap/visibilitymap.c +++ b/src/backend/access/heap/visibilitymap.c @@ -3,7 +3,7 @@ * visibilitymap.c * bitmap for tracking visibility of heap tuples * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/index/amapi.c b/src/backend/access/index/amapi.c index 7b597a072f..f395cb1ab4 100644 --- a/src/backend/access/index/amapi.c +++ b/src/backend/access/index/amapi.c @@ -3,7 +3,7 @@ * amapi.c * Support routines for API for Postgres index access methods. * - * Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Copyright (c) 2015-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/access/index/amvalidate.c b/src/backend/access/index/amvalidate.c index 728c48179f..24f9927f82 100644 --- a/src/backend/access/index/amvalidate.c +++ b/src/backend/access/index/amvalidate.c @@ -3,7 +3,7 @@ * amvalidate.c * Support routines for index access methods' amvalidate functions. 
* - * Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Copyright (c) 2016-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c index 05d7da001a..214825114e 100644 --- a/src/backend/access/index/genam.c +++ b/src/backend/access/index/genam.c @@ -3,7 +3,7 @@ * genam.c * general index access method routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c index edf4172eb2..1b61cd9515 100644 --- a/src/backend/access/index/indexam.c +++ b/src/backend/access/index/indexam.c @@ -3,7 +3,7 @@ * indexam.c * general index access method routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/nbtree/nbtcompare.c b/src/backend/access/nbtree/nbtcompare.c index 4b131efb87..b1855e8aa8 100644 --- a/src/backend/access/nbtree/nbtcompare.c +++ b/src/backend/access/nbtree/nbtcompare.c @@ -3,7 +3,7 @@ * nbtcompare.c * Comparison functions for btree access method. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c index 310589da4e..51059c0c7d 100644 --- a/src/backend/access/nbtree/nbtinsert.c +++ b/src/backend/access/nbtree/nbtinsert.c @@ -3,7 +3,7 @@ * nbtinsert.c * Item insertion in Lehman and Yao btrees for Postgres. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c index c77434904e..92afe2de38 100644 --- a/src/backend/access/nbtree/nbtpage.c +++ b/src/backend/access/nbtree/nbtpage.c @@ -4,7 +4,7 @@ * BTree-specific page management code for the Postgres btree access * method. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c index 399e6a1ae5..a344c4490e 100644 --- a/src/backend/access/nbtree/nbtree.c +++ b/src/backend/access/nbtree/nbtree.c @@ -8,7 +8,7 @@ * This file contains only the public interface routines. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c index 12d3f081b6..847434fec6 100644 --- a/src/backend/access/nbtree/nbtsearch.c +++ b/src/backend/access/nbtree/nbtsearch.c @@ -4,7 +4,7 @@ * Search code for postgres btrees. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c index bf6c03c7b2..f6159db1cd 100644 --- a/src/backend/access/nbtree/nbtsort.c +++ b/src/backend/access/nbtree/nbtsort.c @@ -55,7 +55,7 @@ * This code isn't concerned about the FSM at all. The caller is responsible * for initializing that. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c index 9b53aa3320..c62e4ef782 100644 --- a/src/backend/access/nbtree/nbtutils.c +++ b/src/backend/access/nbtree/nbtutils.c @@ -3,7 +3,7 @@ * nbtutils.c * Utility code for Postgres btree implementation. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/nbtree/nbtvalidate.c b/src/backend/access/nbtree/nbtvalidate.c index 5aae53ac68..8f4ccc87c0 100644 --- a/src/backend/access/nbtree/nbtvalidate.c +++ b/src/backend/access/nbtree/nbtvalidate.c @@ -3,7 +3,7 @@ * nbtvalidate.c * Opclass validator for btree. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c index 7250b4f0b8..bed1dd2a09 100644 --- a/src/backend/access/nbtree/nbtxlog.c +++ b/src/backend/access/nbtree/nbtxlog.c @@ -4,7 +4,7 @@ * WAL replay logic for btrees. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/rmgrdesc/brindesc.c b/src/backend/access/rmgrdesc/brindesc.c index 8eb5275a8b..d464234254 100644 --- a/src/backend/access/rmgrdesc/brindesc.c +++ b/src/backend/access/rmgrdesc/brindesc.c @@ -3,7 +3,7 @@ * brindesc.c * rmgr descriptor routines for BRIN indexes * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/clogdesc.c b/src/backend/access/rmgrdesc/clogdesc.c index 9181154ffd..4c00d4d1f8 100644 --- a/src/backend/access/rmgrdesc/clogdesc.c +++ b/src/backend/access/rmgrdesc/clogdesc.c @@ -3,7 +3,7 @@ * clogdesc.c * rmgr descriptor routines for access/transam/clog.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/committsdesc.c b/src/backend/access/rmgrdesc/committsdesc.c index 3e670bd543..b6e210398d 100644 --- a/src/backend/access/rmgrdesc/committsdesc.c +++ b/src/backend/access/rmgrdesc/committsdesc.c @@ -3,7 +3,7 @@ * committsdesc.c * rmgr descriptor routines for access/transam/commit_ts.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/dbasedesc.c b/src/backend/access/rmgrdesc/dbasedesc.c index 768242cfd5..39e26d7ed4 100644 --- a/src/backend/access/rmgrdesc/dbasedesc.c +++ b/src/backend/access/rmgrdesc/dbasedesc.c @@ -3,7 +3,7 @@ * dbasedesc.c * rmgr descriptor routines for commands/dbcommands.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/genericdesc.c b/src/backend/access/rmgrdesc/genericdesc.c index c4705428f1..4e9bba804d 100644 --- a/src/backend/access/rmgrdesc/genericdesc.c +++ b/src/backend/access/rmgrdesc/genericdesc.c @@ -4,7 +4,7 @@ * rmgr descriptor routines for access/transam/generic_xlog.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/rmgrdesc/genericdesc.c diff --git a/src/backend/access/rmgrdesc/gindesc.c b/src/backend/access/rmgrdesc/gindesc.c index 02c887496e..3456187e3d 100644 --- a/src/backend/access/rmgrdesc/gindesc.c +++ b/src/backend/access/rmgrdesc/gindesc.c @@ -3,7 +3,7 @@ * gindesc.c * rmgr descriptor routines for access/transam/gin/ginxlog.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/gistdesc.c b/src/backend/access/rmgrdesc/gistdesc.c index dc0506913c..e5e925e0c5 100644 --- a/src/backend/access/rmgrdesc/gistdesc.c +++ b/src/backend/access/rmgrdesc/gistdesc.c @@ -3,7 +3,7 @@ * gistdesc.c * rmgr descriptor routines for access/gist/gistxlog.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/hashdesc.c b/src/backend/access/rmgrdesc/hashdesc.c index 3e9236122b..3c53c84f1a 100644 --- a/src/backend/access/rmgrdesc/hashdesc.c +++ b/src/backend/access/rmgrdesc/hashdesc.c @@ -3,7 +3,7 @@ * hashdesc.c * rmgr descriptor routines for access/hash/hash.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c index 44d2d6333f..b00c071cb6 100644 --- a/src/backend/access/rmgrdesc/heapdesc.c +++ b/src/backend/access/rmgrdesc/heapdesc.c @@ -3,7 +3,7 @@ * heapdesc.c * rmgr descriptor routines for access/heap/heapam.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/logicalmsgdesc.c b/src/backend/access/rmgrdesc/logicalmsgdesc.c index 0b971c2aee..5b26da1b86 100644 --- a/src/backend/access/rmgrdesc/logicalmsgdesc.c +++ b/src/backend/access/rmgrdesc/logicalmsgdesc.c @@ -3,7 +3,7 @@ * logicalmsgdesc.c * rmgr descriptor routines for replication/logical/message.c * - * Portions Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2015-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/access/rmgrdesc/mxactdesc.c b/src/backend/access/rmgrdesc/mxactdesc.c index 9c17447744..bd13837bd4 100644 --- a/src/backend/access/rmgrdesc/mxactdesc.c +++ b/src/backend/access/rmgrdesc/mxactdesc.c @@ -3,7 +3,7 @@ * mxactdesc.c * rmgr descriptor routines for access/transam/multixact.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c index a3e1331fe2..c8caf56368 100644 --- a/src/backend/access/rmgrdesc/nbtdesc.c +++ b/src/backend/access/rmgrdesc/nbtdesc.c @@ -3,7 +3,7 @@ * nbtdesc.c * rmgr descriptor routines for access/nbtree/nbtxlog.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/relmapdesc.c b/src/backend/access/rmgrdesc/relmapdesc.c index 4cbdf37c70..5dbec9d94c 100644 --- a/src/backend/access/rmgrdesc/relmapdesc.c +++ b/src/backend/access/rmgrdesc/relmapdesc.c @@ -3,7 +3,7 @@ * relmapdesc.c * rmgr descriptor routines for utils/cache/relmapper.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/replorigindesc.c b/src/backend/access/rmgrdesc/replorigindesc.c index c43f850f8e..2719bf4a28 100644 --- a/src/backend/access/rmgrdesc/replorigindesc.c +++ b/src/backend/access/rmgrdesc/replorigindesc.c @@ -3,7 +3,7 @@ * replorigindesc.c * rmgr descriptor routines for replication/logical/origin.c * - * Portions Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2015-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/access/rmgrdesc/seqdesc.c b/src/backend/access/rmgrdesc/seqdesc.c index 2209f7284e..5c11eb00f0 100644 --- a/src/backend/access/rmgrdesc/seqdesc.c +++ b/src/backend/access/rmgrdesc/seqdesc.c @@ -3,7 +3,7 @@ * seqdesc.c * rmgr descriptor routines for commands/sequence.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/smgrdesc.c b/src/backend/access/rmgrdesc/smgrdesc.c index b8174373dd..517de60084 100644 --- a/src/backend/access/rmgrdesc/smgrdesc.c +++ b/src/backend/access/rmgrdesc/smgrdesc.c @@ -3,7 +3,7 @@ * smgrdesc.c * rmgr descriptor routines for catalog/storage.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/spgdesc.c b/src/backend/access/rmgrdesc/spgdesc.c index 41ed84b168..92b1392974 100644 --- a/src/backend/access/rmgrdesc/spgdesc.c +++ b/src/backend/access/rmgrdesc/spgdesc.c @@ -3,7 +3,7 @@ * spgdesc.c * rmgr descriptor routines for access/spgist/spgxlog.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/standbydesc.c b/src/backend/access/rmgrdesc/standbydesc.c index 278546a728..76825a8d9c 100644 --- a/src/backend/access/rmgrdesc/standbydesc.c +++ b/src/backend/access/rmgrdesc/standbydesc.c @@ -3,7 +3,7 @@ * standbydesc.c * rmgr descriptor routines for storage/ipc/standby.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/tblspcdesc.c b/src/backend/access/rmgrdesc/tblspcdesc.c index 47c42328f3..d97762687b 100644 --- a/src/backend/access/rmgrdesc/tblspcdesc.c +++ b/src/backend/access/rmgrdesc/tblspcdesc.c @@ -3,7 +3,7 @@ * tblspcdesc.c * rmgr descriptor routines for commands/tablespace.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/xactdesc.c b/src/backend/access/rmgrdesc/xactdesc.c index 3aafa79e52..e5eef9ea43 100644 --- a/src/backend/access/rmgrdesc/xactdesc.c +++ b/src/backend/access/rmgrdesc/xactdesc.c @@ -3,7 +3,7 @@ * xactdesc.c * rmgr descriptor routines for access/transam/xact.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/rmgrdesc/xlogdesc.c b/src/backend/access/rmgrdesc/xlogdesc.c index f72f076017..00741c7b09 100644 --- a/src/backend/access/rmgrdesc/xlogdesc.c +++ b/src/backend/access/rmgrdesc/xlogdesc.c @@ -3,7 +3,7 @@ * xlogdesc.c * rmgr descriptor routines for access/transam/xlog.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/spgist/spgdoinsert.c b/src/backend/access/spgist/spgdoinsert.c index a8cb8c7bdc..7bf26f8bae 100644 --- a/src/backend/access/spgist/spgdoinsert.c +++ b/src/backend/access/spgist/spgdoinsert.c @@ -4,7 +4,7 @@ * implementation of insert algorithm * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spginsert.c b/src/backend/access/spgist/spginsert.c index 80b82e1602..d2aec6df3e 100644 --- a/src/backend/access/spgist/spginsert.c +++ b/src/backend/access/spgist/spginsert.c @@ -5,7 +5,7 @@ * * All the actual insertion logic is in spgdoinsert.c. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spgkdtreeproc.c b/src/backend/access/spgist/spgkdtreeproc.c index 9a2649bf2a..556f3a4e07 100644 --- a/src/backend/access/spgist/spgkdtreeproc.c +++ b/src/backend/access/spgist/spgkdtreeproc.c @@ -4,7 +4,7 @@ * implementation of k-d tree over points for SP-GiST * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spgquadtreeproc.c b/src/backend/access/spgist/spgquadtreeproc.c index 773774555f..8700ff3573 100644 --- a/src/backend/access/spgist/spgquadtreeproc.c +++ b/src/backend/access/spgist/spgquadtreeproc.c @@ -4,7 +4,7 @@ * implementation of quad tree over points for SP-GiST * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c index c64a174143..854032d0cc 100644 --- a/src/backend/access/spgist/spgscan.c +++ b/src/backend/access/spgist/spgscan.c @@ -4,7 +4,7 @@ * routines for scanning SP-GiST indexes * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spgtextproc.c b/src/backend/access/spgist/spgtextproc.c index 53f298b6c2..f156b2166e 100644 --- a/src/backend/access/spgist/spgtextproc.c +++ b/src/backend/access/spgist/spgtextproc.c @@ -29,7 +29,7 @@ * No new entries ever get pushed into a -2-labeled child, either. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c index e571f0cce0..c4278b0160 100644 --- a/src/backend/access/spgist/spgutils.c +++ b/src/backend/access/spgist/spgutils.c @@ -4,7 +4,7 @@ * various support functions for SP-GiST * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c index d7d5e90ef3..72839cb8f9 100644 --- a/src/backend/access/spgist/spgvacuum.c +++ b/src/backend/access/spgist/spgvacuum.c @@ -4,7 +4,7 @@ * vacuum for SP-GiST * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spgvalidate.c b/src/backend/access/spgist/spgvalidate.c index 440b3ce917..8bbed7ff32 100644 --- a/src/backend/access/spgist/spgvalidate.c +++ b/src/backend/access/spgist/spgvalidate.c @@ -3,7 +3,7 @@ * spgvalidate.c * Opclass validator for SP-GiST. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/spgist/spgxlog.c b/src/backend/access/spgist/spgxlog.c index b2da415169..9e2bd3f811 100644 --- a/src/backend/access/spgist/spgxlog.c +++ b/src/backend/access/spgist/spgxlog.c @@ -4,7 +4,7 @@ * WAL replay logic for SP-GiST * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/tablesample/bernoulli.c b/src/backend/access/tablesample/bernoulli.c index 5f6d478159..1f2a933935 100644 --- a/src/backend/access/tablesample/bernoulli.c +++ b/src/backend/access/tablesample/bernoulli.c @@ -13,7 +13,7 @@ * cutoff value computed from the selection probability by BeginSampleScan. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/tablesample/system.c b/src/backend/access/tablesample/system.c index e270cbc4a0..f888e04f40 100644 --- a/src/backend/access/tablesample/system.c +++ b/src/backend/access/tablesample/system.c @@ -13,7 +13,7 @@ * cutoff value computed from the selection probability by BeginSampleScan.
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/tablesample/tablesample.c b/src/backend/access/tablesample/tablesample.c index 10d2bc91b3..6f62581e87 100644 --- a/src/backend/access/tablesample/tablesample.c +++ b/src/backend/access/tablesample/tablesample.c @@ -3,7 +3,7 @@ * tablesample.c * Support functions for TABLESAMPLE feature * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c index bbf9ce1a3a..8b7ff5b0c2 100644 --- a/src/backend/access/transam/clog.c +++ b/src/backend/access/transam/clog.c @@ -23,7 +23,7 @@ * for aborts (whether sync or async), since the post-crash assumption would * be that such transactions failed anyway. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/clog.c diff --git a/src/backend/access/transam/commit_ts.c b/src/backend/access/transam/commit_ts.c index 7b7bf2b2bf..04a15e4e29 100644 --- a/src/backend/access/transam/commit_ts.c +++ b/src/backend/access/transam/commit_ts.c @@ -15,7 +15,7 @@ * re-perform the status update on redo; so we need make no additional XLOG * entry here. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/commit_ts.c diff --git a/src/backend/access/transam/generic_xlog.c b/src/backend/access/transam/generic_xlog.c index 3adbf7b949..ce023548ae 100644 --- a/src/backend/access/transam/generic_xlog.c +++ b/src/backend/access/transam/generic_xlog.c @@ -4,7 +4,7 @@ * Implementation of generic xlog records. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/generic_xlog.c diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c index ba01e94328..6d6f2e3016 100644 --- a/src/backend/access/transam/multixact.c +++ b/src/backend/access/transam/multixact.c @@ -59,7 +59,7 @@ * counter does not fall within the wraparound horizon considering the global * minimum value. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/multixact.c diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index d3431a7c30..f720896e50 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -3,7 +3,7 @@ * parallel.c * Infrastructure for launching parallel workers * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/transam/slru.c b/src/backend/access/transam/slru.c index 94b6e6612a..87942b4cca 100644 --- a/src/backend/access/transam/slru.c +++ b/src/backend/access/transam/slru.c @@ -38,7 +38,7 @@ * by re-setting the page's page_dirty flag. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/slru.c diff --git a/src/backend/access/transam/subtrans.c b/src/backend/access/transam/subtrans.c index f640661130..4faa21f5ae 100644 --- a/src/backend/access/transam/subtrans.c +++ b/src/backend/access/transam/subtrans.c @@ -19,7 +19,7 @@ * data across crashes. During database startup, we simply force the * currently-active page of SUBTRANS to zeroes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/subtrans.c diff --git a/src/backend/access/transam/timeline.c b/src/backend/access/transam/timeline.c index 3d65e5624a..61d36050c3 100644 --- a/src/backend/access/transam/timeline.c +++ b/src/backend/access/transam/timeline.c @@ -21,7 +21,7 @@ * The fields are separated by tabs. Lines beginning with # are comments, and * are ignored. Empty lines are also ignored. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/timeline.c diff --git a/src/backend/access/transam/transam.c b/src/backend/access/transam/transam.c index 968b232364..52a624c90b 100644 --- a/src/backend/access/transam/transam.c +++ b/src/backend/access/transam/transam.c @@ -3,7 +3,7 @@ * transam.c * postgres transaction (commit) log interface routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c index 321da9f5f6..c479c4881b 100644 --- a/src/backend/access/transam/twophase.c +++ b/src/backend/access/transam/twophase.c @@ -3,7 +3,7 @@ * twophase.c * Two-phase commit support functions. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/access/transam/twophase_rmgr.c b/src/backend/access/transam/twophase_rmgr.c index 1cd03482d9..6d327e36bc 100644 --- a/src/backend/access/transam/twophase_rmgr.c +++ b/src/backend/access/transam/twophase_rmgr.c @@ -3,7 +3,7 @@ * twophase_rmgr.c * Two-phase-commit resource managers tables * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/transam/varsup.c b/src/backend/access/transam/varsup.c index 4f094e2e63..394843f7e9 100644 --- a/src/backend/access/transam/varsup.c +++ b/src/backend/access/transam/varsup.c @@ -3,7 +3,7 @@ * varsup.c * postgres OID & XID variables support routines * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/access/transam/varsup.c diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index b37510c24f..ea81f4b5de 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -5,7 +5,7 @@ * * See src/backend/access/transam/README for more information. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index 3e9a12dacd..02974f0e52 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -4,7 +4,7 @@ * PostgreSQL write-ahead log manager * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/xlog.c diff --git a/src/backend/access/transam/xlogarchive.c b/src/backend/access/transam/xlogarchive.c index 488acd0f70..5c6de4989c 100644 --- a/src/backend/access/transam/xlogarchive.c +++ b/src/backend/access/transam/xlogarchive.c @@ -4,7 +4,7 @@ * Functions for archiving WAL files and restoring from the archive. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/xlogarchive.c diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c index c41428ea2a..316edbe3c5 100644 --- a/src/backend/access/transam/xlogfuncs.c +++ b/src/backend/access/transam/xlogfuncs.c @@ -7,7 +7,7 @@ * This file contains WAL control and information functions. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/xlogfuncs.c
diff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c index 2a41667c39..de869e00ff 100644 --- a/src/backend/access/transam/xloginsert.c +++ b/src/backend/access/transam/xloginsert.c @@ -9,7 +9,7 @@ * of XLogRecData structs by a call to XLogRecordAssemble(). See * access/transam/README for details. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/xloginsert.c
diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c index 0a75c36026..3a86f3497e 100644 --- a/src/backend/access/transam/xlogreader.c +++ b/src/backend/access/transam/xlogreader.c @@ -3,7 +3,7 @@ * xlogreader.c * Generic XLog reading facility * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/access/transam/xlogreader.c
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c index cc14063daf..89da25c207 100644 --- a/src/backend/access/transam/xlogutils.c +++ b/src/backend/access/transam/xlogutils.c @@ -8,7 +8,7 @@ * None of this code is used during normal system operation. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/access/transam/xlogutils.c
diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y index 2e1fef0350..8c52846a92 100644 --- a/src/backend/bootstrap/bootparse.y +++ b/src/backend/bootstrap/bootparse.y @@ -4,7 +4,7 @@ * bootparse.y * yacc grammar for the "bootstrap" mode (BKI file format) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/bootstrap/bootscanner.l b/src/backend/bootstrap/bootscanner.l index 5465217bc3..2ce6e524db 100644 --- a/src/backend/bootstrap/bootscanner.l +++ b/src/backend/bootstrap/bootscanner.l @@ -4,7 +4,7 @@ * bootscanner.l * a lexical scanner for the bootstrap parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c index 8287de97a2..80860128fb 100644 --- a/src/backend/bootstrap/bootstrap.c +++ b/src/backend/bootstrap/bootstrap.c @@ -4,7 +4,7 @@ * routines to support running postgres in 'bootstrap' mode * bootstrap mode is used to create the initial template database * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION
diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm index 80bd9771f1..f18b400bd7 100644 --- a/src/backend/catalog/Catalog.pm +++ b/src/backend/catalog/Catalog.pm @@ -4,7 +4,7 @@ # Perl module that extracts info from catalog headers into Perl # data structures # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/backend/catalog/Catalog.pm
diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c index e481cf3d11..fac80612b8 100644 --- a/src/backend/catalog/aclchk.c +++ b/src/backend/catalog/aclchk.c @@ -3,7 +3,7 @@ * aclchk.c * Routines to check access control permissions. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/catalog.c b/src/backend/catalog/catalog.c index f50ae3e41d..8f3cd07fa4 100644 --- a/src/backend/catalog/catalog.c +++ b/src/backend/catalog/catalog.c @@ -5,7 +5,7 @@ * bits of hard-wired knowledge * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c index 033c4358ea..269111b4c1 100644 --- a/src/backend/catalog/dependency.c +++ b/src/backend/catalog/dependency.c @@ -4,7 +4,7 @@ * Routines to support inter-object dependencies. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION
diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl index 5b5b04f41c..1d3bbcc532 100644 --- a/src/backend/catalog/genbki.pl +++ b/src/backend/catalog/genbki.pl @@ -7,7 +7,7 @@ # header files. The .bki files are used to initialize the postgres # template database.
# -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/backend/catalog/genbki.pl @@ -335,7 +335,7 @@ * schemapg.h * Schema_pg_xxx macros for use by relcache.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * NOTES diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 4319fc6b8c..089b7965f2 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -3,7 +3,7 @@ * heap.c * code to create and destroy POSTGRES heap relations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c index 0125c18bc1..330488b96f 100644 --- a/src/backend/catalog/index.c +++ b/src/backend/catalog/index.c @@ -3,7 +3,7 @@ * index.c * code to create and destroy POSTGRES index relations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c index e5b6bafaff..a84b7da114 100644 --- a/src/backend/catalog/indexing.c +++ b/src/backend/catalog/indexing.c @@ -4,7 +4,7 @@ * This file contains routines to support indexes defined on system * catalogs. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/catalog/information_schema.sql b/src/backend/catalog/information_schema.sql index 360725d59a..6fb1a1bc1c 100644 --- a/src/backend/catalog/information_schema.sql +++ b/src/backend/catalog/information_schema.sql @@ -2,7 +2,7 @@ * SQL Information Schema * as defined in ISO/IEC 9075-11:2011 * - * Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Copyright (c) 2003-2018, PostgreSQL Global Development Group * * src/backend/catalog/information_schema.sql * diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c index 0a2fb1b93a..93c4bbfcb0 100644 --- a/src/backend/catalog/namespace.c +++ b/src/backend/catalog/namespace.c @@ -9,7 +9,7 @@ * and implementing search-path-controlled searches. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION
diff --git a/src/backend/catalog/objectaccess.c b/src/backend/catalog/objectaccess.c index 9d5eb7b9da..65884699c4 100644 --- a/src/backend/catalog/objectaccess.c +++ b/src/backend/catalog/objectaccess.c @@ -3,7 +3,7 @@ * objectaccess.c * functions for object_access_hook on various events * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * -------------------------------------------------------------------------
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c index 9553675975..bc999ca3c4 100644 --- a/src/backend/catalog/objectaddress.c +++ b/src/backend/catalog/objectaddress.c @@ -3,7 +3,7 @@ * objectaddress.c * functions for working with ObjectAddresses * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 5c4018e9f7..ac9a2bda2e 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -3,7 +3,7 @@ * partition.c * Partitioning related data structures and functions. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_aggregate.c b/src/backend/catalog/pg_aggregate.c index ca3fd819b4..e801c1ed5c 100644 --- a/src/backend/catalog/pg_aggregate.c +++ b/src/backend/catalog/pg_aggregate.c @@ -3,7 +3,7 @@ * pg_aggregate.c * routines to support manipulation of the pg_aggregate relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_collation.c b/src/backend/catalog/pg_collation.c index ca62896ecb..d280869baf 100644 --- a/src/backend/catalog/pg_collation.c +++ b/src/backend/catalog/pg_collation.c @@ -3,7 +3,7 @@ * pg_collation.c * routines to support manipulation of the pg_collation relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c index 7dee6db0eb..442ae7e23d 100644 --- a/src/backend/catalog/pg_constraint.c +++ b/src/backend/catalog/pg_constraint.c @@ -3,7 +3,7 @@ * pg_constraint.c * routines to support manipulation of the pg_constraint relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_conversion.c b/src/backend/catalog/pg_conversion.c index 5746dc349a..76fcd8fd9c 100644 --- a/src/backend/catalog/pg_conversion.c +++ b/src/backend/catalog/pg_conversion.c @@ -3,7 +3,7 @@ * pg_conversion.c * routines to support manipulation of the pg_conversion relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_db_role_setting.c b/src/backend/catalog/pg_db_role_setting.c index 323471bc83..e123691923 100644 --- a/src/backend/catalog/pg_db_role_setting.c +++ b/src/backend/catalog/pg_db_role_setting.c @@ -2,7 +2,7 @@ * pg_db_role_setting.c * Routines to support manipulation of the pg_db_role_setting relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION
diff --git a/src/backend/catalog/pg_depend.c b/src/backend/catalog/pg_depend.c index cf0086b9bd..9dfbe123b5 100644 --- a/src/backend/catalog/pg_depend.c +++ b/src/backend/catalog/pg_depend.c @@ -3,7 +3,7 @@ * pg_depend.c * routines to support manipulation of the pg_depend relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c index fe61d4dacc..fde361f367 100644 --- a/src/backend/catalog/pg_enum.c +++ b/src/backend/catalog/pg_enum.c @@ -3,7 +3,7 @@ * pg_enum.c * routines to support manipulation of the pg_enum relation * - * Copyright (c) 2006-2017, PostgreSQL Global Development Group + * Copyright (c) 2006-2018, PostgreSQL Global Development Group * * * IDENTIFICATION
diff --git a/src/backend/catalog/pg_inherits.c b/src/backend/catalog/pg_inherits.c index 1bd8a58b7f..b32d677347 100644 --- a/src/backend/catalog/pg_inherits.c +++ b/src/backend/catalog/pg_inherits.c @@ -8,7 +8,7 @@ * Perhaps someday that code should be moved here, but it'd have to be * disentangled from other stuff such as pg_depend updates.
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_largeobject.c b/src/backend/catalog/pg_largeobject.c index fc4f4f8c9b..a876473976 100644 --- a/src/backend/catalog/pg_largeobject.c +++ b/src/backend/catalog/pg_largeobject.c @@ -3,7 +3,7 @@ * pg_largeobject.c * routines to support manipulation of the pg_largeobject relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_namespace.c b/src/backend/catalog/pg_namespace.c index 3e20d051c2..a82d785034 100644 --- a/src/backend/catalog/pg_namespace.c +++ b/src/backend/catalog/pg_namespace.c @@ -3,7 +3,7 @@ * pg_namespace.c * routines to support manipulation of the pg_namespace relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_operator.c b/src/backend/catalog/pg_operator.c index 61093dc473..c96f336b7a 100644 --- a/src/backend/catalog/pg_operator.c +++ b/src/backend/catalog/pg_operator.c @@ -3,7 +3,7 @@ * pg_operator.c * routines to support manipulation of the pg_operator relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c index 7d05e4bdb2..39d5172e97 100644 --- a/src/backend/catalog/pg_proc.c +++ b/src/backend/catalog/pg_proc.c @@ -3,7 +3,7 @@ * pg_proc.c * routines to support manipulation of the pg_proc relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c index 3ef7ba8cd5..b4a5f48b4e 100644 --- a/src/backend/catalog/pg_publication.c +++ b/src/backend/catalog/pg_publication.c @@ -3,7 +3,7 @@ * pg_publication.c * publication C API manipulation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION
diff --git a/src/backend/catalog/pg_range.c b/src/backend/catalog/pg_range.c index a3b0fb8838..c902f98606 100644 --- a/src/backend/catalog/pg_range.c +++ b/src/backend/catalog/pg_range.c @@ -3,7 +3,7 @@ * pg_range.c * routines to support manipulation of the pg_range relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_shdepend.c b/src/backend/catalog/pg_shdepend.c index 31b09a1da5..faf42b7640 100644 --- a/src/backend/catalog/pg_shdepend.c +++ b/src/backend/catalog/pg_shdepend.c @@ -3,7 +3,7 @@ * pg_shdepend.c * routines to support manipulation of the pg_shdepend relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c index fb53d71cd6..8e16d3b7bc 100644 --- a/src/backend/catalog/pg_subscription.c +++ b/src/backend/catalog/pg_subscription.c @@ -3,7 +3,7 @@ * pg_subscription.c * replication subscriptions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION
diff --git a/src/backend/catalog/pg_type.c b/src/backend/catalog/pg_type.c index e02d312008..963ccb7ff2 100644 --- a/src/backend/catalog/pg_type.c +++ b/src/backend/catalog/pg_type.c @@ -3,7 +3,7 @@ * pg_type.c * routines to support manipulation of the pg_type relation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c index 9a5fde00ca..cff49bae9e 100644 --- a/src/backend/catalog/storage.c +++ b/src/backend/catalog/storage.c @@ -3,7 +3,7 @@ * storage.c * code to create and destroy physical storage for relations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql index 394aea8e0f..5652e9ee6d 100644 --- a/src/backend/catalog/system_views.sql +++ b/src/backend/catalog/system_views.sql @@ -1,7 +1,7 @@ /* * PostgreSQL System Views * - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/backend/catalog/system_views.sql *
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c index 539ca79ad3..0b4b5631a1 100644 --- a/src/backend/catalog/toasting.c +++ b/src/backend/catalog/toasting.c @@ -4,7 +4,7 @@ * This file contains routines to support creation of toast tables * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION
diff --git a/src/backend/commands/aggregatecmds.c b/src/backend/commands/aggregatecmds.c index 2e2ee883e2..15378a9d4d 100644 --- a/src/backend/commands/aggregatecmds.c +++ b/src/backend/commands/aggregatecmds.c @@ -4,7 +4,7 @@ * * Routines for aggregate-manipulation commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c index 21e3f1efe1..3995c5ef3d 100644 --- a/src/backend/commands/alter.c +++ b/src/backend/commands/alter.c @@ -3,7 +3,7 @@ * alter.c * Drivers for generic alter commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/commands/amcmds.c b/src/backend/commands/amcmds.c index 7e0a9aa0fd..f2173450ad 100644 --- a/src/backend/commands/amcmds.c +++ b/src/backend/commands/amcmds.c @@ -3,7 +3,7 @@ * amcmds.c * Routines for SQL commands that manipulate access methods. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c index f952b3c732..5f21fcb5f4 100644 --- a/src/backend/commands/analyze.c +++ b/src/backend/commands/analyze.c @@ -3,7 +3,7 @@ * analyze.c * the Postgres statistics generator * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c index f7de742a56..ee7c6d41b4 100644 --- a/src/backend/commands/async.c +++ b/src/backend/commands/async.c @@ -3,7 +3,7 @@ * async.c * Asynchronous notification: NOTIFY, LISTEN, UNLISTEN * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c index 1c5669afa8..eb73299199 100644 --- a/src/backend/commands/cluster.c +++ b/src/backend/commands/cluster.c @@ -6,7 +6,7 @@ * There is hardly anything left of Paul Brown's original implementation... * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994-5, Regents of the University of California * *
diff --git a/src/backend/commands/collationcmds.c b/src/backend/commands/collationcmds.c index 9437731276..6c6877395f 100644 --- a/src/backend/commands/collationcmds.c +++ b/src/backend/commands/collationcmds.c @@ -3,7 +3,7 @@ * collationcmds.c * collation-related commands support code * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *
diff --git a/src/backend/commands/comment.c b/src/backend/commands/comment.c index 2dc9371fdb..2f2e69b4a8 100644 --- a/src/backend/commands/comment.c +++ b/src/backend/commands/comment.c @@ -4,7 +4,7 @@ * * PostgreSQL object comments utility code. * - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/commands/comment.c
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c index e2544e51ed..90f19ad3dd 100644 --- a/src/backend/commands/constraint.c +++ b/src/backend/commands/constraint.c @@ -3,7 +3,7 @@ * constraint.c * PostgreSQL CONSTRAINT support code.
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/commands/conversioncmds.c b/src/backend/commands/conversioncmds.c index 9861d3df22..294143c522 100644 --- a/src/backend/commands/conversioncmds.c +++ b/src/backend/commands/conversioncmds.c @@ -3,7 +3,7 @@ * conversioncmds.c * conversion creation command support code * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 254be28ae4..118115aa42 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -3,7 +3,7 @@ * copy.c * Implements the COPY utility command * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c index 4d77411a68..3d82edbf58 100644 --- a/src/backend/commands/createas.c +++ b/src/backend/commands/createas.c @@ -13,7 +13,7 @@ * we must return a tuples-processed count in the completionTag. (We no * longer do that for CTAS ... WITH NO DATA, however.) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c index eb1a4695c0..0b111fc5cf 100644 --- a/src/backend/commands/dbcommands.c +++ b/src/backend/commands/dbcommands.c @@ -8,7 +8,7 @@ * stepping on each others' toes. Formerly we used table-level locks * on pg_database, but that's too coarse-grained. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/define.c b/src/backend/commands/define.c index 8eff0ad17b..00b5721f85 100644 --- a/src/backend/commands/define.c +++ b/src/backend/commands/define.c @@ -4,7 +4,7 @@ * Support routines for various kinds of object creation. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/discard.c b/src/backend/commands/discard.c index f0dcd87fb8..353ec990af 100644 --- a/src/backend/commands/discard.c +++ b/src/backend/commands/discard.c @@ -3,7 +3,7 @@ * discard.c * The implementation of the DISCARD command * - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/commands/dropcmds.c b/src/backend/commands/dropcmds.c index 7e6baa1928..fc4ce8d22a 100644 --- a/src/backend/commands/dropcmds.c +++ b/src/backend/commands/dropcmds.c @@ -3,7 +3,7 @@ * dropcmds.c * handle various "DROP" operations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c index a602c20b41..8455138ed3 100644 --- a/src/backend/commands/event_trigger.c +++ b/src/backend/commands/event_trigger.c @@ -3,7 +3,7 @@ * event_trigger.c * PostgreSQL EVENT TRIGGER support code. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 2156385ac8..79e6985d0d 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -3,7 +3,7 @@ * explain.c * Explain query execution plans * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994-5, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/commands/extension.c b/src/backend/commands/extension.c index 9f77d25352..c0c933583f 100644 --- a/src/backend/commands/extension.c +++ b/src/backend/commands/extension.c @@ -12,7 +12,7 @@ * postgresql.conf and recovery.conf. An extension also has an installation * script file, containing SQL commands to create the extension's objects. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/foreigncmds.c b/src/backend/commands/foreigncmds.c index 9ad991507f..44f3da9b51 100644 --- a/src/backend/commands/foreigncmds.c +++ b/src/backend/commands/foreigncmds.c @@ -3,7 +3,7 @@ * foreigncmds.c * foreign-data wrapper/server creation/manipulation commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c index 2a9c90133d..12ab33f418 100644 --- a/src/backend/commands/functioncmds.c +++ b/src/backend/commands/functioncmds.c @@ -5,7 +5,7 @@ * Routines for CREATE and DROP FUNCTION commands and CREATE and DROP * CAST commands. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index ffa99aec16..9e6ba92008 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -3,7 +3,7 @@ * indexcmds.c * POSTGRES define and remove index code. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/lockcmds.c b/src/backend/commands/lockcmds.c index 9fe9e022b0..c587edc209 100644 --- a/src/backend/commands/lockcmds.c +++ b/src/backend/commands/lockcmds.c @@ -3,7 +3,7 @@ * lockcmds.c * LOCK command support code * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c index d2e0376511..ab6a889b12 100644 --- a/src/backend/commands/matview.c +++ b/src/backend/commands/matview.c @@ -3,7 +3,7 @@ * matview.c * materialized view support * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c index 35c7c67bf5..7fb7b3976c 100644 --- a/src/backend/commands/opclasscmds.c +++ b/src/backend/commands/opclasscmds.c @@ -4,7 +4,7 @@ * * Routines for opclass (and opfamily) manipulation commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/operatorcmds.c b/src/backend/commands/operatorcmds.c index 6674b41eec..81ef532184 100644 --- a/src/backend/commands/operatorcmds.c +++ b/src/backend/commands/operatorcmds.c @@ -4,7 +4,7 @@ * * Routines for operator manipulation commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/policy.c b/src/backend/commands/policy.c index 9ced4ee34c..396d4c3449 100644 --- a/src/backend/commands/policy.c +++ b/src/backend/commands/policy.c @@ -3,7 +3,7 @@ * policy.c * Commands for manipulating policies. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/commands/policy.c diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c index 76d6cf154c..ff38e94cb1 100644 --- a/src/backend/commands/portalcmds.c +++ b/src/backend/commands/portalcmds.c @@ -9,7 +9,7 @@ * storage management for portals (but doesn't run any queries in them). 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c index 6e90912ae1..21cb855aeb 100644 --- a/src/backend/commands/prepare.c +++ b/src/backend/commands/prepare.c @@ -7,7 +7,7 @@ * accessed via the extended FE/BE query protocol. * * - * Copyright (c) 2002-2017, PostgreSQL Global Development Group + * Copyright (c) 2002-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/commands/prepare.c diff --git a/src/backend/commands/proclang.c b/src/backend/commands/proclang.c index 1a239fabea..9783a162d7 100644 --- a/src/backend/commands/proclang.c +++ b/src/backend/commands/proclang.c @@ -3,7 +3,7 @@ * proclang.c * PostgreSQL PROCEDURAL LANGUAGE support code. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c index f298d3d381..91338e8d49 100644 --- a/src/backend/commands/publicationcmds.c +++ b/src/backend/commands/publicationcmds.c @@ -3,7 +3,7 @@ * publicationcmds.c * publication manipulation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c index f9ea73f923..16b6e8f111 100644 --- a/src/backend/commands/schemacmds.c +++ b/src/backend/commands/schemacmds.c @@ -3,7 +3,7 @@ * schemacmds.c * schema creation/manipulation commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/seclabel.c b/src/backend/commands/seclabel.c index b0b06fc91f..5ee46905d8 100644 --- a/src/backend/commands/seclabel.c +++ b/src/backend/commands/seclabel.c @@ -3,7 +3,7 @@ * seclabel.c * routines to support security label feature. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * ------------------------------------------------------------------------- diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c index 5e1b0fe289..ef3ca8c00b 100644 --- a/src/backend/commands/sequence.c +++ b/src/backend/commands/sequence.c @@ -3,7 +3,7 @@ * sequence.c * PostgreSQL sequences support code. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/statscmds.c b/src/backend/commands/statscmds.c index c70a28de4b..5773a8fd03 100644 --- a/src/backend/commands/statscmds.c +++ b/src/backend/commands/statscmds.c @@ -3,7 +3,7 @@ * statscmds.c * Commands for creating and altering extended statistics objects * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c index a7f426d52b..bd25ec2401 100644 --- a/src/backend/commands/subscriptioncmds.c +++ b/src/backend/commands/subscriptioncmds.c @@ -3,7 +3,7 @@ * subscriptioncmds.c * subscription catalog manipulation functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index d979ce266d..62cf81e95a 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -3,7 +3,7 @@ * tablecmds.c * Commands for creating and altering table structures and settings * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c index d574e4dd00..8cb834c271 100644 --- a/src/backend/commands/tablespace.c +++ b/src/backend/commands/tablespace.c @@ -35,7 +35,7 @@ * and munge the system catalogs of the new database. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index 92ae3822d8..1c488c338a 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -3,7 +3,7 @@ * trigger.c * PostgreSQL TRIGGERs support code. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c index adc7cd67a7..bf06ed9318 100644 --- a/src/backend/commands/tsearchcmds.c +++ b/src/backend/commands/tsearchcmds.c @@ -4,7 +4,7 @@ * * Routines for tsearch manipulation commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c index f86af4c054..a40b3cf752 100644 --- a/src/backend/commands/typecmds.c +++ b/src/backend/commands/typecmds.c @@ -3,7 +3,7 @@ * typecmds.c * Routines for SQL commands that manipulate types (and domains). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c index f2941352d7..d559c29d24 100644 --- a/src/backend/commands/user.c +++ b/src/backend/commands/user.c @@ -3,7 +3,7 @@ * user.c * Commands for manipulating roles (formerly called users). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/commands/user.c diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c index d5f3fa5a31..7aca69a0ba 100644 --- a/src/backend/commands/vacuum.c +++ b/src/backend/commands/vacuum.c @@ -9,7 +9,7 @@ * in cluster.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c index f95346acdb..cf7f5e1162 100644 --- a/src/backend/commands/vacuumlazy.c +++ b/src/backend/commands/vacuumlazy.c @@ -23,7 +23,7 @@ * the TID array, just enough to hold as many heap tuples as fit on one page. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/variable.c b/src/backend/commands/variable.c index 3ed1c56e82..9a754dae3f 100644 --- a/src/backend/commands/variable.c +++ b/src/backend/commands/variable.c @@ -4,7 +4,7 @@ * Routines for handling specialized SET variables. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c index c1e80e61d4..04ad76a210 100644 --- a/src/backend/commands/view.c +++ b/src/backend/commands/view.c @@ -3,7 +3,7 @@ * view.c * use rewrite rules to construct views * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c index f1636a5b88..9e78421978 100644 --- a/src/backend/executor/execAmi.c +++ b/src/backend/executor/execAmi.c @@ -3,7 +3,7 @@ * execAmi.c * miscellaneous executor access method routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/executor/execAmi.c diff --git a/src/backend/executor/execCurrent.c b/src/backend/executor/execCurrent.c index eaeb3a2836..6a8db582db 100644 --- a/src/backend/executor/execCurrent.c +++ b/src/backend/executor/execCurrent.c @@ -3,7 +3,7 @@ * execCurrent.c * executor support for WHERE CURRENT OF cursor * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/executor/execCurrent.c diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index 2642b404ff..16f908037c 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -19,7 +19,7 @@ * and "Expression Evaluation" sections. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index fa4ab30e99..2e88417265 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -46,7 +46,7 @@ * exported rather than being "static" in this file.) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c index 07c8852fca..058ee68804 100644 --- a/src/backend/executor/execGrouping.c +++ b/src/backend/executor/execGrouping.c @@ -7,7 +7,7 @@ * collation-sensitive, so the code in this file has no support for passing * collation settings through from callers. That may have to change someday. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c index 89e189fa71..62e51f1ef3 100644 --- a/src/backend/executor/execIndexing.c +++ b/src/backend/executor/execIndexing.c @@ -95,7 +95,7 @@ * with the higher XID backs out. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execJunk.c b/src/backend/executor/execJunk.c index 7fcd940fdb..57d74e57c1 100644 --- a/src/backend/executor/execJunk.c +++ b/src/backend/executor/execJunk.c @@ -3,7 +3,7 @@ * execJunk.c * Junk attribute support stuff.... * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index dbaa47f2d3..d8bc5028e8 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -26,7 +26,7 @@ * before ExecutorEnd. This can be omitted only in case of EXPLAIN, * which should also omit ExecutorRun. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index b344d4b589..f8b72ebab9 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -3,7 +3,7 @@ * execParallel.c * Support routines for parallel execution. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * This file contains routines that are intended to support setting up, diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index d545af2b67..89c523ef44 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -3,7 +3,7 @@ * execPartition.c * Support routines for partitioning. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c index 699dc69179..43a27a9af2 100644 --- a/src/backend/executor/execProcnode.c +++ b/src/backend/executor/execProcnode.c @@ -7,7 +7,7 @@ * ExecProcNode, or ExecEndNode on its subnodes and do the appropriate * processing. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c index bd786a1be6..732ed42fe5 100644 --- a/src/backend/executor/execReplication.c +++ b/src/backend/executor/execReplication.c @@ -3,7 +3,7 @@ * execReplication.c * miscellaneous executor routines for logical replication * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/execSRF.c b/src/backend/executor/execSRF.c index c24d8b9ead..8e521288a9 100644 --- a/src/backend/executor/execSRF.c +++ b/src/backend/executor/execSRF.c @@ -7,7 +7,7 @@ * common code for calling set-returning functions according to the * ReturnSetInfo API. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c index 837abc0f01..bf4f603fd3 100644 --- a/src/backend/executor/execScan.c +++ b/src/backend/executor/execScan.c @@ -7,7 +7,7 @@ * stuff - checking the qualification and projecting the tuple * appropriately. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execTuples.c b/src/backend/executor/execTuples.c index 51d2c5d166..5df89e419c 100644 --- a/src/backend/executor/execTuples.c +++ b/src/backend/executor/execTuples.c @@ -12,7 +12,7 @@ * This information is needed by routines manipulating tuples * (getattribute, formtuple, etc.). 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index 876439835a..e29f7aaf7b 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -3,7 +3,7 @@ * execUtils.c * miscellaneous executor utility routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/functions.c b/src/backend/executor/functions.c index 527f7d810f..7e249f575f 100644 --- a/src/backend/executor/functions.c +++ b/src/backend/executor/functions.c @@ -3,7 +3,7 @@ * functions.c * Execution of SQL-language functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/instrument.c b/src/backend/executor/instrument.c index 6ec96ec371..a2d9381ba1 100644 --- a/src/backend/executor/instrument.c +++ b/src/backend/executor/instrument.c @@ -4,7 +4,7 @@ * functions for instrumentation of plan execution * * - * Copyright (c) 2001-2017, PostgreSQL Global Development Group + * Copyright (c) 2001-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/executor/instrument.c diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index a3454e52f6..46ee880415 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -195,7 +195,7 @@ * all hash tables. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c index 0e9371373c..4245d8afaf 100644 --- a/src/backend/executor/nodeAppend.c +++ b/src/backend/executor/nodeAppend.c @@ -3,7 +3,7 @@ * nodeAppend.c * routines to handle append nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeBitmapAnd.c b/src/backend/executor/nodeBitmapAnd.c index 1c5c312c95..913046c987 100644 --- a/src/backend/executor/nodeBitmapAnd.c +++ b/src/backend/executor/nodeBitmapAnd.c @@ -3,7 +3,7 @@ * nodeBitmapAnd.c * routines to handle BitmapAnd nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c index eb5bbb57ef..7ba1db7d7e 100644 --- a/src/backend/executor/nodeBitmapHeapscan.c +++ b/src/backend/executor/nodeBitmapHeapscan.c @@ -16,7 +16,7 @@ * required index qual conditions. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeBitmapIndexscan.c b/src/backend/executor/nodeBitmapIndexscan.c index 6feb70f4ae..bb5e4da187 100644 --- a/src/backend/executor/nodeBitmapIndexscan.c +++ b/src/backend/executor/nodeBitmapIndexscan.c @@ -3,7 +3,7 @@ * nodeBitmapIndexscan.c * Routines to support bitmapped index scans of relations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeBitmapOr.c b/src/backend/executor/nodeBitmapOr.c index 66a7a89a8b..8047549f7d 100644 --- a/src/backend/executor/nodeBitmapOr.c +++ b/src/backend/executor/nodeBitmapOr.c @@ -3,7 +3,7 @@ * nodeBitmapOr.c * routines to handle BitmapOr nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeCtescan.c b/src/backend/executor/nodeCtescan.c index 79676ca978..ec6d75cbd4 100644 --- a/src/backend/executor/nodeCtescan.c +++ b/src/backend/executor/nodeCtescan.c @@ -3,7 +3,7 @@ * nodeCtescan.c * routines to handle CteScan nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c index 5f1732d6ac..936a2221f5 100644 --- a/src/backend/executor/nodeCustom.c +++ b/src/backend/executor/nodeCustom.c @@ -3,7 +3,7 @@ * nodeCustom.c * Routines to handle execution of custom scan node * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * ------------------------------------------------------------------------ diff --git a/src/backend/executor/nodeForeignscan.c b/src/backend/executor/nodeForeignscan.c index dc6cfcfa66..59865f5cca 100644 --- a/src/backend/executor/nodeForeignscan.c +++ b/src/backend/executor/nodeForeignscan.c @@ -3,7 +3,7 @@ * nodeForeignscan.c * Routines to support scans of foreign tables * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c index de476ac75c..69f8d3e814 100644 --- a/src/backend/executor/nodeFunctionscan.c +++ b/src/backend/executor/nodeFunctionscan.c @@ -3,7 +3,7 @@ * nodeFunctionscan.c * Support routines for scanning RangeFunctions (functions in rangetable). 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 1697ae650d..89266b5371 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -3,7 +3,7 @@ * nodeGather.c * Support routines for scanning a plan via multiple workers. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * A Gather executor launches parallel workers to run multiple copies of a diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index a69777aa95..a3e34c6980 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -3,7 +3,7 @@ * nodeGatherMerge.c * Scan a plan in multiple workers, and do order-preserving merge. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c index 6b68835ca1..f1cdbaa4e6 100644 --- a/src/backend/executor/nodeGroup.c +++ b/src/backend/executor/nodeGroup.c @@ -3,7 +3,7 @@ * nodeGroup.c * Routines to handle group nodes (used for queries with GROUP BY clause). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index 38a84cc14c..52f5c0c26e 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -3,7 +3,7 @@ * nodeHash.c * Routines to hash relations for hashjoin * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c index 817bcf0471..8f2b634b12 100644 --- a/src/backend/executor/nodeHashjoin.c +++ b/src/backend/executor/nodeHashjoin.c @@ -3,7 +3,7 @@ * nodeHashjoin.c * Routines to handle hash join nodes * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c index c54c5aa659..9b7f470ee2 100644 --- a/src/backend/executor/nodeIndexonlyscan.c +++ b/src/backend/executor/nodeIndexonlyscan.c @@ -3,7 +3,7 @@ * nodeIndexonlyscan.c * Routines to support index-only scans * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c index 2ffef23107..54fafa5033 100644 --- 
a/src/backend/executor/nodeIndexscan.c +++ b/src/backend/executor/nodeIndexscan.c @@ -3,7 +3,7 @@ * nodeIndexscan.c * Routines to support indexed scans of relations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c index 883f46ce7c..29d2deac23 100644 --- a/src/backend/executor/nodeLimit.c +++ b/src/backend/executor/nodeLimit.c @@ -3,7 +3,7 @@ * nodeLimit.c * Routines to handle limiting of query results where appropriate * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeLockRows.c b/src/backend/executor/nodeLockRows.c index 93895600a5..7961b4be6a 100644 --- a/src/backend/executor/nodeLockRows.c +++ b/src/backend/executor/nodeLockRows.c @@ -3,7 +3,7 @@ * nodeLockRows.c * Routines to handle FOR UPDATE/FOR SHARE row locking * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeMaterial.c b/src/backend/executor/nodeMaterial.c index 91178f1019..85afe87c44 100644 --- a/src/backend/executor/nodeMaterial.c +++ b/src/backend/executor/nodeMaterial.c @@ -3,7 +3,7 @@ * nodeMaterial.c * Routines to handle materialization nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeMergeAppend.c b/src/backend/executor/nodeMergeAppend.c index 6bf490bd70..ab4009c967 100644 --- a/src/backend/executor/nodeMergeAppend.c +++ b/src/backend/executor/nodeMergeAppend.c @@ -3,7 +3,7 @@ * nodeMergeAppend.c * routines to handle MergeAppend nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeMergejoin.c b/src/backend/executor/nodeMergejoin.c index ef9e1ee471..b52946f180 100644 --- a/src/backend/executor/nodeMergejoin.c +++ b/src/backend/executor/nodeMergejoin.c @@ -3,7 +3,7 @@ * nodeMergejoin.c * routines supporting merge joins * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 82cd4462a3..e52a3bb95e 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -3,7 +3,7 @@ * nodeModifyTable.c * routines to handle ModifyTable nodes. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeNamedtuplestorescan.c b/src/backend/executor/nodeNamedtuplestorescan.c index 3a65b9f5dc..c3b28176e4 100644 --- a/src/backend/executor/nodeNamedtuplestorescan.c +++ b/src/backend/executor/nodeNamedtuplestorescan.c @@ -3,7 +3,7 @@ * nodeNamedtuplestorescan.c * routines to handle NamedTuplestoreScan nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeNestloop.c b/src/backend/executor/nodeNestloop.c index 4447b7c051..9b4f8cc432 100644 --- a/src/backend/executor/nodeNestloop.c +++ b/src/backend/executor/nodeNestloop.c @@ -3,7 +3,7 @@ * nodeNestloop.c * routines to support nest-loop joins * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeProjectSet.c b/src/backend/executor/nodeProjectSet.c index 30789bcce4..3b79993ade 100644 --- a/src/backend/executor/nodeProjectSet.c +++ b/src/backend/executor/nodeProjectSet.c @@ -11,7 +11,7 @@ * can't be inside more-complex expressions. If that'd otherwise be * the case, the planner adds additional ProjectSet nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/nodeRecursiveunion.c b/src/backend/executor/nodeRecursiveunion.c index a64dd1397a..817749855f 100644 --- a/src/backend/executor/nodeRecursiveunion.c +++ b/src/backend/executor/nodeRecursiveunion.c @@ -7,7 +7,7 @@ * already seen. The hash key is computed from the grouping columns. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeResult.c b/src/backend/executor/nodeResult.c index 4c879d8765..5860d9c1ce 100644 --- a/src/backend/executor/nodeResult.c +++ b/src/backend/executor/nodeResult.c @@ -34,7 +34,7 @@ * plan normally and pass back the results. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/nodeSamplescan.c b/src/backend/executor/nodeSamplescan.c index 9c74a836e4..e88cd18737 100644 --- a/src/backend/executor/nodeSamplescan.c +++ b/src/backend/executor/nodeSamplescan.c @@ -3,7 +3,7 @@ * nodeSamplescan.c * Support routines for sample scans of relations (table sampling). 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c index a5bd60e579..58631378d5 100644 --- a/src/backend/executor/nodeSeqscan.c +++ b/src/backend/executor/nodeSeqscan.c @@ -3,7 +3,7 @@ * nodeSeqscan.c * Support routines for sequential scans of relations. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeSetOp.c b/src/backend/executor/nodeSetOp.c index 571cbf86b1..c91c3402d2 100644 --- a/src/backend/executor/nodeSetOp.c +++ b/src/backend/executor/nodeSetOp.c @@ -32,7 +32,7 @@ * input group. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeSort.c b/src/backend/executor/nodeSort.c index d593378f74..9c68de8565 100644 --- a/src/backend/executor/nodeSort.c +++ b/src/backend/executor/nodeSort.c @@ -3,7 +3,7 @@ * nodeSort.c * Routines to handle sorting of relations. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c index a93fbf646c..edf7d034bd 100644 --- a/src/backend/executor/nodeSubplan.c +++ b/src/backend/executor/nodeSubplan.c @@ -11,7 +11,7 @@ * subplans, which are re-evaluated every time their result is required. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/nodeSubqueryscan.c b/src/backend/executor/nodeSubqueryscan.c index 088c92992e..715a5b6a84 100644 --- a/src/backend/executor/nodeSubqueryscan.c +++ b/src/backend/executor/nodeSubqueryscan.c @@ -7,7 +7,7 @@ * we need two sets of code. Ought to look at trying to unify the cases. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeTableFuncscan.c b/src/backend/executor/nodeTableFuncscan.c index 165fae8c83..3b609765d4 100644 --- a/src/backend/executor/nodeTableFuncscan.c +++ b/src/backend/executor/nodeTableFuncscan.c @@ -3,7 +3,7 @@ * nodeTableFuncscan.c * Support routines for scanning RangeTableFunc (XMLTABLE like functions). 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c index 0ee76e7d25..f2737bb7ef 100644 --- a/src/backend/executor/nodeTidscan.c +++ b/src/backend/executor/nodeTidscan.c @@ -3,7 +3,7 @@ * nodeTidscan.c * Routines to support direct tid scans of relations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeUnique.c b/src/backend/executor/nodeUnique.c index 621fdd9b9c..e330650593 100644 --- a/src/backend/executor/nodeUnique.c +++ b/src/backend/executor/nodeUnique.c @@ -11,7 +11,7 @@ * (It's debatable whether the savings justifies carrying two plan node * types, though.) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeValuesscan.c b/src/backend/executor/nodeValuesscan.c index 47ba9faa78..c3d78b6295 100644 --- a/src/backend/executor/nodeValuesscan.c +++ b/src/backend/executor/nodeValuesscan.c @@ -4,7 +4,7 @@ * Support routines for scanning Values lists * ("VALUES (...), (...), ..." in rangetable). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index 02868749f6..5492fb3369 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -23,7 +23,7 @@ * aggregate function over all rows in the current row's window frame. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/nodeWorktablescan.c b/src/backend/executor/nodeWorktablescan.c index d5ffadda3e..66d2111bd9 100644 --- a/src/backend/executor/nodeWorktablescan.c +++ b/src/backend/executor/nodeWorktablescan.c @@ -3,7 +3,7 @@ * nodeWorktablescan.c * routines to handle WorkTableScan nodes. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c index 977f317420..4d9b51b947 100644 --- a/src/backend/executor/spi.c +++ b/src/backend/executor/spi.c @@ -3,7 +3,7 @@ * spi.c * Server Programming Interface * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c index 0dcb911c3c..ecdbe7f79f 100644 --- a/src/backend/executor/tqueue.c +++ b/src/backend/executor/tqueue.c @@ -8,7 +8,7 @@ * * A TupleQueueReader reads tuples from a shm_mq and returns the tuples. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/executor/tstoreReceiver.c b/src/backend/executor/tstoreReceiver.c index 027fa72f10..d02ca3afd1 100644 --- a/src/backend/executor/tstoreReceiver.c +++ b/src/backend/executor/tstoreReceiver.c @@ -9,7 +9,7 @@ * data even if the underlying table is dropped. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/foreign/foreign.c b/src/backend/foreign/foreign.c index 45fca52621..e7fd507fa5 100644 --- a/src/backend/foreign/foreign.c +++ b/src/backend/foreign/foreign.c @@ -3,7 +3,7 @@ * foreign.c * support for foreign-data wrappers, servers and user mappings. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/foreign/foreign.c diff --git a/src/backend/lib/binaryheap.c b/src/backend/lib/binaryheap.c index ce4331ccde..a8adf065a9 100644 --- a/src/backend/lib/binaryheap.c +++ b/src/backend/lib/binaryheap.c @@ -3,7 +3,7 @@ * binaryheap.c * A simple binary heap implementation * - * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/lib/binaryheap.c diff --git a/src/backend/lib/bipartite_match.c b/src/backend/lib/bipartite_match.c index 4564a463d5..5be5ed24f1 100644 --- a/src/backend/lib/bipartite_match.c +++ b/src/backend/lib/bipartite_match.c @@ -7,7 +7,7 @@ * * http://en.wikipedia.org/w/index.php?title=Hopcroft%E2%80%93Karp_algorithm&oldid=593898016 * - * Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Copyright (c) 2015-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/lib/bipartite_match.c diff --git a/src/backend/lib/dshash.c b/src/backend/lib/dshash.c index dd87573067..b1973d4bfb 100644 --- a/src/backend/lib/dshash.c +++ b/src/backend/lib/dshash.c @@ -20,7 +20,7 @@ * Future versions may support iterators and incremental resizing; for now * the implementation is minimalist. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/lib/hyperloglog.c b/src/backend/lib/hyperloglog.c index df7a67e7dc..3c50375a92 100644 --- a/src/backend/lib/hyperloglog.c +++ b/src/backend/lib/hyperloglog.c @@ -3,7 +3,7 @@ * hyperloglog.c * HyperLogLog cardinality estimator * - * Portions Copyright (c) 2014-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2014-2018, PostgreSQL Global Development Group * * Based on Hideaki Ohno's C++ implementation. This is probably not ideally * suited to estimating the cardinality of very large sets; in particular, we diff --git a/src/backend/lib/ilist.c b/src/backend/lib/ilist.c index af8d656d3e..58bee57c76 100644 --- a/src/backend/lib/ilist.c +++ b/src/backend/lib/ilist.c @@ -3,7 +3,7 @@ * ilist.c * support for integrated/inline doubly- and singly- linked lists * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/lib/knapsack.c b/src/backend/lib/knapsack.c index 490c0cc73c..25ce4f2365 100644 --- a/src/backend/lib/knapsack.c +++ b/src/backend/lib/knapsack.c @@ -15,7 +15,7 @@ * allows approximate solutions in polynomial time (the general case of the * exact problem is NP-hard). * - * Copyright (c) 2017, PostgreSQL Global Development Group + * Copyright (c) 2017-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/lib/knapsack.c diff --git a/src/backend/lib/pairingheap.c b/src/backend/lib/pairingheap.c index fd871408f3..89d0f62f8f 100644 --- a/src/backend/lib/pairingheap.c +++ b/src/backend/lib/pairingheap.c @@ -14,7 +14,7 @@ * The pairing heap: a new form of self-adjusting heap. * Algorithmica 1, 1 (January 1986), pages 111-129. DOI: 10.1007/BF01840439 * - * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/lib/pairingheap.c diff --git a/src/backend/lib/rbtree.c b/src/backend/lib/rbtree.c index 5362acc6ff..a43d5938d5 100644 --- a/src/backend/lib/rbtree.c +++ b/src/backend/lib/rbtree.c @@ -17,7 +17,7 @@ * longest path from root to leaf is only about twice as long as the shortest, * so lookups are guaranteed to run in O(lg n) time. * - * Copyright (c) 2009-2017, PostgreSQL Global Development Group + * Copyright (c) 2009-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/lib/rbtree.c diff --git a/src/backend/lib/stringinfo.c b/src/backend/lib/stringinfo.c index cb2026c3b2..798a823ac9 100644 --- a/src/backend/lib/stringinfo.c +++ b/src/backend/lib/stringinfo.c @@ -6,7 +6,7 @@ * It can be used to buffer either ordinary C strings (null-terminated text) * or arbitrary binary data. All storage is allocated with palloc(). 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/lib/stringinfo.c diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c index d52a763457..7068ee5b25 100644 --- a/src/backend/libpq/auth-scram.c +++ b/src/backend/libpq/auth-scram.c @@ -67,7 +67,7 @@ * general, after logging in, but let's do what we can here. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/libpq/auth-scram.c diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index b7f9bb1669..1d49ed784f 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -3,7 +3,7 @@ * auth.c * Routines to handle network authentication * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/libpq/be-fsstubs.c b/src/backend/libpq/be-fsstubs.c index 5a2479e6d3..0b802b54e4 100644 --- a/src/backend/libpq/be-fsstubs.c +++ b/src/backend/libpq/be-fsstubs.c @@ -3,7 +3,7 @@ * be-fsstubs.c * Builtin functions for open/close/read/write operations on large objects * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index 1e3e19f5e0..3a7aa01876 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -4,7 +4,7 @@ * functions for OpenSSL support in the backend. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/libpq/be-secure.c b/src/backend/libpq/be-secure.c index 53fefd1b29..eb42ea1a1e 100644 --- a/src/backend/libpq/be-secure.c +++ b/src/backend/libpq/be-secure.c @@ -6,7 +6,7 @@ * message integrity and endpoint authentication. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/libpq/crypt.c b/src/backend/libpq/crypt.c index 1715c52462..2c5ce4a47e 100644 --- a/src/backend/libpq/crypt.c +++ b/src/backend/libpq/crypt.c @@ -4,7 +4,7 @@ * Functions for dealing with encrypted passwords stored in * pg_authid.rolpassword. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/libpq/crypt.c diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c index ca78a7e0ba..f760d24886 100644 --- a/src/backend/libpq/hba.c +++ b/src/backend/libpq/hba.c @@ -5,7 +5,7 @@ * wherein you authenticate a user by seeing what IP address the system * says he comes from and choosing authentication method based on it). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/libpq/ifaddr.c b/src/backend/libpq/ifaddr.c index b8c463b101..274c084362 100644 --- a/src/backend/libpq/ifaddr.c +++ b/src/backend/libpq/ifaddr.c @@ -3,7 +3,7 @@ * ifaddr.c * IP netmask calculations, and enumerating network interfaces. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c index fc15181a11..a4f6d4deeb 100644 --- a/src/backend/libpq/pqcomm.c +++ b/src/backend/libpq/pqcomm.c @@ -27,7 +27,7 @@ * the backend's "backend/libpq" is quite separate from "interfaces/libpq". * All that remains is similarities of names to trap the unwary... * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/libpq/pqcomm.c diff --git a/src/backend/libpq/pqformat.c b/src/backend/libpq/pqformat.c index a5698390ae..30145b96ec 100644 --- a/src/backend/libpq/pqformat.c +++ b/src/backend/libpq/pqformat.c @@ -21,7 +21,7 @@ * are different. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/libpq/pqformat.c diff --git a/src/backend/libpq/pqmq.c b/src/backend/libpq/pqmq.c index e1a24b62c8..201075dd47 100644 --- a/src/backend/libpq/pqmq.c +++ b/src/backend/libpq/pqmq.c @@ -3,7 +3,7 @@ * pqmq.c * Use the frontend/backend protocol for communication over a shm_mq * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/libpq/pqmq.c diff --git a/src/backend/libpq/pqsignal.c b/src/backend/libpq/pqsignal.c index 476e883a68..a24de5d410 100644 --- a/src/backend/libpq/pqsignal.c +++ b/src/backend/libpq/pqsignal.c @@ -3,7 +3,7 @@ * pqsignal.c * Backend signal(2) support (see also src/port/pqsignal.c) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/main/main.c b/src/backend/main/main.c index f9d673f881..38853e38eb 100644 --- a/src/backend/main/main.c +++ b/src/backend/main/main.c @@ -9,7 +9,7 @@ * proper FooMain() routine for the incarnation. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/bitmapset.c b/src/backend/nodes/bitmapset.c index ae30072a01..733fe3cf2a 100644 --- a/src/backend/nodes/bitmapset.c +++ b/src/backend/nodes/bitmapset.c @@ -11,7 +11,7 @@ * bms_is_empty() in preference to testing for NULL.) * * - * Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Copyright (c) 2003-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/nodes/bitmapset.c diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 84d717102d..ddbbc79823 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -11,7 +11,7 @@ * be handled easily in a simple depth-first traversal. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 2e869a9d5d..30ccc9c5ae 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -18,7 +18,7 @@ * "x" to be considered equal() to another reference to "x" in the query. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/nodes/extensible.c b/src/backend/nodes/extensible.c index 01cd3c84fb..f301c11fa9 100644 --- a/src/backend/nodes/extensible.c +++ b/src/backend/nodes/extensible.c @@ -10,7 +10,7 @@ * and GetExtensibleNodeMethods to get information about a previously * registered type of extensible node. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c index bee6244adc..083538f70a 100644 --- a/src/backend/nodes/list.c +++ b/src/backend/nodes/list.c @@ -4,7 +4,7 @@ * implementation for PostgreSQL generic linked list package * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c index 7a676531ae..1bd2599c2c 100644 --- a/src/backend/nodes/makefuncs.c +++ b/src/backend/nodes/makefuncs.c @@ -4,7 +4,7 @@ * creator functions for primitive nodes. The functions here are for * the most frequently created nodes. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c index c2a93b2d4c..6c76c41ebe 100644 --- a/src/backend/nodes/nodeFuncs.c +++ b/src/backend/nodes/nodeFuncs.c @@ -3,7 +3,7 @@ * nodeFuncs.c * Various general-purpose manipulations of Node trees * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/nodes.c b/src/backend/nodes/nodes.c index d3345aae6d..f5ede390e0 100644 --- a/src/backend/nodes/nodes.c +++ b/src/backend/nodes/nodes.c @@ -4,7 +4,7 @@ * support code for nodes (now that we have removed the home-brew * inheritance system, our support code for nodes is much simpler) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index e468d7cc41..5e72df137e 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -3,7 +3,7 @@ * outfuncs.c * Output functions for Postgres tree nodes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/params.c b/src/backend/nodes/params.c index 94acdf4e7b..79197b18b4 100644 --- a/src/backend/nodes/params.c +++ b/src/backend/nodes/params.c @@ -4,7 +4,7 @@ * Support for finding the values associated with Param nodes. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/nodes/print.c b/src/backend/nodes/print.c index 380e8b71f2..b9bad5eacc 100644 --- a/src/backend/nodes/print.c +++ b/src/backend/nodes/print.c @@ -3,7 +3,7 @@ * print.c * various print routines (used mostly for debugging) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/read.c b/src/backend/nodes/read.c index b56f28e15f..76414029d8 100644 --- a/src/backend/nodes/read.c +++ b/src/backend/nodes/read.c @@ -4,7 +4,7 @@ * routines to convert a string (legal ascii representation of node) back * to nodes * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 1133c70a1c..9925866b53 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -3,7 +3,7 @@ * readfuncs.c * Reader functions for Postgres tree nodes. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/nodes/tidbitmap.c b/src/backend/nodes/tidbitmap.c index acfe6b263c..17dc53898f 100644 --- a/src/backend/nodes/tidbitmap.c +++ b/src/backend/nodes/tidbitmap.c @@ -29,7 +29,7 @@ * and a non-lossy page. * * - * Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Copyright (c) 2003-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/nodes/tidbitmap.c diff --git a/src/backend/nodes/value.c b/src/backend/nodes/value.c index 5d2f96c103..8f0428fce1 100644 --- a/src/backend/nodes/value.c +++ b/src/backend/nodes/value.c @@ -4,7 +4,7 @@ * implementation of Value nodes * * - * Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Copyright (c) 2003-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/optimizer/geqo/geqo_copy.c b/src/backend/optimizer/geqo/geqo_copy.c index 8fd20c5986..111caa2a2a 100644 --- a/src/backend/optimizer/geqo/geqo_copy.c +++ b/src/backend/optimizer/geqo/geqo_copy.c @@ -2,7 +2,7 @@ * * geqo_copy.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/optimizer/geqo/geqo_copy.c diff --git a/src/backend/optimizer/geqo/geqo_eval.c b/src/backend/optimizer/geqo/geqo_eval.c index 3cf268cbd3..9053cfd0b9 100644 --- a/src/backend/optimizer/geqo/geqo_eval.c +++ b/src/backend/optimizer/geqo/geqo_eval.c @@ -3,7 +3,7 @@ * geqo_eval.c * Routines to evaluate query trees * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/optimizer/geqo/geqo_eval.c diff --git a/src/backend/optimizer/geqo/geqo_main.c b/src/backend/optimizer/geqo/geqo_main.c index 86213ac5a0..3eb8bcb76f 100644 --- a/src/backend/optimizer/geqo/geqo_main.c +++ b/src/backend/optimizer/geqo/geqo_main.c @@ -4,7 +4,7 @@ * solution to the query optimization problem * by means of a Genetic Algorithm (GA) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/optimizer/geqo/geqo_main.c diff --git a/src/backend/optimizer/geqo/geqo_misc.c b/src/backend/optimizer/geqo/geqo_misc.c index 937cb5fe0f..919d2889bc 100644 --- a/src/backend/optimizer/geqo/geqo_misc.c +++ b/src/backend/optimizer/geqo/geqo_misc.c @@ -3,7 +3,7 @@ * geqo_misc.c * misc. 
printout and debug stuff * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/optimizer/geqo/geqo_misc.c diff --git a/src/backend/optimizer/geqo/geqo_pool.c b/src/backend/optimizer/geqo/geqo_pool.c index 596a2cda20..b2c9f31c8b 100644 --- a/src/backend/optimizer/geqo/geqo_pool.c +++ b/src/backend/optimizer/geqo/geqo_pool.c @@ -3,7 +3,7 @@ * geqo_pool.c * Genetic Algorithm (GA) pool stuff * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/optimizer/geqo/geqo_pool.c diff --git a/src/backend/optimizer/geqo/geqo_random.c b/src/backend/optimizer/geqo/geqo_random.c index 6f3500649c..850bfe5ebe 100644 --- a/src/backend/optimizer/geqo/geqo_random.c +++ b/src/backend/optimizer/geqo/geqo_random.c @@ -3,7 +3,7 @@ * geqo_random.c * random number generator * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/optimizer/geqo/geqo_random.c diff --git a/src/backend/optimizer/geqo/geqo_selection.c b/src/backend/optimizer/geqo/geqo_selection.c index 4d0f6b0881..ebd34b6db2 100644 --- a/src/backend/optimizer/geqo/geqo_selection.c +++ b/src/backend/optimizer/geqo/geqo_selection.c @@ -3,7 +3,7 @@ * geqo_selection.c * linear selection scheme for the genetic query optimizer * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/optimizer/geqo/geqo_selection.c diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 0e8463e4a3..12a6ee4a22 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -3,7 +3,7 @@ * allpaths.c * Routines to find possible search paths for processing a query * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.c index b4cbc34ef1..f4717942c3 100644 --- a/src/backend/optimizer/path/clausesel.c +++ b/src/backend/optimizer/path/clausesel.c @@ -3,7 +3,7 @@ * clausesel.c * Routines to compute clause selectivities * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index c3daacd3ea..7903b2cb16 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -60,7 +60,7 @@ * values. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/optimizer/path/equivclass.c b/src/backend/optimizer/path/equivclass.c index 45a6889b8b..70a925c63a 100644 --- a/src/backend/optimizer/path/equivclass.c +++ b/src/backend/optimizer/path/equivclass.c @@ -6,7 +6,7 @@ * See src/backend/optimizer/README for discussion of EquivalenceClasses. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c index 18f6bafcdd..7fc70804f8 100644 --- a/src/backend/optimizer/path/indxpath.c +++ b/src/backend/optimizer/path/indxpath.c @@ -4,7 +4,7 @@ * Routines to determine which indexes are usable for scanning a * given relation, and create Paths accordingly. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/path/joinpath.c b/src/backend/optimizer/path/joinpath.c index e774130ac8..396ee2747a 100644 --- a/src/backend/optimizer/path/joinpath.c +++ b/src/backend/optimizer/path/joinpath.c @@ -3,7 +3,7 @@ * joinpath.c * Routines to find all possible paths for processing a set of joins * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/path/joinrels.c b/src/backend/optimizer/path/joinrels.c index 5e03f8bc21..1d152c514e 100644 --- a/src/backend/optimizer/path/joinrels.c +++ b/src/backend/optimizer/path/joinrels.c @@ -3,7 +3,7 @@ * joinrels.c * Routines to determine which relations should be joined * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/path/pathkeys.c b/src/backend/optimizer/path/pathkeys.c index c6870d314e..ef58cff28d 100644 --- a/src/backend/optimizer/path/pathkeys.c +++ b/src/backend/optimizer/path/pathkeys.c @@ -7,7 +7,7 @@ * the nature and use of path keys. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/optimizer/path/tidpath.c b/src/backend/optimizer/path/tidpath.c index a2fe661075..3bb5b8def6 100644 --- a/src/backend/optimizer/path/tidpath.c +++ b/src/backend/optimizer/path/tidpath.c @@ -25,7 +25,7 @@ * for that. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c index 5783f90b62..ef25fefa45 100644 --- a/src/backend/optimizer/plan/analyzejoins.c +++ b/src/backend/optimizer/plan/analyzejoins.c @@ -11,7 +11,7 @@ * is that we have to work harder to clean up after ourselves when we modify * the query, since the derived data structures have to be updated too. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index 1a9fd82900..e599283d6b 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -5,7 +5,7 @@ * Planning is complete, we just need to convert the selected * Path into a Plan. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c index 448cb73467..a436b53806 100644 --- a/src/backend/optimizer/plan/initsplan.c +++ b/src/backend/optimizer/plan/initsplan.c @@ -3,7 +3,7 @@ * initsplan.c * Target list, qualification, joininfo initialization routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c index 889e8af33b..95cbffbd69 100644 --- a/src/backend/optimizer/plan/planagg.c +++ b/src/backend/optimizer/plan/planagg.c @@ -17,7 +17,7 @@ * scan all the rows anyway. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/plan/planmain.c b/src/backend/optimizer/plan/planmain.c index f4e0a6ea3d..7a34abca04 100644 --- a/src/backend/optimizer/plan/planmain.c +++ b/src/backend/optimizer/plan/planmain.c @@ -9,7 +9,7 @@ * shorn of features like subselects, inheritance, aggregates, grouping, * and so on. (Those are the things planner.c deals with.) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 382791fadb..7b52dadd81 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -3,7 +3,7 @@ * planner.c * The query optimizer external interface. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c index b5c41241d7..4617d12cb9 100644 --- a/src/backend/optimizer/plan/setrefs.c +++ b/src/backend/optimizer/plan/setrefs.c @@ -4,7 +4,7 @@ * Post-processing of a completed plan tree: fix references to subplan * vars, compute regproc values for operators, etc * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c index 2e3abeea3d..46367cba63 100644 --- a/src/backend/optimizer/plan/subselect.c +++ b/src/backend/optimizer/plan/subselect.c @@ -3,7 +3,7 @@ * subselect.c * Planning routines for subselects and parameters. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c index 1d7e4994f5..0e2a220ad0 100644 --- a/src/backend/optimizer/prep/prepjointree.c +++ b/src/backend/optimizer/prep/prepjointree.c @@ -12,7 +12,7 @@ * reduce_outer_joins * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/prep/prepqual.c b/src/backend/optimizer/prep/prepqual.c index f75b3274ad..cb1f4853c3 100644 --- a/src/backend/optimizer/prep/prepqual.c +++ b/src/backend/optimizer/prep/prepqual.c @@ -19,7 +19,7 @@ * tree after local transformations that might introduce nested AND/ORs. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/prep/preptlist.c b/src/backend/optimizer/prep/preptlist.c index 3abea92335..8603feef2b 100644 --- a/src/backend/optimizer/prep/preptlist.c +++ b/src/backend/optimizer/prep/preptlist.c @@ -29,7 +29,7 @@ * that because it's faster in typical non-inherited cases. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index f87849ea47..5a08e75ad5 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -17,7 +17,7 @@ * append relations, and thenceforth share code with the UNION ALL case. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index 9ca384db51..bcdf7d624b 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -3,7 +3,7 @@ * clauses.c * routines to manipulate qualification clauses * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/joininfo.c b/src/backend/optimizer/util/joininfo.c index 62629ee7d8..3aaa004275 100644 --- a/src/backend/optimizer/util/joininfo.c +++ b/src/backend/optimizer/util/joininfo.c @@ -3,7 +3,7 @@ * joininfo.c * joininfo list manipulation routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/orclauses.c b/src/backend/optimizer/util/orclauses.c index 9aa661c909..1e78028abe 100644 --- a/src/backend/optimizer/util/orclauses.c +++ b/src/backend/optimizer/util/orclauses.c @@ -3,7 +3,7 @@ * orclauses.c * Routines to extract restriction OR clauses from join OR clauses * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 2aee156ad3..7df8761710 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -3,7 +3,7 @@ * pathnode.c * Routines to manipulate pathlists and create path nodes * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/placeholder.c b/src/backend/optimizer/util/placeholder.c index 864b2796cc..c79d0f25d4 100644 --- a/src/backend/optimizer/util/placeholder.c +++ b/src/backend/optimizer/util/placeholder.c @@ -4,7 +4,7 @@ * PlaceHolderVar and PlaceHolderInfo manipulation routines * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index f7438714c4..8c60b35068 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -4,7 +4,7 @@ * routines for accessing the system catalogs * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/predtest.c b/src/backend/optimizer/util/predtest.c index 134460cc13..8106010329 100644 --- a/src/backend/optimizer/util/predtest.c +++ b/src/backend/optimizer/util/predtest.c 
@@ -4,7 +4,7 @@ * Routines to attempt to prove logical implications between predicate * expressions. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c index 674cfc6b06..ac5a7c9553 100644 --- a/src/backend/optimizer/util/relnode.c +++ b/src/backend/optimizer/util/relnode.c @@ -3,7 +3,7 @@ * relnode.c * Relation-node lookup/construction routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/restrictinfo.c b/src/backend/optimizer/util/restrictinfo.c index 39b52aecc5..1075dde40c 100644 --- a/src/backend/optimizer/util/restrictinfo.c +++ b/src/backend/optimizer/util/restrictinfo.c @@ -3,7 +3,7 @@ * restrictinfo.c * RestrictInfo node manipulation routines. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c index 9345891380..32160d5716 100644 --- a/src/backend/optimizer/util/tlist.c +++ b/src/backend/optimizer/util/tlist.c @@ -3,7 +3,7 @@ * tlist.c * Target list manipulation routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c index 81c60dce5e..b16b1e4656 100644 --- a/src/backend/optimizer/util/var.c +++ b/src/backend/optimizer/util/var.c @@ -9,7 +9,7 @@ * contains variables. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c index d680d2285c..e7b2bc7e73 100644 --- a/src/backend/parser/analyze.c +++ b/src/backend/parser/analyze.c @@ -14,7 +14,7 @@ * contain optimizable statements, which we should transform. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/parser/analyze.c diff --git a/src/backend/parser/check_keywords.pl b/src/backend/parser/check_keywords.pl index 6eb0aea96b..d8ebddec9f 100644 --- a/src/backend/parser/check_keywords.pl +++ b/src/backend/parser/check_keywords.pl @@ -4,7 +4,7 @@ # Usage: check_keywords.pl gram.y kwlist.h # src/backend/parser/check_keywords.pl -# Copyright (c) 2009-2017, PostgreSQL Global Development Group +# Copyright (c) 2009-2018, PostgreSQL Global Development Group use warnings; use strict; diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index ebfc94f896..16923e853a 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -6,7 +6,7 @@ * gram.y * POSTGRESQL BISON rules/actions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c index 4c4f4cdc3d..6a9f1b0217 100644 --- a/src/backend/parser/parse_agg.c +++ b/src/backend/parser/parse_agg.c @@ -3,7 +3,7 @@ * parse_agg.c * handle aggregates and window functions in parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c index 2828bbf796..9fbcfd4fa6 100644 --- a/src/backend/parser/parse_clause.c +++ b/src/backend/parser/parse_clause.c @@ -3,7 +3,7 @@ * parse_clause.c * handle clauses in parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_coerce.c b/src/backend/parser/parse_coerce.c index 7d10d74a3e..085aa8766c 100644 --- a/src/backend/parser/parse_coerce.c +++ b/src/backend/parser/parse_coerce.c @@ -3,7 +3,7 @@ * parse_coerce.c * handle type coercions/conversions for parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_collate.c b/src/backend/parser/parse_collate.c index 0d106c4c19..6d34245083 100644 --- a/src/backend/parser/parse_collate.c +++ b/src/backend/parser/parse_collate.c @@ -29,7 +29,7 @@ * at runtime. If we knew exactly which functions require collation * information, we could throw those errors at parse time instead. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_cte.c b/src/backend/parser/parse_cte.c index 5160fdb0e0..d28c421b6f 100644 --- a/src/backend/parser/parse_cte.c +++ b/src/backend/parser/parse_cte.c @@ -3,7 +3,7 @@ * parse_cte.c * handle CTEs (common table expressions) in parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_enr.c b/src/backend/parser/parse_enr.c index 1cfcf65a51..069249b732 100644 --- a/src/backend/parser/parse_enr.c +++ b/src/backend/parser/parse_enr.c @@ -3,7 +3,7 @@ * parse_enr.c * parser support routines dealing with ephemeral named relations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c index 29f9da796f..b2f5e46e3b 100644 --- a/src/backend/parser/parse_expr.c +++ b/src/backend/parser/parse_expr.c @@ -3,7 +3,7 @@ * parse_expr.c * handle expressions in parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c index e6b085637b..ffae0f3cf3 100644 --- a/src/backend/parser/parse_func.c +++ b/src/backend/parser/parse_func.c @@ -3,7 +3,7 @@ * parse_func.c * handle function calls in parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_node.c b/src/backend/parser/parse_node.c index 6dbad53a41..d2672882d7 100644 --- a/src/backend/parser/parse_node.c +++ b/src/backend/parser/parse_node.c @@ -3,7 +3,7 @@ * parse_node.c * various routines that make nodes for querytrees * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_oper.c b/src/backend/parser/parse_oper.c index 568eda0cf7..b279e1236a 100644 --- a/src/backend/parser/parse_oper.c +++ b/src/backend/parser/parse_oper.c @@ -3,7 +3,7 @@ * parse_oper.c * handle operator things for parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_param.c b/src/backend/parser/parse_param.c index 3e04e8c4d1..454a3e07f7 100644 --- a/src/backend/parser/parse_param.c +++ b/src/backend/parser/parse_param.c @@ -12,7 +12,7 @@ * Note that other approaches to parameters are possible using the parser * hooks defined in ParseState. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index 58bdb23c4e..2625da5327 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -3,7 +3,7 @@ * parse_relation.c * parser support routines dealing with relations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_target.c b/src/backend/parser/parse_target.c index 21593b249f..ea209cdab6 100644 --- a/src/backend/parser/parse_target.c +++ b/src/backend/parser/parse_target.c @@ -3,7 +3,7 @@ * parse_target.c * handle target lists * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_type.c b/src/backend/parser/parse_type.c index b032651cf4..d959b6122a 100644 --- a/src/backend/parser/parse_type.c +++ b/src/backend/parser/parse_type.c @@ -3,7 +3,7 @@ * parse_type.c * handle type operations for parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index f67379f8ed..128f1679c6 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -16,7 +16,7 @@ * a quick copyObject() call before manipulating the query tree. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/parser/parse_utilcmd.c diff --git a/src/backend/parser/parser.c b/src/backend/parser/parser.c index 245b4cda3b..db30483459 100644 --- a/src/backend/parser/parser.c +++ b/src/backend/parser/parser.c @@ -10,7 +10,7 @@ * analyze.c and related files. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/parser/scan.l b/src/backend/parser/scan.l index 6af2199cdc..eedef7c005 100644 --- a/src/backend/parser/scan.l +++ b/src/backend/parser/scan.l @@ -21,7 +21,7 @@ * Postgres 9.2, this check is made automatically by the Makefile.) 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/parser/scansup.c b/src/backend/parser/scansup.c index dff7a04147..9256524b8d 100644 --- a/src/backend/parser/scansup.c +++ b/src/backend/parser/scansup.c @@ -4,7 +4,7 @@ * support routines for the lex/flex scanner, used by both the normal * backend as well as the bootstrap backend * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/port/atomics.c b/src/backend/port/atomics.c index c0c2b31270..e4e4734dd2 100644 --- a/src/backend/port/atomics.c +++ b/src/backend/port/atomics.c @@ -3,7 +3,7 @@ * atomics.c * Non-Inline parts of the atomics implementation * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/port/dynloader/aix.h b/src/backend/port/dynloader/aix.h index 4b1bad6e45..df4f5d5a1a 100644 --- a/src/backend/port/dynloader/aix.h +++ b/src/backend/port/dynloader/aix.h @@ -4,7 +4,7 @@ * prototypes for AIX-specific routines * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/port/dynloader/aix.h diff --git a/src/backend/port/dynloader/cygwin.h b/src/backend/port/dynloader/cygwin.h index 5d819cfd7b..ef05e6b416 100644 --- a/src/backend/port/dynloader/cygwin.h +++ b/src/backend/port/dynloader/cygwin.h @@ -2,7 +2,7 @@ * * Dynamic loader declarations for Cygwin * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/port/dynloader/cygwin.h diff --git a/src/backend/port/dynloader/freebsd.c b/src/backend/port/dynloader/freebsd.c index 23547b06bb..54ee0e8448 100644 --- a/src/backend/port/dynloader/freebsd.c +++ b/src/backend/port/dynloader/freebsd.c @@ -1,5 +1,5 @@ /* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1990 The Regents of the University of California. * All rights reserved. 
* diff --git a/src/backend/port/dynloader/freebsd.h b/src/backend/port/dynloader/freebsd.h index 6faf07f962..d047b1662e 100644 --- a/src/backend/port/dynloader/freebsd.h +++ b/src/backend/port/dynloader/freebsd.h @@ -3,7 +3,7 @@ * freebsd.h * port-specific prototypes for FreeBSD * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/port/dynloader/freebsd.h diff --git a/src/backend/port/dynloader/hpux.c b/src/backend/port/dynloader/hpux.c index 5ab24f8fd9..d82dd7603b 100644 --- a/src/backend/port/dynloader/hpux.c +++ b/src/backend/port/dynloader/hpux.c @@ -3,7 +3,7 @@ * dynloader.c * dynamic loader for HP-UX using the shared library mechanism * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/port/dynloader/hpux.h b/src/backend/port/dynloader/hpux.h index 6c1b367e97..1cbc46960e 100644 --- a/src/backend/port/dynloader/hpux.h +++ b/src/backend/port/dynloader/hpux.h @@ -3,7 +3,7 @@ * dynloader.h * dynamic loader for HP-UX using the shared library mechanism * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/port/dynloader/linux.c b/src/backend/port/dynloader/linux.c index 375ade32e5..8735767add 100644 --- a/src/backend/port/dynloader/linux.c +++ b/src/backend/port/dynloader/linux.c @@ -6,7 +6,7 @@ * * You need to install the dld library on your Linux system! * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/port/dynloader/linux.h b/src/backend/port/dynloader/linux.h index d2c25df033..df2852ac58 100644 --- a/src/backend/port/dynloader/linux.h +++ b/src/backend/port/dynloader/linux.h @@ -4,7 +4,7 @@ * Port-specific prototypes for Linux * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/port/dynloader/linux.h diff --git a/src/backend/port/dynloader/netbsd.c b/src/backend/port/dynloader/netbsd.c index 475d746514..7b8d90caa7 100644 --- a/src/backend/port/dynloader/netbsd.c +++ b/src/backend/port/dynloader/netbsd.c @@ -1,5 +1,5 @@ /* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1990 The Regents of the University of California. * All rights reserved. 
* diff --git a/src/backend/port/dynloader/netbsd.h b/src/backend/port/dynloader/netbsd.h index 2ca332256b..823574abf0 100644 --- a/src/backend/port/dynloader/netbsd.h +++ b/src/backend/port/dynloader/netbsd.h @@ -4,7 +4,7 @@ * port-specific prototypes for NetBSD * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/port/dynloader/netbsd.h diff --git a/src/backend/port/dynloader/openbsd.c b/src/backend/port/dynloader/openbsd.c index 7b481b90d1..2104915c6c 100644 --- a/src/backend/port/dynloader/openbsd.c +++ b/src/backend/port/dynloader/openbsd.c @@ -1,5 +1,5 @@ /* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1990 The Regents of the University of California. * All rights reserved. * diff --git a/src/backend/port/dynloader/openbsd.h b/src/backend/port/dynloader/openbsd.h index 1130f39b41..a184ca8ecd 100644 --- a/src/backend/port/dynloader/openbsd.h +++ b/src/backend/port/dynloader/openbsd.h @@ -3,7 +3,7 @@ * openbsd.h * port-specific prototypes for OpenBSD * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/port/dynloader/openbsd.h diff --git a/src/backend/port/dynloader/solaris.h b/src/backend/port/dynloader/solaris.h index e7638ff0fc..b583c266cf 100644 --- a/src/backend/port/dynloader/solaris.h +++ b/src/backend/port/dynloader/solaris.h @@ -4,7 +4,7 @@ * port-specific prototypes for Solaris * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/port/dynloader/solaris.h diff --git a/src/backend/port/posix_sema.c b/src/backend/port/posix_sema.c index 5719caf9b5..a2cabe58fc 100644 --- a/src/backend/port/posix_sema.c +++ b/src/backend/port/posix_sema.c @@ -15,7 +15,7 @@ * forked backends, but they could not be accessed by exec'd backends. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/port/sysv_sema.c b/src/backend/port/sysv_sema.c index d4202feb56..1c178a7317 100644 --- a/src/backend/port/sysv_sema.c +++ b/src/backend/port/sysv_sema.c @@ -4,7 +4,7 @@ * Implement PGSemaphores using SysV semaphore facilities * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/port/sysv_shmem.c b/src/backend/port/sysv_shmem.c index e8cf6d3e93..741c455ccb 100644 --- a/src/backend/port/sysv_shmem.c +++ b/src/backend/port/sysv_shmem.c @@ -9,7 +9,7 @@ * exist, though, because mmap'd shmem provides no way to find out how * many processes are attached, which we need for interlocking purposes. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/port/tas/sunstudio_sparc.s b/src/backend/port/tas/sunstudio_sparc.s index 73ff315d1a..b041a296ca 100644 --- a/src/backend/port/tas/sunstudio_sparc.s +++ b/src/backend/port/tas/sunstudio_sparc.s @@ -3,7 +3,7 @@ ! sunstudio_sparc.s ! compare and swap for Sun Studio on Sparc ! -! Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +! Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group ! Portions Copyright (c) 1994, Regents of the University of California ! ! IDENTIFICATION diff --git a/src/backend/port/tas/sunstudio_x86.s b/src/backend/port/tas/sunstudio_x86.s index 31934b01d4..af2b8c6a17 100644 --- a/src/backend/port/tas/sunstudio_x86.s +++ b/src/backend/port/tas/sunstudio_x86.s @@ -3,7 +3,7 @@ / sunstudio_x86.s / compare and swap for Sun Studio on x86 / -/ Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +/ Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group / Portions Copyright (c) 1994, Regents of the University of California / / IDENTIFICATION diff --git a/src/backend/port/win32/crashdump.c b/src/backend/port/win32/crashdump.c index f06dfd1987..7b84d22679 100644 --- a/src/backend/port/win32/crashdump.c +++ b/src/backend/port/win32/crashdump.c @@ -28,7 +28,7 @@ * be added, though at the cost of a greater chance of the crash dump failing. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/port/win32/crashdump.c diff --git a/src/backend/port/win32/mingwcompat.c b/src/backend/port/win32/mingwcompat.c index e02b41711e..3577d2538f 100644 --- a/src/backend/port/win32/mingwcompat.c +++ b/src/backend/port/win32/mingwcompat.c @@ -3,7 +3,7 @@ * mingwcompat.c * MinGW compatibility functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/port/win32/mingwcompat.c diff --git a/src/backend/port/win32/signal.c b/src/backend/port/win32/signal.c index 0fd993e3f3..f489cee8bd 100644 --- a/src/backend/port/win32/signal.c +++ b/src/backend/port/win32/signal.c @@ -3,7 +3,7 @@ * signal.c * Microsoft Windows Win32 Signal Emulation Functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/port/win32/signal.c diff --git a/src/backend/port/win32/socket.c b/src/backend/port/win32/socket.c index ba8b863d82..f4356fe1f3 100644 --- a/src/backend/port/win32/socket.c +++ b/src/backend/port/win32/socket.c @@ -3,7 +3,7 @@ * socket.c * Microsoft Windows Win32 Socket Functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/port/win32/socket.c diff --git a/src/backend/port/win32/timer.c b/src/backend/port/win32/timer.c index f0a45f4339..dac105acf5 100644 --- a/src/backend/port/win32/timer.c +++ b/src/backend/port/win32/timer.c @@ -8,7 +8,7 @@ * - Does not support interval timer (value->it_interval) * - Only supports ITIMER_REAL * - * Portions Copyright (c) 
1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/port/win32/timer.c diff --git a/src/backend/port/win32_sema.c b/src/backend/port/win32_sema.c index a798510bbc..a924c59cdb 100644 --- a/src/backend/port/win32_sema.c +++ b/src/backend/port/win32_sema.c @@ -3,7 +3,7 @@ * win32_sema.c * Microsoft Windows Win32 Semaphores Emulation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/port/win32_sema.c diff --git a/src/backend/port/win32_shmem.c b/src/backend/port/win32_shmem.c index 01f51f3158..4991ed46f1 100644 --- a/src/backend/port/win32_shmem.c +++ b/src/backend/port/win32_shmem.c @@ -3,7 +3,7 @@ * win32_shmem.c * Implement shared memory using win32 facilities * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/port/win32_shmem.c diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index 48765bb01b..75c2362f46 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -50,7 +50,7 @@ * there is a window (caused by pgstat delay) on which a worker may choose a * table that was already vacuumed; this is a bug in the current design. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c index 88806ebe6b..f651bb49b1 100644 --- a/src/backend/postmaster/bgworker.c +++ b/src/backend/postmaster/bgworker.c @@ -2,7 +2,7 @@ * bgworker.c * POSTGRES pluggable background workers implementation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/postmaster/bgworker.c diff --git a/src/backend/postmaster/bgwriter.c b/src/backend/postmaster/bgwriter.c index 9ad74ee977..b7813d7a4f 100644 --- a/src/backend/postmaster/bgwriter.c +++ b/src/backend/postmaster/bgwriter.c @@ -24,7 +24,7 @@ * should be killed by SIGQUIT and then a recovery cycle started. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c index 7e0af10c4d..4b452e7cee 100644 --- a/src/backend/postmaster/checkpointer.c +++ b/src/backend/postmaster/checkpointer.c @@ -26,7 +26,7 @@ * restart needs to be forced.) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/postmaster/fork_process.c b/src/backend/postmaster/fork_process.c index 65145b1205..d3005d9f1d 100644 --- a/src/backend/postmaster/fork_process.c +++ b/src/backend/postmaster/fork_process.c @@ -4,7 +4,7 @@ * EXEC_BACKEND case; it might be extended to do so, but it would be * considerably more complex. 
* - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/postmaster/fork_process.c diff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c index 3d8c02e865..885e85ad8a 100644 --- a/src/backend/postmaster/pgarch.c +++ b/src/backend/postmaster/pgarch.c @@ -14,7 +14,7 @@ * * Initial author: Simon Riggs simon@2ndquadrant.com * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index b502c1bb9b..d13011454c 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -11,7 +11,7 @@ * - Add a pgstat config column to pg_database, so this * entire thing can be enabled/disabled on a per db basis. * - * Copyright (c) 2001-2017, PostgreSQL Global Development Group + * Copyright (c) 2001-2018, PostgreSQL Global Development Group * * src/backend/postmaster/pgstat.c * ---------- diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index 17c7f7e78f..f3ddf828bb 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -32,7 +32,7 @@ * clients. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/postmaster/startup.c b/src/backend/postmaster/startup.c index 4693a1bb86..38300527a5 100644 --- a/src/backend/postmaster/startup.c +++ b/src/backend/postmaster/startup.c @@ -9,7 +9,7 @@ * though.) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c index aeb117796d..f70eea37df 100644 --- a/src/backend/postmaster/syslogger.c +++ b/src/backend/postmaster/syslogger.c @@ -13,7 +13,7 @@ * * Author: Andreas Pflug * - * Copyright (c) 2004-2017, PostgreSQL Global Development Group + * Copyright (c) 2004-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/postmaster/walwriter.c b/src/backend/postmaster/walwriter.c index 7b89e02428..4fa3a3b0aa 100644 --- a/src/backend/postmaster/walwriter.c +++ b/src/backend/postmaster/walwriter.c @@ -31,7 +31,7 @@ * should be killed by SIGQUIT and then a recovery cycle started. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/regex/regc_pg_locale.c b/src/backend/regex/regc_pg_locale.c index e39ee7ae09..acbed2eeed 100644 --- a/src/backend/regex/regc_pg_locale.c +++ b/src/backend/regex/regc_pg_locale.c @@ -6,7 +6,7 @@ * * This file is #included by regcomp.c; it's not meant to compile standalone. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/regex/regexport.c b/src/backend/regex/regexport.c index febf2adaec..81b3bf1fc8 100644 --- a/src/backend/regex/regexport.c +++ b/src/backend/regex/regexport.c @@ -15,7 +15,7 @@ * allows the caller to decide how big is too big to bother with. * * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1998, 1999 Henry Spencer * * IDENTIFICATION diff --git a/src/backend/regex/regprefix.c b/src/backend/regex/regprefix.c index 96ca0e1ed3..6bf7c77f98 100644 --- a/src/backend/regex/regprefix.c +++ b/src/backend/regex/regprefix.c @@ -4,7 +4,7 @@ * Extract a common prefix, if any, from a compiled regex. * * - * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1998, 1999 Henry Spencer * * IDENTIFICATION diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c index 05ca95bac2..dd7ad64862 100644 --- a/src/backend/replication/basebackup.c +++ b/src/backend/replication/basebackup.c @@ -3,7 +3,7 @@ * basebackup.c * code for taking a base backup and streaming it to a standby * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/basebackup.c diff --git a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c index 3957bd37fb..f9aec0531a 100644 --- a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c +++ b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c @@ -6,7 +6,7 @@ * loaded as a dynamic module to avoid linking the main server binary with * libpq. * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c index 486fd0c988..537eba7875 100644 --- a/src/backend/replication/logical/decode.c +++ b/src/backend/replication/logical/decode.c @@ -16,7 +16,7 @@ * contents of records in here except turning them into a more usable * format. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c index 24be3cef53..2da9129562 100644 --- a/src/backend/replication/logical/launcher.c +++ b/src/backend/replication/logical/launcher.c @@ -2,7 +2,7 @@ * launcher.c * PostgreSQL logical replication worker launcher process * - * Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Copyright (c) 2016-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logical/launcher.c diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c index bca585fc27..2fc9d7d70f 100644 --- a/src/backend/replication/logical/logical.c +++ b/src/backend/replication/logical/logical.c @@ -2,7 +2,7 @@ * logical.c * PostgreSQL logical decoding coordination * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logical/logical.c diff --git a/src/backend/replication/logical/logicalfuncs.c b/src/backend/replication/logical/logicalfuncs.c index a3ba2b1266..9aab6e71b2 100644 --- a/src/backend/replication/logical/logicalfuncs.c +++ b/src/backend/replication/logical/logicalfuncs.c @@ -6,7 +6,7 @@ * logical replication slots via SQL. * * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logicalfuncs.c diff --git a/src/backend/replication/logical/message.c b/src/backend/replication/logical/message.c index ef7d6c5cde..0eba74c26a 100644 --- a/src/backend/replication/logical/message.c +++ b/src/backend/replication/logical/message.c @@ -3,7 +3,7 @@ * message.c * Generic logical messages. * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logical/message.c diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c index 55382b4b24..9c30baf544 100644 --- a/src/backend/replication/logical/origin.c +++ b/src/backend/replication/logical/origin.c @@ -3,7 +3,7 @@ * origin.c * Logical replication progress tracking support. 
* - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logical/origin.c diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c index 9b126b2957..948343e4ae 100644 --- a/src/backend/replication/logical/proto.c +++ b/src/backend/replication/logical/proto.c @@ -3,7 +3,7 @@ * proto.c * logical replication protocol functions * - * Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Copyright (c) 2015-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logical/proto.c diff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c index 4b2d8a153f..e492d26d18 100644 --- a/src/backend/replication/logical/relation.c +++ b/src/backend/replication/logical/relation.c @@ -2,7 +2,7 @@ * relation.c * PostgreSQL logical replication * - * Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Copyright (c) 2016-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logical/relation.c diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c index 5ac391dbda..fcc41ddb7c 100644 --- a/src/backend/replication/logical/reorderbuffer.c +++ b/src/backend/replication/logical/reorderbuffer.c @@ -4,7 +4,7 @@ * PostgreSQL logical replay/reorder buffer management * * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c index ad65b9831d..5b35f22a32 100644 --- a/src/backend/replication/logical/snapbuild.c +++ b/src/backend/replication/logical/snapbuild.c @@ -107,7 +107,7 @@ * is a convenient point to initialize replication from, which is why we * export a snapshot at that point, which *can* be used to read normal data. 
* - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/snapbuild.c diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c index 4cca0f1a85..e50b9f7905 100644 --- a/src/backend/replication/logical/tablesync.c +++ b/src/backend/replication/logical/tablesync.c @@ -2,7 +2,7 @@ * tablesync.c * PostgreSQL logical replication * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logical/tablesync.c diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c index fa5d9bb120..83c69092ae 100644 --- a/src/backend/replication/logical/worker.c +++ b/src/backend/replication/logical/worker.c @@ -2,7 +2,7 @@ * worker.c * PostgreSQL logical replication worker (apply) * - * Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Copyright (c) 2016-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/logical/worker.c diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c index 550b156e2d..40a1ef3c1d 100644 --- a/src/backend/replication/pgoutput/pgoutput.c +++ b/src/backend/replication/pgoutput/pgoutput.c @@ -3,7 +3,7 @@ * pgoutput.c * Logical Replication output plugin * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/pgoutput/pgoutput.c diff --git a/src/backend/replication/repl_gram.y b/src/backend/replication/repl_gram.y index 6e4ca10116..beb2c2877b 100644 --- a/src/backend/replication/repl_gram.y +++ b/src/backend/replication/repl_gram.y @@ -3,7 +3,7 @@ * * repl_gram.y - Parser for the replication commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/replication/repl_scanner.l b/src/backend/replication/repl_scanner.l index 568d55ac95..fb48241e48 100644 --- a/src/backend/replication/repl_scanner.l +++ b/src/backend/replication/repl_scanner.l @@ -4,7 +4,7 @@ * repl_scanner.l * a lexical scanner for the replication commands * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c index 0d27b6f39e..fc9ef22b0b 100644 --- a/src/backend/replication/slot.c +++ b/src/backend/replication/slot.c @@ -4,7 +4,7 @@ * Replication slot management. 
* * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c index ab776e85d2..b02df593e9 100644 --- a/src/backend/replication/slotfuncs.c +++ b/src/backend/replication/slotfuncs.c @@ -3,7 +3,7 @@ * slotfuncs.c * Support functions for replication slots * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/slotfuncs.c diff --git a/src/backend/replication/syncrep.c b/src/backend/replication/syncrep.c index 962772ef8f..75d2681719 100644 --- a/src/backend/replication/syncrep.c +++ b/src/backend/replication/syncrep.c @@ -63,7 +63,7 @@ * the standbys which are considered as synchronous at that moment * will release waiters from the queue. * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/syncrep.c diff --git a/src/backend/replication/syncrep_gram.y b/src/backend/replication/syncrep_gram.y index cc8622c93d..5d12925ab8 100644 --- a/src/backend/replication/syncrep_gram.y +++ b/src/backend/replication/syncrep_gram.y @@ -3,7 +3,7 @@ * * syncrep_gram.y - Parser for synchronous_standby_names * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/replication/syncrep_scanner.l b/src/backend/replication/syncrep_scanner.l index 1fbc936aa6..5b641d8297 100644 --- a/src/backend/replication/syncrep_scanner.l +++ b/src/backend/replication/syncrep_scanner.l @@ -4,7 +4,7 @@ * syncrep_scanner.l * a lexical scanner for synchronous_standby_names * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c index fe4e085938..a7fc67153a 100644 --- a/src/backend/replication/walreceiver.c +++ b/src/backend/replication/walreceiver.c @@ -33,7 +33,7 @@ * specific parts are in the libpqwalreceiver module. It's loaded * dynamically to avoid linking the server with libpq. * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/replication/walreceiverfuncs.c b/src/backend/replication/walreceiverfuncs.c index b1f28d0fc4..67b1a074cc 100644 --- a/src/backend/replication/walreceiverfuncs.c +++ b/src/backend/replication/walreceiverfuncs.c @@ -6,7 +6,7 @@ * with the walreceiver process. Functions implementing walreceiver itself * are in walreceiver.c. * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index f2e886f99f..9b63c40e8e 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -37,7 +37,7 @@ * record, wait for it to be replicated to the standby, and then exit. 
* * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/replication/walsender.c diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c index 247809927a..abf45fa82b 100644 --- a/src/backend/rewrite/rewriteDefine.c +++ b/src/backend/rewrite/rewriteDefine.c @@ -3,7 +3,7 @@ * rewriteDefine.c * routines for defining a rewrite rule * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c index e93552a8f3..32e3798972 100644 --- a/src/backend/rewrite/rewriteHandler.c +++ b/src/backend/rewrite/rewriteHandler.c @@ -3,7 +3,7 @@ * rewriteHandler.c * Primary module of query rewriter. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c index 4ceaf641ba..abad1bf7e4 100644 --- a/src/backend/rewrite/rewriteManip.c +++ b/src/backend/rewrite/rewriteManip.c @@ -2,7 +2,7 @@ * * rewriteManip.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/rewrite/rewriteRemove.c b/src/backend/rewrite/rewriteRemove.c index e6f5ed575f..07de85b8a3 100644 --- a/src/backend/rewrite/rewriteRemove.c +++ b/src/backend/rewrite/rewriteRemove.c @@ -3,7 +3,7 @@ * rewriteRemove.c * routines for removing rewrite rules * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/rewrite/rewriteSupport.c b/src/backend/rewrite/rewriteSupport.c index ce9c8e3793..ab291a43e2 100644 --- a/src/backend/rewrite/rewriteSupport.c +++ b/src/backend/rewrite/rewriteSupport.c @@ -3,7 +3,7 @@ * rewriteSupport.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/rewrite/rowsecurity.c b/src/backend/rewrite/rowsecurity.c index 5bd33f7ba4..ce77a18bc9 100644 --- a/src/backend/rewrite/rowsecurity.c +++ b/src/backend/rewrite/rowsecurity.c @@ -29,7 +29,7 @@ * in the current environment, but that may change if the row_security GUC or * the current role changes. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California */ #include "postgres.h" diff --git a/src/backend/snowball/dict_snowball.c b/src/backend/snowball/dict_snowball.c index 42384b42b1..043681ec2d 100644 --- a/src/backend/snowball/dict_snowball.c +++ b/src/backend/snowball/dict_snowball.c @@ -3,7 +3,7 @@ * dict_snowball.c * Snowball dictionary * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/snowball/dict_snowball.c diff --git a/src/backend/snowball/snowball.sql.in b/src/backend/snowball/snowball.sql.in index 844822fddb..27550dfc9a 100644 --- a/src/backend/snowball/snowball.sql.in +++ b/src/backend/snowball/snowball.sql.in @@ -1,7 +1,7 @@ /* * text search configuration for _LANGNAME_ language * - * Copyright (c) 2007-2017, PostgreSQL Global Development Group + * Copyright (c) 2007-2018, PostgreSQL Global Development Group * * src/backend/snowball/snowball.sql.in * diff --git a/src/backend/snowball/snowball_func.sql.in b/src/backend/snowball/snowball_func.sql.in index 90fa000c6d..c02dad43e3 100644 --- a/src/backend/snowball/snowball_func.sql.in +++ b/src/backend/snowball/snowball_func.sql.in @@ -1,7 +1,7 @@ /* * Create underlying C functions for Snowball stemmers * - * Copyright (c) 2007-2017, PostgreSQL Global Development Group + * Copyright (c) 2007-2018, PostgreSQL Global Development Group * * src/backend/snowball/snowball_func.sql.in * diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c index ae0f304dba..d7305b7ab3 100644 --- a/src/backend/statistics/dependencies.c +++ b/src/backend/statistics/dependencies.c @@ -3,7 +3,7 @@ * dependencies.c * POSTGRES functional dependencies * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c index 26c2aedd36..d34d5c3d12 100644 --- a/src/backend/statistics/extended_stats.c +++ b/src/backend/statistics/extended_stats.c @@ -6,7 +6,7 @@ * Generic code supporting statistics objects created via CREATE STATISTICS. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/statistics/mvdistinct.c b/src/backend/statistics/mvdistinct.c index 913829233e..24e74d7a1b 100644 --- a/src/backend/statistics/mvdistinct.c +++ b/src/backend/statistics/mvdistinct.c @@ -13,7 +13,7 @@ * estimates are already available in pg_statistic. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c index 147fced852..144a2cee6f 100644 --- a/src/backend/storage/buffer/buf_init.c +++ b/src/backend/storage/buffer/buf_init.c @@ -3,7 +3,7 @@ * buf_init.c * buffer manager initialization routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/storage/buffer/buf_table.c b/src/backend/storage/buffer/buf_table.c index bccda066f9..88155204fd 100644 --- a/src/backend/storage/buffer/buf_table.c +++ b/src/backend/storage/buffer/buf_table.c @@ -10,7 +10,7 @@ * before the lock is released (see notes in README). * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c index 26df7cb38f..4e44336332 100644 --- a/src/backend/storage/buffer/bufmgr.c +++ b/src/backend/storage/buffer/bufmgr.c @@ -3,7 +3,7 @@ * bufmgr.c * buffer manager interface routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c index f033323cff..18493dca17 100644 --- a/src/backend/storage/buffer/freelist.c +++ b/src/backend/storage/buffer/freelist.c @@ -4,7 +4,7 @@ * routines for managing the buffer pool's replacement strategy. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/storage/buffer/localbuf.c b/src/backend/storage/buffer/localbuf.c index 1930f0ee0b..e4146a260a 100644 --- a/src/backend/storage/buffer/localbuf.c +++ b/src/backend/storage/buffer/localbuf.c @@ -4,7 +4,7 @@ * local buffer manager. Fast buffer manager for temporary tables, * which never need to be WAL-logged or checkpointed, etc. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994-5, Regents of the University of California * * diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c index c6b210d137..4de6121ab9 100644 --- a/src/backend/storage/file/buffile.c +++ b/src/backend/storage/file/buffile.c @@ -3,7 +3,7 @@ * buffile.c * Management of large buffered temporary files. 
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/file/copydir.c b/src/backend/storage/file/copydir.c
index d169e9c8bb..ca6342db0d 100644
--- a/src/backend/storage/file/copydir.c
+++ b/src/backend/storage/file/copydir.c
@@ -3,7 +3,7 @@
  * copydir.c
  * copies a directory
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * While "xcopy /e /i /q" works fine for copying directories, on Windows XP
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index f449ee5c51..b5c7028618 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -3,7 +3,7 @@
  * fd.c
  * Virtual file descriptor code.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/file/reinit.c b/src/backend/storage/file/reinit.c
index 00678cb182..92363ae6ad 100644
--- a/src/backend/storage/file/reinit.c
+++ b/src/backend/storage/file/reinit.c
@@ -3,7 +3,7 @@
  * reinit.c
  * Reinitialization of unlogged relations
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/file/sharedfileset.c b/src/backend/storage/file/sharedfileset.c
index 343b2283f0..0ac8696536 100644
--- a/src/backend/storage/file/sharedfileset.c
+++ b/src/backend/storage/file/sharedfileset.c
@@ -3,7 +3,7 @@
  * sharedfileset.c
  * Shared temporary file management.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/freespace/freespace.c b/src/backend/storage/freespace/freespace.c
index 4648473523..dd8ae52644 100644
--- a/src/backend/storage/freespace/freespace.c
+++ b/src/backend/storage/freespace/freespace.c
@@ -4,7 +4,7 @@
  * POSTGRES free space map for quickly finding free space in relations
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/freespace/fsmpage.c b/src/backend/storage/freespace/fsmpage.c
index 987a2f5e53..2b382417c5 100644
--- a/src/backend/storage/freespace/fsmpage.c
+++ b/src/backend/storage/freespace/fsmpage.c
@@ -4,7 +4,7 @@
  * routines to search and manipulate one FSM page.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/freespace/indexfsm.c b/src/backend/storage/freespace/indexfsm.c
index 5cfbd4c867..e21047b96f 100644
--- a/src/backend/storage/freespace/indexfsm.c
+++ b/src/backend/storage/freespace/indexfsm.c
@@ -4,7 +4,7 @@
  * POSTGRES free space map for quickly finding free pages in relations
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/ipc/barrier.c b/src/backend/storage/ipc/barrier.c
index 7dde932738..00ab57c0f6 100644
--- a/src/backend/storage/ipc/barrier.c
+++ b/src/backend/storage/ipc/barrier.c
@@ -3,7 +3,7 @@
  * barrier.c
  * Barriers for synchronizing cooperating processes.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * From Wikipedia[1]: "In parallel computing, a barrier is a type of
diff --git a/src/backend/storage/ipc/dsm.c b/src/backend/storage/ipc/dsm.c
index a2efdb2c64..f1f75b73f5 100644
--- a/src/backend/storage/ipc/dsm.c
+++ b/src/backend/storage/ipc/dsm.c
@@ -14,7 +14,7 @@
  * hard postmaster crash, remaining segments will be removed, if they
  * still exist, at the next postmaster startup.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/dsm_impl.c b/src/backend/storage/ipc/dsm_impl.c
index 6c4e60b0b0..67e76b98fe 100644
--- a/src/backend/storage/ipc/dsm_impl.c
+++ b/src/backend/storage/ipc/dsm_impl.c
@@ -36,7 +36,7 @@
  *
  * As ever, Windows requires its own implementation.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/ipc.c b/src/backend/storage/ipc/ipc.c
index dfb47e7c39..2de35efbd4 100644
--- a/src/backend/storage/ipc/ipc.c
+++ b/src/backend/storage/ipc/ipc.c
@@ -8,7 +8,7 @@
  * exit-time cleanup for either a postmaster or a backend.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/ipci.c b/src/backend/storage/ipc/ipci.c
index 2d1ed143e0..0c86a581c0 100644
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -3,7 +3,7 @@
  * ipci.c
  * POSTGRES inter-process communication initialization code.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 4eb6e83682..e6706f7fb8 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -22,7 +22,7 @@
  * The Windows implementation uses Windows events that are inherited by all
  * postmaster child processes. There's no need for the self-pipe trick there.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/ipc/pmsignal.c b/src/backend/storage/ipc/pmsignal.c
index 85db6b21f8..be61858fc6 100644
--- a/src/backend/storage/ipc/pmsignal.c
+++ b/src/backend/storage/ipc/pmsignal.c
@@ -4,7 +4,7 @@
  * routines for signaling the postmaster from its child processes
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index d87799cb96..40451c9315 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -32,7 +32,7 @@
  * happen, it would tie up KnownAssignedXids indefinitely, so we protect
  * ourselves by pruning the array when a valid list of running XIDs arrives.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/procsignal.c b/src/backend/storage/ipc/procsignal.c
index b9302ac630..b0dd7d1b37 100644
--- a/src/backend/storage/ipc/procsignal.c
+++ b/src/backend/storage/ipc/procsignal.c
@@ -4,7 +4,7 @@
  * Routines for interprocess signalling
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/ipc/shm_mq.c b/src/backend/storage/ipc/shm_mq.c
index 770559a03e..1131e27e2e 100644
--- a/src/backend/storage/ipc/shm_mq.c
+++ b/src/backend/storage/ipc/shm_mq.c
@@ -8,7 +8,7 @@
  * and only the receiver may receive. This is intended to allow a user
  * backend to communicate with worker backends that it has registered.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/storage/shm_mq.h
diff --git a/src/backend/storage/ipc/shm_toc.c b/src/backend/storage/ipc/shm_toc.c
index 958397884d..2abd140a96 100644
--- a/src/backend/storage/ipc/shm_toc.c
+++ b/src/backend/storage/ipc/shm_toc.c
@@ -3,7 +3,7 @@
  * shm_toc.c
  * shared memory segment table of contents
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/storage/ipc/shm_toc.c
diff --git a/src/backend/storage/ipc/shmem.c b/src/backend/storage/ipc/shmem.c
index 22522676f3..7893c01983 100644
--- a/src/backend/storage/ipc/shmem.c
+++ b/src/backend/storage/ipc/shmem.c
@@ -3,7 +3,7 @@
  * shmem.c
  * create shared memory and initialize shared memory data structures.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/shmqueue.c b/src/backend/storage/ipc/shmqueue.c
index da67495b03..598d9ebcfb 100644
--- a/src/backend/storage/ipc/shmqueue.c
+++ b/src/backend/storage/ipc/shmqueue.c
@@ -3,7 +3,7 @@
  * shmqueue.c
  * shared memory linked lists
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/sinval.c b/src/backend/storage/ipc/sinval.c
index b9a73e9247..563e2906e3 100644
--- a/src/backend/storage/ipc/sinval.c
+++ b/src/backend/storage/ipc/sinval.c
@@ -3,7 +3,7 @@
  * sinval.c
  * POSTGRES shared cache invalidation communication code.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/sinvaladt.c b/src/backend/storage/ipc/sinvaladt.c
index 0d517a00e6..569f1058fa 100644
--- a/src/backend/storage/ipc/sinvaladt.c
+++ b/src/backend/storage/ipc/sinvaladt.c
@@ -3,7 +3,7 @@
  * sinvaladt.c
  * POSTGRES shared cache invalidation data manager.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/ipc/standby.c b/src/backend/storage/ipc/standby.c
index d491ece60a..44ed20992e 100644
--- a/src/backend/storage/ipc/standby.c
+++ b/src/backend/storage/ipc/standby.c
@@ -7,7 +7,7 @@
  * AccessExclusiveLocks and starting snapshots for Hot Standby mode.
  * Plus conflict recovery processing.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/large_object/inv_api.c b/src/backend/storage/large_object/inv_api.c
index 12940e5075..7eac0724bb 100644
--- a/src/backend/storage/large_object/inv_api.c
+++ b/src/backend/storage/large_object/inv_api.c
@@ -19,7 +19,7 @@
  * memory context given to inv_open (for LargeObjectDesc structs).
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c
index b4b7d28dd5..41378d614a 100644
--- a/src/backend/storage/lmgr/condition_variable.c
+++ b/src/backend/storage/lmgr/condition_variable.c
@@ -8,7 +8,7 @@
  * interrupted, unlike LWLock waits. Condition variables are safe
  * to use within dynamic shared memory segments.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/storage/lmgr/condition_variable.c
diff --git a/src/backend/storage/lmgr/deadlock.c b/src/backend/storage/lmgr/deadlock.c
index 968e6c0e6d..aeaf1f3ab4 100644
--- a/src/backend/storage/lmgr/deadlock.c
+++ b/src/backend/storage/lmgr/deadlock.c
@@ -7,7 +7,7 @@
  * detection and resolution algorithms.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/lmgr/generate-lwlocknames.pl b/src/backend/storage/lmgr/generate-lwlocknames.pl
index 10d069896f..6cf71c10c4 100644
--- a/src/backend/storage/lmgr/generate-lwlocknames.pl
+++ b/src/backend/storage/lmgr/generate-lwlocknames.pl
@@ -1,7 +1,7 @@
 #!/usr/bin/perl
 #
 # Generate lwlocknames.h and lwlocknames.c from lwlocknames.txt
-# Copyright (c) 2000-2017, PostgreSQL Global Development Group
+# Copyright (c) 2000-2018, PostgreSQL Global Development Group

 use warnings;
 use strict;
diff --git a/src/backend/storage/lmgr/lmgr.c b/src/backend/storage/lmgr/lmgr.c
index da5679b7a3..3754283d2b 100644
--- a/src/backend/storage/lmgr/lmgr.c
+++ b/src/backend/storage/lmgr/lmgr.c
@@ -3,7 +3,7 @@
  * lmgr.c
  * POSTGRES lock manager code
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 5833086c62..dc3d8d9817 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -3,7 +3,7 @@
  * lock.c
  * POSTGRES primary lock mechanism
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index eab98b0760..71caac1a1f 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -20,7 +20,7 @@
  * appropriate value for a free lock. The meaning of the variable is up to
  * the caller, the lightweight lock code just assigns and compares it.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c
index 251a359bff..d1ff2b1edc 100644
--- a/src/backend/storage/lmgr/predicate.c
+++ b/src/backend/storage/lmgr/predicate.c
@@ -127,7 +127,7 @@
  * - Protects both PredXact and SerializableXidHash.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 5f6727d501..6f30e082b2 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -3,7 +3,7 @@
  * proc.c
  * routines to manage per-process shared memory data structure
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/lmgr/s_lock.c b/src/backend/storage/lmgr/s_lock.c
index 247d7b8199..7482b2d44b 100644
--- a/src/backend/storage/lmgr/s_lock.c
+++ b/src/backend/storage/lmgr/s_lock.c
@@ -36,7 +36,7 @@
  * the probability of unintended failure) than to fix the total time
  * spent.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/lmgr/spin.c b/src/backend/storage/lmgr/spin.c
index 2c16938f5a..6d59a7f15d 100644
--- a/src/backend/storage/lmgr/spin.c
+++ b/src/backend/storage/lmgr/spin.c
@@ -11,7 +11,7 @@
  * is too slow to be very useful :-(
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index b6aa2af818..dfbda5458f 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -3,7 +3,7 @@
  * bufpage.c
  * POSTGRES standard buffer page code.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/page/checksum.c b/src/backend/storage/page/checksum.c
index b96440ae54..a28343b79e 100644
--- a/src/backend/storage/page/checksum.c
+++ b/src/backend/storage/page/checksum.c
@@ -3,7 +3,7 @@
  * checksum.c
  * Checksum implementation for data pages.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/storage/page/itemptr.c b/src/backend/storage/page/itemptr.c
index b872928f2f..a899dac645 100644
--- a/src/backend/storage/page/itemptr.c
+++ b/src/backend/storage/page/itemptr.c
@@ -3,7 +3,7 @@
  * itemptr.c
  * POSTGRES disk item pointer code.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c
index 64a4ccf0db..bb96881cad 100644
--- a/src/backend/storage/smgr/md.c
+++ b/src/backend/storage/smgr/md.c
@@ -10,7 +10,7 @@
  * It doesn't matter whether the bits are on spinning rust or some other
  * storage technology.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/smgr/smgr.c b/src/backend/storage/smgr/smgr.c
index ac4ce9ce62..08f06bade2 100644
--- a/src/backend/storage/smgr/smgr.c
+++ b/src/backend/storage/smgr/smgr.c
@@ -6,7 +6,7 @@
  * All file system operations in POSTGRES dispatch through these
  * routines.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/storage/smgr/smgrtype.c b/src/backend/storage/smgr/smgrtype.c
index dc81fe81cf..e819f2e514 100644
--- a/src/backend/storage/smgr/smgrtype.c
+++ b/src/backend/storage/smgr/smgrtype.c
@@ -3,7 +3,7 @@
  * smgrtype.c
  * storage manager type
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/tcop/dest.c b/src/backend/tcop/dest.c
index 28081c3765..c95a4d519d 100644
--- a/src/backend/tcop/dest.c
+++ b/src/backend/tcop/dest.c
@@ -4,7 +4,7 @@
  * support for communication destinations
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/tcop/fastpath.c b/src/backend/tcop/fastpath.c
index a434f7f857..3e531977db 100644
--- a/src/backend/tcop/fastpath.c
+++ b/src/backend/tcop/fastpath.c
@@ -3,7 +3,7 @@
  * fastpath.c
  * routines to handle function requests from the frontend
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 1b24dddbce..4654a01eab 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -3,7 +3,7 @@
  * postgres.c
  * POSTGRES C Backend Interface
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/tcop/pquery.c b/src/backend/tcop/pquery.c
index 2f98135a59..9925712768 100644
--- a/src/backend/tcop/pquery.c
+++ b/src/backend/tcop/pquery.c
@@ -3,7 +3,7 @@
  * pquery.c
  * POSTGRES process query command code
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 4da1f8f643..ec98a612ec 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -5,7 +5,7 @@
  * commands. At one time acted as an interface between the Lisp and C
  * systems.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/tsearch/Makefile b/src/backend/tsearch/Makefile
index 34fe4c5b3c..227468ae9e 100644
--- a/src/backend/tsearch/Makefile
+++ b/src/backend/tsearch/Makefile
@@ -2,7 +2,7 @@
 #
 # Makefile for backend/tsearch
 #
-# Copyright (c) 2006-2017, PostgreSQL Global Development Group
+# Copyright (c) 2006-2018, PostgreSQL Global Development Group
 #
 # src/backend/tsearch/Makefile
 #
diff --git a/src/backend/tsearch/dict.c b/src/backend/tsearch/dict.c
index 63655fb592..c44849b7d3 100644
--- a/src/backend/tsearch/dict.c
+++ b/src/backend/tsearch/dict.c
@@ -3,7 +3,7 @@
  * dict.c
  * Standard interface to dictionary
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/dict_ispell.c b/src/backend/tsearch/dict_ispell.c
index 8f61bd2830..0d706795ad 100644
--- a/src/backend/tsearch/dict_ispell.c
+++ b/src/backend/tsearch/dict_ispell.c
@@ -3,7 +3,7 @@
  * dict_ispell.c
  * Ispell dictionary interface
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/dict_simple.c b/src/backend/tsearch/dict_simple.c
index a13cdc0743..268b4e48cf 100644
--- a/src/backend/tsearch/dict_simple.c
+++ b/src/backend/tsearch/dict_simple.c
@@ -3,7 +3,7 @@
  * dict_simple.c
  * Simple dictionary: just lowercase and check for stopword
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/dict_synonym.c b/src/backend/tsearch/dict_synonym.c
index e67d2e6e04..8ca65f3ded 100644
--- a/src/backend/tsearch/dict_synonym.c
+++ b/src/backend/tsearch/dict_synonym.c
@@ -3,7 +3,7 @@
  * dict_synonym.c
  * Synonym dictionary: replace word by its synonym
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/dict_thesaurus.c b/src/backend/tsearch/dict_thesaurus.c
index 2a458db691..23aaac8d07 100644
--- a/src/backend/tsearch/dict_thesaurus.c
+++ b/src/backend/tsearch/dict_thesaurus.c
@@ -3,7 +3,7 @@
  * dict_thesaurus.c
  * Thesaurus dictionary: phrase to phrase substitution
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/regis.c b/src/backend/tsearch/regis.c
index 4a799f2585..cab5a4cbd2 100644
--- a/src/backend/tsearch/regis.c
+++ b/src/backend/tsearch/regis.c
@@ -3,7 +3,7 @@
  * regis.c
  * Fast regex subset
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/spell.c b/src/backend/tsearch/spell.c
index 9a09ffb20a..b9fdd77e19 100644
--- a/src/backend/tsearch/spell.c
+++ b/src/backend/tsearch/spell.c
@@ -3,7 +3,7 @@
  * spell.c
  * Normalizing word with ISpell
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * Ispell dictionary
  * -----------------
diff --git a/src/backend/tsearch/to_tsany.c b/src/backend/tsearch/to_tsany.c
index cf55e3910d..ea5947a3a8 100644
--- a/src/backend/tsearch/to_tsany.c
+++ b/src/backend/tsearch/to_tsany.c
@@ -3,7 +3,7 @@
  * to_tsany.c
  * to_ts* function definitions
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/ts_locale.c b/src/backend/tsearch/ts_locale.c
index 7a6f0bc722..a114d6635b 100644
--- a/src/backend/tsearch/ts_locale.c
+++ b/src/backend/tsearch/ts_locale.c
@@ -3,7 +3,7 @@
  * ts_locale.c
  * locale compatibility layer for tsearch
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index ad5dddff4b..7b69ef5660 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -3,7 +3,7 @@
  * ts_parse.c
  * main parse functions for tsearch
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/ts_selfuncs.c b/src/backend/tsearch/ts_selfuncs.c
index 046f543c01..7ebc3d4641 100644
--- a/src/backend/tsearch/ts_selfuncs.c
+++ b/src/backend/tsearch/ts_selfuncs.c
@@ -3,7 +3,7 @@
  * ts_selfuncs.c
  * Selectivity estimation functions for text search operators.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/ts_typanalyze.c b/src/backend/tsearch/ts_typanalyze.c
index 320c7f1a61..1f93963c66 100644
--- a/src/backend/tsearch/ts_typanalyze.c
+++ b/src/backend/tsearch/ts_typanalyze.c
@@ -3,7 +3,7 @@
  * ts_typanalyze.c
  * functions for gathering statistics from tsvector columns
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index 56d4cf03e5..f6e03aea4f 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -3,7 +3,7 @@
  * ts_utils.c
  * various support functions
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/wparser.c b/src/backend/tsearch/wparser.c
index 523c3edd7d..649245b292 100644
--- a/src/backend/tsearch/wparser.c
+++ b/src/backend/tsearch/wparser.c
@@ -3,7 +3,7 @@
  * wparser.c
  * Standard interface to word parser
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/tsearch/wparser_def.c b/src/backend/tsearch/wparser_def.c
index 8450e1c08e..f0c3441990 100644
--- a/src/backend/tsearch/wparser_def.c
+++ b/src/backend/tsearch/wparser_def.c
@@ -3,7 +3,7 @@
  * wparser_def.c
  * Default text search parser
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/Gen_dummy_probes.pl b/src/backend/utils/Gen_dummy_probes.pl
index e6dc5630b4..0458c43cc3 100644
--- a/src/backend/utils/Gen_dummy_probes.pl
+++ b/src/backend/utils/Gen_dummy_probes.pl
@@ -4,7 +4,7 @@
 # Gen_dummy_probes.pl
 # Perl script that generates probes.h file when dtrace is not available
 #
-# Portions Copyright (c) 2008-2017, PostgreSQL Global Development Group
+# Portions Copyright (c) 2008-2018, PostgreSQL Global Development Group
 #
 #
 # IDENTIFICATION
diff --git a/src/backend/utils/Gen_dummy_probes.sed b/src/backend/utils/Gen_dummy_probes.sed
index 6919e41d3a..023e2d3257 100644
--- a/src/backend/utils/Gen_dummy_probes.sed
+++ b/src/backend/utils/Gen_dummy_probes.sed
@@ -1,7 +1,7 @@
 #-------------------------------------------------------------------------
 # sed script to create dummy probes.h file when dtrace is not available
 #
-# Copyright (c) 2008-2017, PostgreSQL Global Development Group
+# Copyright (c) 2008-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/Gen_dummy_probes.sed
 #-------------------------------------------------------------------------
diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl
index 14c02f5b57..4ae86df1f7 100644
--- a/src/backend/utils/Gen_fmgrtab.pl
+++ b/src/backend/utils/Gen_fmgrtab.pl
@@ -5,7 +5,7 @@
 # Perl script that generates fmgroids.h, fmgrprotos.h, and fmgrtab.c
 # from pg_proc.h
 #
-# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 # Portions Copyright (c) 1994, Regents of the University of California
 #
 #
@@ -112,7 +112,7 @@
  * These macros can be used to avoid a catalog lookup when a specific
  * fmgr-callable function needs to be referenced.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * NOTES
@@ -146,7 +146,7 @@
  * fmgrprotos.h
  * Prototypes for built-in functions.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * NOTES
@@ -172,7 +172,7 @@
  * fmgrtab.c
  * The function manager's table of internal functions.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * NOTES
diff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c
index 2f2758fe91..c11f3dd1cb 100644
--- a/src/backend/utils/adt/acl.c
+++ b/src/backend/utils/adt/acl.c
@@ -3,7 +3,7 @@
  * acl.c
  * Basic access control list data structures manipulation routines.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/amutils.c b/src/backend/utils/adt/amutils.c
index f53b251b30..a6d8feea5b 100644
--- a/src/backend/utils/adt/amutils.c
+++ b/src/backend/utils/adt/amutils.c
@@ -3,7 +3,7 @@
  * amutils.c
  * SQL-level APIs related to index access methods.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/array_expanded.c b/src/backend/utils/adt/array_expanded.c
index 31583f9033..277aaf1368 100644
--- a/src/backend/utils/adt/array_expanded.c
+++ b/src/backend/utils/adt/array_expanded.c
@@ -3,7 +3,7 @@
  * array_expanded.c
  * Basic functions for manipulating expanded arrays.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/array_selfuncs.c b/src/backend/utils/adt/array_selfuncs.c
index 9daf8e73bc..339525b53b 100644
--- a/src/backend/utils/adt/array_selfuncs.c
+++ b/src/backend/utils/adt/array_selfuncs.c
@@ -3,7 +3,7 @@
  * array_selfuncs.c
  * Functions for selectivity estimation of array operators
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/array_typanalyze.c b/src/backend/utils/adt/array_typanalyze.c
index 470ef0c4b0..92e38b870f 100644
--- a/src/backend/utils/adt/array_typanalyze.c
+++ b/src/backend/utils/adt/array_typanalyze.c
@@ -3,7 +3,7 @@
  * array_typanalyze.c
  * Functions for gathering statistics from array columns
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/array_userfuncs.c b/src/backend/utils/adt/array_userfuncs.c
index bb70cba171..cb7a6b6d01 100644
--- a/src/backend/utils/adt/array_userfuncs.c
+++ b/src/backend/utils/adt/array_userfuncs.c
@@ -3,7 +3,7 @@
  * array_userfuncs.c
  * Misc user-visible array support functions
  *
- * Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/array_userfuncs.c
diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c
index ac21241f69..0cbdbe5587 100644
--- a/src/backend/utils/adt/arrayfuncs.c
+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -3,7 +3,7 @@
  * arrayfuncs.c
  * Support functions for arrays.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/arrayutils.c b/src/backend/utils/adt/arrayutils.c
index 27b24703b0..c0d719e98c 100644
--- a/src/backend/utils/adt/arrayutils.c
+++ b/src/backend/utils/adt/arrayutils.c
@@ -3,7 +3,7 @@
  * arrayutils.c
  * This file contains some support routines required for array functions.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/ascii.c b/src/backend/utils/adt/ascii.c
index 362272277f..b26985369a 100644
--- a/src/backend/utils/adt/ascii.c
+++ b/src/backend/utils/adt/ascii.c
@@ -2,7 +2,7 @@
  * ascii.c
  * The PostgreSQL routine for string to ascii conversion.
  *
- * Portions Copyright (c) 1999-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1999-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/ascii.c
diff --git a/src/backend/utils/adt/bool.c b/src/backend/utils/adt/bool.c
index 6c87e21140..2af62e3f2e 100644
--- a/src/backend/utils/adt/bool.c
+++ b/src/backend/utils/adt/bool.c
@@ -3,7 +3,7 @@
  * bool.c
  * Functions for the built-in type "bool".
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/char.c b/src/backend/utils/adt/char.c
index 1b849e8703..d9fec3bb40 100644
--- a/src/backend/utils/adt/char.c
+++ b/src/backend/utils/adt/char.c
@@ -4,7 +4,7 @@
  * Functions for the built-in type "char" (not to be confused with
  * bpchar, which is the SQL CHAR(n) type).
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c
index 307b5e8629..95a999857c 100644
--- a/src/backend/utils/adt/date.c
+++ b/src/backend/utils/adt/date.c
@@ -3,7 +3,7 @@
  * date.c
  * implements DATE and TIME data types specified in SQL standard
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994-5, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/datetime.c b/src/backend/utils/adt/datetime.c
index efa37644fc..8375b93c39 100644
--- a/src/backend/utils/adt/datetime.c
+++ b/src/backend/utils/adt/datetime.c
@@ -3,7 +3,7 @@
  * datetime.c
  * Support functions for date/time types.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/datum.c b/src/backend/utils/adt/datum.c
index 81e3b52ec9..f02a5e77ae 100644
--- a/src/backend/utils/adt/datum.c
+++ b/src/backend/utils/adt/datum.c
@@ -3,7 +3,7 @@
  * datum.c
  * POSTGRES Datum (abstract data type) manipulation routines.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
index 58c0b01bdc..60f417f1b2 100644
--- a/src/backend/utils/adt/dbsize.c
+++ b/src/backend/utils/adt/dbsize.c
@@ -2,7 +2,7 @@
  * dbsize.c
  * Database object size functions, and related inquiries
  *
- * Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/dbsize.c
diff --git a/src/backend/utils/adt/domains.c b/src/backend/utils/adt/domains.c
index 86f916ff43..1df554ec93 100644
--- a/src/backend/utils/adt/domains.c
+++ b/src/backend/utils/adt/domains.c
@@ -19,7 +19,7 @@
  * to evaluate them in.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/encode.c b/src/backend/utils/adt/encode.c
index 742ce6f97e..f527ee2a43 100644
--- a/src/backend/utils/adt/encode.c
+++ b/src/backend/utils/adt/encode.c
@@ -3,7 +3,7 @@
  * encode.c
  * Various data encoding/decoding things.
  *
- * Copyright (c) 2001-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2001-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/enum.c b/src/backend/utils/adt/enum.c
index 048a08dd85..562cd5c555 100644
--- a/src/backend/utils/adt/enum.c
+++ b/src/backend/utils/adt/enum.c
@@ -3,7 +3,7 @@
  * enum.c
  * I/O functions, operators, aggregates etc for enum types
  *
- * Copyright (c) 2006-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2006-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/expandeddatum.c b/src/backend/utils/adt/expandeddatum.c
index 49854b39f4..33fa4a0629 100644
--- a/src/backend/utils/adt/expandeddatum.c
+++ b/src/backend/utils/adt/expandeddatum.c
@@ -3,7 +3,7 @@
  * expandeddatum.c
  * Support functions for "expanded" value representations.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/float.c b/src/backend/utils/adt/float.c
index be65aab1c9..bc6a3e09b5 100644
--- a/src/backend/utils/adt/float.c
+++ b/src/backend/utils/adt/float.c
@@ -3,7 +3,7 @@
  * float.c
  * Functions for the built-in floating-point types.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/format_type.c b/src/backend/utils/adt/format_type.c
index 00bfaca59e..de3da3607a 100644
--- a/src/backend/utils/adt/format_type.c
+++ b/src/backend/utils/adt/format_type.c
@@ -4,7 +4,7 @@
  * Display type names "nicely".
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c
index ec97de0ad2..0e30810ae4 100644
--- a/src/backend/utils/adt/formatting.c
+++ b/src/backend/utils/adt/formatting.c
@@ -4,7 +4,7 @@
  * src/backend/utils/adt/formatting.c
  *
  *
- * Portions Copyright (c) 1999-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1999-2018, PostgreSQL Global Development Group
  *
  *
  * TO_CHAR(); TO_TIMESTAMP(); TO_DATE(); TO_NUMBER();
diff --git a/src/backend/utils/adt/genfile.c b/src/backend/utils/adt/genfile.c
index 04f1efbe4b..d9027fc688 100644
--- a/src/backend/utils/adt/genfile.c
+++ b/src/backend/utils/adt/genfile.c
@@ -4,7 +4,7 @@
  * Functions for direct access to files
  *
  *
- * Copyright (c) 2004-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2004-2018, PostgreSQL Global Development Group
  *
  * Author: Andreas Pflug
  *
diff --git a/src/backend/utils/adt/geo_ops.c b/src/backend/utils/adt/geo_ops.c
index f00ea54033..f57380a4df 100644
--- a/src/backend/utils/adt/geo_ops.c
+++ b/src/backend/utils/adt/geo_ops.c
@@ -3,7 +3,7 @@
  * geo_ops.c
  * 2D geometric operations
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/geo_selfuncs.c b/src/backend/utils/adt/geo_selfuncs.c
index 774063e92c..02a8853d75 100644
--- a/src/backend/utils/adt/geo_selfuncs.c
+++ b/src/backend/utils/adt/geo_selfuncs.c
@@ -4,7 +4,7 @@
  * Selectivity routines registered in the operator catalog in the
  * "oprrest" and "oprjoin" attributes.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/geo_spgist.c b/src/backend/utils/adt/geo_spgist.c
index a10543600f..3f1a755cbb 100644
--- a/src/backend/utils/adt/geo_spgist.c
+++ b/src/backend/utils/adt/geo_spgist.c
@@ -62,7 +62,7 @@
  * except the root. For the root node, we are setting the boundaries
  * that we don't yet have as infinity.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c
index d6eb16a907..7352908365 100644
--- a/src/backend/utils/adt/int.c
+++ b/src/backend/utils/adt/int.c
@@ -3,7 +3,7 @@
  * int.c
  * Functions for the built-in integer types (except int8).
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/int8.c b/src/backend/utils/adt/int8.c
index 24c9150893..ae6a4683d4 100644
--- a/src/backend/utils/adt/int8.c
+++ b/src/backend/utils/adt/int8.c
@@ -3,7 +3,7 @@
  * int8.c
  * Internal 64-bit integer operations
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
index fcce26ed2e..151345ab2f 100644
--- a/src/backend/utils/adt/json.c
+++ b/src/backend/utils/adt/json.c
@@ -3,7 +3,7 @@
  * json.c
  * JSON data type support.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c
index 4b2a541128..014e7aa6e3 100644
--- a/src/backend/utils/adt/jsonb.c
+++ b/src/backend/utils/adt/jsonb.c
@@ -3,7 +3,7 @@
  * jsonb.c
  * I/O routines for jsonb type
  *
- * Copyright (c) 2014-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2014-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/jsonb.c
diff --git a/src/backend/utils/adt/jsonb_gin.c b/src/backend/utils/adt/jsonb_gin.c
index 4e1ba10e9c..c8a27451d2 100644
--- a/src/backend/utils/adt/jsonb_gin.c
+++ b/src/backend/utils/adt/jsonb_gin.c
@@ -3,7 +3,7 @@
  * jsonb_gin.c
  * GIN support functions for jsonb
  *
- * Copyright (c) 2014-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2014-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/jsonb_op.c b/src/backend/utils/adt/jsonb_op.c
index d54a07d204..4e86dfde93 100644
--- a/src/backend/utils/adt/jsonb_op.c
+++ b/src/backend/utils/adt/jsonb_op.c
@@ -3,7 +3,7 @@
  * jsonb_op.c
  * Special operators for jsonb only, used by various index access methods
  *
- * Copyright (c) 2014-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2014-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/jsonb_util.c b/src/backend/utils/adt/jsonb_util.c
index d425f32403..2524584d95 100644
--- a/src/backend/utils/adt/jsonb_util.c
+++ b/src/backend/utils/adt/jsonb_util.c
@@ -3,7 +3,7 @@
  * jsonb_util.c
  * converting between Jsonb and JsonbValues, and iterating.
  *
- * Copyright (c) 2014-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2014-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 242d8fe743..fa78451613 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -3,7 +3,7 @@
  * jsonfuncs.c
  * Functions to process JSON data types.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/levenshtein.c b/src/backend/utils/adt/levenshtein.c
index 97e034b453..dc4346a039 100644
--- a/src/backend/utils/adt/levenshtein.c
+++ b/src/backend/utils/adt/levenshtein.c
@@ -16,7 +16,7 @@
  * PHP 4.0.6 distribution for inspiration. Configurable penalty costs
  * extension is introduced by Volkan YAZICI (7/95).
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/like_match.c b/src/backend/utils/adt/like_match.c
index 8ed362a8ff..18428569a8 100644
--- a/src/backend/utils/adt/like_match.c
+++ b/src/backend/utils/adt/like_match.c
@@ -16,7 +16,7 @@
  * do_like_escape - name of function if wanted - needs CHAREQ and CopyAdvChar
  * MATCH_LOWER - define for case (4) to specify case folding for 1-byte chars
  *
- * Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/like_match.c
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index 9e0a8ab79d..66c09a1f31 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -3,7 +3,7 @@
  * lockfuncs.c
  * Functions for SQL access to various lock-manager capabilities.
  *
- * Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/lockfuncs.c
diff --git a/src/backend/utils/adt/mac.c b/src/backend/utils/adt/mac.c
index 60521cc21f..94703c8c1c 100644
--- a/src/backend/utils/adt/mac.c
+++ b/src/backend/utils/adt/mac.c
@@ -3,7 +3,7 @@
  * mac.c
  * PostgreSQL type definitions for 6 byte, EUI-48, MAC addresses.
  *
- * Portions Copyright (c) 1998-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1998-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/mac.c
diff --git a/src/backend/utils/adt/mac8.c b/src/backend/utils/adt/mac8.c
index 1533cfdca0..6dd2febc26 100644
--- a/src/backend/utils/adt/mac8.c
+++ b/src/backend/utils/adt/mac8.c
@@ -11,7 +11,7 @@
  * The following code is written with the assumption that the OUI field
  * size is 24 bits.
  *
- * Portions Copyright (c) 1998-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1998-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/mac8.c
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index f53d411ad1..2e1e020c4b 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -3,7 +3,7 @@
  * misc.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/nabstime.c b/src/backend/utils/adt/nabstime.c
index 2bca39a90c..ec85795827 100644
--- a/src/backend/utils/adt/nabstime.c
+++ b/src/backend/utils/adt/nabstime.c
@@ -5,7 +5,7 @@
  * Functions for the built-in type "RelativeTime".
  * Functions for the built-in type "TimeInterval".
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/name.c b/src/backend/utils/adt/name.c
index 974e6e8401..fe20448ac5 100644
--- a/src/backend/utils/adt/name.c
+++ b/src/backend/utils/adt/name.c
@@ -9,7 +9,7 @@
  * always use NAMEDATALEN as the symbolic constant! - jolly 8/21/95
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/network_gist.c b/src/backend/utils/adt/network_gist.c
index c4cafba503..ba884a803b 100644
--- a/src/backend/utils/adt/network_gist.c
+++ b/src/backend/utils/adt/network_gist.c
@@ -34,7 +34,7 @@
  * twice as fast as for a simpler design in which a single field doubles as
  * the common prefix length and the minimum ip_bits value.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/network_selfuncs.c b/src/backend/utils/adt/network_selfuncs.c
index c9927fff9b..6c365b34b1 100644
--- a/src/backend/utils/adt/network_selfuncs.c
+++ b/src/backend/utils/adt/network_selfuncs.c
@@ -7,7 +7,7 @@
  * operators. Estimates are based on null fraction, most common values,
  * and histogram of inet/cidr columns.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/network_spgist.c b/src/backend/utils/adt/network_spgist.c
index c48f45fe47..e5250fe13e 100644
--- a/src/backend/utils/adt/network_spgist.c
+++ b/src/backend/utils/adt/network_spgist.c
@@ -21,7 +21,7 @@
  * the address family, everything goes into node 0 (which will probably
  * lead to creating an allTheSame tuple).
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c
index 9830988701..a1792f0b01 100644
--- a/src/backend/utils/adt/numeric.c
+++ b/src/backend/utils/adt/numeric.c
@@ -11,7 +11,7 @@
  * Transactions on Mathematical Software, Vol. 24, No. 4, December 1998,
  * pages 359-367.
  *
- * Copyright (c) 1998-2017, PostgreSQL Global Development Group
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/backend/utils/adt/numeric.c
diff --git a/src/backend/utils/adt/numutils.c b/src/backend/utils/adt/numutils.c
index 244904ea94..b5439f497c 100644
--- a/src/backend/utils/adt/numutils.c
+++ b/src/backend/utils/adt/numutils.c
@@ -3,7 +3,7 @@
  * numutils.c
  * utility functions for I/O of built-in numeric types.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/oid.c b/src/backend/utils/adt/oid.c
index 87e87fe54d..b0670e0d9f 100644
--- a/src/backend/utils/adt/oid.c
+++ b/src/backend/utils/adt/oid.c
@@ -3,7 +3,7 @@
  * oid.c
  * Functions for the built-in type Oid ... also oidvector.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/oracle_compat.c b/src/backend/utils/adt/oracle_compat.c
index a5aa6a95aa..331c18517a 100644
--- a/src/backend/utils/adt/oracle_compat.c
+++ b/src/backend/utils/adt/oracle_compat.c
@@ -2,7 +2,7 @@
  * oracle_compat.c
  * Oracle compatible functions.
  *
- * Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * Author: Edmund Mergl
  * Multibyte enhancement: Tatsuo Ishii
diff --git a/src/backend/utils/adt/orderedsetaggs.c b/src/backend/utils/adt/orderedsetaggs.c
index 1e323d9444..79dbfd1a05 100644
--- a/src/backend/utils/adt/orderedsetaggs.c
+++ b/src/backend/utils/adt/orderedsetaggs.c
@@ -3,7 +3,7 @@
  * orderedsetaggs.c
  * Ordered-set aggregate functions.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/pg_locale.c b/src/backend/utils/adt/pg_locale.c
index 5ad75efb7a..a3dc3be5a8 100644
--- a/src/backend/utils/adt/pg_locale.c
+++ b/src/backend/utils/adt/pg_locale.c
@@ -2,7 +2,7 @@
  *
  * PostgreSQL locale utilities
  *
- * Portions Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * src/backend/utils/adt/pg_locale.c
  *
diff --git a/src/backend/utils/adt/pg_lsn.c b/src/backend/utils/adt/pg_lsn.c
index 7ad30a260a..f4eae9d83d 100644
--- a/src/backend/utils/adt/pg_lsn.c
+++ b/src/backend/utils/adt/pg_lsn.c
@@ -3,7 +3,7 @@
  * pg_lsn.c
  * Operations for the pg_lsn datatype.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index c0b0474628..0c54b02542 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -5,7 +5,7 @@
  * to control oid and relfilenode assignment, and do other special
  * hacks needed for pg_upgrade.
  *
- *	Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ *	Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *	src/backend/utils/adt/pg_upgrade_support.c
  */
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 04cf209b5b..6110e40d3e 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -3,7 +3,7 @@
  * pgstatfuncs.c
  *	  Functions for accessing the statistics collector data
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/pseudotypes.c b/src/backend/utils/adt/pseudotypes.c
index be793539a3..dbe67cdb4c 100644
--- a/src/backend/utils/adt/pseudotypes.c
+++ b/src/backend/utils/adt/pseudotypes.c
@@ -11,7 +11,7 @@
  * we do better?)
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/quote.c b/src/backend/utils/adt/quote.c
index 43e5bb7962..699ca4a419 100644
--- a/src/backend/utils/adt/quote.c
+++ b/src/backend/utils/adt/quote.c
@@ -3,7 +3,7 @@
  * quote.c
  *	  Functions for quoting identifiers and literals
  *
- * Portions Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c
index e79f0dbfca..5a5d0a0b8f 100644
--- a/src/backend/utils/adt/rangetypes.c
+++ b/src/backend/utils/adt/rangetypes.c
@@ -19,7 +19,7 @@
  * value; we must detoast it first.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/rangetypes_gist.c b/src/backend/utils/adt/rangetypes_gist.c
index 29fa1ae325..2fe61043d9 100644
--- a/src/backend/utils/adt/rangetypes_gist.c
+++ b/src/backend/utils/adt/rangetypes_gist.c
@@ -3,7 +3,7 @@
  * rangetypes_gist.c
  *	  GiST support for range types.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/rangetypes_selfuncs.c b/src/backend/utils/adt/rangetypes_selfuncs.c
index cba8974dbe..59761ecf34 100644
--- a/src/backend/utils/adt/rangetypes_selfuncs.c
+++ b/src/backend/utils/adt/rangetypes_selfuncs.c
@@ -6,7 +6,7 @@
  * Estimates are based on histograms of lower and upper bounds, and the
  * fraction of empty ranges.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/rangetypes_spgist.c b/src/backend/utils/adt/rangetypes_spgist.c
index 27fbf66433..a7898c4366 100644
--- a/src/backend/utils/adt/rangetypes_spgist.c
+++ b/src/backend/utils/adt/rangetypes_spgist.c
@@ -25,7 +25,7 @@
  * This implementation only uses the comparison function of the range element
  * datatype, therefore it works for any range type.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/rangetypes_typanalyze.c b/src/backend/utils/adt/rangetypes_typanalyze.c
index 3efd982d1b..f2c11f54bc 100644
--- a/src/backend/utils/adt/rangetypes_typanalyze.c
+++ b/src/backend/utils/adt/rangetypes_typanalyze.c
@@ -13,7 +13,7 @@
  * come from different tuples. In theory, the standard scalar selectivity
  * functions could be used with the combined histogram.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/regexp.c b/src/backend/utils/adt/regexp.c
index e858b5910f..5025a449fb 100644
--- a/src/backend/utils/adt/regexp.c
+++ b/src/backend/utils/adt/regexp.c
@@ -3,7 +3,7 @@
  * regexp.c
  *	  Postgres' interface to the regular expression package.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/regproc.c b/src/backend/utils/adt/regproc.c
index afd0c00b8a..a0079821fe 100644
--- a/src/backend/utils/adt/regproc.c
+++ b/src/backend/utils/adt/regproc.c
@@ -8,7 +8,7 @@
  * special I/O conversion routines.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c
index b1ae9e5f96..8faae1d069 100644
--- a/src/backend/utils/adt/ri_triggers.c
+++ b/src/backend/utils/adt/ri_triggers.c
@@ -13,7 +13,7 @@
  * plan --- consider improving this someday.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * src/backend/utils/adt/ri_triggers.c
  *
diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c
index 9b32db5d0a..a5fabfcc9e 100644
--- a/src/backend/utils/adt/rowtypes.c
+++ b/src/backend/utils/adt/rowtypes.c
@@ -3,7 +3,7 @@
  * rowtypes.c
  *	  I/O and comparison functions for generic composite types.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 8514c21c40..9cdbb06add 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -4,7 +4,7 @@
  *	  Functions to convert stored expressions/querytrees back to
  *	  source text
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index ea95b8068d..fcc8323f62 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -10,7 +10,7 @@
  * Index cost functions are located via the index AM's API struct,
  * which is obtained from the handler function registered in pg_am.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index 854097dd58..5ed5fdaffe 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -3,7 +3,7 @@
  * tid.c
  *	  Functions for the built-in type tuple id
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index 5797aaad34..e6a1eed191 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -3,7 +3,7 @@
  * timestamp.c
  *	  Functions for the built-in SQL types "timestamp" and "interval".
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/trigfuncs.c b/src/backend/utils/adt/trigfuncs.c
index 50ea6d9ed3..04605021d7 100644
--- a/src/backend/utils/adt/trigfuncs.c
+++ b/src/backend/utils/adt/trigfuncs.c
@@ -4,7 +4,7 @@
  *	  Builtin functions for useful trigger support.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/utils/adt/trigfuncs.c
diff --git a/src/backend/utils/adt/tsginidx.c b/src/backend/utils/adt/tsginidx.c
index aba456ed88..de59e6417e 100644
--- a/src/backend/utils/adt/tsginidx.c
+++ b/src/backend/utils/adt/tsginidx.c
@@ -3,7 +3,7 @@
  * tsginidx.c
  *	 GIN support functions for tsvector_ops
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsgistidx.c b/src/backend/utils/adt/tsgistidx.c
index 3295f7252b..2d9ecc4bfd 100644
--- a/src/backend/utils/adt/tsgistidx.c
+++ b/src/backend/utils/adt/tsgistidx.c
@@ -3,7 +3,7 @@
  * tsgistidx.c
  *	  GiST support functions for tsvector_ops
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsquery.c b/src/backend/utils/adt/tsquery.c
index 5cdfe4d732..1ccbf79030 100644
--- a/src/backend/utils/adt/tsquery.c
+++ b/src/backend/utils/adt/tsquery.c
@@ -3,7 +3,7 @@
  * tsquery.c
  *	  I/O functions for tsquery
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsquery_cleanup.c b/src/backend/utils/adt/tsquery_cleanup.c
index 350171c93d..c146376e66 100644
--- a/src/backend/utils/adt/tsquery_cleanup.c
+++ b/src/backend/utils/adt/tsquery_cleanup.c
@@ -4,7 +4,7 @@
  *	 Cleanup query from NOT values and/or stopword
  *	 Utility functions to correct work.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsquery_gist.c b/src/backend/utils/adt/tsquery_gist.c
index 04134d14ad..bf8109b32f 100644
--- a/src/backend/utils/adt/tsquery_gist.c
+++ b/src/backend/utils/adt/tsquery_gist.c
@@ -3,7 +3,7 @@
  * tsquery_gist.c
  *	  GiST index support for tsquery
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsquery_op.c b/src/backend/utils/adt/tsquery_op.c
index 755c3e9ee8..07bc609972 100644
--- a/src/backend/utils/adt/tsquery_op.c
+++ b/src/backend/utils/adt/tsquery_op.c
@@ -3,7 +3,7 @@
  * tsquery_op.c
  *	  Various operations with tsquery
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsquery_rewrite.c b/src/backend/utils/adt/tsquery_rewrite.c
index 1bd3deabd5..e3420e8d27 100644
--- a/src/backend/utils/adt/tsquery_rewrite.c
+++ b/src/backend/utils/adt/tsquery_rewrite.c
@@ -3,7 +3,7 @@
  * tsquery_rewrite.c
  *	  Utilities for reconstructing tsquery
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsquery_util.c b/src/backend/utils/adt/tsquery_util.c
index 971bb81ea6..cd310b87d5 100644
--- a/src/backend/utils/adt/tsquery_util.c
+++ b/src/backend/utils/adt/tsquery_util.c
@@ -3,7 +3,7 @@
  * tsquery_util.c
  *	  Utilities for tsquery datatype
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsrank.c b/src/backend/utils/adt/tsrank.c
index 4577bcc0b8..a0011a5f56 100644
--- a/src/backend/utils/adt/tsrank.c
+++ b/src/backend/utils/adt/tsrank.c
@@ -3,7 +3,7 @@
  * tsrank.c
  *		rank tsvector by tsquery
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsvector.c b/src/backend/utils/adt/tsvector.c
index b0a9217d1e..64e02ef434 100644
--- a/src/backend/utils/adt/tsvector.c
+++ b/src/backend/utils/adt/tsvector.c
@@ -3,7 +3,7 @@
  * tsvector.c
  *	  I/O functions for tsvector
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsvector_op.c b/src/backend/utils/adt/tsvector_op.c
index 822520299e..258fe47a24 100644
--- a/src/backend/utils/adt/tsvector_op.c
+++ b/src/backend/utils/adt/tsvector_op.c
@@ -3,7 +3,7 @@
  * tsvector_op.c
  *	  operations over tsvector
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/tsvector_parser.c b/src/backend/utils/adt/tsvector_parser.c
index 060d073fb7..7367ba6a40 100644
--- a/src/backend/utils/adt/tsvector_parser.c
+++ b/src/backend/utils/adt/tsvector_parser.c
@@ -3,7 +3,7 @@
  * tsvector_parser.c
  *	  Parser for tsvector
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/txid.c b/src/backend/utils/adt/txid.c
index 9d312edf04..7974c0bd3d 100644
--- a/src/backend/utils/adt/txid.c
+++ b/src/backend/utils/adt/txid.c
@@ -10,7 +10,7 @@
  *	via functions such as SubTransGetTopmostTransaction().
  *
  *
- *	Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ *	Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *	Author: Jan Wieck, Afilias USA INC.
  *	64-bit txids: Marko Kreen, Skype Technologies
  *
diff --git a/src/backend/utils/adt/uuid.c b/src/backend/utils/adt/uuid.c
index f73c695878..43674dbfab 100644
--- a/src/backend/utils/adt/uuid.c
+++ b/src/backend/utils/adt/uuid.c
@@ -3,7 +3,7 @@
  * uuid.c
  *	  Functions for the built-in type "uuid".
  *
- * Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2007-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  src/backend/utils/adt/uuid.c
diff --git a/src/backend/utils/adt/varbit.c b/src/backend/utils/adt/varbit.c
index 6caa7e9b5e..6ba400b699 100644
--- a/src/backend/utils/adt/varbit.c
+++ b/src/backend/utils/adt/varbit.c
@@ -5,7 +5,7 @@
  *
  * Code originally contributed by Adriaan Joubert.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/varchar.c b/src/backend/utils/adt/varchar.c
index 2df6f2ccb0..8f07b1e272 100644
--- a/src/backend/utils/adt/varchar.c
+++ b/src/backend/utils/adt/varchar.c
@@ -3,7 +3,7 @@
  * varchar.c
  *	  Functions for the built-in types char(n) and varchar(n).
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c
index a84e845ad2..304cb26691 100644
--- a/src/backend/utils/adt/varlena.c
+++ b/src/backend/utils/adt/varlena.c
@@ -3,7 +3,7 @@
  * varlena.c
  *	  Functions for the variable-length built-in types.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/version.c b/src/backend/utils/adt/version.c
index 5bdc8fad43..d57fd98ee0 100644
--- a/src/backend/utils/adt/version.c
+++ b/src/backend/utils/adt/version.c
@@ -3,7 +3,7 @@
  * version.c
  *	 Returns the PostgreSQL version string
  *
- * Copyright (c) 1998-2017, PostgreSQL Global Development Group
+ * Copyright (c) 1998-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *
diff --git a/src/backend/utils/adt/windowfuncs.c b/src/backend/utils/adt/windowfuncs.c
index d86ad703da..40ba783572 100644
--- a/src/backend/utils/adt/windowfuncs.c
+++ b/src/backend/utils/adt/windowfuncs.c
@@ -3,7 +3,7 @@
  * windowfuncs.c
  *		Standard window functions defined in SQL spec.
  *
- * Portions Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/adt/xid.c b/src/backend/utils/adt/xid.c
index 67c32ac619..1e811faa9c 100644
--- a/src/backend/utils/adt/xid.c
+++ b/src/backend/utils/adt/xid.c
@@ -3,7 +3,7 @@
  * xid.c
  *	  POSTGRES transaction identifier and command identifier datatypes.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index fa392cd0e5..7cdb87ef85 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -4,7 +4,7 @@
  *	  XML data type support.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/utils/adt/xml.c
diff --git a/src/backend/utils/cache/attoptcache.c b/src/backend/utils/cache/attoptcache.c
index da8b42ddb8..92468472a4 100644
--- a/src/backend/utils/cache/attoptcache.c
+++ b/src/backend/utils/cache/attoptcache.c
@@ -6,7 +6,7 @@
  * Attribute options are cached separately from the fixed-size portion of
  * pg_attribute entries, which are handled by the relcache.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index 95a07422b3..8a0a42ce71 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -3,7 +3,7 @@
  * catcache.c
  *	  System catalog cache for tuples matching a key.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/cache/evtcache.c b/src/backend/utils/cache/evtcache.c
index 6faf4ae354..7c014d6092 100644
--- a/src/backend/utils/cache/evtcache.c
+++ b/src/backend/utils/cache/evtcache.c
@@ -3,7 +3,7 @@
  * evtcache.c
  *	  Special-purpose cache for event trigger data.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 0e61b4b79f..955f5c7fdc 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -85,7 +85,7 @@
  * problems can be overcome cheaply.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c
index 5211360777..93cdf13a6e 100644
--- a/src/backend/utils/cache/lsyscache.c
+++ b/src/backend/utils/cache/lsyscache.c
@@ -3,7 +3,7 @@
  * lsyscache.c
  *	  Convenience routines for common queries in the system catalog cache.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index 853c1f6e85..8d7d8e04c9 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -38,7 +38,7 @@
  * be infrequent enough that more-detailed tracking is not worth the effort.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 1d0cc6cb79..28a4483434 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -3,7 +3,7 @@
  * relcache.c
  *	  POSTGRES relation descriptor cache code
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/cache/relfilenodemap.c b/src/backend/utils/cache/relfilenodemap.c
index 3e811e1c9b..db6bb1c1fd 100644
--- a/src/backend/utils/cache/relfilenodemap.c
+++ b/src/backend/utils/cache/relfilenodemap.c
@@ -3,7 +3,7 @@
  * relfilenodemap.c
  *	  relfilenode to oid mapping cache.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/cache/relmapper.c b/src/backend/utils/cache/relmapper.c
index e0c5dd404c..99d095f2df 100644
--- a/src/backend/utils/cache/relmapper.c
+++ b/src/backend/utils/cache/relmapper.c
@@ -28,7 +28,7 @@
  * all these files commit in a single map file update rather than being tied
  * to transaction commit.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/cache/spccache.c b/src/backend/utils/cache/spccache.c
index eafc00f9ea..3be8657639 100644
--- a/src/backend/utils/cache/spccache.c
+++ b/src/backend/utils/cache/spccache.c
@@ -8,7 +8,7 @@
  * be a measurable performance gain from doing this, but that might change
  * in the future as we add more options.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 888edbb325..041cd53a30 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -3,7 +3,7 @@
  * syscache.c
  *	  System cache management routines
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/cache/ts_cache.c b/src/backend/utils/cache/ts_cache.c
index 29cf93a4de..3d5c194148 100644
--- a/src/backend/utils/cache/ts_cache.c
+++ b/src/backend/utils/cache/ts_cache.c
@@ -17,7 +17,7 @@
  * any database access.
  *
  *
- * Copyright (c) 2006-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2006-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  src/backend/utils/cache/ts_cache.c
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index f6450c402c..cf22306b20 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -30,7 +30,7 @@
  * Domain constraint changes are also tracked properly.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt
index 76fe79eac0..1475bfe362 100644
--- a/src/backend/utils/errcodes.txt
+++ b/src/backend/utils/errcodes.txt
@@ -2,7 +2,7 @@
 # errcodes.txt
 #      PostgreSQL error codes
 #
-# Copyright (c) 2003-2017, PostgreSQL Global Development Group
+# Copyright (c) 2003-2018, PostgreSQL Global Development Group
 #
 # This list serves as the basis for generating source files containing error
 # codes.  It is kept in a common format to make sure all these source files have
diff --git a/src/backend/utils/error/assert.c b/src/backend/utils/error/assert.c
index 2ef7792b8b..115dbf692c 100644
--- a/src/backend/utils/error/assert.c
+++ b/src/backend/utils/error/assert.c
@@ -3,7 +3,7 @@
  * assert.c
  *	  Assert code.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c
index 9cdc07f006..16531f7a0f 100644
--- a/src/backend/utils/error/elog.c
+++ b/src/backend/utils/error/elog.c
@@ -43,7 +43,7 @@
  * overflow.)
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/fmgr/dfmgr.c b/src/backend/utils/fmgr/dfmgr.c
index 1239f95ddc..1b0dbad82c 100644
--- a/src/backend/utils/fmgr/dfmgr.c
+++ b/src/backend/utils/fmgr/dfmgr.c
@@ -3,7 +3,7 @@
  * dfmgr.c
  *	  Dynamic function manager code.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c
index 975c968e3b..8968a9fde8 100644
--- a/src/backend/utils/fmgr/fmgr.c
+++ b/src/backend/utils/fmgr/fmgr.c
@@ -3,7 +3,7 @@
  * fmgr.c
  *	  The Postgres function manager.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/fmgr/funcapi.c b/src/backend/utils/fmgr/funcapi.c
index 24a3950575..c0076bfce3 100644
--- a/src/backend/utils/fmgr/funcapi.c
+++ b/src/backend/utils/fmgr/funcapi.c
@@ -4,7 +4,7 @@
  *	  Utility and convenience functions for fmgr functions that return
  *	  sets and/or composite types, or deal with VARIADIC inputs.
  *
- * Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  src/backend/utils/fmgr/funcapi.c
diff --git a/src/backend/utils/generate-errcodes.pl b/src/backend/utils/generate-errcodes.pl
index 6a577f657a..e9f1715a0d 100644
--- a/src/backend/utils/generate-errcodes.pl
+++ b/src/backend/utils/generate-errcodes.pl
@@ -1,7 +1,7 @@
 #!/usr/bin/perl
 #
 # Generate the errcodes.h header from errcodes.txt
-# Copyright (c) 2000-2017, PostgreSQL Global Development Group
+# Copyright (c) 2000-2018, PostgreSQL Global Development Group

 use warnings;
 use strict;
diff --git a/src/backend/utils/hash/dynahash.c b/src/backend/utils/hash/dynahash.c
index c88efc3054..5281cd5410 100644
--- a/src/backend/utils/hash/dynahash.c
+++ b/src/backend/utils/hash/dynahash.c
@@ -41,7 +41,7 @@
  * function must be supplied; comparison defaults to memcmp() and key copying
  * to memcpy() when a user-defined hashing function is selected.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/hash/hashfn.c b/src/backend/utils/hash/hashfn.c
index 9a93c089a6..9b7ddbad0b 100644
--- a/src/backend/utils/hash/hashfn.c
+++ b/src/backend/utils/hash/hashfn.c
@@ -4,7 +4,7 @@
  *		Hash functions for use in dynahash.c hashtables
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/hash/pg_crc.c b/src/backend/utils/hash/pg_crc.c
index 9dc6c9dcc9..688ea0265f 100644
--- a/src/backend/utils/hash/pg_crc.c
+++ b/src/backend/utils/hash/pg_crc.c
@@ -7,7 +7,7 @@
  * A PAINLESS GUIDE TO CRC ERROR DETECTION ALGORITHMS, available from
  * http://www.ross.net/crc/download/crc_v3.txt or several other net sites.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 9680a4b0f7..54fa4a389e 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -3,7 +3,7 @@
  * globals.c
  *	  global variable declarations
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c
index 544fed8096..87ed7d3f71 100644
--- a/src/backend/utils/init/miscinit.c
+++ b/src/backend/utils/init/miscinit.c
@@ -3,7 +3,7 @@
  * miscinit.c
  *	  miscellaneous initialization support stuff
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 20f1d279e9..f9b330998d 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -3,7 +3,7 @@
  * postinit.c
  *	  postgres initialization utilities
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/mb/Unicode/Makefile b/src/backend/utils/mb/Unicode/Makefile
index 8f3afa01e2..06e22de950 100644
--- a/src/backend/utils/mb/Unicode/Makefile
+++ b/src/backend/utils/mb/Unicode/Makefile
@@ -2,7 +2,7 @@
 #
 # Makefile for src/backend/utils/mb/Unicode
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/Makefile
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
index f73e2a1f16..4190ee5c7b 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
index 63f5e1b67a..2971e64a11 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2007-2017, PostgreSQL Global Development Group
+# Copyright (c) 2007-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
index 4dac7d267f..1c1152e820 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2007-2017, PostgreSQL Global Development Group
+# Copyright (c) 2007-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
index 77ce273171..0ac1f17245 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
index 8e0126c7f5..4d6a3cabaa 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
index 61013e356a..89f3cd7006 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
index e9f816cfa8..ec184d760f 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2007-2017, PostgreSQL Global Development Group
+# Copyright (c) 2007-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
index be10d5200e..b580373a5c 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
index 98a6ee7c7b..d153e4c8e6 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2007-2017, PostgreSQL Global Development Group
+# Copyright (c) 2007-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
index cc1edcc4aa..a50f6f39cd 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
index 640a2ec2f7..dc9fb753b9 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2007-2017, PostgreSQL Global Development Group
+# Copyright (c) 2007-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
 #
diff --git a/src/backend/utils/mb/Unicode/UCS_to_most.pl b/src/backend/utils/mb/Unicode/UCS_to_most.pl
index 2d69748a93..445344932a 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_most.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_most.pl
@@ -1,6 +1,6 @@
 #! /usr/bin/perl
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/UCS_to_most.pl
 #
diff --git a/src/backend/utils/mb/Unicode/convutils.pm b/src/backend/utils/mb/Unicode/convutils.pm
index 22d02ca485..854df8cf2a 100644
--- a/src/backend/utils/mb/Unicode/convutils.pm
+++ b/src/backend/utils/mb/Unicode/convutils.pm
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2001-2017, PostgreSQL Global Development Group
+# Copyright (c) 2001-2018, PostgreSQL Global Development Group
 #
 # src/backend/utils/mb/Unicode/convutils.pm
diff --git a/src/backend/utils/mb/conv.c b/src/backend/utils/mb/conv.c
index d46330b207..fb7c925397 100644
--- a/src/backend/utils/mb/conv.c
+++ b/src/backend/utils/mb/conv.c
@@ -2,7 +2,7 @@
  *
  *	  Utility functions for conversion procs.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/ascii_and_mic/ascii_and_mic.c b/src/backend/utils/mb/conversion_procs/ascii_and_mic/ascii_and_mic.c
index 4bbeea1e6f..9f44d7e853 100644
--- a/src/backend/utils/mb/conversion_procs/ascii_and_mic/ascii_and_mic.c
+++ b/src/backend/utils/mb/conversion_procs/ascii_and_mic/ascii_and_mic.c
@@ -2,7 +2,7 @@
  *
  *	  ASCII and MULE_INTERNAL
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/cyrillic_and_mic/cyrillic_and_mic.c b/src/backend/utils/mb/conversion_procs/cyrillic_and_mic/cyrillic_and_mic.c
index 677d7ea3eb..a37156a071 100644
--- a/src/backend/utils/mb/conversion_procs/cyrillic_and_mic/cyrillic_and_mic.c
+++ b/src/backend/utils/mb/conversion_procs/cyrillic_and_mic/cyrillic_and_mic.c
@@ -2,7 +2,7 @@
  *
  *	  Cyrillic and MULE_INTERNAL
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/euc2004_sjis2004/euc2004_sjis2004.c b/src/backend/utils/mb/conversion_procs/euc2004_sjis2004/euc2004_sjis2004.c
index 83c2712eba..e8df57f35c 100644
--- a/src/backend/utils/mb/conversion_procs/euc2004_sjis2004/euc2004_sjis2004.c
+++ b/src/backend/utils/mb/conversion_procs/euc2004_sjis2004/euc2004_sjis2004.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_JIS_2004, SHIFT_JIS_2004
  *
- * Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2007-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  *	  src/backend/utils/mb/conversion_procs/euc2004_sjis2004/euc2004_sjis2004.c
diff --git a/src/backend/utils/mb/conversion_procs/euc_cn_and_mic/euc_cn_and_mic.c b/src/backend/utils/mb/conversion_procs/euc_cn_and_mic/euc_cn_and_mic.c
index c897bb2e49..ad3f12870b 100644
--- a/src/backend/utils/mb/conversion_procs/euc_cn_and_mic/euc_cn_and_mic.c
+++ b/src/backend/utils/mb/conversion_procs/euc_cn_and_mic/euc_cn_and_mic.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_CN and MULE_INTERNAL
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/euc_jp_and_sjis/euc_jp_and_sjis.c b/src/backend/utils/mb/conversion_procs/euc_jp_and_sjis/euc_jp_and_sjis.c
index 414aeff3b9..5564b02986 100644
--- a/src/backend/utils/mb/conversion_procs/euc_jp_and_sjis/euc_jp_and_sjis.c
+++ b/src/backend/utils/mb/conversion_procs/euc_jp_and_sjis/euc_jp_and_sjis.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_JP, SJIS and MULE_INTERNAL
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/euc_kr_and_mic/euc_kr_and_mic.c b/src/backend/utils/mb/conversion_procs/euc_kr_and_mic/euc_kr_and_mic.c
index 94f4f813ef..c48704f9b2 100644
--- a/src/backend/utils/mb/conversion_procs/euc_kr_and_mic/euc_kr_and_mic.c
+++ b/src/backend/utils/mb/conversion_procs/euc_kr_and_mic/euc_kr_and_mic.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_KR and MULE_INTERNAL
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/euc_tw_and_big5/euc_tw_and_big5.c b/src/backend/utils/mb/conversion_procs/euc_tw_and_big5/euc_tw_and_big5.c
index cb7e296f26..9b5a408533 100644
--- a/src/backend/utils/mb/conversion_procs/euc_tw_and_big5/euc_tw_and_big5.c
+++ b/src/backend/utils/mb/conversion_procs/euc_tw_and_big5/euc_tw_and_big5.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_TW, BIG5 and MULE_INTERNAL
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/latin2_and_win1250/latin2_and_win1250.c b/src/backend/utils/mb/conversion_procs/latin2_and_win1250/latin2_and_win1250.c
index 62b103a8dd..648aab4f95 100644
--- a/src/backend/utils/mb/conversion_procs/latin2_and_win1250/latin2_and_win1250.c
+++ b/src/backend/utils/mb/conversion_procs/latin2_and_win1250/latin2_and_win1250.c
@@ -2,7 +2,7 @@
  *
  *	  LATIN2 and WIN1250
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/latin_and_mic/latin_and_mic.c b/src/backend/utils/mb/conversion_procs/latin_and_mic/latin_and_mic.c
index 02fa94b8aa..29c01cf771 100644
--- a/src/backend/utils/mb/conversion_procs/latin_and_mic/latin_and_mic.c
+++ b/src/backend/utils/mb/conversion_procs/latin_and_mic/latin_and_mic.c
@@ -2,7 +2,7 @@
  *
  *	  LATINn and MULE_INTERNAL
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_ascii/utf8_and_ascii.c b/src/backend/utils/mb/conversion_procs/utf8_and_ascii/utf8_and_ascii.c
index 724d3dd27e..e1a5a970ae 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_ascii/utf8_and_ascii.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_ascii/utf8_and_ascii.c
@@ -2,7 +2,7 @@
  *
  *	  ASCII <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_big5/utf8_and_big5.c b/src/backend/utils/mb/conversion_procs/utf8_and_big5/utf8_and_big5.c
index 7a6c0f3b74..5562dff3d7 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_big5/utf8_and_big5.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_big5/utf8_and_big5.c
@@ -2,7 +2,7 @@
  *
  *	  BIG5 <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_cyrillic/utf8_and_cyrillic.c b/src/backend/utils/mb/conversion_procs/utf8_and_cyrillic/utf8_and_cyrillic.c
index 88ae8d309f..41fe6c5c28 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_cyrillic/utf8_and_cyrillic.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_cyrillic/utf8_and_cyrillic.c
@@ -2,7 +2,7 @@
  *
  *	  UTF8 and Cyrillic
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_euc2004/utf8_and_euc2004.c b/src/backend/utils/mb/conversion_procs/utf8_and_euc2004/utf8_and_euc2004.c
index b21167997c..cc12708d65 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_euc2004/utf8_and_euc2004.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_euc2004/utf8_and_euc2004.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_JIS_2004 <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_euc_cn/utf8_and_euc_cn.c b/src/backend/utils/mb/conversion_procs/utf8_and_euc_cn/utf8_and_euc_cn.c
index ac7d475c05..e3f37a7b51 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_euc_cn/utf8_and_euc_cn.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_euc_cn/utf8_and_euc_cn.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_CN <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_euc_jp/utf8_and_euc_jp.c b/src/backend/utils/mb/conversion_procs/utf8_and_euc_jp/utf8_and_euc_jp.c
index 9a9ba810f2..2f0f111797 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_euc_jp/utf8_and_euc_jp.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_euc_jp/utf8_and_euc_jp.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_JP <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_euc_kr/utf8_and_euc_kr.c b/src/backend/utils/mb/conversion_procs/utf8_and_euc_kr/utf8_and_euc_kr.c
index 09289352ea..a7f4d20f59 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_euc_kr/utf8_and_euc_kr.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_euc_kr/utf8_and_euc_kr.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_KR <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_euc_tw/utf8_and_euc_tw.c b/src/backend/utils/mb/conversion_procs/utf8_and_euc_tw/utf8_and_euc_tw.c
index efd92bb6af..b2dfd55025 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_euc_tw/utf8_and_euc_tw.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_euc_tw/utf8_and_euc_tw.c
@@ -2,7 +2,7 @@
  *
  *	  EUC_TW <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_gb18030/utf8_and_gb18030.c b/src/backend/utils/mb/conversion_procs/utf8_and_gb18030/utf8_and_gb18030.c
index 7022d99e41..6b8a2cac55 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_gb18030/utf8_and_gb18030.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_gb18030/utf8_and_gb18030.c
@@ -2,7 +2,7 @@
  *
  *	  GB18030 <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_gbk/utf8_and_gbk.c b/src/backend/utils/mb/conversion_procs/utf8_and_gbk/utf8_and_gbk.c
index 13717b42f9..c0fd57aff5 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_gbk/utf8_and_gbk.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_gbk/utf8_and_gbk.c
@@ -2,7 +2,7 @@
  *
  *	  GBK <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_iso8859/utf8_and_iso8859.c b/src/backend/utils/mb/conversion_procs/utf8_and_iso8859/utf8_and_iso8859.c
index 4e47cf66c5..d01b59467f 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_iso8859/utf8_and_iso8859.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_iso8859/utf8_and_iso8859.c
@@ -2,7 +2,7 @@
  *
  *	  ISO 8859 2-16 <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_iso8859_1/utf8_and_iso8859_1.c b/src/backend/utils/mb/conversion_procs/utf8_and_iso8859_1/utf8_and_iso8859_1.c
index 2cfc7bb83b..f72329e33b 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_iso8859_1/utf8_and_iso8859_1.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_iso8859_1/utf8_and_iso8859_1.c
@@ -2,7 +2,7 @@
  *
  *	  ISO8859_1 <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_johab/utf8_and_johab.c b/src/backend/utils/mb/conversion_procs/utf8_and_johab/utf8_and_johab.c
index dce5b52aad..1bc12145f6 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_johab/utf8_and_johab.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_johab/utf8_and_johab.c
@@ -2,7 +2,7 @@
  *
  *	  JOHAB <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_sjis/utf8_and_sjis.c b/src/backend/utils/mb/conversion_procs/utf8_and_sjis/utf8_and_sjis.c
index 3db14bc09e..561730a420 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_sjis/utf8_and_sjis.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_sjis/utf8_and_sjis.c
@@ -2,7 +2,7 @@
  *
  *	  SJIS <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_sjis2004/utf8_and_sjis2004.c b/src/backend/utils/mb/conversion_procs/utf8_and_sjis2004/utf8_and_sjis2004.c
index adc1c9e93c..0dea5a2805 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_sjis2004/utf8_and_sjis2004.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_sjis2004/utf8_and_sjis2004.c
@@ -2,7 +2,7 @@
  *
  *	  SHIFT_JIS_2004 <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_uhc/utf8_and_uhc.c b/src/backend/utils/mb/conversion_procs/utf8_and_uhc/utf8_and_uhc.c
index 8c71280715..738fd1191f 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_uhc/utf8_and_uhc.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_uhc/utf8_and_uhc.c
@@ -2,7 +2,7 @@
  *
  *	  UHC <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/conversion_procs/utf8_and_win/utf8_and_win.c b/src/backend/utils/mb/conversion_procs/utf8_and_win/utf8_and_win.c
index cec458c9f4..ec8221e89d 100644
--- a/src/backend/utils/mb/conversion_procs/utf8_and_win/utf8_and_win.c
+++ b/src/backend/utils/mb/conversion_procs/utf8_and_win/utf8_and_win.c
@@ -2,7 +2,7 @@
  *
  *	  WIN <--> UTF8
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/backend/utils/mb/mbutils.c b/src/backend/utils/mb/mbutils.c
index fee8e66bd6..8d80ffe78b 100644
--- a/src/backend/utils/mb/mbutils.c
+++ b/src/backend/utils/mb/mbutils.c
@@ -23,7 +23,7 @@
  * the result is validly encoded according to the destination encoding.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/misc/backend_random.c b/src/backend/utils/misc/backend_random.c
index 4f18656fda..a64f3ac398 100644
--- a/src/backend/utils/misc/backend_random.c
+++ b/src/backend/utils/misc/backend_random.c
@@ -15,7 +15,7 @@
  * The built-in implementation uses the standard erand48 algorithm, with
  * a seed shared between all backends.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l
index 3de8e791f2..afe4fe6d77 100644
--- a/src/backend/utils/misc/guc-file.l
+++ b/src/backend/utils/misc/guc-file.l
@@ -2,7 +2,7 @@
 /*
  * Scanner for the configuration file
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/backend/utils/misc/guc-file.l
  */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index e32901d506..72f6be329e 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -6,7 +6,7 @@
  * See src/backend/utils/misc/README for more information.
  *
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  * Written by Peter Eisentraut.
* * IDENTIFICATION diff --git a/src/backend/utils/misc/help_config.c b/src/backend/utils/misc/help_config.c index 08d563d1ae..25f5c82804 100644 --- a/src/backend/utils/misc/help_config.c +++ b/src/backend/utils/misc/help_config.c @@ -7,7 +7,7 @@ * or GUC_DISALLOW_IN_FILE are not displayed, unless the user specifically * requests that variable by name * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/utils/misc/help_config.c diff --git a/src/backend/utils/misc/pg_config.c b/src/backend/utils/misc/pg_config.c index a84878994c..436d2efb21 100644 --- a/src/backend/utils/misc/pg_config.c +++ b/src/backend/utils/misc/pg_config.c @@ -3,7 +3,7 @@ * pg_config.c * Expose same output as pg_config except as an SRF * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/misc/pg_controldata.c b/src/backend/utils/misc/pg_controldata.c index dee6dfc12f..8ab7d1337f 100644 --- a/src/backend/utils/misc/pg_controldata.c +++ b/src/backend/utils/misc/pg_controldata.c @@ -5,7 +5,7 @@ * Routines to expose the contents of the control data file via * a set of SQL functions. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/misc/pg_rusage.c b/src/backend/utils/misc/pg_rusage.c index df643d798c..f135f1b134 100644 --- a/src/backend/utils/misc/pg_rusage.c +++ b/src/backend/utils/misc/pg_rusage.c @@ -4,7 +4,7 @@ * Resource usage measurement support routines. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/utils/misc/ps_status.c b/src/backend/utils/misc/ps_status.c index 5742de0802..83a93d2405 100644 --- a/src/backend/utils/misc/ps_status.c +++ b/src/backend/utils/misc/ps_status.c @@ -7,7 +7,7 @@ * * src/backend/utils/misc/ps_status.c * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * various details abducted from various places *-------------------------------------------------------------------- */ diff --git a/src/backend/utils/misc/queryenvironment.c b/src/backend/utils/misc/queryenvironment.c index a0b10d402b..bc2b1e8186 100644 --- a/src/backend/utils/misc/queryenvironment.c +++ b/src/backend/utils/misc/queryenvironment.c @@ -11,7 +11,7 @@ * on callers, since this is an opaque structure. This is the reason to * require a create function. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/utils/misc/rls.c b/src/backend/utils/misc/rls.c index 49c03d3f88..5ed64dc1dd 100644 --- a/src/backend/utils/misc/rls.c +++ b/src/backend/utils/misc/rls.c @@ -3,7 +3,7 @@ * rls.c * RLS-related utility functions. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/utils/misc/sampling.c b/src/backend/utils/misc/sampling.c index b618ed1d7e..e9e230df01 100644 --- a/src/backend/utils/misc/sampling.c +++ b/src/backend/utils/misc/sampling.c @@ -3,7 +3,7 @@ * sampling.c * Relation block sampling routines. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/utils/misc/superuser.c b/src/backend/utils/misc/superuser.c index 175a0677db..fbe83c9b3c 100644 --- a/src/backend/utils/misc/superuser.c +++ b/src/backend/utils/misc/superuser.c @@ -9,7 +9,7 @@ * the single-user case works. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/utils/misc/timeout.c b/src/backend/utils/misc/timeout.c index 75159ea5b1..f99a63bd5b 100644 --- a/src/backend/utils/misc/timeout.c +++ b/src/backend/utils/misc/timeout.c @@ -3,7 +3,7 @@ * timeout.c * Routines to multiplex SIGALRM interrupts for multiple timeout reasons. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/utils/misc/tzparser.c b/src/backend/utils/misc/tzparser.c index 3986141899..3d07eb7265 100644 --- a/src/backend/utils/misc/tzparser.c +++ b/src/backend/utils/misc/tzparser.c @@ -11,7 +11,7 @@ * PG_TRY if necessary. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c index 1519da05d2..3f9b18844f 100644 --- a/src/backend/utils/mmgr/aset.c +++ b/src/backend/utils/mmgr/aset.c @@ -7,7 +7,7 @@ * type. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/mmgr/dsa.c b/src/backend/utils/mmgr/dsa.c index fe62788188..c2b13100a2 100644 --- a/src/backend/utils/mmgr/dsa.c +++ b/src/backend/utils/mmgr/dsa.c @@ -39,7 +39,7 @@ * empty and be returned to the free page manager, and whole segments can * become empty and be returned to the operating system. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/mmgr/freepage.c b/src/backend/utils/mmgr/freepage.c index b455484bef..5bddfefed4 100644 --- a/src/backend/utils/mmgr/freepage.c +++ b/src/backend/utils/mmgr/freepage.c @@ -42,7 +42,7 @@ * where memory fragmentation is very severe, only a tiny fraction of * the pages under management are consumed by this btree. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c index 10d0fc1f90..338386a5d1 100644 --- a/src/backend/utils/mmgr/generation.c +++ b/src/backend/utils/mmgr/generation.c @@ -6,7 +6,7 @@ * Generation is a custom MemoryContext implementation designed for cases of * chunks with similar lifespan. * - * Portions Copyright (c) 2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2017-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/utils/mmgr/generation.c diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c index 97382a693c..d7baa54808 100644 --- a/src/backend/utils/mmgr/mcxt.c +++ b/src/backend/utils/mmgr/mcxt.c @@ -9,7 +9,7 @@ * context's MemoryContextMethods struct. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/utils/mmgr/memdebug.c b/src/backend/utils/mmgr/memdebug.c index f0a87d3f20..d45534d293 100644 --- a/src/backend/utils/mmgr/memdebug.c +++ b/src/backend/utils/mmgr/memdebug.c @@ -5,7 +5,7 @@ * public API of the memory management subsystem. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/backend/utils/memdebug.c diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c index d03b779407..c93c37d74a 100644 --- a/src/backend/utils/mmgr/portalmem.c +++ b/src/backend/utils/mmgr/portalmem.c @@ -8,7 +8,7 @@ * doesn't actually run the executor for them. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/mmgr/slab.c b/src/backend/utils/mmgr/slab.c index c01c77913a..58beb64ab9 100644 --- a/src/backend/utils/mmgr/slab.c +++ b/src/backend/utils/mmgr/slab.c @@ -7,7 +7,7 @@ * numbers of equally-sized objects are allocated (and freed). * * - * Portions Copyright (c) 2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2017-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/backend/utils/mmgr/slab.c diff --git a/src/backend/utils/probes.d b/src/backend/utils/probes.d index 214dc712ca..560d8ccda3 100644 --- a/src/backend/utils/probes.d +++ b/src/backend/utils/probes.d @@ -1,7 +1,7 @@ /* ---------- * DTrace probes for PostgreSQL backend * - * Copyright (c) 2006-2017, PostgreSQL Global Development Group + * Copyright (c) 2006-2018, PostgreSQL Global Development Group * * src/backend/utils/probes.d * ---------- diff --git a/src/backend/utils/resowner/resowner.c b/src/backend/utils/resowner/resowner.c index 4c35ccf65e..e09a4f1ddb 100644 --- a/src/backend/utils/resowner/resowner.c +++ b/src/backend/utils/resowner/resowner.c @@ -9,7 +9,7 @@ * See utils/resowner/README for more info. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/backend/utils/sort/logtape.c b/src/backend/utils/sort/logtape.c index 5ebb6fb11a..2d07b3d3f5 100644 --- a/src/backend/utils/sort/logtape.c +++ b/src/backend/utils/sort/logtape.c @@ -64,7 +64,7 @@ * care that all calls for a single LogicalTapeSet are made in the same * palloc context. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/sort/sharedtuplestore.c b/src/backend/utils/sort/sharedtuplestore.c index 2c8505672c..e453cc0383 100644 --- a/src/backend/utils/sort/sharedtuplestore.c +++ b/src/backend/utils/sort/sharedtuplestore.c @@ -10,7 +10,7 @@ * scan where each backend reads an arbitrary subset of the tuples that were * written. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/sort/sortsupport.c b/src/backend/utils/sort/sortsupport.c index 73170b7429..8c09fe4a43 100644 --- a/src/backend/utils/sort/sortsupport.c +++ b/src/backend/utils/sort/sortsupport.c @@ -4,7 +4,7 @@ * Support routines for accelerated sorting. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c index 35eebad8e4..eecc66cafa 100644 --- a/src/backend/utils/sort/tuplesort.c +++ b/src/backend/utils/sort/tuplesort.c @@ -75,7 +75,7 @@ * too many -- see the comments in tuplesort_merge_order). * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/sort/tuplestore.c b/src/backend/utils/sort/tuplestore.c index 1977b61fd9..d602753ca9 100644 --- a/src/backend/utils/sort/tuplestore.c +++ b/src/backend/utils/sort/tuplestore.c @@ -43,7 +43,7 @@ * before switching to the other state or activating a different read pointer. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c index 200fa3765f..d789f02bfa 100644 --- a/src/backend/utils/time/combocid.c +++ b/src/backend/utils/time/combocid.c @@ -30,7 +30,7 @@ * destroyed at the end of each transaction. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c index 0b032905a5..e58c69dbd7 100644 --- a/src/backend/utils/time/snapmgr.c +++ b/src/backend/utils/time/snapmgr.c @@ -35,7 +35,7 @@ * stack is empty. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c index 2b218e07e6..f7c4c9188c 100644 --- a/src/backend/utils/time/tqual.c +++ b/src/backend/utils/time/tqual.c @@ -52,7 +52,7 @@ * HeapTupleSatisfiesAny() * all tuples are visible * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/bin/Makefile b/src/bin/Makefile index bc96f37dfc..3b35835abe 100644 --- a/src/bin/Makefile +++ b/src/bin/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin (client programs) # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/bin/Makefile diff --git a/src/bin/initdb/Makefile b/src/bin/initdb/Makefile index ec26652e82..dae3daf8bd 100644 --- a/src/bin/initdb/Makefile +++ b/src/bin/initdb/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/initdb # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/bin/initdb/Makefile diff --git a/src/bin/initdb/findtimezone.c b/src/bin/initdb/findtimezone.c index c9b4141a16..4c3a91a122 100644 --- a/src/bin/initdb/findtimezone.c +++ b/src/bin/initdb/findtimezone.c @@ -3,7 +3,7 @@ * findtimezone.c * Functions for determining the default timezone to use. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/initdb/findtimezone.c diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c index ddc850db1c..2efd3b75b1 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -38,7 +38,7 @@ * * This code is released under the terms of the PostgreSQL License. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/initdb/initdb.c diff --git a/src/bin/pg_basebackup/Makefile b/src/bin/pg_basebackup/Makefile index b707af9d26..54f915eec7 100644 --- a/src/bin/pg_basebackup/Makefile +++ b/src/bin/pg_basebackup/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/pg_basebackup # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/bin/pg_basebackup/Makefile diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c index 4fbca7d8e8..1b32592063 100644 --- a/src/bin/pg_basebackup/pg_basebackup.c +++ b/src/bin/pg_basebackup/pg_basebackup.c @@ -4,7 +4,7 @@ * * Author: Magnus Hagander * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/pg_basebackup.c diff --git a/src/bin/pg_basebackup/pg_receivewal.c b/src/bin/pg_basebackup/pg_receivewal.c index d6f0f3cb1a..b82e073b86 100644 --- a/src/bin/pg_basebackup/pg_receivewal.c +++ b/src/bin/pg_basebackup/pg_receivewal.c @@ -5,7 +5,7 @@ * * Author: Magnus Hagander * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/pg_receivewal.c diff --git a/src/bin/pg_basebackup/pg_recvlogical.c b/src/bin/pg_basebackup/pg_recvlogical.c index f3cb831d59..53e4661d68 100644 --- a/src/bin/pg_basebackup/pg_recvlogical.c +++ b/src/bin/pg_basebackup/pg_recvlogical.c @@ -3,7 +3,7 @@ * pg_recvlogical.c - receive data from a logical decoding slot in a streaming * fashion and write it to a local file. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/pg_recvlogical.c diff --git a/src/bin/pg_basebackup/receivelog.c b/src/bin/pg_basebackup/receivelog.c index d29b501740..1076878630 100644 --- a/src/bin/pg_basebackup/receivelog.c +++ b/src/bin/pg_basebackup/receivelog.c @@ -5,7 +5,7 @@ * * Author: Magnus Hagander * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/receivelog.c diff --git a/src/bin/pg_basebackup/receivelog.h b/src/bin/pg_basebackup/receivelog.h index 5b8c33fc26..a636cf3fa0 100644 --- a/src/bin/pg_basebackup/receivelog.h +++ b/src/bin/pg_basebackup/receivelog.h @@ -2,7 +2,7 @@ * * receivelog.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/receivelog.h diff --git a/src/bin/pg_basebackup/streamutil.c b/src/bin/pg_basebackup/streamutil.c index a57ff8f2c4..c88cede167 100644 --- a/src/bin/pg_basebackup/streamutil.c +++ b/src/bin/pg_basebackup/streamutil.c @@ -5,7 +5,7 @@ * * Author: Magnus Hagander * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/streamutil.c diff --git a/src/bin/pg_basebackup/streamutil.h b/src/bin/pg_basebackup/streamutil.h index e9f1d1f536..6854bbc31d 100644 --- a/src/bin/pg_basebackup/streamutil.h +++ b/src/bin/pg_basebackup/streamutil.h @@ -2,7 +2,7 @@ * * streamutil.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/streamutil.h diff --git a/src/bin/pg_basebackup/walmethods.c b/src/bin/pg_basebackup/walmethods.c index 02d368b242..b4558a0184 100644 --- a/src/bin/pg_basebackup/walmethods.c +++ b/src/bin/pg_basebackup/walmethods.c @@ -5,7 +5,7 @@ * NOTE! The caller must ensure that only one method is instantiated in * any given program, and that it's only instantiated once! 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/walmethods.c diff --git a/src/bin/pg_basebackup/walmethods.h b/src/bin/pg_basebackup/walmethods.h index 3fdc947ab9..2d1a10fc63 100644 --- a/src/bin/pg_basebackup/walmethods.h +++ b/src/bin/pg_basebackup/walmethods.h @@ -2,7 +2,7 @@ * * walmethods.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_basebackup/walmethods.h diff --git a/src/bin/pg_config/Makefile b/src/bin/pg_config/Makefile index c41008763e..02e6f9da0e 100644 --- a/src/bin/pg_config/Makefile +++ b/src/bin/pg_config/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/pg_config # -# Copyright (c) 1998-2017, PostgreSQL Global Development Group +# Copyright (c) 1998-2018, PostgreSQL Global Development Group # # src/bin/pg_config/Makefile # diff --git a/src/bin/pg_config/pg_config.c b/src/bin/pg_config/pg_config.c index fa2a5a9943..a341b756de 100644 --- a/src/bin/pg_config/pg_config.c +++ b/src/bin/pg_config/pg_config.c @@ -15,7 +15,7 @@ * * This code is released under the terms of the PostgreSQL License. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/bin/pg_config/pg_config.c * diff --git a/src/bin/pg_controldata/Makefile b/src/bin/pg_controldata/Makefile index fd87daa11a..28d6d666fe 100644 --- a/src/bin/pg_controldata/Makefile +++ b/src/bin/pg_controldata/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/pg_controldata # -# Copyright (c) 1998-2017, PostgreSQL Global Development Group +# Copyright (c) 1998-2018, PostgreSQL Global Development Group # # src/bin/pg_controldata/Makefile # diff --git a/src/bin/pg_ctl/Makefile b/src/bin/pg_ctl/Makefile index e734c952a2..4afc791cad 100644 --- a/src/bin/pg_ctl/Makefile +++ b/src/bin/pg_ctl/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/pg_ctl # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/bin/pg_ctl/Makefile diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c index e43e7b24e1..62c72c3fcf 100644 --- a/src/bin/pg_ctl/pg_ctl.c +++ b/src/bin/pg_ctl/pg_ctl.c @@ -2,7 +2,7 @@ * * pg_ctl --- start/stops/restarts the PostgreSQL server * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/bin/pg_ctl/pg_ctl.c * diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile index 3700884720..e3bfc95f16 100644 --- a/src/bin/pg_dump/Makefile +++ b/src/bin/pg_dump/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/pg_dump # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/bin/pg_dump/Makefile diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c index 4b47951de1..7f5f351486 100644 --- a/src/bin/pg_dump/common.c +++ b/src/bin/pg_dump/common.c @@ -4,7 +4,7 @@ * Catalog routines used by pg_dump; long ago these were shared * by another dump tool, but not anymore. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c index 54003f7da6..a96da15dc1 100644 --- a/src/bin/pg_dump/compress_io.c +++ b/src/bin/pg_dump/compress_io.c @@ -4,7 +4,7 @@ * Routines for archivers to write an uncompressed or compressed data * stream. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * This file includes two APIs for dealing with compressed data. The first diff --git a/src/bin/pg_dump/compress_io.h b/src/bin/pg_dump/compress_io.h index f42eb84007..10fde8bdef 100644 --- a/src/bin/pg_dump/compress_io.h +++ b/src/bin/pg_dump/compress_io.h @@ -3,7 +3,7 @@ * compress_io.h * Interface to compress_io.c routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c index 12290a1aae..32ad600fd0 100644 --- a/src/bin/pg_dump/dumputils.c +++ b/src/bin/pg_dump/dumputils.c @@ -5,7 +5,7 @@ * Basically this is stuff that is useful in both pg_dump and pg_dumpall. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/pg_dump/dumputils.c diff --git a/src/bin/pg_dump/dumputils.h b/src/bin/pg_dump/dumputils.h index fe364dd8c4..d5f150dfa0 100644 --- a/src/bin/pg_dump/dumputils.h +++ b/src/bin/pg_dump/dumputils.h @@ -5,7 +5,7 @@ * Basically this is stuff that is useful in both pg_dump and pg_dumpall. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/pg_dump/dumputils.h diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c index 8b996f4699..02e79f2f27 100644 --- a/src/bin/pg_dump/parallel.c +++ b/src/bin/pg_dump/parallel.c @@ -4,7 +4,7 @@ * * Parallel support for pg_dump and pg_restore * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/bin/pg_dump/parallel.h b/src/bin/pg_dump/parallel.h index 8d800bdf83..1577400622 100644 --- a/src/bin/pg_dump/parallel.h +++ b/src/bin/pg_dump/parallel.h @@ -4,7 +4,7 @@ * * Parallel support for pg_dump and pg_restore * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c index 112fbb0f0c..4aabb40f59 100644 --- a/src/bin/pg_dump/pg_backup_directory.c +++ b/src/bin/pg_dump/pg_backup_directory.c @@ -17,7 +17,7 @@ * sync. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * Portions Copyright (c) 2000, Philip Warner * diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c index 066121449e..df5fe708ba 100644 --- a/src/bin/pg_dump/pg_backup_utils.c +++ b/src/bin/pg_dump/pg_backup_utils.c @@ -4,7 +4,7 @@ * Utility routines shared by pg_dump and pg_restore * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/pg_dump/pg_backup_utils.c diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h index 14661a1160..6eaf302fc7 100644 --- a/src/bin/pg_dump/pg_backup_utils.h +++ b/src/bin/pg_dump/pg_backup_utils.h @@ -4,7 +4,7 @@ * Utility routines shared by pg_dump and pg_restore. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/pg_dump/pg_backup_utils.h diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index e6701aaa78..27628a397c 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -4,7 +4,7 @@ * pg_dump is a utility for dumping out a postgres database * into a script file. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * pg_dump will read the system catalogs in a database and dump out a diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h index da884ffd09..49a02b4fa8 100644 --- a/src/bin/pg_dump/pg_dump.h +++ b/src/bin/pg_dump/pg_dump.h @@ -3,7 +3,7 @@ * pg_dump.h * Common header file for the pg_dump utility * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/pg_dump/pg_dump.h diff --git a/src/bin/pg_dump/pg_dump_sort.c b/src/bin/pg_dump/pg_dump_sort.c index 48b6dd594c..6da1c35a42 100644 --- a/src/bin/pg_dump/pg_dump_sort.c +++ b/src/bin/pg_dump/pg_dump_sort.c @@ -4,7 +4,7 @@ * Sort the items of a dump into a safe order for dumping * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c index 41c5ff89b7..3dd2c3871e 100644 --- a/src/bin/pg_dump/pg_dumpall.c +++ b/src/bin/pg_dump/pg_dumpall.c @@ -2,7 +2,7 @@ * * pg_dumpall.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * pg_dumpall forces all pg_dump output to be text, since it also outputs diff --git a/src/bin/pg_resetwal/Makefile b/src/bin/pg_resetwal/Makefile index 0f6e5da255..5ab7ad33e0 100644 --- a/src/bin/pg_resetwal/Makefile +++ b/src/bin/pg_resetwal/Makefile @@ 
-2,7 +2,7 @@ # # Makefile for src/bin/pg_resetwal # -# Copyright (c) 1998-2017, PostgreSQL Global Development Group +# Copyright (c) 1998-2018, PostgreSQL Global Development Group # # src/bin/pg_resetwal/Makefile # diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c index 9f93385f44..a132cf2e9f 100644 --- a/src/bin/pg_resetwal/pg_resetwal.c +++ b/src/bin/pg_resetwal/pg_resetwal.c @@ -20,7 +20,7 @@ * step 2 ... * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/pg_resetwal/pg_resetwal.c diff --git a/src/bin/pg_rewind/Makefile b/src/bin/pg_rewind/Makefile index e64ad76509..422c3eeba8 100644 --- a/src/bin/pg_rewind/Makefile +++ b/src/bin/pg_rewind/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/pg_rewind # -# Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group # # src/bin/pg_rewind/Makefile # diff --git a/src/bin/pg_rewind/copy_fetch.c b/src/bin/pg_rewind/copy_fetch.c index f7ac5b30b5..380fb3b88e 100644 --- a/src/bin/pg_rewind/copy_fetch.c +++ b/src/bin/pg_rewind/copy_fetch.c @@ -3,7 +3,7 @@ * copy_fetch.c * Functions for using a data directory as the source. * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/datapagemap.c b/src/bin/pg_rewind/datapagemap.c index e53920d556..01bb1a4694 100644 --- a/src/bin/pg_rewind/datapagemap.c +++ b/src/bin/pg_rewind/datapagemap.c @@ -5,7 +5,7 @@ * * This is a fairly simple bitmap. * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/datapagemap.h b/src/bin/pg_rewind/datapagemap.h index db991c16df..50c0ef779c 100644 --- a/src/bin/pg_rewind/datapagemap.h +++ b/src/bin/pg_rewind/datapagemap.h @@ -2,7 +2,7 @@ * * datapagemap.h * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/fetch.c b/src/bin/pg_rewind/fetch.c index 13553e3b5a..5a35a8f2e3 100644 --- a/src/bin/pg_rewind/fetch.c +++ b/src/bin/pg_rewind/fetch.c @@ -10,7 +10,7 @@ * connection (libpq_fetch.c) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/fetch.h b/src/bin/pg_rewind/fetch.h index 7288120a0b..5ee06817b6 100644 --- a/src/bin/pg_rewind/fetch.h +++ b/src/bin/pg_rewind/fetch.h @@ -8,7 +8,7 @@ * directory (copy method), or a remote PostgreSQL server (libpq fetch * method). 
* - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/file_ops.c b/src/bin/pg_rewind/file_ops.c index 4229f0e056..705383d184 100644 --- a/src/bin/pg_rewind/file_ops.c +++ b/src/bin/pg_rewind/file_ops.c @@ -8,7 +8,7 @@ * do nothing if it's enabled. You should avoid accessing the target files * directly but if you do, make sure you honor the --dry-run mode! * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/file_ops.h b/src/bin/pg_rewind/file_ops.h index f9aea52d6d..be580ee4db 100644 --- a/src/bin/pg_rewind/file_ops.h +++ b/src/bin/pg_rewind/file_ops.h @@ -3,7 +3,7 @@ * file_ops.h * Helper functions for operating on files * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/filemap.c b/src/bin/pg_rewind/filemap.c index dd6919025d..19da1ee7e0 100644 --- a/src/bin/pg_rewind/filemap.c +++ b/src/bin/pg_rewind/filemap.c @@ -3,7 +3,7 @@ * filemap.c * A data structure for keeping track of files that have changed. * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/filemap.h b/src/bin/pg_rewind/filemap.h index 6c97fa74d1..63ba32a6cd 100644 --- a/src/bin/pg_rewind/filemap.h +++ b/src/bin/pg_rewind/filemap.h @@ -2,7 +2,7 @@ * * filemap.h * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group *------------------------------------------------------------------------- */ #ifndef FILEMAP_H diff --git a/src/bin/pg_rewind/libpq_fetch.c b/src/bin/pg_rewind/libpq_fetch.c index 79bec40b02..d1726d1c74 100644 --- a/src/bin/pg_rewind/libpq_fetch.c +++ b/src/bin/pg_rewind/libpq_fetch.c @@ -3,7 +3,7 @@ * libpq_fetch.c * Functions for fetching files from a remote server. 
* - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/logging.c b/src/bin/pg_rewind/logging.c index cae776df12..f48f7e9070 100644 --- a/src/bin/pg_rewind/logging.c +++ b/src/bin/pg_rewind/logging.c @@ -3,7 +3,7 @@ * logging.c * logging functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/logging.h b/src/bin/pg_rewind/logging.h index c665194050..66455039db 100644 --- a/src/bin/pg_rewind/logging.h +++ b/src/bin/pg_rewind/logging.h @@ -4,7 +4,7 @@ * prototypes for logging functions * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *------------------------------------------------------------------------- diff --git a/src/bin/pg_rewind/parsexlog.c b/src/bin/pg_rewind/parsexlog.c index 0fc71d2a13..b4c1f827a6 100644 --- a/src/bin/pg_rewind/parsexlog.c +++ b/src/bin/pg_rewind/parsexlog.c @@ -3,7 +3,7 @@ * parsexlog.c * Functions for reading Write-Ahead-Log * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *------------------------------------------------------------------------- diff --git a/src/bin/pg_rewind/pg_rewind.c b/src/bin/pg_rewind/pg_rewind.c index 6079156e80..72ab2f8d5e 100644 --- a/src/bin/pg_rewind/pg_rewind.c +++ b/src/bin/pg_rewind/pg_rewind.c @@ -3,7 +3,7 @@ * pg_rewind.c * Synchronizes a PostgreSQL data directory to a new timeline * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_rewind/pg_rewind.h b/src/bin/pg_rewind/pg_rewind.h index 7bec34ff55..3f4ba7a267 100644 --- a/src/bin/pg_rewind/pg_rewind.h +++ b/src/bin/pg_rewind/pg_rewind.h @@ -3,7 +3,7 @@ * pg_rewind.h * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *------------------------------------------------------------------------- diff --git a/src/bin/pg_rewind/timeline.c b/src/bin/pg_rewind/timeline.c index 3cd5af57f2..8a8c587440 100644 --- a/src/bin/pg_rewind/timeline.c +++ b/src/bin/pg_rewind/timeline.c @@ -3,7 +3,7 @@ * timeline.c * timeline-related functions. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c index 1b9083597c..8d4f254f9f 100644 --- a/src/bin/pg_upgrade/check.c +++ b/src/bin/pg_upgrade/check.c @@ -3,7 +3,7 @@ * * server checks and output routines * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/check.c */ diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c index ca3db1a2f6..0fe98a550e 100644 --- a/src/bin/pg_upgrade/controldata.c +++ b/src/bin/pg_upgrade/controldata.c @@ -3,7 +3,7 @@ * * controldata functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/controldata.c */ diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c index 77c03ac05b..5ed6b786e2 100644 --- a/src/bin/pg_upgrade/dump.c +++ b/src/bin/pg_upgrade/dump.c @@ -3,7 +3,7 @@ * * dump functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/dump.c */ diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c index f5cd74ff97..1fa56b8c61 100644 --- a/src/bin/pg_upgrade/exec.c +++ b/src/bin/pg_upgrade/exec.c @@ -3,7 +3,7 @@ * * execution functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/exec.c */ diff --git a/src/bin/pg_upgrade/file.c b/src/bin/pg_upgrade/file.c index ae8d89fb66..f88e3d558f 100644 --- a/src/bin/pg_upgrade/file.c +++ b/src/bin/pg_upgrade/file.c @@ -3,7 +3,7 @@ * * file system operations * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/file.c */ diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c index 063a94f0ca..d61fa38c92 100644 --- a/src/bin/pg_upgrade/function.c +++ b/src/bin/pg_upgrade/function.c @@ -3,7 +3,7 @@ * * server-side function support * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/function.c */ diff --git a/src/bin/pg_upgrade/info.c b/src/bin/pg_upgrade/info.c index 0b14998f4b..f9f07f491c 100644 --- a/src/bin/pg_upgrade/info.c +++ b/src/bin/pg_upgrade/info.c @@ -3,7 +3,7 @@ * * information support functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/info.c */ diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c index f7f2ebdacf..9dbc9225a6 100644 --- a/src/bin/pg_upgrade/option.c +++ b/src/bin/pg_upgrade/option.c @@ -3,7 +3,7 @@ * * options functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/option.c */ diff --git a/src/bin/pg_upgrade/parallel.c b/src/bin/pg_upgrade/parallel.c index 7c85a13a99..cb1dc434f6 100644 --- a/src/bin/pg_upgrade/parallel.c +++ b/src/bin/pg_upgrade/parallel.c @@ -3,7 +3,7 @@ * * multi-process support * - * Copyright (c) 2010-2017, 
PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/parallel.c */ diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index c10103f0bf..2b7da529e5 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -3,7 +3,7 @@ * * main source file * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/pg_upgrade.c */ diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h index a21dd48c42..fbe69cb577 100644 --- a/src/bin/pg_upgrade/pg_upgrade.h +++ b/src/bin/pg_upgrade/pg_upgrade.h @@ -1,7 +1,7 @@ /* * pg_upgrade.h * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/pg_upgrade.h */ diff --git a/src/bin/pg_upgrade/relfilenode.c b/src/bin/pg_upgrade/relfilenode.c index 8c3f8ac332..50bee281f8 100644 --- a/src/bin/pg_upgrade/relfilenode.c +++ b/src/bin/pg_upgrade/relfilenode.c @@ -3,7 +3,7 @@ * * relfilenode functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/relfilenode.c */ diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c index a91f18916c..96181237f8 100644 --- a/src/bin/pg_upgrade/server.c +++ b/src/bin/pg_upgrade/server.c @@ -3,7 +3,7 @@ * * database server functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/server.c */ diff --git a/src/bin/pg_upgrade/tablespace.c b/src/bin/pg_upgrade/tablespace.c index 31958d5c67..b0cbf81f7d 100644 --- a/src/bin/pg_upgrade/tablespace.c +++ b/src/bin/pg_upgrade/tablespace.c @@ -3,7 +3,7 @@ * * tablespace functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/tablespace.c */ diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh index 1bacf066aa..39983abea1 100644 --- a/src/bin/pg_upgrade/test.sh +++ b/src/bin/pg_upgrade/test.sh @@ -6,7 +6,7 @@ # runs the regression tests (to put in some data), runs pg_dumpall, # runs pg_upgrade, runs pg_dumpall again, compares the dumps. 
# -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California set -e diff --git a/src/bin/pg_upgrade/util.c b/src/bin/pg_upgrade/util.c index 44c1bc880e..4dec77b3b6 100644 --- a/src/bin/pg_upgrade/util.c +++ b/src/bin/pg_upgrade/util.c @@ -3,7 +3,7 @@ * * utility functions * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/util.c */ diff --git a/src/bin/pg_upgrade/version.c b/src/bin/pg_upgrade/version.c index 524b604296..76e9d65537 100644 --- a/src/bin/pg_upgrade/version.c +++ b/src/bin/pg_upgrade/version.c @@ -3,7 +3,7 @@ * * Postgres-version-specific routines * - * Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Copyright (c) 2010-2018, PostgreSQL Global Development Group * src/bin/pg_upgrade/version.c */ diff --git a/src/bin/pg_waldump/compat.c b/src/bin/pg_waldump/compat.c index 19a3a9d0c5..6ff9eb7e77 100644 --- a/src/bin/pg_waldump/compat.c +++ b/src/bin/pg_waldump/compat.c @@ -3,7 +3,7 @@ * compat.c * Reimplementations of various backend functions. * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_waldump/compat.c diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c index 6443eda6df..242aff2d68 100644 --- a/src/bin/pg_waldump/pg_waldump.c +++ b/src/bin/pg_waldump/pg_waldump.c @@ -2,7 +2,7 @@ * * pg_waldump.c - decode and display WAL * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/bin/pg_waldump/pg_waldump.c diff --git a/src/bin/pgbench/exprparse.y b/src/bin/pgbench/exprparse.y index 74ffe5e7a7..26494fdd9f 100644 --- a/src/bin/pgbench/exprparse.y +++ b/src/bin/pgbench/exprparse.y @@ -4,7 +4,7 @@ * exprparse.y * bison grammar for a simple expression syntax * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/pgbench/exprparse.y diff --git a/src/bin/pgbench/exprscan.l b/src/bin/pgbench/exprscan.l index 9f46fb9db8..b86e77a7ea 100644 --- a/src/bin/pgbench/exprscan.l +++ b/src/bin/pgbench/exprscan.l @@ -15,7 +15,7 @@ * * Note that this lexer operates within the framework created by psqlscan.l, * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/bin/pgbench/exprscan.l diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index e065f7bedc..fc2c7342ed 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -5,7 +5,7 @@ * Originally written by Tatsuo Ishii and enhanced by many contributors. 
* * src/bin/pgbench/pgbench.c - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * ALL RIGHTS RESERVED; * * Permission to use, copy, modify, and distribute this software and its diff --git a/src/bin/pgbench/pgbench.h b/src/bin/pgbench/pgbench.h index 0e92882a4c..ce3c260988 100644 --- a/src/bin/pgbench/pgbench.h +++ b/src/bin/pgbench/pgbench.h @@ -2,7 +2,7 @@ * * pgbench.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *------------------------------------------------------------------------- diff --git a/src/bin/pgevent/Makefile b/src/bin/pgevent/Makefile index 6e6797ba13..5c79eb8e44 100644 --- a/src/bin/pgevent/Makefile +++ b/src/bin/pgevent/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/pgevent # -# Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Copyright (c) 1996-2018, PostgreSQL Global Development Group # #------------------------------------------------------------------------- diff --git a/src/bin/psql/Makefile b/src/bin/psql/Makefile index cabfe15f97..c8eb2f95cc 100644 --- a/src/bin/psql/Makefile +++ b/src/bin/psql/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/bin/psql # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/bin/psql/Makefile diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c index 8cc4de3878..015c391aa4 100644 --- a/src/bin/psql/command.c +++ b/src/bin/psql/command.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/command.c */ diff --git a/src/bin/psql/command.h b/src/bin/psql/command.h index 7aedd0d625..95ad02dfe2 100644 --- a/src/bin/psql/command.h +++ b/src/bin/psql/command.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/command.h */ diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c index 7a91a44b2b..1c82180efb 100644 --- a/src/bin/psql/common.c +++ b/src/bin/psql/common.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/common.c */ diff --git a/src/bin/psql/common.h b/src/bin/psql/common.h index f34868b54e..43142a7aa6 100644 --- a/src/bin/psql/common.h +++ b/src/bin/psql/common.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/common.h */ diff --git a/src/bin/psql/conditional.c b/src/bin/psql/conditional.c index 63977ce5dd..cebf8c766c 100644 --- a/src/bin/psql/conditional.c +++ b/src/bin/psql/conditional.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/conditional.c */ diff --git 
a/src/bin/psql/conditional.h b/src/bin/psql/conditional.h index 0957627742..565875ac31 100644 --- a/src/bin/psql/conditional.h +++ b/src/bin/psql/conditional.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/conditional.h */ diff --git a/src/bin/psql/copy.c b/src/bin/psql/copy.c index 724ea9211a..555c6331a3 100644 --- a/src/bin/psql/copy.c +++ b/src/bin/psql/copy.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/copy.c */ diff --git a/src/bin/psql/copy.h b/src/bin/psql/copy.h index a79550de19..f4107d70b0 100644 --- a/src/bin/psql/copy.h +++ b/src/bin/psql/copy.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/copy.h */ diff --git a/src/bin/psql/create_help.pl b/src/bin/psql/create_help.pl index 9fa1855878..cb0e6e806e 100644 --- a/src/bin/psql/create_help.pl +++ b/src/bin/psql/create_help.pl @@ -3,7 +3,7 @@ ################################################################# # create_help.pl -- converts SGML docs to internal psql help # -# Copyright (c) 2000-2017, PostgreSQL Global Development Group +# Copyright (c) 2000-2018, PostgreSQL Global Development Group # # src/bin/psql/create_help.pl ################################################################# diff --git a/src/bin/psql/crosstabview.c b/src/bin/psql/crosstabview.c index ed61c346ee..05042b3e13 100644 --- a/src/bin/psql/crosstabview.c +++ b/src/bin/psql/crosstabview.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/crosstabview.c */ diff --git a/src/bin/psql/crosstabview.h b/src/bin/psql/crosstabview.h index ad63ddef50..596637f3ff 100644 --- a/src/bin/psql/crosstabview.h +++ b/src/bin/psql/crosstabview.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/crosstabview.h */ diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index 3fc69c46c0..f2e62946d8 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -6,7 +6,7 @@ * with servers of versions 7.4 and up. It's okay to omit irrelevant * information for an old server, but not to fail outright. 
* - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/describe.c */ diff --git a/src/bin/psql/describe.h b/src/bin/psql/describe.h index 14a5667f3e..a4cc5efae0 100644 --- a/src/bin/psql/describe.h +++ b/src/bin/psql/describe.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/describe.h */ diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c index a926c40b9b..702e742af4 100644 --- a/src/bin/psql/help.c +++ b/src/bin/psql/help.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/help.c */ @@ -653,7 +653,7 @@ print_copyright(void) puts( "PostgreSQL Database Management System\n" "(formerly known as Postgres, then as Postgres95)\n\n" - "Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group\n\n" + "Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group\n\n" "Portions Copyright (c) 1994, The Regents of the University of California\n\n" "Permission to use, copy, modify, and distribute this software and its\n" "documentation for any purpose, without fee, and without a written agreement\n" diff --git a/src/bin/psql/help.h b/src/bin/psql/help.h index 3ef4094476..7a292497e7 100644 --- a/src/bin/psql/help.h +++ b/src/bin/psql/help.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/help.h */ diff --git a/src/bin/psql/input.c b/src/bin/psql/input.c index 62f5f77383..8e32cd0b33 100644 --- a/src/bin/psql/input.c +++ b/src/bin/psql/input.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/input.c */ diff --git a/src/bin/psql/input.h b/src/bin/psql/input.h index 35886dae22..6c2ff61540 100644 --- a/src/bin/psql/input.h +++ b/src/bin/psql/input.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/input.h */ diff --git a/src/bin/psql/large_obj.c b/src/bin/psql/large_obj.c index 8a8887202a..7faf5591b3 100644 --- a/src/bin/psql/large_obj.c +++ b/src/bin/psql/large_obj.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/large_obj.c */ diff --git a/src/bin/psql/large_obj.h b/src/bin/psql/large_obj.h index 5750b5d9cc..e749a824c5 100644 --- a/src/bin/psql/large_obj.h +++ b/src/bin/psql/large_obj.h @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive terminal * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/bin/psql/large_obj.h */ diff --git a/src/bin/psql/mainloop.c b/src/bin/psql/mainloop.c index 62cf1f404a..a8778a57aa 100644 --- a/src/bin/psql/mainloop.c +++ b/src/bin/psql/mainloop.c @@ -1,7 +1,7 @@ /* * psql - the PostgreSQL interactive 
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/mainloop.c
  */
diff --git a/src/bin/psql/mainloop.h b/src/bin/psql/mainloop.h
index 8ef8cc1bd6..6cd90f03b8 100644
--- a/src/bin/psql/mainloop.h
+++ b/src/bin/psql/mainloop.h
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/mainloop.h
  */
diff --git a/src/bin/psql/prompt.c b/src/bin/psql/prompt.c
index 913b23e4cd..b176972884 100644
--- a/src/bin/psql/prompt.c
+++ b/src/bin/psql/prompt.c
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/prompt.c
  */
diff --git a/src/bin/psql/prompt.h b/src/bin/psql/prompt.h
index a7a95effb4..2354b8f8c7 100644
--- a/src/bin/psql/prompt.h
+++ b/src/bin/psql/prompt.h
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/prompt.h
  */
diff --git a/src/bin/psql/psqlscanslash.h b/src/bin/psql/psqlscanslash.h
index db76061332..8e8efb2f0b 100644
--- a/src/bin/psql/psqlscanslash.h
+++ b/src/bin/psql/psqlscanslash.h
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/psqlscanslash.h
  */
diff --git a/src/bin/psql/psqlscanslash.l b/src/bin/psql/psqlscanslash.l
index e3cde04188..5e0bd0e774 100644
--- a/src/bin/psql/psqlscanslash.l
+++ b/src/bin/psql/psqlscanslash.l
@@ -8,7 +8,7 @@
  *
  * See fe_utils/psqlscan_int.h for additional commentary.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/bin/psql/settings.h b/src/bin/psql/settings.h
index 96338c3197..69e617e6b5 100644
--- a/src/bin/psql/settings.h
+++ b/src/bin/psql/settings.h
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/settings.h
  */
diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c
index 0dbd7841fb..ec6ae45b24 100644
--- a/src/bin/psql/startup.c
+++ b/src/bin/psql/startup.c
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/startup.c
  */
diff --git a/src/bin/psql/stringutils.c b/src/bin/psql/stringutils.c
index eefd18fbd9..29b9c9c7f0 100644
--- a/src/bin/psql/stringutils.c
+++ b/src/bin/psql/stringutils.c
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/stringutils.c
  */
diff --git a/src/bin/psql/stringutils.h b/src/bin/psql/stringutils.h
index 213473f919..d843d7119b 100644
--- a/src/bin/psql/stringutils.h
+++ b/src/bin/psql/stringutils.h
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/stringutils.h
  */
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 468e50aa31..b51098deca 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/tab-complete.c
  */
diff --git a/src/bin/psql/tab-complete.h b/src/bin/psql/tab-complete.h
index 1a42ef1c66..544318c36d 100644
--- a/src/bin/psql/tab-complete.h
+++ b/src/bin/psql/tab-complete.h
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/tab-complete.h
  */
diff --git a/src/bin/psql/variables.c b/src/bin/psql/variables.c
index 5f7f6ce822..f093442644 100644
--- a/src/bin/psql/variables.c
+++ b/src/bin/psql/variables.c
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/bin/psql/variables.c
  */
diff --git a/src/bin/psql/variables.h b/src/bin/psql/variables.h
index 02d85b1bc2..03af11197c 100644
--- a/src/bin/psql/variables.h
+++ b/src/bin/psql/variables.h
@@ -1,7 +1,7 @@
 /*
  * psql - the PostgreSQL interactive terminal
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * This implements a sort of variable repository. One could also think of it
  * as a cheap version of an associative array. Each variable has a string
diff --git a/src/bin/scripts/Makefile b/src/bin/scripts/Makefile
index a9c24a9f83..0cc528e965 100644
--- a/src/bin/scripts/Makefile
+++ b/src/bin/scripts/Makefile
@@ -2,7 +2,7 @@
 #
 # Makefile for src/bin/scripts
 #
-# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 # Portions Copyright (c) 1994, Regents of the University of California
 #
 # src/bin/scripts/Makefile
diff --git a/src/bin/scripts/clusterdb.c b/src/bin/scripts/clusterdb.c
index a6640aa57b..92c42f62bf 100644
--- a/src/bin/scripts/clusterdb.c
+++ b/src/bin/scripts/clusterdb.c
@@ -2,7 +2,7 @@
  *
  * clusterdb
  *
- * Portions Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * src/bin/scripts/clusterdb.c
  *
diff --git a/src/bin/scripts/common.c b/src/bin/scripts/common.c
index 7394bf293e..e20a5e9146 100644
--- a/src/bin/scripts/common.c
+++ b/src/bin/scripts/common.c
@@ -4,7 +4,7 @@
  * Common support routines for bin/scripts/
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/bin/scripts/common.c
diff --git a/src/bin/scripts/common.h b/src/bin/scripts/common.h
index 18d8d12d15..a660d6848a 100644
--- a/src/bin/scripts/common.h
+++ b/src/bin/scripts/common.h
@@ -2,7 +2,7 @@
  * common.h
  * Common support routines for bin/scripts/
  *
- * Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *
  * src/bin/scripts/common.h
  */
diff --git a/src/bin/scripts/createdb.c b/src/bin/scripts/createdb.c
index 88ea401e39..81a8192136 100644
--- a/src/bin/scripts/createdb.c
+++ b/src/bin/scripts/createdb.c
@@ -2,7 +2,7 @@
  *
  * createdb
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/bin/scripts/createdb.c
diff --git a/src/bin/scripts/createuser.c b/src/bin/scripts/createuser.c
index 0e36edcc5d..c488c018e0 100644
--- a/src/bin/scripts/createuser.c
+++ b/src/bin/scripts/createuser.c
@@ -2,7 +2,7 @@
  *
  * createuser
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/bin/scripts/createuser.c
diff --git a/src/bin/scripts/dropdb.c b/src/bin/scripts/dropdb.c
index 5dc8558e8e..81929c43c4 100644
--- a/src/bin/scripts/dropdb.c
+++ b/src/bin/scripts/dropdb.c
@@ -2,7 +2,7 @@
  *
  * dropdb
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/bin/scripts/dropdb.c
diff --git a/src/bin/scripts/dropuser.c b/src/bin/scripts/dropuser.c
index 095c0a39ff..e3191afc31 100644
--- a/src/bin/scripts/dropuser.c
+++ b/src/bin/scripts/dropuser.c
@@ -2,7 +2,7 @@
  *
  * dropuser
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/bin/scripts/dropuser.c
diff --git a/src/bin/scripts/pg_isready.c b/src/bin/scripts/pg_isready.c
index c7c06cc6ff..f7ad7b40f0 100644
--- a/src/bin/scripts/pg_isready.c
+++ b/src/bin/scripts/pg_isready.c
@@ -2,7 +2,7 @@
  *
  * pg_isready --- checks the status of the PostgreSQL server
  *
- * Copyright (c) 2013-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2013-2018, PostgreSQL Global Development Group
  *
  * src/bin/scripts/pg_isready.c
  *
diff --git a/src/bin/scripts/reindexdb.c b/src/bin/scripts/reindexdb.c
index ffd611e7bb..64e9a2f3ce 100644
--- a/src/bin/scripts/reindexdb.c
+++ b/src/bin/scripts/reindexdb.c
@@ -2,7 +2,7 @@
  *
  * reindexdb
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * src/bin/scripts/reindexdb.c
  *
diff --git a/src/bin/scripts/vacuumdb.c b/src/bin/scripts/vacuumdb.c
index 5d2869ea6b..663083828e 100644
--- a/src/bin/scripts/vacuumdb.c
+++ b/src/bin/scripts/vacuumdb.c
@@ -2,7 +2,7 @@
  *
  * vacuumdb
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/bin/scripts/vacuumdb.c
diff --git a/src/common/base64.c b/src/common/base64.c
index e8e28ecca4..c6fde2a8dd 100644
--- a/src/common/base64.c
+++ b/src/common/base64.c
@@ -3,7 +3,7 @@
  * base64.c
  * Encoding and decoding routines for base64 without whitespace.
  *
- * Copyright (c) 2001-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2001-2018, PostgreSQL Global Development Group
  *
  *
  * IDENTIFICATION
diff --git a/src/common/config_info.c b/src/common/config_info.c
index e0841a5af2..55e688e656 100644
--- a/src/common/config_info.c
+++ b/src/common/config_info.c
@@ -4,7 +4,7 @@
  * Common code for pg_config output
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/controldata_utils.c b/src/common/controldata_utils.c
index f1a097a974..f12a188856 100644
--- a/src/common/controldata_utils.c
+++ b/src/common/controldata_utils.c
@@ -4,7 +4,7 @@
  * Common code for control data file output.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/exec.c b/src/common/exec.c
index 67bf4d1d79..e3e81c1db8 100644
--- a/src/common/exec.c
+++ b/src/common/exec.c
@@ -4,7 +4,7 @@
  * Functions for finding and validating executable files
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/fe_memutils.c b/src/common/fe_memutils.c
index fb38067d97..2538661e19 100644
--- a/src/common/fe_memutils.c
+++ b/src/common/fe_memutils.c
@@ -3,7 +3,7 @@
  * fe_memutils.c
  * memory management support for frontend code
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/file_utils.c b/src/common/file_utils.c
index 4304058acb..48876061c3 100644
--- a/src/common/file_utils.c
+++ b/src/common/file_utils.c
@@ -5,7 +5,7 @@
  * Assorted utility functions to work on files.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/common/file_utils.c
diff --git a/src/common/ip.c b/src/common/ip.c
index bb536d3e86..caca7be9e5 100644
--- a/src/common/ip.c
+++ b/src/common/ip.c
@@ -3,7 +3,7 @@
  * ip.c
  * IPv6-aware network access.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/keywords.c b/src/common/keywords.c
index a5c6c41cb8..0c0c794c68 100644
--- a/src/common/keywords.c
+++ b/src/common/keywords.c
@@ -4,7 +4,7 @@
  * lexical token lookup for key words in PostgreSQL
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/md5.c b/src/common/md5.c
index 9144cab6ee..c3936618b6 100644
--- a/src/common/md5.c
+++ b/src/common/md5.c
@@ -10,7 +10,7 @@
  *
  * Sverre H. Huseby
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/common/pg_lzcompress.c b/src/common/pg_lzcompress.c
index 67f570c362..a2f87cff8b 100644
--- a/src/common/pg_lzcompress.c
+++ b/src/common/pg_lzcompress.c
@@ -166,7 +166,7 @@
  *
  * Jan Wieck
  *
- * Copyright (c) 1999-2017, PostgreSQL Global Development Group
+ * Copyright (c) 1999-2018, PostgreSQL Global Development Group
  *
  * src/common/pg_lzcompress.c
  * ----------
diff --git a/src/common/pgfnames.c b/src/common/pgfnames.c
index e161d7dc04..ec50a36db7 100644
--- a/src/common/pgfnames.c
+++ b/src/common/pgfnames.c
@@ -3,7 +3,7 @@
  * pgfnames.c
  * directory handling functions
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/common/psprintf.c b/src/common/psprintf.c
index 8f5903d519..b974a99be1 100644
--- a/src/common/psprintf.c
+++ b/src/common/psprintf.c
@@ -4,7 +4,7 @@
  * sprintf into an allocated-on-demand buffer
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/relpath.c b/src/common/relpath.c
index c2f36625c1..d98050c590 100644
--- a/src/common/relpath.c
+++ b/src/common/relpath.c
@@ -4,7 +4,7 @@
  *
  * This module also contains some logic associated with fork names.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/common/restricted_token.c b/src/common/restricted_token.c
index 57591aaae2..8c5583da7a 100644
--- a/src/common/restricted_token.c
+++ b/src/common/restricted_token.c
@@ -4,7 +4,7 @@
  * helper routine to ensure restricted token on Windows
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/rmtree.c b/src/common/rmtree.c
index 09824b5463..fcf63eb953 100644
--- a/src/common/rmtree.c
+++ b/src/common/rmtree.c
@@ -2,7 +2,7 @@
  *
  * rmtree.c
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/common/saslprep.c b/src/common/saslprep.c
index 0a3585850b..271021550a 100644
--- a/src/common/saslprep.c
+++ b/src/common/saslprep.c
@@ -12,7 +12,7 @@
  * http://www.ietf.org/rfc/rfc4013.txt
  *
  *
- * Portions Copyright (c) 2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/common/saslprep.c
diff --git a/src/common/scram-common.c b/src/common/scram-common.c
index e54fe1a7c9..dc4160714f 100644
--- a/src/common/scram-common.c
+++ b/src/common/scram-common.c
@@ -6,7 +6,7 @@
  * backend, for implement the Salted Challenge Response Authentication
  * Mechanism (SCRAM), per IETF's RFC 5802.
  *
- * Portions Copyright (c) 2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/common/scram-common.c
diff --git a/src/common/sha2.c b/src/common/sha2.c
index d7992f1d20..5aa678f8e3 100644
--- a/src/common/sha2.c
+++ b/src/common/sha2.c
@@ -6,7 +6,7 @@
  * This is the set of in-core functions used when there are no other
  * alternative options like OpenSSL.
  *
- * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/common/sha2.c
diff --git a/src/common/sha2_openssl.c b/src/common/sha2_openssl.c
index b8e2e1139f..362e1318db 100644
--- a/src/common/sha2_openssl.c
+++ b/src/common/sha2_openssl.c
@@ -6,7 +6,7 @@
  *
  * This should only be used if code is compiled with OpenSSL support.
  *
- * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/common/sha2_openssl.c
diff --git a/src/common/string.c b/src/common/string.c
index 159d9ea7b6..0e3557076a 100644
--- a/src/common/string.c
+++ b/src/common/string.c
@@ -4,7 +4,7 @@
  * string handling helpers
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/common/unicode/generate-norm_test_table.pl b/src/common/unicode/generate-norm_test_table.pl
index 310d32fd29..e3510b5c81 100644
--- a/src/common/unicode/generate-norm_test_table.pl
+++ b/src/common/unicode/generate-norm_test_table.pl
@@ -5,7 +5,7 @@
 #
 # NormalizationTest.txt is part of the Unicode Character Database.
 #
-# Copyright (c) 2000-2017, PostgreSQL Global Development Group
+# Copyright (c) 2000-2018, PostgreSQL Global Development Group
 
 use strict;
 use warnings;
@@ -30,7 +30,7 @@
  * norm_test_table.h
  * Test strings for Unicode normalization.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/common/unicode/norm_test_table.h
diff --git a/src/common/unicode/generate-unicode_norm_table.pl b/src/common/unicode/generate-unicode_norm_table.pl
index 1d77bb6380..f9cb406f1b 100644
--- a/src/common/unicode/generate-unicode_norm_table.pl
+++ b/src/common/unicode/generate-unicode_norm_table.pl
@@ -5,7 +5,7 @@
 # Input: UnicodeData.txt and CompositionExclusions.txt
 # Output: unicode_norm_table.h
 #
-# Copyright (c) 2000-2017, PostgreSQL Global Development Group
+# Copyright (c) 2000-2018, PostgreSQL Global Development Group
 
 use strict;
 use warnings;
@@ -74,7 +74,7 @@
  * unicode_norm_table.h
  * Composition table used for Unicode normalization
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/unicode_norm_table.h
diff --git a/src/common/unicode/norm_test.c b/src/common/unicode/norm_test.c
index f1bd99fce4..56759bee8d 100644
--- a/src/common/unicode/norm_test.c
+++ b/src/common/unicode/norm_test.c
@@ -2,7 +2,7 @@
  * norm_test.c
  * Program to test Unicode normalization functions.
  *
- * Portions Copyright (c) 2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/common/unicode_norm.c
diff --git a/src/common/unicode_norm.c b/src/common/unicode_norm.c
index 5361f5f111..1eacdb298f 100644
--- a/src/common/unicode_norm.c
+++ b/src/common/unicode_norm.c
@@ -5,7 +5,7 @@
  * This implements Unicode normalization, per the documentation at
  * http://www.unicode.org/reports/tr15/.
  *
- * Portions Copyright (c) 2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/common/unicode_norm.c
diff --git a/src/common/username.c b/src/common/username.c
index 7187bbde42..af382f95a5 100644
--- a/src/common/username.c
+++ b/src/common/username.c
@@ -3,7 +3,7 @@
  * username.c
  * get user name
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/common/wait_error.c b/src/common/wait_error.c
index f824a5f2af..941b606999 100644
--- a/src/common/wait_error.c
+++ b/src/common/wait_error.c
@@ -4,7 +4,7 @@
  * Convert a wait/waitpid(2) result code to a human-readable string
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/fe_utils/Makefile b/src/fe_utils/Makefile
index ebce38ceb4..3f4ba8bf7c 100644
--- a/src/fe_utils/Makefile
+++ b/src/fe_utils/Makefile
@@ -5,7 +5,7 @@
 # This makefile generates a static library, libpgfeutils.a,
 # for use by client applications
 #
-# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 # Portions Copyright (c) 1994, Regents of the University of California
 #
 # IDENTIFICATION
diff --git a/src/fe_utils/mbprint.c b/src/fe_utils/mbprint.c
index 9aabe59f38..07c348ec49 100644
--- a/src/fe_utils/mbprint.c
+++ b/src/fe_utils/mbprint.c
@@ -3,7 +3,7 @@
  * Multibyte character printing support for frontend code
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/fe_utils/mbprint.c
diff --git a/src/fe_utils/print.c b/src/fe_utils/print.c
index 8af5bbe97e..ec5ad45a30 100644
--- a/src/fe_utils/print.c
+++ b/src/fe_utils/print.c
@@ -8,7 +8,7 @@
  * pager open/close functions, all that stuff came with it.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/fe_utils/print.c
diff --git a/src/fe_utils/psqlscan.l b/src/fe_utils/psqlscan.l
index 44fcf7ee46..1cc587be34 100644
--- a/src/fe_utils/psqlscan.l
+++ b/src/fe_utils/psqlscan.l
@@ -23,7 +23,7 @@
  *
  * See psqlscan_int.h for additional commentary.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/fe_utils/simple_list.c b/src/fe_utils/simple_list.c
index 21a2e57297..ef94b34cd1 100644
--- a/src/fe_utils/simple_list.c
+++ b/src/fe_utils/simple_list.c
@@ -7,7 +7,7 @@
  * it's all we need in, eg, pg_dump.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/fe_utils/simple_list.c
diff --git a/src/fe_utils/string_utils.c b/src/fe_utils/string_utils.c
index c7e42ddec9..8c05a80d31 100644
--- a/src/fe_utils/string_utils.c
+++ b/src/fe_utils/string_utils.c
@@ -6,7 +6,7 @@
  * and interpreting backend output.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/fe_utils/string_utils.c
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 0db4fc73ac..8d7bc246e6 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -3,7 +3,7 @@
  * amapi.h
  * API for Postgres index access methods.
  *
- * Copyright (c) 2015-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2015-2018, PostgreSQL Global Development Group
  *
  * src/include/access/amapi.h
  *
diff --git a/src/include/access/amvalidate.h b/src/include/access/amvalidate.h
index 04b7429a78..e50afbd556 100644
--- a/src/include/access/amvalidate.h
+++ b/src/include/access/amvalidate.h
@@ -3,7 +3,7 @@
  * amvalidate.h
  * Support routines for index access methods' amvalidate functions.
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * src/include/access/amvalidate.h
  *
diff --git a/src/include/access/attnum.h b/src/include/access/attnum.h
index d23888b098..c45a1acaaa 100644
--- a/src/include/access/attnum.h
+++ b/src/include/access/attnum.h
@@ -4,7 +4,7 @@
  * POSTGRES attribute number definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/attnum.h
diff --git a/src/include/access/brin.h b/src/include/access/brin.h
index 61a38804ca..10999a38b5 100644
--- a/src/include/access/brin.h
+++ b/src/include/access/brin.h
@@ -1,7 +1,7 @@
 /*
  * AM-callable functions for BRIN indexes
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/access/brin_internal.h b/src/include/access/brin_internal.h
index 3ed67438b2..d3134f9dcd 100644
--- a/src/include/access/brin_internal.h
+++ b/src/include/access/brin_internal.h
@@ -2,7 +2,7 @@
  * brin_internal.h
  * internal declarations for BRIN indexes
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/access/brin_page.h b/src/include/access/brin_page.h
index bf03a6e9f8..82d5972c85 100644
--- a/src/include/access/brin_page.h
+++ b/src/include/access/brin_page.h
@@ -2,7 +2,7 @@
  * brin_page.h
  * Prototypes and definitions for BRIN page layouts
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/access/brin_pageops.h b/src/include/access/brin_pageops.h
index e0f5641635..8b389de4a5 100644
--- a/src/include/access/brin_pageops.h
+++ b/src/include/access/brin_pageops.h
@@ -2,7 +2,7 @@
  * brin_pageops.h
  * Prototypes for operating on BRIN pages.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/access/brin_revmap.h b/src/include/access/brin_revmap.h
index ddd87e040b..4dd844888f 100644
--- a/src/include/access/brin_revmap.h
+++ b/src/include/access/brin_revmap.h
@@ -2,7 +2,7 @@
  * brin_revmap.h
  * Prototypes for BRIN reverse range maps
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/access/brin_tuple.h b/src/include/access/brin_tuple.h
index 6545c0a6ff..2adaf9125e 100644
--- a/src/include/access/brin_tuple.h
+++ b/src/include/access/brin_tuple.h
@@ -2,7 +2,7 @@
  * brin_tuple.h
  * Declarations for dealing with BRIN-specific tuples.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/access/brin_xlog.h b/src/include/access/brin_xlog.h
index 10e90d3c78..40e9772c89 100644
--- a/src/include/access/brin_xlog.h
+++ b/src/include/access/brin_xlog.h
@@ -4,7 +4,7 @@
  * POSTGRES BRIN access XLOG definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/brin_xlog.h
diff --git a/src/include/access/bufmask.h b/src/include/access/bufmask.h
index 6a24c947ef..c00be32ff6 100644
--- a/src/include/access/bufmask.h
+++ b/src/include/access/bufmask.h
@@ -7,7 +7,7 @@
  * individual rmgr, but we make things easier by providing some
  * common routines to handle cases which occur in multiple rmgrs.
  *
- * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * src/include/access/bufmask.h
  *
diff --git a/src/include/access/clog.h b/src/include/access/clog.h
index 7bae0902b5..7681ed90ae 100644
--- a/src/include/access/clog.h
+++ b/src/include/access/clog.h
@@ -3,7 +3,7 @@
  *
  * PostgreSQL transaction-commit-log manager
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/clog.h
diff --git a/src/include/access/commit_ts.h b/src/include/access/commit_ts.h
index 31936faf08..2f40d59695 100644
--- a/src/include/access/commit_ts.h
+++ b/src/include/access/commit_ts.h
@@ -3,7 +3,7 @@
  *
  * PostgreSQL commit timestamp manager
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/commit_ts.h
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index b56a44f902..24c720bf42 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -4,7 +4,7 @@
  * POSTGRES generalized index access method definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/genam.h
diff --git a/src/include/access/generic_xlog.h b/src/include/access/generic_xlog.h
index 02696141ea..b23e1f684b 100644
--- a/src/include/access/generic_xlog.h
+++ b/src/include/access/generic_xlog.h
@@ -4,7 +4,7 @@
  * Generic xlog API definition.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/generic_xlog.h
diff --git a/src/include/access/gin.h b/src/include/access/gin.h
index ec83058095..0acdb88241 100644
--- a/src/include/access/gin.h
+++ b/src/include/access/gin.h
@@ -2,7 +2,7 @@
  * gin.h
  * Public header file for Generalized Inverted Index access method.
  *
- * Copyright (c) 2006-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2006-2018, PostgreSQL Global Development Group
  *
  * src/include/access/gin.h
  *--------------------------------------------------------------------------
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index dc49b6f17d..a709596a7a 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -2,7 +2,7 @@
  * gin_private.h
  * header file for postgres inverted index access method implementation.
  *
- * Copyright (c) 2006-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2006-2018, PostgreSQL Global Development Group
  *
  * src/include/access/gin_private.h
  *--------------------------------------------------------------------------
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index c3af3f0380..553566529a 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -2,7 +2,7 @@
  * ginblock.h
  * details of structures stored in GIN index blocks
  *
- * Copyright (c) 2006-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2006-2018, PostgreSQL Global Development Group
  *
  * src/include/access/ginblock.h
  *--------------------------------------------------------------------------
diff --git a/src/include/access/ginxlog.h b/src/include/access/ginxlog.h
index 42e0ae90c3..64a3c9e18b 100644
--- a/src/include/access/ginxlog.h
+++ b/src/include/access/ginxlog.h
@@ -2,7 +2,7 @@
  * ginxlog.h
  * header file for postgres inverted index xlog implementation.
  *
- * Copyright (c) 2006-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2006-2018, PostgreSQL Global Development Group
  *
  * src/include/access/ginxlog.h
  *--------------------------------------------------------------------------
diff --git a/src/include/access/gist.h b/src/include/access/gist.h
index 83642189db..827566dc6e 100644
--- a/src/include/access/gist.h
+++ b/src/include/access/gist.h
@@ -6,7 +6,7 @@
  * changes should be made with care.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/gist.h
diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h
index eb1c6728d4..36ed7244ba 100644
--- a/src/include/access/gist_private.h
+++ b/src/include/access/gist_private.h
@@ -4,7 +4,7 @@
  * private declarations for GiST -- declarations related to the
  * internal implementation of GiST, not the public API
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/gist_private.h
diff --git a/src/include/access/gistscan.h b/src/include/access/gistscan.h
index 2aea6ad309..e04409afc0 100644
--- a/src/include/access/gistscan.h
+++ b/src/include/access/gistscan.h
@@ -4,7 +4,7 @@
  * routines defined in access/gist/gistscan.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/gistscan.h
diff --git a/src/include/access/gistxlog.h b/src/include/access/gistxlog.h
index 3b126eca2a..1a2b9496d0 100644
--- a/src/include/access/gistxlog.h
+++ b/src/include/access/gistxlog.h
@@ -3,7 +3,7 @@
  * gistxlog.h
  * gist xlog routines
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/gistxlog.h
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 179ac97f87..65d23f32ef 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -4,7 +4,7 @@
  * header file for postgres hash access method implementation
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/hash.h
diff --git a/src/include/access/hash_xlog.h b/src/include/access/hash_xlog.h
index abe8579249..527138440b 100644
--- a/src/include/access/hash_xlog.h
+++ b/src/include/access/hash_xlog.h
@@ -4,7 +4,7 @@
  * header file for Postgres hash AM implementation
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/hash_xlog.h
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index f1366ed958..4c0256b18a 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -4,7 +4,7 @@
  * POSTGRES heap access method definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/heapam.h
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 38f7f63984..700e25c36a 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -4,7 +4,7 @@
  * POSTGRES heap access XLOG definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/heapam_xlog.h
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 4a8beb63a6..9993d5be70 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -4,7 +4,7 @@
  * POSTGRES heap access method input/output definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/hio.h
diff --git a/src/include/access/htup.h b/src/include/access/htup.h
index 61b3e68639..e78d804756 100644
--- a/src/include/access/htup.h
+++ b/src/include/access/htup.h
@@ -4,7 +4,7 @@
  * POSTGRES heap tuple definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/htup.h
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index b0d4c54121..2ab1815390 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -4,7 +4,7 @@
  * POSTGRES heap tuple header definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/htup_details.h
diff --git a/src/include/access/itup.h b/src/include/access/itup.h
index c178ae91a9..0ffa91d686 100644
--- a/src/include/access/itup.h
+++ b/src/include/access/itup.h
@@ -4,7 +4,7 @@
  * POSTGRES index tuple definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/itup.h
diff --git a/src/include/access/multixact.h b/src/include/access/multixact.h
index d5e18c6733..18fe380c5f 100644
--- a/src/include/access/multixact.h
+++ b/src/include/access/multixact.h
@@ -3,7 +3,7 @@
  *
  * PostgreSQL multi-transaction-log manager
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/multixact.h
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 2d4c36d0b8..d28f413c66 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -4,7 +4,7 @@
  * header file for postgres btree access method implementation.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/nbtree.h
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index e3cddb2e64..8297df75fe 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -3,7 +3,7 @@
  * nbtxlog.h
  * header file for postgres btree xlog routines
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/nbtxlog.h
diff --git a/src/include/access/parallel.h b/src/include/access/parallel.h
index d381afb285..8c6a747ced 100644
--- a/src/include/access/parallel.h
+++ b/src/include/access/parallel.h
@@ -3,7 +3,7 @@
  * parallel.h
  * Infrastructure for launching parallel workers
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/parallel.h
diff --git a/src/include/access/printsimple.h b/src/include/access/printsimple.h
index edf28cece9..4184f16560 100644
--- a/src/include/access/printsimple.h
+++ b/src/include/access/printsimple.h
@@ -3,7 +3,7 @@
  * printsimple.h
  * print simple tuples without catalog access
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/printsimple.h
diff --git a/src/include/access/printtup.h b/src/include/access/printtup.h
index 1b5a003a99..94f8d705b5 100644
--- a/src/include/access/printtup.h
+++ b/src/include/access/printtup.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/printtup.h
diff --git a/src/include/access/reloptions.h b/src/include/access/reloptions.h
index cd43e3a52e..94739f7ac6 100644
--- a/src/include/access/reloptions.h
+++ b/src/include/access/reloptions.h
@@ -9,7 +9,7 @@
  * into a lot of low-level code.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/reloptions.h
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 147f862a2b..9c603ca637 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -4,7 +4,7 @@
  * POSTGRES relation scan descriptor definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/relscan.h
diff --git a/src/include/access/rewriteheap.h b/src/include/access/rewriteheap.h
index 91ff36707a..6d7f669cbc 100644
--- a/src/include/access/rewriteheap.h
+++ b/src/include/access/rewriteheap.h
@@ -3,7 +3,7 @@
  * rewriteheap.h
  * Declarations for heap rewrite support functions
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994-5, Regents of the University of California
  *
  * src/include/access/rewriteheap.h
diff --git a/src/include/access/rmgrlist.h b/src/include/access/rmgrlist.h
index 2f43c199d3..0bbe9879ca 100644
--- a/src/include/access/rmgrlist.h
+++ b/src/include/access/rmgrlist.h
@@ -6,7 +6,7 @@
  * by the PG_RMGR macro, which is not defined in this file; it can be
  * defined by the caller for special purposes.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/rmgrlist.h
diff --git a/src/include/access/sdir.h b/src/include/access/sdir.h
index 65eab48551..490bac11d3 100644
--- a/src/include/access/sdir.h
+++ b/src/include/access/sdir.h
@@ -4,7 +4,7 @@
  * POSTGRES scan direction definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/sdir.h
diff --git a/src/include/access/session.h b/src/include/access/session.h
index 45986208c8..37971c1c66 100644
--- a/src/include/access/session.h
+++ b/src/include/access/session.h
@@ -3,7 +3,7 @@
  * session.h
  * Encapsulation of user session.
  *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * src/include/access/session.h
  *
diff --git a/src/include/access/skey.h b/src/include/access/skey.h
index 2f4814f140..ab3bb2c8eb 100644
--- a/src/include/access/skey.h
+++ b/src/include/access/skey.h
@@ -4,7 +4,7 @@
  * POSTGRES scan key definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/skey.h
diff --git a/src/include/access/slru.h b/src/include/access/slru.h
index 20114c4d44..0e89e48c97 100644
--- a/src/include/access/slru.h
+++ b/src/include/access/slru.h
@@ -3,7 +3,7 @@
  * slru.h
  * Simple LRU buffering for transaction status logfiles
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/slru.h
diff --git a/src/include/access/spgist.h b/src/include/access/spgist.h
index 06b1d88e5a..c6d7e22a38 100644
--- a/src/include/access/spgist.h
+++ b/src/include/access/spgist.h
@@ -4,7 +4,7 @@
  * Public header file for SP-GiST access method.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/spgist.h
diff --git a/src/include/access/spgist_private.h b/src/include/access/spgist_private.h
index e55de9dc54..6d5f1c6046 100644
--- a/src/include/access/spgist_private.h
+++ b/src/include/access/spgist_private.h
@@ -4,7 +4,7 @@
  * Private declarations for SP-GiST access method.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/spgist_private.h
diff --git a/src/include/access/spgxlog.h b/src/include/access/spgxlog.h
index cf4331be4a..b72ccb5cc4 100644
--- a/src/include/access/spgxlog.h
+++ b/src/include/access/spgxlog.h
@@ -3,7 +3,7 @@
  * spgxlog.h
  * xlog declarations for SP-GiST access method.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/spgxlog.h
diff --git a/src/include/access/stratnum.h b/src/include/access/stratnum.h
index 91d57605b2..bddfac4c10 100644
--- a/src/include/access/stratnum.h
+++ b/src/include/access/stratnum.h
@@ -4,7 +4,7 @@
  * POSTGRES strategy number definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/stratnum.h
diff --git a/src/include/access/subtrans.h b/src/include/access/subtrans.h
index 41716d7b71..ce700a60de 100644
--- a/src/include/access/subtrans.h
+++ b/src/include/access/subtrans.h
@@ -3,7 +3,7 @@
  *
  * PostgreSQL subtransaction-log manager
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/subtrans.h
diff --git a/src/include/access/sysattr.h b/src/include/access/sysattr.h
index b88c5e1141..c6f244011a 100644
--- a/src/include/access/sysattr.h
+++ b/src/include/access/sysattr.h
@@ -4,7 +4,7 @@
  * POSTGRES system attribute definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/sysattr.h
diff --git a/src/include/access/timeline.h b/src/include/access/timeline.h
index 4bdb0c1f4f..a9bf18cab6 100644
--- a/src/include/access/timeline.h
+++ b/src/include/access/timeline.h
@@ -3,7 +3,7 @@
  *
  * Functions for reading and writing timeline history files.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/timeline.h
diff --git a/src/include/access/transam.h b/src/include/access/transam.h
index 86076dede1..83ec3f1979 100644
--- a/src/include/access/transam.h
+++ b/src/include/access/transam.h
@@ -4,7 +4,7 @@
  * postgres transaction access method support code
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/transam.h
diff --git a/src/include/access/tsmapi.h b/src/include/access/tsmapi.h
index 3d94cc6466..3ecd4737e5 100644
--- a/src/include/access/tsmapi.h
+++ b/src/include/access/tsmapi.h
@@ -3,7 +3,7 @@
  * tsmapi.h
  * API for tablesample methods
  *
- * Copyright (c) 2015-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2015-2018, PostgreSQL Global Development Group
  *
  * src/include/access/tsmapi.h
  *
diff --git a/src/include/access/tupconvert.h b/src/include/access/tupconvert.h
index 173904adae..66c0ed0882 100644
--- a/src/include/access/tupconvert.h
+++ b/src/include/access/tupconvert.h
@@ -4,7 +4,7 @@
  * Tuple conversion support.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/tupconvert.h
diff --git a/src/include/access/tupdesc.h b/src/include/access/tupdesc.h
index 2be5af1d3e..415efbab97 100644
--- a/src/include/access/tupdesc.h
+++ b/src/include/access/tupdesc.h
@@ -4,7 +4,7 @@
  * POSTGRES tuple descriptor definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/tupdesc.h
diff --git a/src/include/access/tupmacs.h b/src/include/access/tupmacs.h
index 6746203828..1c3741da65 100644
--- a/src/include/access/tupmacs.h
+++ b/src/include/access/tupmacs.h
@@ -4,7 +4,7 @@
  * Tuple macros used by both index tuples and heap tuples.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/tupmacs.h
diff --git a/src/include/access/tuptoaster.h b/src/include/access/tuptoaster.h
index fd9f83ac44..f99291e30d 100644
--- a/src/include/access/tuptoaster.h
+++ b/src/include/access/tuptoaster.h
@@ -4,7 +4,7 @@
  * POSTGRES definitions for external and compressed storage
  * of variable size attributes.
  *
- * Copyright (c) 2000-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2000-2018, PostgreSQL Global Development Group
  *
  * src/include/access/tuptoaster.h
  *
diff --git a/src/include/access/twophase.h b/src/include/access/twophase.h
index f5fbbea4b6..34d9470811 100644
--- a/src/include/access/twophase.h
+++ b/src/include/access/twophase.h
@@ -4,7 +4,7 @@
  * Two-phase-commit related declarations.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/twophase.h
diff --git a/src/include/access/twophase_rmgr.h b/src/include/access/twophase_rmgr.h
index 44cd6d202f..ba9cd932a7 100644
--- a/src/include/access/twophase_rmgr.h
+++ b/src/include/access/twophase_rmgr.h
@@ -4,7 +4,7 @@
  * Two-phase-commit resource managers definition
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/twophase_rmgr.h
diff --git a/src/include/access/valid.h b/src/include/access/valid.h
index 53a7d0685a..1e2d23f645 100644
--- a/src/include/access/valid.h
+++ b/src/include/access/valid.h
@@ -4,7 +4,7 @@
  * POSTGRES tuple qualification validity definitions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/valid.h
diff --git a/src/include/access/visibilitymap.h b/src/include/access/visibilitymap.h
index da0e76d6be..b168612b4b 100644
--- a/src/include/access/visibilitymap.h
+++ b/src/include/access/visibilitymap.h
@@ -4,7 +4,7 @@
  * visibility map interface
  *
  *
- * Portions Copyright (c) 2007-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2007-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/visibilitymap.h
diff --git a/src/include/access/xact.h b/src/include/access/xact.h
index 118b0a8432..6445bbc46f 100644
--- a/src/include/access/xact.h
+++ b/src/include/access/xact.h
@@ -4,7 +4,7 @@
  * postgres transaction system definitions
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/xact.h
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index dd7d8b5e40..421ba6d775 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -3,7 +3,7 @@
  *
  * PostgreSQL write-ahead log manager
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/access/xlog.h
diff --git a/src/include/access/xlog_internal.h b/src/include/access/xlog_internal.h
index 7805c3c747..a5c074642f 100644
--- a/src/include/access/xlog_internal.h
+++ b/src/include/access/xlog_internal.h
@@ -11,7 +11,7 @@
  * Note: This file must be includable in both frontend and backend contexts,
  * to allow stand-alone tools like pg_receivewal to deal with WAL files.
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/access/xlog_internal.h diff --git a/src/include/access/xlogdefs.h b/src/include/access/xlogdefs.h index 3a80d6be6f..0a48d1cfb4 100644 --- a/src/include/access/xlogdefs.h +++ b/src/include/access/xlogdefs.h @@ -4,7 +4,7 @@ * Postgres write-ahead log manager record pointer and * timeline number definitions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/access/xlogdefs.h diff --git a/src/include/access/xloginsert.h b/src/include/access/xloginsert.h index 174c88677f..fa62f915af 100644 --- a/src/include/access/xloginsert.h +++ b/src/include/access/xloginsert.h @@ -3,7 +3,7 @@ * * Functions for generating WAL records * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/access/xloginsert.h diff --git a/src/include/access/xlogreader.h b/src/include/access/xlogreader.h index 3a9ebd4354..688422f61c 100644 --- a/src/include/access/xlogreader.h +++ b/src/include/access/xlogreader.h @@ -3,7 +3,7 @@ * xlogreader.h * Definitions for the generic XLog reading facility * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/include/access/xlogreader.h diff --git a/src/include/access/xlogrecord.h b/src/include/access/xlogrecord.h index b53960e112..863781937e 100644 --- a/src/include/access/xlogrecord.h +++ b/src/include/access/xlogrecord.h @@ -3,7 +3,7 @@ * * Definitions for the WAL record format. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/access/xlogrecord.h diff --git a/src/include/access/xlogutils.h b/src/include/access/xlogutils.h index 114ffbcc53..c406699936 100644 --- a/src/include/access/xlogutils.h +++ b/src/include/access/xlogutils.h @@ -3,7 +3,7 @@ * * Utilities for replaying WAL records. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/access/xlogutils.h diff --git a/src/include/bootstrap/bootstrap.h b/src/include/bootstrap/bootstrap.h index 35eb9a4ff5..4f4129419f 100644 --- a/src/include/bootstrap/bootstrap.h +++ b/src/include/bootstrap/bootstrap.h @@ -4,7 +4,7 @@ * include file for the bootstrapping code * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/bootstrap/bootstrap.h diff --git a/src/include/c.h b/src/include/c.h index 22535a7deb..34a7fa67b4 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -9,7 +9,7 @@ * polluting the namespace with lots of stuff... 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/c.h diff --git a/src/include/catalog/binary_upgrade.h b/src/include/catalog/binary_upgrade.h index 5ff365fe53..abc6e1ae1d 100644 --- a/src/include/catalog/binary_upgrade.h +++ b/src/include/catalog/binary_upgrade.h @@ -4,7 +4,7 @@ * variables used for binary upgrades * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/binary_upgrade.h diff --git a/src/include/catalog/catalog.h b/src/include/catalog/catalog.h index 8ce9a9966a..3e280b3750 100644 --- a/src/include/catalog/catalog.h +++ b/src/include/catalog/catalog.h @@ -4,7 +4,7 @@ * prototypes for functions in backend/catalog/catalog.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/catalog.h diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 3934582efc..f1765af4ba 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -34,7 +34,7 @@ * database contents or layout, such as altering tuple headers. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/catversion.h diff --git a/src/include/catalog/dependency.h b/src/include/catalog/dependency.h index b9f98423cc..6f290d5c6f 100644 --- a/src/include/catalog/dependency.h +++ b/src/include/catalog/dependency.h @@ -4,7 +4,7 @@ * Routines to support inter-object dependencies. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/dependency.h diff --git a/src/include/catalog/genbki.h b/src/include/catalog/genbki.h index a2cb313d4a..59b0f8ed5d 100644 --- a/src/include/catalog/genbki.h +++ b/src/include/catalog/genbki.h @@ -9,7 +9,7 @@ * bootstrap file from these header files.) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/genbki.h diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h index 0fae02295b..9bdc63ceb5 100644 --- a/src/include/catalog/heap.h +++ b/src/include/catalog/heap.h @@ -4,7 +4,7 @@ * prototypes for functions in backend/catalog/heap.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/heap.h diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h index ceaa91f1b2..12bf35567a 100644 --- a/src/include/catalog/index.h +++ b/src/include/catalog/index.h @@ -4,7 +4,7 @@ * prototypes for catalog/index.c. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/index.h diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h index ef8493674c..0bb875441e 100644 --- a/src/include/catalog/indexing.h +++ b/src/include/catalog/indexing.h @@ -5,7 +5,7 @@ * on system catalogs * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/indexing.h diff --git a/src/include/catalog/namespace.h b/src/include/catalog/namespace.h index f2ee935623..5f8cf4992e 100644 --- a/src/include/catalog/namespace.h +++ b/src/include/catalog/namespace.h @@ -4,7 +4,7 @@ * prototypes for functions in backend/catalog/namespace.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/namespace.h diff --git a/src/include/catalog/objectaccess.h b/src/include/catalog/objectaccess.h index 251eb6fd88..2ce2217b1e 100644 --- a/src/include/catalog/objectaccess.h +++ b/src/include/catalog/objectaccess.h @@ -3,7 +3,7 @@ * * Object access hooks. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California */ diff --git a/src/include/catalog/objectaddress.h b/src/include/catalog/objectaddress.h index 5fc54d0e57..834554ec17 100644 --- a/src/include/catalog/objectaddress.h +++ b/src/include/catalog/objectaddress.h @@ -3,7 +3,7 @@ * objectaddress.h * functions for working with object addresses * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/objectaddress.h diff --git a/src/include/catalog/opfam_internal.h b/src/include/catalog/opfam_internal.h index c4a010029a..e9ac904c72 100644 --- a/src/include/catalog/opfam_internal.h +++ b/src/include/catalog/opfam_internal.h @@ -2,7 +2,7 @@ * * opfam_internal.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/opfam_internal.h diff --git a/src/include/catalog/partition.h b/src/include/catalog/partition.h index 2983cfa217..ea0f549c9a 100644 --- a/src/include/catalog/partition.h +++ b/src/include/catalog/partition.h @@ -4,7 +4,7 @@ * Header file for structures and utility functions related to * partitioning * - * Copyright (c) 2007-2017, PostgreSQL Global Development Group + * Copyright (c) 2007-2018, PostgreSQL Global Development Group * * src/include/catalog/partition.h * diff --git a/src/include/catalog/pg_aggregate.h b/src/include/catalog/pg_aggregate.h index 13f1bce5af..125bb5b479 100644 --- a/src/include/catalog/pg_aggregate.h +++ b/src/include/catalog/pg_aggregate.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_aggregate.h diff --git a/src/include/catalog/pg_aggregate_fn.h b/src/include/catalog/pg_aggregate_fn.h index a323aab2d4..00f533ae48 100644 --- a/src/include/catalog/pg_aggregate_fn.h +++ b/src/include/catalog/pg_aggregate_fn.h @@ -4,7 +4,7 @@ * prototypes for functions in catalog/pg_aggregate.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_aggregate_fn.h diff --git a/src/include/catalog/pg_am.h b/src/include/catalog/pg_am.h index e021f5b894..2e785c4cec 100644 --- a/src/include/catalog/pg_am.h +++ b/src/include/catalog/pg_am.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_am.h diff --git a/src/include/catalog/pg_amop.h b/src/include/catalog/pg_amop.h index d8770798a6..03af581df4 100644 --- a/src/include/catalog/pg_amop.h +++ b/src/include/catalog/pg_amop.h @@ -30,7 +30,7 @@ * intentional denormalization of the catalogs to buy lookup speed. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_amop.h diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h index b25ad105fd..f545a0580d 100644 --- a/src/include/catalog/pg_amproc.h +++ b/src/include/catalog/pg_amproc.h @@ -19,7 +19,7 @@ * some don't pay attention to non-default functions at all. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_amproc.h diff --git a/src/include/catalog/pg_attrdef.h b/src/include/catalog/pg_attrdef.h index b877f42a2d..8a8b8cac52 100644 --- a/src/include/catalog/pg_attrdef.h +++ b/src/include/catalog/pg_attrdef.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_attrdef.h diff --git a/src/include/catalog/pg_attribute.h b/src/include/catalog/pg_attribute.h index bcf28e8f04..6104254d7b 100644 --- a/src/include/catalog/pg_attribute.h +++ b/src/include/catalog/pg_attribute.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_attribute.h diff --git a/src/include/catalog/pg_auth_members.h b/src/include/catalog/pg_auth_members.h index 6a954fff97..ae3c14aa7a 100644 --- a/src/include/catalog/pg_auth_members.h +++ b/src/include/catalog/pg_auth_members.h @@ -5,7 +5,7 @@ * (pg_auth_members) along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_auth_members.h diff --git a/src/include/catalog/pg_authid.h b/src/include/catalog/pg_authid.h index 9b6b52c9f9..772e9153c4 100644 --- a/src/include/catalog/pg_authid.h +++ b/src/include/catalog/pg_authid.h @@ -7,7 +7,7 @@ * pg_shadow and pg_group are now publicly accessible views on pg_authid. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_authid.h diff --git a/src/include/catalog/pg_cast.h b/src/include/catalog/pg_cast.h index 17827531ad..b4478188ca 100644 --- a/src/include/catalog/pg_cast.h +++ b/src/include/catalog/pg_cast.h @@ -8,7 +8,7 @@ * but also length coercion functions. * * - * Copyright (c) 2002-2017, PostgreSQL Global Development Group + * Copyright (c) 2002-2018, PostgreSQL Global Development Group * * src/include/catalog/pg_cast.h * diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h index b256657bda..e7049438eb 100644 --- a/src/include/catalog/pg_class.h +++ b/src/include/catalog/pg_class.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_class.h diff --git a/src/include/catalog/pg_collation.h b/src/include/catalog/pg_collation.h index 0cac7cae72..8a3aa74b94 100644 --- a/src/include/catalog/pg_collation.h +++ b/src/include/catalog/pg_collation.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/include/catalog/pg_collation_fn.h b/src/include/catalog/pg_collation_fn.h index 0ef31389d5..f01f2ae65e 100644 --- a/src/include/catalog/pg_collation_fn.h +++ b/src/include/catalog/pg_collation_fn.h @@ -4,7 +4,7 @@ * prototypes for functions in catalog/pg_collation.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_collation_fn.h diff --git a/src/include/catalog/pg_constraint.h b/src/include/catalog/pg_constraint.h index ec035d8434..8fca86d71e 100644 --- a/src/include/catalog/pg_constraint.h +++ b/src/include/catalog/pg_constraint.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_constraint.h diff --git a/src/include/catalog/pg_constraint_fn.h b/src/include/catalog/pg_constraint_fn.h index 37b0b4ba82..6bb1b09714 100644 --- a/src/include/catalog/pg_constraint_fn.h +++ b/src/include/catalog/pg_constraint_fn.h @@ -4,7 +4,7 @@ * prototypes for functions in catalog/pg_constraint.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_constraint_fn.h diff --git a/src/include/catalog/pg_control.h b/src/include/catalog/pg_control.h index 9e9e01427e..773d9e6eba 100644 --- a/src/include/catalog/pg_control.h +++ b/src/include/catalog/pg_control.h @@ -5,7 +5,7 @@ * However, we define it here so that the format is documented. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_control.h diff --git a/src/include/catalog/pg_conversion.h b/src/include/catalog/pg_conversion.h index 9344585e66..29f3c9d65a 100644 --- a/src/include/catalog/pg_conversion.h +++ b/src/include/catalog/pg_conversion.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_conversion.h diff --git a/src/include/catalog/pg_conversion_fn.h b/src/include/catalog/pg_conversion_fn.h index 7074bcf13a..f8f8a78b7d 100644 --- a/src/include/catalog/pg_conversion_fn.h +++ b/src/include/catalog/pg_conversion_fn.h @@ -4,7 +4,7 @@ * prototypes for functions in catalog/pg_conversion.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_conversion_fn.h diff --git a/src/include/catalog/pg_database.h b/src/include/catalog/pg_database.h index e7cbca49cf..56f9960dfe 100644 --- a/src/include/catalog/pg_database.h +++ b/src/include/catalog/pg_database.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_database.h diff --git a/src/include/catalog/pg_db_role_setting.h b/src/include/catalog/pg_db_role_setting.h index 4a8e3370c9..86cc17d725 100644 --- a/src/include/catalog/pg_db_role_setting.h +++ b/src/include/catalog/pg_db_role_setting.h @@ -4,7 +4,7 @@ * definition of configuration settings * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_db_role_setting.h diff --git a/src/include/catalog/pg_default_acl.h b/src/include/catalog/pg_default_acl.h index 09587abee6..11b306037d 100644 --- a/src/include/catalog/pg_default_acl.h +++ b/src/include/catalog/pg_default_acl.h @@ -4,7 +4,7 @@ * definition of default ACLs for new objects. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_default_acl.h diff --git a/src/include/catalog/pg_depend.h b/src/include/catalog/pg_depend.h index 8bda78d9ef..be3867bbf2 100644 --- a/src/include/catalog/pg_depend.h +++ b/src/include/catalog/pg_depend.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_depend.h diff --git a/src/include/catalog/pg_description.h b/src/include/catalog/pg_description.h index e0499ca2d6..d29100013e 100644 --- a/src/include/catalog/pg_description.h +++ b/src/include/catalog/pg_description.h @@ -19,7 +19,7 @@ * for example). 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_description.h diff --git a/src/include/catalog/pg_enum.h b/src/include/catalog/pg_enum.h index 5938ba5cac..a65a8f45b6 100644 --- a/src/include/catalog/pg_enum.h +++ b/src/include/catalog/pg_enum.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Copyright (c) 2006-2017, PostgreSQL Global Development Group + * Copyright (c) 2006-2018, PostgreSQL Global Development Group * * src/include/catalog/pg_enum.h * diff --git a/src/include/catalog/pg_event_trigger.h b/src/include/catalog/pg_event_trigger.h index f9f568b27b..e03c81997c 100644 --- a/src/include/catalog/pg_event_trigger.h +++ b/src/include/catalog/pg_event_trigger.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_event_trigger.h diff --git a/src/include/catalog/pg_extension.h b/src/include/catalog/pg_extension.h index 2ce575d17e..9ca6ca7936 100644 --- a/src/include/catalog/pg_extension.h +++ b/src/include/catalog/pg_extension.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_extension.h diff --git a/src/include/catalog/pg_foreign_data_wrapper.h b/src/include/catalog/pg_foreign_data_wrapper.h index af602c74ee..dd00586b6e 100644 --- a/src/include/catalog/pg_foreign_data_wrapper.h +++ b/src/include/catalog/pg_foreign_data_wrapper.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_foreign_data_wrapper.h diff --git a/src/include/catalog/pg_foreign_server.h b/src/include/catalog/pg_foreign_server.h index 689dbbb4e7..a8c9e87540 100644 --- a/src/include/catalog/pg_foreign_server.h +++ b/src/include/catalog/pg_foreign_server.h @@ -3,7 +3,7 @@ * pg_foreign_server.h * definition of the system "foreign server" relation (pg_foreign_server) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_foreign_server.h diff --git a/src/include/catalog/pg_foreign_table.h b/src/include/catalog/pg_foreign_table.h index c5dbcceef1..210e77b79a 100644 --- a/src/include/catalog/pg_foreign_table.h +++ b/src/include/catalog/pg_foreign_table.h @@ -3,7 +3,7 @@ * pg_foreign_table.h * definition of the system "foreign table" relation (pg_foreign_table) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_foreign_table.h diff --git a/src/include/catalog/pg_index.h b/src/include/catalog/pg_index.h index 8505c3be5f..057a9f7fe4 100644 --- a/src/include/catalog/pg_index.h +++ b/src/include/catalog/pg_index.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_index.h diff --git a/src/include/catalog/pg_inherits.h b/src/include/catalog/pg_inherits.h index 26bfab5db6..3c572f421b 100644 --- a/src/include/catalog/pg_inherits.h +++ b/src/include/catalog/pg_inherits.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_inherits.h diff --git a/src/include/catalog/pg_inherits_fn.h b/src/include/catalog/pg_inherits_fn.h index 7743388899..405af230d1 100644 --- a/src/include/catalog/pg_inherits_fn.h +++ b/src/include/catalog/pg_inherits_fn.h @@ -4,7 +4,7 @@ * prototypes for functions in catalog/pg_inherits.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_inherits_fn.h diff --git a/src/include/catalog/pg_init_privs.h b/src/include/catalog/pg_init_privs.h index 5fca36334b..6ea005fdeb 100644 --- a/src/include/catalog/pg_init_privs.h +++ b/src/include/catalog/pg_init_privs.h @@ -15,7 +15,7 @@ * for a table itself, so that it is distinct from any column privilege. * Currently, objsubid is unused and zero for all other kinds of objects. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_init_privs.h diff --git a/src/include/catalog/pg_language.h b/src/include/catalog/pg_language.h index ad244e839b..8ae78a0898 100644 --- a/src/include/catalog/pg_language.h +++ b/src/include/catalog/pg_language.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_language.h diff --git a/src/include/catalog/pg_largeobject.h b/src/include/catalog/pg_largeobject.h index f2df67c35f..0a15649ddd 100644 --- a/src/include/catalog/pg_largeobject.h +++ b/src/include/catalog/pg_largeobject.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_largeobject.h diff --git a/src/include/catalog/pg_largeobject_metadata.h b/src/include/catalog/pg_largeobject_metadata.h index 7ae6d8c02b..4535b51b53 100644 --- a/src/include/catalog/pg_largeobject_metadata.h +++ b/src/include/catalog/pg_largeobject_metadata.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_largeobject_metadata.h diff --git a/src/include/catalog/pg_namespace.h b/src/include/catalog/pg_namespace.h index a61a8635f6..ac22d8918e 100644 --- a/src/include/catalog/pg_namespace.h +++ b/src/include/catalog/pg_namespace.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_namespace.h diff --git a/src/include/catalog/pg_opclass.h b/src/include/catalog/pg_opclass.h index 6aabc7279f..e11d52eaa8 100644 --- a/src/include/catalog/pg_opclass.h +++ b/src/include/catalog/pg_opclass.h @@ -25,7 +25,7 @@ * AMs support this. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_opclass.h diff --git a/src/include/catalog/pg_operator.h b/src/include/catalog/pg_operator.h index ff9b47077b..e74f963eb5 100644 --- a/src/include/catalog/pg_operator.h +++ b/src/include/catalog/pg_operator.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_operator.h diff --git a/src/include/catalog/pg_operator_fn.h b/src/include/catalog/pg_operator_fn.h index 37f5c712fe..6a2b39c2e7 100644 --- a/src/include/catalog/pg_operator_fn.h +++ b/src/include/catalog/pg_operator_fn.h @@ -4,7 +4,7 @@ * prototypes for functions in catalog/pg_operator.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_operator_fn.h diff --git a/src/include/catalog/pg_opfamily.h b/src/include/catalog/pg_opfamily.h index 838812b932..b544474254 100644 --- a/src/include/catalog/pg_opfamily.h +++ b/src/include/catalog/pg_opfamily.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_opfamily.h diff --git a/src/include/catalog/pg_partitioned_table.h b/src/include/catalog/pg_partitioned_table.h index 731147ecbf..9dc66f4954 100644 --- a/src/include/catalog/pg_partitioned_table.h +++ b/src/include/catalog/pg_partitioned_table.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/catalog/pg_partitioned_table.h * diff --git a/src/include/catalog/pg_pltemplate.h b/src/include/catalog/pg_pltemplate.h index fbe71bd0c3..d89f2db9e6 100644 --- a/src/include/catalog/pg_pltemplate.h +++ b/src/include/catalog/pg_pltemplate.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_pltemplate.h diff --git a/src/include/catalog/pg_policy.h b/src/include/catalog/pg_policy.h index 86000737fa..0d94f1ad72 100644 --- a/src/include/catalog/pg_policy.h +++ b/src/include/catalog/pg_policy.h @@ -2,7 +2,7 @@ * pg_policy.h * definition of the system "policy" relation (pg_policy) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * */ diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index 830bab37ea..298e0ae2f0 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -4,7 +4,7 @@ * definition of the system "procedure" relation (pg_proc) * along with the relation's initial contents. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_proc.h diff --git a/src/include/catalog/pg_proc_fn.h b/src/include/catalog/pg_proc_fn.h index 2c85f5d267..098e2e6f07 100644 --- a/src/include/catalog/pg_proc_fn.h +++ b/src/include/catalog/pg_proc_fn.h @@ -4,7 +4,7 @@ * prototypes for functions in catalog/pg_proc.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_proc_fn.h diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h index aa148960cd..7bdc634cf3 100644 --- a/src/include/catalog/pg_publication.h +++ b/src/include/catalog/pg_publication.h @@ -3,7 +3,7 @@ * pg_publication.h * definition of the relation sets relation (pg_publication) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_publication.h diff --git a/src/include/catalog/pg_publication_rel.h b/src/include/catalog/pg_publication_rel.h index 3729e5abdc..033b600cee 100644 --- a/src/include/catalog/pg_publication_rel.h +++ b/src/include/catalog/pg_publication_rel.h @@ -3,7 +3,7 @@ * pg_publication_rel.h * definition of the publication to relation map (pg_publication_rel) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_publication_rel.h diff --git a/src/include/catalog/pg_range.h b/src/include/catalog/pg_range.h index f12e82b2f2..00d02d1712 100644 --- a/src/include/catalog/pg_range.h +++ b/src/include/catalog/pg_range.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_range.h diff --git a/src/include/catalog/pg_replication_origin.h b/src/include/catalog/pg_replication_origin.h index 24c8a8430c..9656179ad4 100644 --- a/src/include/catalog/pg_replication_origin.h +++ b/src/include/catalog/pg_replication_origin.h @@ -3,7 +3,7 @@ * pg_replication_origin.h * Persistent replication origin registry * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_replication_origin.h diff --git a/src/include/catalog/pg_rewrite.h b/src/include/catalog/pg_rewrite.h index 48b9333a9d..81f2b19a74 100644 --- a/src/include/catalog/pg_rewrite.h +++ b/src/include/catalog/pg_rewrite.h @@ -8,7 +8,7 @@ * --- ie, rule names are only unique among the rules of a given table. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_rewrite.h diff --git a/src/include/catalog/pg_seclabel.h b/src/include/catalog/pg_seclabel.h index 3db9612fc3..70dc01ebfc 100644 --- a/src/include/catalog/pg_seclabel.h +++ b/src/include/catalog/pg_seclabel.h @@ -3,7 +3,7 @@ * pg_seclabel.h * definition of the system "security label" relation (pg_seclabel) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * ------------------------------------------------------------------------- diff --git a/src/include/catalog/pg_sequence.h b/src/include/catalog/pg_sequence.h index 6de54bb665..a78417eaeb 100644 --- a/src/include/catalog/pg_sequence.h +++ b/src/include/catalog/pg_sequence.h @@ -3,7 +3,7 @@ * pg_sequence.h * definition of the system "sequence" relation (pg_sequence) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * ------------------------------------------------------------------------- diff --git a/src/include/catalog/pg_shdepend.h b/src/include/catalog/pg_shdepend.h index 51b6588d3e..ae40377e4e 100644 --- a/src/include/catalog/pg_shdepend.h +++ b/src/include/catalog/pg_shdepend.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_shdepend.h diff --git a/src/include/catalog/pg_shdescription.h b/src/include/catalog/pg_shdescription.h index 154c48b584..d4ec616f3b 100644 --- a/src/include/catalog/pg_shdescription.h +++ b/src/include/catalog/pg_shdescription.h @@ -12,7 +12,7 @@ * across tables. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_shdescription.h diff --git a/src/include/catalog/pg_shseclabel.h b/src/include/catalog/pg_shseclabel.h index f8a906bb12..57b854c217 100644 --- a/src/include/catalog/pg_shseclabel.h +++ b/src/include/catalog/pg_shseclabel.h @@ -3,7 +3,7 @@ * pg_shseclabel.h * definition of the system "security label" relation (pg_shseclabel) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * ------------------------------------------------------------------------- diff --git a/src/include/catalog/pg_statistic.h b/src/include/catalog/pg_statistic.h index 43128f1928..a5c85feffe 100644 --- a/src/include/catalog/pg_statistic.h +++ b/src/include/catalog/pg_statistic.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_statistic.h diff --git a/src/include/catalog/pg_statistic_ext.h b/src/include/catalog/pg_statistic_ext.h index e6d1a8c3bc..2f5ef78c6c 100644 --- a/src/include/catalog/pg_statistic_ext.h +++ b/src/include/catalog/pg_statistic_ext.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_statistic_ext.h diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h index 274ff6bc42..46d0b48232 100644 --- a/src/include/catalog/pg_subscription.h +++ b/src/include/catalog/pg_subscription.h @@ -3,7 +3,7 @@ * pg_subscription.h * Definition of the subscription catalog (pg_subscription). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * ------------------------------------------------------------------------- diff --git a/src/include/catalog/pg_subscription_rel.h b/src/include/catalog/pg_subscription_rel.h index 57482972fb..d936973a9d 100644 --- a/src/include/catalog/pg_subscription_rel.h +++ b/src/include/catalog/pg_subscription_rel.h @@ -4,7 +4,7 @@ * Local info about tables that come from the publisher of a * subscription (pg_subscription_rel). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * ------------------------------------------------------------------------- diff --git a/src/include/catalog/pg_tablespace.h b/src/include/catalog/pg_tablespace.h index b759d5cea4..3967056649 100644 --- a/src/include/catalog/pg_tablespace.h +++ b/src/include/catalog/pg_tablespace.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_tablespace.h diff --git a/src/include/catalog/pg_transform.h b/src/include/catalog/pg_transform.h index 8b1610bb83..d4fc4649d5 100644 --- a/src/include/catalog/pg_transform.h +++ b/src/include/catalog/pg_transform.h @@ -2,7 +2,7 @@ * * pg_transform.h * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * src/include/catalog/pg_transform.h * diff --git a/src/include/catalog/pg_trigger.h b/src/include/catalog/pg_trigger.h index f413caf34f..c80a3aa54d 100644 --- a/src/include/catalog/pg_trigger.h +++ b/src/include/catalog/pg_trigger.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_trigger.h diff --git a/src/include/catalog/pg_ts_config.h b/src/include/catalog/pg_ts_config.h index 0ba79a596f..51b535ee4f 100644 --- a/src/include/catalog/pg_ts_config.h +++ b/src/include/catalog/pg_ts_config.h @@ -4,7 +4,7 @@ * definition of configuration of tsearch * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_ts_config.h diff --git a/src/include/catalog/pg_ts_config_map.h b/src/include/catalog/pg_ts_config_map.h index 3df05195be..a3d9e3f21f 100644 --- a/src/include/catalog/pg_ts_config_map.h +++ b/src/include/catalog/pg_ts_config_map.h @@ -4,7 +4,7 @@ * definition of token mappings for configurations of tsearch * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_ts_config_map.h diff --git a/src/include/catalog/pg_ts_dict.h b/src/include/catalog/pg_ts_dict.h index 634ea703e3..9d9c06a982 100644 --- a/src/include/catalog/pg_ts_dict.h +++ b/src/include/catalog/pg_ts_dict.h @@ -4,7 +4,7 @@ * definition of dictionaries for tsearch * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_ts_dict.h diff --git a/src/include/catalog/pg_ts_parser.h b/src/include/catalog/pg_ts_parser.h index 96e09bdcd8..96ffd818d7 100644 --- a/src/include/catalog/pg_ts_parser.h +++ b/src/include/catalog/pg_ts_parser.h @@ -4,7 +4,7 @@ * definition of parsers for tsearch * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_ts_parser.h diff --git a/src/include/catalog/pg_ts_template.h b/src/include/catalog/pg_ts_template.h index dc0148c68b..8a7203f0a7 100644 --- a/src/include/catalog/pg_ts_template.h +++ b/src/include/catalog/pg_ts_template.h @@ -4,7 +4,7 @@ * definition of dictionary templates for tsearch * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_ts_template.h diff --git a/src/include/catalog/pg_type.h b/src/include/catalog/pg_type.h index e3551440a0..5b5b1218de 100644 --- a/src/include/catalog/pg_type.h +++ b/src/include/catalog/pg_type.h @@ -5,7 +5,7 @@ * along with the relation's initial contents. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_type.h diff --git a/src/include/catalog/pg_type_fn.h b/src/include/catalog/pg_type_fn.h index b570d3588f..0ea0e9029a 100644 --- a/src/include/catalog/pg_type_fn.h +++ b/src/include/catalog/pg_type_fn.h @@ -4,7 +4,7 @@ * prototypes for functions in catalog/pg_type.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_type_fn.h diff --git a/src/include/catalog/pg_user_mapping.h b/src/include/catalog/pg_user_mapping.h index f08e6a72ed..686562a4d2 100644 --- a/src/include/catalog/pg_user_mapping.h +++ b/src/include/catalog/pg_user_mapping.h @@ -3,7 +3,7 @@ * pg_user_mapping.h * definition of the system "user mapping" relation (pg_user_mapping) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/pg_user_mapping.h diff --git a/src/include/catalog/storage.h b/src/include/catalog/storage.h index a3a97db929..ef52d85803 100644 --- a/src/include/catalog/storage.h +++ b/src/include/catalog/storage.h @@ -4,7 +4,7 @@ * prototypes for functions in backend/catalog/storage.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/storage.h diff --git a/src/include/catalog/storage_xlog.h b/src/include/catalog/storage_xlog.h index 4c08005228..5738071598 100644 --- a/src/include/catalog/storage_xlog.h +++ b/src/include/catalog/storage_xlog.h @@ -4,7 +4,7 @@ * prototypes for XLog support for backend/catalog/storage.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/storage_xlog.h diff --git a/src/include/catalog/toasting.h b/src/include/catalog/toasting.h index f82a799572..f6387ae143 100644 --- a/src/include/catalog/toasting.h +++ b/src/include/catalog/toasting.h @@ -4,7 +4,7 @@ * This file provides some definitions to support creation of toast tables * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/catalog/toasting.h diff --git a/src/include/commands/alter.h b/src/include/commands/alter.h index 4365357ab8..402a4b2d58 100644 --- a/src/include/commands/alter.h +++ b/src/include/commands/alter.h @@ -4,7 +4,7 @@ * prototypes for commands/alter.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/alter.h diff --git a/src/include/commands/async.h b/src/include/commands/async.h index 939711d8d9..d5868c42a0 100644 --- 
a/src/include/commands/async.h +++ b/src/include/commands/async.h @@ -3,7 +3,7 @@ * async.h * Asynchronous notification: NOTIFY, LISTEN, UNLISTEN * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/async.h diff --git a/src/include/commands/cluster.h b/src/include/commands/cluster.h index 7bade9fad2..b338cb1098 100644 --- a/src/include/commands/cluster.h +++ b/src/include/commands/cluster.h @@ -3,7 +3,7 @@ * cluster.h * header file for postgres cluster command stuff * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994-5, Regents of the University of California * * src/include/commands/cluster.h diff --git a/src/include/commands/collationcmds.h b/src/include/commands/collationcmds.h index 30e847432e..9b0f00a997 100644 --- a/src/include/commands/collationcmds.h +++ b/src/include/commands/collationcmds.h @@ -4,7 +4,7 @@ * prototypes for collationcmds.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/collationcmds.h diff --git a/src/include/commands/comment.h b/src/include/commands/comment.h index 0caf0e81ab..411433f862 100644 --- a/src/include/commands/comment.h +++ b/src/include/commands/comment.h @@ -7,7 +7,7 @@ * * Prototypes for functions in commands/comment.c * - * Copyright (c) 1999-2017, PostgreSQL Global Development Group + * Copyright (c) 1999-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/include/commands/conversioncmds.h b/src/include/commands/conversioncmds.h index 7054505794..9fd40cb6f0 100644 --- a/src/include/commands/conversioncmds.h +++ b/src/include/commands/conversioncmds.h @@ -4,7 +4,7 @@ * prototypes for conversioncmds.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/conversioncmds.h diff --git a/src/include/commands/copy.h b/src/include/commands/copy.h index 8b2971d287..f393e7e73d 100644 --- a/src/include/commands/copy.h +++ b/src/include/commands/copy.h @@ -4,7 +4,7 @@ * Definitions for using the POSTGRES copy command. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/copy.h diff --git a/src/include/commands/createas.h b/src/include/commands/createas.h index aaf4fac97b..03ba21ded8 100644 --- a/src/include/commands/createas.h +++ b/src/include/commands/createas.h @@ -4,7 +4,7 @@ * prototypes for createas.c. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/createas.h diff --git a/src/include/commands/dbcommands.h b/src/include/commands/dbcommands.h index f42c8cdbe3..677c7fc5fc 100644 --- a/src/include/commands/dbcommands.h +++ b/src/include/commands/dbcommands.h @@ -4,7 +4,7 @@ * Database management commands (create/drop database). * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/dbcommands.h diff --git a/src/include/commands/dbcommands_xlog.h b/src/include/commands/dbcommands_xlog.h index 63b1a6470c..83048d6c5b 100644 --- a/src/include/commands/dbcommands_xlog.h +++ b/src/include/commands/dbcommands_xlog.h @@ -4,7 +4,7 @@ * Database resource manager XLOG definitions (create/drop database). * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/dbcommands_xlog.h diff --git a/src/include/commands/defrem.h b/src/include/commands/defrem.h index 52cbf61ccb..1f18cad963 100644 --- a/src/include/commands/defrem.h +++ b/src/include/commands/defrem.h @@ -4,7 +4,7 @@ * POSTGRES define and remove utility definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/defrem.h diff --git a/src/include/commands/discard.h b/src/include/commands/discard.h index 8ea0b30ddf..bbec7ef6ef 100644 --- a/src/include/commands/discard.h +++ b/src/include/commands/discard.h @@ -4,7 +4,7 @@ * prototypes for discard.c. * * - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/commands/discard.h * diff --git a/src/include/commands/event_trigger.h b/src/include/commands/event_trigger.h index 2ce528272c..8e4142391d 100644 --- a/src/include/commands/event_trigger.h +++ b/src/include/commands/event_trigger.h @@ -3,7 +3,7 @@ * event_trigger.h * Declarations for command trigger handling. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/event_trigger.h diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h index 543b2bb0c6..dd8abae98a 100644 --- a/src/include/commands/explain.h +++ b/src/include/commands/explain.h @@ -3,7 +3,7 @@ * explain.h * prototypes for explain.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994-5, Regents of the University of California * * src/include/commands/explain.h diff --git a/src/include/commands/extension.h b/src/include/commands/extension.h index a0dfae10c6..068a3754aa 100644 --- a/src/include/commands/extension.h +++ b/src/include/commands/extension.h @@ -4,7 +4,7 @@ * Extension management commands (create/drop extension). * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/extension.h diff --git a/src/include/commands/lockcmds.h b/src/include/commands/lockcmds.h index cdb5c62a52..6420e13e75 100644 --- a/src/include/commands/lockcmds.h +++ b/src/include/commands/lockcmds.h @@ -4,7 +4,7 @@ * prototypes for lockcmds.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/lockcmds.h diff --git a/src/include/commands/matview.h b/src/include/commands/matview.h index 3feb137ef4..3b30ad76b2 100644 --- a/src/include/commands/matview.h +++ b/src/include/commands/matview.h @@ -4,7 +4,7 @@ * prototypes for matview.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/matview.h diff --git a/src/include/commands/policy.h b/src/include/commands/policy.h index d6a920ccef..acf621a8ec 100644 --- a/src/include/commands/policy.h +++ b/src/include/commands/policy.h @@ -4,7 +4,7 @@ * prototypes for policy.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/commands/policy.h diff --git a/src/include/commands/portalcmds.h b/src/include/commands/portalcmds.h index 488ce60cd6..99dd04594f 100644 --- a/src/include/commands/portalcmds.h +++ b/src/include/commands/portalcmds.h @@ -4,7 +4,7 @@ * prototypes for portalcmds.c. 
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/portalcmds.h
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index 5ec1200e0a..ffec029df4 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -4,7 +4,7 @@
  * PREPARE, EXECUTE and DEALLOCATE commands, and prepared-stmt storage
  *
  *
- * Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * src/include/commands/prepare.h
  *
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecca63..6a6b467fee 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -7,7 +7,7 @@
  * constants, you probably also need to update the views based on them
  * in system_views.sql.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/progress.h
diff --git a/src/include/commands/publicationcmds.h b/src/include/commands/publicationcmds.h
index a2e0f4a21e..0c0d7795cf 100644
--- a/src/include/commands/publicationcmds.h
+++ b/src/include/commands/publicationcmds.h
@@ -4,7 +4,7 @@
  * prototypes for publicationcmds.c.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/publicationcmds.h
diff --git a/src/include/commands/schemacmds.h b/src/include/commands/schemacmds.h
index 381e5b8dae..bf00754245 100644
--- a/src/include/commands/schemacmds.h
+++ b/src/include/commands/schemacmds.h
@@ -4,7 +4,7 @@
  * prototypes for schemacmds.c.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/schemacmds.h
diff --git a/src/include/commands/seclabel.h b/src/include/commands/seclabel.h
index a97c3293b2..85f2cf67aa 100644
--- a/src/include/commands/seclabel.h
+++ b/src/include/commands/seclabel.h
@@ -3,7 +3,7 @@
  *
  * Prototypes for functions in commands/seclabel.c
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  */
 #ifndef SECLABEL_H
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index caab195130..3f58bae31a 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -3,7 +3,7 @@
  * sequence.h
  * prototypes for sequence.c.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/sequence.h
diff --git a/src/include/commands/subscriptioncmds.h b/src/include/commands/subscriptioncmds.h
index 3d92a682a1..6d70ad71b1 100644
--- a/src/include/commands/subscriptioncmds.h
+++ b/src/include/commands/subscriptioncmds.h
@@ -4,7 +4,7 @@
  * prototypes for subscriptioncmds.c.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/subscriptioncmds.h
diff --git a/src/include/commands/tablecmds.h b/src/include/commands/tablecmds.h
index da3ff5dbee..06e5180a30 100644
--- a/src/include/commands/tablecmds.h
+++ b/src/include/commands/tablecmds.h
@@ -4,7 +4,7 @@
  * prototypes for tablecmds.c.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/tablecmds.h
diff --git a/src/include/commands/tablespace.h b/src/include/commands/tablespace.h
index ba8de32c7b..d52b73d57a 100644
--- a/src/include/commands/tablespace.h
+++ b/src/include/commands/tablespace.h
@@ -4,7 +4,7 @@
  * Tablespace management commands (create/drop tablespace).
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/tablespace.h
diff --git a/src/include/commands/trigger.h b/src/include/commands/trigger.h
index adbcfa1297..ff5546cf28 100644
--- a/src/include/commands/trigger.h
+++ b/src/include/commands/trigger.h
@@ -3,7 +3,7 @@
  * trigger.h
  * Declarations for trigger handling.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/trigger.h
diff --git a/src/include/commands/typecmds.h b/src/include/commands/typecmds.h
index 9fbf38629d..a04f3405de 100644
--- a/src/include/commands/typecmds.h
+++ b/src/include/commands/typecmds.h
@@ -4,7 +4,7 @@
  * prototypes for typecmds.c.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/typecmds.h
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 60586b2ca6..797b6dfec8 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -4,7 +4,7 @@
  * header file for postgres vacuum cleaner and statistics analyzer
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/vacuum.h
diff --git a/src/include/commands/variable.h b/src/include/commands/variable.h
index 575339a6d8..4ea3b0209b 100644
--- a/src/include/commands/variable.h
+++ b/src/include/commands/variable.h
@@ -2,7 +2,7 @@
  * variable.h
  * Routines for handling specialized SET variables.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/variable.h
diff --git a/src/include/commands/view.h b/src/include/commands/view.h
index 46d762db22..4703922ff6 100644
--- a/src/include/commands/view.h
+++ b/src/include/commands/view.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/commands/view.h
diff --git a/src/include/common/base64.h b/src/include/common/base64.h
index 7fe19bb432..32cec4b210 100644
--- a/src/include/common/base64.h
+++ b/src/include/common/base64.h
@@ -3,7 +3,7 @@
  * Encoding and decoding routines for base64 without whitespace
  * support.
  *
- * Portions Copyright (c) 2001-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2001-2018, PostgreSQL Global Development Group
  *
  * src/include/common/base64.h
  */
diff --git a/src/include/common/config_info.h b/src/include/common/config_info.h
index f775327861..72014a915a 100644
--- a/src/include/common/config_info.h
+++ b/src/include/common/config_info.h
@@ -2,7 +2,7 @@
  * config_info.h
  * Common code for pg_config output
  *
- * Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * src/include/common/config_info.h
  */
diff --git a/src/include/common/controldata_utils.h b/src/include/common/controldata_utils.h
index e97abe6a51..d8fd316396 100644
--- a/src/include/common/controldata_utils.h
+++ b/src/include/common/controldata_utils.h
@@ -2,7 +2,7 @@
  * controldata_utils.h
  * Common code for pg_controldata output
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/controldata_utils.h
diff --git a/src/include/common/fe_memutils.h b/src/include/common/fe_memutils.h
index 6708670b96..458743dd40 100644
--- a/src/include/common/fe_memutils.h
+++ b/src/include/common/fe_memutils.h
@@ -2,7 +2,7 @@
  * fe_memutils.h
  * memory management support for frontend code
  *
- * Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *
  * src/include/common/fe_memutils.h
  */
diff --git a/src/include/common/file_utils.h b/src/include/common/file_utils.h
index 52af7f0baa..71f638562e 100644
--- a/src/include/common/file_utils.h
+++ b/src/include/common/file_utils.h
@@ -5,7 +5,7 @@
  * Assorted utility functions to work on files.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/file_utils.h
diff --git a/src/include/common/int.h b/src/include/common/int.h
index e6907c699f..feb84102b4 100644
--- a/src/include/common/int.h
+++ b/src/include/common/int.h
@@ -11,7 +11,7 @@
  * the 64 bit cases can be considerably faster with intrinsics. In case no
  * intrinsics are available 128 bit math is used where available.
  *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * src/include/common/int.h
  *
diff --git a/src/include/common/int128.h b/src/include/common/int128.h
index af2c93da46..2654f18f85 100644
--- a/src/include/common/int128.h
+++ b/src/include/common/int128.h
@@ -8,7 +8,7 @@
  *
  * See src/tools/testint128.c for a simple test harness for this file.
  *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * src/include/common/int128.h
  *
diff --git a/src/include/common/ip.h b/src/include/common/ip.h
index f530139876..33147891d1 100644
--- a/src/include/common/ip.h
+++ b/src/include/common/ip.h
@@ -5,7 +5,7 @@
  *
  * These definitions are used by both frontend and backend code.
  *
- * Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *
  * src/include/common/ip.h
  *
diff --git a/src/include/common/keywords.h b/src/include/common/keywords.h
index 60522715a8..0b31505b66 100644
--- a/src/include/common/keywords.h
+++ b/src/include/common/keywords.h
@@ -4,7 +4,7 @@
  * lexical token lookup for key words in PostgreSQL
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/keywords.h
diff --git a/src/include/common/md5.h b/src/include/common/md5.h
index ccaaeddbf4..905d3aa219 100644
--- a/src/include/common/md5.h
+++ b/src/include/common/md5.h
@@ -6,7 +6,7 @@
  * These definitions are needed by both frontend and backend code to work
  * with MD5-encrypted passwords.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/md5.h
diff --git a/src/include/common/relpath.h b/src/include/common/relpath.h
index ec5ef99451..9137dc9ed3 100644
--- a/src/include/common/relpath.h
+++ b/src/include/common/relpath.h
@@ -3,7 +3,7 @@
  * relpath.h
  * Declarations for GetRelationPath() and friends
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/relpath.h
diff --git a/src/include/common/restricted_token.h b/src/include/common/restricted_token.h
index 51be5a760c..a4a263fdee 100644
--- a/src/include/common/restricted_token.h
+++ b/src/include/common/restricted_token.h
@@ -2,7 +2,7 @@
  * restricted_token.h
  * helper routine to ensure restricted token on Windows
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/restricted_token.h
diff --git a/src/include/common/saslprep.h b/src/include/common/saslprep.h
index c7b620cf19..dc1af15030 100644
--- a/src/include/common/saslprep.h
+++ b/src/include/common/saslprep.h
@@ -5,7 +5,7 @@
  *
  * These definitions are used by both frontend and backend code.
  *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * src/include/common/saslprep.h
  *
diff --git a/src/include/common/scram-common.h b/src/include/common/scram-common.h
index 857a60e71f..3d81934fda 100644
--- a/src/include/common/scram-common.h
+++ b/src/include/common/scram-common.h
@@ -3,7 +3,7 @@
  * scram-common.h
  * Declarations for helper functions used for SCRAM authentication
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/scram-common.h
diff --git a/src/include/common/sha2.h b/src/include/common/sha2.h
index a31b3979d8..f3fd0d0d28 100644
--- a/src/include/common/sha2.h
+++ b/src/include/common/sha2.h
@@ -3,7 +3,7 @@
  * sha2.h
  * Generic headers for SHA224, 256, 384 AND 512 functions of PostgreSQL.
  *
- * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/include/common/sha2.h
diff --git a/src/include/common/string.h b/src/include/common/string.h
index 5f3ea71d61..06934b9914 100644
--- a/src/include/common/string.h
+++ b/src/include/common/string.h
@@ -2,7 +2,7 @@
  * string.h
  * string handling helpers
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/string.h
diff --git a/src/include/common/unicode_norm.h b/src/include/common/unicode_norm.h
index 8741209751..34ca2622ec 100644
--- a/src/include/common/unicode_norm.h
+++ b/src/include/common/unicode_norm.h
@@ -5,7 +5,7 @@
  *
  * These definitions are used by both frontend and backend code.
  *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * src/include/common/unicode_norm.h
  *
diff --git a/src/include/common/unicode_norm_table.h b/src/include/common/unicode_norm_table.h
index da08e487e3..3444bc8d80 100644
--- a/src/include/common/unicode_norm_table.h
+++ b/src/include/common/unicode_norm_table.h
@@ -3,7 +3,7 @@
  * unicode_norm_table.h
  * Composition table used for Unicode normalization
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/unicode_norm_table.h
diff --git a/src/include/common/username.h b/src/include/common/username.h
index 735572d382..1bb3496f9e 100644
--- a/src/include/common/username.h
+++ b/src/include/common/username.h
@@ -2,7 +2,7 @@
  * username.h
  * lookup effective username
  *
- * Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *
  * src/include/common/username.h
  */
diff --git a/src/include/datatype/timestamp.h b/src/include/datatype/timestamp.h
index 6f48d1c71b..f5b6026ef5 100644
--- a/src/include/datatype/timestamp.h
+++ b/src/include/datatype/timestamp.h
@@ -5,7 +5,7 @@
  *
  * Note: this file must be includable in both frontend and backend contexts.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/datatype/timestamp.h
diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h
index 511205b5ac..b0c7bda76f 100644
--- a/src/include/executor/execExpr.h
+++ b/src/include/executor/execExpr.h
@@ -4,7 +4,7 @@
  * Low level infrastructure related to expression evaluation
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/execExpr.h
diff --git a/src/include/executor/execParallel.h b/src/include/executor/execParallel.h
index f093f95232..626a66c27a 100644
--- a/src/include/executor/execParallel.h
+++ b/src/include/executor/execParallel.h
@@ -2,7 +2,7 @@
  * execParallel.h
  * POSTGRES parallel execution interface
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/executor/execPartition.h b/src/include/executor/execPartition.h
index 86a199d169..f0998cb82f 100644
--- a/src/include/executor/execPartition.h
+++ b/src/include/executor/execPartition.h
@@ -2,7 +2,7 @@
  * execPartition.h
  * POSTGRES partitioning executor interface
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/executor/execdebug.h b/src/include/executor/execdebug.h
index cd04b60176..236b2cc4fd 100644
--- a/src/include/executor/execdebug.h
+++ b/src/include/executor/execdebug.h
@@ -7,7 +7,7 @@
  * for debug printouts, because that's more flexible than printf().
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/execdebug.h
diff --git a/src/include/executor/execdesc.h b/src/include/executor/execdesc.h
index 8c09961e28..10e9ded246 100644
--- a/src/include/executor/execdesc.h
+++ b/src/include/executor/execdesc.h
@@ -5,7 +5,7 @@
  * and related modules.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/execdesc.h
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 2cc74da0ba..e6569e1038 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -4,7 +4,7 @@
  * support for the POSTGRES executor module
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/executor.h
diff --git a/src/include/executor/functions.h b/src/include/executor/functions.h
index 718d8947a3..e7454ee790 100644
--- a/src/include/executor/functions.h
+++ b/src/include/executor/functions.h
@@ -4,7 +4,7 @@
  * Declarations for execution of SQL-language functions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/functions.h
diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h
index be83500b9d..6fb2dc04f6 100644
--- a/src/include/executor/hashjoin.h
+++ b/src/include/executor/hashjoin.h
@@ -4,7 +4,7 @@
  * internal structures for hash joins
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/hashjoin.h
diff --git a/src/include/executor/instrument.h b/src/include/executor/instrument.h
index f1bae7a44d..b72f91898a 100644
--- a/src/include/executor/instrument.h
+++ b/src/include/executor/instrument.h
@@ -4,7 +4,7 @@
  * definitions for run-time statistics collection
  *
  *
- * Copyright (c) 2001-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2001-2018, PostgreSQL Global Development Group
  *
  * src/include/executor/instrument.h
  *
diff --git a/src/include/executor/nodeAgg.h b/src/include/executor/nodeAgg.h
index eff5af9c2a..90c68795f1 100644
--- a/src/include/executor/nodeAgg.h
+++ b/src/include/executor/nodeAgg.h
@@ -4,7 +4,7 @@
  * prototypes for nodeAgg.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeAgg.h
diff --git a/src/include/executor/nodeAppend.h b/src/include/executor/nodeAppend.h
index d42d50614c..4e31a9fcd8 100644
--- a/src/include/executor/nodeAppend.h
+++ b/src/include/executor/nodeAppend.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeAppend.h
diff --git a/src/include/executor/nodeBitmapAnd.h b/src/include/executor/nodeBitmapAnd.h
index 5d848b61af..029e5b600d 100644
--- a/src/include/executor/nodeBitmapAnd.h
+++ b/src/include/executor/nodeBitmapAnd.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeBitmapAnd.h
diff --git a/src/include/executor/nodeBitmapHeapscan.h b/src/include/executor/nodeBitmapHeapscan.h
index 7907ecc3cb..e86d3e10d4 100644
--- a/src/include/executor/nodeBitmapHeapscan.h
+++ b/src/include/executor/nodeBitmapHeapscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeBitmapHeapscan.h
diff --git a/src/include/executor/nodeBitmapIndexscan.h b/src/include/executor/nodeBitmapIndexscan.h
index 842193f4df..8b93baabea 100644
--- a/src/include/executor/nodeBitmapIndexscan.h
+++ b/src/include/executor/nodeBitmapIndexscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeBitmapIndexscan.h
diff --git a/src/include/executor/nodeBitmapOr.h b/src/include/executor/nodeBitmapOr.h
index 526904eb4d..96f84d22ed 100644
--- a/src/include/executor/nodeBitmapOr.h
+++ b/src/include/executor/nodeBitmapOr.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeBitmapOr.h
diff --git a/src/include/executor/nodeCtescan.h b/src/include/executor/nodeCtescan.h
index d2fbcbd586..21f4f191cd 100644
--- a/src/include/executor/nodeCtescan.h
+++ b/src/include/executor/nodeCtescan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeCtescan.h
diff --git a/src/include/executor/nodeCustom.h b/src/include/executor/nodeCustom.h
index d7dcf3b8cb..454a684a40 100644
--- a/src/include/executor/nodeCustom.h
+++ b/src/include/executor/nodeCustom.h
@@ -4,7 +4,7 @@
  *
  * prototypes for CustomScan nodes
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * ------------------------------------------------------------------------
diff --git a/src/include/executor/nodeForeignscan.h b/src/include/executor/nodeForeignscan.h
index 152abf022b..ccb66be733 100644
--- a/src/include/executor/nodeForeignscan.h
+++ b/src/include/executor/nodeForeignscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeForeignscan.h
diff --git a/src/include/executor/nodeFunctionscan.h b/src/include/executor/nodeFunctionscan.h
index aaa9d8c316..5de8d15a5f 100644
--- a/src/include/executor/nodeFunctionscan.h
+++ b/src/include/executor/nodeFunctionscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeFunctionscan.h
diff --git a/src/include/executor/nodeGather.h b/src/include/executor/nodeGather.h
index 189bd70041..4477b855c4 100644
--- a/src/include/executor/nodeGather.h
+++ b/src/include/executor/nodeGather.h
@@ -4,7 +4,7 @@
  * prototypes for nodeGather.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeGather.h
diff --git a/src/include/executor/nodeGatherMerge.h b/src/include/executor/nodeGatherMerge.h
index 0154d73312..1e514ff090 100644
--- a/src/include/executor/nodeGatherMerge.h
+++ b/src/include/executor/nodeGatherMerge.h
@@ -4,7 +4,7 @@
  * prototypes for nodeGatherMerge.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeGatherMerge.h
diff --git a/src/include/executor/nodeGroup.h b/src/include/executor/nodeGroup.h
index b0d7e312c9..483390db8c 100644
--- a/src/include/executor/nodeGroup.h
+++ b/src/include/executor/nodeGroup.h
@@ -4,7 +4,7 @@
  * prototypes for nodeGroup.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeGroup.h
diff --git a/src/include/executor/nodeHash.h b/src/include/executor/nodeHash.h
index 367dfff018..8d700c06c5 100644
--- a/src/include/executor/nodeHash.h
+++ b/src/include/executor/nodeHash.h
@@ -4,7 +4,7 @@
  * prototypes for nodeHash.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeHash.h
diff --git a/src/include/executor/nodeHashjoin.h b/src/include/executor/nodeHashjoin.h
index b46bcc1f68..4086dd5382 100644
--- a/src/include/executor/nodeHashjoin.h
+++ b/src/include/executor/nodeHashjoin.h
@@ -4,7 +4,7 @@
  * prototypes for nodeHashjoin.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeHashjoin.h
diff --git a/src/include/executor/nodeIndexonlyscan.h b/src/include/executor/nodeIndexonlyscan.h
index c5344a8d5d..8f6c5a8d09 100644
--- a/src/include/executor/nodeIndexonlyscan.h
+++ b/src/include/executor/nodeIndexonlyscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeIndexonlyscan.h
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ae0f44806a..822a9c9fad 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeIndexscan.h
diff --git a/src/include/executor/nodeLimit.h b/src/include/executor/nodeLimit.h
index db65b5524c..160ae5b026 100644
--- a/src/include/executor/nodeLimit.h
+++ b/src/include/executor/nodeLimit.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeLimit.h
diff --git a/src/include/executor/nodeLockRows.h b/src/include/executor/nodeLockRows.h
index c9d05b87f1..e3d77dff56 100644
--- a/src/include/executor/nodeLockRows.h
+++ b/src/include/executor/nodeLockRows.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeLockRows.h
diff --git a/src/include/executor/nodeMaterial.h b/src/include/executor/nodeMaterial.h
index 4b3c2578c9..84004efd5e 100644
--- a/src/include/executor/nodeMaterial.h
+++ b/src/include/executor/nodeMaterial.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeMaterial.h
diff --git a/src/include/executor/nodeMergeAppend.h b/src/include/executor/nodeMergeAppend.h
index a0ccbae965..e3bb7d91ff 100644
--- a/src/include/executor/nodeMergeAppend.h
+++ b/src/include/executor/nodeMergeAppend.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeMergeAppend.h
diff --git a/src/include/executor/nodeMergejoin.h b/src/include/executor/nodeMergejoin.h
index d20e41505d..456a39d914 100644
--- a/src/include/executor/nodeMergejoin.h
+++ b/src/include/executor/nodeMergejoin.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeMergejoin.h
diff --git a/src/include/executor/nodeModifyTable.h b/src/include/executor/nodeModifyTable.h
index a2e7af98de..0d7e579e1c 100644
--- a/src/include/executor/nodeModifyTable.h
+++ b/src/include/executor/nodeModifyTable.h
@@ -3,7 +3,7 @@
  * nodeModifyTable.h
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeModifyTable.h
diff --git a/src/include/executor/nodeNamedtuplestorescan.h b/src/include/executor/nodeNamedtuplestorescan.h
index 395d978f62..6c7300bcb8 100644
--- a/src/include/executor/nodeNamedtuplestorescan.h
+++ b/src/include/executor/nodeNamedtuplestorescan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeNamedtuplestorescan.h
diff --git a/src/include/executor/nodeNestloop.h b/src/include/executor/nodeNestloop.h
index 0d6486cc57..06b90d150e 100644
--- a/src/include/executor/nodeNestloop.h
+++ b/src/include/executor/nodeNestloop.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeNestloop.h
diff --git a/src/include/executor/nodeProjectSet.h b/src/include/executor/nodeProjectSet.h
index a0b0521f8d..c365589754 100644
--- a/src/include/executor/nodeProjectSet.h
+++ b/src/include/executor/nodeProjectSet.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeProjectSet.h
diff --git a/src/include/executor/nodeRecursiveunion.h b/src/include/executor/nodeRecursiveunion.h
index e6ce1b4783..09f211c500 100644
--- a/src/include/executor/nodeRecursiveunion.h
+++ b/src/include/executor/nodeRecursiveunion.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeRecursiveunion.h
diff --git a/src/include/executor/nodeResult.h b/src/include/executor/nodeResult.h
index 20e0063410..422a2ffd47 100644
--- a/src/include/executor/nodeResult.h
+++ b/src/include/executor/nodeResult.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeResult.h
diff --git a/src/include/executor/nodeSamplescan.h b/src/include/executor/nodeSamplescan.h
index 607bbd9412..0d489496cd 100644
--- a/src/include/executor/nodeSamplescan.h
+++ b/src/include/executor/nodeSamplescan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeSamplescan.h
diff --git a/src/include/executor/nodeSeqscan.h b/src/include/executor/nodeSeqscan.h
index ee3b1a0bb8..020b40c6b7 100644
--- a/src/include/executor/nodeSeqscan.h
+++ b/src/include/executor/nodeSeqscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeSeqscan.h
diff --git a/src/include/executor/nodeSetOp.h b/src/include/executor/nodeSetOp.h
index c15f945046..d41fcbdc6e 100644
--- a/src/include/executor/nodeSetOp.h
+++ b/src/include/executor/nodeSetOp.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeSetOp.h
diff --git a/src/include/executor/nodeSort.h b/src/include/executor/nodeSort.h
index 627a04c3fd..22f69ee1ea 100644
--- a/src/include/executor/nodeSort.h
+++ b/src/include/executor/nodeSort.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeSort.h
diff --git a/src/include/executor/nodeSubplan.h b/src/include/executor/nodeSubplan.h
index 5dbaeeb29a..d9784a2b71 100644
--- a/src/include/executor/nodeSubplan.h
+++ b/src/include/executor/nodeSubplan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeSubplan.h
diff --git a/src/include/executor/nodeSubqueryscan.h b/src/include/executor/nodeSubqueryscan.h
index 710e050285..4bd292159c 100644
--- a/src/include/executor/nodeSubqueryscan.h
+++ b/src/include/executor/nodeSubqueryscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeSubqueryscan.h
diff --git a/src/include/executor/nodeTableFuncscan.h b/src/include/executor/nodeTableFuncscan.h
index c4672c0ac0..06ebbf31eb 100644
--- a/src/include/executor/nodeTableFuncscan.h
+++ b/src/include/executor/nodeTableFuncscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeTableFuncscan.h
diff --git a/src/include/executor/nodeTidscan.h b/src/include/executor/nodeTidscan.h
index e68aaf3829..30d21ff229 100644
--- a/src/include/executor/nodeTidscan.h
+++ b/src/include/executor/nodeTidscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeTidscan.h
diff --git a/src/include/executor/nodeUnique.h b/src/include/executor/nodeUnique.h
index 008774ae0f..b8a44b73c1 100644
--- a/src/include/executor/nodeUnique.h
+++ b/src/include/executor/nodeUnique.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeUnique.h
diff --git a/src/include/executor/nodeValuesscan.h b/src/include/executor/nodeValuesscan.h
index 772a5e9705..b4f384b111 100644
--- a/src/include/executor/nodeValuesscan.h
+++ b/src/include/executor/nodeValuesscan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeValuesscan.h
diff --git a/src/include/executor/nodeWindowAgg.h b/src/include/executor/nodeWindowAgg.h
index 1c177309ae..677e2d8d07 100644
--- a/src/include/executor/nodeWindowAgg.h
+++ b/src/include/executor/nodeWindowAgg.h
@@ -4,7 +4,7 @@
  * prototypes for nodeWindowAgg.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeWindowAgg.h
diff --git a/src/include/executor/nodeWorktablescan.h b/src/include/executor/nodeWorktablescan.h
index df05e75111..7daee1e40a 100644
--- a/src/include/executor/nodeWorktablescan.h
+++ b/src/include/executor/nodeWorktablescan.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeWorktablescan.h
diff --git a/src/include/executor/spi.h b/src/include/executor/spi.h
index acade7e92e..43580c5158 100644
--- a/src/include/executor/spi.h
+++ b/src/include/executor/spi.h
@@ -3,7 +3,7 @@
  * spi.h
  * Server Programming Interface public declarations
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/spi.h
diff --git a/src/include/executor/spi_priv.h b/src/include/executor/spi_priv.h
index 8fae755418..64f8a450eb 100644
--- a/src/include/executor/spi_priv.h
+++ b/src/include/executor/spi_priv.h
@@ -3,7 +3,7 @@
  * spi_priv.h
  * Server Programming Interface private declarations
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/spi_priv.h
diff --git a/src/include/executor/tablefunc.h b/src/include/executor/tablefunc.h
index 49e8c7c1b2..29c546f83e 100644
--- a/src/include/executor/tablefunc.h
+++ b/src/include/executor/tablefunc.h
@@ -3,7 +3,7 @@
  * tablefunc.h
  * interface for TableFunc executor node
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/tablefunc.h
diff --git a/src/include/executor/tqueue.h b/src/include/executor/tqueue.h
index fdc9deb2b2..0fe3639252 100644
--- a/src/include/executor/tqueue.h
+++ b/src/include/executor/tqueue.h
@@ -3,7 +3,7 @@
  * tqueue.h
  * Use shm_mq to send & receive tuples between parallel backends
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/tqueue.h
diff --git a/src/include/executor/tstoreReceiver.h b/src/include/executor/tstoreReceiver.h
index ac4de3a663..5e2f83123c 100644
--- a/src/include/executor/tstoreReceiver.h
+++ b/src/include/executor/tstoreReceiver.h
@@ -4,7 +4,7 @@
  * prototypes for tstoreReceiver.c
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/tstoreReceiver.h
diff --git a/src/include/executor/tuptable.h b/src/include/executor/tuptable.h
index db2a42af5e..5b54834d33 100644
--- a/src/include/executor/tuptable.h
+++ b/src/include/executor/tuptable.h
@@ -4,7 +4,7 @@
  * tuple table support stuff
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/tuptable.h
diff --git a/src/include/fe_utils/mbprint.h b/src/include/fe_utils/mbprint.h
index e3cfaf3ddd..7d8019c203 100644
--- a/src/include/fe_utils/mbprint.h
+++ b/src/include/fe_utils/mbprint.h
@@ -3,7 +3,7 @@
  * Multibyte character printing support for frontend code
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/fe_utils/mbprint.h
diff --git a/src/include/fe_utils/print.h b/src/include/fe_utils/print.h
index 36b89e7d57..83320d06bd 100644
--- a/src/include/fe_utils/print.h
+++ b/src/include/fe_utils/print.h
@@ -3,7 +3,7 @@
  * Query-result printing support for frontend code
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/fe_utils/print.h
diff --git a/src/include/fe_utils/psqlscan.h b/src/include/fe_utils/psqlscan.h
index c199a2917e..2d58c071f3 100644
--- a/src/include/fe_utils/psqlscan.h
+++ b/src/include/fe_utils/psqlscan.h
@@ -10,7 +10,7 @@
  * backslash commands.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/fe_utils/psqlscan.h
diff --git a/src/include/fe_utils/psqlscan_int.h b/src/include/fe_utils/psqlscan_int.h
index 2653344701..0be0db69ab 100644
--- a/src/include/fe_utils/psqlscan_int.h
+++ b/src/include/fe_utils/psqlscan_int.h
@@ -34,7 +34,7 @@
  * same flex version, or if they don't use the same flex options.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/fe_utils/psqlscan_int.h
diff --git a/src/include/fe_utils/simple_list.h b/src/include/fe_utils/simple_list.h
index 97bb34f191..9785489128 100644
--- a/src/include/fe_utils/simple_list.h
+++ b/src/include/fe_utils/simple_list.h
@@ -7,7 +7,7 @@
  * it's all we need in, eg, pg_dump.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/fe_utils/simple_list.h
diff --git a/src/include/fe_utils/string_utils.h b/src/include/fe_utils/string_utils.h
index bc6b87d6f1..9a311e0f0f 100644
--- a/src/include/fe_utils/string_utils.h
+++ b/src/include/fe_utils/string_utils.h
@@ -6,7 +6,7 @@
  * assorted contexts.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/fe_utils/string_utils.h
diff --git a/src/include/fmgr.h b/src/include/fmgr.h
index a68ec91c68..665dd76b12 100644
--- a/src/include/fmgr.h
+++ b/src/include/fmgr.h
@@ -8,7 +8,7 @@
  * or call fmgr-callable functions.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/fmgr.h
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 04e43cc5e5..e88fee301f 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -3,7 +3,7 @@
  * fdwapi.h
  * API for foreign-data wrappers
  *
- * Copyright (c) 2010-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2010-2018, PostgreSQL Global Development Group
  *
  * src/include/foreign/fdwapi.h
  *
diff --git a/src/include/foreign/foreign.h b/src/include/foreign/foreign.h
index 2f4c569d1d..3ca12e64d2 100644
--- a/src/include/foreign/foreign.h
+++ b/src/include/foreign/foreign.h
@@ -4,7 +4,7 @@
  * support for foreign-data wrappers, servers and user mappings.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  *
  * src/include/foreign/foreign.h
  *
diff --git a/src/include/funcapi.h b/src/include/funcapi.h
index 223eef28d1..c2da2eb157 100644
--- a/src/include/funcapi.h
+++ b/src/include/funcapi.h
@@ -8,7 +8,7 @@
  * or call FUNCAPI-callable functions or macros.
  *
  *
- * Copyright (c) 2002-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2002-2018, PostgreSQL Global Development Group
  *
  * src/include/funcapi.h
  *
diff --git a/src/include/getaddrinfo.h b/src/include/getaddrinfo.h
index 3dcfc1fa25..1b460a3e5f 100644
--- a/src/include/getaddrinfo.h
+++ b/src/include/getaddrinfo.h
@@ -13,7 +13,7 @@
  * This code will also work on platforms where struct addrinfo is defined
  * in the system headers but no getaddrinfo() can be located.
  *
- * Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *
  * src/include/getaddrinfo.h
  *
diff --git a/src/include/getopt_long.h b/src/include/getopt_long.h
index c55d45348a..a9013b40f7 100644
--- a/src/include/getopt_long.h
+++ b/src/include/getopt_long.h
@@ -2,7 +2,7 @@
  * Portions Copyright (c) 1987, 1993, 1994
  * The Regents of the University of California. All rights reserved.
  *
- * Portions Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *
  * src/include/getopt_long.h
  */
diff --git a/src/include/lib/binaryheap.h b/src/include/lib/binaryheap.h
index da7504bd55..9399e0d60b 100644
--- a/src/include/lib/binaryheap.h
+++ b/src/include/lib/binaryheap.h
@@ -3,7 +3,7 @@
  *
  * A simple binary heap implementation
  *
- * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group
  *
  * src/include/lib/binaryheap.h
  */
diff --git a/src/include/lib/bipartite_match.h b/src/include/lib/bipartite_match.h
index 8f580bbd97..c184c0d38e 100644
--- a/src/include/lib/bipartite_match.h
+++ b/src/include/lib/bipartite_match.h
@@ -1,7 +1,7 @@
 /*
  * bipartite_match.h
  *
- * Copyright (c) 2015-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2015-2018, PostgreSQL Global Development Group
  *
  * src/include/lib/bipartite_match.h
  */
diff --git a/src/include/lib/dshash.h b/src/include/lib/dshash.h
index 220553c0d9..afee6516af 100644
--- a/src/include/lib/dshash.h
+++ b/src/include/lib/dshash.h
@@ -3,7 +3,7 @@
  * dshash.h
  * Concurrent hash tables backed by dynamic shared memory areas.
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/lib/hyperloglog.h b/src/include/lib/hyperloglog.h
index 7a249cd252..f735111f91 100644
--- a/src/include/lib/hyperloglog.h
+++ b/src/include/lib/hyperloglog.h
@@ -3,7 +3,7 @@
  *
  * A simple HyperLogLog cardinality estimator implementation
  *
- * Portions Copyright (c) 2014-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2014-2018, PostgreSQL Global Development Group
  *
  * Based on Hideaki Ohno's C++ implementation. The copyright terms of Ohno's
  * original version (the MIT license) follow.
diff --git a/src/include/lib/ilist.h b/src/include/lib/ilist.h
index e5ac5c218a..fc9d6b3ee4 100644
--- a/src/include/lib/ilist.h
+++ b/src/include/lib/ilist.h
@@ -96,7 +96,7 @@
  * }
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
diff --git a/src/include/lib/knapsack.h b/src/include/lib/knapsack.h
index 4485738e2a..f2a61675cb 100644
--- a/src/include/lib/knapsack.h
+++ b/src/include/lib/knapsack.h
@@ -1,7 +1,7 @@
 /*
  * knapsack.h
  *
- * Copyright (c) 2017, PostgreSQL Global Development Group
+ * Copyright (c) 2017-2018, PostgreSQL Global Development Group
  *
  * src/include/lib/knapsack.h
  */
diff --git a/src/include/lib/pairingheap.h b/src/include/lib/pairingheap.h
index e3a75f51f7..9d3de79601 100644
--- a/src/include/lib/pairingheap.h
+++ b/src/include/lib/pairingheap.h
@@ -3,7 +3,7 @@
  *
  * A Pairing Heap implementation
  *
- * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group
  *
  * src/include/lib/pairingheap.h
  */
diff --git a/src/include/lib/rbtree.h b/src/include/lib/rbtree.h
index a4288d4fc4..c46db47a75 100644
--- a/src/include/lib/rbtree.h
+++ b/src/include/lib/rbtree.h
@@ -3,7 +3,7 @@
  * rbtree.h
  * interface for PostgreSQL generic Red-Black binary tree package
  *
- * Copyright (c) 2009-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2009-2018, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
  * src/include/lib/rbtree.h
diff --git a/src/include/lib/stringinfo.h b/src/include/lib/stringinfo.h
index 906ae5e980..8551237fc6 100644
--- a/src/include/lib/stringinfo.h
+++ b/src/include/lib/stringinfo.h
@@ -7,7 +7,7 @@
  * It can be used to buffer either ordinary C strings (null-terminated text)
  * or arbitrary binary data. All storage is allocated with palloc().
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/lib/stringinfo.h
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 871cc03add..e8a1dc14ff 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -4,7 +4,7 @@
  * Definitions for network authentication routines
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/auth.h
diff --git a/src/include/libpq/be-fsstubs.h b/src/include/libpq/be-fsstubs.h
index e8107a2c9f..ed31e54323 100644
--- a/src/include/libpq/be-fsstubs.h
+++ b/src/include/libpq/be-fsstubs.h
@@ -4,7 +4,7 @@
  *
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/be-fsstubs.h
diff --git a/src/include/libpq/crypt.h b/src/include/libpq/crypt.h
index 9bad67c890..4279a4a9ed 100644
--- a/src/include/libpq/crypt.h
+++ b/src/include/libpq/crypt.h
@@ -3,7 +3,7 @@
  * crypt.h
  * Interface to libpq/crypt.c
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/crypt.h
diff --git a/src/include/libpq/ifaddr.h b/src/include/libpq/ifaddr.h
index be19ff8823..1d35597a7f 100644
--- a/src/include/libpq/ifaddr.h
+++ b/src/include/libpq/ifaddr.h
@@ -3,7 +3,7 @@
  * ifaddr.h
  * IP netmask calculations, and enumerating network interfaces.
  *
- * Copyright (c) 2003-2017, PostgreSQL Global Development Group
+ * Copyright (c) 2003-2018, PostgreSQL Global Development Group
  *
  * src/include/libpq/ifaddr.h
  *
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 856e0439d5..e660e8afa8 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -8,7 +8,7 @@
  * Structs that need to be client-visible are in pqcomm.h.
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/libpq-be.h
diff --git a/src/include/libpq/libpq-fs.h b/src/include/libpq/libpq-fs.h
index ce4b2a1892..e63d11ef19 100644
--- a/src/include/libpq/libpq-fs.h
+++ b/src/include/libpq/libpq-fs.h
@@ -4,7 +4,7 @@
  * definitions for using Inversion file system routines (ie, large objects)
  *
  *
- * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/libpq-fs.h
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index fd2dd5853c..2e7725db21 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -4,7 +4,7 @@
  * POSTGRES LIBPQ buffer structure definitions.
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/libpq/libpq.h diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h index 10c7434c41..cc0e0b32c7 100644 --- a/src/include/libpq/pqcomm.h +++ b/src/include/libpq/pqcomm.h @@ -6,7 +6,7 @@ * NOTE: for historical reasons, this does not correspond to pqcomm.c. * pqcomm.c's routines are declared in libpq.h. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/libpq/pqcomm.h diff --git a/src/include/libpq/pqformat.h b/src/include/libpq/pqformat.h index 9f56d184fd..57bde691e8 100644 --- a/src/include/libpq/pqformat.h +++ b/src/include/libpq/pqformat.h @@ -3,7 +3,7 @@ * pqformat.h * Definitions for formatting and parsing frontend/backend messages * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/libpq/pqformat.h diff --git a/src/include/libpq/pqmq.h b/src/include/libpq/pqmq.h index 86436d6753..e273656fde 100644 --- a/src/include/libpq/pqmq.h +++ b/src/include/libpq/pqmq.h @@ -3,7 +3,7 @@ * pqmq.h * Use the frontend/backend protocol for communication over a shm_mq * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/libpq/pqmq.h diff --git a/src/include/libpq/pqsignal.h b/src/include/libpq/pqsignal.h index af4e61ba4d..f292591dfc 100644 --- a/src/include/libpq/pqsignal.h +++ b/src/include/libpq/pqsignal.h @@ -3,7 +3,7 @@ * pqsignal.h * Backend signal(2) support (see also src/port/pqsignal.c) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/libpq/pqsignal.h diff --git a/src/include/libpq/scram.h b/src/include/libpq/scram.h index 2c245813d6..f43ce992c1 100644 --- a/src/include/libpq/scram.h +++ b/src/include/libpq/scram.h @@ -3,7 +3,7 @@ * scram.h * Interface to libpq/scram.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/libpq/scram.h diff --git a/src/include/mb/pg_wchar.h b/src/include/mb/pg_wchar.h index 9227d634f6..1d26cb51da 100644 --- a/src/include/mb/pg_wchar.h +++ b/src/include/mb/pg_wchar.h @@ -3,7 +3,7 @@ * pg_wchar.h * multibyte-character support * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/mb/pg_wchar.h diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h index 59da7a6091..54ee273747 100644 --- a/src/include/miscadmin.h +++ b/src/include/miscadmin.h @@ -10,7 +10,7 @@ * Over time, this has also become the preferred place for 
widely known * resource-limitation stuff, such as work_mem and check_stack_depth(). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/miscadmin.h diff --git a/src/include/nodes/bitmapset.h b/src/include/nodes/bitmapset.h index 3b62a97775..15397e9584 100644 --- a/src/include/nodes/bitmapset.h +++ b/src/include/nodes/bitmapset.h @@ -11,7 +11,7 @@ * bms_is_empty() in preference to testing for NULL.) * * - * Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Copyright (c) 2003-2018, PostgreSQL Global Development Group * * src/include/nodes/bitmapset.h * diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index bbc3ec3f3f..b121e16688 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -4,7 +4,7 @@ * definitions for executor state nodes * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/execnodes.h diff --git a/src/include/nodes/extensible.h b/src/include/nodes/extensible.h index c3436c7a4e..3f909fb459 100644 --- a/src/include/nodes/extensible.h +++ b/src/include/nodes/extensible.h @@ -4,7 +4,7 @@ * Definitions for extensible nodes and custom scans * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/extensible.h diff --git a/src/include/nodes/lockoptions.h b/src/include/nodes/lockoptions.h index e0981dac6f..24afd6efd4 100644 --- a/src/include/nodes/lockoptions.h +++ b/src/include/nodes/lockoptions.h @@ -4,7 +4,7 @@ * Common header for some locking-related declarations. * * - * Copyright (c) 2014-2017, PostgreSQL Global Development Group + * Copyright (c) 2014-2018, PostgreSQL Global Development Group * * src/include/nodes/lockoptions.h * diff --git a/src/include/nodes/makefuncs.h b/src/include/nodes/makefuncs.h index dd0d2ea07d..57bd52ff24 100644 --- a/src/include/nodes/makefuncs.h +++ b/src/include/nodes/makefuncs.h @@ -4,7 +4,7 @@ * prototypes for the creator functions (for primitive nodes) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/makefuncs.h diff --git a/src/include/nodes/memnodes.h b/src/include/nodes/memnodes.h index c7eb1e72e9..ccb64f01a7 100644 --- a/src/include/nodes/memnodes.h +++ b/src/include/nodes/memnodes.h @@ -4,7 +4,7 @@ * POSTGRES memory context node definitions. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/memnodes.h diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h index 3366983936..849f34d2a8 100644 --- a/src/include/nodes/nodeFuncs.h +++ b/src/include/nodes/nodeFuncs.h @@ -3,7 +3,7 @@ * nodeFuncs.h * Various general-purpose manipulations of Node trees * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/nodeFuncs.h diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h index c5b5115f5b..2eb3d6d371 100644 --- a/src/include/nodes/nodes.h +++ b/src/include/nodes/nodes.h @@ -4,7 +4,7 @@ * Definitions for tagged nodes. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/nodes.h diff --git a/src/include/nodes/params.h b/src/include/nodes/params.h index b198db5ee4..04b03c7303 100644 --- a/src/include/nodes/params.h +++ b/src/include/nodes/params.h @@ -4,7 +4,7 @@ * Support for finding the values associated with Param nodes. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/params.h diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 2eaa6b2774..b72178efd1 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -12,7 +12,7 @@ * identifying statement boundaries in multi-statement source strings. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/parsenodes.h diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h index 711db92576..e6cd2cdfba 100644 --- a/src/include/nodes/pg_list.h +++ b/src/include/nodes/pg_list.h @@ -27,7 +27,7 @@ * always be so; try to be careful to maintain the distinction.) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/pg_list.h diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h index d763da647b..74e9fb5f7b 100644 --- a/src/include/nodes/plannodes.h +++ b/src/include/nodes/plannodes.h @@ -4,7 +4,7 @@ * definitions for query plan nodes * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/plannodes.h diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h index 074ae0a865..1b4b0d75af 100644 --- a/src/include/nodes/primnodes.h +++ b/src/include/nodes/primnodes.h @@ -7,7 +7,7 @@ * and join trees. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/primnodes.h diff --git a/src/include/nodes/print.h b/src/include/nodes/print.h index fa01c2ad84..fba87c6c13 100644 --- a/src/include/nodes/print.h +++ b/src/include/nodes/print.h @@ -4,7 +4,7 @@ * definitions for nodes/print.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/print.h diff --git a/src/include/nodes/readfuncs.h b/src/include/nodes/readfuncs.h index c80fae2311..491e61c459 100644 --- a/src/include/nodes/readfuncs.h +++ b/src/include/nodes/readfuncs.h @@ -4,7 +4,7 @@ * header file for read.c and readfuncs.c. These functions are internal * to the stringToNode interface and should not be used by anyone else. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/readfuncs.h diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 3b9d303ce4..71689b8ed6 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -4,7 +4,7 @@ * Definitions for planner's internal data structures. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/relation.h diff --git a/src/include/nodes/replnodes.h b/src/include/nodes/replnodes.h index 2053ffabe0..66c948d85e 100644 --- a/src/include/nodes/replnodes.h +++ b/src/include/nodes/replnodes.h @@ -4,7 +4,7 @@ * definitions for replication grammar parse nodes * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/nodes/replnodes.h diff --git a/src/include/nodes/tidbitmap.h b/src/include/nodes/tidbitmap.h index d3ad0a5566..31532e9769 100644 --- a/src/include/nodes/tidbitmap.h +++ b/src/include/nodes/tidbitmap.h @@ -13,7 +13,7 @@ * fact that a particular page needs to be visited. * * - * Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Copyright (c) 2003-2018, PostgreSQL Global Development Group * * src/include/nodes/tidbitmap.h * diff --git a/src/include/nodes/value.h b/src/include/nodes/value.h index 83f5a19fe9..94d09e176e 100644 --- a/src/include/nodes/value.h +++ b/src/include/nodes/value.h @@ -4,7 +4,7 @@ * interface for Value nodes * * - * Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Copyright (c) 2003-2018, PostgreSQL Global Development Group * * src/include/nodes/value.h * diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h index e3672218f3..ba4fa4b68b 100644 --- a/src/include/optimizer/clauses.h +++ b/src/include/optimizer/clauses.h @@ -4,7 +4,7 @@ * prototypes for clauses.c. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/clauses.h diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h index 27afc2eaeb..d2fff76653 100644 --- a/src/include/optimizer/cost.h +++ b/src/include/optimizer/cost.h @@ -4,7 +4,7 @@ * prototypes for costsize.c and clausesel.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/cost.h diff --git a/src/include/optimizer/geqo.h b/src/include/optimizer/geqo.h index d0158d7adf..4ae4b6374a 100644 --- a/src/include/optimizer/geqo.h +++ b/src/include/optimizer/geqo.h @@ -3,7 +3,7 @@ * geqo.h * prototypes for various files in optimizer/geqo * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo.h diff --git a/src/include/optimizer/geqo_copy.h b/src/include/optimizer/geqo_copy.h index 4035b4ff13..f70786bef4 100644 --- a/src/include/optimizer/geqo_copy.h +++ b/src/include/optimizer/geqo_copy.h @@ -3,7 +3,7 @@ * geqo_copy.h * prototypes for copy functions in optimizer/geqo * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo_copy.h diff --git a/src/include/optimizer/geqo_gene.h b/src/include/optimizer/geqo_gene.h index 3282193284..3ddd268449 100644 --- a/src/include/optimizer/geqo_gene.h +++ b/src/include/optimizer/geqo_gene.h @@ -3,7 +3,7 @@ * geqo_gene.h * genome representation in optimizer/geqo * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo_gene.h diff --git a/src/include/optimizer/geqo_misc.h b/src/include/optimizer/geqo_misc.h index 89f91b3a4d..26a3669006 100644 --- a/src/include/optimizer/geqo_misc.h +++ b/src/include/optimizer/geqo_misc.h @@ -3,7 +3,7 @@ * geqo_misc.h * prototypes for printout routines in optimizer/geqo * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo_misc.h diff --git a/src/include/optimizer/geqo_mutation.h b/src/include/optimizer/geqo_mutation.h index c5a94e03cf..c9a0025523 100644 --- a/src/include/optimizer/geqo_mutation.h +++ b/src/include/optimizer/geqo_mutation.h @@ -3,7 +3,7 @@ * geqo_mutation.h * prototypes for mutation functions in optimizer/geqo * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo_mutation.h diff --git a/src/include/optimizer/geqo_pool.h b/src/include/optimizer/geqo_pool.h index 9fac307d69..eb343412f8 
100644 --- a/src/include/optimizer/geqo_pool.h +++ b/src/include/optimizer/geqo_pool.h @@ -3,7 +3,7 @@ * geqo_pool.h * pool representation in optimizer/geqo * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo_pool.h diff --git a/src/include/optimizer/geqo_random.h b/src/include/optimizer/geqo_random.h index 2665a096b3..03bd0ae8eb 100644 --- a/src/include/optimizer/geqo_random.h +++ b/src/include/optimizer/geqo_random.h @@ -3,7 +3,7 @@ * geqo_random.h * random number generator * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo_random.h diff --git a/src/include/optimizer/geqo_recombination.h b/src/include/optimizer/geqo_recombination.h index 60286c6c27..3ca89d8091 100644 --- a/src/include/optimizer/geqo_recombination.h +++ b/src/include/optimizer/geqo_recombination.h @@ -3,7 +3,7 @@ * geqo_recombination.h * prototypes for recombination in the genetic query optimizer * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo_recombination.h diff --git a/src/include/optimizer/geqo_selection.h b/src/include/optimizer/geqo_selection.h index 69ce2b4b8a..d6bea3c2bd 100644 --- a/src/include/optimizer/geqo_selection.h +++ b/src/include/optimizer/geqo_selection.h @@ -3,7 +3,7 @@ * geqo_selection.h * prototypes for selection routines in optimizer/geqo * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/geqo_selection.h diff --git a/src/include/optimizer/joininfo.h b/src/include/optimizer/joininfo.h index 4450d81408..48f6d625e2 100644 --- a/src/include/optimizer/joininfo.h +++ b/src/include/optimizer/joininfo.h @@ -4,7 +4,7 @@ * prototypes for joininfo.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/joininfo.h diff --git a/src/include/optimizer/orclauses.h b/src/include/optimizer/orclauses.h index a3d8e1d9f9..2154e66746 100644 --- a/src/include/optimizer/orclauses.h +++ b/src/include/optimizer/orclauses.h @@ -4,7 +4,7 @@ * prototypes for orclauses.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/orclauses.h diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h index 3ef12b323b..725694f570 100644 --- a/src/include/optimizer/pathnode.h +++ b/src/include/optimizer/pathnode.h @@ -4,7 +4,7 @@ * prototypes for pathnode.c, relnode.c. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/pathnode.h diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h index ea886b6501..0072b7aa0d 100644 --- a/src/include/optimizer/paths.h +++ b/src/include/optimizer/paths.h @@ -4,7 +4,7 @@ * prototypes for various files in optimizer/path * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/paths.h diff --git a/src/include/optimizer/placeholder.h b/src/include/optimizer/placeholder.h index a4a7b79f4d..91ebdb90fc 100644 --- a/src/include/optimizer/placeholder.h +++ b/src/include/optimizer/placeholder.h @@ -4,7 +4,7 @@ * prototypes for optimizer/util/placeholder.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/placeholder.h diff --git a/src/include/optimizer/plancat.h b/src/include/optimizer/plancat.h index 71f0faf938..7d53cbbb87 100644 --- a/src/include/optimizer/plancat.h +++ b/src/include/optimizer/plancat.h @@ -4,7 +4,7 @@ * prototypes for plancat.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/plancat.h diff --git a/src/include/optimizer/planmain.h b/src/include/optimizer/planmain.h index d6133228bd..7132c88242 100644 --- a/src/include/optimizer/planmain.h +++ b/src/include/optimizer/planmain.h @@ -4,7 +4,7 @@ * prototypes for various files in optimizer/plan * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/planmain.h diff --git a/src/include/optimizer/planner.h b/src/include/optimizer/planner.h index 2801bfdfbe..997b91fdf9 100644 --- a/src/include/optimizer/planner.h +++ b/src/include/optimizer/planner.h @@ -4,7 +4,7 @@ * prototypes for planner.c. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/planner.h diff --git a/src/include/optimizer/predtest.h b/src/include/optimizer/predtest.h index bccb8d63a9..88d2ca7e44 100644 --- a/src/include/optimizer/predtest.h +++ b/src/include/optimizer/predtest.h @@ -4,7 +4,7 @@ * prototypes for predtest.c * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/predtest.h diff --git a/src/include/optimizer/prep.h b/src/include/optimizer/prep.h index 804c9e3b8b..89b7ef337c 100644 --- a/src/include/optimizer/prep.h +++ b/src/include/optimizer/prep.h @@ -4,7 +4,7 @@ * prototypes for files in optimizer/prep/ * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/prep.h diff --git a/src/include/optimizer/restrictinfo.h b/src/include/optimizer/restrictinfo.h index b2a69998fd..9cd874d07e 100644 --- a/src/include/optimizer/restrictinfo.h +++ b/src/include/optimizer/restrictinfo.h @@ -4,7 +4,7 @@ * prototypes for restrictinfo.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/restrictinfo.h diff --git a/src/include/optimizer/subselect.h b/src/include/optimizer/subselect.h index ecd2011d54..d28c993b3a 100644 --- a/src/include/optimizer/subselect.h +++ b/src/include/optimizer/subselect.h @@ -2,7 +2,7 @@ * * subselect.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/subselect.h diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h index 0d3ec920dd..9fa52e1278 100644 --- a/src/include/optimizer/tlist.h +++ b/src/include/optimizer/tlist.h @@ -4,7 +4,7 @@ * prototypes for tlist.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/tlist.h diff --git a/src/include/optimizer/var.h b/src/include/optimizer/var.h index 61861528af..43c53b5344 100644 --- a/src/include/optimizer/var.h +++ b/src/include/optimizer/var.h @@ -4,7 +4,7 @@ * prototypes for optimizer/util/var.c. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/optimizer/var.h diff --git a/src/include/parser/analyze.h b/src/include/parser/analyze.h index 40f22339f1..687ae1b5b7 100644 --- a/src/include/parser/analyze.h +++ b/src/include/parser/analyze.h @@ -4,7 +4,7 @@ * parse analysis for optimizable statements * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/analyze.h diff --git a/src/include/parser/gramparse.h b/src/include/parser/gramparse.h index b6b67fb92c..42e7edee6d 100644 --- a/src/include/parser/gramparse.h +++ b/src/include/parser/gramparse.h @@ -8,7 +8,7 @@ * Definitions that are needed outside the core parser should be in parser.h. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/gramparse.h diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h index a932400058..26af944e03 100644 --- a/src/include/parser/kwlist.h +++ b/src/include/parser/kwlist.h @@ -7,7 +7,7 @@ * by the PG_KEYWORD macro, which is not defined in this file; it can * be defined by the caller for special purposes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/include/parser/parse_agg.h b/src/include/parser/parse_agg.h index 6947a0186e..0e6adffe57 100644 --- a/src/include/parser/parse_agg.h +++ b/src/include/parser/parse_agg.h @@ -3,7 +3,7 @@ * parse_agg.h * handle aggregates and window functions in parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_agg.h diff --git a/src/include/parser/parse_clause.h b/src/include/parser/parse_clause.h index 1d205c6327..2c0e092862 100644 --- a/src/include/parser/parse_clause.h +++ b/src/include/parser/parse_clause.h @@ -4,7 +4,7 @@ * handle clauses in parser * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_clause.h diff --git a/src/include/parser/parse_coerce.h b/src/include/parser/parse_coerce.h index e560f0c96e..af12c136ef 100644 --- a/src/include/parser/parse_coerce.h +++ b/src/include/parser/parse_coerce.h @@ -4,7 +4,7 @@ * Routines for type coercion. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_coerce.h diff --git a/src/include/parser/parse_collate.h b/src/include/parser/parse_collate.h index 7279fa4e7c..aea297ce38 100644 --- a/src/include/parser/parse_collate.h +++ b/src/include/parser/parse_collate.h @@ -4,7 +4,7 @@ * Routines for assigning collation information. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_collate.h diff --git a/src/include/parser/parse_cte.h b/src/include/parser/parse_cte.h index 695e88c7ed..7e862d1906 100644 --- a/src/include/parser/parse_cte.h +++ b/src/include/parser/parse_cte.h @@ -4,7 +4,7 @@ * handle CTEs (common table expressions) in parser * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_cte.h diff --git a/src/include/parser/parse_enr.h b/src/include/parser/parse_enr.h index 9c68ddbb04..e8af457e02 100644 --- a/src/include/parser/parse_enr.h +++ b/src/include/parser/parse_enr.h @@ -4,7 +4,7 @@ * Internal definitions for parser * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_enr.h diff --git a/src/include/parser/parse_expr.h b/src/include/parser/parse_expr.h index 3af09b0056..e5aff61b8f 100644 --- a/src/include/parser/parse_expr.h +++ b/src/include/parser/parse_expr.h @@ -3,7 +3,7 @@ * parse_expr.h * handle expressions in parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_expr.h diff --git a/src/include/parser/parse_func.h b/src/include/parser/parse_func.h index fccccd21ed..2e3810fc32 100644 --- a/src/include/parser/parse_func.h +++ b/src/include/parser/parse_func.h @@ -4,7 +4,7 @@ * * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_func.h diff --git a/src/include/parser/parse_node.h b/src/include/parser/parse_node.h index 565bb3dc6c..4e96fa7907 100644 --- a/src/include/parser/parse_node.h +++ b/src/include/parser/parse_node.h @@ -4,7 +4,7 @@ * Internal definitions for parser * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_node.h diff --git a/src/include/parser/parse_oper.h b/src/include/parser/parse_oper.h index 3cab732b1f..da9cd3877c 100644 --- a/src/include/parser/parse_oper.h +++ b/src/include/parser/parse_oper.h @@ -4,7 +4,7 @@ * handle operator things for parser * * - * Portions Copyright (c) 
1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_oper.h diff --git a/src/include/parser/parse_param.h b/src/include/parser/parse_param.h index 50c21cfc9d..fd6c695ebf 100644 --- a/src/include/parser/parse_param.h +++ b/src/include/parser/parse_param.h @@ -3,7 +3,7 @@ * parse_param.h * handle parameters in parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_param.h diff --git a/src/include/parser/parse_relation.h b/src/include/parser/parse_relation.h index 290f3b78cb..b9792acdae 100644 --- a/src/include/parser/parse_relation.h +++ b/src/include/parser/parse_relation.h @@ -4,7 +4,7 @@ * prototypes for parse_relation.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_relation.h diff --git a/src/include/parser/parse_target.h b/src/include/parser/parse_target.h index bb7b7b606b..ec6e0c102f 100644 --- a/src/include/parser/parse_target.h +++ b/src/include/parser/parse_target.h @@ -4,7 +4,7 @@ * handle target lists * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_target.h diff --git a/src/include/parser/parse_type.h b/src/include/parser/parse_type.h index af1e314b21..ab16737d57 100644 --- a/src/include/parser/parse_type.h +++ b/src/include/parser/parse_type.h @@ -3,7 +3,7 @@ * parse_type.h * handle type operations for parser * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_type.h diff --git a/src/include/parser/parse_utilcmd.h b/src/include/parser/parse_utilcmd.h index e749432ef0..a7f5e0caea 100644 --- a/src/include/parser/parse_utilcmd.h +++ b/src/include/parser/parse_utilcmd.h @@ -4,7 +4,7 @@ * parse analysis for utility commands * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parse_utilcmd.h diff --git a/src/include/parser/parser.h b/src/include/parser/parser.h index 8370df6bbb..f26f6e22e1 100644 --- a/src/include/parser/parser.h +++ b/src/include/parser/parser.h @@ -5,7 +5,7 @@ * * This is the external API for the raw lexing/parsing functions. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parser.h diff --git a/src/include/parser/parsetree.h b/src/include/parser/parsetree.h index 96cb2e2f67..dd9ae658ac 100644 --- a/src/include/parser/parsetree.h +++ b/src/include/parser/parsetree.h @@ -5,7 +5,7 @@ * parse trees. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/parsetree.h diff --git a/src/include/parser/scanner.h b/src/include/parser/scanner.h index bb95de730b..20bf1e6672 100644 --- a/src/include/parser/scanner.h +++ b/src/include/parser/scanner.h @@ -8,7 +8,7 @@ * higher-level API provided by parser.h. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/scanner.h diff --git a/src/include/parser/scansup.h b/src/include/parser/scansup.h index f9a36b5cbd..766b3f908a 100644 --- a/src/include/parser/scansup.h +++ b/src/include/parser/scansup.h @@ -4,7 +4,7 @@ * scanner support routines. used by both the bootstrap lexer * as well as the normal lexer * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/parser/scansup.h diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h index 6f2238b330..b309395f11 100644 --- a/src/include/pg_config_manual.h +++ b/src/include/pg_config_manual.h @@ -6,7 +6,7 @@ * for developers. If you edit any of these, be sure to do a *full* * rebuild (and an initdb if noted). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/pg_config_manual.h diff --git a/src/include/pg_getopt.h b/src/include/pg_getopt.h index 16d5a326f9..c050f2025f 100644 --- a/src/include/pg_getopt.h +++ b/src/include/pg_getopt.h @@ -2,7 +2,7 @@ * Portions Copyright (c) 1987, 1993, 1994 * The Regents of the University of California. All rights reserved. * - * Portions Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2003-2018, PostgreSQL Global Development Group * * src/include/pg_getopt.h */ diff --git a/src/include/pg_trace.h b/src/include/pg_trace.h index 29bc95d0d3..e00e39da4e 100644 --- a/src/include/pg_trace.h +++ b/src/include/pg_trace.h @@ -3,7 +3,7 @@ * * Definitions for the PostgreSQL tracing framework * - * Copyright (c) 2006-2017, PostgreSQL Global Development Group + * Copyright (c) 2006-2018, PostgreSQL Global Development Group * * src/include/pg_trace.h * ---------- diff --git a/src/include/pgstat.h b/src/include/pgstat.h index 58f3a19bc6..3d3c0b64fc 100644 --- a/src/include/pgstat.h +++ b/src/include/pgstat.h @@ -3,7 +3,7 @@ * * Definitions for the PostgreSQL statistics collector daemon. 
* - * Copyright (c) 2001-2017, PostgreSQL Global Development Group + * Copyright (c) 2001-2018, PostgreSQL Global Development Group * * src/include/pgstat.h * ---------- diff --git a/src/include/pgtar.h b/src/include/pgtar.h index 9a1be4c9f6..9eaa8bf15d 100644 --- a/src/include/pgtar.h +++ b/src/include/pgtar.h @@ -4,7 +4,7 @@ * Functions for manipulating tarfile datastructures (src/port/tar.c) * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/pgtar.h diff --git a/src/include/pgtime.h b/src/include/pgtime.h index 8a13d717e0..5f18b04b20 100644 --- a/src/include/pgtime.h +++ b/src/include/pgtime.h @@ -3,7 +3,7 @@ * pgtime.h * PostgreSQL internal timezone library * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/include/pgtime.h diff --git a/src/include/port.h b/src/include/port.h index f55085de3a..3e528fa172 100644 --- a/src/include/port.h +++ b/src/include/port.h @@ -3,7 +3,7 @@ * port.h * Header for src/port/ compatibility functions. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/port.h diff --git a/src/include/port/atomics.h b/src/include/port/atomics.h index 2bfd1ed728..8c825bd926 100644 --- a/src/include/port/atomics.h +++ b/src/include/port/atomics.h @@ -28,7 +28,7 @@ * For an introduction to using memory barriers within the PostgreSQL backend, * see src/backend/storage/lmgr/README.barrier * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/port/atomics.h diff --git a/src/include/port/atomics/arch-arm.h b/src/include/port/atomics/arch-arm.h index 58614ae2ca..99fe3bbfd2 100644 --- a/src/include/port/atomics/arch-arm.h +++ b/src/include/port/atomics/arch-arm.h @@ -3,7 +3,7 @@ * arch-arm.h * Atomic operations considerations specific to ARM * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * * NOTES: * diff --git a/src/include/port/atomics/arch-hppa.h b/src/include/port/atomics/arch-hppa.h index c4176c6f42..818a3e0c87 100644 --- a/src/include/port/atomics/arch-hppa.h +++ b/src/include/port/atomics/arch-hppa.h @@ -3,7 +3,7 @@ * arch-hppa.h * Atomic operations considerations specific to HPPA * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * NOTES: diff --git a/src/include/port/atomics/arch-ia64.h b/src/include/port/atomics/arch-ia64.h index 3dc4b298e1..571a79ee67 100644 --- a/src/include/port/atomics/arch-ia64.h +++ b/src/include/port/atomics/arch-ia64.h @@ -3,7 +3,7 @@ * arch-ia64.h * Atomic operations considerations specific to intel itanium * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University 
of California * * NOTES: diff --git a/src/include/port/atomics/arch-ppc.h b/src/include/port/atomics/arch-ppc.h index ed30468398..faeb7a5d15 100644 --- a/src/include/port/atomics/arch-ppc.h +++ b/src/include/port/atomics/arch-ppc.h @@ -3,7 +3,7 @@ * arch-ppc.h * Atomic operations considerations specific to PowerPC * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * NOTES: diff --git a/src/include/port/atomics/arch-x86.h b/src/include/port/atomics/arch-x86.h index bf8152573d..5a3d95e056 100644 --- a/src/include/port/atomics/arch-x86.h +++ b/src/include/port/atomics/arch-x86.h @@ -7,7 +7,7 @@ * support for xadd and cmpxchg. Given that the 386 isn't supported anywhere * anymore that's not much of a restriction luckily. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * NOTES: diff --git a/src/include/port/atomics/fallback.h b/src/include/port/atomics/fallback.h index 4e07add0a4..7b9dcad807 100644 --- a/src/include/port/atomics/fallback.h +++ b/src/include/port/atomics/fallback.h @@ -4,7 +4,7 @@ * Fallback for platforms without spinlock and/or atomics support. Slower * than native atomics support, but not unusably slow. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/port/atomics/fallback.h diff --git a/src/include/port/atomics/generic-acc.h b/src/include/port/atomics/generic-acc.h index 4ea3ed3fc7..bd6caaf817 100644 --- a/src/include/port/atomics/generic-acc.h +++ b/src/include/port/atomics/generic-acc.h @@ -3,7 +3,7 @@ * generic-acc.h * Atomic operations support when using HPs acc on HPUX * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * NOTES: diff --git a/src/include/port/atomics/generic-gcc.h b/src/include/port/atomics/generic-gcc.h index e6871646a7..3a1dc88377 100644 --- a/src/include/port/atomics/generic-gcc.h +++ b/src/include/port/atomics/generic-gcc.h @@ -3,7 +3,7 @@ * generic-gcc.h * Atomic operations, implemented using gcc (or compatible) intrinsics. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * NOTES: diff --git a/src/include/port/atomics/generic-msvc.h b/src/include/port/atomics/generic-msvc.h index f82cfec8ec..59211c2203 100644 --- a/src/include/port/atomics/generic-msvc.h +++ b/src/include/port/atomics/generic-msvc.h @@ -3,7 +3,7 @@ * generic-msvc.h * Atomic operations support when using MSVC * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * NOTES: diff --git a/src/include/port/atomics/generic-sunpro.h b/src/include/port/atomics/generic-sunpro.h index a58e8e3bad..e903243b68 100644 --- a/src/include/port/atomics/generic-sunpro.h +++ b/src/include/port/atomics/generic-sunpro.h @@ -3,7 +3,7 @@ * generic-sunpro.h * Atomic operations for solaris' CC * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * * NOTES: * diff --git a/src/include/port/atomics/generic-xlc.h b/src/include/port/atomics/generic-xlc.h index f854612d39..f18207568c 100644 --- a/src/include/port/atomics/generic-xlc.h +++ b/src/include/port/atomics/generic-xlc.h @@ -3,7 +3,7 @@ * generic-xlc.h * Atomic operations for IBM's CC * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * * NOTES: * diff --git a/src/include/port/atomics/generic.h b/src/include/port/atomics/generic.h index c7566919de..ea11698a35 100644 --- a/src/include/port/atomics/generic.h +++ b/src/include/port/atomics/generic.h @@ -4,7 +4,7 @@ * Implement higher level operations based on some lower level atomic * operations. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/port/atomics/generic.h diff --git a/src/include/port/pg_bswap.h b/src/include/port/pg_bswap.h index 0b2dcf52cb..0f09c05d6f 100644 --- a/src/include/port/pg_bswap.h +++ b/src/include/port/pg_bswap.h @@ -11,7 +11,7 @@ * return the same. Use caution when using these wrapper macros with signed * integers. * - * Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Copyright (c) 2015-2018, PostgreSQL Global Development Group * * src/include/port/pg_bswap.h * diff --git a/src/include/port/pg_crc32c.h b/src/include/port/pg_crc32c.h index 32d7176273..ae2701e958 100644 --- a/src/include/port/pg_crc32c.h +++ b/src/include/port/pg_crc32c.h @@ -23,7 +23,7 @@ * EQ_CRC32C(c1, c2) * Check for equality of two CRCs. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/port/pg_crc32c.h diff --git a/src/include/port/win32_port.h b/src/include/port/win32_port.h index d8a6392588..d31c28f7d4 100644 --- a/src/include/port/win32_port.h +++ b/src/include/port/win32_port.h @@ -6,7 +6,7 @@ * Note this is read in MinGW as well as native Windows builds, * but not in Cygwin builds. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/port/win32_port.h diff --git a/src/include/portability/instr_time.h b/src/include/portability/instr_time.h index b6e8c58d44..f968444671 100644 --- a/src/include/portability/instr_time.h +++ b/src/include/portability/instr_time.h @@ -43,7 +43,7 @@ * Beware of multiple evaluations of the macro arguments. * * - * Copyright (c) 2001-2017, PostgreSQL Global Development Group + * Copyright (c) 2001-2018, PostgreSQL Global Development Group * * src/include/portability/instr_time.h * diff --git a/src/include/portability/mem.h b/src/include/portability/mem.h index fa507c2581..ba183f5368 100644 --- a/src/include/portability/mem.h +++ b/src/include/portability/mem.h @@ -3,7 +3,7 @@ * mem.h * portability definitions for various memory operations * - * Copyright (c) 2001-2017, PostgreSQL Global Development Group + * Copyright (c) 2001-2018, PostgreSQL Global Development Group * * src/include/portability/mem.h * diff --git a/src/include/postgres.h b/src/include/postgres.h index 1ca9b60ea1..b69f88aa5b 100644 --- a/src/include/postgres.h +++ b/src/include/postgres.h @@ -7,7 +7,7 @@ * Client-side code should include postgres_fe.h instead. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1995, Regents of the University of California * * src/include/postgres.h diff --git a/src/include/postgres_fe.h b/src/include/postgres_fe.h index 1dd01d0283..14567953f2 100644 --- a/src/include/postgres_fe.h +++ b/src/include/postgres_fe.h @@ -8,7 +8,7 @@ * postgres.h. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1995, Regents of the University of California * * src/include/postgres_fe.h diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h index 3469915ae2..18cff540b7 100644 --- a/src/include/postmaster/autovacuum.h +++ b/src/include/postmaster/autovacuum.h @@ -4,7 +4,7 @@ * header file for integrated autovacuum daemon * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/postmaster/autovacuum.h diff --git a/src/include/postmaster/bgworker.h b/src/include/postmaster/bgworker.h index b6c5800cfe..0c04529f47 100644 --- a/src/include/postmaster/bgworker.h +++ b/src/include/postmaster/bgworker.h @@ -31,7 +31,7 @@ * different) code. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/include/postmaster/bgworker_internals.h b/src/include/postmaster/bgworker_internals.h index bff10ba53c..548dc1c146 100644 --- a/src/include/postmaster/bgworker_internals.h +++ b/src/include/postmaster/bgworker_internals.h @@ -2,7 +2,7 @@ * bgworker_internals.h * POSTGRES pluggable background workers internals * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/include/postmaster/bgwriter.h b/src/include/postmaster/bgwriter.h index a9b8bc7e8e..941c6aba7d 100644 --- a/src/include/postmaster/bgwriter.h +++ b/src/include/postmaster/bgwriter.h @@ -6,7 +6,7 @@ * The bgwriter process used to handle checkpointing duties too. Now * there is a separate process, but we did not bother to split this header. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/postmaster/bgwriter.h * diff --git a/src/include/postmaster/fork_process.h b/src/include/postmaster/fork_process.h index 9d5b97aca7..d552e0297c 100644 --- a/src/include/postmaster/fork_process.h +++ b/src/include/postmaster/fork_process.h @@ -3,7 +3,7 @@ * fork_process.h * Exports from postmaster/fork_process.c. * - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/postmaster/fork_process.h * diff --git a/src/include/postmaster/pgarch.h b/src/include/postmaster/pgarch.h index ab1ddcf52c..292e63a26a 100644 --- a/src/include/postmaster/pgarch.h +++ b/src/include/postmaster/pgarch.h @@ -3,7 +3,7 @@ * pgarch.h * Exports from postmaster/pgarch.c. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/postmaster/pgarch.h diff --git a/src/include/postmaster/postmaster.h b/src/include/postmaster/postmaster.h index f5b863e710..1877eef239 100644 --- a/src/include/postmaster/postmaster.h +++ b/src/include/postmaster/postmaster.h @@ -3,7 +3,7 @@ * postmaster.h * Exports from postmaster/postmaster.c. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/postmaster/postmaster.h diff --git a/src/include/postmaster/startup.h b/src/include/postmaster/startup.h index 883bd395bd..1bef7239b2 100644 --- a/src/include/postmaster/startup.h +++ b/src/include/postmaster/startup.h @@ -3,7 +3,7 @@ * startup.h * Exports from postmaster/startup.c. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/postmaster/startup.h * diff --git a/src/include/postmaster/syslogger.h b/src/include/postmaster/syslogger.h index f4248ef5d4..b35fadc1bd 100644 --- a/src/include/postmaster/syslogger.h +++ b/src/include/postmaster/syslogger.h @@ -3,7 +3,7 @@ * syslogger.h * Exports from postmaster/syslogger.c. * - * Copyright (c) 2004-2017, PostgreSQL Global Development Group + * Copyright (c) 2004-2018, PostgreSQL Global Development Group * * src/include/postmaster/syslogger.h * diff --git a/src/include/postmaster/walwriter.h b/src/include/postmaster/walwriter.h index f0eb425802..26ec256b96 100644 --- a/src/include/postmaster/walwriter.h +++ b/src/include/postmaster/walwriter.h @@ -3,7 +3,7 @@ * walwriter.h * Exports from postmaster/walwriter.c. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/postmaster/walwriter.h * diff --git a/src/include/regex/regexport.h b/src/include/regex/regexport.h index 1c84d15d55..ff6ff3d791 100644 --- a/src/include/regex/regexport.h +++ b/src/include/regex/regexport.h @@ -17,7 +17,7 @@ * line and start/end of string. Colors are numbered 0..C-1, but note that * color 0 is "white" (all unused characters) and can generally be ignored. * - * Portions Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2013-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1998, 1999 Henry Spencer * * IDENTIFICATION diff --git a/src/include/replication/basebackup.h b/src/include/replication/basebackup.h index 1a165c860b..941d07b99f 100644 --- a/src/include/replication/basebackup.h +++ b/src/include/replication/basebackup.h @@ -3,7 +3,7 @@ * basebackup.h * Exports from replication/basebackup.c. 
* - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * src/include/replication/basebackup.h * diff --git a/src/include/replication/decode.h b/src/include/replication/decode.h index f9d81d77d0..0e63c0b296 100644 --- a/src/include/replication/decode.h +++ b/src/include/replication/decode.h @@ -2,7 +2,7 @@ * decode.h * PostgreSQL WAL to logical transformation * - * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/include/replication/logical.h b/src/include/replication/logical.h index 7f0e0fa881..d9059e1cca 100644 --- a/src/include/replication/logical.h +++ b/src/include/replication/logical.h @@ -2,7 +2,7 @@ * logical.h * PostgreSQL logical decoding coordination * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/include/replication/logicalfuncs.h b/src/include/replication/logicalfuncs.h index 71faee18cf..4236b40bc6 100644 --- a/src/include/replication/logicalfuncs.h +++ b/src/include/replication/logicalfuncs.h @@ -2,7 +2,7 @@ * logicalfuncs.h * PostgreSQL WAL to logical transformation support functions * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/include/replication/logicallauncher.h b/src/include/replication/logicallauncher.h index 78016c448f..ef02512412 100644 --- a/src/include/replication/logicallauncher.h +++ b/src/include/replication/logicallauncher.h @@ -3,7 +3,7 @@ * logicallauncher.h * Exports for logical replication launcher. * - * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group * * src/include/replication/logicallauncher.h * diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h index a9736e1bf6..0eb21057c5 100644 --- a/src/include/replication/logicalproto.h +++ b/src/include/replication/logicalproto.h @@ -3,7 +3,7 @@ * logicalproto.h * logical replication protocol * - * Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Copyright (c) 2015-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/include/replication/logicalproto.h diff --git a/src/include/replication/logicalrelation.h b/src/include/replication/logicalrelation.h index 8352705650..d4250c2608 100644 --- a/src/include/replication/logicalrelation.h +++ b/src/include/replication/logicalrelation.h @@ -3,7 +3,7 @@ * logicalrelation.h * Relation definitions for logical replication relation mapping. * - * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group * * src/include/replication/logicalrelation.h * diff --git a/src/include/replication/logicalworker.h b/src/include/replication/logicalworker.h index 2557d5a23b..6379aae7ce 100644 --- a/src/include/replication/logicalworker.h +++ b/src/include/replication/logicalworker.h @@ -3,7 +3,7 @@ * logicalworker.h * Exports for logical replication workers. 
* - * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group * * src/include/replication/logicalworker.h * diff --git a/src/include/replication/message.h b/src/include/replication/message.h index 1f2d0bb535..37e1bd32d6 100644 --- a/src/include/replication/message.h +++ b/src/include/replication/message.h @@ -2,7 +2,7 @@ * message.h * Exports from replication/logical/message.c * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * src/include/replication/message.h *------------------------------------------------------------------------- diff --git a/src/include/replication/origin.h b/src/include/replication/origin.h index a9595c3c3d..34fc01207d 100644 --- a/src/include/replication/origin.h +++ b/src/include/replication/origin.h @@ -2,7 +2,7 @@ * origin.h * Exports from replication/logical/origin.c * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * src/include/replication/origin.h *------------------------------------------------------------------------- diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h index 26ff024882..78fd38bb16 100644 --- a/src/include/replication/output_plugin.h +++ b/src/include/replication/output_plugin.h @@ -2,7 +2,7 @@ * output_plugin.h * PostgreSQL Logical Decode Plugin Interface * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h index 1ec4500d9d..29178da1d1 100644 --- a/src/include/replication/pgoutput.h +++ b/src/include/replication/pgoutput.h @@ -3,7 +3,7 @@ * pgoutput.h * Logical Replication output plugin * - * Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Copyright (c) 2015-2018, PostgreSQL Global Development Group * * IDENTIFICATION * pgoutput.h diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h index b18ce5a9df..f52a88dcd6 100644 --- a/src/include/replication/reorderbuffer.h +++ b/src/include/replication/reorderbuffer.h @@ -2,7 +2,7 @@ * reorderbuffer.h * PostgreSQL logical replay/reorder buffer management. * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * src/include/replication/reorderbuffer.h */ diff --git a/src/include/replication/slot.h b/src/include/replication/slot.h index 0c442330b2..76a88c6de7 100644 --- a/src/include/replication/slot.h +++ b/src/include/replication/slot.h @@ -2,7 +2,7 @@ * slot.h * Replication slot management. * - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ diff --git a/src/include/replication/snapbuild.h b/src/include/replication/snapbuild.h index 7653717f83..56257430ae 100644 --- a/src/include/replication/snapbuild.h +++ b/src/include/replication/snapbuild.h @@ -3,7 +3,7 @@ * snapbuild.h * Exports from replication/logical/snapbuild.c. 
* - * Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Copyright (c) 2012-2018, PostgreSQL Global Development Group * * src/include/replication/snapbuild.h * diff --git a/src/include/replication/syncrep.h b/src/include/replication/syncrep.h index ceafe2cbea..bc43b4e109 100644 --- a/src/include/replication/syncrep.h +++ b/src/include/replication/syncrep.h @@ -3,7 +3,7 @@ * syncrep.h * Exports from replication/syncrep.c. * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/include/replication/syncrep.h diff --git a/src/include/replication/walreceiver.h b/src/include/replication/walreceiver.h index e58fc49c68..ea7967f6fc 100644 --- a/src/include/replication/walreceiver.h +++ b/src/include/replication/walreceiver.h @@ -3,7 +3,7 @@ * walreceiver.h * Exports from replication/walreceiverfuncs.c. * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * src/include/replication/walreceiver.h * diff --git a/src/include/replication/walsender.h b/src/include/replication/walsender.h index 1f20db827a..45b72a76db 100644 --- a/src/include/replication/walsender.h +++ b/src/include/replication/walsender.h @@ -3,7 +3,7 @@ * walsender.h * Exports from replication/walsender.c. * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * src/include/replication/walsender.h * diff --git a/src/include/replication/walsender_private.h b/src/include/replication/walsender_private.h index 17c68cba23..4b90477936 100644 --- a/src/include/replication/walsender_private.h +++ b/src/include/replication/walsender_private.h @@ -3,7 +3,7 @@ * walsender_private.h * Private definitions from replication/walsender.c. * - * Portions Copyright (c) 2010-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2010-2018, PostgreSQL Global Development Group * * src/include/replication/walsender_private.h * diff --git a/src/include/replication/worker_internal.h b/src/include/replication/worker_internal.h index 7b8728cced..1ce3b6b058 100644 --- a/src/include/replication/worker_internal.h +++ b/src/include/replication/worker_internal.h @@ -3,7 +3,7 @@ * worker_internal.h * Internal headers shared by logical replication workers. 
* - * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group * * src/include/replication/worker_internal.h * diff --git a/src/include/rewrite/prs2lock.h b/src/include/rewrite/prs2lock.h index 419d140bd5..9e7c87a8c1 100644 --- a/src/include/rewrite/prs2lock.h +++ b/src/include/rewrite/prs2lock.h @@ -3,7 +3,7 @@ * prs2lock.h * data structures for POSTGRES Rule System II (rewrite rules only) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/rewrite/prs2lock.h diff --git a/src/include/rewrite/rewriteDefine.h b/src/include/rewrite/rewriteDefine.h index b496a0c154..4ceaaffee7 100644 --- a/src/include/rewrite/rewriteDefine.h +++ b/src/include/rewrite/rewriteDefine.h @@ -4,7 +4,7 @@ * * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/rewrite/rewriteDefine.h diff --git a/src/include/rewrite/rewriteHandler.h b/src/include/rewrite/rewriteHandler.h index 86ae571eb1..8128199fc3 100644 --- a/src/include/rewrite/rewriteHandler.h +++ b/src/include/rewrite/rewriteHandler.h @@ -4,7 +4,7 @@ * External interface to query rewriter. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/rewrite/rewriteHandler.h diff --git a/src/include/rewrite/rewriteManip.h b/src/include/rewrite/rewriteManip.h index f0a7a8b2cd..f0299bc703 100644 --- a/src/include/rewrite/rewriteManip.h +++ b/src/include/rewrite/rewriteManip.h @@ -4,7 +4,7 @@ * Querytree manipulation subroutines for query rewriter. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/rewrite/rewriteManip.h diff --git a/src/include/rewrite/rewriteRemove.h b/src/include/rewrite/rewriteRemove.h index d7d53b0137..351e26c307 100644 --- a/src/include/rewrite/rewriteRemove.h +++ b/src/include/rewrite/rewriteRemove.h @@ -4,7 +4,7 @@ * * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/rewrite/rewriteRemove.h diff --git a/src/include/rewrite/rewriteSupport.h b/src/include/rewrite/rewriteSupport.h index 60800aae25..f34bfda05f 100644 --- a/src/include/rewrite/rewriteSupport.h +++ b/src/include/rewrite/rewriteSupport.h @@ -4,7 +4,7 @@ * * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/rewrite/rewriteSupport.h diff --git a/src/include/rewrite/rowsecurity.h b/src/include/rewrite/rowsecurity.h index 0dbc1a4bee..f2f5251c71 100644 --- a/src/include/rewrite/rowsecurity.h +++ b/src/include/rewrite/rowsecurity.h @@ -5,7 +5,7 @@ * prototypes for rewrite/rowsecurity.c and the structures for managing * the row security policies for relations in relcache. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * ------------------------------------------------------------------------- diff --git a/src/include/rusagestub.h b/src/include/rusagestub.h index f54f66e6f3..1423e2699e 100644 --- a/src/include/rusagestub.h +++ b/src/include/rusagestub.h @@ -4,7 +4,7 @@ * Stubs for getrusage(3). * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/rusagestub.h diff --git a/src/include/snowball/header.h b/src/include/snowball/header.h index 10acf4c53d..c0450ae4eb 100644 --- a/src/include/snowball/header.h +++ b/src/include/snowball/header.h @@ -13,7 +13,7 @@ * * NOTE: this file should not be included into any non-snowball sources! 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/snowball/header.h * diff --git a/src/include/statistics/extended_stats_internal.h b/src/include/statistics/extended_stats_internal.h index 738ff3fadc..b3ca0c1229 100644 --- a/src/include/statistics/extended_stats_internal.h +++ b/src/include/statistics/extended_stats_internal.h @@ -3,7 +3,7 @@ * extended_stats_internal.h * POSTGRES extended statistics internal declarations * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/include/statistics/statistics.h b/src/include/statistics/statistics.h index 1d68c39df0..8009fee322 100644 --- a/src/include/statistics/statistics.h +++ b/src/include/statistics/statistics.h @@ -3,7 +3,7 @@ * statistics.h * Extended statistics and selectivity estimation functions. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/statistics/statistics.h diff --git a/src/include/storage/backendid.h b/src/include/storage/backendid.h index bf31ba4f48..e000bb848c 100644 --- a/src/include/storage/backendid.h +++ b/src/include/storage/backendid.h @@ -4,7 +4,7 @@ * POSTGRES backend id communication definitions * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/backendid.h diff --git a/src/include/storage/barrier.h b/src/include/storage/barrier.h index 0abaa2b98b..8b5a3751ea 100644 --- a/src/include/storage/barrier.h +++ b/src/include/storage/barrier.h @@ -3,7 +3,7 @@ * barrier.h * Barriers for synchronizing cooperating processes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/barrier.h diff --git a/src/include/storage/block.h b/src/include/storage/block.h index 33840798a8..e2bfa11e37 100644 --- a/src/include/storage/block.h +++ b/src/include/storage/block.h @@ -4,7 +4,7 @@ * POSTGRES disk block definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/block.h diff --git a/src/include/storage/buf.h b/src/include/storage/buf.h index 054f482bd7..3b89983b42 100644 --- a/src/include/storage/buf.h +++ b/src/include/storage/buf.h @@ -4,7 +4,7 @@ * Basic buffer manager data types. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/buf.h diff --git a/src/include/storage/buf_internals.h b/src/include/storage/buf_internals.h index 300adfcf9e..5370035f0c 100644 --- a/src/include/storage/buf_internals.h +++ b/src/include/storage/buf_internals.h @@ -5,7 +5,7 @@ * strategy. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/buf_internals.h diff --git a/src/include/storage/buffile.h b/src/include/storage/buffile.h index c3d7a61b64..a3df056a61 100644 --- a/src/include/storage/buffile.h +++ b/src/include/storage/buffile.h @@ -15,7 +15,7 @@ * but currently we have no need for oversize temp files without buffered * access. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/buffile.h diff --git a/src/include/storage/bufmgr.h b/src/include/storage/bufmgr.h index 98b63fc5ba..3cce3906a0 100644 --- a/src/include/storage/bufmgr.h +++ b/src/include/storage/bufmgr.h @@ -4,7 +4,7 @@ * POSTGRES buffer manager definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/bufmgr.h diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h index 50c72a3c8d..85dd10c45a 100644 --- a/src/include/storage/bufpage.h +++ b/src/include/storage/bufpage.h @@ -4,7 +4,7 @@ * Standard POSTGRES buffer page definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/bufpage.h diff --git a/src/include/storage/checksum.h b/src/include/storage/checksum.h index b85f714712..433755e279 100644 --- a/src/include/storage/checksum.h +++ b/src/include/storage/checksum.h @@ -3,7 +3,7 @@ * checksum.h * Checksum implementation for data pages. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/checksum.h diff --git a/src/include/storage/checksum_impl.h b/src/include/storage/checksum_impl.h index bffd061de8..64d76229ad 100644 --- a/src/include/storage/checksum_impl.h +++ b/src/include/storage/checksum_impl.h @@ -8,7 +8,7 @@ * referenced by storage/checksum.h. (Note: you may need to redefine * Assert() as empty to compile this successfully externally.) 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/checksum_impl.h diff --git a/src/include/storage/condition_variable.h b/src/include/storage/condition_variable.h index f77c0b22ad..f9f93e0d4a 100644 --- a/src/include/storage/condition_variable.h +++ b/src/include/storage/condition_variable.h @@ -12,7 +12,7 @@ * can be cancelled prior to the fulfillment of the condition) and do not * use pointers internally (so that they are safe to use within DSMs). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/condition_variable.h diff --git a/src/include/storage/copydir.h b/src/include/storage/copydir.h index f88a044509..4fef3e2107 100644 --- a/src/include/storage/copydir.h +++ b/src/include/storage/copydir.h @@ -3,7 +3,7 @@ * copydir.h * Copy a directory. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/copydir.h diff --git a/src/include/storage/dsm.h b/src/include/storage/dsm.h index 31b1f4da9c..169de946f7 100644 --- a/src/include/storage/dsm.h +++ b/src/include/storage/dsm.h @@ -3,7 +3,7 @@ * dsm.h * manage dynamic shared memory segments * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/dsm.h diff --git a/src/include/storage/dsm_impl.h b/src/include/storage/dsm_impl.h index c2060431ba..0e5730f7c5 100644 --- a/src/include/storage/dsm_impl.h +++ b/src/include/storage/dsm_impl.h @@ -3,7 +3,7 @@ * dsm_impl.h * low-level dynamic shared memory primitives * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/dsm_impl.h diff --git a/src/include/storage/fd.h b/src/include/storage/fd.h index dc2eb35f06..db5ca16679 100644 --- a/src/include/storage/fd.h +++ b/src/include/storage/fd.h @@ -4,7 +4,7 @@ * Virtual file descriptor definitions. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/fd.h diff --git a/src/include/storage/freespace.h b/src/include/storage/freespace.h index d110f006af..a517d7ec43 100644 --- a/src/include/storage/freespace.h +++ b/src/include/storage/freespace.h @@ -4,7 +4,7 @@ * POSTGRES free space map for quickly finding free space in relations * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/freespace.h diff --git a/src/include/storage/fsm_internals.h b/src/include/storage/fsm_internals.h index 722e649123..d6b8187861 100644 --- a/src/include/storage/fsm_internals.h +++ b/src/include/storage/fsm_internals.h @@ -4,7 +4,7 @@ * internal functions for free space map * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/fsm_internals.h diff --git a/src/include/storage/indexfsm.h b/src/include/storage/indexfsm.h index f8045f0df8..07cbef4106 100644 --- a/src/include/storage/indexfsm.h +++ b/src/include/storage/indexfsm.h @@ -4,7 +4,7 @@ * POSTGRES free space map for quickly finding an unused page in index * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/indexfsm.h diff --git a/src/include/storage/ipc.h b/src/include/storage/ipc.h index bde635f502..e934a83a1c 100644 --- a/src/include/storage/ipc.h +++ b/src/include/storage/ipc.h @@ -8,7 +8,7 @@ * exit-time cleanup for either a postmaster or a backend. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/ipc.h diff --git a/src/include/storage/item.h b/src/include/storage/item.h index 72426a2d48..44a52fbdfb 100644 --- a/src/include/storage/item.h +++ b/src/include/storage/item.h @@ -4,7 +4,7 @@ * POSTGRES disk item definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/item.h diff --git a/src/include/storage/itemid.h b/src/include/storage/itemid.h index 2ec86b57fc..60570cc53d 100644 --- a/src/include/storage/itemid.h +++ b/src/include/storage/itemid.h @@ -4,7 +4,7 @@ * Standard POSTGRES buffer page item identifier definitions. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/itemid.h diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h index 8f8e22444a..6c9ed3696b 100644 --- a/src/include/storage/itemptr.h +++ b/src/include/storage/itemptr.h @@ -4,7 +4,7 @@ * POSTGRES disk item pointer definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/itemptr.h diff --git a/src/include/storage/large_object.h b/src/include/storage/large_object.h index 01d0985b44..a234d5cf8e 100644 --- a/src/include/storage/large_object.h +++ b/src/include/storage/large_object.h @@ -5,7 +5,7 @@ * zillions of large objects (internal, external, jaquith, inversion). * Now we only support inversion. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/large_object.h diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h index a43193c916..a4bcb48874 100644 --- a/src/include/storage/latch.h +++ b/src/include/storage/latch.h @@ -90,7 +90,7 @@ * efficient than using WaitLatch or WaitLatchOrSocket. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/latch.h diff --git a/src/include/storage/lmgr.h b/src/include/storage/lmgr.h index 0b923227a2..a217de9716 100644 --- a/src/include/storage/lmgr.h +++ b/src/include/storage/lmgr.h @@ -4,7 +4,7 @@ * POSTGRES lock manager definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/lmgr.h diff --git a/src/include/storage/lock.h b/src/include/storage/lock.h index 765431e299..777da71679 100644 --- a/src/include/storage/lock.h +++ b/src/include/storage/lock.h @@ -4,7 +4,7 @@ * POSTGRES low-level lock mechanism * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/lock.h diff --git a/src/include/storage/lockdefs.h b/src/include/storage/lockdefs.h index fe9f7cb310..72eca39f17 100644 --- a/src/include/storage/lockdefs.h +++ b/src/include/storage/lockdefs.h @@ -7,7 +7,7 @@ * contains definition that have to (indirectly) be available when included by * FRONTEND code. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/lockdefs.h diff --git a/src/include/storage/lwlock.h b/src/include/storage/lwlock.h index 97e4a0bbbd..c21bfe2f66 100644 --- a/src/include/storage/lwlock.h +++ b/src/include/storage/lwlock.h @@ -4,7 +4,7 @@ * Lightweight lock manager * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/lwlock.h diff --git a/src/include/storage/off.h b/src/include/storage/off.h index 7228808b94..6179f2f854 100644 --- a/src/include/storage/off.h +++ b/src/include/storage/off.h @@ -4,7 +4,7 @@ * POSTGRES disk "offset" definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/off.h diff --git a/src/include/storage/pg_sema.h b/src/include/storage/pg_sema.h index 65db86f578..2072cf0e8f 100644 --- a/src/include/storage/pg_sema.h +++ b/src/include/storage/pg_sema.h @@ -10,7 +10,7 @@ * be provided by each port. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/pg_sema.h diff --git a/src/include/storage/pg_shmem.h b/src/include/storage/pg_shmem.h index e3ee096229..6b1e040251 100644 --- a/src/include/storage/pg_shmem.h +++ b/src/include/storage/pg_shmem.h @@ -14,7 +14,7 @@ * only one ID number. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/pg_shmem.h diff --git a/src/include/storage/pmsignal.h b/src/include/storage/pmsignal.h index 4b954d7614..bec162cc16 100644 --- a/src/include/storage/pmsignal.h +++ b/src/include/storage/pmsignal.h @@ -4,7 +4,7 @@ * routines for signaling the postmaster from its child processes * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/pmsignal.h diff --git a/src/include/storage/predicate.h b/src/include/storage/predicate.h index 06bcbf2471..6a3464daa1 100644 --- a/src/include/storage/predicate.h +++ b/src/include/storage/predicate.h @@ -4,7 +4,7 @@ * POSTGRES public predicate locking definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/predicate.h diff --git a/src/include/storage/predicate_internals.h b/src/include/storage/predicate_internals.h index 89874a5c3b..0f736d37df 100644 --- a/src/include/storage/predicate_internals.h +++ b/src/include/storage/predicate_internals.h @@ -4,7 +4,7 @@ * POSTGRES internal predicate locking definitions. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/predicate_internals.h diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h index 1d370500af..5c19a61dcf 100644 --- a/src/include/storage/proc.h +++ b/src/include/storage/proc.h @@ -4,7 +4,7 @@ * per-process shared memory data structures * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/proc.h diff --git a/src/include/storage/procarray.h b/src/include/storage/procarray.h index 174c537be4..75bab2985f 100644 --- a/src/include/storage/procarray.h +++ b/src/include/storage/procarray.h @@ -4,7 +4,7 @@ * POSTGRES process array definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/procarray.h diff --git a/src/include/storage/proclist.h b/src/include/storage/proclist.h index 9f22f3fd48..4e25b47593 100644 --- a/src/include/storage/proclist.h +++ b/src/include/storage/proclist.h @@ -10,7 +10,7 @@ * See proclist_types.h for the structs that these functions operate on. They * are separated to break a header dependency cycle with proc.h. * - * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/include/storage/proclist.h diff --git a/src/include/storage/proclist_types.h b/src/include/storage/proclist_types.h index 716c4498d5..237fb7613f 100644 --- a/src/include/storage/proclist_types.h +++ b/src/include/storage/proclist_types.h @@ -5,7 +5,7 @@ * * See proclist.h for functions that operate on these types. 
* - * Portions Copyright (c) 2016-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2016-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/include/storage/proclist_types.h diff --git a/src/include/storage/procsignal.h b/src/include/storage/procsignal.h index 20bb05b177..6db0d69b71 100644 --- a/src/include/storage/procsignal.h +++ b/src/include/storage/procsignal.h @@ -4,7 +4,7 @@ * Routines for interprocess signalling * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/procsignal.h diff --git a/src/include/storage/reinit.h b/src/include/storage/reinit.h index 90e494e933..addda2c0ea 100644 --- a/src/include/storage/reinit.h +++ b/src/include/storage/reinit.h @@ -4,7 +4,7 @@ * Reinitialization of unlogged relations * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/reinit.h diff --git a/src/include/storage/relfilenode.h b/src/include/storage/relfilenode.h index fb596e2ee7..abffd84a1c 100644 --- a/src/include/storage/relfilenode.h +++ b/src/include/storage/relfilenode.h @@ -4,7 +4,7 @@ * Physical access information for relations. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/relfilenode.h diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h index 35088db10a..04175dbaaa 100644 --- a/src/include/storage/s_lock.h +++ b/src/include/storage/s_lock.h @@ -86,7 +86,7 @@ * when using the SysV semaphore code. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/s_lock.h diff --git a/src/include/storage/sharedfileset.h b/src/include/storage/sharedfileset.h index 20651bb93b..d74c488e59 100644 --- a/src/include/storage/sharedfileset.h +++ b/src/include/storage/sharedfileset.h @@ -4,7 +4,7 @@ * Shared temporary file management. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/sharedfilespace.h diff --git a/src/include/storage/shm_mq.h b/src/include/storage/shm_mq.h index 7709efcc48..f85f2eb7d1 100644 --- a/src/include/storage/shm_mq.h +++ b/src/include/storage/shm_mq.h @@ -3,7 +3,7 @@ * shm_mq.h * single-reader, single-writer shared memory message queue * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/shm_mq.h diff --git a/src/include/storage/shm_toc.h b/src/include/storage/shm_toc.h index 8ccd35d96b..4efe1723b4 100644 --- a/src/include/storage/shm_toc.h +++ b/src/include/storage/shm_toc.h @@ -12,7 +12,7 @@ * other data structure within the segment and only put the pointer to * the data structure itself in the table of contents. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/shm_toc.h diff --git a/src/include/storage/shmem.h b/src/include/storage/shmem.h index c6993387ff..b84d104347 100644 --- a/src/include/storage/shmem.h +++ b/src/include/storage/shmem.h @@ -11,7 +11,7 @@ * at the same address. This means shared memory pointers can be passed * around directly between different processes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/shmem.h diff --git a/src/include/storage/sinval.h b/src/include/storage/sinval.h index 84c0b02da0..7156780215 100644 --- a/src/include/storage/sinval.h +++ b/src/include/storage/sinval.h @@ -4,7 +4,7 @@ * POSTGRES shared cache invalidation communication definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/sinval.h diff --git a/src/include/storage/sinvaladt.h b/src/include/storage/sinvaladt.h index 751735fc9a..2652b87b60 100644 --- a/src/include/storage/sinvaladt.h +++ b/src/include/storage/sinvaladt.h @@ -12,7 +12,7 @@ * The struct type SharedInvalidationMessage, defining the contents of * a single message, is defined in sinval.h. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/sinvaladt.h diff --git a/src/include/storage/smgr.h b/src/include/storage/smgr.h index 2279134588..558e4d8518 100644 --- a/src/include/storage/smgr.h +++ b/src/include/storage/smgr.h @@ -4,7 +4,7 @@ * storage manager switch public interface declarations. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/smgr.h diff --git a/src/include/storage/spin.h b/src/include/storage/spin.h index 16413856ca..b49fc10aa4 100644 --- a/src/include/storage/spin.h +++ b/src/include/storage/spin.h @@ -41,7 +41,7 @@ * be again. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/spin.h diff --git a/src/include/storage/standby.h b/src/include/storage/standby.h index f5404b4c1f..28bf8f2f39 100644 --- a/src/include/storage/standby.h +++ b/src/include/storage/standby.h @@ -4,7 +4,7 @@ * Definitions for hot standby mode. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/standby.h diff --git a/src/include/storage/standbydefs.h b/src/include/storage/standbydefs.h index a0af6788e9..bb6144805a 100644 --- a/src/include/storage/standbydefs.h +++ b/src/include/storage/standbydefs.h @@ -4,7 +4,7 @@ * Frontend exposed definitions for hot standby mode. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/standbydefs.h diff --git a/src/include/tcop/deparse_utility.h b/src/include/tcop/deparse_utility.h index 9c4e608934..9b78748bfd 100644 --- a/src/include/tcop/deparse_utility.h +++ b/src/include/tcop/deparse_utility.h @@ -2,7 +2,7 @@ * * deparse_utility.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/tcop/deparse_utility.h diff --git a/src/include/tcop/dest.h b/src/include/tcop/dest.h index f31c06a9c0..82f0f2e741 100644 --- a/src/include/tcop/dest.h +++ b/src/include/tcop/dest.h @@ -57,7 +57,7 @@ * calls in portal and cursor manipulations. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/tcop/dest.h diff --git a/src/include/tcop/fastpath.h b/src/include/tcop/fastpath.h index 4a7c35f1a9..6e1608a4fa 100644 --- a/src/include/tcop/fastpath.h +++ b/src/include/tcop/fastpath.h @@ -3,7 +3,7 @@ * fastpath.h * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/tcop/fastpath.h diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h index 6abfe7b282..507d89cf69 100644 --- a/src/include/tcop/pquery.h +++ b/src/include/tcop/pquery.h @@ -4,7 +4,7 @@ * prototypes for pquery.c. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/tcop/pquery.h diff --git a/src/include/tcop/tcopprot.h b/src/include/tcop/tcopprot.h index 62c7f6c61c..63b4e4864d 100644 --- a/src/include/tcop/tcopprot.h +++ b/src/include/tcop/tcopprot.h @@ -4,7 +4,7 @@ * prototypes for postgres.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/tcop/tcopprot.h diff --git a/src/include/tcop/utility.h b/src/include/tcop/utility.h index 5bd386ddaa..5550055710 100644 --- a/src/include/tcop/utility.h +++ b/src/include/tcop/utility.h @@ -4,7 +4,7 @@ * prototypes for utility.c. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/tcop/utility.h diff --git a/src/include/tsearch/dicts/regis.h b/src/include/tsearch/dicts/regis.h index 7fdf82af65..15dba3a505 100644 --- a/src/include/tsearch/dicts/regis.h +++ b/src/include/tsearch/dicts/regis.h @@ -4,7 +4,7 @@ * * Declarations for fast regex subset, used by ISpell * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/tsearch/dicts/regis.h * diff --git a/src/include/tsearch/dicts/spell.h b/src/include/tsearch/dicts/spell.h index 3032d0b508..210f97dda9 100644 --- a/src/include/tsearch/dicts/spell.h +++ b/src/include/tsearch/dicts/spell.h @@ -4,7 +4,7 @@ * * Declarations for ISpell dictionary * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/tsearch/dicts/spell.h * diff --git a/src/include/tsearch/ts_cache.h b/src/include/tsearch/ts_cache.h index abff0fdfcc..410f1d54af 100644 --- a/src/include/tsearch/ts_cache.h +++ b/src/include/tsearch/ts_cache.h @@ -3,7 +3,7 @@ * ts_cache.h * Tsearch related object caches. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/tsearch/ts_cache.h diff --git a/src/include/tsearch/ts_locale.h b/src/include/tsearch/ts_locale.h index 3ec276fc05..02809539dd 100644 --- a/src/include/tsearch/ts_locale.h +++ b/src/include/tsearch/ts_locale.h @@ -3,7 +3,7 @@ * ts_locale.h * locale compatibility layer for tsearch * - * Copyright (c) 1998-2017, PostgreSQL Global Development Group + * Copyright (c) 1998-2018, PostgreSQL Global Development Group * * src/include/tsearch/ts_locale.h * diff --git a/src/include/tsearch/ts_public.h b/src/include/tsearch/ts_public.h index 94ba7fcb20..0b7a5aa68e 100644 --- a/src/include/tsearch/ts_public.h +++ b/src/include/tsearch/ts_public.h @@ -4,7 +4,7 @@ * Public interface to various tsearch modules, such as * parsers and dictionaries. 
* - * Copyright (c) 1998-2017, PostgreSQL Global Development Group + * Copyright (c) 1998-2018, PostgreSQL Global Development Group * * src/include/tsearch/ts_public.h * diff --git a/src/include/tsearch/ts_type.h b/src/include/tsearch/ts_type.h index 30d7c4bccd..ccf5701aa3 100644 --- a/src/include/tsearch/ts_type.h +++ b/src/include/tsearch/ts_type.h @@ -3,7 +3,7 @@ * ts_type.h * Definitions for the tsvector and tsquery types * - * Copyright (c) 1998-2017, PostgreSQL Global Development Group + * Copyright (c) 1998-2018, PostgreSQL Global Development Group * * src/include/tsearch/ts_type.h * diff --git a/src/include/tsearch/ts_utils.h b/src/include/tsearch/ts_utils.h index 782548c0af..f8ddce5ecb 100644 --- a/src/include/tsearch/ts_utils.h +++ b/src/include/tsearch/ts_utils.h @@ -3,7 +3,7 @@ * ts_utils.h * helper utilities for tsearch * - * Copyright (c) 1998-2017, PostgreSQL Global Development Group + * Copyright (c) 1998-2018, PostgreSQL Global Development Group * * src/include/tsearch/ts_utils.h * diff --git a/src/include/utils/acl.h b/src/include/utils/acl.h index 254a811aff..67c7b2d4ac 100644 --- a/src/include/utils/acl.h +++ b/src/include/utils/acl.h @@ -4,7 +4,7 @@ * Definition of (and support for) access control list data structures. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/acl.h diff --git a/src/include/utils/aclchk_internal.h b/src/include/utils/aclchk_internal.h index 3374edb638..1843f50b5a 100644 --- a/src/include/utils/aclchk_internal.h +++ b/src/include/utils/aclchk_internal.h @@ -2,7 +2,7 @@ * * aclchk_internal.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/aclchk_internal.h diff --git a/src/include/utils/array.h b/src/include/utils/array.h index cc19879a9a..afbb532e9c 100644 --- a/src/include/utils/array.h +++ b/src/include/utils/array.h @@ -51,7 +51,7 @@ * arrays holding the elements. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/array.h diff --git a/src/include/utils/arrayaccess.h b/src/include/utils/arrayaccess.h index 7655c80bed..f04752213e 100644 --- a/src/include/utils/arrayaccess.h +++ b/src/include/utils/arrayaccess.h @@ -4,7 +4,7 @@ * Declarations for element-by-element access to Postgres arrays. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/arrayaccess.h diff --git a/src/include/utils/ascii.h b/src/include/utils/ascii.h index d3b183f11f..9ecf164047 100644 --- a/src/include/utils/ascii.h +++ b/src/include/utils/ascii.h @@ -1,7 +1,7 @@ /*----------------------------------------------------------------------- * ascii.h * - * Portions Copyright (c) 1999-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1999-2018, PostgreSQL Global Development Group * * src/include/utils/ascii.h * diff --git a/src/include/utils/attoptcache.h b/src/include/utils/attoptcache.h index 4eef793181..f36ce279ac 100644 --- a/src/include/utils/attoptcache.h +++ b/src/include/utils/attoptcache.h @@ -3,7 +3,7 @@ * attoptcache.h * Attribute options cache. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/attoptcache.h diff --git a/src/include/utils/backend_random.h b/src/include/utils/backend_random.h index 9781aea0ac..99ea2cb9fb 100644 --- a/src/include/utils/backend_random.h +++ b/src/include/utils/backend_random.h @@ -3,7 +3,7 @@ * backend_random.h * Declarations for backend random number generation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/utils/backend_random.h * diff --git a/src/include/utils/builtins.h b/src/include/utils/builtins.h index 762532f636..8bb57c5829 100644 --- a/src/include/utils/builtins.h +++ b/src/include/utils/builtins.h @@ -4,7 +4,7 @@ * Declarations for operations on built-in types. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/builtins.h diff --git a/src/include/utils/bytea.h b/src/include/utils/bytea.h index d7bd30842e..a959dde791 100644 --- a/src/include/utils/bytea.h +++ b/src/include/utils/bytea.h @@ -4,7 +4,7 @@ * Declarations for BYTEA data type support. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/bytea.h diff --git a/src/include/utils/catcache.h b/src/include/utils/catcache.h index 74535eb7c2..39d4876169 100644 --- a/src/include/utils/catcache.h +++ b/src/include/utils/catcache.h @@ -10,7 +10,7 @@ * guarantee that there can only be one matching row for a key combination. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/catcache.h diff --git a/src/include/utils/combocid.h b/src/include/utils/combocid.h index 6d8be8bf0f..094a9cf98b 100644 --- a/src/include/utils/combocid.h +++ b/src/include/utils/combocid.h @@ -4,7 +4,7 @@ * Combo command ID support routines * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/combocid.h diff --git a/src/include/utils/date.h b/src/include/utils/date.h index 0736a72946..274959231b 100644 --- a/src/include/utils/date.h +++ b/src/include/utils/date.h @@ -4,7 +4,7 @@ * Definitions for the SQL "date" and "time" types. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/date.h diff --git a/src/include/utils/datetime.h b/src/include/utils/datetime.h index 7968569fda..d66582b7a2 100644 --- a/src/include/utils/datetime.h +++ b/src/include/utils/datetime.h @@ -6,7 +6,7 @@ * including abstime, reltime, date, and time. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/datetime.h diff --git a/src/include/utils/datum.h b/src/include/utils/datum.h index d2f0f9ed51..90ab5381aa 100644 --- a/src/include/utils/datum.h +++ b/src/include/utils/datum.h @@ -8,7 +8,7 @@ * of the Datum. (We do it this way because in most situations the caller * can look up the info just once and use it for many per-datum operations.) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/datum.h diff --git a/src/include/utils/dsa.h b/src/include/utils/dsa.h index 516ef610f0..5be6aab292 100644 --- a/src/include/utils/dsa.h +++ b/src/include/utils/dsa.h @@ -3,7 +3,7 @@ * dsa.h * Dynamic shared memory areas. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/include/utils/dynahash.h b/src/include/utils/dynahash.h index 8e03245a03..4365e1b439 100644 --- a/src/include/utils/dynahash.h +++ b/src/include/utils/dynahash.h @@ -4,7 +4,7 @@ * POSTGRES dynahash.h file definitions * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/dynahash.h diff --git a/src/include/utils/dynamic_loader.h b/src/include/utils/dynamic_loader.h index 80ac1e3fe6..e2455b52ca 100644 --- a/src/include/utils/dynamic_loader.h +++ b/src/include/utils/dynamic_loader.h @@ -4,7 +4,7 @@ * * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/dynamic_loader.h diff --git a/src/include/utils/elog.h b/src/include/utils/elog.h index 7bfd25a9e9..7a9ba7f2ff 100644 --- a/src/include/utils/elog.h +++ b/src/include/utils/elog.h @@ -4,7 +4,7 @@ * POSTGRES error reporting/logging definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/elog.h diff --git a/src/include/utils/evtcache.h b/src/include/utils/evtcache.h index 9774eac5a6..727b6ba588 100644 --- a/src/include/utils/evtcache.h +++ b/src/include/utils/evtcache.h @@ -3,7 +3,7 @@ * evtcache.c * Special-purpose cache for event trigger data. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/include/utils/expandeddatum.h b/src/include/utils/expandeddatum.h index 7116b860cc..3361bb25ad 100644 --- a/src/include/utils/expandeddatum.h +++ b/src/include/utils/expandeddatum.h @@ -34,7 +34,7 @@ * value if they fail partway through. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/expandeddatum.h diff --git a/src/include/utils/fmgrtab.h b/src/include/utils/fmgrtab.h index 4d06b02578..d8317eb3ea 100644 --- a/src/include/utils/fmgrtab.h +++ b/src/include/utils/fmgrtab.h @@ -3,7 +3,7 @@ * fmgrtab.h * The function manager's table of internal functions. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/fmgrtab.h diff --git a/src/include/utils/formatting.h b/src/include/utils/formatting.h index 8eaf2c3052..a9f5548b46 100644 --- a/src/include/utils/formatting.h +++ b/src/include/utils/formatting.h @@ -4,7 +4,7 @@ * src/include/utils/formatting.h * * - * Portions Copyright (c) 1999-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1999-2018, PostgreSQL Global Development Group * * The PostgreSQL routines for a DateTime/int/float/numeric formatting, * inspire with Oracle TO_CHAR() / TO_DATE() / TO_NUMBER() routines. diff --git a/src/include/utils/freepage.h b/src/include/utils/freepage.h index c370c733ee..cbbf267fb3 100644 --- a/src/include/utils/freepage.h +++ b/src/include/utils/freepage.h @@ -3,7 +3,7 @@ * freepage.h * Management of page-organized free memory. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/freepage.h diff --git a/src/include/utils/geo_decls.h b/src/include/utils/geo_decls.h index c89e6c3d1c..a589e4239f 100644 --- a/src/include/utils/geo_decls.h +++ b/src/include/utils/geo_decls.h @@ -3,7 +3,7 @@ * geo_decls.h - Declarations for various 2D constructs. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/geo_decls.h diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h index 26e0caaf59..77daa5a539 100644 --- a/src/include/utils/guc.h +++ b/src/include/utils/guc.h @@ -4,7 +4,7 @@ * External declarations pertaining to backend/utils/misc/guc.c and * backend/utils/misc/guc-file.l * - * Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Copyright (c) 2000-2018, PostgreSQL Global Development Group * Written by Peter Eisentraut . * * src/include/utils/guc.h diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h index 042f7a0152..04de6a383a 100644 --- a/src/include/utils/guc_tables.h +++ b/src/include/utils/guc_tables.h @@ -5,7 +5,7 @@ * * See src/backend/utils/misc/README for design notes. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/utils/guc_tables.h * diff --git a/src/include/utils/hashutils.h b/src/include/utils/hashutils.h index 0a2620beff..5e9fe65012 100644 --- a/src/include/utils/hashutils.h +++ b/src/include/utils/hashutils.h @@ -1,7 +1,7 @@ /* * Utilities for working with hash values. 
* - * Portions Copyright (c) 2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2017-2018, PostgreSQL Global Development Group */ #ifndef HASHUTILS_H diff --git a/src/include/utils/help_config.h b/src/include/utils/help_config.h index 3f433d10d7..f35d4da8a8 100644 --- a/src/include/utils/help_config.h +++ b/src/include/utils/help_config.h @@ -3,7 +3,7 @@ * help_config.h * Interface to the --help-config option of main.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/utils/help_config.h * diff --git a/src/include/utils/hsearch.h b/src/include/utils/hsearch.h index bc5873ed20..8357faac5a 100644 --- a/src/include/utils/hsearch.h +++ b/src/include/utils/hsearch.h @@ -4,7 +4,7 @@ * exported definitions for utils/hash/dynahash.c; see notes therein * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/hsearch.h diff --git a/src/include/utils/index_selfuncs.h b/src/include/utils/index_selfuncs.h index 24f2f3a866..ae2b96943d 100644 --- a/src/include/utils/index_selfuncs.h +++ b/src/include/utils/index_selfuncs.h @@ -9,7 +9,7 @@ * If you make it depend on anything besides access/amapi.h, that's likely * a mistake. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/index_selfuncs.h diff --git a/src/include/utils/inet.h b/src/include/utils/inet.h index f2aa864a35..e3bec398b1 100644 --- a/src/include/utils/inet.h +++ b/src/include/utils/inet.h @@ -4,7 +4,7 @@ * Declarations for operations on INET datatypes. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/inet.h diff --git a/src/include/utils/int8.h b/src/include/utils/int8.h index 8b78983fee..91171ee4cc 100644 --- a/src/include/utils/int8.h +++ b/src/include/utils/int8.h @@ -4,7 +4,7 @@ * Declarations for operations on 64-bit integers. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/int8.h diff --git a/src/include/utils/inval.h b/src/include/utils/inval.h index 361543f412..7a66d466f7 100644 --- a/src/include/utils/inval.h +++ b/src/include/utils/inval.h @@ -4,7 +4,7 @@ * POSTGRES cache invalidation dispatcher definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/inval.h diff --git a/src/include/utils/json.h b/src/include/utils/json.h index e3ffe6fc44..549bd4d287 100644 --- a/src/include/utils/json.h +++ b/src/include/utils/json.h @@ -3,7 +3,7 @@ * json.h * Declarations for JSON data type support. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/json.h diff --git a/src/include/utils/jsonapi.h b/src/include/utils/jsonapi.h index 4336823de2..d6baea5368 100644 --- a/src/include/utils/jsonapi.h +++ b/src/include/utils/jsonapi.h @@ -3,7 +3,7 @@ * jsonapi.h * Declarations for JSON API support. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/jsonapi.h diff --git a/src/include/utils/jsonb.h b/src/include/utils/jsonb.h index d639bbc960..27873d4d10 100644 --- a/src/include/utils/jsonb.h +++ b/src/include/utils/jsonb.h @@ -3,7 +3,7 @@ * jsonb.h * Declarations for jsonb data type support. * - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/include/utils/jsonb.h * diff --git a/src/include/utils/logtape.h b/src/include/utils/logtape.h index a1e869b80c..88662c10a4 100644 --- a/src/include/utils/logtape.h +++ b/src/include/utils/logtape.h @@ -5,7 +5,7 @@ * * See logtape.c for explanations. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/logtape.h diff --git a/src/include/utils/lsyscache.h b/src/include/utils/lsyscache.h index b316cc594c..9731e6f7ae 100644 --- a/src/include/utils/lsyscache.h +++ b/src/include/utils/lsyscache.h @@ -3,7 +3,7 @@ * lsyscache.h * Convenience routines for common queries in the system catalog cache. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/lsyscache.h diff --git a/src/include/utils/memdebug.h b/src/include/utils/memdebug.h index a73d505be8..c2c58b9cdd 100644 --- a/src/include/utils/memdebug.h +++ b/src/include/utils/memdebug.h @@ -7,7 +7,7 @@ * empty definitions for Valgrind client request macros we use. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/memdebug.h diff --git a/src/include/utils/memutils.h b/src/include/utils/memutils.h index 9c30eb76e9..6e6c6f2e4c 100644 --- a/src/include/utils/memutils.h +++ b/src/include/utils/memutils.h @@ -7,7 +7,7 @@ * of the API of the memory management subsystem. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/memutils.h diff --git a/src/include/utils/nabstime.h b/src/include/utils/nabstime.h index 69133952d1..293a49f022 100644 --- a/src/include/utils/nabstime.h +++ b/src/include/utils/nabstime.h @@ -4,7 +4,7 @@ * Definitions for the "new" abstime code. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/nabstime.h diff --git a/src/include/utils/numeric.h b/src/include/utils/numeric.h index 3aa7fef947..cd8da8bdc2 100644 --- a/src/include/utils/numeric.h +++ b/src/include/utils/numeric.h @@ -5,7 +5,7 @@ * * Original coding 1998, Jan Wieck. Heavily revised 2003, Tom Lane. * - * Copyright (c) 1998-2017, PostgreSQL Global Development Group + * Copyright (c) 1998-2018, PostgreSQL Global Development Group * * src/include/utils/numeric.h * diff --git a/src/include/utils/palloc.h b/src/include/utils/palloc.h index a7dc837724..781e948f69 100644 --- a/src/include/utils/palloc.h +++ b/src/include/utils/palloc.h @@ -18,7 +18,7 @@ * everything that should be freed. See utils/mmgr/README for more info. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/palloc.h diff --git a/src/include/utils/pg_crc.h b/src/include/utils/pg_crc.h index 9ea0622321..48bca045c2 100644 --- a/src/include/utils/pg_crc.h +++ b/src/include/utils/pg_crc.h @@ -26,7 +26,7 @@ * * The CRC-32C variant is in port/pg_crc32c.h. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/pg_crc.h diff --git a/src/include/utils/pg_locale.h b/src/include/utils/pg_locale.h index b633511a7a..88a3134862 100644 --- a/src/include/utils/pg_locale.h +++ b/src/include/utils/pg_locale.h @@ -4,7 +4,7 @@ * * src/include/utils/pg_locale.h * - * Copyright (c) 2002-2017, PostgreSQL Global Development Group + * Copyright (c) 2002-2018, PostgreSQL Global Development Group * *----------------------------------------------------------------------- */ diff --git a/src/include/utils/pg_lsn.h b/src/include/utils/pg_lsn.h index cc51b2a078..0db478a259 100644 --- a/src/include/utils/pg_lsn.h +++ b/src/include/utils/pg_lsn.h @@ -5,7 +5,7 @@ * PostgreSQL. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/pg_lsn.h diff --git a/src/include/utils/pg_rusage.h b/src/include/utils/pg_rusage.h index cea51f0cb2..5768caab81 100644 --- a/src/include/utils/pg_rusage.h +++ b/src/include/utils/pg_rusage.h @@ -4,7 +4,7 @@ * header file for resource usage measurement support routines * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/pg_rusage.h diff --git a/src/include/utils/pidfile.h b/src/include/utils/pidfile.h index c3db4c46e3..d3c47aea42 100644 --- a/src/include/utils/pidfile.h +++ b/src/include/utils/pidfile.h @@ -3,7 +3,7 @@ * pidfile.h * Declarations describing the data directory lock file (postmaster.pid) * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/pidfile.h diff --git a/src/include/utils/plancache.h b/src/include/utils/plancache.h index 87fab19f3c..ab20aa04b0 100644 --- a/src/include/utils/plancache.h +++ b/src/include/utils/plancache.h @@ -5,7 +5,7 @@ * * See plancache.c for comments. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/plancache.h diff --git a/src/include/utils/portal.h b/src/include/utils/portal.h index cb6f00081d..3e7820b51c 100644 --- a/src/include/utils/portal.h +++ b/src/include/utils/portal.h @@ -36,7 +36,7 @@ * to look like NO SCROLL cursors. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/portal.h diff --git a/src/include/utils/queryenvironment.h b/src/include/utils/queryenvironment.h index 08e6051b4e..34fde6401f 100644 --- a/src/include/utils/queryenvironment.h +++ b/src/include/utils/queryenvironment.h @@ -4,7 +4,7 @@ * Access to functions to mutate the query environment and retrieve the * actual data related to entries (if any). * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/queryenvironment.h diff --git a/src/include/utils/rangetypes.h b/src/include/utils/rangetypes.h index 1ef5e54253..83e94e005b 100644 --- a/src/include/utils/rangetypes.h +++ b/src/include/utils/rangetypes.h @@ -4,7 +4,7 @@ * Declarations for Postgres range types. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/rangetypes.h diff --git a/src/include/utils/regproc.h b/src/include/utils/regproc.h index ba46bd7d58..5b9a8cbee8 100644 --- a/src/include/utils/regproc.h +++ b/src/include/utils/regproc.h @@ -3,7 +3,7 @@ * regproc.h * Functions for the built-in types regproc, regclass, regtype, etc. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/regproc.h diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h index 68fd6fbd54..aa8add544a 100644 --- a/src/include/utils/rel.h +++ b/src/include/utils/rel.h @@ -4,7 +4,7 @@ * POSTGRES relation descriptor (a/k/a relcache entry) definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/rel.h diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h index 29c6d9bae3..8a546aba28 100644 --- a/src/include/utils/relcache.h +++ b/src/include/utils/relcache.h @@ -4,7 +4,7 @@ * Relation descriptor cache definitions. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/relcache.h diff --git a/src/include/utils/relfilenodemap.h b/src/include/utils/relfilenodemap.h index b3ee555fb7..e79115d34f 100644 --- a/src/include/utils/relfilenodemap.h +++ b/src/include/utils/relfilenodemap.h @@ -3,7 +3,7 @@ * relfilenodemap.h * relfilenode to oid mapping cache. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/relfilenodemap.h diff --git a/src/include/utils/relmapper.h b/src/include/utils/relmapper.h index 7af69ba6cf..f69b1006bf 100644 --- a/src/include/utils/relmapper.h +++ b/src/include/utils/relmapper.h @@ -4,7 +4,7 @@ * Catalog-to-filenode mapping * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/relmapper.h diff --git a/src/include/utils/relptr.h b/src/include/utils/relptr.h index 06e592e0c6..4b74571e91 100644 --- a/src/include/utils/relptr.h +++ b/src/include/utils/relptr.h @@ -3,7 +3,7 @@ * relptr.h * This file contains basic declarations for relative pointers. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/relptr.h diff --git a/src/include/utils/reltrigger.h b/src/include/utils/reltrigger.h index 2169b0306b..9b4dc7f810 100644 --- a/src/include/utils/reltrigger.h +++ b/src/include/utils/reltrigger.h @@ -4,7 +4,7 @@ * POSTGRES relation trigger definitions. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/reltrigger.h diff --git a/src/include/utils/resowner.h b/src/include/utils/resowner.h index 07d30d93bc..fe7f49119b 100644 --- a/src/include/utils/resowner.h +++ b/src/include/utils/resowner.h @@ -9,7 +9,7 @@ * See utils/resowner/README for more info. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/resowner.h diff --git a/src/include/utils/resowner_private.h b/src/include/utils/resowner_private.h index 2420b651b3..22b377c0df 100644 --- a/src/include/utils/resowner_private.h +++ b/src/include/utils/resowner_private.h @@ -6,7 +6,7 @@ * See utils/resowner/README for more info. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/resowner_private.h diff --git a/src/include/utils/rls.h b/src/include/utils/rls.h index f9780ad0c0..f3b4858333 100644 --- a/src/include/utils/rls.h +++ b/src/include/utils/rls.h @@ -4,7 +4,7 @@ * Header file for Row Level Security (RLS) utility commands to be used * with the rowsecurity feature. * - * Copyright (c) 2007-2017, PostgreSQL Global Development Group + * Copyright (c) 2007-2018, PostgreSQL Global Development Group * * src/include/utils/rls.h * diff --git a/src/include/utils/ruleutils.h b/src/include/utils/ruleutils.h index e9c5193855..9f9b029ab8 100644 --- a/src/include/utils/ruleutils.h +++ b/src/include/utils/ruleutils.h @@ -3,7 +3,7 @@ * ruleutils.h * Declarations for ruleutils.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/ruleutils.h diff --git a/src/include/utils/sampling.h b/src/include/utils/sampling.h index f566e0b866..c82148515d 100644 --- a/src/include/utils/sampling.h +++ b/src/include/utils/sampling.h @@ -3,7 +3,7 @@ * sampling.h * definitions for sampling functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/sampling.h diff --git a/src/include/utils/selfuncs.h b/src/include/utils/selfuncs.h index 199a6317f5..299c9f846a 100644 --- a/src/include/utils/selfuncs.h +++ b/src/include/utils/selfuncs.h @@ -5,7 +5,7 @@ * infrastructure for selectivity and cost estimation. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/selfuncs.h diff --git a/src/include/utils/sharedtuplestore.h b/src/include/utils/sharedtuplestore.h index 13642f6bfd..834773511d 100644 --- a/src/include/utils/sharedtuplestore.h +++ b/src/include/utils/sharedtuplestore.h @@ -3,7 +3,7 @@ * sharedtuplestore.h * Simple mechanism for sharing tuples between backends.
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/sharedtuplestore.h diff --git a/src/include/utils/snapmgr.h b/src/include/utils/snapmgr.h index 8585194e78..83806f3040 100644 --- a/src/include/utils/snapmgr.h +++ b/src/include/utils/snapmgr.h @@ -3,7 +3,7 @@ * snapmgr.h * POSTGRES snapshot manager * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/snapmgr.h diff --git a/src/include/utils/snapshot.h b/src/include/utils/snapshot.h index bf519778df..a8a5a8f4c0 100644 --- a/src/include/utils/snapshot.h +++ b/src/include/utils/snapshot.h @@ -3,7 +3,7 @@ * snapshot.h * POSTGRES snapshot definition * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/snapshot.h diff --git a/src/include/utils/sortsupport.h b/src/include/utils/sortsupport.h index a98420c37e..53b692e9a2 100644 --- a/src/include/utils/sortsupport.h +++ b/src/include/utils/sortsupport.h @@ -42,7 +42,7 @@ * function for such cases, but probably not any other acceleration method. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/sortsupport.h diff --git a/src/include/utils/spccache.h b/src/include/utils/spccache.h index 7c45bb11eb..2f30c982b2 100644 --- a/src/include/utils/spccache.h +++ b/src/include/utils/spccache.h @@ -3,7 +3,7 @@ * spccache.h * Tablespace cache. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/spccache.h diff --git a/src/include/utils/syscache.h b/src/include/utils/syscache.h index 8a0be41929..55d573c687 100644 --- a/src/include/utils/syscache.h +++ b/src/include/utils/syscache.h @@ -6,7 +6,7 @@ * See also lsyscache.h, which provides convenience routines for * common cache-lookup operations. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/syscache.h diff --git a/src/include/utils/timeout.h b/src/include/utils/timeout.h index 5a2efc0dd9..dcc7307c16 100644 --- a/src/include/utils/timeout.h +++ b/src/include/utils/timeout.h @@ -4,7 +4,7 @@ * Routines to multiplex SIGALRM interrupts for multiple timeout reasons. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/timeout.h diff --git a/src/include/utils/timestamp.h b/src/include/utils/timestamp.h index 3f2d31dec8..2b3b35703e 100644 --- a/src/include/utils/timestamp.h +++ b/src/include/utils/timestamp.h @@ -3,7 +3,7 @@ * timestamp.h * Definitions for the SQL "timestamp" and "interval" types. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/timestamp.h diff --git a/src/include/utils/tqual.h b/src/include/utils/tqual.h index 96eaf01ca0..d3b6e99bb4 100644 --- a/src/include/utils/tqual.h +++ b/src/include/utils/tqual.h @@ -5,7 +5,7 @@ * * Should be moved/renamed... - vadim 07/28/98 * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/tqual.h diff --git a/src/include/utils/tuplesort.h b/src/include/utils/tuplesort.h index b6b8c8ef8c..5d57c503ab 100644 --- a/src/include/utils/tuplesort.h +++ b/src/include/utils/tuplesort.h @@ -10,7 +10,7 @@ * amounts are sorted using temporary files and a standard external sort * algorithm. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/tuplesort.h diff --git a/src/include/utils/tuplestore.h b/src/include/utils/tuplestore.h index 7f4e1e318f..2c0c3f8219 100644 --- a/src/include/utils/tuplestore.h +++ b/src/include/utils/tuplestore.h @@ -21,7 +21,7 @@ * Also, we have changed the API to return tuples in TupleTableSlots, * so that there is a check to prevent attempted access to system columns. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/tuplestore.h diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h index c203dabbd0..f25448d316 100644 --- a/src/include/utils/typcache.h +++ b/src/include/utils/typcache.h @@ -6,7 +6,7 @@ * The type cache exists to speed lookup of certain information about data * types that is not directly available from a type's pg_type row. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/typcache.h diff --git a/src/include/utils/tzparser.h b/src/include/utils/tzparser.h index 1e444e1159..5a16d12103 100644 --- a/src/include/utils/tzparser.h +++ b/src/include/utils/tzparser.h @@ -3,7 +3,7 @@ * tzparser.h * Timezone offset file parsing definitions. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/tzparser.h diff --git a/src/include/utils/uuid.h b/src/include/utils/uuid.h index ed3ec28959..312c8c48c6 100644 --- a/src/include/utils/uuid.h +++ b/src/include/utils/uuid.h @@ -5,7 +5,7 @@ * to avoid conflicts with any uuid_t type that might be defined by * the system headers. * - * Copyright (c) 2007-2017, PostgreSQL Global Development Group + * Copyright (c) 2007-2018, PostgreSQL Global Development Group * * src/include/utils/uuid.h * diff --git a/src/include/utils/varbit.h b/src/include/utils/varbit.h index 2a4ec67698..3278cb5461 100644 --- a/src/include/utils/varbit.h +++ b/src/include/utils/varbit.h @@ -5,7 +5,7 @@ * * Code originally contributed by Adriaan Joubert. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/varbit.h diff --git a/src/include/utils/varlena.h b/src/include/utils/varlena.h index 06f3b69893..90e131a3a0 100644 --- a/src/include/utils/varlena.h +++ b/src/include/utils/varlena.h @@ -3,7 +3,7 @@ * varlena.h * Functions for the variable-length built-in types. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/varlena.h diff --git a/src/include/utils/xml.h b/src/include/utils/xml.h index 385b728f42..fe488da42e 100644 --- a/src/include/utils/xml.h +++ b/src/include/utils/xml.h @@ -4,7 +4,7 @@ * Declarations for XML data type support. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/utils/xml.h diff --git a/src/include/windowapi.h b/src/include/windowapi.h index 0aa23ef2f5..f6f18e5159 100644 --- a/src/include/windowapi.h +++ b/src/include/windowapi.h @@ -19,7 +19,7 @@ * function in nodeWindowAgg.c for details. 
* * - * Portions Copyright (c) 2000-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2000-2018, PostgreSQL Global Development Group * * src/include/windowapi.h * diff --git a/src/interfaces/ecpg/compatlib/Makefile b/src/interfaces/ecpg/compatlib/Makefile index 9ea5cf4ac4..cd03af56a9 100644 --- a/src/interfaces/ecpg/compatlib/Makefile +++ b/src/interfaces/ecpg/compatlib/Makefile @@ -2,7 +2,7 @@ # # Makefile for ecpg compatibility library # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/interfaces/ecpg/compatlib/Makefile diff --git a/src/interfaces/ecpg/ecpglib/Makefile b/src/interfaces/ecpg/ecpglib/Makefile index 0b0a3e2857..bc20cf77af 100644 --- a/src/interfaces/ecpg/ecpglib/Makefile +++ b/src/interfaces/ecpg/ecpglib/Makefile @@ -2,7 +2,7 @@ # # Makefile for ecpg library # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/interfaces/ecpg/ecpglib/Makefile diff --git a/src/interfaces/ecpg/ecpglib/pg_type.h b/src/interfaces/ecpg/ecpglib/pg_type.h index 94d2d9287b..5d9eeca03a 100644 --- a/src/interfaces/ecpg/ecpglib/pg_type.h +++ b/src/interfaces/ecpg/ecpglib/pg_type.h @@ -5,7 +5,7 @@ * * XXX keep this in sync with src/include/catalog/pg_type.h * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/interfaces/ecpg/ecpglib/pg_type.h diff --git a/src/interfaces/ecpg/pgtypeslib/Makefile b/src/interfaces/ecpg/pgtypeslib/Makefile index 960c26ae55..01206568f1 100644 --- a/src/interfaces/ecpg/pgtypeslib/Makefile +++ b/src/interfaces/ecpg/pgtypeslib/Makefile @@ -2,7 +2,7 @@ # # Makefile for ecpg pgtypes library # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/interfaces/ecpg/pgtypeslib/Makefile diff --git a/src/interfaces/ecpg/preproc/Makefile b/src/interfaces/ecpg/preproc/Makefile index 02a6e65daf..0a00893b43 100644 --- a/src/interfaces/ecpg/preproc/Makefile +++ b/src/interfaces/ecpg/preproc/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/interfaces/ecpg/preproc # -# Copyright (c) 1998-2017, PostgreSQL Global Development Group +# Copyright (c) 1998-2018, PostgreSQL Global Development Group # # src/interfaces/ecpg/preproc/Makefile # diff --git a/src/interfaces/ecpg/preproc/check_rules.pl b/src/interfaces/ecpg/preproc/check_rules.pl index e681943856..4c2151c1b6 100644 --- a/src/interfaces/ecpg/preproc/check_rules.pl +++ b/src/interfaces/ecpg/preproc/check_rules.pl @@ -3,7 +3,7 @@ # test parser generator for ecpg # call with backend parser as stdin # -# Copyright (c) 2009-2017, PostgreSQL Global Development Group +# Copyright (c) 2009-2018, PostgreSQL Global Development Group # # Written by Michael Meskes # Andy Colson diff --git a/src/interfaces/ecpg/preproc/ecpg.c b/src/interfaces/ecpg/preproc/ecpg.c index 536185fa1c..8a14572261 100644 --- a/src/interfaces/ecpg/preproc/ecpg.c +++ b/src/interfaces/ecpg/preproc/ecpg.c @@ -1,7 +1,7 @@ /* src/interfaces/ecpg/preproc/ecpg.c */ /* Main for ecpg,
the PostgreSQL embedded SQL precompiler. */ -/* Copyright (c) 1996-2017, PostgreSQL Global Development Group */ +/* Copyright (c) 1996-2018, PostgreSQL Global Development Group */ #include "postgres_fe.h" diff --git a/src/interfaces/ecpg/preproc/keywords.c b/src/interfaces/ecpg/preproc/keywords.c index f016d7fc6f..21e1f928fd 100644 --- a/src/interfaces/ecpg/preproc/keywords.c +++ b/src/interfaces/ecpg/preproc/keywords.c @@ -4,7 +4,7 @@ * lexical token lookup for key words in PostgreSQL * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/ecpg/preproc/parse.pl b/src/interfaces/ecpg/preproc/parse.pl index 768df3a6b1..e68cc26f52 100644 --- a/src/interfaces/ecpg/preproc/parse.pl +++ b/src/interfaces/ecpg/preproc/parse.pl @@ -3,7 +3,7 @@ # parser generator for ecpg version 2 # call with backend parser as stdin # -# Copyright (c) 2007-2017, PostgreSQL Global Development Group +# Copyright (c) 2007-2018, PostgreSQL Global Development Group # # Written by Mike Aubury # Michael Meskes diff --git a/src/interfaces/ecpg/preproc/parser.c b/src/interfaces/ecpg/preproc/parser.c index 0c2705cd2b..e5a8f9d170 100644 --- a/src/interfaces/ecpg/preproc/parser.c +++ b/src/interfaces/ecpg/preproc/parser.c @@ -7,7 +7,7 @@ * need to bother with re-entrant interfaces. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/interfaces/ecpg/preproc/pgc.l b/src/interfaces/ecpg/preproc/pgc.l index da4fc673d9..e99b7ff586 100644 --- a/src/interfaces/ecpg/preproc/pgc.l +++ b/src/interfaces/ecpg/preproc/pgc.l @@ -7,7 +7,7 @@ * This is a modified version of src/backend/parser/scan.l * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/ecpg/test/pg_regress_ecpg.c b/src/interfaces/ecpg/test/pg_regress_ecpg.c index b6ecb618e6..a975a7e4e4 100644 --- a/src/interfaces/ecpg/test/pg_regress_ecpg.c +++ b/src/interfaces/ecpg/test/pg_regress_ecpg.c @@ -8,7 +8,7 @@ * * This code is released under the terms of the PostgreSQL License.
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/interfaces/ecpg/test/pg_regress_ecpg.c diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile index 94eb84be03..0bf1e7ef04 100644 --- a/src/interfaces/libpq/Makefile +++ b/src/interfaces/libpq/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/interfaces/libpq library # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/interfaces/libpq/Makefile diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index b8f7a6b5be..778ff500ea 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -3,7 +3,7 @@ * fe-auth-scram.c * The front-end (client) implementation of SCRAM authentication. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c index 3340a9ad93..ecaed048e6 100644 --- a/src/interfaces/libpq/fe-auth.c +++ b/src/interfaces/libpq/fe-auth.c @@ -3,7 +3,7 @@ * fe-auth.c * The front-end (client) authorization routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h index db319ac071..91bc21ee8d 100644 --- a/src/interfaces/libpq/fe-auth.h +++ b/src/interfaces/libpq/fe-auth.h @@ -4,7 +4,7 @@ * * Definitions for network authentication routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/interfaces/libpq/fe-auth.h diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c index 68fb9a124a..8d543334ae 100644 --- a/src/interfaces/libpq/fe-connect.c +++ b/src/interfaces/libpq/fe-connect.c @@ -3,7 +3,7 @@ * fe-connect.c * functions related to setting up a connection to the backend * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c index 66530870d4..4c0114c514 100644 --- a/src/interfaces/libpq/fe-exec.c +++ b/src/interfaces/libpq/fe-exec.c @@ -3,7 +3,7 @@ * fe-exec.c * functions related to sending a query down to the backend * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c index 2ff5559233..a5ad3af258 100644 --- a/src/interfaces/libpq/fe-lobj.c +++ b/src/interfaces/libpq/fe-lobj.c @@ -3,7 +3,7 @@ * 
fe-lobj.c * Front-end large object interface * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c index f885106615..2a6637fdda 100644 --- a/src/interfaces/libpq/fe-misc.c +++ b/src/interfaces/libpq/fe-misc.c @@ -19,7 +19,7 @@ * routines. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/interfaces/libpq/fe-print.c b/src/interfaces/libpq/fe-print.c index 6dbf847280..95de270b93 100644 --- a/src/interfaces/libpq/fe-print.c +++ b/src/interfaces/libpq/fe-print.c @@ -3,7 +3,7 @@ * fe-print.c * functions for pretty-printing query results * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * These functions were formerly part of fe-exec.c, but they diff --git a/src/interfaces/libpq/fe-protocol2.c b/src/interfaces/libpq/fe-protocol2.c index 5335a91440..7dcef808dd 100644 --- a/src/interfaces/libpq/fe-protocol2.c +++ b/src/interfaces/libpq/fe-protocol2.c @@ -3,7 +3,7 @@ * fe-protocol2.c * functions that are specific to frontend/backend protocol version 2 * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c index 0c5099caac..d3ca5d25f6 100644 --- a/src/interfaces/libpq/fe-protocol3.c +++ b/src/interfaces/libpq/fe-protocol3.c @@ -3,7 +3,7 @@ * fe-protocol3.c * functions that are specific to frontend/backend protocol version 3 * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index 61d161b367..7b7390a1fc 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -4,7 +4,7 @@ * OpenSSL support * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c index c122c63106..ec6c65a4b4 100644 --- a/src/interfaces/libpq/fe-secure.c +++ b/src/interfaces/libpq/fe-secure.c @@ -6,7 +6,7 @@ * message integrity and endpoint authentication. 
* * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/libpq/libpq-events.c b/src/interfaces/libpq/libpq-events.c index 883e2af8f4..09f9c7f9bb 100644 --- a/src/interfaces/libpq/libpq-events.c +++ b/src/interfaces/libpq/libpq-events.c @@ -3,7 +3,7 @@ * libpq-events.c * functions for supporting the libpq "events" API * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/interfaces/libpq/libpq-events.h b/src/interfaces/libpq/libpq-events.h index 20af1ffe6d..7d0726a839 100644 --- a/src/interfaces/libpq/libpq-events.h +++ b/src/interfaces/libpq/libpq-events.h @@ -5,7 +5,7 @@ * that invoke the libpq "events" API, but are not interesting to * ordinary users of libpq. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/interfaces/libpq/libpq-events.h diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h index 1d915e7915..ed9c806861 100644 --- a/src/interfaces/libpq/libpq-fe.h +++ b/src/interfaces/libpq/libpq-fe.h @@ -4,7 +4,7 @@ * This file contains definitions for structures and * externs for functions used by frontend postgres applications. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/interfaces/libpq/libpq-fe.h diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h index f6c1023f37..516039eea0 100644 --- a/src/interfaces/libpq/libpq-int.h +++ b/src/interfaces/libpq/libpq-int.h @@ -9,7 +9,7 @@ * more likely to break across PostgreSQL releases than code that uses * only the official API. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/interfaces/libpq/libpq-int.h diff --git a/src/interfaces/libpq/libpq.rc.in b/src/interfaces/libpq/libpq.rc.in index 437c45d602..bc89cd2a43 100644 --- a/src/interfaces/libpq/libpq.rc.in +++ b/src/interfaces/libpq/libpq.rc.in @@ -17,7 +17,7 @@ BEGIN VALUE "FileDescription", "PostgreSQL Access Library\0" VALUE "FileVersion", "11.0\0" VALUE "InternalName", "libpq\0" - VALUE "LegalCopyright", "Copyright (C) 2017\0" + VALUE "LegalCopyright", "Copyright (C) 2018\0" VALUE "LegalTrademarks", "\0" VALUE "OriginalFilename", "libpq.dll\0" VALUE "ProductName", "PostgreSQL\0" diff --git a/src/interfaces/libpq/pqexpbuffer.c b/src/interfaces/libpq/pqexpbuffer.c index f4aa7c9cef..86b16e60fb 100644 --- a/src/interfaces/libpq/pqexpbuffer.c +++ b/src/interfaces/libpq/pqexpbuffer.c @@ -15,7 +15,7 @@ * a usable vsnprintf(), then a copy of our own implementation of it will * be linked into libpq. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/interfaces/libpq/pqexpbuffer.c diff --git a/src/interfaces/libpq/pqexpbuffer.h b/src/interfaces/libpq/pqexpbuffer.h index 19633f9b79..771602af09 100644 --- a/src/interfaces/libpq/pqexpbuffer.h +++ b/src/interfaces/libpq/pqexpbuffer.h @@ -15,7 +15,7 @@ * a usable vsnprintf(), then a copy of our own implementation of it will * be linked into libpq. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/interfaces/libpq/pqexpbuffer.h diff --git a/src/interfaces/libpq/pthread-win32.c b/src/interfaces/libpq/pthread-win32.c index 0e0d3eeb88..f6d675d5c4 100644 --- a/src/interfaces/libpq/pthread-win32.c +++ b/src/interfaces/libpq/pthread-win32.c @@ -3,7 +3,7 @@ * pthread-win32.c * partial pthread implementation for win32 * -* Copyright (c) 2004-2017, PostgreSQL Global Development Group +* Copyright (c) 2004-2018, PostgreSQL Global Development Group * IDENTIFICATION * src/interfaces/libpq/pthread-win32.c * diff --git a/src/interfaces/libpq/test/uri-regress.c b/src/interfaces/libpq/test/uri-regress.c index fac849bab2..4590f37008 100644 --- a/src/interfaces/libpq/test/uri-regress.c +++ b/src/interfaces/libpq/test/uri-regress.c @@ -7,7 +7,7 @@ * prints out the values from the parsed PQconninfoOption struct that differ * from the defaults (obtained from PQconndefaults). * - * Portions Copyright (c) 2012-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 2012-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/interfaces/libpq/test/uri-regress.c diff --git a/src/interfaces/libpq/win32.c b/src/interfaces/libpq/win32.c index 11abb0be04..79768e4e0b 100644 --- a/src/interfaces/libpq/win32.c +++ b/src/interfaces/libpq/win32.c @@ -15,7 +15,7 @@ * The error constants are taken from the Frambak Bakfram LGSOCKET * library guys who in turn took them from the Winsock FAQ. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * */ diff --git a/src/pl/plperl/plperl.h b/src/pl/plperl/plperl.h index aac95f8d2c..78366aac04 100644 --- a/src/pl/plperl/plperl.h +++ b/src/pl/plperl/plperl.h @@ -5,7 +5,7 @@ * * This should be included _AFTER_ postgres.h and system include files * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1995, Regents of the University of California * * src/pl/plperl/plperl.h diff --git a/src/pl/plpgsql/src/generate-plerrcodes.pl b/src/pl/plpgsql/src/generate-plerrcodes.pl index eb135bc25e..834cd5058f 100644 --- a/src/pl/plpgsql/src/generate-plerrcodes.pl +++ b/src/pl/plpgsql/src/generate-plerrcodes.pl @@ -1,7 +1,7 @@ #!/usr/bin/perl # # Generate the plerrcodes.h header from errcodes.txt -# Copyright (c) 2000-2017, PostgreSQL Global Development Group +# Copyright (c) 2000-2018, PostgreSQL Global Development Group use warnings; use strict; diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index e9eab17338..43de3f752c 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -3,7 +3,7 @@ * pl_comp.c - Compiler part of the PL/pgSQL * procedural language * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index cfc388498e..d096f242cd 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -3,7 +3,7 @@ * pl_exec.c - Executor for the PL/pgSQL * procedural language * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c index be779b6fc4..80b8448b7f 100644 --- a/src/pl/plpgsql/src/pl_funcs.c +++ b/src/pl/plpgsql/src/pl_funcs.c @@ -3,7 +3,7 @@ * pl_funcs.c - Misc functions for the PL/pgSQL * procedural language * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y index e802440b45..d9cab1ad7e 100644 --- a/src/pl/plpgsql/src/pl_gram.y +++ b/src/pl/plpgsql/src/pl_gram.y @@ -3,7 +3,7 @@ * * pl_gram.y - Parser for the PL/pgSQL procedural language * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/pl/plpgsql/src/pl_handler.c b/src/pl/plpgsql/src/pl_handler.c index 1ebb7a7b5e..4c2ba2f734 100644 --- a/src/pl/plpgsql/src/pl_handler.c +++ b/src/pl/plpgsql/src/pl_handler.c @@ -3,7 +3,7 @@ * pl_handler.c - Handler for the PL/pgSQL * procedural language * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * 
Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/pl/plpgsql/src/pl_scanner.c b/src/pl/plpgsql/src/pl_scanner.c index 553be8c93c..ee9aef8bbc 100644 --- a/src/pl/plpgsql/src/pl_scanner.c +++ b/src/pl/plpgsql/src/pl_scanner.c @@ -4,7 +4,7 @@ * lexical scanning for PL/pgSQL * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index 43d7d7db36..c571afa34b 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -3,7 +3,7 @@ * plpgsql.h - Definitions for the PL/pgSQL * procedural language * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/pl/plpython/generate-spiexceptions.pl b/src/pl/plpython/generate-spiexceptions.pl index a9ee9601b3..73ca50e875 100644 --- a/src/pl/plpython/generate-spiexceptions.pl +++ b/src/pl/plpython/generate-spiexceptions.pl @@ -1,7 +1,7 @@ #!/usr/bin/perl # # Generate the spiexceptions.h header from errcodes.txt -# Copyright (c) 2000-2017, PostgreSQL Global Development Group +# Copyright (c) 2000-2018, PostgreSQL Global Development Group use warnings; use strict; diff --git a/src/pl/plpython/plpython.h b/src/pl/plpython/plpython.h index 9a8e8f246d..5c2e6a83f3 100644 --- a/src/pl/plpython/plpython.h +++ b/src/pl/plpython/plpython.h @@ -2,7 +2,7 @@ * * plpython.h - Python as a procedural language for PostgreSQL * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/pl/plpython/plpython.h diff --git a/src/pl/tcl/generate-pltclerrcodes.pl b/src/pl/tcl/generate-pltclerrcodes.pl index b4e429a4fb..b5a595510c 100644 --- a/src/pl/tcl/generate-pltclerrcodes.pl +++ b/src/pl/tcl/generate-pltclerrcodes.pl @@ -1,7 +1,7 @@ #!/usr/bin/perl # # Generate the pltclerrcodes.h header from errcodes.txt -# Copyright (c) 2000-2017, PostgreSQL Global Development Group +# Copyright (c) 2000-2018, PostgreSQL Global Development Group use warnings; use strict; diff --git a/src/port/chklocale.c b/src/port/chklocale.c index c357fed6dc..dde913099f 100644 --- a/src/port/chklocale.c +++ b/src/port/chklocale.c @@ -4,7 +4,7 @@ * Functions for handling locale-related info * * - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/port/dirent.c b/src/port/dirent.c index 2bab7938a0..7d1d069647 100644 --- a/src/port/dirent.c +++ b/src/port/dirent.c @@ -3,7 +3,7 @@ * dirent.c * opendir/readdir/closedir for win32/msvc * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/dirmod.c b/src/port/dirmod.c index eac59bdfda..26611922db 100644 --- a/src/port/dirmod.c +++ b/src/port/dirmod.c @@ -3,7 +3,7 @@ * dirmod.c * directory handling functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 
1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * This includes replacement versions of functions that work on diff --git a/src/port/fls.c b/src/port/fls.c index ddd18f17f5..46dceb59d5 100644 --- a/src/port/fls.c +++ b/src/port/fls.c @@ -3,7 +3,7 @@ * fls.c * finds the last (most significant) bit that is set * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/port/fseeko.c b/src/port/fseeko.c index e9c0b07f0b..38b9cbde91 100644 --- a/src/port/fseeko.c +++ b/src/port/fseeko.c @@ -3,7 +3,7 @@ * fseeko.c * 64-bit versions of fseeko/ftello() * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/getaddrinfo.c b/src/port/getaddrinfo.c index dbad0293e8..21f1f1b94b 100644 --- a/src/port/getaddrinfo.c +++ b/src/port/getaddrinfo.c @@ -13,7 +13,7 @@ * use the Windows native routines, but if not, we use our own. * * - * Copyright (c) 2003-2017, PostgreSQL Global Development Group + * Copyright (c) 2003-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/port/getaddrinfo.c diff --git a/src/port/getpeereid.c b/src/port/getpeereid.c index 53fa663122..fa871ae68f 100644 --- a/src/port/getpeereid.c +++ b/src/port/getpeereid.c @@ -3,7 +3,7 @@ * getpeereid.c * get peer userid for UNIX-domain socket connection * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/port/getrusage.c b/src/port/getrusage.c index d029fc2c76..229b5bb7fe 100644 --- a/src/port/getrusage.c +++ b/src/port/getrusage.c @@ -3,7 +3,7 @@ * getrusage.c * get information about resource utilisation * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/isinf.c b/src/port/isinf.c index 570aa40fa2..9990f9c422 100644 --- a/src/port/isinf.c +++ b/src/port/isinf.c @@ -2,7 +2,7 @@ * * isinf.c * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/kill.c b/src/port/kill.c index 58343c4152..fee5abfdfa 100644 --- a/src/port/kill.c +++ b/src/port/kill.c @@ -3,7 +3,7 @@ * kill.c * kill() * - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * This is a replacement version of kill for Win32 which sends * signals that the backend can recognize. 
diff --git a/src/port/mkdtemp.c b/src/port/mkdtemp.c index 54844cb2f5..e0b3ada28a 100644 --- a/src/port/mkdtemp.c +++ b/src/port/mkdtemp.c @@ -3,7 +3,7 @@ * mkdtemp.c * create a mode-0700 temporary directory * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/port/noblock.c b/src/port/noblock.c index 673fa8aa3c..bd26bd5f52 100644 --- a/src/port/noblock.c +++ b/src/port/noblock.c @@ -3,7 +3,7 @@ * noblock.c * set a file descriptor as blocking or non-blocking * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/port/open.c b/src/port/open.c index 17a7145ad9..a3ad946a60 100644 --- a/src/port/open.c +++ b/src/port/open.c @@ -4,7 +4,7 @@ * Win32 open() replacement * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/port/open.c * diff --git a/src/port/path.c b/src/port/path.c index 2578393624..1ac1dbea4f 100644 --- a/src/port/path.c +++ b/src/port/path.c @@ -3,7 +3,7 @@ * path.c * portable path handling routines * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/pg_crc32c_choose.c b/src/port/pg_crc32c_choose.c index e82c9c4b27..40bee67b0a 100644 --- a/src/port/pg_crc32c_choose.c +++ b/src/port/pg_crc32c_choose.c @@ -7,7 +7,7 @@ * if available on the platform we're running on, but fall back to the * slicing-by-8 implementation otherwise. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/pg_crc32c_sb8.c b/src/port/pg_crc32c_sb8.c index dfd6cd9f49..5205ba9cdc 100644 --- a/src/port/pg_crc32c_sb8.c +++ b/src/port/pg_crc32c_sb8.c @@ -8,7 +8,7 @@ * Generation", IEEE Transactions on Computers, vol.57, no. 11, * pp. 1550-1560, November 2008, doi:10.1109/TC.2008.85 * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/pg_crc32c_sse42.c b/src/port/pg_crc32c_sse42.c index d698124121..b9def7e2ea 100644 --- a/src/port/pg_crc32c_sse42.c +++ b/src/port/pg_crc32c_sse42.c @@ -3,7 +3,7 @@ * pg_crc32c_sse42.c * Compute CRC-32C checksum using Intel SSE 4.2 instructions. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/pg_strong_random.c b/src/port/pg_strong_random.c index c6ee5ea1d4..bc7a8aacb9 100644 --- a/src/port/pg_strong_random.c +++ b/src/port/pg_strong_random.c @@ -6,7 +6,7 @@ * Our definition of "strong" is that it's suitable for generating random * salts and query cancellation keys, during authentication. 
* - * Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/port/pg_strong_random.c diff --git a/src/port/pgcheckdir.c b/src/port/pgcheckdir.c index 965249eeaa..b5b999b385 100644 --- a/src/port/pgcheckdir.c +++ b/src/port/pgcheckdir.c @@ -5,7 +5,7 @@ * A simple subroutine to check whether a directory exists and is empty or not. * Useful in both initdb and the backend. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *------------------------------------------------------------------------- diff --git a/src/port/pgsleep.c b/src/port/pgsleep.c index f2db68a33d..48536f4b7a 100644 --- a/src/port/pgsleep.c +++ b/src/port/pgsleep.c @@ -4,7 +4,7 @@ * Portable delay handling. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/port/pgsleep.c * diff --git a/src/port/pgstrcasecmp.c b/src/port/pgstrcasecmp.c index d12778da8d..3aaea305c0 100644 --- a/src/port/pgstrcasecmp.c +++ b/src/port/pgstrcasecmp.c @@ -18,7 +18,7 @@ * C library thinks the locale is. * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/port/pgstrcasecmp.c * diff --git a/src/port/pqsignal.c b/src/port/pqsignal.c index f176387ca2..5d8d5042b0 100644 --- a/src/port/pqsignal.c +++ b/src/port/pqsignal.c @@ -4,7 +4,7 @@ * reliable BSD-style signal(2) routine stolen from RWW who stole it * from Stevens... * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/quotes.c b/src/port/quotes.c index d7ea934c8b..29770c7a00 100644 --- a/src/port/quotes.c +++ b/src/port/quotes.c @@ -3,7 +3,7 @@ * quotes.c * string quoting and escaping functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/random.c b/src/port/random.c index 5071b31b5d..3996225c92 100644 --- a/src/port/random.c +++ b/src/port/random.c @@ -3,7 +3,7 @@ * random.c * random() wrapper * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/sprompt.c b/src/port/sprompt.c index 47cd9781fd..70dfa69d7b 100644 --- a/src/port/sprompt.c +++ b/src/port/sprompt.c @@ -3,7 +3,7 @@ * sprompt.c * simple_prompt() routine * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/srandom.c b/src/port/srandom.c index 867c71858c..6939260d33 100644 --- a/src/port/srandom.c +++ b/src/port/srandom.c @@ -3,7 +3,7 @@ * srandom.c * srandom() wrapper * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL 
Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/strlcpy.c b/src/port/strlcpy.c index 29c14da0b6..920d7f88f5 100644 --- a/src/port/strlcpy.c +++ b/src/port/strlcpy.c @@ -3,7 +3,7 @@ * strlcpy.c * strncpy done right * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/port/strnlen.c b/src/port/strnlen.c index 260b883368..bd4b56bbb1 100644 --- a/src/port/strnlen.c +++ b/src/port/strnlen.c @@ -4,7 +4,7 @@ * Fallback implementation of strnlen(). * * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/port/system.c b/src/port/system.c index 3d99c7985f..9d5766e33c 100644 --- a/src/port/system.c +++ b/src/port/system.c @@ -29,7 +29,7 @@ * quote character on the command line, preserving any text after the last * quote character. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/port/system.c * diff --git a/src/port/thread.c b/src/port/thread.c index a3f37b1237..da2df1f808 100644 --- a/src/port/thread.c +++ b/src/port/thread.c @@ -5,7 +5,7 @@ * Prototypes and macros around system calls, used to help make * threaded libraries reentrant and safe to use from threaded applications. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * src/port/thread.c * diff --git a/src/port/unsetenv.c b/src/port/unsetenv.c index 83b04d2641..7841431482 100644 --- a/src/port/unsetenv.c +++ b/src/port/unsetenv.c @@ -3,7 +3,7 @@ * unsetenv.c * unsetenv() emulation for machines without it * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/win32env.c b/src/port/win32env.c index 5480525fa2..af8555a7a7 100644 --- a/src/port/win32env.c +++ b/src/port/win32env.c @@ -4,7 +4,7 @@ * putenv() and unsetenv() for win32, which update both process environment * and caches in (potentially multiple) C run-time library (CRT) versions. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/port/win32error.c b/src/port/win32error.c index fe07f6e0a2..71f6e89ddd 100644 --- a/src/port/win32error.c +++ b/src/port/win32error.c @@ -3,7 +3,7 @@ * win32error.c * Map win32 error codes to errno values * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/port/win32error.c diff --git a/src/port/win32security.c b/src/port/win32security.c index bb9d034a01..8d7bcd2d92 100644 --- a/src/port/win32security.c +++ b/src/port/win32security.c @@ -3,7 +3,7 @@ * win32security.c * Microsoft Windows Win32 Security Support Functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/port/win32security.c diff --git a/src/port/win32setlocale.c b/src/port/win32setlocale.c index c4da4a8f92..0597c2afca 100644 --- a/src/port/win32setlocale.c +++ b/src/port/win32setlocale.c @@ -3,7 +3,7 @@ * win32setlocale.c * Wrapper to work around bugs in Windows setlocale() implementation * - * Copyright (c) 2011-2017, PostgreSQL Global Development Group + * Copyright (c) 2011-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/port/win32setlocale.c diff --git a/src/port/win32ver.rc b/src/port/win32ver.rc index 6cb2e99b92..a62fde4b0c 100644 --- a/src/port/win32ver.rc +++ b/src/port/win32ver.rc @@ -17,7 +17,7 @@ BEGIN VALUE "CompanyName", "PostgreSQL Global Development Group" VALUE "FileDescription", FILEDESC VALUE "FileVersion", PG_VERSION - VALUE "LegalCopyright", "Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group. Portions Copyright (c) 1994, Regents of the University of California." + VALUE "LegalCopyright", "Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group. Portions Copyright (c) 1994, Regents of the University of California." 
VALUE "ProductName", "PostgreSQL" VALUE "ProductVersion", PG_VERSION END diff --git a/src/test/authentication/Makefile b/src/test/authentication/Makefile index 21ad15bea9..a435b13057 100644 --- a/src/test/authentication/Makefile +++ b/src/test/authentication/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/test/authentication # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/test/authentication/Makefile diff --git a/src/test/examples/testlo.c b/src/test/examples/testlo.c index b7470385eb..7afe24714a 100644 --- a/src/test/examples/testlo.c +++ b/src/test/examples/testlo.c @@ -3,7 +3,7 @@ * testlo.c * test using large objects with libpq * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/test/examples/testlo64.c b/src/test/examples/testlo64.c index 76558f4797..bb188cc3a1 100644 --- a/src/test/examples/testlo64.c +++ b/src/test/examples/testlo64.c @@ -3,7 +3,7 @@ * testlo64.c * test using large objects with libpq using 64-bit APIs * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * diff --git a/src/test/isolation/isolation_main.c b/src/test/isolation/isolation_main.c index 8a3d7f51b3..58402b74d8 100644 --- a/src/test/isolation/isolation_main.c +++ b/src/test/isolation/isolation_main.c @@ -2,7 +2,7 @@ * * isolation_main --- pg_regress test launcher for isolation tests * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/test/isolation/isolation_main.c diff --git a/src/test/isolation/isolationtester.h b/src/test/isolation/isolationtester.h index 1f28272d65..a4d989bd1a 100644 --- a/src/test/isolation/isolationtester.h +++ b/src/test/isolation/isolationtester.h @@ -3,7 +3,7 @@ * isolationtester.h * include file for isolation tests * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * IDENTIFICATION diff --git a/src/test/isolation/specparse.y b/src/test/isolation/specparse.y index 759b9b456c..654716194c 100644 --- a/src/test/isolation/specparse.y +++ b/src/test/isolation/specparse.y @@ -4,7 +4,7 @@ * specparse.y * bison grammar for the isolation test file format * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *------------------------------------------------------------------------- diff --git a/src/test/isolation/specscanner.l b/src/test/isolation/specscanner.l index 9c0532c0c5..481b32d1d7 100644 --- a/src/test/isolation/specscanner.l +++ b/src/test/isolation/specscanner.l @@ -4,7 +4,7 @@ * specscanner.l * a lexical scanner for an isolation test specification * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright 
(c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * *------------------------------------------------------------------------- diff --git a/src/test/ldap/Makefile b/src/test/ldap/Makefile index 9dd1bbeade..50e3c17e95 100644 --- a/src/test/ldap/Makefile +++ b/src/test/ldap/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/test/ldap # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/test/ldap/Makefile diff --git a/src/test/modules/dummy_seclabel/dummy_seclabel.c b/src/test/modules/dummy_seclabel/dummy_seclabel.c index 7fd78f05c7..fc1e745444 100644 --- a/src/test/modules/dummy_seclabel/dummy_seclabel.c +++ b/src/test/modules/dummy_seclabel/dummy_seclabel.c @@ -7,7 +7,7 @@ * perspective, but allows regression testing independent of platform-specific * features like SELinux. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California */ #include "postgres.h" diff --git a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c index 56394de92e..82a51eb303 100644 --- a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c +++ b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c @@ -2,7 +2,7 @@ * test_ddl_deparse.c * Support functions for the test_ddl_deparse module * - * Copyright (c) 2014-2017, PostgreSQL Global Development Group + * Copyright (c) 2014-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_ddl_deparse/test_ddl_deparse.c diff --git a/src/test/modules/test_parser/test_parser.c b/src/test/modules/test_parser/test_parser.c index bb5305109e..bb700f8a3d 100644 --- a/src/test/modules/test_parser/test_parser.c +++ b/src/test/modules/test_parser/test_parser.c @@ -3,7 +3,7 @@ * test_parser.c * Simple example of a text search parser * - * Copyright (c) 2007-2017, PostgreSQL Global Development Group + * Copyright (c) 2007-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_parser/test_parser.c diff --git a/src/test/modules/test_rbtree/test_rbtree.c b/src/test/modules/test_rbtree/test_rbtree.c index 688ebbbbad..1274b9995d 100644 --- a/src/test/modules/test_rbtree/test_rbtree.c +++ b/src/test/modules/test_rbtree/test_rbtree.c @@ -3,7 +3,7 @@ * test_rbtree.c * Test correctness of red-black tree operations. * - * Copyright (c) 2009-2017, PostgreSQL Global Development Group + * Copyright (c) 2009-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_rbtree/test_rbtree.c diff --git a/src/test/modules/test_rls_hooks/test_rls_hooks.c b/src/test/modules/test_rls_hooks/test_rls_hooks.c index 65bf3e33c9..3e6cedf2bb 100644 --- a/src/test/modules/test_rls_hooks/test_rls_hooks.c +++ b/src/test/modules/test_rls_hooks/test_rls_hooks.c @@ -3,7 +3,7 @@ * test_rls_hooks.c * Code for testing RLS hooks. 
* - * Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Copyright (c) 2015-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_rls_hooks/test_rls_hooks.c diff --git a/src/test/modules/test_rls_hooks/test_rls_hooks.h b/src/test/modules/test_rls_hooks/test_rls_hooks.h index 81f7b18090..774c64ff43 100644 --- a/src/test/modules/test_rls_hooks/test_rls_hooks.h +++ b/src/test/modules/test_rls_hooks/test_rls_hooks.h @@ -3,7 +3,7 @@ * test_rls_hooks.h * Definitions for RLS hooks * - * Copyright (c) 2015-2017, PostgreSQL Global Development Group + * Copyright (c) 2015-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_rls_hooks/test_rls_hooks.h diff --git a/src/test/modules/test_shm_mq/setup.c b/src/test/modules/test_shm_mq/setup.c index 561f6f9bac..97e8617b3e 100644 --- a/src/test/modules/test_shm_mq/setup.c +++ b/src/test/modules/test_shm_mq/setup.c @@ -5,7 +5,7 @@ * number of background workers for shared memory message queue * testing. * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_shm_mq/setup.c diff --git a/src/test/modules/test_shm_mq/test.c b/src/test/modules/test_shm_mq/test.c index 7a6ad23f75..ebab986601 100644 --- a/src/test/modules/test_shm_mq/test.c +++ b/src/test/modules/test_shm_mq/test.c @@ -3,7 +3,7 @@ * test.c * Test harness code for shared memory message queues. * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_shm_mq/test.c diff --git a/src/test/modules/test_shm_mq/test_shm_mq.h b/src/test/modules/test_shm_mq/test_shm_mq.h index e76ecab891..2134b1fdf1 100644 --- a/src/test/modules/test_shm_mq/test_shm_mq.h +++ b/src/test/modules/test_shm_mq/test_shm_mq.h @@ -3,7 +3,7 @@ * test_shm_mq.h * Definitions for shared memory message queues * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_shm_mq/test_shm_mq.h diff --git a/src/test/modules/test_shm_mq/worker.c b/src/test/modules/test_shm_mq/worker.c index e7e29f89c2..bcb992e1e4 100644 --- a/src/test/modules/test_shm_mq/worker.c +++ b/src/test/modules/test_shm_mq/worker.c @@ -9,7 +9,7 @@ * but it should be possible to use much of the control logic just * as presented here. * - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/test_shm_mq/worker.c diff --git a/src/test/modules/worker_spi/worker_spi.c b/src/test/modules/worker_spi/worker_spi.c index 4c6ab6d575..3b98b1682b 100644 --- a/src/test/modules/worker_spi/worker_spi.c +++ b/src/test/modules/worker_spi/worker_spi.c @@ -13,7 +13,7 @@ * "delta" type. Delta rows will be deleted by this worker and their values * aggregated into the total. 
* - * Copyright (c) 2013-2017, PostgreSQL Global Development Group + * Copyright (c) 2013-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/test/modules/worker_spi/worker_spi.c diff --git a/src/test/perl/Makefile b/src/test/perl/Makefile index a974f358fd..8e7012d943 100644 --- a/src/test/perl/Makefile +++ b/src/test/perl/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/test/perl # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/test/perl/Makefile diff --git a/src/test/recovery/Makefile b/src/test/recovery/Makefile index e31accf0f5..aecf37d89a 100644 --- a/src/test/recovery/Makefile +++ b/src/test/recovery/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/test/recovery # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/test/recovery/Makefile diff --git a/src/test/regress/GNUmakefile b/src/test/regress/GNUmakefile index 8b2d20c5b5..deef08dfc1 100644 --- a/src/test/regress/GNUmakefile +++ b/src/test/regress/GNUmakefile @@ -3,7 +3,7 @@ # GNUmakefile-- # Makefile for src/test/regress (the regression tests) # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/test/regress/GNUmakefile diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c index e7ea3ae138..a1ee1041b4 100644 --- a/src/test/regress/pg_regress.c +++ b/src/test/regress/pg_regress.c @@ -8,7 +8,7 @@ * * This code is released under the terms of the PostgreSQL License. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/test/regress/pg_regress.c diff --git a/src/test/regress/pg_regress.h b/src/test/regress/pg_regress.h index 0d9c4bfac3..e9045b75b6 100644 --- a/src/test/regress/pg_regress.h +++ b/src/test/regress/pg_regress.h @@ -1,7 +1,7 @@ /*------------------------------------------------------------------------- * pg_regress.h --- regression test driver * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/test/regress/pg_regress.h diff --git a/src/test/regress/pg_regress_main.c b/src/test/regress/pg_regress_main.c index 298ed758ee..a2bd6a2cd5 100644 --- a/src/test/regress/pg_regress_main.c +++ b/src/test/regress/pg_regress_main.c @@ -8,7 +8,7 @@ * * This code is released under the terms of the PostgreSQL License. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/test/regress/pg_regress_main.c diff --git a/src/test/regress/regress.c b/src/test/regress/regress.c index 0e9e46e667..13e7207457 100644 --- a/src/test/regress/regress.c +++ b/src/test/regress/regress.c @@ -6,7 +6,7 @@ * * This code is released under the terms of the PostgreSQL License. 
* - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/test/regress/regress.c diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile index e4437d19c3..4886e901d0 100644 --- a/src/test/ssl/Makefile +++ b/src/test/ssl/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/test/ssl # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/test/ssl/Makefile diff --git a/src/test/subscription/Makefile b/src/test/subscription/Makefile index d423ff3662..25c48e470d 100644 --- a/src/test/subscription/Makefile +++ b/src/test/subscription/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/test/subscription # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/test/subscription/Makefile diff --git a/src/test/thread/Makefile b/src/test/thread/Makefile index 66c22691ae..bf2b07be56 100644 --- a/src/test/thread/Makefile +++ b/src/test/thread/Makefile @@ -2,7 +2,7 @@ # # Makefile for tools/thread # -# Copyright (c) 2003-2017, PostgreSQL Global Development Group +# Copyright (c) 2003-2018, PostgreSQL Global Development Group # # src/test/thread/Makefile # diff --git a/src/test/thread/thread_test.c b/src/test/thread/thread_test.c index 2501ca22b1..381607324d 100644 --- a/src/test/thread/thread_test.c +++ b/src/test/thread/thread_test.c @@ -3,7 +3,7 @@ * test_thread_funcs.c * libc thread test program * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/test/thread/thread_test.c diff --git a/src/timezone/pgtz.c b/src/timezone/pgtz.c index 4018310a5c..7a476eabf7 100644 --- a/src/timezone/pgtz.c +++ b/src/timezone/pgtz.c @@ -3,7 +3,7 @@ * pgtz.c * Timezone Library Integration Functions * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/timezone/pgtz.c diff --git a/src/timezone/pgtz.h b/src/timezone/pgtz.h index 3d89ba00a7..1e94d66d49 100644 --- a/src/timezone/pgtz.h +++ b/src/timezone/pgtz.h @@ -6,7 +6,7 @@ * Note: this file contains only definitions that are private to the * timezone library. Public definitions are in pgtime.h. * - * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * * IDENTIFICATION * src/timezone/pgtz.h diff --git a/src/tools/check_bison_recursion.pl b/src/tools/check_bison_recursion.pl index 14590663c6..9421eb93af 100755 --- a/src/tools/check_bison_recursion.pl +++ b/src/tools/check_bison_recursion.pl @@ -16,7 +16,7 @@ # To use: run bison with the -v switch, then feed the produced y.output # file to this script. 
# -# Copyright (c) 2011-2017, PostgreSQL Global Development Group +# Copyright (c) 2011-2018, PostgreSQL Global Development Group # # src/tools/check_bison_recursion.pl ################################################################# diff --git a/src/tools/copyright.pl b/src/tools/copyright.pl index 53942f12f0..41cb93d658 100755 --- a/src/tools/copyright.pl +++ b/src/tools/copyright.pl @@ -2,7 +2,7 @@ ################################################################# # copyright.pl -- update copyright notices throughout the source tree, idempotently. # -# Copyright (c) 2011-2017, PostgreSQL Global Development Group +# Copyright (c) 2011-2018, PostgreSQL Global Development Group # # src/tools/copyright.pl # diff --git a/src/tools/findoidjoins/Makefile b/src/tools/findoidjoins/Makefile index 5410d85ec2..a3462b937c 100644 --- a/src/tools/findoidjoins/Makefile +++ b/src/tools/findoidjoins/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/tools/findoidjoins # -# Copyright (c) 2003-2017, PostgreSQL Global Development Group +# Copyright (c) 2003-2018, PostgreSQL Global Development Group # # src/tools/findoidjoins/Makefile # diff --git a/src/tools/findoidjoins/findoidjoins.c b/src/tools/findoidjoins/findoidjoins.c index 7ce519d726..7ea53b8789 100644 --- a/src/tools/findoidjoins/findoidjoins.c +++ b/src/tools/findoidjoins/findoidjoins.c @@ -1,7 +1,7 @@ /* * findoidjoins.c * - * Copyright (c) 2002-2017, PostgreSQL Global Development Group + * Copyright (c) 2002-2018, PostgreSQL Global Development Group * * src/tools/findoidjoins/findoidjoins.c */ diff --git a/src/tools/fix-old-flex-code.pl b/src/tools/fix-old-flex-code.pl index da99875599..baa1feecb9 100644 --- a/src/tools/fix-old-flex-code.pl +++ b/src/tools/fix-old-flex-code.pl @@ -8,7 +8,7 @@ # let's suppress it by inserting a dummy reference to the variable. # (That's exactly what 2.5.36 and later do ...) # -# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +# Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group # Portions Copyright (c) 1994, Regents of the University of California # # src/tools/fix-old-flex-code.pl diff --git a/src/tools/ifaddrs/Makefile b/src/tools/ifaddrs/Makefile index a5153207fe..9e0e8945f8 100644 --- a/src/tools/ifaddrs/Makefile +++ b/src/tools/ifaddrs/Makefile @@ -2,7 +2,7 @@ # # Makefile for src/tools/ifaddrs # -# Copyright (c) 2003-2017, PostgreSQL Global Development Group +# Copyright (c) 2003-2018, PostgreSQL Global Development Group # # src/tools/ifaddrs/Makefile # diff --git a/src/tools/testint128.c b/src/tools/testint128.c index afdfd15cb0..559b1ea264 100644 --- a/src/tools/testint128.c +++ b/src/tools/testint128.c @@ -6,7 +6,7 @@ * This is a standalone test program that compares the behavior of an * implementation in int128.h to an (assumed correct) int128 native type. 
* - * Copyright (c) 2017, PostgreSQL Global Development Group + * Copyright (c) 2017-2018, PostgreSQL Global Development Group * * * IDENTIFICATION diff --git a/src/tools/version_stamp.pl b/src/tools/version_stamp.pl index 90ccf9cbaf..dc3173bf6a 100755 --- a/src/tools/version_stamp.pl +++ b/src/tools/version_stamp.pl @@ -3,7 +3,7 @@ ################################################################# # version_stamp.pl -- update version stamps throughout the source tree # -# Copyright (c) 2008-2017, PostgreSQL Global Development Group +# Copyright (c) 2008-2018, PostgreSQL Global Development Group # # src/tools/version_stamp.pl ################################################################# diff --git a/src/tools/win32tzlist.pl b/src/tools/win32tzlist.pl index 0bdcc3610f..ca81219164 100755 --- a/src/tools/win32tzlist.pl +++ b/src/tools/win32tzlist.pl @@ -2,7 +2,7 @@ # # win32tzlist.pl -- compare Windows timezone information # -# Copyright (c) 2008-2017, PostgreSQL Global Development Group +# Copyright (c) 2008-2018, PostgreSQL Global Development Group # # src/tools/win32tzlist.pl ################################################################# diff --git a/src/tutorial/complex.source b/src/tutorial/complex.source index a2307b9447..2fc3e501ce 100644 --- a/src/tutorial/complex.source +++ b/src/tutorial/complex.source @@ -5,7 +5,7 @@ -- use this new type. -- -- --- Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +-- Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group -- Portions Copyright (c) 1994, Regents of the University of California -- -- src/tutorial/complex.source diff --git a/src/tutorial/syscat.source b/src/tutorial/syscat.source index 2f97642a39..7b3c4a5637 100644 --- a/src/tutorial/syscat.source +++ b/src/tutorial/syscat.source @@ -4,7 +4,7 @@ -- sample queries to the system catalogs -- -- --- Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group +-- Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group -- Portions Copyright (c) 1994, Regents of the University of California -- -- src/tutorial/syscat.source From 2268e6afd59649d6bf6d114a19e9c492d59b43fc Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Wed, 3 Jan 2018 11:16:42 -0300 Subject: [PATCH 0763/1087] Fix isolation test to be less timing-dependent I did this by adding another locking process, which makes the other two wait. This way the output should be stable enough. Per buildfarm and Andres Freund Discussion: https://postgr.es/m/20180103034445.t3utrtrnrevfsghm@alap3.anarazel.de --- src/test/isolation/expected/multiple-cic.out | 17 +++++++++++------ src/test/isolation/specs/multiple-cic.spec | 12 ++++++++---- 2 files changed, 19 insertions(+), 10 deletions(-) diff --git a/src/test/isolation/expected/multiple-cic.out b/src/test/isolation/expected/multiple-cic.out index cc57940392..0b470e7d1d 100644 --- a/src/test/isolation/expected/multiple-cic.out +++ b/src/test/isolation/expected/multiple-cic.out @@ -1,6 +1,9 @@ -Parsed test spec with 2 sessions +Parsed test spec with 3 sessions -starting permutation: s2l s1i s2i +starting permutation: s2l s1i s2i s3u +pg_advisory_lock + + step s2l: SELECT pg_advisory_lock(281457); pg_advisory_lock @@ -11,9 +14,11 @@ step s1i: step s2i: CREATE INDEX CONCURRENTLY mcic_two_pkey ON mcic_two (id) - WHERE unlck(); + WHERE unlck() AND lck_shr(572814); + +step s3u: SELECT unlck(); +unlck +t step s1i: <... completed> -s1 - - +step s2i: <... 
completed> diff --git a/src/test/isolation/specs/multiple-cic.spec b/src/test/isolation/specs/multiple-cic.spec index a7ba4eb4fd..fbec67ee25 100644 --- a/src/test/isolation/specs/multiple-cic.spec +++ b/src/test/isolation/specs/multiple-cic.spec @@ -26,15 +26,19 @@ session "s1" step "s1i" { CREATE INDEX CONCURRENTLY mcic_one_pkey ON mcic_one (id) WHERE lck_shr(281457); - } -teardown { SELECT pg_advisory_unlock_all() AS "s1"; } +} +step "s1u" { SELECT unlck(); } session "s2" step "s2l" { SELECT pg_advisory_lock(281457); } step "s2i" { CREATE INDEX CONCURRENTLY mcic_two_pkey ON mcic_two (id) - WHERE unlck(); + WHERE unlck() AND lck_shr(572814); } -permutation "s2l" "s1i" "s2i" +session "s3" +setup { SELECT pg_advisory_lock(572814); } +step "s3u" { SELECT unlck(); } + +permutation "s2l" "s1i" "s2i" "s3u" From 35c0754fadca8010955f6b10cb47af00bdbe1286 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 3 Jan 2018 10:00:08 -0500 Subject: [PATCH 0764/1087] Allow ldaps when using ldap authentication While ldaptls=1 provides an RFC 4513 conforming way to do LDAP authentication with TLS encryption, there was an earlier de facto standard way to do LDAP over SSL called LDAPS. Even though it's not enshrined in a standard, it's still widely used and sometimes required by organizations' network policies. There seems to be no reason not to support it when available in the client library. Therefore, add support when using OpenLDAP 2.4+ or Windows. It can be configured with ldapscheme=ldaps or ldapurl=ldaps://... Add tests for both ways of requesting LDAPS and a test for the pre-existing ldaptls=1. Modify the 001_auth.pl test for "diagnostic messages", which was previously relying on the server rejecting ldaptls=1. Author: Thomas Munro Reviewed-By: Peter Eisentraut Discussion: https://postgr.es/m/CAEepm=1s+pA-LZUjQ-9GQz0Z4rX_eK=DFXAF1nBQ+ROPimuOYQ@mail.gmail.com --- configure | 11 +++++++ configure.in | 1 + doc/src/sgml/client-auth.sgml | 50 +++++++++++++++++++++------- src/backend/libpq/auth.c | 59 +++++++++++++++++++++++++++++---- src/backend/libpq/hba.c | 16 ++++++++- src/include/libpq/hba.h | 1 + src/include/pg_config.h.in | 3 ++ src/test/ldap/t/001_auth.pl | 61 ++++++++++++++++++++++++++++++++--- 8 files changed, 178 insertions(+), 24 deletions(-) diff --git a/configure b/configure index 82f332f545..d88863e50c 100755 --- a/configure +++ b/configure @@ -10424,6 +10424,17 @@ fi else LDAP_LIBS_FE="-lldap $EXTRA_LDAP_LIBS" fi + for ac_func in ldap_initialize +do : + ac_fn_c_check_func "$LINENO" "ldap_initialize" "ac_cv_func_ldap_initialize" +if test "x$ac_cv_func_ldap_initialize" = xyes; then : + cat >>confdefs.h <<_ACEOF +#define HAVE_LDAP_INITIALIZE 1 +_ACEOF + +fi +done + else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ldap_bind in -lwldap32" >&5 $as_echo_n "checking for ldap_bind in -lwldap32... " >&6; } diff --git a/configure.in b/configure.in index 4a20c9da96..4968b67bf9 100644 --- a/configure.in +++ b/configure.in @@ -1106,6 +1106,7 @@ if test "$with_ldap" = yes ; then else LDAP_LIBS_FE="-lldap $EXTRA_LDAP_LIBS" fi + AC_CHECK_FUNCS([ldap_initialize]) else AC_CHECK_LIB(wldap32, ldap_bind, [], [AC_MSG_ERROR([library 'wldap32' is required for LDAP])]) LDAP_LIBS_FE="-lwldap32" diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index c8a1bc79aa..53832d08e2 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -1502,19 +1502,40 @@ omicron bryanh guest1 + + ldapscheme + + + Set to ldaps to use LDAPS. 
This is a non-standard + way of using LDAP over SSL, supported by some LDAP server + implementations. See also the ldaptls option for + an alternative. + + + ldaptls - Set to 1 to make the connection between PostgreSQL and the - LDAP server use TLS encryption. Note that this only encrypts - the traffic to the LDAP server — the connection to the client - will still be unencrypted unless SSL is used. + Set to 1 to make the connection between PostgreSQL and the LDAP server + use TLS encryption. This uses the StartTLS + operation per RFC 4513. See also the ldapscheme + option for an alternative. + + + + Note that using ldapscheme or + ldaptls only encrypts the traffic between the + PostgreSQL server and the LDAP server. The connection between the + PostgreSQL server and the PostgreSQL client will still be unencrypted + unless SSL is used there as well. + + The following options are used in simple bind mode only: @@ -1536,7 +1557,9 @@ omicron bryanh guest1 + + The following options are used in search+bind mode only: @@ -1594,7 +1617,7 @@ omicron bryanh guest1 An RFC 4516 LDAP URL. This is an alternative way to write some of the other LDAP options in a more compact and standard form. The format is -ldap://host[:port]/basedn[?[attribute][?[scope][?[filter]]]] +ldap[s]://host[:port]/basedn[?[attribute][?[scope][?[filter]]]] scope must be one of base, one, sub, @@ -1608,16 +1631,19 @@ ldap://host[:port]/ - For non-anonymous binds, ldapbinddn - and ldapbindpasswd must be specified as separate - options. + The URL scheme ldaps chooses the LDAPS method for + making LDAP connections over SSL, equivalent to using + ldapscheme=ldaps. To use encrypted LDAP + connections using the StartTLS operation, use the + normal URL scheme ldap and specify the + ldaptls option in addition to + ldapurl. - To use encrypted LDAP connections, the ldaptls - option has to be used in addition to ldapurl. - The ldaps URL scheme (direct SSL connection) is not - supported. + For non-anonymous binds, ldapbinddn + and ldapbindpasswd must be specified as separate + options. 
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 1d49ed784f..3560edc33a 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -2355,22 +2355,61 @@ static int errdetail_for_ldap(LDAP *ldap); static int InitializeLDAPConnection(Port *port, LDAP **ldap) { + const char *scheme; int ldapversion = LDAP_VERSION3; int r; - *ldap = ldap_init(port->hba->ldapserver, port->hba->ldapport); + scheme = port->hba->ldapscheme; + if (scheme == NULL) + scheme = "ldap"; +#ifdef WIN32 + *ldap = ldap_sslinit(port->hba->ldapserver, + port->hba->ldapport, + strcmp(scheme, "ldaps") == 0); if (!*ldap) { -#ifndef WIN32 - ereport(LOG, - (errmsg("could not initialize LDAP: %m"))); -#else ereport(LOG, (errmsg("could not initialize LDAP: error code %d", (int) LdapGetLastError()))); -#endif + + return STATUS_ERROR; + } +#else +#ifdef HAVE_LDAP_INITIALIZE + { + char *uri; + + uri = psprintf("%s://%s:%d", scheme, port->hba->ldapserver, + port->hba->ldapport); + r = ldap_initialize(ldap, uri); + pfree(uri); + if (r != LDAP_SUCCESS) + { + ereport(LOG, + (errmsg("could not initialize LDAP: %s", + ldap_err2string(r)))); + + return STATUS_ERROR; + } + } +#else + if (strcmp(scheme, "ldaps") == 0) + { + ereport(LOG, + (errmsg("ldaps not supported with this LDAP library"))); + + return STATUS_ERROR; + } + *ldap = ldap_init(port->hba->ldapserver, port->hba->ldapport); + if (!*ldap) + { + ereport(LOG, + (errmsg("could not initialize LDAP: %m"))); + return STATUS_ERROR; } +#endif +#endif if ((r = ldap_set_option(*ldap, LDAP_OPT_PROTOCOL_VERSION, &ldapversion)) != LDAP_SUCCESS) { @@ -2493,7 +2532,13 @@ CheckLDAPAuth(Port *port) } if (port->hba->ldapport == 0) - port->hba->ldapport = LDAP_PORT; + { + if (port->hba->ldapscheme != NULL && + strcmp(port->hba->ldapscheme, "ldaps") == 0) + port->hba->ldapport = LDAPS_PORT; + else + port->hba->ldapport = LDAP_PORT; + } sendAuthRequest(port, AUTH_REQ_PASSWORD, NULL, 0); diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c index f760d24886..aa20f266b8 100644 --- a/src/backend/libpq/hba.c +++ b/src/backend/libpq/hba.c @@ -1728,7 +1728,8 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline, return false; } - if (strcmp(urldata->lud_scheme, "ldap") != 0) + if (strcmp(urldata->lud_scheme, "ldap") != 0 && + strcmp(urldata->lud_scheme, "ldaps") != 0) { ereport(elevel, (errcode(ERRCODE_CONFIG_FILE_ERROR), @@ -1739,6 +1740,8 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline, return false; } + if (urldata->lud_scheme) + hbaline->ldapscheme = pstrdup(urldata->lud_scheme); if (urldata->lud_host) hbaline->ldapserver = pstrdup(urldata->lud_host); hbaline->ldapport = urldata->lud_port; @@ -1766,6 +1769,17 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline, else hbaline->ldaptls = false; } + else if (strcmp(name, "ldapscheme") == 0) + { + REQUIRE_AUTH_OPTION(uaLDAP, "ldapscheme", "ldap"); + if (strcmp(val, "ldap") != 0 && strcmp(val, "ldaps") != 0) + ereport(elevel, + (errcode(ERRCODE_CONFIG_FILE_ERROR), + errmsg("invalid ldapscheme value: \"%s\"", val), + errcontext("line %d of configuration file \"%s\"", + line_num, HbaFileName))); + hbaline->ldapscheme = pstrdup(val); + } else if (strcmp(name, "ldapserver") == 0) { REQUIRE_AUTH_OPTION(uaLDAP, "ldapserver", "ldap"); diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h index e711bee8bf..5f68f4c666 100644 --- a/src/include/libpq/hba.h +++ b/src/include/libpq/hba.h @@ -75,6 +75,7 @@ typedef struct HbaLine char *pamservice; bool pam_use_hostname; bool ldaptls; + char 
*ldapscheme; char *ldapserver; int ldapport; char *ldapbinddn; diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index 0aa6be4666..27b1368721 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -310,6 +310,9 @@ /* Define to 1 if you have the header file. */ #undef HAVE_LDAP_H +/* Define to 1 if you have the `ldap_initialize' function. */ +#undef HAVE_LDAP_INITIALIZE + /* Define to 1 if you have the `crypto' library (-lcrypto). */ #undef HAVE_LIBCRYPTO diff --git a/src/test/ldap/t/001_auth.pl b/src/test/ldap/t/001_auth.pl index 38760ece61..5508da459f 100644 --- a/src/test/ldap/t/001_auth.pl +++ b/src/test/ldap/t/001_auth.pl @@ -2,7 +2,7 @@ use warnings; use TestLib; use PostgresNode; -use Test::More tests => 15; +use Test::More tests => 19; my ($slapd, $ldap_bin_dir, $ldap_schema_dir); @@ -33,13 +33,16 @@ $ENV{PATH} = "$ldap_bin_dir:$ENV{PATH}" if $ldap_bin_dir; my $ldap_datadir = "${TestLib::tmp_check}/openldap-data"; +my $slapd_certs = "${TestLib::tmp_check}/slapd-certs"; my $slapd_conf = "${TestLib::tmp_check}/slapd.conf"; my $slapd_pidfile = "${TestLib::tmp_check}/slapd.pid"; my $slapd_logfile = "${TestLib::tmp_check}/slapd.log"; my $ldap_conf = "${TestLib::tmp_check}/ldap.conf"; my $ldap_server = 'localhost'; my $ldap_port = int(rand() * 16384) + 49152; +my $ldaps_port = $ldap_port + 1; my $ldap_url = "ldap://$ldap_server:$ldap_port"; +my $ldaps_url = "ldaps://$ldap_server:$ldaps_port"; my $ldap_basedn = 'dc=example,dc=net'; my $ldap_rootdn = 'cn=Manager,dc=example,dc=net'; my $ldap_rootpw = 'secret'; @@ -63,13 +66,27 @@ database ldif directory $ldap_datadir +TLSCACertificateFile $slapd_certs/ca.crt +TLSCertificateFile $slapd_certs/server.crt +TLSCertificateKeyFile $slapd_certs/server.key + suffix "dc=example,dc=net" rootdn "$ldap_rootdn" rootpw $ldap_rootpw}); +# don't bother to check the server's cert (though perhaps we should) +append_to_file($ldap_conf, +qq{TLS_REQCERT never +}); + mkdir $ldap_datadir or die; +mkdir $slapd_certs or die; + +system_or_bail "openssl", "req", "-new", "-nodes", "-keyout", "$slapd_certs/ca.key", "-x509", "-out", "$slapd_certs/ca.crt", "-subj", "/cn=CA"; +system_or_bail "openssl", "req", "-new", "-nodes", "-keyout", "$slapd_certs/server.key", "-out", "$slapd_certs/server.csr", "-subj", "/cn=server"; +system_or_bail "openssl", "x509", "-req", "-in", "$slapd_certs/server.csr", "-CA", "$slapd_certs/ca.crt", "-CAkey", "$slapd_certs/ca.key", "-CAcreateserial", "-out", "$slapd_certs/server.crt"; -system_or_bail $slapd, '-f', $slapd_conf, '-h', $ldap_url; +system_or_bail $slapd, '-f', $slapd_conf, '-h', "$ldap_url $ldaps_url"; END { @@ -81,6 +98,7 @@ END $ENV{'LDAPURI'} = $ldap_url; $ENV{'LDAPBINDDN'} = $ldap_rootdn; +$ENV{'LDAPCONF'} = $ldap_conf; note "loading LDAP data"; @@ -178,9 +196,44 @@ sub test_access note "diagnostic message"; +# note bad ldapprefix with a question mark that triggers a diagnostic message +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapprefix="?uid=" ldapsuffix=""}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 2, 'any attempt fails due to bad search pattern'); + +note "TLS"; + +# request StartTLS with ldaptls=1 +unlink($node->data_dir . 
'/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapbasedn="$ldap_basedn" ldapsearchfilter="(uid=\$username)" ldaptls=1}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'StartTLS'); + +# request LDAPS with ldapscheme=ldaps +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapscheme=ldaps ldapport=$ldaps_port ldapbasedn="$ldap_basedn" ldapsearchfilter="(uid=\$username)"}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'LDAPS'); + +# request LDAPS with ldapurl=ldaps://... +unlink($node->data_dir . '/pg_hba.conf'); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldaps_url/$ldap_basedn??sub?(uid=\$username)"}); +$node->reload; + +$ENV{"PGPASSWORD"} = 'secret1'; +test_access($node, 'test1', 0, 'LDAPS with URL'); + +# bad combination of LDAPS and StartTLS unlink($node->data_dir . '/pg_hba.conf'); -$node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapprefix="uid=" ldapsuffix=",dc=example,dc=net" ldaptls=1}); +$node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldaps_url/$ldap_basedn??sub?(uid=\$username)" ldaptls=1}); $node->reload; $ENV{"PGPASSWORD"} = 'secret1'; -test_access($node, 'test1', 2, 'any attempt fails due to unsupported TLS'); +test_access($node, 'test1', 2, 'bad combination of LDAPS and StartTLS'); From 3decd150a2d5a8f8d43010dd0c207746ba946303 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 3 Jan 2018 12:35:09 -0500 Subject: [PATCH 0765/1087] Teach eval_const_expressions() to handle some more cases. Add some infrastructure (mostly macros) to make it easier to write typical cases for constant-expression simplification. Add simplification processing for ArrayRef, RowExpr, and ScalarArrayOpExpr node types, which formerly went unsimplified even if all their inputs were constants. Also teach it to simplify FieldSelect from a composite constant. Make use of the new infrastructure to reduce the amount of code needed for the existing ArrayExpr and ArrayCoerceExpr cases. One existing test case changes output as a result of the fact that RowExpr can now be folded to a constant. All the new code is exercised by existing test cases according to gcov, so I feel no need to add additional tests. 
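The folding is easy to see in a plan; the rowtypes regression diff below captures exactly this. As a minimal SQL illustration (using the regression database's int8_tbl): explain (costs off) select * from int8_tbl i8 where i8 in (row(123,456)::int8_tbl, '(4567890123456789,123)'); -- previously the filter retained an unreduced RowExpr inside an ARRAY[]: -- Filter: (i8.* = ANY (ARRAY[ROW('123'::bigint, '456'::bigint)::int8_tbl, '(4567890123456789,123)'::int8_tbl])) -- with RowExpr and ScalarArrayOpExpr now const-folded, it collapses to one Const: -- Filter: (i8.* = ANY ('{"(123,456)","(4567890123456789,123)"}'::int8_tbl[]))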
Tom Lane, reviewed by Dmitry Dolgov Discussion: https://postgr.es/m/3be3b82c-e29c-b674-2163-bf47d98817b1@iki.fi --- src/backend/optimizer/util/clauses.c | 219 +++++++++++++++++-------- src/test/regress/expected/rowtypes.out | 6 +- 2 files changed, 150 insertions(+), 75 deletions(-) diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index bcdf7d624b..cf38b4eb5e 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -115,6 +115,9 @@ static List *find_nonnullable_vars_walker(Node *node, bool top_level); static bool is_strict_saop(ScalarArrayOpExpr *expr, bool falseOK); static Node *eval_const_expressions_mutator(Node *node, eval_const_expressions_context *context); +static bool contain_non_const_walker(Node *node, void *context); +static bool ece_function_is_safe(Oid funcid, + eval_const_expressions_context *context); static List *simplify_or_arguments(List *args, eval_const_expressions_context *context, bool *haveNull, bool *forceTrue); @@ -2502,6 +2505,37 @@ estimate_expression_value(PlannerInfo *root, Node *node) return eval_const_expressions_mutator(node, &context); } +/* + * The generic case in eval_const_expressions_mutator is to recurse using + * expression_tree_mutator, which will copy the given node unchanged but + * const-simplify its arguments (if any) as far as possible. If the node + * itself does immutable processing, and each of its arguments were reduced + * to a Const, we can then reduce it to a Const using evaluate_expr. (Some + * node types need more complicated logic; for example, a CASE expression + * might be reducible to a constant even if not all its subtrees are.) + */ +#define ece_generic_processing(node) \ + expression_tree_mutator((Node *) (node), eval_const_expressions_mutator, \ + (void *) context) + +/* + * Check whether all arguments of the given node were reduced to Consts. + * By going directly to expression_tree_walker, contain_non_const_walker + * is not applied to the node itself, only to its children. 
+ */ +#define ece_all_arguments_const(node) \ + (!expression_tree_walker((Node *) (node), contain_non_const_walker, NULL)) + +/* Generic macro for applying evaluate_expr */ +#define ece_evaluate_expr(node) \ + ((Node *) evaluate_expr((Expr *) (node), \ + exprType((Node *) (node)), \ + exprTypmod((Node *) (node)), \ + exprCollation((Node *) (node)))) + +/* + * Recursive guts of eval_const_expressions/estimate_expression_value + */ static Node * eval_const_expressions_mutator(Node *node, eval_const_expressions_context *context) @@ -2830,6 +2864,25 @@ eval_const_expressions_mutator(Node *node, newexpr->location = expr->location; return (Node *) newexpr; } + case T_ScalarArrayOpExpr: + { + ScalarArrayOpExpr *saop; + + /* Copy the node and const-simplify its arguments */ + saop = (ScalarArrayOpExpr *) ece_generic_processing(node); + + /* Make sure we know underlying function */ + set_sa_opfuncid(saop); + + /* + * If all arguments are Consts, and it's a safe function, we + * can fold to a constant + */ + if (ece_all_arguments_const(saop) && + ece_function_is_safe(saop->opfuncid, context)) + return ece_evaluate_expr(saop); + return (Node *) saop; + } case T_BoolExpr: { BoolExpr *expr = (BoolExpr *) node; @@ -3054,47 +3107,24 @@ eval_const_expressions_mutator(Node *node, } case T_ArrayCoerceExpr: { - ArrayCoerceExpr *expr = (ArrayCoerceExpr *) node; - Expr *arg; - Expr *elemexpr; - ArrayCoerceExpr *newexpr; - - /* - * Reduce constants in the ArrayCoerceExpr's argument and - * per-element expressions, then build a new ArrayCoerceExpr. - */ - arg = (Expr *) eval_const_expressions_mutator((Node *) expr->arg, - context); - elemexpr = (Expr *) eval_const_expressions_mutator((Node *) expr->elemexpr, - context); + ArrayCoerceExpr *ac; - newexpr = makeNode(ArrayCoerceExpr); - newexpr->arg = arg; - newexpr->elemexpr = elemexpr; - newexpr->resulttype = expr->resulttype; - newexpr->resulttypmod = expr->resulttypmod; - newexpr->resultcollid = expr->resultcollid; - newexpr->coerceformat = expr->coerceformat; - newexpr->location = expr->location; + /* Copy the node and const-simplify its arguments */ + ac = (ArrayCoerceExpr *) ece_generic_processing(node); /* - * If constant argument and per-element expression is + * If constant argument and the per-element expression is * immutable, we can simplify the whole thing to a constant. * Exception: although contain_mutable_functions considers * CoerceToDomain immutable for historical reasons, let's not * do so here; this ensures coercion to an array-over-domain * does not apply the domain's constraints until runtime. 
*/ - if (arg && IsA(arg, Const) && - elemexpr && !IsA(elemexpr, CoerceToDomain) && - !contain_mutable_functions((Node *) elemexpr)) - return (Node *) evaluate_expr((Expr *) newexpr, - newexpr->resulttype, - newexpr->resulttypmod, - newexpr->resultcollid); - - /* Else we must return the partially-simplified node */ - return (Node *) newexpr; + if (ac->arg && IsA(ac->arg, Const) && + ac->elemexpr && !IsA(ac->elemexpr, CoerceToDomain) && + !contain_mutable_functions((Node *) ac->elemexpr)) + return ece_evaluate_expr(ac); + return (Node *) ac; } case T_CollateExpr: { @@ -3286,41 +3316,22 @@ eval_const_expressions_mutator(Node *node, else return copyObject(node); } + case T_ArrayRef: case T_ArrayExpr: + case T_RowExpr: { - ArrayExpr *arrayexpr = (ArrayExpr *) node; - ArrayExpr *newarray; - bool all_const = true; - List *newelems; - ListCell *element; - - newelems = NIL; - foreach(element, arrayexpr->elements) - { - Node *e; - - e = eval_const_expressions_mutator((Node *) lfirst(element), - context); - if (!IsA(e, Const)) - all_const = false; - newelems = lappend(newelems, e); - } + /* + * Generic handling for node types whose own processing is + * known to be immutable, and for which we need no smarts + * beyond "simplify if all inputs are constants". + */ - newarray = makeNode(ArrayExpr); - newarray->array_typeid = arrayexpr->array_typeid; - newarray->array_collid = arrayexpr->array_collid; - newarray->element_typeid = arrayexpr->element_typeid; - newarray->elements = newelems; - newarray->multidims = arrayexpr->multidims; - newarray->location = arrayexpr->location; - - if (all_const) - return (Node *) evaluate_expr((Expr *) newarray, - newarray->array_typeid, - exprTypmod(node), - newarray->array_collid); - - return (Node *) newarray; + /* Copy the node and const-simplify its arguments */ + node = ece_generic_processing(node); + /* If all arguments are Consts, we can fold to a constant */ + if (ece_all_arguments_const(node)) + return ece_evaluate_expr(node); + return node; } case T_CoalesceExpr: { @@ -3397,7 +3408,8 @@ eval_const_expressions_mutator(Node *node, * simple Var. (This case won't be generated directly by the * parser, because ParseComplexProjection short-circuits it. * But it can arise while simplifying functions.) Also, we - * can optimize field selection from a RowExpr construct. + * can optimize field selection from a RowExpr construct, or + * of course from a constant. * * However, replacing a whole-row Var in this way has a * pitfall: if we've already built the rel targetlist for the @@ -3412,6 +3424,8 @@ eval_const_expressions_mutator(Node *node, * We must also check that the declared type of the field is * still the same as when the FieldSelect was created --- this * can change if someone did ALTER COLUMN TYPE on the rowtype. + * If it isn't, we skip the optimization; the case will + * probably fail at runtime, but that's not our problem here. 
*/ FieldSelect *fselect = (FieldSelect *) node; FieldSelect *newfselect; @@ -3462,6 +3476,17 @@ eval_const_expressions_mutator(Node *node, newfselect->resulttype = fselect->resulttype; newfselect->resulttypmod = fselect->resulttypmod; newfselect->resultcollid = fselect->resultcollid; + if (arg && IsA(arg, Const)) + { + Const *con = (Const *) arg; + + if (rowtype_field_matches(con->consttype, + newfselect->fieldnum, + newfselect->resulttype, + newfselect->resulttypmod, + newfselect->resultcollid)) + return ece_evaluate_expr(newfselect); + } return (Node *) newfselect; } case T_NullTest: @@ -3557,6 +3582,13 @@ eval_const_expressions_mutator(Node *node, } case T_BooleanTest: { + /* + * This case could be folded into the generic handling used + * for ArrayRef etc. But because the simplification logic is + * so trivial, applying evaluate_expr() to perform it would be + * a heavy overhead. BooleanTest is probably common enough to + * justify keeping this bespoke implementation. + */ BooleanTest *btest = (BooleanTest *) node; BooleanTest *newbtest; Node *arg; @@ -3630,14 +3662,57 @@ eval_const_expressions_mutator(Node *node, } /* - * For any node type not handled above, we recurse using - * expression_tree_mutator, which will copy the node unchanged but try to - * simplify its arguments (if any) using this routine. For example: we - * cannot eliminate an ArrayRef node, but we might be able to simplify - * constant expressions in its subscripts. + * For any node type not handled above, copy the node unchanged but + * const-simplify its subexpressions. This is the correct thing for node + * types whose behavior might change between planning and execution, such + * as CoerceToDomain. It's also a safe default for new node types not + * known to this routine. */ - return expression_tree_mutator(node, eval_const_expressions_mutator, - (void *) context); + return ece_generic_processing(node); +} + +/* + * Subroutine for eval_const_expressions: check for non-Const nodes. + * + * We can abort recursion immediately on finding a non-Const node. This is + * critical for performance, else eval_const_expressions_mutator would take + * O(N^2) time on non-simplifiable trees. However, we do need to descend + * into List nodes since expression_tree_walker sometimes invokes the walker + * function directly on List subtrees. + */ +static bool +contain_non_const_walker(Node *node, void *context) +{ + if (node == NULL) + return false; + if (IsA(node, Const)) + return false; + if (IsA(node, List)) + return expression_tree_walker(node, contain_non_const_walker, context); + /* Otherwise, abort the tree traversal and return true */ + return true; +} + +/* + * Subroutine for eval_const_expressions: check if a function is OK to evaluate + */ +static bool +ece_function_is_safe(Oid funcid, eval_const_expressions_context *context) +{ + char provolatile = func_volatile(funcid); + + /* + * Ordinarily we are only allowed to simplify immutable functions. But for + * purposes of estimation, we consider it okay to simplify functions that + * are merely stable; the risk that the result might change from planning + * time to execution time is worth taking in preference to not being able + * to estimate the value at all. 
+ */ + if (provolatile == PROVOLATILE_IMMUTABLE) + return true; + if (context->estimate && provolatile == PROVOLATILE_STABLE) + return true; + return false; } /* diff --git a/src/test/regress/expected/rowtypes.out b/src/test/regress/expected/rowtypes.out index 43b36f6566..a4bac8e3b5 100644 --- a/src/test/regress/expected/rowtypes.out +++ b/src/test/regress/expected/rowtypes.out @@ -307,10 +307,10 @@ ERROR: cannot compare dissimilar column types bigint and integer at record colu explain (costs off) select * from int8_tbl i8 where i8 in (row(123,456)::int8_tbl, '(4567890123456789,123)'); - QUERY PLAN ------------------------------------------------------------------------------------------------------------------ + QUERY PLAN +------------------------------------------------------------------------------- Seq Scan on int8_tbl i8 - Filter: (i8.* = ANY (ARRAY[ROW('123'::bigint, '456'::bigint)::int8_tbl, '(4567890123456789,123)'::int8_tbl])) + Filter: (i8.* = ANY ('{"(123,456)","(4567890123456789,123)"}'::int8_tbl[])) (2 rows) select * from int8_tbl i8 From 6fcde24063047c1195d023dfa08309302987cdcf Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 3 Jan 2018 12:53:49 -0500 Subject: [PATCH 0766/1087] Fix some minor errors in new PHJ code. Correct ExecParallelHashTuplePrealloc's estimate of whether the space_allowed limit is exceeded. Be more consistent about tuples that are exactly HASH_CHUNK_THRESHOLD in size (they're "small", not "large"). Neither of these things explains the current buildfarm unhappiness, but they're still bugs. Thomas Munro, per gripe by me Discussion: https://postgr.es/m/CAEepm=34PDuR69kfYVhmZPgMdy8pSA-MYbpesEN1SR+2oj3Y+w@mail.gmail.com --- src/backend/executor/nodeHash.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index 52f5c0c26e..a9149ef81c 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -2740,7 +2740,7 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size, */ chunk = hashtable->current_chunk; if (chunk != NULL && - size < HASH_CHUNK_THRESHOLD && + size <= HASH_CHUNK_THRESHOLD && chunk->maxlen - chunk->used >= size) { @@ -3260,6 +3260,7 @@ ExecParallelHashTuplePrealloc(HashJoinTable hashtable, int batchno, size_t size) Assert(batchno > 0); Assert(batchno < hashtable->nbatch); + Assert(size == MAXALIGN(size)); LWLockAcquire(&pstate->lock, LW_EXCLUSIVE); @@ -3280,7 +3281,8 @@ ExecParallelHashTuplePrealloc(HashJoinTable hashtable, int batchno, size_t size) if (pstate->growth != PHJ_GROWTH_DISABLED && batch->at_least_one_chunk && - (batch->shared->estimated_size + size > pstate->space_allowed)) + (batch->shared->estimated_size + want + HASH_CHUNK_HEADER_SIZE + > pstate->space_allowed)) { /* * We have determined that this batch would exceed the space budget if From 3c27944fb2141d8bd3942cb57e872174c6e1db97 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Wed, 3 Jan 2018 17:26:20 -0300 Subject: [PATCH 0767/1087] Make XactLockTableWait work for transactions that are not yet self-locked MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit XactLockTableWait assumed that its xid argument had already added itself to the lock table. That assumption led to another assumption: that if locking the xid succeeded but the xid is reported as still in progress, then the input xid must have been a subtransaction.
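A minimal SQL sketch of the ordinary tuple-locking sequence behind those assumptions, using two hypothetical psql sessions and a hypothetical table t: -- session 1: the first write assigns an xid, which immediately takes -- an exclusive lock on itself in the lock table BEGIN; UPDATE t SET val = val + 1 WHERE id = 1; -- session 2: sees session 1's xid as the tuple's in-progress xmax and -- blocks in XactLockTableWait() on that xid until it commits or aborts UPDATE t SET val = val + 1 WHERE id = 1;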
These assumptions hold true for the original uses of this code in
locking related to on-disk tuples, but they break down in logical
replication slot snapshot building -- in particular, when a logged
standby snapshot contains an xid that's already in ProcArray but not yet
in the lock table.  This leads to assertion failures that can be
reproduced all the way back to 9.4, when logical decoding was
introduced.

To fix, change SubTransGetParent to SubTransGetTopmostTransaction, which
has a slightly different API: it returns the argument Xid if there is no
parent, and it goes all the way to the top instead of moving up the
levels one by one.  Also, to avoid busy-waiting, add a 1ms sleep to give
the other process time to register itself in the lock table.

For consistency, change ConditionalXactLockTableWait the same way.

Author: Petr Jelínek
Discussion: https://postgr.es/m/1B3E32D8-FCF4-40B4-AEF9-5C0E3AC57969@postgrespro.ru
Reported-by: Konstantin Knizhnik
Diagnosed-by: Stas Kelvich, Petr Jelínek
Reviewed-by: Andres Freund, Robert Haas
---
 src/backend/storage/lmgr/lmgr.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/src/backend/storage/lmgr/lmgr.c b/src/backend/storage/lmgr/lmgr.c
index 3754283d2b..7b2dcb6c60 100644
--- a/src/backend/storage/lmgr/lmgr.c
+++ b/src/backend/storage/lmgr/lmgr.c
@@ -557,6 +557,7 @@ XactLockTableWait(TransactionId xid, Relation rel, ItemPointer ctid,
 	LOCKTAG		tag;
 	XactLockTableWaitInfo info;
 	ErrorContextCallback callback;
+	bool		first = true;
 
 	/*
 	 * If an operation is specified, set up our verbose error context
@@ -590,7 +591,26 @@ XactLockTableWait(TransactionId xid, Relation rel, ItemPointer ctid,
 
 		if (!TransactionIdIsInProgress(xid))
 			break;
-		xid = SubTransGetParent(xid);
+
+		/*
+		 * If the Xid belonged to a subtransaction, then the lock would have
+		 * gone away as soon as it was finished; for correct tuple visibility,
+		 * the right action is to wait on its parent transaction to go away.
+		 * But instead of going levels up one by one, we can just wait for the
+		 * topmost transaction to finish with the same end result, which also
+		 * incurs less locktable traffic.
+		 *
+		 * Some uses of this function don't involve tuple visibility -- such
+		 * as when building snapshots for logical decoding.  It is possible to
+		 * see a transaction in ProcArray before it registers itself in the
+		 * locktable.  The topmost transaction in that case is the same xid,
+		 * so we try again after a short sleep.  (Don't sleep the first time
+		 * through, to avoid slowing down the normal case.)
+		 */
+		if (!first)
+			pg_usleep(1000L);
+		first = false;
+		xid = SubTransGetTopmostTransaction(xid);
 	}
 
 	if (oper != XLTW_None)
@@ -607,6 +627,7 @@ bool
 ConditionalXactLockTableWait(TransactionId xid)
 {
 	LOCKTAG		tag;
+	bool		first = true;
 
 	for (;;)
 	{
@@ -622,7 +643,12 @@ ConditionalXactLockTableWait(TransactionId xid)
 
 		if (!TransactionIdIsInProgress(xid))
 			break;
-		xid = SubTransGetParent(xid);
+
+		/* See XactLockTableWait about this case */
+		if (!first)
+			pg_usleep(1000L);
+		first = false;
+		xid = SubTransGetTopmostTransaction(xid);
 	}
 
 	return true;
From 99d5a3ffb9fe61a5a8b01a4759d93c627f018923 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan
Date: Wed, 3 Jan 2018 15:26:39 -0500
Subject: [PATCH 0768/1087] Fix use of config-specific libraries for Windows
 OpenSSL

Commit 614350a3 allowed for different builds of OpenSSL libraries on
Windows, but ignored the fact that the alternative builds don't have
config-specific libraries.
This patch fixes the Solution file to ask for the correct libraries, per
offline discussions with Leonardo Cecchi and Marco Nenciarini.

Backpatch to all live branches.
---
 src/tools/msvc/Solution.pm | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index 6dcdd29f57..d3b50bd4ef 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -535,10 +535,12 @@ sub AddProject
 		}
 		else
 		{
+			# We don't expect the config-specific library to be here,
+			# so don't ask for it in last parameter
 			$proj->AddLibrary(
-				$self->{options}->{openssl} . '\lib\ssleay32.lib', 1);
+				$self->{options}->{openssl} . '\lib\ssleay32.lib', 0);
 			$proj->AddLibrary(
-				$self->{options}->{openssl} . '\lib\libeay32.lib', 1);
+				$self->{options}->{openssl} . '\lib\libeay32.lib', 0);
 		}
 	}
 	if ($self->{options}->{nls})
From 3e68686e2c55799234ecd020bd1621f913d65475 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Wed, 3 Jan 2018 12:00:11 -0800
Subject: [PATCH 0769/1087] Rename pg_rewind's copy_file_range() to avoid
 conflict with new linux syscall.

Upcoming versions of glibc will contain copy_file_range(2), a wrapper
around a new linux syscall for in-kernel copying of data ranges.  This
conflicts with pg_rewind's function of the same name.

Therefore rename pg_rewind's version.  As our version isn't a generic
copying facility, we chose a rewind-specific function name.

Per buildfarm animal caiman and subsequent discussion with Tom Lane.

Author: Andres Freund
Discussion:
    https://postgr.es/m/20180103033425.w7jkljth3e26sduc@alap3.anarazel.de
    https://postgr.es/m/31122.1514951044@sss.pgh.pa.us
Backpatch: 9.5-, where pg_rewind was introduced
---
 src/bin/pg_rewind/copy_fetch.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/src/bin/pg_rewind/copy_fetch.c b/src/bin/pg_rewind/copy_fetch.c
index 380fb3b88e..04db409675 100644
--- a/src/bin/pg_rewind/copy_fetch.c
+++ b/src/bin/pg_rewind/copy_fetch.c
@@ -156,7 +156,7 @@ recurse_dir(const char *datadir, const char *parentpath,
  * If 'trunc' is true, any existing file with the same name is truncated.
  */
 static void
-copy_file_range(const char *path, off_t begin, off_t end, bool trunc)
+rewind_copy_file_range(const char *path, off_t begin, off_t end, bool trunc)
 {
 	char		buf[BLCKSZ];
 	char		srcpath[MAXPGPATH];
@@ -222,7 +222,7 @@ copy_executeFileMap(filemap_t *map)
 				break;
 
 			case FILE_ACTION_COPY:
-				copy_file_range(entry->path, 0, entry->newsize, true);
+				rewind_copy_file_range(entry->path, 0, entry->newsize, true);
 				break;
 
 			case FILE_ACTION_TRUNCATE:
@@ -230,7 +230,8 @@ copy_executeFileMap(filemap_t *map)
 				break;
 
 			case FILE_ACTION_COPY_TAIL:
-				copy_file_range(entry->path, entry->oldsize, entry->newsize, false);
+				rewind_copy_file_range(entry->path, entry->oldsize,
+									   entry->newsize, false);
 				break;
 
 			case FILE_ACTION_CREATE:
@@ -257,7 +258,7 @@ execute_pagemap(datapagemap_t *pagemap, const char *path)
 	while (datapagemap_next(iter, &blkno))
 	{
 		offset = blkno * BLCKSZ;
-		copy_file_range(path, offset, offset + BLCKSZ, false);
+		rewind_copy_file_range(path, offset, offset + BLCKSZ, false);
 		/* Ok, this block has now been copied from new data dir to old */
 	}
 	pg_free(iter);
From 6c8be5962aea2eee8c366c01bbbcf5bf5ddf5294 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera
Date: Wed, 3 Jan 2018 11:16:34 -0300
Subject: [PATCH 0770/1087] Revert "Fix isolation test to be less
 timing-dependent"

This reverts commit 2268e6afd596.
It turned out that inconsistency in the report is still possible, so go back to the simpler formulation of the test and instead add an alternate expected output. Discussion: https://postgr.es/m/20180103193728.ysqpcp2xjnqpiep7@alvherre.pgsql --- src/test/isolation/expected/multiple-cic.out | 15 +++++--------- .../isolation/expected/multiple-cic_1.out | 20 +++++++++++++++++++ src/test/isolation/specs/multiple-cic.spec | 12 ++++------- 3 files changed, 29 insertions(+), 18 deletions(-) create mode 100644 src/test/isolation/expected/multiple-cic_1.out diff --git a/src/test/isolation/expected/multiple-cic.out b/src/test/isolation/expected/multiple-cic.out index 0b470e7d1d..2bf8fe365e 100644 --- a/src/test/isolation/expected/multiple-cic.out +++ b/src/test/isolation/expected/multiple-cic.out @@ -1,9 +1,6 @@ -Parsed test spec with 3 sessions +Parsed test spec with 2 sessions -starting permutation: s2l s1i s2i s3u -pg_advisory_lock - - +starting permutation: s2l s1i s2i step s2l: SELECT pg_advisory_lock(281457); pg_advisory_lock @@ -14,11 +11,9 @@ step s1i: step s2i: CREATE INDEX CONCURRENTLY mcic_two_pkey ON mcic_two (id) - WHERE unlck() AND lck_shr(572814); - -step s3u: SELECT unlck(); + WHERE unlck(); + +step s1i: <... completed> unlck t -step s1i: <... completed> -step s2i: <... completed> diff --git a/src/test/isolation/expected/multiple-cic_1.out b/src/test/isolation/expected/multiple-cic_1.out new file mode 100644 index 0000000000..e41e04a480 --- /dev/null +++ b/src/test/isolation/expected/multiple-cic_1.out @@ -0,0 +1,20 @@ +Parsed test spec with 2 sessions + +starting permutation: s2l s1i s2i +step s2l: SELECT pg_advisory_lock(281457); +pg_advisory_lock + + +step s1i: + CREATE INDEX CONCURRENTLY mcic_one_pkey ON mcic_one (id) + WHERE lck_shr(281457); + +step s2i: + CREATE INDEX CONCURRENTLY mcic_two_pkey ON mcic_two (id) + WHERE unlck(); + +step s1i: <... completed> +step s2i: <... completed> +unlck + +t diff --git a/src/test/isolation/specs/multiple-cic.spec b/src/test/isolation/specs/multiple-cic.spec index fbec67ee25..3199667be2 100644 --- a/src/test/isolation/specs/multiple-cic.spec +++ b/src/test/isolation/specs/multiple-cic.spec @@ -26,19 +26,15 @@ session "s1" step "s1i" { CREATE INDEX CONCURRENTLY mcic_one_pkey ON mcic_one (id) WHERE lck_shr(281457); -} -step "s1u" { SELECT unlck(); } + } +teardown { SELECT unlck(); } session "s2" step "s2l" { SELECT pg_advisory_lock(281457); } step "s2i" { CREATE INDEX CONCURRENTLY mcic_two_pkey ON mcic_two (id) - WHERE unlck() AND lck_shr(572814); + WHERE unlck(); } -session "s3" -setup { SELECT pg_advisory_lock(572814); } -step "s3u" { SELECT unlck(); } - -permutation "s2l" "s1i" "s2i" "s3u" +permutation "s2l" "s1i" "s2i" From bab2969867fbba6a6d12730f36a20d13542aea5a Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Wed, 3 Jan 2018 19:12:06 -0300 Subject: [PATCH 0771/1087] Fix typo MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Author: Dagfinn Ilmari Mannsåker Discussion: https://postgr.es/m/d8jefpk4jtd.fsf@dalvik.ping.uio.no --- src/backend/utils/cache/lsyscache.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c index 93cdf13a6e..e8aa179347 100644 --- a/src/backend/utils/cache/lsyscache.c +++ b/src/backend/utils/cache/lsyscache.c @@ -958,7 +958,7 @@ get_atttypetypmodcoll(Oid relid, AttrNumber attnum, * get_collation_name * Returns the name of a given pg_collation entry. 
* - * Returns a palloc'd copy of the string, or NULL if no such constraint. + * Returns a palloc'd copy of the string, or NULL if no such collation. * * NOTE: since collation name is not unique, be wary of code that uses this * for anything except preparing error messages. From 47c6772eb7222dbfa200db4bbeba8002b96b7976 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 3 Jan 2018 17:53:06 -0500 Subject: [PATCH 0772/1087] Clean up tupdesc.c for recent changes. TupleDescCopy needs to have the same effects as CreateTupleDescCopy in that, since it doesn't copy constraints, it should clear the per-attribute fields associated with them. Oversight in commit cc5f81366. Since TupleDescCopy has already established the presumption that it can just flat-copy the entire attribute array in one go, propagate that approach into CreateTupleDescCopy and CreateTupleDescCopyConstr. (I'm suspicious that this would lead to valgrind complaints if we had any trailing padding in the struct, but we do not, and anyway fixing that seems like a job for a separate commit.) Add some better comments. Thomas Munro, reviewed by Vik Fearing, some additional hacking by me Discussion: https://postgr.es/m/CAEepm=0NvOGZ8B6GbQyQe2C_c2m3LKJ9w=8OMBaYRLgZ_Gw6Nw@mail.gmail.com --- src/backend/access/common/tupdesc.c | 52 +++++++++++++++++++++++++---- 1 file changed, 45 insertions(+), 7 deletions(-) diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c index f07129ac53..f1f44230cd 100644 --- a/src/backend/access/common/tupdesc.c +++ b/src/backend/access/common/tupdesc.c @@ -52,6 +52,14 @@ CreateTemplateTupleDesc(int natts, bool hasoid) /* * Allocate enough memory for the tuple descriptor, including the * attribute rows. + * + * Note: the attribute array stride is sizeof(FormData_pg_attribute), + * since we declare the array elements as FormData_pg_attribute for + * notational convenience. However, we only guarantee that the first + * ATTRIBUTE_FIXED_PART_SIZE bytes of each entry are valid; most code that + * copies tupdesc entries around copies just that much. In principle that + * could be less due to trailing padding, although with the current + * definition of pg_attribute there probably isn't any padding. */ desc = (TupleDesc) palloc(offsetof(struct tupleDesc, attrs) + natts * sizeof(FormData_pg_attribute)); @@ -106,16 +114,25 @@ CreateTupleDescCopy(TupleDesc tupdesc) desc = CreateTemplateTupleDesc(tupdesc->natts, tupdesc->tdhasoid); + /* Flat-copy the attribute array */ + memcpy(TupleDescAttr(desc, 0), + TupleDescAttr(tupdesc, 0), + desc->natts * sizeof(FormData_pg_attribute)); + + /* + * Since we're not copying constraints and defaults, clear fields + * associated with them. 
+ */ for (i = 0; i < desc->natts; i++) { Form_pg_attribute att = TupleDescAttr(desc, i); - memcpy(att, &tupdesc->attrs[i], ATTRIBUTE_FIXED_PART_SIZE); att->attnotnull = false; att->atthasdef = false; att->attidentity = '\0'; } + /* We can copy the tuple type identification, too */ desc->tdtypeid = tupdesc->tdtypeid; desc->tdtypmod = tupdesc->tdtypmod; @@ -136,13 +153,12 @@ CreateTupleDescCopyConstr(TupleDesc tupdesc) desc = CreateTemplateTupleDesc(tupdesc->natts, tupdesc->tdhasoid); - for (i = 0; i < desc->natts; i++) - { - memcpy(TupleDescAttr(desc, i), - TupleDescAttr(tupdesc, i), - ATTRIBUTE_FIXED_PART_SIZE); - } + /* Flat-copy the attribute array */ + memcpy(TupleDescAttr(desc, 0), + TupleDescAttr(tupdesc, 0), + desc->natts * sizeof(FormData_pg_attribute)); + /* Copy the TupleConstr data structure, if any */ if (constr) { TupleConstr *cpy = (TupleConstr *) palloc0(sizeof(TupleConstr)); @@ -178,6 +194,7 @@ CreateTupleDescCopyConstr(TupleDesc tupdesc) desc->constr = cpy; } + /* We can copy the tuple type identification, too */ desc->tdtypeid = tupdesc->tdtypeid; desc->tdtypmod = tupdesc->tdtypmod; @@ -195,8 +212,29 @@ CreateTupleDescCopyConstr(TupleDesc tupdesc) void TupleDescCopy(TupleDesc dst, TupleDesc src) { + int i; + + /* Flat-copy the header and attribute array */ memcpy(dst, src, TupleDescSize(src)); + + /* + * Since we're not copying constraints and defaults, clear fields + * associated with them. + */ + for (i = 0; i < dst->natts; i++) + { + Form_pg_attribute att = TupleDescAttr(dst, i); + + att->attnotnull = false; + att->atthasdef = false; + att->attidentity = '\0'; + } dst->constr = NULL; + + /* + * Also, assume the destination is not to be ref-counted. (Copying the + * source's refcount would be wrong in any case.) + */ dst->tdrefcount = -1; } From 934c7986f4a0a6a3b606301d84b784a27c0c324b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 4 Jan 2018 01:06:58 -0500 Subject: [PATCH 0773/1087] Tweak parallel hash join test case in hopes of improving stability. This seems to make things better on gaur, let's see what the rest of the buildfarm thinks. 
Thomas Munro Discussion: https://postgr.es/m/CAEepm=1uuT8iJxMEsR=jL+3zEi87DB2v0+0H9o_rUXXCZPZT3A@mail.gmail.com --- src/test/regress/expected/join.out | 2 +- src/test/regress/sql/join.sql | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index a7cfdf1f44..02e7d56e55 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -6073,7 +6073,7 @@ rollback to settings; -- parallel with parallel-aware hash join savepoint settings; set local max_parallel_workers_per_gather = 2; -set local work_mem = '128kB'; +set local work_mem = '192kB'; set local enable_parallel_hash = on; explain (costs off) select count(*) from simple r join simple s using (id); diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index a6a452f960..dd62c38c15 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -2116,7 +2116,7 @@ rollback to settings; -- parallel with parallel-aware hash join savepoint settings; set local max_parallel_workers_per_gather = 2; -set local work_mem = '128kB'; +set local work_mem = '192kB'; set local enable_parallel_hash = on; explain (costs off) select count(*) from simple r join simple s using (id); From c759395617765c5bc21db149cf8c3df52f41ccff Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 4 Jan 2018 07:56:09 -0500 Subject: [PATCH 0774/1087] Code review for Parallel Append. - Remove unnecessary #include mistakenly added in execnodes.h. - Fix mistake in comment in choose_next_subplan_for_leader. - Adjust row estimates in cost_append for a possibly-different parallel divisor. - Clamp row estimates in cost_append after operations that may not produce integers. Amit Kapila, with cosmetic adjustments by me. Discussion: http://postgr.es/m/CAA4eK1+qcbeai3coPpRW=GFCzFeLUsuY4T-AKHqMjxpEGZBPQg@mail.gmail.com --- src/backend/executor/nodeAppend.c | 7 +++---- src/backend/optimizer/path/costsize.c | 16 ++++++++++++---- src/include/nodes/execnodes.h | 1 - 3 files changed, 15 insertions(+), 9 deletions(-) diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c index 4245d8afaf..64a17fb032 100644 --- a/src/backend/executor/nodeAppend.c +++ b/src/backend/executor/nodeAppend.c @@ -446,10 +446,9 @@ choose_next_subplan_for_leader(AppendState *node) * * We start from the first plan and advance through the list; * when we get back to the end, we loop back to the first - * nonpartial plan. This assigns the non-partial plans first - * in order of descending cost and then spreads out the - * workers as evenly as possible across the remaining partial - * plans. + * partial plan. This assigns the non-partial plans first in + * order of descending cost and then spreads out the workers + * as evenly as possible across the remaining partial plans. * ---------------------------------------------------------------- */ static bool diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 7903b2cb16..8679b14b29 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -1883,18 +1883,26 @@ cost_append(AppendPath *apath) subpath->startup_cost); /* - * Apply parallel divisor to non-partial subpaths. Also add the - * cost of partial paths to the total cost, but ignore non-partial - * paths for now. + * Apply parallel divisor to subpaths. 
Scale the number of rows + * for each partial subpath based on the ratio of the parallel + * divisor originally used for the subpath to the one we adopted. + * Also add the cost of partial paths to the total cost, but + * ignore non-partial paths for now. */ if (i < apath->first_partial_path) apath->path.rows += subpath->rows / parallel_divisor; else { - apath->path.rows += subpath->rows; + double subpath_parallel_divisor; + + subpath_parallel_divisor = get_parallel_divisor(subpath); + apath->path.rows += subpath->rows * (subpath_parallel_divisor / + parallel_divisor); apath->path.total_cost += subpath->total_cost; } + apath->path.rows = clamp_row_est(apath->path.rows); + i++; } diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index b121e16688..3ad58cdfe7 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -21,7 +21,6 @@ #include "lib/pairingheap.h" #include "nodes/params.h" #include "nodes/plannodes.h" -#include "storage/spin.h" #include "utils/hsearch.h" #include "utils/queryenvironment.h" #include "utils/reltrigger.h" From 3ad2afc2e98fc85d5cf9529d84265b70acc0b13d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 4 Jan 2018 10:34:41 -0500 Subject: [PATCH 0775/1087] Define LDAPS_PORT if it's missing and disable implicit LDAPS on Windows Some versions of Windows don't define LDAPS_PORT. Also, Windows' ldap_sslinit() is documented to use LDAPS even if you said secure=0 when the port number happens to be 636 or 3269. Let's avoid using the port number to imply that you want LDAPS, so that connection strings have the same meaning on Windows and Unix. Author: Thomas Munro Discussion: https://postgr.es/m/CAEepm%3D23B7GV4AUz3MYH1TKpTv030VHxD2Sn%2BLYWDv8d-qWxww%40mail.gmail.com --- src/backend/libpq/auth.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 3560edc33a..f327f7bb1b 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -2363,9 +2363,10 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) if (scheme == NULL) scheme = "ldap"; #ifdef WIN32 - *ldap = ldap_sslinit(port->hba->ldapserver, - port->hba->ldapport, - strcmp(scheme, "ldaps") == 0); + if (strcmp(scheme, "ldaps") == 0) + *ldap = ldap_sslinit(port->hba->ldapserver, port->hba->ldapport, 1); + else + *ldap = ldap_init(port->hba->ldapserver, port->hba->ldapport); if (!*ldap) { ereport(LOG, @@ -2489,6 +2490,11 @@ InitializeLDAPConnection(Port *port, LDAP **ldap) #define LDAP_NO_ATTRS "1.1" #endif +/* Not all LDAP implementations define this. */ +#ifndef LDAPS_PORT +#define LDAPS_PORT 636 +#endif + /* * Return a newly allocated C string copied from "pattern" with all * occurrences of the placeholder "$username" replaced with "user_name". From f3049a603a7950f313b33ab214f11563c66dc069 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 4 Jan 2018 13:53:09 -0500 Subject: [PATCH 0776/1087] Refactor channel binding code to fetch cbind_data only when necessary As things stand now, channel binding data is fetched from OpenSSL and saved into the SCRAM exchange context for any SSL connection attempted for a SCRAM authentication, resulting in data fetched but not used if no channel binding is used or if a different channel binding type is used than what the data is here for. Refactor the code in such a way that binding data is fetched from the SSL stack only when a specific channel binding is used for both the frontend and the backend. 
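For reference, the eager-fetch shape being removed looked like this on the
backend side (condensed from the old CheckSCRAMAuth code shown in the diff
below; error handling elided):

    char       *tls_finished = NULL;
    size_t      tls_finished_len = 0;

#ifdef USE_SSL
    /* fetched for every SSL connection, whether or not it will be used */
    if (port->ssl_in_use)
        tls_finished = be_tls_get_peer_finished(port, &tls_finished_len);
#endif
    scram_opaq = pg_be_scram_init(port->user_name, shadow_pass,
                                  port->ssl_in_use,
                                  tls_finished, tls_finished_len);

The refactored code avoids this by deferring the fetch, as described next.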
In order to achieve that, save the libpq connection context directly in the SCRAM exchange state, and add a dependency to SSL in the low-level SCRAM routines. This makes the interface in charge of initializing the SCRAM context cleaner as all its data comes from either PGconn* (for frontend) or Port* (for the backend). Author: Michael Paquier --- src/backend/libpq/auth-scram.c | 33 +++--- src/backend/libpq/auth.c | 19 +--- src/include/libpq/scram.h | 6 +- src/interfaces/libpq/fe-auth-scram.c | 159 +++++++++++++-------------- src/interfaces/libpq/fe-auth.c | 27 +---- src/interfaces/libpq/fe-auth.h | 10 +- 6 files changed, 102 insertions(+), 152 deletions(-) diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c index 7068ee5b25..1b07eaebfa 100644 --- a/src/backend/libpq/auth-scram.c +++ b/src/backend/libpq/auth-scram.c @@ -110,10 +110,8 @@ typedef struct const char *username; /* username from startup packet */ + Port *port; char cbind_flag; - bool ssl_in_use; - const char *tls_finished_message; - size_t tls_finished_len; char *channel_binding_type; int iterations; @@ -172,21 +170,15 @@ static char *scram_mock_salt(const char *username); * it will fail, as if an incorrect password was given. */ void * -pg_be_scram_init(const char *username, - const char *shadow_pass, - bool ssl_in_use, - const char *tls_finished_message, - size_t tls_finished_len) +pg_be_scram_init(Port *port, + const char *shadow_pass) { scram_state *state; bool got_verifier; state = (scram_state *) palloc0(sizeof(scram_state)); + state->port = port; state->state = SCRAM_AUTH_INIT; - state->username = username; - state->ssl_in_use = ssl_in_use; - state->tls_finished_message = tls_finished_message; - state->tls_finished_len = tls_finished_len; state->channel_binding_type = NULL; /* @@ -209,7 +201,7 @@ pg_be_scram_init(const char *username, */ ereport(LOG, (errmsg("invalid SCRAM verifier for user \"%s\"", - username))); + state->port->user_name))); got_verifier = false; } } @@ -220,7 +212,7 @@ pg_be_scram_init(const char *username, * authentication with an MD5 hash.) */ state->logdetail = psprintf(_("User \"%s\" does not have a valid SCRAM verifier."), - state->username); + state->port->user_name); got_verifier = false; } } @@ -242,8 +234,8 @@ pg_be_scram_init(const char *username, */ if (!got_verifier) { - mock_scram_verifier(username, &state->iterations, &state->salt, - state->StoredKey, state->ServerKey); + mock_scram_verifier(state->port->user_name, &state->iterations, + &state->salt, state->StoredKey, state->ServerKey); state->doomed = true; } @@ -815,7 +807,7 @@ read_client_first_message(scram_state *state, char *input) * it supports channel binding, which in this implementation is * the case if a connection is using SSL. */ - if (state->ssl_in_use) + if (state->port->ssl_in_use) ereport(ERROR, (errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION), errmsg("SCRAM channel binding negotiation error"), @@ -839,7 +831,7 @@ read_client_first_message(scram_state *state, char *input) { char *channel_binding_type; - if (!state->ssl_in_use) + if (!state->port->ssl_in_use) { /* * Without SSL, we don't support channel binding. 
@@ -1120,8 +1112,9 @@ read_client_final_message(scram_state *state, char *input) */ if (strcmp(state->channel_binding_type, SCRAM_CHANNEL_BINDING_TLS_UNIQUE) == 0) { - cbind_data = state->tls_finished_message; - cbind_data_len = state->tls_finished_len; +#ifdef USE_SSL + cbind_data = be_tls_get_peer_finished(state->port, &cbind_data_len); +#endif } else { diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index f327f7bb1b..746d7cbb8a 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -873,8 +873,6 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) int inputlen; int result; bool initial; - char *tls_finished = NULL; - size_t tls_finished_len = 0; /* * SASL auth is not supported for protocol versions before 3, because it @@ -915,17 +913,6 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) sendAuthRequest(port, AUTH_REQ_SASL, sasl_mechs, p - sasl_mechs + 1); pfree(sasl_mechs); -#ifdef USE_SSL - - /* - * Get data for channel binding. - */ - if (port->ssl_in_use) - { - tls_finished = be_tls_get_peer_finished(port, &tls_finished_len); - } -#endif - /* * Initialize the status tracker for message exchanges. * @@ -937,11 +924,7 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) * This is because we don't want to reveal to an attacker what usernames * are valid, nor which users have a valid password. */ - scram_opaq = pg_be_scram_init(port->user_name, - shadow_pass, - port->ssl_in_use, - tls_finished, - tls_finished_len); + scram_opaq = pg_be_scram_init(port, shadow_pass); /* * Loop through SASL message exchange. This exchange can consist of diff --git a/src/include/libpq/scram.h b/src/include/libpq/scram.h index f43ce992c1..91872fcd08 100644 --- a/src/include/libpq/scram.h +++ b/src/include/libpq/scram.h @@ -13,15 +13,15 @@ #ifndef PG_SCRAM_H #define PG_SCRAM_H +#include "libpq/libpq-be.h" + /* Status codes for message exchange */ #define SASL_EXCHANGE_CONTINUE 0 #define SASL_EXCHANGE_SUCCESS 1 #define SASL_EXCHANGE_FAILURE 2 /* Routines dedicated to authentication */ -extern void *pg_be_scram_init(const char *username, const char *shadow_pass, - bool ssl_in_use, const char *tls_finished_message, - size_t tls_finished_len); +extern void *pg_be_scram_init(Port *port, const char *shadow_pass); extern int pg_be_scram_exchange(void *opaq, char *input, int inputlen, char **output, int *outputlen, char **logdetail); diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index 778ff500ea..06c9cb2614 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -42,13 +42,9 @@ typedef struct fe_scram_state_enum state; /* These are supplied by the user */ - const char *username; + PGconn *conn; char *password; - bool ssl_in_use; - char *tls_finished_message; - size_t tls_finished_len; char *sasl_mechanism; - const char *channel_binding_type; /* We construct these */ uint8 SaltedPassword[SCRAM_KEY_LEN]; @@ -68,14 +64,10 @@ typedef struct char ServerSignature[SCRAM_KEY_LEN]; } fe_scram_state; -static bool read_server_first_message(fe_scram_state *state, char *input, - PQExpBuffer errormessage); -static bool read_server_final_message(fe_scram_state *state, char *input, - PQExpBuffer errormessage); -static char *build_client_first_message(fe_scram_state *state, - PQExpBuffer errormessage); -static char *build_client_final_message(fe_scram_state *state, - PQExpBuffer errormessage); +static bool read_server_first_message(fe_scram_state *state, char *input); +static 
bool read_server_final_message(fe_scram_state *state, char *input); +static char *build_client_first_message(fe_scram_state *state); +static char *build_client_final_message(fe_scram_state *state); static bool verify_server_signature(fe_scram_state *state); static void calculate_client_proof(fe_scram_state *state, const char *client_final_message_without_proof, @@ -84,18 +76,11 @@ static bool pg_frontend_random(char *dst, int len); /* * Initialize SCRAM exchange status. - * - * The non-const char* arguments should be passed in malloc'ed. They will be - * freed by pg_fe_scram_free(). */ void * -pg_fe_scram_init(const char *username, +pg_fe_scram_init(PGconn *conn, const char *password, - bool ssl_in_use, - const char *sasl_mechanism, - const char *channel_binding_type, - char *tls_finished_message, - size_t tls_finished_len) + const char *sasl_mechanism) { fe_scram_state *state; char *prep_password; @@ -107,13 +92,9 @@ pg_fe_scram_init(const char *username, if (!state) return NULL; memset(state, 0, sizeof(fe_scram_state)); + state->conn = conn; state->state = FE_SCRAM_INIT; - state->username = username; - state->ssl_in_use = ssl_in_use; - state->tls_finished_message = tls_finished_message; - state->tls_finished_len = tls_finished_len; state->sasl_mechanism = strdup(sasl_mechanism); - state->channel_binding_type = channel_binding_type; if (!state->sasl_mechanism) { @@ -154,8 +135,6 @@ pg_fe_scram_free(void *opaq) if (state->password) free(state->password); - if (state->tls_finished_message) - free(state->tls_finished_message); if (state->sasl_mechanism) free(state->sasl_mechanism); @@ -188,9 +167,10 @@ pg_fe_scram_free(void *opaq) void pg_fe_scram_exchange(void *opaq, char *input, int inputlen, char **output, int *outputlen, - bool *done, bool *success, PQExpBuffer errorMessage) + bool *done, bool *success) { fe_scram_state *state = (fe_scram_state *) opaq; + PGconn *conn = state->conn; *done = false; *success = false; @@ -205,13 +185,13 @@ pg_fe_scram_exchange(void *opaq, char *input, int inputlen, { if (inputlen == 0) { - printfPQExpBuffer(errorMessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("malformed SCRAM message (empty message)\n")); goto error; } if (inputlen != strlen(input)) { - printfPQExpBuffer(errorMessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("malformed SCRAM message (length mismatch)\n")); goto error; } @@ -221,7 +201,7 @@ pg_fe_scram_exchange(void *opaq, char *input, int inputlen, { case FE_SCRAM_INIT: /* Begin the SCRAM handshake, by sending client nonce */ - *output = build_client_first_message(state, errorMessage); + *output = build_client_first_message(state); if (*output == NULL) goto error; @@ -232,10 +212,10 @@ pg_fe_scram_exchange(void *opaq, char *input, int inputlen, case FE_SCRAM_NONCE_SENT: /* Receive salt and server nonce, send response. 
*/ - if (!read_server_first_message(state, input, errorMessage)) + if (!read_server_first_message(state, input)) goto error; - *output = build_client_final_message(state, errorMessage); + *output = build_client_final_message(state); if (*output == NULL) goto error; @@ -246,7 +226,7 @@ pg_fe_scram_exchange(void *opaq, char *input, int inputlen, case FE_SCRAM_PROOF_SENT: /* Receive server signature */ - if (!read_server_final_message(state, input, errorMessage)) + if (!read_server_final_message(state, input)) goto error; /* @@ -260,7 +240,7 @@ pg_fe_scram_exchange(void *opaq, char *input, int inputlen, else { *success = false; - printfPQExpBuffer(errorMessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("incorrect server signature\n")); } *done = true; @@ -269,7 +249,7 @@ pg_fe_scram_exchange(void *opaq, char *input, int inputlen, default: /* shouldn't happen */ - printfPQExpBuffer(errorMessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("invalid SCRAM exchange state\n")); goto error; } @@ -327,8 +307,9 @@ read_attr_value(char **input, char attr, PQExpBuffer errorMessage) * Build the first exchange message sent by the client. */ static char * -build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) +build_client_first_message(fe_scram_state *state) { + PGconn *conn = state->conn; char raw_nonce[SCRAM_RAW_NONCE_LEN + 1]; char *result; int channel_info_len; @@ -341,7 +322,7 @@ build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) */ if (!pg_frontend_random(raw_nonce, SCRAM_RAW_NONCE_LEN)) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("could not generate nonce\n")); return NULL; } @@ -349,7 +330,7 @@ build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) state->client_nonce = malloc(pg_b64_enc_len(SCRAM_RAW_NONCE_LEN) + 1); if (state->client_nonce == NULL) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("out of memory\n")); return NULL; } @@ -370,11 +351,11 @@ build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) */ if (strcmp(state->sasl_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) { - Assert(state->ssl_in_use); - appendPQExpBuffer(&buf, "p=%s", state->channel_binding_type); + Assert(conn->ssl_in_use); + appendPQExpBuffer(&buf, "p=%s", conn->scram_channel_binding); } - else if (state->channel_binding_type == NULL || - strlen(state->channel_binding_type) == 0) + else if (conn->scram_channel_binding == NULL || + strlen(conn->scram_channel_binding) == 0) { /* * Client has chosen to not show to server that it supports channel @@ -382,7 +363,7 @@ build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) */ appendPQExpBuffer(&buf, "n"); } - else if (state->ssl_in_use) + else if (conn->ssl_in_use) { /* * Client supports channel binding, but thinks the server does not. @@ -423,7 +404,7 @@ build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) oom_error: termPQExpBuffer(&buf); - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("out of memory\n")); return NULL; } @@ -432,9 +413,10 @@ build_client_first_message(fe_scram_state *state, PQExpBuffer errormessage) * Build the final exchange message sent from the client. 
*/ static char * -build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) +build_client_final_message(fe_scram_state *state) { PQExpBufferData buf; + PGconn *conn = state->conn; uint8 client_proof[SCRAM_KEY_LEN]; char *result; @@ -450,22 +432,25 @@ build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) */ if (strcmp(state->sasl_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) { - char *cbind_data; - size_t cbind_data_len; + char *cbind_data = NULL; + size_t cbind_data_len = 0; size_t cbind_header_len; char *cbind_input; size_t cbind_input_len; - if (strcmp(state->channel_binding_type, SCRAM_CHANNEL_BINDING_TLS_UNIQUE) == 0) + if (strcmp(conn->scram_channel_binding, SCRAM_CHANNEL_BINDING_TLS_UNIQUE) == 0) { - cbind_data = state->tls_finished_message; - cbind_data_len = state->tls_finished_len; +#ifdef USE_SSL + cbind_data = pgtls_get_finished(state->conn, &cbind_data_len); + if (cbind_data == NULL) + goto oom_error; +#endif } else { /* should not happen */ termPQExpBuffer(&buf); - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("invalid channel binding type\n")); return NULL; } @@ -473,37 +458,46 @@ build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) /* should not happen */ if (cbind_data == NULL || cbind_data_len == 0) { + if (cbind_data != NULL) + free(cbind_data); termPQExpBuffer(&buf); - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("empty channel binding data for channel binding type \"%s\"\n"), - state->channel_binding_type); + conn->scram_channel_binding); return NULL; } appendPQExpBuffer(&buf, "c="); - cbind_header_len = 4 + strlen(state->channel_binding_type); /* p=type,, */ + /* p=type,, */ + cbind_header_len = 4 + strlen(conn->scram_channel_binding); cbind_input_len = cbind_header_len + cbind_data_len; cbind_input = malloc(cbind_input_len); if (!cbind_input) + { + free(cbind_data); goto oom_error; - snprintf(cbind_input, cbind_input_len, "p=%s,,", state->channel_binding_type); + } + snprintf(cbind_input, cbind_input_len, "p=%s,,", + conn->scram_channel_binding); memcpy(cbind_input + cbind_header_len, cbind_data, cbind_data_len); if (!enlargePQExpBuffer(&buf, pg_b64_enc_len(cbind_input_len))) { + free(cbind_data); free(cbind_input); goto oom_error; } buf.len += pg_b64_encode(cbind_input, cbind_input_len, buf.data + buf.len); buf.data[buf.len] = '\0'; + free(cbind_data); free(cbind_input); } - else if (state->channel_binding_type == NULL || - strlen(state->channel_binding_type) == 0) + else if (conn->scram_channel_binding == NULL || + strlen(conn->scram_channel_binding) == 0) appendPQExpBuffer(&buf, "c=biws"); /* base64 of "n,," */ - else if (state->ssl_in_use) + else if (conn->ssl_in_use) appendPQExpBuffer(&buf, "c=eSws"); /* base64 of "y,," */ else appendPQExpBuffer(&buf, "c=biws"); /* base64 of "n,," */ @@ -541,7 +535,7 @@ build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) oom_error: termPQExpBuffer(&buf); - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("out of memory\n")); return NULL; } @@ -550,9 +544,9 @@ build_client_final_message(fe_scram_state *state, PQExpBuffer errormessage) * Read the first exchange message coming from the server. 
*/ static bool -read_server_first_message(fe_scram_state *state, char *input, - PQExpBuffer errormessage) +read_server_first_message(fe_scram_state *state, char *input) { + PGconn *conn = state->conn; char *iterations_str; char *endptr; char *encoded_salt; @@ -561,13 +555,14 @@ read_server_first_message(fe_scram_state *state, char *input, state->server_first_message = strdup(input); if (state->server_first_message == NULL) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("out of memory\n")); return false; } /* parse the message */ - nonce = read_attr_value(&input, 'r', errormessage); + nonce = read_attr_value(&input, 'r', + &conn->errorMessage); if (nonce == NULL) { /* read_attr_value() has generated an error string */ @@ -578,7 +573,7 @@ read_server_first_message(fe_scram_state *state, char *input, if (strlen(nonce) < strlen(state->client_nonce) || memcmp(nonce, state->client_nonce, strlen(state->client_nonce)) != 0) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("invalid SCRAM response (nonce mismatch)\n")); return false; } @@ -586,12 +581,12 @@ read_server_first_message(fe_scram_state *state, char *input, state->nonce = strdup(nonce); if (state->nonce == NULL) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("out of memory\n")); return false; } - encoded_salt = read_attr_value(&input, 's', errormessage); + encoded_salt = read_attr_value(&input, 's', &conn->errorMessage); if (encoded_salt == NULL) { /* read_attr_value() has generated an error string */ @@ -600,7 +595,7 @@ read_server_first_message(fe_scram_state *state, char *input, state->salt = malloc(pg_b64_dec_len(strlen(encoded_salt))); if (state->salt == NULL) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("out of memory\n")); return false; } @@ -608,7 +603,7 @@ read_server_first_message(fe_scram_state *state, char *input, strlen(encoded_salt), state->salt); - iterations_str = read_attr_value(&input, 'i', errormessage); + iterations_str = read_attr_value(&input, 'i', &conn->errorMessage); if (iterations_str == NULL) { /* read_attr_value() has generated an error string */ @@ -617,13 +612,13 @@ read_server_first_message(fe_scram_state *state, char *input, state->iterations = strtol(iterations_str, &endptr, 10); if (*endptr != '\0' || state->iterations < 1) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("malformed SCRAM message (invalid iteration count)\n")); return false; } if (*input != '\0') - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("malformed SCRAM message (garbage at end of server-first-message)\n")); return true; @@ -633,16 +628,16 @@ read_server_first_message(fe_scram_state *state, char *input, * Read the final exchange message coming from the server. */ static bool -read_server_final_message(fe_scram_state *state, char *input, - PQExpBuffer errormessage) +read_server_final_message(fe_scram_state *state, char *input) { + PGconn *conn = state->conn; char *encoded_server_signature; int server_signature_len; state->server_final_message = strdup(input); if (!state->server_final_message) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("out of memory\n")); return false; } @@ -650,16 +645,18 @@ read_server_final_message(fe_scram_state *state, char *input, /* Check for error result. 
*/ if (*input == 'e') { - char *errmsg = read_attr_value(&input, 'e', errormessage); + char *errmsg = read_attr_value(&input, 'e', + &conn->errorMessage); - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("error received from server in SCRAM exchange: %s\n"), errmsg); return false; } /* Parse the message. */ - encoded_server_signature = read_attr_value(&input, 'v', errormessage); + encoded_server_signature = read_attr_value(&input, 'v', + &conn->errorMessage); if (encoded_server_signature == NULL) { /* read_attr_value() has generated an error message */ @@ -667,7 +664,7 @@ read_server_final_message(fe_scram_state *state, char *input, } if (*input != '\0') - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("malformed SCRAM message (garbage at end of server-final-message)\n")); server_signature_len = pg_b64_decode(encoded_server_signature, @@ -675,7 +672,7 @@ read_server_final_message(fe_scram_state *state, char *input, state->ServerSignature); if (server_signature_len != SCRAM_KEY_LEN) { - printfPQExpBuffer(errormessage, + printfPQExpBuffer(&conn->errorMessage, libpq_gettext("malformed SCRAM message (invalid server signature)\n")); return false; } diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c index ecaed048e6..7bcbca9df6 100644 --- a/src/interfaces/libpq/fe-auth.c +++ b/src/interfaces/libpq/fe-auth.c @@ -491,8 +491,6 @@ pg_SASL_init(PGconn *conn, int payloadlen) bool success; const char *selected_mechanism; PQExpBufferData mechanism_buf; - char *tls_finished = NULL; - size_t tls_finished_len = 0; char *password; initPQExpBuffer(&mechanism_buf); @@ -570,32 +568,15 @@ pg_SASL_init(PGconn *conn, int payloadlen) goto error; } -#ifdef USE_SSL - - /* - * Get data for channel binding. - */ - if (strcmp(selected_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) - { - tls_finished = pgtls_get_finished(conn, &tls_finished_len); - if (tls_finished == NULL) - goto oom_error; - } -#endif - /* * Initialize the SASL state information with all the information gathered * during the initial exchange. * * Note: Only tls-unique is supported for the moment. 
*/ - conn->sasl_state = pg_fe_scram_init(conn->pguser, + conn->sasl_state = pg_fe_scram_init(conn, password, - conn->ssl_in_use, - selected_mechanism, - conn->scram_channel_binding, - tls_finished, - tls_finished_len); + selected_mechanism); if (!conn->sasl_state) goto oom_error; @@ -603,7 +584,7 @@ pg_SASL_init(PGconn *conn, int payloadlen) pg_fe_scram_exchange(conn->sasl_state, NULL, -1, &initialresponse, &initialresponselen, - &done, &success, &conn->errorMessage); + &done, &success); if (done && !success) goto error; @@ -684,7 +665,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final) pg_fe_scram_exchange(conn->sasl_state, challenge, payloadlen, &output, &outputlen, - &done, &success, &conn->errorMessage); + &done, &success); free(challenge); /* don't need the input anymore */ if (final && !done) diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h index 91bc21ee8d..a8a27c24a6 100644 --- a/src/interfaces/libpq/fe-auth.h +++ b/src/interfaces/libpq/fe-auth.h @@ -23,17 +23,13 @@ extern int pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn); extern char *pg_fe_getauthname(PQExpBuffer errorMessage); /* Prototypes for functions in fe-auth-scram.c */ -extern void *pg_fe_scram_init(const char *username, +extern void *pg_fe_scram_init(PGconn *conn, const char *password, - bool ssl_in_use, - const char *sasl_mechanism, - const char *channel_binding_type, - char *tls_finished_message, - size_t tls_finished_len); + const char *sasl_mechanism); extern void pg_fe_scram_free(void *opaq); extern void pg_fe_scram_exchange(void *opaq, char *input, int inputlen, char **output, int *outputlen, - bool *done, bool *success, PQExpBuffer errorMessage); + bool *done, bool *success); extern char *pg_fe_scram_build_verifier(const char *password); #endif /* FE_AUTH_H */ From 39cfe86195f0b5cbc5fbe8d4e3aa6e2b0e322d0b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 4 Jan 2018 14:59:00 -0500 Subject: [PATCH 0777/1087] Fix incorrect computations of length of null bitmap in pageinspect. Instead of using our standard macro for this calculation, this code did it itself ... and got it wrong, leading to incorrect display of the null bitmap in some cases. Noted and fixed by Maksim Milyutin. In passing, remove a uselessly duplicative error check. Errors were introduced in commit d6061f83a; back-patch to 9.6 where that came in. 
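As a worked example of the arithmetic (numbers mine, not from the commit;
BITMAPLEN is the standard macro from access/htup_details.h, defined as
((NATTS) + 7) / 8), consider a tuple with exactly 8 attributes:

    int natts = 8;
    int old_bits = (natts / 8 + 1) * 8;               /* = 16: one spare byte */
    int new_bits = BITMAPLEN(natts) * BITS_PER_BYTE;  /* = 8: correct length */

This is precisely the case exercised by the new test8 regression table
below, whose t_bits output is the 8-character string 10000001.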
Maksim Milyutin, reviewed by Andrey Borodin Discussion: https://postgr.es/m/ec295792-a69f-350f-6287-25a20e8f31d5@gmail.com --- contrib/pageinspect/expected/page.out | 17 +++++++++++++++++ contrib/pageinspect/heapfuncs.c | 15 +++++---------- contrib/pageinspect/sql/page.sql | 8 ++++++++ 3 files changed, 30 insertions(+), 10 deletions(-) diff --git a/contrib/pageinspect/expected/page.out b/contrib/pageinspect/expected/page.out index 8e15947a81..4dd620ee6f 100644 --- a/contrib/pageinspect/expected/page.out +++ b/contrib/pageinspect/expected/page.out @@ -92,3 +92,20 @@ create table test_part1 partition of test_partitioned for values from ( 1 ) to ( select get_raw_page('test_part1', 0); -- get farther and error about empty table ERROR: block number 0 is out of range for relation "test_part1" drop table test_partitioned; +-- check null bitmap alignment for table whose number of attributes is multiple of 8 +create table test8 (f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int); +insert into test8(f1, f8) values (x'f1'::int, 0); +select t_bits, t_data from heap_page_items(get_raw_page('test8', 0)); + t_bits | t_data +----------+-------------------- + 10000001 | \xf100000000000000 +(1 row) + +select tuple_data_split('test8'::regclass, t_data, t_infomask, t_infomask2, t_bits) + from heap_page_items(get_raw_page('test8', 0)); + tuple_data_split +------------------------------------------------------------- + {"\\xf1000000",NULL,NULL,NULL,NULL,NULL,NULL,"\\x00000000"} +(1 row) + +drop table test8; diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c index 088254453e..7438257c5b 100644 --- a/contrib/pageinspect/heapfuncs.c +++ b/contrib/pageinspect/heapfuncs.c @@ -234,7 +234,7 @@ heap_page_items(PG_FUNCTION_ARGS) int bits_len; bits_len = - ((tuphdr->t_infomask2 & HEAP_NATTS_MASK) / 8 + 1) * 8; + BITMAPLEN(HeapTupleHeaderGetNatts(tuphdr)) * BITS_PER_BYTE; values[11] = CStringGetTextDatum( bits_to_text(tuphdr->t_bits, bits_len)); } @@ -436,24 +436,19 @@ tuple_data_split(PG_FUNCTION_ARGS) int bits_str_len; int bits_len; - bits_len = (t_infomask2 & HEAP_NATTS_MASK) / 8 + 1; + bits_len = BITMAPLEN(t_infomask2 & HEAP_NATTS_MASK) * BITS_PER_BYTE; if (!t_bits_str) ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), errmsg("argument of t_bits is null, but it is expected to be null and %d character long", - bits_len * 8))); + bits_len))); bits_str_len = strlen(t_bits_str); - if ((bits_str_len % 8) != 0) - ereport(ERROR, - (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("length of t_bits is not a multiple of eight"))); - - if (bits_len * 8 != bits_str_len) + if (bits_len != bits_str_len) ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), errmsg("unexpected length of t_bits %u, expected %d", - bits_str_len, bits_len * 8))); + bits_str_len, bits_len))); /* do the conversion */ t_bits = text_to_bits(t_bits_str, bits_str_len); diff --git a/contrib/pageinspect/sql/page.sql b/contrib/pageinspect/sql/page.sql index 493ca9b211..438e0351c4 100644 --- a/contrib/pageinspect/sql/page.sql +++ b/contrib/pageinspect/sql/page.sql @@ -41,3 +41,11 @@ select get_raw_page('test_partitioned', 0); -- error about partitioned table create table test_part1 partition of test_partitioned for values from ( 1 ) to (100); select get_raw_page('test_part1', 0); -- get farther and error about empty table drop table test_partitioned; + +-- check null bitmap alignment for table whose number of attributes is multiple of 8 +create table test8 (f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int); +insert into 
test8(f1, f8) values (x'f1'::int, 0); +select t_bits, t_data from heap_page_items(get_raw_page('test8', 0)); +select tuple_data_split('test8'::regclass, t_data, t_infomask, t_infomask2, t_bits) + from heap_page_items(get_raw_page('test8', 0)); +drop table test8; From d3fb72ea6de58d285e278459bca9d7cdf7f6a38b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 4 Jan 2018 15:18:39 -0500 Subject: [PATCH 0778/1087] Implement channel binding tls-server-end-point for SCRAM This adds a second standard channel binding type for SCRAM. It is mainly intended for third-party clients that cannot implement tls-unique, for example JDBC. Author: Michael Paquier --- doc/src/sgml/protocol.sgml | 17 +++-- src/backend/libpq/auth-scram.c | 20 ++++-- src/backend/libpq/be-secure-openssl.c | 61 ++++++++++++++++++ src/include/common/scram-common.h | 1 + src/include/libpq/libpq-be.h | 1 + src/interfaces/libpq/fe-auth-scram.c | 15 +++++ src/interfaces/libpq/fe-secure-openssl.c | 80 ++++++++++++++++++++++++ src/interfaces/libpq/libpq-int.h | 1 + src/test/ssl/t/002_scram.pl | 5 +- 9 files changed, 189 insertions(+), 12 deletions(-) diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 8174e3defa..4c5ed1e6d6 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -1575,9 +1575,13 @@ the password is in. Channel binding is supported in PostgreSQL builds with -SSL support. The SASL mechanism name for SCRAM with channel binding -is SCRAM-SHA-256-PLUS. The only channel binding type -supported at the moment is tls-unique, defined in RFC 5929. +SSL support. The SASL mechanism name for SCRAM with channel binding is +SCRAM-SHA-256-PLUS. Two channel binding types are +supported: tls-unique and +tls-server-end-point, both defined in RFC 5929. Clients +should use tls-unique if they can support it. +tls-server-end-point is intended for third-party clients +that cannot support tls-unique for some reason. @@ -1597,9 +1601,10 @@ supported at the moment is tls-unique, defined in RFC 5929. indicates the chosen mechanism, SCRAM-SHA-256 or SCRAM-SHA-256-PLUS. (A client is free to choose either mechanism, but for better security it should choose the channel-binding - variant if it can support it.) In the Initial Client response field, - the message contains the SCRAM - client-first-message. + variant if it can support it.) In the Initial Client response field, the + message contains the SCRAM client-first-message. + The client-first-message also contains the channel + binding type chosen by the client. diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c index 1b07eaebfa..48eb531d0f 100644 --- a/src/backend/libpq/auth-scram.c +++ b/src/backend/libpq/auth-scram.c @@ -849,13 +849,14 @@ read_client_first_message(scram_state *state, char *input) } /* - * Read value provided by client; only tls-unique is supported - * for now. (It is not safe to print the name of an - * unsupported binding type in the error message. Pranksters - * could print arbitrary strings into the log that way.) + * Read value provided by client. (It is not safe to print + * the name of an unsupported binding type in the error + * message. Pranksters could print arbitrary strings into the + * log that way.) 
*/ channel_binding_type = read_attr_value(&input, 'p'); - if (strcmp(channel_binding_type, SCRAM_CHANNEL_BINDING_TLS_UNIQUE) != 0) + if (strcmp(channel_binding_type, SCRAM_CHANNEL_BINDING_TLS_UNIQUE) != 0 && + strcmp(channel_binding_type, SCRAM_CHANNEL_BINDING_TLS_END_POINT) != 0) ereport(ERROR, (errcode(ERRCODE_PROTOCOL_VIOLATION), (errmsg("unsupported SCRAM channel-binding type")))); @@ -1114,6 +1115,15 @@ read_client_final_message(scram_state *state, char *input) { #ifdef USE_SSL cbind_data = be_tls_get_peer_finished(state->port, &cbind_data_len); +#endif + } + else if (strcmp(state->channel_binding_type, + SCRAM_CHANNEL_BINDING_TLS_END_POINT) == 0) + { + /* Fetch hash data of server's SSL certificate */ +#ifdef USE_SSL + cbind_data = be_tls_get_certificate_hash(state->port, + &cbind_data_len); #endif } else diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index 3a7aa01876..f75cc2ef18 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -1239,6 +1239,67 @@ be_tls_get_peer_finished(Port *port, size_t *len) return result; } +/* + * Get the server certificate hash for SCRAM channel binding type + * tls-server-end-point. + * + * The result is a palloc'd hash of the server certificate with its + * size, and NULL if there is no certificate available. + */ +char * +be_tls_get_certificate_hash(Port *port, size_t *len) +{ + X509 *server_cert; + char *cert_hash; + const EVP_MD *algo_type = NULL; + unsigned char hash[EVP_MAX_MD_SIZE]; /* size for SHA-512 */ + unsigned int hash_size; + int algo_nid; + + *len = 0; + server_cert = SSL_get_certificate(port->ssl); + if (server_cert == NULL) + return NULL; + + /* + * Get the signature algorithm of the certificate to determine the + * hash algorithm to use for the result. + */ + if (!OBJ_find_sigid_algs(X509_get_signature_nid(server_cert), + &algo_nid, NULL)) + elog(ERROR, "could not determine server certificate signature algorithm"); + + /* + * The TLS server's certificate bytes need to be hashed with SHA-256 if + * its signature algorithm is MD5 or SHA-1 as per RFC 5929 + * (https://tools.ietf.org/html/rfc5929#section-4.1). If something else + * is used, the same hash as the signature algorithm is used. + */ + switch (algo_nid) + { + case NID_md5: + case NID_sha1: + algo_type = EVP_sha256(); + break; + default: + algo_type = EVP_get_digestbynid(algo_nid); + if (algo_type == NULL) + elog(ERROR, "could not find digest for NID %s", + OBJ_nid2sn(algo_nid)); + break; + } + + /* generate and save the certificate hash */ + if (!X509_digest(server_cert, algo_type, hash, &hash_size)) + elog(ERROR, "could not generate server certificate hash"); + + cert_hash = palloc(hash_size); + memcpy(cert_hash, hash, hash_size); + *len = hash_size; + + return cert_hash; +} + /* * Convert an X509 subject name to a cstring. 
* diff --git a/src/include/common/scram-common.h b/src/include/common/scram-common.h index 3d81934fda..e1d742ba89 100644 --- a/src/include/common/scram-common.h +++ b/src/include/common/scram-common.h @@ -21,6 +21,7 @@ /* Channel binding types */ #define SCRAM_CHANNEL_BINDING_TLS_UNIQUE "tls-unique" +#define SCRAM_CHANNEL_BINDING_TLS_END_POINT "tls-server-end-point" /* Length of SCRAM keys (client and server) */ #define SCRAM_KEY_LEN PG_SHA256_DIGEST_LENGTH diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h index e660e8afa8..49cb263110 100644 --- a/src/include/libpq/libpq-be.h +++ b/src/include/libpq/libpq-be.h @@ -210,6 +210,7 @@ extern void be_tls_get_version(Port *port, char *ptr, size_t len); extern void be_tls_get_cipher(Port *port, char *ptr, size_t len); extern void be_tls_get_peerdn_name(Port *port, char *ptr, size_t len); extern char *be_tls_get_peer_finished(Port *port, size_t *len); +extern char *be_tls_get_certificate_hash(Port *port, size_t *len); #endif extern ProtocolVersion FrontendProtocol; diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index 06c9cb2614..23bd5fb2b6 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -444,6 +444,21 @@ build_client_final_message(fe_scram_state *state) cbind_data = pgtls_get_finished(state->conn, &cbind_data_len); if (cbind_data == NULL) goto oom_error; +#endif + } + else if (strcmp(conn->scram_channel_binding, + SCRAM_CHANNEL_BINDING_TLS_END_POINT) == 0) + { + /* Fetch hash data of server's SSL certificate */ +#ifdef USE_SSL + cbind_data = + pgtls_get_peer_certificate_hash(state->conn, + &cbind_data_len); + if (cbind_data == NULL) + { + /* error message is already set on error */ + return NULL; + } #endif } else diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index 7b7390a1fc..52390640bf 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -419,6 +419,86 @@ pgtls_get_finished(PGconn *conn, size_t *len) return result; } +/* + * Get the hash of the server certificate, for SCRAM channel binding type + * tls-server-end-point. + * + * NULL is sent back to the caller in the event of an error, with an + * error message for the caller to consume. + */ +char * +pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len) +{ + X509 *peer_cert; + const EVP_MD *algo_type; + unsigned char hash[EVP_MAX_MD_SIZE]; /* size for SHA-512 */ + unsigned int hash_size; + int algo_nid; + char *cert_hash; + + *len = 0; + + if (!conn->peer) + return NULL; + + peer_cert = conn->peer; + + /* + * Get the signature algorithm of the certificate to determine the hash + * algorithm to use for the result. + */ + if (!OBJ_find_sigid_algs(X509_get_signature_nid(peer_cert), + &algo_nid, NULL)) + { + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("could not determine server certificate signature algorithm\n")); + return NULL; + } + + /* + * The TLS server's certificate bytes need to be hashed with SHA-256 if + * its signature algorithm is MD5 or SHA-1 as per RFC 5929 + * (https://tools.ietf.org/html/rfc5929#section-4.1). If something else + * is used, the same hash as the signature algorithm is used. 
+ */ + switch (algo_nid) + { + case NID_md5: + case NID_sha1: + algo_type = EVP_sha256(); + break; + default: + algo_type = EVP_get_digestbynid(algo_nid); + if (algo_type == NULL) + { + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("could not find digest for NID %s\n"), + OBJ_nid2sn(algo_nid)); + return NULL; + } + break; + } + + if (!X509_digest(peer_cert, algo_type, hash, &hash_size)) + { + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("could not generate peer certificate hash\n")); + return NULL; + } + + /* save result */ + cert_hash = malloc(hash_size); + if (cert_hash == NULL) + { + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("out of memory\n")); + return NULL; + } + memcpy(cert_hash, hash, hash_size); + *len = hash_size; + + return cert_hash; +} /* ------------------------------------------------------------ */ /* OpenSSL specific code */ diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h index 516039eea0..4e354098b3 100644 --- a/src/interfaces/libpq/libpq-int.h +++ b/src/interfaces/libpq/libpq-int.h @@ -672,6 +672,7 @@ extern ssize_t pgtls_read(PGconn *conn, void *ptr, size_t len); extern bool pgtls_read_pending(PGconn *conn); extern ssize_t pgtls_write(PGconn *conn, const void *ptr, size_t len); extern char *pgtls_get_finished(PGconn *conn, size_t *len); +extern char *pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len); /* * this is so that we can check if a connection is non-blocking internally diff --git a/src/test/ssl/t/002_scram.pl b/src/test/ssl/t/002_scram.pl index 324b4888d4..3f425e00f0 100644 --- a/src/test/ssl/t/002_scram.pl +++ b/src/test/ssl/t/002_scram.pl @@ -4,7 +4,7 @@ use warnings; use PostgresNode; use TestLib; -use Test::More tests => 4; +use Test::More tests => 5; use ServerSetup; use File::Copy; @@ -45,6 +45,9 @@ test_connect_ok($common_connstr, "scram_channel_binding=''", "SCRAM authentication without channel binding"); +test_connect_ok($common_connstr, + "scram_channel_binding=tls-server-end-point", + "SCRAM authentication with tls-server-end-point as channel binding"); test_connect_fails($common_connstr, "scram_channel_binding=not-exists", "SCRAM authentication with invalid channel binding"); From cc6337d2fed598d4b5ac54d9a62708182b83a81e Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 4 Jan 2018 15:48:15 -0500 Subject: [PATCH 0779/1087] Simplify and encapsulate tuple routing support code. Instead of having ExecSetupPartitionTupleRouting return multiple out parameters, have it return a pointer to a structure containing all of those different things. Also, provide and use a cleanup function, ExecCleanupTupleRouting, instead of cleaning up all of the resources allocated by ExecSetupPartitionTupleRouting individually. Amit Khandekar, reviewed by Amit Langote, David Rowley, and me Discussion: http://postgr.es/m/CAJ3gD9fWfxgKC+PfJZF3hkgAcNOy-LpfPxVYitDEXKHjeieWQQ@mail.gmail.com --- src/backend/commands/copy.c | 86 +++++--------------- src/backend/executor/execPartition.c | 108 +++++++++++++++---------- src/backend/executor/nodeModifyTable.c | 94 +++++++-------------- src/include/executor/execPartition.h | 47 ++++++++--- src/include/nodes/execnodes.h | 9 +-- 5 files changed, 154 insertions(+), 190 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 118115aa42..66cbff7ead 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -166,12 +166,9 @@ typedef struct CopyStateData bool volatile_defexprs; /* is any of defexprs volatile? 
*/ List *range_table; - PartitionDispatch *partition_dispatch_info; - int num_dispatch; /* Number of entries in the above array */ - int num_partitions; /* Number of members in the following arrays */ - ResultRelInfo **partitions; /* Per partition result relation pointers */ - TupleConversionMap **partition_tupconv_maps; - TupleTableSlot *partition_tuple_slot; + /* Tuple-routing support info */ + PartitionTupleRouting *partition_tuple_routing; + TransitionCaptureState *transition_capture; TupleConversionMap **transition_tupconv_maps; @@ -2472,28 +2469,10 @@ CopyFrom(CopyState cstate) */ if (cstate->rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) { - PartitionDispatch *partition_dispatch_info; - ResultRelInfo **partitions; - TupleConversionMap **partition_tupconv_maps; - TupleTableSlot *partition_tuple_slot; - int num_parted, - num_partitions; - - ExecSetupPartitionTupleRouting(NULL, - cstate->rel, - 1, - estate, - &partition_dispatch_info, - &partitions, - &partition_tupconv_maps, - &partition_tuple_slot, - &num_parted, &num_partitions); - cstate->partition_dispatch_info = partition_dispatch_info; - cstate->num_dispatch = num_parted; - cstate->partitions = partitions; - cstate->num_partitions = num_partitions; - cstate->partition_tupconv_maps = partition_tupconv_maps; - cstate->partition_tuple_slot = partition_tuple_slot; + PartitionTupleRouting *proute; + + proute = cstate->partition_tuple_routing = + ExecSetupPartitionTupleRouting(NULL, cstate->rel, 1, estate); /* * If we are capturing transition tuples, they may need to be @@ -2506,11 +2485,11 @@ CopyFrom(CopyState cstate) int i; cstate->transition_tupconv_maps = (TupleConversionMap **) - palloc0(sizeof(TupleConversionMap *) * cstate->num_partitions); - for (i = 0; i < cstate->num_partitions; ++i) + palloc0(sizeof(TupleConversionMap *) * proute->num_partitions); + for (i = 0; i < proute->num_partitions; ++i) { cstate->transition_tupconv_maps[i] = - convert_tuples_by_name(RelationGetDescr(cstate->partitions[i]->ri_RelationDesc), + convert_tuples_by_name(RelationGetDescr(proute->partitions[i]->ri_RelationDesc), RelationGetDescr(cstate->rel), gettext_noop("could not convert row type")); } @@ -2530,7 +2509,7 @@ CopyFrom(CopyState cstate) if ((resultRelInfo->ri_TrigDesc != NULL && (resultRelInfo->ri_TrigDesc->trig_insert_before_row || resultRelInfo->ri_TrigDesc->trig_insert_instead_row)) || - cstate->partition_dispatch_info != NULL || + cstate->partition_tuple_routing != NULL || cstate->volatile_defexprs) { useHeapMultiInsert = false; @@ -2605,10 +2584,11 @@ CopyFrom(CopyState cstate) ExecStoreTuple(tuple, slot, InvalidBuffer, false); /* Determine the partition to heap_insert the tuple into */ - if (cstate->partition_dispatch_info) + if (cstate->partition_tuple_routing) { int leaf_part_index; TupleConversionMap *map; + PartitionTupleRouting *proute = cstate->partition_tuple_routing; /* * Away we go ... If we end up not finding a partition after all, @@ -2619,11 +2599,11 @@ CopyFrom(CopyState cstate) * partition, respectively. */ leaf_part_index = ExecFindPartition(resultRelInfo, - cstate->partition_dispatch_info, + proute->partition_dispatch_info, slot, estate); Assert(leaf_part_index >= 0 && - leaf_part_index < cstate->num_partitions); + leaf_part_index < proute->num_partitions); /* * If this tuple is mapped to a partition that is not same as the @@ -2641,7 +2621,7 @@ CopyFrom(CopyState cstate) * to the selected partition. 
*/ saved_resultRelInfo = resultRelInfo; - resultRelInfo = cstate->partitions[leaf_part_index]; + resultRelInfo = proute->partitions[leaf_part_index]; /* We do not yet have a way to insert into a foreign partition */ if (resultRelInfo->ri_FdwRoutine) @@ -2688,7 +2668,7 @@ CopyFrom(CopyState cstate) * We might need to convert from the parent rowtype to the * partition rowtype. */ - map = cstate->partition_tupconv_maps[leaf_part_index]; + map = proute->partition_tupconv_maps[leaf_part_index]; if (map) { Relation partrel = resultRelInfo->ri_RelationDesc; @@ -2700,7 +2680,7 @@ CopyFrom(CopyState cstate) * point on. Use a dedicated slot from this point on until * we're finished dealing with the partition. */ - slot = cstate->partition_tuple_slot; + slot = proute->partition_tuple_slot; Assert(slot != NULL); ExecSetSlotDescriptor(slot, RelationGetDescr(partrel)); ExecStoreTuple(tuple, slot, InvalidBuffer, true); @@ -2852,34 +2832,8 @@ CopyFrom(CopyState cstate) ExecCloseIndices(resultRelInfo); /* Close all the partitioned tables, leaf partitions, and their indices */ - if (cstate->partition_dispatch_info) - { - int i; - - /* - * Remember cstate->partition_dispatch_info[0] corresponds to the root - * partitioned table, which we must not try to close, because it is - * the main target table of COPY that will be closed eventually by - * DoCopy(). Also, tupslot is NULL for the root partitioned table. - */ - for (i = 1; i < cstate->num_dispatch; i++) - { - PartitionDispatch pd = cstate->partition_dispatch_info[i]; - - heap_close(pd->reldesc, NoLock); - ExecDropSingleTupleTableSlot(pd->tupslot); - } - for (i = 0; i < cstate->num_partitions; i++) - { - ResultRelInfo *resultRelInfo = cstate->partitions[i]; - - ExecCloseIndices(resultRelInfo); - heap_close(resultRelInfo->ri_RelationDesc, NoLock); - } - - /* Release the standalone partition tuple descriptor */ - ExecDropSingleTupleTableSlot(cstate->partition_tuple_slot); - } + if (cstate->partition_tuple_routing) + ExecCleanupTupleRouting(cstate->partition_tuple_routing); /* Close any trigger target relations */ ExecCleanUpTriggerState(estate); diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index 89c523ef44..115be02635 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -38,58 +38,40 @@ static char *ExecBuildSlotPartitionKeyDescription(Relation rel, int maxfieldlen); /* - * ExecSetupPartitionTupleRouting - set up information needed during - * tuple routing for partitioned tables - * - * Output arguments: - * 'pd' receives an array of PartitionDispatch objects with one entry for - * every partitioned table in the partition tree - * 'partitions' receives an array of ResultRelInfo* objects with one entry for - * every leaf partition in the partition tree - * 'tup_conv_maps' receives an array of TupleConversionMap objects with one - * entry for every leaf partition (required to convert input tuple based - * on the root table's rowtype to a leaf partition's rowtype after tuple - * routing is done) - * 'partition_tuple_slot' receives a standalone TupleTableSlot to be used - * to manipulate any given leaf partition's rowtype after that partition - * is chosen by tuple-routing. 
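One standalone slot is enough for every leaf partition because the slot is re-described each time a tuple is routed; the call-site pattern above, condensed into a sketch (identifiers from the patch, surrounding executor state elided):

/* Route the tuple, then convert it if the partition's layout differs. */
leaf_part_index = ExecFindPartition(resultRelInfo,
									proute->partition_dispatch_info,
									slot, estate);
resultRelInfo = proute->partitions[leaf_part_index];
map = proute->partition_tupconv_maps[leaf_part_index];
if (map != NULL)
{
	tuple = do_convert_tuple(tuple, map);
	/* Reuse the one dedicated slot; only its descriptor changes. */
	slot = proute->partition_tuple_slot;
	ExecSetSlotDescriptor(slot,
						  RelationGetDescr(resultRelInfo->ri_RelationDesc));
	ExecStoreTuple(tuple, slot, InvalidBuffer, true);
}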
- * 'num_parted' receives the number of partitioned tables in the partition - * tree (= the number of entries in the 'pd' output array) - * 'num_partitions' receives the number of leaf partitions in the partition - * tree (= the number of entries in the 'partitions' and 'tup_conv_maps' - * output arrays + * ExecSetupPartitionTupleRouting - sets up information needed during + * tuple routing for partitioned tables, encapsulates it in + * PartitionTupleRouting, and returns it. * * Note that all the relations in the partition tree are locked using the * RowExclusiveLock mode upon return from this function. */ -void +PartitionTupleRouting * ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, - Relation rel, - Index resultRTindex, - EState *estate, - PartitionDispatch **pd, - ResultRelInfo ***partitions, - TupleConversionMap ***tup_conv_maps, - TupleTableSlot **partition_tuple_slot, - int *num_parted, int *num_partitions) + Relation rel, Index resultRTindex, + EState *estate) { TupleDesc tupDesc = RelationGetDescr(rel); List *leaf_parts; ListCell *cell; int i; ResultRelInfo *leaf_part_rri; + PartitionTupleRouting *proute; /* * Get the information about the partition tree after locking all the * partitions. */ (void) find_all_inheritors(RelationGetRelid(rel), RowExclusiveLock, NULL); - *pd = RelationGetPartitionDispatchInfo(rel, num_parted, &leaf_parts); - *num_partitions = list_length(leaf_parts); - *partitions = (ResultRelInfo **) palloc(*num_partitions * - sizeof(ResultRelInfo *)); - *tup_conv_maps = (TupleConversionMap **) palloc0(*num_partitions * - sizeof(TupleConversionMap *)); + proute = (PartitionTupleRouting *) palloc0(sizeof(PartitionTupleRouting)); + proute->partition_dispatch_info = + RelationGetPartitionDispatchInfo(rel, &proute->num_dispatch, + &leaf_parts); + proute->num_partitions = list_length(leaf_parts); + proute->partitions = (ResultRelInfo **) palloc(proute->num_partitions * + sizeof(ResultRelInfo *)); + proute->partition_tupconv_maps = + (TupleConversionMap **) palloc0(proute->num_partitions * + sizeof(TupleConversionMap *)); /* * Initialize an empty slot that will be used to manipulate tuples of any @@ -97,9 +79,9 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, * (such as ModifyTableState) and released when the node finishes * processing. */ - *partition_tuple_slot = MakeTupleTableSlot(); + proute->partition_tuple_slot = MakeTupleTableSlot(); - leaf_part_rri = (ResultRelInfo *) palloc0(*num_partitions * + leaf_part_rri = (ResultRelInfo *) palloc0(proute->num_partitions * sizeof(ResultRelInfo)); i = 0; foreach(cell, leaf_parts) @@ -109,8 +91,8 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, /* * We locked all the partitions above including the leaf partitions. - * Note that each of the relations in *partitions are eventually - * closed by the caller. + * Note that each of the relations in proute->partitions are + * eventually closed by the caller. */ partrel = heap_open(lfirst_oid(cell), NoLock); part_tupdesc = RelationGetDescr(partrel); @@ -119,8 +101,9 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, * Save a tuple conversion map to convert a tuple routed to this * partition from the parent's type to the partition's. 
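A useful property of the maps being saved here: convert_tuples_by_name returns NULL when the two descriptors already match column for column, so a stored NULL doubles as a cheap no-conversion-needed flag. A hypothetical helper showing the consumer side of that convention:

static HeapTuple
maybe_convert_to_partition(PartitionTupleRouting *proute, int i,
						   HeapTuple tuple)
{
	TupleConversionMap *map = proute->partition_tupconv_maps[i];

	/* NULL map: parent and partition rowtypes are physically identical */
	return map ? do_convert_tuple(tuple, map) : tuple;
}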
*/ - (*tup_conv_maps)[i] = convert_tuples_by_name(tupDesc, part_tupdesc, - gettext_noop("could not convert row type")); + proute->partition_tupconv_maps[i] = + convert_tuples_by_name(tupDesc, part_tupdesc, + gettext_noop("could not convert row type")); InitResultRelInfo(leaf_part_rri, partrel, @@ -149,9 +132,11 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, estate->es_leaf_result_relations = lappend(estate->es_leaf_result_relations, leaf_part_rri); - (*partitions)[i] = leaf_part_rri++; + proute->partitions[i] = leaf_part_rri++; i++; } + + return proute; } /* @@ -272,6 +257,45 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, return result; } +/* + * ExecCleanupTupleRouting -- Clean up objects allocated for partition tuple + * routing. + * + * Close all the partitioned tables, leaf partitions, and their indices. + */ +void +ExecCleanupTupleRouting(PartitionTupleRouting * proute) +{ + int i; + + /* + * Remember, proute->partition_dispatch_info[0] corresponds to the root + * partitioned table, which we must not try to close, because it is the + * main target table of the query that will be closed by callers such as + * ExecEndPlan() or DoCopy(). Also, tupslot is NULL for the root + * partitioned table. + */ + for (i = 1; i < proute->num_dispatch; i++) + { + PartitionDispatch pd = proute->partition_dispatch_info[i]; + + heap_close(pd->reldesc, NoLock); + ExecDropSingleTupleTableSlot(pd->tupslot); + } + + for (i = 0; i < proute->num_partitions; i++) + { + ResultRelInfo *resultRelInfo = proute->partitions[i]; + + ExecCloseIndices(resultRelInfo); + heap_close(resultRelInfo->ri_RelationDesc, NoLock); + } + + /* Release the standalone partition tuple descriptor, if any */ + if (proute->partition_tuple_slot) + ExecDropSingleTupleTableSlot(proute->partition_tuple_slot); +} + /* * RelationGetPartitionDispatchInfo * Returns information necessary to route tuples down a partition tree diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index e52a3bb95e..95e0748d8f 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -279,32 +279,33 @@ ExecInsert(ModifyTableState *mtstate, resultRelInfo = estate->es_result_relation_info; /* Determine the partition to heap_insert the tuple into */ - if (mtstate->mt_partition_dispatch_info) + if (mtstate->mt_partition_tuple_routing) { int leaf_part_index; + PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing; TupleConversionMap *map; /* * Away we go ... If we end up not finding a partition after all, * ExecFindPartition() does not return and errors out instead. * Otherwise, the returned value is to be used as an index into arrays - * mt_partitions[] and mt_partition_tupconv_maps[] that will get us - * the ResultRelInfo and TupleConversionMap for the partition, + * proute->partitions[] and proute->partition_tupconv_maps[] that will + * get us the ResultRelInfo and TupleConversionMap for the partition, * respectively. */ leaf_part_index = ExecFindPartition(resultRelInfo, - mtstate->mt_partition_dispatch_info, + proute->partition_dispatch_info, slot, estate); Assert(leaf_part_index >= 0 && - leaf_part_index < mtstate->mt_num_partitions); + leaf_part_index < proute->num_partitions); /* * Save the old ResultRelInfo and switch to the one corresponding to * the selected partition. 
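Seen from a call site, the encapsulation turns every user of tuple routing into a symmetric setup/teardown pair; a condensed sketch of the shape that both CopyFrom and ExecInitModifyTable now share (per-tuple work and error paths elided):

PartitionTupleRouting *proute = NULL;

if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
	proute = ExecSetupPartitionTupleRouting(NULL, rel, 1, estate);

/* ... route tuples through proute->partitions[] as sketched earlier ... */

if (proute)
	ExecCleanupTupleRouting(proute);	/* closes rels, drops the slot */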
*/ saved_resultRelInfo = resultRelInfo; - resultRelInfo = mtstate->mt_partitions[leaf_part_index]; + resultRelInfo = proute->partitions[leaf_part_index]; /* We do not yet have a way to insert into a foreign partition */ if (resultRelInfo->ri_FdwRoutine) @@ -352,7 +353,7 @@ ExecInsert(ModifyTableState *mtstate, * We might need to convert from the parent rowtype to the partition * rowtype. */ - map = mtstate->mt_partition_tupconv_maps[leaf_part_index]; + map = proute->partition_tupconv_maps[leaf_part_index]; if (map) { Relation partrel = resultRelInfo->ri_RelationDesc; @@ -364,7 +365,7 @@ ExecInsert(ModifyTableState *mtstate, * on, until we're finished dealing with the partition. Use the * dedicated slot for that. */ - slot = mtstate->mt_partition_tuple_slot; + slot = proute->partition_tuple_slot; Assert(slot != NULL); ExecSetSlotDescriptor(slot, RelationGetDescr(partrel)); ExecStoreTuple(tuple, slot, InvalidBuffer, true); @@ -1500,9 +1501,10 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) mtstate->mt_oc_transition_capture != NULL) { int numResultRelInfos; + PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing; - numResultRelInfos = (mtstate->mt_partition_tuple_slot != NULL ? - mtstate->mt_num_partitions : + numResultRelInfos = (proute != NULL ? + proute->num_partitions : mtstate->mt_nplans); /* @@ -1515,13 +1517,13 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) palloc0(sizeof(TupleConversionMap *) * numResultRelInfos); /* Choose the right set of partitions */ - if (mtstate->mt_partition_dispatch_info != NULL) + if (proute != NULL) { /* * For tuple routing among partitions, we need TupleDescs based on * the partition routing table. */ - ResultRelInfo **resultRelInfos = mtstate->mt_partitions; + ResultRelInfo **resultRelInfos = proute->partitions; for (i = 0; i < numResultRelInfos; ++i) { @@ -1832,6 +1834,8 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) ListCell *l; int i; Relation rel; + PartitionTupleRouting *proute = NULL; + int num_partitions = 0; /* check for unsupported flags */ Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -1945,28 +1949,11 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) if (operation == CMD_INSERT && rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) { - PartitionDispatch *partition_dispatch_info; - ResultRelInfo **partitions; - TupleConversionMap **partition_tupconv_maps; - TupleTableSlot *partition_tuple_slot; - int num_parted, - num_partitions; - - ExecSetupPartitionTupleRouting(mtstate, - rel, - node->nominalRelation, - estate, - &partition_dispatch_info, - &partitions, - &partition_tupconv_maps, - &partition_tuple_slot, - &num_parted, &num_partitions); - mtstate->mt_partition_dispatch_info = partition_dispatch_info; - mtstate->mt_num_dispatch = num_parted; - mtstate->mt_partitions = partitions; - mtstate->mt_num_partitions = num_partitions; - mtstate->mt_partition_tupconv_maps = partition_tupconv_maps; - mtstate->mt_partition_tuple_slot = partition_tuple_slot; + proute = mtstate->mt_partition_tuple_routing = + ExecSetupPartitionTupleRouting(mtstate, + rel, node->nominalRelation, + estate); + num_partitions = proute->num_partitions; } /* @@ -2009,7 +1996,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) * will suffice. This only occurs for the INSERT case; UPDATE/DELETE * cases are handled above. 
*/ - if (node->withCheckOptionLists != NIL && mtstate->mt_num_partitions > 0) + if (node->withCheckOptionLists != NIL && num_partitions > 0) { List *wcoList; PlanState *plan; @@ -2026,14 +2013,14 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) mtstate->mt_nplans == 1); wcoList = linitial(node->withCheckOptionLists); plan = mtstate->mt_plans[0]; - for (i = 0; i < mtstate->mt_num_partitions; i++) + for (i = 0; i < num_partitions; i++) { Relation partrel; List *mapped_wcoList; List *wcoExprs = NIL; ListCell *ll; - resultRelInfo = mtstate->mt_partitions[i]; + resultRelInfo = proute->partitions[i]; partrel = resultRelInfo->ri_RelationDesc; /* varno = node->nominalRelation */ @@ -2101,12 +2088,12 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) * are handled above. */ returningList = linitial(node->returningLists); - for (i = 0; i < mtstate->mt_num_partitions; i++) + for (i = 0; i < num_partitions; i++) { Relation partrel; List *rlist; - resultRelInfo = mtstate->mt_partitions[i]; + resultRelInfo = proute->partitions[i]; partrel = resultRelInfo->ri_RelationDesc; /* varno = node->nominalRelation */ @@ -2372,32 +2359,9 @@ ExecEndModifyTable(ModifyTableState *node) resultRelInfo); } - /* - * Close all the partitioned tables, leaf partitions, and their indices - * - * Remember node->mt_partition_dispatch_info[0] corresponds to the root - * partitioned table, which we must not try to close, because it is the - * main target table of the query that will be closed by ExecEndPlan(). - * Also, tupslot is NULL for the root partitioned table. - */ - for (i = 1; i < node->mt_num_dispatch; i++) - { - PartitionDispatch pd = node->mt_partition_dispatch_info[i]; - - heap_close(pd->reldesc, NoLock); - ExecDropSingleTupleTableSlot(pd->tupslot); - } - for (i = 0; i < node->mt_num_partitions; i++) - { - ResultRelInfo *resultRelInfo = node->mt_partitions[i]; - - ExecCloseIndices(resultRelInfo); - heap_close(resultRelInfo->ri_RelationDesc, NoLock); - } - - /* Release the standalone partition tuple descriptor, if any */ - if (node->mt_partition_tuple_slot) - ExecDropSingleTupleTableSlot(node->mt_partition_tuple_slot); + /* Close all the partitioned tables, leaf partitions, and their indices */ + if (node->mt_partition_tuple_routing) + ExecCleanupTupleRouting(node->mt_partition_tuple_routing); /* * Free the exprcontext diff --git a/src/include/executor/execPartition.h b/src/include/executor/execPartition.h index f0998cb82f..b5df357acd 100644 --- a/src/include/executor/execPartition.h +++ b/src/include/executor/execPartition.h @@ -49,18 +49,47 @@ typedef struct PartitionDispatchData typedef struct PartitionDispatchData *PartitionDispatch; -extern void ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, - Relation rel, - Index resultRTindex, - EState *estate, - PartitionDispatch **pd, - ResultRelInfo ***partitions, - TupleConversionMap ***tup_conv_maps, - TupleTableSlot **partition_tuple_slot, - int *num_parted, int *num_partitions); +/*----------------------- + * PartitionTupleRouting - Encapsulates all information required to execute + * tuple-routing between partitions. + * + * partition_dispatch_info Array of PartitionDispatch objects with one + * entry for every partitioned table in the + * partition tree. + * num_dispatch number of partitioned tables in the partition + * tree (= length of partition_dispatch_info[]) + * partitions Array of ResultRelInfo* objects with one entry + * for every leaf partition in the partition tree. 
+ * num_partitions Number of leaf partitions in the partition tree + * (= 'partitions' array length) + * partition_tupconv_maps Array of TupleConversionMap objects with one + * entry for every leaf partition (required to + * convert input tuple based on the root table's + * rowtype to a leaf partition's rowtype after + * tuple routing is done) + * partition_tuple_slot TupleTableSlot to be used to manipulate any + * given leaf partition's rowtype after that + * partition is chosen for insertion by + * tuple-routing. + *----------------------- + */ +typedef struct PartitionTupleRouting +{ + PartitionDispatch *partition_dispatch_info; + int num_dispatch; + ResultRelInfo **partitions; + int num_partitions; + TupleConversionMap **partition_tupconv_maps; + TupleTableSlot *partition_tuple_slot; +} PartitionTupleRouting; + +extern PartitionTupleRouting *ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, + Relation rel, Index resultRTindex, + EState *estate); extern int ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, TupleTableSlot *slot, EState *estate); +extern void ExecCleanupTupleRouting(PartitionTupleRouting *proute); #endif /* EXECPARTITION_H */ diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 3ad58cdfe7..2a4f7407a1 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -985,15 +985,8 @@ typedef struct ModifyTableState TupleTableSlot *mt_existing; /* slot to store existing target tuple in */ List *mt_excludedtlist; /* the excluded pseudo relation's tlist */ TupleTableSlot *mt_conflproj; /* CONFLICT ... SET ... projection target */ - struct PartitionDispatchData **mt_partition_dispatch_info; + struct PartitionTupleRouting *mt_partition_tuple_routing; /* Tuple-routing support info */ - int mt_num_dispatch; /* Number of entries in the above array */ - int mt_num_partitions; /* Number of members in the following - * arrays */ - ResultRelInfo **mt_partitions; /* Per partition result relation pointers */ - TupleConversionMap **mt_partition_tupconv_maps; - /* Per partition tuple conversion map */ - TupleTableSlot *mt_partition_tuple_slot; struct TransitionCaptureState *mt_transition_capture; /* controls transition table population for specified operation */ struct TransitionCaptureState *mt_oc_transition_capture; From 18869e202b74f36d504c5c3c7d9db9c186039eba Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 4 Jan 2018 15:59:29 -0500 Subject: [PATCH 0780/1087] Fix new test case to not be endian-dependent. Per buildfarm. 
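The reason the constant changes in the fix below: an int stored little-endian lays out its bytes least-significant first, so x'f1' appears as f1 00 00 00 on little-endian machines but 00 00 00 f1 on big-endian ones, and the raw t_data the test prints differs between buildfarm members. 0x7f00007f is a byte-order palindrome (and, presumably not by accident, keeps every byte's high bit clear), so both architectures print the same thing. A standalone check (hypothetical C program):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int
main(void)
{
	uint32_t	vals[] = {0xf1, 0x7f00007f};
	unsigned char b[4];

	for (int i = 0; i < 2; i++)
	{
		memcpy(b, &vals[i], 4);
		/* first line differs by endianness, second one never does */
		printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
	}
	return 0;
}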
Discussion: https://postgr.es/m/ec295792-a69f-350f-6287-25a20e8f31d5@gmail.com --- contrib/pageinspect/expected/page.out | 6 +++--- contrib/pageinspect/sql/page.sql | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/contrib/pageinspect/expected/page.out b/contrib/pageinspect/expected/page.out index 4dd620ee6f..5edb650085 100644 --- a/contrib/pageinspect/expected/page.out +++ b/contrib/pageinspect/expected/page.out @@ -94,18 +94,18 @@ ERROR: block number 0 is out of range for relation "test_part1" drop table test_partitioned; -- check null bitmap alignment for table whose number of attributes is multiple of 8 create table test8 (f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int); -insert into test8(f1, f8) values (x'f1'::int, 0); +insert into test8(f1, f8) values (x'7f00007f'::int, 0); select t_bits, t_data from heap_page_items(get_raw_page('test8', 0)); t_bits | t_data ----------+-------------------- - 10000001 | \xf100000000000000 + 10000001 | \x7f00007f00000000 (1 row) select tuple_data_split('test8'::regclass, t_data, t_infomask, t_infomask2, t_bits) from heap_page_items(get_raw_page('test8', 0)); tuple_data_split ------------------------------------------------------------- - {"\\xf1000000",NULL,NULL,NULL,NULL,NULL,NULL,"\\x00000000"} + {"\\x7f00007f",NULL,NULL,NULL,NULL,NULL,NULL,"\\x00000000"} (1 row) drop table test8; diff --git a/contrib/pageinspect/sql/page.sql b/contrib/pageinspect/sql/page.sql index 438e0351c4..8f35830e06 100644 --- a/contrib/pageinspect/sql/page.sql +++ b/contrib/pageinspect/sql/page.sql @@ -44,7 +44,7 @@ drop table test_partitioned; -- check null bitmap alignment for table whose number of attributes is multiple of 8 create table test8 (f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int); -insert into test8(f1, f8) values (x'f1'::int, 0); +insert into test8(f1, f8) values (x'7f00007f'::int, 0); select t_bits, t_data from heap_page_items(get_raw_page('test8', 0)); select tuple_data_split('test8'::regclass, t_data, t_infomask, t_infomask2, t_bits) from heap_page_items(get_raw_page('test8', 0)); From ac3ff8b1d8f98da38c53a701e6397931080a39cf Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 4 Jan 2018 16:22:06 -0500 Subject: [PATCH 0781/1087] Fix build with older OpenSSL versions Apparently, X509_get_signature_nid() is only in fairly new OpenSSL versions, so use the lower-level interface it is built on instead. --- src/backend/libpq/be-secure-openssl.c | 2 +- src/interfaces/libpq/fe-secure-openssl.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index f75cc2ef18..8d0256ba07 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -1265,7 +1265,7 @@ be_tls_get_certificate_hash(Port *port, size_t *len) * Get the signature algorithm of the certificate to determine the * hash algorithm to use for the result. 
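The substitution below is a near-identity: X509_get_signature_nid() is essentially an accessor for the field the replacement reads directly. Roughly, with cert an X509 * as in the functions above (the direct read only compiles while OpenSSL still exposes the struct, which 1.1.0 stops doing):

int			sig_nid;

sig_nid = X509_get_signature_nid(cert);				/* OpenSSL >= 1.0.2 */
sig_nid = OBJ_obj2nid(cert->sig_alg->algorithm);	/* older releases */

That trade-off is why the next two commits in the series first add the includes needed to see the struct definition, and then give up and probe for the function at configure time instead.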
*/ - if (!OBJ_find_sigid_algs(X509_get_signature_nid(server_cert), + if (!OBJ_find_sigid_algs(OBJ_obj2nid(server_cert->sig_alg->algorithm), &algo_nid, NULL)) elog(ERROR, "could not determine server certificate signature algorithm"); diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index 52390640bf..ac2842cd06 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -447,7 +447,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len) * Get the signature algorithm of the certificate to determine the hash * algorithm to use for the result. */ - if (!OBJ_find_sigid_algs(X509_get_signature_nid(peer_cert), + if (!OBJ_find_sigid_algs(OBJ_obj2nid(peer_cert->sig_alg->algorithm), &algo_nid, NULL)) { printfPQExpBuffer(&conn->errorMessage, From ef6087ee5fa84206dc24ba1339e229354b05cf2a Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 4 Jan 2018 16:25:49 -0500 Subject: [PATCH 0782/1087] Minor preparatory refactoring for UPDATE row movement. Generalize is_partition_attr to has_partition_attrs and make it accessible from outside tablecmds.c. Change map_partition_varattnos to clarify that it can be used for mapping between any two relations in a partitioning hierarchy, not just parent -> child. Amit Khandekar, reviewed by Amit Langote, David Rowley, and me. Some comment changes by me. Discussion: http://postgr.es/m/CAJ3gD9fWfxgKC+PfJZF3hkgAcNOy-LpfPxVYitDEXKHjeieWQQ@mail.gmail.com --- src/backend/catalog/partition.c | 87 ++++++++++++++++++++++++++++---- src/backend/commands/tablecmds.c | 71 +++----------------------- src/include/catalog/partition.h | 6 ++- 3 files changed, 87 insertions(+), 77 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index ac9a2bda2e..8adc4ee977 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -1446,10 +1446,13 @@ get_qual_from_partbound(Relation rel, Relation parent, /* * map_partition_varattnos - maps varattno of any Vars in expr from the - * parent attno to partition attno. + * attno's of 'from_rel' to the attno's of 'to_rel' partition, each of which + * may be either a leaf partition or a partitioned table, but both of which + * must be from the same partitioning hierarchy. * - * We must allow for cases where physical attnos of a partition can be - * different from the parent's. + * Even though all of the same column names must be present in all relations + * in the hierarchy, and they must also have the same types, the attnos may + * be different. * * If found_whole_row is not NULL, *found_whole_row returns whether a * whole-row variable was found in the input expression. @@ -1459,8 +1462,8 @@ get_qual_from_partbound(Relation rel, Relation parent, * are working on Lists, so it's less messy to do the casts internally. 
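The attno skew this comment allows for arises naturally: a partition created as its own table and then attached may have a different column order, or dropped-column gaps, than its parent, even though ATTACH PARTITION requires matching names and types. A worked example of the map the function builds (hypothetical layouts):

/*
 * from_rel = parent, declared (a int, b int);
 * to_rel   = partition created as (b int, a int), then attached.
 *
 * convert_tuples_by_name_map() yields, indexed by from_rel attno,
 * the matching to_rel attno:
 *
 *     attmap[0] = 2		parent "a" is attno 2 in the partition
 *     attmap[1] = 1		parent "b" is attno 1 in the partition
 *
 * map_variable_attnos() then rewrites each Var of fromrel_varno through
 * that array, reporting via found_whole_row any whole-row Var it meets.
 */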
*/ List * -map_partition_varattnos(List *expr, int target_varno, - Relation partrel, Relation parent, +map_partition_varattnos(List *expr, int fromrel_varno, + Relation to_rel, Relation from_rel, bool *found_whole_row) { bool my_found_whole_row = false; @@ -1469,14 +1472,14 @@ map_partition_varattnos(List *expr, int target_varno, { AttrNumber *part_attnos; - part_attnos = convert_tuples_by_name_map(RelationGetDescr(partrel), - RelationGetDescr(parent), + part_attnos = convert_tuples_by_name_map(RelationGetDescr(to_rel), + RelationGetDescr(from_rel), gettext_noop("could not convert row type")); expr = (List *) map_variable_attnos((Node *) expr, - target_varno, 0, + fromrel_varno, 0, part_attnos, - RelationGetDescr(parent)->natts, - RelationGetForm(partrel)->reltype, + RelationGetDescr(from_rel)->natts, + RelationGetForm(to_rel)->reltype, &my_found_whole_row); } @@ -2598,6 +2601,70 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) return part_index; } +/* + * Checks if any of the 'attnums' is a partition key attribute for rel + * + * Sets *used_in_expr if any of the 'attnums' is found to be referenced in some + * partition key expression. It's possible for a column to be both used + * directly and as part of an expression; if that happens, *used_in_expr may + * end up as either true or false. That's OK for current uses of this + * function, because *used_in_expr is only used to tailor the error message + * text. + */ +bool +has_partition_attrs(Relation rel, Bitmapset *attnums, + bool *used_in_expr) +{ + PartitionKey key; + int partnatts; + List *partexprs; + ListCell *partexprs_item; + int i; + + if (attnums == NULL || rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE) + return false; + + key = RelationGetPartitionKey(rel); + partnatts = get_partition_natts(key); + partexprs = get_partition_exprs(key); + + partexprs_item = list_head(partexprs); + for (i = 0; i < partnatts; i++) + { + AttrNumber partattno = get_partition_col_attnum(key, i); + + if (partattno != 0) + { + if (bms_is_member(partattno - FirstLowInvalidHeapAttributeNumber, + attnums)) + { + if (used_in_expr) + *used_in_expr = false; + return true; + } + } + else + { + /* Arbitrary expression */ + Node *expr = (Node *) lfirst(partexprs_item); + Bitmapset *expr_attrs = NULL; + + /* Find all attributes referenced */ + pull_varattnos(expr, 1, &expr_attrs); + partexprs_item = lnext(partexprs_item); + + if (bms_overlap(attnums, expr_attrs)) + { + if (used_in_expr) + *used_in_expr = true; + return true; + } + } + } + + return false; +} + /* * qsort_partition_hbound_cmp * diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 62cf81e95a..f2a928b823 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -468,7 +468,6 @@ static void RangeVarCallbackForDropRelation(const RangeVar *rel, Oid relOid, Oid oldRelOid, void *arg); static void RangeVarCallbackForAlterRelation(const RangeVar *rv, Oid relid, Oid oldrelid, void *arg); -static bool is_partition_attr(Relation rel, AttrNumber attnum, bool *used_in_expr); static PartitionSpec *transformPartitionSpec(Relation rel, PartitionSpec *partspec, char *strategy); static void ComputePartitionAttrs(Relation rel, List *partParams, AttrNumber *partattrs, List **partexprs, Oid *partopclass, Oid *partcollation, char strategy); @@ -6491,68 +6490,6 @@ ATPrepDropColumn(List **wqueue, Relation rel, bool recurse, bool recursing, cmd->subtype = AT_DropColumnRecurse; } -/* - * Checks if attnum is a partition attribute for 
rel - * - * Sets *used_in_expr if attnum is found to be referenced in some partition - * key expression. It's possible for a column to be both used directly and - * as part of an expression; if that happens, *used_in_expr may end up as - * either true or false. That's OK for current uses of this function, because - * *used_in_expr is only used to tailor the error message text. - */ -static bool -is_partition_attr(Relation rel, AttrNumber attnum, bool *used_in_expr) -{ - PartitionKey key; - int partnatts; - List *partexprs; - ListCell *partexprs_item; - int i; - - if (rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE) - return false; - - key = RelationGetPartitionKey(rel); - partnatts = get_partition_natts(key); - partexprs = get_partition_exprs(key); - - partexprs_item = list_head(partexprs); - for (i = 0; i < partnatts; i++) - { - AttrNumber partattno = get_partition_col_attnum(key, i); - - if (partattno != 0) - { - if (attnum == partattno) - { - if (used_in_expr) - *used_in_expr = false; - return true; - } - } - else - { - /* Arbitrary expression */ - Node *expr = (Node *) lfirst(partexprs_item); - Bitmapset *expr_attrs = NULL; - - /* Find all attributes referenced */ - pull_varattnos(expr, 1, &expr_attrs); - partexprs_item = lnext(partexprs_item); - - if (bms_is_member(attnum - FirstLowInvalidHeapAttributeNumber, - expr_attrs)) - { - if (used_in_expr) - *used_in_expr = true; - return true; - } - } - } - - return false; -} - /* * Return value is the address of the dropped column. */ @@ -6613,7 +6550,9 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName, colName))); /* Don't drop columns used in the partition key */ - if (is_partition_attr(rel, attnum, &is_expr)) + if (has_partition_attrs(rel, + bms_make_singleton(attnum - FirstLowInvalidHeapAttributeNumber), + &is_expr)) { if (!is_expr) ereport(ERROR, @@ -8837,7 +8776,9 @@ ATPrepAlterColumnType(List **wqueue, colName))); /* Don't alter columns used in the partition key */ - if (is_partition_attr(rel, attnum, &is_expr)) + if (has_partition_attrs(rel, + bms_make_singleton(attnum - FirstLowInvalidHeapAttributeNumber), + &is_expr)) { if (!is_expr) ereport(ERROR, diff --git a/src/include/catalog/partition.h b/src/include/catalog/partition.h index ea0f549c9a..2faf0ca26e 100644 --- a/src/include/catalog/partition.h +++ b/src/include/catalog/partition.h @@ -54,11 +54,13 @@ extern void check_new_partition_bound(char *relname, Relation parent, extern Oid get_partition_parent(Oid relid); extern List *get_qual_from_partbound(Relation rel, Relation parent, PartitionBoundSpec *spec); -extern List *map_partition_varattnos(List *expr, int target_varno, - Relation partrel, Relation parent, +extern List *map_partition_varattnos(List *expr, int fromrel_varno, + Relation to_rel, Relation from_rel, bool *found_whole_row); extern List *RelationGetPartitionQual(Relation rel); extern Expr *get_partition_qual_relid(Oid relid); +extern bool has_partition_attrs(Relation rel, Bitmapset *attnums, + bool *used_in_expr); extern Oid get_default_oid_from_partdesc(PartitionDesc partdesc); extern Oid get_default_partition_oid(Oid parentId); From 1834c1e432d22f9e186950c7dd8598958776e016 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 4 Jan 2018 17:55:14 -0500 Subject: [PATCH 0783/1087] Add missing includes is necessary to look into the X509 struct, used by ac3ff8b1d8f98da38c53a701e6397931080a39cf. 
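The includes matter because, before OpenSSL 1.1.0, the X509 struct definition lives in the public x509 header, and the fallback introduced by the previous commit dereferences that struct directly. One could imagine hiding the version dance behind a shim; a hypothetical sketch, not what the tree ends up doing, since the very next commit abandons struct access altogether:

#ifndef HAVE_X509_GET_SIGNATURE_NID		/* hypothetical guard */
/* Pre-1.1.0 only: X509 must be a complete type for this to compile. */
#define X509_get_signature_nid(x)  OBJ_obj2nid((x)->sig_alg->algorithm)
#endif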
--- src/backend/libpq/be-secure-openssl.c | 1 + src/interfaces/libpq/fe-secure-openssl.c | 1 + 2 files changed, 2 insertions(+) diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index 8d0256ba07..dff61776bd 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -57,6 +57,7 @@ #ifndef OPENSSL_NO_ECDH #include <openssl/ec.h> #endif +#include <openssl/x509.h> #include "libpq/libpq.h" #include "miscadmin.h" diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index ac2842cd06..ecd68061a2 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -58,6 +58,7 @@ #ifdef USE_SSL_ENGINE #include <openssl/engine.h> #endif +#include <openssl/x509.h> #include <openssl/x509v3.h> static bool verify_peer_name_matches_certificate(PGconn *); From 054e8c6cdb7f4261869e49d3ed7705cca475182e Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 4 Jan 2018 19:09:27 -0500 Subject: [PATCH 0784/1087] Another attempt at fixing build with various OpenSSL versions It seems we can't easily work around the lack of X509_get_signature_nid(), so revert the previous attempts and just disable the tls-server-end-point feature if we don't have it. --- configure | 9 +++++---- configure.in | 2 +- src/backend/libpq/be-secure-openssl.c | 10 ++++++++-- src/include/pg_config.h.in | 3 +++ src/interfaces/libpq/fe-secure-openssl.c | 9 +++++++-- 5 files changed, 24 insertions(+), 9 deletions(-) diff --git a/configure b/configure index d88863e50c..45221e1ea3 100755 --- a/configure +++ b/configure @@ -10125,12 +10125,13 @@ else fi fi - for ac_func in SSL_get_current_compression + for ac_func in SSL_get_current_compression X509_get_signature_nid do : - ac_fn_c_check_func "$LINENO" "SSL_get_current_compression" "ac_cv_func_SSL_get_current_compression" -if test "x$ac_cv_func_SSL_get_current_compression" = xyes; then : + as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` +ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" +if eval test \"x\$"$as_ac_var"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF -#define HAVE_SSL_GET_CURRENT_COMPRESSION 1 +#define `$as_echo "HAVE_$ac_func" | $as_tr_cpp` 1 _ACEOF fi diff --git a/configure.in b/configure.in index 4968b67bf9..4d26034579 100644 --- a/configure.in +++ b/configure.in @@ -1064,7 +1064,7 @@ if test "$with_openssl" = yes ; then AC_SEARCH_LIBS(CRYPTO_new_ex_data, [eay32 crypto], [], [AC_MSG_ERROR([library 'eay32' or 'crypto' is required for OpenSSL])]) AC_SEARCH_LIBS(SSL_new, [ssleay32 ssl], [], [AC_MSG_ERROR([library 'ssleay32' or 'ssl' is required for OpenSSL])]) fi - AC_CHECK_FUNCS([SSL_get_current_compression]) + AC_CHECK_FUNCS([SSL_get_current_compression X509_get_signature_nid]) # Functions introduced in OpenSSL 1.1.0. 
We used to check for # OPENSSL_VERSION_NUMBER, but that didn't work with 1.1.0, because LibreSSL # defines OPENSSL_VERSION_NUMBER to claim version 2.0.0, even though it diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index dff61776bd..c2032c2f30 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -57,7 +57,6 @@ #ifndef OPENSSL_NO_ECDH #include <openssl/ec.h> #endif -#include <openssl/x509.h> #include "libpq/libpq.h" #include "miscadmin.h" @@ -1250,6 +1249,7 @@ be_tls_get_peer_finished(Port *port, size_t *len) char * be_tls_get_certificate_hash(Port *port, size_t *len) { +#ifdef HAVE_X509_GET_SIGNATURE_NID X509 *server_cert; char *cert_hash; const EVP_MD *algo_type = NULL; @@ -1266,7 +1266,7 @@ be_tls_get_certificate_hash(Port *port, size_t *len) * Get the signature algorithm of the certificate to determine the * hash algorithm to use for the result. */ - if (!OBJ_find_sigid_algs(OBJ_obj2nid(server_cert->sig_alg->algorithm), + if (!OBJ_find_sigid_algs(X509_get_signature_nid(server_cert), &algo_nid, NULL)) elog(ERROR, "could not determine server certificate signature algorithm"); @@ -1299,6 +1299,12 @@ be_tls_get_certificate_hash(Port *port, size_t *len) *len = hash_size; return cert_hash; +#else + ereport(ERROR, + (errcode(ERRCODE_PROTOCOL_VIOLATION), + errmsg("channel binding type \"tls-server-end-point\" is not supported by this build"))); + return NULL; +#endif } /* diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in index 27b1368721..f98f773ff0 100644 --- a/src/include/pg_config.h.in +++ b/src/include/pg_config.h.in @@ -681,6 +681,9 @@ /* Define to 1 if you have the <winldap.h> header file. */ #undef HAVE_WINLDAP_H +/* Define to 1 if you have the `X509_get_signature_nid' function. */ +#undef HAVE_X509_GET_SIGNATURE_NID + /* Define to 1 if your compiler understands __builtin_bswap16. */ #undef HAVE__BUILTIN_BSWAP16 diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index ecd68061a2..b50bfd144a 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -58,7 +58,6 @@ #ifdef USE_SSL_ENGINE #include <openssl/engine.h> #endif -#include <openssl/x509.h> #include <openssl/x509v3.h> static bool verify_peer_name_matches_certificate(PGconn *); @@ -430,6 +429,7 @@ pgtls_get_finished(PGconn *conn, size_t *len) char * pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len) { +#ifdef HAVE_X509_GET_SIGNATURE_NID X509 *peer_cert; const EVP_MD *algo_type; unsigned char hash[EVP_MAX_MD_SIZE]; /* size for SHA-512 */ unsigned int hash_size; int algo_nid; char *cert_hash; *len = 0; if (!conn->peer) return NULL; peer_cert = conn->peer; /* * Get the signature algorithm of the certificate to determine the hash * algorithm to use for the result. 
*/ - if (!OBJ_find_sigid_algs(OBJ_obj2nid(peer_cert->sig_alg->algorithm), + if (!OBJ_find_sigid_algs(X509_get_signature_nid(peer_cert), &algo_nid, NULL)) { printfPQExpBuffer(&conn->errorMessage, @@ -499,6 +499,11 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len) *len = hash_size; return cert_hash; +#else + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("channel binding type \"tls-server-end-point\" is not supported by this build\n")); + return NULL; +#endif } /* ------------------------------------------------------------ */ From df9f682c7bf81674b6ae3900fd0146f35df0ae2e Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 5 Jan 2018 12:17:10 -0300 Subject: [PATCH 0785/1087] Fix failure to delete spill files of aborted transactions Logical decoding's reorderbuffer.c may spill transaction files to disk when transactions are large. These are supposed to be removed when they become "too old" by xid; but file removal requires the boundary LSNs of the transaction to be known. The final_lsn is only set when we see the commit or abort record for the transaction, but nothing sets the value for transactions that crash, so the removal code misbehaves -- in assertion-enabled builds, it crashes by a failed assertion. To fix, modify the final_lsn of transactions that don't have a value set, to the LSN of the very latest change in the transaction. This causes the spilled files to be removed appropriately. Author: Atsushi Torikoshi Reviewed-by: Kyotaro HORIGUCHI, Craig Ringer, Masahiko Sawada Discussion: https://postgr.es/m/54e4e488-186b-a056-6628-50628e4e4ebc@lab.ntt.co.jp --- .../replication/logical/reorderbuffer.c | 19 +++++++++++++++++-- src/include/replication/reorderbuffer.h | 2 ++ 2 files changed, 19 insertions(+), 2 deletions(-) diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c index fcc41ddb7c..1208da2972 100644 --- a/src/backend/replication/logical/reorderbuffer.c +++ b/src/backend/replication/logical/reorderbuffer.c @@ -1670,8 +1670,8 @@ ReorderBufferAbortOld(ReorderBuffer *rb, TransactionId oldestRunningXid) * Iterate through all (potential) toplevel TXNs and abort all that are * older than what possibly can be running. Once we've found the first * that is alive we stop, there might be some that acquired an xid earlier - * but started writing later, but it's unlikely and they will cleaned up - * in a later call to ReorderBufferAbortOld(). + * but started writing later, but it's unlikely and they will be cleaned + * up in a later call to this function. */ dlist_foreach_modify(it, &rb->toplevel_by_lsn) { @@ -1681,6 +1681,21 @@ ReorderBufferAbortOld(ReorderBuffer *rb, TransactionId oldestRunningXid) if (TransactionIdPrecedes(txn->xid, oldestRunningXid)) { + /* + * We set final_lsn on a transaction when we decode its commit or + * abort record, but we never see those records for crashed + * transactions. To ensure cleanup of these transactions, set + * final_lsn to that of their last change; this causes + * ReorderBufferRestoreCleanup to do the right thing. 
+ */ + if (txn->serialized && txn->final_lsn == 0) + { + ReorderBufferChange *last = + dlist_tail_element(ReorderBufferChange, node, &txn->changes); + + txn->final_lsn = last->lsn; + } + elog(DEBUG2, "aborting old transaction %u", txn->xid); /* remove potential on-disk data, and deallocate this tx */ diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h index f52a88dcd6..0970abca52 100644 --- a/src/include/replication/reorderbuffer.h +++ b/src/include/replication/reorderbuffer.h @@ -168,6 +168,8 @@ typedef struct ReorderBufferTXN * * plain abort record * * prepared transaction abort * * error during decoding + * * for a crashed transaction, the LSN of the last change, regardless of + * what it was. * ---- */ XLogRecPtr final_lsn; From 959ee6d267fb24e667fc64e9837a376e236e84a5 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Fri, 5 Jan 2018 14:11:15 -0500 Subject: [PATCH 0786/1087] pg_upgrade: simplify code layout in a few places Backpatch-through: 9.4 (9.3 didn't need improving) --- src/bin/pg_upgrade/exec.c | 2 -- src/bin/pg_upgrade/server.c | 8 ++------ 2 files changed, 2 insertions(+), 8 deletions(-) diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c index 1fa56b8c61..ea45434222 100644 --- a/src/bin/pg_upgrade/exec.c +++ b/src/bin/pg_upgrade/exec.c @@ -112,7 +112,6 @@ exec_prog(const char *log_file, const char *opt_log_file, pg_log(PG_VERBOSE, "%s\n", cmd); #ifdef WIN32 - /* * For some reason, Windows issues a file-in-use error if we write data to * the log file from a non-primary thread just before we create a @@ -194,7 +193,6 @@ exec_prog(const char *log_file, const char *opt_log_file, } #ifndef WIN32 - /* * We can't do this on Windows because it will keep the "pg_ctl start" * output filename open until the server stops, so we do the \n\n above on diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c index 96181237f8..1b399a9841 100644 --- a/src/bin/pg_upgrade/server.c +++ b/src/bin/pg_upgrade/server.c @@ -310,12 +310,8 @@ start_postmaster(ClusterInfo *cluster, bool throw_error) * running. */ if (!pg_ctl_return) - { - if (cluster == &old_cluster) - pg_fatal("pg_ctl failed to start the source server, or connection failed\n"); - else - pg_fatal("pg_ctl failed to start the target server, or connection failed\n"); - } + pg_fatal("pg_ctl failed to start the %s server, or connection failed\n", + cluster == &old_cluster ? "source" : "target"); return true; } From 3e6f01fd7d9b01b17626a6bc38cf664354eede71 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Fri, 5 Jan 2018 14:46:27 -0500 Subject: [PATCH 0787/1087] pg_upgrade: revert part of patch for ease of translation Revert part of 959ee6d267fb24e667fc64e9837a376e236e84a5 . Backpatch-through: 10 --- src/bin/pg_upgrade/server.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c index 1b399a9841..74ebaed5b2 100644 --- a/src/bin/pg_upgrade/server.c +++ b/src/bin/pg_upgrade/server.c @@ -310,8 +310,13 @@ start_postmaster(ClusterInfo *cluster, bool throw_error) * running. */ if (!pg_ctl_return) - pg_fatal("pg_ctl failed to start the %s server, or connection failed\n", - cluster == &old_cluster ? 
"source" : "target"); + { + /* keep error strings separate to ease translation */ + if (cluster == &old_cluster) + pg_fatal("pg_ctl failed to start the source server, or connection failed\n"); + else + pg_fatal("pg_ctl failed to start the target server, or connection failed\n"); + } return true; } From 84a6f63e32dbefe3dc76cbe628fab6cbfc26141e Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Fri, 5 Jan 2018 14:49:36 -0500 Subject: [PATCH 0788/1087] pg_upgrade: remove C comment Revert another part of 959ee6d267fb24e667fc64e9837a376e236e84a5 . Backpatch-through: 10 --- src/bin/pg_upgrade/server.c | 1 - 1 file changed, 1 deletion(-) diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c index 74ebaed5b2..96181237f8 100644 --- a/src/bin/pg_upgrade/server.c +++ b/src/bin/pg_upgrade/server.c @@ -311,7 +311,6 @@ start_postmaster(ClusterInfo *cluster, bool throw_error) */ if (!pg_ctl_return) { - /* keep error strings separate to ease translation */ if (cluster == &old_cluster) pg_fatal("pg_ctl failed to start the source server, or connection failed\n"); else From 19c47e7c820241e1befd975cb4411af7d43e1309 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 5 Jan 2018 15:18:03 -0500 Subject: [PATCH 0789/1087] Factor error generation out of ExecPartitionCheck. At present, we always raise an ERROR if the partition constraint is violated, but a pending patch for UPDATE tuple routing will consider instead moving the tuple to the correct partition. Refactor to make that simpler. Amit Khandekar, reviewed by Amit Langote, David Rowley, and me. Discussion: http://postgr.es/m/CAJ3gD9cue54GbEzfV-61nyGpijvjZgCcghvLsB0_nL8Nm8HzCA@mail.gmail.com --- src/backend/commands/copy.c | 2 +- src/backend/executor/execMain.c | 107 ++++++++++++++----------- src/backend/executor/execPartition.c | 5 +- src/backend/executor/execReplication.c | 4 +- src/backend/executor/nodeModifyTable.c | 4 +- src/include/executor/executor.h | 7 +- 6 files changed, 74 insertions(+), 55 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 66cbff7ead..6bfca2a4af 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -2731,7 +2731,7 @@ CopyFrom(CopyState cstate) /* Check the constraints of the tuple */ if (cstate->rel->rd_att->constr || check_partition_constr) - ExecConstraints(resultRelInfo, slot, estate); + ExecConstraints(resultRelInfo, slot, estate, true); if (useHeapMultiInsert) { diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index d8bc5028e8..16822e962a 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -1849,16 +1849,12 @@ ExecRelCheck(ResultRelInfo *resultRelInfo, * ExecPartitionCheck --- check that tuple meets the partition constraint. * * Exported in executor.h for outside use. + * Returns true if it meets the partition constraint, else returns false. */ -void +bool ExecPartitionCheck(ResultRelInfo *resultRelInfo, TupleTableSlot *slot, EState *estate) { - Relation rel = resultRelInfo->ri_RelationDesc; - TupleDesc tupdesc = RelationGetDescr(rel); - Bitmapset *modifiedCols; - Bitmapset *insertedCols; - Bitmapset *updatedCols; ExprContext *econtext; /* @@ -1886,52 +1882,69 @@ ExecPartitionCheck(ResultRelInfo *resultRelInfo, TupleTableSlot *slot, * As in case of the catalogued constraints, we treat a NULL result as * success here, not a failure. 
*/ - if (!ExecCheck(resultRelInfo->ri_PartitionCheckExpr, econtext)) - { - char *val_desc; - Relation orig_rel = rel; + return ExecCheck(resultRelInfo->ri_PartitionCheckExpr, econtext); +} + +/* + * ExecPartitionCheckEmitError - Form and emit an error message after a failed + * partition constraint check. + */ +void +ExecPartitionCheckEmitError(ResultRelInfo *resultRelInfo, + TupleTableSlot *slot, + EState *estate) +{ + Relation rel = resultRelInfo->ri_RelationDesc; + Relation orig_rel = rel; + TupleDesc tupdesc = RelationGetDescr(rel); + char *val_desc; + Bitmapset *modifiedCols; + Bitmapset *insertedCols; + Bitmapset *updatedCols; - /* See the comment above. */ - if (resultRelInfo->ri_PartitionRoot) + /* + * Need to first convert the tuple to the root partitioned table's row + * type. For details, check similar comments in ExecConstraints(). + */ + if (resultRelInfo->ri_PartitionRoot) + { + HeapTuple tuple = ExecFetchSlotTuple(slot); + TupleDesc old_tupdesc = RelationGetDescr(rel); + TupleConversionMap *map; + + rel = resultRelInfo->ri_PartitionRoot; + tupdesc = RelationGetDescr(rel); + /* a reverse map */ + map = convert_tuples_by_name(old_tupdesc, tupdesc, + gettext_noop("could not convert row type")); + if (map != NULL) { - HeapTuple tuple = ExecFetchSlotTuple(slot); - TupleDesc old_tupdesc = RelationGetDescr(rel); - TupleConversionMap *map; - - rel = resultRelInfo->ri_PartitionRoot; - tupdesc = RelationGetDescr(rel); - /* a reverse map */ - map = convert_tuples_by_name(old_tupdesc, tupdesc, - gettext_noop("could not convert row type")); - if (map != NULL) - { - tuple = do_convert_tuple(tuple, map); - ExecSetSlotDescriptor(slot, tupdesc); - ExecStoreTuple(tuple, slot, InvalidBuffer, false); - } + tuple = do_convert_tuple(tuple, map); + ExecSetSlotDescriptor(slot, tupdesc); + ExecStoreTuple(tuple, slot, InvalidBuffer, false); } - - insertedCols = GetInsertedColumns(resultRelInfo, estate); - updatedCols = GetUpdatedColumns(resultRelInfo, estate); - modifiedCols = bms_union(insertedCols, updatedCols); - val_desc = ExecBuildSlotValueDescription(RelationGetRelid(rel), - slot, - tupdesc, - modifiedCols, - 64); - ereport(ERROR, - (errcode(ERRCODE_CHECK_VIOLATION), - errmsg("new row for relation \"%s\" violates partition constraint", - RelationGetRelationName(orig_rel)), - val_desc ? errdetail("Failing row contains %s.", val_desc) : 0)); } + + insertedCols = GetInsertedColumns(resultRelInfo, estate); + updatedCols = GetUpdatedColumns(resultRelInfo, estate); + modifiedCols = bms_union(insertedCols, updatedCols); + val_desc = ExecBuildSlotValueDescription(RelationGetRelid(rel), + slot, + tupdesc, + modifiedCols, + 64); + ereport(ERROR, + (errcode(ERRCODE_CHECK_VIOLATION), + errmsg("new row for relation \"%s\" violates partition constraint", + RelationGetRelationName(orig_rel)), + val_desc ? errdetail("Failing row contains %s.", val_desc) : 0)); } /* * ExecConstraints - check constraints of the tuple in 'slot' * - * This checks the traditional NOT NULL and check constraints, as well as - * the partition constraint, if any. + * This checks the traditional NOT NULL and check constraints, and if + * requested, checks the partition constraint. * * Note: 'slot' contains the tuple to check the constraints of, which may * have been converted from the original input tuple after tuple routing. 
@@ -1939,7 +1952,8 @@ ExecPartitionCheck(ResultRelInfo *resultRelInfo, TupleTableSlot *slot, */ void ExecConstraints(ResultRelInfo *resultRelInfo, - TupleTableSlot *slot, EState *estate) + TupleTableSlot *slot, EState *estate, + bool check_partition_constraint) { Relation rel = resultRelInfo->ri_RelationDesc; TupleDesc tupdesc = RelationGetDescr(rel); @@ -2055,8 +2069,9 @@ ExecConstraints(ResultRelInfo *resultRelInfo, } } - if (resultRelInfo->ri_PartitionCheck) - ExecPartitionCheck(resultRelInfo, slot, estate); + if (check_partition_constraint && resultRelInfo->ri_PartitionCheck && + !ExecPartitionCheck(resultRelInfo, slot, estate)) + ExecPartitionCheckEmitError(resultRelInfo, slot, estate); } diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index 115be02635..8c0d2df63c 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -167,8 +167,9 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, * First check the root table's partition constraint, if any. No point in * routing the tuple if it doesn't belong in the root table itself. */ - if (resultRelInfo->ri_PartitionCheck) - ExecPartitionCheck(resultRelInfo, slot, estate); + if (resultRelInfo->ri_PartitionCheck && + !ExecPartitionCheck(resultRelInfo, slot, estate)) + ExecPartitionCheckEmitError(resultRelInfo, slot, estate); /* start with the root partitioned table */ parent = pd[0]; diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c index 732ed42fe5..32891abbdf 100644 --- a/src/backend/executor/execReplication.c +++ b/src/backend/executor/execReplication.c @@ -401,7 +401,7 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot) /* Check the constraints of the tuple */ if (rel->rd_att->constr) - ExecConstraints(resultRelInfo, slot, estate); + ExecConstraints(resultRelInfo, slot, estate, true); /* Store the slot into tuple that we can inspect. */ tuple = ExecMaterializeSlot(slot); @@ -466,7 +466,7 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate, /* Check the constraints of the tuple */ if (rel->rd_att->constr) - ExecConstraints(resultRelInfo, slot, estate); + ExecConstraints(resultRelInfo, slot, estate, true); /* Store the slot into tuple that we can write. */ tuple = ExecMaterializeSlot(slot); diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 95e0748d8f..55dff5b21a 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -487,7 +487,7 @@ ExecInsert(ModifyTableState *mtstate, /* Check the constraints of the tuple */ if (resultRelationDesc->rd_att->constr || check_partition_constr) - ExecConstraints(resultRelInfo, slot, estate); + ExecConstraints(resultRelInfo, slot, estate, true); if (onconflict != ONCONFLICT_NONE && resultRelInfo->ri_NumIndices > 0) { @@ -1049,7 +1049,7 @@ lreplace:; * tuple-routing is performed here, hence the slot remains unchanged. 
*/ if (resultRelationDesc->rd_att->constr || resultRelInfo->ri_PartitionCheck) - ExecConstraints(resultRelInfo, slot, estate); + ExecConstraints(resultRelInfo, slot, estate, true); /* * replace the heap tuple diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index e6569e1038..a782fae0f8 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -187,9 +187,12 @@ extern ResultRelInfo *ExecGetTriggerResultRel(EState *estate, Oid relid); extern void ExecCleanUpTriggerState(EState *estate); extern bool ExecContextForcesOids(PlanState *planstate, bool *hasoids); extern void ExecConstraints(ResultRelInfo *resultRelInfo, - TupleTableSlot *slot, EState *estate); -extern void ExecPartitionCheck(ResultRelInfo *resultRelInfo, + TupleTableSlot *slot, EState *estate, + bool check_partition_constraint); +extern bool ExecPartitionCheck(ResultRelInfo *resultRelInfo, TupleTableSlot *slot, EState *estate); +extern void ExecPartitionCheckEmitError(ResultRelInfo *resultRelInfo, + TupleTableSlot *slot, EState *estate); extern void ExecWithCheckOptions(WCOKind kind, ResultRelInfo *resultRelInfo, TupleTableSlot *slot, EState *estate); extern LockTupleMode ExecUpdateLockMode(EState *estate, ResultRelInfo *relinfo); From aced5a92bf46532466417ab485bc94006cf60d91 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 5 Jan 2018 19:21:30 -0500 Subject: [PATCH 0790/1087] Rewrite ConditionVariableBroadcast() to avoid live-lock. The original implementation of ConditionVariableBroadcast was, per its self-description, "the dumbest way possible". Thomas Munro found out it was a bit too dumb. An awakened process may immediately re-queue itself, if the specific condition it's waiting for is not yet satisfied. If this happens before ConditionVariableBroadcast is able to see the wait queue as empty, then ConditionVariableBroadcast will re-awaken the same process, repeating the cycle. Given unlucky timing this back-and-forth can repeat indefinitely; loops lasting thousands of seconds have been seen in testing. To fix, add our own process to the end of the wait queue to serve as a sentinel, and exit the broadcast loop once our process is not there anymore. There are various special considerations described in the comments, the principal disadvantage being that wakers can no longer be sure whether they awakened a real waiter or just a sentinel. But in practice nobody pays attention to the result of ConditionVariableSignal or ConditionVariableBroadcast anyway, so that problem seems hypothetical. Back-patch to v10 where condition_variable.c was introduced. Tom Lane and Thomas Munro Discussion: https://postgr.es/m/CAEepm=0NWKehYw7NDoUSf8juuKOPRnCyY3vuaSvhrEWsOTAa3w@mail.gmail.com --- src/backend/storage/lmgr/condition_variable.c | 82 +++++++++++++++++-- 1 file changed, 77 insertions(+), 5 deletions(-) diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c index 41378d614a..55275cfafc 100644 --- a/src/backend/storage/lmgr/condition_variable.c +++ b/src/backend/storage/lmgr/condition_variable.c @@ -214,15 +214,87 @@ int ConditionVariableBroadcast(ConditionVariable *cv) { int nwoken = 0; + int pgprocno = MyProc->pgprocno; + PGPROC *proc = NULL; + bool have_sentinel = false; + + /* + * In some use-cases, it is common for awakened processes to immediately + * re-queue themselves. If we just naively try to reduce the wakeup list + * to empty, we'll get into a potentially-indefinite loop against such a + * process. 
The semantics we really want are just to be sure that we have
+	 * wakened all processes that were in the list at entry.  We can use our
+	 * own cvWaitLink as a sentinel to detect when we've finished.
+	 *
+	 * A seeming flaw in this approach is that someone else might signal the
+	 * CV and in doing so remove our sentinel entry.  But that's fine: since
+	 * CV waiters are always added and removed in order, that must mean that
+	 * every previous waiter has been wakened, so we're done.  We'll get an
+	 * extra "set" on our latch from someone else's signal, which is
+	 * slightly inefficient but harmless.
+	 *
+	 * We can't insert our cvWaitLink as a sentinel if it's already in use in
+	 * some other proclist.  While that's not expected to be true for typical
+	 * uses of this function, we can deal with it by simply canceling any
+	 * prepared CV sleep.  The next call to ConditionVariableSleep will take
+	 * care of re-establishing the lost state.
+	 */
+	ConditionVariableCancelSleep();

 	/*
-	 * Let's just do this the dumbest way possible.  We could try to dequeue
-	 * all the sleepers at once to save spinlock cycles, but it's a bit hard
-	 * to get that right in the face of possible sleep cancelations, and we
-	 * don't want to loop holding the mutex.
+	 * Inspect the state of the queue.  If it's empty, we have nothing to do.
+	 * If there's exactly one entry, we need only remove and signal that
+	 * entry.  Otherwise, remove the first entry and insert our sentinel.
 	 */
-	while (ConditionVariableSignal(cv))
+	SpinLockAcquire(&cv->mutex);
+	/* While we're here, let's assert we're not in the list. */
+	Assert(!proclist_contains(&cv->wakeup, pgprocno, cvWaitLink));
+
+	if (!proclist_is_empty(&cv->wakeup))
+	{
+		proc = proclist_pop_head_node(&cv->wakeup, cvWaitLink);
+		if (!proclist_is_empty(&cv->wakeup))
+		{
+			proclist_push_tail(&cv->wakeup, pgprocno, cvWaitLink);
+			have_sentinel = true;
+		}
+	}
+	SpinLockRelease(&cv->mutex);
+
+	/* Awaken first waiter, if there was one. */
+	if (proc != NULL)
+	{
+		SetLatch(&proc->procLatch);
 		++nwoken;
+	}
+
+	while (have_sentinel)
+	{
+		/*
+		 * Each time through the loop, remove the first wakeup list entry, and
+		 * signal it unless it's our sentinel.  Repeat as long as the sentinel
+		 * remains in the list.
+		 *
+		 * Notice that if someone else removes our sentinel, we will waken one
+		 * additional process before exiting.  That's intentional, because if
+		 * someone else signals the CV, they may be intending to waken some
+		 * third process that added itself to the list after we added the
+		 * sentinel.  Better to give a spurious wakeup (which should be
+		 * harmless beyond wasting some cycles) than to lose a wakeup.
+		 */
+		proc = NULL;
+		SpinLockAcquire(&cv->mutex);
+		if (!proclist_is_empty(&cv->wakeup))
+			proc = proclist_pop_head_node(&cv->wakeup, cvWaitLink);
+		have_sentinel = proclist_contains(&cv->wakeup, pgprocno, cvWaitLink);
+		SpinLockRelease(&cv->mutex);
+
+		if (proc != NULL && proc != MyProc)
+		{
+			SetLatch(&proc->procLatch);
+			++nwoken;
+		}
+	}

 	return nwoken;
 }

From 3cac0ec85992829c160bdd8a370dd4676d42f58c Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Fri, 5 Jan 2018 19:42:49 -0500
Subject: [PATCH 0791/1087] Reorder steps in ConditionVariablePrepareToSleep
 for more safety.

In the admittedly-very-unlikely case that AddWaitEventToSet fails,
ConditionVariablePrepareToSleep would error out after already having set
cv_sleep_target, which is probably bad, and after having already set
cv_wait_event_set, which is very bad.
Transaction abort might or might not clean up cv_sleep_target properly; but there is nothing that would be aware that the WaitEventSet wasn't fully constructed, so that all future condition variable sleeps would be broken. We can easily guard against these hazards with slight restructuring. Back-patch to v10 where condition_variable.c was introduced. Discussion: https://postgr.es/m/CAEepm=0NWKehYw7NDoUSf8juuKOPRnCyY3vuaSvhrEWsOTAa3w@mail.gmail.com --- src/backend/storage/lmgr/condition_variable.c | 23 ++++++++++++------- 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c index 55275cfafc..cac3d36b6a 100644 --- a/src/backend/storage/lmgr/condition_variable.c +++ b/src/backend/storage/lmgr/condition_variable.c @@ -54,6 +54,21 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv) { int pgprocno = MyProc->pgprocno; + /* + * If first time through in this process, create a WaitEventSet, which + * we'll reuse for all condition variable sleeps. + */ + if (cv_wait_event_set == NULL) + { + WaitEventSet *new_event_set; + + new_event_set = CreateWaitEventSet(TopMemoryContext, 1); + AddWaitEventToSet(new_event_set, WL_LATCH_SET, PGINVALID_SOCKET, + MyLatch, NULL); + /* Don't set cv_wait_event_set until we have a correct WES. */ + cv_wait_event_set = new_event_set; + } + /* * It's not legal to prepare a sleep until the previous sleep has been * completed or canceled. @@ -63,14 +78,6 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv) /* Record the condition variable on which we will sleep. */ cv_sleep_target = cv; - /* Create a reusable WaitEventSet. */ - if (cv_wait_event_set == NULL) - { - cv_wait_event_set = CreateWaitEventSet(TopMemoryContext, 1); - AddWaitEventToSet(cv_wait_event_set, WL_LATCH_SET, PGINVALID_SOCKET, - MyLatch, NULL); - } - /* * Reset my latch before adding myself to the queue and before entering * the caller's predicate loop. From ccf312a4488ab8bb38dfd87168bf8915045d1a82 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 5 Jan 2018 20:33:26 -0500 Subject: [PATCH 0792/1087] Remove return values of ConditionVariableSignal/Broadcast. In the wake of commit aced5a92b, the semantics of these results are a bit squishy: we can tell whether we signaled some other process(es), but we do not know which ones were real waiters versus mere sentinels for ConditionVariableBroadcast operations. It does not help much that ConditionVariableBroadcast will attempt to pass on the signal to the next real waiter, because (a) there might not be one, and (b) that will only happen awhile later, anyway. So these results could overstate how much effect the calls really had. However, no existing caller of either function pays any attention to its result value, so it seems reasonable to just define that as a required property of a correct algorithm. To encourage correctness and save some tiny number of cycles, change both functions to return void. Patch by me, per an observation by Thomas Munro. No back-patch, since if any third parties happen to be using these functions, they might not appreciate an API break in a minor release. 
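For reference, callers use these functions only through the canonical
predicate loop (a sketch; cond_is_met() is a placeholder for the caller's
exit condition, and wait_event_info for its pgstat wait event value):

	ConditionVariablePrepareToSleep(cv);	/* optional */
	while (!cond_is_met())
		ConditionVariableSleep(cv, wait_event_info);
	ConditionVariableCancelSleep();

Since no caller inspects the signaling functions' results inside such a
loop, dropping the return values costs nothing.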
Discussion: https://postgr.es/m/CAEepm=0NWKehYw7NDoUSf8juuKOPRnCyY3vuaSvhrEWsOTAa3w@mail.gmail.com --- src/backend/storage/lmgr/condition_variable.c | 32 +++++++------------ src/include/storage/condition_variable.h | 4 +-- 2 files changed, 13 insertions(+), 23 deletions(-) diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c index cac3d36b6a..60234db4cd 100644 --- a/src/backend/storage/lmgr/condition_variable.c +++ b/src/backend/storage/lmgr/condition_variable.c @@ -186,11 +186,14 @@ ConditionVariableCancelSleep(void) } /* - * Wake up one sleeping process, assuming there is at least one. + * Wake up the oldest process sleeping on the CV, if there is any. * - * The return value indicates whether or not we woke somebody up. + * Note: it's difficult to tell whether this has any real effect: we know + * whether we took an entry off the list, but the entry might only be a + * sentinel. Hence, think twice before proposing that this should return + * a flag telling whether it woke somebody. */ -bool +void ConditionVariableSignal(ConditionVariable *cv) { PGPROC *proc = NULL; @@ -203,24 +206,19 @@ ConditionVariableSignal(ConditionVariable *cv) /* If we found someone sleeping, set their latch to wake them up. */ if (proc != NULL) - { SetLatch(&proc->procLatch); - return true; - } - - /* No sleeping processes. */ - return false; } /* - * Wake up all sleeping processes. + * Wake up all processes sleeping on the given CV. * - * The return value indicates the number of processes we woke. + * This guarantees to wake all processes that were sleeping on the CV + * at time of call, but processes that add themselves to the list mid-call + * will typically not get awakened. */ -int +void ConditionVariableBroadcast(ConditionVariable *cv) { - int nwoken = 0; int pgprocno = MyProc->pgprocno; PGPROC *proc = NULL; bool have_sentinel = false; @@ -270,10 +268,7 @@ ConditionVariableBroadcast(ConditionVariable *cv) /* Awaken first waiter, if there was one. */ if (proc != NULL) - { SetLatch(&proc->procLatch); - ++nwoken; - } while (have_sentinel) { @@ -297,11 +292,6 @@ ConditionVariableBroadcast(ConditionVariable *cv) SpinLockRelease(&cv->mutex); if (proc != NULL && proc != MyProc) - { SetLatch(&proc->procLatch); - ++nwoken; - } } - - return nwoken; } diff --git a/src/include/storage/condition_variable.h b/src/include/storage/condition_variable.h index f9f93e0d4a..c7afbbca42 100644 --- a/src/include/storage/condition_variable.h +++ b/src/include/storage/condition_variable.h @@ -53,7 +53,7 @@ extern void ConditionVariableCancelSleep(void); extern void ConditionVariablePrepareToSleep(ConditionVariable *); /* Wake up a single waiter (via signal) or all waiters (via broadcast). */ -extern bool ConditionVariableSignal(ConditionVariable *); -extern int ConditionVariableBroadcast(ConditionVariable *); +extern void ConditionVariableSignal(ConditionVariable *cv); +extern void ConditionVariableBroadcast(ConditionVariable *cv); #endif /* CONDITION_VARIABLE_H */ From 6668a54eb8ef639a3182ae9e37e4e67982c44292 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Sat, 6 Jan 2018 11:48:21 +0000 Subject: [PATCH 0793/1087] Default monitoring roles - errata 25fff40798fc4ac11a241bfd9ab0c45c085e2212 introduced default monitoring roles. Apply these corrections: * Allow access to pg_stat_get_wal_senders() by role pg_read_all_stats * Correct comment in pg_stat_get_wal_receiver() to show it is no longer superuser-only. 
Author: Feike Steenbergen Reviewed-by: Michael Paquier Apply to HEAD, then later backpatch to 10 --- src/backend/replication/walreceiver.c | 3 ++- src/backend/replication/walsender.c | 8 +++++--- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c index a7fc67153a..a39a98ff18 100644 --- a/src/backend/replication/walreceiver.c +++ b/src/backend/replication/walreceiver.c @@ -1442,7 +1442,8 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS) if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_ALL_STATS)) { /* - * Only superusers can see details. Other users only get the pid value + * Only superusers and members of pg_read_all_stats can see details. + * Other users only get the pid value * to know whether it is a WAL receiver, but no details. */ MemSet(&nulls[1], true, sizeof(bool) * (tupdesc->natts - 1)); diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index 9b63c40e8e..8bef3fbdaf 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -56,6 +56,7 @@ #include "access/xlog_internal.h" #include "access/xlogutils.h" +#include "catalog/pg_authid.h" #include "catalog/pg_type.h" #include "commands/dbcommands.h" #include "commands/defrem.h" @@ -3242,11 +3243,12 @@ pg_stat_get_wal_senders(PG_FUNCTION_ARGS) memset(nulls, 0, sizeof(nulls)); values[0] = Int32GetDatum(pid); - if (!superuser()) + if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_ALL_STATS)) { /* - * Only superusers can see details. Other users only get the pid - * value to know it's a walsender, but no details. + * Only superusers and members of pg_read_all_stats can see details. + * Other users only get the pid value to know it's a walsender, + * but no details. */ MemSet(&nulls[1], true, PG_STAT_GET_WAL_SENDERS_COLS - 1); } From 6271fceb8a4f07dafe9d67dcf7e849b319bb2647 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Sat, 6 Jan 2018 12:24:19 +0000 Subject: [PATCH 0794/1087] Add TIMELINE to backup_label file Allows new test to confirm timelines match Author: Michael Paquier Reviewed-by: David Steele --- src/backend/access/transam/xlog.c | 50 +++++++++++++++++++++++++++++-- src/test/perl/PostgresNode.pm | 1 + 2 files changed, 48 insertions(+), 3 deletions(-) diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index 02974f0e52..e42b828edf 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -10535,6 +10535,7 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p, backup_started_in_recovery ? "standby" : "master"); appendStringInfo(labelfile, "START TIME: %s\n", strfbuf); appendStringInfo(labelfile, "LABEL: %s\n", backupidstr); + appendStringInfo(labelfile, "START TIMELINE: %u\n", starttli); /* * Okay, write the file, or return its contents to caller. @@ -11015,9 +11016,13 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p) (uint32) (startpoint >> 32), (uint32) startpoint, startxlogfilename); fprintf(fp, "STOP WAL LOCATION: %X/%X (file %s)\n", (uint32) (stoppoint >> 32), (uint32) stoppoint, stopxlogfilename); - /* transfer remaining lines from label to history file */ + /* + * Transfer remaining lines including label and start timeline to + * history file. 
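+	 *
+	 * After this, the tail of the history file looks like (a sketch only;
+	 * the values shown are illustrative):
+	 *   LABEL: nightly base backup
+	 *   START TIMELINE: 1
+	 * followed by the STOP TIME and STOP TIMELINE lines appended just below.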
+	 */
 	fprintf(fp, "%s", remaining);
 	fprintf(fp, "STOP TIME: %s\n", strfbuf);
+	fprintf(fp, "STOP TIMELINE: %u\n", stoptli);
 	if (fflush(fp) || ferror(fp) || FreeFile(fp))
 		ereport(ERROR,
 				(errcode_for_file_access(),
@@ -11228,11 +11233,13 @@ read_backup_label(XLogRecPtr *checkPointLoc, bool *backupEndRequired,
 				  bool *backupFromStandby)
 {
 	char		startxlogfilename[MAXFNAMELEN];
-	TimeLineID	tli;
+	TimeLineID	tli_from_walseg,
+				tli_from_file;
 	FILE	   *lfp;
 	char		ch;
 	char		backuptype[20];
 	char		backupfrom[20];
+	char		backuplabel[MAXPGPATH];
+	char		backuptime[128];
 	uint32		hi,
 				lo;

@@ -11259,7 +11266,7 @@ read_backup_label(XLogRecPtr *checkPointLoc, bool *backupEndRequired,
 	 * format).
 	 */
 	if (fscanf(lfp, "START WAL LOCATION: %X/%X (file %08X%16s)%c",
-			   &hi, &lo, &tli, startxlogfilename, &ch) != 5 || ch != '\n')
+			   &hi, &lo, &tli_from_walseg, startxlogfilename, &ch) != 5 || ch != '\n')
 		ereport(FATAL,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 				 errmsg("invalid data in file \"%s\"", BACKUP_LABEL_FILE)));
@@ -11288,6 +11295,43 @@ read_backup_label(XLogRecPtr *checkPointLoc, bool *backupEndRequired,
 		*backupFromStandby = true;
 	}

+	/*
+	 * Parse START TIME and LABEL.  These fields are not mandatory for
+	 * recovery, but checking for their presence is useful both for debugging
+	 * and for the sanity checks that follow.  Note also that the result
+	 * buffers have fixed sizes, so if the backup_label file was generated
+	 * with strings longer than the maxima assumed here, parsing will be
+	 * incomplete.  That's fine, as only minor consistency checks are done
+	 * afterwards.
+	 */
+	if (fscanf(lfp, "START TIME: %127[^\n]\n", backuptime) == 1)
+		ereport(DEBUG1,
+				(errmsg("backup time %s in file \"%s\"",
+						backuptime, BACKUP_LABEL_FILE)));
+
+	if (fscanf(lfp, "LABEL: %1023[^\n]\n", backuplabel) == 1)
+		ereport(DEBUG1,
+				(errmsg("backup label %s in file \"%s\"",
+						backuplabel, BACKUP_LABEL_FILE)));
+
+	/*
+	 * START TIMELINE is new as of 11.  Parsing it is not mandatory; but if
+	 * it is present, use it as a sanity check.
+	 */
+	if (fscanf(lfp, "START TIMELINE: %u\n", &tli_from_file) == 1)
+	{
+		if (tli_from_walseg != tli_from_file)
+			ereport(FATAL,
+					(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+					 errmsg("invalid data in file \"%s\"", BACKUP_LABEL_FILE),
+					 errdetail("Timeline ID parsed is %u, but expected %u.",
+							   tli_from_file, tli_from_walseg)));
+
+		ereport(DEBUG1,
+				(errmsg("backup timeline %u in file \"%s\"",
+						tli_from_file, BACKUP_LABEL_FILE)));
+	}
+
 	if (ferror(lfp) || FreeFile(lfp))
 		ereport(FATAL,
 				(errcode_for_file_access(),
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 93faadc20e..80f68df246 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -419,6 +419,7 @@ sub init
 	print $conf "restart_after_crash = off\n";
 	print $conf "log_line_prefix = '%m [%p] %q%a '\n";
 	print $conf "log_statement = all\n";
+	print $conf "log_min_messages = debug1\n";
 	print $conf "log_replication_commands = on\n";
 	print $conf "wal_retrieve_retry_interval = '500ms'\n";
 	print $conf "port = $port\n";

From eeb3c2df429c943b2f8d028d110b55ac0a53dc75 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Sun, 7 Jan 2018 20:40:40 -0500
Subject: [PATCH 0795/1087] Back off chattiness in RemovePgTempFiles().

In commit 561885db0, as part of normalizing RemovePgTempFiles's error
handling, I removed its behavior of silently ignoring ENOENT failures
during directory opens.
Thomas Munro points out that this is a bad idea at the top level, because we don't create pgsql_tmp directories until needed. Thus this coding could produce LOG messages in perfectly normal situations, which isn't what I intended. Restore the suppression of ENOENT logging, but only at top level --- it would still be unexpected for a nested temp directory to disappear between seeing it in the parent directory and opening it. Discussion: https://postgr.es/m/CAEepm=2y06SehAkTnd5sU_eVqdv5P-=Srt1y5vYNQk6yVDVaPw@mail.gmail.com --- src/backend/storage/file/fd.c | 28 ++++++++++++++++++++-------- 1 file changed, 20 insertions(+), 8 deletions(-) diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index b5c7028618..71516a9a5a 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -321,7 +321,8 @@ static int FreeDesc(AllocateDesc *desc); static void AtProcExit_Files(int code, Datum arg); static void CleanupTempFiles(bool isProcExit); -static void RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all); +static void RemovePgTempFilesInDir(const char *tmpdirname, bool missing_ok, + bool unlink_all); static void RemovePgTempRelationFiles(const char *tsdirname); static void RemovePgTempRelationFilesInDbspace(const char *dbspacedirname); static bool looks_like_temp_rel_name(const char *name); @@ -3010,7 +3011,7 @@ RemovePgTempFiles(void) * First process temp files in pg_default ($PGDATA/base) */ snprintf(temp_path, sizeof(temp_path), "base/%s", PG_TEMP_FILES_DIR); - RemovePgTempFilesInDir(temp_path, false); + RemovePgTempFilesInDir(temp_path, true, false); RemovePgTempRelationFiles("base"); /* @@ -3026,7 +3027,7 @@ RemovePgTempFiles(void) snprintf(temp_path, sizeof(temp_path), "pg_tblspc/%s/%s/%s", spc_de->d_name, TABLESPACE_VERSION_DIRECTORY, PG_TEMP_FILES_DIR); - RemovePgTempFilesInDir(temp_path, false); + RemovePgTempFilesInDir(temp_path, true, false); snprintf(temp_path, sizeof(temp_path), "pg_tblspc/%s/%s", spc_de->d_name, TABLESPACE_VERSION_DIRECTORY); @@ -3040,19 +3041,27 @@ RemovePgTempFiles(void) * DataDir as well. */ #ifdef EXEC_BACKEND - RemovePgTempFilesInDir(PG_TEMP_FILES_DIR, false); + RemovePgTempFilesInDir(PG_TEMP_FILES_DIR, true, false); #endif } /* - * Process one pgsql_tmp directory for RemovePgTempFiles. At the top level in - * each tablespace, this should be called with unlink_all = false, so that + * Process one pgsql_tmp directory for RemovePgTempFiles. + * + * If missing_ok is true, it's all right for the named directory to not exist. + * Any other problem results in a LOG message. (missing_ok should be true at + * the top level, since pgsql_tmp directories are not created until needed.) + * + * At the top level, this should be called with unlink_all = false, so that * only files matching the temporary name prefix will be unlinked. When * recursing it will be called with unlink_all = true to unlink everything * under a top-level temporary directory. + * + * (These two flags could be replaced by one, but it seems clearer to keep + * them separate.) 
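+ *
+ * For example, in sketch form (PG_TEMP_FILES_DIR expands to "pgsql_tmp"):
+ * the top-level call made from RemovePgTempFiles is
+ *   RemovePgTempFilesInDir("base/pgsql_tmp", true, false);
+ * while the recursive call for a nested temporary directory is
+ *   RemovePgTempFilesInDir(rm_path, false, true);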
*/ static void -RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all) +RemovePgTempFilesInDir(const char *tmpdirname, bool missing_ok, bool unlink_all) { DIR *temp_dir; struct dirent *temp_de; @@ -3060,6 +3069,9 @@ RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all) temp_dir = AllocateDir(tmpdirname); + if (temp_dir == NULL && errno == ENOENT && missing_ok) + return; + while ((temp_de = ReadDirExtended(temp_dir, tmpdirname, LOG)) != NULL) { if (strcmp(temp_de->d_name, ".") == 0 || @@ -3087,7 +3099,7 @@ RemovePgTempFilesInDir(const char *tmpdirname, bool unlink_all) if (S_ISDIR(statbuf.st_mode)) { /* recursively remove contents, then directory itself */ - RemovePgTempFilesInDir(rm_path, true); + RemovePgTempFilesInDir(rm_path, false, true); if (rmdir(rm_path) < 0) ereport(LOG, From ea8e1bbc538444d373cf712a0f5188c906b71a9d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 8 Jan 2018 18:07:04 -0500 Subject: [PATCH 0796/1087] Improve error detection capability in proclists. Previously, although the initial state of a proclist_node is expected to be next == prev == 0, proclist_delete_offset would reset nodes to next == prev == INVALID_PGPROCNO when removing them from a list. This is the same state that a node in a singleton list has, so that it's impossible to distinguish not-in-a-list from in-a-list. Change proclist_delete_offset to reset removed nodes to next == prev == 0, making it possible to distinguish those cases, and then add Asserts to the list add and delete functions that the supplied node isn't or is in a list at entry. Also tighten assertions about the node being in the particular list (not some other one) where it is possible to check that in O(1) time. In ConditionVariablePrepareToSleep, since we don't expect the process's cvWaitLink to already be in a list, remove the more-or-less-useless proclist_contains check; we'd rather have proclist_push_tail's new assertion fire if that happens. Improve various comments related to proclists, too. Patch by me, reviewed by Thomas Munro. This isn't back-patchable, since there could theoretically be inlined copies of proclist_delete_offset in third-party modules. But it's only improving debuggability anyway. Discussion: https://postgr.es/m/CAEepm=0NWKehYw7NDoUSf8juuKOPRnCyY3vuaSvhrEWsOTAa3w@mail.gmail.com --- src/backend/storage/lmgr/condition_variable.c | 3 +- src/include/storage/proclist.h | 55 +++++++++++-------- src/include/storage/proclist_types.h | 12 +++- 3 files changed, 43 insertions(+), 27 deletions(-) diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c index 60234db4cd..e3bc034de4 100644 --- a/src/backend/storage/lmgr/condition_variable.c +++ b/src/backend/storage/lmgr/condition_variable.c @@ -86,8 +86,7 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv) /* Add myself to the wait queue. */ SpinLockAcquire(&cv->mutex); - if (!proclist_contains(&cv->wakeup, pgprocno, cvWaitLink)) - proclist_push_tail(&cv->wakeup, pgprocno, cvWaitLink); + proclist_push_tail(&cv->wakeup, pgprocno, cvWaitLink); SpinLockRelease(&cv->mutex); } diff --git a/src/include/storage/proclist.h b/src/include/storage/proclist.h index 4e25b47593..59a478e1f6 100644 --- a/src/include/storage/proclist.h +++ b/src/include/storage/proclist.h @@ -42,7 +42,7 @@ proclist_is_empty(proclist_head *list) /* * Get a pointer to a proclist_node inside a given PGPROC, given a procno and - * an offset. + * the proclist_node field's offset within struct PGPROC. 
*/ static inline proclist_node * proclist_node_get(int procno, size_t node_offset) @@ -53,13 +53,15 @@ proclist_node_get(int procno, size_t node_offset) } /* - * Insert a node at the beginning of a list. + * Insert a process at the beginning of a list. */ static inline void proclist_push_head_offset(proclist_head *list, int procno, size_t node_offset) { proclist_node *node = proclist_node_get(procno, node_offset); + Assert(node->next == 0 && node->prev == 0); + if (list->head == INVALID_PGPROCNO) { Assert(list->tail == INVALID_PGPROCNO); @@ -79,13 +81,15 @@ proclist_push_head_offset(proclist_head *list, int procno, size_t node_offset) } /* - * Insert a node at the end of a list. + * Insert a process at the end of a list. */ static inline void proclist_push_tail_offset(proclist_head *list, int procno, size_t node_offset) { proclist_node *node = proclist_node_get(procno, node_offset); + Assert(node->next == 0 && node->prev == 0); + if (list->tail == INVALID_PGPROCNO) { Assert(list->head == INVALID_PGPROCNO); @@ -105,30 +109,38 @@ proclist_push_tail_offset(proclist_head *list, int procno, size_t node_offset) } /* - * Delete a node. The node must be in the list. + * Delete a process from a list --- it must be in the list! */ static inline void proclist_delete_offset(proclist_head *list, int procno, size_t node_offset) { proclist_node *node = proclist_node_get(procno, node_offset); + Assert(node->next != 0 || node->prev != 0); + if (node->prev == INVALID_PGPROCNO) + { + Assert(list->head == procno); list->head = node->next; + } else proclist_node_get(node->prev, node_offset)->next = node->next; if (node->next == INVALID_PGPROCNO) + { + Assert(list->tail == procno); list->tail = node->prev; + } else proclist_node_get(node->next, node_offset)->prev = node->prev; - node->next = node->prev = INVALID_PGPROCNO; + node->next = node->prev = 0; } /* - * Check if a node is currently in a list. It must be known that the node is - * not in any _other_ proclist that uses the same proclist_node, so that the - * only possibilities are that it is in this list or none. + * Check if a process is currently in a list. It must be known that the + * process is not in any _other_ proclist that uses the same proclist_node, + * so that the only possibilities are that it is in this list or none. */ static inline bool proclist_contains_offset(proclist_head *list, int procno, @@ -136,27 +148,26 @@ proclist_contains_offset(proclist_head *list, int procno, { proclist_node *node = proclist_node_get(procno, node_offset); - /* - * If this is not a member of a proclist, then the next and prev pointers - * should be 0. Circular lists are not allowed so this condition is not - * confusable with a real pgprocno 0. - */ + /* If it's not in any list, it's definitely not in this one. */ if (node->prev == 0 && node->next == 0) return false; - /* If there is a previous node, then this node must be in the list. */ - if (node->prev != INVALID_PGPROCNO) - return true; - /* - * There is no previous node, so the only way this node can be in the list - * is if it's the head node. + * It must, in fact, be in this list. Ideally, in assert-enabled builds, + * we'd verify that. But since this function is typically used while + * holding a spinlock, crawling the whole list is unacceptable. However, + * we can verify matters in O(1) time when the node is a list head or + * tail, and that seems worth doing, since in practice that should often + * be enough to catch mistakes. 
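+	 *
+	 * (A typical caller sketch, from condition_variable.c: test
+	 *   proclist_contains(&cv->wakeup, MyProc->pgprocno, cvWaitLink)
+	 * while holding cv->mutex, so the list cannot change underneath us.)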
*/ - return list->head == procno; + Assert(node->prev != INVALID_PGPROCNO || list->head == procno); + Assert(node->next != INVALID_PGPROCNO || list->tail == procno); + + return true; } /* - * Remove and return the first node from a list (there must be one). + * Remove and return the first process from a list (there must be one). */ static inline PGPROC * proclist_pop_head_node_offset(proclist_head *list, size_t node_offset) @@ -205,4 +216,4 @@ proclist_pop_head_node_offset(proclist_head *list, size_t node_offset) proclist_node_get((iter).cur, \ offsetof(PGPROC, link_member))->next) -#endif +#endif /* PROCLIST_H */ diff --git a/src/include/storage/proclist_types.h b/src/include/storage/proclist_types.h index 237fb7613f..f4dac10fb6 100644 --- a/src/include/storage/proclist_types.h +++ b/src/include/storage/proclist_types.h @@ -16,7 +16,12 @@ #define PROCLIST_TYPES_H /* - * A node in a list of processes. + * A node in a doubly-linked list of processes. The link fields contain + * the 0-based PGPROC indexes of the next and previous process, or + * INVALID_PGPROCNO in the next-link of the last node and the prev-link + * of the first node. A node that is currently not in any list + * should have next == prev == 0; this is not a possible state for a node + * that is in a list, because we disallow circularity. */ typedef struct proclist_node { @@ -25,7 +30,8 @@ typedef struct proclist_node } proclist_node; /* - * Head of a doubly-linked list of PGPROCs, identified by pgprocno. + * Header of a doubly-linked list of PGPROCs, identified by pgprocno. + * An empty list is represented by head == tail == INVALID_PGPROCNO. */ typedef struct proclist_head { @@ -42,4 +48,4 @@ typedef struct proclist_mutable_iter int next; /* pgprocno of the next PGPROC */ } proclist_mutable_iter; -#endif +#endif /* PROCLIST_TYPES_H */ From e35dba475a440f73dccf9ed1fd61e3abc6ee61db Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 8 Jan 2018 18:28:03 -0500 Subject: [PATCH 0797/1087] Cosmetic improvements in condition_variable.[hc]. Clarify a bunch of comments. Discussion: https://postgr.es/m/CAEepm=0NWKehYw7NDoUSf8juuKOPRnCyY3vuaSvhrEWsOTAa3w@mail.gmail.com --- src/backend/storage/lmgr/condition_variable.c | 92 ++++++++++++------- src/include/storage/condition_variable.h | 23 +++-- 2 files changed, 70 insertions(+), 45 deletions(-) diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c index e3bc034de4..98a67965cd 100644 --- a/src/backend/storage/lmgr/condition_variable.c +++ b/src/backend/storage/lmgr/condition_variable.c @@ -43,11 +43,22 @@ ConditionVariableInit(ConditionVariable *cv) } /* - * Prepare to wait on a given condition variable. This can optionally be - * called before entering a test/sleep loop. Alternatively, the call to - * ConditionVariablePrepareToSleep can be omitted. The only advantage of - * calling ConditionVariablePrepareToSleep is that it avoids an initial - * double-test of the user's predicate in the case that we need to wait. + * Prepare to wait on a given condition variable. + * + * This can optionally be called before entering a test/sleep loop. + * Doing so is more efficient if we'll need to sleep at least once. + * However, if the first test of the exit condition is likely to succeed, + * it's more efficient to omit the ConditionVariablePrepareToSleep call. + * See comments in ConditionVariableSleep for more detail. 
+ * + * Caution: "before entering the loop" means you *must* test the exit + * condition between calling ConditionVariablePrepareToSleep and calling + * ConditionVariableSleep. If that is inconvenient, omit calling + * ConditionVariablePrepareToSleep. + * + * Only one condition variable can be used at a time, ie, + * ConditionVariableCancelSleep must be called before any attempt is made + * to sleep on a different condition variable. */ void ConditionVariablePrepareToSleep(ConditionVariable *cv) @@ -79,8 +90,8 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv) cv_sleep_target = cv; /* - * Reset my latch before adding myself to the queue and before entering - * the caller's predicate loop. + * Reset my latch before adding myself to the queue, to ensure that we + * don't miss a wakeup that occurs immediately. */ ResetLatch(MyLatch); @@ -90,20 +101,21 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv) SpinLockRelease(&cv->mutex); } -/*-------------------------------------------------------------------------- - * Wait for the given condition variable to be signaled. This should be - * called in a predicate loop that tests for a specific exit condition and - * otherwise sleeps, like so: +/* + * Wait for the given condition variable to be signaled. + * + * This should be called in a predicate loop that tests for a specific exit + * condition and otherwise sleeps, like so: * - * ConditionVariablePrepareToSleep(cv); [optional] + * ConditionVariablePrepareToSleep(cv); // optional * while (condition for which we are waiting is not true) * ConditionVariableSleep(cv, wait_event_info); * ConditionVariableCancelSleep(); * - * Supply a value from one of the WaitEventXXX enums defined in pgstat.h to - * control the contents of pg_stat_activity's wait_event_type and wait_event - * columns while waiting. - *-------------------------------------------------------------------------*/ + * wait_event_info should be a value from one of the WaitEventXXX enums + * defined in pgstat.h. This controls the contents of pg_stat_activity's + * wait_event_type and wait_event columns while waiting. + */ void ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info) { @@ -113,13 +125,14 @@ ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info) /* * If the caller didn't prepare to sleep explicitly, then do so now and * return immediately. The caller's predicate loop should immediately - * call again if its exit condition is not yet met. This initial spurious - * return can be avoided by calling ConditionVariablePrepareToSleep(cv) + * call again if its exit condition is not yet met. This will result in + * the exit condition being tested twice before we first sleep. The extra + * test can be prevented by calling ConditionVariablePrepareToSleep(cv) * first. Whether it's worth doing that depends on whether you expect the - * condition to be met initially, in which case skipping the prepare - * allows you to skip manipulation of the wait list, or not met initially, - * in which case preparing first allows you to skip a spurious test of the - * caller's exit condition. + * exit condition to be met initially, in which case skipping the prepare + * is recommended because it avoids manipulations of the wait list, or not + * met initially, in which case preparing first is better because it + * avoids one extra test of the exit condition. 
*/ if (cv_sleep_target == NULL) { @@ -130,7 +143,7 @@ ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info) /* Any earlier condition variable sleep must have been canceled. */ Assert(cv_sleep_target == cv); - while (!done) + do { CHECK_FOR_INTERRUPTS(); @@ -140,18 +153,23 @@ ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info) */ WaitEventSetWait(cv_wait_event_set, -1, &event, 1, wait_event_info); - /* Reset latch before testing whether we can return. */ + /* Reset latch before examining the state of the wait list. */ ResetLatch(MyLatch); /* * If this process has been taken out of the wait list, then we know - * that is has been signaled by ConditionVariableSignal. We put it - * back into the wait list, so we don't miss any further signals while - * the caller's loop checks its condition. If it hasn't been taken - * out of the wait list, then the latch must have been set by - * something other than ConditionVariableSignal; though we don't - * guarantee not to return spuriously, we'll avoid these obvious - * cases. + * that it has been signaled by ConditionVariableSignal (or + * ConditionVariableBroadcast), so we should return to the caller. But + * that doesn't guarantee that the exit condition is met, only that we + * ought to check it. So we must put the process back into the wait + * list, to ensure we don't miss any additional wakeup occurring while + * the caller checks its exit condition. We can take ourselves out of + * the wait list only when the caller calls + * ConditionVariableCancelSleep. + * + * If we're still in the wait list, then the latch must have been set + * by something other than ConditionVariableSignal; though we don't + * guarantee not to return spuriously, we'll avoid this obvious case. */ SpinLockAcquire(&cv->mutex); if (!proclist_contains(&cv->wakeup, MyProc->pgprocno, cvWaitLink)) @@ -160,13 +178,17 @@ ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info) proclist_push_tail(&cv->wakeup, MyProc->pgprocno, cvWaitLink); } SpinLockRelease(&cv->mutex); - } + } while (!done); } /* - * Cancel any pending sleep operation. We just need to remove ourselves - * from the wait queue of any condition variable for which we have previously - * prepared a sleep. + * Cancel any pending sleep operation. + * + * We just need to remove ourselves from the wait queue of any condition + * variable for which we have previously prepared a sleep. + * + * Do nothing if nothing is pending; this allows this function to be called + * during transaction abort to clean up any unfinished CV sleep. */ void ConditionVariableCancelSleep(void) diff --git a/src/include/storage/condition_variable.h b/src/include/storage/condition_variable.h index c7afbbca42..7dac477d25 100644 --- a/src/include/storage/condition_variable.h +++ b/src/include/storage/condition_variable.h @@ -27,30 +27,33 @@ typedef struct { - slock_t mutex; - proclist_head wakeup; + slock_t mutex; /* spinlock protecting the wakeup list */ + proclist_head wakeup; /* list of wake-able processes */ } ConditionVariable; /* Initialize a condition variable. */ -extern void ConditionVariableInit(ConditionVariable *); +extern void ConditionVariableInit(ConditionVariable *cv); /* * To sleep on a condition variable, a process should use a loop which first * checks the condition, exiting the loop if it is met, and then calls * ConditionVariableSleep. Spurious wakeups are possible, but should be - * infrequent. After exiting the loop, ConditionVariableCancelSleep should + * infrequent. 
After exiting the loop, ConditionVariableCancelSleep must * be called to ensure that the process is no longer in the wait list for - * the condition variable. + * the condition variable. Only one condition variable can be used at a + * time, ie, ConditionVariableCancelSleep must be called before any attempt + * is made to sleep on a different condition variable. */ -extern void ConditionVariableSleep(ConditionVariable *, uint32 wait_event_info); +extern void ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info); extern void ConditionVariableCancelSleep(void); /* - * The use of this function is optional and not necessary for correctness; - * for efficiency, it should be called prior entering the loop described above - * if it is thought that the condition is unlikely to hold immediately. + * Optionally, ConditionVariablePrepareToSleep can be called before entering + * the test-and-sleep loop described above. Doing so is more efficient if + * at least one sleep is needed, whereas not doing so is more efficient when + * no sleep is needed because the test condition is true the first time. */ -extern void ConditionVariablePrepareToSleep(ConditionVariable *); +extern void ConditionVariablePrepareToSleep(ConditionVariable *cv); /* Wake up a single waiter (via signal) or all waiters (via broadcast). */ extern void ConditionVariableSignal(ConditionVariable *cv); From d25ee30031b08ad1348a090914c2af6bc640a832 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Mon, 8 Jan 2018 22:43:51 -0500 Subject: [PATCH 0798/1087] pg_upgrade: prevent check on live cluster from generating error Previously an inaccurate but harmless error was generated when running --check on a live server before reporting the servers as compatible. The fix is to split error reporting and exit control in the exec_prog() API. Reported-by: Daniel Westermann Backpatch-through: 10 --- src/bin/pg_upgrade/dump.c | 2 +- src/bin/pg_upgrade/exec.c | 14 ++++++-------- src/bin/pg_upgrade/parallel.c | 9 ++++----- src/bin/pg_upgrade/pg_upgrade.c | 24 ++++++++++++------------ src/bin/pg_upgrade/pg_upgrade.h | 6 +++--- src/bin/pg_upgrade/server.c | 18 +++++++++--------- 6 files changed, 35 insertions(+), 38 deletions(-) diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c index 5ed6b786e2..8a662e9865 100644 --- a/src/bin/pg_upgrade/dump.c +++ b/src/bin/pg_upgrade/dump.c @@ -23,7 +23,7 @@ generate_old_dump(void) prep_status("Creating dump of global objects"); /* run new pg_dumpall binary for globals */ - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers " "--binary-upgrade %s -f %s", new_cluster.bindir, cluster_conn_opts(&old_cluster), diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c index ea45434222..9122e2769e 100644 --- a/src/bin/pg_upgrade/exec.c +++ b/src/bin/pg_upgrade/exec.c @@ -71,16 +71,14 @@ get_bin_version(ClusterInfo *cluster) * and attempts to execute that command. If the command executes * successfully, exec_prog() returns true. * - * If the command fails, an error message is saved to the specified log_file. - * If throw_error is true, this raises a PG_FATAL error and pg_upgrade - * terminates; otherwise it is just reported as PG_REPORT and exec_prog() - * returns false. + * If the command fails, an error message is optionally written to the specified + * log_file, and the program optionally exits. * * The code requires it be called first from the primary thread on Windows. 
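 *
 * A call sketch under the new convention (cf. the server.c changes below,
 * with the opt_log_file argument simplified to NULL here):
 *   exec_prog(SERVER_START_LOG_FILE, NULL, report_and_exit_on_error, false,
 *             "%s", cmd);
 * reports a failure when asked to, but never exits, which is what
 * "pg_upgrade --check" against a live server needs.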
*/ bool exec_prog(const char *log_file, const char *opt_log_file, - bool throw_error, const char *fmt,...) + bool report_error, bool exit_on_error, const char *fmt,...) { int result = 0; int written; @@ -173,7 +171,7 @@ exec_prog(const char *log_file, const char *opt_log_file, #endif result = system(cmd); - if (result != 0) + if (result != 0 && report_error) { /* we might be in on a progress status line, so go to the next line */ report_status(PG_REPORT, "\n*failure*"); @@ -181,12 +179,12 @@ exec_prog(const char *log_file, const char *opt_log_file, pg_log(PG_VERBOSE, "There were problems executing \"%s\"\n", cmd); if (opt_log_file) - pg_log(throw_error ? PG_FATAL : PG_REPORT, + pg_log(exit_on_error ? PG_FATAL : PG_REPORT, "Consult the last few lines of \"%s\" or \"%s\" for\n" "the probable cause of the failure.\n", log_file, opt_log_file); else - pg_log(throw_error ? PG_FATAL : PG_REPORT, + pg_log(exit_on_error ? PG_FATAL : PG_REPORT, "Consult the last few lines of \"%s\" for\n" "the probable cause of the failure.\n", log_file); diff --git a/src/bin/pg_upgrade/parallel.c b/src/bin/pg_upgrade/parallel.c index cb1dc434f6..23f869f6c7 100644 --- a/src/bin/pg_upgrade/parallel.c +++ b/src/bin/pg_upgrade/parallel.c @@ -78,8 +78,8 @@ parallel_exec_prog(const char *log_file, const char *opt_log_file, va_end(args); if (user_opts.jobs <= 1) - /* throw_error must be true to allow jobs */ - exec_prog(log_file, opt_log_file, true, "%s", cmd); + /* exit_on_error must be true to allow jobs */ + exec_prog(log_file, opt_log_file, true, true, "%s", cmd); else { /* parallel */ @@ -122,7 +122,7 @@ parallel_exec_prog(const char *log_file, const char *opt_log_file, child = fork(); if (child == 0) /* use _exit to skip atexit() functions */ - _exit(!exec_prog(log_file, opt_log_file, true, "%s", cmd)); + _exit(!exec_prog(log_file, opt_log_file, true, true, "%s", cmd)); else if (child < 0) /* fork failed */ pg_fatal("could not create worker process: %s\n", strerror(errno)); @@ -160,7 +160,7 @@ win32_exec_prog(exec_thread_arg *args) { int ret; - ret = !exec_prog(args->log_file, args->opt_log_file, true, "%s", args->cmd); + ret = !exec_prog(args->log_file, args->opt_log_file, true, true, "%s", args->cmd); /* terminates thread */ return ret; @@ -187,7 +187,6 @@ parallel_transfer_all_new_dbs(DbInfoArr *old_db_arr, DbInfoArr *new_db_arr, #endif if (user_opts.jobs <= 1) - /* throw_error must be true to allow jobs */ transfer_all_new_dbs(old_db_arr, new_db_arr, old_pgdata, new_pgdata, NULL); else { diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index 2b7da529e5..872621489f 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -149,14 +149,14 @@ main(int argc, char **argv) * because there is no need to have the schema load use new oids. 
*/ prep_status("Setting next OID for new cluster"); - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/pg_resetwal\" -o %u \"%s\"", new_cluster.bindir, old_cluster.controldata.chkpnt_nxtoid, new_cluster.pgdata); check_ok(); prep_status("Sync data directory to disk"); - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/initdb\" --sync-only \"%s\"", new_cluster.bindir, new_cluster.pgdata); check_ok(); @@ -249,7 +249,7 @@ prepare_new_cluster(void) * --analyze so autovacuum doesn't update statistics later */ prep_status("Analyzing all rows in the new cluster"); - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/vacuumdb\" %s --all --analyze %s", new_cluster.bindir, cluster_conn_opts(&new_cluster), log_opts.verbose ? "--verbose" : ""); @@ -262,7 +262,7 @@ prepare_new_cluster(void) * counter later. */ prep_status("Freezing all rows in the new cluster"); - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/vacuumdb\" %s --all --freeze %s", new_cluster.bindir, cluster_conn_opts(&new_cluster), log_opts.verbose ? "--verbose" : ""); @@ -289,7 +289,7 @@ prepare_new_databases(void) * support functions in template1 but pg_dumpall creates database using * the template0 template. */ - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"", new_cluster.bindir, cluster_conn_opts(&new_cluster), GLOBALS_DUMP_FILE); @@ -392,7 +392,7 @@ copy_subdir_files(const char *old_subdir, const char *new_subdir) prep_status("Copying old %s to new server", old_subdir); - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, #ifndef WIN32 "cp -Rf \"%s\" \"%s\"", #else @@ -418,16 +418,16 @@ copy_xact_xlog_xid(void) /* set the next transaction id and epoch of the new cluster */ prep_status("Setting next transaction ID and epoch for new cluster"); - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/pg_resetwal\" -f -x %u \"%s\"", new_cluster.bindir, old_cluster.controldata.chkpnt_nxtxid, new_cluster.pgdata); - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/pg_resetwal\" -f -e %u \"%s\"", new_cluster.bindir, old_cluster.controldata.chkpnt_nxtepoch, new_cluster.pgdata); /* must reset commit timestamp limits also */ - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/pg_resetwal\" -f -c %u,%u \"%s\"", new_cluster.bindir, old_cluster.controldata.chkpnt_nxtxid, @@ -453,7 +453,7 @@ copy_xact_xlog_xid(void) * we preserve all files and contents, so we must preserve both "next" * counters here and the oldest multi present on system. */ - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/pg_resetwal\" -O %u -m %u,%u \"%s\"", new_cluster.bindir, old_cluster.controldata.chkpnt_nxtmxoff, @@ -481,7 +481,7 @@ copy_xact_xlog_xid(void) * might end up wrapped around (i.e. 0) if the old cluster had * next=MaxMultiXactId, but multixact.c can cope with that just fine. 
*/ - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/pg_resetwal\" -m %u,%u \"%s\"", new_cluster.bindir, old_cluster.controldata.chkpnt_nxtmulti + 1, @@ -492,7 +492,7 @@ copy_xact_xlog_xid(void) /* now reset the wal archives in the new cluster */ prep_status("Resetting WAL archives"); - exec_prog(UTILITY_LOG_FILE, NULL, true, + exec_prog(UTILITY_LOG_FILE, NULL, true, true, /* use timeline 1 to match controldata and no WAL history file */ "\"%s/pg_resetwal\" -l 00000001%s \"%s\"", new_cluster.bindir, old_cluster.controldata.nextxlogfile + 8, diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h index fbe69cb577..67f874b4f1 100644 --- a/src/bin/pg_upgrade/pg_upgrade.h +++ b/src/bin/pg_upgrade/pg_upgrade.h @@ -360,7 +360,7 @@ void generate_old_dump(void); #define EXEC_PSQL_ARGS "--echo-queries --set ON_ERROR_STOP=on --no-psqlrc --dbname=template1" bool exec_prog(const char *log_file, const char *opt_log_file, - bool throw_error, const char *fmt,...) pg_attribute_printf(4, 5); + bool report_error, bool exit_on_error, const char *fmt,...) pg_attribute_printf(5, 6); void verify_directories(void); bool pid_lock_file_exists(const char *datadir); @@ -416,8 +416,8 @@ PGresult *executeQueryOrDie(PGconn *conn, const char *fmt,...) pg_attribute_pr char *cluster_conn_opts(ClusterInfo *cluster); -bool start_postmaster(ClusterInfo *cluster, bool throw_error); -void stop_postmaster(bool fast); +bool start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error); +void stop_postmaster(bool in_atexit); uint32 get_major_server_version(ClusterInfo *cluster); void check_pghost_envvar(void); diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c index 96181237f8..5f55b585a8 100644 --- a/src/bin/pg_upgrade/server.c +++ b/src/bin/pg_upgrade/server.c @@ -191,7 +191,7 @@ stop_postmaster_atexit(void) bool -start_postmaster(ClusterInfo *cluster, bool throw_error) +start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error) { char cmd[MAXPGPATH * 4 + 1000]; PGconn *conn; @@ -257,11 +257,11 @@ start_postmaster(ClusterInfo *cluster, bool throw_error) (strcmp(SERVER_LOG_FILE, SERVER_START_LOG_FILE) != 0) ? SERVER_LOG_FILE : NULL, - false, + report_and_exit_on_error, false, "%s", cmd); /* Did it fail and we are just testing if the server could be started? */ - if (!pg_ctl_return && !throw_error) + if (!pg_ctl_return && !report_and_exit_on_error) return false; /* @@ -305,9 +305,9 @@ start_postmaster(ClusterInfo *cluster, bool throw_error) PQfinish(conn); /* - * If pg_ctl failed, and the connection didn't fail, and throw_error is - * enabled, fail now. This could happen if the server was already - * running. + * If pg_ctl failed, and the connection didn't fail, and + * report_and_exit_on_error is enabled, fail now. This + * could happen if the server was already running. */ if (!pg_ctl_return) { @@ -322,7 +322,7 @@ start_postmaster(ClusterInfo *cluster, bool throw_error) void -stop_postmaster(bool fast) +stop_postmaster(bool in_atexit) { ClusterInfo *cluster; @@ -333,11 +333,11 @@ stop_postmaster(bool fast) else return; /* no cluster running */ - exec_prog(SERVER_STOP_LOG_FILE, NULL, !fast, + exec_prog(SERVER_STOP_LOG_FILE, NULL, !in_atexit, !in_atexit, "\"%s/pg_ctl\" -w -D \"%s\" -o \"%s\" %s stop", cluster->bindir, cluster->pgconfig, cluster->pgopts ? cluster->pgopts : "", - fast ? "-m fast" : "-m smart"); + in_atexit ? 
"-m fast" : "-m smart"); os_info.running_cluster = NULL; } From 63008b19ee67270231694500832b031868d34428 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 9 Jan 2018 09:39:31 -0500 Subject: [PATCH 0799/1087] Fix comment. RELATION_IS_OTHER_TEMP is tested in the caller, not here. Discussion: http://postgr.es/m/5A5438E4.3090709@lab.ntt.co.jp --- src/backend/optimizer/prep/prepunion.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index 5a08e75ad5..95557d750b 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -1634,11 +1634,8 @@ expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, /* * expand_single_inheritance_child - * Expand a single inheritance child, if needed. - * - * If this is a temp table of another backend, we'll return without doing - * anything at all. Otherwise, build a RangeTblEntry and an AppendRelInfo, if - * appropriate, plus maybe a PlanRowMark. + * Build a RangeTblEntry and an AppendRelInfo, if appropriate, plus + * maybe a PlanRowMark. * * We now expand the partition hierarchy level by level, creating a * corresponding hierarchy of AppendRelInfos and RelOptInfos, where each From bc7fa0c15c590ddf4872e426abd76c2634f22aca Mon Sep 17 00:00:00 2001 From: Teodor Sigaev Date: Tue, 9 Jan 2018 18:02:04 +0300 Subject: [PATCH 0800/1087] Improve scripting language in pgbench Added: - variable now might contain integer, double, boolean and null values - functions ln, exp - logical AND/OR/NOT - bitwise AND/OR/NOT/XOR - bit right/left shift - comparison operators - IS [NOT] (NULL|TRUE|FALSE) - conditional choice (in form of when/case/then) New operations and functions allow to implement more complicated test scenario. Author: Fabien Coelho with minor editorization by me Reviewed-By: Pavel Stehule, Jeevan Ladhe, me Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.10.1604030742390.31618@sto --- doc/src/sgml/ref/pgbench.sgml | 223 ++++++++- src/bin/pgbench/exprparse.y | 195 +++++++- src/bin/pgbench/exprscan.l | 55 ++- src/bin/pgbench/pgbench.c | 484 ++++++++++++++++--- src/bin/pgbench/pgbench.h | 24 +- src/bin/pgbench/t/001_pgbench_with_server.pl | 171 +++++-- 6 files changed, 1026 insertions(+), 126 deletions(-) diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index 1519fe78ef..3dd492cec1 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -904,14 +904,32 @@ pgbench options d Sets variable varname to a value calculated from expression. - The expression may contain integer constants such as 5432, + The expression may contain the NULL constant, + boolean constants TRUE and FALSE, + integer constants such as 5432, double constants such as 3.14159, references to variables :variablename, - unary operators (+, -) and binary operators - (+, -, *, /, - %) with their usual precedence and associativity, - function calls, and - parentheses. + operators + with their usual SQL precedence and associativity, + function calls, + SQL CASE generic conditional + expressions and parentheses. + + + + Functions and most operators return NULL on + NULL input. + + + + For conditional purposes, non zero numerical values are + TRUE, zero numerical values and NULL + are FALSE. + + + + When no final ELSE clause is provided to a + CASE, the default value is NULL. 
@@ -920,6 +938,7 @@ pgbench options d
 \set ntellers 10 * :scale
 \set aid (1021 * random(1, 100000 * :scale)) % \
          (100000 * :scale) + 1
+\set divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END

@@ -996,6 +1015,177 @@
+
+  Built-In Operators
+
+  The arithmetic, bitwise, comparison and logical operators listed in
+  the table below are built into pgbench
+  and may be used in expressions appearing in
+  \set.
+
+   pgbench Operators by increasing precedence
+
+     Operator
+     Description
+     Example
+     Result
+
+     OR
+     logical or
+     5 or 0
+     TRUE
+
+     AND
+     logical and
+     3 and 0
+     FALSE
+
+     NOT
+     logical not
+     not false
+     TRUE
+
+     IS [NOT] (NULL|TRUE|FALSE)
+     value tests
+     1 is null
+     FALSE
+
+     ISNULL|NOTNULL
+     null tests
+     1 notnull
+     TRUE
+
+     =
+     is equal
+     5 = 4
+     FALSE
+
+     <>
+     is not equal
+     5 <> 4
+     TRUE
+
+     !=
+     is not equal
+     5 != 5
+     FALSE
+
+     <
+     less than
+     5 < 4
+     FALSE
+
+     <=
+     less than or equal
+     5 <= 4
+     FALSE
+
+     >
+     greater than
+     5 > 4
+     TRUE
+
+     >=
+     greater than or equal
+     5 >= 4
+     TRUE
+
+     |
+     integer bitwise OR
+     1 | 2
+     3
+
+     #
+     integer bitwise XOR
+     1 # 3
+     2
+
+     &
+     integer bitwise AND
+     1 & 3
+     1
+
+     ~
+     integer bitwise NOT
+     ~ 1
+     -2
+
+     <<
+     integer bitwise shift left
+     1 << 2
+     4
+
+     >>
+     integer bitwise shift right
+     8 >> 2
+     2
+
+     +
+     addition
+     5 + 4
+     9
+
+     -
+     subtraction
+     3 - 2.0
+     1.0
+
+     *
+     multiplication
+     5 * 4
+     20
+
+     /
+     division (integer division truncates the result)
+     5 / 3
+     1
+
+     %
+     modulo
+     3 % 2
+     1
+
+     -
+     negation (unary minus)
+     - 2.0
+     -2.0
+
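To make the precedence levels concrete, a few hypothetical \set lines and
the groupings they produce:

    -- '|' and '#' share a level and are left-associative: (1 | 2) # 3 = 0
    \set a debug(1 | 2 # 3)
    -- '+' binds tighter than '<<': 1 << (2 + 1) = 8
    \set b debug(1 << 2 + 1)
    -- comparisons bind looser than arithmetic: (2 + 3) < 5 = FALSE
    \set c debug(2 + 3 < 5)
    -- NOT binds looser than '=': not (0 = 1) = TRUE
    \set d debug(not 0 = 1)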
+
+ Built-In Functions @@ -1041,6 +1231,13 @@ pgbench options d double(5432) 5432.0 + + exp(x) + double + exponential + exp(1.0) + 2.718281828459045 + greatest(a [, ... ] ) double if any a is double, else integer @@ -1062,6 +1259,20 @@ pgbench options d least(5, 4, 3, 2.1) 2.1 + + ln(x) + double + natural logarithm + ln(2.718281828459045) + 1.0 + + + mod(i, bj) + integer + modulo + mod(54, 32) + 22 + pi() double diff --git a/src/bin/pgbench/exprparse.y b/src/bin/pgbench/exprparse.y index 26494fdd9f..e23ca51786 100644 --- a/src/bin/pgbench/exprparse.y +++ b/src/bin/pgbench/exprparse.y @@ -19,13 +19,17 @@ PgBenchExpr *expr_parse_result; static PgBenchExprList *make_elist(PgBenchExpr *exp, PgBenchExprList *list); +static PgBenchExpr *make_null_constant(void); +static PgBenchExpr *make_boolean_constant(bool bval); static PgBenchExpr *make_integer_constant(int64 ival); static PgBenchExpr *make_double_constant(double dval); static PgBenchExpr *make_variable(char *varname); static PgBenchExpr *make_op(yyscan_t yyscanner, const char *operator, PgBenchExpr *lexpr, PgBenchExpr *rexpr); +static PgBenchExpr *make_uop(yyscan_t yyscanner, const char *operator, PgBenchExpr *expr); static int find_func(yyscan_t yyscanner, const char *fname); static PgBenchExpr *make_func(yyscan_t yyscanner, int fnumber, PgBenchExprList *args); +static PgBenchExpr *make_case(yyscan_t yyscanner, PgBenchExprList *when_then_list, PgBenchExpr *else_part); %} @@ -40,53 +44,126 @@ static PgBenchExpr *make_func(yyscan_t yyscanner, int fnumber, PgBenchExprList * { int64 ival; double dval; + bool bval; char *str; PgBenchExpr *expr; PgBenchExprList *elist; } -%type elist -%type expr +%type elist when_then_list +%type expr case_control %type INTEGER_CONST function %type DOUBLE_CONST +%type BOOLEAN_CONST %type VARIABLE FUNCTION -%token INTEGER_CONST DOUBLE_CONST VARIABLE FUNCTION - -/* Precedence: lowest to highest */ +%token NULL_CONST INTEGER_CONST DOUBLE_CONST BOOLEAN_CONST VARIABLE FUNCTION +%token AND_OP OR_OP NOT_OP NE_OP LE_OP GE_OP LS_OP RS_OP IS_OP +%token CASE_KW WHEN_KW THEN_KW ELSE_KW END_KW + +/* Precedence: lowest to highest, taken from postgres SQL parser */ +%left OR_OP +%left AND_OP +%right NOT_OP +%nonassoc IS_OP ISNULL_OP NOTNULL_OP +%nonassoc '<' '>' '=' LE_OP GE_OP NE_OP +%left '|' '#' '&' LS_OP RS_OP '~' %left '+' '-' %left '*' '/' '%' -%right UMINUS +%right UNARY %% result: expr { expr_parse_result = $1; } -elist: { $$ = NULL; } - | expr { $$ = make_elist($1, NULL); } +elist: { $$ = NULL; } + | expr { $$ = make_elist($1, NULL); } | elist ',' expr { $$ = make_elist($3, $1); } ; expr: '(' expr ')' { $$ = $2; } - | '+' expr %prec UMINUS { $$ = $2; } - | '-' expr %prec UMINUS { $$ = make_op(yyscanner, "-", + | '+' expr %prec UNARY { $$ = $2; } + /* unary minus "-x" implemented as "0 - x" */ + | '-' expr %prec UNARY { $$ = make_op(yyscanner, "-", make_integer_constant(0), $2); } + /* binary ones complement "~x" implemented as 0xffff... 
xor x" */ + | '~' expr { $$ = make_op(yyscanner, "#", + make_integer_constant(~INT64CONST(0)), $2); } + | NOT_OP expr { $$ = make_uop(yyscanner, "!not", $2); } | expr '+' expr { $$ = make_op(yyscanner, "+", $1, $3); } | expr '-' expr { $$ = make_op(yyscanner, "-", $1, $3); } | expr '*' expr { $$ = make_op(yyscanner, "*", $1, $3); } | expr '/' expr { $$ = make_op(yyscanner, "/", $1, $3); } - | expr '%' expr { $$ = make_op(yyscanner, "%", $1, $3); } + | expr '%' expr { $$ = make_op(yyscanner, "mod", $1, $3); } + | expr '<' expr { $$ = make_op(yyscanner, "<", $1, $3); } + | expr LE_OP expr { $$ = make_op(yyscanner, "<=", $1, $3); } + | expr '>' expr { $$ = make_op(yyscanner, "<", $3, $1); } + | expr GE_OP expr { $$ = make_op(yyscanner, "<=", $3, $1); } + | expr '=' expr { $$ = make_op(yyscanner, "=", $1, $3); } + | expr NE_OP expr { $$ = make_op(yyscanner, "<>", $1, $3); } + | expr '&' expr { $$ = make_op(yyscanner, "&", $1, $3); } + | expr '|' expr { $$ = make_op(yyscanner, "|", $1, $3); } + | expr '#' expr { $$ = make_op(yyscanner, "#", $1, $3); } + | expr LS_OP expr { $$ = make_op(yyscanner, "<<", $1, $3); } + | expr RS_OP expr { $$ = make_op(yyscanner, ">>", $1, $3); } + | expr AND_OP expr { $$ = make_op(yyscanner, "!and", $1, $3); } + | expr OR_OP expr { $$ = make_op(yyscanner, "!or", $1, $3); } + /* IS variants */ + | expr ISNULL_OP { $$ = make_op(yyscanner, "!is", $1, make_null_constant()); } + | expr NOTNULL_OP { + $$ = make_uop(yyscanner, "!not", + make_op(yyscanner, "!is", $1, make_null_constant())); + } + | expr IS_OP NULL_CONST { $$ = make_op(yyscanner, "!is", $1, make_null_constant()); } + | expr IS_OP NOT_OP NULL_CONST + { + $$ = make_uop(yyscanner, "!not", + make_op(yyscanner, "!is", $1, make_null_constant())); + } + | expr IS_OP BOOLEAN_CONST + { + $$ = make_op(yyscanner, "!is", $1, make_boolean_constant($3)); + } + | expr IS_OP NOT_OP BOOLEAN_CONST + { + $$ = make_uop(yyscanner, "!not", + make_op(yyscanner, "!is", $1, make_boolean_constant($4))); + } + /* constants */ + | NULL_CONST { $$ = make_null_constant(); } + | BOOLEAN_CONST { $$ = make_boolean_constant($1); } | INTEGER_CONST { $$ = make_integer_constant($1); } | DOUBLE_CONST { $$ = make_double_constant($1); } - | VARIABLE { $$ = make_variable($1); } + /* misc */ + | VARIABLE { $$ = make_variable($1); } | function '(' elist ')' { $$ = make_func(yyscanner, $1, $3); } + | case_control { $$ = $1; } ; +when_then_list: + when_then_list WHEN_KW expr THEN_KW expr { $$ = make_elist($5, make_elist($3, $1)); } + | WHEN_KW expr THEN_KW expr { $$ = make_elist($4, make_elist($2, NULL)); } + +case_control: + CASE_KW when_then_list END_KW { $$ = make_case(yyscanner, $2, make_null_constant()); } + | CASE_KW when_then_list ELSE_KW expr END_KW { $$ = make_case(yyscanner, $2, $4); } + function: FUNCTION { $$ = find_func(yyscanner, $1); pg_free($1); } ; %% +static PgBenchExpr * +make_null_constant(void) +{ + PgBenchExpr *expr = pg_malloc(sizeof(PgBenchExpr)); + + expr->etype = ENODE_CONSTANT; + expr->u.constant.type = PGBT_NULL; + expr->u.constant.u.ival = 0; + return expr; +} + static PgBenchExpr * make_integer_constant(int64 ival) { @@ -109,6 +186,17 @@ make_double_constant(double dval) return expr; } +static PgBenchExpr * +make_boolean_constant(bool bval) +{ + PgBenchExpr *expr = pg_malloc(sizeof(PgBenchExpr)); + + expr->etype = ENODE_CONSTANT; + expr->u.constant.type = PGBT_BOOLEAN; + expr->u.constant.u.bval = bval; + return expr; +} + static PgBenchExpr * make_variable(char *varname) { @@ -119,6 +207,7 @@ make_variable(char *varname) 
return expr; } +/* binary operators */ static PgBenchExpr * make_op(yyscan_t yyscanner, const char *operator, PgBenchExpr *lexpr, PgBenchExpr *rexpr) @@ -127,11 +216,19 @@ make_op(yyscan_t yyscanner, const char *operator, make_elist(rexpr, make_elist(lexpr, NULL))); } +/* unary operator */ +static PgBenchExpr * +make_uop(yyscan_t yyscanner, const char *operator, PgBenchExpr *expr) +{ + return make_func(yyscanner, find_func(yyscanner, operator), make_elist(expr, NULL)); +} + /* * List of available functions: - * - fname: function name + * - fname: function name, "!..." for special internal functions * - nargs: number of arguments * -1 is a special value for least & greatest meaning #args >= 1 + * -2 is for the "CASE WHEN ..." function, which has #args >= 3 and odd * - tag: function identifier from PgBenchFunction enum */ static const struct @@ -155,7 +252,7 @@ static const struct "/", 2, PGBENCH_DIV }, { - "%", 2, PGBENCH_MOD + "mod", 2, PGBENCH_MOD }, /* actual functions */ { @@ -176,6 +273,12 @@ static const struct { "sqrt", 1, PGBENCH_SQRT }, + { + "ln", 1, PGBENCH_LN + }, + { + "exp", 1, PGBENCH_EXP + }, { "int", 1, PGBENCH_INT }, @@ -200,6 +303,52 @@ static const struct { "power", 2, PGBENCH_POW }, + /* logical operators */ + { + "!and", 2, PGBENCH_AND + }, + { + "!or", 2, PGBENCH_OR + }, + { + "!not", 1, PGBENCH_NOT + }, + /* bitwise integer operators */ + { + "&", 2, PGBENCH_BITAND + }, + { + "|", 2, PGBENCH_BITOR + }, + { + "#", 2, PGBENCH_BITXOR + }, + { + "<<", 2, PGBENCH_LSHIFT + }, + { + ">>", 2, PGBENCH_RSHIFT + }, + /* comparison operators */ + { + "=", 2, PGBENCH_EQ + }, + { + "<>", 2, PGBENCH_NE + }, + { + "<=", 2, PGBENCH_LE + }, + { + "<", 2, PGBENCH_LT + }, + { + "!is", 2, PGBENCH_IS + }, + /* "case when ... then ... else ... end" construction */ + { + "!case_end", -2, PGBENCH_CASE + }, /* keep as last array element */ { NULL, 0, 0 @@ -288,6 +437,16 @@ make_func(yyscan_t yyscanner, int fnumber, PgBenchExprList *args) elist_length(args) == 0) expr_yyerror_more(yyscanner, "at least one argument expected", PGBENCH_FUNCTIONS[fnumber].fname); + /* special case: case (when ... then ...)+ (else ...)? end */ + if (PGBENCH_FUNCTIONS[fnumber].nargs == -2) + { + int len = elist_length(args); + + /* 'else' branch is always present, but could be a NULL-constant */ + if (len < 3 || len % 2 != 1) + expr_yyerror_more(yyscanner, "odd and >= 3 number of arguments expected", + "case control structure"); + } expr->etype = ENODE_FUNCTION; expr->u.function.function = PGBENCH_FUNCTIONS[fnumber].tag; @@ -300,6 +459,14 @@ make_func(yyscan_t yyscanner, int fnumber, PgBenchExprList *args) return expr; } +static PgBenchExpr * +make_case(yyscan_t yyscanner, PgBenchExprList *when_then_list, PgBenchExpr *else_part) +{ + return make_func(yyscanner, + find_func(yyscanner, "!case_end"), + make_elist(else_part, when_then_list)); +} + /* * exprscan.l is compiled as part of exprparse.y. 
Currently, this is * unavoidable because exprparse does not create a .h file to export diff --git a/src/bin/pgbench/exprscan.l b/src/bin/pgbench/exprscan.l index b86e77a7ea..5c1bd88128 100644 --- a/src/bin/pgbench/exprscan.l +++ b/src/bin/pgbench/exprscan.l @@ -71,6 +71,22 @@ newline [\n] /* Line continuation marker */ continuation \\{newline} +/* case insensitive keywords */ +and [Aa][Nn][Dd] +or [Oo][Rr] +not [Nn][Oo][Tt] +case [Cc][Aa][Ss][Ee] +when [Ww][Hh][Ee][Nn] +then [Tt][Hh][Ee][Nn] +else [Ee][Ll][Ss][Ee] +end [Ee][Nn][Dd] +true [Tt][Rr][Uu][Ee] +false [Ff][Aa][Ll][Ss][Ee] +null [Nn][Uu][Ll][Ll] +is [Ii][Ss] +isnull [Ii][Ss][Nn][Uu][Ll][Ll] +notnull [Nn][Oo][Tt][Nn][Uu][Ll][Ll] + /* Exclusive states */ %x EXPR @@ -129,15 +145,52 @@ continuation \\{newline} "-" { return '-'; } "*" { return '*'; } "/" { return '/'; } -"%" { return '%'; } +"%" { return '%'; } /* C version, also in Pg SQL */ +"=" { return '='; } +"<>" { return NE_OP; } +"!=" { return NE_OP; } /* C version, also in Pg SQL */ +"<=" { return LE_OP; } +">=" { return GE_OP; } +"<<" { return LS_OP; } +">>" { return RS_OP; } +"<" { return '<'; } +">" { return '>'; } +"|" { return '|'; } +"&" { return '&'; } +"#" { return '#'; } +"~" { return '~'; } + "(" { return '('; } ")" { return ')'; } "," { return ','; } +{and} { return AND_OP; } +{or} { return OR_OP; } +{not} { return NOT_OP; } +{is} { return IS_OP; } +{isnull} { return ISNULL_OP; } +{notnull} { return NOTNULL_OP; } + +{case} { return CASE_KW; } +{when} { return WHEN_KW; } +{then} { return THEN_KW; } +{else} { return ELSE_KW; } +{end} { return END_KW; } + :{alnum}+ { yylval->str = pg_strdup(yytext + 1); return VARIABLE; } + +{null} { return NULL_CONST; } +{true} { + yylval->bval = true; + return BOOLEAN_CONST; + } +{false} { + yylval->bval = false; + return BOOLEAN_CONST; + } {digit}+ { yylval->ival = strtoint64(yytext); return INTEGER_CONST; diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index fc2c7342ed..31ea6ca06e 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -189,19 +189,20 @@ const char *progname; volatile bool timer_exceeded = false; /* flag from signal handler */ /* - * Variable definitions. If a variable has a string value, "value" is that - * value, is_numeric is false, and num_value is undefined. If the value is - * known to be numeric, is_numeric is true and num_value contains the value - * (in any permitted numeric variant). In this case "value" contains the - * string equivalent of the number, if we've had occasion to compute that, - * or NULL if we haven't. + * Variable definitions. + * + * If a variable only has a string value, "svalue" is that value, and value is + * "not set". If the value is known, "value" contains the value (in any + * variant). + * + * In this case "svalue" contains the string equivalent of the value, if we've + * had occasion to compute that, or NULL if we haven't. */ typedef struct { char *name; /* variable's name */ - char *value; /* its value in string form, if known */ - bool is_numeric; /* is numeric value known? 
*/ - PgBenchValue num_value; /* variable's value in numeric form */ + char *svalue; /* its value in string form, if known */ + PgBenchValue value; /* actual variable's value */ } Variable; #define MAX_SCRIPTS 128 /* max number of SQL scripts allowed */ @@ -488,6 +489,8 @@ static const BuiltinScript builtin_script[] = /* Function prototypes */ +static void setNullValue(PgBenchValue *pv); +static void setBoolValue(PgBenchValue *pv, bool bval); static void setIntValue(PgBenchValue *pv, int64 ival); static void setDoubleValue(PgBenchValue *pv, double dval); static bool evaluateExpr(TState *, CState *, PgBenchExpr *, PgBenchValue *); @@ -1146,50 +1149,82 @@ getVariable(CState *st, char *name) if (var == NULL) return NULL; /* not found */ - if (var->value) - return var->value; /* we have it in string form */ + if (var->svalue) + return var->svalue; /* we have it in string form */ - /* We need to produce a string equivalent of the numeric value */ - Assert(var->is_numeric); - if (var->num_value.type == PGBT_INT) + /* We need to produce a string equivalent of the value */ + Assert(var->value.type != PGBT_NO_VALUE); + if (var->value.type == PGBT_NULL) + snprintf(stringform, sizeof(stringform), "NULL"); + else if (var->value.type == PGBT_BOOLEAN) snprintf(stringform, sizeof(stringform), - INT64_FORMAT, var->num_value.u.ival); - else - { - Assert(var->num_value.type == PGBT_DOUBLE); + "%s", var->value.u.bval ? "true" : "false"); + else if (var->value.type == PGBT_INT) snprintf(stringform, sizeof(stringform), - "%.*g", DBL_DIG, var->num_value.u.dval); - } - var->value = pg_strdup(stringform); - return var->value; + INT64_FORMAT, var->value.u.ival); + else if (var->value.type == PGBT_DOUBLE) + snprintf(stringform, sizeof(stringform), + "%.*g", DBL_DIG, var->value.u.dval); + else /* internal error, unexpected type */ + Assert(0); + var->svalue = pg_strdup(stringform); + return var->svalue; } -/* Try to convert variable to numeric form; return false on failure */ +/* Try to convert variable to a value; return false on failure */ static bool -makeVariableNumeric(Variable *var) +makeVariableValue(Variable *var) { - if (var->is_numeric) + size_t slen; + + if (var->value.type != PGBT_NO_VALUE) return true; /* no work */ - if (is_an_int(var->value)) + slen = strlen(var->svalue); + + if (slen == 0) + /* what should it do on ""? */ + return false; + + if (pg_strcasecmp(var->svalue, "null") == 0) { - setIntValue(&var->num_value, strtoint64(var->value)); - var->is_numeric = true; + setNullValue(&var->value); + } + /* + * accept prefixes such as y, ye, n, no... but not for "o". + * 0/1 are recognized later as an int, which is converted + * to bool if needed. 
+ */ + else if (pg_strncasecmp(var->svalue, "true", slen) == 0 || + pg_strncasecmp(var->svalue, "yes", slen) == 0 || + pg_strcasecmp(var->svalue, "on") == 0) + { + setBoolValue(&var->value, true); + } + else if (pg_strncasecmp(var->svalue, "false", slen) == 0 || + pg_strncasecmp(var->svalue, "no", slen) == 0 || + pg_strcasecmp(var->svalue, "off") == 0 || + pg_strcasecmp(var->svalue, "of") == 0) + { + setBoolValue(&var->value, false); + } + else if (is_an_int(var->svalue)) + { + setIntValue(&var->value, strtoint64(var->svalue)); } else /* type should be double */ { double dv; char xs; - if (sscanf(var->value, "%lf%c", &dv, &xs) != 1) + if (sscanf(var->svalue, "%lf%c", &dv, &xs) != 1) { fprintf(stderr, "malformed variable \"%s\" value: \"%s\"\n", - var->name, var->value); + var->name, var->svalue); return false; } - setDoubleValue(&var->num_value, dv); - var->is_numeric = true; + setDoubleValue(&var->value, dv); } return true; } @@ -1266,7 +1301,7 @@ lookupCreateVariable(CState *st, const char *context, char *name) var = &newvars[st->nvariables]; var->name = pg_strdup(name); - var->value = NULL; + var->svalue = NULL; /* caller is expected to initialize remaining fields */ st->nvariables++; @@ -1292,18 +1327,18 @@ putVariable(CState *st, const char *context, char *name, const char *value) /* dup then free, in case value is pointing at this variable */ val = pg_strdup(value); - if (var->value) - free(var->value); - var->value = val; - var->is_numeric = false; + if (var->svalue) + free(var->svalue); + var->svalue = val; + var->value.type = PGBT_NO_VALUE; return true; } -/* Assign a numeric value to a variable, creating it if need be */ +/* Assign a value to a variable, creating it if need be */ /* Returns false on failure (bad name) */ static bool -putVariableNumber(CState *st, const char *context, char *name, +putVariableValue(CState *st, const char *context, char *name, const PgBenchValue *value) { Variable *var; @@ -1312,11 +1347,10 @@ putVariableNumber(CState *st, const char *context, char *name, if (!var) return false; - if (var->value) - free(var->value); - var->value = NULL; - var->is_numeric = true; - var->num_value = *value; + if (var->svalue) + free(var->svalue); + var->svalue = NULL; + var->value = *value; return true; } @@ -1329,7 +1363,7 @@ putVariableInt(CState *st, const char *context, char *name, int64 value) PgBenchValue val; setIntValue(&val, value); - return putVariableNumber(st, context, name, &val); + return putVariableValue(st, context, name, &val); } /* @@ -1428,6 +1462,67 @@ getQueryParams(CState *st, const Command *command, const char **params) params[i] = getVariable(st, command->argv[i + 1]); } +static char * +valueTypeName(PgBenchValue *pval) +{ + if (pval->type == PGBT_NO_VALUE) + return "none"; + else if (pval->type == PGBT_NULL) + return "null"; + else if (pval->type == PGBT_INT) + return "int"; + else if (pval->type == PGBT_DOUBLE) + return "double"; + else if (pval->type == PGBT_BOOLEAN) + return "boolean"; + else + { + /* internal error, should never get there */ + Assert(false); + return NULL; + } +} + +/* get a value as a boolean, or tell if there is a problem */ +static bool +coerceToBool(PgBenchValue *pval, bool *bval) +{ + if (pval->type == PGBT_BOOLEAN) + { + *bval = pval->u.bval; + return true; + } + else /* NULL, INT or DOUBLE */ + { + fprintf(stderr, "cannot coerce %s to boolean\n", valueTypeName(pval)); + return false; + } +} + +/* + * Return true or false from an expression for conditional purposes. 
+ * Non zero numerical values are true, zero and NULL are false. + */ +static bool +valueTruth(PgBenchValue *pval) +{ + switch (pval->type) + { + case PGBT_NULL: + return false; + case PGBT_BOOLEAN: + return pval->u.bval; + case PGBT_INT: + return pval->u.ival != 0; + case PGBT_DOUBLE: + return pval->u.dval != 0.0; + default: + /* internal error, unexpected type */ + Assert(0); + return false; + } +} + /* get a value as an int, tell if there is a problem */ static bool coerceToInt(PgBenchValue *pval, int64 *ival) @@ -1437,11 +1532,10 @@ coerceToInt(PgBenchValue *pval, int64 *ival) *ival = pval->u.ival; return true; } - else + else if (pval->type == PGBT_DOUBLE) { double dval = pval->u.dval; - Assert(pval->type == PGBT_DOUBLE); if (dval < PG_INT64_MIN || PG_INT64_MAX < dval) { fprintf(stderr, "double to int overflow for %f\n", dval); @@ -1450,6 +1544,11 @@ coerceToInt(PgBenchValue *pval, int64 *ival) *ival = (int64) dval; return true; } + else /* BOOLEAN or NULL */ + { + fprintf(stderr, "cannot coerce %s to int\n", valueTypeName(pval)); + return false; + } } /* get a value as a double, or tell if there is a problem */ @@ -1461,12 +1560,32 @@ coerceToDouble(PgBenchValue *pval, double *dval) *dval = pval->u.dval; return true; } - else + else if (pval->type == PGBT_INT) { - Assert(pval->type == PGBT_INT); *dval = (double) pval->u.ival; return true; } + else /* BOOLEAN or NULL */ + { + fprintf(stderr, "cannot coerce %s to double\n", valueTypeName(pval)); + return false; + } +} + +/* assign a null value */ +static void +setNullValue(PgBenchValue *pv) +{ + pv->type = PGBT_NULL; + pv->u.ival = 0; +} + +/* assign a boolean value */ +static void +setBoolValue(PgBenchValue *pv, bool bval) +{ + pv->type = PGBT_BOOLEAN; + pv->u.bval = bval; } /* assign an integer value */ static void @@ -1484,24 +1603,144 @@ setDoubleValue(PgBenchValue *pv, double dval) pv->u.dval = dval; } +static bool isLazyFunc(PgBenchFunction func) +{ + return func == PGBENCH_AND || func == PGBENCH_OR || func == PGBENCH_CASE; +} + +/* lazy evaluation of some functions */ +static bool +evalLazyFunc(TState *thread, CState *st, + PgBenchFunction func, PgBenchExprLink *args, PgBenchValue *retval) +{ + PgBenchValue a1, a2; + bool ba1, ba2; + + Assert(isLazyFunc(func) && args != NULL && args->next != NULL); + + /* args points to first condition */ + if (!evaluateExpr(thread, st, args->expr, &a1)) + return false; + + /* second condition for AND/OR and corresponding branch for CASE */ + args = args->next; + + switch (func) + { + case PGBENCH_AND: + if (a1.type == PGBT_NULL) + { + setNullValue(retval); + return true; + } + + if (!coerceToBool(&a1, &ba1)) + return false; + + if (!ba1) + { + setBoolValue(retval, false); + return true; + } + + if (!evaluateExpr(thread, st, args->expr, &a2)) + return false; + + if (a2.type == PGBT_NULL) + { + setNullValue(retval); + return true; + } + else if (!coerceToBool(&a2, &ba2)) + return false; + else + { + setBoolValue(retval, ba2); + return true; + } + + return true; + + case PGBENCH_OR: + + if (a1.type == PGBT_NULL) + { + setNullValue(retval); + return true; + } + + if (!coerceToBool(&a1, &ba1)) + return false; + + if (ba1) + { + setBoolValue(retval, true); + return true; + } + + if (!evaluateExpr(thread, st, args->expr, &a2)) + return false; + + if (a2.type == PGBT_NULL) + { + setNullValue(retval); + return true; + } + else if (!coerceToBool(&a2, &ba2)) + return false; + else + { + setBoolValue(retval, ba2); + return true; + } + + case PGBENCH_CASE: + /* when true, execute branch */ + if 
(valueTruth(&a1)) + return evaluateExpr(thread, st, args->expr, retval); + + /* now args contains next condition or final else expression */ + args = args->next; + + /* final else case? */ + if (args->next == NULL) + return evaluateExpr(thread, st, args->expr, retval); + + /* no, another when, proceed */ + return evalLazyFunc(thread, st, PGBENCH_CASE, args, retval); + + default: + /* internal error, cannot get here */ + Assert(0); + break; + } + return false; +} + /* maximum number of function arguments */ #define MAX_FARGS 16 /* - * Recursive evaluation of functions + * Recursive evaluation of standard functions, + * which do not require lazy evaluation. */ static bool -evalFunc(TState *thread, CState *st, - PgBenchFunction func, PgBenchExprLink *args, PgBenchValue *retval) +evalStandardFunc( + TState *thread, CState *st, + PgBenchFunction func, PgBenchExprLink *args, PgBenchValue *retval) { /* evaluate all function arguments */ - int nargs = 0; - PgBenchValue vargs[MAX_FARGS]; + int nargs = 0; + PgBenchValue vargs[MAX_FARGS]; PgBenchExprLink *l = args; + bool has_null = false; for (nargs = 0; nargs < MAX_FARGS && l != NULL; nargs++, l = l->next) + { if (!evaluateExpr(thread, st, l->expr, &vargs[nargs])) return false; + has_null |= vargs[nargs].type == PGBT_NULL; + } if (l != NULL) { @@ -1510,6 +1749,13 @@ evalFunc(TState *thread, CState *st, return false; } + /* NULL arguments */ + if (has_null && func != PGBENCH_IS && func != PGBENCH_DEBUG) + { + setNullValue(retval); + return true; + } + /* then evaluate function */ switch (func) { @@ -1519,6 +1765,10 @@ evalFunc(TState *thread, CState *st, case PGBENCH_MUL: case PGBENCH_DIV: case PGBENCH_MOD: + case PGBENCH_EQ: + case PGBENCH_NE: + case PGBENCH_LE: + case PGBENCH_LT: { PgBenchValue *lval = &vargs[0], *rval = &vargs[1]; @@ -1554,6 +1804,22 @@ evalFunc(TState *thread, CState *st, setDoubleValue(retval, ld / rd); return true; + case PGBENCH_EQ: + setBoolValue(retval, ld == rd); + return true; + + case PGBENCH_NE: + setBoolValue(retval, ld != rd); + return true; + + case PGBENCH_LE: + setBoolValue(retval, ld <= rd); + return true; + + case PGBENCH_LT: + setBoolValue(retval, ld < rd); + return true; + default: /* cannot get here */ Assert(0); @@ -1582,6 +1848,22 @@ evalFunc(TState *thread, CState *st, setIntValue(retval, li * ri); return true; + case PGBENCH_EQ: + setBoolValue(retval, li == ri); + return true; + + case PGBENCH_NE: + setBoolValue(retval, li != ri); + return true; + + case PGBENCH_LE: + setBoolValue(retval, li <= ri); + return true; + + case PGBENCH_LT: + setBoolValue(retval, li < ri); + return true; + case PGBENCH_DIV: case PGBENCH_MOD: if (ri == 0) @@ -1622,6 +1904,45 @@ evalFunc(TState *thread, CState *st, } } + /* integer bitwise operators */ + case PGBENCH_BITAND: + case PGBENCH_BITOR: + case PGBENCH_BITXOR: + case PGBENCH_LSHIFT: + case PGBENCH_RSHIFT: + { + int64 li, ri; + + if (!coerceToInt(&vargs[0], &li) || !coerceToInt(&vargs[1], &ri)) + return false; + + if (func == PGBENCH_BITAND) + setIntValue(retval, li & ri); + else if (func == PGBENCH_BITOR) + setIntValue(retval, li | ri); + else if (func == PGBENCH_BITXOR) + setIntValue(retval, li ^ ri); + else if (func == PGBENCH_LSHIFT) + setIntValue(retval, li << ri); + else if (func == PGBENCH_RSHIFT) + setIntValue(retval, li >> ri); + else /* cannot get here */ + Assert(0); + + return true; + } + + /* logical operators */ + case PGBENCH_NOT: + { + bool b; + if (!coerceToBool(&vargs[0], &b)) + return false; + + setBoolValue(retval, !b); + return true; + } + /* no 
arguments */ case PGBENCH_PI: setDoubleValue(retval, M_PI); @@ -1660,13 +1981,16 @@ evalFunc(TState *thread, CState *st, fprintf(stderr, "debug(script=%d,command=%d): ", st->use_file, st->command + 1); - if (varg->type == PGBT_INT) + if (varg->type == PGBT_NULL) + fprintf(stderr, "null\n"); + else if (varg->type == PGBT_BOOLEAN) + fprintf(stderr, "boolean %s\n", varg->u.bval ? "true" : "false"); + else if (varg->type == PGBT_INT) fprintf(stderr, "int " INT64_FORMAT "\n", varg->u.ival); - else - { - Assert(varg->type == PGBT_DOUBLE); + else if (varg->type == PGBT_DOUBLE) fprintf(stderr, "double %.*g\n", DBL_DIG, varg->u.dval); - } + else /* internal error, unexpected type */ + Assert(0); *retval = *varg; @@ -1676,6 +2000,8 @@ evalFunc(TState *thread, CState *st, /* 1 double argument */ case PGBENCH_DOUBLE: case PGBENCH_SQRT: + case PGBENCH_LN: + case PGBENCH_EXP: { double dval; @@ -1686,6 +2012,11 @@ evalFunc(TState *thread, CState *st, if (func == PGBENCH_SQRT) dval = sqrt(dval); + else if (func == PGBENCH_LN) + dval = log(dval); + else if (func == PGBENCH_EXP) + dval = exp(dval); + /* else is cast: do nothing */ setDoubleValue(retval, dval); return true; @@ -1868,6 +2199,16 @@ evalFunc(TState *thread, CState *st, return true; } + case PGBENCH_IS: + { + Assert(nargs == 2); + /* note: this simple implementation is more permissive than SQL */ + setBoolValue(retval, + vargs[0].type == vargs[1].type && + vargs[0].u.bval == vargs[1].u.bval); + return true; + } + default: /* cannot get here */ Assert(0); @@ -1876,6 +2217,17 @@ evalFunc(TState *thread, CState *st, } } +/* evaluate some function */ +static bool +evalFunc(TState *thread, CState *st, + PgBenchFunction func, PgBenchExprLink *args, PgBenchValue *retval) +{ + if (isLazyFunc(func)) + return evalLazyFunc(thread, st, func, args, retval); + else + return evalStandardFunc(thread, st, func, args, retval); +} + /* * Recursive evaluation of an expression in a pgbench script * using the current state of variables. 
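The practical effect of the lazy evaluation implemented in evalLazyFunc
above is that a guarded subexpression is never executed at all; a small
sketch (hypothetical variable names, in the spirit of the regression
tests below):

    \set d 0
    -- the ELSE branch is never evaluated, so no division-by-zero error
    \set r debug(case when :d = 0 then -1 else 1 / :d end)
    -- AND short-circuits: the right-hand operand is skipped entirely
    \set s debug(:d <> 0 and 1 / :d > 0)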
@@ -1904,10 +2256,10 @@ evaluateExpr(TState *thread, CState *st, PgBenchExpr *expr, PgBenchValue *retval return false; } - if (!makeVariableNumeric(var)) + if (!makeVariableValue(var)) return false; - *retval = var->num_value; + *retval = var->value; return true; } @@ -2479,7 +2831,7 @@ doCustom(TState *thread, CState *st, StatsData *agg) break; } - if (!putVariableNumber(st, argv[0], argv[1], &result)) + if (!putVariableValue(st, argv[0], argv[1], &result)) { commandFailed(st, "assignment of meta-command 'set' failed"); st->state = CSTATE_ABORTED; @@ -4582,16 +4934,16 @@ main(int argc, char **argv) { Variable *var = &state[0].variables[j]; - if (var->is_numeric) + if (var->value.type != PGBT_NO_VALUE) { - if (!putVariableNumber(&state[i], "startup", - var->name, &var->num_value)) + if (!putVariableValue(&state[i], "startup", + var->name, &var->value)) exit(1); } else { if (!putVariable(&state[i], "startup", - var->name, var->value)) + var->name, var->svalue)) exit(1); } } diff --git a/src/bin/pgbench/pgbench.h b/src/bin/pgbench/pgbench.h index ce3c260988..0705ccdf0d 100644 --- a/src/bin/pgbench/pgbench.h +++ b/src/bin/pgbench/pgbench.h @@ -33,8 +33,11 @@ union YYSTYPE; */ typedef enum { + PGBT_NO_VALUE, + PGBT_NULL, PGBT_INT, - PGBT_DOUBLE + PGBT_DOUBLE, + PGBT_BOOLEAN /* add other types here */ } PgBenchValueType; @@ -45,6 +48,7 @@ typedef struct { int64 ival; double dval; + bool bval; /* add other types here */ } u; } PgBenchValue; @@ -73,11 +77,27 @@ typedef enum PgBenchFunction PGBENCH_DOUBLE, PGBENCH_PI, PGBENCH_SQRT, + PGBENCH_LN, + PGBENCH_EXP, PGBENCH_RANDOM, PGBENCH_RANDOM_GAUSSIAN, PGBENCH_RANDOM_EXPONENTIAL, PGBENCH_RANDOM_ZIPFIAN, - PGBENCH_POW + PGBENCH_POW, + PGBENCH_AND, + PGBENCH_OR, + PGBENCH_NOT, + PGBENCH_BITAND, + PGBENCH_BITOR, + PGBENCH_BITXOR, + PGBENCH_LSHIFT, + PGBENCH_RSHIFT, + PGBENCH_EQ, + PGBENCH_NE, + PGBENCH_LE, + PGBENCH_LT, + PGBENCH_IS, + PGBENCH_CASE } PgBenchFunction; typedef struct PgBenchExpr PgBenchExpr; diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 3dd080e6e6..e579334914 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -211,10 +211,13 @@ sub pgbench # test expressions pgbench( - '-t 1 -Dfoo=-10.1 -Dbla=false -Di=+3 -Dminint=-9223372036854775808', + '-t 1 -Dfoo=-10.1 -Dbla=false -Di=+3 -Dminint=-9223372036854775808 -Dn=null -Dt=t -Df=of -Dd=1.0', 0, [ qr{type: .*/001_pgbench_expressions}, qr{processed: 1/1} ], - [ qr{command=4.: int 4\b}, + [ qr{command=1.: int 1\d\b}, + qr{command=2.: int 1\d\d\b}, + qr{command=3.: int 1\d\d\d\b}, + qr{command=4.: int 4\b}, qr{command=5.: int 5\b}, qr{command=6.: int 6\b}, qr{command=7.: int 7\b}, @@ -223,51 +226,61 @@ sub pgbench qr{command=10.: int 10\b}, qr{command=11.: int 11\b}, qr{command=12.: int 12\b}, - qr{command=13.: double 13\b}, - qr{command=14.: double 14\b}, qr{command=15.: double 15\b}, qr{command=16.: double 16\b}, qr{command=17.: double 17\b}, - qr{command=18.: double 18\b}, - qr{command=19.: double 19\b}, - qr{command=20.: double 20\b}, - qr{command=21.: int 9223372036854775807\b}, - qr{command=23.: int [1-9]\b}, - qr{command=24.: double -27\b}, - qr{command=25.: double 1024\b}, - qr{command=26.: double 1\b}, - qr{command=27.: double 1\b}, - qr{command=28.: double -0.125\b}, - qr{command=29.: double -0.125\b}, - qr{command=30.: double -0.00032\b}, - qr{command=31.: double 8.50705917302346e\+0?37\b}, - qr{command=32.: double 1e\+0?30\b}, + qr{command=18.: int 
9223372036854775807\b}, + qr{command=20.: int [1-9]\b}, + qr{command=21.: double -27\b}, + qr{command=22.: double 1024\b}, + qr{command=23.: double 1\b}, + qr{command=24.: double 1\b}, + qr{command=25.: double -0.125\b}, + qr{command=26.: double -0.125\b}, + qr{command=27.: double -0.00032\b}, + qr{command=28.: double 8.50705917302346e\+0?37\b}, + qr{command=29.: double 1e\+30\b}, + qr{command=30.: boolean false\b}, + qr{command=31.: boolean true\b}, + qr{command=32.: int 32\b}, + qr{command=33.: int 33\b}, + qr{command=34.: double 34\b}, + qr{command=35.: int 35\b}, + qr{command=36.: int 36\b}, + qr{command=37.: double 37\b}, + qr{command=38.: int 38\b}, + qr{command=39.: int 39\b}, + qr{command=40.: boolean true\b}, + qr{command=41.: null\b}, + qr{command=42.: null\b}, + qr{command=43.: boolean true\b}, + qr{command=44.: boolean true\b}, + qr{command=45.: boolean true\b}, + qr{command=46.: int 46\b}, + qr{command=47.: boolean true\b}, + qr{command=48.: boolean true\b}, ], 'pgbench expressions', { '001_pgbench_expressions' => q{-- integer functions -\set i1 debug(random(1, 100)) -\set i2 debug(random_exponential(1, 100, 10.0)) -\set i3 debug(random_gaussian(1, 100, 10.0)) +\set i1 debug(random(10, 19)) +\set i2 debug(random_exponential(100, 199, 10.0)) +\set i3 debug(random_gaussian(1000, 1999, 10.0)) \set i4 debug(abs(-4)) \set i5 debug(greatest(5, 4, 3, 2)) \set i6 debug(11 + least(-5, -4, -3, -2)) \set i7 debug(int(7.3)) --- integer operators -\set i8 debug(17 / 5 + 5) -\set i9 debug(- (3 * 4 - 3) / -1 + 3 % -1) +-- integer arithmetic and bit-wise operators +\set i8 debug(17 / (4|1) + ( 4 + (7 >> 2))) +\set i9 debug(- (3 * 4 - (-(~ 1) + -(~ 0))) / -1 + 3 % -1) \set ia debug(10 + (0 + 0 * 0 - 0 / 1)) \set ib debug(:ia + :scale) -\set ic debug(64 % 13) --- double functions -\set d1 debug(sqrt(3.0) * abs(-0.8E1)) -\set d2 debug(double(1 + 1) * 7) +\set ic debug(64 % (((2 + 1 * 2 + (1 # 2) | 4 * (2 & 11)) - (1 << 2)) + 2)) +-- double functions and operators +\set d1 debug(sqrt(+1.5 * 2.0) * abs(-0.8E1)) +\set d2 debug(double(1 + 1) * (-75.0 / :foo)) \set pi debug(pi() * 4.9) -\set d4 debug(greatest(4, 2, -1.17) * 4.0) +\set d4 debug(greatest(4, 2, -1.17) * 4.0 * Ln(Exp(1.0))) \set d5 debug(least(-5.18, .0E0, 1.0/0) * -3.3) --- double operators -\set d6 debug((0.5 * 12.1 - 0.05) * (31.0 / 10)) -\set d7 debug(11.1 + 7.9) -\set d8 debug(:foo * -2) -- forced overflow \set maxint debug(:minint - 1) -- reset a variable @@ -284,8 +297,55 @@ sub pgbench \set powernegd2 debug(power(-5.0,-5.0)) \set powerov debug(pow(9223372036854775807, 2)) \set powerov2 debug(pow(10,30)) +-- comparisons and logical operations +\set c0 debug(1.0 = 0.0 and 1.0 != 0.0) +\set c1 debug(0 = 1 Or 1.0 = 1) +\set c4 debug(case when 0 < 1 then 32 else 0 end) +\set c5 debug(case when true then 33 else 0 end) +\set c6 debug(case when false THEN -1 when 1 = 1 then 13 + 19 + 2.0 end ) +\set c7 debug(case when (1 > 0) and (1 >= 0) and (0 < 1) and (0 <= 1) and (0 != 1) and (0 = 0) and (0 <> 1) then 35 else 0 end) +\set c8 debug(CASE \ + WHEN (1.0 > 0.0) AND (1.0 >= 0.0) AND (0.0 < 1.0) AND (0.0 <= 1.0) AND \ + (0.0 != 1.0) AND (0.0 = 0.0) AND (0.0 <> 1.0) AND (0.0 = 0.0) \ + THEN 36 \ + ELSE 0 \ + END) +\set c9 debug(CASE WHEN NOT FALSE THEN 3 * 12.3333334 END) +\set ca debug(case when false then 0 when 1-1 <> 0 then 1 else 38 end) +\set cb debug(10 + mod(13 * 7 + 12, 13) - mod(-19 * 11 - 17, 19)) +\set cc debug(NOT (0 > 1) AND (1 <= 1) AND NOT (0 >= 1) AND (0 < 1) AND \ + NOT (false and true) AND (false OR TRUE) AND (NOT :f) AND 
(NOT FALSE) AND \ + NOT (NOT TRUE)) +-- NULL value and associated operators +\set n0 debug(NULL + NULL * exp(NULL)) +\set n1 debug(:n0) +\set n2 debug(NOT (:n0 IS NOT NULL OR :d1 IS NULL)) +\set n3 debug(:n0 IS NULL AND :d1 IS NOT NULL AND :d1 NOTNULL) +\set n4 debug(:n0 ISNULL AND NOT :n0 IS TRUE AND :n0 IS NOT FALSE) +\set n5 debug(CASE WHEN :n IS NULL THEN 46 ELSE NULL END) +-- use a variables of all types +\set n6 debug(:n IS NULL AND NOT :f AND :t) +-- conditional truth +\set cs debug(CASE WHEN 1 THEN TRUE END AND CASE WHEN 1.0 THEN TRUE END AND CASE WHEN :n THEN NULL ELSE TRUE END) +-- lazy evaluation +\set zy 0 +\set yz debug(case when :zy = 0 then -1 else (1 / :zy) end) +\set yz debug(case when :zy = 0 or (1 / :zy) < 0 then -1 else (1 / :zy) end) +\set yz debug(case when :zy > 0 and (1 / :zy) < 0 then (1 / :zy) else 1 end) +-- substitute variables of all possible types +\set v0 NULL +\set v1 TRUE +\set v2 5432 +\set v3 -54.21E-2 +SELECT :v0, :v1, :v2, :v3; } }); +=head + +} }); + +=cut + # backslash commands pgbench( '-t 1', 0, @@ -404,8 +464,42 @@ sub pgbench q{\set i random_zipfian(0, 10, 1000000)} ], [ 'set non numeric value', 0, [qr{malformed variable "foo" value: "bla"}], q{\set i :foo + 1} ], - [ 'set no expression', 1, [qr{syntax error}], q{\set i} ], - [ 'set missing argument', 1, [qr{missing argument}i], q{\set} ], + [ 'set no expression', + 1, + [qr{syntax error}], + q{\set i} ], + [ 'set missing argument', + 1, + [qr{missing argument}i], + q{\set} ], + [ 'set not a bool', + 0, + [ qr{cannot coerce double to boolean} ], + q{\set b NOT 0.0} ], + [ 'set not an int', + 0, + [ qr{cannot coerce boolean to int} ], + q{\set i TRUE + 2} ], + [ 'set not an double', + 0, + [ qr{cannot coerce boolean to double} ], + q{\set d ln(TRUE)} ], + [ 'set case error', + 1, + [ qr{syntax error in command "set"} ], + q{\set i CASE TRUE THEN 1 ELSE 0 END} ], + [ 'set random error', + 0, + [ qr{cannot coerce boolean to int} ], + q{\set b random(FALSE, TRUE)} ], + [ 'set number of args mismatch', + 1, + [ qr{unexpected number of arguments} ], + q{\set d ln(1.0, 2.0))} ], + [ 'set at least one arg', + 1, + [ qr{at least one argument expected} ], + q{\set i greatest())} ], # SETSHELL [ 'setshell not an int', 0, @@ -427,7 +521,10 @@ sub pgbench # MISC [ 'misc invalid backslash command', 1, [qr{invalid command .* "nosuchcommand"}], q{\nosuchcommand} ], - [ 'misc empty script', 1, [qr{empty command list for script}], q{} ],); + [ 'misc empty script', 1, [qr{empty command list for script}], q{} ], + [ 'bad boolean', 0, [qr{malformed variable.*trueXXX}], q{\set b :badtrue or true} ], + ); + for my $e (@errors) { @@ -435,7 +532,7 @@ sub pgbench my $n = '001_pgbench_error_' . $name; $n =~ s/ /_/g; pgbench( - '-n -t 1 -Dfoo=bla -M prepared', + '-n -t 1 -Dfoo=bla -Dnull=null -Dtrue=true -Done=1 -Dzero=0.0 -Dbadtrue=trueXXX -M prepared', $status, [ $status ? qr{^$} : qr{processed: 0/1} ], $re, From 921059bd66c7fb1230c705d3b1a65940800c4cbb Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 9 Jan 2018 10:12:58 -0500 Subject: [PATCH 0801/1087] Don't allow VACUUM VERBOSE ANALYZE VERBOSE. There are plans to extend the syntax for ANALYZE, so we need to break the link between VacuumStmt and AnalyzeStmt. But apart from that, the syntax above is undocumented and, if discovered by users, might give the impression that the VERBOSE option for VACUUM differs from the verbose option from ANALYZE, which it does not. 
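Concretely, the revised grammar is expected to behave like this (table
name hypothetical, not verified here):

    VACUUM VERBOSE ANALYZE accounts;          -- still accepted
    VACUUM VERBOSE ANALYZE VERBOSE accounts;  -- now a syntax error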
Nathan Bossart, reviewed by Michael Paquier and Masahiko Sawada Discussion: http://postgr.es/m/D3FC73E2-9B1A-4DB4-8180-55F57D116B4E@amazon.com --- src/backend/parser/gram.y | 25 ++++++++++--------------- 1 file changed, 10 insertions(+), 15 deletions(-) diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index 16923e853a..e42b7caff6 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -437,7 +437,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); %type opt_instead %type opt_unique opt_concurrently opt_verbose opt_full -%type opt_freeze opt_default opt_recheck +%type opt_freeze opt_analyze opt_default opt_recheck %type opt_binary opt_oids copy_delimiter %type copy_from opt_program @@ -10462,7 +10462,7 @@ cluster_index_specification: * *****************************************************************************/ -VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_vacuum_relation_list +VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_analyze opt_vacuum_relation_list { VacuumStmt *n = makeNode(VacuumStmt); n->options = VACOPT_VACUUM; @@ -10472,19 +10472,9 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_vacuum_relation_list n->options |= VACOPT_FREEZE; if ($4) n->options |= VACOPT_VERBOSE; - n->rels = $5; - $$ = (Node *)n; - } - | VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt - { - VacuumStmt *n = (VacuumStmt *) $5; - n->options |= VACOPT_VACUUM; - if ($2) - n->options |= VACOPT_FULL; - if ($3) - n->options |= VACOPT_FREEZE; - if ($4) - n->options |= VACOPT_VERBOSE; + if ($5) + n->options |= VACOPT_ANALYZE; + n->rels = $6; $$ = (Node *)n; } | VACUUM '(' vacuum_option_list ')' opt_vacuum_relation_list @@ -10534,6 +10524,11 @@ analyze_keyword: | ANALYSE /* British */ {} ; +opt_analyze: + analyze_keyword { $$ = true; } + | /*EMPTY*/ { $$ = false; } + ; + opt_verbose: VERBOSE { $$ = true; } | /*EMPTY*/ { $$ = false; } From 13db3b936359eebf02a768db3a1959af880b6cc6 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 9 Jan 2018 11:39:10 -0500 Subject: [PATCH 0802/1087] Allow ConditionVariable[PrepareTo]Sleep to auto-switch between CVs. The original coding here insisted that callers manually cancel any prepared sleep for one condition variable before starting a sleep on another one. While that's not a huge burden today, it seems like a gotcha that will bite us in future if the use of condition variables increases; anything we can do to make the use of this API simpler and more robust is attractive. Hence, allow these functions to automatically switch their attention to a different CV when required. This is safe for the same reason it was OK for commit aced5a92b to let a broadcast operation cancel any prepared CV sleep: whenever we return to the other test-and-sleep loop, we will automatically re-prepare that CV, paying at most an extra test of that loop's exit condition. Back-patch to v10 where condition variables were introduced. Ordinarily we would probably not back-patch a change like this, but since it does not invalidate any coding pattern that was legal before, it seems safe enough. Furthermore, there's an open bug in replorigin_drop() for which the simplest fix requires this. Even if we chose to fix that in some more complicated way, the hazard would remain that we might back-patch some other bug fix that requires this behavior. Patch by me, reviewed by Thomas Munro. 
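For reference, the caller pattern that this change simplifies is the
standard test-and-sleep loop; a rough sketch (the condition and
wait-event names are placeholders, not real identifiers):

    ConditionVariablePrepareToSleep(&cv1);
    while (!my_condition_satisfied())
    {
        /*
         * Code reached from here may now sleep on some other CV without
         * first canceling this prepared sleep; the next call to
         * ConditionVariableSleep() re-establishes it automatically.
         */
        ConditionVariableSleep(&cv1, WAIT_EVENT_MY_WAIT);
    }
    ConditionVariableCancelSleep();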
Discussion: https://postgr.es/m/2437.1515368316@sss.pgh.pa.us --- src/backend/storage/lmgr/condition_variable.c | 26 ++++++++++--------- src/include/storage/condition_variable.h | 4 +-- 2 files changed, 15 insertions(+), 15 deletions(-) diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c index 98a67965cd..25c5cd7b45 100644 --- a/src/backend/storage/lmgr/condition_variable.c +++ b/src/backend/storage/lmgr/condition_variable.c @@ -55,10 +55,6 @@ ConditionVariableInit(ConditionVariable *cv) * condition between calling ConditionVariablePrepareToSleep and calling * ConditionVariableSleep. If that is inconvenient, omit calling * ConditionVariablePrepareToSleep. - * - * Only one condition variable can be used at a time, ie, - * ConditionVariableCancelSleep must be called before any attempt is made - * to sleep on a different condition variable. */ void ConditionVariablePrepareToSleep(ConditionVariable *cv) @@ -81,10 +77,15 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv) } /* - * It's not legal to prepare a sleep until the previous sleep has been - * completed or canceled. + * If some other sleep is already prepared, cancel it; this is necessary + * because we have just one static variable tracking the prepared sleep, + * and also only one cvWaitLink in our PGPROC. It's okay to do this + * because whenever control does return to the other test-and-sleep loop, + * its ConditionVariableSleep call will just re-establish that sleep as + * the prepared one. */ - Assert(cv_sleep_target == NULL); + if (cv_sleep_target != NULL) + ConditionVariableCancelSleep(); /* Record the condition variable on which we will sleep. */ cv_sleep_target = cv; @@ -133,16 +134,16 @@ ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info) * is recommended because it avoids manipulations of the wait list, or not * met initially, in which case preparing first is better because it * avoids one extra test of the exit condition. + * + * If we are currently prepared to sleep on some other CV, we just cancel + * that and prepare this one; see ConditionVariablePrepareToSleep. */ - if (cv_sleep_target == NULL) + if (cv_sleep_target != cv) { ConditionVariablePrepareToSleep(cv); return; } - /* Any earlier condition variable sleep must have been canceled. */ - Assert(cv_sleep_target == cv); - do { CHECK_FOR_INTERRUPTS(); @@ -265,7 +266,8 @@ ConditionVariableBroadcast(ConditionVariable *cv) * prepared CV sleep. The next call to ConditionVariableSleep will take * care of re-establishing the lost state. */ - ConditionVariableCancelSleep(); + if (cv_sleep_target != NULL) + ConditionVariableCancelSleep(); /* * Inspect the state of the queue. If it's empty, we have nothing to do. diff --git a/src/include/storage/condition_variable.h b/src/include/storage/condition_variable.h index 7dac477d25..32e645c02a 100644 --- a/src/include/storage/condition_variable.h +++ b/src/include/storage/condition_variable.h @@ -40,9 +40,7 @@ extern void ConditionVariableInit(ConditionVariable *cv); * ConditionVariableSleep. Spurious wakeups are possible, but should be * infrequent. After exiting the loop, ConditionVariableCancelSleep must * be called to ensure that the process is no longer in the wait list for - * the condition variable. Only one condition variable can be used at a - * time, ie, ConditionVariableCancelSleep must be called before any attempt - * is made to sleep on a different condition variable. + * the condition variable. 
 */
extern void ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info);
extern void ConditionVariableCancelSleep(void);

From 8a906204aec44de6d8a1514082870f25085d9431 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Tue, 9 Jan 2018 12:09:30 -0500
Subject: [PATCH 0803/1087] Fix race condition during replication origin drop.

replorigin_drop() misunderstood the API for condition variables: it
had ConditionVariablePrepareToSleep and ConditionVariableCancelSleep
inside its test-and-sleep loop, rather than outside the loop as
intended.  The net effect is a narrow race-condition window wherein,
if the process using a replication slot releases it immediately after
replorigin_drop() releases the ReplicationOriginLock, replorigin_drop()
would get into the condition variable's wait list too late and then
wait indefinitely for a signal that won't come.

Because there's a different CV for each replication slot, we can't
just move the ConditionVariablePrepareToSleep call to above the
test-and-sleep loop.  What we can do, in the wake of commit 13db3b936,
is drop the ConditionVariablePrepareToSleep call entirely.  This fix
depends on that commit because (at least in principle) the slot
matching the target replication origin might move around, so that
once in a blue moon successive loop iterations might involve different
CVs.  We can now cope with such a scenario, at the cost of an extra
trip through the retry loop.

(There are ways we could fix this bug without depending on that commit,
but they're all a lot more complicated than this way.)

While at it, upgrade the rather skimpy comments in this function.

Back-patch to v10 where this code came in.

Discussion: https://postgr.es/m/19947.1515455433@sss.pgh.pa.us
---
 src/backend/replication/logical/origin.c | 29 +++++++++++++++++++-----
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c
index 9c30baf544..9a20042a3c 100644
--- a/src/backend/replication/logical/origin.c
+++ b/src/backend/replication/logical/origin.c
@@ -339,20 +339,26 @@ replorigin_drop(RepOriginId roident, bool nowait)

 	Assert(IsTransactionState());

+	/*
+	 * To interlock against concurrent drops, we hold ExclusiveLock on
+	 * pg_replication_origin throughout this function.
+	 */
 	rel = heap_open(ReplicationOriginRelationId, ExclusiveLock);

+	/*
+	 * First, clean up the slot state info, if there is any matching slot.
+	 */
 restart:
 	tuple = NULL;
-	/* cleanup the slot state info */
 	LWLockAcquire(ReplicationOriginLock, LW_EXCLUSIVE);

 	for (i = 0; i < max_replication_slots; i++)
 	{
 		ReplicationState *state = &replication_states[i];

-		/* found our slot */
 		if (state->roident == roident)
 		{
+			/* found our slot, is it busy? */
 			if (state->acquired_by != 0)
 			{
 				ConditionVariable *cv;
@@ -363,16 +369,23 @@ replorigin_drop(RepOriginId roident, bool nowait)
 						 errmsg("could not drop replication origin with OID %d, in use by PID %d",
 								state->roident,
 								state->acquired_by)));

+				/*
+				 * We must wait and then retry.  Since we don't know which CV
+				 * to wait on until here, we can't readily use
+				 * ConditionVariablePrepareToSleep (calling it here would be
+				 * wrong, since we could miss the signal if we did so); just
+				 * use ConditionVariableSleep directly.
+ */ cv = &state->origin_cv; LWLockRelease(ReplicationOriginLock); - ConditionVariablePrepareToSleep(cv); + ConditionVariableSleep(cv, WAIT_EVENT_REPLICATION_ORIGIN_DROP); - ConditionVariableCancelSleep(); goto restart; } - /* first WAL log */ + /* first make a WAL log entry */ { xl_replorigin_drop xlrec; @@ -382,7 +395,7 @@ replorigin_drop(RepOriginId roident, bool nowait) XLogInsert(RM_REPLORIGIN_ID, XLOG_REPLORIGIN_DROP); } - /* then reset the in-memory entry */ + /* then clear the in-memory slot */ state->roident = InvalidRepOriginId; state->remote_lsn = InvalidXLogRecPtr; state->local_lsn = InvalidXLogRecPtr; @@ -390,7 +403,11 @@ replorigin_drop(RepOriginId roident, bool nowait) } } LWLockRelease(ReplicationOriginLock); + ConditionVariableCancelSleep(); + /* + * Now, we can delete the catalog entry. + */ tuple = SearchSysCache1(REPLORIGIDENT, ObjectIdGetDatum(roident)); if (!HeapTupleIsValid(tuple)) elog(ERROR, "cache lookup failed for replication origin with oid %u", From c3d41ccf5931a2e587d114d9886717df76459a9d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 9 Jan 2018 12:28:49 -0500 Subject: [PATCH 0804/1087] Fix ssl tests for when tls-server-end-point is not supported Add a function to TestLib that allows us to check pg_config.h and then decide the expected test outcome based on that. Author: Michael Paquier --- src/test/perl/TestLib.pm | 19 +++++++++++++++++++ src/test/ssl/t/002_scram.pl | 21 +++++++++++++++++---- 2 files changed, 36 insertions(+), 4 deletions(-) diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm index 72826d5bad..fdd427608b 100644 --- a/src/test/perl/TestLib.pm +++ b/src/test/perl/TestLib.pm @@ -26,6 +26,7 @@ our @EXPORT = qw( slurp_dir slurp_file append_to_file + check_pg_config system_or_bail system_log run_log @@ -221,6 +222,24 @@ sub append_to_file close $fh; } +# Check presence of a given regexp within pg_config.h for the installation +# where tests are running, returning a match status result depending on +# that. +sub check_pg_config +{ + my ($regexp) = @_; + my ($stdout, $stderr); + my $result = IPC::Run::run [ 'pg_config', '--includedir' ], '>', + \$stdout, '2>', \$stderr + or die "could not execute pg_config"; + chomp($stdout); + + open my $pg_config_h, '<', "$stdout/pg_config.h" or die "$!"; + my $match = (grep {/^$regexp/} <$pg_config_h>); + close $pg_config_h; + return $match; +} + # # Test functions # diff --git a/src/test/ssl/t/002_scram.pl b/src/test/ssl/t/002_scram.pl index 3f425e00f0..67c1409a6e 100644 --- a/src/test/ssl/t/002_scram.pl +++ b/src/test/ssl/t/002_scram.pl @@ -11,6 +11,10 @@ # This is the hostname used to connect to the server. my $SERVERHOSTADDR = '127.0.0.1'; +# Determine whether build supports tls-server-end-point. +my $supports_tls_server_end_point = + check_pg_config("#define HAVE_X509_GET_SIGNATURE_NID 1"); + # Allocation of base connection string shared among multiple tests. 
my $common_connstr; @@ -44,10 +48,19 @@ "SCRAM authentication with tls-unique as channel binding"); test_connect_ok($common_connstr, "scram_channel_binding=''", - "SCRAM authentication without channel binding"); -test_connect_ok($common_connstr, - "scram_channel_binding=tls-server-end-point", - "SCRAM authentication with tls-server-end-point as channel binding"); + "SCRAM authentication without channel binding"); +if ($supports_tls_server_end_point) +{ + test_connect_ok($common_connstr, + "scram_channel_binding=tls-server-end-point", + "SCRAM authentication with tls-server-end-point as channel binding"); +} +else +{ + test_connect_fails($common_connstr, + "scram_channel_binding=tls-server-end-point", + "SCRAM authentication with tls-server-end-point as channel binding"); +} test_connect_fails($common_connstr, "scram_channel_binding=not-exists", "SCRAM authentication with invalid channel binding"); From 80259d4dbf47d13ef4c105e06c4ea084639d9466 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 9 Jan 2018 12:34:46 -0500 Subject: [PATCH 0805/1087] While waiting for a condition variable, detect postmaster death. The general assumption for postmaster child processes is that they should just exit(1), reasonably promptly, if the postmaster disappears. condition_variable.c neglected this consideration and could be left waiting forever, if the counterpart process it is waiting for has done the right thing and exited. We had some discussion of adjusting the WaitEventSet API to make it harder to make this type of mistake in future; but for the moment, and for v10, let's make this narrow fix. Discussion: https://postgr.es/m/20412.1515456143@sss.pgh.pa.us --- src/backend/storage/lmgr/condition_variable.c | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c index 25c5cd7b45..ef1d5baf01 100644 --- a/src/backend/storage/lmgr/condition_variable.c +++ b/src/backend/storage/lmgr/condition_variable.c @@ -69,9 +69,11 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv) { WaitEventSet *new_event_set; - new_event_set = CreateWaitEventSet(TopMemoryContext, 1); + new_event_set = CreateWaitEventSet(TopMemoryContext, 2); AddWaitEventToSet(new_event_set, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL); + AddWaitEventToSet(new_event_set, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, + NULL, NULL); /* Don't set cv_wait_event_set until we have a correct WES. */ cv_wait_event_set = new_event_set; } @@ -149,11 +151,20 @@ ConditionVariableSleep(ConditionVariable *cv, uint32 wait_event_info) CHECK_FOR_INTERRUPTS(); /* - * Wait for latch to be set. We don't care about the result because - * our contract permits spurious returns. + * Wait for latch to be set. (If we're awakened for some other + * reason, the code below will cope anyway.) */ WaitEventSetWait(cv_wait_event_set, -1, &event, 1, wait_event_info); + if (event.events & WL_POSTMASTER_DEATH) + { + /* + * Emergency bailout if postmaster has died. This is to avoid the + * necessity for manual cleanup of all postmaster children. + */ + exit(1); + } + /* Reset latch before examining the state of the wait list. */ ResetLatch(MyLatch); From 624e440a474420fa0d6cf26c19bfb256547ab71d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 9 Jan 2018 13:07:52 -0500 Subject: [PATCH 0806/1087] Improve the heuristic for ordering child paths of a parallel append. 
Commit ab7271677 introduced code that attempts to order the child scans of a Parallel Append node in a way that will minimize execution time, based on total cost and startup cost. However, it failed to think hard about what to do when estimated costs are exactly equal; a case that's particularly likely to occur when comparing on startup cost. In such a case the ordering of the child paths would be left to the whims of qsort, an algorithm that isn't even stable. We can improve matters by applying the rule used elsewhere in the planner: if total costs are equal, sort on startup cost, and vice versa. When both cost estimates are exactly equal, rather than letting qsort do something unpredictable, sort based on the child paths' relids, which should typically result in sorting in inheritance order. (The latter provision requires inventing a qsort-style comparator for bitmapsets, but maybe we'll have use for that for other reasons in future.) This results in a few plan changes in the select_parallel test, but those all look more reasonable than before, when the actual underlying cost numbers are taken into account. Discussion: https://postgr.es/m/4944.1515446989@sss.pgh.pa.us --- src/backend/nodes/bitmapset.c | 46 ++++++++++++++++++- src/backend/optimizer/util/pathnode.c | 34 ++++++++------ src/include/nodes/bitmapset.h | 1 + src/test/regress/expected/select_parallel.out | 14 +++--- 4 files changed, 73 insertions(+), 22 deletions(-) diff --git a/src/backend/nodes/bitmapset.c b/src/backend/nodes/bitmapset.c index 733fe3cf2a..edcd19a4fd 100644 --- a/src/backend/nodes/bitmapset.c +++ b/src/backend/nodes/bitmapset.c @@ -172,6 +172,50 @@ bms_equal(const Bitmapset *a, const Bitmapset *b) return true; } +/* + * bms_compare - qsort-style comparator for bitmapsets + * + * This guarantees to report values as equal iff bms_equal would say they are + * equal. Otherwise, the highest-numbered bit that is set in one value but + * not the other determines the result. (This rule means that, for example, + * {6} is greater than {5}, which seems plausible.) + */ +int +bms_compare(const Bitmapset *a, const Bitmapset *b) +{ + int shortlen; + int i; + + /* Handle cases where either input is NULL */ + if (a == NULL) + return bms_is_empty(b) ? 0 : -1; + else if (b == NULL) + return bms_is_empty(a) ? 0 : +1; + /* Handle cases where one input is longer than the other */ + shortlen = Min(a->nwords, b->nwords); + for (i = shortlen; i < a->nwords; i++) + { + if (a->words[i] != 0) + return +1; + } + for (i = shortlen; i < b->nwords; i++) + { + if (b->words[i] != 0) + return -1; + } + /* Process words in common */ + i = shortlen; + while (--i >= 0) + { + bitmapword aw = a->words[i]; + bitmapword bw = b->words[i]; + + if (aw != bw) + return (aw > bw) ? 
+1 : -1; + } + return 0; +} + /* * bms_make_singleton - build a bitmapset containing a single member */ @@ -838,7 +882,7 @@ bms_add_range(Bitmapset *a, int lower, int upper) if (lwordnum == uwordnum) { a->words[lwordnum] |= ~(bitmapword) (((bitmapword) 1 << lbitnum) - 1) - & (~(bitmapword) 0) >> ushiftbits; + & (~(bitmapword) 0) >> ushiftbits; } else { diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 7df8761710..48b4db72bc 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -1274,38 +1274,44 @@ create_append_path(RelOptInfo *rel, /* * append_total_cost_compare - * list_qsort comparator for sorting append child paths by total_cost + * qsort comparator for sorting append child paths by total_cost descending + * + * For equal total costs, we fall back to comparing startup costs; if those + * are equal too, break ties using bms_compare on the paths' relids. + * (This is to avoid getting unpredictable results from qsort.) */ static int append_total_cost_compare(const void *a, const void *b) { Path *path1 = (Path *) lfirst(*(ListCell **) a); Path *path2 = (Path *) lfirst(*(ListCell **) b); + int cmp; - if (path1->total_cost > path2->total_cost) - return -1; - if (path1->total_cost < path2->total_cost) - return 1; - - return 0; + cmp = compare_path_costs(path1, path2, TOTAL_COST); + if (cmp != 0) + return -cmp; + return bms_compare(path1->parent->relids, path2->parent->relids); } /* * append_startup_cost_compare - * list_qsort comparator for sorting append child paths by startup_cost + * qsort comparator for sorting append child paths by startup_cost descending + * + * For equal startup costs, we fall back to comparing total costs; if those + * are equal too, break ties using bms_compare on the paths' relids. + * (This is to avoid getting unpredictable results from qsort.) 
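+ * (Editor's gloss: qsort() is not a stable sort, so without this total
+ * ordering, paths with exactly equal costs would come out in
+ * platform-dependent order.)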
*/ static int append_startup_cost_compare(const void *a, const void *b) { Path *path1 = (Path *) lfirst(*(ListCell **) a); Path *path2 = (Path *) lfirst(*(ListCell **) b); + int cmp; - if (path1->startup_cost > path2->startup_cost) - return -1; - if (path1->startup_cost < path2->startup_cost) - return 1; - - return 0; + cmp = compare_path_costs(path1, path2, STARTUP_COST); + if (cmp != 0) + return -cmp; + return bms_compare(path1->parent->relids, path2->parent->relids); } /* diff --git a/src/include/nodes/bitmapset.h b/src/include/nodes/bitmapset.h index 15397e9584..67e8920f65 100644 --- a/src/include/nodes/bitmapset.h +++ b/src/include/nodes/bitmapset.h @@ -65,6 +65,7 @@ typedef enum extern Bitmapset *bms_copy(const Bitmapset *a); extern bool bms_equal(const Bitmapset *a, const Bitmapset *b); +extern int bms_compare(const Bitmapset *a, const Bitmapset *b); extern Bitmapset *bms_make_singleton(int x); extern void bms_free(Bitmapset *a); diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out index 7824ca52ca..452494fbfa 100644 --- a/src/test/regress/expected/select_parallel.out +++ b/src/test/regress/expected/select_parallel.out @@ -21,12 +21,12 @@ explain (costs off) Workers Planned: 3 -> Partial Aggregate -> Parallel Append - -> Parallel Seq Scan on a_star - -> Parallel Seq Scan on b_star - -> Parallel Seq Scan on c_star -> Parallel Seq Scan on d_star - -> Parallel Seq Scan on e_star -> Parallel Seq Scan on f_star + -> Parallel Seq Scan on e_star + -> Parallel Seq Scan on b_star + -> Parallel Seq Scan on c_star + -> Parallel Seq Scan on a_star (11 rows) select round(avg(aa)), sum(aa) from a_star a1; @@ -49,10 +49,10 @@ explain (costs off) -> Parallel Append -> Seq Scan on d_star -> Seq Scan on c_star - -> Parallel Seq Scan on a_star - -> Parallel Seq Scan on b_star - -> Parallel Seq Scan on e_star -> Parallel Seq Scan on f_star + -> Parallel Seq Scan on e_star + -> Parallel Seq Scan on b_star + -> Parallel Seq Scan on a_star (11 rows) select round(avg(aa)), sum(aa) from a_star a2; From 3cb1b2a8804da8365fe17f687d96b720df4a583d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 9 Jan 2018 13:25:53 -0500 Subject: [PATCH 0807/1087] Rewrite list_qsort() to avoid trashing its input list. The initial implementation of list_qsort(), from commit ab7271677, re-used the ListCells of the input list while not touching the List header. This meant that anybody who still had a pointer to the original header would now be in possession of a corrupted list, a problem that seems sure to bite us eventually. One possible solution is to re-use the original List header as well, giving the function the semantics of update-in-place. However, that doesn't seem like a very good idea either given the way that the function is used in the planner: create_path functions aren't normally supposed to modify their input lists. It doesn't look like there would be a problem today, but it's not hard to foresee a time when modifying a list of Paths in-place could have side-effects on some other append path. On the whole, and in view of the likelihood that this function might be used in other contexts in the future, it seems best to get rid of the micro-optimization of re-using the input list cells. Just build a new list. 
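As an editor's illustration of the revised contract (a sketch, not part of
the patch; the comparator and the int-pointer payload are hypothetical, and a
backend environment is assumed), a caller may now keep using the input list
after sorting:

#include "postgres.h"
#include "nodes/pg_list.h"

/* comparator receives ListCell **, per the list_qsort() convention */
static int
cmp_int_ptrs(const void *a, const void *b)
{
	int			v1 = *(int *) lfirst(*(ListCell **) a);
	int			v2 = *(int *) lfirst(*(ListCell **) b);

	return (v1 > v2) - (v1 < v2);
}

static List *
sorted_copy(List *unsorted)
{
	List	   *sorted = list_qsort(unsorted, cmp_int_ptrs);

	/* "unsorted" is still a valid, unmodified list here */
	return sorted;
}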
Discussion: https://postgr.es/m/16912.1515449066@sss.pgh.pa.us --- src/backend/nodes/list.c | 56 ++++++++++++++++++++++++++++------------ 1 file changed, 40 insertions(+), 16 deletions(-) diff --git a/src/backend/nodes/list.c b/src/backend/nodes/list.c index 083538f70a..f3e1800708 100644 --- a/src/backend/nodes/list.c +++ b/src/backend/nodes/list.c @@ -1250,41 +1250,65 @@ list_copy_tail(const List *oldlist, int nskip) } /* - * Sort a list using qsort. A sorted list is built but the cells of the - * original list are re-used. The comparator function receives arguments of - * type ListCell ** + * Sort a list as though by qsort. + * + * A new list is built and returned. Like list_copy, this doesn't make + * fresh copies of any pointed-to data. + * + * The comparator function receives arguments of type ListCell **. */ List * list_qsort(const List *list, list_qsort_comparator cmp) { - ListCell *cell; - int i; int len = list_length(list); ListCell **list_arr; - List *new_list; + List *newlist; + ListCell *newlist_prev; + ListCell *cell; + int i; + /* Empty list is easy */ if (len == 0) return NIL; + /* Flatten list cells into an array, so we can use qsort */ + list_arr = (ListCell **) palloc(sizeof(ListCell *) * len); i = 0; - list_arr = palloc(sizeof(ListCell *) * len); foreach(cell, list) list_arr[i++] = cell; qsort(list_arr, len, sizeof(ListCell *), cmp); - new_list = (List *) palloc(sizeof(List)); - new_list->type = list->type; - new_list->length = len; - new_list->head = list_arr[0]; - new_list->tail = list_arr[len - 1]; + /* Construct new list (this code is much like list_copy) */ + newlist = new_list(list->type); + newlist->length = len; + + /* + * Copy over the data in the first cell; new_list() has already allocated + * the head cell itself + */ + newlist->head->data = list_arr[0]->data; + + newlist_prev = newlist->head; + for (i = 1; i < len; i++) + { + ListCell *newlist_cur; + + newlist_cur = (ListCell *) palloc(sizeof(*newlist_cur)); + newlist_cur->data = list_arr[i]->data; + newlist_prev->next = newlist_cur; - for (i = 0; i < len - 1; i++) - list_arr[i]->next = list_arr[i + 1]; + newlist_prev = newlist_cur; + } - list_arr[len - 1]->next = NULL; + newlist_prev->next = NULL; + newlist->tail = newlist_prev; + + /* Might as well free the workspace array */ pfree(list_arr); - return new_list; + + check_list_invariants(newlist); + return newlist; } /* From 0f7c49e85518dd846ccd0a044d49a922b9132983 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 16 Dec 2017 17:26:26 -0500 Subject: [PATCH 0808/1087] Update portal-related memory context names and API Rename PortalMemory to TopPortalContext, to avoid confusion with PortalContext and align naming with similar top-level memory contexts. Rename PortalData's "heap" field to portalContext. The "heap" naming seems quite antiquated and confusing. Also get rid of the PortalGetHeapMemory() macro and access the field directly, which we do for other portal fields, so this abstraction doesn't buy anything. 
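Restating the mechanical part of the change for clarity (editor's aside;
both forms appear verbatim in the diff below):

	/* before: through the access macro */
	oldContext = MemoryContextSwitchTo(PortalGetHeapMemory(portal));

	/* after: plain field access, like other PortalData fields */
	oldContext = MemoryContextSwitchTo(portal->portalContext);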
Reviewed-by: Andrew Dunstan Reviewed-by: Alvaro Herrera --- src/backend/commands/portalcmds.c | 10 +++++----- src/backend/commands/prepare.c | 2 +- src/backend/executor/spi.c | 6 +++--- src/backend/tcop/postgres.c | 2 +- src/backend/tcop/pquery.c | 16 +++++++-------- src/backend/utils/mmgr/portalmem.c | 32 +++++++++++++++--------------- src/include/utils/portal.h | 3 +-- 7 files changed, 35 insertions(+), 36 deletions(-) diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c index ff38e94cb1..e977154689 100644 --- a/src/backend/commands/portalcmds.c +++ b/src/backend/commands/portalcmds.c @@ -96,7 +96,7 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params, */ portal = CreatePortal(cstmt->portalname, false, false); - oldContext = MemoryContextSwitchTo(PortalGetHeapMemory(portal)); + oldContext = MemoryContextSwitchTo(portal->portalContext); plan = copyObject(plan); @@ -363,7 +363,7 @@ PersistHoldablePortal(Portal portal) ActivePortal = portal; if (portal->resowner) CurrentResourceOwner = portal->resowner; - PortalContext = PortalGetHeapMemory(portal); + PortalContext = portal->portalContext; MemoryContextSwitchTo(PortalContext); @@ -450,10 +450,10 @@ PersistHoldablePortal(Portal portal) PopActiveSnapshot(); /* - * We can now release any subsidiary memory of the portal's heap context; + * We can now release any subsidiary memory of the portal's context; * we'll never use it again. The executor already dropped its context, - * but this will clean up anything that glommed onto the portal's heap via + * but this will clean up anything that glommed onto the portal's context via * PortalContext. */ - MemoryContextDeleteChildren(PortalGetHeapMemory(portal)); + MemoryContextDeleteChildren(portal->portalContext); } diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c index 21cb855aeb..b945b1556a 100644 --- a/src/backend/commands/prepare.c +++ b/src/backend/commands/prepare.c @@ -239,7 +239,7 @@ ExecuteQuery(ExecuteStmt *stmt, IntoClause *intoClause, portal->visible = false; /* Copy the plan's saved query string into the portal's memory */ - query_string = MemoryContextStrdup(PortalGetHeapMemory(portal), + query_string = MemoryContextStrdup(portal->portalContext, entry->plansource->query_string); /* Replan if needed, and increment plan refcount for portal */ diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c index 4d9b51b947..995f67d266 100644 --- a/src/backend/executor/spi.c +++ b/src/backend/executor/spi.c @@ -1183,7 +1183,7 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan, } /* Copy the plan's query string into the portal */ - query_string = MemoryContextStrdup(PortalGetHeapMemory(portal), + query_string = MemoryContextStrdup(portal->portalContext, plansource->query_string); /* @@ -1213,7 +1213,7 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan, * will result in leaking our refcount on the plan, but it doesn't * matter because the plan is unsaved and hence transient anyway. 
*/ - oldcontext = MemoryContextSwitchTo(PortalGetHeapMemory(portal)); + oldcontext = MemoryContextSwitchTo(portal->portalContext); stmt_list = copyObject(stmt_list); MemoryContextSwitchTo(oldcontext); ReleaseCachedPlan(cplan, false); @@ -1311,7 +1311,7 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan, */ if (paramLI) { - oldcontext = MemoryContextSwitchTo(PortalGetHeapMemory(portal)); + oldcontext = MemoryContextSwitchTo(portal->portalContext); paramLI = copyParamList(paramLI); MemoryContextSwitchTo(oldcontext); } diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 4654a01eab..ddc3ec860a 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -1608,7 +1608,7 @@ exec_bind_message(StringInfo input_message) * don't want a failure to occur between GetCachedPlan and * PortalDefineQuery; that would result in leaking our plancache refcount. */ - oldContext = MemoryContextSwitchTo(PortalGetHeapMemory(portal)); + oldContext = MemoryContextSwitchTo(portal->portalContext); /* Copy the plan's query string into the portal */ query_string = pstrdup(psrc->query_string); diff --git a/src/backend/tcop/pquery.c b/src/backend/tcop/pquery.c index 9925712768..0420231864 100644 --- a/src/backend/tcop/pquery.c +++ b/src/backend/tcop/pquery.c @@ -466,9 +466,9 @@ PortalStart(Portal portal, ParamListInfo params, ActivePortal = portal; if (portal->resowner) CurrentResourceOwner = portal->resowner; - PortalContext = PortalGetHeapMemory(portal); + PortalContext = portal->portalContext; - oldContext = MemoryContextSwitchTo(PortalGetHeapMemory(portal)); + oldContext = MemoryContextSwitchTo(PortalContext); /* Must remember portal param list, if any */ portal->portalParams = params; @@ -634,7 +634,7 @@ PortalSetResultFormat(Portal portal, int nFormats, int16 *formats) return; natts = portal->tupDesc->natts; portal->formats = (int16 *) - MemoryContextAlloc(PortalGetHeapMemory(portal), + MemoryContextAlloc(portal->portalContext, natts * sizeof(int16)); if (nFormats > 1) { @@ -748,7 +748,7 @@ PortalRun(Portal portal, long count, bool isTopLevel, bool run_once, ActivePortal = portal; if (portal->resowner) CurrentResourceOwner = portal->resowner; - PortalContext = PortalGetHeapMemory(portal); + PortalContext = portal->portalContext; MemoryContextSwitchTo(PortalContext); @@ -1184,7 +1184,7 @@ PortalRunUtility(Portal portal, PlannedStmt *pstmt, completionTag); /* Some utility statements may change context on us */ - MemoryContextSwitchTo(PortalGetHeapMemory(portal)); + MemoryContextSwitchTo(portal->portalContext); /* * Some utility commands may pop the ActiveSnapshot stack from under us, @@ -1343,9 +1343,9 @@ PortalRunMulti(Portal portal, /* * Clear subsidiary contexts to recover temporary memory. */ - Assert(PortalGetHeapMemory(portal) == CurrentMemoryContext); + Assert(portal->portalContext == CurrentMemoryContext); - MemoryContextDeleteChildren(PortalGetHeapMemory(portal)); + MemoryContextDeleteChildren(portal->portalContext); } /* Pop the snapshot if we pushed one. 
*/ @@ -1424,7 +1424,7 @@ PortalRunFetch(Portal portal, ActivePortal = portal; if (portal->resowner) CurrentResourceOwner = portal->resowner; - PortalContext = PortalGetHeapMemory(portal); + PortalContext = portal->portalContext; oldContext = MemoryContextSwitchTo(PortalContext); diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c index c93c37d74a..9edc1ccc83 100644 --- a/src/backend/utils/mmgr/portalmem.c +++ b/src/backend/utils/mmgr/portalmem.c @@ -87,7 +87,7 @@ do { \ elog(WARNING, "trying to delete portal name that does not exist"); \ } while(0) -static MemoryContext PortalMemory = NULL; +static MemoryContext TopPortalContext = NULL; /* ---------------------------------------------------------------- @@ -104,10 +104,10 @@ EnablePortalManager(void) { HASHCTL ctl; - Assert(PortalMemory == NULL); + Assert(TopPortalContext == NULL); - PortalMemory = AllocSetContextCreate(TopMemoryContext, - "PortalMemory", + TopPortalContext = AllocSetContextCreate(TopMemoryContext, + "TopPortalContext", ALLOCSET_DEFAULT_SIZES); ctl.keysize = MAX_PORTALNAME_LEN; @@ -193,12 +193,12 @@ CreatePortal(const char *name, bool allowDup, bool dupSilent) } /* make new portal structure */ - portal = (Portal) MemoryContextAllocZero(PortalMemory, sizeof *portal); + portal = (Portal) MemoryContextAllocZero(TopPortalContext, sizeof *portal); - /* initialize portal heap context; typically it won't store much */ - portal->heap = AllocSetContextCreate(PortalMemory, - "PortalHeapMemory", - ALLOCSET_SMALL_SIZES); + /* initialize portal context; typically it won't store much */ + portal->portalContext = AllocSetContextCreate(TopPortalContext, + "PortalContext", + ALLOCSET_SMALL_SIZES); /* create a resource owner for the portal */ portal->resowner = ResourceOwnerCreate(CurTransactionResourceOwner, @@ -263,7 +263,7 @@ CreateNewPortal(void) * * If cplan is NULL, then it is the caller's responsibility to ensure that * the passed plan trees have adequate lifetime. Typically this is done by - * copying them into the portal's heap context. + * copying them into the portal's context. * * The caller is also responsible for ensuring that the passed prepStmtName * (if not NULL) and sourceText have adequate lifetime. @@ -331,10 +331,10 @@ PortalCreateHoldStore(Portal portal) /* * Create the memory context that is used for storage of the tuple set. - * Note this is NOT a child of the portal's heap memory. + * Note this is NOT a child of the portal's portalContext. */ portal->holdContext = - AllocSetContextCreate(PortalMemory, + AllocSetContextCreate(TopPortalContext, "PortalHoldContext", ALLOCSET_DEFAULT_SIZES); @@ -576,9 +576,9 @@ PortalDrop(Portal portal, bool isTopCommit) MemoryContextDelete(portal->holdContext); /* release subsidiary storage */ - MemoryContextDelete(PortalGetHeapMemory(portal)); + MemoryContextDelete(portal->portalContext); - /* release portal struct (it's in PortalMemory) */ + /* release portal struct (it's in TopPortalContext) */ pfree(portal); } @@ -806,7 +806,7 @@ AtAbort_Portals(void) * The cleanup hook was the last thing that might have needed data * there. */ - MemoryContextDeleteChildren(PortalGetHeapMemory(portal)); + MemoryContextDeleteChildren(portal->portalContext); } } @@ -1000,7 +1000,7 @@ AtSubAbort_Portals(SubTransactionId mySubid, * The cleanup hook was the last thing that might have needed data * there. 
*/ - MemoryContextDeleteChildren(PortalGetHeapMemory(portal)); + MemoryContextDeleteChildren(portal->portalContext); } } diff --git a/src/include/utils/portal.h b/src/include/utils/portal.h index 3e7820b51c..8cedc0ea60 100644 --- a/src/include/utils/portal.h +++ b/src/include/utils/portal.h @@ -116,7 +116,7 @@ typedef struct PortalData /* Bookkeeping data */ const char *name; /* portal's name */ const char *prepStmtName; /* source prepared statement (NULL if none) */ - MemoryContext heap; /* subsidiary memory for portal */ + MemoryContext portalContext;/* subsidiary memory for portal */ ResourceOwner resowner; /* resources owned by portal */ void (*cleanup) (Portal portal); /* cleanup hook */ @@ -202,7 +202,6 @@ typedef struct PortalData * Access macros for Portal ... use these in preference to field access. */ #define PortalGetQueryDesc(portal) ((portal)->queryDesc) -#define PortalGetHeapMemory(portal) ((portal)->heap) /* Prototypes for functions in utils/mmgr/portalmem.c */ From a77dd53f3089a3d6bf74966bfd3ab7e27537183b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 16 Dec 2017 17:43:41 -0500 Subject: [PATCH 0809/1087] Remove PortalGetQueryDesc() After having gotten rid of PortalGetHeapMemory(), there seems little reason to keep one Portal access macro around that offers no actual abstraction and isn't consistently used anyway. Reviewed-by: Andrew Dunstan Reviewed-by: Alvaro Herrera --- src/backend/commands/portalcmds.c | 4 ++-- src/backend/executor/execCurrent.c | 2 +- src/backend/tcop/pquery.c | 4 ++-- src/include/utils/portal.h | 5 ----- 4 files changed, 5 insertions(+), 10 deletions(-) diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c index e977154689..6ecaea1443 100644 --- a/src/backend/commands/portalcmds.c +++ b/src/backend/commands/portalcmds.c @@ -277,7 +277,7 @@ PortalCleanup(Portal portal) * since other mechanisms will take care of releasing executor resources, * and we can't be sure that ExecutorEnd itself wouldn't fail. */ - queryDesc = PortalGetQueryDesc(portal); + queryDesc = portal->queryDesc; if (queryDesc) { /* @@ -317,7 +317,7 @@ PortalCleanup(Portal portal) void PersistHoldablePortal(Portal portal) { - QueryDesc *queryDesc = PortalGetQueryDesc(portal); + QueryDesc *queryDesc = portal->queryDesc; Portal saveActivePortal; ResourceOwner saveResourceOwner; MemoryContext savePortalContext; diff --git a/src/backend/executor/execCurrent.c b/src/backend/executor/execCurrent.c index 6a8db582db..ce7d4ac592 100644 --- a/src/backend/executor/execCurrent.c +++ b/src/backend/executor/execCurrent.c @@ -75,7 +75,7 @@ execCurrentOf(CurrentOfExpr *cexpr, (errcode(ERRCODE_INVALID_CURSOR_STATE), errmsg("cursor \"%s\" is not a SELECT query", cursor_name))); - queryDesc = PortalGetQueryDesc(portal); + queryDesc = portal->queryDesc; if (queryDesc == NULL || queryDesc->estate == NULL) ereport(ERROR, (errcode(ERRCODE_INVALID_CURSOR_STATE), diff --git a/src/backend/tcop/pquery.c b/src/backend/tcop/pquery.c index 0420231864..66cc5c35c6 100644 --- a/src/backend/tcop/pquery.c +++ b/src/backend/tcop/pquery.c @@ -885,7 +885,7 @@ PortalRunSelect(Portal portal, * NB: queryDesc will be NULL if we are fetching from a held cursor or a * completed utility query; can't use it in that path. */ - queryDesc = PortalGetQueryDesc(portal); + queryDesc = portal->queryDesc; /* Caller messed up if we have neither a ready query nor held data. 
*/ Assert(queryDesc || portal->holdStore); @@ -1694,7 +1694,7 @@ DoPortalRewind(Portal portal) } /* Rewind executor, if active */ - queryDesc = PortalGetQueryDesc(portal); + queryDesc = portal->queryDesc; if (queryDesc) { PushActiveSnapshot(queryDesc->snapshot); diff --git a/src/include/utils/portal.h b/src/include/utils/portal.h index 8cedc0ea60..bc9d52e506 100644 --- a/src/include/utils/portal.h +++ b/src/include/utils/portal.h @@ -198,11 +198,6 @@ typedef struct PortalData */ #define PortalIsValid(p) PointerIsValid(p) -/* - * Access macros for Portal ... use these in preference to field access. - */ -#define PortalGetQueryDesc(portal) ((portal)->queryDesc) - /* Prototypes for functions in utils/mmgr/portalmem.c */ extern void EnablePortalManager(void); From 11b623dd0a2c385719ebbbdd42dd4ec395dcdc9d Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Tue, 9 Jan 2018 14:25:05 -0500 Subject: [PATCH 0810/1087] Implement TZH and TZM timestamp format patterns These are compatible with Oracle and required for the datetime template language for jsonpath in an upcoming patch. Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule. --- doc/src/sgml/func.sgml | 8 +++ src/backend/utils/adt/formatting.c | 69 ++++++++++++++++++++-- src/test/regress/expected/horology.out | 30 ++++++++++ src/test/regress/expected/timestamptz.out | 72 ++++++++++++++--------- src/test/regress/sql/horology.sql | 6 ++ src/test/regress/sql/timestamptz.sql | 20 ++++--- 6 files changed, 164 insertions(+), 41 deletions(-) diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 4dd9d029e6..2428434030 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -6073,6 +6073,14 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); lower case time-zone abbreviation (only supported in to_char) + + TZH + time-zone hours + + + TZM + time-zone minutes + OF time-zone offset from UTC diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c index 0e30810ae4..b8bd4caa3e 100644 --- a/src/backend/utils/adt/formatting.c +++ b/src/backend/utils/adt/formatting.c @@ -424,7 +424,10 @@ typedef struct j, us, yysz, /* is it YY or YYYY ? */ - clock; /* 12 or 24 hour clock? */ + clock, /* 12 or 24 hour clock? 
*/ + tzsign, /* +1, -1 or 0 if timezone info is absent */ + tzh, + tzm; } TmFromChar; #define ZERO_tmfc(_X) memset(_X, 0, sizeof(TmFromChar)) @@ -470,6 +473,7 @@ do { \ (_X)->tm_sec = (_X)->tm_year = (_X)->tm_min = (_X)->tm_wday = \ (_X)->tm_hour = (_X)->tm_yday = (_X)->tm_isdst = 0; \ (_X)->tm_mday = (_X)->tm_mon = 1; \ + (_X)->tm_zone = NULL; \ } while(0) #define ZERO_tmtc(_X) \ @@ -609,6 +613,8 @@ typedef enum DCH_RM, DCH_SSSS, DCH_SS, + DCH_TZH, + DCH_TZM, DCH_TZ, DCH_US, DCH_WW, @@ -756,7 +762,9 @@ static const KeyWord DCH_keywords[] = { {"RM", 2, DCH_RM, false, FROM_CHAR_DATE_GREGORIAN}, /* R */ {"SSSS", 4, DCH_SSSS, true, FROM_CHAR_DATE_NONE}, /* S */ {"SS", 2, DCH_SS, true, FROM_CHAR_DATE_NONE}, - {"TZ", 2, DCH_TZ, false, FROM_CHAR_DATE_NONE}, /* T */ + {"TZH", 3, DCH_TZH, false, FROM_CHAR_DATE_NONE}, /* T */ + {"TZM", 3, DCH_TZM, true, FROM_CHAR_DATE_NONE}, + {"TZ", 2, DCH_TZ, false, FROM_CHAR_DATE_NONE}, {"US", 2, DCH_US, true, FROM_CHAR_DATE_NONE}, /* U */ {"WW", 2, DCH_WW, true, FROM_CHAR_DATE_GREGORIAN}, /* W */ {"W", 1, DCH_W, true, FROM_CHAR_DATE_GREGORIAN}, @@ -879,7 +887,7 @@ static const int DCH_index[KeyWord_INDEX_SIZE] = { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, DCH_A_D, DCH_B_C, DCH_CC, DCH_DAY, -1, DCH_FX, -1, DCH_HH24, DCH_IDDD, DCH_J, -1, -1, DCH_MI, -1, DCH_OF, - DCH_P_M, DCH_Q, DCH_RM, DCH_SSSS, DCH_TZ, DCH_US, -1, DCH_WW, -1, DCH_Y_YYY, + DCH_P_M, DCH_Q, DCH_RM, DCH_SSSS, DCH_TZH, DCH_US, -1, DCH_WW, -1, DCH_Y_YYY, -1, -1, -1, -1, -1, -1, -1, DCH_a_d, DCH_b_c, DCH_cc, DCH_day, -1, DCH_fx, -1, DCH_hh24, DCH_iddd, DCH_j, -1, -1, DCH_mi, -1, -1, DCH_p_m, DCH_q, DCH_rm, DCH_ssss, DCH_tz, DCH_us, -1, DCH_ww, @@ -2519,6 +2527,19 @@ DCH_to_char(FormatNode *node, bool is_interval, TmToChar *in, char *out, Oid col s += strlen(s); } break; + case DCH_TZH: + INVALID_FOR_INTERVAL; + sprintf(s, "%c%02d", + (tm->tm_gmtoff >= 0) ? '+' : '-', + abs((int) tm->tm_gmtoff) / SECS_PER_HOUR); + s += strlen(s); + break; + case DCH_TZM: + INVALID_FOR_INTERVAL; + sprintf(s, "%02d", + (abs((int) tm->tm_gmtoff) % SECS_PER_HOUR) / SECS_PER_MINUTE); + s += strlen(s); + break; case DCH_OF: INVALID_FOR_INTERVAL; sprintf(s, "%c%0*d", @@ -3070,6 +3091,20 @@ DCH_from_char(FormatNode *node, char *in, TmFromChar *out) errmsg("formatting field \"%s\" is only supported in to_char", n->key->name))); break; + case DCH_TZH: + out->tzsign = *s == '-' ? -1 : +1; + + if (*s == '+' || *s == '-' || *s == ' ') + s++; + + from_char_parse_int_len(&out->tzh, &s, 2, n); + break; + case DCH_TZM: + /* assign positive timezone sign if TZH was not seen before */ + if (!out->tzsign) + out->tzsign = +1; + from_char_parse_int_len(&out->tzm, &s, 2, n); + break; case DCH_A_D: case DCH_B_C: case DCH_a_d: @@ -3536,7 +3571,16 @@ to_timestamp(PG_FUNCTION_ARGS) do_to_timestamp(date_txt, fmt, &tm, &fsec); - tz = DetermineTimeZoneOffset(&tm, session_timezone); + /* Use the specified time zone, if any. 
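+	 * (Editor's gloss: do_to_timestamp() below stores any TZH/TZM fields it
+	 * parsed as a "+HH:MM"-style string in tm_zone; DecodeTimezone() turns
+	 * that back into a numeric GMT offset.)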
*/ + if (tm.tm_zone) + { + int dterr = DecodeTimezone((char *) tm.tm_zone, &tz); + + if (dterr) + DateTimeParseError(dterr, text_to_cstring(date_txt), "timestamptz"); + } + else + tz = DetermineTimeZoneOffset(&tm, session_timezone); if (tm2timestamp(&tm, fsec, &tz, &result) != 0) ereport(ERROR, @@ -3858,6 +3902,23 @@ do_to_timestamp(text *date_txt, text *fmt, *fsec < INT64CONST(0) || *fsec >= USECS_PER_SEC) DateTimeParseError(DTERR_FIELD_OVERFLOW, date_str, "timestamp"); + /* Save parsed time-zone into tm->tm_zone if it was specified */ + if (tmfc.tzsign) + { + char *tz; + + if (tmfc.tzh < 0 || tmfc.tzh > MAX_TZDISP_HOUR || + tmfc.tzm < 0 || tmfc.tzm >= MINS_PER_HOUR) + DateTimeParseError(DTERR_TZDISP_OVERFLOW, date_str, "timestamp"); + + tz = palloc(7); + + snprintf(tz, 7, "%c%02d:%02d", + tmfc.tzsign > 0 ? '+' : '-', tmfc.tzh, tmfc.tzm); + + tm->tm_zone = tz; + } + DEBUG_TM(tm); pfree(date_str); diff --git a/src/test/regress/expected/horology.out b/src/test/regress/expected/horology.out index 7b3d058425..63e39198e6 100644 --- a/src/test/regress/expected/horology.out +++ b/src/test/regress/expected/horology.out @@ -2930,6 +2930,36 @@ SELECT to_timestamp('2011-12-18 11:38 PM', 'YYYY-MM-DD HH12:MI PM'); Sun Dec 18 23:38:00 2011 PST (1 row) +SELECT to_timestamp('2011-12-18 11:38 +05', 'YYYY-MM-DD HH12:MI TZH'); + to_timestamp +------------------------------ + Sat Dec 17 22:38:00 2011 PST +(1 row) + +SELECT to_timestamp('2011-12-18 11:38 -05', 'YYYY-MM-DD HH12:MI TZH'); + to_timestamp +------------------------------ + Sun Dec 18 08:38:00 2011 PST +(1 row) + +SELECT to_timestamp('2011-12-18 11:38 +05:20', 'YYYY-MM-DD HH12:MI TZH:TZM'); + to_timestamp +------------------------------ + Sat Dec 17 22:18:00 2011 PST +(1 row) + +SELECT to_timestamp('2011-12-18 11:38 -05:20', 'YYYY-MM-DD HH12:MI TZH:TZM'); + to_timestamp +------------------------------ + Sun Dec 18 08:58:00 2011 PST +(1 row) + +SELECT to_timestamp('2011-12-18 11:38 20', 'YYYY-MM-DD HH12:MI TZM'); + to_timestamp +------------------------------ + Sun Dec 18 03:18:00 2011 PST +(1 row) + -- -- Check handling of multiple spaces in format and/or input -- diff --git a/src/test/regress/expected/timestamptz.out b/src/test/regress/expected/timestamptz.out index 7226670962..a901fd909d 100644 --- a/src/test/regress/expected/timestamptz.out +++ b/src/test/regress/expected/timestamptz.out @@ -1699,54 +1699,68 @@ SELECT '' AS to_char_11, to_char(d1, 'FMIYYY FMIYY FMIY FMI FMIW FMIDDD FMID') | 2001 1 1 1 1 1 1 (66 rows) --- Check OF with various zone offsets, particularly fractional hours +-- Check OF, TZH, TZM with various zone offsets, particularly fractional hours SET timezone = '00:00'; -SELECT to_char(now(), 'OF'); - to_char ---------- - +00 +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +-----+--------- + +00 | +00:00 (1 row) SET timezone = '+02:00'; -SELECT to_char(now(), 'OF'); - to_char ---------- - -02 +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +-----+--------- + -02 | -02:00 (1 row) SET timezone = '-13:00'; -SELECT to_char(now(), 'OF'); - to_char ---------- - +13 +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +-----+--------- + +13 | +13:00 (1 row) SET timezone = '-00:30'; -SELECT to_char(now(), 'OF'); - to_char ---------- - +00:30 +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +--------+--------- + +00:30 | +00:30 (1 row) SET timezone = '00:30'; 
-SELECT to_char(now(), 'OF'); - to_char ---------- - -00:30 +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +--------+--------- + -00:30 | -00:30 (1 row) SET timezone = '-04:30'; -SELECT to_char(now(), 'OF'); - to_char ---------- - +04:30 +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +--------+--------- + +04:30 | +04:30 (1 row) SET timezone = '04:30'; -SELECT to_char(now(), 'OF'); - to_char ---------- - -04:30 +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +--------+--------- + -04:30 | -04:30 +(1 row) + +SET timezone = '-04:15'; +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +--------+--------- + +04:15 | +04:15 +(1 row) + +SET timezone = '04:15'; +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; + OF | TZH:TZM +--------+--------- + -04:15 | -04:15 (1 row) RESET timezone; diff --git a/src/test/regress/sql/horology.sql b/src/test/regress/sql/horology.sql index a7bc9dcfc4..ebb196a1cf 100644 --- a/src/test/regress/sql/horology.sql +++ b/src/test/regress/sql/horology.sql @@ -446,6 +446,12 @@ SELECT to_timestamp(' 20050302', 'YYYYMMDD'); SELECT to_timestamp('2011-12-18 11:38 AM', 'YYYY-MM-DD HH12:MI PM'); SELECT to_timestamp('2011-12-18 11:38 PM', 'YYYY-MM-DD HH12:MI PM'); +SELECT to_timestamp('2011-12-18 11:38 +05', 'YYYY-MM-DD HH12:MI TZH'); +SELECT to_timestamp('2011-12-18 11:38 -05', 'YYYY-MM-DD HH12:MI TZH'); +SELECT to_timestamp('2011-12-18 11:38 +05:20', 'YYYY-MM-DD HH12:MI TZH:TZM'); +SELECT to_timestamp('2011-12-18 11:38 -05:20', 'YYYY-MM-DD HH12:MI TZH:TZM'); +SELECT to_timestamp('2011-12-18 11:38 20', 'YYYY-MM-DD HH12:MI TZM'); + -- -- Check handling of multiple spaces in format and/or input -- diff --git a/src/test/regress/sql/timestamptz.sql b/src/test/regress/sql/timestamptz.sql index 97e57a2403..f17d153fcc 100644 --- a/src/test/regress/sql/timestamptz.sql +++ b/src/test/regress/sql/timestamptz.sql @@ -248,21 +248,25 @@ SELECT '' AS to_char_10, to_char(d1, 'IYYY IYY IY I IW IDDD ID') SELECT '' AS to_char_11, to_char(d1, 'FMIYYY FMIYY FMIY FMI FMIW FMIDDD FMID') FROM TIMESTAMPTZ_TBL; --- Check OF with various zone offsets, particularly fractional hours +-- Check OF, TZH, TZM with various zone offsets, particularly fractional hours SET timezone = '00:00'; -SELECT to_char(now(), 'OF'); +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; SET timezone = '+02:00'; -SELECT to_char(now(), 'OF'); +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; SET timezone = '-13:00'; -SELECT to_char(now(), 'OF'); +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; SET timezone = '-00:30'; -SELECT to_char(now(), 'OF'); +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; SET timezone = '00:30'; -SELECT to_char(now(), 'OF'); +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; SET timezone = '-04:30'; -SELECT to_char(now(), 'OF'); +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; SET timezone = '04:30'; -SELECT to_char(now(), 'OF'); +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; +SET timezone = '-04:15'; +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; +SET timezone = '04:15'; +SELECT to_char(now(), 'OF') as "OF", to_char(now(), 'TZH:TZM') as "TZH:TZM"; RESET timezone; CREATE TABLE 
TIMESTAMPTZ_TST (a int , b timestamptz); From 272c2ab9fd0a604e3200030b1ea26fd464c44935 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Tue, 9 Jan 2018 15:54:39 -0300 Subject: [PATCH 0811/1087] Change some bogus PageGetLSN calls to BufferGetLSNAtomic MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit As src/backend/access/transam/README says, PageGetLSN may only be called by processes holding either exclusive lock on buffer, or a shared lock on buffer plus buffer header lock. Therefore any place that only holds a shared buffer lock must use BufferGetLSNAtomic instead of PageGetLSN, which internally obtains buffer header lock prior to reading the LSN. A few callsites failed to comply with this rule. This was detected by running all tests under a new (not committed) assertion that verifies PageGetLSN locking contract. All but one of the callsites that failed the assertion are fixed by this patch. Remaining callsites were inspected manually and determined not to need any change. The exception (unfixed callsite) is in TestForOldSnapshot, which only has a Page argument, making it impossible to access the corresponding Buffer from it. Fixing that seems a much larger patch that will have to be done separately; and that's just as well, since it was only introduced in 9.6 and other bugs are much older. Some of these bugs are ancient; backpatch all the way back to 9.3. Authors: Jacob Champion, Asim Praveen, Ashwin Agrawal Reviewed-by: Michaël Paquier Discussion: https://postgr.es/m/CABAq_6GXgQDVu3u12mK9O5Xt5abBZWQ0V40LZCE+oUf95XyNFg@mail.gmail.com --- src/backend/access/gist/gist.c | 5 +++-- src/backend/access/gist/gistget.c | 4 ++-- src/backend/access/gist/gistvacuum.c | 2 +- src/backend/access/nbtree/nbtsearch.c | 2 +- src/backend/access/nbtree/nbtutils.c | 2 +- 5 files changed, 8 insertions(+), 7 deletions(-) diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c index aff969ead4..51c32e4afe 100644 --- a/src/backend/access/gist/gist.c +++ b/src/backend/access/gist/gist.c @@ -640,7 +640,8 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, GISTSTATE *giststate) } stack->page = (Page) BufferGetPage(stack->buffer); - stack->lsn = PageGetLSN(stack->page); + stack->lsn = xlocked ? + PageGetLSN(stack->page) : BufferGetLSNAtomic(stack->buffer); Assert(!RelationNeedsWAL(state.r) || !XLogRecPtrIsInvalid(stack->lsn)); /* @@ -890,7 +891,7 @@ gistFindPath(Relation r, BlockNumber child, OffsetNumber *downlinkoffnum) break; } - top->lsn = PageGetLSN(page); + top->lsn = BufferGetLSNAtomic(buffer); /* * If F_FOLLOW_RIGHT is set, the page to the right doesn't have a diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c index ca21cf7047..b30b931c3b 100644 --- a/src/backend/access/gist/gistget.c +++ b/src/backend/access/gist/gistget.c @@ -61,7 +61,7 @@ gistkillitems(IndexScanDesc scan) * read. killedItems could be not valid so LP_DEAD hints applying is not * safe. */ - if (PageGetLSN(page) != so->curPageLSN) + if (BufferGetLSNAtomic(buffer) != so->curPageLSN) { UnlockReleaseBuffer(buffer); so->numKilled = 0; /* reset counter */ @@ -384,7 +384,7 @@ gistScanPage(IndexScanDesc scan, GISTSearchItem *pageItem, double *myDistances, * safe to apply LP_DEAD hints to the page later. This allows us to drop * the pin for MVCC scans, which allows vacuum to avoid blocking. 
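	 * (Editor's gloss: only a shared buffer lock is held at this point, so
	 * per the rule quoted in the commit message the LSN must be read with
	 * BufferGetLSNAtomic(), which takes the buffer header lock internally.)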
 */
-	so->curPageLSN = PageGetLSN(page);
+	so->curPageLSN = BufferGetLSNAtomic(buffer);
 
 	/*
 	 * check all tuples on page
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 95a0c54f63..22181c6299 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -249,7 +249,7 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				ptr = (GistBDItem *) palloc(sizeof(GistBDItem));
 				ptr->blkno = ItemPointerGetBlockNumber(&(idxtuple->t_tid));
-				ptr->parentlsn = PageGetLSN(page);
+				ptr->parentlsn = BufferGetLSNAtomic(buffer);
 				ptr->next = stack->next;
 				stack->next = ptr;
 
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 847434fec6..51dca64e13 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -1224,7 +1224,7 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)
 	 * safe to apply LP_DEAD hints to the page later.  This allows us to drop
 	 * the pin for MVCC scans, which allows vacuum to avoid blocking.
 	 */
-	so->currPos.lsn = PageGetLSN(page);
+	so->currPos.lsn = BufferGetLSNAtomic(so->currPos.buf);
 
 	/*
 	 * we must save the page's right-link while scanning it; this tells us
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index c62e4ef782..752667c885 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -1772,7 +1772,7 @@ _bt_killitems(IndexScanDesc scan)
 			return;
 
 		page = BufferGetPage(buf);
-		if (PageGetLSN(page) == so->currPos.lsn)
+		if (BufferGetLSNAtomic(buf) == so->currPos.lsn)
 			so->currPos.buf = buf;
 		else
 		{

From 69c3936a1499b772a749ae629fc59b2d72722332 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Tue, 9 Jan 2018 13:25:38 -0800
Subject: [PATCH 0812/1087] Expression evaluation based aggregate transition
 invocation.

Previously, aggregate transition and combination functions were invoked
by special-case code in nodeAgg.c, evaluating input and filters
separately using the expression evaluation machinery. That turns out
not to be great for performance for several reasons:

- repeated expression evaluations have some cost
- the transition function invocations are poorly predicted, as commonly
  there are multiple aggregates in a query, resulting in the same
  call-stack invoking different functions.
- filter and input computation had to be done separately
- the special-case code made it hard to implement JITing of the whole
  transition function invocation

Address this by building one large expression that computes input,
evaluates filters, and invokes transition functions.

This leads to moderate speedups in queries bottlenecked by aggregate
computations, and enables large speedups for similar cases once JITing
is done.

There's potential for further improvement:
- It'd be nice if we could simplify the somewhat expensive
  aggstate->all_pergroups lookups.
- right now there's still an advance_transition_function invocation in
  nodeAgg.c, leading to some code duplication.
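As a rough illustration of what "one large expression" means here (an
editor's sketch under assumptions, not text from the patch): for an aggregate
such as max(x) FILTER (WHERE x > 0), whose transition function is strict with
a NULL initial value, the program built by ExecBuildAggTrans() below comes
out approximately as:

	EEOP_SCAN_FETCHSOME          /* deform the scan tuple far enough to reach x */
	...steps evaluating the filter qual "x > 0"...
	EEOP_JUMP_IF_NOT_TRUE        /* filter failed: skip this aggregate          */
	...steps evaluating the argument x into the transfn's fcinfo...
	EEOP_AGG_STRICT_INPUT_CHECK  /* strict transfn: skip on NULL input          */
	EEOP_AGG_INIT_TRANS          /* NULL initval: first input seeds transValue  */
	EEOP_AGG_STRICT_TRANS_CHECK  /* skip while transValue is still NULL         */
	EEOP_AGG_PLAIN_TRANS_BYVAL   /* invoke the transition function proper       */
	EEOP_DONE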
Author: Andres Freund Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de --- src/backend/executor/execExpr.c | 429 ++++++++++++- src/backend/executor/execExprInterp.c | 356 ++++++++++- src/backend/executor/nodeAgg.c | 864 +++----------------------- src/include/executor/execExpr.h | 76 ++- src/include/executor/executor.h | 4 +- src/include/executor/nodeAgg.h | 284 +++++++++ src/include/nodes/execnodes.h | 5 +- src/tools/pgindent/typedefs.list | 1 + 8 files changed, 1230 insertions(+), 789 deletions(-) diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index 16f908037c..794573803d 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -43,6 +43,7 @@ #include "optimizer/planner.h" #include "pgstat.h" #include "utils/builtins.h" +#include "utils/datum.h" #include "utils/lsyscache.h" #include "utils/typcache.h" @@ -61,6 +62,7 @@ static void ExecInitFunc(ExprEvalStep *scratch, Expr *node, List *args, Oid funcid, Oid inputcollid, ExprState *state); static void ExecInitExprSlots(ExprState *state, Node *node); +static void ExecPushExprSlots(ExprState *state, LastAttnumInfo *info); static bool get_last_attnums_walker(Node *node, LastAttnumInfo *info); static void ExecInitWholeRowVar(ExprEvalStep *scratch, Var *variable, ExprState *state); @@ -71,6 +73,10 @@ static bool isAssignmentIndirectionExpr(Expr *expr); static void ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest, ExprState *state, Datum *resv, bool *resnull); +static void ExecBuildAggTransCall(ExprState *state, AggState *aggstate, + ExprEvalStep *scratch, + FunctionCallInfo fcinfo, AggStatePerTrans pertrans, + int transno, int setno, int setoff, bool ishash); /* @@ -2250,30 +2256,42 @@ static void ExecInitExprSlots(ExprState *state, Node *node) { LastAttnumInfo info = {0, 0, 0}; - ExprEvalStep scratch; /* * Figure out which attributes we're going to need. */ get_last_attnums_walker(node, &info); + ExecPushExprSlots(state, &info); +} + +/* + * Add steps deforming the ExprState's inner/out/scan slots as much as + * indicated by info. This is useful when building an ExprState covering more + * than one expression. + */ +static void +ExecPushExprSlots(ExprState *state, LastAttnumInfo *info) +{ + ExprEvalStep scratch; + /* Emit steps as needed */ - if (info.last_inner > 0) + if (info->last_inner > 0) { scratch.opcode = EEOP_INNER_FETCHSOME; - scratch.d.fetch.last_var = info.last_inner; + scratch.d.fetch.last_var = info->last_inner; ExprEvalPushStep(state, &scratch); } - if (info.last_outer > 0) + if (info->last_outer > 0) { scratch.opcode = EEOP_OUTER_FETCHSOME; - scratch.d.fetch.last_var = info.last_outer; + scratch.d.fetch.last_var = info->last_outer; ExprEvalPushStep(state, &scratch); } - if (info.last_scan > 0) + if (info->last_scan > 0) { scratch.opcode = EEOP_SCAN_FETCHSOME; - scratch.d.fetch.last_var = info.last_scan; + scratch.d.fetch.last_var = info->last_scan; ExprEvalPushStep(state, &scratch); } } @@ -2775,3 +2793,400 @@ ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest, } } } + +/* + * Build transition/combine function invocations for all aggregate transition + * / combination function invocations in a grouping sets phase. This has to + * invoke all sort based transitions in a phase (if doSort is true), all hash + * based transitions (if doHash is true), or both (both true). 
+ * + * The resulting expression will, for each set of transition values, first + * check for filters, evaluate aggregate input, check that that input is not + * NULL for a strict transition function, and then finally invoke the + * transition for each of the concurrently computed grouping sets. + */ +ExprState * +ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase, + bool doSort, bool doHash) +{ + ExprState *state = makeNode(ExprState); + PlanState *parent = &aggstate->ss.ps; + ExprEvalStep scratch; + int transno = 0; + int setoff = 0; + bool isCombine = DO_AGGSPLIT_COMBINE(aggstate->aggsplit); + LastAttnumInfo deform = {0, 0, 0}; + + state->expr = (Expr *) aggstate; + state->parent = parent; + + scratch.resvalue = &state->resvalue; + scratch.resnull = &state->resnull; + + /* + * First figure out which slots, and how many columns from each, we're + * going to need. + */ + for (transno = 0; transno < aggstate->numtrans; transno++) + { + AggStatePerTrans pertrans = &aggstate->pertrans[transno]; + + get_last_attnums_walker((Node *) pertrans->aggref->aggdirectargs, + &deform); + get_last_attnums_walker((Node *) pertrans->aggref->args, + &deform); + get_last_attnums_walker((Node *) pertrans->aggref->aggorder, + &deform); + get_last_attnums_walker((Node *) pertrans->aggref->aggdistinct, + &deform); + get_last_attnums_walker((Node *) pertrans->aggref->aggfilter, + &deform); + } + ExecPushExprSlots(state, &deform); + + /* + * Emit instructions for each transition value / grouping set combination. + */ + for (transno = 0; transno < aggstate->numtrans; transno++) + { + AggStatePerTrans pertrans = &aggstate->pertrans[transno]; + int numInputs = pertrans->numInputs; + int argno; + int setno; + FunctionCallInfo trans_fcinfo = &pertrans->transfn_fcinfo; + ListCell *arg; + ListCell *bail; + List *adjust_bailout = NIL; + bool *strictnulls = NULL; + + /* + * If filter present, emit. Do so before evaluating the input, to + * avoid potentially unneeded computations, or even worse, unintended + * side-effects. When combining, all the necessary filtering has + * already been done. + */ + if (pertrans->aggref->aggfilter && !isCombine) + { + /* evaluate filter expression */ + ExecInitExprRec(pertrans->aggref->aggfilter, state, + &state->resvalue, &state->resnull); + /* and jump out if false */ + scratch.opcode = EEOP_JUMP_IF_NOT_TRUE; + scratch.d.jump.jumpdone = -1; /* adjust later */ + ExprEvalPushStep(state, &scratch); + adjust_bailout = lappend_int(adjust_bailout, + state->steps_len - 1); + } + + /* + * Evaluate arguments to aggregate/combine function. + */ + argno = 0; + if (isCombine) + { + /* + * Combining two aggregate transition values. Instead of directly + * coming from a tuple the input is a, potentially deserialized, + * transition value. + */ + TargetEntry *source_tle; + + Assert(pertrans->numSortCols == 0); + Assert(list_length(pertrans->aggref->args) == 1); + + strictnulls = trans_fcinfo->argnull + 1; + source_tle = (TargetEntry *) linitial(pertrans->aggref->args); + + /* + * deserialfn_oid will be set if we must deserialize the input + * state before calling the combine function. 
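+			 * (Editor's gloss: this is the partial-aggregation path; a
+			 * worker's serialized transition state arrives as the single
+			 * input column and is turned back into a transition value
+			 * before being combined.)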
+ */ + if (!OidIsValid(pertrans->deserialfn_oid)) + { + /* + * Start from 1, since the 0th arg will be the transition + * value + */ + ExecInitExprRec(source_tle->expr, state, + &trans_fcinfo->arg[argno + 1], + &trans_fcinfo->argnull[argno + 1]); + } + else + { + FunctionCallInfo ds_fcinfo = &pertrans->deserialfn_fcinfo; + + /* evaluate argument */ + ExecInitExprRec(source_tle->expr, state, + &ds_fcinfo->arg[0], + &ds_fcinfo->argnull[0]); + + /* Dummy second argument for type-safety reasons */ + ds_fcinfo->arg[1] = PointerGetDatum(NULL); + ds_fcinfo->argnull[1] = false; + + /* + * Don't call a strict deserialization function with NULL + * input + */ + if (pertrans->deserialfn.fn_strict) + scratch.opcode = EEOP_AGG_STRICT_DESERIALIZE; + else + scratch.opcode = EEOP_AGG_DESERIALIZE; + + scratch.d.agg_deserialize.aggstate = aggstate; + scratch.d.agg_deserialize.fcinfo_data = ds_fcinfo; + scratch.d.agg_deserialize.jumpnull = -1; /* adjust later */ + scratch.resvalue = &trans_fcinfo->arg[argno + 1]; + scratch.resnull = &trans_fcinfo->argnull[argno + 1]; + + ExprEvalPushStep(state, &scratch); + adjust_bailout = lappend_int(adjust_bailout, + state->steps_len - 1); + + /* restore normal settings of scratch fields */ + scratch.resvalue = &state->resvalue; + scratch.resnull = &state->resnull; + } + argno++; + } + else if (pertrans->numSortCols == 0) + { + /* + * Normal transition function without ORDER BY / DISTINCT. + */ + strictnulls = trans_fcinfo->argnull + 1; + + foreach(arg, pertrans->aggref->args) + { + TargetEntry *source_tle = (TargetEntry *) lfirst(arg); + + /* + * Start from 1, since the 0th arg will be the transition + * value + */ + ExecInitExprRec(source_tle->expr, state, + &trans_fcinfo->arg[argno + 1], + &trans_fcinfo->argnull[argno + 1]); + argno++; + } + } + else if (pertrans->numInputs == 1) + { + /* + * DISTINCT and/or ORDER BY case, with a single column sorted on. + */ + TargetEntry *source_tle = + (TargetEntry *) linitial(pertrans->aggref->args); + + Assert(list_length(pertrans->aggref->args) == 1); + + ExecInitExprRec(source_tle->expr, state, + &state->resvalue, + &state->resnull); + strictnulls = &state->resnull; + argno++; + } + else + { + /* + * DISTINCT and/or ORDER BY case, with multiple columns sorted on. + */ + Datum *values = pertrans->sortslot->tts_values; + bool *nulls = pertrans->sortslot->tts_isnull; + + strictnulls = nulls; + + foreach(arg, pertrans->aggref->args) + { + TargetEntry *source_tle = (TargetEntry *) lfirst(arg); + + ExecInitExprRec(source_tle->expr, state, + &values[argno], &nulls[argno]); + argno++; + } + } + Assert(numInputs == argno); + + /* + * For a strict transfn, nothing happens when there's a NULL input; we + * just keep the prior transValue. This is true for both plain and + * sorted/distinct aggregates. + */ + if (trans_fcinfo->flinfo->fn_strict && numInputs > 0) + { + scratch.opcode = EEOP_AGG_STRICT_INPUT_CHECK; + scratch.d.agg_strict_input_check.nulls = strictnulls; + scratch.d.agg_strict_input_check.jumpnull = -1; /* adjust later */ + scratch.d.agg_strict_input_check.nargs = numInputs; + ExprEvalPushStep(state, &scratch); + adjust_bailout = lappend_int(adjust_bailout, + state->steps_len - 1); + } + + /* + * Call transition function (once for each concurrently evaluated + * grouping set). Do so for both sort and hash based computations, as + * applicable. 
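+		 * (Editor's gloss: "setoff" indexes aggstate->all_pergroups, which
+		 * lays out the sort-based per-group states first and the hash-based
+		 * ones after them; that is why the MIXED-mode branch below starts
+		 * the hash setoff at maxsets.)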
+ */ + setoff = 0; + if (doSort) + { + int processGroupingSets = Max(phase->numsets, 1); + + for (setno = 0; setno < processGroupingSets; setno++) + { + ExecBuildAggTransCall(state, aggstate, &scratch, trans_fcinfo, + pertrans, transno, setno, setoff, false); + setoff++; + } + } + + if (doHash) + { + int numHashes = aggstate->num_hashes; + + /* in MIXED mode, there'll be preceding transition values */ + if (aggstate->aggstrategy != AGG_HASHED) + setoff = aggstate->maxsets; + else + setoff = 0; + + for (setno = 0; setno < numHashes; setno++) + { + ExecBuildAggTransCall(state, aggstate, &scratch, trans_fcinfo, + pertrans, transno, setno, setoff, true); + setoff++; + } + } + + /* adjust early bail out jump target(s) */ + foreach(bail, adjust_bailout) + { + ExprEvalStep *as = &state->steps[lfirst_int(bail)]; + + if (as->opcode == EEOP_JUMP_IF_NOT_TRUE) + { + Assert(as->d.jump.jumpdone == -1); + as->d.jump.jumpdone = state->steps_len; + } + else if (as->opcode == EEOP_AGG_STRICT_INPUT_CHECK) + { + Assert(as->d.agg_strict_input_check.jumpnull == -1); + as->d.agg_strict_input_check.jumpnull = state->steps_len; + } + else if (as->opcode == EEOP_AGG_STRICT_DESERIALIZE) + { + Assert(as->d.agg_deserialize.jumpnull == -1); + as->d.agg_deserialize.jumpnull = state->steps_len; + } + } + } + + scratch.resvalue = NULL; + scratch.resnull = NULL; + scratch.opcode = EEOP_DONE; + ExprEvalPushStep(state, &scratch); + + ExecReadyExpr(state); + + return state; +} + +/* + * Build transition/combine function invocation for a single transition + * value. This is separated from ExecBuildAggTrans() because there are + * multiple callsites (hash and sort in some grouping set cases). + */ +static void +ExecBuildAggTransCall(ExprState *state, AggState *aggstate, + ExprEvalStep *scratch, + FunctionCallInfo fcinfo, AggStatePerTrans pertrans, + int transno, int setno, int setoff, bool ishash) +{ + int adjust_init_jumpnull = -1; + int adjust_strict_jumpnull = -1; + ExprContext *aggcontext; + + if (ishash) + aggcontext = aggstate->hashcontext; + else + aggcontext = aggstate->aggcontexts[setno]; + + /* + * If the initial value for the transition state doesn't exist in the + * pg_aggregate table then we will let the first non-NULL value returned + * from the outer procNode become the initial value. (This is useful for + * aggregates like max() and min().) The noTransValue flag signals that we + * still need to do this. 
+ */ + if (pertrans->numSortCols == 0 && + fcinfo->flinfo->fn_strict && + pertrans->initValueIsNull) + { + scratch->opcode = EEOP_AGG_INIT_TRANS; + scratch->d.agg_init_trans.aggstate = aggstate; + scratch->d.agg_init_trans.pertrans = pertrans; + scratch->d.agg_init_trans.setno = setno; + scratch->d.agg_init_trans.setoff = setoff; + scratch->d.agg_init_trans.transno = transno; + scratch->d.agg_init_trans.aggcontext = aggcontext; + scratch->d.agg_init_trans.jumpnull = -1; /* adjust later */ + ExprEvalPushStep(state, scratch); + + /* see comment about jumping out below */ + adjust_init_jumpnull = state->steps_len - 1; + } + + if (pertrans->numSortCols == 0 && + fcinfo->flinfo->fn_strict) + { + scratch->opcode = EEOP_AGG_STRICT_TRANS_CHECK; + scratch->d.agg_strict_trans_check.aggstate = aggstate; + scratch->d.agg_strict_trans_check.setno = setno; + scratch->d.agg_strict_trans_check.setoff = setoff; + scratch->d.agg_strict_trans_check.transno = transno; + scratch->d.agg_strict_trans_check.jumpnull = -1; /* adjust later */ + ExprEvalPushStep(state, scratch); + + /* + * Note, we don't push into adjust_bailout here - those jump to the + * end of all transition value computations. Here a single transition + * value is NULL, so just skip processing the individual value. + */ + adjust_strict_jumpnull = state->steps_len - 1; + } + + /* invoke appropriate transition implementation */ + if (pertrans->numSortCols == 0 && pertrans->transtypeByVal) + scratch->opcode = EEOP_AGG_PLAIN_TRANS_BYVAL; + else if (pertrans->numSortCols == 0) + scratch->opcode = EEOP_AGG_PLAIN_TRANS; + else if (pertrans->numInputs == 1) + scratch->opcode = EEOP_AGG_ORDERED_TRANS_DATUM; + else + scratch->opcode = EEOP_AGG_ORDERED_TRANS_TUPLE; + + scratch->d.agg_trans.aggstate = aggstate; + scratch->d.agg_trans.pertrans = pertrans; + scratch->d.agg_trans.setno = setno; + scratch->d.agg_trans.setoff = setoff; + scratch->d.agg_trans.transno = transno; + scratch->d.agg_trans.aggcontext = aggcontext; + ExprEvalPushStep(state, scratch); + + /* adjust jumps so they jump till after transition invocation */ + if (adjust_init_jumpnull != -1) + { + ExprEvalStep *as = &state->steps[adjust_init_jumpnull]; + + Assert(as->d.agg_init_trans.jumpnull == -1); + as->d.agg_init_trans.jumpnull = state->steps_len; + } + if (adjust_strict_jumpnull != -1) + { + ExprEvalStep *as = &state->steps[adjust_strict_jumpnull]; + + Assert(as->d.agg_strict_trans_check.jumpnull == -1); + as->d.agg_strict_trans_check.jumpnull = state->steps_len; + } +} diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 2e88417265..f646fd9c51 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -62,12 +62,14 @@ #include "executor/execExpr.h" #include "executor/nodeSubplan.h" #include "funcapi.h" +#include "utils/memutils.h" #include "miscadmin.h" #include "nodes/nodeFuncs.h" #include "parser/parsetree.h" #include "pgstat.h" #include "utils/builtins.h" #include "utils/date.h" +#include "utils/datum.h" #include "utils/lsyscache.h" #include "utils/timestamp.h" #include "utils/typcache.h" @@ -99,11 +101,12 @@ typedef struct ExprEvalOpLookup { const void *opcode; - ExprEvalOp op; + ExprEvalOp op; } ExprEvalOpLookup; /* to make dispatch_table accessible outside ExecInterpExpr() */ static const void **dispatch_table = NULL; + /* jump target -> opcode lookup table */ static ExprEvalOpLookup reverse_dispatch_table[EEOP_LAST]; @@ -379,6 +382,15 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool 
*isnull)
 			&&CASE_EEOP_WINDOW_FUNC,
 			&&CASE_EEOP_SUBPLAN,
 			&&CASE_EEOP_ALTERNATIVE_SUBPLAN,
+			&&CASE_EEOP_AGG_STRICT_DESERIALIZE,
+			&&CASE_EEOP_AGG_DESERIALIZE,
+			&&CASE_EEOP_AGG_STRICT_INPUT_CHECK,
+			&&CASE_EEOP_AGG_INIT_TRANS,
+			&&CASE_EEOP_AGG_STRICT_TRANS_CHECK,
+			&&CASE_EEOP_AGG_PLAIN_TRANS_BYVAL,
+			&&CASE_EEOP_AGG_PLAIN_TRANS,
+			&&CASE_EEOP_AGG_ORDERED_TRANS_DATUM,
+			&&CASE_EEOP_AGG_ORDERED_TRANS_TUPLE,
 			&&CASE_EEOP_LAST
 		};
 
@@ -1514,6 +1526,235 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)
 			EEO_NEXT();
 		}
 
+		/* evaluate a strict aggregate deserialization function */
+		EEO_CASE(EEOP_AGG_STRICT_DESERIALIZE)
+		{
+			bool	   *argnull = op->d.agg_deserialize.fcinfo_data->argnull;
+
+			/* Don't call a strict deserialization function with NULL input */
+			if (argnull[0])
+				EEO_JUMP(op->d.agg_deserialize.jumpnull);
+
+			/* fallthrough */
+		}
+
+		/* evaluate aggregate deserialization function (non-strict portion) */
+		EEO_CASE(EEOP_AGG_DESERIALIZE)
+		{
+			FunctionCallInfo fcinfo = op->d.agg_deserialize.fcinfo_data;
+			AggState   *aggstate = op->d.agg_deserialize.aggstate;
+			MemoryContext oldContext;
+
+			/*
+			 * We run the deserialization functions in per-input-tuple memory
+			 * context.
+			 */
+			oldContext = MemoryContextSwitchTo(aggstate->tmpcontext->ecxt_per_tuple_memory);
+			fcinfo->isnull = false;
+			*op->resvalue = FunctionCallInvoke(fcinfo);
+			*op->resnull = fcinfo->isnull;
+			MemoryContextSwitchTo(oldContext);
+
+			EEO_NEXT();
+		}
+
+		/*
+		 * Check that a strict aggregate transition / combination function's
+		 * input is not NULL.
+		 */
+		EEO_CASE(EEOP_AGG_STRICT_INPUT_CHECK)
+		{
+			int			argno;
+			bool	   *nulls = op->d.agg_strict_input_check.nulls;
+			int			nargs = op->d.agg_strict_input_check.nargs;
+
+			for (argno = 0; argno < nargs; argno++)
+			{
+				if (nulls[argno])
+					EEO_JUMP(op->d.agg_strict_input_check.jumpnull);
+			}
+			EEO_NEXT();
+		}
+
+		/*
+		 * Initialize an aggregate's first value if necessary.
+		 */
+		EEO_CASE(EEOP_AGG_INIT_TRANS)
+		{
+			AggState   *aggstate;
+			AggStatePerGroup pergroup;
+
+			aggstate = op->d.agg_init_trans.aggstate;
+			pergroup = &aggstate->all_pergroups
+				[op->d.agg_init_trans.setoff]
+				[op->d.agg_init_trans.transno];
+
+			/* If transValue has not yet been initialized, do so now. */
+			if (pergroup->noTransValue)
+			{
+				AggStatePerTrans pertrans = op->d.agg_init_trans.pertrans;
+
+				aggstate->curaggcontext = op->d.agg_init_trans.aggcontext;
+				aggstate->current_set = op->d.agg_init_trans.setno;
+
+				ExecAggInitGroup(aggstate, pertrans, pergroup);
+
+				/* copied trans value from input, done this round */
+				EEO_JUMP(op->d.agg_init_trans.jumpnull);
+			}
+
+			EEO_NEXT();
+		}
+
+		/* check that a strict aggregate's transition value isn't NULL */
+		EEO_CASE(EEOP_AGG_STRICT_TRANS_CHECK)
+		{
+			AggState   *aggstate;
+			AggStatePerGroup pergroup;
+
+			aggstate = op->d.agg_strict_trans_check.aggstate;
+			pergroup = &aggstate->all_pergroups
+				[op->d.agg_strict_trans_check.setoff]
+				[op->d.agg_strict_trans_check.transno];
+
+			if (unlikely(pergroup->transValueIsNull))
+				EEO_JUMP(op->d.agg_strict_trans_check.jumpnull);
+
+			EEO_NEXT();
+		}
+
+		/*
+		 * Evaluate aggregate transition / combine function that has a
+		 * by-value transition type. That's a separate case from the
+		 * by-reference implementation because it's a bit simpler.
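
The jumpnull targets consumed by EEO_JUMP above are filled in by ExecBuildAggTrans with classic one-pass backpatching: a step needing a forward jump is emitted with jumpnull = -1 and its index remembered, and once the transition invocation has been emitted the recorded steps are patched to point just past it. A minimal self-contained sketch of that pattern; Step, Program and emit are simplified stand-ins, not PostgreSQL's ExprEvalStep machinery:

    #include <assert.h>
    #include <stdio.h>

    typedef struct Step
    {
        int     opcode;
        int     jumpnull;           /* -1 until backpatched */
    } Step;

    typedef struct Program
    {
        Step    steps[16];
        int     steps_len;
    } Program;

    /* append a step, returning its index for later backpatching */
    static int
    emit(Program *p, int opcode)
    {
        p->steps[p->steps_len].opcode = opcode;
        p->steps[p->steps_len].jumpnull = -1;   /* adjust later */
        return p->steps_len++;
    }

    int
    main(void)
    {
        Program p = {{{0, 0}}, 0};
        int     fixup = emit(&p, 1);    /* e.g. INIT_TRANS, target unknown */

        emit(&p, 2);                    /* e.g. the transition invocation */

        /* all steps emitted: the jump target is "just past the end" */
        assert(p.steps[fixup].jumpnull == -1);
        p.steps[fixup].jumpnull = p.steps_len;

        printf("step %d skips to step %d\n", fixup, p.steps[fixup].jumpnull);
        return 0;
    }

The same idea appears twice in this patch, once for EEOP_AGG_INIT_TRANS and once for EEOP_AGG_STRICT_TRANS_CHECK, both of which jump past the transition invocation.
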
+ */ + EEO_CASE(EEOP_AGG_PLAIN_TRANS_BYVAL) + { + AggState *aggstate; + AggStatePerTrans pertrans; + AggStatePerGroup pergroup; + FunctionCallInfo fcinfo; + MemoryContext oldContext; + Datum newVal; + + aggstate = op->d.agg_trans.aggstate; + pertrans = op->d.agg_trans.pertrans; + + pergroup = &aggstate->all_pergroups + [op->d.agg_trans.setoff] + [op->d.agg_trans.transno]; + + Assert(pertrans->transtypeByVal); + + fcinfo = &pertrans->transfn_fcinfo; + + /* cf. select_current_set() */ + aggstate->curaggcontext = op->d.agg_trans.aggcontext; + aggstate->current_set = op->d.agg_trans.setno; + + /* set up aggstate->curpertrans for AggGetAggref() */ + aggstate->curpertrans = pertrans; + + /* invoke transition function in per-tuple context */ + oldContext = MemoryContextSwitchTo(aggstate->tmpcontext->ecxt_per_tuple_memory); + + fcinfo->arg[0] = pergroup->transValue; + fcinfo->argnull[0] = pergroup->transValueIsNull; + fcinfo->isnull = false; /* just in case transfn doesn't set it */ + + newVal = FunctionCallInvoke(fcinfo); + + pergroup->transValue = newVal; + pergroup->transValueIsNull = fcinfo->isnull; + + MemoryContextSwitchTo(oldContext); + + EEO_NEXT(); + } + + /* + * Evaluate aggregate transition / combine function that has a + * by-reference transition type. + * + * Could optimize a bit further by splitting off by-reference + * fixed-length types, but currently that doesn't seem worth it. + */ + EEO_CASE(EEOP_AGG_PLAIN_TRANS) + { + AggState *aggstate; + AggStatePerTrans pertrans; + AggStatePerGroup pergroup; + FunctionCallInfo fcinfo; + MemoryContext oldContext; + Datum newVal; + + aggstate = op->d.agg_trans.aggstate; + pertrans = op->d.agg_trans.pertrans; + + pergroup = &aggstate->all_pergroups + [op->d.agg_trans.setoff] + [op->d.agg_trans.transno]; + + Assert(!pertrans->transtypeByVal); + + fcinfo = &pertrans->transfn_fcinfo; + + /* cf. select_current_set() */ + aggstate->curaggcontext = op->d.agg_trans.aggcontext; + aggstate->current_set = op->d.agg_trans.setno; + + /* set up aggstate->curpertrans for AggGetAggref() */ + aggstate->curpertrans = pertrans; + + /* invoke transition function in per-tuple context */ + oldContext = MemoryContextSwitchTo(aggstate->tmpcontext->ecxt_per_tuple_memory); + + fcinfo->arg[0] = pergroup->transValue; + fcinfo->argnull[0] = pergroup->transValueIsNull; + fcinfo->isnull = false; /* just in case transfn doesn't set it */ + + newVal = FunctionCallInvoke(fcinfo); + + /* + * For pass-by-ref datatype, must copy the new value into + * aggcontext and free the prior transValue. But if transfn + * returned a pointer to its first input, we don't need to do + * anything. Also, if transfn returned a pointer to a R/W + * expanded object that is already a child of the aggcontext, + * assume we can adopt that value without copying it. 
+ */ + if (DatumGetPointer(newVal) != DatumGetPointer(pergroup->transValue)) + newVal = ExecAggTransReparent(aggstate, pertrans, + newVal, fcinfo->isnull, + pergroup->transValue, + pergroup->transValueIsNull); + + pergroup->transValue = newVal; + pergroup->transValueIsNull = fcinfo->isnull; + + MemoryContextSwitchTo(oldContext); + + EEO_NEXT(); + } + + /* process single-column ordered aggregate datum */ + EEO_CASE(EEOP_AGG_ORDERED_TRANS_DATUM) + { + /* too complex for an inline implementation */ + ExecEvalAggOrderedTransDatum(state, op, econtext); + + EEO_NEXT(); + } + + /* process multi-column ordered aggregate tuple */ + EEO_CASE(EEOP_AGG_ORDERED_TRANS_TUPLE) + { + /* too complex for an inline implementation */ + ExecEvalAggOrderedTransTuple(state, op, econtext); + + EEO_NEXT(); + } + EEO_CASE(EEOP_LAST) { /* unreachable */ @@ -1536,8 +1777,8 @@ Datum ExecInterpExprStillValid(ExprState *state, ExprContext *econtext, bool *isNull) { /* - * First time through, check whether attribute matches Var. Might - * not be ok anymore, due to schema changes. + * First time through, check whether attribute matches Var. Might not be + * ok anymore, due to schema changes. */ CheckExprStillValid(state, econtext); @@ -1555,7 +1796,7 @@ ExecInterpExprStillValid(ExprState *state, ExprContext *econtext, bool *isNull) void CheckExprStillValid(ExprState *state, ExprContext *econtext) { - int i = 0; + int i = 0; TupleTableSlot *innerslot; TupleTableSlot *outerslot; TupleTableSlot *scanslot; @@ -1564,9 +1805,9 @@ CheckExprStillValid(ExprState *state, ExprContext *econtext) outerslot = econtext->ecxt_outertuple; scanslot = econtext->ecxt_scantuple; - for (i = 0; i < state->steps_len;i++) + for (i = 0; i < state->steps_len; i++) { - ExprEvalStep *op = &state->steps[i]; + ExprEvalStep *op = &state->steps[i]; switch (ExecEvalStepOp(state, op)) { @@ -1859,7 +2100,7 @@ ExecJustApplyFuncToCase(ExprState *state, ExprContext *econtext, bool *isnull) * ExecEvalStepOp() in the threaded dispatch case. */ static int -dispatch_compare_ptr(const void* a, const void *b) +dispatch_compare_ptr(const void *a, const void *b) { const ExprEvalOpLookup *la = (const ExprEvalOpLookup *) a; const ExprEvalOpLookup *lb = (const ExprEvalOpLookup *) b; @@ -1896,7 +2137,7 @@ ExecInitInterpreter(void) /* make it bsearch()able */ qsort(reverse_dispatch_table, - EEOP_LAST /* nmembers */, + EEOP_LAST /* nmembers */ , sizeof(ExprEvalOpLookup), dispatch_compare_ptr); } @@ -1918,13 +2159,13 @@ ExecEvalStepOp(ExprState *state, ExprEvalStep *op) ExprEvalOpLookup key; ExprEvalOpLookup *res; - key.opcode = (void *) op->opcode; + key.opcode = (void *) op->opcode; res = bsearch(&key, reverse_dispatch_table, - EEOP_LAST /* nmembers */, + EEOP_LAST /* nmembers */ , sizeof(ExprEvalOpLookup), dispatch_compare_ptr); - Assert(res); /* unknown ops shouldn't get looked up */ + Assert(res); /* unknown ops shouldn't get looked up */ return res->op; } #endif @@ -3691,3 +3932,96 @@ ExecEvalWholeRowVar(ExprState *state, ExprEvalStep *op, ExprContext *econtext) *op->resvalue = PointerGetDatum(dtuple); *op->resnull = false; } + +/* + * Transition value has not been initialized. This is the first non-NULL input + * value for a group. We use it as the initial value for transValue. + */ +void +ExecAggInitGroup(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroup) +{ + FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo; + MemoryContext oldContext; + + /* + * We must copy the datum into aggcontext if it is pass-by-ref. 
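
For the threaded-dispatch build, ExecEvalStepOp above recovers a logical opcode from a computed-goto label address by keeping a table sorted on the label pointers and probing it with bsearch, exactly as dispatch_compare_ptr and ExecInitInterpreter set up. A minimal sketch of that sorted-pointer reverse lookup; OpLookup is a simplified stand-in for ExprEvalOpLookup, and addresses within a dummy array stand in for label addresses:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct OpLookup
    {
        const void *addr;           /* computed-goto label address */
        int         op;             /* logical opcode */
    } OpLookup;

    static int
    compare_ptr(const void *a, const void *b)
    {
        const char *pa = (const char *) ((const OpLookup *) a)->addr;
        const char *pb = (const char *) ((const OpLookup *) b)->addr;

        if (pa < pb)
            return -1;
        if (pa > pb)
            return 1;
        return 0;
    }

    int
    main(void)
    {
        static int  dummy[3];       /* pretend these are label addresses */
        OpLookup    table[3] = {
            {&dummy[2], 2}, {&dummy[0], 0}, {&dummy[1], 1}
        };
        OpLookup    key = {&dummy[1], -1};
        OpLookup   *res;

        /* sort once at startup, then cheap O(log n) lookups afterwards */
        qsort(table, 3, sizeof(OpLookup), compare_ptr);
        res = bsearch(&key, table, 3, sizeof(OpLookup), compare_ptr);

        printf("address maps back to opcode %d\n", res ? res->op : -1);
        return 0;
    }

Sorting once up front keeps every later reverse lookup cheap, which suits a table that is built exactly once per backend.
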
We do not + * need to pfree the old transValue, since it's NULL. (We already checked + * that the agg's input type is binary-compatible with its transtype, so + * straight copy here is OK.) + */ + oldContext = MemoryContextSwitchTo( + aggstate->curaggcontext->ecxt_per_tuple_memory); + pergroup->transValue = datumCopy(fcinfo->arg[1], + pertrans->transtypeByVal, + pertrans->transtypeLen); + pergroup->transValueIsNull = false; + pergroup->noTransValue = false; + MemoryContextSwitchTo(oldContext); +} + +/* + * Ensure that the current transition value is a child of the aggcontext, + * rather than the per-tuple context. + * + * NB: This can change the current memory context. + */ +Datum +ExecAggTransReparent(AggState *aggstate, AggStatePerTrans pertrans, + Datum newValue, bool newValueIsNull, + Datum oldValue, bool oldValueIsNull) +{ + if (!newValueIsNull) + { + MemoryContextSwitchTo(aggstate->curaggcontext->ecxt_per_tuple_memory); + if (DatumIsReadWriteExpandedObject(newValue, + false, + pertrans->transtypeLen) && + MemoryContextGetParent(DatumGetEOHP(newValue)->eoh_context) == CurrentMemoryContext) + /* do nothing */ ; + else + newValue = datumCopy(newValue, + pertrans->transtypeByVal, + pertrans->transtypeLen); + } + if (!oldValueIsNull) + { + if (DatumIsReadWriteExpandedObject(oldValue, + false, + pertrans->transtypeLen)) + DeleteExpandedObject(oldValue); + else + pfree(DatumGetPointer(oldValue)); + } + + return newValue; +} + +/* + * Invoke ordered transition function, with a datum argument. + */ +void +ExecEvalAggOrderedTransDatum(ExprState *state, ExprEvalStep *op, + ExprContext *econtext) +{ + AggStatePerTrans pertrans = op->d.agg_trans.pertrans; + int setno = op->d.agg_trans.setno; + + tuplesort_putdatum(pertrans->sortstates[setno], + *op->resvalue, *op->resnull); +} + +/* + * Invoke ordered transition function, with a tuple argument. + */ +void +ExecEvalAggOrderedTransTuple(ExprState *state, ExprEvalStep *op, + ExprContext *econtext) +{ + AggStatePerTrans pertrans = op->d.agg_trans.pertrans; + int setno = op->d.agg_trans.setno; + + ExecClearTuple(pertrans->sortslot); + pertrans->sortslot->tts_nvalid = pertrans->numInputs; + ExecStoreVirtualTuple(pertrans->sortslot); + tuplesort_puttupleslot(pertrans->sortstates[setno], pertrans->sortslot); +} diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 46ee880415..061acad80f 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -90,7 +90,7 @@ * but in the aggregate case we know the left input is either the initial * transition value or a previous function result, and in either case its * value need not be preserved. See int8inc() for an example. Notice that - * advance_transition_function() is coded to avoid a data copy step when + * the EEOP_AGG_PLAIN_TRANS step is coded to avoid a data copy step when * the previous transition value pointer is returned. It is also possible * to avoid repeated data copying when the transition value is an expanded * object: to do that, the transition function must take care to return @@ -194,6 +194,16 @@ * transition values. hashcontext is the single context created to support * all hash tables. * + * Transition / Combine function invocation: + * + * For performance reasons transition functions, including combine + * functions, aren't invoked one-by-one from nodeAgg.c after computing + * arguments using the expression evaluation engine. 
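
ExecAggInitGroup above implements the first-input adoption rule documented for AggStatePerGroupData: with a strict transition function and a NULL initial value, the first non-NULL input becomes the state without calling the transition function, and a NULL later returned by the transition function is kept rather than replaced. A minimal sketch of that three-state logic; PerGroup is a simplified stand-in and plain addition stands in for a real transition function:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct PerGroup
    {
        long        transValue;
        bool        transValueIsNull;
        bool        noTransValue;   /* true only until first non-NULL input */
    } PerGroup;

    /* strict transition: never called with a NULL state or NULL input */
    static void
    advance_strict(PerGroup *g, long input, bool inputIsNull)
    {
        if (inputIsNull)
            return;                 /* strict transfn is simply skipped */

        if (g->noTransValue)
        {
            /* first non-NULL input becomes the state, transfn not called */
            g->transValue = input;
            g->transValueIsNull = false;
            g->noTransValue = false;
            return;
        }

        if (g->transValueIsNull)
            return;                 /* transfn returned NULL earlier; keep it */

        g->transValue += input;     /* the "transition function" of this sketch */
    }

    int
    main(void)
    {
        PerGroup    g = {0, true, true};

        advance_strict(&g, 0, true);    /* NULL input: ignored */
        advance_strict(&g, 3, false);   /* adopted as initial state */
        advance_strict(&g, 5, false);   /* real transition: 3 + 5 */

        printf("state = %ld\n", g.transValue);  /* prints 8 */
        return 0;
    }
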
Instead
+ * ExecBuildAggTrans() builds one large expression that does both argument
+ * evaluation and transition function invocation. That avoids performance
+ * issues due to repeated uses of expression evaluation, complications due
+ * to filter expressions having to be evaluated early, and makes it possible
+ * to JIT compile the entire expression into one native function.
 *
 * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
@@ -229,305 +239,6 @@
 #include "utils/datum.h"
 
 
-/*
- * AggStatePerTransData - per aggregate state value information
- *
- * Working state for updating the aggregate's state value, by calling the
- * transition function with an input row. This struct does not store the
- * information needed to produce the final aggregate result from the transition
- * state, that's stored in AggStatePerAggData instead. This separation allows
- * multiple aggregate results to be produced from a single state value.
- */
-typedef struct AggStatePerTransData
-{
-	/*
-	 * These values are set up during ExecInitAgg() and do not change
-	 * thereafter:
-	 */
-
-	/*
-	 * Link to an Aggref expr this state value is for.
-	 *
-	 * There can be multiple Aggref's sharing the same state value, so long as
-	 * the inputs and transition functions are identical and the final
-	 * functions are not read-write. This points to the first one of them.
-	 */
-	Aggref	   *aggref;
-
-	/*
-	 * Is this state value actually being shared by more than one Aggref?
-	 */
-	bool		aggshared;
-
-	/*
-	 * Number of aggregated input columns. This includes ORDER BY expressions
-	 * in both the plain-agg and ordered-set cases. Ordered-set direct args
-	 * are not counted, though.
-	 */
-	int			numInputs;
-
-	/*
-	 * Number of aggregated input columns to pass to the transfn. This
-	 * includes the ORDER BY columns for ordered-set aggs, but not for plain
-	 * aggs. (This doesn't count the transition state value!)
-	 */
-	int			numTransInputs;
-
-	/*
-	 * At each input row, we perform a single ExecProject call to evaluate all
-	 * argument expressions that will certainly be needed at this row; that
-	 * includes this aggregate's filter expression if it has one, or its
-	 * regular argument expressions (including any ORDER BY columns) if it
-	 * doesn't. inputoff is the starting index of this aggregate's required
-	 * expressions in the resulting tuple.
-	 */
-	int			inputoff;
-
-	/* Oid of the state transition or combine function */
-	Oid			transfn_oid;
-
-	/* Oid of the serialization function or InvalidOid */
-	Oid			serialfn_oid;
-
-	/* Oid of the deserialization function or InvalidOid */
-	Oid			deserialfn_oid;
-
-	/* Oid of state value's datatype */
-	Oid			aggtranstype;
-
-	/*
-	 * fmgr lookup data for transition function or combine function. Note in
-	 * particular that the fn_strict flag is kept here.
- */ - FmgrInfo transfn; - - /* fmgr lookup data for serialization function */ - FmgrInfo serialfn; - - /* fmgr lookup data for deserialization function */ - FmgrInfo deserialfn; - - /* Input collation derived for aggregate */ - Oid aggCollation; - - /* number of sorting columns */ - int numSortCols; - - /* number of sorting columns to consider in DISTINCT comparisons */ - /* (this is either zero or the same as numSortCols) */ - int numDistinctCols; - - /* deconstructed sorting information (arrays of length numSortCols) */ - AttrNumber *sortColIdx; - Oid *sortOperators; - Oid *sortCollations; - bool *sortNullsFirst; - - /* - * fmgr lookup data for input columns' equality operators --- only - * set/used when aggregate has DISTINCT flag. Note that these are in - * order of sort column index, not parameter index. - */ - FmgrInfo *equalfns; /* array of length numDistinctCols */ - - /* - * initial value from pg_aggregate entry - */ - Datum initValue; - bool initValueIsNull; - - /* - * We need the len and byval info for the agg's input and transition data - * types in order to know how to copy/delete values. - * - * Note that the info for the input type is used only when handling - * DISTINCT aggs with just one argument, so there is only one input type. - */ - int16 inputtypeLen, - transtypeLen; - bool inputtypeByVal, - transtypeByVal; - - /* - * Stuff for evaluation of aggregate inputs, when they must be evaluated - * separately because there's a FILTER expression. In such cases we will - * create a sortslot and the result will be stored there, whether or not - * we're actually sorting. - */ - ProjectionInfo *evalproj; /* projection machinery */ - - /* - * Slots for holding the evaluated input arguments. These are set up - * during ExecInitAgg() and then used for each input row requiring either - * FILTER or ORDER BY/DISTINCT processing. - */ - TupleTableSlot *sortslot; /* current input tuple */ - TupleTableSlot *uniqslot; /* used for multi-column DISTINCT */ - TupleDesc sortdesc; /* descriptor of input tuples */ - - /* - * These values are working state that is initialized at the start of an - * input tuple group and updated for each input tuple. - * - * For a simple (non DISTINCT/ORDER BY) aggregate, we just feed the input - * values straight to the transition function. If it's DISTINCT or - * requires ORDER BY, we pass the input values into a Tuplesort object; - * then at completion of the input tuple group, we scan the sorted values, - * eliminate duplicates if needed, and run the transition function on the - * rest. - * - * We need a separate tuplesort for each grouping set. - */ - - Tuplesortstate **sortstates; /* sort objects, if DISTINCT or ORDER BY */ - - /* - * This field is a pre-initialized FunctionCallInfo struct used for - * calling this aggregate's transfn. We save a few cycles per row by not - * re-initializing the unchanging fields; which isn't much, but it seems - * worth the extra space consumption. - */ - FunctionCallInfoData transfn_fcinfo; - - /* Likewise for serialization and deserialization functions */ - FunctionCallInfoData serialfn_fcinfo; - - FunctionCallInfoData deserialfn_fcinfo; -} AggStatePerTransData; - -/* - * AggStatePerAggData - per-aggregate information - * - * This contains the information needed to call the final function, to produce - * a final aggregate result from the state value. If there are multiple - * identical Aggrefs in the query, they can all share the same per-agg data. 
- * - * These values are set up during ExecInitAgg() and do not change thereafter. - */ -typedef struct AggStatePerAggData -{ - /* - * Link to an Aggref expr this state value is for. - * - * There can be multiple identical Aggref's sharing the same per-agg. This - * points to the first one of them. - */ - Aggref *aggref; - - /* index to the state value which this agg should use */ - int transno; - - /* Optional Oid of final function (may be InvalidOid) */ - Oid finalfn_oid; - - /* - * fmgr lookup data for final function --- only valid when finalfn_oid is - * not InvalidOid. - */ - FmgrInfo finalfn; - - /* - * Number of arguments to pass to the finalfn. This is always at least 1 - * (the transition state value) plus any ordered-set direct args. If the - * finalfn wants extra args then we pass nulls corresponding to the - * aggregated input columns. - */ - int numFinalArgs; - - /* ExprStates for any direct-argument expressions */ - List *aggdirectargs; - - /* - * We need the len and byval info for the agg's result data type in order - * to know how to copy/delete values. - */ - int16 resulttypeLen; - bool resulttypeByVal; - - /* - * "sharable" is false if this agg cannot share state values with other - * aggregates because the final function is read-write. - */ - bool sharable; -} AggStatePerAggData; - -/* - * AggStatePerGroupData - per-aggregate-per-group working state - * - * These values are working state that is initialized at the start of - * an input tuple group and updated for each input tuple. - * - * In AGG_PLAIN and AGG_SORTED modes, we have a single array of these - * structs (pointed to by aggstate->pergroup); we re-use the array for - * each input group, if it's AGG_SORTED mode. In AGG_HASHED mode, the - * hash table contains an array of these structs for each tuple group. - * - * Logically, the sortstate field belongs in this struct, but we do not - * keep it here for space reasons: we don't support DISTINCT aggregates - * in AGG_HASHED mode, so there's no reason to use up a pointer field - * in every entry of the hashtable. - */ -typedef struct AggStatePerGroupData -{ - Datum transValue; /* current transition value */ - bool transValueIsNull; - - bool noTransValue; /* true if transValue not set yet */ - - /* - * Note: noTransValue initially has the same value as transValueIsNull, - * and if true both are cleared to false at the same time. They are not - * the same though: if transfn later returns a NULL, we want to keep that - * NULL and not auto-replace it with a later input value. Only the first - * non-NULL input will be auto-substituted. - */ -} AggStatePerGroupData; - -/* - * AggStatePerPhaseData - per-grouping-set-phase state - * - * Grouping sets are divided into "phases", where a single phase can be - * processed in one pass over the input. If there is more than one phase, then - * at the end of input from the current phase, state is reset and another pass - * taken over the data which has been re-sorted in the mean time. - * - * Accordingly, each phase specifies a list of grouping sets and group clause - * information, plus each phase after the first also has a sort order. 
- */ -typedef struct AggStatePerPhaseData -{ - AggStrategy aggstrategy; /* strategy for this phase */ - int numsets; /* number of grouping sets (or 0) */ - int *gset_lengths; /* lengths of grouping sets */ - Bitmapset **grouped_cols; /* column groupings for rollup */ - FmgrInfo *eqfunctions; /* per-grouping-field equality fns */ - Agg *aggnode; /* Agg node for phase data */ - Sort *sortnode; /* Sort node for input ordering for phase */ -} AggStatePerPhaseData; - -/* - * AggStatePerHashData - per-hashtable state - * - * When doing grouping sets with hashing, we have one of these for each - * grouping set. (When doing hashing without grouping sets, we have just one of - * them.) - */ -typedef struct AggStatePerHashData -{ - TupleHashTable hashtable; /* hash table with one entry per group */ - TupleHashIterator hashiter; /* for iterating through hash table */ - TupleTableSlot *hashslot; /* slot for loading hash table */ - FmgrInfo *hashfunctions; /* per-grouping-field hash fns */ - FmgrInfo *eqfunctions; /* per-grouping-field equality fns */ - int numCols; /* number of hash key columns */ - int numhashGrpCols; /* number of columns in hash table */ - int largestGrpColIdx; /* largest col required for hashing */ - AttrNumber *hashGrpColIdxInput; /* hash col indices in input slot */ - AttrNumber *hashGrpColIdxHash; /* indices in hashtbl tuples */ - Agg *aggnode; /* original Agg node, for numGroups etc. */ -} AggStatePerHashData; - - static void select_current_set(AggState *aggstate, int setno, bool is_hash); static void initialize_phase(AggState *aggstate, int newphase); static TupleTableSlot *fetch_input_tuple(AggState *aggstate); @@ -537,13 +248,7 @@ static void initialize_aggregates(AggState *aggstate, static void advance_transition_function(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroupstate); -static void advance_aggregates(AggState *aggstate, - AggStatePerGroup *sort_pergroups, - AggStatePerGroup *hash_pergroups); -static void advance_combine_function(AggState *aggstate, - AggStatePerTrans pertrans, - AggStatePerGroup pergroupstate); -static void combine_aggregates(AggState *aggstate, AggStatePerGroup pergroup); +static void advance_aggregates(AggState *aggstate); static void process_ordered_aggregate_single(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroupstate); @@ -569,7 +274,7 @@ static Bitmapset *find_unaggregated_cols(AggState *aggstate); static bool find_unaggregated_cols_walker(Node *node, Bitmapset **colnos); static void build_hash_table(AggState *aggstate); static TupleHashEntryData *lookup_hash_entry(AggState *aggstate); -static AggStatePerGroup *lookup_hash_entries(AggState *aggstate); +static void lookup_hash_entries(AggState *aggstate); static TupleTableSlot *agg_retrieve_direct(AggState *aggstate); static void agg_fill_hash_table(AggState *aggstate); static TupleTableSlot *agg_retrieve_hash_table(AggState *aggstate); @@ -597,6 +302,7 @@ static int find_compatible_pertrans(AggState *aggstate, Aggref *newagg, static void select_current_set(AggState *aggstate, int setno, bool is_hash) { + /* when changing this, also adapt ExecInterpExpr() and friends */ if (is_hash) aggstate->curaggcontext = aggstate->hashcontext; else @@ -967,350 +673,15 @@ advance_transition_function(AggState *aggstate, * When called, CurrentMemoryContext should be the per-query context. 
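
The rewritten advance_aggregates below shrinks to a single ExecEvalExprSwitchContext call, because all per-row transition work now lives in the phase's prebuilt evaltrans expression. A minimal sketch of that control-flow shape, one fused callback advancing every transition state per input row; Phase, EvalFunc and fused_trans are hypothetical stand-ins, not PostgreSQL types:

    #include <stdio.h>

    typedef struct Phase Phase;
    typedef void (*EvalFunc) (Phase *phase);

    struct Phase
    {
        EvalFunc    evaltrans;  /* all per-row transition work, fused */
        long        state0;
        long        state1;
        long        row;        /* current input, normally a TupleTableSlot */
    };

    static void
    fused_trans(Phase *phase)
    {
        /* both "aggregates" advanced by the one compiled expression */
        phase->state0 += phase->row;    /* sum() */
        phase->state1 += 1;             /* count() */
    }

    int
    main(void)
    {
        Phase       phase = {fused_trans, 0, 0, 0};
        long        input[] = {10, 20, 30};
        int         i;

        for (i = 0; i < 3; i++)
        {
            phase.row = input[i];
            phase.evaltrans(&phase);    /* one indirect call per row */
        }

        printf("sum=%ld count=%ld\n", phase.state0, phase.state1);
        return 0;
    }
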
*/ static void -advance_aggregates(AggState *aggstate, - AggStatePerGroup *sort_pergroups, - AggStatePerGroup *hash_pergroups) +advance_aggregates(AggState *aggstate) { - int transno; - int setno = 0; - int numGroupingSets = Max(aggstate->phase->numsets, 1); - int numHashes = aggstate->num_hashes; - int numTrans = aggstate->numtrans; - TupleTableSlot *combinedslot; - - /* compute required inputs for all aggregates */ - combinedslot = ExecProject(aggstate->combinedproj); - - for (transno = 0; transno < numTrans; transno++) - { - AggStatePerTrans pertrans = &aggstate->pertrans[transno]; - int numTransInputs = pertrans->numTransInputs; - int inputoff = pertrans->inputoff; - TupleTableSlot *slot; - int i; - - /* Skip anything FILTERed out */ - if (pertrans->aggref->aggfilter) - { - /* Check the result of the filter expression */ - if (combinedslot->tts_isnull[inputoff] || - !DatumGetBool(combinedslot->tts_values[inputoff])) - continue; - - /* Now it's safe to evaluate this agg's arguments */ - slot = ExecProject(pertrans->evalproj); - /* There's no offset needed in this slot, of course */ - inputoff = 0; - } - else - { - /* arguments are already evaluated into combinedslot @ inputoff */ - slot = combinedslot; - } - - if (pertrans->numSortCols > 0) - { - /* DISTINCT and/or ORDER BY case */ - Assert(slot->tts_nvalid >= (pertrans->numInputs + inputoff)); - Assert(!hash_pergroups); - - /* - * If the transfn is strict, we want to check for nullity before - * storing the row in the sorter, to save space if there are a lot - * of nulls. Note that we must only check numTransInputs columns, - * not numInputs, since nullity in columns used only for sorting - * is not relevant here. - */ - if (pertrans->transfn.fn_strict) - { - for (i = 0; i < numTransInputs; i++) - { - if (slot->tts_isnull[i + inputoff]) - break; - } - if (i < numTransInputs) - continue; - } - - for (setno = 0; setno < numGroupingSets; setno++) - { - /* OK, put the tuple into the tuplesort object */ - if (pertrans->numInputs == 1) - tuplesort_putdatum(pertrans->sortstates[setno], - slot->tts_values[inputoff], - slot->tts_isnull[inputoff]); - else if (pertrans->aggref->aggfilter) - { - /* - * When filtering and ordering, we already have a slot - * containing just the argument columns. - */ - Assert(slot == pertrans->sortslot); - tuplesort_puttupleslot(pertrans->sortstates[setno], slot); - } - else - { - /* - * Copy argument columns from combined slot, starting at - * inputoff, into sortslot, so that we can store just the - * columns we want. 
- */ - ExecClearTuple(pertrans->sortslot); - memcpy(pertrans->sortslot->tts_values, - &slot->tts_values[inputoff], - pertrans->numInputs * sizeof(Datum)); - memcpy(pertrans->sortslot->tts_isnull, - &slot->tts_isnull[inputoff], - pertrans->numInputs * sizeof(bool)); - ExecStoreVirtualTuple(pertrans->sortslot); - tuplesort_puttupleslot(pertrans->sortstates[setno], - pertrans->sortslot); - } - } - } - else - { - /* We can apply the transition function immediately */ - FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo; + bool dummynull; - /* Load values into fcinfo */ - /* Start from 1, since the 0th arg will be the transition value */ - Assert(slot->tts_nvalid >= (numTransInputs + inputoff)); - - for (i = 0; i < numTransInputs; i++) - { - fcinfo->arg[i + 1] = slot->tts_values[i + inputoff]; - fcinfo->argnull[i + 1] = slot->tts_isnull[i + inputoff]; - } - - if (sort_pergroups) - { - /* advance transition states for ordered grouping */ - - for (setno = 0; setno < numGroupingSets; setno++) - { - AggStatePerGroup pergroupstate; - - select_current_set(aggstate, setno, false); - - pergroupstate = &sort_pergroups[setno][transno]; - - advance_transition_function(aggstate, pertrans, pergroupstate); - } - } - - if (hash_pergroups) - { - /* advance transition states for hashed grouping */ - - for (setno = 0; setno < numHashes; setno++) - { - AggStatePerGroup pergroupstate; - - select_current_set(aggstate, setno, true); - - pergroupstate = &hash_pergroups[setno][transno]; - - advance_transition_function(aggstate, pertrans, pergroupstate); - } - } - } - } + ExecEvalExprSwitchContext(aggstate->phase->evaltrans, + aggstate->tmpcontext, + &dummynull); } -/* - * combine_aggregates replaces advance_aggregates in DO_AGGSPLIT_COMBINE - * mode. The principal difference is that here we may need to apply the - * deserialization function before running the transfn (which, in this mode, - * is actually the aggregate's combinefn). Also, we know we don't need to - * handle FILTER, DISTINCT, ORDER BY, or grouping sets. - */ -static void -combine_aggregates(AggState *aggstate, AggStatePerGroup pergroup) -{ - int transno; - int numTrans = aggstate->numtrans; - TupleTableSlot *slot; - - /* combine not supported with grouping sets */ - Assert(aggstate->phase->numsets <= 1); - - /* compute input for all aggregates */ - slot = ExecProject(aggstate->combinedproj); - - for (transno = 0; transno < numTrans; transno++) - { - AggStatePerTrans pertrans = &aggstate->pertrans[transno]; - AggStatePerGroup pergroupstate = &pergroup[transno]; - FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo; - int inputoff = pertrans->inputoff; - - Assert(slot->tts_nvalid > inputoff); - - /* - * deserialfn_oid will be set if we must deserialize the input state - * before calling the combine function - */ - if (OidIsValid(pertrans->deserialfn_oid)) - { - /* Don't call a strict deserialization function with NULL input */ - if (pertrans->deserialfn.fn_strict && slot->tts_isnull[inputoff]) - { - fcinfo->arg[1] = slot->tts_values[inputoff]; - fcinfo->argnull[1] = slot->tts_isnull[inputoff]; - } - else - { - FunctionCallInfo dsinfo = &pertrans->deserialfn_fcinfo; - MemoryContext oldContext; - - dsinfo->arg[0] = slot->tts_values[inputoff]; - dsinfo->argnull[0] = slot->tts_isnull[inputoff]; - /* Dummy second argument for type-safety reasons */ - dsinfo->arg[1] = PointerGetDatum(NULL); - dsinfo->argnull[1] = false; - - /* - * We run the deserialization functions in per-input-tuple - * memory context. 
- */ - oldContext = MemoryContextSwitchTo(aggstate->tmpcontext->ecxt_per_tuple_memory); - - fcinfo->arg[1] = FunctionCallInvoke(dsinfo); - fcinfo->argnull[1] = dsinfo->isnull; - - MemoryContextSwitchTo(oldContext); - } - } - else - { - fcinfo->arg[1] = slot->tts_values[inputoff]; - fcinfo->argnull[1] = slot->tts_isnull[inputoff]; - } - - advance_combine_function(aggstate, pertrans, pergroupstate); - } -} - -/* - * Perform combination of states between 2 aggregate states. Effectively this - * 'adds' two states together by whichever logic is defined in the aggregate - * function's combine function. - * - * Note that in this case transfn is set to the combination function. This - * perhaps should be changed to avoid confusion, but one field is ok for now - * as they'll never be needed at the same time. - */ -static void -advance_combine_function(AggState *aggstate, - AggStatePerTrans pertrans, - AggStatePerGroup pergroupstate) -{ - FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo; - MemoryContext oldContext; - Datum newVal; - - if (pertrans->transfn.fn_strict) - { - /* if we're asked to merge to a NULL state, then do nothing */ - if (fcinfo->argnull[1]) - return; - - if (pergroupstate->noTransValue) - { - /* - * transValue has not yet been initialized. If pass-by-ref - * datatype we must copy the combining state value into - * aggcontext. - */ - if (!pertrans->transtypeByVal) - { - oldContext = MemoryContextSwitchTo( - aggstate->curaggcontext->ecxt_per_tuple_memory); - pergroupstate->transValue = datumCopy(fcinfo->arg[1], - pertrans->transtypeByVal, - pertrans->transtypeLen); - MemoryContextSwitchTo(oldContext); - } - else - pergroupstate->transValue = fcinfo->arg[1]; - - pergroupstate->transValueIsNull = false; - pergroupstate->noTransValue = false; - return; - } - - if (pergroupstate->transValueIsNull) - { - /* - * Don't call a strict function with NULL inputs. Note it is - * possible to get here despite the above tests, if the combinefn - * is strict *and* returned a NULL on a prior cycle. If that - * happens we will propagate the NULL all the way to the end. - */ - return; - } - } - - /* We run the combine functions in per-input-tuple memory context */ - oldContext = MemoryContextSwitchTo(aggstate->tmpcontext->ecxt_per_tuple_memory); - - /* set up aggstate->curpertrans for AggGetAggref() */ - aggstate->curpertrans = pertrans; - - /* - * OK to call the combine function - */ - fcinfo->arg[0] = pergroupstate->transValue; - fcinfo->argnull[0] = pergroupstate->transValueIsNull; - fcinfo->isnull = false; /* just in case combine func doesn't set it */ - - newVal = FunctionCallInvoke(fcinfo); - - aggstate->curpertrans = NULL; - - /* - * If pass-by-ref datatype, must copy the new value into aggcontext and - * free the prior transValue. But if the combine function returned a - * pointer to its first input, we don't need to do anything. Also, if the - * combine function returned a pointer to a R/W expanded object that is - * already a child of the aggcontext, assume we can adopt that value - * without copying it. 
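
The adopt-or-copy rule described above, which this patch centralizes in ExecAggTransReparent, boils down to: keep the result if the function returned its first argument, otherwise copy it into long-lived storage and free the prior state. A minimal sketch under that assumption, with malloc/strdup/free standing in for memory-context allocation and no expanded-object handling:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *
    reparent(char *newval, char *oldval)
    {
        char       *kept;

        if (newval == oldval)
            return oldval;          /* transfn updated its input in place */

        kept = strdup(newval);      /* "copy into aggcontext" */
        free(oldval);               /* prior state is no longer needed */
        return kept;
    }

    int
    main(void)
    {
        char       *state = strdup("a");
        char        scratch[8];

        /* a transition that builds its result in per-tuple scratch space */
        snprintf(scratch, sizeof(scratch), "%sb", state);

        state = reparent(scratch, state);
        printf("state = %s\n", state);  /* prints "ab" */
        free(state);
        return 0;
    }
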
- */ - if (!pertrans->transtypeByVal && - DatumGetPointer(newVal) != DatumGetPointer(pergroupstate->transValue)) - { - if (!fcinfo->isnull) - { - MemoryContextSwitchTo(aggstate->curaggcontext->ecxt_per_tuple_memory); - if (DatumIsReadWriteExpandedObject(newVal, - false, - pertrans->transtypeLen) && - MemoryContextGetParent(DatumGetEOHP(newVal)->eoh_context) == CurrentMemoryContext) - /* do nothing */ ; - else - newVal = datumCopy(newVal, - pertrans->transtypeByVal, - pertrans->transtypeLen); - } - if (!pergroupstate->transValueIsNull) - { - if (DatumIsReadWriteExpandedObject(pergroupstate->transValue, - false, - pertrans->transtypeLen)) - DeleteExpandedObject(pergroupstate->transValue); - else - pfree(DatumGetPointer(pergroupstate->transValue)); - } - } - - pergroupstate->transValue = newVal; - pergroupstate->transValueIsNull = fcinfo->isnull; - - MemoryContextSwitchTo(oldContext); -} - - /* * Run the transition function for a DISTINCT or ORDER BY aggregate * with only one input. This is called after we have completed @@ -2118,7 +1489,7 @@ lookup_hash_entry(AggState *aggstate) * * Be aware that lookup_hash_entry can reset the tmpcontext. */ -static AggStatePerGroup * +static void lookup_hash_entries(AggState *aggstate) { int numHashes = aggstate->num_hashes; @@ -2130,8 +1501,6 @@ lookup_hash_entries(AggState *aggstate) select_current_set(aggstate, setno, true); pergroup[setno] = lookup_hash_entry(aggstate)->additional; } - - return pergroup; } /* @@ -2191,7 +1560,6 @@ agg_retrieve_direct(AggState *aggstate) ExprContext *tmpcontext; AggStatePerAgg peragg; AggStatePerGroup *pergroups; - AggStatePerGroup *hash_pergroups = NULL; TupleTableSlot *outerslot; TupleTableSlot *firstSlot; TupleTableSlot *result; @@ -2446,15 +1814,11 @@ agg_retrieve_direct(AggState *aggstate) if (aggstate->aggstrategy == AGG_MIXED && aggstate->current_phase == 1) { - hash_pergroups = lookup_hash_entries(aggstate); + lookup_hash_entries(aggstate); } - else - hash_pergroups = NULL; - if (DO_AGGSPLIT_COMBINE(aggstate->aggsplit)) - combine_aggregates(aggstate, pergroups[0]); - else - advance_aggregates(aggstate, pergroups, hash_pergroups); + /* Advance the aggregates (or combine functions) */ + advance_aggregates(aggstate); /* Reset per-input-tuple context after each tuple */ ResetExprContext(tmpcontext); @@ -2548,8 +1912,6 @@ agg_fill_hash_table(AggState *aggstate) */ for (;;) { - AggStatePerGroup *pergroups; - outerslot = fetch_input_tuple(aggstate); if (TupIsNull(outerslot)) break; @@ -2558,13 +1920,10 @@ agg_fill_hash_table(AggState *aggstate) tmpcontext->ecxt_outertuple = outerslot; /* Find or build hashtable entries */ - pergroups = lookup_hash_entries(aggstate); + lookup_hash_entries(aggstate); - /* Advance the aggregates */ - if (DO_AGGSPLIT_COMBINE(aggstate->aggsplit)) - combine_aggregates(aggstate, pergroups[0]); - else - advance_aggregates(aggstate, NULL, pergroups); + /* Advance the aggregates (or combine functions) */ + advance_aggregates(aggstate); /* * Reset per-input-tuple context after each tuple, but note that the @@ -2716,6 +2075,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) AggState *aggstate; AggStatePerAgg peraggs; AggStatePerTrans pertransstates; + AggStatePerGroup *pergroups; Plan *outerPlan; ExprContext *econtext; int numaggs, @@ -2723,15 +2083,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aggno; int phase; int phaseidx; - List *combined_inputeval; - TupleDesc combineddesc; - TupleTableSlot *combinedslot; ListCell *l; Bitmapset *all_grouped_cols = NULL; int numGroupingSets = 
1;
 	int			numPhases;
 	int			numHashes;
-	int			column_offset;
 	int			i = 0;
 	int			j = 0;
 	bool		use_hashing = (node->aggstrategy == AGG_HASHED ||
@@ -3033,6 +2389,24 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	aggstate->peragg = peraggs;
 	aggstate->pertrans = pertransstates;
 
+
+	aggstate->all_pergroups =
+		(AggStatePerGroup *) palloc0(sizeof(AggStatePerGroup)
+									 * (numGroupingSets + numHashes));
+	pergroups = aggstate->all_pergroups;
+
+	if (node->aggstrategy != AGG_HASHED)
+	{
+		for (i = 0; i < numGroupingSets; i++)
+		{
+			pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
+													  * numaggs);
+		}
+
+		aggstate->pergroups = pergroups;
+		pergroups += numGroupingSets;
+	}
+
 	/*
 	 * Hashing can only appear in the initial phase.
 	 */
@@ -3049,27 +2423,13 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	}
 
 		/* this is an array of pointers, not structures */
-		aggstate->hash_pergroup = palloc0(sizeof(AggStatePerGroup) * numHashes);
+		aggstate->hash_pergroup = pergroups;
 
 		find_hash_columns(aggstate);
 		build_hash_table(aggstate);
 		aggstate->table_filled = false;
 	}
 
-	if (node->aggstrategy != AGG_HASHED)
-	{
-		AggStatePerGroup *pergroups;
-
-		pergroups = (AggStatePerGroup *) palloc0(sizeof(AggStatePerGroup) *
												 numGroupingSets);
-
-		for (i = 0; i < numGroupingSets; i++)
-			pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
-													  * numaggs);
-
-		aggstate->pergroups = pergroups;
-	}
-
 	/*
 	 * Initialize current phase-dependent values to initial phase. The initial
 	 * phase is 1 (first sort pass) for all strategies that use sorting (if
@@ -3409,98 +2769,72 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	aggstate->numtrans = transno + 1;
 
 	/*
-	 * Build a single projection computing the required arguments for all
-	 * aggregates at once; if there's more than one, that's considerably
-	 * faster than doing it separately for each.
-	 *
-	 * First create a targetlist representing the values to compute.
+	 * Last, check whether any more aggregates got added onto the node while
+	 * we processed the expressions for the aggregate arguments (including not
+	 * only the regular arguments and FILTER expressions handled immediately
+	 * above, but any direct arguments we might've handled earlier). If so,
+	 * we have nested aggregate functions, which is semantically nonsensical,
+	 * so complain. (This should have been caught by the parser, so we don't
+	 * need to work hard on a helpful error message; but we defend against it
+	 * here anyway, just to be sure.)
 	 */
-	combined_inputeval = NIL;
-	column_offset = 0;
-	for (transno = 0; transno < aggstate->numtrans; transno++)
+	if (numaggs != list_length(aggstate->aggs))
+		ereport(ERROR,
+				(errcode(ERRCODE_GROUPING_ERROR),
+				 errmsg("aggregate function calls cannot be nested")));
+
+	/*
+	 * Build expressions doing all the transition work at once. We build a
+	 * different one for each phase, as the number of transition function
+	 * invocations can differ between phases. Note this'll work both for
+	 * transition and combination functions (although there'll only be one
+	 * phase in the latter case).
+	 */
+	for (phaseidx = 0; phaseidx < aggstate->numphases; phaseidx++)
 	{
-		AggStatePerTrans pertrans = &pertransstates[transno];
+		AggStatePerPhase phase = &aggstate->phases[phaseidx];
+		bool		dohash = false;
+		bool		dosort = false;
 
-		/*
-		 * Mark this per-trans state with its starting column in the combined
-		 * slot.
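
The loop entered above decides, per phase, whether the fused expression must perform sorting and/or hashing work; the per-strategy branches that follow can be summarized as a small decision table. A compact sketch of that selection; Strategy is a hypothetical stand-in for AggStrategy, "first real phase" corresponds to phaseidx == 1 of AGG_MIXED, and phase 0 of AGG_MIXED is omitted since it builds no transition expression:

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum
    {
        STRAT_PLAIN, STRAT_SORTED, STRAT_HASHED, STRAT_MIXED
    } Strategy;

    static void
    phase_work(Strategy s, bool first_real_phase, bool *dosort, bool *dohash)
    {
        /* MIXED sorts and hashes in its first real phase; others do one */
        *dosort = (s == STRAT_PLAIN || s == STRAT_SORTED ||
                   (s == STRAT_MIXED && first_real_phase));
        *dohash = (s == STRAT_HASHED || (s == STRAT_MIXED && first_real_phase));
    }

    int
    main(void)
    {
        bool        dosort, dohash;

        phase_work(STRAT_MIXED, true, &dosort, &dohash);
        printf("mixed, first real phase: dosort=%d dohash=%d\n",
               dosort, dohash);
        return 0;
    }
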
- */ - pertrans->inputoff = column_offset; + /* phase 0 doesn't necessarily exist */ + if (!phase->aggnode) + continue; - /* - * If the aggregate has a FILTER, we can only evaluate the filter - * expression, not the actual input expressions, during the combined - * eval step --- unless we're ignoring the filter because this node is - * running combinefns not transfns. - */ - if (pertrans->aggref->aggfilter && - !DO_AGGSPLIT_COMBINE(aggstate->aggsplit)) + if (aggstate->aggstrategy == AGG_MIXED && phaseidx == 1) { - TargetEntry *tle; - - tle = makeTargetEntry(pertrans->aggref->aggfilter, - column_offset + 1, NULL, false); - combined_inputeval = lappend(combined_inputeval, tle); - column_offset++; - /* - * We'll need separate projection machinery for the real args. - * Arrange to evaluate them into the sortslot previously created. + * Phase one, and only phase one, in a mixed agg performs both + * sorting and aggregation. */ - Assert(pertrans->sortslot); - pertrans->evalproj = ExecBuildProjectionInfo(pertrans->aggref->args, - aggstate->tmpcontext, - pertrans->sortslot, - &aggstate->ss.ps, - NULL); + dohash = true; + dosort = true; } - else + else if (aggstate->aggstrategy == AGG_MIXED && phaseidx == 0) { /* - * Add agg's input expressions to combined_inputeval, adjusting - * resnos in the copied target entries to match the combined slot. + * No need to compute a transition function for an AGG_MIXED phase + * 0 - the contents of the hashtables will have been computed + * during phase 1. */ - ListCell *arg; - - foreach(arg, pertrans->aggref->args) - { - TargetEntry *source_tle = lfirst_node(TargetEntry, arg); - TargetEntry *tle; - - tle = flatCopyTargetEntry(source_tle); - tle->resno += column_offset; - - combined_inputeval = lappend(combined_inputeval, tle); - } - - column_offset += list_length(pertrans->aggref->args); + continue; } - } + else if (phase->aggstrategy == AGG_PLAIN || + phase->aggstrategy == AGG_SORTED) + { + dohash = false; + dosort = true; + } + else if (phase->aggstrategy == AGG_HASHED) + { + dohash = true; + dosort = false; + } + else + Assert(false); - /* Now create a projection for the combined targetlist */ - combineddesc = ExecTypeFromTL(combined_inputeval, false); - combinedslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(combinedslot, combineddesc); - aggstate->combinedproj = ExecBuildProjectionInfo(combined_inputeval, - aggstate->tmpcontext, - combinedslot, - &aggstate->ss.ps, - NULL); + phase->evaltrans = ExecBuildAggTrans(aggstate, phase, dosort, dohash); - /* - * Last, check whether any more aggregates got added onto the node while - * we processed the expressions for the aggregate arguments (including not - * only the regular arguments and FILTER expressions handled immediately - * above, but any direct arguments we might've handled earlier). If so, - * we have nested aggregate functions, which is semantically nonsensical, - * so complain. (This should have been caught by the parser, so we don't - * need to work hard on a helpful error message; but we defend against it - * here anyway, just to be sure.) 
- */ - if (numaggs != list_length(aggstate->aggs)) - ereport(ERROR, - (errcode(ERRCODE_GROUPING_ERROR), - errmsg("aggregate function calls cannot be nested"))); + } return aggstate; } @@ -3557,8 +2891,6 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, else pertrans->numTransInputs = numArguments; - /* inputoff and evalproj will be set up later, in ExecInitAgg */ - /* * When combining states, we have no use at all for the aggregate * function's transfn. Instead we use the combinefn. In this case, the diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h index b0c7bda76f..117fc892f4 100644 --- a/src/include/executor/execExpr.h +++ b/src/include/executor/execExpr.h @@ -14,6 +14,7 @@ #ifndef EXEC_EXPR_H #define EXEC_EXPR_H +#include "executor/nodeAgg.h" #include "nodes/execnodes.h" /* forward references to avoid circularity */ @@ -64,9 +65,9 @@ typedef enum ExprEvalOp EEOP_WHOLEROW, /* - * Compute non-system Var value, assign it into ExprState's - * resultslot. These are not used if a CheckVarSlotCompatibility() check - * would be needed. + * Compute non-system Var value, assign it into ExprState's resultslot. + * These are not used if a CheckVarSlotCompatibility() check would be + * needed. */ EEOP_ASSIGN_INNER_VAR, EEOP_ASSIGN_OUTER_VAR, @@ -218,6 +219,17 @@ typedef enum ExprEvalOp EEOP_SUBPLAN, EEOP_ALTERNATIVE_SUBPLAN, + /* aggregation related nodes */ + EEOP_AGG_STRICT_DESERIALIZE, + EEOP_AGG_DESERIALIZE, + EEOP_AGG_STRICT_INPUT_CHECK, + EEOP_AGG_INIT_TRANS, + EEOP_AGG_STRICT_TRANS_CHECK, + EEOP_AGG_PLAIN_TRANS_BYVAL, + EEOP_AGG_PLAIN_TRANS, + EEOP_AGG_ORDERED_TRANS_DATUM, + EEOP_AGG_ORDERED_TRANS_TUPLE, + /* non-existent operation, used e.g. to check array lengths */ EEOP_LAST } ExprEvalOp; @@ -573,6 +585,55 @@ typedef struct ExprEvalStep /* out-of-line state, created by nodeSubplan.c */ AlternativeSubPlanState *asstate; } alternative_subplan; + + /* for EEOP_AGG_*DESERIALIZE */ + struct + { + AggState *aggstate; + FunctionCallInfo fcinfo_data; + int jumpnull; + } agg_deserialize; + + /* for EEOP_AGG_STRICT_INPUT_CHECK */ + struct + { + bool *nulls; + int nargs; + int jumpnull; + } agg_strict_input_check; + + /* for EEOP_AGG_INIT_TRANS */ + struct + { + AggState *aggstate; + AggStatePerTrans pertrans; + ExprContext *aggcontext; + int setno; + int transno; + int setoff; + int jumpnull; + } agg_init_trans; + + /* for EEOP_AGG_STRICT_TRANS_CHECK */ + struct + { + AggState *aggstate; + int setno; + int transno; + int setoff; + int jumpnull; + } agg_strict_trans_check; + + /* for EEOP_AGG_{PLAIN,ORDERED}_TRANS* */ + struct + { + AggState *aggstate; + AggStatePerTrans pertrans; + ExprContext *aggcontext; + int setno; + int transno; + int setoff; + } agg_trans; } d; } ExprEvalStep; @@ -669,4 +730,13 @@ extern void ExecEvalAlternativeSubPlan(ExprState *state, ExprEvalStep *op, extern void ExecEvalWholeRowVar(ExprState *state, ExprEvalStep *op, ExprContext *econtext); +extern void ExecAggInitGroup(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroup); +extern Datum ExecAggTransReparent(AggState *aggstate, AggStatePerTrans pertrans, + Datum newValue, bool newValueIsNull, + Datum oldValue, bool oldValueIsNull); +extern void ExecEvalAggOrderedTransDatum(ExprState *state, ExprEvalStep *op, + ExprContext *econtext); +extern void ExecEvalAggOrderedTransTuple(ExprState *state, ExprEvalStep *op, + ExprContext *econtext); + #endif /* EXEC_EXPR_H */ diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index a782fae0f8..6545a80222 
100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -192,7 +192,7 @@ extern void ExecConstraints(ResultRelInfo *resultRelInfo, extern bool ExecPartitionCheck(ResultRelInfo *resultRelInfo, TupleTableSlot *slot, EState *estate); extern void ExecPartitionCheckEmitError(ResultRelInfo *resultRelInfo, - TupleTableSlot *slot, EState *estate); + TupleTableSlot *slot, EState *estate); extern void ExecWithCheckOptions(WCOKind kind, ResultRelInfo *resultRelInfo, TupleTableSlot *slot, EState *estate); extern LockTupleMode ExecUpdateLockMode(EState *estate, ResultRelInfo *relinfo); @@ -254,6 +254,8 @@ extern ExprState *ExecInitExprWithParams(Expr *node, ParamListInfo ext_params); extern ExprState *ExecInitQual(List *qual, PlanState *parent); extern ExprState *ExecInitCheck(List *qual, PlanState *parent); extern List *ExecInitExprList(List *nodes, PlanState *parent); +extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase, + bool doSort, bool doHash); extern ProjectionInfo *ExecBuildProjectionInfo(List *targetList, ExprContext *econtext, TupleTableSlot *slot, diff --git a/src/include/executor/nodeAgg.h b/src/include/executor/nodeAgg.h index 90c68795f1..3b06db86fd 100644 --- a/src/include/executor/nodeAgg.h +++ b/src/include/executor/nodeAgg.h @@ -16,6 +16,290 @@ #include "nodes/execnodes.h" + +/* + * AggStatePerTransData - per aggregate state value information + * + * Working state for updating the aggregate's state value, by calling the + * transition function with an input row. This struct does not store the + * information needed to produce the final aggregate result from the transition + * state, that's stored in AggStatePerAggData instead. This separation allows + * multiple aggregate results to be produced from a single state value. + */ +typedef struct AggStatePerTransData +{ + /* + * These values are set up during ExecInitAgg() and do not change + * thereafter: + */ + + /* + * Link to an Aggref expr this state value is for. + * + * There can be multiple Aggref's sharing the same state value, so long as + * the inputs and transition functions are identical and the final + * functions are not read-write. This points to the first one of them. + */ + Aggref *aggref; + + /* + * Is this state value actually being shared by more than one Aggref? + */ + bool aggshared; + + /* + * Number of aggregated input columns. This includes ORDER BY expressions + * in both the plain-agg and ordered-set cases. Ordered-set direct args + * are not counted, though. + */ + int numInputs; + + /* + * Number of aggregated input columns to pass to the transfn. This + * includes the ORDER BY columns for ordered-set aggs, but not for plain + * aggs. (This doesn't count the transition state value!) + */ + int numTransInputs; + + /* Oid of the state transition or combine function */ + Oid transfn_oid; + + /* Oid of the serialization function or InvalidOid */ + Oid serialfn_oid; + + /* Oid of the deserialization function or InvalidOid */ + Oid deserialfn_oid; + + /* Oid of state value's datatype */ + Oid aggtranstype; + + /* + * fmgr lookup data for transition function or combine function. Note in + * particular that the fn_strict flag is kept here. 
+ */ + FmgrInfo transfn; + + /* fmgr lookup data for serialization function */ + FmgrInfo serialfn; + + /* fmgr lookup data for deserialization function */ + FmgrInfo deserialfn; + + /* Input collation derived for aggregate */ + Oid aggCollation; + + /* number of sorting columns */ + int numSortCols; + + /* number of sorting columns to consider in DISTINCT comparisons */ + /* (this is either zero or the same as numSortCols) */ + int numDistinctCols; + + /* deconstructed sorting information (arrays of length numSortCols) */ + AttrNumber *sortColIdx; + Oid *sortOperators; + Oid *sortCollations; + bool *sortNullsFirst; + + /* + * fmgr lookup data for input columns' equality operators --- only + * set/used when aggregate has DISTINCT flag. Note that these are in + * order of sort column index, not parameter index. + */ + FmgrInfo *equalfns; /* array of length numDistinctCols */ + + /* + * initial value from pg_aggregate entry + */ + Datum initValue; + bool initValueIsNull; + + /* + * We need the len and byval info for the agg's input and transition data + * types in order to know how to copy/delete values. + * + * Note that the info for the input type is used only when handling + * DISTINCT aggs with just one argument, so there is only one input type. + */ + int16 inputtypeLen, + transtypeLen; + bool inputtypeByVal, + transtypeByVal; + + /* + * Slots for holding the evaluated input arguments. These are set up + * during ExecInitAgg() and then used for each input row requiring either + * FILTER or ORDER BY/DISTINCT processing. + */ + TupleTableSlot *sortslot; /* current input tuple */ + TupleTableSlot *uniqslot; /* used for multi-column DISTINCT */ + TupleDesc sortdesc; /* descriptor of input tuples */ + + /* + * These values are working state that is initialized at the start of an + * input tuple group and updated for each input tuple. + * + * For a simple (non DISTINCT/ORDER BY) aggregate, we just feed the input + * values straight to the transition function. If it's DISTINCT or + * requires ORDER BY, we pass the input values into a Tuplesort object; + * then at completion of the input tuple group, we scan the sorted values, + * eliminate duplicates if needed, and run the transition function on the + * rest. + * + * We need a separate tuplesort for each grouping set. + */ + + Tuplesortstate **sortstates; /* sort objects, if DISTINCT or ORDER BY */ + + /* + * This field is a pre-initialized FunctionCallInfo struct used for + * calling this aggregate's transfn. We save a few cycles per row by not + * re-initializing the unchanging fields; which isn't much, but it seems + * worth the extra space consumption. + */ + FunctionCallInfoData transfn_fcinfo; + + /* Likewise for serialization and deserialization functions */ + FunctionCallInfoData serialfn_fcinfo; + + FunctionCallInfoData deserialfn_fcinfo; +} AggStatePerTransData; + +/* + * AggStatePerAggData - per-aggregate information + * + * This contains the information needed to call the final function, to produce + * a final aggregate result from the state value. If there are multiple + * identical Aggrefs in the query, they can all share the same per-agg data. + * + * These values are set up during ExecInitAgg() and do not change thereafter. + */ +typedef struct AggStatePerAggData +{ + /* + * Link to an Aggref expr this state value is for. + * + * There can be multiple identical Aggref's sharing the same per-agg. This + * points to the first one of them. 
+ */ + Aggref *aggref; + + /* index to the state value which this agg should use */ + int transno; + + /* Optional Oid of final function (may be InvalidOid) */ + Oid finalfn_oid; + + /* + * fmgr lookup data for final function --- only valid when finalfn_oid is + * not InvalidOid. + */ + FmgrInfo finalfn; + + /* + * Number of arguments to pass to the finalfn. This is always at least 1 + * (the transition state value) plus any ordered-set direct args. If the + * finalfn wants extra args then we pass nulls corresponding to the + * aggregated input columns. + */ + int numFinalArgs; + + /* ExprStates for any direct-argument expressions */ + List *aggdirectargs; + + /* + * We need the len and byval info for the agg's result data type in order + * to know how to copy/delete values. + */ + int16 resulttypeLen; + bool resulttypeByVal; + + /* + * "sharable" is false if this agg cannot share state values with other + * aggregates because the final function is read-write. + */ + bool sharable; +} AggStatePerAggData; + +/* + * AggStatePerGroupData - per-aggregate-per-group working state + * + * These values are working state that is initialized at the start of + * an input tuple group and updated for each input tuple. + * + * In AGG_PLAIN and AGG_SORTED modes, we have a single array of these + * structs (pointed to by aggstate->pergroup); we re-use the array for + * each input group, if it's AGG_SORTED mode. In AGG_HASHED mode, the + * hash table contains an array of these structs for each tuple group. + * + * Logically, the sortstate field belongs in this struct, but we do not + * keep it here for space reasons: we don't support DISTINCT aggregates + * in AGG_HASHED mode, so there's no reason to use up a pointer field + * in every entry of the hashtable. + */ +typedef struct AggStatePerGroupData +{ + Datum transValue; /* current transition value */ + bool transValueIsNull; + + bool noTransValue; /* true if transValue not set yet */ + + /* + * Note: noTransValue initially has the same value as transValueIsNull, + * and if true both are cleared to false at the same time. They are not + * the same though: if transfn later returns a NULL, we want to keep that + * NULL and not auto-replace it with a later input value. Only the first + * non-NULL input will be auto-substituted. + */ +} AggStatePerGroupData; + +/* + * AggStatePerPhaseData - per-grouping-set-phase state + * + * Grouping sets are divided into "phases", where a single phase can be + * processed in one pass over the input. If there is more than one phase, then + * at the end of input from the current phase, state is reset and another pass + * taken over the data which has been re-sorted in the mean time. + * + * Accordingly, each phase specifies a list of grouping sets and group clause + * information, plus each phase after the first also has a sort order. + */ +typedef struct AggStatePerPhaseData +{ + AggStrategy aggstrategy; /* strategy for this phase */ + int numsets; /* number of grouping sets (or 0) */ + int *gset_lengths; /* lengths of grouping sets */ + Bitmapset **grouped_cols; /* column groupings for rollup */ + FmgrInfo *eqfunctions; /* per-grouping-field equality fns */ + Agg *aggnode; /* Agg node for phase data */ + Sort *sortnode; /* Sort node for input ordering for phase */ + + ExprState *evaltrans; /* evaluation of transition functions */ +} AggStatePerPhaseData; + +/* + * AggStatePerHashData - per-hashtable state + * + * When doing grouping sets with hashing, we have one of these for each + * grouping set. 
(When doing hashing without grouping sets, we have just one of
 + * them.)
 + */
+typedef struct AggStatePerHashData
+{
+	TupleHashTable hashtable;	/* hash table with one entry per group */
+	TupleHashIterator hashiter; /* for iterating through hash table */
+	TupleTableSlot *hashslot;	/* slot for loading hash table */
+	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
+	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
+	int			numCols;		/* number of hash key columns */
+	int			numhashGrpCols; /* number of columns in hash table */
+	int			largestGrpColIdx;	/* largest col required for hashing */
+	AttrNumber *hashGrpColIdxInput; /* hash col indices in input slot */
+	AttrNumber *hashGrpColIdxHash;	/* indices in hashtbl tuples */
+	Agg		   *aggnode;		/* original Agg node, for numGroups etc. */
+}			AggStatePerHashData;
+
+
 extern AggState *ExecInitAgg(Agg *node, EState *estate, int eflags);
 extern void ExecEndAgg(AggState *node);
 extern void ExecReScanAgg(AggState *node);
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 2a4f7407a1..4bb5cb163d 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1850,10 +1850,13 @@ typedef struct AggState
 	/* these fields are used in AGG_HASHED and AGG_MIXED modes: */
 	bool		table_filled;	/* hash table filled yet? */
 	int			num_hashes;
-	AggStatePerHash perhash;
+	AggStatePerHash perhash;	/* array of per-hashtable data */
 	AggStatePerGroup *hash_pergroup;	/* grouping set indexed array of
 										 * per-group pointers */
+	/* support for evaluation of agg input expressions: */
+	AggStatePerGroup *all_pergroups;	/* array of first ->pergroups, then
+										 * ->hash_pergroup */
 	ProjectionInfo *combinedproj;	/* projection machinery */
 } AggState;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a92c62adde..cc84217dd9 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -604,6 +604,7 @@ ExprContextCallbackFunction
 ExprContext_CB
 ExprDoneCond
 ExprEvalOp
+ExprEvalOpLookup
 ExprEvalStep
 ExprState
 ExprStateEvalFunc
From fccaea45496d721012ce8fbbebae82e4dbfc1ef4 Mon Sep 17 00:00:00 2001
From: Bruce Momjian
Date: Tue, 9 Jan 2018 18:33:11 -0500
Subject: [PATCH 0813/1087] Remove outdated/removed Win32 URLs in C comments

Reported-by: Ashutosh Sharma
---
 src/include/port/win32.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/src/include/port/win32.h b/src/include/port/win32.h
index 611e04fac6..9f48a58aed 100644
--- a/src/include/port/win32.h
+++ b/src/include/port/win32.h
@@ -43,9 +43,6 @@

 /*
  *	defines for dynamic linking on Win32 platform
- *	http://support.microsoft.com/kb/132044
- *	http://msdn.microsoft.com/en-us/library/8fskxacy(v=vs.80).aspx
- *	http://msdn.microsoft.com/en-us/library/a90k134d(v=vs.80).aspx
  */

 #ifdef BUILDING_DLL
From d16c2de6244f3b71c0c77a3d63905227fdc78428 Mon Sep 17 00:00:00 2001
From: Teodor Sigaev
Date: Wed, 10 Jan 2018 11:33:37 +0300
Subject: [PATCH 0814/1087] Allow a leading zero on exponents in pgbench test
 results

Commit bc7fa0c15c590ddf4872e426abd76c2634f22aca accidentally lost the
fixes from commit 0aa1d489ea756b96b6d5573692ae9cd5d143c2a5.
Thanks to Thomas Munro --- src/bin/pgbench/t/001_pgbench_with_server.pl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index e579334914..a8b2962bd0 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -239,7 +239,7 @@ sub pgbench qr{command=26.: double -0.125\b}, qr{command=27.: double -0.00032\b}, qr{command=28.: double 8.50705917302346e\+0?37\b}, - qr{command=29.: double 1e\+30\b}, + qr{command=29.: double 1e\+0?30\b}, qr{command=30.: boolean false\b}, qr{command=31.: boolean true\b}, qr{command=32.: int 32\b}, From acc67ffd0a8c728b928958e75b76ee544b64c2d8 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 10 Jan 2018 09:22:07 -0500 Subject: [PATCH 0815/1087] Give more accurate error message for dropping pinned portal The previous code gave the same error message for attempting to drop pinned and active portals, but those are separate states, so give separate error messages. --- src/backend/utils/mmgr/portalmem.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c index 9edc1ccc83..84c68ac189 100644 --- a/src/backend/utils/mmgr/portalmem.c +++ b/src/backend/utils/mmgr/portalmem.c @@ -464,11 +464,17 @@ PortalDrop(Portal portal, bool isTopCommit) /* * Don't allow dropping a pinned portal, it's still needed by whoever - * pinned it. Not sure if the PORTAL_ACTIVE case can validly happen or - * not... + * pinned it. */ - if (portal->portalPinned || - portal->status == PORTAL_ACTIVE) + if (portal->portalPinned) + ereport(ERROR, + (errcode(ERRCODE_INVALID_CURSOR_STATE), + errmsg("cannot drop pinned portal \"%s\"", portal->name))); + + /* + * Not sure if the PORTAL_ACTIVE case can validly happen or not... + */ + if (portal->status == PORTAL_ACTIVE) ereport(ERROR, (errcode(ERRCODE_INVALID_CURSOR_STATE), errmsg("cannot drop active portal \"%s\"", portal->name))); From b3617cdfbba1b5381e9d1c6bc0839500e8eb7273 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 15 Dec 2017 15:24:10 -0500 Subject: [PATCH 0816/1087] Move portal pinning from PL/pgSQL to SPI PL/pgSQL "pins" internally generated (unnamed) portals so that user code cannot close them by guessing their names. This logic is also useful in other languages and really for any code. So move that logic into SPI. An unnamed portal obtained through SPI_cursor_open() and related functions is now automatically pinned, and SPI_cursor_close() automatically unpins a portal that is pinned. In the core distribution, this affects PL/Perl and PL/Python, preventing users from manually closing cursors created by spi_query and plpy.cursor, respectively. (PL/Tcl does not currently offer any cursor functionality.) Reviewed-by: Andrew Dunstan --- src/backend/executor/spi.c | 9 +++++++++ src/pl/plpgsql/src/pl_exec.c | 8 -------- 2 files changed, 9 insertions(+), 8 deletions(-) diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c index 995f67d266..96370513e8 100644 --- a/src/backend/executor/spi.c +++ b/src/backend/executor/spi.c @@ -1175,6 +1175,12 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan, { /* Use a random nonconflicting name */ portal = CreateNewPortal(); + + /* + * Make sure the portal doesn't get closed by the user statements we + * execute. 
+ */ + PinPortal(portal); } else { @@ -1413,6 +1419,9 @@ SPI_cursor_close(Portal portal) if (!PortalIsValid(portal)) elog(ERROR, "invalid portal in SPI cursor operation"); + if (portal->portalPinned) + UnpinPortal(portal); + PortalDrop(portal, false); } diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index d096f242cd..a326a04fc9 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -5257,12 +5257,6 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, /* Fetch loop variable's datum entry */ var = (PLpgSQL_variable *) estate->datums[stmt->var->dno]; - /* - * Make sure the portal doesn't get closed by the user statements we - * execute. - */ - PinPortal(portal); - /* * Fetch the initial tuple(s). If prefetching is allowed then we grab a * few more rows to avoid multiple trips through executor startup @@ -5324,8 +5318,6 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, */ SPI_freetuptable(tuptab); - UnpinPortal(portal); - /* * Set the FOUND variable to indicate the result of executing the loop * (namely, whether we looped one or more times). This must be set last so From 2fd58096f02777c38edb392f78cb5b4ebd90e9d2 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 10 Jan 2018 11:18:40 -0500 Subject: [PATCH 0817/1087] Add missing "return" statement to accumulate_append_subpath. Without this, Parallel Append can end up with extra children. Report by Rajkumar Raghuwanshi. Fix by Amit Khandekar. Brown paper bag bug by me. Discussion: http://postgr.es/m/CAKcux6mBF-NiddyEe9LwymoUC5+wh8bQJ=uk2gGkOE+L8cv=LA@mail.gmail.com --- src/backend/optimizer/path/allpaths.c | 1 + 1 file changed, 1 insertion(+) diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 12a6ee4a22..c5304b712e 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -1926,6 +1926,7 @@ accumulate_append_subpath(Path *path, List **subpaths, List **special_subpaths) apath->first_partial_path); *special_subpaths = list_concat(*special_subpaths, new_special_subpaths); + return; } } else if (IsA(path, MergeAppendPath)) From 3afd75eaac8aaccf5aeebc52548c396b84d85516 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 10 Jan 2018 15:50:54 -0500 Subject: [PATCH 0818/1087] Remove dubious micro-optimization in ckpt_buforder_comparator(). It seems incorrect to assume that the list of CkptSortItems can never contain duplicate page numbers: concurrent activity could result in some page getting dropped from a low-numbered buffer and later loaded into a high-numbered buffer while BufferSync is scanning the buffer pool. If that happened, the comparator would give self-inconsistent results, potentially confusing qsort(). Saving one comparison step is not worth possibly getting the sort wrong. So far as I can tell, nothing would actually go wrong given our current implementation of qsort(). It might get a bit slower than expected if there were a large number of duplicates of one value, but that's surely a probability-epsilon case. Still, the comment is wrong, and if we ever switched to another sort implementation it might be less forgiving. In passing, avoid casting away const-ness of the argument pointers; I've not seen any compiler complaints from that, but it seems likely that some compilers would not like it. Back-patch to 9.6 where this code came in, just in case I've underestimated the possible consequences. 
Discussion: https://postgr.es/m/18437.1515607610@sss.pgh.pa.us --- src/backend/storage/buffer/bufmgr.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c index 4e44336332..01eabe5706 100644 --- a/src/backend/storage/buffer/bufmgr.c +++ b/src/backend/storage/buffer/bufmgr.c @@ -4064,8 +4064,8 @@ local_buffer_write_error_callback(void *arg) static int rnode_comparator(const void *p1, const void *p2) { - RelFileNode n1 = *(RelFileNode *) p1; - RelFileNode n2 = *(RelFileNode *) p2; + RelFileNode n1 = *(const RelFileNode *) p1; + RelFileNode n2 = *(const RelFileNode *) p2; if (n1.relNode < n2.relNode) return -1; @@ -4174,8 +4174,8 @@ buffertag_comparator(const void *a, const void *b) static int ckpt_buforder_comparator(const void *pa, const void *pb) { - const CkptSortItem *a = (CkptSortItem *) pa; - const CkptSortItem *b = (CkptSortItem *) pb; + const CkptSortItem *a = (const CkptSortItem *) pa; + const CkptSortItem *b = (const CkptSortItem *) pb; /* compare tablespace */ if (a->tsId < b->tsId) @@ -4195,8 +4195,10 @@ ckpt_buforder_comparator(const void *pa, const void *pb) /* compare block number */ else if (a->blockNum < b->blockNum) return -1; - else /* should not be the same block ... */ + else if (a->blockNum > b->blockNum) return 1; + /* equal page IDs are unlikely, but not impossible */ + return 0; } /* From b48b2f8793ef256d19274b4ef6ff587fd47ab553 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 10 Jan 2018 16:01:17 -0500 Subject: [PATCH 0819/1087] Revert "Move portal pinning from PL/pgSQL to SPI" This reverts commit b3617cdfbba1b5381e9d1c6bc0839500e8eb7273. This broke returning unnamed cursors from PL/pgSQL functions. Apparently, there are no test cases for this. --- src/backend/executor/spi.c | 9 --------- src/pl/plpgsql/src/pl_exec.c | 8 ++++++++ 2 files changed, 8 insertions(+), 9 deletions(-) diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c index 96370513e8..995f67d266 100644 --- a/src/backend/executor/spi.c +++ b/src/backend/executor/spi.c @@ -1175,12 +1175,6 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan, { /* Use a random nonconflicting name */ portal = CreateNewPortal(); - - /* - * Make sure the portal doesn't get closed by the user statements we - * execute. - */ - PinPortal(portal); } else { @@ -1419,9 +1413,6 @@ SPI_cursor_close(Portal portal) if (!PortalIsValid(portal)) elog(ERROR, "invalid portal in SPI cursor operation"); - if (portal->portalPinned) - UnpinPortal(portal); - PortalDrop(portal, false); } diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index a326a04fc9..d096f242cd 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -5257,6 +5257,12 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, /* Fetch loop variable's datum entry */ var = (PLpgSQL_variable *) estate->datums[stmt->var->dno]; + /* + * Make sure the portal doesn't get closed by the user statements we + * execute. + */ + PinPortal(portal); + /* * Fetch the initial tuple(s). If prefetching is allowed then we grab a * few more rows to avoid multiple trips through executor startup @@ -5318,6 +5324,8 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, */ SPI_freetuptable(tuptab); + UnpinPortal(portal); + /* * Set the FOUND variable to indicate the result of executing the loop * (namely, whether we looped one or more times). 
This must be set last so From 511585417079b7d52211e09b20de0e0981b6eaa6 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 10 Jan 2018 16:39:13 -0500 Subject: [PATCH 0820/1087] Add tests for PL/pgSQL returning unnamed portals as refcursor Existing tests only covered returning explicitly named portals as refcursor. The unnamed cursor case was recently broken without a test failing. --- src/test/regress/expected/plpgsql.out | 24 ++++++++++++++++++++++++ src/test/regress/sql/plpgsql.sql | 22 ++++++++++++++++++++++ 2 files changed, 46 insertions(+) diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index 4783807ae0..4f9501db00 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -2242,6 +2242,30 @@ drop function sp_id_user(text); -- create table rc_test (a int, b int); copy rc_test from stdin; +create function return_unnamed_refcursor() returns refcursor as $$ +declare + rc refcursor; +begin + open rc for select a from rc_test; + return rc; +end +$$ language plpgsql; +create function use_refcursor(rc refcursor) returns int as $$ +declare + rc refcursor; + x record; +begin + rc := return_unnamed_refcursor(); + fetch next from rc into x; + return x.a; +end +$$ language plpgsql; +select use_refcursor(return_unnamed_refcursor()); + use_refcursor +--------------- + 5 +(1 row) + create function return_refcursor(rc refcursor) returns refcursor as $$ begin open rc for select a from rc_test; diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 768270d467..3914651bf6 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -1910,6 +1910,28 @@ copy rc_test from stdin; 500 1000 \. +create function return_unnamed_refcursor() returns refcursor as $$ +declare + rc refcursor; +begin + open rc for select a from rc_test; + return rc; +end +$$ language plpgsql; + +create function use_refcursor(rc refcursor) returns int as $$ +declare + rc refcursor; + x record; +begin + rc := return_unnamed_refcursor(); + fetch next from rc into x; + return x.a; +end +$$ language plpgsql; + +select use_refcursor(return_unnamed_refcursor()); + create function return_refcursor(rc refcursor) returns refcursor as $$ begin open rc for select a from rc_test; From 70d6226e4fba26765877fc3c2ec6c468d3ff4084 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 12 Dec 2017 10:26:47 -0500 Subject: [PATCH 0821/1087] Use portal pinning in PL/Perl and PL/Python PL/pgSQL "pins" internally generated portals so that user code cannot close them by guessing their names. Add this functionality to PL/Perl and PL/Python as well, preventing users from manually closing cursors created by spi_query and plpy.cursor, respectively. (PL/Tcl does not currently offer any cursor functionality.) 
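For illustration, a minimal sketch of the user-visible effect (not part of
the patch; the function name is made up, an installed PL/Python is assumed,
and unnamed portals are conventionally named "<unnamed portal N>", with N
depending on session history):

    CREATE FUNCTION close_by_guessing() RETURNS void
    LANGUAGE plpythonu AS $$
    cur = plpy.cursor("SELECT g FROM generate_series(1, 10) g")
    cur.fetch(1)
    # The cursor's portal is now pinned, so guessing its internal name and
    # closing it from SQL should fail with something like:
    #   ERROR:  cannot drop pinned portal "<unnamed portal 1>"
    plpy.execute('CLOSE "<unnamed portal 1>"')
    # cur.close() would still succeed, because the PL-level close function
    # unpins the portal before dropping it.
    $$;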
--- src/pl/plperl/plperl.c | 8 ++++++++ src/pl/plpython/plpy_cursorobject.c | 8 ++++++++ 2 files changed, 16 insertions(+) diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c index 41fd0ba421..10feef11cf 100644 --- a/src/pl/plperl/plperl.c +++ b/src/pl/plperl/plperl.c @@ -3406,6 +3406,8 @@ plperl_spi_query(char *query) SPI_result_code_string(SPI_result)); cursor = cstr2sv(portal->name); + PinPortal(portal); + /* Commit the inner transaction, return to outer xact context */ ReleaseCurrentSubTransaction(); MemoryContextSwitchTo(oldcontext); @@ -3469,6 +3471,7 @@ plperl_spi_fetchrow(char *cursor) SPI_cursor_fetch(p, true, 1); if (SPI_processed == 0) { + UnpinPortal(p); SPI_cursor_close(p); row = &PL_sv_undef; } @@ -3520,7 +3523,10 @@ plperl_spi_cursor_close(char *cursor) p = SPI_cursor_find(cursor); if (p) + { + UnpinPortal(p); SPI_cursor_close(p); + } } SV * @@ -3884,6 +3890,8 @@ plperl_spi_query_prepared(char *query, int argc, SV **argv) cursor = cstr2sv(portal->name); + PinPortal(portal); + /* Commit the inner transaction, return to outer xact context */ ReleaseCurrentSubTransaction(); MemoryContextSwitchTo(oldcontext); diff --git a/src/pl/plpython/plpy_cursorobject.c b/src/pl/plpython/plpy_cursorobject.c index 9467f64808..a527585b81 100644 --- a/src/pl/plpython/plpy_cursorobject.c +++ b/src/pl/plpython/plpy_cursorobject.c @@ -151,6 +151,8 @@ PLy_cursor_query(const char *query) cursor->portalname = MemoryContextStrdup(cursor->mcxt, portal->name); + PinPortal(portal); + PLy_spi_subtransaction_commit(oldcontext, oldowner); } PG_CATCH(); @@ -266,6 +268,8 @@ PLy_cursor_plan(PyObject *ob, PyObject *args) cursor->portalname = MemoryContextStrdup(cursor->mcxt, portal->name); + PinPortal(portal); + PLy_spi_subtransaction_commit(oldcontext, oldowner); } PG_CATCH(); @@ -317,7 +321,10 @@ PLy_cursor_dealloc(PyObject *arg) portal = GetPortalByName(cursor->portalname); if (PortalIsValid(portal)) + { + UnpinPortal(portal); SPI_cursor_close(portal); + } cursor->closed = true; } if (cursor->mcxt) @@ -508,6 +515,7 @@ PLy_cursor_close(PyObject *self, PyObject *unused) return NULL; } + UnpinPortal(portal); SPI_cursor_close(portal); cursor->closed = true; } From 3c1e9fd23269849e32c73683a8457fb3095309e3 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 10 Jan 2018 17:13:29 -0500 Subject: [PATCH 0822/1087] Fix sample INSTR() functions in the plpgsql documentation. These functions are stated to be Oracle-compatible, but they weren't. Yugo Nagata noticed that while our code returns zero for a zero or negative fourth parameter (occur_index), Oracle throws an error. Further testing by me showed that there was also a discrepancy in the interpretation of a negative third parameter (beg_index): Oracle thinks that a negative beg_index indicates the last place where the target substring can *begin*, whereas our code thinks it is the last place where the target can *end*. Adjust the sample code to behave like Oracle in both these respects. Also change it to be a CDATA[] section, simplifying copying-and-pasting out of the documentation source file. And fix minor problems in the introductory comment, which wasn't very complete or accurate. Back-patch to all supported branches. Although this patch only touches documentation, we should probably call it out as a bug fix in the next minor release notes, since users who have adopted the functions will likely want to update their versions. 
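As a quick illustration of the corrected semantics, a sketch assuming the
revised sample functions below have been installed (the string literals are
arbitrary; in 'foobarbar', occurrences of 'bar' begin at positions 4 and 7):

    SELECT instr('foobarbar', 'bar', 1, 2);  -- 7, the second occurrence
    SELECT instr('foobarbar', 'bar', -3);    -- 7: the match may now *begin*
                                             -- no later than position
                                             -- 9 + 1 - 3 = 7, as in Oracle;
                                             -- the old sample code returned 4
    SELECT instr('foobarbar', 'bar', 1, 0);  -- now raises "argument '0' is
                                             -- out of range" instead of
                                             -- returning 0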
Yugo Nagata and Tom Lane Discussion: https://postgr.es/m/20171229191705.c0b43a8c.nagata@sraoss.co.jp --- doc/src/sgml/plpgsql.sgml | 82 ++++++++++++++++++--------------------- 1 file changed, 38 insertions(+), 44 deletions(-) diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index 7d23ed437e..ddd054c6cc 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -5647,27 +5647,29 @@ $$ LANGUAGE plpgsql STRICT IMMUTABLE; instr function - + 0 THEN temp_str := substring(string FROM beg_index); - pos := position(string_to_search IN temp_str); + pos := position(string_to_search_for IN temp_str); IF pos = 0 THEN RETURN 0; ELSE RETURN pos + beg_index - 1; END IF; - ELSIF beg_index < 0 THEN - ss_length := char_length(string_to_search); + ELSIF beg_index < 0 THEN + ss_length := char_length(string_to_search_for); length := char_length(string); - beg := length + beg_index - ss_length + 2; + beg := length + 1 + beg_index; - WHILE beg > 0 LOOP + WHILE beg > 0 LOOP temp_str := substring(string FROM beg FOR ss_length); - pos := position(string_to_search IN temp_str); - - IF pos > 0 THEN + IF string_to_search_for = temp_str THEN RETURN beg; END IF; @@ -5709,7 +5709,7 @@ END; $$ LANGUAGE plpgsql STRICT IMMUTABLE; -CREATE FUNCTION instr(string varchar, string_to_search varchar, +CREATE FUNCTION instr(string varchar, string_to_search_for varchar, beg_index integer, occur_index integer) RETURNS integer AS $$ DECLARE @@ -5721,39 +5721,32 @@ DECLARE length integer; ss_length integer; BEGIN - IF beg_index > 0 THEN - beg := beg_index; - temp_str := substring(string FROM beg_index); + IF occur_index <= 0 THEN + RAISE 'argument ''%'' is out of range', occur_index + USING ERRCODE = '22003'; + END IF; + IF beg_index > 0 THEN + beg := beg_index - 1; FOR i IN 1..occur_index LOOP - pos := position(string_to_search IN temp_str); - - IF i = 1 THEN - beg := beg + pos - 1; - ELSE - beg := beg + pos; - END IF; - temp_str := substring(string FROM beg + 1); + pos := position(string_to_search_for IN temp_str); + IF pos = 0 THEN + RETURN 0; + END IF; + beg := beg + pos; END LOOP; - IF pos = 0 THEN - RETURN 0; - ELSE - RETURN beg; - END IF; - ELSIF beg_index < 0 THEN - ss_length := char_length(string_to_search); + RETURN beg; + ELSIF beg_index < 0 THEN + ss_length := char_length(string_to_search_for); length := char_length(string); - beg := length + beg_index - ss_length + 2; + beg := length + 1 + beg_index; - WHILE beg > 0 LOOP + WHILE beg > 0 LOOP temp_str := substring(string FROM beg FOR ss_length); - pos := position(string_to_search IN temp_str); - - IF pos > 0 THEN + IF string_to_search_for = temp_str THEN occur_number := occur_number + 1; - IF occur_number = occur_index THEN RETURN beg; END IF; @@ -5768,6 +5761,7 @@ BEGIN END IF; END; $$ LANGUAGE plpgsql STRICT IMMUTABLE; +]]> From 563a053bdd4b91c5e5560f4bf91220e562326f7d Mon Sep 17 00:00:00 2001 From: Teodor Sigaev Date: Thu, 11 Jan 2018 14:41:14 +0300 Subject: [PATCH 0823/1087] Fix behavior of ~> (cube, int) operator ~> (cube, int) operator was especially designed for knn-gist search. However, it appears that knn-gist search can't work correctly with current behavior of this operator when dataset contains cubes of variable dimensionality. In this case, the same value of second operator argument can point to different dimension depending on dimensionality of particular cube. Such behavior is incompatible with gist indexing of cubes, and knn-gist doesn't work correctly for it. 
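To make the ambiguity concrete, a small sketch (not part of the patch; the
cube literals are illustrative, and the "new" results follow the numbering
the message goes on to describe):

    -- Before this patch, the meaning of ~> 2 depended on the cube's
    -- dimensionality:
    SELECT '(1),(2)'::cube ~> 2;      -- old: 2 (upper bound of dimension 1)
    SELECT '(1,2),(3,4)'::cube ~> 2;  -- old: 2 (lower bound of dimension 2)
    -- With the new numbering, 2*k-1 and 2*k always select the lower and
    -- upper bound of dimension k, so ~> 2 is always dimension 1's upper
    -- bound:
    SELECT '(1),(2)'::cube ~> 2;      -- new: 2
    SELECT '(1,2),(3,4)'::cube ~> 2;  -- new: 3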
This patch changes the behavior of the ~> (cube, int) operator by
introducing a dimension numbering scheme in which the value of the second
argument unambiguously identifies the dimension and the bound it selects.
With the new behavior, this operator can be correctly supported by
KNN-GiST. Relevant changes to the cube operator class are also included.

Back-patch to v9.6, where the operator was introduced. Since the behavior
of the ~> (cube, int) operator changes, dependent entities must be
refreshed after the upgrade: expression indexes using this operator must be
reindexed, materialized views must be rebuilt, and stored procedures and
client code must be revised to use the new behavior correctly. That should
be mentioned in the release notes.

Noticed by: Tomas Vondra
Author: Alexander Korotkov
Reviewed by: Tomas Vondra, Andrey Borodin
Discussion: https://www.postgresql.org/message-id/flat/a9657f6a-b497-36ff-e56-482a2c7e3292@2ndquadrant.com
---
 contrib/cube/cube.c              | 120 +++++++++---
 contrib/cube/expected/cube.out   | 317 +++++++++++++++++++------------
 contrib/cube/expected/cube_2.out | 317 +++++++++++++++++++------------
 contrib/cube/sql/cube.sql        |  35 ++--
 doc/src/sgml/cube.sgml           |   9 +-
 5 files changed, 512 insertions(+), 286 deletions(-)

diff --git a/contrib/cube/cube.c b/contrib/cube/cube.c
index 3e3d83323c..dcc0850aa9 100644
--- a/contrib/cube/cube.c
+++ b/contrib/cube/cube.c
@@ -1337,15 +1337,55 @@ g_cube_distance(PG_FUNCTION_ARGS)

 	if (strategy == CubeKNNDistanceCoord)
 	{
+		/*
+		 * Handle ordering by ~> operator.  See comments of cube_coord_llur()
+		 * for details
+		 */
 		int			coord = PG_GETARG_INT32(1);
+		bool		isLeaf = GistPageIsLeaf(entry->page);

-		if (DIM(cube) == 0)
-			retval = 0.0;
-		else if (IS_POINT(cube))
-			retval = cube->x[(coord - 1) % DIM(cube)];
+		/* 0 is the only unsupported coordinate value */
+		if (coord <= 0)
+			ereport(ERROR,
+					(errcode(ERRCODE_ARRAY_ELEMENT_ERROR),
+					 errmsg("cube index %d is out of bounds", coord)));
+
+		if (coord <= 2 * DIM(cube))
+		{
+			/* dimension index */
+			int			index = (coord - 1) / 2;
+			/* whether this is upper bound (lower bound otherwise) */
+			bool		upper = ((coord - 1) % 2 == 1);
+
+			if (IS_POINT(cube))
+			{
+				retval = cube->x[index];
+			}
+			else
+			{
+				if (isLeaf)
+				{
+					/* For leaf just return required upper/lower bound */
+					if (upper)
+						retval = Max(cube->x[index], cube->x[index + DIM(cube)]);
+					else
+						retval = Min(cube->x[index], cube->x[index + DIM(cube)]);
+				}
+				else
+				{
+					/*
+					 * For non-leaf we should always return lower bound,
+					 * because even upper bound of a child in the subtree can
+					 * be as small as our lower bound.
+					 */
+					retval = Min(cube->x[index], cube->x[index + DIM(cube)]);
+				}
+			}
+		}
 		else
-			retval = Min(cube->x[(coord - 1) % DIM(cube)],
-						 cube->x[(coord - 1) % DIM(cube) + DIM(cube)]);
+		{
+			retval = 0.0;
+		}
 	}
 	else
 	{
@@ -1492,43 +1532,73 @@ cube_coord(PG_FUNCTION_ARGS)
 }


-/*
- * This function works like cube_coord(),
- * but rearranges coordinates of corners to get cube representation
- * in the form of (lower left, upper right).
- * For historical reasons that extension allows us to create cubes in form
- * ((2,1),(1,2)) and instead of normalizing such cube to ((1,1),(2,2)) it
- * stores cube in original way. But to get cubes ordered by one of dimensions
- * directly from the index without extra sort step we need some
- * representation-independent coordinate getter. This function implements it.
+/*----
+ * This function works like cube_coord(), but rearranges coordinates in the
+ * way suitable to support coordinate ordering using KNN-GiST.
For historical + * reasons this extension allows us to create cubes in form ((2,1),(1,2)) and + * instead of normalizing such cube to ((1,1),(2,2)) it stores cube in original + * way. But in order to get cubes ordered by one of dimensions from the index + * without explicit sort step we need this representation-independent coordinate + * getter. Moreover, indexed dataset may contain cubes of different dimensions + * number. Accordingly, this coordinate getter should be able to return + * lower/upper bound for particular dimension independently on number of cube + * dimensions. + * + * Long story short, this function uses following meaning of coordinates: + * # (2 * N - 1) -- lower bound of Nth dimension, + * # (2 * N) -- upper bound of Nth dimension. + * + * When given coordinate exceeds number of cube dimensions, then 0 returned + * (reproducing logic of GiST indexing of variable-length cubes). */ Datum cube_coord_llur(PG_FUNCTION_ARGS) { NDBOX *cube = PG_GETARG_NDBOX_P(0); int coord = PG_GETARG_INT32(1); + bool inverse = false; + float8 result; - if (coord <= 0 || coord > 2 * DIM(cube)) + /* 0 is the only unsupported coordinate value */ + if (coord <= 0) ereport(ERROR, (errcode(ERRCODE_ARRAY_ELEMENT_ERROR), errmsg("cube index %d is out of bounds", coord))); - if (coord <= DIM(cube)) + if (coord <= 2 * DIM(cube)) { + /* dimension index */ + int index = (coord - 1) / 2; + /* whether this is upper bound (lower bound otherwise) */ + bool upper = ((coord - 1) % 2 == 1); + if (IS_POINT(cube)) - PG_RETURN_FLOAT8(cube->x[coord - 1]); + { + result = cube->x[index]; + } else - PG_RETURN_FLOAT8(Min(cube->x[coord - 1], - cube->x[coord - 1 + DIM(cube)])); + { + if (upper) + result = Max(cube->x[index], cube->x[index + DIM(cube)]); + else + result = Min(cube->x[index], cube->x[index + DIM(cube)]); + } } else { - if (IS_POINT(cube)) - PG_RETURN_FLOAT8(cube->x[(coord - 1) % DIM(cube)]); - else - PG_RETURN_FLOAT8(Max(cube->x[coord - 1], - cube->x[coord - 1 - DIM(cube)])); + /* + * Return zero if coordinate is out of bound. That reproduces logic of + * how cubes with low dimension number are expanded during GiST + * indexing. + */ + result = 0.0; } + + /* Inverse value if needed */ + if (inverse) + result = -result; + + PG_RETURN_FLOAT8(result); } /* Increase or decrease box size by a radius in at least n dimensions. */ diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out index c430b4e1f0..c586a73727 100644 --- a/contrib/cube/expected/cube.out +++ b/contrib/cube/expected/cube.out @@ -1532,25 +1532,25 @@ SELECT cube(array[40,50,60], array[10,20,30])~>1; SELECT cube(array[10,20,30], array[40,50,60])~>2; ?column? ---------- - 20 + 40 (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>2; ?column? ---------- - 20 + 40 (1 row) SELECT cube(array[10,20,30], array[40,50,60])~>3; ?column? ---------- - 30 + 20 (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>3; ?column? ---------- - 30 + 20 (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>0; @@ -1558,7 +1558,7 @@ ERROR: cube index 0 is out of bounds SELECT cube(array[40,50,60], array[10,20,30])~>4; ?column? 
---------- - 40 + 50 (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>(-1); @@ -1611,25 +1611,28 @@ SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; (4 rows) RESET enable_bitmapscan; --- kNN with index +-- Test kNN +INSERT INTO test_cube VALUES ('(1,1)'), ('(100000)'), ('(0, 100000)'); -- Some corner cases +SET enable_seqscan = false; +-- Test different metrics SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; c | dist -------------------------+------------------ (337, 455),(240, 359) | 0 + (1, 1) | 140.007142674936 (759, 187),(662, 163) | 162 (948, 1201),(907, 1156) | 772.000647668122 (1444, 403),(1346, 344) | 846 - (369, 1457),(278, 1409) | 909 (5 rows) SELECT *, c <=> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <=> '(100, 100),(500, 500)'::cube LIMIT 5; c | dist -------------------------+------ (337, 455),(240, 359) | 0 + (1, 1) | 99 (759, 187),(662, 163) | 162 (948, 1201),(907, 1156) | 656 (1444, 403),(1346, 344) | 846 - (369, 1457),(278, 1409) | 909 (5 rows) SELECT *, c <#> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <#> '(100, 100),(500, 500)'::cube LIMIT 5; @@ -1637,133 +1640,203 @@ SELECT *, c <#> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c -------------------------+------ (337, 455),(240, 359) | 0 (759, 187),(662, 163) | 162 + (1, 1) | 198 (1444, 403),(1346, 344) | 846 (369, 1457),(278, 1409) | 909 - (948, 1201),(907, 1156) | 1063 (5 rows) --- kNN-based sorting -SELECT * FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by 1st coordinate of lower left corner - c ---------------------------- - (54, 38679),(3, 38602) - (83, 10271),(15, 10265) - (122, 46832),(64, 46762) - (167, 17214),(92, 17184) - (161, 24465),(107, 24374) - (162, 26040),(120, 25963) - (154, 4019),(138, 3990) - (259, 1850),(175, 1820) - (207, 40886),(179, 40879) - (288, 49588),(204, 49571) - (270, 32616),(226, 32607) - (318, 31489),(235, 31404) - (337, 455),(240, 359) - (270, 29508),(264, 29440) - (369, 1457),(278, 1409) +-- Test sorting by coordinates +SELECT c~>1, c FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by left bound + ?column? | c +----------+--------------------------- + 0 | (0, 100000) + 1 | (1, 1) + 3 | (54, 38679),(3, 38602) + 15 | (83, 10271),(15, 10265) + 64 | (122, 46832),(64, 46762) + 92 | (167, 17214),(92, 17184) + 107 | (161, 24465),(107, 24374) + 120 | (162, 26040),(120, 25963) + 138 | (154, 4019),(138, 3990) + 175 | (259, 1850),(175, 1820) + 179 | (207, 40886),(179, 40879) + 204 | (288, 49588),(204, 49571) + 226 | (270, 32616),(226, 32607) + 235 | (318, 31489),(235, 31404) + 240 | (337, 455),(240, 359) (15 rows) -SELECT * FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by 2nd coordinate or upper right corner - c ---------------------------- - (30333, 50),(30273, 6) - (43301, 75),(43227, 43) - (19650, 142),(19630, 51) - (2424, 160),(2424, 81) - (3449, 171),(3354, 108) - (18037, 155),(17941, 109) - (28511, 208),(28479, 114) - (19946, 217),(19941, 118) - (16906, 191),(16816, 139) - (759, 187),(662, 163) - (22684, 266),(22656, 181) - (24423, 255),(24360, 213) - (45989, 249),(45910, 222) - (11399, 377),(11360, 294) - (12162, 389),(12103, 309) +SELECT c~>2, c FROM test_cube ORDER BY c~>2 LIMIT 15; -- ascending by right bound + ?column? 
| c +----------+--------------------------- + 0 | (0, 100000) + 1 | (1, 1) + 54 | (54, 38679),(3, 38602) + 83 | (83, 10271),(15, 10265) + 122 | (122, 46832),(64, 46762) + 154 | (154, 4019),(138, 3990) + 161 | (161, 24465),(107, 24374) + 162 | (162, 26040),(120, 25963) + 167 | (167, 17214),(92, 17184) + 207 | (207, 40886),(179, 40879) + 259 | (259, 1850),(175, 1820) + 270 | (270, 29508),(264, 29440) + 270 | (270, 32616),(226, 32607) + 288 | (288, 49588),(204, 49571) + 318 | (318, 31489),(235, 31404) (15 rows) -SELECT * FROM test_cube ORDER BY c~>1 DESC LIMIT 15; -- descending by 1st coordinate of lower left corner - c -------------------------------- - (50027, 49230),(49951, 49214) - (49980, 35004),(49937, 34963) - (49985, 6436),(49927, 6338) - (49999, 27218),(49908, 27176) - (49954, 1340),(49905, 1294) - (49944, 25163),(49902, 25153) - (49981, 34876),(49898, 34786) - (49957, 43390),(49897, 43384) - (49853, 18504),(49848, 18503) - (49902, 41752),(49818, 41746) - (49907, 30225),(49810, 30158) - (49843, 5175),(49808, 5145) - (49887, 24274),(49805, 24184) - (49847, 7128),(49798, 7067) - (49820, 7990),(49771, 7967) +SELECT c~>3, c FROM test_cube ORDER BY c~>3 LIMIT 15; -- ascending by lower bound + ?column? | c +----------+--------------------------- + 0 | (100000) + 1 | (1, 1) + 6 | (30333, 50),(30273, 6) + 43 | (43301, 75),(43227, 43) + 51 | (19650, 142),(19630, 51) + 81 | (2424, 160),(2424, 81) + 108 | (3449, 171),(3354, 108) + 109 | (18037, 155),(17941, 109) + 114 | (28511, 208),(28479, 114) + 118 | (19946, 217),(19941, 118) + 139 | (16906, 191),(16816, 139) + 163 | (759, 187),(662, 163) + 181 | (22684, 266),(22656, 181) + 213 | (24423, 255),(24360, 213) + 222 | (45989, 249),(45910, 222) (15 rows) -SELECT * FROM test_cube ORDER BY c~>4 DESC LIMIT 15; -- descending by 2nd coordinate or upper right corner - c -------------------------------- - (36311, 50073),(36258, 49987) - (30746, 50040),(30727, 49992) - (2168, 50012),(2108, 49914) - (21551, 49983),(21492, 49885) - (17954, 49975),(17865, 49915) - (3531, 49962),(3463, 49934) - (19128, 49932),(19112, 49849) - (31287, 49923),(31236, 49913) - (43925, 49912),(43888, 49878) - (29261, 49910),(29247, 49818) - (14913, 49873),(14849, 49836) - (20007, 49858),(19921, 49778) - (38266, 49852),(38233, 49844) - (37595, 49849),(37581, 49834) - (46151, 49848),(46058, 49830) +SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper bound + ?column? 
| c +----------+--------------------------- + 0 | (100000) + 1 | (1, 1) + 50 | (30333, 50),(30273, 6) + 75 | (43301, 75),(43227, 43) + 142 | (19650, 142),(19630, 51) + 155 | (18037, 155),(17941, 109) + 160 | (2424, 160),(2424, 81) + 171 | (3449, 171),(3354, 108) + 187 | (759, 187),(662, 163) + 191 | (16906, 191),(16816, 139) + 208 | (28511, 208),(28479, 114) + 217 | (19946, 217),(19941, 118) + 249 | (45989, 249),(45910, 222) + 255 | (24423, 255),(24360, 213) + 266 | (22684, 266),(22656, 181) (15 rows) --- same thing for index with points -CREATE TABLE test_point(c cube); -INSERT INTO test_point(SELECT cube(array[c->1,c->2,c->3,c->4]) FROM test_cube); -CREATE INDEX ON test_point USING gist(c); -SELECT * FROM test_point ORDER BY c~>1, c~>2 LIMIT 15; -- ascending by 1st then by 2nd coordinate - c --------------------------- - (54, 38679, 3, 38602) - (83, 10271, 15, 10265) - (122, 46832, 64, 46762) - (154, 4019, 138, 3990) - (161, 24465, 107, 24374) - (162, 26040, 120, 25963) - (167, 17214, 92, 17184) - (207, 40886, 179, 40879) - (259, 1850, 175, 1820) - (270, 29508, 264, 29440) - (270, 32616, 226, 32607) - (288, 49588, 204, 49571) - (318, 31489, 235, 31404) - (326, 18837, 285, 18817) - (337, 455, 240, 359) +-- Same queries with sequential scan (should give the same results as above) +RESET enable_seqscan; +SET enable_indexscan = OFF; +SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; + c | dist +-------------------------+------------------ + (337, 455),(240, 359) | 0 + (1, 1) | 140.007142674936 + (759, 187),(662, 163) | 162 + (948, 1201),(907, 1156) | 772.000647668122 + (1444, 403),(1346, 344) | 846 +(5 rows) + +SELECT *, c <=> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <=> '(100, 100),(500, 500)'::cube LIMIT 5; + c | dist +-------------------------+------ + (337, 455),(240, 359) | 0 + (1, 1) | 99 + (759, 187),(662, 163) | 162 + (948, 1201),(907, 1156) | 656 + (1444, 403),(1346, 344) | 846 +(5 rows) + +SELECT *, c <#> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <#> '(100, 100),(500, 500)'::cube LIMIT 5; + c | dist +-------------------------+------ + (337, 455),(240, 359) | 0 + (759, 187),(662, 163) | 162 + (1, 1) | 198 + (1444, 403),(1346, 344) | 846 + (369, 1457),(278, 1409) | 909 +(5 rows) + +SELECT c~>1, c FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by left bound + ?column? | c +----------+--------------------------- + 0 | (0, 100000) + 1 | (1, 1) + 3 | (54, 38679),(3, 38602) + 15 | (83, 10271),(15, 10265) + 64 | (122, 46832),(64, 46762) + 92 | (167, 17214),(92, 17184) + 107 | (161, 24465),(107, 24374) + 120 | (162, 26040),(120, 25963) + 138 | (154, 4019),(138, 3990) + 175 | (259, 1850),(175, 1820) + 179 | (207, 40886),(179, 40879) + 204 | (288, 49588),(204, 49571) + 226 | (270, 32616),(226, 32607) + 235 | (318, 31489),(235, 31404) + 240 | (337, 455),(240, 359) +(15 rows) + +SELECT c~>2, c FROM test_cube ORDER BY c~>2 LIMIT 15; -- ascending by right bound + ?column? 
| c +----------+--------------------------- + 0 | (0, 100000) + 1 | (1, 1) + 54 | (54, 38679),(3, 38602) + 83 | (83, 10271),(15, 10265) + 122 | (122, 46832),(64, 46762) + 154 | (154, 4019),(138, 3990) + 161 | (161, 24465),(107, 24374) + 162 | (162, 26040),(120, 25963) + 167 | (167, 17214),(92, 17184) + 207 | (207, 40886),(179, 40879) + 259 | (259, 1850),(175, 1820) + 270 | (270, 29508),(264, 29440) + 270 | (270, 32616),(226, 32607) + 288 | (288, 49588),(204, 49571) + 318 | (318, 31489),(235, 31404) +(15 rows) + +SELECT c~>3, c FROM test_cube ORDER BY c~>3 LIMIT 15; -- ascending by lower bound + ?column? | c +----------+--------------------------- + 0 | (100000) + 1 | (1, 1) + 6 | (30333, 50),(30273, 6) + 43 | (43301, 75),(43227, 43) + 51 | (19650, 142),(19630, 51) + 81 | (2424, 160),(2424, 81) + 108 | (3449, 171),(3354, 108) + 109 | (18037, 155),(17941, 109) + 114 | (28511, 208),(28479, 114) + 118 | (19946, 217),(19941, 118) + 139 | (16906, 191),(16816, 139) + 163 | (759, 187),(662, 163) + 181 | (22684, 266),(22656, 181) + 213 | (24423, 255),(24360, 213) + 222 | (45989, 249),(45910, 222) (15 rows) -SELECT * FROM test_point ORDER BY c~>4 DESC LIMIT 15; -- descending by 1st coordinate - c ------------------------------- - (30746, 50040, 30727, 49992) - (36311, 50073, 36258, 49987) - (3531, 49962, 3463, 49934) - (17954, 49975, 17865, 49915) - (2168, 50012, 2108, 49914) - (31287, 49923, 31236, 49913) - (21551, 49983, 21492, 49885) - (43925, 49912, 43888, 49878) - (19128, 49932, 19112, 49849) - (38266, 49852, 38233, 49844) - (14913, 49873, 14849, 49836) - (37595, 49849, 37581, 49834) - (46151, 49848, 46058, 49830) - (29261, 49910, 29247, 49818) - (19233, 49824, 19185, 49794) +SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper bound + ?column? | c +----------+--------------------------- + 0 | (100000) + 1 | (1, 1) + 50 | (30333, 50),(30273, 6) + 75 | (43301, 75),(43227, 43) + 142 | (19650, 142),(19630, 51) + 155 | (18037, 155),(17941, 109) + 160 | (2424, 160),(2424, 81) + 171 | (3449, 171),(3354, 108) + 187 | (759, 187),(662, 163) + 191 | (16906, 191),(16816, 139) + 208 | (28511, 208),(28479, 114) + 217 | (19946, 217),(19941, 118) + 249 | (45989, 249),(45910, 222) + 255 | (24423, 255),(24360, 213) + 266 | (22684, 266),(22656, 181) (15 rows) +RESET enable_indexscan; diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out index b979c4d6c8..8c75e27b46 100644 --- a/contrib/cube/expected/cube_2.out +++ b/contrib/cube/expected/cube_2.out @@ -1532,25 +1532,25 @@ SELECT cube(array[40,50,60], array[10,20,30])~>1; SELECT cube(array[10,20,30], array[40,50,60])~>2; ?column? ---------- - 20 + 40 (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>2; ?column? ---------- - 20 + 40 (1 row) SELECT cube(array[10,20,30], array[40,50,60])~>3; ?column? ---------- - 30 + 20 (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>3; ?column? ---------- - 30 + 20 (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>0; @@ -1558,7 +1558,7 @@ ERROR: cube index 0 is out of bounds SELECT cube(array[40,50,60], array[10,20,30])~>4; ?column? 
---------- - 40 + 50 (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>(-1); @@ -1611,25 +1611,28 @@ SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; (4 rows) RESET enable_bitmapscan; --- kNN with index +-- Test kNN +INSERT INTO test_cube VALUES ('(1,1)'), ('(100000)'), ('(0, 100000)'); -- Some corner cases +SET enable_seqscan = false; +-- Test different metrics SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; c | dist -------------------------+------------------ (337, 455),(240, 359) | 0 + (1, 1) | 140.007142674936 (759, 187),(662, 163) | 162 (948, 1201),(907, 1156) | 772.000647668122 (1444, 403),(1346, 344) | 846 - (369, 1457),(278, 1409) | 909 (5 rows) SELECT *, c <=> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <=> '(100, 100),(500, 500)'::cube LIMIT 5; c | dist -------------------------+------ (337, 455),(240, 359) | 0 + (1, 1) | 99 (759, 187),(662, 163) | 162 (948, 1201),(907, 1156) | 656 (1444, 403),(1346, 344) | 846 - (369, 1457),(278, 1409) | 909 (5 rows) SELECT *, c <#> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <#> '(100, 100),(500, 500)'::cube LIMIT 5; @@ -1637,133 +1640,203 @@ SELECT *, c <#> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c -------------------------+------ (337, 455),(240, 359) | 0 (759, 187),(662, 163) | 162 + (1, 1) | 198 (1444, 403),(1346, 344) | 846 (369, 1457),(278, 1409) | 909 - (948, 1201),(907, 1156) | 1063 (5 rows) --- kNN-based sorting -SELECT * FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by 1st coordinate of lower left corner - c ---------------------------- - (54, 38679),(3, 38602) - (83, 10271),(15, 10265) - (122, 46832),(64, 46762) - (167, 17214),(92, 17184) - (161, 24465),(107, 24374) - (162, 26040),(120, 25963) - (154, 4019),(138, 3990) - (259, 1850),(175, 1820) - (207, 40886),(179, 40879) - (288, 49588),(204, 49571) - (270, 32616),(226, 32607) - (318, 31489),(235, 31404) - (337, 455),(240, 359) - (270, 29508),(264, 29440) - (369, 1457),(278, 1409) +-- Test sorting by coordinates +SELECT c~>1, c FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by left bound + ?column? | c +----------+--------------------------- + 0 | (0, 100000) + 1 | (1, 1) + 3 | (54, 38679),(3, 38602) + 15 | (83, 10271),(15, 10265) + 64 | (122, 46832),(64, 46762) + 92 | (167, 17214),(92, 17184) + 107 | (161, 24465),(107, 24374) + 120 | (162, 26040),(120, 25963) + 138 | (154, 4019),(138, 3990) + 175 | (259, 1850),(175, 1820) + 179 | (207, 40886),(179, 40879) + 204 | (288, 49588),(204, 49571) + 226 | (270, 32616),(226, 32607) + 235 | (318, 31489),(235, 31404) + 240 | (337, 455),(240, 359) (15 rows) -SELECT * FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by 2nd coordinate or upper right corner - c ---------------------------- - (30333, 50),(30273, 6) - (43301, 75),(43227, 43) - (19650, 142),(19630, 51) - (2424, 160),(2424, 81) - (3449, 171),(3354, 108) - (18037, 155),(17941, 109) - (28511, 208),(28479, 114) - (19946, 217),(19941, 118) - (16906, 191),(16816, 139) - (759, 187),(662, 163) - (22684, 266),(22656, 181) - (24423, 255),(24360, 213) - (45989, 249),(45910, 222) - (11399, 377),(11360, 294) - (12162, 389),(12103, 309) +SELECT c~>2, c FROM test_cube ORDER BY c~>2 LIMIT 15; -- ascending by right bound + ?column? 
| c +----------+--------------------------- + 0 | (0, 100000) + 1 | (1, 1) + 54 | (54, 38679),(3, 38602) + 83 | (83, 10271),(15, 10265) + 122 | (122, 46832),(64, 46762) + 154 | (154, 4019),(138, 3990) + 161 | (161, 24465),(107, 24374) + 162 | (162, 26040),(120, 25963) + 167 | (167, 17214),(92, 17184) + 207 | (207, 40886),(179, 40879) + 259 | (259, 1850),(175, 1820) + 270 | (270, 29508),(264, 29440) + 270 | (270, 32616),(226, 32607) + 288 | (288, 49588),(204, 49571) + 318 | (318, 31489),(235, 31404) (15 rows) -SELECT * FROM test_cube ORDER BY c~>1 DESC LIMIT 15; -- descending by 1st coordinate of lower left corner - c -------------------------------- - (50027, 49230),(49951, 49214) - (49980, 35004),(49937, 34963) - (49985, 6436),(49927, 6338) - (49999, 27218),(49908, 27176) - (49954, 1340),(49905, 1294) - (49944, 25163),(49902, 25153) - (49981, 34876),(49898, 34786) - (49957, 43390),(49897, 43384) - (49853, 18504),(49848, 18503) - (49902, 41752),(49818, 41746) - (49907, 30225),(49810, 30158) - (49843, 5175),(49808, 5145) - (49887, 24274),(49805, 24184) - (49847, 7128),(49798, 7067) - (49820, 7990),(49771, 7967) +SELECT c~>3, c FROM test_cube ORDER BY c~>3 LIMIT 15; -- ascending by lower bound + ?column? | c +----------+--------------------------- + 0 | (100000) + 1 | (1, 1) + 6 | (30333, 50),(30273, 6) + 43 | (43301, 75),(43227, 43) + 51 | (19650, 142),(19630, 51) + 81 | (2424, 160),(2424, 81) + 108 | (3449, 171),(3354, 108) + 109 | (18037, 155),(17941, 109) + 114 | (28511, 208),(28479, 114) + 118 | (19946, 217),(19941, 118) + 139 | (16906, 191),(16816, 139) + 163 | (759, 187),(662, 163) + 181 | (22684, 266),(22656, 181) + 213 | (24423, 255),(24360, 213) + 222 | (45989, 249),(45910, 222) (15 rows) -SELECT * FROM test_cube ORDER BY c~>4 DESC LIMIT 15; -- descending by 2nd coordinate or upper right corner - c -------------------------------- - (36311, 50073),(36258, 49987) - (30746, 50040),(30727, 49992) - (2168, 50012),(2108, 49914) - (21551, 49983),(21492, 49885) - (17954, 49975),(17865, 49915) - (3531, 49962),(3463, 49934) - (19128, 49932),(19112, 49849) - (31287, 49923),(31236, 49913) - (43925, 49912),(43888, 49878) - (29261, 49910),(29247, 49818) - (14913, 49873),(14849, 49836) - (20007, 49858),(19921, 49778) - (38266, 49852),(38233, 49844) - (37595, 49849),(37581, 49834) - (46151, 49848),(46058, 49830) +SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper bound + ?column? 
| c +----------+--------------------------- + 0 | (100000) + 1 | (1, 1) + 50 | (30333, 50),(30273, 6) + 75 | (43301, 75),(43227, 43) + 142 | (19650, 142),(19630, 51) + 155 | (18037, 155),(17941, 109) + 160 | (2424, 160),(2424, 81) + 171 | (3449, 171),(3354, 108) + 187 | (759, 187),(662, 163) + 191 | (16906, 191),(16816, 139) + 208 | (28511, 208),(28479, 114) + 217 | (19946, 217),(19941, 118) + 249 | (45989, 249),(45910, 222) + 255 | (24423, 255),(24360, 213) + 266 | (22684, 266),(22656, 181) (15 rows) --- same thing for index with points -CREATE TABLE test_point(c cube); -INSERT INTO test_point(SELECT cube(array[c->1,c->2,c->3,c->4]) FROM test_cube); -CREATE INDEX ON test_point USING gist(c); -SELECT * FROM test_point ORDER BY c~>1, c~>2 LIMIT 15; -- ascending by 1st then by 2nd coordinate - c --------------------------- - (54, 38679, 3, 38602) - (83, 10271, 15, 10265) - (122, 46832, 64, 46762) - (154, 4019, 138, 3990) - (161, 24465, 107, 24374) - (162, 26040, 120, 25963) - (167, 17214, 92, 17184) - (207, 40886, 179, 40879) - (259, 1850, 175, 1820) - (270, 29508, 264, 29440) - (270, 32616, 226, 32607) - (288, 49588, 204, 49571) - (318, 31489, 235, 31404) - (326, 18837, 285, 18817) - (337, 455, 240, 359) +-- Same queries with sequential scan (should give the same results as above) +RESET enable_seqscan; +SET enable_indexscan = OFF; +SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; + c | dist +-------------------------+------------------ + (337, 455),(240, 359) | 0 + (1, 1) | 140.007142674936 + (759, 187),(662, 163) | 162 + (948, 1201),(907, 1156) | 772.000647668122 + (1444, 403),(1346, 344) | 846 +(5 rows) + +SELECT *, c <=> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <=> '(100, 100),(500, 500)'::cube LIMIT 5; + c | dist +-------------------------+------ + (337, 455),(240, 359) | 0 + (1, 1) | 99 + (759, 187),(662, 163) | 162 + (948, 1201),(907, 1156) | 656 + (1444, 403),(1346, 344) | 846 +(5 rows) + +SELECT *, c <#> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <#> '(100, 100),(500, 500)'::cube LIMIT 5; + c | dist +-------------------------+------ + (337, 455),(240, 359) | 0 + (759, 187),(662, 163) | 162 + (1, 1) | 198 + (1444, 403),(1346, 344) | 846 + (369, 1457),(278, 1409) | 909 +(5 rows) + +SELECT c~>1, c FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by left bound + ?column? | c +----------+--------------------------- + 0 | (0, 100000) + 1 | (1, 1) + 3 | (54, 38679),(3, 38602) + 15 | (83, 10271),(15, 10265) + 64 | (122, 46832),(64, 46762) + 92 | (167, 17214),(92, 17184) + 107 | (161, 24465),(107, 24374) + 120 | (162, 26040),(120, 25963) + 138 | (154, 4019),(138, 3990) + 175 | (259, 1850),(175, 1820) + 179 | (207, 40886),(179, 40879) + 204 | (288, 49588),(204, 49571) + 226 | (270, 32616),(226, 32607) + 235 | (318, 31489),(235, 31404) + 240 | (337, 455),(240, 359) +(15 rows) + +SELECT c~>2, c FROM test_cube ORDER BY c~>2 LIMIT 15; -- ascending by right bound + ?column? 
| c +----------+--------------------------- + 0 | (0, 100000) + 1 | (1, 1) + 54 | (54, 38679),(3, 38602) + 83 | (83, 10271),(15, 10265) + 122 | (122, 46832),(64, 46762) + 154 | (154, 4019),(138, 3990) + 161 | (161, 24465),(107, 24374) + 162 | (162, 26040),(120, 25963) + 167 | (167, 17214),(92, 17184) + 207 | (207, 40886),(179, 40879) + 259 | (259, 1850),(175, 1820) + 270 | (270, 29508),(264, 29440) + 270 | (270, 32616),(226, 32607) + 288 | (288, 49588),(204, 49571) + 318 | (318, 31489),(235, 31404) +(15 rows) + +SELECT c~>3, c FROM test_cube ORDER BY c~>3 LIMIT 15; -- ascending by lower bound + ?column? | c +----------+--------------------------- + 0 | (100000) + 1 | (1, 1) + 6 | (30333, 50),(30273, 6) + 43 | (43301, 75),(43227, 43) + 51 | (19650, 142),(19630, 51) + 81 | (2424, 160),(2424, 81) + 108 | (3449, 171),(3354, 108) + 109 | (18037, 155),(17941, 109) + 114 | (28511, 208),(28479, 114) + 118 | (19946, 217),(19941, 118) + 139 | (16906, 191),(16816, 139) + 163 | (759, 187),(662, 163) + 181 | (22684, 266),(22656, 181) + 213 | (24423, 255),(24360, 213) + 222 | (45989, 249),(45910, 222) (15 rows) -SELECT * FROM test_point ORDER BY c~>4 DESC LIMIT 15; -- descending by 1st coordinate - c ------------------------------- - (30746, 50040, 30727, 49992) - (36311, 50073, 36258, 49987) - (3531, 49962, 3463, 49934) - (17954, 49975, 17865, 49915) - (2168, 50012, 2108, 49914) - (31287, 49923, 31236, 49913) - (21551, 49983, 21492, 49885) - (43925, 49912, 43888, 49878) - (19128, 49932, 19112, 49849) - (38266, 49852, 38233, 49844) - (14913, 49873, 14849, 49836) - (37595, 49849, 37581, 49834) - (46151, 49848, 46058, 49830) - (29261, 49910, 29247, 49818) - (19233, 49824, 19185, 49794) +SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper bound + ?column? 
| c +----------+--------------------------- + 0 | (100000) + 1 | (1, 1) + 50 | (30333, 50),(30273, 6) + 75 | (43301, 75),(43227, 43) + 142 | (19650, 142),(19630, 51) + 155 | (18037, 155),(17941, 109) + 160 | (2424, 160),(2424, 81) + 171 | (3449, 171),(3354, 108) + 187 | (759, 187),(662, 163) + 191 | (16906, 191),(16816, 139) + 208 | (28511, 208),(28479, 114) + 217 | (19946, 217),(19941, 118) + 249 | (45989, 249),(45910, 222) + 255 | (24423, 255),(24360, 213) + 266 | (22684, 266),(22656, 181) (15 rows) +RESET enable_indexscan; diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql index eb24576895..efa1dbe9e8 100644 --- a/contrib/cube/sql/cube.sql +++ b/contrib/cube/sql/cube.sql @@ -389,20 +389,29 @@ SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; SELECT c FROM test_cube WHERE c <@ '(3000,1000),(0,0)' ORDER BY c; RESET enable_bitmapscan; --- kNN with index +-- Test kNN +INSERT INTO test_cube VALUES ('(1,1)'), ('(100000)'), ('(0, 100000)'); -- Some corner cases +SET enable_seqscan = false; + +-- Test different metrics SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; SELECT *, c <=> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <=> '(100, 100),(500, 500)'::cube LIMIT 5; SELECT *, c <#> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <#> '(100, 100),(500, 500)'::cube LIMIT 5; --- kNN-based sorting -SELECT * FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by 1st coordinate of lower left corner -SELECT * FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by 2nd coordinate or upper right corner -SELECT * FROM test_cube ORDER BY c~>1 DESC LIMIT 15; -- descending by 1st coordinate of lower left corner -SELECT * FROM test_cube ORDER BY c~>4 DESC LIMIT 15; -- descending by 2nd coordinate or upper right corner - --- same thing for index with points -CREATE TABLE test_point(c cube); -INSERT INTO test_point(SELECT cube(array[c->1,c->2,c->3,c->4]) FROM test_cube); -CREATE INDEX ON test_point USING gist(c); -SELECT * FROM test_point ORDER BY c~>1, c~>2 LIMIT 15; -- ascending by 1st then by 2nd coordinate -SELECT * FROM test_point ORDER BY c~>4 DESC LIMIT 15; -- descending by 1st coordinate +-- Test sorting by coordinates +SELECT c~>1, c FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by left bound +SELECT c~>2, c FROM test_cube ORDER BY c~>2 LIMIT 15; -- ascending by right bound +SELECT c~>3, c FROM test_cube ORDER BY c~>3 LIMIT 15; -- ascending by lower bound +SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper bound + +-- Same queries with sequential scan (should give the same results as above) +RESET enable_seqscan; +SET enable_indexscan = OFF; +SELECT *, c <-> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <-> '(100, 100),(500, 500)'::cube LIMIT 5; +SELECT *, c <=> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <=> '(100, 100),(500, 500)'::cube LIMIT 5; +SELECT *, c <#> '(100, 100),(500, 500)'::cube as dist FROM test_cube ORDER BY c <#> '(100, 100),(500, 500)'::cube LIMIT 5; +SELECT c~>1, c FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by left bound +SELECT c~>2, c FROM test_cube ORDER BY c~>2 LIMIT 15; -- ascending by right bound +SELECT c~>3, c FROM test_cube ORDER BY c~>3 LIMIT 15; -- ascending by lower bound +SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper bound +RESET enable_indexscan; diff --git a/doc/src/sgml/cube.sgml b/doc/src/sgml/cube.sgml index 
b995dc7e2a..9cda8cac97 100644
--- a/doc/src/sgml/cube.sgml
+++ b/doc/src/sgml/cube.sgml
@@ -186,10 +186,11 @@
        a ~> n
        float8
-       Get n-th coordinate in normalized cube
-        representation, in which the coordinates have been rearranged into
-        the form lower left — upper right; that is, the
-        smaller endpoint along each dimension appears first.
+       Get n-th coordinate of cube in following way:
+        n = 2 * k - 1 means lower bound of k-th
+        dimension, n = 2 * k means upper bound of
+        k-th dimension.  This operator is designed
+        for KNN-GiST support.
From f50c80dbb17efa39c169f6c510e9464486ff5edc Mon Sep 17 00:00:00 2001
From: Teodor Sigaev
Date: Thu, 11 Jan 2018 14:49:36 +0300
Subject: [PATCH 0824/1087] Allow negative coordinate for ~> (cube, int)
 operator

The ~> (cube, int) operator was specifically designed for KNN-GiST search.
However, KNN-GiST supports only ascending ordering of results. Nevertheless,
it would be useful to support descending ordering by the ~> (cube, int)
operator as well. We provide a workaround for that: a negative coordinate
gives the negated value of the corresponding cube bound. Therefore, a KNN
search using a negative coordinate gives the effect of descending ordering
by that cube bound.

Author: Alexander Korotkov
Reviewed by: Tomas Vondra, Andrey Borodin
Discussion: https://www.postgresql.org/message-id/flat/a9657f6a-b497-36ff-e56-482a2c7e3292@2ndquadrant.com
---
 contrib/cube/cube.c              |  44 ++++++--
 contrib/cube/expected/cube.out   | 168 ++++++++++++++++++++++++++++++-
 contrib/cube/expected/cube_2.out | 168 ++++++++++++++++++++++++++++++-
 contrib/cube/sql/cube.sql        |   8 ++
 doc/src/sgml/cube.sgml           |   5 +-
 5 files changed, 379 insertions(+), 14 deletions(-)

diff --git a/contrib/cube/cube.c b/contrib/cube/cube.c
index dcc0850aa9..d96ca1ec1f 100644
--- a/contrib/cube/cube.c
+++ b/contrib/cube/cube.c
@@ -1343,12 +1343,20 @@ g_cube_distance(PG_FUNCTION_ARGS)
 		 */
 		int			coord = PG_GETARG_INT32(1);
 		bool		isLeaf = GistPageIsLeaf(entry->page);
+		bool		inverse = false;

 		/* 0 is the only unsupported coordinate value */
-		if (coord <= 0)
+		if (coord == 0)
 			ereport(ERROR,
 					(errcode(ERRCODE_ARRAY_ELEMENT_ERROR),
-					 errmsg("cube index %d is out of bounds", coord)));
+					 errmsg("zero cube index is not defined")));
+
+		/* Return inversed value for negative coordinate */
+		if (coord < 0)
+		{
+			coord = -coord;
+			inverse = true;
+		}

 		if (coord <= 2 * DIM(cube))
 		{
@@ -1376,9 +1384,14 @@ g_cube_distance(PG_FUNCTION_ARGS)
 					/*
 					 * For non-leaf we should always return lower bound,
 					 * because even upper bound of a child in the subtree can
-					 * be as small as our lower bound.
+					 * be as small as our lower bound.  For inversed case we
+					 * return upper bound because it becomes lower bound for
+					 * inversed value.
 					 */
-					retval = Min(cube->x[index], cube->x[index + DIM(cube)]);
+					if (!inverse)
+						retval = Min(cube->x[index], cube->x[index + DIM(cube)]);
+					else
+						retval = Max(cube->x[index], cube->x[index + DIM(cube)]);
 				}
 			}
 		}
@@ -1386,6 +1399,10 @@ g_cube_distance(PG_FUNCTION_ARGS)
 		{
 			retval = 0.0;
 		}
+
+		/* Inverse return value if needed */
+		if (inverse)
+			retval = -retval;
 	}
 	else
 	{
@@ -1542,11 +1559,15 @@ cube_coord(PG_FUNCTION_ARGS)
  * getter.  Moreover, indexed dataset may contain cubes of different dimensions
  * number.  Accordingly, this coordinate getter should be able to return
  * lower/upper bound for particular dimension independently on number of cube
- * dimensions.
+ * dimensions.  Also, KNN-GiST supports only ascending sorting.  In order to
+ * support descending sorting, this function returns inverse of value when
+ * negative coordinate is given.
 *
 * Long story short, this function uses following meaning of coordinates:
 * # (2 * N - 1) -- lower bound of Nth dimension,
- * # (2 * N) -- upper bound of Nth dimension.
+ * # (2 * N) -- upper bound of Nth dimension,
+ * # - (2 * N - 1) -- negative of lower bound of Nth dimension,
+ * # - (2 * N) -- negative of upper bound of Nth dimension.
 *
 * When given coordinate exceeds number of cube dimensions, then 0 returned
 * (reproducing logic of GiST indexing of variable-length cubes).
@@ -1560,10 +1581,17 @@ cube_coord_llur(PG_FUNCTION_ARGS)
 	float8		result;
 
 	/* 0 is the only unsupported coordinate value */
-	if (coord <= 0)
+	if (coord == 0)
 		ereport(ERROR,
 				(errcode(ERRCODE_ARRAY_ELEMENT_ERROR),
-				 errmsg("cube index %d is out of bounds", coord)));
+				 errmsg("zero cube index is not defined")));
+
+	/* Return inversed value for negative coordinate */
+	if (coord < 0)
+	{
+		coord = -coord;
+		inverse = true;
+	}
 
 	if (coord <= 2 * DIM(cube))
 	{
diff --git a/contrib/cube/expected/cube.out b/contrib/cube/expected/cube.out
index c586a73727..6378db3004 100644
--- a/contrib/cube/expected/cube.out
+++ b/contrib/cube/expected/cube.out
@@ -1554,7 +1554,7 @@ SELECT cube(array[40,50,60], array[10,20,30])~>3;
 (1 row)
 
 SELECT cube(array[40,50,60], array[10,20,30])~>0;
-ERROR: cube index 0 is out of bounds
+ERROR: zero cube index is not defined
 SELECT cube(array[40,50,60], array[10,20,30])~>4;
  ?column?
 ----------
@@ -1562,7 +1562,11 @@ SELECT cube(array[40,50,60], array[10,20,30])~>4;
  ?column?
 ----------
 (1 row)
 
 SELECT cube(array[40,50,60], array[10,20,30])~>(-1);
-ERROR: cube index -1 is out of bounds
+ ?column?
+----------
+ -10
+(1 row)
+
 -- Load some example data and build the index
 --
 CREATE TABLE test_cube (c cube);
@@ -1726,6 +1730,86 @@ SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper boun
 266 | (22684, 266),(22656, 181)
 (15 rows)
 
+SELECT c~>(-1), c FROM test_cube ORDER BY c~>(-1) LIMIT 15; -- descending by left bound
+ ?column? | c
+----------+-------------------------------
+ -100000 | (100000)
+ -49951 | (50027, 49230),(49951, 49214)
+ -49937 | (49980, 35004),(49937, 34963)
+ -49927 | (49985, 6436),(49927, 6338)
+ -49908 | (49999, 27218),(49908, 27176)
+ -49905 | (49954, 1340),(49905, 1294)
+ -49902 | (49944, 25163),(49902, 25153)
+ -49898 | (49981, 34876),(49898, 34786)
+ -49897 | (49957, 43390),(49897, 43384)
+ -49848 | (49853, 18504),(49848, 18503)
+ -49818 | (49902, 41752),(49818, 41746)
+ -49810 | (49907, 30225),(49810, 30158)
+ -49808 | (49843, 5175),(49808, 5145)
+ -49805 | (49887, 24274),(49805, 24184)
+ -49798 | (49847, 7128),(49798, 7067)
+(15 rows)
+
+SELECT c~>(-2), c FROM test_cube ORDER BY c~>(-2) LIMIT 15; -- descending by right bound
+ ?column? | c
+----------+-------------------------------
+ -100000 | (100000)
+ -50027 | (50027, 49230),(49951, 49214)
+ -49999 | (49999, 27218),(49908, 27176)
+ -49985 | (49985, 6436),(49927, 6338)
+ -49981 | (49981, 34876),(49898, 34786)
+ -49980 | (49980, 35004),(49937, 34963)
+ -49957 | (49957, 43390),(49897, 43384)
+ -49954 | (49954, 1340),(49905, 1294)
+ -49944 | (49944, 25163),(49902, 25153)
+ -49907 | (49907, 30225),(49810, 30158)
+ -49902 | (49902, 41752),(49818, 41746)
+ -49887 | (49887, 24274),(49805, 24184)
+ -49853 | (49853, 18504),(49848, 18503)
+ -49847 | (49847, 7128),(49798, 7067)
+ -49843 | (49843, 5175),(49808, 5145)
+(15 rows)
+
+SELECT c~>(-3), c FROM test_cube ORDER BY c~>(-3) LIMIT 15; -- descending by lower bound
+ ?column?
| c +----------+------------------------------- + -100000 | (0, 100000) + -49992 | (30746, 50040),(30727, 49992) + -49987 | (36311, 50073),(36258, 49987) + -49934 | (3531, 49962),(3463, 49934) + -49915 | (17954, 49975),(17865, 49915) + -49914 | (2168, 50012),(2108, 49914) + -49913 | (31287, 49923),(31236, 49913) + -49885 | (21551, 49983),(21492, 49885) + -49878 | (43925, 49912),(43888, 49878) + -49849 | (19128, 49932),(19112, 49849) + -49844 | (38266, 49852),(38233, 49844) + -49836 | (14913, 49873),(14849, 49836) + -49834 | (37595, 49849),(37581, 49834) + -49830 | (46151, 49848),(46058, 49830) + -49818 | (29261, 49910),(29247, 49818) +(15 rows) + +SELECT c~>(-4), c FROM test_cube ORDER BY c~>(-4) LIMIT 15; -- descending by upper bound + ?column? | c +----------+------------------------------- + -100000 | (0, 100000) + -50073 | (36311, 50073),(36258, 49987) + -50040 | (30746, 50040),(30727, 49992) + -50012 | (2168, 50012),(2108, 49914) + -49983 | (21551, 49983),(21492, 49885) + -49975 | (17954, 49975),(17865, 49915) + -49962 | (3531, 49962),(3463, 49934) + -49932 | (19128, 49932),(19112, 49849) + -49923 | (31287, 49923),(31236, 49913) + -49912 | (43925, 49912),(43888, 49878) + -49910 | (29261, 49910),(29247, 49818) + -49873 | (14913, 49873),(14849, 49836) + -49858 | (20007, 49858),(19921, 49778) + -49852 | (38266, 49852),(38233, 49844) + -49849 | (37595, 49849),(37581, 49834) +(15 rows) + -- Same queries with sequential scan (should give the same results as above) RESET enable_seqscan; SET enable_indexscan = OFF; @@ -1839,4 +1923,84 @@ SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper boun 266 | (22684, 266),(22656, 181) (15 rows) +SELECT c~>(-1), c FROM test_cube ORDER BY c~>(-1) LIMIT 15; -- descending by left bound + ?column? | c +----------+------------------------------- + -100000 | (100000) + -49951 | (50027, 49230),(49951, 49214) + -49937 | (49980, 35004),(49937, 34963) + -49927 | (49985, 6436),(49927, 6338) + -49908 | (49999, 27218),(49908, 27176) + -49905 | (49954, 1340),(49905, 1294) + -49902 | (49944, 25163),(49902, 25153) + -49898 | (49981, 34876),(49898, 34786) + -49897 | (49957, 43390),(49897, 43384) + -49848 | (49853, 18504),(49848, 18503) + -49818 | (49902, 41752),(49818, 41746) + -49810 | (49907, 30225),(49810, 30158) + -49808 | (49843, 5175),(49808, 5145) + -49805 | (49887, 24274),(49805, 24184) + -49798 | (49847, 7128),(49798, 7067) +(15 rows) + +SELECT c~>(-2), c FROM test_cube ORDER BY c~>(-2) LIMIT 15; -- descending by right bound + ?column? | c +----------+------------------------------- + -100000 | (100000) + -50027 | (50027, 49230),(49951, 49214) + -49999 | (49999, 27218),(49908, 27176) + -49985 | (49985, 6436),(49927, 6338) + -49981 | (49981, 34876),(49898, 34786) + -49980 | (49980, 35004),(49937, 34963) + -49957 | (49957, 43390),(49897, 43384) + -49954 | (49954, 1340),(49905, 1294) + -49944 | (49944, 25163),(49902, 25153) + -49907 | (49907, 30225),(49810, 30158) + -49902 | (49902, 41752),(49818, 41746) + -49887 | (49887, 24274),(49805, 24184) + -49853 | (49853, 18504),(49848, 18503) + -49847 | (49847, 7128),(49798, 7067) + -49843 | (49843, 5175),(49808, 5145) +(15 rows) + +SELECT c~>(-3), c FROM test_cube ORDER BY c~>(-3) LIMIT 15; -- descending by lower bound + ?column? 
| c +----------+------------------------------- + -100000 | (0, 100000) + -49992 | (30746, 50040),(30727, 49992) + -49987 | (36311, 50073),(36258, 49987) + -49934 | (3531, 49962),(3463, 49934) + -49915 | (17954, 49975),(17865, 49915) + -49914 | (2168, 50012),(2108, 49914) + -49913 | (31287, 49923),(31236, 49913) + -49885 | (21551, 49983),(21492, 49885) + -49878 | (43925, 49912),(43888, 49878) + -49849 | (19128, 49932),(19112, 49849) + -49844 | (38266, 49852),(38233, 49844) + -49836 | (14913, 49873),(14849, 49836) + -49834 | (37595, 49849),(37581, 49834) + -49830 | (46151, 49848),(46058, 49830) + -49818 | (29261, 49910),(29247, 49818) +(15 rows) + +SELECT c~>(-4), c FROM test_cube ORDER BY c~>(-4) LIMIT 15; -- descending by upper bound + ?column? | c +----------+------------------------------- + -100000 | (0, 100000) + -50073 | (36311, 50073),(36258, 49987) + -50040 | (30746, 50040),(30727, 49992) + -50012 | (2168, 50012),(2108, 49914) + -49983 | (21551, 49983),(21492, 49885) + -49975 | (17954, 49975),(17865, 49915) + -49962 | (3531, 49962),(3463, 49934) + -49932 | (19128, 49932),(19112, 49849) + -49923 | (31287, 49923),(31236, 49913) + -49912 | (43925, 49912),(43888, 49878) + -49910 | (29261, 49910),(29247, 49818) + -49873 | (14913, 49873),(14849, 49836) + -49858 | (20007, 49858),(19921, 49778) + -49852 | (38266, 49852),(38233, 49844) + -49849 | (37595, 49849),(37581, 49834) +(15 rows) + RESET enable_indexscan; diff --git a/contrib/cube/expected/cube_2.out b/contrib/cube/expected/cube_2.out index 8c75e27b46..75fe405c49 100644 --- a/contrib/cube/expected/cube_2.out +++ b/contrib/cube/expected/cube_2.out @@ -1554,7 +1554,7 @@ SELECT cube(array[40,50,60], array[10,20,30])~>3; (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>0; -ERROR: cube index 0 is out of bounds +ERROR: zero cube index is not defined SELECT cube(array[40,50,60], array[10,20,30])~>4; ?column? ---------- @@ -1562,7 +1562,11 @@ SELECT cube(array[40,50,60], array[10,20,30])~>4; (1 row) SELECT cube(array[40,50,60], array[10,20,30])~>(-1); -ERROR: cube index -1 is out of bounds + ?column? +---------- + -10 +(1 row) + -- Load some example data and build the index -- CREATE TABLE test_cube (c cube); @@ -1726,6 +1730,86 @@ SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper boun 266 | (22684, 266),(22656, 181) (15 rows) +SELECT c~>(-1), c FROM test_cube ORDER BY c~>(-1) LIMIT 15; -- descending by left bound + ?column? | c +----------+------------------------------- + -100000 | (100000) + -49951 | (50027, 49230),(49951, 49214) + -49937 | (49980, 35004),(49937, 34963) + -49927 | (49985, 6436),(49927, 6338) + -49908 | (49999, 27218),(49908, 27176) + -49905 | (49954, 1340),(49905, 1294) + -49902 | (49944, 25163),(49902, 25153) + -49898 | (49981, 34876),(49898, 34786) + -49897 | (49957, 43390),(49897, 43384) + -49848 | (49853, 18504),(49848, 18503) + -49818 | (49902, 41752),(49818, 41746) + -49810 | (49907, 30225),(49810, 30158) + -49808 | (49843, 5175),(49808, 5145) + -49805 | (49887, 24274),(49805, 24184) + -49798 | (49847, 7128),(49798, 7067) +(15 rows) + +SELECT c~>(-2), c FROM test_cube ORDER BY c~>(-2) LIMIT 15; -- descending by right bound + ?column? 
| c +----------+------------------------------- + -100000 | (100000) + -50027 | (50027, 49230),(49951, 49214) + -49999 | (49999, 27218),(49908, 27176) + -49985 | (49985, 6436),(49927, 6338) + -49981 | (49981, 34876),(49898, 34786) + -49980 | (49980, 35004),(49937, 34963) + -49957 | (49957, 43390),(49897, 43384) + -49954 | (49954, 1340),(49905, 1294) + -49944 | (49944, 25163),(49902, 25153) + -49907 | (49907, 30225),(49810, 30158) + -49902 | (49902, 41752),(49818, 41746) + -49887 | (49887, 24274),(49805, 24184) + -49853 | (49853, 18504),(49848, 18503) + -49847 | (49847, 7128),(49798, 7067) + -49843 | (49843, 5175),(49808, 5145) +(15 rows) + +SELECT c~>(-3), c FROM test_cube ORDER BY c~>(-3) LIMIT 15; -- descending by lower bound + ?column? | c +----------+------------------------------- + -100000 | (0, 100000) + -49992 | (30746, 50040),(30727, 49992) + -49987 | (36311, 50073),(36258, 49987) + -49934 | (3531, 49962),(3463, 49934) + -49915 | (17954, 49975),(17865, 49915) + -49914 | (2168, 50012),(2108, 49914) + -49913 | (31287, 49923),(31236, 49913) + -49885 | (21551, 49983),(21492, 49885) + -49878 | (43925, 49912),(43888, 49878) + -49849 | (19128, 49932),(19112, 49849) + -49844 | (38266, 49852),(38233, 49844) + -49836 | (14913, 49873),(14849, 49836) + -49834 | (37595, 49849),(37581, 49834) + -49830 | (46151, 49848),(46058, 49830) + -49818 | (29261, 49910),(29247, 49818) +(15 rows) + +SELECT c~>(-4), c FROM test_cube ORDER BY c~>(-4) LIMIT 15; -- descending by upper bound + ?column? | c +----------+------------------------------- + -100000 | (0, 100000) + -50073 | (36311, 50073),(36258, 49987) + -50040 | (30746, 50040),(30727, 49992) + -50012 | (2168, 50012),(2108, 49914) + -49983 | (21551, 49983),(21492, 49885) + -49975 | (17954, 49975),(17865, 49915) + -49962 | (3531, 49962),(3463, 49934) + -49932 | (19128, 49932),(19112, 49849) + -49923 | (31287, 49923),(31236, 49913) + -49912 | (43925, 49912),(43888, 49878) + -49910 | (29261, 49910),(29247, 49818) + -49873 | (14913, 49873),(14849, 49836) + -49858 | (20007, 49858),(19921, 49778) + -49852 | (38266, 49852),(38233, 49844) + -49849 | (37595, 49849),(37581, 49834) +(15 rows) + -- Same queries with sequential scan (should give the same results as above) RESET enable_seqscan; SET enable_indexscan = OFF; @@ -1839,4 +1923,84 @@ SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper boun 266 | (22684, 266),(22656, 181) (15 rows) +SELECT c~>(-1), c FROM test_cube ORDER BY c~>(-1) LIMIT 15; -- descending by left bound + ?column? | c +----------+------------------------------- + -100000 | (100000) + -49951 | (50027, 49230),(49951, 49214) + -49937 | (49980, 35004),(49937, 34963) + -49927 | (49985, 6436),(49927, 6338) + -49908 | (49999, 27218),(49908, 27176) + -49905 | (49954, 1340),(49905, 1294) + -49902 | (49944, 25163),(49902, 25153) + -49898 | (49981, 34876),(49898, 34786) + -49897 | (49957, 43390),(49897, 43384) + -49848 | (49853, 18504),(49848, 18503) + -49818 | (49902, 41752),(49818, 41746) + -49810 | (49907, 30225),(49810, 30158) + -49808 | (49843, 5175),(49808, 5145) + -49805 | (49887, 24274),(49805, 24184) + -49798 | (49847, 7128),(49798, 7067) +(15 rows) + +SELECT c~>(-2), c FROM test_cube ORDER BY c~>(-2) LIMIT 15; -- descending by right bound + ?column? 
| c +----------+------------------------------- + -100000 | (100000) + -50027 | (50027, 49230),(49951, 49214) + -49999 | (49999, 27218),(49908, 27176) + -49985 | (49985, 6436),(49927, 6338) + -49981 | (49981, 34876),(49898, 34786) + -49980 | (49980, 35004),(49937, 34963) + -49957 | (49957, 43390),(49897, 43384) + -49954 | (49954, 1340),(49905, 1294) + -49944 | (49944, 25163),(49902, 25153) + -49907 | (49907, 30225),(49810, 30158) + -49902 | (49902, 41752),(49818, 41746) + -49887 | (49887, 24274),(49805, 24184) + -49853 | (49853, 18504),(49848, 18503) + -49847 | (49847, 7128),(49798, 7067) + -49843 | (49843, 5175),(49808, 5145) +(15 rows) + +SELECT c~>(-3), c FROM test_cube ORDER BY c~>(-3) LIMIT 15; -- descending by lower bound + ?column? | c +----------+------------------------------- + -100000 | (0, 100000) + -49992 | (30746, 50040),(30727, 49992) + -49987 | (36311, 50073),(36258, 49987) + -49934 | (3531, 49962),(3463, 49934) + -49915 | (17954, 49975),(17865, 49915) + -49914 | (2168, 50012),(2108, 49914) + -49913 | (31287, 49923),(31236, 49913) + -49885 | (21551, 49983),(21492, 49885) + -49878 | (43925, 49912),(43888, 49878) + -49849 | (19128, 49932),(19112, 49849) + -49844 | (38266, 49852),(38233, 49844) + -49836 | (14913, 49873),(14849, 49836) + -49834 | (37595, 49849),(37581, 49834) + -49830 | (46151, 49848),(46058, 49830) + -49818 | (29261, 49910),(29247, 49818) +(15 rows) + +SELECT c~>(-4), c FROM test_cube ORDER BY c~>(-4) LIMIT 15; -- descending by upper bound + ?column? | c +----------+------------------------------- + -100000 | (0, 100000) + -50073 | (36311, 50073),(36258, 49987) + -50040 | (30746, 50040),(30727, 49992) + -50012 | (2168, 50012),(2108, 49914) + -49983 | (21551, 49983),(21492, 49885) + -49975 | (17954, 49975),(17865, 49915) + -49962 | (3531, 49962),(3463, 49934) + -49932 | (19128, 49932),(19112, 49849) + -49923 | (31287, 49923),(31236, 49913) + -49912 | (43925, 49912),(43888, 49878) + -49910 | (29261, 49910),(29247, 49818) + -49873 | (14913, 49873),(14849, 49836) + -49858 | (20007, 49858),(19921, 49778) + -49852 | (38266, 49852),(38233, 49844) + -49849 | (37595, 49849),(37581, 49834) +(15 rows) + RESET enable_indexscan; diff --git a/contrib/cube/sql/cube.sql b/contrib/cube/sql/cube.sql index efa1dbe9e8..f599e7f7c0 100644 --- a/contrib/cube/sql/cube.sql +++ b/contrib/cube/sql/cube.sql @@ -403,6 +403,10 @@ SELECT c~>1, c FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by left bound SELECT c~>2, c FROM test_cube ORDER BY c~>2 LIMIT 15; -- ascending by right bound SELECT c~>3, c FROM test_cube ORDER BY c~>3 LIMIT 15; -- ascending by lower bound SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper bound +SELECT c~>(-1), c FROM test_cube ORDER BY c~>(-1) LIMIT 15; -- descending by left bound +SELECT c~>(-2), c FROM test_cube ORDER BY c~>(-2) LIMIT 15; -- descending by right bound +SELECT c~>(-3), c FROM test_cube ORDER BY c~>(-3) LIMIT 15; -- descending by lower bound +SELECT c~>(-4), c FROM test_cube ORDER BY c~>(-4) LIMIT 15; -- descending by upper bound -- Same queries with sequential scan (should give the same results as above) RESET enable_seqscan; @@ -414,4 +418,8 @@ SELECT c~>1, c FROM test_cube ORDER BY c~>1 LIMIT 15; -- ascending by left bound SELECT c~>2, c FROM test_cube ORDER BY c~>2 LIMIT 15; -- ascending by right bound SELECT c~>3, c FROM test_cube ORDER BY c~>3 LIMIT 15; -- ascending by lower bound SELECT c~>4, c FROM test_cube ORDER BY c~>4 LIMIT 15; -- ascending by upper bound +SELECT c~>(-1), c FROM test_cube ORDER BY c~>(-1) 
LIMIT 15; -- descending by left bound +SELECT c~>(-2), c FROM test_cube ORDER BY c~>(-2) LIMIT 15; -- descending by right bound +SELECT c~>(-3), c FROM test_cube ORDER BY c~>(-3) LIMIT 15; -- descending by lower bound +SELECT c~>(-4), c FROM test_cube ORDER BY c~>(-4) LIMIT 15; -- descending by upper bound RESET enable_indexscan; diff --git a/doc/src/sgml/cube.sgml b/doc/src/sgml/cube.sgml index 9cda8cac97..e010305d84 100644 --- a/doc/src/sgml/cube.sgml +++ b/doc/src/sgml/cube.sgml @@ -189,8 +189,9 @@ Get n-th coordinate of cube in following way: n = 2 * k - 1 means lower bound of k-th dimension, n = 2 * k means upper bound of - k-th dimension. This operator is designed - for KNN-GiST support. + k-th dimension. Negative + n denotes inversed value of corresponding + positive coordinate. This operator is designed for KNN-GiST support. From 9e945f862633882cae3183d465f321bd8dd591f9 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 11 Jan 2018 08:31:11 -0500 Subject: [PATCH 0825/1087] Fix Latin spelling "c.f." should be "cf.". --- src/backend/catalog/catalog.c | 2 +- src/backend/optimizer/util/clauses.c | 2 +- src/backend/replication/logical/origin.c | 2 +- src/backend/replication/logical/reorderbuffer.c | 4 ++-- src/backend/replication/logical/snapbuild.c | 2 +- src/backend/storage/ipc/procarray.c | 2 +- src/backend/utils/cache/relcache.c | 2 +- src/bin/psql/describe.c | 2 +- 8 files changed, 9 insertions(+), 9 deletions(-) diff --git a/src/backend/catalog/catalog.c b/src/backend/catalog/catalog.c index 8f3cd07fa4..809749add9 100644 --- a/src/backend/catalog/catalog.c +++ b/src/backend/catalog/catalog.c @@ -120,7 +120,7 @@ IsCatalogClass(Oid relid, Form_pg_class reltuple) * this is noticeably cheaper and doesn't require catalog access. * * This test is safe since even an oid wraparound will preserve this - * property (c.f. GetNewObjectId()) and it has the advantage that it works + * property (cf. GetNewObjectId()) and it has the advantage that it works * correctly even if a user decides to create a relation in the pg_catalog * namespace. * ---- diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c index cf38b4eb5e..89f27ce0eb 100644 --- a/src/backend/optimizer/util/clauses.c +++ b/src/backend/optimizer/util/clauses.c @@ -1611,7 +1611,7 @@ contain_leaked_vars_walker(Node *node, void *context) * WHERE CURRENT OF doesn't contain leaky function calls. * Moreover, it is essential that this is considered non-leaky, * since the planner must always generate a TID scan when CURRENT - * OF is present -- c.f. cost_tidscan. + * OF is present -- cf. cost_tidscan. */ return false; diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c index 9a20042a3c..5cc9a955d7 100644 --- a/src/backend/replication/logical/origin.c +++ b/src/backend/replication/logical/origin.c @@ -60,7 +60,7 @@ * all our platforms, but it also simplifies memory ordering concerns * between the remote and local lsn. We use a lwlock instead of a spinlock * so it's less harmful to hold the lock over a WAL write - * (c.f. AdvanceReplicationProgress). + * (cf. AdvanceReplicationProgress). 
* * --------------------------------------------------------------------------- */ diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c index 1208da2972..c72a611a39 100644 --- a/src/backend/replication/logical/reorderbuffer.c +++ b/src/backend/replication/logical/reorderbuffer.c @@ -15,7 +15,7 @@ * they are written to the WAL and is responsible to reassemble them into * toplevel transaction sized pieces. When a transaction is completely * reassembled - signalled by reading the transaction commit record - it - * will then call the output plugin (c.f. ReorderBufferCommit()) with the + * will then call the output plugin (cf. ReorderBufferCommit()) with the * individual changes. The output plugins rely on snapshots built by * snapbuild.c which hands them to us. * @@ -1752,7 +1752,7 @@ ReorderBufferForget(ReorderBuffer *rb, TransactionId xid, XLogRecPtr lsn) /* * Execute invalidations happening outside the context of a decoded * transaction. That currently happens either for xid-less commits - * (c.f. RecordTransactionCommit()) or for invalidations in uninteresting + * (cf. RecordTransactionCommit()) or for invalidations in uninteresting * transactions (via ReorderBufferForget()). */ void diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c index 5b35f22a32..4123cdebcf 100644 --- a/src/backend/replication/logical/snapbuild.c +++ b/src/backend/replication/logical/snapbuild.c @@ -42,7 +42,7 @@ * catalog in a transaction. During normal operation this is achieved by using * CommandIds/cmin/cmax. The problem with that however is that for space * efficiency reasons only one value of that is stored - * (c.f. combocid.c). Since ComboCids are only available in memory we log + * (cf. combocid.c). Since ComboCids are only available in memory we log * additional information which allows us to get the original (cmin, cmax) * pair during visibility checks. Check the reorderbuffer.c's comment above * ResolveCminCmaxDuringDecoding() for details. diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c index 40451c9315..1a00011adc 100644 --- a/src/backend/storage/ipc/procarray.c +++ b/src/backend/storage/ipc/procarray.c @@ -820,7 +820,7 @@ ProcArrayApplyRecoveryInfo(RunningTransactions running) /* * latestObservedXid is at least set to the point where SUBTRANS was - * started up to (c.f. ProcArrayInitRecovery()) or to the biggest xid + * started up to (cf. ProcArrayInitRecovery()) or to the biggest xid * RecordKnownAssignedTransactionIds() was called for. Initialize * subtrans from thereon, up to nextXid - 1. * diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index 28a4483434..00ba33bfb4 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -2511,7 +2511,7 @@ RelationClearRelation(Relation relation, bool rebuild) /* * This shouldn't happen as dropping a relation is intended to be - * impossible if still referenced (c.f. CheckTableNotInUse()). But + * impossible if still referenced (cf. CheckTableNotInUse()). But * if we get here anyway, we can't just delete the relcache entry, * as it possibly could get accessed later (as e.g. the error * might get trapped and handled via a subtransaction rollback). 
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index f2e62946d8..d2787ab41b 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -1722,7 +1722,7 @@ describeOneTableDetails(const char *schemaname, /* * In 9.0+, we have column comments for: relations, views, composite - * types, and foreign tables (c.f. CommentObject() in comment.c). + * types, and foreign tables (cf. CommentObject() in comment.c). */ if (tableinfo.relkind == RELKIND_RELATION || tableinfo.relkind == RELKIND_VIEW || From ca454b9bd34c75995eda4d07c9858f7c22890c2b Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Thu, 11 Jan 2018 11:21:24 -0500 Subject: [PATCH 0826/1087] doc: add JSON acronym Reported-by: torsten.grust@gmail.com Discussion: https://postgr.es/m/20171024201849.1488.71071@wrigleys.postgresql.org --- doc/src/sgml/acronyms.sgml | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/doc/src/sgml/acronyms.sgml b/doc/src/sgml/acronyms.sgml index 6e9fddf404..751c46de6d 100644 --- a/doc/src/sgml/acronyms.sgml +++ b/doc/src/sgml/acronyms.sgml @@ -369,6 +369,16 @@ + + JSON + + + JavaScript Object Notation + + + + LDAP From 9ff4f758ee430dbce0be13ab5da315be52cb6f55 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 11 Jan 2018 11:53:59 -0500 Subject: [PATCH 0827/1087] Cosmetic fix in postgres_fdw.c. Make the forward declaration of estimate_path_cost_size match its actual definition. Tatsuro Yamada Discussion: https://postgr.es/m/96f2f554-1eeb-fe6f-e0db-650771886781@lab.ntt.co.jp --- contrib/postgres_fdw/postgres_fdw.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index 7992ba5852..c6e1211f8f 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -353,8 +353,8 @@ static void postgresGetForeignUpperPaths(PlannerInfo *root, * Helper functions */ static void estimate_path_cost_size(PlannerInfo *root, - RelOptInfo *baserel, - List *join_conds, + RelOptInfo *foreignrel, + List *param_join_conds, List *pathkeys, double *p_rows, int *p_width, Cost *p_startup_cost, Cost *p_total_cost); From 4d41b2e0926548e338d20875729a55d41289f867 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 11 Jan 2018 12:16:18 -0500 Subject: [PATCH 0828/1087] Add QueryEnvironment to ExplainOneQuery_hook's parameter list. This should have been done in commit 18ce3a4ab, which added that parameter to ExplainOneQuery, but it was overlooked. This makes it impossible for a user of the hook to pass the queryEnv down to ExplainOnePlan. It's too late to change this API in v10, I suppose, but fortunately passing NULL to ExplainOnePlan will work in nearly all interesting cases in v10. That might not be true forever, so we'd better fix it. 
Tatsuro Yamada, reviewed by Thomas Munro Discussion: https://postgr.es/m/890e8dd9-c1c7-a422-6892-874f5eaee048@lab.ntt.co.jp --- src/backend/commands/explain.c | 2 +- src/include/commands/explain.h | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 79e6985d0d..41cd47e8bc 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -351,7 +351,7 @@ ExplainOneQuery(Query *query, int cursorOptions, /* if an advisor plugin is present, let it manage things */ if (ExplainOneQuery_hook) (*ExplainOneQuery_hook) (query, cursorOptions, into, es, - queryString, params); + queryString, params, queryEnv); else { PlannedStmt *plan; diff --git a/src/include/commands/explain.h b/src/include/commands/explain.h index dd8abae98a..0c3986ae17 100644 --- a/src/include/commands/explain.h +++ b/src/include/commands/explain.h @@ -53,7 +53,8 @@ typedef void (*ExplainOneQuery_hook_type) (Query *query, IntoClause *into, ExplainState *es, const char *queryString, - ParamListInfo params); + ParamListInfo params, + QueryEnvironment *queryEnv); extern PGDLLIMPORT ExplainOneQuery_hook_type ExplainOneQuery_hook; /* Hook for plugins to get control in explain_get_index_name() */ From bbd3363e128daec0e70952c1bb2f12ab1f6f1292 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 8 Jan 2018 17:32:09 -0500 Subject: [PATCH 0829/1087] Refactor subscription tests to use PostgresNode's wait_for_catchup This was nearly the same code. Extend wait_for_catchup to allow waiting for pg_current_wal_lsn() and use that in the subscription tests. Also change one use in the pg_rewind tests to use this. Also remove some broken code in wait_for_catchup and wait_for_slot_catchup. The error message in case the waiting failed wanted to show the current LSN, but the way it was written never worked. So since nobody ever cared, just remove it. Reviewed-by: Michael Paquier --- src/bin/pg_rewind/RewindTest.pm | 5 +---- src/test/perl/PostgresNode.pm | 22 +++++++++++++++------- src/test/subscription/t/001_rep_changes.pl | 19 +++++-------------- src/test/subscription/t/002_types.pl | 15 ++++----------- src/test/subscription/t/003_constraints.pl | 15 ++++----------- src/test/subscription/t/004_sync.pl | 14 +++----------- src/test/subscription/t/005_encoding.pl | 13 ++----------- src/test/subscription/t/006_rewrite.pl | 17 ++++------------- src/test/subscription/t/007_ddl.pl | 11 +---------- src/test/subscription/t/008_diff_schema.pl | 15 +++------------ 10 files changed, 42 insertions(+), 104 deletions(-) diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm index e6041f38a5..42fd577f21 100644 --- a/src/bin/pg_rewind/RewindTest.pm +++ b/src/bin/pg_rewind/RewindTest.pm @@ -163,10 +163,7 @@ sub promote_standby # up standby # Wait for the standby to receive and write all WAL. - my $wal_received_query = -"SELECT pg_current_wal_lsn() = write_lsn FROM pg_stat_replication WHERE application_name = 'rewind_standby';"; - $node_master->poll_query_until('postgres', $wal_received_query) - or die "Timed out while waiting for standby to receive and write WAL"; + $node_master->wait_for_catchup('rewind_standby', 'write'); # Now promote standby and insert some new data on master, this will put # the master out-of-sync with the standby. 
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm index 80f68df246..1d5ac4ee35 100644 --- a/src/test/perl/PostgresNode.pm +++ b/src/test/perl/PostgresNode.pm @@ -1465,7 +1465,8 @@ sub lsn =item $node->wait_for_catchup(standby_name, mode, target_lsn) -Wait for the node with application_name standby_name (usually from node->name) +Wait for the node with application_name standby_name (usually from node->name, +also works for logical subscriptions) until its replication location in pg_stat_replication equals or passes the upstream's WAL insert point at the time this function is called. By default the replay_lsn is waited for, but 'mode' may be specified to wait for any of @@ -1477,6 +1478,7 @@ poll_query_until timeout. Requires that the 'postgres' db exists and is accessible. target_lsn may be any arbitrary lsn, but is typically $master_node->lsn('insert'). +If omitted, pg_current_wal_lsn() is used. This is not a test. It die()s on failure. @@ -1497,7 +1499,15 @@ sub wait_for_catchup { $standby_name = $standby_name->name; } - die 'target_lsn must be specified' unless defined($target_lsn); + my $lsn_expr; + if (defined($target_lsn)) + { + $lsn_expr = "'$target_lsn'"; + } + else + { + $lsn_expr = 'pg_current_wal_lsn()' + } print "Waiting for replication conn " . $standby_name . "'s " . $mode @@ -1505,10 +1515,9 @@ sub wait_for_catchup . $target_lsn . " on " . $self->name . "\n"; my $query = -qq[SELECT '$target_lsn' <= ${mode}_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = '$standby_name';]; +qq[SELECT $lsn_expr <= ${mode}_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = '$standby_name';]; $self->poll_query_until('postgres', $query) - or die "timed out waiting for catchup, current location is " - . ($self->safe_psql('postgres', $query) || '(unknown)'); + or die "timed out waiting for catchup"; print "done\n"; } @@ -1550,8 +1559,7 @@ sub wait_for_slot_catchup my $query = qq[SELECT '$target_lsn' <= ${mode}_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = '$slot_name';]; $self->poll_query_until('postgres', $query) - or die "timed out waiting for catchup, current location is " - . 
($self->safe_psql('postgres', $query) || '(unknown)'); + or die "timed out waiting for catchup"; print "done\n"; } diff --git a/src/test/subscription/t/001_rep_changes.pl b/src/test/subscription/t/001_rep_changes.pl index 0136c79d4b..e0104cd8d0 100644 --- a/src/test/subscription/t/001_rep_changes.pl +++ b/src/test/subscription/t/001_rep_changes.pl @@ -60,11 +60,7 @@ "CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub, tap_pub_ins_only" ); -# Wait for subscriber to finish initialization -my $caughtup_query = -"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';"; -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # Also wait for initial table sync to finish my $synced_query = @@ -93,8 +89,7 @@ $node_publisher->safe_psql('postgres', "INSERT INTO tab_mixed VALUES (2, 'bar')"); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); $result = $node_subscriber->safe_psql('postgres', "SELECT count(*), min(a), max(a) FROM tab_ins"); @@ -132,9 +127,7 @@ $node_publisher->safe_psql('postgres', "UPDATE tab_full2 SET x = 'bb' WHERE x = 'b'"); -# Wait for subscription to catch up -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); $result = $node_subscriber->safe_psql('postgres', "SELECT count(*), min(a), max(a) FROM tab_full"); @@ -176,8 +169,7 @@ "INSERT INTO tab_ins SELECT generate_series(1001,1100)"); $node_publisher->safe_psql('postgres', "DELETE FROM tab_rep"); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); $result = $node_subscriber->safe_psql('postgres', "SELECT count(*), min(a), max(a) FROM tab_ins"); @@ -200,8 +192,7 @@ ); $node_publisher->safe_psql('postgres', "INSERT INTO tab_full VALUES(0)"); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # note that data are different on provider and subscriber $result = $node_subscriber->safe_psql('postgres', diff --git a/src/test/subscription/t/002_types.pl b/src/test/subscription/t/002_types.pl index 3ca027ecb4..80620416fa 100644 --- a/src/test/subscription/t/002_types.pl +++ b/src/test/subscription/t/002_types.pl @@ -106,11 +106,7 @@ "CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub WITH (slot_name = tap_sub_slot)" ); -# Wait for subscriber to finish initialization -my $caughtup_query = -"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';"; -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # Wait for initial sync to finish as well my $synced_query = @@ -246,8 +242,7 @@ (4, '"yellow horse"=>"moaned"'); )); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # Check the data on subscriber my $result = 
$node_subscriber->safe_psql( @@ -368,8 +363,7 @@ UPDATE tst_hstore SET b = '"also"=>"updated"' WHERE a = 3; )); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # Check the data on subscriber $result = $node_subscriber->safe_psql( @@ -489,8 +483,7 @@ DELETE FROM tst_hstore WHERE a = 1; )); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # Check the data on subscriber $result = $node_subscriber->safe_psql( diff --git a/src/test/subscription/t/003_constraints.pl b/src/test/subscription/t/003_constraints.pl index 06863aef84..6f6805b952 100644 --- a/src/test/subscription/t/003_constraints.pl +++ b/src/test/subscription/t/003_constraints.pl @@ -39,19 +39,14 @@ "CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub WITH (copy_data = false)" ); -# Wait for subscriber to finish initialization -my $caughtup_query = -"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';"; -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); $node_publisher->safe_psql('postgres', "INSERT INTO tab_fk (bid) VALUES (1);"); $node_publisher->safe_psql('postgres', "INSERT INTO tab_fk_ref (id, bid) VALUES (1, 1);"); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # Check data on subscriber my $result = $node_subscriber->safe_psql('postgres', @@ -69,8 +64,7 @@ $node_publisher->safe_psql('postgres', "INSERT INTO tab_fk_ref (id, bid) VALUES (2, 2);"); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # FK is not enforced on subscriber $result = $node_subscriber->safe_psql('postgres', @@ -104,8 +98,7 @@ BEGIN $node_publisher->safe_psql('postgres', "INSERT INTO tab_fk_ref (id, bid) VALUES (10, 10);"); -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # The row should be skipped on subscriber $result = $node_subscriber->safe_psql('postgres', diff --git a/src/test/subscription/t/004_sync.pl b/src/test/subscription/t/004_sync.pl index 05fd2f0e6c..a9a223bdf7 100644 --- a/src/test/subscription/t/004_sync.pl +++ b/src/test/subscription/t/004_sync.pl @@ -37,11 +37,7 @@ "CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub" ); -# Wait for subscriber to finish initialization -my $caughtup_query = -"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';"; -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); # Also wait for initial table sync to finish my $synced_query = @@ -124,9 +120,7 @@ $node_publisher->safe_psql('postgres', "CREATE TABLE tab_rep_next (a) AS SELECT generate_series(1,10)"); -# Wait for subscription to catch up -$node_publisher->poll_query_until('postgres', $caughtup_query) - 
or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); $result = $node_subscriber->safe_psql('postgres', "SELECT count(*) FROM tab_rep_next"); @@ -149,9 +143,7 @@ $node_publisher->safe_psql('postgres', "INSERT INTO tab_rep_next SELECT generate_series(1,10)"); -# Wait for subscription to catch up -$node_publisher->poll_query_until('postgres', $caughtup_query) - or die "Timed out while waiting for subscriber to catch up"; +$node_publisher->wait_for_catchup($appname); $result = $node_subscriber->safe_psql('postgres', "SELECT count(*) FROM tab_rep_next"); diff --git a/src/test/subscription/t/005_encoding.pl b/src/test/subscription/t/005_encoding.pl index 2b0c47c07d..65439f1b28 100644 --- a/src/test/subscription/t/005_encoding.pl +++ b/src/test/subscription/t/005_encoding.pl @@ -5,15 +5,6 @@ use TestLib; use Test::More tests => 1; -sub wait_for_caught_up -{ - my ($node, $appname) = @_; - - $node->poll_query_until('postgres', -"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';" - ) or die "Timed out while waiting for subscriber to catch up"; -} - my $node_publisher = get_new_node('publisher'); $node_publisher->init( allows_streaming => 'logical', @@ -39,7 +30,7 @@ sub wait_for_caught_up "CREATE SUBSCRIPTION mysub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION mypub;" ); -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); # Wait for initial sync to finish as well my $synced_query = @@ -50,7 +41,7 @@ sub wait_for_caught_up $node_publisher->safe_psql('postgres', q{INSERT INTO test1 VALUES (1, E'Mot\xc3\xb6rhead')}); # hand-rolled UTF-8 -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); is( $node_subscriber->safe_psql( 'postgres', q{SELECT a FROM test1 WHERE b = E'Mot\xf6rhead'} diff --git a/src/test/subscription/t/006_rewrite.pl b/src/test/subscription/t/006_rewrite.pl index 5e3211aefa..aa1184c85f 100644 --- a/src/test/subscription/t/006_rewrite.pl +++ b/src/test/subscription/t/006_rewrite.pl @@ -5,15 +5,6 @@ use TestLib; use Test::More tests => 2; -sub wait_for_caught_up -{ - my ($node, $appname) = @_; - - $node->poll_query_until('postgres', -"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';" - ) or die "Timed out while waiting for subscriber to catch up"; -} - my $node_publisher = get_new_node('publisher'); $node_publisher->init(allows_streaming => 'logical'); $node_publisher->start; @@ -35,7 +26,7 @@ sub wait_for_caught_up "CREATE SUBSCRIPTION mysub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION mypub;" ); -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); # Wait for initial sync to finish as well my $synced_query = @@ -45,7 +36,7 @@ sub wait_for_caught_up $node_publisher->safe_psql('postgres', q{INSERT INTO test1 (a, b) VALUES (1, 'one'), (2, 'two');}); -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); is($node_subscriber->safe_psql('postgres', q{SELECT a, b FROM test1}), qq(1|one @@ -57,11 +48,11 @@ sub wait_for_caught_up $node_subscriber->safe_psql('postgres', $ddl2); $node_publisher->safe_psql('postgres', $ddl2); -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); $node_publisher->safe_psql('postgres', q{INSERT INTO test1 (a, b, c) VALUES (3, 'three', 33);}); 
-wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); is($node_subscriber->safe_psql('postgres', q{SELECT a, b, c FROM test1}), qq(1|one|0 diff --git a/src/test/subscription/t/007_ddl.pl b/src/test/subscription/t/007_ddl.pl index 3f36238840..b219bf33dd 100644 --- a/src/test/subscription/t/007_ddl.pl +++ b/src/test/subscription/t/007_ddl.pl @@ -5,15 +5,6 @@ use TestLib; use Test::More tests => 1; -sub wait_for_caught_up -{ - my ($node, $appname) = @_; - - $node->poll_query_until('postgres', -"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';" - ) or die "Timed out while waiting for subscriber to catch up"; -} - my $node_publisher = get_new_node('publisher'); $node_publisher->init(allows_streaming => 'logical'); $node_publisher->start; @@ -35,7 +26,7 @@ sub wait_for_caught_up "CREATE SUBSCRIPTION mysub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION mypub;" ); -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); $node_subscriber->safe_psql('postgres', q{ BEGIN; diff --git a/src/test/subscription/t/008_diff_schema.pl b/src/test/subscription/t/008_diff_schema.pl index b71be6e487..ea31625402 100644 --- a/src/test/subscription/t/008_diff_schema.pl +++ b/src/test/subscription/t/008_diff_schema.pl @@ -5,15 +5,6 @@ use TestLib; use Test::More tests => 3; -sub wait_for_caught_up -{ - my ($node, $appname) = @_; - - $node->poll_query_until('postgres', -"SELECT pg_current_wal_lsn() <= replay_lsn FROM pg_stat_replication WHERE application_name = '$appname';" - ) or die "Timed out while waiting for subscriber to catch up"; -} - # Create publisher node my $node_publisher = get_new_node('publisher'); $node_publisher->init(allows_streaming => 'logical'); @@ -42,7 +33,7 @@ sub wait_for_caught_up "CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub" ); -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); # Also wait for initial table sync to finish my $synced_query = @@ -58,7 +49,7 @@ sub wait_for_caught_up # subscriber didn't change $node_publisher->safe_psql('postgres', "UPDATE test_tab SET b = md5(b)"); -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); $result = $node_subscriber->safe_psql('postgres', "SELECT count(*), count(c), count(d = 999) FROM test_tab"); @@ -70,7 +61,7 @@ sub wait_for_caught_up $node_subscriber->safe_psql('postgres', "UPDATE test_tab SET c = 'epoch'::timestamptz + 987654321 * interval '1s'"); $node_publisher->safe_psql('postgres', "UPDATE test_tab SET b = md5(a::text)"); -wait_for_caught_up($node_publisher, $appname); +$node_publisher->wait_for_catchup($appname); $result = $node_subscriber->safe_psql('postgres', "SELECT count(*), count(extract(epoch from c) = 987654321), count(d = 999) FROM test_tab"); From bdb70c12b3a2e69eec6e51411df60d9f43ecc841 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Thu, 11 Jan 2018 21:50:21 -0500 Subject: [PATCH 0830/1087] C comment: fix "the the" mentions in C comments Reported-by: Christoph Dreis Discussion: https://postgr.es/m/007e01d3519e$2734ca10$759e5e30$@freenet.de Author: Christoph Dreis --- src/backend/optimizer/prep/prepunion.c | 2 +- src/test/regress/expected/triggers.out | 2 +- src/test/regress/sql/triggers.sql | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/optimizer/prep/prepunion.c 
b/src/backend/optimizer/prep/prepunion.c index 95557d750b..e8eeabdc88 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -2502,7 +2502,7 @@ build_child_join_sjinfo(PlannerInfo *root, SpecialJoinInfo *parent_sjinfo, * Find AppendRelInfo structures for all relations specified by relids. * * The AppendRelInfos are returned in an array, which can be pfree'd by the - * caller. *nappinfos is set to the the number of entries in the array. + * caller. *nappinfos is set to the number of entries in the array. */ AppendRelInfo ** find_appinfos_by_relids(PlannerInfo *root, Relids relids, int *nappinfos) diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 85d948741e..49cd7a1338 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -1934,7 +1934,7 @@ $$; -- -- Verify behavior of statement triggers on partition hierarchy with -- transition tables. Tuples should appear to each trigger in the --- format of the the relation the trigger is attached to. +-- format of the relation the trigger is attached to. -- -- set up a partition hierarchy with some different TupleDescriptors create table parent (a text, b int) partition by list (a); diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index 2b2236ed7d..81c632ef7e 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -1409,7 +1409,7 @@ $$; -- -- Verify behavior of statement triggers on partition hierarchy with -- transition tables. Tuples should appear to each trigger in the --- format of the the relation the trigger is attached to. +-- format of the relation the trigger is attached to. -- -- set up a partition hierarchy with some different TupleDescriptors From 49c784ece766781250224a371be14af71e7eda93 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 12 Jan 2018 11:21:42 -0300 Subject: [PATCH 0831/1087] Remove hard-coded schema knowledge about pg_attribute from genbki.pl MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add the ability to label a column's default value in the catalog header, and implement this for pg_attribute. A new function in Catalog.pm is used to fill in a tuple with defaults. The build process will complain loudly if a catalog entry is incomplete, Commit 8137f2c3232 labeled variable length columns for the C preprocessor. Expose that label to genbki.pl so we can exclude those columns from schema macros in a general fashion. Also, format schema macro entries according to their types. This means slightly less code maintenance, but more importantly it's a proving ground for mechanisms intended to be used in later commits. While at it, I (Álvaro) couldn't resist making some changes in genbki.pl: rename some functions to actually indicate their purpose instead of actively misleading onlookers; and don't iterate on the whole of pg_type to find the entry for each catalog row, using a hash instead of an array. 
Author: John Naylor, some changes by Álvaro Herrera Discussion: https://postgr.es/m/CAJVSVGVJHwD8sfDfZW9TbCHWKf=C1YDRM-rF=2JenRU_y+VcFg@mail.gmail.com --- src/backend/catalog/Catalog.pm | 70 ++++++++- src/backend/catalog/genbki.pl | 228 ++++++++++++++--------------- src/include/catalog/genbki.h | 3 + src/include/catalog/pg_attribute.h | 22 +-- 4 files changed, 187 insertions(+), 136 deletions(-) diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm index f18b400bd7..9ced1547f6 100644 --- a/src/backend/catalog/Catalog.pm +++ b/src/backend/catalog/Catalog.pm @@ -37,6 +37,8 @@ sub Catalogs foreach my $input_file (@_) { my %catalog; + my $is_varlen = 0; + $catalog{columns} = []; $catalog{data} = []; @@ -164,7 +166,11 @@ sub Catalogs elsif ($declaring_attributes) { next if (/^{|^$/); - next if (/^#/); + if (/^#/) + { + $is_varlen = 1 if /^#ifdef\s+CATALOG_VARLEN/; + next; + } if (/^}/) { undef $declaring_attributes; @@ -172,8 +178,12 @@ sub Catalogs else { my %column; - my ($atttype, $attname, $attopt) = split /\s+/, $_; - die "parse error ($input_file)" unless $attname; + my @attopts = split /\s+/, $_; + my $atttype = shift @attopts; + my $attname = shift @attopts; + die "parse error ($input_file)" + unless ($attname and $atttype); + if (exists $RENAME_ATTTYPE{$atttype}) { $atttype = $RENAME_ATTTYPE{$atttype}; @@ -181,13 +191,14 @@ sub Catalogs if ($attname =~ /(.*)\[.*\]/) # array attribute { $attname = $1; - $atttype .= '[]'; # variable-length only + $atttype .= '[]'; } $column{type} = $atttype; $column{name} = $attname; + $column{is_varlen} = 1 if $is_varlen; - if (defined $attopt) + foreach my $attopt (@attopts) { if ($attopt eq 'BKI_FORCE_NULL') { @@ -197,11 +208,20 @@ sub Catalogs { $column{forcenotnull} = 1; } + elsif ($attopt =~ /BKI_DEFAULT\((\S+)\)/) + { + $column{default} = $1; + } else { die "unknown column option $attopt on column $attname"; } + + if ($column{forcenull} and $column{forcenotnull}) + { + die "$attname is forced both null and not null"; + } } push @{ $catalog{columns} }, \%column; } @@ -235,6 +255,46 @@ sub SplitDataLine return @result; } +# Fill in default values of a record using the given schema. It's the +# caller's responsibility to specify other values beforehand. +sub AddDefaultValues +{ + my ($row, $schema) = @_; + my @missing_fields; + my $msg; + + foreach my $column (@$schema) + { + my $attname = $column->{name}; + my $atttype = $column->{type}; + + if (defined $row->{$attname}) + { + ; + } + elsif (defined $column->{default}) + { + $row->{$attname} = $column->{default}; + } + else + { + # Failed to find a value. + push @missing_fields, $attname; + } + } + + if (@missing_fields) + { + $msg = "Missing values for: " . join(', ', @missing_fields); + $msg .= "\nShowing other values for context:\n"; + while (my($key, $value) = each %$row) + { + $msg .= "$key => $value, "; + } + } + return $msg; +} + # Rename temporary files to final names. # Call this function with the final file name and the .tmp extension # Note: recommended extension is ".tmp$$", so that parallel make steps diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl index 1d3bbcc532..ed90a02303 100644 --- a/src/backend/catalog/genbki.pl +++ b/src/backend/catalog/genbki.pl @@ -105,7 +105,7 @@ my %schemapg_entries; my @tables_needing_macros; my %regprocoids; -my @types; +my %types; # produce output, one catalog at a time foreach my $catname (@{ $catalogs->{names} }) @@ -119,7 +119,6 @@ . $catalog->{without_oids} . $catalog->{rowtype_oid} . 
"\n"; - my %bki_attr; my @attnames; my $first = 1; @@ -129,7 +128,6 @@ { my $attname = $column->{name}; my $atttype = $column->{type}; - $bki_attr{$attname} = $column; push @attnames, $attname; if (!$first) @@ -211,7 +209,7 @@ { my %type = %bki_values; $type{oid} = $row->{oid}; - push @types, \%type; + $types{ $type{typname} } = \%type; } # Write to postgres.bki @@ -253,28 +251,24 @@ # Generate entries for user attributes. my $attnum = 0; my $priornotnull = 1; - my @user_attrs = @{ $table->{columns} }; - foreach my $attr (@user_attrs) + foreach my $attr (@{ $table->{columns} }) { $attnum++; - my $row = emit_pgattr_row($table_name, $attr, $priornotnull); - $row->{attnum} = $attnum; - $row->{attstattarget} = '-1'; - $priornotnull &= ($row->{attnotnull} eq 't'); + my %row; + $row{attnum} = $attnum; + $row{attrelid} = $table->{relation_oid}; + + morph_row_for_pgattr(\%row, $schema, $attr, $priornotnull); + $priornotnull &= ($row{attnotnull} eq 't'); # If it's bootstrapped, put an entry in postgres.bki. - if ($table->{bootstrap}) - { - bki_insert($row, @attnames); - } + print_bki_insert(\%row, @attnames) if $table->{bootstrap}; # Store schemapg entries for later. - $row = - emit_schemapg_row($row, - grep { $bki_attr{$_}{type} eq 'bool' } @attnames); + morph_row_for_schemapg(\%row, $schema); push @{ $schemapg_entries{$table_name} }, sprintf "{ %s }", - join(', ', grep { defined $_ } @{$row}{@attnames}); + join(', ', grep { defined $_ } @row{@attnames}); } # Generate entries for system attributes. @@ -293,16 +287,18 @@ foreach my $attr (@SYS_ATTRS) { $attnum--; - my $row = emit_pgattr_row($table_name, $attr, 1); - $row->{attnum} = $attnum; - $row->{attstattarget} = '0'; + my %row; + $row{attnum} = $attnum; + $row{attrelid} = $table->{relation_oid}; + $row{attstattarget} = '0'; # Omit the oid column if the catalog doesn't have them next if $table->{without_oids} - && $row->{attname} eq 'oid'; + && $attr->{name} eq 'oid'; - bki_insert($row, @attnames); + morph_row_for_pgattr(\%row, $schema, $attr, 1); + print_bki_insert(\%row, @attnames); } } } @@ -379,130 +375,122 @@ #################### Subroutines ######################## -# Given a system catalog name and a reference to a key-value pair corresponding -# to the name and type of a column, generate a reference to a hash that -# represents a pg_attribute entry. We must also be told whether preceding -# columns were all not-null. -sub emit_pgattr_row +# Given $pgattr_schema (the pg_attribute schema for a catalog sufficient for +# AddDefaultValues), $attr (the description of a catalog row), and +# $priornotnull (whether all prior attributes in this catalog are not null), +# modify the $row hashref for print_bki_insert. This includes setting data +# from the corresponding pg_type element and filling in any default values. +# Any value not handled here must be supplied by caller. +sub morph_row_for_pgattr { - my ($table_name, $attr, $priornotnull) = @_; + my ($row, $pgattr_schema, $attr, $priornotnull) = @_; my $attname = $attr->{name}; my $atttype = $attr->{type}; - my %row; - $row{attrelid} = $catalogs->{$table_name}->{relation_oid}; - $row{attname} = $attname; + $row->{attname} = $attname; - # Adjust type name for arrays: foo[] becomes _foo - # so we can look it up in pg_type - if ($atttype =~ /(.+)\[\]$/) - { - $atttype = '_' . $1; - } + # Adjust type name for arrays: foo[] becomes _foo, so we can look it up in + # pg_type + $atttype = '_' . 
$1 if $atttype =~ /(.+)\[\]$/; # Copy the type data from pg_type, and add some type-dependent items - foreach my $type (@types) - { - if (defined $type->{typname} && $type->{typname} eq $atttype) - { - $row{atttypid} = $type->{oid}; - $row{attlen} = $type->{typlen}; - $row{attbyval} = $type->{typbyval}; - $row{attstorage} = $type->{typstorage}; - $row{attalign} = $type->{typalign}; + my $type = $types{$atttype}; - # set attndims if it's an array type - $row{attndims} = $type->{typcategory} eq 'A' ? '1' : '0'; - $row{attcollation} = $type->{typcollation}; + $row->{atttypid} = $type->{oid}; + $row->{attlen} = $type->{typlen}; + $row->{attbyval} = $type->{typbyval}; + $row->{attstorage} = $type->{typstorage}; + $row->{attalign} = $type->{typalign}; - if (defined $attr->{forcenotnull}) - { - $row{attnotnull} = 't'; - } - elsif (defined $attr->{forcenull}) - { - $row{attnotnull} = 'f'; - } - elsif ($priornotnull) - { + # set attndims if it's an array type + $row->{attndims} = $type->{typcategory} eq 'A' ? '1' : '0'; + $row->{attcollation} = $type->{typcollation}; - # attnotnull will automatically be set if the type is - # fixed-width and prior columns are all NOT NULL --- - # compare DefineAttr in bootstrap.c. oidvector and - # int2vector are also treated as not-nullable. - $row{attnotnull} = - $type->{typname} eq 'oidvector' ? 't' - : $type->{typname} eq 'int2vector' ? 't' - : $type->{typlen} eq 'NAMEDATALEN' ? 't' - : $type->{typlen} > 0 ? 't' - : 'f'; - } - else - { - $row{attnotnull} = 'f'; - } - last; - } + if (defined $attr->{forcenotnull}) + { + $row->{attnotnull} = 't'; + } + elsif (defined $attr->{forcenull}) + { + $row->{attnotnull} = 'f'; } + elsif ($priornotnull) + { - # Add in default values for pg_attribute - my %PGATTR_DEFAULTS = ( - attcacheoff => '-1', - atttypmod => '-1', - atthasdef => 'f', - attidentity => '', - attisdropped => 'f', - attislocal => 't', - attinhcount => '0', - attacl => '_null_', - attoptions => '_null_', - attfdwoptions => '_null_'); - return { %PGATTR_DEFAULTS, %row }; + # attnotnull will automatically be set if the type is + # fixed-width and prior columns are all NOT NULL --- + # compare DefineAttr in bootstrap.c. oidvector and + # int2vector are also treated as not-nullable. + $row->{attnotnull} = + $type->{typname} eq 'oidvector' ? 't' + : $type->{typname} eq 'int2vector' ? 't' + : $type->{typlen} eq 'NAMEDATALEN' ? 't' + : $type->{typlen} > 0 ? 't' + : 'f'; + } + else + { + $row->{attnotnull} = 'f'; + } + + my $error = Catalog::AddDefaultValues($row, $pgattr_schema); + if ($error) + { + die "Failed to form full tuple for pg_attribute: ", $error; + } } # Write a pg_attribute entry to postgres.bki -sub bki_insert +sub print_bki_insert { my $row = shift; my @attnames = @_; my $oid = $row->{oid} ? "OID = $row->{oid} " : ''; - my $bki_values = join ' ', map { $_ eq '' ? '""' : $_ } map $row->{$_}, - @attnames; + my $bki_values = join ' ', @{$row}{@attnames}; printf $bki "insert %s( %s )\n", $oid, $bki_values; } +# Given a row reference, modify it so that it becomes a valid entry for +# a catalog schema declaration in schemapg.h. +# # The field values of a Schema_pg_xxx declaration are similar, but not # quite identical, to the corresponding values in postgres.bki. -sub emit_schemapg_row +sub morph_row_for_schemapg { - my $row = shift; - my @bool_attrs = @_; - - # Replace empty string by zero char constant - $row->{attidentity} ||= '\0'; - - # Supply appropriate quoting for these fields. - $row->{attname} = q|{"| . $row->{attname} . 
q|"}|; - $row->{attstorage} = q|'| . $row->{attstorage} . q|'|; - $row->{attalign} = q|'| . $row->{attalign} . q|'|; - $row->{attidentity} = q|'| . $row->{attidentity} . q|'|; - - # We don't emit initializers for the variable length fields at all. - # Only the fixed-size portions of the descriptors are ever used. - delete $row->{attacl}; - delete $row->{attoptions}; - delete $row->{attfdwoptions}; - - # Expand booleans from 'f'/'t' to 'false'/'true'. - # Some values might be other macros (eg FLOAT4PASSBYVAL), don't change. - foreach my $attr (@bool_attrs) + my $row = shift; + my $pgattr_schema = shift; + + foreach my $column (@$pgattr_schema) { - $row->{$attr} = - $row->{$attr} eq 't' ? 'true' - : $row->{$attr} eq 'f' ? 'false' - : $row->{$attr}; + my $attname = $column->{name}; + my $atttype = $column->{type}; + + # Some data types have special formatting rules. + if ($atttype eq 'name') + { + # add {" ... "} quoting + $row->{$attname} = sprintf(qq'{"%s"}', $row->{$attname}); + } + elsif ($atttype eq 'char') + { + # Replace empty string by zero char constant; add single quotes + $row->{$attname} = '\0' if $row->{$attname} eq q|""|; + $row->{$attname} = sprintf("'%s'", $row->{$attname}); + } + + # Expand booleans from 'f'/'t' to 'false'/'true'. + # Some values might be other macros (eg FLOAT4PASSBYVAL), + # don't change. + elsif ($atttype eq 'bool') + { + $row->{$attname} = 'true' if $row->{$attname} eq 't'; + $row->{$attname} = 'false' if $row->{$attname} eq 'f'; + } + + # We don't emit initializers for the variable length fields at all. + # Only the fixed-size portions of the descriptors are ever used. + delete $row->{$attname} if $column->{is_varlen}; } - return $row; } sub usage diff --git a/src/include/catalog/genbki.h b/src/include/catalog/genbki.h index 59b0f8ed5d..96ac4025de 100644 --- a/src/include/catalog/genbki.h +++ b/src/include/catalog/genbki.h @@ -31,6 +31,9 @@ #define BKI_FORCE_NULL #define BKI_FORCE_NOT_NULL +/* Specifies a default value for a catalog field */ +#define BKI_DEFAULT(value) + /* * This is never defined; it's here only for documentation. * diff --git a/src/include/catalog/pg_attribute.h b/src/include/catalog/pg_attribute.h index 6104254d7b..8159383834 100644 --- a/src/include/catalog/pg_attribute.h +++ b/src/include/catalog/pg_attribute.h @@ -54,7 +54,7 @@ CATALOG(pg_attribute,1249) BKI_BOOTSTRAP BKI_WITHOUT_OIDS BKI_ROWTYPE_OID(75) BK * that no value has been explicitly set for this column, so ANALYZE * should use the default setting. */ - int32 attstattarget; + int32 attstattarget BKI_DEFAULT(-1); /* * attlen is a copy of the typlen field from pg_type for this attribute. @@ -90,7 +90,7 @@ CATALOG(pg_attribute,1249) BKI_BOOTSTRAP BKI_WITHOUT_OIDS BKI_ROWTYPE_OID(75) BK * descriptor, we may then update attcacheoff in the copies. This speeds * up the attribute walking process. */ - int32 attcacheoff; + int32 attcacheoff BKI_DEFAULT(-1); /* * atttypmod records type-specific data supplied at table creation time @@ -98,7 +98,7 @@ CATALOG(pg_attribute,1249) BKI_BOOTSTRAP BKI_WITHOUT_OIDS BKI_ROWTYPE_OID(75) BK * type-specific input and output functions as the third argument. The * value will generally be -1 for types that do not need typmod. 
 */
-	int32		atttypmod;
+	int32		atttypmod BKI_DEFAULT(-1);

 	/*
 	 * attbyval is a copy of the typbyval field from pg_type for this
@@ -131,13 +131,13 @@ CATALOG(pg_attribute,1249) BKI_BOOTSTRAP BKI_WITHOUT_OIDS BKI_ROWTYPE_OID(75) BK
 	bool		attnotnull;

 	/* Has DEFAULT value or not */
-	bool		atthasdef;
+	bool		atthasdef BKI_DEFAULT(f);

 	/* One of the ATTRIBUTE_IDENTITY_* constants below, or '\0' */
-	char		attidentity;
+	char		attidentity BKI_DEFAULT("");

 	/* Is dropped (ie, logically invisible) or not */
-	bool		attisdropped;
+	bool		attisdropped BKI_DEFAULT(f);

 	/*
 	 * This flag specifies whether this column has ever had a local
@@ -148,10 +148,10 @@ CATALOG(pg_attribute,1249) BKI_BOOTSTRAP BKI_WITHOUT_OIDS BKI_ROWTYPE_OID(75) BK
 	 * not dropped by a parent's DROP COLUMN even if this causes the column's
 	 * attinhcount to become zero.
 	 */
-	bool		attislocal;
+	bool		attislocal BKI_DEFAULT(t);

 	/* Number of times inherited from direct parent relation(s) */
-	int32		attinhcount;
+	int32		attinhcount BKI_DEFAULT(0);

 	/* attribute's collation */
 	Oid			attcollation;
@@ -160,13 +160,13 @@ CATALOG(pg_attribute,1249) BKI_BOOTSTRAP BKI_WITHOUT_OIDS BKI_ROWTYPE_OID(75) BK

 	/* NOTE: The following fields are not present in tuple descriptors. */

 	/* Column-level access permissions */
-	aclitem		attacl[1];
+	aclitem		attacl[1] BKI_DEFAULT(_null_);

 	/* Column-level options */
-	text		attoptions[1];
+	text		attoptions[1] BKI_DEFAULT(_null_);

 	/* Column-level FDW options */
-	text		attfdwoptions[1];
+	text		attfdwoptions[1] BKI_DEFAULT(_null_);
 #endif
 } FormData_pg_attribute;

From ca4587f3f94f5c33da6543535f666a9f20f3ef33 Mon Sep 17 00:00:00 2001
From: Michael Meskes
Date: Fri, 12 Jan 2018 15:59:43 +0100
Subject: [PATCH 0832/1087] Fix parsing of compatibility mode argument.

Patch by Ashutosh Sharma
---
 src/interfaces/ecpg/preproc/ecpg.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/interfaces/ecpg/preproc/ecpg.c b/src/interfaces/ecpg/preproc/ecpg.c
index 8a14572261..cd770c8b15 100644
--- a/src/interfaces/ecpg/preproc/ecpg.c
+++ b/src/interfaces/ecpg/preproc/ecpg.c
@@ -198,12 +198,12 @@ main(int argc, char *const argv[])
 			system_includes = true;
 			break;
 		case 'C':
-			if (strncmp(optarg, "INFORMIX", strlen("INFORMIX")) == 0)
+			if (pg_strcasecmp(optarg, "INFORMIX") == 0 || pg_strcasecmp(optarg, "INFORMIX_SE") == 0)
 			{
 				char		pkginclude_path[MAXPGPATH];
 				char		informix_path[MAXPGPATH];

-				compat = (strcmp(optarg, "INFORMIX") == 0) ? ECPG_COMPAT_INFORMIX : ECPG_COMPAT_INFORMIX_SE;
+				compat = (pg_strcasecmp(optarg, "INFORMIX") == 0) ? ECPG_COMPAT_INFORMIX : ECPG_COMPAT_INFORMIX_SE;
 				get_pkginclude_path(my_exec_path, pkginclude_path);
 				snprintf(informix_path, MAXPGPATH, "%s/informix/esql", pkginclude_path);
 				add_include_path(informix_path);

From 90947674fc984f5639e3b1bf013435a023aa713b Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Fri, 12 Jan 2018 12:24:50 -0500
Subject: [PATCH 0833/1087] Fix incorrect handling of subquery pullup in the
 presence of grouping sets.

If we flatten a subquery whose target list contains constants or
expressions, and those output columns are used in GROUPING SET columns,
the planner could do the wrong thing by merging a pulled-up expression
into the surrounding expression during const-simplification.  The late
processing that matches subexpressions to grouping set columns would then
fail to find a match, with the effect that those columns did not go to
null when expected.
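For illustration, a minimal SQL sketch of the failure mode (the one-row
subquery here is invented for brevity; the regression tests added below
exercise the same shape against tenk1 and int8_tbl):

    -- The constant 'foo' is pulled up from the subquery.  In the grouping
    -- set that groups only by "a", column "x" should read as NULL, but the
    -- merged expression could keep producing 'foo'.
    select a, x
      from (select 1 as a, 'foo'::text as x) as t
      group by grouping sets (a, x);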
To fix, wrap such subquery outputs in PlaceHolderVars, ensuring that they preserve their separate identity throughout the planner's expression processing. This is a bit of a band-aid, because the wrapper defeats const-simplification even in places where it would be safe to allow. But a nicer fix would likely be too invasive to back-patch, and the consequences of the missed optimizations probably aren't large in most cases. Back-patch to 9.5 where grouping sets were introduced. Heikki Linnakangas, with small mods and better test cases by me; additional review by Andrew Gierth Discussion: https://postgr.es/m/7dbdcf5c-b5a6-ef89-4958-da212fe10176@iki.fi --- src/backend/optimizer/prep/prepjointree.c | 48 ++++++++++++++++++---- src/test/regress/expected/groupingsets.out | 45 ++++++++++++++++++++ src/test/regress/sql/groupingsets.sql | 20 +++++++++ 3 files changed, 105 insertions(+), 8 deletions(-) diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c index 0e2a220ad0..45d82da459 100644 --- a/src/backend/optimizer/prep/prepjointree.c +++ b/src/backend/optimizer/prep/prepjointree.c @@ -1003,11 +1003,8 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte, /* * The subquery's targetlist items are now in the appropriate form to - * insert into the top query, but if we are under an outer join then - * non-nullable items and lateral references may have to be turned into - * PlaceHolderVars. If we are dealing with an appendrel member then - * anything that's not a simple Var has to be turned into a - * PlaceHolderVar. Set up required context data for pullup_replace_vars. + * insert into the top query, except that we may need to wrap them in + * PlaceHolderVars. Set up required context data for pullup_replace_vars. */ rvcontext.root = root; rvcontext.targetlist = subquery->targetList; @@ -1019,13 +1016,48 @@ pull_up_simple_subquery(PlannerInfo *root, Node *jtnode, RangeTblEntry *rte, rvcontext.relids = NULL; rvcontext.outer_hasSubLinks = &parse->hasSubLinks; rvcontext.varno = varno; - rvcontext.need_phvs = (lowest_nulling_outer_join != NULL || - containing_appendrel != NULL); - rvcontext.wrap_non_vars = (containing_appendrel != NULL); + /* these flags will be set below, if needed */ + rvcontext.need_phvs = false; + rvcontext.wrap_non_vars = false; /* initialize cache array with indexes 0 .. length(tlist) */ rvcontext.rv_cache = palloc0((list_length(subquery->targetList) + 1) * sizeof(Node *)); + /* + * If we are under an outer join then non-nullable items and lateral + * references may have to be turned into PlaceHolderVars. + */ + if (lowest_nulling_outer_join != NULL) + rvcontext.need_phvs = true; + + /* + * If we are dealing with an appendrel member then anything that's not a + * simple Var has to be turned into a PlaceHolderVar. We force this to + * ensure that what we pull up doesn't get merged into a surrounding + * expression during later processing and then fail to match the + * expression actually available from the appendrel. + */ + if (containing_appendrel != NULL) + { + rvcontext.need_phvs = true; + rvcontext.wrap_non_vars = true; + } + + /* + * If the parent query uses grouping sets, we need a PlaceHolderVar for + * anything that's not a simple Var. Again, this ensures that expressions + * retain their separate identity so that they will match grouping set + * columns when appropriate. 
(It'd be sufficient to wrap values used in + * grouping set columns, and do so only in non-aggregated portions of the + * tlist and havingQual, but that would require a lot of infrastructure + * that pullup_replace_vars hasn't currently got.) + */ + if (parse->groupingSets) + { + rvcontext.need_phvs = true; + rvcontext.wrap_non_vars = true; + } + /* * Replace all of the top query's references to the subquery's outputs * with copies of the adjusted subtlist items, being careful not to diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out index 833d515174..cbfdbfd856 100644 --- a/src/test/regress/expected/groupingsets.out +++ b/src/test/regress/expected/groupingsets.out @@ -389,6 +389,51 @@ select g as alias1, g as alias2 3 | (6 rows) +-- check that pulled-up subquery outputs still go to null when appropriate +select four, x + from (select four, ten, 'foo'::text as x from tenk1) as t + group by grouping sets (four, x) + having x = 'foo'; + four | x +------+----- + | foo +(1 row) + +select four, x || 'x' + from (select four, ten, 'foo'::text as x from tenk1) as t + group by grouping sets (four, x) + order by four; + four | ?column? +------+---------- + 0 | + 1 | + 2 | + 3 | + | foox +(5 rows) + +select (x+y)*1, sum(z) + from (select 1 as x, 2 as y, 3 as z) s + group by grouping sets (x+y, x); + ?column? | sum +----------+----- + 3 | 3 + | 3 +(2 rows) + +select x, not x as not_x, q2 from + (select *, q1 = 1 as x from int8_tbl i1) as t + group by grouping sets(x, q2) + order by x, q2; + x | not_x | q2 +---+-------+------------------- + f | t | + | | -4567890123456789 + | | 123 + | | 456 + | | 4567890123456789 +(5 rows) + -- simple rescan tests select a, b, sum(v.x) from (values (1),(2)) v(x), gstest_data(v.x) diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql index 2b4ab692c4..b28d8217c1 100644 --- a/src/test/regress/sql/groupingsets.sql +++ b/src/test/regress/sql/groupingsets.sql @@ -152,6 +152,26 @@ select g as alias1, g as alias2 from generate_series(1,3) g group by alias1, rollup(alias2); +-- check that pulled-up subquery outputs still go to null when appropriate +select four, x + from (select four, ten, 'foo'::text as x from tenk1) as t + group by grouping sets (four, x) + having x = 'foo'; + +select four, x || 'x' + from (select four, ten, 'foo'::text as x from tenk1) as t + group by grouping sets (four, x) + order by four; + +select (x+y)*1, sum(z) + from (select 1 as x, 2 as y, 3 as z) s + group by grouping sets (x+y, x); + +select x, not x as not_x, q2 from + (select *, q1 = 1 as x from int8_tbl i1) as t + group by grouping sets(x, q2) + order by x, q2; + -- simple rescan tests select a, b, sum(v.x) From 680d540502609b422d378a1b8e0c10cac3c60084 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 12 Jan 2018 15:46:37 -0500 Subject: [PATCH 0834/1087] Avoid unnecessary failure in SELECT concurrent with ALTER NO INHERIT. If a query against an inheritance tree runs concurrently with an ALTER TABLE that's disinheriting one of the tree members, it's possible to get a "could not find inherited attribute" error because after obtaining lock on the removed member, make_inh_translation_list sees that its columns have attinhcount=0 and decides they aren't the columns it's looking for. An ideal fix, perhaps, would avoid including such a just-removed member table in the query at all; but there seems no way to accomplish that without adding expensive catalog rechecks or creating a likelihood of deadlocks. 
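For reference, the failing sequence looks like this (a sketch of the
scenario exercised by the isolation test added below, using that test's
parent table p and child table c1):

    -- session 1                        -- session 2
    BEGIN;
    ALTER TABLE c1 NO INHERIT p;
                                        SELECT SUM(a) FROM p;  -- blocks on c1's lock
    COMMIT;
                                        -- formerly could fail with "could not
                                        -- find inherited attribute"; now the
                                        -- query succeeds and still scans c1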
Instead, let's just drop the check on attinhcount. In this way, a query that's included a just-disinherited child will still succeed, which is not a completely unreasonable behavior. This problem has existed for a long time, so back-patch to all supported branches. Also add an isolation test verifying related behaviors. Patch by me; the new isolation test is based on Kyotaro Horiguchi's work. Discussion: https://postgr.es/m/20170626.174612.23936762.horiguchi.kyotaro@lab.ntt.co.jp --- src/backend/optimizer/prep/prepunion.c | 4 +- src/test/isolation/expected/alter-table-4.out | 57 +++++++++++++++++++ src/test/isolation/isolation_schedule | 1 + src/test/isolation/specs/alter-table-4.spec | 37 ++++++++++++ 4 files changed, 97 insertions(+), 2 deletions(-) create mode 100644 src/test/isolation/expected/alter-table-4.out create mode 100644 src/test/isolation/specs/alter-table-4.spec diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index e8eeabdc88..7ef391ffeb 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -1832,7 +1832,7 @@ make_inh_translation_list(Relation oldrelation, Relation newrelation, */ if (old_attno < newnatts && (att = TupleDescAttr(new_tupdesc, old_attno)) != NULL && - !att->attisdropped && att->attinhcount != 0 && + !att->attisdropped && strcmp(attname, NameStr(att->attname)) == 0) new_attno = old_attno; else @@ -1840,7 +1840,7 @@ make_inh_translation_list(Relation oldrelation, Relation newrelation, for (new_attno = 0; new_attno < newnatts; new_attno++) { att = TupleDescAttr(new_tupdesc, new_attno); - if (!att->attisdropped && att->attinhcount != 0 && + if (!att->attisdropped && strcmp(attname, NameStr(att->attname)) == 0) break; } diff --git a/src/test/isolation/expected/alter-table-4.out b/src/test/isolation/expected/alter-table-4.out new file mode 100644 index 0000000000..d2dac0be09 --- /dev/null +++ b/src/test/isolation/expected/alter-table-4.out @@ -0,0 +1,57 @@ +Parsed test spec with 2 sessions + +starting permutation: s1b s1delc1 s2sel s1c s2sel +step s1b: BEGIN; +step s1delc1: ALTER TABLE c1 NO INHERIT p; +step s2sel: SELECT SUM(a) FROM p; +step s1c: COMMIT; +step s2sel: <... completed> +sum + +11 +step s2sel: SELECT SUM(a) FROM p; +sum + +1 + +starting permutation: s1b s1delc1 s1addc2 s2sel s1c s2sel +step s1b: BEGIN; +step s1delc1: ALTER TABLE c1 NO INHERIT p; +step s1addc2: ALTER TABLE c2 INHERIT p; +step s2sel: SELECT SUM(a) FROM p; +step s1c: COMMIT; +step s2sel: <... completed> +sum + +11 +step s2sel: SELECT SUM(a) FROM p; +sum + +101 + +starting permutation: s1b s1dropc1 s2sel s1c s2sel +step s1b: BEGIN; +step s1dropc1: DROP TABLE c1; +step s2sel: SELECT SUM(a) FROM p; +step s1c: COMMIT; +step s2sel: <... completed> +sum + +1 +step s2sel: SELECT SUM(a) FROM p; +sum + +1 + +starting permutation: s1b s1delc1 s1modc1a s2sel s1c s2sel +step s1b: BEGIN; +step s1delc1: ALTER TABLE c1 NO INHERIT p; +step s1modc1a: ALTER TABLE c1 ALTER COLUMN a TYPE float; +step s2sel: SELECT SUM(a) FROM p; +step s1c: COMMIT; +step s2sel: <... 
completed> +error in steps s1c s2sel: ERROR: attribute "a" of relation "c1" does not match parent's type +step s2sel: SELECT SUM(a) FROM p; +sum + +1 diff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule index befe676816..74d7d59546 100644 --- a/src/test/isolation/isolation_schedule +++ b/src/test/isolation/isolation_schedule @@ -59,6 +59,7 @@ test: multiple-cic test: alter-table-1 test: alter-table-2 test: alter-table-3 +test: alter-table-4 test: create-trigger test: sequence-ddl test: async-notify diff --git a/src/test/isolation/specs/alter-table-4.spec b/src/test/isolation/specs/alter-table-4.spec new file mode 100644 index 0000000000..a9c1a93723 --- /dev/null +++ b/src/test/isolation/specs/alter-table-4.spec @@ -0,0 +1,37 @@ +# ALTER TABLE - Add and remove inheritance with concurrent reads + +setup +{ + CREATE TABLE p (a integer); + INSERT INTO p VALUES(1); + CREATE TABLE c1 () INHERITS (p); + INSERT INTO c1 VALUES(10); + CREATE TABLE c2 (a integer); + INSERT INTO c2 VALUES(100); +} + +teardown +{ + DROP TABLE IF EXISTS c1, c2, p; +} + +session "s1" +step "s1b" { BEGIN; } +step "s1delc1" { ALTER TABLE c1 NO INHERIT p; } +step "s1modc1a" { ALTER TABLE c1 ALTER COLUMN a TYPE float; } +step "s1addc2" { ALTER TABLE c2 INHERIT p; } +step "s1dropc1" { DROP TABLE c1; } +step "s1c" { COMMIT; } + +session "s2" +step "s2sel" { SELECT SUM(a) FROM p; } + +# NO INHERIT will not be visible to concurrent select, +# since we identify children before locking them +permutation "s1b" "s1delc1" "s2sel" "s1c" "s2sel" +# adding inheritance likewise is not seen if s1 commits after s2 locks p +permutation "s1b" "s1delc1" "s1addc2" "s2sel" "s1c" "s2sel" +# but we do cope with DROP on a child table +permutation "s1b" "s1dropc1" "s2sel" "s1c" "s2sel" +# this case currently results in an error; doesn't seem worth preventing +permutation "s1b" "s1delc1" "s1modc1a" "s2sel" "s1c" "s2sel" From e9f2703ab7b29f7e9100807cfbd19ddebbaa0b12 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 12 Jan 2018 16:52:49 -0500 Subject: [PATCH 0835/1087] Fix postgres_fdw to cope with duplicate GROUP BY entries. Commit 7012b132d, which added the ability to push down aggregates and grouping to the remote server, wasn't careful to ensure that the remote server would have the same idea we do about which columns are the grouping columns, in cases where there are textually identical GROUP BY expressions. Such cases typically led to "targetlist item has multiple sortgroupref labels" errors. To fix this reliably, switch over to using "GROUP BY column-number" syntax rather than "GROUP BY expression" in transmitted queries, and adjust foreign_grouping_ok() to be more careful about duplicating the sortgroupref labeling of the local pathtarget. Per bug #14890 from Sean Johnston. Back-patch to v10 where the buggy code was introduced. 
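For example, the query below (added to the regression tests) carries two
targetlist entries for the same column, each with its own sortgroupref
label; deparsing the grouping columns by number keeps the remote server's
idea of the GROUP BY in sync with ours:

    select c2, c2 from ft1 where c2 > 6 group by 1, 2 order by sum(c1);
    -- now shipped with a trailing: ... GROUP BY 1, 2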
Jeevan Chalke, reviewed by Ashutosh Bapat Discussion: https://postgr.es/m/20171107134948.1508.94783@wrigleys.postgresql.org --- contrib/postgres_fdw/deparse.c | 17 +- .../postgres_fdw/expected/postgres_fdw.out | 163 ++++++++++-------- contrib/postgres_fdw/postgres_fdw.c | 83 +++++---- contrib/postgres_fdw/sql/postgres_fdw.sql | 6 + 4 files changed, 150 insertions(+), 119 deletions(-) diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c index 96f804a28d..e111b09c7c 100644 --- a/contrib/postgres_fdw/deparse.c +++ b/contrib/postgres_fdw/deparse.c @@ -178,7 +178,7 @@ static void appendGroupByClause(List *tlist, deparse_expr_cxt *context); static void appendAggOrderBy(List *orderList, List *targetList, deparse_expr_cxt *context); static void appendFunctionName(Oid funcid, deparse_expr_cxt *context); -static Node *deparseSortGroupClause(Index ref, List *tlist, +static Node *deparseSortGroupClause(Index ref, List *tlist, bool force_colno, deparse_expr_cxt *context); /* @@ -2853,7 +2853,7 @@ appendAggOrderBy(List *orderList, List *targetList, deparse_expr_cxt *context) first = false; sortexpr = deparseSortGroupClause(srt->tleSortGroupRef, targetList, - context); + false, context); sortcoltype = exprType(sortexpr); /* See whether operator is default < or > for datatype */ typentry = lookup_type_cache(sortcoltype, @@ -2960,7 +2960,7 @@ appendGroupByClause(List *tlist, deparse_expr_cxt *context) appendStringInfoString(buf, ", "); first = false; - deparseSortGroupClause(grp->tleSortGroupRef, tlist, context); + deparseSortGroupClause(grp->tleSortGroupRef, tlist, true, context); } } @@ -3047,7 +3047,8 @@ appendFunctionName(Oid funcid, deparse_expr_cxt *context) * need not find it again. */ static Node * -deparseSortGroupClause(Index ref, List *tlist, deparse_expr_cxt *context) +deparseSortGroupClause(Index ref, List *tlist, bool force_colno, + deparse_expr_cxt *context) { StringInfo buf = context->buf; TargetEntry *tle; @@ -3056,7 +3057,13 @@ deparseSortGroupClause(Index ref, List *tlist, deparse_expr_cxt *context) tle = get_sortgroupref_tle(ref, tlist); expr = tle->expr; - if (expr && IsA(expr, Const)) + if (force_colno) + { + /* Use column-number form when requested by caller. 
*/ + Assert(!tle->resjunk); + appendStringInfo(buf, "%d", tle->resno); + } + else if (expr && IsA(expr, Const)) { /* * Force a typecast here so that we don't emit something like "GROUP diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 683d641fa7..993219133a 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -2462,8 +2462,8 @@ DROP ROLE regress_view_owner; -- Simple aggregates explain (verbose, costs off) select count(c6), sum(c1), avg(c1), min(c2), max(c1), stddev(c2), sum(c1) * (random() <= 1)::int as sum2 from ft1 where c2 < 5 group by c2 order by 1, 2; - QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------------------------------------------------------ Result Output: (count(c6)), (sum(c1)), (avg(c1)), (min(c2)), (max(c1)), (stddev(c2)), ((sum(c1)) * ((random() <= '1'::double precision))::integer), c2 -> Sort @@ -2472,7 +2472,7 @@ select count(c6), sum(c1), avg(c1), min(c2), max(c1), stddev(c2), sum(c1) * (ran -> Foreign Scan Output: (count(c6)), (sum(c1)), (avg(c1)), (min(c2)), (max(c1)), (stddev(c2)), c2 Relations: Aggregate on (public.ft1) - Remote SQL: SELECT count(c6), sum("C 1"), avg("C 1"), min(c2), max("C 1"), stddev(c2), c2 FROM "S 1"."T 1" WHERE ((c2 < 5)) GROUP BY c2 + Remote SQL: SELECT count(c6), sum("C 1"), avg("C 1"), min(c2), max("C 1"), stddev(c2), c2 FROM "S 1"."T 1" WHERE ((c2 < 5)) GROUP BY 7 (9 rows) select count(c6), sum(c1), avg(c1), min(c2), max(c1), stddev(c2), sum(c1) * (random() <= 1)::int as sum2 from ft1 where c2 < 5 group by c2 order by 1, 2; @@ -2531,15 +2531,15 @@ select sum(t1.c1), count(t2.c1) from ft1 t1 inner join ft2 t2 on (t1.c1 = t2.c1) -- GROUP BY clause having expressions explain (verbose, costs off) select c2/2, sum(c2) * (c2/2) from ft1 group by c2/2 order by c2/2; - QUERY PLAN ------------------------------------------------------------------------------------------------- + QUERY PLAN +--------------------------------------------------------------------------------------- Sort Output: ((c2 / 2)), ((sum(c2) * (c2 / 2))) Sort Key: ((ft1.c2 / 2)) -> Foreign Scan Output: ((c2 / 2)), ((sum(c2) * (c2 / 2))) Relations: Aggregate on (public.ft1) - Remote SQL: SELECT (c2 / 2), (sum(c2) * (c2 / 2)) FROM "S 1"."T 1" GROUP BY ((c2 / 2)) + Remote SQL: SELECT (c2 / 2), (sum(c2) * (c2 / 2)) FROM "S 1"."T 1" GROUP BY 1 (7 rows) select c2/2, sum(c2) * (c2/2) from ft1 group by c2/2 order by c2/2; @@ -2555,8 +2555,8 @@ select c2/2, sum(c2) * (c2/2) from ft1 group by c2/2 order by c2/2; -- Aggregates in subquery are pushed down. 
explain (verbose, costs off) select count(x.a), sum(x.a) from (select c2 a, sum(c1) b from ft1 group by c2, sqrt(c1) order by 1, 2) x; - QUERY PLAN ----------------------------------------------------------------------------------------------------------- + QUERY PLAN +--------------------------------------------------------------------------------------------- Aggregate Output: count(ft1.c2), sum(ft1.c2) -> Sort @@ -2565,7 +2565,7 @@ select count(x.a), sum(x.a) from (select c2 a, sum(c1) b from ft1 group by c2, s -> Foreign Scan Output: ft1.c2, (sum(ft1.c1)), (sqrt((ft1.c1)::double precision)) Relations: Aggregate on (public.ft1) - Remote SQL: SELECT c2, sum("C 1"), sqrt("C 1") FROM "S 1"."T 1" GROUP BY c2, (sqrt("C 1")) + Remote SQL: SELECT c2, sum("C 1"), sqrt("C 1") FROM "S 1"."T 1" GROUP BY 1, 3 (9 rows) select count(x.a), sum(x.a) from (select c2 a, sum(c1) b from ft1 group by c2, sqrt(c1) order by 1, 2) x; @@ -2585,7 +2585,7 @@ select c2 * (random() <= 1)::int as sum1, sum(c1) * c2 as sum2 from ft1 group by -> Foreign Scan Output: (c2 * ((random() <= '1'::double precision))::integer), ((sum(c1) * c2)), c2 Relations: Aggregate on (public.ft1) - Remote SQL: SELECT (sum("C 1") * c2), c2 FROM "S 1"."T 1" GROUP BY c2 + Remote SQL: SELECT (sum("C 1") * c2), c2 FROM "S 1"."T 1" GROUP BY 2 (7 rows) select c2 * (random() <= 1)::int as sum1, sum(c1) * c2 as sum2 from ft1 group by c2 order by 1, 2; @@ -2622,15 +2622,15 @@ select c2 * (random() <= 1)::int as c2 from ft2 group by c2 * (random() <= 1)::i -- GROUP BY clause in various forms, cardinal, alias and constant expression explain (verbose, costs off) select count(c2) w, c2 x, 5 y, 7.0 z from ft1 group by 2, y, 9.0::int order by 2; - QUERY PLAN ----------------------------------------------------------------------------------------------------------- + QUERY PLAN +--------------------------------------------------------------------------------------- Sort Output: (count(c2)), c2, 5, 7.0, 9 Sort Key: ft1.c2 -> Foreign Scan Output: (count(c2)), c2, 5, 7.0, 9 Relations: Aggregate on (public.ft1) - Remote SQL: SELECT count(c2), c2, 5, 7.0, 9 FROM "S 1"."T 1" GROUP BY c2, 5::integer, 9::integer + Remote SQL: SELECT count(c2), c2, 5, 7.0, 9 FROM "S 1"."T 1" GROUP BY 2, 3, 5 (7 rows) select count(c2) w, c2 x, 5 y, 7.0 z from ft1 group by 2, y, 9.0::int order by 2; @@ -2648,18 +2648,41 @@ select count(c2) w, c2 x, 5 y, 7.0 z from ft1 group by 2, y, 9.0::int order by 2 100 | 9 | 5 | 7.0 (10 rows) +-- GROUP BY clause referring to same column multiple times +-- Also, ORDER BY contains an aggregate function +explain (verbose, costs off) +select c2, c2 from ft1 where c2 > 6 group by 1, 2 order by sum(c1); + QUERY PLAN +----------------------------------------------------------------------------------------------- + Sort + Output: c2, c2, (sum(c1)) + Sort Key: (sum(ft1.c1)) + -> Foreign Scan + Output: c2, c2, (sum(c1)) + Relations: Aggregate on (public.ft1) + Remote SQL: SELECT c2, c2, sum("C 1") FROM "S 1"."T 1" WHERE ((c2 > 6)) GROUP BY 1, 2 +(7 rows) + +select c2, c2 from ft1 where c2 > 6 group by 1, 2 order by sum(c1); + c2 | c2 +----+---- + 7 | 7 + 8 | 8 + 9 | 9 +(3 rows) + -- Testing HAVING clause shippability explain (verbose, costs off) select c2, sum(c1) from ft2 group by c2 having avg(c1) < 500 and sum(c1) < 49800 order by c2; - QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN 
+--------------------------------------------------------------------------------------------------------------------------------------- Sort Output: c2, (sum(c1)) Sort Key: ft2.c2 -> Foreign Scan Output: c2, (sum(c1)) Relations: Aggregate on (public.ft2) - Remote SQL: SELECT c2, sum("C 1") FROM "S 1"."T 1" GROUP BY c2 HAVING ((avg("C 1") < 500::numeric)) AND ((sum("C 1") < 49800)) + Remote SQL: SELECT c2, sum("C 1") FROM "S 1"."T 1" GROUP BY 1 HAVING ((avg("C 1") < 500::numeric)) AND ((sum("C 1") < 49800)) (7 rows) select c2, sum(c1) from ft2 group by c2 having avg(c1) < 500 and sum(c1) < 49800 order by c2; @@ -2672,15 +2695,15 @@ select c2, sum(c1) from ft2 group by c2 having avg(c1) < 500 and sum(c1) < 49800 -- Unshippable HAVING clause will be evaluated locally, and other qual in HAVING clause is pushed down explain (verbose, costs off) select count(*) from (select c5, count(c1) from ft1 group by c5, sqrt(c2) having (avg(c1) / avg(c1)) * random() <= 1 and avg(c1) < 500) x; - QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +--------------------------------------------------------------------------------------------------------------------------------------- Aggregate Output: count(*) -> Foreign Scan Output: ft1.c5, NULL::bigint, (sqrt((ft1.c2)::double precision)) Filter: (((((avg(ft1.c1)) / (avg(ft1.c1))))::double precision * random()) <= '1'::double precision) Relations: Aggregate on (public.ft1) - Remote SQL: SELECT c5, NULL::bigint, sqrt(c2), avg("C 1") FROM "S 1"."T 1" GROUP BY c5, (sqrt(c2)) HAVING ((avg("C 1") < 500::numeric)) + Remote SQL: SELECT c5, NULL::bigint, sqrt(c2), avg("C 1") FROM "S 1"."T 1" GROUP BY 1, 3 HAVING ((avg("C 1") < 500::numeric)) (7 rows) select count(*) from (select c5, count(c1) from ft1 group by c5, sqrt(c2) having (avg(c1) / avg(c1)) * random() <= 1 and avg(c1) < 500) x; @@ -2710,15 +2733,15 @@ select sum(c1) from ft1 group by c2 having avg(c1 * (random() <= 1)::int) > 100 -- ORDER BY within aggregate, same column used to order explain (verbose, costs off) select array_agg(c1 order by c1) from ft1 where c1 < 100 group by c2 order by 1; - QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +--------------------------------------------------------------------------------------------------------------------------------- Sort Output: (array_agg(c1 ORDER BY c1)), c2 Sort Key: (array_agg(ft1.c1 ORDER BY ft1.c1)) -> Foreign Scan Output: (array_agg(c1 ORDER BY c1)), c2 Relations: Aggregate on (public.ft1) - Remote SQL: SELECT array_agg("C 1" ORDER BY "C 1" ASC NULLS LAST), c2 FROM "S 1"."T 1" WHERE (("C 1" < 100)) GROUP BY c2 + Remote SQL: SELECT array_agg("C 1" ORDER BY "C 1" ASC NULLS LAST), c2 FROM "S 1"."T 1" WHERE (("C 1" < 100)) GROUP BY 2 (7 rows) select array_agg(c1 order by c1) from ft1 where c1 < 100 group by c2 order by 1; @@ -2756,15 +2779,15 @@ select array_agg(c5 order by c1 desc) from ft2 where c2 = 6 and c1 < 50; -- DISTINCT within aggregate explain (verbose, costs off) select array_agg(distinct (t1.c1)%5) from ft4 t1 full join ft5 t2 on (t1.c1 = t2.c1) where t1.c1 < 20 or (t1.c1 is null and t2.c1 < 5) group by (t2.c1)%3 order by 1; - QUERY PLAN 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort Output: (array_agg(DISTINCT (t1.c1 % 5))), ((t2.c1 % 3)) Sort Key: (array_agg(DISTINCT (t1.c1 % 5))) -> Foreign Scan Output: (array_agg(DISTINCT (t1.c1 % 5))), ((t2.c1 % 3)) Relations: Aggregate on ((public.ft4 t1) FULL JOIN (public.ft5 t2)) - Remote SQL: SELECT array_agg(DISTINCT (r1.c1 % 5)), (r2.c1 % 3) FROM ("S 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1)))) WHERE (((r1.c1 < 20) OR ((r1.c1 IS NULL) AND (r2.c1 < 5)))) GROUP BY ((r2.c1 % 3)) + Remote SQL: SELECT array_agg(DISTINCT (r1.c1 % 5)), (r2.c1 % 3) FROM ("S 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1)))) WHERE (((r1.c1 < 20) OR ((r1.c1 IS NULL) AND (r2.c1 < 5)))) GROUP BY 2 (7 rows) select array_agg(distinct (t1.c1)%5) from ft4 t1 full join ft5 t2 on (t1.c1 = t2.c1) where t1.c1 < 20 or (t1.c1 is null and t2.c1 < 5) group by (t2.c1)%3 order by 1; @@ -2777,15 +2800,15 @@ select array_agg(distinct (t1.c1)%5) from ft4 t1 full join ft5 t2 on (t1.c1 = t2 -- DISTINCT combined with ORDER BY within aggregate explain (verbose, costs off) select array_agg(distinct (t1.c1)%5 order by (t1.c1)%5) from ft4 t1 full join ft5 t2 on (t1.c1 = t2.c1) where t1.c1 < 20 or (t1.c1 is null and t2.c1 < 5) group by (t2.c1)%3 order by 1; - QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Sort Output: (array_agg(DISTINCT (t1.c1 % 5) ORDER BY (t1.c1 % 5))), ((t2.c1 % 3)) Sort Key: (array_agg(DISTINCT (t1.c1 % 5) ORDER BY (t1.c1 % 5))) -> Foreign Scan Output: (array_agg(DISTINCT (t1.c1 % 5) ORDER BY (t1.c1 % 5))), ((t2.c1 % 3)) Relations: Aggregate on ((public.ft4 t1) FULL JOIN (public.ft5 t2)) - Remote SQL: SELECT array_agg(DISTINCT (r1.c1 % 5) ORDER BY ((r1.c1 % 5)) ASC NULLS LAST), (r2.c1 % 3) FROM ("S 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1)))) WHERE (((r1.c1 < 20) OR ((r1.c1 IS NULL) AND (r2.c1 < 5)))) GROUP BY ((r2.c1 % 3)) + Remote SQL: SELECT array_agg(DISTINCT (r1.c1 % 5) ORDER BY ((r1.c1 % 5)) ASC NULLS LAST), (r2.c1 % 3) FROM ("S 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1)))) WHERE (((r1.c1 < 20) OR ((r1.c1 IS NULL) AND (r2.c1 < 5)))) GROUP BY 2 (7 rows) select array_agg(distinct (t1.c1)%5 order by (t1.c1)%5) from ft4 t1 full join ft5 t2 on (t1.c1 = t2.c1) where t1.c1 < 20 or (t1.c1 is null and t2.c1 < 5) group by (t2.c1)%3 order by 1; @@ -2797,15 +2820,15 @@ select array_agg(distinct (t1.c1)%5 order by (t1.c1)%5) from ft4 t1 full join ft explain (verbose, costs off) select array_agg(distinct (t1.c1)%5 order by (t1.c1)%5 desc nulls last) from ft4 t1 full join ft5 t2 on (t1.c1 = t2.c1) where t1.c1 < 20 or (t1.c1 is null and t2.c1 < 5) group by (t2.c1)%3 
order by 1; - QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort Output: (array_agg(DISTINCT (t1.c1 % 5) ORDER BY (t1.c1 % 5) DESC NULLS LAST)), ((t2.c1 % 3)) Sort Key: (array_agg(DISTINCT (t1.c1 % 5) ORDER BY (t1.c1 % 5) DESC NULLS LAST)) -> Foreign Scan Output: (array_agg(DISTINCT (t1.c1 % 5) ORDER BY (t1.c1 % 5) DESC NULLS LAST)), ((t2.c1 % 3)) Relations: Aggregate on ((public.ft4 t1) FULL JOIN (public.ft5 t2)) - Remote SQL: SELECT array_agg(DISTINCT (r1.c1 % 5) ORDER BY ((r1.c1 % 5)) DESC NULLS LAST), (r2.c1 % 3) FROM ("S 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1)))) WHERE (((r1.c1 < 20) OR ((r1.c1 IS NULL) AND (r2.c1 < 5)))) GROUP BY ((r2.c1 % 3)) + Remote SQL: SELECT array_agg(DISTINCT (r1.c1 % 5) ORDER BY ((r1.c1 % 5)) DESC NULLS LAST), (r2.c1 % 3) FROM ("S 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1)))) WHERE (((r1.c1 < 20) OR ((r1.c1 IS NULL) AND (r2.c1 < 5)))) GROUP BY 2 (7 rows) select array_agg(distinct (t1.c1)%5 order by (t1.c1)%5 desc nulls last) from ft4 t1 full join ft5 t2 on (t1.c1 = t2.c1) where t1.c1 < 20 or (t1.c1 is null and t2.c1 < 5) group by (t2.c1)%3 order by 1; @@ -2818,15 +2841,15 @@ select array_agg(distinct (t1.c1)%5 order by (t1.c1)%5 desc nulls last) from ft4 -- FILTER within aggregate explain (verbose, costs off) select sum(c1) filter (where c1 < 100 and c2 > 5) from ft1 group by c2 order by 1 nulls last; - QUERY PLAN --------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------------------- Sort Output: (sum(c1) FILTER (WHERE ((c1 < 100) AND (c2 > 5)))), c2 Sort Key: (sum(ft1.c1) FILTER (WHERE ((ft1.c1 < 100) AND (ft1.c2 > 5)))) -> Foreign Scan Output: (sum(c1) FILTER (WHERE ((c1 < 100) AND (c2 > 5)))), c2 Relations: Aggregate on (public.ft1) - Remote SQL: SELECT sum("C 1") FILTER (WHERE (("C 1" < 100) AND (c2 > 5))), c2 FROM "S 1"."T 1" GROUP BY c2 + Remote SQL: SELECT sum("C 1") FILTER (WHERE (("C 1" < 100) AND (c2 > 5))), c2 FROM "S 1"."T 1" GROUP BY 2 (7 rows) select sum(c1) filter (where c1 < 100 and c2 > 5) from ft1 group by c2 order by 1 nulls last; @@ -2847,12 +2870,12 @@ select sum(c1) filter (where c1 < 100 and c2 > 5) from ft1 group by c2 order by -- DISTINCT, ORDER BY and FILTER within aggregate explain (verbose, costs off) select sum(c1%3), sum(distinct c1%3 order by c1%3) filter (where c1%3 < 2), c2 from ft1 where c2 = 6 group by c2; - QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Foreign Scan Output: (sum((c1 % 3))), (sum(DISTINCT (c1 % 3) ORDER BY (c1 % 3)) FILTER (WHERE ((c1 % 3) < 2))), c2 Relations: 
Aggregate on (public.ft1) - Remote SQL: SELECT sum(("C 1" % 3)), sum(DISTINCT ("C 1" % 3) ORDER BY (("C 1" % 3)) ASC NULLS LAST) FILTER (WHERE (("C 1" % 3) < 2)), c2 FROM "S 1"."T 1" WHERE ((c2 = 6)) GROUP BY c2 + Remote SQL: SELECT sum(("C 1" % 3)), sum(DISTINCT ("C 1" % 3) ORDER BY (("C 1" % 3)) ASC NULLS LAST) FILTER (WHERE (("C 1" % 3) < 2)), c2 FROM "S 1"."T 1" WHERE ((c2 = 6)) GROUP BY 3 (4 rows) select sum(c1%3), sum(distinct c1%3 order by c1%3) filter (where c1%3 < 2), c2 from ft1 where c2 = 6 group by c2; @@ -2948,15 +2971,15 @@ select sum(c2) filter (where c2 in (select c2 from ft1 where c2 < 5)) from ft1; -- Ordered-sets within aggregate explain (verbose, costs off) select c2, rank('10'::varchar) within group (order by c6), percentile_cont(c2/10::numeric) within group (order by c1) from ft1 where c2 < 10 group by c2 having percentile_cont(c2/10::numeric) within group (order by c1) < 500 order by c2; - QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + QUERY PLAN +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort Output: c2, (rank('10'::character varying) WITHIN GROUP (ORDER BY c6)), (percentile_cont((((c2)::numeric / '10'::numeric))::double precision) WITHIN GROUP (ORDER BY ((c1)::double precision))) Sort Key: ft1.c2 -> Foreign Scan Output: c2, (rank('10'::character varying) WITHIN GROUP (ORDER BY c6)), (percentile_cont((((c2)::numeric / '10'::numeric))::double precision) WITHIN GROUP (ORDER BY ((c1)::double precision))) Relations: Aggregate on (public.ft1) - Remote SQL: SELECT c2, rank('10'::character varying) WITHIN GROUP (ORDER BY c6 ASC NULLS LAST), percentile_cont((c2 / 10::numeric)) WITHIN GROUP (ORDER BY ("C 1") ASC NULLS LAST) FROM "S 1"."T 1" WHERE ((c2 < 10)) GROUP BY c2 HAVING ((percentile_cont((c2 / 10::numeric)) WITHIN GROUP (ORDER BY ("C 1") ASC NULLS LAST) < 500::double precision)) + Remote SQL: SELECT c2, rank('10'::character varying) WITHIN GROUP (ORDER BY c6 ASC NULLS LAST), percentile_cont((c2 / 10::numeric)) WITHIN GROUP (ORDER BY ("C 1") ASC NULLS LAST) FROM "S 1"."T 1" WHERE ((c2 < 10)) GROUP BY 1 HAVING ((percentile_cont((c2 / 10::numeric)) WITHIN GROUP (ORDER BY ("C 1") ASC NULLS LAST) < 500::double precision)) (7 rows) select c2, rank('10'::varchar) within group (order by c6), percentile_cont(c2/10::numeric) within group (order by c1) from ft1 where c2 < 10 group by c2 having percentile_cont(c2/10::numeric) within group (order by c1) < 500 order by c2; @@ -2972,12 +2995,12 @@ select c2, rank('10'::varchar) within group (order by c6), percentile_cont(c2/10 -- Using multiple arguments within aggregates explain (verbose, costs off) select c1, rank(c1, c2) within group (order by c1, c2) from ft1 group by c1, c2 having c1 = 6 order by 1; - QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN 
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------- Foreign Scan Output: c1, (rank(c1, c2) WITHIN GROUP (ORDER BY c1, c2)), c2 Relations: Aggregate on (public.ft1) - Remote SQL: SELECT "C 1", rank("C 1", c2) WITHIN GROUP (ORDER BY "C 1" ASC NULLS LAST, c2 ASC NULLS LAST), c2 FROM "S 1"."T 1" WHERE (("C 1" = 6)) GROUP BY "C 1", c2 + Remote SQL: SELECT "C 1", rank("C 1", c2) WITHIN GROUP (ORDER BY "C 1" ASC NULLS LAST, c2 ASC NULLS LAST), c2 FROM "S 1"."T 1" WHERE (("C 1" = 6)) GROUP BY 1, 3 (4 rows) select c1, rank(c1, c2) within group (order by c1, c2) from ft1 group by c1, c2 having c1 = 6 order by 1; @@ -3015,15 +3038,15 @@ alter server loopback options (set extensions 'postgres_fdw'); -- Now aggregate will be pushed. Aggregate will display VARIADIC argument. explain (verbose, costs off) select c2, least_agg(c1) from ft1 where c2 < 100 group by c2 order by c2; - QUERY PLAN ------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------- Sort Output: c2, (least_agg(VARIADIC ARRAY[c1])) Sort Key: ft1.c2 -> Foreign Scan Output: c2, (least_agg(VARIADIC ARRAY[c1])) Relations: Aggregate on (public.ft1) - Remote SQL: SELECT c2, public.least_agg(VARIADIC ARRAY["C 1"]) FROM "S 1"."T 1" WHERE ((c2 < 100)) GROUP BY c2 + Remote SQL: SELECT c2, public.least_agg(VARIADIC ARRAY["C 1"]) FROM "S 1"."T 1" WHERE ((c2 < 100)) GROUP BY 1 (7 rows) select c2, least_agg(c1) from ft1 where c2 < 100 group by c2 order by c2; @@ -3115,12 +3138,12 @@ alter server loopback options (set extensions 'postgres_fdw'); -- Now this will be pushed as sort operator is part of the extension. 
explain (verbose, costs off) select array_agg(c1 order by c1 using operator(public.<^)) from ft2 where c2 = 6 and c1 < 100 group by c2; - QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------ + QUERY PLAN +---------------------------------------------------------------------------------------------------------------------------------------------------------------- Foreign Scan Output: (array_agg(c1 ORDER BY c1 USING <^ NULLS LAST)), c2 Relations: Aggregate on (public.ft2) - Remote SQL: SELECT array_agg("C 1" ORDER BY "C 1" USING OPERATOR(public.<^) NULLS LAST), c2 FROM "S 1"."T 1" WHERE (("C 1" < 100)) AND ((c2 = 6)) GROUP BY c2 + Remote SQL: SELECT array_agg("C 1" ORDER BY "C 1" USING OPERATOR(public.<^) NULLS LAST), c2 FROM "S 1"."T 1" WHERE (("C 1" < 100)) AND ((c2 = 6)) GROUP BY 2 (4 rows) select array_agg(c1 order by c1 using operator(public.<^)) from ft2 where c2 = 6 and c1 < 100 group by c2; @@ -3181,8 +3204,8 @@ select count(t1.c3) from ft2 t1 left join ft2 t2 on (t1.c1 = random() * t2.c2); -- Subquery in FROM clause having aggregate explain (verbose, costs off) select count(*), x.b from ft1, (select c2 a, sum(c1) b from ft1 group by c2) x where ft1.c2 = x.a group by x.b order by 1, 2; - QUERY PLAN ------------------------------------------------------------------------------------------------- + QUERY PLAN +----------------------------------------------------------------------------------------------- Sort Output: (count(*)), x.b Sort Key: (count(*)), x.b @@ -3203,7 +3226,7 @@ select count(*), x.b from ft1, (select c2 a, sum(c1) b from ft1 group by c2) x w -> Foreign Scan Output: ft1_1.c2, (sum(ft1_1.c1)) Relations: Aggregate on (public.ft1) - Remote SQL: SELECT c2, sum("C 1") FROM "S 1"."T 1" GROUP BY c2 + Remote SQL: SELECT c2, sum("C 1") FROM "S 1"."T 1" GROUP BY 1 (21 rows) select count(*), x.b from ft1, (select c2 a, sum(c1) b from ft1 group by c2) x where ft1.c2 = x.a group by x.b order by 1, 2; @@ -3224,15 +3247,15 @@ select count(*), x.b from ft1, (select c2 a, sum(c1) b from ft1 group by c2) x w -- FULL join with IS NULL check in HAVING explain (verbose, costs off) select avg(t1.c1), sum(t2.c1) from ft4 t1 full join ft5 t2 on (t1.c1 = t2.c1) group by t2.c1 having (avg(t1.c1) is null and sum(t2.c1) < 10) or sum(t2.c1) is null order by 1 nulls last, 2; - QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort Output: (avg(t1.c1)), (sum(t2.c1)), t2.c1 Sort Key: (avg(t1.c1)), (sum(t2.c1)) -> Foreign Scan Output: (avg(t1.c1)), (sum(t2.c1)), t2.c1 Relations: Aggregate on ((public.ft4 t1) FULL JOIN (public.ft5 t2)) - Remote SQL: SELECT avg(r1.c1), sum(r2.c1), r2.c1 FROM ("S 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1)))) GROUP BY r2.c1 HAVING ((((avg(r1.c1) IS NULL) AND (sum(r2.c1) < 10)) OR (sum(r2.c1) IS NULL))) + Remote SQL: SELECT avg(r1.c1), sum(r2.c1), r2.c1 FROM ("S 1"."T 3" r1 FULL JOIN "S 1"."T 4" r2 ON (((r1.c1 = r2.c1)))) GROUP BY 3 HAVING ((((avg(r1.c1) IS NULL) AND (sum(r2.c1) < 10)) OR (sum(r2.c1) IS NULL))) (7 rows) select 
avg(t1.c1), sum(t2.c1) from ft4 t1 full join ft5 t2 on (t1.c1 = t2.c1) group by t2.c1 having (avg(t1.c1) is null and sum(t2.c1) < 10) or sum(t2.c1) is null order by 1 nulls last, 2; @@ -3286,8 +3309,8 @@ select sum(c2) * (random() <= 1)::int as sum from ft1 order by 1; set enable_hashagg to false; explain (verbose, costs off) select c2, sum from "S 1"."T 1" t1, lateral (select sum(t2.c1 + t1."C 1") sum from ft2 t2 group by t2.c1) qry where t1.c2 * 2 = qry.sum and t1.c2 < 3 and t1."C 1" < 100 order by 1; - QUERY PLAN ----------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------ Sort Output: t1.c2, qry.sum Sort Key: t1.c2 @@ -3303,7 +3326,7 @@ select c2, sum from "S 1"."T 1" t1, lateral (select sum(t2.c1 + t1."C 1") sum fr -> Foreign Scan Output: (sum((t2.c1 + t1."C 1"))), t2.c1 Relations: Aggregate on (public.ft2 t2) - Remote SQL: SELECT sum(("C 1" + $1::integer)), "C 1" FROM "S 1"."T 1" GROUP BY "C 1" + Remote SQL: SELECT sum(("C 1" + $1::integer)), "C 1" FROM "S 1"."T 1" GROUP BY 2 (16 rows) select c2, sum from "S 1"."T 1" t1, lateral (select sum(t2.c1 + t1."C 1") sum from ft2 t2 group by t2.c1) qry where t1.c2 * 2 = qry.sum and t1.c2 < 3 and t1."C 1" < 100 order by 1; @@ -3449,8 +3472,8 @@ select c2, sum(c1), grouping(c2) from ft1 where c2 < 3 group by c2 order by 1 nu -- DISTINCT itself is not pushed down, whereas underneath aggregate is pushed explain (verbose, costs off) select distinct sum(c1)/1000 s from ft2 where c2 < 6 group by c2 order by 1; - QUERY PLAN --------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------- Unique Output: ((sum(c1) / 1000)), c2 -> Sort @@ -3459,7 +3482,7 @@ select distinct sum(c1)/1000 s from ft2 where c2 < 6 group by c2 order by 1; -> Foreign Scan Output: ((sum(c1) / 1000)), c2 Relations: Aggregate on (public.ft2) - Remote SQL: SELECT (sum("C 1") / 1000), c2 FROM "S 1"."T 1" WHERE ((c2 < 6)) GROUP BY c2 + Remote SQL: SELECT (sum("C 1") / 1000), c2 FROM "S 1"."T 1" WHERE ((c2 < 6)) GROUP BY 2 (9 rows) select distinct sum(c1)/1000 s from ft2 where c2 < 6 group by c2 order by 1; @@ -3472,8 +3495,8 @@ select distinct sum(c1)/1000 s from ft2 where c2 < 6 group by c2 order by 1; -- WindowAgg explain (verbose, costs off) select c2, sum(c2), count(c2) over (partition by c2%2) from ft2 where c2 < 10 group by c2 order by 1; - QUERY PLAN -------------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------------ Sort Output: c2, (sum(c2)), (count(c2) OVER (?)), ((c2 % 2)) Sort Key: ft2.c2 @@ -3485,7 +3508,7 @@ select c2, sum(c2), count(c2) over (partition by c2%2) from ft2 where c2 < 10 gr -> Foreign Scan Output: c2, ((c2 % 2)), (sum(c2)) Relations: Aggregate on (public.ft2) - Remote SQL: SELECT c2, (c2 % 2), sum(c2) FROM "S 1"."T 1" WHERE ((c2 < 10)) GROUP BY c2 + Remote SQL: SELECT c2, (c2 % 2), sum(c2) FROM "S 1"."T 1" WHERE ((c2 < 10)) GROUP BY 1 (12 rows) select c2, sum(c2), count(c2) over (partition by c2%2) from ft2 where c2 < 10 group by c2 order by 1; @@ -3505,8 +3528,8 @@ select c2, sum(c2), count(c2) over (partition by c2%2) from ft2 where c2 < 10 gr explain (verbose, costs off) 
select c2, array_agg(c2) over (partition by c2%2 order by c2 desc) from ft1 where c2 < 10 group by c2 order by 1; - QUERY PLAN ----------------------------------------------------------------------------------------------------- + QUERY PLAN +--------------------------------------------------------------------------------------------------- Sort Output: c2, (array_agg(c2) OVER (?)), ((c2 % 2)) Sort Key: ft1.c2 @@ -3518,7 +3541,7 @@ select c2, array_agg(c2) over (partition by c2%2 order by c2 desc) from ft1 wher -> Foreign Scan Output: c2, ((c2 % 2)) Relations: Aggregate on (public.ft1) - Remote SQL: SELECT c2, (c2 % 2) FROM "S 1"."T 1" WHERE ((c2 < 10)) GROUP BY c2 + Remote SQL: SELECT c2, (c2 % 2) FROM "S 1"."T 1" WHERE ((c2 < 10)) GROUP BY 1 (12 rows) select c2, array_agg(c2) over (partition by c2%2 order by c2 desc) from ft1 where c2 < 10 group by c2 order by 1; @@ -3538,8 +3561,8 @@ select c2, array_agg(c2) over (partition by c2%2 order by c2 desc) from ft1 wher explain (verbose, costs off) select c2, array_agg(c2) over (partition by c2%2 order by c2 range between current row and unbounded following) from ft1 where c2 < 10 group by c2 order by 1; - QUERY PLAN ----------------------------------------------------------------------------------------------------- + QUERY PLAN +--------------------------------------------------------------------------------------------------- Sort Output: c2, (array_agg(c2) OVER (?)), ((c2 % 2)) Sort Key: ft1.c2 @@ -3551,7 +3574,7 @@ select c2, array_agg(c2) over (partition by c2%2 order by c2 range between curre -> Foreign Scan Output: c2, ((c2 % 2)) Relations: Aggregate on (public.ft1) - Remote SQL: SELECT c2, (c2 % 2) FROM "S 1"."T 1" WHERE ((c2 < 10)) GROUP BY c2 + Remote SQL: SELECT c2, (c2 % 2) FROM "S 1"."T 1" WHERE ((c2 < 10)) GROUP BY 1 (12 rows) select c2, array_agg(c2) over (partition by c2%2 order by c2 range between current row and unbounded following) from ft1 where c2 < 10 group by c2 order by 1; diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index c6e1211f8f..dc302ca789 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -4591,7 +4591,7 @@ static bool foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel) { Query *query = root->parse; - PathTarget *grouping_target; + PathTarget *grouping_target = root->upper_targets[UPPERREL_GROUP_AGG]; PgFdwRelationInfo *fpinfo = (PgFdwRelationInfo *) grouped_rel->fdw_private; PgFdwRelationInfo *ofpinfo; List *aggvars; @@ -4599,7 +4599,7 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel) int i; List *tlist = NIL; - /* Grouping Sets are not pushable */ + /* We currently don't support pushing Grouping Sets. */ if (query->groupingSets) return false; @@ -4607,7 +4607,7 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel) ofpinfo = (PgFdwRelationInfo *) fpinfo->outerrel->fdw_private; /* - * If underneath input relation has any local conditions, those conditions + * If underlying scan relation has any local conditions, those conditions * are required to be applied before performing aggregation. Hence the * aggregate cannot be pushed down. */ @@ -4615,21 +4615,11 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel) return false; /* - * The targetlist expected from this node and the targetlist pushed down - * to the foreign server may be different. 
The latter requires - * sortgrouprefs to be set to push down GROUP BY clause, but should not - * have those arising from ORDER BY clause. These sortgrouprefs may be - * different from those in the plan's targetlist. Use a copy of path - * target to record the new sortgrouprefs. - */ - grouping_target = copy_pathtarget(root->upper_targets[UPPERREL_GROUP_AGG]); - - /* - * Evaluate grouping targets and check whether they are safe to push down - * to the foreign side. All GROUP BY expressions will be part of the - * grouping target and thus there is no need to evaluate it separately. - * While doing so, add required expressions into target list which can - * then be used to pass to foreign server. + * Examine grouping expressions, as well as other expressions we'd need to + * compute, and check whether they are safe to push down to the foreign + * server. All GROUP BY expressions will be part of the grouping target + * and thus there is no need to search for them separately. Add grouping + * expressions into target list which will be passed to foreign server. */ i = 0; foreach(lc, grouping_target->exprs) @@ -4641,51 +4631,59 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel) /* Check whether this expression is part of GROUP BY clause */ if (sgref && get_sortgroupref_clause_noerr(sgref, query->groupClause)) { + TargetEntry *tle; + /* - * If any of the GROUP BY expression is not shippable we can not + * If any GROUP BY expression is not shippable, then we cannot * push down aggregation to the foreign server. */ if (!is_foreign_expr(root, grouped_rel, expr)) return false; - /* Pushable, add to tlist */ - tlist = add_to_flat_tlist(tlist, list_make1(expr)); + /* + * Pushable, so add to tlist. We need to create a TLE for this + * expression and apply the sortgroupref to it. We cannot use + * add_to_flat_tlist() here because that avoids making duplicate + * entries in the tlist. If there are duplicate entries with + * distinct sortgrouprefs, we have to duplicate that situation in + * the output tlist. + */ + tle = makeTargetEntry(expr, list_length(tlist) + 1, NULL, false); + tle->ressortgroupref = sgref; + tlist = lappend(tlist, tle); } else { - /* Check entire expression whether it is pushable or not */ + /* + * Non-grouping expression we need to compute. Is it shippable? + */ if (is_foreign_expr(root, grouped_rel, expr)) { - /* Pushable, add to tlist */ + /* Yes, so add to tlist as-is; OK to suppress duplicates */ tlist = add_to_flat_tlist(tlist, list_make1(expr)); } else { - /* - * If we have sortgroupref set, then it means that we have an - * ORDER BY entry pointing to this expression. Since we are - * not pushing ORDER BY with GROUP BY, clear it. - */ - if (sgref) - grouping_target->sortgrouprefs[i] = 0; - - /* Not matched exactly, pull the var with aggregates then */ + /* Not pushable as a whole; extract its Vars and aggregates */ aggvars = pull_var_clause((Node *) expr, PVC_INCLUDE_AGGREGATES); + /* + * If any aggregate expression is not shippable, then we + * cannot push down aggregation to the foreign server. + */ if (!is_foreign_expr(root, grouped_rel, (Expr *) aggvars)) return false; /* - * Add aggregates, if any, into the targetlist. Plain var - * nodes should be either same as some GROUP BY expression or - * part of some GROUP BY expression. In later case, the query - * cannot refer plain var nodes without the surrounding - * expression. In both the cases, they are already part of + * Add aggregates, if any, into the targetlist. 
Plain Vars + * outside an aggregate can be ignored, because they should be + * either same as some GROUP BY column or part of some GROUP + * BY expression. In either case, they are already part of * the targetlist and thus no need to add them again. In fact - * adding pulled plain var nodes in SELECT clause will cause - * an error on the foreign server if they are not same as some - * GROUP BY expression. + * including plain Vars in the tlist when they do not match a + * GROUP BY column would cause the foreign server to complain + * that the shipped query is invalid. */ foreach(l, aggvars) { @@ -4701,7 +4699,7 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel) } /* - * Classify the pushable and non-pushable having clauses and save them in + * Classify the pushable and non-pushable HAVING clauses and save them in * remote_conds and local_conds of the grouped rel's fpinfo. */ if (root->hasHavingQual && query->havingQual) @@ -4771,9 +4769,6 @@ foreign_grouping_ok(PlannerInfo *root, RelOptInfo *grouped_rel) } } - /* Transfer any sortgroupref data to the replacement tlist */ - apply_pathtarget_labeling_to_tlist(tlist, grouping_target); - /* Store generated targetlist */ fpinfo->grouped_tlist = tlist; diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 3c3c5c705f..2bdf7d8c1e 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -636,6 +636,12 @@ explain (verbose, costs off) select count(c2) w, c2 x, 5 y, 7.0 z from ft1 group by 2, y, 9.0::int order by 2; select count(c2) w, c2 x, 5 y, 7.0 z from ft1 group by 2, y, 9.0::int order by 2; +-- GROUP BY clause referring to same column multiple times +-- Also, ORDER BY contains an aggregate function +explain (verbose, costs off) +select c2, c2 from ft1 where c2 > 6 group by 1, 2 order by sum(c1); +select c2, c2 from ft1 where c2 > 6 group by 1, 2 order by sum(c1); + -- Testing HAVING clause shippability explain (verbose, costs off) select c2, sum(c1) from ft2 group by c2 having avg(c1) < 500 and sum(c1) < 49800 order by c2; From 255f14183ac7bc6a83a5bb00d67d5ac7e8b645f1 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Fri, 12 Jan 2018 16:53:25 -0500 Subject: [PATCH 0836/1087] docs: replace dblink() mention with foreign data mention Reported-by: steven.winfield@cantabcapital.com Discussion: https://postgr.es/m/20171031105039.17183.850@wrigleys.postgresql.org --- doc/src/sgml/textsearch.sgml | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml index 4dc52ec983..1a2f04019c 100644 --- a/doc/src/sgml/textsearch.sgml +++ b/doc/src/sgml/textsearch.sgml @@ -3614,8 +3614,9 @@ SELECT plainto_tsquery('supernovae stars'); allows the implementation of very fast searches with online update. Partitioning can be done at the database level using table inheritance, or by distributing documents over - servers and collecting search results using the - module. The latter is possible because ranking functions use + servers and collecting external search results, e.g. via Foreign Data access. + The latter is possible because ranking functions use only local information.
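
A sketch of the foreign-data approach the revised paragraph above points to,
for illustration only; the server, table, and column names (docs_node1,
docs_part1, body_tsv) are hypothetical and not taken from any patch in this
series:

    CREATE EXTENSION postgres_fdw;
    CREATE SERVER docs_node1 FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'node1', dbname 'docs');
    CREATE USER MAPPING FOR CURRENT_USER SERVER docs_node1;
    CREATE FOREIGN TABLE docs_part1 (id int, body_tsv tsvector)
        SERVER docs_node1;

    -- Ranking needs only local information, so the collected results
    -- can be ranked on the collecting node.
    SELECT id, ts_rank(body_tsv, query) AS rank
    FROM docs_part1, to_tsquery('supernovae & stars') AS query
    WHERE body_tsv @@ query
    ORDER BY rank DESC LIMIT 10;
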
From 649aeb123f73e69cf78c52b534c15c51a229d63d Mon Sep 17 00:00:00 2001
From: Michael Meskes
Date: Sat, 13 Jan 2018 14:56:49 +0100
Subject: [PATCH 0837/1087] Cope with indicator arrays that do not have the
 correct length.

Patch by: "Rader, David"
---
 src/interfaces/ecpg/preproc/type.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/interfaces/ecpg/preproc/type.c b/src/interfaces/ecpg/preproc/type.c
index 4abbf93d19..fa1a05c302 100644
--- a/src/interfaces/ecpg/preproc/type.c
+++ b/src/interfaces/ecpg/preproc/type.c
@@ -609,7 +609,17 @@ ECPGdump_a_struct(FILE *o, const char *name, const char *ind_name, char *arrsize
 						  prefix, ind_prefix, arrsize, type->struct_sizeof,
 						  (ind_p != NULL) ? ind_type->struct_sizeof : NULL);
 		if (ind_p != NULL && ind_p != &struct_no_indicator)
+		{
 			ind_p = ind_p->next;
+			if (ind_p == NULL && p->next != NULL) {
+				mmerror(PARSE_ERROR, ET_WARNING, "indicator struct \"%s\" has too few members", ind_name);
+				ind_p = &struct_no_indicator;
+			}
+		}
+	}
+
+	if (ind_type != NULL && ind_p != NULL && ind_p != &struct_no_indicator) {
+		mmerror(PARSE_ERROR, ET_WARNING, "indicator struct \"%s\" has too many members", ind_name);
 	}
 
 	free(pbuf);

From d91da5ecedc8f8965bd35de66b09feb79c26e5ca Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Tue, 16 Jan 2018 17:12:16 -0500
Subject: [PATCH 0838/1087] Remove useless use of bit-masking macros

In this case, the macros SET_8_BYTES(), GET_8_BYTES(), SET_4_BYTES(),
GET_4_BYTES() are no-ops, so we can just remove them.

The plan is to perhaps remove them from the source code altogether, so
we'll start here.

Discussion: https://www.postgresql.org/message-id/5d51721a-69ef-2053-9172-599b539f0628@2ndquadrant.com
---
 src/backend/utils/adt/numeric.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c
index a1792f0b01..5b34badd5b 100644
--- a/src/backend/utils/adt/numeric.c
+++ b/src/backend/utils/adt/numeric.c
@@ -354,12 +354,12 @@ typedef struct NumericSumAccum
  */
 #define NUMERIC_ABBREV_BITS (SIZEOF_DATUM * BITS_PER_BYTE)
 #if SIZEOF_DATUM == 8
-#define NumericAbbrevGetDatum(X) ((Datum) SET_8_BYTES(X))
-#define DatumGetNumericAbbrev(X) ((int64) GET_8_BYTES(X))
+#define NumericAbbrevGetDatum(X) ((Datum) (X))
+#define DatumGetNumericAbbrev(X) ((int64) (X))
 #define NUMERIC_ABBREV_NAN NumericAbbrevGetDatum(PG_INT64_MIN)
 #else
-#define NumericAbbrevGetDatum(X) ((Datum) SET_4_BYTES(X))
-#define DatumGetNumericAbbrev(X) ((int32) GET_4_BYTES(X))
+#define NumericAbbrevGetDatum(X) ((Datum) (X))
+#define DatumGetNumericAbbrev(X) ((int32) (X))
 #define NUMERIC_ABBREV_NAN NumericAbbrevGetDatum(PG_INT32_MIN)
 #endif
 

From cc4feded0a31d2b732d4ea68613115cb720e624e Mon Sep 17 00:00:00 2001
From: Andrew Dunstan
Date: Tue, 16 Jan 2018 19:07:13 -0500
Subject: [PATCH 0839/1087] Centralize json and jsonb handling of datetime
 types

This creates a single function JsonEncodeDateTime which will format these
data types in an efficient and consistent manner. This will be all the
more important when we come to jsonpath so we don't have to implement
yet more code doing the same thing in two more places.

This also extends the code to handle time and timetz types which were
not previously handled specially. This requires exposing the time2tm and
timetz2tm functions.
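
For illustration (this example and its expected output are a sketch, not
part of the patch): with both code paths routed through the same encoder,
the json and jsonb casts of a datetime value come out identically, in
ISO/XSD style:

    SELECT to_json(date '2018-01-17');
    -- "2018-01-17"
    SELECT to_jsonb(timestamp '2018-01-17 12:34:56');
    -- "2018-01-17T12:34:56"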
Patch from Nikita Glukhov --- src/backend/utils/adt/date.c | 6 +- src/backend/utils/adt/json.c | 122 ++++++++++++++++++++++++++-------- src/backend/utils/adt/jsonb.c | 70 +++---------------- src/include/utils/date.h | 4 +- src/include/utils/jsonapi.h | 2 + 5 files changed, 109 insertions(+), 95 deletions(-) diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c index 95a999857c..747ef49789 100644 --- a/src/backend/utils/adt/date.c +++ b/src/backend/utils/adt/date.c @@ -41,8 +41,6 @@ #endif -static int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec); -static int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, int *tzp); static int tm2time(struct pg_tm *tm, fsec_t fsec, TimeADT *result); static int tm2timetz(struct pg_tm *tm, fsec_t fsec, int tz, TimeTzADT *result); static void AdjustTimeForTypmod(TimeADT *time, int32 typmod); @@ -1249,7 +1247,7 @@ tm2time(struct pg_tm *tm, fsec_t fsec, TimeADT *result) * If out of this range, leave as UTC (in practice that could only happen * if pg_time_t is just 32 bits) - thomas 97/05/27 */ -static int +int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec) { tm->tm_hour = time / USECS_PER_HOUR; @@ -2073,7 +2071,7 @@ timetztypmodout(PG_FUNCTION_ARGS) /* timetz2tm() * Convert TIME WITH TIME ZONE data type to POSIX time structure. */ -static int +int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, int *tzp) { TimeOffset trem = time->time; diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c index 151345ab2f..97a5b85516 100644 --- a/src/backend/utils/adt/json.c +++ b/src/backend/utils/adt/json.c @@ -1503,12 +1503,70 @@ datum_to_json(Datum val, bool is_null, StringInfo result, pfree(outputstr); break; case JSONTYPE_DATE: + { + char buf[MAXDATELEN + 1]; + + JsonEncodeDateTime(buf, val, DATEOID); + appendStringInfo(result, "\"%s\"", buf); + } + break; + case JSONTYPE_TIMESTAMP: + { + char buf[MAXDATELEN + 1]; + + JsonEncodeDateTime(buf, val, TIMESTAMPOID); + appendStringInfo(result, "\"%s\"", buf); + } + break; + case JSONTYPE_TIMESTAMPTZ: + { + char buf[MAXDATELEN + 1]; + + JsonEncodeDateTime(buf, val, TIMESTAMPTZOID); + appendStringInfo(result, "\"%s\"", buf); + } + break; + case JSONTYPE_JSON: + /* JSON and JSONB output will already be escaped */ + outputstr = OidOutputFunctionCall(outfuncoid, val); + appendStringInfoString(result, outputstr); + pfree(outputstr); + break; + case JSONTYPE_CAST: + /* outfuncoid refers to a cast function, not an output function */ + jsontext = DatumGetTextPP(OidFunctionCall1(outfuncoid, val)); + outputstr = text_to_cstring(jsontext); + appendStringInfoString(result, outputstr); + pfree(outputstr); + pfree(jsontext); + break; + default: + outputstr = OidOutputFunctionCall(outfuncoid, val); + escape_json(result, outputstr); + pfree(outputstr); + break; + } +} + +/* + * Encode 'value' of datetime type 'typid' into JSON string in ISO format using + * optionally preallocated buffer 'buf'. 
+ */ +char * +JsonEncodeDateTime(char *buf, Datum value, Oid typid) +{ + if (!buf) + buf = palloc(MAXDATELEN + 1); + + switch (typid) + { + case DATEOID: { DateADT date; struct pg_tm tm; - char buf[MAXDATELEN + 1]; - date = DatumGetDateADT(val); + date = DatumGetDateADT(value); + /* Same as date_out(), but forcing DateStyle */ if (DATE_NOT_FINITE(date)) EncodeSpecialDate(date, buf); @@ -1518,17 +1576,40 @@ datum_to_json(Datum val, bool is_null, StringInfo result, &(tm.tm_year), &(tm.tm_mon), &(tm.tm_mday)); EncodeDateOnly(&tm, USE_XSD_DATES, buf); } - appendStringInfo(result, "\"%s\"", buf); } break; - case JSONTYPE_TIMESTAMP: + case TIMEOID: + { + TimeADT time = DatumGetTimeADT(value); + struct pg_tm tt, + *tm = &tt; + fsec_t fsec; + + /* Same as time_out(), but forcing DateStyle */ + time2tm(time, tm, &fsec); + EncodeTimeOnly(tm, fsec, false, 0, USE_XSD_DATES, buf); + } + break; + case TIMETZOID: + { + TimeTzADT *time = DatumGetTimeTzADTP(value); + struct pg_tm tt, + *tm = &tt; + fsec_t fsec; + int tz; + + /* Same as timetz_out(), but forcing DateStyle */ + timetz2tm(time, tm, &fsec, &tz); + EncodeTimeOnly(tm, fsec, true, tz, USE_XSD_DATES, buf); + } + break; + case TIMESTAMPOID: { Timestamp timestamp; struct pg_tm tm; fsec_t fsec; - char buf[MAXDATELEN + 1]; - timestamp = DatumGetTimestamp(val); + timestamp = DatumGetTimestamp(value); /* Same as timestamp_out(), but forcing DateStyle */ if (TIMESTAMP_NOT_FINITE(timestamp)) EncodeSpecialTimestamp(timestamp, buf); @@ -1538,19 +1619,17 @@ datum_to_json(Datum val, bool is_null, StringInfo result, ereport(ERROR, (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE), errmsg("timestamp out of range"))); - appendStringInfo(result, "\"%s\"", buf); } break; - case JSONTYPE_TIMESTAMPTZ: + case TIMESTAMPTZOID: { TimestampTz timestamp; struct pg_tm tm; int tz; fsec_t fsec; const char *tzn = NULL; - char buf[MAXDATELEN + 1]; - timestamp = DatumGetTimestampTz(val); + timestamp = DatumGetTimestampTz(value); /* Same as timestamptz_out(), but forcing DateStyle */ if (TIMESTAMP_NOT_FINITE(timestamp)) EncodeSpecialTimestamp(timestamp, buf); @@ -1560,29 +1639,14 @@ datum_to_json(Datum val, bool is_null, StringInfo result, ereport(ERROR, (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE), errmsg("timestamp out of range"))); - appendStringInfo(result, "\"%s\"", buf); } break; - case JSONTYPE_JSON: - /* JSON and JSONB output will already be escaped */ - outputstr = OidOutputFunctionCall(outfuncoid, val); - appendStringInfoString(result, outputstr); - pfree(outputstr); - break; - case JSONTYPE_CAST: - /* outfuncoid refers to a cast function, not an output function */ - jsontext = DatumGetTextPP(OidFunctionCall1(outfuncoid, val)); - outputstr = text_to_cstring(jsontext); - appendStringInfoString(result, outputstr); - pfree(outputstr); - pfree(jsontext); - break; default: - outputstr = OidOutputFunctionCall(outfuncoid, val); - escape_json(result, outputstr); - pfree(outputstr); - break; + elog(ERROR, "unknown jsonb value datetime type oid %d", typid); + return NULL; } + + return buf; } /* diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c index 014e7aa6e3..0f70180164 100644 --- a/src/backend/utils/adt/jsonb.c +++ b/src/backend/utils/adt/jsonb.c @@ -786,71 +786,19 @@ datum_to_jsonb(Datum val, bool is_null, JsonbInState *result, } break; case JSONBTYPE_DATE: - { - DateADT date; - struct pg_tm tm; - char buf[MAXDATELEN + 1]; - - date = DatumGetDateADT(val); - /* Same as date_out(), but forcing DateStyle */ - if (DATE_NOT_FINITE(date)) - 
EncodeSpecialDate(date, buf); - else - { - j2date(date + POSTGRES_EPOCH_JDATE, - &(tm.tm_year), &(tm.tm_mon), &(tm.tm_mday)); - EncodeDateOnly(&tm, USE_XSD_DATES, buf); - } - jb.type = jbvString; - jb.val.string.len = strlen(buf); - jb.val.string.val = pstrdup(buf); - } + jb.type = jbvString; + jb.val.string.val = JsonEncodeDateTime(NULL, val, DATEOID); + jb.val.string.len = strlen(jb.val.string.val); break; case JSONBTYPE_TIMESTAMP: - { - Timestamp timestamp; - struct pg_tm tm; - fsec_t fsec; - char buf[MAXDATELEN + 1]; - - timestamp = DatumGetTimestamp(val); - /* Same as timestamp_out(), but forcing DateStyle */ - if (TIMESTAMP_NOT_FINITE(timestamp)) - EncodeSpecialTimestamp(timestamp, buf); - else if (timestamp2tm(timestamp, NULL, &tm, &fsec, NULL, NULL) == 0) - EncodeDateTime(&tm, fsec, false, 0, NULL, USE_XSD_DATES, buf); - else - ereport(ERROR, - (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE), - errmsg("timestamp out of range"))); - jb.type = jbvString; - jb.val.string.len = strlen(buf); - jb.val.string.val = pstrdup(buf); - } + jb.type = jbvString; + jb.val.string.val = JsonEncodeDateTime(NULL, val, TIMESTAMPOID); + jb.val.string.len = strlen(jb.val.string.val); break; case JSONBTYPE_TIMESTAMPTZ: - { - TimestampTz timestamp; - struct pg_tm tm; - int tz; - fsec_t fsec; - const char *tzn = NULL; - char buf[MAXDATELEN + 1]; - - timestamp = DatumGetTimestampTz(val); - /* Same as timestamptz_out(), but forcing DateStyle */ - if (TIMESTAMP_NOT_FINITE(timestamp)) - EncodeSpecialTimestamp(timestamp, buf); - else if (timestamp2tm(timestamp, &tz, &tm, &fsec, &tzn, NULL) == 0) - EncodeDateTime(&tm, fsec, true, tz, tzn, USE_XSD_DATES, buf); - else - ereport(ERROR, - (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE), - errmsg("timestamp out of range"))); - jb.type = jbvString; - jb.val.string.len = strlen(buf); - jb.val.string.val = pstrdup(buf); - } + jb.type = jbvString; + jb.val.string.val = JsonEncodeDateTime(NULL, val, TIMESTAMPTZOID); + jb.val.string.len = strlen(jb.val.string.val); break; case JSONBTYPE_JSONCAST: case JSONBTYPE_JSON: diff --git a/src/include/utils/date.h b/src/include/utils/date.h index 274959231b..e17cd49602 100644 --- a/src/include/utils/date.h +++ b/src/include/utils/date.h @@ -17,7 +17,7 @@ #include #include "fmgr.h" - +#include "datatype/timestamp.h" typedef int32 DateADT; @@ -73,5 +73,7 @@ extern void EncodeSpecialDate(DateADT dt, char *str); extern DateADT GetSQLCurrentDate(void); extern TimeTzADT *GetSQLCurrentTime(int32 typmod); extern TimeADT GetSQLLocalTime(int32 typmod); +extern int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec); +extern int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, int *tzp); #endif /* DATE_H */ diff --git a/src/include/utils/jsonapi.h b/src/include/utils/jsonapi.h index d6baea5368..e39572e00f 100644 --- a/src/include/utils/jsonapi.h +++ b/src/include/utils/jsonapi.h @@ -147,4 +147,6 @@ extern Jsonb *transform_jsonb_string_values(Jsonb *jsonb, void *action_state, extern text *transform_json_string_values(text *json, void *action_state, JsonTransformStringValuesAction transform_action); +extern char *JsonEncodeDateTime(char *buf, Datum value, Oid typid); + #endif /* JSONAPI_H */ From 585e166e46a1572b59eb9fdaffc2d4b785000f9e Mon Sep 17 00:00:00 2001 From: Andrew Dunstan Date: Wed, 17 Jan 2018 03:33:02 -0500 Subject: [PATCH 0840/1087] Fix compiler warnings due to commit cc4feded --- src/include/utils/date.h | 1 + 1 file changed, 1 insertion(+) diff --git a/src/include/utils/date.h b/src/include/utils/date.h index 
e17cd49602..eb6d2a16fe 100644 --- a/src/include/utils/date.h +++ b/src/include/utils/date.h @@ -17,6 +17,7 @@ #include #include "fmgr.h" +#include "pgtime.h" #include "datatype/timestamp.h" typedef int32 DateADT; From 9c7d06d60680c7f00d931233873dee81fdb311c6 Mon Sep 17 00:00:00 2001 From: Simon Riggs Date: Wed, 17 Jan 2018 11:38:34 +0000 Subject: [PATCH 0841/1087] Ability to advance replication slots Ability to advance both physical and logical replication slots using a new user function pg_replication_slot_advance(). For logical advance that means records are consumed as fast as possible and changes are not given to output plugin for sending. Makes 2nd phase (after we reached SNAPBUILD_FULL_SNAPSHOT) of replication slot creation faster, especially when there are big transactions as the reorder buffer does not have to deal with data changes and does not have to spill to disk. Author: Petr Jelinek Reviewed-by: Simon Riggs --- contrib/test_decoding/expected/slot.out | 30 +++ contrib/test_decoding/sql/slot.sql | 15 ++ doc/src/sgml/func.sgml | 19 ++ src/backend/replication/logical/decode.c | 44 ++-- src/backend/replication/logical/logical.c | 30 ++- .../replication/logical/logicalfuncs.c | 1 + src/backend/replication/slotfuncs.c | 200 ++++++++++++++++++ src/backend/replication/walsender.c | 1 + src/include/catalog/pg_proc.h | 2 + src/include/replication/logical.h | 8 + 10 files changed, 333 insertions(+), 17 deletions(-) diff --git a/contrib/test_decoding/expected/slot.out b/contrib/test_decoding/expected/slot.out index 9f5f8a9b76..21e9d56f73 100644 --- a/contrib/test_decoding/expected/slot.out +++ b/contrib/test_decoding/expected/slot.out @@ -92,6 +92,36 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'in COMMIT (3 rows) +INSERT INTO replication_example(somedata, text) VALUES (1, 4); +INSERT INTO replication_example(somedata, text) VALUES (1, 5); +SELECT pg_current_wal_lsn() AS wal_lsn \gset +INSERT INTO replication_example(somedata, text) VALUES (1, 6); +SELECT end_lsn FROM pg_replication_slot_advance('regression_slot1', :'wal_lsn') \gset +SELECT slot_name FROM pg_replication_slot_advance('regression_slot2', pg_current_wal_lsn()); + slot_name +------------------ + regression_slot2 +(1 row) + +SELECT :'wal_lsn' = :'end_lsn'; + ?column? 
+----------
+ t
+(1 row)
+
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+                                                   data
+---------------------------------------------------------------------------------------------------------
+ BEGIN
+ table public.replication_example: INSERT: id[integer]:6 somedata[integer]:1 text[character varying]:'6'
+ COMMIT
+(3 rows)
+
+SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+------
+(0 rows)
+
 DROP TABLE replication_example;
 -- error
 SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot1', 'test_decoding', true);
diff --git a/contrib/test_decoding/sql/slot.sql b/contrib/test_decoding/sql/slot.sql
index fa9561f54e..706340c1d8 100644
--- a/contrib/test_decoding/sql/slot.sql
+++ b/contrib/test_decoding/sql/slot.sql
@@ -45,6 +45,21 @@ INSERT INTO replication_example(somedata, text) VALUES (1, 3);
 SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
 SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
 
+INSERT INTO replication_example(somedata, text) VALUES (1, 4);
+INSERT INTO replication_example(somedata, text) VALUES (1, 5);
+
+SELECT pg_current_wal_lsn() AS wal_lsn \gset
+
+INSERT INTO replication_example(somedata, text) VALUES (1, 6);
+
+SELECT end_lsn FROM pg_replication_slot_advance('regression_slot1', :'wal_lsn') \gset
+SELECT slot_name FROM pg_replication_slot_advance('regression_slot2', pg_current_wal_lsn());
+
+SELECT :'wal_lsn' = :'end_lsn';
+
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
 DROP TABLE replication_example;
 
 -- error
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2428434030..487c7ff750 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19155,6 +19155,25 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
+
+
+
+        pg_replication_slot_advance
+
+       pg_replication_slot_advance(slot_name name, upto_lsn pg_lsn)
+
+
+       (slot_name name, end_lsn pg_lsn)
+       bool
+
+
+       Advances the current confirmed position of a replication slot named
+       slot_name. The slot will not be moved backwards,
+       and it will not be moved beyond the current insert location. Returns
+       the name of the slot and the actual position to which it was advanced.
+
+
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 537eba7875..6eb0d5527e 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -88,6 +88,9 @@ static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
 * call ReorderBufferProcessXid for each record type by default, because
 * e.g. empty xacts can be handled more efficiently if there's no previous
 * state for them.
+ *
+ * We also support the ability to fast forward through records, skipping some
+ * record types completely - see individual record types for details.
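+ *
+ * In fast-forward mode each decode routine below bails out early; in
+ * outline, the guard added throughout this file is:
+ *
+ *     if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ *         ctx->fast_forward)
+ *         return;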
 */
 void
 LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *record)
@@ -332,8 +335,10 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 				xl_invalidations *invalidations =
 				(xl_invalidations *) XLogRecGetData(r);
 
-				ReorderBufferImmediateInvalidation(
-												   ctx->reorder, invalidations->nmsgs, invalidations->msgs);
+				if (!ctx->fast_forward)
+					ReorderBufferImmediateInvalidation(ctx->reorder,
+													   invalidations->nmsgs,
+													   invalidations->msgs);
 			}
 			break;
 		default:
@@ -353,14 +358,19 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	ReorderBufferProcessXid(ctx->reorder, xid, buf->origptr);
 
-	/* no point in doing anything yet */
-	if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)
+	/*
+	 * If we don't have a snapshot or we are just fast-forwarding, there is no
+	 * point in decoding changes.
+	 */
+	if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+		ctx->fast_forward)
 		return;
 
 	switch (info)
 	{
 		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+			if (!ctx->fast_forward &&
+				SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeMultiInsert(ctx, buf);
 			break;
 		case XLOG_HEAP2_NEW_CID:
@@ -408,8 +418,12 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	ReorderBufferProcessXid(ctx->reorder, xid, buf->origptr);
 
-	/* no point in doing anything yet */
-	if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)
+	/*
+	 * If we don't have a snapshot or we are just fast-forwarding, there is no
+	 * point in decoding data changes.
+	 */
+	if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+		ctx->fast_forward)
 		return;
 
 	switch (info)
@@ -501,8 +515,12 @@ DecodeLogicalMsgOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	ReorderBufferProcessXid(ctx->reorder, XLogRecGetXid(r), buf->origptr);
 
-	/* No point in doing anything yet. */
-	if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)
+	/*
+	 * If we don't have a snapshot or we are just fast-forwarding, there is no
+	 * point in decoding messages.
+	 */
+	if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+		ctx->fast_forward)
 		return;
 
 	message = (xl_logical_message *) XLogRecGetData(r);
@@ -554,8 +572,9 @@ DecodeCommit(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
 	 */
 	if (parsed->nmsgs > 0)
 	{
-		ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,
-									  parsed->nmsgs, parsed->msgs);
+		if (!ctx->fast_forward)
+			ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,
+										  parsed->nmsgs, parsed->msgs);
 		ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);
 	}
 
@@ -574,6 +593,7 @@ DecodeCommit(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
 	 * are restarting or if we haven't assembled a consistent snapshot yet.
 	 * 2) The transaction happened in another database.
	 * 3) The output plugin is not interested in the origin.
+	 * 4) We are doing fast-forwarding.
 	 *
 	 * We can't just use ReorderBufferAbort() here, because we need to execute
	 * the transaction's invalidations.
This currently won't be needed if @@ -589,7 +609,7 @@ DecodeCommit(LogicalDecodingContext *ctx, XLogRecordBuffer *buf, */ if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) || (parsed->dbId != InvalidOid && parsed->dbId != ctx->slot->data.database) || - FilterByOrigin(ctx, origin_id)) + ctx->fast_forward || FilterByOrigin(ctx, origin_id)) { for (i = 0; i < parsed->nsubxacts; i++) { diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c index 2fc9d7d70f..7637efc32e 100644 --- a/src/backend/replication/logical/logical.c +++ b/src/backend/replication/logical/logical.c @@ -115,6 +115,7 @@ StartupDecodingContext(List *output_plugin_options, XLogRecPtr start_lsn, TransactionId xmin_horizon, bool need_full_snapshot, + bool fast_forward, XLogPageReadCB read_page, LogicalOutputPluginWriterPrepareWrite prepare_write, LogicalOutputPluginWriterWrite do_write, @@ -140,7 +141,8 @@ StartupDecodingContext(List *output_plugin_options, * (re-)load output plugins, so we detect a bad (removed) output plugin * now. */ - LoadOutputPlugin(&ctx->callbacks, NameStr(slot->data.plugin)); + if (!fast_forward) + LoadOutputPlugin(&ctx->callbacks, NameStr(slot->data.plugin)); /* * Now that the slot's xmin has been set, we can announce ourselves as a @@ -191,6 +193,8 @@ StartupDecodingContext(List *output_plugin_options, ctx->output_plugin_options = output_plugin_options; + ctx->fast_forward = fast_forward; + MemoryContextSwitchTo(old_context); return ctx; @@ -303,8 +307,9 @@ CreateInitDecodingContext(char *plugin, ReplicationSlotSave(); ctx = StartupDecodingContext(NIL, InvalidXLogRecPtr, xmin_horizon, - need_full_snapshot, read_page, prepare_write, - do_write, update_progress); + need_full_snapshot, true, + read_page, prepare_write, do_write, + update_progress); /* call output plugin initialization callback */ old_context = MemoryContextSwitchTo(ctx->context); @@ -342,6 +347,7 @@ CreateInitDecodingContext(char *plugin, LogicalDecodingContext * CreateDecodingContext(XLogRecPtr start_lsn, List *output_plugin_options, + bool fast_forward, XLogPageReadCB read_page, LogicalOutputPluginWriterPrepareWrite prepare_write, LogicalOutputPluginWriterWrite do_write, @@ -395,8 +401,8 @@ CreateDecodingContext(XLogRecPtr start_lsn, ctx = StartupDecodingContext(output_plugin_options, start_lsn, InvalidTransactionId, false, - read_page, prepare_write, do_write, - update_progress); + fast_forward, read_page, prepare_write, + do_write, update_progress); /* call output plugin initialization callback */ old_context = MemoryContextSwitchTo(ctx->context); @@ -573,6 +579,8 @@ startup_cb_wrapper(LogicalDecodingContext *ctx, OutputPluginOptions *opt, bool i LogicalErrorCallbackState state; ErrorContextCallback errcallback; + Assert(!ctx->fast_forward); + /* Push callback + info on the error context stack */ state.ctx = ctx; state.callback_name = "startup"; @@ -598,6 +606,8 @@ shutdown_cb_wrapper(LogicalDecodingContext *ctx) LogicalErrorCallbackState state; ErrorContextCallback errcallback; + Assert(!ctx->fast_forward); + /* Push callback + info on the error context stack */ state.ctx = ctx; state.callback_name = "shutdown"; @@ -629,6 +639,8 @@ begin_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn) LogicalErrorCallbackState state; ErrorContextCallback errcallback; + Assert(!ctx->fast_forward); + /* Push callback + info on the error context stack */ state.ctx = ctx; state.callback_name = "begin"; @@ -658,6 +670,8 @@ commit_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn, 
LogicalErrorCallbackState state; ErrorContextCallback errcallback; + Assert(!ctx->fast_forward); + /* Push callback + info on the error context stack */ state.ctx = ctx; state.callback_name = "commit"; @@ -687,6 +701,8 @@ change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn, LogicalErrorCallbackState state; ErrorContextCallback errcallback; + Assert(!ctx->fast_forward); + /* Push callback + info on the error context stack */ state.ctx = ctx; state.callback_name = "change"; @@ -721,6 +737,8 @@ filter_by_origin_cb_wrapper(LogicalDecodingContext *ctx, RepOriginId origin_id) ErrorContextCallback errcallback; bool ret; + Assert(!ctx->fast_forward); + /* Push callback + info on the error context stack */ state.ctx = ctx; state.callback_name = "filter_by_origin"; @@ -751,6 +769,8 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn, LogicalErrorCallbackState state; ErrorContextCallback errcallback; + Assert(!ctx->fast_forward); + if (ctx->callbacks.message_cb == NULL) return; diff --git a/src/backend/replication/logical/logicalfuncs.c b/src/backend/replication/logical/logicalfuncs.c index 9aab6e71b2..54c25f1f5b 100644 --- a/src/backend/replication/logical/logicalfuncs.c +++ b/src/backend/replication/logical/logicalfuncs.c @@ -251,6 +251,7 @@ pg_logical_slot_get_changes_guts(FunctionCallInfo fcinfo, bool confirm, bool bin /* restart at slot's confirmed_flush */ ctx = CreateDecodingContext(InvalidXLogRecPtr, options, + false, logical_read_local_xlog_page, LogicalOutputPrepareWrite, LogicalOutputWrite, NULL); diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c index b02df593e9..93d2e20f76 100644 --- a/src/backend/replication/slotfuncs.c +++ b/src/backend/replication/slotfuncs.c @@ -17,11 +17,14 @@ #include "miscadmin.h" #include "access/htup_details.h" +#include "replication/decode.h" #include "replication/slot.h" #include "replication/logical.h" #include "replication/logicalfuncs.h" #include "utils/builtins.h" +#include "utils/inval.h" #include "utils/pg_lsn.h" +#include "utils/resowner.h" static void check_permissions(void) @@ -312,3 +315,200 @@ pg_get_replication_slots(PG_FUNCTION_ARGS) return (Datum) 0; } + +/* + * Helper function for advancing physical replication slot forward. + */ +static XLogRecPtr +pg_physical_replication_slot_advance(XLogRecPtr startlsn, XLogRecPtr moveto) +{ + XLogRecPtr retlsn = InvalidXLogRecPtr; + + SpinLockAcquire(&MyReplicationSlot->mutex); + if (MyReplicationSlot->data.restart_lsn < moveto) + { + MyReplicationSlot->data.restart_lsn = moveto; + retlsn = moveto; + } + SpinLockRelease(&MyReplicationSlot->mutex); + + return retlsn; +} + +/* + * Helper function for advancing logical replication slot forward. 
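+ *
+ * Illustrative usage (a sketch, not part of the patch): this helper is
+ * reached from the SQL-level entry point below when the slot is logical,
+ * e.g. after
+ *
+ *     SELECT * FROM pg_replication_slot_advance('myslot', pg_current_wal_lsn());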
+ */
+static XLogRecPtr
+pg_logical_replication_slot_advance(XLogRecPtr startlsn, XLogRecPtr moveto)
+{
+	LogicalDecodingContext *ctx;
+	ResourceOwner old_resowner = CurrentResourceOwner;
+	XLogRecPtr	retlsn = InvalidXLogRecPtr;
+
+	PG_TRY();
+	{
+		/* restart at slot's confirmed_flush */
+		ctx = CreateDecodingContext(InvalidXLogRecPtr,
+									NIL,
+									true,
+									logical_read_local_xlog_page,
+									NULL, NULL, NULL);
+
+		CurrentResourceOwner = ResourceOwnerCreate(CurrentResourceOwner,
+												   "logical decoding");
+
+		/* invalidate non-timetravel entries */
+		InvalidateSystemCaches();
+
+		/* Decode until we run out of records */
+		while ((startlsn != InvalidXLogRecPtr && startlsn < moveto) ||
+			   (ctx->reader->EndRecPtr != InvalidXLogRecPtr && ctx->reader->EndRecPtr < moveto))
+		{
+			XLogRecord *record;
+			char	   *errm = NULL;
+
+			record = XLogReadRecord(ctx->reader, startlsn, &errm);
+			if (errm)
+				elog(ERROR, "%s", errm);
+
+			/*
+			 * Now that we've set up the xlog reader state, subsequent calls
+			 * pass InvalidXLogRecPtr to say "continue from last record"
+			 */
+			startlsn = InvalidXLogRecPtr;
+
+			/*
+			 * The {begin_txn,change,commit_txn}_wrapper callbacks above will
+			 * store the description into our tuplestore.
+			 */
+			if (record != NULL)
+				LogicalDecodingProcessRecord(ctx, ctx->reader);
+
+			/* check limits */
+			if (moveto <= ctx->reader->EndRecPtr)
+				break;
+
+			CHECK_FOR_INTERRUPTS();
+		}
+
+		CurrentResourceOwner = old_resowner;
+
+		if (ctx->reader->EndRecPtr != InvalidXLogRecPtr)
+		{
+			LogicalConfirmReceivedLocation(moveto);
+
+			/*
+			 * If only the confirmed_flush_lsn has changed the slot won't get
+			 * marked as dirty by the above.  Callers on the walsender
+			 * interface are expected to keep track of their own progress and
+			 * don't need it written out.  But SQL-interface users cannot
+			 * specify their own start positions and it's harder for them to
+			 * keep track of their progress, so we should make more of an
+			 * effort to save it for them.
+			 *
+			 * Dirty the slot so it's written out at the next checkpoint.
+			 * We'll still lose its position on crash, as documented, but it's
+			 * better than always losing the position even on clean restart.
+			 */
+			ReplicationSlotMarkDirty();
+		}
+
+		retlsn = MyReplicationSlot->data.confirmed_flush;
+
+		/* free context, call shutdown callback */
+		FreeDecodingContext(ctx);
+
+		InvalidateSystemCaches();
+	}
+	PG_CATCH();
+	{
+		/* clear all timetravel entries */
+		InvalidateSystemCaches();
+
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+
+	return retlsn;
+}
+
+/*
+ * SQL function for moving the position in a replication slot.
+ */
+Datum
+pg_replication_slot_advance(PG_FUNCTION_ARGS)
+{
+	Name		slotname = PG_GETARG_NAME(0);
+	XLogRecPtr	moveto = PG_GETARG_LSN(1);
+	XLogRecPtr	endlsn;
+	XLogRecPtr	startlsn;
+	TupleDesc	tupdesc;
+	Datum		values[2];
+	bool		nulls[2];
+	HeapTuple	tuple;
+	Datum		result;
+
+	Assert(!MyReplicationSlot);
+
+	check_permissions();
+
+	if (XLogRecPtrIsInvalid(moveto))
+		ereport(ERROR,
+				(errmsg("invalid target wal lsn")));
+
+	/* Build a tuple descriptor for our result type */
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/*
+	 * We can't move the slot past what's been flushed/replayed so clamp the
+	 * target position accordingly.
+ */ + if (!RecoveryInProgress()) + moveto = Min(moveto, GetFlushRecPtr()); + else + moveto = Min(moveto, GetXLogReplayRecPtr(&ThisTimeLineID)); + + /* Acquire the slot so we "own" it */ + ReplicationSlotAcquire(NameStr(*slotname), true); + + startlsn = MyReplicationSlot->data.confirmed_flush; + if (moveto < startlsn) + { + ReplicationSlotRelease(); + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot move slot to %X/%X, minimum is %X/%X", + (uint32) (moveto >> 32), (uint32) moveto, + (uint32) (MyReplicationSlot->data.confirmed_flush >> 32), + (uint32) (MyReplicationSlot->data.confirmed_flush)))); + } + + if (OidIsValid(MyReplicationSlot->data.database)) + endlsn = pg_logical_replication_slot_advance(startlsn, moveto); + else + endlsn = pg_physical_replication_slot_advance(startlsn, moveto); + + values[0] = NameGetDatum(&MyReplicationSlot->data.name); + nulls[0] = false; + + /* Update the on disk state when lsn was updated. */ + if (XLogRecPtrIsInvalid(endlsn)) + { + ReplicationSlotMarkDirty(); + ReplicationSlotsComputeRequiredXmin(false); + ReplicationSlotsComputeRequiredLSN(); + ReplicationSlotSave(); + } + + ReplicationSlotRelease(); + + /* Return the reached position. */ + values[1] = LSNGetDatum(endlsn); + nulls[1] = false; + + tuple = heap_form_tuple(tupdesc, values, nulls); + result = HeapTupleGetDatum(tuple); + + PG_RETURN_DATUM(result); +} diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index 8bef3fbdaf..130ecd5559 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -1075,6 +1075,7 @@ StartLogicalReplication(StartReplicationCmd *cmd) * to be shipped from that position. */ logical_decoding_ctx = CreateDecodingContext(cmd->startpoint, cmd->options, + false, logical_read_xlog_page, WalSndPrepareWrite, WalSndWriteData, diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index 298e0ae2f0..f01648c961 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -5357,6 +5357,8 @@ DATA(insert OID = 3784 ( pg_logical_slot_peek_changes PGNSP PGUID 12 1000 1000 DESCR("peek at changes from replication slot"); DATA(insert OID = 3785 ( pg_logical_slot_peek_binary_changes PGNSP PGUID 12 1000 1000 25 0 f f f f f t v u 4 0 2249 "19 3220 23 1009" "{19,3220,23,1009,3220,28,17}" "{i,i,i,v,o,o,o}" "{slot_name,upto_lsn,upto_nchanges,options,lsn,xid,data}" _null_ _null_ pg_logical_slot_peek_binary_changes _null_ _null_ _null_ )); DESCR("peek at binary changes from replication slot"); +DATA(insert OID = 3878 ( pg_replication_slot_advance PGNSP PGUID 12 1 0 0 0 f f f f t f v u 2 0 2249 "19 3220" "{19,3220,19,3220}" "{i,i,o,o}" "{slot_name,upto_lsn,slot_name,end_lsn}" _null_ _null_ pg_replication_slot_advance _null_ _null_ _null_ )); +DESCR("advance logical replication slot"); DATA(insert OID = 3577 ( pg_logical_emit_message PGNSP PGUID 12 1 0 0 0 f f f f t f v u 3 0 3220 "16 25 25" _null_ _null_ _null_ _null_ _null_ pg_logical_emit_message_text _null_ _null_ _null_ )); DESCR("emit a textual logical decoding message"); DATA(insert OID = 3578 ( pg_logical_emit_message PGNSP PGUID 12 1 0 0 0 f f f f t f v u 3 0 3220 "16 25 17" _null_ _null_ _null_ _null_ _null_ pg_logical_emit_message_bytea _null_ _null_ _null_ )); diff --git a/src/include/replication/logical.h b/src/include/replication/logical.h index d9059e1cca..619c5f4d73 100644 --- a/src/include/replication/logical.h +++ b/src/include/replication/logical.h @@ -45,6 +45,13 @@ typedef struct 
LogicalDecodingContext
 {
 	struct ReorderBuffer *reorder;
 	struct SnapBuild *snapshot_builder;
 
+	/*
+	 * Marks the logical decoding context as a fast-forward decoding one.
+	 * Such a context does not have a plugin loaded, so most of the following
+	 * properties are unused.
+	 */
+	bool		fast_forward;
+
 	OutputPluginCallbacks callbacks;
 	OutputPluginOptions options;
 
@@ -97,6 +104,7 @@ extern LogicalDecodingContext *CreateInitDecodingContext(char *plugin,
 extern LogicalDecodingContext *CreateDecodingContext(
 					  XLogRecPtr start_lsn,
 					  List *output_plugin_options,
+					  bool fast_forward,
 					  XLogPageReadCB read_page,
 					  LogicalOutputPluginWriterPrepareWrite prepare_write,
 					  LogicalOutputPluginWriterWrite do_write,

From dca48d145e0e757f0549430ec48687d12c6b6751 Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Wed, 17 Jan 2018 14:44:15 -0500
Subject: [PATCH 0842/1087] Remove useless lookup of root partitioned rel in
 ExecInitModifyTable().

node->partitioned_rels is only set in UPDATE/DELETE cases, but
ExecInitModifyTable only uses its "rel" variable in INSERT cases,
so the extra logic to find the root rel is just a waste of complexity
and cycles.

Etsuro Fujita, reviewed by Amit Langote

Discussion: https://postgr.es/m/93cf9816-2f7d-0f67-8ed2-4a4e497a6ab8@lab.ntt.co.jp
---
 src/backend/executor/nodeModifyTable.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 55dff5b21a..c5eca1bb74 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -46,7 +46,6 @@
 #include "foreign/fdwapi.h"
 #include "miscadmin.h"
 #include "nodes/nodeFuncs.h"
-#include "parser/parsetree.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "utils/builtins.h"
@@ -1932,20 +1931,8 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 
 	estate->es_result_relation_info = saved_resultRelInfo;
 
-	/* The root table RT index is at the head of the partitioned_rels list */
-	if (node->partitioned_rels)
-	{
-		Index		root_rti;
-		Oid			root_oid;
-
-		root_rti = linitial_int(node->partitioned_rels);
-		root_oid = getrelid(root_rti, estate->es_range_table);
-		rel = heap_open(root_oid, NoLock);	/* locked by InitPlan */
-	}
-	else
-		rel = mtstate->resultRelInfo->ri_RelationDesc;
-
 	/* Build state for INSERT tuple routing */
+	rel = mtstate->resultRelInfo->ri_RelationDesc;
 	if (operation == CMD_INSERT &&
 		rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
 	{
@@ -2118,10 +2105,6 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 		mtstate->ps.ps_ExprContext = NULL;
 	}
 
-	/* Close the root partitioned rel if we opened it above. */
-	if (rel != mtstate->resultRelInfo->ri_RelationDesc)
-		heap_close(rel, NoLock);
-
 	/*
 	 * If needed, Initialize target list, projection and qual for ON CONFLICT
 	 * DO UPDATE.

From 4bbf6edfbd5d03743ff82dda2f00c738fb3208f5 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Wed, 17 Jan 2018 16:18:39 -0500
Subject: [PATCH 0843/1087] postgres_fdw: Avoid 'outer pathkeys do not match
 mergeclauses' error.

When pushing down a join to a foreign server, postgres_fdw constructs
an alternative plan to be used for any EvalPlanQual rechecks that
prove to be necessary.  This plan is stored as the outer subplan of
the Foreign Scan implementing the pushed-down join.  Previously, this
alternative plan could have a different nominal sort ordering than its
parent, which seemed OK since there will only be one tuple per base
table anyway in the case of an EvalPlanQual recheck.
Actually, though, it caused a problem if that path was used as a building block for the EvalPlanQual recheck plan of a higher-level foreign join, because we could end up with a merge join one of whose inputs was not labelled with the correct sort order. Repair by injecting an extra Sort node into the EvalPlanQual recheck plan whenever it would otherwise fail to be sorted at least as well as its parent Foreign Scan. Report by Jeff Janes. Patch by me, reviewed by Tom Lane, who also provided the test case and comment text. Discussion: http://postgr.es/m/CAMkU=1y2G8VOVBHv3iXU2TMAj7-RyBFFW1uhkr5sm9LQ2=X35g@mail.gmail.com --- .../postgres_fdw/expected/postgres_fdw.out | 210 ++++++++++++------ contrib/postgres_fdw/postgres_fdw.c | 18 +- contrib/postgres_fdw/sql/postgres_fdw.sql | 7 + 3 files changed, 171 insertions(+), 64 deletions(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 993219133a..f88e0a2d52 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -1371,24 +1371,27 @@ SELECT t1.c1, ss.a, ss.b FROM (SELECT c1 FROM "S 1"."T 3" WHERE c1 = 50) t1 INNE Output: ft4.c1, ft4.*, ft5.c1, ft5.* Relations: (public.ft4) FULL JOIN (public.ft5) Remote SQL: SELECT s8.c1, s8.c2, s9.c1, s9.c2 FROM ((SELECT c1, ROW(c1, c2, c3) FROM "S 1"."T 3" WHERE ((c1 >= 50)) AND ((c1 <= 60))) s8(c1, c2) FULL JOIN (SELECT c1, ROW(c1, c2, c3) FROM "S 1"."T 4" WHERE ((c1 >= 50)) AND ((c1 <= 60))) s9(c1, c2) ON (((s8.c1 = s9.c1)))) WHERE (((s8.c1 IS NULL) OR (s8.c1 IS NOT NULL))) ORDER BY s8.c1 ASC NULLS LAST, s9.c1 ASC NULLS LAST - -> Hash Full Join + -> Sort Output: ft4.c1, ft4.*, ft5.c1, ft5.* - Hash Cond: (ft4.c1 = ft5.c1) - Filter: ((ft4.c1 IS NULL) OR (ft4.c1 IS NOT NULL)) - -> Foreign Scan on public.ft4 - Output: ft4.c1, ft4.* - Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 3" WHERE ((c1 >= 50)) AND ((c1 <= 60)) - -> Hash - Output: ft5.c1, ft5.* - -> Foreign Scan on public.ft5 + Sort Key: ft4.c1, ft5.c1 + -> Hash Full Join + Output: ft4.c1, ft4.*, ft5.c1, ft5.* + Hash Cond: (ft4.c1 = ft5.c1) + Filter: ((ft4.c1 IS NULL) OR (ft4.c1 IS NOT NULL)) + -> Foreign Scan on public.ft4 + Output: ft4.c1, ft4.* + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 3" WHERE ((c1 >= 50)) AND ((c1 <= 60)) + -> Hash Output: ft5.c1, ft5.* - Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4" WHERE ((c1 >= 50)) AND ((c1 <= 60)) + -> Foreign Scan on public.ft5 + Output: ft5.c1, ft5.* + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4" WHERE ((c1 >= 50)) AND ((c1 <= 60)) -> Materialize Output: "T 3".c1, "T 3".ctid -> Seq Scan on "S 1"."T 3" Output: "T 3".c1, "T 3".ctid Filter: ("T 3".c1 = 50) -(25 rows) +(28 rows) SELECT t1.c1, ss.a, ss.b FROM (SELECT c1 FROM "S 1"."T 3" WHERE c1 = 50) t1 INNER JOIN (SELECT t2.c1, t3.c1 FROM (SELECT c1 FROM ft4 WHERE c1 between 50 and 60) t2 FULL JOIN (SELECT c1 FROM ft5 WHERE c1 between 50 and 60) t3 ON (t2.c1 = t3.c1) WHERE t2.c1 IS NULL OR t2.c1 IS NOT NULL) ss(a, b) ON (TRUE) ORDER BY t1.c1, ss.a, ss.b FOR UPDATE OF t1; c1 | a | b @@ -1701,22 +1704,25 @@ SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t Output: t1.c1, t2.c1, t1.c3, t1.*, t2.* Relations: (public.ft1 t1) INNER JOIN (public.ft2 t2) Remote SQL: SELECT r1."C 1", r1.c3, CASE WHEN (r1.*)::text IS NOT NULL THEN ROW(r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8) END, r2."C 1", CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END FROM 
("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")))) ORDER BY r1.c3 ASC NULLS LAST, r1."C 1" ASC NULLS LAST FOR UPDATE OF r1 - -> Merge Join + -> Sort Output: t1.c1, t1.c3, t1.*, t2.c1, t2.* - Merge Cond: (t1.c1 = t2.c1) - -> Sort - Output: t1.c1, t1.c3, t1.* - Sort Key: t1.c1 - -> Foreign Scan on public.ft1 t1 + Sort Key: t1.c3, t1.c1 + -> Merge Join + Output: t1.c1, t1.c3, t1.*, t2.c1, t2.* + Merge Cond: (t1.c1 = t2.c1) + -> Sort Output: t1.c1, t1.c3, t1.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE - -> Sort - Output: t2.c1, t2.* - Sort Key: t2.c1 - -> Foreign Scan on public.ft2 t2 + Sort Key: t1.c1 + -> Foreign Scan on public.ft1 t1 + Output: t1.c1, t1.c3, t1.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE + -> Sort Output: t2.c1, t2.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" -(23 rows) + Sort Key: t2.c1 + -> Foreign Scan on public.ft2 t2 + Output: t2.c1, t2.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" +(26 rows) SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t1.c1 OFFSET 100 LIMIT 10 FOR UPDATE OF t1; c1 | c1 @@ -1745,22 +1751,25 @@ SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t Output: t1.c1, t2.c1, t1.c3, t1.*, t2.* Relations: (public.ft1 t1) INNER JOIN (public.ft2 t2) Remote SQL: SELECT r1."C 1", r1.c3, CASE WHEN (r1.*)::text IS NOT NULL THEN ROW(r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8) END, r2."C 1", CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")))) ORDER BY r1.c3 ASC NULLS LAST, r1."C 1" ASC NULLS LAST FOR UPDATE OF r1 FOR UPDATE OF r2 - -> Merge Join + -> Sort Output: t1.c1, t1.c3, t1.*, t2.c1, t2.* - Merge Cond: (t1.c1 = t2.c1) - -> Sort - Output: t1.c1, t1.c3, t1.* - Sort Key: t1.c1 - -> Foreign Scan on public.ft1 t1 + Sort Key: t1.c3, t1.c1 + -> Merge Join + Output: t1.c1, t1.c3, t1.*, t2.c1, t2.* + Merge Cond: (t1.c1 = t2.c1) + -> Sort Output: t1.c1, t1.c3, t1.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE - -> Sort - Output: t2.c1, t2.* - Sort Key: t2.c1 - -> Foreign Scan on public.ft2 t2 + Sort Key: t1.c1 + -> Foreign Scan on public.ft1 t1 + Output: t1.c1, t1.c3, t1.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE + -> Sort Output: t2.c1, t2.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE -(23 rows) + Sort Key: t2.c1 + -> Foreign Scan on public.ft2 t2 + Output: t2.c1, t2.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE +(26 rows) SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t1.c1 OFFSET 100 LIMIT 10 FOR UPDATE; c1 | c1 @@ -1790,22 +1799,25 @@ SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t Output: t1.c1, t2.c1, t1.c3, t1.*, t2.* Relations: (public.ft1 t1) INNER JOIN (public.ft2 t2) Remote SQL: SELECT r1."C 1", r1.c3, CASE WHEN (r1.*)::text IS NOT NULL THEN ROW(r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8) END, r2."C 1", CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")))) ORDER BY r1.c3 ASC NULLS LAST, r1."C 1" ASC NULLS LAST FOR SHARE OF r1 - -> Merge 
Join + -> Sort Output: t1.c1, t1.c3, t1.*, t2.c1, t2.* - Merge Cond: (t1.c1 = t2.c1) - -> Sort - Output: t1.c1, t1.c3, t1.* - Sort Key: t1.c1 - -> Foreign Scan on public.ft1 t1 + Sort Key: t1.c3, t1.c1 + -> Merge Join + Output: t1.c1, t1.c3, t1.*, t2.c1, t2.* + Merge Cond: (t1.c1 = t2.c1) + -> Sort Output: t1.c1, t1.c3, t1.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR SHARE - -> Sort - Output: t2.c1, t2.* - Sort Key: t2.c1 - -> Foreign Scan on public.ft2 t2 + Sort Key: t1.c1 + -> Foreign Scan on public.ft1 t1 + Output: t1.c1, t1.c3, t1.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR SHARE + -> Sort Output: t2.c1, t2.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" -(23 rows) + Sort Key: t2.c1 + -> Foreign Scan on public.ft2 t2 + Output: t2.c1, t2.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" +(26 rows) SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t1.c1 OFFSET 100 LIMIT 10 FOR SHARE OF t1; c1 | c1 @@ -1834,22 +1846,25 @@ SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t Output: t1.c1, t2.c1, t1.c3, t1.*, t2.* Relations: (public.ft1 t1) INNER JOIN (public.ft2 t2) Remote SQL: SELECT r1."C 1", r1.c3, CASE WHEN (r1.*)::text IS NOT NULL THEN ROW(r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8) END, r2."C 1", CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")))) ORDER BY r1.c3 ASC NULLS LAST, r1."C 1" ASC NULLS LAST FOR SHARE OF r1 FOR SHARE OF r2 - -> Merge Join + -> Sort Output: t1.c1, t1.c3, t1.*, t2.c1, t2.* - Merge Cond: (t1.c1 = t2.c1) - -> Sort - Output: t1.c1, t1.c3, t1.* - Sort Key: t1.c1 - -> Foreign Scan on public.ft1 t1 + Sort Key: t1.c3, t1.c1 + -> Merge Join + Output: t1.c1, t1.c3, t1.*, t2.c1, t2.* + Merge Cond: (t1.c1 = t2.c1) + -> Sort Output: t1.c1, t1.c3, t1.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR SHARE - -> Sort - Output: t2.c1, t2.* - Sort Key: t2.c1 - -> Foreign Scan on public.ft2 t2 + Sort Key: t1.c1 + -> Foreign Scan on public.ft1 t1 + Output: t1.c1, t1.c3, t1.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR SHARE + -> Sort Output: t2.c1, t2.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR SHARE -(23 rows) + Sort Key: t2.c1 + -> Foreign Scan on public.ft2 t2 + Output: t2.c1, t2.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR SHARE +(26 rows) SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3, t1.c1 OFFSET 100 LIMIT 10 FOR SHARE; c1 | c1 @@ -2314,6 +2329,75 @@ SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5 (30,31,AAA030) | 30 | 31 | AAA030 | 30 | 31 (4 rows) +-- multi-way join involving multiple merge joins +EXPLAIN (VERBOSE, COSTS OFF) +SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 + AND ft1.c1 = ft5.c1 FOR UPDATE; + QUERY PLAN 
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + LockRows + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3, ft1.*, ft2.*, ft4.*, ft5.* + -> Foreign Scan + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3, ft1.*, ft2.*, ft4.*, ft5.* + Relations: (((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5) + Remote SQL: SELECT r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8, CASE WHEN (r1.*)::text IS NOT NULL THEN ROW(r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8) END, r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END, r3.c1, r3.c2, r3.c3, CASE WHEN (r3.*)::text IS NOT NULL THEN ROW(r3.c1, r3.c2, r3.c3) END, r4.c1, r4.c2, r4.c3, CASE WHEN (r4.*)::text IS NOT NULL THEN ROW(r4.c1, r4.c2, r4.c3) END FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")))) INNER JOIN "S 1"."T 3" r3 ON (((r1."C 1" = r3.c1)))) INNER JOIN "S 1"."T 4" r4 ON (((r1."C 1" = r4.c1)))) FOR UPDATE OF r1 FOR UPDATE OF r2 FOR UPDATE OF r3 FOR UPDATE OF r4 + -> Merge Join + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.*, ft4.c1, ft4.c2, ft4.c3, ft4.*, ft5.c1, ft5.c2, ft5.c3, ft5.* + Merge Cond: (ft1.c1 = ft5.c1) + -> Merge Join + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.*, ft4.c1, ft4.c2, ft4.c3, ft4.* + Merge Cond: (ft1.c1 = ft4.c1) + -> Merge Join + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.* + Merge Cond: (ft1.c1 = ft2.c1) + -> Sort + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.* + Sort Key: ft1.c1 + -> Foreign Scan on public.ft1 + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE + -> Sort + Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.* + Sort Key: ft2.c1 + -> Foreign Scan on public.ft2 + Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE + -> Sort + Output: ft4.c1, ft4.c2, ft4.c3, ft4.* + Sort Key: ft4.c1 + -> Foreign Scan on public.ft4 + Output: 
ft4.c1, ft4.c2, ft4.c3, ft4.* + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 3" FOR UPDATE + -> Sort + Output: ft5.c1, ft5.c2, ft5.c3, ft5.* + Sort Key: ft5.c1 + -> Foreign Scan on public.ft5 + Output: ft5.c1, ft5.c2, ft5.c3, ft5.* + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4" FOR UPDATE +(39 rows) + +SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 + AND ft1.c1 = ft5.c1 FOR UPDATE; + c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | c1 | c2 | c3 | c1 | c2 | c3 +----+----+-------+------------------------------+--------------------------+----+------------+-----+----+----+-------+------------------------------+--------------------------+----+------------+-----+----+----+--------+----+----+-------- + 6 | 6 | 00006 | Wed Jan 07 00:00:00 1970 PST | Wed Jan 07 00:00:00 1970 | 6 | 6 | foo | 6 | 6 | 00006 | Wed Jan 07 00:00:00 1970 PST | Wed Jan 07 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 12 | 2 | 00012 | Tue Jan 13 00:00:00 1970 PST | Tue Jan 13 00:00:00 1970 | 2 | 2 | foo | 12 | 2 | 00012 | Tue Jan 13 00:00:00 1970 PST | Tue Jan 13 00:00:00 1970 | 2 | 2 | foo | 12 | 13 | AAA012 | 12 | 13 | AAA012 + 18 | 8 | 00018 | Mon Jan 19 00:00:00 1970 PST | Mon Jan 19 00:00:00 1970 | 8 | 8 | foo | 18 | 8 | 00018 | Mon Jan 19 00:00:00 1970 PST | Mon Jan 19 00:00:00 1970 | 8 | 8 | foo | 18 | 19 | AAA018 | 18 | 19 | + 24 | 4 | 00024 | Sun Jan 25 00:00:00 1970 PST | Sun Jan 25 00:00:00 1970 | 4 | 4 | foo | 24 | 4 | 00024 | Sun Jan 25 00:00:00 1970 PST | Sun Jan 25 00:00:00 1970 | 4 | 4 | foo | 24 | 25 | AAA024 | 24 | 25 | AAA024 + 30 | 0 | 00030 | Sat Jan 31 00:00:00 1970 PST | Sat Jan 31 00:00:00 1970 | 0 | 0 | foo | 30 | 0 | 00030 | Sat Jan 31 00:00:00 1970 PST | Sat Jan 31 00:00:00 1970 | 0 | 0 | foo | 30 | 31 | AAA030 | 30 | 31 | AAA030 + 36 | 6 | 00036 | Fri Feb 06 00:00:00 1970 PST | Fri Feb 06 00:00:00 1970 | 6 | 6 | foo | 36 | 6 | 00036 | Fri Feb 06 00:00:00 1970 PST | Fri Feb 06 00:00:00 1970 | 6 | 6 | foo | 36 | 37 | AAA036 | 36 | 37 | + 42 | 2 | 00042 | Thu Feb 12 00:00:00 1970 PST | Thu Feb 12 00:00:00 1970 | 2 | 2 | foo | 42 | 2 | 00042 | Thu Feb 12 00:00:00 1970 PST | Thu Feb 12 00:00:00 1970 | 2 | 2 | foo | 42 | 43 | AAA042 | 42 | 43 | AAA042 + 48 | 8 | 00048 | Wed Feb 18 00:00:00 1970 PST | Wed Feb 18 00:00:00 1970 | 8 | 8 | foo | 48 | 8 | 00048 | Wed Feb 18 00:00:00 1970 PST | Wed Feb 18 00:00:00 1970 | 8 | 8 | foo | 48 | 49 | AAA048 | 48 | 49 | AAA048 + 54 | 4 | 00054 | Tue Feb 24 00:00:00 1970 PST | Tue Feb 24 00:00:00 1970 | 4 | 4 | foo | 54 | 4 | 00054 | Tue Feb 24 00:00:00 1970 PST | Tue Feb 24 00:00:00 1970 | 4 | 4 | foo | 54 | 55 | AAA054 | 54 | 55 | + 60 | 0 | 00060 | Mon Mar 02 00:00:00 1970 PST | Mon Mar 02 00:00:00 1970 | 0 | 0 | foo | 60 | 0 | 00060 | Mon Mar 02 00:00:00 1970 PST | Mon Mar 02 00:00:00 1970 | 0 | 0 | foo | 60 | 61 | AAA060 | 60 | 61 | AAA060 + 66 | 6 | 00066 | Sun Mar 08 00:00:00 1970 PST | Sun Mar 08 00:00:00 1970 | 6 | 6 | foo | 66 | 6 | 00066 | Sun Mar 08 00:00:00 1970 PST | Sun Mar 08 00:00:00 1970 | 6 | 6 | foo | 66 | 67 | AAA066 | 66 | 67 | AAA066 + 72 | 2 | 00072 | Sat Mar 14 00:00:00 1970 PST | Sat Mar 14 00:00:00 1970 | 2 | 2 | foo | 72 | 2 | 00072 | Sat Mar 14 00:00:00 1970 PST | Sat Mar 14 00:00:00 1970 | 2 | 2 | foo | 72 | 73 | AAA072 | 72 | 73 | + 78 | 8 | 00078 | Fri Mar 20 00:00:00 1970 PST | Fri Mar 20 00:00:00 1970 | 8 | 8 | foo | 78 | 8 | 00078 | Fri Mar 20 00:00:00 1970 PST | Fri Mar 20 00:00:00 1970 | 8 | 8 | foo | 78 | 79 | AAA078 | 78 | 79 | AAA078 + 84 | 4 | 00084 
| Thu Mar 26 00:00:00 1970 PST | Thu Mar 26 00:00:00 1970 | 4 | 4 | foo | 84 | 4 | 00084 | Thu Mar 26 00:00:00 1970 PST | Thu Mar 26 00:00:00 1970 | 4 | 4 | foo | 84 | 85 | AAA084 | 84 | 85 | AAA084 + 90 | 0 | 00090 | Wed Apr 01 00:00:00 1970 PST | Wed Apr 01 00:00:00 1970 | 0 | 0 | foo | 90 | 0 | 00090 | Wed Apr 01 00:00:00 1970 PST | Wed Apr 01 00:00:00 1970 | 0 | 0 | foo | 90 | 91 | AAA090 | 90 | 91 | + 96 | 6 | 00096 | Tue Apr 07 00:00:00 1970 PST | Tue Apr 07 00:00:00 1970 | 6 | 6 | foo | 96 | 6 | 00096 | Tue Apr 07 00:00:00 1970 PST | Tue Apr 07 00:00:00 1970 | 6 | 6 | foo | 96 | 97 | AAA096 | 96 | 97 | AAA096 +(16 rows) + -- check join pushdown in situations where multiple userids are involved CREATE ROLE regress_view_owner SUPERUSER; CREATE USER MAPPING FOR regress_view_owner SERVER loopback; diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index dc302ca789..7ff43337a9 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -4330,10 +4330,26 @@ add_paths_with_pathkeys_for_rel(PlannerInfo *root, RelOptInfo *rel, Cost startup_cost; Cost total_cost; List *useful_pathkeys = lfirst(lc); + Path *sorted_epq_path; estimate_path_cost_size(root, rel, NIL, useful_pathkeys, &rows, &width, &startup_cost, &total_cost); + /* + * The EPQ path must be at least as well sorted as the path itself, + * in case it gets used as input to a mergejoin. + */ + sorted_epq_path = epq_path; + if (sorted_epq_path != NULL && + !pathkeys_contained_in(useful_pathkeys, + sorted_epq_path->pathkeys)) + sorted_epq_path = (Path *) + create_sort_path(root, + rel, + sorted_epq_path, + useful_pathkeys, + -1.0); + add_path(rel, (Path *) create_foreignscan_path(root, rel, NULL, @@ -4342,7 +4358,7 @@ add_paths_with_pathkeys_for_rel(PlannerInfo *root, RelOptInfo *rel, total_cost, useful_pathkeys, NULL, - epq_path, + sorted_epq_path, NIL)); } } diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 2bdf7d8c1e..e73c258ff4 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -559,6 +559,13 @@ EXPLAIN (VERBOSE, COSTS OFF) SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5.c1 = ft4.c1 WHERE ft4.c1 BETWEEN 10 and 30 ORDER BY ft5.c1, ft4.c1; SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5.c1 = ft4.c1 WHERE ft4.c1 BETWEEN 10 and 30 ORDER BY ft5.c1, ft4.c1; +-- multi-way join involving multiple merge joins +EXPLAIN (VERBOSE, COSTS OFF) +SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 + AND ft1.c1 = ft5.c1 FOR UPDATE; +SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 + AND ft1.c1 = ft5.c1 FOR UPDATE; + -- check join pushdown in situations where multiple userids are involved CREATE ROLE regress_view_owner SUPERUSER; CREATE USER MAPPING FOR regress_view_owner SERVER loopback; From f033462d8f77c40b7d6b33c5116e50118fb4699d Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 17 Jan 2018 18:09:57 -0500 Subject: [PATCH 0844/1087] Reorder C includes Reorder header files in joinrels.c and pathnode.c in alphabetical order, removing unnecessary ones. 
Author: Etsuro Fujita --- src/backend/optimizer/path/joinrels.c | 4 +--- src/backend/optimizer/util/pathnode.c | 4 ++-- 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/src/backend/optimizer/path/joinrels.c b/src/backend/optimizer/path/joinrels.c index 1d152c514e..a35d068911 100644 --- a/src/backend/optimizer/path/joinrels.c +++ b/src/backend/optimizer/path/joinrels.c @@ -16,15 +16,13 @@ #include "miscadmin.h" #include "catalog/partition.h" -#include "nodes/relation.h" #include "optimizer/clauses.h" #include "optimizer/joininfo.h" #include "optimizer/pathnode.h" #include "optimizer/paths.h" #include "optimizer/prep.h" -#include "optimizer/cost.h" -#include "utils/memutils.h" #include "utils/lsyscache.h" +#include "utils/memutils.h" static void make_rels_by_clause_joins(PlannerInfo *root, diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 48b4db72bc..fa4b4683e5 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -17,8 +17,9 @@ #include #include "miscadmin.h" -#include "nodes/nodeFuncs.h" +#include "foreign/fdwapi.h" #include "nodes/extensible.h" +#include "nodes/nodeFuncs.h" #include "optimizer/clauses.h" #include "optimizer/cost.h" #include "optimizer/pathnode.h" @@ -29,7 +30,6 @@ #include "optimizer/tlist.h" #include "optimizer/var.h" #include "parser/parsetree.h" -#include "foreign/fdwapi.h" #include "utils/lsyscache.h" #include "utils/memutils.h" #include "utils/selfuncs.h" From a063d842f8f48e197f5a9bfb892210ce219c5556 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 18 Jan 2018 09:34:51 -0500 Subject: [PATCH 0845/1087] doc: Expand documentation of session_replication_role --- doc/src/sgml/config.sgml | 26 ++++++++++++++++++++++++-- doc/src/sgml/ref/alter_table.sgml | 26 ++++++++++++++++++++++++-- 2 files changed, 48 insertions(+), 4 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index e4a01699e4..37a61a13c8 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -6506,8 +6506,30 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; superuser privilege and results in discarding any previously cached query plans. Possible values are origin (the default), replica and local. - See for - more information. + + + + The intended use of this setting is that logical replication systems + set it to replica when they are applying replicated + changes. The effect of that will be that triggers and rules (that + have not been altered from their default configuration) will not fire + on the replica. See the clauses + ENABLE TRIGGER and ENABLE RULE + for more information. + + + + PostgreSQL treats the settings origin and + local the same internally. Third-party replication + systems may use these two values for their internal purposes, for + example using local to designate a session whose + changes should not be replicated. + + + + Since foreign keys are implemented as triggers, setting this parameter + to replica also disables all foreign key checks, + which can leave data in an inconsistent state if improperly used. diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 7bcf242846..686bb2c11c 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -456,14 +456,30 @@ ALTER TABLE [ IF EXISTS ] name requires superuser privileges; it should be done with caution since of course the integrity of the constraint cannot be guaranteed if the triggers are not executed. 
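[Editor's note: the following is an illustrative sketch, not part of the
patch. It shows the behavior documented above with invented object names:
an ordinarily enabled trigger stops firing once the session switches to
the replica role, while an ENABLE ALWAYS trigger keeps firing. Setting
session_replication_role requires superuser privilege.]

    CREATE TABLE srr_demo (a int);
    CREATE FUNCTION srr_note() RETURNS trigger LANGUAGE plpgsql
        AS $$ BEGIN RAISE NOTICE 'fired'; RETURN NEW; END $$;
    CREATE TRIGGER srr_trig AFTER INSERT ON srr_demo
        FOR EACH ROW EXECUTE PROCEDURE srr_note();

    SET session_replication_role = replica;
    INSERT INTO srr_demo VALUES (1);  -- no NOTICE: plain triggers are skipped
    ALTER TABLE srr_demo ENABLE ALWAYS TRIGGER srr_trig;
    INSERT INTO srr_demo VALUES (2);  -- NOTICE: fires regardless of role
    RESET session_replication_role;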
+ + + The trigger firing mechanism is also affected by the configuration variable . Simply enabled - triggers will fire when the replication role is origin + triggers (the default) will fire when the replication role is origin (the default) or local. Triggers configured as ENABLE REPLICA will only fire if the session is in replica mode, and triggers configured as ENABLE ALWAYS will - fire regardless of the current replication mode. + fire regardless of the current replication role. + + + + The effect of this mechanism is that in the default configuration, + triggers do not fire on replicas. This is useful because if a trigger + is used on the origin to propagate data between tables, then the + replication system will also replicate the propagated data, and the + trigger should not fire a second time on the replica, because that would + lead to duplication. However, if a trigger is used for another purpose + such as creating external alerts, then it might be appropriate to set it + to ENABLE ALWAYS so that it is also fired on + replicas. + This command acquires a SHARE ROW EXCLUSIVE lock. @@ -481,6 +497,12 @@ ALTER TABLE [ IF EXISTS ] name are always applied in order to keep views working even if the current session is in a non-default replication role. + + + The rule firing mechanism is also affected by the configuration variable + , analogous to triggers as + described above. + From 2082b3745a7165d10788d55c5b6c609a8d39d729 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 18 Jan 2018 11:09:44 -0500 Subject: [PATCH 0846/1087] Extend configure's __int128 test to check for a known gcc bug. On Sparc64, use of __attribute__(aligned(8)) with __int128 causes faulty code generation in gcc versions at least through 5.5.0. We can work around that by disabling use of __int128, so teach configure to test for the bug. This solution doesn't fix things for the case of cross-compiling with a buggy compiler; to support that nicely, we'd need to add a manual disable switch. Unless more such cases turn up, it doesn't seem worth the work. Affected users could always edit pg_config.h manually. In passing, fix some typos in the existing configure test for __int128. They're harmless because we only compile that code not run it, but they're still confusing for anyone looking at it closely. This is needed in support of commit 751804998, so back-patch to 9.5 as that was. Marina Polyakova, Victor Wagner, Tom Lane Discussion: https://postgr.es/m/0d3a9fa264cebe1cb9966f37b7c06e86@postgrespro.ru --- config/c-compiler.m4 | 48 ++++++++++++++++++++++++----- configure | 72 +++++++++++++++++++++++++++++++++++++++----- 2 files changed, 105 insertions(+), 15 deletions(-) diff --git a/config/c-compiler.m4 b/config/c-compiler.m4 index 076656c77f..689bb7f181 100644 --- a/config/c-compiler.m4 +++ b/config/c-compiler.m4 @@ -108,29 +108,61 @@ AC_DEFUN([PGAC_TYPE_128BIT_INT], [AC_CACHE_CHECK([for __int128], [pgac_cv__128bit_int], [AC_LINK_IFELSE([AC_LANG_PROGRAM([ /* + * We don't actually run this test, just link it to verify that any support + * functions needed for __int128 are present. + * * These are globals to discourage the compiler from folding all the * arithmetic tests down to compile-time constants. We do not have - * convenient support for 64bit literals at this point... + * convenient support for 128bit literals at this point... 
*/ __int128 a = 48828125; -__int128 b = 97656255; +__int128 b = 97656250; ],[ __int128 c,d; a = (a << 12) + 1; /* 200000000001 */ b = (b << 12) + 5; /* 400000000005 */ -/* use the most relevant arithmetic ops */ +/* try the most relevant arithmetic ops */ c = a * b; d = (c + b) / b; -/* return different values, to prevent optimizations */ +/* must use the results, else compiler may optimize arithmetic away */ if (d != a+1) - return 0; -return 1; + return 1; ])], [pgac_cv__128bit_int=yes], [pgac_cv__128bit_int=no])]) if test x"$pgac_cv__128bit_int" = xyes ; then - AC_DEFINE(PG_INT128_TYPE, __int128, [Define to the name of a signed 128-bit integer type.]) - AC_CHECK_ALIGNOF(PG_INT128_TYPE) + # Use of non-default alignment with __int128 tickles bugs in some compilers. + # If not cross-compiling, we can test for bugs and disable use of __int128 + # with buggy compilers. If cross-compiling, hope for the best. + # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83925 + AC_CACHE_CHECK([for __int128 alignment bug], [pgac_cv__128bit_int_bug], + [AC_RUN_IFELSE([AC_LANG_PROGRAM([ +/* This must match the corresponding code in c.h: */ +#if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__) +#define pg_attribute_aligned(a) __attribute__((aligned(a))) +#endif +typedef __int128 int128a +#if defined(pg_attribute_aligned) +pg_attribute_aligned(8) +#endif +; +int128a holder; +void pass_by_val(void *buffer, int128a par) { holder = par; } +],[ +long int i64 = 97656225L << 12; +int128a q; +pass_by_val(main, (int128a) i64); +q = (int128a) i64; +if (q != holder) + return 1; +])], + [pgac_cv__128bit_int_bug=ok], + [pgac_cv__128bit_int_bug=broken], + [pgac_cv__128bit_int_bug="assuming ok"])]) + if test x"$pgac_cv__128bit_int_bug" != xbroken ; then + AC_DEFINE(PG_INT128_TYPE, __int128, [Define to the name of a signed 128-bit integer type.]) + AC_CHECK_ALIGNOF(PG_INT128_TYPE) + fi fi])# PGAC_TYPE_128BIT_INT diff --git a/configure b/configure index 45221e1ea3..7dcca506f8 100755 --- a/configure +++ b/configure @@ -14996,12 +14996,15 @@ else /* end confdefs.h. */ /* + * We don't actually run this test, just link it to verify that any support + * functions needed for __int128 are present. + * * These are globals to discourage the compiler from folding all the * arithmetic tests down to compile-time constants. We do not have - * convenient support for 64bit literals at this point... + * convenient support for 128bit literals at this point... */ __int128 a = 48828125; -__int128 b = 97656255; +__int128 b = 97656250; int main () @@ -15010,13 +15013,12 @@ main () __int128 c,d; a = (a << 12) + 1; /* 200000000001 */ b = (b << 12) + 5; /* 400000000005 */ -/* use the most relevant arithmetic ops */ +/* try the most relevant arithmetic ops */ c = a * b; d = (c + b) / b; -/* return different values, to prevent optimizations */ +/* must use the results, else compiler may optimize arithmetic away */ if (d != a+1) - return 0; -return 1; + return 1; ; return 0; @@ -15033,10 +15035,65 @@ fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__128bit_int" >&5 $as_echo "$pgac_cv__128bit_int" >&6; } if test x"$pgac_cv__128bit_int" = xyes ; then + # Use of non-default alignment with __int128 tickles bugs in some compilers. + # If not cross-compiling, we can test for bugs and disable use of __int128 + # with buggy compilers. If cross-compiling, hope for the best. 
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83925 + { $as_echo "$as_me:${as_lineno-$LINENO}: checking for __int128 alignment bug" >&5 +$as_echo_n "checking for __int128 alignment bug... " >&6; } +if ${pgac_cv__128bit_int_bug+:} false; then : + $as_echo_n "(cached) " >&6 +else + if test "$cross_compiling" = yes; then : + pgac_cv__128bit_int_bug="assuming ok" +else + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ + +/* This must match the corresponding code in c.h: */ +#if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__) +#define pg_attribute_aligned(a) __attribute__((aligned(a))) +#endif +typedef __int128 int128a +#if defined(pg_attribute_aligned) +pg_attribute_aligned(8) +#endif +; +int128a holder; +void pass_by_val(void *buffer, int128a par) { holder = par; } + +int +main () +{ + +long int i64 = 97656225L << 12; +int128a q; +pass_by_val(main, (int128a) i64); +q = (int128a) i64; +if (q != holder) + return 1; + + ; + return 0; +} +_ACEOF +if ac_fn_c_try_run "$LINENO"; then : + pgac_cv__128bit_int_bug=ok +else + pgac_cv__128bit_int_bug=broken +fi +rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ + conftest.$ac_objext conftest.beam conftest.$ac_ext +fi + +fi +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__128bit_int_bug" >&5 +$as_echo "$pgac_cv__128bit_int_bug" >&6; } + if test x"$pgac_cv__128bit_int_bug" != xbroken ; then $as_echo "#define PG_INT128_TYPE __int128" >>confdefs.h - # The cast to long int works around a bug in the HP C Compiler, + # The cast to long int works around a bug in the HP C Compiler, # see AC_CHECK_SIZEOF for more information. { $as_echo "$as_me:${as_lineno-$LINENO}: checking alignment of PG_INT128_TYPE" >&5 $as_echo_n "checking alignment of PG_INT128_TYPE... " >&6; } @@ -15071,6 +15128,7 @@ cat >>confdefs.h <<_ACEOF _ACEOF + fi fi # Check for various atomic operations now that we have checked how to declare From 77216cae47e3ded13f36361f60ce04ec0a709e2a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 18 Jan 2018 11:24:07 -0500 Subject: [PATCH 0847/1087] Add tests for session_replication_role This was hardly tested at all. The trigger case was lightly tested by the logical replication tests, but rules and event triggers were not tested at all. --- src/test/regress/expected/event_trigger.out | 30 ++++++++++++++---- src/test/regress/expected/rules.out | 34 +++++++++++++++++++++ src/test/regress/expected/triggers.out | 10 +++++- src/test/regress/sql/event_trigger.sql | 23 ++++++++++---- src/test/regress/sql/rules.sql | 26 ++++++++++++++++ src/test/regress/sql/triggers.sql | 5 +++ 6 files changed, 115 insertions(+), 13 deletions(-) diff --git a/src/test/regress/expected/event_trigger.out b/src/test/regress/expected/event_trigger.out index 906dcb8b31..88c6803081 100644 --- a/src/test/regress/expected/event_trigger.out +++ b/src/test/regress/expected/event_trigger.out @@ -88,16 +88,34 @@ create event trigger regress_event_trigger_noperms on ddl_command_start ERROR: permission denied to create event trigger "regress_event_trigger_noperms" HINT: Must be superuser to create an event trigger. 
reset role; --- all OK -alter event trigger regress_event_trigger enable replica; -alter event trigger regress_event_trigger enable always; -alter event trigger regress_event_trigger enable; +-- test enabling and disabling alter event trigger regress_event_trigger disable; --- regress_event_trigger2 and regress_event_trigger_end should fire, but not --- regress_event_trigger +-- fires _trigger2 and _trigger_end should fire, but not _trigger create table event_trigger_fire1 (a int); NOTICE: test_event_trigger: ddl_command_start CREATE TABLE NOTICE: test_event_trigger: ddl_command_end CREATE TABLE +alter event trigger regress_event_trigger enable; +set session_replication_role = replica; +-- fires nothing +create table event_trigger_fire2 (a int); +alter event trigger regress_event_trigger enable replica; +-- fires only _trigger +create table event_trigger_fire3 (a int); +NOTICE: test_event_trigger: ddl_command_start CREATE TABLE +alter event trigger regress_event_trigger enable always; +-- fires only _trigger +create table event_trigger_fire4 (a int); +NOTICE: test_event_trigger: ddl_command_start CREATE TABLE +reset session_replication_role; +-- fires all three +create table event_trigger_fire5 (a int); +NOTICE: test_event_trigger: ddl_command_start CREATE TABLE +NOTICE: test_event_trigger: ddl_command_start CREATE TABLE +NOTICE: test_event_trigger: ddl_command_end CREATE TABLE +-- clean up +alter event trigger regress_event_trigger disable; +drop table event_trigger_fire2, event_trigger_fire3, event_trigger_fire4, event_trigger_fire5; +NOTICE: test_event_trigger: ddl_command_end DROP TABLE -- regress_event_trigger_end should fire on these commands grant all on table event_trigger_fire1 to public; NOTICE: test_event_trigger: ddl_command_end GRANT diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out index f1c1b44d6f..5433944c6a 100644 --- a/src/test/regress/expected/rules.out +++ b/src/test/regress/expected/rules.out @@ -3233,3 +3233,37 @@ CREATE RULE parted_table_insert AS ON INSERT to parted_table DO INSTEAD INSERT INTO parted_table_1 VALUES (NEW.*); ALTER RULE parted_table_insert ON parted_table RENAME TO parted_table_insert_redirect; DROP TABLE parted_table; +-- +-- Test enabling/disabling +-- +CREATE TABLE ruletest1 (a int); +CREATE TABLE ruletest2 (b int); +CREATE RULE rule1 AS ON INSERT TO ruletest1 + DO INSTEAD INSERT INTO ruletest2 VALUES (NEW.*); +INSERT INTO ruletest1 VALUES (1); +ALTER TABLE ruletest1 DISABLE RULE rule1; +INSERT INTO ruletest1 VALUES (2); +ALTER TABLE ruletest1 ENABLE RULE rule1; +SET session_replication_role = replica; +INSERT INTO ruletest1 VALUES (3); +ALTER TABLE ruletest1 ENABLE REPLICA RULE rule1; +INSERT INTO ruletest1 VALUES (4); +RESET session_replication_role; +INSERT INTO ruletest1 VALUES (5); +SELECT * FROM ruletest1; + a +--- + 2 + 3 + 5 +(3 rows) + +SELECT * FROM ruletest2; + b +--- + 1 + 4 +(2 rows) + +DROP TABLE ruletest1; +DROP TABLE ruletest2; diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 49cd7a1338..9a7aafcc96 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -569,6 +569,12 @@ insert into trigtest default values; alter table trigtest enable trigger trigtest_a_stmt_tg; insert into trigtest default values; NOTICE: trigtest INSERT AFTER STATEMENT +set session_replication_role = replica; +insert into trigtest default values; -- does not trigger +alter table trigtest enable always trigger trigtest_a_stmt_tg; 
+insert into trigtest default values; -- now it does +NOTICE: trigtest INSERT AFTER STATEMENT +reset session_replication_role; insert into trigtest2 values(1); insert into trigtest2 values(2); delete from trigtest where i=2; @@ -595,7 +601,9 @@ select * from trigtest; 3 4 5 -(3 rows) + 6 + 7 +(5 rows) drop table trigtest2; drop table trigtest; diff --git a/src/test/regress/sql/event_trigger.sql b/src/test/regress/sql/event_trigger.sql index b65bf3ec66..ef7faf0ab7 100644 --- a/src/test/regress/sql/event_trigger.sql +++ b/src/test/regress/sql/event_trigger.sql @@ -89,15 +89,26 @@ create event trigger regress_event_trigger_noperms on ddl_command_start execute procedure test_event_trigger(); reset role; --- all OK +-- test enabling and disabling +alter event trigger regress_event_trigger disable; +-- fires _trigger2 and _trigger_end should fire, but not _trigger +create table event_trigger_fire1 (a int); +alter event trigger regress_event_trigger enable; +set session_replication_role = replica; +-- fires nothing +create table event_trigger_fire2 (a int); alter event trigger regress_event_trigger enable replica; +-- fires only _trigger +create table event_trigger_fire3 (a int); alter event trigger regress_event_trigger enable always; -alter event trigger regress_event_trigger enable; +-- fires only _trigger +create table event_trigger_fire4 (a int); +reset session_replication_role; +-- fires all three +create table event_trigger_fire5 (a int); +-- clean up alter event trigger regress_event_trigger disable; - --- regress_event_trigger2 and regress_event_trigger_end should fire, but not --- regress_event_trigger -create table event_trigger_fire1 (a int); +drop table event_trigger_fire2, event_trigger_fire3, event_trigger_fire4, event_trigger_fire5; -- regress_event_trigger_end should fire on these commands grant all on table event_trigger_fire1 to public; diff --git a/src/test/regress/sql/rules.sql b/src/test/regress/sql/rules.sql index 0ded0f01d2..0823c02acf 100644 --- a/src/test/regress/sql/rules.sql +++ b/src/test/regress/sql/rules.sql @@ -1177,3 +1177,29 @@ CREATE RULE parted_table_insert AS ON INSERT to parted_table DO INSTEAD INSERT INTO parted_table_1 VALUES (NEW.*); ALTER RULE parted_table_insert ON parted_table RENAME TO parted_table_insert_redirect; DROP TABLE parted_table; + +-- +-- Test enabling/disabling +-- +CREATE TABLE ruletest1 (a int); +CREATE TABLE ruletest2 (b int); + +CREATE RULE rule1 AS ON INSERT TO ruletest1 + DO INSTEAD INSERT INTO ruletest2 VALUES (NEW.*); + +INSERT INTO ruletest1 VALUES (1); +ALTER TABLE ruletest1 DISABLE RULE rule1; +INSERT INTO ruletest1 VALUES (2); +ALTER TABLE ruletest1 ENABLE RULE rule1; +SET session_replication_role = replica; +INSERT INTO ruletest1 VALUES (3); +ALTER TABLE ruletest1 ENABLE REPLICA RULE rule1; +INSERT INTO ruletest1 VALUES (4); +RESET session_replication_role; +INSERT INTO ruletest1 VALUES (5); + +SELECT * FROM ruletest1; +SELECT * FROM ruletest2; + +DROP TABLE ruletest1; +DROP TABLE ruletest2; diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index 81c632ef7e..47b5bde390 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -401,6 +401,11 @@ alter table trigtest disable trigger user; insert into trigtest default values; alter table trigtest enable trigger trigtest_a_stmt_tg; insert into trigtest default values; +set session_replication_role = replica; +insert into trigtest default values; -- does not trigger +alter table trigtest enable always trigger 
trigtest_a_stmt_tg; +insert into trigtest default values; -- now it does +reset session_replication_role; insert into trigtest2 values(1); insert into trigtest2 values(2); delete from trigtest where i=2; From 958c7ae0b7ca4ee9d422271c2ffbef4e3a6d1c47 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 18 Jan 2018 13:00:49 -0500 Subject: [PATCH 0848/1087] Fix typo and improve punctuation --- src/test/ssl/t/001_ssltests.pl | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl index a0a06825c6..28837a1391 100644 --- a/src/test/ssl/t/001_ssltests.pl +++ b/src/test/ssl/t/001_ssltests.pl @@ -46,20 +46,20 @@ $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test"; -# The server should not accept non-SSL connections +# The server should not accept non-SSL connections. note "test that the server doesn't accept non-SSL connections"; test_connect_fails($common_connstr, "sslmode=disable"); # Try without a root cert. In sslmode=require, this should work. In verify-ca -# or verify-full mode it should fail +# or verify-full mode it should fail. note "connect without server root cert"; test_connect_ok($common_connstr, "sslrootcert=invalid sslmode=require"); test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-ca"); test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-full"); -# Try with wrong root cert, should fail. (we're using the client CA as the -# root, but the server's key is signed by the server CA) -note "connect without wrong server root cert"; +# Try with wrong root cert, should fail. (We're using the client CA as the +# root, but the server's key is signed by the server CA.) +note "connect with wrong server root cert"; test_connect_fails($common_connstr, "sslrootcert=ssl/client_ca.crt sslmode=require"); test_connect_fails($common_connstr, From a228e44ce4a2bfd1de3764763039cfcb009d7864 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 18 Jan 2018 19:36:34 -0500 Subject: [PATCH 0849/1087] Update comment The "callback" that this comment was referring to was removed by commit c0a15e07cd718cb6e455e68328f522ac076a0e4b, so update to match the current code. --- src/backend/libpq/be-secure-openssl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index c2032c2f30..fc6e8a0a88 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -31,7 +31,7 @@ * The downside to EDH is that it makes it impossible to * use ssldump(1) if there's a problem establishing an SSL * session. In this case you'll need to temporarily disable - * EDH by commenting out the callback. + * EDH (see initialize_dh()). 
  *
  *-------------------------------------------------------------------------
  */

From 4e54dd2e0a750352ce2a5c45d1cc9183e887eec3 Mon Sep 17 00:00:00 2001
From: Simon Riggs
Date: Fri, 19 Jan 2018 06:36:17 +0000
Subject: [PATCH 0850/1087] Fix typo in recent commit

Typo in 9c7d06d60680c7f00d931233873dee81fdb311c6

Reported-by: Masahiko Sawada
---
 src/backend/replication/slotfuncs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c
index 93d2e20f76..cf2195bc93 100644
--- a/src/backend/replication/slotfuncs.c
+++ b/src/backend/replication/slotfuncs.c
@@ -462,7 +462,7 @@ pg_replication_slot_advance(PG_FUNCTION_ARGS)
 
 	/*
 	 * We can't move slot past what's been flushed/replayed so clamp the
-	 * target possition accordingly.
+	 * target position accordingly.
 	 */
 	if (!RecoveryInProgress())
 		moveto = Min(moveto, GetFlushRecPtr());

From 29d58fd3adae9057c3fd502393b2f131bc96eaf9 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Fri, 19 Jan 2018 07:48:44 -0500
Subject: [PATCH 0851/1087] Transfer state pertaining to pending REINDEX operations to workers.

This will allow the pending patch for parallel CREATE INDEX to work
on system catalogs, and to provide the same level of protection
against use of user indexes while they are being rebuilt that we
have for non-parallel CREATE INDEX.

Patch by me, reviewed by Peter Geoghegan.

Discussion: http://postgr.es/m/CA+TgmoYN-YQU9JsGQcqFLovZ-C+Xgp1_xhJQad=cunGG-_p5gg@mail.gmail.com
Discussion: http://postgr.es/m/CAH2-Wzkv4UNkXYhqQRqk-u9rS7h5c-4cCW+EqQ8K_WSeS43aZg@mail.gmail.com
---
 src/backend/access/transam/README.parallel |  3 +
 src/backend/access/transam/parallel.c      | 18 +++++-
 src/backend/catalog/index.c                | 75 +++++++++++++++++++++-
 src/include/catalog/index.h                |  4 ++
 4 files changed, 98 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/transam/README.parallel b/src/backend/access/transam/README.parallel
index 5c33c40ae9..32994719e3 100644
--- a/src/backend/access/transam/README.parallel
+++ b/src/backend/access/transam/README.parallel
@@ -122,6 +122,9 @@ worker. This includes:
    values are restored, this incidentally sets SessionUserId and OuterUserId
    to the correct values. This final step restores CurrentUserId.
 
+  - State related to pending REINDEX operations, which prevents access to
+    an index that is currently being rebuilt.
+
 To prevent undetected or unprincipled deadlocks when running in parallel
 mode, this code should eventually handle heavyweight locks in some way.
 This is not implemented yet.

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index f720896e50..0a0157a878 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -18,6 +18,7 @@
 #include "access/session.h"
 #include "access/xact.h"
 #include "access/xlog.h"
+#include "catalog/index.h"
 #include "catalog/namespace.h"
 #include "commands/async.h"
 #include "executor/execParallel.h"
@@ -67,6 +68,7 @@
 #define PARALLEL_KEY_TRANSACTION_STATE	UINT64CONST(0xFFFFFFFFFFFF0008)
 #define PARALLEL_KEY_ENTRYPOINT			UINT64CONST(0xFFFFFFFFFFFF0009)
 #define PARALLEL_KEY_SESSION_DSM		UINT64CONST(0xFFFFFFFFFFFF000A)
+#define PARALLEL_KEY_REINDEX_STATE		UINT64CONST(0xFFFFFFFFFFFF000B)
 
 /* Fixed-size parallel state.
*/ typedef struct FixedParallelState @@ -200,6 +202,7 @@ InitializeParallelDSM(ParallelContext *pcxt) Size tsnaplen = 0; Size asnaplen = 0; Size tstatelen = 0; + Size reindexlen = 0; Size segsize = 0; int i; FixedParallelState *fps; @@ -249,8 +252,10 @@ InitializeParallelDSM(ParallelContext *pcxt) tstatelen = EstimateTransactionStateSpace(); shm_toc_estimate_chunk(&pcxt->estimator, tstatelen); shm_toc_estimate_chunk(&pcxt->estimator, sizeof(dsm_handle)); + reindexlen = EstimateReindexStateSpace(); + shm_toc_estimate_chunk(&pcxt->estimator, reindexlen); /* If you add more chunks here, you probably need to add keys. */ - shm_toc_estimate_keys(&pcxt->estimator, 7); + shm_toc_estimate_keys(&pcxt->estimator, 8); /* Estimate space need for error queues. */ StaticAssertStmt(BUFFERALIGN(PARALLEL_ERROR_QUEUE_SIZE) == @@ -319,6 +324,7 @@ InitializeParallelDSM(ParallelContext *pcxt) char *tsnapspace; char *asnapspace; char *tstatespace; + char *reindexspace; char *error_queue_space; char *session_dsm_handle_space; char *entrypointstate; @@ -360,6 +366,11 @@ InitializeParallelDSM(ParallelContext *pcxt) SerializeTransactionState(tstatelen, tstatespace); shm_toc_insert(pcxt->toc, PARALLEL_KEY_TRANSACTION_STATE, tstatespace); + /* Serialize reindex state. */ + reindexspace = shm_toc_allocate(pcxt->toc, reindexlen); + SerializeReindexState(reindexlen, reindexspace); + shm_toc_insert(pcxt->toc, PARALLEL_KEY_REINDEX_STATE, reindexspace); + /* Allocate space for worker information. */ pcxt->worker = palloc0(sizeof(ParallelWorkerInfo) * pcxt->nworkers); @@ -972,6 +983,7 @@ ParallelWorkerMain(Datum main_arg) char *tsnapspace; char *asnapspace; char *tstatespace; + char *reindexspace; StringInfoData msgbuf; char *session_dsm_handle_space; @@ -1137,6 +1149,10 @@ ParallelWorkerMain(Datum main_arg) /* Set ParallelMasterBackendId so we know how to address temp relations. */ ParallelMasterBackendId = fps->parallel_master_backend_id; + /* Restore reindex state. */ + reindexspace = shm_toc_lookup(toc, PARALLEL_KEY_REINDEX_STATE, false); + RestoreReindexState(reindexspace); + /* * We've initialized all of our state now; nothing should change * hereafter. diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c index 330488b96f..007b929a6f 100644 --- a/src/backend/catalog/index.c +++ b/src/backend/catalog/index.c @@ -86,6 +86,18 @@ typedef struct tups_inserted; } v_i_state; +/* + * Pointer-free representation of variables used when reindexing system + * catalogs; we use this to propagate those values to parallel workers. + */ +typedef struct +{ + Oid currentlyReindexedHeap; + Oid currentlyReindexedIndex; + int numPendingReindexedIndexes; + Oid pendingReindexedIndexes[FLEXIBLE_ARRAY_MEMBER]; +} SerializedReindexState; + /* non-export function prototypes */ static bool relationHasPrimaryKey(Relation rel); static TupleDesc ConstructTupleDescriptor(Relation heapRelation, @@ -3653,7 +3665,8 @@ reindex_relation(Oid relid, int flags, int options) * When we are busy reindexing a system index, this code provides support * for preventing catalog lookups from using that index. We also make use * of this to catch attempted uses of user indexes during reindexing of - * those indexes. + * those indexes. This information is propagated to parallel workers; + * attempting to change it during a parallel operation is not permitted. 
* ---------------------------------------------------------------- */ @@ -3719,6 +3732,8 @@ SetReindexProcessing(Oid heapOid, Oid indexOid) static void ResetReindexProcessing(void) { + if (IsInParallelMode()) + elog(ERROR, "cannot modify reindex state during a parallel operation"); currentlyReindexedHeap = InvalidOid; currentlyReindexedIndex = InvalidOid; } @@ -3736,6 +3751,8 @@ SetReindexPending(List *indexes) /* Reindexing is not re-entrant. */ if (pendingReindexedIndexes) elog(ERROR, "cannot reindex while reindexing"); + if (IsInParallelMode()) + elog(ERROR, "cannot modify reindex state during a parallel operation"); pendingReindexedIndexes = list_copy(indexes); } @@ -3746,6 +3763,8 @@ SetReindexPending(List *indexes) static void RemoveReindexPending(Oid indexOid) { + if (IsInParallelMode()) + elog(ERROR, "cannot modify reindex state during a parallel operation"); pendingReindexedIndexes = list_delete_oid(pendingReindexedIndexes, indexOid); } @@ -3757,5 +3776,59 @@ RemoveReindexPending(Oid indexOid) static void ResetReindexPending(void) { + if (IsInParallelMode()) + elog(ERROR, "cannot modify reindex state during a parallel operation"); pendingReindexedIndexes = NIL; } + +/* + * EstimateReindexStateSpace + * Estimate space needed to pass reindex state to parallel workers. + */ +extern Size +EstimateReindexStateSpace(void) +{ + return offsetof(SerializedReindexState, pendingReindexedIndexes) + + mul_size(sizeof(Oid), list_length(pendingReindexedIndexes)); +} + +/* + * SerializeReindexState + * Serialize reindex state for parallel workers. + */ +extern void +SerializeReindexState(Size maxsize, char *start_address) +{ + SerializedReindexState *sistate = (SerializedReindexState *) start_address; + int c = 0; + ListCell *lc; + + sistate->currentlyReindexedHeap = currentlyReindexedHeap; + sistate->currentlyReindexedIndex = currentlyReindexedIndex; + sistate->numPendingReindexedIndexes = list_length(pendingReindexedIndexes); + foreach(lc, pendingReindexedIndexes) + sistate->pendingReindexedIndexes[c++] = lfirst_oid(lc); +} + +/* + * RestoreReindexState + * Restore reindex state in a parallel worker. 
+ */
+extern void
+RestoreReindexState(void *reindexstate)
+{
+	SerializedReindexState *sistate = (SerializedReindexState *) reindexstate;
+	int			c = 0;
+	MemoryContext oldcontext;
+
+	currentlyReindexedHeap = sistate->currentlyReindexedHeap;
+	currentlyReindexedIndex = sistate->currentlyReindexedIndex;
+
+	Assert(pendingReindexedIndexes == NIL);
+	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	for (c = 0; c < sistate->numPendingReindexedIndexes; ++c)
+		pendingReindexedIndexes =
+			lappend_oid(pendingReindexedIndexes,
+						sistate->pendingReindexedIndexes[c]);
+	MemoryContextSwitchTo(oldcontext);
+}

diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index 12bf35567a..4790f0c735 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -134,4 +134,8 @@ extern bool ReindexIsProcessingHeap(Oid heapOid);
 extern bool ReindexIsProcessingIndex(Oid indexOid);
 extern Oid	IndexGetRelation(Oid indexId, bool missing_ok);
 
+extern Size EstimateReindexStateSpace(void);
+extern void SerializeReindexState(Size maxsize, char *start_address);
+extern void RestoreReindexState(void *reindexstate);
+
 #endif							/* INDEX_H */

From 1ef61ddce9086c30a18a6ecc48bc3ce0ef62cb39 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera
Date: Fri, 19 Jan 2018 10:15:08 -0300
Subject: [PATCH 0852/1087] Fix StoreCatalogInheritance1 to use 32bit inhseqno
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

For no apparent reason, this function was using a 16bit-wide inhseqno
value, rather than the correct 32 bit width which is what is stored in
the pg_inherits catalog. This becomes evident if you try to create a
table with more than 65535 parents, because this error appears:

ERROR: duplicate key value violates unique constraint «pg_inherits_relid_seqno_index»
DETAIL: Key (inhrelid, inhseqno)=(329371, 0) already exists.

Needless to say, having so many parents is an uncommon situation, which
explains why this error has never been reported despite having been
introduced with the Postgres95 1.01 sources in commit d31084e9d111:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/creatinh.c;hb=d31084e9d111#l349

Backpatch all the way back.

David Rowley noticed this while reviewing a patch of mine.
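[Editor's note: an illustrative reproduction sketch, not part of the
patch. Object names and the loop are invented; the point is that with
65537 parents the 65537th inhseqno truncates to 1 in int16 and collides
with the first parent's entry on an unpatched server.]

    DO $$
    DECLARE
        parents text;
    BEGIN
        FOR i IN 1..65537 LOOP
            EXECUTE format('CREATE TABLE p%s (a int)', i);
        END LOOP;
        SELECT string_agg(format('p%s', i), ', ')
          INTO parents
          FROM generate_series(1, 65537) AS i;
        -- before the fix, this fails with the duplicate-key error above
        EXECUTE 'CREATE TABLE child () INHERITS (' || parents || ')';
    END $$;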
Discussion: https://postgr.es/m/CAKJS1f8Dn7swSEhOWwzZzssW7747YB=2Hi+T7uGud40dur69-g@mail.gmail.com
---
 src/backend/commands/tablecmds.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index f2a928b823..59806349cc 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -303,7 +303,7 @@ static void MergeConstraintsIntoExisting(Relation child_rel, Relation parent_rel
 static void StoreCatalogInheritance(Oid relationId, List *supers,
 						bool child_is_partition);
 static void StoreCatalogInheritance1(Oid relationId, Oid parentOid,
-						 int16 seqNumber, Relation inhRelation,
+						 int32 seqNumber, Relation inhRelation,
 						 bool child_is_partition);
 static int	findAttrByName(const char *attributeName, List *schema);
 static void AlterIndexNamespaces(Relation classRel, Relation rel,
@@ -2352,7 +2352,7 @@ StoreCatalogInheritance(Oid relationId, List *supers,
 					bool child_is_partition)
 {
 	Relation	relation;
-	int16		seqNumber;
+	int32		seqNumber;
 	ListCell   *entry;
 
 	/*
@@ -2393,7 +2393,7 @@ StoreCatalogInheritance(Oid relationId,
  */
 static void
 StoreCatalogInheritance1(Oid relationId, Oid parentOid,
-						 int16 seqNumber, Relation inhRelation,
+						 int32 seqNumber, Relation inhRelation,
 						 bool child_is_partition)
 {
 	TupleDesc	desc = RelationGetDescr(inhRelation);
@@ -2408,7 +2408,7 @@ StoreCatalogInheritance1(Oid relationId, Oid parentOid,
 	 */
 	values[Anum_pg_inherits_inhrelid - 1] = ObjectIdGetDatum(relationId);
 	values[Anum_pg_inherits_inhparent - 1] = ObjectIdGetDatum(parentOid);
-	values[Anum_pg_inherits_inhseqno - 1] = Int16GetDatum(seqNumber);
+	values[Anum_pg_inherits_inhseqno - 1] = Int32GetDatum(seqNumber);
 
 	memset(nulls, 0, sizeof(nulls));

From 8b08f7d4820fd7a8ef6152a9dd8c6e3cb01e5f99 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera
Date: Fri, 19 Jan 2018 11:49:22 -0300
Subject: [PATCH 0853/1087] Local partitioned indexes

When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder
since the table proper has no data of its own), and recurse to create
actual indexes on the existing partitions; create them in future
partitions also.

As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones. Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped in isolation unless they are detached first.

To support pg_dump'ing these indexes, add commands CREATE INDEX ON ONLY
(which creates the index on the parent partitioned table, without
recursing) and ALTER INDEX ATTACH PARTITION (which is used after the
indexes have been created individually on each partition, to attach
them to the parent index). These reconstruct prior database state
exactly.
Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit Langote, Jesper Pedersen, Simon Riggs, David Rowley Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql --- doc/src/sgml/catalogs.sgml | 23 + doc/src/sgml/ref/alter_index.sgml | 14 + doc/src/sgml/ref/alter_table.sgml | 8 +- doc/src/sgml/ref/create_index.sgml | 33 +- doc/src/sgml/ref/reindex.sgml | 5 + src/backend/access/common/reloptions.c | 1 + src/backend/access/heap/heapam.c | 9 +- src/backend/access/index/indexam.c | 3 +- src/backend/bootstrap/bootparse.y | 2 + src/backend/catalog/aclchk.c | 9 +- src/backend/catalog/dependency.c | 14 +- src/backend/catalog/heap.c | 1 + src/backend/catalog/index.c | 203 +++++- src/backend/catalog/objectaddress.c | 5 +- src/backend/catalog/pg_depend.c | 13 +- src/backend/catalog/pg_inherits.c | 80 +++ src/backend/catalog/toasting.c | 2 + src/backend/commands/indexcmds.c | 397 +++++++++++- src/backend/commands/tablecmds.c | 653 +++++++++++++++++-- src/backend/nodes/copyfuncs.c | 1 + src/backend/nodes/equalfuncs.c | 1 + src/backend/nodes/outfuncs.c | 1 + src/backend/parser/gram.y | 33 +- src/backend/parser/parse_utilcmd.c | 65 +- src/backend/tcop/utility.c | 22 + src/backend/utils/adt/amutils.c | 3 +- src/backend/utils/adt/ruleutils.c | 17 +- src/backend/utils/cache/relcache.c | 39 +- src/bin/pg_dump/common.c | 107 ++- src/bin/pg_dump/pg_dump.c | 102 ++- src/bin/pg_dump/pg_dump.h | 11 + src/bin/pg_dump/pg_dump_sort.c | 56 +- src/bin/pg_dump/t/002_pg_dump.pl | 95 +++ src/bin/psql/describe.c | 20 +- src/bin/psql/tab-complete.c | 34 +- src/include/catalog/dependency.h | 15 + src/include/catalog/index.h | 10 + src/include/catalog/pg_class.h | 1 + src/include/catalog/pg_inherits_fn.h | 3 + src/include/commands/defrem.h | 3 +- src/include/nodes/execnodes.h | 1 + src/include/nodes/parsenodes.h | 7 +- src/include/parser/parse_utilcmd.h | 3 + src/test/regress/expected/alter_table.out | 65 +- src/test/regress/expected/indexing.out | 757 ++++++++++++++++++++++ src/test/regress/parallel_schedule | 2 +- src/test/regress/serial_schedule | 1 + src/test/regress/sql/alter_table.sql | 16 + src/test/regress/sql/indexing.sql | 388 +++++++++++ 49 files changed, 3172 insertions(+), 182 deletions(-) create mode 100644 src/test/regress/expected/indexing.out create mode 100644 src/test/regress/sql/indexing.sql diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index 3f02202caf..71e20f2740 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -2995,6 +2995,29 @@ SCRAM-SHA-256$<iteration count>:&l + + DEPENDENCY_INTERNAL_AUTO (I) + + + The dependent object was created as part of creation of the + referenced object, and is really just a part of its internal + implementation. A DROP of the dependent object + will be disallowed outright (we'll tell the user to issue a + DROP against the referenced object, instead). + While a regular internal dependency will prevent + the dependent object from being dropped while any such dependencies + remain, DEPENDENCY_INTERNAL_AUTO will allow such + a drop as long as the object can be found by following any of such + dependencies. + Example: an index on a partition is made internal-auto-dependent on + both the partition itself as well as on the index on the parent + partitioned table; so the partition index is dropped together with + either the partition it indexes, or with the parent index it is + attached to. 
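[Editor's note: an illustrative sketch of how the new internal-auto
dependency behaves at the SQL level, not part of the patch; all object
names are invented.]

    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE TABLE part1 PARTITION OF parted FOR VALUES IN (1);
    CREATE INDEX parted_a_idx ON parted (a);  -- also creates part1_a_idx
    DROP INDEX part1_a_idx;   -- rejected: it is required by parted_a_idx
    DROP INDEX parted_a_idx;  -- succeeds, and drops part1_a_idx with it
    -- dropping part1 alone would likewise remove part1_a_idx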
+ + + + DEPENDENCY_EXTENSION (e) diff --git a/doc/src/sgml/ref/alter_index.sgml b/doc/src/sgml/ref/alter_index.sgml index e54237272c..c0606689f0 100644 --- a/doc/src/sgml/ref/alter_index.sgml +++ b/doc/src/sgml/ref/alter_index.sgml @@ -23,6 +23,7 @@ PostgreSQL documentation ALTER INDEX [ IF EXISTS ] name RENAME TO new_name ALTER INDEX [ IF EXISTS ] name SET TABLESPACE tablespace_name +ALTER INDEX name ATTACH PARTITION index_name ALTER INDEX name DEPENDS ON EXTENSION extension_name ALTER INDEX [ IF EXISTS ] name SET ( storage_parameter = value [, ... ] ) ALTER INDEX [ IF EXISTS ] name RESET ( storage_parameter [, ... ] ) @@ -75,6 +76,19 @@ ALTER INDEX ALL IN TABLESPACE name + + ATTACH PARTITION + + + Causes the named index to become attached to the altered index. + The named index must be on a partition of the table containing the + index being altered, and have an equivalent definition. An attached + index cannot be dropped by itself, and will automatically be dropped + if its parent index is dropped. + + + + DEPENDS ON EXTENSION diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 686bb2c11c..286c7a8589 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -805,7 +805,10 @@ ALTER TABLE [ IF EXISTS ] name as a partition of the target table. The table can be attached as a partition for specific values using FOR VALUES or as a default partition by using DEFAULT - . + . For each index in the target table, a corresponding + one will be created in the attached table; or, if an equivalent + index already exists, will be attached to the target table's index, + as if ALTER INDEX ATTACH PARTITION had been executed. @@ -866,7 +869,8 @@ ALTER TABLE [ IF EXISTS ] name This form detaches specified partition of the target table. The detached partition continues to exist as a standalone table, but no longer has any - ties to the table from which it was detached. + ties to the table from which it was detached. Any indexes that were + attached to the target table's indexes are detached. diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index 025537575b..5137fe6383 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -21,7 +21,7 @@ PostgreSQL documentation -CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] name ] ON table_name [ USING method ] +CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] name ] ON [ ONLY ] table_name [ USING method ] ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] ) [ WITH ( storage_parameter = value [, ... ] ) ] [ TABLESPACE tablespace_name ] @@ -151,6 +151,16 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] + + ONLY + + + Indicates not to recurse creating indexes on partitions, if the + table is partitioned. The default is to recurse. + + + + table_name @@ -545,6 +555,27 @@ Indexes: linkend="xindex"/>. + + When CREATE INDEX is invoked on a partitioned + table, the default behavior is to recurse to all partitions to ensure + they all have matching indexes. + Each partition is first checked to determine whether an equivalent + index already exists, and if so, that index will become attached as a + partition index to the index being created, which will become its + parent index. 
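[Editor's note: an illustrative sketch of the two workflows described
above, not part of the patch. It reuses the invented parted/part1 names
from the previous sketch; the two forms are alternatives, not one
script.]

    -- default form: recurses, creating (or attaching an equivalent
    -- pre-existing) index on every partition
    CREATE INDEX ON parted (a);

    -- pg_dump-style form: create an invalid parent index first, then
    -- build and attach each partition's index individually
    CREATE INDEX parted_a_idx ON ONLY parted (a);
    CREATE INDEX part1_a_idx ON part1 (a);
    ALTER INDEX parted_a_idx ATTACH PARTITION part1_a_idx;
    -- the parent index becomes valid once every partition is attached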
+ If no matching index exists, a new index will be created and + automatically attached; the name of the new index in each partition + will be determined as if no index name had been specified in the + command. + If the ONLY option is specified, no recursion + is done, and the index is marked invalid + (ALTER INDEX ... ATTACH PARTITION turns the index + valid, once all partitions acquire the index.) Note, however, that + any partition that is created in the future using + CREATE TABLE ... PARTITION OF will automatically + contain the index regardless of whether this option was specified. + + For index methods that support ordered scans (currently, only B-tree), the optional clauses ASC, DESC, NULLS diff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml index 79f6931c6a..1c21fafb80 100644 --- a/doc/src/sgml/ref/reindex.sgml +++ b/doc/src/sgml/ref/reindex.sgml @@ -231,6 +231,11 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } + + Reindexing partitioned tables or partitioned indexes is not supported. + Each individual partition can be reindexed separately instead. + + diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c index 425bc5d06e..274f7aa8e9 100644 --- a/src/backend/access/common/reloptions.c +++ b/src/backend/access/common/reloptions.c @@ -993,6 +993,7 @@ extractRelOptions(HeapTuple tuple, TupleDesc tupdesc, options = view_reloptions(datum, false); break; case RELKIND_INDEX: + case RELKIND_PARTITIONED_INDEX: options = index_reloptions(amoptions, datum, false); break; case RELKIND_FOREIGN_TABLE: diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index dbc8f2d6c7..be263850cd 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -1293,7 +1293,8 @@ heap_open(Oid relationId, LOCKMODE lockmode) r = relation_open(relationId, lockmode); - if (r->rd_rel->relkind == RELKIND_INDEX) + if (r->rd_rel->relkind == RELKIND_INDEX || + r->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is an index", @@ -1321,7 +1322,8 @@ heap_openrv(const RangeVar *relation, LOCKMODE lockmode) r = relation_openrv(relation, lockmode); - if (r->rd_rel->relkind == RELKIND_INDEX) + if (r->rd_rel->relkind == RELKIND_INDEX || + r->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is an index", @@ -1353,7 +1355,8 @@ heap_openrv_extended(const RangeVar *relation, LOCKMODE lockmode, if (r) { - if (r->rd_rel->relkind == RELKIND_INDEX) + if (r->rd_rel->relkind == RELKIND_INDEX || + r->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is an index", diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c index 1b61cd9515..91247f0fa5 100644 --- a/src/backend/access/index/indexam.c +++ b/src/backend/access/index/indexam.c @@ -154,7 +154,8 @@ index_open(Oid relationId, LOCKMODE lockmode) r = relation_open(relationId, lockmode); - if (r->rd_rel->relkind != RELKIND_INDEX) + if (r->rd_rel->relkind != RELKIND_INDEX && + r->rd_rel->relkind != RELKIND_PARTITIONED_INDEX) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is not an index", diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y index 8c52846a92..dfd53fa054 100644 --- a/src/backend/bootstrap/bootparse.y +++ b/src/backend/bootstrap/bootparse.y @@ -321,6 +321,7 @@ 
Boot_DeclareIndexStmt: DefineIndex(relationId, stmt, $4, + InvalidOid, false, false, false, @@ -365,6 +366,7 @@ Boot_DeclareUniqueIndexStmt: DefineIndex(relationId, stmt, $5, + InvalidOid, false, false, false, diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c index fac80612b8..50a2e2681b 100644 --- a/src/backend/catalog/aclchk.c +++ b/src/backend/catalog/aclchk.c @@ -1824,7 +1824,8 @@ ExecGrant_Relation(InternalGrant *istmt) pg_class_tuple = (Form_pg_class) GETSTRUCT(tuple); /* Not sensible to grant on an index */ - if (pg_class_tuple->relkind == RELKIND_INDEX) + if (pg_class_tuple->relkind == RELKIND_INDEX || + pg_class_tuple->relkind == RELKIND_PARTITIONED_INDEX) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is an index", @@ -5405,7 +5406,8 @@ recordExtObjInitPriv(Oid objoid, Oid classoid) pg_class_tuple = (Form_pg_class) GETSTRUCT(tuple); /* Indexes don't have permissions */ - if (pg_class_tuple->relkind == RELKIND_INDEX) + if (pg_class_tuple->relkind == RELKIND_INDEX || + pg_class_tuple->relkind == RELKIND_PARTITIONED_INDEX) return; /* Composite types don't have permissions either */ @@ -5690,7 +5692,8 @@ removeExtObjInitPriv(Oid objoid, Oid classoid) pg_class_tuple = (Form_pg_class) GETSTRUCT(tuple); /* Indexes don't have permissions */ - if (pg_class_tuple->relkind == RELKIND_INDEX) + if (pg_class_tuple->relkind == RELKIND_INDEX || + pg_class_tuple->relkind == RELKIND_PARTITIONED_INDEX) return; /* Composite types don't have permissions either */ diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c index 269111b4c1..be60270ea5 100644 --- a/src/backend/catalog/dependency.c +++ b/src/backend/catalog/dependency.c @@ -582,6 +582,7 @@ findDependentObjects(const ObjectAddress *object, /* FALL THRU */ case DEPENDENCY_INTERNAL: + case DEPENDENCY_INTERNAL_AUTO: /* * This object is part of the internal implementation of @@ -633,6 +634,14 @@ findDependentObjects(const ObjectAddress *object, * transform this deletion request into a delete of this * owning object. * + * For INTERNAL_AUTO dependencies, we don't enforce this; + * in other words, we don't follow the links back to the + * owning object. + */ + if (foundDep->deptype == DEPENDENCY_INTERNAL_AUTO) + break; + + /* * First, release caller's lock on this object and get * deletion lock on the owning object. (We must release * caller's lock to avoid deadlock against a concurrent @@ -675,6 +684,7 @@ findDependentObjects(const ObjectAddress *object, /* And we're done here. 
*/ systable_endscan(scan); return; + case DEPENDENCY_PIN: /* @@ -762,6 +772,7 @@ findDependentObjects(const ObjectAddress *object, case DEPENDENCY_AUTO_EXTENSION: subflags = DEPFLAG_AUTO; break; + case DEPENDENCY_INTERNAL_AUTO: case DEPENDENCY_INTERNAL: subflags = DEPFLAG_INTERNAL; break; @@ -1109,7 +1120,8 @@ doDeletion(const ObjectAddress *object, int flags) { char relKind = get_rel_relkind(object->objectId); - if (relKind == RELKIND_INDEX) + if (relKind == RELKIND_INDEX || + relKind == RELKIND_PARTITIONED_INDEX) { bool concurrent = ((flags & PERFORM_DELETION_CONCURRENTLY) != 0); diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 089b7965f2..99f4d59863 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -294,6 +294,7 @@ heap_create(const char *relname, case RELKIND_COMPOSITE_TYPE: case RELKIND_FOREIGN_TABLE: case RELKIND_PARTITIONED_TABLE: + case RELKIND_PARTITIONED_INDEX: create_storage = false; /* diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c index 007b929a6f..f0223416ad 100644 --- a/src/backend/catalog/index.c +++ b/src/backend/catalog/index.c @@ -41,6 +41,8 @@ #include "catalog/pg_collation.h" #include "catalog/pg_constraint.h" #include "catalog/pg_constraint_fn.h" +#include "catalog/pg_depend.h" +#include "catalog/pg_inherits_fn.h" #include "catalog/pg_operator.h" #include "catalog/pg_opclass.h" #include "catalog/pg_tablespace.h" @@ -55,6 +57,7 @@ #include "nodes/nodeFuncs.h" #include "optimizer/clauses.h" #include "parser/parser.h" +#include "rewrite/rewriteManip.h" #include "storage/bufmgr.h" #include "storage/lmgr.h" #include "storage/predicate.h" @@ -110,6 +113,7 @@ static void InitializeAttributeOids(Relation indexRelation, int numatts, Oid indexoid); static void AppendAttributeTuples(Relation indexRelation, int numatts); static void UpdateIndexRelation(Oid indexoid, Oid heapoid, + Oid parentIndexId, IndexInfo *indexInfo, Oid *collationOids, Oid *classOids, @@ -117,7 +121,8 @@ static void UpdateIndexRelation(Oid indexoid, Oid heapoid, bool primary, bool isexclusion, bool immediate, - bool isvalid); + bool isvalid, + bool isready); static void index_update_stats(Relation rel, bool hasindex, bool isprimary, double reltuples); @@ -563,6 +568,7 @@ AppendAttributeTuples(Relation indexRelation, int numatts) static void UpdateIndexRelation(Oid indexoid, Oid heapoid, + Oid parentIndexOid, IndexInfo *indexInfo, Oid *collationOids, Oid *classOids, @@ -570,7 +576,8 @@ UpdateIndexRelation(Oid indexoid, bool primary, bool isexclusion, bool immediate, - bool isvalid) + bool isvalid, + bool isready) { int2vector *indkey; oidvector *indcollation; @@ -644,8 +651,7 @@ UpdateIndexRelation(Oid indexoid, values[Anum_pg_index_indisclustered - 1] = BoolGetDatum(false); values[Anum_pg_index_indisvalid - 1] = BoolGetDatum(isvalid); values[Anum_pg_index_indcheckxmin - 1] = BoolGetDatum(false); - /* we set isvalid and isready the same way */ - values[Anum_pg_index_indisready - 1] = BoolGetDatum(isvalid); + values[Anum_pg_index_indisready - 1] = BoolGetDatum(isready); values[Anum_pg_index_indislive - 1] = BoolGetDatum(true); values[Anum_pg_index_indisreplident - 1] = BoolGetDatum(false); values[Anum_pg_index_indkey - 1] = PointerGetDatum(indkey); @@ -682,6 +688,8 @@ UpdateIndexRelation(Oid indexoid, * indexRelationId: normally, pass InvalidOid to let this routine * generate an OID for the index. During bootstrap this may be * nonzero to specify a preselected OID. 
+ * parentIndexRelid: if creating an index partition, the OID of the + * parent index; otherwise InvalidOid. * relFileNode: normally, pass InvalidOid to get new storage. May be * nonzero to attach an existing valid build. * indexInfo: same info executor uses to insert into the index @@ -707,6 +715,8 @@ UpdateIndexRelation(Oid indexoid, * INDEX_CREATE_IF_NOT_EXISTS: * do not throw an error if a relation with the same name * already exists. + * INDEX_CREATE_PARTITIONED: + * create a partitioned index (table must be partitioned) * constr_flags: flags passed to index_constraint_create * (only if INDEX_CREATE_ADD_CONSTRAINT is set) * allow_system_table_mods: allow table to be a system catalog @@ -718,6 +728,7 @@ Oid index_create(Relation heapRelation, const char *indexRelationName, Oid indexRelationId, + Oid parentIndexRelid, Oid relFileNode, IndexInfo *indexInfo, List *indexColNames, @@ -743,12 +754,18 @@ index_create(Relation heapRelation, int i; char relpersistence; bool isprimary = (flags & INDEX_CREATE_IS_PRIMARY) != 0; + bool invalid = (flags & INDEX_CREATE_INVALID) != 0; bool concurrent = (flags & INDEX_CREATE_CONCURRENT) != 0; + bool partitioned = (flags & INDEX_CREATE_PARTITIONED) != 0; + char relkind; /* constraint flags can only be set when a constraint is requested */ Assert((constr_flags == 0) || ((flags & INDEX_CREATE_ADD_CONSTRAINT) != 0)); + /* partitioned indexes must never be "built" by themselves */ + Assert(!partitioned || (flags & INDEX_CREATE_SKIP_BUILD)); + relkind = partitioned ? RELKIND_PARTITIONED_INDEX : RELKIND_INDEX; is_exclusion = (indexInfo->ii_ExclusionOps != NULL); pg_class = heap_open(RelationRelationId, RowExclusiveLock); @@ -866,9 +883,9 @@ index_create(Relation heapRelation, } /* - * create the index relation's relcache entry and physical disk file. (If - * we fail further down, it's the smgr's responsibility to remove the disk - * file again.) + * create the index relation's relcache entry and, if necessary, the + * physical disk file. (If we fail further down, it's the smgr's + * responsibility to remove the disk file again, if any.) */ indexRelation = heap_create(indexRelationName, namespaceId, @@ -876,7 +893,7 @@ index_create(Relation heapRelation, indexRelationId, relFileNode, indexTupDesc, - RELKIND_INDEX, + relkind, relpersistence, shared_relation, mapped_relation, @@ -933,12 +950,18 @@ index_create(Relation heapRelation, * (Or, could define a rule to maintain the predicate) --Nels, Feb '92 * ---------------- */ - UpdateIndexRelation(indexRelationId, heapRelationId, indexInfo, + UpdateIndexRelation(indexRelationId, heapRelationId, parentIndexRelid, + indexInfo, collationObjectId, classObjectId, coloptions, isprimary, is_exclusion, (constr_flags & INDEX_CONSTR_CREATE_DEFERRABLE) == 0, + !concurrent && !invalid, !concurrent); + /* update pg_inherits, if needed */ + if (OidIsValid(parentIndexRelid)) + StoreSingleInheritance(indexRelationId, parentIndexRelid, 1); + /* * Register constraint and dependencies for the index. * @@ -990,6 +1013,9 @@ index_create(Relation heapRelation, else { bool have_simple_col = false; + DependencyType deptype; + + deptype = OidIsValid(parentIndexRelid) ? 
DEPENDENCY_INTERNAL_AUTO : DEPENDENCY_AUTO; /* Create auto dependencies on simply-referenced columns */ for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++) @@ -1000,7 +1026,7 @@ index_create(Relation heapRelation, referenced.objectId = heapRelationId; referenced.objectSubId = indexInfo->ii_KeyAttrNumbers[i]; - recordDependencyOn(&myself, &referenced, DEPENDENCY_AUTO); + recordDependencyOn(&myself, &referenced, deptype); have_simple_col = true; } @@ -1018,10 +1044,20 @@ index_create(Relation heapRelation, referenced.objectId = heapRelationId; referenced.objectSubId = 0; - recordDependencyOn(&myself, &referenced, DEPENDENCY_AUTO); + recordDependencyOn(&myself, &referenced, deptype); } } + /* Store dependency on parent index, if any */ + if (OidIsValid(parentIndexRelid)) + { + referenced.classId = RelationRelationId; + referenced.objectId = parentIndexRelid; + referenced.objectSubId = 0; + + recordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL_AUTO); + } + /* Store dependency on collations */ /* The default collation is pinned, so don't bother recording it */ for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++) @@ -1567,9 +1603,10 @@ index_drop(Oid indexId, bool concurrent) } /* - * Schedule physical removal of the files + * Schedule physical removal of the files (if any) */ - RelationDropStorage(userIndexRelation); + if (userIndexRelation->rd_rel->relkind != RELKIND_PARTITIONED_INDEX) + RelationDropStorage(userIndexRelation); /* * Close and flush the index's relcache entry, to ensure relcache doesn't @@ -1613,6 +1650,11 @@ index_drop(Oid indexId, bool concurrent) */ DeleteRelationTuple(indexId); + /* + * fix INHERITS relation + */ + DeleteInheritsTuple(indexId, InvalidOid); + /* * We are presently too lazy to attempt to compute the new correct value * of relhasindex (the next VACUUM will fix it if necessary). So there is @@ -1706,12 +1748,120 @@ BuildIndexInfo(Relation index) ii->ii_BrokenHotChain = false; /* set up for possible use by index AM */ + ii->ii_Am = index->rd_rel->relam; ii->ii_AmCache = NULL; ii->ii_Context = CurrentMemoryContext; return ii; } +/* + * CompareIndexInfo + * Return whether the properties of two indexes (in different tables) + * indicate that they have the "same" definitions. + * + * Note: passing collations and opfamilies separately is a kludge. Adding + * them to IndexInfo may result in better coding here and elsewhere. + * + * Use convert_tuples_by_name_map(index2, index1) to build the attmap. + */ +bool +CompareIndexInfo(IndexInfo *info1, IndexInfo *info2, + Oid *collations1, Oid *collations2, + Oid *opfamilies1, Oid *opfamilies2, + AttrNumber *attmap, int maplen) +{ + int i; + + if (info1->ii_Unique != info2->ii_Unique) + return false; + + /* indexes are only equivalent if they have the same access method */ + if (info1->ii_Am != info2->ii_Am) + return false; + + /* and same number of attributes */ + if (info1->ii_NumIndexAttrs != info2->ii_NumIndexAttrs) + return false; + + /* + * and columns match through the attribute map (actual attribute numbers + * might differ!) Note that this implies that index columns that are + * expressions appear in the same positions. We will next compare the + * expressions themselves. 
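+	 *
+	 * For example (hypothetical attnums): if a column is attnum 3 in info2's
+	 * table but attnum 1 in info1's table because of dropped columns, then
+	 * attmap[3 - 1] == 1, so an index key on attnum 3 in info2 matches an
+	 * index key on attnum 1 in info1.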
+	 */
+	for (i = 0; i < info1->ii_NumIndexAttrs; i++)
+	{
+		if (maplen < info2->ii_KeyAttrNumbers[i])
+			elog(ERROR, "incorrect attribute map");
+
+		if (attmap[info2->ii_KeyAttrNumbers[i] - 1] !=
+			info1->ii_KeyAttrNumbers[i])
+			return false;
+
+		if (collations1[i] != collations2[i])
+			return false;
+		if (opfamilies1[i] != opfamilies2[i])
+			return false;
+	}
+
+	/*
+	 * For expression indexes: either both are expression indexes, or neither
+	 * is; if they are, make sure the expressions match.
+	 */
+	if ((info1->ii_Expressions != NIL) != (info2->ii_Expressions != NIL))
+		return false;
+	if (info1->ii_Expressions != NIL)
+	{
+		bool		found_whole_row;
+		Node	   *mapped;
+
+		mapped = map_variable_attnos((Node *) info2->ii_Expressions,
+									 1, 0, attmap, maplen,
+									 InvalidOid, &found_whole_row);
+		if (found_whole_row)
+		{
+			/*
+			 * We could throw an error here, but that seems out of scope for
+			 * this routine.
+			 */
+			return false;
+		}
+
+		if (!equal(info1->ii_Expressions, mapped))
+			return false;
+	}
+
+	/* Partial index predicates must be identical, if they exist */
+	if ((info1->ii_Predicate == NULL) != (info2->ii_Predicate == NULL))
+		return false;
+	if (info1->ii_Predicate != NULL)
+	{
+		bool		found_whole_row;
+		Node	   *mapped;
+
+		mapped = map_variable_attnos((Node *) info2->ii_Predicate,
+									 1, 0, attmap, maplen,
+									 InvalidOid, &found_whole_row);
+		if (found_whole_row)
+		{
+			/*
+			 * We could throw an error here, but that seems out of scope for
+			 * this routine.
+			 */
+			return false;
+		}
+		if (!equal(info1->ii_Predicate, mapped))
+			return false;
+	}
+
+	/* No support currently for comparing exclusion indexes. */
+	if (info1->ii_ExclusionOps != NULL || info2->ii_ExclusionOps != NULL)
+		return false;
+
+	return true;
+}
+
 /* ----------------
  * BuildSpeculativeIndexInfo
  *		Add extra state to IndexInfo record
@@ -1934,6 +2084,9 @@ index_update_stats(Relation rel,
 		elog(ERROR, "could not find tuple for relation %u", relid);
 	rd_rel = (Form_pg_class) GETSTRUCT(tuple);
 
+	/* Should this be a more comprehensive test? */
+	Assert(rd_rel->relkind != RELKIND_PARTITIONED_INDEX);
+
 	/* Apply required updates, if any, to copied tuple */
 
 	dirty = false;
@@ -3343,6 +3496,14 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,
 	 */
 	iRel = index_open(indexId, AccessExclusiveLock);
 
+	/*
+	 * The case of reindexing partitioned tables and indexes is handled
+	 * differently by upper layers, so this case shouldn't arise.
+	 */
+	if (iRel->rd_rel->relkind == RELKIND_PARTITIONED_INDEX)
+		elog(ERROR, "unsupported relation kind for index \"%s\"",
+			 RelationGetRelationName(iRel));
+
 	/*
 	 * Don't allow reindex on temp tables of other backends ... their local
 	 * buffer manager is not going to cope.
@@ -3542,6 +3703,22 @@ reindex_relation(Oid relid, int flags, int options)
 	 */
 	rel = heap_open(relid, ShareLock);
 
+	/*
+	 * This may be useful when implemented someday; but that day is not today.
+	 * For now, avoid erroring out when called in a multi-table context
+	 * (REINDEX SCHEMA) that happens to come across a partitioned table.  The
+	 * partitions may be reindexed on their own anyway.
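+	 *
+	 * For example, a hypothetical
+	 *		REINDEX SCHEMA public;
+	 * that comes across a partitioned table emits the warning below and
+	 * carries on with the remaining relations instead of failing outright.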
+ */ + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + { + ereport(WARNING, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("REINDEX of partitioned tables is not yet implemented, skipping \"%s\"", + RelationGetRelationName(rel)))); + heap_close(rel, ShareLock); + return false; + } + toast_relid = rel->rd_rel->reltoastrelid; /* diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c index bc999ca3c4..7576606c1b 100644 --- a/src/backend/catalog/objectaddress.c +++ b/src/backend/catalog/objectaddress.c @@ -1217,7 +1217,8 @@ get_relation_by_qualified_name(ObjectType objtype, List *object, switch (objtype) { case OBJECT_INDEX: - if (relation->rd_rel->relkind != RELKIND_INDEX) + if (relation->rd_rel->relkind != RELKIND_INDEX && + relation->rd_rel->relkind != RELKIND_PARTITIONED_INDEX) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is not an index", @@ -3483,6 +3484,7 @@ getRelationDescription(StringInfo buffer, Oid relid) relname); break; case RELKIND_INDEX: + case RELKIND_PARTITIONED_INDEX: appendStringInfo(buffer, _("index %s"), relname); break; @@ -3957,6 +3959,7 @@ getRelationTypeDescription(StringInfo buffer, Oid relid, int32 objectSubId) appendStringInfoString(buffer, "table"); break; case RELKIND_INDEX: + case RELKIND_PARTITIONED_INDEX: appendStringInfoString(buffer, "index"); break; case RELKIND_SEQUENCE: diff --git a/src/backend/catalog/pg_depend.c b/src/backend/catalog/pg_depend.c index 9dfbe123b5..2ea05f350b 100644 --- a/src/backend/catalog/pg_depend.c +++ b/src/backend/catalog/pg_depend.c @@ -656,14 +656,19 @@ get_constraint_index(Oid constraintId) /* * We assume any internal dependency of an index on the constraint - * must be what we are looking for. (The relkind test is just - * paranoia; there shouldn't be any such dependencies otherwise.) + * must be what we are looking for. */ if (deprec->classid == RelationRelationId && deprec->objsubid == 0 && - deprec->deptype == DEPENDENCY_INTERNAL && - get_rel_relkind(deprec->objid) == RELKIND_INDEX) + deprec->deptype == DEPENDENCY_INTERNAL) { + char relkind = get_rel_relkind(deprec->objid); + + /* This is pure paranoia; there shouldn't be any such */ + if (relkind != RELKIND_INDEX && + relkind != RELKIND_PARTITIONED_INDEX) + break; + indexId = deprec->objid; break; } diff --git a/src/backend/catalog/pg_inherits.c b/src/backend/catalog/pg_inherits.c index b32d677347..5a5beb9273 100644 --- a/src/backend/catalog/pg_inherits.c +++ b/src/backend/catalog/pg_inherits.c @@ -405,3 +405,83 @@ typeInheritsFrom(Oid subclassTypeId, Oid superclassTypeId) return result; } + +/* + * Create a single pg_inherits row with the given data + */ +void +StoreSingleInheritance(Oid relationId, Oid parentOid, int32 seqNumber) +{ + Datum values[Natts_pg_inherits]; + bool nulls[Natts_pg_inherits]; + HeapTuple tuple; + Relation inhRelation; + + inhRelation = heap_open(InheritsRelationId, RowExclusiveLock); + + /* + * Make the pg_inherits entry + */ + values[Anum_pg_inherits_inhrelid - 1] = ObjectIdGetDatum(relationId); + values[Anum_pg_inherits_inhparent - 1] = ObjectIdGetDatum(parentOid); + values[Anum_pg_inherits_inhseqno - 1] = Int32GetDatum(seqNumber); + + memset(nulls, 0, sizeof(nulls)); + + tuple = heap_form_tuple(RelationGetDescr(inhRelation), values, nulls); + + CatalogTupleInsert(inhRelation, tuple); + + heap_freetuple(tuple); + + heap_close(inhRelation, RowExclusiveLock); +} + +/* + * DeleteInheritsTuple + * + * Delete pg_inherits tuples with the given inhrelid. 
inhparent may be given + * as InvalidOid, in which case all tuples matching inhrelid are deleted; + * otherwise only delete tuples with the specified inhparent. + * + * Returns whether at least one row was deleted. + */ +bool +DeleteInheritsTuple(Oid inhrelid, Oid inhparent) +{ + bool found = false; + Relation catalogRelation; + ScanKeyData key; + SysScanDesc scan; + HeapTuple inheritsTuple; + + /* + * Find pg_inherits entries by inhrelid. + */ + catalogRelation = heap_open(InheritsRelationId, RowExclusiveLock); + ScanKeyInit(&key, + Anum_pg_inherits_inhrelid, + BTEqualStrategyNumber, F_OIDEQ, + ObjectIdGetDatum(inhrelid)); + scan = systable_beginscan(catalogRelation, InheritsRelidSeqnoIndexId, + true, NULL, 1, &key); + + while (HeapTupleIsValid(inheritsTuple = systable_getnext(scan))) + { + Oid parent; + + /* Compare inhparent if it was given, and do the actual deletion. */ + parent = ((Form_pg_inherits) GETSTRUCT(inheritsTuple))->inhparent; + if (!OidIsValid(inhparent) || parent == inhparent) + { + CatalogTupleDelete(catalogRelation, &inheritsTuple->t_self); + found = true; + } + } + + /* Done */ + systable_endscan(scan); + heap_close(catalogRelation, RowExclusiveLock); + + return found; +} diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c index 0b4b5631a1..cf37011b73 100644 --- a/src/backend/catalog/toasting.c +++ b/src/backend/catalog/toasting.c @@ -315,6 +315,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, indexInfo->ii_ReadyForInserts = true; indexInfo->ii_Concurrent = false; indexInfo->ii_BrokenHotChain = false; + indexInfo->ii_Am = BTREE_AM_OID; indexInfo->ii_AmCache = NULL; indexInfo->ii_Context = CurrentMemoryContext; @@ -328,6 +329,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, coloptions[1] = 0; index_create(toast_rel, toast_idxname, toastIndexOid, InvalidOid, + InvalidOid, indexInfo, list_make2("chunk_id", "chunk_seq"), BTREE_AM_OID, diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index 9e6ba92008..8118a39a7b 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -23,7 +23,10 @@ #include "catalog/catalog.h" #include "catalog/index.h" #include "catalog/indexing.h" +#include "catalog/partition.h" #include "catalog/pg_am.h" +#include "catalog/pg_inherits.h" +#include "catalog/pg_inherits_fn.h" #include "catalog/pg_opclass.h" #include "catalog/pg_opfamily.h" #include "catalog/pg_tablespace.h" @@ -35,6 +38,7 @@ #include "commands/tablespace.h" #include "mb/pg_wchar.h" #include "miscadmin.h" +#include "nodes/makefuncs.h" #include "nodes/nodeFuncs.h" #include "optimizer/clauses.h" #include "optimizer/planner.h" @@ -42,6 +46,7 @@ #include "parser/parse_coerce.h" #include "parser/parse_func.h" #include "parser/parse_oper.h" +#include "rewrite/rewriteManip.h" #include "storage/lmgr.h" #include "storage/proc.h" #include "storage/procarray.h" @@ -77,6 +82,7 @@ static char *ChooseIndexNameAddition(List *colnames); static List *ChooseIndexColumnNames(List *indexElems); static void RangeVarCallbackForReindexIndex(const RangeVar *relation, Oid relId, Oid oldRelId, void *arg); +static void ReindexPartitionedIndex(Relation parentIdx); /* * CheckIndexCompatible @@ -183,6 +189,7 @@ CheckIndexCompatible(Oid oldId, indexInfo->ii_ExclusionOps = NULL; indexInfo->ii_ExclusionProcs = NULL; indexInfo->ii_ExclusionStrats = NULL; + indexInfo->ii_Am = accessMethodId; indexInfo->ii_AmCache = NULL; indexInfo->ii_Context = CurrentMemoryContext; typeObjectId = (Oid *) 
palloc(numberOfAttributes * sizeof(Oid)); @@ -292,14 +299,15 @@ CheckIndexCompatible(Oid oldId, * 'stmt': IndexStmt describing the properties of the new index. * 'indexRelationId': normally InvalidOid, but during bootstrap can be * nonzero to specify a preselected OID for the index. + * 'parentIndexId': the OID of the parent index; InvalidOid if not the child + * of a partitioned index. * 'is_alter_table': this is due to an ALTER rather than a CREATE operation. * 'check_rights': check for CREATE rights in namespace and tablespace. (This * should be true except when ALTER is deleting/recreating an index.) * 'check_not_in_use': check for table not already in use in current session. * This should be true unless caller is holding the table open, in which * case the caller had better have checked it earlier. - * 'skip_build': make the catalog entries but leave the index file empty; - * it will be filled later. + * 'skip_build': make the catalog entries but don't create the index files * 'quiet': suppress the NOTICE chatter ordinarily provided for constraints. * * Returns the object address of the created index. @@ -308,6 +316,7 @@ ObjectAddress DefineIndex(Oid relationId, IndexStmt *stmt, Oid indexRelationId, + Oid parentIndexId, bool is_alter_table, bool check_rights, bool check_not_in_use, @@ -330,6 +339,7 @@ DefineIndex(Oid relationId, IndexAmRoutine *amRoutine; bool amcanorder; amoptions_function amoptions; + bool partitioned; Datum reloptions; int16 *coloptions; IndexInfo *indexInfo; @@ -382,23 +392,56 @@ DefineIndex(Oid relationId, { case RELKIND_RELATION: case RELKIND_MATVIEW: + case RELKIND_PARTITIONED_TABLE: /* OK */ break; case RELKIND_FOREIGN_TABLE: + /* + * Custom error message for FOREIGN TABLE since the term is close + * to a regular table and can confuse the user. + */ ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("cannot create index on foreign table \"%s\"", RelationGetRelationName(rel)))); - case RELKIND_PARTITIONED_TABLE: - ereport(ERROR, - (errcode(ERRCODE_WRONG_OBJECT_TYPE), - errmsg("cannot create index on partitioned table \"%s\"", - RelationGetRelationName(rel)))); default: ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is not a table or materialized view", RelationGetRelationName(rel)))); + break; + } + + /* + * Establish behavior for partitioned tables, and verify sanity of + * parameters. + * + * We do not build an actual index in this case; we only create a few + * catalog entries. The actual indexes are built by recursing for each + * partition. 
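+	 *
+	 * For example, given a hypothetical partitioned table "parted" with a
+	 * single partition "part1",
+	 *		CREATE INDEX parted_a_idx ON parted (a);
+	 * creates only catalog entries for the index on "parted" itself and then
+	 * recurses to build a real index on "part1", attaching it to the parent.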
+	 */
+	partitioned = rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE;
+	if (partitioned)
+	{
+		if (stmt->concurrent)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("cannot create index on partitioned table \"%s\" concurrently",
+							RelationGetRelationName(rel))));
+		if (stmt->unique)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("cannot create unique index on partitioned table \"%s\"",
+							RelationGetRelationName(rel))));
+		if (stmt->excludeOpNames)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("cannot create exclusion constraints on partitioned table \"%s\"",
+							RelationGetRelationName(rel))));
+		if (stmt->primary || stmt->isconstraint)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("cannot create constraints on partitioned tables")));
+	}
 
 	/*
@@ -574,6 +617,7 @@ DefineIndex(Oid relationId,
 	indexInfo->ii_ReadyForInserts = !stmt->concurrent;
 	indexInfo->ii_Concurrent = stmt->concurrent;
 	indexInfo->ii_BrokenHotChain = false;
+	indexInfo->ii_Am = accessMethodId;
 	indexInfo->ii_AmCache = NULL;
 	indexInfo->ii_Context = CurrentMemoryContext;
 
@@ -665,19 +709,24 @@ DefineIndex(Oid relationId,
 	/*
 	 * Make the catalog entries for the index, including constraints. This
 	 * step also actually builds the index, except if caller requested not to
-	 * or in concurrent mode, in which case it'll be done later.
+	 * or in concurrent mode, in which case it'll be done later, or when
+	 * creating a partitioned index (because those don't have storage).
 	 */
 	flags = constr_flags = 0;
 	if (stmt->isconstraint)
 		flags |= INDEX_CREATE_ADD_CONSTRAINT;
-	if (skip_build || stmt->concurrent)
+	if (skip_build || stmt->concurrent || partitioned)
 		flags |= INDEX_CREATE_SKIP_BUILD;
 	if (stmt->if_not_exists)
 		flags |= INDEX_CREATE_IF_NOT_EXISTS;
 	if (stmt->concurrent)
 		flags |= INDEX_CREATE_CONCURRENT;
+	if (partitioned)
+		flags |= INDEX_CREATE_PARTITIONED;
 	if (stmt->primary)
 		flags |= INDEX_CREATE_IS_PRIMARY;
+	if (partitioned && stmt->relation && !stmt->relation->inh)
+		flags |= INDEX_CREATE_INVALID;
 
 	if (stmt->deferrable)
 		constr_flags |= INDEX_CONSTR_CREATE_DEFERRABLE;
@@ -685,8 +734,8 @@ DefineIndex(Oid relationId,
 		constr_flags |= INDEX_CONSTR_CREATE_INIT_DEFERRED;
 
 	indexRelationId =
-		index_create(rel, indexRelationName, indexRelationId, stmt->oldNode,
-					 indexInfo, indexColNames,
+		index_create(rel, indexRelationName, indexRelationId, parentIndexId,
+					 stmt->oldNode, indexInfo, indexColNames,
 					 accessMethodId, tablespaceId,
 					 collationObjectId, classObjectId,
 					 coloptions, reloptions,
@@ -706,6 +755,160 @@ DefineIndex(Oid relationId,
 		CreateComments(indexRelationId, RelationRelationId, 0,
 					   stmt->idxcomment);
 
+	if (partitioned)
+	{
+		/*
+		 * Unless caller specified to skip this step (via ONLY), process
+		 * each partition to make sure they all contain a corresponding index.
+		 *
+		 * If we're called internally (no stmt->relation), recurse always.
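+		 *
+		 * For example, a hypothetical
+		 *		CREATE INDEX parted_a_idx ON ONLY parted (a);
+		 * skips the recursion and leaves the new parent index invalid until
+		 * a matching index has been attached for every partition.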
+		 */
+		if (!stmt->relation || stmt->relation->inh)
+		{
+			PartitionDesc partdesc = RelationGetPartitionDesc(rel);
+			int			nparts = partdesc->nparts;
+			Oid		   *part_oids = palloc(sizeof(Oid) * nparts);
+			bool		invalidate_parent = false;
+			TupleDesc	parentDesc;
+			Oid		   *opfamOids;
+
+			memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);
+
+			parentDesc = CreateTupleDescCopy(RelationGetDescr(rel));
+			opfamOids = palloc(sizeof(Oid) * numberOfAttributes);
+			for (i = 0; i < numberOfAttributes; i++)
+				opfamOids[i] = get_opclass_family(classObjectId[i]);
+
+			heap_close(rel, NoLock);
+
+			/*
+			 * For each partition, scan all existing indexes; if one matches
+			 * our index definition and is not already attached to some other
+			 * parent index, attach it to the one we just created.
+			 *
+			 * If none matches, build a new index by calling ourselves
+			 * recursively with the same options (except for the index name).
+			 */
+			for (i = 0; i < nparts; i++)
+			{
+				Oid			childRelid = part_oids[i];
+				Relation	childrel;
+				List	   *childidxs;
+				ListCell   *cell;
+				AttrNumber *attmap;
+				bool		found = false;
+				int			maplen;
+
+				childrel = heap_open(childRelid, lockmode);
+				childidxs = RelationGetIndexList(childrel);
+				attmap =
+					convert_tuples_by_name_map(RelationGetDescr(childrel),
+											   parentDesc,
+											   gettext_noop("could not convert row type"));
+				maplen = parentDesc->natts;
+
+
+				foreach(cell, childidxs)
+				{
+					Oid			cldidxid = lfirst_oid(cell);
+					Relation	cldidx;
+					IndexInfo  *cldIdxInfo;
+
+					/* this index is already a partition of another one */
+					if (has_superclass(cldidxid))
+						continue;
+
+					cldidx = index_open(cldidxid, lockmode);
+					cldIdxInfo = BuildIndexInfo(cldidx);
+					if (CompareIndexInfo(cldIdxInfo, indexInfo,
+										 cldidx->rd_indcollation,
+										 collationObjectId,
+										 cldidx->rd_opfamily,
+										 opfamOids,
+										 attmap, maplen))
+					{
+						/*
+						 * Found a match.  Attach index to parent and we're
+						 * done, but keep lock till commit.
+						 */
+						IndexSetParentIndex(cldidx, indexRelationId);
+
+						if (!IndexIsValid(cldidx->rd_index))
+							invalidate_parent = true;
+
+						found = true;
+						index_close(cldidx, NoLock);
+						break;
+					}
+
+					index_close(cldidx, lockmode);
+				}
+
+				list_free(childidxs);
+				heap_close(childrel, NoLock);
+
+				/*
+				 * If no matching index was found, create our own.
+				 */
+				if (!found)
+				{
+					IndexStmt  *childStmt = copyObject(stmt);
+					bool		found_whole_row;
+
+					childStmt->whereClause =
+						map_variable_attnos(stmt->whereClause, 1, 0,
+											attmap, maplen,
+											InvalidOid, &found_whole_row);
+					if (found_whole_row)
+						elog(ERROR, "cannot convert whole-row table reference");
+
+					childStmt->idxname = NULL;
+					childStmt->relationId = childRelid;
+					DefineIndex(childRelid, childStmt,
+								InvalidOid, /* no predefined OID */
+								indexRelationId,	/* this is our child */
+								false, check_rights, check_not_in_use,
+								false, quiet);
+				}
+
+				pfree(attmap);
+			}
+
+			/*
+			 * The pg_index row we inserted for this index was marked
+			 * indisvalid=true.  But if we attached an existing index that
+			 * is invalid, this is incorrect, so update our row to
+			 * invalid too.
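+			 * (An existing partition index can be invalid if, for instance,
+			 * it was left behind by a failed CREATE INDEX CONCURRENTLY on
+			 * the partition.)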
+ */ + if (invalidate_parent) + { + Relation pg_index = heap_open(IndexRelationId, RowExclusiveLock); + HeapTuple tup, + newtup; + + tup = SearchSysCache1(INDEXRELID, + ObjectIdGetDatum(indexRelationId)); + if (!tup) + elog(ERROR, "cache lookup failed for index %u", + indexRelationId); + newtup = heap_copytuple(tup); + ((Form_pg_index) GETSTRUCT(newtup))->indisvalid = false; + CatalogTupleUpdate(pg_index, &tup->t_self, newtup); + ReleaseSysCache(tup); + heap_close(pg_index, RowExclusiveLock); + heap_freetuple(newtup); + } + } + else + heap_close(rel, NoLock); + + /* + * Indexes on partitioned tables are not themselves built, so we're + * done here. + */ + return address; + } + if (!stmt->concurrent) { /* Close the heap and we're done, in the non-concurrent case */ @@ -1765,7 +1968,7 @@ ChooseIndexColumnNames(List *indexElems) * ReindexIndex * Recreate a specific index. */ -Oid +void ReindexIndex(RangeVar *indexRelation, int options) { Oid indOid; @@ -1788,12 +1991,17 @@ ReindexIndex(RangeVar *indexRelation, int options) * lock on the index. */ irel = index_open(indOid, NoLock); + + if (irel->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) + { + ReindexPartitionedIndex(irel); + return; + } + persistence = irel->rd_rel->relpersistence; index_close(irel, NoLock); reindex_index(indOid, false, persistence, options); - - return indOid; } /* @@ -1832,7 +2040,8 @@ RangeVarCallbackForReindexIndex(const RangeVar *relation, relkind = get_rel_relkind(relId); if (!relkind) return; - if (relkind != RELKIND_INDEX) + if (relkind != RELKIND_INDEX && + relkind != RELKIND_PARTITIONED_INDEX) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is not an index", relation->relname))); @@ -1976,6 +2185,12 @@ ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind, /* * Only regular tables and matviews can have indexes, so ignore any * other kind of relation. + * + * It is tempting to also consider partitioned tables here, but that + * has the problem that if the children are in the same schema, they + * would be processed twice. Maybe we could have a separate list of + * partitioned tables, and expand that afterwards into relids, + * ignoring any duplicates. */ if (classtuple->relkind != RELKIND_RELATION && classtuple->relkind != RELKIND_MATVIEW) @@ -2038,3 +2253,155 @@ ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind, MemoryContextDelete(private_context); } + +/* + * ReindexPartitionedIndex + * Reindex each child of the given partitioned index. + * + * Not yet implemented. + */ +static void +ReindexPartitionedIndex(Relation parentIdx) +{ + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("REINDEX is not yet implemented for partitioned indexes"))); +} + +/* + * Insert or delete an appropriate pg_inherits tuple to make the given index + * be a partition of the indicated parent index. + * + * This also corrects the pg_depend information for the affected index. + */ +void +IndexSetParentIndex(Relation partitionIdx, Oid parentOid) +{ + Relation pg_inherits; + ScanKeyData key[2]; + SysScanDesc scan; + Oid partRelid = RelationGetRelid(partitionIdx); + HeapTuple tuple; + bool fix_dependencies; + + /* Make sure this is an index */ + Assert(partitionIdx->rd_rel->relkind == RELKIND_INDEX || + partitionIdx->rd_rel->relkind == RELKIND_PARTITIONED_INDEX); + + /* + * Scan pg_inherits for rows linking our index to some parent. 
+	 */
+	pg_inherits = relation_open(InheritsRelationId, RowExclusiveLock);
+	ScanKeyInit(&key[0],
+				Anum_pg_inherits_inhrelid,
+				BTEqualStrategyNumber, F_OIDEQ,
+				ObjectIdGetDatum(partRelid));
+	ScanKeyInit(&key[1],
+				Anum_pg_inherits_inhseqno,
+				BTEqualStrategyNumber, F_INT4EQ,
+				Int32GetDatum(1));
+	scan = systable_beginscan(pg_inherits, InheritsRelidSeqnoIndexId, true,
+							  NULL, 2, key);
+	tuple = systable_getnext(scan);
+
+	if (!HeapTupleIsValid(tuple))
+	{
+		if (parentOid == InvalidOid)
+		{
+			/*
+			 * No pg_inherits row, and no parent wanted: nothing to do in
+			 * this case.
+			 */
+			fix_dependencies = false;
+		}
+		else
+		{
+			Datum		values[Natts_pg_inherits];
+			bool		isnull[Natts_pg_inherits];
+
+			/*
+			 * No pg_inherits row exists, and we want a parent for this index,
+			 * so insert it.
+			 */
+			values[Anum_pg_inherits_inhrelid - 1] = ObjectIdGetDatum(partRelid);
+			values[Anum_pg_inherits_inhparent - 1] =
+				ObjectIdGetDatum(parentOid);
+			values[Anum_pg_inherits_inhseqno - 1] = Int32GetDatum(1);
+			memset(isnull, false, sizeof(isnull));
+
+			tuple = heap_form_tuple(RelationGetDescr(pg_inherits),
+									values, isnull);
+			CatalogTupleInsert(pg_inherits, tuple);
+
+			fix_dependencies = true;
+		}
+	}
+	else
+	{
+		Form_pg_inherits inhForm = (Form_pg_inherits) GETSTRUCT(tuple);
+
+		if (parentOid == InvalidOid)
+		{
+			/*
+			 * There exists a pg_inherits row, which we want to clear; do so.
+			 */
+			CatalogTupleDelete(pg_inherits, &tuple->t_self);
+			fix_dependencies = true;
+		}
+		else
+		{
+			/*
+			 * A pg_inherits row exists.  If it's the one we want, then we're
+			 * good; if it differs, that amounts to a corrupt catalog and
+			 * should not happen.
+			 */
+			if (inhForm->inhparent != parentOid)
+			{
+				/* unexpected: we should not get called in this case */
+				elog(ERROR, "bogus pg_inherits row: inhrelid %u inhparent %u",
+					 inhForm->inhrelid, inhForm->inhparent);
+			}
+
+			/* already in the right state */
+			fix_dependencies = false;
+		}
+	}
+
+	/* done with pg_inherits */
+	systable_endscan(scan);
+	relation_close(pg_inherits, RowExclusiveLock);
+
+	if (fix_dependencies)
+	{
+		ObjectAddress partIdx;
+
+		/*
+		 * Insert/delete pg_depend rows.  If setting a parent, add an
+		 * INTERNAL_AUTO dependency to the parent index; if making standalone,
+		 * remove all existing rows and put back the regular dependency on the
+		 * table.
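+		 *
+		 * For example, after a hypothetical
+		 *		ALTER INDEX parted_a_idx ATTACH PARTITION part1_a_idx;
+		 * part1_a_idx carries an INTERNAL_AUTO dependency on parted_a_idx:
+		 * it can no longer be dropped on its own, and it goes away
+		 * automatically when either parted_a_idx or its own table does.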
+	 */
+	ObjectAddressSet(partIdx, RelationRelationId, partRelid);
+
+	if (OidIsValid(parentOid))
+	{
+		ObjectAddress parentIdx;
+
+		ObjectAddressSet(parentIdx, RelationRelationId, parentOid);
+		recordDependencyOn(&partIdx, &parentIdx, DEPENDENCY_INTERNAL_AUTO);
+	}
+	else
+	{
+		ObjectAddress partitionTbl;
+
+		ObjectAddressSet(partitionTbl, RelationRelationId,
+						 partitionIdx->rd_index->indrelid);
+
+		deleteDependencyRecordsForClass(RelationRelationId, partRelid,
+										RelationRelationId,
+										DEPENDENCY_INTERNAL_AUTO);
+
+		recordDependencyOn(&partIdx, &partitionTbl, DEPENDENCY_AUTO);
+	}
+}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 59806349cc..57ee112653 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -266,6 +266,12 @@ static const struct dropmsgstrings dropmsgstringarray[] = {
 		gettext_noop("table \"%s\" does not exist, skipping"),
 		gettext_noop("\"%s\" is not a table"),
 		gettext_noop("Use DROP TABLE to remove a table.")},
+	{RELKIND_PARTITIONED_INDEX,
+		ERRCODE_UNDEFINED_OBJECT,
+		gettext_noop("index \"%s\" does not exist"),
+		gettext_noop("index \"%s\" does not exist, skipping"),
+		gettext_noop("\"%s\" is not an index"),
+		gettext_noop("Use DROP INDEX to remove an index.")},
 	{'\0', 0, NULL, NULL, NULL, NULL}
 };
 
@@ -284,6 +290,7 @@ struct DropRelationCallbackState
 #define ATT_INDEX				0x0008
 #define ATT_COMPOSITE_TYPE		0x0010
 #define ATT_FOREIGN_TABLE		0x0020
+#define ATT_PARTITIONED_INDEX	0x0040
 
 /*
  * Partition tables are expected to be dropped when the parent partitioned
@@ -475,11 +482,17 @@ static void CreateInheritance(Relation child_rel, Relation parent_rel);
 static void RemoveInheritance(Relation child_rel, Relation parent_rel);
 static ObjectAddress ATExecAttachPartition(List **wqueue, Relation rel,
 					  PartitionCmd *cmd);
+static void AttachPartitionEnsureIndexes(Relation rel, Relation attachrel);
 static void ValidatePartitionConstraints(List **wqueue, Relation scanrel,
 							 List *scanrel_children,
 							 List *partConstraint,
 							 bool validate_default);
 static ObjectAddress ATExecDetachPartition(Relation rel, RangeVar *name);
+static ObjectAddress ATExecAttachPartitionIdx(List **wqueue, Relation rel,
+						  RangeVar *name);
+static void validatePartitionedIndex(Relation partedIdx, Relation partedTbl);
+static void refuseDupeIndexAttach(Relation parentIdx, Relation partIdx,
+					  Relation partitionTbl);
 
 
 /* ----------------------------------------------------------------
@@ -897,6 +910,53 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 		StorePartitionKey(rel, strategy, partnatts, partattrs, partexprs,
 						  partopclass, partcollation);
+
+		/* make it all visible */
+		CommandCounterIncrement();
+	}
+
+	/*
+	 * If we're creating a partition, now create all the indexes defined in
+	 * the parent.  We can't do it earlier, because DefineIndex wants to know
+	 * the partition key which we just stored.
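+	 *
+	 * For example, if "parted" already has an index "parted_a_idx", a
+	 * hypothetical
+	 *		CREATE TABLE part2 PARTITION OF parted FOR VALUES IN (2);
+	 * also creates an index on "part2" and attaches it to "parted_a_idx".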
+ */ + if (stmt->partbound) + { + Oid parentId = linitial_oid(inheritOids); + Relation parent; + List *idxlist; + ListCell *cell; + + /* Already have strong enough lock on the parent */ + parent = heap_open(parentId, NoLock); + idxlist = RelationGetIndexList(parent); + + /* + * For each index in the parent table, create one in the partition + */ + foreach(cell, idxlist) + { + Relation idxRel = index_open(lfirst_oid(cell), AccessShareLock); + AttrNumber *attmap; + IndexStmt *idxstmt; + + attmap = convert_tuples_by_name_map(RelationGetDescr(rel), + RelationGetDescr(parent), + gettext_noop("could not convert row type")); + idxstmt = + generateClonedIndexStmt(NULL, RelationGetRelid(rel), idxRel, + attmap, RelationGetDescr(rel)->natts); + DefineIndex(RelationGetRelid(rel), + idxstmt, + InvalidOid, + RelationGetRelid(idxRel), + false, false, false, false, false); + + index_close(idxRel, AccessShareLock); + } + + list_free(idxlist); + heap_close(parent, NoLock); } /* @@ -1179,10 +1239,13 @@ RangeVarCallbackForDropRelation(const RangeVar *rel, Oid relOid, Oid oldRelOid, * but RemoveRelations() can only pass one relkind for a given relation. * It chooses RELKIND_RELATION for both regular and partitioned tables. * That means we must be careful before giving the wrong type error when - * the relation is RELKIND_PARTITIONED_TABLE. + * the relation is RELKIND_PARTITIONED_TABLE. An equivalent problem + * exists with indexes. */ if (classform->relkind == RELKIND_PARTITIONED_TABLE) expected_relkind = RELKIND_RELATION; + else if (classform->relkind == RELKIND_PARTITIONED_INDEX) + expected_relkind = RELKIND_INDEX; else expected_relkind = classform->relkind; @@ -1210,7 +1273,8 @@ RangeVarCallbackForDropRelation(const RangeVar *rel, Oid relOid, Oid oldRelOid, * we do it the other way around. No error if we don't find a pg_index * entry, though --- the relation may have been dropped. */ - if (relkind == RELKIND_INDEX && relOid != oldRelOid) + if ((relkind == RELKIND_INDEX || relkind == RELKIND_PARTITIONED_INDEX) && + relOid != oldRelOid) { state->heapOid = IndexGetRelation(relOid, true); if (OidIsValid(state->heapOid)) @@ -2396,27 +2460,11 @@ StoreCatalogInheritance1(Oid relationId, Oid parentOid, int32 seqNumber, Relation inhRelation, bool child_is_partition) { - TupleDesc desc = RelationGetDescr(inhRelation); - Datum values[Natts_pg_inherits]; - bool nulls[Natts_pg_inherits]; ObjectAddress childobject, parentobject; - HeapTuple tuple; - - /* - * Make the pg_inherits entry - */ - values[Anum_pg_inherits_inhrelid - 1] = ObjectIdGetDatum(relationId); - values[Anum_pg_inherits_inhparent - 1] = ObjectIdGetDatum(parentOid); - values[Anum_pg_inherits_inhseqno - 1] = Int32GetDatum(seqNumber); - memset(nulls, 0, sizeof(nulls)); - - tuple = heap_form_tuple(desc, values, nulls); - - CatalogTupleInsert(inhRelation, tuple); - - heap_freetuple(tuple); + /* store the pg_inherits row */ + StoreSingleInheritance(relationId, parentOid, seqNumber); /* * Store a dependency too @@ -2540,6 +2588,7 @@ renameatt_check(Oid myrelid, Form_pg_class classform, bool recursing) relkind != RELKIND_MATVIEW && relkind != RELKIND_COMPOSITE_TYPE && relkind != RELKIND_INDEX && + relkind != RELKIND_PARTITIONED_INDEX && relkind != RELKIND_FOREIGN_TABLE && relkind != RELKIND_PARTITIONED_TABLE) ereport(ERROR, @@ -3019,7 +3068,8 @@ RenameRelationInternal(Oid myrelid, const char *newrelname, bool is_internal) /* * Also rename the associated constraint, if any. 
*/ - if (targetrelation->rd_rel->relkind == RELKIND_INDEX) + if (targetrelation->rd_rel->relkind == RELKIND_INDEX || + targetrelation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) { Oid constraintId = get_index_constraint(myrelid); @@ -3073,6 +3123,7 @@ CheckTableNotInUse(Relation rel, const char *stmt) stmt, RelationGetRelationName(rel)))); if (rel->rd_rel->relkind != RELKIND_INDEX && + rel->rd_rel->relkind != RELKIND_PARTITIONED_INDEX && AfterTriggerPendingOnRel(RelationGetRelid(rel))) ereport(ERROR, (errcode(ERRCODE_OBJECT_IN_USE), @@ -3764,6 +3815,10 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd, pass = AT_PASS_MISC; break; case AT_AttachPartition: + ATSimplePermissions(rel, ATT_TABLE | ATT_PARTITIONED_INDEX); + /* No command-specific prep needed */ + pass = AT_PASS_MISC; + break; case AT_DetachPartition: ATSimplePermissions(rel, ATT_TABLE); /* No command-specific prep needed */ @@ -4112,9 +4167,15 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab, Relation rel, ATExecGenericOptions(rel, (List *) cmd->def); break; case AT_AttachPartition: - ATExecAttachPartition(wqueue, rel, (PartitionCmd *) cmd->def); + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + ATExecAttachPartition(wqueue, rel, (PartitionCmd *) cmd->def); + else + ATExecAttachPartitionIdx(wqueue, rel, + ((PartitionCmd *) cmd->def)->name); break; case AT_DetachPartition: + /* ATPrepCmd ensures it must be a table */ + Assert(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE); ATExecDetachPartition(rel, ((PartitionCmd *) cmd->def)->name); break; default: /* oops */ @@ -4148,9 +4209,13 @@ ATRewriteTables(AlterTableStmt *parsetree, List **wqueue, LOCKMODE lockmode) { AlteredTableInfo *tab = (AlteredTableInfo *) lfirst(ltab); - /* Foreign tables have no storage, nor do partitioned tables. */ + /* + * Foreign tables have no storage, nor do partitioned tables and + * indexes. + */ if (tab->relkind == RELKIND_FOREIGN_TABLE || - tab->relkind == RELKIND_PARTITIONED_TABLE) + tab->relkind == RELKIND_PARTITIONED_TABLE || + tab->relkind == RELKIND_PARTITIONED_INDEX) continue; /* @@ -4752,6 +4817,9 @@ ATSimplePermissions(Relation rel, int allowed_targets) case RELKIND_INDEX: actual_target = ATT_INDEX; break; + case RELKIND_PARTITIONED_INDEX: + actual_target = ATT_PARTITIONED_INDEX; + break; case RELKIND_COMPOSITE_TYPE: actual_target = ATT_COMPOSITE_TYPE; break; @@ -6194,6 +6262,7 @@ ATPrepSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newVa if (rel->rd_rel->relkind != RELKIND_RELATION && rel->rd_rel->relkind != RELKIND_MATVIEW && rel->rd_rel->relkind != RELKIND_INDEX && + rel->rd_rel->relkind != RELKIND_PARTITIONED_INDEX && rel->rd_rel->relkind != RELKIND_FOREIGN_TABLE && rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE) ereport(ERROR, @@ -6205,7 +6274,9 @@ ATPrepSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newVa * We allow referencing columns by numbers only for indexes, since table * column numbers could contain gaps if columns are later dropped. 
*/ - if (rel->rd_rel->relkind != RELKIND_INDEX && !colName) + if (rel->rd_rel->relkind != RELKIND_INDEX && + rel->rd_rel->relkind != RELKIND_PARTITIONED_INDEX && + !colName) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("cannot refer to non-index column by number"))); @@ -6283,7 +6354,8 @@ ATExecSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newVa errmsg("cannot alter system column \"%s\"", colName))); - if (rel->rd_rel->relkind == RELKIND_INDEX && + if ((rel->rd_rel->relkind == RELKIND_INDEX || + rel->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) && rel->rd_index->indkey.values[attnum - 1] != 0) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), @@ -6736,6 +6808,7 @@ ATExecAddIndex(AlteredTableInfo *tab, Relation rel, address = DefineIndex(RelationGetRelid(rel), stmt, InvalidOid, /* no predefined OID */ + InvalidOid, /* no parent index */ true, /* is_alter_table */ check_rights, false, /* check_not_in_use - we did it already */ @@ -9139,7 +9212,8 @@ ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel, { char relKind = get_rel_relkind(foundObject.objectId); - if (relKind == RELKIND_INDEX) + if (relKind == RELKIND_INDEX || + relKind == RELKIND_PARTITIONED_INDEX) { Assert(foundObject.objectSubId == 0); if (!list_member_oid(tab->changedIndexOids, foundObject.objectId)) @@ -9982,6 +10056,15 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock newOwnerId = tuple_class->relowner; } break; + case RELKIND_PARTITIONED_INDEX: + if (recursing) + break; + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("cannot change owner of index \"%s\"", + NameStr(tuple_class->relname)), + errhint("Change the ownership of the index's table, instead."))); + break; case RELKIND_SEQUENCE: if (!recursing && tuple_class->relowner != newOwnerId) @@ -10103,6 +10186,7 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock */ if (tuple_class->relkind != RELKIND_COMPOSITE_TYPE && tuple_class->relkind != RELKIND_INDEX && + tuple_class->relkind != RELKIND_PARTITIONED_INDEX && tuple_class->relkind != RELKIND_TOASTVALUE) changeDependencyOnOwner(RelationRelationId, relationOid, newOwnerId); @@ -10110,7 +10194,8 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock /* * Also change the ownership of the table's row type, if it has one */ - if (tuple_class->relkind != RELKIND_INDEX) + if (tuple_class->relkind != RELKIND_INDEX && + tuple_class->relkind != RELKIND_PARTITIONED_INDEX) AlterTypeOwnerInternal(tuple_class->reltype, newOwnerId); /* @@ -10119,6 +10204,7 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock * relation, as well as its toast table (if it has one). 
*/ if (tuple_class->relkind == RELKIND_RELATION || + tuple_class->relkind == RELKIND_PARTITIONED_TABLE || tuple_class->relkind == RELKIND_MATVIEW || tuple_class->relkind == RELKIND_TOASTVALUE) { @@ -10427,6 +10513,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation, (void) view_reloptions(newOptions, true); break; case RELKIND_INDEX: + case RELKIND_PARTITIONED_INDEX: (void) index_reloptions(rel->rd_amroutine->amoptions, newOptions, true); break; default: @@ -10839,7 +10926,8 @@ AlterTableMoveAll(AlterTableMoveAllStmt *stmt) relForm->relkind != RELKIND_RELATION && relForm->relkind != RELKIND_PARTITIONED_TABLE) || (stmt->objtype == OBJECT_INDEX && - relForm->relkind != RELKIND_INDEX) || + relForm->relkind != RELKIND_INDEX && + relForm->relkind != RELKIND_PARTITIONED_INDEX) || (stmt->objtype == OBJECT_MATVIEW && relForm->relkind != RELKIND_MATVIEW)) continue; @@ -11633,45 +11721,18 @@ RemoveInheritance(Relation child_rel, Relation parent_rel) Relation catalogRelation; SysScanDesc scan; ScanKeyData key[3]; - HeapTuple inheritsTuple, - attributeTuple, + HeapTuple attributeTuple, constraintTuple; List *connames; - bool found = false; + bool found; bool child_is_partition = false; /* If parent_rel is a partitioned table, child_rel must be a partition */ if (parent_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) child_is_partition = true; - /* - * Find and destroy the pg_inherits entry linking the two, or error out if - * there is none. - */ - catalogRelation = heap_open(InheritsRelationId, RowExclusiveLock); - ScanKeyInit(&key[0], - Anum_pg_inherits_inhrelid, - BTEqualStrategyNumber, F_OIDEQ, - ObjectIdGetDatum(RelationGetRelid(child_rel))); - scan = systable_beginscan(catalogRelation, InheritsRelidSeqnoIndexId, - true, NULL, 1, key); - - while (HeapTupleIsValid(inheritsTuple = systable_getnext(scan))) - { - Oid inhparent; - - inhparent = ((Form_pg_inherits) GETSTRUCT(inheritsTuple))->inhparent; - if (inhparent == RelationGetRelid(parent_rel)) - { - CatalogTupleDelete(catalogRelation, &inheritsTuple->t_self); - found = true; - break; - } - } - - systable_endscan(scan); - heap_close(catalogRelation, RowExclusiveLock); - + found = DeleteInheritsTuple(RelationGetRelid(child_rel), + RelationGetRelid(parent_rel)); if (!found) { if (child_is_partition) @@ -13226,7 +13287,8 @@ RangeVarCallbackForAlterRelation(const RangeVar *rv, Oid relid, Oid oldrelid, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is not a composite type", rv->relname))); - if (reltype == OBJECT_INDEX && relkind != RELKIND_INDEX + if (reltype == OBJECT_INDEX && relkind != RELKIND_INDEX && + relkind != RELKIND_PARTITIONED_INDEX && !IsA(stmt, RenameStmt)) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), @@ -13946,6 +14008,9 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) /* Update the pg_class entry. */ StorePartitionBound(attachrel, rel, cmd->bound); + /* Ensure there exists a correct set of indexes in the partition. */ + AttachPartitionEnsureIndexes(rel, attachrel); + /* * Generate partition constraint from the partition bound specification. 
* If the parent itself is a partition, make sure to include its @@ -14015,6 +14080,127 @@ ATExecAttachPartition(List **wqueue, Relation rel, PartitionCmd *cmd) return address; } +/* + * AttachPartitionEnsureIndexes + * subroutine for ATExecAttachPartition to create/match indexes + * + * Enforce the indexing rule for partitioned tables during ALTER TABLE / ATTACH + * PARTITION: every partition must have an index attached to each index on the + * partitioned table. + */ +static void +AttachPartitionEnsureIndexes(Relation rel, Relation attachrel) +{ + List *idxes; + List *attachRelIdxs; + Relation *attachrelIdxRels; + IndexInfo **attachInfos; + int i; + ListCell *cell; + MemoryContext cxt; + MemoryContext oldcxt; + + cxt = AllocSetContextCreate(CurrentMemoryContext, + "AttachPartitionEnsureIndexes", + ALLOCSET_DEFAULT_SIZES); + oldcxt = MemoryContextSwitchTo(cxt); + + idxes = RelationGetIndexList(rel); + attachRelIdxs = RelationGetIndexList(attachrel); + attachrelIdxRels = palloc(sizeof(Relation) * list_length(attachRelIdxs)); + attachInfos = palloc(sizeof(IndexInfo *) * list_length(attachRelIdxs)); + + /* Build arrays of all existing indexes and their IndexInfos */ + i = 0; + foreach(cell, attachRelIdxs) + { + Oid cldIdxId = lfirst_oid(cell); + + attachrelIdxRels[i] = index_open(cldIdxId, AccessShareLock); + attachInfos[i] = BuildIndexInfo(attachrelIdxRels[i]); + i++; + } + + /* + * For each index on the partitioned table, find a matching one in the + * partition-to-be; if one is not found, create one. + */ + foreach(cell, idxes) + { + Oid idx = lfirst_oid(cell); + Relation idxRel = index_open(idx, AccessShareLock); + IndexInfo *info; + AttrNumber *attmap; + bool found = false; + + /* + * Ignore indexes in the partitioned table other than partitioned + * indexes. + */ + if (idxRel->rd_rel->relkind != RELKIND_PARTITIONED_INDEX) + { + index_close(idxRel, AccessShareLock); + continue; + } + + /* construct an indexinfo to compare existing indexes against */ + info = BuildIndexInfo(idxRel); + attmap = convert_tuples_by_name_map(RelationGetDescr(attachrel), + RelationGetDescr(rel), + gettext_noop("could not convert row type")); + + /* + * Scan the list of existing indexes in the partition-to-be, and mark + * the first matching, unattached one we find, if any, as partition of + * the parent index. If we find one, we're done. + */ + for (i = 0; i < list_length(attachRelIdxs); i++) + { + /* does this index have a parent? if so, can't use it */ + if (has_superclass(RelationGetRelid(attachrelIdxRels[i]))) + continue; + + if (CompareIndexInfo(attachInfos[i], info, + attachrelIdxRels[i]->rd_indcollation, + idxRel->rd_indcollation, + attachrelIdxRels[i]->rd_opfamily, + idxRel->rd_opfamily, + attmap, + RelationGetDescr(rel)->natts)) + { + /* bingo. */ + IndexSetParentIndex(attachrelIdxRels[i], idx); + found = true; + break; + } + } + + /* + * If no suitable index was found in the partition-to-be, create one + * now. + */ + if (!found) + { + IndexStmt *stmt; + + stmt = generateClonedIndexStmt(NULL, RelationGetRelid(attachrel), + idxRel, attmap, + RelationGetDescr(rel)->natts); + DefineIndex(RelationGetRelid(attachrel), stmt, InvalidOid, + RelationGetRelid(idxRel), + false, false, false, false, false); + } + + index_close(idxRel, AccessShareLock); + } + + /* Clean up. 
 */
+	for (i = 0; i < list_length(attachRelIdxs); i++)
+		index_close(attachrelIdxRels[i], AccessShareLock);
+	MemoryContextSwitchTo(oldcxt);
+	MemoryContextDelete(cxt);
+}
+
 /*
  * ALTER TABLE DETACH PARTITION
  *
@@ -14033,6 +14219,8 @@ ATExecDetachPartition(Relation rel, RangeVar *name)
 				new_repl[Natts_pg_class];
 	ObjectAddress address;
 	Oid			defaultPartOid;
+	List	   *indexes;
+	ListCell   *cell;
 
 	/*
	 * We must lock the default partition, because detaching this partition
@@ -14094,6 +14282,24 @@ ATExecDetachPartition(Relation rel, RangeVar *name)
 		}
 	}
 
+	/* detach indexes too */
+	indexes = RelationGetIndexList(partRel);
+	foreach(cell, indexes)
+	{
+		Oid			idxid = lfirst_oid(cell);
+		Relation	idx;
+
+		if (!has_superclass(idxid))
+			continue;
+
+		Assert((IndexGetRelation(get_partition_parent(idxid), false) ==
+				RelationGetRelid(rel)));
+
+		idx = index_open(idxid, AccessExclusiveLock);
+		IndexSetParentIndex(idx, InvalidOid);
+		relation_close(idx, AccessExclusiveLock);
+	}
+
 	/*
 	 * Invalidate the parent's relcache so that the partition is no longer
 	 * included in its partition descriptor.
@@ -14107,3 +14313,328 @@ ATExecDetachPartition(Relation rel, RangeVar *name)
 
 	return address;
 }
+
+/*
+ * Before acquiring lock on an index, acquire the same lock on the owning
+ * table.
+ */
+struct AttachIndexCallbackState
+{
+	Oid			partitionOid;
+	Oid			parentTblOid;
+	bool		lockedParentTbl;
+};
+
+static void
+RangeVarCallbackForAttachIndex(const RangeVar *rv, Oid relOid, Oid oldRelOid,
+							   void *arg)
+{
+	struct AttachIndexCallbackState *state;
+	Form_pg_class classform;
+	HeapTuple	tuple;
+
+	state = (struct AttachIndexCallbackState *) arg;
+
+	if (!state->lockedParentTbl)
+	{
+		LockRelationOid(state->parentTblOid, AccessShareLock);
+		state->lockedParentTbl = true;
+	}
+
+	/*
+	 * If we previously locked some other heap, and the name we're looking up
+	 * no longer refers to an index on that relation, release the now-useless
+	 * lock.  XXX maybe we should do this *after* we verify whether the index
+	 * does not actually belong to the same relation ...
+	 */
+	if (relOid != oldRelOid && OidIsValid(state->partitionOid))
+	{
+		UnlockRelationOid(state->partitionOid, AccessShareLock);
+		state->partitionOid = InvalidOid;
+	}
+
+	/* Didn't find a relation, so no need for locking or permission checks. */
+	if (!OidIsValid(relOid))
+		return;
+
+	tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relOid));
+	if (!HeapTupleIsValid(tuple))
+		return;					/* concurrently dropped, so nothing to do */
+	classform = (Form_pg_class) GETSTRUCT(tuple);
+	if (classform->relkind != RELKIND_PARTITIONED_INDEX &&
+		classform->relkind != RELKIND_INDEX)
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
+				 errmsg("\"%s\" is not an index", rv->relname)));
+	ReleaseSysCache(tuple);
+
+	/*
+	 * Since we need only examine the heap's tupledesc, an access share lock
+	 * on it (preventing any DDL) is sufficient.
+	 */
+	state->partitionOid = IndexGetRelation(relOid, false);
+	LockRelationOid(state->partitionOid, AccessShareLock);
+}
+
+/*
+ * ALTER INDEX i1 ATTACH PARTITION i2
+ */
+static ObjectAddress
+ATExecAttachPartitionIdx(List **wqueue, Relation parentIdx, RangeVar *name)
+{
+	Relation	partIdx;
+	Relation	partTbl;
+	Relation	parentTbl;
+	ObjectAddress address;
+	Oid			partIdxId;
+	Oid			currParent;
+	struct AttachIndexCallbackState state;
+
+	/*
+	 * We need to obtain lock on the index 'name' to modify it, but we also
+	 * need to read its owning table's tuple descriptor -- so we need to lock
+	 * both.
To avoid deadlocks, obtain lock on the table before doing so on + * the index. Furthermore, we need to examine the parent table of the + * partition, so lock that one too. + */ + state.partitionOid = InvalidOid; + state.parentTblOid = parentIdx->rd_index->indrelid; + state.lockedParentTbl = false; + partIdxId = + RangeVarGetRelidExtended(name, AccessExclusiveLock, false, false, + RangeVarCallbackForAttachIndex, + (void *) &state); + /* Not there? */ + if (!OidIsValid(partIdxId)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_OBJECT), + errmsg("index \"%s\" does not exist", name->relname))); + + /* no deadlock risk: RangeVarGetRelidExtended already acquired the lock */ + partIdx = relation_open(partIdxId, AccessExclusiveLock); + + /* we already hold locks on both tables, so this is safe: */ + parentTbl = relation_open(parentIdx->rd_index->indrelid, AccessShareLock); + partTbl = relation_open(partIdx->rd_index->indrelid, NoLock); + + ObjectAddressSet(address, RelationRelationId, RelationGetRelid(partIdx)); + + /* Silently do nothing if already in the right state */ + currParent = !has_superclass(partIdxId) ? InvalidOid : + get_partition_parent(partIdxId); + if (currParent != RelationGetRelid(parentIdx)) + { + IndexInfo *childInfo; + IndexInfo *parentInfo; + AttrNumber *attmap; + bool found; + int i; + PartitionDesc partDesc; + + /* + * If this partition already has an index attached, refuse the operation. + */ + refuseDupeIndexAttach(parentIdx, partIdx, partTbl); + + if (OidIsValid(currParent)) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("cannot attach index \"%s\" as a partition of index \"%s\"", + RelationGetRelationName(partIdx), + RelationGetRelationName(parentIdx)), + errdetail("Index \"%s\" is already attached to another index.", + RelationGetRelationName(partIdx)))); + + /* Make sure it indexes a partition of the other index's table */ + partDesc = RelationGetPartitionDesc(parentTbl); + found = false; + for (i = 0; i < partDesc->nparts; i++) + { + if (partDesc->oids[i] == state.partitionOid) + { + found = true; + break; + } + } + if (!found) + ereport(ERROR, + (errmsg("cannot attach index \"%s\" as a partition of index \"%s\"", + RelationGetRelationName(partIdx), + RelationGetRelationName(parentIdx)), + errdetail("Index \"%s\" is not an index on any partition of table \"%s\".", + RelationGetRelationName(partIdx), + RelationGetRelationName(parentTbl)))); + + /* Ensure the indexes are compatible */ + childInfo = BuildIndexInfo(partIdx); + parentInfo = BuildIndexInfo(parentIdx); + attmap = convert_tuples_by_name_map(RelationGetDescr(partTbl), + RelationGetDescr(parentTbl), + gettext_noop("could not convert row type")); + if (!CompareIndexInfo(childInfo, parentInfo, + partIdx->rd_indcollation, + parentIdx->rd_indcollation, + partIdx->rd_opfamily, + parentIdx->rd_opfamily, + attmap, + RelationGetDescr(partTbl)->natts)) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("cannot attach index \"%s\" as a partition of index \"%s\"", + RelationGetRelationName(partIdx), + RelationGetRelationName(parentIdx)), + errdetail("The index definitions do not match."))); + + /* All good -- do it */ + IndexSetParentIndex(partIdx, RelationGetRelid(parentIdx)); + pfree(attmap); + + CommandCounterIncrement(); + + validatePartitionedIndex(parentIdx, parentTbl); + } + + relation_close(parentTbl, AccessShareLock); + /* keep these locks till commit */ + relation_close(partTbl, NoLock); + relation_close(partIdx, NoLock); + + return address; +} + +/* + * 
Verify whether the given partition already contains an index attached + * to the given partitioned index. If so, raise an error. + */ +static void +refuseDupeIndexAttach(Relation parentIdx, Relation partIdx, Relation partitionTbl) +{ + Relation pg_inherits; + ScanKeyData key; + HeapTuple tuple; + SysScanDesc scan; + + pg_inherits = heap_open(InheritsRelationId, AccessShareLock); + ScanKeyInit(&key, Anum_pg_inherits_inhparent, + BTEqualStrategyNumber, F_OIDEQ, + ObjectIdGetDatum(RelationGetRelid(parentIdx))); + scan = systable_beginscan(pg_inherits, InheritsParentIndexId, true, + NULL, 1, &key); + while (HeapTupleIsValid(tuple = systable_getnext(scan))) + { + Form_pg_inherits inhForm; + Oid tab; + + inhForm = (Form_pg_inherits) GETSTRUCT(tuple); + tab = IndexGetRelation(inhForm->inhrelid, false); + if (tab == RelationGetRelid(partitionTbl)) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("cannot attach index \"%s\" as a partition of index \"%s\"", + RelationGetRelationName(partIdx), + RelationGetRelationName(parentIdx)), + errdetail("Another index is already attached for partition \"%s\".", + RelationGetRelationName(partitionTbl)))); + } + + systable_endscan(scan); + heap_close(pg_inherits, AccessShareLock); +} + +/* + * Verify whether the set of attached partition indexes to a parent index on + * a partitioned table is complete. If it is, mark the parent index valid. + * + * This should be called each time a partition index is attached. + */ +static void +validatePartitionedIndex(Relation partedIdx, Relation partedTbl) +{ + Relation inheritsRel; + SysScanDesc scan; + ScanKeyData key; + int tuples = 0; + HeapTuple inhTup; + bool updated = false; + + Assert(partedIdx->rd_rel->relkind == RELKIND_PARTITIONED_INDEX); + + /* + * Scan pg_inherits for this parent index. Count each valid index we find + * (verifying the pg_index entry for each), and if we reach the total + * amount we expect, we can mark this parent index as valid. + */ + inheritsRel = heap_open(InheritsRelationId, AccessShareLock); + ScanKeyInit(&key, Anum_pg_inherits_inhparent, + BTEqualStrategyNumber, F_OIDEQ, + ObjectIdGetDatum(RelationGetRelid(partedIdx))); + scan = systable_beginscan(inheritsRel, InheritsParentIndexId, true, + NULL, 1, &key); + while ((inhTup = systable_getnext(scan)) != NULL) + { + Form_pg_inherits inhForm = (Form_pg_inherits) GETSTRUCT(inhTup); + HeapTuple indTup; + Form_pg_index indexForm; + + indTup = SearchSysCache1(INDEXRELID, + ObjectIdGetDatum(inhForm->inhrelid)); + if (!indTup) + elog(ERROR, "cache lookup failed for index %u", + inhForm->inhrelid); + indexForm = (Form_pg_index) GETSTRUCT(indTup); + if (IndexIsValid(indexForm)) + tuples += 1; + ReleaseSysCache(indTup); + } + + /* Done with pg_inherits */ + systable_endscan(scan); + heap_close(inheritsRel, AccessShareLock); + + /* + * If we found as many inherited indexes as the partitioned table has + * partitions, we're good; update pg_index to set indisvalid. + */ + if (tuples == RelationGetPartitionDesc(partedTbl)->nparts) + { + Relation idxRel; + HeapTuple newtup; + + idxRel = heap_open(IndexRelationId, RowExclusiveLock); + + newtup = heap_copytuple(partedIdx->rd_indextuple); + ((Form_pg_index) GETSTRUCT(newtup))->indisvalid = true; + updated = true; + + CatalogTupleUpdate(idxRel, &partedIdx->rd_indextuple->t_self, newtup); + + heap_close(idxRel, RowExclusiveLock); + } + + /* + * If this index is in turn a partition of a larger index, validating it + * might cause the parent to become valid also. Try that. 
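+	 *
+	 * For example, in a hypothetical two-level hierarchy, an
+	 *		ALTER INDEX sub_idx ATTACH PARTITION leaf_idx;
+	 * that completes "sub_idx" marks it valid, which may in turn complete
+	 * the set of partitions of its own parent "root_idx" and validate that
+	 * one as well.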
+ */
+	if (updated &&
+		has_superclass(RelationGetRelid(partedIdx)))
+	{
+		Oid			parentIdxId,
+					parentTblId;
+		Relation	parentIdx,
+					parentTbl;
+
+		/* make sure we see the validation we just did */
+		CommandCounterIncrement();
+
+		parentIdxId = get_partition_parent(RelationGetRelid(partedIdx));
+		parentTblId = get_partition_parent(RelationGetRelid(partedTbl));
+		parentIdx = relation_open(parentIdxId, AccessExclusiveLock);
+		parentTbl = relation_open(parentTblId, AccessExclusiveLock);
+		Assert(!parentIdx->rd_index->indisvalid);
+
+		validatePartitionedIndex(parentIdx, parentTbl);
+
+		relation_close(parentIdx, AccessExclusiveLock);
+		relation_close(parentTbl, AccessExclusiveLock);
+	}
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index ddbbc79823..65d8c77d7a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -3379,6 +3379,7 @@ _copyIndexStmt(const IndexStmt *from)
 	COPY_STRING_FIELD(idxname);
 	COPY_NODE_FIELD(relation);
+	COPY_SCALAR_FIELD(relationId);
 	COPY_STRING_FIELD(accessMethod);
 	COPY_STRING_FIELD(tableSpace);
 	COPY_NODE_FIELD(indexParams);
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 30ccc9c5ae..0bd12e862e 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -1332,6 +1332,7 @@ _equalIndexStmt(const IndexStmt *a, const IndexStmt *b)
 {
 	COMPARE_STRING_FIELD(idxname);
 	COMPARE_NODE_FIELD(relation);
+	COMPARE_SCALAR_FIELD(relationId);
 	COMPARE_STRING_FIELD(accessMethod);
 	COMPARE_STRING_FIELD(tableSpace);
 	COMPARE_NODE_FIELD(indexParams);
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 5e72df137e..b1cdfc36a6 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2650,6 +2650,7 @@ _outIndexStmt(StringInfo str, const IndexStmt *node)
 	WRITE_STRING_FIELD(idxname);
 	WRITE_NODE_FIELD(relation);
+	WRITE_OID_FIELD(relationId);
 	WRITE_STRING_FIELD(accessMethod);
 	WRITE_STRING_FIELD(tableSpace);
 	WRITE_NODE_FIELD(indexParams);
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index e42b7caff6..93e67e8adc 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -290,7 +290,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <ival>	add_drop opt_asc_desc opt_nulls_order
 
 %type <node>	alter_table_cmd alter_type_cmd opt_collate_clause
-				replica_identity partition_cmd
+				replica_identity partition_cmd index_partition_cmd
 %type <list>	alter_table_cmds alter_type_cmds
 %type <list>	alter_identity_column_option_list
 %type <defelt>	alter_identity_column_option
@@ -1891,6 +1891,15 @@ AlterTableStmt:
 					n->missing_ok = true;
 					$$ = (Node *)n;
 				}
+		|	ALTER INDEX qualified_name index_partition_cmd
+				{
+					AlterTableStmt *n = makeNode(AlterTableStmt);
+					n->relation = $3;
+					n->cmds = list_make1($4);
+					n->relkind = OBJECT_INDEX;
+					n->missing_ok = false;
+					$$ = (Node *)n;
+				}
 		|	ALTER INDEX ALL IN_P TABLESPACE name SET TABLESPACE name opt_nowait
 				{
 					AlterTableMoveAllStmt *n =
@@ -2025,6 +2034,22 @@ partition_cmd:
 				}
 		;
 
+index_partition_cmd:
+			/* ALTER INDEX <name> ATTACH PARTITION <index_name> */
+			ATTACH PARTITION qualified_name
+				{
+					AlterTableCmd *n = makeNode(AlterTableCmd);
+					PartitionCmd *cmd = makeNode(PartitionCmd);
+
+					n->subtype = AT_AttachPartition;
+					cmd->name = $3;
+					cmd->bound = NULL;
+					n->def = (Node *) cmd;
+
+					$$ = (Node *) n;
+				}
+		;
+
 alter_table_cmd:
 			/* ALTER TABLE <name> ADD <coldef> */
 			ADD_P columnDef
@@ -7330,7 +7355,7 @@ defacl_privilege_target:
 
 *****************************************************************************/
 
IndexStmt:	
CREATE opt_unique INDEX opt_concurrently opt_index_name - ON qualified_name access_method_clause '(' index_params ')' + ON relation_expr access_method_clause '(' index_params ')' opt_reloptions OptTableSpace where_clause { IndexStmt *n = makeNode(IndexStmt); @@ -7338,6 +7363,7 @@ IndexStmt: CREATE opt_unique INDEX opt_concurrently opt_index_name n->concurrent = $4; n->idxname = $5; n->relation = $7; + n->relationId = InvalidOid; n->accessMethod = $8; n->indexParams = $10; n->options = $12; @@ -7356,7 +7382,7 @@ IndexStmt: CREATE opt_unique INDEX opt_concurrently opt_index_name $$ = (Node *)n; } | CREATE opt_unique INDEX opt_concurrently IF_P NOT EXISTS index_name - ON qualified_name access_method_clause '(' index_params ')' + ON relation_expr access_method_clause '(' index_params ')' opt_reloptions OptTableSpace where_clause { IndexStmt *n = makeNode(IndexStmt); @@ -7364,6 +7390,7 @@ IndexStmt: CREATE opt_unique INDEX opt_concurrently opt_index_name n->concurrent = $4; n->idxname = $8; n->relation = $10; + n->relationId = InvalidOid; n->accessMethod = $11; n->indexParams = $13; n->options = $15; diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 128f1679c6..90bb356df8 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -118,9 +118,6 @@ static void transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_clause); static void transformOfType(CreateStmtContext *cxt, TypeName *ofTypename); -static IndexStmt *generateClonedIndexStmt(CreateStmtContext *cxt, - Relation source_idx, - const AttrNumber *attmap, int attmap_length); static List *get_collation(Oid collation, Oid actual_datatype); static List *get_opclass(Oid opclass, Oid actual_datatype); static void transformIndexConstraints(CreateStmtContext *cxt); @@ -1185,7 +1182,8 @@ transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_cla parent_index = index_open(parent_index_oid, AccessShareLock); /* Build CREATE INDEX statement to recreate the parent_index */ - index_stmt = generateClonedIndexStmt(cxt, parent_index, + index_stmt = generateClonedIndexStmt(cxt->relation, InvalidOid, + parent_index, attmap, tupleDesc->natts); /* Copy comment on index, if requested */ @@ -1263,10 +1261,12 @@ transformOfType(CreateStmtContext *cxt, TypeName *ofTypename) /* * Generate an IndexStmt node using information from an already existing index - * "source_idx". Attribute numbers should be adjusted according to attmap. + * "source_idx", for the rel identified either by heapRel or heapRelid. + * + * Attribute numbers should be adjusted according to attmap. */ -static IndexStmt * -generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx, +IndexStmt * +generateClonedIndexStmt(RangeVar *heapRel, Oid heapRelid, Relation source_idx, const AttrNumber *attmap, int attmap_length) { Oid source_relid = RelationGetRelid(source_idx); @@ -1287,6 +1287,9 @@ generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx, Datum datum; bool isnull; + Assert((heapRel == NULL && OidIsValid(heapRelid)) || + (heapRel != NULL && !OidIsValid(heapRelid))); + /* * Fetch pg_class tuple of source index. We can't use the copy in the * relcache entry because it doesn't include optional fields. 
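As a rough SQL sketch of what the grammar and parse-analysis changes above enable
(the table and index names here are made up for illustration, not taken from this
patch), the relation_expr slot lets CREATE INDEX take ONLY so the parent index can
be created without recursing, and a partition's index can then be attached
separately:

    create table meas (ts date, v int) partition by range (ts);
    create table meas_y2018 partition of meas
        for values from ('2018-01-01') to ('2019-01-01');
    -- parent index only; it is created invalid, pending attachment
    create index meas_ts_idx on only meas (ts);
    -- index the partition, then attach it; once every partition's
    -- index is attached, the parent index is marked valid
    create index meas_y2018_ts_idx on meas_y2018 (ts);
    alter index meas_ts_idx attach partition meas_y2018_ts_idx;
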
@@ -1322,7 +1325,8 @@ generateClonedIndexStmt(CreateStmtContext *cxt, Relation source_idx, /* Begin building the IndexStmt */ index = makeNode(IndexStmt); - index->relation = cxt->relation; + index->relation = heapRel; + index->relationId = heapRelid; index->accessMethod = pstrdup(NameStr(amrec->amname)); if (OidIsValid(idxrelrec->reltablespace)) index->tableSpace = get_tablespace_name(idxrelrec->reltablespace); @@ -3289,18 +3293,39 @@ transformPartitionCmd(CreateStmtContext *cxt, PartitionCmd *cmd) { Relation parentRel = cxt->rel; - /* the table must be partitioned */ - if (parentRel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE) - ereport(ERROR, - (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), - errmsg("\"%s\" is not partitioned", - RelationGetRelationName(parentRel)))); - - /* transform the partition bound, if any */ - Assert(RelationGetPartitionKey(parentRel) != NULL); - if (cmd->bound != NULL) - cxt->partbound = transformPartitionBound(cxt->pstate, parentRel, - cmd->bound); + switch (parentRel->rd_rel->relkind) + { + case RELKIND_PARTITIONED_TABLE: + /* transform the partition bound, if any */ + Assert(RelationGetPartitionKey(parentRel) != NULL); + if (cmd->bound != NULL) + cxt->partbound = transformPartitionBound(cxt->pstate, parentRel, + cmd->bound); + break; + case RELKIND_PARTITIONED_INDEX: + /* nothing to check */ + Assert(cmd->bound == NULL); + break; + case RELKIND_RELATION: + /* the table must be partitioned */ + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("table \"%s\" is not partitioned", + RelationGetRelationName(parentRel)))); + break; + case RELKIND_INDEX: + /* the index must be partitioned */ + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("index \"%s\" is not partitioned", + RelationGetRelationName(parentRel)))); + break; + default: + /* parser shouldn't let this case through */ + elog(ERROR, "\"%s\" is not a partitioned table or index", + RelationGetRelationName(parentRel)); + break; + } } /* diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index ec98a612ec..9cccc8d39d 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -23,6 +23,7 @@ #include "access/xlog.h" #include "catalog/catalog.h" #include "catalog/namespace.h" +#include "catalog/pg_inherits_fn.h" #include "catalog/toasting.h" #include "commands/alter.h" #include "commands/async.h" @@ -1300,6 +1301,7 @@ ProcessUtilitySlow(ParseState *pstate, IndexStmt *stmt = (IndexStmt *) parsetree; Oid relid; LOCKMODE lockmode; + List *inheritors = NIL; if (stmt->concurrent) PreventTransactionChain(isTopLevel, @@ -1322,6 +1324,23 @@ ProcessUtilitySlow(ParseState *pstate, RangeVarCallbackOwnsRelation, NULL); + /* + * CREATE INDEX on partitioned tables (but not regular + * inherited tables) recurses to partitions, so we must + * acquire locks early to avoid deadlocks. + */ + if (stmt->relation->inh) + { + Relation rel; + + /* already locked by RangeVarGetRelidExtended */ + rel = heap_open(relid, NoLock); + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + inheritors = find_all_inheritors(relid, lockmode, + NULL); + heap_close(rel, NoLock); + } + /* Run parse analysis ... 
*/ stmt = transformIndexStmt(relid, stmt, queryString); @@ -1331,6 +1350,7 @@ ProcessUtilitySlow(ParseState *pstate, DefineIndex(relid, /* OID of heap relation */ stmt, InvalidOid, /* no predefined OID */ + InvalidOid, /* no parent index */ false, /* is_alter_table */ true, /* check_rights */ true, /* check_not_in_use */ @@ -1346,6 +1366,8 @@ ProcessUtilitySlow(ParseState *pstate, parsetree); commandCollected = true; EventTriggerAlterTableEnd(); + + list_free(inheritors); } break; diff --git a/src/backend/utils/adt/amutils.c b/src/backend/utils/adt/amutils.c index a6d8feea5b..0f7ceb62eb 100644 --- a/src/backend/utils/adt/amutils.c +++ b/src/backend/utils/adt/amutils.c @@ -183,7 +183,8 @@ indexam_property(FunctionCallInfo fcinfo, if (!HeapTupleIsValid(tuple)) PG_RETURN_NULL(); rd_rel = (Form_pg_class) GETSTRUCT(tuple); - if (rd_rel->relkind != RELKIND_INDEX) + if (rd_rel->relkind != RELKIND_INDEX && + rd_rel->relkind != RELKIND_PARTITIONED_INDEX) { ReleaseSysCache(tuple); PG_RETURN_NULL(); diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 9cdbb06add..c5f5a1ca3f 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -317,7 +317,7 @@ static void decompile_column_index_array(Datum column_index_array, Oid relId, static char *pg_get_ruledef_worker(Oid ruleoid, int prettyFlags); static char *pg_get_indexdef_worker(Oid indexrelid, int colno, const Oid *excludeOps, - bool attrsOnly, bool showTblSpc, + bool attrsOnly, bool showTblSpc, bool inherits, int prettyFlags, bool missing_ok); static char *pg_get_statisticsobj_worker(Oid statextid, bool missing_ok); static char *pg_get_partkeydef_worker(Oid relid, int prettyFlags, @@ -1086,7 +1086,7 @@ pg_get_indexdef(PG_FUNCTION_ARGS) prettyFlags = PRETTYFLAG_INDENT; - res = pg_get_indexdef_worker(indexrelid, 0, NULL, false, false, + res = pg_get_indexdef_worker(indexrelid, 0, NULL, false, false, false, prettyFlags, true); if (res == NULL) @@ -1107,7 +1107,7 @@ pg_get_indexdef_ext(PG_FUNCTION_ARGS) prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT; res = pg_get_indexdef_worker(indexrelid, colno, NULL, colno != 0, false, - prettyFlags, true); + false, prettyFlags, true); if (res == NULL) PG_RETURN_NULL(); @@ -1123,7 +1123,7 @@ pg_get_indexdef_ext(PG_FUNCTION_ARGS) char * pg_get_indexdef_string(Oid indexrelid) { - return pg_get_indexdef_worker(indexrelid, 0, NULL, false, true, 0, false); + return pg_get_indexdef_worker(indexrelid, 0, NULL, false, true, true, 0, false); } /* Internal version that just reports the column definitions */ @@ -1133,7 +1133,7 @@ pg_get_indexdef_columns(Oid indexrelid, bool pretty) int prettyFlags; prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT; - return pg_get_indexdef_worker(indexrelid, 0, NULL, true, false, + return pg_get_indexdef_worker(indexrelid, 0, NULL, true, false, false, prettyFlags, false); } @@ -1146,7 +1146,7 @@ pg_get_indexdef_columns(Oid indexrelid, bool pretty) static char * pg_get_indexdef_worker(Oid indexrelid, int colno, const Oid *excludeOps, - bool attrsOnly, bool showTblSpc, + bool attrsOnly, bool showTblSpc, bool inherits, int prettyFlags, bool missing_ok) { /* might want a separate isConstraint parameter later */ @@ -1259,9 +1259,11 @@ pg_get_indexdef_worker(Oid indexrelid, int colno, if (!attrsOnly) { if (!isConstraint) - appendStringInfo(&buf, "CREATE %sINDEX %s ON %s USING %s (", + appendStringInfo(&buf, "CREATE %sINDEX %s ON %s%s USING %s (", idxrec->indisunique ? 
"UNIQUE " : "", quote_identifier(NameStr(idxrelrec->relname)), + idxrelrec->relkind == RELKIND_PARTITIONED_INDEX + && !inherits ? "ONLY " : "", generate_relation_name(indrelid, NIL), quote_identifier(NameStr(amrec->amname))); else /* currently, must be EXCLUDE constraint */ @@ -2148,6 +2150,7 @@ pg_get_constraintdef_worker(Oid constraintId, bool fullCommand, operators, false, false, + false, prettyFlags, false)); break; diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index 00ba33bfb4..c081b88b73 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -430,18 +430,26 @@ static void RelationParseRelOptions(Relation relation, HeapTuple tuple) { bytea *options; + amoptions_function amoptsfn; relation->rd_options = NULL; - /* Fall out if relkind should not have options */ + /* + * Look up any AM-specific parse function; fall out if relkind should not + * have options. + */ switch (relation->rd_rel->relkind) { case RELKIND_RELATION: case RELKIND_TOASTVALUE: - case RELKIND_INDEX: case RELKIND_VIEW: case RELKIND_MATVIEW: case RELKIND_PARTITIONED_TABLE: + amoptsfn = NULL; + break; + case RELKIND_INDEX: + case RELKIND_PARTITIONED_INDEX: + amoptsfn = relation->rd_amroutine->amoptions; break; default: return; @@ -452,10 +460,7 @@ RelationParseRelOptions(Relation relation, HeapTuple tuple) * we might not have any other for pg_class yet (consider executing this * code for pg_class itself) */ - options = extractRelOptions(tuple, - GetPgClassDescriptor(), - relation->rd_rel->relkind == RELKIND_INDEX ? - relation->rd_amroutine->amoptions : NULL); + options = extractRelOptions(tuple, GetPgClassDescriptor(), amoptsfn); /* * Copy parsed data into CacheMemoryContext. To guard against the @@ -2053,7 +2058,8 @@ RelationIdGetRelation(Oid relationId) * and we don't want to use the full-blown procedure because it's * a headache for indexes that reload itself depends on. */ - if (rd->rd_rel->relkind == RELKIND_INDEX) + if (rd->rd_rel->relkind == RELKIND_INDEX || + rd->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) RelationReloadIndexInfo(rd); else RelationClearRelation(rd, true); @@ -2167,7 +2173,8 @@ RelationReloadIndexInfo(Relation relation) Form_pg_class relp; /* Should be called only for invalidated indexes */ - Assert(relation->rd_rel->relkind == RELKIND_INDEX && + Assert((relation->rd_rel->relkind == RELKIND_INDEX || + relation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) && !relation->rd_isvalid); /* Ensure it's closed at smgr level */ @@ -2387,7 +2394,8 @@ RelationClearRelation(Relation relation, bool rebuild) { RelationInitPhysicalAddr(relation); - if (relation->rd_rel->relkind == RELKIND_INDEX) + if (relation->rd_rel->relkind == RELKIND_INDEX || + relation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) { relation->rd_isvalid = false; /* needs to be revalidated */ if (relation->rd_refcnt > 1 && IsTransactionState()) @@ -2403,7 +2411,8 @@ RelationClearRelation(Relation relation, bool rebuild) * re-read the pg_class row to handle possible physical relocation of the * index, and we check for pg_index updates too. */ - if (relation->rd_rel->relkind == RELKIND_INDEX && + if ((relation->rd_rel->relkind == RELKIND_INDEX || + relation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) && relation->rd_refcnt > 0 && relation->rd_indexcxt != NULL) { @@ -5461,7 +5470,10 @@ load_relcache_init_file(bool shared) rel->rd_att->constr = constr; } - /* If it's an index, there's more to do */ + /* + * If it's an index, there's more to do. 
Note we explicitly ignore + * partitioned indexes here. + */ if (rel->rd_rel->relkind == RELKIND_INDEX) { MemoryContext indexcxt; @@ -5825,7 +5837,10 @@ write_relcache_init_file(bool shared) (rel->rd_options ? VARSIZE(rel->rd_options) : 0), fp); - /* If it's an index, there's more to do */ + /* + * If it's an index, there's more to do. Note we explicitly ignore + * partitioned indexes here. + */ if (rel->rd_rel->relkind == RELKIND_INDEX) { /* write the pg_index tuple */ diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c index 7f5f351486..2ec3627a68 100644 --- a/src/bin/pg_dump/common.c +++ b/src/bin/pg_dump/common.c @@ -68,6 +68,7 @@ static int numextmembers; static void flagInhTables(Archive *fout, TableInfo *tbinfo, int numTables, InhInfo *inhinfo, int numInherits); +static void flagInhIndexes(Archive *fout, TableInfo *tblinfo, int numTables); static void flagInhAttrs(DumpOptions *dopt, TableInfo *tblinfo, int numTables); static DumpableObject **buildIndexArray(void *objArray, int numObjs, Size objSize); @@ -76,6 +77,8 @@ static int ExtensionMemberIdCompare(const void *p1, const void *p2); static void findParentsByOid(TableInfo *self, InhInfo *inhinfo, int numInherits); static int strInArray(const char *pattern, char **arr, int arr_size); +static IndxInfo *findIndexByOid(Oid oid, DumpableObject **idxinfoindex, + int numIndexes); /* @@ -257,6 +260,10 @@ getSchemaData(Archive *fout, int *numTablesPtr) write_msg(NULL, "reading indexes\n"); getIndexes(fout, tblinfo, numTables); + if (g_verbose) + write_msg(NULL, "flagging indexes in partitioned tables\n"); + flagInhIndexes(fout, tblinfo, numTables); + if (g_verbose) write_msg(NULL, "reading extended statistics\n"); getExtendedStatistics(fout, tblinfo, numTables); @@ -342,7 +349,10 @@ flagInhTables(Archive *fout, TableInfo *tblinfo, int numTables, if (find_parents) findParentsByOid(&tblinfo[i], inhinfo, numInherits); - /* If needed, mark the parents as interesting for getTableAttrs. */ + /* + * If needed, mark the parents as interesting for getTableAttrs + * and getIndexes. + */ if (mark_parents) { int numParents = tblinfo[i].numParents; @@ -354,6 +364,89 @@ flagInhTables(Archive *fout, TableInfo *tblinfo, int numTables, } } +/* + * flagInhIndexes - + * Create AttachIndexInfo objects for partitioned indexes, and add + * appropriate dependency links. + */ +static void +flagInhIndexes(Archive *fout, TableInfo tblinfo[], int numTables) +{ + int i, + j, + k; + DumpableObject ***parentIndexArray; + + parentIndexArray = (DumpableObject ***) + pg_malloc0(getMaxDumpId() * sizeof(DumpableObject **)); + + for (i = 0; i < numTables; i++) + { + TableInfo *parenttbl; + IndexAttachInfo *attachinfo; + + if (!tblinfo[i].ispartition || tblinfo[i].numParents == 0) + continue; + + Assert(tblinfo[i].numParents == 1); + parenttbl = tblinfo[i].parents[0]; + + /* + * We need access to each parent table's index list, but there is no + * index to cover them outside of this function. To avoid having to + * sort every parent table's indexes each time we come across each of + * its partitions, create an indexed array for each parent the first + * time it is required. 
+ */ + if (parentIndexArray[parenttbl->dobj.dumpId] == NULL) + parentIndexArray[parenttbl->dobj.dumpId] = + buildIndexArray(parenttbl->indexes, + parenttbl->numIndexes, + sizeof(IndxInfo)); + + attachinfo = (IndexAttachInfo *) + pg_malloc0(tblinfo[i].numIndexes * sizeof(IndexAttachInfo)); + for (j = 0, k = 0; j < tblinfo[i].numIndexes; j++) + { + IndxInfo *index = &(tblinfo[i].indexes[j]); + IndxInfo *parentidx; + + if (index->parentidx == 0) + continue; + + parentidx = findIndexByOid(index->parentidx, + parentIndexArray[parenttbl->dobj.dumpId], + parenttbl->numIndexes); + if (parentidx == NULL) + continue; + + attachinfo[k].dobj.objType = DO_INDEX_ATTACH; + attachinfo[k].dobj.catId.tableoid = 0; + attachinfo[k].dobj.catId.oid = 0; + AssignDumpId(&attachinfo[k].dobj); + attachinfo[k].dobj.name = pg_strdup(index->dobj.name); + attachinfo[k].parentIdx = parentidx; + attachinfo[k].partitionIdx = index; + + /* + * We want dependencies from parent to partition (so that the + * partition index is created first), and another one from + * attach object to parent (so that the partition index is + * attached once the parent index has been created). + */ + addObjectDependency(&parentidx->dobj, index->dobj.dumpId); + addObjectDependency(&attachinfo[k].dobj, parentidx->dobj.dumpId); + + k++; + } + } + + for (i = 0; i < numTables; i++) + if (parentIndexArray[i]) + pg_free(parentIndexArray[i]); + pg_free(parentIndexArray); +} + /* flagInhAttrs - * for each dumpable table in tblinfo, flag its inherited attributes * @@ -827,6 +920,18 @@ findExtensionByOid(Oid oid) return (ExtensionInfo *) findObjectByOid(oid, extinfoindex, numExtensions); } +/* + * findIndexByOid + * find the entry of the index with the given oid + * + * This one's signature is different from the previous ones because we lack a + * global array of all indexes, so caller must pass their array as argument. + */ +static IndxInfo * +findIndexByOid(Oid oid, DumpableObject **idxinfoindex, int numIndexes) +{ + return (IndxInfo *) findObjectByOid(oid, idxinfoindex, numIndexes); +} /* * setExtensionMembership diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 27628a397c..af2d03ed19 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -193,6 +193,7 @@ static void dumpAttrDef(Archive *fout, AttrDefInfo *adinfo); static void dumpSequence(Archive *fout, TableInfo *tbinfo); static void dumpSequenceData(Archive *fout, TableDataInfo *tdinfo); static void dumpIndex(Archive *fout, IndxInfo *indxinfo); +static void dumpIndexAttach(Archive *fout, IndexAttachInfo *attachinfo); static void dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo); static void dumpConstraint(Archive *fout, ConstraintInfo *coninfo); static void dumpTableConstraintComment(Archive *fout, ConstraintInfo *coninfo); @@ -6509,6 +6510,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) int i_tableoid, i_oid, i_indexname, + i_parentidx, i_indexdef, i_indnkeys, i_indkey, @@ -6530,15 +6532,17 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) { TableInfo *tbinfo = &tblinfo[i]; - /* Only plain tables and materialized views have indexes. */ - if (tbinfo->relkind != RELKIND_RELATION && - tbinfo->relkind != RELKIND_MATVIEW) - continue; if (!tbinfo->hasindex) continue; - /* Ignore indexes of tables whose definitions are not to be dumped */ - if (!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)) + /* + * Ignore indexes of tables whose definitions are not to be dumped. 
+ * + * We also need indexes on partitioned tables which have partitions to + * be dumped, in order to dump the indexes on the partitions. + */ + if (!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) && + !tbinfo->interesting) continue; if (g_verbose) @@ -6561,7 +6565,39 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) * is not. */ resetPQExpBuffer(query); - if (fout->remoteVersion >= 90400) + if (fout->remoteVersion >= 11000) + { + appendPQExpBuffer(query, + "SELECT t.tableoid, t.oid, " + "t.relname AS indexname, " + "inh.inhparent AS parentidx, " + "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, " + "t.relnatts AS indnkeys, " + "i.indkey, i.indisclustered, " + "i.indisreplident, t.relpages, " + "c.contype, c.conname, " + "c.condeferrable, c.condeferred, " + "c.tableoid AS contableoid, " + "c.oid AS conoid, " + "pg_catalog.pg_get_constraintdef(c.oid, false) AS condef, " + "(SELECT spcname FROM pg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) AS tablespace, " + "t.reloptions AS indreloptions " + "FROM pg_catalog.pg_index i " + "JOIN pg_catalog.pg_class t ON (t.oid = i.indexrelid) " + "JOIN pg_catalog.pg_class t2 ON (t2.oid = i.indrelid) " + "LEFT JOIN pg_catalog.pg_constraint c " + "ON (i.indrelid = c.conrelid AND " + "i.indexrelid = c.conindid AND " + "c.contype IN ('p','u','x')) " + "LEFT JOIN pg_catalog.pg_inherits inh " + "ON (inh.inhrelid = indexrelid) " + "WHERE i.indrelid = '%u'::pg_catalog.oid " + "AND (i.indisvalid OR t2.relkind = 'p') " + "AND i.indisready " + "ORDER BY indexname", + tbinfo->dobj.catId.oid); + } + else if (fout->remoteVersion >= 90400) { /* * the test on indisready is necessary in 9.2, and harmless in @@ -6570,6 +6606,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) appendPQExpBuffer(query, "SELECT t.tableoid, t.oid, " "t.relname AS indexname, " + "0 AS parentidx, " "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, " "t.relnatts AS indnkeys, " "i.indkey, i.indisclustered, " @@ -6601,6 +6638,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) appendPQExpBuffer(query, "SELECT t.tableoid, t.oid, " "t.relname AS indexname, " + "0 AS parentidx, " "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, " "t.relnatts AS indnkeys, " "i.indkey, i.indisclustered, " @@ -6628,6 +6666,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) appendPQExpBuffer(query, "SELECT t.tableoid, t.oid, " "t.relname AS indexname, " + "0 AS parentidx, " "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, " "t.relnatts AS indnkeys, " "i.indkey, i.indisclustered, " @@ -6658,6 +6697,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) appendPQExpBuffer(query, "SELECT t.tableoid, t.oid, " "t.relname AS indexname, " + "0 AS parentidx, " "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, " "t.relnatts AS indnkeys, " "i.indkey, i.indisclustered, " @@ -6690,6 +6730,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) i_tableoid = PQfnumber(res, "tableoid"); i_oid = PQfnumber(res, "oid"); i_indexname = PQfnumber(res, "indexname"); + i_parentidx = PQfnumber(res, "parentidx"); i_indexdef = PQfnumber(res, "indexdef"); i_indnkeys = PQfnumber(res, "indnkeys"); i_indkey = PQfnumber(res, "indkey"); @@ -6706,8 +6747,10 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) i_tablespace = PQfnumber(res, "tablespace"); i_indreloptions = PQfnumber(res, "indreloptions"); - indxinfo = (IndxInfo *) pg_malloc(ntups * sizeof(IndxInfo)); + tbinfo->indexes = indxinfo = + (IndxInfo *) 
pg_malloc(ntups * sizeof(IndxInfo));
 	constrinfo = (ConstraintInfo *) pg_malloc(ntups * sizeof(ConstraintInfo));
+	tbinfo->numIndexes = ntups;
 
 	for (j = 0; j < ntups; j++)
 	{
@@ -6717,6 +6760,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 		indxinfo[j].dobj.catId.tableoid = atooid(PQgetvalue(res, j, i_tableoid));
 		indxinfo[j].dobj.catId.oid = atooid(PQgetvalue(res, j, i_oid));
 		AssignDumpId(&indxinfo[j].dobj);
+		indxinfo[j].dobj.dump = tbinfo->dobj.dump;
 		indxinfo[j].dobj.name = pg_strdup(PQgetvalue(res, j, i_indexname));
 		indxinfo[j].dobj.namespace = tbinfo->dobj.namespace;
 		indxinfo[j].indextable = tbinfo;
@@ -6729,6 +6773,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 						  indxinfo[j].indkeys, indxinfo[j].indnkeys);
 		indxinfo[j].indisclustered = (PQgetvalue(res, j, i_indisclustered)[0] == 't');
 		indxinfo[j].indisreplident = (PQgetvalue(res, j, i_indisreplident)[0] == 't');
+		indxinfo[j].parentidx = atooid(PQgetvalue(res, j, i_parentidx));
 		indxinfo[j].relpages = atoi(PQgetvalue(res, j, i_relpages));
 		contype = *(PQgetvalue(res, j, i_contype));
@@ -6742,6 +6787,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 			constrinfo[j].dobj.catId.tableoid = atooid(PQgetvalue(res, j, i_contableoid));
 			constrinfo[j].dobj.catId.oid = atooid(PQgetvalue(res, j, i_conoid));
 			AssignDumpId(&constrinfo[j].dobj);
+			constrinfo[j].dobj.dump = tbinfo->dobj.dump;
 			constrinfo[j].dobj.name = pg_strdup(PQgetvalue(res, j, i_conname));
 			constrinfo[j].dobj.namespace = tbinfo->dobj.namespace;
 			constrinfo[j].contable = tbinfo;
@@ -9512,6 +9558,9 @@ dumpDumpableObject(Archive *fout, DumpableObject *dobj)
 		case DO_INDEX:
 			dumpIndex(fout, (IndxInfo *) dobj);
 			break;
+		case DO_INDEX_ATTACH:
+			dumpIndexAttach(fout, (IndexAttachInfo *) dobj);
+			break;
 		case DO_STATSEXT:
 			dumpStatisticsExt(fout, (StatsExtInfo *) dobj);
 			break;
@@ -16172,6 +16221,42 @@ dumpIndex(Archive *fout, IndxInfo *indxinfo)
 	destroyPQExpBuffer(labelq);
 }
 
+/*
+ * dumpIndexAttach
+ *	  write out to fout a partitioned-index attachment clause
+ */
+static void
+dumpIndexAttach(Archive *fout, IndexAttachInfo *attachinfo)
+{
+	if (fout->dopt->dataOnly)
+		return;
+
+	if (attachinfo->partitionIdx->dobj.dump & DUMP_COMPONENT_DEFINITION)
+	{
+		PQExpBuffer q = createPQExpBuffer();
+
+		appendPQExpBuffer(q, "\nALTER INDEX %s ",
+						  fmtQualifiedId(fout->remoteVersion,
+										 attachinfo->parentIdx->dobj.namespace->dobj.name,
+										 attachinfo->parentIdx->dobj.name));
+		appendPQExpBuffer(q, "ATTACH PARTITION %s;\n",
+						  fmtQualifiedId(fout->remoteVersion,
+										 attachinfo->partitionIdx->dobj.namespace->dobj.name,
+										 attachinfo->partitionIdx->dobj.name));
+
+		ArchiveEntry(fout, attachinfo->dobj.catId, attachinfo->dobj.dumpId,
+					 attachinfo->dobj.name,
+					 NULL, NULL,
+					 "",
+					 false, "INDEX ATTACH", SECTION_POST_DATA,
+					 q->data, "", NULL,
+					 NULL, 0,
+					 NULL, NULL);
+
+		destroyPQExpBuffer(q);
+	}
+}
+
 /*
  * dumpStatisticsExt
  *	  write out to fout an extended statistics object
@@ -17803,6 +17888,7 @@ addBoundaryDependencies(DumpableObject **dobjs, int numObjs,
 				addObjectDependency(postDataBound, dobj->dumpId);
 				break;
 			case DO_INDEX:
+			case DO_INDEX_ATTACH:
 			case DO_STATSEXT:
 			case DO_REFRESH_MATVIEW:
 			case DO_TRIGGER:
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 49a02b4fa8..6c18d451ef 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -56,6 +56,7 @@ typedef enum
 	DO_TABLE,
 	DO_ATTRDEF,
 	DO_INDEX,
+	DO_INDEX_ATTACH,
 	DO_STATSEXT,
 	DO_RULE,
 	DO_TRIGGER,
@@ -328,6 +329,8 @@ typedef struct _tableInfo
 	 */
 	int			numParents;		/* number of (immediate) parent
tables */ struct _tableInfo **parents; /* TableInfos of immediate parents */ + int numIndexes; /* number of indexes */ + struct _indxInfo *indexes; /* indexes */ struct _tableDataInfo *dataObj; /* TableDataInfo, if dumping its data */ int numTriggers; /* number of triggers for table */ struct _triggerInfo *triggers; /* array of TriggerInfo structs */ @@ -361,11 +364,19 @@ typedef struct _indxInfo Oid *indkeys; bool indisclustered; bool indisreplident; + Oid parentidx; /* if partitioned, parent index OID */ /* if there is an associated constraint object, its dumpId: */ DumpId indexconstraint; int relpages; /* relpages of the underlying table */ } IndxInfo; +typedef struct _indexAttachInfo +{ + DumpableObject dobj; + IndxInfo *parentIdx; /* link to index on partitioned table */ + IndxInfo *partitionIdx; /* link to index on partition */ +} IndexAttachInfo; + typedef struct _statsExtInfo { DumpableObject dobj; diff --git a/src/bin/pg_dump/pg_dump_sort.c b/src/bin/pg_dump/pg_dump_sort.c index 6da1c35a42..5ce3c5d485 100644 --- a/src/bin/pg_dump/pg_dump_sort.c +++ b/src/bin/pg_dump/pg_dump_sort.c @@ -35,6 +35,10 @@ static const char *modulename = gettext_noop("sorter"); * pg_dump.c; that is, PRE_DATA objects must sort before DO_PRE_DATA_BOUNDARY, * POST_DATA objects must sort after DO_POST_DATA_BOUNDARY, and DATA objects * must sort between them. + * + * Note: sortDataAndIndexObjectsBySize wants to have all DO_TABLE_DATA and + * DO_INDEX objects in contiguous chunks, so do not reuse the values for those + * for other object types. */ static const int dbObjectTypePriority[] = { @@ -53,11 +57,12 @@ static const int dbObjectTypePriority[] = 18, /* DO_TABLE */ 20, /* DO_ATTRDEF */ 28, /* DO_INDEX */ - 29, /* DO_STATSEXT */ - 30, /* DO_RULE */ - 31, /* DO_TRIGGER */ + 29, /* DO_INDEX_ATTACH */ + 30, /* DO_STATSEXT */ + 31, /* DO_RULE */ + 32, /* DO_TRIGGER */ 27, /* DO_CONSTRAINT */ - 32, /* DO_FK_CONSTRAINT */ + 33, /* DO_FK_CONSTRAINT */ 2, /* DO_PROCLANG */ 10, /* DO_CAST */ 23, /* DO_TABLE_DATA */ @@ -69,18 +74,18 @@ static const int dbObjectTypePriority[] = 15, /* DO_TSCONFIG */ 16, /* DO_FDW */ 17, /* DO_FOREIGN_SERVER */ - 32, /* DO_DEFAULT_ACL */ + 33, /* DO_DEFAULT_ACL */ 3, /* DO_TRANSFORM */ 21, /* DO_BLOB */ 25, /* DO_BLOB_DATA */ 22, /* DO_PRE_DATA_BOUNDARY */ 26, /* DO_POST_DATA_BOUNDARY */ - 33, /* DO_EVENT_TRIGGER */ - 38, /* DO_REFRESH_MATVIEW */ - 34, /* DO_POLICY */ - 35, /* DO_PUBLICATION */ - 36, /* DO_PUBLICATION_REL */ - 37 /* DO_SUBSCRIPTION */ + 34, /* DO_EVENT_TRIGGER */ + 39, /* DO_REFRESH_MATVIEW */ + 35, /* DO_POLICY */ + 36, /* DO_PUBLICATION */ + 37, /* DO_PUBLICATION_REL */ + 38 /* DO_SUBSCRIPTION */ }; static DumpId preDataBoundId; @@ -937,6 +942,13 @@ repairDomainConstraintMultiLoop(DumpableObject *domainobj, addObjectDependency(constraintobj, postDataBoundId); } +static void +repairIndexLoop(DumpableObject *partedindex, + DumpableObject *partindex) +{ + removeObjectDependency(partedindex, partindex->dumpId); +} + /* * Fix a dependency loop, or die trying ... 
* @@ -1099,6 +1111,23 @@ repairDependencyLoop(DumpableObject **loop, return; } + /* index on partitioned table and corresponding index on partition */ + if (nLoop == 2 && + loop[0]->objType == DO_INDEX && + loop[1]->objType == DO_INDEX) + { + if (((IndxInfo *) loop[0])->parentidx == loop[1]->catId.oid) + { + repairIndexLoop(loop[0], loop[1]); + return; + } + else if (((IndxInfo *) loop[1])->parentidx == loop[0]->catId.oid) + { + repairIndexLoop(loop[1], loop[0]); + return; + } + } + /* Indirect loop involving table and attribute default */ if (nLoop > 2) { @@ -1292,6 +1321,11 @@ describeDumpableObject(DumpableObject *obj, char *buf, int bufsize) "INDEX %s (ID %d OID %u)", obj->name, obj->dumpId, obj->catId.oid); return; + case DO_INDEX_ATTACH: + snprintf(buf, bufsize, + "INDEX ATTACH %s (ID %d)", + obj->name, obj->dumpId); + return; case DO_STATSEXT: snprintf(buf, bufsize, "STATISTICS %s (ID %d OID %u)", diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl index 7cf9bdadb2..fce1465c11 100644 --- a/src/bin/pg_dump/t/002_pg_dump.pl +++ b/src/bin/pg_dump/t/002_pg_dump.pl @@ -5203,6 +5203,101 @@ section_pre_data => 1, test_schema_plus_blobs => 1, }, }, + 'CREATE INDEX ON ONLY measurement' => { + all_runs => 1, + catch_all => 'CREATE ... commands', + create_order => 92, + create_sql => 'CREATE INDEX ON dump_test.measurement (city_id, logdate);', + regexp => qr/^ + \QCREATE INDEX measurement_city_id_logdate_idx ON ONLY measurement USING\E + /xm, + like => { + binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, + createdb => 1, + defaults => 1, + exclude_test_table => 1, + exclude_test_table_data => 1, + no_blobs => 1, + no_privs => 1, + no_owner => 1, + only_dump_test_schema => 1, + pg_dumpall_dbprivs => 1, + schema_only => 1, + section_post_data => 1, + test_schema_plus_blobs => 1, + with_oids => 1, }, + unlike => { + exclude_dump_test_schema => 1, + only_dump_test_table => 1, + pg_dumpall_globals => 1, + pg_dumpall_globals_clean => 1, + role => 1, + section_pre_data => 1, }, }, + + 'CREATE INDEX ... ON measurement_y2006_m2' => { + all_runs => 1, + catch_all => 'CREATE ... commands', + regexp => qr/^ + \QCREATE INDEX measurement_y2006m2_city_id_logdate_idx ON measurement_y2006m2 \E + /xm, + like => { + binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, + createdb => 1, + defaults => 1, + exclude_dump_test_schema => 1, + exclude_test_table => 1, + exclude_test_table_data => 1, + no_blobs => 1, + no_privs => 1, + no_owner => 1, + pg_dumpall_dbprivs => 1, + role => 1, + schema_only => 1, + section_post_data => 1, + with_oids => 1, }, + unlike => { + only_dump_test_schema => 1, + only_dump_test_table => 1, + pg_dumpall_globals => 1, + pg_dumpall_globals_clean => 1, + section_pre_data => 1, + test_schema_plus_blobs => 1, }, }, + + 'ALTER INDEX ... ATTACH PARTITION' => { + all_runs => 1, + catch_all => 'CREATE ... 
commands', + regexp => qr/^ + \QALTER INDEX dump_test.measurement_city_id_logdate_idx ATTACH PARTITION dump_test_second_schema.measurement_y2006m2_city_id_logdate_idx\E + /xm, + like => { + binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, + createdb => 1, + defaults => 1, + exclude_dump_test_schema => 1, + exclude_test_table => 1, + exclude_test_table_data => 1, + no_blobs => 1, + no_privs => 1, + no_owner => 1, + pg_dumpall_dbprivs => 1, + role => 1, + schema_only => 1, + section_post_data => 1, + with_oids => 1, }, + unlike => { + only_dump_test_schema => 1, + only_dump_test_table => 1, + pg_dumpall_globals => 1, + pg_dumpall_globals_clean => 1, + section_pre_data => 1, + test_schema_plus_blobs => 1, }, }, + 'CREATE VIEW test_view' => { all_runs => 1, catch_all => 'CREATE ... commands', diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c index d2787ab41b..466a78004b 100644 --- a/src/bin/psql/describe.c +++ b/src/bin/psql/describe.c @@ -1705,7 +1705,8 @@ describeOneTableDetails(const char *schemaname, appendPQExpBufferStr(&buf, ",\n a.attidentity"); else appendPQExpBufferStr(&buf, ",\n ''::pg_catalog.char AS attidentity"); - if (tableinfo.relkind == RELKIND_INDEX) + if (tableinfo.relkind == RELKIND_INDEX || + tableinfo.relkind == RELKIND_PARTITIONED_INDEX) appendPQExpBufferStr(&buf, ",\n pg_catalog.pg_get_indexdef(a.attrelid, a.attnum, TRUE) AS indexdef"); else appendPQExpBufferStr(&buf, ",\n NULL AS indexdef"); @@ -1766,6 +1767,7 @@ describeOneTableDetails(const char *schemaname, schemaname, relationname); break; case RELKIND_INDEX: + case RELKIND_PARTITIONED_INDEX: if (tableinfo.relpersistence == 'u') printfPQExpBuffer(&title, _("Unlogged index \"%s.%s\""), schemaname, relationname); @@ -1823,7 +1825,8 @@ describeOneTableDetails(const char *schemaname, show_column_details = true; } - if (tableinfo.relkind == RELKIND_INDEX) + if (tableinfo.relkind == RELKIND_INDEX || + tableinfo.relkind == RELKIND_PARTITIONED_INDEX) headers[cols++] = gettext_noop("Definition"); if (tableinfo.relkind == RELKIND_FOREIGN_TABLE && pset.sversion >= 90200) @@ -1834,6 +1837,7 @@ describeOneTableDetails(const char *schemaname, headers[cols++] = gettext_noop("Storage"); if (tableinfo.relkind == RELKIND_RELATION || tableinfo.relkind == RELKIND_INDEX || + tableinfo.relkind == RELKIND_PARTITIONED_INDEX || tableinfo.relkind == RELKIND_MATVIEW || tableinfo.relkind == RELKIND_FOREIGN_TABLE || tableinfo.relkind == RELKIND_PARTITIONED_TABLE) @@ -1906,7 +1910,8 @@ describeOneTableDetails(const char *schemaname, } /* Expression for index column */ - if (tableinfo.relkind == RELKIND_INDEX) + if (tableinfo.relkind == RELKIND_INDEX || + tableinfo.relkind == RELKIND_PARTITIONED_INDEX) printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false); /* FDW options for foreign table column, only for 9.2 or later */ @@ -1930,6 +1935,7 @@ describeOneTableDetails(const char *schemaname, /* Statistics target, if the relkind supports this feature */ if (tableinfo.relkind == RELKIND_RELATION || tableinfo.relkind == RELKIND_INDEX || + tableinfo.relkind == RELKIND_PARTITIONED_INDEX || tableinfo.relkind == RELKIND_MATVIEW || tableinfo.relkind == RELKIND_FOREIGN_TABLE || tableinfo.relkind == RELKIND_PARTITIONED_TABLE) @@ -2021,7 +2027,8 @@ describeOneTableDetails(const char *schemaname, PQclear(result); } - if (tableinfo.relkind == RELKIND_INDEX) + if (tableinfo.relkind == RELKIND_INDEX || + tableinfo.relkind == RELKIND_PARTITIONED_INDEX) { /* Footer information about an index */ PGresult *result; @@ -3397,6 +3404,7 @@ 
listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
 		" WHEN 's' THEN '%s'"
 		" WHEN " CppAsString2(RELKIND_FOREIGN_TABLE) " THEN '%s'"
 		" WHEN " CppAsString2(RELKIND_PARTITIONED_TABLE) " THEN '%s'"
+		" WHEN " CppAsString2(RELKIND_PARTITIONED_INDEX) " THEN '%s'"
 		" END as \"%s\",\n"
 		" pg_catalog.pg_get_userbyid(c.relowner) as \"%s\"",
 		gettext_noop("Schema"),
@@ -3409,6 +3417,7 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
 		gettext_noop("special"),
 		gettext_noop("foreign table"),
 		gettext_noop("table"),	/* partitioned table */
+		gettext_noop("index"),	/* partitioned index */
 		gettext_noop("Type"),
 		gettext_noop("Owner"));
@@ -3454,7 +3463,8 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
 	if (showMatViews)
 		appendPQExpBufferStr(&buf, CppAsString2(RELKIND_MATVIEW) ",");
 	if (showIndexes)
-		appendPQExpBufferStr(&buf, CppAsString2(RELKIND_INDEX) ",");
+		appendPQExpBufferStr(&buf, CppAsString2(RELKIND_INDEX) ","
+							 CppAsString2(RELKIND_PARTITIONED_INDEX) ",");
 	if (showSeq)
 		appendPQExpBufferStr(&buf, CppAsString2(RELKIND_SEQUENCE) ",");
 	if (showSystem || pattern)
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index b51098deca..8bc4a194a5 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -412,7 +412,8 @@ static const SchemaQuery Query_for_list_of_indexes = {
 	/* catname */
 	"pg_catalog.pg_class c",
 	/* selcondition */
-	"c.relkind IN (" CppAsString2(RELKIND_INDEX) ")",
+	"c.relkind IN (" CppAsString2(RELKIND_INDEX) ", "
+	CppAsString2(RELKIND_PARTITIONED_INDEX) ")",
 	/* viscondition */
 	"pg_catalog.pg_table_is_visible(c.oid)",
 	/* namespace */
@@ -600,6 +601,23 @@ static const SchemaQuery Query_for_list_of_tmf = {
 	NULL
 };
 
+static const SchemaQuery Query_for_list_of_tpm = {
+	/* catname */
+	"pg_catalog.pg_class c",
+	/* selcondition */
+	"c.relkind IN (" CppAsString2(RELKIND_RELATION) ", "
+	CppAsString2(RELKIND_PARTITIONED_TABLE) ", "
+	CppAsString2(RELKIND_MATVIEW) ")",
+	/* viscondition */
+	"pg_catalog.pg_table_is_visible(c.oid)",
+	/* namespace */
+	"c.relnamespace",
+	/* result */
+	"pg_catalog.quote_ident(c.relname)",
+	/* qualresult */
+	NULL
+};
+
 static const SchemaQuery Query_for_list_of_tm = {
 	/* catname */
 	"pg_catalog.pg_class c",
@@ -1676,7 +1694,12 @@ psql_completion(const char *text, int start, int end)
 							"UNION SELECT 'ALL IN TABLESPACE'");
 	/* ALTER INDEX <name> */
 	else if (Matches3("ALTER", "INDEX", MatchAny))
-		COMPLETE_WITH_LIST5("ALTER COLUMN", "OWNER TO", "RENAME TO", "SET", "RESET");
+		COMPLETE_WITH_LIST6("ALTER COLUMN", "OWNER TO", "RENAME TO", "SET",
+							"RESET", "ATTACH PARTITION");
+	else if (Matches4("ALTER", "INDEX", MatchAny, "ATTACH"))
+		COMPLETE_WITH_CONST("PARTITION");
+	else if (Matches5("ALTER", "INDEX", MatchAny, "ATTACH", "PARTITION"))
+		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_indexes, NULL);
 	/* ALTER INDEX <name> ALTER COLUMN <colnum> */
 	else if (Matches6("ALTER", "INDEX", MatchAny, "ALTER", "COLUMN", MatchAny))
 		COMPLETE_WITH_CONST("SET STATISTICS");
@@ -2338,10 +2361,13 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_indexes,
 								   " UNION SELECT 'ON'"
 								   " UNION SELECT 'CONCURRENTLY'");
-	/* Complete ... INDEX|CONCURRENTLY [<name>] ON with a list of tables */
+	/*
+	 * Complete ... 
INDEX|CONCURRENTLY [<name>] ON with a list of relations
+	 * that indexes can be created on
+	 */
 	else if (TailMatches3("INDEX|CONCURRENTLY", MatchAny, "ON") ||
 			 TailMatches2("INDEX|CONCURRENTLY", "ON"))
-		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm, NULL);
+		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tpm, NULL);
 
 	/*
 	 * Complete CREATE|UNIQUE INDEX CONCURRENTLY with "ON" and existing
diff --git a/src/include/catalog/dependency.h b/src/include/catalog/dependency.h
index 6f290d5c6f..46c271a46c 100644
--- a/src/include/catalog/dependency.h
+++ b/src/include/catalog/dependency.h
@@ -49,6 +49,20 @@
 * Example: a trigger that's created to enforce a foreign-key constraint
 * is made internally dependent on the constraint's pg_constraint entry.
 *
+ * DEPENDENCY_INTERNAL_AUTO ('I'): the dependent object was created as
+ * part of creation of the referenced object, and is really just a part
+ * of its internal implementation.  A DROP of the dependent object will
+ * be disallowed outright (we'll tell the user to issue a DROP against the
+ * referenced object, instead).  While a regular internal dependency will
+ * prevent the dependent object from being dropped while any such
+ * dependencies remain, DEPENDENCY_INTERNAL_AUTO will allow such a drop as
+ * long as the object can be found by following any such dependency.
+ * Example: an index on a partition is made internal-auto-dependent on
+ * both the partition itself and the index on the parent partitioned
+ * table, so the partition index is dropped together with either the
+ * partition it indexes or the parent index it is attached to.
+ *
 * DEPENDENCY_EXTENSION ('e'): the dependent object is a member of the
 * extension that is the referenced object.  The dependent object can be
 * dropped only via DROP EXTENSION on the referenced object. 
Functionally @@ -75,6 +89,7 @@ typedef enum DependencyType DEPENDENCY_NORMAL = 'n', DEPENDENCY_AUTO = 'a', DEPENDENCY_INTERNAL = 'i', + DEPENDENCY_INTERNAL_AUTO = 'I', DEPENDENCY_EXTENSION = 'e', DEPENDENCY_AUTO_EXTENSION = 'x', DEPENDENCY_PIN = 'p' diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h index 4790f0c735..235e180299 100644 --- a/src/include/catalog/index.h +++ b/src/include/catalog/index.h @@ -47,10 +47,13 @@ extern void index_check_primary_key(Relation heapRel, #define INDEX_CREATE_SKIP_BUILD (1 << 2) #define INDEX_CREATE_CONCURRENT (1 << 3) #define INDEX_CREATE_IF_NOT_EXISTS (1 << 4) +#define INDEX_CREATE_PARTITIONED (1 << 5) +#define INDEX_CREATE_INVALID (1 << 6) extern Oid index_create(Relation heapRelation, const char *indexRelationName, Oid indexRelationId, + Oid parentIndexRelid, Oid relFileNode, IndexInfo *indexInfo, List *indexColNames, @@ -84,6 +87,11 @@ extern void index_drop(Oid indexId, bool concurrent); extern IndexInfo *BuildIndexInfo(Relation index); +extern bool CompareIndexInfo(IndexInfo *info1, IndexInfo *info2, + Oid *collations1, Oid *collations2, + Oid *opfamilies1, Oid *opfamilies2, + AttrNumber *attmap, int maplen); + extern void BuildSpeculativeIndexInfo(Relation index, IndexInfo *ii); extern void FormIndexDatum(IndexInfo *indexInfo, @@ -138,4 +146,6 @@ extern Size EstimateReindexStateSpace(void); extern void SerializeReindexState(Size maxsize, char *start_address); extern void RestoreReindexState(void *reindexstate); +extern void IndexSetParentIndex(Relation idx, Oid parentOid); + #endif /* INDEX_H */ diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h index e7049438eb..26b1866c69 100644 --- a/src/include/catalog/pg_class.h +++ b/src/include/catalog/pg_class.h @@ -166,6 +166,7 @@ DESCR(""); #define RELKIND_COMPOSITE_TYPE 'c' /* composite type */ #define RELKIND_FOREIGN_TABLE 'f' /* foreign table */ #define RELKIND_PARTITIONED_TABLE 'p' /* partitioned table */ +#define RELKIND_PARTITIONED_INDEX 'I' /* partitioned index */ #define RELPERSISTENCE_PERMANENT 'p' /* regular table */ #define RELPERSISTENCE_UNLOGGED 'u' /* unlogged permanent table */ diff --git a/src/include/catalog/pg_inherits_fn.h b/src/include/catalog/pg_inherits_fn.h index 405af230d1..eebee977a5 100644 --- a/src/include/catalog/pg_inherits_fn.h +++ b/src/include/catalog/pg_inherits_fn.h @@ -23,5 +23,8 @@ extern List *find_all_inheritors(Oid parentrelId, LOCKMODE lockmode, extern bool has_subclass(Oid relationId); extern bool has_superclass(Oid relationId); extern bool typeInheritsFrom(Oid subclassTypeId, Oid superclassTypeId); +extern void StoreSingleInheritance(Oid relationId, Oid parentOid, + int32 seqNumber); +extern bool DeleteInheritsTuple(Oid inhrelid, Oid inhparent); #endif /* PG_INHERITS_FN_H */ diff --git a/src/include/commands/defrem.h b/src/include/commands/defrem.h index 1f18cad963..41007162aa 100644 --- a/src/include/commands/defrem.h +++ b/src/include/commands/defrem.h @@ -25,12 +25,13 @@ extern void RemoveObjects(DropStmt *stmt); extern ObjectAddress DefineIndex(Oid relationId, IndexStmt *stmt, Oid indexRelationId, + Oid parentIndexId, bool is_alter_table, bool check_rights, bool check_not_in_use, bool skip_build, bool quiet); -extern Oid ReindexIndex(RangeVar *indexRelation, int options); +extern void ReindexIndex(RangeVar *indexRelation, int options); extern Oid ReindexTable(RangeVar *relation, int options); extern void ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind, int options); diff --git 
a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 4bb5cb163d..63a75bd5ed 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -158,6 +158,7 @@ typedef struct IndexInfo bool ii_ReadyForInserts; bool ii_Concurrent; bool ii_BrokenHotChain; + Oid ii_Am; void *ii_AmCache; MemoryContext ii_Context; } IndexInfo; diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index b72178efd1..0296784726 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -839,7 +839,7 @@ typedef struct PartitionRangeDatum } PartitionRangeDatum; /* - * PartitionCmd - info for ALTER TABLE ATTACH/DETACH PARTITION commands + * PartitionCmd - info for ALTER TABLE/INDEX ATTACH/DETACH PARTITION commands */ typedef struct PartitionCmd { @@ -2702,6 +2702,10 @@ typedef struct FetchStmt * index, just a UNIQUE/PKEY constraint using an existing index. isconstraint * must always be true in this case, and the fields describing the index * properties are empty. + * + * The relation to build the index on can be represented either by name + * (in which case the RangeVar indicates whether to recurse or not) or by OID + * (in which case the command is always recursive). * ---------------------- */ typedef struct IndexStmt @@ -2709,6 +2713,7 @@ typedef struct IndexStmt NodeTag type; char *idxname; /* name of new index, or NULL for default */ RangeVar *relation; /* relation to build index on */ + Oid relationId; /* OID of relation to build index on */ char *accessMethod; /* name of access method (eg. btree) */ char *tableSpace; /* tablespace, or NULL for default */ List *indexParams; /* columns to index: a list of IndexElem */ diff --git a/src/include/parser/parse_utilcmd.h b/src/include/parser/parse_utilcmd.h index a7f5e0caea..64aa8234e5 100644 --- a/src/include/parser/parse_utilcmd.h +++ b/src/include/parser/parse_utilcmd.h @@ -27,5 +27,8 @@ extern void transformRuleStmt(RuleStmt *stmt, const char *queryString, extern List *transformCreateSchemaStmt(CreateSchemaStmt *stmt); extern PartitionBoundSpec *transformPartitionBound(ParseState *pstate, Relation parent, PartitionBoundSpec *spec); +extern IndexStmt *generateClonedIndexStmt(RangeVar *heapRel, Oid heapOid, + Relation source_idx, + const AttrNumber *attmap, int attmap_length); #endif /* PARSE_UTILCMD_H */ diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 11f0baa11b..517fb080bd 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -1965,6 +1965,67 @@ create table tab1 (a int, b text); create table tab2 (x int, y tab1); alter table tab1 alter column b type varchar; -- fails ERROR: cannot alter table "tab1" because column "tab2.y" uses its row type +-- Alter column type that's part of a partitioned index +create table at_partitioned (a int, b text) partition by range (a); +create table at_part_1 partition of at_partitioned for values from (0) to (1000); +insert into at_partitioned values (512, '0.123'); +create table at_part_2 (b text, a int); +insert into at_part_2 values ('1.234', 1024); +create index on at_partitioned (b); +create index on at_partitioned (a); +\d at_part_1 + Table "public.at_part_1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | text | | | +Partition of: at_partitioned FOR VALUES FROM (0) TO (1000) +Indexes: + "at_part_1_a_idx" btree (a) + "at_part_1_b_idx" btree (b) + +\d at_part_2 + 
Table "public.at_part_2" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + b | text | | | + a | integer | | | + +alter table at_partitioned attach partition at_part_2 for values from (1000) to (2000); +\d at_part_2 + Table "public.at_part_2" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + b | text | | | + a | integer | | | +Partition of: at_partitioned FOR VALUES FROM (1000) TO (2000) +Indexes: + "at_part_2_a_idx" btree (a) + "at_part_2_b_idx" btree (b) + +alter table at_partitioned alter column b type numeric using b::numeric; +\d at_part_1 + Table "public.at_part_1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | numeric | | | +Partition of: at_partitioned FOR VALUES FROM (0) TO (1000) +Indexes: + "at_part_1_a_idx" btree (a) + "at_part_1_b_idx" btree (b) + +\d at_part_2 + Table "public.at_part_2" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + b | numeric | | | + a | integer | | | +Partition of: at_partitioned FOR VALUES FROM (1000) TO (2000) +Indexes: + "at_part_2_a_idx" btree (a) + "at_part_2_b_idx" btree (b) + -- disallow recursive containment of row types create temp table recur1 (f1 int); alter table recur1 add column f2 recur1; -- fails @@ -3276,7 +3337,7 @@ CREATE TABLE unparted ( ); CREATE TABLE fail_part (like unparted); ALTER TABLE unparted ATTACH PARTITION fail_part FOR VALUES IN ('a'); -ERROR: "unparted" is not partitioned +ERROR: table "unparted" is not partitioned DROP TABLE unparted, fail_part; -- check that partition bound is compatible CREATE TABLE list_parted ( @@ -3656,7 +3717,7 @@ DROP TABLE fail_part; -- check that the table is partitioned at all CREATE TABLE regular_table (a int); ALTER TABLE regular_table DETACH PARTITION any_name; -ERROR: "regular_table" is not partitioned +ERROR: table "regular_table" is not partitioned DROP TABLE regular_table; -- check that the partition being detached exists at all ALTER TABLE list_parted2 DETACH PARTITION part_4; diff --git a/src/test/regress/expected/indexing.out b/src/test/regress/expected/indexing.out new file mode 100644 index 0000000000..e9cccca876 --- /dev/null +++ b/src/test/regress/expected/indexing.out @@ -0,0 +1,757 @@ +-- Creating an index on a partitioned table makes the partitions +-- automatically get the index +create table idxpart (a int, b int, c text) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (10); +create table idxpart2 partition of idxpart for values from (10) to (100) + partition by range (b); +create table idxpart21 partition of idxpart2 for values from (0) to (100); +create index on idxpart (a); +select relname, relkind, inhparent::regclass + from pg_class left join pg_index ix on (indexrelid = oid) + left join pg_inherits on (ix.indexrelid = inhrelid) + where relname like 'idxpart%' order by relname; + relname | relkind | inhparent +-----------------+---------+---------------- + idxpart | p | + idxpart1 | r | + idxpart1_a_idx | i | idxpart_a_idx + idxpart2 | p | + idxpart21 | r | + idxpart21_a_idx | i | idxpart2_a_idx + idxpart2_a_idx | I | idxpart_a_idx + idxpart_a_idx | I | +(8 rows) + +drop table idxpart; +-- Some unsupported features +create table idxpart (a int, b int, c text) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (10); +create unique index on idxpart (a); 
+ERROR: cannot create unique index on partitioned table "idxpart" +create index concurrently on idxpart (a); +ERROR: cannot create index on partitioned table "idxpart" concurrently +drop table idxpart; +-- If a table without index is attached as partition to a table with +-- an index, the index is automatically created +create table idxpart (a int, b int, c text) partition by range (a); +create index idxparti on idxpart (a); +create index idxparti2 on idxpart (b, c); +create table idxpart1 (like idxpart); +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | integer | | | + c | text | | | + +alter table idxpart attach partition idxpart1 for values from (0) to (10); +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | integer | | | + c | text | | | +Partition of: idxpart FOR VALUES FROM (0) TO (10) +Indexes: + "idxpart1_a_idx" btree (a) + "idxpart1_b_c_idx" btree (b, c) + +drop table idxpart; +-- If a partition already has an index, don't create a duplicative one +create table idxpart (a int, b int) partition by range (a, b); +create table idxpart1 partition of idxpart for values from (0, 0) to (10, 10); +create index on idxpart1 (a, b); +create index on idxpart (a, b); +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | integer | | | +Partition of: idxpart FOR VALUES FROM (0, 0) TO (10, 10) +Indexes: + "idxpart1_a_b_idx" btree (a, b) + +select relname, relkind, inhparent::regclass + from pg_class left join pg_index ix on (indexrelid = oid) + left join pg_inherits on (ix.indexrelid = inhrelid) + where relname like 'idxpart%' order by relname; + relname | relkind | inhparent +------------------+---------+----------------- + idxpart | p | + idxpart1 | r | + idxpart1_a_b_idx | i | idxpart_a_b_idx + idxpart_a_b_idx | I | +(4 rows) + +drop table idxpart; +-- DROP behavior for partitioned indexes +create table idxpart (a int) partition by range (a); +create index on idxpart (a); +create table idxpart1 partition of idxpart for values from (0) to (10); +drop index idxpart1_a_idx; -- no way +ERROR: cannot drop index idxpart1_a_idx because index idxpart_a_idx requires it +HINT: You can drop index idxpart_a_idx instead. +drop index idxpart_a_idx; -- both indexes go away +select relname, relkind from pg_class + where relname like 'idxpart%' order by relname; + relname | relkind +----------+--------- + idxpart | p + idxpart1 | r +(2 rows) + +create index on idxpart (a); +drop table idxpart1; -- the index on partition goes away too +select relname, relkind from pg_class + where relname like 'idxpart%' order by relname; + relname | relkind +---------------+--------- + idxpart | p + idxpart_a_idx | I +(2 rows) + +drop table idxpart; +-- ALTER INDEX .. 
ATTACH, error cases +create table idxpart (a int, b int) partition by range (a, b); +create table idxpart1 partition of idxpart for values from (0, 0) to (10, 10); +create index idxpart_a_b_idx on only idxpart (a, b); +create index idxpart1_a_b_idx on idxpart1 (a, b); +create index idxpart1_tst1 on idxpart1 (b, a); +create index idxpart1_tst2 on idxpart1 using hash (a); +create index idxpart1_tst3 on idxpart1 (a, b) where a > 10; +alter index idxpart attach partition idxpart1; +ERROR: "idxpart" is not an index +alter index idxpart_a_b_idx attach partition idxpart1; +ERROR: "idxpart1" is not an index +alter index idxpart_a_b_idx attach partition idxpart_a_b_idx; +ERROR: cannot attach index "idxpart_a_b_idx" as a partition of index "idxpart_a_b_idx" +DETAIL: Index "idxpart_a_b_idx" is not an index on any partition of table "idxpart". +alter index idxpart_a_b_idx attach partition idxpart1_b_idx; +ERROR: relation "idxpart1_b_idx" does not exist +alter index idxpart_a_b_idx attach partition idxpart1_tst1; +ERROR: cannot attach index "idxpart1_tst1" as a partition of index "idxpart_a_b_idx" +DETAIL: The index definitions do not match. +alter index idxpart_a_b_idx attach partition idxpart1_tst2; +ERROR: cannot attach index "idxpart1_tst2" as a partition of index "idxpart_a_b_idx" +DETAIL: The index definitions do not match. +alter index idxpart_a_b_idx attach partition idxpart1_tst3; +ERROR: cannot attach index "idxpart1_tst3" as a partition of index "idxpart_a_b_idx" +DETAIL: The index definitions do not match. +-- OK +alter index idxpart_a_b_idx attach partition idxpart1_a_b_idx; +alter index idxpart_a_b_idx attach partition idxpart1_a_b_idx; -- quiet +-- reject dupe +create index idxpart1_2_a_b on idxpart1 (a, b); +alter index idxpart_a_b_idx attach partition idxpart1_2_a_b; +ERROR: cannot attach index "idxpart1_2_a_b" as a partition of index "idxpart_a_b_idx" +DETAIL: Another index is already attached for partition "idxpart1". +drop table idxpart; +-- make sure everything's gone +select indexrelid::regclass, indrelid::regclass + from pg_index where indexrelid::regclass::text like 'idxpart%'; + indexrelid | indrelid +------------+---------- +(0 rows) + +-- Don't auto-attach incompatible indexes +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 (a int, b int); +create index on idxpart1 using hash (a); +create index on idxpart1 (a) where b > 1; +create index on idxpart1 ((a + 0)); +create index on idxpart1 (a, a); +create index on idxpart (a); +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | integer | | | +Partition of: idxpart FOR VALUES FROM (0) TO (1000) +Indexes: + "idxpart1_a_a1_idx" btree (a, a) + "idxpart1_a_idx" hash (a) + "idxpart1_a_idx1" btree (a) WHERE b > 1 + "idxpart1_a_idx2" btree (a) + "idxpart1_expr_idx" btree ((a + 0)) + +drop table idxpart; +-- If CREATE INDEX ONLY, don't create indexes on partitions; and existing +-- indexes on partitions don't change parent. ALTER INDEX ATTACH can change +-- the parent after the fact. 
+create table idxpart (a int) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (100); +create table idxpart2 partition of idxpart for values from (100) to (1000) + partition by range (a); +create table idxpart21 partition of idxpart2 for values from (100) to (200); +create table idxpart22 partition of idxpart2 for values from (200) to (300); +create index on idxpart22 (a); +create index on only idxpart2 (a); +create index on idxpart (a); +-- Here we expect that idxpart1 and idxpart2 have a new index, but idxpart21 +-- does not; also, idxpart22 is not attached. +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | +Partition of: idxpart FOR VALUES FROM (0) TO (100) +Indexes: + "idxpart1_a_idx" btree (a) + +\d idxpart2 + Table "public.idxpart2" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | +Partition of: idxpart FOR VALUES FROM (100) TO (1000) +Partition key: RANGE (a) +Indexes: + "idxpart2_a_idx" btree (a) INVALID +Number of partitions: 2 (Use \d+ to list them.) + +\d idxpart21 + Table "public.idxpart21" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | +Partition of: idxpart2 FOR VALUES FROM (100) TO (200) + +select indexrelid::regclass, indrelid::regclass, inhparent::regclass + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) +where indexrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; + indexrelid | indrelid | inhparent +-----------------+-----------+--------------- + idxpart_a_idx | idxpart | + idxpart1_a_idx | idxpart1 | idxpart_a_idx + idxpart2_a_idx | idxpart2 | idxpart_a_idx + idxpart22_a_idx | idxpart22 | +(4 rows) + +alter index idxpart2_a_idx attach partition idxpart22_a_idx; +select indexrelid::regclass, indrelid::regclass, inhparent::regclass + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) +where indexrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; + indexrelid | indrelid | inhparent +-----------------+-----------+---------------- + idxpart_a_idx | idxpart | + idxpart1_a_idx | idxpart1 | idxpart_a_idx + idxpart2_a_idx | idxpart2 | idxpart_a_idx + idxpart22_a_idx | idxpart22 | idxpart2_a_idx +(4 rows) + +-- attaching idxpart22 is not enough to set idxpart22_a_idx valid ... +alter index idxpart2_a_idx attach partition idxpart22_a_idx; +\d idxpart2 + Table "public.idxpart2" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | +Partition of: idxpart FOR VALUES FROM (100) TO (1000) +Partition key: RANGE (a) +Indexes: + "idxpart2_a_idx" btree (a) INVALID +Number of partitions: 2 (Use \d+ to list them.) + +-- ... but this one is. +create index on idxpart21 (a); +alter index idxpart2_a_idx attach partition idxpart21_a_idx; +\d idxpart2 + Table "public.idxpart2" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | +Partition of: idxpart FOR VALUES FROM (100) TO (1000) +Partition key: RANGE (a) +Indexes: + "idxpart2_a_idx" btree (a) +Number of partitions: 2 (Use \d+ to list them.) 
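
A partitioned index becomes valid only once an index has been attached for
every immediate partition, which is what the sequence above exercises. The
flag can also be read straight from the catalog; a minimal sketch, assuming
the idxpart naming used in these tests:

-- sketch: list each idxpart index with its validity flag; idxpart2_a_idx
-- reports indisvalid = t only after both idxpart21_a_idx and
-- idxpart22_a_idx have been attached
select indexrelid::regclass, indisvalid
  from pg_index
 where indexrelid::regclass::text like 'idxpart%'
 order by indexrelid::regclass::text collate "C";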
+ +drop table idxpart; +-- When a table is attached a partition and it already has an index, a +-- duplicate index should not get created, but rather the index becomes +-- attached to the parent's index. +create table idxpart (a int, b int, c text) partition by range (a); +create index idxparti on idxpart (a); +create index idxparti2 on idxpart (b, c); +create table idxpart1 (like idxpart including indexes); +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | integer | | | + c | text | | | +Indexes: + "idxpart1_a_idx" btree (a) + "idxpart1_b_c_idx" btree (b, c) + +select relname, relkind, inhparent::regclass + from pg_class left join pg_index ix on (indexrelid = oid) + left join pg_inherits on (ix.indexrelid = inhrelid) + where relname like 'idxpart%' order by relname; + relname | relkind | inhparent +------------------+---------+----------- + idxpart | p | + idxpart1 | r | + idxpart1_a_idx | i | + idxpart1_b_c_idx | i | + idxparti | I | + idxparti2 | I | +(6 rows) + +alter table idxpart attach partition idxpart1 for values from (0) to (10); +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | integer | | | + c | text | | | +Partition of: idxpart FOR VALUES FROM (0) TO (10) +Indexes: + "idxpart1_a_idx" btree (a) + "idxpart1_b_c_idx" btree (b, c) + +select relname, relkind, inhparent::regclass + from pg_class left join pg_index ix on (indexrelid = oid) + left join pg_inherits on (ix.indexrelid = inhrelid) + where relname like 'idxpart%' order by relname; + relname | relkind | inhparent +------------------+---------+----------- + idxpart | p | + idxpart1 | r | + idxpart1_a_idx | i | idxparti + idxpart1_b_c_idx | i | idxparti2 + idxparti | I | + idxparti2 | I | +(6 rows) + +drop table idxpart; +-- Verify that attaching an invalid index does not mark the parent index valid. 
+-- On the other hand, attaching a valid index marks not only its direct +-- ancestor valid, but also any indirect ancestor that was only missing the one +-- that was just made valid +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 partition of idxpart for values from (1) to (1000) partition by range (a); +create table idxpart11 partition of idxpart1 for values from (1) to (100); +create index on only idxpart1 (a); +create index on only idxpart (a); +-- this results in two invalid indexes: +select relname, indisvalid from pg_class join pg_index on indexrelid = oid + where relname like 'idxpart%' order by relname; + relname | indisvalid +----------------+------------ + idxpart1_a_idx | f + idxpart_a_idx | f +(2 rows) + +-- idxpart1_a_idx is not valid, so idxpart_a_idx should not become valid: +alter index idxpart_a_idx attach partition idxpart1_a_idx; +select relname, indisvalid from pg_class join pg_index on indexrelid = oid + where relname like 'idxpart%' order by relname; + relname | indisvalid +----------------+------------ + idxpart1_a_idx | f + idxpart_a_idx | f +(2 rows) + +-- after creating and attaching this, both idxpart1_a_idx and idxpart_a_idx +-- should become valid +create index on idxpart11 (a); +alter index idxpart1_a_idx attach partition idxpart11_a_idx; +select relname, indisvalid from pg_class join pg_index on indexrelid = oid + where relname like 'idxpart%' order by relname; + relname | indisvalid +-----------------+------------ + idxpart11_a_idx | t + idxpart1_a_idx | t + idxpart_a_idx | t +(3 rows) + +drop table idxpart; +-- verify dependency handling during ALTER TABLE DETACH PARTITION +create table idxpart (a int) partition by range (a); +create table idxpart1 (like idxpart); +create index on idxpart1 (a); +create index on idxpart (a); +create table idxpart2 (like idxpart); +alter table idxpart attach partition idxpart1 for values from (0000) to (1000); +alter table idxpart attach partition idxpart2 for values from (1000) to (2000); +create table idxpart3 partition of idxpart for values from (2000) to (3000); +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; + relname | relkind +----------------+--------- + idxpart | p + idxpart1 | r + idxpart1_a_idx | i + idxpart2 | r + idxpart2_a_idx | i + idxpart3 | r + idxpart3_a_idx | i + idxpart_a_idx | I +(8 rows) + +-- a) after detaching partitions, the indexes can be dropped independently +alter table idxpart detach partition idxpart1; +alter table idxpart detach partition idxpart2; +alter table idxpart detach partition idxpart3; +drop index idxpart1_a_idx; +drop index idxpart2_a_idx; +drop index idxpart3_a_idx; +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; + relname | relkind +---------------+--------- + idxpart | p + idxpart1 | r + idxpart2 | r + idxpart3 | r + idxpart_a_idx | I +(5 rows) + +drop table idxpart, idxpart1, idxpart2, idxpart3; +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; + relname | relkind +---------+--------- +(0 rows) + +create table idxpart (a int) partition by range (a); +create table idxpart1 (like idxpart); +create index on idxpart1 (a); +create index on idxpart (a); +create table idxpart2 (like idxpart); +alter table idxpart attach partition idxpart1 for values from (0000) to (1000); +alter table idxpart attach partition idxpart2 for values from (1000) to (2000); +create table idxpart3 partition of idxpart for values from (2000) to (3000); +-- b) 
after detaching, dropping the index on parent does not remove the others +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; + relname | relkind +----------------+--------- + idxpart | p + idxpart1 | r + idxpart1_a_idx | i + idxpart2 | r + idxpart2_a_idx | i + idxpart3 | r + idxpart3_a_idx | i + idxpart_a_idx | I +(8 rows) + +alter table idxpart detach partition idxpart1; +alter table idxpart detach partition idxpart2; +alter table idxpart detach partition idxpart3; +drop index idxpart_a_idx; +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; + relname | relkind +----------------+--------- + idxpart | p + idxpart1 | r + idxpart1_a_idx | i + idxpart2 | r + idxpart2_a_idx | i + idxpart3 | r + idxpart3_a_idx | i +(7 rows) + +drop table idxpart, idxpart1, idxpart2, idxpart3; +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; + relname | relkind +---------+--------- +(0 rows) + +-- Verify that expression indexes inherit correctly +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 (like idxpart); +create index on idxpart1 ((a + b)); +create index on idxpart ((a + b)); +create table idxpart2 (like idxpart); +alter table idxpart attach partition idxpart1 for values from (0000) to (1000); +alter table idxpart attach partition idxpart2 for values from (1000) to (2000); +create table idxpart3 partition of idxpart for values from (2000) to (3000); +select relname as child, inhparent::regclass as parent, pg_get_indexdef as childdef + from pg_class join pg_inherits on inhrelid = oid, + lateral pg_get_indexdef(pg_class.oid) + where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; + child | parent | childdef +-------------------+------------------+-------------------------------------------------------------------- + idxpart1_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart1_expr_idx ON idxpart1 USING btree (((a + b))) + idxpart2_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart2_expr_idx ON idxpart2 USING btree (((a + b))) + idxpart3_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart3_expr_idx ON idxpart3 USING btree (((a + b))) +(3 rows) + +drop table idxpart; +-- Verify behavior for collation (mis)matches +create table idxpart (a text) partition by range (a); +create table idxpart1 (like idxpart); +create table idxpart2 (like idxpart); +create index on idxpart2 (a collate "POSIX"); +create index on idxpart2 (a); +create index on idxpart2 (a collate "C"); +alter table idxpart attach partition idxpart1 for values from ('aaa') to ('bbb'); +alter table idxpart attach partition idxpart2 for values from ('bbb') to ('ccc'); +create table idxpart3 partition of idxpart for values from ('ccc') to ('ddd'); +create index on idxpart (a collate "C"); +create table idxpart4 partition of idxpart for values from ('ddd') to ('eee'); +select relname as child, inhparent::regclass as parent, pg_get_indexdef as childdef + from pg_class left join pg_inherits on inhrelid = oid, + lateral pg_get_indexdef(pg_class.oid) + where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; + child | parent | childdef +-----------------+---------------+------------------------------------------------------------------------- + idxpart1_a_idx | idxpart_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a COLLATE "C") + idxpart2_a_idx | | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a COLLATE "POSIX") + idxpart2_a_idx1 | | CREATE INDEX 
idxpart2_a_idx1 ON idxpart2 USING btree (a) + idxpart2_a_idx2 | idxpart_a_idx | CREATE INDEX idxpart2_a_idx2 ON idxpart2 USING btree (a COLLATE "C") + idxpart3_a_idx | idxpart_a_idx | CREATE INDEX idxpart3_a_idx ON idxpart3 USING btree (a COLLATE "C") + idxpart4_a_idx | idxpart_a_idx | CREATE INDEX idxpart4_a_idx ON idxpart4 USING btree (a COLLATE "C") + idxpart_a_idx | | CREATE INDEX idxpart_a_idx ON ONLY idxpart USING btree (a COLLATE "C") +(7 rows) + +drop table idxpart; +-- Verify behavior for opclass (mis)matches +create table idxpart (a text) partition by range (a); +create table idxpart1 (like idxpart); +create table idxpart2 (like idxpart); +create index on idxpart2 (a); +alter table idxpart attach partition idxpart1 for values from ('aaa') to ('bbb'); +alter table idxpart attach partition idxpart2 for values from ('bbb') to ('ccc'); +create table idxpart3 partition of idxpart for values from ('ccc') to ('ddd'); +create index on idxpart (a text_pattern_ops); +create table idxpart4 partition of idxpart for values from ('ddd') to ('eee'); +-- must *not* have attached the index we created on idxpart2 +select relname as child, inhparent::regclass as parent, pg_get_indexdef as childdef + from pg_class left join pg_inherits on inhrelid = oid, + lateral pg_get_indexdef(pg_class.oid) + where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; + child | parent | childdef +-----------------+---------------+----------------------------------------------------------------------------- + idxpart1_a_idx | idxpart_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a text_pattern_ops) + idxpart2_a_idx | | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a) + idxpart2_a_idx1 | idxpart_a_idx | CREATE INDEX idxpart2_a_idx1 ON idxpart2 USING btree (a text_pattern_ops) + idxpart3_a_idx | idxpart_a_idx | CREATE INDEX idxpart3_a_idx ON idxpart3 USING btree (a text_pattern_ops) + idxpart4_a_idx | idxpart_a_idx | CREATE INDEX idxpart4_a_idx ON idxpart4 USING btree (a text_pattern_ops) + idxpart_a_idx | | CREATE INDEX idxpart_a_idx ON ONLY idxpart USING btree (a text_pattern_ops) +(6 rows) + +drop index idxpart_a_idx; +create index on only idxpart (a text_pattern_ops); +-- must reject +alter index idxpart_a_idx attach partition idxpart2_a_idx; +ERROR: cannot attach index "idxpart2_a_idx" as a partition of index "idxpart_a_idx" +DETAIL: The index definitions do not match. +drop table idxpart; +-- Verify that attaching indexes maps attribute numbers correctly +create table idxpart (col1 int, a int, col2 int, b int) partition by range (a); +create table idxpart1 (b int, col1 int, col2 int, col3 int, a int); +alter table idxpart drop column col1, drop column col2; +alter table idxpart1 drop column col1, drop column col2, drop column col3; +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +create index idxpart_1_idx on only idxpart (b, a); +create index idxpart1_1_idx on idxpart1 (b, a); +create index idxpart1_1b_idx on idxpart1 (b); +-- test expressions and partial-index predicate, too +create index idxpart_2_idx on only idxpart ((b + a)) where a > 1; +create index idxpart1_2_idx on idxpart1 ((b + a)) where a > 1; +create index idxpart1_2b_idx on idxpart1 ((a + b)) where a > 1; +create index idxpart1_2c_idx on idxpart1 ((b + a)) where b > 1; +alter index idxpart_1_idx attach partition idxpart1_1b_idx; -- fail +ERROR: cannot attach index "idxpart1_1b_idx" as a partition of index "idxpart_1_idx" +DETAIL: The index definitions do not match. 
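
When an attach fails with that DETAIL, deparsing both definitions makes the
mismatch visible at a glance; a sketch using the two index names from the
failing statement above:

-- sketch: compare the deparsed definitions side by side; here the parent
-- indexes (b, a) while the rejected candidate indexes only (b)
select indexrelid::regclass, pg_get_indexdef(indexrelid)
  from pg_index
 where indexrelid in ('idxpart_1_idx'::regclass, 'idxpart1_1b_idx'::regclass);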
+alter index idxpart_1_idx attach partition idxpart1_1_idx; +alter index idxpart_2_idx attach partition idxpart1_2b_idx; -- fail +ERROR: cannot attach index "idxpart1_2b_idx" as a partition of index "idxpart_2_idx" +DETAIL: The index definitions do not match. +alter index idxpart_2_idx attach partition idxpart1_2c_idx; -- fail +ERROR: cannot attach index "idxpart1_2c_idx" as a partition of index "idxpart_2_idx" +DETAIL: The index definitions do not match. +alter index idxpart_2_idx attach partition idxpart1_2_idx; -- ok +select relname as child, inhparent::regclass as parent, pg_get_indexdef as childdef + from pg_class left join pg_inherits on inhrelid = oid, + lateral pg_get_indexdef(pg_class.oid) + where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; + child | parent | childdef +-----------------+---------------+---------------------------------------------------------------------------------- + idxpart1_1_idx | idxpart_1_idx | CREATE INDEX idxpart1_1_idx ON idxpart1 USING btree (b, a) + idxpart1_1b_idx | | CREATE INDEX idxpart1_1b_idx ON idxpart1 USING btree (b) + idxpart1_2_idx | idxpart_2_idx | CREATE INDEX idxpart1_2_idx ON idxpart1 USING btree (((b + a))) WHERE (a > 1) + idxpart1_2b_idx | | CREATE INDEX idxpart1_2b_idx ON idxpart1 USING btree (((a + b))) WHERE (a > 1) + idxpart1_2c_idx | | CREATE INDEX idxpart1_2c_idx ON idxpart1 USING btree (((b + a))) WHERE (b > 1) + idxpart_1_idx | | CREATE INDEX idxpart_1_idx ON ONLY idxpart USING btree (b, a) + idxpart_2_idx | | CREATE INDEX idxpart_2_idx ON ONLY idxpart USING btree (((b + a))) WHERE (a > 1) +(7 rows) + +drop table idxpart; +-- Make sure the partition columns are mapped correctly +create table idxpart (a int, b int, c text) partition by range (a); +create index idxparti on idxpart (a); +create index idxparti2 on idxpart (c, b); +create table idxpart1 (c text, a int, b int); +alter table idxpart attach partition idxpart1 for values from (0) to (10); +create table idxpart2 (c text, a int, b int); +create index on idxpart2 (a); +create index on idxpart2 (c, b); +alter table idxpart attach partition idxpart2 for values from (10) to (20); +select c.relname, pg_get_indexdef(indexrelid) + from pg_class c join pg_index i on c.oid = i.indexrelid + where indrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; + relname | pg_get_indexdef +------------------+-------------------------------------------------------------- + idxparti | CREATE INDEX idxparti ON ONLY idxpart USING btree (a) + idxparti2 | CREATE INDEX idxparti2 ON ONLY idxpart USING btree (c, b) + idxpart1_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a) + idxpart1_c_b_idx | CREATE INDEX idxpart1_c_b_idx ON idxpart1 USING btree (c, b) + idxpart2_a_idx | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a) + idxpart2_c_b_idx | CREATE INDEX idxpart2_c_b_idx ON idxpart2 USING btree (c, b) +(6 rows) + +drop table idxpart; +-- Verify that columns are mapped correctly in expression indexes +create table idxpart (col1 int, col2 int, a int, b int) partition by range (a); +create table idxpart1 (col2 int, b int, col1 int, a int); +create table idxpart2 (col1 int, col2 int, b int, a int); +alter table idxpart drop column col1, drop column col2; +alter table idxpart1 drop column col1, drop column col2; +alter table idxpart2 drop column col1, drop column col2; +create index on idxpart2 (abs(b)); +alter table idxpart attach partition idxpart2 for values from (0) to (1); +create index on idxpart (abs(b)); +alter table 
idxpart attach partition idxpart1 for values from (1) to (2); +select c.relname, pg_get_indexdef(indexrelid) + from pg_class c join pg_index i on c.oid = i.indexrelid + where indrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; + relname | pg_get_indexdef +------------------+------------------------------------------------------------------- + idxpart_abs_idx | CREATE INDEX idxpart_abs_idx ON ONLY idxpart USING btree (abs(b)) + idxpart1_abs_idx | CREATE INDEX idxpart1_abs_idx ON idxpart1 USING btree (abs(b)) + idxpart2_abs_idx | CREATE INDEX idxpart2_abs_idx ON idxpart2 USING btree (abs(b)) +(3 rows) + +drop table idxpart; +-- Verify that columns are mapped correctly for WHERE in a partial index +create table idxpart (col1 int, a int, col3 int, b int) partition by range (a); +alter table idxpart drop column col1, drop column col3; +create table idxpart1 (col1 int, col2 int, col3 int, col4 int, b int, a int); +alter table idxpart1 drop column col1, drop column col2, drop column col3, drop column col4; +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +create table idxpart2 (col1 int, col2 int, b int, a int); +create index on idxpart2 (a) where b > 1000; +alter table idxpart2 drop column col1, drop column col2; +alter table idxpart attach partition idxpart2 for values from (1000) to (2000); +create index on idxpart (a) where b > 1000; +select c.relname, pg_get_indexdef(indexrelid) + from pg_class c join pg_index i on c.oid = i.indexrelid + where indrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; + relname | pg_get_indexdef +----------------+----------------------------------------------------------------------------- + idxpart_a_idx | CREATE INDEX idxpart_a_idx ON ONLY idxpart USING btree (a) WHERE (b > 1000) + idxpart1_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a) WHERE (b > 1000) + idxpart2_a_idx | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a) WHERE (b > 1000) +(3 rows) + +drop table idxpart; +-- Column number mapping: dropped columns in the partition +create table idxpart1 (drop_1 int, drop_2 int, col_keep int, drop_3 int); +alter table idxpart1 drop column drop_1; +alter table idxpart1 drop column drop_2; +alter table idxpart1 drop column drop_3; +create index on idxpart1 (col_keep); +create table idxpart (col_keep int) partition by range (col_keep); +create index on idxpart (col_keep); +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +\d idxpart + Table "public.idxpart" + Column | Type | Collation | Nullable | Default +----------+---------+-----------+----------+--------- + col_keep | integer | | | +Partition key: RANGE (col_keep) +Indexes: + "idxpart_col_keep_idx" btree (col_keep) +Number of partitions: 1 (Use \d+ to list them.) + +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +----------+---------+-----------+----------+--------- + col_keep | integer | | | +Partition of: idxpart FOR VALUES FROM (0) TO (1000) +Indexes: + "idxpart1_col_keep_idx" btree (col_keep) + +select attrelid::regclass, attname, attnum from pg_attribute + where attrelid::regclass::text like 'idxpart%' and attnum > 0 + order by attrelid::regclass, attnum; + attrelid | attname | attnum +-----------------------+------------------------------+-------- + idxpart1 | ........pg.dropped.1........ | 1 + idxpart1 | ........pg.dropped.2........ | 2 + idxpart1 | col_keep | 3 + idxpart1 | ........pg.dropped.4........ 
| 4 + idxpart1_col_keep_idx | col_keep | 1 + idxpart | col_keep | 1 + idxpart_col_keep_idx | col_keep | 1 +(7 rows) + +drop table idxpart; +-- Column number mapping: dropped columns in the parent table +create table idxpart(drop_1 int, drop_2 int, col_keep int, drop_3 int) partition by range (col_keep); +alter table idxpart drop column drop_1; +alter table idxpart drop column drop_2; +alter table idxpart drop column drop_3; +create table idxpart1 (col_keep int); +create index on idxpart1 (col_keep); +create index on idxpart (col_keep); +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +\d idxpart + Table "public.idxpart" + Column | Type | Collation | Nullable | Default +----------+---------+-----------+----------+--------- + col_keep | integer | | | +Partition key: RANGE (col_keep) +Indexes: + "idxpart_col_keep_idx" btree (col_keep) +Number of partitions: 1 (Use \d+ to list them.) + +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +----------+---------+-----------+----------+--------- + col_keep | integer | | | +Partition of: idxpart FOR VALUES FROM (0) TO (1000) +Indexes: + "idxpart1_col_keep_idx" btree (col_keep) + +select attrelid::regclass, attname, attnum from pg_attribute + where attrelid::regclass::text like 'idxpart%' and attnum > 0 + order by attrelid::regclass, attnum; + attrelid | attname | attnum +-----------------------+------------------------------+-------- + idxpart | ........pg.dropped.1........ | 1 + idxpart | ........pg.dropped.2........ | 2 + idxpart | col_keep | 3 + idxpart | ........pg.dropped.4........ | 4 + idxpart1 | col_keep | 1 + idxpart1_col_keep_idx | col_keep | 1 + idxpart_col_keep_idx | col_keep | 1 +(7 rows) + +drop table idxpart; +-- intentionally leave some objects around +create table idxpart (a int) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (100); +create table idxpart2 partition of idxpart for values from (100) to (1000) + partition by range (a); +create table idxpart21 partition of idxpart2 for values from (100) to (200); +create table idxpart22 partition of idxpart2 for values from (200) to (300); +create index on idxpart22 (a); +create index on only idxpart2 (a); +alter index idxpart2_a_idx attach partition idxpart22_a_idx; +create index on idxpart (a); diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule index e224977791..ad9434fb87 100644 --- a/src/test/regress/parallel_schedule +++ b/src/test/regress/parallel_schedule @@ -116,7 +116,7 @@ test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid c # ---------- # Another group of parallel tests # ---------- -test: identity partition_join partition_prune reloptions hash_part +test: identity partition_join partition_prune reloptions hash_part indexing # event triggers cannot run concurrently with any test that runs DDL test: event_trigger diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index 9fc5f1a268..27cd49845e 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -184,5 +184,6 @@ test: partition_join test: partition_prune test: reloptions test: hash_part +test: indexing test: event_trigger test: stats diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index 02a33ca7c4..af25ee9e77 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -1330,6 +1330,22 @@ create table tab1 (a int, b text); 
create table tab2 (x int, y tab1); alter table tab1 alter column b type varchar; -- fails +-- Alter column type that's part of a partitioned index +create table at_partitioned (a int, b text) partition by range (a); +create table at_part_1 partition of at_partitioned for values from (0) to (1000); +insert into at_partitioned values (512, '0.123'); +create table at_part_2 (b text, a int); +insert into at_part_2 values ('1.234', 1024); +create index on at_partitioned (b); +create index on at_partitioned (a); +\d at_part_1 +\d at_part_2 +alter table at_partitioned attach partition at_part_2 for values from (1000) to (2000); +\d at_part_2 +alter table at_partitioned alter column b type numeric using b::numeric; +\d at_part_1 +\d at_part_2 + -- disallow recursive containment of row types create temp table recur1 (f1 int); alter table recur1 add column f2 recur1; -- fails diff --git a/src/test/regress/sql/indexing.sql b/src/test/regress/sql/indexing.sql new file mode 100644 index 0000000000..33be718699 --- /dev/null +++ b/src/test/regress/sql/indexing.sql @@ -0,0 +1,388 @@ +-- Creating an index on a partitioned table makes the partitions +-- automatically get the index +create table idxpart (a int, b int, c text) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (10); +create table idxpart2 partition of idxpart for values from (10) to (100) + partition by range (b); +create table idxpart21 partition of idxpart2 for values from (0) to (100); +create index on idxpart (a); +select relname, relkind, inhparent::regclass + from pg_class left join pg_index ix on (indexrelid = oid) + left join pg_inherits on (ix.indexrelid = inhrelid) + where relname like 'idxpart%' order by relname; +drop table idxpart; + +-- Some unsupported features +create table idxpart (a int, b int, c text) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (10); +create unique index on idxpart (a); +create index concurrently on idxpart (a); +drop table idxpart; + +-- If a table without index is attached as partition to a table with +-- an index, the index is automatically created +create table idxpart (a int, b int, c text) partition by range (a); +create index idxparti on idxpart (a); +create index idxparti2 on idxpart (b, c); +create table idxpart1 (like idxpart); +\d idxpart1 +alter table idxpart attach partition idxpart1 for values from (0) to (10); +\d idxpart1 +drop table idxpart; + +-- If a partition already has an index, don't create a duplicative one +create table idxpart (a int, b int) partition by range (a, b); +create table idxpart1 partition of idxpart for values from (0, 0) to (10, 10); +create index on idxpart1 (a, b); +create index on idxpart (a, b); +\d idxpart1 +select relname, relkind, inhparent::regclass + from pg_class left join pg_index ix on (indexrelid = oid) + left join pg_inherits on (ix.indexrelid = inhrelid) + where relname like 'idxpart%' order by relname; +drop table idxpart; + +-- DROP behavior for partitioned indexes +create table idxpart (a int) partition by range (a); +create index on idxpart (a); +create table idxpart1 partition of idxpart for values from (0) to (10); +drop index idxpart1_a_idx; -- no way +drop index idxpart_a_idx; -- both indexes go away +select relname, relkind from pg_class + where relname like 'idxpart%' order by relname; +create index on idxpart (a); +drop table idxpart1; -- the index on partition goes away too +select relname, relkind from pg_class + where relname like 'idxpart%' order by 
relname; +drop table idxpart; + +-- ALTER INDEX .. ATTACH, error cases +create table idxpart (a int, b int) partition by range (a, b); +create table idxpart1 partition of idxpart for values from (0, 0) to (10, 10); +create index idxpart_a_b_idx on only idxpart (a, b); +create index idxpart1_a_b_idx on idxpart1 (a, b); +create index idxpart1_tst1 on idxpart1 (b, a); +create index idxpart1_tst2 on idxpart1 using hash (a); +create index idxpart1_tst3 on idxpart1 (a, b) where a > 10; + +alter index idxpart attach partition idxpart1; +alter index idxpart_a_b_idx attach partition idxpart1; +alter index idxpart_a_b_idx attach partition idxpart_a_b_idx; +alter index idxpart_a_b_idx attach partition idxpart1_b_idx; +alter index idxpart_a_b_idx attach partition idxpart1_tst1; +alter index idxpart_a_b_idx attach partition idxpart1_tst2; +alter index idxpart_a_b_idx attach partition idxpart1_tst3; +-- OK +alter index idxpart_a_b_idx attach partition idxpart1_a_b_idx; +alter index idxpart_a_b_idx attach partition idxpart1_a_b_idx; -- quiet + +-- reject dupe +create index idxpart1_2_a_b on idxpart1 (a, b); +alter index idxpart_a_b_idx attach partition idxpart1_2_a_b; +drop table idxpart; +-- make sure everything's gone +select indexrelid::regclass, indrelid::regclass + from pg_index where indexrelid::regclass::text like 'idxpart%'; + +-- Don't auto-attach incompatible indexes +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 (a int, b int); +create index on idxpart1 using hash (a); +create index on idxpart1 (a) where b > 1; +create index on idxpart1 ((a + 0)); +create index on idxpart1 (a, a); +create index on idxpart (a); +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +\d idxpart1 +drop table idxpart; + +-- If CREATE INDEX ONLY, don't create indexes on partitions; and existing +-- indexes on partitions don't change parent. ALTER INDEX ATTACH can change +-- the parent after the fact. +create table idxpart (a int) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (100); +create table idxpart2 partition of idxpart for values from (100) to (1000) + partition by range (a); +create table idxpart21 partition of idxpart2 for values from (100) to (200); +create table idxpart22 partition of idxpart2 for values from (200) to (300); +create index on idxpart22 (a); +create index on only idxpart2 (a); +create index on idxpart (a); +-- Here we expect that idxpart1 and idxpart2 have a new index, but idxpart21 +-- does not; also, idxpart22 is not attached. +\d idxpart1 +\d idxpart2 +\d idxpart21 +select indexrelid::regclass, indrelid::regclass, inhparent::regclass + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) +where indexrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; +alter index idxpart2_a_idx attach partition idxpart22_a_idx; +select indexrelid::regclass, indrelid::regclass, inhparent::regclass + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) +where indexrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; +-- attaching idxpart22 is not enough to set idxpart22_a_idx valid ... +alter index idxpart2_a_idx attach partition idxpart22_a_idx; +\d idxpart2 +-- ... but this one is. 
+create index on idxpart21 (a); +alter index idxpart2_a_idx attach partition idxpart21_a_idx; +\d idxpart2 +drop table idxpart; + +-- When a table is attached a partition and it already has an index, a +-- duplicate index should not get created, but rather the index becomes +-- attached to the parent's index. +create table idxpart (a int, b int, c text) partition by range (a); +create index idxparti on idxpart (a); +create index idxparti2 on idxpart (b, c); +create table idxpart1 (like idxpart including indexes); +\d idxpart1 +select relname, relkind, inhparent::regclass + from pg_class left join pg_index ix on (indexrelid = oid) + left join pg_inherits on (ix.indexrelid = inhrelid) + where relname like 'idxpart%' order by relname; +alter table idxpart attach partition idxpart1 for values from (0) to (10); +\d idxpart1 +select relname, relkind, inhparent::regclass + from pg_class left join pg_index ix on (indexrelid = oid) + left join pg_inherits on (ix.indexrelid = inhrelid) + where relname like 'idxpart%' order by relname; +drop table idxpart; + +-- Verify that attaching an invalid index does not mark the parent index valid. +-- On the other hand, attaching a valid index marks not only its direct +-- ancestor valid, but also any indirect ancestor that was only missing the one +-- that was just made valid +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 partition of idxpart for values from (1) to (1000) partition by range (a); +create table idxpart11 partition of idxpart1 for values from (1) to (100); +create index on only idxpart1 (a); +create index on only idxpart (a); +-- this results in two invalid indexes: +select relname, indisvalid from pg_class join pg_index on indexrelid = oid + where relname like 'idxpart%' order by relname; +-- idxpart1_a_idx is not valid, so idxpart_a_idx should not become valid: +alter index idxpart_a_idx attach partition idxpart1_a_idx; +select relname, indisvalid from pg_class join pg_index on indexrelid = oid + where relname like 'idxpart%' order by relname; +-- after creating and attaching this, both idxpart1_a_idx and idxpart_a_idx +-- should become valid +create index on idxpart11 (a); +alter index idxpart1_a_idx attach partition idxpart11_a_idx; +select relname, indisvalid from pg_class join pg_index on indexrelid = oid + where relname like 'idxpart%' order by relname; +drop table idxpart; + +-- verify dependency handling during ALTER TABLE DETACH PARTITION +create table idxpart (a int) partition by range (a); +create table idxpart1 (like idxpart); +create index on idxpart1 (a); +create index on idxpart (a); +create table idxpart2 (like idxpart); +alter table idxpart attach partition idxpart1 for values from (0000) to (1000); +alter table idxpart attach partition idxpart2 for values from (1000) to (2000); +create table idxpart3 partition of idxpart for values from (2000) to (3000); +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; +-- a) after detaching partitions, the indexes can be dropped independently +alter table idxpart detach partition idxpart1; +alter table idxpart detach partition idxpart2; +alter table idxpart detach partition idxpart3; +drop index idxpart1_a_idx; +drop index idxpart2_a_idx; +drop index idxpart3_a_idx; +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; +drop table idxpart, idxpart1, idxpart2, idxpart3; +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; + +create table idxpart (a int) 
partition by range (a); +create table idxpart1 (like idxpart); +create index on idxpart1 (a); +create index on idxpart (a); +create table idxpart2 (like idxpart); +alter table idxpart attach partition idxpart1 for values from (0000) to (1000); +alter table idxpart attach partition idxpart2 for values from (1000) to (2000); +create table idxpart3 partition of idxpart for values from (2000) to (3000); +-- b) after detaching, dropping the index on parent does not remove the others +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; +alter table idxpart detach partition idxpart1; +alter table idxpart detach partition idxpart2; +alter table idxpart detach partition idxpart3; +drop index idxpart_a_idx; +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; +drop table idxpart, idxpart1, idxpart2, idxpart3; +select relname, relkind from pg_class where relname like 'idxpart%' order by relname; + +-- Verify that expression indexes inherit correctly +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 (like idxpart); +create index on idxpart1 ((a + b)); +create index on idxpart ((a + b)); +create table idxpart2 (like idxpart); +alter table idxpart attach partition idxpart1 for values from (0000) to (1000); +alter table idxpart attach partition idxpart2 for values from (1000) to (2000); +create table idxpart3 partition of idxpart for values from (2000) to (3000); +select relname as child, inhparent::regclass as parent, pg_get_indexdef as childdef + from pg_class join pg_inherits on inhrelid = oid, + lateral pg_get_indexdef(pg_class.oid) + where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; +drop table idxpart; + +-- Verify behavior for collation (mis)matches +create table idxpart (a text) partition by range (a); +create table idxpart1 (like idxpart); +create table idxpart2 (like idxpart); +create index on idxpart2 (a collate "POSIX"); +create index on idxpart2 (a); +create index on idxpart2 (a collate "C"); +alter table idxpart attach partition idxpart1 for values from ('aaa') to ('bbb'); +alter table idxpart attach partition idxpart2 for values from ('bbb') to ('ccc'); +create table idxpart3 partition of idxpart for values from ('ccc') to ('ddd'); +create index on idxpart (a collate "C"); +create table idxpart4 partition of idxpart for values from ('ddd') to ('eee'); +select relname as child, inhparent::regclass as parent, pg_get_indexdef as childdef + from pg_class left join pg_inherits on inhrelid = oid, + lateral pg_get_indexdef(pg_class.oid) + where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; +drop table idxpart; + +-- Verify behavior for opclass (mis)matches +create table idxpart (a text) partition by range (a); +create table idxpart1 (like idxpart); +create table idxpart2 (like idxpart); +create index on idxpart2 (a); +alter table idxpart attach partition idxpart1 for values from ('aaa') to ('bbb'); +alter table idxpart attach partition idxpart2 for values from ('bbb') to ('ccc'); +create table idxpart3 partition of idxpart for values from ('ccc') to ('ddd'); +create index on idxpart (a text_pattern_ops); +create table idxpart4 partition of idxpart for values from ('ddd') to ('eee'); +-- must *not* have attached the index we created on idxpart2 +select relname as child, inhparent::regclass as parent, pg_get_indexdef as childdef + from pg_class left join pg_inherits on inhrelid = oid, + lateral pg_get_indexdef(pg_class.oid) + where relkind in ('i', 'I') 
and relname like 'idxpart%' order by relname; +drop index idxpart_a_idx; +create index on only idxpart (a text_pattern_ops); +-- must reject +alter index idxpart_a_idx attach partition idxpart2_a_idx; +drop table idxpart; + +-- Verify that attaching indexes maps attribute numbers correctly +create table idxpart (col1 int, a int, col2 int, b int) partition by range (a); +create table idxpart1 (b int, col1 int, col2 int, col3 int, a int); +alter table idxpart drop column col1, drop column col2; +alter table idxpart1 drop column col1, drop column col2, drop column col3; +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +create index idxpart_1_idx on only idxpart (b, a); +create index idxpart1_1_idx on idxpart1 (b, a); +create index idxpart1_1b_idx on idxpart1 (b); +-- test expressions and partial-index predicate, too +create index idxpart_2_idx on only idxpart ((b + a)) where a > 1; +create index idxpart1_2_idx on idxpart1 ((b + a)) where a > 1; +create index idxpart1_2b_idx on idxpart1 ((a + b)) where a > 1; +create index idxpart1_2c_idx on idxpart1 ((b + a)) where b > 1; +alter index idxpart_1_idx attach partition idxpart1_1b_idx; -- fail +alter index idxpart_1_idx attach partition idxpart1_1_idx; +alter index idxpart_2_idx attach partition idxpart1_2b_idx; -- fail +alter index idxpart_2_idx attach partition idxpart1_2c_idx; -- fail +alter index idxpart_2_idx attach partition idxpart1_2_idx; -- ok +select relname as child, inhparent::regclass as parent, pg_get_indexdef as childdef + from pg_class left join pg_inherits on inhrelid = oid, + lateral pg_get_indexdef(pg_class.oid) + where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; +drop table idxpart; + +-- Make sure the partition columns are mapped correctly +create table idxpart (a int, b int, c text) partition by range (a); +create index idxparti on idxpart (a); +create index idxparti2 on idxpart (c, b); +create table idxpart1 (c text, a int, b int); +alter table idxpart attach partition idxpart1 for values from (0) to (10); +create table idxpart2 (c text, a int, b int); +create index on idxpart2 (a); +create index on idxpart2 (c, b); +alter table idxpart attach partition idxpart2 for values from (10) to (20); +select c.relname, pg_get_indexdef(indexrelid) + from pg_class c join pg_index i on c.oid = i.indexrelid + where indrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; +drop table idxpart; + +-- Verify that columns are mapped correctly in expression indexes +create table idxpart (col1 int, col2 int, a int, b int) partition by range (a); +create table idxpart1 (col2 int, b int, col1 int, a int); +create table idxpart2 (col1 int, col2 int, b int, a int); +alter table idxpart drop column col1, drop column col2; +alter table idxpart1 drop column col1, drop column col2; +alter table idxpart2 drop column col1, drop column col2; +create index on idxpart2 (abs(b)); +alter table idxpart attach partition idxpart2 for values from (0) to (1); +create index on idxpart (abs(b)); +alter table idxpart attach partition idxpart1 for values from (1) to (2); +select c.relname, pg_get_indexdef(indexrelid) + from pg_class c join pg_index i on c.oid = i.indexrelid + where indrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; +drop table idxpart; + +-- Verify that columns are mapped correctly for WHERE in a partial index +create table idxpart (col1 int, a int, col3 int, b int) partition by range (a); +alter table idxpart drop column col1, 
drop column col3; +create table idxpart1 (col1 int, col2 int, col3 int, col4 int, b int, a int); +alter table idxpart1 drop column col1, drop column col2, drop column col3, drop column col4; +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +create table idxpart2 (col1 int, col2 int, b int, a int); +create index on idxpart2 (a) where b > 1000; +alter table idxpart2 drop column col1, drop column col2; +alter table idxpart attach partition idxpart2 for values from (1000) to (2000); +create index on idxpart (a) where b > 1000; +select c.relname, pg_get_indexdef(indexrelid) + from pg_class c join pg_index i on c.oid = i.indexrelid + where indrelid::regclass::text like 'idxpart%' + order by indrelid::regclass::text collate "C"; +drop table idxpart; + +-- Column number mapping: dropped columns in the partition +create table idxpart1 (drop_1 int, drop_2 int, col_keep int, drop_3 int); +alter table idxpart1 drop column drop_1; +alter table idxpart1 drop column drop_2; +alter table idxpart1 drop column drop_3; +create index on idxpart1 (col_keep); +create table idxpart (col_keep int) partition by range (col_keep); +create index on idxpart (col_keep); +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +\d idxpart +\d idxpart1 +select attrelid::regclass, attname, attnum from pg_attribute + where attrelid::regclass::text like 'idxpart%' and attnum > 0 + order by attrelid::regclass, attnum; +drop table idxpart; + +-- Column number mapping: dropped columns in the parent table +create table idxpart(drop_1 int, drop_2 int, col_keep int, drop_3 int) partition by range (col_keep); +alter table idxpart drop column drop_1; +alter table idxpart drop column drop_2; +alter table idxpart drop column drop_3; +create table idxpart1 (col_keep int); +create index on idxpart1 (col_keep); +create index on idxpart (col_keep); +alter table idxpart attach partition idxpart1 for values from (0) to (1000); +\d idxpart +\d idxpart1 +select attrelid::regclass, attname, attnum from pg_attribute + where attrelid::regclass::text like 'idxpart%' and attnum > 0 + order by attrelid::regclass, attnum; +drop table idxpart; + +-- intentionally leave some objects around +create table idxpart (a int) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (100); +create table idxpart2 partition of idxpart for values from (100) to (1000) + partition by range (a); +create table idxpart21 partition of idxpart2 for values from (100) to (200); +create table idxpart22 partition of idxpart2 for values from (200) to (300); +create index on idxpart22 (a); +create index on only idxpart2 (a); +alter index idxpart2_a_idx attach partition idxpart22_a_idx; +create index on idxpart (a); From 189d0ff588f54b9641c6684d7c668ef85ea4dfbd Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 19 Jan 2018 12:31:34 -0300 Subject: [PATCH 0854/1087] Fix regression tests for better stability Per buildfarm --- src/test/regress/expected/indexing.out | 26 +++++++++++++------------- src/test/regress/sql/indexing.sql | 10 +++++----- 2 files changed, 18 insertions(+), 18 deletions(-) diff --git a/src/test/regress/expected/indexing.out b/src/test/regress/expected/indexing.out index e9cccca876..ffd4b10c37 100644 --- a/src/test/regress/expected/indexing.out +++ b/src/test/regress/expected/indexing.out @@ -224,26 +224,26 @@ Partition of: idxpart2 FOR VALUES FROM (100) TO (200) select indexrelid::regclass, indrelid::regclass, inhparent::regclass from pg_index idx left join 
pg_inherits inh on (idx.indexrelid = inh.inhrelid) where indexrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; indexrelid | indrelid | inhparent -----------------+-----------+--------------- - idxpart_a_idx | idxpart | idxpart1_a_idx | idxpart1 | idxpart_a_idx - idxpart2_a_idx | idxpart2 | idxpart_a_idx idxpart22_a_idx | idxpart22 | + idxpart2_a_idx | idxpart2 | idxpart_a_idx + idxpart_a_idx | idxpart | (4 rows) alter index idxpart2_a_idx attach partition idxpart22_a_idx; select indexrelid::regclass, indrelid::regclass, inhparent::regclass from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) where indexrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; indexrelid | indrelid | inhparent -----------------+-----------+---------------- - idxpart_a_idx | idxpart | idxpart1_a_idx | idxpart1 | idxpart_a_idx - idxpart2_a_idx | idxpart2 | idxpart_a_idx idxpart22_a_idx | idxpart22 | idxpart2_a_idx + idxpart2_a_idx | idxpart2 | idxpart_a_idx + idxpart_a_idx | idxpart | (4 rows) -- attaching idxpart22 is not enough to set idxpart22_a_idx valid ... @@ -600,15 +600,15 @@ alter table idxpart attach partition idxpart2 for values from (10) to (20); select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; relname | pg_get_indexdef ------------------+-------------------------------------------------------------- - idxparti | CREATE INDEX idxparti ON ONLY idxpart USING btree (a) - idxparti2 | CREATE INDEX idxparti2 ON ONLY idxpart USING btree (c, b) idxpart1_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a) idxpart1_c_b_idx | CREATE INDEX idxpart1_c_b_idx ON idxpart1 USING btree (c, b) idxpart2_a_idx | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a) idxpart2_c_b_idx | CREATE INDEX idxpart2_c_b_idx ON idxpart2 USING btree (c, b) + idxparti | CREATE INDEX idxparti ON ONLY idxpart USING btree (a) + idxparti2 | CREATE INDEX idxparti2 ON ONLY idxpart USING btree (c, b) (6 rows) drop table idxpart; @@ -626,12 +626,12 @@ alter table idxpart attach partition idxpart1 for values from (1) to (2); select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; relname | pg_get_indexdef ------------------+------------------------------------------------------------------- - idxpart_abs_idx | CREATE INDEX idxpart_abs_idx ON ONLY idxpart USING btree (abs(b)) idxpart1_abs_idx | CREATE INDEX idxpart1_abs_idx ON idxpart1 USING btree (abs(b)) idxpart2_abs_idx | CREATE INDEX idxpart2_abs_idx ON idxpart2 USING btree (abs(b)) + idxpart_abs_idx | CREATE INDEX idxpart_abs_idx ON ONLY idxpart USING btree (abs(b)) (3 rows) drop table idxpart; @@ -649,12 +649,12 @@ create index on idxpart (a) where b > 1000; select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; relname | pg_get_indexdef 
----------------+----------------------------------------------------------------------------- - idxpart_a_idx | CREATE INDEX idxpart_a_idx ON ONLY idxpart USING btree (a) WHERE (b > 1000) idxpart1_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a) WHERE (b > 1000) idxpart2_a_idx | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a) WHERE (b > 1000) + idxpart_a_idx | CREATE INDEX idxpart_a_idx ON ONLY idxpart USING btree (a) WHERE (b > 1000) (3 rows) drop table idxpart; diff --git a/src/test/regress/sql/indexing.sql b/src/test/regress/sql/indexing.sql index 33be718699..2f985ec866 100644 --- a/src/test/regress/sql/indexing.sql +++ b/src/test/regress/sql/indexing.sql @@ -116,12 +116,12 @@ create index on idxpart (a); select indexrelid::regclass, indrelid::regclass, inhparent::regclass from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) where indexrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; alter index idxpart2_a_idx attach partition idxpart22_a_idx; select indexrelid::regclass, indrelid::regclass, inhparent::regclass from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) where indexrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; -- attaching idxpart22 is not enough to set idxpart22_a_idx valid ... alter index idxpart2_a_idx attach partition idxpart22_a_idx; \d idxpart2 @@ -306,7 +306,7 @@ alter table idxpart attach partition idxpart2 for values from (10) to (20); select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; drop table idxpart; -- Verify that columns are mapped correctly in expression indexes @@ -323,7 +323,7 @@ alter table idxpart attach partition idxpart1 for values from (1) to (2); select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; drop table idxpart; -- Verify that columns are mapped correctly for WHERE in a partial index @@ -340,7 +340,7 @@ create index on idxpart (a) where b > 1000; select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' - order by indrelid::regclass::text collate "C"; + order by indexrelid::regclass::text collate "C"; drop table idxpart; -- Column number mapping: dropped columns in the partition From 42b5856038a5af6bb4ec3c09b62d9d9a3ab43172 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 19 Jan 2018 13:23:49 -0300 Subject: [PATCH 0855/1087] Fix pg_dump version comparison I missed a '0' in the version number string ... Per buildfarm member crake. --- src/bin/pg_dump/pg_dump.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index af2d03ed19..0bdd3982fe 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -6565,7 +6565,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) * is not. 
*/ resetPQExpBuffer(query); - if (fout->remoteVersion >= 11000) + if (fout->remoteVersion >= 110000) { appendPQExpBuffer(query, "SELECT t.tableoid, t.oid, " From 2c6f37ed62114bd5a092c20fe721bd11b3bcb91e Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 11 Oct 2017 18:35:19 -0400 Subject: [PATCH 0856/1087] Replace GrantObjectType with ObjectType There used to be a lot of different *Type and *Kind symbol groups to address objects within different commands, most of which have been replaced by ObjectType, starting with b256f2426433c56b4bea3a8102757749885b81ba. But this conversion was never done for the ACL commands until now. This change ends up being just a plain replacement of the types and symbols, without any code restructuring needed, except deleting some now redundant code. Reviewed-by: Michael Paquier Reviewed-by: Stephen Frost --- src/backend/catalog/aclchk.c | 244 +++++++++++++-------------- src/backend/catalog/heap.c | 4 +- src/backend/catalog/pg_namespace.c | 2 +- src/backend/catalog/pg_proc.c | 2 +- src/backend/catalog/pg_type.c | 2 +- src/backend/commands/event_trigger.c | 185 ++++++++++++-------- src/backend/parser/gram.y | 54 +++--- src/backend/tcop/utility.c | 2 +- src/backend/utils/adt/acl.c | 58 +++---- src/include/commands/event_trigger.h | 1 - src/include/nodes/parsenodes.h | 21 +-- src/include/tcop/deparse_utility.h | 2 +- src/include/utils/acl.h | 6 +- src/include/utils/aclchk_internal.h | 4 +- 14 files changed, 302 insertions(+), 285 deletions(-) diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c index 50a2e2681b..5cfaa510de 100644 --- a/src/backend/catalog/aclchk.c +++ b/src/backend/catalog/aclchk.c @@ -86,7 +86,7 @@ typedef struct Oid nspid; /* namespace, or InvalidOid if none */ /* remaining fields are same as in InternalGrant: */ bool is_grant; - GrantObjectType objtype; + ObjectType objtype; bool all_privs; AclMode privileges; List *grantees; @@ -116,8 +116,8 @@ static void ExecGrant_Type(InternalGrant *grantStmt); static void SetDefaultACLsInSchemas(InternalDefaultACL *iacls, List *nspnames); static void SetDefaultACL(InternalDefaultACL *iacls); -static List *objectNamesToOids(GrantObjectType objtype, List *objnames); -static List *objectsInSchemaToOids(GrantObjectType objtype, List *nspnames); +static List *objectNamesToOids(ObjectType objtype, List *objnames); +static List *objectsInSchemaToOids(ObjectType objtype, List *nspnames); static List *getRelationsInNamespace(Oid namespaceId, char relkind); static void expand_col_privileges(List *colnames, Oid table_oid, AclMode this_privileges, @@ -266,7 +266,7 @@ restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs, whole_mask = ACL_ALL_RIGHTS_LARGEOBJECT; break; case ACL_KIND_NAMESPACE: - whole_mask = ACL_ALL_RIGHTS_NAMESPACE; + whole_mask = ACL_ALL_RIGHTS_SCHEMA; break; case ACL_KIND_TABLESPACE: whole_mask = ACL_ALL_RIGHTS_TABLESPACE; @@ -441,68 +441,68 @@ ExecuteGrantStmt(GrantStmt *stmt) /* * Convert stmt->privileges, a list of AccessPriv nodes, into an AclMode - * bitmask. Note: objtype can't be ACL_OBJECT_COLUMN. + * bitmask. Note: objtype can't be OBJECT_COLUMN. */ switch (stmt->objtype) { + case OBJECT_TABLE: /* * Because this might be a sequence, we test both relation and * sequence bits, and later do a more limited test when we know * the object type. 
*/ - case ACL_OBJECT_RELATION: all_privileges = ACL_ALL_RIGHTS_RELATION | ACL_ALL_RIGHTS_SEQUENCE; errormsg = gettext_noop("invalid privilege type %s for relation"); break; - case ACL_OBJECT_SEQUENCE: + case OBJECT_SEQUENCE: all_privileges = ACL_ALL_RIGHTS_SEQUENCE; errormsg = gettext_noop("invalid privilege type %s for sequence"); break; - case ACL_OBJECT_DATABASE: + case OBJECT_DATABASE: all_privileges = ACL_ALL_RIGHTS_DATABASE; errormsg = gettext_noop("invalid privilege type %s for database"); break; - case ACL_OBJECT_DOMAIN: + case OBJECT_DOMAIN: all_privileges = ACL_ALL_RIGHTS_TYPE; errormsg = gettext_noop("invalid privilege type %s for domain"); break; - case ACL_OBJECT_FUNCTION: + case OBJECT_FUNCTION: all_privileges = ACL_ALL_RIGHTS_FUNCTION; errormsg = gettext_noop("invalid privilege type %s for function"); break; - case ACL_OBJECT_LANGUAGE: + case OBJECT_LANGUAGE: all_privileges = ACL_ALL_RIGHTS_LANGUAGE; errormsg = gettext_noop("invalid privilege type %s for language"); break; - case ACL_OBJECT_LARGEOBJECT: + case OBJECT_LARGEOBJECT: all_privileges = ACL_ALL_RIGHTS_LARGEOBJECT; errormsg = gettext_noop("invalid privilege type %s for large object"); break; - case ACL_OBJECT_NAMESPACE: - all_privileges = ACL_ALL_RIGHTS_NAMESPACE; + case OBJECT_SCHEMA: + all_privileges = ACL_ALL_RIGHTS_SCHEMA; errormsg = gettext_noop("invalid privilege type %s for schema"); break; - case ACL_OBJECT_PROCEDURE: + case OBJECT_PROCEDURE: all_privileges = ACL_ALL_RIGHTS_FUNCTION; errormsg = gettext_noop("invalid privilege type %s for procedure"); break; - case ACL_OBJECT_ROUTINE: + case OBJECT_ROUTINE: all_privileges = ACL_ALL_RIGHTS_FUNCTION; errormsg = gettext_noop("invalid privilege type %s for routine"); break; - case ACL_OBJECT_TABLESPACE: + case OBJECT_TABLESPACE: all_privileges = ACL_ALL_RIGHTS_TABLESPACE; errormsg = gettext_noop("invalid privilege type %s for tablespace"); break; - case ACL_OBJECT_TYPE: + case OBJECT_TYPE: all_privileges = ACL_ALL_RIGHTS_TYPE; errormsg = gettext_noop("invalid privilege type %s for type"); break; - case ACL_OBJECT_FDW: + case OBJECT_FDW: all_privileges = ACL_ALL_RIGHTS_FDW; errormsg = gettext_noop("invalid privilege type %s for foreign-data wrapper"); break; - case ACL_OBJECT_FOREIGN_SERVER: + case OBJECT_FOREIGN_SERVER: all_privileges = ACL_ALL_RIGHTS_FOREIGN_SERVER; errormsg = gettext_noop("invalid privilege type %s for foreign server"); break; @@ -540,7 +540,7 @@ ExecuteGrantStmt(GrantStmt *stmt) */ if (privnode->cols) { - if (stmt->objtype != ACL_OBJECT_RELATION) + if (stmt->objtype != OBJECT_TABLE) ereport(ERROR, (errcode(ERRCODE_INVALID_GRANT_OPERATION), errmsg("column privileges are only valid for relations"))); @@ -574,38 +574,38 @@ ExecGrantStmt_oids(InternalGrant *istmt) { switch (istmt->objtype) { - case ACL_OBJECT_RELATION: - case ACL_OBJECT_SEQUENCE: + case OBJECT_TABLE: + case OBJECT_SEQUENCE: ExecGrant_Relation(istmt); break; - case ACL_OBJECT_DATABASE: + case OBJECT_DATABASE: ExecGrant_Database(istmt); break; - case ACL_OBJECT_DOMAIN: - case ACL_OBJECT_TYPE: + case OBJECT_DOMAIN: + case OBJECT_TYPE: ExecGrant_Type(istmt); break; - case ACL_OBJECT_FDW: + case OBJECT_FDW: ExecGrant_Fdw(istmt); break; - case ACL_OBJECT_FOREIGN_SERVER: + case OBJECT_FOREIGN_SERVER: ExecGrant_ForeignServer(istmt); break; - case ACL_OBJECT_FUNCTION: - case ACL_OBJECT_PROCEDURE: - case ACL_OBJECT_ROUTINE: + case OBJECT_FUNCTION: + case OBJECT_PROCEDURE: + case OBJECT_ROUTINE: ExecGrant_Function(istmt); break; - case ACL_OBJECT_LANGUAGE: + case OBJECT_LANGUAGE: 
ExecGrant_Language(istmt); break; - case ACL_OBJECT_LARGEOBJECT: + case OBJECT_LARGEOBJECT: ExecGrant_Largeobject(istmt); break; - case ACL_OBJECT_NAMESPACE: + case OBJECT_SCHEMA: ExecGrant_Namespace(istmt); break; - case ACL_OBJECT_TABLESPACE: + case OBJECT_TABLESPACE: ExecGrant_Tablespace(istmt); break; default: @@ -619,7 +619,7 @@ ExecGrantStmt_oids(InternalGrant *istmt) * the functions a chance to adjust the istmt with privileges actually * granted. */ - if (EventTriggerSupportsGrantObjectType(istmt->objtype)) + if (EventTriggerSupportsObjectType(istmt->objtype)) EventTriggerCollectGrant(istmt); } @@ -634,7 +634,7 @@ ExecGrantStmt_oids(InternalGrant *istmt) * to fail. */ static List * -objectNamesToOids(GrantObjectType objtype, List *objnames) +objectNamesToOids(ObjectType objtype, List *objnames) { List *objects = NIL; ListCell *cell; @@ -643,8 +643,8 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) switch (objtype) { - case ACL_OBJECT_RELATION: - case ACL_OBJECT_SEQUENCE: + case OBJECT_TABLE: + case OBJECT_SEQUENCE: foreach(cell, objnames) { RangeVar *relvar = (RangeVar *) lfirst(cell); @@ -654,7 +654,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, relOid); } break; - case ACL_OBJECT_DATABASE: + case OBJECT_DATABASE: foreach(cell, objnames) { char *dbname = strVal(lfirst(cell)); @@ -664,8 +664,8 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, dbid); } break; - case ACL_OBJECT_DOMAIN: - case ACL_OBJECT_TYPE: + case OBJECT_DOMAIN: + case OBJECT_TYPE: foreach(cell, objnames) { List *typname = (List *) lfirst(cell); @@ -675,7 +675,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, oid); } break; - case ACL_OBJECT_FUNCTION: + case OBJECT_FUNCTION: foreach(cell, objnames) { ObjectWithArgs *func = (ObjectWithArgs *) lfirst(cell); @@ -685,7 +685,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, funcid); } break; - case ACL_OBJECT_LANGUAGE: + case OBJECT_LANGUAGE: foreach(cell, objnames) { char *langname = strVal(lfirst(cell)); @@ -695,7 +695,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, oid); } break; - case ACL_OBJECT_LARGEOBJECT: + case OBJECT_LARGEOBJECT: foreach(cell, objnames) { Oid lobjOid = oidparse(lfirst(cell)); @@ -709,7 +709,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, lobjOid); } break; - case ACL_OBJECT_NAMESPACE: + case OBJECT_SCHEMA: foreach(cell, objnames) { char *nspname = strVal(lfirst(cell)); @@ -719,7 +719,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, oid); } break; - case ACL_OBJECT_PROCEDURE: + case OBJECT_PROCEDURE: foreach(cell, objnames) { ObjectWithArgs *func = (ObjectWithArgs *) lfirst(cell); @@ -729,7 +729,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, procid); } break; - case ACL_OBJECT_ROUTINE: + case OBJECT_ROUTINE: foreach(cell, objnames) { ObjectWithArgs *func = (ObjectWithArgs *) lfirst(cell); @@ -739,7 +739,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, routid); } break; - case ACL_OBJECT_TABLESPACE: + case OBJECT_TABLESPACE: foreach(cell, objnames) { char *spcname = strVal(lfirst(cell)); @@ -749,7 +749,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, spcoid); } break; - case 
ACL_OBJECT_FDW: + case OBJECT_FDW: foreach(cell, objnames) { char *fdwname = strVal(lfirst(cell)); @@ -758,7 +758,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) objects = lappend_oid(objects, fdwid); } break; - case ACL_OBJECT_FOREIGN_SERVER: + case OBJECT_FOREIGN_SERVER: foreach(cell, objnames) { char *srvname = strVal(lfirst(cell)); @@ -783,7 +783,7 @@ objectNamesToOids(GrantObjectType objtype, List *objnames) * no privilege checking on the individual objects here. */ static List * -objectsInSchemaToOids(GrantObjectType objtype, List *nspnames) +objectsInSchemaToOids(ObjectType objtype, List *nspnames) { List *objects = NIL; ListCell *cell; @@ -798,7 +798,7 @@ objectsInSchemaToOids(GrantObjectType objtype, List *nspnames) switch (objtype) { - case ACL_OBJECT_RELATION: + case OBJECT_TABLE: objs = getRelationsInNamespace(namespaceId, RELKIND_RELATION); objects = list_concat(objects, objs); objs = getRelationsInNamespace(namespaceId, RELKIND_VIEW); @@ -810,13 +810,13 @@ objectsInSchemaToOids(GrantObjectType objtype, List *nspnames) objs = getRelationsInNamespace(namespaceId, RELKIND_PARTITIONED_TABLE); objects = list_concat(objects, objs); break; - case ACL_OBJECT_SEQUENCE: + case OBJECT_SEQUENCE: objs = getRelationsInNamespace(namespaceId, RELKIND_SEQUENCE); objects = list_concat(objects, objs); break; - case ACL_OBJECT_FUNCTION: - case ACL_OBJECT_PROCEDURE: - case ACL_OBJECT_ROUTINE: + case OBJECT_FUNCTION: + case OBJECT_PROCEDURE: + case OBJECT_ROUTINE: { ScanKeyData key[2]; int keycount; @@ -835,12 +835,12 @@ objectsInSchemaToOids(GrantObjectType objtype, List *nspnames) * When looking for procedures, check for return type ==0. * When looking for routines, don't check the return type. */ - if (objtype == ACL_OBJECT_FUNCTION) + if (objtype == OBJECT_FUNCTION) ScanKeyInit(&key[keycount++], Anum_pg_proc_prorettype, BTEqualStrategyNumber, F_OIDNE, InvalidOid); - else if (objtype == ACL_OBJECT_PROCEDURE) + else if (objtype == OBJECT_PROCEDURE) ScanKeyInit(&key[keycount++], Anum_pg_proc_prorettype, BTEqualStrategyNumber, F_OIDEQ, @@ -993,32 +993,32 @@ ExecAlterDefaultPrivilegesStmt(ParseState *pstate, AlterDefaultPrivilegesStmt *s */ switch (action->objtype) { - case ACL_OBJECT_RELATION: + case OBJECT_TABLE: all_privileges = ACL_ALL_RIGHTS_RELATION; errormsg = gettext_noop("invalid privilege type %s for relation"); break; - case ACL_OBJECT_SEQUENCE: + case OBJECT_SEQUENCE: all_privileges = ACL_ALL_RIGHTS_SEQUENCE; errormsg = gettext_noop("invalid privilege type %s for sequence"); break; - case ACL_OBJECT_FUNCTION: + case OBJECT_FUNCTION: all_privileges = ACL_ALL_RIGHTS_FUNCTION; errormsg = gettext_noop("invalid privilege type %s for function"); break; - case ACL_OBJECT_PROCEDURE: + case OBJECT_PROCEDURE: all_privileges = ACL_ALL_RIGHTS_FUNCTION; errormsg = gettext_noop("invalid privilege type %s for procedure"); break; - case ACL_OBJECT_ROUTINE: + case OBJECT_ROUTINE: all_privileges = ACL_ALL_RIGHTS_FUNCTION; errormsg = gettext_noop("invalid privilege type %s for routine"); break; - case ACL_OBJECT_TYPE: + case OBJECT_TYPE: all_privileges = ACL_ALL_RIGHTS_TYPE; errormsg = gettext_noop("invalid privilege type %s for type"); break; - case ACL_OBJECT_NAMESPACE: - all_privileges = ACL_ALL_RIGHTS_NAMESPACE; + case OBJECT_SCHEMA: + all_privileges = ACL_ALL_RIGHTS_SCHEMA; errormsg = gettext_noop("invalid privilege type %s for schema"); break; default: @@ -1184,38 +1184,38 @@ SetDefaultACL(InternalDefaultACL *iacls) */ switch (iacls->objtype) { - case ACL_OBJECT_RELATION: + case 
OBJECT_TABLE: objtype = DEFACLOBJ_RELATION; if (iacls->all_privs && this_privileges == ACL_NO_RIGHTS) this_privileges = ACL_ALL_RIGHTS_RELATION; break; - case ACL_OBJECT_SEQUENCE: + case OBJECT_SEQUENCE: objtype = DEFACLOBJ_SEQUENCE; if (iacls->all_privs && this_privileges == ACL_NO_RIGHTS) this_privileges = ACL_ALL_RIGHTS_SEQUENCE; break; - case ACL_OBJECT_FUNCTION: + case OBJECT_FUNCTION: objtype = DEFACLOBJ_FUNCTION; if (iacls->all_privs && this_privileges == ACL_NO_RIGHTS) this_privileges = ACL_ALL_RIGHTS_FUNCTION; break; - case ACL_OBJECT_TYPE: + case OBJECT_TYPE: objtype = DEFACLOBJ_TYPE; if (iacls->all_privs && this_privileges == ACL_NO_RIGHTS) this_privileges = ACL_ALL_RIGHTS_TYPE; break; - case ACL_OBJECT_NAMESPACE: + case OBJECT_SCHEMA: if (OidIsValid(iacls->nspid)) ereport(ERROR, (errcode(ERRCODE_INVALID_GRANT_OPERATION), errmsg("cannot use IN SCHEMA clause when using GRANT/REVOKE ON SCHEMAS"))); objtype = DEFACLOBJ_NAMESPACE; if (iacls->all_privs && this_privileges == ACL_NO_RIGHTS) - this_privileges = ACL_ALL_RIGHTS_NAMESPACE; + this_privileges = ACL_ALL_RIGHTS_SCHEMA; break; default: @@ -1430,19 +1430,19 @@ RemoveRoleFromObjectACL(Oid roleid, Oid classid, Oid objid) switch (pg_default_acl_tuple->defaclobjtype) { case DEFACLOBJ_RELATION: - iacls.objtype = ACL_OBJECT_RELATION; + iacls.objtype = OBJECT_TABLE; break; case DEFACLOBJ_SEQUENCE: - iacls.objtype = ACL_OBJECT_SEQUENCE; + iacls.objtype = OBJECT_SEQUENCE; break; case DEFACLOBJ_FUNCTION: - iacls.objtype = ACL_OBJECT_FUNCTION; + iacls.objtype = OBJECT_FUNCTION; break; case DEFACLOBJ_TYPE: - iacls.objtype = ACL_OBJECT_TYPE; + iacls.objtype = OBJECT_TYPE; break; case DEFACLOBJ_NAMESPACE: - iacls.objtype = ACL_OBJECT_NAMESPACE; + iacls.objtype = OBJECT_SCHEMA; break; default: /* Shouldn't get here */ @@ -1471,35 +1471,35 @@ RemoveRoleFromObjectACL(Oid roleid, Oid classid, Oid objid) switch (classid) { case RelationRelationId: - /* it's OK to use RELATION for a sequence */ - istmt.objtype = ACL_OBJECT_RELATION; + /* it's OK to use TABLE for a sequence */ + istmt.objtype = OBJECT_TABLE; break; case DatabaseRelationId: - istmt.objtype = ACL_OBJECT_DATABASE; + istmt.objtype = OBJECT_DATABASE; break; case TypeRelationId: - istmt.objtype = ACL_OBJECT_TYPE; + istmt.objtype = OBJECT_TYPE; break; case ProcedureRelationId: - istmt.objtype = ACL_OBJECT_ROUTINE; + istmt.objtype = OBJECT_ROUTINE; break; case LanguageRelationId: - istmt.objtype = ACL_OBJECT_LANGUAGE; + istmt.objtype = OBJECT_LANGUAGE; break; case LargeObjectRelationId: - istmt.objtype = ACL_OBJECT_LARGEOBJECT; + istmt.objtype = OBJECT_LARGEOBJECT; break; case NamespaceRelationId: - istmt.objtype = ACL_OBJECT_NAMESPACE; + istmt.objtype = OBJECT_SCHEMA; break; case TableSpaceRelationId: - istmt.objtype = ACL_OBJECT_TABLESPACE; + istmt.objtype = OBJECT_TABLESPACE; break; case ForeignServerRelationId: - istmt.objtype = ACL_OBJECT_FOREIGN_SERVER; + istmt.objtype = OBJECT_FOREIGN_SERVER; break; case ForeignDataWrapperRelationId: - istmt.objtype = ACL_OBJECT_FDW; + istmt.objtype = OBJECT_FDW; break; default: elog(ERROR, "unexpected object class %u", classid); @@ -1682,7 +1682,7 @@ ExecGrant_Attribute(InternalGrant *istmt, Oid relOid, const char *relname, &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_COLUMN, ownerId); + old_acl = acldefault(OBJECT_COLUMN, ownerId); /* There are no old member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -1839,7 +1839,7 @@ ExecGrant_Relation(InternalGrant *istmt) NameStr(pg_class_tuple->relname)))); /* Used 
GRANT SEQUENCE on a non-sequence? */ - if (istmt->objtype == ACL_OBJECT_SEQUENCE && + if (istmt->objtype == OBJECT_SEQUENCE && pg_class_tuple->relkind != RELKIND_SEQUENCE) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), @@ -1863,7 +1863,7 @@ ExecGrant_Relation(InternalGrant *istmt) * permissions. The OR of table and sequence permissions were already * checked. */ - if (istmt->objtype == ACL_OBJECT_RELATION) + if (istmt->objtype == OBJECT_TABLE) { if (pg_class_tuple->relkind == RELKIND_SEQUENCE) { @@ -1942,10 +1942,10 @@ ExecGrant_Relation(InternalGrant *istmt) switch (pg_class_tuple->relkind) { case RELKIND_SEQUENCE: - old_acl = acldefault(ACL_OBJECT_SEQUENCE, ownerId); + old_acl = acldefault(OBJECT_SEQUENCE, ownerId); break; default: - old_acl = acldefault(ACL_OBJECT_RELATION, ownerId); + old_acl = acldefault(OBJECT_TABLE, ownerId); break; } /* There are no old member roles according to the catalogs */ @@ -2170,7 +2170,7 @@ ExecGrant_Database(InternalGrant *istmt) RelationGetDescr(relation), &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_DATABASE, ownerId); + old_acl = acldefault(OBJECT_DATABASE, ownerId); /* There are no old member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -2292,7 +2292,7 @@ ExecGrant_Fdw(InternalGrant *istmt) &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_FDW, ownerId); + old_acl = acldefault(OBJECT_FDW, ownerId); /* There are no old member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -2418,7 +2418,7 @@ ExecGrant_ForeignServer(InternalGrant *istmt) &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_FOREIGN_SERVER, ownerId); + old_acl = acldefault(OBJECT_FOREIGN_SERVER, ownerId); /* There are no old member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -2542,7 +2542,7 @@ ExecGrant_Function(InternalGrant *istmt) &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_FUNCTION, ownerId); + old_acl = acldefault(OBJECT_FUNCTION, ownerId); /* There are no old member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -2673,7 +2673,7 @@ ExecGrant_Language(InternalGrant *istmt) &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_LANGUAGE, ownerId); + old_acl = acldefault(OBJECT_LANGUAGE, ownerId); /* There are no old member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -2811,7 +2811,7 @@ ExecGrant_Largeobject(InternalGrant *istmt) RelationGetDescr(relation), &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_LARGEOBJECT, ownerId); + old_acl = acldefault(OBJECT_LARGEOBJECT, ownerId); /* There are no old member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -2895,7 +2895,7 @@ ExecGrant_Namespace(InternalGrant *istmt) ListCell *cell; if (istmt->all_privs && istmt->privileges == ACL_NO_RIGHTS) - istmt->privileges = ACL_ALL_RIGHTS_NAMESPACE; + istmt->privileges = ACL_ALL_RIGHTS_SCHEMA; relation = heap_open(NamespaceRelationId, RowExclusiveLock); @@ -2937,7 +2937,7 @@ ExecGrant_Namespace(InternalGrant *istmt) &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_NAMESPACE, ownerId); + old_acl = acldefault(OBJECT_SCHEMA, ownerId); /* There are no old member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -3061,7 +3061,7 @@ ExecGrant_Tablespace(InternalGrant *istmt) RelationGetDescr(relation), &isNull); if (isNull) { - old_acl = acldefault(ACL_OBJECT_TABLESPACE, ownerId); + old_acl = acldefault(OBJECT_TABLESPACE, ownerId); /* There are no old 
member roles according to the catalogs */ noldmembers = 0; oldmembers = NULL; @@ -3179,7 +3179,7 @@ ExecGrant_Type(InternalGrant *istmt) errhint("Set the privileges of the element type instead."))); /* Used GRANT DOMAIN on a non-domain? */ - if (istmt->objtype == ACL_OBJECT_DOMAIN && + if (istmt->objtype == OBJECT_DOMAIN && pg_type_tuple->typtype != TYPTYPE_DOMAIN) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), @@ -3745,10 +3745,10 @@ pg_class_aclmask(Oid table_oid, Oid roleid, switch (classForm->relkind) { case RELKIND_SEQUENCE: - acl = acldefault(ACL_OBJECT_SEQUENCE, ownerId); + acl = acldefault(OBJECT_SEQUENCE, ownerId); break; default: - acl = acldefault(ACL_OBJECT_RELATION, ownerId); + acl = acldefault(OBJECT_TABLE, ownerId); break; } aclDatum = (Datum) 0; @@ -3804,7 +3804,7 @@ pg_database_aclmask(Oid db_oid, Oid roleid, if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_DATABASE, ownerId); + acl = acldefault(OBJECT_DATABASE, ownerId); aclDatum = (Datum) 0; } else @@ -3858,7 +3858,7 @@ pg_proc_aclmask(Oid proc_oid, Oid roleid, if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_FUNCTION, ownerId); + acl = acldefault(OBJECT_FUNCTION, ownerId); aclDatum = (Datum) 0; } else @@ -3912,7 +3912,7 @@ pg_language_aclmask(Oid lang_oid, Oid roleid, if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_LANGUAGE, ownerId); + acl = acldefault(OBJECT_LANGUAGE, ownerId); aclDatum = (Datum) 0; } else @@ -3992,7 +3992,7 @@ pg_largeobject_aclmask_snapshot(Oid lobj_oid, Oid roleid, if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_LARGEOBJECT, ownerId); + acl = acldefault(OBJECT_LARGEOBJECT, ownerId); aclDatum = (Datum) 0; } else @@ -4055,7 +4055,7 @@ pg_namespace_aclmask(Oid nsp_oid, Oid roleid, { if (pg_database_aclcheck(MyDatabaseId, roleid, ACL_CREATE_TEMP) == ACLCHECK_OK) - return mask & ACL_ALL_RIGHTS_NAMESPACE; + return mask & ACL_ALL_RIGHTS_SCHEMA; else return mask & ACL_USAGE; } @@ -4076,7 +4076,7 @@ pg_namespace_aclmask(Oid nsp_oid, Oid roleid, if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_NAMESPACE, ownerId); + acl = acldefault(OBJECT_SCHEMA, ownerId); aclDatum = (Datum) 0; } else @@ -4132,7 +4132,7 @@ pg_tablespace_aclmask(Oid spc_oid, Oid roleid, if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_TABLESPACE, ownerId); + acl = acldefault(OBJECT_TABLESPACE, ownerId); aclDatum = (Datum) 0; } else @@ -4194,7 +4194,7 @@ pg_foreign_data_wrapper_aclmask(Oid fdw_oid, Oid roleid, if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_FDW, ownerId); + acl = acldefault(OBJECT_FDW, ownerId); aclDatum = (Datum) 0; } else @@ -4256,7 +4256,7 @@ pg_foreign_server_aclmask(Oid srv_oid, Oid roleid, if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_FOREIGN_SERVER, ownerId); + acl = acldefault(OBJECT_FOREIGN_SERVER, ownerId); aclDatum = (Datum) 0; } else @@ -4333,7 +4333,7 @@ pg_type_aclmask(Oid type_oid, Oid roleid, AclMode mask, AclMaskHow how) if (isNull) { /* No ACL, so build default ACL */ - acl = acldefault(ACL_OBJECT_TYPE, ownerId); + acl = acldefault(OBJECT_TYPE, ownerId); aclDatum = (Datum) 0; } else @@ -5302,7 +5302,7 @@ get_default_acl_internal(Oid roleId, Oid nsp_oid, char objtype) * Returns NULL if built-in system defaults should be used */ Acl * -get_user_default_acl(GrantObjectType objtype, Oid ownerId, Oid nsp_oid) +get_user_default_acl(ObjectType objtype, Oid ownerId, Oid 
nsp_oid) { Acl *result; Acl *glob_acl; @@ -5320,23 +5320,23 @@ get_user_default_acl(GrantObjectType objtype, Oid ownerId, Oid nsp_oid) /* Check if object type is supported in pg_default_acl */ switch (objtype) { - case ACL_OBJECT_RELATION: + case OBJECT_TABLE: defaclobjtype = DEFACLOBJ_RELATION; break; - case ACL_OBJECT_SEQUENCE: + case OBJECT_SEQUENCE: defaclobjtype = DEFACLOBJ_SEQUENCE; break; - case ACL_OBJECT_FUNCTION: + case OBJECT_FUNCTION: defaclobjtype = DEFACLOBJ_FUNCTION; break; - case ACL_OBJECT_TYPE: + case OBJECT_TYPE: defaclobjtype = DEFACLOBJ_TYPE; break; - case ACL_OBJECT_NAMESPACE: + case OBJECT_SCHEMA: defaclobjtype = DEFACLOBJ_NAMESPACE; break; diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 99f4d59863..774c07b03a 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -1143,11 +1143,11 @@ heap_create_with_catalog(const char *relname, case RELKIND_MATVIEW: case RELKIND_FOREIGN_TABLE: case RELKIND_PARTITIONED_TABLE: - relacl = get_user_default_acl(ACL_OBJECT_RELATION, ownerid, + relacl = get_user_default_acl(OBJECT_TABLE, ownerid, relnamespace); break; case RELKIND_SEQUENCE: - relacl = get_user_default_acl(ACL_OBJECT_SEQUENCE, ownerid, + relacl = get_user_default_acl(OBJECT_SEQUENCE, ownerid, relnamespace); break; default: diff --git a/src/backend/catalog/pg_namespace.c b/src/backend/catalog/pg_namespace.c index a82d785034..2cf52be025 100644 --- a/src/backend/catalog/pg_namespace.c +++ b/src/backend/catalog/pg_namespace.c @@ -63,7 +63,7 @@ NamespaceCreate(const char *nspName, Oid ownerId, bool isTemp) errmsg("schema \"%s\" already exists", nspName))); if (!isTemp) - nspacl = get_user_default_acl(ACL_OBJECT_NAMESPACE, ownerId, + nspacl = get_user_default_acl(OBJECT_SCHEMA, ownerId, InvalidOid); else nspacl = NULL; diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c index 39d5172e97..dd674113ba 100644 --- a/src/backend/catalog/pg_proc.c +++ b/src/backend/catalog/pg_proc.c @@ -582,7 +582,7 @@ ProcedureCreate(const char *procedureName, /* Creating a new procedure */ /* First, get default permissions and set up proacl */ - proacl = get_user_default_acl(ACL_OBJECT_FUNCTION, proowner, + proacl = get_user_default_acl(OBJECT_FUNCTION, proowner, procNamespace); if (proacl != NULL) values[Anum_pg_proc_proacl - 1] = PointerGetDatum(proacl); diff --git a/src/backend/catalog/pg_type.c b/src/backend/catalog/pg_type.c index 963ccb7ff2..fd63ea8cd1 100644 --- a/src/backend/catalog/pg_type.c +++ b/src/backend/catalog/pg_type.c @@ -380,7 +380,7 @@ TypeCreate(Oid newTypeOid, else nulls[Anum_pg_type_typdefault - 1] = true; - typacl = get_user_default_acl(ACL_OBJECT_TYPE, ownerId, + typacl = get_user_default_acl(OBJECT_TYPE, ownerId, typeNamespace); if (typacl != NULL) values[Anum_pg_type_typacl - 1] = PointerGetDatum(typacl); diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c index 8455138ed3..82c7b6a0ba 100644 --- a/src/backend/commands/event_trigger.c +++ b/src/backend/commands/event_trigger.c @@ -159,8 +159,8 @@ static Oid insert_event_trigger_tuple(const char *trigname, const char *eventnam static void validate_ddl_tags(const char *filtervar, List *taglist); static void validate_table_rewrite_tags(const char *filtervar, List *taglist); static void EventTriggerInvoke(List *fn_oid_list, EventTriggerData *trigdata); -static const char *stringify_grantobjtype(GrantObjectType objtype); -static const char *stringify_adefprivs_objtype(GrantObjectType objtype); +static const char 
*stringify_grant_objtype(ObjectType objtype); +static const char *stringify_adefprivs_objtype(ObjectType objtype); /* * Create an event trigger. @@ -1199,41 +1199,6 @@ EventTriggerSupportsObjectClass(ObjectClass objclass) return false; } -bool -EventTriggerSupportsGrantObjectType(GrantObjectType objtype) -{ - switch (objtype) - { - case ACL_OBJECT_DATABASE: - case ACL_OBJECT_TABLESPACE: - /* no support for global objects */ - return false; - - case ACL_OBJECT_COLUMN: - case ACL_OBJECT_RELATION: - case ACL_OBJECT_SEQUENCE: - case ACL_OBJECT_DOMAIN: - case ACL_OBJECT_FDW: - case ACL_OBJECT_FOREIGN_SERVER: - case ACL_OBJECT_FUNCTION: - case ACL_OBJECT_LANGUAGE: - case ACL_OBJECT_LARGEOBJECT: - case ACL_OBJECT_NAMESPACE: - case ACL_OBJECT_PROCEDURE: - case ACL_OBJECT_ROUTINE: - case ACL_OBJECT_TYPE: - return true; - - /* - * There's intentionally no default: case here; we want the - * compiler to warn if a new ACL class hasn't been handled above. - */ - } - - /* Shouldn't get here, but if we do, say "no support" */ - return false; -} - /* * Prepare event trigger state for a new complete query to run, if necessary; * returns whether this was done. If it was, EventTriggerEndCompleteQuery must @@ -2196,7 +2161,7 @@ pg_event_trigger_ddl_commands(PG_FUNCTION_ARGS) values[i++] = CStringGetTextDatum(cmd->d.grant.istmt->is_grant ? "GRANT" : "REVOKE"); /* object_type */ - values[i++] = CStringGetTextDatum(stringify_grantobjtype( + values[i++] = CStringGetTextDatum(stringify_grant_objtype( cmd->d.grant.istmt->objtype)); /* schema */ nulls[i++] = true; @@ -2219,92 +2184,164 @@ pg_event_trigger_ddl_commands(PG_FUNCTION_ARGS) } /* - * Return the GrantObjectType as a string, as it would appear in GRANT and + * Return the ObjectType as a string, as it would appear in GRANT and * REVOKE commands. 
*/ static const char * -stringify_grantobjtype(GrantObjectType objtype) +stringify_grant_objtype(ObjectType objtype) { switch (objtype) { - case ACL_OBJECT_COLUMN: + case OBJECT_COLUMN: return "COLUMN"; - case ACL_OBJECT_RELATION: + case OBJECT_TABLE: return "TABLE"; - case ACL_OBJECT_SEQUENCE: + case OBJECT_SEQUENCE: return "SEQUENCE"; - case ACL_OBJECT_DATABASE: + case OBJECT_DATABASE: return "DATABASE"; - case ACL_OBJECT_DOMAIN: + case OBJECT_DOMAIN: return "DOMAIN"; - case ACL_OBJECT_FDW: + case OBJECT_FDW: return "FOREIGN DATA WRAPPER"; - case ACL_OBJECT_FOREIGN_SERVER: + case OBJECT_FOREIGN_SERVER: return "FOREIGN SERVER"; - case ACL_OBJECT_FUNCTION: + case OBJECT_FUNCTION: return "FUNCTION"; - case ACL_OBJECT_LANGUAGE: + case OBJECT_LANGUAGE: return "LANGUAGE"; - case ACL_OBJECT_LARGEOBJECT: + case OBJECT_LARGEOBJECT: return "LARGE OBJECT"; - case ACL_OBJECT_NAMESPACE: + case OBJECT_SCHEMA: return "SCHEMA"; - case ACL_OBJECT_PROCEDURE: + case OBJECT_PROCEDURE: return "PROCEDURE"; - case ACL_OBJECT_ROUTINE: + case OBJECT_ROUTINE: return "ROUTINE"; - case ACL_OBJECT_TABLESPACE: + case OBJECT_TABLESPACE: return "TABLESPACE"; - case ACL_OBJECT_TYPE: + case OBJECT_TYPE: return "TYPE"; + /* these currently aren't used */ + case OBJECT_ACCESS_METHOD: + case OBJECT_AGGREGATE: + case OBJECT_AMOP: + case OBJECT_AMPROC: + case OBJECT_ATTRIBUTE: + case OBJECT_CAST: + case OBJECT_COLLATION: + case OBJECT_CONVERSION: + case OBJECT_DEFAULT: + case OBJECT_DEFACL: + case OBJECT_DOMCONSTRAINT: + case OBJECT_EVENT_TRIGGER: + case OBJECT_EXTENSION: + case OBJECT_FOREIGN_TABLE: + case OBJECT_INDEX: + case OBJECT_MATVIEW: + case OBJECT_OPCLASS: + case OBJECT_OPERATOR: + case OBJECT_OPFAMILY: + case OBJECT_POLICY: + case OBJECT_PUBLICATION: + case OBJECT_PUBLICATION_REL: + case OBJECT_ROLE: + case OBJECT_RULE: + case OBJECT_STATISTIC_EXT: + case OBJECT_SUBSCRIPTION: + case OBJECT_TABCONSTRAINT: + case OBJECT_TRANSFORM: + case OBJECT_TRIGGER: + case OBJECT_TSCONFIGURATION: + case OBJECT_TSDICTIONARY: + case OBJECT_TSPARSER: + case OBJECT_TSTEMPLATE: + case OBJECT_USER_MAPPING: + case OBJECT_VIEW: + elog(ERROR, "unsupported object type: %d", (int) objtype); } - elog(ERROR, "unrecognized grant object type: %d", (int) objtype); return "???"; /* keep compiler quiet */ } /* - * Return the GrantObjectType as a string; as above, but use the spelling + * Return the ObjectType as a string; as above, but use the spelling * in ALTER DEFAULT PRIVILEGES commands instead. Generally this is just * the plural. 
*/ static const char * -stringify_adefprivs_objtype(GrantObjectType objtype) +stringify_adefprivs_objtype(ObjectType objtype) { switch (objtype) { - case ACL_OBJECT_COLUMN: + case OBJECT_COLUMN: return "COLUMNS"; - case ACL_OBJECT_RELATION: + case OBJECT_TABLE: return "TABLES"; - case ACL_OBJECT_SEQUENCE: + case OBJECT_SEQUENCE: return "SEQUENCES"; - case ACL_OBJECT_DATABASE: + case OBJECT_DATABASE: return "DATABASES"; - case ACL_OBJECT_DOMAIN: + case OBJECT_DOMAIN: return "DOMAINS"; - case ACL_OBJECT_FDW: + case OBJECT_FDW: return "FOREIGN DATA WRAPPERS"; - case ACL_OBJECT_FOREIGN_SERVER: + case OBJECT_FOREIGN_SERVER: return "FOREIGN SERVERS"; - case ACL_OBJECT_FUNCTION: + case OBJECT_FUNCTION: return "FUNCTIONS"; - case ACL_OBJECT_LANGUAGE: + case OBJECT_LANGUAGE: return "LANGUAGES"; - case ACL_OBJECT_LARGEOBJECT: + case OBJECT_LARGEOBJECT: return "LARGE OBJECTS"; - case ACL_OBJECT_NAMESPACE: + case OBJECT_SCHEMA: return "SCHEMAS"; - case ACL_OBJECT_PROCEDURE: + case OBJECT_PROCEDURE: return "PROCEDURES"; - case ACL_OBJECT_ROUTINE: + case OBJECT_ROUTINE: return "ROUTINES"; - case ACL_OBJECT_TABLESPACE: + case OBJECT_TABLESPACE: return "TABLESPACES"; - case ACL_OBJECT_TYPE: + case OBJECT_TYPE: return "TYPES"; + /* these currently aren't used */ + case OBJECT_ACCESS_METHOD: + case OBJECT_AGGREGATE: + case OBJECT_AMOP: + case OBJECT_AMPROC: + case OBJECT_ATTRIBUTE: + case OBJECT_CAST: + case OBJECT_COLLATION: + case OBJECT_CONVERSION: + case OBJECT_DEFAULT: + case OBJECT_DEFACL: + case OBJECT_DOMCONSTRAINT: + case OBJECT_EVENT_TRIGGER: + case OBJECT_EXTENSION: + case OBJECT_FOREIGN_TABLE: + case OBJECT_INDEX: + case OBJECT_MATVIEW: + case OBJECT_OPCLASS: + case OBJECT_OPERATOR: + case OBJECT_OPFAMILY: + case OBJECT_POLICY: + case OBJECT_PUBLICATION: + case OBJECT_PUBLICATION_REL: + case OBJECT_ROLE: + case OBJECT_RULE: + case OBJECT_STATISTIC_EXT: + case OBJECT_SUBSCRIPTION: + case OBJECT_TABCONSTRAINT: + case OBJECT_TRANSFORM: + case OBJECT_TRIGGER: + case OBJECT_TSCONFIGURATION: + case OBJECT_TSDICTIONARY: + case OBJECT_TSPARSER: + case OBJECT_TSTEMPLATE: + case OBJECT_USER_MAPPING: + case OBJECT_VIEW: + elog(ERROR, "unsupported object type: %d", (int) objtype); } - elog(ERROR, "unrecognized grant object type: %d", (int) objtype); return "???"; /* keep compiler quiet */ } diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index 93e67e8adc..459a227e57 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -115,7 +115,7 @@ typedef struct PrivTarget { GrantTargetType targtype; - GrantObjectType objtype; + ObjectType objtype; List *objs; } PrivTarget; @@ -7027,7 +7027,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_RELATION; + n->objtype = OBJECT_TABLE; n->objs = $1; $$ = n; } @@ -7035,7 +7035,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_RELATION; + n->objtype = OBJECT_TABLE; n->objs = $2; $$ = n; } @@ -7043,7 +7043,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_SEQUENCE; + n->objtype = OBJECT_SEQUENCE; n->objs = $2; $$ = n; } @@ -7051,7 +7051,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_FDW; + n->objtype = OBJECT_FDW; n->objs = $4; $$ = n; } @@ -7059,7 +7059,7 @@ 
privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_FOREIGN_SERVER; + n->objtype = OBJECT_FOREIGN_SERVER; n->objs = $3; $$ = n; } @@ -7067,7 +7067,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_FUNCTION; + n->objtype = OBJECT_FUNCTION; n->objs = $2; $$ = n; } @@ -7075,7 +7075,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_PROCEDURE; + n->objtype = OBJECT_PROCEDURE; n->objs = $2; $$ = n; } @@ -7083,7 +7083,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_ROUTINE; + n->objtype = OBJECT_ROUTINE; n->objs = $2; $$ = n; } @@ -7091,7 +7091,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_DATABASE; + n->objtype = OBJECT_DATABASE; n->objs = $2; $$ = n; } @@ -7099,7 +7099,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_DOMAIN; + n->objtype = OBJECT_DOMAIN; n->objs = $2; $$ = n; } @@ -7107,7 +7107,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_LANGUAGE; + n->objtype = OBJECT_LANGUAGE; n->objs = $2; $$ = n; } @@ -7115,7 +7115,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_LARGEOBJECT; + n->objtype = OBJECT_LARGEOBJECT; n->objs = $3; $$ = n; } @@ -7123,7 +7123,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_NAMESPACE; + n->objtype = OBJECT_SCHEMA; n->objs = $2; $$ = n; } @@ -7131,7 +7131,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_TABLESPACE; + n->objtype = OBJECT_TABLESPACE; n->objs = $2; $$ = n; } @@ -7139,7 +7139,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_OBJECT; - n->objtype = ACL_OBJECT_TYPE; + n->objtype = OBJECT_TYPE; n->objs = $2; $$ = n; } @@ -7147,7 +7147,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_ALL_IN_SCHEMA; - n->objtype = ACL_OBJECT_RELATION; + n->objtype = OBJECT_TABLE; n->objs = $5; $$ = n; } @@ -7155,7 +7155,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_ALL_IN_SCHEMA; - n->objtype = ACL_OBJECT_SEQUENCE; + n->objtype = OBJECT_SEQUENCE; n->objs = $5; $$ = n; } @@ -7163,7 +7163,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_ALL_IN_SCHEMA; - n->objtype = ACL_OBJECT_FUNCTION; + n->objtype = OBJECT_FUNCTION; n->objs = $5; $$ = n; } @@ -7171,7 +7171,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_ALL_IN_SCHEMA; - n->objtype = ACL_OBJECT_PROCEDURE; + n->objtype = OBJECT_PROCEDURE; n->objs = $5; $$ = n; } @@ -7179,7 +7179,7 @@ privilege_target: { PrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget)); n->targtype = ACL_TARGET_ALL_IN_SCHEMA; - 
n->objtype = ACL_OBJECT_ROUTINE; + n->objtype = OBJECT_ROUTINE; n->objs = $5; $$ = n; } @@ -7337,12 +7337,12 @@ DefACLAction: ; defacl_privilege_target: - TABLES { $$ = ACL_OBJECT_RELATION; } - | FUNCTIONS { $$ = ACL_OBJECT_FUNCTION; } - | ROUTINES { $$ = ACL_OBJECT_FUNCTION; } - | SEQUENCES { $$ = ACL_OBJECT_SEQUENCE; } - | TYPES_P { $$ = ACL_OBJECT_TYPE; } - | SCHEMAS { $$ = ACL_OBJECT_NAMESPACE; } + TABLES { $$ = OBJECT_TABLE; } + | FUNCTIONS { $$ = OBJECT_FUNCTION; } + | ROUTINES { $$ = OBJECT_FUNCTION; } + | SEQUENCES { $$ = OBJECT_SEQUENCE; } + | TYPES_P { $$ = OBJECT_TYPE; } + | SCHEMAS { $$ = OBJECT_SCHEMA; } ; diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index 9cccc8d39d..26df660f35 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -828,7 +828,7 @@ standard_ProcessUtility(PlannedStmt *pstmt, { GrantStmt *stmt = (GrantStmt *) parsetree; - if (EventTriggerSupportsGrantObjectType(stmt->objtype)) + if (EventTriggerSupportsObjectType(stmt->objtype)) ProcessUtilitySlow(pstate, pstmt, queryString, context, params, queryEnv, dest, completionTag); diff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c index c11f3dd1cb..0cfc297b65 100644 --- a/src/backend/utils/adt/acl.c +++ b/src/backend/utils/adt/acl.c @@ -745,7 +745,7 @@ hash_aclitem_extended(PG_FUNCTION_ARGS) * absence of any pg_default_acl entry. */ Acl * -acldefault(GrantObjectType objtype, Oid ownerId) +acldefault(ObjectType objtype, Oid ownerId) { AclMode world_default; AclMode owner_default; @@ -755,56 +755,56 @@ acldefault(GrantObjectType objtype, Oid ownerId) switch (objtype) { - case ACL_OBJECT_COLUMN: + case OBJECT_COLUMN: /* by default, columns have no extra privileges */ world_default = ACL_NO_RIGHTS; owner_default = ACL_NO_RIGHTS; break; - case ACL_OBJECT_RELATION: + case OBJECT_TABLE: world_default = ACL_NO_RIGHTS; owner_default = ACL_ALL_RIGHTS_RELATION; break; - case ACL_OBJECT_SEQUENCE: + case OBJECT_SEQUENCE: world_default = ACL_NO_RIGHTS; owner_default = ACL_ALL_RIGHTS_SEQUENCE; break; - case ACL_OBJECT_DATABASE: + case OBJECT_DATABASE: /* for backwards compatibility, grant some rights by default */ world_default = ACL_CREATE_TEMP | ACL_CONNECT; owner_default = ACL_ALL_RIGHTS_DATABASE; break; - case ACL_OBJECT_FUNCTION: + case OBJECT_FUNCTION: /* Grant EXECUTE by default, for now */ world_default = ACL_EXECUTE; owner_default = ACL_ALL_RIGHTS_FUNCTION; break; - case ACL_OBJECT_LANGUAGE: + case OBJECT_LANGUAGE: /* Grant USAGE by default, for now */ world_default = ACL_USAGE; owner_default = ACL_ALL_RIGHTS_LANGUAGE; break; - case ACL_OBJECT_LARGEOBJECT: + case OBJECT_LARGEOBJECT: world_default = ACL_NO_RIGHTS; owner_default = ACL_ALL_RIGHTS_LARGEOBJECT; break; - case ACL_OBJECT_NAMESPACE: + case OBJECT_SCHEMA: world_default = ACL_NO_RIGHTS; - owner_default = ACL_ALL_RIGHTS_NAMESPACE; + owner_default = ACL_ALL_RIGHTS_SCHEMA; break; - case ACL_OBJECT_TABLESPACE: + case OBJECT_TABLESPACE: world_default = ACL_NO_RIGHTS; owner_default = ACL_ALL_RIGHTS_TABLESPACE; break; - case ACL_OBJECT_FDW: + case OBJECT_FDW: world_default = ACL_NO_RIGHTS; owner_default = ACL_ALL_RIGHTS_FDW; break; - case ACL_OBJECT_FOREIGN_SERVER: + case OBJECT_FOREIGN_SERVER: world_default = ACL_NO_RIGHTS; owner_default = ACL_ALL_RIGHTS_FOREIGN_SERVER; break; - case ACL_OBJECT_DOMAIN: - case ACL_OBJECT_TYPE: + case OBJECT_DOMAIN: + case OBJECT_TYPE: world_default = ACL_USAGE; owner_default = ACL_ALL_RIGHTS_TYPE; break; @@ -855,7 +855,7 @@ acldefault(GrantObjectType objtype, Oid ownerId) /* * 
SQL-accessible version of acldefault(). Hackish mapping from "char" type to - * ACL_OBJECT_* values, but it's only used in the information schema, not + * OBJECT_* values, but it's only used in the information schema, not * documented for general use. */ Datum @@ -863,45 +863,45 @@ acldefault_sql(PG_FUNCTION_ARGS) { char objtypec = PG_GETARG_CHAR(0); Oid owner = PG_GETARG_OID(1); - GrantObjectType objtype = 0; + ObjectType objtype = 0; switch (objtypec) { case 'c': - objtype = ACL_OBJECT_COLUMN; + objtype = OBJECT_COLUMN; break; case 'r': - objtype = ACL_OBJECT_RELATION; + objtype = OBJECT_TABLE; break; case 's': - objtype = ACL_OBJECT_SEQUENCE; + objtype = OBJECT_SEQUENCE; break; case 'd': - objtype = ACL_OBJECT_DATABASE; + objtype = OBJECT_DATABASE; break; case 'f': - objtype = ACL_OBJECT_FUNCTION; + objtype = OBJECT_FUNCTION; break; case 'l': - objtype = ACL_OBJECT_LANGUAGE; + objtype = OBJECT_LANGUAGE; break; case 'L': - objtype = ACL_OBJECT_LARGEOBJECT; + objtype = OBJECT_LARGEOBJECT; break; case 'n': - objtype = ACL_OBJECT_NAMESPACE; + objtype = OBJECT_SCHEMA; break; case 't': - objtype = ACL_OBJECT_TABLESPACE; + objtype = OBJECT_TABLESPACE; break; case 'F': - objtype = ACL_OBJECT_FDW; + objtype = OBJECT_FDW; break; case 'S': - objtype = ACL_OBJECT_FOREIGN_SERVER; + objtype = OBJECT_FOREIGN_SERVER; break; case 'T': - objtype = ACL_OBJECT_TYPE; + objtype = OBJECT_TYPE; break; default: elog(ERROR, "unrecognized objtype abbreviation: %c", objtypec); diff --git a/src/include/commands/event_trigger.h b/src/include/commands/event_trigger.h index 8e4142391d..0e1959462e 100644 --- a/src/include/commands/event_trigger.h +++ b/src/include/commands/event_trigger.h @@ -50,7 +50,6 @@ extern void AlterEventTriggerOwner_oid(Oid, Oid newOwnerId); extern bool EventTriggerSupportsObjectType(ObjectType obtype); extern bool EventTriggerSupportsObjectClass(ObjectClass objclass); -extern bool EventTriggerSupportsGrantObjectType(GrantObjectType objtype); extern void EventTriggerDDLCommandStart(Node *parsetree); extern void EventTriggerDDLCommandEnd(Node *parsetree); extern void EventTriggerSQLDrop(Node *parsetree); diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 0296784726..93122adae8 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -1845,31 +1845,12 @@ typedef enum GrantTargetType ACL_TARGET_DEFAULTS /* ALTER DEFAULT PRIVILEGES */ } GrantTargetType; -typedef enum GrantObjectType -{ - ACL_OBJECT_COLUMN, /* column */ - ACL_OBJECT_RELATION, /* table, view */ - ACL_OBJECT_SEQUENCE, /* sequence */ - ACL_OBJECT_DATABASE, /* database */ - ACL_OBJECT_DOMAIN, /* domain */ - ACL_OBJECT_FDW, /* foreign-data wrapper */ - ACL_OBJECT_FOREIGN_SERVER, /* foreign server */ - ACL_OBJECT_FUNCTION, /* function */ - ACL_OBJECT_LANGUAGE, /* procedural language */ - ACL_OBJECT_LARGEOBJECT, /* largeobject */ - ACL_OBJECT_NAMESPACE, /* namespace */ - ACL_OBJECT_PROCEDURE, /* procedure */ - ACL_OBJECT_ROUTINE, /* routine */ - ACL_OBJECT_TABLESPACE, /* tablespace */ - ACL_OBJECT_TYPE /* type */ -} GrantObjectType; - typedef struct GrantStmt { NodeTag type; bool is_grant; /* true = GRANT, false = REVOKE */ GrantTargetType targtype; /* type of the grant target */ - GrantObjectType objtype; /* kind of object being operated on */ + ObjectType objtype; /* kind of object being operated on */ List *objects; /* list of RangeVar nodes, ObjectWithArgs * nodes, or plain names (as Value strings) */ List *privileges; /* list of AccessPriv nodes */ diff --git 
a/src/include/tcop/deparse_utility.h b/src/include/tcop/deparse_utility.h index 9b78748bfd..8459463391 100644 --- a/src/include/tcop/deparse_utility.h +++ b/src/include/tcop/deparse_utility.h @@ -97,7 +97,7 @@ typedef struct CollectedCommand /* ALTER DEFAULT PRIVILEGES */ struct { - GrantObjectType objtype; + ObjectType objtype; } defprivs; } d; } CollectedCommand; diff --git a/src/include/utils/acl.h b/src/include/utils/acl.h index 67c7b2d4ac..7db1606b8f 100644 --- a/src/include/utils/acl.h +++ b/src/include/utils/acl.h @@ -163,7 +163,7 @@ typedef ArrayType Acl; #define ACL_ALL_RIGHTS_FUNCTION (ACL_EXECUTE) #define ACL_ALL_RIGHTS_LANGUAGE (ACL_USAGE) #define ACL_ALL_RIGHTS_LARGEOBJECT (ACL_SELECT|ACL_UPDATE) -#define ACL_ALL_RIGHTS_NAMESPACE (ACL_USAGE|ACL_CREATE) +#define ACL_ALL_RIGHTS_SCHEMA (ACL_USAGE|ACL_CREATE) #define ACL_ALL_RIGHTS_TABLESPACE (ACL_CREATE) #define ACL_ALL_RIGHTS_TYPE (ACL_USAGE) @@ -217,8 +217,8 @@ typedef enum AclObjectKind /* * routines used internally */ -extern Acl *acldefault(GrantObjectType objtype, Oid ownerId); -extern Acl *get_user_default_acl(GrantObjectType objtype, Oid ownerId, +extern Acl *acldefault(ObjectType objtype, Oid ownerId); +extern Acl *get_user_default_acl(ObjectType objtype, Oid ownerId, Oid nsp_oid); extern Acl *aclupdate(const Acl *old_acl, const AclItem *mod_aip, diff --git a/src/include/utils/aclchk_internal.h b/src/include/utils/aclchk_internal.h index 1843f50b5a..f7c44fcd4b 100644 --- a/src/include/utils/aclchk_internal.h +++ b/src/include/utils/aclchk_internal.h @@ -26,12 +26,12 @@ * Note: 'all_privs' and 'privileges' represent object-level privileges only. * There might also be column-level privilege specifications, which are * represented in col_privs (this is a list of untransformed AccessPriv nodes). - * Column privileges are only valid for objtype ACL_OBJECT_RELATION. + * Column privileges are only valid for objtype OBJECT_TABLE. */ typedef struct { bool is_grant; - GrantObjectType objtype; + ObjectType objtype; List *objects; bool all_privs; AclMode privileges; From 8b9e9644dc6a9bd4b7a97950e6212f63880cf18b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 2 Dec 2017 09:26:34 -0500 Subject: [PATCH 0857/1087] Replace AclObjectKind with ObjectType AclObjectKind was basically just another enumeration for object types, and we already have a preferred one for that. It's only used in aclcheck_error. By using ObjectType instead, we can also give some more precise error messages, for example "index" instead of "relation". 
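For illustration, the user-visible effect is captured by the regression
output updated below (the role and table names come from the file_fdw
test suite; the query fails both before and after, only the message
changes):

    SET ROLE regress_no_priv_user;
    SELECT * FROM agg_text ORDER BY a;
    -- before: ERROR: permission denied for relation agg_text
    -- after:  ERROR: permission denied for foreign table agg_text

The message can name the object's actual kind because aclcheck_error()
now receives an ObjectType, obtained via get_relkind_objtype() where
only a pg_class entry is at hand, instead of the coarser ACL_KIND_CLASS.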
Reviewed-by: Michael Paquier --- contrib/dblink/dblink.c | 4 +- contrib/file_fdw/output/file_fdw.source | 2 +- contrib/pg_prewarm/pg_prewarm.c | 2 +- contrib/pgrowlocks/pgrowlocks.c | 2 +- .../test_decoding/expected/permissions.out | 4 +- src/backend/access/brin/brin.c | 4 +- src/backend/access/gin/ginfast.c | 2 +- src/backend/catalog/aclchk.c | 505 ++++++++++++------ src/backend/catalog/namespace.c | 8 +- src/backend/catalog/objectaddress.c | 125 +++-- src/backend/catalog/pg_aggregate.c | 2 +- src/backend/catalog/pg_operator.c | 8 +- src/backend/catalog/pg_proc.c | 2 +- src/backend/catalog/pg_type.c | 2 +- src/backend/commands/aggregatecmds.c | 2 +- src/backend/commands/alter.c | 18 +- src/backend/commands/collationcmds.c | 4 +- src/backend/commands/conversioncmds.c | 4 +- src/backend/commands/dbcommands.c | 16 +- src/backend/commands/event_trigger.c | 4 +- src/backend/commands/extension.c | 8 +- src/backend/commands/foreigncmds.c | 16 +- src/backend/commands/functioncmds.c | 26 +- src/backend/commands/indexcmds.c | 10 +- src/backend/commands/lockcmds.c | 4 +- src/backend/commands/opclasscmds.c | 16 +- src/backend/commands/operatorcmds.c | 10 +- src/backend/commands/policy.c | 2 +- src/backend/commands/proclang.c | 4 +- src/backend/commands/publicationcmds.c | 10 +- src/backend/commands/schemacmds.c | 10 +- src/backend/commands/statscmds.c | 2 +- src/backend/commands/subscriptioncmds.c | 6 +- src/backend/commands/tablecmds.c | 36 +- src/backend/commands/tablespace.c | 10 +- src/backend/commands/trigger.c | 8 +- src/backend/commands/tsearchcmds.c | 8 +- src/backend/commands/typecmds.c | 28 +- src/backend/commands/user.c | 2 +- src/backend/executor/execExpr.c | 4 +- src/backend/executor/execMain.c | 2 +- src/backend/executor/execSRF.c | 2 +- src/backend/executor/nodeAgg.c | 10 +- src/backend/executor/nodeWindowAgg.c | 8 +- src/backend/parser/parse_utilcmd.c | 4 +- src/backend/rewrite/rewriteDefine.c | 6 +- src/backend/tcop/fastpath.c | 4 +- src/backend/utils/adt/dbsize.c | 4 +- src/backend/utils/adt/tid.c | 4 +- src/backend/utils/fmgr/fmgr.c | 4 +- src/include/catalog/objectaddress.h | 4 +- src/include/utils/acl.h | 35 +- src/pl/tcl/pltcl.c | 2 +- .../expected/dummy_seclabel.out | 4 +- src/test/regress/expected/alter_table.out | 18 +- src/test/regress/expected/copy2.out | 6 +- .../regress/expected/create_procedure.out | 2 +- src/test/regress/expected/lock.out | 2 +- src/test/regress/expected/privileges.out | 152 +++--- src/test/regress/expected/publication.out | 2 +- src/test/regress/expected/rowsecurity.out | 18 +- src/test/regress/expected/select_into.out | 6 +- src/test/regress/expected/sequence.out | 2 +- src/test/regress/expected/updatable_views.out | 30 +- src/test/regress/sql/alter_table.sql | 21 + 65 files changed, 742 insertions(+), 550 deletions(-) diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c index a6c897c319..ae7e24ad08 100644 --- a/contrib/dblink/dblink.c +++ b/contrib/dblink/dblink.c @@ -2504,7 +2504,7 @@ get_rel_from_relname(text *relname_text, LOCKMODE lockmode, AclMode aclmode) aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(), aclmode); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); return rel; @@ -2789,7 +2789,7 @@ get_connect_string(const char *servername) /* Check permissions, user must have usage on the server. 
*/ aclresult = pg_foreign_server_aclcheck(serverid, userid, ACL_USAGE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_FOREIGN_SERVER, foreign_server->servername); + aclcheck_error(aclresult, OBJECT_FOREIGN_SERVER, foreign_server->servername); foreach(cell, fdw->options) { diff --git a/contrib/file_fdw/output/file_fdw.source b/contrib/file_fdw/output/file_fdw.source index 709c43ec80..e2d8b87015 100644 --- a/contrib/file_fdw/output/file_fdw.source +++ b/contrib/file_fdw/output/file_fdw.source @@ -393,7 +393,7 @@ SELECT * FROM agg_text ORDER BY a; SET ROLE regress_no_priv_user; SELECT * FROM agg_text ORDER BY a; -- ERROR -ERROR: permission denied for relation agg_text +ERROR: permission denied for foreign table agg_text SET ROLE regress_file_fdw_user; \t on EXPLAIN (VERBOSE, COSTS FALSE) SELECT * FROM agg_text WHERE a > 0; diff --git a/contrib/pg_prewarm/pg_prewarm.c b/contrib/pg_prewarm/pg_prewarm.c index 4117bf6d2e..7f084462b1 100644 --- a/contrib/pg_prewarm/pg_prewarm.c +++ b/contrib/pg_prewarm/pg_prewarm.c @@ -107,7 +107,7 @@ pg_prewarm(PG_FUNCTION_ARGS) rel = relation_open(relOid, AccessShareLock); aclresult = pg_class_aclcheck(relOid, GetUserId(), ACL_SELECT); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, get_rel_name(relOid)); + aclcheck_error(aclresult, get_relkind_objtype(rel->rd_rel->relkind), get_rel_name(relOid)); /* Check that the fork exists. */ RelationOpenSmgr(rel); diff --git a/contrib/pgrowlocks/pgrowlocks.c b/contrib/pgrowlocks/pgrowlocks.c index eabca65bd2..94e051d642 100644 --- a/contrib/pgrowlocks/pgrowlocks.c +++ b/contrib/pgrowlocks/pgrowlocks.c @@ -121,7 +121,7 @@ pgrowlocks(PG_FUNCTION_ARGS) aclresult = is_member_of_role(GetUserId(), DEFAULT_ROLE_STAT_SCAN_TABLES) ? ACLCHECK_OK : ACLCHECK_NO_PRIV; if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); scan = heap_beginscan(rel, GetActiveSnapshot(), 0, NULL); diff --git a/contrib/test_decoding/expected/permissions.out b/contrib/test_decoding/expected/permissions.out index 7175dcd5f6..ed97f81dda 100644 --- a/contrib/test_decoding/expected/permissions.out +++ b/contrib/test_decoding/expected/permissions.out @@ -38,7 +38,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d (1 row) INSERT INTO lr_test VALUES('lr_superuser_init'); -ERROR: permission denied for relation lr_test +ERROR: permission denied for table lr_test SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1'); data ------ @@ -56,7 +56,7 @@ SET ROLE regress_lr_normal; SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding'); ERROR: must be superuser or replication role to use replication slots INSERT INTO lr_test VALUES('lr_superuser_init'); -ERROR: permission denied for relation lr_test +ERROR: permission denied for table lr_test SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1'); ERROR: must be superuser or replication role to use replication slots SELECT pg_drop_replication_slot('regression_slot'); diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c index f54968bfb5..5027872267 100644 --- a/src/backend/access/brin/brin.c +++ b/src/backend/access/brin/brin.c @@ -894,7 +894,7 @@ brin_summarize_range(PG_FUNCTION_ARGS) /* User must own the index (comparable to 
privileges needed for VACUUM) */ if (!pg_class_ownercheck(indexoid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_INDEX, RelationGetRelationName(indexRel)); /* @@ -965,7 +965,7 @@ brin_desummarize_range(PG_FUNCTION_ARGS) /* User must own the index (comparable to privileges needed for VACUUM) */ if (!pg_class_ownercheck(indexoid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_INDEX, RelationGetRelationName(indexRel)); /* diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c index 222a80bc4b..615730b8e5 100644 --- a/src/backend/access/gin/ginfast.c +++ b/src/backend/access/gin/ginfast.c @@ -1035,7 +1035,7 @@ gin_clean_pending_list(PG_FUNCTION_ARGS) /* User must own the index (comparable to privileges needed for VACUUM) */ if (!pg_class_ownercheck(indexoid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_INDEX, RelationGetRelationName(indexRel)); memset(&stats, 0, sizeof(stats)); diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c index 5cfaa510de..de18610a91 100644 --- a/src/backend/catalog/aclchk.c +++ b/src/backend/catalog/aclchk.c @@ -132,9 +132,9 @@ static const char *privilege_to_string(AclMode privilege); static AclMode restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs, AclMode privileges, Oid objectId, Oid grantorId, - AclObjectKind objkind, const char *objname, + ObjectType objtype, const char *objname, AttrNumber att_number, const char *colname); -static AclMode pg_aclmask(AclObjectKind objkind, Oid table_oid, AttrNumber attnum, +static AclMode pg_aclmask(ObjectType objtype, Oid table_oid, AttrNumber attnum, Oid roleid, AclMode mask, AclMaskHow how); static void recordExtensionInitPriv(Oid objoid, Oid classoid, int objsubid, Acl *new_acl); @@ -236,56 +236,56 @@ merge_acl_with_grant(Acl *old_acl, bool is_grant, static AclMode restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs, AclMode privileges, Oid objectId, Oid grantorId, - AclObjectKind objkind, const char *objname, + ObjectType objtype, const char *objname, AttrNumber att_number, const char *colname) { AclMode this_privileges; AclMode whole_mask; - switch (objkind) + switch (objtype) { - case ACL_KIND_COLUMN: + case OBJECT_COLUMN: whole_mask = ACL_ALL_RIGHTS_COLUMN; break; - case ACL_KIND_CLASS: + case OBJECT_TABLE: whole_mask = ACL_ALL_RIGHTS_RELATION; break; - case ACL_KIND_SEQUENCE: + case OBJECT_SEQUENCE: whole_mask = ACL_ALL_RIGHTS_SEQUENCE; break; - case ACL_KIND_DATABASE: + case OBJECT_DATABASE: whole_mask = ACL_ALL_RIGHTS_DATABASE; break; - case ACL_KIND_PROC: + case OBJECT_FUNCTION: whole_mask = ACL_ALL_RIGHTS_FUNCTION; break; - case ACL_KIND_LANGUAGE: + case OBJECT_LANGUAGE: whole_mask = ACL_ALL_RIGHTS_LANGUAGE; break; - case ACL_KIND_LARGEOBJECT: + case OBJECT_LARGEOBJECT: whole_mask = ACL_ALL_RIGHTS_LARGEOBJECT; break; - case ACL_KIND_NAMESPACE: + case OBJECT_SCHEMA: whole_mask = ACL_ALL_RIGHTS_SCHEMA; break; - case ACL_KIND_TABLESPACE: + case OBJECT_TABLESPACE: whole_mask = ACL_ALL_RIGHTS_TABLESPACE; break; - case ACL_KIND_FDW: + case OBJECT_FDW: whole_mask = ACL_ALL_RIGHTS_FDW; break; - case ACL_KIND_FOREIGN_SERVER: + case OBJECT_FOREIGN_SERVER: whole_mask = ACL_ALL_RIGHTS_FOREIGN_SERVER; break; - case ACL_KIND_EVENT_TRIGGER: + case OBJECT_EVENT_TRIGGER: elog(ERROR, "grantable rights not supported for event triggers"); /* 
 			return ACL_NO_RIGHTS;
-		case ACL_KIND_TYPE:
+		case OBJECT_TYPE:
 			whole_mask = ACL_ALL_RIGHTS_TYPE;
 			break;
 		default:
-			elog(ERROR, "unrecognized object kind: %d", objkind);
+			elog(ERROR, "unrecognized object type: %d", objtype);
 			/* not reached, but keep compiler quiet */
 			return ACL_NO_RIGHTS;
 	}
@@ -297,14 +297,14 @@ restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs,
 	 */
 	if (avail_goptions == ACL_NO_RIGHTS)
 	{
-		if (pg_aclmask(objkind, objectId, att_number, grantorId,
+		if (pg_aclmask(objtype, objectId, att_number, grantorId,
 					   whole_mask | ACL_GRANT_OPTION_FOR(whole_mask),
 					   ACLMASK_ANY) == ACL_NO_RIGHTS)
 		{
-			if (objkind == ACL_KIND_COLUMN && colname)
-				aclcheck_error_col(ACLCHECK_NO_PRIV, objkind, objname, colname);
+			if (objtype == OBJECT_COLUMN && colname)
+				aclcheck_error_col(ACLCHECK_NO_PRIV, objtype, objname, colname);
 			else
-				aclcheck_error(ACLCHECK_NO_PRIV, objkind, objname);
+				aclcheck_error(ACLCHECK_NO_PRIV, objtype, objname);
 		}
 	}
@@ -320,7 +320,7 @@ restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs,
 	{
 		if (this_privileges == 0)
 		{
-			if (objkind == ACL_KIND_COLUMN && colname)
+			if (objtype == OBJECT_COLUMN && colname)
 				ereport(WARNING,
 						(errcode(ERRCODE_WARNING_PRIVILEGE_NOT_GRANTED),
 						 errmsg("no privileges were granted for column \"%s\" of relation \"%s\"",
@@ -333,7 +333,7 @@ restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs,
 	}
 	else if (!all_privs && this_privileges != privileges)
 	{
-		if (objkind == ACL_KIND_COLUMN && colname)
+		if (objtype == OBJECT_COLUMN && colname)
 			ereport(WARNING,
 					(errcode(ERRCODE_WARNING_PRIVILEGE_NOT_GRANTED),
 					 errmsg("not all privileges were granted for column \"%s\" of relation \"%s\"",
@@ -349,7 +349,7 @@ restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs,
 	{
 		if (this_privileges == 0)
 		{
-			if (objkind == ACL_KIND_COLUMN && colname)
+			if (objtype == OBJECT_COLUMN && colname)
 				ereport(WARNING,
 						(errcode(ERRCODE_WARNING_PRIVILEGE_NOT_REVOKED),
 						 errmsg("no privileges could be revoked for column \"%s\" of relation \"%s\"",
@@ -362,7 +362,7 @@ restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs,
 	}
 	else if (!all_privs && this_privileges != privileges)
 	{
-		if (objkind == ACL_KIND_COLUMN && colname)
+		if (objtype == OBJECT_COLUMN && colname)
 			ereport(WARNING,
 					(errcode(ERRCODE_WARNING_PRIVILEGE_NOT_REVOKED),
 					 errmsg("not all privileges could be revoked for column \"%s\" of relation \"%s\"",
@@ -1721,7 +1721,7 @@ ExecGrant_Attribute(InternalGrant *istmt, Oid relOid, const char *relname,
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 (col_privileges == ACL_ALL_RIGHTS_COLUMN),
 									 col_privileges,
-									 relOid, grantorId, ACL_KIND_COLUMN,
+									 relOid, grantorId, OBJECT_COLUMN,
 									 relname, attnum,
 									 NameStr(pg_attribute_tuple->attname));
@@ -1976,7 +1976,7 @@ ExecGrant_Relation(InternalGrant *istmt)
 			bool		replaces[Natts_pg_class];
 			int			nnewmembers;
 			Oid		   *newmembers;
-			AclObjectKind aclkind;
+			ObjectType	objtype;
 
 			/* Determine ID to do the grant as, and available grant options */
 			select_best_grantor(GetUserId(), this_privileges,
@@ -1986,10 +1986,10 @@ ExecGrant_Relation(InternalGrant *istmt)
 			switch (pg_class_tuple->relkind)
 			{
 				case RELKIND_SEQUENCE:
-					aclkind = ACL_KIND_SEQUENCE;
+					objtype = OBJECT_SEQUENCE;
 					break;
 				default:
-					aclkind = ACL_KIND_CLASS;
+					objtype = OBJECT_TABLE;
 					break;
 			}
@@ -2000,7 +2000,7 @@ ExecGrant_Relation(InternalGrant *istmt)
 			this_privileges =
 				restrict_and_check_grant(istmt->is_grant,
 										 avail_goptions, istmt->all_privs,
 										 this_privileges,
-										 relOid, grantorId, aclkind,
+										 relOid, grantorId, objtype,
 										 NameStr(pg_class_tuple->relname),
 										 0, NULL);
@@ -2194,7 +2194,7 @@ ExecGrant_Database(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 datId, grantorId, ACL_KIND_DATABASE,
+									 datId, grantorId, OBJECT_DATABASE,
 									 NameStr(pg_database_tuple->datname),
 									 0, NULL);
@@ -2316,7 +2316,7 @@ ExecGrant_Fdw(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 fdwid, grantorId, ACL_KIND_FDW,
+									 fdwid, grantorId, OBJECT_FDW,
 									 NameStr(pg_fdw_tuple->fdwname),
 									 0, NULL);
@@ -2442,7 +2442,7 @@ ExecGrant_ForeignServer(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 srvid, grantorId, ACL_KIND_FOREIGN_SERVER,
+									 srvid, grantorId, OBJECT_FOREIGN_SERVER,
 									 NameStr(pg_server_tuple->srvname),
 									 0, NULL);
@@ -2566,7 +2566,7 @@ ExecGrant_Function(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 funcId, grantorId, ACL_KIND_PROC,
+									 funcId, grantorId, OBJECT_FUNCTION,
 									 NameStr(pg_proc_tuple->proname),
 									 0, NULL);
@@ -2697,7 +2697,7 @@ ExecGrant_Language(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 langId, grantorId, ACL_KIND_LANGUAGE,
+									 langId, grantorId, OBJECT_LANGUAGE,
 									 NameStr(pg_language_tuple->lanname),
 									 0, NULL);
@@ -2836,7 +2836,7 @@ ExecGrant_Largeobject(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 loid, grantorId, ACL_KIND_LARGEOBJECT,
+									 loid, grantorId, OBJECT_LARGEOBJECT,
 									 loname, 0, NULL);
 
 		/*
@@ -2961,7 +2961,7 @@ ExecGrant_Namespace(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 nspid, grantorId, ACL_KIND_NAMESPACE,
+									 nspid, grantorId, OBJECT_SCHEMA,
 									 NameStr(pg_namespace_tuple->nspname),
 									 0, NULL);
@@ -3085,7 +3085,7 @@ ExecGrant_Tablespace(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 tblId, grantorId, ACL_KIND_TABLESPACE,
+									 tblId, grantorId, OBJECT_TABLESPACE,
 									 NameStr(pg_tablespace_tuple->spcname),
 									 0, NULL);
@@ -3219,7 +3219,7 @@ ExecGrant_Type(InternalGrant *istmt)
 		this_privileges =
 			restrict_and_check_grant(istmt->is_grant, avail_goptions,
 									 istmt->all_privs, istmt->privileges,
-									 typId, grantorId, ACL_KIND_TYPE,
+									 typId, grantorId, OBJECT_TYPE,
 									 NameStr(pg_type_tuple->typname),
 									 0, NULL);
@@ -3348,114 +3348,8 @@ privilege_to_string(AclMode privilege)
  * Note: we do not double-quote the %s's below, because many callers
  * supply strings that might be already quoted.
  */
-
-static const char *const no_priv_msg[MAX_ACL_KIND] =
-{
-	/* ACL_KIND_COLUMN */
-	gettext_noop("permission denied for column %s"),
-	/* ACL_KIND_CLASS */
-	gettext_noop("permission denied for relation %s"),
-	/* ACL_KIND_SEQUENCE */
-	gettext_noop("permission denied for sequence %s"),
-	/* ACL_KIND_DATABASE */
-	gettext_noop("permission denied for database %s"),
-	/* ACL_KIND_PROC */
-	gettext_noop("permission denied for function %s"),
-	/* ACL_KIND_OPER */
-	gettext_noop("permission denied for operator %s"),
-	/* ACL_KIND_TYPE */
-	gettext_noop("permission denied for type %s"),
-	/* ACL_KIND_LANGUAGE */
-	gettext_noop("permission denied for language %s"),
-	/* ACL_KIND_LARGEOBJECT */
-	gettext_noop("permission denied for large object %s"),
-	/* ACL_KIND_NAMESPACE */
-	gettext_noop("permission denied for schema %s"),
-	/* ACL_KIND_OPCLASS */
-	gettext_noop("permission denied for operator class %s"),
-	/* ACL_KIND_OPFAMILY */
-	gettext_noop("permission denied for operator family %s"),
-	/* ACL_KIND_COLLATION */
-	gettext_noop("permission denied for collation %s"),
-	/* ACL_KIND_CONVERSION */
-	gettext_noop("permission denied for conversion %s"),
-	/* ACL_KIND_STATISTICS */
-	gettext_noop("permission denied for statistics object %s"),
-	/* ACL_KIND_TABLESPACE */
-	gettext_noop("permission denied for tablespace %s"),
-	/* ACL_KIND_TSDICTIONARY */
-	gettext_noop("permission denied for text search dictionary %s"),
-	/* ACL_KIND_TSCONFIGURATION */
-	gettext_noop("permission denied for text search configuration %s"),
-	/* ACL_KIND_FDW */
-	gettext_noop("permission denied for foreign-data wrapper %s"),
-	/* ACL_KIND_FOREIGN_SERVER */
-	gettext_noop("permission denied for foreign server %s"),
-	/* ACL_KIND_EVENT_TRIGGER */
-	gettext_noop("permission denied for event trigger %s"),
-	/* ACL_KIND_EXTENSION */
-	gettext_noop("permission denied for extension %s"),
-	/* ACL_KIND_PUBLICATION */
-	gettext_noop("permission denied for publication %s"),
-	/* ACL_KIND_SUBSCRIPTION */
-	gettext_noop("permission denied for subscription %s"),
-};
-
-static const char *const not_owner_msg[MAX_ACL_KIND] =
-{
-	/* ACL_KIND_COLUMN */
-	gettext_noop("must be owner of relation %s"),
-	/* ACL_KIND_CLASS */
-	gettext_noop("must be owner of relation %s"),
-	/* ACL_KIND_SEQUENCE */
-	gettext_noop("must be owner of sequence %s"),
-	/* ACL_KIND_DATABASE */
-	gettext_noop("must be owner of database %s"),
-	/* ACL_KIND_PROC */
-	gettext_noop("must be owner of function %s"),
-	/* ACL_KIND_OPER */
-	gettext_noop("must be owner of operator %s"),
-	/* ACL_KIND_TYPE */
-	gettext_noop("must be owner of type %s"),
-	/* ACL_KIND_LANGUAGE */
-	gettext_noop("must be owner of language %s"),
-	/* ACL_KIND_LARGEOBJECT */
-	gettext_noop("must be owner of large object %s"),
-	/* ACL_KIND_NAMESPACE */
-	gettext_noop("must be owner of schema %s"),
-	/* ACL_KIND_OPCLASS */
-	gettext_noop("must be owner of operator class %s"),
-	/* ACL_KIND_OPFAMILY */
-	gettext_noop("must be owner of operator family %s"),
-	/* ACL_KIND_COLLATION */
-	gettext_noop("must be owner of collation %s"),
-	/* ACL_KIND_CONVERSION */
-	gettext_noop("must be owner of conversion %s"),
-	/* ACL_KIND_STATISTICS */
-	gettext_noop("must be owner of statistics object %s"),
-	/* ACL_KIND_TABLESPACE */
-	gettext_noop("must be owner of tablespace %s"),
-	/* ACL_KIND_TSDICTIONARY */
-	gettext_noop("must be owner of text search dictionary %s"),
-	/* ACL_KIND_TSCONFIGURATION */
-	gettext_noop("must be owner of text search configuration %s"),
-	/* ACL_KIND_FDW */
-	gettext_noop("must be owner of foreign-data wrapper %s"),
-	/* ACL_KIND_FOREIGN_SERVER */
-	gettext_noop("must be owner of foreign server %s"),
-	/* ACL_KIND_EVENT_TRIGGER */
-	gettext_noop("must be owner of event trigger %s"),
-	/* ACL_KIND_EXTENSION */
-	gettext_noop("must be owner of extension %s"),
-	/* ACL_KIND_PUBLICATION */
-	gettext_noop("must be owner of publication %s"),
-	/* ACL_KIND_SUBSCRIPTION */
-	gettext_noop("must be owner of subscription %s"),
-};
-
-
 void
-aclcheck_error(AclResult aclerr, AclObjectKind objectkind,
+aclcheck_error(AclResult aclerr, ObjectType objtype,
 			   const char *objectname)
 {
 	switch (aclerr)
@@ -3464,15 +3358,272 @@ aclcheck_error(AclResult aclerr, AclObjectKind objectkind,
 			/* no error, so return to caller */
 			break;
 		case ACLCHECK_NO_PRIV:
-			ereport(ERROR,
-					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-					 errmsg(no_priv_msg[objectkind], objectname)));
-			break;
+			{
+				const char *msg;
+
+				switch (objtype)
+				{
+					case OBJECT_AGGREGATE:
+						msg = gettext_noop("permission denied for aggregate %s");
+						break;
+					case OBJECT_COLLATION:
+						msg = gettext_noop("permission denied for collation %s");
+						break;
+					case OBJECT_COLUMN:
+						msg = gettext_noop("permission denied for column %s");
+						break;
+					case OBJECT_CONVERSION:
+						msg = gettext_noop("permission denied for conversion %s");
+						break;
+					case OBJECT_DATABASE:
+						msg = gettext_noop("permission denied for database %s");
+						break;
+					case OBJECT_DOMAIN:
+						msg = gettext_noop("permission denied for domain %s");
+						break;
+					case OBJECT_EVENT_TRIGGER:
+						msg = gettext_noop("permission denied for event trigger %s");
+						break;
+					case OBJECT_EXTENSION:
+						msg = gettext_noop("permission denied for extension %s");
+						break;
+					case OBJECT_FDW:
+						msg = gettext_noop("permission denied for foreign-data wrapper %s");
+						break;
+					case OBJECT_FOREIGN_SERVER:
+						msg = gettext_noop("permission denied for foreign server %s");
+						break;
+					case OBJECT_FOREIGN_TABLE:
+						msg = gettext_noop("permission denied for foreign table %s");
+						break;
+					case OBJECT_FUNCTION:
+						msg = gettext_noop("permission denied for function %s");
+						break;
+					case OBJECT_INDEX:
+						msg = gettext_noop("permission denied for index %s");
+						break;
+					case OBJECT_LANGUAGE:
+						msg = gettext_noop("permission denied for language %s");
+						break;
+					case OBJECT_LARGEOBJECT:
+						msg = gettext_noop("permission denied for large object %s");
+						break;
+					case OBJECT_MATVIEW:
+						msg = gettext_noop("permission denied for materialized view %s");
+						break;
+					case OBJECT_OPCLASS:
+						msg = gettext_noop("permission denied for operator class %s");
+						break;
+					case OBJECT_OPERATOR:
+						msg = gettext_noop("permission denied for operator %s");
+						break;
+					case OBJECT_OPFAMILY:
+						msg = gettext_noop("permission denied for operator family %s");
+						break;
+					case OBJECT_POLICY:
+						msg = gettext_noop("permission denied for policy %s");
+						break;
+					case OBJECT_PROCEDURE:
+						msg = gettext_noop("permission denied for procedure %s");
+						break;
+					case OBJECT_PUBLICATION:
+						msg = gettext_noop("permission denied for publication %s");
+						break;
+					case OBJECT_ROUTINE:
+						msg = gettext_noop("permission denied for routine %s");
+						break;
+					case OBJECT_SCHEMA:
+						msg = gettext_noop("permission denied for schema %s");
+						break;
+					case OBJECT_SEQUENCE:
+						msg = gettext_noop("permission denied for sequence %s");
+						break;
+					case OBJECT_STATISTIC_EXT:
+						msg = gettext_noop("permission denied for statistics object %s");
+						break;
+					case OBJECT_SUBSCRIPTION:
+						msg = gettext_noop("permission denied for subscription %s");
+						break;
+					case OBJECT_TABLE:
gettext_noop("permission denied for table %s"); + break; + case OBJECT_TABLESPACE: + msg = gettext_noop("permission denied for tablespace %s"); + break; + case OBJECT_TSCONFIGURATION: + msg = gettext_noop("permission denied for text search configuration %s"); + break; + case OBJECT_TSDICTIONARY: + msg = gettext_noop("permission denied for text search dictionary %s"); + break; + case OBJECT_TYPE: + msg = gettext_noop("permission denied for type %s"); + break; + case OBJECT_VIEW: + msg = gettext_noop("permission denied for view %s"); + break; + /* these currently aren't used */ + case OBJECT_ACCESS_METHOD: + case OBJECT_AMOP: + case OBJECT_AMPROC: + case OBJECT_ATTRIBUTE: + case OBJECT_CAST: + case OBJECT_DEFAULT: + case OBJECT_DEFACL: + case OBJECT_DOMCONSTRAINT: + case OBJECT_PUBLICATION_REL: + case OBJECT_ROLE: + case OBJECT_RULE: + case OBJECT_TABCONSTRAINT: + case OBJECT_TRANSFORM: + case OBJECT_TRIGGER: + case OBJECT_TSPARSER: + case OBJECT_TSTEMPLATE: + case OBJECT_USER_MAPPING: + elog(ERROR, "unsupported object type %d", objtype); + msg = "???"; + } + + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg(msg, objectname))); + break; + } case ACLCHECK_NOT_OWNER: - ereport(ERROR, - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), - errmsg(not_owner_msg[objectkind], objectname))); - break; + { + const char *msg; + + switch (objtype) + { + case OBJECT_AGGREGATE: + msg = gettext_noop("must be owner of aggregate %s"); + break; + case OBJECT_COLLATION: + msg = gettext_noop("must be owner of collation %s"); + break; + case OBJECT_CONVERSION: + msg = gettext_noop("must be owner of conversion %s"); + break; + case OBJECT_DATABASE: + msg = gettext_noop("must be owner of database %s"); + break; + case OBJECT_DOMAIN: + msg = gettext_noop("must be owner of domain %s"); + break; + case OBJECT_EVENT_TRIGGER: + msg = gettext_noop("must be owner of event trigger %s"); + break; + case OBJECT_EXTENSION: + msg = gettext_noop("must be owner of extension %s"); + break; + case OBJECT_FDW: + msg = gettext_noop("must be owner of foreign-data wrapper %s"); + break; + case OBJECT_FOREIGN_SERVER: + msg = gettext_noop("must be owner of foreign server %s"); + break; + case OBJECT_FOREIGN_TABLE: + msg = gettext_noop("must be owner of foreign table %s"); + break; + case OBJECT_FUNCTION: + msg = gettext_noop("must be owner of function %s"); + break; + case OBJECT_INDEX: + msg = gettext_noop("must be owner of index %s"); + break; + case OBJECT_LANGUAGE: + msg = gettext_noop("must be owner of language %s"); + break; + case OBJECT_LARGEOBJECT: + msg = gettext_noop("must be owner of large object %s"); + break; + case OBJECT_MATVIEW: + msg = gettext_noop("must be owner of materialized view %s"); + break; + case OBJECT_OPCLASS: + msg = gettext_noop("must be owner of operator class %s"); + break; + case OBJECT_OPERATOR: + msg = gettext_noop("must be owner of operator %s"); + break; + case OBJECT_OPFAMILY: + msg = gettext_noop("must be owner of operator family %s"); + break; + case OBJECT_PROCEDURE: + msg = gettext_noop("must be owner of procedure %s"); + break; + case OBJECT_PUBLICATION: + msg = gettext_noop("must be owner of publication %s"); + break; + case OBJECT_ROUTINE: + msg = gettext_noop("must be owner of routine %s"); + break; + case OBJECT_SEQUENCE: + msg = gettext_noop("must be owner of sequence %s"); + break; + case OBJECT_SUBSCRIPTION: + msg = gettext_noop("must be owner of subscription %s"); + break; + case OBJECT_TABLE: + msg = gettext_noop("must be owner of table %s"); + break; + case OBJECT_TYPE: + 
msg = gettext_noop("must be owner of type %s"); + break; + case OBJECT_VIEW: + msg = gettext_noop("must be owner of view %s"); + break; + case OBJECT_SCHEMA: + msg = gettext_noop("must be owner of schema %s"); + break; + case OBJECT_STATISTIC_EXT: + msg = gettext_noop("must be owner of statistics object %s"); + break; + case OBJECT_TABLESPACE: + msg = gettext_noop("must be owner of tablespace %s"); + break; + case OBJECT_TSCONFIGURATION: + msg = gettext_noop("must be owner of text search configuration %s"); + break; + case OBJECT_TSDICTIONARY: + msg = gettext_noop("must be owner of text search dictionary %s"); + break; + /* + * Special cases: For these, the error message talks about + * "relation", because that's where the ownership is + * attached. See also check_object_ownership(). + */ + case OBJECT_COLUMN: + case OBJECT_POLICY: + case OBJECT_RULE: + case OBJECT_TABCONSTRAINT: + case OBJECT_TRIGGER: + msg = gettext_noop("must be owner of relation %s"); + break; + /* these currently aren't used */ + case OBJECT_ACCESS_METHOD: + case OBJECT_AMOP: + case OBJECT_AMPROC: + case OBJECT_ATTRIBUTE: + case OBJECT_CAST: + case OBJECT_DEFAULT: + case OBJECT_DEFACL: + case OBJECT_DOMCONSTRAINT: + case OBJECT_PUBLICATION_REL: + case OBJECT_ROLE: + case OBJECT_TRANSFORM: + case OBJECT_TSPARSER: + case OBJECT_TSTEMPLATE: + case OBJECT_USER_MAPPING: + elog(ERROR, "unsupported object type %d", objtype); + msg = "???"; + } + + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg(msg, objectname))); + break; + } default: elog(ERROR, "unrecognized AclResult: %d", (int) aclerr); break; @@ -3481,7 +3632,7 @@ aclcheck_error(AclResult aclerr, AclObjectKind objectkind, void -aclcheck_error_col(AclResult aclerr, AclObjectKind objectkind, +aclcheck_error_col(AclResult aclerr, ObjectType objtype, const char *objectname, const char *colname) { switch (aclerr) @@ -3497,9 +3648,7 @@ aclcheck_error_col(AclResult aclerr, AclObjectKind objectkind, break; case ACLCHECK_NOT_OWNER: /* relation msg is OK since columns don't have separate owners */ - ereport(ERROR, - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), - errmsg(not_owner_msg[objectkind], objectname))); + aclcheck_error(aclerr, objtype, objectname); break; default: elog(ERROR, "unrecognized AclResult: %d", (int) aclerr); @@ -3517,7 +3666,7 @@ aclcheck_error_type(AclResult aclerr, Oid typeOid) { Oid element_type = get_element_type(typeOid); - aclcheck_error(aclerr, ACL_KIND_TYPE, format_type_be(element_type ? element_type : typeOid)); + aclcheck_error(aclerr, OBJECT_TYPE, format_type_be(element_type ? 
 }
@@ -3525,48 +3674,48 @@ aclcheck_error_type(AclResult aclerr, Oid typeOid)
  * Relay for the various pg_*_mask routines depending on object kind
  */
 static AclMode
-pg_aclmask(AclObjectKind objkind, Oid table_oid, AttrNumber attnum, Oid roleid,
+pg_aclmask(ObjectType objtype, Oid table_oid, AttrNumber attnum, Oid roleid,
 		   AclMode mask, AclMaskHow how)
 {
-	switch (objkind)
+	switch (objtype)
 	{
-		case ACL_KIND_COLUMN:
+		case OBJECT_COLUMN:
 			return
 				pg_class_aclmask(table_oid, roleid, mask, how) |
 				pg_attribute_aclmask(table_oid, attnum, roleid, mask, how);
-		case ACL_KIND_CLASS:
-		case ACL_KIND_SEQUENCE:
+		case OBJECT_TABLE:
+		case OBJECT_SEQUENCE:
 			return pg_class_aclmask(table_oid, roleid, mask, how);
-		case ACL_KIND_DATABASE:
+		case OBJECT_DATABASE:
 			return pg_database_aclmask(table_oid, roleid, mask, how);
-		case ACL_KIND_PROC:
+		case OBJECT_FUNCTION:
 			return pg_proc_aclmask(table_oid, roleid, mask, how);
-		case ACL_KIND_LANGUAGE:
+		case OBJECT_LANGUAGE:
 			return pg_language_aclmask(table_oid, roleid, mask, how);
-		case ACL_KIND_LARGEOBJECT:
+		case OBJECT_LARGEOBJECT:
 			return pg_largeobject_aclmask_snapshot(table_oid, roleid,
 												   mask, how, NULL);
-		case ACL_KIND_NAMESPACE:
+		case OBJECT_SCHEMA:
 			return pg_namespace_aclmask(table_oid, roleid, mask, how);
-		case ACL_KIND_STATISTICS:
+		case OBJECT_STATISTIC_EXT:
 			elog(ERROR, "grantable rights not supported for statistics objects");
 			/* not reached, but keep compiler quiet */
 			return ACL_NO_RIGHTS;
-		case ACL_KIND_TABLESPACE:
+		case OBJECT_TABLESPACE:
 			return pg_tablespace_aclmask(table_oid, roleid, mask, how);
-		case ACL_KIND_FDW:
+		case OBJECT_FDW:
 			return pg_foreign_data_wrapper_aclmask(table_oid, roleid, mask, how);
-		case ACL_KIND_FOREIGN_SERVER:
+		case OBJECT_FOREIGN_SERVER:
 			return pg_foreign_server_aclmask(table_oid, roleid, mask, how);
-		case ACL_KIND_EVENT_TRIGGER:
+		case OBJECT_EVENT_TRIGGER:
 			elog(ERROR, "grantable rights not supported for event triggers");
 			/* not reached, but keep compiler quiet */
 			return ACL_NO_RIGHTS;
-		case ACL_KIND_TYPE:
+		case OBJECT_TYPE:
 			return pg_type_aclmask(table_oid, roleid, mask, how);
 		default:
-			elog(ERROR, "unrecognized objkind: %d",
-				 (int) objkind);
+			elog(ERROR, "unrecognized objtype: %d",
+				 (int) objtype);
 			/* not reached, but keep compiler quiet */
 			return ACL_NO_RIGHTS;
 	}
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 93c4bbfcb0..65e271a8d1 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -560,7 +560,7 @@ RangeVarGetAndCheckCreationNamespace(RangeVar *relation,
 		/* Check namespace permissions. */
 		aclresult = pg_namespace_aclcheck(nspid, GetUserId(), ACL_CREATE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+			aclcheck_error(aclresult, OBJECT_SCHEMA,
 						   get_namespace_name(nspid));
 
 		if (retry)
@@ -585,7 +585,7 @@ RangeVarGetAndCheckCreationNamespace(RangeVar *relation,
 	if (lockmode != NoLock && OidIsValid(relid))
 	{
 		if (!pg_class_ownercheck(relid, GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS,
+			aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relid)),
 						   relation->relname);
 		if (relid != oldrelid)
 			LockRelationOid(relid, lockmode);
@@ -2874,7 +2874,7 @@ LookupExplicitNamespace(const char *nspname, bool missing_ok)
 
 	aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(), ACL_USAGE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+		aclcheck_error(aclresult, OBJECT_SCHEMA,
 					   nspname);
 	/* Schema search hook for this lookup */
 	InvokeNamespaceSearchHook(namespaceId, true);
@@ -2911,7 +2911,7 @@ LookupCreationNamespace(const char *nspname)
 
 	aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+		aclcheck_error(aclresult, OBJECT_SCHEMA,
 					   nspname);
 
 	return namespaceId;
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index 7576606c1b..570e65affb 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -104,7 +104,7 @@ typedef struct
 	AttrNumber	attnum_namespace;	/* attnum of namespace field */
 	AttrNumber	attnum_owner;	/* attnum of owner field */
 	AttrNumber	attnum_acl;		/* attnum of acl field */
-	AclObjectKind acl_kind;		/* ACL_KIND_* of this object type */
+	ObjectType	objtype;		/* OBJECT_* of this object type */
 	bool		is_nsp_name_unique; /* can the nsp/name combination (or name
 									 * alone, if there's no namespace) be
 									 * considered a unique identifier for an
@@ -146,7 +146,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_collation_collnamespace,
 		Anum_pg_collation_collowner,
 		InvalidAttrNumber,
-		ACL_KIND_COLLATION,
+		OBJECT_COLLATION,
 		true
 	},
 	{
@@ -170,7 +170,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_conversion_connamespace,
 		Anum_pg_conversion_conowner,
 		InvalidAttrNumber,
-		ACL_KIND_CONVERSION,
+		OBJECT_CONVERSION,
 		true
 	},
 	{
@@ -182,7 +182,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_database_datdba,
 		Anum_pg_database_datacl,
-		ACL_KIND_DATABASE,
+		OBJECT_DATABASE,
 		true
 	},
 	{
@@ -194,7 +194,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,		/* extension doesn't belong to extnamespace */
 		Anum_pg_extension_extowner,
 		InvalidAttrNumber,
-		ACL_KIND_EXTENSION,
+		OBJECT_EXTENSION,
 		true
 	},
 	{
@@ -206,7 +206,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_foreign_data_wrapper_fdwowner,
 		Anum_pg_foreign_data_wrapper_fdwacl,
-		ACL_KIND_FDW,
+		OBJECT_FDW,
 		true
 	},
 	{
@@ -218,7 +218,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_foreign_server_srvowner,
 		Anum_pg_foreign_server_srvacl,
-		ACL_KIND_FOREIGN_SERVER,
+		OBJECT_FOREIGN_SERVER,
 		true
 	},
 	{
@@ -230,7 +230,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_proc_pronamespace,
 		Anum_pg_proc_proowner,
 		Anum_pg_proc_proacl,
-		ACL_KIND_PROC,
+		OBJECT_FUNCTION,
 		false
 	},
 	{
@@ -242,7 +242,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_language_lanowner,
 		Anum_pg_language_lanacl,
-		ACL_KIND_LANGUAGE,
+		OBJECT_LANGUAGE,
 		true
 	},
 	{
@@ -254,7 +254,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_largeobject_metadata_lomowner,
 		Anum_pg_largeobject_metadata_lomacl,
-		ACL_KIND_LARGEOBJECT,
+		OBJECT_LARGEOBJECT,
 		false
 	},
 	{
@@ -266,7 +266,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_opclass_opcnamespace,
 		Anum_pg_opclass_opcowner,
 		InvalidAttrNumber,
-		ACL_KIND_OPCLASS,
+		OBJECT_OPCLASS,
 		true
 	},
 	{
@@ -278,7 +278,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_operator_oprnamespace,
 		Anum_pg_operator_oprowner,
 		InvalidAttrNumber,
-		ACL_KIND_OPER,
+		OBJECT_OPERATOR,
 		false
 	},
 	{
@@ -290,7 +290,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_opfamily_opfnamespace,
 		Anum_pg_opfamily_opfowner,
 		InvalidAttrNumber,
-		ACL_KIND_OPFAMILY,
+		OBJECT_OPFAMILY,
 		true
 	},
 	{
@@ -326,7 +326,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_namespace_nspowner,
 		Anum_pg_namespace_nspacl,
-		ACL_KIND_NAMESPACE,
+		OBJECT_SCHEMA,
 		true
 	},
 	{
@@ -338,7 +338,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_class_relnamespace,
 		Anum_pg_class_relowner,
 		Anum_pg_class_relacl,
-		ACL_KIND_CLASS,
+		OBJECT_TABLE,
 		true
 	},
 	{
@@ -350,7 +350,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_tablespace_spcowner,
 		Anum_pg_tablespace_spcacl,
-		ACL_KIND_TABLESPACE,
+		OBJECT_TABLESPACE,
 		true
 	},
 	{
@@ -392,7 +392,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_event_trigger_evtowner,
 		InvalidAttrNumber,
-		ACL_KIND_EVENT_TRIGGER,
+		OBJECT_EVENT_TRIGGER,
 		true
 	},
 	{
@@ -404,7 +404,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_ts_config_cfgnamespace,
 		Anum_pg_ts_config_cfgowner,
 		InvalidAttrNumber,
-		ACL_KIND_TSCONFIGURATION,
+		OBJECT_TSCONFIGURATION,
 		true
 	},
 	{
@@ -416,7 +416,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_ts_dict_dictnamespace,
 		Anum_pg_ts_dict_dictowner,
 		InvalidAttrNumber,
-		ACL_KIND_TSDICTIONARY,
+		OBJECT_TSDICTIONARY,
 		true
 	},
 	{
@@ -452,7 +452,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_type_typnamespace,
 		Anum_pg_type_typowner,
 		Anum_pg_type_typacl,
-		ACL_KIND_TYPE,
+		OBJECT_TYPE,
 		true
 	},
 	{
@@ -464,7 +464,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_publication_pubowner,
 		InvalidAttrNumber,
-		ACL_KIND_PUBLICATION,
+		OBJECT_PUBLICATION,
 		true
 	},
 	{
@@ -476,7 +476,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		InvalidAttrNumber,
 		Anum_pg_subscription_subowner,
 		InvalidAttrNumber,
-		ACL_KIND_SUBSCRIPTION,
+		OBJECT_SUBSCRIPTION,
 		true
 	},
 	{
@@ -488,7 +488,7 @@ static const ObjectPropertyType ObjectProperty[] =
 		Anum_pg_statistic_ext_stxnamespace,
 		Anum_pg_statistic_ext_stxowner,
 		InvalidAttrNumber,		/* no ACL (same as relation) */
-		ACL_KIND_STATISTICS,
+		OBJECT_STATISTIC_EXT,
 		true
 	}
 };
@@ -2242,12 +2242,12 @@ check_object_ownership(Oid roleid, ObjectType objtype, ObjectAddress address,
 		case OBJECT_POLICY:
 		case OBJECT_TABCONSTRAINT:
 			if (!pg_class_ownercheck(RelationGetRelid(relation), roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   RelationGetRelationName(relation));
 			break;
 		case OBJECT_DATABASE:
 			if (!pg_database_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_TYPE:
@@ -2262,62 +2262,62 @@ check_object_ownership(Oid roleid, ObjectType objtype, ObjectAddress address,
 		case OBJECT_PROCEDURE:
 		case OBJECT_ROUTINE:
 			if (!pg_proc_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   NameListToString((castNode(ObjectWithArgs, object))->objname));
 			break;
 		case OBJECT_OPERATOR:
 			if (!pg_oper_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPER,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   NameListToString((castNode(ObjectWithArgs, object))->objname));
 			break;
 		case OBJECT_SCHEMA:
 			if (!pg_namespace_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_NAMESPACE,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_COLLATION:
 			if (!pg_collation_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_COLLATION,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   NameListToString(castNode(List, object)));
 			break;
 		case OBJECT_CONVERSION:
 			if (!pg_conversion_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CONVERSION,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   NameListToString(castNode(List, object)));
 			break;
 		case OBJECT_EXTENSION:
 			if (!pg_extension_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_EXTENSION,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_FDW:
 			if (!pg_foreign_data_wrapper_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_FDW,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_FOREIGN_SERVER:
 			if (!pg_foreign_server_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_FOREIGN_SERVER,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_EVENT_TRIGGER:
 			if (!pg_event_trigger_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_EVENT_TRIGGER,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_LANGUAGE:
 			if (!pg_language_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_LANGUAGE,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_OPCLASS:
 			if (!pg_opclass_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPCLASS,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   NameListToString(castNode(List, object)));
 			break;
 		case OBJECT_OPFAMILY:
 			if (!pg_opfamily_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPFAMILY,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   NameListToString(castNode(List, object)));
 			break;
 		case OBJECT_LARGEOBJECT:
@@ -2347,12 +2347,12 @@ check_object_ownership(Oid roleid, ObjectType objtype, ObjectAddress address,
 			break;
 		case OBJECT_PUBLICATION:
 			if (!pg_publication_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PUBLICATION,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_SUBSCRIPTION:
 			if (!pg_subscription_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_SUBSCRIPTION,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_TRANSFORM:
@@ -2366,17 +2366,17 @@ check_object_ownership(Oid roleid, ObjectType objtype, ObjectAddress address,
 			break;
 		case OBJECT_TABLESPACE:
 			if (!pg_tablespace_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_TABLESPACE,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   strVal((Value *) object));
 			break;
 		case OBJECT_TSDICTIONARY:
 			if (!pg_ts_dict_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_TSDICTIONARY,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   NameListToString(castNode(List, object)));
 			break;
 		case OBJECT_TSCONFIGURATION:
 			if (!pg_ts_config_ownercheck(address.objectId, roleid))
-				aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_TSCONFIGURATION,
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 							   NameListToString(castNode(List, object)));
 			break;
 		case OBJECT_ROLE:
@@ -2540,12 +2540,22 @@ get_object_attnum_acl(Oid class_id)
 	return prop->attnum_acl;
 }
 
-AclObjectKind
-get_object_aclkind(Oid class_id)
+ObjectType
+get_object_type(Oid class_id, Oid object_id)
 {
 	const ObjectPropertyType *prop = get_object_property_data(class_id);
 
-	return prop->acl_kind;
+	if (prop->objtype == OBJECT_TABLE)
+	{
+		/*
+		 * If the property data says it's a table, dig a little deeper to get
+		 * the real relation kind, so that callers can produce more precise
+		 * error messages.
+		 */
+		return get_relkind_objtype(get_rel_relkind(object_id));
+	}
+	else
+		return prop->objtype;
 }
 
 bool
@@ -5099,3 +5109,28 @@ strlist_to_textarray(List *list)
 
 	return arr;
 }
+
+ObjectType
+get_relkind_objtype(char relkind)
+{
+	switch (relkind)
+	{
+		case RELKIND_RELATION:
+		case RELKIND_PARTITIONED_TABLE:
+			return OBJECT_TABLE;
+		case RELKIND_INDEX:
+			return OBJECT_INDEX;
+		case RELKIND_SEQUENCE:
+			return OBJECT_SEQUENCE;
+		case RELKIND_VIEW:
+			return OBJECT_VIEW;
+		case RELKIND_MATVIEW:
+			return OBJECT_MATVIEW;
+		case RELKIND_FOREIGN_TABLE:
+			return OBJECT_FOREIGN_TABLE;
+		/* other relkinds are not supported here because they don't map to OBJECT_* values */
+		default:
+			elog(ERROR, "unexpected relkind: %d", relkind);
+			return 0;
+	}
+}
diff --git a/src/backend/catalog/pg_aggregate.c b/src/backend/catalog/pg_aggregate.c
index e801c1ed5c..f14ea26fcb 100644
--- a/src/backend/catalog/pg_aggregate.c
+++ b/src/backend/catalog/pg_aggregate.c
@@ -865,7 +865,7 @@ lookup_agg_function(List *fnName,
 	/* Check aggregate creator has permission to call the function */
 	aclresult = pg_proc_aclcheck(fnOid, GetUserId(), ACL_EXECUTE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(fnOid));
+		aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(fnOid));
 
 	return fnOid;
 }
diff --git a/src/backend/catalog/pg_operator.c b/src/backend/catalog/pg_operator.c
index c96f336b7a..051602d820 100644
--- a/src/backend/catalog/pg_operator.c
+++ b/src/backend/catalog/pg_operator.c
@@ -425,7 +425,7 @@ OperatorCreate(const char *operatorName,
 	 */
 	if (OidIsValid(operatorObjectId) &&
 		!pg_oper_ownercheck(operatorObjectId, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPER,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_OPERATOR,
 					   operatorName);
 
 	/*
@@ -445,7 +445,7 @@ OperatorCreate(const char *operatorName,
 		/* Permission check: must own other operator */
 		if (OidIsValid(commutatorId) &&
 			!pg_oper_ownercheck(commutatorId, GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPER,
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_OPERATOR,
 						   NameListToString(commutatorName));
 
 	/*
@@ -470,7 +470,7 @@ OperatorCreate(const char *operatorName,
 		/* Permission check: must own other operator */
 		if (OidIsValid(negatorId) &&
 			!pg_oper_ownercheck(negatorId, GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPER,
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_OPERATOR,
 						   NameListToString(negatorName));
 	}
 	else
@@ -618,7 +618,7 @@ get_other_operator(List *otherOp, Oid otherLeftTypeId, Oid otherRightTypeId,
 	aclresult = pg_namespace_aclcheck(otherNamespace, GetUserId(),
 									  ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+		aclcheck_error(aclresult, OBJECT_SCHEMA,
 					   get_namespace_name(otherNamespace));
 
 	other_oid = OperatorShellMake(otherName,
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index dd674113ba..b59fadbf76 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -400,7 +400,7 @@ ProcedureCreate(const char *procedureName,
 					 errmsg("function \"%s\" already exists with same argument types",
 							procedureName)));
 		if (!pg_proc_ownercheck(HeapTupleGetOid(oldtup), proowner))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC,
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,
 						   procedureName);
 
 		/*
diff --git a/src/backend/catalog/pg_type.c b/src/backend/catalog/pg_type.c
index fd63ea8cd1..660ac5b7c9 100644
--- a/src/backend/catalog/pg_type.c
+++ b/src/backend/catalog/pg_type.c
@@ -413,7 +413,7 @@ TypeCreate(Oid newTypeOid,
 		 * shell type must have been created by same owner
 		 */
 		if (((Form_pg_type) GETSTRUCT(tup))->typowner != ownerId)
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_TYPE, typeName);
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_TYPE, typeName);
 
 		/* trouble if caller wanted to force the OID */
 		if (OidIsValid(newTypeOid))
diff --git a/src/backend/commands/aggregatecmds.c b/src/backend/commands/aggregatecmds.c
index 15378a9d4d..1779ba7bcb 100644
--- a/src/backend/commands/aggregatecmds.c
+++ b/src/backend/commands/aggregatecmds.c
@@ -103,7 +103,7 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List
 	/* Check we have creation rights in target namespace */
 	aclresult = pg_namespace_aclcheck(aggNamespace, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+		aclcheck_error(aclresult, OBJECT_SCHEMA,
 					   get_namespace_name(aggNamespace));
 
 	/* Deconstruct the output of the aggr_args grammar production */
diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c
index 3995c5ef3d..0d63866fb0 100644
--- a/src/backend/commands/alter.c
+++ b/src/backend/commands/alter.c
@@ -171,7 +171,7 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 	AttrNumber	Anum_name = get_object_attnum_name(classId);
 	AttrNumber	Anum_namespace = get_object_attnum_namespace(classId);
 	AttrNumber	Anum_owner = get_object_attnum_owner(classId);
-	AclObjectKind acl_kind = get_object_aclkind(classId);
+	ObjectType	objtype = get_object_type(classId, objectId);
 	HeapTuple	oldtup;
 	HeapTuple	newtup;
 	Datum		datum;
@@ -223,7 +223,7 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 		ownerId = DatumGetObjectId(datum);
 
 		if (!has_privs_of_role(GetUserId(), DatumGetObjectId(ownerId)))
-			aclcheck_error(ACLCHECK_NOT_OWNER, acl_kind, old_name);
+			aclcheck_error(ACLCHECK_NOT_OWNER, objtype, old_name);
 
 		/* User must have CREATE privilege on the namespace */
 		if (OidIsValid(namespaceId))
@@ -231,7 +231,7 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 			aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),
 											  ACL_CREATE);
 			if (aclresult != ACLCHECK_OK)
-				aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+				aclcheck_error(aclresult, OBJECT_SCHEMA,
 							   get_namespace_name(namespaceId));
 		}
 	}
@@ -663,7 +663,7 @@ AlterObjectNamespace_internal(Relation rel, Oid objid, Oid nspOid)
 	AttrNumber	Anum_name = get_object_attnum_name(classId);
 	AttrNumber	Anum_namespace = get_object_attnum_namespace(classId);
 	AttrNumber	Anum_owner = get_object_attnum_owner(classId);
-	AclObjectKind acl_kind = get_object_aclkind(classId);
+	ObjectType	objtype = get_object_type(classId, objid);
 	Oid			oldNspOid;
 	Datum		name,
 				namespace;
@@ -719,13 +719,13 @@ AlterObjectNamespace_internal(Relation rel, Oid objid, Oid nspOid)
 		ownerId = DatumGetObjectId(owner);
 
 		if (!has_privs_of_role(GetUserId(), ownerId))
-			aclcheck_error(ACLCHECK_NOT_OWNER, acl_kind,
+			aclcheck_error(ACLCHECK_NOT_OWNER, objtype,
 						   NameStr(*(DatumGetName(name))));
 
 		/* User must have CREATE privilege on new namespace */
 		aclresult = pg_namespace_aclcheck(nspOid, GetUserId(), ACL_CREATE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+			aclcheck_error(aclresult, OBJECT_SCHEMA,
 						   get_namespace_name(nspOid));
 	}
@@ -942,7 +942,7 @@ AlterObjectOwner_internal(Relation rel, Oid objectId, Oid new_ownerId)
 		/* Superusers can bypass permission checks */
 		if (!superuser())
 		{
-			AclObjectKind aclkind = get_object_aclkind(classId);
+			ObjectType	objtype = get_object_type(classId, objectId);
 
 			/* must be owner */
 			if (!has_privs_of_role(GetUserId(), old_ownerId))
@@ -963,7 +963,7 @@ AlterObjectOwner_internal(Relation rel, Oid objectId, Oid new_ownerId)
 							 HeapTupleGetOid(oldtup));
 					objname = namebuf;
 				}
-				aclcheck_error(ACLCHECK_NOT_OWNER, aclkind, objname);
+				aclcheck_error(ACLCHECK_NOT_OWNER, objtype, objname);
 			}
 			/* Must be able to become new owner */
 			check_is_member_of_role(GetUserId(), new_ownerId);
@@ -976,7 +976,7 @@ AlterObjectOwner_internal(Relation rel, Oid objectId, Oid new_ownerId)
 				aclresult = pg_namespace_aclcheck(namespaceId, new_ownerId,
 												  ACL_CREATE);
 				if (aclresult != ACLCHECK_OK)
-					aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+					aclcheck_error(aclresult, OBJECT_SCHEMA,
 								   get_namespace_name(namespaceId));
 			}
 		}
diff --git a/src/backend/commands/collationcmds.c b/src/backend/commands/collationcmds.c
index 6c6877395f..fdfb3dcd2c 100644
--- a/src/backend/commands/collationcmds.c
+++ b/src/backend/commands/collationcmds.c
@@ -74,7 +74,7 @@ DefineCollation(ParseState *pstate, List *names, List *parameters, bool if_not_e
 	aclresult = pg_namespace_aclcheck(collNamespace, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+		aclcheck_error(aclresult, OBJECT_SCHEMA,
 					   get_namespace_name(collNamespace));
 
 	foreach(pl, parameters)
@@ -278,7 +278,7 @@ AlterCollation(AlterCollationStmt *stmt)
 	collOid = get_collation_oid(stmt->collname, false);
 
 	if (!pg_collation_ownercheck(collOid, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_COLLATION,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_COLLATION,
 					   NameListToString(stmt->collname));
 
 	tup = SearchSysCacheCopy1(COLLOID, ObjectIdGetDatum(collOid));
diff --git a/src/backend/commands/conversioncmds.c b/src/backend/commands/conversioncmds.c
index 294143c522..01a02484a2 100644
--- a/src/backend/commands/conversioncmds.c
+++ b/src/backend/commands/conversioncmds.c
@@ -55,7 +55,7 @@ CreateConversionCommand(CreateConversionStmt *stmt)
 	/* Check we have creation rights in target namespace */
 	aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+		aclcheck_error(aclresult, OBJECT_SCHEMA,
 					   get_namespace_name(namespaceId));
 
 	/* Check the encoding names */
@@ -90,7 +90,7 @@ CreateConversionCommand(CreateConversionStmt *stmt)
 	/* Check we have EXECUTE rights for the function */
 	aclresult = pg_proc_aclcheck(funcoid, GetUserId(), ACL_EXECUTE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_PROC,
+		aclcheck_error(aclresult, OBJECT_FUNCTION,
 					   NameListToString(func_name));
 
 	/*
diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c
index 0b111fc5cf..d2020d07cf 100644
--- a/src/backend/commands/dbcommands.c
+++ b/src/backend/commands/dbcommands.c
@@ -422,7 +422,7 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)
 		aclresult = pg_tablespace_aclcheck(dst_deftablespace, GetUserId(),
 										   ACL_CREATE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_TABLESPACE,
+			aclcheck_error(aclresult, OBJECT_TABLESPACE,
 						   tablespacename);
 
 		/* pg_global must never be the default tablespace */
@@ -822,7 +822,7 @@ dropdb(const char *dbname, bool missing_ok)
 	 * Permission checks
 	 */
 	if (!pg_database_ownercheck(db_id, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
 					   dbname);
 
 	/* DROP hook for the database being removed */
@@ -997,7 +997,7 @@ RenameDatabase(const char *oldname, const char *newname)
 
 	/* must be owner */
 	if (!pg_database_ownercheck(db_id, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
 					   oldname);
 
 	/* must have createdb rights */
@@ -1112,7 +1112,7 @@ movedb(const char *dbname, const char *tblspcname)
 	 * Permission checks
 	 */
 	if (!pg_database_ownercheck(db_id, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
 					   dbname);
 
 	/*
@@ -1134,7 +1134,7 @@ movedb(const char *dbname, const char *tblspcname)
 	aclresult = pg_tablespace_aclcheck(dst_tblspcoid, GetUserId(),
 									   ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_TABLESPACE,
+		aclcheck_error(aclresult, OBJECT_TABLESPACE,
 					   tblspcname);
 
 	/*
@@ -1515,7 +1515,7 @@ AlterDatabase(ParseState *pstate, AlterDatabaseStmt *stmt, bool isTopLevel)
 	dboid = HeapTupleGetOid(tuple);
 
 	if (!pg_database_ownercheck(HeapTupleGetOid(tuple), GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
 					   stmt->dbname);
 
 	/*
@@ -1583,7 +1583,7 @@ AlterDatabaseSet(AlterDatabaseSetStmt *stmt)
 	shdepLockAndCheckObject(DatabaseRelationId, datid);
 
 	if (!pg_database_ownercheck(datid, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
 					   stmt->dbname);
 
 	AlterSetting(datid, InvalidOid, stmt->setstmt);
@@ -1646,7 +1646,7 @@ AlterDatabaseOwner(const char *dbname, Oid newOwnerId)
 
 		/* Otherwise, must be owner of the existing object */
 		if (!pg_database_ownercheck(HeapTupleGetOid(tuple), GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
 						   dbname);
 
 		/* Must be able to become new owner */
diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index 82c7b6a0ba..549c7ea51d 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -519,7 +519,7 @@ AlterEventTrigger(AlterEventTrigStmt *stmt)
 	trigoid = HeapTupleGetOid(tup);
 
 	if (!pg_event_trigger_ownercheck(trigoid, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_EVENT_TRIGGER,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_EVENT_TRIGGER,
 					   stmt->trigname);
 
 	/* tuple is a copy, so we can modify it below */
@@ -610,7 +610,7 @@ AlterEventTriggerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 		return;
 
 	if (!pg_event_trigger_ownercheck(HeapTupleGetOid(tup), GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_EVENT_TRIGGER,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_EVENT_TRIGGER,
 					   NameStr(form->evtname));
 
 	/* New owner must be a superuser */
diff --git a/src/backend/commands/extension.c b/src/backend/commands/extension.c
index c0c933583f..2e4538146d 100644
--- a/src/backend/commands/extension.c
+++ b/src/backend/commands/extension.c
@@ -2704,13 +2704,13 @@ AlterExtensionNamespace(const char *extensionName, const char *newschema, Oid *o
 	 * check ownership of the individual member objects ...
 	 */
 	if (!pg_extension_ownercheck(extensionOid, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_EXTENSION,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_EXTENSION,
 					   extensionName);
 
 	/* Permission check: must have creation rights in target namespace */
 	aclresult = pg_namespace_aclcheck(nspOid, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_NAMESPACE, newschema);
+		aclcheck_error(aclresult, OBJECT_SCHEMA, newschema);
 
 	/*
 	 * If the schema is currently a member of the extension, disallow moving
@@ -2924,7 +2924,7 @@ ExecAlterExtensionStmt(ParseState *pstate, AlterExtensionStmt *stmt)
 
 	/* Permission check: must own extension */
 	if (!pg_extension_ownercheck(extensionOid, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_EXTENSION,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_EXTENSION,
 					   stmt->extname);
 
 	/*
@@ -3182,7 +3182,7 @@ ExecAlterExtensionContentsStmt(AlterExtensionContentsStmt *stmt,
 
 	/* Permission check: must own extension */
 	if (!pg_extension_ownercheck(extension.objectId, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_EXTENSION,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_EXTENSION,
 					   stmt->extname);
 
 	/*
diff --git a/src/backend/commands/foreigncmds.c b/src/backend/commands/foreigncmds.c
index 44f3da9b51..5c53aeeaeb 100644
--- a/src/backend/commands/foreigncmds.c
+++ b/src/backend/commands/foreigncmds.c
@@ -358,7 +358,7 @@ AlterForeignServerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 
 		/* Must be owner */
 		if (!pg_foreign_server_ownercheck(srvId, GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_FOREIGN_SERVER,
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FOREIGN_SERVER,
 						   NameStr(form->srvname));
 
 		/* Must be able to become new owner */
@@ -370,7 +370,7 @@ AlterForeignServerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 		{
 			ForeignDataWrapper *fdw = GetForeignDataWrapper(form->srvfdw);
 
-			aclcheck_error(aclresult, ACL_KIND_FDW, fdw->fdwname);
+			aclcheck_error(aclresult, OBJECT_FDW, fdw->fdwname);
 		}
 	}
 
@@ -907,7 +907,7 @@ CreateForeignServer(CreateForeignServerStmt *stmt)
 	aclresult = pg_foreign_data_wrapper_aclcheck(fdw->fdwid, ownerId, ACL_USAGE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_FDW, fdw->fdwname);
+		aclcheck_error(aclresult, OBJECT_FDW, fdw->fdwname);
 
 	/*
 	 * Insert tuple into pg_foreign_server.
@@ -1010,7 +1010,7 @@ AlterForeignServer(AlterForeignServerStmt *stmt)
 	 * Only owner or a superuser can ALTER a SERVER.
 	 */
 	if (!pg_foreign_server_ownercheck(srvId, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_FOREIGN_SERVER,
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FOREIGN_SERVER,
 					   stmt->servername);
 
 	memset(repl_val, 0, sizeof(repl_val));
@@ -1119,10 +1119,10 @@ user_mapping_ddl_aclcheck(Oid umuserid, Oid serverid, const char *servername)
 
 			aclresult = pg_foreign_server_aclcheck(serverid, curuserid, ACL_USAGE);
 			if (aclresult != ACLCHECK_OK)
-				aclcheck_error(aclresult, ACL_KIND_FOREIGN_SERVER, servername);
+				aclcheck_error(aclresult, OBJECT_FOREIGN_SERVER, servername);
 		}
 		else
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_FOREIGN_SERVER,
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FOREIGN_SERVER,
 						   servername);
 	}
 }
@@ -1477,7 +1477,7 @@ CreateForeignTable(CreateForeignTableStmt *stmt, Oid relid)
 	server = GetForeignServerByName(stmt->servername, false);
 	aclresult = pg_foreign_server_aclcheck(server->serverid, ownerId, ACL_USAGE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_FOREIGN_SERVER, server->servername);
+		aclcheck_error(aclresult, OBJECT_FOREIGN_SERVER, server->servername);
 
 	fdw = GetForeignDataWrapper(server->fdwid);
 
@@ -1536,7 +1536,7 @@ ImportForeignSchema(ImportForeignSchemaStmt *stmt)
 	server = GetForeignServerByName(stmt->server_name, false);
 	aclresult = pg_foreign_server_aclcheck(server->serverid, GetUserId(), ACL_USAGE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_FOREIGN_SERVER, server->servername);
+		aclcheck_error(aclresult, OBJECT_FOREIGN_SERVER, server->servername);
 
 	/* Check that the schema exists and we have CREATE permissions on it */
 	(void) LookupCreationNamespace(stmt->local_schema);
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index 12ab33f418..ea08c3237c 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -146,7 +146,7 @@ compute_return_type(TypeName *returnType, Oid languageOid,
 
 		aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(), ACL_CREATE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+			aclcheck_error(aclresult, OBJECT_SCHEMA,
 						   get_namespace_name(namespaceId));
 		address = TypeShellMake(typname, namespaceId, GetUserId());
 		rettype = address.objectId;
@@ -953,7 +953,7 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt)
 	/* Check we have creation rights in target namespace */
 	aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(), ACL_CREATE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+		aclcheck_error(aclresult, OBJECT_SCHEMA,
 					   get_namespace_name(namespaceId));
 
 	/* default attributes */
@@ -995,14 +995,14 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt)
 
 		aclresult = pg_language_aclcheck(languageOid, GetUserId(), ACL_USAGE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_LANGUAGE,
+			aclcheck_error(aclresult, OBJECT_LANGUAGE,
 						   NameStr(languageStruct->lanname));
 	}
 	else
 	{
 		/* if untrusted language, must be superuser */
 		if (!superuser())
-			aclcheck_error(ACLCHECK_NO_PRIV, ACL_KIND_LANGUAGE,
+			aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
 						   NameStr(languageStruct->lanname));
 	}
 
@@ -1254,7 +1254,7 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
 
 	/* Permission check: must own function */
 	if (!pg_proc_ownercheck(funcOid, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC,
+		aclcheck_error(ACLCHECK_NOT_OWNER, stmt->objtype,
 					   NameListToString(stmt->func->objname));
 
 	if (procForm->proisagg)
@@ -1911,7 +1911,7 @@ CreateTransform(CreateTransformStmt *stmt)
 	aclresult = pg_language_aclcheck(langid, GetUserId(), ACL_USAGE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_LANGUAGE, stmt->lang);
+		aclcheck_error(aclresult, OBJECT_LANGUAGE, stmt->lang);
 
 	/*
 	 * Get the functions
@@ -1921,11 +1921,11 @@ CreateTransform(CreateTransformStmt *stmt)
 		fromsqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->fromsql, false);
 
 		if (!pg_proc_ownercheck(fromsqlfuncid, GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, NameListToString(stmt->fromsql->objname));
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(stmt->fromsql->objname));
 
 		aclresult = pg_proc_aclcheck(fromsqlfuncid, GetUserId(), ACL_EXECUTE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_PROC, NameListToString(stmt->fromsql->objname));
+			aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->fromsql->objname));
 
 		tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(fromsqlfuncid));
 		if (!HeapTupleIsValid(tuple))
@@ -1947,11 +1947,11 @@ CreateTransform(CreateTransformStmt *stmt)
 		tosqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->tosql, false);
 
 		if (!pg_proc_ownercheck(tosqlfuncid, GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, NameListToString(stmt->tosql->objname));
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(stmt->tosql->objname));
 
 		aclresult = pg_proc_aclcheck(tosqlfuncid, GetUserId(), ACL_EXECUTE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_PROC, NameListToString(stmt->tosql->objname));
+			aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->tosql->objname));
 
 		tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(tosqlfuncid));
 		if (!HeapTupleIsValid(tuple))
@@ -2209,14 +2209,14 @@ ExecuteDoStmt(DoStmt *stmt)
 
 		aclresult = pg_language_aclcheck(codeblock->langOid, GetUserId(),
 										 ACL_USAGE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_LANGUAGE,
+			aclcheck_error(aclresult, OBJECT_LANGUAGE,
 						   NameStr(languageStruct->lanname));
 	}
 	else
 	{
 		/* if untrusted language, must be superuser */
 		if (!superuser())
-			aclcheck_error(ACLCHECK_NO_PRIV, ACL_KIND_LANGUAGE,
+			aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_LANGUAGE,
 						   NameStr(languageStruct->lanname));
 	}
 
@@ -2270,7 +2270,7 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt)
 
 	aclresult = pg_proc_aclcheck(fexpr->funcid, GetUserId(), ACL_EXECUTE);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(fexpr->funcid));
+		aclcheck_error(aclresult, OBJECT_PROCEDURE, get_func_name(fexpr->funcid));
 	InvokeFunctionExecuteHook(fexpr->funcid);
 
 	nargs = list_length(fexpr->args);
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 8118a39a7b..a9461a4b06 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -474,7 +474,7 @@ DefineIndex(Oid relationId,
 		aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),
 										  ACL_CREATE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_NAMESPACE,
+			aclcheck_error(aclresult, OBJECT_SCHEMA,
 						   get_namespace_name(namespaceId));
 	}
 
@@ -501,7 +501,7 @@ DefineIndex(Oid relationId,
 		aclresult = pg_tablespace_aclcheck(tablespaceId, GetUserId(),
 										   ACL_CREATE);
 		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_TABLESPACE,
+			aclcheck_error(aclresult, OBJECT_TABLESPACE,
 						   get_tablespace_name(tablespaceId));
 	}
 
@@ -2048,7 +2048,7 @@ RangeVarCallbackForReindexIndex(const RangeVar *relation,
 
 	/* Check permissions */
 	if (!pg_class_ownercheck(relId, GetUserId()))
-		aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, relation->relname);
+		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_INDEX, relation->relname);
 
 	/* Lock heap before index to avoid deadlock. */
 	if (relId != oldRelId)
@@ -2127,7 +2127,7 @@ ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind,
 		objectOid = get_namespace_oid(objectName, false);
 
 		if (!pg_namespace_ownercheck(objectOid, GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_NAMESPACE,
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SCHEMA,
 						   objectName);
 	}
 	else
@@ -2139,7 +2139,7 @@ ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind,
 					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 					 errmsg("can only reindex the currently open database")));
 		if (!pg_database_ownercheck(objectOid, GetUserId()))
-			aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
 						   objectName);
 	}
diff --git a/src/backend/commands/lockcmds.c b/src/backend/commands/lockcmds.c
index c587edc209..6479dcb31b 100644
--- a/src/backend/commands/lockcmds.c
+++ b/src/backend/commands/lockcmds.c
@@ -96,7 +96,7 @@ RangeVarCallbackForLockTable(const RangeVar *rv, Oid relid, Oid oldrelid,
 	/* Check permissions. */
 	aclresult = LockTableAclCheck(relid, lockmode);
 	if (aclresult != ACLCHECK_OK)
-		aclcheck_error(aclresult, ACL_KIND_CLASS, rv->relname);
+		aclcheck_error(aclresult, get_relkind_objtype(get_rel_relkind(relid)), rv->relname);
 }
 
 /*
@@ -127,7 +127,7 @@ LockTableRecurse(Oid reloid, LOCKMODE lockmode, bool nowait)
 			if (!relname)
 				continue;		/* child concurrently dropped, just skip it */
-			aclcheck_error(aclresult, ACL_KIND_CLASS, relname);
+			aclcheck_error(aclresult, get_relkind_objtype(get_rel_relkind(childreloid)), relname);
 		}
 
 		/* We have enough rights to lock the relation; do so. */
*/ diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c index 7fb7b3976c..1768140a83 100644 --- a/src/backend/commands/opclasscmds.c +++ b/src/backend/commands/opclasscmds.c @@ -353,7 +353,7 @@ DefineOpClass(CreateOpClassStmt *stmt) /* Check we have creation rights in target namespace */ aclresult = pg_namespace_aclcheck(namespaceoid, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(namespaceoid)); /* Get necessary info about access method */ @@ -497,11 +497,11 @@ DefineOpClass(CreateOpClassStmt *stmt) /* XXX this is unnecessary given the superuser check above */ /* Caller must own operator and its underlying function */ if (!pg_oper_ownercheck(operOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPER, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_OPERATOR, get_opname(operOid)); funcOid = get_opcode(operOid); if (!pg_proc_ownercheck(funcOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, get_func_name(funcOid)); #endif @@ -525,7 +525,7 @@ DefineOpClass(CreateOpClassStmt *stmt) /* XXX this is unnecessary given the superuser check above */ /* Caller must own function */ if (!pg_proc_ownercheck(funcOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, get_func_name(funcOid)); #endif @@ -730,7 +730,7 @@ DefineOpFamily(CreateOpFamilyStmt *stmt) /* Check we have creation rights in target namespace */ aclresult = pg_namespace_aclcheck(namespaceoid, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(namespaceoid)); /* Get access method OID, throwing an error if it doesn't exist. 
*/ @@ -871,11 +871,11 @@ AlterOpFamilyAdd(AlterOpFamilyStmt *stmt, Oid amoid, Oid opfamilyoid, /* XXX this is unnecessary given the superuser check above */ /* Caller must own operator and its underlying function */ if (!pg_oper_ownercheck(operOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPER, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_OPERATOR, get_opname(operOid)); funcOid = get_opcode(operOid); if (!pg_proc_ownercheck(funcOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, get_func_name(funcOid)); #endif @@ -899,7 +899,7 @@ AlterOpFamilyAdd(AlterOpFamilyStmt *stmt, Oid amoid, Oid opfamilyoid, /* XXX this is unnecessary given the superuser check above */ /* Caller must own function */ if (!pg_proc_ownercheck(funcOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, get_func_name(funcOid)); #endif diff --git a/src/backend/commands/operatorcmds.c b/src/backend/commands/operatorcmds.c index 81ef532184..35404ac39a 100644 --- a/src/backend/commands/operatorcmds.c +++ b/src/backend/commands/operatorcmds.c @@ -95,7 +95,7 @@ DefineOperator(List *names, List *parameters) /* Check we have creation rights in target namespace */ aclresult = pg_namespace_aclcheck(oprNamespace, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(oprNamespace)); /* @@ -215,7 +215,7 @@ DefineOperator(List *names, List *parameters) */ aclresult = pg_proc_aclcheck(functionOid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(functionName)); rettype = get_func_rettype(functionOid); @@ -281,7 +281,7 @@ ValidateRestrictionEstimator(List *restrictionName) /* Require EXECUTE rights for the estimator */ aclresult = pg_proc_aclcheck(restrictionOid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(restrictionName)); return restrictionOid; @@ -327,7 +327,7 @@ ValidateJoinEstimator(List *joinName) /* Require EXECUTE rights for the estimator */ aclresult = pg_proc_aclcheck(joinOid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(joinName)); return joinOid; @@ -457,7 +457,7 @@ AlterOperator(AlterOperatorStmt *stmt) /* Check permissions. Must be owner. */ if (!pg_oper_ownercheck(oprId, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_OPER, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_OPERATOR, NameStr(oprForm->oprname)); /* diff --git a/src/backend/commands/policy.c b/src/backend/commands/policy.c index 396d4c3449..280a14a101 100644 --- a/src/backend/commands/policy.c +++ b/src/backend/commands/policy.c @@ -78,7 +78,7 @@ RangeVarCallbackForPolicy(const RangeVar *rv, Oid relid, Oid oldrelid, /* Must own relation. */ if (!pg_class_ownercheck(relid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, rv->relname); + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relid)), rv->relname); /* No system table modifications unless explicitly allowed. 
*/ if (!allowSystemTableMods && IsSystemClass(relid, classform)) diff --git a/src/backend/commands/proclang.c b/src/backend/commands/proclang.c index 9783a162d7..2ec96242ae 100644 --- a/src/backend/commands/proclang.c +++ b/src/backend/commands/proclang.c @@ -97,7 +97,7 @@ CreateProceduralLanguage(CreatePLangStmt *stmt) errmsg("must be superuser to create procedural language \"%s\"", stmt->plname))); if (!pg_database_ownercheck(MyDatabaseId, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE, get_database_name(MyDatabaseId)); } @@ -366,7 +366,7 @@ create_proc_lang(const char *languageName, bool replace, (errcode(ERRCODE_DUPLICATE_OBJECT), errmsg("language \"%s\" already exists", languageName))); if (!pg_language_ownercheck(HeapTupleGetOid(oldtup), languageOwner)) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_LANGUAGE, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_LANGUAGE, languageName); /* diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c index 91338e8d49..9c5aa9ebc2 100644 --- a/src/backend/commands/publicationcmds.c +++ b/src/backend/commands/publicationcmds.c @@ -150,7 +150,7 @@ CreatePublication(CreatePublicationStmt *stmt) /* must have CREATE privilege on database */ aclresult = pg_database_aclcheck(MyDatabaseId, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_DATABASE, + aclcheck_error(aclresult, OBJECT_DATABASE, get_database_name(MyDatabaseId)); /* FOR ALL TABLES requires superuser */ @@ -403,7 +403,7 @@ AlterPublication(AlterPublicationStmt *stmt) /* must be owner */ if (!pg_publication_ownercheck(HeapTupleGetOid(tup), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PUBLICATION, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION, stmt->pubname); if (stmt->options) @@ -582,7 +582,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists, /* Must be owner of the table or superuser. 
*/ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); obj = publication_add_relation(pubid, rel, if_not_exists); @@ -649,7 +649,7 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId) /* Must be owner */ if (!pg_publication_ownercheck(HeapTupleGetOid(tup), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PUBLICATION, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION, NameStr(form->pubname)); /* Must be able to become new owner */ @@ -658,7 +658,7 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId) /* New owner must have CREATE privilege on database */ aclresult = pg_database_aclcheck(MyDatabaseId, newOwnerId, ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_DATABASE, + aclcheck_error(aclresult, OBJECT_DATABASE, get_database_name(MyDatabaseId)); if (form->puballtables && !superuser_arg(newOwnerId)) diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c index 16b6e8f111..dc6cb46e4e 100644 --- a/src/backend/commands/schemacmds.c +++ b/src/backend/commands/schemacmds.c @@ -94,7 +94,7 @@ CreateSchemaCommand(CreateSchemaStmt *stmt, const char *queryString, */ aclresult = pg_database_aclcheck(MyDatabaseId, saved_uid, ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_DATABASE, + aclcheck_error(aclresult, OBJECT_DATABASE, get_database_name(MyDatabaseId)); check_is_member_of_role(saved_uid, owner_uid); @@ -265,13 +265,13 @@ RenameSchema(const char *oldname, const char *newname) /* must be owner */ if (!pg_namespace_ownercheck(HeapTupleGetOid(tup), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_NAMESPACE, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SCHEMA, oldname); /* must have CREATE privilege on database */ aclresult = pg_database_aclcheck(MyDatabaseId, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_DATABASE, + aclcheck_error(aclresult, OBJECT_DATABASE, get_database_name(MyDatabaseId)); if (!allowSystemTableMods && IsReservedName(newname)) @@ -373,7 +373,7 @@ AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId) /* Otherwise, must be owner of the existing object */ if (!pg_namespace_ownercheck(HeapTupleGetOid(tup), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_NAMESPACE, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SCHEMA, NameStr(nspForm->nspname)); /* Must be able to become new owner */ @@ -391,7 +391,7 @@ AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId) aclresult = pg_database_aclcheck(MyDatabaseId, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_DATABASE, + aclcheck_error(aclresult, OBJECT_DATABASE, get_database_name(MyDatabaseId)); memset(repl_null, false, sizeof(repl_null)); diff --git a/src/backend/commands/statscmds.c b/src/backend/commands/statscmds.c index 5773a8fd03..e90a14273b 100644 --- a/src/backend/commands/statscmds.c +++ b/src/backend/commands/statscmds.c @@ -141,7 +141,7 @@ CreateStatistics(CreateStatsStmt *stmt) /* You must own the relation to create stats on it */ if (!pg_class_ownercheck(RelationGetRelid(rel), stxowner)) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); } diff --git 
a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c index bd25ec2401..9de5969302 100644 --- a/src/backend/commands/subscriptioncmds.c +++ b/src/backend/commands/subscriptioncmds.c @@ -635,7 +635,7 @@ AlterSubscription(AlterSubscriptionStmt *stmt) /* must be owner */ if (!pg_subscription_ownercheck(HeapTupleGetOid(tup), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_SUBSCRIPTION, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SUBSCRIPTION, stmt->subname); subid = HeapTupleGetOid(tup); @@ -854,7 +854,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel) /* must be owner */ if (!pg_subscription_ownercheck(subid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_SUBSCRIPTION, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SUBSCRIPTION, stmt->subname); /* DROP hook for the subscription being removed */ @@ -1022,7 +1022,7 @@ AlterSubscriptionOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId) return; if (!pg_subscription_ownercheck(HeapTupleGetOid(tup), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_SUBSCRIPTION, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SUBSCRIPTION, NameStr(form->subname)); /* New owner must be a superuser */ diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 57ee112653..2e768dd5e4 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -601,7 +601,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, aclresult = pg_tablespace_aclcheck(tablespaceId, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_TABLESPACE, + aclcheck_error(aclresult, OBJECT_TABLESPACE, get_tablespace_name(tablespaceId)); } @@ -1255,7 +1255,7 @@ RangeVarCallbackForDropRelation(const RangeVar *rel, Oid relOid, Oid oldRelOid, /* Allow DROP to either table owner or schema owner */ if (!pg_class_ownercheck(relOid, GetUserId()) && !pg_namespace_ownercheck(classform->relnamespace, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relOid)), rel->relname); if (!allowSystemTableMods && IsSystemClass(relOid, classform)) @@ -1438,7 +1438,7 @@ ExecuteTruncate(TruncateStmt *stmt) /* This check must match AlterSequence! */ if (!pg_class_ownercheck(seq_relid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_SEQUENCE, RelationGetRelationName(seq_rel)); seq_relids = lappend_oid(seq_relids, seq_relid); @@ -1626,7 +1626,7 @@ truncate_check_rel(Relation rel) aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(), ACL_TRUNCATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); if (!allowSystemTableMods && IsSystemRelation(rel)) @@ -1912,7 +1912,7 @@ MergeAttributes(List *schema, List *supers, char relpersistence, * demand that creator of a child table own the parent. */ if (!pg_class_ownercheck(RelationGetRelid(relation), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(relation->rd_rel->relkind), RelationGetRelationName(relation)); /* @@ -2600,7 +2600,7 @@ renameatt_check(Oid myrelid, Form_pg_class classform, bool recursing) * permissions checking. only the owner of a class can change its schema. 
*/ if (!pg_class_ownercheck(myrelid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(myrelid)), NameStr(classform->relname)); if (!allowSystemTableMods && IsSystemClass(myrelid, classform)) ereport(ERROR, @@ -4837,7 +4837,7 @@ ATSimplePermissions(Relation rel, int allowed_targets) /* Permissions checks */ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); if (!allowSystemTableMods && IsSystemRelation(rel)) @@ -6283,7 +6283,7 @@ ATPrepSetStatistics(Relation rel, const char *colName, int16 colNum, Node *newVa /* Permissions checks */ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); } @@ -8209,7 +8209,7 @@ checkFkeyPermissions(Relation rel, int16 *attnums, int natts) aclresult = pg_attribute_aclcheck(RelationGetRelid(rel), attnums[i], roleid, ACL_REFERENCES); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); } } @@ -10129,7 +10129,7 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock /* Otherwise, must be owner of the existing object */ if (!pg_class_ownercheck(relationOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relationOid)), RelationGetRelationName(target_rel)); /* Must be able to become new owner */ @@ -10139,7 +10139,7 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock aclresult = pg_namespace_aclcheck(namespaceOid, newOwnerId, ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(namespaceOid)); } } @@ -10437,7 +10437,7 @@ ATPrepSetTableSpace(AlteredTableInfo *tab, Relation rel, const char *tablespacen aclresult = pg_tablespace_aclcheck(tablespaceId, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_TABLESPACE, tablespacename); + aclcheck_error(aclresult, OBJECT_TABLESPACE, tablespacename); } /* Save info for Phase 3 to do the real work */ @@ -10872,7 +10872,7 @@ AlterTableMoveAll(AlterTableMoveAllStmt *stmt) aclresult = pg_tablespace_aclcheck(new_tablespaceoid, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_TABLESPACE, + aclcheck_error(aclresult, OBJECT_TABLESPACE, get_tablespace_name(new_tablespaceoid)); } @@ -10944,7 +10944,7 @@ AlterTableMoveAll(AlterTableMoveAllStmt *stmt) * Caller must be considered an owner on the table to move it. 
*/ if (!pg_class_ownercheck(relOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relOid)), NameStr(relForm->relname)); if (stmt->nowait && @@ -13162,7 +13162,7 @@ RangeVarCallbackOwnsTable(const RangeVar *relation, /* Check permissions */ if (!pg_class_ownercheck(relId, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, relation->relname); + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relId)), relation->relname); } /* @@ -13184,7 +13184,7 @@ RangeVarCallbackOwnsRelation(const RangeVar *relation, elog(ERROR, "cache lookup failed for relation %u", relId); if (!pg_class_ownercheck(relId, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relId)), relation->relname); if (!allowSystemTableMods && @@ -13220,7 +13220,7 @@ RangeVarCallbackForAlterRelation(const RangeVar *rv, Oid relid, Oid oldrelid, /* Must own relation. */ if (!pg_class_ownercheck(relid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, rv->relname); + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relid)), rv->relname); /* No system table modifications unless explicitly allowed. */ if (!allowSystemTableMods && IsSystemClass(relid, classform)) @@ -13240,7 +13240,7 @@ RangeVarCallbackForAlterRelation(const RangeVar *rv, Oid relid, Oid oldrelid, aclresult = pg_namespace_aclcheck(classform->relnamespace, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(classform->relnamespace)); reltype = ((RenameStmt *) stmt)->renameType; } diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c index 8cb834c271..5c450caa4e 100644 --- a/src/backend/commands/tablespace.c +++ b/src/backend/commands/tablespace.c @@ -444,13 +444,13 @@ DropTableSpace(DropTableSpaceStmt *stmt) /* Must be tablespace owner */ if (!pg_tablespace_ownercheck(tablespaceoid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_TABLESPACE, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_TABLESPACE, tablespacename); /* Disallow drop of the standard tablespaces, even by superuser */ if (tablespaceoid == GLOBALTABLESPACE_OID || tablespaceoid == DEFAULTTABLESPACE_OID) - aclcheck_error(ACLCHECK_NO_PRIV, ACL_KIND_TABLESPACE, + aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_TABLESPACE, tablespacename); /* DROP hook for the tablespace being removed */ @@ -941,7 +941,7 @@ RenameTableSpace(const char *oldname, const char *newname) /* Must be owner */ if (!pg_tablespace_ownercheck(HeapTupleGetOid(newtuple), GetUserId())) - aclcheck_error(ACLCHECK_NO_PRIV, ACL_KIND_TABLESPACE, oldname); + aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_TABLESPACE, oldname); /* Validate new name */ if (!allowSystemTableMods && IsReservedName(newname)) @@ -1017,7 +1017,7 @@ AlterTableSpaceOptions(AlterTableSpaceOptionsStmt *stmt) /* Must be owner of the existing object */ if (!pg_tablespace_ownercheck(HeapTupleGetOid(tup), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_TABLESPACE, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_TABLESPACE, stmt->tablespacename); /* Generate new proposed spcoptions (text array) */ @@ -1232,7 +1232,7 @@ check_temp_tablespaces(char **newval, void **extra, GucSource source) if (aclresult != ACLCHECK_OK) { if (source >= PGC_S_INTERACTIVE) - aclcheck_error(aclresult, 
ACL_KIND_TABLESPACE, curname); + aclcheck_error(aclresult, OBJECT_TABLESPACE, curname); continue; } diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index 1c488c338a..b45c30161f 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -284,7 +284,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString, aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(), ACL_TRIGGER); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); if (OidIsValid(constrrelid)) @@ -292,7 +292,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString, aclresult = pg_class_aclcheck(constrrelid, GetUserId(), ACL_TRIGGER); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(get_rel_relkind(constrrelid)), get_rel_name(constrrelid)); } } @@ -592,7 +592,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString, { aclresult = pg_proc_aclcheck(funcoid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, NameListToString(stmt->funcname)); } funcrettype = get_func_rettype(funcoid); @@ -1422,7 +1422,7 @@ RangeVarCallbackForRenameTrigger(const RangeVar *rv, Oid relid, Oid oldrelid, /* you must own the table to rename one of its triggers */ if (!pg_class_ownercheck(relid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, rv->relname); + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relid)), rv->relname); if (!allowSystemTableMods && IsSystemClass(relid, form)) ereport(ERROR, (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c index bf06ed9318..bdf3857ce4 100644 --- a/src/backend/commands/tsearchcmds.c +++ b/src/backend/commands/tsearchcmds.c @@ -428,7 +428,7 @@ DefineTSDictionary(List *names, List *parameters) /* Check we have creation rights in target namespace */ aclresult = pg_namespace_aclcheck(namespaceoid, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(namespaceoid)); /* @@ -549,7 +549,7 @@ AlterTSDictionary(AlterTSDictionaryStmt *stmt) /* must be owner */ if (!pg_ts_dict_ownercheck(dictId, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_TSDICTIONARY, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_TSDICTIONARY, NameListToString(stmt->dictname)); /* deserialize the existing set of options */ @@ -980,7 +980,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied) /* Check we have creation rights in target namespace */ aclresult = pg_namespace_aclcheck(namespaceoid, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(namespaceoid)); /* @@ -1189,7 +1189,7 @@ AlterTSConfiguration(AlterTSConfigurationStmt *stmt) /* must be owner */ if (!pg_ts_config_ownercheck(HeapTupleGetOid(tup), GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_TSCONFIGURATION, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_TSCONFIGURATION, NameListToString(stmt->cfgname)); relMap = heap_open(TSConfigMapRelationId, RowExclusiveLock); diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c 
index a40b3cf752..74eb430f96 100644 --- a/src/backend/commands/typecmds.c +++ b/src/backend/commands/typecmds.c @@ -190,7 +190,7 @@ DefineType(ParseState *pstate, List *names, List *parameters) /* Check we have creation rights in target namespace */ aclresult = pg_namespace_aclcheck(typeNamespace, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(typeNamespace)); #endif @@ -526,25 +526,25 @@ DefineType(ParseState *pstate, List *names, List *parameters) #ifdef NOT_USED /* XXX this is unnecessary given the superuser check above */ if (inputOid && !pg_proc_ownercheck(inputOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(inputName)); if (outputOid && !pg_proc_ownercheck(outputOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(outputName)); if (receiveOid && !pg_proc_ownercheck(receiveOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(receiveName)); if (sendOid && !pg_proc_ownercheck(sendOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(sendName)); if (typmodinOid && !pg_proc_ownercheck(typmodinOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(typmodinName)); if (typmodoutOid && !pg_proc_ownercheck(typmodoutOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(typmodoutName)); if (analyzeOid && !pg_proc_ownercheck(analyzeOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_PROC, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION, NameListToString(analyzeName)); #endif @@ -772,7 +772,7 @@ DefineDomain(CreateDomainStmt *stmt) aclresult = pg_namespace_aclcheck(domainNamespace, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(domainNamespace)); /* @@ -1171,7 +1171,7 @@ DefineEnum(CreateEnumStmt *stmt) /* Check we have creation rights in target namespace */ aclresult = pg_namespace_aclcheck(enumNamespace, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(enumNamespace)); /* @@ -1398,7 +1398,7 @@ DefineRange(CreateRangeStmt *stmt) /* Check we have creation rights in target namespace */ aclresult = pg_namespace_aclcheck(typeNamespace, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(typeNamespace)); /* @@ -2042,7 +2042,7 @@ findRangeCanonicalFunction(List *procname, Oid typeOid) /* Also, range type's creator must have permission to call function */ aclresult = pg_proc_aclcheck(procOid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(procOid)); + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(procOid)); return procOid; } @@ -2085,7 +2085,7 @@ findRangeSubtypeDiffFunction(List *procname, Oid subtype) /* Also, range type's 
creator must have permission to call function */ aclresult = pg_proc_aclcheck(procOid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(procOid)); + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(procOid)); return procOid; } @@ -3380,7 +3380,7 @@ AlterTypeOwner(List *names, Oid newOwnerId, ObjectType objecttype) newOwnerId, ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(typTup->typnamespace)); } diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c index d559c29d24..71c5caa41b 100644 --- a/src/backend/commands/user.c +++ b/src/backend/commands/user.c @@ -939,7 +939,7 @@ AlterRoleSet(AlterRoleSetStmt *stmt) * ALTER DATABASE ... SET, so use the same permission check. */ if (!pg_database_ownercheck(databaseid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE, + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE, stmt->database); } } diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index 794573803d..883a29f0a7 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -955,7 +955,7 @@ ExecInitExprRec(Expr *node, ExprState *state, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(opexpr->opfuncid)); InvokeFunctionExecuteHook(opexpr->opfuncid); @@ -2162,7 +2162,7 @@ ExecInitFunc(ExprEvalStep *scratch, Expr *node, List *args, Oid funcid, /* Check permission to call function */ aclresult = pg_proc_aclcheck(funcid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(funcid)); + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(funcid)); InvokeFunctionExecuteHook(funcid); /* diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 16822e962a..410921cc40 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -579,7 +579,7 @@ ExecCheckRTPerms(List *rangeTable, bool ereport_on_violation) { Assert(rte->rtekind == RTE_RELATION); if (ereport_on_violation) - aclcheck_error(ACLCHECK_NO_PRIV, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NO_PRIV, get_relkind_objtype(get_rel_relkind(rte->relid)), get_rel_name(rte->relid)); return false; } diff --git a/src/backend/executor/execSRF.c b/src/backend/executor/execSRF.c index 8e521288a9..b97b8d797e 100644 --- a/src/backend/executor/execSRF.c +++ b/src/backend/executor/execSRF.c @@ -682,7 +682,7 @@ init_sexpr(Oid foid, Oid input_collation, Expr *node, /* Check permission to call function */ aclresult = pg_proc_aclcheck(foid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(foid)); + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(foid)); InvokeFunctionExecuteHook(foid); /* diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 061acad80f..ec62e7fb38 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -2548,7 +2548,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aclresult = pg_proc_aclcheck(aggref->aggfnoid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_AGGREGATE, get_func_name(aggref->aggfnoid)); InvokeFunctionExecuteHook(aggref->aggfnoid); 
@@ -2638,7 +2638,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aclresult = pg_proc_aclcheck(transfn_oid, aggOwner, ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(transfn_oid)); InvokeFunctionExecuteHook(transfn_oid); if (OidIsValid(finalfn_oid)) @@ -2646,7 +2646,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aclresult = pg_proc_aclcheck(finalfn_oid, aggOwner, ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(finalfn_oid)); InvokeFunctionExecuteHook(finalfn_oid); } @@ -2655,7 +2655,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aclresult = pg_proc_aclcheck(serialfn_oid, aggOwner, ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(serialfn_oid)); InvokeFunctionExecuteHook(serialfn_oid); } @@ -2664,7 +2664,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) aclresult = pg_proc_aclcheck(deserialfn_oid, aggOwner, ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(deserialfn_oid)); InvokeFunctionExecuteHook(deserialfn_oid); } diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index 5492fb3369..0afb1c83d3 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -1928,7 +1928,7 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) aclresult = pg_proc_aclcheck(wfunc->winfnoid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(wfunc->winfnoid)); InvokeFunctionExecuteHook(wfunc->winfnoid); @@ -2189,7 +2189,7 @@ initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc, aclresult = pg_proc_aclcheck(transfn_oid, aggOwner, ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(transfn_oid)); InvokeFunctionExecuteHook(transfn_oid); @@ -2198,7 +2198,7 @@ initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc, aclresult = pg_proc_aclcheck(invtransfn_oid, aggOwner, ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(invtransfn_oid)); InvokeFunctionExecuteHook(invtransfn_oid); } @@ -2208,7 +2208,7 @@ initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc, aclresult = pg_proc_aclcheck(finalfn_oid, aggOwner, ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(finalfn_oid)); InvokeFunctionExecuteHook(finalfn_oid); } diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 90bb356df8..5afb363096 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -951,7 +951,7 @@ transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_cla aclresult = pg_type_aclcheck(relation->rd_rel->reltype, GetUserId(), ACL_USAGE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_TYPE, + aclcheck_error(aclresult, OBJECT_TYPE, RelationGetRelationName(relation)); } else @@ -959,7 +959,7 @@ transformTableLikeClause(CreateStmtContext *cxt, 
TableLikeClause *table_like_cla aclresult = pg_class_aclcheck(RelationGetRelid(relation), GetUserId(), ACL_SELECT); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(relation->rd_rel->relkind), RelationGetRelationName(relation)); } diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c index abf45fa82b..f3a9b639a8 100644 --- a/src/backend/rewrite/rewriteDefine.c +++ b/src/backend/rewrite/rewriteDefine.c @@ -276,7 +276,7 @@ DefineQueryRewrite(const char *rulename, * Check user has permission to apply rules to this relation. */ if (!pg_class_ownercheck(event_relid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(event_relation->rd_rel->relkind), RelationGetRelationName(event_relation)); /* @@ -864,7 +864,7 @@ EnableDisableRule(Relation rel, const char *rulename, eventRelationOid = ((Form_pg_rewrite) GETSTRUCT(ruletup))->ev_class; Assert(eventRelationOid == owningRel); if (!pg_class_ownercheck(eventRelationOid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(eventRelationOid)), get_rel_name(eventRelationOid)); /* @@ -927,7 +927,7 @@ RangeVarCallbackForRenameRule(const RangeVar *rv, Oid relid, Oid oldrelid, /* you must own the table to rename one of its rules */ if (!pg_class_ownercheck(relid, GetUserId())) - aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, rv->relname); + aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(get_rel_relkind(relid)), rv->relname); ReleaseSysCache(tuple); } diff --git a/src/backend/tcop/fastpath.c b/src/backend/tcop/fastpath.c index 3e531977db..d16ba5ec92 100644 --- a/src/backend/tcop/fastpath.c +++ b/src/backend/tcop/fastpath.c @@ -315,13 +315,13 @@ HandleFunctionRequest(StringInfo msgBuf) */ aclresult = pg_namespace_aclcheck(fip->namespace, GetUserId(), ACL_USAGE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_NAMESPACE, + aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(fip->namespace)); InvokeNamespaceSearchHook(fip->namespace, true); aclresult = pg_proc_aclcheck(fid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(fid)); InvokeFunctionExecuteHook(fid); diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c index 60f417f1b2..834a10485f 100644 --- a/src/backend/utils/adt/dbsize.c +++ b/src/backend/utils/adt/dbsize.c @@ -97,7 +97,7 @@ calculate_database_size(Oid dbOid) if (aclresult != ACLCHECK_OK && !is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_ALL_STATS)) { - aclcheck_error(aclresult, ACL_KIND_DATABASE, + aclcheck_error(aclresult, OBJECT_DATABASE, get_database_name(dbOid)); } @@ -183,7 +183,7 @@ calculate_tablespace_size(Oid tblspcOid) { aclresult = pg_tablespace_aclcheck(tblspcOid, GetUserId(), ACL_CREATE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_TABLESPACE, + aclcheck_error(aclresult, OBJECT_TABLESPACE, get_tablespace_name(tblspcOid)); } diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c index 5ed5fdaffe..41d540b46e 100644 --- a/src/backend/utils/adt/tid.c +++ b/src/backend/utils/adt/tid.c @@ -343,7 +343,7 @@ currtid_byreloid(PG_FUNCTION_ARGS) aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(), ACL_SELECT); if (aclresult != ACLCHECK_OK) - 
aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); if (rel->rd_rel->relkind == RELKIND_VIEW) @@ -377,7 +377,7 @@ currtid_byrelname(PG_FUNCTION_ARGS) aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(), ACL_SELECT); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_CLASS, + aclcheck_error(aclresult, get_relkind_objtype(rel->rd_rel->relkind), RelationGetRelationName(rel)); if (rel->rd_rel->relkind == RELKIND_VIEW) diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c index 8968a9fde8..25f4c34901 100644 --- a/src/backend/utils/fmgr/fmgr.c +++ b/src/backend/utils/fmgr/fmgr.c @@ -2124,7 +2124,7 @@ CheckFunctionValidatorAccess(Oid validatorOid, Oid functionOid) aclresult = pg_language_aclcheck(procStruct->prolang, GetUserId(), ACL_USAGE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_LANGUAGE, + aclcheck_error(aclresult, OBJECT_LANGUAGE, NameStr(langStruct->lanname)); /* @@ -2134,7 +2134,7 @@ CheckFunctionValidatorAccess(Oid validatorOid, Oid functionOid) */ aclresult = pg_proc_aclcheck(functionOid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, NameStr(procStruct->proname)); + aclcheck_error(aclresult, OBJECT_FUNCTION, NameStr(procStruct->proname)); ReleaseSysCache(procTup); ReleaseSysCache(langTup); diff --git a/src/include/catalog/objectaddress.h b/src/include/catalog/objectaddress.h index 834554ec17..6a9b1eec73 100644 --- a/src/include/catalog/objectaddress.h +++ b/src/include/catalog/objectaddress.h @@ -62,7 +62,7 @@ extern AttrNumber get_object_attnum_name(Oid class_id); extern AttrNumber get_object_attnum_namespace(Oid class_id); extern AttrNumber get_object_attnum_owner(Oid class_id); extern AttrNumber get_object_attnum_acl(Oid class_id); -extern AclObjectKind get_object_aclkind(Oid class_id); +extern ObjectType get_object_type(Oid class_id, Oid object_id); extern bool get_object_namensp_unique(Oid class_id); extern HeapTuple get_catalog_object_by_oid(Relation catalog, @@ -78,4 +78,6 @@ extern char *getObjectIdentityParts(const ObjectAddress *address, List **objname, List **objargs); extern ArrayType *strlist_to_textarray(List *list); +extern ObjectType get_relkind_objtype(char relkind); + #endif /* OBJECTADDRESS_H */ diff --git a/src/include/utils/acl.h b/src/include/utils/acl.h index 7db1606b8f..f4d4be8d0d 100644 --- a/src/include/utils/acl.h +++ b/src/include/utils/acl.h @@ -182,37 +182,6 @@ typedef enum ACLCHECK_NOT_OWNER } AclResult; -/* this enum covers all object types that can have privilege errors */ -/* currently it's only used to tell aclcheck_error what to say */ -typedef enum AclObjectKind -{ - ACL_KIND_COLUMN, /* pg_attribute */ - ACL_KIND_CLASS, /* pg_class */ - ACL_KIND_SEQUENCE, /* pg_sequence */ - ACL_KIND_DATABASE, /* pg_database */ - ACL_KIND_PROC, /* pg_proc */ - ACL_KIND_OPER, /* pg_operator */ - ACL_KIND_TYPE, /* pg_type */ - ACL_KIND_LANGUAGE, /* pg_language */ - ACL_KIND_LARGEOBJECT, /* pg_largeobject */ - ACL_KIND_NAMESPACE, /* pg_namespace */ - ACL_KIND_OPCLASS, /* pg_opclass */ - ACL_KIND_OPFAMILY, /* pg_opfamily */ - ACL_KIND_COLLATION, /* pg_collation */ - ACL_KIND_CONVERSION, /* pg_conversion */ - ACL_KIND_STATISTICS, /* pg_statistic_ext */ - ACL_KIND_TABLESPACE, /* pg_tablespace */ - ACL_KIND_TSDICTIONARY, /* pg_ts_dict */ - ACL_KIND_TSCONFIGURATION, /* pg_ts_config */ - ACL_KIND_FDW, /* pg_foreign_data_wrapper */ - ACL_KIND_FOREIGN_SERVER, 
/* pg_foreign_server */ - ACL_KIND_EVENT_TRIGGER, /* pg_event_trigger */ - ACL_KIND_EXTENSION, /* pg_extension */ - ACL_KIND_PUBLICATION, /* pg_publication */ - ACL_KIND_SUBSCRIPTION, /* pg_subscription */ - MAX_ACL_KIND /* MUST BE LAST */ -} AclObjectKind; - /* * routines used internally @@ -301,10 +270,10 @@ extern AclResult pg_foreign_data_wrapper_aclcheck(Oid fdw_oid, Oid roleid, AclMo extern AclResult pg_foreign_server_aclcheck(Oid srv_oid, Oid roleid, AclMode mode); extern AclResult pg_type_aclcheck(Oid type_oid, Oid roleid, AclMode mode); -extern void aclcheck_error(AclResult aclerr, AclObjectKind objectkind, +extern void aclcheck_error(AclResult aclerr, ObjectType objtype, const char *objectname); -extern void aclcheck_error_col(AclResult aclerr, AclObjectKind objectkind, +extern void aclcheck_error_col(AclResult aclerr, ObjectType objtype, const char *objectname, const char *colname); extern void aclcheck_error_type(AclResult aclerr, Oid typeOid); diff --git a/src/pl/tcl/pltcl.c b/src/pl/tcl/pltcl.c index 8069784151..8f5847c4ff 100644 --- a/src/pl/tcl/pltcl.c +++ b/src/pl/tcl/pltcl.c @@ -618,7 +618,7 @@ call_pltcl_start_proc(Oid prolang, bool pltrusted) /* Current user must have permission to call function */ aclresult = pg_proc_aclcheck(procOid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) - aclcheck_error(aclresult, ACL_KIND_PROC, start_proc); + aclcheck_error(aclresult, OBJECT_FUNCTION, start_proc); /* Get the function's pg_proc entry */ procTup = SearchSysCache1(PROCOID, ObjectIdGetDatum(procOid)); diff --git a/src/test/modules/dummy_seclabel/expected/dummy_seclabel.out b/src/test/modules/dummy_seclabel/expected/dummy_seclabel.out index 77bdc9345d..b2d898a7d1 100644 --- a/src/test/modules/dummy_seclabel/expected/dummy_seclabel.out +++ b/src/test/modules/dummy_seclabel/expected/dummy_seclabel.out @@ -30,14 +30,14 @@ SECURITY LABEL FOR 'dummy' ON TABLE dummy_seclabel_tbl1 IS 'unclassified'; -- OK SECURITY LABEL FOR 'unknown_seclabel' ON TABLE dummy_seclabel_tbl1 IS 'classified'; -- fail ERROR: security label provider "unknown_seclabel" is not loaded SECURITY LABEL ON TABLE dummy_seclabel_tbl2 IS 'unclassified'; -- fail (not owner) -ERROR: must be owner of relation dummy_seclabel_tbl2 +ERROR: must be owner of table dummy_seclabel_tbl2 SECURITY LABEL ON TABLE dummy_seclabel_tbl1 IS 'secret'; -- fail (not superuser) ERROR: only superuser can set 'secret' label SECURITY LABEL ON TABLE dummy_seclabel_tbl3 IS 'unclassified'; -- fail (not found) ERROR: relation "dummy_seclabel_tbl3" does not exist SET SESSION AUTHORIZATION regress_dummy_seclabel_user2; SECURITY LABEL ON TABLE dummy_seclabel_tbl1 IS 'unclassified'; -- fail -ERROR: must be owner of relation dummy_seclabel_tbl1 +ERROR: must be owner of table dummy_seclabel_tbl1 SECURITY LABEL ON TABLE dummy_seclabel_tbl2 IS 'classified'; -- OK -- -- Test for shared database object diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index 517fb080bd..e9a1d37f6f 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -1,5 +1,12 @@ -- -- ALTER_TABLE +-- +-- Clean up in case a prior regression run failed +SET client_min_messages TO 'warning'; +DROP ROLE IF EXISTS regress_alter_user1; +RESET client_min_messages; +CREATE USER regress_alter_user1; +-- -- add attribute -- CREATE TABLE tmp (initial int4); @@ -209,9 +216,17 @@ ALTER INDEX IF EXISTS __tmp_onek_unique1 RENAME TO onek_unique1; NOTICE: relation "__tmp_onek_unique1" does not 
exist, skipping ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1; ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1; +SET ROLE regress_alter_user1; +ALTER INDEX onek_unique1 RENAME TO fail; -- permission denied +ERROR: must be owner of index onek_unique1 +RESET ROLE; -- renaming views CREATE VIEW tmp_view (unique1) AS SELECT unique1 FROM tenk1; ALTER TABLE tmp_view RENAME TO tmp_view_new; +SET ROLE regress_alter_user1; +ALTER VIEW tmp_view_new RENAME TO fail; -- permission denied +ERROR: must be owner of view tmp_view_new +RESET ROLE; -- hack to ensure we get an indexscan here set enable_seqscan to off; set enable_bitmapscan to off; @@ -3364,7 +3379,7 @@ CREATE TABLE owned_by_me ( a int ) PARTITION BY LIST (a); ALTER TABLE owned_by_me ATTACH PARTITION not_owned_by_me FOR VALUES IN (1); -ERROR: must be owner of relation not_owned_by_me +ERROR: must be owner of table not_owned_by_me RESET SESSION AUTHORIZATION; DROP TABLE owned_by_me, not_owned_by_me; DROP ROLE regress_test_not_me; @@ -3883,3 +3898,4 @@ ALTER TABLE tmp ALTER COLUMN i SET (n_distinct = 1, n_distinct_inherited = 2); ALTER TABLE tmp ALTER COLUMN i RESET (n_distinct_inherited); ANALYZE tmp; DROP TABLE tmp; +DROP USER regress_alter_user1; diff --git a/src/test/regress/expected/copy2.out b/src/test/regress/expected/copy2.out index 65e9c626b3..e606a5fda4 100644 --- a/src/test/regress/expected/copy2.out +++ b/src/test/regress/expected/copy2.out @@ -521,12 +521,12 @@ RESET SESSION AUTHORIZATION; SET SESSION AUTHORIZATION regress_rls_copy_user_colperms; -- attempt all columns (should fail) COPY rls_t1 TO stdout; -ERROR: permission denied for relation rls_t1 +ERROR: permission denied for table rls_t1 COPY rls_t1 (a, b, c) TO stdout; -ERROR: permission denied for relation rls_t1 +ERROR: permission denied for table rls_t1 -- try to copy column with no privileges (should fail) COPY rls_t1 (c) TO stdout; -ERROR: permission denied for relation rls_t1 +ERROR: permission denied for table rls_t1 -- subset of columns (should succeed) COPY rls_t1 (a) TO stdout; 2 diff --git a/src/test/regress/expected/create_procedure.out b/src/test/regress/expected/create_procedure.out index e627d8ebbc..ccad5c87d5 100644 --- a/src/test/regress/expected/create_procedure.out +++ b/src/test/regress/expected/create_procedure.out @@ -82,7 +82,7 @@ GRANT INSERT ON cp_test TO regress_user1; REVOKE EXECUTE ON PROCEDURE ptest1(text) FROM PUBLIC; SET ROLE regress_user1; CALL ptest1('a'); -- error -ERROR: permission denied for function ptest1 +ERROR: permission denied for procedure ptest1 RESET ROLE; GRANT EXECUTE ON PROCEDURE ptest1(text) TO regress_user1; SET ROLE regress_user1; diff --git a/src/test/regress/expected/lock.out b/src/test/regress/expected/lock.out index fd27344503..74a434d24d 100644 --- a/src/test/regress/expected/lock.out +++ b/src/test/regress/expected/lock.out @@ -45,7 +45,7 @@ GRANT UPDATE ON TABLE lock_tbl1 TO regress_rol_lock1; SET ROLE regress_rol_lock1; BEGIN; LOCK TABLE lock_tbl1 * IN ACCESS EXCLUSIVE MODE; -ERROR: permission denied for relation lock_tbl2 +ERROR: permission denied for table lock_tbl2 ROLLBACK; BEGIN; LOCK TABLE ONLY lock_tbl1; diff --git a/src/test/regress/expected/privileges.out b/src/test/regress/expected/privileges.out index e6994f0490..cf53b37383 100644 --- a/src/test/regress/expected/privileges.out +++ b/src/test/regress/expected/privileges.out @@ -92,11 +92,11 @@ SELECT * FROM atest2; -- ok INSERT INTO atest1 VALUES (2, 'two'); -- ok INSERT INTO atest2 VALUES ('foo', true); -- fail -ERROR: permission denied for 
relation atest2 +ERROR: permission denied for table atest2 INSERT INTO atest1 SELECT 1, b FROM atest1; -- ok UPDATE atest1 SET a = 1 WHERE a = 2; -- ok UPDATE atest2 SET col2 = NOT col2; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 SELECT * FROM atest1 FOR UPDATE; -- ok a | b ---+----- @@ -105,17 +105,17 @@ SELECT * FROM atest1 FOR UPDATE; -- ok (2 rows) SELECT * FROM atest2 FOR UPDATE; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 DELETE FROM atest2; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 TRUNCATE atest2; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 BEGIN; LOCK atest2 IN ACCESS EXCLUSIVE MODE; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 COMMIT; COPY atest2 FROM stdin; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 GRANT ALL ON atest1 TO PUBLIC; -- fail WARNING: no privileges were granted for "atest1" -- checks in subquery, both ok @@ -144,37 +144,37 @@ SELECT * FROM atest1; -- ok (2 rows) SELECT * FROM atest2; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 INSERT INTO atest1 VALUES (2, 'two'); -- fail -ERROR: permission denied for relation atest1 +ERROR: permission denied for table atest1 INSERT INTO atest2 VALUES ('foo', true); -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 INSERT INTO atest1 SELECT 1, b FROM atest1; -- fail -ERROR: permission denied for relation atest1 +ERROR: permission denied for table atest1 UPDATE atest1 SET a = 1 WHERE a = 2; -- fail -ERROR: permission denied for relation atest1 +ERROR: permission denied for table atest1 UPDATE atest2 SET col2 = NULL; -- ok UPDATE atest2 SET col2 = NOT col2; -- fails; requires SELECT on atest2 -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 UPDATE atest2 SET col2 = true FROM atest1 WHERE atest1.a = 5; -- ok SELECT * FROM atest1 FOR UPDATE; -- fail -ERROR: permission denied for relation atest1 +ERROR: permission denied for table atest1 SELECT * FROM atest2 FOR UPDATE; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 DELETE FROM atest2; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 TRUNCATE atest2; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 BEGIN; LOCK atest2 IN ACCESS EXCLUSIVE MODE; -- ok COMMIT; COPY atest2 FROM stdin; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 -- checks in subquery, both fail SELECT * FROM atest1 WHERE ( b IN ( SELECT col1 FROM atest2 ) ); -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 SELECT * FROM atest2 WHERE ( col1 IN ( SELECT b FROM atest1 ) ); -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 SET SESSION AUTHORIZATION regress_user4; COPY atest2 FROM stdin; -- ok SELECT * FROM atest1; -- ok @@ -234,7 +234,7 @@ CREATE OPERATOR >>> (procedure = leak2, leftarg = integer, rightarg = integer, restrict = scalargtsel); -- This should not show any "leak" notices before failing. 
EXPLAIN (COSTS OFF) SELECT * FROM atest12 WHERE a >>> 0; -ERROR: permission denied for relation atest12 +ERROR: permission denied for table atest12 -- This plan should use hashjoin, as it will expect many rows to be selected. EXPLAIN (COSTS OFF) SELECT * FROM atest12v x, atest12v y WHERE x.a = y.b; QUERY PLAN @@ -287,7 +287,7 @@ CREATE TABLE atest3 (one int, two int, three int); GRANT DELETE ON atest3 TO GROUP regress_group2; SET SESSION AUTHORIZATION regress_user1; SELECT * FROM atest3; -- fail -ERROR: permission denied for relation atest3 +ERROR: permission denied for table atest3 DELETE FROM atest3; -- ok -- views SET SESSION AUTHORIZATION regress_user3; @@ -305,7 +305,7 @@ SELECT * FROM atestv1; -- ok (2 rows) SELECT * FROM atestv2; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 GRANT SELECT ON atestv1, atestv3 TO regress_user4; GRANT SELECT ON atestv2 TO regress_user2; SET SESSION AUTHORIZATION regress_user4; @@ -317,28 +317,28 @@ SELECT * FROM atestv1; -- ok (2 rows) SELECT * FROM atestv2; -- fail -ERROR: permission denied for relation atestv2 +ERROR: permission denied for view atestv2 SELECT * FROM atestv3; -- ok one | two | three -----+-----+------- (0 rows) SELECT * FROM atestv0; -- fail -ERROR: permission denied for relation atestv0 +ERROR: permission denied for view atestv0 -- Appendrels excluded by constraints failed to check permissions in 8.4-9.2. select * from ((select a.q1 as x from int8_tbl a offset 0) union all (select b.q2 as x from int8_tbl b offset 0)) ss where false; -ERROR: permission denied for relation int8_tbl +ERROR: permission denied for table int8_tbl set constraint_exclusion = on; select * from ((select a.q1 as x, random() from int8_tbl a where q1 > 0) union all (select b.q2 as x, random() from int8_tbl b where q2 > 0)) ss where x < 0; -ERROR: permission denied for relation int8_tbl +ERROR: permission denied for table int8_tbl reset constraint_exclusion; CREATE VIEW atestv4 AS SELECT * FROM atestv3; -- nested view SELECT * FROM atestv4; -- ok @@ -350,7 +350,7 @@ GRANT SELECT ON atestv4 TO regress_user2; SET SESSION AUTHORIZATION regress_user2; -- Two complex cases: SELECT * FROM atestv3; -- fail -ERROR: permission denied for relation atestv3 +ERROR: permission denied for view atestv3 SELECT * FROM atestv4; -- ok (even though regress_user2 cannot access underlying atestv3) one | two | three -----+-----+------- @@ -363,7 +363,7 @@ SELECT * FROM atest2; -- ok (1 row) SELECT * FROM atestv2; -- fail (even though regress_user2 can access underlying atest2) -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 -- Test column level permissions SET SESSION AUTHORIZATION regress_user1; CREATE TABLE atest5 (one int, two int unique, three int, four int unique); @@ -373,7 +373,7 @@ GRANT ALL (one) ON atest5 TO regress_user3; INSERT INTO atest5 VALUES (1,2,3); SET SESSION AUTHORIZATION regress_user4; SELECT * FROM atest5; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT one FROM atest5; -- ok one ----- @@ -383,13 +383,13 @@ SELECT one FROM atest5; -- ok COPY atest5 (one) TO stdout; -- ok 1 SELECT two FROM atest5; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 COPY atest5 (two) TO stdout; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT atest5 FROM atest5; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied 
for table atest5 COPY atest5 (one,two) TO stdout; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT 1 FROM atest5; -- ok ?column? ---------- @@ -403,15 +403,15 @@ SELECT 1 FROM atest5 a JOIN atest5 b USING (one); -- ok (1 row) SELECT 1 FROM atest5 a JOIN atest5 b USING (two); -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT 1 FROM atest5 a NATURAL JOIN atest5 b; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT (j.*) IS NULL FROM (atest5 a JOIN atest5 b USING (one)) j; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT 1 FROM atest5 WHERE two = 2; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT * FROM atest1, atest5; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT atest1.* FROM atest1, atest5; -- ok a | b ---+----- @@ -427,7 +427,7 @@ SELECT atest1.*,atest5.one FROM atest1, atest5; -- ok (2 rows) SELECT atest1.*,atest5.one FROM atest1 JOIN atest5 ON (atest1.a = atest5.two); -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT atest1.*,atest5.one FROM atest1 JOIN atest5 ON (atest1.a = atest5.one); -- ok a | b | one ---+-----+----- @@ -436,12 +436,12 @@ SELECT atest1.*,atest5.one FROM atest1 JOIN atest5 ON (atest1.a = atest5.one); - (2 rows) SELECT one, two FROM atest5; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SET SESSION AUTHORIZATION regress_user1; GRANT SELECT (one,two) ON atest6 TO regress_user4; SET SESSION AUTHORIZATION regress_user4; SELECT one, two FROM atest5 NATURAL JOIN atest6; -- fail still -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SET SESSION AUTHORIZATION regress_user1; GRANT SELECT (two) ON atest5 TO regress_user4; SET SESSION AUTHORIZATION regress_user4; @@ -453,23 +453,23 @@ SELECT one, two FROM atest5 NATURAL JOIN atest6; -- ok now -- test column-level privileges for INSERT and UPDATE INSERT INTO atest5 (two) VALUES (3); -- ok COPY atest5 FROM stdin; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 COPY atest5 (two) FROM stdin; -- ok INSERT INTO atest5 (three) VALUES (4); -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 INSERT INTO atest5 VALUES (5,5,5); -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 UPDATE atest5 SET three = 10; -- ok UPDATE atest5 SET one = 8; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 UPDATE atest5 SET three = 5, one = 2; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 -- Check that column level privs are enforced in RETURNING -- Ok. INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set three = 10; -- Error. No SELECT on column three. INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set three = 10 RETURNING atest5.three; -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 -- Ok. 
May SELECT on column "one": INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set three = 10 RETURNING atest5.one; one @@ -482,21 +482,21 @@ INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set three = 10 RE INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set three = EXCLUDED.one; -- Error. No select rights on three INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set three = EXCLUDED.three; -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 INSERT INTO atest5(two) VALUES (6) ON CONFLICT (two) DO UPDATE set one = 8; -- fails (due to UPDATE) -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 INSERT INTO atest5(three) VALUES (4) ON CONFLICT (two) DO UPDATE set three = 10; -- fails (due to INSERT) -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 -- Check that the columns in the inference require select privileges INSERT INTO atest5(four) VALUES (4); -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SET SESSION AUTHORIZATION regress_user1; GRANT INSERT (four) ON atest5 TO regress_user4; SET SESSION AUTHORIZATION regress_user4; INSERT INTO atest5(four) VALUES (4) ON CONFLICT (four) DO UPDATE set three = 3; -- fails (due to SELECT) -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 INSERT INTO atest5(four) VALUES (4) ON CONFLICT ON CONSTRAINT atest5_four_key DO UPDATE set three = 3; -- fails (due to SELECT) -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 INSERT INTO atest5(four) VALUES (4); -- ok SET SESSION AUTHORIZATION regress_user1; GRANT SELECT (four) ON atest5 TO regress_user4; @@ -508,9 +508,9 @@ REVOKE ALL (one) ON atest5 FROM regress_user4; GRANT SELECT (one,two,blue) ON atest6 TO regress_user4; SET SESSION AUTHORIZATION regress_user4; SELECT one FROM atest5; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 UPDATE atest5 SET one = 1; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SELECT atest6 FROM atest6; -- ok atest6 -------- @@ -557,9 +557,9 @@ REVOKE ALL (one) ON atest5 FROM regress_user3; GRANT SELECT (one) ON atest5 TO regress_user4; SET SESSION AUTHORIZATION regress_user4; SELECT atest6 FROM atest6; -- fail -ERROR: permission denied for relation atest6 +ERROR: permission denied for table atest6 SELECT one FROM atest5 NATURAL JOIN atest6; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 SET SESSION AUTHORIZATION regress_user1; ALTER TABLE atest6 DROP COLUMN three; SET SESSION AUTHORIZATION regress_user4; @@ -578,12 +578,12 @@ ALTER TABLE atest6 DROP COLUMN two; REVOKE SELECT (one,blue) ON atest6 FROM regress_user4; SET SESSION AUTHORIZATION regress_user4; SELECT * FROM atest6; -- fail -ERROR: permission denied for relation atest6 +ERROR: permission denied for table atest6 SELECT 1 FROM atest6; -- fail -ERROR: permission denied for relation atest6 +ERROR: permission denied for table atest6 SET SESSION AUTHORIZATION regress_user3; DELETE FROM atest5 WHERE one = 1; -- fail -ERROR: permission denied for relation atest5 +ERROR: permission denied for table atest5 DELETE FROM atest5 WHERE two = 2; -- ok -- check inheritance cases SET SESSION AUTHORIZATION regress_user1; @@ -614,7 +614,7 @@ SELECT oid FROM atestp2; -- ok (0 rows) SELECT fy FROM atestc; 
-- fail -ERROR: permission denied for relation atestc +ERROR: permission denied for table atestc SET SESSION AUTHORIZATION regress_user1; GRANT SELECT(fy,oid) ON atestc TO regress_user2; SET SESSION AUTHORIZATION regress_user2; @@ -694,11 +694,11 @@ SET SESSION AUTHORIZATION regress_user3; SELECT testfunc1(5); -- fail ERROR: permission denied for function testfunc1 SELECT testagg1(x) FROM (VALUES (1), (2), (3)) _(x); -- fail -ERROR: permission denied for function testagg1 +ERROR: permission denied for aggregate testagg1 CALL testproc1(6); -- fail -ERROR: permission denied for function testproc1 +ERROR: permission denied for procedure testproc1 SELECT col1 FROM atest2 WHERE col2 = true; -- fail -ERROR: permission denied for relation atest2 +ERROR: permission denied for table atest2 SELECT testfunc4(true); -- ok testfunc4 ----------- @@ -722,9 +722,9 @@ CALL testproc1(6); -- ok DROP FUNCTION testfunc1(int); -- fail ERROR: must be owner of function testfunc1 DROP AGGREGATE testagg1(int); -- fail -ERROR: must be owner of function testagg1 +ERROR: must be owner of aggregate testagg1 DROP PROCEDURE testproc1(int); -- fail -ERROR: must be owner of function testproc1 +ERROR: must be owner of procedure testproc1 \c - DROP FUNCTION testfunc1(int); -- ok -- restore to sanity @@ -849,7 +849,7 @@ DROP DOMAIN testdomain1; -- ok SET SESSION AUTHORIZATION regress_user5; TRUNCATE atest2; -- ok TRUNCATE atest3; -- fail -ERROR: permission denied for relation atest3 +ERROR: permission denied for table atest3 -- has_table_privilege function -- bad-input checks select has_table_privilege(NULL,'pg_authid','select'); @@ -1435,7 +1435,7 @@ SELECT * FROM pg_largeobject LIMIT 0; SET SESSION AUTHORIZATION regress_user1; SELECT * FROM pg_largeobject LIMIT 0; -- to be denied -ERROR: permission denied for relation pg_largeobject +ERROR: permission denied for table pg_largeobject -- test default ACLs \c - CREATE SCHEMA testns; @@ -1899,14 +1899,14 @@ GRANT SELECT ON lock_table TO regress_locktable_user; SET SESSION AUTHORIZATION regress_locktable_user; BEGIN; LOCK TABLE lock_table IN ROW EXCLUSIVE MODE; -- should fail -ERROR: permission denied for relation lock_table +ERROR: permission denied for table lock_table ROLLBACK; BEGIN; LOCK TABLE lock_table IN ACCESS SHARE MODE; -- should pass COMMIT; BEGIN; LOCK TABLE lock_table IN ACCESS EXCLUSIVE MODE; -- should fail -ERROR: permission denied for relation lock_table +ERROR: permission denied for table lock_table ROLLBACK; \c REVOKE SELECT ON lock_table FROM regress_locktable_user; @@ -1918,11 +1918,11 @@ LOCK TABLE lock_table IN ROW EXCLUSIVE MODE; -- should pass COMMIT; BEGIN; LOCK TABLE lock_table IN ACCESS SHARE MODE; -- should fail -ERROR: permission denied for relation lock_table +ERROR: permission denied for table lock_table ROLLBACK; BEGIN; LOCK TABLE lock_table IN ACCESS EXCLUSIVE MODE; -- should fail -ERROR: permission denied for relation lock_table +ERROR: permission denied for table lock_table ROLLBACK; \c REVOKE INSERT ON lock_table FROM regress_locktable_user; @@ -1934,7 +1934,7 @@ LOCK TABLE lock_table IN ROW EXCLUSIVE MODE; -- should pass COMMIT; BEGIN; LOCK TABLE lock_table IN ACCESS SHARE MODE; -- should fail -ERROR: permission denied for relation lock_table +ERROR: permission denied for table lock_table ROLLBACK; BEGIN; LOCK TABLE lock_table IN ACCESS EXCLUSIVE MODE; -- should pass @@ -1949,7 +1949,7 @@ LOCK TABLE lock_table IN ROW EXCLUSIVE MODE; -- should pass COMMIT; BEGIN; LOCK TABLE lock_table IN ACCESS SHARE MODE; -- should fail -ERROR: permission 
denied for relation lock_table +ERROR: permission denied for table lock_table ROLLBACK; BEGIN; LOCK TABLE lock_table IN ACCESS EXCLUSIVE MODE; -- should pass @@ -1964,7 +1964,7 @@ LOCK TABLE lock_table IN ROW EXCLUSIVE MODE; -- should pass COMMIT; BEGIN; LOCK TABLE lock_table IN ACCESS SHARE MODE; -- should fail -ERROR: permission denied for relation lock_table +ERROR: permission denied for table lock_table ROLLBACK; BEGIN; LOCK TABLE lock_table IN ACCESS EXCLUSIVE MODE; -- should pass diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out index b101331d69..0c86c647bc 100644 --- a/src/test/regress/expected/publication.out +++ b/src/test/regress/expected/publication.out @@ -198,7 +198,7 @@ GRANT CREATE ON DATABASE regression TO regress_publication_user2; SET ROLE regress_publication_user2; CREATE PUBLICATION testpub2; -- ok ALTER PUBLICATION testpub2 ADD TABLE testpub_tbl1; -- fail -ERROR: must be owner of relation testpub_tbl1 +ERROR: must be owner of table testpub_tbl1 SET ROLE regress_publication_user; GRANT regress_publication_user TO regress_publication_user2; SET ROLE regress_publication_user2; diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out index b8dcf51a30..f1ae40df61 100644 --- a/src/test/regress/expected/rowsecurity.out +++ b/src/test/regress/expected/rowsecurity.out @@ -361,7 +361,7 @@ INSERT INTO document VALUES (100, 55, 1, 'regress_rls_dave', 'testing sorting of ERROR: new row violates row-level security policy "p2r" for table "document" -- only owner can change policies ALTER POLICY p1 ON document USING (true); --fail -ERROR: must be owner of relation document +ERROR: must be owner of table document DROP POLICY p1 ON document; --fail ERROR: must be owner of relation document SET SESSION AUTHORIZATION regress_rls_alice; @@ -1192,7 +1192,7 @@ EXPLAIN (COSTS OFF) SELECT * FROM part_document WHERE f_leak(dtitle); -- only owner can change policies ALTER POLICY pp1 ON part_document USING (true); --fail -ERROR: must be owner of relation part_document +ERROR: must be owner of table part_document DROP POLICY pp1 ON part_document; --fail ERROR: must be owner of relation part_document SET SESSION AUTHORIZATION regress_rls_alice; @@ -2446,9 +2446,9 @@ EXPLAIN (COSTS OFF) SELECT * FROM rls_view; -- Query as role that is not the owner of the table or view without permissions. SET SESSION AUTHORIZATION regress_rls_carol; SELECT * FROM rls_view; --fail - permission denied. -ERROR: permission denied for relation rls_view +ERROR: permission denied for view rls_view EXPLAIN (COSTS OFF) SELECT * FROM rls_view; --fail - permission denied. -ERROR: permission denied for relation rls_view +ERROR: permission denied for view rls_view -- Query as role that is not the owner of the table or view with permissions. 
SET SESSION AUTHORIZATION regress_rls_bob; GRANT SELECT ON rls_view TO regress_rls_carol; @@ -3235,7 +3235,7 @@ COPY (SELECT * FROM copy_t ORDER BY a ASC) TO STDOUT WITH DELIMITER ','; --fail ERROR: query would be affected by row-level security policy for table "copy_t" SET row_security TO ON; COPY (SELECT * FROM copy_t ORDER BY a ASC) TO STDOUT WITH DELIMITER ','; --fail - permission denied -ERROR: permission denied for relation copy_t +ERROR: permission denied for table copy_t -- Check COPY relation TO; keep it just one row to avoid reordering issues RESET SESSION AUTHORIZATION; SET row_security TO ON; @@ -3271,10 +3271,10 @@ COPY copy_rel_to TO STDOUT WITH DELIMITER ','; --ok SET SESSION AUTHORIZATION regress_rls_carol; SET row_security TO OFF; COPY copy_rel_to TO STDOUT WITH DELIMITER ','; --fail - permission denied -ERROR: permission denied for relation copy_rel_to +ERROR: permission denied for table copy_rel_to SET row_security TO ON; COPY copy_rel_to TO STDOUT WITH DELIMITER ','; --fail - permission denied -ERROR: permission denied for relation copy_rel_to +ERROR: permission denied for table copy_rel_to -- Check COPY FROM as Superuser/owner. RESET SESSION AUTHORIZATION; SET row_security TO OFF; @@ -3298,10 +3298,10 @@ COPY copy_t FROM STDIN; --ok SET SESSION AUTHORIZATION regress_rls_carol; SET row_security TO OFF; COPY copy_t FROM STDIN; --fail - permission denied. -ERROR: permission denied for relation copy_t +ERROR: permission denied for table copy_t SET row_security TO ON; COPY copy_t FROM STDIN; --fail - permission denied. -ERROR: permission denied for relation copy_t +ERROR: permission denied for table copy_t RESET SESSION AUTHORIZATION; DROP TABLE copy_t; DROP TABLE copy_rel_to CASCADE; diff --git a/src/test/regress/expected/select_into.out b/src/test/regress/expected/select_into.out index 5d54bbf3b0..ef7cfd6f29 100644 --- a/src/test/regress/expected/select_into.out +++ b/src/test/regress/expected/select_into.out @@ -22,15 +22,15 @@ GRANT ALL ON SCHEMA selinto_schema TO public; SET SESSION AUTHORIZATION regress_selinto_user; SELECT * INTO TABLE selinto_schema.tmp1 FROM pg_class WHERE relname like '%a%'; -- Error -ERROR: permission denied for relation tmp1 +ERROR: permission denied for table tmp1 SELECT oid AS clsoid, relname, relnatts + 10 AS x INTO selinto_schema.tmp2 FROM pg_class WHERE relname like '%b%'; -- Error -ERROR: permission denied for relation tmp2 +ERROR: permission denied for table tmp2 CREATE TABLE selinto_schema.tmp3 (a,b,c) AS SELECT oid,relname,relacl FROM pg_class WHERE relname like '%c%'; -- Error -ERROR: permission denied for relation tmp3 +ERROR: permission denied for table tmp3 RESET SESSION AUTHORIZATION; ALTER DEFAULT PRIVILEGES FOR ROLE regress_selinto_user GRANT INSERT ON TABLES TO regress_selinto_user; diff --git a/src/test/regress/expected/sequence.out b/src/test/regress/expected/sequence.out index 2384b7dd81..ca5ea063fa 100644 --- a/src/test/regress/expected/sequence.out +++ b/src/test/regress/expected/sequence.out @@ -785,7 +785,7 @@ ROLLBACK; BEGIN; SET LOCAL SESSION AUTHORIZATION regress_seq_user; ALTER SEQUENCE sequence_test2 START WITH 1; -ERROR: must be owner of relation sequence_test2 +ERROR: must be owner of sequence sequence_test2 ROLLBACK; -- Sequences should get wiped out as well: DROP TABLE serialTest1, serialTest2; diff --git a/src/test/regress/expected/updatable_views.out b/src/test/regress/expected/updatable_views.out index 2090a411fe..964c115b14 100644 --- a/src/test/regress/expected/updatable_views.out +++ 
b/src/test/regress/expected/updatable_views.out @@ -990,26 +990,26 @@ SELECT * FROM rw_view2; -- ok (2 rows) INSERT INTO base_tbl VALUES (3, 'Row 3', 3.0); -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl INSERT INTO rw_view1 VALUES ('Row 3', 3.0, 3); -- not allowed -ERROR: permission denied for relation rw_view1 +ERROR: permission denied for view rw_view1 INSERT INTO rw_view2 VALUES ('Row 3', 3.0, 3); -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl UPDATE base_tbl SET a=a, c=c; -- ok UPDATE base_tbl SET b=b; -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl UPDATE rw_view1 SET bb=bb, cc=cc; -- ok UPDATE rw_view1 SET aa=aa; -- not allowed -ERROR: permission denied for relation rw_view1 +ERROR: permission denied for view rw_view1 UPDATE rw_view2 SET aa=aa, cc=cc; -- ok UPDATE rw_view2 SET bb=bb; -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl DELETE FROM base_tbl; -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl DELETE FROM rw_view1; -- not allowed -ERROR: permission denied for relation rw_view1 +ERROR: permission denied for view rw_view1 DELETE FROM rw_view2; -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl RESET SESSION AUTHORIZATION; SET SESSION AUTHORIZATION regress_view_user1; GRANT INSERT, DELETE ON base_tbl TO regress_view_user2; @@ -1017,11 +1017,11 @@ RESET SESSION AUTHORIZATION; SET SESSION AUTHORIZATION regress_view_user2; INSERT INTO base_tbl VALUES (3, 'Row 3', 3.0); -- ok INSERT INTO rw_view1 VALUES ('Row 4', 4.0, 4); -- not allowed -ERROR: permission denied for relation rw_view1 +ERROR: permission denied for view rw_view1 INSERT INTO rw_view2 VALUES ('Row 4', 4.0, 4); -- ok DELETE FROM base_tbl WHERE a=1; -- ok DELETE FROM rw_view1 WHERE aa=2; -- not allowed -ERROR: permission denied for relation rw_view1 +ERROR: permission denied for view rw_view1 DELETE FROM rw_view2 WHERE aa=2; -- ok SELECT * FROM base_tbl; a | b | c @@ -1037,15 +1037,15 @@ GRANT INSERT, DELETE ON rw_view1 TO regress_view_user2; RESET SESSION AUTHORIZATION; SET SESSION AUTHORIZATION regress_view_user2; INSERT INTO base_tbl VALUES (5, 'Row 5', 5.0); -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl INSERT INTO rw_view1 VALUES ('Row 5', 5.0, 5); -- ok INSERT INTO rw_view2 VALUES ('Row 6', 6.0, 6); -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl DELETE FROM base_tbl WHERE a=3; -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl DELETE FROM rw_view1 WHERE aa=3; -- ok DELETE FROM rw_view2 WHERE aa=4; -- not allowed -ERROR: permission denied for relation base_tbl +ERROR: permission denied for table base_tbl SELECT * FROM base_tbl; a | b | c ---+-------+--- diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index af25ee9e77..b27e8f6777 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -1,5 +1,15 @@ -- -- ALTER_TABLE +-- + +-- Clean up in case a prior regression run failed +SET client_min_messages TO 'warning'; +DROP ROLE IF EXISTS regress_alter_user1; +RESET client_min_messages; + +CREATE USER 
regress_alter_user1; + +-- -- add attribute -- @@ -209,10 +219,19 @@ ALTER INDEX IF EXISTS __tmp_onek_unique1 RENAME TO onek_unique1; ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1; ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1; + +SET ROLE regress_alter_user1; +ALTER INDEX onek_unique1 RENAME TO fail; -- permission denied +RESET ROLE; + -- renaming views CREATE VIEW tmp_view (unique1) AS SELECT unique1 FROM tenk1; ALTER TABLE tmp_view RENAME TO tmp_view_new; +SET ROLE regress_alter_user1; +ALTER VIEW tmp_view_new RENAME TO fail; -- permission denied +RESET ROLE; + -- hack to ensure we get an indexscan here set enable_seqscan to off; set enable_bitmapscan to off; @@ -2546,3 +2565,5 @@ ALTER TABLE tmp ALTER COLUMN i SET (n_distinct = 1, n_distinct_inherited = 2); ALTER TABLE tmp ALTER COLUMN i RESET (n_distinct_inherited); ANALYZE tmp; DROP TABLE tmp; + +DROP USER regress_alter_user1; From 7f17fd6fc7125b41218bc99ccfa8165e2d730cd9 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Fri, 19 Jan 2018 16:34:44 -0300 Subject: [PATCH 0858/1087] Fix CompareIndexInfo's attnum comparisons When an index column is an expression, it makes no sense to compare its attribute numbers. This seems to account for remaining buildfarm fallout from 8b08f7d4820f. At least, it solves the issue in my local 32bit VM -- let's see what the rest thinks. --- src/backend/catalog/index.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c index f0223416ad..849a469127 100644 --- a/src/backend/catalog/index.c +++ b/src/backend/catalog/index.c @@ -1795,8 +1795,10 @@ CompareIndexInfo(IndexInfo *info1, IndexInfo *info2, if (maplen < info2->ii_KeyAttrNumbers[i]) elog(ERROR, "incorrect attribute map"); - if (attmap[info2->ii_KeyAttrNumbers[i] - 1] != - info1->ii_KeyAttrNumbers[i]) + /* ignore expressions at this stage */ + if ((info1->ii_KeyAttrNumbers[i] != InvalidAttrNumber) && + (attmap[info2->ii_KeyAttrNumbers[i] - 1] != + info1->ii_KeyAttrNumbers[i])) return false; if (collations1[i] != collations2[i]) From 2f178441044be430f6b4d626e4dae68a9a6f6cec Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 19 Jan 2018 15:33:06 -0500 Subject: [PATCH 0859/1087] Allow UPDATE to move rows between partitions. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit When an UPDATE causes a row to no longer match the partition constraint, try to move it to a different partition where it does match the partition constraint. In essence, the UPDATE is split into a DELETE from the old partition and an INSERT into the new one. This can lead to surprising behavior in concurrency scenarios because EvalPlanQual rechecks won't work as they normally did; the known problems are documented. (There is a pending patch to improve the situation further, but it needs more review.) Amit Khandekar, reviewed and tested by Amit Langote, David Rowley, Rajkumar Raghuwanshi, Dilip Kumar, Amul Sul, Thomas Munro, Álvaro Herrera, Amit Kapila, and me. A few final revisions by me. 
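A minimal sketch of the new behavior, using hypothetical table names
(illustrative only; not taken from this patch's regression tests):

    CREATE TABLE t (a int, b text) PARTITION BY RANGE (a);
    CREATE TABLE t_p1 PARTITION OF t FOR VALUES FROM (0) TO (10);
    CREATE TABLE t_p2 PARTITION OF t FOR VALUES FROM (10) TO (20);
    INSERT INTO t VALUES (5, 'x');
    -- Previously this failed with a partition constraint violation; it is
    -- now executed as a DELETE from t_p1 followed by an INSERT into t_p2.
    UPDATE t SET a = 15 WHERE a = 5;
    SELECT tableoid::regclass, a, b FROM t;  -- the row now reports t_p2

Note that, per the documentation changes below, row-level AFTER UPDATE
triggers are not fired for such a moved row; row-level AFTER DELETE and
AFTER INSERT triggers fire on the source and destination partitions instead.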
Discussion: http://postgr.es/m/CAJ3gD9do9o2ccQ7j7+tSgiE1REY65XRiMb=yJO3u3QhyP8EEPQ@mail.gmail.com --- contrib/file_fdw/input/file_fdw.source | 1 + contrib/file_fdw/output/file_fdw.source | 2 + doc/src/sgml/ddl.sgml | 24 +- doc/src/sgml/ref/update.sgml | 13 +- doc/src/sgml/trigger.sgml | 23 + src/backend/commands/copy.c | 40 +- src/backend/commands/trigger.c | 52 +- src/backend/executor/execPartition.c | 241 ++++++++- src/backend/executor/nodeModifyTable.c | 583 +++++++++++++++----- src/backend/nodes/copyfuncs.c | 2 + src/backend/nodes/equalfuncs.c | 1 + src/backend/nodes/outfuncs.c | 3 + src/backend/nodes/readfuncs.c | 1 + src/backend/optimizer/path/allpaths.c | 4 +- src/backend/optimizer/plan/createplan.c | 4 + src/backend/optimizer/plan/planner.c | 19 +- src/backend/optimizer/prep/prepunion.c | 28 +- src/backend/optimizer/util/pathnode.c | 4 + src/include/executor/execPartition.h | 34 +- src/include/nodes/execnodes.h | 4 +- src/include/nodes/plannodes.h | 1 + src/include/nodes/relation.h | 3 + src/include/optimizer/pathnode.h | 1 + src/include/optimizer/planner.h | 3 +- src/test/regress/expected/update.out | 681 ++++++++++++++++++++++-- src/test/regress/sql/update.sql | 458 +++++++++++++++- src/tools/pgindent/typedefs.list | 1 + 27 files changed, 1957 insertions(+), 274 deletions(-) diff --git a/contrib/file_fdw/input/file_fdw.source b/contrib/file_fdw/input/file_fdw.source index e6821d64d4..88cb5f294c 100644 --- a/contrib/file_fdw/input/file_fdw.source +++ b/contrib/file_fdw/input/file_fdw.source @@ -178,6 +178,7 @@ SELECT tableoid::regclass, * FROM p1; SELECT tableoid::regclass, * FROM p2; INSERT INTO pt VALUES (1, 'xyzzy'); -- ERROR INSERT INTO pt VALUES (2, 'xyzzy'); +UPDATE pt set a = 1 where a = 2; -- ERROR SELECT tableoid::regclass, * FROM pt; SELECT tableoid::regclass, * FROM p1; SELECT tableoid::regclass, * FROM p2; diff --git a/contrib/file_fdw/output/file_fdw.source b/contrib/file_fdw/output/file_fdw.source index e2d8b87015..b92392fd25 100644 --- a/contrib/file_fdw/output/file_fdw.source +++ b/contrib/file_fdw/output/file_fdw.source @@ -344,6 +344,8 @@ SELECT tableoid::regclass, * FROM p2; INSERT INTO pt VALUES (1, 'xyzzy'); -- ERROR ERROR: cannot route inserted tuples to a foreign table INSERT INTO pt VALUES (2, 'xyzzy'); +UPDATE pt set a = 1 where a = 2; -- ERROR +ERROR: cannot route inserted tuples to a foreign table SELECT tableoid::regclass, * FROM pt; tableoid | a | b ----------+---+------- diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index b1167a40e6..3244399782 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -3005,6 +3005,11 @@ VALUES ('Albany', NULL, NULL, 'NY'); foreign table partitions. + + Updating the partition key of a row might cause it to be moved into a + different partition where this row satisfies its partition constraint. + + Example @@ -3302,9 +3307,22 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - An UPDATE that causes a row to move from one partition to - another fails, because the new value of the row fails to satisfy the - implicit partition constraint of the original partition. + When an UPDATE causes a row to move from one + partition to another, there is a chance that another concurrent + UPDATE or DELETE misses this row. + Suppose session 1 is performing an UPDATE on a + partition key, and meanwhile a concurrent session 2 for which this row + is visible performs an UPDATE or + DELETE operation on this row. 
Session 2 can silently + miss the row if the row is deleted from the partition due to session + 1's activity. In such a case, session 2's + UPDATE or DELETE, being unaware of + the row movement, thinks that the row has just been deleted and concludes + that there is nothing to be done for this row. In the usual case where + the table is not partitioned, or where there is no row movement, + session 2 would have identified the newly updated row and carried out + the UPDATE/DELETE on this new row + version. diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml index c0d0f7134d..c8ac8a335b 100644 --- a/doc/src/sgml/ref/update.sgml +++ b/doc/src/sgml/ref/update.sgml @@ -282,10 +282,15 @@ UPDATE count In the case of a partitioned table, updating a row might cause it to no - longer satisfy the partition constraint. Since there is no provision to - move the row to the partition appropriate to the new value of its - partitioning key, an error will occur in this case. This can also happen - when updating a partition directly. + longer satisfy the partition constraint of the containing partition. In that + case, if there is some other partition in the partition tree for which this + row satisfies its partition constraint, then the row is moved to that + partition. If there is no such partition, an error will occur. Behind the + scenes, the row movement is actually a DELETE and + INSERT operation. However, there is a possibility that a + concurrent UPDATE or DELETE on the + same row may miss this row. For details see the section + . diff --git a/doc/src/sgml/trigger.sgml b/doc/src/sgml/trigger.sgml index bf5d3f9088..8f83e6a47c 100644 --- a/doc/src/sgml/trigger.sgml +++ b/doc/src/sgml/trigger.sgml @@ -153,6 +153,29 @@ triggers. + + If an UPDATE on a partitioned table causes a row to move + to another partition, it will be performed as a DELETE + from the original partition followed by an INSERT into + the new partition. In this case, all row-level BEFORE + UPDATE triggers and all row-level + BEFORE DELETE triggers are fired on + the original partition. Then all row-level BEFORE + INSERT triggers are fired on the destination partition. + The possibility of surprising outcomes should be considered when all these + triggers affect the row being moved. As far as AFTER ROW + triggers are concerned, AFTER DELETE + and AFTER INSERT triggers are + applied; but AFTER UPDATE triggers + are not applied because the UPDATE has been converted to + a DELETE and an INSERT. As far as + statement-level triggers are concerned, none of the + DELETE or INSERT triggers are fired, + even if row movement occurs; only the UPDATE triggers + defined on the target table used in the UPDATE statement + will be fired. + + Trigger functions invoked by per-statement triggers should always return NULL. Trigger functions invoked by per-row diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 6bfca2a4af..04a24c6082 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -170,7 +170,6 @@ typedef struct CopyStateData PartitionTupleRouting *partition_tuple_routing; TransitionCaptureState *transition_capture; - TupleConversionMap **transition_tupconv_maps; /* * These variables are used to reduce overhead in textual COPY FROM. @@ -2481,19 +2480,7 @@ CopyFrom(CopyState cstate) * tuple).
*/ if (cstate->transition_capture != NULL) - { - int i; - - cstate->transition_tupconv_maps = (TupleConversionMap **) - palloc0(sizeof(TupleConversionMap *) * proute->num_partitions); - for (i = 0; i < proute->num_partitions; ++i) - { - cstate->transition_tupconv_maps[i] = - convert_tuples_by_name(RelationGetDescr(proute->partitions[i]->ri_RelationDesc), - RelationGetDescr(cstate->rel), - gettext_noop("could not convert row type")); - } - } + ExecSetupChildParentMapForLeaf(proute); } /* @@ -2587,7 +2574,6 @@ CopyFrom(CopyState cstate) if (cstate->partition_tuple_routing) { int leaf_part_index; - TupleConversionMap *map; PartitionTupleRouting *proute = cstate->partition_tuple_routing; /* @@ -2651,7 +2637,8 @@ CopyFrom(CopyState cstate) */ cstate->transition_capture->tcs_original_insert_tuple = NULL; cstate->transition_capture->tcs_map = - cstate->transition_tupconv_maps[leaf_part_index]; + TupConvMapForLeaf(proute, saved_resultRelInfo, + leaf_part_index); } else { @@ -2668,23 +2655,10 @@ CopyFrom(CopyState cstate) * We might need to convert from the parent rowtype to the * partition rowtype. */ - map = proute->partition_tupconv_maps[leaf_part_index]; - if (map) - { - Relation partrel = resultRelInfo->ri_RelationDesc; - - tuple = do_convert_tuple(tuple, map); - - /* - * We must use the partition's tuple descriptor from this - * point on. Use a dedicated slot from this point on until - * we're finished dealing with the partition. - */ - slot = proute->partition_tuple_slot; - Assert(slot != NULL); - ExecSetSlotDescriptor(slot, RelationGetDescr(partrel)); - ExecStoreTuple(tuple, slot, InvalidBuffer, true); - } + tuple = ConvertPartitionTupleSlot(proute->parent_child_tupconv_maps[leaf_part_index], + tuple, + proute->partition_tuple_slot, + &slot); tuple->t_tableOid = RelationGetRelid(resultRelInfo->ri_RelationDesc); } diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index b45c30161f..160d941c00 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -2854,8 +2854,13 @@ ExecARUpdateTriggers(EState *estate, ResultRelInfo *relinfo, { HeapTuple trigtuple; - Assert(HeapTupleIsValid(fdw_trigtuple) ^ ItemPointerIsValid(tupleid)); - if (fdw_trigtuple == NULL) + /* + * Note: if the UPDATE is converted into a DELETE+INSERT as part of an + * update-partition-key operation, then this function is also called + * separately for DELETE and INSERT to capture transition table rows. + * In such a case, either the old tuple or the new tuple can be NULL. + */ + if (fdw_trigtuple == NULL && ItemPointerIsValid(tupleid)) trigtuple = GetTupleForTrigger(estate, NULL, relinfo, @@ -5414,7 +5419,12 @@ AfterTriggerPendingOnRel(Oid relid) * triggers actually need to be queued. It is also called after each row, * even if there are no triggers for that event, if there are any AFTER * STATEMENT triggers for the statement which use transition tables, so that - * the transition tuplestores can be built. + * the transition tuplestores can be built. Furthermore, if the transition + * capture is happening for UPDATEd rows being moved to another partition due + * to the partition-key being changed, then this function is called once when + * the row is deleted (to capture the OLD row), and once when the row is + * inserted into another partition (to capture the NEW row). This is done + * separately because DELETE and INSERT happen on different tables.
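+ * To make the calling pattern concrete (an illustrative restatement of the + * above): when an UPDATE moves a row from one partition to another, this + * function is called once on the source partition with only the old tuple + * (which feeds the OLD TABLE tuplestore) and once on the destination + * partition with only the new tuple (which feeds the NEW TABLE tuplestore).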
* * Transition tuplestores are built now, rather than when events are pulled * off of the queue because AFTER ROW triggers are allowed to select from the @@ -5463,12 +5473,25 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, bool update_new_table = transition_capture->tcs_update_new_table; bool insert_new_table = transition_capture->tcs_insert_new_table; - if ((event == TRIGGER_EVENT_DELETE && delete_old_table) || - (event == TRIGGER_EVENT_UPDATE && update_old_table)) + /* + * For INSERT events newtup should be non-NULL, for DELETE events + * oldtup should be non-NULL, whereas for UPDATE events normally both + * oldtup and newtup are non-NULL. But for UPDATE events fired for + * capturing transition tuples during UPDATE partition-key row + * movement, oldtup is NULL when the event is for a row being inserted, + * whereas newtup is NULL when the event is for a row being deleted. + */ + Assert(!(event == TRIGGER_EVENT_DELETE && delete_old_table && + oldtup == NULL)); + Assert(!(event == TRIGGER_EVENT_INSERT && insert_new_table && + newtup == NULL)); + + if (oldtup != NULL && + ((event == TRIGGER_EVENT_DELETE && delete_old_table) || + (event == TRIGGER_EVENT_UPDATE && update_old_table))) { Tuplestorestate *old_tuplestore; - Assert(oldtup != NULL); old_tuplestore = transition_capture->tcs_private->old_tuplestore; if (map != NULL) @@ -5481,12 +5504,12 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, else tuplestore_puttuple(old_tuplestore, oldtup); } - if ((event == TRIGGER_EVENT_INSERT && insert_new_table) || - (event == TRIGGER_EVENT_UPDATE && update_new_table)) + if (newtup != NULL && + ((event == TRIGGER_EVENT_INSERT && insert_new_table) || + (event == TRIGGER_EVENT_UPDATE && update_new_table))) { Tuplestorestate *new_tuplestore; - Assert(newtup != NULL); new_tuplestore = transition_capture->tcs_private->new_tuplestore; if (original_insert_tuple != NULL) @@ -5502,11 +5525,18 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo, tuplestore_puttuple(new_tuplestore, newtup); } - /* If transition tables are the only reason we're here, return. */ + /* + * If transition tables are the only reason we're here, return. As + * mentioned above, we can also be here during update tuple routing in + * the presence of transition tables, in which case this function is + * called separately for oldtup and newtup, so we expect exactly one of + * them to be NULL.
+ */ if (trigdesc == NULL || (event == TRIGGER_EVENT_DELETE && !trigdesc->trig_delete_after_row) || (event == TRIGGER_EVENT_INSERT && !trigdesc->trig_insert_after_row) || - (event == TRIGGER_EVENT_UPDATE && !trigdesc->trig_update_after_row)) + (event == TRIGGER_EVENT_UPDATE && !trigdesc->trig_update_after_row) || + (event == TRIGGER_EVENT_UPDATE && ((oldtup == NULL) ^ (newtup == NULL)))) return; } diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index 8c0d2df63c..89b7bb4c60 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -54,7 +54,11 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, List *leaf_parts; ListCell *cell; int i; - ResultRelInfo *leaf_part_rri; + ResultRelInfo *leaf_part_arr = NULL, + *update_rri = NULL; + int num_update_rri = 0, + update_rri_index = 0; + bool is_update = false; PartitionTupleRouting *proute; /* @@ -69,10 +73,38 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, proute->num_partitions = list_length(leaf_parts); proute->partitions = (ResultRelInfo **) palloc(proute->num_partitions * sizeof(ResultRelInfo *)); - proute->partition_tupconv_maps = + proute->parent_child_tupconv_maps = (TupleConversionMap **) palloc0(proute->num_partitions * sizeof(TupleConversionMap *)); + /* Set up details specific to the type of tuple routing we are doing. */ + if (mtstate && mtstate->operation == CMD_UPDATE) + { + ModifyTable *node = (ModifyTable *) mtstate->ps.plan; + + is_update = true; + update_rri = mtstate->resultRelInfo; + num_update_rri = list_length(node->plans); + proute->subplan_partition_offsets = + palloc(num_update_rri * sizeof(int)); + + /* + * We need an additional tuple slot for storing transient tuples that + * are converted to the root table descriptor. + */ + proute->root_tuple_slot = MakeTupleTableSlot(); + } + else + { + /* + * Since we are inserting tuples, we need to create all new result + * rels. Avoid repeated pallocs by allocating memory for all the + * result rels in bulk. + */ + leaf_part_arr = (ResultRelInfo *) palloc0(proute->num_partitions * + sizeof(ResultRelInfo)); + } + /* * Initialize an empty slot that will be used to manipulate tuples of any * given partition's rowtype. It is attached to the caller-specified node @@ -81,38 +113,86 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, */ proute->partition_tuple_slot = MakeTupleTableSlot(); - leaf_part_rri = (ResultRelInfo *) palloc0(proute->num_partitions * - sizeof(ResultRelInfo)); i = 0; foreach(cell, leaf_parts) { - Relation partrel; + ResultRelInfo *leaf_part_rri; + Relation partrel = NULL; TupleDesc part_tupdesc; + Oid leaf_oid = lfirst_oid(cell); + + if (is_update) + { + /* + * If the leaf partition is already present in the per-subplan + * result rels, we re-use that rather than initialize a new result + * rel. The per-subplan resultrels and the resultrels of the leaf + * partitions are both in the same canonical order. So while going + * through the leaf partition oids, we need to keep track of the + * next per-subplan result rel to be looked for in the leaf + * partition resultrels. + */ + if (update_rri_index < num_update_rri && + RelationGetRelid(update_rri[update_rri_index].ri_RelationDesc) == leaf_oid) + { + leaf_part_rri = &update_rri[update_rri_index]; + partrel = leaf_part_rri->ri_RelationDesc; + + /* + * This is required so that we can convert the partition's + * tuple to be compatible with the root partitioned table's + * tuple descriptor.
When generating the per-subplan result + * rels, this was not set. + */ + leaf_part_rri->ri_PartitionRoot = rel; + + /* Remember the subplan offset for this ResultRelInfo */ + proute->subplan_partition_offsets[update_rri_index] = i; + + update_rri_index++; + } + else + leaf_part_rri = (ResultRelInfo *) palloc0(sizeof(ResultRelInfo)); + } + else + { + /* For INSERTs, we already have an array of result rels allocated */ + leaf_part_rri = &leaf_part_arr[i]; + } /* - * We locked all the partitions above including the leaf partitions. - * Note that each of the relations in proute->partitions are - * eventually closed by the caller. + * If we didn't open the partition rel, it means we haven't + * initialized the result rel either. */ - partrel = heap_open(lfirst_oid(cell), NoLock); + if (!partrel) + { + /* + * We locked all the partitions above including the leaf + * partitions. Note that each of the newly opened relations in + * proute->partitions is eventually closed by the caller. + */ + partrel = heap_open(leaf_oid, NoLock); + InitResultRelInfo(leaf_part_rri, + partrel, + resultRTindex, + rel, + estate->es_instrument); + } + part_tupdesc = RelationGetDescr(partrel); /* * Save a tuple conversion map to convert a tuple routed to this * partition from the parent's type to the partition's. */ - proute->partition_tupconv_maps[i] = + proute->parent_child_tupconv_maps[i] = convert_tuples_by_name(tupDesc, part_tupdesc, gettext_noop("could not convert row type")); - InitResultRelInfo(leaf_part_rri, - partrel, - resultRTindex, - rel, - estate->es_instrument); - /* - * Verify result relation is a valid target for INSERT. + * Verify result relation is a valid target for an INSERT. An UPDATE + * of a partition-key becomes a DELETE+INSERT operation, so this check + * is still required when the operation is CMD_UPDATE. */ CheckValidResultRel(leaf_part_rri, CMD_INSERT); @@ -132,10 +212,16 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, estate->es_leaf_result_relations = lappend(estate->es_leaf_result_relations, leaf_part_rri); - proute->partitions[i] = leaf_part_rri++; + proute->partitions[i] = leaf_part_rri; i++; } + /* + * For UPDATE, we should have found all the per-subplan resultrels in the + * leaf partitions. + */ + Assert(!is_update || update_rri_index == num_update_rri); + return proute; } @@ -258,6 +344,101 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, return result; } +/* + * ExecSetupChildParentMapForLeaf -- Initialize the per-leaf-partition + * child-to-root tuple conversion map array. + * + * This map is required for capturing transition tuples when the target table + * is a partitioned table. For a tuple that is routed by an INSERT or UPDATE, + * we need to convert it from the leaf partition to the target table + * descriptor. + */ +void +ExecSetupChildParentMapForLeaf(PartitionTupleRouting *proute) +{ + Assert(proute != NULL); + + /* + * These array elements get filled up with maps on an on-demand basis. + * Initially just set all of them to NULL. + */ + proute->child_parent_tupconv_maps = + (TupleConversionMap **) palloc0(sizeof(TupleConversionMap *) * + proute->num_partitions); + + /* The same is the case for this array: all the values are set to false */ + proute->child_parent_map_not_required = + (bool *) palloc0(sizeof(bool) * proute->num_partitions); +} + +/* + * TupConvMapForLeaf -- Get the tuple conversion map for a given leaf partition + * index.
+ */ +TupleConversionMap * +TupConvMapForLeaf(PartitionTupleRouting *proute, + ResultRelInfo *rootRelInfo, int leaf_index) +{ + ResultRelInfo **resultRelInfos = proute->partitions; + TupleConversionMap **map; + TupleDesc tupdesc; + + /* Don't call this if we're not supposed to be using this type of map. */ + Assert(proute->child_parent_tupconv_maps != NULL); + + /* If it's already known that we don't need a map, return NULL. */ + if (proute->child_parent_map_not_required[leaf_index]) + return NULL; + + /* If we've already got a map, return it. */ + map = &proute->child_parent_tupconv_maps[leaf_index]; + if (*map != NULL) + return *map; + + /* No map yet; try to create one. */ + tupdesc = RelationGetDescr(resultRelInfos[leaf_index]->ri_RelationDesc); + *map = + convert_tuples_by_name(tupdesc, + RelationGetDescr(rootRelInfo->ri_RelationDesc), + gettext_noop("could not convert row type")); + + /* If it turns out no map is needed, remember for next time. */ + proute->child_parent_map_not_required[leaf_index] = (*map == NULL); + + return *map; +} + +/* + * ConvertPartitionTupleSlot -- convenience function for tuple conversion. + * The tuple, if converted, is stored in new_slot, and *p_my_slot is + * updated to point to it. new_slot typically should be one of the + * dedicated partition tuple slots. If map is NULL, *p_my_slot is not changed. + * + * Returns the converted tuple, unless map is NULL, in which case original + * tuple is returned unmodified. + */ +HeapTuple +ConvertPartitionTupleSlot(TupleConversionMap *map, + HeapTuple tuple, + TupleTableSlot *new_slot, + TupleTableSlot **p_my_slot) +{ + if (!map) + return tuple; + + tuple = do_convert_tuple(tuple, map); + + /* + * Change the partition tuple slot descriptor, as per converted tuple. + */ + *p_my_slot = new_slot; + Assert(new_slot != NULL); + ExecSetSlotDescriptor(new_slot, map->outdesc); + ExecStoreTuple(tuple, new_slot, InvalidBuffer, true); + + return tuple; +} + /* * ExecCleanupTupleRouting -- Clean up objects allocated for partition tuple * routing. @@ -265,9 +446,10 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, * Close all the partitioned tables, leaf partitions, and their indices. */ void -ExecCleanupTupleRouting(PartitionTupleRouting * proute) +ExecCleanupTupleRouting(PartitionTupleRouting *proute) { int i; + int subplan_index = 0; /* * Remember, proute->partition_dispatch_info[0] corresponds to the root @@ -288,11 +470,30 @@ ExecCleanupTupleRouting(PartitionTupleRouting * proute) { ResultRelInfo *resultRelInfo = proute->partitions[i]; + /* + * If this result rel is one of the UPDATE subplan result rels, let + * ExecEndPlan() close it. For INSERT or COPY, + * proute->subplan_partition_offsets will always be NULL. Note that + * the subplan_partition_offsets array and the partitions array have + * the partitions in the same order. So, while we iterate over + * partitions array, we also iterate over the + * subplan_partition_offsets array in order to figure out which of the + * result rels are present in the UPDATE subplans. 
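+ * For example (an illustrative case): if the leaf partitions are + * (p0, p1, p2, p3) and the UPDATE has subplans only for p1 and p3, then + * subplan_partition_offsets is {1, 3}, so while walking the partitions + * array we skip closing entries 1 and 3 and leave them for ExecEndPlan().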
+ */ + if (proute->subplan_partition_offsets && + proute->subplan_partition_offsets[subplan_index] == i) + { + subplan_index++; + continue; + } + ExecCloseIndices(resultRelInfo); heap_close(resultRelInfo->ri_RelationDesc, NoLock); } - /* Release the standalone partition tuple descriptor, if any */ + /* Release the standalone partition tuple descriptors, if any */ + if (proute->root_tuple_slot) + ExecDropSingleTupleTableSlot(proute->root_tuple_slot); if (proute->partition_tuple_slot) ExecDropSingleTupleTableSlot(proute->partition_tuple_slot); } diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index c5eca1bb74..6c2f8d4ec0 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -62,6 +62,11 @@ static bool ExecOnConflictUpdate(ModifyTableState *mtstate, EState *estate, bool canSetTag, TupleTableSlot **returning); +static ResultRelInfo *getTargetResultRelInfo(ModifyTableState *node); +static void ExecSetupChildParentMapForTcs(ModifyTableState *mtstate); +static void ExecSetupChildParentMapForSubplan(ModifyTableState *mtstate); +static TupleConversionMap *tupconv_map_for_subplan(ModifyTableState *node, + int whichplan); /* * Verify that the tuples to be produced by INSERT or UPDATE match the @@ -265,6 +270,7 @@ ExecInsert(ModifyTableState *mtstate, Oid newId; List *recheckIndexes = NIL; TupleTableSlot *result = NULL; + TransitionCaptureState *ar_insert_trig_tcs; /* * get the heap tuple out of the tuple table slot, making sure we have a @@ -282,7 +288,6 @@ ExecInsert(ModifyTableState *mtstate, { int leaf_part_index; PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing; - TupleConversionMap *map; /* * Away we go ... If we end up not finding a partition after all, @@ -331,8 +336,10 @@ ExecInsert(ModifyTableState *mtstate, * back to tuplestore format. */ mtstate->mt_transition_capture->tcs_original_insert_tuple = NULL; + mtstate->mt_transition_capture->tcs_map = - mtstate->mt_transition_tupconv_maps[leaf_part_index]; + TupConvMapForLeaf(proute, saved_resultRelInfo, + leaf_part_index); } else { @@ -345,30 +352,20 @@ ExecInsert(ModifyTableState *mtstate, } } if (mtstate->mt_oc_transition_capture != NULL) + { mtstate->mt_oc_transition_capture->tcs_map = - mtstate->mt_transition_tupconv_maps[leaf_part_index]; + TupConvMapForLeaf(proute, saved_resultRelInfo, + leaf_part_index); + } /* * We might need to convert from the parent rowtype to the partition * rowtype. */ - map = proute->partition_tupconv_maps[leaf_part_index]; - if (map) - { - Relation partrel = resultRelInfo->ri_RelationDesc; - - tuple = do_convert_tuple(tuple, map); - - /* - * We must use the partition's tuple descriptor from this point - * on, until we're finished dealing with the partition. Use the - * dedicated slot for that. - */ - slot = proute->partition_tuple_slot; - Assert(slot != NULL); - ExecSetSlotDescriptor(slot, RelationGetDescr(partrel)); - ExecStoreTuple(tuple, slot, InvalidBuffer, true); - } + tuple = ConvertPartitionTupleSlot(proute->parent_child_tupconv_maps[leaf_part_index], + tuple, + proute->partition_tuple_slot, + &slot); } resultRelationDesc = resultRelInfo->ri_RelationDesc; @@ -449,6 +446,8 @@ ExecInsert(ModifyTableState *mtstate, } else { + WCOKind wco_kind; + /* * We always check the partition constraint, including when the tuple * got here via tuple-routing. 
However we don't need to in the latter @@ -466,14 +465,23 @@ tuple->t_tableOid = RelationGetRelid(resultRelationDesc); /* - * Check any RLS INSERT WITH CHECK policies + * Check any RLS WITH CHECK policies. * + * Normally we should check INSERT policies. But if the insert is the + * result of a partition key update that moved the tuple to a new + * partition, we should instead check UPDATE policies, because we are + * executing policies defined on the target table, and not those + * defined on the child partitions. + */ + wco_kind = (mtstate->operation == CMD_UPDATE) ? + WCO_RLS_UPDATE_CHECK : WCO_RLS_INSERT_CHECK; + + /* * ExecWithCheckOptions() will skip any WCOs which are not of the kind * we are looking for at this point. */ if (resultRelInfo->ri_WithCheckOptions != NIL) - ExecWithCheckOptions(WCO_RLS_INSERT_CHECK, - resultRelInfo, slot, estate); + ExecWithCheckOptions(wco_kind, resultRelInfo, slot, estate); /* * No need though if the tuple has been routed, and a BR trigger @@ -622,9 +630,32 @@ ExecInsert(ModifyTableState *mtstate, setLastTid(&(tuple->t_self)); } + /* + * If this insert is the result of a partition key update that moved the + * tuple to a new partition, put this row into the transition NEW TABLE, + * if there is one. We need to do this separately for DELETE and INSERT + * because they happen on different tables. + */ + ar_insert_trig_tcs = mtstate->mt_transition_capture; + if (mtstate->operation == CMD_UPDATE && mtstate->mt_transition_capture + && mtstate->mt_transition_capture->tcs_update_new_table) + { + ExecARUpdateTriggers(estate, resultRelInfo, NULL, + NULL, + tuple, + NULL, + mtstate->mt_transition_capture); + + /* + * We've already captured the NEW TABLE row, so make sure any AR + * INSERT trigger fired below doesn't capture it again. + */ + ar_insert_trig_tcs = NULL; + } + /* AFTER ROW INSERT Triggers */ ExecARInsertTriggers(estate, resultRelInfo, tuple, recheckIndexes, - mtstate->mt_transition_capture); + ar_insert_trig_tcs); list_free(recheckIndexes); @@ -678,6 +709,8 @@ ExecDelete(ModifyTableState *mtstate, TupleTableSlot *planSlot, EPQState *epqstate, EState *estate, + bool *tupleDeleted, + bool processReturning, bool canSetTag) { ResultRelInfo *resultRelInfo; @@ -685,6 +718,10 @@ ExecDelete(ModifyTableState *mtstate, HTSU_Result result; HeapUpdateFailureData hufd; TupleTableSlot *slot = NULL; + TransitionCaptureState *ar_delete_trig_tcs; + + if (tupleDeleted) + *tupleDeleted = false; /* * get information on the (current) result relation @@ -849,12 +886,40 @@ ldelete:; if (canSetTag) (estate->es_processed)++; + /* Tell caller that the delete actually happened. */ + if (tupleDeleted) + *tupleDeleted = true; + + /* + * If this delete is the result of a partition key update that moved the + * tuple to a new partition, put this row into the transition OLD TABLE, + * if there is one. We need to do this separately for DELETE and INSERT + * because they happen on different tables. + */ + ar_delete_trig_tcs = mtstate->mt_transition_capture; + if (mtstate->operation == CMD_UPDATE && mtstate->mt_transition_capture + && mtstate->mt_transition_capture->tcs_update_old_table) + { + ExecARUpdateTriggers(estate, resultRelInfo, + tupleid, + oldtuple, + NULL, + NULL, + mtstate->mt_transition_capture); + + /* + * We've already captured the OLD TABLE row, so make sure any AR + * DELETE trigger fired below doesn't capture it again.
+ */ ar_delete_trig_tcs = NULL; + } + /* AFTER ROW DELETE Triggers */ ExecARDeleteTriggers(estate, resultRelInfo, tupleid, oldtuple, - mtstate->mt_transition_capture); + ar_delete_trig_tcs); - /* Process RETURNING if present */ - if (resultRelInfo->ri_projectReturning) + /* Process RETURNING if present and if requested */ + if (processReturning && resultRelInfo->ri_projectReturning) { /* * We have to put the target tuple into a slot, which means first we @@ -947,6 +1012,7 @@ ExecUpdate(ModifyTableState *mtstate, HTSU_Result result; HeapUpdateFailureData hufd; List *recheckIndexes = NIL; + TupleConversionMap *saved_tcs_map = NULL; /* * abort the operation if not running transactions @@ -1018,6 +1084,7 @@ ExecUpdate(ModifyTableState *mtstate, else { LockTupleMode lockmode; + bool partition_constraint_failed; /* * Constraints might reference the tableoid column, so initialize @@ -1033,22 +1100,142 @@ ExecUpdate(ModifyTableState *mtstate, * (We don't need to redo triggers, however. If there are any BEFORE * triggers then trigger.c will have done heap_lock_tuple to lock the * correct tuple, so there's no need to do them again.) - * - * ExecWithCheckOptions() will skip any WCOs which are not of the kind - * we are looking for at this point. */ lreplace:; - if (resultRelInfo->ri_WithCheckOptions != NIL) + + /* + * If the partition constraint fails, this row might get moved to another + * partition, in which case we should check the RLS CHECK policy just + * before inserting into the new partition, rather than doing it here. + * This is because a trigger on that partition might again change the + * row. So skip the WCO checks if the partition constraint fails. + */ + partition_constraint_failed = + resultRelInfo->ri_PartitionCheck && + !ExecPartitionCheck(resultRelInfo, slot, estate); + + if (!partition_constraint_failed && + resultRelInfo->ri_WithCheckOptions != NIL) + { + /* + * ExecWithCheckOptions() will skip any WCOs which are not of the + * kind we are looking for at this point. + */ ExecWithCheckOptions(WCO_RLS_UPDATE_CHECK, resultRelInfo, slot, estate); + } + + /* + * If a partition check failed, try to move the row into the right + * partition. + */ + if (partition_constraint_failed) + { + bool tuple_deleted; + TupleTableSlot *ret_slot; + PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing; + int map_index; + TupleConversionMap *tupconv_map; + + /* + * When an UPDATE is run on a leaf partition, we will not have + * partition tuple routing set up. In that case, fail with a + * partition constraint violation error. + */ + if (proute == NULL) + ExecPartitionCheckEmitError(resultRelInfo, slot, estate); + + /* + * Row movement, part 1. Delete the tuple, but skip RETURNING + * processing. We want to return rows from INSERT. + */ + ExecDelete(mtstate, tupleid, oldtuple, planSlot, epqstate, estate, + &tuple_deleted, false, false); + + /* + * If for some reason the DELETE didn't happen (e.g., a trigger + * prevented it, or it was already deleted by self, or it was + * concurrently deleted by another transaction), then we should + * skip the insert as well; otherwise, an UPDATE could cause an + * increase in the total number of rows across all partitions, + * which is clearly + * wrong.
+ * + * For a normal UPDATE, the case where the tuple has been the + * subject of a concurrent UPDATE or DELETE would be handled by + * the EvalPlanQual machinery, but for an UPDATE that we've + * translated into a DELETE from this partition and an INSERT into + * some other partition, that's not available, because CTID chains + * can't span relation boundaries. We mimic the semantics to a + * limited extent by skipping the INSERT if the DELETE fails to + * find a tuple. This ensures that two concurrent attempts to + * UPDATE the same tuple at the same time can't turn one tuple + * into two, and that an UPDATE of a just-deleted tuple can't + * resurrect it. + */ + if (!tuple_deleted) + return NULL; + + /* + * Updates set the transition capture map only when a new subplan + * is chosen. But for inserts, it is set for each row. So after + * INSERT, we need to revert back to the map created for UPDATE; + * otherwise the next UPDATE will incorrectly use the one created + * for INSERT. So first save the one created for UPDATE. + */ + if (mtstate->mt_transition_capture) + saved_tcs_map = mtstate->mt_transition_capture->tcs_map; + + /* + * resultRelInfo is one of the per-subplan resultRelInfos. So we + * should convert the tuple into root's tuple descriptor, since + * ExecInsert() starts the search from root. The tuple conversion + * map list is in the order of mtstate->resultRelInfo[], so to + * retrieve the one for this resultRel, we need to know the + * position of the resultRel in mtstate->resultRelInfo[]. + */ + map_index = resultRelInfo - mtstate->resultRelInfo; + Assert(map_index >= 0 && map_index < mtstate->mt_nplans); + tupconv_map = tupconv_map_for_subplan(mtstate, map_index); + tuple = ConvertPartitionTupleSlot(tupconv_map, + tuple, + proute->root_tuple_slot, + &slot); + + + /* + * For ExecInsert(), make it look like we are inserting into the + * root. + */ + Assert(mtstate->rootResultRelInfo != NULL); + estate->es_result_relation_info = mtstate->rootResultRelInfo; + + ret_slot = ExecInsert(mtstate, slot, planSlot, NULL, + ONCONFLICT_NONE, estate, canSetTag); + + /* + * Revert back the active result relation and the active + * transition capture map that we changed above. + */ + estate->es_result_relation_info = resultRelInfo; + if (mtstate->mt_transition_capture) + { + mtstate->mt_transition_capture->tcs_original_insert_tuple = NULL; + mtstate->mt_transition_capture->tcs_map = saved_tcs_map; + } + return ret_slot; + } /* * Check the constraints of the tuple. Note that we pass the same * slot for the orig_slot argument, because unlike ExecInsert(), no * tuple-routing is performed here, hence the slot remains unchanged. + * We've already checked the partition constraint above; however, we + * must still ensure the tuple passes all other constraints, so we + * will call ExecConstraints() and have it validate all remaining + * checks. */ - if (resultRelationDesc->rd_att->constr || resultRelInfo->ri_PartitionCheck) - ExecConstraints(resultRelInfo, slot, estate, true); + if (resultRelationDesc->rd_att->constr) + ExecConstraints(resultRelInfo, slot, estate, false); /* * replace the heap tuple @@ -1418,17 +1605,20 @@ fireBSTriggers(ModifyTableState *node) } /* - * Return the ResultRelInfo for which we will fire AFTER STATEMENT triggers. - * This is also the relation into whose tuple format all captured transition - * tuples must be converted. + * Return the target rel ResultRelInfo. + * + * This relation is the same as : + * - the relation for which we will fire AFTER STATEMENT triggers. 
+ * - the relation into whose tuple format all captured transition tuples must + * be converted. + * - the root partitioned table. */ static ResultRelInfo * -getASTriggerResultRelInfo(ModifyTableState *node) +getTargetResultRelInfo(ModifyTableState *node) { /* - * If the node modifies a partitioned table, we must fire its triggers. - * Note that in that case, node->resultRelInfo points to the first leaf - * partition, not the root table. + * Note that if the node modifies a partitioned table, node->resultRelInfo + * points to the first leaf partition, not the root table. */ if (node->rootResultRelInfo != NULL) return node->rootResultRelInfo; @@ -1442,7 +1632,7 @@ getASTriggerResultRelInfo(ModifyTableState *node) static void fireASTriggers(ModifyTableState *node) { - ResultRelInfo *resultRelInfo = getASTriggerResultRelInfo(node); + ResultRelInfo *resultRelInfo = getTargetResultRelInfo(node); switch (node->operation) { @@ -1475,8 +1665,7 @@ fireASTriggers(ModifyTableState *node) static void ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) { - ResultRelInfo *targetRelInfo = getASTriggerResultRelInfo(mtstate); - int i; + ResultRelInfo *targetRelInfo = getTargetResultRelInfo(mtstate); /* Check for transition tables on the directly targeted relation. */ mtstate->mt_transition_capture = @@ -1499,62 +1688,141 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) if (mtstate->mt_transition_capture != NULL || mtstate->mt_oc_transition_capture != NULL) { - int numResultRelInfos; - PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing; - - numResultRelInfos = (proute != NULL ? - proute->num_partitions : - mtstate->mt_nplans); + ExecSetupChildParentMapForTcs(mtstate); /* - * Build array of conversion maps from each child's TupleDesc to the - * one used in the tuplestore. The map pointers may be NULL when no - * conversion is necessary, which is hopefully a common case for - * partitions. + * Install the conversion map for the first plan for UPDATE and DELETE + * operations. It will be advanced each time we switch to the next + * plan. (INSERT operations set it every time, so we need not update + * mtstate->mt_oc_transition_capture here.) */ - mtstate->mt_transition_tupconv_maps = (TupleConversionMap **) - palloc0(sizeof(TupleConversionMap *) * numResultRelInfos); + if (mtstate->mt_transition_capture && mtstate->operation != CMD_INSERT) + mtstate->mt_transition_capture->tcs_map = + tupconv_map_for_subplan(mtstate, 0); + } +} - /* Choose the right set of partitions */ - if (proute != NULL) - { - /* - * For tuple routing among partitions, we need TupleDescs based on - * the partition routing table. - */ - ResultRelInfo **resultRelInfos = proute->partitions; +/* + * Initialize the child-to-root tuple conversion map array for UPDATE subplans. + * + * This map array is required to convert the tuple from the subplan result rel + * to the target table descriptor. This requirement arises for two independent + * scenarios: + * 1. For update-tuple-routing. + * 2. For capturing tuples in transition tables. 
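Both scenarios reduce to the same need: a subplan's result relation can have a tuple descriptor that differs physically from the target table's, so its tuples must be converted before they can be routed or captured. A small sketch of the situation these maps exist for (hypothetical names, assuming a server built with this patch series) is a partition attached with a different physical column order:

    CREATE TABLE cv_root (a text, b int) PARTITION BY LIST (a);
    CREATE TABLE cv_p1 PARTITION OF cv_root FOR VALUES IN ('x');
    CREATE TABLE cv_p2 (b int, a text);  -- column order differs from the root
    ALTER TABLE cv_root ATTACH PARTITION cv_p2 FOR VALUES IN ('y');
    INSERT INTO cv_root VALUES ('y', 1);
    UPDATE cv_root SET a = 'x' WHERE b = 1;     -- row moves from cv_p2 to cv_p1
    SELECT tableoid::regclass, * FROM cv_root;  -- shows cv_p1
    DROP TABLE cv_root;

A tuple leaving cv_p2 must be converted to the root's (a, b) layout before ExecInsert() can route it; for partitions whose layout already matches, the map pointer is simply NULL.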
+ */ +void +ExecSetupChildParentMapForSubplan(ModifyTableState *mtstate) +{ + ResultRelInfo *targetRelInfo = getTargetResultRelInfo(mtstate); + ResultRelInfo *resultRelInfos = mtstate->resultRelInfo; + TupleDesc outdesc; + int numResultRelInfos = mtstate->mt_nplans; + int i; - for (i = 0; i < numResultRelInfos; ++i) - { - mtstate->mt_transition_tupconv_maps[i] = - convert_tuples_by_name(RelationGetDescr(resultRelInfos[i]->ri_RelationDesc), - RelationGetDescr(targetRelInfo->ri_RelationDesc), - gettext_noop("could not convert row type")); - } - } - else - { - /* Otherwise we need the ResultRelInfo for each subplan. */ - ResultRelInfo *resultRelInfos = mtstate->resultRelInfo; + /* + * First check if there is already a per-subplan array allocated. Even if + * there is already a per-leaf map array, we won't require a per-subplan + * one, since we will use the subplan offset array to convert the subplan + * index to per-leaf index. + */ + if (mtstate->mt_per_subplan_tupconv_maps || + (mtstate->mt_partition_tuple_routing && + mtstate->mt_partition_tuple_routing->child_parent_tupconv_maps)) + return; - for (i = 0; i < numResultRelInfos; ++i) - { - mtstate->mt_transition_tupconv_maps[i] = - convert_tuples_by_name(RelationGetDescr(resultRelInfos[i].ri_RelationDesc), - RelationGetDescr(targetRelInfo->ri_RelationDesc), - gettext_noop("could not convert row type")); - } - } + /* + * Build array of conversion maps from each child's TupleDesc to the one + * used in the target relation. The map pointers may be NULL when no + * conversion is necessary, which is hopefully a common case. + */ + /* Get tuple descriptor of the target rel. */ + outdesc = RelationGetDescr(targetRelInfo->ri_RelationDesc); + + mtstate->mt_per_subplan_tupconv_maps = (TupleConversionMap **) + palloc(sizeof(TupleConversionMap *) * numResultRelInfos); + + for (i = 0; i < numResultRelInfos; ++i) + { + mtstate->mt_per_subplan_tupconv_maps[i] = + convert_tuples_by_name(RelationGetDescr(resultRelInfos[i].ri_RelationDesc), + outdesc, + gettext_noop("could not convert row type")); + } +} + +/* + * Initialize the child-to-root tuple conversion map array required for + * capturing transition tuples. + * + * The map array can be indexed either by subplan index or by leaf-partition + * index. For transition tables, we need a subplan-indexed access to the map, + * and where tuple-routing is present, we also require a leaf-indexed access. + */ +static void +ExecSetupChildParentMapForTcs(ModifyTableState *mtstate) +{ + PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing; + + /* + * If partition tuple routing is set up, we will require partition-indexed + * access. In that case, create the map array indexed by partition; we + * will still be able to access the maps using a subplan index by + * converting the subplan index to a partition index using + * subplan_partition_offsets. If tuple routing is not set up, it means we + * don't require partition-indexed access. In that case, create just a + * subplan-indexed map. + */ + if (proute) + { /* - * Install the conversion map for the first plan for UPDATE and DELETE - * operations. It will be advanced each time we switch to the next - * plan. (INSERT operations set it every time, so we need not update - * mtstate->mt_oc_transition_capture here.) + * If a partition-indexed map array is to be created, the subplan map + * array has to be NULL. If the subplan map array is already created, + * we won't be able to access the map using a partition index. 
*/ - if (mtstate->mt_transition_capture) - mtstate->mt_transition_capture->tcs_map = - mtstate->mt_transition_tupconv_maps[0]; + Assert(mtstate->mt_per_subplan_tupconv_maps == NULL); + + ExecSetupChildParentMapForLeaf(proute); + } + else + ExecSetupChildParentMapForSubplan(mtstate); +} + +/* + * For a given subplan index, get the tuple conversion map. + */ +static TupleConversionMap * +tupconv_map_for_subplan(ModifyTableState *mtstate, int whichplan) +{ + /* + * If a partition-index tuple conversion map array is allocated, we need + * to first get the index into the partition array. Exactly *one* of the + * two arrays is allocated. This is because if there is a partition array + * required, we don't require subplan-indexed array since we can translate + * subplan index into partition index. And, we create a subplan-indexed + * array *only* if partition-indexed array is not required. + */ + if (mtstate->mt_per_subplan_tupconv_maps == NULL) + { + int leaf_index; + PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing; + + /* + * If subplan-indexed array is NULL, things should have been arranged + * to convert the subplan index to partition index. + */ + Assert(proute && proute->subplan_partition_offsets != NULL); + + leaf_index = proute->subplan_partition_offsets[whichplan]; + + return TupConvMapForLeaf(proute, getTargetResultRelInfo(mtstate), + leaf_index); + } + else + { + Assert(whichplan >= 0 && whichplan < mtstate->mt_nplans); + return mtstate->mt_per_subplan_tupconv_maps[whichplan]; } } @@ -1661,15 +1929,13 @@ ExecModifyTable(PlanState *pstate) /* Prepare to convert transition tuples from this child. */ if (node->mt_transition_capture != NULL) { - Assert(node->mt_transition_tupconv_maps != NULL); node->mt_transition_capture->tcs_map = - node->mt_transition_tupconv_maps[node->mt_whichplan]; + tupconv_map_for_subplan(node, node->mt_whichplan); } if (node->mt_oc_transition_capture != NULL) { - Assert(node->mt_transition_tupconv_maps != NULL); node->mt_oc_transition_capture->tcs_map = - node->mt_transition_tupconv_maps[node->mt_whichplan]; + tupconv_map_for_subplan(node, node->mt_whichplan); } continue; } @@ -1786,7 +2052,8 @@ ExecModifyTable(PlanState *pstate) break; case CMD_DELETE: slot = ExecDelete(node, tupleid, oldtuple, planSlot, - &node->mt_epqstate, estate, node->canSetTag); + &node->mt_epqstate, estate, + NULL, true, node->canSetTag); break; default: elog(ERROR, "unknown operation"); @@ -1830,9 +2097,12 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) ResultRelInfo *saved_resultRelInfo; ResultRelInfo *resultRelInfo; Plan *subplan; + int firstVarno = 0; + Relation firstResultRel = NULL; ListCell *l; int i; Relation rel; + bool update_tuple_routing_needed = node->partColsUpdated; PartitionTupleRouting *proute = NULL; int num_partitions = 0; @@ -1907,6 +2177,16 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) resultRelInfo->ri_IndexRelationDescs == NULL) ExecOpenIndices(resultRelInfo, mtstate->mt_onconflict != ONCONFLICT_NONE); + /* + * If this is an UPDATE and a BEFORE UPDATE trigger is present, the + * trigger itself might modify the partition-key values. So arrange + * for tuple routing. 
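The arrangement described above matters because a BEFORE ROW UPDATE trigger can turn a key-preserving UPDATE into a row-moving one. A sketch with hypothetical names, assuming a server built with this patch series:

    CREATE TABLE tg_root (k int, v int) PARTITION BY RANGE (k);
    CREATE TABLE tg_p1 PARTITION OF tg_root FOR VALUES FROM (0) TO (10);
    CREATE TABLE tg_p2 PARTITION OF tg_root FOR VALUES FROM (10) TO (20);
    CREATE FUNCTION bump_k() RETURNS trigger LANGUAGE plpgsql AS
    $$ BEGIN NEW.k := NEW.k + 10; RETURN NEW; END $$;
    CREATE TRIGGER tg_bump BEFORE UPDATE ON tg_p1
      FOR EACH ROW EXECUTE PROCEDURE bump_k();
    INSERT INTO tg_root VALUES (1, 0);
    -- The statement itself never assigns to k, but the trigger does, so
    -- routing must have been set up in advance.
    UPDATE tg_root SET v = 1 WHERE k = 1;
    SELECT tableoid::regclass, * FROM tg_root;  -- row ends up in tg_p2
    DROP TABLE tg_root;
    DROP FUNCTION bump_k();

The parted_mod_b regression test added later in this patch exercises the same scenario.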
+ */ + if (resultRelInfo->ri_TrigDesc && + resultRelInfo->ri_TrigDesc->trig_update_before_row && + operation == CMD_UPDATE) + update_tuple_routing_needed = true; + /* Now init the plan for this result rel */ estate->es_result_relation_info = resultRelInfo; mtstate->mt_plans[i] = ExecInitNode(subplan, estate, eflags); @@ -1931,16 +2211,35 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) estate->es_result_relation_info = saved_resultRelInfo; - /* Build state for INSERT tuple routing */ - rel = mtstate->resultRelInfo->ri_RelationDesc; - if (operation == CMD_INSERT && - rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + /* Get the target relation */ + rel = (getTargetResultRelInfo(mtstate))->ri_RelationDesc; + + /* + * If it's not a partitioned table after all, UPDATE tuple routing should + * not be attempted. + */ + if (rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE) + update_tuple_routing_needed = false; + + /* + * Build state for tuple routing if it's an INSERT or if it's an UPDATE of + * partition key. + */ + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE && + (operation == CMD_INSERT || update_tuple_routing_needed)) { proute = mtstate->mt_partition_tuple_routing = ExecSetupPartitionTupleRouting(mtstate, rel, node->nominalRelation, estate); num_partitions = proute->num_partitions; + + /* + * Below are required as reference objects for mapping partition + * attno's in expressions such as WithCheckOptions and RETURNING. + */ + firstVarno = mtstate->resultRelInfo[0].ri_RangeTableIndex; + firstResultRel = mtstate->resultRelInfo[0].ri_RelationDesc; } /* @@ -1950,6 +2249,17 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) if (!(eflags & EXEC_FLAG_EXPLAIN_ONLY)) ExecSetupTransitionCaptureState(mtstate, estate); + /* + * Construct mapping from each of the per-subplan partition attnos to the + * root attno. This is required when during update row movement the tuple + * descriptor of a source partition does not match the root partitioned + * table descriptor. In such a case we need to convert tuples to the root + * tuple descriptor, because the search for destination partition starts + * from the root. Skip this setup if it's not a partition key update. + */ + if (update_tuple_routing_needed) + ExecSetupChildParentMapForSubplan(mtstate); + /* * Initialize any WITH CHECK OPTION constraints if needed. */ @@ -1980,26 +2290,29 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) * Build WITH CHECK OPTION constraints for each leaf partition rel. Note * that we didn't build the withCheckOptionList for each partition within * the planner, but simple translation of the varattnos for each partition - * will suffice. This only occurs for the INSERT case; UPDATE/DELETE - * cases are handled above. + * will suffice. This only occurs for the INSERT case or for UPDATE row + * movement. DELETEs and local UPDATEs are handled above. */ if (node->withCheckOptionLists != NIL && num_partitions > 0) { - List *wcoList; - PlanState *plan; + List *first_wcoList; /* * In case of INSERT on partitioned tables, there is only one plan. * Likewise, there is only one WITH CHECK OPTIONS list, not one per - * partition. We make a copy of the WCO qual for each partition; note - * that, if there are SubPlans in there, they all end up attached to - * the one parent Plan node. + * partition. Whereas for UPDATE, there are as many WCOs as there are + * plans. 
So in either case, use the WCO expression of the first + * resultRelInfo as a reference to calculate attno's for the WCO + * expression of each of the partitions. We make a copy of the WCO + * qual for each partition. Note that, if there are SubPlans in there, + * they all end up attached to the one parent Plan node. */ - Assert(operation == CMD_INSERT && - list_length(node->withCheckOptionLists) == 1 && - mtstate->mt_nplans == 1); - wcoList = linitial(node->withCheckOptionLists); - plan = mtstate->mt_plans[0]; + Assert(update_tuple_routing_needed || + (operation == CMD_INSERT && + list_length(node->withCheckOptionLists) == 1 && + mtstate->mt_nplans == 1)); + + first_wcoList = linitial(node->withCheckOptionLists); for (i = 0; i < num_partitions; i++) { Relation partrel; @@ -2008,17 +2321,26 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) ListCell *ll; resultRelInfo = proute->partitions[i]; + + /* + * If we are referring to a resultRelInfo from one of the update + * result rels, that result rel would already have + * WithCheckOptions initialized. + */ + if (resultRelInfo->ri_WithCheckOptions) + continue; + partrel = resultRelInfo->ri_RelationDesc; - /* varno = node->nominalRelation */ - mapped_wcoList = map_partition_varattnos(wcoList, - node->nominalRelation, - partrel, rel, NULL); + mapped_wcoList = map_partition_varattnos(first_wcoList, + firstVarno, + partrel, firstResultRel, + NULL); foreach(ll, mapped_wcoList) { WithCheckOption *wco = castNode(WithCheckOption, lfirst(ll)); ExprState *wcoExpr = ExecInitQual(castNode(List, wco->qual), - plan); + &mtstate->ps); wcoExprs = lappend(wcoExprs, wcoExpr); } @@ -2035,7 +2357,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) { TupleTableSlot *slot; ExprContext *econtext; - List *returningList; + List *firstReturningList; /* * Initialize result tuple slot and assign its rowtype using the first @@ -2071,22 +2393,35 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) * Build a projection for each leaf partition rel. Note that we * didn't build the returningList for each partition within the * planner, but simple translation of the varattnos for each partition - * will suffice. This only occurs for the INSERT case; UPDATE/DELETE - * are handled above. + * will suffice. This only occurs for the INSERT case or for UPDATE + * row movement. DELETEs and local UPDATEs are handled above. */ - returningList = linitial(node->returningLists); + firstReturningList = linitial(node->returningLists); for (i = 0; i < num_partitions; i++) { Relation partrel; List *rlist; resultRelInfo = proute->partitions[i]; + + /* + * If we are referring to a resultRelInfo from one of the update + * result rels, that result rel would already have a returningList + * built. + */ + if (resultRelInfo->ri_projectReturning) + continue; + partrel = resultRelInfo->ri_RelationDesc; - /* varno = node->nominalRelation */ - rlist = map_partition_varattnos(returningList, - node->nominalRelation, - partrel, rel, NULL); + /* + * Use the returning expression of the first resultRelInfo as a + * reference to calculate attno's for the returning expression of + * each of the partitions. 
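Because the planner builds the RETURNING list (and likewise each WCO qual) only for the first subplan, every leaf partition whose attribute numbers differ needs its own translated copy at executor startup. A sketch of why, with hypothetical names and assuming a server built with this patch series:

    CREATE TABLE rt_root (a text, b int) PARTITION BY LIST (a);
    CREATE TABLE rt_p1 PARTITION OF rt_root FOR VALUES IN ('p1');
    CREATE TABLE rt_p2 (b int, a text);  -- attribute numbers differ
    ALTER TABLE rt_root ATTACH PARTITION rt_p2 FOR VALUES IN ('p2');
    INSERT INTO rt_root VALUES ('p1', 42);
    UPDATE rt_root SET a = 'p2' WHERE b = 42
      RETURNING tableoid::regclass, a, b;  -- reports the row INSERTed into rt_p2
    DROP TABLE rt_root;

Without the map_partition_varattnos() translation, the projection built for the first result relation would read rt_p2's columns at the wrong attribute positions.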
+ */ + rlist = map_partition_varattnos(firstReturningList, + firstVarno, + partrel, firstResultRel, NULL); resultRelInfo->ri_projectReturning = ExecBuildProjectionInfo(rlist, econtext, slot, &mtstate->ps, resultRelInfo->ri_RelationDesc->rd_att); diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 65d8c77d7a..e5d2de5330 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -204,6 +204,7 @@ _copyModifyTable(const ModifyTable *from) COPY_SCALAR_FIELD(canSetTag); COPY_SCALAR_FIELD(nominalRelation); COPY_NODE_FIELD(partitioned_rels); + COPY_SCALAR_FIELD(partColsUpdated); COPY_NODE_FIELD(resultRelations); COPY_SCALAR_FIELD(resultRelIndex); COPY_SCALAR_FIELD(rootResultRelIndex); @@ -2263,6 +2264,7 @@ _copyPartitionedChildRelInfo(const PartitionedChildRelInfo *from) COPY_SCALAR_FIELD(parent_relid); COPY_NODE_FIELD(child_rels); + COPY_SCALAR_FIELD(part_cols_updated); return newnode; } diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 0bd12e862e..785dc54d37 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -908,6 +908,7 @@ _equalPartitionedChildRelInfo(const PartitionedChildRelInfo *a, const Partitione { COMPARE_SCALAR_FIELD(parent_relid); COMPARE_NODE_FIELD(child_rels); + COMPARE_SCALAR_FIELD(part_cols_updated); return true; } diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index b1cdfc36a6..e0f4befd9f 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -372,6 +372,7 @@ _outModifyTable(StringInfo str, const ModifyTable *node) WRITE_BOOL_FIELD(canSetTag); WRITE_UINT_FIELD(nominalRelation); WRITE_NODE_FIELD(partitioned_rels); + WRITE_BOOL_FIELD(partColsUpdated); WRITE_NODE_FIELD(resultRelations); WRITE_INT_FIELD(resultRelIndex); WRITE_INT_FIELD(rootResultRelIndex); @@ -2105,6 +2106,7 @@ _outModifyTablePath(StringInfo str, const ModifyTablePath *node) WRITE_BOOL_FIELD(canSetTag); WRITE_UINT_FIELD(nominalRelation); WRITE_NODE_FIELD(partitioned_rels); + WRITE_BOOL_FIELD(partColsUpdated); WRITE_NODE_FIELD(resultRelations); WRITE_NODE_FIELD(subpaths); WRITE_NODE_FIELD(subroots); @@ -2527,6 +2529,7 @@ _outPartitionedChildRelInfo(StringInfo str, const PartitionedChildRelInfo *node) WRITE_UINT_FIELD(parent_relid); WRITE_NODE_FIELD(child_rels); + WRITE_BOOL_FIELD(part_cols_updated); } static void diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 9925866b53..22d8b9d0d5 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -1568,6 +1568,7 @@ _readModifyTable(void) READ_BOOL_FIELD(canSetTag); READ_UINT_FIELD(nominalRelation); READ_NODE_FIELD(partitioned_rels); + READ_BOOL_FIELD(partColsUpdated); READ_NODE_FIELD(resultRelations); READ_INT_FIELD(resultRelIndex); READ_INT_FIELD(rootResultRelIndex); diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index c5304b712e..fd1a58336b 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -1364,7 +1364,7 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, case RTE_RELATION: if (rte->relkind == RELKIND_PARTITIONED_TABLE) partitioned_rels = - get_partitioned_child_rels(root, rel->relid); + get_partitioned_child_rels(root, rel->relid, NULL); break; case RTE_SUBQUERY: build_partitioned_rels = true; @@ -1403,7 +1403,7 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel, { List *cprels; - cprels = get_partitioned_child_rels(root, 
childrel->relid); + cprels = get_partitioned_child_rels(root, childrel->relid, NULL); partitioned_rels = list_concat(partitioned_rels, list_copy(cprels)); } diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index e599283d6b..86e7e74793 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -279,6 +279,7 @@ static ProjectSet *make_project_set(List *tlist, Plan *subplan); static ModifyTable *make_modifytable(PlannerInfo *root, CmdType operation, bool canSetTag, Index nominalRelation, List *partitioned_rels, + bool partColsUpdated, List *resultRelations, List *subplans, List *withCheckOptionLists, List *returningLists, List *rowMarks, OnConflictExpr *onconflict, int epqParam); @@ -2373,6 +2374,7 @@ create_modifytable_plan(PlannerInfo *root, ModifyTablePath *best_path) best_path->canSetTag, best_path->nominalRelation, best_path->partitioned_rels, + best_path->partColsUpdated, best_path->resultRelations, subplans, best_path->withCheckOptionLists, @@ -6442,6 +6444,7 @@ static ModifyTable * make_modifytable(PlannerInfo *root, CmdType operation, bool canSetTag, Index nominalRelation, List *partitioned_rels, + bool partColsUpdated, List *resultRelations, List *subplans, List *withCheckOptionLists, List *returningLists, List *rowMarks, OnConflictExpr *onconflict, int epqParam) @@ -6468,6 +6471,7 @@ make_modifytable(PlannerInfo *root, node->canSetTag = canSetTag; node->nominalRelation = nominalRelation; node->partitioned_rels = partitioned_rels; + node->partColsUpdated = partColsUpdated; node->resultRelations = resultRelations; node->resultRelIndex = -1; /* will be set correctly in setrefs.c */ node->rootResultRelIndex = -1; /* will be set correctly in setrefs.c */ diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 7b52dadd81..53870432ea 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -1101,6 +1101,7 @@ inheritance_planner(PlannerInfo *root) Query *parent_parse; Bitmapset *parent_relids = bms_make_singleton(top_parentRTindex); PlannerInfo **parent_roots = NULL; + bool partColsUpdated = false; Assert(parse->commandType != CMD_INSERT); @@ -1172,7 +1173,8 @@ inheritance_planner(PlannerInfo *root) if (parent_rte->relkind == RELKIND_PARTITIONED_TABLE) { nominalRelation = top_parentRTindex; - partitioned_rels = get_partitioned_child_rels(root, top_parentRTindex); + partitioned_rels = get_partitioned_child_rels(root, top_parentRTindex, + &partColsUpdated); /* The root partitioned table is included as a child rel */ Assert(list_length(partitioned_rels) >= 1); } @@ -1512,6 +1514,7 @@ inheritance_planner(PlannerInfo *root) parse->canSetTag, nominalRelation, partitioned_rels, + partColsUpdated, resultRelations, subpaths, subroots, @@ -2123,6 +2126,7 @@ grouping_planner(PlannerInfo *root, bool inheritance_update, parse->canSetTag, parse->resultRelation, NIL, + false, list_make1_int(parse->resultRelation), list_make1(path), list_make1(root), @@ -6155,17 +6159,24 @@ plan_cluster_use_sort(Oid tableOid, Oid indexOid) /* * get_partitioned_child_rels * Returns a list of the RT indexes of the partitioned child relations - * with rti as the root parent RT index. + * with rti as the root parent RT index. Also sets + * *part_cols_updated to true if any of the root rte's updated + * columns is used in the partition key either of the relation whose RTI + * is specified or of any child relation. 
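The flag threaded through here lets the planner record, once per statement, whether any partition key column in the hierarchy is assigned, and therefore whether row movement is even possible. A sketch with hypothetical names, assuming a server built with this patch series; the two statements differ only in whether a key column is assigned:

    CREATE TABLE pk_root (k int, v int) PARTITION BY RANGE (k);
    CREATE TABLE pk_p1 PARTITION OF pk_root FOR VALUES FROM (0) TO (10);
    CREATE TABLE pk_p2 PARTITION OF pk_root FOR VALUES FROM (10) TO (20);
    INSERT INTO pk_root VALUES (5, 5);
    UPDATE pk_root SET v = v + 1 WHERE k = 5;  -- key untouched: no routing setup
                                               -- (absent BEFORE ROW triggers)
    UPDATE pk_root SET k = 15 WHERE k = 5;     -- key assigned: the row may move
    SELECT tableoid::regclass, * FROM pk_root; -- now in pk_p2
    DROP TABLE pk_root;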
* * Note: This function might get called even for range table entries that * are not partitioned tables; in such a case, it will simply return NIL. */ List * -get_partitioned_child_rels(PlannerInfo *root, Index rti) +get_partitioned_child_rels(PlannerInfo *root, Index rti, + bool *part_cols_updated) { List *result = NIL; ListCell *l; + if (part_cols_updated) + *part_cols_updated = false; + foreach(l, root->pcinfo_list) { PartitionedChildRelInfo *pc = lfirst_node(PartitionedChildRelInfo, l); @@ -6173,6 +6184,8 @@ get_partitioned_child_rels(PlannerInfo *root, Index rti) if (pc->parent_relid == rti) { result = pc->child_rels; + if (part_cols_updated) + *part_cols_updated = pc->part_cols_updated; break; } } diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index 7ef391ffeb..e6b15348c1 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -105,7 +105,8 @@ static void expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, Index parentRTindex, Relation parentrel, PlanRowMark *top_parentrc, LOCKMODE lockmode, - List **appinfos, List **partitioned_child_rels); + List **appinfos, List **partitioned_child_rels, + bool *part_cols_updated); static void expand_single_inheritance_child(PlannerInfo *root, RangeTblEntry *parentrte, Index parentRTindex, Relation parentrel, @@ -1461,16 +1462,19 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) if (RelationGetPartitionDesc(oldrelation) != NULL) { List *partitioned_child_rels = NIL; + bool part_cols_updated = false; Assert(rte->relkind == RELKIND_PARTITIONED_TABLE); /* * If this table has partitions, recursively expand them in the order - * in which they appear in the PartitionDesc. + * in which they appear in the PartitionDesc. While at it, also + * extract the partition key columns of all the partitioned tables. */ expand_partitioned_rtentry(root, rte, rti, oldrelation, oldrc, lockmode, &root->append_rel_list, - &partitioned_child_rels); + &partitioned_child_rels, + &part_cols_updated); /* * We keep a list of objects in root, each of which maps a root @@ -1487,6 +1491,7 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti) pcinfo = makeNode(PartitionedChildRelInfo); pcinfo->parent_relid = rti; pcinfo->child_rels = partitioned_child_rels; + pcinfo->part_cols_updated = part_cols_updated; root->pcinfo_list = lappend(root->pcinfo_list, pcinfo); } } @@ -1563,7 +1568,8 @@ static void expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, Index parentRTindex, Relation parentrel, PlanRowMark *top_parentrc, LOCKMODE lockmode, - List **appinfos, List **partitioned_child_rels) + List **appinfos, List **partitioned_child_rels, + bool *part_cols_updated) { int i; RangeTblEntry *childrte; @@ -1578,6 +1584,17 @@ expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, Assert(parentrte->inh); + /* + * Note down whether any partition key cols are being updated. Though it's + * the root partitioned table's updatedCols we are interested in, we + * instead use parentrte to get the updatedCols. This is convenient because + * parentrte already has the root partrel's updatedCols translated to match + * the attribute ordering of parentrel. + */ + if (!*part_cols_updated) + *part_cols_updated = + has_partition_attrs(parentrel, parentrte->updatedCols, NULL); + /* First expand the partitioned table itself. 
*/ expand_single_inheritance_child(root, parentrte, parentRTindex, parentrel, top_parentrc, parentrel, @@ -1617,7 +1634,8 @@ expand_partitioned_rtentry(PlannerInfo *root, RangeTblEntry *parentrte, if (childrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) expand_partitioned_rtentry(root, childrte, childRTindex, childrel, top_parentrc, lockmode, - appinfos, partitioned_child_rels); + appinfos, partitioned_child_rels, + part_cols_updated); /* Close child relation, but keep locks */ heap_close(childrel, NoLock); diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index fa4b4683e5..91295ebca4 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -3274,6 +3274,8 @@ create_lockrows_path(PlannerInfo *root, RelOptInfo *rel, * 'partitioned_rels' is an integer list of RT indexes of non-leaf tables in * the partition tree, if this is an UPDATE/DELETE to a partitioned table. * Otherwise NIL. + * 'partColsUpdated' is true if any partitioning columns are being updated, + * either from the target relation or a descendent partitioned table. * 'resultRelations' is an integer list of actual RT indexes of target rel(s) * 'subpaths' is a list of Path(s) producing source data (one per rel) * 'subroots' is a list of PlannerInfo structs (one per rel) @@ -3287,6 +3289,7 @@ ModifyTablePath * create_modifytable_path(PlannerInfo *root, RelOptInfo *rel, CmdType operation, bool canSetTag, Index nominalRelation, List *partitioned_rels, + bool partColsUpdated, List *resultRelations, List *subpaths, List *subroots, List *withCheckOptionLists, List *returningLists, @@ -3354,6 +3357,7 @@ create_modifytable_path(PlannerInfo *root, RelOptInfo *rel, pathnode->canSetTag = canSetTag; pathnode->nominalRelation = nominalRelation; pathnode->partitioned_rels = list_copy(partitioned_rels); + pathnode->partColsUpdated = partColsUpdated; pathnode->resultRelations = resultRelations; pathnode->subpaths = subpaths; pathnode->subroots = subroots; diff --git a/src/include/executor/execPartition.h b/src/include/executor/execPartition.h index b5df357acd..18e08129f8 100644 --- a/src/include/executor/execPartition.h +++ b/src/include/executor/execPartition.h @@ -62,11 +62,24 @@ typedef struct PartitionDispatchData *PartitionDispatch; * for every leaf partition in the partition tree. * num_partitions Number of leaf partitions in the partition tree * (= 'partitions' array length) - * partition_tupconv_maps Array of TupleConversionMap objects with one + * parent_child_tupconv_maps Array of TupleConversionMap objects with one * entry for every leaf partition (required to - * convert input tuple based on the root table's - * rowtype to a leaf partition's rowtype after - * tuple routing is done) + * convert tuple from the root table's rowtype to + * a leaf partition's rowtype after tuple routing + * is done) + * child_parent_tupconv_maps Array of TupleConversionMap objects with one + * entry for every leaf partition (required to + * convert an updated tuple from the leaf + * partition's rowtype to the root table's rowtype + * so that tuple routing can be done) + * child_parent_map_not_required Array of bool. True value means that a map is + * determined to be not required for the given + * partition. False means either we haven't yet + * checked if a map is required, or it was + * determined to be required. + * subplan_partition_offsets Integer array ordered by UPDATE subplans. 
Each + * element of this array holds the index of the + * corresponding partition in the partitions array. * partition_tuple_slot TupleTableSlot to be used to manipulate any * given leaf partition's rowtype after that * partition is chosen for insertion by @@ -79,8 +92,12 @@ typedef struct PartitionTupleRouting int num_dispatch; ResultRelInfo **partitions; int num_partitions; - TupleConversionMap **partition_tupconv_maps; + TupleConversionMap **parent_child_tupconv_maps; + TupleConversionMap **child_parent_tupconv_maps; + bool *child_parent_map_not_required; + int *subplan_partition_offsets; TupleTableSlot *partition_tuple_slot; + TupleTableSlot *root_tuple_slot; } PartitionTupleRouting; extern PartitionTupleRouting *ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, @@ -90,6 +107,13 @@ extern int ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, TupleTableSlot *slot, EState *estate); +extern void ExecSetupChildParentMapForLeaf(PartitionTupleRouting *proute); +extern TupleConversionMap *TupConvMapForLeaf(PartitionTupleRouting *proute, + ResultRelInfo *rootRelInfo, int leaf_index); +extern HeapTuple ConvertPartitionTupleSlot(TupleConversionMap *map, + HeapTuple tuple, + TupleTableSlot *new_slot, + TupleTableSlot **p_my_slot); extern void ExecCleanupTupleRouting(PartitionTupleRouting *proute); #endif /* EXECPARTITION_H */ diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 63a75bd5ed..1bf67455e0 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -992,8 +992,8 @@ typedef struct ModifyTableState /* controls transition table population for specified operation */ struct TransitionCaptureState *mt_oc_transition_capture; /* controls transition table population for INSERT...ON CONFLICT UPDATE */ - TupleConversionMap **mt_transition_tupconv_maps; - /* Per plan/partition tuple conversion */ + TupleConversionMap **mt_per_subplan_tupconv_maps; + /* Per plan map for tuple conversion from child to root */ } ModifyTableState; /* ---------------- diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h index 74e9fb5f7b..baf3c07417 100644 --- a/src/include/nodes/plannodes.h +++ b/src/include/nodes/plannodes.h @@ -219,6 +219,7 @@ typedef struct ModifyTable Index nominalRelation; /* Parent RT index for use of EXPLAIN */ /* RT indexes of non-leaf tables in a partition tree */ List *partitioned_rels; + bool partColsUpdated; /* some part key in hierarchy updated */ List *resultRelations; /* integer list of RT indexes */ int resultRelIndex; /* index of first resultRel in plan's list */ int rootResultRelIndex; /* index of the partitioned table root */ diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h index 71689b8ed6..6bf68f31da 100644 --- a/src/include/nodes/relation.h +++ b/src/include/nodes/relation.h @@ -1674,6 +1674,7 @@ typedef struct ModifyTablePath Index nominalRelation; /* Parent RT index for use of EXPLAIN */ /* RT indexes of non-leaf tables in a partition tree */ List *partitioned_rels; + bool partColsUpdated; /* some part key in hierarchy updated */ List *resultRelations; /* integer list of RT indexes */ List *subpaths; /* Path(s) producing source data */ List *subroots; /* per-target-table PlannerInfos */ @@ -2124,6 +2125,8 @@ typedef struct PartitionedChildRelInfo Index parent_relid; List *child_rels; + bool part_cols_updated; /* is the partition key of any of + * the partitioned tables updated?
*/ } PartitionedChildRelInfo; /* diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h index 725694f570..ef7173fbf8 100644 --- a/src/include/optimizer/pathnode.h +++ b/src/include/optimizer/pathnode.h @@ -242,6 +242,7 @@ extern ModifyTablePath *create_modifytable_path(PlannerInfo *root, RelOptInfo *rel, CmdType operation, bool canSetTag, Index nominalRelation, List *partitioned_rels, + bool partColsUpdated, List *resultRelations, List *subpaths, List *subroots, List *withCheckOptionLists, List *returningLists, diff --git a/src/include/optimizer/planner.h b/src/include/optimizer/planner.h index 997b91fdf9..29173d36c4 100644 --- a/src/include/optimizer/planner.h +++ b/src/include/optimizer/planner.h @@ -57,7 +57,8 @@ extern Expr *preprocess_phv_expression(PlannerInfo *root, Expr *expr); extern bool plan_cluster_use_sort(Oid tableOid, Oid indexOid); -extern List *get_partitioned_child_rels(PlannerInfo *root, Index rti); +extern List *get_partitioned_child_rels(PlannerInfo *root, Index rti, + bool *part_cols_updated); extern List *get_partitioned_child_rels_for_join(PlannerInfo *root, Relids join_relids); diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out index b69ceaa75e..d09326c182 100644 --- a/src/test/regress/expected/update.out +++ b/src/test/regress/expected/update.out @@ -198,36 +198,477 @@ INSERT INTO upsert_test VALUES (1, 'Bat') ON CONFLICT(a) DROP TABLE update_test; DROP TABLE upsert_test; --- update to a partition should check partition bound constraint for the new tuple -create table range_parted ( +--------------------------- +-- UPDATE with row movement +--------------------------- +-- When a partitioned table receives an UPDATE of the partition key and the +-- new values no longer meet the partition's bound, the row must be moved to +-- the correct partition for the new partition key (if one exists). We must +-- also ensure that updatable views on partitioned tables properly enforce any +-- WITH CHECK OPTION that is defined. The situation with triggers in this case +-- also requires thorough testing, since partition key updates that cause row +-- movement convert UPDATEs into DELETE+INSERT. +CREATE TABLE range_parted ( a text, - b int -) partition by range (a, b); -create table part_a_1_a_10 partition of range_parted for values from ('a', 1) to ('a', 10); -create table part_a_10_a_20 partition of range_parted for values from ('a', 10) to ('a', 20); -create table part_b_1_b_10 partition of range_parted for values from ('b', 1) to ('b', 10); -create table part_b_10_b_20 partition of range_parted for values from ('b', 10) to ('b', 20); -insert into part_a_1_a_10 values ('a', 1); -insert into part_b_10_b_20 values ('b', 10); --- fail -update part_a_1_a_10 set a = 'b' where a = 'a'; -ERROR: new row for relation "part_a_1_a_10" violates partition constraint -DETAIL: Failing row contains (b, 1). -update range_parted set b = b - 1 where b = 10; -ERROR: new row for relation "part_b_10_b_20" violates partition constraint -DETAIL: Failing row contains (b, 9). + b bigint, + c numeric, + d int, + e varchar +) PARTITION BY RANGE (a, b); +-- Create partitions intentionally in descending bound order, so as to test +-- that update-row-movement works with the leaf partitions not in bound order.
+CREATE TABLE part_b_20_b_30 (e varchar, c numeric, a text, b bigint, d int); +ALTER TABLE range_parted ATTACH PARTITION part_b_20_b_30 FOR VALUES FROM ('b', 20) TO ('b', 30); +CREATE TABLE part_b_10_b_20 (e varchar, c numeric, a text, b bigint, d int) PARTITION BY RANGE (c); +CREATE TABLE part_b_1_b_10 PARTITION OF range_parted FOR VALUES FROM ('b', 1) TO ('b', 10); +ALTER TABLE range_parted ATTACH PARTITION part_b_10_b_20 FOR VALUES FROM ('b', 10) TO ('b', 20); +CREATE TABLE part_a_10_a_20 PARTITION OF range_parted FOR VALUES FROM ('a', 10) TO ('a', 20); +CREATE TABLE part_a_1_a_10 PARTITION OF range_parted FOR VALUES FROM ('a', 1) TO ('a', 10); +-- Check that partition-key UPDATE works sanely on a partitioned table that +-- does not have any child partitions. +UPDATE part_b_10_b_20 set b = b - 6; +-- Create some more partitions following the above pattern of descending bound +-- order, but let's make the situation a bit more complex by having the +-- attribute numbers of the columns vary from their parent partition. +CREATE TABLE part_c_100_200 (e varchar, c numeric, a text, b bigint, d int) PARTITION BY range (abs(d)); +ALTER TABLE part_c_100_200 DROP COLUMN e, DROP COLUMN c, DROP COLUMN a; +ALTER TABLE part_c_100_200 ADD COLUMN c numeric, ADD COLUMN e varchar, ADD COLUMN a text; +ALTER TABLE part_c_100_200 DROP COLUMN b; +ALTER TABLE part_c_100_200 ADD COLUMN b bigint; +CREATE TABLE part_d_1_15 PARTITION OF part_c_100_200 FOR VALUES FROM (1) TO (15); +CREATE TABLE part_d_15_20 PARTITION OF part_c_100_200 FOR VALUES FROM (15) TO (20); +ALTER TABLE part_b_10_b_20 ATTACH PARTITION part_c_100_200 FOR VALUES FROM (100) TO (200); +CREATE TABLE part_c_1_100 (e varchar, d int, c numeric, b bigint, a text); +ALTER TABLE part_b_10_b_20 ATTACH PARTITION part_c_1_100 FOR VALUES FROM (1) TO (100); +\set init_range_parted 'truncate range_parted; insert into range_parted VALUES (''a'', 1, 1, 1), (''a'', 10, 200, 1), (''b'', 12, 96, 1), (''b'', 13, 97, 2), (''b'', 15, 105, 16), (''b'', 17, 105, 19)' +\set show_data 'select tableoid::regclass::text COLLATE "C" partname, * from range_parted ORDER BY 1, 2, 3, 4, 5, 6' +:init_range_parted; +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_c_1_100 | b | 12 | 96 | 1 | + part_c_1_100 | b | 13 | 97 | 2 | + part_d_15_20 | b | 15 | 105 | 16 | + part_d_15_20 | b | 17 | 105 | 19 | +(6 rows) + +-- The order of subplans should be in bound order +EXPLAIN (costs off) UPDATE range_parted set c = c - 50 WHERE c > 97; + QUERY PLAN +------------------------------------- + Update on range_parted + Update on part_a_1_a_10 + Update on part_a_10_a_20 + Update on part_b_1_b_10 + Update on part_c_1_100 + Update on part_d_1_15 + Update on part_d_15_20 + Update on part_b_20_b_30 + -> Seq Scan on part_a_1_a_10 + Filter: (c > '97'::numeric) + -> Seq Scan on part_a_10_a_20 + Filter: (c > '97'::numeric) + -> Seq Scan on part_b_1_b_10 + Filter: (c > '97'::numeric) + -> Seq Scan on part_c_1_100 + Filter: (c > '97'::numeric) + -> Seq Scan on part_d_1_15 + Filter: (c > '97'::numeric) + -> Seq Scan on part_d_15_20 + Filter: (c > '97'::numeric) + -> Seq Scan on part_b_20_b_30 + Filter: (c > '97'::numeric) +(22 rows) + +-- fail, row movement happens only within the partition subtree. +UPDATE part_c_100_200 set c = c - 20, d = c WHERE c = 105; +ERROR: new row for relation "part_c_100_200" violates partition constraint +DETAIL: Failing row contains (105, 85, null, b, 15). 
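The error above is the expected outcome whenever the destination lies outside the subtree named in the statement: tuple routing starts at the UPDATE's target table, so a row can only move among that table's own descendants. A standalone sketch with hypothetical names:

    CREATE TABLE sc_root (k int) PARTITION BY RANGE (k);
    CREATE TABLE sc_p1 PARTITION OF sc_root FOR VALUES FROM (0) TO (10);
    CREATE TABLE sc_p2 PARTITION OF sc_root FOR VALUES FROM (10) TO (20);
    INSERT INTO sc_root VALUES (5);
    UPDATE sc_p1 SET k = k + 10;    -- fails: sc_p2 is not reachable from sc_p1
    UPDATE sc_root SET k = k + 10;  -- ok: routing starts at sc_root
    DROP TABLE sc_root;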
+-- fail, no partition key update, so no attempt to move tuple, +-- but "a = 'a'" violates partition constraint enforced by root partition) +UPDATE part_b_10_b_20 set a = 'a'; +ERROR: new row for relation "part_c_1_100" violates partition constraint +DETAIL: Failing row contains (null, 1, 96, 12, a). +-- ok, partition key update, no constraint violation +UPDATE range_parted set d = d - 10 WHERE d > 10; +-- ok, no partition key update, no constraint violation +UPDATE range_parted set e = d; +-- No row found +UPDATE part_c_1_100 set c = c + 20 WHERE c = 98; +-- ok, row movement +UPDATE part_b_10_b_20 set c = c + 20 returning c, b, a; + c | b | a +-----+----+--- + 116 | 12 | b + 117 | 13 | b + 125 | 15 | b + 125 | 17 | b +(4 rows) + +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+---+--- + part_a_10_a_20 | a | 10 | 200 | 1 | 1 + part_a_1_a_10 | a | 1 | 1 | 1 | 1 + part_d_1_15 | b | 12 | 116 | 1 | 1 + part_d_1_15 | b | 13 | 117 | 2 | 2 + part_d_1_15 | b | 15 | 125 | 6 | 6 + part_d_1_15 | b | 17 | 125 | 9 | 9 +(6 rows) + +-- fail, row movement happens only within the partition subtree. +UPDATE part_b_10_b_20 set b = b - 6 WHERE c > 116 returning *; +ERROR: new row for relation "part_d_1_15" violates partition constraint +DETAIL: Failing row contains (2, 117, 2, b, 7). +-- ok, row movement, with subset of rows moved into different partition. +UPDATE range_parted set b = b - 6 WHERE c > 116 returning a, b + c; + a | ?column? +---+---------- + a | 204 + b | 124 + b | 134 + b | 136 +(4 rows) + +:show_data; + partname | a | b | c | d | e +---------------+---+----+-----+---+--- + part_a_1_a_10 | a | 1 | 1 | 1 | 1 + part_a_1_a_10 | a | 4 | 200 | 1 | 1 + part_b_1_b_10 | b | 7 | 117 | 2 | 2 + part_b_1_b_10 | b | 9 | 125 | 6 | 6 + part_d_1_15 | b | 11 | 125 | 9 | 9 + part_d_1_15 | b | 12 | 116 | 1 | 1 +(6 rows) + +-- Common table needed for multiple test scenarios. +CREATE TABLE mintab(c1 int); +INSERT into mintab VALUES (120); +-- update partition key using updatable view. +CREATE VIEW upview AS SELECT * FROM range_parted WHERE (select c > c1 FROM mintab) WITH CHECK OPTION; +-- ok +UPDATE upview set c = 199 WHERE b = 4; +-- fail, check option violation +UPDATE upview set c = 120 WHERE b = 4; +ERROR: new row violates check option for view "upview" +DETAIL: Failing row contains (a, 4, 120, 1, 1). +-- fail, row movement with check option violation +UPDATE upview set a = 'b', b = 15, c = 120 WHERE b = 4; +ERROR: new row violates check option for view "upview" +DETAIL: Failing row contains (b, 15, 120, 1, 1). +-- ok, row movement, check option passes +UPDATE upview set a = 'b', b = 15 WHERE b = 4; +:show_data; + partname | a | b | c | d | e +---------------+---+----+-----+---+--- + part_a_1_a_10 | a | 1 | 1 | 1 | 1 + part_b_1_b_10 | b | 7 | 117 | 2 | 2 + part_b_1_b_10 | b | 9 | 125 | 6 | 6 + part_d_1_15 | b | 11 | 125 | 9 | 9 + part_d_1_15 | b | 12 | 116 | 1 | 1 + part_d_1_15 | b | 15 | 199 | 1 | 1 +(6 rows) + +-- cleanup +DROP VIEW upview; +-- RETURNING having whole-row vars. 
+:init_range_parted; +UPDATE range_parted set c = 95 WHERE a = 'b' and b > 10 and c > 100 returning (range_parted), *; + range_parted | a | b | c | d | e +---------------+---+----+----+----+--- + (b,15,95,16,) | b | 15 | 95 | 16 | + (b,17,95,19,) | b | 17 | 95 | 19 | +(2 rows) + +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_c_1_100 | b | 12 | 96 | 1 | + part_c_1_100 | b | 13 | 97 | 2 | + part_c_1_100 | b | 15 | 95 | 16 | + part_c_1_100 | b | 17 | 95 | 19 | +(6 rows) + +-- Transition tables with update row movement +:init_range_parted; +CREATE FUNCTION trans_updatetrigfunc() RETURNS trigger LANGUAGE plpgsql AS +$$ + begin + raise notice 'trigger = %, old table = %, new table = %', + TG_NAME, + (select string_agg(old_table::text, ', ' ORDER BY a) FROM old_table), + (select string_agg(new_table::text, ', ' ORDER BY a) FROM new_table); + return null; + end; +$$; +CREATE TRIGGER trans_updatetrig + AFTER UPDATE ON range_parted REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table + FOR EACH STATEMENT EXECUTE PROCEDURE trans_updatetrigfunc(); +UPDATE range_parted set c = (case when c = 96 then 110 else c + 1 end ) WHERE a = 'b' and b > 10 and c >= 96; +NOTICE: trigger = trans_updatetrig, old table = (b,12,96,1,), (b,13,97,2,), (b,15,105,16,), (b,17,105,19,), new table = (b,12,110,1,), (b,13,98,2,), (b,15,106,16,), (b,17,106,19,) +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_c_1_100 | b | 13 | 98 | 2 | + part_d_15_20 | b | 15 | 106 | 16 | + part_d_15_20 | b | 17 | 106 | 19 | + part_d_1_15 | b | 12 | 110 | 1 | +(6 rows) + +:init_range_parted; +-- Enabling OLD TABLE capture for both DELETE as well as UPDATE stmt triggers +-- should not cause DELETEd rows to be captured twice. Similar thing for +-- INSERT triggers and inserted rows. +CREATE TRIGGER trans_deletetrig + AFTER DELETE ON range_parted REFERENCING OLD TABLE AS old_table + FOR EACH STATEMENT EXECUTE PROCEDURE trans_updatetrigfunc(); +CREATE TRIGGER trans_inserttrig + AFTER INSERT ON range_parted REFERENCING NEW TABLE AS new_table + FOR EACH STATEMENT EXECUTE PROCEDURE trans_updatetrigfunc(); +UPDATE range_parted set c = c + 50 WHERE a = 'b' and b > 10 and c >= 96; +NOTICE: trigger = trans_updatetrig, old table = (b,12,96,1,), (b,13,97,2,), (b,15,105,16,), (b,17,105,19,), new table = (b,12,146,1,), (b,13,147,2,), (b,15,155,16,), (b,17,155,19,) +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_d_15_20 | b | 15 | 155 | 16 | + part_d_15_20 | b | 17 | 155 | 19 | + part_d_1_15 | b | 12 | 146 | 1 | + part_d_1_15 | b | 13 | 147 | 2 | +(6 rows) + +DROP TRIGGER trans_deletetrig ON range_parted; +DROP TRIGGER trans_inserttrig ON range_parted; +-- Don't drop trans_updatetrig yet. It is required below. +-- Test with transition tuple conversion happening for rows moved into the +-- new partition. This requires a trigger that references transition table +-- (we already have trans_updatetrig). For inserted rows, the conversion +-- is not usually needed, because the original tuple is already compatible with +-- the desired transition tuple format. But conversion happens when there is a +-- BR trigger because the trigger can change the inserted row. 
So install +-- BR triggers on those child partitions where the rows will be moved. +CREATE FUNCTION func_parted_mod_b() RETURNS trigger AS $$ +BEGIN + NEW.b = NEW.b + 1; + return NEW; +END $$ language plpgsql; +CREATE TRIGGER trig_c1_100 BEFORE UPDATE OR INSERT ON part_c_1_100 + FOR EACH ROW EXECUTE PROCEDURE func_parted_mod_b(); +CREATE TRIGGER trig_d1_15 BEFORE UPDATE OR INSERT ON part_d_1_15 + FOR EACH ROW EXECUTE PROCEDURE func_parted_mod_b(); +CREATE TRIGGER trig_d15_20 BEFORE UPDATE OR INSERT ON part_d_15_20 + FOR EACH ROW EXECUTE PROCEDURE func_parted_mod_b(); +:init_range_parted; +UPDATE range_parted set c = (case when c = 96 then 110 else c + 1 end) WHERE a = 'b' and b > 10 and c >= 96; +NOTICE: trigger = trans_updatetrig, old table = (b,13,96,1,), (b,14,97,2,), (b,16,105,16,), (b,18,105,19,), new table = (b,15,110,1,), (b,15,98,2,), (b,17,106,16,), (b,19,106,19,) +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_c_1_100 | b | 15 | 98 | 2 | + part_d_15_20 | b | 17 | 106 | 16 | + part_d_15_20 | b | 19 | 106 | 19 | + part_d_1_15 | b | 15 | 110 | 1 | +(6 rows) + +:init_range_parted; +UPDATE range_parted set c = c + 50 WHERE a = 'b' and b > 10 and c >= 96; +NOTICE: trigger = trans_updatetrig, old table = (b,13,96,1,), (b,14,97,2,), (b,16,105,16,), (b,18,105,19,), new table = (b,15,146,1,), (b,16,147,2,), (b,17,155,16,), (b,19,155,19,) +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_d_15_20 | b | 17 | 155 | 16 | + part_d_15_20 | b | 19 | 155 | 19 | + part_d_1_15 | b | 15 | 146 | 1 | + part_d_1_15 | b | 16 | 147 | 2 | +(6 rows) + +-- Case where per-partition tuple conversion map array is allocated, but the +-- map is not required for the particular tuple that is routed, thanks to +-- matching table attributes of the partition and the target table. +:init_range_parted; +UPDATE range_parted set b = 15 WHERE b = 1; +NOTICE: trigger = trans_updatetrig, old table = (a,1,1,1,), new table = (a,15,1,1,) +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_10_a_20 | a | 15 | 1 | 1 | + part_c_1_100 | b | 13 | 96 | 1 | + part_c_1_100 | b | 14 | 97 | 2 | + part_d_15_20 | b | 16 | 105 | 16 | + part_d_15_20 | b | 18 | 105 | 19 | +(6 rows) + +DROP TRIGGER trans_updatetrig ON range_parted; +DROP TRIGGER trig_c1_100 ON part_c_1_100; +DROP TRIGGER trig_d1_15 ON part_d_1_15; +DROP TRIGGER trig_d15_20 ON part_d_15_20; +DROP FUNCTION func_parted_mod_b(); +-- RLS policies with update-row-movement +----------------------------------------- +ALTER TABLE range_parted ENABLE ROW LEVEL SECURITY; +CREATE USER regress_range_parted_user; +GRANT ALL ON range_parted, mintab TO regress_range_parted_user; +CREATE POLICY seeall ON range_parted AS PERMISSIVE FOR SELECT USING (true); +CREATE POLICY policy_range_parted ON range_parted for UPDATE USING (true) WITH CHECK (c % 2 = 0); +:init_range_parted; +SET SESSION AUTHORIZATION regress_range_parted_user; +-- This should fail with RLS violation error while moving the row from +-- part_a_10_a_20 to part_d_1_15, because we are setting 'c' to an odd number.
+UPDATE range_parted set a = 'b', c = 151 WHERE a = 'a' and c = 200; +ERROR: new row violates row-level security policy for table "range_parted" +RESET SESSION AUTHORIZATION; +-- Create a trigger on part_d_1_15 +CREATE FUNCTION func_d_1_15() RETURNS trigger AS $$ +BEGIN + NEW.c = NEW.c + 1; -- Make even numbers odd, or vice versa + return NEW; +END $$ LANGUAGE plpgsql; +CREATE TRIGGER trig_d_1_15 BEFORE INSERT ON part_d_1_15 + FOR EACH ROW EXECUTE PROCEDURE func_d_1_15(); +:init_range_parted; +SET SESSION AUTHORIZATION regress_range_parted_user; +-- Here, RLS checks should succeed while moving row from part_a_10_a_20 to +-- part_d_1_15. Even though the UPDATE is setting 'c' to an odd number, the +-- trigger at the destination partition again makes it an even number. +UPDATE range_parted set a = 'b', c = 151 WHERE a = 'a' and c = 200; +RESET SESSION AUTHORIZATION; +:init_range_parted; +SET SESSION AUTHORIZATION regress_range_parted_user; +-- This should fail with RLS violation error. Even though the UPDATE is setting +-- 'c' to an even number, the trigger at the destination partition again makes +-- it an odd number. +UPDATE range_parted set a = 'b', c = 150 WHERE a = 'a' and c = 200; +ERROR: new row violates row-level security policy for table "range_parted" +-- Cleanup +RESET SESSION AUTHORIZATION; +DROP TRIGGER trig_d_1_15 ON part_d_1_15; +DROP FUNCTION func_d_1_15(); +-- Policy expression contains SubPlan +RESET SESSION AUTHORIZATION; +:init_range_parted; +CREATE POLICY policy_range_parted_subplan on range_parted + AS RESTRICTIVE for UPDATE USING (true) + WITH CHECK ((SELECT range_parted.c <= c1 FROM mintab)); +SET SESSION AUTHORIZATION regress_range_parted_user; +-- fail, mintab has row with c1 = 120 +UPDATE range_parted set a = 'b', c = 122 WHERE a = 'a' and c = 200; +ERROR: new row violates row-level security policy "policy_range_parted_subplan" for table "range_parted" -- ok -update range_parted set b = b + 1 where b = 10; +UPDATE range_parted set a = 'b', c = 120 WHERE a = 'a' and c = 200; +-- RLS policy expression contains whole row. 
+RESET SESSION AUTHORIZATION; +:init_range_parted; +CREATE POLICY policy_range_parted_wholerow on range_parted AS RESTRICTIVE for UPDATE USING (true) + WITH CHECK (range_parted = row('b', 10, 112, 1, NULL)::range_parted); +SET SESSION AUTHORIZATION regress_range_parted_user; +-- ok, should pass the RLS check +UPDATE range_parted set a = 'b', c = 112 WHERE a = 'a' and c = 200; +RESET SESSION AUTHORIZATION; +:init_range_parted; +SET SESSION AUTHORIZATION regress_range_parted_user; +-- fail, the whole row RLS check should fail +UPDATE range_parted set a = 'b', c = 116 WHERE a = 'a' and c = 200; +ERROR: new row violates row-level security policy "policy_range_parted_wholerow" for table "range_parted" +-- Cleanup +RESET SESSION AUTHORIZATION; +DROP POLICY policy_range_parted ON range_parted; +DROP POLICY policy_range_parted_subplan ON range_parted; +DROP POLICY policy_range_parted_wholerow ON range_parted; +REVOKE ALL ON range_parted, mintab FROM regress_range_parted_user; +DROP USER regress_range_parted_user; +DROP TABLE mintab; +-- statement triggers with update row movement +--------------------------------------------------- +:init_range_parted; +CREATE FUNCTION trigfunc() returns trigger language plpgsql as +$$ + begin + raise notice 'trigger = % fired on table % during %', + TG_NAME, TG_TABLE_NAME, TG_OP; + return null; + end; +$$; +-- Triggers on root partition +CREATE TRIGGER parent_delete_trig + AFTER DELETE ON range_parted for each statement execute procedure trigfunc(); +CREATE TRIGGER parent_update_trig + AFTER UPDATE ON range_parted for each statement execute procedure trigfunc(); +CREATE TRIGGER parent_insert_trig + AFTER INSERT ON range_parted for each statement execute procedure trigfunc(); +-- Triggers on leaf partition part_c_1_100 +CREATE TRIGGER c1_delete_trig + AFTER DELETE ON part_c_1_100 for each statement execute procedure trigfunc(); +CREATE TRIGGER c1_update_trig + AFTER UPDATE ON part_c_1_100 for each statement execute procedure trigfunc(); +CREATE TRIGGER c1_insert_trig + AFTER INSERT ON part_c_1_100 for each statement execute procedure trigfunc(); +-- Triggers on leaf partition part_d_1_15 +CREATE TRIGGER d1_delete_trig + AFTER DELETE ON part_d_1_15 for each statement execute procedure trigfunc(); +CREATE TRIGGER d1_update_trig + AFTER UPDATE ON part_d_1_15 for each statement execute procedure trigfunc(); +CREATE TRIGGER d1_insert_trig + AFTER INSERT ON part_d_1_15 for each statement execute procedure trigfunc(); +-- Triggers on leaf partition part_d_15_20 +CREATE TRIGGER d15_delete_trig + AFTER DELETE ON part_d_15_20 for each statement execute procedure trigfunc(); +CREATE TRIGGER d15_update_trig + AFTER UPDATE ON part_d_15_20 for each statement execute procedure trigfunc(); +CREATE TRIGGER d15_insert_trig + AFTER INSERT ON part_d_15_20 for each statement execute procedure trigfunc(); +-- Move all rows from part_c_100_200 to part_c_1_100. None of the delete or +-- insert statement triggers should be fired. 
+UPDATE range_parted set c = c - 50 WHERE c > 97; +NOTICE: trigger = parent_update_trig fired on table range_parted during UPDATE +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 150 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_c_1_100 | b | 12 | 96 | 1 | + part_c_1_100 | b | 13 | 97 | 2 | + part_c_1_100 | b | 15 | 55 | 16 | + part_c_1_100 | b | 17 | 55 | 19 | +(6 rows) + +DROP TRIGGER parent_delete_trig ON range_parted; +DROP TRIGGER parent_update_trig ON range_parted; +DROP TRIGGER parent_insert_trig ON range_parted; +DROP TRIGGER c1_delete_trig ON part_c_1_100; +DROP TRIGGER c1_update_trig ON part_c_1_100; +DROP TRIGGER c1_insert_trig ON part_c_1_100; +DROP TRIGGER d1_delete_trig ON part_d_1_15; +DROP TRIGGER d1_update_trig ON part_d_1_15; +DROP TRIGGER d1_insert_trig ON part_d_1_15; +DROP TRIGGER d15_delete_trig ON part_d_15_20; +DROP TRIGGER d15_update_trig ON part_d_15_20; +DROP TRIGGER d15_insert_trig ON part_d_15_20; -- Creating default partition for range +:init_range_parted; create table part_def partition of range_parted default; \d+ part_def - Table "public.part_def" - Column | Type | Collation | Nullable | Default | Storage | Stats target | Description ---------+---------+-----------+----------+---------+----------+--------------+------------- - a | text | | | | extended | | - b | integer | | | | plain | | + Table "public.part_def" + Column | Type | Collation | Nullable | Default | Storage | Stats target | Description +--------+-------------------+-----------+----------+---------+----------+--------------+------------- + a | text | | | | extended | | + b | bigint | | | | plain | | + c | numeric | | | | main | | + d | integer | | | | plain | | + e | character varying | | | | extended | | Partition of: range_parted DEFAULT -Partition constraint: (NOT ((a IS NOT NULL) AND (b IS NOT NULL) AND (((a = 'a'::text) AND (b >= 1) AND (b < 10)) OR ((a = 'a'::text) AND (b >= 10) AND (b < 20)) OR ((a = 'b'::text) AND (b >= 1) AND (b < 10)) OR ((a = 'b'::text) AND (b >= 10) AND (b < 20))))) +Partition constraint: (NOT ((a IS NOT NULL) AND (b IS NOT NULL) AND (((a = 'a'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'a'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'b'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '20'::bigint) AND (b < '30'::bigint))))) insert into range_parted values ('c', 9); -- ok @@ -235,21 +676,190 @@ update part_def set a = 'd' where a = 'c'; -- fail update part_def set a = 'a' where a = 'd'; ERROR: new row for relation "part_def" violates partition constraint -DETAIL: Failing row contains (a, 9). -create table list_parted ( +DETAIL: Failing row contains (a, 9, null, null, null). +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_c_1_100 | b | 12 | 96 | 1 | + part_c_1_100 | b | 13 | 97 | 2 | + part_d_15_20 | b | 15 | 105 | 16 | + part_d_15_20 | b | 17 | 105 | 19 | + part_def | d | 9 | | | +(7 rows) + +-- Update row movement from non-default to default partition. +-- fail, default partition is not under part_a_10_a_20; +UPDATE part_a_10_a_20 set a = 'ad' WHERE a = 'a'; +ERROR: new row for relation "part_a_10_a_20" violates partition constraint +DETAIL: Failing row contains (ad, 10, 200, 1, null). 
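The same subtree rule explains the failure just above: part_def hangs off range_parted, not off part_a_10_a_20, so only an UPDATE on the root can reach it. As the tests that follow show, rows move into and back out of a default partition like any other; a minimal sketch with hypothetical names:

    CREATE TABLE dp_root (k int) PARTITION BY LIST (k);
    CREATE TABLE dp_one PARTITION OF dp_root FOR VALUES IN (1);
    CREATE TABLE dp_def PARTITION OF dp_root DEFAULT;
    INSERT INTO dp_root VALUES (1);
    UPDATE dp_root SET k = 2 WHERE k = 1;       -- moves into dp_def
    UPDATE dp_root SET k = 1 WHERE k = 2;       -- and back into dp_one
    SELECT tableoid::regclass, * FROM dp_root;  -- dp_one again
    DROP TABLE dp_root;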
+-- ok +UPDATE range_parted set a = 'ad' WHERE a = 'a'; +UPDATE range_parted set a = 'bd' WHERE a = 'b'; +:show_data; + partname | a | b | c | d | e +----------+----+----+-----+----+--- + part_def | ad | 1 | 1 | 1 | + part_def | ad | 10 | 200 | 1 | + part_def | bd | 12 | 96 | 1 | + part_def | bd | 13 | 97 | 2 | + part_def | bd | 15 | 105 | 16 | + part_def | bd | 17 | 105 | 19 | + part_def | d | 9 | | | +(7 rows) + +-- Update row movement from default to non-default partitions. +-- ok +UPDATE range_parted set a = 'a' WHERE a = 'ad'; +UPDATE range_parted set a = 'b' WHERE a = 'bd'; +:show_data; + partname | a | b | c | d | e +----------------+---+----+-----+----+--- + part_a_10_a_20 | a | 10 | 200 | 1 | + part_a_1_a_10 | a | 1 | 1 | 1 | + part_c_1_100 | b | 12 | 96 | 1 | + part_c_1_100 | b | 13 | 97 | 2 | + part_d_15_20 | b | 15 | 105 | 16 | + part_d_15_20 | b | 17 | 105 | 19 | + part_def | d | 9 | | | +(7 rows) + +-- Cleanup: range_parted no longer needed. +DROP TABLE range_parted; +CREATE TABLE list_parted ( a text, b int -) partition by list (a); -create table list_part1 partition of list_parted for values in ('a', 'b'); -create table list_default partition of list_parted default; -insert into list_part1 values ('a', 1); -insert into list_default values ('d', 10); +) PARTITION BY list (a); +CREATE TABLE list_part1 PARTITION OF list_parted for VALUES in ('a', 'b'); +CREATE TABLE list_default PARTITION OF list_parted default; +INSERT into list_part1 VALUES ('a', 1); +INSERT into list_default VALUES ('d', 10); -- fail -update list_default set a = 'a' where a = 'd'; +UPDATE list_default set a = 'a' WHERE a = 'd'; ERROR: new row for relation "list_default" violates partition constraint DETAIL: Failing row contains (a, 10). -- ok -update list_default set a = 'x' where a = 'd'; +UPDATE list_default set a = 'x' WHERE a = 'd'; +DROP TABLE list_parted; +-------------- +-- Some more update-partition-key test scenarios below. This time use list +-- partitions. +-------------- +-- Setup for list partitions +CREATE TABLE list_parted (a numeric, b int, c int8) PARTITION BY list (a); +CREATE TABLE sub_parted PARTITION OF list_parted for VALUES in (1) PARTITION BY list (b); +CREATE TABLE sub_part1(b int, c int8, a numeric); +ALTER TABLE sub_parted ATTACH PARTITION sub_part1 for VALUES in (1); +CREATE TABLE sub_part2(b int, c int8, a numeric); +ALTER TABLE sub_parted ATTACH PARTITION sub_part2 for VALUES in (2); +CREATE TABLE list_part1(a numeric, b int, c int8); +ALTER TABLE list_parted ATTACH PARTITION list_part1 for VALUES in (2,3); +INSERT into list_parted VALUES (2,5,50); +INSERT into list_parted VALUES (3,6,60); +INSERT into sub_parted VALUES (1,1,60); +INSERT into sub_parted VALUES (1,2,10); +-- Test partition constraint violation when intermediate ancestor is used and +-- constraint is inherited from upper root. +UPDATE sub_parted set a = 2 WHERE c = 10; +ERROR: new row for relation "sub_part2" violates partition constraint +DETAIL: Failing row contains (2, 10, 2). +-- Test update-partition-key, where the unpruned partitions do not have their +-- partition keys updated. 
+SELECT tableoid::regclass::text, * FROM list_parted WHERE a = 2 ORDER BY 1; + tableoid | a | b | c +------------+---+---+---- + list_part1 | 2 | 5 | 50 +(1 row) + +UPDATE list_parted set b = c + a WHERE a = 2; +SELECT tableoid::regclass::text, * FROM list_parted WHERE a = 2 ORDER BY 1; + tableoid | a | b | c +------------+---+----+---- + list_part1 | 2 | 52 | 50 +(1 row) + +-- Test the case where BR UPDATE triggers change the partition key. +CREATE FUNCTION func_parted_mod_b() returns trigger as $$ +BEGIN + NEW.b = 2; -- This is changing partition key column. + return NEW; +END $$ LANGUAGE plpgsql; +CREATE TRIGGER parted_mod_b before update on sub_part1 + for each row execute procedure func_parted_mod_b(); +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; + tableoid | a | b | c +------------+---+----+---- + list_part1 | 2 | 52 | 50 + list_part1 | 3 | 6 | 60 + sub_part1 | 1 | 1 | 60 + sub_part2 | 1 | 2 | 10 +(4 rows) + +-- This should do the tuple routing even though there is no explicit +-- partition-key update, because there is a trigger on sub_part1. +UPDATE list_parted set c = 70 WHERE b = 1; +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; + tableoid | a | b | c +------------+---+----+---- + list_part1 | 2 | 52 | 50 + list_part1 | 3 | 6 | 60 + sub_part2 | 1 | 2 | 10 + sub_part2 | 1 | 2 | 70 +(4 rows) + +DROP TRIGGER parted_mod_b ON sub_part1; +-- If BR DELETE trigger prevented DELETE from happening, we should also skip +-- the INSERT if that delete is part of UPDATE=>DELETE+INSERT. +CREATE OR REPLACE FUNCTION func_parted_mod_b() returns trigger as $$ +BEGIN + raise notice 'Trigger: Got OLD row %, but returning NULL', OLD; + return NULL; +END $$ LANGUAGE plpgsql; +CREATE TRIGGER trig_skip_delete before delete on sub_part2 + for each row execute procedure func_parted_mod_b(); +UPDATE list_parted set b = 1 WHERE c = 70; +NOTICE: Trigger: Got OLD row (2,70,1), but returning NULL +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; + tableoid | a | b | c +------------+---+----+---- + list_part1 | 2 | 52 | 50 + list_part1 | 3 | 6 | 60 + sub_part2 | 1 | 2 | 10 + sub_part2 | 1 | 2 | 70 +(4 rows) + +-- Drop the trigger. Now the row should be moved. +DROP TRIGGER trig_skip_delete ON sub_part2; +UPDATE list_parted set b = 1 WHERE c = 70; +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; + tableoid | a | b | c +------------+---+----+---- + list_part1 | 2 | 52 | 50 + list_part1 | 3 | 6 | 60 + sub_part1 | 1 | 1 | 70 + sub_part2 | 1 | 2 | 10 +(4 rows) + +DROP FUNCTION func_parted_mod_b(); +-- UPDATE partition-key with FROM clause. If join produces multiple output +-- rows for the same row to be modified, we should tuple-route the row only +-- once. There should not be any rows inserted. +CREATE TABLE non_parted (id int); +INSERT into non_parted VALUES (1), (1), (1), (2), (2), (2), (3), (3), (3); +UPDATE list_parted t1 set a = 2 FROM non_parted t2 WHERE t1.a = t2.id and a = 1; +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; + tableoid | a | b | c +------------+---+----+---- + list_part1 | 2 | 1 | 70 + list_part1 | 2 | 2 | 10 + list_part1 | 2 | 52 | 50 + list_part1 | 3 | 6 | 60 +(4 rows) + +DROP TABLE non_parted; +-- Cleanup: list_parted no longer needed. 
+DROP TABLE list_parted; -- create custom operator class and hash function, for the same reason -- explained in alter_table.sql create or replace function dummy_hashint4(a int4, seed int8) returns int8 as @@ -271,14 +881,11 @@ insert into hpart4 values (3, 4); update hpart1 set a = 3, b=4 where a = 1; ERROR: new row for relation "hpart1" violates partition constraint DETAIL: Failing row contains (3, 4). +-- ok, row movement update hash_parted set b = b - 1 where b = 1; -ERROR: new row for relation "hpart1" violates partition constraint -DETAIL: Failing row contains (1, 0). -- ok update hash_parted set b = b + 8 where b = 1; -- cleanup -drop table range_parted; -drop table list_parted; drop table hash_parted; drop operator class custom_opclass using hash; drop function dummy_hashint4(a int4, seed int8); diff --git a/src/test/regress/sql/update.sql b/src/test/regress/sql/update.sql index 0c70d64a89..c9bb3b53d3 100644 --- a/src/test/regress/sql/update.sql +++ b/src/test/regress/sql/update.sql @@ -107,25 +107,336 @@ INSERT INTO upsert_test VALUES (1, 'Bat') ON CONFLICT(a) DROP TABLE update_test; DROP TABLE upsert_test; --- update to a partition should check partition bound constraint for the new tuple -create table range_parted ( + +--------------------------- +-- UPDATE with row movement +--------------------------- + +-- When a partitioned table receives an UPDATE to the partitioned key and the +-- new values no longer meet the partition's bound, the row must be moved to +-- the correct partition for the new partition key (if one exists). We must +-- also ensure that updatable views on partitioned tables properly enforce any +-- WITH CHECK OPTION that is defined. The situation with triggers in this case +-- also requires thorough testing as partition key updates causing row +-- movement convert UPDATEs into DELETE+INSERT. + +CREATE TABLE range_parted ( a text, - b int -) partition by range (a, b); -create table part_a_1_a_10 partition of range_parted for values from ('a', 1) to ('a', 10); -create table part_a_10_a_20 partition of range_parted for values from ('a', 10) to ('a', 20); -create table part_b_1_b_10 partition of range_parted for values from ('b', 1) to ('b', 10); -create table part_b_10_b_20 partition of range_parted for values from ('b', 10) to ('b', 20); -insert into part_a_1_a_10 values ('a', 1); -insert into part_b_10_b_20 values ('b', 10); + b bigint, + c numeric, + d int, + e varchar +) PARTITION BY RANGE (a, b); --- fail -update part_a_1_a_10 set a = 'b' where a = 'a'; -update range_parted set b = b - 1 where b = 10; +-- Create partitions intentionally in descending bound order, so as to test +-- that update-row-movement works with the leaf partitions not in bound order. +CREATE TABLE part_b_20_b_30 (e varchar, c numeric, a text, b bigint, d int); +ALTER TABLE range_parted ATTACH PARTITION part_b_20_b_30 FOR VALUES FROM ('b', 20) TO ('b', 30); +CREATE TABLE part_b_10_b_20 (e varchar, c numeric, a text, b bigint, d int) PARTITION BY RANGE (c); +CREATE TABLE part_b_1_b_10 PARTITION OF range_parted FOR VALUES FROM ('b', 1) TO ('b', 10); +ALTER TABLE range_parted ATTACH PARTITION part_b_10_b_20 FOR VALUES FROM ('b', 10) TO ('b', 20); +CREATE TABLE part_a_10_a_20 PARTITION OF range_parted FOR VALUES FROM ('a', 10) TO ('a', 20); +CREATE TABLE part_a_1_a_10 PARTITION OF range_parted FOR VALUES FROM ('a', 1) TO ('a', 10); + +-- Check that partition-key UPDATE works sanely on a partitioned table that +-- does not have any child partitions. 
+UPDATE part_b_10_b_20 set b = b - 6; + +-- Create some more partitions following the above pattern of descending bound +-- order, but let's make the situation a bit more complex by having the +-- attribute numbers of the columns vary from their parent partition. +CREATE TABLE part_c_100_200 (e varchar, c numeric, a text, b bigint, d int) PARTITION BY range (abs(d)); +ALTER TABLE part_c_100_200 DROP COLUMN e, DROP COLUMN c, DROP COLUMN a; +ALTER TABLE part_c_100_200 ADD COLUMN c numeric, ADD COLUMN e varchar, ADD COLUMN a text; +ALTER TABLE part_c_100_200 DROP COLUMN b; +ALTER TABLE part_c_100_200 ADD COLUMN b bigint; +CREATE TABLE part_d_1_15 PARTITION OF part_c_100_200 FOR VALUES FROM (1) TO (15); +CREATE TABLE part_d_15_20 PARTITION OF part_c_100_200 FOR VALUES FROM (15) TO (20); + +ALTER TABLE part_b_10_b_20 ATTACH PARTITION part_c_100_200 FOR VALUES FROM (100) TO (200); + +CREATE TABLE part_c_1_100 (e varchar, d int, c numeric, b bigint, a text); +ALTER TABLE part_b_10_b_20 ATTACH PARTITION part_c_1_100 FOR VALUES FROM (1) TO (100); + +\set init_range_parted 'truncate range_parted; insert into range_parted VALUES (''a'', 1, 1, 1), (''a'', 10, 200, 1), (''b'', 12, 96, 1), (''b'', 13, 97, 2), (''b'', 15, 105, 16), (''b'', 17, 105, 19)' +\set show_data 'select tableoid::regclass::text COLLATE "C" partname, * from range_parted ORDER BY 1, 2, 3, 4, 5, 6' +:init_range_parted; +:show_data; + +-- The order of subplans should be in bound order +EXPLAIN (costs off) UPDATE range_parted set c = c - 50 WHERE c > 97; + +-- fail, row movement happens only within the partition subtree. +UPDATE part_c_100_200 set c = c - 20, d = c WHERE c = 105; +-- fail, no partition key update, so no attempt to move tuple, +-- but "a = 'a'" violates partition constraint enforced by root partition) +UPDATE part_b_10_b_20 set a = 'a'; +-- ok, partition key update, no constraint violation +UPDATE range_parted set d = d - 10 WHERE d > 10; +-- ok, no partition key update, no constraint violation +UPDATE range_parted set e = d; +-- No row found +UPDATE part_c_1_100 set c = c + 20 WHERE c = 98; +-- ok, row movement +UPDATE part_b_10_b_20 set c = c + 20 returning c, b, a; +:show_data; + +-- fail, row movement happens only within the partition subtree. +UPDATE part_b_10_b_20 set b = b - 6 WHERE c > 116 returning *; +-- ok, row movement, with subset of rows moved into different partition. +UPDATE range_parted set b = b - 6 WHERE c > 116 returning a, b + c; + +:show_data; + +-- Common table needed for multiple test scenarios. +CREATE TABLE mintab(c1 int); +INSERT into mintab VALUES (120); + +-- update partition key using updatable view. +CREATE VIEW upview AS SELECT * FROM range_parted WHERE (select c > c1 FROM mintab) WITH CHECK OPTION; +-- ok +UPDATE upview set c = 199 WHERE b = 4; +-- fail, check option violation +UPDATE upview set c = 120 WHERE b = 4; +-- fail, row movement with check option violation +UPDATE upview set a = 'b', b = 15, c = 120 WHERE b = 4; +-- ok, row movement, check option passes +UPDATE upview set a = 'b', b = 15 WHERE b = 4; + +:show_data; + +-- cleanup +DROP VIEW upview; + +-- RETURNING having whole-row vars. 
+:init_range_parted; +UPDATE range_parted set c = 95 WHERE a = 'b' and b > 10 and c > 100 returning (range_parted), *; +:show_data; + + +-- Transition tables with update row movement +:init_range_parted; + +CREATE FUNCTION trans_updatetrigfunc() RETURNS trigger LANGUAGE plpgsql AS +$$ + begin + raise notice 'trigger = %, old table = %, new table = %', + TG_NAME, + (select string_agg(old_table::text, ', ' ORDER BY a) FROM old_table), + (select string_agg(new_table::text, ', ' ORDER BY a) FROM new_table); + return null; + end; +$$; + +CREATE TRIGGER trans_updatetrig + AFTER UPDATE ON range_parted REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table + FOR EACH STATEMENT EXECUTE PROCEDURE trans_updatetrigfunc(); + +UPDATE range_parted set c = (case when c = 96 then 110 else c + 1 end ) WHERE a = 'b' and b > 10 and c >= 96; +:show_data; +:init_range_parted; + +-- Enabling OLD TABLE capture for both DELETE as well as UPDATE stmt triggers +-- should not cause DELETEd rows to be captured twice. Similar thing for +-- INSERT triggers and inserted rows. +CREATE TRIGGER trans_deletetrig + AFTER DELETE ON range_parted REFERENCING OLD TABLE AS old_table + FOR EACH STATEMENT EXECUTE PROCEDURE trans_updatetrigfunc(); +CREATE TRIGGER trans_inserttrig + AFTER INSERT ON range_parted REFERENCING NEW TABLE AS new_table + FOR EACH STATEMENT EXECUTE PROCEDURE trans_updatetrigfunc(); +UPDATE range_parted set c = c + 50 WHERE a = 'b' and b > 10 and c >= 96; +:show_data; +DROP TRIGGER trans_deletetrig ON range_parted; +DROP TRIGGER trans_inserttrig ON range_parted; +-- Don't drop trans_updatetrig yet. It is required below. + +-- Test with transition tuple conversion happening for rows moved into the +-- new partition. This requires a trigger that references transition table +-- (we already have trans_updatetrig). For inserted rows, the conversion +-- is not usually needed, because the original tuple is already compatible with +-- the desired transition tuple format. But conversion happens when there is a +-- BR trigger because the trigger can change the inserted row. So install a +-- BR triggers on those child partitions where the rows will be moved. +CREATE FUNCTION func_parted_mod_b() RETURNS trigger AS $$ +BEGIN + NEW.b = NEW.b + 1; + return NEW; +END $$ language plpgsql; +CREATE TRIGGER trig_c1_100 BEFORE UPDATE OR INSERT ON part_c_1_100 + FOR EACH ROW EXECUTE PROCEDURE func_parted_mod_b(); +CREATE TRIGGER trig_d1_15 BEFORE UPDATE OR INSERT ON part_d_1_15 + FOR EACH ROW EXECUTE PROCEDURE func_parted_mod_b(); +CREATE TRIGGER trig_d15_20 BEFORE UPDATE OR INSERT ON part_d_15_20 + FOR EACH ROW EXECUTE PROCEDURE func_parted_mod_b(); +:init_range_parted; +UPDATE range_parted set c = (case when c = 96 then 110 else c + 1 end) WHERE a = 'b' and b > 10 and c >= 96; +:show_data; +:init_range_parted; +UPDATE range_parted set c = c + 50 WHERE a = 'b' and b > 10 and c >= 96; +:show_data; + +-- Case where per-partition tuple conversion map array is allocated, but the +-- map is not required for the particular tuple that is routed, thanks to +-- matching table attributes of the partition and the target table. 
+:init_range_parted; +UPDATE range_parted set b = 15 WHERE b = 1; +:show_data; + +DROP TRIGGER trans_updatetrig ON range_parted; +DROP TRIGGER trig_c1_100 ON part_c_1_100; +DROP TRIGGER trig_d1_15 ON part_d_1_15; +DROP TRIGGER trig_d15_20 ON part_d_15_20; +DROP FUNCTION func_parted_mod_b(); + +-- RLS policies with update-row-movement +----------------------------------------- + +ALTER TABLE range_parted ENABLE ROW LEVEL SECURITY; +CREATE USER regress_range_parted_user; +GRANT ALL ON range_parted, mintab TO regress_range_parted_user; +CREATE POLICY seeall ON range_parted AS PERMISSIVE FOR SELECT USING (true); +CREATE POLICY policy_range_parted ON range_parted for UPDATE USING (true) WITH CHECK (c % 2 = 0); + +:init_range_parted; +SET SESSION AUTHORIZATION regress_range_parted_user; +-- This should fail with RLS violation error while moving row from +-- part_a_10_a_20 to part_d_1_15, because we are setting 'c' to an odd number. +UPDATE range_parted set a = 'b', c = 151 WHERE a = 'a' and c = 200; + +RESET SESSION AUTHORIZATION; +-- Create a trigger on part_d_1_15 +CREATE FUNCTION func_d_1_15() RETURNS trigger AS $$ +BEGIN + NEW.c = NEW.c + 1; -- Make even numbers odd, or vice versa + return NEW; +END $$ LANGUAGE plpgsql; +CREATE TRIGGER trig_d_1_15 BEFORE INSERT ON part_d_1_15 + FOR EACH ROW EXECUTE PROCEDURE func_d_1_15(); + +:init_range_parted; +SET SESSION AUTHORIZATION regress_range_parted_user; + +-- Here, RLS checks should succeed while moving row from part_a_10_a_20 to +-- part_d_1_15. Even though the UPDATE is setting 'c' to an odd number, the +-- trigger at the destination partition again makes it an even number. +UPDATE range_parted set a = 'b', c = 151 WHERE a = 'a' and c = 200; + +RESET SESSION AUTHORIZATION; +:init_range_parted; +SET SESSION AUTHORIZATION regress_range_parted_user; +-- This should fail with RLS violation error. Even though the UPDATE is setting +-- 'c' to an even number, the trigger at the destination partition again makes +-- it an odd number. +UPDATE range_parted set a = 'b', c = 150 WHERE a = 'a' and c = 200; + +-- Cleanup +RESET SESSION AUTHORIZATION; +DROP TRIGGER trig_d_1_15 ON part_d_1_15; +DROP FUNCTION func_d_1_15(); + +-- Policy expression contains SubPlan +RESET SESSION AUTHORIZATION; +:init_range_parted; +CREATE POLICY policy_range_parted_subplan on range_parted + AS RESTRICTIVE for UPDATE USING (true) + WITH CHECK ((SELECT range_parted.c <= c1 FROM mintab)); +SET SESSION AUTHORIZATION regress_range_parted_user; +-- fail, mintab has row with c1 = 120 +UPDATE range_parted set a = 'b', c = 122 WHERE a = 'a' and c = 200; -- ok -update range_parted set b = b + 1 where b = 10; +UPDATE range_parted set a = 'b', c = 120 WHERE a = 'a' and c = 200; + +-- RLS policy expression contains whole row. 
+ +RESET SESSION AUTHORIZATION; +:init_range_parted; +CREATE POLICY policy_range_parted_wholerow on range_parted AS RESTRICTIVE for UPDATE USING (true) + WITH CHECK (range_parted = row('b', 10, 112, 1, NULL)::range_parted); +SET SESSION AUTHORIZATION regress_range_parted_user; +-- ok, should pass the RLS check +UPDATE range_parted set a = 'b', c = 112 WHERE a = 'a' and c = 200; +RESET SESSION AUTHORIZATION; +:init_range_parted; +SET SESSION AUTHORIZATION regress_range_parted_user; +-- fail, the whole row RLS check should fail +UPDATE range_parted set a = 'b', c = 116 WHERE a = 'a' and c = 200; + +-- Cleanup +RESET SESSION AUTHORIZATION; +DROP POLICY policy_range_parted ON range_parted; +DROP POLICY policy_range_parted_subplan ON range_parted; +DROP POLICY policy_range_parted_wholerow ON range_parted; +REVOKE ALL ON range_parted, mintab FROM regress_range_parted_user; +DROP USER regress_range_parted_user; +DROP TABLE mintab; + + +-- statement triggers with update row movement +--------------------------------------------------- + +:init_range_parted; + +CREATE FUNCTION trigfunc() returns trigger language plpgsql as +$$ + begin + raise notice 'trigger = % fired on table % during %', + TG_NAME, TG_TABLE_NAME, TG_OP; + return null; + end; +$$; +-- Triggers on root partition +CREATE TRIGGER parent_delete_trig + AFTER DELETE ON range_parted for each statement execute procedure trigfunc(); +CREATE TRIGGER parent_update_trig + AFTER UPDATE ON range_parted for each statement execute procedure trigfunc(); +CREATE TRIGGER parent_insert_trig + AFTER INSERT ON range_parted for each statement execute procedure trigfunc(); + +-- Triggers on leaf partition part_c_1_100 +CREATE TRIGGER c1_delete_trig + AFTER DELETE ON part_c_1_100 for each statement execute procedure trigfunc(); +CREATE TRIGGER c1_update_trig + AFTER UPDATE ON part_c_1_100 for each statement execute procedure trigfunc(); +CREATE TRIGGER c1_insert_trig + AFTER INSERT ON part_c_1_100 for each statement execute procedure trigfunc(); + +-- Triggers on leaf partition part_d_1_15 +CREATE TRIGGER d1_delete_trig + AFTER DELETE ON part_d_1_15 for each statement execute procedure trigfunc(); +CREATE TRIGGER d1_update_trig + AFTER UPDATE ON part_d_1_15 for each statement execute procedure trigfunc(); +CREATE TRIGGER d1_insert_trig + AFTER INSERT ON part_d_1_15 for each statement execute procedure trigfunc(); +-- Triggers on leaf partition part_d_15_20 +CREATE TRIGGER d15_delete_trig + AFTER DELETE ON part_d_15_20 for each statement execute procedure trigfunc(); +CREATE TRIGGER d15_update_trig + AFTER UPDATE ON part_d_15_20 for each statement execute procedure trigfunc(); +CREATE TRIGGER d15_insert_trig + AFTER INSERT ON part_d_15_20 for each statement execute procedure trigfunc(); + +-- Move all rows from part_c_100_200 to part_c_1_100. None of the delete or +-- insert statement triggers should be fired. 
+UPDATE range_parted set c = c - 50 WHERE c > 97; +:show_data; + +DROP TRIGGER parent_delete_trig ON range_parted; +DROP TRIGGER parent_update_trig ON range_parted; +DROP TRIGGER parent_insert_trig ON range_parted; +DROP TRIGGER c1_delete_trig ON part_c_1_100; +DROP TRIGGER c1_update_trig ON part_c_1_100; +DROP TRIGGER c1_insert_trig ON part_c_1_100; +DROP TRIGGER d1_delete_trig ON part_d_1_15; +DROP TRIGGER d1_update_trig ON part_d_1_15; +DROP TRIGGER d1_insert_trig ON part_d_1_15; +DROP TRIGGER d15_delete_trig ON part_d_15_20; +DROP TRIGGER d15_update_trig ON part_d_15_20; +DROP TRIGGER d15_insert_trig ON part_d_15_20; + -- Creating default partition for range +:init_range_parted; create table part_def partition of range_parted default; \d+ part_def insert into range_parted values ('c', 9); @@ -134,19 +445,119 @@ update part_def set a = 'd' where a = 'c'; -- fail update part_def set a = 'a' where a = 'd'; -create table list_parted ( +:show_data; + +-- Update row movement from non-default to default partition. +-- fail, default partition is not under part_a_10_a_20; +UPDATE part_a_10_a_20 set a = 'ad' WHERE a = 'a'; +-- ok +UPDATE range_parted set a = 'ad' WHERE a = 'a'; +UPDATE range_parted set a = 'bd' WHERE a = 'b'; +:show_data; +-- Update row movement from default to non-default partitions. +-- ok +UPDATE range_parted set a = 'a' WHERE a = 'ad'; +UPDATE range_parted set a = 'b' WHERE a = 'bd'; +:show_data; + +-- Cleanup: range_parted no longer needed. +DROP TABLE range_parted; + +CREATE TABLE list_parted ( a text, b int -) partition by list (a); -create table list_part1 partition of list_parted for values in ('a', 'b'); -create table list_default partition of list_parted default; -insert into list_part1 values ('a', 1); -insert into list_default values ('d', 10); +) PARTITION BY list (a); +CREATE TABLE list_part1 PARTITION OF list_parted for VALUES in ('a', 'b'); +CREATE TABLE list_default PARTITION OF list_parted default; +INSERT into list_part1 VALUES ('a', 1); +INSERT into list_default VALUES ('d', 10); -- fail -update list_default set a = 'a' where a = 'd'; +UPDATE list_default set a = 'a' WHERE a = 'd'; -- ok -update list_default set a = 'x' where a = 'd'; +UPDATE list_default set a = 'x' WHERE a = 'd'; + +DROP TABLE list_parted; + +-------------- +-- Some more update-partition-key test scenarios below. This time use list +-- partitions. +-------------- + +-- Setup for list partitions +CREATE TABLE list_parted (a numeric, b int, c int8) PARTITION BY list (a); +CREATE TABLE sub_parted PARTITION OF list_parted for VALUES in (1) PARTITION BY list (b); + +CREATE TABLE sub_part1(b int, c int8, a numeric); +ALTER TABLE sub_parted ATTACH PARTITION sub_part1 for VALUES in (1); +CREATE TABLE sub_part2(b int, c int8, a numeric); +ALTER TABLE sub_parted ATTACH PARTITION sub_part2 for VALUES in (2); + +CREATE TABLE list_part1(a numeric, b int, c int8); +ALTER TABLE list_parted ATTACH PARTITION list_part1 for VALUES in (2,3); + +INSERT into list_parted VALUES (2,5,50); +INSERT into list_parted VALUES (3,6,60); +INSERT into sub_parted VALUES (1,1,60); +INSERT into sub_parted VALUES (1,2,10); + +-- Test partition constraint violation when intermediate ancestor is used and +-- constraint is inherited from upper root. +UPDATE sub_parted set a = 2 WHERE c = 10; + +-- Test update-partition-key, where the unpruned partitions do not have their +-- partition keys updated. 
+SELECT tableoid::regclass::text, * FROM list_parted WHERE a = 2 ORDER BY 1; +UPDATE list_parted set b = c + a WHERE a = 2; +SELECT tableoid::regclass::text, * FROM list_parted WHERE a = 2 ORDER BY 1; + + +-- Test the case where BR UPDATE triggers change the partition key. +CREATE FUNCTION func_parted_mod_b() returns trigger as $$ +BEGIN + NEW.b = 2; -- This is changing partition key column. + return NEW; +END $$ LANGUAGE plpgsql; +CREATE TRIGGER parted_mod_b before update on sub_part1 + for each row execute procedure func_parted_mod_b(); + +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; + +-- This should do the tuple routing even though there is no explicit +-- partition-key update, because there is a trigger on sub_part1. +UPDATE list_parted set c = 70 WHERE b = 1; +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; + +DROP TRIGGER parted_mod_b ON sub_part1; + +-- If BR DELETE trigger prevented DELETE from happening, we should also skip +-- the INSERT if that delete is part of UPDATE=>DELETE+INSERT. +CREATE OR REPLACE FUNCTION func_parted_mod_b() returns trigger as $$ +BEGIN + raise notice 'Trigger: Got OLD row %, but returning NULL', OLD; + return NULL; +END $$ LANGUAGE plpgsql; +CREATE TRIGGER trig_skip_delete before delete on sub_part2 + for each row execute procedure func_parted_mod_b(); +UPDATE list_parted set b = 1 WHERE c = 70; +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; +-- Drop the trigger. Now the row should be moved. +DROP TRIGGER trig_skip_delete ON sub_part2; +UPDATE list_parted set b = 1 WHERE c = 70; +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; +DROP FUNCTION func_parted_mod_b(); + +-- UPDATE partition-key with FROM clause. If join produces multiple output +-- rows for the same row to be modified, we should tuple-route the row only +-- once. There should not be any rows inserted. +CREATE TABLE non_parted (id int); +INSERT into non_parted VALUES (1), (1), (1), (2), (2), (2), (3), (3), (3); +UPDATE list_parted t1 set a = 2 FROM non_parted t2 WHERE t1.a = t2.id and a = 1; +SELECT tableoid::regclass::text, * FROM list_parted ORDER BY 1, 2, 3, 4; +DROP TABLE non_parted; + +-- Cleanup: list_parted no longer needed. +DROP TABLE list_parted; -- create custom operator class and hash function, for the same reason -- explained in alter_table.sql @@ -169,13 +580,12 @@ insert into hpart4 values (3, 4); -- fail update hpart1 set a = 3, b=4 where a = 1; +-- ok, row movement update hash_parted set b = b - 1 where b = 1; -- ok update hash_parted set b = b + 8 where b = 1; -- cleanup -drop table range_parted; -drop table list_parted; drop table hash_parted; drop operator class custom_opclass using hash; drop function dummy_hashint4(a int4, seed int8); diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index cc84217dd9..a42ff9794a 100644 --- a/src/tools/pgindent/typedefs.list +++ b/src/tools/pgindent/typedefs.list @@ -1590,6 +1590,7 @@ PartitionRangeDatum PartitionRangeDatumKind PartitionScheme PartitionSpec +PartitionTupleRouting PartitionedChildRelInfo PasswordType Path From eee50a8d4c389171ad5180568a7221f7e9b28f09 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 19 Jan 2018 17:22:38 -0500 Subject: [PATCH 0860/1087] PL/Python: Simplify PLyLong_FromInt64 We don't actually need two code paths, one for 32 bits and one for 64 bits. 
Since the existing code already assumed that "long long" is available, we can just use PyLong_FromLongLong() for 64 bits as well. In Python 2.5 and later, PyLong_FromLong() and PyLong_FromLongLong() use the same code, so there will be no difference for 64-bit platforms. In Python 2.4, the code is different, but performance testing showed no noticeable difference in PL/Python, and that Python version is ancient anyway. Discussion: https://www.postgresql.org/message-id/0a02203c-e157-55b2-464e-6087066a1849@2ndquadrant.com --- src/pl/plpython/plpy_typeio.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c index c48e8fd5f3..6c6b16f4d7 100644 --- a/src/pl/plpython/plpy_typeio.c +++ b/src/pl/plpython/plpy_typeio.c @@ -618,11 +618,7 @@ PLyInt_FromInt32(PLyDatumToOb *arg, Datum d) static PyObject * PLyLong_FromInt64(PLyDatumToOb *arg, Datum d) { - /* on 32 bit platforms "long" may be too small */ - if (sizeof(int64) > sizeof(long)) - return PyLong_FromLongLong(DatumGetInt64(d)); - else - return PyLong_FromLong(DatumGetInt64(d)); + return PyLong_FromLongLong(DatumGetInt64(d)); } static PyObject * From 96102a32a374c3b81ba9c2b24bcf1943a87a9ef6 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 19 Jan 2018 22:16:25 -0500 Subject: [PATCH 0861/1087] Suppress possibly-uninitialized-variable warnings. Apparently, Peter's compiler has faith that the switch test values here could never not be valid values of their enums. Mine does not, and I tend to agree with it. --- src/backend/catalog/aclchk.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c index de18610a91..1156627b9e 100644 --- a/src/backend/catalog/aclchk.c +++ b/src/backend/catalog/aclchk.c @@ -3359,7 +3359,7 @@ aclcheck_error(AclResult aclerr, ObjectType objtype, break; case ACLCHECK_NO_PRIV: { - const char *msg; + const char *msg = "???"; switch (objtype) { @@ -3481,7 +3481,6 @@ aclcheck_error(AclResult aclerr, ObjectType objtype, case OBJECT_TSTEMPLATE: case OBJECT_USER_MAPPING: elog(ERROR, "unsupported object type %d", objtype); - msg = "???"; } ereport(ERROR, @@ -3491,7 +3490,7 @@ aclcheck_error(AclResult aclerr, ObjectType objtype, } case ACLCHECK_NOT_OWNER: { - const char *msg; + const char *msg = "???"; switch (objtype) { @@ -3616,7 +3615,6 @@ aclcheck_error(AclResult aclerr, ObjectType objtype, case OBJECT_TSTEMPLATE: case OBJECT_USER_MAPPING: elog(ERROR, "unsupported object type %d", objtype); - msg = "???"; } ereport(ERROR, From 918e02a221db1ee40d545cb05dc9d8d392b4b743 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 20 Jan 2018 08:02:01 -0500 Subject: [PATCH 0862/1087] Improve type conversion of SPI_processed in Python The previous code converted SPI_processed to a Python float if it didn't fit into a Python int. But Python longs have unlimited precision, so use that instead in all cases. As in eee50a8d4c389171ad5180568a7221f7e9b28f09, we use the Python LongLong API unconditionally for simplicity. 
Reviewed-by: Tom Lane --- src/pl/plpython/plpy_cursorobject.c | 4 +--- src/pl/plpython/plpy_spi.c | 8 ++------ 2 files changed, 3 insertions(+), 9 deletions(-) diff --git a/src/pl/plpython/plpy_cursorobject.c b/src/pl/plpython/plpy_cursorobject.c index a527585b81..e32bc568bc 100644 --- a/src/pl/plpython/plpy_cursorobject.c +++ b/src/pl/plpython/plpy_cursorobject.c @@ -444,9 +444,7 @@ PLy_cursor_fetch(PyObject *self, PyObject *args) ret->status = PyInt_FromLong(SPI_OK_FETCH); Py_DECREF(ret->nrows); - ret->nrows = (SPI_processed > (uint64) LONG_MAX) ? - PyFloat_FromDouble((double) SPI_processed) : - PyInt_FromLong((long) SPI_processed); + ret->nrows = PyLong_FromUnsignedLongLong(SPI_processed); if (SPI_processed != 0) { diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c index 0c623a9458..41155fc81e 100644 --- a/src/pl/plpython/plpy_spi.c +++ b/src/pl/plpython/plpy_spi.c @@ -371,9 +371,7 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) if (status > 0 && tuptable == NULL) { Py_DECREF(result->nrows); - result->nrows = (rows > (uint64) LONG_MAX) ? - PyFloat_FromDouble((double) rows) : - PyInt_FromLong((long) rows); + result->nrows = PyLong_FromUnsignedLongLong(rows); } else if (status > 0 && tuptable != NULL) { @@ -381,9 +379,7 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status) MemoryContext cxt; Py_DECREF(result->nrows); - result->nrows = (rows > (uint64) LONG_MAX) ? - PyFloat_FromDouble((double) rows) : - PyInt_FromLong((long) rows); + result->nrows = PyLong_FromUnsignedLongLong(rows); cxt = AllocSetContextCreate(CurrentMemoryContext, "PL/Python temp context", From 815f84aa166de294b80e80cc456b79128592720e Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Sat, 20 Jan 2018 21:47:02 -0500 Subject: [PATCH 0863/1087] doc: update intermediate certificate instructions Document how to properly create root and intermediate certificates using v3_ca extensions and where to place intermediate certificates so they are properly transferred to the remote side with the leaf certificate to link to the remote root certificate. This corrects docs that used to say that intermediate certificates must be stored with the root certificate. Also add instructions on how to create root, intermediate, and leaf certificates. Discussion: https://postgr.es/m/20180116002238.GC12724@momjian.us Reviewed-by: Michael Paquier Backpatch-through: 9.3 --- doc/src/sgml/libpq.sgml | 75 +++++++++++-------- doc/src/sgml/runtime.sgml | 151 ++++++++++++++++++++++++++++---------- 2 files changed, 156 insertions(+), 70 deletions(-) diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 4e4645136c..92c64b43d4 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -7574,17 +7574,37 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) the server certificate. This means that it is possible to spoof the server identity (for example by modifying a DNS record or by taking over the server IP address) without the client knowing. In order to prevent spoofing, - SSL certificate verification must be used. + the client must be able to verify the server's identity via a chain of + trust. A chain of trust is established by placing a root (self-signed) + certificate authority (CA) certificate on one + computer and a leaf certificate signed by the + root certificate on another computer. It is also possible to use an + intermediate certificate which is signed by the root + certificate and signs leaf certificates. 
+ To allow the client to verify the identity of the server, place a root + certificate on the client and a leaf certificate signed by the root + certificate on the server. To allow the server to verify the identity + of the client, place a root certificate on the server and a leaf and + optional intermediate certificates signed by the root certificate on + the client. Intermediate certificates (usually stored with the leaf + certificate) can also be used to link the leaf certificate to the + root certificate. + + + + Once a chain of trust has been established, there are two ways for + the client to validate the leaf certificate sent by the server. If the parameter sslmode is set to verify-ca, libpq will verify that the server is trustworthy by checking the - certificate chain up to a trusted certificate authority - (CA). If sslmode is set to verify-full, - libpq will also verify that the server host name matches its - certificate. The SSL connection will fail if the server certificate cannot - be verified. verify-full is recommended in most + certificate chain up to the root certificate stored on the client. + If sslmode is set to verify-full, + libpq will also verify that the server host + name matches the name stored in the server certificate. The + SSL connection will fail if the server certificate cannot be + verified. verify-full is recommended in most security-sensitive environments. @@ -7601,13 +7621,13 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - To allow server certificate verification, the certificate(s) of one or more - trusted CAs must be - placed in the file ~/.postgresql/root.crt in the user's home - directory. If intermediate CAs appear in - root.crt, the file must also contain certificate - chains to their root CAs. (On Microsoft Windows the file is named - %APPDATA%\postgresql\root.crt.) + To allow server certificate verification, one or more root certificates + must be placed in the file ~/.postgresql/root.crt + in the user's home directory. (On Microsoft Windows the file is named + %APPDATA%\postgresql\root.crt.) Intermediate + certificates should also be added to the file if they are needed to link + the certificate chain sent by the server to the root certificates + stored on the client. @@ -7641,11 +7661,12 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) Client Certificates - If the server requests a trusted client certificate, - libpq will send the certificate stored in + If the server attempts to verify the identity of the + client by requesting the client's leaf certificate, + libpq will send the certificates stored in file ~/.postgresql/postgresql.crt in the user's home - directory. The certificate must be signed by one of the certificate - authorities (CA) trusted by the server. A matching + directory. The certificates must chain to the root certificate trusted + by the server. A matching private key file ~/.postgresql/postgresql.key must also be present. The private key file must not allow any access to world or group; achieve this by the @@ -7660,23 +7681,17 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) - In some cases, the client certificate might be signed by an - intermediate certificate authority, rather than one that is - directly trusted by the server. 
To use such a certificate, append the - certificate of the signing authority to the postgresql.crt - file, then its parent authority's certificate, and so on up to a certificate - authority, root or intermediate, that is trusted by - the server, i.e. signed by a certificate in the server's root CA file - (). + The first certificate in postgresql.crt must be the + client's certificate because it must match the client's private key. + Intermediate certificates can be optionally appended + to the file — doing so avoids requiring storage of intermediate + certificates on the server (). - Note that the client's ~/.postgresql/root.crt lists the top-level CAs - that are considered trusted for signing server certificates. In principle it need - not list the CA that signed the client's certificate, though in most cases - that CA would also be trusted for server certificates. + For instructions on creating certificates, see . - diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index a2ebd3e21c..d162acb2e8 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -2247,40 +2247,46 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - In some cases, the server certificate might be signed by an - intermediate certificate authority, rather than one that is - directly trusted by clients. To use such a certificate, append the - certificate of the signing authority to the server.crt file, - then its parent authority's certificate, and so on up to a certificate - authority, root or intermediate, that is trusted by - clients, i.e. signed by a certificate in the clients' - root.crt files. + The first certificate in server.crt must be the + server's certificate because it must match the server's private key. + The certificates of intermediate certificate authorities + can also be appended to the file. Doing this avoids the necessity of + storing intermediate certificates on clients, assuming the root and + intermediate certificates were created with v3_ca + extensions. This allows easier expiration of intermediate certificates. + + + + It is not necessary to add the root certificate to + server.crt. Instead, clients must have the root + certificate of the server's certificate chain. Using Client Certificates - To require the client to supply a trusted certificate, place - certificates of the certificate authorities (CAs) - you trust in a file named root.crt in the data + To require the client to supply a trusted certificate, + place certificates of the root certificate authorities + (CAs) you trust in a file in the data directory, set the parameter in - postgresql.conf to root.crt, - and add the authentication option clientcert=1 to the - appropriate hostssl line(s) in pg_hba.conf. - A certificate will then be requested from the client during - SSL connection startup. (See for a - description of how to set up certificates on the client.) The server will + postgresql.conf to the new file name, and add the + authentication option clientcert=1 to the appropriate + hostssl line(s) in pg_hba.conf. + A certificate will then be requested from the client during SSL + connection startup. (See for a description + of how to set up certificates on the client.) The server will verify that the client's certificate is signed by one of the trusted certificate authorities. - If intermediate CAs appear in - root.crt, the file must also contain certificate - chains to their root CAs. Certificate Revocation List - (CRL) entries - are also checked if the parameter is set. 
+ Intermediate certificates that chain up to existing root certificates + can also appear in the file if + you wish to avoid storing them on clients (assuming the root and + intermediate certificates were created with v3_ca + extensions). Certificate Revocation List (CRL) entries are also + checked if the parameter is set. (See @@ -2296,14 +2302,6 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 it will not insist that a client certificate be presented. - - Note that the server's root.crt lists the top-level - CAs that are considered trusted for signing client certificates. - In principle it need - not list the CA that signed the server's certificate, though in most cases - that CA would also be trusted for client certificates. - - If you are setting up client certificates, you may wish to use the cert authentication method, so that the certificates @@ -2385,15 +2383,16 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - Creating a Self-signed Certificate + Creating Certificates - To create a quick self-signed certificate for the server, valid for 365 + To create a simple self-signed certificate for the server, valid for 365 days, use the following OpenSSL command, - replacing yourdomain.com with the server's host name: + replacing dbhost.yourdomain.com with the + server's host name: openssl req -new -x509 -days 365 -nodes -text -out server.crt \ - -keyout server.key -subj "/CN=yourdomain.com" + -keyout server.key -subj "/CN=dbhost.yourdomain.com" Then do: @@ -2406,14 +2405,86 @@ chmod og-rwx server.key - A self-signed certificate can be used for testing, but a certificate - signed by a certificate authority (CA) (either one of the - global CAs or a local one) should be used in production - so that clients can verify the server's identity. If all the clients - are local to the organization, using a local CA is - recommended. + While a self-signed certificate can be used for testing, a certificate + signed by a certificate authority (CA) (usually an + enterprise-wide root CA) should be used in production. + + To create a server certificate whose identity can be validated + by clients, first create a certificate signing request + (CSR) and a public/private key file: + +openssl req -new -nodes -text -out root.csr \ + -keyout root.key -subj "/CN=root.yourdomain.com" +chmod og-rwx root.key + + Then, sign the request with the key to create a root certificate + authority (using the default OpenSSL + configuration file location on Linux): + +openssl x509 -req -in root.csr -text -days 3650 \ + -extfile /etc/ssl/openssl.cnf -extensions v3_ca \ + -signkey root.key -out root.crt + + Finally, create a server certificate signed by the new root certificate + authority: + +openssl req -new -nodes -text -out server.csr \ + -keyout server.key -subj "/CN=dbhost.yourdomain.com" +chmod og-rwx server.key + +openssl x509 -req -in server.csr -text -days 365 \ + -CA root.crt -CAkey root.key -CAcreateserial \ + -out server.crt + + server.crt and server.key + should be stored on the server, and root.crt should + be stored on the client so the client can verify that the server's leaf + certificate was signed by its trusted root certificate. + root.key should be stored offline for use in + creating future certificates. 
+ + + + It is also possible to create a chain of trust that includes + intermediate certificates: + +# root +openssl req -new -nodes -text -out root.csr \ + -keyout root.key -subj "/CN=root.yourdomain.com" +chmod og-rwx root.key +openssl x509 -req -in root.csr -text -days 3650 \ + -extfile /etc/ssl/openssl.cnf -extensions v3_ca \ + -signkey root.key -out root.crt + +# intermediate +openssl req -new -nodes -text -out intermediate.csr \ + -keyout intermediate.key -subj "/CN=intermediate.yourdomain.com" +chmod og-rwx intermediate.key +openssl x509 -req -in intermediate.csr -text -days 1825 \ + -extfile /etc/ssl/openssl.cnf -extensions v3_ca \ + -CA root.crt -CAkey root.key -CAcreateserial \ + -out intermediate.crt + +# leaf +openssl req -new -nodes -text -out server.csr \ + -keyout server.key -subj "/CN=dbhost.yourdomain.com" +chmod og-rwx server.key +openssl x509 -req -in server.csr -text -days 365 \ + -CA intermediate.crt -CAkey intermediate.key -CAcreateserial \ + -out server.crt + + server.crt and + intermediate.crt should be concatenated + into a certificate file bundle and stored on the server. + server.key should also be stored on the server. + root.crt should be stored on the client so + the client can verify that the server's leaf certificate was signed + by a chain of certificates linked to its trusted root certificate. + root.key and intermediate.key + should be stored offline for use in creating future certificates. + From 5c15a54e851ecdd2b53e6d6a84f8ec0802ffc3cb Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Sun, 21 Jan 2018 13:40:55 +0100 Subject: [PATCH 0864/1087] Fix wording of "hostaddrs" The field is still called "hostaddr", so make sure references use "hostaddr values" instead. Author: Michael Paquier --- doc/src/sgml/libpq.sgml | 4 ++-- src/interfaces/libpq/fe-connect.c | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 92c64b43d4..02884bae1f 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -1010,8 +1010,8 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname - A comma-separated list of hostaddrs is also accepted, in - which case each host in the list is tried in order. See + A comma-separated list of hostaddr values is also + accepted, in which case each host in the list is tried in order. See for details. diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c index 8d543334ae..77eebb0ba1 100644 --- a/src/interfaces/libpq/fe-connect.c +++ b/src/interfaces/libpq/fe-connect.c @@ -972,7 +972,7 @@ connectOptions2(PGconn *conn) { conn->status = CONNECTION_BAD; printfPQExpBuffer(&conn->errorMessage, - libpq_gettext("could not match %d host names to %d hostaddrs\n"), + libpq_gettext("could not match %d host names to %d hostaddr values\n"), count_comma_separated_elems(conn->pghost), conn->nconnhost); return false; } From 1cc4f536ef86928a241126ca70d121873594630e Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Sun, 21 Jan 2018 15:40:46 +0100 Subject: [PATCH 0865/1087] Support huge pages on Windows Add support for huge pages (called large pages on Windows) to the Windows build. This (probably) breaks compatibility with Windows versions prior to Windows 2003 or Windows Vista. 
Authors: Takayuki Tsunakawa and Thomas Munro Reviewed by: Magnus Hagander, Amit Kapila --- doc/src/sgml/config.sgml | 20 ++++-- src/backend/port/win32_shmem.c | 127 ++++++++++++++++++++++++++++++--- src/backend/utils/misc/guc.c | 2 +- src/bin/pg_ctl/pg_ctl.c | 77 ++++++++++++++++++-- 4 files changed, 204 insertions(+), 22 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 37a61a13c8..cc156c6385 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -1369,14 +1369,26 @@ include_dir 'conf.d' - At present, this feature is supported only on Linux. The setting is - ignored on other systems when set to try. + At present, this feature is supported only on Linux and Windows. The + setting is ignored on other systems when set to try. The use of huge pages results in smaller page tables and less CPU time - spent on memory management, increasing performance. For more details, - see . + spent on memory management, increasing performance. For more details about + using huge pages on Linux, see . + + + + Huge pages are known as large pages on Windows. To use them, you need to + assign the user right Lock Pages in Memory to the Windows user account + that runs PostgreSQL. + You can use Windows Group Policy tool (gpedit.msc) to assign the user right + Lock Pages in Memory. + To start the database server on the command prompt as a standalone process, + not as a Windows service, the command prompt must be run as an administrator + User Access Control (UAC) must be disabled. When the UAC is enabled, the normal + command prompt revokes the user right Lock Pages in Memory when started. diff --git a/src/backend/port/win32_shmem.c b/src/backend/port/win32_shmem.c index 4991ed46f1..fa80cebfbd 100644 --- a/src/backend/port/win32_shmem.c +++ b/src/backend/port/win32_shmem.c @@ -21,6 +21,7 @@ HANDLE UsedShmemSegID = INVALID_HANDLE_VALUE; void *UsedShmemSegAddr = NULL; static Size UsedShmemSegSize = 0; +static bool EnableLockPagesPrivilege(int elevel); static void pgwin32_SharedMemoryDelete(int status, Datum shmId); /* @@ -103,6 +104,66 @@ PGSharedMemoryIsInUse(unsigned long id1, unsigned long id2) return true; } +/* + * EnableLockPagesPrivilege + * + * Try to acquire SeLockMemoryPrivilege so we can use large pages. 
+ */ +static bool +EnableLockPagesPrivilege(int elevel) +{ + HANDLE hToken; + TOKEN_PRIVILEGES tp; + LUID luid; + + if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken)) + { + ereport(elevel, + (errmsg("could not enable Lock Pages in Memory user right: error code %lu", GetLastError()), + errdetail("Failed system call was %s.", "OpenProcessToken"))); + return FALSE; + } + + if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME, &luid)) + { + ereport(elevel, + (errmsg("could not enable Lock Pages in Memory user right: error code %lu", GetLastError()), + errdetail("Failed system call was %s.", "LookupPrivilegeValue"))); + CloseHandle(hToken); + return FALSE; + } + tp.PrivilegeCount = 1; + tp.Privileges[0].Luid = luid; + tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; + + if (!AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL)) + { + ereport(elevel, + (errmsg("could not enable Lock Pages in Memory user right: error code %lu", GetLastError()), + errdetail("Failed system call was %s.", "AdjustTokenPrivileges"))); + CloseHandle(hToken); + return FALSE; + } + + if (GetLastError() != ERROR_SUCCESS) + { + if (GetLastError() == ERROR_NOT_ALL_ASSIGNED) + ereport(elevel, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg("could not enable Lock Pages in Memory user right"), + errhint("Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL."))); + else + ereport(elevel, + (errmsg("could not enable Lock Pages in Memory user right: error code %lu", GetLastError()), + errdetail("Failed system call was %s.", "AdjustTokenPrivileges"))); + CloseHandle(hToken); + return FALSE; + } + + CloseHandle(hToken); + + return TRUE; +} /* * PGSharedMemoryCreate @@ -127,11 +188,9 @@ PGSharedMemoryCreate(Size size, bool makePrivate, int port, int i; DWORD size_high; DWORD size_low; - - if (huge_pages == HUGE_PAGES_ON) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("huge pages not supported on this platform"))); + SIZE_T largePageSize = 0; + Size orig_size = size; + DWORD flProtect = PAGE_READWRITE; /* Room for a header? */ Assert(size > MAXALIGN(sizeof(PGShmemHeader))); @@ -140,6 +199,35 @@ PGSharedMemoryCreate(Size size, bool makePrivate, int port, UsedShmemSegAddr = NULL; + if (huge_pages == HUGE_PAGES_ON || huge_pages == HUGE_PAGES_TRY) + { + /* Does the processor support large pages? */ + largePageSize = GetLargePageMinimum(); + if (largePageSize == 0) + { + ereport(huge_pages == HUGE_PAGES_ON ? FATAL : DEBUG1, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("the processor does not support large pages"))); + ereport(DEBUG1, + (errmsg("disabling huge pages"))); + } + else if (!EnableLockPagesPrivilege(huge_pages == HUGE_PAGES_ON ? FATAL : DEBUG1)) + { + ereport(DEBUG1, + (errmsg("disabling huge pages"))); + } + else + { + /* Huge pages available and privilege enabled, so turn on */ + flProtect = PAGE_READWRITE | SEC_COMMIT | SEC_LARGE_PAGES; + + /* Round size up as appropriate. 
*/ + if (size % largePageSize != 0) + size += largePageSize - (size % largePageSize); + } + } + +retry: #ifdef _WIN64 size_high = size >> 32; #else @@ -163,16 +251,35 @@ PGSharedMemoryCreate(Size size, bool makePrivate, int port, hmap = CreateFileMapping(INVALID_HANDLE_VALUE, /* Use the pagefile */ NULL, /* Default security attrs */ - PAGE_READWRITE, /* Memory is Read/Write */ + flProtect, size_high, /* Size Upper 32 Bits */ size_low, /* Size Lower 32 bits */ szShareMem); if (!hmap) - ereport(FATAL, - (errmsg("could not create shared memory segment: error code %lu", GetLastError()), - errdetail("Failed system call was CreateFileMapping(size=%zu, name=%s).", - size, szShareMem))); + { + if (GetLastError() == ERROR_NO_SYSTEM_RESOURCES && + huge_pages == HUGE_PAGES_TRY && + (flProtect & SEC_LARGE_PAGES) != 0) + { + elog(DEBUG1, "CreateFileMapping(%zu) with SEC_LARGE_PAGES failed, " + "huge pages disabled", + size); + + /* + * Use the original size, not the rounded-up value, when falling back + * to non-huge pages. + */ + size = orig_size; + flProtect = PAGE_READWRITE; + goto retry; + } + else + ereport(FATAL, + (errmsg("could not create shared memory segment: error code %lu", GetLastError()), + errdetail("Failed system call was CreateFileMapping(size=%zu, name=%s).", + size, szShareMem))); + } /* * If the segment already existed, CreateFileMapping() will return a diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 72f6be329e..d03ba234b5 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -3913,7 +3913,7 @@ static struct config_enum ConfigureNamesEnum[] = { {"huge_pages", PGC_POSTMASTER, RESOURCES_MEM, - gettext_noop("Use of huge pages on Linux."), + gettext_noop("Use of huge pages on Linux or Windows."), NULL }, &huge_pages, diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c index 62c72c3fcf..9bc830b085 100644 --- a/src/bin/pg_ctl/pg_ctl.c +++ b/src/bin/pg_ctl/pg_ctl.c @@ -144,6 +144,7 @@ static void WINAPI pgwin32_ServiceHandler(DWORD); static void WINAPI pgwin32_ServiceMain(DWORD, LPTSTR *); static void pgwin32_doRunAsService(void); static int CreateRestrictedProcess(char *cmd, PROCESS_INFORMATION *processInfo, bool as_service); +static PTOKEN_PRIVILEGES GetPrivilegesToDelete(HANDLE hToken); #endif static pgpid_t get_pgpid(bool is_status_request); @@ -1623,11 +1624,6 @@ typedef BOOL (WINAPI * __SetInformationJobObject) (HANDLE, JOBOBJECTINFOCLASS, L typedef BOOL (WINAPI * __AssignProcessToJobObject) (HANDLE, HANDLE); typedef BOOL (WINAPI * __QueryInformationJobObject) (HANDLE, JOBOBJECTINFOCLASS, LPVOID, DWORD, LPDWORD); -/* Windows API define missing from some versions of MingW headers */ -#ifndef DISABLE_MAX_PRIVILEGE -#define DISABLE_MAX_PRIVILEGE 0x1 -#endif - /* * Create a restricted token, a job object sandbox, and execute the specified * process with it. 
@@ -1650,6 +1646,7 @@ CreateRestrictedProcess(char *cmd, PROCESS_INFORMATION *processInfo, bool as_ser
 	HANDLE		restrictedToken;
 	SID_IDENTIFIER_AUTHORITY NtAuthority = {SECURITY_NT_AUTHORITY};
 	SID_AND_ATTRIBUTES dropSids[2];
+	PTOKEN_PRIVILEGES delPrivs;
 
 	/* Functions loaded dynamically */
 	__CreateRestrictedToken _CreateRestrictedToken = NULL;
@@ -1708,14 +1705,21 @@ CreateRestrictedProcess(char *cmd, PROCESS_INFORMATION *processInfo, bool as_ser
 		return 0;
 	}
 
+	/* Get list of privileges to remove */
+	delPrivs = GetPrivilegesToDelete(origToken);
+	if (delPrivs == NULL)
+		/* Error message already printed */
+		return 0;
+
 	b = _CreateRestrictedToken(origToken,
-							   DISABLE_MAX_PRIVILEGE,
+							   0,
 							   sizeof(dropSids) / sizeof(dropSids[0]),
 							   dropSids,
-							   0, NULL,
+							   delPrivs->PrivilegeCount, delPrivs->Privileges,
 							   0, NULL,
 							   &restrictedToken);
 
+	free(delPrivs);
 	FreeSid(dropSids[1].Sid);
 	FreeSid(dropSids[0].Sid);
 	CloseHandle(origToken);
@@ -1832,6 +1836,66 @@ CreateRestrictedProcess(char *cmd, PROCESS_INFORMATION *processInfo, bool as_ser
 	 */
 	return r;
 }
+
+/*
+ * Get a list of privileges to delete from the access token. We delete all privileges
+ * except SeLockMemoryPrivilege, which is needed to use large pages, and
+ * SeChangeNotifyPrivilege, which is enabled by default in DISABLE_MAX_PRIVILEGE.
+ */
+static PTOKEN_PRIVILEGES
+GetPrivilegesToDelete(HANDLE hToken)
+{
+	int			i, j;
+	DWORD		length;
+	PTOKEN_PRIVILEGES tokenPrivs;
+	LUID		luidLockPages;
+	LUID		luidChangeNotify;
+
+	if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME, &luidLockPages) ||
+		!LookupPrivilegeValue(NULL, SE_CHANGE_NOTIFY_NAME, &luidChangeNotify))
+	{
+		write_stderr(_("%s: could not get LUIDs for privileges: error code %lu\n"),
+					 progname, (unsigned long) GetLastError());
+		return NULL;
+	}
+
+	if (!GetTokenInformation(hToken, TokenPrivileges, NULL, 0, &length) &&
+		GetLastError() != ERROR_INSUFFICIENT_BUFFER)
+	{
+		write_stderr(_("%s: could not get token information: error code %lu\n"),
+					 progname, (unsigned long) GetLastError());
+		return NULL;
+	}
+
+	tokenPrivs = (PTOKEN_PRIVILEGES) malloc(length);
+	if (tokenPrivs == NULL)
+	{
+		write_stderr(_("%s: out of memory\n"), progname);
+		return NULL;
+	}
+
+	if (!GetTokenInformation(hToken, TokenPrivileges, tokenPrivs, length, &length))
+	{
+		write_stderr(_("%s: could not get token information: error code %lu\n"),
+					 progname, (unsigned long) GetLastError());
+		free(tokenPrivs);
+		return NULL;
+	}
+
+	for (i = 0; i < tokenPrivs->PrivilegeCount; i++)
+	{
+		if (memcmp(&tokenPrivs->Privileges[i].Luid, &luidLockPages, sizeof(LUID)) == 0 ||
+			memcmp(&tokenPrivs->Privileges[i].Luid, &luidChangeNotify, sizeof(LUID)) == 0)
+		{
+			for (j = i; j < tokenPrivs->PrivilegeCount - 1; j++)
+				tokenPrivs->Privileges[j] = tokenPrivs->Privileges[j + 1];
+			tokenPrivs->PrivilegeCount--;
+			i--;				/* re-check this slot: the next privilege
+								 * just shifted into it, so the two kept
+								 * privileges are not skipped if adjacent */
+		}
+	}
+
+	return tokenPrivs;
+}
 #endif							/* WIN32 */
 
 static void

From b9ff79b8f17697f3df492017d454caa9920a7183 Mon Sep 17 00:00:00 2001
From: Magnus Hagander
Date: Mon, 22 Jan 2018 10:18:09 +0100
Subject: [PATCH 0866/1087] Fix docs typo

Spotted by Thomas Munro
---
 doc/src/sgml/config.sgml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index cc156c6385..31eaacfc4f 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1386,7 +1386,7 @@ include_dir 'conf.d'
        You can use Windows Group Policy tool (gpedit.msc) to assign the user right
        Lock Pages in Memory.
To start the database server on the command prompt as a standalone process, - not as a Windows service, the command prompt must be run as an administrator + not as a Windows service, the command prompt must be run as an administrator or User Access Control (UAC) must be disabled. When the UAC is enabled, the normal command prompt revokes the user right Lock Pages in Memory when started. From 8561e4840c81f7e345be2df170839846814fa004 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 22 Jan 2018 08:30:16 -0500 Subject: [PATCH 0867/1087] Transaction control in PL procedures In each of the supplied procedural languages (PL/pgSQL, PL/Perl, PL/Python, PL/Tcl), add language-specific commit and rollback functions/commands to control transactions in procedures in that language. Add similar underlying functions to SPI. Some additional cleanup so that transaction commit or abort doesn't blow away data structures still used by the procedure call. Add execution context tracking to CALL and DO statements so that transaction control commands can only be issued in top-level procedure and block calls, not function calls or other procedure or block calls. - SPI Add a new function SPI_connect_ext() that is like SPI_connect() but allows passing option flags. The only option flag right now is SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction control commands, otherwise it's not allowed. This is meant to be passed down from CALL and DO statements which themselves know in which context they are called. A nonatomic SPI connection uses different memory management. A normal SPI connection allocates its memory in TopTransactionContext. For nonatomic connections we use PortalContext instead. As the comment in SPI_connect_ext() (previously SPI_connect()) indicates, one could potentially use PortalContext in all cases, but it seems safest to leave the existing uses alone, because this stuff is complicated enough already. SPI also gets new functions SPI_start_transaction(), SPI_commit(), and SPI_rollback(), which can be used by PLs to implement their transaction control logic. - portalmem.c Some adjustments were made in the code that cleans up portals at transaction abort. The portal code could already handle a command *committing* a transaction and continuing (e.g., VACUUM), but it was not quite prepared for a command *aborting* a transaction and continuing. In AtAbort_Portals(), remove the code that marks an active portal as failed. As the comment there already predicted, this doesn't work if the running command wants to keep running after transaction abort. And it's actually not necessary, because pquery.c is careful to run all portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if there is an exception. So the code in AtAbort_Portals() is never used anyway. In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not to clean up active portals too much. This mirrors similar code in PreCommit_Portals(). - PL/Perl Gets new functions spi_commit() and spi_rollback() - PL/pgSQL Gets new commands COMMIT and ROLLBACK. Update the PL/SQL porting example in the documentation to reflect that transactions are now possible in procedures. - PL/Python Gets new functions plpy.commit and plpy.rollback. - PL/Tcl Gets new commands commit and rollback. 
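As a usage sketch of the new SPI interface (illustration only, not part of
this patch: the procedure name, the module wrapper, and the test1 table are
hypothetical), a C-language procedure could drive nonatomic transaction
control like this:

    #include "postgres.h"

    #include "executor/spi.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(transaction_demo);

    /*
     * Hypothetical C procedure; would be invoked via
     * CREATE PROCEDURE transaction_demo() LANGUAGE c AS 'MODULE_PATHNAME';
     * CALL transaction_demo();
     */
    Datum
    transaction_demo(PG_FUNCTION_ARGS)
    {
        int         i;
        char        query[64];

        /* Request a nonatomic connection so transaction control is allowed */
        if (SPI_connect_ext(SPI_OPT_NONATOMIC) != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect_ext failed");

        for (i = 0; i < 10; i++)
        {
            snprintf(query, sizeof(query),
                     "INSERT INTO test1 (a) VALUES (%d)", i);
            SPI_execute(query, false, 0);

            /* End the current transaction ... */
            if (i % 2 == 0)
                SPI_commit();
            else
                SPI_rollback();

            /* ... and start the next one before doing more work */
            SPI_start_transaction();
        }

        SPI_finish();
        PG_RETURN_VOID();
    }

A real extension would, like the PL handlers below, first inspect the
CallContext node passed in fcinfo->context and request SPI_OPT_NONATOMIC
only when its atomic flag is false.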
Reviewed-by: Andrew Dunstan --- doc/src/sgml/plperl.sgml | 54 ++++ doc/src/sgml/plpgsql.sgml | 91 +++---- doc/src/sgml/plpython.sgml | 41 +++ doc/src/sgml/pltcl.sgml | 41 +++ doc/src/sgml/ref/call.sgml | 7 + doc/src/sgml/ref/create_procedure.sgml | 7 + doc/src/sgml/ref/do.sgml | 7 + doc/src/sgml/spi.sgml | 177 +++++++++++++ src/backend/commands/functioncmds.c | 47 +++- src/backend/executor/spi.c | 102 +++++++- src/backend/tcop/utility.c | 6 +- src/backend/utils/mmgr/portalmem.c | 49 ++-- src/include/commands/defrem.h | 4 +- src/include/executor/spi.h | 7 + src/include/executor/spi_priv.h | 4 + src/include/nodes/nodes.h | 3 +- src/include/nodes/parsenodes.h | 7 + src/include/utils/portal.h | 1 + src/pl/plperl/GNUmakefile | 2 +- src/pl/plperl/SPI.xs | 9 + src/pl/plperl/expected/plperl_transaction.out | 133 ++++++++++ src/pl/plperl/plperl.c | 69 ++++- src/pl/plperl/plperl.h | 2 + src/pl/plperl/sql/plperl_transaction.sql | 120 +++++++++ src/pl/plpgsql/src/Makefile | 2 +- .../src/expected/plpgsql_transaction.out | 241 ++++++++++++++++++ src/pl/plpgsql/src/pl_exec.c | 66 ++++- src/pl/plpgsql/src/pl_funcs.c | 44 ++++ src/pl/plpgsql/src/pl_gram.y | 34 +++ src/pl/plpgsql/src/pl_handler.c | 9 +- src/pl/plpgsql/src/pl_scanner.c | 2 + src/pl/plpgsql/src/plpgsql.h | 22 +- .../plpgsql/src/sql/plpgsql_transaction.sql | 215 ++++++++++++++++ src/pl/plpython/Makefile | 1 + src/pl/plpython/expected/plpython_test.out | 4 +- .../expected/plpython_transaction.out | 135 ++++++++++ src/pl/plpython/plpy_main.c | 21 +- src/pl/plpython/plpy_plpymodule.c | 49 ++++ src/pl/plpython/sql/plpython_transaction.sql | 115 +++++++++ src/pl/tcl/Makefile | 2 +- src/pl/tcl/expected/pltcl_transaction.out | 100 ++++++++ src/pl/tcl/pltcl.c | 95 ++++++- src/pl/tcl/sql/pltcl_transaction.sql | 98 +++++++ 43 files changed, 2149 insertions(+), 96 deletions(-) create mode 100644 src/pl/plperl/expected/plperl_transaction.out create mode 100644 src/pl/plperl/sql/plperl_transaction.sql create mode 100644 src/pl/plpgsql/src/expected/plpgsql_transaction.out create mode 100644 src/pl/plpgsql/src/sql/plpgsql_transaction.sql create mode 100644 src/pl/plpython/expected/plpython_transaction.out create mode 100644 src/pl/plpython/sql/plpython_transaction.sql create mode 100644 src/pl/tcl/expected/pltcl_transaction.out create mode 100644 src/pl/tcl/sql/pltcl_transaction.sql diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml index 100162dead..cff7a847de 100644 --- a/doc/src/sgml/plperl.sgml +++ b/doc/src/sgml/plperl.sgml @@ -661,6 +661,60 @@ SELECT release_hosts_query(); + + + + spi_commit() + + spi_commit + in PL/Perl + + + + spi_rollback() + + spi_rollback + in PL/Perl + + + + + Commit or roll back the current transaction. This can only be called + in a procedure or anonymous code block (DO command) + called from the top level. (Note that it is not possible to run the + SQL commands COMMIT or ROLLBACK + via spi_exec_query or similar. It has to be done + using these functions.) After a transaction is ended, a new + transaction is automatically started, so there is no separate function + for that. + + + + Here is an example: + +CREATE PROCEDURE transaction_test1() +LANGUAGE plperl +AS $$ +foreach my $i (0..9) { + spi_exec_query("INSERT INTO test1 (a) VALUES ($i)"); + if ($i % 2 == 0) { + spi_commit(); + } else { + spi_rollback(); + } +} +$$; + +CALL transaction_test1(); + + + + + Transactions cannot be ended when a cursor created by + spi_query is open. 
+ + + diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index ddd054c6cc..90a3c00dfe 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -3449,6 +3449,48 @@ END LOOP label ; + + Transaction Management + + + In procedures invoked by the CALL command from the top + level as well as in anonymous code blocks (DO command) + called from the top level, it is possible to end transactions using the + commands COMMIT and ROLLBACK. A new + transaction is started automatically after a transaction is ended using + these commands, so there is no separate START + TRANSACTION command. (Note that BEGIN and + END have different meanings in PL/pgSQL.) + + + + Here is a simple example: + +CREATE PROCEDURE transaction_test1() +LANGUAGE plpgsql +AS $$ +BEGIN + FOR i IN 0..9 LOOP + INSERT INTO test1 (a) VALUES (i); + IF i % 2 = 0 THEN + COMMIT; + ELSE + ROLLBACK; + END IF; + END LOOP; +END +$$; + +CALL transaction_test1(); + + + + + A transaction cannot be ended inside a loop over a query result, nor + inside a block with exception handlers. + + + Errors and Messages @@ -5432,14 +5474,13 @@ SELECT * FROM cs_parse_url('http://foobar.com/query.cgi?baz'); CREATE OR REPLACE PROCEDURE cs_create_job(v_job_id IN INTEGER) IS a_running_job_count INTEGER; - PRAGMA AUTONOMOUS_TRANSACTION; -- BEGIN - LOCK TABLE cs_jobs IN EXCLUSIVE MODE; -- + LOCK TABLE cs_jobs IN EXCLUSIVE MODE; SELECT count(*) INTO a_running_job_count FROM cs_jobs WHERE end_stamp IS NULL; IF a_running_job_count > 0 THEN - COMMIT; -- free lock + COMMIT; -- free lock raise_application_error(-20000, 'Unable to create a new job: a job is currently running.'); END IF; @@ -5459,45 +5500,11 @@ show errors - - Procedures like this can easily be converted into PostgreSQL - functions returning void. This procedure in - particular is interesting because it can teach us some things: - - - - - There is no PRAGMA statement in PostgreSQL. - - - - - - If you do a LOCK TABLE in PL/pgSQL, - the lock will not be released until the calling transaction is - finished. - - - - - - You cannot issue COMMIT in a - PL/pgSQL function. The function is - running within some outer transaction and so COMMIT - would imply terminating the function's execution. However, in - this particular case it is not necessary anyway, because the lock - obtained by the LOCK TABLE will be released when - we raise an error. - - - - - This is how we could port this procedure to PL/pgSQL: -CREATE OR REPLACE FUNCTION cs_create_job(v_job_id integer) RETURNS void AS $$ +CREATE OR REPLACE PROCEDURE cs_create_job(v_job_id integer) AS $$ DECLARE a_running_job_count integer; BEGIN @@ -5506,6 +5513,7 @@ BEGIN SELECT count(*) INTO a_running_job_count FROM cs_jobs WHERE end_stamp IS NULL; IF a_running_job_count > 0 THEN + COMMIT; -- free lock RAISE EXCEPTION 'Unable to create a new job: a job is currently running'; -- END IF; @@ -5518,6 +5526,7 @@ BEGIN WHEN unique_violation THEN -- -- don't worry if it already exists END; + COMMIT; END; $$ LANGUAGE plpgsql; @@ -5541,12 +5550,6 @@ $$ LANGUAGE plpgsql; - - The main functional difference between this procedure and the - Oracle equivalent is that the exclusive lock on the cs_jobs - table will be held until the calling transaction completes. Also, if - the caller later aborts (for example due to an error), the effects of - this procedure will be rolled back. 
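The per-language handler changes further below all share one detection
idiom; condensed from the plperl.c and pl_handler.c hunks in this patch:

    bool        nonatomic;

    /*
     * CALL and DO pass a CallContext node in fcinfo->context; atomic =
     * false means transaction control is permitted, so request a
     * nonatomic SPI connection in that case.
     */
    nonatomic = fcinfo->context &&
        IsA(fcinfo->context, CallContext) &&
        !castNode(CallContext, fcinfo->context)->atomic;

    if (SPI_connect_ext(nonatomic ? SPI_OPT_NONATOMIC : 0) != SPI_OK_CONNECT)
        elog(ERROR, "could not connect to SPI manager");

DO blocks receive the equivalent information through the new atomic field
of InlineCodeBlock instead.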
diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml index 0dbeee1fa2..ba79beb743 100644 --- a/doc/src/sgml/plpython.sgml +++ b/doc/src/sgml/plpython.sgml @@ -1370,6 +1370,47 @@ $$ LANGUAGE plpythonu; + + Transaction Management + + + In a procedure called from the top level or an anonymous code block + (DO command) called from the top level it is possible to + control transactions. To commit the current transaction, call + plpy.commit(). To roll back the current transaction, + call plpy.rollback(). (Note that it is not possible to + run the SQL commands COMMIT or + ROLLBACK via plpy.execute or + similar. It has to be done using these functions.) After a transaction is + ended, a new transaction is automatically started, so there is no separate + function for that. + + + + Here is an example: + +CREATE PROCEDURE transaction_test1() +LANGUAGE plpythonu +AS $$ +for i in range(0, 10): + plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i) + if i % 2 == 0: + plpy.commit() + else: + plpy.rollback() +$$; + +CALL transaction_test1(); + + + + + Transactions cannot be ended when a cursor created by + plpy.cursor is open or when an explicit subtransaction + is active. + + + Utility Functions diff --git a/doc/src/sgml/pltcl.sgml b/doc/src/sgml/pltcl.sgml index 8018783b0a..a834ab8862 100644 --- a/doc/src/sgml/pltcl.sgml +++ b/doc/src/sgml/pltcl.sgml @@ -1002,6 +1002,47 @@ $$ LANGUAGE pltcl; + + Transaction Management + + + In a procedure called from the top level or an anonymous code block + (DO command) called from the top level it is possible + to control transactions. To commit the current transaction, call the + commit command. To roll back the current transaction, + call the rollback command. (Note that it is not + possible to run the SQL commands COMMIT or + ROLLBACK via spi_exec or similar. + It has to be done using these functions.) After a transaction is ended, + a new transaction is automatically started, so there is no separate + command for that. + + + + Here is an example: + +CREATE PROCEDURE transaction_test1() +LANGUAGE pltcl +AS $$ +for {set i 0} {$i < 10} {incr i} { + spi_exec "INSERT INTO test1 (a) VALUES ($i)" + if {$i % 2 == 0} { + commit + } else { + rollback + } +} +$$; + +CALL transaction_test1(); + + + + + Transactions cannot be ended when an explicit subtransaction is active. + + + PL/Tcl Configuration diff --git a/doc/src/sgml/ref/call.sgml b/doc/src/sgml/ref/call.sgml index 2741d8d15e..03da4518ee 100644 --- a/doc/src/sgml/ref/call.sgml +++ b/doc/src/sgml/ref/call.sgml @@ -70,6 +70,13 @@ CALL name ( [ and diff --git a/doc/src/sgml/ref/do.sgml b/doc/src/sgml/ref/do.sgml index 061218b135..b9a6f9a6fd 100644 --- a/doc/src/sgml/ref/do.sgml +++ b/doc/src/sgml/ref/do.sgml @@ -91,6 +91,13 @@ DO [ LANGUAGE lang_name ] + + + If DO is executed in a transaction block, then the + procedure code cannot execute transaction control statements. Transaction + control statements are only allowed if DO is executed in + its own transaction. + diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml index 350f0863e9..10448922b1 100644 --- a/doc/src/sgml/spi.sgml +++ b/doc/src/sgml/spi.sgml @@ -64,6 +64,7 @@ SPI_connect + SPI_connect_ext SPI_connect @@ -72,12 +73,17 @@ SPI_connect + SPI_connect_ext connect a procedure to the SPI manager int SPI_connect(void) + + + +int SPI_connect_ext(int options) @@ -90,6 +96,31 @@ int SPI_connect(void) function if you want to execute commands through SPI. Some utility SPI functions can be called from unconnected procedures. 
+ + + SPI_connect_ext does the same but has an argument that + allows passing option flags. Currently, the following option values are + available: + + + SPI_OPT_NONATOMIC + + + Sets the SPI connection to be nonatomic, which + means that transaction control calls SPI_commit, + SPI_rollback, and + SPI_start_transaction are allowed. Otherwise, + calling these functions will result in an immediate error. + + + + + + + + SPI_connect() is equivalent to + SPI_connect_ext(0). + @@ -4325,6 +4356,152 @@ int SPI_freeplan(SPIPlanPtr plan) + + Transaction Management + + + It is not possible to run transaction control commands such + as COMMIT and ROLLBACK through SPI + functions such as SPI_execute. There are, however, + separate interface functions that allow transaction control through SPI. + + + + It is not generally safe and sensible to start and end transactions in + arbitrary user-defined SQL-callable functions without taking into account + the context in which they are called. For example, a transaction boundary + in the middle of a function that is part of a complex SQL expression that + is part of some SQL command will probably result in obscure internal errors + or crashes. The interface functions presented here are primarily intended + to be used by procedural language implementations to support transaction + management in procedures that are invoked by the CALL + command, taking the context of the CALL invocation into + account. SPI procedures implemented in C can implement the same logic, but + the details of that are beyond the scope of this documentation. + + + + + + SPI_commit + + + SPI_commit + 3 + + + + SPI_commit + commit the current transaction + + + + +void SPI_commit(void) + + + + + Description + + + SPI_commit commits the current transaction. It is + approximately equivalent to running the SQL + command COMMIT. After a transaction is committed, a new + transaction has to be started + using SPI_start_transaction before further database + actions can be executed. + + + + This function can only be executed if the SPI connection has been set as + nonatomic in the call to SPI_connect_ext. + + + + + + + + SPI_rollback + + + SPI_rollback + 3 + + + + SPI_rollback + abort the current transaction + + + + +void SPI_rollback(void) + + + + + Description + + + SPI_rollback rolls back the current transaction. It + is approximately equivalent to running the SQL + command ROLLBACK. After a transaction is rolled back, a + new transaction has to be started + using SPI_start_transaction before further database + actions can be executed. + + + + This function can only be executed if the SPI connection has been set as + nonatomic in the call to SPI_connect_ext. + + + + + + + + SPI_start_transaction + + + SPI_start_transaction + 3 + + + + SPI_start_transaction + start a new transaction + + + + +void SPI_start_transaction(void) + + + + + Description + + + SPI_start_transaction starts a new transaction. It + can only be called after SPI_commit + or SPI_rollback, as there is no transaction active at + that point. Normally, when an SPI procedure is called, there is already a + transaction active, so attempting to start another one before closing out + the current one will result in an error. + + + + This function can only be executed if the SPI connection has been set as + nonatomic in the call to SPI_connect_ext. 
+ + + + + + Visibility of Data Changes diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c index ea08c3237c..df87dfeb54 100644 --- a/src/backend/commands/functioncmds.c +++ b/src/backend/commands/functioncmds.c @@ -65,6 +65,7 @@ #include "utils/fmgroids.h" #include "utils/guc.h" #include "utils/lsyscache.h" +#include "utils/memutils.h" #include "utils/rel.h" #include "utils/syscache.h" #include "utils/tqual.h" @@ -2136,9 +2137,11 @@ IsThereFunctionInNamespace(const char *proname, int pronargs, /* * ExecuteDoStmt * Execute inline procedural-language code + * + * See at ExecuteCallStmt() about the atomic argument. */ void -ExecuteDoStmt(DoStmt *stmt) +ExecuteDoStmt(DoStmt *stmt, bool atomic) { InlineCodeBlock *codeblock = makeNode(InlineCodeBlock); ListCell *arg; @@ -2200,6 +2203,7 @@ ExecuteDoStmt(DoStmt *stmt) codeblock->langOid = HeapTupleGetOid(languageTuple); languageStruct = (Form_pg_language) GETSTRUCT(languageTuple); codeblock->langIsTrusted = languageStruct->lanpltrusted; + codeblock->atomic = atomic; if (languageStruct->lanpltrusted) { @@ -2236,9 +2240,28 @@ ExecuteDoStmt(DoStmt *stmt) /* * Execute CALL statement + * + * Inside a top-level CALL statement, transaction-terminating commands such as + * COMMIT or a PL-specific equivalent are allowed. The terminology in the SQL + * standard is that CALL establishes a non-atomic execution context. Most + * other commands establish an atomic execution context, in which transaction + * control actions are not allowed. If there are nested executions of CALL, + * we want to track the execution context recursively, so that the nested + * CALLs can also do transaction control. Note, however, that for example in + * CALL -> SELECT -> CALL, the second call cannot do transaction control, + * because the SELECT in between establishes an atomic execution context. + * + * So when ExecuteCallStmt() is called from the top level, we pass in atomic = + * false (recall that that means transactions = yes). We then create a + * CallContext node with content atomic = false, which is passed in the + * fcinfo->context field to the procedure invocation. The language + * implementation should then take appropriate measures to allow or prevent + * transaction commands based on that information, e.g., call + * SPI_connect_ext(SPI_OPT_NONATOMIC). The language should also pass on the + * atomic flag to any nested invocations to CALL. */ void -ExecuteCallStmt(ParseState *pstate, CallStmt *stmt) +ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) { List *targs; ListCell *lc; @@ -2249,6 +2272,8 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt) AclResult aclresult; FmgrInfo flinfo; FunctionCallInfoData fcinfo; + CallContext *callcontext; + HeapTuple tp; targs = NIL; foreach(lc, stmt->funccall->args) @@ -2284,8 +2309,24 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt) FUNC_MAX_ARGS, FUNC_MAX_ARGS))); + callcontext = makeNode(CallContext); + callcontext->atomic = atomic; + + /* + * If proconfig is set we can't allow transaction commands because of the + * way the GUC stacking works: The transaction boundary would have to pop + * the proconfig setting off the stack. That restriction could be lifted + * by redesigning the GUC nesting mechanism a bit. 
+ */ + tp = SearchSysCache1(PROCOID, ObjectIdGetDatum(fexpr->funcid)); + if (!HeapTupleIsValid(tp)) + elog(ERROR, "cache lookup failed for function %u", fexpr->funcid); + if (!heap_attisnull(tp, Anum_pg_proc_proconfig)) + callcontext->atomic = true; + ReleaseSysCache(tp); + fmgr_info(fexpr->funcid, &flinfo); - InitFunctionCallInfoData(fcinfo, &flinfo, nargs, fexpr->inputcollid, NULL, NULL); + InitFunctionCallInfoData(fcinfo, &flinfo, nargs, fexpr->inputcollid, (Node *) callcontext, NULL); i = 0; foreach (lc, fexpr->args) diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c index 995f67d266..9fc4431b80 100644 --- a/src/backend/executor/spi.c +++ b/src/backend/executor/spi.c @@ -82,6 +82,12 @@ static bool _SPI_checktuples(void); int SPI_connect(void) +{ + return SPI_connect_ext(0); +} + +int +SPI_connect_ext(int options) { int newdepth; @@ -92,7 +98,7 @@ SPI_connect(void) elog(ERROR, "SPI stack corrupted"); newdepth = 16; _SPI_stack = (_SPI_connection *) - MemoryContextAlloc(TopTransactionContext, + MemoryContextAlloc(TopMemoryContext, newdepth * sizeof(_SPI_connection)); _SPI_stack_depth = newdepth; } @@ -124,19 +130,25 @@ SPI_connect(void) _SPI_current->execCxt = NULL; _SPI_current->connectSubid = GetCurrentSubTransactionId(); _SPI_current->queryEnv = NULL; + _SPI_current->atomic = (options & SPI_OPT_NONATOMIC ? false : true); + _SPI_current->internal_xact = false; /* * Create memory contexts for this procedure * - * XXX it would be better to use PortalContext as the parent context, but - * we may not be inside a portal (consider deferred-trigger execution). - * Perhaps CurTransactionContext would do? For now it doesn't matter - * because we clean up explicitly in AtEOSubXact_SPI(). + * In atomic contexts (the normal case), we use TopTransactionContext, + * otherwise PortalContext, so that it lives across transaction + * boundaries. + * + * XXX It could be better to use PortalContext as the parent context in + * all cases, but we may not be inside a portal (consider deferred-trigger + * execution). Perhaps CurTransactionContext could be an option? For now + * it doesn't matter because we clean up explicitly in AtEOSubXact_SPI(). */ - _SPI_current->procCxt = AllocSetContextCreate(TopTransactionContext, + _SPI_current->procCxt = AllocSetContextCreate(_SPI_current->atomic ? TopTransactionContext : PortalContext, "SPI Proc", ALLOCSET_DEFAULT_SIZES); - _SPI_current->execCxt = AllocSetContextCreate(TopTransactionContext, + _SPI_current->execCxt = AllocSetContextCreate(_SPI_current->atomic ? TopTransactionContext : _SPI_current->procCxt, "SPI Exec", ALLOCSET_DEFAULT_SIZES); /* ... and switch to procedure's context */ @@ -181,12 +193,85 @@ SPI_finish(void) return SPI_OK_FINISH; } +void +SPI_start_transaction(void) +{ + MemoryContext oldcontext = CurrentMemoryContext; + + StartTransactionCommand(); + MemoryContextSwitchTo(oldcontext); +} + +void +SPI_commit(void) +{ + MemoryContext oldcontext = CurrentMemoryContext; + + if (_SPI_current->atomic) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_TERMINATION), + errmsg("invalid transaction termination"))); + + /* + * This restriction is required by PLs implemented on top of SPI. They + * use subtransactions to establish exception blocks that are supposed to + * be rolled back together if there is an error. Terminating the + * top-level transaction in such a block violates that idea. 
A future PL + * implementation might have different ideas about this, in which case + * this restriction would have to be refined or the check possibly be + * moved out of SPI into the PLs. + */ + if (IsSubTransaction()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_TERMINATION), + errmsg("cannot commit while a subtransaction is active"))); + + _SPI_current->internal_xact = true; + + if (ActiveSnapshotSet()) + PopActiveSnapshot(); + CommitTransactionCommand(); + MemoryContextSwitchTo(oldcontext); + + _SPI_current->internal_xact = false; +} + +void +SPI_rollback(void) +{ + MemoryContext oldcontext = CurrentMemoryContext; + + if (_SPI_current->atomic) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_TERMINATION), + errmsg("invalid transaction termination"))); + + /* see under SPI_commit() */ + if (IsSubTransaction()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_TERMINATION), + errmsg("cannot roll back while a subtransaction is active"))); + + _SPI_current->internal_xact = true; + + AbortCurrentTransaction(); + MemoryContextSwitchTo(oldcontext); + + _SPI_current->internal_xact = false; +} + /* * Clean up SPI state at transaction commit or abort. */ void AtEOXact_SPI(bool isCommit) { + /* + * Do nothing if the transaction end was initiated by SPI. + */ + if (_SPI_current && _SPI_current->internal_xact) + return; + /* * Note that memory contexts belonging to SPI stack entries will be freed * automatically, so we can ignore them here. We just need to restore our @@ -224,6 +309,9 @@ AtEOSubXact_SPI(bool isCommit, SubTransactionId mySubid) if (connection->connectSubid != mySubid) break; /* couldn't be any underneath it either */ + if (connection->internal_xact) + break; + found = true; /* diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index 26df660f35..3abe7d6155 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -530,7 +530,8 @@ standard_ProcessUtility(PlannedStmt *pstmt, break; case T_DoStmt: - ExecuteDoStmt((DoStmt *) parsetree); + ExecuteDoStmt((DoStmt *) parsetree, + (context != PROCESS_UTILITY_TOPLEVEL || IsTransactionBlock())); break; case T_CreateTableSpaceStmt: @@ -659,7 +660,8 @@ standard_ProcessUtility(PlannedStmt *pstmt, break; case T_CallStmt: - ExecuteCallStmt(pstate, castNode(CallStmt, parsetree)); + ExecuteCallStmt(pstate, castNode(CallStmt, parsetree), + (context != PROCESS_UTILITY_TOPLEVEL || IsTransactionBlock())); break; case T_ClusterStmt: diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c index 84c68ac189..f3f0add1d6 100644 --- a/src/backend/utils/mmgr/portalmem.c +++ b/src/backend/utils/mmgr/portalmem.c @@ -742,11 +742,8 @@ PreCommit_Portals(bool isPrepare) /* * Abort processing for portals. * - * At this point we reset "active" status and run the cleanup hook if - * present, but we can't release the portal's memory until the cleanup call. - * - * The reason we need to reset active is so that we can replace the unnamed - * portal, else we'll fail to execute ROLLBACK when it arrives. + * At this point we run the cleanup hook if present, but we can't release the + * portal's memory until the cleanup call. */ void AtAbort_Portals(void) @@ -760,17 +757,6 @@ AtAbort_Portals(void) { Portal portal = hentry->portal; - /* - * See similar code in AtSubAbort_Portals(). This would fire if code - * orchestrating multiple top-level transactions within a portal, such - * as VACUUM, caught errors and continued under the same portal with a - * fresh transaction. 
No part of core PostgreSQL functions that way. - * XXX Such code would wish the portal to remain ACTIVE, as in - * PreCommit_Portals(). - */ - if (portal->status == PORTAL_ACTIVE) - MarkPortalFailed(portal); - /* * Do nothing else to cursors held over from a previous transaction. */ @@ -810,9 +796,10 @@ AtAbort_Portals(void) * Although we can't delete the portal data structure proper, we can * release any memory in subsidiary contexts, such as executor state. * The cleanup hook was the last thing that might have needed data - * there. + * there. But leave active portals alone. */ - MemoryContextDeleteChildren(portal->portalContext); + if (portal->status != PORTAL_ACTIVE) + MemoryContextDeleteChildren(portal->portalContext); } } @@ -832,6 +819,13 @@ AtCleanup_Portals(void) { Portal portal = hentry->portal; + /* + * Do not touch active portals --- this can only happen in the case of + * a multi-transaction command. + */ + if (portal->status == PORTAL_ACTIVE) + continue; + /* Do nothing to cursors held over from a previous transaction */ if (portal->createSubid == InvalidSubTransactionId) { @@ -1161,3 +1155,22 @@ ThereAreNoReadyPortals(void) return true; } + +bool +ThereArePinnedPortals(void) +{ + HASH_SEQ_STATUS status; + PortalHashEnt *hentry; + + hash_seq_init(&status, PortalHashTable); + + while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL) + { + Portal portal = hentry->portal; + + if (portal->portalPinned) + return true; + } + + return false; +} diff --git a/src/include/commands/defrem.h b/src/include/commands/defrem.h index 41007162aa..7b824c95af 100644 --- a/src/include/commands/defrem.h +++ b/src/include/commands/defrem.h @@ -59,8 +59,8 @@ extern ObjectAddress CreateTransform(CreateTransformStmt *stmt); extern void DropTransformById(Oid transformOid); extern void IsThereFunctionInNamespace(const char *proname, int pronargs, oidvector *proargtypes, Oid nspOid); -extern void ExecuteDoStmt(DoStmt *stmt); -extern void ExecuteCallStmt(ParseState *pstate, CallStmt *stmt); +extern void ExecuteDoStmt(DoStmt *stmt, bool atomic); +extern void ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic); extern Oid get_cast_oid(Oid sourcetypeid, Oid targettypeid, bool missing_ok); extern Oid get_transform_oid(Oid type_id, Oid lang_id, bool missing_ok); extern void interpret_function_parameter_list(ParseState *pstate, diff --git a/src/include/executor/spi.h b/src/include/executor/spi.h index 43580c5158..e5bdaecc4e 100644 --- a/src/include/executor/spi.h +++ b/src/include/executor/spi.h @@ -65,6 +65,8 @@ typedef struct _SPI_plan *SPIPlanPtr; #define SPI_OK_REL_UNREGISTER 16 #define SPI_OK_TD_REGISTER 17 +#define SPI_OPT_NONATOMIC (1 << 0) + /* These used to be functions, now just no-ops for backwards compatibility */ #define SPI_push() ((void) 0) #define SPI_pop() ((void) 0) @@ -78,6 +80,7 @@ extern PGDLLIMPORT SPITupleTable *SPI_tuptable; extern PGDLLIMPORT int SPI_result; extern int SPI_connect(void); +extern int SPI_connect_ext(int options); extern int SPI_finish(void); extern int SPI_execute(const char *src, bool read_only, long tcount); extern int SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const char *Nulls, @@ -156,6 +159,10 @@ extern int SPI_register_relation(EphemeralNamedRelation enr); extern int SPI_unregister_relation(const char *name); extern int SPI_register_trigger_data(TriggerData *tdata); +extern void SPI_start_transaction(void); +extern void SPI_commit(void); +extern void SPI_rollback(void); + extern void AtEOXact_SPI(bool isCommit); extern void 
AtEOSubXact_SPI(bool isCommit, SubTransactionId mySubid); diff --git a/src/include/executor/spi_priv.h b/src/include/executor/spi_priv.h index 64f8a450eb..263c8f1453 100644 --- a/src/include/executor/spi_priv.h +++ b/src/include/executor/spi_priv.h @@ -36,6 +36,10 @@ typedef struct MemoryContext savedcxt; /* context of SPI_connect's caller */ SubTransactionId connectSubid; /* ID of connecting subtransaction */ QueryEnvironment *queryEnv; /* query environment setup for SPI level */ + + /* transaction management support */ + bool atomic; /* atomic execution context, does not allow transactions */ + bool internal_xact; /* SPI-managed transaction boundary, skip cleanup */ } _SPI_connection; /* diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h index 2eb3d6d371..74b094a9c3 100644 --- a/src/include/nodes/nodes.h +++ b/src/include/nodes/nodes.h @@ -500,7 +500,8 @@ typedef enum NodeTag T_FdwRoutine, /* in foreign/fdwapi.h */ T_IndexAmRoutine, /* in access/amapi.h */ T_TsmRoutine, /* in access/tsmapi.h */ - T_ForeignKeyCacheInfo /* in utils/rel.h */ + T_ForeignKeyCacheInfo, /* in utils/rel.h */ + T_CallContext /* in nodes/parsenodes.h */ } NodeTag; /* diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 93122adae8..bbacbe144c 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -2789,6 +2789,7 @@ typedef struct InlineCodeBlock char *source_text; /* source text of anonymous code block */ Oid langOid; /* OID of selected language */ bool langIsTrusted; /* trusted property of the language */ + bool atomic; /* atomic execution context */ } InlineCodeBlock; /* ---------------------- @@ -2801,6 +2802,12 @@ typedef struct CallStmt FuncCall *funccall; } CallStmt; +typedef struct CallContext +{ + NodeTag type; + bool atomic; +} CallContext; + /* ---------------------- * Alter Object Rename Statement * ---------------------- diff --git a/src/include/utils/portal.h b/src/include/utils/portal.h index bc9d52e506..b903cb0fbe 100644 --- a/src/include/utils/portal.h +++ b/src/include/utils/portal.h @@ -231,5 +231,6 @@ extern PlannedStmt *PortalGetPrimaryStmt(Portal portal); extern void PortalCreateHoldStore(Portal portal); extern void PortalHashTableDeleteAll(void); extern bool ThereAreNoReadyPortals(void); +extern bool ThereArePinnedPortals(void); #endif /* PORTAL_H */ diff --git a/src/pl/plperl/GNUmakefile b/src/pl/plperl/GNUmakefile index b829027d05..933abb47c4 100644 --- a/src/pl/plperl/GNUmakefile +++ b/src/pl/plperl/GNUmakefile @@ -55,7 +55,7 @@ endif # win32 SHLIB_LINK = $(perl_embed_ldflags) REGRESS_OPTS = --dbname=$(PL_TESTDB) --load-extension=plperl --load-extension=plperlu -REGRESS = plperl plperl_lc plperl_trigger plperl_shared plperl_elog plperl_util plperl_init plperlu plperl_array plperl_call +REGRESS = plperl plperl_lc plperl_trigger plperl_shared plperl_elog plperl_util plperl_init plperlu plperl_array plperl_call plperl_transaction # if Perl can support two interpreters in one backend, # test plperl-and-plperlu cases ifneq ($(PERL),) diff --git a/src/pl/plperl/SPI.xs b/src/pl/plperl/SPI.xs index d9e6f579d4..b98c547e8b 100644 --- a/src/pl/plperl/SPI.xs +++ b/src/pl/plperl/SPI.xs @@ -152,6 +152,15 @@ spi_spi_cursor_close(sv) plperl_spi_cursor_close(cursor); pfree(cursor); +void +spi_spi_commit() + CODE: + plperl_spi_commit(); + +void +spi_spi_rollback() + CODE: + plperl_spi_rollback(); BOOT: items = 0; /* avoid 'unused variable' warning */ diff --git a/src/pl/plperl/expected/plperl_transaction.out 
b/src/pl/plperl/expected/plperl_transaction.out new file mode 100644 index 0000000000..bd7b7f8660 --- /dev/null +++ b/src/pl/plperl/expected/plperl_transaction.out @@ -0,0 +1,133 @@ +CREATE TABLE test1 (a int, b text); +CREATE PROCEDURE transaction_test1() +LANGUAGE plperl +AS $$ +foreach my $i (0..9) { + spi_exec_query("INSERT INTO test1 (a) VALUES ($i)"); + if ($i % 2 == 0) { + spi_commit(); + } else { + spi_rollback(); + } +} +$$; +CALL transaction_test1(); +SELECT * FROM test1; + a | b +---+--- + 0 | + 2 | + 4 | + 6 | + 8 | +(5 rows) + +TRUNCATE test1; +DO +LANGUAGE plperl +$$ +foreach my $i (0..9) { + spi_exec_query("INSERT INTO test1 (a) VALUES ($i)"); + if ($i % 2 == 0) { + spi_commit(); + } else { + spi_rollback(); + } +} +$$; +SELECT * FROM test1; + a | b +---+--- + 0 | + 2 | + 4 | + 6 | + 8 | +(5 rows) + +TRUNCATE test1; +-- not allowed in a function +CREATE FUNCTION transaction_test2() RETURNS int +LANGUAGE plperl +AS $$ +foreach my $i (0..9) { + spi_exec_query("INSERT INTO test1 (a) VALUES ($i)"); + if ($i % 2 == 0) { + spi_commit(); + } else { + spi_rollback(); + } +} +return 1; +$$; +SELECT transaction_test2(); +ERROR: invalid transaction termination at line 5. +CONTEXT: PL/Perl function "transaction_test2" +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- also not allowed if procedure is called from a function +CREATE FUNCTION transaction_test3() RETURNS int +LANGUAGE plperl +AS $$ +spi_exec_query("CALL transaction_test1()"); +return 1; +$$; +SELECT transaction_test3(); +ERROR: invalid transaction termination at line 5. at line 2. +CONTEXT: PL/Perl function "transaction_test3" +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- DO block inside function +CREATE FUNCTION transaction_test4() RETURNS int +LANGUAGE plperl +AS $$ +spi_exec_query('DO LANGUAGE plperl $x$ spi_commit(); $x$'); +return 1; +$$; +SELECT transaction_test4(); +ERROR: invalid transaction termination at line 1. at line 2. +CONTEXT: PL/Perl function "transaction_test4" +-- commit inside cursor loop +CREATE TABLE test2 (x int); +INSERT INTO test2 VALUES (0), (1), (2), (3), (4); +TRUNCATE test1; +DO LANGUAGE plperl $$ +my $sth = spi_query("SELECT * FROM test2 ORDER BY x"); +my $row; +while (defined($row = spi_fetchrow($sth))) { + spi_exec_query("INSERT INTO test1 (a) VALUES (" . $row->{x} . ")"); + spi_commit(); +} +$$; +ERROR: cannot commit transaction while a cursor is open at line 6. +CONTEXT: PL/Perl anonymous code block +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- rollback inside cursor loop +TRUNCATE test1; +DO LANGUAGE plperl $$ +my $sth = spi_query("SELECT * FROM test2 ORDER BY x"); +my $row; +while (defined($row = spi_fetchrow($sth))) { + spi_exec_query("INSERT INTO test1 (a) VALUES (" . $row->{x} . ")"); + spi_rollback(); +} +$$; +ERROR: cannot abort transaction while a cursor is open at line 6. +CONTEXT: PL/Perl anonymous code block +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +DROP TABLE test1; +DROP TABLE test2; diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c index 10feef11cf..77c41b2821 100644 --- a/src/pl/plperl/plperl.c +++ b/src/pl/plperl/plperl.c @@ -1929,7 +1929,7 @@ plperl_inline_handler(PG_FUNCTION_ARGS) current_call_data = &this_call_data; - if (SPI_connect() != SPI_OK_CONNECT) + if (SPI_connect_ext(codeblock->atomic ? 
0 : SPI_OPT_NONATOMIC) != SPI_OK_CONNECT) elog(ERROR, "could not connect to SPI manager"); select_perl_context(desc.lanpltrusted); @@ -2396,13 +2396,18 @@ plperl_call_perl_event_trigger_func(plperl_proc_desc *desc, static Datum plperl_func_handler(PG_FUNCTION_ARGS) { + bool nonatomic; plperl_proc_desc *prodesc; SV *perlret; Datum retval = 0; ReturnSetInfo *rsi; ErrorContextCallback pl_error_context; - if (SPI_connect() != SPI_OK_CONNECT) + nonatomic = fcinfo->context && + IsA(fcinfo->context, CallContext) && + !castNode(CallContext, fcinfo->context)->atomic; + + if (SPI_connect_ext(nonatomic ? SPI_OPT_NONATOMIC : 0) != SPI_OK_CONNECT) elog(ERROR, "could not connect to SPI manager"); prodesc = compile_plperl_function(fcinfo->flinfo->fn_oid, false, false); @@ -3953,6 +3958,66 @@ plperl_spi_freeplan(char *query) SPI_freeplan(plan); } +void +plperl_spi_commit(void) +{ + MemoryContext oldcontext = CurrentMemoryContext; + + PG_TRY(); + { + if (ThereArePinnedPortals()) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot commit transaction while a cursor is open"))); + + SPI_commit(); + SPI_start_transaction(); + } + PG_CATCH(); + { + ErrorData *edata; + + /* Save error info */ + MemoryContextSwitchTo(oldcontext); + edata = CopyErrorData(); + FlushErrorState(); + + /* Punt the error to Perl */ + croak_cstr(edata->message); + } + PG_END_TRY(); +} + +void +plperl_spi_rollback(void) +{ + MemoryContext oldcontext = CurrentMemoryContext; + + PG_TRY(); + { + if (ThereArePinnedPortals()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_TERMINATION), + errmsg("cannot abort transaction while a cursor is open"))); + + SPI_rollback(); + SPI_start_transaction(); + } + PG_CATCH(); + { + ErrorData *edata; + + /* Save error info */ + MemoryContextSwitchTo(oldcontext); + edata = CopyErrorData(); + FlushErrorState(); + + /* Punt the error to Perl */ + croak_cstr(edata->message); + } + PG_END_TRY(); +} + /* * Implementation of plperl's elog() function * diff --git a/src/pl/plperl/plperl.h b/src/pl/plperl/plperl.h index 78366aac04..6fe7803088 100644 --- a/src/pl/plperl/plperl.h +++ b/src/pl/plperl/plperl.h @@ -125,6 +125,8 @@ HV *plperl_spi_exec_prepared(char *, HV *, int, SV **); SV *plperl_spi_query_prepared(char *, int, SV **); void plperl_spi_freeplan(char *); void plperl_spi_cursor_close(char *); +void plperl_spi_commit(void); +void plperl_spi_rollback(void); char *plperl_sv_to_literal(SV *, char *); void plperl_util_elog(int level, SV *msg); diff --git a/src/pl/plperl/sql/plperl_transaction.sql b/src/pl/plperl/sql/plperl_transaction.sql new file mode 100644 index 0000000000..5c14d4732e --- /dev/null +++ b/src/pl/plperl/sql/plperl_transaction.sql @@ -0,0 +1,120 @@ +CREATE TABLE test1 (a int, b text); + + +CREATE PROCEDURE transaction_test1() +LANGUAGE plperl +AS $$ +foreach my $i (0..9) { + spi_exec_query("INSERT INTO test1 (a) VALUES ($i)"); + if ($i % 2 == 0) { + spi_commit(); + } else { + spi_rollback(); + } +} +$$; + +CALL transaction_test1(); + +SELECT * FROM test1; + + +TRUNCATE test1; + +DO +LANGUAGE plperl +$$ +foreach my $i (0..9) { + spi_exec_query("INSERT INTO test1 (a) VALUES ($i)"); + if ($i % 2 == 0) { + spi_commit(); + } else { + spi_rollback(); + } +} +$$; + +SELECT * FROM test1; + + +TRUNCATE test1; + +-- not allowed in a function +CREATE FUNCTION transaction_test2() RETURNS int +LANGUAGE plperl +AS $$ +foreach my $i (0..9) { + spi_exec_query("INSERT INTO test1 (a) VALUES ($i)"); + if ($i % 2 == 0) { + spi_commit(); + } else { + spi_rollback(); + } +} +return 1; 
+$$; + +SELECT transaction_test2(); + +SELECT * FROM test1; + + +-- also not allowed if procedure is called from a function +CREATE FUNCTION transaction_test3() RETURNS int +LANGUAGE plperl +AS $$ +spi_exec_query("CALL transaction_test1()"); +return 1; +$$; + +SELECT transaction_test3(); + +SELECT * FROM test1; + + +-- DO block inside function +CREATE FUNCTION transaction_test4() RETURNS int +LANGUAGE plperl +AS $$ +spi_exec_query('DO LANGUAGE plperl $x$ spi_commit(); $x$'); +return 1; +$$; + +SELECT transaction_test4(); + + +-- commit inside cursor loop +CREATE TABLE test2 (x int); +INSERT INTO test2 VALUES (0), (1), (2), (3), (4); + +TRUNCATE test1; + +DO LANGUAGE plperl $$ +my $sth = spi_query("SELECT * FROM test2 ORDER BY x"); +my $row; +while (defined($row = spi_fetchrow($sth))) { + spi_exec_query("INSERT INTO test1 (a) VALUES (" . $row->{x} . ")"); + spi_commit(); +} +$$; + +SELECT * FROM test1; + + +-- rollback inside cursor loop +TRUNCATE test1; + +DO LANGUAGE plperl $$ +my $sth = spi_query("SELECT * FROM test2 ORDER BY x"); +my $row; +while (defined($row = spi_fetchrow($sth))) { + spi_exec_query("INSERT INTO test1 (a) VALUES (" . $row->{x} . ")"); + spi_rollback(); +} +$$; + +SELECT * FROM test1; + + +DROP TABLE test1; +DROP TABLE test2; diff --git a/src/pl/plpgsql/src/Makefile b/src/pl/plpgsql/src/Makefile index 14a4d83584..91e1ada7ad 100644 --- a/src/pl/plpgsql/src/Makefile +++ b/src/pl/plpgsql/src/Makefile @@ -26,7 +26,7 @@ DATA = plpgsql.control plpgsql--1.0.sql plpgsql--unpackaged--1.0.sql REGRESS_OPTS = --dbname=$(PL_TESTDB) -REGRESS = plpgsql_call plpgsql_control +REGRESS = plpgsql_call plpgsql_control plpgsql_transaction all: all-lib diff --git a/src/pl/plpgsql/src/expected/plpgsql_transaction.out b/src/pl/plpgsql/src/expected/plpgsql_transaction.out new file mode 100644 index 0000000000..8ec22c646c --- /dev/null +++ b/src/pl/plpgsql/src/expected/plpgsql_transaction.out @@ -0,0 +1,241 @@ +CREATE TABLE test1 (a int, b text); +CREATE PROCEDURE transaction_test1() +LANGUAGE plpgsql +AS $$ +BEGIN + FOR i IN 0..9 LOOP + INSERT INTO test1 (a) VALUES (i); + IF i % 2 = 0 THEN + COMMIT; + ELSE + ROLLBACK; + END IF; + END LOOP; +END +$$; +CALL transaction_test1(); +SELECT * FROM test1; + a | b +---+--- + 0 | + 2 | + 4 | + 6 | + 8 | +(5 rows) + +TRUNCATE test1; +DO +LANGUAGE plpgsql +$$ +BEGIN + FOR i IN 0..9 LOOP + INSERT INTO test1 (a) VALUES (i); + IF i % 2 = 0 THEN + COMMIT; + ELSE + ROLLBACK; + END IF; + END LOOP; +END +$$; +SELECT * FROM test1; + a | b +---+--- + 0 | + 2 | + 4 | + 6 | + 8 | +(5 rows) + +-- transaction commands not allowed when called in transaction block +START TRANSACTION; +CALL transaction_test1(); +ERROR: invalid transaction termination +CONTEXT: PL/pgSQL function transaction_test1() line 6 at COMMIT +COMMIT; +START TRANSACTION; +DO LANGUAGE plpgsql $$ BEGIN COMMIT; END $$; +ERROR: invalid transaction termination +CONTEXT: PL/pgSQL function inline_code_block line 1 at COMMIT +COMMIT; +TRUNCATE test1; +-- not allowed in a function +CREATE FUNCTION transaction_test2() RETURNS int +LANGUAGE plpgsql +AS $$ +BEGIN + FOR i IN 0..9 LOOP + INSERT INTO test1 (a) VALUES (i); + IF i % 2 = 0 THEN + COMMIT; + ELSE + ROLLBACK; + END IF; + END LOOP; + RETURN 1; +END +$$; +SELECT transaction_test2(); +ERROR: invalid transaction termination +CONTEXT: PL/pgSQL function transaction_test2() line 6 at COMMIT +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- also not allowed if procedure is called from a function +CREATE FUNCTION transaction_test3() RETURNS int +LANGUAGE 
plpgsql +AS $$ +BEGIN + CALL transaction_test1(); + RETURN 1; +END; +$$; +SELECT transaction_test3(); +ERROR: invalid transaction termination +CONTEXT: PL/pgSQL function transaction_test1() line 6 at COMMIT +SQL statement "CALL transaction_test1()" +PL/pgSQL function transaction_test3() line 3 at SQL statement +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- DO block inside function +CREATE FUNCTION transaction_test4() RETURNS int +LANGUAGE plpgsql +AS $$ +BEGIN + EXECUTE 'DO LANGUAGE plpgsql $x$ BEGIN COMMIT; END $x$'; + RETURN 1; +END; +$$; +SELECT transaction_test4(); +ERROR: invalid transaction termination +CONTEXT: PL/pgSQL function inline_code_block line 1 at COMMIT +SQL statement "DO LANGUAGE plpgsql $x$ BEGIN COMMIT; END $x$" +PL/pgSQL function transaction_test4() line 3 at EXECUTE +-- proconfig settings currently disallow transaction statements +CREATE PROCEDURE transaction_test5() +LANGUAGE plpgsql +SET work_mem = 555 +AS $$ +BEGIN + COMMIT; +END; +$$; +CALL transaction_test5(); +ERROR: invalid transaction termination +CONTEXT: PL/pgSQL function transaction_test5() line 3 at COMMIT +-- commit inside cursor loop +CREATE TABLE test2 (x int); +INSERT INTO test2 VALUES (0), (1), (2), (3), (4); +TRUNCATE test1; +DO LANGUAGE plpgsql $$ +DECLARE + r RECORD; +BEGIN + FOR r IN SELECT * FROM test2 ORDER BY x LOOP + INSERT INTO test1 (a) VALUES (r.x); + COMMIT; + END LOOP; +END; +$$; +ERROR: committing inside a cursor loop is not supported +CONTEXT: PL/pgSQL function inline_code_block line 7 at COMMIT +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- rollback inside cursor loop +TRUNCATE test1; +DO LANGUAGE plpgsql $$ +DECLARE + r RECORD; +BEGIN + FOR r IN SELECT * FROM test2 ORDER BY x LOOP + INSERT INTO test1 (a) VALUES (r.x); + ROLLBACK; + END LOOP; +END; +$$; +ERROR: cannot abort transaction inside a cursor loop +CONTEXT: PL/pgSQL function inline_code_block line 7 at ROLLBACK +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- commit inside block with exception handler +TRUNCATE test1; +DO LANGUAGE plpgsql $$ +BEGIN + BEGIN + INSERT INTO test1 (a) VALUES (1); + COMMIT; + INSERT INTO test1 (a) VALUES (1/0); + COMMIT; + EXCEPTION + WHEN division_by_zero THEN + RAISE NOTICE 'caught division_by_zero'; + END; +END; +$$; +ERROR: cannot commit while a subtransaction is active +CONTEXT: PL/pgSQL function inline_code_block line 5 at COMMIT +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- rollback inside block with exception handler +TRUNCATE test1; +DO LANGUAGE plpgsql $$ +BEGIN + BEGIN + INSERT INTO test1 (a) VALUES (1); + ROLLBACK; + INSERT INTO test1 (a) VALUES (1/0); + ROLLBACK; + EXCEPTION + WHEN division_by_zero THEN + RAISE NOTICE 'caught division_by_zero'; + END; +END; +$$; +ERROR: cannot roll back while a subtransaction is active +CONTEXT: PL/pgSQL function inline_code_block line 5 at ROLLBACK +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- COMMIT failures +DO LANGUAGE plpgsql $$ +BEGIN + CREATE TABLE test3 (y int UNIQUE DEFERRABLE INITIALLY DEFERRED); + COMMIT; + INSERT INTO test3 (y) VALUES (1); + COMMIT; + INSERT INTO test3 (y) VALUES (1); + INSERT INTO test3 (y) VALUES (2); + COMMIT; + INSERT INTO test3 (y) VALUES (3); -- won't get here +END; +$$; +ERROR: duplicate key value violates unique constraint "test3_y_key" +DETAIL: Key (y)=(1) already exists. 
+CONTEXT: PL/pgSQL function inline_code_block line 9 at COMMIT +SELECT * FROM test3; + y +--- + 1 +(1 row) + +DROP TABLE test1; +DROP TABLE test2; +DROP TABLE test3; diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index d096f242cd..4478c5332e 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -290,6 +290,10 @@ static int exec_stmt_dynexecute(PLpgSQL_execstate *estate, PLpgSQL_stmt_dynexecute *stmt); static int exec_stmt_dynfors(PLpgSQL_execstate *estate, PLpgSQL_stmt_dynfors *stmt); +static int exec_stmt_commit(PLpgSQL_execstate *estate, + PLpgSQL_stmt_commit *stmt); +static int exec_stmt_rollback(PLpgSQL_execstate *estate, + PLpgSQL_stmt_rollback *stmt); static void plpgsql_estate_setup(PLpgSQL_execstate *estate, PLpgSQL_function *func, @@ -1731,6 +1735,14 @@ exec_stmt(PLpgSQL_execstate *estate, PLpgSQL_stmt *stmt) rc = exec_stmt_close(estate, (PLpgSQL_stmt_close *) stmt); break; + case PLPGSQL_STMT_COMMIT: + rc = exec_stmt_commit(estate, (PLpgSQL_stmt_commit *) stmt); + break; + + case PLPGSQL_STMT_ROLLBACK: + rc = exec_stmt_rollback(estate, (PLpgSQL_stmt_rollback *) stmt); + break; + default: estate->err_stmt = save_estmt; elog(ERROR, "unrecognized cmdtype: %d", stmt->cmd_type); @@ -4264,6 +4276,57 @@ exec_stmt_close(PLpgSQL_execstate *estate, PLpgSQL_stmt_close *stmt) return PLPGSQL_RC_OK; } +/* + * exec_stmt_commit + * + * Commit the transaction. + */ +static int +exec_stmt_commit(PLpgSQL_execstate *estate, PLpgSQL_stmt_commit *stmt) +{ + /* + * XXX This could be implemented by converting the pinned portals to + * holdable ones and organizing the cleanup separately. + */ + if (ThereArePinnedPortals()) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("committing inside a cursor loop is not supported"))); + + SPI_commit(); + SPI_start_transaction(); + + estate->simple_eval_estate = NULL; + plpgsql_create_econtext(estate); + + return PLPGSQL_RC_OK; +} + +/* + * exec_stmt_rollback + * + * Abort the transaction. + */ +static int +exec_stmt_rollback(PLpgSQL_execstate *estate, PLpgSQL_stmt_rollback *stmt) +{ + /* + * Unlike the COMMIT case above, this might not make sense at all, + * especially if the query driving the cursor loop has side effects. + */ + if (ThereArePinnedPortals()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_TERMINATION), + errmsg("cannot abort transaction inside a cursor loop"))); + + SPI_rollback(); + SPI_start_transaction(); + + estate->simple_eval_estate = NULL; + plpgsql_create_econtext(estate); + + return PLPGSQL_RC_OK; +} /* ---------- * exec_assign_expr Put an expression's result into a variable. 
@@ -6767,8 +6830,7 @@ plpgsql_xact_cb(XactEvent event, void *arg) */ if (event == XACT_EVENT_COMMIT || event == XACT_EVENT_PREPARE) { - /* Shouldn't be any econtext stack entries left at commit */ - Assert(simple_econtext_stack == NULL); + simple_econtext_stack = NULL; if (shared_simple_eval_estate) FreeExecutorState(shared_simple_eval_estate); diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c index 80b8448b7f..f0e85fcfcd 100644 --- a/src/pl/plpgsql/src/pl_funcs.c +++ b/src/pl/plpgsql/src/pl_funcs.c @@ -284,6 +284,10 @@ plpgsql_stmt_typename(PLpgSQL_stmt *stmt) return "CLOSE"; case PLPGSQL_STMT_PERFORM: return "PERFORM"; + case PLPGSQL_STMT_COMMIT: + return "COMMIT"; + case PLPGSQL_STMT_ROLLBACK: + return "ROLLBACK"; } return "unknown"; @@ -363,6 +367,8 @@ static void free_open(PLpgSQL_stmt_open *stmt); static void free_fetch(PLpgSQL_stmt_fetch *stmt); static void free_close(PLpgSQL_stmt_close *stmt); static void free_perform(PLpgSQL_stmt_perform *stmt); +static void free_commit(PLpgSQL_stmt_commit *stmt); +static void free_rollback(PLpgSQL_stmt_rollback *stmt); static void free_expr(PLpgSQL_expr *expr); @@ -443,6 +449,12 @@ free_stmt(PLpgSQL_stmt *stmt) case PLPGSQL_STMT_PERFORM: free_perform((PLpgSQL_stmt_perform *) stmt); break; + case PLPGSQL_STMT_COMMIT: + free_commit((PLpgSQL_stmt_commit *) stmt); + break; + case PLPGSQL_STMT_ROLLBACK: + free_rollback((PLpgSQL_stmt_rollback *) stmt); + break; default: elog(ERROR, "unrecognized cmd_type: %d", stmt->cmd_type); break; @@ -590,6 +602,16 @@ free_perform(PLpgSQL_stmt_perform *stmt) free_expr(stmt->expr); } +static void +free_commit(PLpgSQL_stmt_commit *stmt) +{ +} + +static void +free_rollback(PLpgSQL_stmt_rollback *stmt) +{ +} + static void free_exit(PLpgSQL_stmt_exit *stmt) { @@ -777,6 +799,8 @@ static void dump_fetch(PLpgSQL_stmt_fetch *stmt); static void dump_cursor_direction(PLpgSQL_stmt_fetch *stmt); static void dump_close(PLpgSQL_stmt_close *stmt); static void dump_perform(PLpgSQL_stmt_perform *stmt); +static void dump_commit(PLpgSQL_stmt_commit *stmt); +static void dump_rollback(PLpgSQL_stmt_rollback *stmt); static void dump_expr(PLpgSQL_expr *expr); @@ -867,6 +891,12 @@ dump_stmt(PLpgSQL_stmt *stmt) case PLPGSQL_STMT_PERFORM: dump_perform((PLpgSQL_stmt_perform *) stmt); break; + case PLPGSQL_STMT_COMMIT: + dump_commit((PLpgSQL_stmt_commit *) stmt); + break; + case PLPGSQL_STMT_ROLLBACK: + dump_rollback((PLpgSQL_stmt_rollback *) stmt); + break; default: elog(ERROR, "unrecognized cmd_type: %d", stmt->cmd_type); break; @@ -1239,6 +1269,20 @@ dump_perform(PLpgSQL_stmt_perform *stmt) printf("\n"); } +static void +dump_commit(PLpgSQL_stmt_commit *stmt) +{ + dump_ind(); + printf("COMMIT\n"); +} + +static void +dump_rollback(PLpgSQL_stmt_rollback *stmt) +{ + dump_ind(); + printf("ROLLBACK\n"); +} + static void dump_exit(PLpgSQL_stmt_exit *stmt) { diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y index d9cab1ad7e..42f6a2e161 100644 --- a/src/pl/plpgsql/src/pl_gram.y +++ b/src/pl/plpgsql/src/pl_gram.y @@ -198,6 +198,7 @@ static void check_raise_parameters(PLpgSQL_stmt_raise *stmt); %type stmt_return stmt_raise stmt_assert stmt_execsql %type stmt_dynexecute stmt_for stmt_perform stmt_getdiag %type stmt_open stmt_fetch stmt_move stmt_close stmt_null +%type stmt_commit stmt_rollback %type stmt_case stmt_foreach_a %type proc_exceptions @@ -260,6 +261,7 @@ static void check_raise_parameters(PLpgSQL_stmt_raise *stmt); %token K_COLLATE %token K_COLUMN %token K_COLUMN_NAME +%token K_COMMIT %token 
K_CONSTANT %token K_CONSTRAINT %token K_CONSTRAINT_NAME @@ -325,6 +327,7 @@ static void check_raise_parameters(PLpgSQL_stmt_raise *stmt); %token K_RETURN %token K_RETURNED_SQLSTATE %token K_REVERSE +%token K_ROLLBACK %token K_ROW_COUNT %token K_ROWTYPE %token K_SCHEMA @@ -897,6 +900,10 @@ proc_stmt : pl_block ';' { $$ = $1; } | stmt_null { $$ = $1; } + | stmt_commit + { $$ = $1; } + | stmt_rollback + { $$ = $1; } ; stmt_perform : K_PERFORM expr_until_semi @@ -2151,6 +2158,31 @@ stmt_null : K_NULL ';' } ; +stmt_commit : K_COMMIT ';' + { + PLpgSQL_stmt_commit *new; + + new = palloc(sizeof(PLpgSQL_stmt_commit)); + new->cmd_type = PLPGSQL_STMT_COMMIT; + new->lineno = plpgsql_location_to_lineno(@1); + + $$ = (PLpgSQL_stmt *)new; + } + ; + +stmt_rollback : K_ROLLBACK ';' + { + PLpgSQL_stmt_rollback *new; + + new = palloc(sizeof(PLpgSQL_stmt_rollback)); + new->cmd_type = PLPGSQL_STMT_ROLLBACK; + new->lineno = plpgsql_location_to_lineno(@1); + + $$ = (PLpgSQL_stmt *)new; + } + ; + + cursor_variable : T_DATUM { /* @@ -2387,6 +2419,7 @@ unreserved_keyword : | K_COLLATE | K_COLUMN | K_COLUMN_NAME + | K_COMMIT | K_CONSTANT | K_CONSTRAINT | K_CONSTRAINT_NAME @@ -2438,6 +2471,7 @@ unreserved_keyword : | K_RETURN | K_RETURNED_SQLSTATE | K_REVERSE + | K_ROLLBACK | K_ROW_COUNT | K_ROWTYPE | K_SCHEMA diff --git a/src/pl/plpgsql/src/pl_handler.c b/src/pl/plpgsql/src/pl_handler.c index 4c2ba2f734..c49428d923 100644 --- a/src/pl/plpgsql/src/pl_handler.c +++ b/src/pl/plpgsql/src/pl_handler.c @@ -219,15 +219,20 @@ PG_FUNCTION_INFO_V1(plpgsql_call_handler); Datum plpgsql_call_handler(PG_FUNCTION_ARGS) { + bool nonatomic; PLpgSQL_function *func; PLpgSQL_execstate *save_cur_estate; Datum retval; int rc; + nonatomic = fcinfo->context && + IsA(fcinfo->context, CallContext) && + !castNode(CallContext, fcinfo->context)->atomic; + /* * Connect to SPI manager */ - if ((rc = SPI_connect()) != SPI_OK_CONNECT) + if ((rc = SPI_connect_ext(nonatomic ? SPI_OPT_NONATOMIC : 0)) != SPI_OK_CONNECT) elog(ERROR, "SPI_connect failed: %s", SPI_result_code_string(rc)); /* Find or compile the function */ @@ -301,7 +306,7 @@ plpgsql_inline_handler(PG_FUNCTION_ARGS) /* * Connect to SPI manager */ - if ((rc = SPI_connect()) != SPI_OK_CONNECT) + if ((rc = SPI_connect_ext(codeblock->atomic ? 
0 : SPI_OPT_NONATOMIC)) != SPI_OK_CONNECT) elog(ERROR, "SPI_connect failed: %s", SPI_result_code_string(rc)); /* Compile the anonymous code block */ diff --git a/src/pl/plpgsql/src/pl_scanner.c b/src/pl/plpgsql/src/pl_scanner.c index ee9aef8bbc..12a3e6b818 100644 --- a/src/pl/plpgsql/src/pl_scanner.c +++ b/src/pl/plpgsql/src/pl_scanner.c @@ -106,6 +106,7 @@ static const ScanKeyword unreserved_keywords[] = { PG_KEYWORD("collate", K_COLLATE, UNRESERVED_KEYWORD) PG_KEYWORD("column", K_COLUMN, UNRESERVED_KEYWORD) PG_KEYWORD("column_name", K_COLUMN_NAME, UNRESERVED_KEYWORD) + PG_KEYWORD("commit", K_COMMIT, UNRESERVED_KEYWORD) PG_KEYWORD("constant", K_CONSTANT, UNRESERVED_KEYWORD) PG_KEYWORD("constraint", K_CONSTRAINT, UNRESERVED_KEYWORD) PG_KEYWORD("constraint_name", K_CONSTRAINT_NAME, UNRESERVED_KEYWORD) @@ -158,6 +159,7 @@ static const ScanKeyword unreserved_keywords[] = { PG_KEYWORD("return", K_RETURN, UNRESERVED_KEYWORD) PG_KEYWORD("returned_sqlstate", K_RETURNED_SQLSTATE, UNRESERVED_KEYWORD) PG_KEYWORD("reverse", K_REVERSE, UNRESERVED_KEYWORD) + PG_KEYWORD("rollback", K_ROLLBACK, UNRESERVED_KEYWORD) PG_KEYWORD("row_count", K_ROW_COUNT, UNRESERVED_KEYWORD) PG_KEYWORD("rowtype", K_ROWTYPE, UNRESERVED_KEYWORD) PG_KEYWORD("schema", K_SCHEMA, UNRESERVED_KEYWORD) diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index c571afa34b..a9b9d91de7 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -105,7 +105,9 @@ typedef enum PLpgSQL_stmt_type PLPGSQL_STMT_OPEN, PLPGSQL_STMT_FETCH, PLPGSQL_STMT_CLOSE, - PLPGSQL_STMT_PERFORM + PLPGSQL_STMT_PERFORM, + PLPGSQL_STMT_COMMIT, + PLPGSQL_STMT_ROLLBACK } PLpgSQL_stmt_type; /* @@ -433,6 +435,24 @@ typedef struct PLpgSQL_stmt_perform PLpgSQL_expr *expr; } PLpgSQL_stmt_perform; +/* + * COMMIT statement + */ +typedef struct PLpgSQL_stmt_commit +{ + PLpgSQL_stmt_type cmd_type; + int lineno; +} PLpgSQL_stmt_commit; + +/* + * ROLLBACK statement + */ +typedef struct PLpgSQL_stmt_rollback +{ + PLpgSQL_stmt_type cmd_type; + int lineno; +} PLpgSQL_stmt_rollback; + /* * GET DIAGNOSTICS item */ diff --git a/src/pl/plpgsql/src/sql/plpgsql_transaction.sql b/src/pl/plpgsql/src/sql/plpgsql_transaction.sql new file mode 100644 index 0000000000..02ee735079 --- /dev/null +++ b/src/pl/plpgsql/src/sql/plpgsql_transaction.sql @@ -0,0 +1,215 @@ +CREATE TABLE test1 (a int, b text); + + +CREATE PROCEDURE transaction_test1() +LANGUAGE plpgsql +AS $$ +BEGIN + FOR i IN 0..9 LOOP + INSERT INTO test1 (a) VALUES (i); + IF i % 2 = 0 THEN + COMMIT; + ELSE + ROLLBACK; + END IF; + END LOOP; +END +$$; + +CALL transaction_test1(); + +SELECT * FROM test1; + + +TRUNCATE test1; + +DO +LANGUAGE plpgsql +$$ +BEGIN + FOR i IN 0..9 LOOP + INSERT INTO test1 (a) VALUES (i); + IF i % 2 = 0 THEN + COMMIT; + ELSE + ROLLBACK; + END IF; + END LOOP; +END +$$; + +SELECT * FROM test1; + + +-- transaction commands not allowed when called in transaction block +START TRANSACTION; +CALL transaction_test1(); +COMMIT; + +START TRANSACTION; +DO LANGUAGE plpgsql $$ BEGIN COMMIT; END $$; +COMMIT; + + +TRUNCATE test1; + +-- not allowed in a function +CREATE FUNCTION transaction_test2() RETURNS int +LANGUAGE plpgsql +AS $$ +BEGIN + FOR i IN 0..9 LOOP + INSERT INTO test1 (a) VALUES (i); + IF i % 2 = 0 THEN + COMMIT; + ELSE + ROLLBACK; + END IF; + END LOOP; + RETURN 1; +END +$$; + +SELECT transaction_test2(); + +SELECT * FROM test1; + + +-- also not allowed if procedure is called from a function +CREATE FUNCTION transaction_test3() RETURNS int +LANGUAGE plpgsql +AS $$ +BEGIN 
+ CALL transaction_test1(); + RETURN 1; +END; +$$; + +SELECT transaction_test3(); + +SELECT * FROM test1; + + +-- DO block inside function +CREATE FUNCTION transaction_test4() RETURNS int +LANGUAGE plpgsql +AS $$ +BEGIN + EXECUTE 'DO LANGUAGE plpgsql $x$ BEGIN COMMIT; END $x$'; + RETURN 1; +END; +$$; + +SELECT transaction_test4(); + + +-- proconfig settings currently disallow transaction statements +CREATE PROCEDURE transaction_test5() +LANGUAGE plpgsql +SET work_mem = 555 +AS $$ +BEGIN + COMMIT; +END; +$$; + +CALL transaction_test5(); + + +-- commit inside cursor loop +CREATE TABLE test2 (x int); +INSERT INTO test2 VALUES (0), (1), (2), (3), (4); + +TRUNCATE test1; + +DO LANGUAGE plpgsql $$ +DECLARE + r RECORD; +BEGIN + FOR r IN SELECT * FROM test2 ORDER BY x LOOP + INSERT INTO test1 (a) VALUES (r.x); + COMMIT; + END LOOP; +END; +$$; + +SELECT * FROM test1; + + +-- rollback inside cursor loop +TRUNCATE test1; + +DO LANGUAGE plpgsql $$ +DECLARE + r RECORD; +BEGIN + FOR r IN SELECT * FROM test2 ORDER BY x LOOP + INSERT INTO test1 (a) VALUES (r.x); + ROLLBACK; + END LOOP; +END; +$$; + +SELECT * FROM test1; + + +-- commit inside block with exception handler +TRUNCATE test1; + +DO LANGUAGE plpgsql $$ +BEGIN + BEGIN + INSERT INTO test1 (a) VALUES (1); + COMMIT; + INSERT INTO test1 (a) VALUES (1/0); + COMMIT; + EXCEPTION + WHEN division_by_zero THEN + RAISE NOTICE 'caught division_by_zero'; + END; +END; +$$; + +SELECT * FROM test1; + + +-- rollback inside block with exception handler +TRUNCATE test1; + +DO LANGUAGE plpgsql $$ +BEGIN + BEGIN + INSERT INTO test1 (a) VALUES (1); + ROLLBACK; + INSERT INTO test1 (a) VALUES (1/0); + ROLLBACK; + EXCEPTION + WHEN division_by_zero THEN + RAISE NOTICE 'caught division_by_zero'; + END; +END; +$$; + +SELECT * FROM test1; + + +-- COMMIT failures +DO LANGUAGE plpgsql $$ +BEGIN + CREATE TABLE test3 (y int UNIQUE DEFERRABLE INITIALLY DEFERRED); + COMMIT; + INSERT INTO test3 (y) VALUES (1); + COMMIT; + INSERT INTO test3 (y) VALUES (1); + INSERT INTO test3 (y) VALUES (2); + COMMIT; + INSERT INTO test3 (y) VALUES (3); -- won't get here +END; +$$; + +SELECT * FROM test3; + + +DROP TABLE test1; +DROP TABLE test2; +DROP TABLE test3; diff --git a/src/pl/plpython/Makefile b/src/pl/plpython/Makefile index cc91afebde..d09910835d 100644 --- a/src/pl/plpython/Makefile +++ b/src/pl/plpython/Makefile @@ -90,6 +90,7 @@ REGRESS = \ plpython_quote \ plpython_composite \ plpython_subtransaction \ + plpython_transaction \ plpython_drop REGRESS_PLPYTHON3_MANGLE := $(REGRESS) diff --git a/src/pl/plpython/expected/plpython_test.out b/src/pl/plpython/expected/plpython_test.out index 847e4cc412..39b994f446 100644 --- a/src/pl/plpython/expected/plpython_test.out +++ b/src/pl/plpython/expected/plpython_test.out @@ -48,6 +48,7 @@ select module_contents(); Error Fatal SPIError + commit cursor debug error @@ -60,10 +61,11 @@ select module_contents(); quote_ident quote_literal quote_nullable + rollback spiexceptions subtransaction warning -(18 rows) +(20 rows) CREATE FUNCTION elog_test_basic() RETURNS void AS $$ diff --git a/src/pl/plpython/expected/plpython_transaction.out b/src/pl/plpython/expected/plpython_transaction.out new file mode 100644 index 0000000000..1fadc69b63 --- /dev/null +++ b/src/pl/plpython/expected/plpython_transaction.out @@ -0,0 +1,135 @@ +CREATE TABLE test1 (a int, b text); +CREATE PROCEDURE transaction_test1() +LANGUAGE plpythonu +AS $$ +for i in range(0, 10): + plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i) + if i % 2 == 0: + plpy.commit() + else: + 
plpy.rollback() +$$; +CALL transaction_test1(); +SELECT * FROM test1; + a | b +---+--- + 0 | + 2 | + 4 | + 6 | + 8 | +(5 rows) + +TRUNCATE test1; +DO +LANGUAGE plpythonu +$$ +for i in range(0, 10): + plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i) + if i % 2 == 0: + plpy.commit() + else: + plpy.rollback() +$$; +SELECT * FROM test1; + a | b +---+--- + 0 | + 2 | + 4 | + 6 | + 8 | +(5 rows) + +TRUNCATE test1; +-- not allowed in a function +CREATE FUNCTION transaction_test2() RETURNS int +LANGUAGE plpythonu +AS $$ +for i in range(0, 10): + plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i) + if i % 2 == 0: + plpy.commit() + else: + plpy.rollback() +return 1 +$$; +SELECT transaction_test2(); +ERROR: invalid transaction termination +CONTEXT: PL/Python function "transaction_test2" +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- also not allowed if procedure is called from a function +CREATE FUNCTION transaction_test3() RETURNS int +LANGUAGE plpythonu +AS $$ +plpy.execute("CALL transaction_test1()") +return 1 +$$; +SELECT transaction_test3(); +ERROR: spiexceptions.InvalidTransactionTermination: invalid transaction termination +CONTEXT: Traceback (most recent call last): + PL/Python function "transaction_test3", line 2, in + plpy.execute("CALL transaction_test1()") +PL/Python function "transaction_test3" +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- DO block inside function +CREATE FUNCTION transaction_test4() RETURNS int +LANGUAGE plpythonu +AS $$ +plpy.execute("DO LANGUAGE plpythonu $x$ plpy.commit() $x$") +return 1 +$$; +SELECT transaction_test4(); +ERROR: spiexceptions.InvalidTransactionTermination: invalid transaction termination +CONTEXT: Traceback (most recent call last): + PL/Python function "transaction_test4", line 2, in + plpy.execute("DO LANGUAGE plpythonu $x$ plpy.commit() $x$") +PL/Python function "transaction_test4" +-- commit inside subtransaction (prohibited) +DO LANGUAGE plpythonu $$ +with plpy.subtransaction(): + plpy.commit() +$$; +WARNING: forcibly aborting a subtransaction that has not been exited +ERROR: cannot commit while a subtransaction is active +CONTEXT: PL/Python anonymous code block +-- commit inside cursor loop +CREATE TABLE test2 (x int); +INSERT INTO test2 VALUES (0), (1), (2), (3), (4); +TRUNCATE test1; +DO LANGUAGE plpythonu $$ +for row in plpy.cursor("SELECT * FROM test2 ORDER BY x"): + plpy.execute("INSERT INTO test1 (a) VALUES (%s)" % row['x']) + plpy.commit() +$$; +ERROR: cannot commit transaction while a cursor is open +CONTEXT: PL/Python anonymous code block +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- rollback inside cursor loop +TRUNCATE test1; +DO LANGUAGE plpythonu $$ +for row in plpy.cursor("SELECT * FROM test2 ORDER BY x"): + plpy.execute("INSERT INTO test1 (a) VALUES (%s)" % row['x']) + plpy.rollback() +$$; +ERROR: cannot abort transaction while a cursor is open +CONTEXT: PL/Python anonymous code block +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +DROP TABLE test1; +DROP TABLE test2; diff --git a/src/pl/plpython/plpy_main.c b/src/pl/plpython/plpy_main.c index 695de30583..5a197ce27a 100644 --- a/src/pl/plpython/plpy_main.c +++ b/src/pl/plpython/plpy_main.c @@ -60,7 +60,7 @@ static void plpython_error_callback(void *arg); static void plpython_inline_error_callback(void *arg); static void PLy_init_interp(void); -static PLyExecutionContext *PLy_push_execution_context(void); +static PLyExecutionContext *PLy_push_execution_context(bool atomic_context); static void PLy_pop_execution_context(void); /* static state for 
Python library conflict detection */ @@ -219,14 +219,19 @@ plpython2_validator(PG_FUNCTION_ARGS) Datum plpython_call_handler(PG_FUNCTION_ARGS) { + bool nonatomic; Datum retval; PLyExecutionContext *exec_ctx; ErrorContextCallback plerrcontext; PLy_initialize(); + nonatomic = fcinfo->context && + IsA(fcinfo->context, CallContext) && + !castNode(CallContext, fcinfo->context)->atomic; + /* Note: SPI_finish() happens in plpy_exec.c, which is dubious design */ - if (SPI_connect() != SPI_OK_CONNECT) + if (SPI_connect_ext(nonatomic ? SPI_OPT_NONATOMIC : 0) != SPI_OK_CONNECT) elog(ERROR, "SPI_connect failed"); /* @@ -235,7 +240,7 @@ plpython_call_handler(PG_FUNCTION_ARGS) * here and the PG_TRY. (plpython_error_callback expects the stack entry * to be there, so we have to make the context first.) */ - exec_ctx = PLy_push_execution_context(); + exec_ctx = PLy_push_execution_context(!nonatomic); /* * Setup error traceback support for ereport() @@ -303,7 +308,7 @@ plpython_inline_handler(PG_FUNCTION_ARGS) PLy_initialize(); /* Note: SPI_finish() happens in plpy_exec.c, which is dubious design */ - if (SPI_connect() != SPI_OK_CONNECT) + if (SPI_connect_ext(codeblock->atomic ? 0 : SPI_OPT_NONATOMIC) != SPI_OK_CONNECT) elog(ERROR, "SPI_connect failed"); MemSet(&fake_fcinfo, 0, sizeof(fake_fcinfo)); @@ -332,7 +337,7 @@ plpython_inline_handler(PG_FUNCTION_ARGS) * need the stack entry, but for consistency with plpython_call_handler we * do it in this order.) */ - exec_ctx = PLy_push_execution_context(); + exec_ctx = PLy_push_execution_context(codeblock->atomic); /* * Setup error traceback support for ereport() @@ -430,12 +435,14 @@ PLy_get_scratch_context(PLyExecutionContext *context) } static PLyExecutionContext * -PLy_push_execution_context(void) +PLy_push_execution_context(bool atomic_context) { PLyExecutionContext *context; + /* Pick a memory context similar to what SPI uses. */ context = (PLyExecutionContext *) - MemoryContextAlloc(TopTransactionContext, sizeof(PLyExecutionContext)); + MemoryContextAlloc(atomic_context ? 
TopTransactionContext : PortalContext, + sizeof(PLyExecutionContext)); context->curr_proc = NULL; context->scratch_ctx = NULL; context->next = PLy_execution_contexts; diff --git a/src/pl/plpython/plpy_plpymodule.c b/src/pl/plpython/plpy_plpymodule.c index 23f99e20ca..3d7dd13f0c 100644 --- a/src/pl/plpython/plpy_plpymodule.c +++ b/src/pl/plpython/plpy_plpymodule.c @@ -6,8 +6,10 @@ #include "postgres.h" +#include "access/xact.h" #include "mb/pg_wchar.h" #include "utils/builtins.h" +#include "utils/snapmgr.h" #include "plpython.h" @@ -15,6 +17,7 @@ #include "plpy_cursorobject.h" #include "plpy_elog.h" +#include "plpy_main.h" #include "plpy_planobject.h" #include "plpy_resultobject.h" #include "plpy_spi.h" @@ -41,6 +44,8 @@ static PyObject *PLy_fatal(PyObject *self, PyObject *args, PyObject *kw); static PyObject *PLy_quote_literal(PyObject *self, PyObject *args); static PyObject *PLy_quote_nullable(PyObject *self, PyObject *args); static PyObject *PLy_quote_ident(PyObject *self, PyObject *args); +static PyObject *PLy_commit(PyObject *self, PyObject *args); +static PyObject *PLy_rollback(PyObject *self, PyObject *args); /* A list of all known exceptions, generated from backend/utils/errcodes.txt */ @@ -95,6 +100,12 @@ static PyMethodDef PLy_methods[] = { */ {"cursor", PLy_cursor, METH_VARARGS, NULL}, + /* + * transaction control + */ + {"commit", PLy_commit, METH_NOARGS, NULL}, + {"rollback", PLy_rollback, METH_NOARGS, NULL}, + {NULL, NULL, 0, NULL} }; @@ -577,3 +588,41 @@ PLy_output(volatile int level, PyObject *self, PyObject *args, PyObject *kw) */ Py_RETURN_NONE; } + +static PyObject * +PLy_commit(PyObject *self, PyObject *args) +{ + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); + + if (ThereArePinnedPortals()) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot commit transaction while a cursor is open"))); + + SPI_commit(); + SPI_start_transaction(); + + /* was cleared at transaction end, reset pointer */ + exec_ctx->scratch_ctx = NULL; + + Py_RETURN_NONE; +} + +static PyObject * +PLy_rollback(PyObject *self, PyObject *args) +{ + PLyExecutionContext *exec_ctx = PLy_current_execution_context(); + + if (ThereArePinnedPortals()) + ereport(ERROR, + (errcode(ERRCODE_INVALID_TRANSACTION_TERMINATION), + errmsg("cannot abort transaction while a cursor is open"))); + + SPI_rollback(); + SPI_start_transaction(); + + /* was cleared at transaction end, reset pointer */ + exec_ctx->scratch_ctx = NULL; + + Py_RETURN_NONE; +} diff --git a/src/pl/plpython/sql/plpython_transaction.sql b/src/pl/plpython/sql/plpython_transaction.sql new file mode 100644 index 0000000000..36c7b2ef38 --- /dev/null +++ b/src/pl/plpython/sql/plpython_transaction.sql @@ -0,0 +1,115 @@ +CREATE TABLE test1 (a int, b text); + + +CREATE PROCEDURE transaction_test1() +LANGUAGE plpythonu +AS $$ +for i in range(0, 10): + plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i) + if i % 2 == 0: + plpy.commit() + else: + plpy.rollback() +$$; + +CALL transaction_test1(); + +SELECT * FROM test1; + + +TRUNCATE test1; + +DO +LANGUAGE plpythonu +$$ +for i in range(0, 10): + plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i) + if i % 2 == 0: + plpy.commit() + else: + plpy.rollback() +$$; + +SELECT * FROM test1; + + +TRUNCATE test1; + +-- not allowed in a function +CREATE FUNCTION transaction_test2() RETURNS int +LANGUAGE plpythonu +AS $$ +for i in range(0, 10): + plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i) + if i % 2 == 0: + plpy.commit() + else: + plpy.rollback() +return 1 +$$; + +SELECT 
transaction_test2(); + +SELECT * FROM test1; + + +-- also not allowed if procedure is called from a function +CREATE FUNCTION transaction_test3() RETURNS int +LANGUAGE plpythonu +AS $$ +plpy.execute("CALL transaction_test1()") +return 1 +$$; + +SELECT transaction_test3(); + +SELECT * FROM test1; + + +-- DO block inside function +CREATE FUNCTION transaction_test4() RETURNS int +LANGUAGE plpythonu +AS $$ +plpy.execute("DO LANGUAGE plpythonu $x$ plpy.commit() $x$") +return 1 +$$; + +SELECT transaction_test4(); + + +-- commit inside subtransaction (prohibited) +DO LANGUAGE plpythonu $$ +with plpy.subtransaction(): + plpy.commit() +$$; + + +-- commit inside cursor loop +CREATE TABLE test2 (x int); +INSERT INTO test2 VALUES (0), (1), (2), (3), (4); + +TRUNCATE test1; + +DO LANGUAGE plpythonu $$ +for row in plpy.cursor("SELECT * FROM test2 ORDER BY x"): + plpy.execute("INSERT INTO test1 (a) VALUES (%s)" % row['x']) + plpy.commit() +$$; + +SELECT * FROM test1; + + +-- rollback inside cursor loop +TRUNCATE test1; + +DO LANGUAGE plpythonu $$ +for row in plpy.cursor("SELECT * FROM test2 ORDER BY x"): + plpy.execute("INSERT INTO test1 (a) VALUES (%s)" % row['x']) + plpy.rollback() +$$; + +SELECT * FROM test1; + + +DROP TABLE test1; +DROP TABLE test2; diff --git a/src/pl/tcl/Makefile b/src/pl/tcl/Makefile index 6a92a9b6aa..ef61ee596e 100644 --- a/src/pl/tcl/Makefile +++ b/src/pl/tcl/Makefile @@ -28,7 +28,7 @@ DATA = pltcl.control pltcl--1.0.sql pltcl--unpackaged--1.0.sql \ pltclu.control pltclu--1.0.sql pltclu--unpackaged--1.0.sql REGRESS_OPTS = --dbname=$(PL_TESTDB) --load-extension=pltcl -REGRESS = pltcl_setup pltcl_queries pltcl_call pltcl_start_proc pltcl_subxact pltcl_unicode +REGRESS = pltcl_setup pltcl_queries pltcl_call pltcl_start_proc pltcl_subxact pltcl_unicode pltcl_transaction # Tcl on win32 ships with import libraries only for Microsoft Visual C++, # which are not compatible with mingw gcc. 
Therefore we need to build a diff --git a/src/pl/tcl/expected/pltcl_transaction.out b/src/pl/tcl/expected/pltcl_transaction.out new file mode 100644 index 0000000000..007204b99a --- /dev/null +++ b/src/pl/tcl/expected/pltcl_transaction.out @@ -0,0 +1,100 @@ +-- suppress CONTEXT so that function OIDs aren't in output +\set VERBOSITY terse +CREATE TABLE test1 (a int, b text); +CREATE PROCEDURE transaction_test1() +LANGUAGE pltcl +AS $$ +for {set i 0} {$i < 10} {incr i} { + spi_exec "INSERT INTO test1 (a) VALUES ($i)" + if {$i % 2 == 0} { + commit + } else { + rollback + } +} +$$; +CALL transaction_test1(); +SELECT * FROM test1; + a | b +---+--- + 0 | + 2 | + 4 | + 6 | + 8 | +(5 rows) + +TRUNCATE test1; +-- not allowed in a function +CREATE FUNCTION transaction_test2() RETURNS int +LANGUAGE pltcl +AS $$ +for {set i 0} {$i < 10} {incr i} { + spi_exec "INSERT INTO test1 (a) VALUES ($i)" + if {$i % 2 == 0} { + commit + } else { + rollback + } +} +return 1 +$$; +SELECT transaction_test2(); +ERROR: invalid transaction termination +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- also not allowed if procedure is called from a function +CREATE FUNCTION transaction_test3() RETURNS int +LANGUAGE pltcl +AS $$ +spi_exec "CALL transaction_test1()" +return 1 +$$; +SELECT transaction_test3(); +ERROR: invalid transaction termination +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- commit inside cursor loop +CREATE TABLE test2 (x int); +INSERT INTO test2 VALUES (0), (1), (2), (3), (4); +TRUNCATE test1; +CREATE PROCEDURE transaction_test4a() +LANGUAGE pltcl +AS $$ +spi_exec -array row "SELECT * FROM test2 ORDER BY x" { + spi_exec "INSERT INTO test1 (a) VALUES ($row(x))" + commit +} +$$; +CALL transaction_test4a(); +ERROR: cannot commit while a subtransaction is active +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +-- rollback inside cursor loop +TRUNCATE test1; +CREATE PROCEDURE transaction_test4b() +LANGUAGE pltcl +AS $$ +spi_exec -array row "SELECT * FROM test2 ORDER BY x" { + spi_exec "INSERT INTO test1 (a) VALUES ($row(x))" + rollback +} +$$; +CALL transaction_test4b(); +ERROR: cannot roll back while a subtransaction is active +SELECT * FROM test1; + a | b +---+--- +(0 rows) + +DROP TABLE test1; +DROP TABLE test2; diff --git a/src/pl/tcl/pltcl.c b/src/pl/tcl/pltcl.c index 8f5847c4ff..5df4dfdf55 100644 --- a/src/pl/tcl/pltcl.c +++ b/src/pl/tcl/pltcl.c @@ -312,6 +312,10 @@ static int pltcl_SPI_lastoid(ClientData cdata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[]); static int pltcl_subtransaction(ClientData cdata, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[]); +static int pltcl_commit(ClientData cdata, Tcl_Interp *interp, + int objc, Tcl_Obj *const objv[]); +static int pltcl_rollback(ClientData cdata, Tcl_Interp *interp, + int objc, Tcl_Obj *const objv[]); static void pltcl_subtrans_begin(MemoryContext oldcontext, ResourceOwner oldowner); @@ -524,6 +528,10 @@ pltcl_init_interp(pltcl_interp_desc *interp_desc, Oid prolang, bool pltrusted) pltcl_SPI_lastoid, NULL, NULL); Tcl_CreateObjCommand(interp, "subtransaction", pltcl_subtransaction, NULL, NULL); + Tcl_CreateObjCommand(interp, "commit", + pltcl_commit, NULL, NULL); + Tcl_CreateObjCommand(interp, "rollback", + pltcl_rollback, NULL, NULL); /************************************************************ * Call the appropriate start_proc, if there is one. 
@@ -797,6 +805,7 @@ static Datum pltcl_func_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state, bool pltrusted) { + bool nonatomic; pltcl_proc_desc *prodesc; Tcl_Interp *volatile interp; Tcl_Obj *tcl_cmd; @@ -804,8 +813,12 @@ pltcl_func_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state, int tcl_rc; Datum retval; + nonatomic = fcinfo->context && + IsA(fcinfo->context, CallContext) && + !castNode(CallContext, fcinfo->context)->atomic; + /* Connect to SPI manager */ - if (SPI_connect() != SPI_OK_CONNECT) + if (SPI_connect_ext(nonatomic ? SPI_OPT_NONATOMIC : 0) != SPI_OK_CONNECT) elog(ERROR, "could not connect to SPI manager"); /* Find or compile the function */ @@ -2936,6 +2949,86 @@ pltcl_subtransaction(ClientData cdata, Tcl_Interp *interp, } +/********************************************************************** + * pltcl_commit() + * + * Commit the transaction and start a new one. + **********************************************************************/ +static int +pltcl_commit(ClientData cdata, Tcl_Interp *interp, + int objc, Tcl_Obj *const objv[]) +{ + MemoryContext oldcontext = CurrentMemoryContext; + + PG_TRY(); + { + SPI_commit(); + SPI_start_transaction(); + } + PG_CATCH(); + { + ErrorData *edata; + + /* Save error info */ + MemoryContextSwitchTo(oldcontext); + edata = CopyErrorData(); + FlushErrorState(); + + /* Pass the error data to Tcl */ + pltcl_construct_errorCode(interp, edata); + UTF_BEGIN; + Tcl_SetObjResult(interp, Tcl_NewStringObj(UTF_E2U(edata->message), -1)); + UTF_END; + FreeErrorData(edata); + + return TCL_ERROR; + } + PG_END_TRY(); + + return TCL_OK; +} + + +/********************************************************************** + * pltcl_rollback() + * + * Abort the transaction and start a new one. + **********************************************************************/ +static int +pltcl_rollback(ClientData cdata, Tcl_Interp *interp, + int objc, Tcl_Obj *const objv[]) +{ + MemoryContext oldcontext = CurrentMemoryContext; + + PG_TRY(); + { + SPI_rollback(); + SPI_start_transaction(); + } + PG_CATCH(); + { + ErrorData *edata; + + /* Save error info */ + MemoryContextSwitchTo(oldcontext); + edata = CopyErrorData(); + FlushErrorState(); + + /* Pass the error data to Tcl */ + pltcl_construct_errorCode(interp, edata); + UTF_BEGIN; + Tcl_SetObjResult(interp, Tcl_NewStringObj(UTF_E2U(edata->message), -1)); + UTF_END; + FreeErrorData(edata); + + return TCL_ERROR; + } + PG_END_TRY(); + + return TCL_OK; +} + + /********************************************************************** * pltcl_set_tuple_values() - Set variables for all attributes * of a given tuple diff --git a/src/pl/tcl/sql/pltcl_transaction.sql b/src/pl/tcl/sql/pltcl_transaction.sql new file mode 100644 index 0000000000..c752faf665 --- /dev/null +++ b/src/pl/tcl/sql/pltcl_transaction.sql @@ -0,0 +1,98 @@ +-- suppress CONTEXT so that function OIDs aren't in output +\set VERBOSITY terse + +CREATE TABLE test1 (a int, b text); + + +CREATE PROCEDURE transaction_test1() +LANGUAGE pltcl +AS $$ +for {set i 0} {$i < 10} {incr i} { + spi_exec "INSERT INTO test1 (a) VALUES ($i)" + if {$i % 2 == 0} { + commit + } else { + rollback + } +} +$$; + +CALL transaction_test1(); + +SELECT * FROM test1; + + +TRUNCATE test1; + +-- not allowed in a function +CREATE FUNCTION transaction_test2() RETURNS int +LANGUAGE pltcl +AS $$ +for {set i 0} {$i < 10} {incr i} { + spi_exec "INSERT INTO test1 (a) VALUES ($i)" + if {$i % 2 == 0} { + commit + } else { + rollback + } +} +return 1 +$$; + +SELECT transaction_test2(); + +SELECT 
* FROM test1; + + +-- also not allowed if procedure is called from a function +CREATE FUNCTION transaction_test3() RETURNS int +LANGUAGE pltcl +AS $$ +spi_exec "CALL transaction_test1()" +return 1 +$$; + +SELECT transaction_test3(); + +SELECT * FROM test1; + + +-- commit inside cursor loop +CREATE TABLE test2 (x int); +INSERT INTO test2 VALUES (0), (1), (2), (3), (4); + +TRUNCATE test1; + +CREATE PROCEDURE transaction_test4a() +LANGUAGE pltcl +AS $$ +spi_exec -array row "SELECT * FROM test2 ORDER BY x" { + spi_exec "INSERT INTO test1 (a) VALUES ($row(x))" + commit +} +$$; + +CALL transaction_test4a(); + +SELECT * FROM test1; + + +-- rollback inside cursor loop +TRUNCATE test1; + +CREATE PROCEDURE transaction_test4b() +LANGUAGE pltcl +AS $$ +spi_exec -array row "SELECT * FROM test2 ORDER BY x" { + spi_exec "INSERT INTO test1 (a) VALUES ($row(x))" + rollback +} +$$; + +CALL transaction_test4b(); + +SELECT * FROM test1; + + +DROP TABLE test1; +DROP TABLE test2; From 2b792ab094415f351abd5854de5cefb023931a85 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 22 Jan 2018 12:06:18 -0500 Subject: [PATCH 0868/1087] Make pg_dump's ACL, sec label, and comment entries reliably identifiable. _tocEntryRequired() expects that it can identify ACL, SECURITY LABEL, and COMMENT TOC entries that are for large objects by seeing whether the tag for them starts with "LARGE OBJECT ". While that works fine for actual large objects, which are indeed tagged that way, it's subject to false positives unless every such entry's tag starts with an appropriate type ID. And in fact it does not work for ACLs, because up to now we customarily tagged those entries with just the bare name of the object. This means that an ACL for an object named "LARGE OBJECT something" would be misclassified as data not schema, with undesirable results in a schema-only or data-only dump --- although pg_upgrade seems unaffected, due to the special case for binary-upgrade mode further down in _tocEntryRequired(). We can fix this by changing all the dumpACL calls to use the label strings already in use for comments and security labels, which do follow the convention of starting with an object type indicator. Well, mostly they follow it. dumpDatabase() got it wrong, using just the bare database name for those purposes, so that a database named "LARGE OBJECT something" would similarly be subject to having its comment or security label dropped or included when not wanted. Bring that into line too. (Note that up to now, database ACLs have not been processed by pg_dump, so that this issue doesn't affect them.) _tocEntryRequired() itself is not free of fault: it was overly liberal about matching object tags to "LARGE OBJECT " in binary-upgrade mode. This looks like it is probably harmless because there would be no data component to strip anyway in that mode, but at best it's trouble waiting to happen, so tighten that up too. The possible misclassification of SECURITY LABEL entries for databases is in principle a security problem, but the opportunities for actual exploits seem too narrow to be interesting. The other cases seem like just bugs, since an object owner can change its ACL or comment for himself, he needn't try to trick someone else into doing it by choosing a strange name. This has been broken since per-large-object TOC entries were introduced in 9.0, so back-patch to all supported branches. 
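To make the failure mode concrete, here is a hypothetical illustration
(this object name does not appear in the regression tests). Before this
fix, the ACL entry for an ordinary table named this way carried only the
bare object name as its tag, so _tocEntryRequired() mistook it for
large-object data:

    CREATE TABLE "LARGE OBJECT something" (id int);
    GRANT SELECT ON "LARGE OBJECT something" TO PUBLIC;

    -- tag of the resulting ACL TOC entry:
    --   before: LARGE OBJECT something          (misread as blob data)
    --   after:  TABLE "LARGE OBJECT something"  (correctly schema)

With the tag now always led by an object-type keyword, only genuine
large-object entries can match the "LARGE OBJECT " prefix test.
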
Discussion: https://postgr.es/m/21714.1516553459@sss.pgh.pa.us --- src/bin/pg_dump/pg_backup_archiver.c | 12 ++++- src/bin/pg_dump/pg_dump.c | 78 ++++++++++++++++------------ 2 files changed, 55 insertions(+), 35 deletions(-) diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index 41741aefbc..acef20fdf7 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -2944,14 +2944,22 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) if (ropt->schemaOnly) { /* + * The sequence_data option overrides schema-only for SEQUENCE SET. + * * In binary-upgrade mode, even with schema-only set, we do not mask * out large objects. Only large object definitions, comments and * other information should be generated in binary-upgrade mode (not * the actual data). */ if (!(ropt->sequence_data && strcmp(te->desc, "SEQUENCE SET") == 0) && - !(ropt->binary_upgrade && strcmp(te->desc, "BLOB") == 0) && - !(ropt->binary_upgrade && strncmp(te->tag, "LARGE OBJECT ", 13) == 0)) + !(ropt->binary_upgrade && + (strcmp(te->desc, "BLOB") == 0 || + (strcmp(te->desc, "ACL") == 0 && + strncmp(te->tag, "LARGE OBJECT ", 13) == 0) || + (strcmp(te->desc, "COMMENT") == 0 && + strncmp(te->tag, "LARGE OBJECT ", 13) == 0) || + (strcmp(te->desc, "SECURITY LABEL") == 0 && + strncmp(te->tag, "LARGE OBJECT ", 13) == 0)))) res = res & REQ_SCHEMA; } diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 0bdd3982fe..0f70026492 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -2529,6 +2529,7 @@ dumpDatabase(Archive *fout) PQExpBuffer dbQry = createPQExpBuffer(); PQExpBuffer delQry = createPQExpBuffer(); PQExpBuffer creaQry = createPQExpBuffer(); + PQExpBuffer labelq = createPQExpBuffer(); PGconn *conn = GetConnection(fout); PGresult *res; int i_tableoid, @@ -2787,16 +2788,20 @@ dumpDatabase(Archive *fout) destroyPQExpBuffer(loOutQry); } + /* Compute correct tag for comments etc */ + appendPQExpBuffer(labelq, "DATABASE %s", fmtId(datname)); + /* Dump DB comment if any */ if (fout->remoteVersion >= 80200) { /* - * 8.2 keeps comments on shared objects in a shared table, so we - * cannot use the dumpComment used for other database objects. + * 8.2 and up keep comments on shared objects in a shared table, so we + * cannot use the dumpComment() code used for other database objects. + * Be careful that the ArchiveEntry parameters match that function. 
*/ char *comment = PQgetvalue(res, 0, PQfnumber(res, "description")); - if (comment && strlen(comment)) + if (comment && *comment) { resetPQExpBuffer(dbQry); @@ -2808,17 +2813,17 @@ dumpDatabase(Archive *fout) appendStringLiteralAH(dbQry, comment, fout); appendPQExpBufferStr(dbQry, ";\n"); - ArchiveEntry(fout, dbCatId, createDumpId(), datname, NULL, NULL, - dba, false, "COMMENT", SECTION_NONE, + ArchiveEntry(fout, nilCatalogId, createDumpId(), + labelq->data, NULL, NULL, dba, + false, "COMMENT", SECTION_NONE, dbQry->data, "", NULL, - &dbDumpId, 1, NULL, NULL); + &(dbDumpId), 1, + NULL, NULL); } } else { - resetPQExpBuffer(dbQry); - appendPQExpBuffer(dbQry, "DATABASE %s", fmtId(datname)); - dumpComment(fout, dbQry->data, NULL, "", + dumpComment(fout, labelq->data, NULL, dba, dbCatId, 0, dbDumpId); } @@ -2834,11 +2839,13 @@ dumpDatabase(Archive *fout) shres = ExecuteSqlQuery(fout, seclabelQry->data, PGRES_TUPLES_OK); resetPQExpBuffer(seclabelQry); emitShSecLabels(conn, shres, seclabelQry, "DATABASE", datname); - if (strlen(seclabelQry->data)) - ArchiveEntry(fout, dbCatId, createDumpId(), datname, NULL, NULL, - dba, false, "SECURITY LABEL", SECTION_NONE, + if (seclabelQry->len > 0) + ArchiveEntry(fout, nilCatalogId, createDumpId(), + labelq->data, NULL, NULL, dba, + false, "SECURITY LABEL", SECTION_NONE, seclabelQry->data, "", NULL, - &dbDumpId, 1, NULL, NULL); + &(dbDumpId), 1, + NULL, NULL); destroyPQExpBuffer(seclabelQry); PQclear(shres); } @@ -2848,6 +2855,7 @@ dumpDatabase(Archive *fout) destroyPQExpBuffer(dbQry); destroyPQExpBuffer(delQry); destroyPQExpBuffer(creaQry); + destroyPQExpBuffer(labelq); } /* @@ -9707,7 +9715,7 @@ dumpNamespace(Archive *fout, NamespaceInfo *nspinfo) if (nspinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, nspinfo->dobj.catId, nspinfo->dobj.dumpId, "SCHEMA", - qnspname, NULL, nspinfo->dobj.name, NULL, + qnspname, NULL, labelq->data, NULL, nspinfo->rolname, nspinfo->nspacl, nspinfo->rnspacl, nspinfo->initnspacl, nspinfo->initrnspacl); @@ -10003,7 +10011,7 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo) if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, tyinfo->dobj.name, + qtypname, NULL, labelq->data, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10143,7 +10151,7 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, tyinfo->dobj.name, + qtypname, NULL, labelq->data, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10220,7 +10228,7 @@ dumpUndefinedType(Archive *fout, TypeInfo *tyinfo) if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, tyinfo->dobj.name, + qtypname, NULL, labelq->data, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10509,7 +10517,7 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, tyinfo->dobj.name, + qtypname, NULL, labelq->data, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10678,7 +10686,7 @@ dumpDomain(Archive *fout, 
TypeInfo *tyinfo) if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, tyinfo->dobj.name, + qtypname, NULL, labelq->data, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10914,7 +10922,7 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, tyinfo->dobj.name, + qtypname, NULL, labelq->data, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -11233,7 +11241,7 @@ dumpProcLang(Archive *fout, ProcLangInfo *plang) if (plang->lanpltrusted && plang->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, plang->dobj.catId, plang->dobj.dumpId, "LANGUAGE", - qlanname, NULL, plang->dobj.name, + qlanname, NULL, labelq->data, lanschema, plang->lanowner, plang->lanacl, plang->rlanacl, plang->initlanacl, plang->initrlanacl); @@ -11867,7 +11875,7 @@ dumpFunc(Archive *fout, FuncInfo *finfo) if (finfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, finfo->dobj.catId, finfo->dobj.dumpId, keyword, - funcsig, NULL, funcsig_tag, + funcsig, NULL, labelq->data, finfo->dobj.namespace->dobj.name, finfo->rolname, finfo->proacl, finfo->rproacl, finfo->initproacl, finfo->initrproacl); @@ -13939,15 +13947,13 @@ dumpAgg(Archive *fout, AggInfo *agginfo) * syntax for zero-argument aggregates and ordered-set aggregates. */ free(aggsig); - free(aggsig_tag); aggsig = format_function_signature(fout, &agginfo->aggfn, true); - aggsig_tag = format_function_signature(fout, &agginfo->aggfn, false); if (agginfo->aggfn.dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, agginfo->aggfn.dobj.catId, agginfo->aggfn.dobj.dumpId, "FUNCTION", - aggsig, NULL, aggsig_tag, + aggsig, NULL, labelq->data, agginfo->aggfn.dobj.namespace->dobj.name, agginfo->aggfn.rolname, agginfo->aggfn.proacl, agginfo->aggfn.rproacl, @@ -14393,7 +14399,7 @@ dumpForeignDataWrapper(Archive *fout, FdwInfo *fdwinfo) if (fdwinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, fdwinfo->dobj.catId, fdwinfo->dobj.dumpId, "FOREIGN DATA WRAPPER", - qfdwname, NULL, fdwinfo->dobj.name, + qfdwname, NULL, labelq->data, NULL, fdwinfo->rolname, fdwinfo->fdwacl, fdwinfo->rfdwacl, fdwinfo->initfdwacl, fdwinfo->initrfdwacl); @@ -14490,7 +14496,7 @@ dumpForeignServer(Archive *fout, ForeignServerInfo *srvinfo) if (srvinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, srvinfo->dobj.catId, srvinfo->dobj.dumpId, "FOREIGN SERVER", - qsrvname, NULL, srvinfo->dobj.name, + qsrvname, NULL, labelq->data, NULL, srvinfo->rolname, srvinfo->srvacl, srvinfo->rsrvacl, srvinfo->initsrvacl, srvinfo->initrsrvacl); @@ -14701,7 +14707,8 @@ dumpDefaultACL(Archive *fout, DefaultACLInfo *daclinfo) * FOREIGN DATA WRAPPER, SERVER, or LARGE OBJECT. * 'name' is the formatted name of the object. Must be quoted etc. already. * 'subname' is the formatted name of the sub-object, if any. Must be quoted. - * 'tag' is the tag for the archive entry (typ. unquoted name of object). + * 'tag' is the tag for the archive entry (should be the same tag as would be + * used for comments etc; for example "TABLE foo"). * 'nspname' is the namespace the object is in (NULL if none). * 'owner' is the owner, NULL if there is no owner (for languages). 
* 'acls' contains the ACL string of the object from the appropriate system @@ -14811,7 +14818,7 @@ dumpSecLabel(Archive *fout, const char *target, if (dopt->no_security_labels) return; - /* Comments are schema not data ... except blob comments are data */ + /* Security labels are schema not data ... except blob labels are data */ if (strncmp(target, "LARGE OBJECT ", 13) != 0) { if (dopt->dataOnly) @@ -15105,13 +15112,18 @@ dumpTable(Archive *fout, TableInfo *tbinfo) /* Handle the ACL here */ namecopy = pg_strdup(fmtId(tbinfo->dobj.name)); if (tbinfo->dobj.dump & DUMP_COMPONENT_ACL) + { + const char *objtype = + (tbinfo->relkind == RELKIND_SEQUENCE) ? "SEQUENCE" : "TABLE"; + char *acltag = psprintf("%s %s", objtype, namecopy); + dumpACL(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId, - (tbinfo->relkind == RELKIND_SEQUENCE) ? "SEQUENCE" : - "TABLE", - namecopy, NULL, tbinfo->dobj.name, + objtype, namecopy, NULL, acltag, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, tbinfo->relacl, tbinfo->rrelacl, tbinfo->initrelacl, tbinfo->initrrelacl); + free(acltag); + } /* * Handle column ACLs, if any. Note: we pull these with a separate query @@ -15195,7 +15207,7 @@ dumpTable(Archive *fout, TableInfo *tbinfo) char *acltag; attnamecopy = pg_strdup(fmtId(attname)); - acltag = psprintf("%s.%s", tbinfo->dobj.name, attname); + acltag = psprintf("COLUMN %s.%s", namecopy, attnamecopy); /* Column's GRANT type is always TABLE */ dumpACL(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId, "TABLE", namecopy, attnamecopy, acltag, From f498704346a4ce4953fc5f837cacb545b3166ee1 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 22 Jan 2018 12:09:52 -0500 Subject: [PATCH 0869/1087] PL/Python: Fix tests for older Python versions Commit 8561e4840c81f7e345be2df170839846814fa004 neglected to handle older Python versions that don't support the "with" statement. So write the tests in a way that older versions can handle as well. --- src/pl/plpython/expected/plpython_transaction.out | 5 +++-- src/pl/plpython/sql/plpython_transaction.sql | 5 +++-- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/src/pl/plpython/expected/plpython_transaction.out b/src/pl/plpython/expected/plpython_transaction.out index 1fadc69b63..6f6dfadf9c 100644 --- a/src/pl/plpython/expected/plpython_transaction.out +++ b/src/pl/plpython/expected/plpython_transaction.out @@ -95,8 +95,9 @@ CONTEXT: Traceback (most recent call last): PL/Python function "transaction_test4" -- commit inside subtransaction (prohibited) DO LANGUAGE plpythonu $$ -with plpy.subtransaction(): - plpy.commit() +s = plpy.subtransaction() +s.enter() +plpy.commit() $$; WARNING: forcibly aborting a subtransaction that has not been exited ERROR: cannot commit while a subtransaction is active diff --git a/src/pl/plpython/sql/plpython_transaction.sql b/src/pl/plpython/sql/plpython_transaction.sql index 36c7b2ef38..b337d4e300 100644 --- a/src/pl/plpython/sql/plpython_transaction.sql +++ b/src/pl/plpython/sql/plpython_transaction.sql @@ -79,8 +79,9 @@ SELECT transaction_test4(); -- commit inside subtransaction (prohibited) DO LANGUAGE plpythonu $$ -with plpy.subtransaction(): - plpy.commit() +s = plpy.subtransaction() +s.enter() +plpy.commit() $$; From d6c84667d130f19efdf0f04f7d52a6b37df0f21b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 22 Jan 2018 12:37:11 -0500 Subject: [PATCH 0870/1087] Reorder code in pg_dump to dump comments etc in a uniform order. 
Most of the code in pg_dump dumps an object's comment, security label, and ACL auxiliary TOC entries, in that order, immediately after the object's main TOC entry, and at least dumpComment's API spec says this isn't optional. dumpDatabase was significantly violating that when in binary-upgrade mode, by inserting totally unrelated stuff between. Also, dumpForeignDataWrapper and dumpForeignServer were being randomly inconsistent. Reorder code so everybody does it the same. This may be future-proofing us against some code growing a requirement for such auxiliary entries to be adjacent to their main entry. But for now it's just neatnik-ism, so I see no need for back-patch. Discussion: https://postgr.es/m/21714.1516553459@sss.pgh.pa.us --- src/bin/pg_dump/pg_dump.c | 150 +++++++++++++++++++------------------- 1 file changed, 75 insertions(+), 75 deletions(-) diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 0f70026492..1dc1d80ab1 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -2696,6 +2696,68 @@ dumpDatabase(Archive *fout) NULL, /* Dumper */ NULL); /* Dumper Arg */ + /* Compute correct tag for comments etc */ + appendPQExpBuffer(labelq, "DATABASE %s", fmtId(datname)); + + /* Dump DB comment if any */ + if (fout->remoteVersion >= 80200) + { + /* + * 8.2 and up keep comments on shared objects in a shared table, so we + * cannot use the dumpComment() code used for other database objects. + * Be careful that the ArchiveEntry parameters match that function. + */ + char *comment = PQgetvalue(res, 0, PQfnumber(res, "description")); + + if (comment && *comment) + { + resetPQExpBuffer(dbQry); + + /* + * Generates warning when loaded into a differently-named + * database. + */ + appendPQExpBuffer(dbQry, "COMMENT ON DATABASE %s IS ", fmtId(datname)); + appendStringLiteralAH(dbQry, comment, fout); + appendPQExpBufferStr(dbQry, ";\n"); + + ArchiveEntry(fout, nilCatalogId, createDumpId(), + labelq->data, NULL, NULL, dba, + false, "COMMENT", SECTION_NONE, + dbQry->data, "", NULL, + &(dbDumpId), 1, + NULL, NULL); + } + } + else + { + dumpComment(fout, labelq->data, NULL, dba, + dbCatId, 0, dbDumpId); + } + + /* Dump shared security label. */ + if (!dopt->no_security_labels && fout->remoteVersion >= 90200) + { + PGresult *shres; + PQExpBuffer seclabelQry; + + seclabelQry = createPQExpBuffer(); + + buildShSecLabelQuery(conn, "pg_database", dbCatId.oid, seclabelQry); + shres = ExecuteSqlQuery(fout, seclabelQry->data, PGRES_TUPLES_OK); + resetPQExpBuffer(seclabelQry); + emitShSecLabels(conn, shres, seclabelQry, "DATABASE", datname); + if (seclabelQry->len > 0) + ArchiveEntry(fout, nilCatalogId, createDumpId(), + labelq->data, NULL, NULL, dba, + false, "SECURITY LABEL", SECTION_NONE, + seclabelQry->data, "", NULL, + &(dbDumpId), 1, + NULL, NULL); + destroyPQExpBuffer(seclabelQry); + PQclear(shres); + } + /* * pg_largeobject and pg_largeobject_metadata come from the old system * intact, so set their relfrozenxids and relminmxids. @@ -2788,68 +2850,6 @@ dumpDatabase(Archive *fout) destroyPQExpBuffer(loOutQry); } - /* Compute correct tag for comments etc */ - appendPQExpBuffer(labelq, "DATABASE %s", fmtId(datname)); - - /* Dump DB comment if any */ - if (fout->remoteVersion >= 80200) - { - /* - * 8.2 and up keep comments on shared objects in a shared table, so we - * cannot use the dumpComment() code used for other database objects. - * Be careful that the ArchiveEntry parameters match that function. 
- */ - char *comment = PQgetvalue(res, 0, PQfnumber(res, "description")); - - if (comment && *comment) - { - resetPQExpBuffer(dbQry); - - /* - * Generates warning when loaded into a differently-named - * database. - */ - appendPQExpBuffer(dbQry, "COMMENT ON DATABASE %s IS ", fmtId(datname)); - appendStringLiteralAH(dbQry, comment, fout); - appendPQExpBufferStr(dbQry, ";\n"); - - ArchiveEntry(fout, nilCatalogId, createDumpId(), - labelq->data, NULL, NULL, dba, - false, "COMMENT", SECTION_NONE, - dbQry->data, "", NULL, - &(dbDumpId), 1, - NULL, NULL); - } - } - else - { - dumpComment(fout, labelq->data, NULL, dba, - dbCatId, 0, dbDumpId); - } - - /* Dump shared security label. */ - if (!dopt->no_security_labels && fout->remoteVersion >= 90200) - { - PGresult *shres; - PQExpBuffer seclabelQry; - - seclabelQry = createPQExpBuffer(); - - buildShSecLabelQuery(conn, "pg_database", dbCatId.oid, seclabelQry); - shres = ExecuteSqlQuery(fout, seclabelQry->data, PGRES_TUPLES_OK); - resetPQExpBuffer(seclabelQry); - emitShSecLabels(conn, shres, seclabelQry, "DATABASE", datname); - if (seclabelQry->len > 0) - ArchiveEntry(fout, nilCatalogId, createDumpId(), - labelq->data, NULL, NULL, dba, - false, "SECURITY LABEL", SECTION_NONE, - seclabelQry->data, "", NULL, - &(dbDumpId), 1, - NULL, NULL); - destroyPQExpBuffer(seclabelQry); - PQclear(shres); - } - PQclear(res); destroyPQExpBuffer(dbQry); @@ -14395,6 +14395,12 @@ dumpForeignDataWrapper(Archive *fout, FdwInfo *fdwinfo) NULL, 0, NULL, NULL); + /* Dump Foreign Data Wrapper Comments */ + if (fdwinfo->dobj.dump & DUMP_COMPONENT_COMMENT) + dumpComment(fout, labelq->data, + NULL, fdwinfo->rolname, + fdwinfo->dobj.catId, 0, fdwinfo->dobj.dumpId); + /* Handle the ACL */ if (fdwinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, fdwinfo->dobj.catId, fdwinfo->dobj.dumpId, @@ -14404,12 +14410,6 @@ dumpForeignDataWrapper(Archive *fout, FdwInfo *fdwinfo) fdwinfo->fdwacl, fdwinfo->rfdwacl, fdwinfo->initfdwacl, fdwinfo->initrfdwacl); - /* Dump Foreign Data Wrapper Comments */ - if (fdwinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, - NULL, fdwinfo->rolname, - fdwinfo->dobj.catId, 0, fdwinfo->dobj.dumpId); - free(qfdwname); destroyPQExpBuffer(q); @@ -14492,6 +14492,12 @@ dumpForeignServer(Archive *fout, ForeignServerInfo *srvinfo) NULL, 0, NULL, NULL); + /* Dump Foreign Server Comments */ + if (srvinfo->dobj.dump & DUMP_COMPONENT_COMMENT) + dumpComment(fout, labelq->data, + NULL, srvinfo->rolname, + srvinfo->dobj.catId, 0, srvinfo->dobj.dumpId); + /* Handle the ACL */ if (srvinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, srvinfo->dobj.catId, srvinfo->dobj.dumpId, @@ -14508,12 +14514,6 @@ dumpForeignServer(Archive *fout, ForeignServerInfo *srvinfo) srvinfo->rolname, srvinfo->dobj.catId, srvinfo->dobj.dumpId); - /* Dump Foreign Server Comments */ - if (srvinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, - NULL, srvinfo->rolname, - srvinfo->dobj.catId, 0, srvinfo->dobj.dumpId); - free(qsrvname); destroyPQExpBuffer(q); @@ -16245,7 +16245,7 @@ dumpIndexAttach(Archive *fout, IndexAttachInfo *attachinfo) if (attachinfo->partitionIdx->dobj.dump & DUMP_COMPONENT_DEFINITION) { - PQExpBuffer q = createPQExpBuffer(); + PQExpBuffer q = createPQExpBuffer(); appendPQExpBuffer(q, "\nALTER INDEX %s ", fmtQualifiedId(fout->remoteVersion, From b3f8401205afdaf63cb20dc316d44644c933d5a1 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 22 Jan 2018 14:09:09 -0500 Subject: [PATCH 0871/1087] Move handling of database properties from 
pg_dumpall into pg_dump. This patch rearranges the division of labor between pg_dump and pg_dumpall so that pg_dump itself handles all properties attached to a single database. Notably, a database's ACL (GRANT/REVOKE status) and local GUC settings established by ALTER DATABASE SET and ALTER ROLE IN DATABASE SET can be dumped and restored by pg_dump. This is a long-requested improvement. "pg_dumpall -g" will now produce only role- and tablespace-related output, nothing about individual databases. The total output of a regular pg_dumpall run remains the same. pg_dump (or pg_restore) will restore database-level properties only when creating the target database with --create. This applies not only to ACLs and GUCs but to the other database properties it already handled, that is database comments and security labels. This is more consistent and useful, but does represent an incompatibility in the behavior seen without --create. (This change makes the proposed patch to have pg_dump use "COMMENT ON DATABASE CURRENT_DATABASE" unnecessary, since there is no case where the command is issued that we won't know the true name of the database. We might still want that patch as a feature in its own right, but pg_dump no longer needs it.) pg_dumpall with --clean will now drop and recreate the "postgres" and "template1" databases in the target cluster, allowing their locale and encoding settings to be changed if necessary, and providing a cleaner way to set nondefault tablespaces for them than we had before. This means that such a script must now always be started in the "postgres" database; the order of drops and reconnects will not work otherwise. Without --clean, the script will not adjust any database-level properties of those two databases (including their comments, ACLs, and security labels, which it formerly would try to set). Another minor incompatibility is that the CREATE DATABASE commands in a pg_dumpall script will now always specify locale and encoding settings. Formerly those would be omitted if they matched the cluster's default. While that behavior had some usefulness in some migration scenarios, it also posed a significant hazard of unwanted locale/encoding changes. To migrate to another locale/encoding, it's now necessary to use pg_dump without --create to restore into a database with the desired settings. Commit 4bd371f6f's hack to emit "SET default_transaction_read_only = off" is gone: we now dodge that problem by the expedient of not issuing ALTER DATABASE SET commands until after reconnecting to the target database. Therefore, such settings won't apply during the restore session. In passing, improve some shaky grammar in the docs, and add a note pointing out that pg_dumpall's output can't be expected to load without any errors. (Someday we might want to fix that, but this is not that patch.) Haribabu Kommi, reviewed at various times by Andreas Karlsson, Vaishnavi Prabakaran, and Robert Haas; further hacking by me. 
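As a rough sketch of what moves between the two programs, consider a
database with these properties (all names here are illustrative, not
taken from the patch or its tests):

    ALTER DATABASE appdb SET work_mem = '64MB';
    ALTER ROLE alice IN DATABASE appdb SET search_path = app, public;
    GRANT CONNECT ON DATABASE appdb TO alice;
    COMMENT ON DATABASE appdb IS 'application database';

Previously, only pg_dumpall would reproduce the two ALTER ... SET
commands and the GRANT. With this patch, a dump made with
"pg_dump --create appdb" (or restored with "pg_restore --create") emits
all four alongside the CREATE DATABASE command, while "pg_dumpall -g"
no longer says anything about appdb at all.
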
Discussion: https://postgr.es/m/CAJrrPGcUurV0eWTeXODwsOYFN=Ekq36t1s0YnFYUNzsmRfdAyA@mail.gmail.com --- doc/src/sgml/ref/pg_dump.sgml | 34 +- doc/src/sgml/ref/pg_dumpall.sgml | 50 ++- doc/src/sgml/ref/pg_restore.sgml | 11 + src/bin/pg_dump/dumputils.c | 51 +++ src/bin/pg_dump/dumputils.h | 5 + src/bin/pg_dump/pg_backup_archiver.c | 33 +- src/bin/pg_dump/pg_dump.c | 237 +++++++++++-- src/bin/pg_dump/pg_dumpall.c | 507 ++++----------------------- src/bin/pg_dump/t/002_pg_dump.pl | 25 +- src/bin/pg_upgrade/pg_upgrade.c | 54 ++- 10 files changed, 470 insertions(+), 537 deletions(-) diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml index 08cad68199..11582dd1c8 100644 --- a/doc/src/sgml/ref/pg_dump.sgml +++ b/doc/src/sgml/ref/pg_dump.sgml @@ -46,9 +46,10 @@ PostgreSQL documentation - pg_dump only dumps a single database. To backup - global objects that are common to all databases in a cluster, such as roles - and tablespaces, use . + pg_dump only dumps a single database. + To back up an entire cluster, or to back up global objects that are + common to all databases in a cluster (such as roles and tablespaces), + use . @@ -142,7 +143,8 @@ PostgreSQL documentation switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that blobs are considered data and therefore will be included when - --data-only is used, but not when --schema-only is. + is used, but not + when is. @@ -196,6 +198,17 @@ PostgreSQL documentation recreates the target database before reconnecting to it. + + With , the output also includes the + database's comment if any, and any configuration variable settings + that are specific to this database, that is, + any ALTER DATABASE ... SET ... + and ALTER ROLE ... IN DATABASE ... SET ... + commands that mention this database. + Access privileges for the database itself are also dumped, + unless is specified. + + This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you @@ -1231,10 +1244,6 @@ CREATE DATABASE foo WITH TEMPLATE template0; ANALYZE after restoring from a dump file to ensure optimal performance; see and for more information. - The dump file also does not - contain any ALTER DATABASE ... SET commands; - these settings are dumped by , - along with database users and other installation-wide settings. @@ -1325,6 +1334,15 @@ CREATE DATABASE foo WITH TEMPLATE template0; + + To reload an archive file into the same database it was dumped from, + discarding the current contents of that database: + + +$ pg_restore -d postgres --clean --create db.dump + + + To dump a single table named mytab: diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml index 5196a211b1..4a639f2d41 100644 --- a/doc/src/sgml/ref/pg_dumpall.sgml +++ b/doc/src/sgml/ref/pg_dumpall.sgml @@ -36,13 +36,10 @@ PostgreSQL documentation of a cluster into one script file. The script file contains SQL commands that can be used as input to to restore the databases. It does this by - calling for each database in a cluster. + calling for each database in the cluster. pg_dumpall also dumps global objects - that are common to all databases. + that are common to all databases, that is, database roles and tablespaces. (pg_dump does not save these objects.) - This currently includes information about database users and - groups, tablespaces, and properties such as access permissions - that apply to databases as a whole. 
@@ -50,7 +47,7 @@ PostgreSQL documentation
    databases you will most likely have to connect as a database
    superuser in order to produce a complete dump.  Also you will need
    superuser privileges to execute the saved script in order to be
-   allowed to add roles and create databases... wait, to be
-   allowed to add users and groups, and to create databases.
+   allowed to add roles and create databases.
@@ -308,7 +305,7 @@ PostgreSQL documentation
    Use conditional commands (i.e. add an IF EXISTS
-   clause) to clean databases and other objects.  This option is not valid
+   clause) to drop databases and other objects.  This option is not valid
    unless --clean is also specified.
@@ -500,10 +497,11 @@ PostgreSQL documentation
    The option is called --dbname for consistency with other
    client applications, but because pg_dumpall
-   needs to connect to many databases, database name in the connection
-   string will be ignored. Use -l option to specify
-   the name of the database used to dump global objects and to discover
-   what other databases should be dumped.
+   needs to connect to many databases, the database name in the
+   connection string will be ignored.  Use the -l
+   option to specify the name of the database used for the initial
+   connection, which will dump global objects and discover what other
+   databases should be dumped.
@@ -657,6 +655,17 @@ PostgreSQL documentation
    messages will refer to pg_dump.
+
+   The --clean option can be useful even when your
+   intention is to restore the dump script into a fresh cluster.  Use of
+   --clean authorizes the script to drop and re-create the
+   built-in postgres and template1
+   databases, ensuring that those databases will retain the same properties
+   (for instance, locale and encoding) that they had in the source cluster.
+   Without the option, those databases will retain their existing
+   database-level properties, as well as any pre-existing contents.
+
    Once restored, it is wise to run ANALYZE on each
    database so the optimizer has useful statistics. You
@@ -664,6 +673,18 @@ PostgreSQL documentation
    databases.
+
+   The dump script should not be expected to run completely without errors.
+   In particular, because the script will issue CREATE ROLE
+   for every role existing in the source cluster, it is certain to get a
+   role already exists error for the bootstrap superuser,
+   unless the destination cluster was initialized with a different bootstrap
+   superuser name.  This error is harmless and should be ignored.  Use of
+   the --clean option is likely to produce additional
+   harmless error messages about non-existent objects, although you can
+   minimize those by adding --if-exists.
+
    pg_dumpall requires all needed
    tablespace directories to exist before the restore;  otherwise,
@@ -688,10 +709,13 @@ PostgreSQL documentation
$ psql -f db.out postgres
-   (It is not important to which database you connect here since the
+   It is not important to which database you connect here since the
    script file created by pg_dumpall will
    contain the appropriate commands to create and connect to the saved
-   databases.)
+   databases.  An exception is that if you specified --clean,
+   you must connect to the postgres database initially;
+   the script will attempt to drop other databases immediately, and that
+   will fail for the database you are connected to.
+
+   With --create, pg_restore
+   also restores the database's comment if any, and any configuration
+   variable settings that are specific to this database, that is,
+   any ALTER DATABASE ... SET ...
+   and ALTER ROLE ... IN DATABASE ... SET ...
+   commands that mention this database.
+   Access privileges for the database itself are also restored,
+   unless --no-acl is specified.
+
    When this option is used, the database named with
    -d is used only to issue the initial
    DROP DATABASE and
diff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c
index 32ad600fd0..7afddc3153 100644
--- a/src/bin/pg_dump/dumputils.c
+++ b/src/bin/pg_dump/dumputils.c
@@ -807,3 +807,54 @@ buildACLQueries(PQExpBuffer acl_subquery, PQExpBuffer racl_subquery,
 		printfPQExpBuffer(init_racl_subquery, "NULL");
 	}
 }
+
+/*
+ * Helper function for dumping "ALTER DATABASE/ROLE SET ..." commands.
+ *
+ * Parse the contents of configitem (a "name=value" string), wrap it in
+ * a complete ALTER command, and append it to buf.
+ *
+ * type is DATABASE or ROLE, and name is the name of the database or role.
+ * If we need an "IN" clause, type2 and name2 similarly define what to put
+ * there; otherwise they should be NULL.
+ * conn is used only to determine string-literal quoting conventions.
+ */
+void
+makeAlterConfigCommand(PGconn *conn, const char *configitem,
+					   const char *type, const char *name,
+					   const char *type2, const char *name2,
+					   PQExpBuffer buf)
+{
+	char	   *mine;
+	char	   *pos;
+
+	/* Parse the configitem.  If we can't find an "=", silently do nothing. */
+	mine = pg_strdup(configitem);
+	pos = strchr(mine, '=');
+	if (pos == NULL)
+	{
+		pg_free(mine);
+		return;
+	}
+	*pos++ = '\0';
+
+	/* Build the command, with suitable quoting for everything. */
+	appendPQExpBuffer(buf, "ALTER %s %s ", type, fmtId(name));
+	if (type2 != NULL && name2 != NULL)
+		appendPQExpBuffer(buf, "IN %s %s ", type2, fmtId(name2));
+	appendPQExpBuffer(buf, "SET %s TO ", fmtId(mine));
+
+	/*
+	 * Some GUC variable names are 'LIST' type and hence must not be quoted.
+	 * XXX this list is incomplete ...
+	 */
+	if (pg_strcasecmp(mine, "DateStyle") == 0
+		|| pg_strcasecmp(mine, "search_path") == 0)
+		appendPQExpBufferStr(buf, pos);
+	else
+		appendStringLiteralConn(buf, pos, conn);
+
+	appendPQExpBufferStr(buf, ";\n");
+
+	pg_free(mine);
+}
diff --git a/src/bin/pg_dump/dumputils.h b/src/bin/pg_dump/dumputils.h
index d5f150dfa0..23a0645be8 100644
--- a/src/bin/pg_dump/dumputils.h
+++ b/src/bin/pg_dump/dumputils.h
@@ -56,4 +56,9 @@ extern void buildACLQueries(PQExpBuffer acl_subquery, PQExpBuffer racl_subquery,
 				const char *acl_column, const char *acl_owner,
 				const char *obj_kind, bool binary_upgrade);
+extern void makeAlterConfigCommand(PGconn *conn, const char *configitem,
+					   const char *type, const char *name,
+					   const char *type2, const char *name2,
+					   PQExpBuffer buf);
+
 #endif							/* DUMPUTILS_H */
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index acef20fdf7..ab009e6fe3 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -489,16 +489,19 @@ RestoreArchive(Archive *AHX)
 			 * whole.  Issuing drops against anything else would be wrong,
 			 * because at this point we're connected to the wrong database.
 			 * Conversely, if we're not in createDB mode, we'd better not
-			 * issue a DROP against the database at all.
+			 * issue a DROP against the database at all.  (The DATABASE
+			 * PROPERTIES entry, if any, works like the DATABASE entry.)
*/ if (ropt->createDB) { - if (strcmp(te->desc, "DATABASE") != 0) + if (strcmp(te->desc, "DATABASE") != 0 && + strcmp(te->desc, "DATABASE PROPERTIES") != 0) continue; } else { - if (strcmp(te->desc, "DATABASE") == 0) + if (strcmp(te->desc, "DATABASE") == 0 || + strcmp(te->desc, "DATABASE PROPERTIES") == 0) continue; } @@ -558,6 +561,8 @@ RestoreArchive(Archive *AHX) * we simply emit the original command for DEFAULT * objects (modulo the adjustment made above). * + * Likewise, don't mess with DATABASE PROPERTIES. + * * If we used CREATE OR REPLACE VIEW as a means of * quasi-dropping an ON SELECT rule, that should * be emitted unchanged as well. @@ -570,6 +575,7 @@ RestoreArchive(Archive *AHX) * search for hardcoded "DROP CONSTRAINT" instead. */ if (strcmp(te->desc, "DEFAULT") == 0 || + strcmp(te->desc, "DATABASE PROPERTIES") == 0 || strncmp(dropStmt, "CREATE OR REPLACE VIEW", 22) == 0) appendPQExpBufferStr(ftStmt, dropStmt); else @@ -750,11 +756,19 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel) reqs = te->reqs; /* - * Ignore DATABASE entry unless we should create it. We must check this - * here, not in _tocEntryRequired, because the createDB option should not - * affect emitting a DATABASE entry to an archive file. + * Ignore DATABASE and related entries unless createDB is specified. We + * must check this here, not in _tocEntryRequired, because !createDB + * should not prevent emitting these entries to an archive file. */ - if (!ropt->createDB && strcmp(te->desc, "DATABASE") == 0) + if (!ropt->createDB && + (strcmp(te->desc, "DATABASE") == 0 || + strcmp(te->desc, "DATABASE PROPERTIES") == 0 || + (strcmp(te->desc, "ACL") == 0 && + strncmp(te->tag, "DATABASE ", 9) == 0) || + (strcmp(te->desc, "COMMENT") == 0 && + strncmp(te->tag, "DATABASE ", 9) == 0) || + (strcmp(te->desc, "SECURITY LABEL") == 0 && + strncmp(te->tag, "DATABASE ", 9) == 0))) reqs = 0; /* Dump any relevant dump warnings to stderr */ @@ -2917,8 +2931,8 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) * Special Case: If 'SEQUENCE SET' or anything to do with BLOBs, then * it is considered a data entry. We don't need to check for the * BLOBS entry or old-style BLOB COMMENTS, because they will have - * hadDumper = true ... but we do need to check new-style BLOB - * comments. + * hadDumper = true ... but we do need to check new-style BLOB ACLs, + * comments, etc. 
*/ if (strcmp(te->desc, "SEQUENCE SET") == 0 || strcmp(te->desc, "BLOB") == 0 || @@ -3598,6 +3612,7 @@ _printTocEntry(ArchiveHandle *AH, TocEntry *te, bool isData) else if (strcmp(te->desc, "CAST") == 0 || strcmp(te->desc, "CHECK CONSTRAINT") == 0 || strcmp(te->desc, "CONSTRAINT") == 0 || + strcmp(te->desc, "DATABASE PROPERTIES") == 0 || strcmp(te->desc, "DEFAULT") == 0 || strcmp(te->desc, "FK CONSTRAINT") == 0 || strcmp(te->desc, "INDEX") == 0 || diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 1dc1d80ab1..11e1ba04cc 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -252,6 +252,8 @@ static void dumpPublication(Archive *fout, PublicationInfo *pubinfo); static void dumpPublicationTable(Archive *fout, PublicationRelInfo *pubrinfo); static void dumpSubscription(Archive *fout, SubscriptionInfo *subinfo); static void dumpDatabase(Archive *AH); +static void dumpDatabaseConfig(Archive *AH, PQExpBuffer outbuf, + const char *dbname, Oid dboid); static void dumpEncoding(Archive *AH); static void dumpStdStrings(Archive *AH); static void binary_upgrade_set_type_oids_by_type_oid(Archive *fout, @@ -838,7 +840,7 @@ main(int argc, char **argv) dumpEncoding(fout); dumpStdStrings(fout); - /* The database item is always next, unless we don't want it at all */ + /* The database items are always next, unless we don't want them at all */ if (dopt.include_everything && !dopt.dataOnly) dumpDatabase(fout); @@ -2540,6 +2542,10 @@ dumpDatabase(Archive *fout) i_ctype, i_frozenxid, i_minmxid, + i_datacl, + i_rdatacl, + i_datistemplate, + i_datconnlimit, i_tablespace; CatalogId dbCatId; DumpId dbDumpId; @@ -2548,11 +2554,17 @@ dumpDatabase(Archive *fout) *encoding, *collate, *ctype, + *datacl, + *rdatacl, + *datistemplate, + *datconnlimit, *tablespace; uint32 frozenxid, minmxid; + char *qdatname; datname = PQdb(conn); + qdatname = pg_strdup(fmtId(datname)); if (g_verbose) write_msg(NULL, "saving database definition\n"); @@ -2560,13 +2572,37 @@ dumpDatabase(Archive *fout) /* Make sure we are in proper schema */ selectSourceSchema(fout, "pg_catalog"); - /* Get the database owner and parameters from pg_database */ - if (fout->remoteVersion >= 90300) + /* Fetch the database-level properties for this database */ + if (fout->remoteVersion >= 90600) { appendPQExpBuffer(dbQry, "SELECT tableoid, oid, " "(%s datdba) AS dba, " "pg_encoding_to_char(encoding) AS encoding, " "datcollate, datctype, datfrozenxid, datminmxid, " + "(SELECT array_agg(acl ORDER BY acl::text COLLATE \"C\") FROM ( " + " SELECT unnest(coalesce(datacl,acldefault('d',datdba))) AS acl " + " EXCEPT SELECT unnest(acldefault('d',datdba))) as datacls)" + " AS datacl, " + "(SELECT array_agg(acl ORDER BY acl::text COLLATE \"C\") FROM ( " + " SELECT unnest(acldefault('d',datdba)) AS acl " + " EXCEPT SELECT unnest(coalesce(datacl,acldefault('d',datdba)))) as rdatacls)" + " AS rdatacl, " + "datistemplate, datconnlimit, " + "(SELECT spcname FROM pg_tablespace t WHERE t.oid = dattablespace) AS tablespace, " + "shobj_description(oid, 'pg_database') AS description " + + "FROM pg_database " + "WHERE datname = ", + username_subquery); + appendStringLiteralAH(dbQry, datname, fout); + } + else if (fout->remoteVersion >= 90300) + { + appendPQExpBuffer(dbQry, "SELECT tableoid, oid, " + "(%s datdba) AS dba, " + "pg_encoding_to_char(encoding) AS encoding, " + "datcollate, datctype, datfrozenxid, datminmxid, " + "datacl, '' as rdatacl, datistemplate, datconnlimit, " "(SELECT spcname FROM pg_tablespace t WHERE t.oid = dattablespace) AS 
tablespace, " "shobj_description(oid, 'pg_database') AS description " @@ -2581,6 +2617,7 @@ dumpDatabase(Archive *fout) "(%s datdba) AS dba, " "pg_encoding_to_char(encoding) AS encoding, " "datcollate, datctype, datfrozenxid, 0 AS datminmxid, " + "datacl, '' as rdatacl, datistemplate, datconnlimit, " "(SELECT spcname FROM pg_tablespace t WHERE t.oid = dattablespace) AS tablespace, " "shobj_description(oid, 'pg_database') AS description " @@ -2595,6 +2632,7 @@ dumpDatabase(Archive *fout) "(%s datdba) AS dba, " "pg_encoding_to_char(encoding) AS encoding, " "NULL AS datcollate, NULL AS datctype, datfrozenxid, 0 AS datminmxid, " + "datacl, '' as rdatacl, datistemplate, datconnlimit, " "(SELECT spcname FROM pg_tablespace t WHERE t.oid = dattablespace) AS tablespace, " "shobj_description(oid, 'pg_database') AS description " @@ -2609,6 +2647,8 @@ dumpDatabase(Archive *fout) "(%s datdba) AS dba, " "pg_encoding_to_char(encoding) AS encoding, " "NULL AS datcollate, NULL AS datctype, datfrozenxid, 0 AS datminmxid, " + "datacl, '' as rdatacl, datistemplate, " + "-1 as datconnlimit, " "(SELECT spcname FROM pg_tablespace t WHERE t.oid = dattablespace) AS tablespace " "FROM pg_database " "WHERE datname = ", @@ -2626,6 +2666,10 @@ dumpDatabase(Archive *fout) i_ctype = PQfnumber(res, "datctype"); i_frozenxid = PQfnumber(res, "datfrozenxid"); i_minmxid = PQfnumber(res, "datminmxid"); + i_datacl = PQfnumber(res, "datacl"); + i_rdatacl = PQfnumber(res, "rdatacl"); + i_datistemplate = PQfnumber(res, "datistemplate"); + i_datconnlimit = PQfnumber(res, "datconnlimit"); i_tablespace = PQfnumber(res, "tablespace"); dbCatId.tableoid = atooid(PQgetvalue(res, 0, i_tableoid)); @@ -2636,10 +2680,20 @@ dumpDatabase(Archive *fout) ctype = PQgetvalue(res, 0, i_ctype); frozenxid = atooid(PQgetvalue(res, 0, i_frozenxid)); minmxid = atooid(PQgetvalue(res, 0, i_minmxid)); + datacl = PQgetvalue(res, 0, i_datacl); + rdatacl = PQgetvalue(res, 0, i_rdatacl); + datistemplate = PQgetvalue(res, 0, i_datistemplate); + datconnlimit = PQgetvalue(res, 0, i_datconnlimit); tablespace = PQgetvalue(res, 0, i_tablespace); + /* + * Prepare the CREATE DATABASE command. We must specify encoding, locale, + * and tablespace since those can't be altered later. Other DB properties + * are left to the DATABASE PROPERTIES entry, so that they can be applied + * after reconnecting to the target DB. + */ appendPQExpBuffer(creaQry, "CREATE DATABASE %s WITH TEMPLATE = template0", - fmtId(datname)); + qdatname); if (strlen(encoding) > 0) { appendPQExpBufferStr(creaQry, " ENCODING = "); @@ -2655,26 +2709,23 @@ dumpDatabase(Archive *fout) appendPQExpBufferStr(creaQry, " LC_CTYPE = "); appendStringLiteralAH(creaQry, ctype, fout); } + + /* + * Note: looking at dopt->outputNoTablespaces here is completely the wrong + * thing; the decision whether to specify a tablespace should be left till + * pg_restore, so that pg_restore --no-tablespaces applies. Ideally we'd + * label the DATABASE entry with the tablespace and let the normal + * tablespace selection logic work ... but CREATE DATABASE doesn't pay + * attention to default_tablespace, so that won't work. 
+ */ if (strlen(tablespace) > 0 && strcmp(tablespace, "pg_default") != 0 && !dopt->outputNoTablespaces) appendPQExpBuffer(creaQry, " TABLESPACE = %s", fmtId(tablespace)); appendPQExpBufferStr(creaQry, ";\n"); - if (dopt->binary_upgrade) - { - appendPQExpBufferStr(creaQry, "\n-- For binary upgrade, set datfrozenxid and datminmxid.\n"); - appendPQExpBuffer(creaQry, "UPDATE pg_catalog.pg_database\n" - "SET datfrozenxid = '%u', datminmxid = '%u'\n" - "WHERE datname = ", - frozenxid, minmxid); - appendStringLiteralAH(creaQry, datname, fout); - appendPQExpBufferStr(creaQry, ";\n"); - - } - appendPQExpBuffer(delQry, "DROP DATABASE %s;\n", - fmtId(datname)); + qdatname); dbDumpId = createDumpId(); @@ -2697,7 +2748,7 @@ dumpDatabase(Archive *fout) NULL); /* Dumper Arg */ /* Compute correct tag for comments etc */ - appendPQExpBuffer(labelq, "DATABASE %s", fmtId(datname)); + appendPQExpBuffer(labelq, "DATABASE %s", qdatname); /* Dump DB comment if any */ if (fout->remoteVersion >= 80200) @@ -2717,7 +2768,7 @@ dumpDatabase(Archive *fout) * Generates warning when loaded into a differently-named * database. */ - appendPQExpBuffer(dbQry, "COMMENT ON DATABASE %s IS ", fmtId(datname)); + appendPQExpBuffer(dbQry, "COMMENT ON DATABASE %s IS ", qdatname); appendStringLiteralAH(dbQry, comment, fout); appendPQExpBufferStr(dbQry, ";\n"); @@ -2758,6 +2809,73 @@ dumpDatabase(Archive *fout) PQclear(shres); } + /* + * Dump ACL if any. Note that we do not support initial privileges + * (pg_init_privs) on databases. + */ + dumpACL(fout, dbCatId, dbDumpId, "DATABASE", + qdatname, NULL, labelq->data, NULL, + dba, datacl, rdatacl, "", ""); + + /* + * Now construct a DATABASE PROPERTIES archive entry to restore any + * non-default database-level properties. We want to do this after + * reconnecting so that these properties won't apply during the restore + * session. In this way, restoring works even if there is, say, an ALTER + * DATABASE SET that turns on default_transaction_read_only. + */ + resetPQExpBuffer(creaQry); + resetPQExpBuffer(delQry); + + if (strlen(datconnlimit) > 0 && strcmp(datconnlimit, "-1") != 0) + appendPQExpBuffer(creaQry, "ALTER DATABASE %s CONNECTION LIMIT = %s;\n", + qdatname, datconnlimit); + + if (strcmp(datistemplate, "t") == 0) + { + appendPQExpBuffer(creaQry, "ALTER DATABASE %s IS_TEMPLATE = true;\n", + qdatname); + + /* + * The backend won't accept DROP DATABASE on a template database. We + * can deal with that by removing the template marking before the DROP + * gets issued. We'd prefer to use ALTER DATABASE IF EXISTS here, but + * since no such command is currently supported, fake it with a direct + * UPDATE on pg_database. + */ + appendPQExpBufferStr(delQry, "UPDATE pg_catalog.pg_database " + "SET datistemplate = false WHERE datname = "); + appendStringLiteralAH(delQry, datname, fout); + appendPQExpBufferStr(delQry, ";\n"); + } + + /* Add database-specific SET options */ + dumpDatabaseConfig(fout, creaQry, datname, dbCatId.oid); + + /* + * We stick this binary-upgrade query into the DATABASE PROPERTIES archive + * entry, too. It can't go into the DATABASE entry because that would + * result in an implicit transaction block around the CREATE DATABASE. 
+ */ + if (dopt->binary_upgrade) + { + appendPQExpBufferStr(creaQry, "\n-- For binary upgrade, set datfrozenxid and datminmxid.\n"); + appendPQExpBuffer(creaQry, "UPDATE pg_catalog.pg_database\n" + "SET datfrozenxid = '%u', datminmxid = '%u'\n" + "WHERE datname = ", + frozenxid, minmxid); + appendStringLiteralAH(creaQry, datname, fout); + appendPQExpBufferStr(creaQry, ";\n"); + } + + if (creaQry->len > 0) + ArchiveEntry(fout, nilCatalogId, createDumpId(), + datname, NULL, NULL, dba, + false, "DATABASE PROPERTIES", SECTION_PRE_DATA, + creaQry->data, delQry->data, NULL, + &(dbDumpId), 1, + NULL, NULL); + /* * pg_largeobject and pg_largeobject_metadata come from the old system * intact, so set their relfrozenxids and relminmxids. @@ -2793,8 +2911,8 @@ dumpDatabase(Archive *fout) appendPQExpBuffer(loOutQry, "UPDATE pg_catalog.pg_class\n" "SET relfrozenxid = '%u', relminmxid = '%u'\n" "WHERE oid = %u;\n", - atoi(PQgetvalue(lo_res, 0, i_relfrozenxid)), - atoi(PQgetvalue(lo_res, 0, i_relminmxid)), + atooid(PQgetvalue(lo_res, 0, i_relfrozenxid)), + atooid(PQgetvalue(lo_res, 0, i_relminmxid)), LargeObjectRelationId); ArchiveEntry(fout, nilCatalogId, createDumpId(), "pg_largeobject", NULL, NULL, "", @@ -2833,8 +2951,8 @@ dumpDatabase(Archive *fout) appendPQExpBuffer(loOutQry, "UPDATE pg_catalog.pg_class\n" "SET relfrozenxid = '%u', relminmxid = '%u'\n" "WHERE oid = %u;\n", - atoi(PQgetvalue(lo_res, 0, i_relfrozenxid)), - atoi(PQgetvalue(lo_res, 0, i_relminmxid)), + atooid(PQgetvalue(lo_res, 0, i_relfrozenxid)), + atooid(PQgetvalue(lo_res, 0, i_relminmxid)), LargeObjectMetadataRelationId); ArchiveEntry(fout, nilCatalogId, createDumpId(), "pg_largeobject_metadata", NULL, NULL, "", @@ -2852,12 +2970,85 @@ dumpDatabase(Archive *fout) PQclear(res); + free(qdatname); destroyPQExpBuffer(dbQry); destroyPQExpBuffer(delQry); destroyPQExpBuffer(creaQry); destroyPQExpBuffer(labelq); } +/* + * Collect any database-specific or role-and-database-specific SET options + * for this database, and append them to outbuf. + */ +static void +dumpDatabaseConfig(Archive *AH, PQExpBuffer outbuf, + const char *dbname, Oid dboid) +{ + PGconn *conn = GetConnection(AH); + PQExpBuffer buf = createPQExpBuffer(); + PGresult *res; + int count = 1; + + /* + * First collect database-specific options. Pre-8.4 server versions lack + * unnest(), so we do this the hard way by querying once per subscript. 
+ */ + for (;;) + { + if (AH->remoteVersion >= 90000) + printfPQExpBuffer(buf, "SELECT setconfig[%d] FROM pg_db_role_setting " + "WHERE setrole = 0 AND setdatabase = '%u'::oid", + count, dboid); + else + printfPQExpBuffer(buf, "SELECT datconfig[%d] FROM pg_database WHERE oid = '%u'::oid", count, dboid); + + res = ExecuteSqlQuery(AH, buf->data, PGRES_TUPLES_OK); + + if (PQntuples(res) == 1 && + !PQgetisnull(res, 0, 0)) + { + makeAlterConfigCommand(conn, PQgetvalue(res, 0, 0), + "DATABASE", dbname, NULL, NULL, + outbuf); + PQclear(res); + count++; + } + else + { + PQclear(res); + break; + } + } + + /* Now look for role-and-database-specific options */ + if (AH->remoteVersion >= 90000) + { + /* Here we can assume we have unnest() */ + printfPQExpBuffer(buf, "SELECT rolname, unnest(setconfig) " + "FROM pg_db_role_setting s, pg_roles r " + "WHERE setrole = r.oid AND setdatabase = '%u'::oid", + dboid); + + res = ExecuteSqlQuery(AH, buf->data, PGRES_TUPLES_OK); + + if (PQntuples(res) > 0) + { + int i; + + for (i = 0; i < PQntuples(res); i++) + makeAlterConfigCommand(conn, PQgetvalue(res, i, 1), + "ROLE", PQgetvalue(res, i, 0), + "DATABASE", dbname, + outbuf); + } + + PQclear(res); + } + + destroyPQExpBuffer(buf); +} + /* * dumpEncoding: put the correct encoding into the archive */ diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c index 3dd2c3871e..2fd5a025af 100644 --- a/src/bin/pg_dump/pg_dumpall.c +++ b/src/bin/pg_dump/pg_dumpall.c @@ -38,17 +38,10 @@ static void dumpGroups(PGconn *conn); static void dropTablespaces(PGconn *conn); static void dumpTablespaces(PGconn *conn); static void dropDBs(PGconn *conn); -static void dumpCreateDB(PGconn *conn); -static void dumpDatabaseConfig(PGconn *conn, const char *dbname); static void dumpUserConfig(PGconn *conn, const char *username); -static void dumpDbRoleConfig(PGconn *conn); -static void makeAlterConfigCommand(PGconn *conn, const char *arrayitem, - const char *type, const char *name, const char *type2, - const char *name2); static void dumpDatabases(PGconn *conn); static void dumpTimestamp(const char *msg); - -static int runPgDump(const char *dbname); +static int runPgDump(const char *dbname, const char *create_opts); static void buildShSecLabels(PGconn *conn, const char *catalog_name, uint32 objectId, PQExpBuffer buffer, const char *target, const char *objname); @@ -62,6 +55,7 @@ static char pg_dump_bin[MAXPGPATH]; static const char *progname; static PQExpBuffer pgdumpopts; static char *connstr = ""; +static bool output_clean = false; static bool skip_acls = false; static bool verbose = false; static bool dosync = true; @@ -152,7 +146,6 @@ main(int argc, char *argv[]) trivalue prompt_password = TRI_DEFAULT; bool data_only = false; bool globals_only = false; - bool output_clean = false; bool roles_only = false; bool tablespaces_only = false; PGconn *conn; @@ -558,17 +551,6 @@ main(int argc, char *argv[]) /* Dump tablespaces */ if (!roles_only && !no_tablespaces) dumpTablespaces(conn); - - /* Dump CREATE DATABASE commands */ - if (binary_upgrade || (!globals_only && !roles_only && !tablespaces_only)) - dumpCreateDB(conn); - - /* Dump role/database settings */ - if (!tablespaces_only && !roles_only) - { - if (server_version >= 90000) - dumpDbRoleConfig(conn); - } } if (!globals_only && !roles_only && !tablespaces_only) @@ -1262,8 +1244,6 @@ dumpTablespaces(PGconn *conn) /* * Dump commands to drop each database. - * - * This should match the set of databases targeted by dumpCreateDB(). 
*/ static void dropDBs(PGconn *conn) @@ -1271,24 +1251,30 @@ dropDBs(PGconn *conn) PGresult *res; int i; + /* + * Skip databases marked not datallowconn, since we'd be unable to connect + * to them anyway. This must agree with dumpDatabases(). + */ res = executeQuery(conn, "SELECT datname " "FROM pg_database d " - "WHERE datallowconn ORDER BY 1"); + "WHERE datallowconn " + "ORDER BY datname"); if (PQntuples(res) > 0) - fprintf(OPF, "--\n-- Drop databases\n--\n\n"); + fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n"); for (i = 0; i < PQntuples(res); i++) { char *dbname = PQgetvalue(res, i, 0); /* - * Skip "template1" and "postgres"; the restore script is almost - * certainly going to be run in one or the other, and we don't know - * which. This must agree with dumpCreateDB's choices! + * Skip "postgres" and "template1"; dumpDatabases() will deal with + * them specially. Also, be sure to skip "template0", even if for + * some reason it's not marked !datallowconn. */ if (strcmp(dbname, "template1") != 0 && + strcmp(dbname, "template0") != 0 && strcmp(dbname, "postgres") != 0) { fprintf(OPF, "DROP DATABASE %s%s;\n", @@ -1302,323 +1288,6 @@ dropDBs(PGconn *conn) fprintf(OPF, "\n\n"); } -/* - * Dump commands to create each database. - * - * To minimize the number of reconnections (and possibly ensuing - * password prompts) required by the output script, we emit all CREATE - * DATABASE commands during the initial phase of the script, and then - * run pg_dump for each database to dump the contents of that - * database. We skip databases marked not datallowconn, since we'd be - * unable to connect to them anyway (and besides, we don't want to - * dump template0). - */ -static void -dumpCreateDB(PGconn *conn) -{ - PQExpBuffer buf = createPQExpBuffer(); - char *default_encoding = NULL; - char *default_collate = NULL; - char *default_ctype = NULL; - PGresult *res; - int i; - - fprintf(OPF, "--\n-- Database creation\n--\n\n"); - - /* - * First, get the installation's default encoding and locale information. - * We will dump encoding and locale specifications in the CREATE DATABASE - * commands for just those databases with values different from defaults. - * - * We consider template0's encoding and locale to define the installation - * default. Pre-8.4 installations do not have per-database locale - * settings; for them, every database must necessarily be using the - * installation default, so there's no need to do anything. - */ - if (server_version >= 80400) - res = executeQuery(conn, - "SELECT pg_encoding_to_char(encoding), " - "datcollate, datctype " - "FROM pg_database " - "WHERE datname = 'template0'"); - else - res = executeQuery(conn, - "SELECT pg_encoding_to_char(encoding), " - "null::text AS datcollate, null::text AS datctype " - "FROM pg_database " - "WHERE datname = 'template0'"); - - /* If for some reason the template DB isn't there, treat as unknown */ - if (PQntuples(res) > 0) - { - if (!PQgetisnull(res, 0, 0)) - default_encoding = pg_strdup(PQgetvalue(res, 0, 0)); - if (!PQgetisnull(res, 0, 1)) - default_collate = pg_strdup(PQgetvalue(res, 0, 1)); - if (!PQgetisnull(res, 0, 2)) - default_ctype = pg_strdup(PQgetvalue(res, 0, 2)); - } - - PQclear(res); - - - /* - * Now collect all the information about databases to dump. - * - * For the database ACLs, as of 9.6, we extract both the positive (as - * datacl) and negative (as rdatacl) ACLs, relative to the default ACL for - * databases, which are then passed to buildACLCommands() below. 
- * - * See buildACLQueries() and buildACLCommands(). - * - * Note that we do not support initial privileges (pg_init_privs) on - * databases. - */ - if (server_version >= 90600) - printfPQExpBuffer(buf, - "SELECT datname, " - "coalesce(rolname, (select rolname from %s where oid=(select datdba from pg_database where datname='template0'))), " - "pg_encoding_to_char(d.encoding), " - "datcollate, datctype, datfrozenxid, datminmxid, " - "datistemplate, " - "(SELECT pg_catalog.array_agg(acl ORDER BY acl::text COLLATE \"C\") FROM ( " - " SELECT pg_catalog.unnest(coalesce(datacl,pg_catalog.acldefault('d',datdba))) AS acl " - " EXCEPT SELECT pg_catalog.unnest(pg_catalog.acldefault('d',datdba))) as datacls)" - "AS datacl, " - "(SELECT pg_catalog.array_agg(acl ORDER BY acl::text COLLATE \"C\") FROM ( " - " SELECT pg_catalog.unnest(pg_catalog.acldefault('d',datdba)) AS acl " - " EXCEPT SELECT pg_catalog.unnest(coalesce(datacl,pg_catalog.acldefault('d',datdba)))) as rdatacls)" - "AS rdatacl, " - "datconnlimit, " - "(SELECT spcname FROM pg_tablespace t WHERE t.oid = d.dattablespace) AS dattablespace " - "FROM pg_database d LEFT JOIN %s u ON (datdba = u.oid) " - "WHERE datallowconn ORDER BY 1", role_catalog, role_catalog); - else if (server_version >= 90300) - printfPQExpBuffer(buf, - "SELECT datname, " - "coalesce(rolname, (select rolname from %s where oid=(select datdba from pg_database where datname='template0'))), " - "pg_encoding_to_char(d.encoding), " - "datcollate, datctype, datfrozenxid, datminmxid, " - "datistemplate, datacl, '' as rdatacl, " - "datconnlimit, " - "(SELECT spcname FROM pg_tablespace t WHERE t.oid = d.dattablespace) AS dattablespace " - "FROM pg_database d LEFT JOIN %s u ON (datdba = u.oid) " - "WHERE datallowconn ORDER BY 1", role_catalog, role_catalog); - else if (server_version >= 80400) - printfPQExpBuffer(buf, - "SELECT datname, " - "coalesce(rolname, (select rolname from %s where oid=(select datdba from pg_database where datname='template0'))), " - "pg_encoding_to_char(d.encoding), " - "datcollate, datctype, datfrozenxid, 0 AS datminmxid, " - "datistemplate, datacl, '' as rdatacl, " - "datconnlimit, " - "(SELECT spcname FROM pg_tablespace t WHERE t.oid = d.dattablespace) AS dattablespace " - "FROM pg_database d LEFT JOIN %s u ON (datdba = u.oid) " - "WHERE datallowconn ORDER BY 1", role_catalog, role_catalog); - else if (server_version >= 80100) - printfPQExpBuffer(buf, - "SELECT datname, " - "coalesce(rolname, (select rolname from %s where oid=(select datdba from pg_database where datname='template0'))), " - "pg_encoding_to_char(d.encoding), " - "null::text AS datcollate, null::text AS datctype, datfrozenxid, 0 AS datminmxid, " - "datistemplate, datacl, '' as rdatacl, " - "datconnlimit, " - "(SELECT spcname FROM pg_tablespace t WHERE t.oid = d.dattablespace) AS dattablespace " - "FROM pg_database d LEFT JOIN %s u ON (datdba = u.oid) " - "WHERE datallowconn ORDER BY 1", role_catalog, role_catalog); - else - printfPQExpBuffer(buf, - "SELECT datname, " - "coalesce(usename, (select usename from pg_shadow where usesysid=(select datdba from pg_database where datname='template0'))), " - "pg_encoding_to_char(d.encoding), " - "null::text AS datcollate, null::text AS datctype, datfrozenxid, 0 AS datminmxid, " - "datistemplate, datacl, '' as rdatacl, " - "-1 as datconnlimit, " - "(SELECT spcname FROM pg_tablespace t WHERE t.oid = d.dattablespace) AS dattablespace " - "FROM pg_database d LEFT JOIN pg_shadow u ON (datdba = usesysid) " - "WHERE datallowconn ORDER BY 1"); - - res = 
executeQuery(conn, buf->data); - - for (i = 0; i < PQntuples(res); i++) - { - char *dbname = PQgetvalue(res, i, 0); - char *dbowner = PQgetvalue(res, i, 1); - char *dbencoding = PQgetvalue(res, i, 2); - char *dbcollate = PQgetvalue(res, i, 3); - char *dbctype = PQgetvalue(res, i, 4); - uint32 dbfrozenxid = atooid(PQgetvalue(res, i, 5)); - uint32 dbminmxid = atooid(PQgetvalue(res, i, 6)); - char *dbistemplate = PQgetvalue(res, i, 7); - char *dbacl = PQgetvalue(res, i, 8); - char *rdbacl = PQgetvalue(res, i, 9); - char *dbconnlimit = PQgetvalue(res, i, 10); - char *dbtablespace = PQgetvalue(res, i, 11); - char *fdbname; - - fdbname = pg_strdup(fmtId(dbname)); - - resetPQExpBuffer(buf); - - /* - * Skip the CREATE DATABASE commands for "template1" and "postgres", - * since they are presumably already there in the destination cluster. - * We do want to emit their ACLs and config options if any, however. - */ - if (strcmp(dbname, "template1") != 0 && - strcmp(dbname, "postgres") != 0) - { - appendPQExpBuffer(buf, "CREATE DATABASE %s", fdbname); - - appendPQExpBufferStr(buf, " WITH TEMPLATE = template0"); - - if (strlen(dbowner) != 0) - appendPQExpBuffer(buf, " OWNER = %s", fmtId(dbowner)); - - if (default_encoding && strcmp(dbencoding, default_encoding) != 0) - { - appendPQExpBufferStr(buf, " ENCODING = "); - appendStringLiteralConn(buf, dbencoding, conn); - } - - if (default_collate && strcmp(dbcollate, default_collate) != 0) - { - appendPQExpBufferStr(buf, " LC_COLLATE = "); - appendStringLiteralConn(buf, dbcollate, conn); - } - - if (default_ctype && strcmp(dbctype, default_ctype) != 0) - { - appendPQExpBufferStr(buf, " LC_CTYPE = "); - appendStringLiteralConn(buf, dbctype, conn); - } - - /* - * Output tablespace if it isn't the default. For default, it - * uses the default from the template database. If tablespace is - * specified and tablespace creation failed earlier, (e.g. no such - * directory), the database creation will fail too. One solution - * would be to use 'SET default_tablespace' like we do in pg_dump - * for setting non-default database locations. - */ - if (strcmp(dbtablespace, "pg_default") != 0 && !no_tablespaces) - appendPQExpBuffer(buf, " TABLESPACE = %s", - fmtId(dbtablespace)); - - if (strcmp(dbistemplate, "t") == 0) - appendPQExpBuffer(buf, " IS_TEMPLATE = true"); - - if (strcmp(dbconnlimit, "-1") != 0) - appendPQExpBuffer(buf, " CONNECTION LIMIT = %s", - dbconnlimit); - - appendPQExpBufferStr(buf, ";\n"); - } - else if (strcmp(dbtablespace, "pg_default") != 0 && !no_tablespaces) - { - /* - * Cannot change tablespace of the database we're connected to, so - * to move "postgres" to another tablespace, we connect to - * "template1", and vice versa. 
- */ - if (strcmp(dbname, "postgres") == 0) - appendPQExpBuffer(buf, "\\connect template1\n"); - else - appendPQExpBuffer(buf, "\\connect postgres\n"); - - appendPQExpBuffer(buf, "ALTER DATABASE %s SET TABLESPACE %s;\n", - fdbname, fmtId(dbtablespace)); - - /* connect to original database */ - appendPsqlMetaConnect(buf, dbname); - } - - if (binary_upgrade) - { - appendPQExpBufferStr(buf, "-- For binary upgrade, set datfrozenxid and datminmxid.\n"); - appendPQExpBuffer(buf, "UPDATE pg_catalog.pg_database " - "SET datfrozenxid = '%u', datminmxid = '%u' " - "WHERE datname = ", - dbfrozenxid, dbminmxid); - appendStringLiteralConn(buf, dbname, conn); - appendPQExpBufferStr(buf, ";\n"); - } - - if (!skip_acls && - !buildACLCommands(fdbname, NULL, "DATABASE", - dbacl, rdbacl, dbowner, - "", server_version, buf)) - { - fprintf(stderr, _("%s: could not parse ACL list (%s) for database \"%s\"\n"), - progname, dbacl, fdbname); - PQfinish(conn); - exit_nicely(1); - } - - fprintf(OPF, "%s", buf->data); - - dumpDatabaseConfig(conn, dbname); - - free(fdbname); - } - - if (default_encoding) - free(default_encoding); - if (default_collate) - free(default_collate); - if (default_ctype) - free(default_ctype); - - PQclear(res); - destroyPQExpBuffer(buf); - - fprintf(OPF, "\n\n"); -} - - -/* - * Dump database-specific configuration - */ -static void -dumpDatabaseConfig(PGconn *conn, const char *dbname) -{ - PQExpBuffer buf = createPQExpBuffer(); - int count = 1; - - for (;;) - { - PGresult *res; - - if (server_version >= 90000) - printfPQExpBuffer(buf, "SELECT setconfig[%d] FROM pg_db_role_setting WHERE " - "setrole = 0 AND setdatabase = (SELECT oid FROM pg_database WHERE datname = ", count); - else - printfPQExpBuffer(buf, "SELECT datconfig[%d] FROM pg_database WHERE datname = ", count); - appendStringLiteralConn(buf, dbname, conn); - - if (server_version >= 90000) - appendPQExpBufferChar(buf, ')'); - - res = executeQuery(conn, buf->data); - if (PQntuples(res) == 1 && - !PQgetisnull(res, 0, 0)) - { - makeAlterConfigCommand(conn, PQgetvalue(res, 0, 0), - "DATABASE", dbname, NULL, NULL); - PQclear(res); - count++; - } - else - { - PQclear(res); - break; - } - } - - destroyPQExpBuffer(buf); -} - - /* * Dump user-specific configuration @@ -1649,8 +1318,11 @@ dumpUserConfig(PGconn *conn, const char *username) if (PQntuples(res) == 1 && !PQgetisnull(res, 0, 0)) { + resetPQExpBuffer(buf); makeAlterConfigCommand(conn, PQgetvalue(res, 0, 0), - "ROLE", username, NULL, NULL); + "ROLE", username, NULL, NULL, + buf); + fprintf(OPF, "%s", buf->data); PQclear(res); count++; } @@ -1665,85 +1337,6 @@ dumpUserConfig(PGconn *conn, const char *username) } -/* - * Dump user-and-database-specific configuration - */ -static void -dumpDbRoleConfig(PGconn *conn) -{ - PQExpBuffer buf = createPQExpBuffer(); - PGresult *res; - int i; - - printfPQExpBuffer(buf, "SELECT rolname, datname, unnest(setconfig) " - "FROM pg_db_role_setting, %s u, pg_database " - "WHERE setrole = u.oid AND setdatabase = pg_database.oid", role_catalog); - res = executeQuery(conn, buf->data); - - if (PQntuples(res) > 0) - { - fprintf(OPF, "--\n-- Per-Database Role Settings \n--\n\n"); - - for (i = 0; i < PQntuples(res); i++) - { - makeAlterConfigCommand(conn, PQgetvalue(res, i, 2), - "ROLE", PQgetvalue(res, i, 0), - "DATABASE", PQgetvalue(res, i, 1)); - } - - fprintf(OPF, "\n\n"); - } - - PQclear(res); - destroyPQExpBuffer(buf); -} - - -/* - * Helper function for dumpXXXConfig(). 
- */ -static void -makeAlterConfigCommand(PGconn *conn, const char *arrayitem, - const char *type, const char *name, - const char *type2, const char *name2) -{ - char *pos; - char *mine; - PQExpBuffer buf; - - mine = pg_strdup(arrayitem); - pos = strchr(mine, '='); - if (pos == NULL) - { - free(mine); - return; - } - - buf = createPQExpBuffer(); - - *pos = 0; - appendPQExpBuffer(buf, "ALTER %s %s ", type, fmtId(name)); - if (type2 != NULL && name2 != NULL) - appendPQExpBuffer(buf, "IN %s %s ", type2, fmtId(name2)); - appendPQExpBuffer(buf, "SET %s TO ", fmtId(mine)); - - /* - * Some GUC variable names are 'LIST' type and hence must not be quoted. - */ - if (pg_strcasecmp(mine, "DateStyle") == 0 - || pg_strcasecmp(mine, "search_path") == 0) - appendPQExpBufferStr(buf, pos + 1); - else - appendStringLiteralConn(buf, pos + 1, conn); - appendPQExpBufferStr(buf, ";\n"); - - fprintf(OPF, "%s", buf->data); - destroyPQExpBuffer(buf); - free(mine); -} - - - /* * Dump contents of databases. */ @@ -1753,38 +1346,62 @@ dumpDatabases(PGconn *conn) PGresult *res; int i; - res = executeQuery(conn, "SELECT datname FROM pg_database WHERE datallowconn ORDER BY 1"); + /* + * Skip databases marked not datallowconn, since we'd be unable to connect + * to them anyway. This must agree with dropDBs(). + * + * We arrange for template1 to be processed first, then we process other + * DBs in alphabetical order. If we just did them all alphabetically, we + * might find ourselves trying to drop the "postgres" database while still + * connected to it. This makes trying to run the restore script while + * connected to "template1" a bad idea, but there's no fixed order that + * doesn't have some failure mode with --clean. + */ + res = executeQuery(conn, + "SELECT datname " + "FROM pg_database d " + "WHERE datallowconn " + "ORDER BY (datname <> 'template1'), datname"); for (i = 0; i < PQntuples(res); i++) { + char *dbname = PQgetvalue(res, i, 0); + const char *create_opts; int ret; - char *dbname = PQgetvalue(res, i, 0); - PQExpBufferData connectbuf; + /* Skip template0, even if it's not marked !datallowconn. */ + if (strcmp(dbname, "template0") == 0) + continue; if (verbose) fprintf(stderr, _("%s: dumping database \"%s\"...\n"), progname, dbname); - initPQExpBuffer(&connectbuf); - appendPsqlMetaConnect(&connectbuf, dbname); - fprintf(OPF, "%s\n", connectbuf.data); - termPQExpBuffer(&connectbuf); - /* - * Restore will need to write to the target cluster. This connection - * setting is emitted for pg_dumpall rather than in the code also used - * by pg_dump, so that a cluster with databases or users which have - * this flag turned on can still be replicated through pg_dumpall - * without editing the file or stream. With pg_dump there are many - * other ways to allow the file to be used, and leaving it out allows - * users to protect databases from being accidental restore targets. + * We assume that "template1" and "postgres" already exist in the + * target installation. dropDBs() won't have removed them, for fear + * of removing the DB the restore script is initially connected to. If + * --clean was specified, tell pg_dump to drop and recreate them; + * otherwise we'll merely restore their contents. Other databases + * should simply be created. 
*/ - fprintf(OPF, "SET default_transaction_read_only = off;\n\n"); + if (strcmp(dbname, "template1") == 0 || strcmp(dbname, "postgres") == 0) + { + if (output_clean) + create_opts = "--clean --create"; + else + { + create_opts = ""; + /* Since pg_dump won't emit a \connect command, we must */ + fprintf(OPF, "\\connect %s\n\n", dbname); + } + } + else + create_opts = "--create"; if (filename) fclose(OPF); - ret = runPgDump(dbname); + ret = runPgDump(dbname, create_opts); if (ret != 0) { fprintf(stderr, _("%s: pg_dump failed on database \"%s\", exiting\n"), progname, dbname); @@ -1810,17 +1427,17 @@ dumpDatabases(PGconn *conn) /* - * Run pg_dump on dbname. + * Run pg_dump on dbname, with specified options. */ static int -runPgDump(const char *dbname) +runPgDump(const char *dbname, const char *create_opts) { PQExpBuffer connstrbuf = createPQExpBuffer(); PQExpBuffer cmd = createPQExpBuffer(); int ret; - appendPQExpBuffer(cmd, "\"%s\" %s", pg_dump_bin, - pgdumpopts->data); + appendPQExpBuffer(cmd, "\"%s\" %s %s", pg_dump_bin, + pgdumpopts->data, create_opts); /* * If we have a filename, use the undocumented plain-append pg_dump diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl index fce1465c11..74730bfc65 100644 --- a/src/bin/pg_dump/t/002_pg_dump.pl +++ b/src/bin/pg_dump/t/002_pg_dump.pl @@ -1516,11 +1516,14 @@ all_runs => 1, catch_all => 'COMMENT commands', regexp => qr/^COMMENT ON DATABASE postgres IS .*;/m, - like => { + # Should appear in the same tests as "CREATE DATABASE postgres" + like => { createdb => 1, }, + unlike => { binary_upgrade => 1, clean => 1, clean_if_exists => 1, - createdb => 1, + column_inserts => 1, + data_only => 1, defaults => 1, exclude_dump_test_schema => 1, exclude_test_table => 1, @@ -1528,18 +1531,18 @@ no_blobs => 1, no_privs => 1, no_owner => 1, + only_dump_test_schema => 1, + only_dump_test_table => 1, pg_dumpall_dbprivs => 1, + pg_dumpall_globals => 1, + pg_dumpall_globals_clean => 1, + role => 1, schema_only => 1, section_pre_data => 1, - with_oids => 1, }, - unlike => { - column_inserts => 1, - data_only => 1, - only_dump_test_schema => 1, - only_dump_test_table => 1, - role => 1, - section_post_data => 1, - test_schema_plus_blobs => 1, }, }, + section_data => 1, + section_post_data => 1, + test_schema_plus_blobs => 1, + with_oids => 1, }, }, 'COMMENT ON EXTENSION plpgsql' => { all_runs => 1, diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index 872621489f..a67e484a85 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -46,7 +46,7 @@ #endif static void prepare_new_cluster(void); -static void prepare_new_databases(void); +static void prepare_new_globals(void); static void create_new_objects(void); static void copy_xact_xlog_xid(void); static void set_frozenxids(bool minmxid_only); @@ -124,7 +124,7 @@ main(int argc, char **argv) /* -- NEW -- */ start_postmaster(&new_cluster, true); - prepare_new_databases(); + prepare_new_globals(); create_new_objects(); @@ -271,7 +271,7 @@ prepare_new_cluster(void) static void -prepare_new_databases(void) +prepare_new_globals(void) { /* * We set autovacuum_freeze_max_age to its maximum value so autovacuum @@ -283,20 +283,11 @@ prepare_new_databases(void) prep_status("Restoring global objects in the new cluster"); - /* - * We have to create the databases first so we can install support - * functions in all the other databases. 
Ideally we could create the - * support functions in template1 but pg_dumpall creates database using - * the template0 template. - */ exec_prog(UTILITY_LOG_FILE, NULL, true, true, "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"", new_cluster.bindir, cluster_conn_opts(&new_cluster), GLOBALS_DUMP_FILE); check_ok(); - - /* we load this to get a current list of databases */ - get_db_and_rel_infos(&new_cluster); } @@ -312,33 +303,40 @@ create_new_objects(void) char sql_file_name[MAXPGPATH], log_file_name[MAXPGPATH]; DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum]; - PQExpBufferData connstr, - escaped_connstr; - - initPQExpBuffer(&connstr); - appendPQExpBuffer(&connstr, "dbname="); - appendConnStrVal(&connstr, old_db->db_name); - initPQExpBuffer(&escaped_connstr); - appendShellString(&escaped_connstr, connstr.data); - termPQExpBuffer(&connstr); + const char *create_opts; + const char *starting_db; pg_log(PG_STATUS, "%s", old_db->db_name); snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid); snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid); /* - * pg_dump only produces its output at the end, so there is little - * parallelism if using the pipe. + * template1 and postgres databases will already exist in the target + * installation, so tell pg_restore to drop and recreate them; + * otherwise we would fail to propagate their database-level + * properties. */ + if (strcmp(old_db->db_name, "template1") == 0 || + strcmp(old_db->db_name, "postgres") == 0) + create_opts = "--clean --create"; + else + create_opts = "--create"; + + /* When processing template1, we can't connect there to start with */ + if (strcmp(old_db->db_name, "template1") == 0) + starting_db = "postgres"; + else + starting_db = "template1"; + parallel_exec_prog(log_file_name, NULL, - "\"%s/pg_restore\" %s --exit-on-error --verbose --dbname %s \"%s\"", + "\"%s/pg_restore\" %s %s --exit-on-error --verbose " + "--dbname %s \"%s\"", new_cluster.bindir, cluster_conn_opts(&new_cluster), - escaped_connstr.data, + create_opts, + starting_db, sql_file_name); - - termPQExpBuffer(&escaped_connstr); } /* reap all children */ @@ -355,7 +353,7 @@ create_new_objects(void) if (GET_MAJOR_VERSION(old_cluster.major_version) < 903) set_frozenxids(true); - /* regenerate now that we have objects in the databases */ + /* update new_cluster info now that we have objects in the databases */ get_db_and_rel_infos(&new_cluster); } From f5da5683a86e9fc42fdf3eae2da8b096bda76a8a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 19 Jan 2018 12:17:35 -0500 Subject: [PATCH 0872/1087] Add installcheck support to more test suites Several of the test suites under src/test/ were missing an installcheck target. 
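For example, with a server installation already in place and running, a suite can now be exercised against that installation like this (an illustrative invocation; the usual TAP-test prerequisites such as --enable-tap-tests still apply):

    make -C src/test/recovery installcheck

Plain "make check" continues to build and use a temporary installation instead.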
--- src/test/authentication/Makefile | 3 +++ src/test/authentication/README | 4 ++++ src/test/ldap/Makefile | 3 +++ src/test/ldap/README | 4 ++++ src/test/recovery/Makefile | 3 +++ src/test/recovery/README | 10 +++++++--- src/test/ssl/Makefile | 3 +++ src/test/ssl/README | 13 +++++++++---- src/test/subscription/Makefile | 3 +++ src/test/subscription/README | 9 +++++++-- 10 files changed, 46 insertions(+), 9 deletions(-) diff --git a/src/test/authentication/Makefile b/src/test/authentication/Makefile index a435b13057..218452ec76 100644 --- a/src/test/authentication/Makefile +++ b/src/test/authentication/Makefile @@ -16,5 +16,8 @@ include $(top_builddir)/src/Makefile.global check: $(prove_check) +installcheck: + $(prove_installcheck) + clean distclean maintainer-clean: rm -rf tmp_check diff --git a/src/test/authentication/README b/src/test/authentication/README index 5cffc7dc49..dd79746753 100644 --- a/src/test/authentication/README +++ b/src/test/authentication/README @@ -13,4 +13,8 @@ Running the tests make check +or + + make installcheck + NOTE: This requires the --enable-tap-tests argument to configure. diff --git a/src/test/ldap/Makefile b/src/test/ldap/Makefile index 50e3c17e95..fef5742b82 100644 --- a/src/test/ldap/Makefile +++ b/src/test/ldap/Makefile @@ -16,5 +16,8 @@ include $(top_builddir)/src/Makefile.global check: $(prove_check) +installcheck: + $(prove_installcheck) + clean distclean maintainer-clean: rm -rf tmp_check diff --git a/src/test/ldap/README b/src/test/ldap/README index 61579f87c6..61578385c5 100644 --- a/src/test/ldap/README +++ b/src/test/ldap/README @@ -18,3 +18,7 @@ Running the tests ================= make check + +or + + make installcheck diff --git a/src/test/recovery/Makefile b/src/test/recovery/Makefile index aecf37d89a..daf79a0b1f 100644 --- a/src/test/recovery/Makefile +++ b/src/test/recovery/Makefile @@ -18,5 +18,8 @@ include $(top_builddir)/src/Makefile.global check: $(prove_check) +installcheck: + $(prove_installcheck) + clean distclean maintainer-clean: rm -rf tmp_check diff --git a/src/test/recovery/README b/src/test/recovery/README index 3cafb9ddfe..93bdcf4fed 100644 --- a/src/test/recovery/README +++ b/src/test/recovery/README @@ -10,8 +10,12 @@ Running the tests make check -NOTE: This creates a temporary installation, and some tests may -create one or multiple nodes, be they master or standby(s) for the -purpose of the tests. +or + + make installcheck + +NOTE: This creates a temporary installation (in the case of "check"), +and some tests may create one or multiple nodes, be they master or +standby(s) for the purpose of the tests. NOTE: This requires the --enable-tap-tests argument to configure. diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile index 4886e901d0..4e9095529a 100644 --- a/src/test/ssl/Makefile +++ b/src/test/ssl/Makefile @@ -132,3 +132,6 @@ clean distclean maintainer-clean: check: $(prove_check) + +installcheck: + $(prove_installcheck) diff --git a/src/test/ssl/README b/src/test/ssl/README index 50fa14e287..0be06e755c 100644 --- a/src/test/ssl/README +++ b/src/test/ssl/README @@ -12,10 +12,15 @@ Running the tests make check -NOTE: This creates a temporary installation, and sets it up to listen for TCP -connections on localhost. Any user on the same host is allowed to log in to -the test installation while the tests are running. Do not run this suite -on a multi-user system where you don't trust all local users! 
+or
+
+    make installcheck
+
+NOTE: This creates a temporary installation (in the case of "check"),
+and sets it up to listen for TCP connections on localhost.  Any user on
+the same host is allowed to log in to the test installation while the
+tests are running.  Do not run this suite on a multi-user system where
+you don't trust all local users!

Certificates
============
diff --git a/src/test/subscription/Makefile b/src/test/subscription/Makefile
index 25c48e470d..0f3d2098ad 100644
--- a/src/test/subscription/Makefile
+++ b/src/test/subscription/Makefile
@@ -18,5 +18,8 @@ EXTRA_INSTALL = contrib/hstore
 check:
 	$(prove_check)
+installcheck:
+	$(prove_installcheck)
+
 clean distclean maintainer-clean:
 	rm -rf tmp_check
diff --git a/src/test/subscription/README b/src/test/subscription/README
index e9e93755b7..1d50dcceed 100644
--- a/src/test/subscription/README
+++ b/src/test/subscription/README
@@ -10,7 +10,12 @@ Running the tests
 make check
-NOTE: This creates a temporary installation, and some tests may
-create one or multiple nodes, for the purpose of the tests.
+or
+
+    make installcheck
+
+NOTE: This creates a temporary installation (in the case of "check"),
+and some tests may create one or multiple nodes, for the purpose of
+the tests.
 NOTE: This requires the --enable-tap-tests argument to configure.

From 7404e77cc1192855afef28ae557993ba6f35c16e Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 18 Jan 2018 19:12:05 -0500
Subject: [PATCH 0873/1087] Split out documentation of SSL parameters into their own section

Split the "Authentication and Security" section into two separate
sections "Authentication" and "SSL".  The latter part has gotten much
longer over time, and doesn't primarily have to do with authentication.

Also, the row_security parameter was inconsistently categorized, so
clean that up while we're here.
---
 doc/src/sgml/config.sgml                      | 233 +++++++++---------
 src/backend/utils/misc/guc.c                  |  38 +--
 src/backend/utils/misc/postgresql.conf.sample |  41 +--
 src/include/utils/guc_tables.h                |   3 +-
 4 files changed, 165 insertions(+), 150 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 31eaacfc4f..45b2af14eb 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -924,8 +924,9 @@ include_dir 'conf.d'
-
-    Security and Authentication
+
+
+    Authentication
@@ -950,6 +951,123 @@ include_dir 'conf.d'
+
+     password_encryption (enum)
+
+      password_encryption configuration parameter
+
+
+
+
+      When a password is specified in CREATE ROLE or
+      ALTER ROLE, this parameter determines the algorithm
+      to use to encrypt the password.  The default value is md5,
+      which stores the password as an MD5 hash (on is also
+      accepted, as alias for md5).  Setting this parameter to
+      scram-sha-256 will encrypt the password with SCRAM-SHA-256.
+
+      Note that older clients might lack support for the SCRAM authentication
+      mechanism, and hence not work with passwords encrypted with
+      SCRAM-SHA-256.  See for more details.
+
+
+
+
+
+     krb_server_keyfile (string)
+
+      krb_server_keyfile configuration parameter
+
+
+
+
+      Sets the location of the Kerberos server key file.  See
+
+      for details.  This parameter can only be set in the
+      postgresql.conf file or on the server command line.
+
+
+
+
+     krb_caseins_users (boolean)
+
+      krb_caseins_users configuration parameter
+
+
+
+
+      Sets whether GSSAPI user names should be treated
+      case-insensitively.
+      The default is off (case sensitive).  This parameter can only be
+      set in the postgresql.conf file or on the server command line.
+ + + + + + db_user_namespace (boolean) + + db_user_namespace configuration parameter + + + + + This parameter enables per-database user names. It is off by default. + This parameter can only be set in the postgresql.conf + file or on the server command line. + + + + If this is on, you should create users as username@dbname. + When username is passed by a connecting client, + @ and the database name are appended to the user + name and that database-specific user name is looked up by the + server. Note that when you create users with names containing + @ within the SQL environment, you will need to + quote the user name. + + + + With this parameter enabled, you can still create ordinary global + users. Simply append @ when specifying the user + name in the client, e.g. joe@. The @ + will be stripped off before the user name is looked up by the + server. + + + + db_user_namespace causes the client's and + server's user name representation to differ. + Authentication checks are always done with the server's user name + so authentication methods must be configured for the + server's user name, not the client's. Because + md5 uses the user name as salt on both the + client and server, md5 cannot be used with + db_user_namespace. + + + + + This feature is intended as a temporary measure until a + complete solution is found. At that time, this option will + be removed. + + + + + + + + + SSL + + + See for more information about setting up SSL. + + + ssl (boolean) @@ -958,8 +1076,7 @@ include_dir 'conf.d' - Enables SSL connections. Please read - before using this. + Enables SSL connections. This parameter can only be set in the postgresql.conf file or on the server command line. The default is off. @@ -1172,29 +1289,6 @@ include_dir 'conf.d' - - password_encryption (enum) - - password_encryption configuration parameter - - - - - When a password is specified in or - , this parameter determines the algorithm - to use to encrypt the password. The default value is md5, - which stores the password as an MD5 hash (on is also - accepted, as alias for md5). Setting this parameter to - scram-sha-256 will encrypt the password with SCRAM-SHA-256. - - - Note that older clients might lack support for the SCRAM authentication - mechanism, and hence not work with passwords encrypted with - SCRAM-SHA-256. See for more details. - - - - ssl_dh_params_file (string) @@ -1218,91 +1312,6 @@ include_dir 'conf.d' - - - krb_server_keyfile (string) - - krb_server_keyfile configuration parameter - - - - - Sets the location of the Kerberos server key file. See - - for details. This parameter can only be set in the - postgresql.conf file or on the server command line. - - - - - - krb_caseins_users (boolean) - - krb_caseins_users configuration parameter - - - - - Sets whether GSSAPI user names should be treated - case-insensitively. - The default is off (case sensitive). This parameter can only be - set in the postgresql.conf file or on the server command line. - - - - - - db_user_namespace (boolean) - - db_user_namespace configuration parameter - - - - - This parameter enables per-database user names. It is off by default. - This parameter can only be set in the postgresql.conf - file or on the server command line. - - - - If this is on, you should create users as username@dbname. - When username is passed by a connecting client, - @ and the database name are appended to the user - name and that database-specific user name is looked up by the - server. 
Note that when you create users with names containing - @ within the SQL environment, you will need to - quote the user name. - - - - With this parameter enabled, you can still create ordinary global - users. Simply append @ when specifying the user - name in the client, e.g. joe@. The @ - will be stripped off before the user name is looked up by the - server. - - - - db_user_namespace causes the client's and - server's user name representation to differ. - Authentication checks are always done with the server's user name - so authentication methods must be configured for the - server's user name, not the client's. Because - md5 uses the user name as salt on both the - client and server, md5 cannot be used with - db_user_namespace. - - - - - This feature is intended as a temporary measure until a - complete solution is found. At that time, this option will - be removed. - - - - - diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index d03ba234b5..5884fa905e 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -573,8 +573,10 @@ const char *const config_group_names[] = gettext_noop("Connections and Authentication"), /* CONN_AUTH_SETTINGS */ gettext_noop("Connections and Authentication / Connection Settings"), - /* CONN_AUTH_SECURITY */ - gettext_noop("Connections and Authentication / Security and Authentication"), + /* CONN_AUTH_AUTH */ + gettext_noop("Connections and Authentication / Authentication"), + /* CONN_AUTH_SSL */ + gettext_noop("Connections and Authentication / SSL"), /* RESOURCES */ gettext_noop("Resource Usage"), /* RESOURCES_MEM */ @@ -978,7 +980,7 @@ static struct config_bool ConfigureNamesBool[] = NULL, NULL, NULL }, { - {"ssl", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Enables SSL connections."), NULL }, @@ -987,7 +989,7 @@ static struct config_bool ConfigureNamesBool[] = check_ssl, NULL, NULL }, { - {"ssl_prefer_server_ciphers", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl_prefer_server_ciphers", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Give priority to server ciphersuite order."), NULL }, @@ -1378,7 +1380,7 @@ static struct config_bool ConfigureNamesBool[] = NULL, NULL, NULL }, { - {"db_user_namespace", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"db_user_namespace", PGC_SIGHUP, CONN_AUTH_AUTH, gettext_noop("Enables per-database user names."), NULL }, @@ -1425,7 +1427,7 @@ static struct config_bool ConfigureNamesBool[] = check_transaction_deferrable, NULL, NULL }, { - {"row_security", PGC_USERSET, CONN_AUTH_SECURITY, + {"row_security", PGC_USERSET, CLIENT_CONN_STATEMENT, gettext_noop("Enable row security."), gettext_noop("When enabled, row security will be applied to all users.") }, @@ -1548,7 +1550,7 @@ static struct config_bool ConfigureNamesBool[] = }, { - {"krb_caseins_users", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"krb_caseins_users", PGC_SIGHUP, CONN_AUTH_AUTH, gettext_noop("Sets whether Kerberos and GSSAPI user names should be treated as case-insensitive."), NULL }, @@ -2247,7 +2249,7 @@ static struct config_int ConfigureNamesInt[] = }, { - {"authentication_timeout", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"authentication_timeout", PGC_SIGHUP, CONN_AUTH_AUTH, gettext_noop("Sets the maximum allowed time to complete client authentication."), NULL, GUC_UNIT_S @@ -2797,7 +2799,7 @@ static struct config_int ConfigureNamesInt[] = }, { - {"ssl_renegotiation_limit", PGC_USERSET, CONN_AUTH_SECURITY, + {"ssl_renegotiation_limit", PGC_USERSET, CONN_AUTH_SSL, gettext_noop("SSL renegotiation is no longer 
supported; this can only be 0."), NULL, GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE, @@ -3170,7 +3172,7 @@ static struct config_string ConfigureNamesString[] = }, { - {"krb_server_keyfile", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"krb_server_keyfile", PGC_SIGHUP, CONN_AUTH_AUTH, gettext_noop("Sets the location of the Kerberos server key file."), NULL, GUC_SUPERUSER_ONLY @@ -3530,7 +3532,7 @@ static struct config_string ConfigureNamesString[] = }, { - {"ssl_cert_file", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl_cert_file", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Location of the SSL server certificate file."), NULL }, @@ -3540,7 +3542,7 @@ static struct config_string ConfigureNamesString[] = }, { - {"ssl_key_file", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl_key_file", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Location of the SSL server private key file."), NULL }, @@ -3550,7 +3552,7 @@ static struct config_string ConfigureNamesString[] = }, { - {"ssl_ca_file", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl_ca_file", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Location of the SSL certificate authority file."), NULL }, @@ -3560,7 +3562,7 @@ static struct config_string ConfigureNamesString[] = }, { - {"ssl_crl_file", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl_crl_file", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Location of the SSL certificate revocation list file."), NULL }, @@ -3602,7 +3604,7 @@ static struct config_string ConfigureNamesString[] = }, { - {"ssl_ciphers", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl_ciphers", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Sets the list of allowed SSL ciphers."), NULL, GUC_SUPERUSER_ONLY @@ -3617,7 +3619,7 @@ static struct config_string ConfigureNamesString[] = }, { - {"ssl_ecdh_curve", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl_ecdh_curve", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Sets the curve to use for ECDH."), NULL, GUC_SUPERUSER_ONLY @@ -3632,7 +3634,7 @@ static struct config_string ConfigureNamesString[] = }, { - {"ssl_dh_params_file", PGC_SIGHUP, CONN_AUTH_SECURITY, + {"ssl_dh_params_file", PGC_SIGHUP, CONN_AUTH_SSL, gettext_noop("Location of the SSL DH parameters file."), NULL, GUC_SUPERUSER_ONLY @@ -3932,7 +3934,7 @@ static struct config_enum ConfigureNamesEnum[] = }, { - {"password_encryption", PGC_USERSET, CONN_AUTH_SECURITY, + {"password_encryption", PGC_USERSET, CONN_AUTH_AUTH, gettext_noop("Encrypt passwords."), gettext_noop("When a password is specified in CREATE USER or " "ALTER USER without writing either ENCRYPTED or UNENCRYPTED, " diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample index 69f40f04b0..abffde6b2b 100644 --- a/src/backend/utils/misc/postgresql.conf.sample +++ b/src/backend/utils/misc/postgresql.conf.sample @@ -73,35 +73,37 @@ #bonjour_name = '' # defaults to the computer name # (change requires restart) -# - Security and Authentication - +# - TCP Keepalives - +# see "man 7 tcp" for details + +#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; + # 0 selects the system default +#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; + # 0 selects the system default +#tcp_keepalives_count = 0 # TCP_KEEPCNT; + # 0 selects the system default + +# - Authentication - #authentication_timeout = 1min # 1s-600s -#ssl = off -#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers -#ssl_prefer_server_ciphers = on -#ssl_ecdh_curve = 'prime256v1' -#ssl_dh_params_file = '' -#ssl_cert_file = 'server.crt' -#ssl_key_file = 'server.key' -#ssl_ca_file = '' -#ssl_crl_file = '' 
#password_encryption = md5 # md5 or scram-sha-256 #db_user_namespace = off -#row_security = on # GSSAPI using Kerberos #krb_server_keyfile = '' #krb_caseins_users = off -# - TCP Keepalives - -# see "man 7 tcp" for details +# - SSL - -#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; - # 0 selects the system default -#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; - # 0 selects the system default -#tcp_keepalives_count = 0 # TCP_KEEPCNT; - # 0 selects the system default +#ssl = off +#ssl_ca_file = '' +#ssl_cert_file = 'server.crt' +#ssl_crl_file = '' +#ssl_key_file = 'server.key' +#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers +#ssl_prefer_server_ciphers = on +#ssl_ecdh_curve = 'prime256v1' +#ssl_dh_params_file = '' #------------------------------------------------------------------------------ @@ -543,6 +545,7 @@ # - Statement Behavior - #search_path = '"$user", public' # schema names +#row_security = on #default_tablespace = '' # a tablespace name, '' uses the default #temp_tablespaces = '' # a list of tablespace names, '' uses # only default tablespace diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h index 04de6a383a..668d9efd35 100644 --- a/src/include/utils/guc_tables.h +++ b/src/include/utils/guc_tables.h @@ -56,7 +56,8 @@ enum config_group FILE_LOCATIONS, CONN_AUTH, CONN_AUTH_SETTINGS, - CONN_AUTH_SECURITY, + CONN_AUTH_AUTH, + CONN_AUTH_SSL, RESOURCES, RESOURCES_MEM, RESOURCES_DISK, From 573bd08b99e277026e87bb55ae69c489fab321b8 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 19 Jan 2018 12:18:42 -0500 Subject: [PATCH 0874/1087] Move EDH support to common files The EDH support is not really specific to the OpenSSL implementation, so move the support and documentation comments to common files. --- src/backend/libpq/README.SSL | 22 ++++++++++ src/backend/libpq/be-secure-openssl.c | 58 +-------------------------- src/include/libpq/libpq-be.h | 19 +++++++++ 3 files changed, 42 insertions(+), 57 deletions(-) diff --git a/src/backend/libpq/README.SSL b/src/backend/libpq/README.SSL index 53dc9dd005..d84a434a6e 100644 --- a/src/backend/libpq/README.SSL +++ b/src/backend/libpq/README.SSL @@ -58,3 +58,25 @@ SSL Fail with unknown --------------------------------------------------------------------------- + +Ephemeral DH +============ + +Since the server static private key ($DataDir/server.key) will +normally be stored unencrypted so that the database backend can +restart automatically, it is important that we select an algorithm +that continues to provide confidentiality even if the attacker has the +server's private key. Ephemeral DH (EDH) keys provide this and more +(Perfect Forward Secrecy aka PFS). + +N.B., the static private key should still be protected to the largest +extent possible, to minimize the risk of impersonations. + +Another benefit of EDH is that it allows the backend and clients to +use DSA keys. DSA keys can only provide digital signatures, not +encryption, and are often acceptable in jurisdictions where RSA keys +are unacceptable. + +The downside to EDH is that it makes it impossible to use ssldump(1) +if there's a problem establishing an SSL session. In this case you'll +need to temporarily disable EDH (see initialize_dh()). 
diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index fc6e8a0a88..450a2f614c 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -11,28 +11,6 @@ * IDENTIFICATION * src/backend/libpq/be-secure-openssl.c * - * Since the server static private key ($DataDir/server.key) - * will normally be stored unencrypted so that the database - * backend can restart automatically, it is important that - * we select an algorithm that continues to provide confidentiality - * even if the attacker has the server's private key. Ephemeral - * DH (EDH) keys provide this and more (Perfect Forward Secrecy - * aka PFS). - * - * N.B., the static private key should still be protected to - * the largest extent possible, to minimize the risk of - * impersonations. - * - * Another benefit of EDH is that it allows the backend and - * clients to use DSA keys. DSA keys can only provide digital - * signatures, not encryption, and are often acceptable in - * jurisdictions where RSA keys are unacceptable. - * - * The downside to EDH is that it makes it impossible to - * use ssldump(1) if there's a problem establishing an SSL - * session. In this case you'll need to temporarily disable - * EDH (see initialize_dh()). - * *------------------------------------------------------------------------- */ @@ -87,40 +65,6 @@ static SSL_CTX *SSL_context = NULL; static bool SSL_initialized = false; static bool ssl_passwd_cb_called = false; -/* ------------------------------------------------------------ */ -/* Hardcoded values */ -/* ------------------------------------------------------------ */ - -/* - * Hardcoded DH parameters, used in ephemeral DH keying. - * As discussed above, EDH protects the confidentiality of - * sessions even if the static private key is compromised, - * so we are *highly* motivated to ensure that we can use - * EDH even if the DBA has not provided custom DH parameters. - * - * We could refuse SSL connections unless a good DH parameter - * file exists, but some clients may quietly renegotiate an - * unsecured connection without fully informing the user. - * Very uncool. Alternatively, the system could refuse to start - * if a DH parameters is not specified, but this would tend to - * piss off DBAs. - * - * If you want to create your own hardcoded DH parameters - * for fun and profit, review "Assigned Number for SKIP - * Protocols" (http://www.skip-vpn.org/spec/numbers.html) - * for suggestions. - */ - -static const char file_dh2048[] = -"-----BEGIN DH PARAMETERS-----\n\ -MIIBCAKCAQEA9kJXtwh/CBdyorrWqULzBej5UxE5T7bxbrlLOCDaAadWoxTpj0BV\n\ -89AHxstDqZSt90xkhkn4DIO9ZekX1KHTUPj1WV/cdlJPPT2N286Z4VeSWc39uK50\n\ -T8X8dryDxUcwYc58yWb/Ffm7/ZFexwGq01uejaClcjrUGvC/RgBYK+X0iP1YTknb\n\ -zSC0neSRBzZrM2w4DUUdD3yIsxx8Wy2O9vPJI8BD8KVbGI2Ou1WMuF040zT9fBdX\n\ -Q6MdGGzeMyEstSr/POGxKUAYEY18hKcKctaGxAMZyAcpesqVDNmWn6vQClCbAkbT\n\ -CD1mpF1Bn5x8vYlLIhkmuquiXsNV6TILOwIBAg==\n\ ------END DH PARAMETERS-----\n"; - /* ------------------------------------------------------------ */ /* Public interface */ @@ -1080,7 +1024,7 @@ initialize_dh(SSL_CTX *context, bool isServerStart) if (ssl_dh_params_file[0]) dh = load_dh_file(ssl_dh_params_file, isServerStart); if (!dh) - dh = load_dh_buffer(file_dh2048, sizeof file_dh2048); + dh = load_dh_buffer(FILE_DH2048, sizeof(FILE_DH2048)); if (!dh) { ereport(isServerStart ? 
FATAL : LOG,
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 49cb263110..a38849b0d0 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -193,6 +193,25 @@ typedef struct Port
 } Port;
 
 #ifdef USE_SSL
+/*
+ * Hardcoded DH parameters, used in ephemeral DH keying.  (See also
+ * README.SSL for more details on EDH.)
+ *
+ * If you want to create your own hardcoded DH parameters
+ * for fun and profit, review "Assigned Number for SKIP
+ * Protocols" (http://www.skip-vpn.org/spec/numbers.html)
+ * for suggestions.
+ */
+#define FILE_DH2048 \
+"-----BEGIN DH PARAMETERS-----\n\
+MIIBCAKCAQEA9kJXtwh/CBdyorrWqULzBej5UxE5T7bxbrlLOCDaAadWoxTpj0BV\n\
+89AHxstDqZSt90xkhkn4DIO9ZekX1KHTUPj1WV/cdlJPPT2N286Z4VeSWc39uK50\n\
+T8X8dryDxUcwYc58yWb/Ffm7/ZFexwGq01uejaClcjrUGvC/RgBYK+X0iP1YTknb\n\
+zSC0neSRBzZrM2w4DUUdD3yIsxx8Wy2O9vPJI8BD8KVbGI2Ou1WMuF040zT9fBdX\n\
+Q6MdGGzeMyEstSr/POGxKUAYEY18hKcKctaGxAMZyAcpesqVDNmWn6vQClCbAkbT\n\
+CD1mpF1Bn5x8vYlLIhkmuquiXsNV6TILOwIBAg==\n\
+-----END DH PARAMETERS-----\n"
+
 /*
  * These functions are implemented by the glue code specific to each
  * SSL implementation (e.g. be-secure-openssl.c)

From f966101d19fcef6441e43da417467b3ed5ad3074 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Thu, 18 Jan 2018 19:53:22 -0500
Subject: [PATCH 0875/1087] Move SSL API comments to header files

Move the documentation of what the SSL API calls are supposed to do
into the header files, instead of keeping it in the files for the
OpenSSL implementation.  That way, it doesn't have to be duplicated
or become inconsistent when other implementations are added.

---
 src/backend/libpq/be-secure-openssl.c    | 38 ---------
 src/include/libpq/libpq-be.h             | 46 ++++++++++++++
 src/interfaces/libpq/fe-secure-openssl.c | 57 +++-------------
 src/interfaces/libpq/libpq-int.h         | 62 +++++++++++++++++++++++-
 4 files changed, 113 insertions(+), 90 deletions(-)

diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c
index 450a2f614c..f550ec82a9 100644
--- a/src/backend/libpq/be-secure-openssl.c
+++ b/src/backend/libpq/be-secure-openssl.c
@@ -70,13 +70,6 @@ static bool ssl_passwd_cb_called = false;
 /* Public interface */
 /* ------------------------------------------------------------ */
 
-/*
- * Initialize global SSL context.
- *
- * If isServerStart is true, report any errors as FATAL (so we don't return).
- * Otherwise, log errors at LOG level and return -1 to indicate trouble,
- * preserving the old SSL state if any.  Returns 0 if OK.
- */
 int
 be_tls_init(bool isServerStart)
 {
@@ -356,9 +349,6 @@ be_tls_init(bool isServerStart)
 	return -1;
 }
 
-/*
- * Destroy global SSL context, if any.
- */
 void
 be_tls_destroy(void)
 {
@@ -368,9 +358,6 @@ be_tls_destroy(void)
 	ssl_loaded_verify_locations = false;
 }
 
-/*
- * Attempt to negotiate SSL connection.
- */
 int
 be_tls_open_server(Port *port)
 {
@@ -539,9 +526,6 @@ be_tls_open_server(Port *port)
 	return 0;
 }
 
-/*
- * Close SSL connection.
- */
 void
 be_tls_close(Port *port)
 {
@@ -566,9 +550,6 @@ be_tls_close(Port *port)
 	}
 }
 
-/*
- * Read data from a secure connection.
- */
 ssize_t
 be_tls_read(Port *port, void *ptr, size_t len, int *waitfor)
 {
@@ -628,9 +609,6 @@ be_tls_read(Port *port, void *ptr, size_t len, int *waitfor)
 	return n;
 }
 
-/*
- * Write data to a secure connection.
- */ ssize_t be_tls_write(Port *port, void *ptr, size_t len, int *waitfor) { @@ -1106,9 +1084,6 @@ SSLerrmessage(unsigned long ecode) return errbuf; } -/* - * Return information about the SSL connection - */ int be_tls_get_cipher_bits(Port *port) { @@ -1159,12 +1134,6 @@ be_tls_get_peerdn_name(Port *port, char *ptr, size_t len) ptr[0] = '\0'; } -/* - * Routine to get the expected TLS Finished message information from the - * client, useful for authorization when doing channel binding. - * - * Result is a palloc'd copy of the TLS Finished message with its size. - */ char * be_tls_get_peer_finished(Port *port, size_t *len) { @@ -1183,13 +1152,6 @@ be_tls_get_peer_finished(Port *port, size_t *len) return result; } -/* - * Get the server certificate hash for SCRAM channel binding type - * tls-server-end-point. - * - * The result is a palloc'd hash of the server certificate with its - * size, and NULL if there is no certificate available. - */ char * be_tls_get_certificate_hash(Port *port, size_t *len) { diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h index a38849b0d0..584f794b9e 100644 --- a/src/include/libpq/libpq-be.h +++ b/src/include/libpq/libpq-be.h @@ -216,19 +216,65 @@ CD1mpF1Bn5x8vYlLIhkmuquiXsNV6TILOwIBAg==\n\ * These functions are implemented by the glue code specific to each * SSL implementation (e.g. be-secure-openssl.c) */ + +/* + * Initialize global SSL context. + * + * If isServerStart is true, report any errors as FATAL (so we don't return). + * Otherwise, log errors at LOG level and return -1 to indicate trouble, + * preserving the old SSL state if any. Returns 0 if OK. + */ extern int be_tls_init(bool isServerStart); + +/* + * Destroy global SSL context, if any. + */ extern void be_tls_destroy(void); + +/* + * Attempt to negotiate SSL connection. + */ extern int be_tls_open_server(Port *port); + +/* + * Close SSL connection. + */ extern void be_tls_close(Port *port); + +/* + * Read data from a secure connection. + */ extern ssize_t be_tls_read(Port *port, void *ptr, size_t len, int *waitfor); + +/* + * Write data to a secure connection. + */ extern ssize_t be_tls_write(Port *port, void *ptr, size_t len, int *waitfor); +/* + * Return information about the SSL connection. + */ extern int be_tls_get_cipher_bits(Port *port); extern bool be_tls_get_compression(Port *port); extern void be_tls_get_version(Port *port, char *ptr, size_t len); extern void be_tls_get_cipher(Port *port, char *ptr, size_t len); extern void be_tls_get_peerdn_name(Port *port, char *ptr, size_t len); + +/* + * Get the expected TLS Finished message information from the client, useful + * for authorization when doing channel binding. + * + * Result is a palloc'd copy of the TLS Finished message with its size. + */ extern char *be_tls_get_peer_finished(Port *port, size_t *len); + +/* + * Get the server certificate hash for SCRAM channel binding type + * tls-server-end-point. + * + * The result is a palloc'd hash of the server certificate with its + * size, and NULL if there is no certificate available. 
+ */ extern char *be_tls_get_certificate_hash(Port *port, size_t *len); #endif diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index b50bfd144a..eb13120941 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -98,10 +98,6 @@ static long win32_ssl_create_mutex = 0; /* Procedures common to all secure sessions */ /* ------------------------------------------------------------ */ -/* - * Exported function to allow application to tell us it's already - * initialized OpenSSL and/or libcrypto. - */ void pgtls_init_library(bool do_ssl, int do_crypto) { @@ -119,9 +115,6 @@ pgtls_init_library(bool do_ssl, int do_crypto) pq_init_crypto_lib = do_crypto; } -/* - * Begin or continue negotiating a secure session. - */ PostgresPollingStatusType pgtls_open_client(PGconn *conn) { @@ -144,22 +137,6 @@ pgtls_open_client(PGconn *conn) return open_client_SSL(conn); } -/* - * Is there unread data waiting in the SSL read buffer? - */ -bool -pgtls_read_pending(PGconn *conn) -{ - return SSL_pending(conn->ssl); -} - -/* - * Read data from a secure connection. - * - * On failure, this function is responsible for putting a suitable message - * into conn->errorMessage. The caller must still inspect errno, but only - * to determine whether to continue/retry after error. - */ ssize_t pgtls_read(PGconn *conn, void *ptr, size_t len) { @@ -284,13 +261,12 @@ pgtls_read(PGconn *conn, void *ptr, size_t len) return n; } -/* - * Write data to a secure connection. - * - * On failure, this function is responsible for putting a suitable message - * into conn->errorMessage. The caller must still inspect errno, but only - * to determine whether to continue/retry after error. - */ +bool +pgtls_read_pending(PGconn *conn) +{ + return SSL_pending(conn->ssl); +} + ssize_t pgtls_write(PGconn *conn, const void *ptr, size_t len) { @@ -393,12 +369,6 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len) return n; } -/* - * Get the TLS finish message sent during last handshake - * - * This information is useful for callers doing channel binding during - * authentication. - */ char * pgtls_get_finished(PGconn *conn, size_t *len) { @@ -419,13 +389,6 @@ pgtls_get_finished(PGconn *conn, size_t *len) return result; } -/* - * Get the hash of the server certificate, for SCRAM channel binding type - * tls-server-end-point. - * - * NULL is sent back to the caller in the event of an error, with an - * error message for the caller to consume. - */ char * pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len) { @@ -854,11 +817,6 @@ pq_lockingcallback(int mode, int n, const char *file, int line) * If the caller has told us (through PQinitOpenSSL) that he's taking care * of libcrypto, we expect that callbacks are already set, and won't try to * override it. - * - * The conn parameter is only used to be able to pass back an error - * message - no connection-local setup is made here. - * - * Returns 0 if OK, -1 on failure (with a message in conn->errorMessage). */ int pgtls_init(PGconn *conn) @@ -1493,9 +1451,6 @@ open_client_SSL(PGconn *conn) return PGRES_POLLING_OK; } -/* - * Close SSL connection. 
- */ void pgtls_close(PGconn *conn) { diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h index 4e354098b3..b3492b033a 100644 --- a/src/interfaces/libpq/libpq-int.h +++ b/src/interfaces/libpq/libpq-int.h @@ -661,19 +661,79 @@ extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe); #endif +/* === SSL === */ + /* - * The SSL implementation provides these functions (fe-secure-openssl.c) + * The SSL implementation provides these functions. + */ + +/* + * Implementation of PQinitSSL(). */ extern void pgtls_init_library(bool do_ssl, int do_crypto); + +/* + * Initialize SSL library. + * + * The conn parameter is only used to be able to pass back an error + * message - no connection-local setup is made here. + * + * Returns 0 if OK, -1 on failure (with a message in conn->errorMessage). + */ extern int pgtls_init(PGconn *conn); + +/* + * Begin or continue negotiating a secure session. + */ extern PostgresPollingStatusType pgtls_open_client(PGconn *conn); + +/* + * Close SSL connection. + */ extern void pgtls_close(PGconn *conn); + +/* + * Read data from a secure connection. + * + * On failure, this function is responsible for putting a suitable message + * into conn->errorMessage. The caller must still inspect errno, but only + * to determine whether to continue/retry after error. + */ extern ssize_t pgtls_read(PGconn *conn, void *ptr, size_t len); + +/* + * Is there unread data waiting in the SSL read buffer? + */ extern bool pgtls_read_pending(PGconn *conn); + +/* + * Write data to a secure connection. + * + * On failure, this function is responsible for putting a suitable message + * into conn->errorMessage. The caller must still inspect errno, but only + * to determine whether to continue/retry after error. + */ extern ssize_t pgtls_write(PGconn *conn, const void *ptr, size_t len); + +/* + * Get the TLS finish message sent during last handshake. + * + * This information is useful for callers doing channel binding during + * authentication. + */ extern char *pgtls_get_finished(PGconn *conn, size_t *len); + +/* + * Get the hash of the server certificate, for SCRAM channel binding type + * tls-server-end-point. + * + * NULL is sent back to the caller in the event of an error, with an + * error message for the caller to consume. + */ extern char *pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len); +/* === miscellaneous macros === */ + /* * this is so that we can check if a connection is non-blocking internally * without the overhead of a function call From 1c2183403b958422c27782329ba19f9a3e0874ba Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 19 Jan 2018 10:17:56 -0500 Subject: [PATCH 0876/1087] Extract common bits from OpenSSL implementation Some things in be-secure-openssl.c and fe-secure-openssl.c were not actually specific to OpenSSL but could also be used by other implementations. In order to avoid copy-and-pasting, move some of that code to common files. 
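For illustration only, a hypothetical sketch (not part of this patch) of
the intended reuse: a future non-OpenSSL backend's be_tls_init() could
begin with the shared key-file permission check from be-secure.c instead
of open-coding the stat() logic, along these lines:

    int
    be_tls_init(bool isServerStart)
    {
        /* ssl_key_file is the existing GUC; the check is now shared */
        if (!check_ssl_key_file_permissions(ssl_key_file, isServerStart))
            return -1;

        /* ... library-specific context setup would go here ... */
        return 0;
    }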
--- src/backend/libpq/be-secure-openssl.c | 62 +-------------------- src/backend/libpq/be-secure.c | 71 ++++++++++++++++++++++++ src/include/libpq/libpq.h | 1 + src/interfaces/libpq/fe-secure-openssl.c | 8 --- src/interfaces/libpq/fe-secure.c | 14 +++-- 5 files changed, 81 insertions(+), 75 deletions(-) diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index f550ec82a9..02601da6c8 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -75,7 +75,6 @@ be_tls_init(bool isServerStart) { STACK_OF(X509_NAME) *root_cert_list = NULL; SSL_CTX *context; - struct stat buf; /* This stuff need be done only once. */ if (!SSL_initialized) @@ -133,63 +132,8 @@ be_tls_init(bool isServerStart) goto error; } - if (stat(ssl_key_file, &buf) != 0) - { - ereport(isServerStart ? FATAL : LOG, - (errcode_for_file_access(), - errmsg("could not access private key file \"%s\": %m", - ssl_key_file))); + if (!check_ssl_key_file_permissions(ssl_key_file, isServerStart)) goto error; - } - - if (!S_ISREG(buf.st_mode)) - { - ereport(isServerStart ? FATAL : LOG, - (errcode(ERRCODE_CONFIG_FILE_ERROR), - errmsg("private key file \"%s\" is not a regular file", - ssl_key_file))); - goto error; - } - - /* - * Refuse to load key files owned by users other than us or root. - * - * XXX surely we can check this on Windows somehow, too. - */ -#if !defined(WIN32) && !defined(__CYGWIN__) - if (buf.st_uid != geteuid() && buf.st_uid != 0) - { - ereport(isServerStart ? FATAL : LOG, - (errcode(ERRCODE_CONFIG_FILE_ERROR), - errmsg("private key file \"%s\" must be owned by the database user or root", - ssl_key_file))); - goto error; - } -#endif - - /* - * Require no public access to key file. If the file is owned by us, - * require mode 0600 or less. If owned by root, require 0640 or less to - * allow read access through our gid, or a supplementary gid that allows - * to read system-wide certificates. - * - * XXX temporarily suppress check when on Windows, because there may not - * be proper support for Unix-y file permissions. Need to think of a - * reasonable check to apply on Windows. (See also the data directory - * permission check in postmaster.c) - */ -#if !defined(WIN32) && !defined(__CYGWIN__) - if ((buf.st_uid == geteuid() && buf.st_mode & (S_IRWXG | S_IRWXO)) || - (buf.st_uid == 0 && buf.st_mode & (S_IWGRP | S_IXGRP | S_IRWXO))) - { - ereport(isServerStart ? FATAL : LOG, - (errcode(ERRCODE_CONFIG_FILE_ERROR), - errmsg("private key file \"%s\" has group or world access", - ssl_key_file), - errdetail("File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root."))); - goto error; - } -#endif /* * OK, try to load the private key file. @@ -516,10 +460,6 @@ be_tls_open_server(Port *port) port->peer_cert_valid = true; } - ereport(DEBUG2, - (errmsg("SSL connection from \"%s\"", - port->peer_cn ? port->peer_cn : "(anonymous)"))); - /* set up debugging/info callback */ SSL_CTX_set_info_callback(SSL_context, info_cb); diff --git a/src/backend/libpq/be-secure.c b/src/backend/libpq/be-secure.c index eb42ea1a1e..76c0a9e39b 100644 --- a/src/backend/libpq/be-secure.c +++ b/src/backend/libpq/be-secure.c @@ -114,6 +114,10 @@ secure_open_server(Port *port) #ifdef USE_SSL r = be_tls_open_server(port); + + ereport(DEBUG2, + (errmsg("SSL connection from \"%s\"", + port->peer_cn ? 
port->peer_cn : "(anonymous)"))); #endif return r; @@ -314,3 +318,70 @@ secure_raw_write(Port *port, const void *ptr, size_t len) return n; } + +bool +check_ssl_key_file_permissions(const char *ssl_key_file, bool isServerStart) +{ + int loglevel = isServerStart ? FATAL : LOG; + struct stat buf; + + if (stat(ssl_key_file, &buf) != 0) + { + ereport(loglevel, + (errcode_for_file_access(), + errmsg("could not access private key file \"%s\": %m", + ssl_key_file))); + return false; + } + + if (!S_ISREG(buf.st_mode)) + { + ereport(loglevel, + (errcode(ERRCODE_CONFIG_FILE_ERROR), + errmsg("private key file \"%s\" is not a regular file", + ssl_key_file))); + return false; + } + + /* + * Refuse to load key files owned by users other than us or root. + * + * XXX surely we can check this on Windows somehow, too. + */ +#if !defined(WIN32) && !defined(__CYGWIN__) + if (buf.st_uid != geteuid() && buf.st_uid != 0) + { + ereport(loglevel, + (errcode(ERRCODE_CONFIG_FILE_ERROR), + errmsg("private key file \"%s\" must be owned by the database user or root", + ssl_key_file))); + return false; + } +#endif + + /* + * Require no public access to key file. If the file is owned by us, + * require mode 0600 or less. If owned by root, require 0640 or less to + * allow read access through our gid, or a supplementary gid that allows + * to read system-wide certificates. + * + * XXX temporarily suppress check when on Windows, because there may not + * be proper support for Unix-y file permissions. Need to think of a + * reasonable check to apply on Windows. (See also the data directory + * permission check in postmaster.c) + */ +#if !defined(WIN32) && !defined(__CYGWIN__) + if ((buf.st_uid == geteuid() && buf.st_mode & (S_IRWXG | S_IRWXO)) || + (buf.st_uid == 0 && buf.st_mode & (S_IWGRP | S_IXGRP | S_IRWXO))) + { + ereport(loglevel, + (errcode(ERRCODE_CONFIG_FILE_ERROR), + errmsg("private key file \"%s\" has group or world access", + ssl_key_file), + errdetail("File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root."))); + return false; + } +#endif + + return true; +} diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h index 2e7725db21..255222acd7 100644 --- a/src/include/libpq/libpq.h +++ b/src/include/libpq/libpq.h @@ -90,6 +90,7 @@ extern ssize_t secure_read(Port *port, void *ptr, size_t len); extern ssize_t secure_write(Port *port, void *ptr, size_t len); extern ssize_t secure_raw_read(Port *port, void *ptr, size_t len); extern ssize_t secure_raw_write(Port *port, const void *ptr, size_t len); +extern bool check_ssl_key_file_permissions(const char *ssl_key_file, bool isServerStart); extern bool ssl_loaded_verify_locations; diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index eb13120941..9ab317320a 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -1547,14 +1547,6 @@ SSLerrfree(char *buf) /* SSL information functions */ /* ------------------------------------------------------------ */ -int -PQsslInUse(PGconn *conn) -{ - if (!conn) - return 0; - return conn->ssl_in_use; -} - /* * Return pointer to OpenSSL object. 
*/ diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c index ec6c65a4b4..cfb77f6d85 100644 --- a/src/interfaces/libpq/fe-secure.c +++ b/src/interfaces/libpq/fe-secure.c @@ -129,6 +129,14 @@ struct sigpipe_info /* ------------------------------------------------------------ */ +int +PQsslInUse(PGconn *conn) +{ + if (!conn) + return 0; + return conn->ssl_in_use; +} + /* * Exported function to allow application to tell us it's already * initialized OpenSSL. @@ -384,12 +392,6 @@ pqsecure_raw_write(PGconn *conn, const void *ptr, size_t len) /* Dummy versions of SSL info functions, when built without SSL support */ #ifndef USE_SSL -int -PQsslInUse(PGconn *conn) -{ - return 0; -} - void * PQgetssl(PGconn *conn) { From a541dbb6fa389bb0ffdd24a403bc6d276d77a074 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Tue, 23 Jan 2018 10:18:21 -0500 Subject: [PATCH 0877/1087] doc: simplify intermediate certificate mention in libpq docs Backpatch-through: 9.3 --- doc/src/sgml/libpq.sgml | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index 02884bae1f..b66c6da4f7 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -7587,11 +7587,10 @@ ldap://ldap.acme.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*) To allow the client to verify the identity of the server, place a root certificate on the client and a leaf certificate signed by the root certificate on the server. To allow the server to verify the identity - of the client, place a root certificate on the server and a leaf and - optional intermediate certificates signed by the root certificate on - the client. Intermediate certificates (usually stored with the leaf - certificate) can also be used to link the leaf certificate to the - root certificate. + of the client, place a root certificate on the server and a leaf + certificate signed by the root certificate on the client. One or more + intermediate certificates (usually stored with the leaf certificate) + can also be used to link the leaf certificate to the root certificate. From 160a4f62ee7b8a96984f8bef19c90488aa6c8045 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 23 Jan 2018 10:55:08 -0500 Subject: [PATCH 0878/1087] In pg_dump, force reconnection after issuing ALTER DATABASE SET command(s). The folly of not doing this was exposed by the buildfarm: in some cases, the GUC settings applied through ALTER DATABASE SET may be essential to interpreting the reloaded data correctly. Another argument why we can't really get away with the scheme proposed in commit b3f840120 is that it cannot work for parallel restore: even if the parent process manages to hang onto the previous GUC state, worker processes would see the state post-ALTER-DATABASE. (Perhaps we could have dodged that bullet by delaying DATABASE PROPERTIES restoration to the end of the run, but that does nothing for the data semantics problem.) This leaves us with no solution for the default_transaction_read_only issue that commit 4bd371f6f intended to work around, other than "you gotta remove such settings before dumping/upgrading". However, in view of the fact that parallel restore broke that hack years ago and no one has noticed, it's fair to question how many people care. I'm unexcited about adding a large dollop of new complexity to handle that corner case. This would be a one-liner fix, except it turns out that ReconnectToServer tries to optimize away "redundant" reconnections. 
While that may have been valuable when coded, a quick survey of current callers shows that there are no cases where that's actually useful, so just remove that check. While at it, remove the function's useless return value. Discussion: https://postgr.es/m/12453.1516655001@sss.pgh.pa.us --- src/bin/pg_dump/pg_backup_archiver.c | 9 +++++++-- src/bin/pg_dump/pg_backup_archiver.h | 2 +- src/bin/pg_dump/pg_backup_db.c | 15 ++------------- src/bin/pg_dump/pg_dump.c | 12 ++++++------ 4 files changed, 16 insertions(+), 22 deletions(-) diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index ab009e6fe3..f55aa36c49 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -833,8 +833,13 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel) } } - /* If we created a DB, connect to it... */ - if (strcmp(te->desc, "DATABASE") == 0) + /* + * If we created a DB, connect to it. Also, if we changed DB + * properties, reconnect to ensure that relevant GUC settings are + * applied to our session. + */ + if (strcmp(te->desc, "DATABASE") == 0 || + strcmp(te->desc, "DATABASE PROPERTIES") == 0) { PQExpBufferData connstr; diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h index fd0d01b506..becfee6e81 100644 --- a/src/bin/pg_dump/pg_backup_archiver.h +++ b/src/bin/pg_dump/pg_backup_archiver.h @@ -448,7 +448,7 @@ extern void InitArchiveFmt_Tar(ArchiveHandle *AH); extern bool isValidTarHeader(char *header); -extern int ReconnectToServer(ArchiveHandle *AH, const char *dbname, const char *newUser); +extern void ReconnectToServer(ArchiveHandle *AH, const char *dbname, const char *newUser); extern void DropBlobIfExists(ArchiveHandle *AH, Oid oid); void ahwrite(const void *ptr, size_t size, size_t nmemb, ArchiveHandle *AH); diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c index 216853d627..3b7dd24151 100644 --- a/src/bin/pg_dump/pg_backup_db.c +++ b/src/bin/pg_dump/pg_backup_db.c @@ -76,13 +76,9 @@ _check_database_version(ArchiveHandle *AH) /* * Reconnect to the server. If dbname is not NULL, use that database, * else the one associated with the archive handle. If username is - * not NULL, use that user name, else the one from the handle. If - * both the database and the user match the existing connection already, - * nothing will be done. - * - * Returns 1 in any case. + * not NULL, use that user name, else the one from the handle. */ -int +void ReconnectToServer(ArchiveHandle *AH, const char *dbname, const char *username) { PGconn *newConn; @@ -99,11 +95,6 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname, const char *username) else newusername = username; - /* Let's see if the request is already satisfied */ - if (strcmp(newdbname, PQdb(AH->connection)) == 0 && - strcmp(newusername, PQuser(AH->connection)) == 0) - return 1; - newConn = _connectDB(AH, newdbname, newusername); /* Update ArchiveHandle's connCancel before closing old connection */ @@ -111,8 +102,6 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname, const char *username) PQfinish(AH->connection); AH->connection = newConn; - - return 1; } /* diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 11e1ba04cc..d65ea54a69 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -2819,10 +2819,11 @@ dumpDatabase(Archive *fout) /* * Now construct a DATABASE PROPERTIES archive entry to restore any - * non-default database-level properties. 
We want to do this after - * reconnecting so that these properties won't apply during the restore - * session. In this way, restoring works even if there is, say, an ALTER - * DATABASE SET that turns on default_transaction_read_only. + * non-default database-level properties. (The reason this must be + * separate is that we cannot put any additional commands into the TOC + * entry that has CREATE DATABASE. pg_restore would execute such a group + * in an implicit transaction block, and the backend won't allow CREATE + * DATABASE in that context.) */ resetPQExpBuffer(creaQry); resetPQExpBuffer(delQry); @@ -2854,8 +2855,7 @@ dumpDatabase(Archive *fout) /* * We stick this binary-upgrade query into the DATABASE PROPERTIES archive - * entry, too. It can't go into the DATABASE entry because that would - * result in an implicit transaction block around the CREATE DATABASE. + * entry, too, for lack of a better place. */ if (dopt->binary_upgrade) { From 2badb5afb89cd569500ef7c3b23c7a9d11718f2f Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 23 Jan 2018 11:03:03 -0500 Subject: [PATCH 0879/1087] Report an ERROR if a parallel worker fails to start properly. Commit 28724fd90d2f85a0573a8107b48abad062a86d83 fixed things so that if a background worker fails to start due to fork() failure or because it is terminated before startup succeeds, BGWH_STOPPED will be reported. However, that only helps if the code that uses the background worker machinery notices the change in status, and the code in parallel.c did not. To fix that, do two things. First, make sure that when a worker exits, it triggers the leader to read from error queues. That way, if a worker which has attached to an error queue exits uncleanly, the leader is sure to throw some error, either the contents of the ErrorResponse sent by the worker, or "lost connection to parallel worker" if it exited without sending one. To cover the case where the worker never starts up in the first place or exits before attaching to the error queue, the ParallelContext now keeps track of which workers have sent at least one message via the error queue. A worker which sends no messages by the time the parallel operation finishes will be checked to see whether it exited before attaching to the error queue; if so, a new error message, "parallel worker failed to initialize", will be reported. If not, we'll continue to wait until it either starts up and exits cleanly, starts up and exits uncleanly, or fails to start, and then take the appropriate action. Patch by me, reviewed by Amit Kapila. Discussion: http://postgr.es/m/CA+TgmoYnBgXgdTu6wk5YPdWhmgabYc9nY_pFLq=tB=FSLYkD8Q@mail.gmail.com --- src/backend/access/transam/parallel.c | 118 ++++++++++++++++++++++++-- src/include/access/parallel.h | 1 + 2 files changed, 110 insertions(+), 9 deletions(-) diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index 0a0157a878..54d9ea7be0 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -113,6 +113,9 @@ static FixedParallelState *MyFixedParallelState; /* List of active parallel contexts. */ static dlist_head pcxt_list = DLIST_STATIC_INIT(pcxt_list); +/* Backend-local copy of data from FixedParallelState. */ +static pid_t ParallelMasterPid; + /* * List of internal parallel worker entry points. We need this for * reasons explained in LookupParallelWorkerFunction(), below. 
@@ -133,6 +136,7 @@ static const struct static void HandleParallelMessage(ParallelContext *pcxt, int i, StringInfo msg); static void WaitForParallelWorkersToExit(ParallelContext *pcxt); static parallel_worker_main_type LookupParallelWorkerFunction(const char *libraryname, const char *funcname); +static void ParallelWorkerShutdown(int code, Datum arg); /* @@ -433,6 +437,11 @@ ReinitializeParallelDSM(ParallelContext *pcxt) WaitForParallelWorkersToFinish(pcxt); WaitForParallelWorkersToExit(pcxt); pcxt->nworkers_launched = 0; + if (pcxt->any_message_received) + { + pfree(pcxt->any_message_received); + pcxt->any_message_received = NULL; + } } /* Reset a few bits of fixed parallel state to a clean state. */ @@ -531,6 +540,14 @@ LaunchParallelWorkers(ParallelContext *pcxt) } } + /* + * Now that nworkers_launched has taken its final value, we can initialize + * any_message_received. + */ + if (pcxt->nworkers_launched > 0) + pcxt->any_message_received = + palloc0(sizeof(bool) * pcxt->nworkers_launched); + /* Restore previous memory context. */ MemoryContextSwitchTo(oldcontext); } @@ -552,6 +569,7 @@ WaitForParallelWorkersToFinish(ParallelContext *pcxt) for (;;) { bool anyone_alive = false; + int nfinished = 0; int i; /* @@ -563,7 +581,15 @@ WaitForParallelWorkersToFinish(ParallelContext *pcxt) for (i = 0; i < pcxt->nworkers_launched; ++i) { - if (pcxt->worker[i].error_mqh != NULL) + /* + * If error_mqh is NULL, then the worker has already exited + * cleanly. If we have received a message through error_mqh from + * the worker, we know it started up cleanly, and therefore we're + * certain to be notified when it exits. + */ + if (pcxt->worker[i].error_mqh == NULL) + ++nfinished; + else if (pcxt->any_message_received[i]) { anyone_alive = true; break; @@ -571,7 +597,62 @@ WaitForParallelWorkersToFinish(ParallelContext *pcxt) } if (!anyone_alive) - break; + { + /* If all workers are known to have finished, we're done. */ + if (nfinished >= pcxt->nworkers_launched) + { + Assert(nfinished == pcxt->nworkers_launched); + break; + } + + /* + * We didn't detect any living workers, but not all workers are + * known to have exited cleanly. Either not all workers have + * launched yet, or maybe some of them failed to start or + * terminated abnormally. + */ + for (i = 0; i < pcxt->nworkers_launched; ++i) + { + pid_t pid; + shm_mq *mq; + + /* + * If the worker is BGWH_NOT_YET_STARTED or BGWH_STARTED, we + * should just keep waiting. If it is BGWH_STOPPED, then + * further investigation is needed. + */ + if (pcxt->worker[i].error_mqh == NULL || + pcxt->worker[i].bgwhandle == NULL || + GetBackgroundWorkerPid(pcxt->worker[i].bgwhandle, + &pid) != BGWH_STOPPED) + continue; + + /* + * Check whether the worker ended up stopped without ever + * attaching to the error queue. If so, the postmaster was + * unable to fork the worker or it exited without initializing + * properly. We must throw an error, since the caller may + * have been expecting the worker to do some work before + * exiting. + */ + mq = shm_mq_get_queue(pcxt->worker[i].error_mqh); + if (shm_mq_get_sender(mq) == NULL) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("parallel worker failed to initialize"), + errhint("More details may be available in the server log."))); + + /* + * The worker is stopped, but is attached to the error queue. 
+			 * Unless there's a bug somewhere, this will only happen when
+			 * the worker writes messages and terminates after the
+			 * CHECK_FOR_INTERRUPTS() near the top of this function and
+			 * before the call to GetBackgroundWorkerPid().  In that case,
+			 * our latch should have been set as well and the right things
+			 * will happen on the next pass through the loop.
+			 */
+		}
+	}
 
 		WaitLatch(MyLatch, WL_LATCH_SET, -1,
 				  WAIT_EVENT_PARALLEL_FINISH);
@@ -828,6 +909,9 @@ HandleParallelMessage(ParallelContext *pcxt, int i, StringInfo msg)
 {
 	char		msgtype;
 
+	if (pcxt->any_message_received != NULL)
+		pcxt->any_message_received[i] = true;
+
 	msgtype = pq_getmsgbyte(msg);
 
 	switch (msgtype)
@@ -1024,11 +1108,16 @@ ParallelWorkerMain(Datum main_arg)
 	fps = shm_toc_lookup(toc, PARALLEL_KEY_FIXED, false);
 	MyFixedParallelState = fps;
 
+	/* Arrange to signal the leader if we exit. */
+	ParallelMasterPid = fps->parallel_master_pid;
+	ParallelMasterBackendId = fps->parallel_master_backend_id;
+	on_shmem_exit(ParallelWorkerShutdown, (Datum) 0);
+
 	/*
-	 * Now that we have a worker number, we can find and attach to the error
-	 * queue provided for us.  That's good, because until we do that, any
-	 * errors that happen here will not be reported back to the process that
-	 * requested that this worker be launched.
+	 * Now we can find and attach to the error queue provided for us.  That's
+	 * good, because until we do that, any errors that happen here will not be
+	 * reported back to the process that requested that this worker be
+	 * launched.
 	 */
 	error_queue_space = shm_toc_lookup(toc, PARALLEL_KEY_ERROR_QUEUE, false);
 	mq = (shm_mq *) (error_queue_space +
@@ -1146,9 +1235,6 @@ ParallelWorkerMain(Datum main_arg)
 	SetTempNamespaceState(fps->temp_namespace_id,
 						  fps->temp_toast_namespace_id);
 
-	/* Set ParallelMasterBackendId so we know how to address temp relations. */
-	ParallelMasterBackendId = fps->parallel_master_backend_id;
-
 	/* Restore reindex state. */
 	reindexspace = shm_toc_lookup(toc, PARALLEL_KEY_REINDEX_STATE, false);
 	RestoreReindexState(reindexspace);
@@ -1197,6 +1283,20 @@ ParallelWorkerReportLastRecEnd(XLogRecPtr last_xlog_end)
 	SpinLockRelease(&fps->mutex);
 }
 
+/*
+ * Make sure the leader tries to read from our error queue one more time.
+ * This guards against the case where we exit uncleanly without sending an
+ * ErrorResponse to the leader, for example because some code calls proc_exit
+ * directly.
+ */
+static void
+ParallelWorkerShutdown(int code, Datum arg)
+{
+	SendProcSignal(ParallelMasterPid,
+				   PROCSIG_PARALLEL_MESSAGE,
+				   ParallelMasterBackendId);
+}
+
 /*
  * Look up (and possibly load) a parallel worker entry point function.
  *
diff --git a/src/include/access/parallel.h b/src/include/access/parallel.h
index 8c6a747ced..32c2e32bea 100644
--- a/src/include/access/parallel.h
+++ b/src/include/access/parallel.h
@@ -43,6 +43,7 @@ typedef struct ParallelContext
 	void	   *private_memory;
 	shm_toc    *toc;
 	ParallelWorkerInfo *worker;
+	bool	   *any_message_received;
 } ParallelContext;
 
 typedef struct ParallelWorkerContext

From 28e04155f17cabda7a18aee31d130aa10e25ee86 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Tue, 23 Jan 2018 11:20:18 -0500
Subject: [PATCH 0880/1087] Update obsolete sentence in README.parallel.

Since 9.6, heavyweight locking is not an abstract and unhandled
concern of the parallel machinery, but rather something to which we
have a specific approach.
--- src/backend/access/transam/README.parallel | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/access/transam/README.parallel b/src/backend/access/transam/README.parallel index 32994719e3..f09a580634 100644 --- a/src/backend/access/transam/README.parallel +++ b/src/backend/access/transam/README.parallel @@ -125,9 +125,9 @@ worker. This includes: - State related to pending REINDEX operations, which prevents access to an index that is currently being rebuilt. -To prevent undetected or unprincipled deadlocks when running in parallel mode, -this could should eventually handle heavyweight locks in some way. This is -not implemented yet. +To prevent unprincipled deadlocks when running in parallel mode, this code +also arranges for the leader and all workers to participate in group +locking. See src/backend/storage/lmgr/README for more details. Transaction Integration ======================= From f9bbd46adbf350ba9e99a808f2c759e4aab9ea70 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 23 Jan 2018 12:31:01 -0500 Subject: [PATCH 0881/1087] pgbench: Remove accidental garbage in test file Author: Fabien COELHO --- src/bin/pgbench/t/001_pgbench_with_server.pl | 6 ------ 1 file changed, 6 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index a8b2962bd0..99286f6bc0 100644 --- a/src/bin/pgbench/t/001_pgbench_with_server.pl +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl @@ -340,12 +340,6 @@ sub pgbench SELECT :v0, :v1, :v2, :v3; } }); -=head - -} }); - -=cut - # backslash commands pgbench( '-t 1', 0, From c9707d9413b171a6f017db1ea7832d797d3abc0d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 23 Jan 2018 12:41:35 -0500 Subject: [PATCH 0882/1087] Documentation fix: pg_ctl no longer makes connection attempts. Overlooked in commit f13ea95f9. Noted by Nick Barnes. Discussion: https://postgr.es/m/20180123093723.7407.3386@wrigleys.postgresql.org --- doc/src/sgml/ref/pg_ctl-ref.sgml | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/doc/src/sgml/ref/pg_ctl-ref.sgml b/doc/src/sgml/ref/pg_ctl-ref.sgml index 7eb5dd320c..0816bc1686 100644 --- a/doc/src/sgml/ref/pg_ctl-ref.sgml +++ b/doc/src/sgml/ref/pg_ctl-ref.sgml @@ -405,10 +405,12 @@ PostgreSQL documentation - When waiting for startup, pg_ctl repeatedly - attempts to connect to the server. - When waiting for shutdown, pg_ctl waits for - the server to remove its PID file. + When waiting, pg_ctl repeatedly checks the + server's PID file, sleeping for a short amount + of time between checks. Startup is considered complete when + the PID file indicates that the server is ready to + accept connections. Shutdown is considered complete when the server + removes the PID file. pg_ctl returns an exit code based on the success of the startup or shutdown. From 95be5ce1bce3fdcf3ca0638baa12508e5b67ec17 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Tue, 23 Jan 2018 15:22:13 -0300 Subject: [PATCH 0883/1087] Remove unnecessary include autovacuum.c no longer needs dsa.h, since commit 31ae1638ce3. 
Author: Masahiko Sawada Discussion: https://postgr.es/m/CAD21AoCWvYyXrvdANSHWWWEWJH5TeAWAkJ_2gqrHhukG+OBo1g@mail.gmail.com --- src/backend/postmaster/autovacuum.c | 1 - 1 file changed, 1 deletion(-) diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index 75c2362f46..702f8d8188 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -94,7 +94,6 @@ #include "storage/sinvaladt.h" #include "storage/smgr.h" #include "tcop/tcopprot.h" -#include "utils/dsa.h" #include "utils/fmgroids.h" #include "utils/fmgrprotos.h" #include "utils/lsyscache.h" From bb94ce4d26c3b011c01bf44ab200334fea52b600 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 23 Jan 2018 16:50:34 -0500 Subject: [PATCH 0884/1087] Teach reparameterize_path() to handle AppendPaths. If we're inside a lateral subquery, there may be no unparameterized paths for a particular child relation of an appendrel, in which case we *must* be able to create similarly-parameterized paths for each other child relation, else the planner will fail with "could not devise a query plan for the given query". This means that there are situations where we'd better be able to reparameterize at least one path for each child. This calls into question the assumption in reparameterize_path() that it can just punt if it feels like it. However, the only case that is known broken right now is where the child is itself an appendrel so that all its paths are AppendPaths. (I think possibly I disregarded that in the original coding on the theory that nested appendrels would get folded together --- but that only happens *after* reparameterize_path(), so it's not excused from handling a child AppendPath.) Given that this code's been like this since 9.3 when LATERAL was introduced, it seems likely we'd have heard of other cases by now if there were a larger problem. Per report from Elvis Pranskevichus. Back-patch to 9.3. 
Discussion: https://postgr.es/m/5981018.zdth1YWmNy@hammer.magicstack.net --- src/backend/optimizer/util/pathnode.c | 34 +++++++++++++++++++++++++++ src/test/regress/expected/join.out | 19 +++++++++++++++ src/test/regress/sql/join.sql | 10 ++++++++ 3 files changed, 63 insertions(+) diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c index 91295ebca4..fe3b4582d4 100644 --- a/src/backend/optimizer/util/pathnode.c +++ b/src/backend/optimizer/util/pathnode.c @@ -3540,6 +3540,40 @@ reparameterize_path(PlannerInfo *root, Path *path, spath->path.pathkeys, required_outer); } + case T_Append: + { + AppendPath *apath = (AppendPath *) path; + List *childpaths = NIL; + List *partialpaths = NIL; + int i; + ListCell *lc; + + /* Reparameterize the children */ + i = 0; + foreach(lc, apath->subpaths) + { + Path *spath = (Path *) lfirst(lc); + + spath = reparameterize_path(root, spath, + required_outer, + loop_count); + if (spath == NULL) + return NULL; + /* We have to re-split the regular and partial paths */ + if (i < apath->first_partial_path) + childpaths = lappend(childpaths, spath); + else + partialpaths = lappend(partialpaths, spath); + i++; + } + return (Path *) + create_append_path(rel, childpaths, partialpaths, + required_outer, + apath->path.parallel_workers, + apath->path.parallel_aware, + apath->partitioned_rels, + -1); + } default: break; } diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index 02e7d56e55..c50a206efb 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -5188,6 +5188,25 @@ select * from Output: 3 (11 rows) +-- check handling of nested appendrels inside LATERAL +select * from + ((select 2 as v) union all (select 3 as v)) as q1 + cross join lateral + ((select * from + ((select 4 as v) union all (select 5 as v)) as q3) + union all + (select q1.v) + ) as q2; + v | v +---+--- + 2 | 4 + 2 | 5 + 2 | 2 + 3 | 4 + 3 | 5 + 3 | 3 +(6 rows) + -- check we don't try to do a unique-ified semijoin with LATERAL explain (verbose, costs off) select * from diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index dd62c38c15..fc84237ce9 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -1682,6 +1682,16 @@ select * from select * from (select 3 as z offset 0) z where z.z = x.x ) zz on zz.z = y.y; +-- check handling of nested appendrels inside LATERAL +select * from + ((select 2 as v) union all (select 3 as v)) as q1 + cross join lateral + ((select * from + ((select 4 as v) union all (select 5 as v)) as q3) + union all + (select q1.v) + ) as q2; + -- check we don't try to do a unique-ified semijoin with LATERAL explain (verbose, costs off) select * from From e0a0deca389849383ff6337a488300eb22f31cef Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Tue, 23 Jan 2018 18:22:56 -0500 Subject: [PATCH 0885/1087] doc: mention psql -l uses the 'postgres' database by default Reported-by: Mark Wood Bug: 14912 Discussion: https://postgr.es/m/20171116171735.1474.30450@wrigleys.postgresql.org Author: David G. Johnston Backpatch-through: 10 --- doc/src/sgml/ref/psql-ref.sgml | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index fce7e3a585..7ea7edc3d1 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -283,7 +283,9 @@ EOF List all available databases, then exit. Other non-connection - options are ignored. 
This is similar to the meta-command + options are ignored. If an explicit database name is not + found, the postgres database, not the user's, + will be targeted for connection. This is similar to the meta-command \list. From 434e6e1484418c55561914600de9e180fc408378 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 23 Jan 2018 23:07:13 -0500 Subject: [PATCH 0886/1087] Improve implementation of pg_attribute_always_inline. Avoid compiler warnings on MSVC (which doesn't want to see both __forceinline and inline) and ancient GCC (which doesn't have __attribute__((always_inline))). Don't force inline-ing when building at -O0, as the programmer is probably hoping for exact source-to-object-line correspondence in that case. (For the moment this only works for GCC; maybe we can extend it later.) Make pg_attribute_always_inline be syntactically a drop-in replacement for inline, rather than an additional wart. And improve the comments. Thomas Munro and Michail Nikolaev, small tweaks by me Discussion: https://postgr.es/m/32278.1514863068@sss.pgh.pa.us Discussion: https://postgr.es/m/CANtu0oiYp74brgntKOxgg1FK5+t8uQ05guSiFU6FYz_5KUhr6Q@mail.gmail.com --- src/backend/executor/nodeHashjoin.c | 3 +-- src/include/c.h | 17 ++++++++++++----- 2 files changed, 13 insertions(+), 7 deletions(-) diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c index 8f2b634b12..03d78042fa 100644 --- a/src/backend/executor/nodeHashjoin.c +++ b/src/backend/executor/nodeHashjoin.c @@ -161,8 +161,7 @@ static void ExecParallelHashJoinPartitionOuter(HashJoinState *node); * the other one is "outer". * ---------------------------------------------------------------- */ -pg_attribute_always_inline -static inline TupleTableSlot * +static pg_attribute_always_inline TupleTableSlot * ExecHashJoinImpl(PlanState *pstate, bool parallel) { HashJoinState *node = castNode(HashJoinState, pstate); diff --git a/src/include/c.h b/src/include/c.h index 34a7fa67b4..9b7fe87f32 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -146,14 +146,21 @@ #define pg_attribute_noreturn() #endif -/* GCC, Sunpro and XLC support always_inline via __attribute__ */ -#if defined(__GNUC__) -#define pg_attribute_always_inline __attribute__((always_inline)) -/* msvc via a special keyword */ +/* + * Use "pg_attribute_always_inline" in place of "inline" for functions that + * we wish to force inlining of, even when the compiler's heuristics would + * choose not to. But, if possible, don't force inlining in unoptimized + * debug builds. + */ +#if (defined(__GNUC__) && __GNUC__ > 3 && defined(__OPTIMIZE__)) || defined(__SUNPRO_C) || defined(__IBMC__) +/* GCC > 3, Sunpro and XLC support always_inline via __attribute__ */ +#define pg_attribute_always_inline __attribute__((always_inline)) inline #elif defined(_MSC_VER) +/* MSVC has a special keyword for this */ #define pg_attribute_always_inline __forceinline #else -#define pg_attribute_always_inline +/* Otherwise, the best we can do is to say "inline" */ +#define pg_attribute_always_inline inline #endif /* From 5b2a8cf96f6fa4f2c98c9a4c32a5a387b4f69d6c Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 24 Jan 2018 13:20:37 -0500 Subject: [PATCH 0887/1087] doc: clarify use of RegisterDynamicBackgroundWorker Document likely use of RegisterDynamicBackgroundWorker by another background worker.
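For illustration only (not part of this patch): a minimal sketch of one background worker launching another at runtime, which is the pattern this doc change describes. The library name "my_module" and entry point "my_worker_main" are hypothetical placeholders.

    #include "postgres.h"

    #include "miscadmin.h"
    #include "postmaster/bgworker.h"

    static void
    launch_helper_worker(void)
    {
        BackgroundWorker worker;
        BackgroundWorkerHandle *handle;

        memset(&worker, 0, sizeof(worker));
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
        worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
        worker.bgw_restart_time = BGW_NEVER_RESTART;
        snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_module");
        snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
        snprintf(worker.bgw_name, BGW_MAXLEN, "my_module helper worker");
        worker.bgw_main_arg = (Datum) 0;
        /* lets WaitForBackgroundWorkerStartup() deliver status to us */
        worker.bgw_notify_pid = MyProcPid;

        if (!RegisterDynamicBackgroundWorker(&worker, &handle))
            ereport(LOG,
                    (errmsg("could not register dynamic background worker")));
    }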
Reported-by: Chapman Flack Discussion: https://postgr.es/m/CAB7nPqTdi=J9HH8PPPiEOohebdd+xkgbbhdY7=VbGnZ3CkZXxA@mail.gmail.com Author: Chapman Flack --- doc/src/sgml/bgworker.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/bgworker.sgml b/doc/src/sgml/bgworker.sgml index 4bc2b696b3..e490bb8750 100644 --- a/doc/src/sgml/bgworker.sgml +++ b/doc/src/sgml/bgworker.sgml @@ -41,7 +41,7 @@ *worker, BackgroundWorkerHandle **handle). Unlike RegisterBackgroundWorker, which can only be called from within the postmaster, RegisterDynamicBackgroundWorker must be - called from a regular backend. + called from a regular backend, possibly another background worker. From a61116da8b99c4ff4b8c5757697abda7ac36b022 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 23 Jan 2018 10:13:45 -0500 Subject: [PATCH 0888/1087] Add tests for record_image_eq and record_image_cmp record_image_eq was covered a bit by the materialized view code that it is meant to support, but record_image_cmp was not tested at all. While we're here, add more tests to record_eq and record_cmp as well, for symmetry. Reviewed-by: Michael Paquier --- src/test/regress/expected/rowtypes.out | 300 +++++++++++++++++++++++++ src/test/regress/sql/rowtypes.sql | 102 +++++++++ 2 files changed, 402 insertions(+) diff --git a/src/test/regress/expected/rowtypes.out b/src/test/regress/expected/rowtypes.out index a4bac8e3b5..45cb6ff3da 100644 --- a/src/test/regress/expected/rowtypes.out +++ b/src/test/regress/expected/rowtypes.out @@ -53,6 +53,22 @@ ERROR: malformed record literal: "(Joe,,)" LINE 1: select '(Joe,,)'::fullname; ^ DETAIL: Too many columns. +select '[]'::fullname; -- bad +ERROR: malformed record literal: "[]" +LINE 1: select '[]'::fullname; + ^ +DETAIL: Missing left parenthesis. +select ' (Joe,Blow) '::fullname; -- ok, extra whitespace + fullname +------------ + (Joe,Blow) +(1 row) + +select '(Joe,Blow) /'::fullname; -- bad +ERROR: malformed record literal: "(Joe,Blow) /" +LINE 1: select '(Joe,Blow) /'::fullname; + ^ +DETAIL: Junk after right parenthesis. create temp table quadtable(f1 int, q quad); insert into quadtable values (1, ((3.3,4.4),(5.5,6.6))); insert into quadtable values (2, ((null,4.4),(5.5,6.6))); @@ -369,6 +385,290 @@ LINE 1: select * from cc order by f1; ^ HINT: Use an explicit ordering operator or modify the query. -- +-- Tests for record_{eq,cmp} +-- +create type testtype1 as (a int, b int); +-- all true +select row(1, 2)::testtype1 < row(1, 3)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 2)::testtype1 <= row(1, 3)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 2)::testtype1 = row(1, 2)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 2)::testtype1 <> row(1, 3)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 3)::testtype1 >= row(1, 2)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 3)::testtype1 > row(1, 2)::testtype1; + ?column? +---------- + t +(1 row) + +-- all false +select row(1, -2)::testtype1 < row(1, -3)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -2)::testtype1 <= row(1, -3)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -2)::testtype1 = row(1, -3)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -2)::testtype1 <> row(1, -2)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -3)::testtype1 >= row(1, -2)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -3)::testtype1 > row(1, -2)::testtype1; + ?column? 
+---------- + f +(1 row) + +-- true, but see *< below +select row(1, -2)::testtype1 < row(1, 3)::testtype1; + ?column? +---------- + t +(1 row) + +-- mismatches +create type testtype3 as (a int, b text); +select row(1, 2)::testtype1 < row(1, 'abc')::testtype3; +ERROR: cannot compare dissimilar column types integer and text at record column 2 +select row(1, 2)::testtype1 <> row(1, 'abc')::testtype3; +ERROR: cannot compare dissimilar column types integer and text at record column 2 +create type testtype5 as (a int); +select row(1, 2)::testtype1 < row(1)::testtype5; +ERROR: cannot compare record types with different numbers of columns +select row(1, 2)::testtype1 <> row(1)::testtype5; +ERROR: cannot compare record types with different numbers of columns +-- non-comparable types +create type testtype6 as (a int, b point); +select row(1, '(1,2)')::testtype6 < row(1, '(1,3)')::testtype6; +ERROR: could not identify a comparison function for type point +select row(1, '(1,2)')::testtype6 <> row(1, '(1,3)')::testtype6; +ERROR: could not identify an equality operator for type point +drop type testtype1, testtype3, testtype5, testtype6; +-- +-- Tests for record_image_{eq,cmp} +-- +create type testtype1 as (a int, b int); +-- all true +select row(1, 2)::testtype1 *< row(1, 3)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 2)::testtype1 *<= row(1, 3)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 2)::testtype1 *= row(1, 2)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 2)::testtype1 *<> row(1, 3)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 3)::testtype1 *>= row(1, 2)::testtype1; + ?column? +---------- + t +(1 row) + +select row(1, 3)::testtype1 *> row(1, 2)::testtype1; + ?column? +---------- + t +(1 row) + +-- all false +select row(1, -2)::testtype1 *< row(1, -3)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -2)::testtype1 *<= row(1, -3)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -2)::testtype1 *= row(1, -3)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -2)::testtype1 *<> row(1, -2)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -3)::testtype1 *>= row(1, -2)::testtype1; + ?column? +---------- + f +(1 row) + +select row(1, -3)::testtype1 *> row(1, -2)::testtype1; + ?column? +---------- + f +(1 row) + +-- This returns the "wrong" order because record_image_cmp works on +-- unsigned datums without knowing about the actual data type. +select row(1, -2)::testtype1 *< row(1, 3)::testtype1; + ?column? +---------- + f +(1 row) + +-- other types +create type testtype2 as (a smallint, b bool); -- byval different sizes +select row(1, true)::testtype2 *< row(2, true)::testtype2; + ?column? +---------- + t +(1 row) + +select row(-2, true)::testtype2 *< row(-1, true)::testtype2; + ?column? +---------- + t +(1 row) + +select row(0, false)::testtype2 *< row(0, true)::testtype2; + ?column? +---------- + t +(1 row) + +select row(0, false)::testtype2 *<> row(0, true)::testtype2; + ?column? +---------- + t +(1 row) + +create type testtype3 as (a int, b text); -- variable length +select row(1, 'abc')::testtype3 *< row(1, 'abd')::testtype3; + ?column? +---------- + t +(1 row) + +select row(1, 'abc')::testtype3 *< row(1, 'abcd')::testtype3; + ?column? +---------- + t +(1 row) + +select row(1, 'abc')::testtype3 *> row(1, 'abd')::testtype3; + ?column? +---------- + f +(1 row) + +select row(1, 'abc')::testtype3 *<> row(1, 'abd')::testtype3; + ?column? 
+---------- + t +(1 row) + +create type testtype4 as (a int, b point); -- by ref, fixed length +select row(1, '(1,2)')::testtype4 *< row(1, '(1,3)')::testtype4; + ?column? +---------- + t +(1 row) + +select row(1, '(1,2)')::testtype4 *<> row(1, '(1,3)')::testtype4; + ?column? +---------- + t +(1 row) + +-- mismatches +select row(1, 2)::testtype1 *< row(1, 'abc')::testtype3; +ERROR: cannot compare dissimilar column types integer and text at record column 2 +select row(1, 2)::testtype1 *<> row(1, 'abc')::testtype3; +ERROR: cannot compare dissimilar column types integer and text at record column 2 +create type testtype5 as (a int); +select row(1, 2)::testtype1 *< row(1)::testtype5; +ERROR: cannot compare record types with different numbers of columns +select row(1, 2)::testtype1 *<> row(1)::testtype5; +ERROR: cannot compare record types with different numbers of columns +-- non-comparable types +create type testtype6 as (a int, b point); +select row(1, '(1,2)')::testtype6 *< row(1, '(1,3)')::testtype6; + ?column? +---------- + t +(1 row) + +select row(1, '(1,2)')::testtype6 *>= row(1, '(1,3)')::testtype6; + ?column? +---------- + f +(1 row) + +select row(1, '(1,2)')::testtype6 *<> row(1, '(1,3)')::testtype6; + ?column? +---------- + t +(1 row) + +drop type testtype1, testtype2, testtype3, testtype4, testtype5, testtype6; +-- -- Test case derived from bug #5716: check multiple uses of a rowtype result -- BEGIN; diff --git a/src/test/regress/sql/rowtypes.sql b/src/test/regress/sql/rowtypes.sql index 8d63060500..305639f05d 100644 --- a/src/test/regress/sql/rowtypes.sql +++ b/src/test/regress/sql/rowtypes.sql @@ -27,6 +27,9 @@ select '(Joe,"Blow,Jr")'::fullname; select '(Joe,)'::fullname; -- ok, null 2nd column select '(Joe)'::fullname; -- bad select '(Joe,,)'::fullname; -- bad +select '[]'::fullname; -- bad +select ' (Joe,Blow) '::fullname; -- ok, extra whitespace +select '(Joe,Blow) /'::fullname; -- bad create temp table quadtable(f1 int, q quad); @@ -160,6 +163,105 @@ insert into cc values('("(1,2)",3)'); insert into cc values('("(4,5)",6)'); select * from cc order by f1; -- fail, but should complain about cantcompare +-- +-- Tests for record_{eq,cmp} +-- + +create type testtype1 as (a int, b int); + +-- all true +select row(1, 2)::testtype1 < row(1, 3)::testtype1; +select row(1, 2)::testtype1 <= row(1, 3)::testtype1; +select row(1, 2)::testtype1 = row(1, 2)::testtype1; +select row(1, 2)::testtype1 <> row(1, 3)::testtype1; +select row(1, 3)::testtype1 >= row(1, 2)::testtype1; +select row(1, 3)::testtype1 > row(1, 2)::testtype1; + +-- all false +select row(1, -2)::testtype1 < row(1, -3)::testtype1; +select row(1, -2)::testtype1 <= row(1, -3)::testtype1; +select row(1, -2)::testtype1 = row(1, -3)::testtype1; +select row(1, -2)::testtype1 <> row(1, -2)::testtype1; +select row(1, -3)::testtype1 >= row(1, -2)::testtype1; +select row(1, -3)::testtype1 > row(1, -2)::testtype1; + +-- true, but see *< below +select row(1, -2)::testtype1 < row(1, 3)::testtype1; + +-- mismatches +create type testtype3 as (a int, b text); +select row(1, 2)::testtype1 < row(1, 'abc')::testtype3; +select row(1, 2)::testtype1 <> row(1, 'abc')::testtype3; +create type testtype5 as (a int); +select row(1, 2)::testtype1 < row(1)::testtype5; +select row(1, 2)::testtype1 <> row(1)::testtype5; + +-- non-comparable types +create type testtype6 as (a int, b point); +select row(1, '(1,2)')::testtype6 < row(1, '(1,3)')::testtype6; +select row(1, '(1,2)')::testtype6 <> row(1, '(1,3)')::testtype6; + +drop type testtype1, testtype3, 
testtype5, testtype6; + +-- +-- Tests for record_image_{eq,cmp} +-- + +create type testtype1 as (a int, b int); + +-- all true +select row(1, 2)::testtype1 *< row(1, 3)::testtype1; +select row(1, 2)::testtype1 *<= row(1, 3)::testtype1; +select row(1, 2)::testtype1 *= row(1, 2)::testtype1; +select row(1, 2)::testtype1 *<> row(1, 3)::testtype1; +select row(1, 3)::testtype1 *>= row(1, 2)::testtype1; +select row(1, 3)::testtype1 *> row(1, 2)::testtype1; + +-- all false +select row(1, -2)::testtype1 *< row(1, -3)::testtype1; +select row(1, -2)::testtype1 *<= row(1, -3)::testtype1; +select row(1, -2)::testtype1 *= row(1, -3)::testtype1; +select row(1, -2)::testtype1 *<> row(1, -2)::testtype1; +select row(1, -3)::testtype1 *>= row(1, -2)::testtype1; +select row(1, -3)::testtype1 *> row(1, -2)::testtype1; + +-- This returns the "wrong" order because record_image_cmp works on +-- unsigned datums without knowing about the actual data type. +select row(1, -2)::testtype1 *< row(1, 3)::testtype1; + +-- other types +create type testtype2 as (a smallint, b bool); -- byval different sizes +select row(1, true)::testtype2 *< row(2, true)::testtype2; +select row(-2, true)::testtype2 *< row(-1, true)::testtype2; +select row(0, false)::testtype2 *< row(0, true)::testtype2; +select row(0, false)::testtype2 *<> row(0, true)::testtype2; + +create type testtype3 as (a int, b text); -- variable length +select row(1, 'abc')::testtype3 *< row(1, 'abd')::testtype3; +select row(1, 'abc')::testtype3 *< row(1, 'abcd')::testtype3; +select row(1, 'abc')::testtype3 *> row(1, 'abd')::testtype3; +select row(1, 'abc')::testtype3 *<> row(1, 'abd')::testtype3; + +create type testtype4 as (a int, b point); -- by ref, fixed length +select row(1, '(1,2)')::testtype4 *< row(1, '(1,3)')::testtype4; +select row(1, '(1,2)')::testtype4 *<> row(1, '(1,3)')::testtype4; + +-- mismatches +select row(1, 2)::testtype1 *< row(1, 'abc')::testtype3; +select row(1, 2)::testtype1 *<> row(1, 'abc')::testtype3; +create type testtype5 as (a int); +select row(1, 2)::testtype1 *< row(1)::testtype5; +select row(1, 2)::testtype1 *<> row(1)::testtype5; + +-- non-comparable types +create type testtype6 as (a int, b point); +select row(1, '(1,2)')::testtype6 *< row(1, '(1,3)')::testtype6; +select row(1, '(1,2)')::testtype6 *>= row(1, '(1,3)')::testtype6; +select row(1, '(1,2)')::testtype6 *<> row(1, '(1,3)')::testtype6; + +drop type testtype1, testtype2, testtype3, testtype4, testtype5, testtype6; + + -- -- Test case derived from bug #5716: check multiple uses of a rowtype result -- From d6ab7203607a3f43fe41d384f46c15bdac68d745 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 24 Jan 2018 15:13:04 -0500 Subject: [PATCH 0889/1087] doc: properly indent CREATE TRIGGER paragraph This was done to match the surrounding indentation. Text added in PG 10. Backpatch-through: 10 --- doc/src/sgml/ref/create_trigger.sgml | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index ad7f9efb55..a8c0b5725d 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -500,17 +500,17 @@ UPDATE OF column_name1 [, column_name2 - Modifying a partitioned table or a table with inheritance children fires - statement-level triggers directly attached to that table, but not - statement-level triggers for its partitions or child tables. In contrast, - row-level triggers are fired for all affected partitions or child tables. 
- If a statement-level trigger has been defined with transition relations - named by a REFERENCING clause, then before and after - images of rows are visible from all affected partitions or child tables. - In the case of inheritance children, the row images include only columns - that are present in the table that the trigger is attached to. Currently, - row-level triggers with transition relations cannot be defined on - partitions or inheritance child tables. + Modifying a partitioned table or a table with inheritance children fires + statement-level triggers directly attached to that table, but not + statement-level triggers for its partitions or child tables. In contrast, + row-level triggers are fired for all affected partitions or child tables. + If a statement-level trigger has been defined with transition relations + named by a REFERENCING clause, then before and after + images of rows are visible from all affected partitions or child tables. + In the case of inheritance children, the row images include only columns + that are present in the table that the trigger is attached to. Currently, + row-level triggers with transition relations cannot be defined on + partitions or inheritance child tables. From 945f71db845262e7491b5fe4403b01147027576b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 24 Jan 2018 16:34:51 -0500 Subject: [PATCH 0890/1087] Avoid referencing off the end of subplan_partition_offsets. Report by buildfarm member skink and Tom Lane. Analysis by me. Patch by Amit Khandekar. Discussion: http://postgr.es/m/CAJ3gD9fVA1iXQYhfqHP5n_TEd4U9=V8TL_cc-oKRnRmxgdvJrQ@mail.gmail.com --- src/backend/executor/execPartition.c | 2 ++ src/backend/executor/nodeModifyTable.c | 3 ++- src/include/executor/execPartition.h | 2 ++ 3 files changed, 6 insertions(+), 1 deletion(-) diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index 89b7bb4c60..106a96d910 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -87,6 +87,7 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, num_update_rri = list_length(node->plans); proute->subplan_partition_offsets = palloc(num_update_rri * sizeof(int)); + proute->num_subplan_partition_offsets = num_update_rri; /* * We need an additional tuple slot for storing transient tuples that @@ -481,6 +482,7 @@ ExecCleanupTupleRouting(PartitionTupleRouting *proute) * result rels are present in the UPDATE subplans. */ if (proute->subplan_partition_offsets && + subplan_index < proute->num_subplan_partition_offsets && proute->subplan_partition_offsets[subplan_index] == i) { subplan_index++; diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 6c2f8d4ec0..828e1b0015 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -1812,7 +1812,8 @@ tupconv_map_for_subplan(ModifyTableState *mtstate, int whichplan) * If subplan-indexed array is NULL, things should have been arranged * to convert the subplan index to partition index. 
*/ - Assert(proute && proute->subplan_partition_offsets != NULL); + Assert(proute && proute->subplan_partition_offsets != NULL && + whichplan < proute->num_subplan_partition_offsets); leaf_index = proute->subplan_partition_offsets[whichplan]; diff --git a/src/include/executor/execPartition.h b/src/include/executor/execPartition.h index 18e08129f8..3df9c498bb 100644 --- a/src/include/executor/execPartition.h +++ b/src/include/executor/execPartition.h @@ -80,6 +80,7 @@ typedef struct PartitionDispatchData *PartitionDispatch; * subplan_partition_offsets Integer array ordered by UPDATE subplans. Each * element of this array has the index into the * corresponding partition in partitions array. + * num_subplan_partition_offsets Length of 'subplan_partition_offsets' array * partition_tuple_slot TupleTableSlot to be used to manipulate any * given leaf partition's rowtype after that * partition is chosen for insertion by @@ -96,6 +97,7 @@ typedef struct PartitionTupleRouting TupleConversionMap **child_parent_tupconv_maps; bool *child_parent_map_not_required; int *subplan_partition_offsets; + int num_subplan_partition_offsets; TupleTableSlot *partition_tuple_slot; TupleTableSlot *root_tuple_slot; } PartitionTupleRouting; From 4a3fdbdf766d80b21271e32da865801ab005d786 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 25 Jan 2018 09:14:24 -0500 Subject: [PATCH 0891/1087] Allow spaces in connection strings in SSL tests Connection strings can have items with spaces in them, wrapped in quotes. The tests however ran a SELECT '$connstr' upon connection which broke on the embedded quotes. Use dollar quotes on the connstr to protect against this. This was hit during the development of the macOS Secure Transport patch, but is independent of it. Author: Daniel Gustafsson --- src/test/ssl/ServerSetup.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm index 02f8028b2b..b4d5746e20 100644 --- a/src/test/ssl/ServerSetup.pm +++ b/src/test/ssl/ServerSetup.pm @@ -42,7 +42,7 @@ sub run_test_psql my $logstring = $_[1]; my $cmd = [ - 'psql', '-X', '-A', '-t', '-c', "SELECT 'connected with $connstr'", + 'psql', '-X', '-A', '-t', '-c', "SELECT \$\$connected with $connstr\$\$", '-d', "$connstr" ]; my $result = run_log($cmd); From 0b5e33f667a2042d7022da8bef31a8be5937aad1 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 23 Jan 2018 10:55:40 -0500 Subject: [PATCH 0892/1087] Remove use of byte-masking macros in record_image_cmp These were introduced in 4cbb646334b3b998a29abef0d57608d42097e6c9, but after further analysis and testing, they should not be necessary and probably weren't the part of that commit that fixed anything. Reviewed-by: Michael Paquier --- src/backend/utils/adt/rowtypes.c | 65 ++------------------------------ 1 file changed, 3 insertions(+), 62 deletions(-) diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c index a5fabfcc9e..5f729342f8 100644 --- a/src/backend/utils/adt/rowtypes.c +++ b/src/backend/utils/adt/rowtypes.c @@ -1467,45 +1467,8 @@ record_image_cmp(FunctionCallInfo fcinfo) } else if (att1->attbyval) { - switch (att1->attlen) - { - case 1: - if (GET_1_BYTE(values1[i1]) != - GET_1_BYTE(values2[i2])) - { - cmpresult = (GET_1_BYTE(values1[i1]) < - GET_1_BYTE(values2[i2])) ? -1 : 1; - } - break; - case 2: - if (GET_2_BYTES(values1[i1]) != - GET_2_BYTES(values2[i2])) - { - cmpresult = (GET_2_BYTES(values1[i1]) < - GET_2_BYTES(values2[i2])) ? 
-1 : 1; - } - break; - case 4: - if (GET_4_BYTES(values1[i1]) != - GET_4_BYTES(values2[i2])) - { - cmpresult = (GET_4_BYTES(values1[i1]) < - GET_4_BYTES(values2[i2])) ? -1 : 1; - } - break; -#if SIZEOF_DATUM == 8 - case 8: - if (GET_8_BYTES(values1[i1]) != - GET_8_BYTES(values2[i2])) - { - cmpresult = (GET_8_BYTES(values1[i1]) < - GET_8_BYTES(values2[i2])) ? -1 : 1; - } - break; -#endif - default: - Assert(false); /* cannot happen */ - } + if (values1[i1] != values2[i2]) + cmpresult = (values1[i1] < values2[i2]) ? -1 : 1; } else { @@ -1739,29 +1702,7 @@ record_image_eq(PG_FUNCTION_ARGS) } else if (att1->attbyval) { - switch (att1->attlen) - { - case 1: - result = (GET_1_BYTE(values1[i1]) == - GET_1_BYTE(values2[i2])); - break; - case 2: - result = (GET_2_BYTES(values1[i1]) == - GET_2_BYTES(values2[i2])); - break; - case 4: - result = (GET_4_BYTES(values1[i1]) == - GET_4_BYTES(values2[i2])); - break; -#if SIZEOF_DATUM == 8 - case 8: - result = (GET_8_BYTES(values1[i1]) == - GET_8_BYTES(values2[i2])); - break; -#endif - default: - Assert(false); /* cannot happen */ - } + result = (values1[i1] == values2[i2]); } else { From 2a5ecb56d22340a00393fa60e7b910c472071875 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 25 Jan 2018 11:14:24 -0500 Subject: [PATCH 0893/1087] Update documentation to mention huge pages on other OSes Previously, the docs implied that only Linux and Windows could use huge pages. That's not quite true: it's just that we only know how to request them explicitly on those OSes. Be more explicit about what huge_pages really does and mention that some OSes may use huge pages automatically. Author: Thomas Munro and Catalin Iacob Reviewed-By: Justin Pryzby, Peter Eisentraut Discussion: https://postgr.es/m/CAEepm=3qzR-hfjepymohuC4XO5phxoSoipOjm6BEhnJHjNR+jg@mail.gmail.com --- doc/src/sgml/config.sgml | 33 +++++++++++++++++++++++---------- 1 file changed, 23 insertions(+), 10 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 45b2af14eb..f951ddb41e 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -1372,14 +1372,20 @@ include_dir 'conf.d' - Enables/disables the use of huge memory pages. Valid values are - try (the default), on, - and off. + Controls whether huge pages are requested for the main shared memory + area. Valid values are try (the default), + on, and off. With + huge_pages set to try, the + server will try to request huge pages, but fall back to the default if + that fails. With on, failure to request huge pages + will prevent the server from starting up. With off, + huge pages will not be requested. - At present, this feature is supported only on Linux and Windows. The - setting is ignored on other systems when set to try. + At present, this setting is supported only on Linux and Windows. The + setting is ignored on other systems when set to + try. @@ -1401,11 +1407,18 @@ include_dir 'conf.d' - With huge_pages set to try, - the server will try to use huge pages, but fall back to using - normal allocation if that fails. With on, failure - to use huge pages will prevent the server from starting up. With - off, huge pages will not be used. + Note that this setting only affects the main shared memory area. + Operating systems such as Linux, FreeBSD, and Illumos can also use + huge pages (also known as super pages or + large pages) automatically for normal memory + allocation, without an explicit request from + PostgreSQL. On Linux, this is called + transparent huge pages (THP).
That feature has been known to + cause performance degradation with + PostgreSQL for some users on some Linux + versions, so its use is currently discouraged (unlike explicit use of + huge_pages). From 5955d934194c3888f30318209ade71b53d29777f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 25 Jan 2018 13:54:42 -0500 Subject: [PATCH 0894/1087] Improve pg_dump's handling of "special" built-in objects. We had some pretty ad-hoc handling of the public schema and the plpgsql extension, which are both presumed to exist in template0 but might be modified or deleted in the database being dumped. Up to now, by default pg_dump would emit a CREATE EXTENSION IF NOT EXISTS command as well as a COMMENT command for plpgsql. The usefulness of the former is questionable, and the latter caused annoying errors in non-superuser dump/restore scenarios. Let's instead install a rule that built-in extensions (identified by having low-numbered OIDs) are not to be dumped. We were doing it that way already in binary-upgrade mode, so this just makes regular mode behave the same. It remains true that if someone has installed a non-default ACL on the plpgsql language, that will get dumped thanks to the pg_init_privs mechanism. This is more consistent with the handling of built-in objects of other kinds. Also, change the very ad-hoc mechanism that was used to avoid dumping creation and comment commands for the public schema. Instead of hardwiring a test in _printTocEntry(), make use of the DUMP_COMPONENT_ infrastructure to mark that schema up-front about what we want to do with it. This has the visible effect that the public schema won't be mentioned in the output at all, except for updating its ACL if it has a non-default ACL. Previously, while it was normally not mentioned, --clean mode would drop and recreate it, again causing headaches for non-superuser usage. This change likewise makes the public schema less special and more like other built-in objects. If plpgsql, or the public schema, has been removed entirely in the source DB, that situation won't be reproduced in the destination ... but that was true before. Discussion: https://postgr.es/m/29048.1516812451@sss.pgh.pa.us --- src/bin/pg_dump/pg_backup_archiver.c | 23 ------- src/bin/pg_dump/pg_dump.c | 74 +++++++++++----------- src/bin/pg_dump/t/002_pg_dump.pl | 92 ++++++++++++++++------------ 3 files changed, 87 insertions(+), 102 deletions(-) diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index f55aa36c49..fb0379377b 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -3453,29 +3453,6 @@ _printTocEntry(ArchiveHandle *AH, TocEntry *te, bool isData) { RestoreOptions *ropt = AH->public.ropt; - /* - * Avoid dumping the public schema, as it will already be created ... - * unless we are using --clean mode (and *not* --create mode), in which - * case we've previously issued a DROP for it so we'd better recreate it. - * - * Likewise for its comment, if any. (We could try issuing the COMMENT - * command anyway; but it'd fail if the restore is done as non-super-user, - * so let's not.) - * - * XXX it looks pretty ugly to hard-wire the public schema like this, but - * it sits in a sort of no-mans-land between being a system object and a - * user object, so it really is special in a way. 
- */ - if (!(ropt->dropSchema && !ropt->createDB)) - { - if (strcmp(te->desc, "SCHEMA") == 0 && - strcmp(te->tag, "public") == 0) - return; - if (strcmp(te->desc, "COMMENT") == 0 && - strcmp(te->tag, "SCHEMA public") == 0) - return; - } - /* Select owner, schema, and tablespace as necessary */ _becomeOwner(AH, te); _selectOutputSchema(AH, te->namespace); diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index d65ea54a69..b534fb7b95 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -1407,6 +1407,19 @@ selectDumpableNamespace(NamespaceInfo *nsinfo, Archive *fout) /* Other system schemas don't get dumped */ nsinfo->dobj.dump_contains = nsinfo->dobj.dump = DUMP_COMPONENT_NONE; } + else if (strcmp(nsinfo->dobj.name, "public") == 0) + { + /* + * The public schema is a strange beast that sits in a sort of + * no-mans-land between being a system object and a user object. We + * don't want to dump creation or comment commands for it, because + * that complicates matters for non-superuser use of pg_dump. But we + * should dump any ACL changes that have occurred for it, and of + * course we should dump contained objects. + */ + nsinfo->dobj.dump = DUMP_COMPONENT_ACL; + nsinfo->dobj.dump_contains = DUMP_COMPONENT_ALL; + } else nsinfo->dobj.dump_contains = nsinfo->dobj.dump = DUMP_COMPONENT_ALL; @@ -1617,21 +1630,21 @@ selectDumpableAccessMethod(AccessMethodInfo *method, Archive *fout) * selectDumpableExtension: policy-setting subroutine * Mark an extension as to be dumped or not * - * Normally, we dump all extensions, or none of them if include_everything - * is false (i.e., a --schema or --table switch was given). However, in - * binary-upgrade mode it's necessary to skip built-in extensions, since we + * Built-in extensions should be skipped except for checking ACLs, since we * assume those will already be installed in the target database. We identify * such extensions by their having OIDs in the range reserved for initdb. + * We dump all user-added extensions by default, or none of them if + * include_everything is false (i.e., a --schema or --table switch was given). */ static void selectDumpableExtension(ExtensionInfo *extinfo, DumpOptions *dopt) { /* - * Use DUMP_COMPONENT_ACL for from-initdb extensions, to allow users to - * change permissions on those objects, if they wish to, and have those - * changes preserved. + * Use DUMP_COMPONENT_ACL for built-in extensions, to allow users to + * change permissions on their member objects, if they wish to, and have + * those changes preserved. */ - if (dopt->binary_upgrade && extinfo->dobj.catId.oid <= (Oid) g_last_builtin_oid) + if (extinfo->dobj.catId.oid <= (Oid) g_last_builtin_oid) extinfo->dobj.dump = extinfo->dobj.dump_contains = DUMP_COMPONENT_ACL; else extinfo->dobj.dump = extinfo->dobj.dump_contains = @@ -4435,29 +4448,6 @@ getNamespaces(Archive *fout, int *numNamespaces) init_acl_subquery->data, init_racl_subquery->data); - /* - * When we are doing a 'clean' run, we will be dropping and recreating - * the 'public' schema (the only object which has that kind of - * treatment in the backend and which has an entry in pg_init_privs) - * and therefore we should not consider any initial privileges in - * pg_init_privs in that case. - * - * See pg_backup_archiver.c:_printTocEntry() for the details on why - * the public schema is special in this regard. 
- * - * Note that if the public schema is dropped and re-created, this is - * essentially a no-op because the new public schema won't have an - * entry in pg_init_privs anyway, as the entry will be removed when - * the public schema is dropped. - * - * Further, we have to handle the case where the public schema does - * not exist at all. - */ - if (dopt->outputClean) - appendPQExpBuffer(query, " AND pip.objoid <> " - "coalesce((select oid from pg_namespace " - "where nspname = 'public'),0)"); - appendPQExpBuffer(query, ") "); destroyPQExpBuffer(acl_subquery); @@ -9945,20 +9935,28 @@ dumpExtension(Archive *fout, ExtensionInfo *extinfo) if (!dopt->binary_upgrade) { /* - * In a regular dump, we use IF NOT EXISTS so that there isn't a - * problem if the extension already exists in the target database; - * this is essential for installed-by-default extensions such as - * plpgsql. + * In a regular dump, we simply create the extension, intentionally + * not specifying a version, so that the destination installation's + * default version is used. * - * In binary-upgrade mode, that doesn't work well, so instead we skip - * built-in extensions based on their OIDs; see - * selectDumpableExtension. + * Use of IF NOT EXISTS here is unlike our behavior for other object + * types; but there are various scenarios in which it's convenient to + * manually create the desired extension before restoring, so we + * prefer to allow it to exist already. */ appendPQExpBuffer(q, "CREATE EXTENSION IF NOT EXISTS %s WITH SCHEMA %s;\n", qextname, fmtId(extinfo->namespace)); } else { + /* + * In binary-upgrade mode, it's critical to reproduce the state of the + * database exactly, so our procedure is to create an empty extension, + * restore all the contained objects normally, and add them to the + * extension one by one. This function performs just the first of + * those steps. binary_upgrade_extension_member() takes care of + * adding member objects as they're created. + */ int i; int n; @@ -9968,8 +9966,6 @@ dumpExtension(Archive *fout, ExtensionInfo *extinfo) * We unconditionally create the extension, so we must drop it if it * exists. This could happen if the user deleted 'plpgsql' and then * readded it, causing its oid to be greater than g_last_builtin_oid. - * The g_last_builtin_oid test was kept to avoid repeatedly dropping - * and recreating extensions like 'plpgsql'. 
*/ appendPQExpBuffer(q, "DROP EXTENSION IF EXISTS %s;\n", qextname); diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl index 74730bfc65..3e9b4d94dc 100644 --- a/src/bin/pg_dump/t/002_pg_dump.pl +++ b/src/bin/pg_dump/t/002_pg_dump.pl @@ -1548,30 +1548,31 @@ all_runs => 1, catch_all => 'COMMENT commands', regexp => qr/^COMMENT ON EXTENSION plpgsql IS .*;/m, - like => { + # this shouldn't ever get emitted anymore + like => {}, + unlike => { + binary_upgrade => 1, clean => 1, clean_if_exists => 1, + column_inserts => 1, createdb => 1, + data_only => 1, defaults => 1, exclude_dump_test_schema => 1, exclude_test_table => 1, exclude_test_table_data => 1, no_blobs => 1, - no_privs => 1, no_owner => 1, + no_privs => 1, + only_dump_test_schema => 1, + only_dump_test_table => 1, pg_dumpall_dbprivs => 1, + role => 1, schema_only => 1, + section_post_data => 1, section_pre_data => 1, - with_oids => 1, }, - unlike => { - binary_upgrade => 1, - column_inserts => 1, - data_only => 1, - only_dump_test_schema => 1, - only_dump_test_table => 1, - role => 1, - section_post_data => 1, - test_schema_plus_blobs => 1, }, }, + test_schema_plus_blobs => 1, + with_oids => 1, }, }, 'COMMENT ON TABLE dump_test.test_table' => { all_runs => 1, @@ -2751,33 +2752,34 @@ regexp => qr/^ \QCREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;\E /xm, - like => { + # this shouldn't ever get emitted anymore + like => {}, + unlike => { + binary_upgrade => 1, clean => 1, clean_if_exists => 1, + column_inserts => 1, createdb => 1, + data_only => 1, defaults => 1, exclude_dump_test_schema => 1, exclude_test_table => 1, exclude_test_table_data => 1, no_blobs => 1, - no_privs => 1, no_owner => 1, - pg_dumpall_dbprivs => 1, - schema_only => 1, - section_pre_data => 1, - with_oids => 1, }, - unlike => { - binary_upgrade => 1, - column_inserts => 1, - data_only => 1, + no_privs => 1, only_dump_test_schema => 1, only_dump_test_table => 1, + pg_dumpall_dbprivs => 1, pg_dumpall_globals => 1, pg_dumpall_globals_clean => 1, role => 1, + schema_only => 1, section_data => 1, section_post_data => 1, - test_schema_plus_blobs => 1, }, }, + section_pre_data => 1, + test_schema_plus_blobs => 1, + with_oids => 1, }, }, 'CREATE AGGREGATE dump_test.newavg' => { all_runs => 1, @@ -4565,11 +4567,12 @@ all_runs => 1, catch_all => 'CREATE ... commands', regexp => qr/^CREATE SCHEMA public;/m, - like => { - clean => 1, - clean_if_exists => 1, }, + # this shouldn't ever get emitted anymore + like => {}, unlike => { binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, createdb => 1, defaults => 1, exclude_test_table => 1, @@ -5395,8 +5398,10 @@ all_runs => 1, catch_all => 'DROP ... commands', regexp => qr/^DROP SCHEMA public;/m, - like => { clean => 1 }, + # this shouldn't ever get emitted anymore + like => {}, unlike => { + clean => 1, clean_if_exists => 1, pg_dumpall_globals_clean => 1, }, }, @@ -5404,17 +5409,21 @@ all_runs => 1, catch_all => 'DROP ... commands', regexp => qr/^DROP SCHEMA IF EXISTS public;/m, - like => { clean_if_exists => 1 }, + # this shouldn't ever get emitted anymore + like => {}, unlike => { clean => 1, + clean_if_exists => 1, pg_dumpall_globals_clean => 1, }, }, 'DROP EXTENSION plpgsql' => { all_runs => 1, catch_all => 'DROP ... 
commands', regexp => qr/^DROP EXTENSION plpgsql;/m, - like => { clean => 1, }, + # this shouldn't ever get emitted anymore + like => {}, unlike => { + clean => 1, clean_if_exists => 1, pg_dumpall_globals_clean => 1, }, }, @@ -5494,9 +5503,11 @@ all_runs => 1, catch_all => 'DROP ... commands', regexp => qr/^DROP EXTENSION IF EXISTS plpgsql;/m, - like => { clean_if_exists => 1, }, + # this shouldn't ever get emitted anymore + like => {}, unlike => { clean => 1, + clean_if_exists => 1, pg_dumpall_globals_clean => 1, }, }, 'DROP FUNCTION IF EXISTS dump_test.pltestlang_call_handler()' => { @@ -6264,11 +6275,12 @@ \Q--\E\n\n \QGRANT USAGE ON SCHEMA public TO PUBLIC;\E /xm, - like => { - clean => 1, - clean_if_exists => 1, }, + # this shouldn't ever get emitted anymore + like => {}, unlike => { binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, createdb => 1, defaults => 1, exclude_dump_test_schema => 1, @@ -6537,6 +6549,8 @@ /xm, like => { binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, createdb => 1, defaults => 1, exclude_dump_test_schema => 1, @@ -6549,8 +6563,6 @@ section_pre_data => 1, with_oids => 1, }, unlike => { - clean => 1, - clean_if_exists => 1, only_dump_test_schema => 1, only_dump_test_table => 1, pg_dumpall_globals_clean => 1, @@ -6576,18 +6588,18 @@ exclude_test_table_data => 1, no_blobs => 1, no_owner => 1, + only_dump_test_schema => 1, + only_dump_test_table => 1, pg_dumpall_dbprivs => 1, + role => 1, schema_only => 1, section_pre_data => 1, + test_schema_plus_blobs => 1, with_oids => 1, }, unlike => { - only_dump_test_schema => 1, - only_dump_test_table => 1, pg_dumpall_globals_clean => 1, - role => 1, section_data => 1, - section_post_data => 1, - test_schema_plus_blobs => 1, }, }, + section_post_data => 1, }, }, 'REVOKE commands' => { # catch-all for REVOKE commands all_runs => 0, # catch-all From 05fb5d661925f00106373f1a594be5aca24d9a94 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Thu, 25 Jan 2018 16:11:51 -0300 Subject: [PATCH 0895/1087] Ignore partitioned indexes where appropriate get_relation_info() was too optimistic about opening indexes in partitioned tables, which would raise errors when any queries were planned on such tables. Fix by ignoring any indexes of the partitioned kind. CLUSTER (and ALTER TABLE CLUSTER ON) had a similar problem. Fix by disallowing these commands in partitioned tables. Fallout from 8b08f7d4820f. --- src/backend/commands/cluster.c | 14 ++++++++++++++ src/backend/optimizer/util/plancat.c | 10 ++++++++++ src/test/regress/expected/cluster.out | 8 ++++++++ src/test/regress/expected/indexing.out | 11 +++++++++++ src/test/regress/sql/cluster.sql | 7 +++++++ src/test/regress/sql/indexing.sql | 8 ++++++++ 6 files changed, 58 insertions(+) diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c index eb73299199..1701548d84 100644 --- a/src/backend/commands/cluster.c +++ b/src/backend/commands/cluster.c @@ -128,6 +128,14 @@ cluster(ClusterStmt *stmt, bool isTopLevel) (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("cannot cluster temporary tables of other sessions"))); + /* + * Reject clustering a partitioned table. 
+ */ + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot cluster a partitioned table"))); + if (stmt->indexname == NULL) { ListCell *index; @@ -482,6 +490,12 @@ mark_index_clustered(Relation rel, Oid indexOid, bool is_internal) Relation pg_index; ListCell *index; + /* Disallow applying to a partitioned table */ + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot mark index clustered in partitioned table"))); + /* * If the index is already marked clustered, no need to do anything. */ diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c index 8c60b35068..60f21711f4 100644 --- a/src/backend/optimizer/util/plancat.c +++ b/src/backend/optimizer/util/plancat.c @@ -207,6 +207,16 @@ get_relation_info(PlannerInfo *root, Oid relationObjectId, bool inhparent, continue; } + /* + * Ignore partitioned indexes, since they are not usable for + * queries. + */ + if (indexRelation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) + { + index_close(indexRelation, NoLock); + continue; + } + /* * If the index is valid, but cannot yet be used, ignore it; but * mark the plan we are generating as transient. See diff --git a/src/test/regress/expected/cluster.out b/src/test/regress/expected/cluster.out index 82713bfa2c..2bb62212ea 100644 --- a/src/test/regress/expected/cluster.out +++ b/src/test/regress/expected/cluster.out @@ -439,6 +439,14 @@ select * from clstr_temp; drop table clstr_temp; RESET SESSION AUTHORIZATION; +-- Check that partitioned tables cannot be clustered +CREATE TABLE clstrpart (a int) PARTITION BY RANGE (a); +CREATE INDEX clstrpart_idx ON clstrpart (a); +ALTER TABLE clstrpart CLUSTER ON clstrpart_idx; +ERROR: cannot mark index clustered in partitioned table +CLUSTER clstrpart USING clstrpart_idx; +ERROR: cannot cluster a partitioned table +DROP TABLE clstrpart; -- Test CLUSTER with external tuplesorting create table clstr_4 as select * from tenk1; create index cluster_sort on clstr_4 (hundred, thousand, tenthous); diff --git a/src/test/regress/expected/indexing.out b/src/test/regress/expected/indexing.out index ffd4b10c37..e034ad3aad 100644 --- a/src/test/regress/expected/indexing.out +++ b/src/test/regress/expected/indexing.out @@ -31,6 +31,17 @@ ERROR: cannot create unique index on partitioned table "idxpart" create index concurrently on idxpart (a); ERROR: cannot create index on partitioned table "idxpart" concurrently drop table idxpart; +-- Verify bugfix with query on indexed partitioned table with no partitions +-- https://postgr.es/m/20180124162006.pmapfiznhgngwtjf@alvherre.pgsql +CREATE TABLE idxpart (col1 INT) PARTITION BY RANGE (col1); +CREATE INDEX ON idxpart (col1); +CREATE TABLE idxpart_two (col2 INT); +SELECT col2 FROM idxpart_two fk LEFT OUTER JOIN idxpart pk ON (col1 = col2); + col2 +------ +(0 rows) + +DROP table idxpart, idxpart_two; -- If a table without index is attached as partition to a table with -- an index, the index is automatically created create table idxpart (a int, b int, c text) partition by range (a); diff --git a/src/test/regress/sql/cluster.sql b/src/test/regress/sql/cluster.sql index a6c2757efa..522bfeead4 100644 --- a/src/test/regress/sql/cluster.sql +++ b/src/test/regress/sql/cluster.sql @@ -196,6 +196,13 @@ drop table clstr_temp; RESET SESSION AUTHORIZATION; +-- Check that partitioned tables cannot be clustered +CREATE TABLE clstrpart (a int) PARTITION BY RANGE 
(a); +CREATE INDEX clstrpart_idx ON clstrpart (a); +ALTER TABLE clstrpart CLUSTER ON clstrpart_idx; +CLUSTER clstrpart USING clstrpart_idx; +DROP TABLE clstrpart; + -- Test CLUSTER with external tuplesorting create table clstr_4 as select * from tenk1; diff --git a/src/test/regress/sql/indexing.sql b/src/test/regress/sql/indexing.sql index 2f985ec866..1a9ea89ade 100644 --- a/src/test/regress/sql/indexing.sql +++ b/src/test/regress/sql/indexing.sql @@ -19,6 +19,14 @@ create unique index on idxpart (a); create index concurrently on idxpart (a); drop table idxpart; +-- Verify bugfix with query on indexed partitioned table with no partitions +-- https://postgr.es/m/20180124162006.pmapfiznhgngwtjf@alvherre.pgsql +CREATE TABLE idxpart (col1 INT) PARTITION BY RANGE (col1); +CREATE INDEX ON idxpart (col1); +CREATE TABLE idxpart_two (col2 INT); +SELECT col2 FROM idxpart_two fk LEFT OUTER JOIN idxpart pk ON (col1 = col2); +DROP table idxpart, idxpart_two; + -- If a table without index is attached as partition to a table with -- an index, the index is automatically created create table idxpart (a int, b int, c text) partition by range (a); From 0d4e6ed3085828edb68f516067d45761c0a89ac5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 25 Jan 2018 14:26:07 -0500 Subject: [PATCH 0896/1087] Clean up some aspects of pg_dump/pg_restore item-selection logic. Ensure that CREATE DATABASE and related commands are issued when, and only when, --create is specified. Previously there were scenarios where using selective-dump switches would prevent --create from having any effect. For example, it would fail to do anything in pg_restore if the archive file had been made by a selective dump, because there would be no TOC entry for the database. Since we don't issue \connect either if we don't issue CREATE DATABASE, this could result in unexpectedly restoring objects into the wrong database. Also fix pg_restore's selective restore logic so that when an object is selected to be restored, we also restore its ACL, comment, and security label if any. Previously there was no way to get the latter properties except through tedious mucking about with a -L file. If, for some reason, you don't want these properties, you can match the old behavior by adding --no-acl etc. While at it, try to make _tocEntryRequired() a little better organized and better documented. Discussion: https://postgr.es/m/32668.1516848577@sss.pgh.pa.us --- doc/src/sgml/ref/pg_restore.sgml | 7 +- src/bin/pg_dump/pg_backup_archiver.c | 226 ++++++++++++++++------ src/bin/pg_dump/pg_dump.c | 9 +- 3 files changed, 153 insertions(+), 89 deletions(-) diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml index a2ebf75ebb..ee756159f6 100644 --- a/doc/src/sgml/ref/pg_restore.sgml +++ b/doc/src/sgml/ref/pg_restore.sgml @@ -446,6 +446,11 @@ flag of pg_dump. There is not currently any provision for wild-card matching in pg_restore, nor can you include a schema name within its -t. + And, while pg_dump's + -t flag will also dump subsidiary objects (such as indexes) of the + selected table(s), + pg_restore's + -t flag does not include such subsidiary objects. @@ -564,7 +569,7 @@ Use conditional commands (i.e. add an IF EXISTS - clause) when cleaning database objects. This option is not valid + clause) to drop database objects. This option is not valid unless --clean is also specified.
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index fb0379377b..94c511c936 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -70,7 +70,7 @@ static void _selectOutputSchema(ArchiveHandle *AH, const char *schemaName); static void _selectTablespace(ArchiveHandle *AH, const char *tablespace); static void processEncodingEntry(ArchiveHandle *AH, TocEntry *te); static void processStdStringsEntry(ArchiveHandle *AH, TocEntry *te); -static teReqs _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt); +static teReqs _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH); static RestorePass _tocEntryRestorePass(TocEntry *te); static bool _tocEntryIsACL(TocEntry *te); static void _disableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te); @@ -312,7 +312,7 @@ ProcessArchiveRestoreOptions(Archive *AHX) if (te->section != SECTION_NONE) curSection = te->section; - te->reqs = _tocEntryRequired(te, curSection, ropt); + te->reqs = _tocEntryRequired(te, curSection, AH); } /* Enforce strict names checking */ @@ -488,9 +488,8 @@ RestoreArchive(Archive *AHX) * In createDB mode, issue a DROP *only* for the database as a * whole. Issuing drops against anything else would be wrong, * because at this point we're connected to the wrong database. - * Conversely, if we're not in createDB mode, we'd better not - * issue a DROP against the database at all. (The DATABASE - * PROPERTIES entry, if any, works like the DATABASE entry.) + * (The DATABASE PROPERTIES entry, if any, should be treated like + * the DATABASE entry.) */ if (ropt->createDB) { @@ -498,12 +497,6 @@ RestoreArchive(Archive *AHX) strcmp(te->desc, "DATABASE PROPERTIES") != 0) continue; } - else - { - if (strcmp(te->desc, "DATABASE") == 0 || - strcmp(te->desc, "DATABASE PROPERTIES") == 0) - continue; - } /* Otherwise, drop anything that's selected and has a dropStmt */ if (((te->reqs & (REQ_SCHEMA | REQ_DATA)) != 0) && te->dropStmt) @@ -752,25 +745,6 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel) AH->currentTE = te; - /* Work out what, if anything, we want from this entry */ - reqs = te->reqs; - - /* - * Ignore DATABASE and related entries unless createDB is specified. We - * must check this here, not in _tocEntryRequired, because !createDB - * should not prevent emitting these entries to an archive file. 
- */ - if (!ropt->createDB && - (strcmp(te->desc, "DATABASE") == 0 || - strcmp(te->desc, "DATABASE PROPERTIES") == 0 || - (strcmp(te->desc, "ACL") == 0 && - strncmp(te->tag, "DATABASE ", 9) == 0) || - (strcmp(te->desc, "COMMENT") == 0 && - strncmp(te->tag, "DATABASE ", 9) == 0) || - (strcmp(te->desc, "SECURITY LABEL") == 0 && - strncmp(te->tag, "DATABASE ", 9) == 0))) - reqs = 0; - /* Dump any relevant dump warnings to stderr */ if (!ropt->suppressDumpWarnings && strcmp(te->desc, "WARNING") == 0) { @@ -780,6 +754,9 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel) write_msg(modulename, "warning from original dump file: %s\n", te->copyStmt); } + /* Work out what, if anything, we want from this entry */ + reqs = te->reqs; + defnDumped = false; /* @@ -1191,7 +1168,7 @@ PrintTOCSummary(Archive *AHX) if (te->section != SECTION_NONE) curSection = te->section; if (ropt->verbose || - (_tocEntryRequired(te, curSection, ropt) & (REQ_SCHEMA | REQ_DATA)) != 0) + (_tocEntryRequired(te, curSection, AH) & (REQ_SCHEMA | REQ_DATA)) != 0) { char *sanitized_name; char *sanitized_schema; @@ -2824,16 +2801,42 @@ StrictNamesCheck(RestoreOptions *ropt) } } +/* + * Determine whether we want to restore this TOC entry. + * + * Returns 0 if entry should be skipped, or some combination of the + * REQ_SCHEMA and REQ_DATA bits if we want to restore schema and/or data + * portions of this TOC entry, or REQ_SPECIAL if it's a special entry. + */ static teReqs -_tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) +_tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH) { teReqs res = REQ_SCHEMA | REQ_DATA; + RestoreOptions *ropt = AH->public.ropt; /* ENCODING and STDSTRINGS items are treated specially */ if (strcmp(te->desc, "ENCODING") == 0 || strcmp(te->desc, "STDSTRINGS") == 0) return REQ_SPECIAL; + /* + * DATABASE and DATABASE PROPERTIES also have a special rule: they are + * restored in createDB mode, and not restored otherwise, independently of + * all else. + */ + if (strcmp(te->desc, "DATABASE") == 0 || + strcmp(te->desc, "DATABASE PROPERTIES") == 0) + { + if (ropt->createDB) + return REQ_SCHEMA; + else + return 0; + } + + /* + * Process exclusions that affect certain classes of TOC entries. + */ + /* If it's an ACL, maybe ignore it */ if (ropt->aclsSkip && _tocEntryIsACL(te)) return 0; @@ -2842,11 +2845,11 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) if (ropt->no_publications && strcmp(te->desc, "PUBLICATION") == 0) return 0; - /* If it's security labels, maybe ignore it */ + /* If it's a security label, maybe ignore it */ if (ropt->no_security_labels && strcmp(te->desc, "SECURITY LABEL") == 0) return 0; - /* If it's a subcription, maybe ignore it */ + /* If it's a subscription, maybe ignore it */ if (ropt->no_subscriptions && strcmp(te->desc, "SUBSCRIPTION") == 0) return 0; @@ -2870,65 +2873,118 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) return 0; } - /* Check options for selective dump/restore */ - if (ropt->schemaNames.head != NULL) - { - /* If no namespace is specified, it means all. */ - if (!te->namespace) - return 0; - if (!(simple_string_list_member(&ropt->schemaNames, te->namespace))) - return 0; - } - - if (ropt->schemaExcludeNames.head != NULL && - te->namespace && - simple_string_list_member(&ropt->schemaExcludeNames, te->namespace)) + /* Ignore it if rejected by idWanted[] (cf. 
SortTocFromFile) */ + if (ropt->idWanted && !ropt->idWanted[te->dumpId - 1]) return 0; - if (ropt->selTypes) + /* + * Check options for selective dump/restore. + */ + if (strcmp(te->desc, "ACL") == 0 || + strcmp(te->desc, "COMMENT") == 0 || + strcmp(te->desc, "SECURITY LABEL") == 0) { - if (strcmp(te->desc, "TABLE") == 0 || - strcmp(te->desc, "TABLE DATA") == 0 || - strcmp(te->desc, "VIEW") == 0 || - strcmp(te->desc, "FOREIGN TABLE") == 0 || - strcmp(te->desc, "MATERIALIZED VIEW") == 0 || - strcmp(te->desc, "MATERIALIZED VIEW DATA") == 0 || - strcmp(te->desc, "SEQUENCE") == 0 || - strcmp(te->desc, "SEQUENCE SET") == 0) + /* Database properties react to createDB, not selectivity options. */ + if (strncmp(te->tag, "DATABASE ", 9) == 0) { - if (!ropt->selTable) - return 0; - if (ropt->tableNames.head != NULL && (!(simple_string_list_member(&ropt->tableNames, te->tag)))) + if (!ropt->createDB) return 0; } - else if (strcmp(te->desc, "INDEX") == 0) + else if (ropt->schemaNames.head != NULL || + ropt->schemaExcludeNames.head != NULL || + ropt->selTypes) { - if (!ropt->selIndex) - return 0; - if (ropt->indexNames.head != NULL && (!(simple_string_list_member(&ropt->indexNames, te->tag)))) + /* + * In a selective dump/restore, we want to restore these dependent + * TOC entry types only if their parent object is being restored. + * Without selectivity options, we let through everything in the + * archive. Note there may be such entries with no parent, eg + * non-default ACLs for built-in objects. + * + * This code depends on the parent having been marked already, + * which should be the case; if it isn't, perhaps due to + * SortTocFromFile rearrangement, skipping the dependent entry + * seems prudent anyway. + * + * Ideally we'd handle, eg, table CHECK constraints this way too. + * But it's hard to tell which of their dependencies is the one to + * consult. + */ + if (te->nDeps != 1 || + TocIDRequired(AH, te->dependencies[0]) == 0) return 0; } - else if (strcmp(te->desc, "FUNCTION") == 0 || - strcmp(te->desc, "PROCEDURE") == 0) + } + else + { + /* Apply selective-restore rules for standalone TOC entries. */ + if (ropt->schemaNames.head != NULL) { - if (!ropt->selFunction) + /* If no namespace is specified, it means all. 
*/ + if (!te->namespace) return 0; - if (ropt->functionNames.head != NULL && (!(simple_string_list_member(&ropt->functionNames, te->tag)))) + if (!simple_string_list_member(&ropt->schemaNames, te->namespace)) return 0; } - else if (strcmp(te->desc, "TRIGGER") == 0) + + if (ropt->schemaExcludeNames.head != NULL && + te->namespace && + simple_string_list_member(&ropt->schemaExcludeNames, te->namespace)) + return 0; + + if (ropt->selTypes) { - if (!ropt->selTrigger) - return 0; - if (ropt->triggerNames.head != NULL && (!(simple_string_list_member(&ropt->triggerNames, te->tag)))) + if (strcmp(te->desc, "TABLE") == 0 || + strcmp(te->desc, "TABLE DATA") == 0 || + strcmp(te->desc, "VIEW") == 0 || + strcmp(te->desc, "FOREIGN TABLE") == 0 || + strcmp(te->desc, "MATERIALIZED VIEW") == 0 || + strcmp(te->desc, "MATERIALIZED VIEW DATA") == 0 || + strcmp(te->desc, "SEQUENCE") == 0 || + strcmp(te->desc, "SEQUENCE SET") == 0) + { + if (!ropt->selTable) + return 0; + if (ropt->tableNames.head != NULL && + !simple_string_list_member(&ropt->tableNames, te->tag)) + return 0; + } + else if (strcmp(te->desc, "INDEX") == 0) + { + if (!ropt->selIndex) + return 0; + if (ropt->indexNames.head != NULL && + !simple_string_list_member(&ropt->indexNames, te->tag)) + return 0; + } + else if (strcmp(te->desc, "FUNCTION") == 0 || + strcmp(te->desc, "AGGREGATE") == 0 || + strcmp(te->desc, "PROCEDURE") == 0) + { + if (!ropt->selFunction) + return 0; + if (ropt->functionNames.head != NULL && + !simple_string_list_member(&ropt->functionNames, te->tag)) + return 0; + } + else if (strcmp(te->desc, "TRIGGER") == 0) + { + if (!ropt->selTrigger) + return 0; + if (ropt->triggerNames.head != NULL && + !simple_string_list_member(&ropt->triggerNames, te->tag)) + return 0; + } + else return 0; } - else - return 0; } /* - * Check if we had a dataDumper. Indicates if the entry is schema or data + * Determine whether the TOC entry contains schema and/or data components, + * and mask off inapplicable REQ bits. If it had a dataDumper, assume + * it's both schema and data. Otherwise it's probably schema-only, but + * there are exceptions. */ if (!te->hadDumper) { @@ -2952,6 +3008,10 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) res = res & ~REQ_DATA; } + /* If there's no definition command, there's no schema component */ + if (!te->defn || !te->defn[0]) + res = res & ~REQ_SCHEMA; + /* * Special case: type with tag; this is obsolete and we * always ignore it. @@ -2963,12 +3023,12 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) if (ropt->schemaOnly) { /* - * The sequence_data option overrides schema-only for SEQUENCE SET. + * The sequence_data option overrides schemaOnly for SEQUENCE SET. * - * In binary-upgrade mode, even with schema-only set, we do not mask - * out large objects. Only large object definitions, comments and - * other information should be generated in binary-upgrade mode (not - * the actual data). + * In binary-upgrade mode, even with schemaOnly set, we do not mask + * out large objects. (Only large object definitions, comments and + * other metadata should be generated in binary-upgrade mode, not the + * actual data, but that need not concern us here.) 
*/ if (!(ropt->sequence_data && strcmp(te->desc, "SEQUENCE SET") == 0) && !(ropt->binary_upgrade && @@ -2986,14 +3046,6 @@ _tocEntryRequired(TocEntry *te, teSection curSection, RestoreOptions *ropt) if (ropt->dataOnly) res = res & REQ_DATA; - /* Mask it if we don't have a schema contribution */ - if (!te->defn || strlen(te->defn) == 0) - res = res & ~REQ_SCHEMA; - - /* Finally, if there's a per-ID filter, limit based on that as well */ - if (ropt->idWanted && !ropt->idWanted[te->dumpId - 1]) - return 0; - return res; } diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index b534fb7b95..c34d990729 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -631,6 +631,13 @@ main(int argc, char **argv) compressLevel = 0; #endif + /* + * If emitting an archive format, we always want to emit a DATABASE item, + * in case --create is specified at pg_restore time. + */ + if (!plainText) + dopt.outputCreateDB = 1; + /* * On Windows we can only have at most MAXIMUM_WAIT_OBJECTS (= 64 usually) * parallel jobs because that's the maximum limit for the @@ -841,7 +848,7 @@ main(int argc, char **argv) dumpStdStrings(fout); /* The database items are always next, unless we don't want them at all */ - if (dopt.include_everything && !dopt.dataOnly) + if (dopt.outputCreateDB) dumpDatabase(fout); /* Now the rearrangeable objects. */ From bb415675d8ab6e776321a96f9c0e77c12fda96ea Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 25 Jan 2018 14:32:28 -0500 Subject: [PATCH 0897/1087] Add missing "static" markers. Per buildfarm. --- src/backend/executor/nodeModifyTable.c | 2 +- src/bin/pg_dump/pg_dump.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 828e1b0015..2a8ecbd830 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -1711,7 +1711,7 @@ ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate) * 1. For update-tuple-routing. * 2. For capturing tuples in transition tables. */ -void +static void ExecSetupChildParentMapForSubplan(ModifyTableState *mtstate) { ResultRelInfo *targetRelInfo = getTargetResultRelInfo(mtstate); diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index c34d990729..d047e4a49b 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -16431,7 +16431,7 @@ dumpIndex(Archive *fout, IndxInfo *indxinfo) * dumpIndexAttach * write out to fout a partitioned-index attachment clause */ -void +static void dumpIndexAttach(Archive *fout, IndexAttachInfo *attachinfo) { if (fout->dopt->dataOnly) From 1368e92e16a098338e39c8e540bdf9f6cf35ebf4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 25 Jan 2018 15:27:24 -0500 Subject: [PATCH 0898/1087] Support --no-comments in pg_dump, pg_dumpall, pg_restore. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit We have switches already to suppress other subsidiary object properties, such as ACLs, security labels, ownership, and tablespaces, so just on the grounds of symmetry we should allow suppressing comments as well. Also, commit 0d4e6ed30 added a positive reason to have this feature, i.e. to allow obtaining the old behavior of selective pg_restore should anyone desire that. 
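In rough terms, the new switch works by gating TOC entries at restore time. A minimal sketch of the mechanism, using the field and function names the patch below introduces:

    /* In _tocEntryRequired(): a COMMENT entry contributes nothing when
     * --no-comments is given, so report that nothing need be restored. */
    if (ropt->no_comments && strcmp(te->desc, "COMMENT") == 0)
        return 0;

On the dump side, the same flag makes dumpComment() and its siblings return early, so COMMENT ON commands are never emitted in the first place.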
Recent commits have removed the cases where pg_dump emitted comments on built-in objects that the restoring user might not have privileges to comment on, so the original primary motivation for this feature is gone, but it still seems at least somewhat useful in its own right. Robins Tharakan, reviewed by Fabrízio Mello Discussion: https://postgr.es/m/CAEP4nAx22Z4ch74oJGzr5RyyjcyUSbpiFLyeYXX8pehfou92ug@mail.gmail.com --- doc/src/sgml/ref/pg_dump.sgml | 9 +++++++++ doc/src/sgml/ref/pg_dumpall.sgml | 9 +++++++++ doc/src/sgml/ref/pg_restore.sgml | 9 +++++++++ src/bin/pg_dump/pg_backup.h | 2 ++ src/bin/pg_dump/pg_backup_archiver.c | 6 +++++- src/bin/pg_dump/pg_dump.c | 19 +++++++++++++++++-- src/bin/pg_dump/pg_dumpall.c | 9 +++++++-- src/bin/pg_dump/pg_restore.c | 4 ++++ 8 files changed, 62 insertions(+), 5 deletions(-) diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml index 11582dd1c8..50809b4844 100644 --- a/doc/src/sgml/ref/pg_dump.sgml +++ b/doc/src/sgml/ref/pg_dump.sgml @@ -804,6 +804,15 @@ PostgreSQL documentation + + + + + Do not dump comments. + + + + diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml index 4a639f2d41..5d6fe9b87d 100644 --- a/doc/src/sgml/ref/pg_dumpall.sgml +++ b/doc/src/sgml/ref/pg_dumpall.sgml @@ -342,6 +342,15 @@ PostgreSQL documentation + + + + + Do not dump comments. + + + + diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml index ee756159f6..345324bd27 100644 --- a/doc/src/sgml/ref/pg_restore.sgml +++ b/doc/src/sgml/ref/pg_restore.sgml @@ -575,6 +575,15 @@ + + + + + Do not restore comments. + + + + diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h index ce3100f09d..520cd095d3 100644 --- a/src/bin/pg_dump/pg_backup.h +++ b/src/bin/pg_dump/pg_backup.h @@ -74,6 +74,7 @@ typedef struct _restoreOptions int dump_inserts; int column_inserts; int if_exists; + int no_comments; /* Skip comments */ int no_publications; /* Skip publication entries */ int no_security_labels; /* Skip security label entries */ int no_subscriptions; /* Skip subscription entries */ @@ -146,6 +147,7 @@ typedef struct _dumpOptions int dump_inserts; int column_inserts; int if_exists; + int no_comments; int no_security_labels; int no_publications; int no_subscriptions; diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index 94c511c936..7c5e8c018b 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -169,9 +169,9 @@ dumpOptionsFromRestoreOptions(RestoreOptions *ropt) dopt->outputNoTablespaces = ropt->noTablespace; dopt->disable_triggers = ropt->disable_triggers; dopt->use_setsessauth = ropt->use_setsessauth; - dopt->disable_dollar_quoting = ropt->disable_dollar_quoting; dopt->dump_inserts = ropt->dump_inserts; + dopt->no_comments = ropt->no_comments; dopt->no_publications = ropt->no_publications; dopt->no_security_labels = ropt->no_security_labels; dopt->no_subscriptions = ropt->no_subscriptions; @@ -2841,6 +2841,10 @@ _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH) if (ropt->aclsSkip && _tocEntryIsACL(te)) return 0; + /* If it's a comment, maybe ignore it */ + if (ropt->no_comments && strcmp(te->desc, "COMMENT") == 0) + return 0; + /* If it's a publication, maybe ignore it */ if (ropt->no_publications && strcmp(te->desc, "PUBLICATION") == 0) return 0; diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index d047e4a49b..8ca83c06d6 100644 --- a/src/bin/pg_dump/pg_dump.c +++ 
b/src/bin/pg_dump/pg_dump.c @@ -359,6 +359,7 @@ main(int argc, char **argv) {"snapshot", required_argument, NULL, 6}, {"strict-names", no_argument, &strict_names, 1}, {"use-set-session-authorization", no_argument, &dopt.use_setsessauth, 1}, + {"no-comments", no_argument, &dopt.no_comments, 1}, {"no-publications", no_argument, &dopt.no_publications, 1}, {"no-security-labels", no_argument, &dopt.no_security_labels, 1}, {"no-synchronized-snapshots", no_argument, &dopt.no_synchronized_snapshots, 1}, @@ -877,6 +878,7 @@ main(int argc, char **argv) ropt->use_setsessauth = dopt.use_setsessauth; ropt->disable_dollar_quoting = dopt.disable_dollar_quoting; ropt->dump_inserts = dopt.dump_inserts; + ropt->no_comments = dopt.no_comments; ropt->no_publications = dopt.no_publications; ropt->no_security_labels = dopt.no_security_labels; ropt->no_subscriptions = dopt.no_subscriptions; @@ -967,6 +969,7 @@ help(const char *progname) printf(_(" --exclude-table-data=TABLE do NOT dump data for the named table(s)\n")); printf(_(" --if-exists use IF EXISTS when dropping objects\n")); printf(_(" --inserts dump data as INSERT commands, rather than COPY\n")); + printf(_(" --no-comments do not dump comments\n")); printf(_(" --no-publications do not dump publications\n")); printf(_(" --no-security-labels do not dump security label assignments\n")); printf(_(" --no-subscriptions do not dump subscriptions\n")); @@ -2780,7 +2783,7 @@ dumpDatabase(Archive *fout) */ char *comment = PQgetvalue(res, 0, PQfnumber(res, "description")); - if (comment && *comment) + if (comment && *comment && !dopt->no_comments) { resetPQExpBuffer(dbQry); @@ -2806,7 +2809,7 @@ dumpDatabase(Archive *fout) dbCatId, 0, dbDumpId); } - /* Dump shared security label. */ + /* Dump DB security label, if enabled */ if (!dopt->no_security_labels && fout->remoteVersion >= 90200) { PGresult *shres; @@ -9416,6 +9419,10 @@ dumpComment(Archive *fout, const char *target, CommentItem *comments; int ncomments; + /* do nothing, if --no-comments is supplied */ + if (dopt->no_comments) + return; + /* Comments are schema not data ... 
except blob comments are data */ if (strncmp(target, "LARGE OBJECT ", 13) != 0) { @@ -9483,6 +9490,10 @@ dumpTableComment(Archive *fout, TableInfo *tbinfo, PQExpBuffer query; PQExpBuffer target; + /* do nothing, if --no-comments is supplied */ + if (dopt->no_comments) + return; + /* Comments are SCHEMA not data */ if (dopt->dataOnly) return; @@ -11152,6 +11163,10 @@ dumpCompositeTypeColComments(Archive *fout, TypeInfo *tyinfo) int i_attname; int i_attnum; + /* do nothing, if --no-comments is supplied */ + if (fout->dopt->no_comments) + return; + query = createPQExpBuffer(); appendPQExpBuffer(query, diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c index 2fd5a025af..40ee5d1d8b 100644 --- a/src/bin/pg_dump/pg_dumpall.c +++ b/src/bin/pg_dump/pg_dumpall.c @@ -68,6 +68,7 @@ static int if_exists = 0; static int inserts = 0; static int no_tablespaces = 0; static int use_setsessauth = 0; +static int no_comments = 0; static int no_publications = 0; static int no_security_labels = 0; static int no_subscriptions = 0; @@ -127,6 +128,7 @@ main(int argc, char *argv[]) {"load-via-partition-root", no_argument, &load_via_partition_root, 1}, {"role", required_argument, NULL, 3}, {"use-set-session-authorization", no_argument, &use_setsessauth, 1}, + {"no-comments", no_argument, &no_comments, 1}, {"no-publications", no_argument, &no_publications, 1}, {"no-role-passwords", no_argument, &no_role_passwords, 1}, {"no-security-labels", no_argument, &no_security_labels, 1}, @@ -392,6 +394,8 @@ main(int argc, char *argv[]) appendPQExpBufferStr(pgdumpopts, " --load-via-partition-root"); if (use_setsessauth) appendPQExpBufferStr(pgdumpopts, " --use-set-session-authorization"); + if (no_comments) + appendPQExpBufferStr(pgdumpopts, " --no-comments"); if (no_publications) appendPQExpBufferStr(pgdumpopts, " --no-publications"); if (no_security_labels) @@ -606,6 +610,7 @@ help(void) printf(_(" --disable-triggers disable triggers during data-only restore\n")); printf(_(" --if-exists use IF EXISTS when dropping objects\n")); printf(_(" --inserts dump data as INSERT commands, rather than COPY\n")); + printf(_(" --no-comments do not dump comments\n")); printf(_(" --no-publications do not dump publications\n")); printf(_(" --no-role-passwords do not dump passwords for roles\n")); printf(_(" --no-security-labels do not dump security label assignments\n")); @@ -914,7 +919,7 @@ dumpRoles(PGconn *conn) appendPQExpBufferStr(buf, ";\n"); - if (!PQgetisnull(res, i, i_rolcomment)) + if (!no_comments && !PQgetisnull(res, i, i_rolcomment)) { appendPQExpBuffer(buf, "COMMENT ON ROLE %s IS ", fmtId(rolename)); appendStringLiteralConn(buf, PQgetvalue(res, i, i_rolcomment), conn); @@ -1220,7 +1225,7 @@ dumpTablespaces(PGconn *conn) exit_nicely(1); } - if (spccomment && strlen(spccomment)) + if (!no_comments && spccomment && spccomment[0] != '\0') { appendPQExpBuffer(buf, "COMMENT ON TABLESPACE %s IS ", fspcname); appendStringLiteralConn(buf, spccomment, conn); diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c index 860a211a3c..edc14f176f 100644 --- a/src/bin/pg_dump/pg_restore.c +++ b/src/bin/pg_dump/pg_restore.c @@ -71,6 +71,7 @@ main(int argc, char **argv) static int no_data_for_failed_tables = 0; static int outputNoTablespaces = 0; static int use_setsessauth = 0; + static int no_comments = 0; static int no_publications = 0; static int no_security_labels = 0; static int no_subscriptions = 0; @@ -119,6 +120,7 @@ main(int argc, char **argv) {"section", required_argument, NULL, 3}, {"strict-names", 
no_argument, &strict_names, 1}, {"use-set-session-authorization", no_argument, &use_setsessauth, 1}, + {"no-comments", no_argument, &no_comments, 1}, {"no-publications", no_argument, &no_publications, 1}, {"no-security-labels", no_argument, &no_security_labels, 1}, {"no-subscriptions", no_argument, &no_subscriptions, 1}, @@ -358,6 +360,7 @@ main(int argc, char **argv) opts->noDataForFailedTables = no_data_for_failed_tables; opts->noTablespace = outputNoTablespaces; opts->use_setsessauth = use_setsessauth; + opts->no_comments = no_comments; opts->no_publications = no_publications; opts->no_security_labels = no_security_labels; opts->no_subscriptions = no_subscriptions; @@ -482,6 +485,7 @@ usage(const char *progname) printf(_(" --if-exists use IF EXISTS when dropping objects\n")); printf(_(" --no-data-for-failed-tables do not restore data of tables that could not be\n" " created\n")); + printf(_(" --no-comments do not restore comments\n")); printf(_(" --no-publications do not restore publications\n")); printf(_(" --no-security-labels do not restore security labels\n")); printf(_(" --no-subscriptions do not restore subscriptions\n")); From 6588a43bcacca872fafba10363d346b806964d90 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Thu, 25 Jan 2018 20:13:25 -0500 Subject: [PATCH 0899/1087] Fix C comment typo Reported-by: Masahiko Sawada Discussion: https://postgr.es/m/CAD21AoBgnHy2YKAUuB6iVG4ibvLYepHr+RDRkr1arqWwc1AHCw@mail.gmail.com Author: Masahiko Sawada --- contrib/pg_prewarm/autoprewarm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c index 15e4a65ea6..3a4cc5b172 100644 --- a/contrib/pg_prewarm/autoprewarm.c +++ b/contrib/pg_prewarm/autoprewarm.c @@ -368,7 +368,7 @@ apw_load_buffers(void) if (current_db != blkinfo[i].database) { /* - * Combine BlockRecordInfos for global objects withs those of + * Combine BlockRecordInfos for global objects with those of * the database. */ if (current_db != InvalidOid) From a6ef00b5c3c4a287e03b634d328529b69cc1e770 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 23 Jan 2018 11:19:12 -0500 Subject: [PATCH 0900/1087] Remove byte-masking macros for Datum conversion macros As the comment there stated, these were needed for old-style user-defined functions, but since we removed support for those, we don't need this anymore. Reviewed-by: Michael Paquier --- src/include/postgres.h | 90 ++++++++++++++++-------------------- 1 file changed, 34 insertions(+), 56 deletions(-) diff --git a/src/include/postgres.h b/src/include/postgres.h index b69f88aa5b..3dc62801aa 100644 --- a/src/include/postgres.h +++ b/src/include/postgres.h @@ -24,7 +24,7 @@ * section description * ------- ------------------------------------------------ * 1) variable-length datatypes (TOAST support) - * 2) datum type + support macros + * 2) Datum type + support macros * 3) exception handling backend support * * NOTES @@ -349,60 +349,38 @@ typedef struct /* ---------------------------------------------------------------- - * Section 2: datum type + support macros + * Section 2: Datum type + support macros * ---------------------------------------------------------------- */ /* - * Port Notes: - * Postgres makes the following assumptions about datatype sizes: + * A Datum contains either a value of a pass-by-value type or a pointer to a + * value of a pass-by-reference type. 
Therefore, we require: * - * sizeof(Datum) == sizeof(void *) == 4 or 8 - * sizeof(char) == 1 - * sizeof(short) == 2 + * sizeof(Datum) == sizeof(void *) == 4 or 8 * - * When a type narrower than Datum is stored in a Datum, we place it in the - * low-order bits and are careful that the DatumGetXXX macro for it discards - * the unused high-order bits (as opposed to, say, assuming they are zero). - * This is needed to support old-style user-defined functions, since depending - * on architecture and compiler, the return value of a function returning char - * or short may contain garbage when called as if it returned Datum. + * The macros below and the analogous macros for other types should be used to + * convert between a Datum and the appropriate C type. */ typedef uintptr_t Datum; #define SIZEOF_DATUM SIZEOF_VOID_P -typedef Datum *DatumPtr; - -#define GET_1_BYTE(datum) (((Datum) (datum)) & 0x000000ff) -#define GET_2_BYTES(datum) (((Datum) (datum)) & 0x0000ffff) -#define GET_4_BYTES(datum) (((Datum) (datum)) & 0xffffffff) -#if SIZEOF_DATUM == 8 -#define GET_8_BYTES(datum) ((Datum) (datum)) -#endif -#define SET_1_BYTE(value) (((Datum) (value)) & 0x000000ff) -#define SET_2_BYTES(value) (((Datum) (value)) & 0x0000ffff) -#define SET_4_BYTES(value) (((Datum) (value)) & 0xffffffff) -#if SIZEOF_DATUM == 8 -#define SET_8_BYTES(value) ((Datum) (value)) -#endif - /* * DatumGetBool * Returns boolean value of a datum. * - * Note: any nonzero value will be considered TRUE, but we ignore bits to - * the left of the width of bool, per comment above. + * Note: any nonzero value will be considered true. */ -#define DatumGetBool(X) ((bool) (GET_1_BYTE(X) != 0)) +#define DatumGetBool(X) ((bool) ((X) != 0)) /* * BoolGetDatum * Returns datum representation for a boolean. * - * Note: any nonzero value will be considered TRUE. + * Note: any nonzero value will be considered true. */ #define BoolGetDatum(X) ((Datum) ((X) ? 1 : 0)) @@ -412,140 +390,140 @@ typedef Datum *DatumPtr; * Returns character value of a datum. */ -#define DatumGetChar(X) ((char) GET_1_BYTE(X)) +#define DatumGetChar(X) ((char) (X)) /* * CharGetDatum * Returns datum representation for a character. */ -#define CharGetDatum(X) ((Datum) SET_1_BYTE(X)) +#define CharGetDatum(X) ((Datum) (X)) /* * Int8GetDatum * Returns datum representation for an 8-bit integer. */ -#define Int8GetDatum(X) ((Datum) SET_1_BYTE(X)) +#define Int8GetDatum(X) ((Datum) (X)) /* * DatumGetUInt8 * Returns 8-bit unsigned integer value of a datum. */ -#define DatumGetUInt8(X) ((uint8) GET_1_BYTE(X)) +#define DatumGetUInt8(X) ((uint8) (X)) /* * UInt8GetDatum * Returns datum representation for an 8-bit unsigned integer. */ -#define UInt8GetDatum(X) ((Datum) SET_1_BYTE(X)) +#define UInt8GetDatum(X) ((Datum) (X)) /* * DatumGetInt16 * Returns 16-bit integer value of a datum. */ -#define DatumGetInt16(X) ((int16) GET_2_BYTES(X)) +#define DatumGetInt16(X) ((int16) (X)) /* * Int16GetDatum * Returns datum representation for a 16-bit integer. */ -#define Int16GetDatum(X) ((Datum) SET_2_BYTES(X)) +#define Int16GetDatum(X) ((Datum) (X)) /* * DatumGetUInt16 * Returns 16-bit unsigned integer value of a datum. */ -#define DatumGetUInt16(X) ((uint16) GET_2_BYTES(X)) +#define DatumGetUInt16(X) ((uint16) (X)) /* * UInt16GetDatum * Returns datum representation for a 16-bit unsigned integer. */ -#define UInt16GetDatum(X) ((Datum) SET_2_BYTES(X)) +#define UInt16GetDatum(X) ((Datum) (X)) /* * DatumGetInt32 * Returns 32-bit integer value of a datum. 
*/ -#define DatumGetInt32(X) ((int32) GET_4_BYTES(X)) +#define DatumGetInt32(X) ((int32) (X)) /* * Int32GetDatum * Returns datum representation for a 32-bit integer. */ -#define Int32GetDatum(X) ((Datum) SET_4_BYTES(X)) +#define Int32GetDatum(X) ((Datum) (X)) /* * DatumGetUInt32 * Returns 32-bit unsigned integer value of a datum. */ -#define DatumGetUInt32(X) ((uint32) GET_4_BYTES(X)) +#define DatumGetUInt32(X) ((uint32) (X)) /* * UInt32GetDatum * Returns datum representation for a 32-bit unsigned integer. */ -#define UInt32GetDatum(X) ((Datum) SET_4_BYTES(X)) +#define UInt32GetDatum(X) ((Datum) (X)) /* * DatumGetObjectId * Returns object identifier value of a datum. */ -#define DatumGetObjectId(X) ((Oid) GET_4_BYTES(X)) +#define DatumGetObjectId(X) ((Oid) (X)) /* * ObjectIdGetDatum * Returns datum representation for an object identifier. */ -#define ObjectIdGetDatum(X) ((Datum) SET_4_BYTES(X)) +#define ObjectIdGetDatum(X) ((Datum) (X)) /* * DatumGetTransactionId * Returns transaction identifier value of a datum. */ -#define DatumGetTransactionId(X) ((TransactionId) GET_4_BYTES(X)) +#define DatumGetTransactionId(X) ((TransactionId) (X)) /* * TransactionIdGetDatum * Returns datum representation for a transaction identifier. */ -#define TransactionIdGetDatum(X) ((Datum) SET_4_BYTES((X))) +#define TransactionIdGetDatum(X) ((Datum) (X)) /* * MultiXactIdGetDatum * Returns datum representation for a multixact identifier. */ -#define MultiXactIdGetDatum(X) ((Datum) SET_4_BYTES((X))) +#define MultiXactIdGetDatum(X) ((Datum) (X)) /* * DatumGetCommandId * Returns command identifier value of a datum. */ -#define DatumGetCommandId(X) ((CommandId) GET_4_BYTES(X)) +#define DatumGetCommandId(X) ((CommandId) (X)) /* * CommandIdGetDatum * Returns datum representation for a command identifier. */ -#define CommandIdGetDatum(X) ((Datum) SET_4_BYTES(X)) +#define CommandIdGetDatum(X) ((Datum) (X)) /* * DatumGetPointer @@ -608,7 +586,7 @@ typedef Datum *DatumPtr; */ #ifdef USE_FLOAT8_BYVAL -#define DatumGetInt64(X) ((int64) GET_8_BYTES(X)) +#define DatumGetInt64(X) ((int64) (X)) #else #define DatumGetInt64(X) (* ((int64 *) DatumGetPointer(X))) #endif @@ -622,7 +600,7 @@ typedef Datum *DatumPtr; */ #ifdef USE_FLOAT8_BYVAL -#define Int64GetDatum(X) ((Datum) SET_8_BYTES(X)) +#define Int64GetDatum(X) ((Datum) (X)) #else extern Datum Int64GetDatum(int64 X); #endif @@ -635,7 +613,7 @@ extern Datum Int64GetDatum(int64 X); */ #ifdef USE_FLOAT8_BYVAL -#define DatumGetUInt64(X) ((uint64) GET_8_BYTES(X)) +#define DatumGetUInt64(X) ((uint64) (X)) #else #define DatumGetUInt64(X) (* ((uint64 *) DatumGetPointer(X))) #endif @@ -649,7 +627,7 @@ extern Datum Int64GetDatum(int64 X); */ #ifdef USE_FLOAT8_BYVAL -#define UInt64GetDatum(X) ((Datum) SET_8_BYTES(X)) +#define UInt64GetDatum(X) ((Datum) (X)) #else #define UInt64GetDatum(X) Int64GetDatum((int64) (X)) #endif From c1869542b3a4da4b12cace2253ef177da761c00d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 25 Jan 2018 08:58:00 -0500 Subject: [PATCH 0901/1087] Use abstracted SSL API in server connection log messages The existing "connection authorized" server log messages used OpenSSL API calls directly, even though similar abstracted API calls exist. Change to use the latter instead. Change the function prototype for the functions that return the TLS version and the cipher to return const char * directly instead of copying into a buffer. That makes them slightly easier to use. Add bits= to the message. 
psql shows that, so we might as well show the same information on the client and server. Reviewed-by: Daniel Gustafsson Reviewed-by: Michael Paquier --- src/backend/libpq/be-secure-openssl.c | 16 ++++++++-------- src/backend/postmaster/pgstat.c | 4 ++-- src/backend/utils/init/postinit.c | 22 ++++++++++++++-------- src/include/libpq/libpq-be.h | 4 ++-- 4 files changed, 26 insertions(+), 20 deletions(-) diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c index 02601da6c8..e1ddfb3c16 100644 --- a/src/backend/libpq/be-secure-openssl.c +++ b/src/backend/libpq/be-secure-openssl.c @@ -1047,22 +1047,22 @@ be_tls_get_compression(Port *port) return false; } -void -be_tls_get_version(Port *port, char *ptr, size_t len) +const char * +be_tls_get_version(Port *port) { if (port->ssl) - strlcpy(ptr, SSL_get_version(port->ssl), len); + return SSL_get_version(port->ssl); else - ptr[0] = '\0'; + return NULL; } -void -be_tls_get_cipher(Port *port, char *ptr, size_t len) +const char * +be_tls_get_cipher(Port *port) { if (port->ssl) - strlcpy(ptr, SSL_get_cipher(port->ssl), len); + return SSL_get_cipher(port->ssl); else - ptr[0] = '\0'; + return NULL; } void diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index d13011454c..605b1832be 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -2909,8 +2909,8 @@ pgstat_bestart(void) beentry->st_ssl = true; beentry->st_sslstatus->ssl_bits = be_tls_get_cipher_bits(MyProcPort); beentry->st_sslstatus->ssl_compression = be_tls_get_compression(MyProcPort); - be_tls_get_version(MyProcPort, beentry->st_sslstatus->ssl_version, NAMEDATALEN); - be_tls_get_cipher(MyProcPort, beentry->st_sslstatus->ssl_cipher, NAMEDATALEN); + strlcpy(beentry->st_sslstatus->ssl_version, be_tls_get_version(MyProcPort), NAMEDATALEN); + strlcpy(beentry->st_sslstatus->ssl_cipher, be_tls_get_cipher(MyProcPort), NAMEDATALEN); be_tls_get_peerdn_name(MyProcPort, beentry->st_sslstatus->ssl_clientdn, NAMEDATALEN); } else diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c index f9b330998d..484628987f 100644 --- a/src/backend/utils/init/postinit.c +++ b/src/backend/utils/init/postinit.c @@ -246,12 +246,15 @@ PerformAuthentication(Port *port) { if (am_walsender) { -#ifdef USE_OPENSSL +#ifdef USE_SSL if (port->ssl_in_use) ereport(LOG, - (errmsg("replication connection authorized: user=%s SSL enabled (protocol=%s, cipher=%s, compression=%s)", - port->user_name, SSL_get_version(port->ssl), SSL_get_cipher(port->ssl), - SSL_get_current_compression(port->ssl) ? _("on") : _("off")))); + (errmsg("replication connection authorized: user=%s SSL enabled (protocol=%s, cipher=%s, bits=%d, compression=%s)", + port->user_name, + be_tls_get_version(port), + be_tls_get_cipher(port), + be_tls_get_cipher_bits(port), + be_tls_get_compression(port) ? _("on") : _("off")))); else #endif ereport(LOG, @@ -260,12 +263,15 @@ PerformAuthentication(Port *port) } else { -#ifdef USE_OPENSSL +#ifdef USE_SSL if (port->ssl_in_use) ereport(LOG, - (errmsg("connection authorized: user=%s database=%s SSL enabled (protocol=%s, cipher=%s, compression=%s)", - port->user_name, port->database_name, SSL_get_version(port->ssl), SSL_get_cipher(port->ssl), - SSL_get_current_compression(port->ssl) ? 
_("on") : _("off")))); + (errmsg("connection authorized: user=%s database=%s SSL enabled (protocol=%s, cipher=%s, bits=%d, compression=%s)", + port->user_name, port->database_name, + be_tls_get_version(port), + be_tls_get_cipher(port), + be_tls_get_cipher_bits(port), + be_tls_get_compression(port) ? _("on") : _("off")))); else #endif ereport(LOG, diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h index 584f794b9e..7698cd1f88 100644 --- a/src/include/libpq/libpq-be.h +++ b/src/include/libpq/libpq-be.h @@ -256,8 +256,8 @@ extern ssize_t be_tls_write(Port *port, void *ptr, size_t len, int *waitfor); */ extern int be_tls_get_cipher_bits(Port *port); extern bool be_tls_get_compression(Port *port); -extern void be_tls_get_version(Port *port, char *ptr, size_t len); -extern void be_tls_get_cipher(Port *port, char *ptr, size_t len); +extern const char *be_tls_get_version(Port *port); +extern const char *be_tls_get_cipher(Port *port); extern void be_tls_get_peerdn_name(Port *port, char *ptr, size_t len); /* From b0313f9cc8f54d6a5c12f8987c9b6afa0a5bbced Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 26 Jan 2018 09:51:15 -0500 Subject: [PATCH 0902/1087] pageinspect: Fix use of wrong memory context by hash_page_items. This can cause it to produce incorrect output. Report and patch by Masahiko Sawada. Discussion: http://postgr.es/m/CAD21AoBc5Asx7pXdUWu6NqU_g=Ysn95EGL9SMeYhLLduYoO_OA@mail.gmail.com --- contrib/pageinspect/hashfuncs.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/pageinspect/hashfuncs.c b/contrib/pageinspect/hashfuncs.c index 3d0e3f9757..99b61b8669 100644 --- a/contrib/pageinspect/hashfuncs.c +++ b/contrib/pageinspect/hashfuncs.c @@ -313,10 +313,10 @@ hash_page_items(PG_FUNCTION_ARGS) fctx = SRF_FIRSTCALL_INIT(); - page = verify_hash_page(raw_page, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE); - mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx); + page = verify_hash_page(raw_page, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE); + uargs = palloc(sizeof(struct user_args)); uargs->page = page; From 4971d2a32209118ebbdc6611341b89901e340902 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 26 Jan 2018 12:25:44 -0500 Subject: [PATCH 0903/1087] Remove the obsolete WITH clause of CREATE FUNCTION. This clause was superseded by SQL-standard syntax back in 7.3. We've kept it around for backwards-compatibility purposes ever since; but 15 years seems like long enough for that, especially seeing that there are undocumented weirdnesses in how it interacts with the SQL-standard syntax for specifying the same options. Michael Paquier, per an observation by Daniel Gustafsson; some small cosmetic adjustments to nearby code by me. Discussion: https://postgr.es/m/20180115022748.GB1724@paquier.xyz --- doc/src/sgml/ref/create_function.sgml | 36 --------- src/backend/commands/functioncmds.c | 109 +++++++------------------- src/backend/nodes/copyfuncs.c | 5 +- src/backend/nodes/equalfuncs.c | 3 +- src/backend/parser/gram.y | 14 ++-- src/include/nodes/parsenodes.h | 3 +- 6 files changed, 38 insertions(+), 132 deletions(-) diff --git a/doc/src/sgml/ref/create_function.sgml b/doc/src/sgml/ref/create_function.sgml index fd229d1193..c0adb8cf1e 100644 --- a/doc/src/sgml/ref/create_function.sgml +++ b/doc/src/sgml/ref/create_function.sgml @@ -37,7 +37,6 @@ CREATE [ OR REPLACE ] FUNCTION | AS 'definition' | AS 'obj_file', 'link_symbol' } ... - [ WITH ( attribute [, ...] 
) ] @@ -560,41 +559,6 @@ CREATE [ OR REPLACE ] FUNCTION - - attribute - - - - The historical way to specify optional pieces of information - about the function. The following attributes can appear here: - - - - isStrict - - - Equivalent to STRICT or RETURNS NULL ON NULL INPUT. - - - - - - isCachable - - isCachable is an obsolete equivalent of - IMMUTABLE; it's still accepted for - backwards-compatibility reasons. - - - - - - - Attribute names are not case-sensitive. - - - - diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c index df87dfeb54..a483714766 100644 --- a/src/backend/commands/functioncmds.c +++ b/src/backend/commands/functioncmds.c @@ -637,21 +637,21 @@ update_proconfig_value(ArrayType *a, List *set_items) * attributes. */ static void -compute_attributes_sql_style(ParseState *pstate, - bool is_procedure, - List *options, - List **as, - char **language, - Node **transform, - bool *windowfunc_p, - char *volatility_p, - bool *strict_p, - bool *security_definer, - bool *leakproof_p, - ArrayType **proconfig, - float4 *procost, - float4 *prorows, - char *parallel_p) +compute_function_attributes(ParseState *pstate, + bool is_procedure, + List *options, + List **as, + char **language, + Node **transform, + bool *windowfunc_p, + char *volatility_p, + bool *strict_p, + bool *security_definer, + bool *leakproof_p, + ArrayType **proconfig, + float4 *procost, + float4 *prorows, + char *parallel_p) { ListCell *option; DefElem *as_item = NULL; @@ -789,59 +789,6 @@ compute_attributes_sql_style(ParseState *pstate, } -/*------------- - * Interpret the parameters *parameters and return their contents via - * *isStrict_p and *volatility_p. - * - * These parameters supply optional information about a function. - * All have defaults if not specified. Parameters: - * - * * isStrict means the function should not be called when any NULL - * inputs are present; instead a NULL result value should be assumed. - * - * * volatility tells the optimizer whether the function's result can - * be assumed to be repeatable over multiple evaluations. - *------------ - */ -static void -compute_attributes_with_style(ParseState *pstate, bool is_procedure, List *parameters, bool *isStrict_p, char *volatility_p) -{ - ListCell *pl; - - foreach(pl, parameters) - { - DefElem *param = (DefElem *) lfirst(pl); - - if (pg_strcasecmp(param->defname, "isstrict") == 0) - { - if (is_procedure) - ereport(ERROR, - (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), - errmsg("invalid attribute in procedure definition"), - parser_errposition(pstate, param->location))); - *isStrict_p = defGetBoolean(param); - } - else if (pg_strcasecmp(param->defname, "iscachable") == 0) - { - /* obsolete spelling of isImmutable */ - if (is_procedure) - ereport(ERROR, - (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), - errmsg("invalid attribute in procedure definition"), - parser_errposition(pstate, param->location))); - if (defGetBoolean(param)) - *volatility_p = PROVOLATILE_IMMUTABLE; - } - else - ereport(WARNING, - (errcode(ERRCODE_SYNTAX_ERROR), - errmsg("unrecognized function attribute \"%s\" ignored", - param->defname), - parser_errposition(pstate, param->location))); - } -} - - /* * For a dynamically linked C language object, the form of the clause is * @@ -909,7 +856,7 @@ interpret_AS_clause(Oid languageOid, const char *languageName, /* * CreateFunction - * Execute a CREATE FUNCTION utility statement. + * Execute a CREATE FUNCTION (or CREATE PROCEDURE) utility statement. 
*/ ObjectAddress CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt) @@ -957,7 +904,7 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt) aclcheck_error(aclresult, OBJECT_SCHEMA, get_namespace_name(namespaceId)); - /* default attributes */ + /* Set default attributes */ isWindowFunc = false; isStrict = false; security = false; @@ -968,14 +915,14 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt) prorows = -1; /* indicates not set */ parallel = PROPARALLEL_UNSAFE; - /* override attributes from explicit list */ - compute_attributes_sql_style(pstate, - stmt->is_procedure, - stmt->options, - &as_clause, &language, &transformDefElem, - &isWindowFunc, &volatility, - &isStrict, &security, &isLeakProof, - &proconfig, &procost, &prorows, ¶llel); + /* Extract non-default attributes from stmt->options list */ + compute_function_attributes(pstate, + stmt->is_procedure, + stmt->options, + &as_clause, &language, &transformDefElem, + &isWindowFunc, &volatility, + &isStrict, &security, &isLeakProof, + &proconfig, &procost, &prorows, ¶llel); /* Look up the language and validate permissions */ languageTuple = SearchSysCache1(LANGNAME, PointerGetDatum(language)); @@ -1107,8 +1054,6 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt) trftypes = NULL; } - compute_attributes_with_style(pstate, stmt->is_procedure, stmt->withClause, &isStrict, &volatility); - interpret_AS_clause(languageOid, language, funcname, as_clause, &prosrc_str, &probin_str); @@ -2269,7 +2214,7 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) FuncExpr *fexpr; int nargs; int i; - AclResult aclresult; + AclResult aclresult; FmgrInfo flinfo; FunctionCallInfoData fcinfo; CallContext *callcontext; @@ -2329,7 +2274,7 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) InitFunctionCallInfoData(fcinfo, &flinfo, nargs, fexpr->inputcollid, (Node *) callcontext, NULL); i = 0; - foreach (lc, fexpr->args) + foreach(lc, fexpr->args) { EState *estate; ExprState *exprstate; diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index e5d2de5330..fd3001c493 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -3217,7 +3217,7 @@ _copyClosePortalStmt(const ClosePortalStmt *from) static CallStmt * _copyCallStmt(const CallStmt *from) { - CallStmt *newnode = makeNode(CallStmt); + CallStmt *newnode = makeNode(CallStmt); COPY_NODE_FIELD(funccall); @@ -3422,13 +3422,12 @@ _copyCreateFunctionStmt(const CreateFunctionStmt *from) { CreateFunctionStmt *newnode = makeNode(CreateFunctionStmt); + COPY_SCALAR_FIELD(is_procedure); COPY_SCALAR_FIELD(replace); COPY_NODE_FIELD(funcname); COPY_NODE_FIELD(parameters); COPY_NODE_FIELD(returnType); - COPY_SCALAR_FIELD(is_procedure); COPY_NODE_FIELD(options); - COPY_NODE_FIELD(withClause); return newnode; } diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 785dc54d37..7d2aa1a2d3 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -1370,13 +1370,12 @@ _equalCreateStatsStmt(const CreateStatsStmt *a, const CreateStatsStmt *b) static bool _equalCreateFunctionStmt(const CreateFunctionStmt *a, const CreateFunctionStmt *b) { + COMPARE_SCALAR_FIELD(is_procedure); COMPARE_SCALAR_FIELD(replace); COMPARE_NODE_FIELD(funcname); COMPARE_NODE_FIELD(parameters); COMPARE_NODE_FIELD(returnType); - COMPARE_SCALAR_FIELD(is_procedure); COMPARE_NODE_FIELD(options); - COMPARE_NODE_FIELD(withClause); return true; } diff --git a/src/backend/parser/gram.y 
b/src/backend/parser/gram.y index 459a227e57..5329432f25 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -7506,51 +7506,51 @@ opt_nulls_order: NULLS_LA FIRST_P { $$ = SORTBY_NULLS_FIRST; } CreateFunctionStmt: CREATE opt_or_replace FUNCTION func_name func_args_with_defaults - RETURNS func_return createfunc_opt_list opt_definition + RETURNS func_return createfunc_opt_list { CreateFunctionStmt *n = makeNode(CreateFunctionStmt); + n->is_procedure = false; n->replace = $2; n->funcname = $4; n->parameters = $5; n->returnType = $7; n->options = $8; - n->withClause = $9; $$ = (Node *)n; } | CREATE opt_or_replace FUNCTION func_name func_args_with_defaults - RETURNS TABLE '(' table_func_column_list ')' createfunc_opt_list opt_definition + RETURNS TABLE '(' table_func_column_list ')' createfunc_opt_list { CreateFunctionStmt *n = makeNode(CreateFunctionStmt); + n->is_procedure = false; n->replace = $2; n->funcname = $4; n->parameters = mergeTableFuncParameters($5, $9); n->returnType = TableFuncTypeName($9); n->returnType->location = @7; n->options = $11; - n->withClause = $12; $$ = (Node *)n; } | CREATE opt_or_replace FUNCTION func_name func_args_with_defaults - createfunc_opt_list opt_definition + createfunc_opt_list { CreateFunctionStmt *n = makeNode(CreateFunctionStmt); + n->is_procedure = false; n->replace = $2; n->funcname = $4; n->parameters = $5; n->returnType = NULL; n->options = $6; - n->withClause = $7; $$ = (Node *)n; } | CREATE opt_or_replace PROCEDURE func_name func_args_with_defaults createfunc_opt_list { CreateFunctionStmt *n = makeNode(CreateFunctionStmt); + n->is_procedure = true; n->replace = $2; n->funcname = $4; n->parameters = $5; n->returnType = NULL; - n->is_procedure = true; n->options = $6; $$ = (Node *)n; } diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index bbacbe144c..76a73b2a37 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -2735,13 +2735,12 @@ typedef struct CreateStatsStmt typedef struct CreateFunctionStmt { NodeTag type; + bool is_procedure; /* it's really CREATE PROCEDURE */ bool replace; /* T => replace if already exists */ List *funcname; /* qualified name of function to create */ List *parameters; /* a list of FunctionParameter */ TypeName *returnType; /* the return type */ - bool is_procedure; List *options; /* a list of DefElem */ - List *withClause; /* a list of DefElem */ } CreateFunctionStmt; typedef enum FunctionParameterMode From 9fd8b7d632570af90a0b374816f604f59bba11ad Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 26 Jan 2018 15:03:12 -0500 Subject: [PATCH 0904/1087] Factor some code out of create_grouping_paths. This is preparatory refactoring to prepare the way for partition-wise aggregate, which will reuse the new subroutines for child grouping rels. It also does not seem like a bad idea on general principle, as the function was getting pretty long. Jeevan Chalke. The larger patch series of which this patch is a part was reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi, Ashutosh Bapat, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal Legrand, and me. Some cosmetic changes by me. 
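The net effect is easiest to see in outline. After this refactoring, the body of create_grouping_paths() delegates to the new subroutines roughly as follows (a condensed view of the code in the diff below, not a complete listing):

    /* Decide once whether a partial-aggregation strategy is viable at all */
    try_parallel_aggregation = can_parallel_agg(root, input_rel, grouped_rel,
                                                agg_costs);

    /* Build partially aggregated paths first, if viable ... */
    if (try_parallel_aggregation)
        add_partial_paths_to_grouping_rel(root, input_rel, grouped_rel, target,
                                          partial_grouping_target,
                                          &agg_partial_costs, &agg_final_costs,
                                          gd, can_sort, can_hash,
                                          (List *) parse->havingQual);

    /* ... then the final grouping paths, which can also build atop them */
    add_paths_to_grouping_rel(root, input_rel, grouped_rel, target,
                              partial_grouping_target, agg_costs,
                              &agg_final_costs, gd, can_sort, can_hash,
                              dNumGroups, (List *) parse->havingQual);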
Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com --- src/backend/optimizer/plan/planner.c | 1460 ++++++++++++++------------ 1 file changed, 772 insertions(+), 688 deletions(-) diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 53870432ea..2a4e22b6c8 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -185,6 +185,26 @@ static PathTarget *make_sort_input_target(PlannerInfo *root, bool *have_postponed_srfs); static void adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel, List *targets, List *targets_contain_srfs); +static void add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel, + RelOptInfo *grouped_rel, PathTarget *target, + PathTarget *partial_grouping_target, + const AggClauseCosts *agg_costs, + const AggClauseCosts *agg_final_costs, + grouping_sets_data *gd, bool can_sort, bool can_hash, + double dNumGroups, List *havingQual); +static void add_partial_paths_to_grouping_rel(PlannerInfo *root, + RelOptInfo *input_rel, + RelOptInfo *grouped_rel, + PathTarget *target, + PathTarget *partial_grouping_target, + AggClauseCosts *agg_partial_costs, + AggClauseCosts *agg_final_costs, + grouping_sets_data *gd, + bool can_sort, + bool can_hash, + List *havingQual); +static bool can_parallel_agg(PlannerInfo *root, RelOptInfo *input_rel, + RelOptInfo *grouped_rel, const AggClauseCosts *agg_costs); /***************************************************************************** @@ -3610,15 +3630,11 @@ create_grouping_paths(PlannerInfo *root, PathTarget *partial_grouping_target = NULL; AggClauseCosts agg_partial_costs; /* parallel only */ AggClauseCosts agg_final_costs; /* parallel only */ - Size hashaggtablesize; double dNumGroups; - double dNumPartialGroups = 0; bool can_hash; bool can_sort; bool try_parallel_aggregation; - ListCell *lc; - /* For now, do all work in the (GROUP_AGG, NULL) upperrel */ grouped_rel = fetch_upper_rel(root, UPPERREL_GROUP_AGG, NULL); @@ -3754,44 +3770,11 @@ create_grouping_paths(PlannerInfo *root, (gd ? gd->any_hashable : grouping_is_hashable(parse->groupClause))); /* - * If grouped_rel->consider_parallel is true, then paths that we generate - * for this grouping relation could be run inside of a worker, but that - * doesn't mean we can actually use the PartialAggregate/FinalizeAggregate - * execution strategy. Figure that out. + * Figure out whether a PartialAggregate/Finalize Aggregate execution + * strategy is viable. */ - if (!grouped_rel->consider_parallel) - { - /* Not even parallel-safe. */ - try_parallel_aggregation = false; - } - else if (input_rel->partial_pathlist == NIL) - { - /* Nothing to use as input for partial aggregate. */ - try_parallel_aggregation = false; - } - else if (!parse->hasAggs && parse->groupClause == NIL) - { - /* - * We don't know how to do parallel aggregation unless we have either - * some aggregates or a grouping clause. - */ - try_parallel_aggregation = false; - } - else if (parse->groupingSets) - { - /* We don't know how to do grouping sets in parallel. */ - try_parallel_aggregation = false; - } - else if (agg_costs->hasNonPartial || agg_costs->hasNonSerial) - { - /* Insufficient support for partial mode. */ - try_parallel_aggregation = false; - } - else - { - /* Everything looks good. 
*/ - try_parallel_aggregation = true; - } + try_parallel_aggregation = can_parallel_agg(root, input_rel, grouped_rel, + agg_costs); /* * Before generating paths for grouped_rel, we first generate any possible @@ -3803,8 +3786,6 @@ create_grouping_paths(PlannerInfo *root, */ if (try_parallel_aggregation) { - Path *cheapest_partial_path = linitial(input_rel->partial_pathlist); - /* * Build target list for partial aggregate paths. These paths cannot * just emit the same tlist as regular aggregate paths, because (1) we @@ -3814,11 +3795,6 @@ create_grouping_paths(PlannerInfo *root, */ partial_grouping_target = make_partial_grouping_target(root, target); - /* Estimate number of partial groups. */ - dNumPartialGroups = get_number_of_groups(root, - cheapest_partial_path->rows, - gd); - /* * Collect statistics about aggregates for estimating costs of * performing aggregation in parallel. @@ -3841,480 +3817,141 @@ create_grouping_paths(PlannerInfo *root, &agg_final_costs); } - if (can_sort) - { - /* This was checked before setting try_parallel_aggregation */ - Assert(parse->hasAggs || parse->groupClause); + add_partial_paths_to_grouping_rel(root, input_rel, grouped_rel, target, + partial_grouping_target, + &agg_partial_costs, &agg_final_costs, + gd, can_sort, can_hash, + (List *) parse->havingQual); + } - /* - * Use any available suitably-sorted path as input, and also - * consider sorting the cheapest partial path. - */ - foreach(lc, input_rel->partial_pathlist) - { - Path *path = (Path *) lfirst(lc); - bool is_sorted; + /* Build final grouping paths */ + add_paths_to_grouping_rel(root, input_rel, grouped_rel, target, + partial_grouping_target, agg_costs, + &agg_final_costs, gd, can_sort, can_hash, + dNumGroups, (List *) parse->havingQual); - is_sorted = pathkeys_contained_in(root->group_pathkeys, - path->pathkeys); - if (path == cheapest_partial_path || is_sorted) - { - /* Sort the cheapest partial path, if it isn't already */ - if (!is_sorted) - path = (Path *) create_sort_path(root, - grouped_rel, - path, - root->group_pathkeys, - -1.0); + /* Give a helpful error if we failed to find any implementation */ + if (grouped_rel->pathlist == NIL) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("could not implement GROUP BY"), + errdetail("Some of the datatypes only support hashing, while others only support sorting."))); - if (parse->hasAggs) - add_partial_path(grouped_rel, (Path *) - create_agg_path(root, - grouped_rel, - path, - partial_grouping_target, - parse->groupClause ? AGG_SORTED : AGG_PLAIN, - AGGSPLIT_INITIAL_SERIAL, - parse->groupClause, - NIL, - &agg_partial_costs, - dNumPartialGroups)); - else - add_partial_path(grouped_rel, (Path *) - create_group_path(root, - grouped_rel, - path, - partial_grouping_target, - parse->groupClause, - NIL, - dNumPartialGroups)); - } - } - } + /* + * If there is an FDW that's responsible for all baserels of the query, + * let it consider adding ForeignPaths. 
+ */ + if (grouped_rel->fdwroutine && + grouped_rel->fdwroutine->GetForeignUpperPaths) + grouped_rel->fdwroutine->GetForeignUpperPaths(root, UPPERREL_GROUP_AGG, + input_rel, grouped_rel); - if (can_hash) - { - /* Checked above */ - Assert(parse->hasAggs || parse->groupClause); + /* Let extensions possibly add some more paths */ + if (create_upper_paths_hook) + (*create_upper_paths_hook) (root, UPPERREL_GROUP_AGG, + input_rel, grouped_rel); + + /* Now choose the best path(s) */ + set_cheapest(grouped_rel); - hashaggtablesize = - estimate_hashagg_tablesize(cheapest_partial_path, - &agg_partial_costs, - dNumPartialGroups); + /* + * We've been using the partial pathlist for the grouped relation to hold + * partially aggregated paths, but that's actually a little bit bogus + * because it's unsafe for later planning stages -- like ordered_rel --- + * to get the idea that they can use these partial paths as if they didn't + * need a FinalizeAggregate step. Zap the partial pathlist at this stage + * so we don't get confused. + */ + grouped_rel->partial_pathlist = NIL; - /* - * Tentatively produce a partial HashAgg Path, depending on if it - * looks as if the hash table will fit in work_mem. - */ - if (hashaggtablesize < work_mem * 1024L) - { - add_partial_path(grouped_rel, (Path *) - create_agg_path(root, - grouped_rel, - cheapest_partial_path, - partial_grouping_target, - AGG_HASHED, - AGGSPLIT_INITIAL_SERIAL, - parse->groupClause, - NIL, - &agg_partial_costs, - dNumPartialGroups)); - } - } - } + return grouped_rel; +} - /* Build final grouping paths */ - if (can_sort) + +/* + * For a given input path, consider the possible ways of doing grouping sets on + * it, by combinations of hashing and sorting. This can be called multiple + * times, so it's important that it not scribble on input. No result is + * returned, but any generated paths are added to grouped_rel. + */ +static void +consider_groupingsets_paths(PlannerInfo *root, + RelOptInfo *grouped_rel, + Path *path, + bool is_sorted, + bool can_hash, + PathTarget *target, + grouping_sets_data *gd, + const AggClauseCosts *agg_costs, + double dNumGroups) +{ + Query *parse = root->parse; + + /* + * If we're not being offered sorted input, then only consider plans that + * can be done entirely by hashing. + * + * We can hash everything if it looks like it'll fit in work_mem. But if + * the input is actually sorted despite not being advertised as such, we + * prefer to make use of that in order to use less memory. + * + * If none of the grouping sets are sortable, then ignore the work_mem + * limit and generate a path anyway, since otherwise we'll just fail. + */ + if (!is_sorted) { - /* - * Use any available suitably-sorted path as input, and also consider - * sorting the cheapest-total path. 
- */ - foreach(lc, input_rel->pathlist) - { - Path *path = (Path *) lfirst(lc); - bool is_sorted; + List *new_rollups = NIL; + RollupData *unhashed_rollup = NULL; + List *sets_data; + List *empty_sets_data = NIL; + List *empty_sets = NIL; + ListCell *lc; + ListCell *l_start = list_head(gd->rollups); + AggStrategy strat = AGG_HASHED; + Size hashsize; + double exclude_groups = 0.0; - is_sorted = pathkeys_contained_in(root->group_pathkeys, - path->pathkeys); - if (path == cheapest_path || is_sorted) - { - /* Sort the cheapest-total path if it isn't already sorted */ - if (!is_sorted) - path = (Path *) create_sort_path(root, - grouped_rel, - path, - root->group_pathkeys, - -1.0); + Assert(can_hash); - /* Now decide what to stick atop it */ - if (parse->groupingSets) - { - consider_groupingsets_paths(root, grouped_rel, - path, true, can_hash, target, - gd, agg_costs, dNumGroups); - } - else if (parse->hasAggs) - { - /* - * We have aggregation, possibly with plain GROUP BY. Make - * an AggPath. - */ - add_path(grouped_rel, (Path *) - create_agg_path(root, - grouped_rel, - path, - target, - parse->groupClause ? AGG_SORTED : AGG_PLAIN, - AGGSPLIT_SIMPLE, - parse->groupClause, - (List *) parse->havingQual, - agg_costs, - dNumGroups)); - } - else if (parse->groupClause) - { - /* - * We have GROUP BY without aggregation or grouping sets. - * Make a GroupPath. - */ - add_path(grouped_rel, (Path *) - create_group_path(root, - grouped_rel, - path, - target, - parse->groupClause, - (List *) parse->havingQual, - dNumGroups)); - } - else - { - /* Other cases should have been handled above */ - Assert(false); - } - } + if (pathkeys_contained_in(root->group_pathkeys, path->pathkeys)) + { + unhashed_rollup = lfirst_node(RollupData, l_start); + exclude_groups = unhashed_rollup->numGroups; + l_start = lnext(l_start); } + hashsize = estimate_hashagg_tablesize(path, + agg_costs, + dNumGroups - exclude_groups); + /* - * Now generate a complete GroupAgg Path atop of the cheapest partial - * path. We can do this using either Gather or Gather Merge. + * gd->rollups is empty if we have only unsortable columns to work + * with. Override work_mem in that case; otherwise, we'll rely on the + * sorted-input case to generate usable mixed paths. */ - if (grouped_rel->partial_pathlist) - { - Path *path = (Path *) linitial(grouped_rel->partial_pathlist); - double total_groups = path->rows * path->parallel_workers; + if (hashsize > work_mem * 1024L && gd->rollups) + return; /* nope, won't fit */ - path = (Path *) create_gather_path(root, - grouped_rel, - path, - partial_grouping_target, - NULL, - &total_groups); + /* + * We need to burst the existing rollups list into individual grouping + * sets and recompute a groupClause for each set. + */ + sets_data = list_copy(gd->unsortable_sets); + + for_each_cell(lc, l_start) + { + RollupData *rollup = lfirst_node(RollupData, lc); /* - * Since Gather's output is always unsorted, we'll need to sort, - * unless there's no GROUP BY clause or a degenerate (constant) - * one, in which case there will only be a single group. - */ - if (root->group_pathkeys) - path = (Path *) create_sort_path(root, - grouped_rel, - path, - root->group_pathkeys, - -1.0); - - if (parse->hasAggs) - add_path(grouped_rel, (Path *) - create_agg_path(root, - grouped_rel, - path, - target, - parse->groupClause ? 
AGG_SORTED : AGG_PLAIN, - AGGSPLIT_FINAL_DESERIAL, - parse->groupClause, - (List *) parse->havingQual, - &agg_final_costs, - dNumGroups)); - else - add_path(grouped_rel, (Path *) - create_group_path(root, - grouped_rel, - path, - target, - parse->groupClause, - (List *) parse->havingQual, - dNumGroups)); - - /* - * The point of using Gather Merge rather than Gather is that it - * can preserve the ordering of the input path, so there's no - * reason to try it unless (1) it's possible to produce more than - * one output row and (2) we want the output path to be ordered. - */ - if (parse->groupClause != NIL && root->group_pathkeys != NIL) - { - foreach(lc, grouped_rel->partial_pathlist) - { - Path *subpath = (Path *) lfirst(lc); - Path *gmpath; - double total_groups; - - /* - * It's useful to consider paths that are already properly - * ordered for Gather Merge, because those don't need a - * sort. It's also useful to consider the cheapest path, - * because sorting it in parallel and then doing Gather - * Merge may be better than doing an unordered Gather - * followed by a sort. But there's no point in - * considering non-cheapest paths that aren't already - * sorted correctly. - */ - if (path != subpath && - !pathkeys_contained_in(root->group_pathkeys, - subpath->pathkeys)) - continue; - - total_groups = subpath->rows * subpath->parallel_workers; - - gmpath = (Path *) - create_gather_merge_path(root, - grouped_rel, - subpath, - partial_grouping_target, - root->group_pathkeys, - NULL, - &total_groups); - - if (parse->hasAggs) - add_path(grouped_rel, (Path *) - create_agg_path(root, - grouped_rel, - gmpath, - target, - parse->groupClause ? AGG_SORTED : AGG_PLAIN, - AGGSPLIT_FINAL_DESERIAL, - parse->groupClause, - (List *) parse->havingQual, - &agg_final_costs, - dNumGroups)); - else - add_path(grouped_rel, (Path *) - create_group_path(root, - grouped_rel, - gmpath, - target, - parse->groupClause, - (List *) parse->havingQual, - dNumGroups)); - } - } - } - } - - if (can_hash) - { - if (parse->groupingSets) - { - /* - * Try for a hash-only groupingsets path over unsorted input. - */ - consider_groupingsets_paths(root, grouped_rel, - cheapest_path, false, true, target, - gd, agg_costs, dNumGroups); - } - else - { - hashaggtablesize = estimate_hashagg_tablesize(cheapest_path, - agg_costs, - dNumGroups); - - /* - * Provided that the estimated size of the hashtable does not - * exceed work_mem, we'll generate a HashAgg Path, although if we - * were unable to sort above, then we'd better generate a Path, so - * that we at least have one. - */ - if (hashaggtablesize < work_mem * 1024L || - grouped_rel->pathlist == NIL) - { - /* - * We just need an Agg over the cheapest-total input path, - * since input order won't matter. - */ - add_path(grouped_rel, (Path *) - create_agg_path(root, grouped_rel, - cheapest_path, - target, - AGG_HASHED, - AGGSPLIT_SIMPLE, - parse->groupClause, - (List *) parse->havingQual, - agg_costs, - dNumGroups)); - } - } - - /* - * Generate a HashAgg Path atop of the cheapest partial path. Once - * again, we'll only do this if it looks as though the hash table - * won't exceed work_mem. 
- */ - if (grouped_rel->partial_pathlist) - { - Path *path = (Path *) linitial(grouped_rel->partial_pathlist); - - hashaggtablesize = estimate_hashagg_tablesize(path, - &agg_final_costs, - dNumGroups); - - if (hashaggtablesize < work_mem * 1024L) - { - double total_groups = path->rows * path->parallel_workers; - - path = (Path *) create_gather_path(root, - grouped_rel, - path, - partial_grouping_target, - NULL, - &total_groups); - - add_path(grouped_rel, (Path *) - create_agg_path(root, - grouped_rel, - path, - target, - AGG_HASHED, - AGGSPLIT_FINAL_DESERIAL, - parse->groupClause, - (List *) parse->havingQual, - &agg_final_costs, - dNumGroups)); - } - } - } - - /* Give a helpful error if we failed to find any implementation */ - if (grouped_rel->pathlist == NIL) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("could not implement GROUP BY"), - errdetail("Some of the datatypes only support hashing, while others only support sorting."))); - - /* - * If there is an FDW that's responsible for all baserels of the query, - * let it consider adding ForeignPaths. - */ - if (grouped_rel->fdwroutine && - grouped_rel->fdwroutine->GetForeignUpperPaths) - grouped_rel->fdwroutine->GetForeignUpperPaths(root, UPPERREL_GROUP_AGG, - input_rel, grouped_rel); - - /* Let extensions possibly add some more paths */ - if (create_upper_paths_hook) - (*create_upper_paths_hook) (root, UPPERREL_GROUP_AGG, - input_rel, grouped_rel); - - /* Now choose the best path(s) */ - set_cheapest(grouped_rel); - - /* - * We've been using the partial pathlist for the grouped relation to hold - * partially aggregated paths, but that's actually a little bit bogus - * because it's unsafe for later planning stages -- like ordered_rel --- - * to get the idea that they can use these partial paths as if they didn't - * need a FinalizeAggregate step. Zap the partial pathlist at this stage - * so we don't get confused. - */ - grouped_rel->partial_pathlist = NIL; - - return grouped_rel; -} - - -/* - * For a given input path, consider the possible ways of doing grouping sets on - * it, by combinations of hashing and sorting. This can be called multiple - * times, so it's important that it not scribble on input. No result is - * returned, but any generated paths are added to grouped_rel. - */ -static void -consider_groupingsets_paths(PlannerInfo *root, - RelOptInfo *grouped_rel, - Path *path, - bool is_sorted, - bool can_hash, - PathTarget *target, - grouping_sets_data *gd, - const AggClauseCosts *agg_costs, - double dNumGroups) -{ - Query *parse = root->parse; - - /* - * If we're not being offered sorted input, then only consider plans that - * can be done entirely by hashing. - * - * We can hash everything if it looks like it'll fit in work_mem. But if - * the input is actually sorted despite not being advertised as such, we - * prefer to make use of that in order to use less memory. - * - * If none of the grouping sets are sortable, then ignore the work_mem - * limit and generate a path anyway, since otherwise we'll just fail. 
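The work_mem rule above reduces to a single comparison against the estimated hash-table size. A minimal standalone sketch of that gate follows; hash_path_allowed is an invented helper, work_mem here is a plain int in kB like the GUC, and the numbers are made up (have_sorted_fallback plays the role of gd->rollups being non-empty):

    #include <stdio.h>

    typedef unsigned long Size;             /* stand-in for the real Size */

    static int work_mem = 4096;             /* kB, mirroring the GUC unit */

    /* Invented helper: allow a hashed grouping path only if the estimated
     * table fits in work_mem, or if there is no sorted fallback at all. */
    static int
    hash_path_allowed(Size hashsize, int have_sorted_fallback)
    {
        if (hashsize > (Size) work_mem * 1024L && have_sorted_fallback)
            return 0;                       /* won't fit; use sorted path */
        return 1;
    }

    int
    main(void)
    {
        printf("%d\n", hash_path_allowed(3UL * 1024 * 1024, 1));    /* 1 */
        printf("%d\n", hash_path_allowed(8UL * 1024 * 1024, 1));    /* 0 */
        printf("%d\n", hash_path_allowed(8UL * 1024 * 1024, 0));    /* 1 */
        return 0;
    }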
- */ - if (!is_sorted) - { - List *new_rollups = NIL; - RollupData *unhashed_rollup = NULL; - List *sets_data; - List *empty_sets_data = NIL; - List *empty_sets = NIL; - ListCell *lc; - ListCell *l_start = list_head(gd->rollups); - AggStrategy strat = AGG_HASHED; - Size hashsize; - double exclude_groups = 0.0; - - Assert(can_hash); - - if (pathkeys_contained_in(root->group_pathkeys, path->pathkeys)) - { - unhashed_rollup = lfirst_node(RollupData, l_start); - exclude_groups = unhashed_rollup->numGroups; - l_start = lnext(l_start); - } - - hashsize = estimate_hashagg_tablesize(path, - agg_costs, - dNumGroups - exclude_groups); - - /* - * gd->rollups is empty if we have only unsortable columns to work - * with. Override work_mem in that case; otherwise, we'll rely on the - * sorted-input case to generate usable mixed paths. - */ - if (hashsize > work_mem * 1024L && gd->rollups) - return; /* nope, won't fit */ - - /* - * We need to burst the existing rollups list into individual grouping - * sets and recompute a groupClause for each set. - */ - sets_data = list_copy(gd->unsortable_sets); - - for_each_cell(lc, l_start) - { - RollupData *rollup = lfirst_node(RollupData, lc); - - /* - * If we find an unhashable rollup that's not been skipped by the - * "actually sorted" check above, we can't cope; we'd need sorted - * input (with a different sort order) but we can't get that here. - * So bail out; we'll get a valid path from the is_sorted case - * instead. - * - * The mere presence of empty grouping sets doesn't make a rollup - * unhashable (see preprocess_grouping_sets), we handle those - * specially below. + * If we find an unhashable rollup that's not been skipped by the + * "actually sorted" check above, we can't cope; we'd need sorted + * input (with a different sort order) but we can't get that here. + * So bail out; we'll get a valid path from the is_sorted case + * instead. + * + * The mere presence of empty grouping sets doesn't make a rollup + * unhashable (see preprocess_grouping_sets), we handle those + * specially below. */ if (!rollup->hashable) return; @@ -5971,246 +5608,693 @@ adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel, rel->cheapest_total_path = newpath; } - /* Likewise for partial paths, if any */ - foreach(lc, rel->partial_pathlist) + /* Likewise for partial paths, if any */ + foreach(lc, rel->partial_pathlist) + { + Path *subpath = (Path *) lfirst(lc); + Path *newpath = subpath; + ListCell *lc1, + *lc2; + + Assert(subpath->param_info == NULL); + forboth(lc1, targets, lc2, targets_contain_srfs) + { + PathTarget *thistarget = lfirst_node(PathTarget, lc1); + bool contains_srfs = (bool) lfirst_int(lc2); + + /* If this level doesn't contain SRFs, do regular projection */ + if (contains_srfs) + newpath = (Path *) create_set_projection_path(root, + rel, + newpath, + thistarget); + else + { + /* avoid apply_projection_to_path, in case of multiple refs */ + newpath = (Path *) create_projection_path(root, + rel, + newpath, + thistarget); + } + } + lfirst(lc) = newpath; + } +} + +/* + * expression_planner + * Perform planner's transformations on a standalone expression. + * + * Various utility commands need to evaluate expressions that are not part + * of a plannable query. They can do so using the executor's regular + * expression-execution machinery, but first the expression has to be fed + * through here to transform it from parser output to something executable. 
+ * + * Currently, we disallow sublinks in standalone expressions, so there's no + * real "planning" involved here. (That might not always be true though.) + * What we must do is run eval_const_expressions to ensure that any function + * calls are converted to positional notation and function default arguments + * get inserted. The fact that constant subexpressions get simplified is a + * side-effect that is useful when the expression will get evaluated more than + * once. Also, we must fix operator function IDs. + * + * Note: this must not make any damaging changes to the passed-in expression + * tree. (It would actually be okay to apply fix_opfuncids to it, but since + * we first do an expression_tree_mutator-based walk, what is returned will + * be a new node tree.) + */ +Expr * +expression_planner(Expr *expr) +{ + Node *result; + + /* + * Convert named-argument function calls, insert default arguments and + * simplify constant subexprs + */ + result = eval_const_expressions(NULL, (Node *) expr); + + /* Fill in opfuncid values if missing */ + fix_opfuncids(result); + + return (Expr *) result; +} + + +/* + * plan_cluster_use_sort + * Use the planner to decide how CLUSTER should implement sorting + * + * tableOid is the OID of a table to be clustered on its index indexOid + * (which is already known to be a btree index). Decide whether it's + * cheaper to do an indexscan or a seqscan-plus-sort to execute the CLUSTER. + * Return true to use sorting, false to use an indexscan. + * + * Note: caller had better already hold some type of lock on the table. + */ +bool +plan_cluster_use_sort(Oid tableOid, Oid indexOid) +{ + PlannerInfo *root; + Query *query; + PlannerGlobal *glob; + RangeTblEntry *rte; + RelOptInfo *rel; + IndexOptInfo *indexInfo; + QualCost indexExprCost; + Cost comparisonCost; + Path *seqScanPath; + Path seqScanAndSortPath; + IndexPath *indexScanPath; + ListCell *lc; + + /* We can short-circuit the cost comparison if indexscans are disabled */ + if (!enable_indexscan) + return true; /* use sort */ + + /* Set up mostly-dummy planner state */ + query = makeNode(Query); + query->commandType = CMD_SELECT; + + glob = makeNode(PlannerGlobal); + + root = makeNode(PlannerInfo); + root->parse = query; + root->glob = glob; + root->query_level = 1; + root->planner_cxt = CurrentMemoryContext; + root->wt_param_id = -1; + + /* Build a minimal RTE for the rel */ + rte = makeNode(RangeTblEntry); + rte->rtekind = RTE_RELATION; + rte->relid = tableOid; + rte->relkind = RELKIND_RELATION; /* Don't be too picky. */ + rte->lateral = false; + rte->inh = false; + rte->inFromCl = true; + query->rtable = list_make1(rte); + + /* Set up RTE/RelOptInfo arrays */ + setup_simple_rel_arrays(root); + + /* Build RelOptInfo */ + rel = build_simple_rel(root, 1, NULL); + + /* Locate IndexOptInfo for the target index */ + indexInfo = NULL; + foreach(lc, rel->indexlist) + { + indexInfo = lfirst_node(IndexOptInfo, lc); + if (indexInfo->indexoid == indexOid) + break; + } + + /* + * It's possible that get_relation_info did not generate an IndexOptInfo + * for the desired index; this could happen if it's not yet reached its + * indcheckxmin usability horizon, or if it's a system index and we're + * ignoring system indexes. In such cases we should tell CLUSTER to not + * trust the index contents but use seqscan-and-sort. + */ + if (lc == NULL) /* not in the list? 
*/ + return true; /* use sort */ + + /* + * Rather than doing all the pushups that would be needed to use + * set_baserel_size_estimates, just do a quick hack for rows and width. + */ + rel->rows = rel->tuples; + rel->reltarget->width = get_relation_data_width(tableOid, NULL); + + root->total_table_pages = rel->pages; + + /* + * Determine eval cost of the index expressions, if any. We need to + * charge twice that amount for each tuple comparison that happens during + * the sort, since tuplesort.c will have to re-evaluate the index + * expressions each time. (XXX that's pretty inefficient...) + */ + cost_qual_eval(&indexExprCost, indexInfo->indexprs, root); + comparisonCost = 2.0 * (indexExprCost.startup + indexExprCost.per_tuple); + + /* Estimate the cost of seq scan + sort */ + seqScanPath = create_seqscan_path(root, rel, NULL, 0); + cost_sort(&seqScanAndSortPath, root, NIL, + seqScanPath->total_cost, rel->tuples, rel->reltarget->width, + comparisonCost, maintenance_work_mem, -1.0); + + /* Estimate the cost of index scan */ + indexScanPath = create_index_path(root, indexInfo, + NIL, NIL, NIL, NIL, NIL, + ForwardScanDirection, false, + NULL, 1.0, false); + + return (seqScanAndSortPath.total_cost < indexScanPath->path.total_cost); +} + +/* + * get_partitioned_child_rels + * Returns a list of the RT indexes of the partitioned child relations + * with rti as the root parent RT index. Also sets + * *part_cols_updated to true if any of the root rte's updated + * columns is used in the partition key either of the relation whose RTI + * is specified or of any child relation. + * + * Note: This function might get called even for range table entries that + * are not partitioned tables; in such a case, it will simply return NIL. + */ +List * +get_partitioned_child_rels(PlannerInfo *root, Index rti, + bool *part_cols_updated) +{ + List *result = NIL; + ListCell *l; + + if (part_cols_updated) + *part_cols_updated = false; + + foreach(l, root->pcinfo_list) { - Path *subpath = (Path *) lfirst(lc); - Path *newpath = subpath; - ListCell *lc1, - *lc2; + PartitionedChildRelInfo *pc = lfirst_node(PartitionedChildRelInfo, l); - Assert(subpath->param_info == NULL); - forboth(lc1, targets, lc2, targets_contain_srfs) + if (pc->parent_relid == rti) { - PathTarget *thistarget = lfirst_node(PathTarget, lc1); - bool contains_srfs = (bool) lfirst_int(lc2); - - /* If this level doesn't contain SRFs, do regular projection */ - if (contains_srfs) - newpath = (Path *) create_set_projection_path(root, - rel, - newpath, - thistarget); - else - { - /* avoid apply_projection_to_path, in case of multiple refs */ - newpath = (Path *) create_projection_path(root, - rel, - newpath, - thistarget); - } + result = pc->child_rels; + if (part_cols_updated) + *part_cols_updated = pc->part_cols_updated; + break; } - lfirst(lc) = newpath; } + + return result; } /* - * expression_planner - * Perform planner's transformations on a standalone expression. - * - * Various utility commands need to evaluate expressions that are not part - * of a plannable query. They can do so using the executor's regular - * expression-execution machinery, but first the expression has to be fed - * through here to transform it from parser output to something executable. - * - * Currently, we disallow sublinks in standalone expressions, so there's no - * real "planning" involved here. (That might not always be true though.) 
- * What we must do is run eval_const_expressions to ensure that any function - * calls are converted to positional notation and function default arguments - * get inserted. The fact that constant subexpressions get simplified is a - * side-effect that is useful when the expression will get evaluated more than - * once. Also, we must fix operator function IDs. - * - * Note: this must not make any damaging changes to the passed-in expression - * tree. (It would actually be okay to apply fix_opfuncids to it, but since - * we first do an expression_tree_mutator-based walk, what is returned will - * be a new node tree.) + * get_partitioned_child_rels_for_join + * Build and return a list containing the RTI of every partitioned + * relation which is a child of some rel included in the join. */ -Expr * -expression_planner(Expr *expr) +List * +get_partitioned_child_rels_for_join(PlannerInfo *root, Relids join_relids) { - Node *result; + List *result = NIL; + ListCell *l; - /* - * Convert named-argument function calls, insert default arguments and - * simplify constant subexprs - */ - result = eval_const_expressions(NULL, (Node *) expr); + foreach(l, root->pcinfo_list) + { + PartitionedChildRelInfo *pc = lfirst(l); - /* Fill in opfuncid values if missing */ - fix_opfuncids(result); + if (bms_is_member(pc->parent_relid, join_relids)) + result = list_concat(result, list_copy(pc->child_rels)); + } - return (Expr *) result; + return result; } - /* - * plan_cluster_use_sort - * Use the planner to decide how CLUSTER should implement sorting - * - * tableOid is the OID of a table to be clustered on its index indexOid - * (which is already known to be a btree index). Decide whether it's - * cheaper to do an indexscan or a seqscan-plus-sort to execute the CLUSTER. - * Return true to use sorting, false to use an indexscan. + * add_paths_to_grouping_rel * - * Note: caller had better already hold some type of lock on the table. + * Add non-partial paths to grouping relation. */ -bool -plan_cluster_use_sort(Oid tableOid, Oid indexOid) +static void +add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel, + RelOptInfo *grouped_rel, PathTarget *target, + PathTarget *partial_grouping_target, + const AggClauseCosts *agg_costs, + const AggClauseCosts *agg_final_costs, + grouping_sets_data *gd, bool can_sort, bool can_hash, + double dNumGroups, List *havingQual) { - PlannerInfo *root; - Query *query; - PlannerGlobal *glob; - RangeTblEntry *rte; - RelOptInfo *rel; - IndexOptInfo *indexInfo; - QualCost indexExprCost; - Cost comparisonCost; - Path *seqScanPath; - Path seqScanAndSortPath; - IndexPath *indexScanPath; + Query *parse = root->parse; + Path *cheapest_path = input_rel->cheapest_total_path; ListCell *lc; - /* We can short-circuit the cost comparison if indexscans are disabled */ - if (!enable_indexscan) - return true; /* use sort */ + if (can_sort) + { + /* + * Use any available suitably-sorted path as input, and also consider + * sorting the cheapest-total path. 
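Only two kinds of input survive the filter below: paths already sorted on the GROUP BY pathkeys, and the cheapest-total path, which is worth topping with an explicit Sort. A toy standalone rendering of that filter, with an invented Path struct and made-up costs:

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented stand-in for Path: name, whether its output already
     * matches the GROUP BY pathkeys, and total cost. */
    typedef struct
    {
        const char *name;
        bool        is_sorted;
        double      total_cost;
    } Path;

    int
    main(void)
    {
        Path        paths[] = {
            {"seqscan", false, 100.0},
            {"bitmap scan", false, 180.0},
            {"index scan", true, 240.0},
        };
        Path       *cheapest = &paths[0];   /* pathlists are cost-ordered */

        for (int i = 0; i < 3; i++)
        {
            Path       *p = &paths[i];

            if (p == cheapest || p->is_sorted)
                printf("%s: %s, then aggregate\n", p->name,
                       p->is_sorted ? "use as-is" : "add explicit Sort");
            else
                printf("%s: skipped (unsorted and not cheapest)\n", p->name);
        }
        return 0;
    }

Running it prints that the seqscan gets a Sort, the bitmap scan is skipped, and the index scan is aggregated as-is.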
+ */ + foreach(lc, input_rel->pathlist) + { + Path *path = (Path *) lfirst(lc); + bool is_sorted; - /* Set up mostly-dummy planner state */ - query = makeNode(Query); - query->commandType = CMD_SELECT; + is_sorted = pathkeys_contained_in(root->group_pathkeys, + path->pathkeys); + if (path == cheapest_path || is_sorted) + { + /* Sort the cheapest-total path if it isn't already sorted */ + if (!is_sorted) + path = (Path *) create_sort_path(root, + grouped_rel, + path, + root->group_pathkeys, + -1.0); - glob = makeNode(PlannerGlobal); + /* Now decide what to stick atop it */ + if (parse->groupingSets) + { + consider_groupingsets_paths(root, grouped_rel, + path, true, can_hash, target, + gd, agg_costs, dNumGroups); + } + else if (parse->hasAggs) + { + /* + * We have aggregation, possibly with plain GROUP BY. Make + * an AggPath. + */ + add_path(grouped_rel, (Path *) + create_agg_path(root, + grouped_rel, + path, + target, + parse->groupClause ? AGG_SORTED : AGG_PLAIN, + AGGSPLIT_SIMPLE, + parse->groupClause, + havingQual, + agg_costs, + dNumGroups)); + } + else if (parse->groupClause) + { + /* + * We have GROUP BY without aggregation or grouping sets. + * Make a GroupPath. + */ + add_path(grouped_rel, (Path *) + create_group_path(root, + grouped_rel, + path, + target, + parse->groupClause, + havingQual, + dNumGroups)); + } + else + { + /* Other cases should have been handled above */ + Assert(false); + } + } + } - root = makeNode(PlannerInfo); - root->parse = query; - root->glob = glob; - root->query_level = 1; - root->planner_cxt = CurrentMemoryContext; - root->wt_param_id = -1; + /* + * Now generate a complete GroupAgg Path atop of the cheapest partial + * path. We can do this using either Gather or Gather Merge. + */ + if (grouped_rel->partial_pathlist) + { + Path *path = (Path *) linitial(grouped_rel->partial_pathlist); + double total_groups = path->rows * path->parallel_workers; + + path = (Path *) create_gather_path(root, + grouped_rel, + path, + partial_grouping_target, + NULL, + &total_groups); + + /* + * Since Gather's output is always unsorted, we'll need to sort, + * unless there's no GROUP BY clause or a degenerate (constant) + * one, in which case there will only be a single group. + */ + if (root->group_pathkeys) + path = (Path *) create_sort_path(root, + grouped_rel, + path, + root->group_pathkeys, + -1.0); + + if (parse->hasAggs) + add_path(grouped_rel, (Path *) + create_agg_path(root, + grouped_rel, + path, + target, + parse->groupClause ? AGG_SORTED : AGG_PLAIN, + AGGSPLIT_FINAL_DESERIAL, + parse->groupClause, + havingQual, + agg_final_costs, + dNumGroups)); + else + add_path(grouped_rel, (Path *) + create_group_path(root, + grouped_rel, + path, + target, + parse->groupClause, + havingQual, + dNumGroups)); + + /* + * The point of using Gather Merge rather than Gather is that it + * can preserve the ordering of the input path, so there's no + * reason to try it unless (1) it's possible to produce more than + * one output row and (2) we want the output path to be ordered. + */ + if (parse->groupClause != NIL && root->group_pathkeys != NIL) + { + foreach(lc, grouped_rel->partial_pathlist) + { + Path *subpath = (Path *) lfirst(lc); + Path *gmpath; + double total_groups; + + /* + * It's useful to consider paths that are already properly + * ordered for Gather Merge, because those don't need a + * sort. 
It's also useful to consider the cheapest path, + * because sorting it in parallel and then doing Gather + * Merge may be better than doing an unordered Gather + * followed by a sort. But there's no point in considering + * non-cheapest paths that aren't already sorted + * correctly. + */ + if (path != subpath && + !pathkeys_contained_in(root->group_pathkeys, + subpath->pathkeys)) + continue; - /* Build a minimal RTE for the rel */ - rte = makeNode(RangeTblEntry); - rte->rtekind = RTE_RELATION; - rte->relid = tableOid; - rte->relkind = RELKIND_RELATION; /* Don't be too picky. */ - rte->lateral = false; - rte->inh = false; - rte->inFromCl = true; - query->rtable = list_make1(rte); + total_groups = subpath->rows * subpath->parallel_workers; - /* Set up RTE/RelOptInfo arrays */ - setup_simple_rel_arrays(root); + gmpath = (Path *) + create_gather_merge_path(root, + grouped_rel, + subpath, + partial_grouping_target, + root->group_pathkeys, + NULL, + &total_groups); - /* Build RelOptInfo */ - rel = build_simple_rel(root, 1, NULL); + if (parse->hasAggs) + add_path(grouped_rel, (Path *) + create_agg_path(root, + grouped_rel, + gmpath, + target, + parse->groupClause ? AGG_SORTED : AGG_PLAIN, + AGGSPLIT_FINAL_DESERIAL, + parse->groupClause, + havingQual, + agg_final_costs, + dNumGroups)); + else + add_path(grouped_rel, (Path *) + create_group_path(root, + grouped_rel, + gmpath, + target, + parse->groupClause, + havingQual, + dNumGroups)); + } + } + } + } - /* Locate IndexOptInfo for the target index */ - indexInfo = NULL; - foreach(lc, rel->indexlist) + if (can_hash) { - indexInfo = lfirst_node(IndexOptInfo, lc); - if (indexInfo->indexoid == indexOid) - break; - } + Size hashaggtablesize; - /* - * It's possible that get_relation_info did not generate an IndexOptInfo - * for the desired index; this could happen if it's not yet reached its - * indcheckxmin usability horizon, or if it's a system index and we're - * ignoring system indexes. In such cases we should tell CLUSTER to not - * trust the index contents but use seqscan-and-sort. - */ - if (lc == NULL) /* not in the list? */ - return true; /* use sort */ + if (parse->groupingSets) + { + /* + * Try for a hash-only groupingsets path over unsorted input. + */ + consider_groupingsets_paths(root, grouped_rel, + cheapest_path, false, true, target, + gd, agg_costs, dNumGroups); + } + else + { + hashaggtablesize = estimate_hashagg_tablesize(cheapest_path, + agg_costs, + dNumGroups); - /* - * Rather than doing all the pushups that would be needed to use - * set_baserel_size_estimates, just do a quick hack for rows and width. - */ - rel->rows = rel->tuples; - rel->reltarget->width = get_relation_data_width(tableOid, NULL); + /* + * Provided that the estimated size of the hashtable does not + * exceed work_mem, we'll generate a HashAgg Path, although if we + * were unable to sort above, then we'd better generate a Path, so + * that we at least have one. + */ + if (hashaggtablesize < work_mem * 1024L || + grouped_rel->pathlist == NIL) + { + /* + * We just need an Agg over the cheapest-total input path, + * since input order won't matter. + */ + add_path(grouped_rel, (Path *) + create_agg_path(root, grouped_rel, + cheapest_path, + target, + AGG_HASHED, + AGGSPLIT_SIMPLE, + parse->groupClause, + havingQual, + agg_costs, + dNumGroups)); + } + } - root->total_table_pages = rel->pages; + /* + * Generate a HashAgg Path atop of the cheapest partial path. Once + * again, we'll only do this if it looks as though the hash table + * won't exceed work_mem. 
+ */ + if (grouped_rel->partial_pathlist) + { + Path *path = (Path *) linitial(grouped_rel->partial_pathlist); - /* - * Determine eval cost of the index expressions, if any. We need to - * charge twice that amount for each tuple comparison that happens during - * the sort, since tuplesort.c will have to re-evaluate the index - * expressions each time. (XXX that's pretty inefficient...) - */ - cost_qual_eval(&indexExprCost, indexInfo->indexprs, root); - comparisonCost = 2.0 * (indexExprCost.startup + indexExprCost.per_tuple); + hashaggtablesize = estimate_hashagg_tablesize(path, + agg_final_costs, + dNumGroups); - /* Estimate the cost of seq scan + sort */ - seqScanPath = create_seqscan_path(root, rel, NULL, 0); - cost_sort(&seqScanAndSortPath, root, NIL, - seqScanPath->total_cost, rel->tuples, rel->reltarget->width, - comparisonCost, maintenance_work_mem, -1.0); + if (hashaggtablesize < work_mem * 1024L) + { + double total_groups = path->rows * path->parallel_workers; - /* Estimate the cost of index scan */ - indexScanPath = create_index_path(root, indexInfo, - NIL, NIL, NIL, NIL, NIL, - ForwardScanDirection, false, - NULL, 1.0, false); + path = (Path *) create_gather_path(root, + grouped_rel, + path, + partial_grouping_target, + NULL, + &total_groups); - return (seqScanAndSortPath.total_cost < indexScanPath->path.total_cost); + add_path(grouped_rel, (Path *) + create_agg_path(root, + grouped_rel, + path, + target, + AGG_HASHED, + AGGSPLIT_FINAL_DESERIAL, + parse->groupClause, + havingQual, + agg_final_costs, + dNumGroups)); + } + } + } } /* - * get_partitioned_child_rels - * Returns a list of the RT indexes of the partitioned child relations - * with rti as the root parent RT index. Also sets - * *part_cols_updated to true if any of the root rte's updated - * columns is used in the partition key either of the relation whose RTI - * is specified or of any child relation. + * add_partial_paths_to_grouping_rel * - * Note: This function might get called even for range table entries that - * are not partitioned tables; in such a case, it will simply return NIL. + * Add partial paths to grouping relation. These paths are not fully + * aggregated; a FinalizeAggregate step is still required. */ -List * -get_partitioned_child_rels(PlannerInfo *root, Index rti, - bool *part_cols_updated) +static void +add_partial_paths_to_grouping_rel(PlannerInfo *root, + RelOptInfo *input_rel, + RelOptInfo *grouped_rel, + PathTarget *target, + PathTarget *partial_grouping_target, + AggClauseCosts *agg_partial_costs, + AggClauseCosts *agg_final_costs, + grouping_sets_data *gd, + bool can_sort, + bool can_hash, + List *havingQual) { - List *result = NIL; - ListCell *l; + Query *parse = root->parse; + Path *cheapest_partial_path = linitial(input_rel->partial_pathlist); + Size hashaggtablesize; + double dNumPartialGroups = 0; + ListCell *lc; - if (part_cols_updated) - *part_cols_updated = false; + /* Estimate number of partial groups. */ + dNumPartialGroups = get_number_of_groups(root, + cheapest_partial_path->rows, + gd); - foreach(l, root->pcinfo_list) + if (can_sort) { - PartitionedChildRelInfo *pc = lfirst_node(PartitionedChildRelInfo, l); + /* This should have been checked previously */ + Assert(parse->hasAggs || parse->groupClause); - if (pc->parent_relid == rti) + /* + * Use any available suitably-sorted path as input, and also consider + * sorting the cheapest partial path. 
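The partial paths built below carry AGGSPLIT_INITIAL_SERIAL: each worker emits a transition state rather than a finished value, and a later FinalizeAggregate combines the states. A toy model of that two-phase split for an AVG-style aggregate (stand-in types only; the real executor also serializes the states between the phases):

    #include <stdio.h>

    /* Per-worker transition state for AVG: sum and count, no division. */
    typedef struct
    {
        long        sum;
        long        count;
    } AvgState;

    static AvgState
    partial_avg(const int *vals, int n)
    {
        AvgState    s = {0, 0};

        for (int i = 0; i < n; i++)
        {
            s.sum += vals[i];
            s.count++;
        }
        return s;                   /* state only; finalize divides */
    }

    static double
    final_avg(const AvgState *states, int nworkers)
    {
        AvgState    t = {0, 0};

        for (int i = 0; i < nworkers; i++)
        {
            t.sum += states[i].sum;
            t.count += states[i].count;
        }
        return (double) t.sum / t.count;
    }

    int
    main(void)
    {
        int         chunk1[] = {1, 2, 3};
        int         chunk2[] = {4, 5};
        AvgState    states[] = {partial_avg(chunk1, 3), partial_avg(chunk2, 2)};

        printf("avg = %.2f\n", final_avg(states, 2));   /* avg = 3.00 */
        return 0;
    }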
+ */ + foreach(lc, input_rel->partial_pathlist) { - result = pc->child_rels; - if (part_cols_updated) - *part_cols_updated = pc->part_cols_updated; - break; + Path *path = (Path *) lfirst(lc); + bool is_sorted; + + is_sorted = pathkeys_contained_in(root->group_pathkeys, + path->pathkeys); + if (path == cheapest_partial_path || is_sorted) + { + /* Sort the cheapest partial path, if it isn't already */ + if (!is_sorted) + path = (Path *) create_sort_path(root, + grouped_rel, + path, + root->group_pathkeys, + -1.0); + + if (parse->hasAggs) + add_partial_path(grouped_rel, (Path *) + create_agg_path(root, + grouped_rel, + path, + partial_grouping_target, + parse->groupClause ? AGG_SORTED : AGG_PLAIN, + AGGSPLIT_INITIAL_SERIAL, + parse->groupClause, + NIL, + agg_partial_costs, + dNumPartialGroups)); + else + add_partial_path(grouped_rel, (Path *) + create_group_path(root, + grouped_rel, + path, + partial_grouping_target, + parse->groupClause, + NIL, + dNumPartialGroups)); + } } } - return result; + if (can_hash) + { + /* Checked above */ + Assert(parse->hasAggs || parse->groupClause); + + hashaggtablesize = + estimate_hashagg_tablesize(cheapest_partial_path, + agg_partial_costs, + dNumPartialGroups); + + /* + * Tentatively produce a partial HashAgg Path, depending on if it + * looks as if the hash table will fit in work_mem. + */ + if (hashaggtablesize < work_mem * 1024L) + { + add_partial_path(grouped_rel, (Path *) + create_agg_path(root, + grouped_rel, + cheapest_partial_path, + partial_grouping_target, + AGG_HASHED, + AGGSPLIT_INITIAL_SERIAL, + parse->groupClause, + NIL, + agg_partial_costs, + dNumPartialGroups)); + } + } } /* - * get_partitioned_child_rels_for_join - * Build and return a list containing the RTI of every partitioned - * relation which is a child of some rel included in the join. + * can_parallel_agg + * + * Determines whether or not parallel grouping and/or aggregation is possible. + * Returns true when possible, false otherwise. */ -List * -get_partitioned_child_rels_for_join(PlannerInfo *root, Relids join_relids) +static bool +can_parallel_agg(PlannerInfo *root, RelOptInfo *input_rel, + RelOptInfo *grouped_rel, const AggClauseCosts *agg_costs) { - List *result = NIL; - ListCell *l; + Query *parse = root->parse; - foreach(l, root->pcinfo_list) + if (!grouped_rel->consider_parallel) { - PartitionedChildRelInfo *pc = lfirst(l); - - if (bms_is_member(pc->parent_relid, join_relids)) - result = list_concat(result, list_copy(pc->child_rels)); + /* Not even parallel-safe. */ + return false; + } + else if (input_rel->partial_pathlist == NIL) + { + /* Nothing to use as input for partial aggregate. */ + return false; + } + else if (!parse->hasAggs && parse->groupClause == NIL) + { + /* + * We don't know how to do parallel aggregation unless we have either + * some aggregates or a grouping clause. + */ + return false; + } + else if (parse->groupingSets) + { + /* We don't know how to do grouping sets in parallel. */ + return false; + } + else if (agg_costs->hasNonPartial || agg_costs->hasNonSerial) + { + /* Insufficient support for partial mode. */ + return false; } - return result; + /* Everything looks good. */ + return true; } From fb8697b31aaeebe6170c572739867dcaa01053c6 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 26 Jan 2018 18:25:02 -0500 Subject: [PATCH 0905/1087] Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers. 

We have a lot of code in which option names, which from the user's
viewpoint are logically keywords, are passed through the grammar as plain
identifiers, and then matched to string literals during command execution.
This approach avoids making words into lexer keywords unnecessarily.  Some
places matched these strings using plain strcmp, some using pg_strcasecmp.
But the latter should be unnecessary since identifiers would have been
downcased on their way through the parser.  Aside from any efficiency
concerns (probably not a big factor), the lack of consistency in this area
creates a hazard of subtle bugs due to different places coming to
different conclusions about whether two option names are the same or
different.  Hence, standardize on using strcmp() to match any option names
that are expected to have been fed through the parser.

This does create a user-visible behavioral change, which is that while
formerly all of these would work:

alter table foo set (fillfactor = 50);
alter table foo set (FillFactor = 50);
alter table foo set ("fillfactor" = 50);
alter table foo set ("FillFactor" = 50);

now the last case will fail because that double-quoted identifier is
different from the others.  However, none of our documentation says that
you can use a quoted identifier in such contexts at all, and we should
discourage doing so since it would break if we ever decide to parse such
constructs as true lexer keywords rather than poor man's substitutes.  So
this shouldn't create a significant compatibility issue for users.

Daniel Gustafsson, reviewed by Michael Paquier, small changes by me

Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se
---
 contrib/dict_int/dict_int.c                   |  4 +-
 contrib/dict_xsyn/dict_xsyn.c                 | 10 ++--
 contrib/unaccent/unaccent.c                   |  2 +-
 doc/src/sgml/textsearch.sgml                  |  1 +
 src/backend/access/common/reloptions.c        | 22 ++++----
 src/backend/commands/aggregatecmds.c          | 56 +++++++++----------
 src/backend/commands/collationcmds.c          | 12 ++--
 src/backend/commands/operatorcmds.c           | 44 +++++++--------
 src/backend/commands/tablecmds.c              |  2 +-
 src/backend/commands/tsearchcmds.c            | 23 ++++----
 src/backend/commands/typecmds.c               | 48 ++++++++--------
 src/backend/commands/view.c                   |  6 +-
 src/backend/parser/parse_clause.c             |  2 +-
 src/backend/snowball/dict_snowball.c          |  4 +-
 src/backend/tsearch/dict_ispell.c             |  6 +-
 src/backend/tsearch/dict_simple.c             |  4 +-
 src/backend/tsearch/dict_synonym.c            |  4 +-
 src/backend/tsearch/dict_thesaurus.c          |  4 +-
 src/include/access/reloptions.h               |  2 +-
 src/test/regress/expected/aggregates.out      | 15 ++---
 src/test/regress/expected/alter_generic.out   |  6 ++
 src/test/regress/expected/alter_operator.out  |  3 +
 src/test/regress/expected/collate.out         |  5 ++
 .../regress/expected/create_aggregate.out     | 30 ++++++++++
 src/test/regress/expected/create_operator.out | 23 ++++++++
 src/test/regress/expected/create_table.out    |  5 ++
 src/test/regress/expected/create_type.out     | 28 ++++++++++
 src/test/regress/expected/tsdicts.out         |  8 +++
 src/test/regress/sql/aggregates.sql           | 15 ++---
 src/test/regress/sql/alter_generic.sql        |  6 ++
 src/test/regress/sql/alter_operator.sql       |  3 +
 src/test/regress/sql/collate.sql              |  2 +
 src/test/regress/sql/create_aggregate.sql     | 20 +++++++
 src/test/regress/sql/create_operator.sql      | 14 +++++
 src/test/regress/sql/create_table.sql         |  4 ++
 src/test/regress/sql/create_type.sql          | 10 ++++
 src/test/regress/sql/tsdicts.sql              |  8 +++
 37 files changed, 318 insertions(+), 143 deletions(-)

diff --git a/contrib/dict_int/dict_int.c b/contrib/dict_int/dict_int.c index
8b45532938..56ede37089 100644 --- a/contrib/dict_int/dict_int.c +++ b/contrib/dict_int/dict_int.c @@ -42,11 +42,11 @@ dintdict_init(PG_FUNCTION_ARGS) { DefElem *defel = (DefElem *) lfirst(l); - if (pg_strcasecmp(defel->defname, "MAXLEN") == 0) + if (strcmp(defel->defname, "maxlen") == 0) { d->maxlen = atoi(defGetString(defel)); } - else if (pg_strcasecmp(defel->defname, "REJECTLONG") == 0) + else if (strcmp(defel->defname, "rejectlong") == 0) { d->rejectlong = defGetBoolean(defel); } diff --git a/contrib/dict_xsyn/dict_xsyn.c b/contrib/dict_xsyn/dict_xsyn.c index 8a3abf7e3c..a79ece240c 100644 --- a/contrib/dict_xsyn/dict_xsyn.c +++ b/contrib/dict_xsyn/dict_xsyn.c @@ -157,23 +157,23 @@ dxsyn_init(PG_FUNCTION_ARGS) { DefElem *defel = (DefElem *) lfirst(l); - if (pg_strcasecmp(defel->defname, "MATCHORIG") == 0) + if (strcmp(defel->defname, "matchorig") == 0) { d->matchorig = defGetBoolean(defel); } - else if (pg_strcasecmp(defel->defname, "KEEPORIG") == 0) + else if (strcmp(defel->defname, "keeporig") == 0) { d->keeporig = defGetBoolean(defel); } - else if (pg_strcasecmp(defel->defname, "MATCHSYNONYMS") == 0) + else if (strcmp(defel->defname, "matchsynonyms") == 0) { d->matchsynonyms = defGetBoolean(defel); } - else if (pg_strcasecmp(defel->defname, "KEEPSYNONYMS") == 0) + else if (strcmp(defel->defname, "keepsynonyms") == 0) { d->keepsynonyms = defGetBoolean(defel); } - else if (pg_strcasecmp(defel->defname, "RULES") == 0) + else if (strcmp(defel->defname, "rules") == 0) { /* we can't read the rules before parsing all options! */ filename = defGetString(defel); diff --git a/contrib/unaccent/unaccent.c b/contrib/unaccent/unaccent.c index 82f9c7fcfe..247c202755 100644 --- a/contrib/unaccent/unaccent.c +++ b/contrib/unaccent/unaccent.c @@ -276,7 +276,7 @@ unaccent_init(PG_FUNCTION_ARGS) { DefElem *defel = (DefElem *) lfirst(l); - if (pg_strcasecmp("Rules", defel->defname) == 0) + if (strcmp(defel->defname, "rules") == 0) { if (fileloaded) ereport(ERROR, diff --git a/doc/src/sgml/textsearch.sgml b/doc/src/sgml/textsearch.sgml index 1a2f04019c..610b7bf033 100644 --- a/doc/src/sgml/textsearch.sgml +++ b/doc/src/sgml/textsearch.sgml @@ -1271,6 +1271,7 @@ ts_headline( config + These option names are recognized case-insensitively. 
Any unspecified options receive these defaults: diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c index 274f7aa8e9..46276ceff1 100644 --- a/src/backend/access/common/reloptions.c +++ b/src/backend/access/common/reloptions.c @@ -796,12 +796,12 @@ transformRelOptions(Datum oldOptions, List *defList, const char *namspace, } else if (def->defnamespace == NULL) continue; - else if (pg_strcasecmp(def->defnamespace, namspace) != 0) + else if (strcmp(def->defnamespace, namspace) != 0) continue; kw_len = strlen(def->defname); if (text_len > kw_len && text_str[kw_len] == '=' && - pg_strncasecmp(text_str, def->defname, kw_len) == 0) + strncmp(text_str, def->defname, kw_len) == 0) break; } if (!cell) @@ -849,8 +849,7 @@ transformRelOptions(Datum oldOptions, List *defList, const char *namspace, { for (i = 0; validnsps[i]; i++) { - if (pg_strcasecmp(def->defnamespace, - validnsps[i]) == 0) + if (strcmp(def->defnamespace, validnsps[i]) == 0) { valid = true; break; @@ -865,7 +864,7 @@ transformRelOptions(Datum oldOptions, List *defList, const char *namspace, def->defnamespace))); } - if (ignoreOids && pg_strcasecmp(def->defname, "oids") == 0) + if (ignoreOids && strcmp(def->defname, "oids") == 0) continue; /* ignore if not in the same namespace */ @@ -876,7 +875,7 @@ transformRelOptions(Datum oldOptions, List *defList, const char *namspace, } else if (def->defnamespace == NULL) continue; - else if (pg_strcasecmp(def->defnamespace, namspace) != 0) + else if (strcmp(def->defnamespace, namspace) != 0) continue; /* @@ -1082,8 +1081,7 @@ parseRelOptions(Datum options, bool validate, relopt_kind kind, int kw_len = reloptions[j].gen->namelen; if (text_len > kw_len && text_str[kw_len] == '=' && - pg_strncasecmp(text_str, reloptions[j].gen->name, - kw_len) == 0) + strncmp(text_str, reloptions[j].gen->name, kw_len) == 0) { parse_one_reloption(&reloptions[j], text_str, text_len, validate); @@ -1262,7 +1260,7 @@ fillRelOptions(void *rdopts, Size basesize, for (j = 0; j < numelems; j++) { - if (pg_strcasecmp(options[i].gen->name, elems[j].optname) == 0) + if (strcmp(options[i].gen->name, elems[j].optname) == 0) { relopt_string *optstring; char *itempos = ((char *) rdopts) + elems[j].offset; @@ -1556,9 +1554,9 @@ AlterTableGetRelOptionsLockLevel(List *defList) for (i = 0; relOpts[i]; i++) { - if (pg_strncasecmp(relOpts[i]->name, - def->defname, - relOpts[i]->namelen + 1) == 0) + if (strncmp(relOpts[i]->name, + def->defname, + relOpts[i]->namelen + 1) == 0) { if (lockmode < relOpts[i]->lockmode) lockmode = relOpts[i]->lockmode; diff --git a/src/backend/commands/aggregatecmds.c b/src/backend/commands/aggregatecmds.c index 1779ba7bcb..a48c3ac572 100644 --- a/src/backend/commands/aggregatecmds.c +++ b/src/backend/commands/aggregatecmds.c @@ -127,37 +127,37 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List * sfunc1, stype1, and initcond1 are accepted as obsolete spellings * for sfunc, stype, initcond. 
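Every branch in this function follows one pattern: walk the DefElem list and compare defname, which reaches the command already lower-cased unless the user double-quoted it. A minimal standalone mock of that pattern; DefElem is simplified, define_thing is an invented name, and the maxlen option merely echoes the dict_int change above:

    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for DefElem: just the option name, which
     * arrives lower-cased unless the user double-quoted it. */
    typedef struct
    {
        const char *defname;
        int         arg;
    } DefElem;

    static void
    define_thing(const DefElem *opts, int nopts)
    {
        for (int i = 0; i < nopts; i++)
        {
            if (strcmp(opts[i].defname, "maxlen") == 0)
                printf("maxlen = %d\n", opts[i].arg);
            else
                printf("parameter \"%s\" not recognized\n",
                       opts[i].defname);
        }
    }

    int
    main(void)
    {
        /* MAXLEN, MaxLen and maxlen all arrive as "maxlen"; only a
         * double-quoted "MaxLen" keeps its case and no longer matches. */
        DefElem     opts[] = {{"maxlen", 8}, {"MaxLen", 8}};

        define_thing(opts, 2);
        return 0;
    }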
*/ - if (pg_strcasecmp(defel->defname, "sfunc") == 0) + if (strcmp(defel->defname, "sfunc") == 0) transfuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "sfunc1") == 0) + else if (strcmp(defel->defname, "sfunc1") == 0) transfuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "finalfunc") == 0) + else if (strcmp(defel->defname, "finalfunc") == 0) finalfuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "combinefunc") == 0) + else if (strcmp(defel->defname, "combinefunc") == 0) combinefuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "serialfunc") == 0) + else if (strcmp(defel->defname, "serialfunc") == 0) serialfuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "deserialfunc") == 0) + else if (strcmp(defel->defname, "deserialfunc") == 0) deserialfuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "msfunc") == 0) + else if (strcmp(defel->defname, "msfunc") == 0) mtransfuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "minvfunc") == 0) + else if (strcmp(defel->defname, "minvfunc") == 0) minvtransfuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "mfinalfunc") == 0) + else if (strcmp(defel->defname, "mfinalfunc") == 0) mfinalfuncName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "finalfunc_extra") == 0) + else if (strcmp(defel->defname, "finalfunc_extra") == 0) finalfuncExtraArgs = defGetBoolean(defel); - else if (pg_strcasecmp(defel->defname, "mfinalfunc_extra") == 0) + else if (strcmp(defel->defname, "mfinalfunc_extra") == 0) mfinalfuncExtraArgs = defGetBoolean(defel); - else if (pg_strcasecmp(defel->defname, "finalfunc_modify") == 0) + else if (strcmp(defel->defname, "finalfunc_modify") == 0) finalfuncModify = extractModify(defel); - else if (pg_strcasecmp(defel->defname, "mfinalfunc_modify") == 0) + else if (strcmp(defel->defname, "mfinalfunc_modify") == 0) mfinalfuncModify = extractModify(defel); - else if (pg_strcasecmp(defel->defname, "sortop") == 0) + else if (strcmp(defel->defname, "sortop") == 0) sortoperatorName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "basetype") == 0) + else if (strcmp(defel->defname, "basetype") == 0) baseType = defGetTypeName(defel); - else if (pg_strcasecmp(defel->defname, "hypothetical") == 0) + else if (strcmp(defel->defname, "hypothetical") == 0) { if (defGetBoolean(defel)) { @@ -168,23 +168,23 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List aggKind = AGGKIND_HYPOTHETICAL; } } - else if (pg_strcasecmp(defel->defname, "stype") == 0) + else if (strcmp(defel->defname, "stype") == 0) transType = defGetTypeName(defel); - else if (pg_strcasecmp(defel->defname, "stype1") == 0) + else if (strcmp(defel->defname, "stype1") == 0) transType = defGetTypeName(defel); - else if (pg_strcasecmp(defel->defname, "sspace") == 0) + else if (strcmp(defel->defname, "sspace") == 0) transSpace = defGetInt32(defel); - else if (pg_strcasecmp(defel->defname, "mstype") == 0) + else if (strcmp(defel->defname, "mstype") == 0) mtransType = defGetTypeName(defel); - else if (pg_strcasecmp(defel->defname, "msspace") == 0) + else if (strcmp(defel->defname, "msspace") == 0) mtransSpace = defGetInt32(defel); - else if (pg_strcasecmp(defel->defname, "initcond") == 0) + else if (strcmp(defel->defname, "initcond") == 0) initval = defGetString(defel); - else if 
(pg_strcasecmp(defel->defname, "initcond1") == 0) + else if (strcmp(defel->defname, "initcond1") == 0) initval = defGetString(defel); - else if (pg_strcasecmp(defel->defname, "minitcond") == 0) + else if (strcmp(defel->defname, "minitcond") == 0) minitval = defGetString(defel); - else if (pg_strcasecmp(defel->defname, "parallel") == 0) + else if (strcmp(defel->defname, "parallel") == 0) parallel = defGetString(defel); else ereport(WARNING, @@ -420,11 +420,11 @@ DefineAggregate(ParseState *pstate, List *name, List *args, bool oldstyle, List if (parallel) { - if (pg_strcasecmp(parallel, "safe") == 0) + if (strcmp(parallel, "safe") == 0) proparallel = PROPARALLEL_SAFE; - else if (pg_strcasecmp(parallel, "restricted") == 0) + else if (strcmp(parallel, "restricted") == 0) proparallel = PROPARALLEL_RESTRICTED; - else if (pg_strcasecmp(parallel, "unsafe") == 0) + else if (strcmp(parallel, "unsafe") == 0) proparallel = PROPARALLEL_UNSAFE; else ereport(ERROR, diff --git a/src/backend/commands/collationcmds.c b/src/backend/commands/collationcmds.c index fdfb3dcd2c..d0b5cdb69a 100644 --- a/src/backend/commands/collationcmds.c +++ b/src/backend/commands/collationcmds.c @@ -82,17 +82,17 @@ DefineCollation(ParseState *pstate, List *names, List *parameters, bool if_not_e DefElem *defel = lfirst_node(DefElem, pl); DefElem **defelp; - if (pg_strcasecmp(defel->defname, "from") == 0) + if (strcmp(defel->defname, "from") == 0) defelp = &fromEl; - else if (pg_strcasecmp(defel->defname, "locale") == 0) + else if (strcmp(defel->defname, "locale") == 0) defelp = &localeEl; - else if (pg_strcasecmp(defel->defname, "lc_collate") == 0) + else if (strcmp(defel->defname, "lc_collate") == 0) defelp = &lccollateEl; - else if (pg_strcasecmp(defel->defname, "lc_ctype") == 0) + else if (strcmp(defel->defname, "lc_ctype") == 0) defelp = &lcctypeEl; - else if (pg_strcasecmp(defel->defname, "provider") == 0) + else if (strcmp(defel->defname, "provider") == 0) defelp = &providerEl; - else if (pg_strcasecmp(defel->defname, "version") == 0) + else if (strcmp(defel->defname, "version") == 0) defelp = &versionEl; else { diff --git a/src/backend/commands/operatorcmds.c b/src/backend/commands/operatorcmds.c index 35404ac39a..585382d758 100644 --- a/src/backend/commands/operatorcmds.c +++ b/src/backend/commands/operatorcmds.c @@ -105,7 +105,7 @@ DefineOperator(List *names, List *parameters) { DefElem *defel = (DefElem *) lfirst(pl); - if (pg_strcasecmp(defel->defname, "leftarg") == 0) + if (strcmp(defel->defname, "leftarg") == 0) { typeName1 = defGetTypeName(defel); if (typeName1->setof) @@ -113,7 +113,7 @@ DefineOperator(List *names, List *parameters) (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), errmsg("SETOF type not allowed for operator argument"))); } - else if (pg_strcasecmp(defel->defname, "rightarg") == 0) + else if (strcmp(defel->defname, "rightarg") == 0) { typeName2 = defGetTypeName(defel); if (typeName2->setof) @@ -121,28 +121,28 @@ DefineOperator(List *names, List *parameters) (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION), errmsg("SETOF type not allowed for operator argument"))); } - else if (pg_strcasecmp(defel->defname, "procedure") == 0) + else if (strcmp(defel->defname, "procedure") == 0) functionName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "commutator") == 0) + else if (strcmp(defel->defname, "commutator") == 0) commutatorName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "negator") == 0) + else if (strcmp(defel->defname, "negator") == 0) negatorName = 
defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "restrict") == 0) + else if (strcmp(defel->defname, "restrict") == 0) restrictionName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "join") == 0) + else if (strcmp(defel->defname, "join") == 0) joinName = defGetQualifiedName(defel); - else if (pg_strcasecmp(defel->defname, "hashes") == 0) + else if (strcmp(defel->defname, "hashes") == 0) canHash = defGetBoolean(defel); - else if (pg_strcasecmp(defel->defname, "merges") == 0) + else if (strcmp(defel->defname, "merges") == 0) canMerge = defGetBoolean(defel); /* These obsolete options are taken as meaning canMerge */ - else if (pg_strcasecmp(defel->defname, "sort1") == 0) + else if (strcmp(defel->defname, "sort1") == 0) canMerge = true; - else if (pg_strcasecmp(defel->defname, "sort2") == 0) + else if (strcmp(defel->defname, "sort2") == 0) canMerge = true; - else if (pg_strcasecmp(defel->defname, "ltcmp") == 0) + else if (strcmp(defel->defname, "ltcmp") == 0) canMerge = true; - else if (pg_strcasecmp(defel->defname, "gtcmp") == 0) + else if (strcmp(defel->defname, "gtcmp") == 0) canMerge = true; else { @@ -420,12 +420,12 @@ AlterOperator(AlterOperatorStmt *stmt) else param = defGetQualifiedName(defel); - if (pg_strcasecmp(defel->defname, "restrict") == 0) + if (strcmp(defel->defname, "restrict") == 0) { restrictionName = param; updateRestriction = true; } - else if (pg_strcasecmp(defel->defname, "join") == 0) + else if (strcmp(defel->defname, "join") == 0) { joinName = param; updateJoin = true; @@ -435,13 +435,13 @@ AlterOperator(AlterOperatorStmt *stmt) * The rest of the options that CREATE accepts cannot be changed. * Check for them so that we can give a meaningful error message. */ - else if (pg_strcasecmp(defel->defname, "leftarg") == 0 || - pg_strcasecmp(defel->defname, "rightarg") == 0 || - pg_strcasecmp(defel->defname, "procedure") == 0 || - pg_strcasecmp(defel->defname, "commutator") == 0 || - pg_strcasecmp(defel->defname, "negator") == 0 || - pg_strcasecmp(defel->defname, "hashes") == 0 || - pg_strcasecmp(defel->defname, "merges") == 0) + else if (strcmp(defel->defname, "leftarg") == 0 || + strcmp(defel->defname, "rightarg") == 0 || + strcmp(defel->defname, "procedure") == 0 || + strcmp(defel->defname, "commutator") == 0 || + strcmp(defel->defname, "negator") == 0 || + strcmp(defel->defname, "hashes") == 0 || + strcmp(defel->defname, "merges") == 0) { ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR), diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 2e768dd5e4..ea03fd2ecf 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -10536,7 +10536,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation, { DefElem *defel = (DefElem *) lfirst(cell); - if (pg_strcasecmp(defel->defname, "check_option") == 0) + if (strcmp(defel->defname, "check_option") == 0) check_option = true; } diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c index bdf3857ce4..3a843512d1 100644 --- a/src/backend/commands/tsearchcmds.c +++ b/src/backend/commands/tsearchcmds.c @@ -209,27 +209,27 @@ DefineTSParser(List *names, List *parameters) { DefElem *defel = (DefElem *) lfirst(pl); - if (pg_strcasecmp(defel->defname, "start") == 0) + if (strcmp(defel->defname, "start") == 0) { values[Anum_pg_ts_parser_prsstart - 1] = get_ts_parser_func(defel, Anum_pg_ts_parser_prsstart); } - else if (pg_strcasecmp(defel->defname, "gettoken") == 0) + else if 
(strcmp(defel->defname, "gettoken") == 0) { values[Anum_pg_ts_parser_prstoken - 1] = get_ts_parser_func(defel, Anum_pg_ts_parser_prstoken); } - else if (pg_strcasecmp(defel->defname, "end") == 0) + else if (strcmp(defel->defname, "end") == 0) { values[Anum_pg_ts_parser_prsend - 1] = get_ts_parser_func(defel, Anum_pg_ts_parser_prsend); } - else if (pg_strcasecmp(defel->defname, "headline") == 0) + else if (strcmp(defel->defname, "headline") == 0) { values[Anum_pg_ts_parser_prsheadline - 1] = get_ts_parser_func(defel, Anum_pg_ts_parser_prsheadline); } - else if (pg_strcasecmp(defel->defname, "lextypes") == 0) + else if (strcmp(defel->defname, "lextypes") == 0) { values[Anum_pg_ts_parser_prslextype - 1] = get_ts_parser_func(defel, Anum_pg_ts_parser_prslextype); @@ -438,7 +438,7 @@ DefineTSDictionary(List *names, List *parameters) { DefElem *defel = (DefElem *) lfirst(pl); - if (pg_strcasecmp(defel->defname, "template") == 0) + if (strcmp(defel->defname, "template") == 0) { templId = get_ts_template_oid(defGetQualifiedName(defel), false); } @@ -580,7 +580,7 @@ AlterTSDictionary(AlterTSDictionaryStmt *stmt) DefElem *oldel = (DefElem *) lfirst(cell); next = lnext(cell); - if (pg_strcasecmp(oldel->defname, defel->defname) == 0) + if (strcmp(oldel->defname, defel->defname) == 0) dictoptions = list_delete_cell(dictoptions, cell, prev); else prev = cell; @@ -765,13 +765,13 @@ DefineTSTemplate(List *names, List *parameters) { DefElem *defel = (DefElem *) lfirst(pl); - if (pg_strcasecmp(defel->defname, "init") == 0) + if (strcmp(defel->defname, "init") == 0) { values[Anum_pg_ts_template_tmplinit - 1] = get_ts_template_func(defel, Anum_pg_ts_template_tmplinit); nulls[Anum_pg_ts_template_tmplinit - 1] = false; } - else if (pg_strcasecmp(defel->defname, "lexize") == 0) + else if (strcmp(defel->defname, "lexize") == 0) { values[Anum_pg_ts_template_tmpllexize - 1] = get_ts_template_func(defel, Anum_pg_ts_template_tmpllexize); @@ -990,9 +990,9 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied) { DefElem *defel = (DefElem *) lfirst(pl); - if (pg_strcasecmp(defel->defname, "parser") == 0) + if (strcmp(defel->defname, "parser") == 0) prsOid = get_ts_parser_oid(defGetQualifiedName(defel), false); - else if (pg_strcasecmp(defel->defname, "copy") == 0) + else if (strcmp(defel->defname, "copy") == 0) sourceOid = get_ts_config_oid(defGetQualifiedName(defel), false); else ereport(ERROR, @@ -1251,7 +1251,6 @@ getTokenTypes(Oid prsId, List *tokennames) j = 0; while (list && list[j].lexid) { - /* XXX should we use pg_strcasecmp here? 
*/ if (strcmp(strVal(val), list[j].alias) == 0) { res[i] = list[j].lexid; diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c index 74eb430f96..899a5c4cd4 100644 --- a/src/backend/commands/typecmds.c +++ b/src/backend/commands/typecmds.c @@ -245,42 +245,42 @@ DefineType(ParseState *pstate, List *names, List *parameters) DefElem *defel = (DefElem *) lfirst(pl); DefElem **defelp; - if (pg_strcasecmp(defel->defname, "like") == 0) + if (strcmp(defel->defname, "like") == 0) defelp = &likeTypeEl; - else if (pg_strcasecmp(defel->defname, "internallength") == 0) + else if (strcmp(defel->defname, "internallength") == 0) defelp = &internalLengthEl; - else if (pg_strcasecmp(defel->defname, "input") == 0) + else if (strcmp(defel->defname, "input") == 0) defelp = &inputNameEl; - else if (pg_strcasecmp(defel->defname, "output") == 0) + else if (strcmp(defel->defname, "output") == 0) defelp = &outputNameEl; - else if (pg_strcasecmp(defel->defname, "receive") == 0) + else if (strcmp(defel->defname, "receive") == 0) defelp = &receiveNameEl; - else if (pg_strcasecmp(defel->defname, "send") == 0) + else if (strcmp(defel->defname, "send") == 0) defelp = &sendNameEl; - else if (pg_strcasecmp(defel->defname, "typmod_in") == 0) + else if (strcmp(defel->defname, "typmod_in") == 0) defelp = &typmodinNameEl; - else if (pg_strcasecmp(defel->defname, "typmod_out") == 0) + else if (strcmp(defel->defname, "typmod_out") == 0) defelp = &typmodoutNameEl; - else if (pg_strcasecmp(defel->defname, "analyze") == 0 || - pg_strcasecmp(defel->defname, "analyse") == 0) + else if (strcmp(defel->defname, "analyze") == 0 || + strcmp(defel->defname, "analyse") == 0) defelp = &analyzeNameEl; - else if (pg_strcasecmp(defel->defname, "category") == 0) + else if (strcmp(defel->defname, "category") == 0) defelp = &categoryEl; - else if (pg_strcasecmp(defel->defname, "preferred") == 0) + else if (strcmp(defel->defname, "preferred") == 0) defelp = &preferredEl; - else if (pg_strcasecmp(defel->defname, "delimiter") == 0) + else if (strcmp(defel->defname, "delimiter") == 0) defelp = &delimiterEl; - else if (pg_strcasecmp(defel->defname, "element") == 0) + else if (strcmp(defel->defname, "element") == 0) defelp = &elemTypeEl; - else if (pg_strcasecmp(defel->defname, "default") == 0) + else if (strcmp(defel->defname, "default") == 0) defelp = &defaultValueEl; - else if (pg_strcasecmp(defel->defname, "passedbyvalue") == 0) + else if (strcmp(defel->defname, "passedbyvalue") == 0) defelp = &byValueEl; - else if (pg_strcasecmp(defel->defname, "alignment") == 0) + else if (strcmp(defel->defname, "alignment") == 0) defelp = &alignmentEl; - else if (pg_strcasecmp(defel->defname, "storage") == 0) + else if (strcmp(defel->defname, "storage") == 0) defelp = &storageEl; - else if (pg_strcasecmp(defel->defname, "collatable") == 0) + else if (strcmp(defel->defname, "collatable") == 0) defelp = &collatableEl; else { @@ -1439,7 +1439,7 @@ DefineRange(CreateRangeStmt *stmt) { DefElem *defel = (DefElem *) lfirst(lc); - if (pg_strcasecmp(defel->defname, "subtype") == 0) + if (strcmp(defel->defname, "subtype") == 0) { if (OidIsValid(rangeSubtype)) ereport(ERROR, @@ -1448,7 +1448,7 @@ DefineRange(CreateRangeStmt *stmt) /* we can look up the subtype name immediately */ rangeSubtype = typenameTypeId(NULL, defGetTypeName(defel)); } - else if (pg_strcasecmp(defel->defname, "subtype_opclass") == 0) + else if (strcmp(defel->defname, "subtype_opclass") == 0) { if (rangeSubOpclassName != NIL) ereport(ERROR, @@ -1456,7 +1456,7 @@ 
DefineRange(CreateRangeStmt *stmt) errmsg("conflicting or redundant options"))); rangeSubOpclassName = defGetQualifiedName(defel); } - else if (pg_strcasecmp(defel->defname, "collation") == 0) + else if (strcmp(defel->defname, "collation") == 0) { if (rangeCollationName != NIL) ereport(ERROR, @@ -1464,7 +1464,7 @@ DefineRange(CreateRangeStmt *stmt) errmsg("conflicting or redundant options"))); rangeCollationName = defGetQualifiedName(defel); } - else if (pg_strcasecmp(defel->defname, "canonical") == 0) + else if (strcmp(defel->defname, "canonical") == 0) { if (rangeCanonicalName != NIL) ereport(ERROR, @@ -1472,7 +1472,7 @@ DefineRange(CreateRangeStmt *stmt) errmsg("conflicting or redundant options"))); rangeCanonicalName = defGetQualifiedName(defel); } - else if (pg_strcasecmp(defel->defname, "subtype_diff") == 0) + else if (strcmp(defel->defname, "subtype_diff") == 0) { if (rangeSubtypeDiffName != NIL) ereport(ERROR, diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c index 04ad76a210..7d4511c585 100644 --- a/src/backend/commands/view.c +++ b/src/backend/commands/view.c @@ -46,8 +46,8 @@ void validateWithCheckOption(const char *value) { if (value == NULL || - (pg_strcasecmp(value, "local") != 0 && - pg_strcasecmp(value, "cascaded") != 0)) + (strcmp(value, "local") != 0 && + strcmp(value, "cascaded") != 0)) { ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), @@ -485,7 +485,7 @@ DefineView(ViewStmt *stmt, const char *queryString, { DefElem *defel = (DefElem *) lfirst(cell); - if (pg_strcasecmp(defel->defname, "check_option") == 0) + if (strcmp(defel->defname, "check_option") == 0) check_option = true; } diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c index 9fbcfd4fa6..406cd1dad0 100644 --- a/src/backend/parser/parse_clause.c +++ b/src/backend/parser/parse_clause.c @@ -262,7 +262,7 @@ interpretOidsOption(List *defList, bool allowOids) DefElem *def = (DefElem *) lfirst(cell); if (def->defnamespace == NULL && - pg_strcasecmp(def->defname, "oids") == 0) + strcmp(def->defname, "oids") == 0) { if (!allowOids) ereport(ERROR, diff --git a/src/backend/snowball/dict_snowball.c b/src/backend/snowball/dict_snowball.c index 043681ec2d..78c9f73ef0 100644 --- a/src/backend/snowball/dict_snowball.c +++ b/src/backend/snowball/dict_snowball.c @@ -192,7 +192,7 @@ dsnowball_init(PG_FUNCTION_ARGS) { DefElem *defel = (DefElem *) lfirst(l); - if (pg_strcasecmp("StopWords", defel->defname) == 0) + if (strcmp(defel->defname, "stopwords") == 0) { if (stoploaded) ereport(ERROR, @@ -201,7 +201,7 @@ dsnowball_init(PG_FUNCTION_ARGS) readstoplist(defGetString(defel), &d->stoplist, lowerstr); stoploaded = true; } - else if (pg_strcasecmp("Language", defel->defname) == 0) + else if (strcmp(defel->defname, "language") == 0) { if (d->stem) ereport(ERROR, diff --git a/src/backend/tsearch/dict_ispell.c b/src/backend/tsearch/dict_ispell.c index 0d706795ad..edc6547700 100644 --- a/src/backend/tsearch/dict_ispell.c +++ b/src/backend/tsearch/dict_ispell.c @@ -44,7 +44,7 @@ dispell_init(PG_FUNCTION_ARGS) { DefElem *defel = (DefElem *) lfirst(l); - if (pg_strcasecmp(defel->defname, "DictFile") == 0) + if (strcmp(defel->defname, "dictfile") == 0) { if (dictloaded) ereport(ERROR, @@ -55,7 +55,7 @@ dispell_init(PG_FUNCTION_ARGS) "dict")); dictloaded = true; } - else if (pg_strcasecmp(defel->defname, "AffFile") == 0) + else if (strcmp(defel->defname, "afffile") == 0) { if (affloaded) ereport(ERROR, @@ -66,7 +66,7 @@ dispell_init(PG_FUNCTION_ARGS) "affix")); affloaded = true; 
} - else if (pg_strcasecmp(defel->defname, "StopWords") == 0) + else if (strcmp(defel->defname, "stopwords") == 0) { if (stoploaded) ereport(ERROR, diff --git a/src/backend/tsearch/dict_simple.c b/src/backend/tsearch/dict_simple.c index 268b4e48cf..ac6a24eba5 100644 --- a/src/backend/tsearch/dict_simple.c +++ b/src/backend/tsearch/dict_simple.c @@ -41,7 +41,7 @@ dsimple_init(PG_FUNCTION_ARGS) { DefElem *defel = (DefElem *) lfirst(l); - if (pg_strcasecmp("StopWords", defel->defname) == 0) + if (strcmp(defel->defname, "stopwords") == 0) { if (stoploaded) ereport(ERROR, @@ -50,7 +50,7 @@ dsimple_init(PG_FUNCTION_ARGS) readstoplist(defGetString(defel), &d->stoplist, lowerstr); stoploaded = true; } - else if (pg_strcasecmp("Accept", defel->defname) == 0) + else if (strcmp(defel->defname, "accept") == 0) { if (acceptloaded) ereport(ERROR, diff --git a/src/backend/tsearch/dict_synonym.c b/src/backend/tsearch/dict_synonym.c index 8ca65f3ded..c011886cb0 100644 --- a/src/backend/tsearch/dict_synonym.c +++ b/src/backend/tsearch/dict_synonym.c @@ -108,9 +108,9 @@ dsynonym_init(PG_FUNCTION_ARGS) { DefElem *defel = (DefElem *) lfirst(l); - if (pg_strcasecmp("Synonyms", defel->defname) == 0) + if (strcmp(defel->defname, "synonyms") == 0) filename = defGetString(defel); - else if (pg_strcasecmp("CaseSensitive", defel->defname) == 0) + else if (strcmp(defel->defname, "casesensitive") == 0) case_sensitive = defGetBoolean(defel); else ereport(ERROR, diff --git a/src/backend/tsearch/dict_thesaurus.c b/src/backend/tsearch/dict_thesaurus.c index 23aaac8d07..24364e646d 100644 --- a/src/backend/tsearch/dict_thesaurus.c +++ b/src/backend/tsearch/dict_thesaurus.c @@ -616,7 +616,7 @@ thesaurus_init(PG_FUNCTION_ARGS) { DefElem *defel = (DefElem *) lfirst(l); - if (pg_strcasecmp("DictFile", defel->defname) == 0) + if (strcmp(defel->defname, "dictfile") == 0) { if (fileloaded) ereport(ERROR, @@ -625,7 +625,7 @@ thesaurus_init(PG_FUNCTION_ARGS) thesaurusRead(defGetString(defel), d); fileloaded = true; } - else if (pg_strcasecmp("Dictionary", defel->defname) == 0) + else if (strcmp(defel->defname, "dictionary") == 0) { if (subdictname) ereport(ERROR, diff --git a/src/include/access/reloptions.h b/src/include/access/reloptions.h index 94739f7ac6..b32c1e9efe 100644 --- a/src/include/access/reloptions.h +++ b/src/include/access/reloptions.h @@ -166,7 +166,7 @@ typedef struct * code block. 
*/ #define HAVE_RELOPTION(optname, option) \ - (pg_strncasecmp(option.gen->name, optname, option.gen->namelen + 1) == 0) + (strncmp(option.gen->name, optname, option.gen->namelen + 1) == 0) #define HANDLE_INT_RELOPTION(optname, var, option, wasset) \ do { \ diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out index dbce7d3e8b..f85e913850 100644 --- a/src/test/regress/expected/aggregates.out +++ b/src/test/regress/expected/aggregates.out @@ -2007,12 +2007,13 @@ BEGIN END IF; RETURN NULL; END$$; -CREATE AGGREGATE balk( - BASETYPE = int4, +CREATE AGGREGATE balk(int4) +( SFUNC = balkifnull(int8, int4), STYPE = int8, - "PARALLEL" = SAFE, - INITCOND = '0'); + PARALLEL = SAFE, + INITCOND = '0' +); SELECT balk(hundred) FROM tenk1; balk ------ @@ -2035,12 +2036,12 @@ BEGIN END IF; RETURN NULL; END$$; -CREATE AGGREGATE balk( - BASETYPE = int4, +CREATE AGGREGATE balk(int4) +( SFUNC = int4_sum(int8, int4), STYPE = int8, COMBINEFUNC = balkifnull(int8, int8), - "PARALLEL" = SAFE, + PARALLEL = SAFE, INITCOND = '0' ); -- force use of parallelism diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out index 767c09bec5..200828aa99 100644 --- a/src/test/regress/expected/alter_generic.out +++ b/src/test/regress/expected/alter_generic.out @@ -633,6 +633,9 @@ ALTER TEXT SEARCH TEMPLATE alt_ts_temp2 SET SCHEMA alt_nsp2; -- OK CREATE TEXT SEARCH TEMPLATE alt_ts_temp2 (lexize=dsimple_lexize); ALTER TEXT SEARCH TEMPLATE alt_ts_temp2 SET SCHEMA alt_nsp2; -- failed (name conflict) ERROR: text search template "alt_ts_temp2" already exists in schema "alt_nsp2" +-- invalid: non-lowercase quoted identifiers +CREATE TEXT SEARCH TEMPLATE tstemp_case ("Init" = init_function); +ERROR: text search template parameter "Init" not recognized SELECT nspname, tmplname FROM pg_ts_template t, pg_namespace n WHERE t.tmplnamespace = n.oid AND nspname like 'alt_nsp%' @@ -659,6 +662,9 @@ CREATE TEXT SEARCH PARSER alt_ts_prs2 (start = prsd_start, gettoken = prsd_nexttoken, end = prsd_end, lextypes = prsd_lextype); ALTER TEXT SEARCH PARSER alt_ts_prs2 SET SCHEMA alt_nsp2; -- failed (name conflict) ERROR: text search parser "alt_ts_prs2" already exists in schema "alt_nsp2" +-- invalid: non-lowercase quoted identifiers +CREATE TEXT SEARCH PARSER tspars_case ("Start" = start_function); +ERROR: text search parser parameter "Start" not recognized SELECT nspname, prsname FROM pg_ts_parser t, pg_namespace n WHERE t.prsnamespace = n.oid AND nspname like 'alt_nsp%' diff --git a/src/test/regress/expected/alter_operator.out b/src/test/regress/expected/alter_operator.out index ef47affd7b..71bd484282 100644 --- a/src/test/regress/expected/alter_operator.out +++ b/src/test/regress/expected/alter_operator.out @@ -121,6 +121,9 @@ ALTER OPERATOR === (boolean, boolean) SET (COMMUTATOR = !==); ERROR: operator attribute "commutator" cannot be changed ALTER OPERATOR === (boolean, boolean) SET (NEGATOR = !==); ERROR: operator attribute "negator" cannot be changed +-- invalid: non-lowercase quoted identifiers +ALTER OPERATOR & (bit, bit) SET ("Restrict" = _int_contsel, "Join" = _int_contjoinsel); +ERROR: operator attribute "Restrict" not recognized -- -- Test permission check. Must be owner to ALTER OPERATOR. 
-- diff --git a/src/test/regress/expected/collate.out b/src/test/regress/expected/collate.out index b0025c0a87..3bc3713ee1 100644 --- a/src/test/regress/expected/collate.out +++ b/src/test/regress/expected/collate.out @@ -633,6 +633,11 @@ DROP COLLATION mycoll2; -- fail ERROR: cannot drop collation mycoll2 because other objects depend on it DETAIL: table collate_test23 column f1 depends on collation mycoll2 HINT: Use DROP ... CASCADE to drop the dependent objects too. +-- invalid: non-lowercase quoted identifiers +CREATE COLLATION case_coll ("Lc_Collate" = "POSIX", "Lc_Ctype" = "POSIX"); +ERROR: collation attribute "Lc_Collate" not recognized +LINE 1: CREATE COLLATION case_coll ("Lc_Collate" = "POSIX", "Lc_Ctyp... + ^ -- 9.1 bug with useless COLLATE in an expression subject to length coercion CREATE TEMP TABLE vctable (f1 varchar(25)); INSERT INTO vctable VALUES ('foo' COLLATE "C"); diff --git a/src/test/regress/expected/create_aggregate.out b/src/test/regress/expected/create_aggregate.out index ef65cd54ca..b9b7fbcc9e 100644 --- a/src/test/regress/expected/create_aggregate.out +++ b/src/test/regress/expected/create_aggregate.out @@ -195,3 +195,33 @@ CREATE AGGREGATE wrongreturntype (float8) minvfunc = float8mi_int ); ERROR: return type of inverse transition function float8mi_int is not double precision +-- invalid: non-lowercase quoted identifiers +CREATE AGGREGATE case_agg ( -- old syntax + "Sfunc1" = int4pl, + "Basetype" = int4, + "Stype1" = int4, + "Initcond1" = '0', + "Parallel" = safe +); +WARNING: aggregate attribute "Sfunc1" not recognized +WARNING: aggregate attribute "Basetype" not recognized +WARNING: aggregate attribute "Stype1" not recognized +WARNING: aggregate attribute "Initcond1" not recognized +WARNING: aggregate attribute "Parallel" not recognized +ERROR: aggregate stype must be specified +CREATE AGGREGATE case_agg(float8) +( + "Stype" = internal, + "Sfunc" = ordered_set_transition, + "Finalfunc" = percentile_disc_final, + "Finalfunc_extra" = true, + "Finalfunc_modify" = read_write, + "Parallel" = safe +); +WARNING: aggregate attribute "Stype" not recognized +WARNING: aggregate attribute "Sfunc" not recognized +WARNING: aggregate attribute "Finalfunc" not recognized +WARNING: aggregate attribute "Finalfunc_extra" not recognized +WARNING: aggregate attribute "Finalfunc_modify" not recognized +WARNING: aggregate attribute "Parallel" not recognized +ERROR: aggregate stype must be specified diff --git a/src/test/regress/expected/create_operator.out b/src/test/regress/expected/create_operator.out index 3a216c2ca8..3c4ccae1e7 100644 --- a/src/test/regress/expected/create_operator.out +++ b/src/test/regress/expected/create_operator.out @@ -172,3 +172,26 @@ CREATE OPERATOR #*# ( ); ERROR: permission denied for type type_op6 ROLLBACK; +-- invalid: non-lowercase quoted identifiers +CREATE OPERATOR === +( + "Leftarg" = box, + "Rightarg" = box, + "Procedure" = area_equal_procedure, + "Commutator" = ===, + "Negator" = !==, + "Restrict" = area_restriction_procedure, + "Join" = area_join_procedure, + "Hashes", + "Merges" +); +WARNING: operator attribute "Leftarg" not recognized +WARNING: operator attribute "Rightarg" not recognized +WARNING: operator attribute "Procedure" not recognized +WARNING: operator attribute "Commutator" not recognized +WARNING: operator attribute "Negator" not recognized +WARNING: operator attribute "Restrict" not recognized +WARNING: operator attribute "Join" not recognized +WARNING: operator attribute "Hashes" not recognized +WARNING: operator attribute 
"Merges" not recognized +ERROR: operator procedure must be specified diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index 8e745402ae..ef0906776e 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -215,6 +215,11 @@ CREATE TABLE IF NOT EXISTS test_tsvector( t text ); NOTICE: relation "test_tsvector" already exists, skipping +-- invalid: non-lowercase quoted reloptions identifiers +CREATE TABLE tas_case WITH ("Fillfactor" = 10) AS SELECT 1 a; +ERROR: unrecognized parameter "Fillfactor" +CREATE TABLE tas_case (a text) WITH ("Oids" = true); +ERROR: unrecognized parameter "Oids" CREATE UNLOGGED TABLE unlogged1 (a int primary key); -- OK CREATE TEMPORARY TABLE unlogged2 (a int primary key); -- OK SELECT relname, relkind, relpersistence FROM pg_class WHERE relname ~ '^unlogged\d' ORDER BY relname; diff --git a/src/test/regress/expected/create_type.out b/src/test/regress/expected/create_type.out index 5886a1f37f..4eef32bf4d 100644 --- a/src/test/regress/expected/create_type.out +++ b/src/test/regress/expected/create_type.out @@ -83,6 +83,34 @@ SELECT * FROM default_test; zippo | 42 (1 row) +-- invalid: non-lowercase quoted identifiers +CREATE TYPE case_int42 ( + "Internallength" = 4, + "Input" = int42_in, + "Output" = int42_out, + "Alignment" = int4, + "Default" = 42, + "Passedbyvalue" +); +WARNING: type attribute "Internallength" not recognized +LINE 2: "Internallength" = 4, + ^ +WARNING: type attribute "Input" not recognized +LINE 3: "Input" = int42_in, + ^ +WARNING: type attribute "Output" not recognized +LINE 4: "Output" = int42_out, + ^ +WARNING: type attribute "Alignment" not recognized +LINE 5: "Alignment" = int4, + ^ +WARNING: type attribute "Default" not recognized +LINE 6: "Default" = 42, + ^ +WARNING: type attribute "Passedbyvalue" not recognized +LINE 7: "Passedbyvalue" + ^ +ERROR: type input function must be specified -- Test stand-alone composite type CREATE TYPE default_test_row AS (f1 text_w_default, f2 int42); CREATE FUNCTION get_default_test() RETURNS SETOF default_test_row AS ' diff --git a/src/test/regress/expected/tsdicts.out b/src/test/regress/expected/tsdicts.out index 0744ef803b..0c1d7c7675 100644 --- a/src/test/regress/expected/tsdicts.out +++ b/src/test/regress/expected/tsdicts.out @@ -580,3 +580,11 @@ SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a 'card':3,10 'invit':2,9 'like':6 'look':5 'order':1,8 (1 row) +-- invalid: non-lowercase quoted identifiers +CREATE TEXT SEARCH DICTIONARY tsdict_case +( + Template = ispell, + "DictFile" = ispell_sample, + "AffFile" = ispell_sample +); +ERROR: unrecognized Ispell parameter: "DictFile" diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql index 6c9b86a616..506d0442d7 100644 --- a/src/test/regress/sql/aggregates.sql +++ b/src/test/regress/sql/aggregates.sql @@ -861,12 +861,13 @@ BEGIN RETURN NULL; END$$; -CREATE AGGREGATE balk( - BASETYPE = int4, +CREATE AGGREGATE balk(int4) +( SFUNC = balkifnull(int8, int4), STYPE = int8, - "PARALLEL" = SAFE, - INITCOND = '0'); + PARALLEL = SAFE, + INITCOND = '0' +); SELECT balk(hundred) FROM tenk1; @@ -888,12 +889,12 @@ BEGIN RETURN NULL; END$$; -CREATE AGGREGATE balk( - BASETYPE = int4, +CREATE AGGREGATE balk(int4) +( SFUNC = int4_sum(int8, int4), STYPE = int8, COMBINEFUNC = balkifnull(int8, int8), - "PARALLEL" = SAFE, + PARALLEL = SAFE, INITCOND = '0' ); diff --git a/src/test/regress/sql/alter_generic.sql 
b/src/test/regress/sql/alter_generic.sql index 311812e351..96be6e752a 100644 --- a/src/test/regress/sql/alter_generic.sql +++ b/src/test/regress/sql/alter_generic.sql @@ -543,6 +543,9 @@ ALTER TEXT SEARCH TEMPLATE alt_ts_temp2 SET SCHEMA alt_nsp2; -- OK CREATE TEXT SEARCH TEMPLATE alt_ts_temp2 (lexize=dsimple_lexize); ALTER TEXT SEARCH TEMPLATE alt_ts_temp2 SET SCHEMA alt_nsp2; -- failed (name conflict) +-- invalid: non-lowercase quoted identifiers +CREATE TEXT SEARCH TEMPLATE tstemp_case ("Init" = init_function); + SELECT nspname, tmplname FROM pg_ts_template t, pg_namespace n WHERE t.tmplnamespace = n.oid AND nspname like 'alt_nsp%' @@ -565,6 +568,9 @@ CREATE TEXT SEARCH PARSER alt_ts_prs2 (start = prsd_start, gettoken = prsd_nexttoken, end = prsd_end, lextypes = prsd_lextype); ALTER TEXT SEARCH PARSER alt_ts_prs2 SET SCHEMA alt_nsp2; -- failed (name conflict) +-- invalid: non-lowercase quoted identifiers +CREATE TEXT SEARCH PARSER tspars_case ("Start" = start_function); + SELECT nspname, prsname FROM pg_ts_parser t, pg_namespace n WHERE t.prsnamespace = n.oid AND nspname like 'alt_nsp%' diff --git a/src/test/regress/sql/alter_operator.sql b/src/test/regress/sql/alter_operator.sql index 51ffd7e0e0..fd40370165 100644 --- a/src/test/regress/sql/alter_operator.sql +++ b/src/test/regress/sql/alter_operator.sql @@ -81,6 +81,9 @@ ALTER OPERATOR === (boolean, boolean) SET (JOIN = non_existent_func); ALTER OPERATOR === (boolean, boolean) SET (COMMUTATOR = !==); ALTER OPERATOR === (boolean, boolean) SET (NEGATOR = !==); +-- invalid: non-lowercase quoted identifiers +ALTER OPERATOR & (bit, bit) SET ("Restrict" = _int_contsel, "Join" = _int_contjoinsel); + -- -- Test permission check. Must be owner to ALTER OPERATOR. -- diff --git a/src/test/regress/sql/collate.sql b/src/test/regress/sql/collate.sql index 698f577490..4ddde95a5e 100644 --- a/src/test/regress/sql/collate.sql +++ b/src/test/regress/sql/collate.sql @@ -239,6 +239,8 @@ DROP COLLATION mycoll1; CREATE TABLE collate_test23 (f1 text collate mycoll2); DROP COLLATION mycoll2; -- fail +-- invalid: non-lowercase quoted identifiers +CREATE COLLATION case_coll ("Lc_Collate" = "POSIX", "Lc_Ctype" = "POSIX"); -- 9.1 bug with useless COLLATE in an expression subject to length coercion diff --git a/src/test/regress/sql/create_aggregate.sql b/src/test/regress/sql/create_aggregate.sql index 46e773bfe3..590ca9a624 100644 --- a/src/test/regress/sql/create_aggregate.sql +++ b/src/test/regress/sql/create_aggregate.sql @@ -211,3 +211,23 @@ CREATE AGGREGATE wrongreturntype (float8) msfunc = float8pl, minvfunc = float8mi_int ); + +-- invalid: non-lowercase quoted identifiers + +CREATE AGGREGATE case_agg ( -- old syntax + "Sfunc1" = int4pl, + "Basetype" = int4, + "Stype1" = int4, + "Initcond1" = '0', + "Parallel" = safe +); + +CREATE AGGREGATE case_agg(float8) +( + "Stype" = internal, + "Sfunc" = ordered_set_transition, + "Finalfunc" = percentile_disc_final, + "Finalfunc_extra" = true, + "Finalfunc_modify" = read_write, + "Parallel" = safe +); diff --git a/src/test/regress/sql/create_operator.sql b/src/test/regress/sql/create_operator.sql index 0e5d6356bc..bb9907b3ed 100644 --- a/src/test/regress/sql/create_operator.sql +++ b/src/test/regress/sql/create_operator.sql @@ -179,3 +179,17 @@ CREATE OPERATOR #*# ( procedure = fn_op6 ); ROLLBACK; + +-- invalid: non-lowercase quoted identifiers +CREATE OPERATOR === +( + "Leftarg" = box, + "Rightarg" = box, + "Procedure" = area_equal_procedure, + "Commutator" = ===, + "Negator" = !==, + "Restrict" = 
area_restriction_procedure, + "Join" = area_join_procedure, + "Hashes", + "Merges" +); diff --git a/src/test/regress/sql/create_table.sql b/src/test/regress/sql/create_table.sql index 8f9991ef18..10e5d49e8e 100644 --- a/src/test/regress/sql/create_table.sql +++ b/src/test/regress/sql/create_table.sql @@ -253,6 +253,10 @@ CREATE TABLE IF NOT EXISTS test_tsvector( t text ); +-- invalid: non-lowercase quoted reloptions identifiers +CREATE TABLE tas_case WITH ("Fillfactor" = 10) AS SELECT 1 a; +CREATE TABLE tas_case (a text) WITH ("Oids" = true); + CREATE UNLOGGED TABLE unlogged1 (a int primary key); -- OK CREATE TEMPORARY TABLE unlogged2 (a int primary key); -- OK SELECT relname, relkind, relpersistence FROM pg_class WHERE relname ~ '^unlogged\d' ORDER BY relname; diff --git a/src/test/regress/sql/create_type.sql b/src/test/regress/sql/create_type.sql index a28303aa6a..2123d63e2e 100644 --- a/src/test/regress/sql/create_type.sql +++ b/src/test/regress/sql/create_type.sql @@ -84,6 +84,16 @@ INSERT INTO default_test DEFAULT VALUES; SELECT * FROM default_test; +-- invalid: non-lowercase quoted identifiers +CREATE TYPE case_int42 ( + "Internallength" = 4, + "Input" = int42_in, + "Output" = int42_out, + "Alignment" = int4, + "Default" = 42, + "Passedbyvalue" +); + -- Test stand-alone composite type CREATE TYPE default_test_row AS (f1 text_w_default, f2 int42); diff --git a/src/test/regress/sql/tsdicts.sql b/src/test/regress/sql/tsdicts.sql index a5a569e1ad..1633c0d066 100644 --- a/src/test/regress/sql/tsdicts.sql +++ b/src/test/regress/sql/tsdicts.sql @@ -188,3 +188,11 @@ ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one'); SELECT to_tsvector('thesaurus_tst', 'Supernovae star is very new star and usually called supernovae (abbreviation SN)'); SELECT to_tsvector('thesaurus_tst', 'Booking tickets is looking like a booking a tickets'); + +-- invalid: non-lowercase quoted identifiers +CREATE TEXT SEARCH DICTIONARY tsdict_case +( + Template = ispell, + "DictFile" = ispell_sample, + "AffFile" = ispell_sample +); From ba8c2dfffd8e018fa0fae554fee69a7b7e93472e Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Sat, 27 Jan 2018 13:13:52 +0100 Subject: [PATCH 0906/1087] Add missing semicolons in documentation examples Author: Daniel Gustafsson --- doc/src/sgml/ddl.sgml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 3244399782..1e1f3428a6 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -3097,14 +3097,14 @@ CREATE TABLE measurement ( CREATE TABLE measurement_y2006m02 PARTITION OF measurement - FOR VALUES FROM ('2006-02-01') TO ('2006-03-01') + FOR VALUES FROM ('2006-02-01') TO ('2006-03-01'); CREATE TABLE measurement_y2006m03 PARTITION OF measurement - FOR VALUES FROM ('2006-03-01') TO ('2006-04-01') + FOR VALUES FROM ('2006-03-01') TO ('2006-04-01'); ... CREATE TABLE measurement_y2007m11 PARTITION OF measurement - FOR VALUES FROM ('2007-11-01') TO ('2007-12-01') + FOR VALUES FROM ('2007-11-01') TO ('2007-12-01'); CREATE TABLE measurement_y2007m12 PARTITION OF measurement FOR VALUES FROM ('2007-12-01') TO ('2008-01-01') From 2e668c522e58854ae19b7fdc5ac23f9b8a1275f5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 27 Jan 2018 13:52:24 -0500 Subject: [PATCH 0907/1087] Avoid crash during EvalPlanQual recheck of an inner indexscan. 
Commit 09529a70b changed nodeIndexscan.c and nodeIndexonlyscan.c to postpone initialization of the indexscan proper until the first tuple fetch. It overlooked the question of mark/restore behavior, which means that if some caller attempts to mark the scan before the first tuple fetch, you get a null pointer dereference. The only existing user of mark/restore is nodeMergejoin.c, which (somewhat accidentally) will never attempt to set a mark before the first inner tuple unless the inner child node is a Material node. Hence the case can't arise normally, so it seems sufficient to document the assumption at both ends. However, during an EvalPlanQual recheck, ExecScanFetch doesn't call IndexNext but just returns the jammed-in test tuple. Therefore, if we're doing a recheck in a plan tree with a mergejoin with inner indexscan, it's possible to reach ExecIndexMarkPos with iss_ScanDesc still null, as reported by Guo Xiang Tan in bug #15032. Really, when there's a test tuple supplied during an EPQ recheck, touching the index at all is the wrong thing: rather, the behavior of mark/restore ought to amount to saving and restoring the es_epqScanDone flag. We can avoid finding a place to actually save the flag, for the moment, because given the assumption that no caller will set a mark before fetching a tuple, es_epqScanDone must always be set by the time we try to mark. So the actual behavior change required is just to not reach the index access if a test tuple is supplied. The set of plan node types that need to consider this issue are those that support EPQ test tuples (i.e., call ExecScan()) and also support mark/restore; which is to say, IndexScan, IndexOnlyScan, and perhaps CustomScan. It's tempting to try to fix the problem in one place by teaching ExecMarkPos() itself about EPQ; but ExecMarkPos supports some plan types that aren't Scans, and also it seems risky to make assumptions about what a CustomScan wants to do here. Also, the most likely future change here is to decide that we do need to support marks placed before the first tuple, which would require additional work in IndexScan and IndexOnlyScan in any case. Hence, fix the EPQ issue in nodeIndexscan.c and nodeIndexonlyscan.c, accepting the small amount of code duplicated thereby, and leave it to CustomScan providers to fix this bug if they have it. Back-patch to v10 where commit 09529a70b came in. In earlier branches, the index_markpos() call is a waste of cycles when EPQ is active, but no more than that, so it doesn't seem appropriate to back-patch further. Discussion: https://postgr.es/m/20180126074932.3098.97815@wrigleys.postgresql.org --- src/backend/executor/nodeIndexonlyscan.c | 45 +++++++++++++++++++ src/backend/executor/nodeIndexscan.c | 45 +++++++++++++++++++ src/backend/executor/nodeMergejoin.c | 4 ++ .../isolation/expected/eval-plan-qual.out | 33 ++++++++++++++ src/test/isolation/specs/eval-plan-qual.spec | 18 +++++++- 5 files changed, 144 insertions(+), 1 deletion(-) diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c index 9b7f470ee2..f61b3abf32 100644 --- a/src/backend/executor/nodeIndexonlyscan.c +++ b/src/backend/executor/nodeIndexonlyscan.c @@ -421,11 +421,39 @@ ExecEndIndexOnlyScan(IndexOnlyScanState *node) /* ---------------------------------------------------------------- * ExecIndexOnlyMarkPos + * + * Note: we assume that no caller attempts to set a mark before having read + * at least one tuple. Otherwise, ioss_ScanDesc might still be NULL. 
* ---------------------------------------------------------------- */ void ExecIndexOnlyMarkPos(IndexOnlyScanState *node) { + EState *estate = node->ss.ps.state; + + if (estate->es_epqTuple != NULL) + { + /* + * We are inside an EvalPlanQual recheck. If a test tuple exists for + * this relation, then we shouldn't access the index at all. We would + * instead need to save, and later restore, the state of the + * es_epqScanDone flag, so that re-fetching the test tuple is + * possible. However, given the assumption that no caller sets a mark + * at the start of the scan, we can only get here with es_epqScanDone + * already set, and so no state need be saved. + */ + Index scanrelid = ((Scan *) node->ss.ps.plan)->scanrelid; + + Assert(scanrelid > 0); + if (estate->es_epqTupleSet[scanrelid - 1]) + { + /* Verify the claim above */ + if (!estate->es_epqScanDone[scanrelid - 1]) + elog(ERROR, "unexpected ExecIndexOnlyMarkPos call in EPQ recheck"); + return; + } + } + index_markpos(node->ioss_ScanDesc); } @@ -436,6 +464,23 @@ ExecIndexOnlyMarkPos(IndexOnlyScanState *node) void ExecIndexOnlyRestrPos(IndexOnlyScanState *node) { + EState *estate = node->ss.ps.state; + + if (estate->es_epqTuple != NULL) + { + /* See comments in ExecIndexOnlyMarkPos */ + Index scanrelid = ((Scan *) node->ss.ps.plan)->scanrelid; + + Assert(scanrelid > 0); + if (estate->es_epqTupleSet[scanrelid - 1]) + { + /* Verify the claim above */ + if (!estate->es_epqScanDone[scanrelid - 1]) + elog(ERROR, "unexpected ExecIndexOnlyRestrPos call in EPQ recheck"); + return; + } + } + index_restrpos(node->ioss_ScanDesc); } diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c index 54fafa5033..eed69a0c66 100644 --- a/src/backend/executor/nodeIndexscan.c +++ b/src/backend/executor/nodeIndexscan.c @@ -849,11 +849,39 @@ ExecEndIndexScan(IndexScanState *node) /* ---------------------------------------------------------------- * ExecIndexMarkPos + * + * Note: we assume that no caller attempts to set a mark before having read + * at least one tuple. Otherwise, iss_ScanDesc might still be NULL. * ---------------------------------------------------------------- */ void ExecIndexMarkPos(IndexScanState *node) { + EState *estate = node->ss.ps.state; + + if (estate->es_epqTuple != NULL) + { + /* + * We are inside an EvalPlanQual recheck. If a test tuple exists for + * this relation, then we shouldn't access the index at all. We would + * instead need to save, and later restore, the state of the + * es_epqScanDone flag, so that re-fetching the test tuple is + * possible. However, given the assumption that no caller sets a mark + * at the start of the scan, we can only get here with es_epqScanDone + * already set, and so no state need be saved. 
+ */ + Index scanrelid = ((Scan *) node->ss.ps.plan)->scanrelid; + + Assert(scanrelid > 0); + if (estate->es_epqTupleSet[scanrelid - 1]) + { + /* Verify the claim above */ + if (!estate->es_epqScanDone[scanrelid - 1]) + elog(ERROR, "unexpected ExecIndexMarkPos call in EPQ recheck"); + return; + } + } + index_markpos(node->iss_ScanDesc); } @@ -864,6 +892,23 @@ ExecIndexMarkPos(IndexScanState *node) void ExecIndexRestrPos(IndexScanState *node) { + EState *estate = node->ss.ps.state; + + if (estate->es_epqTuple != NULL) + { + /* See comments in ExecIndexMarkPos */ + Index scanrelid = ((Scan *) node->ss.ps.plan)->scanrelid; + + Assert(scanrelid > 0); + if (estate->es_epqTupleSet[scanrelid - 1]) + { + /* Verify the claim above */ + if (!estate->es_epqScanDone[scanrelid - 1]) + elog(ERROR, "unexpected ExecIndexRestrPos call in EPQ recheck"); + return; + } + } + index_restrpos(node->iss_ScanDesc); } diff --git a/src/backend/executor/nodeMergejoin.c b/src/backend/executor/nodeMergejoin.c index b52946f180..ec5f82f6a9 100644 --- a/src/backend/executor/nodeMergejoin.c +++ b/src/backend/executor/nodeMergejoin.c @@ -1502,6 +1502,10 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) * * Currently, only Material wants the extra MARKs, and it will be helpful * only if eflags doesn't specify REWIND. + * + * Note that for IndexScan and IndexOnlyScan, it is *necessary* that we + * not set mj_ExtraMarks; otherwise we might attempt to set a mark before + * the first inner tuple, which they do not support. */ if (IsA(innerPlan(node), Material) && (eflags & EXEC_FLAG_REWIND) == 0 && diff --git a/src/test/isolation/expected/eval-plan-qual.out b/src/test/isolation/expected/eval-plan-qual.out index 10c784a05f..eb40717679 100644 --- a/src/test/isolation/expected/eval-plan-qual.out +++ b/src/test/isolation/expected/eval-plan-qual.out @@ -184,3 +184,36 @@ step readwcte: <... completed> id value 1 tableAValue2 + +starting permutation: wrjt selectjoinforupdate c2 c1 +step wrjt: UPDATE jointest SET data = 42 WHERE id = 7; +step selectjoinforupdate: + set enable_nestloop to 0; + set enable_hashjoin to 0; + set enable_seqscan to 0; + explain (costs off) + select * from jointest a join jointest b on a.id=b.id for update; + select * from jointest a join jointest b on a.id=b.id for update; + +step c2: COMMIT; +step selectjoinforupdate: <... 
completed> +QUERY PLAN + +LockRows + -> Merge Join + Merge Cond: (a.id = b.id) + -> Index Scan using jointest_id_idx on jointest a + -> Index Scan using jointest_id_idx on jointest b +id data id data + +1 0 1 0 +2 0 2 0 +3 0 3 0 +4 0 4 0 +5 0 5 0 +6 0 6 0 +7 42 7 42 +8 0 8 0 +9 0 9 0 +10 0 10 0 +step c1: COMMIT; diff --git a/src/test/isolation/specs/eval-plan-qual.spec b/src/test/isolation/specs/eval-plan-qual.spec index 7ff6f6b8cc..d2b34ec7cc 100644 --- a/src/test/isolation/specs/eval-plan-qual.spec +++ b/src/test/isolation/specs/eval-plan-qual.spec @@ -21,13 +21,16 @@ setup CREATE TABLE table_b (id integer, value text); INSERT INTO table_a VALUES (1, 'tableAValue'); INSERT INTO table_b VALUES (1, 'tableBValue'); + + CREATE TABLE jointest AS SELECT generate_series(1,10) AS id, 0 AS data; + CREATE INDEX ON jointest(id); } teardown { DROP TABLE accounts; DROP TABLE p CASCADE; - DROP TABLE table_a, table_b; + DROP TABLE table_a, table_b, jointest; } session "s1" @@ -78,6 +81,17 @@ step "updateforss" { UPDATE table_b SET value = 'newTableBValue' WHERE id = 1; } +# these tests exercise mark/restore during EPQ recheck, cf bug #15032 + +step "selectjoinforupdate" { + set enable_nestloop to 0; + set enable_hashjoin to 0; + set enable_seqscan to 0; + explain (costs off) + select * from jointest a join jointest b on a.id=b.id for update; + select * from jointest a join jointest b on a.id=b.id for update; +} + session "s2" setup { BEGIN ISOLATION LEVEL READ COMMITTED; } @@ -104,6 +118,7 @@ step "readforss" { WHERE ta.id = 1 FOR UPDATE OF ta; } step "wrtwcte" { UPDATE table_a SET value = 'tableAValue2' WHERE id = 1; } +step "wrjt" { UPDATE jointest SET data = 42 WHERE id = 7; } step "c2" { COMMIT; } session "s3" @@ -135,3 +150,4 @@ permutation "wx2" "partiallock" "c2" "c1" "read" permutation "wx2" "lockwithvalues" "c2" "c1" "read" permutation "updateforss" "readforss" "c1" "c2" permutation "wrtwcte" "readwcte" "c1" "c2" +permutation "wrjt" "selectjoinforupdate" "c2" "c1" From 41fc04ff913de6e0ffdbffff25298b39cd4ba42d Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 27 Jan 2018 16:42:28 -0500 Subject: [PATCH 0908/1087] Update time zone data files to tzdata release 2018c. DST law changes in Brazil, Sao Tome and Principe. Historical corrections for Bolivia, Japan, and South Sudan. The "US/Pacific-New" zone has been removed (it was only a link to America/Los_Angeles anyway). --- src/timezone/data/tzdata.zi | 22 +++++++++++++--------- src/timezone/known_abbrevs.txt | 1 - src/timezone/tznames/Africa.txt | 3 +-- 3 files changed, 14 insertions(+), 12 deletions(-) diff --git a/src/timezone/data/tzdata.zi b/src/timezone/data/tzdata.zi index a818947009..afa8f85c43 100644 --- a/src/timezone/data/tzdata.zi +++ b/src/timezone/data/tzdata.zi @@ -1,3 +1,4 @@ +# version 2018c # This zic input file is in the public domain. 
R A 1916 o - Jun 14 23s 1 S R A 1916 1919 - O Sun>=1 23s 0 - @@ -50,7 +51,6 @@ Li Africa/Abidjan Africa/Freetown Li Africa/Abidjan Africa/Lome Li Africa/Abidjan Africa/Nouakchott Li Africa/Abidjan Africa/Ouagadougou -Li Africa/Abidjan Africa/Sao_Tome Li Africa/Abidjan Atlantic/St_Helena R B 1940 o - Jul 15 0 1 S R B 1940 o - O 1 0 0 - @@ -237,6 +237,10 @@ Li Africa/Lagos Africa/Niamey Li Africa/Lagos Africa/Porto-Novo Z Indian/Reunion 3:41:52 - LMT 1911 Jun 4 - +04 +Z Africa/Sao_Tome 0:26:56 - LMT 1884 +-0:36:45 - LMT 1912 +0 - GMT 2018 Ja 1 1 +1 - WAT Z Indian/Mahe 3:41:48 - LMT 1906 Jun 4 - +04 R H 1942 1943 - S Sun>=15 2 1 - @@ -656,10 +660,10 @@ R Z 2013 ma - O lastSun 2 0 S Z Asia/Jerusalem 2:20:54 - LMT 1880 2:20:40 - JMT 1918 2 Z I%sT -R a 1948 o - May Sun>=1 2 1 D -R a 1948 1951 - S Sat>=8 2 0 S -R a 1949 o - Ap Sun>=1 2 1 D -R a 1950 1951 - May Sun>=1 2 1 D +R a 1948 o - May Sat>=1 24 1 D +R a 1948 1951 - S Sun>=9 0 0 S +R a 1949 o - Ap Sat>=1 24 1 D +R a 1950 1951 - May Sat>=1 24 1 D Z Asia/Tokyo 9:18:59 - LMT 1887 D 31 15u 9 a J%sT R b 1973 o - Jun 6 0 1 S @@ -3606,7 +3610,7 @@ Z America/Argentina/Ushuaia -4:33:12 - LMT 1894 O 31 Li America/Curacao America/Aruba Z America/La_Paz -4:32:36 - LMT 1890 -4:32:36 - CMT 1931 O 15 --4:32:36 1 BOST 1932 Mar 21 +-4:32:36 1 BST 1932 Mar 21 -4 - -04 R As 1931 o - O 3 11 1 S R As 1932 1933 - Ap 1 0 0 - @@ -3658,12 +3662,13 @@ R As 2005 o - O 16 0 1 S R As 2006 o - N 5 0 1 S R As 2007 o - F 25 0 0 - R As 2007 o - O Sun>=8 0 1 S -R As 2008 ma - O Sun>=15 0 1 S +R As 2008 2017 - O Sun>=15 0 1 S R As 2008 2011 - F Sun>=15 0 0 - R As 2012 o - F Sun>=22 0 0 - R As 2013 2014 - F Sun>=15 0 0 - R As 2015 o - F Sun>=22 0 0 - R As 2016 2022 - F Sun>=15 0 0 - +R As 2018 ma - N Sun>=1 0 1 S R As 2023 o - F Sun>=22 0 0 - R As 2024 2025 - F Sun>=15 0 0 - R As 2026 o - F Sun>=22 0 0 - @@ -4024,6 +4029,7 @@ Z Etc/GMT+9 -9 - -09 Z Etc/GMT+10 -10 - -10 Z Etc/GMT+11 -11 - -11 Z Etc/GMT+12 -12 - -12 +Z Factory 0 - -00 Li Africa/Nairobi Africa/Asmera Li Africa/Abidjan Africa/Timbuktu Li America/Argentina/Catamarca America/Argentina/ComodRivadavia @@ -4142,5 +4148,3 @@ Li Etc/UTC UTC Li Etc/UTC Universal Li Europe/Moscow W-SU Li Etc/UTC Zulu -Li America/Los_Angeles US/Pacific-New -Z Factory 0 - -00 diff --git a/src/timezone/known_abbrevs.txt b/src/timezone/known_abbrevs.txt index eb48069d87..4db831c62d 100644 --- a/src/timezone/known_abbrevs.txt +++ b/src/timezone/known_abbrevs.txt @@ -96,7 +96,6 @@ SAST 7200 SST -39600 UCT 0 UTC 0 -WAST 7200 D WAT 3600 WEST 3600 D WET 0 diff --git a/src/timezone/tznames/Africa.txt b/src/timezone/tznames/Africa.txt index 0bd0c405f6..2ea08a6508 100644 --- a/src/timezone/tznames/Africa.txt +++ b/src/timezone/tznames/Africa.txt @@ -147,8 +147,7 @@ GMT 0 # Greenwich Mean Time # - SAST South Australian Standard Time (not in IANA database) SAST 7200 # South Africa Standard Time # (Africa/Johannesburg) -WAST 7200 D # West Africa Summer Time - # (Africa/Windhoek) +WAST 7200 D # West Africa Summer Time (obsolete) WAT 3600 # West Africa Time # (Africa/Bangui) # (Africa/Brazzaville) From 010123e144a5a5d395a15067f301a2c2443f49cf Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Sat, 27 Jan 2018 23:05:52 -0500 Subject: [PATCH 0909/1087] C includes: Reorder C includes in partition.c Discussion: https://postgr.es/m/5A69AA50.2060600@lab.ntt.co.jp Author: Etsuro Fujita --- src/backend/catalog/partition.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 
8adc4ee977..e69bbc0345 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -46,11 +46,11 @@ #include "utils/array.h" #include "utils/builtins.h" #include "utils/datum.h" -#include "utils/memutils.h" #include "utils/fmgroids.h" #include "utils/hashutils.h" #include "utils/inval.h" #include "utils/lsyscache.h" +#include "utils/memutils.h" #include "utils/rel.h" #include "utils/ruleutils.h" #include "utils/syscache.h" From 35a528062cc8ccdb51bde6c672991ae64e970847 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 28 Jan 2018 13:39:07 -0500 Subject: [PATCH 0910/1087] Add stack-overflow guards in set-operation planning. create_plan_recurse lacked any stack depth check. This is not per our normal coding rules, but I'd supposed it was safe because earlier planner processing is more complex and presumably should eat more stack. But bug #15033 from Andrew Grossman shows this isn't true, at least not for queries having the form of a many-thousand-way INTERSECT stack. Further testing showed that recurse_set_operations is also capable of being crashed in this way, since it likewise will recurse to the bottom of a parsetree before calling any support functions that might themselves contain any stack checks. However, its stack consumption is only perhaps a third of create_plan_recurse's. It's possible that this particular problem with create_plan_recurse can only manifest in 9.6 and later, since before that we didn't build a Path tree for set operations. But having seen this example, I now have no faith in the proposition that create_plan_recurse doesn't need a stack check, so back-patch to all supported branches. Discussion: https://postgr.es/m/20180127050845.28812.58244@wrigleys.postgresql.org --- src/backend/optimizer/plan/createplan.c | 3 +++ src/backend/optimizer/prep/prepunion.c | 3 +++ 2 files changed, 6 insertions(+) diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index 86e7e74793..c46e1318a6 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -358,6 +358,9 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags) { Plan *plan; + /* Guard against stack overflow due to overly complex plans */ + check_stack_depth(); + switch (best_path->pathtype) { case T_SeqScan: diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c index e6b15348c1..b586f941a8 100644 --- a/src/backend/optimizer/prep/prepunion.c +++ b/src/backend/optimizer/prep/prepunion.c @@ -267,6 +267,9 @@ recurse_set_operations(Node *setOp, PlannerInfo *root, List **pTargetList, double *pNumGroups) { + /* Guard against stack overflow due to overly complex setop nests */ + check_stack_depth(); + if (IsA(setOp, RangeTblRef)) { RangeTblRef *rtr = (RangeTblRef *) setOp; From 15be27460191a9ffb149cc98f6fbf97c369a6b1e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 29 Jan 2018 12:57:09 -0500 Subject: [PATCH 0911/1087] Avoid misleading psql password prompt when username is multiply specified. When a password is needed, cases such as psql -d "postgresql://alice@localhost/testdb" -U bob would incorrectly prompt for "Password for user bob: ", when actually the connection will be attempted with username alice. The priority order of which name to use isn't that important here, but the misleading prompt is. 
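For illustration only (not part of the commit), a minimal libpq client shows the underlying point: once the connection parameters have been parsed, the connection object knows which user name was actually used, however it was supplied. The host and database values here are hypothetical, and no claim is made about which name wins; PQuser() simply reports libpq's decision.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* mimic "psql -d <URI> -U bob": dbname carries a URI with its own user */
        const char *keys[] = {"dbname", "user", NULL};
        const char *vals[] = {"postgresql://alice@localhost/testdb", "bob", NULL};
        PGconn     *conn = PQconnectdbParams(keys, vals, 1);    /* expand_dbname */

        /* even after a failed attempt, PQuser() reports the name libpq used */
        if (conn && PQstatus(conn) == CONNECTION_BAD)
            printf("Password for user %s: ...\n", PQuser(conn));
        PQfinish(conn);
        return 0;
    }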
When we are prompting for a password after initial connection failure, we can fix this reliably by looking at PQuser(conn) to see how libpq interpreted the connection arguments. But when we're doing a forced password prompt because of a -W switch, we can't use that solution. Fortunately, because the main use of -W is for noninteractive situations, it's less critical to produce a helpful prompt in such cases. I made the startup prompt for -W just say "Password: " all the time, rather than expending extra code on trying to identify which username to use. In the case of a \c command (after -W has been given), there's already logic in do_connect that determines whether the "dbname" is a connstring or URI, so we can avoid lobotomizing the prompt except in cases that are actually dubious. (We could do similarly in startup.c if anyone complains, but for now it seems not worthwhile, especially since that would still be only a partial solution.) Per bug #15025 from Akos Vandra. Although this is arguably a bug fix, it doesn't seem worth back-patching. The case where it matters seems like a very corner-case usage, and someone might complain that we'd changed the behavior of -W in a minor release. Discussion: https://postgr.es/m/20180123130013.7407.24749@wrigleys.postgresql.org --- src/bin/psql/command.c | 17 ++++++++++++++--- src/bin/psql/startup.c | 31 +++++++++++++++++++++---------- 2 files changed, 35 insertions(+), 13 deletions(-) diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c index 015c391aa4..3560318749 100644 --- a/src/bin/psql/command.c +++ b/src/bin/psql/command.c @@ -2829,7 +2829,7 @@ prompt_for_password(const char *username) { char buf[100]; - if (username == NULL) + if (username == NULL || username[0] == '\0') simple_prompt("Password: ", buf, sizeof(buf), false); else { @@ -2960,7 +2960,14 @@ do_connect(enum trivalue reuse_previous_specification, */ if (pset.getPassword == TRI_YES) { - password = prompt_for_password(user); + /* + * If a connstring or URI is provided, we can't be sure we know which + * username will be used, since we haven't parsed that argument yet. + * Don't risk issuing a misleading prompt. As in startup.c, it does + * not seem worth working harder, since this getPassword option is + * normally only used in noninteractive cases. + */ + password = prompt_for_password(has_connection_string ? NULL : user); } else if (o_conn && keep_password) { @@ -3026,8 +3033,12 @@ do_connect(enum trivalue reuse_previous_specification, */ if (!password && PQconnectionNeedsPassword(n_conn) && pset.getPassword != TRI_NO) { + /* + * Prompt for password using the username we actually connected + * with --- it might've come out of "dbname" rather than "user". 
+ */ + password = prompt_for_password(PQuser(n_conn)); PQfinish(n_conn); - password = prompt_for_password(user); continue; } diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c index ec6ae45b24..be57574cd3 100644 --- a/src/bin/psql/startup.c +++ b/src/bin/psql/startup.c @@ -101,7 +101,6 @@ main(int argc, char *argv[]) int successResult; bool have_password = false; char password[100]; - char *password_prompt = NULL; bool new_pass; set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("psql")); @@ -205,15 +204,14 @@ main(int argc, char *argv[]) pset.popt.topt.recordSep.separator_zero = false; } - if (options.username == NULL) - password_prompt = pg_strdup(_("Password: ")); - else - password_prompt = psprintf(_("Password for user %s: "), - options.username); - if (pset.getPassword == TRI_YES) { - simple_prompt(password_prompt, password, sizeof(password), false); + /* + * We can't be sure yet of the username that will be used, so don't + * offer a potentially wrong one. Typical uses of this option are + * noninteractive anyway. + */ + simple_prompt("Password: ", password, sizeof(password), false); have_password = true; } @@ -252,15 +250,28 @@ main(int argc, char *argv[]) !have_password && pset.getPassword != TRI_NO) { + /* + * Before closing the old PGconn, extract the user name that was + * actually connected with --- it might've come out of a URI or + * connstring "database name" rather than options.username. + */ + const char *realusername = PQuser(pset.db); + char *password_prompt; + + if (realusername && realusername[0]) + password_prompt = psprintf(_("Password for user %s: "), + realusername); + else + password_prompt = pg_strdup(_("Password: ")); PQfinish(pset.db); + simple_prompt(password_prompt, password, sizeof(password), false); + free(password_prompt); have_password = true; new_pass = true; } } while (new_pass); - free(password_prompt); - if (PQstatus(pset.db) == CONNECTION_BAD) { fprintf(stderr, "%s: %s", pset.progname, PQerrorMessage(pset.db)); From c068f87723ca9cded1f2aceb956ede49de651690 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Mon, 29 Jan 2018 11:02:09 -0800 Subject: [PATCH 0912/1087] Improve bit perturbation in TupleHashTableHash. The changes in b81b5a96f424531b97cdd1dba97d9d1b9c9d372e did not fully address the issue, because the bit-mixing of the IV into the final hash-key didn't prevent clustering in the input data from surviving in the output data. This didn't cause a lot of problems because of the additional growth conditions added in d4c62a6b623d6eef88218158e9fa3cf974c6c7e5. But as we want to rein those in due to explosive growth in some edge cases, this needs to be fixed.
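As a hedged sketch (the authoritative definition is murmurhash32() in src/include/utils/hashutils.h, which this patch starts calling), the fix amounts to running the combined hash through a full finalization round, the standard 32-bit murmur3 finalizer:

    #include <stdint.h>

    /* standalone rendering of the murmur3 32-bit finalizer */
    static inline uint32_t
    murmur32_fmix(uint32_t h)
    {
        h ^= h >> 16;
        h *= 0x85ebca6b;
        h ^= h >> 13;
        h *= 0xc2b2ae35;
        h ^= h >> 16;
        return h;
    }

Each xor-shift/multiply pair lets every input bit influence every output bit, which is exactly the perturbation that the rotate-and-xor combining loop in TupleHashTableHash lacks on its own.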
Author: Andres Freund Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org Backpatch: 10, where simplehash was introduced --- src/backend/executor/execGrouping.c | 11 ++++++-- src/test/regress/expected/groupingsets.out | 30 ++++++++++++---------- src/test/regress/sql/groupingsets.sql | 7 ++--- 3 files changed, 30 insertions(+), 18 deletions(-) diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c index 058ee68804..8e8dbb1f20 100644 --- a/src/backend/executor/execGrouping.c +++ b/src/backend/executor/execGrouping.c @@ -23,6 +23,7 @@ #include "executor/executor.h" #include "miscadmin.h" #include "utils/lsyscache.h" +#include "utils/hashutils.h" #include "utils/memutils.h" static uint32 TupleHashTableHash(struct tuplehash_hash *tb, const MinimalTuple tuple); @@ -326,7 +327,7 @@ BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, * underestimated. */ if (use_variable_hash_iv) - hashtable->hash_iv = hash_uint32(ParallelWorkerNumber); + hashtable->hash_iv = murmurhash32(ParallelWorkerNumber); else hashtable->hash_iv = 0; @@ -510,7 +511,13 @@ TupleHashTableHash(struct tuplehash_hash *tb, const MinimalTuple tuple) } } - return hashkey; + /* + * The way hashes are combined above, among each other and with the IV, + * doesn't lead to good bit perturbation. As the IV's goal is to lead to + * achieve that, perform a round of hashing of the combined hash - + * resulting in near perfect perturbation. + */ + return murmurhash32(hashkey); } /* diff --git a/src/test/regress/expected/groupingsets.out b/src/test/regress/expected/groupingsets.out index cbfdbfd856..d21a494a9d 100644 --- a/src/test/regress/expected/groupingsets.out +++ b/src/test/regress/expected/groupingsets.out @@ -1183,29 +1183,33 @@ explain (costs off) -- simple rescan tests select a, b, sum(v.x) from (values (1),(2)) v(x), gstest_data(v.x) - group by grouping sets (a,b); + group by grouping sets (a,b) + order by 1, 2, 3; a | b | sum ---+---+----- - 2 | | 6 1 | | 3 + 2 | | 6 + | 1 | 3 | 2 | 3 | 3 | 3 - | 1 | 3 (5 rows) explain (costs off) select a, b, sum(v.x) from (values (1),(2)) v(x), gstest_data(v.x) - group by grouping sets (a,b); - QUERY PLAN ------------------------------------------- - HashAggregate - Hash Key: gstest_data.a - Hash Key: gstest_data.b - -> Nested Loop - -> Values Scan on "*VALUES*" - -> Function Scan on gstest_data -(6 rows) + group by grouping sets (a,b) + order by 3, 1, 2; + QUERY PLAN +--------------------------------------------------------------------- + Sort + Sort Key: (sum("*VALUES*".column1)), gstest_data.a, gstest_data.b + -> HashAggregate + Hash Key: gstest_data.a + Hash Key: gstest_data.b + -> Nested Loop + -> Values Scan on "*VALUES*" + -> Function Scan on gstest_data +(8 rows) select * from (values (1),(2)) v(x), diff --git a/src/test/regress/sql/groupingsets.sql b/src/test/regress/sql/groupingsets.sql index b28d8217c1..eb68028603 100644 --- a/src/test/regress/sql/groupingsets.sql +++ b/src/test/regress/sql/groupingsets.sql @@ -342,12 +342,13 @@ explain (costs off) select a, b, sum(v.x) from (values (1),(2)) v(x), gstest_data(v.x) - group by grouping sets (a,b); + group by grouping sets (a,b) + order by 1, 2, 3; explain (costs off) select a, b, sum(v.x) from (values (1),(2)) v(x), gstest_data(v.x) - group by grouping sets (a,b); - + group by grouping sets (a,b) + order by 3, 1, 2; select * from (values (1),(2)) v(x), lateral (select a, b, sum(v.x) from gstest_data(v.x) group by grouping sets (a,b)) s; From 
ab9f2c429d8fbd3580cd2ae5f2054ba6956b1f60 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Mon, 29 Jan 2018 11:02:09 -0800 Subject: [PATCH 0913/1087] Prevent growth of simplehash tables when they're "too empty". In cases where simplehash tables were filled with either a lot of conflicting hash-values, or values that hash to consecutive values (i.e. build "chains"), the growth heuristics in d4c62a6b623d6eef88218158e9fa3cf974c6c7e5 could trigger rather explosively. To fix that, address some of the reasons (see previous commit) why the growth heuristics were needed, and only allow growth when the table isn't too empty. While that means there are a few cases of bad input that can be slower, that seems a lot better than running very quickly out of memory. Author: Tomas Vondra and Andres Freund, with additional input by Thomas Munro, Tom Lane, Todd A. Cook Reported-By: Todd A. Cook, Tomas Vondra, Thomas Munro Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org Backpatch: 10, where simplehash was introduced --- src/include/lib/simplehash.h | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h index c5af5b96a7..5273d49460 100644 --- a/src/include/lib/simplehash.h +++ b/src/include/lib/simplehash.h @@ -174,6 +174,10 @@ SH_SCOPE void SH_STAT(SH_TYPE * tb); #ifndef SH_GROW_MAX_MOVE #define SH_GROW_MAX_MOVE 150 #endif +#ifndef SH_GROW_MIN_FILLFACTOR +/* but do not grow due to SH_GROW_MAX_* if below */ +#define SH_GROW_MIN_FILLFACTOR 0.1 +#endif #ifdef SH_STORE_HASH #define SH_COMPARE_KEYS(tb, ahash, akey, b) (ahash == SH_GET_HASH(tb, b) && SH_EQUAL(tb, b->SH_KEY, akey)) @@ -574,9 +578,12 @@ SH_INSERT(SH_TYPE * tb, SH_KEY_TYPE key, bool *found) * hashtables, grow the hashtable if collisions would require * us to move a lot of entries. The most likely cause of such * imbalance is filling a (currently) small table, from a - * currently big one, in hash-table order. + * currently big one, in hash-table order. Don't grow if the + * hashtable would be too empty, to prevent quick space + * explosion for some weird edge cases. */ - if (++emptydist > SH_GROW_MAX_MOVE) + if (unlikely(++emptydist > SH_GROW_MAX_MOVE) && + ((double) tb->members / tb->size) >= SH_GROW_MIN_FILLFACTOR) { tb->grow_threshold = 0; goto restart; @@ -621,9 +628,12 @@ SH_INSERT(SH_TYPE * tb, SH_KEY_TYPE key, bool *found) * To avoid negative consequences from overly imbalanced hashtables, * grow the hashtable if collisions lead to large runs. The most * likely cause of such imbalance is filling a (currently) small - * table, from a currently big one, in hash-table order. + * table, from a currently big one, in hash-table order. Don't grow + * if the hashtable would be too empty, to prevent quick space + * explosion for some weird edge cases.
*/ - if (insertdist > SH_GROW_MAX_DIB) + if (unlikely(insertdist > SH_GROW_MAX_DIB) && + ((double) tb->members / tb->size) >= SH_GROW_MIN_FILLFACTOR) { tb->grow_threshold = 0; goto restart; @@ -923,6 +933,7 @@ SH_STAT(SH_TYPE * tb) #undef SH_MAX_FILLFACTOR #undef SH_GROW_MAX_DIB #undef SH_GROW_MAX_MOVE +#undef SH_GROW_MIN_FILLFACTOR #undef SH_MAX_SIZE /* types */ From 1e1e599d6663c4a65388b40f84b2ea6b7c6e381b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 29 Jan 2018 14:26:17 -0500 Subject: [PATCH 0914/1087] doc: Clarify pg_upgrade documentation Clarify that the restriction against reg* types only applies to table columns using these types, not to the type appearing in any other way, for example as a function argument. --- doc/src/sgml/ref/pgupgrade.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index 055eac31a0..aaa4b04a42 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -682,7 +682,7 @@ psql --username=postgres --file=script.sql postgres pg_upgrade does not support upgrading of databases - containing these reg* OID-referencing system data types: + containing table columns using these reg* OID-referencing system data types: regproc, regprocedure, regoper, regoperator, regconfig, and regdictionary. (regtype can be upgraded.) From fc96c6942551dafa6cb2a6000cbc9b20643e5db3 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 23 Jan 2018 23:20:02 -0800 Subject: [PATCH 0915/1087] Initialize unused ExprEvalStep fields. ExecPushExprSlots didn't initialize ExprEvalStep's resvalue/resnull fields as it didn't use them. That caused spurious valgrind warnings for an upcoming patch, so zero-initialize them. Also zero-initialize all scratch ExprEvalSteps allocated on the stack, to avoid issues with similar future omissions of non-critical data.
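For illustration only (not part of the commit), this is the C rule the patch relies on, shown with a hypothetical stand-in struct: a brace initializer of {0} zero-initializes every member that is not explicitly set, so a stack-allocated scratch step starts out fully defined:

    #include <assert.h>
    #include <stdint.h>

    typedef struct DemoStep         /* hypothetical stand-in for ExprEvalStep */
    {
        int       opcode;
        uint64_t  resvalue;
        _Bool     resnull;
    } DemoStep;

    int
    main(void)
    {
        DemoStep  scratch = {0};    /* all members zero-initialized */

        assert(scratch.opcode == 0 && scratch.resvalue == 0 && !scratch.resnull);
        return 0;
    }

(Strictly speaking, padding bytes are not guaranteed to be zeroed, but all named members are, which is what matters for fields the expression evaluator later reads.)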
--- src/backend/executor/execExpr.c | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index 883a29f0a7..c6eb3ebacf 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -118,7 +118,7 @@ ExprState * ExecInitExpr(Expr *node, PlanState *parent) { ExprState *state; - ExprEvalStep scratch; + ExprEvalStep scratch = {0}; /* Special case: NULL expression produces a NULL ExprState pointer */ if (node == NULL) @@ -155,7 +155,7 @@ ExprState * ExecInitExprWithParams(Expr *node, ParamListInfo ext_params) { ExprState *state; - ExprEvalStep scratch; + ExprEvalStep scratch = {0}; /* Special case: NULL expression produces a NULL ExprState pointer */ if (node == NULL) @@ -204,7 +204,7 @@ ExprState * ExecInitQual(List *qual, PlanState *parent) { ExprState *state; - ExprEvalStep scratch; + ExprEvalStep scratch = {0}; List *adjust_jumps = NIL; ListCell *lc; @@ -353,7 +353,7 @@ ExecBuildProjectionInfo(List *targetList, { ProjectionInfo *projInfo = makeNode(ProjectionInfo); ExprState *state; - ExprEvalStep scratch; + ExprEvalStep scratch = {0}; ListCell *lc; projInfo->pi_exprContext = econtext; @@ -638,7 +638,7 @@ static void ExecInitExprRec(Expr *node, ExprState *state, Datum *resv, bool *resnull) { - ExprEvalStep scratch; + ExprEvalStep scratch = {0}; /* Guard against stack overflow due to overly complex expressions */ check_stack_depth(); @@ -2273,7 +2273,10 @@ ExecInitExprSlots(ExprState *state, Node *node) static void ExecPushExprSlots(ExprState *state, LastAttnumInfo *info) { - ExprEvalStep scratch; + ExprEvalStep scratch = {0}; + + scratch.resvalue = NULL; + scratch.resnull = NULL; /* Emit steps as needed */ if (info->last_inner > 0) @@ -2659,7 +2662,7 @@ static void ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest, ExprState *state, Datum *resv, bool *resnull) { - ExprEvalStep scratch2; + ExprEvalStep scratch2 = {0}; DomainConstraintRef *constraint_ref; Datum *domainval = NULL; bool *domainnull = NULL; @@ -2811,7 +2814,7 @@ ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase, { ExprState *state = makeNode(ExprState); PlanState *parent = &aggstate->ss.ps; - ExprEvalStep scratch; + ExprEvalStep scratch = {0}; int transno = 0; int setoff = 0; bool isCombine = DO_AGGSPLIT_COMBINE(aggstate->aggsplit); From 97d4445a033f1cc02784d42561b52b3441c8eddd Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 29 Jan 2018 15:13:07 -0500 Subject: [PATCH 0916/1087] Save a few bytes by removing useless last argument to SearchCatCacheList. There's never any value in giving a fully specified cache key to SearchCatCacheList: you might as well call SearchCatCache instead, since there could be only one match. So the maximum useful number of key arguments is one less than the supported number of key columns. We might as well remove the useless extra argument and save some few bytes per call site, as well as a cycle or so per call. I believe the reason it was coded like this is that originally, callers had to write out all the dummy arguments in each call, and so it seemed less confusing if SearchCatCache and SearchCatCacheList took the same number of key arguments. But since commit e26c539e9, callers only write their live arguments explicitly, making that a non-factor; and there's surely been enough time for third-party modules to adapt to that coding style. So this is only an ABI break not an API break for callers. 
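As a hedged usage sketch (a backend-only fragment modeled on existing callers such as the pg_proc name lookups; it assumes the standard backend headers rather than being standalone-runnable), a partial-key list search reads the same before and after this change, because the convenience macros absorb the dummy arguments. A fully specified key would instead call SearchSysCache(), which is exactly why the extra Datum was dropped:

    #include "postgres.h"
    #include "utils/catcache.h"
    #include "utils/syscache.h"

    /* count pg_proc entries sharing a name; illustrative only */
    static int
    count_candidates(const char *funcname)
    {
        CatCList   *catlist;
        int         n;

        /* partial key: only the first of PROCNAMEARGSNSP's three key columns */
        catlist = SearchSysCacheList1(PROCNAMEARGSNSP,
                                      CStringGetDatum(funcname));
        n = catlist->n_members;
        ReleaseSysCacheList(catlist);
        return n;
    }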
Per discussion with Oliver Ford, this might also make it less confusing how to use SearchCatCacheList correctly. Discussion: https://postgr.es/m/27788.1517069693@sss.pgh.pa.us --- src/backend/utils/cache/catcache.c | 9 +++++++-- src/backend/utils/cache/syscache.c | 4 ++-- src/include/utils/catcache.h | 2 +- src/include/utils/syscache.h | 10 ++++------ 4 files changed, 14 insertions(+), 11 deletions(-) diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c index 8a0a42ce71..5ddbf6eab1 100644 --- a/src/backend/utils/cache/catcache.c +++ b/src/backend/utils/cache/catcache.c @@ -1512,6 +1512,11 @@ GetCatCacheHashValue(CatCache *cache, * Generate a list of all tuples matching a partial key (that is, * a key specifying just the first K of the cache's N key columns). * + * It doesn't make any sense to specify all of the cache's key columns + * here: since the key is unique, there could be at most one match, so + * you ought to use SearchCatCache() instead. Hence this function takes + * one less Datum argument than SearchCatCache() does. + * * The caller must not modify the list object or the pointed-to tuples, * and must call ReleaseCatCacheList() when done with the list. */ @@ -1520,9 +1525,9 @@ SearchCatCacheList(CatCache *cache, int nkeys, Datum v1, Datum v2, - Datum v3, - Datum v4) + Datum v3) { + Datum v4 = 0; /* dummy last-column value */ Datum arguments[CATCACHE_MAXKEYS]; uint32 lHashValue; dlist_iter iter; diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c index 041cd53a30..2b381782a3 100644 --- a/src/backend/utils/cache/syscache.c +++ b/src/backend/utils/cache/syscache.c @@ -1418,14 +1418,14 @@ GetSysCacheHashValue(int cacheId, */ struct catclist * SearchSysCacheList(int cacheId, int nkeys, - Datum key1, Datum key2, Datum key3, Datum key4) + Datum key1, Datum key2, Datum key3) { if (cacheId < 0 || cacheId >= SysCacheSize || !PointerIsValid(SysCache[cacheId])) elog(ERROR, "invalid cache ID: %d", cacheId); return SearchCatCacheList(SysCache[cacheId], nkeys, - key1, key2, key3, key4); + key1, key2, key3); } /* diff --git a/src/include/utils/catcache.h b/src/include/utils/catcache.h index 39d4876169..7b22f9c7bc 100644 --- a/src/include/utils/catcache.h +++ b/src/include/utils/catcache.h @@ -214,7 +214,7 @@ extern uint32 GetCatCacheHashValue(CatCache *cache, extern CatCList *SearchCatCacheList(CatCache *cache, int nkeys, Datum v1, Datum v2, - Datum v3, Datum v4); + Datum v3); extern void ReleaseCatCacheList(CatCList *list); extern void ResetCatalogCaches(void); diff --git a/src/include/utils/syscache.h b/src/include/utils/syscache.h index 55d573c687..4f333586ee 100644 --- a/src/include/utils/syscache.h +++ b/src/include/utils/syscache.h @@ -157,7 +157,7 @@ extern uint32 GetSysCacheHashValue(int cacheId, /* list-search interface. 
Users of this must import catcache.h too */ struct catclist; extern struct catclist *SearchSysCacheList(int cacheId, int nkeys, - Datum key1, Datum key2, Datum key3, Datum key4); + Datum key1, Datum key2, Datum key3); extern void SysCacheInvalidate(int cacheId, uint32 hashValue); @@ -207,13 +207,11 @@ extern bool RelationSupportsSysCache(Oid relid); GetSysCacheHashValue(cacheId, key1, key2, key3, key4) #define SearchSysCacheList1(cacheId, key1) \ - SearchSysCacheList(cacheId, 1, key1, 0, 0, 0) + SearchSysCacheList(cacheId, 1, key1, 0, 0) #define SearchSysCacheList2(cacheId, key1, key2) \ - SearchSysCacheList(cacheId, 2, key1, key2, 0, 0) + SearchSysCacheList(cacheId, 2, key1, key2, 0) #define SearchSysCacheList3(cacheId, key1, key2, key3) \ - SearchSysCacheList(cacheId, 3, key1, key2, key3, 0) -#define SearchSysCacheList4(cacheId, key1, key2, key3, key4) \ - SearchSysCacheList(cacheId, 4, key1, key2, key3, key4) + SearchSysCacheList(cacheId, 3, key1, key2, key3) #define ReleaseSysCacheList(x) ReleaseCatCacheList(x) From c12693d8f3bbbffcb79f6af476cc647402e1145e Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Mon, 29 Jan 2018 12:16:53 -0800 Subject: [PATCH 0917/1087] Introduce ExecQualAndReset() helper. It's a common task to evaluate a qual and reset the corresponding expression context. Currently that requires storing the result of the qual eval, resetting the context, and then reacting on the result. As that's awkward several places only reset the context next time through a node. That's not great, so introduce a helper that evaluates and resets. It's a bit ugly that it currently uses MemoryContextReset() instead of ResetExprContext(), but that seems easier than reordering all of executor.h. Author: Andres Freund Discussion: https://postgr.es/m/20180109222544.f7loxrunqh3xjl5f@alap3.anarazel.de --- src/backend/executor/nodeBitmapHeapscan.c | 9 ++------- src/backend/executor/nodeHash.c | 10 ++-------- src/backend/executor/nodeIndexonlyscan.c | 3 +-- src/backend/executor/nodeIndexscan.c | 11 +++-------- src/include/executor/executor.h | 17 +++++++++++++++++ 5 files changed, 25 insertions(+), 25 deletions(-) diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c index 7ba1db7d7e..fa65d4efbe 100644 --- a/src/backend/executor/nodeBitmapHeapscan.c +++ b/src/backend/executor/nodeBitmapHeapscan.c @@ -352,9 +352,7 @@ BitmapHeapNext(BitmapHeapScanState *node) if (tbmres->recheck) { econtext->ecxt_scantuple = slot; - ResetExprContext(econtext); - - if (!ExecQual(node->bitmapqualorig, econtext)) + if (!ExecQualAndReset(node->bitmapqualorig, econtext)) { /* Fails recheck, so drop it and loop back for another */ InstrCountFiltered2(node, 1); @@ -717,10 +715,7 @@ BitmapHeapRecheck(BitmapHeapScanState *node, TupleTableSlot *slot) /* Does the tuple meet the original qual conditions? 
*/ econtext->ecxt_scantuple = slot; - - ResetExprContext(econtext); - - return ExecQual(node->bitmapqualorig, econtext); + return ExecQualAndReset(node->bitmapqualorig, econtext); } /* ---------------------------------------------------------------- diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index a9149ef81c..c26b8ea44e 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -1942,10 +1942,7 @@ ExecScanHashBucket(HashJoinState *hjstate, false); /* do not pfree */ econtext->ecxt_innertuple = inntuple; - /* reset temp memory each time to avoid leaks from qual expr */ - ResetExprContext(econtext); - - if (ExecQual(hjclauses, econtext)) + if (ExecQualAndReset(hjclauses, econtext)) { hjstate->hj_CurTuple = hashTuple; return true; @@ -2002,10 +1999,7 @@ ExecParallelScanHashBucket(HashJoinState *hjstate, false); /* do not pfree */ econtext->ecxt_innertuple = inntuple; - /* reset temp memory each time to avoid leaks from qual expr */ - ResetExprContext(econtext); - - if (ExecQual(hjclauses, econtext)) + if (ExecQualAndReset(hjclauses, econtext)) { hjstate->hj_CurTuple = hashTuple; return true; diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c index f61b3abf32..8ffcc52bea 100644 --- a/src/backend/executor/nodeIndexonlyscan.c +++ b/src/backend/executor/nodeIndexonlyscan.c @@ -214,8 +214,7 @@ IndexOnlyNext(IndexOnlyScanState *node) if (scandesc->xs_recheck) { econtext->ecxt_scantuple = slot; - ResetExprContext(econtext); - if (!ExecQual(node->indexqual, econtext)) + if (!ExecQualAndReset(node->indexqual, econtext)) { /* Fails recheck, so drop it and loop back for another */ InstrCountFiltered2(node, 1); diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c index eed69a0c66..b8b961add4 100644 --- a/src/backend/executor/nodeIndexscan.c +++ b/src/backend/executor/nodeIndexscan.c @@ -152,8 +152,7 @@ IndexNext(IndexScanState *node) if (scandesc->xs_recheck) { econtext->ecxt_scantuple = slot; - ResetExprContext(econtext); - if (!ExecQual(node->indexqualorig, econtext)) + if (!ExecQualAndReset(node->indexqualorig, econtext)) { /* Fails recheck, so drop it and loop back for another */ InstrCountFiltered2(node, 1); @@ -300,8 +299,7 @@ IndexNextWithReorder(IndexScanState *node) if (scandesc->xs_recheck) { econtext->ecxt_scantuple = slot; - ResetExprContext(econtext); - if (!ExecQual(node->indexqualorig, econtext)) + if (!ExecQualAndReset(node->indexqualorig, econtext)) { /* Fails recheck, so drop it and loop back for another */ InstrCountFiltered2(node, 1); @@ -420,10 +418,7 @@ IndexRecheck(IndexScanState *node, TupleTableSlot *slot) /* Does the tuple meet the indexqual condition? */ econtext->ecxt_scantuple = slot; - - ResetExprContext(econtext); - - return ExecQual(node->indexqualorig, econtext); + return ExecQualAndReset(node->indexqualorig, econtext); } diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index 6545a80222..1d824eff36 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -17,6 +17,7 @@ #include "catalog/partition.h" #include "executor/execdesc.h" #include "nodes/parsenodes.h" +#include "utils/memutils.h" /* @@ -381,6 +382,22 @@ ExecQual(ExprState *state, ExprContext *econtext) } #endif +/* + * ExecQualAndReset() - evaluate qual with ExecQual() and reset expression + * context. 
+ */ +#ifndef FRONTEND +static inline bool +ExecQualAndReset(ExprState *state, ExprContext *econtext) +{ + bool ret = ExecQual(state, econtext); + + /* inline ResetExprContext, to avoid ordering issue in this file */ + MemoryContextReset(econtext->ecxt_per_tuple_memory); + return ret; +} +#endif + extern bool ExecCheck(ExprState *state, ExprContext *context); /* From 6ad3611e1ea6fef6ac0c746d1565b3f6a856b593 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 29 Jan 2018 20:41:36 -0500 Subject: [PATCH 0918/1087] Remove dead assignment per scan-build --- contrib/spi/refint.c | 1 - 1 file changed, 1 deletion(-) diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c index 2fc894e72a..b065ffa400 100644 --- a/contrib/spi/refint.c +++ b/contrib/spi/refint.c @@ -489,7 +489,6 @@ check_foreign_key(PG_FUNCTION_ARGS) " %s = %s%s%s %s ", args2[k], (is_char_type > 0) ? "'" : "", nv, (is_char_type > 0) ? "'" : "", (k < nkeys) ? ", " : ""); - is_char_type = 0; } strcat(sql, " where "); From 07e524d3e955a79b94918d076642b3ac8e84b65f Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 29 Jan 2018 20:42:15 -0500 Subject: [PATCH 0919/1087] Silence complaint about dead assignment The preferred place for "placate compiler" assignments is after elog(ERROR), not before it. Otherwise, scan-build complains about a dead assignment. --- src/backend/commands/tablecmds.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index ea03fd2ecf..37c7d66881 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -13251,8 +13251,8 @@ RangeVarCallbackForAlterRelation(const RangeVar *rv, Oid relid, Oid oldrelid, reltype = ((AlterTableStmt *) stmt)->relkind; else { - reltype = OBJECT_TABLE; /* placate compiler */ elog(ERROR, "unrecognized node type: %d", (int) nodeTag(stmt)); + reltype = OBJECT_TABLE; /* placate compiler */ } /* From a044378ce2f6268a996c8cce2b7bfb5d82b05c90 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 29 Jan 2018 20:44:35 -0500 Subject: [PATCH 0920/1087] Add some noreturn attributes to help static analyzers --- src/backend/utils/adt/json.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c index 97a5b85516..3ba9bb3519 100644 --- a/src/backend/utils/adt/json.c +++ b/src/backend/utils/adt/json.c @@ -84,8 +84,8 @@ static void parse_object_field(JsonLexContext *lex, JsonSemAction *sem); static void parse_object(JsonLexContext *lex, JsonSemAction *sem); static void parse_array_element(JsonLexContext *lex, JsonSemAction *sem); static void parse_array(JsonLexContext *lex, JsonSemAction *sem); -static void report_parse_error(JsonParseContext ctx, JsonLexContext *lex); -static void report_invalid_token(JsonLexContext *lex); +static void report_parse_error(JsonParseContext ctx, JsonLexContext *lex) pg_attribute_noreturn(); +static void report_invalid_token(JsonLexContext *lex) pg_attribute_noreturn(); static int report_json_context(JsonLexContext *lex); static char *extract_mb_char(char *s); static void composite_to_json(Datum composite, StringInfo result, From 99f6a17dd62aa5ed92df7e5c03077ddfc85381c8 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Tue, 30 Jan 2018 14:27:38 -0500 Subject: [PATCH 0921/1087] Fix test case for 'outer pathkeys do not match mergeclauses' fix. 
Commit 4bbf6edfbd5d03743ff82dda2f00c738fb3208f5 added a test case, but it turns out that the test case doesn't reliably test for the bug, and in the context of the regression test suite did not because ANALYZE had not been run. Report and patch by Etsuro Fujita. I added a comment along lines previously suggested by Tom Lane. Discussion: http://postgr.es/m/5A6195D8.8060206@lab.ntt.co.jp --- .../postgres_fdw/expected/postgres_fdw.out | 77 ++++++++++--------- contrib/postgres_fdw/sql/postgres_fdw.sql | 13 +++- 2 files changed, 48 insertions(+), 42 deletions(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index f88e0a2d52..5e1f44041c 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -2330,38 +2330,43 @@ SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5 (4 rows) -- multi-way join involving multiple merge joins +-- (this case used to have EPQ-related planning problems) +SET enable_nestloop TO false; +SET enable_hashjoin TO false; EXPLAIN (VERBOSE, COSTS OFF) -SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 - AND ft1.c1 = ft5.c1 FOR UPDATE; - QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c2 = ft4.c1 + AND ft1.c2 = ft5.c1 AND ft1.c1 < 100 AND ft2.c1 < 100 FOR UPDATE; + QUERY PLAN +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- LockRows Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3, ft1.*, ft2.*, ft4.*, ft5.* -> Foreign Scan Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3, ft1.*, ft2.*, ft4.*, ft5.* Relations: (((public.ft1) INNER JOIN (public.ft2)) INNER JOIN 
(public.ft4)) INNER JOIN (public.ft5) - Remote SQL: SELECT r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8, CASE WHEN (r1.*)::text IS NOT NULL THEN ROW(r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8) END, r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END, r3.c1, r3.c2, r3.c3, CASE WHEN (r3.*)::text IS NOT NULL THEN ROW(r3.c1, r3.c2, r3.c3) END, r4.c1, r4.c2, r4.c3, CASE WHEN (r4.*)::text IS NOT NULL THEN ROW(r4.c1, r4.c2, r4.c3) END FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")))) INNER JOIN "S 1"."T 3" r3 ON (((r1."C 1" = r3.c1)))) INNER JOIN "S 1"."T 4" r4 ON (((r1."C 1" = r4.c1)))) FOR UPDATE OF r1 FOR UPDATE OF r2 FOR UPDATE OF r3 FOR UPDATE OF r4 + Remote SQL: SELECT r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8, CASE WHEN (r1.*)::text IS NOT NULL THEN ROW(r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8) END, r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END, r3.c1, r3.c2, r3.c3, CASE WHEN (r3.*)::text IS NOT NULL THEN ROW(r3.c1, r3.c2, r3.c3) END, r4.c1, r4.c2, r4.c3, CASE WHEN (r4.*)::text IS NOT NULL THEN ROW(r4.c1, r4.c2, r4.c3) END FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")) AND ((r2."C 1" < 100)) AND ((r1."C 1" < 100)))) INNER JOIN "S 1"."T 3" r3 ON (((r1.c2 = r3.c1)))) INNER JOIN "S 1"."T 4" r4 ON (((r1.c2 = r4.c1)))) FOR UPDATE OF r1 FOR UPDATE OF r2 FOR UPDATE OF r3 FOR UPDATE OF r4 -> Merge Join Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.*, ft4.c1, ft4.c2, ft4.c3, ft4.*, ft5.c1, ft5.c2, ft5.c3, ft5.* - Merge Cond: (ft1.c1 = ft5.c1) + Merge Cond: (ft1.c2 = ft5.c1) -> Merge Join Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.*, ft4.c1, ft4.c2, ft4.c3, ft4.* - Merge Cond: (ft1.c1 = ft4.c1) - -> Merge Join + Merge Cond: (ft1.c2 = ft4.c1) + -> Sort Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.* - Merge Cond: (ft1.c1 = ft2.c1) - -> Sort - Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.* - Sort Key: ft1.c1 - -> Foreign Scan on public.ft1 + Sort Key: ft1.c2 + -> Merge Join + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.* + Merge Cond: (ft1.c1 = ft2.c1) + -> Sort Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE - -> Sort - Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.* - Sort Key: ft2.c1 - -> Foreign Scan on public.ft2 + Sort Key: ft1.c1 + -> Foreign Scan on public.ft1 + Output: ft1.c1, ft1.c2, ft1.c3, ft1.c4, ft1.c5, ft1.c6, ft1.c7, ft1.c8, ft1.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE (("C 1" < 100)) FOR UPDATE + -> Materialize Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.* - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" FOR UPDATE + -> Foreign Scan on public.ft2 + Output: ft2.c1, ft2.c2, 
ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.* + Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE (("C 1" < 100)) ORDER BY "C 1" ASC NULLS LAST FOR UPDATE -> Sort Output: ft4.c1, ft4.c2, ft4.c3, ft4.* Sort Key: ft4.c1 @@ -2374,30 +2379,26 @@ SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 -> Foreign Scan on public.ft5 Output: ft5.c1, ft5.c2, ft5.c3, ft5.* Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4" FOR UPDATE -(39 rows) +(41 rows) -SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 - AND ft1.c1 = ft5.c1 FOR UPDATE; +SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c2 = ft4.c1 + AND ft1.c2 = ft5.c1 AND ft1.c1 < 100 AND ft2.c1 < 100 FOR UPDATE; c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | c1 | c2 | c3 | c1 | c2 | c3 ----+----+-------+------------------------------+--------------------------+----+------------+-----+----+----+-------+------------------------------+--------------------------+----+------------+-----+----+----+--------+----+----+-------- 6 | 6 | 00006 | Wed Jan 07 00:00:00 1970 PST | Wed Jan 07 00:00:00 1970 | 6 | 6 | foo | 6 | 6 | 00006 | Wed Jan 07 00:00:00 1970 PST | Wed Jan 07 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 - 12 | 2 | 00012 | Tue Jan 13 00:00:00 1970 PST | Tue Jan 13 00:00:00 1970 | 2 | 2 | foo | 12 | 2 | 00012 | Tue Jan 13 00:00:00 1970 PST | Tue Jan 13 00:00:00 1970 | 2 | 2 | foo | 12 | 13 | AAA012 | 12 | 13 | AAA012 - 18 | 8 | 00018 | Mon Jan 19 00:00:00 1970 PST | Mon Jan 19 00:00:00 1970 | 8 | 8 | foo | 18 | 8 | 00018 | Mon Jan 19 00:00:00 1970 PST | Mon Jan 19 00:00:00 1970 | 8 | 8 | foo | 18 | 19 | AAA018 | 18 | 19 | - 24 | 4 | 00024 | Sun Jan 25 00:00:00 1970 PST | Sun Jan 25 00:00:00 1970 | 4 | 4 | foo | 24 | 4 | 00024 | Sun Jan 25 00:00:00 1970 PST | Sun Jan 25 00:00:00 1970 | 4 | 4 | foo | 24 | 25 | AAA024 | 24 | 25 | AAA024 - 30 | 0 | 00030 | Sat Jan 31 00:00:00 1970 PST | Sat Jan 31 00:00:00 1970 | 0 | 0 | foo | 30 | 0 | 00030 | Sat Jan 31 00:00:00 1970 PST | Sat Jan 31 00:00:00 1970 | 0 | 0 | foo | 30 | 31 | AAA030 | 30 | 31 | AAA030 - 36 | 6 | 00036 | Fri Feb 06 00:00:00 1970 PST | Fri Feb 06 00:00:00 1970 | 6 | 6 | foo | 36 | 6 | 00036 | Fri Feb 06 00:00:00 1970 PST | Fri Feb 06 00:00:00 1970 | 6 | 6 | foo | 36 | 37 | AAA036 | 36 | 37 | - 42 | 2 | 00042 | Thu Feb 12 00:00:00 1970 PST | Thu Feb 12 00:00:00 1970 | 2 | 2 | foo | 42 | 2 | 00042 | Thu Feb 12 00:00:00 1970 PST | Thu Feb 12 00:00:00 1970 | 2 | 2 | foo | 42 | 43 | AAA042 | 42 | 43 | AAA042 - 48 | 8 | 00048 | Wed Feb 18 00:00:00 1970 PST | Wed Feb 18 00:00:00 1970 | 8 | 8 | foo | 48 | 8 | 00048 | Wed Feb 18 00:00:00 1970 PST | Wed Feb 18 00:00:00 1970 | 8 | 8 | foo | 48 | 49 | AAA048 | 48 | 49 | AAA048 - 54 | 4 | 00054 | Tue Feb 24 00:00:00 1970 PST | Tue Feb 24 00:00:00 1970 | 4 | 4 | foo | 54 | 4 | 00054 | Tue Feb 24 00:00:00 1970 PST | Tue Feb 24 00:00:00 1970 | 4 | 4 | foo | 54 | 55 | AAA054 | 54 | 55 | - 60 | 0 | 00060 | Mon Mar 02 00:00:00 1970 PST | Mon Mar 02 00:00:00 1970 | 0 | 0 | foo | 60 | 0 | 00060 | Mon Mar 02 00:00:00 1970 PST | Mon Mar 02 00:00:00 1970 | 0 | 0 | foo | 60 | 61 | AAA060 | 60 | 61 | AAA060 - 66 | 6 | 00066 | Sun Mar 08 00:00:00 1970 PST | Sun Mar 08 00:00:00 1970 | 6 | 6 | foo | 66 | 6 | 00066 | Sun Mar 08 00:00:00 1970 PST | Sun Mar 08 00:00:00 1970 | 6 | 6 | foo | 66 | 67 | AAA066 | 66 | 67 | AAA066 - 72 | 2 | 00072 | Sat Mar 14 00:00:00 1970 PST | Sat Mar 14 00:00:00 1970 | 2 | 2 | foo | 72 | 2 | 00072 | 
Sat Mar 14 00:00:00 1970 PST | Sat Mar 14 00:00:00 1970 | 2 | 2 | foo | 72 | 73 | AAA072 | 72 | 73 | - 78 | 8 | 00078 | Fri Mar 20 00:00:00 1970 PST | Fri Mar 20 00:00:00 1970 | 8 | 8 | foo | 78 | 8 | 00078 | Fri Mar 20 00:00:00 1970 PST | Fri Mar 20 00:00:00 1970 | 8 | 8 | foo | 78 | 79 | AAA078 | 78 | 79 | AAA078 - 84 | 4 | 00084 | Thu Mar 26 00:00:00 1970 PST | Thu Mar 26 00:00:00 1970 | 4 | 4 | foo | 84 | 4 | 00084 | Thu Mar 26 00:00:00 1970 PST | Thu Mar 26 00:00:00 1970 | 4 | 4 | foo | 84 | 85 | AAA084 | 84 | 85 | AAA084 - 90 | 0 | 00090 | Wed Apr 01 00:00:00 1970 PST | Wed Apr 01 00:00:00 1970 | 0 | 0 | foo | 90 | 0 | 00090 | Wed Apr 01 00:00:00 1970 PST | Wed Apr 01 00:00:00 1970 | 0 | 0 | foo | 90 | 91 | AAA090 | 90 | 91 | - 96 | 6 | 00096 | Tue Apr 07 00:00:00 1970 PST | Tue Apr 07 00:00:00 1970 | 6 | 6 | foo | 96 | 6 | 00096 | Tue Apr 07 00:00:00 1970 PST | Tue Apr 07 00:00:00 1970 | 6 | 6 | foo | 96 | 97 | AAA096 | 96 | 97 | AAA096 -(16 rows) + 16 | 6 | 00016 | Sat Jan 17 00:00:00 1970 PST | Sat Jan 17 00:00:00 1970 | 6 | 6 | foo | 16 | 6 | 00016 | Sat Jan 17 00:00:00 1970 PST | Sat Jan 17 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 26 | 6 | 00026 | Tue Jan 27 00:00:00 1970 PST | Tue Jan 27 00:00:00 1970 | 6 | 6 | foo | 26 | 6 | 00026 | Tue Jan 27 00:00:00 1970 PST | Tue Jan 27 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 36 | 6 | 00036 | Fri Feb 06 00:00:00 1970 PST | Fri Feb 06 00:00:00 1970 | 6 | 6 | foo | 36 | 6 | 00036 | Fri Feb 06 00:00:00 1970 PST | Fri Feb 06 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 46 | 6 | 00046 | Mon Feb 16 00:00:00 1970 PST | Mon Feb 16 00:00:00 1970 | 6 | 6 | foo | 46 | 6 | 00046 | Mon Feb 16 00:00:00 1970 PST | Mon Feb 16 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 56 | 6 | 00056 | Thu Feb 26 00:00:00 1970 PST | Thu Feb 26 00:00:00 1970 | 6 | 6 | foo | 56 | 6 | 00056 | Thu Feb 26 00:00:00 1970 PST | Thu Feb 26 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 66 | 6 | 00066 | Sun Mar 08 00:00:00 1970 PST | Sun Mar 08 00:00:00 1970 | 6 | 6 | foo | 66 | 6 | 00066 | Sun Mar 08 00:00:00 1970 PST | Sun Mar 08 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 76 | 6 | 00076 | Wed Mar 18 00:00:00 1970 PST | Wed Mar 18 00:00:00 1970 | 6 | 6 | foo | 76 | 6 | 00076 | Wed Mar 18 00:00:00 1970 PST | Wed Mar 18 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 86 | 6 | 00086 | Sat Mar 28 00:00:00 1970 PST | Sat Mar 28 00:00:00 1970 | 6 | 6 | foo | 86 | 6 | 00086 | Sat Mar 28 00:00:00 1970 PST | Sat Mar 28 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 + 96 | 6 | 00096 | Tue Apr 07 00:00:00 1970 PST | Tue Apr 07 00:00:00 1970 | 6 | 6 | foo | 96 | 6 | 00096 | Tue Apr 07 00:00:00 1970 PST | Tue Apr 07 00:00:00 1970 | 6 | 6 | foo | 6 | 7 | AAA006 | 6 | 7 | AAA006 +(10 rows) +RESET enable_nestloop; +RESET enable_hashjoin; -- check join pushdown in situations where multiple userids are involved CREATE ROLE regress_view_owner SUPERUSER; CREATE USER MAPPING FOR regress_view_owner SERVER loopback; diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index e73c258ff4..400a9b0cd7 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -560,11 +560,16 @@ SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5 SELECT ft5, ft5.c1, ft5.c2, ft5.c3, ft4.c1, ft4.c2 FROM ft5 left join ft4 on ft5.c1 = ft4.c1 WHERE ft4.c1 
BETWEEN 10 and 30 ORDER BY ft5.c1, ft4.c1; -- multi-way join involving multiple merge joins +-- (this case used to have EPQ-related planning problems) +SET enable_nestloop TO false; +SET enable_hashjoin TO false; EXPLAIN (VERBOSE, COSTS OFF) -SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 - AND ft1.c1 = ft5.c1 FOR UPDATE; -SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = ft4.c1 - AND ft1.c1 = ft5.c1 FOR UPDATE; +SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c2 = ft4.c1 + AND ft1.c2 = ft5.c1 AND ft1.c1 < 100 AND ft2.c1 < 100 FOR UPDATE; +SELECT * FROM ft1, ft2, ft4, ft5 WHERE ft1.c1 = ft2.c1 AND ft1.c2 = ft4.c1 + AND ft1.c2 = ft5.c1 AND ft1.c1 < 100 AND ft2.c1 < 100 FOR UPDATE; +RESET enable_nestloop; +RESET enable_hashjoin; -- check join pushdown in situations where multiple userids are involved CREATE ROLE regress_view_owner SUPERUSER; From 38d485fdaa5739627b642303cc172acc1487b90a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 30 Jan 2018 16:50:30 -0500 Subject: [PATCH 0922/1087] Fix up references to scram-sha-256 pg_hba_file_rules erroneously reported this as scram-sha256. Fix that. To avoid future errors and confusion, also adjust documentation links and internal symbols to have a separator between "sha" and "256". Reported-by: Christophe Courtois Author: Michael Paquier --- doc/src/sgml/protocol.sgml | 2 +- src/backend/libpq/auth.c | 16 ++++++++-------- src/backend/libpq/hba.c | 2 +- src/include/common/scram-common.h | 4 ++-- src/interfaces/libpq/fe-auth-scram.c | 4 ++-- src/interfaces/libpq/fe-auth.c | 8 ++++---- 6 files changed, 18 insertions(+), 18 deletions(-) diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml index 4c5ed1e6d6..3cec9e0b0c 100644 --- a/doc/src/sgml/protocol.sgml +++ b/doc/src/sgml/protocol.sgml @@ -1540,7 +1540,7 @@ On error, the server can abort the authentication at any stage, and send an ErrorMessage. - + SCRAM-SHA-256 authentication diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c index 746d7cbb8a..3014b17a7c 100644 --- a/src/backend/libpq/auth.c +++ b/src/backend/libpq/auth.c @@ -894,18 +894,18 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) * channel-binding variants go first, if they are supported. Channel * binding is only supported in SSL builds. */ - sasl_mechs = palloc(strlen(SCRAM_SHA256_PLUS_NAME) + - strlen(SCRAM_SHA256_NAME) + 3); + sasl_mechs = palloc(strlen(SCRAM_SHA_256_PLUS_NAME) + + strlen(SCRAM_SHA_256_NAME) + 3); p = sasl_mechs; if (port->ssl_in_use) { - strcpy(p, SCRAM_SHA256_PLUS_NAME); - p += strlen(SCRAM_SHA256_PLUS_NAME) + 1; + strcpy(p, SCRAM_SHA_256_PLUS_NAME); + p += strlen(SCRAM_SHA_256_PLUS_NAME) + 1; } - strcpy(p, SCRAM_SHA256_NAME); - p += strlen(SCRAM_SHA256_NAME) + 1; + strcpy(p, SCRAM_SHA_256_NAME); + p += strlen(SCRAM_SHA_256_NAME) + 1; /* Put another '\0' to mark that list is finished. 
*/ p[0] = '\0'; @@ -973,8 +973,8 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail) const char *selected_mech; selected_mech = pq_getmsgrawstring(&buf); - if (strcmp(selected_mech, SCRAM_SHA256_NAME) != 0 && - strcmp(selected_mech, SCRAM_SHA256_PLUS_NAME) != 0) + if (strcmp(selected_mech, SCRAM_SHA_256_NAME) != 0 && + strcmp(selected_mech, SCRAM_SHA_256_PLUS_NAME) != 0) { ereport(ERROR, (errcode(ERRCODE_PROTOCOL_VIOLATION), diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c index aa20f266b8..acf625e4ec 100644 --- a/src/backend/libpq/hba.c +++ b/src/backend/libpq/hba.c @@ -126,7 +126,7 @@ static const char *const UserAuthName[] = "ident", "password", "md5", - "scram-sha256", + "scram-sha-256", "gss", "sspi", "pam", diff --git a/src/include/common/scram-common.h b/src/include/common/scram-common.h index e1d742ba89..17373cce3a 100644 --- a/src/include/common/scram-common.h +++ b/src/include/common/scram-common.h @@ -16,8 +16,8 @@ #include "common/sha2.h" /* Name of SCRAM mechanisms per IANA */ -#define SCRAM_SHA256_NAME "SCRAM-SHA-256" -#define SCRAM_SHA256_PLUS_NAME "SCRAM-SHA-256-PLUS" /* with channel binding */ +#define SCRAM_SHA_256_NAME "SCRAM-SHA-256" +#define SCRAM_SHA_256_PLUS_NAME "SCRAM-SHA-256-PLUS" /* with channel binding */ /* Channel binding types */ #define SCRAM_CHANNEL_BINDING_TLS_UNIQUE "tls-unique" diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c index 23bd5fb2b6..8415bbb5c6 100644 --- a/src/interfaces/libpq/fe-auth-scram.c +++ b/src/interfaces/libpq/fe-auth-scram.c @@ -349,7 +349,7 @@ build_client_first_message(fe_scram_state *state) /* * First build the gs2-header with channel binding information. */ - if (strcmp(state->sasl_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) + if (strcmp(state->sasl_mechanism, SCRAM_SHA_256_PLUS_NAME) == 0) { Assert(conn->ssl_in_use); appendPQExpBuffer(&buf, "p=%s", conn->scram_channel_binding); @@ -430,7 +430,7 @@ build_client_final_message(fe_scram_state *state) * build_client_first_message(), because the server will check that it's * the same flag both times. */ - if (strcmp(state->sasl_mechanism, SCRAM_SHA256_PLUS_NAME) == 0) + if (strcmp(state->sasl_mechanism, SCRAM_SHA_256_PLUS_NAME) == 0) { char *cbind_data = NULL; size_t cbind_data_len = 0; diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c index 7bcbca9df6..3b2073a47f 100644 --- a/src/interfaces/libpq/fe-auth.c +++ b/src/interfaces/libpq/fe-auth.c @@ -533,11 +533,11 @@ pg_SASL_init(PGconn *conn, int payloadlen) if (conn->ssl_in_use && conn->scram_channel_binding && strlen(conn->scram_channel_binding) > 0 && - strcmp(mechanism_buf.data, SCRAM_SHA256_PLUS_NAME) == 0) - selected_mechanism = SCRAM_SHA256_PLUS_NAME; - else if (strcmp(mechanism_buf.data, SCRAM_SHA256_NAME) == 0 && + strcmp(mechanism_buf.data, SCRAM_SHA_256_PLUS_NAME) == 0) + selected_mechanism = SCRAM_SHA_256_PLUS_NAME; + else if (strcmp(mechanism_buf.data, SCRAM_SHA_256_NAME) == 0 && !selected_mechanism) - selected_mechanism = SCRAM_SHA256_NAME; + selected_mechanism = SCRAM_SHA_256_NAME; } if (!selected_mechanism) From f75a95915528646cbfaf238fb48b3ffa17969383 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 27 Jan 2018 13:47:52 -0500 Subject: [PATCH 0923/1087] Refactor client-side SSL certificate checking code Separate the parts specific to the SSL library from the general logic. 
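As a hedged illustration of what the split buys (the "foossl" names below are invented, not part of the patch): a second TLS implementation would now supply only the library-specific walk over the certificate's names, and reuse the generic matching and error reporting.

    /* hypothetical fe-secure-foossl.c */
    #include "postgres_fe.h"

    #include "fe-secure-common.h"
    #include "libpq-int.h"

    int
    pgtls_verify_peer_name_matches_certificate_guts(PGconn *conn,
                                                    int *names_examined,
                                                    char **first_name)
    {
        /* Pretend the foossl API handed us one subjectAltName. */
        const char *namedata = "*.example.com";     /* placeholder value */

        (*names_examined)++;
        return pq_verify_peer_name_matches_certificate_name(conn,
                                                            namedata,
                                                            strlen(namedata),
                                                            first_name);
    }

The RFC 2818 wildcard rules and all user-facing error messages then live in one place, fe-secure-common.c, regardless of which library is compiled in.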
The previous code structure was open_client_SSL() calls verify_peer_name_matches_certificate() calls verify_peer_name_matches_certificate_name() calls wildcard_certificate_match() and was completely in fe-secure-openssl.c. The new structure is open_client_SSL() [openssl] calls pq_verify_peer_name_matches_certificate() [generic] calls pgtls_verify_peer_name_matches_certificate_guts() [openssl] calls openssl_verify_peer_name_matches_certificate_name() [openssl] calls pq_verify_peer_name_matches_certificate_name() [generic] calls wildcard_certificate_match() [generic] Move the generic functions into a new file fe-secure-common.c, so the calls generally go fe-connect.c -> fe-secure.c -> fe-secure-${impl}.c -> fe-secure-common.c, although there is a bit of back-and-forth between the last two. Reviewed-by: Michael Paquier --- src/interfaces/libpq/Makefile | 2 +- src/interfaces/libpq/fe-secure-common.c | 204 ++++++++++++++++++++++ src/interfaces/libpq/fe-secure-common.h | 26 +++ src/interfaces/libpq/fe-secure-openssl.c | 210 +++-------------------- src/interfaces/libpq/libpq-int.h | 13 ++ src/interfaces/libpq/nls.mk | 2 +- src/tools/msvc/Mkvcbuild.pm | 1 + 7 files changed, 271 insertions(+), 187 deletions(-) create mode 100644 src/interfaces/libpq/fe-secure-common.c create mode 100644 src/interfaces/libpq/fe-secure-common.h diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile index 0bf1e7ef04..abe0a50e98 100644 --- a/src/interfaces/libpq/Makefile +++ b/src/interfaces/libpq/Makefile @@ -52,7 +52,7 @@ OBJS += encnames.o wchar.o OBJS += base64.o ip.o md5.o scram-common.o saslprep.o unicode_norm.o ifeq ($(with_openssl),yes) -OBJS += fe-secure-openssl.o sha2_openssl.o +OBJS += fe-secure-openssl.o fe-secure-common.o sha2_openssl.o else OBJS += sha2.o endif diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c new file mode 100644 index 0000000000..40203f3b64 --- /dev/null +++ b/src/interfaces/libpq/fe-secure-common.c @@ -0,0 +1,204 @@ +/*------------------------------------------------------------------------- + * + * fe-secure-common.c + * + * common implementation-independent SSL support code + * + * While fe-secure.c contains the interfaces that the rest of libpq call, this + * file contains support routines that are used by the library-specific + * implementations such as fe-secure-openssl.c. + * + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/interfaces/libpq/fe-secure-common.c + * + *------------------------------------------------------------------------- + */ + +#include "postgres_fe.h" + +#include "fe-secure-common.h" + +#include "libpq-int.h" +#include "pqexpbuffer.h" + +/* + * Check if a wildcard certificate matches the server hostname. + * + * The rule for this is: + * 1. We only match the '*' character as wildcard + * 2. We match only wildcards at the start of the string + * 3. The '*' character does *not* match '.', meaning that we match only + * a single pathname component. + * 4. We don't support more than one '*' in a single pattern. + * + * This is roughly in line with RFC2818, but contrary to what most browsers + * appear to be implementing (point 3 being the difference) + * + * Matching is always case-insensitive, since DNS is case insensitive. 
+ */ +static bool +wildcard_certificate_match(const char *pattern, const char *string) +{ + int lenpat = strlen(pattern); + int lenstr = strlen(string); + + /* If we don't start with a wildcard, it's not a match (rule 1 & 2) */ + if (lenpat < 3 || + pattern[0] != '*' || + pattern[1] != '.') + return false; + + /* If pattern is longer than the string, we can never match */ + if (lenpat > lenstr) + return false; + + /* + * If string does not end in pattern (minus the wildcard), we don't match + */ + if (pg_strcasecmp(pattern + 1, string + lenstr - lenpat + 1) != 0) + return false; + + /* + * If there is a dot left of where the pattern started to match, we don't + * match (rule 3) + */ + if (strchr(string, '.') < string + lenstr - lenpat) + return false; + + /* String ended with pattern, and didn't have a dot before, so we match */ + return true; +} + +/* + * Check if a name from a server's certificate matches the peer's hostname. + * + * Returns 1 if the name matches, and 0 if it does not. On error, returns + * -1, and sets the libpq error message. + * + * The name extracted from the certificate is returned in *store_name. The + * caller is responsible for freeing it. + */ +int +pq_verify_peer_name_matches_certificate_name(PGconn *conn, + const char *namedata, size_t namelen, + char **store_name) +{ + char *name; + int result; + char *host = PQhost(conn); + + *store_name = NULL; + + /* + * There is no guarantee the string returned from the certificate is + * NULL-terminated, so make a copy that is. + */ + name = malloc(namelen + 1); + if (name == NULL) + { + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("out of memory\n")); + return -1; + } + memcpy(name, namedata, namelen); + name[namelen] = '\0'; + + /* + * Reject embedded NULLs in certificate common or alternative name to + * prevent attacks like CVE-2009-4034. + */ + if (namelen != strlen(name)) + { + free(name); + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("SSL certificate's name contains embedded null\n")); + return -1; + } + + if (pg_strcasecmp(name, host) == 0) + { + /* Exact name match */ + result = 1; + } + else if (wildcard_certificate_match(name, host)) + { + /* Matched wildcard name */ + result = 1; + } + else + { + result = 0; + } + + *store_name = name; + return result; +} + +/* + * Verify that the server certificate matches the hostname we connected to. + * + * The certificate's Common Name and Subject Alternative Names are considered. + */ +bool +pq_verify_peer_name_matches_certificate(PGconn *conn) +{ + char *host = PQhost(conn); + int rc; + int names_examined = 0; + char *first_name = NULL; + + /* + * If told not to verify the peer name, don't do it. Return true + * indicating that the verification was successful. + */ + if (strcmp(conn->sslmode, "verify-full") != 0) + return true; + + /* Check that we have a hostname to compare with. */ + if (!(host && host[0] != '\0')) + { + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("host name must be specified for a verified SSL connection\n")); + return false; + } + + rc = pgtls_verify_peer_name_matches_certificate_guts(conn, &names_examined, &first_name); + + if (rc == 0) + { + /* + * No match. Include the name from the server certificate in the error + * message, to aid debugging broken configurations. If there are + * multiple names, only print the first one to avoid an overly long + * error message. 
+ */ + if (names_examined > 1) + { + printfPQExpBuffer(&conn->errorMessage, + libpq_ngettext("server certificate for \"%s\" (and %d other name) does not match host name \"%s\"\n", + "server certificate for \"%s\" (and %d other names) does not match host name \"%s\"\n", + names_examined - 1), + first_name, names_examined - 1, host); + } + else if (names_examined == 1) + { + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("server certificate for \"%s\" does not match host name \"%s\"\n"), + first_name, host); + } + else + { + printfPQExpBuffer(&conn->errorMessage, + libpq_gettext("could not get server's host name from server certificate\n")); + } + } + + /* clean up */ + if (first_name) + free(first_name); + + return (rc == 1); +} diff --git a/src/interfaces/libpq/fe-secure-common.h b/src/interfaces/libpq/fe-secure-common.h new file mode 100644 index 0000000000..980a58af25 --- /dev/null +++ b/src/interfaces/libpq/fe-secure-common.h @@ -0,0 +1,26 @@ +/*------------------------------------------------------------------------- + * + * fe-secure-common.h + * + * common implementation-independent SSL support code + * + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * IDENTIFICATION + * src/interfaces/libpq/fe-secure-common.h + * + *------------------------------------------------------------------------- + */ + +#ifndef FE_SECURE_COMMON_H +#define FE_SECURE_COMMON_H + +#include "libpq-fe.h" + +extern int pq_verify_peer_name_matches_certificate_name(PGconn *conn, + const char *namedata, size_t namelen, + char **store_name); +extern bool pq_verify_peer_name_matches_certificate(PGconn *conn); + +#endif /* FE_SECURE_COMMON_H */ diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c index 9ab317320a..cade4e157c 100644 --- a/src/interfaces/libpq/fe-secure-openssl.c +++ b/src/interfaces/libpq/fe-secure-openssl.c @@ -28,6 +28,7 @@ #include "libpq-fe.h" #include "fe-auth.h" +#include "fe-secure-common.h" #include "libpq-int.h" #ifdef WIN32 @@ -60,9 +61,8 @@ #endif #include -static bool verify_peer_name_matches_certificate(PGconn *); static int verify_cb(int ok, X509_STORE_CTX *ctx); -static int verify_peer_name_matches_certificate_name(PGconn *conn, +static int openssl_verify_peer_name_matches_certificate_name(PGconn *conn, ASN1_STRING *name, char **store_name); static void destroy_ssl_system(void); @@ -492,76 +492,16 @@ verify_cb(int ok, X509_STORE_CTX *ctx) /* - * Check if a wildcard certificate matches the server hostname. - * - * The rule for this is: - * 1. We only match the '*' character as wildcard - * 2. We match only wildcards at the start of the string - * 3. The '*' character does *not* match '.', meaning that we match only - * a single pathname component. - * 4. We don't support more than one '*' in a single pattern. - * - * This is roughly in line with RFC2818, but contrary to what most browsers - * appear to be implementing (point 3 being the difference) - * - * Matching is always case-insensitive, since DNS is case insensitive. 
- */ -static int -wildcard_certificate_match(const char *pattern, const char *string) -{ - int lenpat = strlen(pattern); - int lenstr = strlen(string); - - /* If we don't start with a wildcard, it's not a match (rule 1 & 2) */ - if (lenpat < 3 || - pattern[0] != '*' || - pattern[1] != '.') - return 0; - - if (lenpat > lenstr) - /* If pattern is longer than the string, we can never match */ - return 0; - - if (pg_strcasecmp(pattern + 1, string + lenstr - lenpat + 1) != 0) - - /* - * If string does not end in pattern (minus the wildcard), we don't - * match - */ - return 0; - - if (strchr(string, '.') < string + lenstr - lenpat) - - /* - * If there is a dot left of where the pattern started to match, we - * don't match (rule 3) - */ - return 0; - - /* String ended with pattern, and didn't have a dot before, so we match */ - return 1; -} - -/* - * Check if a name from a server's certificate matches the peer's hostname. - * - * Returns 1 if the name matches, and 0 if it does not. On error, returns - * -1, and sets the libpq error message. - * - * The name extracted from the certificate is returned in *store_name. The - * caller is responsible for freeing it. + * OpenSSL-specific wrapper around + * pq_verify_peer_name_matches_certificate_name(), converting the ASN1_STRING + * into a plain C string. */ static int -verify_peer_name_matches_certificate_name(PGconn *conn, ASN1_STRING *name_entry, - char **store_name) +openssl_verify_peer_name_matches_certificate_name(PGconn *conn, ASN1_STRING *name_entry, + char **store_name) { int len; - char *name; const unsigned char *namedata; - int result; - char *host = PQhost(conn); - - *store_name = NULL; /* Should not happen... */ if (name_entry == NULL) @@ -573,9 +513,6 @@ verify_peer_name_matches_certificate_name(PGconn *conn, ASN1_STRING *name_entry, /* * GEN_DNS can be only IA5String, equivalent to US ASCII. - * - * There is no guarantee the string returned from the certificate is - * NULL-terminated, so make a copy that is. */ #ifdef HAVE_ASN1_STRING_GET0_DATA namedata = ASN1_STRING_get0_data(name_entry); @@ -583,45 +520,9 @@ verify_peer_name_matches_certificate_name(PGconn *conn, ASN1_STRING *name_entry, namedata = ASN1_STRING_data(name_entry); #endif len = ASN1_STRING_length(name_entry); - name = malloc(len + 1); - if (name == NULL) - { - printfPQExpBuffer(&conn->errorMessage, - libpq_gettext("out of memory\n")); - return -1; - } - memcpy(name, namedata, len); - name[len] = '\0'; - - /* - * Reject embedded NULLs in certificate common or alternative name to - * prevent attacks like CVE-2009-4034. - */ - if (len != strlen(name)) - { - free(name); - printfPQExpBuffer(&conn->errorMessage, - libpq_gettext("SSL certificate's name contains embedded null\n")); - return -1; - } - if (pg_strcasecmp(name, host) == 0) - { - /* Exact name match */ - result = 1; - } - else if (wildcard_certificate_match(name, host)) - { - /* Matched wildcard name */ - result = 1; - } - else - { - result = 0; - } - - *store_name = name; - return result; + /* OK to cast from unsigned to plain char, since it's all ASCII. */ + return pq_verify_peer_name_matches_certificate_name(conn, (const char *) namedata, len, store_name); } /* @@ -629,33 +530,14 @@ verify_peer_name_matches_certificate_name(PGconn *conn, ASN1_STRING *name_entry, * * The certificate's Common Name and Subject Alternative Names are considered. 
*/ -static bool -verify_peer_name_matches_certificate(PGconn *conn) +int +pgtls_verify_peer_name_matches_certificate_guts(PGconn *conn, + int *names_examined, + char **first_name) { - int names_examined = 0; - bool found_match = false; - bool got_error = false; - char *first_name = NULL; - STACK_OF(GENERAL_NAME) *peer_san; int i; - int rc; - char *host = PQhost(conn); - - /* - * If told not to verify the peer name, don't do it. Return true - * indicating that the verification was successful. - */ - if (strcmp(conn->sslmode, "verify-full") != 0) - return true; - - /* Check that we have a hostname to compare with. */ - if (!(host && host[0] != '\0')) - { - printfPQExpBuffer(&conn->errorMessage, - libpq_gettext("host name must be specified for a verified SSL connection\n")); - return false; - } + int rc = 0; /* * First, get the Subject Alternative Names (SANs) from the certificate, @@ -676,24 +558,20 @@ verify_peer_name_matches_certificate(PGconn *conn) { char *alt_name; - names_examined++; - rc = verify_peer_name_matches_certificate_name(conn, + (*names_examined)++; + rc = openssl_verify_peer_name_matches_certificate_name(conn, name->d.dNSName, &alt_name); - if (rc == -1) - got_error = true; - if (rc == 1) - found_match = true; if (alt_name) { - if (!first_name) - first_name = alt_name; + if (!*first_name) + *first_name = alt_name; else free(alt_name); } } - if (found_match || got_error) + if (rc != 0) break; } sk_GENERAL_NAME_free(peer_san); @@ -706,7 +584,7 @@ verify_peer_name_matches_certificate(PGconn *conn) * (Per RFC 2818 and RFC 6125, if the subjectAltName extension of type * dNSName is present, the CN must be ignored.) */ - if (names_examined == 0) + if (*names_examined == 0) { X509_NAME *subject_name; @@ -719,55 +597,17 @@ verify_peer_name_matches_certificate(PGconn *conn) NID_commonName, -1); if (cn_index >= 0) { - names_examined++; - rc = verify_peer_name_matches_certificate_name( + (*names_examined)++; + rc = openssl_verify_peer_name_matches_certificate_name( conn, X509_NAME_ENTRY_get_data( X509_NAME_get_entry(subject_name, cn_index)), - &first_name); - - if (rc == -1) - got_error = true; - else if (rc == 1) - found_match = true; + first_name); } } } - if (!found_match && !got_error) - { - /* - * No match. Include the name from the server certificate in the error - * message, to aid debugging broken configurations. If there are - * multiple names, only print the first one to avoid an overly long - * error message. 
- */ - if (names_examined > 1) - { - printfPQExpBuffer(&conn->errorMessage, - libpq_ngettext("server certificate for \"%s\" (and %d other name) does not match host name \"%s\"\n", - "server certificate for \"%s\" (and %d other names) does not match host name \"%s\"\n", - names_examined - 1), - first_name, names_examined - 1, host); - } - else if (names_examined == 1) - { - printfPQExpBuffer(&conn->errorMessage, - libpq_gettext("server certificate for \"%s\" does not match host name \"%s\"\n"), - first_name, host); - } - else - { - printfPQExpBuffer(&conn->errorMessage, - libpq_gettext("could not get server's host name from server certificate\n")); - } - } - - /* clean up */ - if (first_name) - free(first_name); - - return found_match && !got_error; + return rc; } #if defined(ENABLE_THREAD_SAFETY) && defined(HAVE_CRYPTO_LOCK) @@ -1441,7 +1281,7 @@ open_client_SSL(PGconn *conn) return PGRES_POLLING_FAILED; } - if (!verify_peer_name_matches_certificate(conn)) + if (!pq_verify_peer_name_matches_certificate(conn)) { pgtls_close(conn); return PGRES_POLLING_FAILED; diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h index b3492b033a..eba23dcecc 100644 --- a/src/interfaces/libpq/libpq-int.h +++ b/src/interfaces/libpq/libpq-int.h @@ -732,6 +732,19 @@ extern char *pgtls_get_finished(PGconn *conn, size_t *len); */ extern char *pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len); +/* + * Verify that the server certificate matches the host name we connected to. + * + * The certificate's Common Name and Subject Alternative Names are considered. + * + * Returns 1 if the name matches, and 0 if it does not. On error, returns + * -1, and sets the libpq error message. + * + */ +extern int pgtls_verify_peer_name_matches_certificate_guts(PGconn *conn, + int *names_examined, + char **first_name); + /* === miscellaneous macros === */ /* diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk index 2c5659e262..4196870b49 100644 --- a/src/interfaces/libpq/nls.mk +++ b/src/interfaces/libpq/nls.mk @@ -1,6 +1,6 @@ # src/interfaces/libpq/nls.mk CATALOG_NAME = libpq AVAIL_LANGUAGES = cs de es fr he it ja ko pl pt_BR ru sv tr zh_CN zh_TW -GETTEXT_FILES = fe-auth.c fe-auth-scram.c fe-connect.c fe-exec.c fe-lobj.c fe-misc.c fe-protocol2.c fe-protocol3.c fe-secure.c fe-secure-openssl.c win32.c +GETTEXT_FILES = fe-auth.c fe-auth-scram.c fe-connect.c fe-exec.c fe-lobj.c fe-misc.c fe-protocol2.c fe-protocol3.c fe-secure.c fe-secure-common.c fe-secure-openssl.c win32.c GETTEXT_TRIGGERS = libpq_gettext pqInternalNotice:2 GETTEXT_FLAGS = libpq_gettext:1:pass-c-format pqInternalNotice:2:c-format diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm index 93f364a9f2..d8c279ab92 100644 --- a/src/tools/msvc/Mkvcbuild.pm +++ b/src/tools/msvc/Mkvcbuild.pm @@ -242,6 +242,7 @@ sub mkvcbuild # building with OpenSSL. if (!$solution->{options}->{openssl}) { + $libpq->RemoveFile('src/interfaces/libpq/fe-secure-common.c'); $libpq->RemoveFile('src/interfaces/libpq/fe-secure-openssl.c'); $libpq->RemoveFile('src/common/sha2_openssl.c'); } From 0ff5bd7b47377a6b5939d6fbbb67c8d42f9170dc Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 31 Jan 2018 15:12:33 -0500 Subject: [PATCH 0924/1087] pg_prewarm: Add missing LWLockRegisterTranche call. Commit 79ccd7cbd5ca44bee0191d12e9e65abf702899e7, which added automatic prewarming, neglected this. Kyotaro Horiguchi, reviewed by me. 
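The idiom at issue, roughly (a hedged sketch; the "myext"/MyExtState names are invented, and the actual one-line fix is visible in the diff below): the tranche id is assigned once in shared memory, but the id-to-name mapping is process-local, so every process that will take the lock must register the name itself.

    typedef struct
    {
        LWLock      lock;
        /* ... shared extension state ... */
    } MyExtState;

    static MyExtState *state;

    static void
    myext_init_shmem(void)
    {
        bool        found;

        LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
        state = ShmemInitStruct("myext", sizeof(MyExtState), &found);
        if (!found)
            LWLockInitialize(&state->lock, LWLockNewTrancheId());   /* first process only */
        LWLockRelease(AddinShmemInitLock);

        /* ...but this must happen in every process, found or not: */
        LWLockRegisterTranche(state->lock.tranche, "myext");
    }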
Discussion: http://postgr.es/m/20171215.173219.38055760.horiguchi.kyotaro@lab.ntt.co.jp --- contrib/pg_prewarm/autoprewarm.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c index 3a4cc5b172..f99f9c07af 100644 --- a/contrib/pg_prewarm/autoprewarm.c +++ b/contrib/pg_prewarm/autoprewarm.c @@ -767,6 +767,8 @@ apw_init_shmem(void) } LWLockRelease(AddinShmemInitLock); + LWLockRegisterTranche(apw_state->lock.tranche, "autoprewarm"); + return found; } From 3ccdc6f9a5876d0953912fd589989387764ed9a3 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 31 Jan 2018 15:43:11 -0500 Subject: [PATCH 0925/1087] Fix list partition constraints for partition keys of array type. The old code always generated a constraint of the form col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an array type. Instead, generate col = val when there's only one value, col = val1 OR col = val2 OR ... when there are multiple values and col is of array type, and the old form when there are multiple values and col is not of an array type. As a side benefit, this makes constraint exclusion able to prune a list partition declared to accept a single Boolean value, which didn't work before. Amit Langote, reviewed by Etsuro Fujita Discussion: http://postgr.es/m/97267195-e235-89d1-a41a-c110198dfce9@lab.ntt.co.jp --- src/backend/catalog/partition.c | 98 +++++++++++++------ src/test/regress/expected/create_table.out | 18 +++- src/test/regress/expected/foreign_data.out | 6 +- src/test/regress/expected/partition_prune.out | 8 +- src/test/regress/sql/create_table.sql | 6 ++ 5 files changed, 93 insertions(+), 43 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index e69bbc0345..45945511f0 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -1625,18 +1625,60 @@ make_partition_op_expr(PartitionKey key, int keynum, { case PARTITION_STRATEGY_LIST: { - ScalarArrayOpExpr *saopexpr; - - /* Build leftop = ANY (rightop) */ - saopexpr = makeNode(ScalarArrayOpExpr); - saopexpr->opno = operoid; - saopexpr->opfuncid = get_opcode(operoid); - saopexpr->useOr = true; - saopexpr->inputcollid = key->partcollation[keynum]; - saopexpr->args = list_make2(arg1, arg2); - saopexpr->location = -1; - - result = (Expr *) saopexpr; + List *elems = (List *) arg2; + int nelems = list_length(elems); + + Assert(nelems >= 1); + Assert(keynum == 0); + + if (nelems > 1 && + !type_is_array(key->parttypid[keynum])) + { + ArrayExpr *arrexpr; + ScalarArrayOpExpr *saopexpr; + + /* Construct an ArrayExpr for the right-hand inputs */ + arrexpr = makeNode(ArrayExpr); + arrexpr->array_typeid = + get_array_type(key->parttypid[keynum]); + arrexpr->array_collid = key->parttypcoll[keynum]; + arrexpr->element_typeid = key->parttypid[keynum]; + arrexpr->elements = elems; + arrexpr->multidims = false; + arrexpr->location = -1; + + /* Build leftop = ANY (rightop) */ + saopexpr = makeNode(ScalarArrayOpExpr); + saopexpr->opno = operoid; + saopexpr->opfuncid = get_opcode(operoid); + saopexpr->useOr = true; + saopexpr->inputcollid = key->partcollation[keynum]; + saopexpr->args = list_make2(arg1, arrexpr); + saopexpr->location = -1; + + result = (Expr *) saopexpr; + } + else + { + List *elemops = NIL; + ListCell *lc; + + foreach (lc, elems) + { + Expr *elem = lfirst(lc), + *elemop; + + elemop = make_opclause(operoid, + BOOLOID, + false, + arg1, elem, + InvalidOid, + key->partcollation[keynum]); + elemops = lappend(elemops,
elemop); + } + + result = nelems > 1 ? makeBoolExpr(OR_EXPR, elemops, -1) : linitial(elemops); + } break; } @@ -1758,11 +1800,10 @@ get_qual_for_list(Relation parent, PartitionBoundSpec *spec) PartitionKey key = RelationGetPartitionKey(parent); List *result; Expr *keyCol; - ArrayExpr *arr; Expr *opexpr; NullTest *nulltest; ListCell *cell; - List *arrelems = NIL; + List *elems = NIL; bool list_has_null = false; /* @@ -1828,7 +1869,7 @@ get_qual_for_list(Relation parent, PartitionBoundSpec *spec) false, /* isnull */ key->parttypbyval[0]); - arrelems = lappend(arrelems, val); + elems = lappend(elems, val); } } else @@ -1843,30 +1884,25 @@ get_qual_for_list(Relation parent, PartitionBoundSpec *spec) if (val->constisnull) list_has_null = true; else - arrelems = lappend(arrelems, copyObject(val)); + elems = lappend(elems, copyObject(val)); } } - if (arrelems) + if (elems) { - /* Construct an ArrayExpr for the non-null partition values */ - arr = makeNode(ArrayExpr); - arr->array_typeid = !type_is_array(key->parttypid[0]) - ? get_array_type(key->parttypid[0]) - : key->parttypid[0]; - arr->array_collid = key->parttypcoll[0]; - arr->element_typeid = key->parttypid[0]; - arr->elements = arrelems; - arr->multidims = false; - arr->location = -1; - - /* Generate the main expression, i.e., keyCol = ANY (arr) */ + /* + * Generate the operator expression from the non-null partition + * values. + */ opexpr = make_partition_op_expr(key, 0, BTEqualStrategyNumber, - keyCol, (Expr *) arr); + keyCol, (Expr *) elems); } else { - /* If there are no partition values, we don't need an = ANY expr */ + /* + * If there are no partition values, we don't need an operator + * expression. + */ opexpr = NULL; } diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index ef0906776e..e554ec4844 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -742,7 +742,7 @@ CREATE TABLE part_c_1_10 PARTITION OF part_c FOR VALUES FROM (1) TO (10); a | text | | | | extended | | b | integer | | not null | 1 | plain | | Partition of: parted FOR VALUES IN ('b') -Partition constraint: ((a IS NOT NULL) AND (a = ANY (ARRAY['b'::text]))) +Partition constraint: ((a IS NOT NULL) AND (a = 'b'::text)) Check constraints: "check_a" CHECK (length(a) > 0) "part_b_b_check" CHECK (b >= 0) @@ -755,7 +755,7 @@ Check constraints: a | text | | | | extended | | b | integer | | not null | 0 | plain | | Partition of: parted FOR VALUES IN ('c') -Partition constraint: ((a IS NOT NULL) AND (a = ANY (ARRAY['c'::text]))) +Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text)) Partition key: RANGE (b) Check constraints: "check_a" CHECK (length(a) > 0) @@ -769,7 +769,7 @@ Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10) a | text | | | | extended | | b | integer | | not null | 0 | plain | | Partition of: part_c FOR VALUES FROM (1) TO (10) -Partition constraint: ((a IS NOT NULL) AND (a = ANY (ARRAY['c'::text])) AND (b IS NOT NULL) AND (b >= 1) AND (b < 10)) +Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text) AND (b IS NOT NULL) AND (b >= 1) AND (b < 10)) Check constraints: "check_a" CHECK (length(a) > 0) @@ -868,3 +868,15 @@ Partition key: LIST (a) Number of partitions: 0 DROP TABLE parted_col_comment; +-- list partitioning on array type column +CREATE TABLE arrlp (a int[]) PARTITION BY LIST (a); +CREATE TABLE arrlp12 PARTITION OF arrlp FOR VALUES IN ('{1}', '{2}'); +\d+ arrlp12 + Table "public.arrlp12" + Column | Type | Collation | Nullable | Default | 
Storage | Stats target | Description +--------+-----------+-----------+----------+---------+----------+--------------+------------- + a | integer[] | | | | extended | | +Partition of: arrlp FOR VALUES IN ('{1}', '{2}') +Partition constraint: ((a IS NOT NULL) AND (((a)::anyarray OPERATOR(pg_catalog.=) '{1}'::integer[]) OR ((a)::anyarray OPERATOR(pg_catalog.=) '{2}'::integer[]))) + +DROP TABLE arrlp; diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out index d2c184f2cf..6a1b278e5a 100644 --- a/src/test/regress/expected/foreign_data.out +++ b/src/test/regress/expected/foreign_data.out @@ -1863,7 +1863,7 @@ Partitions: pt2_1 FOR VALUES IN (1) c2 | text | | | | | extended | | c3 | date | | | | | plain | | Partition of: pt2 FOR VALUES IN (1) -Partition constraint: ((c1 IS NOT NULL) AND (c1 = ANY (ARRAY[1]))) +Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1)) Server: s0 FDW options: (delimiter ',', quote '"', "be quoted" 'value') @@ -1935,7 +1935,7 @@ Partitions: pt2_1 FOR VALUES IN (1) c2 | text | | | | | extended | | c3 | date | | | | | plain | | Partition of: pt2 FOR VALUES IN (1) -Partition constraint: ((c1 IS NOT NULL) AND (c1 = ANY (ARRAY[1]))) +Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1)) Server: s0 FDW options: (delimiter ',', quote '"', "be quoted" 'value') @@ -1963,7 +1963,7 @@ Partitions: pt2_1 FOR VALUES IN (1) c2 | text | | | | | extended | | c3 | date | | not null | | | plain | | Partition of: pt2 FOR VALUES IN (1) -Partition constraint: ((c1 IS NOT NULL) AND (c1 = ANY (ARRAY[1]))) +Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1)) Check constraints: "p21chk" CHECK (c2 <> ''::text) Server: s0 diff --git a/src/test/regress/expected/partition_prune.out b/src/test/regress/expected/partition_prune.out index aabb0240a9..348719bd62 100644 --- a/src/test/regress/expected/partition_prune.out +++ b/src/test/regress/expected/partition_prune.out @@ -1014,23 +1014,19 @@ explain (costs off) select * from boolpart where a = false; Append -> Seq Scan on boolpart_f Filter: (NOT a) - -> Seq Scan on boolpart_t - Filter: (NOT a) -> Seq Scan on boolpart_default Filter: (NOT a) -(7 rows) +(5 rows) explain (costs off) select * from boolpart where not a = false; QUERY PLAN ------------------------------------ Append - -> Seq Scan on boolpart_f - Filter: a -> Seq Scan on boolpart_t Filter: a -> Seq Scan on boolpart_default Filter: a -(7 rows) +(5 rows) explain (costs off) select * from boolpart where a is true or a is not true; QUERY PLAN diff --git a/src/test/regress/sql/create_table.sql b/src/test/regress/sql/create_table.sql index 10e5d49e8e..a71d9ae7ab 100644 --- a/src/test/regress/sql/create_table.sql +++ b/src/test/regress/sql/create_table.sql @@ -711,3 +711,9 @@ COMMENT ON COLUMN parted_col_comment.a IS 'Partition key'; SELECT obj_description('parted_col_comment'::regclass); \d+ parted_col_comment DROP TABLE parted_col_comment; + +-- list partitioning on array type column +CREATE TABLE arrlp (a int[]) PARTITION BY LIST (a); +CREATE TABLE arrlp12 PARTITION OF arrlp FOR VALUES IN ('{1}', '{2}'); +\d+ arrlp12 +DROP TABLE arrlp; From e5dede90971d2ddbb7bca72ffc329c043bf717db Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 31 Jan 2018 16:25:21 -0500 Subject: [PATCH 0926/1087] doc: mention datadir locations are actually config locations Technically, pg_upgrade's --old-datadir and --new-datadir are configuration directories, not necessarily data directories. 
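For instance, with a packaging layout that keeps configuration apart from data (a hypothetical Debian-style example, not taken from the patch), the options name the directories that hold postgresql.conf, and the real data directories are found through its data_directory setting:

    pg_upgrade --old-bindir=/usr/lib/postgresql/9.6/bin \
               --new-bindir=/usr/lib/postgresql/10/bin \
               --old-datadir=/etc/postgresql/9.6/main \
               --new-datadir=/etc/postgresql/10/main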
This is reflected in the 'postgres' manual page, so do the same for pg_upgrade. Reported-by: Yves Goergen Bug: 14898 Discussion: https://postgr.es/m/20171110220912.31513.13322@wrigleys.postgresql.org Backpatch-through: 10 --- doc/src/sgml/ref/pgupgrade.sgml | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index aaa4b04a42..ffa400ad84 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -24,9 +24,9 @@ newbindir - olddatadir + oldconfigdir - newdatadir + newconfigdir option @@ -99,16 +99,16 @@ - datadir - datadir - the old cluster data directory; environment + configdir + configdir + the old database cluster configuration directory; environment variable PGDATAOLD - datadir - datadir - the new cluster data directory; environment + configdir + configdir + the new database cluster configuration directory; environment variable PGDATANEW @@ -476,7 +476,7 @@ pg_upgrade.exe Save configuration files - Save any configuration files from the old standbys' data + Save any configuration files from the old standbys' configuration directories you need to keep, e.g. postgresql.conf, recovery.conf, because these will be overwritten or removed in the next step. From d40d97d6c74496a35111b29f8eef2a09cb43bc58 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 31 Jan 2018 16:28:11 -0500 Subject: [PATCH 0927/1087] pgcrypto's encrypt() supports AES-128, AES-192, and AES-256 Previously, only 128 was mentioned, but the others are also supported. Thomas Munro, reviewed by Michael Paquier and extended a bit by me. Discussion: http://postgr.es/m/CAEepm=1XbBHXYJKofGjnM2Qfz-ZBVqhGU4AqvtgR+Hegy4fdKg@mail.gmail.com --- contrib/pgcrypto/expected/rijndael.out | 2 +- contrib/pgcrypto/sql/rijndael.sql | 2 +- doc/src/sgml/pgcrypto.sgml | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/contrib/pgcrypto/expected/rijndael.out b/contrib/pgcrypto/expected/rijndael.out index 14b2650c32..5366604a3d 100644 --- a/contrib/pgcrypto/expected/rijndael.out +++ b/contrib/pgcrypto/expected/rijndael.out @@ -1,5 +1,5 @@ -- --- AES / Rijndael-128 cipher +-- AES cipher (aka Rijndael-128, -192, or -256) -- -- ensure consistent test output regardless of the default bytea format SET bytea_output TO escape; diff --git a/contrib/pgcrypto/sql/rijndael.sql b/contrib/pgcrypto/sql/rijndael.sql index bfbf95d39b..a9bcbf33d0 100644 --- a/contrib/pgcrypto/sql/rijndael.sql +++ b/contrib/pgcrypto/sql/rijndael.sql @@ -1,5 +1,5 @@ -- --- AES / Rijndael-128 cipher +-- AES cipher (aka Rijndael-128, -192, or -256) -- -- ensure consistent test output regardless of the default bytea format SET bytea_output TO escape; diff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml index 25341d684c..efa193d22e 100644 --- a/doc/src/sgml/pgcrypto.sgml +++ b/doc/src/sgml/pgcrypto.sgml @@ -1062,7 +1062,7 @@ decrypt_iv(data bytea, key bytea, iv bytea, type text) returns bytea bf — Blowfish - aes — AES (Rijndael-128) + aes — AES (Rijndael-128, -192 or -256) and mode is one of: From de715414608846ce1ae44b79a39d61c48e25dce7 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 31 Jan 2018 16:43:32 -0500 Subject: [PATCH 0928/1087] doc: Improve pg_upgrade rsync examples to use clusterdir Commit 9521ce4a7a1125385fb4de9689f345db594c516a from Sep 13, 2017 and backpatched through 9.5 used rsync examples with datadir. 
The reporter has pointed out, and testing has verified, that clusterdir must be used, so update the docs accordingly. Reported-by: Don Seiler Discussion: https://postgr.es/m/CAHJZqBD0u9dCERpYzK6BkRv=663AmH==DFJpVC=M4Xg_rq2=CQ@mail.gmail.com Backpatch-through: 9.5 --- doc/src/sgml/ref/pgupgrade.sgml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index ffa400ad84..0f8f6af98b 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -494,10 +494,10 @@ pg_upgrade.exe server: -rsync --archive --delete --hard-links --size-only --no-inc-recursive old_pgdata new_pgdata remote_dir +rsync --archive --delete --hard-links --size-only --no-inc-recursive old_cluster new_cluster remote_dir - where and are relative + where and are relative to the current directory on the primary, and is above the old and new cluster directories on the standby. The directory structure under the specified @@ -506,8 +506,8 @@ rsync --archive --delete --hard-links --size-only --no-inc-recursive old_pgdata remote directory, e.g. -rsync --archive --delete --hard-links --size-only --no-inc-recursive /opt/PostgreSQL/9.5/data \ - /opt/PostgreSQL/9.6/data standby.example.com:/opt/PostgreSQL +rsync --archive --delete --hard-links --size-only --no-inc-recursive /opt/PostgreSQL/9.5 \ + /opt/PostgreSQL/9.6 standby.example.com:/opt/PostgreSQL You can verify what the command will do using From 22757960bb3b2a6fa331bad132998c53b3e744a9 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 31 Jan 2018 16:45:37 -0500 Subject: [PATCH 0929/1087] Fix typo: colums -> columns. Along the way, also fix code indentation. Alexander Lakhin, reviewed by Michael Paquier Discussion: http://postgr.es/m/45c44aa7-7cfa-7f3b-83fd-d8300677fdda@gmail.com --- src/backend/parser/parse_utilcmd.c | 4 ++-- src/test/regress/expected/identity.out | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 5afb363096..1d35815fcf 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -664,8 +664,8 @@ transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column) if (cxt->ofType) ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("identity colums are not supported on typed tables"))); + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("identity columns are not supported on typed tables"))); if (cxt->partbound) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out index 87ef0d3b2a..627389b749 100644 --- a/src/test/regress/expected/identity.out +++ b/src/test/regress/expected/identity.out @@ -349,7 +349,7 @@ DROP USER regress_user1; -- typed tables (currently not supported) CREATE TYPE itest_type AS (f1 integer, f2 text, f3 bigint); CREATE TABLE itest12 OF itest_type (f1 WITH OPTIONS GENERATED ALWAYS AS IDENTITY); -- error -ERROR: identity colums are not supported on typed tables +ERROR: identity columns are not supported on typed tables DROP TYPE itest_type CASCADE; -- table partitions (currently not supported) CREATE TABLE itest_parent (f1 date NOT NULL, f2 text, f3 bigint) PARTITION BY RANGE (f1); From 3b15255912af3fa428fbc296d830292ffc8c9803 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 31 Jan 2018 16:54:33 -0500 Subject: [PATCH 0930/1087] doc: in contrib-spi, mention and link to the meaning of SPI Also 
remove outdated comment about SPI subtransactions. Reported-by: gregory@arenius.com Discussion: https://postgr.es/m/151726276676.1240.10501743959198501067@wrigleys.postgresql.org Backpatch-through: 9.3 --- doc/src/sgml/contrib-spi.sgml | 4 +++- doc/src/sgml/spi.sgml | 3 +-- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/contrib-spi.sgml b/doc/src/sgml/contrib-spi.sgml index 32c7105cf6..844ea161c4 100644 --- a/doc/src/sgml/contrib-spi.sgml +++ b/doc/src/sgml/contrib-spi.sgml @@ -10,7 +10,9 @@ The spi module provides several workable examples - of using SPI and triggers. While these functions are of some value in + of using the Server Programming Interface + (SPI) and triggers. While these functions are of + some value in their own right, they are even more useful as examples to modify for your own purposes. The functions are general enough to be used with any table, but you have to specify table and field names (as described diff --git a/doc/src/sgml/spi.sgml b/doc/src/sgml/spi.sgml index 10448922b1..0bac342322 100644 --- a/doc/src/sgml/spi.sgml +++ b/doc/src/sgml/spi.sgml @@ -42,8 +42,7 @@ have documented error-return conventions. Those conventions only apply for errors detected within the SPI functions themselves, however.) It is possible to recover control after an error by establishing your own - subtransaction surrounding SPI calls that might fail. This is not currently - documented because the mechanisms required are still in flux. + subtransaction surrounding SPI calls that might fail. From 1cf1112990cff432b53a74a0ac9ca897ce8a7688 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 31 Jan 2018 17:00:17 -0500 Subject: [PATCH 0931/1087] doc: clarify trigger behavior for inheritance The previous wording added in PG 10 wasn't specific enough about the behavior of statement and row triggers when using inheritance. Reported-by: ian@thepathcentral.com Discussion: https://postgr.es/m/20171129193934.27108.30796@wrigleys.postgresql.org Backpatch-through: 10 --- doc/src/sgml/ref/create_trigger.sgml | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index a8c0b5725d..dab1041130 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -501,9 +501,10 @@ UPDATE OF column_name1 [, column_name2 Modifying a partitioned table or a table with inheritance children fires - statement-level triggers directly attached to that table, but not + statement-level triggers attached to the explicitly named table, but not statement-level triggers for its partitions or child tables. In contrast, - row-level triggers are fired for all affected partitions or child tables. + row-level triggers are fired on the rows in effected partitions or + child tables, even if they are not explicitly named in the query. If a statement-level trigger has been defined with transition relations named by a REFERENCING clause, then before and after images of rows are visible from all affected partitions or child tables. From 59ad2463507622a1244740c4b527610f590dc473 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 31 Jan 2018 17:09:59 -0500 Subject: [PATCH 0932/1087] doc: clarify major/minor pg_upgrade versions with examples The previous docs added in PG 10 were not clear enough for someone who didn't understand the PG 10 version change, so give more specific examples.
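To make the numbering change concrete: since version 10 the first component alone is the major version, whereas 9.x releases used the first two components, and pg_upgrade is needed only when the major part changes. A minimal standalone sketch, not part of the patch, decoding both schemes from a server_version_num-style value (the helper name is made up):

#include <stdio.h>

/* Decode a PostgreSQL version number such as the one reported by
 * server_version_num: pre-10 releases use Mmmpp (9.6.3 -> 90603),
 * while 10 and later use MMmmmm (10.2 -> 100002). */
static void
decode_version(int version_num)
{
	if (version_num >= 100000)	/* PostgreSQL 10 and later */
		printf("major %d, minor %d\n",
			   version_num / 10000, version_num % 10000);
	else						/* 9.x and earlier */
		printf("major %d.%d, minor %d\n",
			   version_num / 10000, (version_num / 100) % 100,
			   version_num % 100);
}

int
main(void)
{
	decode_version(90603);		/* 9.6.3: a minor upgrade from 9.6.2 */
	decode_version(100002);		/* 10.2: a minor upgrade from 10.1 */
	return 0;
}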
Reported-by: jim@room118solutions.com Discussion: https://postgr.es/m/20171218213041.25744.8414@wrigleys.postgresql.org Backpatch-through: 10 --- doc/src/sgml/ref/pgupgrade.sgml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml index 0f8f6af98b..6dafb404a1 100644 --- a/doc/src/sgml/ref/pgupgrade.sgml +++ b/doc/src/sgml/ref/pgupgrade.sgml @@ -38,9 +38,9 @@ pg_upgrade (formerly called pg_migrator) allows data stored in PostgreSQL data files to be upgraded to a later PostgreSQL major version without the data dump/reload typically required for - major version upgrades, e.g. from 9.6.3 to the current major release - of PostgreSQL. It is not required for minor version upgrades, e.g. from - 9.6.2 to 9.6.3. + major version upgrades, e.g. from 9.5.8 to 9.6.4 or from 10.7 to 11.2. + It is not required for minor version upgrades, e.g. from 9.6.2 to 9.6.3 + or from 10.1 to 10.2. From eab30cc6b55dc03589bda13bc76b12d7142d5686 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Wed, 31 Jan 2018 17:52:47 -0500 Subject: [PATCH 0933/1087] doc: fix trigger inheritance wording Fix wording from commit 1cf1112990cff432b53a74a0ac9ca897ce8a7688 Reported-by: Robert Haas Backpatch-through: 10 --- doc/src/sgml/ref/create_trigger.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/create_trigger.sgml b/doc/src/sgml/ref/create_trigger.sgml index dab1041130..3d6b9f033c 100644 --- a/doc/src/sgml/ref/create_trigger.sgml +++ b/doc/src/sgml/ref/create_trigger.sgml @@ -503,7 +503,7 @@ UPDATE OF column_name1 [, column_name2REFERENCING clause, then before and after From df9f599bc6f14307252ac75ea1dc997310da5ba6 Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Thu, 1 Feb 2018 08:30:43 -0500 Subject: [PATCH 0934/1087] psql: Add quit/help behavior/hint, for other tool portability Issuing 'quit'/'exit' in an empty psql buffer exits psql. Issuing 'quit'/'exit' in a non-empty psql buffer alone on a line with no prefix whitespace issues a hint on how to exit. Also add similar 'help' hints for 'help' in a non-empty psql buffer. Reported-by: Everaldo Canuto Discussion: https://postgr.es/m/flat/CALVFHFb-C_5_94hueWg6Dd0zu7TfbpT7hzsh9Zf0DEDOSaAnfA%40mail.gmail.com Author: original author Robert Haas, modified by me --- src/bin/psql/mainloop.c | 113 +++++++++++++++++++++++++++++++++++----- 1 file changed, 100 insertions(+), 13 deletions(-) diff --git a/src/bin/psql/mainloop.c b/src/bin/psql/mainloop.c index a8778a57aa..a3ef15058f 100644 --- a/src/bin/psql/mainloop.c +++ b/src/bin/psql/mainloop.c @@ -216,21 +216,108 @@ MainLoop(FILE *source) continue; } - /* A request for help? Be friendly and give them some guidance */ - if (pset.cur_cmd_interactive && query_buf->len == 0 && - pg_strncasecmp(line, "help", 4) == 0 && - (line[4] == '\0' || line[4] == ';' || isspace((unsigned char) line[4]))) + /* Recognize "help", "quit", "exit" only in interactive mode */ + if (pset.cur_cmd_interactive) { - free(line); - puts(_("You are using psql, the command-line interface to PostgreSQL.")); - printf(_("Type: \\copyright for distribution terms\n" - " \\h for help with SQL commands\n" - " \\? 
for help with psql commands\n" - " \\g or terminate with semicolon to execute query\n" - " \\q to quit\n")); + char *first_word = line; + char *rest_of_line = NULL; + bool found_help = false; + bool found_exit_or_quit = false; - fflush(stdout); - continue; + /* Search for the words we recognize; must be first word */ + if (pg_strncasecmp(first_word, "help", 4) == 0) + { + rest_of_line = first_word + 4; + found_help = true; + } + else if (pg_strncasecmp(first_word, "exit", 4) == 0 || + pg_strncasecmp(first_word, "quit", 4) == 0) + { + rest_of_line = first_word + 4; + found_exit_or_quit = true; + } + + /* + * If we found a command word, check whether the rest of the line + * contains only whitespace plus maybe one semicolon. If not, + * ignore the command word after all. + */ + if (rest_of_line != NULL) + { + /* + * Ignore unless rest of line is whitespace, plus maybe one + * semicolon + */ + while (isspace((unsigned char) *rest_of_line)) + ++rest_of_line; + if (*rest_of_line == ';') + ++rest_of_line; + while (isspace((unsigned char) *rest_of_line)) + ++rest_of_line; + if (*rest_of_line != '\0') + { + found_help = false; + found_exit_or_quit = false; + } + } + + /* + * "help" is only a command when the query buffer is empty, but we + * emit a one-line message even when it isn't to help confused + * users. The text is still added to the query buffer in that + * case. + */ + if (found_help) + { + if (query_buf->len != 0) +#ifndef WIN32 + puts(_("Use \\? for help or press control-C to clear the input buffer.")); +#else + puts(_("Use \\? for help.")); +#endif + else + { + puts(_("You are using psql, the command-line interface to PostgreSQL.")); + printf(_("Type: \\copyright for distribution terms\n" + " \\h for help with SQL commands\n" + " \\? for help with psql commands\n" + " \\g or terminate with semicolon to execute query\n" + " \\q to quit\n")); + free(line); + fflush(stdout); + continue; + } + } + /* + * "quit" and "exit" are only commands when the query buffer is + * empty, but we emit a one-line message even when it isn't to + * help confused users. The text is still added to the query + * buffer in that case. + */ + if (found_exit_or_quit) + { + if (query_buf->len != 0) + { + if (prompt_status == PROMPT_READY || + prompt_status == PROMPT_CONTINUE || + prompt_status == PROMPT_PAREN) + puts(_("Use \\q to quit.")); + else +#ifndef WIN32 + puts(_("Use control-D to quit.")); +#else + puts(_("Use control-C to quit.")); +#endif + } + else + { + /* exit app */ + free(line); + fflush(stdout); + successResult = EXIT_SUCCESS; + break; + } + } } /* echo back if flag is set, unless interactive */ From ad25a6b1f25baf09c869c903c9c8e26d390875f5 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 1 Feb 2018 15:21:13 -0500 Subject: [PATCH 0935/1087] Fix possible failure to mark hash metapage dirty. Report and suggested fix by Lixian Zou. Amit Kapila put it in the form of a patch and reviewed. 
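The underlying rule the fix restores: a shared buffer must be marked dirty on every code path that modified its page, so MarkBufferDirty() cannot live inside a branch when the page is also changed outside that branch. A hedged sketch of the pattern (a made-up function, not the committed code; the caller is assumed to hold an exclusive lock on metabuf):

#include "postgres.h"

#include "access/hash.h"
#include "storage/bufmgr.h"

/* Hypothetical illustration only: both paths below modify the
 * metapage, so the dirty mark must run unconditionally before the
 * buffer lock is released. */
static void
update_meta_counters(Buffer metabuf, bool added_bitmap_page)
{
	HashMetaPage metap = HashPageGetMeta(BufferGetPage(metabuf));

	metap->hashm_spares[0]++;	/* modified on every path */
	if (added_bitmap_page)
		metap->hashm_nmaps++;	/* modified only sometimes */

	/* wrong: calling MarkBufferDirty() inside the if-block above */
	/* right: dirty the buffer on every path that touched the page */
	MarkBufferDirty(metabuf);
}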
Discussion: http://postgr.es/m/151739848647.1239.12528851873396651946@wrigleys.postgresql.org --- src/backend/access/hash/hashovfl.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/backend/access/hash/hashovfl.c b/src/backend/access/hash/hashovfl.c index c9de1283dc..2033b2f7f9 100644 --- a/src/backend/access/hash/hashovfl.c +++ b/src/backend/access/hash/hashovfl.c @@ -341,9 +341,10 @@ _hash_addovflpage(Relation rel, Buffer metabuf, Buffer buf, bool retain_pin) metap->hashm_mapp[metap->hashm_nmaps] = BufferGetBlockNumber(newmapbuf); metap->hashm_nmaps++; metap->hashm_spares[splitnum]++; - MarkBufferDirty(metabuf); } + MarkBufferDirty(metabuf); + /* * for new overflow page, we don't need to explicitly set the bit in * bitmap page, as by default that will be set to "in use". From a2a22057617dc84b500f85938947c125183f1289 Mon Sep 17 00:00:00 2001 From: Stephen Frost Date: Fri, 2 Feb 2018 05:30:04 -0500 Subject: [PATCH 0936/1087] Improve ALTER TABLE synopsis MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add to the ALTER TABLE synopsis the definitions of partition_bound_spec, column_constraint, index_parameters and exclude_element. Initial patch by Lætitia Avrot, with further improvements by Amit Langote and Thomas Munro. Discussion: https://postgr.es/m/flat/27ec4df3-d1ab-3411-f87f-647f944897e1%40lab.ntt.co.jp --- doc/src/sgml/ref/alter_table.sgml | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 286c7a8589..2b514b7606 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -85,6 +85,27 @@ ALTER TABLE [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING } +and partition_bound_spec is: + +IN ( { numeric_literal | string_literal | NULL } [, ...] ) | +FROM ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] ) + TO ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] ) | +WITH ( MODULUS numeric_literal, REMAINDER numeric_literal ) + +and column_constraint is: + +[ CONSTRAINT constraint_name ] +{ NOT NULL | + NULL | + CHECK ( expression ) [ NO INHERIT ] | + DEFAULT default_expr | + GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] | + UNIQUE index_parameters | + PRIMARY KEY index_parameters | + REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] + [ ON DELETE action ] [ ON UPDATE action ] } +[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] + and table_constraint is: [ CONSTRAINT constraint_name ] @@ -101,6 +122,16 @@ ALTER TABLE [ IF EXISTS ] name [ CONSTRAINT constraint_name ] { UNIQUE | PRIMARY KEY } USING INDEX index_name [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] + +index_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are: + +[ WITH ( storage_parameter [= value] [, ... ] ) ] +[ USING INDEX TABLESPACE tablespace_name ] + +exclude_element in an EXCLUDE constraint is: + +{ column_name | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] + From 9222c0d9ed9794d54fc3f5101498829eaec9e799 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 2 Feb 2018 09:00:59 -0500 Subject: [PATCH 0937/1087] Add new function WaitForParallelWorkersToAttach.
Once this function has been called, we know that all workers have started and attached to their error queues -- so if any of them subsequently exit uncleanly, we'll be sure to throw an ERROR promptly. Otherwise, users of the ParallelContext machinery must be careful not to wait forever for a worker that has failed to start. Parallel query manages to work without needing this for reasons explained in new comments added by this patch, but it's a useful primitive for other parallel operations, such as the pending patch to make creating a btree index run in parallel. Amit Kapila, revised by me. Additional review by Peter Geoghegan. Discussion: http://postgr.es/m/CAA4eK1+e2MzyouF5bg=OtyhDSX+=Ao=3htN=T-r_6s3gCtKFiw@mail.gmail.com --- src/backend/access/transam/parallel.c | 152 +++++++++++++++++++++++-- src/backend/executor/nodeGather.c | 9 +- src/backend/executor/nodeGatherMerge.c | 9 +- src/include/access/parallel.h | 4 +- 4 files changed, 163 insertions(+), 11 deletions(-) diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index 54d9ea7be0..5b45b07e7c 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -437,10 +437,11 @@ ReinitializeParallelDSM(ParallelContext *pcxt) WaitForParallelWorkersToFinish(pcxt); WaitForParallelWorkersToExit(pcxt); pcxt->nworkers_launched = 0; - if (pcxt->any_message_received) + if (pcxt->known_attached_workers) { - pfree(pcxt->any_message_received); - pcxt->any_message_received = NULL; + pfree(pcxt->known_attached_workers); + pcxt->known_attached_workers = NULL; + pcxt->nknown_attached_workers = 0; } } @@ -542,16 +543,147 @@ LaunchParallelWorkers(ParallelContext *pcxt) /* * Now that nworkers_launched has taken its final value, we can initialize - * any_message_received. + * known_attached_workers. */ if (pcxt->nworkers_launched > 0) - pcxt->any_message_received = + { + pcxt->known_attached_workers = palloc0(sizeof(bool) * pcxt->nworkers_launched); + pcxt->nknown_attached_workers = 0; + } /* Restore previous memory context. */ MemoryContextSwitchTo(oldcontext); } +/* + * Wait for all workers to attach to their error queues, and throw an error if + * any worker fails to do this. + * + * Callers can assume that if this function returns successfully, then the + * number of workers given by pcxt->nworkers_launched have initialized and + * attached to their error queues. Whether or not these workers are guaranteed + * to still be running depends on what code the caller asked them to run; + * this function does not guarantee that they have not exited. However, it + * does guarantee that any workers which exited must have done so cleanly and + * after successfully performing the work with which they were tasked. + * + * If this function is not called, then some of the workers that were launched + * may not have been started due to a fork() failure, or may have exited during + * early startup prior to attaching to the error queue, so nworkers_launched + * cannot be viewed as completely reliable. It will never be less than the + * number of workers which actually started, but it might be more. Any workers + * that failed to start will still be discovered by + * WaitForParallelWorkersToFinish and an error will be thrown at that time, + * provided that function is eventually reached. + * + * In general, the leader process should do as much work as possible before + * calling this function. 
fork() failures and other early-startup failures + * are very uncommon, and having the leader sit idle when it could be doing + * useful work is undesirable. However, if the leader needs to wait for + * all of its workers or for a specific worker, it may want to call this + * function before doing so. If not, it must make some other provision for + * the failure-to-start case, lest it wait forever. On the other hand, a + * leader which never waits for a worker that might not be started yet, or + * at least never does so prior to WaitForParallelWorkersToFinish(), need not + * call this function at all. + */ +void +WaitForParallelWorkersToAttach(ParallelContext *pcxt) +{ + int i; + + /* Skip this if we have no launched workers. */ + if (pcxt->nworkers_launched == 0) + return; + + for (;;) + { + /* + * This will process any parallel messages that are pending and it may + * also throw an error propagated from a worker. + */ + CHECK_FOR_INTERRUPTS(); + + for (i = 0; i < pcxt->nworkers_launched; ++i) + { + BgwHandleStatus status; + shm_mq *mq; + int rc; + pid_t pid; + + if (pcxt->known_attached_workers[i]) + continue; + + /* + * If error_mqh is NULL, then the worker has already exited + * cleanly. + */ + if (pcxt->worker[i].error_mqh == NULL) + { + pcxt->known_attached_workers[i] = true; + ++pcxt->nknown_attached_workers; + continue; + } + + status = GetBackgroundWorkerPid(pcxt->worker[i].bgwhandle, &pid); + if (status == BGWH_STARTED) + { + /* Has the worker attached to the error queue? */ + mq = shm_mq_get_queue(pcxt->worker[i].error_mqh); + if (shm_mq_get_sender(mq) != NULL) + { + /* Yes, so it is known to be attached. */ + pcxt->known_attached_workers[i] = true; + ++pcxt->nknown_attached_workers; + } + } + else if (status == BGWH_STOPPED) + { + /* + * If the worker stopped without attaching to the error queue, + * throw an error. + */ + mq = shm_mq_get_queue(pcxt->worker[i].error_mqh); + if (shm_mq_get_sender(mq) == NULL) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("parallel worker failed to initialize"), + errhint("More details may be available in the server log."))); + + pcxt->known_attached_workers[i] = true; + ++pcxt->nknown_attached_workers; + } + else + { + /* + * Worker not yet started, so we must wait. The postmaster + * will notify us if the worker's state changes. Our latch + * might also get set for some other reason, but if so we'll + * just end up waiting for the same worker again. + */ + rc = WaitLatch(MyLatch, + WL_LATCH_SET | WL_POSTMASTER_DEATH, + -1, WAIT_EVENT_BGWORKER_STARTUP); + + /* emergency bailout if postmaster has died */ + if (rc & WL_POSTMASTER_DEATH) + proc_exit(1); + + if (rc & WL_LATCH_SET) + ResetLatch(MyLatch); + } + } + + /* If all workers are known to have started, we're done. */ + if (pcxt->nknown_attached_workers >= pcxt->nworkers_launched) + { + Assert(pcxt->nknown_attached_workers == pcxt->nworkers_launched); + break; + } + } +} + /* * Wait for all workers to finish computing. 
* @@ -589,7 +721,7 @@ WaitForParallelWorkersToFinish(ParallelContext *pcxt) */ if (pcxt->worker[i].error_mqh == NULL) ++nfinished; - else if (pcxt->any_message_received[i]) + else if (pcxt->known_attached_workers[i]) { anyone_alive = true; break; @@ -909,8 +1041,12 @@ HandleParallelMessage(ParallelContext *pcxt, int i, StringInfo msg) { char msgtype; - if (pcxt->any_message_received != NULL) - pcxt->any_message_received[i] = true; + if (pcxt->known_attached_workers != NULL && + !pcxt->known_attached_workers[i]) + { + pcxt->known_attached_workers[i] = true; + pcxt->nknown_attached_workers++; + } msgtype = pq_getmsgbyte(msg); diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 89266b5371..58eadd45b8 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -312,7 +312,14 @@ gather_readnext(GatherState *gatherstate) /* Check for async events, particularly messages from workers. */ CHECK_FOR_INTERRUPTS(); - /* Attempt to read a tuple, but don't block if none is available. */ + /* + * Attempt to read a tuple, but don't block if none is available. + * + * Note that TupleQueueReaderNext will just return NULL for a worker + * which fails to initialize. We'll treat that worker as having + * produced no tuples; WaitForParallelWorkersToFinish will error out + * when we get there. + */ Assert(gatherstate->nextreader < gatherstate->nreaders); reader = gatherstate->reader[gatherstate->nextreader]; tup = TupleQueueReaderNext(reader, true, &readerdone); diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index a3e34c6980..6858c91e8c 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -710,7 +710,14 @@ gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait, /* Check for async events, particularly messages from workers. */ CHECK_FOR_INTERRUPTS(); - /* Attempt to read a tuple. */ + /* + * Attempt to read a tuple. + * + * Note that TupleQueueReaderNext will just return NULL for a worker which + * fails to initialize. We'll treat that worker as having produced no + * tuples; WaitForParallelWorkersToFinish will error out when we get + * there. 
+ */ reader = gm_state->reader[nreader - 1]; tup = TupleQueueReaderNext(reader, nowait, done); diff --git a/src/include/access/parallel.h b/src/include/access/parallel.h index 32c2e32bea..d0c218b185 100644 --- a/src/include/access/parallel.h +++ b/src/include/access/parallel.h @@ -43,7 +43,8 @@ typedef struct ParallelContext void *private_memory; shm_toc *toc; ParallelWorkerInfo *worker; - bool *any_message_received; + int nknown_attached_workers; + bool *known_attached_workers; } ParallelContext; typedef struct ParallelWorkerContext @@ -62,6 +63,7 @@ extern ParallelContext *CreateParallelContext(const char *library_name, const ch extern void InitializeParallelDSM(ParallelContext *pcxt); extern void ReinitializeParallelDSM(ParallelContext *pcxt); extern void LaunchParallelWorkers(ParallelContext *pcxt); +extern void WaitForParallelWorkersToAttach(ParallelContext *pcxt); extern void WaitForParallelWorkersToFinish(ParallelContext *pcxt); extern void DestroyParallelContext(ParallelContext *pcxt); extern bool ParallelContextActive(void); From 9aef173163ae68c6b241e4c9bbb375c6baa71c60 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 2 Feb 2018 09:23:42 -0500 Subject: [PATCH 0938/1087] Refactor code for partition bound searching Remove partition_bound_cmp() and partition_bound_bsearch(), whose void * argument could be, depending on the situation, of any of three different types: PartitionBoundSpec *, PartitionRangeBound *, Datum *. Instead, introduce separate bound-searching functions for each situation: partition_list_bsearch, partition_range_bsearch, partition_range_datum_bsearch, and partition_hash_bsearch. This requires duplicating the code for binary search, but it makes the code much more type safe, involves fewer branches at runtime, and at least in my opinion, is much easier to understand. Along the way, add an option to partition_range_datum_bsearch allowing the number of keys to be specified, so that we can search for partitions based on a prefix of the full list of partition keys. This is important for pending work to improve partition pruning. Amit Langote, per a suggestion from me. 
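All four new functions share one search shape: return the index of the greatest bound that is less than or equal to the probe, or -1 if every bound is greater. Reduced to plain integers, a self-contained sketch of that shape (the function name is invented; the loop mirrors the committed pattern):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Find the greatest index whose element is <= probe in a sorted array
 * of distinct values, or -1 if all elements are greater.  Starting lo
 * at -1 and rounding mid upward keeps the invariant that elems[lo] is
 * <= probe whenever lo >= 0, with no separate emptiness check. */
static int
greatest_le_bsearch(const int *elems, int nelems, int probe, bool *is_equal)
{
	int			lo = -1;
	int			hi = nelems - 1;

	*is_equal = false;
	while (lo < hi)
	{
		int			mid = (lo + hi + 1) / 2;

		if (elems[mid] <= probe)
		{
			lo = mid;
			*is_equal = (elems[mid] == probe);
			if (*is_equal)
				break;
		}
		else
			hi = mid - 1;
	}
	return lo;
}

int
main(void)
{
	bool		eq;
	int			bounds[] = {10, 20, 30};

	assert(greatest_le_bsearch(bounds, 3, 5, &eq) == -1);
	assert(greatest_le_bsearch(bounds, 3, 20, &eq) == 1 && eq);
	assert(greatest_le_bsearch(bounds, 3, 25, &eq) == 1 && !eq);
	printf("ok\n");
	return 0;
}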
Discussion: http://postgr.es/m/CA+TgmoaVLDLc8=YESRwD32gPhodU_ELmXyKs77gveiYp+JE4vQ@mail.gmail.com --- src/backend/catalog/partition.c | 265 ++++++++++++++++++++------------ 1 file changed, 170 insertions(+), 95 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 45945511f0..31c80c7f1a 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -170,14 +170,21 @@ static int32 partition_rbound_cmp(PartitionKey key, bool lower1, PartitionRangeBound *b2); static int32 partition_rbound_datum_cmp(PartitionKey key, Datum *rb_datums, PartitionRangeDatumKind *rb_kind, - Datum *tuple_datums); + Datum *tuple_datums, int n_tuple_datums); -static int32 partition_bound_cmp(PartitionKey key, - PartitionBoundInfo boundinfo, - int offset, void *probe, bool probe_is_bound); -static int partition_bound_bsearch(PartitionKey key, +static int partition_list_bsearch(PartitionKey key, + PartitionBoundInfo boundinfo, + Datum value, bool *is_equal); +static int partition_range_bsearch(PartitionKey key, PartitionBoundInfo boundinfo, - void *probe, bool probe_is_bound, bool *is_equal); + PartitionRangeBound *probe, bool *is_equal); +static int partition_range_datum_bsearch(PartitionKey key, + PartitionBoundInfo boundinfo, + int nvalues, Datum *values, bool *is_equal); +static int partition_hash_bsearch(PartitionKey key, + PartitionBoundInfo boundinfo, + int modulus, int remainder); + static int get_partition_bound_num_indexes(PartitionBoundInfo b); static int get_greatest_modulus(PartitionBoundInfo b); static uint64 compute_hash_value(PartitionKey key, Datum *values, bool *isnull); @@ -981,8 +988,7 @@ check_new_partition_bound(char *relname, Relation parent, int greatest_modulus; int remainder; int offset; - bool equal, - valid_modulus = true; + bool valid_modulus = true; int prev_modulus, /* Previous largest modulus */ next_modulus; /* Next largest modulus */ @@ -995,12 +1001,13 @@ check_new_partition_bound(char *relname, Relation parent, * modulus 10 and a partition with modulus 15, because 10 * is not a factor of 15. * - * Get greatest bound in array boundinfo->datums which is - * less than or equal to spec->modulus and - * spec->remainder. + * Get the greatest (modulus, remainder) pair contained in + * boundinfo->datums that is less than or equal to the + * (spec->modulus, spec->remainder) pair. */ - offset = partition_bound_bsearch(key, boundinfo, spec, - true, &equal); + offset = partition_hash_bsearch(key, boundinfo, + spec->modulus, + spec->remainder); if (offset < 0) { next_modulus = DatumGetInt32(datums[0][0]); @@ -1074,9 +1081,9 @@ check_new_partition_bound(char *relname, Relation parent, int offset; bool equal; - offset = partition_bound_bsearch(key, boundinfo, - &val->constvalue, - true, &equal); + offset = partition_list_bsearch(key, boundinfo, + val->constvalue, + &equal); if (offset >= 0 && equal) { overlap = true; @@ -1148,8 +1155,8 @@ check_new_partition_bound(char *relname, Relation parent, * since the index array is initialised with an extra -1 * at the end. 
*/ - offset = partition_bound_bsearch(key, boundinfo, lower, - true, &equal); + offset = partition_range_bsearch(key, boundinfo, lower, + &equal); if (boundinfo->indexes[offset + 1] < 0) { @@ -1162,10 +1169,16 @@ check_new_partition_bound(char *relname, Relation parent, if (offset + 1 < boundinfo->ndatums) { int32 cmpval; + Datum *datums; + PartitionRangeDatumKind *kind; + bool is_lower; + + datums = boundinfo->datums[offset + 1]; + kind = boundinfo->kind[offset + 1]; + is_lower = (boundinfo->indexes[offset + 1] == -1); - cmpval = partition_bound_cmp(key, boundinfo, - offset + 1, upper, - true); + cmpval = partition_rbound_cmp(key, datums, kind, + is_lower, upper); if (cmpval < 0) { /* @@ -2574,11 +2587,9 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) { bool equal = false; - bound_offset = partition_bound_bsearch(key, - partdesc->boundinfo, - values, - false, - &equal); + bound_offset = partition_list_bsearch(key, + partdesc->boundinfo, + values[0], &equal); if (bound_offset >= 0 && equal) part_index = partdesc->boundinfo->indexes[bound_offset]; } @@ -2605,12 +2616,11 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) if (!range_partkey_has_null) { - bound_offset = partition_bound_bsearch(key, - partdesc->boundinfo, - values, - false, - &equal); - + bound_offset = partition_range_datum_bsearch(key, + partdesc->boundinfo, + key->partnatts, + values, + &equal); /* * The bound at bound_offset is less than or equal to the * tuple value, so the bound at offset+1 is the upper @@ -2881,12 +2891,12 @@ partition_rbound_cmp(PartitionKey key, static int32 partition_rbound_datum_cmp(PartitionKey key, Datum *rb_datums, PartitionRangeDatumKind *rb_kind, - Datum *tuple_datums) + Datum *tuple_datums, int n_tuple_datums) { int i; int32 cmpval = -1; - for (i = 0; i < key->partnatts; i++) + for (i = 0; i < n_tuple_datums; i++) { if (rb_kind[i] == PARTITION_RANGE_DATUM_MINVALUE) return -1; @@ -2905,84 +2915,104 @@ partition_rbound_datum_cmp(PartitionKey key, } /* - * partition_bound_cmp + * partition_list_bsearch + * Returns the index of the greatest bound datum that is less than equal + * to the given value or -1 if all of the bound datums are greater * - * Return whether the bound at offset in boundinfo is <, =, or > the argument - * specified in *probe. + * *is_equal is set to true if the bound datum at the returned index is equal + * to the input value. 
*/ -static int32 -partition_bound_cmp(PartitionKey key, PartitionBoundInfo boundinfo, - int offset, void *probe, bool probe_is_bound) +static int +partition_list_bsearch(PartitionKey key, + PartitionBoundInfo boundinfo, + Datum value, bool *is_equal) { - Datum *bound_datums = boundinfo->datums[offset]; - int32 cmpval = -1; + int lo, + hi, + mid; - switch (key->strategy) + lo = -1; + hi = boundinfo->ndatums - 1; + while (lo < hi) { - case PARTITION_STRATEGY_HASH: - { - PartitionBoundSpec *spec = (PartitionBoundSpec *) probe; + int32 cmpval; - cmpval = partition_hbound_cmp(DatumGetInt32(bound_datums[0]), - DatumGetInt32(bound_datums[1]), - spec->modulus, spec->remainder); + mid = (lo + hi + 1) / 2; + cmpval = DatumGetInt32(FunctionCall2Coll(&key->partsupfunc[0], + key->partcollation[0], + boundinfo->datums[mid][0], + value)); + if (cmpval <= 0) + { + lo = mid; + *is_equal = (cmpval == 0); + if (*is_equal) break; - } - case PARTITION_STRATEGY_LIST: - cmpval = DatumGetInt32(FunctionCall2Coll(&key->partsupfunc[0], - key->partcollation[0], - bound_datums[0], - *(Datum *) probe)); - break; + } + else + hi = mid - 1; + } - case PARTITION_STRATEGY_RANGE: - { - PartitionRangeDatumKind *kind = boundinfo->kind[offset]; + return lo; +} - if (probe_is_bound) - { - /* - * We need to pass whether the existing bound is a lower - * bound, so that two equal-valued lower and upper bounds - * are not regarded equal. - */ - bool lower = boundinfo->indexes[offset] < 0; +/* + * partition_range_bsearch + * Returns the index of the greatest range bound that is less than or + * equal to the given range bound or -1 if all of the range bounds are + * greater + * + * *is_equal is set to true if the range bound at the returned index is equal + * to the input range bound + */ +static int +partition_range_bsearch(PartitionKey key, + PartitionBoundInfo boundinfo, + PartitionRangeBound *probe, bool *is_equal) +{ + int lo, + hi, + mid; - cmpval = partition_rbound_cmp(key, - bound_datums, kind, lower, - (PartitionRangeBound *) probe); - } - else - cmpval = partition_rbound_datum_cmp(key, - bound_datums, kind, - (Datum *) probe); - break; - } + lo = -1; + hi = boundinfo->ndatums - 1; + while (lo < hi) + { + int32 cmpval; - default: - elog(ERROR, "unexpected partition strategy: %d", - (int) key->strategy); + mid = (lo + hi + 1) / 2; + cmpval = partition_rbound_cmp(key, + boundinfo->datums[mid], + boundinfo->kind[mid], + (boundinfo->indexes[mid] == -1), + probe); + if (cmpval <= 0) + { + lo = mid; + *is_equal = (cmpval == 0); + + if (*is_equal) + break; + } + else + hi = mid - 1; } - return cmpval; + return lo; } /* - * Binary search on a collection of partition bounds. Returns greatest - * bound in array boundinfo->datums which is less than or equal to *probe. - * If all bounds in the array are greater than *probe, -1 is returned. - * - * *probe could either be a partition bound or a Datum array representing - * the partition key of a tuple being routed; probe_is_bound tells which. - * We pass that down to the comparison function so that it can interpret the - * contents of *probe accordingly. + * partition_range_bsearch + * Returns the index of the greatest range bound that is less than or + * equal to the given tuple or -1 if all of the range bounds are greater * - * *is_equal is set to whether the bound at the returned index is equal with - * *probe. + * *is_equal is set to true if the range bound at the returned index is equal + * to the input tuple. 
*/ static int -partition_bound_bsearch(PartitionKey key, PartitionBoundInfo boundinfo, - void *probe, bool probe_is_bound, bool *is_equal) +partition_range_datum_bsearch(PartitionKey key, + PartitionBoundInfo boundinfo, + int nvalues, Datum *values, bool *is_equal) { int lo, hi, @@ -2995,8 +3025,11 @@ partition_bound_bsearch(PartitionKey key, PartitionBoundInfo boundinfo, int32 cmpval; mid = (lo + hi + 1) / 2; - cmpval = partition_bound_cmp(key, boundinfo, mid, probe, - probe_is_bound); + cmpval = partition_rbound_datum_cmp(key, + boundinfo->datums[mid], + boundinfo->kind[mid], + values, + nvalues); if (cmpval <= 0) { lo = mid; @@ -3012,6 +3045,48 @@ partition_bound_bsearch(PartitionKey key, PartitionBoundInfo boundinfo, return lo; } +/* + * partition_hash_bsearch + * Returns the index of the greatest (modulus, remainder) pair that is + * less than or equal to the given (modulus, remainder) pair or -1 if + * all of them are greater + */ +static int +partition_hash_bsearch(PartitionKey key, + PartitionBoundInfo boundinfo, + int modulus, int remainder) +{ + int lo, + hi, + mid; + + lo = -1; + hi = boundinfo->ndatums - 1; + while (lo < hi) + { + int32 cmpval, + bound_modulus, + bound_remainder; + + mid = (lo + hi + 1) / 2; + bound_modulus = DatumGetInt32(boundinfo->datums[mid][0]); + bound_remainder = DatumGetInt32(boundinfo->datums[mid][1]); + cmpval = partition_hbound_cmp(bound_modulus, bound_remainder, + modulus, remainder); + if (cmpval <= 0) + { + lo = mid; + + if (cmpval == 0) + break; + } + else + hi = mid - 1; + } + + return lo; +} + /* * get_default_oid_from_partdesc * From 9da0cc35284bdbe8d442d732963303ff0e0a40bc Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 2 Feb 2018 13:25:55 -0500 Subject: [PATCH 0939/1087] Support parallel btree index builds. To make this work, tuplesort.c and logtape.c must also support parallelism, so this patch adds that infrastructure and then applies it to the particular case of parallel btree index builds. Testing to date shows that this can often be 2-3x faster than a serial index build. The model for deciding how many workers to use is fairly primitive at present, but it's better than not having the feature. We can refine it as we get more experience. Peter Geoghegan with some help from Rushabh Lathia. While Heikki Linnakangas is not an author of this patch, he wrote other patches without which this feature would not have been possible, and therefore the release notes should possibly credit him as an author of this feature. Reviewed by Claudio Freire, Heikki Linnakangas, Thomas Munro, Tels, Amit Kapila, me. 
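For orientation, the leader side of such a build drives the parallel infrastructure extended a few commits earlier. A simplified, hedged sketch of the call sequence (not the actual nbtsort.c code; context creation, tuplesort coordination, and error paths are omitted, and the function name is made up):

#include "postgres.h"

#include "access/parallel.h"

/* Hypothetical skeleton of a leader that launches workers and later
 * sleeps on worker-maintained shared state, using the primitive from
 * the WaitForParallelWorkersToAttach commit above so that a worker
 * which dies before attaching to its error queue cannot cause an
 * indefinite wait. */
static void
run_parallel_build(ParallelContext *pcxt)
{
	InitializeParallelDSM(pcxt);	/* set up shared memory and ToC */
	LaunchParallelWorkers(pcxt);

	/* nworkers_launched may still overcount workers that failed early */
	if (pcxt->nworkers_launched > 0)
	{
		/* ... leader does its own share of the scan/sort first, since
		 * fork() failures are rare and it should not sit idle ... */

		/*
		 * Before blocking on worker-maintained state (nbtsort.c waits
		 * on the workersdonecv condition variable), make sure every
		 * launched worker really attached; this errors out promptly if
		 * one died during early startup.
		 */
		WaitForParallelWorkersToAttach(pcxt);

		/* ... now safe to sleep until workers signal completion ... */
	}

	WaitForParallelWorkersToFinish(pcxt);
	DestroyParallelContext(pcxt);	/* pcxt was created by the caller */
}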
Discussion: http://postgr.es/m/CAM3SWZQKM=Pzc=CAHzRixKjp2eO5Q0Jg1SoFQqeXFQ647JiwqQ@mail.gmail.com Discussion: http://postgr.es/m/CAH2-Wz=AxWqDoVvGU7dq856S4r6sJAj6DBn7VMtigkB33N5eyg@mail.gmail.com --- contrib/bloom/blinsert.c | 3 +- doc/src/sgml/config.sgml | 44 +- doc/src/sgml/monitoring.sgml | 12 +- doc/src/sgml/ref/create_index.sgml | 58 ++ doc/src/sgml/ref/create_table.sgml | 4 +- src/backend/access/brin/brin.c | 4 +- src/backend/access/gin/gininsert.c | 2 +- src/backend/access/gist/gistbuild.c | 2 +- src/backend/access/hash/hash.c | 2 +- src/backend/access/hash/hashsort.c | 1 + src/backend/access/heap/heapam.c | 28 +- src/backend/access/nbtree/nbtree.c | 134 +-- src/backend/access/nbtree/nbtsort.c | 878 +++++++++++++++++- src/backend/access/spgist/spginsert.c | 3 +- src/backend/access/transam/parallel.c | 12 +- src/backend/bootstrap/bootstrap.c | 2 +- src/backend/catalog/heap.c | 2 +- src/backend/catalog/index.c | 123 ++- src/backend/catalog/toasting.c | 1 + src/backend/commands/cluster.c | 3 +- src/backend/commands/indexcmds.c | 7 +- src/backend/executor/execParallel.c | 2 +- src/backend/executor/nodeAgg.c | 6 +- src/backend/executor/nodeSort.c | 2 +- src/backend/optimizer/path/allpaths.c | 18 +- src/backend/optimizer/path/costsize.c | 4 +- src/backend/optimizer/plan/planner.c | 136 +++ src/backend/postmaster/pgstat.c | 3 + src/backend/storage/file/buffile.c | 61 +- src/backend/storage/file/fd.c | 10 + src/backend/utils/adt/orderedsetaggs.c | 2 + src/backend/utils/init/globals.c | 1 + src/backend/utils/misc/guc.c | 10 + src/backend/utils/misc/postgresql.conf.sample | 3 +- src/backend/utils/probes.d | 2 +- src/backend/utils/sort/logtape.c | 199 +++- src/backend/utils/sort/tuplesort.c | 595 ++++++++++-- src/include/access/nbtree.h | 14 +- src/include/access/parallel.h | 4 +- src/include/access/relscan.h | 1 + src/include/catalog/index.h | 9 +- src/include/miscadmin.h | 1 + src/include/nodes/execnodes.h | 6 +- src/include/optimizer/paths.h | 2 +- src/include/optimizer/planner.h | 1 + src/include/pgstat.h | 1 + src/include/storage/buffile.h | 2 + src/include/storage/fd.h | 1 + src/include/utils/logtape.h | 39 +- src/include/utils/tuplesort.h | 132 ++- src/tools/pgindent/typedefs.list | 6 + 51 files changed, 2237 insertions(+), 361 deletions(-) diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c index bfee244aa1..d231e5331f 100644 --- a/contrib/bloom/blinsert.c +++ b/contrib/bloom/blinsert.c @@ -135,7 +135,8 @@ blbuild(Relation heap, Relation index, IndexInfo *indexInfo) /* Do the heap scan */ reltuples = IndexBuildHeapScan(heap, index, indexInfo, true, - bloomBuildCallback, (void *) &buildstate); + bloomBuildCallback, (void *) &buildstate, + NULL); /* * There are could be some items in cached page. Flush this page if diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index f951ddb41e..c45979dee4 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -2022,7 +2022,8 @@ include_dir 'conf.d' When changing this value, consider also adjusting - and + , + , and . @@ -2070,6 +2071,44 @@ include_dir 'conf.d' + + max_parallel_maintenance_workers (integer) + + max_parallel_maintenance_workers configuration parameter + + + + + Sets the maximum number of parallel workers that can be + started by a single utility command. Currently, the only + parallel utility command that supports the use of parallel + workers is CREATE INDEX, and only when + building a B-tree index. Parallel workers are taken from the + pool of processes established by , limited by . 
Note that the requested + number of workers may not actually be available at runtime. + If this occurs, the utility operation will run with fewer + workers than expected. The default value is 2. Setting this + value to 0 disables the use of parallel workers by utility + commands. + + + + Note that parallel utility commands should not consume + substantially more memory than equivalent non-parallel + operations. This strategy differs from that of parallel + query, where resource limits generally apply per worker + process. Parallel utility commands treat the resource limit + maintenance_work_mem as a limit to be applied to + the entire utility command, regardless of the number of + parallel worker processes. However, parallel utility + commands may still consume substantially more CPU resources + and I/O bandwidth. + + + + max_parallel_workers (integer) @@ -2079,8 +2118,9 @@ include_dir 'conf.d' Sets the maximum number of workers that the system can support for - parallel queries. The default value is 8. When increasing or + parallel operations. The default value is 8. When increasing or decreasing this value, consider also adjusting + and . Also, note that a setting for this value which is higher than will have no effect, diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml index 8a9793644f..e138d1ef07 100644 --- a/doc/src/sgml/monitoring.sgml +++ b/doc/src/sgml/monitoring.sgml @@ -1263,7 +1263,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser Waiting in an extension. - IPC + IPC BgWorkerShutdown Waiting for background worker to shut down. @@ -1371,6 +1371,10 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser ParallelBitmapScan Waiting for parallel bitmap scan to become initialized. + + ParallelCreateIndexScan + Waiting for parallel CREATE INDEX workers to finish heap scan. + ProcArrayGroupUpdate Waiting for group leader to clear transaction id at transaction end. @@ -3900,13 +3904,15 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid, sort-start - (int, bool, int, int, bool) + (int, bool, int, int, bool, int) Probe that fires when a sort operation is started. arg0 indicates heap, index or datum sort. arg1 is true for unique-value enforcement. arg2 is the number of key columns. arg3 is the number of kilobytes of work memory allowed. - arg4 is true if random access to the sort result is required. + arg4 is true if random access to the sort result is required. + arg5 indicates serial when 0, parallel worker when + 1, or parallel leader when 2. sort-done diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index 5137fe6383..f464557de8 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -599,6 +599,64 @@ Indexes: which would drive the machine into swapping. + + PostgreSQL can build indexes while + leveraging multiple CPUs in order to process the table rows faster. + This feature is known as parallel index + build. For index methods that support building indexes + in parallel (currently, only B-tree), + maintenance_work_mem specifies the maximum + amount of memory that can be used by each index build operation as + a whole, regardless of how many worker processes were started. + Generally, a cost model automatically determines how many worker + processes should be requested, if any. + + + + Parallel index builds may benefit from increasing + maintenance_work_mem where an equivalent serial + index build will see little or no benefit. 
Note that + maintenance_work_mem may influence the number of + worker processes requested, since parallel workers must have at + least a 32MB share of the total + maintenance_work_mem budget. There must also be + a remaining 32MB share for the leader process. + Increasing + may allow more workers to be used, which will reduce the time + needed for index creation, so long as the index build is not + already I/O bound. Of course, there should also be sufficient + CPU capacity that would otherwise lie idle. + + + + Setting a value for parallel_workers via directly controls how many parallel + worker processes will be requested by a CREATE + INDEX against the table. This bypasses the cost model + completely, and prevents maintenance_work_mem + from affecting how many parallel workers are requested. Setting + parallel_workers to 0 via ALTER + TABLE will disable parallel index builds on the table in + all cases. + + + + + You might want to reset parallel_workers after + setting it as part of tuning an index build. This avoids + inadvertent changes to query plans, since + parallel_workers affects + all parallel table scans. + + + + + While CREATE INDEX with the + CONCURRENTLY option supports parallel builds + without special restrictions, only the first table scan is actually + performed in parallel. + + Use to remove an index. diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index a0c9a6d257..d2df40d543 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -1228,8 +1228,8 @@ WITH ( MODULUS numeric_literal, REM This sets the number of workers that should be used to assist a parallel scan of this table. If not set, the system will determine a value based on the relation size. The actual number of workers chosen by the planner - may be less, for example due to - the setting of . + or by utility statements that use parallel scans may be less, for example + due to the setting of . diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c index 5027872267..68b3371665 100644 --- a/src/backend/access/brin/brin.c +++ b/src/backend/access/brin/brin.c @@ -706,7 +706,7 @@ brinbuild(Relation heap, Relation index, IndexInfo *indexInfo) * heap blocks in physical order. */ reltuples = IndexBuildHeapScan(heap, index, indexInfo, false, - brinbuildCallback, (void *) state); + brinbuildCallback, (void *) state, NULL); /* process the final batch */ form_and_insert_tuple(state); @@ -1205,7 +1205,7 @@ summarize_range(IndexInfo *indexInfo, BrinBuildState *state, Relation heapRel, state->bs_currRangeStart = heapBlk; IndexBuildHeapRangeScan(heapRel, state->bs_irel, indexInfo, false, true, heapBlk, scanNumBlks, - brinbuildCallback, (void *) state); + brinbuildCallback, (void *) state, NULL); /* * Now we update the values obtained by the scan with the placeholder diff --git a/src/backend/access/gin/gininsert.c b/src/backend/access/gin/gininsert.c index 473cc3d6b3..23f7285547 100644 --- a/src/backend/access/gin/gininsert.c +++ b/src/backend/access/gin/gininsert.c @@ -391,7 +391,7 @@ ginbuild(Relation heap, Relation index, IndexInfo *indexInfo) * prefers to receive tuples in TID order. 
*/ reltuples = IndexBuildHeapScan(heap, index, indexInfo, false, - ginBuildCallback, (void *) &buildstate); + ginBuildCallback, (void *) &buildstate, NULL); /* dump remaining entries to the index */ oldCtx = MemoryContextSwitchTo(buildstate.tmpCtx); diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c index d22318a5f1..434f15f014 100644 --- a/src/backend/access/gist/gistbuild.c +++ b/src/backend/access/gist/gistbuild.c @@ -203,7 +203,7 @@ gistbuild(Relation heap, Relation index, IndexInfo *indexInfo) * Do the heap scan. */ reltuples = IndexBuildHeapScan(heap, index, indexInfo, true, - gistBuildCallback, (void *) &buildstate); + gistBuildCallback, (void *) &buildstate, NULL); /* * If buffering was used, flush out all the tuples that are still in the diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c index 718e2be1cd..e337439ada 100644 --- a/src/backend/access/hash/hash.c +++ b/src/backend/access/hash/hash.c @@ -159,7 +159,7 @@ hashbuild(Relation heap, Relation index, IndexInfo *indexInfo) /* do the heap scan */ reltuples = IndexBuildHeapScan(heap, index, indexInfo, true, - hashbuildCallback, (void *) &buildstate); + hashbuildCallback, (void *) &buildstate, NULL); if (buildstate.spool) { diff --git a/src/backend/access/hash/hashsort.c b/src/backend/access/hash/hashsort.c index 7d3790a473..b70964f429 100644 --- a/src/backend/access/hash/hashsort.c +++ b/src/backend/access/hash/hashsort.c @@ -82,6 +82,7 @@ _h_spoolinit(Relation heap, Relation index, uint32 num_buckets) hspool->low_mask, hspool->max_buckets, maintenance_work_mem, + NULL, false); return hspool; diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index be263850cd..8a846e7dba 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -1627,7 +1627,16 @@ heap_parallelscan_initialize(ParallelHeapScanDesc target, Relation relation, SpinLockInit(&target->phs_mutex); target->phs_startblock = InvalidBlockNumber; pg_atomic_init_u64(&target->phs_nallocated, 0); - SerializeSnapshot(snapshot, target->phs_snapshot_data); + if (IsMVCCSnapshot(snapshot)) + { + SerializeSnapshot(snapshot, target->phs_snapshot_data); + target->phs_snapshot_any = false; + } + else + { + Assert(snapshot == SnapshotAny); + target->phs_snapshot_any = true; + } } /* ---------------- @@ -1655,11 +1664,22 @@ heap_beginscan_parallel(Relation relation, ParallelHeapScanDesc parallel_scan) Snapshot snapshot; Assert(RelationGetRelid(relation) == parallel_scan->phs_relid); - snapshot = RestoreSnapshot(parallel_scan->phs_snapshot_data); - RegisterSnapshot(snapshot); + + if (!parallel_scan->phs_snapshot_any) + { + /* Snapshot was serialized -- restore it */ + snapshot = RestoreSnapshot(parallel_scan->phs_snapshot_data); + RegisterSnapshot(snapshot); + } + else + { + /* SnapshotAny passed by caller (not serialized) */ + snapshot = SnapshotAny; + } return heap_beginscan_internal(relation, snapshot, 0, NULL, parallel_scan, - true, true, true, false, false, true); + true, true, true, false, false, + !parallel_scan->phs_snapshot_any); } /* ---------------- diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c index a344c4490e..8158508d8c 100644 --- a/src/backend/access/nbtree/nbtree.c +++ b/src/backend/access/nbtree/nbtree.c @@ -21,36 +21,19 @@ #include "access/nbtree.h" #include "access/relscan.h" #include "access/xlog.h" -#include "catalog/index.h" #include "commands/vacuum.h" +#include "nodes/execnodes.h" #include "pgstat.h" 
#include "storage/condition_variable.h" #include "storage/indexfsm.h" #include "storage/ipc.h" #include "storage/lmgr.h" #include "storage/smgr.h" -#include "tcop/tcopprot.h" /* pgrminclude ignore */ #include "utils/builtins.h" #include "utils/index_selfuncs.h" #include "utils/memutils.h" -/* Working state for btbuild and its callback */ -typedef struct -{ - bool isUnique; - bool haveDead; - Relation heapRel; - BTSpool *spool; - - /* - * spool2 is needed only when the index is a unique index. Dead tuples are - * put into spool2 instead of spool in order to avoid uniqueness check. - */ - BTSpool *spool2; - double indtuples; -} BTBuildState; - /* Working state needed by btvacuumpage */ typedef struct { @@ -104,12 +87,6 @@ typedef struct BTParallelScanDescData typedef struct BTParallelScanDescData *BTParallelScanDesc; -static void btbuildCallback(Relation index, - HeapTuple htup, - Datum *values, - bool *isnull, - bool tupleIsAlive, - void *state); static void btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats, IndexBulkDeleteCallback callback, void *callback_state, BTCycleId cycleid); @@ -166,115 +143,6 @@ bthandler(PG_FUNCTION_ARGS) PG_RETURN_POINTER(amroutine); } -/* - * btbuild() -- build a new btree index. - */ -IndexBuildResult * -btbuild(Relation heap, Relation index, IndexInfo *indexInfo) -{ - IndexBuildResult *result; - double reltuples; - BTBuildState buildstate; - - buildstate.isUnique = indexInfo->ii_Unique; - buildstate.haveDead = false; - buildstate.heapRel = heap; - buildstate.spool = NULL; - buildstate.spool2 = NULL; - buildstate.indtuples = 0; - -#ifdef BTREE_BUILD_STATS - if (log_btree_build_stats) - ResetUsage(); -#endif /* BTREE_BUILD_STATS */ - - /* - * We expect to be called exactly once for any index relation. If that's - * not the case, big trouble's what we have. - */ - if (RelationGetNumberOfBlocks(index) != 0) - elog(ERROR, "index \"%s\" already contains data", - RelationGetRelationName(index)); - - buildstate.spool = _bt_spoolinit(heap, index, indexInfo->ii_Unique, false); - - /* - * If building a unique index, put dead tuples in a second spool to keep - * them out of the uniqueness check. - */ - if (indexInfo->ii_Unique) - buildstate.spool2 = _bt_spoolinit(heap, index, false, true); - - /* do the heap scan */ - reltuples = IndexBuildHeapScan(heap, index, indexInfo, true, - btbuildCallback, (void *) &buildstate); - - /* okay, all heap tuples are indexed */ - if (buildstate.spool2 && !buildstate.haveDead) - { - /* spool2 turns out to be unnecessary */ - _bt_spooldestroy(buildstate.spool2); - buildstate.spool2 = NULL; - } - - /* - * Finish the build by (1) completing the sort of the spool file, (2) - * inserting the sorted tuples into btree pages and (3) building the upper - * levels. 
- */ - _bt_leafbuild(buildstate.spool, buildstate.spool2); - _bt_spooldestroy(buildstate.spool); - if (buildstate.spool2) - _bt_spooldestroy(buildstate.spool2); - -#ifdef BTREE_BUILD_STATS - if (log_btree_build_stats) - { - ShowUsage("BTREE BUILD STATS"); - ResetUsage(); - } -#endif /* BTREE_BUILD_STATS */ - - /* - * Return statistics - */ - result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult)); - - result->heap_tuples = reltuples; - result->index_tuples = buildstate.indtuples; - - return result; -} - -/* - * Per-tuple callback from IndexBuildHeapScan - */ -static void -btbuildCallback(Relation index, - HeapTuple htup, - Datum *values, - bool *isnull, - bool tupleIsAlive, - void *state) -{ - BTBuildState *buildstate = (BTBuildState *) state; - - /* - * insert the index tuple into the appropriate spool file for subsequent - * processing - */ - if (tupleIsAlive || buildstate->spool2 == NULL) - _bt_spool(buildstate->spool, &htup->t_self, values, isnull); - else - { - /* dead tuples are put into spool2 */ - buildstate->haveDead = true; - _bt_spool(buildstate->spool2, &htup->t_self, values, isnull); - } - - buildstate->indtuples += 1; -} - /* * btbuildempty() -- build an empty btree index in the initialization fork */ diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c index f6159db1cd..521ae6e5f7 100644 --- a/src/backend/access/nbtree/nbtsort.c +++ b/src/backend/access/nbtree/nbtsort.c @@ -67,28 +67,168 @@ #include "postgres.h" #include "access/nbtree.h" +#include "access/parallel.h" +#include "access/relscan.h" +#include "access/xact.h" #include "access/xlog.h" #include "access/xloginsert.h" +#include "catalog/index.h" #include "miscadmin.h" +#include "pgstat.h" #include "storage/smgr.h" -#include "tcop/tcopprot.h" +#include "tcop/tcopprot.h" /* pgrminclude ignore */ #include "utils/rel.h" #include "utils/sortsupport.h" #include "utils/tuplesort.h" +/* Magic numbers for parallel state sharing */ +#define PARALLEL_KEY_BTREE_SHARED UINT64CONST(0xA000000000000001) +#define PARALLEL_KEY_TUPLESORT UINT64CONST(0xA000000000000002) +#define PARALLEL_KEY_TUPLESORT_SPOOL2 UINT64CONST(0xA000000000000003) + +/* + * DISABLE_LEADER_PARTICIPATION disables the leader's participation in + * parallel index builds. This may be useful as a debugging aid. +#undef DISABLE_LEADER_PARTICIPATION + */ + /* * Status record for spooling/sorting phase. (Note we may have two of * these due to the special requirements for uniqueness-checking with * dead tuples.) */ -struct BTSpool +typedef struct BTSpool { Tuplesortstate *sortstate; /* state data for tuplesort.c */ Relation heap; Relation index; bool isunique; -}; +} BTSpool; + +/* + * Status for index builds performed in parallel. This is allocated in a + * dynamic shared memory segment. Note that there is a separate tuplesort TOC + * entry, private to tuplesort.c but allocated by this module on its behalf. + */ +typedef struct BTShared +{ + /* + * These fields are not modified during the sort. They primarily exist + * for the benefit of worker processes that need to create BTSpool state + * corresponding to that used by the leader. + */ + Oid heaprelid; + Oid indexrelid; + bool isunique; + bool isconcurrent; + int scantuplesortstates; + + /* + * workersdonecv is used to monitor the progress of workers. All parallel + * participants must indicate that they are done before leader can use + * mutable state that workers maintain during scan (and before leader can + * proceed to tuplesort_performsort()). 
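The handshake this comment describes is a standard spinlock-plus-condition-variable rendezvous. As a condensed sketch of the two halves (helper names here are hypothetical; the real logic lives in _bt_parallel_scan_and_sort() and _bt_parallel_heapscan() further down in this patch):

/* Worker side: publish results under the mutex, then signal the leader. */
static void
report_participant_done(BTShared *btshared, double reltuples)
{
    SpinLockAcquire(&btshared->mutex);
    btshared->nparticipantsdone++;
    btshared->reltuples += reltuples;
    SpinLockRelease(&btshared->mutex);
    ConditionVariableSignal(&btshared->workersdonecv);
}

/* Leader side: sleep on the CV until every participant has reported. */
static void
wait_for_participants(BTShared *btshared, int nparticipants)
{
    for (;;)
    {
        bool    done;

        SpinLockAcquire(&btshared->mutex);
        done = (btshared->nparticipantsdone == nparticipants);
        SpinLockRelease(&btshared->mutex);
        if (done)
            break;
        ConditionVariableSleep(&btshared->workersdonecv,
                               WAIT_EVENT_PARALLEL_CREATE_INDEX_SCAN);
    }
    ConditionVariableCancelSleep();
}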
+ */ + ConditionVariable workersdonecv; + + /* + * mutex protects all fields before heapdesc. + * + * These fields contain status information of interest to B-Tree index + * builds that must work just the same when an index is built in parallel. + */ + slock_t mutex; + + /* + * Mutable state that is maintained by workers, and reported back to + * leader at end of parallel scan. + * + * nparticipantsdone is number of worker processes finished. + * + * reltuples is the total number of input heap tuples. + * + * havedead indicates if RECENTLY_DEAD tuples were encountered during + * build. + * + * indtuples is the total number of tuples that made it into the index. + * + * brokenhotchain indicates if any worker detected a broken HOT chain + * during build. + */ + int nparticipantsdone; + double reltuples; + bool havedead; + double indtuples; + bool brokenhotchain; + + /* + * This variable-sized field must come last. + * + * See _bt_parallel_estimate_shared(). + */ + ParallelHeapScanDescData heapdesc; +} BTShared; + +/* + * Status for leader in parallel index build. + */ +typedef struct BTLeader +{ + /* parallel context itself */ + ParallelContext *pcxt; + + /* + * nparticipanttuplesorts is the exact number of worker processes + * successfully launched, plus one leader process if it participates as a + * worker (only DISABLE_LEADER_PARTICIPATION builds avoid leader + * participating as a worker). + */ + int nparticipanttuplesorts; + + /* + * Leader process convenience pointers to shared state (leader avoids TOC + * lookups). + * + * btshared is the shared state for entire build. sharedsort is the + * shared, tuplesort-managed state passed to each process tuplesort. + * sharedsort2 is the corresponding btspool2 shared state, used only when + * building unique indexes. snapshot is the snapshot used by the scan iff + * an MVCC snapshot is required. + */ + BTShared *btshared; + Sharedsort *sharedsort; + Sharedsort *sharedsort2; + Snapshot snapshot; +} BTLeader; + +/* + * Working state for btbuild and its callback. + * + * When parallel CREATE INDEX is used, there is a BTBuildState for each + * participant. + */ +typedef struct BTBuildState +{ + bool isunique; + bool havedead; + Relation heap; + BTSpool *spool; + + /* + * spool2 is needed only when the index is a unique index. Dead tuples are + * put into spool2 instead of spool in order to avoid uniqueness check. + */ + BTSpool *spool2; + double indtuples; + + /* + * btleader is only present when a parallel index build is performed, and + * only in the leader process. (Actually, only the leader has a + * BTBuildState. Workers have their own spool and spool2, though.) + */ + BTLeader *btleader; +} BTBuildState; /* * Status record for a btree page being built. 
We have one of these @@ -128,6 +268,14 @@ typedef struct BTWriteState } BTWriteState; +static double _bt_spools_heapscan(Relation heap, Relation index, + BTBuildState *buildstate, IndexInfo *indexInfo); +static void _bt_spooldestroy(BTSpool *btspool); +static void _bt_spool(BTSpool *btspool, ItemPointer self, + Datum *values, bool *isnull); +static void _bt_leafbuild(BTSpool *btspool, BTSpool *btspool2); +static void _bt_build_callback(Relation index, HeapTuple htup, Datum *values, + bool *isnull, bool tupleIsAlive, void *state); static Page _bt_blnewpage(uint32 level); static BTPageState *_bt_pagestate(BTWriteState *wstate, uint32 level); static void _bt_slideleft(Page page); @@ -138,45 +286,219 @@ static void _bt_buildadd(BTWriteState *wstate, BTPageState *state, static void _bt_uppershutdown(BTWriteState *wstate, BTPageState *state); static void _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2); +static void _bt_begin_parallel(BTBuildState *buildstate, bool isconcurrent, + int request); +static void _bt_end_parallel(BTLeader *btleader); +static Size _bt_parallel_estimate_shared(Snapshot snapshot); +static double _bt_parallel_heapscan(BTBuildState *buildstate, + bool *brokenhotchain); +static void _bt_leader_participate_as_worker(BTBuildState *buildstate); +static void _bt_parallel_scan_and_sort(BTSpool *btspool, BTSpool *btspool2, + BTShared *btshared, Sharedsort *sharedsort, + Sharedsort *sharedsort2, int sortmem); /* - * Interface routines + * btbuild() -- build a new btree index. */ +IndexBuildResult * +btbuild(Relation heap, Relation index, IndexInfo *indexInfo) +{ + IndexBuildResult *result; + BTBuildState buildstate; + double reltuples; + +#ifdef BTREE_BUILD_STATS + if (log_btree_build_stats) + ResetUsage(); +#endif /* BTREE_BUILD_STATS */ + + buildstate.isunique = indexInfo->ii_Unique; + buildstate.havedead = false; + buildstate.heap = heap; + buildstate.spool = NULL; + buildstate.spool2 = NULL; + buildstate.indtuples = 0; + buildstate.btleader = NULL; + + /* + * We expect to be called exactly once for any index relation. If that's + * not the case, big trouble's what we have. + */ + if (RelationGetNumberOfBlocks(index) != 0) + elog(ERROR, "index \"%s\" already contains data", + RelationGetRelationName(index)); + + reltuples = _bt_spools_heapscan(heap, index, &buildstate, indexInfo); + + /* + * Finish the build by (1) completing the sort of the spool file, (2) + * inserting the sorted tuples into btree pages and (3) building the upper + * levels. Finally, it may also be necessary to end use of parallelism. + */ + _bt_leafbuild(buildstate.spool, buildstate.spool2); + _bt_spooldestroy(buildstate.spool); + if (buildstate.spool2) + _bt_spooldestroy(buildstate.spool2); + if (buildstate.btleader) + _bt_end_parallel(buildstate.btleader); + + result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult)); + + result->heap_tuples = reltuples; + result->index_tuples = buildstate.indtuples; + +#ifdef BTREE_BUILD_STATS + if (log_btree_build_stats) + { + ShowUsage("BTREE BUILD STATS"); + ResetUsage(); + } +#endif /* BTREE_BUILD_STATS */ + return result; +} /* - * create and initialize a spool structure + * Create and initialize one or two spool structures, and save them in caller's + * buildstate argument. May also fill-in fields within indexInfo used by index + * builds. + * + * Scans the heap, possibly in parallel, filling spools with IndexTuples. This + * routine encapsulates all aspects of managing parallelism. 
Caller need only + * call _bt_end_parallel() in parallel case after it is done with spool/spool2. + * + * Returns the total number of heap tuples scanned. */ -BTSpool * -_bt_spoolinit(Relation heap, Relation index, bool isunique, bool isdead) +static double +_bt_spools_heapscan(Relation heap, Relation index, BTBuildState *buildstate, + IndexInfo *indexInfo) { BTSpool *btspool = (BTSpool *) palloc0(sizeof(BTSpool)); - int btKbytes; + SortCoordinate coordinate = NULL; + double reltuples = 0; + /* + * We size the sort area as maintenance_work_mem rather than work_mem to + * speed index creation. This should be OK since a single backend can't + * run multiple index creations in parallel (see also: notes on + * parallelism and maintenance_work_mem below). + */ btspool->heap = heap; btspool->index = index; - btspool->isunique = isunique; + btspool->isunique = indexInfo->ii_Unique; + + /* Save as primary spool */ + buildstate->spool = btspool; + + /* Attempt to launch parallel worker scan when required */ + if (indexInfo->ii_ParallelWorkers > 0) + _bt_begin_parallel(buildstate, indexInfo->ii_Concurrent, + indexInfo->ii_ParallelWorkers); /* - * We size the sort area as maintenance_work_mem rather than work_mem to - * speed index creation. This should be OK since a single backend can't - * run multiple index creations in parallel. Note that creation of a - * unique index actually requires two BTSpool objects. We expect that the - * second one (for dead tuples) won't get very full, so we give it only - * work_mem. + * If parallel build requested and at least one worker process was + * successfully launched, set up coordination state */ - btKbytes = isdead ? work_mem : maintenance_work_mem; - btspool->sortstate = tuplesort_begin_index_btree(heap, index, isunique, - btKbytes, false); + if (buildstate->btleader) + { + coordinate = (SortCoordinate) palloc0(sizeof(SortCoordinateData)); + coordinate->isWorker = false; + coordinate->nParticipants = + buildstate->btleader->nparticipanttuplesorts; + coordinate->sharedsort = buildstate->btleader->sharedsort; + } - return btspool; + /* + * Begin serial/leader tuplesort. + * + * In cases where parallelism is involved, the leader receives the same + * share of maintenance_work_mem as a serial sort (it is generally treated + * in the same way as a serial sort once we return). Parallel worker + * Tuplesortstates will have received only a fraction of + * maintenance_work_mem, though. + * + * We rely on the lifetime of the Leader Tuplesortstate almost not + * overlapping with any worker Tuplesortstate's lifetime. There may be + * some small overlap, but that's okay because we rely on leader + * Tuplesortstate only allocating a small, fixed amount of memory here. + * When its tuplesort_performsort() is called (by our caller), and + * significant amounts of memory are likely to be used, all workers must + * have already freed almost all memory held by their Tuplesortstates + * (they are about to go away completely, too). The overall effect is + * that maintenance_work_mem always represents an absolute high watermark + * on the amount of memory used by a CREATE INDEX operation, regardless of + * the use of parallelism or any other factor. + */ + buildstate->spool->sortstate = + tuplesort_begin_index_btree(heap, index, buildstate->isunique, + maintenance_work_mem, coordinate, + false); + + /* + * If building a unique index, put dead tuples in a second spool to keep + * them out of the uniqueness check. 
We expect that the second spool (for + * dead tuples) won't get very full, so we give it only work_mem. + */ + if (indexInfo->ii_Unique) + { + BTSpool *btspool2 = (BTSpool *) palloc0(sizeof(BTSpool)); + SortCoordinate coordinate2 = NULL; + + /* Initialize secondary spool */ + btspool2->heap = heap; + btspool2->index = index; + btspool2->isunique = false; + /* Save as secondary spool */ + buildstate->spool2 = btspool2; + + if (buildstate->btleader) + { + /* + * Set up non-private state that is passed to + * tuplesort_begin_index_btree() about the basic high level + * coordination of a parallel sort. + */ + coordinate2 = (SortCoordinate) palloc0(sizeof(SortCoordinateData)); + coordinate2->isWorker = false; + coordinate2->nParticipants = + buildstate->btleader->nparticipanttuplesorts; + coordinate2->sharedsort = buildstate->btleader->sharedsort2; + } + + /* + * We expect that the second one (for dead tuples) won't get very + * full, so we give it only work_mem + */ + buildstate->spool2->sortstate = + tuplesort_begin_index_btree(heap, index, false, work_mem, + coordinate2, false); + } + + /* Fill spool using either serial or parallel heap scan */ + if (!buildstate->btleader) + reltuples = IndexBuildHeapScan(heap, index, indexInfo, true, + _bt_build_callback, (void *) buildstate, + NULL); + else + reltuples = _bt_parallel_heapscan(buildstate, + &indexInfo->ii_BrokenHotChain); + + /* okay, all heap tuples are spooled */ + if (buildstate->spool2 && !buildstate->havedead) + { + /* spool2 turns out to be unnecessary */ + _bt_spooldestroy(buildstate->spool2); + buildstate->spool2 = NULL; + } + + return reltuples; } /* * clean up a spool structure and its substructures. */ -void +static void _bt_spooldestroy(BTSpool *btspool) { tuplesort_end(btspool->sortstate); @@ -186,7 +508,7 @@ _bt_spooldestroy(BTSpool *btspool) /* * spool an index entry into the sort file. */ -void +static void _bt_spool(BTSpool *btspool, ItemPointer self, Datum *values, bool *isnull) { tuplesort_putindextuplevalues(btspool->sortstate, btspool->index, @@ -197,7 +519,7 @@ _bt_spool(BTSpool *btspool, ItemPointer self, Datum *values, bool *isnull) * given a spool loaded by successive calls to _bt_spool, * create an entire btree. */ -void +static void _bt_leafbuild(BTSpool *btspool, BTSpool *btspool2) { BTWriteState wstate; @@ -231,11 +553,34 @@ _bt_leafbuild(BTSpool *btspool, BTSpool *btspool2) _bt_load(&wstate, btspool, btspool2); } - /* - * Internal routines. + * Per-tuple callback from IndexBuildHeapScan */ +static void +_bt_build_callback(Relation index, + HeapTuple htup, + Datum *values, + bool *isnull, + bool tupleIsAlive, + void *state) +{ + BTBuildState *buildstate = (BTBuildState *) state; + /* + * insert the index tuple into the appropriate spool file for subsequent + * processing + */ + if (tupleIsAlive || buildstate->spool2 == NULL) + _bt_spool(buildstate->spool, &htup->t_self, values, isnull); + else + { + /* dead tuples are put into spool2 */ + buildstate->havedead = true; + _bt_spool(buildstate->spool2, &htup->t_self, values, isnull); + } + + buildstate->indtuples += 1; +} /* * allocate workspace for a new, clean btree page, not linked to any siblings. @@ -819,3 +1164,488 @@ _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2) smgrimmedsync(wstate->index->rd_smgr, MAIN_FORKNUM); } } + +/* + * Create parallel context, and launch workers for leader. 
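Before reading _bt_begin_parallel() itself, it may help to see the generic shm_toc protocol it follows: estimate sizes and key counts, create the DSM segment, then allocate and publish each chunk under a key that workers later look up. A minimal sketch, with a placeholder key and payload that are not part of this patch:

/* Placeholder key and payload, for illustration only */
#define PARALLEL_KEY_MYSTATE    UINT64CONST(0xB000000000000001)

typedef struct MyState
{
    int         nworkers;
} MyState;

static void
publish_shared_state(ParallelContext *pcxt)
{
    MyState    *mystate;

    /* Phase 1: every chunk and key must be declared up front */
    shm_toc_estimate_chunk(&pcxt->estimator, sizeof(MyState));
    shm_toc_estimate_keys(&pcxt->estimator, 1);

    /* Phase 2: create the DSM segment sized from those estimates */
    InitializeParallelDSM(pcxt);

    /* Phase 3: carve out the space and publish it under the key */
    mystate = (MyState *) shm_toc_allocate(pcxt->toc, sizeof(MyState));
    mystate->nworkers = pcxt->nworkers;
    shm_toc_insert(pcxt->toc, PARALLEL_KEY_MYSTATE, mystate);
}

Workers then retrieve the pointer with shm_toc_lookup(toc, PARALLEL_KEY_MYSTATE, false), exactly as _bt_parallel_build_main() does for its three keys.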
+ * + * buildstate argument should be initialized (with the exception of the + * tuplesort state in spools, which may later be created based on shared + * state initially set up here). + * + * isconcurrent indicates if operation is CREATE INDEX CONCURRENTLY. + * + * request is the target number of parallel worker processes to launch. + * + * Sets buildstate's BTLeader, which caller must use to shut down parallel + * mode by passing it to _bt_end_parallel() at the very end of its index + * build. If not even a single worker process can be launched, this is + * never set, and caller should proceed with a serial index build. + */ +static void +_bt_begin_parallel(BTBuildState *buildstate, bool isconcurrent, int request) +{ + ParallelContext *pcxt; + int scantuplesortstates; + Snapshot snapshot; + Size estbtshared; + Size estsort; + BTShared *btshared; + Sharedsort *sharedsort; + Sharedsort *sharedsort2; + BTSpool *btspool = buildstate->spool; + BTLeader *btleader = (BTLeader *) palloc0(sizeof(BTLeader)); + bool leaderparticipates = true; + +#ifdef DISABLE_LEADER_PARTICIPATION + leaderparticipates = false; +#endif + + /* + * Enter parallel mode, and create context for parallel build of btree + * index + */ + EnterParallelMode(); + Assert(request > 0); + pcxt = CreateParallelContext("postgres", "_bt_parallel_build_main", + request, true); + scantuplesortstates = leaderparticipates ? request + 1 : request; + + /* + * Prepare for scan of the base relation. In a normal index build, we use + * SnapshotAny because we must retrieve all tuples and do our own time + * qual checks (because we have to index RECENTLY_DEAD tuples). In a + * concurrent build, we take a regular MVCC snapshot and index whatever's + * live according to that. + */ + if (!isconcurrent) + snapshot = SnapshotAny; + else + snapshot = RegisterSnapshot(GetTransactionSnapshot()); + + /* + * Estimate size for at least two keys -- our own + * PARALLEL_KEY_BTREE_SHARED workspace, and PARALLEL_KEY_TUPLESORT + * tuplesort workspace + */ + estbtshared = _bt_parallel_estimate_shared(snapshot); + shm_toc_estimate_chunk(&pcxt->estimator, estbtshared); + estsort = tuplesort_estimate_shared(scantuplesortstates); + shm_toc_estimate_chunk(&pcxt->estimator, estsort); + + /* + * Unique case requires a second spool, and so we may have to account for + * a third shared workspace -- PARALLEL_KEY_TUPLESORT_SPOOL2 + */ + if (!btspool->isunique) + shm_toc_estimate_keys(&pcxt->estimator, 2); + else + { + shm_toc_estimate_chunk(&pcxt->estimator, estsort); + shm_toc_estimate_keys(&pcxt->estimator, 3); + } + + /* Everyone's had a chance to ask for space, so now create the DSM */ + InitializeParallelDSM(pcxt); + + /* Store shared build state, for which we reserved space */ + btshared = (BTShared *) shm_toc_allocate(pcxt->toc, estbtshared); + /* Initialize immutable state */ + btshared->heaprelid = RelationGetRelid(btspool->heap); + btshared->indexrelid = RelationGetRelid(btspool->index); + btshared->isunique = btspool->isunique; + btshared->isconcurrent = isconcurrent; + btshared->scantuplesortstates = scantuplesortstates; + ConditionVariableInit(&btshared->workersdonecv); + SpinLockInit(&btshared->mutex); + /* Initialize mutable state */ + btshared->nparticipantsdone = 0; + btshared->reltuples = 0.0; + btshared->havedead = false; + btshared->indtuples = 0.0; + btshared->brokenhotchain = false; + heap_parallelscan_initialize(&btshared->heapdesc, btspool->heap, snapshot); + + /* + * Store shared tuplesort-private state, for which we reserved space. 
+ * Then, initialize opaque state using tuplesort routine. + */ + sharedsort = (Sharedsort *) shm_toc_allocate(pcxt->toc, estsort); + tuplesort_initialize_shared(sharedsort, scantuplesortstates, + pcxt->seg); + + shm_toc_insert(pcxt->toc, PARALLEL_KEY_BTREE_SHARED, btshared); + shm_toc_insert(pcxt->toc, PARALLEL_KEY_TUPLESORT, sharedsort); + + /* Unique case requires a second spool, and associated shared state */ + if (!btspool->isunique) + sharedsort2 = NULL; + else + { + /* + * Store additional shared tuplesort-private state, for which we + * reserved space. Then, initialize opaque state using tuplesort + * routine. + */ + sharedsort2 = (Sharedsort *) shm_toc_allocate(pcxt->toc, estsort); + tuplesort_initialize_shared(sharedsort2, scantuplesortstates, + pcxt->seg); + + shm_toc_insert(pcxt->toc, PARALLEL_KEY_TUPLESORT_SPOOL2, sharedsort2); + } + + /* Launch workers, saving status for leader/caller */ + LaunchParallelWorkers(pcxt); + btleader->pcxt = pcxt; + btleader->nparticipanttuplesorts = pcxt->nworkers_launched; + if (leaderparticipates) + btleader->nparticipanttuplesorts++; + btleader->btshared = btshared; + btleader->sharedsort = sharedsort; + btleader->sharedsort2 = sharedsort2; + btleader->snapshot = snapshot; + + /* If no workers were successfully launched, back out (do serial build) */ + if (pcxt->nworkers_launched == 0) + { + _bt_end_parallel(btleader); + return; + } + + /* Save leader state now that it's clear build will be parallel */ + buildstate->btleader = btleader; + + /* Join heap scan ourselves */ + if (leaderparticipates) + _bt_leader_participate_as_worker(buildstate); + + /* + * Caller needs to wait for all launched workers when we return. Make + * sure that the failure-to-start case will not hang forever. + */ + WaitForParallelWorkersToAttach(pcxt); +} + +/* + * Shut down workers, destroy parallel context, and end parallel mode. + */ +static void +_bt_end_parallel(BTLeader *btleader) +{ + /* Shutdown worker processes */ + WaitForParallelWorkersToFinish(btleader->pcxt); + /* Free last reference to MVCC snapshot, if one was used */ + if (IsMVCCSnapshot(btleader->snapshot)) + UnregisterSnapshot(btleader->snapshot); + DestroyParallelContext(btleader->pcxt); + ExitParallelMode(); +} + +/* + * Returns size of shared memory required to store state for a parallel + * btree index build based on the snapshot its parallel scan will use. + */ +static Size +_bt_parallel_estimate_shared(Snapshot snapshot) +{ + if (!IsMVCCSnapshot(snapshot)) + { + Assert(snapshot == SnapshotAny); + return sizeof(BTShared); + } + + return add_size(offsetof(BTShared, heapdesc) + + offsetof(ParallelHeapScanDescData, phs_snapshot_data), + EstimateSnapshotSpace(snapshot)); +} + +/* + * Within leader, wait for end of heap scan. + * + * When called, parallel heap scan started by _bt_begin_parallel() will + * already be underway within worker processes (when leader participates + * as a worker, we should end up here just as workers are finishing). + * + * Fills in fields needed for ambuild statistics, and lets caller set + * field indicating that some worker encountered a broken HOT chain. + * + * Returns the total number of heap tuples scanned. 
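Stripped of the TOC bookkeeping, _bt_begin_parallel() and _bt_end_parallel() bracket the build with the standard parallel-context lifecycle. A skeleton of that pairing, with error handling and DSM setup elided:

static void
parallel_build_skeleton(int request)
{
    ParallelContext *pcxt;

    EnterParallelMode();
    pcxt = CreateParallelContext("postgres", "_bt_parallel_build_main",
                                 request, true);    /* serializable_okay */
    /* ... shm_toc_estimate_*, InitializeParallelDSM(), shm_toc_insert() ... */
    LaunchParallelWorkers(pcxt);
    /* ... leader optionally participates, then waits for workers ... */
    WaitForParallelWorkersToFinish(pcxt);
    DestroyParallelContext(pcxt);
    ExitParallelMode();
}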
+ */ +static double +_bt_parallel_heapscan(BTBuildState *buildstate, bool *brokenhotchain) +{ + BTShared *btshared = buildstate->btleader->btshared; + int nparticipanttuplesorts; + double reltuples; + + nparticipanttuplesorts = buildstate->btleader->nparticipanttuplesorts; + for (;;) + { + SpinLockAcquire(&btshared->mutex); + if (btshared->nparticipantsdone == nparticipanttuplesorts) + { + buildstate->havedead = btshared->havedead; + buildstate->indtuples = btshared->indtuples; + *brokenhotchain = btshared->brokenhotchain; + reltuples = btshared->reltuples; + SpinLockRelease(&btshared->mutex); + break; + } + SpinLockRelease(&btshared->mutex); + + ConditionVariableSleep(&btshared->workersdonecv, + WAIT_EVENT_PARALLEL_CREATE_INDEX_SCAN); + } + + ConditionVariableCancelSleep(); + + return reltuples; +} + +/* + * Within leader, participate as a parallel worker. + */ +static void +_bt_leader_participate_as_worker(BTBuildState *buildstate) +{ + BTLeader *btleader = buildstate->btleader; + BTSpool *leaderworker; + BTSpool *leaderworker2; + int sortmem; + + /* Allocate memory and initialize private spool */ + leaderworker = (BTSpool *) palloc0(sizeof(BTSpool)); + leaderworker->heap = buildstate->spool->heap; + leaderworker->index = buildstate->spool->index; + leaderworker->isunique = buildstate->spool->isunique; + + /* Initialize second spool, if required */ + if (!btleader->btshared->isunique) + leaderworker2 = NULL; + else + { + /* Allocate memory for worker's own private secondary spool */ + leaderworker2 = (BTSpool *) palloc0(sizeof(BTSpool)); + + /* Initialize worker's own secondary spool */ + leaderworker2->heap = leaderworker->heap; + leaderworker2->index = leaderworker->index; + leaderworker2->isunique = false; + } + + /* + * Might as well use reliable figure when doling out maintenance_work_mem + * (when requested number of workers were not launched, this will be + * somewhat higher than it is for other workers). + */ + sortmem = maintenance_work_mem / btleader->nparticipanttuplesorts; + + /* Perform work common to all participants */ + _bt_parallel_scan_and_sort(leaderworker, leaderworker2, btleader->btshared, + btleader->sharedsort, btleader->sharedsort2, + sortmem); + +#ifdef BTREE_BUILD_STATS + if (log_btree_build_stats) + { + ShowUsage("BTREE BUILD (Leader Partial Spool) STATISTICS"); + ResetUsage(); + } +#endif /* BTREE_BUILD_STATS */ +} + +/* + * Perform work within a launched parallel process. 
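To make the even division of maintenance_work_mem just above concrete, a worked example with illustrative numbers:

/* Illustrative only: 64MB budget, two workers plus a participating leader */
int     maintenance_work_mem_kb = 65536;
int     nparticipanttuplesorts = 3;
int     sortmem = maintenance_work_mem_kb / nparticipanttuplesorts; /* 21845 kB */

If only one of the two requested workers had actually launched, the leader would divide by 2 instead and sort with 32768 kB, while workers divide by the originally requested scantuplesortstates and get 21845 kB each; that difference is the "somewhat higher" figure the comment above refers to.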
+ */
+void
+_bt_parallel_build_main(dsm_segment *seg, shm_toc *toc)
+{
+    BTSpool    *btspool;
+    BTSpool    *btspool2;
+    BTShared   *btshared;
+    Sharedsort *sharedsort;
+    Sharedsort *sharedsort2;
+    Relation    heapRel;
+    Relation    indexRel;
+    LOCKMODE    heapLockmode;
+    LOCKMODE    indexLockmode;
+    int         sortmem;
+
+#ifdef BTREE_BUILD_STATS
+    if (log_btree_build_stats)
+        ResetUsage();
+#endif                          /* BTREE_BUILD_STATS */
+
+    /* Look up shared state */
+    btshared = shm_toc_lookup(toc, PARALLEL_KEY_BTREE_SHARED, false);
+
+    /* Open relations using lock modes known to be obtained by index.c */
+    if (!btshared->isconcurrent)
+    {
+        heapLockmode = ShareLock;
+        indexLockmode = AccessExclusiveLock;
+    }
+    else
+    {
+        heapLockmode = ShareUpdateExclusiveLock;
+        indexLockmode = RowExclusiveLock;
+    }
+
+    /* Open relations within worker */
+    heapRel = heap_open(btshared->heaprelid, heapLockmode);
+    indexRel = index_open(btshared->indexrelid, indexLockmode);
+
+    /* Initialize worker's own spool */
+    btspool = (BTSpool *) palloc0(sizeof(BTSpool));
+    btspool->heap = heapRel;
+    btspool->index = indexRel;
+    btspool->isunique = btshared->isunique;
+
+    /* Look up shared state private to tuplesort.c */
+    sharedsort = shm_toc_lookup(toc, PARALLEL_KEY_TUPLESORT, false);
+    tuplesort_attach_shared(sharedsort, seg);
+    if (!btshared->isunique)
+    {
+        btspool2 = NULL;
+        sharedsort2 = NULL;
+    }
+    else
+    {
+        /* Allocate memory for worker's own private secondary spool */
+        btspool2 = (BTSpool *) palloc0(sizeof(BTSpool));
+
+        /* Initialize worker's own secondary spool */
+        btspool2->heap = btspool->heap;
+        btspool2->index = btspool->index;
+        btspool2->isunique = false;
+        /* Look up shared state private to tuplesort.c */
+        sharedsort2 = shm_toc_lookup(toc, PARALLEL_KEY_TUPLESORT_SPOOL2, false);
+        tuplesort_attach_shared(sharedsort2, seg);
+    }
+
+    /* Perform sorting of spool, and possibly a spool2 */
+    sortmem = maintenance_work_mem / btshared->scantuplesortstates;
+    _bt_parallel_scan_and_sort(btspool, btspool2, btshared, sharedsort,
+                               sharedsort2, sortmem);
+
+#ifdef BTREE_BUILD_STATS
+    if (log_btree_build_stats)
+    {
+        ShowUsage("BTREE BUILD (Worker Partial Spool) STATISTICS");
+        ResetUsage();
+    }
+#endif                          /* BTREE_BUILD_STATS */
+
+    index_close(indexRel, indexLockmode);
+    heap_close(heapRel, heapLockmode);
+}
+
+/*
+ * Perform a worker's portion of a parallel sort.
+ *
+ * This generates a tuplesort for the passed btspool, and a second tuplesort
+ * state if a second btspool is needed (i.e. for unique index builds).  All
+ * other spool fields should already be set when this is called.
+ *
+ * sortmem is the amount of working memory to use within each worker,
+ * expressed in KBs.
+ *
+ * When this returns, workers are done, and need only release resources.
+ */
+static void
+_bt_parallel_scan_and_sort(BTSpool *btspool, BTSpool *btspool2,
+                           BTShared *btshared, Sharedsort *sharedsort,
+                           Sharedsort *sharedsort2, int sortmem)
+{
+    SortCoordinate coordinate;
+    BTBuildState buildstate;
+    HeapScanDesc scan;
+    double      reltuples;
+    IndexInfo  *indexInfo;
+
+    /* Initialize local tuplesort coordination state */
+    coordinate = palloc0(sizeof(SortCoordinateData));
+    coordinate->isWorker = true;
+    coordinate->nParticipants = -1;
+    coordinate->sharedsort = sharedsort;
+
+    /* Begin "partial" tuplesort */
+    btspool->sortstate = tuplesort_begin_index_btree(btspool->heap,
+                                                     btspool->index,
+                                                     btspool->isunique,
+                                                     sortmem, coordinate,
+                                                     false);
+
+    /*
+     * Just as with the serial case, there may be a second spool.  If so, a
+     * second, dedicated spool2 partial tuplesort is required.
+ */ + if (btspool2) + { + SortCoordinate coordinate2; + + /* + * We expect that the second one (for dead tuples) won't get very + * full, so we give it only work_mem (unless sortmem is less for + * worker). Worker processes are generally permitted to allocate + * work_mem independently. + */ + coordinate2 = palloc0(sizeof(SortCoordinateData)); + coordinate2->isWorker = true; + coordinate2->nParticipants = -1; + coordinate2->sharedsort = sharedsort2; + btspool2->sortstate = + tuplesort_begin_index_btree(btspool->heap, btspool->index, false, + Min(sortmem, work_mem), coordinate2, + false); + } + + /* Fill in buildstate for _bt_build_callback() */ + buildstate.isunique = btshared->isunique; + buildstate.havedead = false; + buildstate.heap = btspool->heap; + buildstate.spool = btspool; + buildstate.spool2 = btspool2; + buildstate.indtuples = 0; + buildstate.btleader = NULL; + + /* Join parallel scan */ + indexInfo = BuildIndexInfo(btspool->index); + indexInfo->ii_Concurrent = btshared->isconcurrent; + scan = heap_beginscan_parallel(btspool->heap, &btshared->heapdesc); + reltuples = IndexBuildHeapScan(btspool->heap, btspool->index, indexInfo, + true, _bt_build_callback, + (void *) &buildstate, scan); + + /* + * Execute this worker's part of the sort. + * + * Unlike leader and serial cases, we cannot avoid calling + * tuplesort_performsort() for spool2 if it ends up containing no dead + * tuples (this is disallowed for workers by tuplesort). + */ + tuplesort_performsort(btspool->sortstate); + if (btspool2) + tuplesort_performsort(btspool2->sortstate); + + /* + * Done. Record ambuild statistics, and whether we encountered a broken + * HOT chain. + */ + SpinLockAcquire(&btshared->mutex); + btshared->nparticipantsdone++; + btshared->reltuples += reltuples; + if (buildstate.havedead) + btshared->havedead = true; + btshared->indtuples += buildstate.indtuples; + if (indexInfo->ii_BrokenHotChain) + btshared->brokenhotchain = true; + SpinLockRelease(&btshared->mutex); + + /* Notify leader */ + ConditionVariableSignal(&btshared->workersdonecv); + + /* We can end tuplesorts immediately */ + tuplesort_end(btspool->sortstate); + if (btspool2) + tuplesort_end(btspool2->sortstate); +} diff --git a/src/backend/access/spgist/spginsert.c b/src/backend/access/spgist/spginsert.c index d2aec6df3e..34d9b48f15 100644 --- a/src/backend/access/spgist/spginsert.c +++ b/src/backend/access/spgist/spginsert.c @@ -138,7 +138,8 @@ spgbuild(Relation heap, Relation index, IndexInfo *indexInfo) ALLOCSET_DEFAULT_SIZES); reltuples = IndexBuildHeapScan(heap, index, indexInfo, true, - spgistBuildCallback, (void *) &buildstate); + spgistBuildCallback, (void *) &buildstate, + NULL); MemoryContextDelete(buildstate.tmpCtx); diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index 5b45b07e7c..a325933940 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -14,6 +14,7 @@ #include "postgres.h" +#include "access/nbtree.h" #include "access/parallel.h" #include "access/session.h" #include "access/xact.h" @@ -129,6 +130,9 @@ static const struct { { "ParallelQueryMain", ParallelQueryMain + }, + { + "_bt_parallel_build_main", _bt_parallel_build_main } }; @@ -146,7 +150,7 @@ static void ParallelWorkerShutdown(int code, Datum arg); */ ParallelContext * CreateParallelContext(const char *library_name, const char *function_name, - int nworkers) + int nworkers, bool serializable_okay) { MemoryContext oldcontext; ParallelContext *pcxt; @@ -167,9 +171,11 @@ 
CreateParallelContext(const char *library_name, const char *function_name, /* * If we are running under serializable isolation, we can't use parallel * workers, at least not until somebody enhances that mechanism to be - * parallel-aware. + * parallel-aware. Utility statement callers may ask us to ignore this + * restriction because they're always able to safely ignore the fact that + * SIREAD locks do not work with parallelism. */ - if (IsolationIsSerializable()) + if (IsolationIsSerializable() && !serializable_okay) nworkers = 0; /* We might be running in a short-lived memory context. */ diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c index 80860128fb..28ff2f0979 100644 --- a/src/backend/bootstrap/bootstrap.c +++ b/src/backend/bootstrap/bootstrap.c @@ -1137,7 +1137,7 @@ build_indices(void) heap = heap_open(ILHead->il_heap, NoLock); ind = index_open(ILHead->il_ind, NoLock); - index_build(heap, ind, ILHead->il_info, false, false); + index_build(heap, ind, ILHead->il_info, false, false, false); index_close(ind, NoLock); heap_close(heap, NoLock); diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 774c07b03a..0f34f5381a 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -2841,7 +2841,7 @@ RelationTruncateIndexes(Relation heapRelation) /* Initialize the index and rebuild */ /* Note: we do not need to re-establish pkey setting */ - index_build(heapRelation, currentIndex, indexInfo, false, true); + index_build(heapRelation, currentIndex, indexInfo, false, true, false); /* We're done with this index */ index_close(currentIndex, NoLock); diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c index 849a469127..f2cb6d7fb8 100644 --- a/src/backend/catalog/index.c +++ b/src/backend/catalog/index.c @@ -56,6 +56,7 @@ #include "nodes/makefuncs.h" #include "nodes/nodeFuncs.h" #include "optimizer/clauses.h" +#include "optimizer/planner.h" #include "parser/parser.h" #include "rewrite/rewriteManip.h" #include "storage/bufmgr.h" @@ -902,7 +903,7 @@ index_create(Relation heapRelation, Assert(indexRelationId == RelationGetRelid(indexRelation)); /* - * Obtain exclusive lock on it. Although no other backends can see it + * Obtain exclusive lock on it. Although no other transactions can see it * until we commit, this prevents deadlock-risk complaints from lock * manager in cases such as CLUSTER. */ @@ -1159,7 +1160,8 @@ index_create(Relation heapRelation, } else { - index_build(heapRelation, indexRelation, indexInfo, isprimary, false); + index_build(heapRelation, indexRelation, indexInfo, isprimary, false, + true); } /* @@ -1746,6 +1748,7 @@ BuildIndexInfo(Relation index) /* initialize index-build state to default */ ii->ii_Concurrent = false; ii->ii_BrokenHotChain = false; + ii->ii_ParallelWorkers = 0; /* set up for possible use by index AM */ ii->ii_Am = index->rd_rel->relam; @@ -2164,6 +2167,7 @@ index_update_stats(Relation rel, * * isprimary tells whether to mark the index as a primary-key index. * isreindex indicates we are recreating a previously-existing index. + * parallel indicates if parallelism may be useful. * * Note: when reindexing an existing index, isprimary can be false even if * the index is a PK; it's already properly marked and need not be re-marked. 
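The call sites updated by this patch illustrate the convention for index_build()'s new final argument: internal rebuilds (bootstrap's build_indices(), RelationTruncateIndexes()) pass false, while paths that build or rebuild a user index (index_create(), DefineIndex(), reindex_index()) pass true. A representative call, as a sketch rather than a quotation of any one caller:

/* Hypothetical caller: allow a parallel build for a new user index */
index_build(heapRelation, indexRelation, indexInfo,
            false,  /* isprimary */
            false,  /* isreindex */
            true);  /* parallel: let the planner pick a worker count */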
@@ -2177,7 +2181,8 @@ index_build(Relation heapRelation, Relation indexRelation, IndexInfo *indexInfo, bool isprimary, - bool isreindex) + bool isreindex, + bool parallel) { IndexBuildResult *stats; Oid save_userid; @@ -2192,10 +2197,31 @@ index_build(Relation heapRelation, Assert(PointerIsValid(indexRelation->rd_amroutine->ambuild)); Assert(PointerIsValid(indexRelation->rd_amroutine->ambuildempty)); - ereport(DEBUG1, - (errmsg("building index \"%s\" on table \"%s\"", - RelationGetRelationName(indexRelation), - RelationGetRelationName(heapRelation)))); + /* + * Determine worker process details for parallel CREATE INDEX. Currently, + * only btree has support for parallel builds. + * + * Note that planner considers parallel safety for us. + */ + if (parallel && IsNormalProcessingMode() && + indexRelation->rd_rel->relam == BTREE_AM_OID) + indexInfo->ii_ParallelWorkers = + plan_create_index_workers(RelationGetRelid(heapRelation), + RelationGetRelid(indexRelation)); + + if (indexInfo->ii_ParallelWorkers == 0) + ereport(DEBUG1, + (errmsg("building index \"%s\" on table \"%s\" serially", + RelationGetRelationName(indexRelation), + RelationGetRelationName(heapRelation)))); + else + ereport(DEBUG1, + (errmsg_plural("building index \"%s\" on table \"%s\" with request for %d parallel worker", + "building index \"%s\" on table \"%s\" with request for %d parallel workers", + indexInfo->ii_ParallelWorkers, + RelationGetRelationName(indexRelation), + RelationGetRelationName(heapRelation), + indexInfo->ii_ParallelWorkers))); /* * Switch to the table owner's userid, so that any index functions are run @@ -2347,13 +2373,14 @@ IndexBuildHeapScan(Relation heapRelation, IndexInfo *indexInfo, bool allow_sync, IndexBuildCallback callback, - void *callback_state) + void *callback_state, + HeapScanDesc scan) { return IndexBuildHeapRangeScan(heapRelation, indexRelation, indexInfo, allow_sync, false, 0, InvalidBlockNumber, - callback, callback_state); + callback, callback_state, scan); } /* @@ -2375,11 +2402,11 @@ IndexBuildHeapRangeScan(Relation heapRelation, BlockNumber start_blockno, BlockNumber numblocks, IndexBuildCallback callback, - void *callback_state) + void *callback_state, + HeapScanDesc scan) { bool is_system_catalog; bool checking_uniqueness; - HeapScanDesc scan; HeapTuple heapTuple; Datum values[INDEX_MAX_KEYS]; bool isnull[INDEX_MAX_KEYS]; @@ -2389,6 +2416,7 @@ IndexBuildHeapRangeScan(Relation heapRelation, EState *estate; ExprContext *econtext; Snapshot snapshot; + bool need_unregister_snapshot = false; TransactionId OldestXmin; BlockNumber root_blkno = InvalidBlockNumber; OffsetNumber root_offsets[MaxHeapTuplesPerPage]; @@ -2432,27 +2460,59 @@ IndexBuildHeapRangeScan(Relation heapRelation, * concurrent build, or during bootstrap, we take a regular MVCC snapshot * and index whatever's live according to that. */ - if (IsBootstrapProcessingMode() || indexInfo->ii_Concurrent) - { - snapshot = RegisterSnapshot(GetTransactionSnapshot()); - OldestXmin = InvalidTransactionId; /* not used */ + OldestXmin = InvalidTransactionId; + + /* okay to ignore lazy VACUUMs here */ + if (!IsBootstrapProcessingMode() && !indexInfo->ii_Concurrent) + OldestXmin = GetOldestXmin(heapRelation, PROCARRAY_FLAGS_VACUUM); - /* "any visible" mode is not compatible with this */ - Assert(!anyvisible); + if (!scan) + { + /* + * Serial index build. + * + * Must begin our own heap scan in this case. We may also need to + * register a snapshot whose lifetime is under our direct control. 
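The decision being implemented here can be summarized as: bootstrap and concurrent builds use a registered MVCC snapshot and no xmin horizon, while ordinary builds scan with SnapshotAny plus an OldestXmin horizon. A hypothetical helper expressing just that rule (not code from this patch):

static Snapshot
choose_build_snapshot(Relation heapRelation, bool concurrent,
                      TransactionId *oldestXmin, bool *need_unregister)
{
    if (IsBootstrapProcessingMode() || concurrent)
    {
        *oldestXmin = InvalidTransactionId; /* not used */
        *need_unregister = true;
        return RegisterSnapshot(GetTransactionSnapshot());
    }

    /* okay to ignore lazy VACUUMs here */
    *oldestXmin = GetOldestXmin(heapRelation, PROCARRAY_FLAGS_VACUUM);
    *need_unregister = false;
    return SnapshotAny;
}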
+ */ + if (!TransactionIdIsValid(OldestXmin)) + { + snapshot = RegisterSnapshot(GetTransactionSnapshot()); + need_unregister_snapshot = true; + } + else + snapshot = SnapshotAny; + + scan = heap_beginscan_strat(heapRelation, /* relation */ + snapshot, /* snapshot */ + 0, /* number of keys */ + NULL, /* scan key */ + true, /* buffer access strategy OK */ + allow_sync); /* syncscan OK? */ } else { - snapshot = SnapshotAny; - /* okay to ignore lazy VACUUMs here */ - OldestXmin = GetOldestXmin(heapRelation, PROCARRAY_FLAGS_VACUUM); + /* + * Parallel index build. + * + * Parallel case never registers/unregisters own snapshot. Snapshot + * is taken from parallel heap scan, and is SnapshotAny or an MVCC + * snapshot, based on same criteria as serial case. + */ + Assert(!IsBootstrapProcessingMode()); + Assert(allow_sync); + snapshot = scan->rs_snapshot; } - scan = heap_beginscan_strat(heapRelation, /* relation */ - snapshot, /* snapshot */ - 0, /* number of keys */ - NULL, /* scan key */ - true, /* buffer access strategy OK */ - allow_sync); /* syncscan OK? */ + /* + * Must call GetOldestXmin() with SnapshotAny. Should never call + * GetOldestXmin() with MVCC snapshot. (It's especially worth checking + * this for parallel builds, since ambuild routines that support parallel + * builds must work these details out for themselves.) + */ + Assert(snapshot == SnapshotAny || IsMVCCSnapshot(snapshot)); + Assert(snapshot == SnapshotAny ? TransactionIdIsValid(OldestXmin) : + !TransactionIdIsValid(OldestXmin)); + Assert(snapshot == SnapshotAny || !anyvisible); /* set our scan endpoints */ if (!allow_sync) @@ -2783,8 +2843,8 @@ IndexBuildHeapRangeScan(Relation heapRelation, heap_endscan(scan); - /* we can now forget our snapshot, if set */ - if (IsBootstrapProcessingMode() || indexInfo->ii_Concurrent) + /* we can now forget our snapshot, if set and registered by us */ + if (need_unregister_snapshot) UnregisterSnapshot(snapshot); ExecDropSingleTupleTableSlot(slot); @@ -3027,7 +3087,7 @@ validate_index(Oid heapId, Oid indexId, Snapshot snapshot) state.tuplesort = tuplesort_begin_datum(INT8OID, Int8LessOperator, InvalidOid, false, maintenance_work_mem, - false); + NULL, false); state.htups = state.itups = state.tups_inserted = 0; (void) index_bulk_delete(&ivinfo, NULL, @@ -3552,7 +3612,7 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence, /* Initialize the index and rebuild */ /* Note: we do not need to re-establish pkey setting */ - index_build(heapRelation, iRel, indexInfo, false, true); + index_build(heapRelation, iRel, indexInfo, false, true, true); } PG_CATCH(); { @@ -3911,8 +3971,7 @@ SetReindexProcessing(Oid heapOid, Oid indexOid) static void ResetReindexProcessing(void) { - if (IsInParallelMode()) - elog(ERROR, "cannot modify reindex state during a parallel operation"); + /* This may be called in leader error path */ currentlyReindexedHeap = InvalidOid; currentlyReindexedIndex = InvalidOid; } diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c index cf37011b73..dcbad1286b 100644 --- a/src/backend/catalog/toasting.c +++ b/src/backend/catalog/toasting.c @@ -315,6 +315,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, indexInfo->ii_ReadyForInserts = true; indexInfo->ii_Concurrent = false; indexInfo->ii_BrokenHotChain = false; + indexInfo->ii_ParallelWorkers = 0; indexInfo->ii_Am = BTREE_AM_OID; indexInfo->ii_AmCache = NULL; indexInfo->ii_Context = CurrentMemoryContext; diff --git a/src/backend/commands/cluster.c 
b/src/backend/commands/cluster.c index 1701548d84..5d481dd50d 100644 --- a/src/backend/commands/cluster.c +++ b/src/backend/commands/cluster.c @@ -909,7 +909,8 @@ copy_heap_data(Oid OIDNewHeap, Oid OIDOldHeap, Oid OIDOldIndex, bool verbose, /* Set up sorting if wanted */ if (use_sort) tuplesort = tuplesort_begin_cluster(oldTupDesc, OldIndex, - maintenance_work_mem, false); + maintenance_work_mem, + NULL, false); else tuplesort = NULL; diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index a9461a4b06..7c46613215 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -380,6 +380,10 @@ DefineIndex(Oid relationId, * this will typically require the caller to have already locked the * relation. To avoid lock upgrade hazards, that lock should be at least * as strong as the one we take here. + * + * NB: If the lock strength here ever changes, code that is run by + * parallel workers under the control of certain particular ambuild + * functions will need to be updated, too. */ lockmode = stmt->concurrent ? ShareUpdateExclusiveLock : ShareLock; rel = heap_open(relationId, lockmode); @@ -617,6 +621,7 @@ DefineIndex(Oid relationId, indexInfo->ii_ReadyForInserts = !stmt->concurrent; indexInfo->ii_Concurrent = stmt->concurrent; indexInfo->ii_BrokenHotChain = false; + indexInfo->ii_ParallelWorkers = 0; indexInfo->ii_Am = accessMethodId; indexInfo->ii_AmCache = NULL; indexInfo->ii_Context = CurrentMemoryContext; @@ -1000,7 +1005,7 @@ DefineIndex(Oid relationId, indexInfo->ii_BrokenHotChain = false; /* Now build the index */ - index_build(rel, indexRelation, indexInfo, stmt->primary, false); + index_build(rel, indexRelation, indexInfo, stmt->primary, false, true); /* Close both the relations, but keep the locks */ heap_close(rel, NoLock); diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c index f8b72ebab9..14b0b89463 100644 --- a/src/backend/executor/execParallel.c +++ b/src/backend/executor/execParallel.c @@ -592,7 +592,7 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, pstmt_data = ExecSerializePlan(planstate->plan, estate); /* Create a parallel context. 
*/ - pcxt = CreateParallelContext("postgres", "ParallelQueryMain", nworkers); + pcxt = CreateParallelContext("postgres", "ParallelQueryMain", nworkers, false); pei->pcxt = pcxt; /* diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index ec62e7fb38..a86d4b68ea 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -373,7 +373,7 @@ initialize_phase(AggState *aggstate, int newphase) sortnode->collations, sortnode->nullsFirst, work_mem, - false); + NULL, false); } aggstate->current_phase = newphase; @@ -450,7 +450,7 @@ initialize_aggregate(AggState *aggstate, AggStatePerTrans pertrans, pertrans->sortOperators[0], pertrans->sortCollations[0], pertrans->sortNullsFirst[0], - work_mem, false); + work_mem, NULL, false); } else pertrans->sortstates[aggstate->current_set] = @@ -460,7 +460,7 @@ initialize_aggregate(AggState *aggstate, AggStatePerTrans pertrans, pertrans->sortOperators, pertrans->sortCollations, pertrans->sortNullsFirst, - work_mem, false); + work_mem, NULL, false); } /* diff --git a/src/backend/executor/nodeSort.c b/src/backend/executor/nodeSort.c index 9c68de8565..d61c859fce 100644 --- a/src/backend/executor/nodeSort.c +++ b/src/backend/executor/nodeSort.c @@ -93,7 +93,7 @@ ExecSort(PlanState *pstate) plannode->collations, plannode->nullsFirst, work_mem, - node->randomAccess); + NULL, node->randomAccess); if (node->bounded) tuplesort_set_bound(tuplesortstate, node->bound); node->tuplesortstate = (void *) tuplesortstate; diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index fd1a58336b..5bff90e1bc 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -720,7 +720,8 @@ create_plain_partial_paths(PlannerInfo *root, RelOptInfo *rel) { int parallel_workers; - parallel_workers = compute_parallel_worker(rel, rel->pages, -1); + parallel_workers = compute_parallel_worker(rel, rel->pages, -1, + max_parallel_workers_per_gather); /* If any limit was set to zero, the user doesn't want a parallel scan. */ if (parallel_workers <= 0) @@ -3299,7 +3300,8 @@ create_partial_bitmap_paths(PlannerInfo *root, RelOptInfo *rel, pages_fetched = compute_bitmap_pages(root, rel, bitmapqual, 1.0, NULL, NULL); - parallel_workers = compute_parallel_worker(rel, pages_fetched, -1); + parallel_workers = compute_parallel_worker(rel, pages_fetched, -1, + max_parallel_workers_per_gather); if (parallel_workers <= 0) return; @@ -3319,9 +3321,13 @@ create_partial_bitmap_paths(PlannerInfo *root, RelOptInfo *rel, * * "index_pages" is the number of pages from the index that we expect to scan, or * -1 if we don't expect to scan any. + * + * "max_workers" is caller's limit on the number of workers. This typically + * comes from a GUC. */ int -compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages) +compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages, + int max_workers) { int parallel_workers = 0; @@ -3392,10 +3398,8 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages) } } - /* - * In no case use more than max_parallel_workers_per_gather workers. 
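With the cap now supplied by the caller rather than hard-wired, the same sizing logic serves both planners. The two call styles introduced by this patch:

/* Query planning: cap by the per-Gather GUC, as before */
parallel_workers = compute_parallel_worker(rel, rel->pages, -1,
                                           max_parallel_workers_per_gather);

/* CREATE INDEX planning: cap by the new maintenance GUC instead */
parallel_workers = compute_parallel_worker(rel, heap_blocks, -1,
                                           max_parallel_maintenance_workers);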
- */ - parallel_workers = Min(parallel_workers, max_parallel_workers_per_gather); + /* In no case use more than caller supplied maximum number of workers */ + parallel_workers = Min(parallel_workers, max_workers); return parallel_workers; } diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 8679b14b29..29fea48ee2 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -682,7 +682,9 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count, * order. */ path->path.parallel_workers = compute_parallel_worker(baserel, - rand_heap_pages, index_pages); + rand_heap_pages, + index_pages, + max_parallel_workers_per_gather); /* * Fall out if workers can't be assigned for parallel scan, because in diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 2a4e22b6c8..740de4957d 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -5793,6 +5793,142 @@ plan_cluster_use_sort(Oid tableOid, Oid indexOid) return (seqScanAndSortPath.total_cost < indexScanPath->path.total_cost); } +/* + * plan_create_index_workers + * Use the planner to decide how many parallel worker processes + * CREATE INDEX should request for use + * + * tableOid is the table on which the index is to be built. indexOid is the + * OID of an index to be created or reindexed (which must be a btree index). + * + * Return value is the number of parallel worker processes to request. It + * may be unsafe to proceed if this is 0. Note that this does not include the + * leader participating as a worker (value is always a number of parallel + * worker processes). + * + * Note: caller had better already hold some type of lock on the table and + * index. + */ +int +plan_create_index_workers(Oid tableOid, Oid indexOid) +{ + PlannerInfo *root; + Query *query; + PlannerGlobal *glob; + RangeTblEntry *rte; + Relation heap; + Relation index; + RelOptInfo *rel; + int parallel_workers; + BlockNumber heap_blocks; + double reltuples; + double allvisfrac; + + /* Return immediately when parallelism disabled */ + if (max_parallel_maintenance_workers == 0) + return 0; + + /* Set up largely-dummy planner state */ + query = makeNode(Query); + query->commandType = CMD_SELECT; + + glob = makeNode(PlannerGlobal); + + root = makeNode(PlannerInfo); + root->parse = query; + root->glob = glob; + root->query_level = 1; + root->planner_cxt = CurrentMemoryContext; + root->wt_param_id = -1; + + /* + * Build a minimal RTE. + * + * Set the target's table to be an inheritance parent. This is a kludge + * that prevents problems within get_relation_info(), which does not + * expect that any IndexOptInfo is currently undergoing REINDEX. + */ + rte = makeNode(RangeTblEntry); + rte->rtekind = RTE_RELATION; + rte->relid = tableOid; + rte->relkind = RELKIND_RELATION; /* Don't be too picky. */ + rte->lateral = false; + rte->inh = true; + rte->inFromCl = true; + query->rtable = list_make1(rte); + + /* Set up RTE/RelOptInfo arrays */ + setup_simple_rel_arrays(root); + + /* Build RelOptInfo */ + rel = build_simple_rel(root, 1, NULL); + + heap = heap_open(tableOid, NoLock); + index = index_open(indexOid, NoLock); + + /* + * Determine if it's safe to proceed. + * + * Currently, parallel workers can't access the leader's temporary tables. + * Furthermore, any index predicate or index expressions must be parallel + * safe. 
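Besides these safety checks, the while loop near the end of this function caps the worker count so that every participant (workers plus the leader) keeps at least 32768 kB of the maintenance_work_mem budget. A worked example of its effect, with illustrative values:

int     parallel_workers = 2;           /* from compute_parallel_worker() */
int     maintenance_work_mem = 65536;   /* kB, i.e. 64MB; example value */

while (parallel_workers > 0 &&
       maintenance_work_mem / (parallel_workers + 1) < 32768)
    parallel_workers--;     /* 65536/3 = 21845 < 32768: drop to 1 worker;
                             * 65536/2 = 32768: settle on 1 worker */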
+ */
+    if (heap->rd_rel->relpersistence == RELPERSISTENCE_TEMP ||
+        !is_parallel_safe(root, (Node *) RelationGetIndexExpressions(index)) ||
+        !is_parallel_safe(root, (Node *) RelationGetIndexPredicate(index)))
+    {
+        parallel_workers = 0;
+        goto done;
+    }
+
+    /*
+     * If parallel_workers storage parameter is set for the table, accept that
+     * as the number of parallel worker processes to launch (though still cap
+     * at max_parallel_maintenance_workers).  Note that we deliberately do not
+     * consider any other factor when parallel_workers is set. (e.g., memory
+     * use by workers.)
+     */
+    if (rel->rel_parallel_workers != -1)
+    {
+        parallel_workers = Min(rel->rel_parallel_workers,
+                               max_parallel_maintenance_workers);
+        goto done;
+    }
+
+    /*
+     * Estimate heap relation size ourselves, since rel->pages cannot be
+     * trusted (heap RTE was marked as inheritance parent)
+     */
+    estimate_rel_size(heap, NULL, &heap_blocks, &reltuples, &allvisfrac);
+
+    /*
+     * Determine number of workers to scan the heap relation using generic
+     * model
+     */
+    parallel_workers = compute_parallel_worker(rel, heap_blocks, -1,
+                                               max_parallel_maintenance_workers);
+
+    /*
+     * Cap workers based on available maintenance_work_mem as needed.
+     *
+     * Note that each tuplesort participant receives an even share of the
+     * total maintenance_work_mem budget.  Aim to leave participants
+     * (including the leader as a participant) with no less than 32MB of
+     * memory.  This leaves cases where maintenance_work_mem is set to 64MB
+     * immediately past the threshold of being capable of launching a single
+     * parallel worker to sort.
+     */
+    while (parallel_workers > 0 &&
+           maintenance_work_mem / (parallel_workers + 1) < 32768L)
+        parallel_workers--;
+
+done:
+    index_close(index, NoLock);
+    heap_close(heap, NoLock);
+
+    return parallel_workers;
+}
+
 /*
  * get_partitioned_child_rels
  *     Returns a list of the RT indexes of the partitioned child relations
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 605b1832be..96ba216387 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -3655,6 +3655,9 @@ pgstat_get_wait_ipc(WaitEventIPC w)
         case WAIT_EVENT_PARALLEL_BITMAP_SCAN:
             event_name = "ParallelBitmapScan";
             break;
+        case WAIT_EVENT_PARALLEL_CREATE_INDEX_SCAN:
+            event_name = "ParallelCreateIndexScan";
+            break;
         case WAIT_EVENT_PROCARRAY_GROUP_UPDATE:
             event_name = "ProcArrayGroupUpdate";
             break;
diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c
index 4de6121ab9..c058c3fc43 100644
--- a/src/backend/storage/file/buffile.c
+++ b/src/backend/storage/file/buffile.c
@@ -271,7 +271,7 @@ BufFileCreateShared(SharedFileSet *fileset, const char *name)
  * Open a file that was previously created in another backend (or this one)
  * with BufFileCreateShared in the same SharedFileSet using the same name.
  * The backend that created the file must have called BufFileClose() or
- * BufFileExport() to make sure that it is ready to be opened by other
+ * BufFileExportShared() to make sure that it is ready to be opened by other
  * backends and render it read-only.
  */
 BufFile *
@@ -800,3 +800,62 @@ BufFileTellBlock(BufFile *file)
 }
 
 #endif
+
+/*
+ * Return the current file size.  Counts any holes left behind by
+ * BufFileAppend as part of the size.
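Concretely: segment files hold MAX_PHYSICAL_FILESIZE (1GB) apiece, so a BufFile of three segments whose last segment contains 10MB reports 2GB + 10MB, counting in full any alignment holes an earlier append left behind. The same arithmetic as a sketch, with illustrative numbers:

/* Illustrative restatement of the size computation */
off_t   max_seg = (off_t) 1073741824;   /* MAX_PHYSICAL_FILESIZE, 1GB */
int     numFiles = 3;
off_t   last_seg = (off_t) 10485760;    /* bytes in final segment: 10MB */
off_t   size = (numFiles - 1) * max_seg + last_seg; /* = 2GB + 10MB */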
+ */ +off_t +BufFileSize(BufFile *file) +{ + return ((file->numFiles - 1) * (off_t) MAX_PHYSICAL_FILESIZE) + + FileGetSize(file->files[file->numFiles - 1]); +} + +/* + * Append the contents of source file (managed within shared fileset) to + * end of target file (managed within same shared fileset). + * + * Note that operation subsumes ownership of underlying resources from + * "source". Caller should never call BufFileClose against source having + * called here first. Resource owners for source and target must match, + * too. + * + * This operation works by manipulating lists of segment files, so the + * file content is always appended at a MAX_PHYSICAL_FILESIZE-aligned + * boundary, typically creating empty holes before the boundary. These + * areas do not contain any interesting data, and cannot be read from by + * caller. + * + * Returns the block number within target where the contents of source + * begins. Caller should apply this as an offset when working off block + * positions that are in terms of the original BufFile space. + */ +long +BufFileAppend(BufFile *target, BufFile *source) +{ + long startBlock = target->numFiles * BUFFILE_SEG_SIZE; + int newNumFiles = target->numFiles + source->numFiles; + int i; + + Assert(target->fileset != NULL); + Assert(source->readOnly); + Assert(!source->dirty); + Assert(source->fileset != NULL); + + if (target->resowner != source->resowner) + elog(ERROR, "could not append BufFile with non-matching resource owner"); + + target->files = (File *) + repalloc(target->files, sizeof(File) * newNumFiles); + target->offsets = (off_t *) + repalloc(target->offsets, sizeof(off_t) * newNumFiles); + for (i = target->numFiles; i < newNumFiles; i++) + { + target->files[i] = source->files[i - target->numFiles]; + target->offsets[i] = 0L; + } + target->numFiles = newNumFiles; + + return startBlock; +} diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c index 71516a9a5a..2a18e94ff4 100644 --- a/src/backend/storage/file/fd.c +++ b/src/backend/storage/file/fd.c @@ -2262,6 +2262,16 @@ FileGetRawMode(File file) return VfdCache[file].fileMode; } +/* + * FileGetSize - returns the size of file + */ +off_t +FileGetSize(File file) +{ + Assert(FileIsValid(file)); + return VfdCache[file].fileSize; +} + /* * Make room for another allocatedDescs[] array entry if needed and possible. * Returns true if an array element is available. diff --git a/src/backend/utils/adt/orderedsetaggs.c b/src/backend/utils/adt/orderedsetaggs.c index 79dbfd1a05..63d9c67027 100644 --- a/src/backend/utils/adt/orderedsetaggs.c +++ b/src/backend/utils/adt/orderedsetaggs.c @@ -291,6 +291,7 @@ ordered_set_startup(FunctionCallInfo fcinfo, bool use_tuples) qstate->sortCollations, qstate->sortNullsFirsts, work_mem, + NULL, qstate->rescan_needed); else osastate->sortstate = tuplesort_begin_datum(qstate->sortColType, @@ -298,6 +299,7 @@ ordered_set_startup(FunctionCallInfo fcinfo, bool use_tuples) qstate->sortCollation, qstate->sortNullsFirst, work_mem, + NULL, qstate->rescan_needed); osastate->number_of_rows = 0; diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c index 54fa4a389e..446040d816 100644 --- a/src/backend/utils/init/globals.c +++ b/src/backend/utils/init/globals.c @@ -112,6 +112,7 @@ bool enableFsync = true; bool allowSystemTableMods = false; int work_mem = 1024; int maintenance_work_mem = 16384; +int max_parallel_maintenance_workers = 2; /* * Primary determinants of sizes of shared-memory structures. 
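Returning to BufFileAppend() above: because concatenation works at whole-segment granularity, the block number it returns is always segment-aligned, and a block at position b within source is afterwards read at startBlock + b. For example, with illustrative numbers:

/* A target with 2 segments absorbs a 1-segment source */
long    startBlock = 2 * BUFFILE_SEG_SIZE;  /* source's block 0 maps here */
int     newNumFiles = 2 + 1;                /* target now spans 3 segments */
/* logtape.c records startBlock as the tape's offset and adds it on reads,
 * as the logtape.c changes below show */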
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 5884fa905e..87ba67661a 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -2734,6 +2734,16 @@ static struct config_int ConfigureNamesInt[] = check_autovacuum_max_workers, NULL, NULL }, + { + {"max_parallel_maintenance_workers", PGC_USERSET, RESOURCES_ASYNCHRONOUS, + gettext_noop("Sets the maximum number of parallel processes per maintenance operation."), + NULL + }, + &max_parallel_maintenance_workers, + 2, 0, 1024, + NULL, NULL, NULL + }, + { {"max_parallel_workers_per_gather", PGC_USERSET, RESOURCES_ASYNCHRONOUS, gettext_noop("Sets the maximum number of parallel processes per executor node."), diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample index abffde6b2b..9a3535559e 100644 --- a/src/backend/utils/misc/postgresql.conf.sample +++ b/src/backend/utils/misc/postgresql.conf.sample @@ -163,10 +163,11 @@ #effective_io_concurrency = 1 # 1-1000; 0 disables prefetching #max_worker_processes = 8 # (change requires restart) +#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers #max_parallel_workers_per_gather = 2 # taken from max_parallel_workers #parallel_leader_participation = on #max_parallel_workers = 8 # maximum number of max_worker_processes that - # can be used in parallel queries + # can be used in parallel operations #old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate # (change requires restart) #backend_flush_after = 0 # measured in pages, 0 disables diff --git a/src/backend/utils/probes.d b/src/backend/utils/probes.d index 560d8ccda3..ad06e8e2ea 100644 --- a/src/backend/utils/probes.d +++ b/src/backend/utils/probes.d @@ -52,7 +52,7 @@ provider postgresql { probe query__done(const char *); probe statement__status(const char *); - probe sort__start(int, bool, int, int, bool); + probe sort__start(int, bool, int, int, bool, int); probe sort__done(bool, long); probe buffer__read__start(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool); diff --git a/src/backend/utils/sort/logtape.c b/src/backend/utils/sort/logtape.c index 2d07b3d3f5..6b7c10bcfc 100644 --- a/src/backend/utils/sort/logtape.c +++ b/src/backend/utils/sort/logtape.c @@ -64,6 +64,14 @@ * care that all calls for a single LogicalTapeSet are made in the same * palloc context. * + * To support parallel sort operations involving coordinated callers to + * tuplesort.c routines across multiple workers, it is necessary to + * concatenate each worker BufFile/tapeset into one single logical tapeset + * managed by the leader. Workers should have produced one final + * materialized tape (their entire output) when this happens in leader. + * There will always be the same number of runs as input tapes, and the same + * number of input tapes as participants (worker Tuplesortstates). + * * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * @@ -76,6 +84,7 @@ #include "postgres.h" #include "storage/buffile.h" +#include "utils/builtins.h" #include "utils/logtape.h" #include "utils/memutils.h" @@ -129,16 +138,21 @@ typedef struct LogicalTape * a frozen tape. (When reading from an unfrozen tape, we use a larger * read buffer that holds multiple blocks, so the "current" block is * ambiguous.) + * + * When concatenation of worker tape BufFiles is performed, an offset to + * the first block in the unified BufFile space is applied during reads. 
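+ *
+ * (Writes never need this offset: only worker tapes are ever written, and
+ * each begins at block zero of its own BufFile.  LogicalTapeWrite asserts
+ * this below.)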
*/ long firstBlockNumber; long curBlockNumber; long nextBlockNumber; + long offsetBlockNumber; /* * Buffer for current data block(s). */ char *buffer; /* physical buffer (separately palloc'd) */ int buffer_size; /* allocated size of the buffer */ + int max_size; /* highest useful, safe buffer_size */ int pos; /* next read/write position in buffer */ int nbytes; /* total # of valid bytes in buffer */ } LogicalTape; @@ -159,10 +173,13 @@ struct LogicalTapeSet * by ltsGetFreeBlock(), and it is always greater than or equal to * nBlocksWritten. Blocks between nBlocksAllocated and nBlocksWritten are * blocks that have been allocated for a tape, but have not been written - * to the underlying file yet. + * to the underlying file yet. nHoleBlocks tracks the total number of + * blocks that are in unused holes between worker spaces following BufFile + * concatenation. */ long nBlocksAllocated; /* # of blocks allocated */ long nBlocksWritten; /* # of blocks used in underlying file */ + long nHoleBlocks; /* # of "hole" blocks left */ /* * We store the numbers of recycled-and-available blocks in freeBlocks[]. @@ -192,6 +209,8 @@ static void ltsWriteBlock(LogicalTapeSet *lts, long blocknum, void *buffer); static void ltsReadBlock(LogicalTapeSet *lts, long blocknum, void *buffer); static long ltsGetFreeBlock(LogicalTapeSet *lts); static void ltsReleaseBlock(LogicalTapeSet *lts, long blocknum); +static void ltsConcatWorkerTapes(LogicalTapeSet *lts, TapeShare *shared, + SharedFileSet *fileset); /* @@ -213,6 +232,11 @@ ltsWriteBlock(LogicalTapeSet *lts, long blocknum, void *buffer) * previous tape isn't flushed to disk until the end of the sort, so you * get one-block hole, where the last block of the previous tape will * later go. + * + * Note that BufFile concatenation can leave "holes" in BufFile between + * worker-owned block ranges. These are tracked for reporting purposes + * only. We never read from nor write to these hole blocks, and so they + * are not considered here. */ while (blocknum > lts->nBlocksWritten) { @@ -267,15 +291,18 @@ ltsReadFillBuffer(LogicalTapeSet *lts, LogicalTape *lt) do { char *thisbuf = lt->buffer + lt->nbytes; + long datablocknum = lt->nextBlockNumber; /* Fetch next block number */ - if (lt->nextBlockNumber == -1L) + if (datablocknum == -1L) break; /* EOF */ + /* Apply worker offset, needed for leader tapesets */ + datablocknum += lt->offsetBlockNumber; /* Read the block */ - ltsReadBlock(lts, lt->nextBlockNumber, (void *) thisbuf); + ltsReadBlock(lts, datablocknum, (void *) thisbuf); if (!lt->frozen) - ltsReleaseBlock(lts, lt->nextBlockNumber); + ltsReleaseBlock(lts, datablocknum); lt->curBlockNumber = lt->nextBlockNumber; lt->nbytes += TapeBlockGetNBytes(thisbuf); @@ -370,13 +397,116 @@ ltsReleaseBlock(LogicalTapeSet *lts, long blocknum) lts->blocksSorted = false; } +/* + * Claim ownership of a set of logical tapes from existing shared BufFiles. + * + * Caller should be leader process. Though tapes are marked as frozen in + * workers, they are not frozen when opened within leader, since unfrozen tapes + * use a larger read buffer. (Frozen tapes have smaller read buffer, optimized + * for random access.) 
+ */ +static void +ltsConcatWorkerTapes(LogicalTapeSet *lts, TapeShare *shared, + SharedFileSet *fileset) +{ + LogicalTape *lt = NULL; + long tapeblocks; + long nphysicalblocks = 0L; + int i; + + /* Should have at least one worker tape, plus leader's tape */ + Assert(lts->nTapes >= 2); + + /* + * Build concatenated view of all BufFiles, remembering the block number + * where each source file begins. No changes are needed for leader/last + * tape. + */ + for (i = 0; i < lts->nTapes - 1; i++) + { + char filename[MAXPGPATH]; + BufFile *file; + + lt = <s->tapes[i]; + + pg_itoa(i, filename); + file = BufFileOpenShared(fileset, filename); + + /* + * Stash first BufFile, and concatenate subsequent BufFiles to that. + * Store block offset into each tape as we go. + */ + lt->firstBlockNumber = shared[i].firstblocknumber; + if (i == 0) + { + lts->pfile = file; + lt->offsetBlockNumber = 0L; + } + else + { + lt->offsetBlockNumber = BufFileAppend(lts->pfile, file); + } + /* Don't allocate more for read buffer than could possibly help */ + lt->max_size = Min(MaxAllocSize, shared[i].buffilesize); + tapeblocks = shared[i].buffilesize / BLCKSZ; + nphysicalblocks += tapeblocks; + } + + /* + * Set # of allocated blocks, as well as # blocks written. Use extent of + * new BufFile space (from 0 to end of last worker's tape space) for this. + * Allocated/written blocks should include space used by holes left + * between concatenated BufFiles. + */ + lts->nBlocksAllocated = lt->offsetBlockNumber + tapeblocks; + lts->nBlocksWritten = lts->nBlocksAllocated; + + /* + * Compute number of hole blocks so that we can later work backwards, and + * instrument number of physical blocks. We don't simply use physical + * blocks directly for instrumentation because this would break if we ever + * subsequently wrote to worker tape. + * + * Working backwards like this keeps our options open. If shared BufFiles + * ever support being written to post-export, logtape.c can automatically + * take advantage of that. We'd then support writing to the leader tape + * while recycling space from worker tapes, because the leader tape has a + * zero offset (write routines won't need to have extra logic to apply an + * offset). + * + * The only thing that currently prevents writing to the leader tape from + * working is the fact that BufFiles opened using BufFileOpenShared() are + * read-only by definition, but that could be changed if it seemed + * worthwhile. For now, writing to the leader tape will raise a "Bad file + * descriptor" error, so tuplesort must avoid writing to the leader tape + * altogether. + */ + lts->nHoleBlocks = lts->nBlocksAllocated - nphysicalblocks; +} + /* * Create a set of logical tapes in a temporary underlying file. * - * Each tape is initialized in write state. + * Each tape is initialized in write state. Serial callers pass ntapes, + * NULL argument for shared, and -1 for worker. Parallel worker callers + * pass ntapes, a shared file handle, NULL shared argument, and their own + * worker number. Leader callers, which claim shared worker tapes here, + * must supply non-sentinel values for all arguments except worker number, + * which should be -1. + * + * Leader caller is passing back an array of metadata each worker captured + * when LogicalTapeFreeze() was called for their final result tapes. Passed + * tapes array is actually sized ntapes - 1, because it includes only + * worker tapes, whereas leader requires its own leader tape. 
Note that we + * rely on the assumption that reclaimed worker tapes will only be read + * from once by leader, and never written to again (tapes are initialized + * for writing, but that's only to be consistent). Leader may not write to + * its own tape purely due to a restriction in the shared buffile + * infrastructure that may be lifted in the future. */ LogicalTapeSet * -LogicalTapeSetCreate(int ntapes) +LogicalTapeSetCreate(int ntapes, TapeShare *shared, SharedFileSet *fileset, + int worker) { LogicalTapeSet *lts; LogicalTape *lt; @@ -388,9 +518,9 @@ LogicalTapeSetCreate(int ntapes) Assert(ntapes > 0); lts = (LogicalTapeSet *) palloc(offsetof(LogicalTapeSet, tapes) + ntapes * sizeof(LogicalTape)); - lts->pfile = BufFileCreateTemp(false); lts->nBlocksAllocated = 0L; lts->nBlocksWritten = 0L; + lts->nHoleBlocks = 0L; lts->forgetFreeSpace = false; lts->blocksSorted = true; /* a zero-length array is sorted ... */ lts->freeBlocksLen = 32; /* reasonable initial guess */ @@ -412,11 +542,36 @@ LogicalTapeSetCreate(int ntapes) lt->dirty = false; lt->firstBlockNumber = -1L; lt->curBlockNumber = -1L; + lt->nextBlockNumber = -1L; + lt->offsetBlockNumber = 0L; lt->buffer = NULL; lt->buffer_size = 0; + /* palloc() larger than MaxAllocSize would fail */ + lt->max_size = MaxAllocSize; lt->pos = 0; lt->nbytes = 0; } + + /* + * Create temp BufFile storage as required. + * + * Leader concatenates worker tapes, which requires special adjustment to + * final tapeset data. Things are simpler for the worker case and the + * serial case, though. They are generally very similar -- workers use a + * shared fileset, whereas serial sorts use a conventional serial BufFile. + */ + if (shared) + ltsConcatWorkerTapes(lts, shared, fileset); + else if (fileset) + { + char filename[MAXPGPATH]; + + pg_itoa(worker, filename); + lts->pfile = BufFileCreateShared(fileset, filename); + } + else + lts->pfile = BufFileCreateTemp(false); + return lts; } @@ -470,6 +625,7 @@ LogicalTapeWrite(LogicalTapeSet *lts, int tapenum, Assert(tapenum >= 0 && tapenum < lts->nTapes); lt = <s->tapes[tapenum]; Assert(lt->writing); + Assert(lt->offsetBlockNumber == 0L); /* Allocate data buffer and first block on first write */ if (lt->buffer == NULL) @@ -566,12 +722,9 @@ LogicalTapeRewindForRead(LogicalTapeSet *lts, int tapenum, size_t buffer_size) if (buffer_size < BLCKSZ) buffer_size = BLCKSZ; - /* - * palloc() larger than MaxAllocSize would fail (a multi-gigabyte - * buffer is unlikely to be helpful, anyway) - */ - if (buffer_size > MaxAllocSize) - buffer_size = MaxAllocSize; + /* palloc() larger than max_size is unlikely to be helpful */ + if (buffer_size > lt->max_size) + buffer_size = lt->max_size; /* round down to BLCKSZ boundary */ buffer_size -= buffer_size % BLCKSZ; @@ -698,15 +851,22 @@ LogicalTapeRead(LogicalTapeSet *lts, int tapenum, * tape is rewound (after rewind is too late!). It performs a rewind * and switch to read mode "for free". An immediately following rewind- * for-read call is OK but not necessary. + * + * share output argument is set with details of storage used for tape after + * freezing, which may be passed to LogicalTapeSetCreate within leader + * process later. This metadata is only of interest to worker callers + * freezing their final output for leader (single materialized tape). + * Serial sorts should set share to NULL. 
*/ void -LogicalTapeFreeze(LogicalTapeSet *lts, int tapenum) +LogicalTapeFreeze(LogicalTapeSet *lts, int tapenum, TapeShare *share) { LogicalTape *lt; Assert(tapenum >= 0 && tapenum < lts->nTapes); lt = <s->tapes[tapenum]; Assert(lt->writing); + Assert(lt->offsetBlockNumber == 0L); /* * Completion of a write phase. Flush last partial data block, and rewind @@ -749,6 +909,14 @@ LogicalTapeFreeze(LogicalTapeSet *lts, int tapenum) else lt->nextBlockNumber = TapeBlockGetTrailer(lt->buffer)->next; lt->nbytes = TapeBlockGetNBytes(lt->buffer); + + /* Handle extra steps when caller is to share its tapeset */ + if (share) + { + BufFileExportShared(lts->pfile); + share->firstblocknumber = lt->firstBlockNumber; + share->buffilesize = BufFileSize(lts->pfile); + } } /* @@ -874,6 +1042,7 @@ LogicalTapeTell(LogicalTapeSet *lts, int tapenum, Assert(tapenum >= 0 && tapenum < lts->nTapes); lt = <s->tapes[tapenum]; + Assert(lt->offsetBlockNumber == 0L); /* With a larger buffer, 'pos' wouldn't be the same as offset within page */ Assert(lt->buffer_size == BLCKSZ); @@ -888,5 +1057,5 @@ LogicalTapeTell(LogicalTapeSet *lts, int tapenum, long LogicalTapeSetBlocks(LogicalTapeSet *lts) { - return lts->nBlocksAllocated; + return lts->nBlocksAllocated - lts->nHoleBlocks; } diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c index eecc66cafa..041bdc2fa7 100644 --- a/src/backend/utils/sort/tuplesort.c +++ b/src/backend/utils/sort/tuplesort.c @@ -74,6 +74,14 @@ * above. Nonetheless, with large workMem we can have many tapes (but not * too many -- see the comments in tuplesort_merge_order). * + * This module supports parallel sorting. Parallel sorts involve coordination + * among one or more worker processes, and a leader process, each with its own + * tuplesort state. The leader process (or, more accurately, the + * Tuplesortstate associated with a leader process) creates a full tapeset + * consisting of worker tapes with one run to merge; a run for every + * worker process. This is then merged. Worker processes are guaranteed to + * produce exactly one output run from their partial input. + * * * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California @@ -113,6 +121,10 @@ #define DATUM_SORT 2 #define CLUSTER_SORT 3 +/* Sort parallel code from state for sort__start probes */ +#define PARALLEL_SORT(state) ((state)->shared == NULL ? 0 : \ + (state)->worker >= 0 ? 1 : 2) + /* GUC variables */ #ifdef TRACE_SORT bool trace_sort = false; @@ -374,6 +386,25 @@ struct Tuplesortstate int markpos_offset; /* saved "current", or offset in tape block */ bool markpos_eof; /* saved "eof_reached" */ + /* + * These variables are used during parallel sorting. + * + * worker is our worker identifier. Follows the general convention that + * -1 value relates to a leader tuplesort, and values >= 0 worker + * tuplesorts. (-1 can also be a serial tuplesort.) + * + * shared is mutable shared memory state, which is used to coordinate + * parallel sorts. + * + * nParticipants is the number of worker Tuplesortstates known by the + * leader to have actually been launched, which implies that they must + * finish a run leader can merge. Typically includes a worker state held + * by the leader process itself. Set in the leader Tuplesortstate only. + */ + int worker; + Sharedsort *shared; + int nParticipants; + /* * The sortKeys variable is used by every case other than the hash index * case; it is set by tuplesort_begin_xxx. 
tupDesc is only used by the @@ -435,6 +466,39 @@ struct Tuplesortstate #endif }; +/* + * Private mutable state of tuplesort-parallel-operation. This is allocated + * in shared memory. + */ +struct Sharedsort +{ + /* mutex protects all fields prior to tapes */ + slock_t mutex; + + /* + * currentWorker generates ordinal identifier numbers for parallel sort + * workers. These start from 0, and are always gapless. + * + * Workers increment workersFinished to indicate having finished. If this + * is equal to state.nParticipants within the leader, leader is ready to + * merge worker runs. + */ + int currentWorker; + int workersFinished; + + /* Temporary file space */ + SharedFileSet fileset; + + /* Size of tapes flexible array */ + int nTapes; + + /* + * Tapes array used by workers to report back information needed by the + * leader to concatenate all worker tapes into one for merging + */ + TapeShare tapes[FLEXIBLE_ARRAY_MEMBER]; +}; + /* * Is the given tuple allocated from the slab memory arena? */ @@ -465,6 +529,9 @@ struct Tuplesortstate #define LACKMEM(state) ((state)->availMem < 0 && !(state)->slabAllocatorUsed) #define USEMEM(state,amt) ((state)->availMem -= (amt)) #define FREEMEM(state,amt) ((state)->availMem += (amt)) +#define SERIAL(state) ((state)->shared == NULL) +#define WORKER(state) ((state)->shared && (state)->worker != -1) +#define LEADER(state) ((state)->shared && (state)->worker == -1) /* * NOTES about on-tape representation of tuples: @@ -521,10 +588,13 @@ struct Tuplesortstate } while(0) -static Tuplesortstate *tuplesort_begin_common(int workMem, bool randomAccess); +static Tuplesortstate *tuplesort_begin_common(int workMem, + SortCoordinate coordinate, + bool randomAccess); static void puttuple_common(Tuplesortstate *state, SortTuple *tuple); static bool consider_abort_common(Tuplesortstate *state); -static void inittapes(Tuplesortstate *state); +static void inittapes(Tuplesortstate *state, bool mergeruns); +static void inittapestate(Tuplesortstate *state, int maxTapes); static void selectnewtape(Tuplesortstate *state); static void init_slab_allocator(Tuplesortstate *state, int numSlots); static void mergeruns(Tuplesortstate *state); @@ -572,6 +642,10 @@ static void writetup_datum(Tuplesortstate *state, int tapenum, SortTuple *stup); static void readtup_datum(Tuplesortstate *state, SortTuple *stup, int tapenum, unsigned int len); +static int worker_get_identifier(Tuplesortstate *state); +static void worker_freeze_result_tape(Tuplesortstate *state); +static void worker_nomergeruns(Tuplesortstate *state); +static void leader_takeover_tapes(Tuplesortstate *state); static void free_sort_tuple(Tuplesortstate *state, SortTuple *stup); /* @@ -604,13 +678,18 @@ static void free_sort_tuple(Tuplesortstate *state, SortTuple *stup); */ static Tuplesortstate * -tuplesort_begin_common(int workMem, bool randomAccess) +tuplesort_begin_common(int workMem, SortCoordinate coordinate, + bool randomAccess) { Tuplesortstate *state; MemoryContext sortcontext; MemoryContext tuplecontext; MemoryContext oldcontext; + /* See leader_takeover_tapes() remarks on randomAccess support */ + if (coordinate && randomAccess) + elog(ERROR, "random access disallowed under parallel sort"); + /* * Create a working memory context for this sort operation. All data * needed by the sort will live inside this context. 
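
(Aside, not part of the patch: the SERIAL/WORKER/LEADER macros above split a Tuplesortstate three ways using only the shared and worker fields. The toy program below, built around an invented MockSort struct, exercises the same predicates.)

    #include <stdio.h>

    /*
     * Mock of the two fields the macros inspect: shared == NULL means a
     * serial sort; otherwise worker >= 0 marks a worker and worker == -1
     * marks the leader.
     */
    struct MockSort
    {
        void   *shared;     /* stands in for Sharedsort * */
        int     worker;     /* -1 for leader and serial sorts */
    };

    #define SERIAL(s)   ((s)->shared == NULL)
    #define WORKER(s)   ((s)->shared && (s)->worker != -1)
    #define LEADER(s)   ((s)->shared && (s)->worker == -1)

    int
    main(void)
    {
        int     dummy;
        struct MockSort serial = {NULL, -1};
        struct MockSort worker = {&dummy, 2};
        struct MockSort leader = {&dummy, -1};

        printf("%d %d %d\n", SERIAL(&serial), WORKER(&worker),
               LEADER(&leader));    /* prints: 1 1 1 */
        return 0;
    }
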
@@ -650,7 +729,14 @@ tuplesort_begin_common(int workMem, bool randomAccess) state->bounded = false; state->tuples = true; state->boundUsed = false; - state->allowedMem = workMem * (int64) 1024; + + /* + * workMem is forced to be at least 64KB, the current minimum valid value + * for the work_mem GUC. This is a defense against parallel sort callers + * that divide out memory among many workers in a way that leaves each + * with very little memory. + */ + state->allowedMem = Max(workMem, 64) * (int64) 1024; state->availMem = state->allowedMem; state->sortcontext = sortcontext; state->tuplecontext = tuplecontext; @@ -684,6 +770,33 @@ tuplesort_begin_common(int workMem, bool randomAccess) state->result_tape = -1; /* flag that result tape has not been formed */ + /* + * Initialize parallel-related state based on coordination information + * from caller + */ + if (!coordinate) + { + /* Serial sort */ + state->shared = NULL; + state->worker = -1; + state->nParticipants = -1; + } + else if (coordinate->isWorker) + { + /* Parallel worker produces exactly one final run from all input */ + state->shared = coordinate->sharedsort; + state->worker = worker_get_identifier(state); + state->nParticipants = -1; + } + else + { + /* Parallel leader state only used for final merge */ + state->shared = coordinate->sharedsort; + state->worker = -1; + state->nParticipants = coordinate->nParticipants; + Assert(state->nParticipants >= 1); + } + MemoryContextSwitchTo(oldcontext); return state; @@ -694,9 +807,10 @@ tuplesort_begin_heap(TupleDesc tupDesc, int nkeys, AttrNumber *attNums, Oid *sortOperators, Oid *sortCollations, bool *nullsFirstFlags, - int workMem, bool randomAccess) + int workMem, SortCoordinate coordinate, bool randomAccess) { - Tuplesortstate *state = tuplesort_begin_common(workMem, randomAccess); + Tuplesortstate *state = tuplesort_begin_common(workMem, coordinate, + randomAccess); MemoryContext oldcontext; int i; @@ -717,7 +831,8 @@ tuplesort_begin_heap(TupleDesc tupDesc, false, /* no unique check */ nkeys, workMem, - randomAccess); + randomAccess, + PARALLEL_SORT(state)); state->comparetup = comparetup_heap; state->copytup = copytup_heap; @@ -764,9 +879,11 @@ tuplesort_begin_heap(TupleDesc tupDesc, Tuplesortstate * tuplesort_begin_cluster(TupleDesc tupDesc, Relation indexRel, - int workMem, bool randomAccess) + int workMem, + SortCoordinate coordinate, bool randomAccess) { - Tuplesortstate *state = tuplesort_begin_common(workMem, randomAccess); + Tuplesortstate *state = tuplesort_begin_common(workMem, coordinate, + randomAccess); ScanKey indexScanKey; MemoryContext oldcontext; int i; @@ -789,7 +906,8 @@ tuplesort_begin_cluster(TupleDesc tupDesc, false, /* no unique check */ state->nKeys, workMem, - randomAccess); + randomAccess, + PARALLEL_SORT(state)); state->comparetup = comparetup_cluster; state->copytup = copytup_cluster; @@ -857,9 +975,12 @@ Tuplesortstate * tuplesort_begin_index_btree(Relation heapRel, Relation indexRel, bool enforceUnique, - int workMem, bool randomAccess) + int workMem, + SortCoordinate coordinate, + bool randomAccess) { - Tuplesortstate *state = tuplesort_begin_common(workMem, randomAccess); + Tuplesortstate *state = tuplesort_begin_common(workMem, coordinate, + randomAccess); ScanKey indexScanKey; MemoryContext oldcontext; int i; @@ -880,7 +1001,8 @@ tuplesort_begin_index_btree(Relation heapRel, enforceUnique, state->nKeys, workMem, - randomAccess); + randomAccess, + PARALLEL_SORT(state)); state->comparetup = comparetup_index_btree; state->copytup = copytup_index; @@ 
-934,9 +1056,12 @@ tuplesort_begin_index_hash(Relation heapRel, uint32 high_mask, uint32 low_mask, uint32 max_buckets, - int workMem, bool randomAccess) + int workMem, + SortCoordinate coordinate, + bool randomAccess) { - Tuplesortstate *state = tuplesort_begin_common(workMem, randomAccess); + Tuplesortstate *state = tuplesort_begin_common(workMem, coordinate, + randomAccess); MemoryContext oldcontext; oldcontext = MemoryContextSwitchTo(state->sortcontext); @@ -973,10 +1098,11 @@ tuplesort_begin_index_hash(Relation heapRel, Tuplesortstate * tuplesort_begin_datum(Oid datumType, Oid sortOperator, Oid sortCollation, - bool nullsFirstFlag, - int workMem, bool randomAccess) + bool nullsFirstFlag, int workMem, + SortCoordinate coordinate, bool randomAccess) { - Tuplesortstate *state = tuplesort_begin_common(workMem, randomAccess); + Tuplesortstate *state = tuplesort_begin_common(workMem, coordinate, + randomAccess); MemoryContext oldcontext; int16 typlen; bool typbyval; @@ -996,7 +1122,8 @@ tuplesort_begin_datum(Oid datumType, Oid sortOperator, Oid sortCollation, false, /* no unique check */ 1, workMem, - randomAccess); + randomAccess, + PARALLEL_SORT(state)); state->comparetup = comparetup_datum; state->copytup = copytup_datum; @@ -1054,7 +1181,7 @@ tuplesort_begin_datum(Oid datumType, Oid sortOperator, Oid sortCollation, * delayed calls at the moment.) * * This is a hint only. The tuplesort may still return more tuples than - * requested. + * requested. Parallel leader tuplesorts will always ignore the hint. */ void tuplesort_set_bound(Tuplesortstate *state, int64 bound) @@ -1063,6 +1190,7 @@ tuplesort_set_bound(Tuplesortstate *state, int64 bound) Assert(state->status == TSS_INITIAL); Assert(state->memtupcount == 0); Assert(!state->bounded); + Assert(!WORKER(state)); #ifdef DEBUG_BOUNDED_SORT /* Honor GUC setting that disables the feature (for easy testing) */ @@ -1070,6 +1198,10 @@ tuplesort_set_bound(Tuplesortstate *state, int64 bound) return; #endif + /* Parallel leader ignores hint */ + if (LEADER(state)) + return; + /* We want to be able to compute bound * 2, so limit the setting */ if (bound > (int64) (INT_MAX / 2)) return; @@ -1128,11 +1260,13 @@ tuplesort_end(Tuplesortstate *state) if (trace_sort) { if (state->tapeset) - elog(LOG, "external sort ended, %ld disk blocks used: %s", - spaceUsed, pg_rusage_show(&state->ru_start)); + elog(LOG, "%s of %d ended, %ld disk blocks used: %s", + SERIAL(state) ? "external sort" : "parallel external sort", + state->worker, spaceUsed, pg_rusage_show(&state->ru_start)); else - elog(LOG, "internal sort ended, %ld KB used: %s", - spaceUsed, pg_rusage_show(&state->ru_start)); + elog(LOG, "%s of %d ended, %ld KB used: %s", + SERIAL(state) ? "internal sort" : "unperformed parallel sort", + state->worker, spaceUsed, pg_rusage_show(&state->ru_start)); } TRACE_POSTGRESQL_SORT_DONE(state->tapeset != NULL, spaceUsed); @@ -1503,6 +1637,8 @@ tuplesort_putdatum(Tuplesortstate *state, Datum val, bool isNull) static void puttuple_common(Tuplesortstate *state, SortTuple *tuple) { + Assert(!LEADER(state)); + switch (state->status) { case TSS_INITIAL: @@ -1556,7 +1692,7 @@ puttuple_common(Tuplesortstate *state, SortTuple *tuple) /* * Nope; time to switch to tape-based operation. */ - inittapes(state); + inittapes(state, true); /* * Dump all tuples. 
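
(Aside, not part of the patch: tuplesort_begin_common() above clamps workMem to the 64KB work_mem floor because parallel callers divide one budget among several participants. The sketch below shows a hypothetical split; all names are invented.)

    #include <stdio.h>
    #include <stdint.h>

    #define Max(x, y)   ((x) > (y) ? (x) : (y))

    int
    main(void)
    {
        int     budget_kb = 100;    /* deliberately tiny total budget */
        int     nparticipants = 3;
        int     per_sort_kb = budget_kb / nparticipants;    /* 33 kB each */
        int64_t allowedMem = Max(per_sort_kb, 64) * (int64_t) 1024;

        printf("%lld\n", (long long) allowedMem);   /* 65536: floored to 64kB */
        return 0;
    }
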
@@ -1658,8 +1794,8 @@ tuplesort_performsort(Tuplesortstate *state)
 
 #ifdef TRACE_SORT
 	if (trace_sort)
-		elog(LOG, "performsort starting: %s",
-			 pg_rusage_show(&state->ru_start));
+		elog(LOG, "performsort of %d starting: %s",
+			 state->worker, pg_rusage_show(&state->ru_start));
 #endif
 
 	switch (state->status)
@@ -1668,14 +1804,39 @@
 
 			/*
 			 * We were able to accumulate all the tuples within the allowed
-			 * amount of memory.  Just qsort 'em and we're done.
+			 * amount of memory, or the leader is taking over the worker tapes.
 			 */
-			tuplesort_sort_memtuples(state);
+			if (SERIAL(state))
+			{
+				/* Just qsort 'em and we're done */
+				tuplesort_sort_memtuples(state);
+				state->status = TSS_SORTEDINMEM;
+			}
+			else if (WORKER(state))
+			{
+				/*
+				 * Parallel workers must still dump out tuples to tape.  No
+				 * merge is required to produce a single output run, though.
+				 */
+				inittapes(state, false);
+				dumptuples(state, true);
+				worker_nomergeruns(state);
+				state->status = TSS_SORTEDONTAPE;
+			}
+			else
+			{
+				/*
+				 * Leader will take over worker tapes and merge worker runs.
+				 * Note that mergeruns sets the correct state->status.
+				 */
+				leader_takeover_tapes(state);
+				mergeruns(state);
+			}
 			state->current = 0;
 			state->eof_reached = false;
+			state->markpos_block = 0L;
 			state->markpos_offset = 0;
 			state->markpos_eof = false;
-			state->status = TSS_SORTEDINMEM;
 			break;
 
 		case TSS_BOUNDED:
@@ -1698,8 +1859,8 @@
 			/*
 			 * Finish tape-based sort.  First, flush all tuples remaining in
 			 * memory out to tape; then merge until we have a single remaining
-			 * run (or, if !randomAccess, one run per tape).  Note that
-			 * mergeruns sets the correct state->status.
+			 * run (or, if !randomAccess and !WORKER(), one run per tape).
+			 * Note that mergeruns sets the correct state->status.
 			 */
 			dumptuples(state, true);
 			mergeruns(state);
@@ -1718,12 +1879,12 @@
 	if (trace_sort)
 	{
 		if (state->status == TSS_FINALMERGE)
-			elog(LOG, "performsort done (except %d-way final merge): %s",
-				 state->activeTapes,
+			elog(LOG, "performsort of %d done (except %d-way final merge): %s",
+				 state->worker, state->activeTapes,
 				 pg_rusage_show(&state->ru_start));
 		else
-			elog(LOG, "performsort done: %s",
-				 pg_rusage_show(&state->ru_start));
+			elog(LOG, "performsort of %d done: %s",
+				 state->worker, pg_rusage_show(&state->ru_start));
 	}
 #endif
 
@@ -1744,6 +1905,8 @@
 	unsigned int tuplen;
 	size_t		nmoved;
 
+	Assert(!WORKER(state));
+
 	switch (state->status)
 	{
 		case TSS_SORTEDINMEM:
@@ -2127,6 +2290,7 @@
 	 */
 	Assert(forward);
 	Assert(ntuples >= 0);
+	Assert(!WORKER(state));
 
 	switch (state->status)
 	{
@@ -2221,57 +2385,40 @@ tuplesort_merge_order(int64 allowedMem)
 
 /*
  * inittapes - initialize for tape sorting.
  *
- * This is called only if we have found we don't have room to sort in memory.
+ * This is called only if we have found we won't sort in memory.
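+ *
+ * (The mergeruns argument is false only when a parallel worker is writing
+ * out its single run without merging; see tuplesort_performsort() above.)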
*/ static void -inittapes(Tuplesortstate *state) +inittapes(Tuplesortstate *state, bool mergeruns) { int maxTapes, j; - int64 tapeSpace; - /* Compute number of tapes to use: merge order plus 1 */ - maxTapes = tuplesort_merge_order(state->allowedMem) + 1; + Assert(!LEADER(state)); - state->maxTapes = maxTapes; - state->tapeRange = maxTapes - 1; + if (mergeruns) + { + /* Compute number of tapes to use: merge order plus 1 */ + maxTapes = tuplesort_merge_order(state->allowedMem) + 1; + } + else + { + /* Workers can sometimes produce single run, output without merge */ + Assert(WORKER(state)); + maxTapes = MINORDER + 1; + } #ifdef TRACE_SORT if (trace_sort) - elog(LOG, "switching to external sort with %d tapes: %s", - maxTapes, pg_rusage_show(&state->ru_start)); + elog(LOG, "%d switching to external sort with %d tapes: %s", + state->worker, maxTapes, pg_rusage_show(&state->ru_start)); #endif - /* - * Decrease availMem to reflect the space needed for tape buffers, when - * writing the initial runs; but don't decrease it to the point that we - * have no room for tuples. (That case is only likely to occur if sorting - * pass-by-value Datums; in all other scenarios the memtuples[] array is - * unlikely to occupy more than half of allowedMem. In the pass-by-value - * case it's not important to account for tuple space, so we don't care if - * LACKMEM becomes inaccurate.) - */ - tapeSpace = (int64) maxTapes * TAPE_BUFFER_OVERHEAD; - - if (tapeSpace + GetMemoryChunkSpace(state->memtuples) < state->allowedMem) - USEMEM(state, tapeSpace); - - /* - * Make sure that the temp file(s) underlying the tape set are created in - * suitable temp tablespaces. - */ - PrepareTempTablespaces(); - - /* - * Create the tape set and allocate the per-tape data arrays. - */ - state->tapeset = LogicalTapeSetCreate(maxTapes); - - state->mergeactive = (bool *) palloc0(maxTapes * sizeof(bool)); - state->tp_fib = (int *) palloc0(maxTapes * sizeof(int)); - state->tp_runs = (int *) palloc0(maxTapes * sizeof(int)); - state->tp_dummy = (int *) palloc0(maxTapes * sizeof(int)); - state->tp_tapenum = (int *) palloc0(maxTapes * sizeof(int)); + /* Create the tape set and allocate the per-tape data arrays */ + inittapestate(state, maxTapes); + state->tapeset = + LogicalTapeSetCreate(maxTapes, NULL, + state->shared ? &state->shared->fileset : NULL, + state->worker); state->currentRun = 0; @@ -2294,6 +2441,47 @@ inittapes(Tuplesortstate *state) state->status = TSS_BUILDRUNS; } +/* + * inittapestate - initialize generic tape management state + */ +static void +inittapestate(Tuplesortstate *state, int maxTapes) +{ + int64 tapeSpace; + + /* + * Decrease availMem to reflect the space needed for tape buffers; but + * don't decrease it to the point that we have no room for tuples. (That + * case is only likely to occur if sorting pass-by-value Datums; in all + * other scenarios the memtuples[] array is unlikely to occupy more than + * half of allowedMem. In the pass-by-value case it's not important to + * account for tuple space, so we don't care if LACKMEM becomes + * inaccurate.) + */ + tapeSpace = (int64) maxTapes * TAPE_BUFFER_OVERHEAD; + + if (tapeSpace + GetMemoryChunkSpace(state->memtuples) < state->allowedMem) + USEMEM(state, tapeSpace); + + /* + * Make sure that the temp file(s) underlying the tape set are created in + * suitable temp tablespaces. For parallel sorts, this should have been + * called already, but it doesn't matter if it is called a second time. 
+ */ + PrepareTempTablespaces(); + + state->mergeactive = (bool *) palloc0(maxTapes * sizeof(bool)); + state->tp_fib = (int *) palloc0(maxTapes * sizeof(int)); + state->tp_runs = (int *) palloc0(maxTapes * sizeof(int)); + state->tp_dummy = (int *) palloc0(maxTapes * sizeof(int)); + state->tp_tapenum = (int *) palloc0(maxTapes * sizeof(int)); + + /* Record # of tapes allocated (for duration of sort) */ + state->maxTapes = maxTapes; + /* Record maximum # of tapes usable as inputs when merging */ + state->tapeRange = maxTapes - 1; +} + /* * selectnewtape -- select new tape for new initial run. * @@ -2471,8 +2659,8 @@ mergeruns(Tuplesortstate *state) */ #ifdef TRACE_SORT if (trace_sort) - elog(LOG, "using " INT64_FORMAT " KB of memory for read buffers among %d input tapes", - (state->availMem) / 1024, numInputTapes); + elog(LOG, "%d using " INT64_FORMAT " KB of memory for read buffers among %d input tapes", + state->worker, state->availMem / 1024, numInputTapes); #endif state->read_buffer_size = Max(state->availMem / numInputTapes, 0); @@ -2490,7 +2678,7 @@ mergeruns(Tuplesortstate *state) * pass remains. If we don't have to produce a materialized sorted * tape, we can stop at this point and do the final merge on-the-fly. */ - if (!state->randomAccess) + if (!state->randomAccess && !WORKER(state)) { bool allOneRun = true; @@ -2575,7 +2763,10 @@ mergeruns(Tuplesortstate *state) * a waste of cycles anyway... */ state->result_tape = state->tp_tapenum[state->tapeRange]; - LogicalTapeFreeze(state->tapeset, state->result_tape); + if (!WORKER(state)) + LogicalTapeFreeze(state->tapeset, state->result_tape, NULL); + else + worker_freeze_result_tape(state); state->status = TSS_SORTEDONTAPE; /* Release the read buffers of all the other tapes, by rewinding them. */ @@ -2644,8 +2835,8 @@ mergeonerun(Tuplesortstate *state) #ifdef TRACE_SORT if (trace_sort) - elog(LOG, "finished %d-way merge step: %s", state->activeTapes, - pg_rusage_show(&state->ru_start)); + elog(LOG, "%d finished %d-way merge step: %s", state->worker, + state->activeTapes, pg_rusage_show(&state->ru_start)); #endif } @@ -2779,8 +2970,9 @@ dumptuples(Tuplesortstate *state, bool alltuples) #ifdef TRACE_SORT if (trace_sort) - elog(LOG, "starting quicksort of run %d: %s", - state->currentRun, pg_rusage_show(&state->ru_start)); + elog(LOG, "%d starting quicksort of run %d: %s", + state->worker, state->currentRun, + pg_rusage_show(&state->ru_start)); #endif /* @@ -2791,8 +2983,9 @@ dumptuples(Tuplesortstate *state, bool alltuples) #ifdef TRACE_SORT if (trace_sort) - elog(LOG, "finished quicksort of run %d: %s", - state->currentRun, pg_rusage_show(&state->ru_start)); + elog(LOG, "%d finished quicksort of run %d: %s", + state->worker, state->currentRun, + pg_rusage_show(&state->ru_start)); #endif memtupwrite = state->memtupcount; @@ -2818,8 +3011,8 @@ dumptuples(Tuplesortstate *state, bool alltuples) #ifdef TRACE_SORT if (trace_sort) - elog(LOG, "finished writing run %d to tape %d: %s", - state->currentRun, state->destTape, + elog(LOG, "%d finished writing run %d to tape %d: %s", + state->worker, state->currentRun, state->destTape, pg_rusage_show(&state->ru_start)); #endif @@ -3031,6 +3224,7 @@ make_bounded_heap(Tuplesortstate *state) Assert(state->status == TSS_INITIAL); Assert(state->bounded); Assert(tupcount >= state->bound); + Assert(SERIAL(state)); /* Reverse sort direction so largest entry will be at root */ reversedirection(state); @@ -3078,6 +3272,7 @@ sort_bounded_heap(Tuplesortstate *state) Assert(state->status == TSS_BOUNDED); 
Assert(state->bounded); Assert(tupcount == state->bound); + Assert(SERIAL(state)); /* * We can unheapify in place because each delete-top call will remove the @@ -3112,6 +3307,8 @@ sort_bounded_heap(Tuplesortstate *state) static void tuplesort_sort_memtuples(Tuplesortstate *state) { + Assert(!LEADER(state)); + if (state->memtupcount > 1) { /* Can we use the single-key sort function? */ @@ -4151,6 +4348,230 @@ readtup_datum(Tuplesortstate *state, SortTuple *stup, &tuplen, sizeof(tuplen)); } +/* + * Parallel sort routines + */ + +/* + * tuplesort_estimate_shared - estimate required shared memory allocation + * + * nWorkers is an estimate of the number of workers (it's the number that + * will be requested). + */ +Size +tuplesort_estimate_shared(int nWorkers) +{ + Size tapesSize; + + Assert(nWorkers > 0); + + /* Make sure that BufFile shared state is MAXALIGN'd */ + tapesSize = mul_size(sizeof(TapeShare), nWorkers); + tapesSize = MAXALIGN(add_size(tapesSize, offsetof(Sharedsort, tapes))); + + return tapesSize; +} + +/* + * tuplesort_initialize_shared - initialize shared tuplesort state + * + * Must be called from leader process before workers are launched, to + * establish state needed up-front for worker tuplesortstates. nWorkers + * should match the argument passed to tuplesort_estimate_shared(). + */ +void +tuplesort_initialize_shared(Sharedsort *shared, int nWorkers, dsm_segment *seg) +{ + int i; + + Assert(nWorkers > 0); + + SpinLockInit(&shared->mutex); + shared->currentWorker = 0; + shared->workersFinished = 0; + SharedFileSetInit(&shared->fileset, seg); + shared->nTapes = nWorkers; + for (i = 0; i < nWorkers; i++) + { + shared->tapes[i].firstblocknumber = 0L; + shared->tapes[i].buffilesize = 0; + } +} + +/* + * tuplesort_attach_shared - attach to shared tuplesort state + * + * Must be called by all worker processes. + */ +void +tuplesort_attach_shared(Sharedsort *shared, dsm_segment *seg) +{ + /* Attach to SharedFileSet */ + SharedFileSetAttach(&shared->fileset, seg); +} + +/* + * worker_get_identifier - Assign and return ordinal identifier for worker + * + * The order in which these are assigned is not well defined, and should not + * matter; worker numbers across parallel sort participants need only be + * distinct and gapless. logtape.c requires this. + * + * Note that the identifiers assigned from here have no relation to + * ParallelWorkerNumber number, to avoid making any assumption about + * caller's requirements. However, we do follow the ParallelWorkerNumber + * convention of representing a non-worker with worker number -1. This + * includes the leader, as well as serial Tuplesort processes. + */ +static int +worker_get_identifier(Tuplesortstate *state) +{ + Sharedsort *shared = state->shared; + int worker; + + Assert(WORKER(state)); + + SpinLockAcquire(&shared->mutex); + worker = shared->currentWorker++; + SpinLockRelease(&shared->mutex); + + return worker; +} + +/* + * worker_freeze_result_tape - freeze worker's result tape for leader + * + * This is called by workers just after the result tape has been determined, + * instead of calling LogicalTapeFreeze() directly. They do so because + * workers require a few additional steps over similar serial + * TSS_SORTEDONTAPE external sort cases, which also happen here. The extra + * steps are around freeing now unneeded resources, and representing to + * leader that worker's input run is available for its merge. 
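+ *
+ * (The leader later reads this metadata back out of the shared tapes array
+ * in leader_takeover_tapes(), below.)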
+ *
+ * There should only be one final output run for each worker, which consists
+ * of all tuples that were originally input into the worker.
+ */
+static void
+worker_freeze_result_tape(Tuplesortstate *state)
+{
+	Sharedsort *shared = state->shared;
+	TapeShare	output;
+
+	Assert(WORKER(state));
+	Assert(state->result_tape != -1);
+	Assert(state->memtupcount == 0);
+
+	/*
+	 * Free most remaining memory, in case caller is sensitive to our holding
+	 * on to it.  memtuples may not be a tiny merge heap at this point.
+	 */
+	pfree(state->memtuples);
+	/* Be tidy */
+	state->memtuples = NULL;
+	state->memtupsize = 0;
+
+	/*
+	 * Parallel worker requires result tape metadata, which is to be stored
+	 * in shared memory for leader.
+	 */
+	LogicalTapeFreeze(state->tapeset, state->result_tape, &output);
+
+	/* Store properties of output tape, and update finished worker count */
+	SpinLockAcquire(&shared->mutex);
+	shared->tapes[state->worker] = output;
+	shared->workersFinished++;
+	SpinLockRelease(&shared->mutex);
+}
+
+/*
+ * worker_nomergeruns - dump memtuples in worker, without merging
+ *
+ * This is called as an alternative to mergeruns() with a worker when no
+ * merging is required.
+ */
+static void
+worker_nomergeruns(Tuplesortstate *state)
+{
+	Assert(WORKER(state));
+	Assert(state->result_tape == -1);
+
+	state->result_tape = state->tp_tapenum[state->destTape];
+	worker_freeze_result_tape(state);
+}
+
+/*
+ * leader_takeover_tapes - create tapeset for leader from worker tapes
+ *
+ * So far, leader Tuplesortstate has performed no actual sorting.  By now, all
+ * sorting has occurred in workers, all of which must have already returned
+ * from tuplesort_performsort().
+ *
+ * When this returns, the leader process is left in a state that is virtually
+ * indistinguishable from having generated the runs itself, as a serial
+ * external sort would have.
+ */
+static void
+leader_takeover_tapes(Tuplesortstate *state)
+{
+	Sharedsort *shared = state->shared;
+	int			nParticipants = state->nParticipants;
+	int			workersFinished;
+	int			j;
+
+	Assert(LEADER(state));
+	Assert(nParticipants >= 1);
+
+	SpinLockAcquire(&shared->mutex);
+	workersFinished = shared->workersFinished;
+	SpinLockRelease(&shared->mutex);
+
+	if (nParticipants != workersFinished)
+		elog(ERROR, "cannot take over tapes before all workers finish");
+
+	/*
+	 * Create the tapeset from worker tapes, including a leader-owned tape at
+	 * the end.  Parallel workers are far more expensive than logical tapes,
+	 * so the number of tapes allocated here should never be excessive.
+	 *
+	 * We still have a leader tape, though it's not possible to write to it
+	 * due to restrictions in the shared fileset infrastructure used by
+	 * logtape.c.  It will never be written to in practice because
+	 * randomAccess is disallowed for parallel sorts.
+	 */
+	inittapestate(state, nParticipants + 1);
+	state->tapeset = LogicalTapeSetCreate(nParticipants + 1, shared->tapes,
+										  &shared->fileset, state->worker);
+
+	/* mergeruns() relies on currentRun for # of runs (in one-pass cases) */
+	state->currentRun = nParticipants;
+
+	/*
+	 * Initialize variables of Algorithm D to be consistent with runs from
+	 * workers having been generated in the leader.
+	 *
+	 * There will always be exactly 1 run per worker, and exactly one input
+	 * tape per run, because workers always output exactly 1 run, even when
+	 * there were no input tuples for workers to sort.
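+ *
+ * (The leader-owned tape at the end of the set instead gets a single dummy
+ * run, so the merge treats it as an exhausted input; see below.)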
+ */ + for (j = 0; j < state->maxTapes; j++) + { + /* One real run; no dummy runs for worker tapes */ + state->tp_fib[j] = 1; + state->tp_runs[j] = 1; + state->tp_dummy[j] = 0; + state->tp_tapenum[j] = j; + } + /* Leader tape gets one dummy run, and no real runs */ + state->tp_fib[state->tapeRange] = 0; + state->tp_runs[state->tapeRange] = 0; + state->tp_dummy[state->tapeRange] = 1; + + state->Level = 1; + state->destTape = 0; + + state->status = TSS_BUILDRUNS; +} + /* * Convenience routine to free a tuple previously loaded into sort memory */ diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h index d28f413c66..0f6a40168c 100644 --- a/src/include/access/nbtree.h +++ b/src/include/access/nbtree.h @@ -21,6 +21,7 @@ #include "catalog/pg_index.h" #include "lib/stringinfo.h" #include "storage/bufmgr.h" +#include "storage/shm_toc.h" /* There's room for a 16-bit vacuum cycle ID in BTPageOpaqueData */ typedef uint16 BTCycleId; @@ -430,8 +431,6 @@ typedef BTScanOpaqueData *BTScanOpaque; /* * external entry points for btree, in nbtree.c */ -extern IndexBuildResult *btbuild(Relation heap, Relation index, - struct IndexInfo *indexInfo); extern void btbuildempty(Relation index); extern bool btinsert(Relation rel, Datum *values, bool *isnull, ItemPointer ht_ctid, Relation heapRel, @@ -547,13 +546,8 @@ extern bool btvalidate(Oid opclassoid); /* * prototypes for functions in nbtsort.c */ -typedef struct BTSpool BTSpool; /* opaque type known only within nbtsort.c */ - -extern BTSpool *_bt_spoolinit(Relation heap, Relation index, - bool isunique, bool isdead); -extern void _bt_spooldestroy(BTSpool *btspool); -extern void _bt_spool(BTSpool *btspool, ItemPointer self, - Datum *values, bool *isnull); -extern void _bt_leafbuild(BTSpool *btspool, BTSpool *spool2); +extern IndexBuildResult *btbuild(Relation heap, Relation index, + struct IndexInfo *indexInfo); +extern void _bt_parallel_build_main(dsm_segment *seg, shm_toc *toc); #endif /* NBTREE_H */ diff --git a/src/include/access/parallel.h b/src/include/access/parallel.h index d0c218b185..025691fd82 100644 --- a/src/include/access/parallel.h +++ b/src/include/access/parallel.h @@ -59,7 +59,9 @@ extern PGDLLIMPORT bool InitializingParallelWorker; #define IsParallelWorker() (ParallelWorkerNumber >= 0) -extern ParallelContext *CreateParallelContext(const char *library_name, const char *function_name, int nworkers); +extern ParallelContext *CreateParallelContext(const char *library_name, + const char *function_name, int nworkers, + bool serializable_okay); extern void InitializeParallelDSM(ParallelContext *pcxt); extern void ReinitializeParallelDSM(ParallelContext *pcxt); extern void LaunchParallelWorkers(ParallelContext *pcxt); diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h index 9c603ca637..18c7dedd5d 100644 --- a/src/include/access/relscan.h +++ b/src/include/access/relscan.h @@ -39,6 +39,7 @@ typedef struct ParallelHeapScanDescData BlockNumber phs_startblock; /* starting block number */ pg_atomic_uint64 phs_nallocated; /* number of blocks allocated to * workers so far. */ + bool phs_snapshot_any; /* SnapshotAny, not phs_snapshot_data? 
*/ char phs_snapshot_data[FLEXIBLE_ARRAY_MEMBER]; } ParallelHeapScanDescData; diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h index 235e180299..a5cd8ddb1e 100644 --- a/src/include/catalog/index.h +++ b/src/include/catalog/index.h @@ -104,14 +104,16 @@ extern void index_build(Relation heapRelation, Relation indexRelation, IndexInfo *indexInfo, bool isprimary, - bool isreindex); + bool isreindex, + bool parallel); extern double IndexBuildHeapScan(Relation heapRelation, Relation indexRelation, IndexInfo *indexInfo, bool allow_sync, IndexBuildCallback callback, - void *callback_state); + void *callback_state, + HeapScanDesc scan); extern double IndexBuildHeapRangeScan(Relation heapRelation, Relation indexRelation, IndexInfo *indexInfo, @@ -120,7 +122,8 @@ extern double IndexBuildHeapRangeScan(Relation heapRelation, BlockNumber start_blockno, BlockNumber end_blockno, IndexBuildCallback callback, - void *callback_state); + void *callback_state, + HeapScanDesc scan); extern void validate_index(Oid heapId, Oid indexId, Snapshot snapshot); diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h index 54ee273747..429c055489 100644 --- a/src/include/miscadmin.h +++ b/src/include/miscadmin.h @@ -241,6 +241,7 @@ extern bool enableFsync; extern PGDLLIMPORT bool allowSystemTableMods; extern PGDLLIMPORT int work_mem; extern PGDLLIMPORT int maintenance_work_mem; +extern PGDLLIMPORT int max_parallel_maintenance_workers; extern int VacuumCostPageHit; extern int VacuumCostPageMiss; diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 1bf67455e0..a2a2a9f3d4 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -132,11 +132,12 @@ typedef struct ExprState * ReadyForInserts is it valid for inserts? * Concurrent are we doing a concurrent index build? * BrokenHotChain did we detect any broken HOT chains? + * ParallelWorkers # of workers requested (excludes leader) * AmCache private cache area for index AM * Context memory context holding this IndexInfo * - * ii_Concurrent and ii_BrokenHotChain are used only during index build; - * they're conventionally set to false otherwise. + * ii_Concurrent, ii_BrokenHotChain, and ii_ParallelWorkers are used only + * during index build; they're conventionally zeroed otherwise. 
* ---------------- */ typedef struct IndexInfo @@ -158,6 +159,7 @@ typedef struct IndexInfo bool ii_ReadyForInserts; bool ii_Concurrent; bool ii_BrokenHotChain; + int ii_ParallelWorkers; Oid ii_Am; void *ii_AmCache; MemoryContext ii_Context; diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h index 0072b7aa0d..b6be259ff7 100644 --- a/src/include/optimizer/paths.h +++ b/src/include/optimizer/paths.h @@ -55,7 +55,7 @@ extern RelOptInfo *standard_join_search(PlannerInfo *root, int levels_needed, extern void generate_gather_paths(PlannerInfo *root, RelOptInfo *rel); extern int compute_parallel_worker(RelOptInfo *rel, double heap_pages, - double index_pages); + double index_pages, int max_workers); extern void create_partial_bitmap_paths(PlannerInfo *root, RelOptInfo *rel, Path *bitmapqual); extern void generate_partition_wise_join_paths(PlannerInfo *root, diff --git a/src/include/optimizer/planner.h b/src/include/optimizer/planner.h index 29173d36c4..0d8b88d78b 100644 --- a/src/include/optimizer/planner.h +++ b/src/include/optimizer/planner.h @@ -56,6 +56,7 @@ extern Expr *expression_planner(Expr *expr); extern Expr *preprocess_phv_expression(PlannerInfo *root, Expr *expr); extern bool plan_cluster_use_sort(Oid tableOid, Oid indexOid); +extern int plan_create_index_workers(Oid tableOid, Oid indexOid); extern List *get_partitioned_child_rels(PlannerInfo *root, Index rti, bool *part_cols_updated); diff --git a/src/include/pgstat.h b/src/include/pgstat.h index 3d3c0b64fc..be2f59239b 100644 --- a/src/include/pgstat.h +++ b/src/include/pgstat.h @@ -826,6 +826,7 @@ typedef enum WAIT_EVENT_MQ_SEND, WAIT_EVENT_PARALLEL_FINISH, WAIT_EVENT_PARALLEL_BITMAP_SCAN, + WAIT_EVENT_PARALLEL_CREATE_INDEX_SCAN, WAIT_EVENT_PROCARRAY_GROUP_UPDATE, WAIT_EVENT_CLOG_GROUP_UPDATE, WAIT_EVENT_REPLICATION_ORIGIN_DROP, diff --git a/src/include/storage/buffile.h b/src/include/storage/buffile.h index a3df056a61..a6cdeb451c 100644 --- a/src/include/storage/buffile.h +++ b/src/include/storage/buffile.h @@ -43,6 +43,8 @@ extern size_t BufFileWrite(BufFile *file, void *ptr, size_t size); extern int BufFileSeek(BufFile *file, int fileno, off_t offset, int whence); extern void BufFileTell(BufFile *file, int *fileno, off_t *offset); extern int BufFileSeekBlock(BufFile *file, long blknum); +extern off_t BufFileSize(BufFile *file); +extern long BufFileAppend(BufFile *target, BufFile *source); extern BufFile *BufFileCreateShared(SharedFileSet *fileset, const char *name); extern void BufFileExportShared(BufFile *file); diff --git a/src/include/storage/fd.h b/src/include/storage/fd.h index db5ca16679..4244e7b1fd 100644 --- a/src/include/storage/fd.h +++ b/src/include/storage/fd.h @@ -78,6 +78,7 @@ extern char *FilePathName(File file); extern int FileGetRawDesc(File file); extern int FileGetRawFlags(File file); extern mode_t FileGetRawMode(File file); +extern off_t FileGetSize(File file); /* Operations used for sharing named temporary files */ extern File PathNameCreateTemporaryFile(const char *name, bool error_on_failure); diff --git a/src/include/utils/logtape.h b/src/include/utils/logtape.h index 88662c10a4..9bf1d80142 100644 --- a/src/include/utils/logtape.h +++ b/src/include/utils/logtape.h @@ -16,15 +16,49 @@ #ifndef LOGTAPE_H #define LOGTAPE_H +#include "storage/sharedfileset.h" + /* LogicalTapeSet is an opaque type whose details are not known outside logtape.c. 
*/ typedef struct LogicalTapeSet LogicalTapeSet; +/* + * The approach tuplesort.c takes to parallel external sorts is that workers, + * whose state is almost the same as independent serial sorts, are made to + * produce a final materialized tape of sorted output in all cases. This is + * frozen, just like any case requiring a final materialized tape. However, + * there is one difference, which is that freezing will also export an + * underlying shared fileset BufFile for sharing. Freezing produces TapeShare + * metadata for the worker when this happens, which is passed along through + * shared memory to leader. + * + * The leader process can then pass an array of TapeShare metadata (one per + * worker participant) to LogicalTapeSetCreate(), alongside a handle to a + * shared fileset, which is sufficient to construct a new logical tapeset that + * consists of each of the tapes materialized by workers. + * + * Note that while logtape.c does create an empty leader tape at the end of the + * tapeset in the leader case, it can never be written to due to a restriction + * in the shared buffile infrastructure. + */ +typedef struct TapeShare +{ + /* + * firstblocknumber is first block that should be read from materialized + * tape. + * + * buffilesize is the size of associated BufFile following freezing. + */ + long firstblocknumber; + off_t buffilesize; +} TapeShare; + /* * prototypes for functions in logtape.c */ -extern LogicalTapeSet *LogicalTapeSetCreate(int ntapes); +extern LogicalTapeSet *LogicalTapeSetCreate(int ntapes, TapeShare *shared, + SharedFileSet *fileset, int worker); extern void LogicalTapeSetClose(LogicalTapeSet *lts); extern void LogicalTapeSetForgetFreeSpace(LogicalTapeSet *lts); extern size_t LogicalTapeRead(LogicalTapeSet *lts, int tapenum, @@ -34,7 +68,8 @@ extern void LogicalTapeWrite(LogicalTapeSet *lts, int tapenum, extern void LogicalTapeRewindForRead(LogicalTapeSet *lts, int tapenum, size_t buffer_size); extern void LogicalTapeRewindForWrite(LogicalTapeSet *lts, int tapenum); -extern void LogicalTapeFreeze(LogicalTapeSet *lts, int tapenum); +extern void LogicalTapeFreeze(LogicalTapeSet *lts, int tapenum, + TapeShare *share); extern size_t LogicalTapeBackspace(LogicalTapeSet *lts, int tapenum, size_t size); extern void LogicalTapeSeek(LogicalTapeSet *lts, int tapenum, diff --git a/src/include/utils/tuplesort.h b/src/include/utils/tuplesort.h index 5d57c503ab..d2e6754f04 100644 --- a/src/include/utils/tuplesort.h +++ b/src/include/utils/tuplesort.h @@ -8,7 +8,8 @@ * if necessary). It works efficiently for both small and large amounts * of data. Small amounts are sorted in-memory using qsort(). Large * amounts are sorted using temporary files and a standard external sort - * algorithm. + * algorithm. Parallel sorts use a variant of this external sort + * algorithm, and are typically only used for large amounts of data. * * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California @@ -23,13 +24,39 @@ #include "access/itup.h" #include "executor/tuptable.h" #include "fmgr.h" +#include "storage/dsm.h" #include "utils/relcache.h" -/* Tuplesortstate is an opaque type whose details are not known outside - * tuplesort.c. +/* + * Tuplesortstate and Sharedsort are opaque types whose details are not + * known outside tuplesort.c. 
*/ typedef struct Tuplesortstate Tuplesortstate; +typedef struct Sharedsort Sharedsort; + +/* + * Tuplesort parallel coordination state, allocated by each participant in + * local memory. Participant caller initializes everything. See usage notes + * below. + */ +typedef struct SortCoordinateData +{ + /* Worker process? If not, must be leader. */ + bool isWorker; + + /* + * Leader-process-passed number of participants known launched (workers + * set this to -1). Includes state within leader needed for it to + * participate as a worker, if any. + */ + int nParticipants; + + /* Private opaque state (points to shared memory) */ + Sharedsort *sharedsort; +} SortCoordinateData; + +typedef struct SortCoordinateData *SortCoordinate; /* * Data structures for reporting sort statistics. Note that @@ -66,6 +93,8 @@ typedef struct TuplesortInstrumentation * sorting HeapTuples and two more for sorting IndexTuples. Yet another * API supports sorting bare Datums. * + * Serial sort callers should pass NULL for their coordinate argument. + * * The "heap" API actually stores/sorts MinimalTuples, which means it doesn't * preserve the system columns (tuple identity and transaction visibility * info). The sort keys are specified by column numbers within the tuples @@ -84,30 +113,107 @@ typedef struct TuplesortInstrumentation * * The "index_hash" API is similar to index_btree, but the tuples are * actually sorted by their hash codes not the raw data. + * + * Parallel sort callers are required to coordinate multiple tuplesort states + * in a leader process and one or more worker processes. The leader process + * must launch workers, and have each perform an independent "partial" + * tuplesort, typically fed by the parallel heap interface. The leader later + * produces the final output (internally, it merges runs output by workers). + * + * Callers must do the following to perform a sort in parallel using multiple + * worker processes: + * + * 1. Request tuplesort-private shared memory for n workers. Use + * tuplesort_estimate_shared() to get the required size. + * 2. Have leader process initialize allocated shared memory using + * tuplesort_initialize_shared(). Launch workers. + * 3. Initialize a coordinate argument within both the leader process, and + * for each worker process. This has a pointer to the shared + * tuplesort-private structure, as well as some caller-initialized fields. + * Leader's coordinate argument reliably indicates number of workers + * launched (this is unused by workers). + * 4. Begin a tuplesort using some appropriate tuplesort_begin* routine, + * (passing the coordinate argument) within each worker. The workMem + * arguments need not be identical. All other arguments should match + * exactly, though. + * 5. tuplesort_attach_shared() should be called by all workers. Feed tuples + * to each worker, and call tuplesort_performsort() within each when input + * is exhausted. + * 6. Call tuplesort_end() in each worker process. Worker processes can shut + * down once tuplesort_end() returns. + * 7. Begin a tuplesort in the leader using the same tuplesort_begin* + * routine, passing a leader-appropriate coordinate argument (this can + * happen as early as during step 3, actually, since we only need to know + * the number of workers successfully launched). The leader must now wait + * for workers to finish. Caller must use own mechanism for ensuring that + * next step isn't reached until all workers have called and returned from + * tuplesort_performsort(). 
(Note that it's okay if workers have already + * also called tuplesort_end() by then.) + * 8. Call tuplesort_performsort() in leader. Consume output using the + * appropriate tuplesort_get* routine. Leader can skip this step if + * tuplesort turns out to be unnecessary. + * 9. Call tuplesort_end() in leader. + * + * This division of labor assumes nothing about how input tuples are produced, + * but does require that caller combine the state of multiple tuplesorts for + * any purpose other than producing the final output. For example, callers + * must consider that tuplesort_get_stats() reports on only one worker's role + * in a sort (or the leader's role), and not statistics for the sort as a + * whole. + * + * Note that callers may use the leader process to sort runs as if it was an + * independent worker process (prior to the process performing a leader sort + * to produce the final sorted output). Doing so only requires a second + * "partial" tuplesort within the leader process, initialized like that of a + * worker process. The steps above don't touch on this directly. The only + * difference is that the tuplesort_attach_shared() call is never needed within + * leader process, because the backend as a whole holds the shared fileset + * reference. A worker Tuplesortstate in leader is expected to do exactly the + * same amount of total initial processing work as a worker process + * Tuplesortstate, since the leader process has nothing else to do before + * workers finish. + * + * Note that only a very small amount of memory will be allocated prior to + * the leader state first consuming input, and that workers will free the + * vast majority of their memory upon returning from tuplesort_performsort(). + * Callers can rely on this to arrange for memory to be used in a way that + * respects a workMem-style budget across an entire parallel sort operation. + * + * Callers are responsible for parallel safety in general. However, they + * can at least rely on there being no parallel safety hazards within + * tuplesort, because tuplesort thinks of the sort as several independent + * sorts whose results are combined. Since, in general, the behavior of + * sort operators is immutable, caller need only worry about the parallel + * safety of whatever the process is through which input tuples are + * generated (typically, caller uses a parallel heap scan). 
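 + *
 + * As a rough sketch only -- the shared-memory setup ("toc", "seg"), the
 + * worker-launch step, and the wait for workers are caller-provided and
 + * appear here as hypothetical placeholders -- the leader side of the
 + * steps above might look like:
 + *
 + *		Sharedsort *shared;
 + *		SortCoordinateData coord;
 + *		Tuplesortstate *state;
 + *
 + *		shared = shm_toc_allocate(toc, tuplesort_estimate_shared(nworkers));
 + *		tuplesort_initialize_shared(shared, nworkers, seg);
 + *		... launch workers; each builds its own coordinate with
 + *		isWorker = true, nParticipants = -1 and sharedsort = shared,
 + *		begins a sort with that coordinate, calls
 + *		tuplesort_attach_shared(shared, seg), feeds its tuples, then
 + *		calls tuplesort_performsort() and tuplesort_end() ...
 + *		coord.isWorker = false;
 + *		coord.nParticipants = nworkers_launched;
 + *		coord.sharedsort = shared;
 + *		state = tuplesort_begin_heap(tupDesc, nkeys, attNums, sortOperators,
 + *									 sortCollations, nullsFirstFlags, workMem,
 + *									 &coord, false);
 + *		... wait until all workers have returned from
 + *		tuplesort_performsort() ...
 + *		tuplesort_performsort(state);
 + *		... consume output, e.g. via tuplesort_gettupleslot(), then ...
 + *		tuplesort_end(state);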
*/ extern Tuplesortstate *tuplesort_begin_heap(TupleDesc tupDesc, int nkeys, AttrNumber *attNums, Oid *sortOperators, Oid *sortCollations, bool *nullsFirstFlags, - int workMem, bool randomAccess); + int workMem, SortCoordinate coordinate, + bool randomAccess); extern Tuplesortstate *tuplesort_begin_cluster(TupleDesc tupDesc, - Relation indexRel, - int workMem, bool randomAccess); + Relation indexRel, int workMem, + SortCoordinate coordinate, bool randomAccess); extern Tuplesortstate *tuplesort_begin_index_btree(Relation heapRel, Relation indexRel, bool enforceUnique, - int workMem, bool randomAccess); + int workMem, SortCoordinate coordinate, + bool randomAccess); extern Tuplesortstate *tuplesort_begin_index_hash(Relation heapRel, Relation indexRel, uint32 high_mask, uint32 low_mask, uint32 max_buckets, - int workMem, bool randomAccess); + int workMem, SortCoordinate coordinate, + bool randomAccess); extern Tuplesortstate *tuplesort_begin_datum(Oid datumType, Oid sortOperator, Oid sortCollation, bool nullsFirstFlag, - int workMem, bool randomAccess); + int workMem, SortCoordinate coordinate, + bool randomAccess); extern void tuplesort_set_bound(Tuplesortstate *state, int64 bound); @@ -141,10 +247,16 @@ extern const char *tuplesort_space_type_name(TuplesortSpaceType t); extern int tuplesort_merge_order(int64 allowedMem); +extern Size tuplesort_estimate_shared(int nworkers); +extern void tuplesort_initialize_shared(Sharedsort *shared, int nWorkers, + dsm_segment *seg); +extern void tuplesort_attach_shared(Sharedsort *shared, dsm_segment *seg); + /* * These routines may only be called if randomAccess was specified 'true'. * Likewise, backwards scan in gettuple/getdatum is only allowed if - * randomAccess was specified. + * randomAccess was specified. Note that parallel sorts do not support + * randomAccess. */ extern void tuplesort_rescan(Tuplesortstate *state); diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index a42ff9794a..d4765ce3b0 100644 --- a/src/tools/pgindent/typedefs.list +++ b/src/tools/pgindent/typedefs.list @@ -165,6 +165,7 @@ BTArrayKeyInfo BTBuildState BTCycleId BTIndexStat +BTLeader BTMetaPageData BTOneVacInfo BTPS_State @@ -178,6 +179,7 @@ BTScanOpaqueData BTScanPos BTScanPosData BTScanPosItem +BTShared BTSortArrayContext BTSpool BTStack @@ -2047,6 +2049,7 @@ SharedSortInfo SharedTuplestore SharedTuplestoreAccessor SharedTypmodTableEntry +Sharedsort ShellTypeInfo ShippableCacheEntry ShippableCacheKey @@ -2091,6 +2094,8 @@ Sort SortBy SortByDir SortByNulls +SortCoordinate +SortCoordinateData SortGroupClause SortItem SortPath @@ -2234,6 +2239,7 @@ TableSpaceOpts TablespaceList TablespaceListCell TapeBlockTrailer +TapeShare TarMethodData TarMethodFile TargetEntry From 533c5d8bddf0feb1785b3da17c0d17feeaac76d8 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 2 Feb 2018 14:20:50 -0500 Subject: [PATCH 0940/1087] Fix application of identity values in some cases Investigation of 2d2d06b7e27e3177d5bef0061801c75946871db3 revealed that identity values were not applied in some further cases, including logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD COLUMN. To fix all that, apply the identity column expression in build_column_default() instead of repeating the same logic at each call site. For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding completely ignored that existing rows for the new column should have values filled in from the identity sequence. 
The coding using build_column_default() fails for this because the sequence ownership isn't registered until after ALTER TABLE, and we can't do it before because we don't have the column in the catalog yet. So we specially remember in ColumnDef the sequence name that we decided on and build a custom NextValueExpr using that. Reviewed-by: Michael Paquier --- src/backend/commands/copy.c | 16 ++----------- src/backend/commands/tablecmds.c | 16 ++++++++++++- src/backend/nodes/copyfuncs.c | 1 + src/backend/nodes/equalfuncs.c | 1 + src/backend/nodes/outfuncs.c | 1 + src/backend/parser/parse_utilcmd.c | 8 +++++++ src/backend/rewrite/rewriteHandler.c | 22 ++++++++--------- src/include/nodes/parsenodes.h | 2 ++ src/test/regress/expected/identity.out | 28 ++++++++++++++++++++++ src/test/regress/sql/identity.sql | 19 +++++++++++++++ src/test/subscription/t/008_diff_schema.pl | 18 ++++++++++---- 11 files changed, 102 insertions(+), 30 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index 04a24c6082..b3933df9af 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -23,7 +23,6 @@ #include "access/sysattr.h" #include "access/xact.h" #include "access/xlog.h" -#include "catalog/dependency.h" #include "catalog/pg_type.h" #include "commands/copy.h" #include "commands/defrem.h" @@ -2996,19 +2995,8 @@ BeginCopyFrom(ParseState *pstate, { /* attribute is NOT to be copied from input */ /* use default value if one exists */ - Expr *defexpr; - - if (att->attidentity) - { - NextValueExpr *nve = makeNode(NextValueExpr); - - nve->seqid = getOwnedSequence(RelationGetRelid(cstate->rel), - attnum); - nve->typeId = att->atttypid; - defexpr = (Expr *) nve; - } - else - defexpr = (Expr *) build_column_default(cstate->rel, attnum); + Expr *defexpr = (Expr *) build_column_default(cstate->rel, + attnum); if (defexpr != NULL) { diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 37c7d66881..89454d8e80 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -5486,7 +5486,21 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel, if (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE && relkind != RELKIND_FOREIGN_TABLE && attribute.attnum > 0) { - defval = (Expr *) build_column_default(rel, attribute.attnum); + /* + * For an identity column, we can't use build_column_default(), + * because the sequence ownership isn't set yet. So do it manually. 
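 + *
 + * A hypothetical illustration, modeled on the regression tests added
 + * by this patch:
 + *
 + *		CREATE TABLE itest13 (a int);
 + *		INSERT INTO itest13 VALUES (1), (2), (3);
 + *		ALTER TABLE itest13 ADD COLUMN c int
 + *			GENERATED BY DEFAULT AS IDENTITY;
 + *
 + * must leave the three pre-existing rows with c = 1, 2, 3, drawn from
 + * the new column's identity sequence via the NextValueExpr built here.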
+ */ + if (colDef->identity) + { + NextValueExpr *nve = makeNode(NextValueExpr); + + nve->seqid = RangeVarGetRelid(colDef->identitySequence, NoLock, false); + nve->typeId = typeOid; + + defval = (Expr *) nve; + } + else + defval = (Expr *) build_column_default(rel, attribute.attnum); if (!defval && DomainHasConstraints(typeOid)) { diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index fd3001c493..bafe0d1071 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -2819,6 +2819,7 @@ _copyColumnDef(const ColumnDef *from) COPY_NODE_FIELD(raw_default); COPY_NODE_FIELD(cooked_default); COPY_SCALAR_FIELD(identity); + COPY_NODE_FIELD(identitySequence); COPY_NODE_FIELD(collClause); COPY_SCALAR_FIELD(collOid); COPY_NODE_FIELD(constraints); diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 7d2aa1a2d3..02ca7d588c 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2564,6 +2564,7 @@ _equalColumnDef(const ColumnDef *a, const ColumnDef *b) COMPARE_NODE_FIELD(raw_default); COMPARE_NODE_FIELD(cooked_default); COMPARE_SCALAR_FIELD(identity); + COMPARE_NODE_FIELD(identitySequence); COMPARE_NODE_FIELD(collClause); COMPARE_SCALAR_FIELD(collOid); COMPARE_NODE_FIELD(constraints); diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index e0f4befd9f..e6ba096257 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -2814,6 +2814,7 @@ _outColumnDef(StringInfo str, const ColumnDef *node) WRITE_NODE_FIELD(raw_default); WRITE_NODE_FIELD(cooked_default); WRITE_CHAR_FIELD(identity); + WRITE_NODE_FIELD(identitySequence); WRITE_NODE_FIELD(collClause); WRITE_OID_FIELD(collOid); WRITE_NODE_FIELD(constraints); diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 1d35815fcf..d415d7180f 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -472,6 +472,14 @@ generateSerialExtraStmts(CreateStmtContext *cxt, ColumnDef *column, cxt->blist = lappend(cxt->blist, seqstmt); + /* + * Store the identity sequence name that we decided on. ALTER TABLE + * ... ADD COLUMN ... IDENTITY needs this so that it can fill the new + * column with values from the sequence, while the association of the + * sequence with the table is not set until after the ALTER TABLE. + */ + column->identitySequence = seqstmt->sequence; + /* * Build an ALTER SEQUENCE ... 
OWNED BY command to mark the sequence as * owned by this column, and add it to the list of things to be done after diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c index 32e3798972..66253fc3d3 100644 --- a/src/backend/rewrite/rewriteHandler.c +++ b/src/backend/rewrite/rewriteHandler.c @@ -844,17 +844,7 @@ rewriteTargetListIU(List *targetList, { Node *new_expr; - if (att_tup->attidentity) - { - NextValueExpr *nve = makeNode(NextValueExpr); - - nve->seqid = getOwnedSequence(RelationGetRelid(target_relation), attrno); - nve->typeId = att_tup->atttypid; - - new_expr = (Node *) nve; - } - else - new_expr = build_column_default(target_relation, attrno); + new_expr = build_column_default(target_relation, attrno); /* * If there is no default (ie, default is effectively NULL), we @@ -1123,6 +1113,16 @@ build_column_default(Relation rel, int attrno) Node *expr = NULL; Oid exprtype; + if (att_tup->attidentity) + { + NextValueExpr *nve = makeNode(NextValueExpr); + + nve->seqid = getOwnedSequence(RelationGetRelid(rel), attrno); + nve->typeId = att_tup->atttypid; + + return (Node *) nve; + } + /* * Scan to see if relation has a default for this column. */ diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index 76a73b2a37..a16de289ba 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -647,6 +647,8 @@ typedef struct ColumnDef Node *raw_default; /* default value (untransformed parse tree) */ Node *cooked_default; /* default value (transformed expr tree) */ char identity; /* attidentity setting */ + RangeVar *identitySequence; /* to store identity sequence name for ALTER + * TABLE ... ADD COLUMN */ CollateClause *collClause; /* untransformed COLLATE spec, if any */ Oid collOid; /* collation OID (InvalidOid if not set) */ List *constraints; /* other constraints on column */ diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out index 627389b749..5536044d9f 100644 --- a/src/test/regress/expected/identity.out +++ b/src/test/regress/expected/identity.out @@ -104,6 +104,19 @@ SELECT * FROM itest4; 2 | (2 rows) +-- VALUES RTEs +INSERT INTO itest3 VALUES (DEFAULT, 'a'); +INSERT INTO itest3 VALUES (DEFAULT, 'b'), (DEFAULT, 'c'); +SELECT * FROM itest3; + a | b +----+--- + 7 | + 12 | + 17 | a + 22 | b + 27 | c +(5 rows) + -- OVERRIDING tests INSERT INTO itest1 VALUES (10, 'xyz'); INSERT INTO itest1 OVERRIDING USER VALUE VALUES (10, 'xyz'); @@ -237,6 +250,21 @@ SELECT * FROM itestv11; 11 | xyz (3 rows) +-- ADD COLUMN +CREATE TABLE itest13 (a int); +-- add column to empty table +ALTER TABLE itest13 ADD COLUMN b int GENERATED BY DEFAULT AS IDENTITY; +INSERT INTO itest13 VALUES (1), (2), (3); +-- add column to populated table +ALTER TABLE itest13 ADD COLUMN c int GENERATED BY DEFAULT AS IDENTITY; +SELECT * FROM itest13; + a | b | c +---+---+--- + 1 | 1 | 1 + 2 | 2 | 2 + 3 | 3 | 3 +(3 rows) + -- various ALTER COLUMN tests -- fail, not allowed for identity columns ALTER TABLE itest1 ALTER COLUMN a SET DEFAULT 1; diff --git a/src/test/regress/sql/identity.sql b/src/test/regress/sql/identity.sql index 1b2d11cf34..8be086d7ea 100644 --- a/src/test/regress/sql/identity.sql +++ b/src/test/regress/sql/identity.sql @@ -54,6 +54,14 @@ SELECT * FROM itest3; SELECT * FROM itest4; +-- VALUES RTEs + +INSERT INTO itest3 VALUES (DEFAULT, 'a'); +INSERT INTO itest3 VALUES (DEFAULT, 'b'), (DEFAULT, 'c'); + +SELECT * FROM itest3; + + -- OVERRIDING tests INSERT INTO itest1 VALUES (10, 'xyz'); @@ -138,6 
+146,17 @@ INSERT INTO itestv11 OVERRIDING SYSTEM VALUE VALUES (11, 'xyz'); SELECT * FROM itestv11; +-- ADD COLUMN + +CREATE TABLE itest13 (a int); +-- add column to empty table +ALTER TABLE itest13 ADD COLUMN b int GENERATED BY DEFAULT AS IDENTITY; +INSERT INTO itest13 VALUES (1), (2), (3); +-- add column to populated table +ALTER TABLE itest13 ADD COLUMN c int GENERATED BY DEFAULT AS IDENTITY; +SELECT * FROM itest13; + + -- various ALTER COLUMN tests -- fail, not allowed for identity columns diff --git a/src/test/subscription/t/008_diff_schema.pl b/src/test/subscription/t/008_diff_schema.pl index ea31625402..d4849c89a3 100644 --- a/src/test/subscription/t/008_diff_schema.pl +++ b/src/test/subscription/t/008_diff_schema.pl @@ -3,7 +3,7 @@ use warnings; use PostgresNode; use TestLib; -use Test::More tests => 3; +use Test::More tests => 4; # Create publisher node my $node_publisher = get_new_node('publisher'); @@ -22,7 +22,7 @@ "INSERT INTO test_tab VALUES (1, 'foo'), (2, 'bar')"); # Setup structure on subscriber -$node_subscriber->safe_psql('postgres', "CREATE TABLE test_tab (a int primary key, b text, c timestamptz DEFAULT now(), d bigint DEFAULT 999)"); +$node_subscriber->safe_psql('postgres', "CREATE TABLE test_tab (a int primary key, b text, c timestamptz DEFAULT now(), d bigint DEFAULT 999, e int GENERATED BY DEFAULT AS IDENTITY)"); # Setup logical replication my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres'; @@ -52,8 +52,8 @@ $node_publisher->wait_for_catchup($appname); $result = - $node_subscriber->safe_psql('postgres', "SELECT count(*), count(c), count(d = 999) FROM test_tab"); -is($result, qq(2|2|2), 'check extra columns contain local defaults'); + $node_subscriber->safe_psql('postgres', "SELECT count(*), count(c), count(d = 999), count(e) FROM test_tab"); +is($result, qq(2|2|2|2), 'check extra columns contain local defaults after copy'); # Change the local values of the extra columns on the subscriber, # update publisher, and check that subscriber retains the expected @@ -67,5 +67,15 @@ $node_subscriber->safe_psql('postgres', "SELECT count(*), count(extract(epoch from c) = 987654321), count(d = 999) FROM test_tab"); is($result, qq(2|2|2), 'check extra columns contain locally changed data'); +# Another insert +$node_publisher->safe_psql('postgres', + "INSERT INTO test_tab VALUES (3, 'baz')"); + +$node_publisher->wait_for_catchup($appname); + +$result = + $node_subscriber->safe_psql('postgres', "SELECT count(*), count(c), count(d = 999), count(e) FROM test_tab"); +is($result, qq(3|3|3|3), 'check extra columns contain local defaults after apply'); + $node_subscriber->stop; $node_publisher->stop; From bf641d3376018c40860f26167a60febff5bc1f51 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 2 Feb 2018 17:10:46 -0500 Subject: [PATCH 0941/1087] First-draft release notes for 10.2. As usual, the release notes for other branches will be made by cutting these down, but put them up for community review first. --- doc/src/sgml/release-10.sgml | 1164 ++++++++++++++++++++++++++++++++++ 1 file changed, 1164 insertions(+) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 18323b349c..061d968f73 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -1,6 +1,1170 @@ + + Release 10.2 + + + Release date: + 2018-02-08 + + + + This release contains a variety of fixes from 10.1. + For information about new features in major release 10, see + . 
+ + + + Migration to Version 10.2 + + + A dump/restore is not required for those running 10.X. + + + + However, + if you use contrib/cube's ~> + operator, see the entry below about that. + + + + Also, if you are upgrading from a version earlier than 10.1, + see . + + + + + Changes + + + + + + + Fix vacuuming of tuples that were updated while key-share locked + (Andres Freund, Álvaro Herrera) + + + + In some cases VACUUM would fail to remove such + tuples even though they are now dead, leading to assorted data + corruption scenarios. + + + + + + + Fix failure to mark a hash index's metapage dirty after + adding a new overflow page, potentially leading to index corruption + (Lixian Zou, Amit Kapila) + + + + + + + Ensure that vacuum will always clean up the pending-insertions list of + a GIN index (Masahiko Sawada) + + + + This is necessary to ensure that dead index entries get removed. + The old code got it backwards, allowing vacuum to skip the cleanup if + some other process were running it concurrently, thus risking + invalid entries being left behind in the index. + + + + + + + Fix inadequate buffer locking in some LSN fetches (Jacob Champion, + Asim Praveen, Ashwin Agrawal) + + + + These errors could result in misbehavior under concurrent load. + The potential consequences have not been characterized fully. + + + + + + + Fix incorrect query results from cases involving flattening of + subqueries whose outputs are used in GROUPING SETS + (Heikki Linnakangas) + + + + + + + Fix handling of list partitioning constraints for partition keys of + boolean or array types (Amit Langote) + + + + + + + Avoid unnecessary failure in a query on an inheritance tree that + occurs concurrently with some child table being removed from the tree + by ALTER TABLE NO INHERIT (Tom Lane) + + + + + + + Fix spurious deadlock failures when multiple sessions are + running CREATE INDEX CONCURRENTLY (Jeff Janes) + + + + + + + During VACUUM FULL, update the table's size fields + in pg_class sooner (Amit Kapila) + + + + This prevents poor behavior when rebuilding any hash indexes on the + table, since those use the pg_class + statistics to govern the initial hash size. + + + + + + + Fix + UNION/INTERSECT/EXCEPT + over zero columns (Tom Lane) + + + + + + + Disallow identity columns on typed tables and partitions + (Michael Paquier) + + + + These cases will be treated as unsupported features for now. + + + + + + + Fix assorted failures to apply the correct default value when + inserting into an identity column (Michael Paquier, Peter Eisentraut) + + + + In several contexts, notably COPY + and ALTER TABLE ADD COLUMN, the expected default + value was not applied and instead a null value was inserted. + + + + + + + Fix failures when an inheritance tree contains foreign child tables + (Etsuro Fujita) + + + + A mix of regular and foreign tables in an inheritance tree resulted in + creation of incorrect plans for UPDATE + and DELETE queries. This led to visible failures in + some cases, notably when there are row-level triggers on a foreign + child table. 
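 +
 +      As a hypothetical illustration of the identity-column fix above:
 +
 +        CREATE TABLE t (a int GENERATED BY DEFAULT AS IDENTITY, b text);
 +        INSERT INTO t VALUES (DEFAULT, 'x'), (DEFAULT, 'y');
 +
 +      With the fix, both rows draw their values of a from the identity
 +      sequence; previously the multi-row VALUES form inserted nulls
 +      instead.
 +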
+ + + + + + + Repair failure with correlated sub-SELECT + inside VALUES inside a LATERAL + subquery (Tom Lane) + + + + + + + Fix could not devise a query plan for the given query + planner failure for some cases involving nested UNION + ALL inside a lateral subquery (Tom Lane) + + + + + + + Allow functional dependency statistics to be used for boolean columns + (Tom Lane) + + + + Previously, although extended statistics could be declared and + collected on boolean columns, the planner failed to apply them. + + + + + + + Avoid underestimating the number of groups emitted by subqueries + containing set-returning functions in their grouping columns (Tom Lane) + + + + Cases similar to SELECT DISTINCT unnest(foo) got a + lower output rowcount estimate in 10.0 than they did in earlier + releases, possibly resulting in unfavorable plan choices. Restore the + prior estimation behavior. + + + + + + + Fix use of triggers in logical replication workers (Petr Jelinek) + + + + + + + Fix logical decoding to correctly clean up disk files for crashed + transactions (Atsushi Torikoshi) + + + + Logical decoding may spill WAL records to disk for transactions + generating many WAL records. Normally these files are cleaned up + after the commit or abort record arrives for the transaction; but if + no such record is ever seen, the removal code misbehaved. + + + + + + + Fix walsender timeout failure and failure to respond to interrupts + when processing a large transaction (Petr Jelinek) + + + + + + + Fix race condition during replication origin drop that could allow the + dropping process to wait indefinitely (Tom Lane) + + + + + + + Allow members of the pg_read_all_stats role to see + walsender statistics in the pg_stat_replication + view (Feike Steenbergen) + + + + + + + Show walsenders that are sending base backups as active + in pg_stat_activity (Magnus Hagander) + + + + + + + Fix reporting of scram-sha-256 authentication + method in the pg_hba_file_rules view + (Michael Paquier) + + + + Previously this was printed as scram-sha256, + possibly confusing users as to the correct spelling. + + + + + + + Fix has_sequence_privilege() to + support WITH GRANT OPTION tests (Joe Conway) + + + + This case was already handled by other privilege-testing functions. + + + + + + + In databases using UTF8 encoding, ignore any XML declaration that + asserts a different encoding (Pavel Stehule, Noah Misch) + + + + We always store XML strings in the database encoding, so allowing + libxml to act on a declaration of another encoding gave wrong results. + In encodings other than UTF8, we don't promise to support non-ASCII + XML data anyway, so retain the previous behavior for bug compatibility. + This change affects only xpath() and related + functions; other XML code paths already acted this way. + + + + + + + Provide for forward compatibility with future minor protocol versions + (Robert Haas, Badrul Chowdhury) + + + + Up to now, PostgreSQL servers simply + rejected requests to use protocol versions newer than 3.0, so that + there was no functional difference between the major and minor parts + of the protocol version number. Allow clients to request versions 3.x + without failing, sending back a message showing that the server only + understands 3.0. This makes no difference at the moment, but + back-patching this change should allow speedier introduction of future + minor protocol upgrades. 
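 +
 +      Referring to the has_sequence_privilege() entry above, a minimal
 +      sketch of the newly supported test (role and sequence names are
 +      illustrative):
 +
 +        SELECT has_sequence_privilege('bob', 'myseq',
 +                                      'USAGE WITH GRANT OPTION');
 +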
+ + + + + + + Allow a client that supports SCRAM channel binding (such as v11 or + later libpq) to connect to a v10 server + (Michael Paquier) + + + + v10 does not have this feature, and the connection-time negotiation + about whether to use it was done incorrectly. + + + + + + + Avoid live-lock in ConditionVariableBroadcast() + (Tom Lane, Thomas Munro) + + + + Given repeatedly-unlucky timing, a process attempting to awaken all + waiters for a condition variable could loop indefinitely. This + affects only parallel index scans and some operations on replication + slots. + + + + + + + Clean up waits for condition variables correctly during subtransaction + abort (Robert Haas) + + + + + + + Ensure that child processes that are waiting for a condition variable + will exit promptly if the postmaster process dies (Tom Lane) + + + + + + + Fix crashes in parallel queries using more than one Gather node + (Thomas Munro) + + + + + + + Fix hang in parallel index scan when processing a deleted or half-dead + index page (Amit Kapila) + + + + + + + Avoid crash if parallel bitmap heap scan is unable to allocate a + shared memory segment (Robert Haas) + + + + + + + Cope with failure to start a parallel worker process + (Amit Kapila, Robert Haas) + + + + Parallel query previously tended to hang indefinitely if a worker + could not be started, as the result of fork() + failure or other low-probability problems. + + + + + + + Avoid unnecessary failure when no error queues are created during + parallel query startup (Robert Haas) + + + + + + + Fix collection of EXPLAIN statistics from parallel + workers (Amit Kapila, Thomas Munro) + + + + + + + Ensure that query strings passed to parallel workers are correctly + null-terminated (Thomas Munro) + + + + This prevents emitting garbage in postmaster log output from such + workers. + + + + + + + Avoid unsafe alignment assumptions when working + with __int128 (Tom Lane) + + + + Typically, compilers assume that __int128 variables are + aligned on 16-byte boundaries, but our memory allocation + infrastructure isn't prepared to guarantee that, and increasing the + setting of MAXALIGN seems infeasible for multiple reasons. Adjust the + code to allow use of __int128 only when we can tell the + compiler to assume lesser alignment. The only known symptom of this + problem so far is crashes in some parallel aggregation queries. + + + + + + + Prevent stack-overflow crashes when planning extremely deeply + nested set operations + (UNION/INTERSECT/EXCEPT) + (Tom Lane) + + + + + + + Avoid crash during an EvalPlanQual recheck of an indexscan that is the + inner child of a merge join (Tom Lane) + + + + This could only happen during an update or SELECT FOR + UPDATE of a join, when there is a concurrent update of some + selected row. + + + + + + + Fix crash in autovacuum when extended statistics are defined + for a table but can't be computed (Álvaro Herrera) + + + + + + + Fix null-pointer crashes for some types of LDAP URLs appearing + in pg_hba.conf (Thomas Munro) + + + + + + + Prevent out-of-memory failures due to excessive growth of simple hash + tables (Tomas Vondra, Andres Freund) + + + + + + + Fix sample INSTR() functions in the PL/pgSQL + documentation (Yugo Nagata, Tom Lane) + + + + These functions are stated to be Oracle(TM)-compatible, but + they weren't exactly. 
In particular, there was a discrepancy in the + interpretation of a negative third parameter: Oracle thinks that a + negative value indicates the last place where the target substring can + begin, whereas our functions took it as the last place where the + target can end. Also, Oracle throws an error for a zero or negative + fourth parameter, whereas our functions returned zero. + + + + The sample code has been adjusted to match Oracle's behavior more + precisely. Users who have copied this code into their applications + may wish to update their copies. + + + + + + + Fix pg_dump to make ACL (permissions), + security label, and comment entries reliably identifiable in archive + output formats (Tom Lane) + + + + The tag portion of an ACL archive entry was usually + just the name of the associated object. Make it start with the object + type instead, bringing ACLs into line with the convention already used + for comment and security label archive entries. Also, fix the + security label and comment entries for the whole database, if present, + to make their tags start with DATABASE so that they + also follow this convention. This prevents false matches in code that + tries to identify large-object-related entries by seeing if the tag + starts with LARGE OBJECT. That could have resulted + in misclassifying entries as data rather than schema, with undesirable + results in a schema-only or data-only dump. + + + + Note that this change has user-visible results in the output + of pg_restore --list. + + + + + + + Rename pg_rewind's + copy_file_range function to avoid conflict + with new Linux system call (Andres Freund) + + + + This change prevents build failures with newer glibc versions. + + + + + + + In ecpg, detect indicator arrays that do + not have the correct length and report an error (David Rader) + + + + + + + Change the behavior of contrib/cube's + cube ~> int + operator to make it compatible with KNN search (Alexander Korotkov) + + + + The meaning of the second argument (the dimension selector) has been + changed to make it predictable which value is selected even when + dealing with cubes of varying dimensionalities. + + + + This is an incompatible change, but since the point of the operator + was to be used in KNN searches, it seems rather useless as-is. + After installing this update, any expression indexes or materialized + views using this operator will need to be reindexed/refreshed. + + + + + + + Avoid triggering a libc assertion + in contrib/hstore, due to use + of memcpy() with equal source and destination + pointers (Tomas Vondra) + + + + + + + Fix incorrect display of tuples' null bitmaps + in contrib/pageinspect (Maksim Milyutin) + + + + + + + Fix incorrect output from contrib/pageinspect's + hash_page_items() function (Masahiko Sawada) + + + + + + + In contrib/postgres_fdw, avoid + outer pathkeys do not match mergeclauses + planner error when constructing a plan involving a remote join + (Robert Haas) + + + + + + + In contrib/postgres_fdw, avoid planner failure + when there are duplicate GROUP BY entries + (Jeevan Chalke) + + + + + + + Provide modern examples of how to auto-start Postgres on macOS + (Tom Lane) + + + + The scripts in contrib/start-scripts/osx use + infrastructure that's been deprecated for over a decade, and which no + longer works at all in macOS releases of the last couple of years. + Add a new subdirectory contrib/start-scripts/macos + containing scripts that use the newer launchd infrastructure. 
+ + + + + + + Fix incorrect selection of configuration-specific libraries for + OpenSSL on Windows (Andrew Dunstan) + + + + + + + Support linking to MinGW-built versions of libperl (Noah Misch) + + + + This allows building PL/Perl with some common Perl distributions for + Windows. + + + + + + + Fix MSVC build to test whether 32-bit libperl + needs -D_USE_32BIT_TIME_T (Noah Misch) + + + + Available Perl distributions are inconsistent about what they expect, + and lack any reliable means of reporting it, so resort to a build-time + test on what the library being used actually does. + + + + + + + On Windows, install the crash dump handler earlier in postmaster + startup (Takayuki Tsunakawa) + + + + This may allow collection of a core dump for some early-startup + failures that did not produce a dump before. + + + + + + + On Windows, avoid encoding-version-related crashes when emitting + messages very early in postmaster startup (Takayuki Tsunakawa) + + + + + + + Use our existing Motorola 68K spinlock code on OpenBSD as + well as NetBSD (David Carlier) + + + + + + + Add support for spinlocks on Motorola 88K (David Carlier) + + + + + + + Update time zone data files to tzdata + release 2018c for DST law changes in Brazil, Sao Tome and Principe, + plus historical corrections for Bolivia, Japan, and South Sudan. + The US/Pacific-New zone has been removed (it was + only an alias for America/Los_Angeles anyway). + + + + + + + + Release 10.1 From 957ff087c822c95f63df956e1a91c15614ecb2b4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 2 Feb 2018 18:26:07 -0500 Subject: [PATCH 0942/1087] Be more wary about shm_toc_lookup failure. Commit 445dbd82a basically missed the point of commit d46633506, which was that we shouldn't allow shm_toc_lookup() failure to lead to a core dump or assertion crash, because the odds of such a failure should never be considered negligible. It's correct that we can't expect the PARALLEL_KEY_ERROR_QUEUE TOC entry to be there if we have no workers. But if we have no workers, we're not going to do anything in this function with the lookup result anyway, so let's just skip it. That lets the code use the easy-to-prove-safe noError=false case, rather than anything requiring effort to review. Back-patch to v10, like the previous commit. Discussion: https://postgr.es/m/3647.1517601675@sss.pgh.pa.us --- src/backend/access/transam/parallel.c | 27 +++++++++++++++------------ 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c index a325933940..9d4efc0f8f 100644 --- a/src/backend/access/transam/parallel.c +++ b/src/backend/access/transam/parallel.c @@ -434,8 +434,6 @@ void ReinitializeParallelDSM(ParallelContext *pcxt) { FixedParallelState *fps; - char *error_queue_space; - int i; /* Wait for any old workers to exit. */ if (pcxt->nworkers_launched > 0) @@ -456,18 +454,23 @@ ReinitializeParallelDSM(ParallelContext *pcxt) fps->last_xlog_end = 0; /* Recreate error queues (if they exist). 
*/ - error_queue_space = - shm_toc_lookup(pcxt->toc, PARALLEL_KEY_ERROR_QUEUE, true); - Assert(pcxt->nworkers == 0 || error_queue_space != NULL); - for (i = 0; i < pcxt->nworkers; ++i) + if (pcxt->nworkers > 0) { - char *start; - shm_mq *mq; + char *error_queue_space; + int i; - start = error_queue_space + i * PARALLEL_ERROR_QUEUE_SIZE; - mq = shm_mq_create(start, PARALLEL_ERROR_QUEUE_SIZE); - shm_mq_set_receiver(mq, MyProc); - pcxt->worker[i].error_mqh = shm_mq_attach(mq, pcxt->seg, NULL); + error_queue_space = + shm_toc_lookup(pcxt->toc, PARALLEL_KEY_ERROR_QUEUE, false); + for (i = 0; i < pcxt->nworkers; ++i) + { + char *start; + shm_mq *mq; + + start = error_queue_space + i * PARALLEL_ERROR_QUEUE_SIZE; + mq = shm_mq_create(start, PARALLEL_ERROR_QUEUE_SIZE); + shm_mq_set_receiver(mq, MyProc); + pcxt->worker[i].error_mqh = shm_mq_attach(mq, pcxt->seg, NULL); + } } } From d59ff4ab3111f20054425d82dab1393101dcfe8e Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 2 Feb 2018 18:32:05 -0500 Subject: [PATCH 0943/1087] Fix another instance of unsafe coding for shm_toc_lookup failure. One or another author of commit 5bcf389ec seems to have thought that computing an offset from a NULL pointer would yield another NULL pointer. There may possibly be architectures where that works, but common machines don't work like that. Per a quick code review of places calling shm_toc_lookup and not using noError = false. --- src/backend/executor/nodeHash.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index c26b8ea44e..70553b8fdf 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -2582,9 +2582,13 @@ ExecHashInitializeWorker(HashState *node, ParallelWorkerContext *pwcxt) { SharedHashInfo *shared_info; + /* might not be there ... */ shared_info = (SharedHashInfo *) shm_toc_lookup(pwcxt->toc, node->ps.plan->plan_node_id, true); - node->hinstrument = &shared_info->hinstrument[ParallelWorkerNumber]; + if (shared_info) + node->hinstrument = &shared_info->hinstrument[ParallelWorkerNumber]; + else + node->hinstrument = NULL; } /* From bc38bdba04d75f7c39d57f3eba9c01958d8d2f7c Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 2 Feb 2018 21:10:59 -0500 Subject: [PATCH 0944/1087] doc: Fix index link The index entry was pointing to a slightly wrong location. --- doc/src/sgml/logicaldecoding.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml index fa101937e5..5501eed108 100644 --- a/doc/src/sgml/logicaldecoding.sgml +++ b/doc/src/sgml/logicaldecoding.sgml @@ -365,7 +365,7 @@ $ pg_recvlogical -d postgres --slot test --drop-slot Initialization Function - + _PG_output_plugin_init From 794eb3a8f00519fac561831dee35f4ee557bd08b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 2 Feb 2018 22:33:38 -0500 Subject: [PATCH 0945/1087] Minor copy-editing for 10.2 release notes. Second pass after taking a break ... --- doc/src/sgml/release-10.sgml | 41 ++++++++++++++++++------------------ 1 file changed, 20 insertions(+), 21 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 061d968f73..740fb916b0 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -89,7 +89,7 @@ Branch: REL9_6_STABLE [19648ce55] 2017-11-16 15:26:49 -0500 This is necessary to ensure that dead index entries get removed. 
The old code got it backwards, allowing vacuum to skip the cleanup if - some other process were running it concurrently, thus risking + some other process were running cleanup concurrently, thus risking invalid entries being left behind in the index. @@ -186,7 +186,7 @@ Branch: REL_10_STABLE [bdbf29aae] 2017-12-27 18:26:58 +0300 - This prevents poor behavior when rebuilding any hash indexes on the + This prevents poor behavior when rebuilding hash indexes on the table, since those use the pg_class statistics to govern the initial hash size. @@ -361,7 +361,7 @@ Branch: REL9_4_STABLE [f68c49f86] 2018-01-05 12:17:10 -0300 Logical decoding may spill WAL records to disk for transactions generating many WAL records. Normally these files are cleaned up - after the commit or abort record arrives for the transaction; but if + after the transaction's commit or abort record arrives; but if no such record is ever seen, the removal code misbehaved. @@ -413,8 +413,8 @@ Branch: master [d02974e32] 2017-12-29 16:28:32 +0100 Branch: REL_10_STABLE [b38c3d58e] 2017-12-29 16:22:43 +0100 --> - Show walsenders that are sending base backups as active - in pg_stat_activity (Magnus Hagander) + Show walsenders that are sending base backups as active in + the pg_stat_activity view (Magnus Hagander) @@ -448,11 +448,8 @@ Branch: REL9_3_STABLE [69e5b1e9c] 2017-11-26 09:50:53 -0800 --> Fix has_sequence_privilege() to - support WITH GRANT OPTION tests (Joe Conway) - - - - This case was already handled by other privilege-testing functions. + support WITH GRANT OPTION tests, + as other privilege-testing functions do (Joe Conway) @@ -544,9 +541,9 @@ Branch: REL_10_STABLE [1c77e9908] 2018-01-05 19:21:30 -0500 Given repeatedly-unlucky timing, a process attempting to awaken all - waiters for a condition variable could loop indefinitely. This - affects only parallel index scans and some operations on replication - slots. + waiters for a condition variable could loop indefinitely. Due to the + limited usage of condition variables in v10, this affects only + parallel index scans and some operations on replication slots. @@ -641,8 +638,8 @@ Branch: master [445dbd82a] 2017-11-28 12:15:38 -0500 Branch: REL_10_STABLE [dba6e75c1] 2017-11-28 12:19:19 -0500 --> - Avoid unnecessary failure when no error queues are created during - parallel query startup (Robert Haas) + Avoid unnecessary failure when no parallel workers can be obtained + during parallel query startup (Robert Haas) @@ -795,7 +792,8 @@ Branch: REL9_3_STABLE [45bfef7fb] 2018-01-10 17:13:29 -0500 - These functions are stated to be Oracle(TM)-compatible, but + These functions are stated to + be Oracle compatible, but they weren't exactly. In particular, there was a discrepancy in the interpretation of a negative third parameter: Oracle thinks that a negative value indicates the last place where the target substring can @@ -823,7 +821,7 @@ Branch: REL9_3_STABLE [ef115621c] 2018-01-22 12:06:19 -0500 --> Fix pg_dump to make ACL (permissions), - security label, and comment entries reliably identifiable in archive + comment, and security label entries reliably identifiable in archive output formats (Tom Lane) @@ -832,7 +830,7 @@ Branch: REL9_3_STABLE [ef115621c] 2018-01-22 12:06:19 -0500 just the name of the associated object. Make it start with the object type instead, bringing ACLs into line with the convention already used for comment and security label archive entries. 
Also, fix the - security label and comment entries for the whole database, if present, + comment and security label entries for the whole database, if present, to make their tags start with DATABASE so that they also follow this convention. This prevents false matches in code that tries to identify large-object-related entries by seeing if the tag @@ -858,7 +856,7 @@ Branch: REL9_5_STABLE [ea4cbf8f1] 2018-01-03 12:39:59 -0800 Rename pg_rewind's copy_file_range function to avoid conflict - with new Linux system call (Andres Freund) + with new Linux system call of that name (Andres Freund) @@ -1005,7 +1003,8 @@ Branch: REL9_3_STABLE [77b76fea9] 2017-11-17 12:47:44 -0500 infrastructure that's been deprecated for over a decade, and which no longer works at all in macOS releases of the last couple of years. Add a new subdirectory contrib/start-scripts/macos - containing scripts that use the newer launchd infrastructure. + containing scripts that use the newer launchd + infrastructure. @@ -1105,7 +1104,7 @@ Branch: REL9_4_STABLE [19cf9e96a] 2017-11-12 13:03:29 -0800 Branch: REL9_3_STABLE [30e99efe8] 2017-11-12 13:05:55 -0800 --> - On Windows, avoid encoding-version-related crashes when emitting + On Windows, avoid encoding-conversion-related crashes when emitting messages very early in postmaster startup (Takayuki Tsunakawa) From 1d81c093db81f63727436c736d9d27b7bedb4b66 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 3 Feb 2018 10:19:57 -0500 Subject: [PATCH 0946/1087] doc: Clarify psql --list documentation a bit more --- doc/src/sgml/ref/psql-ref.sgml | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index 7ea7edc3d1..6f9b30b673 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -283,11 +283,17 @@ EOF List all available databases, then exit. Other non-connection - options are ignored. If an explicit database name is not - found the postgres database, not the user's, - will be targeted for connection. This is similar to the meta-command + options are ignored. This is similar to the meta-command \list. + + + When this option is used, psql will connect + to the database postgres, unless a different database + is named on the command line (option or non-option + argument, possibly via a service entry, but not via an environment + variable). + From 4ac583f36a0c5452047531f56703b8ea51e718ad Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 3 Feb 2018 11:09:24 -0500 Subject: [PATCH 0947/1087] doc: Fix name in release notes Author: Alexander Lakhin --- doc/src/sgml/release-10.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 740fb916b0..3159f7a21f 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -5195,7 +5195,6 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Euler Taveira Fabien Coelho Fabrízio de Royes Mello - Fakhroutdinov Evgenievich Feike Steenbergen Felix Gerzaguet Filip Jirsák @@ -5374,6 +5373,7 @@ Branch: REL_10_STABLE [5159626af] 2017-11-03 14:14:16 -0400 Suraj Kharage Sveinn Sveinsson Sven R. Kunze + Tahir Fakhroutdinov Taiki Kondo Takayuki Tsunakawa Takeshi Ideriha From 64fb645914741ffc3aee646308416c209c6ff06b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 4 Feb 2018 11:46:28 -0500 Subject: [PATCH 0948/1087] Doc: minor clarifications in xindex.sgml. 
I noticed some slightly confusing or out-of-date verbiage here while working on the window RANGE patch. Seems worth committing separately. --- doc/src/sgml/xindex.sgml | 34 ++++++++++++++++++++++++---------- 1 file changed, 24 insertions(+), 10 deletions(-) diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index 2b4298065c..81c0cdc4f8 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -458,8 +458,9 @@ Compute the 64-bit hash value for a key given a 64-bit salt; if - the salt is 0, the low 32 bits will match the value that would - have been computed by function 1 + the salt is 0, the low 32 bits of the result must match the value + that would have been computed by function 1 + (optional) 2 @@ -1139,16 +1140,11 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD ordering. - - Comparison of arrays of user-defined types also relies on the semantics - defined by the default B-tree operator class. - - If there is no default B-tree operator class for a data type, the system will look for a default hash operator class. But since that kind of - operator class only provides equality, in practice it is only enough - to support array equality. + operator class only provides equality, it is only able to support grouping + not sorting. @@ -1168,7 +1164,25 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD - Another important point is that an operator that + Sorting by a non-default B-tree operator class is possible by specifying + the class's less-than operator in a USING option, + for example + +SELECT * FROM mytable ORDER BY somecol USING ~<~; + + Alternatively, specifying the class's greater-than operator + in USING selects a descending-order sort. + + + + Comparison of arrays of a user-defined type also relies on the semantics + defined by the type's default B-tree operator class. If there is no + default B-tree operator class, but there is a default hash operator class, + then array equality is supported, but not ordering comparisons. + + + + Another important point is that an equality operator that appears in a hash operator family is a candidate for hash joins, hash aggregation, and related optimizations. The hash operator family is essential here since it identifies the hash function(s) to use. From cf1cba3110f339eddecd66cdf7d8f9b4370f34c2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 4 Feb 2018 15:13:44 -0500 Subject: [PATCH 0949/1087] Release notes for 10.2, 9.6.7, 9.5.11, 9.4.16, 9.3.21. --- doc/src/sgml/release-9.3.sgml | 300 ++++++++++++++++++++++ doc/src/sgml/release-9.4.sgml | 341 +++++++++++++++++++++++++ doc/src/sgml/release-9.5.sgml | 393 +++++++++++++++++++++++++++++ doc/src/sgml/release-9.6.sgml | 457 ++++++++++++++++++++++++++++++++++ 4 files changed, 1491 insertions(+) diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml index 3c540bcc26..4f50bdf5e6 100644 --- a/doc/src/sgml/release-9.3.sgml +++ b/doc/src/sgml/release-9.3.sgml @@ -1,6 +1,306 @@ + + Release 9.3.21 + + + Release date: + 2018-02-08 + + + + This release contains a variety of fixes from 9.3.20. + For information about new features in the 9.3 major release, see + . + + + + Migration to Version 9.3.21 + + + A dump/restore is not required for those running 9.3.X. + + + + However, if you are upgrading from a version earlier than 9.3.18, + see . 
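 +
 +      Tying back to the xindex.sgml clarifications above: a descending
 +      sort by a non-default B-tree operator class uses the class's
 +      greater-than operator, for example (hypothetical table, using the
 +      text_pattern_ops operators):
 +
 +        SELECT * FROM mytable ORDER BY somecol USING ~>~;
 +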
+ + + + + Changes + + + + + + Fix vacuuming of tuples that were updated while key-share locked + (Andres Freund, Álvaro Herrera) + + + + In some cases VACUUM would fail to remove such + tuples even though they are now dead, leading to assorted data + corruption scenarios. + + + + + + Fix inadequate buffer locking in some LSN fetches (Jacob Champion, + Asim Praveen, Ashwin Agrawal) + + + + These errors could result in misbehavior under concurrent load. + The potential consequences have not been characterized fully. + + + + + + Avoid unnecessary failure in a query on an inheritance tree that + occurs concurrently with some child table being removed from the tree + by ALTER TABLE NO INHERIT (Tom Lane) + + + + + + Repair failure with correlated sub-SELECT + inside VALUES inside a LATERAL + subquery (Tom Lane) + + + + + + Fix could not devise a query plan for the given query + planner failure for some cases involving nested UNION + ALL inside a lateral subquery (Tom Lane) + + + + + + Fix has_sequence_privilege() to + support WITH GRANT OPTION tests, + as other privilege-testing functions do (Joe Conway) + + + + + + In databases using UTF8 encoding, ignore any XML declaration that + asserts a different encoding (Pavel Stehule, Noah Misch) + + + + We always store XML strings in the database encoding, so allowing + libxml to act on a declaration of another encoding gave wrong results. + In encodings other than UTF8, we don't promise to support non-ASCII + XML data anyway, so retain the previous behavior for bug compatibility. + This change affects only xpath() and related + functions; other XML code paths already acted this way. + + + + + + Provide for forward compatibility with future minor protocol versions + (Robert Haas, Badrul Chowdhury) + + + + Up to now, PostgreSQL servers simply + rejected requests to use protocol versions newer than 3.0, so that + there was no functional difference between the major and minor parts + of the protocol version number. Allow clients to request versions 3.x + without failing, sending back a message showing that the server only + understands 3.0. This makes no difference at the moment, but + back-patching this change should allow speedier introduction of future + minor protocol upgrades. + + + + + + Prevent stack-overflow crashes when planning extremely deeply + nested set operations + (UNION/INTERSECT/EXCEPT) + (Tom Lane) + + + + + + Fix null-pointer crashes for some types of LDAP URLs appearing + in pg_hba.conf (Thomas Munro) + + + + + + Fix sample INSTR() functions in the PL/pgSQL + documentation (Yugo Nagata, Tom Lane) + + + + These functions are stated to + be Oracle compatible, but + they weren't exactly. In particular, there was a discrepancy in the + interpretation of a negative third parameter: Oracle thinks that a + negative value indicates the last place where the target substring can + begin, whereas our functions took it as the last place where the + target can end. Also, Oracle throws an error for a zero or negative + fourth parameter, whereas our functions returned zero. + + + + The sample code has been adjusted to match Oracle's behavior more + precisely. Users who have copied this code into their applications + may wish to update their copies. + + + + + + Fix pg_dump to make ACL (permissions), + comment, and security label entries reliably identifiable in archive + output formats (Tom Lane) + + + + The tag portion of an ACL archive entry was usually + just the name of the associated object. 
Make it start with the object + type instead, bringing ACLs into line with the convention already used + for comment and security label archive entries. Also, fix the + comment and security label entries for the whole database, if present, + to make their tags start with DATABASE so that they + also follow this convention. This prevents false matches in code that + tries to identify large-object-related entries by seeing if the tag + starts with LARGE OBJECT. That could have resulted + in misclassifying entries as data rather than schema, with undesirable + results in a schema-only or data-only dump. + + + + Note that this change has user-visible results in the output + of pg_restore --list. + + + + + + In ecpg, detect indicator arrays that do + not have the correct length and report an error (David Rader) + + + + + + Avoid triggering a libc assertion + in contrib/hstore, due to use + of memcpy() with equal source and destination + pointers (Tomas Vondra) + + + + + + Provide modern examples of how to auto-start Postgres on macOS + (Tom Lane) + + + + The scripts in contrib/start-scripts/osx use + infrastructure that's been deprecated for over a decade, and which no + longer works at all in macOS releases of the last couple of years. + Add a new subdirectory contrib/start-scripts/macos + containing scripts that use the newer launchd + infrastructure. + + + + + + Fix incorrect selection of configuration-specific libraries for + OpenSSL on Windows (Andrew Dunstan) + + + + + + Support linking to MinGW-built versions of libperl (Noah Misch) + + + + This allows building PL/Perl with some common Perl distributions for + Windows. + + + + + + Fix MSVC build to test whether 32-bit libperl + needs -D_USE_32BIT_TIME_T (Noah Misch) + + + + Available Perl distributions are inconsistent about what they expect, + and lack any reliable means of reporting it, so resort to a build-time + test on what the library being used actually does. + + + + + + On Windows, install the crash dump handler earlier in postmaster + startup (Takayuki Tsunakawa) + + + + This may allow collection of a core dump for some early-startup + failures that did not produce a dump before. + + + + + + On Windows, avoid encoding-conversion-related crashes when emitting + messages very early in postmaster startup (Takayuki Tsunakawa) + + + + + + Use our existing Motorola 68K spinlock code on OpenBSD as + well as NetBSD (David Carlier) + + + + + + Add support for spinlocks on Motorola 88K (David Carlier) + + + + + + Update time zone data files to tzdata + release 2018c for DST law changes in Brazil, Sao Tome and Principe, + plus historical corrections for Bolivia, Japan, and South Sudan. + The US/Pacific-New zone has been removed (it was + only an alias for America/Los_Angeles anyway). + + + + + + + + Release 9.3.20 diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index 4ecf90d691..329e5ec0e6 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -1,6 +1,347 @@ + + Release 9.4.16 + + + Release date: + 2018-02-08 + + + + This release contains a variety of fixes from 9.4.15. + For information about new features in the 9.4 major release, see + . + + + + Migration to Version 9.4.16 + + + A dump/restore is not required for those running 9.4.X. + + + + However, if you are upgrading from a version earlier than 9.4.13, + see . 
+ + + + + Changes + + + + + + Fix vacuuming of tuples that were updated while key-share locked + (Andres Freund, Álvaro Herrera) + + + + In some cases VACUUM would fail to remove such + tuples even though they are now dead, leading to assorted data + corruption scenarios. + + + + + + Fix inadequate buffer locking in some LSN fetches (Jacob Champion, + Asim Praveen, Ashwin Agrawal) + + + + These errors could result in misbehavior under concurrent load. + The potential consequences have not been characterized fully. + + + + + + Avoid unnecessary failure in a query on an inheritance tree that + occurs concurrently with some child table being removed from the tree + by ALTER TABLE NO INHERIT (Tom Lane) + + + + + + Fix spurious deadlock failures when multiple sessions are + running CREATE INDEX CONCURRENTLY (Jeff Janes) + + + + + + Repair failure with correlated sub-SELECT + inside VALUES inside a LATERAL + subquery (Tom Lane) + + + + + + Fix could not devise a query plan for the given query + planner failure for some cases involving nested UNION + ALL inside a lateral subquery (Tom Lane) + + + + + + Fix logical decoding to correctly clean up disk files for crashed + transactions (Atsushi Torikoshi) + + + + Logical decoding may spill WAL records to disk for transactions + generating many WAL records. Normally these files are cleaned up + after the transaction's commit or abort record arrives; but if + no such record is ever seen, the removal code misbehaved. + + + + + + Fix walsender timeout failure and failure to respond to interrupts + when processing a large transaction (Petr Jelinek) + + + + + + Fix has_sequence_privilege() to + support WITH GRANT OPTION tests, + as other privilege-testing functions do (Joe Conway) + + + + + + In databases using UTF8 encoding, ignore any XML declaration that + asserts a different encoding (Pavel Stehule, Noah Misch) + + + + We always store XML strings in the database encoding, so allowing + libxml to act on a declaration of another encoding gave wrong results. + In encodings other than UTF8, we don't promise to support non-ASCII + XML data anyway, so retain the previous behavior for bug compatibility. + This change affects only xpath() and related + functions; other XML code paths already acted this way. + + + + + + Provide for forward compatibility with future minor protocol versions + (Robert Haas, Badrul Chowdhury) + + + + Up to now, PostgreSQL servers simply + rejected requests to use protocol versions newer than 3.0, so that + there was no functional difference between the major and minor parts + of the protocol version number. Allow clients to request versions 3.x + without failing, sending back a message showing that the server only + understands 3.0. This makes no difference at the moment, but + back-patching this change should allow speedier introduction of future + minor protocol upgrades. + + + + + + Cope with failure to start a parallel worker process + (Amit Kapila, Robert Haas) + + + + Parallel query previously tended to hang indefinitely if a worker + could not be started, as the result of fork() + failure or other low-probability problems. 
+ + + + + + Prevent stack-overflow crashes when planning extremely deeply + nested set operations + (UNION/INTERSECT/EXCEPT) + (Tom Lane) + + + + + + Fix null-pointer crashes for some types of LDAP URLs appearing + in pg_hba.conf (Thomas Munro) + + + + + + Fix sample INSTR() functions in the PL/pgSQL + documentation (Yugo Nagata, Tom Lane) + + + + These functions are stated to + be Oracle compatible, but + they weren't exactly. In particular, there was a discrepancy in the + interpretation of a negative third parameter: Oracle thinks that a + negative value indicates the last place where the target substring can + begin, whereas our functions took it as the last place where the + target can end. Also, Oracle throws an error for a zero or negative + fourth parameter, whereas our functions returned zero. + + + + The sample code has been adjusted to match Oracle's behavior more + precisely. Users who have copied this code into their applications + may wish to update their copies. + + + + + + Fix pg_dump to make ACL (permissions), + comment, and security label entries reliably identifiable in archive + output formats (Tom Lane) + + + + The tag portion of an ACL archive entry was usually + just the name of the associated object. Make it start with the object + type instead, bringing ACLs into line with the convention already used + for comment and security label archive entries. Also, fix the + comment and security label entries for the whole database, if present, + to make their tags start with DATABASE so that they + also follow this convention. This prevents false matches in code that + tries to identify large-object-related entries by seeing if the tag + starts with LARGE OBJECT. That could have resulted + in misclassifying entries as data rather than schema, with undesirable + results in a schema-only or data-only dump. + + + + Note that this change has user-visible results in the output + of pg_restore --list. + + + + + + In ecpg, detect indicator arrays that do + not have the correct length and report an error (David Rader) + + + + + + Avoid triggering a libc assertion + in contrib/hstore, due to use + of memcpy() with equal source and destination + pointers (Tomas Vondra) + + + + + + Provide modern examples of how to auto-start Postgres on macOS + (Tom Lane) + + + + The scripts in contrib/start-scripts/osx use + infrastructure that's been deprecated for over a decade, and which no + longer works at all in macOS releases of the last couple of years. + Add a new subdirectory contrib/start-scripts/macos + containing scripts that use the newer launchd + infrastructure. + + + + + + Fix incorrect selection of configuration-specific libraries for + OpenSSL on Windows (Andrew Dunstan) + + + + + + Support linking to MinGW-built versions of libperl (Noah Misch) + + + + This allows building PL/Perl with some common Perl distributions for + Windows. + + + + + + Fix MSVC build to test whether 32-bit libperl + needs -D_USE_32BIT_TIME_T (Noah Misch) + + + + Available Perl distributions are inconsistent about what they expect, + and lack any reliable means of reporting it, so resort to a build-time + test on what the library being used actually does. + + + + + + On Windows, install the crash dump handler earlier in postmaster + startup (Takayuki Tsunakawa) + + + + This may allow collection of a core dump for some early-startup + failures that did not produce a dump before. 
+ + + + + + On Windows, avoid encoding-conversion-related crashes when emitting + messages very early in postmaster startup (Takayuki Tsunakawa) + + + + + + Use our existing Motorola 68K spinlock code on OpenBSD as + well as NetBSD (David Carlier) + + + + + + Add support for spinlocks on Motorola 88K (David Carlier) + + + + + + Update time zone data files to tzdata + release 2018c for DST law changes in Brazil, Sao Tome and Principe, + plus historical corrections for Bolivia, Japan, and South Sudan. + The US/Pacific-New zone has been removed (it was + only an alias for America/Los_Angeles anyway). + + + + + + + + Release 9.4.15 diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index d79d953d51..9d18de4be9 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -1,6 +1,399 @@ + + Release 9.5.11 + + + Release date: + 2018-02-08 + + + + This release contains a variety of fixes from 9.5.10. + For information about new features in the 9.5 major release, see + . + + + + Migration to Version 9.5.11 + + + A dump/restore is not required for those running 9.5.X. + + + + However, if you are upgrading from a version earlier than 9.5.10, + see . + + + + + Changes + + + + + + Fix vacuuming of tuples that were updated while key-share locked + (Andres Freund, Álvaro Herrera) + + + + In some cases VACUUM would fail to remove such + tuples even though they are now dead, leading to assorted data + corruption scenarios. + + + + + + Fix inadequate buffer locking in some LSN fetches (Jacob Champion, + Asim Praveen, Ashwin Agrawal) + + + + These errors could result in misbehavior under concurrent load. + The potential consequences have not been characterized fully. + + + + + + Fix incorrect query results from cases involving flattening of + subqueries whose outputs are used in GROUPING SETS + (Heikki Linnakangas) + + + + + + Avoid unnecessary failure in a query on an inheritance tree that + occurs concurrently with some child table being removed from the tree + by ALTER TABLE NO INHERIT (Tom Lane) + + + + + + Fix spurious deadlock failures when multiple sessions are + running CREATE INDEX CONCURRENTLY (Jeff Janes) + + + + + + Fix failures when an inheritance tree contains foreign child tables + (Etsuro Fujita) + + + + A mix of regular and foreign tables in an inheritance tree resulted in + creation of incorrect plans for UPDATE + and DELETE queries. This led to visible failures in + some cases, notably when there are row-level triggers on a foreign + child table. + + + + + + Repair failure with correlated sub-SELECT + inside VALUES inside a LATERAL + subquery (Tom Lane) + + + + + + Fix could not devise a query plan for the given query + planner failure for some cases involving nested UNION + ALL inside a lateral subquery (Tom Lane) + + + + + + Fix logical decoding to correctly clean up disk files for crashed + transactions (Atsushi Torikoshi) + + + + Logical decoding may spill WAL records to disk for transactions + generating many WAL records. Normally these files are cleaned up + after the transaction's commit or abort record arrives; but if + no such record is ever seen, the removal code misbehaved. 
+ + + + + + Fix walsender timeout failure and failure to respond to interrupts + when processing a large transaction (Petr Jelinek) + + + + + + Fix has_sequence_privilege() to + support WITH GRANT OPTION tests, + as other privilege-testing functions do (Joe Conway) + + + + + + In databases using UTF8 encoding, ignore any XML declaration that + asserts a different encoding (Pavel Stehule, Noah Misch) + + + + We always store XML strings in the database encoding, so allowing + libxml to act on a declaration of another encoding gave wrong results. + In encodings other than UTF8, we don't promise to support non-ASCII + XML data anyway, so retain the previous behavior for bug compatibility. + This change affects only xpath() and related + functions; other XML code paths already acted this way. + + + + + + Provide for forward compatibility with future minor protocol versions + (Robert Haas, Badrul Chowdhury) + + + + Up to now, PostgreSQL servers simply + rejected requests to use protocol versions newer than 3.0, so that + there was no functional difference between the major and minor parts + of the protocol version number. Allow clients to request versions 3.x + without failing, sending back a message showing that the server only + understands 3.0. This makes no difference at the moment, but + back-patching this change should allow speedier introduction of future + minor protocol upgrades. + + + + + + Cope with failure to start a parallel worker process + (Amit Kapila, Robert Haas) + + + + Parallel query previously tended to hang indefinitely if a worker + could not be started, as the result of fork() + failure or other low-probability problems. + + + + + + Avoid unsafe alignment assumptions when working + with __int128 (Tom Lane) + + + + Typically, compilers assume that __int128 variables are + aligned on 16-byte boundaries, but our memory allocation + infrastructure isn't prepared to guarantee that, and increasing the + setting of MAXALIGN seems infeasible for multiple reasons. Adjust the + code to allow use of __int128 only when we can tell the + compiler to assume lesser alignment. The only known symptom of this + problem so far is crashes in some parallel aggregation queries. + + + + + + Prevent stack-overflow crashes when planning extremely deeply + nested set operations + (UNION/INTERSECT/EXCEPT) + (Tom Lane) + + + + + + Fix null-pointer crashes for some types of LDAP URLs appearing + in pg_hba.conf (Thomas Munro) + + + + + + Fix sample INSTR() functions in the PL/pgSQL + documentation (Yugo Nagata, Tom Lane) + + + + These functions are stated to + be Oracle compatible, but + they weren't exactly. In particular, there was a discrepancy in the + interpretation of a negative third parameter: Oracle thinks that a + negative value indicates the last place where the target substring can + begin, whereas our functions took it as the last place where the + target can end. Also, Oracle throws an error for a zero or negative + fourth parameter, whereas our functions returned zero. + + + + The sample code has been adjusted to match Oracle's behavior more + precisely. Users who have copied this code into their applications + may wish to update their copies. + + + + + + Fix pg_dump to make ACL (permissions), + comment, and security label entries reliably identifiable in archive + output formats (Tom Lane) + + + + The tag portion of an ACL archive entry was usually + just the name of the associated object. 
Make it start with the object + type instead, bringing ACLs into line with the convention already used + for comment and security label archive entries. Also, fix the + comment and security label entries for the whole database, if present, + to make their tags start with DATABASE so that they + also follow this convention. This prevents false matches in code that + tries to identify large-object-related entries by seeing if the tag + starts with LARGE OBJECT. That could have resulted + in misclassifying entries as data rather than schema, with undesirable + results in a schema-only or data-only dump. + + + + Note that this change has user-visible results in the output + of pg_restore --list. + + + + + + Rename pg_rewind's + copy_file_range function to avoid conflict + with new Linux system call of that name (Andres Freund) + + + + This change prevents build failures with newer glibc versions. + + + + + + In ecpg, detect indicator arrays that do + not have the correct length and report an error (David Rader) + + + + + + Avoid triggering a libc assertion + in contrib/hstore, due to use + of memcpy() with equal source and destination + pointers (Tomas Vondra) + + + + + + Provide modern examples of how to auto-start Postgres on macOS + (Tom Lane) + + + + The scripts in contrib/start-scripts/osx use + infrastructure that's been deprecated for over a decade, and which no + longer works at all in macOS releases of the last couple of years. + Add a new subdirectory contrib/start-scripts/macos + containing scripts that use the newer launchd + infrastructure. + + + + + + Fix incorrect selection of configuration-specific libraries for + OpenSSL on Windows (Andrew Dunstan) + + + + + + Support linking to MinGW-built versions of libperl (Noah Misch) + + + + This allows building PL/Perl with some common Perl distributions for + Windows. + + + + + + Fix MSVC build to test whether 32-bit libperl + needs -D_USE_32BIT_TIME_T (Noah Misch) + + + + Available Perl distributions are inconsistent about what they expect, + and lack any reliable means of reporting it, so resort to a build-time + test on what the library being used actually does. + + + + + + On Windows, install the crash dump handler earlier in postmaster + startup (Takayuki Tsunakawa) + + + + This may allow collection of a core dump for some early-startup + failures that did not produce a dump before. + + + + + + On Windows, avoid encoding-conversion-related crashes when emitting + messages very early in postmaster startup (Takayuki Tsunakawa) + + + + + + Use our existing Motorola 68K spinlock code on OpenBSD as + well as NetBSD (David Carlier) + + + + + + Add support for spinlocks on Motorola 88K (David Carlier) + + + + + + Update time zone data files to tzdata + release 2018c for DST law changes in Brazil, Sao Tome and Principe, + plus historical corrections for Bolivia, Japan, and South Sudan. + The US/Pacific-New zone has been removed (it was + only an alias for America/Los_Angeles anyway). + + + + + + + + Release 9.5.10 diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index ce040f1a5a..26025712be 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -1,6 +1,463 @@ + + Release 9.6.7 + + + Release date: + 2018-02-08 + + + + This release contains a variety of fixes from 9.6.6. + For information about new features in the 9.6 major release, see + . + + + + Migration to Version 9.6.7 + + + A dump/restore is not required for those running 9.6.X. 
+ + + + However, + if you use contrib/cube's ~> + operator, see the entry below about that. + + + + Also, if you are upgrading from a version earlier than 9.6.6, + see . + + + + + Changes + + + + + + Fix vacuuming of tuples that were updated while key-share locked + (Andres Freund, Álvaro Herrera) + + + + In some cases VACUUM would fail to remove such + tuples even though they are now dead, leading to assorted data + corruption scenarios. + + + + + + Ensure that vacuum will always clean up the pending-insertions list of + a GIN index (Masahiko Sawada) + + + + This is necessary to ensure that dead index entries get removed. + The old code got it backwards, allowing vacuum to skip the cleanup if + some other process were running cleanup concurrently, thus risking + invalid entries being left behind in the index. + + + + + + Fix inadequate buffer locking in some LSN fetches (Jacob Champion, + Asim Praveen, Ashwin Agrawal) + + + + These errors could result in misbehavior under concurrent load. + The potential consequences have not been characterized fully. + + + + + + Fix incorrect query results from cases involving flattening of + subqueries whose outputs are used in GROUPING SETS + (Heikki Linnakangas) + + + + + + Avoid unnecessary failure in a query on an inheritance tree that + occurs concurrently with some child table being removed from the tree + by ALTER TABLE NO INHERIT (Tom Lane) + + + + + + Fix spurious deadlock failures when multiple sessions are + running CREATE INDEX CONCURRENTLY (Jeff Janes) + + + + + + Fix failures when an inheritance tree contains foreign child tables + (Etsuro Fujita) + + + + A mix of regular and foreign tables in an inheritance tree resulted in + creation of incorrect plans for UPDATE + and DELETE queries. This led to visible failures in + some cases, notably when there are row-level triggers on a foreign + child table. + + + + + + Repair failure with correlated sub-SELECT + inside VALUES inside a LATERAL + subquery (Tom Lane) + + + + + + Fix could not devise a query plan for the given query + planner failure for some cases involving nested UNION + ALL inside a lateral subquery (Tom Lane) + + + + + + Fix logical decoding to correctly clean up disk files for crashed + transactions (Atsushi Torikoshi) + + + + Logical decoding may spill WAL records to disk for transactions + generating many WAL records. Normally these files are cleaned up + after the transaction's commit or abort record arrives; but if + no such record is ever seen, the removal code misbehaved. + + + + + + Fix walsender timeout failure and failure to respond to interrupts + when processing a large transaction (Petr Jelinek) + + + + + + Fix has_sequence_privilege() to + support WITH GRANT OPTION tests, + as other privilege-testing functions do (Joe Conway) + + + + + + In databases using UTF8 encoding, ignore any XML declaration that + asserts a different encoding (Pavel Stehule, Noah Misch) + + + + We always store XML strings in the database encoding, so allowing + libxml to act on a declaration of another encoding gave wrong results. + In encodings other than UTF8, we don't promise to support non-ASCII + XML data anyway, so retain the previous behavior for bug compatibility. + This change affects only xpath() and related + functions; other XML code paths already acted this way. 
+ + + + + + Provide for forward compatibility with future minor protocol versions + (Robert Haas, Badrul Chowdhury) + + + + Up to now, PostgreSQL servers simply + rejected requests to use protocol versions newer than 3.0, so that + there was no functional difference between the major and minor parts + of the protocol version number. Allow clients to request versions 3.x + without failing, sending back a message showing that the server only + understands 3.0. This makes no difference at the moment, but + back-patching this change should allow speedier introduction of future + minor protocol upgrades. + + + + + + Cope with failure to start a parallel worker process + (Amit Kapila, Robert Haas) + + + + Parallel query previously tended to hang indefinitely if a worker + could not be started, as the result of fork() + failure or other low-probability problems. + + + + + + Fix collection of EXPLAIN statistics from parallel + workers (Amit Kapila, Thomas Munro) + + + + + + Avoid unsafe alignment assumptions when working + with __int128 (Tom Lane) + + + + Typically, compilers assume that __int128 variables are + aligned on 16-byte boundaries, but our memory allocation + infrastructure isn't prepared to guarantee that, and increasing the + setting of MAXALIGN seems infeasible for multiple reasons. Adjust the + code to allow use of __int128 only when we can tell the + compiler to assume lesser alignment. The only known symptom of this + problem so far is crashes in some parallel aggregation queries. + + + + + + Prevent stack-overflow crashes when planning extremely deeply + nested set operations + (UNION/INTERSECT/EXCEPT) + (Tom Lane) + + + + + + Fix null-pointer crashes for some types of LDAP URLs appearing + in pg_hba.conf (Thomas Munro) + + + + + + Fix sample INSTR() functions in the PL/pgSQL + documentation (Yugo Nagata, Tom Lane) + + + + These functions are stated to + be Oracle compatible, but + they weren't exactly. In particular, there was a discrepancy in the + interpretation of a negative third parameter: Oracle thinks that a + negative value indicates the last place where the target substring can + begin, whereas our functions took it as the last place where the + target can end. Also, Oracle throws an error for a zero or negative + fourth parameter, whereas our functions returned zero. + + + + The sample code has been adjusted to match Oracle's behavior more + precisely. Users who have copied this code into their applications + may wish to update their copies. + + + + + + Fix pg_dump to make ACL (permissions), + comment, and security label entries reliably identifiable in archive + output formats (Tom Lane) + + + + The tag portion of an ACL archive entry was usually + just the name of the associated object. Make it start with the object + type instead, bringing ACLs into line with the convention already used + for comment and security label archive entries. Also, fix the + comment and security label entries for the whole database, if present, + to make their tags start with DATABASE so that they + also follow this convention. This prevents false matches in code that + tries to identify large-object-related entries by seeing if the tag + starts with LARGE OBJECT. That could have resulted + in misclassifying entries as data rather than schema, with undesirable + results in a schema-only or data-only dump. + + + + Note that this change has user-visible results in the output + of pg_restore --list. 
+ + + + + + Rename pg_rewind's + copy_file_range function to avoid conflict + with new Linux system call of that name (Andres Freund) + + + + This change prevents build failures with newer glibc versions. + + + + + + In ecpg, detect indicator arrays that do + not have the correct length and report an error (David Rader) + + + + + + Change the behavior of contrib/cube's + cube ~> int + operator to make it compatible with KNN search (Alexander Korotkov) + + + + The meaning of the second argument (the dimension selector) has been + changed to make it predictable which value is selected even when + dealing with cubes of varying dimensionalities. + + + + This is an incompatible change, but since the point of the operator + was to be used in KNN searches, it seems rather useless as-is. + After installing this update, any expression indexes or materialized + views using this operator will need to be reindexed/refreshed. + + + + + + Avoid triggering a libc assertion + in contrib/hstore, due to use + of memcpy() with equal source and destination + pointers (Tomas Vondra) + + + + + + Fix incorrect display of tuples' null bitmaps + in contrib/pageinspect (Maksim Milyutin) + + + + + + In contrib/postgres_fdw, avoid + outer pathkeys do not match mergeclauses + planner error when constructing a plan involving a remote join + (Robert Haas) + + + + + + Provide modern examples of how to auto-start Postgres on macOS + (Tom Lane) + + + + The scripts in contrib/start-scripts/osx use + infrastructure that's been deprecated for over a decade, and which no + longer works at all in macOS releases of the last couple of years. + Add a new subdirectory contrib/start-scripts/macos + containing scripts that use the newer launchd + infrastructure. + + + + + + Fix incorrect selection of configuration-specific libraries for + OpenSSL on Windows (Andrew Dunstan) + + + + + + Support linking to MinGW-built versions of libperl (Noah Misch) + + + + This allows building PL/Perl with some common Perl distributions for + Windows. + + + + + + Fix MSVC build to test whether 32-bit libperl + needs -D_USE_32BIT_TIME_T (Noah Misch) + + + + Available Perl distributions are inconsistent about what they expect, + and lack any reliable means of reporting it, so resort to a build-time + test on what the library being used actually does. + + + + + + On Windows, install the crash dump handler earlier in postmaster + startup (Takayuki Tsunakawa) + + + + This may allow collection of a core dump for some early-startup + failures that did not produce a dump before. + + + + + + On Windows, avoid encoding-conversion-related crashes when emitting + messages very early in postmaster startup (Takayuki Tsunakawa) + + + + + + Use our existing Motorola 68K spinlock code on OpenBSD as + well as NetBSD (David Carlier) + + + + + + Add support for spinlocks on Motorola 88K (David Carlier) + + + + + + Update time zone data files to tzdata + release 2018c for DST law changes in Brazil, Sao Tome and Principe, + plus historical corrections for Bolivia, Japan, and South Sudan. + The US/Pacific-New zone has been removed (it was + only an alias for America/Los_Angeles anyway). 
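Two of the changes in the list above lend themselves to a short illustration.
These are hedged sketches only: myseq is a hypothetical sequence, the XML
element and its content are invented, and a server encoding of UTF8 is
assumed for the second example.

    -- has_sequence_privilege() now accepts the WITH GRANT OPTION form,
    -- as the other privilege-testing functions already did
    CREATE SEQUENCE myseq;
    SELECT has_sequence_privilege('myseq', 'USAGE');
    SELECT has_sequence_privilege('myseq', 'USAGE WITH GRANT OPTION');

    -- In a UTF8 database, the embedded declaration's encoding attribute
    -- is now ignored: the value is already stored in the database
    -- encoding, so acting on the declaration would misread the text
    SELECT xpath('/msg/text()',
                 xmlparse(document
                   '<?xml version="1.0" encoding="LATIN1"?><msg>olá</msg>'));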
+ + + + + + + + Release 9.6.6 From ad14919ac901e9703d81a5bf8a6b608719c85b60 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 3 Feb 2018 11:29:23 -0500 Subject: [PATCH 0950/1087] doc: Update mentions of MD5 in the documentation Reported-by: Shay Rojansky --- doc/src/sgml/runtime.sgml | 34 +++++++++------------------------- 1 file changed, 9 insertions(+), 25 deletions(-) diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml index d162acb2e8..71f02300c2 100644 --- a/doc/src/sgml/runtime.sgml +++ b/doc/src/sgml/runtime.sgml @@ -2023,16 +2023,18 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - Password Storage Encryption + Password Encryption - By default, database user passwords are stored as MD5 hashes, so - the administrator cannot determine the actual password assigned - to the user. If MD5 encryption is used for client authentication, - the unencrypted password is never even temporarily present on the - server because the client MD5-encrypts it before being sent - across the network. + Database user passwords are stored as hashes (determined by the setting + ), so the administrator cannot + determine the actual password assigned to the user. If SCRAM or MD5 + encryption is used for client authentication, the unencrypted password is + never even temporarily present on the server because the client encrypts + it before being sent across the network. SCRAM is preferred, because it + is an Internet standard and is more secure than the PostgreSQL-specific + MD5 authentication protocol. @@ -2086,24 +2088,6 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433 - - Encrypting Passwords Across A Network - - - - The MD5 authentication method double-encrypts the - password on the client before sending it to the server. It first - MD5-encrypts it based on the user name, and then encrypts it - based on a random salt sent by the server when the database - connection was made. It is this double-encrypted value that is - sent over the network to the server. Double-encryption not only - prevents the password from being discovered, it also prevents - another connection from using the same encrypted password to - connect to the database server at a later time. - - - - Encrypting Data Across A Network From 05d0f13f0701d84e4e6784da336aabcc2dfc8ade Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 4 Feb 2018 22:14:07 -0500 Subject: [PATCH 0951/1087] Skip setting up shared instrumentation for Hash node if not needed. We don't need to set up the shared space for hash join instrumentation data if instrumentation hasn't been requested. Let's follow the example of the similar Sort node code and save a few cycles by skipping that when we can. This reverts commit d59ff4ab3 and instead allows us to use the safer choice of passing noError = false to shm_toc_lookup in ExecHashInitializeWorker, since if we reach that call there should be a TOC entry to be found. 
Thomas Munro Discussion: https://postgr.es/m/E1ehkoZ-0005uW-43%40gemulon.postgresql.org --- src/backend/executor/nodeHash.c | 23 +++++++++++++++++------ 1 file changed, 17 insertions(+), 6 deletions(-) diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index 70553b8fdf..b10f847452 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -2549,6 +2549,10 @@ ExecHashEstimate(HashState *node, ParallelContext *pcxt) { size_t size; + /* don't need this if not instrumenting or no workers */ + if (!node->ps.instrument || pcxt->nworkers == 0) + return; + size = mul_size(pcxt->nworkers, sizeof(HashInstrumentation)); size = add_size(size, offsetof(SharedHashInfo, hinstrument)); shm_toc_estimate_chunk(&pcxt->estimator, size); @@ -2564,6 +2568,10 @@ ExecHashInitializeDSM(HashState *node, ParallelContext *pcxt) { size_t size; + /* don't need this if not instrumenting or no workers */ + if (!node->ps.instrument || pcxt->nworkers == 0) + return; + size = offsetof(SharedHashInfo, hinstrument) + pcxt->nworkers * sizeof(HashInstrumentation); node->shared_info = (SharedHashInfo *) shm_toc_allocate(pcxt->toc, size); @@ -2582,13 +2590,13 @@ ExecHashInitializeWorker(HashState *node, ParallelWorkerContext *pwcxt) { SharedHashInfo *shared_info; - /* might not be there ... */ + /* don't need this if not instrumenting */ + if (!node->ps.instrument) + return; + shared_info = (SharedHashInfo *) - shm_toc_lookup(pwcxt->toc, node->ps.plan->plan_node_id, true); - if (shared_info) - node->hinstrument = &shared_info->hinstrument[ParallelWorkerNumber]; - else - node->hinstrument = NULL; + shm_toc_lookup(pwcxt->toc, node->ps.plan->plan_node_id, false); + node->hinstrument = &shared_info->hinstrument[ParallelWorkerNumber]; } /* @@ -2614,6 +2622,9 @@ ExecHashRetrieveInstrumentation(HashState *node) SharedHashInfo *shared_info = node->shared_info; size_t size; + if (shared_info == NULL) + return; + /* Replace node->shared_info with a copy in backend-local memory. */ size = offsetof(SharedHashInfo, hinstrument) + shared_info->num_workers * sizeof(HashInstrumentation); From 3492a0af0bd37e7f23e27fd3f5537f414ee9ab9b Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 5 Feb 2018 10:37:30 -0500 Subject: [PATCH 0952/1087] Fix RelationBuildPartitionKey's processing of partition key expressions. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Failure to advance the list pointer while reading partition expressions from a list results in invoking an input function with inappropriate data, possibly leading to crashes or, with carefully crafted input, disclosure of arbitrary backend memory. Bug discovered independently by Álvaro Herrera and David Rowley. This patch is by Álvaro but owes something to David's proposed fix. Back-patch to v10 where the issue was introduced. 
Security: CVE-2018-1052 --- src/backend/utils/cache/relcache.c | 5 ++++ src/test/regress/expected/create_table.out | 29 ++++++++++++++++------ src/test/regress/sql/create_table.sql | 9 +++++-- 3 files changed, 34 insertions(+), 9 deletions(-) diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index c081b88b73..d5cc246156 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -983,9 +983,14 @@ RelationBuildPartitionKey(Relation relation) } else { + if (partexprs_item == NULL) + elog(ERROR, "wrong number of partition key expressions"); + key->parttypid[i] = exprType(lfirst(partexprs_item)); key->parttypmod[i] = exprTypmod(lfirst(partexprs_item)); key->parttypcoll[i] = exprCollation(lfirst(partexprs_item)); + + partexprs_item = lnext(partexprs_item); } get_typlenbyvalalign(key->parttypid[i], &key->parttyplen[i], diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index e554ec4844..f5e56365f5 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -419,8 +419,9 @@ DETAIL: table partitioned depends on function plusone(integer) HINT: Use DROP ... CASCADE to drop the dependent objects too. -- partitioned table cannot participate in regular inheritance CREATE TABLE partitioned2 ( - a int -) PARTITION BY LIST ((a+1)); + a int, + b text +) PARTITION BY RANGE ((a+1), substr(b, 1, 5)); CREATE TABLE fail () INHERITS (partitioned2); ERROR: cannot inherit from partitioned table "partitioned2" -- Partition key in describe output @@ -436,13 +437,27 @@ Partition key: RANGE (a oid_ops, plusone(b), c, d COLLATE "C") Number of partitions: 0 \d+ partitioned2 - Table "public.partitioned2" - Column | Type | Collation | Nullable | Default | Storage | Stats target | Description ---------+---------+-----------+----------+---------+---------+--------------+------------- - a | integer | | | | plain | | -Partition key: LIST (((a + 1))) + Table "public.partitioned2" + Column | Type | Collation | Nullable | Default | Storage | Stats target | Description +--------+---------+-----------+----------+---------+----------+--------------+------------- + a | integer | | | | plain | | + b | text | | | | extended | | +Partition key: RANGE (((a + 1)), substr(b, 1, 5)) Number of partitions: 0 +INSERT INTO partitioned2 VALUES (1, 'hello'); +ERROR: no partition of relation "partitioned2" found for row +DETAIL: Partition key of the failing row contains ((a + 1), substr(b, 1, 5)) = (2, hello). 
+CREATE TABLE part2_1 PARTITION OF partitioned2 FOR VALUES FROM (-1, 'aaaaa') TO (100, 'ccccc'); +\d+ part2_1 + Table "public.part2_1" + Column | Type | Collation | Nullable | Default | Storage | Stats target | Description +--------+---------+-----------+----------+---------+----------+--------------+------------- + a | integer | | | | plain | | + b | text | | | | extended | | +Partition of: partitioned2 FOR VALUES FROM ('-1', 'aaaaa') TO (100, 'ccccc') +Partition constraint: (((a + 1) IS NOT NULL) AND (substr(b, 1, 5) IS NOT NULL) AND (((a + 1) > '-1'::integer) OR (((a + 1) = '-1'::integer) AND (substr(b, 1, 5) >= 'aaaaa'::text))) AND (((a + 1) < 100) OR (((a + 1) = 100) AND (substr(b, 1, 5) < 'ccccc'::text)))) + DROP TABLE partitioned, partitioned2; -- -- Partitions diff --git a/src/test/regress/sql/create_table.sql b/src/test/regress/sql/create_table.sql index a71d9ae7ab..fdd6d14104 100644 --- a/src/test/regress/sql/create_table.sql +++ b/src/test/regress/sql/create_table.sql @@ -419,14 +419,19 @@ DROP FUNCTION plusone(int); -- partitioned table cannot participate in regular inheritance CREATE TABLE partitioned2 ( - a int -) PARTITION BY LIST ((a+1)); + a int, + b text +) PARTITION BY RANGE ((a+1), substr(b, 1, 5)); CREATE TABLE fail () INHERITS (partitioned2); -- Partition key in describe output \d partitioned \d+ partitioned2 +INSERT INTO partitioned2 VALUES (1, 'hello'); +CREATE TABLE part2_1 PARTITION OF partitioned2 FOR VALUES FROM (-1, 'aaaaa') TO (100, 'ccccc'); +\d+ part2_1 + DROP TABLE partitioned, partitioned2; -- From a926eb84e07a604da6d059eca1fd87f919bb5d7a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 5 Feb 2018 10:58:27 -0500 Subject: [PATCH 0953/1087] Ensure that all temp files made during pg_upgrade are non-world-readable. pg_upgrade has always attempted to ensure that the transient dump files it creates are inaccessible except to the owner. However, refactoring in commit 76a7650c4 broke that for the file containing "pg_dumpall -g" output; since then, that file was protected according to the process's default umask. Since that file may contain role passwords (hopefully encrypted, but passwords nonetheless), this is a particularly unfortunate oversight. Prudent users of pg_upgrade on multiuser systems would probably run it under a umask tight enough that the issue is moot, but perhaps some users are depending only on pg_upgrade's umask changes to protect their data. To fix this in a future-proof way, let's just tighten the umask at process start. There are no files pg_upgrade needs to write at a weaker security level; and if there were, transiently relaxing the umask around where they're created would be a safer approach. Report and patch by Tom Lane; the idea for the fix is due to Noah Misch. Back-patch to all supported branches. Security: CVE-2018-1053 --- src/bin/pg_upgrade/dump.c | 10 ---------- src/bin/pg_upgrade/file.c | 15 --------------- src/bin/pg_upgrade/pg_upgrade.c | 4 ++++ src/bin/pg_upgrade/pg_upgrade.h | 4 +++- 4 files changed, 7 insertions(+), 26 deletions(-) diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c index 8a662e9865..def22c6521 100644 --- a/src/bin/pg_upgrade/dump.c +++ b/src/bin/pg_upgrade/dump.c @@ -18,7 +18,6 @@ void generate_old_dump(void) { int dbnum; - mode_t old_umask; prep_status("Creating dump of global objects"); @@ -33,13 +32,6 @@ generate_old_dump(void) prep_status("Creating dump of database schemas\n"); - /* - * Set umask for this function, all functions it calls, and all - * subprocesses/threads it creates. 
We can't use fopen_priv() as Windows - * uses threads and umask is process-global. - */ - old_umask = umask(S_IRWXG | S_IRWXO); - /* create per-db dump files */ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++) { @@ -74,8 +66,6 @@ generate_old_dump(void) while (reap_child(true) == true) ; - umask(old_umask); - end_progress_output(); check_ok(); } diff --git a/src/bin/pg_upgrade/file.c b/src/bin/pg_upgrade/file.c index f88e3d558f..f38bfacf02 100644 --- a/src/bin/pg_upgrade/file.c +++ b/src/bin/pg_upgrade/file.c @@ -314,18 +314,3 @@ win32_pghardlink(const char *src, const char *dst) return 0; } #endif - - -/* fopen() file with no group/other permissions */ -FILE * -fopen_priv(const char *path, const char *mode) -{ - mode_t old_umask = umask(S_IRWXG | S_IRWXO); - FILE *fp; - - fp = fopen(path, mode); - - umask(old_umask); /* we assume this can't change errno */ - - return fp; -} diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index a67e484a85..3f57a25b05 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -77,6 +77,10 @@ main(int argc, char **argv) bool live_check = false; set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_upgrade")); + + /* Ensure that all files created by pg_upgrade are non-world-readable */ + umask(S_IRWXG | S_IRWXO); + parseCommandLine(argc, argv); get_restricted_token(os_info.progname); diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h index 67f874b4f1..7e5e971294 100644 --- a/src/bin/pg_upgrade/pg_upgrade.h +++ b/src/bin/pg_upgrade/pg_upgrade.h @@ -374,7 +374,9 @@ void linkFile(const char *src, const char *dst, void rewriteVisibilityMap(const char *fromfile, const char *tofile, const char *schemaName, const char *relName); void check_hard_link(void); -FILE *fopen_priv(const char *path, const char *mode); + +/* fopen_priv() is no longer different from fopen() */ +#define fopen_priv(path, mode) fopen(path, mode) /* function.c */ From 1eb5d43beed9d8cdc61377867f0a53eb2cfba0c4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 5 Feb 2018 14:43:40 -0500 Subject: [PATCH 0954/1087] Last-minute updates for release notes. Security: CVE-2018-1052, CVE-2018-1053 --- doc/src/sgml/release-10.sgml | 49 +++++++++++++++++++++++++++++++++++ doc/src/sgml/release-9.3.sgml | 22 ++++++++++++++++ doc/src/sgml/release-9.4.sgml | 22 ++++++++++++++++ doc/src/sgml/release-9.5.sgml | 22 ++++++++++++++++ doc/src/sgml/release-9.6.sgml | 22 ++++++++++++++++ 5 files changed, 137 insertions(+) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 3159f7a21f..7b0fde2b93 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -41,6 +41,55 @@ + + Fix processing of partition keys containing multiple expressions + (Álvaro Herrera, David Rowley) + + + + This error led to crashes or, with carefully crafted input, disclosure + of arbitrary backend memory. + (CVE-2018-1052) + + + + + + + Ensure that all temporary files made + by pg_upgrade are non-world-readable + (Tom Lane, Noah Misch) + + + + pg_upgrade normally restricts its + temporary files to be readable and writable only by the calling user. + But the temporary file containing pg_dumpall -g + output would be group- or world-readable, or even writable, if the + user's umask setting allows. 
In typical usage on
+   multi-user machines, the umask and/or the working
+   directory's permissions would be tight enough to prevent problems;
+   but there may be people using pg_upgrade
+   in scenarios where this oversight would permit disclosure of database
+   passwords to unfriendly eyes.
+   (CVE-2018-1053)
+
+
+
+
+
+
+
+B-Tree Indexes
+
+
+ index
+ B-Tree
+
+
+
+ Introduction
+
+
+  PostgreSQL includes an implementation of the
+  standard btree (multi-way balanced tree) index data
+  structure.  Any data type that can be sorted into a well-defined linear
+  order can be indexed by a btree index.  The only limitation is that an
+  index entry cannot exceed approximately one-third of a page (after TOAST
+  compression, if applicable).
+
+
+
+  Because each btree operator class imposes a sort order on its data type,
+  btree operator classes (or, really, operator families) have come to be
+  used as PostgreSQL's general representation
+  and understanding of sorting semantics.  Therefore, they've acquired
+  some features that go beyond what would be needed just to support btree
+  indexes, and parts of the system that are quite distant from the
+  btree AM make use of them.
+
+
+
+
+
+ Behavior of B-Tree Operator Classes
+
+
+  As shown in , a btree operator
+  class must provide five comparison operators,
+  <,
+  <=,
+  =,
+  >= and
+  >.
+  One might expect that <> should also be part of
+  the operator class, but it is not, because it would almost never be
+  useful to use a <> WHERE clause in an index
+  search.  (For some purposes, the planner treats <>
+  as associated with a btree operator class; but it finds that operator via
+  the = operator's negator link, rather than
+  from pg_amop.)
+
+
+
+  When several data types share near-identical sorting semantics, their
+  operator classes can be grouped into an operator family.  Doing so is
+  advantageous because it allows the planner to make deductions about
+  cross-type comparisons.  Each operator class within the family should
+  contain the single-type operators (and associated support functions)
+  for its input data type, while cross-type comparison operators and
+  support functions are loose in the family.  It is
+  recommended that a complete set of cross-type operators be included
+  in the family, thus ensuring that the planner can represent any
+  comparison conditions that it deduces from transitivity.
+
+
+
+  There are some basic assumptions that a btree operator family must
+  satisfy:
+
+
+
+
+
+    An = operator must be an equivalence relation; that
+    is, for all non-null values A,
+    B, C of the
+    data type:
+
+
+
+
+      A =
+      A is true
+      (reflexive law)
+
+
+
+
+      if A =
+      B,
+      then B =
+      A
+      (symmetric law)
+
+
+
+
+      if A =
+      B and B
+      = C,
+      then A =
+      C
+      (transitive law)
+
+
+
+
+
+
+
+
+    A < operator must be a strong ordering relation;
+    that is, for all non-null values A,
+    B, C:
+
+
+
+
+      A <
+      A is false
+      (irreflexive law)
+
+
+
+
+      if A <
+      B
+      and B <
+      C,
+      then A <
+      C
+      (transitive law)
+
+
+
+
+
+
+
+
+    Furthermore, the ordering is total; that is, for all non-null
+    values A, B:
+
+
+
+
+      exactly one of A <
+      B, A
+      = B, and
+      B <
+      A is true
+      (trichotomy law)
+
+
+
+
+    (The trichotomy law justifies the definition of the comparison support
+    function, of course.)
+
+
+
+
+
+  The other three operators are defined in terms of =
+  and < in the obvious way, and must act consistently
+  with them.
+
+
+
+  For an operator family supporting multiple data types, the above laws must
+  hold when A, B,
+  C are taken from any data types in the family.
+ The transitive laws are the trickiest to ensure, as in cross-type + situations they represent statements that the behaviors of two or three + different operators are consistent. + As an example, it would not work to put float8 + and numeric into the same operator family, at least not with + the current semantics that numeric values are converted + to float8 for comparison to a float8. Because + of the limited accuracy of float8, this means there are + distinct numeric values that will compare equal to the + same float8 value, and thus the transitive law would fail. + + + + Another requirement for a multiple-data-type family is that any implicit + or binary-coercion casts that are defined between data types included in + the operator family must not change the associated sort ordering. + + + + It should be fairly clear why a btree index requires these laws to hold + within a single data type: without them there is no ordering to arrange + the keys with. Also, index searches using a comparison key of a + different data type require comparisons to behave sanely across two + data types. The extensions to three or more data types within a family + are not strictly required by the btree index mechanism itself, but the + planner relies on them for optimization purposes. + + + + + + B-Tree Support Functions + + + As shown in , btree defines + one required and one optional support function. + + + + For each combination of data types that a btree operator family provides + comparison operators for, it must provide a comparison support function, + registered in pg_amproc with support function + number 1 and + amproclefttype/amprocrighttype + equal to the left and right data types for the comparison (i.e., the + same data types that the matching operators are registered with + in pg_amop). + The comparison function must take two non-null values + A and B and + return an int32 value that + is < 0, 0, + or > 0 + when A < + B, A + = B, + or A > + B, respectively. The function must not + return INT_MIN for the A + < B case, + since the value may be negated before being tested for sign. A null + result is disallowed, too. + See src/backend/access/nbtree/nbtcompare.c for + examples. + + + + If the compared values are of a collatable data type, the appropriate + collation OID will be passed to the comparison support function, using + the standard PG_GET_COLLATION() mechanism. + + + + Optionally, a btree operator family may provide sort + support function(s), registered under support function number + 2. These functions allow implementing comparisons for sorting purposes + in a more efficient way than naively calling the comparison support + function. The APIs involved in this are defined in + src/include/utils/sortsupport.h. + + + + + + Implementation + + + An introduction to the btree index implementation can be found in + src/backend/access/nbtree/README. 
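To make the preceding requirements concrete, here is a minimal sketch of how
a comparison support function is registered, following the pattern used in
the xindex documentation. Everything named here is hypothetical: mytype is
an imagined type whose comparison operators already exist, and mytype_cmp is
assumed to obey the rules above (return an int32 below, equal to, or above
zero; never INT_MIN; never NULL).

    CREATE OPERATOR CLASS mytype_ops
        DEFAULT FOR TYPE mytype USING btree AS
            OPERATOR 1  < ,
            OPERATOR 2  <= ,
            OPERATOR 3  = ,
            OPERATOR 4  >= ,
            OPERATOR 5  > ,
            -- support function 1: the comparison function described above
            FUNCTION 1  mytype_cmp(mytype, mytype);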
+ + + + + diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml index a72c50eadb..732b8ab7d0 100644 --- a/doc/src/sgml/filelist.sgml +++ b/doc/src/sgml/filelist.sgml @@ -83,6 +83,7 @@ + diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml index 041afdbd86..054347b17d 100644 --- a/doc/src/sgml/postgres.sgml +++ b/doc/src/sgml/postgres.sgml @@ -252,6 +252,7 @@ &geqo; &indexam; &generic-wal; + &btree; &gist; &spgist; &gin; diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index 81c0cdc4f8..e40131473f 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -35,7 +35,7 @@ PostgreSQL, but all index methods are described in pg_am. It is possible to add a new index access method by writing the necessary code and - then creating a row in pg_am — but that is + then creating an entry in pg_am — but that is beyond the scope of this chapter (see ). @@ -404,6 +404,8 @@ B-trees require a single support function, and allow a second one to be supplied at the operator class author's option, as shown in . + The requirements for these support functions are explained further in + .
@@ -426,8 +428,8 @@ - Return the addresses of C-callable sort support function(s), - as documented in utils/sortsupport.h (optional) + Return the addresses of C-callable sort support function(s) + (optional) 2 @@ -1056,11 +1058,8 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD In a B-tree operator family, all the operators in the family must sort - compatibly, meaning that the transitive laws hold across all the data types - supported by the family: if A = B and B = C, then A = C, - and if A < B and B < C, then A < C. Moreover, implicit - or binary coercion casts between types represented in the operator family - must not change the associated sort ordering. For each + compatibly, as is specified in detail in . + For each operator in the family there must be a support function having the same two input data types as the operator. It is recommended that a family be complete, i.e., for each combination of data types, all operators are diff --git a/src/backend/access/nbtree/README b/src/backend/access/nbtree/README index a3f11da8d5..34f78b2f50 100644 --- a/src/backend/access/nbtree/README +++ b/src/backend/access/nbtree/README @@ -623,56 +623,3 @@ routines must treat it accordingly. The actual key stored in the item is irrelevant, and need not be stored at all. This arrangement corresponds to the fact that an L&Y non-leaf page has one more pointer than key. - -Notes to Operator Class Implementors ------------------------------------- - -With this implementation, we require each supported combination of -datatypes to supply us with a comparison procedure via pg_amproc. -This procedure must take two nonnull values A and B and return an int32 < 0, -0, or > 0 if A < B, A = B, or A > B, respectively. The procedure must -not return INT_MIN for "A < B", since the value may be negated before -being tested for sign. A null result is disallowed, too. See nbtcompare.c -for examples. - -There are some basic assumptions that a btree operator family must satisfy: - -An = operator must be an equivalence relation; that is, for all non-null -values A,B,C of the datatype: - - A = A is true reflexive law - if A = B, then B = A symmetric law - if A = B and B = C, then A = C transitive law - -A < operator must be a strong ordering relation; that is, for all non-null -values A,B,C: - - A < A is false irreflexive law - if A < B and B < C, then A < C transitive law - -Furthermore, the ordering is total; that is, for all non-null values A,B: - - exactly one of A < B, A = B, and B < A is true trichotomy law - -(The trichotomy law justifies the definition of the comparison support -procedure, of course.) - -The other three operators are defined in terms of these two in the obvious way, -and must act consistently with them. - -For an operator family supporting multiple datatypes, the above laws must hold -when A,B,C are taken from any datatypes in the family. The transitive laws -are the trickiest to ensure, as in cross-type situations they represent -statements that the behaviors of two or three different operators are -consistent. As an example, it would not work to put float8 and numeric into -an opfamily, at least not with the current semantics that numerics are -converted to float8 for comparison to a float8. Because of the limited -accuracy of float8, this means there are distinct numeric values that will -compare equal to the same float8 value, and thus the transitive law fails. 
-
-It should be fairly clear why a btree index requires these laws to hold within
-a single datatype: without them there is no ordering to arrange the keys with.
-Also, index searches using a key of a different datatype require comparisons
-to behave sanely across two datatypes.  The extensions to three or more
-datatypes within a family are not strictly required by the btree index
-mechanism itself, but the planner relies on them for optimization purposes.

From 9fafa413ac602624e10f61ef44a20c17029d43d8 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Tue, 6 Feb 2018 14:24:57 -0500
Subject: [PATCH 0957/1087] Avoid valgrind complaint about write() of uninitialized bytes.

LogicalTapeFreeze() may write out its first block when it is dirty but
not full, and then immediately read the first block back in from its
BufFile as a BLCKSZ-width block.  This can only occur in rare cases
where very few tuples were written out, which is currently only
possible with parallel external tuplesorts.  To avoid valgrind
complaints, tell it to treat the tail of logtape.c's buffer as defined.

Commit 9da0cc35284bdbe8d442d732963303ff0e0a40bc exposed this problem
but did not create it.  LogicalTapeFreeze() has always tended to write
out some amount of garbage bytes, but previously never wrote less than
one block of data in total, so the problem was masked.

Per buildfarm members lousyjack and skink.

Peter Geoghegan, based on a suggestion from Tom Lane and me.  Some
comment revisions by me.
---
 src/backend/utils/sort/logtape.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/src/backend/utils/sort/logtape.c b/src/backend/utils/sort/logtape.c
index 6b7c10bcfc..66bfcced8d 100644
--- a/src/backend/utils/sort/logtape.c
+++ b/src/backend/utils/sort/logtape.c
@@ -86,6 +86,7 @@
 #include "storage/buffile.h"
 #include "utils/builtins.h"
 #include "utils/logtape.h"
+#include "utils/memdebug.h"
 #include "utils/memutils.h"

 /*
@@ -874,6 +875,17 @@ LogicalTapeFreeze(LogicalTapeSet *lts, int tapenum, TapeShare *share)
 	 */
 	if (lt->dirty)
 	{
+		/*
+		 * As long as we've filled the buffer at least once, its contents are
+		 * entirely defined from valgrind's point of view, even though
+		 * contents beyond the current end point may be stale.  But it's
+		 * possible - at least in the case of a parallel sort - to sort such
+		 * small amount of data that we do not fill the buffer even once.  Tell
+		 * valgrind that its contents are defined, so it doesn't bleat.
+		 */
+		VALGRIND_MAKE_MEM_DEFINED(lt->buffer + lt->nbytes,
+								  lt->buffer_size - lt->nbytes);
+
 		TapeBlockSetNBytes(lt->buffer, lt->nbytes);
 		ltsWriteBlock(lts, lt->curBlockNumber, (void *) lt->buffer);
 		lt->writing = false;

From 23209457314f6fd89fcd251a8173b0129aaa95a2 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Tue, 6 Feb 2018 15:50:13 -0500
Subject: [PATCH 0958/1087] Fix incorrect grammar.

Etsuro Fujita

Discussion: http://postgr.es/m/5A7981EA.8020201@lab.ntt.co.jp
---
 src/backend/executor/execPartition.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index 106a96d910..ba6b52c32c 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -360,7 +360,7 @@ ExecSetupChildParentMapForLeaf(PartitionTupleRouting *proute)
 	Assert(proute != NULL);

 	/*
-	 * These array elements gets filled up with maps on an on-demand basis.
+	 * These array elements get filled up with maps on an on-demand basis.
 	 * Initially just set all of them to NULL.
*/ proute->child_parent_tupconv_maps = From 0a459cec96d3856f476c2db298c6b52f592894e8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 7 Feb 2018 00:06:50 -0500 Subject: [PATCH 0959/1087] Support all SQL:2011 options for window frame clauses. This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING" frame boundaries in window functions. We'd punted on that back in the original patch to add window functions, because it was not clear how to do it in a reasonably data-type-extensible fashion. That problem is resolved here by adding the ability for btree operator classes to provide an "in_range" support function that defines how to add or subtract the RANGE offset value. Factoring it this way also allows the operator class to avoid overflow problems near the ends of the datatype's range, if it wishes to expend effort on that. (In the committed patch, the integer opclasses handle that issue, but it did not seem worth the trouble to avoid overflow failures for datetime types.) The patch includes in_range support for the integer_ops opfamily (int2/int4/int8) as well as the standard datetime types. Support for other numeric types has been requested, but that seems like suitable material for a follow-on patch. In addition, the patch adds GROUPS mode which counts the offset in ORDER-BY peer groups rather than rows, and it adds the frame_exclusion options specified by SQL:2011. As far as I can see, we are now fully up to spec on window framing options. Existing behaviors remain unchanged, except that I changed the errcode for a couple of existing error reports to meet the SQL spec's expectation that negative "offset" values should be reported as SQLSTATE 22013. Internally and in relevant parts of the documentation, we now consistently use the terminology "offset PRECEDING/FOLLOWING" rather than "value PRECEDING/FOLLOWING", since the term "value" is confusingly vague. 
Oliver Ford, reviewed and whacked around some by me Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com --- doc/src/sgml/btree.sgml | 181 +- doc/src/sgml/func.sgml | 5 +- doc/src/sgml/ref/select.sgml | 123 +- doc/src/sgml/syntax.sgml | 144 +- doc/src/sgml/xindex.sgml | 60 +- src/backend/access/nbtree/nbtvalidate.c | 32 +- src/backend/catalog/dependency.c | 16 + src/backend/catalog/sql_features.txt | 2 +- src/backend/commands/opclasscmds.c | 29 +- src/backend/executor/nodeWindowAgg.c | 1056 +++++++++--- src/backend/nodes/copyfuncs.c | 10 + src/backend/nodes/equalfuncs.c | 5 + src/backend/nodes/outfuncs.c | 10 + src/backend/nodes/readfuncs.c | 10 + src/backend/optimizer/plan/createplan.c | 14 + src/backend/parser/gram.y | 71 +- src/backend/parser/parse_agg.c | 8 + src/backend/parser/parse_clause.c | 156 +- src/backend/parser/parse_expr.c | 3 + src/backend/parser/parse_func.c | 1 + src/backend/utils/adt/date.c | 107 ++ src/backend/utils/adt/int.c | 152 ++ src/backend/utils/adt/int8.c | 42 +- src/backend/utils/adt/ruleutils.c | 20 +- src/backend/utils/adt/timestamp.c | 104 ++ src/backend/utils/errcodes.txt | 1 + src/include/access/nbtree.h | 8 +- src/include/catalog/catversion.h | 2 +- src/include/catalog/pg_amproc.h | 13 + src/include/catalog/pg_proc.h | 28 + src/include/nodes/execnodes.h | 25 +- src/include/nodes/parsenodes.h | 48 +- src/include/nodes/plannodes.h | 6 + src/include/parser/kwlist.h | 3 + src/include/parser/parse_node.h | 3 +- src/test/regress/expected/alter_generic.out | 4 +- src/test/regress/expected/window.out | 1697 ++++++++++++++++++- src/test/regress/sql/window.sql | 496 +++++- 38 files changed, 4300 insertions(+), 395 deletions(-) diff --git a/doc/src/sgml/btree.sgml b/doc/src/sgml/btree.sgml index 9f39edc742..10abf90189 100644 --- a/doc/src/sgml/btree.sgml +++ b/doc/src/sgml/btree.sgml @@ -207,7 +207,7 @@ As shown in , btree defines - one required and one optional support function. + one required and two optional support functions. @@ -252,6 +252,185 @@ src/include/utils/sortsupport.h. + + in_range support functions + + + + support functions + in_range + + + + Optionally, a btree operator family may + provide in_range support function(s), registered + under support function number 3. These are not used during btree index + operations; rather, they extend the semantics of the operator family so + that it can support window clauses containing + the RANGE offset + PRECEDING + and RANGE offset + FOLLOWING frame bound types (see + ). Fundamentally, the extra + information provided is how to add or subtract + an offset value in a way that is compatible + with the family's data ordering. + + + + An in_range function must have the signature + +in_range(val type1, base type1, offset type2, sub bool, less bool) +returns bool + + val and base must be + of the same type, which is one of the types supported by the operator + family (i.e., a type for which it provides an ordering). + However, offset could be of a different type, + which might be one otherwise unsupported by the family. An example is + that the built-in time_ops family provides + an in_range function that + has offset of type interval. + A family can provide in_range functions for any of + its supported types and one or more offset + types. Each in_range function should be entered + in pg_amproc + with amproclefttype equal to type1 + and amprocrighttype equal to type2. + + + + The essential semantics of an in_range function + depend on the two boolean flag parameters. 
It should add or
+ subtract base
+ and offset, then
+ compare val to the result, as follows:
+ + + + if !sub and
+ !less,
+ return val >=
+ (base +
+ offset)
+ + + + + if !sub
+ and less,
+ return val <=
+ (base +
+ offset)
+ + + + + if sub
+ and !less,
+ return val >=
+ (base -
+ offset)
+ + + + + if sub and less,
+ return val <=
+ (base -
+ offset)
+ + + + Before doing so, the function should check the sign
+ of offset: if it is less than zero, raise
+ error ERRCODE_INVALID_PRECEDING_OR_FOLLOWING_SIZE (22013)
+ with error text like invalid preceding or following size in window
+ function. (This is required by the SQL standard, although
+ nonstandard operator families might perhaps choose to ignore this
+ restriction, since there seems to be little semantic necessity for it.)
+ This requirement is delegated to the in_range
+ function so that the core code needn't understand what less than
+ zero means for a particular data type.
+ + + + An additional expectation is that in_range functions
+ should, if practical, avoid throwing an error
+ if base +
+ offset
+ or base -
+ offset would overflow.
+ The correct comparison result can be determined even if that value would
+ be out of the data type's range. Note that if the data type includes
+ concepts such as infinity or NaN, extra care
+ may be needed to ensure that in_range's results agree
+ with the normal sort order of the operator family.
+ + + + The results of the in_range function must be
+ consistent with the sort ordering imposed by the operator family.
+ To be precise, given any fixed values of offset
+ and sub, then:
+ + + + If in_range with less =
+ true is true for some val1
+ and base, it must be true for
+ every val2 <=
+ val1 with the
+ same base.
+ + + + + If in_range with less =
+ true is false for some val1
+ and base, it must be false for
+ every val2 >=
+ val1 with the
+ same base.
+ + + + + If in_range with less =
+ true is true for some val
+ and base1, it must be true for
+ every base2 >=
+ base1 with the
+ same val.
+ + + + + If in_range with less =
+ true is false for some val
+ and base1, it must be false for
+ every base2 <=
+ base1 with the
+ same val.
+ + + + Analogous statements with inverted conditions hold
+ when less = false.
+ + + + If the type being ordered (type1) is collatable,
+ the appropriate collation OID will be passed to
+ the in_range function, using the standard
+ PG_GET_COLLATION() mechanism.
+ + + + in_range functions need not handle NULL inputs, and
+ typically will be marked strict.
+ + diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 487c7ff750..640ff09a7b 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -14729,8 +14729,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; partition through the last peer of the current row. This is likely to give unhelpful results for last_value and sometimes also nth_value. You can redefine the frame by - adding a suitable frame specification (RANGE or - ROWS) to the OVER clause. + adding a suitable frame specification (RANGE, + ROWS or GROUPS) to + the OVER clause. See for more information about frame specifications.
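To make the four flag combinations described in btree.sgml above concrete,
here is a minimal sketch using the integer in_range functions this patch
adds; it assumes those pg_proc entries are directly callable from SQL, as
the catalog additions suggest:

    -- val = 5, base = 3, offset = 2
    SELECT in_range(5, 3, 2, false, false);  -- val >= base + offset  => true
    SELECT in_range(5, 3, 2, false, true);   -- val <= base + offset  => true
    SELECT in_range(5, 3, 2, true,  false);  -- val >= base - offset  => true
    SELECT in_range(5, 3, 2, true,  true);   -- val <= base - offset  => false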
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml index 8a3e86b6db..b5d3d3a071 100644 --- a/doc/src/sgml/ref/select.sgml +++ b/doc/src/sgml/ref/select.sgml @@ -859,19 +859,28 @@ WINDOW window_name AS ( frame_clause can be one of -{ RANGE | ROWS } frame_start -{ RANGE | ROWS } BETWEEN frame_start AND frame_end +{ RANGE | ROWS | GROUPS } frame_start [ frame_exclusion ] +{ RANGE | ROWS | GROUPS } BETWEEN frame_start AND frame_end [ frame_exclusion ] - where frame_start and frame_end can be - one of + where frame_start + and frame_end can be one of UNBOUNDED PRECEDING -value PRECEDING +offset PRECEDING CURRENT ROW -value FOLLOWING +offset FOLLOWING UNBOUNDED FOLLOWING + + + and frame_exclusion can be one of + + +EXCLUDE CURRENT ROW +EXCLUDE GROUP +EXCLUDE TIES +EXCLUDE NO OTHERS If frame_end is omitted it defaults to CURRENT @@ -879,8 +888,10 @@ UNBOUNDED FOLLOWING frame_start cannot be UNBOUNDED FOLLOWING, frame_end cannot be UNBOUNDED PRECEDING, and the frame_end choice cannot appear earlier in the - above list than the frame_start choice — for example - RANGE BETWEEN CURRENT ROW AND value + above list of frame_start + and frame_end options than + the frame_start choice does — for example + RANGE BETWEEN CURRENT ROW AND offset PRECEDING is not allowed. @@ -888,33 +899,72 @@ UNBOUNDED FOLLOWING The default framing option is RANGE UNBOUNDED PRECEDING, which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW; it sets the frame to be all rows from the partition start - up through the current row's last peer (a row that ORDER - BY considers equivalent to the current row, or all rows if there - is no ORDER BY). + up through the current row's last peer (a row + that the window's ORDER BY clause considers + equivalent to the current row), or all rows if there + is no ORDER BY. In general, UNBOUNDED PRECEDING means that the frame starts with the first row of the partition, and similarly UNBOUNDED FOLLOWING means that the frame ends with the last - row of the partition (regardless of RANGE or ROWS - mode). In ROWS mode, CURRENT ROW - means that the frame starts or ends with the current row; but in - RANGE mode it means that the frame starts or ends with - the current row's first or last peer in the ORDER BY ordering. - The value PRECEDING and - value FOLLOWING cases are currently only - allowed in ROWS mode. They indicate that the frame starts - or ends with the row that many rows before or after the current row. - value must be an integer expression not - containing any variables, aggregate functions, or window functions. - The value must not be null or negative; but it can be zero, which - selects the current row itself. - - - - Beware that the ROWS options can produce unpredictable + row of the partition, regardless + of RANGE, ROWS + or GROUPS mode. + In ROWS mode, CURRENT ROW means + that the frame starts or ends with the current row; but + in RANGE or GROUPS mode it means + that the frame starts or ends with the current row's first or last peer + in the ORDER BY ordering. + The offset PRECEDING and + offset FOLLOWING options + vary in meaning depending on the frame mode. + In ROWS mode, the offset + is an integer indicating that the frame starts or ends that many rows + before or after the current row. + In GROUPS mode, the offset + is an integer indicating that the frame starts or ends that many peer + groups before or after the current row's peer group, where + a peer group is a group of rows that are + equivalent according to ORDER BY. 
+ In RANGE mode, use of + an offset option requires that there be + exactly one ORDER BY column in the window definition. + Then the frame contains those rows whose ordering column value is no + more than offset less than + (for PRECEDING) or more than + (for FOLLOWING) the current row's ordering column + value. In these cases the data type of + the offset expression depends on the data + type of the ordering column. For numeric ordering columns it is + typically of the same type as the ordering column, but for datetime + ordering columns it is an interval. + In all these cases, the value of the offset + must be non-null and non-negative. Also, while + the offset does not have to be a simple + constant, it cannot contain variables, aggregate functions, or window + functions. + + + + The frame_exclusion option allows rows around + the current row to be excluded from the frame, even if they would be + included according to the frame start and frame end options. + EXCLUDE CURRENT ROW excludes the current row from the + frame. + EXCLUDE GROUP excludes the current row and its + ordering peers from the frame. + EXCLUDE TIES excludes any peers of the current + row from the frame, but not the current row itself. + EXCLUDE NO OTHERS simply specifies explicitly the + default behavior of not excluding the current row or its peers. + + + + Beware that the ROWS mode can produce unpredictable results if the ORDER BY ordering does not order the rows - uniquely. The RANGE options are designed to ensure that - rows that are peers in the ORDER BY ordering are treated - alike; all peer rows will be in the same frame. + uniquely. The RANGE and GROUPS + modes are designed to ensure that rows that are peers in + the ORDER BY ordering are treated alike: all rows of + a given peer group will be in the frame or excluded from it. @@ -1981,17 +2031,6 @@ SELECT distributors.* WHERE distributors.name = 'Westward'; - - <literal>WINDOW</literal> Clause Restrictions - - - The SQL standard provides additional options for the window - frame_clause. - PostgreSQL currently supports only the - options listed above. - - - <literal>LIMIT</literal> and <literal>OFFSET</literal> diff --git a/doc/src/sgml/syntax.sgml b/doc/src/sgml/syntax.sgml index a938a21334..f9905fb447 100644 --- a/doc/src/sgml/syntax.sgml +++ b/doc/src/sgml/syntax.sgml @@ -1802,20 +1802,27 @@ FROM generate_series(1,10) AS s(i); [ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ] [ frame_clause ] - and the optional frame_clause + The optional frame_clause can be one of -{ RANGE | ROWS } frame_start -{ RANGE | ROWS } BETWEEN frame_start AND frame_end +{ RANGE | ROWS | GROUPS } frame_start [ frame_exclusion ] +{ RANGE | ROWS | GROUPS } BETWEEN frame_start AND frame_end [ frame_exclusion ] - where frame_start and frame_end can be - one of + where frame_start + and frame_end can be one of UNBOUNDED PRECEDING -value PRECEDING +offset PRECEDING CURRENT ROW -value FOLLOWING +offset FOLLOWING UNBOUNDED FOLLOWING + + and frame_exclusion can be one of + +EXCLUDE CURRENT ROW +EXCLUDE GROUP +EXCLUDE TIES +EXCLUDE NO OTHERS @@ -1856,11 +1863,14 @@ UNBOUNDED FOLLOWING The frame_clause specifies the set of rows constituting the window frame, which is a subset of the current partition, for those window functions that act on - the frame instead of the whole partition. The frame can be specified in - either RANGE or ROWS mode; in either case, it - runs from the frame_start to the - frame_end. 
If frame_end is omitted, - it defaults to CURRENT ROW. + the frame instead of the whole partition. The set of rows in the frame + can vary depending on which row is the current row. The frame can be + specified in RANGE, ROWS + or GROUPS mode; in each case, it runs from + the frame_start to + the frame_end. + If frame_end is omitted, the end defaults + to CURRENT ROW. @@ -1871,24 +1881,91 @@ UNBOUNDED FOLLOWING - In RANGE mode, a frame_start of - CURRENT ROW means the frame starts with the current row's - first peer row (a row that ORDER BY considers - equivalent to the current row), while a frame_end of - CURRENT ROW means the frame ends with the last equivalent - ORDER BY peer. In ROWS mode, CURRENT ROW simply means - the current row. + In RANGE or GROUPS mode, + a frame_start of + CURRENT ROW means the frame starts with the current + row's first peer row (a row that the + window's ORDER BY clause sorts as equivalent to the + current row), while a frame_end of + CURRENT ROW means the frame ends with the current + row's last peer row. + In ROWS mode, CURRENT ROW simply + means the current row. - The value PRECEDING and - value FOLLOWING cases are currently only - allowed in ROWS mode. They indicate that the frame starts - or ends the specified number of rows before or after the current row. - value must be an integer expression not + In the offset PRECEDING + and offset FOLLOWING frame + options, the offset must be an expression not containing any variables, aggregate functions, or window functions. - The value must not be null or negative; but it can be zero, which - just selects the current row. + The meaning of the offset depends on the + frame mode: + + + + In ROWS mode, + the offset must yield a non-null, + non-negative integer, and the option means that the frame starts or + ends the specified number of rows before or after the current row. + + + + + In GROUPS mode, + the offset again must yield a non-null, + non-negative integer, and the option means that the frame starts or + ends the specified number of peer groups + before or after the current row's peer group, where a peer group is a + set of rows that are equivalent in the ORDER BY + ordering. (If there is no ORDER BY, the whole + partition is one peer group.) + + + + + In RANGE mode, these options require that + the ORDER BY clause specify exactly one column. + The offset specifies the maximum + difference between the value of that column in the current row and + its value in preceding or following rows of the frame. The data type + of the offset expression varies depending + on the data type of the ordering column. For numeric ordering + columns it is typically of the same type as the ordering column, + but for datetime ordering columns it is an interval. + For example, if the ordering column is of type date + or timestamp, one could write RANGE BETWEEN + '1 day' PRECEDING AND '10 days' FOLLOWING. + The offset is still required to be + non-null and non-negative, though the meaning + of non-negative depends on its data type. + + + + In any case, the distance to the end of the frame is limited by the + distance to the end of the partition, so that for rows near the partition + ends the frame might contain fewer rows than elsewhere. + + + + Notice that in both ROWS and GROUPS + mode, 0 PRECEDING and 0 FOLLOWING + are equivalent to CURRENT ROW. This normally holds + in RANGE mode as well, for an appropriate + data-type-specific meaning of zero. 
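+ As a sketch of the GROUPS semantics (values invented for illustration):
+ with ORDER BY x over the values 1, 1, 2, 3, 3, the peer
+ groups are {1,1}, {2}, and {3,3}, so
+
+ SELECT x, sum(x) OVER (ORDER BY x
+                        GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
+ FROM (VALUES (1), (1), (2), (3), (3)) AS t(x);
+
+ sums 4 for the rows with x = 1 (there is no preceding peer group),
+ 10 for x = 2, and 8 for the rows with x = 3.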
+ + + + The frame_exclusion option allows rows around + the current row to be excluded from the frame, even if they would be + included according to the frame start and frame end options. + EXCLUDE CURRENT ROW excludes the current row from the + frame. + EXCLUDE GROUP excludes the current row and its + ordering peers from the frame. + EXCLUDE TIES excludes any peers of the current + row from the frame, but not the current row itself. + EXCLUDE NO OTHERS simply specifies explicitly the + default behavior of not excluding the current row or its peers. @@ -1896,9 +1973,9 @@ UNBOUNDED FOLLOWING which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. With ORDER BY, this sets the frame to be all rows from the partition start up through the current row's last - ORDER BY peer. Without ORDER BY, all rows of the partition are - included in the window frame, since all rows become peers of the current - row. + ORDER BY peer. Without ORDER BY, + this means all rows of the partition are included in the window frame, + since all rows become peers of the current row. @@ -1906,9 +1983,14 @@ UNBOUNDED FOLLOWING frame_start cannot be UNBOUNDED FOLLOWING, frame_end cannot be UNBOUNDED PRECEDING, and the frame_end choice cannot appear earlier in the - above list than the frame_start choice — for example - RANGE BETWEEN CURRENT ROW AND value + above list of frame_start + and frame_end options than + the frame_start choice does — for example + RANGE BETWEEN CURRENT ROW AND offset PRECEDING is not allowed. + But, for example, ROWS BETWEEN 7 PRECEDING AND 8 + PRECEDING is allowed, even though it would never select any + rows. diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index e40131473f..9f5c0c3fb2 100644 --- a/doc/src/sgml/xindex.sgml +++ b/doc/src/sgml/xindex.sgml @@ -401,7 +401,8 @@ - B-trees require a single support function, and allow a second one to be + B-trees require a comparison support function, + and allow two additional support functions to be supplied at the operator class author's option, as shown in . The requirements for these support functions are explained further in @@ -433,6 +434,13 @@ 2 + + + Compare a test value to a base value plus/minus an offset, and return + true or false according to the comparison result (optional) + + 3 +
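+ As a sketch of what a suitable in_range function enables (the table and
+ columns here are hypothetical, not part of this patch):
+
+ SELECT ts, avg(load) OVER (ORDER BY ts
+                            RANGE BETWEEN '1 hour' PRECEDING
+                                      AND CURRENT ROW)
+ FROM server_metrics;  -- hypothetical table
+
+ where the '1 hour' offset is resolved through the timestamp opclass's
+ in_range function taking an interval offset.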
@@ -971,7 +979,8 @@ DEFAULT FOR TYPE int8 USING btree FAMILY integer_ops AS OPERATOR 4 >= , OPERATOR 5 > , FUNCTION 1 btint8cmp(int8, int8) , - FUNCTION 2 btint8sortsupport(internal) ; + FUNCTION 2 btint8sortsupport(internal) , + FUNCTION 3 in_range(int8, int8, int8, boolean, boolean) ; CREATE OPERATOR CLASS int4_ops DEFAULT FOR TYPE int4 USING btree FAMILY integer_ops AS @@ -982,7 +991,8 @@ DEFAULT FOR TYPE int4 USING btree FAMILY integer_ops AS OPERATOR 4 >= , OPERATOR 5 > , FUNCTION 1 btint4cmp(int4, int4) , - FUNCTION 2 btint4sortsupport(internal) ; + FUNCTION 2 btint4sortsupport(internal) , + FUNCTION 3 in_range(int4, int4, int4, boolean, boolean) ; CREATE OPERATOR CLASS int2_ops DEFAULT FOR TYPE int2 USING btree FAMILY integer_ops AS @@ -993,7 +1003,8 @@ DEFAULT FOR TYPE int2 USING btree FAMILY integer_ops AS OPERATOR 4 >= , OPERATOR 5 > , FUNCTION 1 btint2cmp(int2, int2) , - FUNCTION 2 btint2sortsupport(internal) ; + FUNCTION 2 btint2sortsupport(internal) , + FUNCTION 3 in_range(int2, int2, int2, boolean, boolean) ; ALTER OPERATOR FAMILY integer_ops USING btree ADD -- cross-type comparisons int8 vs int2 @@ -1042,7 +1053,13 @@ ALTER OPERATOR FAMILY integer_ops USING btree ADD OPERATOR 3 = (int2, int4) , OPERATOR 4 >= (int2, int4) , OPERATOR 5 > (int2, int4) , - FUNCTION 1 btint24cmp(int2, int4) ; + FUNCTION 1 btint24cmp(int2, int4) , + + -- cross-type in_range functions + FUNCTION 3 in_range(int4, int4, int8, boolean, boolean) , + FUNCTION 3 in_range(int4, int4, int2, boolean, boolean) , + FUNCTION 3 in_range(int2, int2, int8, boolean, boolean) , + FUNCTION 3 in_range(int2, int2, int4, boolean, boolean) ; ]]> @@ -1180,6 +1197,39 @@ SELECT * FROM mytable ORDER BY somecol USING ~<~; then array equality is supported, but not ordering comparisons. + + Another SQL feature that requires even more data-type-specific knowledge + is the RANGE offset + PRECEDING/FOLLOWING framing option + for window functions (see ). + For a query such as + +SELECT sum(x) OVER (ORDER BY x RANGE BETWEEN 5 PRECEDING AND 10 FOLLOWING) + FROM mytable; + + it is not sufficient to know how to order by x; + the database must also understand how to subtract 5 or + add 10 to the current row's value of x + to identify the bounds of the current window frame. Comparing the + resulting bounds to other rows' values of x is + possible using the comparison operators provided by the B-tree operator + class that defines the ORDER BY ordering — but + addition and subtraction operators are not part of the operator class, so + which ones should be used? Hard-wiring that choice would be undesirable, + because different sort orders (different B-tree operator classes) might + need different behavior. Therefore, a B-tree operator class can specify + an in_range support function that encapsulates the + addition and subtraction behaviors that make sense for its sort order. + It can even provide more than one in_range support function, in case + there is more than one data type that makes sense to use as the offset + in RANGE clauses. + If the B-tree operator class associated with the window's ORDER + BY clause does not have a matching in_range support function, + the RANGE offset + PRECEDING/FOLLOWING + option is not supported. 
+ + Another important point is that an equality operator that appears in a hash operator family is a candidate for hash joins, diff --git a/src/backend/access/nbtree/nbtvalidate.c b/src/backend/access/nbtree/nbtvalidate.c index 8f4ccc87c0..f24091c0ad 100644 --- a/src/backend/access/nbtree/nbtvalidate.c +++ b/src/backend/access/nbtree/nbtvalidate.c @@ -51,6 +51,7 @@ btvalidate(Oid opclassoid) List *grouplist; OpFamilyOpFuncGroup *opclassgroup; List *familytypes; + int usefulgroups; int i; ListCell *lc; @@ -95,6 +96,14 @@ btvalidate(Oid opclassoid) ok = check_amproc_signature(procform->amproc, VOIDOID, true, 1, 1, INTERNALOID); break; + case BTINRANGE_PROC: + ok = check_amproc_signature(procform->amproc, BOOLOID, true, + 5, 5, + procform->amproclefttype, + procform->amproclefttype, + procform->amprocrighttype, + BOOLOID, BOOLOID); + break; default: ereport(INFO, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), @@ -165,12 +174,28 @@ btvalidate(Oid opclassoid) /* Now check for inconsistent groups of operators/functions */ grouplist = identify_opfamily_groups(oprlist, proclist); + usefulgroups = 0; opclassgroup = NULL; familytypes = NIL; foreach(lc, grouplist) { OpFamilyOpFuncGroup *thisgroup = (OpFamilyOpFuncGroup *) lfirst(lc); + /* + * It is possible for an in_range support function to have a RHS type + * that is otherwise irrelevant to the opfamily --- for instance, SQL + * requires the datetime_ops opclass to have range support with an + * interval offset. So, if this group appears to contain only an + * in_range function, ignore it: it doesn't represent a pair of + * supported types. + */ + if (thisgroup->operatorset == 0 && + thisgroup->functionset == (1 << BTINRANGE_PROC)) + continue; + + /* Else count it as a relevant group */ + usefulgroups++; + /* Remember the group exactly matching the test opclass */ if (thisgroup->lefttype == opcintype && thisgroup->righttype == opcintype) @@ -186,8 +211,8 @@ btvalidate(Oid opclassoid) /* * Complain if there seems to be an incomplete set of either operators - * or support functions for this datatype pair. The only thing that - * is considered optional is the sortsupport function. + * or support functions for this datatype pair. The only things + * considered optional are the sortsupport and in_range functions. */ if (thisgroup->operatorset != ((1 << BTLessStrategyNumber) | @@ -234,8 +259,7 @@ btvalidate(Oid opclassoid) * additional qual clauses from equivalence classes, so it seems * reasonable to insist that all built-in btree opfamilies be complete. 
*/ - if (list_length(grouplist) != - list_length(familytypes) * list_length(familytypes)) + if (usefulgroups != (list_length(familytypes) * list_length(familytypes))) { ereport(INFO, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c index be60270ea5..b7e39af7a2 100644 --- a/src/backend/catalog/dependency.c +++ b/src/backend/catalog/dependency.c @@ -1883,6 +1883,22 @@ find_expr_references_walker(Node *node, context->addrs); return false; } + else if (IsA(node, WindowClause)) + { + WindowClause *wc = (WindowClause *) node; + + if (OidIsValid(wc->startInRangeFunc)) + add_object_address(OCLASS_PROC, wc->startInRangeFunc, 0, + context->addrs); + if (OidIsValid(wc->endInRangeFunc)) + add_object_address(OCLASS_PROC, wc->endInRangeFunc, 0, + context->addrs); + if (OidIsValid(wc->inRangeColl) && + wc->inRangeColl != DEFAULT_COLLATION_OID) + add_object_address(OCLASS_COLLATION, wc->inRangeColl, 0, + context->addrs); + /* fall through to examine substructure */ + } else if (IsA(node, Query)) { /* Recurse into RTE subquery or not-yet-planned sublink subquery */ diff --git a/src/backend/catalog/sql_features.txt b/src/backend/catalog/sql_features.txt index 8e746f36d4..20d61f3780 100644 --- a/src/backend/catalog/sql_features.txt +++ b/src/backend/catalog/sql_features.txt @@ -498,7 +498,7 @@ T616 Null treatment option for LEAD and LAG functions NO T617 FIRST_VALUE and LAST_VALUE function YES T618 NTH_VALUE function NO function exists, but some options missing T619 Nested window functions NO -T620 WINDOW clause: GROUPS option NO +T620 WINDOW clause: GROUPS option YES T621 Enhanced numeric functions YES T631 IN predicate with one list element YES T641 Multiple column assignment NO only some syntax variants supported diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c index 1768140a83..e4b1369f19 100644 --- a/src/backend/commands/opclasscmds.c +++ b/src/backend/commands/opclasscmds.c @@ -1128,10 +1128,11 @@ assignProcTypes(OpFamilyMember *member, Oid amoid, Oid typeoid) procform = (Form_pg_proc) GETSTRUCT(proctup); /* - * btree comparison procs must be 2-arg procs returning int4, while btree - * sortsupport procs must take internal and return void. hash support - * proc 1 must be a 1-arg proc returning int4, while proc 2 must be a - * 2-arg proc returning int8. Otherwise we don't know. + * btree comparison procs must be 2-arg procs returning int4. btree + * sortsupport procs must take internal and return void. btree in_range + * procs must be 5-arg procs returning bool. hash support proc 1 must be + * a 1-arg proc returning int4, while proc 2 must be a 2-arg proc + * returning int8. Otherwise we don't know. 
*/ if (amoid == BTREE_AM_OID) { @@ -1171,6 +1172,26 @@ assignProcTypes(OpFamilyMember *member, Oid amoid, Oid typeoid) * Can't infer lefttype/righttype from proc, so use default rule */ } + else if (member->number == BTINRANGE_PROC) + { + if (procform->pronargs != 5) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("btree in_range procedures must have five arguments"))); + if (procform->prorettype != BOOLOID) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("btree in_range procedures must return boolean"))); + + /* + * If lefttype/righttype isn't specified, use the proc's input + * types (we look at the test-value and offset arguments) + */ + if (!OidIsValid(member->lefttype)) + member->lefttype = procform->proargtypes.values[0]; + if (!OidIsValid(member->righttype)) + member->righttype = procform->proargtypes.values[2]; + } } else if (amoid == HASH_AM_OID) { diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index 0afb1c83d3..f6412576f4 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -180,10 +180,11 @@ static void begin_partition(WindowAggState *winstate); static void spool_tuples(WindowAggState *winstate, int64 pos); static void release_partition(WindowAggState *winstate); -static bool row_is_in_frame(WindowAggState *winstate, int64 pos, +static int row_is_in_frame(WindowAggState *winstate, int64 pos, TupleTableSlot *slot); -static void update_frameheadpos(WindowObject winobj, TupleTableSlot *slot); -static void update_frametailpos(WindowObject winobj, TupleTableSlot *slot); +static void update_frameheadpos(WindowAggState *winstate); +static void update_frametailpos(WindowAggState *winstate); +static void update_grouptailpos(WindowAggState *winstate); static WindowStatePerAggData *initialize_peragg(WindowAggState *winstate, WindowFunc *wfunc, @@ -683,11 +684,9 @@ eval_windowaggregates(WindowAggState *winstate) temp_slot = winstate->temp_slot_1; /* - * Currently, we support only a subset of the SQL-standard window framing - * rules. - * - * If the frame start is UNBOUNDED_PRECEDING, the window frame consists of - * a contiguous group of rows extending forward from the start of the + * If the window's frame start clause is UNBOUNDED_PRECEDING and no + * exclusion clause is specified, then the window frame consists of a + * contiguous group of rows extending forward from the start of the * partition, and rows only enter the frame, never exit it, as the current * row advances forward. This makes it possible to use an incremental * strategy for evaluating aggregates: we run the transition function for @@ -710,6 +709,11 @@ eval_windowaggregates(WindowAggState *winstate) * must perform the aggregation all over again for all tuples within the * new frame boundaries. * + * If there's any exclusion clause, then we may have to aggregate over a + * non-contiguous set of rows, so we punt and recalculate for every row. + * (For some frame end choices, it might be that the frame is always + * contiguous anyway, but that's an optimization to investigate later.) + * * In many common cases, multiple rows share the same frame and hence the * same aggregate value. 
(In particular, if there's no ORDER BY in a RANGE * window, then all rows are peers and so they all have window frame equal @@ -728,7 +732,7 @@ eval_windowaggregates(WindowAggState *winstate) * The frame head should never move backwards, and the code below wouldn't * cope if it did, so for safety we complain if it does. */ - update_frameheadpos(agg_winobj, temp_slot); + update_frameheadpos(winstate); if (winstate->frameheadpos < winstate->aggregatedbase) elog(ERROR, "window frame head moved backward"); @@ -737,15 +741,16 @@ eval_windowaggregates(WindowAggState *winstate) * the result values that were previously saved at the bottom of this * function. Since we don't know the current frame's end yet, this is not * possible to check for fully. But if the frame end mode is UNBOUNDED - * FOLLOWING or CURRENT ROW, and the current row lies within the previous - * row's frame, then the two frames' ends must coincide. Note that on the - * first row aggregatedbase == aggregatedupto, meaning this test must - * fail, so we don't need to check the "there was no previous row" case - * explicitly here. + * FOLLOWING or CURRENT ROW, no exclusion clause is specified, and the + * current row lies within the previous row's frame, then the two frames' + * ends must coincide. Note that on the first row aggregatedbase == + * aggregatedupto, meaning this test must fail, so we don't need to check + * the "there was no previous row" case explicitly here. */ if (winstate->aggregatedbase == winstate->frameheadpos && (winstate->frameOptions & (FRAMEOPTION_END_UNBOUNDED_FOLLOWING | FRAMEOPTION_END_CURRENT_ROW)) && + !(winstate->frameOptions & FRAMEOPTION_EXCLUSION) && winstate->aggregatedbase <= winstate->currentpos && winstate->aggregatedupto > winstate->currentpos) { @@ -766,6 +771,7 @@ eval_windowaggregates(WindowAggState *winstate) * - if we're processing the first row in the partition, or * - if the frame's head moved and we cannot use an inverse * transition function, or + * - we have an EXCLUSION clause, or * - if the new frame doesn't overlap the old one * * Note that we don't strictly need to restart in the last case, but if @@ -780,6 +786,7 @@ eval_windowaggregates(WindowAggState *winstate) if (winstate->currentpos == 0 || (winstate->aggregatedbase != winstate->frameheadpos && !OidIsValid(peraggstate->invtransfn_oid)) || + (winstate->frameOptions & FRAMEOPTION_EXCLUSION) || winstate->aggregatedupto <= winstate->frameheadpos) { peraggstate->restart = true; @@ -920,6 +927,8 @@ eval_windowaggregates(WindowAggState *winstate) */ for (;;) { + int ret; + /* Fetch next row if we didn't already */ if (TupIsNull(agg_row_slot)) { @@ -928,9 +937,15 @@ eval_windowaggregates(WindowAggState *winstate) break; /* must be end of partition */ } - /* Exit loop (for now) if not in frame */ - if (!row_is_in_frame(winstate, winstate->aggregatedupto, agg_row_slot)) + /* + * Exit loop if no more rows can be in frame. Skip aggregation if + * current row is not in frame but there might be more in the frame. 
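+ * (For example, with EXCLUDE CURRENT ROW the current row itself is out
+ * of frame, yet rows after it can still be in frame, so we must keep
+ * scanning rather than stop at the first row that is not in frame.)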
+ */ + ret = row_is_in_frame(winstate, winstate->aggregatedupto, agg_row_slot); + if (ret < 0) break; + if (ret == 0) + goto next_tuple; /* Set tuple context for evaluation of aggregate arguments */ winstate->tmpcontext->ecxt_outertuple = agg_row_slot; @@ -951,6 +966,7 @@ eval_windowaggregates(WindowAggState *winstate) peraggstate); } +next_tuple: /* Reset per-input-tuple context after each tuple */ ResetExprContext(winstate->tmpcontext); @@ -1061,6 +1077,7 @@ eval_windowfunction(WindowAggState *winstate, WindowStatePerFunc perfuncstate, static void begin_partition(WindowAggState *winstate) { + WindowAgg *node = (WindowAgg *) winstate->ss.ps.plan; PlanState *outerPlan = outerPlanState(winstate); int numfuncs = winstate->numfuncs; int i; @@ -1068,11 +1085,21 @@ begin_partition(WindowAggState *winstate) winstate->partition_spooled = false; winstate->framehead_valid = false; winstate->frametail_valid = false; + winstate->grouptail_valid = false; winstate->spooled_rows = 0; winstate->currentpos = 0; winstate->frameheadpos = 0; - winstate->frametailpos = -1; + winstate->frametailpos = 0; + winstate->currentgroup = 0; + winstate->frameheadgroup = 0; + winstate->frametailgroup = 0; + winstate->groupheadpos = 0; + winstate->grouptailpos = -1; /* see update_grouptailpos */ ExecClearTuple(winstate->agg_row_slot); + if (winstate->framehead_slot) + ExecClearTuple(winstate->framehead_slot); + if (winstate->frametail_slot) + ExecClearTuple(winstate->frametail_slot); /* * If this is the very first partition, we need to fetch the first input @@ -1099,7 +1126,7 @@ begin_partition(WindowAggState *winstate) /* * Set up read pointers for the tuplestore. The current pointer doesn't * need BACKWARD capability, but the per-window-function read pointers do, - * and the aggregate pointer does if frame start is movable. + * and the aggregate pointer does if we might need to restart aggregation. */ winstate->current_ptr = 0; /* read pointer 0 is pre-allocated */ @@ -1112,10 +1139,14 @@ begin_partition(WindowAggState *winstate) WindowObject agg_winobj = winstate->agg_winobj; int readptr_flags = 0; - /* If the frame head is potentially movable ... */ - if (!(winstate->frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)) + /* + * If the frame head is potentially movable, or we have an EXCLUSION + * clause, we might need to restart aggregation ... + */ + if (!(winstate->frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING) || + (winstate->frameOptions & FRAMEOPTION_EXCLUSION)) { - /* ... create a mark pointer to track the frame head */ + /* ... so create a mark pointer to track the frame head */ agg_winobj->markptr = tuplestore_alloc_read_pointer(winstate->buffer, 0); /* and the read pointer will need BACKWARD capability */ readptr_flags |= EXEC_FLAG_BACKWARD; @@ -1149,6 +1180,44 @@ begin_partition(WindowAggState *winstate) } } + /* + * If we are in RANGE or GROUPS mode, then determining frame boundaries + * requires physical access to the frame endpoint rows, except in + * degenerate cases. We create read pointers to point to those rows, to + * simplify access and ensure that the tuplestore doesn't discard the + * endpoint rows prematurely. (Must match logic in update_frameheadpos + * and update_frametailpos.) 
+ */ + winstate->framehead_ptr = winstate->frametail_ptr = -1; /* if not used */ + + if ((winstate->frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS)) && + node->ordNumCols != 0) + { + if (!(winstate->frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)) + winstate->framehead_ptr = + tuplestore_alloc_read_pointer(winstate->buffer, 0); + if (!(winstate->frameOptions & FRAMEOPTION_END_UNBOUNDED_FOLLOWING)) + winstate->frametail_ptr = + tuplestore_alloc_read_pointer(winstate->buffer, 0); + } + + /* + * If we have an exclusion clause that requires knowing the boundaries of + * the current row's peer group, we create a read pointer to track the + * tail position of the peer group (i.e., first row of the next peer + * group). The head position does not require its own pointer because we + * maintain that as a side effect of advancing the current row. + */ + winstate->grouptail_ptr = -1; + + if ((winstate->frameOptions & (FRAMEOPTION_EXCLUDE_GROUP | + FRAMEOPTION_EXCLUDE_TIES)) && + node->ordNumCols != 0) + { + winstate->grouptail_ptr = + tuplestore_alloc_read_pointer(winstate->buffer, 0); + } + /* * Store the first tuple into the tuplestore (it's always available now; * we either read it above, or saved it at the end of previous partition) @@ -1275,119 +1344,127 @@ release_partition(WindowAggState *winstate) * The caller must have already determined that the row is in the partition * and fetched it into a slot. This function just encapsulates the framing * rules. + * + * Returns: + * -1, if the row is out of frame and no succeeding rows can be in frame + * 0, if the row is out of frame but succeeding rows might be in frame + * 1, if the row is in frame + * + * May clobber winstate->temp_slot_2. */ -static bool +static int row_is_in_frame(WindowAggState *winstate, int64 pos, TupleTableSlot *slot) { int frameOptions = winstate->frameOptions; Assert(pos >= 0); /* else caller error */ - /* First, check frame starting conditions */ - if (frameOptions & FRAMEOPTION_START_CURRENT_ROW) - { - if (frameOptions & FRAMEOPTION_ROWS) - { - /* rows before current row are out of frame */ - if (pos < winstate->currentpos) - return false; - } - else if (frameOptions & FRAMEOPTION_RANGE) - { - /* preceding row that is not peer is out of frame */ - if (pos < winstate->currentpos && - !are_peers(winstate, slot, winstate->ss.ss_ScanTupleSlot)) - return false; - } - else - Assert(false); - } - else if (frameOptions & FRAMEOPTION_START_VALUE) - { - if (frameOptions & FRAMEOPTION_ROWS) - { - int64 offset = DatumGetInt64(winstate->startOffsetValue); - - /* rows before current row + offset are out of frame */ - if (frameOptions & FRAMEOPTION_START_VALUE_PRECEDING) - offset = -offset; - - if (pos < winstate->currentpos + offset) - return false; - } - else if (frameOptions & FRAMEOPTION_RANGE) - { - /* parser should have rejected this */ - elog(ERROR, "window frame with value offset is not implemented"); - } - else - Assert(false); - } + /* + * First, check frame starting conditions. We might as well delegate this + * to update_frameheadpos always; it doesn't add any notable cost. + */ + update_frameheadpos(winstate); + if (pos < winstate->frameheadpos) + return 0; - /* Okay so far, now check frame ending conditions */ + /* + * Okay so far, now check frame ending conditions. Here, we avoid calling + * update_frametailpos in simple cases, so as not to spool tuples further + * ahead than necessary. 
+ */ if (frameOptions & FRAMEOPTION_END_CURRENT_ROW) { if (frameOptions & FRAMEOPTION_ROWS) { /* rows after current row are out of frame */ if (pos > winstate->currentpos) - return false; + return -1; } - else if (frameOptions & FRAMEOPTION_RANGE) + else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS)) { /* following row that is not peer is out of frame */ if (pos > winstate->currentpos && !are_peers(winstate, slot, winstate->ss.ss_ScanTupleSlot)) - return false; + return -1; } else Assert(false); } - else if (frameOptions & FRAMEOPTION_END_VALUE) + else if (frameOptions & FRAMEOPTION_END_OFFSET) { if (frameOptions & FRAMEOPTION_ROWS) { int64 offset = DatumGetInt64(winstate->endOffsetValue); /* rows after current row + offset are out of frame */ - if (frameOptions & FRAMEOPTION_END_VALUE_PRECEDING) + if (frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING) offset = -offset; if (pos > winstate->currentpos + offset) - return false; + return -1; } - else if (frameOptions & FRAMEOPTION_RANGE) + else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS)) { - /* parser should have rejected this */ - elog(ERROR, "window frame with value offset is not implemented"); + /* hard cases, so delegate to update_frametailpos */ + update_frametailpos(winstate); + if (pos >= winstate->frametailpos) + return -1; } else Assert(false); } + /* Check exclusion clause */ + if (frameOptions & FRAMEOPTION_EXCLUDE_CURRENT_ROW) + { + if (pos == winstate->currentpos) + return 0; + } + else if ((frameOptions & FRAMEOPTION_EXCLUDE_GROUP) || + ((frameOptions & FRAMEOPTION_EXCLUDE_TIES) && + pos != winstate->currentpos)) + { + WindowAgg *node = (WindowAgg *) winstate->ss.ps.plan; + + /* If no ORDER BY, all rows are peers with each other */ + if (node->ordNumCols == 0) + return 0; + /* Otherwise, check the group boundaries */ + if (pos >= winstate->groupheadpos) + { + update_grouptailpos(winstate); + if (pos < winstate->grouptailpos) + return 0; + } + } + /* If we get here, it's in frame */ - return true; + return 1; } /* * update_frameheadpos * make frameheadpos valid for the current row * - * Uses the winobj's read pointer for any required fetches; hence, if the - * frame mode is one that requires row comparisons, the winobj's mark must - * not be past the currently known frame head. Also uses the specified slot - * for any required fetches. + * Note that frameheadpos is computed without regard for any window exclusion + * clause; the current row and/or its peers are considered part of the frame + * for this purpose even if they must be excluded later. + * + * May clobber winstate->temp_slot_2. 
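+ * (The RANGE and GROUPS cases below all follow the same pattern: cache
+ * the last-known frame head row in framehead_slot and only ever advance
+ * it, relying on the frame head never moving backward as the current
+ * row advances.)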
*/ static void -update_frameheadpos(WindowObject winobj, TupleTableSlot *slot) +update_frameheadpos(WindowAggState *winstate) { - WindowAggState *winstate = winobj->winstate; WindowAgg *node = (WindowAgg *) winstate->ss.ps.plan; int frameOptions = winstate->frameOptions; + MemoryContext oldcontext; if (winstate->framehead_valid) return; /* already known for current row */ + /* We may be called in a short-lived context */ + oldcontext = MemoryContextSwitchTo(winstate->ss.ps.ps_ExprContext->ecxt_per_query_memory); + if (frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING) { /* In UNBOUNDED PRECEDING mode, frame head is always row 0 */ @@ -1402,58 +1479,67 @@ update_frameheadpos(WindowObject winobj, TupleTableSlot *slot) winstate->frameheadpos = winstate->currentpos; winstate->framehead_valid = true; } - else if (frameOptions & FRAMEOPTION_RANGE) + else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS)) { - int64 fhprev; - /* If no ORDER BY, all rows are peers with each other */ if (node->ordNumCols == 0) { winstate->frameheadpos = 0; winstate->framehead_valid = true; + MemoryContextSwitchTo(oldcontext); return; } /* - * In RANGE START_CURRENT mode, frame head is the first row that - * is a peer of current row. We search backwards from current, - * which could be a bit inefficient if peer sets are large. Might - * be better to have a separate read pointer that moves forward - * tracking the frame head. + * In RANGE or GROUPS START_CURRENT_ROW mode, frame head is the + * first row that is a peer of current row. We keep a copy of the + * last-known frame head row in framehead_slot, and advance as + * necessary. Note that if we reach end of partition, we will + * leave frameheadpos = end+1 and framehead_slot empty. */ - fhprev = winstate->currentpos - 1; - for (;;) + tuplestore_select_read_pointer(winstate->buffer, + winstate->framehead_ptr); + if (winstate->frameheadpos == 0 && + TupIsNull(winstate->framehead_slot)) { - /* assume the frame head can't go backwards */ - if (fhprev < winstate->frameheadpos) - break; - if (!window_gettupleslot(winobj, fhprev, slot)) - break; /* start of partition */ - if (!are_peers(winstate, slot, winstate->ss.ss_ScanTupleSlot)) - break; /* not peer of current row */ - fhprev--; + /* fetch first row into framehead_slot, if we didn't already */ + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->framehead_slot)) + elog(ERROR, "unexpected end of tuplestore"); + } + + while (!TupIsNull(winstate->framehead_slot)) + { + if (are_peers(winstate, winstate->framehead_slot, + winstate->ss.ss_ScanTupleSlot)) + break; /* this row is the correct frame head */ + /* Note we advance frameheadpos even if the fetch fails */ + winstate->frameheadpos++; + spool_tuples(winstate, winstate->frameheadpos); + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->framehead_slot)) + break; /* end of partition */ } - winstate->frameheadpos = fhprev + 1; winstate->framehead_valid = true; } else Assert(false); } - else if (frameOptions & FRAMEOPTION_START_VALUE) + else if (frameOptions & FRAMEOPTION_START_OFFSET) { if (frameOptions & FRAMEOPTION_ROWS) { /* In ROWS mode, bound is physically n before/after current */ int64 offset = DatumGetInt64(winstate->startOffsetValue); - if (frameOptions & FRAMEOPTION_START_VALUE_PRECEDING) + if (frameOptions & FRAMEOPTION_START_OFFSET_PRECEDING) offset = -offset; winstate->frameheadpos = winstate->currentpos + offset; /* frame head can't go before first row */ if (winstate->frameheadpos < 0) 
winstate->frameheadpos = 0; - else if (winstate->frameheadpos > winstate->currentpos) + else if (winstate->frameheadpos > winstate->currentpos + 1) { /* make sure frameheadpos is not past end of partition */ spool_tuples(winstate, winstate->frameheadpos - 1); @@ -1464,40 +1550,172 @@ update_frameheadpos(WindowObject winobj, TupleTableSlot *slot) } else if (frameOptions & FRAMEOPTION_RANGE) { - /* parser should have rejected this */ - elog(ERROR, "window frame with value offset is not implemented"); + /* + * In RANGE START_OFFSET mode, frame head is the first row that + * satisfies the in_range constraint relative to the current row. + * We keep a copy of the last-known frame head row in + * framehead_slot, and advance as necessary. Note that if we + * reach end of partition, we will leave frameheadpos = end+1 and + * framehead_slot empty. + */ + bool sub, + less; + + /* Precompute flags for in_range checks */ + if (frameOptions & FRAMEOPTION_START_OFFSET_PRECEDING) + sub = true; /* subtract startOffset from current row */ + else + sub = false; /* add it */ + less = false; /* normally, we want frame head >= sum */ + /* If sort order is descending, flip both flags */ + if (!winstate->inRangeAsc) + { + sub = !sub; + less = true; + } + + tuplestore_select_read_pointer(winstate->buffer, + winstate->framehead_ptr); + if (winstate->frameheadpos == 0 && + TupIsNull(winstate->framehead_slot)) + { + /* fetch first row into framehead_slot, if we didn't already */ + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->framehead_slot)) + elog(ERROR, "unexpected end of tuplestore"); + } + + while (!TupIsNull(winstate->framehead_slot)) + { + Datum headval, + currval; + bool headisnull, + currisnull; + + headval = slot_getattr(winstate->framehead_slot, 1, + &headisnull); + currval = slot_getattr(winstate->ss.ss_ScanTupleSlot, 1, + &currisnull); + if (headisnull || currisnull) + { + /* order of the rows depends only on nulls_first */ + if (winstate->inRangeNullsFirst) + { + /* advance head if head is null and curr is not */ + if (!headisnull || currisnull) + break; + } + else + { + /* advance head if head is not null and curr is null */ + if (headisnull || !currisnull) + break; + } + } + else + { + if (DatumGetBool(FunctionCall5Coll(&winstate->startInRangeFunc, + winstate->inRangeColl, + headval, + currval, + winstate->startOffsetValue, + BoolGetDatum(sub), + BoolGetDatum(less)))) + break; /* this row is the correct frame head */ + } + /* Note we advance frameheadpos even if the fetch fails */ + winstate->frameheadpos++; + spool_tuples(winstate, winstate->frameheadpos); + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->framehead_slot)) + break; /* end of partition */ + } + winstate->framehead_valid = true; + } + else if (frameOptions & FRAMEOPTION_GROUPS) + { + /* + * In GROUPS START_OFFSET mode, frame head is the first row of the + * first peer group whose number satisfies the offset constraint. + * We keep a copy of the last-known frame head row in + * framehead_slot, and advance as necessary. Note that if we + * reach end of partition, we will leave frameheadpos = end+1 and + * framehead_slot empty. 
+ */ + int64 offset = DatumGetInt64(winstate->startOffsetValue); + int64 minheadgroup; + + if (frameOptions & FRAMEOPTION_START_OFFSET_PRECEDING) + minheadgroup = winstate->currentgroup - offset; + else + minheadgroup = winstate->currentgroup + offset; + + tuplestore_select_read_pointer(winstate->buffer, + winstate->framehead_ptr); + if (winstate->frameheadpos == 0 && + TupIsNull(winstate->framehead_slot)) + { + /* fetch first row into framehead_slot, if we didn't already */ + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->framehead_slot)) + elog(ERROR, "unexpected end of tuplestore"); + } + + while (!TupIsNull(winstate->framehead_slot)) + { + if (winstate->frameheadgroup >= minheadgroup) + break; /* this row is the correct frame head */ + ExecCopySlot(winstate->temp_slot_2, winstate->framehead_slot); + /* Note we advance frameheadpos even if the fetch fails */ + winstate->frameheadpos++; + spool_tuples(winstate, winstate->frameheadpos); + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->framehead_slot)) + break; /* end of partition */ + if (!are_peers(winstate, winstate->temp_slot_2, + winstate->framehead_slot)) + winstate->frameheadgroup++; + } + ExecClearTuple(winstate->temp_slot_2); + winstate->framehead_valid = true; } else Assert(false); } else Assert(false); + + MemoryContextSwitchTo(oldcontext); } /* * update_frametailpos * make frametailpos valid for the current row * - * Uses the winobj's read pointer for any required fetches; hence, if the - * frame mode is one that requires row comparisons, the winobj's mark must - * not be past the currently known frame tail. Also uses the specified slot - * for any required fetches. + * Note that frametailpos is computed without regard for any window exclusion + * clause; the current row and/or its peers are considered part of the frame + * for this purpose even if they must be excluded later. + * + * May clobber winstate->temp_slot_2. 
*/ static void -update_frametailpos(WindowObject winobj, TupleTableSlot *slot) +update_frametailpos(WindowAggState *winstate) { - WindowAggState *winstate = winobj->winstate; WindowAgg *node = (WindowAgg *) winstate->ss.ps.plan; int frameOptions = winstate->frameOptions; + MemoryContext oldcontext; if (winstate->frametail_valid) return; /* already known for current row */ + /* We may be called in a short-lived context */ + oldcontext = MemoryContextSwitchTo(winstate->ss.ps.ps_ExprContext->ecxt_per_query_memory); + if (frameOptions & FRAMEOPTION_END_UNBOUNDED_FOLLOWING) { /* In UNBOUNDED FOLLOWING mode, all partition rows are in frame */ spool_tuples(winstate, -1); - winstate->frametailpos = winstate->spooled_rows - 1; + winstate->frametailpos = winstate->spooled_rows; winstate->frametail_valid = true; } else if (frameOptions & FRAMEOPTION_END_CURRENT_ROW) @@ -1505,77 +1723,276 @@ update_frametailpos(WindowObject winobj, TupleTableSlot *slot) if (frameOptions & FRAMEOPTION_ROWS) { /* In ROWS mode, exactly the rows up to current are in frame */ - winstate->frametailpos = winstate->currentpos; + winstate->frametailpos = winstate->currentpos + 1; winstate->frametail_valid = true; } - else if (frameOptions & FRAMEOPTION_RANGE) + else if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS)) { - int64 ftnext; - /* If no ORDER BY, all rows are peers with each other */ if (node->ordNumCols == 0) { spool_tuples(winstate, -1); - winstate->frametailpos = winstate->spooled_rows - 1; + winstate->frametailpos = winstate->spooled_rows; winstate->frametail_valid = true; + MemoryContextSwitchTo(oldcontext); return; } /* - * Else we have to search for the first non-peer of the current - * row. We assume the current value of frametailpos is a lower - * bound on the possible frame tail location, ie, frame tail never - * goes backward, and that currentpos is also a lower bound, ie, - * frame end always >= current row. + * In RANGE or GROUPS END_CURRENT_ROW mode, frame end is the last + * row that is a peer of current row, frame tail is the row after + * that (if any). We keep a copy of the last-known frame tail row + * in frametail_slot, and advance as necessary. Note that if we + * reach end of partition, we will leave frametailpos = end+1 and + * frametail_slot empty. 
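+ * (Throughout this function, frametailpos denotes the position one past
+ * the last row of the frame, which is why reaching end of partition
+ * leaves it at end+1 rather than at the last row.)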
*/ - ftnext = Max(winstate->frametailpos, winstate->currentpos) + 1; - for (;;) + tuplestore_select_read_pointer(winstate->buffer, + winstate->frametail_ptr); + if (winstate->frametailpos == 0 && + TupIsNull(winstate->frametail_slot)) + { + /* fetch first row into frametail_slot, if we didn't already */ + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->frametail_slot)) + elog(ERROR, "unexpected end of tuplestore"); + } + + while (!TupIsNull(winstate->frametail_slot)) { - if (!window_gettupleslot(winobj, ftnext, slot)) + if (winstate->frametailpos > winstate->currentpos && + !are_peers(winstate, winstate->frametail_slot, + winstate->ss.ss_ScanTupleSlot)) + break; /* this row is the frame tail */ + /* Note we advance frametailpos even if the fetch fails */ + winstate->frametailpos++; + spool_tuples(winstate, winstate->frametailpos); + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->frametail_slot)) break; /* end of partition */ - if (!are_peers(winstate, slot, winstate->ss.ss_ScanTupleSlot)) - break; /* not peer of current row */ - ftnext++; } - winstate->frametailpos = ftnext - 1; winstate->frametail_valid = true; } else Assert(false); } - else if (frameOptions & FRAMEOPTION_END_VALUE) + else if (frameOptions & FRAMEOPTION_END_OFFSET) { if (frameOptions & FRAMEOPTION_ROWS) { /* In ROWS mode, bound is physically n before/after current */ int64 offset = DatumGetInt64(winstate->endOffsetValue); - if (frameOptions & FRAMEOPTION_END_VALUE_PRECEDING) + if (frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING) offset = -offset; - winstate->frametailpos = winstate->currentpos + offset; - /* smallest allowable value of frametailpos is -1 */ + winstate->frametailpos = winstate->currentpos + offset + 1; + /* smallest allowable value of frametailpos is 0 */ if (winstate->frametailpos < 0) - winstate->frametailpos = -1; - else if (winstate->frametailpos > winstate->currentpos) + winstate->frametailpos = 0; + else if (winstate->frametailpos > winstate->currentpos + 1) { - /* make sure frametailpos is not past last row of partition */ - spool_tuples(winstate, winstate->frametailpos); - if (winstate->frametailpos >= winstate->spooled_rows) - winstate->frametailpos = winstate->spooled_rows - 1; + /* make sure frametailpos is not past end of partition */ + spool_tuples(winstate, winstate->frametailpos - 1); + if (winstate->frametailpos > winstate->spooled_rows) + winstate->frametailpos = winstate->spooled_rows; } winstate->frametail_valid = true; } else if (frameOptions & FRAMEOPTION_RANGE) { - /* parser should have rejected this */ - elog(ERROR, "window frame with value offset is not implemented"); + /* + * In RANGE END_OFFSET mode, frame end is the last row that + * satisfies the in_range constraint relative to the current row, + * frame tail is the row after that (if any). We keep a copy of + * the last-known frame tail row in frametail_slot, and advance as + * necessary. Note that if we reach end of partition, we will + * leave frametailpos = end+1 and frametail_slot empty. 
+ */ + bool sub, + less; + + /* Precompute flags for in_range checks */ + if (frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING) + sub = true; /* subtract endOffset from current row */ + else + sub = false; /* add it */ + less = true; /* normally, we want frame tail <= sum */ + /* If sort order is descending, flip both flags */ + if (!winstate->inRangeAsc) + { + sub = !sub; + less = false; + } + + tuplestore_select_read_pointer(winstate->buffer, + winstate->frametail_ptr); + if (winstate->frametailpos == 0 && + TupIsNull(winstate->frametail_slot)) + { + /* fetch first row into frametail_slot, if we didn't already */ + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->frametail_slot)) + elog(ERROR, "unexpected end of tuplestore"); + } + + while (!TupIsNull(winstate->frametail_slot)) + { + Datum tailval, + currval; + bool tailisnull, + currisnull; + + tailval = slot_getattr(winstate->frametail_slot, 1, + &tailisnull); + currval = slot_getattr(winstate->ss.ss_ScanTupleSlot, 1, + &currisnull); + if (tailisnull || currisnull) + { + /* order of the rows depends only on nulls_first */ + if (winstate->inRangeNullsFirst) + { + /* advance tail if tail is null or curr is not */ + if (!tailisnull) + break; + } + else + { + /* advance tail if tail is not null or curr is null */ + if (!currisnull) + break; + } + } + else + { + if (!DatumGetBool(FunctionCall5Coll(&winstate->endInRangeFunc, + winstate->inRangeColl, + tailval, + currval, + winstate->endOffsetValue, + BoolGetDatum(sub), + BoolGetDatum(less)))) + break; /* this row is the correct frame tail */ + } + /* Note we advance frametailpos even if the fetch fails */ + winstate->frametailpos++; + spool_tuples(winstate, winstate->frametailpos); + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->frametail_slot)) + break; /* end of partition */ + } + winstate->frametail_valid = true; + } + else if (frameOptions & FRAMEOPTION_GROUPS) + { + /* + * In GROUPS END_OFFSET mode, frame end is the last row of the + * last peer group whose number satisfies the offset constraint, + * and frame tail is the row after that (if any). We keep a copy + * of the last-known frame tail row in frametail_slot, and advance + * as necessary. Note that if we reach end of partition, we will + * leave frametailpos = end+1 and frametail_slot empty. 
+ */ + int64 offset = DatumGetInt64(winstate->endOffsetValue); + int64 maxtailgroup; + + if (frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING) + maxtailgroup = winstate->currentgroup - offset; + else + maxtailgroup = winstate->currentgroup + offset; + + tuplestore_select_read_pointer(winstate->buffer, + winstate->frametail_ptr); + if (winstate->frametailpos == 0 && + TupIsNull(winstate->frametail_slot)) + { + /* fetch first row into frametail_slot, if we didn't already */ + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->frametail_slot)) + elog(ERROR, "unexpected end of tuplestore"); + } + + while (!TupIsNull(winstate->frametail_slot)) + { + if (winstate->frametailgroup > maxtailgroup) + break; /* this row is the correct frame tail */ + ExecCopySlot(winstate->temp_slot_2, winstate->frametail_slot); + /* Note we advance frametailpos even if the fetch fails */ + winstate->frametailpos++; + spool_tuples(winstate, winstate->frametailpos); + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->frametail_slot)) + break; /* end of partition */ + if (!are_peers(winstate, winstate->temp_slot_2, + winstate->frametail_slot)) + winstate->frametailgroup++; + } + ExecClearTuple(winstate->temp_slot_2); + winstate->frametail_valid = true; } else Assert(false); } else Assert(false); + + MemoryContextSwitchTo(oldcontext); +} + +/* + * update_grouptailpos + * make grouptailpos valid for the current row + * + * May clobber winstate->temp_slot_2. + */ +static void +update_grouptailpos(WindowAggState *winstate) +{ + WindowAgg *node = (WindowAgg *) winstate->ss.ps.plan; + MemoryContext oldcontext; + + if (winstate->grouptail_valid) + return; /* already known for current row */ + + /* We may be called in a short-lived context */ + oldcontext = MemoryContextSwitchTo(winstate->ss.ps.ps_ExprContext->ecxt_per_query_memory); + + /* If no ORDER BY, all rows are peers with each other */ + if (node->ordNumCols == 0) + { + spool_tuples(winstate, -1); + winstate->grouptailpos = winstate->spooled_rows; + winstate->grouptail_valid = true; + MemoryContextSwitchTo(oldcontext); + return; + } + + /* + * Because grouptail_valid is reset only when current row advances into a + * new peer group, we always reach here knowing that grouptailpos needs to + * be advanced by at least one row. Hence, unlike the otherwise similar + * case for frame tail tracking, we do not need persistent storage of the + * group tail row. + */ + Assert(winstate->grouptailpos <= winstate->currentpos); + tuplestore_select_read_pointer(winstate->buffer, + winstate->grouptail_ptr); + for (;;) + { + /* Note we advance grouptailpos even if the fetch fails */ + winstate->grouptailpos++; + spool_tuples(winstate, winstate->grouptailpos); + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->temp_slot_2)) + break; /* end of partition */ + if (winstate->grouptailpos > winstate->currentpos && + !are_peers(winstate, winstate->temp_slot_2, + winstate->ss.ss_ScanTupleSlot)) + break; /* this row is the group tail */ + } + ExecClearTuple(winstate->temp_slot_2); + winstate->grouptail_valid = true; + + MemoryContextSwitchTo(oldcontext); } @@ -1602,7 +2019,9 @@ ExecWindowAgg(PlanState *pstate) return NULL; /* - * Compute frame offset values, if any, during first call. + * Compute frame offset values, if any, during first call (or after a + * rescan). These are assumed to hold constant throughout the scan; if + * user gives us a volatile expression, we'll only use its initial value. 
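+ * (The parser already forbids variables, aggregates, and window
+ * functions in the offset expressions, but a volatile function such as
+ * random() can still appear; evaluating it only once is deliberate.)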
*/ if (winstate->all_first) { @@ -1613,7 +2032,7 @@ ExecWindowAgg(PlanState *pstate) int16 len; bool byval; - if (frameOptions & FRAMEOPTION_START_VALUE) + if (frameOptions & FRAMEOPTION_START_OFFSET) { Assert(winstate->startOffset != NULL); value = ExecEvalExprSwitchContext(winstate->startOffset, @@ -1627,18 +2046,18 @@ ExecWindowAgg(PlanState *pstate) get_typlenbyval(exprType((Node *) winstate->startOffset->expr), &len, &byval); winstate->startOffsetValue = datumCopy(value, byval, len); - if (frameOptions & FRAMEOPTION_ROWS) + if (frameOptions & (FRAMEOPTION_ROWS | FRAMEOPTION_GROUPS)) { /* value is known to be int8 */ int64 offset = DatumGetInt64(value); if (offset < 0) ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), errmsg("frame starting offset must not be negative"))); } } - if (frameOptions & FRAMEOPTION_END_VALUE) + if (frameOptions & FRAMEOPTION_END_OFFSET) { Assert(winstate->endOffset != NULL); value = ExecEvalExprSwitchContext(winstate->endOffset, @@ -1652,14 +2071,14 @@ ExecWindowAgg(PlanState *pstate) get_typlenbyval(exprType((Node *) winstate->endOffset->expr), &len, &byval); winstate->endOffsetValue = datumCopy(value, byval, len); - if (frameOptions & FRAMEOPTION_ROWS) + if (frameOptions & (FRAMEOPTION_ROWS | FRAMEOPTION_GROUPS)) { /* value is known to be int8 */ int64 offset = DatumGetInt64(value); if (offset < 0) ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), errmsg("frame ending offset must not be negative"))); } } @@ -1679,6 +2098,7 @@ ExecWindowAgg(PlanState *pstate) /* This might mean that the frame moves, too */ winstate->framehead_valid = false; winstate->frametail_valid = false; + /* we don't need to invalidate grouptail here; see below */ } /* @@ -1718,12 +2138,38 @@ ExecWindowAgg(PlanState *pstate) * out of the tuplestore, since window function evaluation might cause the * tuplestore to dump its state to disk.) * + * In GROUPS mode, or when tracking a group-oriented exclusion clause, we + * must also detect entering a new peer group and update associated state + * when that happens. We use temp_slot_2 to temporarily hold the previous + * row for this purpose. + * * Current row must be in the tuplestore, since we spooled it above. 
*/ tuplestore_select_read_pointer(winstate->buffer, winstate->current_ptr); - if (!tuplestore_gettupleslot(winstate->buffer, true, true, - winstate->ss.ss_ScanTupleSlot)) - elog(ERROR, "unexpected end of tuplestore"); + if ((winstate->frameOptions & (FRAMEOPTION_GROUPS | + FRAMEOPTION_EXCLUDE_GROUP | + FRAMEOPTION_EXCLUDE_TIES)) && + winstate->currentpos > 0) + { + ExecCopySlot(winstate->temp_slot_2, winstate->ss.ss_ScanTupleSlot); + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->ss.ss_ScanTupleSlot)) + elog(ERROR, "unexpected end of tuplestore"); + if (!are_peers(winstate, winstate->temp_slot_2, + winstate->ss.ss_ScanTupleSlot)) + { + winstate->currentgroup++; + winstate->groupheadpos = winstate->currentpos; + winstate->grouptail_valid = false; + } + ExecClearTuple(winstate->temp_slot_2); + } + else + { + if (!tuplestore_gettupleslot(winstate->buffer, true, true, + winstate->ss.ss_ScanTupleSlot)) + elog(ERROR, "unexpected end of tuplestore"); + } /* * Evaluate true window functions @@ -1746,6 +2192,23 @@ ExecWindowAgg(PlanState *pstate) if (winstate->numaggs > 0) eval_windowaggregates(winstate); + /* + * If we have created auxiliary read pointers for the frame or group + * boundaries, force them to be kept up-to-date, because we don't know + * whether the window function(s) will do anything that requires that. + * Failing to advance the pointers would result in being unable to trim + * data from the tuplestore, which is bad. (If we could know in advance + * whether the window functions will use frame boundary info, we could + * skip creating these pointers in the first place ... but unfortunately + * the window function API doesn't require that.) + */ + if (winstate->framehead_ptr >= 0) + update_frameheadpos(winstate); + if (winstate->frametail_ptr >= 0) + update_frametailpos(winstate); + if (winstate->grouptail_ptr >= 0) + update_grouptailpos(winstate); + /* * Truncate any no-longer-needed rows from the tuplestore. 
*/ @@ -1777,6 +2240,7 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) ExprContext *tmpcontext; WindowStatePerFunc perfunc; WindowStatePerAgg peragg; + int frameOptions = node->frameOptions; int numfuncs, wfuncno, numaggs, @@ -1831,6 +2295,20 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) winstate->temp_slot_1 = ExecInitExtraTupleSlot(estate); winstate->temp_slot_2 = ExecInitExtraTupleSlot(estate); + /* + * create frame head and tail slots only if needed (must match logic in + * update_frameheadpos and update_frametailpos) + */ + winstate->framehead_slot = winstate->frametail_slot = NULL; + + if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS)) + { + if (!(frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)) + winstate->framehead_slot = ExecInitExtraTupleSlot(estate); + if (!(frameOptions & FRAMEOPTION_END_UNBOUNDED_FOLLOWING)) + winstate->frametail_slot = ExecInitExtraTupleSlot(estate); + } + /* * WindowAgg nodes never have quals, since they can only occur at the * logical top level of a query (ie, after any WHERE or HAVING filters) @@ -1858,6 +2336,12 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); ExecSetSlotDescriptor(winstate->temp_slot_2, winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); + if (winstate->framehead_slot) + ExecSetSlotDescriptor(winstate->framehead_slot, + winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); + if (winstate->frametail_slot) + ExecSetSlotDescriptor(winstate->frametail_slot, + winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); /* * Initialize result tuple type and projection info. @@ -1991,7 +2475,7 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) } /* copy frame options to state node for easy access */ - winstate->frameOptions = node->frameOptions; + winstate->frameOptions = frameOptions; /* initialize frame bound offset expressions */ winstate->startOffset = ExecInitExpr((Expr *) node->startOffset, @@ -1999,6 +2483,15 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) winstate->endOffset = ExecInitExpr((Expr *) node->endOffset, (PlanState *) winstate); + /* Lookup in_range support functions if needed */ + if (OidIsValid(node->startInRangeFunc)) + fmgr_info(node->startInRangeFunc, &winstate->startInRangeFunc); + if (OidIsValid(node->endInRangeFunc)) + fmgr_info(node->endInRangeFunc, &winstate->endInRangeFunc); + winstate->inRangeColl = node->inRangeColl; + winstate->inRangeAsc = node->inRangeAsc; + winstate->inRangeNullsFirst = node->inRangeNullsFirst; + winstate->all_first = true; winstate->partition_spooled = false; winstate->more_partitions = false; @@ -2023,6 +2516,10 @@ ExecEndWindowAgg(WindowAggState *node) ExecClearTuple(node->agg_row_slot); ExecClearTuple(node->temp_slot_1); ExecClearTuple(node->temp_slot_2); + if (node->framehead_slot) + ExecClearTuple(node->framehead_slot); + if (node->frametail_slot) + ExecClearTuple(node->frametail_slot); /* * Free both the expr contexts. 
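A side note on the sub/less flag pair that update_frametailpos precomputes above: an in_range support call answers "does this row's value lie on the expected side of base plus-or-minus offset?". The following standalone model is illustrative only (in_range_model is a made-up name, plain int stands in for any ordered datatype, and the overflow care the real support functions take is omitted):

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the BTINRANGE_PROC contract: in_range(val, base, offset, sub, less) */
static bool
in_range_model(int val, int base, int offset, bool sub, bool less)
{
	int		sum = sub ? base - offset : base + offset;

	return less ? (val <= sum) : (val >= sum);
}

int
main(void)
{
	/* ascending order, frame end at "current + 3": value 7 is in range of base 4, value 8 is not */
	printf("%d %d\n", in_range_model(7, 4, 3, false, true),
		   in_range_model(8, 4, 3, false, true));
	return 0;
}

Flipping sub and less together when inRangeAsc is false, as the executor code above does, is what lets a single support function serve both sort directions.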
@@ -2068,6 +2565,10 @@ ExecReScanWindowAgg(WindowAggState *node) ExecClearTuple(node->agg_row_slot); ExecClearTuple(node->temp_slot_1); ExecClearTuple(node->temp_slot_2); + if (node->framehead_slot) + ExecClearTuple(node->framehead_slot); + if (node->frametail_slot) + ExecClearTuple(node->frametail_slot); /* Forget current wfunc values */ MemSet(econtext->ecxt_aggvalues, 0, sizeof(Datum) * node->numfuncs); @@ -2574,7 +3075,7 @@ WinSetMarkPosition(WindowObject winobj, int64 markpos) /* * WinRowsArePeers - * Compare two rows (specified by absolute position in window) to see + * Compare two rows (specified by absolute position in partition) to see * if they are equal according to the ORDER BY clause. * * NB: this does not consider the window frame mode. @@ -2596,6 +3097,10 @@ WinRowsArePeers(WindowObject winobj, int64 pos1, int64 pos2) if (node->ordNumCols == 0) return true; + /* + * Note: OK to use temp_slot_2 here because we aren't calling any + * frame-related functions (those tend to clobber temp_slot_2). + */ slot1 = winstate->temp_slot_1; slot2 = winstate->temp_slot_2; @@ -2680,30 +3185,7 @@ WinGetFuncArgInPartition(WindowObject winobj, int argno, if (isout) *isout = false; if (set_mark) - { - int frameOptions = winstate->frameOptions; - int64 mark_pos = abs_pos; - - /* - * In RANGE mode with a moving frame head, we must not let the - * mark advance past frameheadpos, since that row has to be - * fetchable during future update_frameheadpos calls. - * - * XXX it is very ugly to pollute window functions' marks with - * this consideration; it could for instance mask a logic bug that - * lets a window function fetch rows before what it had claimed - * was its mark. Perhaps use a separate mark for frame head - * probes? - */ - if ((frameOptions & FRAMEOPTION_RANGE) && - !(frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)) - { - update_frameheadpos(winobj, winstate->temp_slot_2); - if (mark_pos > winstate->frameheadpos) - mark_pos = winstate->frameheadpos; - } - WinSetMarkPosition(winobj, mark_pos); - } + WinSetMarkPosition(winobj, abs_pos); econtext->ecxt_outertuple = slot; return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno), econtext, isnull); @@ -2714,19 +3196,34 @@ WinGetFuncArgInPartition(WindowObject winobj, int argno, * WinGetFuncArgInFrame * Evaluate a window function's argument expression on a specified * row of the window frame. The row is identified in lseek(2) style, - * i.e. relative to the current, first, or last row. + * i.e. relative to the first or last row of the frame. (We do not + * support WINDOW_SEEK_CURRENT here, because it's not very clear what + * that should mean if the current row isn't part of the frame.) * * argno: argument number to evaluate (counted from 0) * relpos: signed rowcount offset from the seek position - * seektype: WINDOW_SEEK_CURRENT, WINDOW_SEEK_HEAD, or WINDOW_SEEK_TAIL - * set_mark: If the row is found and set_mark is true, the mark is moved to - * the row as a side-effect. + * seektype: WINDOW_SEEK_HEAD or WINDOW_SEEK_TAIL + * set_mark: If the row is found/in frame and set_mark is true, the mark is + * moved to the row as a side-effect. * isnull: output argument, receives isnull status of result * isout: output argument, set to indicate whether target row position * is out of frame (can pass NULL if caller doesn't care about this) * - * Specifying a nonexistent row is not an error, it just causes a null result - * (plus setting *isout true, if isout isn't NULL). 
+ * Specifying a nonexistent or not-in-frame row is not an error, it just + * causes a null result (plus setting *isout true, if isout isn't NULL). + * + * Note that some exclusion-clause options lead to situations where the + * rows that are in-frame are not consecutive in the partition. But we + * count only in-frame rows when measuring relpos. + * + * The set_mark flag is interpreted as meaning that the caller will specify + * a constant (or, perhaps, monotonically increasing) relpos in successive + * calls, so that *if there is no exclusion clause* there will be no need + * to fetch a row before the previously fetched row. But we do not expect + * the caller to know how to account for exclusion clauses. Therefore, + * if there is an exclusion clause we take responsibility for adjusting the + * mark request to something that will be safe given the above assumption + * about relpos. */ Datum WinGetFuncArgInFrame(WindowObject winobj, int argno, @@ -2736,8 +3233,8 @@ WinGetFuncArgInFrame(WindowObject winobj, int argno, WindowAggState *winstate; ExprContext *econtext; TupleTableSlot *slot; - bool gottuple; int64 abs_pos; + int64 mark_pos; Assert(WindowObjectIsValid(winobj)); winstate = winobj->winstate; @@ -2747,66 +3244,167 @@ WinGetFuncArgInFrame(WindowObject winobj, int argno, switch (seektype) { case WINDOW_SEEK_CURRENT: - abs_pos = winstate->currentpos + relpos; + elog(ERROR, "WINDOW_SEEK_CURRENT is not supported for WinGetFuncArgInFrame"); + abs_pos = mark_pos = 0; /* keep compiler quiet */ break; case WINDOW_SEEK_HEAD: - update_frameheadpos(winobj, slot); + /* rejecting relpos < 0 is easy and simplifies code below */ + if (relpos < 0) + goto out_of_frame; + update_frameheadpos(winstate); abs_pos = winstate->frameheadpos + relpos; + mark_pos = abs_pos; + + /* + * Account for exclusion option if one is active, but advance only + * abs_pos not mark_pos. This prevents changes of the current + * row's peer group from resulting in trying to fetch a row before + * some previous mark position. + * + * Note that in some corner cases such as current row being + * outside frame, these calculations are theoretically too simple, + * but it doesn't matter because we'll end up deciding the row is + * out of frame. We do not attempt to avoid fetching rows past + * end of frame; that would happen in some cases anyway. 
+ */ + switch (winstate->frameOptions & FRAMEOPTION_EXCLUSION) + { + case 0: + /* no adjustment needed */ + break; + case FRAMEOPTION_EXCLUDE_CURRENT_ROW: + if (abs_pos >= winstate->currentpos && + winstate->currentpos >= winstate->frameheadpos) + abs_pos++; + break; + case FRAMEOPTION_EXCLUDE_GROUP: + update_grouptailpos(winstate); + if (abs_pos >= winstate->groupheadpos && + winstate->grouptailpos > winstate->frameheadpos) + { + int64 overlapstart = Max(winstate->groupheadpos, + winstate->frameheadpos); + + abs_pos += winstate->grouptailpos - overlapstart; + } + break; + case FRAMEOPTION_EXCLUDE_TIES: + update_grouptailpos(winstate); + if (abs_pos >= winstate->groupheadpos && + winstate->grouptailpos > winstate->frameheadpos) + { + int64 overlapstart = Max(winstate->groupheadpos, + winstate->frameheadpos); + + if (abs_pos == overlapstart) + abs_pos = winstate->currentpos; + else + abs_pos += winstate->grouptailpos - overlapstart - 1; + } + break; + default: + elog(ERROR, "unrecognized frame option state: 0x%x", + winstate->frameOptions); + break; + } break; case WINDOW_SEEK_TAIL: - update_frametailpos(winobj, slot); - abs_pos = winstate->frametailpos + relpos; + /* rejecting relpos > 0 is easy and simplifies code below */ + if (relpos > 0) + goto out_of_frame; + update_frametailpos(winstate); + abs_pos = winstate->frametailpos - 1 + relpos; + + /* + * Account for exclusion option if one is active. If there is no + * exclusion, we can safely set the mark at the accessed row. But + * if there is, we can only mark the frame start, because we can't + * be sure how far back in the frame the exclusion might cause us + * to fetch in future. Furthermore, we have to actually check + * against frameheadpos here, since it's unsafe to try to fetch a + * row before frame start if the mark might be there already. 
+ */ + switch (winstate->frameOptions & FRAMEOPTION_EXCLUSION) + { + case 0: + /* no adjustment needed */ + mark_pos = abs_pos; + break; + case FRAMEOPTION_EXCLUDE_CURRENT_ROW: + if (abs_pos <= winstate->currentpos && + winstate->currentpos < winstate->frametailpos) + abs_pos--; + update_frameheadpos(winstate); + if (abs_pos < winstate->frameheadpos) + goto out_of_frame; + mark_pos = winstate->frameheadpos; + break; + case FRAMEOPTION_EXCLUDE_GROUP: + update_grouptailpos(winstate); + if (abs_pos < winstate->grouptailpos && + winstate->groupheadpos < winstate->frametailpos) + { + int64 overlapend = Min(winstate->grouptailpos, + winstate->frametailpos); + + abs_pos -= overlapend - winstate->groupheadpos; + } + update_frameheadpos(winstate); + if (abs_pos < winstate->frameheadpos) + goto out_of_frame; + mark_pos = winstate->frameheadpos; + break; + case FRAMEOPTION_EXCLUDE_TIES: + update_grouptailpos(winstate); + if (abs_pos < winstate->grouptailpos && + winstate->groupheadpos < winstate->frametailpos) + { + int64 overlapend = Min(winstate->grouptailpos, + winstate->frametailpos); + + if (abs_pos == overlapend - 1) + abs_pos = winstate->currentpos; + else + abs_pos -= overlapend - 1 - winstate->groupheadpos; + } + update_frameheadpos(winstate); + if (abs_pos < winstate->frameheadpos) + goto out_of_frame; + mark_pos = winstate->frameheadpos; + break; + default: + elog(ERROR, "unrecognized frame option state: 0x%x", + winstate->frameOptions); + mark_pos = 0; /* keep compiler quiet */ + break; + } break; default: elog(ERROR, "unrecognized window seek type: %d", seektype); - abs_pos = 0; /* keep compiler quiet */ + abs_pos = mark_pos = 0; /* keep compiler quiet */ break; } - gottuple = window_gettupleslot(winobj, abs_pos, slot); - if (gottuple) - gottuple = row_is_in_frame(winstate, abs_pos, slot); + if (!window_gettupleslot(winobj, abs_pos, slot)) + goto out_of_frame; - if (!gottuple) - { - if (isout) - *isout = true; - *isnull = true; - return (Datum) 0; - } - else - { - if (isout) - *isout = false; - if (set_mark) - { - int frameOptions = winstate->frameOptions; - int64 mark_pos = abs_pos; + /* The code above does not detect all out-of-frame cases, so check */ + if (row_is_in_frame(winstate, abs_pos, slot) <= 0) + goto out_of_frame; - /* - * In RANGE mode with a moving frame head, we must not let the - * mark advance past frameheadpos, since that row has to be - * fetchable during future update_frameheadpos calls. - * - * XXX it is very ugly to pollute window functions' marks with - * this consideration; it could for instance mask a logic bug that - * lets a window function fetch rows before what it had claimed - * was its mark. Perhaps use a separate mark for frame head - * probes? 
- */ - if ((frameOptions & FRAMEOPTION_RANGE) && - !(frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)) - { - update_frameheadpos(winobj, winstate->temp_slot_2); - if (mark_pos > winstate->frameheadpos) - mark_pos = winstate->frameheadpos; - } - WinSetMarkPosition(winobj, mark_pos); - } - econtext->ecxt_outertuple = slot; - return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno), - econtext, isnull); - } + if (isout) + *isout = false; + if (set_mark) + WinSetMarkPosition(winobj, mark_pos); + econtext->ecxt_outertuple = slot; + return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno), + econtext, isnull); + +out_of_frame: + if (isout) + *isout = true; + *isnull = true; + return (Datum) 0; } /* diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index bafe0d1071..82255b0d1d 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -1012,6 +1012,11 @@ _copyWindowAgg(const WindowAgg *from) COPY_SCALAR_FIELD(frameOptions); COPY_NODE_FIELD(startOffset); COPY_NODE_FIELD(endOffset); + COPY_SCALAR_FIELD(startInRangeFunc); + COPY_SCALAR_FIELD(endInRangeFunc); + COPY_SCALAR_FIELD(inRangeColl); + COPY_SCALAR_FIELD(inRangeAsc); + COPY_SCALAR_FIELD(inRangeNullsFirst); return newnode; } @@ -2412,6 +2417,11 @@ _copyWindowClause(const WindowClause *from) COPY_SCALAR_FIELD(frameOptions); COPY_NODE_FIELD(startOffset); COPY_NODE_FIELD(endOffset); + COPY_SCALAR_FIELD(startInRangeFunc); + COPY_SCALAR_FIELD(endInRangeFunc); + COPY_SCALAR_FIELD(inRangeColl); + COPY_SCALAR_FIELD(inRangeAsc); + COPY_SCALAR_FIELD(inRangeNullsFirst); COPY_SCALAR_FIELD(winref); COPY_SCALAR_FIELD(copiedOrder); diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index 02ca7d588c..b9bc8e38d7 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2735,6 +2735,11 @@ _equalWindowClause(const WindowClause *a, const WindowClause *b) COMPARE_SCALAR_FIELD(frameOptions); COMPARE_NODE_FIELD(startOffset); COMPARE_NODE_FIELD(endOffset); + COMPARE_SCALAR_FIELD(startInRangeFunc); + COMPARE_SCALAR_FIELD(endInRangeFunc); + COMPARE_SCALAR_FIELD(inRangeColl); + COMPARE_SCALAR_FIELD(inRangeAsc); + COMPARE_SCALAR_FIELD(inRangeNullsFirst); COMPARE_SCALAR_FIELD(winref); COMPARE_SCALAR_FIELD(copiedOrder); diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index e6ba096257..011d2a3fa9 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -840,6 +840,11 @@ _outWindowAgg(StringInfo str, const WindowAgg *node) WRITE_INT_FIELD(frameOptions); WRITE_NODE_FIELD(startOffset); WRITE_NODE_FIELD(endOffset); + WRITE_OID_FIELD(startInRangeFunc); + WRITE_OID_FIELD(endInRangeFunc); + WRITE_OID_FIELD(inRangeColl); + WRITE_BOOL_FIELD(inRangeAsc); + WRITE_BOOL_FIELD(inRangeNullsFirst); } static void @@ -2985,6 +2990,11 @@ _outWindowClause(StringInfo str, const WindowClause *node) WRITE_INT_FIELD(frameOptions); WRITE_NODE_FIELD(startOffset); WRITE_NODE_FIELD(endOffset); + WRITE_OID_FIELD(startInRangeFunc); + WRITE_OID_FIELD(endInRangeFunc); + WRITE_OID_FIELD(inRangeColl); + WRITE_BOOL_FIELD(inRangeAsc); + WRITE_BOOL_FIELD(inRangeNullsFirst); WRITE_UINT_FIELD(winref); WRITE_BOOL_FIELD(copiedOrder); } diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c index 22d8b9d0d5..068db353d7 100644 --- a/src/backend/nodes/readfuncs.c +++ b/src/backend/nodes/readfuncs.c @@ -369,6 +369,11 @@ _readWindowClause(void) READ_INT_FIELD(frameOptions); READ_NODE_FIELD(startOffset); 
READ_NODE_FIELD(endOffset); + READ_OID_FIELD(startInRangeFunc); + READ_OID_FIELD(endInRangeFunc); + READ_OID_FIELD(inRangeColl); + READ_BOOL_FIELD(inRangeAsc); + READ_BOOL_FIELD(inRangeNullsFirst); READ_UINT_FIELD(winref); READ_BOOL_FIELD(copiedOrder); @@ -2139,6 +2144,11 @@ _readWindowAgg(void) READ_INT_FIELD(frameOptions); READ_NODE_FIELD(startOffset); READ_NODE_FIELD(endOffset); + READ_OID_FIELD(startInRangeFunc); + READ_OID_FIELD(endInRangeFunc); + READ_OID_FIELD(inRangeColl); + READ_BOOL_FIELD(inRangeAsc); + READ_BOOL_FIELD(inRangeNullsFirst); READ_DONE(); } diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index c46e1318a6..da0cc7f266 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -261,6 +261,8 @@ static WindowAgg *make_windowagg(List *tlist, Index winref, int partNumCols, AttrNumber *partColIdx, Oid *partOperators, int ordNumCols, AttrNumber *ordColIdx, Oid *ordOperators, int frameOptions, Node *startOffset, Node *endOffset, + Oid startInRangeFunc, Oid endInRangeFunc, + Oid inRangeColl, bool inRangeAsc, bool inRangeNullsFirst, Plan *lefttree); static Group *make_group(List *tlist, List *qual, int numGroupCols, AttrNumber *grpColIdx, Oid *grpOperators, @@ -2123,6 +2125,11 @@ create_windowagg_plan(PlannerInfo *root, WindowAggPath *best_path) wc->frameOptions, wc->startOffset, wc->endOffset, + wc->startInRangeFunc, + wc->endInRangeFunc, + wc->inRangeColl, + wc->inRangeAsc, + wc->inRangeNullsFirst, subplan); copy_generic_path_info(&plan->plan, (Path *) best_path); @@ -6080,6 +6087,8 @@ make_windowagg(List *tlist, Index winref, int partNumCols, AttrNumber *partColIdx, Oid *partOperators, int ordNumCols, AttrNumber *ordColIdx, Oid *ordOperators, int frameOptions, Node *startOffset, Node *endOffset, + Oid startInRangeFunc, Oid endInRangeFunc, + Oid inRangeColl, bool inRangeAsc, bool inRangeNullsFirst, Plan *lefttree) { WindowAgg *node = makeNode(WindowAgg); @@ -6095,6 +6104,11 @@ make_windowagg(List *tlist, Index winref, node->frameOptions = frameOptions; node->startOffset = startOffset; node->endOffset = endOffset; + node->startInRangeFunc = startInRangeFunc; + node->endInRangeFunc = endInRangeFunc; + node->inRangeColl = inRangeColl; + node->inRangeAsc = inRangeAsc; + node->inRangeNullsFirst = inRangeNullsFirst; plan->targetlist = tlist; plan->lefttree = lefttree; diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index 5329432f25..d99f2be2c9 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -570,6 +570,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); %type <node> window_clause window_definition_list opt_partition_clause %type <windef> window_definition over_clause window_specification opt_frame_clause frame_extent frame_bound +%type <ival> opt_window_exclusion_clause %type <str> opt_existing_window_name %type <boolean> opt_if_not_exists %type <ival> generated_when override_kind @@ -632,7 +633,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); FALSE_P FAMILY FETCH FILTER FIRST_P FLOAT_P FOLLOWING FOR FORCE FOREIGN FORWARD FREEZE FROM FULL FUNCTION FUNCTIONS - GENERATED GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING + GENERATED GLOBAL GRANT GRANTED GREATEST GROUP_P GROUPING GROUPS HANDLER HAVING HEADER_P HOLD HOUR_P @@ -656,7 +657,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); NULLS_P NUMERIC OBJECT_P OF OFF OFFSET OIDS OLD ON ONLY OPERATOR OPTION OPTIONS OR - ORDER ORDINALITY OUT_P
OUTER_P OVER OVERLAPS OVERLAY OVERRIDING OWNED OWNER + ORDER ORDINALITY OTHERS OUT_P OUTER_P + OVER OVERLAPS OVERLAY OVERRIDING OWNED OWNER PARALLEL PARSER PARTIAL PARTITION PASSING PASSWORD PLACING PLANS POLICY POSITION PRECEDING PRECISION PRESERVE PREPARE PREPARED PRIMARY @@ -676,7 +678,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); SUBSCRIPTION SUBSTRING SYMMETRIC SYSID SYSTEM_P TABLE TABLES TABLESAMPLE TABLESPACE TEMP TEMPLATE TEMPORARY TEXT_P THEN - TIME TIMESTAMP TO TRAILING TRANSACTION TRANSFORM TREAT TRIGGER TRIM TRUE_P + TIES TIME TIMESTAMP TO TRAILING TRANSACTION TRANSFORM + TREAT TRIGGER TRIM TRUE_P TRUNCATE TRUSTED TYPE_P TYPES_P UNBOUNDED UNCOMMITTED UNENCRYPTED UNION UNIQUE UNKNOWN UNLISTEN UNLOGGED @@ -724,9 +727,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); * between POSTFIXOP and Op. We can safely assign the same priority to * various unreserved keywords as needed to resolve ambiguities (this can't * have any bad effects since obviously the keywords will still behave the - * same as if they weren't keywords). We need to do this for PARTITION, - * RANGE, ROWS to support opt_existing_window_name; and for RANGE, ROWS - * so that they can follow a_expr without creating postfix-operator problems; + * same as if they weren't keywords). We need to do this: + * for PARTITION, RANGE, ROWS, GROUPS to support opt_existing_window_name; + * for RANGE, ROWS, GROUPS so that they can follow a_expr without creating + * postfix-operator problems; * for GENERATED so that it can follow b_expr; * and for NULL so that it can follow b_expr in ColQualList without creating * postfix-operator problems. @@ -746,7 +750,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); * blame any funny behavior of UNBOUNDED on the SQL standard, though. */ %nonassoc UNBOUNDED /* ideally should have same precedence as IDENT */ -%nonassoc IDENT GENERATED NULL_P PARTITION RANGE ROWS PRECEDING FOLLOWING CUBE ROLLUP +%nonassoc IDENT GENERATED NULL_P PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP %left Op OPERATOR /* multi-character ops and user-defined operators */ %left '+' '-' %left '*' '/' '%' @@ -14003,7 +14007,7 @@ window_specification: '(' opt_existing_window_name opt_partition_clause ; /* - * If we see PARTITION, RANGE, or ROWS as the first token after the '(' + * If we see PARTITION, RANGE, ROWS or GROUPS as the first token after the '(' * of a window_specification, we want the assumption to be that there is * no existing_window_name; but those keywords are unreserved and so could * be ColIds. We fix this by making them have the same precedence as IDENT @@ -14023,33 +14027,27 @@ opt_partition_clause: PARTITION BY expr_list { $$ = $3; } /* * For frame clauses, we return a WindowDef, but only some fields are used: * frameOptions, startOffset, and endOffset. - * - * This is only a subset of the full SQL:2008 frame_clause grammar. - * We don't support <window frame exclusion> yet.
*/ opt_frame_clause: - RANGE frame_extent + RANGE frame_extent opt_window_exclusion_clause { WindowDef *n = $2; n->frameOptions |= FRAMEOPTION_NONDEFAULT | FRAMEOPTION_RANGE; - if (n->frameOptions & (FRAMEOPTION_START_VALUE_PRECEDING | - FRAMEOPTION_END_VALUE_PRECEDING)) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("RANGE PRECEDING is only supported with UNBOUNDED"), - parser_errposition(@1))); - if (n->frameOptions & (FRAMEOPTION_START_VALUE_FOLLOWING | - FRAMEOPTION_END_VALUE_FOLLOWING)) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("RANGE FOLLOWING is only supported with UNBOUNDED"), - parser_errposition(@1))); + n->frameOptions |= $3; $$ = n; } - | ROWS frame_extent + | ROWS frame_extent opt_window_exclusion_clause { WindowDef *n = $2; n->frameOptions |= FRAMEOPTION_NONDEFAULT | FRAMEOPTION_ROWS; + n->frameOptions |= $3; + $$ = n; + } + | GROUPS frame_extent opt_window_exclusion_clause + { + WindowDef *n = $2; + n->frameOptions |= FRAMEOPTION_NONDEFAULT | FRAMEOPTION_GROUPS; + n->frameOptions |= $3; $$ = n; } | /*EMPTY*/ @@ -14071,7 +14069,7 @@ frame_extent: frame_bound (errcode(ERRCODE_WINDOWING_ERROR), errmsg("frame start cannot be UNBOUNDED FOLLOWING"), parser_errposition(@1))); - if (n->frameOptions & FRAMEOPTION_START_VALUE_FOLLOWING) + if (n->frameOptions & FRAMEOPTION_START_OFFSET_FOLLOWING) ereport(ERROR, (errcode(ERRCODE_WINDOWING_ERROR), errmsg("frame starting from following row cannot end with current row"), @@ -14100,13 +14098,13 @@ frame_extent: frame_bound errmsg("frame end cannot be UNBOUNDED PRECEDING"), parser_errposition(@4))); if ((frameOptions & FRAMEOPTION_START_CURRENT_ROW) && - (frameOptions & FRAMEOPTION_END_VALUE_PRECEDING)) + (frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING)) ereport(ERROR, (errcode(ERRCODE_WINDOWING_ERROR), errmsg("frame starting from current row cannot have preceding rows"), parser_errposition(@4))); - if ((frameOptions & FRAMEOPTION_START_VALUE_FOLLOWING) && - (frameOptions & (FRAMEOPTION_END_VALUE_PRECEDING | + if ((frameOptions & FRAMEOPTION_START_OFFSET_FOLLOWING) && + (frameOptions & (FRAMEOPTION_END_OFFSET_PRECEDING | FRAMEOPTION_END_CURRENT_ROW))) ereport(ERROR, (errcode(ERRCODE_WINDOWING_ERROR), @@ -14151,7 +14149,7 @@ frame_bound: | a_expr PRECEDING { WindowDef *n = makeNode(WindowDef); - n->frameOptions = FRAMEOPTION_START_VALUE_PRECEDING; + n->frameOptions = FRAMEOPTION_START_OFFSET_PRECEDING; n->startOffset = $1; n->endOffset = NULL; $$ = n; @@ -14159,13 +14157,21 @@ frame_bound: | a_expr FOLLOWING { WindowDef *n = makeNode(WindowDef); - n->frameOptions = FRAMEOPTION_START_VALUE_FOLLOWING; + n->frameOptions = FRAMEOPTION_START_OFFSET_FOLLOWING; n->startOffset = $1; n->endOffset = NULL; $$ = n; } ; +opt_window_exclusion_clause: + EXCLUDE CURRENT_P ROW { $$ = FRAMEOPTION_EXCLUDE_CURRENT_ROW; } + | EXCLUDE GROUP_P { $$ = FRAMEOPTION_EXCLUDE_GROUP; } + | EXCLUDE TIES { $$ = FRAMEOPTION_EXCLUDE_TIES; } + | EXCLUDE NO OTHERS { $$ = 0; } + | /*EMPTY*/ { $$ = 0; } + ; + /* * Supporting nonterminals for expressions. 
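To make the semantics behind the four opt_window_exclusion_clause alternatives concrete, here is a standalone sketch (hypothetical names; the executor's real logic lives in row_is_in_frame and WinGetFuncArgInFrame) of which partition rows survive each EXCLUDE variant, using the executor's convention that the current row's peer group spans [grouphead, grouptail):

#include <stdbool.h>
#include <stdio.h>

enum wf_exclude { EXCL_NO_OTHERS, EXCL_CURRENT_ROW, EXCL_GROUP, EXCL_TIES };

/* Does row i survive the EXCLUDE clause?  (Frame membership is a separate test.) */
static bool
survives_exclusion(long i, long cur, long grouphead, long grouptail,
				   enum wf_exclude excl)
{
	switch (excl)
	{
		case EXCL_CURRENT_ROW:
			return i != cur;	/* only the current row goes */
		case EXCL_GROUP:
			return i < grouphead || i >= grouptail; /* whole peer group goes */
		case EXCL_TIES:
			/* peers go, but the current row itself stays */
			return i == cur || i < grouphead || i >= grouptail;
		default:
			return true;		/* EXCLUDE NO OTHERS: frame unchanged */
	}
}

int
main(void)
{
	/* current row 3 with peers 2..4: EXCLUDE TIES drops rows 2 and 4 only */
	for (long i = 0; i < 6; i++)
		printf("%ld:%d ", i, (int) survives_exclusion(i, 3, 2, 5, EXCL_TIES));
	putchar('\n');
	return 0;
}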
@@ -15027,6 +15033,7 @@ unreserved_keyword: | GENERATED | GLOBAL | GRANTED + | GROUPS | HANDLER | HEADER_P | HOLD @@ -15092,6 +15099,7 @@ unreserved_keyword: | OPTION | OPTIONS | ORDINALITY + | OTHERS | OVER | OVERRIDING | OWNED @@ -15182,6 +15190,7 @@ unreserved_keyword: | TEMPLATE | TEMPORARY | TEXT_P + | TIES | TRANSACTION | TRANSFORM | TRIGGER diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c index 6a9f1b0217..747139489a 100644 --- a/src/backend/parser/parse_agg.c +++ b/src/backend/parser/parse_agg.c @@ -419,6 +419,13 @@ check_agglevels_and_constraints(ParseState *pstate, Node *expr) else err = _("grouping operations are not allowed in window ROWS"); + break; + case EXPR_KIND_WINDOW_FRAME_GROUPS: + if (isAgg) + err = _("aggregate functions are not allowed in window GROUPS"); + else + err = _("grouping operations are not allowed in window GROUPS"); + break; case EXPR_KIND_SELECT_TARGET: /* okay */ @@ -835,6 +842,7 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc, case EXPR_KIND_WINDOW_ORDER: case EXPR_KIND_WINDOW_FRAME_RANGE: case EXPR_KIND_WINDOW_FRAME_ROWS: + case EXPR_KIND_WINDOW_FRAME_GROUPS: err = _("window functions are not allowed in window definitions"); break; case EXPR_KIND_SELECT_TARGET: diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c index 406cd1dad0..9bafc24083 100644 --- a/src/backend/parser/parse_clause.c +++ b/src/backend/parser/parse_clause.c @@ -18,10 +18,13 @@ #include "miscadmin.h" #include "access/heapam.h" +#include "access/htup_details.h" +#include "access/nbtree.h" #include "access/tsmapi.h" #include "catalog/catalog.h" #include "catalog/heap.h" #include "catalog/pg_am.h" +#include "catalog/pg_amproc.h" #include "catalog/pg_collation.h" #include "catalog/pg_constraint_fn.h" #include "catalog/pg_type.h" @@ -43,8 +46,11 @@ #include "parser/parse_target.h" #include "parser/parse_type.h" #include "rewrite/rewriteManip.h" +#include "utils/builtins.h" #include "utils/guc.h" +#include "utils/catcache.h" #include "utils/lsyscache.h" +#include "utils/syscache.h" #include "utils/rel.h" @@ -95,6 +101,7 @@ static List *addTargetToGroupList(ParseState *pstate, TargetEntry *tle, List *grouplist, List *targetlist, int location); static WindowClause *findWindowClause(List *wclist, const char *name); static Node *transformFrameOffset(ParseState *pstate, int frameOptions, + Oid rangeopfamily, Oid rangeopcintype, Oid *inRangeFunc, Node *clause); @@ -2627,6 +2634,8 @@ transformWindowDefinitions(ParseState *pstate, WindowClause *refwc = NULL; List *partitionClause; List *orderClause; + Oid rangeopfamily = InvalidOid; + Oid rangeopcintype = InvalidOid; WindowClause *wc; winref++; @@ -2753,10 +2762,47 @@ transformWindowDefinitions(ParseState *pstate, parser_errposition(pstate, windef->location))); } wc->frameOptions = windef->frameOptions; + + /* + * RANGE offset PRECEDING/FOLLOWING requires exactly one ORDER BY + * column; check that and get its sort opfamily info. 
+ */ + if ((wc->frameOptions & FRAMEOPTION_RANGE) && + (wc->frameOptions & (FRAMEOPTION_START_OFFSET | + FRAMEOPTION_END_OFFSET))) + { + SortGroupClause *sortcl; + Node *sortkey; + int16 rangestrategy; + + if (list_length(wc->orderClause) != 1) + ereport(ERROR, + (errcode(ERRCODE_WINDOWING_ERROR), + errmsg("RANGE with offset PRECEDING/FOLLOWING requires exactly one ORDER BY column"), + parser_errposition(pstate, windef->location))); + sortcl = castNode(SortGroupClause, linitial(wc->orderClause)); + sortkey = get_sortgroupclause_expr(sortcl, *targetlist); + /* Find the sort operator in pg_amop */ + if (!get_ordering_op_properties(sortcl->sortop, + &rangeopfamily, + &rangeopcintype, + &rangestrategy)) + elog(ERROR, "operator %u is not a valid ordering operator", + sortcl->sortop); + /* Record properties of sort ordering */ + wc->inRangeColl = exprCollation(sortkey); + wc->inRangeAsc = (rangestrategy == BTLessStrategyNumber); + wc->inRangeNullsFirst = sortcl->nulls_first; + } + /* Process frame offset expressions */ wc->startOffset = transformFrameOffset(pstate, wc->frameOptions, + rangeopfamily, rangeopcintype, + &wc->startInRangeFunc, windef->startOffset); wc->endOffset = transformFrameOffset(pstate, wc->frameOptions, + rangeopfamily, rangeopcintype, + &wc->endInRangeFunc, windef->endOffset); wc->winref = winref; @@ -3489,13 +3535,24 @@ findWindowClause(List *wclist, const char *name) /* * transformFrameOffset * Process a window frame offset expression + * + * In RANGE mode, rangeopfamily is the sort opfamily for the input ORDER BY + * column, and rangeopcintype is the input data type the sort operator is + * registered with. We expect the in_range function to be registered with + * that same type. (In binary-compatible cases, it might be different from + * the input column's actual type, so we can't use that for the lookups.) + * We'll return the OID of the in_range function to *inRangeFunc. */ static Node * -transformFrameOffset(ParseState *pstate, int frameOptions, Node *clause) +transformFrameOffset(ParseState *pstate, int frameOptions, + Oid rangeopfamily, Oid rangeopcintype, Oid *inRangeFunc, + Node *clause) { const char *constructName = NULL; Node *node; + *inRangeFunc = InvalidOid; /* default result */ + /* Quick exit if no offset expression */ if (clause == NULL) return NULL; @@ -3513,16 +3570,105 @@ transformFrameOffset(ParseState *pstate, int frameOptions, Node *clause) } else if (frameOptions & FRAMEOPTION_RANGE) { + /* + * We must look up the in_range support function that's to be used, + * possibly choosing one of several, and coerce the "offset" value to + * the appropriate input type. + */ + Oid nodeType; + Oid preferredType; + int nfuncs = 0; + int nmatches = 0; + Oid selectedType = InvalidOid; + Oid selectedFunc = InvalidOid; + CatCList *proclist; + int i; + /* Transform the raw expression tree */ node = transformExpr(pstate, clause, EXPR_KIND_WINDOW_FRAME_RANGE); + nodeType = exprType(node); + + /* + * If there are multiple candidates, we'll prefer the one that exactly + * matches nodeType; or if nodeType is as yet unknown, prefer the one + * that exactly matches the sort column type. (The second rule is + * like what we do for "known_type operator unknown".) + */ + preferredType = (nodeType != UNKNOWNOID) ? 
nodeType : rangeopcintype; + + /* Find the in_range support functions applicable to this case */ + proclist = SearchSysCacheList2(AMPROCNUM, + ObjectIdGetDatum(rangeopfamily), + ObjectIdGetDatum(rangeopcintype)); + for (i = 0; i < proclist->n_members; i++) + { + HeapTuple proctup = &proclist->members[i]->tuple; + Form_pg_amproc procform = (Form_pg_amproc) GETSTRUCT(proctup); + + /* The search will find all support proc types; ignore others */ + if (procform->amprocnum != BTINRANGE_PROC) + continue; + nfuncs++; + + /* Ignore function if given value can't be coerced to that type */ + if (!can_coerce_type(1, &nodeType, &procform->amprocrighttype, + COERCION_IMPLICIT)) + continue; + nmatches++; + + /* Remember preferred match, or any match if didn't find that */ + if (selectedType != preferredType) + { + selectedType = procform->amprocrighttype; + selectedFunc = procform->amproc; + } + } + ReleaseCatCacheList(proclist); /* - * this needs a lot of thought to decide how to support in the context - * of Postgres' extensible datatype framework + * Throw error if needed. It seems worth taking the trouble to + * distinguish "no support at all" from "you didn't match any + * available offset type". */ + if (nfuncs == 0) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("RANGE with offset PRECEDING/FOLLOWING is not supported for column type %s", + format_type_be(rangeopcintype)), + parser_errposition(pstate, exprLocation(node)))); + if (nmatches == 0) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("RANGE with offset PRECEDING/FOLLOWING is not supported for column type %s and offset type %s", + format_type_be(rangeopcintype), + format_type_be(nodeType)), + errhint("Cast the offset value to an appropriate type."), + parser_errposition(pstate, exprLocation(node)))); + if (nmatches != 1 && selectedType != preferredType) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("RANGE with offset PRECEDING/FOLLOWING has multiple interpretations for column type %s and offset type %s", + format_type_be(rangeopcintype), + format_type_be(nodeType)), + errhint("Cast the offset value to the exact intended type."), + parser_errposition(pstate, exprLocation(node)))); + + /* OK, coerce the offset to the right type */ constructName = "RANGE"; - /* error was already thrown by gram.y, this is just a backstop */ - elog(ERROR, "window frame with value offset is not implemented"); + node = coerce_to_specific_type(pstate, node, + selectedType, constructName); + *inRangeFunc = selectedFunc; + } + else if (frameOptions & FRAMEOPTION_GROUPS) + { + /* Transform the raw expression tree */ + node = transformExpr(pstate, clause, EXPR_KIND_WINDOW_FRAME_GROUPS); + + /* + * Like LIMIT clause, simply coerce to int8 + */ + constructName = "GROUPS"; + node = coerce_to_specific_type(pstate, node, INT8OID, constructName); } else { diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c index b2f5e46e3b..d45926f27f 100644 --- a/src/backend/parser/parse_expr.c +++ b/src/backend/parser/parse_expr.c @@ -1805,6 +1805,7 @@ transformSubLink(ParseState *pstate, SubLink *sublink) case EXPR_KIND_WINDOW_ORDER: case EXPR_KIND_WINDOW_FRAME_RANGE: case EXPR_KIND_WINDOW_FRAME_ROWS: + case EXPR_KIND_WINDOW_FRAME_GROUPS: case EXPR_KIND_SELECT_TARGET: case EXPR_KIND_INSERT_TARGET: case EXPR_KIND_UPDATE_SOURCE: @@ -3428,6 +3429,8 @@ ParseExprKindName(ParseExprKind exprKind) return "window RANGE"; case EXPR_KIND_WINDOW_FRAME_ROWS: return "window ROWS"; + case 
EXPR_KIND_WINDOW_FRAME_GROUPS: + return "window GROUPS"; case EXPR_KIND_SELECT_TARGET: return "SELECT"; case EXPR_KIND_INSERT_TARGET: diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c index ffae0f3cf3..4a7bc77c0f 100644 --- a/src/backend/parser/parse_func.c +++ b/src/backend/parser/parse_func.c @@ -2227,6 +2227,7 @@ check_srf_call_placement(ParseState *pstate, Node *last_srf, int location) break; case EXPR_KIND_WINDOW_FRAME_RANGE: case EXPR_KIND_WINDOW_FRAME_ROWS: + case EXPR_KIND_WINDOW_FRAME_GROUPS: err = _("set-returning functions are not allowed in window definitions"); break; case EXPR_KIND_SELECT_TARGET: diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c index 747ef49789..eea2904414 100644 --- a/src/backend/utils/adt/date.c +++ b/src/backend/utils/adt/date.c @@ -1011,6 +1011,34 @@ timestamptz_cmp_date(PG_FUNCTION_ARGS) PG_RETURN_INT32(timestamptz_cmp_internal(dt1, dt2)); } +/* + * in_range support function for date. + * + * We implement this by promoting the dates to timestamp (without time zone) + * and then using the timestamp-and-interval in_range function. + */ +Datum +in_range_date_interval(PG_FUNCTION_ARGS) +{ + DateADT val = PG_GETARG_DATEADT(0); + DateADT base = PG_GETARG_DATEADT(1); + Interval *offset = PG_GETARG_INTERVAL_P(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + Timestamp valStamp; + Timestamp baseStamp; + + valStamp = date2timestamp(val); + baseStamp = date2timestamp(base); + + return DirectFunctionCall5(in_range_timestamp_interval, + TimestampGetDatum(valStamp), + TimestampGetDatum(baseStamp), + IntervalPGetDatum(offset), + BoolGetDatum(sub), + BoolGetDatum(less)); +} + /* Add an interval to a date, giving a new date. * Must handle both positive and negative intervals. @@ -1842,6 +1870,45 @@ time_mi_interval(PG_FUNCTION_ARGS) PG_RETURN_TIMEADT(result); } +/* + * in_range support function for time. + */ +Datum +in_range_time_interval(PG_FUNCTION_ARGS) +{ + TimeADT val = PG_GETARG_TIMEADT(0); + TimeADT base = PG_GETARG_TIMEADT(1); + Interval *offset = PG_GETARG_INTERVAL_P(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + TimeADT sum; + + /* + * Like time_pl_interval/time_mi_interval, we disregard the month and day + * fields of the offset. So our test for negative should too. + */ + if (offset->time < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + /* + * We can't use time_pl_interval/time_mi_interval here, because their + * wraparound behavior would give wrong (or at least undesirable) answers. + * Fortunately the equivalent non-wrapping behavior is trivial, especially + * since we don't worry about integer overflow. + */ + if (sub) + sum = base - offset->time; + else + sum = base + offset->time; + + if (less) + PG_RETURN_BOOL(val <= sum); + else + PG_RETURN_BOOL(val >= sum); +} + /* time_part() * Extract specified field from time type. @@ -2305,6 +2372,46 @@ timetz_mi_interval(PG_FUNCTION_ARGS) PG_RETURN_TIMETZADT_P(result); } +/* + * in_range support function for timetz. + */ +Datum +in_range_timetz_interval(PG_FUNCTION_ARGS) +{ + TimeTzADT *val = PG_GETARG_TIMETZADT_P(0); + TimeTzADT *base = PG_GETARG_TIMETZADT_P(1); + Interval *offset = PG_GETARG_INTERVAL_P(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + TimeTzADT sum; + + /* + * Like timetz_pl_interval/timetz_mi_interval, we disregard the month and + * day fields of the offset. 
So our test for negative should too. + */ + if (offset->time < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + /* + * We can't use timetz_pl_interval/timetz_mi_interval here, because their + * wraparound behavior would give wrong (or at least undesirable) answers. + * Fortunately the equivalent non-wrapping behavior is trivial, especially + * since we don't worry about integer overflow. + */ + if (sub) + sum.time = base->time - offset->time; + else + sum.time = base->time + offset->time; + sum.zone = base->zone; + + if (less) + PG_RETURN_BOOL(timetz_cmp_internal(val, &sum) <= 0); + else + PG_RETURN_BOOL(timetz_cmp_internal(val, &sum) >= 0); +} + /* overlaps_timetz() --- implements the SQL OVERLAPS operator. * * Algorithm is per SQL spec. This is much harder than you'd think diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c index 7352908365..559c365fec 100644 --- a/src/backend/utils/adt/int.c +++ b/src/backend/utils/adt/int.c @@ -585,6 +585,158 @@ int42ge(PG_FUNCTION_ARGS) PG_RETURN_BOOL(arg1 >= arg2); } + +/*---------------------------------------------------------- + * in_range functions for int4 and int2, + * including cross-data-type comparisons. + * + * Note: we provide separate intN_int8 functions for performance + * reasons. This forces also providing intN_int2, else cases with a + * smallint offset value would fail to resolve which function to use. + * But that's an unlikely situation, so don't duplicate code for it. + *---------------------------------------------------------*/ + +Datum +in_range_int4_int4(PG_FUNCTION_ARGS) +{ + int32 val = PG_GETARG_INT32(0); + int32 base = PG_GETARG_INT32(1); + int32 offset = PG_GETARG_INT32(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + int32 sum; + + if (offset < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + if (sub) + offset = -offset; /* cannot overflow */ + + if (unlikely(pg_add_s32_overflow(base, offset, &sum))) + { + /* + * If sub is false, the true sum is surely more than val, so correct + * answer is the same as "less". If sub is true, the true sum is + * surely less than val, so the answer is "!less". + */ + PG_RETURN_BOOL(sub ? !less : less); + } + + if (less) + PG_RETURN_BOOL(val <= sum); + else + PG_RETURN_BOOL(val >= sum); +} + +Datum +in_range_int4_int2(PG_FUNCTION_ARGS) +{ + /* Doesn't seem worth duplicating code for, so just invoke int4_int4 */ + return DirectFunctionCall5(in_range_int4_int4, + PG_GETARG_DATUM(0), + PG_GETARG_DATUM(1), + Int32GetDatum((int32) PG_GETARG_INT16(2)), + PG_GETARG_DATUM(3), + PG_GETARG_DATUM(4)); +} + +Datum +in_range_int4_int8(PG_FUNCTION_ARGS) +{ + /* We must do all the math in int64 */ + int64 val = (int64) PG_GETARG_INT32(0); + int64 base = (int64) PG_GETARG_INT32(1); + int64 offset = PG_GETARG_INT64(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + int64 sum; + + if (offset < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + if (sub) + offset = -offset; /* cannot overflow */ + + if (unlikely(pg_add_s64_overflow(base, offset, &sum))) + { + /* + * If sub is false, the true sum is surely more than val, so correct + * answer is the same as "less". If sub is true, the true sum is + * surely less than val, so the answer is "!less". 
+ */ + PG_RETURN_BOOL(sub ? !less : less); + } + + if (less) + PG_RETURN_BOOL(val <= sum); + else + PG_RETURN_BOOL(val >= sum); +} + +Datum +in_range_int2_int4(PG_FUNCTION_ARGS) +{ + /* We must do all the math in int32 */ + int32 val = (int32) PG_GETARG_INT16(0); + int32 base = (int32) PG_GETARG_INT16(1); + int32 offset = PG_GETARG_INT32(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + int32 sum; + + if (offset < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + if (sub) + offset = -offset; /* cannot overflow */ + + if (unlikely(pg_add_s32_overflow(base, offset, &sum))) + { + /* + * If sub is false, the true sum is surely more than val, so correct + * answer is the same as "less". If sub is true, the true sum is + * surely less than val, so the answer is "!less". + */ + PG_RETURN_BOOL(sub ? !less : less); + } + + if (less) + PG_RETURN_BOOL(val <= sum); + else + PG_RETURN_BOOL(val >= sum); +} + +Datum +in_range_int2_int2(PG_FUNCTION_ARGS) +{ + /* Doesn't seem worth duplicating code for, so just invoke int2_int4 */ + return DirectFunctionCall5(in_range_int2_int4, + PG_GETARG_DATUM(0), + PG_GETARG_DATUM(1), + Int32GetDatum((int32) PG_GETARG_INT16(2)), + PG_GETARG_DATUM(3), + PG_GETARG_DATUM(4)); +} + +Datum +in_range_int2_int8(PG_FUNCTION_ARGS) +{ + /* Doesn't seem worth duplicating code for, so just invoke int4_int8 */ + return DirectFunctionCall5(in_range_int4_int8, + Int32GetDatum((int32) PG_GETARG_INT16(0)), + Int32GetDatum((int32) PG_GETARG_INT16(1)), + PG_GETARG_DATUM(2), + PG_GETARG_DATUM(3), + PG_GETARG_DATUM(4)); +} + + /* * int[24]pl - returns arg1 + arg2 * int[24]mi - returns arg1 - arg2 diff --git a/src/backend/utils/adt/int8.c b/src/backend/utils/adt/int8.c index ae6a4683d4..e6bae6860d 100644 --- a/src/backend/utils/adt/int8.c +++ b/src/backend/utils/adt/int8.c @@ -14,7 +14,7 @@ #include "postgres.h" #include <ctype.h> -#include <float.h> /* for _isnan */ +#include <float.h> /* for _isnan */ #include <limits.h> #include <math.h> @@ -469,6 +469,46 @@ int28ge(PG_FUNCTION_ARGS) PG_RETURN_BOOL(val1 >= val2); } +/* + * in_range support function for int8. + * + * Note: we needn't supply int8_int4 or int8_int2 variants, as implicit + * coercion of the offset value takes care of those scenarios just as well. + */ +Datum +in_range_int8_int8(PG_FUNCTION_ARGS) +{ + int64 val = PG_GETARG_INT64(0); + int64 base = PG_GETARG_INT64(1); + int64 offset = PG_GETARG_INT64(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + int64 sum; + + if (offset < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + if (sub) + offset = -offset; /* cannot overflow */ + + if (unlikely(pg_add_s64_overflow(base, offset, &sum))) + { + /* + * If sub is false, the true sum is surely more than val, so correct + * answer is the same as "less". If sub is true, the true sum is + * surely less than val, so the answer is "!less". + */ + PG_RETURN_BOOL(sub ? !less : less); + } + + if (less) + PG_RETURN_BOOL(val <= sum); + else + PG_RETURN_BOOL(val >= sum); +} + /*---------------------------------------------------------- * Arithmetic operators on 64-bit integers.
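The overflow strategy shared by the integer in_range functions above is worth spelling out: offset is known non-negative, so if base + offset (or base - offset) overflows, the exact sum lies beyond every representable value on the side we moved toward, and the comparison's outcome follows from sub and less alone. A standalone sketch of the same trick, using the GCC/Clang builtin directly instead of Postgres's pg_add_s64_overflow wrapper (in_range_i64 is a made-up name):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
in_range_i64(int64_t val, int64_t base, int64_t offset, bool sub, bool less)
{
	int64_t		sum;

	if (offset < 0)
		return false;			/* the real functions ereport() here */

	if (sub)
		offset = -offset;		/* cannot overflow, since offset >= 0 */

	if (__builtin_add_overflow(base, offset, &sum))
		return sub ? !less : less;	/* overflowed: answer is predetermined */

	return less ? (val <= sum) : (val >= sum);
}

int
main(void)
{
	/* INT64_MAX + 1 overflows; the true sum exceeds val, so the result equals "less" */
	printf("%d\n", (int) in_range_i64(42, INT64_MAX, 1, false, true));
	return 0;
}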
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index c5f5a1ca3f..28767a129a 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -5877,6 +5877,8 @@ get_rule_windowspec(WindowClause *wc, List *targetList, appendStringInfoString(buf, "RANGE "); else if (wc->frameOptions & FRAMEOPTION_ROWS) appendStringInfoString(buf, "ROWS "); + else if (wc->frameOptions & FRAMEOPTION_GROUPS) + appendStringInfoString(buf, "GROUPS "); else Assert(false); if (wc->frameOptions & FRAMEOPTION_BETWEEN) @@ -5885,12 +5887,12 @@ get_rule_windowspec(WindowClause *wc, List *targetList, appendStringInfoString(buf, "UNBOUNDED PRECEDING "); else if (wc->frameOptions & FRAMEOPTION_START_CURRENT_ROW) appendStringInfoString(buf, "CURRENT ROW "); - else if (wc->frameOptions & FRAMEOPTION_START_VALUE) + else if (wc->frameOptions & FRAMEOPTION_START_OFFSET) { get_rule_expr(wc->startOffset, context, false); - if (wc->frameOptions & FRAMEOPTION_START_VALUE_PRECEDING) + if (wc->frameOptions & FRAMEOPTION_START_OFFSET_PRECEDING) appendStringInfoString(buf, " PRECEDING "); - else if (wc->frameOptions & FRAMEOPTION_START_VALUE_FOLLOWING) + else if (wc->frameOptions & FRAMEOPTION_START_OFFSET_FOLLOWING) appendStringInfoString(buf, " FOLLOWING "); else Assert(false); @@ -5904,12 +5906,12 @@ get_rule_windowspec(WindowClause *wc, List *targetList, appendStringInfoString(buf, "UNBOUNDED FOLLOWING "); else if (wc->frameOptions & FRAMEOPTION_END_CURRENT_ROW) appendStringInfoString(buf, "CURRENT ROW "); - else if (wc->frameOptions & FRAMEOPTION_END_VALUE) + else if (wc->frameOptions & FRAMEOPTION_END_OFFSET) { get_rule_expr(wc->endOffset, context, false); - if (wc->frameOptions & FRAMEOPTION_END_VALUE_PRECEDING) + if (wc->frameOptions & FRAMEOPTION_END_OFFSET_PRECEDING) appendStringInfoString(buf, " PRECEDING "); - else if (wc->frameOptions & FRAMEOPTION_END_VALUE_FOLLOWING) + else if (wc->frameOptions & FRAMEOPTION_END_OFFSET_FOLLOWING) appendStringInfoString(buf, " FOLLOWING "); else Assert(false); @@ -5917,6 +5919,12 @@ get_rule_windowspec(WindowClause *wc, List *targetList, else Assert(false); } + if (wc->frameOptions & FRAMEOPTION_EXCLUDE_CURRENT_ROW) + appendStringInfoString(buf, "EXCLUDE CURRENT ROW "); + else if (wc->frameOptions & FRAMEOPTION_EXCLUDE_GROUP) + appendStringInfoString(buf, "EXCLUDE GROUP "); + else if (wc->frameOptions & FRAMEOPTION_EXCLUDE_TIES) + appendStringInfoString(buf, "EXCLUDE TIES "); /* we will now have a trailing space; remove it */ buf->len--; } diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c index e6a1eed191..103f91ae62 100644 --- a/src/backend/utils/adt/timestamp.c +++ b/src/backend/utils/adt/timestamp.c @@ -3258,6 +3258,110 @@ interval_div(PG_FUNCTION_ARGS) PG_RETURN_INTERVAL_P(result); } + +/* + * in_range support functions for timestamps and intervals. + * + * Per SQL spec, we support these with interval as the offset type. + * The spec's restriction that the offset not be negative is a bit hard to + * decipher for intervals, but we choose to interpret it the same as our + * interval comparison operators would. 
+ */ + +Datum +in_range_timestamptz_interval(PG_FUNCTION_ARGS) +{ + TimestampTz val = PG_GETARG_TIMESTAMPTZ(0); + TimestampTz base = PG_GETARG_TIMESTAMPTZ(1); + Interval *offset = PG_GETARG_INTERVAL_P(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + TimestampTz sum; + + if (int128_compare(interval_cmp_value(offset), int64_to_int128(0)) < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + /* We don't currently bother to avoid overflow hazards here */ + if (sub) + sum = DatumGetTimestampTz(DirectFunctionCall2(timestamptz_mi_interval, + TimestampTzGetDatum(base), + IntervalPGetDatum(offset))); + else + sum = DatumGetTimestampTz(DirectFunctionCall2(timestamptz_pl_interval, + TimestampTzGetDatum(base), + IntervalPGetDatum(offset))); + + if (less) + PG_RETURN_BOOL(val <= sum); + else + PG_RETURN_BOOL(val >= sum); +} + +Datum +in_range_timestamp_interval(PG_FUNCTION_ARGS) +{ + Timestamp val = PG_GETARG_TIMESTAMP(0); + Timestamp base = PG_GETARG_TIMESTAMP(1); + Interval *offset = PG_GETARG_INTERVAL_P(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + Timestamp sum; + + if (int128_compare(interval_cmp_value(offset), int64_to_int128(0)) < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + /* We don't currently bother to avoid overflow hazards here */ + if (sub) + sum = DatumGetTimestamp(DirectFunctionCall2(timestamp_mi_interval, + TimestampGetDatum(base), + IntervalPGetDatum(offset))); + else + sum = DatumGetTimestamp(DirectFunctionCall2(timestamp_pl_interval, + TimestampGetDatum(base), + IntervalPGetDatum(offset))); + + if (less) + PG_RETURN_BOOL(val <= sum); + else + PG_RETURN_BOOL(val >= sum); +} + +Datum +in_range_interval_interval(PG_FUNCTION_ARGS) +{ + Interval *val = PG_GETARG_INTERVAL_P(0); + Interval *base = PG_GETARG_INTERVAL_P(1); + Interval *offset = PG_GETARG_INTERVAL_P(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + Interval *sum; + + if (int128_compare(interval_cmp_value(offset), int64_to_int128(0)) < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + /* We don't currently bother to avoid overflow hazards here */ + if (sub) + sum = DatumGetIntervalP(DirectFunctionCall2(interval_mi, + IntervalPGetDatum(base), + IntervalPGetDatum(offset))); + else + sum = DatumGetIntervalP(DirectFunctionCall2(interval_pl, + IntervalPGetDatum(base), + IntervalPGetDatum(offset))); + + if (less) + PG_RETURN_BOOL(interval_cmp_internal(val, sum) <= 0); + else + PG_RETURN_BOOL(interval_cmp_internal(val, sum) >= 0); +} + + /* * interval_accum, interval_accum_inv, and interval_avg implement the * AVG(interval) aggregate. 
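A subtlety in the three interval-offset functions above: the offset's sign is judged by flattening (month, day, time) into one microsecond count, the same way interval comparison does via interval_cmp_value, rather than by looking at any single field. A hypothetical standalone rendition (SketchInterval mirrors the layout of Postgres's Interval; the 30-day month and 86,400-second day are the conventions interval comparison itself uses; __int128 needs GCC or Clang, where Postgres would otherwise fall back to its int128 emulation):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
	int64_t		time;			/* microseconds */
	int32_t		day;
	int32_t		month;
} SketchInterval;

static bool
interval_is_negative(const SketchInterval *iv)
{
	const int64_t usecs_per_day = INT64_C(86400) * 1000000;
	__int128	span;

	span = (__int128) iv->time
		+ (__int128) iv->day * usecs_per_day
		+ (__int128) iv->month * 30 * usecs_per_day;
	return span < 0;
}

int
main(void)
{
	/* "1 month minus 31 days" nets out to -1 day, hence a negative offset */
	SketchInterval iv = {0, -31, 1};

	printf("%d\n", (int) interval_is_negative(&iv));
	return 0;
}

So an offset like '1 month -30 days' flattens to zero and is accepted, while '1 month -31 days' draws the invalid-size error, matching the comparison-operator interpretation described in the comment above.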
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt index 1475bfe362..9871d1e793 100644 --- a/src/backend/utils/errcodes.txt +++ b/src/backend/utils/errcodes.txt @@ -177,6 +177,7 @@ Section: Class 22 - Data Exception 22P06 E ERRCODE_NONSTANDARD_USE_OF_ESCAPE_CHARACTER nonstandard_use_of_escape_character 22010 E ERRCODE_INVALID_INDICATOR_PARAMETER_VALUE invalid_indicator_parameter_value 22023 E ERRCODE_INVALID_PARAMETER_VALUE invalid_parameter_value +22013 E ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE invalid_preceding_following_size 2201B E ERRCODE_INVALID_REGULAR_EXPRESSION invalid_regular_expression 2201W E ERRCODE_INVALID_ROW_COUNT_IN_LIMIT_CLAUSE invalid_row_count_in_limit_clause 2201X E ERRCODE_INVALID_ROW_COUNT_IN_RESULT_OFFSET_CLAUSE invalid_row_count_in_result_offset_clause diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h index 0f6a40168c..2b0b1da763 100644 --- a/src/include/access/nbtree.h +++ b/src/include/access/nbtree.h @@ -225,11 +225,17 @@ typedef struct BTMetaPageData * To facilitate accelerated sorting, an operator class may choose to * offer a second procedure (BTSORTSUPPORT_PROC). For full details, see * src/include/utils/sortsupport.h. + * + * To support window frames defined by "RANGE offset PRECEDING/FOLLOWING", + * an operator class may choose to offer a third amproc procedure + * (BTINRANGE_PROC), independently of whether it offers sortsupport. + * For full details, see doc/src/sgml/btree.sgml. */ #define BTORDER_PROC 1 #define BTSORTSUPPORT_PROC 2 -#define BTNProcs 2 +#define BTINRANGE_PROC 3 +#define BTNProcs 3 /* * We need to be able to tell the difference between read and write diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index f1765af4ba..433d6db4f6 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201712251 +#define CATALOG_VERSION_NO 201802061 #endif diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h index f545a0580d..c3d0ff70e6 100644 --- a/src/include/catalog/pg_amproc.h +++ b/src/include/catalog/pg_amproc.h @@ -96,6 +96,9 @@ DATA(insert ( 434 1184 1184 1 1314 )); DATA(insert ( 434 1184 1184 2 3137 )); DATA(insert ( 434 1184 1082 1 2383 )); DATA(insert ( 434 1184 1114 1 2533 )); +DATA(insert ( 434 1082 1186 3 4133 )); +DATA(insert ( 434 1114 1186 3 4134 )); +DATA(insert ( 434 1184 1186 3 4135 )); DATA(insert ( 1970 700 700 1 354 )); DATA(insert ( 1970 700 700 2 3132 )); DATA(insert ( 1970 700 701 1 2194 )); @@ -107,15 +110,23 @@ DATA(insert ( 1976 21 21 1 350 )); DATA(insert ( 1976 21 21 2 3129 )); DATA(insert ( 1976 21 23 1 2190 )); DATA(insert ( 1976 21 20 1 2192 )); +DATA(insert ( 1976 21 20 3 4130 )); +DATA(insert ( 1976 21 23 3 4131 )); +DATA(insert ( 1976 21 21 3 4132 )); DATA(insert ( 1976 23 23 1 351 )); DATA(insert ( 1976 23 23 2 3130 )); DATA(insert ( 1976 23 20 1 2188 )); DATA(insert ( 1976 23 21 1 2191 )); +DATA(insert ( 1976 23 20 3 4127 )); +DATA(insert ( 1976 23 23 3 4128 )); +DATA(insert ( 1976 23 21 3 4129 )); DATA(insert ( 1976 20 20 1 842 )); DATA(insert ( 1976 20 20 2 3131 )); DATA(insert ( 1976 20 23 1 2189 )); DATA(insert ( 1976 20 21 1 2193 )); +DATA(insert ( 1976 20 20 3 4126 )); DATA(insert ( 1982 1186 1186 1 1315 )); +DATA(insert ( 1982 1186 1186 3 4136 )); DATA(insert ( 1984 829 829 1 836 )); DATA(insert ( 1984 829 829 2 3359 )); DATA(insert ( 1986 19 19 1 359 )); @@ -128,7 +139,9 @@ DATA(insert ( 1991 30 30 1 404 )); 
DATA(insert ( 1994 25 25 1 360 )); DATA(insert ( 1994 25 25 2 3255 )); DATA(insert ( 1996 1083 1083 1 1107 )); +DATA(insert ( 1996 1083 1186 3 4137 )); DATA(insert ( 2000 1266 1266 1 1358 )); +DATA(insert ( 2000 1266 1186 3 4138 )); DATA(insert ( 2002 1562 1562 1 1672 )); DATA(insert ( 2095 25 25 1 2166 )); DATA(insert ( 2095 25 25 2 3332 )); diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index f01648c961..2a5321315a 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -647,6 +647,20 @@ DATA(insert OID = 381 ( bttintervalcmp PGNSP PGUID 12 1 0 0 0 f f f f t f i DESCR("less-equal-greater"); DATA(insert OID = 382 ( btarraycmp PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 23 "2277 2277" _null_ _null_ _null_ _null_ _null_ btarraycmp _null_ _null_ _null_ )); DESCR("less-equal-greater"); +DATA(insert OID = 4126 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "20 20 20 16 16" _null_ _null_ _null_ _null_ _null_ in_range_int8_int8 _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4127 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "23 23 20 16 16" _null_ _null_ _null_ _null_ _null_ in_range_int4_int8 _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4128 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "23 23 23 16 16" _null_ _null_ _null_ _null_ _null_ in_range_int4_int4 _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4129 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "23 23 21 16 16" _null_ _null_ _null_ _null_ _null_ in_range_int4_int2 _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4130 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "21 21 20 16 16" _null_ _null_ _null_ _null_ _null_ in_range_int2_int8 _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4131 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "21 21 23 16 16" _null_ _null_ _null_ _null_ _null_ in_range_int2_int4 _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4132 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "21 21 21 16 16" _null_ _null_ _null_ _null_ _null_ in_range_int2_int2 _null_ _null_ _null_ )); +DESCR("window RANGE support"); DATA(insert OID = 361 ( lseg_distance PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 701 "601 601" _null_ _null_ _null_ _null_ _null_ lseg_distance _null_ _null_ _null_ )); DATA(insert OID = 362 ( lseg_interpt PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 600 "601 601" _null_ _null_ _null_ _null_ _null_ lseg_interpt _null_ _null_ _null_ )); @@ -1216,6 +1230,8 @@ DATA(insert OID = 1092 ( date_cmp PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 DESCR("less-equal-greater"); DATA(insert OID = 3136 ( date_sortsupport PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2278 "2281" _null_ _null_ _null_ _null_ _null_ date_sortsupport _null_ _null_ _null_ )); DESCR("sort support"); +DATA(insert OID = 4133 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "1082 1082 1186 16 16" _null_ _null_ _null_ _null_ _null_ in_range_date_interval _null_ _null_ _null_ )); +DESCR("window RANGE support"); /* OIDS 1100 - 1199 */ @@ -3141,6 +3157,18 @@ DATA(insert OID = 2045 ( timestamp_cmp PGNSP PGUID 12 1 0 0 0 f f f f t f i s DESCR("less-equal-greater"); DATA(insert OID = 3137 ( timestamp_sortsupport PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 2278 "2281" _null_ _null_ _null_ _null_ _null_ timestamp_sortsupport _null_ _null_ _null_ )); DESCR("sort support"); + 
+DATA(insert OID = 4134 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "1114 1114 1186 16 16" _null_ _null_ _null_ _null_ _null_ in_range_timestamp_interval _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4135 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f s s 5 0 16 "1184 1184 1186 16 16" _null_ _null_ _null_ _null_ _null_ in_range_timestamptz_interval _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4136 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "1186 1186 1186 16 16" _null_ _null_ _null_ _null_ _null_ in_range_interval_interval _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4137 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "1083 1083 1186 16 16" _null_ _null_ _null_ _null_ _null_ in_range_time_interval _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4138 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "1266 1266 1186 16 16" _null_ _null_ _null_ _null_ _null_ in_range_timetz_interval _null_ _null_ _null_ )); +DESCR("window RANGE support"); + DATA(insert OID = 2046 ( time PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 1083 "1266" _null_ _null_ _null_ _null_ _null_ timetz_time _null_ _null_ _null_ )); DESCR("convert time with time zone to time"); DATA(insert OID = 2047 ( timetz PGNSP PGUID 12 1 0 0 0 f f f f t f s s 1 0 1266 "1083" _null_ _null_ _null_ _null_ _null_ time_timetz _null_ _null_ _null_ )); diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index a2a2a9f3d4..54ce63f147 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -1885,11 +1885,14 @@ typedef struct WindowAggState FmgrInfo *partEqfunctions; /* equality funcs for partition columns */ FmgrInfo *ordEqfunctions; /* equality funcs for ordering columns */ Tuplestorestate *buffer; /* stores rows of current partition */ - int current_ptr; /* read pointer # for current */ + int current_ptr; /* read pointer # for current row */ + int framehead_ptr; /* read pointer # for frame head, if used */ + int frametail_ptr; /* read pointer # for frame tail, if used */ + int grouptail_ptr; /* read pointer # for group tail, if used */ int64 spooled_rows; /* total # of rows in buffer */ int64 currentpos; /* position of current row in partition */ int64 frameheadpos; /* current frame head position */ - int64 frametailpos; /* current frame tail position */ + int64 frametailpos; /* current frame tail position (frame end+1) */ /* use struct pointer to avoid including windowapi.h here */ struct WindowObjectData *agg_winobj; /* winobj for aggregate fetches */ int64 aggregatedbase; /* start row for current aggregates */ @@ -1901,6 +1904,20 @@ typedef struct WindowAggState Datum startOffsetValue; /* result of startOffset evaluation */ Datum endOffsetValue; /* result of endOffset evaluation */ + /* these fields are used with RANGE offset PRECEDING/FOLLOWING: */ + FmgrInfo startInRangeFunc; /* in_range function for startOffset */ + FmgrInfo endInRangeFunc; /* in_range function for endOffset */ + Oid inRangeColl; /* collation for in_range tests */ + bool inRangeAsc; /* use ASC sort order for in_range tests? */ + bool inRangeNullsFirst; /* nulls sort first for in_range tests? 
*/ + + /* these fields are used in GROUPS mode: */ + int64 currentgroup; /* peer group # of current row in partition */ + int64 frameheadgroup; /* peer group # of frame head row */ + int64 frametailgroup; /* peer group # of frame tail row */ + int64 groupheadpos; /* current row's peer group head position */ + int64 grouptailpos; /* " " " " tail position (group end+1) */ + MemoryContext partcontext; /* context for partition-lifespan data */ MemoryContext aggcontext; /* shared context for aggregate working data */ MemoryContext curaggcontext; /* current aggregate's working data */ @@ -1916,9 +1933,13 @@ typedef struct WindowAggState * date for current row */ bool frametail_valid; /* true if frametailpos is known up to * date for current row */ + bool grouptail_valid; /* true if grouptailpos is known up to + * date for current row */ TupleTableSlot *first_part_slot; /* first tuple of current or next * partition */ + TupleTableSlot *framehead_slot; /* first tuple of current frame */ + TupleTableSlot *frametail_slot; /* first tuple after current frame */ /* temporary slots for tuples fetched back from tuplestore */ TupleTableSlot *agg_row_slot; diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index a16de289ba..c7a43b8933 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -499,27 +499,33 @@ typedef struct WindowDef * which were defaulted; the correct behavioral bits must be set either way. * The START_foo and END_foo options must come in pairs of adjacent bits for * the convenience of gram.y, even though some of them are useless/invalid. - * We will need more bits (and fields) to cover the full SQL:2008 option set. */ #define FRAMEOPTION_NONDEFAULT 0x00001 /* any specified? */ #define FRAMEOPTION_RANGE 0x00002 /* RANGE behavior */ #define FRAMEOPTION_ROWS 0x00004 /* ROWS behavior */ -#define FRAMEOPTION_BETWEEN 0x00008 /* BETWEEN given? */ -#define FRAMEOPTION_START_UNBOUNDED_PRECEDING 0x00010 /* start is U. P. */ -#define FRAMEOPTION_END_UNBOUNDED_PRECEDING 0x00020 /* (disallowed) */ -#define FRAMEOPTION_START_UNBOUNDED_FOLLOWING 0x00040 /* (disallowed) */ -#define FRAMEOPTION_END_UNBOUNDED_FOLLOWING 0x00080 /* end is U. F. */ -#define FRAMEOPTION_START_CURRENT_ROW 0x00100 /* start is C. R. */ -#define FRAMEOPTION_END_CURRENT_ROW 0x00200 /* end is C. R. */ -#define FRAMEOPTION_START_VALUE_PRECEDING 0x00400 /* start is V. P. */ -#define FRAMEOPTION_END_VALUE_PRECEDING 0x00800 /* end is V. P. */ -#define FRAMEOPTION_START_VALUE_FOLLOWING 0x01000 /* start is V. F. */ -#define FRAMEOPTION_END_VALUE_FOLLOWING 0x02000 /* end is V. F. */ - -#define FRAMEOPTION_START_VALUE \ - (FRAMEOPTION_START_VALUE_PRECEDING | FRAMEOPTION_START_VALUE_FOLLOWING) -#define FRAMEOPTION_END_VALUE \ - (FRAMEOPTION_END_VALUE_PRECEDING | FRAMEOPTION_END_VALUE_FOLLOWING) +#define FRAMEOPTION_GROUPS 0x00008 /* GROUPS behavior */ +#define FRAMEOPTION_BETWEEN 0x00010 /* BETWEEN given? */ +#define FRAMEOPTION_START_UNBOUNDED_PRECEDING 0x00020 /* start is U. P. */ +#define FRAMEOPTION_END_UNBOUNDED_PRECEDING 0x00040 /* (disallowed) */ +#define FRAMEOPTION_START_UNBOUNDED_FOLLOWING 0x00080 /* (disallowed) */ +#define FRAMEOPTION_END_UNBOUNDED_FOLLOWING 0x00100 /* end is U. F. */ +#define FRAMEOPTION_START_CURRENT_ROW 0x00200 /* start is C. R. */ +#define FRAMEOPTION_END_CURRENT_ROW 0x00400 /* end is C. R. */ +#define FRAMEOPTION_START_OFFSET_PRECEDING 0x00800 /* start is O. P. */ +#define FRAMEOPTION_END_OFFSET_PRECEDING 0x01000 /* end is O. P. 
*/ +#define FRAMEOPTION_START_OFFSET_FOLLOWING 0x02000 /* start is O. F. */ +#define FRAMEOPTION_END_OFFSET_FOLLOWING 0x04000 /* end is O. F. */ +#define FRAMEOPTION_EXCLUDE_CURRENT_ROW 0x08000 /* omit C.R. */ +#define FRAMEOPTION_EXCLUDE_GROUP 0x10000 /* omit C.R. & peers */ +#define FRAMEOPTION_EXCLUDE_TIES 0x20000 /* omit C.R.'s peers */ + +#define FRAMEOPTION_START_OFFSET \ + (FRAMEOPTION_START_OFFSET_PRECEDING | FRAMEOPTION_START_OFFSET_FOLLOWING) +#define FRAMEOPTION_END_OFFSET \ + (FRAMEOPTION_END_OFFSET_PRECEDING | FRAMEOPTION_END_OFFSET_FOLLOWING) +#define FRAMEOPTION_EXCLUSION \ + (FRAMEOPTION_EXCLUDE_CURRENT_ROW | FRAMEOPTION_EXCLUDE_GROUP | \ + FRAMEOPTION_EXCLUDE_TIES) #define FRAMEOPTION_DEFAULTS \ (FRAMEOPTION_RANGE | FRAMEOPTION_START_UNBOUNDED_PRECEDING | \ @@ -1277,6 +1283,9 @@ typedef struct GroupingSet * if the clause originally came from WINDOW, and is NULL if it originally * was an OVER clause (but note that we collapse out duplicate OVERs). * partitionClause and orderClause are lists of SortGroupClause structs. + * If we have RANGE with offset PRECEDING/FOLLOWING, the semantics of that are + * specified by startInRangeFunc/inRangeColl/inRangeAsc/inRangeNullsFirst + * for the start offset, or endInRangeFunc/inRange* for the end offset. * winref is an ID number referenced by WindowFunc nodes; it must be unique * among the members of a Query's windowClause list. * When refname isn't null, the partitionClause is always copied from there; @@ -1293,6 +1302,11 @@ typedef struct WindowClause int frameOptions; /* frame_clause options, see WindowDef */ Node *startOffset; /* expression for starting bound, if any */ Node *endOffset; /* expression for ending bound, if any */ + Oid startInRangeFunc; /* in_range function for startOffset */ + Oid endInRangeFunc; /* in_range function for endOffset */ + Oid inRangeColl; /* collation for in_range tests */ + bool inRangeAsc; /* use ASC sort order for in_range tests? */ + bool inRangeNullsFirst; /* nulls sort first for in_range tests? */ Index winref; /* ID referenced by window functions */ bool copiedOrder; /* did we copy orderClause from refname? */ } WindowClause; diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h index baf3c07417..f2e19eae68 100644 --- a/src/include/nodes/plannodes.h +++ b/src/include/nodes/plannodes.h @@ -811,6 +811,12 @@ typedef struct WindowAgg int frameOptions; /* frame_clause options, see WindowDef */ Node *startOffset; /* expression for starting bound, if any */ Node *endOffset; /* expression for ending bound, if any */ + /* these fields are used with RANGE offset PRECEDING/FOLLOWING: */ + Oid startInRangeFunc; /* in_range function for startOffset */ + Oid endInRangeFunc; /* in_range function for endOffset */ + Oid inRangeColl; /* collation for in_range tests */ + bool inRangeAsc; /* use ASC sort order for in_range tests? */ + bool inRangeNullsFirst; /* nulls sort first for in_range tests? 
*/ } WindowAgg; /* ---------------- diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h index 26af944e03..cf32197bc3 100644 --- a/src/include/parser/kwlist.h +++ b/src/include/parser/kwlist.h @@ -182,6 +182,7 @@ PG_KEYWORD("granted", GRANTED, UNRESERVED_KEYWORD) PG_KEYWORD("greatest", GREATEST, COL_NAME_KEYWORD) PG_KEYWORD("group", GROUP_P, RESERVED_KEYWORD) PG_KEYWORD("grouping", GROUPING, COL_NAME_KEYWORD) +PG_KEYWORD("groups", GROUPS, UNRESERVED_KEYWORD) PG_KEYWORD("handler", HANDLER, UNRESERVED_KEYWORD) PG_KEYWORD("having", HAVING, RESERVED_KEYWORD) PG_KEYWORD("header", HEADER_P, UNRESERVED_KEYWORD) @@ -283,6 +284,7 @@ PG_KEYWORD("options", OPTIONS, UNRESERVED_KEYWORD) PG_KEYWORD("or", OR, RESERVED_KEYWORD) PG_KEYWORD("order", ORDER, RESERVED_KEYWORD) PG_KEYWORD("ordinality", ORDINALITY, UNRESERVED_KEYWORD) +PG_KEYWORD("others", OTHERS, UNRESERVED_KEYWORD) PG_KEYWORD("out", OUT_P, COL_NAME_KEYWORD) PG_KEYWORD("outer", OUTER_P, TYPE_FUNC_NAME_KEYWORD) PG_KEYWORD("over", OVER, UNRESERVED_KEYWORD) @@ -397,6 +399,7 @@ PG_KEYWORD("template", TEMPLATE, UNRESERVED_KEYWORD) PG_KEYWORD("temporary", TEMPORARY, UNRESERVED_KEYWORD) PG_KEYWORD("text", TEXT_P, UNRESERVED_KEYWORD) PG_KEYWORD("then", THEN, RESERVED_KEYWORD) +PG_KEYWORD("ties", TIES, UNRESERVED_KEYWORD) PG_KEYWORD("time", TIME, COL_NAME_KEYWORD) PG_KEYWORD("timestamp", TIMESTAMP, COL_NAME_KEYWORD) PG_KEYWORD("to", TO, RESERVED_KEYWORD) diff --git a/src/include/parser/parse_node.h b/src/include/parser/parse_node.h index 4e96fa7907..2e0792d60b 100644 --- a/src/include/parser/parse_node.h +++ b/src/include/parser/parse_node.h @@ -45,6 +45,7 @@ typedef enum ParseExprKind EXPR_KIND_WINDOW_ORDER, /* window definition ORDER BY */ EXPR_KIND_WINDOW_FRAME_RANGE, /* window frame clause with RANGE */ EXPR_KIND_WINDOW_FRAME_ROWS, /* window frame clause with ROWS */ + EXPR_KIND_WINDOW_FRAME_GROUPS, /* window frame clause with GROUPS */ EXPR_KIND_SELECT_TARGET, /* SELECT target list item */ EXPR_KIND_INSERT_TARGET, /* INSERT target list item */ EXPR_KIND_UPDATE_SOURCE, /* UPDATE assignment source item */ @@ -67,7 +68,7 @@ typedef enum ParseExprKind EXPR_KIND_EXECUTE_PARAMETER, /* parameter value in EXECUTE */ EXPR_KIND_TRIGGER_WHEN, /* WHEN condition in CREATE TRIGGER */ EXPR_KIND_POLICY, /* USING or WITH CHECK expr in policy */ - EXPR_KIND_PARTITION_EXPRESSION, /* PARTITION BY expression */ + EXPR_KIND_PARTITION_EXPRESSION, /* PARTITION BY expression */ EXPR_KIND_CALL /* CALL argument */ } ParseExprKind; diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out index 200828aa99..44356dea0b 100644 --- a/src/test/regress/expected/alter_generic.out +++ b/src/test/regress/expected/alter_generic.out @@ -354,9 +354,9 @@ ERROR: invalid operator number 0, must be between 1 and 5 ALTER OPERATOR FAMILY alt_opf4 USING btree ADD OPERATOR 1 < ; -- operator without argument types ERROR: operator argument types must be specified in ALTER OPERATOR FAMILY ALTER OPERATOR FAMILY alt_opf4 USING btree ADD FUNCTION 0 btint42cmp(int4, int2); -- function number should be between 1 and 5 -ERROR: invalid procedure number 0, must be between 1 and 2 +ERROR: invalid procedure number 0, must be between 1 and 3 ALTER OPERATOR FAMILY alt_opf4 USING btree ADD FUNCTION 6 btint42cmp(int4, int2); -- function number should be between 1 and 5 -ERROR: invalid procedure number 6, must be between 1 and 2 +ERROR: invalid procedure number 6, must be between 1 and 3 ALTER OPERATOR FAMILY alt_opf4 USING btree ADD STORAGE 
invalid_storage; -- Ensure STORAGE is not a part of ALTER OPERATOR FAMILY ERROR: STORAGE cannot be specified in ALTER OPERATOR FAMILY DROP OPERATOR FAMILY alt_opf4 USING btree; diff --git a/src/test/regress/expected/window.out b/src/test/regress/expected/window.out index 19f909f3d1..b675487729 100644 --- a/src/test/regress/expected/window.out +++ b/src/test/regress/expected/window.out @@ -819,6 +819,176 @@ FROM tenk1 WHERE unique1 < 10; 10 | 0 | 0 (10 rows) +SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude no others), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 7 | 4 | 0 + 13 | 2 | 2 + 22 | 1 | 1 + 26 | 6 | 2 + 29 | 9 | 1 + 31 | 8 | 0 + 32 | 5 | 1 + 23 | 3 | 3 + 15 | 7 | 3 + 10 | 0 | 0 +(10 rows) + +SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 3 | 4 | 0 + 11 | 2 | 2 + 21 | 1 | 1 + 20 | 6 | 2 + 20 | 9 | 1 + 23 | 8 | 0 + 27 | 5 | 1 + 20 | 3 | 3 + 8 | 7 | 3 + 10 | 0 | 0 +(10 rows) + +SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + | 4 | 0 + | 2 | 2 + | 1 | 1 + | 6 | 2 + | 9 | 1 + | 8 | 0 + | 5 | 1 + | 3 | 3 + | 7 | 3 + | 0 | 0 +(10 rows) + +SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 4 | 4 | 0 + 2 | 2 | 2 + 1 | 1 | 1 + 6 | 6 | 2 + 9 | 9 | 1 + 8 | 8 | 0 + 5 | 5 | 1 + 3 | 3 | 3 + 7 | 7 | 3 + 0 | 0 | 0 +(10 rows) + +SELECT first_value(unique1) over (ORDER BY four rows between current row and 2 following exclude current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + first_value | unique1 | four +-------------+---------+------ + 8 | 0 | 0 + 4 | 8 | 0 + 5 | 4 | 0 + 9 | 5 | 1 + 1 | 9 | 1 + 6 | 1 | 1 + 2 | 6 | 2 + 3 | 2 | 2 + 7 | 3 | 3 + | 7 | 3 +(10 rows) + +SELECT first_value(unique1) over (ORDER BY four rows between current row and 2 following exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + first_value | unique1 | four +-------------+---------+------ + | 0 | 0 + 5 | 8 | 0 + 5 | 4 | 0 + | 5 | 1 + 6 | 9 | 1 + 6 | 1 | 1 + 3 | 6 | 2 + 3 | 2 | 2 + | 3 | 3 + | 7 | 3 +(10 rows) + +SELECT first_value(unique1) over (ORDER BY four rows between current row and 2 following exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + first_value | unique1 | four +-------------+---------+------ + 0 | 0 | 0 + 8 | 8 | 0 + 4 | 4 | 0 + 5 | 5 | 1 + 9 | 9 | 1 + 1 | 1 | 1 + 6 | 6 | 2 + 2 | 2 | 2 + 3 | 3 | 3 + 7 | 7 | 3 +(10 rows) + +SELECT last_value(unique1) over (ORDER BY four rows between current row and 2 following exclude current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + last_value | unique1 | four +------------+---------+------ + 4 | 0 | 0 + 5 | 8 | 0 + 9 | 4 | 0 + 1 | 5 | 1 + 6 | 9 | 1 + 2 | 1 | 1 + 3 | 6 | 2 + 7 | 2 | 2 + 7 | 3 | 3 + | 7 | 3 +(10 rows) + +SELECT last_value(unique1) over (ORDER BY four rows between current row and 2 following exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + last_value | unique1 | four +------------+---------+------ + | 0 | 0 + 5 | 8 | 0 + 9 | 4 | 0 + | 5 | 1 + 6 | 9 | 1 + 2 | 1 | 1 + 3 | 6 | 2 + 7 | 2 | 2 + | 3 | 3 + | 7 | 3 +(10 rows) + +SELECT last_value(unique1) over (ORDER BY four rows between current row and 2 following exclude ties), 
+ unique1, four +FROM tenk1 WHERE unique1 < 10; + last_value | unique1 | four +------------+---------+------ + 0 | 0 | 0 + 5 | 8 | 0 + 9 | 4 | 0 + 5 | 5 | 1 + 6 | 9 | 1 + 2 | 1 | 1 + 3 | 6 | 2 + 7 | 2 | 2 + 3 | 3 | 3 + 7 | 7 | 3 +(10 rows) + SELECT sum(unique1) over (rows between 2 preceding and 1 preceding), unique1, four FROM tenk1 WHERE unique1 < 10; @@ -887,13 +1057,57 @@ FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four); 10 | 7 | 3 (10 rows) --- fail: not implemented yet -SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding), +SELECT sum(unique1) over (w range between unbounded preceding and current row exclude current row), unique1, four -FROM tenk1 WHERE unique1 < 10; -ERROR: RANGE PRECEDING is only supported with UNBOUNDED -LINE 1: SELECT sum(unique1) over (order by four range between 2::int... - ^ +FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four); + sum | unique1 | four +-----+---------+------ + 12 | 0 | 0 + 4 | 8 | 0 + 8 | 4 | 0 + 22 | 5 | 1 + 18 | 9 | 1 + 26 | 1 | 1 + 29 | 6 | 2 + 33 | 2 | 2 + 42 | 3 | 3 + 38 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (w range between unbounded preceding and current row exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four); + sum | unique1 | four +-----+---------+------ + | 0 | 0 + | 8 | 0 + | 4 | 0 + 12 | 5 | 1 + 12 | 9 | 1 + 12 | 1 | 1 + 27 | 6 | 2 + 27 | 2 | 2 + 35 | 3 | 3 + 35 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (w range between unbounded preceding and current row exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four); + sum | unique1 | four +-----+---------+------ + 0 | 0 | 0 + 8 | 8 | 0 + 4 | 4 | 0 + 17 | 5 | 1 + 21 | 9 | 1 + 13 | 1 | 1 + 33 | 6 | 2 + 29 | 2 | 2 + 38 | 3 | 3 + 42 | 7 | 3 +(10 rows) + SELECT first_value(unique1) over w, nth_value(unique1, 2) over w AS nth_2, last_value(unique1) over w, unique1, four @@ -958,6 +1172,1477 @@ SELECT pg_get_viewdef('v_window'); FROM generate_series(1, 10) i(i); (1 row) +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i rows between 1 preceding and 1 following + exclude current row) as sum_rows FROM generate_series(1, 10) i; +SELECT * FROM v_window; + i | sum_rows +----+---------- + 1 | 2 + 2 | 4 + 3 | 6 + 4 | 8 + 5 | 10 + 6 | 12 + 7 | 14 + 8 | 16 + 9 | 18 + 10 | 9 +(10 rows) + +SELECT pg_get_viewdef('v_window'); + pg_get_viewdef +----------------------------------------------------------------------------------------------------------- + SELECT i.i, + + sum(i.i) OVER (ORDER BY i.i ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING EXCLUDE CURRENT ROW) AS sum_rows+ + FROM generate_series(1, 10) i(i); +(1 row) + +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i rows between 1 preceding and 1 following + exclude group) as sum_rows FROM generate_series(1, 10) i; +SELECT * FROM v_window; + i | sum_rows +----+---------- + 1 | 2 + 2 | 4 + 3 | 6 + 4 | 8 + 5 | 10 + 6 | 12 + 7 | 14 + 8 | 16 + 9 | 18 + 10 | 9 +(10 rows) + +SELECT pg_get_viewdef('v_window'); + pg_get_viewdef +----------------------------------------------------------------------------------------------------- + SELECT i.i, + + sum(i.i) OVER (ORDER BY i.i ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING EXCLUDE GROUP) AS sum_rows+ + FROM generate_series(1, 10) i(i); +(1 row) + +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i rows between 1 preceding and 1 following + exclude ties) as sum_rows FROM generate_series(1, 10) i; +SELECT * 
FROM v_window; + i | sum_rows +----+---------- + 1 | 3 + 2 | 6 + 3 | 9 + 4 | 12 + 5 | 15 + 6 | 18 + 7 | 21 + 8 | 24 + 9 | 27 + 10 | 19 +(10 rows) + +SELECT pg_get_viewdef('v_window'); + pg_get_viewdef +---------------------------------------------------------------------------------------------------- + SELECT i.i, + + sum(i.i) OVER (ORDER BY i.i ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING EXCLUDE TIES) AS sum_rows+ + FROM generate_series(1, 10) i(i); +(1 row) + +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i rows between 1 preceding and 1 following + exclude no others) as sum_rows FROM generate_series(1, 10) i; +SELECT * FROM v_window; + i | sum_rows +----+---------- + 1 | 3 + 2 | 6 + 3 | 9 + 4 | 12 + 5 | 15 + 6 | 18 + 7 | 21 + 8 | 24 + 9 | 27 + 10 | 19 +(10 rows) + +SELECT pg_get_viewdef('v_window'); + pg_get_viewdef +--------------------------------------------------------------------------------------- + SELECT i.i, + + sum(i.i) OVER (ORDER BY i.i ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS sum_rows+ + FROM generate_series(1, 10) i(i); +(1 row) + +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i groups between 1 preceding and 1 following) as sum_rows FROM generate_series(1, 10) i; +SELECT * FROM v_window; + i | sum_rows +----+---------- + 1 | 3 + 2 | 6 + 3 | 9 + 4 | 12 + 5 | 15 + 6 | 18 + 7 | 21 + 8 | 24 + 9 | 27 + 10 | 19 +(10 rows) + +SELECT pg_get_viewdef('v_window'); + pg_get_viewdef +----------------------------------------------------------------------------------------- + SELECT i.i, + + sum(i.i) OVER (ORDER BY i.i GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS sum_rows+ + FROM generate_series(1, 10) i(i); +(1 row) + +DROP VIEW v_window; +CREATE TEMP VIEW v_window AS + SELECT i, min(i) over (order by i range between '1 day' preceding and '10 days' following) as min_i + FROM generate_series(now(), now()+'100 days'::interval, '1 hour') i; +SELECT pg_get_viewdef('v_window'); + pg_get_viewdef +--------------------------------------------------------------------------------------------------------------------------- + SELECT i.i, + + min(i.i) OVER (ORDER BY i.i RANGE BETWEEN '@ 1 day'::interval PRECEDING AND '@ 10 days'::interval FOLLOWING) AS min_i+ + FROM generate_series(now(), (now() + '@ 100 days'::interval), '@ 1 hour'::interval) i(i); +(1 row) + +-- RANGE offset PRECEDING/FOLLOWING tests +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + | 0 | 0 + | 8 | 0 + | 4 | 0 + 12 | 5 | 1 + 12 | 9 | 1 + 12 | 1 | 1 + 27 | 6 | 2 + 27 | 2 | 2 + 23 | 3 | 3 + 23 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four desc range between 2::int8 preceding and 1::int2 preceding), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + | 3 | 3 + | 7 | 3 + 10 | 6 | 2 + 10 | 2 | 2 + 18 | 9 | 1 + 18 | 5 | 1 + 18 | 1 | 1 + 23 | 0 | 0 + 23 | 8 | 0 + 23 | 4 | 0 +(10 rows) + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding exclude no others), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + | 0 | 0 + | 8 | 0 + | 4 | 0 + 12 | 5 | 1 + 12 | 9 | 1 + 12 | 1 | 1 + 27 | 6 | 2 + 27 | 2 | 2 + 23 | 3 | 3 + 23 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding exclude current row), + unique1, four +FROM tenk1 WHERE 
unique1 < 10; + sum | unique1 | four +-----+---------+------ + | 0 | 0 + | 8 | 0 + | 4 | 0 + 12 | 5 | 1 + 12 | 9 | 1 + 12 | 1 | 1 + 27 | 6 | 2 + 27 | 2 | 2 + 23 | 3 | 3 + 23 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + | 0 | 0 + | 8 | 0 + | 4 | 0 + 12 | 5 | 1 + 12 | 9 | 1 + 12 | 1 | 1 + 27 | 6 | 2 + 27 | 2 | 2 + 23 | 3 | 3 + 23 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + | 0 | 0 + | 8 | 0 + | 4 | 0 + 12 | 5 | 1 + 12 | 9 | 1 + 12 | 1 | 1 + 27 | 6 | 2 + 27 | 2 | 2 + 23 | 3 | 3 + 23 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 6::int2 following exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 33 | 0 | 0 + 41 | 8 | 0 + 37 | 4 | 0 + 35 | 5 | 1 + 39 | 9 | 1 + 31 | 1 | 1 + 43 | 6 | 2 + 39 | 2 | 2 + 26 | 3 | 3 + 30 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 6::int2 following exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 33 | 0 | 0 + 33 | 8 | 0 + 33 | 4 | 0 + 30 | 5 | 1 + 30 | 9 | 1 + 30 | 1 | 1 + 37 | 6 | 2 + 37 | 2 | 2 + 23 | 3 | 3 + 23 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (partition by four order by unique1 range between 5::int8 preceding and 6::int2 following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 4 | 0 | 0 + 12 | 4 | 0 + 12 | 8 | 0 + 6 | 1 | 1 + 15 | 5 | 1 + 14 | 9 | 1 + 8 | 2 | 2 + 8 | 6 | 2 + 10 | 3 | 3 + 10 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (partition by four order by unique1 range between 5::int8 preceding and 6::int2 following + exclude current row),unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 4 | 0 | 0 + 8 | 4 | 0 + 4 | 8 | 0 + 5 | 1 | 1 + 10 | 5 | 1 + 5 | 9 | 1 + 6 | 2 | 2 + 2 | 6 | 2 + 7 | 3 | 3 + 3 | 7 | 3 +(10 rows) + +select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following), + salary, enroll_date from empsalary; + sum | salary | enroll_date +-------+--------+------------- + 34900 | 5000 | 10-01-2006 + 34900 | 6000 | 10-01-2006 + 38400 | 3900 | 12-23-2006 + 47100 | 4800 | 08-01-2007 + 47100 | 5200 | 08-01-2007 + 47100 | 4800 | 08-08-2007 + 47100 | 5200 | 08-15-2007 + 36100 | 3500 | 12-10-2007 + 32200 | 4500 | 01-01-2008 + 32200 | 4200 | 01-01-2008 +(10 rows) + +select sum(salary) over (order by enroll_date desc range between '1 year'::interval preceding and '1 year'::interval following), + salary, enroll_date from empsalary; + sum | salary | enroll_date +-------+--------+------------- + 32200 | 4200 | 01-01-2008 + 32200 | 4500 | 01-01-2008 + 36100 | 3500 | 12-10-2007 + 47100 | 5200 | 08-15-2007 + 47100 | 4800 | 08-08-2007 + 47100 | 4800 | 08-01-2007 + 47100 | 5200 | 08-01-2007 + 38400 | 3900 | 12-23-2006 + 34900 | 5000 | 10-01-2006 + 34900 | 6000 | 10-01-2006 +(10 rows) + +select sum(salary) over (order by enroll_date desc range between '1 year'::interval following and '1 year'::interval following), + salary, enroll_date from empsalary; + sum | salary | enroll_date +-----+--------+------------- 
+ | 4200 | 01-01-2008 + | 4500 | 01-01-2008 + | 3500 | 12-10-2007 + | 5200 | 08-15-2007 + | 4800 | 08-08-2007 + | 4800 | 08-01-2007 + | 5200 | 08-01-2007 + | 3900 | 12-23-2006 + | 5000 | 10-01-2006 + | 6000 | 10-01-2006 +(10 rows) + +select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following + exclude current row), salary, enroll_date from empsalary; + sum | salary | enroll_date +-------+--------+------------- + 29900 | 5000 | 10-01-2006 + 28900 | 6000 | 10-01-2006 + 34500 | 3900 | 12-23-2006 + 42300 | 4800 | 08-01-2007 + 41900 | 5200 | 08-01-2007 + 42300 | 4800 | 08-08-2007 + 41900 | 5200 | 08-15-2007 + 32600 | 3500 | 12-10-2007 + 27700 | 4500 | 01-01-2008 + 28000 | 4200 | 01-01-2008 +(10 rows) + +select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following + exclude group), salary, enroll_date from empsalary; + sum | salary | enroll_date +-------+--------+------------- + 23900 | 5000 | 10-01-2006 + 23900 | 6000 | 10-01-2006 + 34500 | 3900 | 12-23-2006 + 37100 | 4800 | 08-01-2007 + 37100 | 5200 | 08-01-2007 + 42300 | 4800 | 08-08-2007 + 41900 | 5200 | 08-15-2007 + 32600 | 3500 | 12-10-2007 + 23500 | 4500 | 01-01-2008 + 23500 | 4200 | 01-01-2008 +(10 rows) + +select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following + exclude ties), salary, enroll_date from empsalary; + sum | salary | enroll_date +-------+--------+------------- + 28900 | 5000 | 10-01-2006 + 29900 | 6000 | 10-01-2006 + 38400 | 3900 | 12-23-2006 + 41900 | 4800 | 08-01-2007 + 42300 | 5200 | 08-01-2007 + 47100 | 4800 | 08-08-2007 + 47100 | 5200 | 08-15-2007 + 36100 | 3500 | 12-10-2007 + 28000 | 4500 | 01-01-2008 + 27700 | 4200 | 01-01-2008 +(10 rows) + +select first_value(salary) over(order by salary range between 1000 preceding and 1000 following), + lead(salary) over(order by salary range between 1000 preceding and 1000 following), + nth_value(salary, 1) over(order by salary range between 1000 preceding and 1000 following), + salary from empsalary; + first_value | lead | nth_value | salary +-------------+------+-----------+-------- + 3500 | 3900 | 3500 | 3500 + 3500 | 4200 | 3500 | 3900 + 3500 | 4500 | 3500 | 4200 + 3500 | 4800 | 3500 | 4500 + 3900 | 4800 | 3900 | 4800 + 3900 | 5000 | 3900 | 4800 + 4200 | 5200 | 4200 | 5000 + 4200 | 5200 | 4200 | 5200 + 4200 | 6000 | 4200 | 5200 + 5000 | | 5000 | 6000 +(10 rows) + +select last_value(salary) over(order by salary range between 1000 preceding and 1000 following), + lag(salary) over(order by salary range between 1000 preceding and 1000 following), + salary from empsalary; + last_value | lag | salary +------------+------+-------- + 4500 | | 3500 + 4800 | 3500 | 3900 + 5200 | 3900 | 4200 + 5200 | 4200 | 4500 + 5200 | 4500 | 4800 + 5200 | 4800 | 4800 + 6000 | 4800 | 5000 + 6000 | 5000 | 5200 + 6000 | 5200 | 5200 + 6000 | 5200 | 6000 +(10 rows) + +select first_value(salary) over(order by salary range between 1000 following and 3000 following + exclude current row), + lead(salary) over(order by salary range between 1000 following and 3000 following exclude ties), + nth_value(salary, 1) over(order by salary range between 1000 following and 3000 following + exclude ties), + salary from empsalary; + first_value | lead | nth_value | salary +-------------+------+-----------+-------- + 4500 | 3900 | 4500 | 3500 + 5000 | 4200 | 5000 | 3900 + 5200 | 4500 | 5200 | 4200 + 6000 | 4800 | 6000 | 4500 + 6000 | 
4800 | 6000 | 4800 + 6000 | 5000 | 6000 | 4800 + 6000 | 5200 | 6000 | 5000 + | 5200 | | 5200 + | 6000 | | 5200 + | | | 6000 +(10 rows) + +select last_value(salary) over(order by salary range between 1000 following and 3000 following + exclude group), + lag(salary) over(order by salary range between 1000 following and 3000 following exclude group), + salary from empsalary; + last_value | lag | salary +------------+------+-------- + 6000 | | 3500 + 6000 | 3500 | 3900 + 6000 | 3900 | 4200 + 6000 | 4200 | 4500 + 6000 | 4500 | 4800 + 6000 | 4800 | 4800 + 6000 | 4800 | 5000 + | 5000 | 5200 + | 5200 | 5200 + | 5200 | 6000 +(10 rows) + +select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude ties), + last_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following), + salary, enroll_date from empsalary; + first_value | last_value | salary | enroll_date +-------------+------------+--------+------------- + 5000 | 5200 | 5000 | 10-01-2006 + 6000 | 5200 | 6000 | 10-01-2006 + 5000 | 3500 | 3900 | 12-23-2006 + 5000 | 4200 | 4800 | 08-01-2007 + 5000 | 4200 | 5200 | 08-01-2007 + 5000 | 4200 | 4800 | 08-08-2007 + 5000 | 4200 | 5200 | 08-15-2007 + 5000 | 4200 | 3500 | 12-10-2007 + 5000 | 4200 | 4500 | 01-01-2008 + 5000 | 4200 | 4200 | 01-01-2008 +(10 rows) + +select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude ties), + last_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude ties), + salary, enroll_date from empsalary; + first_value | last_value | salary | enroll_date +-------------+------------+--------+------------- + 5000 | 5200 | 5000 | 10-01-2006 + 6000 | 5200 | 6000 | 10-01-2006 + 5000 | 3500 | 3900 | 12-23-2006 + 5000 | 4200 | 4800 | 08-01-2007 + 5000 | 4200 | 5200 | 08-01-2007 + 5000 | 4200 | 4800 | 08-08-2007 + 5000 | 4200 | 5200 | 08-15-2007 + 5000 | 4200 | 3500 | 12-10-2007 + 5000 | 4500 | 4500 | 01-01-2008 + 5000 | 4200 | 4200 | 01-01-2008 +(10 rows) + +select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude group), + last_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude group), + salary, enroll_date from empsalary; + first_value | last_value | salary | enroll_date +-------------+------------+--------+------------- + 3900 | 5200 | 5000 | 10-01-2006 + 3900 | 5200 | 6000 | 10-01-2006 + 5000 | 3500 | 3900 | 12-23-2006 + 5000 | 4200 | 4800 | 08-01-2007 + 5000 | 4200 | 5200 | 08-01-2007 + 5000 | 4200 | 4800 | 08-08-2007 + 5000 | 4200 | 5200 | 08-15-2007 + 5000 | 4200 | 3500 | 12-10-2007 + 5000 | 3500 | 4500 | 01-01-2008 + 5000 | 3500 | 4200 | 01-01-2008 +(10 rows) + +select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude current row), + last_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude current row), + salary, enroll_date from empsalary; + first_value | last_value | salary | enroll_date +-------------+------------+--------+------------- + 6000 | 5200 | 5000 | 10-01-2006 + 5000 | 5200 | 6000 | 10-01-2006 + 5000 | 3500 | 3900 | 12-23-2006 + 5000 | 4200 | 4800 | 08-01-2007 + 5000 | 4200 | 5200 | 08-01-2007 + 5000 | 4200 | 4800 | 08-08-2007 + 5000 | 4200 | 5200 | 
08-15-2007 + 5000 | 4200 | 3500 | 12-10-2007 + 5000 | 4200 | 4500 | 01-01-2008 + 5000 | 4500 | 4200 | 01-01-2008 +(10 rows) + +-- RANGE offset PRECEDING/FOLLOWING with null values +select x, y, + first_value(y) over w, + last_value(y) over w +from + (select x, x as y from generate_series(1,5) as x + union all select null, 42 + union all select null, 43) ss +window w as + (order by x asc nulls first range between 2 preceding and 2 following); + x | y | first_value | last_value +---+----+-------------+------------ + | 42 | 42 | 43 + | 43 | 42 | 43 + 1 | 1 | 1 | 3 + 2 | 2 | 1 | 4 + 3 | 3 | 1 | 5 + 4 | 4 | 2 | 5 + 5 | 5 | 3 | 5 +(7 rows) + +select x, y, + first_value(y) over w, + last_value(y) over w +from + (select x, x as y from generate_series(1,5) as x + union all select null, 42 + union all select null, 43) ss +window w as + (order by x asc nulls last range between 2 preceding and 2 following); + x | y | first_value | last_value +---+----+-------------+------------ + 1 | 1 | 1 | 3 + 2 | 2 | 1 | 4 + 3 | 3 | 1 | 5 + 4 | 4 | 2 | 5 + 5 | 5 | 3 | 5 + | 42 | 42 | 43 + | 43 | 42 | 43 +(7 rows) + +select x, y, + first_value(y) over w, + last_value(y) over w +from + (select x, x as y from generate_series(1,5) as x + union all select null, 42 + union all select null, 43) ss +window w as + (order by x desc nulls first range between 2 preceding and 2 following); + x | y | first_value | last_value +---+----+-------------+------------ + | 43 | 43 | 42 + | 42 | 43 | 42 + 5 | 5 | 5 | 3 + 4 | 4 | 5 | 2 + 3 | 3 | 5 | 1 + 2 | 2 | 4 | 1 + 1 | 1 | 3 | 1 +(7 rows) + +select x, y, + first_value(y) over w, + last_value(y) over w +from + (select x, x as y from generate_series(1,5) as x + union all select null, 42 + union all select null, 43) ss +window w as + (order by x desc nulls last range between 2 preceding and 2 following); + x | y | first_value | last_value +---+----+-------------+------------ + 5 | 5 | 5 | 3 + 4 | 4 | 5 | 2 + 3 | 3 | 5 | 1 + 2 | 2 | 4 | 1 + 1 | 1 | 3 | 1 + | 42 | 42 | 43 + | 43 | 42 | 43 +(7 rows) + +-- Check overflow behavior for various integer sizes +select x, last_value(x) over (order by x::smallint range between current row and 2147450884 following) +from generate_series(32764, 32766) x; + x | last_value +-------+------------ + 32764 | 32766 + 32765 | 32766 + 32766 | 32766 +(3 rows) + +select x, last_value(x) over (order by x::smallint desc range between current row and 2147450885 following) +from generate_series(-32766, -32764) x; + x | last_value +--------+------------ + -32764 | -32766 + -32765 | -32766 + -32766 | -32766 +(3 rows) + +select x, last_value(x) over (order by x range between current row and 4 following) +from generate_series(2147483644, 2147483646) x; + x | last_value +------------+------------ + 2147483644 | 2147483646 + 2147483645 | 2147483646 + 2147483646 | 2147483646 +(3 rows) + +select x, last_value(x) over (order by x desc range between current row and 5 following) +from generate_series(-2147483646, -2147483644) x; + x | last_value +-------------+------------- + -2147483644 | -2147483646 + -2147483645 | -2147483646 + -2147483646 | -2147483646 +(3 rows) + +select x, last_value(x) over (order by x range between current row and 4 following) +from generate_series(9223372036854775804, 9223372036854775806) x; + x | last_value +---------------------+--------------------- + 9223372036854775804 | 9223372036854775806 + 9223372036854775805 | 9223372036854775806 + 9223372036854775806 | 9223372036854775806 +(3 rows) + +select x, last_value(x) over (order by x desc range 
between current row and 5 following) +from generate_series(-9223372036854775806, -9223372036854775804) x; + x | last_value +----------------------+---------------------- + -9223372036854775804 | -9223372036854775806 + -9223372036854775805 | -9223372036854775806 + -9223372036854775806 | -9223372036854775806 +(3 rows) + +-- Test in_range for other datetime datatypes +create temp table datetimes( + id int, + f_time time, + f_timetz timetz, + f_interval interval, + f_timestamptz timestamptz, + f_timestamp timestamp +); +insert into datetimes values +(1, '11:00', '11:00 BST', '1 year', '2000-10-19 10:23:54+01', '2000-10-19 10:23:54'), +(2, '12:00', '12:00 BST', '2 years', '2001-10-19 10:23:54+01', '2001-10-19 10:23:54'), +(3, '13:00', '13:00 BST', '3 years', '2001-10-19 10:23:54+01', '2001-10-19 10:23:54'), +(4, '14:00', '14:00 BST', '4 years', '2002-10-19 10:23:54+01', '2002-10-19 10:23:54'), +(5, '15:00', '15:00 BST', '5 years', '2003-10-19 10:23:54+01', '2003-10-19 10:23:54'), +(6, '15:00', '15:00 BST', '5 years', '2004-10-19 10:23:54+01', '2004-10-19 10:23:54'), +(7, '17:00', '17:00 BST', '7 years', '2005-10-19 10:23:54+01', '2005-10-19 10:23:54'), +(8, '18:00', '18:00 BST', '8 years', '2006-10-19 10:23:54+01', '2006-10-19 10:23:54'), +(9, '19:00', '19:00 BST', '9 years', '2007-10-19 10:23:54+01', '2007-10-19 10:23:54'), +(10, '20:00', '20:00 BST', '10 years', '2008-10-19 10:23:54+01', '2008-10-19 10:23:54'); +select id, f_time, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_time range between + '70 min'::interval preceding and '2 hours'::interval following); + id | f_time | first_value | last_value +----+----------+-------------+------------ + 1 | 11:00:00 | 1 | 3 + 2 | 12:00:00 | 1 | 4 + 3 | 13:00:00 | 2 | 6 + 4 | 14:00:00 | 3 | 6 + 5 | 15:00:00 | 4 | 7 + 6 | 15:00:00 | 4 | 7 + 7 | 17:00:00 | 7 | 9 + 8 | 18:00:00 | 7 | 10 + 9 | 19:00:00 | 8 | 10 + 10 | 20:00:00 | 9 | 10 +(10 rows) + +select id, f_time, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_time desc range between + '70 min' preceding and '2 hours' following); + id | f_time | first_value | last_value +----+----------+-------------+------------ + 10 | 20:00:00 | 10 | 8 + 9 | 19:00:00 | 10 | 7 + 8 | 18:00:00 | 9 | 7 + 7 | 17:00:00 | 8 | 5 + 6 | 15:00:00 | 6 | 3 + 5 | 15:00:00 | 6 | 3 + 4 | 14:00:00 | 6 | 2 + 3 | 13:00:00 | 4 | 1 + 2 | 12:00:00 | 3 | 1 + 1 | 11:00:00 | 2 | 1 +(10 rows) + +select id, f_timetz, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timetz range between + '70 min'::interval preceding and '2 hours'::interval following); + id | f_timetz | first_value | last_value +----+-------------+-------------+------------ + 1 | 11:00:00+01 | 1 | 3 + 2 | 12:00:00+01 | 1 | 4 + 3 | 13:00:00+01 | 2 | 6 + 4 | 14:00:00+01 | 3 | 6 + 5 | 15:00:00+01 | 4 | 7 + 6 | 15:00:00+01 | 4 | 7 + 7 | 17:00:00+01 | 7 | 9 + 8 | 18:00:00+01 | 7 | 10 + 9 | 19:00:00+01 | 8 | 10 + 10 | 20:00:00+01 | 9 | 10 +(10 rows) + +select id, f_timetz, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timetz desc range between + '70 min' preceding and '2 hours' following); + id | f_timetz | first_value | last_value +----+-------------+-------------+------------ + 10 | 20:00:00+01 | 10 | 8 + 9 | 19:00:00+01 | 10 | 7 + 8 | 18:00:00+01 | 9 | 7 + 7 | 17:00:00+01 | 8 | 5 + 6 | 15:00:00+01 | 6 | 3 + 5 | 15:00:00+01 | 6 | 3 + 4 | 14:00:00+01 | 6 | 2 + 3 | 13:00:00+01 | 4 | 1 + 2 | 12:00:00+01 | 3 | 1 + 1 | 
11:00:00+01 | 2 | 1 +(10 rows) + +select id, f_interval, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_interval range between + '1 year'::interval preceding and '1 year'::interval following); + id | f_interval | first_value | last_value +----+------------+-------------+------------ + 1 | @ 1 year | 1 | 2 + 2 | @ 2 years | 1 | 3 + 3 | @ 3 years | 2 | 4 + 4 | @ 4 years | 3 | 6 + 5 | @ 5 years | 4 | 6 + 6 | @ 5 years | 4 | 6 + 7 | @ 7 years | 7 | 8 + 8 | @ 8 years | 7 | 9 + 9 | @ 9 years | 8 | 10 + 10 | @ 10 years | 9 | 10 +(10 rows) + +select id, f_interval, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_interval desc range between + '1 year' preceding and '1 year' following); + id | f_interval | first_value | last_value +----+------------+-------------+------------ + 10 | @ 10 years | 10 | 9 + 9 | @ 9 years | 10 | 8 + 8 | @ 8 years | 9 | 7 + 7 | @ 7 years | 8 | 7 + 6 | @ 5 years | 6 | 4 + 5 | @ 5 years | 6 | 4 + 4 | @ 4 years | 6 | 3 + 3 | @ 3 years | 4 | 2 + 2 | @ 2 years | 3 | 1 + 1 | @ 1 year | 2 | 1 +(10 rows) + +select id, f_timestamptz, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timestamptz range between + '1 year'::interval preceding and '1 year'::interval following); + id | f_timestamptz | first_value | last_value +----+------------------------------+-------------+------------ + 1 | Thu Oct 19 02:23:54 2000 PDT | 1 | 3 + 2 | Fri Oct 19 02:23:54 2001 PDT | 1 | 4 + 3 | Fri Oct 19 02:23:54 2001 PDT | 1 | 4 + 4 | Sat Oct 19 02:23:54 2002 PDT | 2 | 5 + 5 | Sun Oct 19 02:23:54 2003 PDT | 4 | 6 + 6 | Tue Oct 19 02:23:54 2004 PDT | 5 | 7 + 7 | Wed Oct 19 02:23:54 2005 PDT | 6 | 8 + 8 | Thu Oct 19 02:23:54 2006 PDT | 7 | 9 + 9 | Fri Oct 19 02:23:54 2007 PDT | 8 | 10 + 10 | Sun Oct 19 02:23:54 2008 PDT | 9 | 10 +(10 rows) + +select id, f_timestamptz, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timestamptz desc range between + '1 year' preceding and '1 year' following); + id | f_timestamptz | first_value | last_value +----+------------------------------+-------------+------------ + 10 | Sun Oct 19 02:23:54 2008 PDT | 10 | 9 + 9 | Fri Oct 19 02:23:54 2007 PDT | 10 | 8 + 8 | Thu Oct 19 02:23:54 2006 PDT | 9 | 7 + 7 | Wed Oct 19 02:23:54 2005 PDT | 8 | 6 + 6 | Tue Oct 19 02:23:54 2004 PDT | 7 | 5 + 5 | Sun Oct 19 02:23:54 2003 PDT | 6 | 4 + 4 | Sat Oct 19 02:23:54 2002 PDT | 5 | 2 + 3 | Fri Oct 19 02:23:54 2001 PDT | 4 | 1 + 2 | Fri Oct 19 02:23:54 2001 PDT | 4 | 1 + 1 | Thu Oct 19 02:23:54 2000 PDT | 3 | 1 +(10 rows) + +select id, f_timestamp, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timestamp range between + '1 year'::interval preceding and '1 year'::interval following); + id | f_timestamp | first_value | last_value +----+--------------------------+-------------+------------ + 1 | Thu Oct 19 10:23:54 2000 | 1 | 3 + 2 | Fri Oct 19 10:23:54 2001 | 1 | 4 + 3 | Fri Oct 19 10:23:54 2001 | 1 | 4 + 4 | Sat Oct 19 10:23:54 2002 | 2 | 5 + 5 | Sun Oct 19 10:23:54 2003 | 4 | 6 + 6 | Tue Oct 19 10:23:54 2004 | 5 | 7 + 7 | Wed Oct 19 10:23:54 2005 | 6 | 8 + 8 | Thu Oct 19 10:23:54 2006 | 7 | 9 + 9 | Fri Oct 19 10:23:54 2007 | 8 | 10 + 10 | Sun Oct 19 10:23:54 2008 | 9 | 10 +(10 rows) + +select id, f_timestamp, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timestamp desc range between + '1 year' preceding and '1 year' following); + id | f_timestamp | 
first_value | last_value +----+--------------------------+-------------+------------ + 10 | Sun Oct 19 10:23:54 2008 | 10 | 9 + 9 | Fri Oct 19 10:23:54 2007 | 10 | 8 + 8 | Thu Oct 19 10:23:54 2006 | 9 | 7 + 7 | Wed Oct 19 10:23:54 2005 | 8 | 6 + 6 | Tue Oct 19 10:23:54 2004 | 7 | 5 + 5 | Sun Oct 19 10:23:54 2003 | 6 | 4 + 4 | Sat Oct 19 10:23:54 2002 | 5 | 2 + 3 | Fri Oct 19 10:23:54 2001 | 4 | 1 + 2 | Fri Oct 19 10:23:54 2001 | 4 | 1 + 1 | Thu Oct 19 10:23:54 2000 | 3 | 1 +(10 rows) + +-- RANGE offset PRECEDING/FOLLOWING error cases +select sum(salary) over (order by enroll_date, salary range between '1 year'::interval preceding and '2 years'::interval following + exclude ties), salary, enroll_date from empsalary; +ERROR: RANGE with offset PRECEDING/FOLLOWING requires exactly one ORDER BY column +LINE 1: select sum(salary) over (order by enroll_date, salary range ... + ^ +select sum(salary) over (range between '1 year'::interval preceding and '2 years'::interval following + exclude ties), salary, enroll_date from empsalary; +ERROR: RANGE with offset PRECEDING/FOLLOWING requires exactly one ORDER BY column +LINE 1: select sum(salary) over (range between '1 year'::interval pr... + ^ +select sum(salary) over (order by depname range between '1 year'::interval preceding and '2 years'::interval following + exclude ties), salary, enroll_date from empsalary; +ERROR: RANGE with offset PRECEDING/FOLLOWING is not supported for column type text +LINE 1: ... sum(salary) over (order by depname range between '1 year'::... + ^ +select max(enroll_date) over (order by enroll_date range between 1 preceding and 2 following + exclude ties), salary, enroll_date from empsalary; +ERROR: RANGE with offset PRECEDING/FOLLOWING is not supported for column type date and offset type integer +LINE 1: ...ll_date) over (order by enroll_date range between 1 precedin... + ^ +HINT: Cast the offset value to an appropriate type. +select max(enroll_date) over (order by salary range between -1 preceding and 2 following + exclude ties), salary, enroll_date from empsalary; +ERROR: invalid preceding or following size in window function +select max(enroll_date) over (order by salary range between 1 preceding and -2 following + exclude ties), salary, enroll_date from empsalary; +ERROR: invalid preceding or following size in window function +select max(enroll_date) over (order by salary range between '1 year'::interval preceding and '2 years'::interval following + exclude ties), salary, enroll_date from empsalary; +ERROR: RANGE with offset PRECEDING/FOLLOWING is not supported for column type integer and offset type interval +LINE 1: ...(enroll_date) over (order by salary range between '1 year'::... + ^ +HINT: Cast the offset value to an appropriate type. 
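The two type-mismatch errors above, and their HINT, reflect how the offset expression is resolved: RANGE offset PRECEDING/FOLLOWING looks up an in_range support function keyed on (ORDER BY column type, offset type) in the column's btree opfamily, so an integer offset is accepted for an integer ordering column but not for a date one, whose only registered offset type is interval (OID 4133, in_range_date_interval, in the pg_amproc.h additions earlier in this patch). A small sketch against the same empsalary table, not itself one of the regression cases:

    -- Rejected: no in_range pairing exists for (date, integer).
    --   select max(enroll_date) over (order by enroll_date
    --          range between 1 preceding and 2 following) from empsalary;
    -- Accepted once the offsets are cast as the HINT suggests:
    select max(enroll_date) over (order by enroll_date
           range between '1 day'::interval preceding
                 and '2 days'::interval following)
    from empsalary;

The negative-offset cases in this group instead raise the new ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE (22013) added to errcodes.txt, which each in_range function reports before computing the bound.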
+select max(enroll_date) over (order by enroll_date range between '1 year'::interval preceding and '-2 years'::interval following + exclude ties), salary, enroll_date from empsalary; +ERROR: invalid preceding or following size in window function +-- GROUPS tests +SELECT sum(unique1) over (order by four groups between unbounded preceding and current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 12 | 0 | 0 + 12 | 8 | 0 + 12 | 4 | 0 + 27 | 5 | 1 + 27 | 9 | 1 + 27 | 1 | 1 + 35 | 6 | 2 + 35 | 2 | 2 + 45 | 3 | 3 + 45 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between unbounded preceding and unbounded following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 45 | 0 | 0 + 45 | 8 | 0 + 45 | 4 | 0 + 45 | 5 | 1 + 45 | 9 | 1 + 45 | 1 | 1 + 45 | 6 | 2 + 45 | 2 | 2 + 45 | 3 | 3 + 45 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between current row and unbounded following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 45 | 0 | 0 + 45 | 8 | 0 + 45 | 4 | 0 + 33 | 5 | 1 + 33 | 9 | 1 + 33 | 1 | 1 + 18 | 6 | 2 + 18 | 2 | 2 + 10 | 3 | 3 + 10 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between 1 preceding and unbounded following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 45 | 0 | 0 + 45 | 8 | 0 + 45 | 4 | 0 + 45 | 5 | 1 + 45 | 9 | 1 + 45 | 1 | 1 + 33 | 6 | 2 + 33 | 2 | 2 + 18 | 3 | 3 + 18 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between 1 following and unbounded following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 33 | 0 | 0 + 33 | 8 | 0 + 33 | 4 | 0 + 18 | 5 | 1 + 18 | 9 | 1 + 18 | 1 | 1 + 10 | 6 | 2 + 10 | 2 | 2 + | 3 | 3 + | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between unbounded preceding and 2 following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 35 | 0 | 0 + 35 | 8 | 0 + 35 | 4 | 0 + 45 | 5 | 1 + 45 | 9 | 1 + 45 | 1 | 1 + 45 | 6 | 2 + 45 | 2 | 2 + 45 | 3 | 3 + 45 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 preceding), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + | 0 | 0 + | 8 | 0 + | 4 | 0 + 12 | 5 | 1 + 12 | 9 | 1 + 12 | 1 | 1 + 27 | 6 | 2 + 27 | 2 | 2 + 23 | 3 | 3 + 23 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 27 | 0 | 0 + 27 | 8 | 0 + 27 | 4 | 0 + 35 | 5 | 1 + 35 | 9 | 1 + 35 | 1 | 1 + 45 | 6 | 2 + 45 | 2 | 2 + 33 | 3 | 3 + 33 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between 0 preceding and 0 following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 12 | 0 | 0 + 12 | 8 | 0 + 12 | 4 | 0 + 15 | 5 | 1 + 15 | 9 | 1 + 15 | 1 | 1 + 8 | 6 | 2 + 8 | 2 | 2 + 10 | 3 | 3 + 10 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 following + exclude current row), unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 27 | 0 | 0 + 19 | 8 | 0 + 23 | 4 | 0 + 30 | 5 | 1 + 26 | 9 | 1 + 34 | 1 | 1 + 39 | 6 | 2 + 43 | 2 | 2 + 30 | 3 | 3 + 26 | 7 | 3 +(10 
rows) + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 following + exclude group), unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 15 | 0 | 0 + 15 | 8 | 0 + 15 | 4 | 0 + 20 | 5 | 1 + 20 | 9 | 1 + 20 | 1 | 1 + 37 | 6 | 2 + 37 | 2 | 2 + 23 | 3 | 3 + 23 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 following + exclude ties), unique1, four +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four +-----+---------+------ + 15 | 0 | 0 + 23 | 8 | 0 + 19 | 4 | 0 + 25 | 5 | 1 + 29 | 9 | 1 + 21 | 1 | 1 + 43 | 6 | 2 + 39 | 2 | 2 + 26 | 3 | 3 + 30 | 7 | 3 +(10 rows) + +SELECT sum(unique1) over (partition by ten + order by four groups between 0 preceding and 0 following),unique1, four, ten +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four | ten +-----+---------+------+----- + 0 | 0 | 0 | 0 + 1 | 1 | 1 | 1 + 2 | 2 | 2 | 2 + 3 | 3 | 3 | 3 + 4 | 4 | 0 | 4 + 5 | 5 | 1 | 5 + 6 | 6 | 2 | 6 + 7 | 7 | 3 | 7 + 8 | 8 | 0 | 8 + 9 | 9 | 1 | 9 +(10 rows) + +SELECT sum(unique1) over (partition by ten + order by four groups between 0 preceding and 0 following exclude current row), unique1, four, ten +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four | ten +-----+---------+------+----- + | 0 | 0 | 0 + | 1 | 1 | 1 + | 2 | 2 | 2 + | 3 | 3 | 3 + | 4 | 0 | 4 + | 5 | 1 | 5 + | 6 | 2 | 6 + | 7 | 3 | 7 + | 8 | 0 | 8 + | 9 | 1 | 9 +(10 rows) + +SELECT sum(unique1) over (partition by ten + order by four groups between 0 preceding and 0 following exclude group), unique1, four, ten +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four | ten +-----+---------+------+----- + | 0 | 0 | 0 + | 1 | 1 | 1 + | 2 | 2 | 2 + | 3 | 3 | 3 + | 4 | 0 | 4 + | 5 | 1 | 5 + | 6 | 2 | 6 + | 7 | 3 | 7 + | 8 | 0 | 8 + | 9 | 1 | 9 +(10 rows) + +SELECT sum(unique1) over (partition by ten + order by four groups between 0 preceding and 0 following exclude ties), unique1, four, ten +FROM tenk1 WHERE unique1 < 10; + sum | unique1 | four | ten +-----+---------+------+----- + 0 | 0 | 0 | 0 + 1 | 1 | 1 | 1 + 2 | 2 | 2 | 2 + 3 | 3 | 3 | 3 + 4 | 4 | 0 | 4 + 5 | 5 | 1 | 5 + 6 | 6 | 2 | 6 + 7 | 7 | 3 | 7 + 8 | 8 | 0 | 8 + 9 | 9 | 1 | 9 +(10 rows) + +select first_value(salary) over(order by enroll_date groups between 1 preceding and 1 following), + lead(salary) over(order by enroll_date groups between 1 preceding and 1 following), + nth_value(salary, 1) over(order by enroll_date groups between 1 preceding and 1 following), + salary, enroll_date from empsalary; + first_value | lead | nth_value | salary | enroll_date +-------------+------+-----------+--------+------------- + 5000 | 6000 | 5000 | 5000 | 10-01-2006 + 5000 | 3900 | 5000 | 6000 | 10-01-2006 + 5000 | 4800 | 5000 | 3900 | 12-23-2006 + 3900 | 5200 | 3900 | 4800 | 08-01-2007 + 3900 | 4800 | 3900 | 5200 | 08-01-2007 + 4800 | 5200 | 4800 | 4800 | 08-08-2007 + 4800 | 3500 | 4800 | 5200 | 08-15-2007 + 5200 | 4500 | 5200 | 3500 | 12-10-2007 + 3500 | 4200 | 3500 | 4500 | 01-01-2008 + 3500 | | 3500 | 4200 | 01-01-2008 +(10 rows) + +select last_value(salary) over(order by enroll_date groups between 1 preceding and 1 following), + lag(salary) over(order by enroll_date groups between 1 preceding and 1 following), + salary, enroll_date from empsalary; + last_value | lag | salary | enroll_date +------------+------+--------+------------- + 3900 | | 5000 | 10-01-2006 + 3900 | 5000 | 6000 | 10-01-2006 + 5200 | 6000 | 3900 | 12-23-2006 + 4800 | 3900 | 4800 | 08-01-2007 + 4800 | 4800 | 5200 | 
08-01-2007 + 5200 | 5200 | 4800 | 08-08-2007 + 3500 | 4800 | 5200 | 08-15-2007 + 4200 | 5200 | 3500 | 12-10-2007 + 4200 | 3500 | 4500 | 01-01-2008 + 4200 | 4500 | 4200 | 01-01-2008 +(10 rows) + +select first_value(salary) over(order by enroll_date groups between 1 following and 3 following + exclude current row), + lead(salary) over(order by enroll_date groups between 1 following and 3 following exclude ties), + nth_value(salary, 1) over(order by enroll_date groups between 1 following and 3 following + exclude ties), + salary, enroll_date from empsalary; + first_value | lead | nth_value | salary | enroll_date +-------------+------+-----------+--------+------------- + 3900 | 6000 | 3900 | 5000 | 10-01-2006 + 3900 | 3900 | 3900 | 6000 | 10-01-2006 + 4800 | 4800 | 4800 | 3900 | 12-23-2006 + 4800 | 5200 | 4800 | 4800 | 08-01-2007 + 4800 | 4800 | 4800 | 5200 | 08-01-2007 + 5200 | 5200 | 5200 | 4800 | 08-08-2007 + 3500 | 3500 | 3500 | 5200 | 08-15-2007 + 4500 | 4500 | 4500 | 3500 | 12-10-2007 + | 4200 | | 4500 | 01-01-2008 + | | | 4200 | 01-01-2008 +(10 rows) + +select last_value(salary) over(order by enroll_date groups between 1 following and 3 following + exclude group), + lag(salary) over(order by enroll_date groups between 1 following and 3 following exclude group), + salary, enroll_date from empsalary; + last_value | lag | salary | enroll_date +------------+------+--------+------------- + 4800 | | 5000 | 10-01-2006 + 4800 | 5000 | 6000 | 10-01-2006 + 5200 | 6000 | 3900 | 12-23-2006 + 3500 | 3900 | 4800 | 08-01-2007 + 3500 | 4800 | 5200 | 08-01-2007 + 4200 | 5200 | 4800 | 08-08-2007 + 4200 | 4800 | 5200 | 08-15-2007 + 4200 | 5200 | 3500 | 12-10-2007 + | 3500 | 4500 | 01-01-2008 + | 4500 | 4200 | 01-01-2008 +(10 rows) + +-- Show differences in offset interpretation between ROWS, RANGE, and GROUPS +WITH cte (x) AS ( + SELECT * FROM generate_series(1, 35, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x rows between 1 preceding and 1 following); + x | sum +----+----- + 1 | 4 + 3 | 9 + 5 | 15 + 7 | 21 + 9 | 27 + 11 | 33 + 13 | 39 + 15 | 45 + 17 | 51 + 19 | 57 + 21 | 63 + 23 | 69 + 25 | 75 + 27 | 81 + 29 | 87 + 31 | 93 + 33 | 99 + 35 | 68 +(18 rows) + +WITH cte (x) AS ( + SELECT * FROM generate_series(1, 35, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x range between 1 preceding and 1 following); + x | sum +----+----- + 1 | 1 + 3 | 3 + 5 | 5 + 7 | 7 + 9 | 9 + 11 | 11 + 13 | 13 + 15 | 15 + 17 | 17 + 19 | 19 + 21 | 21 + 23 | 23 + 25 | 25 + 27 | 27 + 29 | 29 + 31 | 31 + 33 | 33 + 35 | 35 +(18 rows) + +WITH cte (x) AS ( + SELECT * FROM generate_series(1, 35, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x groups between 1 preceding and 1 following); + x | sum +----+----- + 1 | 4 + 3 | 9 + 5 | 15 + 7 | 21 + 9 | 27 + 11 | 33 + 13 | 39 + 15 | 45 + 17 | 51 + 19 | 57 + 21 | 63 + 23 | 69 + 25 | 75 + 27 | 81 + 29 | 87 + 31 | 93 + 33 | 99 + 35 | 68 +(18 rows) + +WITH cte (x) AS ( + select 1 union all select 1 union all select 1 union all + SELECT * FROM generate_series(5, 49, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x rows between 1 preceding and 1 following); + x | sum +----+----- + 1 | 2 + 1 | 3 + 1 | 7 + 5 | 13 + 7 | 21 + 9 | 27 + 11 | 33 + 13 | 39 + 15 | 45 + 17 | 51 + 19 | 57 + 21 | 63 + 23 | 69 + 25 | 75 + 27 | 81 + 29 | 87 + 31 | 93 + 33 | 99 + 35 | 105 + 37 | 111 + 39 | 117 + 41 | 123 + 43 | 129 + 45 | 135 + 47 | 141 + 49 | 96 +(26 rows) + +WITH cte (x) AS ( + select 1 union all select 1 union all select 1 union all 
+ SELECT * FROM generate_series(5, 49, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x range between 1 preceding and 1 following); + x | sum +----+----- + 1 | 3 + 1 | 3 + 1 | 3 + 5 | 5 + 7 | 7 + 9 | 9 + 11 | 11 + 13 | 13 + 15 | 15 + 17 | 17 + 19 | 19 + 21 | 21 + 23 | 23 + 25 | 25 + 27 | 27 + 29 | 29 + 31 | 31 + 33 | 33 + 35 | 35 + 37 | 37 + 39 | 39 + 41 | 41 + 43 | 43 + 45 | 45 + 47 | 47 + 49 | 49 +(26 rows) + +WITH cte (x) AS ( + select 1 union all select 1 union all select 1 union all + SELECT * FROM generate_series(5, 49, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x groups between 1 preceding and 1 following); + x | sum +----+----- + 1 | 8 + 1 | 8 + 1 | 8 + 5 | 15 + 7 | 21 + 9 | 27 + 11 | 33 + 13 | 39 + 15 | 45 + 17 | 51 + 19 | 57 + 21 | 63 + 23 | 69 + 25 | 75 + 27 | 81 + 29 | 87 + 31 | 93 + 33 | 99 + 35 | 105 + 37 | 111 + 39 | 117 + 41 | 123 + 43 | 129 + 45 | 135 + 47 | 141 + 49 | 96 +(26 rows) + -- with UNION SELECT count(*) OVER (PARTITION BY four) FROM (SELECT * FROM tenk1 UNION ALL SELECT * FROM tenk2)s LIMIT 0; count diff --git a/src/test/regress/sql/window.sql b/src/test/regress/sql/window.sql index e2a1a1cdd5..3320aa81f8 100644 --- a/src/test/regress/sql/window.sql +++ b/src/test/regress/sql/window.sql @@ -189,6 +189,46 @@ SELECT sum(unique1) over (rows between 2 preceding and 2 following), unique1, four FROM tenk1 WHERE unique1 < 10; +SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude no others), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (rows between 2 preceding and 2 following exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT first_value(unique1) over (ORDER BY four rows between current row and 2 following exclude current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT first_value(unique1) over (ORDER BY four rows between current row and 2 following exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT first_value(unique1) over (ORDER BY four rows between current row and 2 following exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT last_value(unique1) over (ORDER BY four rows between current row and 2 following exclude current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT last_value(unique1) over (ORDER BY four rows between current row and 2 following exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT last_value(unique1) over (ORDER BY four rows between current row and 2 following exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + SELECT sum(unique1) over (rows between 2 preceding and 1 preceding), unique1, four FROM tenk1 WHERE unique1 < 10; @@ -205,10 +245,17 @@ SELECT sum(unique1) over (w range between current row and unbounded following), unique1, four FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four); --- fail: not implemented yet -SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding), +SELECT sum(unique1) over (w range between unbounded preceding and current row exclude current row), unique1, four -FROM tenk1 WHERE unique1 < 10; +FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four); + +SELECT sum(unique1) over (w 
range between unbounded preceding and current row exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four); + +SELECT sum(unique1) over (w range between unbounded preceding and current row exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10 WINDOW w AS (order by four); SELECT first_value(unique1) over w, nth_value(unique1, 2) over w AS nth_2, @@ -230,6 +277,449 @@ SELECT * FROM v_window; SELECT pg_get_viewdef('v_window'); +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i rows between 1 preceding and 1 following + exclude current row) as sum_rows FROM generate_series(1, 10) i; + +SELECT * FROM v_window; + +SELECT pg_get_viewdef('v_window'); + +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i rows between 1 preceding and 1 following + exclude group) as sum_rows FROM generate_series(1, 10) i; + +SELECT * FROM v_window; + +SELECT pg_get_viewdef('v_window'); + +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i rows between 1 preceding and 1 following + exclude ties) as sum_rows FROM generate_series(1, 10) i; + +SELECT * FROM v_window; + +SELECT pg_get_viewdef('v_window'); + +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i rows between 1 preceding and 1 following + exclude no others) as sum_rows FROM generate_series(1, 10) i; + +SELECT * FROM v_window; + +SELECT pg_get_viewdef('v_window'); + +CREATE OR REPLACE TEMP VIEW v_window AS + SELECT i, sum(i) over (order by i groups between 1 preceding and 1 following) as sum_rows FROM generate_series(1, 10) i; + +SELECT * FROM v_window; + +SELECT pg_get_viewdef('v_window'); + +DROP VIEW v_window; + +CREATE TEMP VIEW v_window AS + SELECT i, min(i) over (order by i range between '1 day' preceding and '10 days' following) as min_i + FROM generate_series(now(), now()+'100 days'::interval, '1 hour') i; + +SELECT pg_get_viewdef('v_window'); + +-- RANGE offset PRECEDING/FOLLOWING tests + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four desc range between 2::int8 preceding and 1::int2 preceding), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding exclude no others), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding exclude current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 1::int2 preceding exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 6::int2 following exclude ties), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four range between 2::int8 preceding and 6::int2 following exclude group), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (partition by four order by unique1 range between 5::int8 preceding and 6::int2 following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (partition by four order by unique1 range between 5::int8 preceding and 6::int2 
following + exclude current row),unique1, four +FROM tenk1 WHERE unique1 < 10; + +select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following), + salary, enroll_date from empsalary; + +select sum(salary) over (order by enroll_date desc range between '1 year'::interval preceding and '1 year'::interval following), + salary, enroll_date from empsalary; + +select sum(salary) over (order by enroll_date desc range between '1 year'::interval following and '1 year'::interval following), + salary, enroll_date from empsalary; + +select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following + exclude current row), salary, enroll_date from empsalary; + +select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following + exclude group), salary, enroll_date from empsalary; + +select sum(salary) over (order by enroll_date range between '1 year'::interval preceding and '1 year'::interval following + exclude ties), salary, enroll_date from empsalary; + +select first_value(salary) over(order by salary range between 1000 preceding and 1000 following), + lead(salary) over(order by salary range between 1000 preceding and 1000 following), + nth_value(salary, 1) over(order by salary range between 1000 preceding and 1000 following), + salary from empsalary; + +select last_value(salary) over(order by salary range between 1000 preceding and 1000 following), + lag(salary) over(order by salary range between 1000 preceding and 1000 following), + salary from empsalary; + +select first_value(salary) over(order by salary range between 1000 following and 3000 following + exclude current row), + lead(salary) over(order by salary range between 1000 following and 3000 following exclude ties), + nth_value(salary, 1) over(order by salary range between 1000 following and 3000 following + exclude ties), + salary from empsalary; + +select last_value(salary) over(order by salary range between 1000 following and 3000 following + exclude group), + lag(salary) over(order by salary range between 1000 following and 3000 following exclude group), + salary from empsalary; + +select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude ties), + last_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following), + salary, enroll_date from empsalary; + +select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude ties), + last_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude ties), + salary, enroll_date from empsalary; + +select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude group), + last_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude group), + salary, enroll_date from empsalary; + +select first_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude current row), + last_value(salary) over(order by enroll_date range between unbounded preceding and '1 year'::interval following + exclude current row), + salary, enroll_date from empsalary; + +-- RANGE offset PRECEDING/FOLLOWING with null values +select x, y, + 
first_value(y) over w, + last_value(y) over w +from + (select x, x as y from generate_series(1,5) as x + union all select null, 42 + union all select null, 43) ss +window w as + (order by x asc nulls first range between 2 preceding and 2 following); + +select x, y, + first_value(y) over w, + last_value(y) over w +from + (select x, x as y from generate_series(1,5) as x + union all select null, 42 + union all select null, 43) ss +window w as + (order by x asc nulls last range between 2 preceding and 2 following); + +select x, y, + first_value(y) over w, + last_value(y) over w +from + (select x, x as y from generate_series(1,5) as x + union all select null, 42 + union all select null, 43) ss +window w as + (order by x desc nulls first range between 2 preceding and 2 following); + +select x, y, + first_value(y) over w, + last_value(y) over w +from + (select x, x as y from generate_series(1,5) as x + union all select null, 42 + union all select null, 43) ss +window w as + (order by x desc nulls last range between 2 preceding and 2 following); + +-- Check overflow behavior for various integer sizes + +select x, last_value(x) over (order by x::smallint range between current row and 2147450884 following) +from generate_series(32764, 32766) x; + +select x, last_value(x) over (order by x::smallint desc range between current row and 2147450885 following) +from generate_series(-32766, -32764) x; + +select x, last_value(x) over (order by x range between current row and 4 following) +from generate_series(2147483644, 2147483646) x; + +select x, last_value(x) over (order by x desc range between current row and 5 following) +from generate_series(-2147483646, -2147483644) x; + +select x, last_value(x) over (order by x range between current row and 4 following) +from generate_series(9223372036854775804, 9223372036854775806) x; + +select x, last_value(x) over (order by x desc range between current row and 5 following) +from generate_series(-9223372036854775806, -9223372036854775804) x; + +-- Test in_range for other datetime datatypes + +create temp table datetimes( + id int, + f_time time, + f_timetz timetz, + f_interval interval, + f_timestamptz timestamptz, + f_timestamp timestamp +); + +insert into datetimes values +(1, '11:00', '11:00 BST', '1 year', '2000-10-19 10:23:54+01', '2000-10-19 10:23:54'), +(2, '12:00', '12:00 BST', '2 years', '2001-10-19 10:23:54+01', '2001-10-19 10:23:54'), +(3, '13:00', '13:00 BST', '3 years', '2001-10-19 10:23:54+01', '2001-10-19 10:23:54'), +(4, '14:00', '14:00 BST', '4 years', '2002-10-19 10:23:54+01', '2002-10-19 10:23:54'), +(5, '15:00', '15:00 BST', '5 years', '2003-10-19 10:23:54+01', '2003-10-19 10:23:54'), +(6, '15:00', '15:00 BST', '5 years', '2004-10-19 10:23:54+01', '2004-10-19 10:23:54'), +(7, '17:00', '17:00 BST', '7 years', '2005-10-19 10:23:54+01', '2005-10-19 10:23:54'), +(8, '18:00', '18:00 BST', '8 years', '2006-10-19 10:23:54+01', '2006-10-19 10:23:54'), +(9, '19:00', '19:00 BST', '9 years', '2007-10-19 10:23:54+01', '2007-10-19 10:23:54'), +(10, '20:00', '20:00 BST', '10 years', '2008-10-19 10:23:54+01', '2008-10-19 10:23:54'); + +select id, f_time, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_time range between + '70 min'::interval preceding and '2 hours'::interval following); + +select id, f_time, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_time desc range between + '70 min' preceding and '2 hours' following); + +select id, f_timetz, first_value(id) over w, 
last_value(id) over w +from datetimes +window w as (order by f_timetz range between + '70 min'::interval preceding and '2 hours'::interval following); + +select id, f_timetz, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timetz desc range between + '70 min' preceding and '2 hours' following); + +select id, f_interval, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_interval range between + '1 year'::interval preceding and '1 year'::interval following); + +select id, f_interval, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_interval desc range between + '1 year' preceding and '1 year' following); + +select id, f_timestamptz, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timestamptz range between + '1 year'::interval preceding and '1 year'::interval following); + +select id, f_timestamptz, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timestamptz desc range between + '1 year' preceding and '1 year' following); + +select id, f_timestamp, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timestamp range between + '1 year'::interval preceding and '1 year'::interval following); + +select id, f_timestamp, first_value(id) over w, last_value(id) over w +from datetimes +window w as (order by f_timestamp desc range between + '1 year' preceding and '1 year' following); + +-- RANGE offset PRECEDING/FOLLOWING error cases +select sum(salary) over (order by enroll_date, salary range between '1 year'::interval preceding and '2 years'::interval following + exclude ties), salary, enroll_date from empsalary; + +select sum(salary) over (range between '1 year'::interval preceding and '2 years'::interval following + exclude ties), salary, enroll_date from empsalary; + +select sum(salary) over (order by depname range between '1 year'::interval preceding and '2 years'::interval following + exclude ties), salary, enroll_date from empsalary; + +select max(enroll_date) over (order by enroll_date range between 1 preceding and 2 following + exclude ties), salary, enroll_date from empsalary; + +select max(enroll_date) over (order by salary range between -1 preceding and 2 following + exclude ties), salary, enroll_date from empsalary; + +select max(enroll_date) over (order by salary range between 1 preceding and -2 following + exclude ties), salary, enroll_date from empsalary; + +select max(enroll_date) over (order by salary range between '1 year'::interval preceding and '2 years'::interval following + exclude ties), salary, enroll_date from empsalary; + +select max(enroll_date) over (order by enroll_date range between '1 year'::interval preceding and '-2 years'::interval following + exclude ties), salary, enroll_date from empsalary; + +-- GROUPS tests + +SELECT sum(unique1) over (order by four groups between unbounded preceding and current row), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between unbounded preceding and unbounded following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between current row and unbounded following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between 1 preceding and unbounded following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between 1 
following and unbounded following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between unbounded preceding and 2 following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 preceding), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between 0 preceding and 0 following), + unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 following + exclude current row), unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 following + exclude group), unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (order by four groups between 2 preceding and 1 following + exclude ties), unique1, four +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (partition by ten + order by four groups between 0 preceding and 0 following),unique1, four, ten +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (partition by ten + order by four groups between 0 preceding and 0 following exclude current row), unique1, four, ten +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (partition by ten + order by four groups between 0 preceding and 0 following exclude group), unique1, four, ten +FROM tenk1 WHERE unique1 < 10; + +SELECT sum(unique1) over (partition by ten + order by four groups between 0 preceding and 0 following exclude ties), unique1, four, ten +FROM tenk1 WHERE unique1 < 10; + +select first_value(salary) over(order by enroll_date groups between 1 preceding and 1 following), + lead(salary) over(order by enroll_date groups between 1 preceding and 1 following), + nth_value(salary, 1) over(order by enroll_date groups between 1 preceding and 1 following), + salary, enroll_date from empsalary; + +select last_value(salary) over(order by enroll_date groups between 1 preceding and 1 following), + lag(salary) over(order by enroll_date groups between 1 preceding and 1 following), + salary, enroll_date from empsalary; + +select first_value(salary) over(order by enroll_date groups between 1 following and 3 following + exclude current row), + lead(salary) over(order by enroll_date groups between 1 following and 3 following exclude ties), + nth_value(salary, 1) over(order by enroll_date groups between 1 following and 3 following + exclude ties), + salary, enroll_date from empsalary; + +select last_value(salary) over(order by enroll_date groups between 1 following and 3 following + exclude group), + lag(salary) over(order by enroll_date groups between 1 following and 3 following exclude group), + salary, enroll_date from empsalary; + +-- Show differences in offset interpretation between ROWS, RANGE, and GROUPS +WITH cte (x) AS ( + SELECT * FROM generate_series(1, 35, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x rows between 1 preceding and 1 following); + +WITH cte (x) AS ( + SELECT * FROM generate_series(1, 35, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x range between 1 preceding and 1 following); + +WITH cte (x) AS ( + SELECT * FROM generate_series(1, 35, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x groups between 1 preceding and 1 following); + +WITH cte (x) AS ( + 
select 1 union all select 1 union all select 1 union all + SELECT * FROM generate_series(5, 49, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x rows between 1 preceding and 1 following); + +WITH cte (x) AS ( + select 1 union all select 1 union all select 1 union all + SELECT * FROM generate_series(5, 49, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x range between 1 preceding and 1 following); + +WITH cte (x) AS ( + select 1 union all select 1 union all select 1 union all + SELECT * FROM generate_series(5, 49, 2) +) +SELECT x, (sum(x) over w) +FROM cte +WINDOW w AS (ORDER BY x groups between 1 preceding and 1 following); + -- with UNION SELECT count(*) OVER (PARTITION BY four) FROM (SELECT * FROM tenk1 UNION ALL SELECT * FROM tenk2)s LIMIT 0; From 9e039015501ad4033c093dee8dfc8b414634e953 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Wed, 7 Feb 2018 10:53:56 +0100 Subject: [PATCH 0960/1087] Change default git repo URL to https Since we now support the server side handler for git over https (so we're no longer using the "dumb protocol"), make https the primary choice for cloning the repository, and the git protocol the secondary choice. In passing, also change the links to git-scm.com from http to https. Reviewed by Stefan Kaltenbrunner and David G. Johnston --- doc/src/sgml/sourcerepo.sgml | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/doc/src/sgml/sourcerepo.sgml b/doc/src/sgml/sourcerepo.sgml index 8729a8450d..aaeacb14c5 100644 --- a/doc/src/sgml/sourcerepo.sgml +++ b/doc/src/sgml/sourcerepo.sgml @@ -41,7 +41,7 @@ You will need an installed version of Git, which you can - get from . Many systems already + get from . Many systems already have a recent version of Git installed by default, or available in their package distribution system. @@ -52,7 +52,7 @@ To begin using the Git repository, make a clone of the official mirror: -git clone git://git.postgresql.org/git/postgresql.git +git clone https://git.postgresql.org/git/postgresql.git This will copy the full repository to your local machine, so it may take @@ -62,16 +62,13 @@ git clone git://git.postgresql.org/git/postgresql.git - The Git mirror can also be reached via the HTTP protocol, if for example - a firewall is blocking access to the Git protocol. Just change the URL - prefix to https, as in: + The Git mirror can also be reached via the Git protocol. Just change the URL + prefix to git, as in: -git clone https://git.postgresql.org/git/postgresql.git +git clone git://git.postgresql.org/git/postgresql.git - The HTTP protocol is less efficient than the Git protocol, so it will be - slower to use. @@ -90,7 +87,7 @@ git fetch Git can do a lot more things than just fetch the source. For more information, consult the Git man pages, or see the - website at . + website at . From 4815dfa10f4db8835b7424da22a4011b53040606 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 7 Feb 2018 08:41:14 -0500 Subject: [PATCH 0961/1087] Remove prototype for fmgr() function, which no longer exists. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Commit 5ded4bd21403e143dd3eb66b92d52732fdac1945 removed the code for this function, but neglected to remove the prototype and associated comments. 
Dagfinn Ilmari Mannsåker Discussion: http://postgr.es/m/d8j4lmuxjzk.fsf@dalvik.ping.uio.no --- src/include/fmgr.h | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/src/include/fmgr.h b/src/include/fmgr.h index 665dd76b12..69786bfca8 100644 --- a/src/include/fmgr.h +++ b/src/include/fmgr.h @@ -730,19 +730,4 @@ extern PGDLLIMPORT fmgr_hook_type fmgr_hook; #define FmgrHookIsNeeded(fn_oid) \ (!needs_fmgr_hook ? false : (*needs_fmgr_hook)(fn_oid)) -/* - * !!! OLD INTERFACE !!! - * - * fmgr() is the only remaining vestige of the old-style caller support - * functions. It's no longer used anywhere in the Postgres distribution, - * but we should leave it around for a release or two to ease the transition - * for user-supplied C functions. OidFunctionCallN() replaces it for new - * code. - */ - -/* - * DEPRECATED, DO NOT USE IN NEW CODE - */ -extern char *fmgr(Oid procedureId,...); - #endif /* FMGR_H */ From b98a7cd58f6189833e6ad6ac7e7ad5b6412409fd Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 7 Feb 2018 08:48:04 -0500 Subject: [PATCH 0962/1087] Update out-of-date comment in StartupXLOG. Commit 4b0d28de06b28e57c540fca458e4853854fbeaf8 should have updated this comment, but did not. Thomas Munro Discussion: http://postgr.es/m/CAEepm=0iJ8aqQcF9ij2KerAkuHF3SwrVTzjMdm1H4w++nfBf9A@mail.gmail.com --- src/backend/access/transam/xlog.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index e42b828edf..18b7471597 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -6563,10 +6563,7 @@ StartupXLOG(void) StandbyMode = true; } - /* - * Get the last valid checkpoint record. If the latest one according - * to pg_control is broken, try the next-to-last one. - */ + /* Get the last valid checkpoint record. */ checkPointLoc = ControlFile->checkPoint; RedoStartLSN = ControlFile->checkPointCopy.redo; record = ReadCheckpointRecord(xlogreader, checkPointLoc, 1, true); From 32ff2691173559e5f0ca3ea9cd5db134af6ee37d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 6 Feb 2018 22:43:21 -0500 Subject: [PATCH 0963/1087] Add more information_schema columns - table_constraints.enforced - triggers.action_order - triggers.action_reference_old_table - triggers.action_reference_new_table Reviewed-by: Michael Paquier --- doc/src/sgml/information_schema.sgml | 20 +++++-- src/backend/catalog/information_schema.sql | 18 ++++--- src/test/regress/expected/triggers.out | 63 ++++++++++++++++++++++ src/test/regress/sql/triggers.sql | 14 +++++ 4 files changed, 106 insertions(+), 9 deletions(-) diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml index 0faa72f1d3..09ef2827f2 100644 --- a/doc/src/sgml/information_schema.sgml +++ b/doc/src/sgml/information_schema.sgml @@ -5317,6 +5317,13 @@ ORDER BY c.ordinal_position; yes_or_no YES if the constraint is deferrable and initially deferred, NO if not
+ + enforced + yes_or_no + Applies to a feature not available in + PostgreSQL (currently always + YES) + @@ -5761,7 +5768,14 @@ ORDER BY c.ordinal_position; action_order cardinal_number - Not yet implemented + + Firing order among triggers on the same table having the same + event_manipulation, + action_timing, and + action_orientation. In + PostgreSQL, triggers are fired in name + order, so this column reflects that. + @@ -5806,13 +5820,13 @@ ORDER BY c.ordinal_position; action_reference_old_table sql_identifier - Applies to a feature not available in PostgreSQL + Name of the old transition table, or null if none action_reference_new_table sql_identifier - Applies to a feature not available in PostgreSQL + Name of the new transition table, or null if none diff --git a/src/backend/catalog/information_schema.sql b/src/backend/catalog/information_schema.sql index 6fb1a1bc1c..686528c354 100644 --- a/src/backend/catalog/information_schema.sql +++ b/src/backend/catalog/information_schema.sql @@ -1783,7 +1783,8 @@ CREATE VIEW table_constraints AS CAST(CASE WHEN c.condeferrable THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_deferrable, CAST(CASE WHEN c.condeferred THEN 'YES' ELSE 'NO' END AS yes_or_no) - AS initially_deferred + AS initially_deferred, + CAST('YES' AS yes_or_no) AS enforced FROM pg_namespace nc, pg_namespace nr, @@ -1812,7 +1813,8 @@ CREATE VIEW table_constraints AS CAST(r.relname AS sql_identifier) AS table_name, CAST('CHECK' AS character_data) AS constraint_type, CAST('NO' AS yes_or_no) AS is_deferrable, - CAST('NO' AS yes_or_no) AS initially_deferred + CAST('NO' AS yes_or_no) AS initially_deferred, + CAST('YES' AS yes_or_no) AS enforced FROM pg_namespace nr, pg_class r, @@ -2084,8 +2086,12 @@ CREATE VIEW triggers AS CAST(current_database() AS sql_identifier) AS event_object_catalog, CAST(n.nspname AS sql_identifier) AS event_object_schema, CAST(c.relname AS sql_identifier) AS event_object_table, - CAST(null AS cardinal_number) AS action_order, - -- XXX strange hacks follow + CAST( + -- To determine action order, partition by schema, table, + -- event_manipulation (INSERT/DELETE/UPDATE), ROW/STATEMENT (1), + -- BEFORE/AFTER (66), then order by trigger name + rank() OVER (PARTITION BY n.oid, c.oid, em.num, t.tgtype & 1, t.tgtype & 66 ORDER BY t.tgname) + AS cardinal_number) AS action_order, CAST( CASE WHEN pg_has_role(c.relowner, 'USAGE') THEN (regexp_match(pg_get_triggerdef(t.oid), E'.{35,} WHEN \\((.+)\\) EXECUTE PROCEDURE'))[1] @@ -2103,8 +2109,8 @@ CREATE VIEW triggers AS -- hard-wired refs to TRIGGER_TYPE_BEFORE, TRIGGER_TYPE_INSTEAD CASE t.tgtype & 66 WHEN 2 THEN 'BEFORE' WHEN 64 THEN 'INSTEAD OF' ELSE 'AFTER' END AS character_data) AS action_timing, - CAST(null AS sql_identifier) AS action_reference_old_table, - CAST(null AS sql_identifier) AS action_reference_new_table, + CAST(tgoldtable AS sql_identifier) AS action_reference_old_table, + CAST(tgnewtable AS sql_identifier) AS action_reference_new_table, CAST(null AS sql_identifier) AS action_reference_old_row, CAST(null AS sql_identifier) AS action_reference_new_row, CAST(null AS time_stamp) AS created diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 9a7aafcc96..83978b610e 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -96,6 +96,24 @@ CONTEXT: SQL statement "delete from fkeys2 where fkey21 = $1 and fkey22 = $2 " update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 = '1'; NOTICE: check_pkeys_fkey_cascade: 1 
tuple(s) of fkeys are deleted NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted +SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, + action_order, action_condition, action_orientation, action_timing, + action_reference_old_table, action_reference_new_table + FROM information_schema.triggers ORDER BY 1, 2; + trigger_name | event_manipulation | event_object_schema | event_object_table | action_order | action_condition | action_orientation | action_timing | action_reference_old_table | action_reference_new_table +----------------------------+--------------------+---------------------+--------------------+--------------+------------------+--------------------+---------------+----------------------------+---------------------------- + check_fkeys2_fkey_restrict | DELETE | public | fkeys2 | 1 | | ROW | BEFORE | | + check_fkeys2_fkey_restrict | UPDATE | public | fkeys2 | 1 | | ROW | BEFORE | | + check_fkeys2_pkey_exist | INSERT | public | fkeys2 | 1 | | ROW | BEFORE | | + check_fkeys2_pkey_exist | UPDATE | public | fkeys2 | 2 | | ROW | BEFORE | | + check_fkeys_pkey2_exist | INSERT | public | fkeys | 1 | | ROW | BEFORE | | + check_fkeys_pkey2_exist | UPDATE | public | fkeys | 1 | | ROW | BEFORE | | + check_fkeys_pkey_exist | INSERT | public | fkeys | 2 | | ROW | BEFORE | | + check_fkeys_pkey_exist | UPDATE | public | fkeys | 2 | | ROW | BEFORE | | + check_pkeys_fkey_cascade | DELETE | public | pkeys | 1 | | ROW | BEFORE | | + check_pkeys_fkey_cascade | UPDATE | public | pkeys | 1 | | ROW | BEFORE | | +(10 rows) + DROP TABLE pkeys; DROP TABLE fkeys; DROP TABLE fkeys2; @@ -347,6 +365,24 @@ CREATE TRIGGER insert_when BEFORE INSERT ON main_table FOR EACH STATEMENT WHEN (true) EXECUTE PROCEDURE trigger_func('insert_when'); CREATE TRIGGER delete_when AFTER DELETE ON main_table FOR EACH STATEMENT WHEN (true) EXECUTE PROCEDURE trigger_func('delete_when'); +SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, + action_order, action_condition, action_orientation, action_timing, + action_reference_old_table, action_reference_new_table + FROM information_schema.triggers ORDER BY 1, 2; + trigger_name | event_manipulation | event_object_schema | event_object_table | action_order | action_condition | action_orientation | action_timing | action_reference_old_table | action_reference_new_table +----------------------+--------------------+---------------------+--------------------+--------------+--------------------------------+--------------------+---------------+----------------------------+---------------------------- + after_ins_stmt_trig | INSERT | public | main_table | 1 | | STATEMENT | AFTER | | + after_upd_row_trig | UPDATE | public | main_table | 1 | | ROW | AFTER | | + after_upd_stmt_trig | UPDATE | public | main_table | 1 | | STATEMENT | AFTER | | + before_ins_stmt_trig | INSERT | public | main_table | 1 | | STATEMENT | BEFORE | | + delete_a | DELETE | public | main_table | 1 | (old.a = 123) | ROW | AFTER | | + delete_when | DELETE | public | main_table | 1 | true | STATEMENT | AFTER | | + insert_a | INSERT | public | main_table | 1 | (new.a = 123) | ROW | AFTER | | + insert_when | INSERT | public | main_table | 2 | true | STATEMENT | BEFORE | | + modified_a | UPDATE | public | main_table | 1 | (old.a <> new.a) | ROW | BEFORE | | + modified_any | UPDATE | public | main_table | 2 | (old.* IS DISTINCT FROM new.*) | ROW | BEFORE | | +(10 rows) + INSERT INTO main_table (a) VALUES (123), (456); NOTICE: trigger_func(before_ins_stmt) 
called: action = INSERT, when = BEFORE, level = STATEMENT NOTICE: trigger_func(insert_when) called: action = INSERT, when = BEFORE, level = STATEMENT @@ -1991,6 +2027,33 @@ create trigger child3_update_trig create trigger child3_delete_trig after delete on child3 referencing old table as old_table for each statement execute procedure dump_delete(); +SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, + action_order, action_condition, action_orientation, action_timing, + action_reference_old_table, action_reference_new_table + FROM information_schema.triggers ORDER BY 1, 2; + trigger_name | event_manipulation | event_object_schema | event_object_table | action_order | action_condition | action_orientation | action_timing | action_reference_old_table | action_reference_new_table +------------------------+--------------------+---------------------+--------------------+--------------+------------------+--------------------+---------------+----------------------------+---------------------------- + after_ins_stmt_trig | INSERT | public | main_table | 1 | | STATEMENT | AFTER | | + after_upd_a_b_row_trig | UPDATE | public | main_table | 1 | | ROW | AFTER | | + after_upd_b_row_trig | UPDATE | public | main_table | 2 | | ROW | AFTER | | + after_upd_b_stmt_trig | UPDATE | public | main_table | 1 | | STATEMENT | AFTER | | + after_upd_stmt_trig | UPDATE | public | main_table | 2 | | STATEMENT | AFTER | | + before_ins_stmt_trig | INSERT | public | main_table | 1 | | STATEMENT | BEFORE | | + before_upd_a_stmt_trig | UPDATE | public | main_table | 1 | | STATEMENT | BEFORE | | + child1_delete_trig | DELETE | public | child1 | 1 | | STATEMENT | AFTER | old_table | + child1_insert_trig | INSERT | public | child1 | 1 | | STATEMENT | AFTER | | new_table + child1_update_trig | UPDATE | public | child1 | 1 | | STATEMENT | AFTER | old_table | new_table + child2_delete_trig | DELETE | public | child2 | 1 | | STATEMENT | AFTER | old_table | + child2_insert_trig | INSERT | public | child2 | 1 | | STATEMENT | AFTER | | new_table + child2_update_trig | UPDATE | public | child2 | 1 | | STATEMENT | AFTER | old_table | new_table + child3_delete_trig | DELETE | public | child3 | 1 | | STATEMENT | AFTER | old_table | + child3_insert_trig | INSERT | public | child3 | 1 | | STATEMENT | AFTER | | new_table + child3_update_trig | UPDATE | public | child3 | 1 | | STATEMENT | AFTER | old_table | new_table + parent_delete_trig | DELETE | public | parent | 1 | | STATEMENT | AFTER | old_table | + parent_insert_trig | INSERT | public | parent | 1 | | STATEMENT | AFTER | | new_table + parent_update_trig | UPDATE | public | parent | 1 | | STATEMENT | AFTER | old_table | new_table +(19 rows) + -- insert directly into children sees respective child-format tuples insert into child1 values ('AAA', 42); NOTICE: trigger = child1_insert_trig, new table = (AAA,42) diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index 47b5bde390..7abebda459 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -92,6 +92,11 @@ delete from pkeys where pkey1 = 40 and pkey2 = '4'; update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 50 and pkey2 = '5'; update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 = '1'; +SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, + action_order, action_condition, action_orientation, action_timing, + action_reference_old_table, action_reference_new_table + FROM 
information_schema.triggers ORDER BY 1, 2; + DROP TABLE pkeys; DROP TABLE fkeys; DROP TABLE fkeys2; @@ -279,6 +284,10 @@ CREATE TRIGGER insert_when BEFORE INSERT ON main_table FOR EACH STATEMENT WHEN (true) EXECUTE PROCEDURE trigger_func('insert_when'); CREATE TRIGGER delete_when AFTER DELETE ON main_table FOR EACH STATEMENT WHEN (true) EXECUTE PROCEDURE trigger_func('delete_when'); +SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, + action_order, action_condition, action_orientation, action_timing, + action_reference_old_table, action_reference_new_table + FROM information_schema.triggers ORDER BY 1, 2; INSERT INTO main_table (a) VALUES (123), (456); COPY main_table FROM stdin; 123 999 @@ -1472,6 +1481,11 @@ create trigger child3_delete_trig after delete on child3 referencing old table as old_table for each statement execute procedure dump_delete(); +SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, + action_order, action_condition, action_orientation, action_timing, + action_reference_old_table, action_reference_new_table + FROM information_schema.triggers ORDER BY 1, 2; + -- insert directly into children sees respective child-format tuples insert into child1 values ('AAA', 42); insert into child2 values ('BBB', 42); From 7c44b75a2a0705bf17d0e7ef02b1a0a769306fa5 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Wed, 7 Feb 2018 14:57:19 -0500 Subject: [PATCH 0964/1087] Make new triggers tests more robust Add explicit collation on the trigger name to avoid locale dependencies. Also restrict the tables selected, to avoid interference from concurrently running tests. --- src/test/regress/expected/triggers.out | 49 +++++++++++++------------- src/test/regress/sql/triggers.sql | 12 +++++-- 2 files changed, 33 insertions(+), 28 deletions(-) diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 83978b610e..98db323337 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -99,7 +99,9 @@ NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, action_order, action_condition, action_orientation, action_timing, action_reference_old_table, action_reference_new_table - FROM information_schema.triggers ORDER BY 1, 2; + FROM information_schema.triggers + WHERE event_object_table in ('pkeys', 'fkeys', 'fkeys2') + ORDER BY trigger_name COLLATE "C", 2; trigger_name | event_manipulation | event_object_schema | event_object_table | action_order | action_condition | action_orientation | action_timing | action_reference_old_table | action_reference_new_table ----------------------------+--------------------+---------------------+--------------------+--------------+------------------+--------------------+---------------+----------------------------+---------------------------- check_fkeys2_fkey_restrict | DELETE | public | fkeys2 | 1 | | ROW | BEFORE | | @@ -368,7 +370,9 @@ FOR EACH STATEMENT WHEN (true) EXECUTE PROCEDURE trigger_func('delete_when'); SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, action_order, action_condition, action_orientation, action_timing, action_reference_old_table, action_reference_new_table - FROM information_schema.triggers ORDER BY 1, 2; + FROM information_schema.triggers + WHERE event_object_table IN ('main_table') + ORDER BY trigger_name COLLATE "C", 2; trigger_name | event_manipulation | 
event_object_schema | event_object_table | action_order | action_condition | action_orientation | action_timing | action_reference_old_table | action_reference_new_table ----------------------+--------------------+---------------------+--------------------+--------------+--------------------------------+--------------------+---------------+----------------------------+---------------------------- after_ins_stmt_trig | INSERT | public | main_table | 1 | | STATEMENT | AFTER | | @@ -2030,29 +2034,24 @@ create trigger child3_delete_trig SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, action_order, action_condition, action_orientation, action_timing, action_reference_old_table, action_reference_new_table - FROM information_schema.triggers ORDER BY 1, 2; - trigger_name | event_manipulation | event_object_schema | event_object_table | action_order | action_condition | action_orientation | action_timing | action_reference_old_table | action_reference_new_table -------------------------+--------------------+---------------------+--------------------+--------------+------------------+--------------------+---------------+----------------------------+---------------------------- - after_ins_stmt_trig | INSERT | public | main_table | 1 | | STATEMENT | AFTER | | - after_upd_a_b_row_trig | UPDATE | public | main_table | 1 | | ROW | AFTER | | - after_upd_b_row_trig | UPDATE | public | main_table | 2 | | ROW | AFTER | | - after_upd_b_stmt_trig | UPDATE | public | main_table | 1 | | STATEMENT | AFTER | | - after_upd_stmt_trig | UPDATE | public | main_table | 2 | | STATEMENT | AFTER | | - before_ins_stmt_trig | INSERT | public | main_table | 1 | | STATEMENT | BEFORE | | - before_upd_a_stmt_trig | UPDATE | public | main_table | 1 | | STATEMENT | BEFORE | | - child1_delete_trig | DELETE | public | child1 | 1 | | STATEMENT | AFTER | old_table | - child1_insert_trig | INSERT | public | child1 | 1 | | STATEMENT | AFTER | | new_table - child1_update_trig | UPDATE | public | child1 | 1 | | STATEMENT | AFTER | old_table | new_table - child2_delete_trig | DELETE | public | child2 | 1 | | STATEMENT | AFTER | old_table | - child2_insert_trig | INSERT | public | child2 | 1 | | STATEMENT | AFTER | | new_table - child2_update_trig | UPDATE | public | child2 | 1 | | STATEMENT | AFTER | old_table | new_table - child3_delete_trig | DELETE | public | child3 | 1 | | STATEMENT | AFTER | old_table | - child3_insert_trig | INSERT | public | child3 | 1 | | STATEMENT | AFTER | | new_table - child3_update_trig | UPDATE | public | child3 | 1 | | STATEMENT | AFTER | old_table | new_table - parent_delete_trig | DELETE | public | parent | 1 | | STATEMENT | AFTER | old_table | - parent_insert_trig | INSERT | public | parent | 1 | | STATEMENT | AFTER | | new_table - parent_update_trig | UPDATE | public | parent | 1 | | STATEMENT | AFTER | old_table | new_table -(19 rows) + FROM information_schema.triggers + WHERE event_object_table IN ('parent', 'child1', 'child2', 'child3') + ORDER BY trigger_name COLLATE "C", 2; + trigger_name | event_manipulation | event_object_schema | event_object_table | action_order | action_condition | action_orientation | action_timing | action_reference_old_table | action_reference_new_table +--------------------+--------------------+---------------------+--------------------+--------------+------------------+--------------------+---------------+----------------------------+---------------------------- + child1_delete_trig | DELETE | public | child1 | 1 | | STATEMENT | AFTER | 
old_table | + child1_insert_trig | INSERT | public | child1 | 1 | | STATEMENT | AFTER | | new_table + child1_update_trig | UPDATE | public | child1 | 1 | | STATEMENT | AFTER | old_table | new_table + child2_delete_trig | DELETE | public | child2 | 1 | | STATEMENT | AFTER | old_table | + child2_insert_trig | INSERT | public | child2 | 1 | | STATEMENT | AFTER | | new_table + child2_update_trig | UPDATE | public | child2 | 1 | | STATEMENT | AFTER | old_table | new_table + child3_delete_trig | DELETE | public | child3 | 1 | | STATEMENT | AFTER | old_table | + child3_insert_trig | INSERT | public | child3 | 1 | | STATEMENT | AFTER | | new_table + child3_update_trig | UPDATE | public | child3 | 1 | | STATEMENT | AFTER | old_table | new_table + parent_delete_trig | DELETE | public | parent | 1 | | STATEMENT | AFTER | old_table | + parent_insert_trig | INSERT | public | parent | 1 | | STATEMENT | AFTER | | new_table + parent_update_trig | UPDATE | public | parent | 1 | | STATEMENT | AFTER | old_table | new_table +(12 rows) -- insert directly into children sees respective child-format tuples insert into child1 values ('AAA', 42); diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index 7abebda459..dba9bdd98b 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -95,7 +95,9 @@ update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 = '1'; SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, action_order, action_condition, action_orientation, action_timing, action_reference_old_table, action_reference_new_table - FROM information_schema.triggers ORDER BY 1, 2; + FROM information_schema.triggers + WHERE event_object_table in ('pkeys', 'fkeys', 'fkeys2') + ORDER BY trigger_name COLLATE "C", 2; DROP TABLE pkeys; DROP TABLE fkeys; @@ -287,7 +289,9 @@ FOR EACH STATEMENT WHEN (true) EXECUTE PROCEDURE trigger_func('delete_when'); SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, action_order, action_condition, action_orientation, action_timing, action_reference_old_table, action_reference_new_table - FROM information_schema.triggers ORDER BY 1, 2; + FROM information_schema.triggers + WHERE event_object_table IN ('main_table') + ORDER BY trigger_name COLLATE "C", 2; INSERT INTO main_table (a) VALUES (123), (456); COPY main_table FROM stdin; 123 999 @@ -1484,7 +1488,9 @@ create trigger child3_delete_trig SELECT trigger_name, event_manipulation, event_object_schema, event_object_table, action_order, action_condition, action_orientation, action_timing, action_reference_old_table, action_reference_new_table - FROM information_schema.triggers ORDER BY 1, 2; + FROM information_schema.triggers + WHERE event_object_table IN ('parent', 'child1', 'child2', 'child3') + ORDER BY trigger_name COLLATE "C", 2; -- insert directly into children sees respective child-format tuples insert into child1 values ('AAA', 42); From 1bc0100d270e5bcc980a0629b8726a32a497e788 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 7 Feb 2018 15:34:30 -0500 Subject: [PATCH 0965/1087] postgres_fdw: Push down UPDATE/DELETE joins to remote servers. Commit 0bf3ae88af330496517722e391e7c975e6bad219 allowed direct foreign table modification; instead of fetching each row, updating it locally, and then pushing the modification back to the remote side, we would instead do all the work on the remote server via a single remote UPDATE or DELETE command. 
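As a minimal sketch of the two statement shapes at issue, assuming hypothetical
foreign tables ft1 and ft2 attached to the same foreign server (the first case
is what the earlier commit could already push down; the second is the joined
case this commit adds):

    -- single-table case: already executed as one remote UPDATE
    UPDATE ft1 SET c2 = c2 + 1 WHERE c1 < 10;

    -- UPDATE with a FROM clause joining another foreign table on the
    -- same server: with this commit, also executed as a single remote
    -- UPDATE when the join clauses are pushdown-safe
    UPDATE ft1 SET c2 = ft2.c2 FROM ft2 WHERE ft1.c1 = ft2.c1;
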
However, that commit only enabled this optimization when the join tree consisted only of the target table. This change allows the same optimization when an UPDATE statement has a FROM clause or a DELETE statement has a USING clause. This works much like ordinary foreign join pushdown, in that the tables must be on the same remote server, relevant parts of the query must be pushdown-safe, and so forth. Etsuro Fujita, reviewed by Ashutosh Bapat, Rushabh Lathia, and me. Some formatting corrections by me. Discussion: http://postgr.es/m/5A57193A.2080003@lab.ntt.co.jp Discussion: http://postgr.es/m/b9cee735-62f8-6c07-7528-6364ce9347d0@lab.ntt.co.jp --- contrib/postgres_fdw/deparse.c | 218 +++++++-- .../postgres_fdw/expected/postgres_fdw.out | 241 ++++++++-- contrib/postgres_fdw/postgres_fdw.c | 438 +++++++++++++++++- contrib/postgres_fdw/postgres_fdw.h | 2 + contrib/postgres_fdw/sql/postgres_fdw.sql | 56 ++- 5 files changed, 863 insertions(+), 92 deletions(-) diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c index e111b09c7c..32c7261dae 100644 --- a/contrib/postgres_fdw/deparse.c +++ b/contrib/postgres_fdw/deparse.c @@ -132,7 +132,9 @@ static void deparseTargetList(StringInfo buf, Bitmapset *attrs_used, bool qualify_col, List **retrieved_attrs); -static void deparseExplicitTargetList(List *tlist, List **retrieved_attrs, +static void deparseExplicitTargetList(List *tlist, + bool is_returning, + List **retrieved_attrs, deparse_expr_cxt *context); static void deparseSubqueryTargetList(deparse_expr_cxt *context); static void deparseReturningList(StringInfo buf, PlannerInfo *root, @@ -168,11 +170,13 @@ static void deparseLockingClause(deparse_expr_cxt *context); static void appendOrderByClause(List *pathkeys, deparse_expr_cxt *context); static void appendConditions(List *exprs, deparse_expr_cxt *context); static void deparseFromExprForRel(StringInfo buf, PlannerInfo *root, - RelOptInfo *joinrel, bool use_alias, List **params_list); + RelOptInfo *foreignrel, bool use_alias, + Index ignore_rel, List **ignore_conds, + List **params_list); static void deparseFromExpr(List *quals, deparse_expr_cxt *context); static void deparseRangeTblRef(StringInfo buf, PlannerInfo *root, RelOptInfo *foreignrel, bool make_subquery, - List **params_list); + Index ignore_rel, List **ignore_conds, List **params_list); static void deparseAggref(Aggref *node, deparse_expr_cxt *context); static void appendGroupByClause(List *tlist, deparse_expr_cxt *context); static void appendAggOrderBy(List *orderList, List *targetList, @@ -1028,7 +1032,7 @@ deparseSelectSql(List *tlist, bool is_subquery, List **retrieved_attrs, * For a join or upper relation the input tlist gives the list of * columns required to be fetched from the foreign server. */ - deparseExplicitTargetList(tlist, retrieved_attrs, context); + deparseExplicitTargetList(tlist, false, retrieved_attrs, context); } else { @@ -1071,7 +1075,7 @@ deparseFromExpr(List *quals, deparse_expr_cxt *context) appendStringInfoString(buf, " FROM "); deparseFromExprForRel(buf, context->root, scanrel, (bms_num_members(scanrel->relids) > 1), - context->params_list); + (Index) 0, NULL, context->params_list); /* Construct WHERE clause */ if (quals != NIL) @@ -1340,9 +1344,14 @@ get_jointype_name(JoinType jointype) * * retrieved_attrs is the list of continuously increasing integers starting * from 1. It has same number of entries as tlist.
+ * + * This is used for both SELECT and RETURNING targetlists; the is_returning + * parameter is true only for a RETURNING targetlist. */ static void -deparseExplicitTargetList(List *tlist, List **retrieved_attrs, +deparseExplicitTargetList(List *tlist, + bool is_returning, + List **retrieved_attrs, deparse_expr_cxt *context) { ListCell *lc; @@ -1357,13 +1366,16 @@ deparseExplicitTargetList(List *tlist, List **retrieved_attrs, if (i > 0) appendStringInfoString(buf, ", "); + else if (is_returning) + appendStringInfoString(buf, " RETURNING "); + deparseExpr((Expr *) tle->expr, context); *retrieved_attrs = lappend_int(*retrieved_attrs, i + 1); i++; } - if (i == 0) + if (i == 0 && !is_returning) appendStringInfoString(buf, "NULL"); } @@ -1406,10 +1418,17 @@ deparseSubqueryTargetList(deparse_expr_cxt *context) * The function constructs ... JOIN ... ON ... for join relation. For a base * relation it just returns schema-qualified tablename, with the appropriate * alias if so requested. + * + * 'ignore_rel' is either zero or the RT index of a target relation. In the + * latter case the function constructs FROM clause of UPDATE or USING clause + * of DELETE; it deparses the join relation as if the relation never contained + * the target relation, and creates a List of conditions to be deparsed into + * the top-level WHERE clause, which is returned to *ignore_conds. */ static void deparseFromExprForRel(StringInfo buf, PlannerInfo *root, RelOptInfo *foreignrel, - bool use_alias, List **params_list) + bool use_alias, Index ignore_rel, List **ignore_conds, + List **params_list) { PgFdwRelationInfo *fpinfo = (PgFdwRelationInfo *) foreignrel->fdw_private; @@ -1417,16 +1436,89 @@ deparseFromExprForRel(StringInfo buf, PlannerInfo *root, RelOptInfo *foreignrel, { StringInfoData join_sql_o; StringInfoData join_sql_i; + RelOptInfo *outerrel = fpinfo->outerrel; + RelOptInfo *innerrel = fpinfo->innerrel; + bool outerrel_is_target = false; + bool innerrel_is_target = false; - /* Deparse outer relation */ - initStringInfo(&join_sql_o); - deparseRangeTblRef(&join_sql_o, root, fpinfo->outerrel, - fpinfo->make_outerrel_subquery, params_list); + if (ignore_rel > 0 && bms_is_member(ignore_rel, foreignrel->relids)) + { + /* + * If this is an inner join, add joinclauses to *ignore_conds and + * set it to empty so that those can be deparsed into the WHERE + * clause. Note that since the target relation can never be + * within the nullable side of an outer join, those could safely + * be pulled up into the WHERE clause (see foreign_join_ok()). + * Note also that since the target relation is only inner-joined + * to any other relation in the query, all conditions in the join + * tree mentioning the target relation could be deparsed into the + * WHERE clause by doing this recursively. + */ + if (fpinfo->jointype == JOIN_INNER) + { + *ignore_conds = list_concat(*ignore_conds, + list_copy(fpinfo->joinclauses)); + fpinfo->joinclauses = NIL; + } - /* Deparse inner relation */ - initStringInfo(&join_sql_i); - deparseRangeTblRef(&join_sql_i, root, fpinfo->innerrel, - fpinfo->make_innerrel_subquery, params_list); + /* + * Check if either of the input relations is the target relation. + */ + if (outerrel->relid == ignore_rel) + outerrel_is_target = true; + else if (innerrel->relid == ignore_rel) + innerrel_is_target = true; + } + + /* Deparse outer relation if not the target relation. 
*/ + if (!outerrel_is_target) + { + initStringInfo(&join_sql_o); + deparseRangeTblRef(&join_sql_o, root, outerrel, + fpinfo->make_outerrel_subquery, + ignore_rel, ignore_conds, params_list); + + /* + * If inner relation is the target relation, skip deparsing it. + * Note that since the join of the target relation with any other + * relation in the query is an inner join and can never be within + * the nullable side of an outer join, the join could be + * interchanged with higher-level joins (cf. identity 1 on outer + * join reordering shown in src/backend/optimizer/README), which + * means it's safe to skip the target-relation deparsing here. + */ + if (innerrel_is_target) + { + Assert(fpinfo->jointype == JOIN_INNER); + Assert(fpinfo->joinclauses == NIL); + appendStringInfo(buf, "%s", join_sql_o.data); + return; + } + } + + /* Deparse inner relation if not the target relation. */ + if (!innerrel_is_target) + { + initStringInfo(&join_sql_i); + deparseRangeTblRef(&join_sql_i, root, innerrel, + fpinfo->make_innerrel_subquery, + ignore_rel, ignore_conds, params_list); + + /* + * If outer relation is the target relation, skip deparsing it. + * See the above note about safety. + */ + if (outerrel_is_target) + { + Assert(fpinfo->jointype == JOIN_INNER); + Assert(fpinfo->joinclauses == NIL); + appendStringInfo(buf, "%s", join_sql_i.data); + return; + } + } + + /* Neither of the relations is the target relation. */ + Assert(!outerrel_is_target && !innerrel_is_target); /* * For a join relation FROM clause entry is deparsed as @@ -1486,7 +1578,8 @@ deparseFromExprForRel(StringInfo buf, PlannerInfo *root, RelOptInfo *foreignrel, */ static void deparseRangeTblRef(StringInfo buf, PlannerInfo *root, RelOptInfo *foreignrel, - bool make_subquery, List **params_list) + bool make_subquery, Index ignore_rel, List **ignore_conds, + List **params_list) { PgFdwRelationInfo *fpinfo = (PgFdwRelationInfo *) foreignrel->fdw_private; @@ -1501,6 +1594,14 @@ deparseRangeTblRef(StringInfo buf, PlannerInfo *root, RelOptInfo *foreignrel, List *retrieved_attrs; int ncols; + /* + * The given relation shouldn't contain the target relation, because + * this should only happen for input relations for a full join, and + * such relations can never contain an UPDATE/DELETE target. + */ + Assert(ignore_rel == 0 || + !bms_is_member(ignore_rel, foreignrel->relids)); + /* Deparse the subquery representing the relation. */ appendStringInfoChar(buf, '('); deparseSelectStmtForRel(buf, root, foreignrel, NIL, @@ -1534,7 +1635,8 @@ deparseRangeTblRef(StringInfo buf, PlannerInfo *root, RelOptInfo *foreignrel, } } else - deparseFromExprForRel(buf, root, foreignrel, true, params_list); + deparseFromExprForRel(buf, root, foreignrel, true, ignore_rel, + ignore_conds, params_list); } /* @@ -1645,13 +1747,23 @@ deparseUpdateSql(StringInfo buf, PlannerInfo *root, /* * deparse remote UPDATE statement * - * The statement text is appended to buf, and we also create an integer List - * of the columns being retrieved by RETURNING (if any), which is returned - * to *retrieved_attrs. 
+ * 'buf' is the output buffer to append the statement to + * 'rtindex' is the RT index of the associated target relation + * 'rel' is the relation descriptor for the target relation + * 'foreignrel' is the RelOptInfo for the target relation or the join relation + * containing all base relations in the query + * 'targetlist' is the tlist of the underlying foreign-scan plan node + * 'targetAttrs' is the target columns of the UPDATE + * 'remote_conds' is the qual clauses that must be evaluated remotely + * '*params_list' is an output list of exprs that will become remote Params + * 'returningList' is the RETURNING targetlist + * '*retrieved_attrs' is an output list of integers of columns being retrieved + * by RETURNING (if any) */ void deparseDirectUpdateSql(StringInfo buf, PlannerInfo *root, Index rtindex, Relation rel, + RelOptInfo *foreignrel, List *targetlist, List *targetAttrs, List *remote_conds, @@ -1659,7 +1771,6 @@ deparseDirectUpdateSql(StringInfo buf, PlannerInfo *root, List *returningList, List **retrieved_attrs) { - RelOptInfo *baserel = root->simple_rel_array[rtindex]; deparse_expr_cxt context; int nestlevel; bool first; @@ -1667,13 +1778,15 @@ deparseDirectUpdateSql(StringInfo buf, PlannerInfo *root, /* Set up context struct for recursion */ context.root = root; - context.foreignrel = baserel; - context.scanrel = baserel; + context.foreignrel = foreignrel; + context.scanrel = foreignrel; context.buf = buf; context.params_list = params_list; appendStringInfoString(buf, "UPDATE "); deparseRelation(buf, rel); + if (foreignrel->reloptkind == RELOPT_JOINREL) + appendStringInfo(buf, " %s%d", REL_ALIAS_PREFIX, rtindex); appendStringInfoString(buf, " SET "); /* Make sure any constants in the exprs are printed portably */ @@ -1700,14 +1813,28 @@ deparseDirectUpdateSql(StringInfo buf, PlannerInfo *root, reset_transmission_modes(nestlevel); + if (foreignrel->reloptkind == RELOPT_JOINREL) + { + List *ignore_conds = NIL; + + appendStringInfo(buf, " FROM "); + deparseFromExprForRel(buf, root, foreignrel, true, rtindex, + &ignore_conds, params_list); + remote_conds = list_concat(remote_conds, ignore_conds); + } + if (remote_conds) { appendStringInfoString(buf, " WHERE "); appendConditions(remote_conds, &context); } - deparseReturningList(buf, root, rtindex, rel, false, - returningList, retrieved_attrs); + if (foreignrel->reloptkind == RELOPT_JOINREL) + deparseExplicitTargetList(returningList, true, retrieved_attrs, + &context); + else + deparseReturningList(buf, root, rtindex, rel, false, + returningList, retrieved_attrs); } /* @@ -1735,30 +1862,49 @@ deparseDeleteSql(StringInfo buf, PlannerInfo *root, /* * deparse remote DELETE statement * - * The statement text is appended to buf, and we also create an integer List - * of the columns being retrieved by RETURNING (if any), which is returned - * to *retrieved_attrs. 
+ * 'buf' is the output buffer to append the statement to + * 'rtindex' is the RT index of the associated target relation + * 'rel' is the relation descriptor for the target relation + * 'foreignrel' is the RelOptInfo for the target relation or the join relation + * containing all base relations in the query + * 'remote_conds' is the qual clauses that must be evaluated remotely + * '*params_list' is an output list of exprs that will become remote Params + * 'returningList' is the RETURNING targetlist + * '*retrieved_attrs' is an output list of integers of columns being retrieved + * by RETURNING (if any) */ void deparseDirectDeleteSql(StringInfo buf, PlannerInfo *root, Index rtindex, Relation rel, + RelOptInfo *foreignrel, List *remote_conds, List **params_list, List *returningList, List **retrieved_attrs) { - RelOptInfo *baserel = root->simple_rel_array[rtindex]; deparse_expr_cxt context; /* Set up context struct for recursion */ context.root = root; - context.foreignrel = baserel; - context.scanrel = baserel; + context.foreignrel = foreignrel; + context.scanrel = foreignrel; context.buf = buf; context.params_list = params_list; appendStringInfoString(buf, "DELETE FROM "); deparseRelation(buf, rel); + if (foreignrel->reloptkind == RELOPT_JOINREL) + appendStringInfo(buf, " %s%d", REL_ALIAS_PREFIX, rtindex); + + if (foreignrel->reloptkind == RELOPT_JOINREL) + { + List *ignore_conds = NIL; + + appendStringInfo(buf, " USING "); + deparseFromExprForRel(buf, root, foreignrel, true, rtindex, + &ignore_conds, params_list); + remote_conds = list_concat(remote_conds, ignore_conds); + } if (remote_conds) { @@ -1766,8 +1912,12 @@ deparseDirectDeleteSql(StringInfo buf, PlannerInfo *root, appendConditions(remote_conds, &context); } - deparseReturningList(buf, root, rtindex, rel, false, - returningList, retrieved_attrs); + if (foreignrel->reloptkind == RELOPT_JOINREL) + deparseExplicitTargetList(returningList, true, retrieved_attrs, + &context); + else + deparseReturningList(buf, root, rtindex, rel, false, + returningList, retrieved_attrs); } /* diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 5e1f44041c..885a45b0df 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -4399,27 +4399,13 @@ UPDATE ft2 SET c2 = c2 + 400, c3 = c3 || '_update7' WHERE c1 % 10 = 7 RETURNING EXPLAIN (verbose, costs off) UPDATE ft2 SET c2 = ft2.c2 + 500, c3 = ft2.c3 || '_update9', c7 = DEFAULT - FROM ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 9; -- can't be pushed down - QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + FROM ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 9; -- can be pushed down + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Update on public.ft2 - Remote SQL: UPDATE "S 1"."T 1" SET c2 = $2, c3 = $3, c7 = $4 WHERE ctid = $1 - -> Foreign Scan - Output: ft2.c1, (ft2.c2 + 500), NULL::integer, (ft2.c3 || '_update9'::text), ft2.c4, ft2.c5, ft2.c6, 'ft2 '::character(10), ft2.c8, ft2.ctid, ft1.* - Relations: (public.ft2) INNER JOIN (public.ft1) - 
Remote SQL: SELECT r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c8, r1.ctid, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1.c2 = r2."C 1")) AND (((r2."C 1" % 10) = 9)))) FOR UPDATE OF r1 - -> Hash Join - Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c8, ft2.ctid, ft1.* - Hash Cond: (ft2.c2 = ft1.c1) - -> Foreign Scan on public.ft2 - Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c8, ft2.ctid - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c8, ctid FROM "S 1"."T 1" FOR UPDATE - -> Hash - Output: ft1.*, ft1.c1 - -> Foreign Scan on public.ft1 - Output: ft1.*, ft1.c1 - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE ((("C 1" % 10) = 9)) -(17 rows) + -> Foreign Update + Remote SQL: UPDATE "S 1"."T 1" r1 SET c2 = (r1.c2 + 500), c3 = (r1.c3 || '_update9'::text), c7 = 'ft2 '::character(10) FROM "S 1"."T 1" r2 WHERE ((r1.c2 = r2."C 1")) AND (((r2."C 1" % 10) = 9)) +(3 rows) UPDATE ft2 SET c2 = ft2.c2 + 500, c3 = ft2.c3 || '_update9', c7 = DEFAULT FROM ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 9; @@ -4542,27 +4528,13 @@ DELETE FROM ft2 WHERE c1 % 10 = 5 RETURNING c1, c4; (103 rows) EXPLAIN (verbose, costs off) -DELETE FROM ft2 USING ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 2; -- can't be pushed down - QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +DELETE FROM ft2 USING ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 2; -- can be pushed down + QUERY PLAN +---------------------------------------------------------------------------------------------------------------------------- Delete on public.ft2 - Remote SQL: DELETE FROM "S 1"."T 1" WHERE ctid = $1 - -> Foreign Scan - Output: ft2.ctid, ft1.* - Relations: (public.ft2) INNER JOIN (public.ft1) - Remote SQL: SELECT r1.ctid, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2."C 1", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7, r2.c8) END FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1.c2 = r2."C 1")) AND (((r2."C 1" % 10) = 2)))) FOR UPDATE OF r1 - -> Hash Join - Output: ft2.ctid, ft1.* - Hash Cond: (ft2.c2 = ft1.c1) - -> Foreign Scan on public.ft2 - Output: ft2.ctid, ft2.c2 - Remote SQL: SELECT c2, ctid FROM "S 1"."T 1" FOR UPDATE - -> Hash - Output: ft1.*, ft1.c1 - -> Foreign Scan on public.ft1 - Output: ft1.*, ft1.c1 - Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE ((("C 1" % 10) = 2)) -(17 rows) + -> Foreign Delete + Remote SQL: DELETE FROM "S 1"."T 1" r1 USING "S 1"."T 1" r2 WHERE ((r1.c2 = r2."C 1")) AND (((r2."C 1" % 10) = 2)) +(3 rows) DELETE FROM ft2 USING ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 2; SELECT c1,c2,c3,c4 FROM ft2 ORDER BY c1; @@ -5438,6 +5410,195 @@ DELETE FROM ft2 WHERE c1 = 9999 RETURNING tableoid::regclass; ft2 (1 row) +-- Test UPDATE/DELETE with RETURNING on a three-table join +INSERT INTO ft2 (c1,c2,c3) + SELECT id, id - 1200, to_char(id, 'FM00000') FROM generate_series(1201, 1300) id; +EXPLAIN (verbose, costs off) +UPDATE ft2 SET c3 = 'foo' + FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 1200 AND ft2.c2 = ft4.c1 + RETURNING ft2.ctid, ft2, ft2.*, ft4.ctid, ft4, ft4.*; -- can be pushed down + QUERY PLAN 
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + Update on public.ft2 + Output: ft2.ctid, ft2.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.ctid, ft4.*, ft4.c1, ft4.c2, ft4.c3 + -> Foreign Update + Remote SQL: UPDATE "S 1"."T 1" r1 SET c3 = 'foo'::text FROM ("S 1"."T 3" r2 INNER JOIN "S 1"."T 4" r3 ON (TRUE)) WHERE ((r2.c1 = r3.c1)) AND ((r1.c2 = r2.c1)) AND ((r1."C 1" > 1200)) RETURNING r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8, r1.ctid, r2.ctid, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, r2.c3) END, r2.c1, r2.c2, r2.c3 +(4 rows) + +UPDATE ft2 SET c3 = 'foo' + FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 1200 AND ft2.c2 = ft4.c1 + RETURNING ft2.ctid, ft2, ft2.*, ft4.ctid, ft4, ft4.*; + ctid | ft2 | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | ctid | ft4 | c1 | c2 | c3 +----------+--------------------------------+------+----+-----+----+----+----+------------+----+--------+----------------+----+----+-------- + (12,102) | (1206,6,foo,,,,"ft2 ",) | 1206 | 6 | foo | | | | ft2 | | (0,6) | (6,7,AAA006) | 6 | 7 | AAA006 + (12,103) | (1212,12,foo,,,,"ft2 ",) | 1212 | 12 | foo | | | | ft2 | | (0,12) | (12,13,AAA012) | 12 | 13 | AAA012 + (12,104) | (1218,18,foo,,,,"ft2 ",) | 1218 | 18 | foo | | | | ft2 | | (0,18) | (18,19,AAA018) | 18 | 19 | AAA018 + (12,105) | (1224,24,foo,,,,"ft2 ",) | 1224 | 24 | foo | | | | ft2 | | (0,24) | (24,25,AAA024) | 24 | 25 | AAA024 + (12,106) | (1230,30,foo,,,,"ft2 ",) | 1230 | 30 | foo | | | | ft2 | | (0,30) | (30,31,AAA030) | 30 | 31 | AAA030 + (12,107) | (1236,36,foo,,,,"ft2 ",) | 1236 | 36 | foo | | | | ft2 | | (0,36) | (36,37,AAA036) | 36 | 37 | AAA036 + (12,108) | (1242,42,foo,,,,"ft2 ",) | 1242 | 42 | foo | | | | ft2 | | (0,42) | (42,43,AAA042) | 42 | 43 | AAA042 + (12,109) | (1248,48,foo,,,,"ft2 ",) | 1248 | 48 | foo | | | | ft2 | | (0,48) | (48,49,AAA048) | 48 | 49 | AAA048 + (12,110) | (1254,54,foo,,,,"ft2 ",) | 1254 | 54 | foo | | | | ft2 | | (0,54) | (54,55,AAA054) | 54 | 55 | AAA054 + (12,111) | (1260,60,foo,,,,"ft2 ",) | 1260 | 60 | foo | | | | ft2 | | (0,60) | (60,61,AAA060) | 60 | 61 | AAA060 + (12,112) | (1266,66,foo,,,,"ft2 ",) | 1266 | 66 | foo | | | | ft2 | | (0,66) | (66,67,AAA066) | 66 | 67 | AAA066 + (12,113) | (1272,72,foo,,,,"ft2 ",) | 1272 | 72 | foo | | | | ft2 | | (0,72) | (72,73,AAA072) | 72 | 73 | AAA072 + (12,114) | (1278,78,foo,,,,"ft2 ",) | 1278 | 78 | foo | | | | ft2 | | (0,78) | (78,79,AAA078) | 78 | 79 | AAA078 + (12,115) | (1284,84,foo,,,,"ft2 ",) | 1284 | 84 | foo | | | | ft2 | | (0,84) | (84,85,AAA084) | 84 | 85 | AAA084 + (12,116) | (1290,90,foo,,,,"ft2 ",) | 1290 | 90 | foo | | | | ft2 | | (0,90) | (90,91,AAA090) | 90 | 91 | AAA090 + (12,117) | (1296,96,foo,,,,"ft2 ",) | 1296 | 96 | foo | | | | ft2 | | (0,96) | (96,97,AAA096) | 96 | 97 | AAA096 +(16 rows) + +EXPLAIN (verbose, costs off) +DELETE FROM ft2 + USING ft4 LEFT JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 1200 AND ft2.c1 % 10 = 0 AND ft2.c2 = ft4.c1 + RETURNING 100; -- can be pushed down + QUERY PLAN 
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + Delete on public.ft2 + Output: 100 + -> Foreign Delete + Remote SQL: DELETE FROM "S 1"."T 1" r1 USING ("S 1"."T 3" r2 LEFT JOIN "S 1"."T 4" r3 ON (((r2.c1 = r3.c1)))) WHERE ((r1.c2 = r2.c1)) AND ((r1."C 1" > 1200)) AND (((r1."C 1" % 10) = 0)) +(4 rows) + +DELETE FROM ft2 + USING ft4 LEFT JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 1200 AND ft2.c1 % 10 = 0 AND ft2.c2 = ft4.c1 + RETURNING 100; + ?column? +---------- + 100 + 100 + 100 + 100 + 100 + 100 + 100 + 100 + 100 + 100 +(10 rows) + +DELETE FROM ft2 WHERE ft2.c1 > 1200; +-- Test UPDATE/DELETE with WHERE or JOIN/ON conditions containing +-- user-defined operators/functions +ALTER SERVER loopback OPTIONS (DROP extensions); +INSERT INTO ft2 (c1,c2,c3) + SELECT id, id % 10, to_char(id, 'FM00000') FROM generate_series(2001, 2010) id; +EXPLAIN (verbose, costs off) +UPDATE ft2 SET c3 = 'bar' WHERE postgres_fdw_abs(c1) > 2000 RETURNING *; -- can't be pushed down + QUERY PLAN +---------------------------------------------------------------------------------------------------------- + Update on public.ft2 + Output: c1, c2, c3, c4, c5, c6, c7, c8 + Remote SQL: UPDATE "S 1"."T 1" SET c3 = $2 WHERE ctid = $1 RETURNING "C 1", c2, c3, c4, c5, c6, c7, c8 + -> Foreign Scan on public.ft2 + Output: c1, c2, NULL::integer, 'bar'::text, c4, c5, c6, c7, c8, ctid + Filter: (postgres_fdw_abs(ft2.c1) > 2000) + Remote SQL: SELECT "C 1", c2, c4, c5, c6, c7, c8, ctid FROM "S 1"."T 1" FOR UPDATE +(7 rows) + +UPDATE ft2 SET c3 = 'bar' WHERE postgres_fdw_abs(c1) > 2000 RETURNING *; + c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 +------+----+-----+----+----+----+------------+---- + 2001 | 1 | bar | | | | ft2 | + 2002 | 2 | bar | | | | ft2 | + 2003 | 3 | bar | | | | ft2 | + 2004 | 4 | bar | | | | ft2 | + 2005 | 5 | bar | | | | ft2 | + 2006 | 6 | bar | | | | ft2 | + 2007 | 7 | bar | | | | ft2 | + 2008 | 8 | bar | | | | ft2 | + 2009 | 9 | bar | | | | ft2 | + 2010 | 0 | bar | | | | ft2 | +(10 rows) + +EXPLAIN (verbose, costs off) +UPDATE ft2 SET c3 = 'baz' + FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1 + RETURNING ft2.*, ft4.*, ft5.*; -- can't be pushed down + QUERY PLAN +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + Update on public.ft2 + Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3 + Remote SQL: UPDATE "S 1"."T 1" SET c3 = $2 WHERE ctid = $1 RETURNING "C 1", c2, c3, c4, c5, c6, c7, c8 + -> Nested Loop + Output: ft2.c1, ft2.c2, NULL::integer, 'baz'::text, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid, ft4.*, ft5.*, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3 + Join Filter: (ft2.c2 === ft4.c1) + -> Foreign Scan on public.ft2 + Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid + Remote SQL: SELECT "C 1", c2, c4, c5, c6, c7, c8, ctid FROM "S 1"."T 1" WHERE (("C 1" > 2000)) FOR UPDATE + -> Foreign Scan + Output: ft4.*, ft4.c1, ft4.c2, ft4.c3, ft5.*, ft5.c1, ft5.c2, ft5.c3 + Relations: (public.ft4) INNER JOIN (public.ft5) + Remote SQL: SELECT CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, 
r2.c3) END, r2.c1, r2.c2, r2.c3, CASE WHEN (r3.*)::text IS NOT NULL THEN ROW(r3.c1, r3.c2, r3.c3) END, r3.c1, r3.c2, r3.c3 FROM ("S 1"."T 3" r2 INNER JOIN "S 1"."T 4" r3 ON (((r2.c1 = r3.c1)))) + -> Hash Join + Output: ft4.*, ft4.c1, ft4.c2, ft4.c3, ft5.*, ft5.c1, ft5.c2, ft5.c3 + Hash Cond: (ft4.c1 = ft5.c1) + -> Foreign Scan on public.ft4 + Output: ft4.*, ft4.c1, ft4.c2, ft4.c3 + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 3" + -> Hash + Output: ft5.*, ft5.c1, ft5.c2, ft5.c3 + -> Foreign Scan on public.ft5 + Output: ft5.*, ft5.c1, ft5.c2, ft5.c3 + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4" +(24 rows) + +UPDATE ft2 SET c3 = 'baz' + FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1 + RETURNING ft2.*, ft4.*, ft5.*; + c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | c1 | c2 | c3 | c1 | c2 | c3 +------+----+-----+----+----+----+------------+----+----+----+--------+----+----+-------- + 2006 | 6 | baz | | | | ft2 | | 6 | 7 | AAA006 | 6 | 7 | AAA006 +(1 row) + +EXPLAIN (verbose, costs off) +DELETE FROM ft2 + USING ft4 INNER JOIN ft5 ON (ft4.c1 === ft5.c1) + WHERE ft2.c1 > 2000 AND ft2.c2 = ft4.c1 + RETURNING ft2.ctid, ft2.c1, ft2.c2, ft2.c3; -- can't be pushed down + QUERY PLAN +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + Delete on public.ft2 + Output: ft2.ctid, ft2.c1, ft2.c2, ft2.c3 + Remote SQL: DELETE FROM "S 1"."T 1" WHERE ctid = $1 RETURNING "C 1", c2, c3, ctid + -> Foreign Scan + Output: ft2.ctid, ft4.*, ft5.* + Filter: (ft4.c1 === ft5.c1) + Relations: ((public.ft2) INNER JOIN (public.ft4)) INNER JOIN (public.ft5) + Remote SQL: SELECT r1.ctid, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, r2.c3) END, CASE WHEN (r3.*)::text IS NOT NULL THEN ROW(r3.c1, r3.c2, r3.c3) END, r2.c1, r3.c1 FROM (("S 1"."T 1" r1 INNER JOIN "S 1"."T 3" r2 ON (((r1.c2 = r2.c1)) AND ((r1."C 1" > 2000)))) INNER JOIN "S 1"."T 4" r3 ON (TRUE)) FOR UPDATE OF r1 + -> Nested Loop + Output: ft2.ctid, ft4.*, ft5.*, ft4.c1, ft5.c1 + -> Nested Loop + Output: ft2.ctid, ft4.*, ft4.c1 + Join Filter: (ft2.c2 = ft4.c1) + -> Foreign Scan on public.ft2 + Output: ft2.ctid, ft2.c2 + Remote SQL: SELECT c2, ctid FROM "S 1"."T 1" WHERE (("C 1" > 2000)) FOR UPDATE + -> Foreign Scan on public.ft4 + Output: ft4.*, ft4.c1 + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 3" + -> Foreign Scan on public.ft5 + Output: ft5.*, ft5.c1 + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4" +(22 rows) + +DELETE FROM ft2 + USING ft4 INNER JOIN ft5 ON (ft4.c1 === ft5.c1) + WHERE ft2.c1 > 2000 AND ft2.c2 = ft4.c1 + RETURNING ft2.ctid, ft2.c1, ft2.c2, ft2.c3; + ctid | c1 | c2 | c3 +----------+------+----+----- + (12,112) | 2006 | 6 | baz +(1 row) + +DELETE FROM ft2 WHERE ft2.c1 > 2000; +ALTER SERVER loopback OPTIONS (ADD extensions 'postgres_fdw'); -- Test that trigger on remote table works as expected CREATE OR REPLACE FUNCTION "S 1".F_BRTRIG() RETURNS trigger AS $$ BEGIN diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index 7ff43337a9..c1d7f8032e 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -210,6 +210,11 @@ typedef struct PgFdwDirectModifyState PGresult *result; /* result for query */ int num_tuples; /* # of result tuples */ int 
next_tuple; /* index of next one to return */ + Relation resultRel; /* relcache entry for the target relation */ + AttrNumber *attnoMap; /* array of attnums of input user columns */ + AttrNumber ctidAttno; /* attnum of input ctid column */ + AttrNumber oidAttno; /* attnum of input oid column */ + bool hasSystemCols; /* are there system columns of resultRel? */ /* working memory context */ MemoryContext temp_cxt; /* context for per-tuple temporary data */ @@ -376,8 +381,17 @@ static const char **convert_prep_stmt_params(PgFdwModifyState *fmstate, TupleTableSlot *slot); static void store_returning_result(PgFdwModifyState *fmstate, TupleTableSlot *slot, PGresult *res); +static List *build_remote_returning(Index rtindex, Relation rel, + List *returningList); +static void rebuild_fdw_scan_tlist(ForeignScan *fscan, List *tlist); static void execute_dml_stmt(ForeignScanState *node); static TupleTableSlot *get_returning_data(ForeignScanState *node); +static void init_returning_filter(PgFdwDirectModifyState *dmstate, + List *fdw_scan_tlist, + Index rtindex); +static TupleTableSlot *apply_returning_filter(PgFdwDirectModifyState *dmstate, + TupleTableSlot *slot, + EState *estate); static void prepare_query_params(PlanState *node, List *fdw_exprs, int numParams, @@ -2144,14 +2158,15 @@ postgresPlanDirectModify(PlannerInfo *root, if (subplan->qual != NIL) return false; - /* - * We can't handle an UPDATE or DELETE on a foreign join for now. - */ - if (fscan->scan.scanrelid == 0) - return false; - /* Safe to fetch data about the target foreign rel */ - foreignrel = root->simple_rel_array[resultRelation]; + if (fscan->scan.scanrelid == 0) + { + foreignrel = find_join_rel(root, fscan->fs_relids); + /* We should have a rel for this foreign join. */ + Assert(foreignrel); + } + else + foreignrel = root->simple_rel_array[resultRelation]; rte = root->simple_rte_array[resultRelation]; fpinfo = (PgFdwRelationInfo *) foreignrel->fdw_private; @@ -2212,8 +2227,23 @@ postgresPlanDirectModify(PlannerInfo *root, * Extract the relevant RETURNING list if any. */ if (plan->returningLists) + { returningList = (List *) list_nth(plan->returningLists, subplan_index); + /* + * When performing an UPDATE/DELETE .. RETURNING on a join directly, + * we fetch from the foreign server any Vars specified in RETURNING + * that refer not only to the target relation but to non-target + * relations. So we'll deparse them into the RETURNING clause of the + * remote query; use a targetlist consisting of them instead, which + * will be adjusted to be new fdw_scan_tlist of the foreign-scan plan + * node below. + */ + if (fscan->scan.scanrelid == 0) + returningList = build_remote_returning(resultRelation, rel, + returningList); + } + /* * Construct the SQL command string. */ @@ -2221,6 +2251,7 @@ postgresPlanDirectModify(PlannerInfo *root, { case CMD_UPDATE: deparseDirectUpdateSql(&sql, root, resultRelation, rel, + foreignrel, ((Plan *) fscan)->targetlist, targetAttrs, remote_exprs, ¶ms_list, @@ -2228,6 +2259,7 @@ postgresPlanDirectModify(PlannerInfo *root, break; case CMD_DELETE: deparseDirectDeleteSql(&sql, root, resultRelation, rel, + foreignrel, remote_exprs, ¶ms_list, returningList, &retrieved_attrs); break; @@ -2255,6 +2287,19 @@ postgresPlanDirectModify(PlannerInfo *root, retrieved_attrs, makeInteger(plan->canSetTag)); + /* + * Update the foreign-join-related fields. + */ + if (fscan->scan.scanrelid == 0) + { + /* No need for the outer subplan. */ + fscan->scan.plan.lefttree = NULL; + + /* Build new fdw_scan_tlist if UPDATE/DELETE .. 
RETURNING. */ + if (returningList) + rebuild_fdw_scan_tlist(fscan, returningList); + } + heap_close(rel, NoLock); return true; } @@ -2269,6 +2314,7 @@ postgresBeginDirectModify(ForeignScanState *node, int eflags) ForeignScan *fsplan = (ForeignScan *) node->ss.ps.plan; EState *estate = node->ss.ps.state; PgFdwDirectModifyState *dmstate; + Index rtindex; RangeTblEntry *rte; Oid userid; ForeignTable *table; @@ -2291,11 +2337,15 @@ postgresBeginDirectModify(ForeignScanState *node, int eflags) * Identify which user to do the remote access as. This should match what * ExecCheckRTEPerms() does. */ - rte = rt_fetch(fsplan->scan.scanrelid, estate->es_range_table); + rtindex = estate->es_result_relation_info->ri_RangeTableIndex; + rte = rt_fetch(rtindex, estate->es_range_table); userid = rte->checkAsUser ? rte->checkAsUser : GetUserId(); /* Get info about foreign table. */ - dmstate->rel = node->ss.ss_currentRelation; + if (fsplan->scan.scanrelid == 0) + dmstate->rel = ExecOpenScanRelation(estate, rtindex, eflags); + else + dmstate->rel = node->ss.ss_currentRelation; table = GetForeignTable(RelationGetRelid(dmstate->rel)); user = GetUserMapping(userid, table->serverid); @@ -2305,6 +2355,21 @@ postgresBeginDirectModify(ForeignScanState *node, int eflags) */ dmstate->conn = GetConnection(user, false); + /* Update the foreign-join-related fields. */ + if (fsplan->scan.scanrelid == 0) + { + /* Save info about foreign table. */ + dmstate->resultRel = dmstate->rel; + + /* + * Set dmstate->rel to NULL to teach get_returning_data() and + * make_tuple_from_result_row() that columns fetched from the remote + * server are described by fdw_scan_tlist of the foreign-scan plan + * node, not the tuple descriptor for the target relation. + */ + dmstate->rel = NULL; + } + /* Initialize state variable */ dmstate->num_tuples = -1; /* -1 means not set yet */ @@ -2325,7 +2390,24 @@ postgresBeginDirectModify(ForeignScanState *node, int eflags) /* Prepare for input conversion of RETURNING results. */ if (dmstate->has_returning) - dmstate->attinmeta = TupleDescGetAttInMetadata(RelationGetDescr(dmstate->rel)); + { + TupleDesc tupdesc; + + if (fsplan->scan.scanrelid == 0) + tupdesc = node->ss.ss_ScanTupleSlot->tts_tupleDescriptor; + else + tupdesc = RelationGetDescr(dmstate->rel); + + dmstate->attinmeta = TupleDescGetAttInMetadata(tupdesc); + + /* + * When performing an UPDATE/DELETE .. RETURNING on a join directly, + * initialize a filter to extract an updated/deleted tuple from a scan + * tuple. + */ + if (fsplan->scan.scanrelid == 0) + init_returning_filter(dmstate, fsplan->fdw_scan_tlist, rtindex); + } /* * Prepare for processing of parameters used in remote query, if any. @@ -2406,6 +2488,10 @@ postgresEndDirectModify(ForeignScanState *node) ReleaseConnection(dmstate->conn); dmstate->conn = NULL; + /* close the target relation. */ + if (dmstate->resultRel) + ExecCloseScanRelation(dmstate->resultRel); + /* MemoryContext will be deleted automatically. */ } @@ -3272,6 +3358,136 @@ store_returning_result(PgFdwModifyState *fmstate, PG_END_TRY(); } +/* + * build_remote_returning + * Build a RETURNING targetlist of a remote query for performing an + * UPDATE/DELETE .. 
RETURNING on a join directly + */ +static List * +build_remote_returning(Index rtindex, Relation rel, List *returningList) +{ + bool have_wholerow = false; + List *tlist = NIL; + List *vars; + ListCell *lc; + + Assert(returningList); + + vars = pull_var_clause((Node *) returningList, PVC_INCLUDE_PLACEHOLDERS); + + /* + * If there's a whole-row reference to the target relation, then we'll + * need all the columns of the relation. + */ + foreach(lc, vars) + { + Var *var = (Var *) lfirst(lc); + + if (IsA(var, Var) && + var->varno == rtindex && + var->varattno == InvalidAttrNumber) + { + have_wholerow = true; + break; + } + } + + if (have_wholerow) + { + TupleDesc tupdesc = RelationGetDescr(rel); + int i; + + for (i = 1; i <= tupdesc->natts; i++) + { + Form_pg_attribute attr = TupleDescAttr(tupdesc, i - 1); + Var *var; + + /* Ignore dropped attributes. */ + if (attr->attisdropped) + continue; + + var = makeVar(rtindex, + i, + attr->atttypid, + attr->atttypmod, + attr->attcollation, + 0); + + tlist = lappend(tlist, + makeTargetEntry((Expr *) var, + list_length(tlist) + 1, + NULL, + false)); + } + } + + /* Now add any remaining columns to tlist. */ + foreach(lc, vars) + { + Var *var = (Var *) lfirst(lc); + + /* + * No need for whole-row references to the target relation. We don't + * need system columns other than ctid and oid either, since those are + * set locally. + */ + if (IsA(var, Var) && + var->varno == rtindex && + var->varattno <= InvalidAttrNumber && + var->varattno != SelfItemPointerAttributeNumber && + var->varattno != ObjectIdAttributeNumber) + continue; /* don't need it */ + + if (tlist_member((Expr *) var, tlist)) + continue; /* already got it */ + + tlist = lappend(tlist, + makeTargetEntry((Expr *) var, + list_length(tlist) + 1, + NULL, + false)); + } + + list_free(vars); + + return tlist; +} + +/* + * rebuild_fdw_scan_tlist + * Build new fdw_scan_tlist of given foreign-scan plan node from given + * tlist + * + * There might be columns that the fdw_scan_tlist of the given foreign-scan + * plan node contains that the given tlist doesn't. The fdw_scan_tlist would + * have contained resjunk columns such as 'ctid' of the target relation and + * 'wholerow' of non-target relations, but the tlist might not contain them, + * for example. So, adjust the tlist so it contains all the columns specified + * in the fdw_scan_tlist; else setrefs.c will get confused. + */ +static void +rebuild_fdw_scan_tlist(ForeignScan *fscan, List *tlist) +{ + List *new_tlist = tlist; + List *old_tlist = fscan->fdw_scan_tlist; + ListCell *lc; + + foreach(lc, old_tlist) + { + TargetEntry *tle = (TargetEntry *) lfirst(lc); + + if (tlist_member(tle->expr, new_tlist)) + continue; /* already got it */ + + new_tlist = lappend(new_tlist, + makeTargetEntry(tle->expr, + list_length(new_tlist) + 1, + NULL, + false)); + } + fscan->fdw_scan_tlist = new_tlist; +} + /* * Execute a direct UPDATE/DELETE statement. */ @@ -3332,6 +3548,7 @@ get_returning_data(ForeignScanState *node) EState *estate = node->ss.ps.state; ResultRelInfo *resultRelInfo = estate->es_result_relation_info; TupleTableSlot *slot = node->ss.ss_ScanTupleSlot; + TupleTableSlot *resultSlot; Assert(resultRelInfo->ri_projectReturning); @@ -3349,7 +3566,10 @@ get_returning_data(ForeignScanState *node) * "UPDATE/DELETE .. RETURNING 1" for example.) 
*/ if (!dmstate->has_returning) + { ExecStoreAllNullTuple(slot); + resultSlot = slot; + } else { /* @@ -3365,7 +3585,7 @@ get_returning_data(ForeignScanState *node) dmstate->rel, dmstate->attinmeta, dmstate->retrieved_attrs, - NULL, + node, dmstate->temp_cxt); ExecStoreTuple(newtup, slot, InvalidBuffer, false); } @@ -3376,15 +3596,204 @@ get_returning_data(ForeignScanState *node) PG_RE_THROW(); } PG_END_TRY(); + + /* Get the updated/deleted tuple. */ + if (dmstate->rel) + resultSlot = slot; + else + resultSlot = apply_returning_filter(dmstate, slot, estate); } dmstate->next_tuple++; /* Make slot available for evaluation of the local query RETURNING list. */ - resultRelInfo->ri_projectReturning->pi_exprContext->ecxt_scantuple = slot; + resultRelInfo->ri_projectReturning->pi_exprContext->ecxt_scantuple = + resultSlot; return slot; } +/* + * Initialize a filter to extract an updated/deleted tuple from a scan tuple. + */ +static void +init_returning_filter(PgFdwDirectModifyState *dmstate, + List *fdw_scan_tlist, + Index rtindex) +{ + TupleDesc resultTupType = RelationGetDescr(dmstate->resultRel); + ListCell *lc; + int i; + + /* + * Calculate the mapping between the fdw_scan_tlist's entries and the + * result tuple's attributes. + * + * The "map" is an array of indexes of the result tuple's attributes in + * fdw_scan_tlist, i.e., one entry for every attribute of the result + * tuple. We store zero for any attributes that don't have the + * corresponding entries in that list, marking that a NULL is needed in + * the result tuple. + * + * Also get the indexes of the entries for ctid and oid if any. + */ + dmstate->attnoMap = (AttrNumber *) + palloc0(resultTupType->natts * sizeof(AttrNumber)); + + dmstate->ctidAttno = dmstate->oidAttno = 0; + + i = 1; + dmstate->hasSystemCols = false; + foreach(lc, fdw_scan_tlist) + { + TargetEntry *tle = (TargetEntry *) lfirst(lc); + Var *var = (Var *) tle->expr; + + Assert(IsA(var, Var)); + + /* + * If the Var is a column of the target relation to be retrieved from + * the foreign server, get the index of the entry. + */ + if (var->varno == rtindex && + list_member_int(dmstate->retrieved_attrs, i)) + { + int attrno = var->varattno; + + if (attrno < 0) + { + /* + * We don't retrieve system columns other than ctid and oid. + */ + if (attrno == SelfItemPointerAttributeNumber) + dmstate->ctidAttno = i; + else if (attrno == ObjectIdAttributeNumber) + dmstate->oidAttno = i; + else + Assert(false); + dmstate->hasSystemCols = true; + } + else + { + /* + * We don't retrieve whole-row references to the target + * relation either. + */ + Assert(attrno > 0); + + dmstate->attnoMap[attrno - 1] = i; + } + } + i++; + } +} + +/* + * Extract and return an updated/deleted tuple from a scan tuple. + */ +static TupleTableSlot * +apply_returning_filter(PgFdwDirectModifyState *dmstate, + TupleTableSlot *slot, + EState *estate) +{ + TupleDesc resultTupType = RelationGetDescr(dmstate->resultRel); + TupleTableSlot *resultSlot; + Datum *values; + bool *isnull; + Datum *old_values; + bool *old_isnull; + int i; + + /* + * Use the trigger tuple slot as a place to store the result tuple. + */ + resultSlot = estate->es_trig_tuple_slot; + if (resultSlot->tts_tupleDescriptor != resultTupType) + ExecSetSlotDescriptor(resultSlot, resultTupType); + + /* + * Extract all the values of the scan tuple. + */ + slot_getallattrs(slot); + old_values = slot->tts_values; + old_isnull = slot->tts_isnull; + + /* + * Prepare to build the result tuple. 
+ */ + ExecClearTuple(resultSlot); + values = resultSlot->tts_values; + isnull = resultSlot->tts_isnull; + + /* + * Transpose data into proper fields of the result tuple. + */ + for (i = 0; i < resultTupType->natts; i++) + { + int j = dmstate->attnoMap[i]; + + if (j == 0) + { + values[i] = (Datum) 0; + isnull[i] = true; + } + else + { + values[i] = old_values[j - 1]; + isnull[i] = old_isnull[j - 1]; + } + } + + /* + * Build the virtual tuple. + */ + ExecStoreVirtualTuple(resultSlot); + + /* + * If we have any system columns to return, install them. + */ + if (dmstate->hasSystemCols) + { + HeapTuple resultTup = ExecMaterializeSlot(resultSlot); + + /* ctid */ + if (dmstate->ctidAttno) + { + ItemPointer ctid = NULL; + + ctid = (ItemPointer) DatumGetPointer(old_values[dmstate->ctidAttno - 1]); + resultTup->t_self = *ctid; + } + + /* oid */ + if (dmstate->oidAttno) + { + Oid oid = InvalidOid; + + oid = DatumGetObjectId(old_values[dmstate->oidAttno - 1]); + HeapTupleSetOid(resultTup, oid); + } + + /* + * And remaining columns + * + * Note: since we currently don't allow the target relation to appear + * on the nullable side of an outer join, any system columns wouldn't + * go to NULL. + * + * Note: no need to care about tableoid here because it will be + * initialized in ExecProcessReturning(). + */ + HeapTupleHeaderSetXmin(resultTup->t_data, InvalidTransactionId); + HeapTupleHeaderSetXmax(resultTup->t_data, InvalidTransactionId); + HeapTupleHeaderSetCmin(resultTup->t_data, InvalidTransactionId); + } + + /* + * And return the result tuple. + */ + return resultSlot; +} + /* * Prepare for processing of parameters used in remote query. */ @@ -4954,11 +5363,8 @@ make_tuple_from_result_row(PGresult *res, tupdesc = RelationGetDescr(rel); else { - PgFdwScanState *fdw_sstate; - Assert(fsstate); - fdw_sstate = (PgFdwScanState *) fsstate->fdw_state; - tupdesc = fdw_sstate->tupdesc; + tupdesc = fsstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; } values = (Datum *) palloc0(tupdesc->natts * sizeof(Datum)); diff --git a/contrib/postgres_fdw/postgres_fdw.h b/contrib/postgres_fdw/postgres_fdw.h index 1ae809d2c6..d37cc88b6e 100644 --- a/contrib/postgres_fdw/postgres_fdw.h +++ b/contrib/postgres_fdw/postgres_fdw.h @@ -150,6 +150,7 @@ extern void deparseUpdateSql(StringInfo buf, PlannerInfo *root, List **retrieved_attrs); extern void deparseDirectUpdateSql(StringInfo buf, PlannerInfo *root, Index rtindex, Relation rel, + RelOptInfo *foreignrel, List *targetlist, List *targetAttrs, List *remote_conds, @@ -162,6 +163,7 @@ extern void deparseDeleteSql(StringInfo buf, PlannerInfo *root, List **retrieved_attrs); extern void deparseDirectDeleteSql(StringInfo buf, PlannerInfo *root, Index rtindex, Relation rel, + RelOptInfo *foreignrel, List *remote_conds, List **params_list, List *returningList, diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 400a9b0cd7..e0a1d6febe 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -1082,14 +1082,14 @@ UPDATE ft2 SET c2 = c2 + 400, c3 = c3 || '_update7' WHERE c1 % 10 = 7 RETURNING UPDATE ft2 SET c2 = c2 + 400, c3 = c3 || '_update7' WHERE c1 % 10 = 7 RETURNING *; EXPLAIN (verbose, costs off) UPDATE ft2 SET c2 = ft2.c2 + 500, c3 = ft2.c3 || '_update9', c7 = DEFAULT - FROM ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 9; -- can't be pushed down + FROM ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 9; -- can be pushed down UPDATE ft2 SET c2 = ft2.c2 + 500, c3 = ft2.c3 || '_update9', 
c7 = DEFAULT FROM ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 9; EXPLAIN (verbose, costs off) DELETE FROM ft2 WHERE c1 % 10 = 5 RETURNING c1, c4; -- can be pushed down DELETE FROM ft2 WHERE c1 % 10 = 5 RETURNING c1, c4; EXPLAIN (verbose, costs off) -DELETE FROM ft2 USING ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 2; -- can't be pushed down +DELETE FROM ft2 USING ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 2; -- can be pushed down DELETE FROM ft2 USING ft1 WHERE ft1.c1 = ft2.c2 AND ft1.c1 % 10 = 2; SELECT c1,c2,c3,c4 FROM ft2 ORDER BY c1; EXPLAIN (verbose, costs off) @@ -1102,6 +1102,58 @@ EXPLAIN (verbose, costs off) DELETE FROM ft2 WHERE c1 = 9999 RETURNING tableoid::regclass; -- can be pushed down DELETE FROM ft2 WHERE c1 = 9999 RETURNING tableoid::regclass; +-- Test UPDATE/DELETE with RETURNING on a three-table join +INSERT INTO ft2 (c1,c2,c3) + SELECT id, id - 1200, to_char(id, 'FM00000') FROM generate_series(1201, 1300) id; +EXPLAIN (verbose, costs off) +UPDATE ft2 SET c3 = 'foo' + FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 1200 AND ft2.c2 = ft4.c1 + RETURNING ft2.ctid, ft2, ft2.*, ft4.ctid, ft4, ft4.*; -- can be pushed down +UPDATE ft2 SET c3 = 'foo' + FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 1200 AND ft2.c2 = ft4.c1 + RETURNING ft2.ctid, ft2, ft2.*, ft4.ctid, ft4, ft4.*; +EXPLAIN (verbose, costs off) +DELETE FROM ft2 + USING ft4 LEFT JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 1200 AND ft2.c1 % 10 = 0 AND ft2.c2 = ft4.c1 + RETURNING 100; -- can be pushed down +DELETE FROM ft2 + USING ft4 LEFT JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 1200 AND ft2.c1 % 10 = 0 AND ft2.c2 = ft4.c1 + RETURNING 100; +DELETE FROM ft2 WHERE ft2.c1 > 1200; + +-- Test UPDATE/DELETE with WHERE or JOIN/ON conditions containing +-- user-defined operators/functions +ALTER SERVER loopback OPTIONS (DROP extensions); +INSERT INTO ft2 (c1,c2,c3) + SELECT id, id % 10, to_char(id, 'FM00000') FROM generate_series(2001, 2010) id; +EXPLAIN (verbose, costs off) +UPDATE ft2 SET c3 = 'bar' WHERE postgres_fdw_abs(c1) > 2000 RETURNING *; -- can't be pushed down +UPDATE ft2 SET c3 = 'bar' WHERE postgres_fdw_abs(c1) > 2000 RETURNING *; +EXPLAIN (verbose, costs off) +UPDATE ft2 SET c3 = 'baz' + FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1 + RETURNING ft2.*, ft4.*, ft5.*; -- can't be pushed down +UPDATE ft2 SET c3 = 'baz' + FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) + WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1 + RETURNING ft2.*, ft4.*, ft5.*; +EXPLAIN (verbose, costs off) +DELETE FROM ft2 + USING ft4 INNER JOIN ft5 ON (ft4.c1 === ft5.c1) + WHERE ft2.c1 > 2000 AND ft2.c2 = ft4.c1 + RETURNING ft2.ctid, ft2.c1, ft2.c2, ft2.c3; -- can't be pushed down +DELETE FROM ft2 + USING ft4 INNER JOIN ft5 ON (ft4.c1 === ft5.c1) + WHERE ft2.c1 > 2000 AND ft2.c2 = ft4.c1 + RETURNING ft2.ctid, ft2.c1, ft2.c2, ft2.c3; +DELETE FROM ft2 WHERE ft2.c1 > 2000; +ALTER SERVER loopback OPTIONS (ADD extensions 'postgres_fdw'); + -- Test that trigger on remote table works as expected CREATE OR REPLACE FUNCTION "S 1".F_BRTRIG() RETURNS trigger AS $$ BEGIN From 882ea509fe7a4711fe25463427a33262b873dfa1 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 7 Feb 2018 20:38:08 -0500 Subject: [PATCH 0966/1087] postgres_fdw: Remove CTID output from some tests. Commit 1bc0100d270e5bcc980a0629b8726a32a497e788 added these tests, but they're not stable enough to survive in the buildfarm. Remove CTIDs from the output in the hopes of fixing that. 
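To sketch why (using the ft2 foreign table from these tests): ctid is a
row's physical (block, offset) address, so a value such as (12,102)
depends on where the row happens to be stored and can differ between runs:

    -- The ctid returned here, e.g. (12,102), encodes the row's physical
    -- block and offset, which depend on row placement and are not stable
    -- across runs, so expected output containing it is unreliable.
    UPDATE ft2 SET c3 = 'foo' WHERE c1 = 1206 RETURNING ctid;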
--- .../postgres_fdw/expected/postgres_fdw.out | 64 +++++++++---------- contrib/postgres_fdw/sql/postgres_fdw.sql | 10 +-- 2 files changed, 37 insertions(+), 37 deletions(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 885a45b0df..adbf77ffe5 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -5417,44 +5417,44 @@ EXPLAIN (verbose, costs off) UPDATE ft2 SET c3 = 'foo' FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) WHERE ft2.c1 > 1200 AND ft2.c2 = ft4.c1 - RETURNING ft2.ctid, ft2, ft2.*, ft4.ctid, ft4, ft4.*; -- can be pushed down - QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + RETURNING ft2, ft2.*, ft4, ft4.*; -- can be pushed down + QUERY PLAN +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Update on public.ft2 - Output: ft2.ctid, ft2.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.ctid, ft4.*, ft4.c1, ft4.c2, ft4.c3 + Output: ft2.*, ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.*, ft4.c1, ft4.c2, ft4.c3 -> Foreign Update - Remote SQL: UPDATE "S 1"."T 1" r1 SET c3 = 'foo'::text FROM ("S 1"."T 3" r2 INNER JOIN "S 1"."T 4" r3 ON (TRUE)) WHERE ((r2.c1 = r3.c1)) AND ((r1.c2 = r2.c1)) AND ((r1."C 1" > 1200)) RETURNING r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8, r1.ctid, r2.ctid, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, r2.c3) END, r2.c1, r2.c2, r2.c3 + Remote SQL: UPDATE "S 1"."T 1" r1 SET c3 = 'foo'::text FROM ("S 1"."T 3" r2 INNER JOIN "S 1"."T 4" r3 ON (TRUE)) WHERE ((r2.c1 = r3.c1)) AND ((r1.c2 = r2.c1)) AND ((r1."C 1" > 1200)) RETURNING r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, r2.c3) END, r2.c1, r2.c2, r2.c3 (4 rows) UPDATE ft2 SET c3 = 'foo' FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) WHERE ft2.c1 > 1200 AND ft2.c2 = ft4.c1 - RETURNING ft2.ctid, ft2, ft2.*, ft4.ctid, ft4, ft4.*; - ctid | ft2 | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | ctid | ft4 | c1 | c2 | c3 -----------+--------------------------------+------+----+-----+----+----+----+------------+----+--------+----------------+----+----+-------- - (12,102) | (1206,6,foo,,,,"ft2 ",) | 1206 | 6 | foo | | | | ft2 | | (0,6) | (6,7,AAA006) | 6 | 7 | AAA006 - (12,103) | (1212,12,foo,,,,"ft2 ",) | 1212 | 12 | foo | | | | ft2 | | (0,12) | (12,13,AAA012) | 12 | 13 | AAA012 - (12,104) | (1218,18,foo,,,,"ft2 ",) | 1218 | 18 | foo | | | | ft2 | | (0,18) | (18,19,AAA018) | 18 | 19 | AAA018 - (12,105) | (1224,24,foo,,,,"ft2 ",) | 1224 | 24 | foo | | | | ft2 | | (0,24) | (24,25,AAA024) | 24 | 25 | AAA024 - (12,106) | (1230,30,foo,,,,"ft2 ",) | 1230 | 30 | foo | | | | ft2 | | (0,30) | (30,31,AAA030) | 30 | 31 | AAA030 - (12,107) | (1236,36,foo,,,,"ft2 ",) | 1236 | 36 | foo | | | | ft2 | | (0,36) | (36,37,AAA036) | 36 | 37 | AAA036 - (12,108) 
| (1242,42,foo,,,,"ft2 ",) | 1242 | 42 | foo | | | | ft2 | | (0,42) | (42,43,AAA042) | 42 | 43 | AAA042 - (12,109) | (1248,48,foo,,,,"ft2 ",) | 1248 | 48 | foo | | | | ft2 | | (0,48) | (48,49,AAA048) | 48 | 49 | AAA048 - (12,110) | (1254,54,foo,,,,"ft2 ",) | 1254 | 54 | foo | | | | ft2 | | (0,54) | (54,55,AAA054) | 54 | 55 | AAA054 - (12,111) | (1260,60,foo,,,,"ft2 ",) | 1260 | 60 | foo | | | | ft2 | | (0,60) | (60,61,AAA060) | 60 | 61 | AAA060 - (12,112) | (1266,66,foo,,,,"ft2 ",) | 1266 | 66 | foo | | | | ft2 | | (0,66) | (66,67,AAA066) | 66 | 67 | AAA066 - (12,113) | (1272,72,foo,,,,"ft2 ",) | 1272 | 72 | foo | | | | ft2 | | (0,72) | (72,73,AAA072) | 72 | 73 | AAA072 - (12,114) | (1278,78,foo,,,,"ft2 ",) | 1278 | 78 | foo | | | | ft2 | | (0,78) | (78,79,AAA078) | 78 | 79 | AAA078 - (12,115) | (1284,84,foo,,,,"ft2 ",) | 1284 | 84 | foo | | | | ft2 | | (0,84) | (84,85,AAA084) | 84 | 85 | AAA084 - (12,116) | (1290,90,foo,,,,"ft2 ",) | 1290 | 90 | foo | | | | ft2 | | (0,90) | (90,91,AAA090) | 90 | 91 | AAA090 - (12,117) | (1296,96,foo,,,,"ft2 ",) | 1296 | 96 | foo | | | | ft2 | | (0,96) | (96,97,AAA096) | 96 | 97 | AAA096 + RETURNING ft2, ft2.*, ft4, ft4.*; + ft2 | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | ft4 | c1 | c2 | c3 +--------------------------------+------+----+-----+----+----+----+------------+----+----------------+----+----+-------- + (1206,6,foo,,,,"ft2 ",) | 1206 | 6 | foo | | | | ft2 | | (6,7,AAA006) | 6 | 7 | AAA006 + (1212,12,foo,,,,"ft2 ",) | 1212 | 12 | foo | | | | ft2 | | (12,13,AAA012) | 12 | 13 | AAA012 + (1218,18,foo,,,,"ft2 ",) | 1218 | 18 | foo | | | | ft2 | | (18,19,AAA018) | 18 | 19 | AAA018 + (1224,24,foo,,,,"ft2 ",) | 1224 | 24 | foo | | | | ft2 | | (24,25,AAA024) | 24 | 25 | AAA024 + (1230,30,foo,,,,"ft2 ",) | 1230 | 30 | foo | | | | ft2 | | (30,31,AAA030) | 30 | 31 | AAA030 + (1236,36,foo,,,,"ft2 ",) | 1236 | 36 | foo | | | | ft2 | | (36,37,AAA036) | 36 | 37 | AAA036 + (1242,42,foo,,,,"ft2 ",) | 1242 | 42 | foo | | | | ft2 | | (42,43,AAA042) | 42 | 43 | AAA042 + (1248,48,foo,,,,"ft2 ",) | 1248 | 48 | foo | | | | ft2 | | (48,49,AAA048) | 48 | 49 | AAA048 + (1254,54,foo,,,,"ft2 ",) | 1254 | 54 | foo | | | | ft2 | | (54,55,AAA054) | 54 | 55 | AAA054 + (1260,60,foo,,,,"ft2 ",) | 1260 | 60 | foo | | | | ft2 | | (60,61,AAA060) | 60 | 61 | AAA060 + (1266,66,foo,,,,"ft2 ",) | 1266 | 66 | foo | | | | ft2 | | (66,67,AAA066) | 66 | 67 | AAA066 + (1272,72,foo,,,,"ft2 ",) | 1272 | 72 | foo | | | | ft2 | | (72,73,AAA072) | 72 | 73 | AAA072 + (1278,78,foo,,,,"ft2 ",) | 1278 | 78 | foo | | | | ft2 | | (78,79,AAA078) | 78 | 79 | AAA078 + (1284,84,foo,,,,"ft2 ",) | 1284 | 84 | foo | | | | ft2 | | (84,85,AAA084) | 84 | 85 | AAA084 + (1290,90,foo,,,,"ft2 ",) | 1290 | 90 | foo | | | | ft2 | | (90,91,AAA090) | 90 | 91 | AAA090 + (1296,96,foo,,,,"ft2 ",) | 1296 | 96 | foo | | | | ft2 | | (96,97,AAA096) | 96 | 97 | AAA096 (16 rows) EXPLAIN (verbose, costs off) DELETE FROM ft2 USING ft4 LEFT JOIN ft5 ON (ft4.c1 = ft5.c1) WHERE ft2.c1 > 1200 AND ft2.c1 % 10 = 0 AND ft2.c2 = ft4.c1 - RETURNING 100; -- can be pushed down + RETURNING 100; -- can be pushed down QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Delete on public.ft2 @@ -5561,12 +5561,12 @@ EXPLAIN (verbose, costs off) DELETE FROM ft2 USING ft4 INNER JOIN ft5 ON (ft4.c1 === ft5.c1) WHERE ft2.c1 > 2000 AND ft2.c2 = ft4.c1 - RETURNING ft2.ctid, ft2.c1, ft2.c2, ft2.c3; -- can't 
be pushed down + RETURNING ft2.c1, ft2.c2, ft2.c3; -- can't be pushed down QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Delete on public.ft2 - Output: ft2.ctid, ft2.c1, ft2.c2, ft2.c3 - Remote SQL: DELETE FROM "S 1"."T 1" WHERE ctid = $1 RETURNING "C 1", c2, c3, ctid + Output: ft2.c1, ft2.c2, ft2.c3 + Remote SQL: DELETE FROM "S 1"."T 1" WHERE ctid = $1 RETURNING "C 1", c2, c3 -> Foreign Scan Output: ft2.ctid, ft4.*, ft5.* Filter: (ft4.c1 === ft5.c1) @@ -5591,10 +5591,10 @@ DELETE FROM ft2 DELETE FROM ft2 USING ft4 INNER JOIN ft5 ON (ft4.c1 === ft5.c1) WHERE ft2.c1 > 2000 AND ft2.c2 = ft4.c1 - RETURNING ft2.ctid, ft2.c1, ft2.c2, ft2.c3; - ctid | c1 | c2 | c3 -----------+------+----+----- - (12,112) | 2006 | 6 | baz + RETURNING ft2.c1, ft2.c2, ft2.c3; + c1 | c2 | c3 +------+----+----- + 2006 | 6 | baz (1 row) DELETE FROM ft2 WHERE ft2.c1 > 2000; diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index e0a1d6febe..0b2c5289e3 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -1109,16 +1109,16 @@ EXPLAIN (verbose, costs off) UPDATE ft2 SET c3 = 'foo' FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) WHERE ft2.c1 > 1200 AND ft2.c2 = ft4.c1 - RETURNING ft2.ctid, ft2, ft2.*, ft4.ctid, ft4, ft4.*; -- can be pushed down + RETURNING ft2, ft2.*, ft4, ft4.*; -- can be pushed down UPDATE ft2 SET c3 = 'foo' FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) WHERE ft2.c1 > 1200 AND ft2.c2 = ft4.c1 - RETURNING ft2.ctid, ft2, ft2.*, ft4.ctid, ft4, ft4.*; + RETURNING ft2, ft2.*, ft4, ft4.*; EXPLAIN (verbose, costs off) DELETE FROM ft2 USING ft4 LEFT JOIN ft5 ON (ft4.c1 = ft5.c1) WHERE ft2.c1 > 1200 AND ft2.c1 % 10 = 0 AND ft2.c2 = ft4.c1 - RETURNING 100; -- can be pushed down + RETURNING 100; -- can be pushed down DELETE FROM ft2 USING ft4 LEFT JOIN ft5 ON (ft4.c1 = ft5.c1) WHERE ft2.c1 > 1200 AND ft2.c1 % 10 = 0 AND ft2.c2 = ft4.c1 @@ -1146,11 +1146,11 @@ EXPLAIN (verbose, costs off) DELETE FROM ft2 USING ft4 INNER JOIN ft5 ON (ft4.c1 === ft5.c1) WHERE ft2.c1 > 2000 AND ft2.c2 = ft4.c1 - RETURNING ft2.ctid, ft2.c1, ft2.c2, ft2.c3; -- can't be pushed down + RETURNING ft2.c1, ft2.c2, ft2.c3; -- can't be pushed down DELETE FROM ft2 USING ft4 INNER JOIN ft5 ON (ft4.c1 === ft5.c1) WHERE ft2.c1 > 2000 AND ft2.c2 = ft4.c1 - RETURNING ft2.ctid, ft2.c1, ft2.c2, ft2.c3; + RETURNING ft2.c1, ft2.c2, ft2.c3; DELETE FROM ft2 WHERE ft2.c1 > 2000; ALTER SERVER loopback OPTIONS (ADD extensions 'postgres_fdw'); From b3a101eff0fd3747bebf547b1769e28f820f4515 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 8 Feb 2018 09:12:30 -0500 Subject: [PATCH 0967/1087] Refine SSL tests test name reporting Instead of using the psql/libpq connection string as the displayed test name and relying on "notes" and source code comments to explain the tests, give the tests self-explanatory names, like we do elsewhere. 
Reviewed-by: Michael Paquier Reviewed-by: Daniel Gustafsson --- src/test/ssl/ServerSetup.pm | 15 +--- src/test/ssl/t/001_ssltests.pl | 142 ++++++++++++++++++++------------- 2 files changed, 89 insertions(+), 68 deletions(-) diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm index b4d5746e20..45991d61a2 100644 --- a/src/test/ssl/ServerSetup.pm +++ b/src/test/ssl/ServerSetup.pm @@ -39,7 +39,6 @@ our @EXPORT = qw( sub run_test_psql { my $connstr = $_[0]; - my $logstring = $_[1]; my $cmd = [ 'psql', '-X', '-A', '-t', '-c', "SELECT \$\$connected with $connstr\$\$", @@ -49,19 +48,15 @@ sub run_test_psql return $result; } -# # The first argument is a base connection string to use for connection. -# The second argument is a complementary connection string, and it's also -# printed out as the test case name. +# The second argument is a complementary connection string. sub test_connect_ok { my $common_connstr = $_[0]; my $connstr = $_[1]; my $test_name = $_[2]; - my $result = - run_test_psql("$common_connstr $connstr", "(should succeed)"); - ok($result, $test_name || $connstr); + ok(run_test_psql("$common_connstr $connstr"), $test_name); } sub test_connect_fails @@ -70,8 +65,7 @@ sub test_connect_fails my $connstr = $_[1]; my $test_name = $_[2]; - my $result = run_test_psql("$common_connstr $connstr", "(should fail)"); - ok(!$result, $test_name || "$connstr (should fail)"); + ok(!run_test_psql("$common_connstr $connstr"), $test_name); } # Copy a set of files, taking into account wildcards @@ -151,9 +145,6 @@ sub switch_server_cert my $cafile = $_[2] || "root+client_ca"; my $pgdata = $node->data_dir; - note - "reloading server with certfile \"$certfile\" and cafile \"$cafile\""; - open my $sslconf, '>', "$pgdata/sslconfig.conf"; print $sslconf "ssl=on\n"; print $sslconf "ssl_ca_file='$cafile.crt'\n"; diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl index 28837a1391..e53bd12ae9 100644 --- a/src/test/ssl/t/001_ssltests.pl +++ b/src/test/ssl/t/001_ssltests.pl @@ -47,113 +47,134 @@ "user=ssltestuser dbname=trustdb sslcert=invalid hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test"; # The server should not accept non-SSL connections. -note "test that the server doesn't accept non-SSL connections"; -test_connect_fails($common_connstr, "sslmode=disable"); +test_connect_fails($common_connstr, "sslmode=disable", + "server doesn't accept non-SSL connections"); # Try without a root cert. In sslmode=require, this should work. In verify-ca # or verify-full mode it should fail. -note "connect without server root cert"; -test_connect_ok($common_connstr, "sslrootcert=invalid sslmode=require"); -test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-ca"); -test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-full"); +test_connect_ok($common_connstr, "sslrootcert=invalid sslmode=require", + "connect without server root cert sslmode=require"); +test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-ca", + "connect without server root cert sslmode=verify-ca"); +test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-full", + "connect without server root cert sslmode=verify-full"); # Try with wrong root cert, should fail. (We're using the client CA as the # root, but the server's key is signed by the server CA.) 
-note "connect with wrong server root cert"; test_connect_fails($common_connstr, - "sslrootcert=ssl/client_ca.crt sslmode=require"); + "sslrootcert=ssl/client_ca.crt sslmode=require", + "connect with wrong server root cert sslmode=require"); test_connect_fails($common_connstr, - "sslrootcert=ssl/client_ca.crt sslmode=verify-ca"); + "sslrootcert=ssl/client_ca.crt sslmode=verify-ca", + "connect with wrong server root cert sslmode=verify-ca"); test_connect_fails($common_connstr, - "sslrootcert=ssl/client_ca.crt sslmode=verify-full"); + "sslrootcert=ssl/client_ca.crt sslmode=verify-full", + "connect with wrong server root cert sslmode=verify-full"); # Try with just the server CA's cert. This fails because the root file # must contain the whole chain up to the root CA. -note "connect with server CA cert, without root CA"; test_connect_fails($common_connstr, - "sslrootcert=ssl/server_ca.crt sslmode=verify-ca"); + "sslrootcert=ssl/server_ca.crt sslmode=verify-ca", + "connect with server CA cert, without root CA"); # And finally, with the correct root cert. -note "connect with correct server CA cert file"; test_connect_ok($common_connstr, - "sslrootcert=ssl/root+server_ca.crt sslmode=require"); + "sslrootcert=ssl/root+server_ca.crt sslmode=require", + "connect with correct server CA cert file sslmode=require"); test_connect_ok($common_connstr, - "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca"); + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca", + "connect with correct server CA cert file sslmode=verify-ca"); test_connect_ok($common_connstr, - "sslrootcert=ssl/root+server_ca.crt sslmode=verify-full"); + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-full", + "connect with correct server CA cert file sslmode=verify-full"); # Test with cert root file that contains two certificates. The client should # be able to pick the right one, regardless of the order in the file. test_connect_ok($common_connstr, - "sslrootcert=ssl/both-cas-1.crt sslmode=verify-ca"); + "sslrootcert=ssl/both-cas-1.crt sslmode=verify-ca", + "cert root file that contains two certificates, order 1"); test_connect_ok($common_connstr, - "sslrootcert=ssl/both-cas-2.crt sslmode=verify-ca"); + "sslrootcert=ssl/both-cas-2.crt sslmode=verify-ca", + "cert root file that contains two certificates, order 2"); -note "testing sslcrl option with a non-revoked cert"; +# CRL tests # Invalid CRL filename is the same as no CRL, succeeds test_connect_ok($common_connstr, - "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=invalid"); + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=invalid", + "sslcrl option with invalid file name"); # A CRL belonging to a different CA is not accepted, fails test_connect_fails($common_connstr, -"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/client.crl"); + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/client.crl", + "CRL belonging to a different CA"); # With the correct CRL, succeeds (this cert is not revoked) test_connect_ok($common_connstr, -"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl" -); + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl", + "CRL with a non-revoked cert"); # Check that connecting with verify-full fails, when the hostname doesn't # match the hostname in the server's certificate. 
-note "test mismatch between hostname and server certificate"; $common_connstr = -"user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR sslmode=verify-full"; +"user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR"; + +test_connect_ok($common_connstr, "sslmode=require host=wronghost.test", + "mismatch between host name and server certificate sslmode=require"); +test_connect_ok($common_connstr, "sslmode=verify-ca host=wronghost.test", + "mismatch between host name and server certificate sslmode=verify-ca"); +test_connect_fails($common_connstr, "sslmode=verify-full host=wronghost.test", + "mismatch between host name and server certificate sslmode=verify-full"); -test_connect_ok($common_connstr, "sslmode=require host=wronghost.test"); -test_connect_ok($common_connstr, "sslmode=verify-ca host=wronghost.test"); -test_connect_fails($common_connstr, "sslmode=verify-full host=wronghost.test"); # Test Subject Alternative Names. switch_server_cert($node, 'server-multiple-alt-names'); -note "test hostname matching with X.509 Subject Alternative Names"; $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR sslmode=verify-full"; -test_connect_ok($common_connstr, "host=dns1.alt-name.pg-ssltest.test"); -test_connect_ok($common_connstr, "host=dns2.alt-name.pg-ssltest.test"); -test_connect_ok($common_connstr, "host=foo.wildcard.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=dns1.alt-name.pg-ssltest.test", + "host name matching with X.509 Subject Alternative Names 1"); +test_connect_ok($common_connstr, "host=dns2.alt-name.pg-ssltest.test", + "host name matching with X.509 Subject Alternative Names 2"); +test_connect_ok($common_connstr, "host=foo.wildcard.pg-ssltest.test", + "host name matching with X.509 Subject Alternative Names wildcard"); -test_connect_fails($common_connstr, "host=wronghost.alt-name.pg-ssltest.test"); +test_connect_fails($common_connstr, "host=wronghost.alt-name.pg-ssltest.test", + "host name not matching with X.509 Subject Alternative Names"); test_connect_fails($common_connstr, - "host=deep.subdomain.wildcard.pg-ssltest.test"); + "host=deep.subdomain.wildcard.pg-ssltest.test", + "host name not matching with X.509 Subject Alternative Names wildcard"); # Test certificate with a single Subject Alternative Name. (this gives a # slightly different error message, that's all) switch_server_cert($node, 'server-single-alt-name'); -note "test hostname matching with a single X.509 Subject Alternative Name"; $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR sslmode=verify-full"; -test_connect_ok($common_connstr, "host=single.alt-name.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=single.alt-name.pg-ssltest.test", + "host name matching with a single X.509 Subject Alternative Name"); -test_connect_fails($common_connstr, "host=wronghost.alt-name.pg-ssltest.test"); +test_connect_fails($common_connstr, "host=wronghost.alt-name.pg-ssltest.test", + "host name not matching with a single X.509 Subject Alternative Name"); test_connect_fails($common_connstr, - "host=deep.subdomain.wildcard.pg-ssltest.test"); + "host=deep.subdomain.wildcard.pg-ssltest.test", + "host name not matching with a single X.509 Subject Alternative Name wildcard"); # Test server certificate with a CN and SANs. 
Per RFCs 2818 and 6125, the CN # should be ignored when the certificate has both. switch_server_cert($node, 'server-cn-and-alt-names'); -note "test certificate with both a CN and SANs"; $common_connstr = "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR sslmode=verify-full"; -test_connect_ok($common_connstr, "host=dns1.alt-name.pg-ssltest.test"); -test_connect_ok($common_connstr, "host=dns2.alt-name.pg-ssltest.test"); -test_connect_fails($common_connstr, "host=common-name.pg-ssltest.test"); +test_connect_ok($common_connstr, "host=dns1.alt-name.pg-ssltest.test", + "certificate with both a CN and SANs 1"); +test_connect_ok($common_connstr, "host=dns2.alt-name.pg-ssltest.test", + "certificate with both a CN and SANs 2"); +test_connect_fails($common_connstr, "host=common-name.pg-ssltest.test", + "certificate with both a CN and SANs ignores CN"); # Finally, test a server certificate that has no CN or SANs. Of course, that's # not a very sensible certificate, but libpq should handle it gracefully. @@ -162,12 +183,13 @@ "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR"; test_connect_ok($common_connstr, - "sslmode=verify-ca host=common-name.pg-ssltest.test"); + "sslmode=verify-ca host=common-name.pg-ssltest.test", + "server certificate without CN or SANs sslmode=verify-ca"); test_connect_fails($common_connstr, - "sslmode=verify-full host=common-name.pg-ssltest.test"); + "sslmode=verify-full host=common-name.pg-ssltest.test", + "server certificate without CN or SANs sslmode=verify-full"); # Test that the CRL works -note "testing client-side CRL"; switch_server_cert($node, 'server-revoked'); $common_connstr = @@ -175,34 +197,40 @@ # Without the CRL, succeeds. With it, fails. test_connect_ok($common_connstr, - "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca"); + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca", + "connects without client-side CRL"); test_connect_fails($common_connstr, -"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl" -); + "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl", + "does not connect with client-side CRL"); ### Part 2. Server-side tests. ### ### Test certificate authorization. 
-note "testing certificate authorization"; +note "running server tests"; + $common_connstr = "sslrootcert=ssl/root+server_ca.crt sslmode=require dbname=certdb hostaddr=$SERVERHOSTADDR"; # no client cert -test_connect_fails($common_connstr, "user=ssltestuser sslcert=invalid"); +test_connect_fails($common_connstr, + "user=ssltestuser sslcert=invalid", + "certificate authorization fails without client cert"); # correct client cert test_connect_ok($common_connstr, - "user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key"); + "user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key", + "certificate authorization succeeds with correct client cert"); # client cert belonging to another user test_connect_fails($common_connstr, - "user=anotheruser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key"); + "user=anotheruser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key", + "certificate authorization fails with client cert belonging to another user"); # revoked client cert test_connect_fails($common_connstr, -"user=ssltestuser sslcert=ssl/client-revoked.crt sslkey=ssl/client-revoked.key" -); + "user=ssltestuser sslcert=ssl/client-revoked.crt sslkey=ssl/client-revoked.key", + "certificate authorization fails with revoked client cert"); # intermediate client_ca.crt is provided by client, and isn't in server's ssl_ca_file switch_server_cert($node, 'server-cn-only', 'root_ca'); @@ -210,8 +238,10 @@ "user=ssltestuser dbname=certdb sslkey=ssl/client_tmp.key sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR"; test_connect_ok($common_connstr, - "sslmode=require sslcert=ssl/client+client_ca.crt"); -test_connect_fails($common_connstr, "sslmode=require sslcert=ssl/client.crt"); + "sslmode=require sslcert=ssl/client+client_ca.crt", + "intermediate client certificate is provided by client"); +test_connect_fails($common_connstr, "sslmode=require sslcert=ssl/client.crt", + "intermediate client certificate is missing"); # clean up unlink "ssl/client_tmp.key"; From 88fdc7006018b92d6ec92c54b3819764703daaba Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 8 Feb 2018 12:31:48 -0500 Subject: [PATCH 0968/1087] Fix possible infinite loop with Parallel Append. When the previously-chosen plan was non-partial, all pa_finished flags for partial plans are now set, and pa_next_plan has not yet been set to INVALID_SUBPLAN_INDEX, the previous code could go into an infinite loop. Report by Rajkumar Raghuwanshi. Patch by Amit Khandekar and me. Review by Kyotaro Horiguchi. Discussion: http://postgr.es/m/CAJ3gD9cf43z78qY=U=H0HvOEN341qfRO-vLpnKPSviHeWgJQ5w@mail.gmail.com --- src/backend/executor/nodeAppend.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c index 64a17fb032..264d8fea8d 100644 --- a/src/backend/executor/nodeAppend.c +++ b/src/backend/executor/nodeAppend.c @@ -473,6 +473,9 @@ choose_next_subplan_for_worker(AppendState *node) return false; } + /* Save the plan from which we are starting the search. */ + node->as_whichplan = pstate->pa_next_plan; + /* Loop until we find a subplan to execute. */ while (pstate->pa_finished[pstate->pa_next_plan]) { @@ -481,14 +484,17 @@ choose_next_subplan_for_worker(AppendState *node) /* Advance to next plan. */ pstate->pa_next_plan++; } - else if (append->first_partial_plan < node->as_nplans) + else if (node->as_whichplan > append->first_partial_plan) { /* Loop back to first partial plan. 
*/ pstate->pa_next_plan = append->first_partial_plan; } else { - /* At last plan, no partial plans, arrange to bail out. */ + /* + * At last plan, and either there are no partial plans or we've + * tried them all. Arrange to bail out. + */ pstate->pa_next_plan = node->as_whichplan; } From e44dd84325c277fd031b9ef486c51a0946c7d3a0 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 8 Feb 2018 14:29:05 -0500 Subject: [PATCH 0969/1087] Avoid listing the same ResultRelInfo in more than one EState list. Doing so causes EXPLAIN ANALYZE to show trigger statistics multiple times. Commit 2f178441044be430f6b4d626e4dae68a9a6f6cec seems to be to blame for this. Amit Langote, reviewed by Amit Khandekar, Etsuro Fujita, and me. --- src/backend/commands/explain.c | 11 +++++++---- src/backend/executor/execMain.c | 7 +++++-- src/backend/executor/execPartition.c | 12 +++++++++--- src/backend/executor/execUtils.c | 2 +- src/include/nodes/execnodes.h | 7 +++++-- 5 files changed, 27 insertions(+), 12 deletions(-) diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 41cd47e8bc..900fa74e85 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -652,15 +652,18 @@ ExplainPrintTriggers(ExplainState *es, QueryDesc *queryDesc) bool show_relname; int numrels = queryDesc->estate->es_num_result_relations; int numrootrels = queryDesc->estate->es_num_root_result_relations; - List *leafrels = queryDesc->estate->es_leaf_result_relations; - List *targrels = queryDesc->estate->es_trig_target_relations; + List *routerels; + List *targrels; int nr; ListCell *l; + routerels = queryDesc->estate->es_tuple_routing_result_relations; + targrels = queryDesc->estate->es_trig_target_relations; + ExplainOpenGroup("Triggers", "Triggers", false, es); show_relname = (numrels > 1 || numrootrels > 0 || - leafrels != NIL || targrels != NIL); + routerels != NIL || targrels != NIL); rInfo = queryDesc->estate->es_result_relations; for (nr = 0; nr < numrels; rInfo++, nr++) report_triggers(rInfo, show_relname, es); @@ -669,7 +672,7 @@ ExplainPrintTriggers(ExplainState *es, QueryDesc *queryDesc) for (nr = 0; nr < numrootrels; rInfo++, nr++) report_triggers(rInfo, show_relname, es); - foreach(l, leafrels) + foreach(l, routerels) { rInfo = (ResultRelInfo *) lfirst(l); report_triggers(rInfo, show_relname, es); diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 410921cc40..5d3e923cca 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -1413,8 +1413,11 @@ ExecGetTriggerResultRel(EState *estate, Oid relid) rInfo++; nr--; } - /* Third, search through the leaf result relations, if any */ - foreach(l, estate->es_leaf_result_relations) + /* + * Third, search through the result relations that were created during + * tuple routing, if any. + */ + foreach(l, estate->es_tuple_routing_result_relations) { rInfo = (ResultRelInfo *) lfirst(l); if (RelationGetRelid(rInfo->ri_RelationDesc) == relid) diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index ba6b52c32c..4048c3ebc6 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -178,6 +178,15 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, resultRTindex, rel, estate->es_instrument); + + /* + * Since we've just initialized this ResultRelInfo, it's not in + * any list attached to the estate as yet. Add it, so that it can + * be found later.
+ */ + estate->es_tuple_routing_result_relations = + lappend(estate->es_tuple_routing_result_relations, + leaf_part_rri); } part_tupdesc = RelationGetDescr(partrel); @@ -210,9 +219,6 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, mtstate != NULL && mtstate->mt_onconflict != ONCONFLICT_NONE); - estate->es_leaf_result_relations = - lappend(estate->es_leaf_result_relations, leaf_part_rri); - proute->partitions[i] = leaf_part_rri; i++; } diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index e29f7aaf7b..50b6edce63 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -119,7 +119,7 @@ CreateExecutorState(void) estate->es_root_result_relations = NULL; estate->es_num_root_result_relations = 0; - estate->es_leaf_result_relations = NIL; + estate->es_tuple_routing_result_relations = NIL; estate->es_trig_target_relations = NIL; estate->es_trig_tuple_slot = NULL; diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 54ce63f147..286d55be03 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -466,8 +466,11 @@ typedef struct EState ResultRelInfo *es_root_result_relations; /* array of ResultRelInfos */ int es_num_root_result_relations; /* length of the array */ - /* Info about leaf partitions of partitioned table(s) for insert queries: */ - List *es_leaf_result_relations; /* List of ResultRelInfos */ + /* + * The following list contains ResultRelInfos created by the tuple + * routing code for partitions that don't already have one. + */ + List *es_tuple_routing_result_relations; /* Stuff used for firing triggers: */ List *es_trig_target_relations; /* trigger-only ResultRelInfos */ From b78d0160da13109c69acfd0caded3f920bff2f3b Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 8 Feb 2018 14:35:54 -0500 Subject: [PATCH 0970/1087] Fix incorrect method name in comment. Atsushi Torikoshi Discussion: http://postgr.es/m/1b056262-4bc0-a982-c899-bb67a0a7fd52@lab.ntt.co.jp --- src/backend/replication/walsender.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c index 130ecd5559..d46374ddce 100644 --- a/src/backend/replication/walsender.c +++ b/src/backend/replication/walsender.c @@ -1243,7 +1243,7 @@ WalSndWriteData(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid, } /* - * LogicalDecodingContext 'progress_update' callback. + * LogicalDecodingContext 'update_progress' callback. * * Write the current position to the log tracker (see XLogSendPhysical). */ From 958e20e42d6c346ab89f6c72e4262230161d1663 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 9 Feb 2018 15:24:35 -0500 Subject: [PATCH 0971/1087] postgres_fdw: Attempt to stabilize regression tests. Even after commit 882ea509fe7a4711fe25463427a33262b873dfa1, some buildfarm members are still failing in the postgres_fdw tests. Try to fix that by disabling use of remote statistics for some test cases.
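The stabilization pattern is easiest to read in isolation; a minimal sketch of what the diff below does (ft2 is the suite's foreign table and "S 1"."T 1" its remote counterpart, both taken from the regression setup):

    -- Pin the tests to local statistics, so autovacuum activity on the
    -- remote side cannot shift row estimates while plan-sensitive cases run.
    ALTER FOREIGN TABLE ft2 OPTIONS (SET use_remote_estimate 'false');
    ANALYZE ft2;

    -- ... run the plan-sensitive DML test cases ...

    -- Afterwards, refresh the remote table's statistics and switch back
    -- to remote-estimate mode.
    VACUUM ANALYZE "S 1"."T 1";
    ALTER FOREIGN TABLE ft2 OPTIONS (SET use_remote_estimate 'true');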
Etsuro Fujita Discussion: http://postgr.es/m/5A7D76CF.8080601@lab.ntt.co.jp --- .../postgres_fdw/expected/postgres_fdw.out | 43 +++++++++++-------- contrib/postgres_fdw/sql/postgres_fdw.sql | 9 ++++ 2 files changed, 34 insertions(+), 18 deletions(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index adbf77ffe5..62e084fb3d 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -4244,6 +4244,10 @@ explain (verbose, costs off) select * from ft3 f, loct3 l -- =================================================================== -- test writable foreign table stuff -- =================================================================== +-- Autovacuum on the remote side might affect remote estimates, +-- so use local stats on ft2 as well +ALTER FOREIGN TABLE ft2 OPTIONS (SET use_remote_estimate 'false'); +ANALYZE ft2; EXPLAIN (verbose, costs off) INSERT INTO ft2 (c1,c2,c3) SELECT c1+1000,c2+100, c3 || c3 FROM ft2 LIMIT 20; QUERY PLAN @@ -5520,32 +5524,32 @@ UPDATE ft2 SET c3 = 'baz' FROM ft4 INNER JOIN ft5 ON (ft4.c1 = ft5.c1) WHERE ft2.c1 > 2000 AND ft2.c2 === ft4.c1 RETURNING ft2.*, ft4.*, ft5.*; -- can't be pushed down - QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Update on public.ft2 Output: ft2.c1, ft2.c2, ft2.c3, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3 Remote SQL: UPDATE "S 1"."T 1" SET c3 = $2 WHERE ctid = $1 RETURNING "C 1", c2, c3, c4, c5, c6, c7, c8 - -> Nested Loop + -> Hash Join Output: ft2.c1, ft2.c2, NULL::integer, 'baz'::text, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid, ft4.*, ft5.*, ft4.c1, ft4.c2, ft4.c3, ft5.c1, ft5.c2, ft5.c3 - Join Filter: (ft2.c2 === ft4.c1) - -> Foreign Scan on public.ft2 - Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid - Remote SQL: SELECT "C 1", c2, c4, c5, c6, c7, c8, ctid FROM "S 1"."T 1" WHERE (("C 1" > 2000)) FOR UPDATE + Hash Cond: (ft4.c1 = ft5.c1) -> Foreign Scan - Output: ft4.*, ft4.c1, ft4.c2, ft4.c3, ft5.*, ft5.c1, ft5.c2, ft5.c3 - Relations: (public.ft4) INNER JOIN (public.ft5) - Remote SQL: SELECT CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, r2.c3) END, r2.c1, r2.c2, r2.c3, CASE WHEN (r3.*)::text IS NOT NULL THEN ROW(r3.c1, r3.c2, r3.c3) END, r3.c1, r3.c2, r3.c3 FROM ("S 1"."T 3" r2 INNER JOIN "S 1"."T 4" r3 ON (((r2.c1 = r3.c1)))) - -> Hash Join - Output: ft4.*, ft4.c1, ft4.c2, ft4.c3, ft5.*, ft5.c1, ft5.c2, ft5.c3 - Hash Cond: (ft4.c1 = ft5.c1) + Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid, ft4.*, ft4.c1, ft4.c2, ft4.c3 + Filter: (ft2.c2 === ft4.c1) + Relations: (public.ft2) INNER JOIN (public.ft4) + Remote SQL: SELECT r1."C 1", r1.c2, r1.c4, r1.c5, r1.c6, r1.c7, r1.c8, r1.ctid, CASE WHEN (r2.*)::text IS NOT NULL THEN ROW(r2.c1, r2.c2, r2.c3) END, r2.c1, r2.c2, r2.c3 FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 3" r2 ON (((r1."C 1" > 2000)))) FOR UPDATE OF r1 + -> Nested 
Loop + Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid, ft4.*, ft4.c1, ft4.c2, ft4.c3 + -> Foreign Scan on public.ft2 + Output: ft2.c1, ft2.c2, ft2.c4, ft2.c5, ft2.c6, ft2.c7, ft2.c8, ft2.ctid + Remote SQL: SELECT "C 1", c2, c4, c5, c6, c7, c8, ctid FROM "S 1"."T 1" WHERE (("C 1" > 2000)) FOR UPDATE -> Foreign Scan on public.ft4 Output: ft4.*, ft4.c1, ft4.c2, ft4.c3 Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 3" - -> Hash - Output: ft5.*, ft5.c1, ft5.c2, ft5.c3 - -> Foreign Scan on public.ft5 - Output: ft5.*, ft5.c1, ft5.c2, ft5.c3 - Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4" + -> Hash + Output: ft5.*, ft5.c1, ft5.c2, ft5.c3 + -> Foreign Scan on public.ft5 + Output: ft5.*, ft5.c1, ft5.c2, ft5.c3 + Remote SQL: SELECT c1, c2, c3 FROM "S 1"."T 4" (24 rows) UPDATE ft2 SET c3 = 'baz' @@ -5999,6 +6003,9 @@ select c2, count(*) from "S 1"."T 1" where c2 < 500 group by 1 order by 1; 407 | 100 (13 rows) +-- Go back to use remote-estimate mode on ft2 +VACUUM ANALYZE "S 1"."T 1"; +ALTER FOREIGN TABLE ft2 OPTIONS (SET use_remote_estimate 'true'); -- Above DMLs add data with c6 as NULL in ft1, so test ORDER BY NULLS LAST and NULLs -- FIRST behavior here. -- ORDER BY DESC NULLS LAST options diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 0b2c5289e3..68fdfdc765 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -1068,6 +1068,11 @@ explain (verbose, costs off) select * from ft3 f, loct3 l -- =================================================================== -- test writable foreign table stuff -- =================================================================== +-- Autovacuum on the remote side might affect remote estimates, +-- so use local stats on ft2 as well +ALTER FOREIGN TABLE ft2 OPTIONS (SET use_remote_estimate 'false'); +ANALYZE ft2; + EXPLAIN (verbose, costs off) INSERT INTO ft2 (c1,c2,c3) SELECT c1+1000,c2+100, c3 || c3 FROM ft2 LIMIT 20; INSERT INTO ft2 (c1,c2,c3) SELECT c1+1000,c2+100, c3 || c3 FROM ft2 LIMIT 20; @@ -1208,6 +1213,10 @@ commit; select c2, count(*) from ft2 where c2 < 500 group by 1 order by 1; select c2, count(*) from "S 1"."T 1" where c2 < 500 group by 1 order by 1; +-- Go back to use remote-estimate mode on ft2 +VACUUM ANALYZE "S 1"."T 1"; +ALTER FOREIGN TABLE ft2 OPTIONS (SET use_remote_estimate 'true'); + -- Above DMLs add data with c6 as NULL in ft1, so test ORDER BY NULLS LAST and NULLs -- FIRST behavior here. -- ORDER BY DESC NULLS LAST options From be42015fcc7f91574775a53df9923a36fabddc60 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 9 Feb 2018 15:48:18 -0500 Subject: [PATCH 0972/1087] Clear stmt_timeout_active if we disable_all_timeouts. Otherwise, we can end up with the flag set when the timeout is actually disabled, leading to misbehavior. Commit f8e5f156b30efee5d0038b03e38735773abcb7ed introduced this bug. Reported by Peter Eisentraut. Analysis and fix by Thomas Munro, tweaked by me. 
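For readers unfamiliar with the flag: stmt_timeout_active records whether a statement-timeout timer is currently armed, and statement_timeout is the user-facing GUC behind it. A minimal illustration of that timeout mechanism (only a sketch of the GUC's normal behavior, not a reproducer for the stale-flag case itself):

    SET statement_timeout = '1s';
    SELECT pg_sleep(2);   -- fails: canceling statement due to statement timeout
    RESET statement_timeout;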
Discussion: http://postgr.es/m/6a909374-2602-7136-8c70-397330a418f3@2ndquadrant.com --- src/backend/tcop/postgres.c | 1 + 1 file changed, 1 insertion(+) diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index ddc3ec860a..6dc2095b9a 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -3912,6 +3912,7 @@ PostgresMain(int argc, char *argv[], */ disable_all_timeouts(false); QueryCancelPending = false; /* second to avoid race condition */ + stmt_timeout_active = false; /* Not reading from the client anymore. */ DoingCommandRead = false; From 935dee9ad5a8d12f4d3b772a6e6c99d245e5ad44 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 9 Feb 2018 15:54:45 -0500 Subject: [PATCH 0973/1087] Mark assorted GUC variables as PGDLLIMPORT. This makes life easier for extension authors. Metin Doslu Discussion: http://postgr.es/m/CAL1dPcfa45o1dC-c4t-48v0OZE6oy4ChJhObrtkK8mzNfXqDTA@mail.gmail.com --- src/include/miscadmin.h | 2 +- src/include/optimizer/cost.h | 36 +++++++++++++++++------------------ src/include/optimizer/paths.h | 8 ++++---- src/include/utils/guc.h | 2 +- 4 files changed, 24 insertions(+), 24 deletions(-) diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h index 429c055489..a4574cd533 100644 --- a/src/include/miscadmin.h +++ b/src/include/miscadmin.h @@ -158,7 +158,7 @@ extern PGDLLIMPORT int NBuffers; extern PGDLLIMPORT int MaxBackends; extern PGDLLIMPORT int MaxConnections; extern PGDLLIMPORT int max_worker_processes; -extern int max_parallel_workers; +extern PGDLLIMPORT int max_parallel_workers; extern PGDLLIMPORT int MyProcPid; extern PGDLLIMPORT pg_time_t MyStartTime; diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h index d2fff76653..0e9f858b9e 100644 --- a/src/include/optimizer/cost.h +++ b/src/include/optimizer/cost.h @@ -53,24 +53,24 @@ extern PGDLLIMPORT double cpu_operator_cost; extern PGDLLIMPORT double parallel_tuple_cost; extern PGDLLIMPORT double parallel_setup_cost; extern PGDLLIMPORT int effective_cache_size; -extern Cost disable_cost; -extern int max_parallel_workers_per_gather; -extern bool enable_seqscan; -extern bool enable_indexscan; -extern bool enable_indexonlyscan; -extern bool enable_bitmapscan; -extern bool enable_tidscan; -extern bool enable_sort; -extern bool enable_hashagg; -extern bool enable_nestloop; -extern bool enable_material; -extern bool enable_mergejoin; -extern bool enable_hashjoin; -extern bool enable_gathermerge; -extern bool enable_partition_wise_join; -extern bool enable_parallel_append; -extern bool enable_parallel_hash; -extern int constraint_exclusion; +extern PGDLLIMPORT Cost disable_cost; +extern PGDLLIMPORT int max_parallel_workers_per_gather; +extern PGDLLIMPORT bool enable_seqscan; +extern PGDLLIMPORT bool enable_indexscan; +extern PGDLLIMPORT bool enable_indexonlyscan; +extern PGDLLIMPORT bool enable_bitmapscan; +extern PGDLLIMPORT bool enable_tidscan; +extern PGDLLIMPORT bool enable_sort; +extern PGDLLIMPORT bool enable_hashagg; +extern PGDLLIMPORT bool enable_nestloop; +extern PGDLLIMPORT bool enable_material; +extern PGDLLIMPORT bool enable_mergejoin; +extern PGDLLIMPORT bool enable_hashjoin; +extern PGDLLIMPORT bool enable_gathermerge; +extern PGDLLIMPORT bool enable_partition_wise_join; +extern PGDLLIMPORT bool enable_parallel_append; +extern PGDLLIMPORT bool enable_parallel_hash; +extern PGDLLIMPORT int constraint_exclusion; extern double clamp_row_est(double nrows); extern double index_pages_fetched(double tuples_fetched, BlockNumber pages, diff --git 
a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h index b6be259ff7..4708443c39 100644 --- a/src/include/optimizer/paths.h +++ b/src/include/optimizer/paths.h @@ -20,10 +20,10 @@ /* * allpaths.c */ -extern bool enable_geqo; -extern int geqo_threshold; -extern int min_parallel_table_scan_size; -extern int min_parallel_index_scan_size; +extern PGDLLIMPORT bool enable_geqo; +extern PGDLLIMPORT int geqo_threshold; +extern PGDLLIMPORT int min_parallel_table_scan_size; +extern PGDLLIMPORT int min_parallel_index_scan_size; /* Hook for plugins to get control in set_rel_pathlist() */ typedef void (*set_rel_pathlist_hook_type) (PlannerInfo *root, diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h index 77daa5a539..2e03640c0b 100644 --- a/src/include/utils/guc.h +++ b/src/include/utils/guc.h @@ -263,7 +263,7 @@ extern char *HbaFileName; extern char *IdentFileName; extern char *external_pid_file; -extern char *application_name; +extern PGDLLIMPORT char *application_name; extern int tcp_keepalives_idle; extern int tcp_keepalives_interval; From fad15f4a547ad433a28c370bd071b08df9e65f10 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Sat, 10 Feb 2018 10:01:37 -0300 Subject: [PATCH 0974/1087] Mention partitioned indexes in "Data Definition" chapter MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit We can now create indexes more easily than before, so update this chapter to use the simpler instructions. After an idea of Amit Langote. I (Álvaro) opted to do more invasive surgery and remove the previous suggestion to create per-partition indexes, which his patch left in place. Discussion: https://postgr.es/m/eafaaeb1-f0fd-d010-dd45-07db0300f645@lab.ntt.co.jp Author: Amit Langote, Álvaro Herrera --- doc/src/sgml/ddl.sgml | 22 ++++++++-------------- 1 file changed, 8 insertions(+), 14 deletions(-) diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 1e1f3428a6..bee1ebd7db 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -3145,18 +3145,15 @@ CREATE TABLE measurement_y2006m02 PARTITION OF measurement Create an index on the key column(s), as well as any other indexes you - might want for every partition. (The key index is not strictly + might want, on the partitioned table. (The key index is not strictly necessary, but in most scenarios it is helpful. If you intend the key values to be unique then you should always create a unique or - primary-key constraint for each partition.) + primary-key constraint for each partition.) This automatically creates + one index on each partition, and any partitions you create or attach + later will also contain the index. -CREATE INDEX ON measurement_y2006m02 (logdate); -CREATE INDEX ON measurement_y2006m03 (logdate); -... -CREATE INDEX ON measurement_y2007m11 (logdate); -CREATE INDEX ON measurement_y2007m12 (logdate); -CREATE INDEX ON measurement_y2008m01 (logdate); +CREATE INDEX ON measurement (logdate); @@ -3273,12 +3270,9 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - There is no facility available to create the matching indexes on all - partitions automatically. Indexes must be added to each partition with - separate commands. This also means that there is no way to create a - primary key, unique constraint, or exclusion constraint spanning all - partitions; it is only possible to constrain each leaf partition - individually. 
+ There is no way to create a primary key, unique constraint, or + exclusion constraint spanning all partitions; it is only possible + to constrain each leaf partition individually. From 65b1d767856d96c7d6f952f30890dd5b7d4b66bb Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 10 Feb 2018 13:05:14 -0500 Subject: [PATCH 0975/1087] Fix oversight in CALL argument handling, and do some minor cleanup. CALL statements cannot support sub-SELECTs in the arguments of the called procedure, since they just use ExecEvalExpr to evaluate such arguments. Teach transformSubLink() to reject the case, as it already does for other contexts in which subqueries are not supported. In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum symbol line up more closely with the phrasing of the error messages it is associated with. And fix someone's weak grasp of English grammar in the preceding EXPR_KIND_PARTITION_EXPRESSION addition. Also update an incorrect comment in resolve_unique_index_expr (possibly it was correct when written, but nowadays transformExpr definitely does reject SRFs here). Per report from Pavel Stehule --- but this resolves only one of the bugs he mentions. Discussion: https://postgr.es/m/CAFj8pRDxOwPPzpA8i+AQeDQFj7bhVw-dR2==rfWZ3zMGkm568Q@mail.gmail.com --- src/backend/commands/functioncmds.c | 2 +- src/backend/parser/parse_agg.c | 10 +++++----- src/backend/parser/parse_clause.c | 11 +++++------ src/backend/parser/parse_expr.c | 6 ++++-- src/backend/parser/parse_func.c | 2 +- src/include/parser/parse_node.h | 2 +- src/test/regress/expected/create_table.out | 4 ++-- 7 files changed, 19 insertions(+), 18 deletions(-) diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c index a483714766..5d94c1ca27 100644 --- a/src/backend/commands/functioncmds.c +++ b/src/backend/commands/functioncmds.c @@ -2225,7 +2225,7 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) { targs = lappend(targs, transformExpr(pstate, (Node *) lfirst(lc), - EXPR_KIND_CALL)); + EXPR_KIND_CALL_ARGUMENT)); } node = ParseFuncOrColumn(pstate, diff --git a/src/backend/parser/parse_agg.c b/src/backend/parser/parse_agg.c index 747139489a..377a7ed6d0 100644 --- a/src/backend/parser/parse_agg.c +++ b/src/backend/parser/parse_agg.c @@ -509,13 +509,13 @@ check_agglevels_and_constraints(ParseState *pstate, Node *expr) break; case EXPR_KIND_PARTITION_EXPRESSION: if (isAgg) - err = _("aggregate functions are not allowed in partition key expression"); + err = _("aggregate functions are not allowed in partition key expressions"); else - err = _("grouping operations are not allowed in partition key expression"); + err = _("grouping operations are not allowed in partition key expressions"); break; - case EXPR_KIND_CALL: + case EXPR_KIND_CALL_ARGUMENT: if (isAgg) err = _("aggregate functions are not allowed in CALL arguments"); else @@ -897,9 +897,9 @@ transformWindowFuncCall(ParseState *pstate, WindowFunc *wfunc, err = _("window functions are not allowed in trigger WHEN conditions"); break; case EXPR_KIND_PARTITION_EXPRESSION: - err = _("window functions are not allowed in partition key expression"); + err = _("window functions are not allowed in partition key expressions"); break; - case EXPR_KIND_CALL: + case EXPR_KIND_CALL_ARGUMENT: err = _("window functions are not allowed in CALL arguments"); break; diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c index 9bafc24083..3a02307bd9 100644 --- a/src/backend/parser/parse_clause.c +++ 
b/src/backend/parser/parse_clause.c @@ -3106,12 +3106,11 @@ resolve_unique_index_expr(ParseState *pstate, InferClause *infer, } /* - * transformExpr() should have already rejected subqueries, - * aggregates, and window functions, based on the EXPR_KIND_ for an - * index expression. Expressions returning sets won't have been - * rejected, but don't bother doing so here; there should be no - * available expression unique index to match any such expression - * against anyway. + * transformExpr() will reject subqueries, aggregates, window + * functions, and SRFs, based on being passed + * EXPR_KIND_INDEX_EXPRESSION. So we needn't worry about those + * further ... not that they would match any available index + * expression anyway. */ pInfer->expr = transformExpr(pstate, parse, EXPR_KIND_INDEX_EXPRESSION); diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c index d45926f27f..385e54a9b6 100644 --- a/src/backend/parser/parse_expr.c +++ b/src/backend/parser/parse_expr.c @@ -1818,7 +1818,6 @@ transformSubLink(ParseState *pstate, SubLink *sublink) case EXPR_KIND_RETURNING: case EXPR_KIND_VALUES: case EXPR_KIND_VALUES_SINGLE: - case EXPR_KIND_CALL: /* okay */ break; case EXPR_KIND_CHECK_CONSTRAINT: @@ -1847,6 +1846,9 @@ transformSubLink(ParseState *pstate, SubLink *sublink) case EXPR_KIND_PARTITION_EXPRESSION: err = _("cannot use subquery in partition key expression"); break; + case EXPR_KIND_CALL_ARGUMENT: + err = _("cannot use subquery in CALL argument"); + break; /* * There is intentionally no default: case here, so that the @@ -3471,7 +3473,7 @@ ParseExprKindName(ParseExprKind exprKind) return "WHEN"; case EXPR_KIND_PARTITION_EXPRESSION: return "PARTITION BY"; - case EXPR_KIND_CALL: + case EXPR_KIND_CALL_ARGUMENT: return "CALL"; /* diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c index 4a7bc77c0f..2a4ac09d5c 100644 --- a/src/backend/parser/parse_func.c +++ b/src/backend/parser/parse_func.c @@ -2290,7 +2290,7 @@ check_srf_call_placement(ParseState *pstate, Node *last_srf, int location) case EXPR_KIND_PARTITION_EXPRESSION: err = _("set-returning functions are not allowed in partition key expressions"); break; - case EXPR_KIND_CALL: + case EXPR_KIND_CALL_ARGUMENT: err = _("set-returning functions are not allowed in CALL arguments"); break; diff --git a/src/include/parser/parse_node.h b/src/include/parser/parse_node.h index 2e0792d60b..0230543810 100644 --- a/src/include/parser/parse_node.h +++ b/src/include/parser/parse_node.h @@ -69,7 +69,7 @@ typedef enum ParseExprKind EXPR_KIND_TRIGGER_WHEN, /* WHEN condition in CREATE TRIGGER */ EXPR_KIND_POLICY, /* USING or WITH CHECK expr in policy */ EXPR_KIND_PARTITION_EXPRESSION, /* PARTITION BY expression */ - EXPR_KIND_CALL /* CALL argument */ + EXPR_KIND_CALL_ARGUMENT /* procedure argument in CALL */ } ParseExprKind; diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index f5e56365f5..bef5463bab 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -325,12 +325,12 @@ DROP FUNCTION retset(int); CREATE TABLE partitioned ( a int ) PARTITION BY RANGE ((avg(a))); -ERROR: aggregate functions are not allowed in partition key expression +ERROR: aggregate functions are not allowed in partition key expressions CREATE TABLE partitioned ( a int, b int ) PARTITION BY RANGE ((avg(a) OVER (PARTITION BY b))); -ERROR: window functions are not allowed in partition key expression +ERROR: window functions are not 
allowed in partition key expressions CREATE TABLE partitioned ( a int ) PARTITION BY LIST ((a LIKE (SELECT 1))); From d02d4a6d4f27c223f48b03a5e651a22c8460b3c4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 10 Feb 2018 13:37:12 -0500 Subject: [PATCH 0976/1087] Avoid premature free of pass-by-reference CALL arguments. Prematurely freeing the EState used to evaluate CALL arguments led, in some cases, to passing dangling pointers to the procedure. This was masked in trivial cases because the argument pointers would point to Const nodes in the original expression tree, and in some other cases because the result value would end up in the standalone ExprContext rather than in memory belonging to the EState --- but that wasn't exactly high quality programming either, because the standalone ExprContext was never explicitly freed, breaking assorted API contracts. In addition, using a separate EState for each argument was just silly. So let's use just one EState, and one ExprContext, and make the latter belong to the former rather than be standalone, and clean up the EState (and hence the ExprContext) post-call. While at it, improve the function's commentary a bit. Discussion: https://postgr.es/m/29173.1518282748@sss.pgh.pa.us --- src/backend/commands/functioncmds.c | 28 +++++++++++++++---- .../regress/expected/create_procedure.out | 12 +++++--- src/test/regress/sql/create_procedure.sql | 4 ++- 3 files changed, 33 insertions(+), 11 deletions(-) diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c index 5d94c1ca27..a027b19744 100644 --- a/src/backend/commands/functioncmds.c +++ b/src/backend/commands/functioncmds.c @@ -2204,6 +2204,12 @@ ExecuteDoStmt(DoStmt *stmt, bool atomic) * transaction commands based on that information, e.g., call * SPI_connect_ext(SPI_OPT_NONATOMIC). The language should also pass on the * atomic flag to any nested invocations to CALL. + * + * The expression data structures and execution context that we create + * within this function are children of the portalContext of the Portal + * that the CALL utility statement runs in. Therefore, any pass-by-ref + * values that we're passing to the procedure will survive transaction + * commits that might occur inside the procedure. 
*/ void ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) @@ -2218,8 +2224,11 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) FmgrInfo flinfo; FunctionCallInfoData fcinfo; CallContext *callcontext; + EState *estate; + ExprContext *econtext; HeapTuple tp; + /* We need to do parse analysis on the procedure call and its arguments */ targs = NIL; foreach(lc, stmt->funccall->args) { @@ -2241,7 +2250,6 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) aclresult = pg_proc_aclcheck(fexpr->funcid, GetUserId(), ACL_EXECUTE); if (aclresult != ACLCHECK_OK) aclcheck_error(aclresult, OBJECT_PROCEDURE, get_func_name(fexpr->funcid)); - InvokeFunctionExecuteHook(fexpr->funcid); nargs = list_length(fexpr->args); @@ -2254,6 +2262,7 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) FUNC_MAX_ARGS, FUNC_MAX_ARGS))); + /* Prep the context object we'll pass to the procedure */ callcontext = makeNode(CallContext); callcontext->atomic = atomic; @@ -2270,23 +2279,28 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) callcontext->atomic = true; ReleaseSysCache(tp); + /* Initialize function call structure */ + InvokeFunctionExecuteHook(fexpr->funcid); fmgr_info(fexpr->funcid, &flinfo); InitFunctionCallInfoData(fcinfo, &flinfo, nargs, fexpr->inputcollid, (Node *) callcontext, NULL); + /* + * Evaluate procedure arguments inside a suitable execution context. Note + * we can't free this context till the procedure returns. + */ + estate = CreateExecutorState(); + econtext = CreateExprContext(estate); + i = 0; foreach(lc, fexpr->args) { - EState *estate; ExprState *exprstate; - ExprContext *econtext; Datum val; bool isnull; - estate = CreateExecutorState(); exprstate = ExecPrepareExpr(lfirst(lc), estate); - econtext = CreateStandaloneExprContext(); + val = ExecEvalExprSwitchContext(exprstate, econtext, &isnull); - FreeExecutorState(estate); fcinfo.arg[i] = val; fcinfo.argnull[i] = isnull; @@ -2295,4 +2309,6 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic) } FunctionCallInvoke(&fcinfo); + + FreeExecutorState(estate); } diff --git a/src/test/regress/expected/create_procedure.out b/src/test/regress/expected/create_procedure.out index ccad5c87d5..873907dc43 100644 --- a/src/test/regress/expected/create_procedure.out +++ b/src/test/regress/expected/create_procedure.out @@ -21,6 +21,8 @@ LINE 1: SELECT ptest1('x'); ^ HINT: To call a procedure, use CALL. 
CALL ptest1('a'); -- ok +CALL ptest1('xy' || 'zzy'); -- ok, constant-folded arg +CALL ptest1(substring(random()::text, 1, 1)); -- ok, volatile arg \df ptest1 List of functions Schema | Name | Result data type | Argument data types | Type @@ -28,11 +30,13 @@ CALL ptest1('a'); -- ok public | ptest1 | | x text | proc (1 row) -SELECT * FROM cp_test ORDER BY a; - a | b ----+--- +SELECT * FROM cp_test ORDER BY b COLLATE "C"; + a | b +---+------- + 1 | 0 1 | a -(1 row) + 1 | xyzzy +(3 rows) CREATE PROCEDURE ptest2() LANGUAGE SQL diff --git a/src/test/regress/sql/create_procedure.sql b/src/test/regress/sql/create_procedure.sql index 8c47b7e9ef..d65e568a64 100644 --- a/src/test/regress/sql/create_procedure.sql +++ b/src/test/regress/sql/create_procedure.sql @@ -13,10 +13,12 @@ $$; SELECT ptest1('x'); -- error CALL ptest1('a'); -- ok +CALL ptest1('xy' || 'zzy'); -- ok, constant-folded arg +CALL ptest1(substring(random()::text, 1, 1)); -- ok, volatile arg \df ptest1 -SELECT * FROM cp_test ORDER BY a; +SELECT * FROM cp_test ORDER BY b COLLATE "C"; CREATE PROCEDURE ptest2() From 5c9f2564fabbc770ead3bd92136fdafc43654f27 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 11 Feb 2018 13:24:15 -0500 Subject: [PATCH 0977/1087] Fix assorted errors in pg_dump's handling of extended statistics objects. pg_dump supposed that a stats object necessarily shares the same schema as its underlying table, and that it doesn't have a separate owner. These things may have been true during early development of the feature, but they are not true as of v10 release. Failure to track the object's schema separately turns out to have only limited consequences, because pg_get_statisticsobjdef() always schema- qualifies the target object name in the generated CREATE STATISTICS command (a decision out of step with the rest of ruleutils.c, but I digress). Therefore the restored object would be in the right schema, so that the only problem is that the TOC entry would be mislabeled as to schema. That could lead to wrong decisions for schema-selective restores, for example. The ownership issue is a bit more serious: not only was the TOC entry potentially mislabeled as to owner, but pg_dump didn't bother to issue an ALTER OWNER command at all, so that after restore the stats object would continue to be owned by the restoring superuser. A final point is that decisions as to whether to dump a stats object or not were driven by whether the underlying table was dumped or not. While that's not wrong on its face, it won't scale nicely to the planned future extension to cross-table statistics. Moreover, that design decision comes out of the view of stats objects as being auxiliary to a particular table, like a rule or trigger, which is exactly where the above problems came from. Since we're now treating stats objects more like independent objects in their own right, they ought to behave like standalone objects for this purpose too. So change to using the generic selectDumpableObject() logic for them (which presently amounts to "dump if containing schema is to be dumped"). Along the way to fixing this, restructure so that getExtendedStatistics collects the identity info (only) for all extended stats objects in one query, and then for each object actually being dumped, we retrieve the definition in dumpStatisticsExt. 
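A sketch of the resulting two-step catalog access (simplified: the real queries also translate stxowner to a role name via the username subquery, and the OID literal here is only a placeholder):

    -- getExtendedStatistics: one query collects identity info for every
    -- extended-statistics object.
    SELECT tableoid, oid, stxname, stxnamespace, stxowner
    FROM pg_catalog.pg_statistic_ext;

    -- dumpStatisticsExt: the definition is fetched only for objects that
    -- are actually being dumped.
    SELECT pg_catalog.pg_get_statisticsobjdef('16385'::pg_catalog.oid);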
This is necessary to ensure that schema-qualification in the generated CREATE STATISTICS command happens with respect to the search path that pg_dump will now be using at restore time (ie, the schema the stats object is in, not that of the underlying table). It's probably also significantly faster in the typical scenario where only a minority of tables have extended stats. Back-patch to v10 where extended stats were introduced. Discussion: https://postgr.es/m/18272.1518328606@sss.pgh.pa.us --- src/bin/pg_dump/common.c | 2 +- src/bin/pg_dump/pg_backup_archiver.c | 5 +- src/bin/pg_dump/pg_dump.c | 132 +++++++++++++-------------- src/bin/pg_dump/pg_dump.h | 5 +- src/bin/pg_dump/t/002_pg_dump.pl | 2 +- 5 files changed, 68 insertions(+), 78 deletions(-) diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c index 2ec3627a68..0a758f14bf 100644 --- a/src/bin/pg_dump/common.c +++ b/src/bin/pg_dump/common.c @@ -266,7 +266,7 @@ getSchemaData(Archive *fout, int *numTablesPtr) if (g_verbose) write_msg(NULL, "reading extended statistics\n"); - getExtendedStatistics(fout, tblinfo, numTables); + getExtendedStatistics(fout); if (g_verbose) write_msg(NULL, "reading constraints\n"); diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index 7c5e8c018b..a4deb53e3a 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -3441,6 +3441,7 @@ _getObjectDescription(PQExpBuffer buf, TocEntry *te, ArchiveHandle *AH) strcmp(type, "FOREIGN TABLE") == 0 || strcmp(type, "TEXT SEARCH DICTIONARY") == 0 || strcmp(type, "TEXT SEARCH CONFIGURATION") == 0 || + strcmp(type, "STATISTICS") == 0 || /* non-schema-specified objects */ strcmp(type, "DATABASE") == 0 || strcmp(type, "PROCEDURAL LANGUAGE") == 0 || @@ -3636,6 +3637,7 @@ _printTocEntry(ArchiveHandle *AH, TocEntry *te, bool isData) strcmp(te->desc, "TEXT SEARCH CONFIGURATION") == 0 || strcmp(te->desc, "FOREIGN DATA WRAPPER") == 0 || strcmp(te->desc, "SERVER") == 0 || + strcmp(te->desc, "STATISTICS") == 0 || strcmp(te->desc, "PUBLICATION") == 0 || strcmp(te->desc, "SUBSCRIPTION") == 0) { @@ -3658,8 +3660,7 @@ _printTocEntry(ArchiveHandle *AH, TocEntry *te, bool isData) strcmp(te->desc, "TRIGGER") == 0 || strcmp(te->desc, "ROW SECURITY") == 0 || strcmp(te->desc, "POLICY") == 0 || - strcmp(te->desc, "USER MAPPING") == 0 || - strcmp(te->desc, "STATISTICS") == 0) + strcmp(te->desc, "USER MAPPING") == 0) { /* these object types don't have separate owners */ } diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 8ca83c06d6..06bbc5033d 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -7020,17 +7020,14 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) /* * getExtendedStatistics - * get information about extended statistics on a dumpable table - * or materialized view. + * get information about extended-statistics objects. * * Note: extended statistics data is not returned directly to the caller, but * it does get entered into the DumpableObject tables. 
*/ void -getExtendedStatistics(Archive *fout, TableInfo tblinfo[], int numTables) +getExtendedStatistics(Archive *fout) { - int i, - j; PQExpBuffer query; PGresult *res; StatsExtInfo *statsextinfo; @@ -7038,7 +7035,9 @@ getExtendedStatistics(Archive *fout, TableInfo tblinfo[], int numTables) int i_tableoid; int i_oid; int i_stxname; - int i_stxdef; + int i_stxnamespace; + int i_rolname; + int i; /* Extended statistics were new in v10 */ if (fout->remoteVersion < 100000) @@ -7046,73 +7045,46 @@ getExtendedStatistics(Archive *fout, TableInfo tblinfo[], int numTables) query = createPQExpBuffer(); - for (i = 0; i < numTables; i++) - { - TableInfo *tbinfo = &tblinfo[i]; - - /* - * Only plain tables, materialized views, foreign tables and - * partitioned tables can have extended statistics. - */ - if (tbinfo->relkind != RELKIND_RELATION && - tbinfo->relkind != RELKIND_MATVIEW && - tbinfo->relkind != RELKIND_FOREIGN_TABLE && - tbinfo->relkind != RELKIND_PARTITIONED_TABLE) - continue; - - /* - * Ignore extended statistics of tables whose definitions are not to - * be dumped. - */ - if (!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)) - continue; - - if (g_verbose) - write_msg(NULL, "reading extended statistics for table \"%s.%s\"\n", - tbinfo->dobj.namespace->dobj.name, - tbinfo->dobj.name); - - /* Make sure we are in proper schema so stadef is right */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); + /* Make sure we are in proper schema */ + selectSourceSchema(fout, "pg_catalog"); - resetPQExpBuffer(query); + appendPQExpBuffer(query, "SELECT tableoid, oid, stxname, " + "stxnamespace, (%s stxowner) AS rolname " + "FROM pg_catalog.pg_statistic_ext", + username_subquery); - appendPQExpBuffer(query, - "SELECT " - "tableoid, " - "oid, " - "stxname, " - "pg_catalog.pg_get_statisticsobjdef(oid) AS stxdef " - "FROM pg_catalog.pg_statistic_ext " - "WHERE stxrelid = '%u' " - "ORDER BY stxname", tbinfo->dobj.catId.oid); + res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK); - res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK); + ntups = PQntuples(res); - ntups = PQntuples(res); + i_tableoid = PQfnumber(res, "tableoid"); + i_oid = PQfnumber(res, "oid"); + i_stxname = PQfnumber(res, "stxname"); + i_stxnamespace = PQfnumber(res, "stxnamespace"); + i_rolname = PQfnumber(res, "rolname"); - i_tableoid = PQfnumber(res, "tableoid"); - i_oid = PQfnumber(res, "oid"); - i_stxname = PQfnumber(res, "stxname"); - i_stxdef = PQfnumber(res, "stxdef"); + statsextinfo = (StatsExtInfo *) pg_malloc(ntups * sizeof(StatsExtInfo)); - statsextinfo = (StatsExtInfo *) pg_malloc(ntups * sizeof(StatsExtInfo)); + for (i = 0; i < ntups; i++) + { + statsextinfo[i].dobj.objType = DO_STATSEXT; + statsextinfo[i].dobj.catId.tableoid = atooid(PQgetvalue(res, i, i_tableoid)); + statsextinfo[i].dobj.catId.oid = atooid(PQgetvalue(res, i, i_oid)); + AssignDumpId(&statsextinfo[i].dobj); + statsextinfo[i].dobj.name = pg_strdup(PQgetvalue(res, i, i_stxname)); + statsextinfo[i].dobj.namespace = + findNamespace(fout, + atooid(PQgetvalue(res, i, i_stxnamespace))); + statsextinfo[i].rolname = pg_strdup(PQgetvalue(res, i, i_rolname)); - for (j = 0; j < ntups; j++) - { - statsextinfo[j].dobj.objType = DO_STATSEXT; - statsextinfo[j].dobj.catId.tableoid = atooid(PQgetvalue(res, j, i_tableoid)); - statsextinfo[j].dobj.catId.oid = atooid(PQgetvalue(res, j, i_oid)); - AssignDumpId(&statsextinfo[j].dobj); - statsextinfo[j].dobj.name = pg_strdup(PQgetvalue(res, j, i_stxname)); - statsextinfo[j].dobj.namespace = 
tbinfo->dobj.namespace; - statsextinfo[j].statsexttable = tbinfo; - statsextinfo[j].statsextdef = pg_strdup(PQgetvalue(res, j, i_stxdef)); - } + /* Decide whether we want to dump it */ + selectDumpableObject(&(statsextinfo[i].dobj), fout); - PQclear(res); + /* Stats objects do not currently have ACLs. */ + statsextinfo[i].dobj.dump &= ~DUMP_COMPONENT_ACL; } + PQclear(res); destroyPQExpBuffer(query); } @@ -16486,25 +16458,41 @@ static void dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo) { DumpOptions *dopt = fout->dopt; - TableInfo *tbinfo = statsextinfo->statsexttable; PQExpBuffer q; PQExpBuffer delq; PQExpBuffer labelq; + PQExpBuffer query; + PGresult *res; + char *stxdef; - if (dopt->dataOnly) + /* Skip if not to be dumped */ + if (!statsextinfo->dobj.dump || dopt->dataOnly) return; q = createPQExpBuffer(); delq = createPQExpBuffer(); labelq = createPQExpBuffer(); + query = createPQExpBuffer(); + + /* Make sure we are in proper schema so references are qualified */ + selectSourceSchema(fout, statsextinfo->dobj.namespace->dobj.name); + + appendPQExpBuffer(query, "SELECT " + "pg_catalog.pg_get_statisticsobjdef('%u'::pg_catalog.oid)", + statsextinfo->dobj.catId.oid); + + res = ExecuteSqlQueryForSingleRow(fout, query->data); + + stxdef = PQgetvalue(res, 0, 0); appendPQExpBuffer(labelq, "STATISTICS %s", fmtId(statsextinfo->dobj.name)); - appendPQExpBuffer(q, "%s;\n", statsextinfo->statsextdef); + /* Result of pg_get_statisticsobjdef is complete except for semicolon */ + appendPQExpBuffer(q, "%s;\n", stxdef); appendPQExpBuffer(delq, "DROP STATISTICS %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); + fmtId(statsextinfo->dobj.namespace->dobj.name)); appendPQExpBuffer(delq, "%s;\n", fmtId(statsextinfo->dobj.name)); @@ -16512,9 +16500,9 @@ dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo) ArchiveEntry(fout, statsextinfo->dobj.catId, statsextinfo->dobj.dumpId, statsextinfo->dobj.name, - tbinfo->dobj.namespace->dobj.name, + statsextinfo->dobj.namespace->dobj.name, NULL, - tbinfo->rolname, false, + statsextinfo->rolname, false, "STATISTICS", SECTION_POST_DATA, q->data, delq->data, NULL, NULL, 0, @@ -16523,14 +16511,16 @@ dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo) /* Dump Statistics Comments */ if (statsextinfo->dobj.dump & DUMP_COMPONENT_COMMENT) dumpComment(fout, labelq->data, - tbinfo->dobj.namespace->dobj.name, - tbinfo->rolname, + statsextinfo->dobj.namespace->dobj.name, + statsextinfo->rolname, statsextinfo->dobj.catId, 0, statsextinfo->dobj.dumpId); + PQclear(res); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); destroyPQExpBuffer(labelq); + destroyPQExpBuffer(query); } /* diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h index 6c18d451ef..a4d6d926a8 100644 --- a/src/bin/pg_dump/pg_dump.h +++ b/src/bin/pg_dump/pg_dump.h @@ -380,8 +380,7 @@ typedef struct _indexAttachInfo typedef struct _statsExtInfo { DumpableObject dobj; - TableInfo *statsexttable; /* link to table the stats ext is for */ - char *statsextdef; + char *rolname; /* name of owner, or empty string */ } StatsExtInfo; typedef struct _ruleInfo @@ -694,7 +693,7 @@ extern TableInfo *getTables(Archive *fout, int *numTables); extern void getOwnedSeqs(Archive *fout, TableInfo tblinfo[], int numTables); extern InhInfo *getInherits(Archive *fout, int *numInherits); extern void getIndexes(Archive *fout, TableInfo tblinfo[], int numTables); -extern void getExtendedStatistics(Archive *fout, TableInfo tblinfo[], int numTables); +extern void getExtendedStatistics(Archive *fout); extern 
void getConstraints(Archive *fout, TableInfo tblinfo[], int numTables); extern RuleInfo *getRules(Archive *fout, int *numRules); extern void getTriggers(Archive *fout, TableInfo tblinfo[], int numTables); diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl index 3e9b4d94dc..7b21709f76 100644 --- a/src/bin/pg_dump/t/002_pg_dump.pl +++ b/src/bin/pg_dump/t/002_pg_dump.pl @@ -1420,7 +1420,7 @@ 'ALTER ... OWNER commands (except post-data objects)' => { all_runs => 0, # catch-all regexp => -qr/^ALTER (?!EVENT TRIGGER|LARGE OBJECT|PUBLICATION|SUBSCRIPTION)(.*) OWNER TO .*;/m, +qr/^ALTER (?!EVENT TRIGGER|LARGE OBJECT|STATISTICS|PUBLICATION|SUBSCRIPTION)(.*) OWNER TO .*;/m, like => {}, # use more-specific options above unlike => { column_inserts => 1, From 91389228a1007fa3845e29e17568e52ab1726d5d Mon Sep 17 00:00:00 2001 From: Bruce Momjian Date: Mon, 12 Feb 2018 01:27:06 -0500 Subject: [PATCH 0978/1087] psql: give ^D hint for \q in place where \q is ignored Also add comment on why exit/quit are not documented. Discussion: https://postgr.es/m/20180202053928.GA13472@momjian.us --- src/bin/psql/mainloop.c | 27 ++++++++++++++++++++++++++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/src/bin/psql/mainloop.c b/src/bin/psql/mainloop.c index a3ef15058f..c06ce3ca09 100644 --- a/src/bin/psql/mainloop.c +++ b/src/bin/psql/mainloop.c @@ -223,6 +223,7 @@ MainLoop(FILE *source) char *rest_of_line = NULL; bool found_help = false; bool found_exit_or_quit = false; + bool found_q = false; /* Search for the words we recognize; must be first word */ if (pg_strncasecmp(first_word, "help", 4) == 0) @@ -237,10 +238,18 @@ MainLoop(FILE *source) found_exit_or_quit = true; } + else if (strncmp(first_word, "\\q", 2) == 0) + { + rest_of_line = first_word + 2; + found_q = true; + } + /* * If we found a command word, check whether the rest of the line * contains only whitespace plus maybe one semicolon. If not, - * ignore the command word after all. + * ignore the command word after all. These commands are only + * for compatibility with other SQL clients and are not + * documented. */ if (rest_of_line != NULL) { @@ -288,6 +297,7 @@ MainLoop(FILE *source) continue; } } + /* * "quit" and "exit" are only commands when the query buffer is * empty, but we emit a one-line message even when it isn't to @@ -318,6 +328,21 @@ MainLoop(FILE *source) break; } } + + /* + * If they typed "\q" in a place where "\q" is not active, + * supply a hint. The text is still added to the query + * buffer. + */ + if (found_q && query_buf->len != 0 && + prompt_status != PROMPT_READY && + prompt_status != PROMPT_CONTINUE && + prompt_status != PROMPT_PAREN) +#ifndef WIN32 + puts(_("Use control-D to quit.")); +#else + puts(_("Use control-C to quit.")); +#endif } /* echo back if flag is set, unless interactive */ From 80f021ef139affdb219ccef71fff283e8f91f112 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 12 Feb 2018 11:38:06 -0300 Subject: [PATCH 0979/1087] Add missing article Noticed while reviewing nearby text --- doc/src/sgml/ddl.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index bee1ebd7db..8c3be5b103 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -3078,7 +3078,7 @@ CREATE TABLE measurement ( parent. Note that specifying bounds such that the new partition's values will overlap with those in one or more existing partitions will cause an error. 
Inserting data into the parent table that does not map - to one of the existing partitions will cause an error; appropriate + to one of the existing partitions will cause an error; an appropriate partition must be added manually. From 88ef48c1ccee6a2200e01318180cf521413b3012 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Mon, 12 Feb 2018 12:55:12 -0500 Subject: [PATCH 0980/1087] Fix parallel index builds for dynamic_shared_memory_type=none. The previous code failed to realize that this setting effectively disables parallelism, and would crash if it decided to attempt parallelism anyway. Instead, treat it as a disabling condition. Kyotaro Horiguchi, who also reported the issue. Reviewed by Michael Paquier and Peter Geoghegan. Discussion: http://postgr.es/m/20180209.170635.256350357.horiguchi.kyotaro@lab.ntt.co.jp --- src/backend/optimizer/plan/planner.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 740de4957d..3e8cd1447c 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -5825,7 +5825,8 @@ plan_create_index_workers(Oid tableOid, Oid indexOid) double allvisfrac; /* Return immediately when parallelism disabled */ - if (max_parallel_maintenance_workers == 0) + if (dynamic_shared_memory_type == DSM_IMPL_NONE || + max_parallel_maintenance_workers == 0) return 0; /* Set up largely-dummy planner state */ From 8237f27b504ff1d1e2da7ae4c81a7f72ea0e0e3e Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 12 Feb 2018 19:30:30 -0300 Subject: [PATCH 0981/1087] get_relid_attribute_name is dead, long live get_attname MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The modern way is to use a missing_ok argument instead of two separate almost-identical routines, so do that. Author: Michaël Paquier Reviewed-by: Álvaro Herrera Discussion: https://postgr.es/m/20180201063212.GE6398@paquier.xyz --- contrib/postgres_fdw/deparse.c | 2 +- contrib/postgres_fdw/postgres_fdw.c | 2 +- contrib/sepgsql/dml.c | 5 +---- src/backend/catalog/heap.c | 3 ++- src/backend/catalog/objectaddress.c | 9 +++++---- src/backend/parser/parse_relation.c | 2 +- src/backend/parser/parse_utilcmd.c | 10 +++++----- src/backend/utils/adt/ruleutils.c | 26 +++++++++++++----------- src/backend/utils/cache/lsyscache.c | 31 ++++++++--------------------- src/backend/utils/cache/relcache.c | 2 +- src/include/utils/lsyscache.h | 3 +-- 11 files changed, 40 insertions(+), 55 deletions(-) diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c index 32c7261dae..f4b38c65ac 100644 --- a/contrib/postgres_fdw/deparse.c +++ b/contrib/postgres_fdw/deparse.c @@ -2176,7 +2176,7 @@ deparseColumnRef(StringInfo buf, int varno, int varattno, PlannerInfo *root, * FDW option, use attribute name. 
*/ if (colname == NULL) - colname = get_relid_attribute_name(rte->relid, varattno); + colname = get_attname(rte->relid, varattno, false); if (qualify_col) ADD_REL_QUALIFIER(buf, varno); diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index c1d7f8032e..d37180ae10 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -5545,7 +5545,7 @@ conversion_error_callback(void *arg) if (var->varattno == 0) is_wholerow = true; else - attname = get_relid_attribute_name(rte->relid, var->varattno); + attname = get_attname(rte->relid, var->varattno, false); relname = get_rel_name(rte->relid); } diff --git a/contrib/sepgsql/dml.c b/contrib/sepgsql/dml.c index 36cdb27a76..c1fa320eb4 100644 --- a/contrib/sepgsql/dml.c +++ b/contrib/sepgsql/dml.c @@ -118,10 +118,7 @@ fixup_inherited_columns(Oid parentId, Oid childId, Bitmapset *columns) continue; } - attname = get_attname(parentId, attno); - if (!attname) - elog(ERROR, "cache lookup failed for attribute %d of relation %u", - attno, parentId); + attname = get_attname(parentId, attno, false); attno = get_attnum(childId, attname); if (attno == InvalidAttrNumber) elog(ERROR, "cache lookup failed for attribute %s of relation %u", diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c index 0f34f5381a..cf36ce4add 100644 --- a/src/backend/catalog/heap.c +++ b/src/backend/catalog/heap.c @@ -2405,7 +2405,8 @@ AddRelationNewConstraints(Relation rel, if (list_length(vars) == 1) colname = get_attname(RelationGetRelid(rel), - ((Var *) linitial(vars))->varattno); + ((Var *) linitial(vars))->varattno, + true); else colname = NULL; diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c index 570e65affb..b4c2467710 100644 --- a/src/backend/catalog/objectaddress.c +++ b/src/backend/catalog/objectaddress.c @@ -2682,8 +2682,9 @@ getObjectDescription(const ObjectAddress *object) getRelationDescription(&buffer, object->objectId); if (object->objectSubId != 0) appendStringInfo(&buffer, _(" column %s"), - get_relid_attribute_name(object->objectId, - object->objectSubId)); + get_attname(object->objectId, + object->objectSubId, + false)); break; case OCLASS_PROC: @@ -4103,8 +4104,8 @@ getObjectIdentityParts(const ObjectAddress *object, { char *attr; - attr = get_relid_attribute_name(object->objectId, - object->objectSubId); + attr = get_attname(object->objectId, object->objectSubId, + false); appendStringInfo(&buffer, ".%s", quote_identifier(attr)); if (objname) *objname = lappend(*objname, attr); diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c index 2625da5327..053ae02c9f 100644 --- a/src/backend/parser/parse_relation.c +++ b/src/backend/parser/parse_relation.c @@ -2687,7 +2687,7 @@ get_rte_attribute_name(RangeTblEntry *rte, AttrNumber attnum) * built (which can easily happen for rules). */ if (rte->rtekind == RTE_RELATION) - return get_relid_attribute_name(rte->relid, attnum); + return get_attname(rte->relid, attnum, false); /* * Otherwise use the column name from eref. There should always be one. 
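A minimal sketch of the resulting call pattern, using placeholder relid/attnum values (not taken from any hunk above): with missing_ok = true the lookup soft-fails and returns NULL, while missing_ok = false raises internally the same cache-lookup error that get_relid_attribute_name used to raise.

    char	   *attname;

    /* soft lookup: returns a palloc'd copy of the name, or NULL on a miss */
    attname = get_attname(relid, attnum, true);
    if (attname == NULL)
        elog(ERROR, "cache lookup failed for attribute %d of relation %u",
             attnum, relid);

    /* hard lookup: a miss raises the cache-lookup error internally */
    attname = get_attname(relid, attnum, false);
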
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index d415d7180f..7c2cd4656a 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -1470,7 +1470,7 @@ generateClonedIndexStmt(RangeVar *heapRel, Oid heapRelid, Relation source_idx, /* Simple index column */ char *attname; - attname = get_relid_attribute_name(indrelid, attnum); + attname = get_attname(indrelid, attnum, false); keycoltype = get_atttype(indrelid, attnum); iparam->name = attname; @@ -3406,8 +3406,8 @@ transformPartitionBound(ParseState *pstate, Relation parent, /* Get the only column's name in case we need to output an error */ if (key->partattrs[0] != 0) - colname = get_relid_attribute_name(RelationGetRelid(parent), - key->partattrs[0]); + colname = get_attname(RelationGetRelid(parent), + key->partattrs[0], false); else colname = deparse_expression((Node *) linitial(partexprs), deparse_context_for(RelationGetRelationName(parent), @@ -3491,8 +3491,8 @@ transformPartitionBound(ParseState *pstate, Relation parent, /* Get the column's name in case we need to output an error */ if (key->partattrs[i] != 0) - colname = get_relid_attribute_name(RelationGetRelid(parent), - key->partattrs[i]); + colname = get_attname(RelationGetRelid(parent), + key->partattrs[i], false); else { colname = deparse_expression((Node *) list_nth(partexprs, j), diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 28767a129a..3bb468bdad 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -908,8 +908,8 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty) if (i > 0) appendStringInfoString(&buf, ", "); - attname = get_relid_attribute_name(trigrec->tgrelid, - trigrec->tgattr.values[i]); + attname = get_attname(trigrec->tgrelid, + trigrec->tgattr.values[i], false); appendStringInfoString(&buf, quote_identifier(attname)); } } @@ -1292,7 +1292,7 @@ pg_get_indexdef_worker(Oid indexrelid, int colno, char *attname; int32 keycoltypmod; - attname = get_relid_attribute_name(indrelid, attnum); + attname = get_attname(indrelid, attnum, false); if (!colno || colno == keyno + 1) appendStringInfoString(&buf, quote_identifier(attname)); get_atttypetypmodcoll(indrelid, attnum, @@ -1535,7 +1535,7 @@ pg_get_statisticsobj_worker(Oid statextid, bool missing_ok) if (colno > 0) appendStringInfoString(&buf, ", "); - attname = get_relid_attribute_name(statextrec->stxrelid, attnum); + attname = get_attname(statextrec->stxrelid, attnum, false); appendStringInfoString(&buf, quote_identifier(attname)); } @@ -1692,7 +1692,7 @@ pg_get_partkeydef_worker(Oid relid, int prettyFlags, char *attname; int32 keycoltypmod; - attname = get_relid_attribute_name(relid, attnum); + attname = get_attname(relid, attnum, false); appendStringInfoString(&buf, quote_identifier(attname)); get_atttypetypmodcoll(relid, attnum, &keycoltype, &keycoltypmod, @@ -2196,7 +2196,7 @@ decompile_column_index_array(Datum column_index_array, Oid relId, { char *colName; - colName = get_relid_attribute_name(relId, DatumGetInt16(keys[j])); + colName = get_attname(relId, DatumGetInt16(keys[j]), false); if (j == 0) appendStringInfoString(buf, quote_identifier(colName)); @@ -6015,8 +6015,9 @@ get_insert_query_def(Query *query, deparse_context *context) * tle->resname, since resname will fail to track RENAME. 
*/ appendStringInfoString(buf, - quote_identifier(get_relid_attribute_name(rte->relid, - tle->resno))); + quote_identifier(get_attname(rte->relid, + tle->resno, + false))); /* * Print any indirection needed (subfields or subscripts), and strip @@ -6319,8 +6320,9 @@ get_update_query_targetlist_def(Query *query, List *targetList, * tle->resname, since resname will fail to track RENAME. */ appendStringInfoString(buf, - quote_identifier(get_relid_attribute_name(rte->relid, - tle->resno))); + quote_identifier(get_attname(rte->relid, + tle->resno, + false))); /* * Print any indirection needed (subfields or subscripts), and strip @@ -10340,8 +10342,8 @@ processIndirection(Node *node, deparse_context *context) * target lists, but this function cannot be used for that case. */ Assert(list_length(fstore->fieldnums) == 1); - fieldname = get_relid_attribute_name(typrelid, - linitial_int(fstore->fieldnums)); + fieldname = get_attname(typrelid, + linitial_int(fstore->fieldnums), false); appendStringInfo(buf, ".%s", quote_identifier(fieldname)); /* diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c index e8aa179347..51b6b4f7bb 100644 --- a/src/backend/utils/cache/lsyscache.c +++ b/src/backend/utils/cache/lsyscache.c @@ -765,19 +765,19 @@ get_opfamily_proc(Oid opfamily, Oid lefttype, Oid righttype, int16 procnum) /* * get_attname - * Given the relation id and the attribute number, - * return the "attname" field from the attribute relation. + * Given the relation id and the attribute number, return the "attname" + * field from the attribute relation as a palloc'ed string. * - * Note: returns a palloc'd copy of the string, or NULL if no such attribute. + * If no such attribute exists and missing_ok is true, NULL is returned; + * otherwise a not-intended-for-user-consumption error is thrown. */ char * -get_attname(Oid relid, AttrNumber attnum) +get_attname(Oid relid, AttrNumber attnum, bool missing_ok) { HeapTuple tp; tp = SearchSysCache2(ATTNUM, - ObjectIdGetDatum(relid), - Int16GetDatum(attnum)); + ObjectIdGetDatum(relid), Int16GetDatum(attnum)); if (HeapTupleIsValid(tp)) { Form_pg_attribute att_tup = (Form_pg_attribute) GETSTRUCT(tp); @@ -787,26 +787,11 @@ get_attname(Oid relid, AttrNumber attnum) ReleaseSysCache(tp); return result; } - else - return NULL; -} - -/* - * get_relid_attribute_name - * - * Same as above routine get_attname(), except that error - * is handled by elog() instead of returning NULL. 
- */ -char * -get_relid_attribute_name(Oid relid, AttrNumber attnum) -{ - char *attname; - attname = get_attname(relid, attnum); - if (attname == NULL) + if (!missing_ok) elog(ERROR, "cache lookup failed for attribute %d of relation %u", attnum, relid); - return attname; + return NULL; } /* diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c index d5cc246156..1ebf9c4ed2 100644 --- a/src/backend/utils/cache/relcache.c +++ b/src/backend/utils/cache/relcache.c @@ -5250,7 +5250,7 @@ errtablecol(Relation rel, int attnum) if (attnum > 0 && attnum <= reldesc->natts) colname = NameStr(TupleDescAttr(reldesc, attnum - 1)->attname); else - colname = get_relid_attribute_name(RelationGetRelid(rel), attnum); + colname = get_attname(RelationGetRelid(rel), attnum, false); return errtablecolname(rel, colname); } diff --git a/src/include/utils/lsyscache.h b/src/include/utils/lsyscache.h index 9731e6f7ae..1f6c04a8f3 100644 --- a/src/include/utils/lsyscache.h +++ b/src/include/utils/lsyscache.h @@ -83,8 +83,7 @@ extern List *get_op_btree_interpretation(Oid opno); extern bool equality_ops_are_compatible(Oid opno1, Oid opno2); extern Oid get_opfamily_proc(Oid opfamily, Oid lefttype, Oid righttype, int16 procnum); -extern char *get_attname(Oid relid, AttrNumber attnum); -extern char *get_relid_attribute_name(Oid relid, AttrNumber attnum); +extern char *get_attname(Oid relid, AttrNumber attnum, bool missing_ok); extern AttrNumber get_attnum(Oid relid, const char *attname); extern char get_attidentity(Oid relid, AttrNumber attnum); extern Oid get_atttype(Oid relid, AttrNumber attnum); From ebdb42a0d6a61b93a5bb9f4204408edf5959332c Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 12 Feb 2018 22:39:52 -0500 Subject: [PATCH 0982/1087] Fix typo Author: Masahiko Sawada --- src/backend/replication/logical/origin.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c index 5cc9a955d7..963878a5d8 100644 --- a/src/backend/replication/logical/origin.c +++ b/src/backend/replication/logical/origin.c @@ -341,7 +341,7 @@ replorigin_drop(RepOriginId roident, bool nowait) /* * To interlock against concurrent drops, we hold ExclusiveLock on - * pg_replication_origin throughout this funcion. + * pg_replication_origin throughout this function. */ rel = heap_open(ReplicationOriginRelationId, ExclusiveLock); From b4e2ada347bd8ae941171bd0761462e5b11b765d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 13 Feb 2018 09:12:45 -0500 Subject: [PATCH 0983/1087] In LDAP test, restart after pg_hba.conf changes Instead of issuing a reload after pg_hba.conf changes between test cases, run a full restart. With a reload, an error in the new pg_hba.conf is ignored and the tests will continue to run with the old settings, invalidating the subsequent test cases. With a restart, a faulty pg_hba.conf will lead to the test being aborted, which is what we'd rather want. --- src/test/ldap/t/001_auth.pl | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/src/test/ldap/t/001_auth.pl b/src/test/ldap/t/001_auth.pl index 5508da459f..a83d96ae91 100644 --- a/src/test/ldap/t/001_auth.pl +++ b/src/test/ldap/t/001_auth.pl @@ -130,7 +130,7 @@ sub test_access unlink($node->data_dir . 
'/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapprefix="uid=" ldapsuffix=",dc=example,dc=net"}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'wrong'; test_access($node, 'test0', 2, 'simple bind authentication fails if user not found in LDAP'); @@ -142,7 +142,7 @@ sub test_access unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapbasedn="$ldap_basedn"}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'wrong'; test_access($node, 'test0', 2, 'search+bind authentication fails if user not found in LDAP'); @@ -154,7 +154,7 @@ sub test_access unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldap_url/$ldap_basedn?uid?sub"}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'wrong'; test_access($node, 'test0', 2, 'search+bind with LDAP URL authentication fails if user not found in LDAP'); @@ -166,7 +166,7 @@ sub test_access unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapbasedn="$ldap_basedn" ldapsearchfilter="(|(uid=\$username)(mail=\$username))"}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 0, 'search filter finds by uid'); @@ -177,7 +177,7 @@ sub test_access unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldap_url/$ldap_basedn??sub?(|(uid=\$username)(mail=\$username))"}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 0, 'search filter finds by uid'); @@ -189,7 +189,7 @@ sub test_access # override. It might be useful in a case like this. unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldap_url/$ldap_basedn??sub" ldapsearchfilter="(|(uid=\$username)(mail=\$username))"}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 0, 'combined LDAP URL and search filter'); @@ -199,7 +199,7 @@ sub test_access # note bad ldapprefix with a question mark that triggers a diagnostic message unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapprefix="?uid=" ldapsuffix=""}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 2, 'any attempt fails due to bad search pattern'); @@ -209,7 +209,7 @@ sub test_access # request StartTLS with ldaptls=1 unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapport=$ldap_port ldapbasedn="$ldap_basedn" ldapsearchfilter="(uid=\$username)" ldaptls=1}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 0, 'StartTLS'); @@ -217,7 +217,7 @@ sub test_access # request LDAPS with ldapscheme=ldaps unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapserver=$ldap_server ldapscheme=ldaps ldapport=$ldaps_port ldapbasedn="$ldap_basedn" ldapsearchfilter="(uid=\$username)"}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 0, 'LDAPS'); @@ -225,7 +225,7 @@ sub test_access # request LDAPS with ldapurl=ldaps://... unlink($node->data_dir . 
'/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldaps_url/$ldap_basedn??sub?(uid=\$username)"}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 0, 'LDAPS with URL'); @@ -233,7 +233,7 @@ sub test_access # bad combination of LDAPS and StartTLS unlink($node->data_dir . '/pg_hba.conf'); $node->append_conf('pg_hba.conf', qq{local all all ldap ldapurl="$ldaps_url/$ldap_basedn??sub?(uid=\$username)" ldaptls=1}); -$node->reload; +$node->restart; $ENV{"PGPASSWORD"} = 'secret1'; test_access($node, 'test1', 2, 'bad combination of LDAPS and StartTLS'); From a7b8f0661d9ca9656ba58546ed871b36dbf8504d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 12 Feb 2018 13:47:18 -0500 Subject: [PATCH 0984/1087] Fix typo --- .../regress/expected/create_function_3.out | 144 +++++++++--------- src/test/regress/sql/create_function_3.sql | 88 +++++------ 2 files changed, 116 insertions(+), 116 deletions(-) diff --git a/src/test/regress/expected/create_function_3.out b/src/test/regress/expected/create_function_3.out index b5e19485e5..2fd25b8593 100644 --- a/src/test/regress/expected/create_function_3.out +++ b/src/test/regress/expected/create_function_3.out @@ -69,124 +69,124 @@ SELECT proname, provolatile FROM pg_proc -- -- SECURITY DEFINER | INVOKER -- -CREATE FUNCTION functext_C_1(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_C_1(int) RETURNS bool LANGUAGE 'sql' AS 'SELECT $1 > 0'; -CREATE FUNCTION functext_C_2(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_C_2(int) RETURNS bool LANGUAGE 'sql' SECURITY DEFINER AS 'SELECT $1 = 0'; -CREATE FUNCTION functext_C_3(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_C_3(int) RETURNS bool LANGUAGE 'sql' SECURITY INVOKER AS 'SELECT $1 < 0'; SELECT proname, prosecdef FROM pg_proc - WHERE oid in ('functext_C_1'::regproc, - 'functext_C_2'::regproc, - 'functext_C_3'::regproc) ORDER BY proname; + WHERE oid in ('functest_C_1'::regproc, + 'functest_C_2'::regproc, + 'functest_C_3'::regproc) ORDER BY proname; proname | prosecdef --------------+----------- - functext_c_1 | f - functext_c_2 | t - functext_c_3 | f + functest_c_1 | f + functest_c_2 | t + functest_c_3 | f (3 rows) -ALTER FUNCTION functext_C_1(int) IMMUTABLE; -- unrelated change, no effect -ALTER FUNCTION functext_C_2(int) SECURITY INVOKER; -ALTER FUNCTION functext_C_3(int) SECURITY DEFINER; +ALTER FUNCTION functest_C_1(int) IMMUTABLE; -- unrelated change, no effect +ALTER FUNCTION functest_C_2(int) SECURITY INVOKER; +ALTER FUNCTION functest_C_3(int) SECURITY DEFINER; SELECT proname, prosecdef FROM pg_proc - WHERE oid in ('functext_C_1'::regproc, - 'functext_C_2'::regproc, - 'functext_C_3'::regproc) ORDER BY proname; + WHERE oid in ('functest_C_1'::regproc, + 'functest_C_2'::regproc, + 'functest_C_3'::regproc) ORDER BY proname; proname | prosecdef --------------+----------- - functext_c_1 | f - functext_c_2 | f - functext_c_3 | t + functest_c_1 | f + functest_c_2 | f + functest_c_3 | t (3 rows) -- -- LEAKPROOF -- -CREATE FUNCTION functext_E_1(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_E_1(int) RETURNS bool LANGUAGE 'sql' AS 'SELECT $1 > 100'; -CREATE FUNCTION functext_E_2(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_E_2(int) RETURNS bool LANGUAGE 'sql' LEAKPROOF AS 'SELECT $1 > 100'; SELECT proname, proleakproof FROM pg_proc - WHERE oid in ('functext_E_1'::regproc, - 'functext_E_2'::regproc) ORDER BY proname; + WHERE oid in ('functest_E_1'::regproc, + 
'functest_E_2'::regproc) ORDER BY proname; proname | proleakproof --------------+-------------- - functext_e_1 | f - functext_e_2 | t + functest_e_1 | f + functest_e_2 | t (2 rows) -ALTER FUNCTION functext_E_1(int) LEAKPROOF; -ALTER FUNCTION functext_E_2(int) STABLE; -- unrelated change, no effect +ALTER FUNCTION functest_E_1(int) LEAKPROOF; +ALTER FUNCTION functest_E_2(int) STABLE; -- unrelated change, no effect SELECT proname, proleakproof FROM pg_proc - WHERE oid in ('functext_E_1'::regproc, - 'functext_E_2'::regproc) ORDER BY proname; + WHERE oid in ('functest_E_1'::regproc, + 'functest_E_2'::regproc) ORDER BY proname; proname | proleakproof --------------+-------------- - functext_e_1 | t - functext_e_2 | t + functest_e_1 | t + functest_e_2 | t (2 rows) -ALTER FUNCTION functext_E_2(int) NOT LEAKPROOF; -- remove leakproog attribute +ALTER FUNCTION functest_E_2(int) NOT LEAKPROOF; -- remove leakproog attribute SELECT proname, proleakproof FROM pg_proc - WHERE oid in ('functext_E_1'::regproc, - 'functext_E_2'::regproc) ORDER BY proname; + WHERE oid in ('functest_E_1'::regproc, + 'functest_E_2'::regproc) ORDER BY proname; proname | proleakproof --------------+-------------- - functext_e_1 | t - functext_e_2 | f + functest_e_1 | t + functest_e_2 | f (2 rows) -- it takes superuser privilege to turn on leakproof, but not for turn off -ALTER FUNCTION functext_E_1(int) OWNER TO regress_unpriv_user; -ALTER FUNCTION functext_E_2(int) OWNER TO regress_unpriv_user; +ALTER FUNCTION functest_E_1(int) OWNER TO regress_unpriv_user; +ALTER FUNCTION functest_E_2(int) OWNER TO regress_unpriv_user; SET SESSION AUTHORIZATION regress_unpriv_user; SET search_path TO temp_func_test, public; -ALTER FUNCTION functext_E_1(int) NOT LEAKPROOF; -ALTER FUNCTION functext_E_2(int) LEAKPROOF; +ALTER FUNCTION functest_E_1(int) NOT LEAKPROOF; +ALTER FUNCTION functest_E_2(int) LEAKPROOF; ERROR: only superuser can define a leakproof function -CREATE FUNCTION functext_E_3(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_E_3(int) RETURNS bool LANGUAGE 'sql' LEAKPROOF AS 'SELECT $1 < 200'; -- failed ERROR: only superuser can define a leakproof function RESET SESSION AUTHORIZATION; -- -- CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT -- -CREATE FUNCTION functext_F_1(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_F_1(int) RETURNS bool LANGUAGE 'sql' AS 'SELECT $1 > 50'; -CREATE FUNCTION functext_F_2(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_F_2(int) RETURNS bool LANGUAGE 'sql' CALLED ON NULL INPUT AS 'SELECT $1 = 50'; -CREATE FUNCTION functext_F_3(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_F_3(int) RETURNS bool LANGUAGE 'sql' RETURNS NULL ON NULL INPUT AS 'SELECT $1 < 50'; -CREATE FUNCTION functext_F_4(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_F_4(int) RETURNS bool LANGUAGE 'sql' STRICT AS 'SELECT $1 = 50'; SELECT proname, proisstrict FROM pg_proc - WHERE oid in ('functext_F_1'::regproc, - 'functext_F_2'::regproc, - 'functext_F_3'::regproc, - 'functext_F_4'::regproc) ORDER BY proname; + WHERE oid in ('functest_F_1'::regproc, + 'functest_F_2'::regproc, + 'functest_F_3'::regproc, + 'functest_F_4'::regproc) ORDER BY proname; proname | proisstrict --------------+------------- - functext_f_1 | f - functext_f_2 | f - functext_f_3 | t - functext_f_4 | t + functest_f_1 | f + functest_f_2 | f + functest_f_3 | t + functest_f_4 | t (4 rows) -ALTER FUNCTION functext_F_1(int) IMMUTABLE; -- unrelated change, no effect -ALTER FUNCTION functext_F_2(int) STRICT; 
-ALTER FUNCTION functext_F_3(int) CALLED ON NULL INPUT; +ALTER FUNCTION functest_F_1(int) IMMUTABLE; -- unrelated change, no effect +ALTER FUNCTION functest_F_2(int) STRICT; +ALTER FUNCTION functest_F_3(int) CALLED ON NULL INPUT; SELECT proname, proisstrict FROM pg_proc - WHERE oid in ('functext_F_1'::regproc, - 'functext_F_2'::regproc, - 'functext_F_3'::regproc, - 'functext_F_4'::regproc) ORDER BY proname; + WHERE oid in ('functest_F_1'::regproc, + 'functest_F_2'::regproc, + 'functest_F_3'::regproc, + 'functest_F_4'::regproc) ORDER BY proname; proname | proisstrict --------------+------------- - functext_f_1 | f - functext_f_2 | t - functext_f_3 | f - functext_f_4 | t + functest_f_1 | f + functest_f_2 | t + functest_f_3 | f + functest_f_4 | t (4 rows) -- information_schema tests @@ -236,15 +236,15 @@ drop cascades to function functest_a_3() drop cascades to function functest_b_2(integer) drop cascades to function functest_b_3(integer) drop cascades to function functest_b_4(integer) -drop cascades to function functext_c_1(integer) -drop cascades to function functext_c_2(integer) -drop cascades to function functext_c_3(integer) -drop cascades to function functext_e_1(integer) -drop cascades to function functext_e_2(integer) -drop cascades to function functext_f_1(integer) -drop cascades to function functext_f_2(integer) -drop cascades to function functext_f_3(integer) -drop cascades to function functext_f_4(integer) +drop cascades to function functest_c_1(integer) +drop cascades to function functest_c_2(integer) +drop cascades to function functest_c_3(integer) +drop cascades to function functest_e_1(integer) +drop cascades to function functest_e_2(integer) +drop cascades to function functest_f_1(integer) +drop cascades to function functest_f_2(integer) +drop cascades to function functest_f_3(integer) +drop cascades to function functest_f_4(integer) drop cascades to function functest_b_2(bigint) DROP USER regress_unpriv_user; RESET search_path; diff --git a/src/test/regress/sql/create_function_3.sql b/src/test/regress/sql/create_function_3.sql index 0a0e407aab..6c411bdfda 100644 --- a/src/test/regress/sql/create_function_3.sql +++ b/src/test/regress/sql/create_function_3.sql @@ -52,57 +52,57 @@ SELECT proname, provolatile FROM pg_proc -- -- SECURITY DEFINER | INVOKER -- -CREATE FUNCTION functext_C_1(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_C_1(int) RETURNS bool LANGUAGE 'sql' AS 'SELECT $1 > 0'; -CREATE FUNCTION functext_C_2(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_C_2(int) RETURNS bool LANGUAGE 'sql' SECURITY DEFINER AS 'SELECT $1 = 0'; -CREATE FUNCTION functext_C_3(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_C_3(int) RETURNS bool LANGUAGE 'sql' SECURITY INVOKER AS 'SELECT $1 < 0'; SELECT proname, prosecdef FROM pg_proc - WHERE oid in ('functext_C_1'::regproc, - 'functext_C_2'::regproc, - 'functext_C_3'::regproc) ORDER BY proname; + WHERE oid in ('functest_C_1'::regproc, + 'functest_C_2'::regproc, + 'functest_C_3'::regproc) ORDER BY proname; -ALTER FUNCTION functext_C_1(int) IMMUTABLE; -- unrelated change, no effect -ALTER FUNCTION functext_C_2(int) SECURITY INVOKER; -ALTER FUNCTION functext_C_3(int) SECURITY DEFINER; +ALTER FUNCTION functest_C_1(int) IMMUTABLE; -- unrelated change, no effect +ALTER FUNCTION functest_C_2(int) SECURITY INVOKER; +ALTER FUNCTION functest_C_3(int) SECURITY DEFINER; SELECT proname, prosecdef FROM pg_proc - WHERE oid in ('functext_C_1'::regproc, - 'functext_C_2'::regproc, - 'functext_C_3'::regproc) ORDER BY 
proname; + WHERE oid in ('functest_C_1'::regproc, + 'functest_C_2'::regproc, + 'functest_C_3'::regproc) ORDER BY proname; -- -- LEAKPROOF -- -CREATE FUNCTION functext_E_1(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_E_1(int) RETURNS bool LANGUAGE 'sql' AS 'SELECT $1 > 100'; -CREATE FUNCTION functext_E_2(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_E_2(int) RETURNS bool LANGUAGE 'sql' LEAKPROOF AS 'SELECT $1 > 100'; SELECT proname, proleakproof FROM pg_proc - WHERE oid in ('functext_E_1'::regproc, - 'functext_E_2'::regproc) ORDER BY proname; + WHERE oid in ('functest_E_1'::regproc, + 'functest_E_2'::regproc) ORDER BY proname; -ALTER FUNCTION functext_E_1(int) LEAKPROOF; -ALTER FUNCTION functext_E_2(int) STABLE; -- unrelated change, no effect +ALTER FUNCTION functest_E_1(int) LEAKPROOF; +ALTER FUNCTION functest_E_2(int) STABLE; -- unrelated change, no effect SELECT proname, proleakproof FROM pg_proc - WHERE oid in ('functext_E_1'::regproc, - 'functext_E_2'::regproc) ORDER BY proname; + WHERE oid in ('functest_E_1'::regproc, + 'functest_E_2'::regproc) ORDER BY proname; -ALTER FUNCTION functext_E_2(int) NOT LEAKPROOF; -- remove leakproog attribute +ALTER FUNCTION functest_E_2(int) NOT LEAKPROOF; -- remove leakproog attribute SELECT proname, proleakproof FROM pg_proc - WHERE oid in ('functext_E_1'::regproc, - 'functext_E_2'::regproc) ORDER BY proname; + WHERE oid in ('functest_E_1'::regproc, + 'functest_E_2'::regproc) ORDER BY proname; -- it takes superuser privilege to turn on leakproof, but not for turn off -ALTER FUNCTION functext_E_1(int) OWNER TO regress_unpriv_user; -ALTER FUNCTION functext_E_2(int) OWNER TO regress_unpriv_user; +ALTER FUNCTION functest_E_1(int) OWNER TO regress_unpriv_user; +ALTER FUNCTION functest_E_2(int) OWNER TO regress_unpriv_user; SET SESSION AUTHORIZATION regress_unpriv_user; SET search_path TO temp_func_test, public; -ALTER FUNCTION functext_E_1(int) NOT LEAKPROOF; -ALTER FUNCTION functext_E_2(int) LEAKPROOF; +ALTER FUNCTION functest_E_1(int) NOT LEAKPROOF; +ALTER FUNCTION functest_E_2(int) LEAKPROOF; -CREATE FUNCTION functext_E_3(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_E_3(int) RETURNS bool LANGUAGE 'sql' LEAKPROOF AS 'SELECT $1 < 200'; -- failed RESET SESSION AUTHORIZATION; @@ -110,28 +110,28 @@ RESET SESSION AUTHORIZATION; -- -- CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT -- -CREATE FUNCTION functext_F_1(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_F_1(int) RETURNS bool LANGUAGE 'sql' AS 'SELECT $1 > 50'; -CREATE FUNCTION functext_F_2(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_F_2(int) RETURNS bool LANGUAGE 'sql' CALLED ON NULL INPUT AS 'SELECT $1 = 50'; -CREATE FUNCTION functext_F_3(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_F_3(int) RETURNS bool LANGUAGE 'sql' RETURNS NULL ON NULL INPUT AS 'SELECT $1 < 50'; -CREATE FUNCTION functext_F_4(int) RETURNS bool LANGUAGE 'sql' +CREATE FUNCTION functest_F_4(int) RETURNS bool LANGUAGE 'sql' STRICT AS 'SELECT $1 = 50'; SELECT proname, proisstrict FROM pg_proc - WHERE oid in ('functext_F_1'::regproc, - 'functext_F_2'::regproc, - 'functext_F_3'::regproc, - 'functext_F_4'::regproc) ORDER BY proname; - -ALTER FUNCTION functext_F_1(int) IMMUTABLE; -- unrelated change, no effect -ALTER FUNCTION functext_F_2(int) STRICT; -ALTER FUNCTION functext_F_3(int) CALLED ON NULL INPUT; + WHERE oid in ('functest_F_1'::regproc, + 'functest_F_2'::regproc, + 'functest_F_3'::regproc, + 'functest_F_4'::regproc) ORDER BY proname; + 
+ALTER FUNCTION functest_F_1(int) IMMUTABLE; -- unrelated change, no effect +ALTER FUNCTION functest_F_2(int) STRICT; +ALTER FUNCTION functest_F_3(int) CALLED ON NULL INPUT; SELECT proname, proisstrict FROM pg_proc - WHERE oid in ('functext_F_1'::regproc, - 'functext_F_2'::regproc, - 'functext_F_3'::regproc, - 'functext_F_4'::regproc) ORDER BY proname; + WHERE oid in ('functest_F_1'::regproc, + 'functest_F_2'::regproc, + 'functest_F_3'::regproc, + 'functest_F_4'::regproc) ORDER BY proname; -- information_schema tests From 7cd56f218d0f8953999b944bc558cd6684b15cdc Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 12 Feb 2018 14:03:04 -0500 Subject: [PATCH 0985/1087] Add tests for pg_get_functiondef --- .../regress/expected/create_function_3.out | 44 +++++++++++++++++++ src/test/regress/sql/create_function_3.sql | 8 ++++ 2 files changed, 52 insertions(+) diff --git a/src/test/regress/expected/create_function_3.out b/src/test/regress/expected/create_function_3.out index 2fd25b8593..5ff1e0dd86 100644 --- a/src/test/regress/expected/create_function_3.out +++ b/src/test/regress/expected/create_function_3.out @@ -189,6 +189,50 @@ SELECT proname, proisstrict FROM pg_proc functest_f_4 | t (4 rows) +-- pg_get_functiondef tests +SELECT pg_get_functiondef('functest_A_1'::regproc); + pg_get_functiondef +-------------------------------------------------------------------- + CREATE OR REPLACE FUNCTION temp_func_test.functest_a_1(text, date)+ + RETURNS boolean + + LANGUAGE sql + + AS $function$SELECT $1 = 'abcd' AND $2 > '2001-01-01'$function$ + + +(1 row) + +SELECT pg_get_functiondef('functest_B_3'::regproc); + pg_get_functiondef +----------------------------------------------------------------- + CREATE OR REPLACE FUNCTION temp_func_test.functest_b_3(integer)+ + RETURNS boolean + + LANGUAGE sql + + STABLE + + AS $function$SELECT $1 = 0$function$ + + +(1 row) + +SELECT pg_get_functiondef('functest_C_3'::regproc); + pg_get_functiondef +----------------------------------------------------------------- + CREATE OR REPLACE FUNCTION temp_func_test.functest_c_3(integer)+ + RETURNS boolean + + LANGUAGE sql + + SECURITY DEFINER + + AS $function$SELECT $1 < 0$function$ + + +(1 row) + +SELECT pg_get_functiondef('functest_F_2'::regproc); + pg_get_functiondef +----------------------------------------------------------------- + CREATE OR REPLACE FUNCTION temp_func_test.functest_f_2(integer)+ + RETURNS boolean + + LANGUAGE sql + + STRICT + + AS $function$SELECT $1 = 50$function$ + + +(1 row) + -- information_schema tests CREATE FUNCTION functest_IS_1(a int, b int default 1, c text default 'foo') RETURNS int diff --git a/src/test/regress/sql/create_function_3.sql b/src/test/regress/sql/create_function_3.sql index 6c411bdfda..fbdf8310e3 100644 --- a/src/test/regress/sql/create_function_3.sql +++ b/src/test/regress/sql/create_function_3.sql @@ -134,6 +134,14 @@ SELECT proname, proisstrict FROM pg_proc 'functest_F_4'::regproc) ORDER BY proname; +-- pg_get_functiondef tests + +SELECT pg_get_functiondef('functest_A_1'::regproc); +SELECT pg_get_functiondef('functest_B_3'::regproc); +SELECT pg_get_functiondef('functest_C_3'::regproc); +SELECT pg_get_functiondef('functest_F_2'::regproc); + + -- information_schema tests CREATE FUNCTION functest_IS_1(a int, b int default 1, c text default 'foo') From 7a32ac8a66903de8c352735f2a26f610f5e47090 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 13 Feb 2018 10:34:04 -0500 Subject: [PATCH 0986/1087] Add procedure support to pg_get_functiondef This also makes procedures 
work in psql's \ef and \sf commands. Reported-by: Pavel Stehule --- doc/src/sgml/func.sgml | 8 +++---- doc/src/sgml/ref/psql-ref.sgml | 10 +++++---- src/backend/utils/adt/ruleutils.c | 22 ++++++++++++++----- .../regress/expected/create_procedure.out | 11 ++++++++++ src/test/regress/sql/create_procedure.sql | 1 + 5 files changed, 38 insertions(+), 14 deletions(-) diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 640ff09a7b..4be31b082a 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -17008,22 +17008,22 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); pg_get_functiondef(func_oid) text - get definition of a function + get definition of a function or procedure pg_get_function_arguments(func_oid) text - get argument list of function's definition (with default values) + get argument list of function's or procedure's definition (with default values) pg_get_function_identity_arguments(func_oid) text - get argument list to identify a function (without default values) + get argument list to identify a function or procedure (without default values) pg_get_function_result(func_oid) text - get RETURNS clause for function + get RETURNS clause for function (returns null for a procedure) pg_get_indexdef(index_oid) diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index 6f9b30b673..8bd9b9387e 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -1815,8 +1815,9 @@ Tue Oct 26 21:40:57 CEST 1999 - This command fetches and edits the definition of the named function, - in the form of a CREATE OR REPLACE FUNCTION command. + This command fetches and edits the definition of the named function or procedure, + in the form of a CREATE OR REPLACE FUNCTION or + CREATE OR REPLACE PROCEDURE command. Editing is done in the same way as for \edit. After the editor exits, the updated command waits in the query buffer; type semicolon or \g to send it, or \r @@ -2970,8 +2971,9 @@ testdb=> \setenv LESS -imx4F - This command fetches and shows the definition of the named function, - in the form of a CREATE OR REPLACE FUNCTION command. + This command fetches and shows the definition of the named function or procedure, + in the form of a CREATE OR REPLACE FUNCTION or + CREATE OR REPLACE PROCEDURE command. The definition is printed to the current query output channel, as set by \o. diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c index 3bb468bdad..ba9fab4582 100644 --- a/src/backend/utils/adt/ruleutils.c +++ b/src/backend/utils/adt/ruleutils.c @@ -2449,6 +2449,7 @@ pg_get_functiondef(PG_FUNCTION_ARGS) StringInfoData dq; HeapTuple proctup; Form_pg_proc proc; + bool isfunction; Datum tmp; bool isnull; const char *prosrc; @@ -2472,20 +2473,28 @@ pg_get_functiondef(PG_FUNCTION_ARGS) (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("\"%s\" is an aggregate function", name))); + isfunction = (proc->prorettype != InvalidOid); + /* * We always qualify the function name, to ensure the right function gets * replaced. */ nsp = get_namespace_name(proc->pronamespace); - appendStringInfo(&buf, "CREATE OR REPLACE FUNCTION %s(", + appendStringInfo(&buf, "CREATE OR REPLACE %s %s(", + isfunction ? 
"FUNCTION" : "PROCEDURE", quote_qualified_identifier(nsp, name)); (void) print_function_arguments(&buf, proctup, false, true); - appendStringInfoString(&buf, ")\n RETURNS "); - print_function_rettype(&buf, proctup); + appendStringInfoString(&buf, ")\n"); + if (isfunction) + { + appendStringInfoString(&buf, " RETURNS "); + print_function_rettype(&buf, proctup); + appendStringInfoChar(&buf, '\n'); + } print_function_trftypes(&buf, proctup); - appendStringInfo(&buf, "\n LANGUAGE %s\n", + appendStringInfo(&buf, " LANGUAGE %s\n", quote_identifier(get_language_name(proc->prolang, false))); /* Emit some miscellaneous options on one line */ @@ -2607,10 +2616,11 @@ pg_get_functiondef(PG_FUNCTION_ARGS) * * Since the user is likely to be editing the function body string, we * shouldn't use a short delimiter that he might easily create a conflict - * with. Hence prefer "$function$", but extend if needed. + * with. Hence prefer "$function$"/"$procedure$", but extend if needed. */ initStringInfo(&dq); - appendStringInfoString(&dq, "$function"); + appendStringInfoChar(&dq, '$'); + appendStringInfoString(&dq, (isfunction ? "function" : "procedure")); while (strstr(prosrc, dq.data) != NULL) appendStringInfoChar(&dq, 'x'); appendStringInfoChar(&dq, '$'); diff --git a/src/test/regress/expected/create_procedure.out b/src/test/regress/expected/create_procedure.out index 873907dc43..e7bede24fa 100644 --- a/src/test/regress/expected/create_procedure.out +++ b/src/test/regress/expected/create_procedure.out @@ -30,6 +30,17 @@ CALL ptest1(substring(random()::text, 1, 1)); -- ok, volatile arg public | ptest1 | | x text | proc (1 row) +SELECT pg_get_functiondef('ptest1'::regproc); + pg_get_functiondef +--------------------------------------------------- + CREATE OR REPLACE PROCEDURE public.ptest1(x text)+ + LANGUAGE sql + + AS $procedure$ + + INSERT INTO cp_test VALUES (1, x); + + $procedure$ + + +(1 row) + SELECT * FROM cp_test ORDER BY b COLLATE "C"; a | b ---+------- diff --git a/src/test/regress/sql/create_procedure.sql b/src/test/regress/sql/create_procedure.sql index d65e568a64..774c12ee34 100644 --- a/src/test/regress/sql/create_procedure.sql +++ b/src/test/regress/sql/create_procedure.sql @@ -17,6 +17,7 @@ CALL ptest1('xy' || 'zzy'); -- ok, constant-folded arg CALL ptest1(substring(random()::text, 1, 1)); -- ok, volatile arg \df ptest1 +SELECT pg_get_functiondef('ptest1'::regproc); SELECT * FROM cp_test ORDER BY b COLLATE "C"; From 2ac3e6acc228e4b99022019379c6d5c4b61b231c Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 13 Feb 2018 10:39:51 -0500 Subject: [PATCH 0987/1087] doc: pg_function_is_visible also applies to aggregates and procedures --- doc/src/sgml/func.sgml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 4be31b082a..1e535cf215 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -16820,6 +16820,8 @@ SELECT relname FROM pg_class WHERE pg_table_is_visible(oid); Each function performs the visibility check for one type of database object. Note that pg_table_is_visible can also be used with views, materialized views, indexes, sequences and foreign tables; + pg_function_is_visible can also be used with + procedures and aggregates; pg_type_is_visible can also be used with domains. 
For functions and operators, an object in the search path is visible if there is no object of the same name From 4b93f57999a2ca9b9c9e573ea32ab1aeaa8bf496 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 13 Feb 2018 18:52:21 -0500 Subject: [PATCH 0988/1087] Make plpgsql use its DTYPE_REC code paths for composite-type variables. Formerly, DTYPE_REC was used only for variables declared as "record"; variables of named composite types used DTYPE_ROW, which is faster for some purposes but much less flexible. In particular, the ROW code paths are entirely incapable of dealing with DDL-caused changes to the number or data types of the columns of a row variable, once a particular plpgsql function has been parsed for the first time in a session. And, since the stored representation of a ROW isn't a tuple, there wasn't any easy way to deal with variables of domain-over-composite types, since the domain constraint checking code would expect the value to be checked to be a tuple. A lesser, but still real, annoyance is that ROW format cannot represent a true NULL composite value, only a row of per-field NULL values, which is not exactly the same thing. Hence, switch to using DTYPE_REC for all composite-typed variables, whether "record", named composite type, or domain over named composite type. DTYPE_ROW remains but is used only for its native purpose, to represent a fixed-at-compile-time list of variables, for instance the targets of an INTO clause. To accomplish this without taking significant performance losses, introduce infrastructure that allows storing composite-type variables as "expanded objects", similar to the "expanded array" infrastructure introduced in commit 1dc5ebc90. A composite variable's value is thereby kept (most of the time) in the form of separate Datums, so that field accesses and updates are not much more expensive than they were in the ROW format. This holds the line, more or less, on performance of variables of named composite types in field-access-intensive microbenchmarks, and makes variables declared "record" perform much better than before in similar tests. In addition, the logic involved with enforcing composite-domain constraints against updates of individual fields is in the expanded record infrastructure not plpgsql proper, so that it might be reusable for other purposes. In further support of this, introduce a typcache feature for assigning a unique-within-process identifier to each distinct tuple descriptor of interest; in particular, DDL alterations on composite types result in a new identifier for that type. This allows very cheap detection of the need to refresh tupdesc-dependent data. This improves on the "tupDescSeqNo" idea I had in commit 687f096ea: that assigned identifying sequence numbers to successive versions of individual composite types, but the numbers were not unique across different types, nor was there support for assigning numbers to registered record types. In passing, allow plpgsql functions to accept as well as return type "record". There was no good reason for the old restriction, and it was out of step with most of the other PLs. 
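A minimal sketch, not part of this commit, of reading one field out of a composite datum held as an expanded record, using the accessors the patch adds ("d" is assumed to be a Datum pointing at an expanded record):

    ExpandedRecordHeader *erh = (ExpandedRecordHeader *) DatumGetEOHP(d);
    TupleDesc	tupdesc;
    Datum		fieldval;
    bool		isnull;

    Assert(erh->er_magic == ER_MAGIC);

    /* the tupdesc tracks the current definition of the composite type */
    tupdesc = expanded_record_get_tupdesc(erh);

    /* fetch attribute 1 directly, without flattening to a heap tuple */
    fieldval = expanded_record_get_field(erh, 1, &isnull);

This is the same access pattern the patch installs in ExecEvalFieldSelect as the fast path for expanded records.
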
Tom Lane, reviewed by Pavel Stehule Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us --- doc/src/sgml/plpgsql.sgml | 12 +- src/backend/executor/execExprInterp.c | 135 +- src/backend/utils/adt/Makefile | 2 +- src/backend/utils/adt/expandedrecord.c | 1569 ++++++++++++++ src/backend/utils/cache/typcache.c | 84 +- src/include/utils/expandedrecord.h | 227 ++ src/include/utils/typcache.h | 14 +- src/pl/plpgsql/src/Makefile | 2 +- .../plpgsql/src/expected/plpgsql_record.out | 662 ++++++ src/pl/plpgsql/src/pl_comp.c | 380 +--- src/pl/plpgsql/src/pl_exec.c | 1853 ++++++++++++----- src/pl/plpgsql/src/pl_funcs.c | 9 +- src/pl/plpgsql/src/pl_gram.y | 15 +- src/pl/plpgsql/src/pl_handler.c | 5 +- src/pl/plpgsql/src/plpgsql.h | 59 +- src/pl/plpgsql/src/sql/plpgsql_record.sql | 441 ++++ src/pl/plpython/plpy_typeio.c | 8 +- src/pl/plpython/plpy_typeio.h | 4 +- src/test/regress/expected/plpgsql.out | 11 +- src/test/regress/sql/plpgsql.sql | 3 + 20 files changed, 4589 insertions(+), 906 deletions(-) create mode 100644 src/backend/utils/adt/expandedrecord.c create mode 100644 src/include/utils/expandedrecord.h create mode 100644 src/pl/plpgsql/src/expected/plpgsql_record.out create mode 100644 src/pl/plpgsql/src/sql/plpgsql_record.sql diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index 90a3c00dfe..c1e3c6a19d 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -123,7 +123,9 @@ and they can return a result of any of these types. They can also accept or return any composite type (row type) specified by name. It is also possible to declare a PL/pgSQL - function as returning record, which means that the result + function as accepting record, which means that any + composite type will do as input, or + as returning record, which means that the result is a row type whose columns are determined by specification in the calling query, as discussed in . @@ -671,14 +673,6 @@ user_id users.user_id%TYPE; be selected from it, for example $1.user_id. - - Only the user-defined columns of a table row are accessible in a - row-type variable, not the OID or other system columns (because the - row could be from a view). The fields of the row type inherit the - table's field size or precision for data types such as - char(n). - - Here is an example of using composite types. 
table1 and table2 are existing tables having at least the diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index f646fd9c51..9c6c2b02e9 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -70,6 +70,7 @@ #include "utils/builtins.h" #include "utils/date.h" #include "utils/datum.h" +#include "utils/expandedrecord.h" #include "utils/lsyscache.h" #include "utils/timestamp.h" #include "utils/typcache.h" @@ -2820,57 +2821,105 @@ ExecEvalFieldSelect(ExprState *state, ExprEvalStep *op, ExprContext *econtext) if (*op->resnull) return; - /* Get the composite datum and extract its type fields */ tupDatum = *op->resvalue; - tuple = DatumGetHeapTupleHeader(tupDatum); - tupType = HeapTupleHeaderGetTypeId(tuple); - tupTypmod = HeapTupleHeaderGetTypMod(tuple); + /* We can special-case expanded records for speed */ + if (VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(tupDatum))) + { + ExpandedRecordHeader *erh = (ExpandedRecordHeader *) DatumGetEOHP(tupDatum); - /* Lookup tupdesc if first time through or if type changes */ - tupDesc = get_cached_rowtype(tupType, tupTypmod, - &op->d.fieldselect.argdesc, - econtext); + Assert(erh->er_magic == ER_MAGIC); - /* - * Find field's attr record. Note we don't support system columns here: a - * datum tuple doesn't have valid values for most of the interesting - * system columns anyway. - */ - if (fieldnum <= 0) /* should never happen */ - elog(ERROR, "unsupported reference to system column %d in FieldSelect", - fieldnum); - if (fieldnum > tupDesc->natts) /* should never happen */ - elog(ERROR, "attribute number %d exceeds number of columns %d", - fieldnum, tupDesc->natts); - attr = TupleDescAttr(tupDesc, fieldnum - 1); - - /* Check for dropped column, and force a NULL result if so */ - if (attr->attisdropped) - { - *op->resnull = true; - return; + /* Extract record's TupleDesc */ + tupDesc = expanded_record_get_tupdesc(erh); + + /* + * Find field's attr record. Note we don't support system columns + * here: a datum tuple doesn't have valid values for most of the + * interesting system columns anyway. + */ + if (fieldnum <= 0) /* should never happen */ + elog(ERROR, "unsupported reference to system column %d in FieldSelect", + fieldnum); + if (fieldnum > tupDesc->natts) /* should never happen */ + elog(ERROR, "attribute number %d exceeds number of columns %d", + fieldnum, tupDesc->natts); + attr = TupleDescAttr(tupDesc, fieldnum - 1); + + /* Check for dropped column, and force a NULL result if so */ + if (attr->attisdropped) + { + *op->resnull = true; + return; + } + + /* Check for type mismatch --- possible after ALTER COLUMN TYPE? */ + /* As in CheckVarSlotCompatibility, we should but can't check typmod */ + if (op->d.fieldselect.resulttype != attr->atttypid) + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("attribute %d has wrong type", fieldnum), + errdetail("Table has type %s, but query expects %s.", + format_type_be(attr->atttypid), + format_type_be(op->d.fieldselect.resulttype)))); + + /* extract the field */ + *op->resvalue = expanded_record_get_field(erh, fieldnum, + op->resnull); } + else + { + /* Get the composite datum and extract its type fields */ + tuple = DatumGetHeapTupleHeader(tupDatum); - /* Check for type mismatch --- possible after ALTER COLUMN TYPE? 
*/ - /* As in CheckVarSlotCompatibility, we should but can't check typmod */ - if (op->d.fieldselect.resulttype != attr->atttypid) - ereport(ERROR, - (errcode(ERRCODE_DATATYPE_MISMATCH), - errmsg("attribute %d has wrong type", fieldnum), - errdetail("Table has type %s, but query expects %s.", - format_type_be(attr->atttypid), - format_type_be(op->d.fieldselect.resulttype)))); + tupType = HeapTupleHeaderGetTypeId(tuple); + tupTypmod = HeapTupleHeaderGetTypMod(tuple); - /* heap_getattr needs a HeapTuple not a bare HeapTupleHeader */ - tmptup.t_len = HeapTupleHeaderGetDatumLength(tuple); - tmptup.t_data = tuple; + /* Lookup tupdesc if first time through or if type changes */ + tupDesc = get_cached_rowtype(tupType, tupTypmod, + &op->d.fieldselect.argdesc, + econtext); + + /* + * Find field's attr record. Note we don't support system columns + * here: a datum tuple doesn't have valid values for most of the + * interesting system columns anyway. + */ + if (fieldnum <= 0) /* should never happen */ + elog(ERROR, "unsupported reference to system column %d in FieldSelect", + fieldnum); + if (fieldnum > tupDesc->natts) /* should never happen */ + elog(ERROR, "attribute number %d exceeds number of columns %d", + fieldnum, tupDesc->natts); + attr = TupleDescAttr(tupDesc, fieldnum - 1); - /* extract the field */ - *op->resvalue = heap_getattr(&tmptup, - fieldnum, - tupDesc, - op->resnull); + /* Check for dropped column, and force a NULL result if so */ + if (attr->attisdropped) + { + *op->resnull = true; + return; + } + + /* Check for type mismatch --- possible after ALTER COLUMN TYPE? */ + /* As in CheckVarSlotCompatibility, we should but can't check typmod */ + if (op->d.fieldselect.resulttype != attr->atttypid) + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("attribute %d has wrong type", fieldnum), + errdetail("Table has type %s, but query expects %s.", + format_type_be(attr->atttypid), + format_type_be(op->d.fieldselect.resulttype)))); + + /* heap_getattr needs a HeapTuple not a bare HeapTupleHeader */ + tmptup.t_len = HeapTupleHeaderGetDatumLength(tuple); + tmptup.t_data = tuple; + + /* extract the field */ + *op->resvalue = heap_getattr(&tmptup, + fieldnum, + tupDesc, + op->resnull); + } } /* diff --git a/src/backend/utils/adt/Makefile b/src/backend/utils/adt/Makefile index 1fb018416e..61ca90312f 100644 --- a/src/backend/utils/adt/Makefile +++ b/src/backend/utils/adt/Makefile @@ -12,7 +12,7 @@ include $(top_builddir)/src/Makefile.global OBJS = acl.o amutils.o arrayfuncs.o array_expanded.o array_selfuncs.o \ array_typanalyze.o array_userfuncs.o arrayutils.o ascii.o \ bool.o cash.o char.o date.o datetime.o datum.o dbsize.o domains.o \ - encode.o enum.o expandeddatum.o \ + encode.o enum.o expandeddatum.o expandedrecord.o \ float.o format_type.o formatting.o genfile.o \ geo_ops.o geo_selfuncs.o geo_spgist.o inet_cidr_ntop.o inet_net_pton.o \ int.o int8.o json.o jsonb.o jsonb_gin.o jsonb_op.o jsonb_util.o \ diff --git a/src/backend/utils/adt/expandedrecord.c b/src/backend/utils/adt/expandedrecord.c new file mode 100644 index 0000000000..0bf5fe8cc7 --- /dev/null +++ b/src/backend/utils/adt/expandedrecord.c @@ -0,0 +1,1569 @@ +/*------------------------------------------------------------------------- + * + * expandedrecord.c + * Functions for manipulating composite expanded objects. + * + * This module supports "expanded objects" (cf. expandeddatum.h) that can + * store values of named composite types, domains over named composite types, + * and record types (registered or anonymous). 
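+ *
+ * A minimal usage sketch (hypothetical caller code, not part of this
+ * module; "typid", "tup", and "cxt" stand for values the caller supplies):
+ *
+ *		ExpandedRecordHeader *erh;
+ *		bool	isnull;
+ *		Datum	d;
+ *
+ *		erh = make_expanded_record_from_typeid(typid, -1, cxt);
+ *		expanded_record_set_tuple(erh, tup, true);
+ *		d = expanded_record_get_field(erh, 1, &isnull);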
+ * + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * + * IDENTIFICATION + * src/backend/utils/adt/expandedrecord.c + * + *------------------------------------------------------------------------- + */ +#include "postgres.h" + +#include "access/htup_details.h" +#include "catalog/heap.h" +#include "catalog/pg_type.h" +#include "utils/builtins.h" +#include "utils/datum.h" +#include "utils/expandedrecord.h" +#include "utils/memutils.h" +#include "utils/typcache.h" + + +/* "Methods" required for an expanded object */ +static Size ER_get_flat_size(ExpandedObjectHeader *eohptr); +static void ER_flatten_into(ExpandedObjectHeader *eohptr, + void *result, Size allocated_size); + +static const ExpandedObjectMethods ER_methods = +{ + ER_get_flat_size, + ER_flatten_into +}; + +/* Other local functions */ +static void ER_mc_callback(void *arg); +static MemoryContext get_domain_check_cxt(ExpandedRecordHeader *erh); +static void build_dummy_expanded_header(ExpandedRecordHeader *main_erh); +static pg_noinline void check_domain_for_new_field(ExpandedRecordHeader *erh, + int fnumber, + Datum newValue, bool isnull); +static pg_noinline void check_domain_for_new_tuple(ExpandedRecordHeader *erh, + HeapTuple tuple); + + +/* + * Build an expanded record of the specified composite type + * + * type_id can be RECORDOID, but only if a positive typmod is given. + * + * The expanded record is initially "empty", having a state logically + * equivalent to a NULL composite value (not ROW(NULL, NULL, ...)). + * Note that this might not be a valid state for a domain type; if the + * caller needs to check that, call expanded_record_set_tuple(erh, NULL). + * + * The expanded object will be a child of parentcontext. + */ +ExpandedRecordHeader * +make_expanded_record_from_typeid(Oid type_id, int32 typmod, + MemoryContext parentcontext) +{ + ExpandedRecordHeader *erh; + int flags = 0; + TupleDesc tupdesc; + uint64 tupdesc_id; + MemoryContext objcxt; + char *chunk; + + if (type_id != RECORDOID) + { + /* + * Consult the typcache to see if it's a domain over composite, and in + * any case to get the tupdesc and tupdesc identifier. + */ + TypeCacheEntry *typentry; + + typentry = lookup_type_cache(type_id, + TYPECACHE_TUPDESC | + TYPECACHE_DOMAIN_BASE_INFO); + if (typentry->typtype == TYPTYPE_DOMAIN) + { + flags |= ER_FLAG_IS_DOMAIN; + typentry = lookup_type_cache(typentry->domainBaseType, + TYPECACHE_TUPDESC); + } + if (typentry->tupDesc == NULL) + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("type %s is not composite", + format_type_be(type_id)))); + tupdesc = typentry->tupDesc; + tupdesc_id = typentry->tupDesc_identifier; + } + else + { + /* + * For RECORD types, get the tupdesc and identifier from typcache. + */ + tupdesc = lookup_rowtype_tupdesc(type_id, typmod); + tupdesc_id = assign_record_type_identifier(type_id, typmod); + } + + /* + * Allocate private context for expanded object. We use a regular-size + * context, not a small one, to improve the odds that we can fit a tupdesc + * into it without needing an extra malloc block. (This code path doesn't + * ever need to copy a tupdesc into the expanded record, but let's be + * consistent with the other ways of making an expanded record.) 
+ */ + objcxt = AllocSetContextCreate(parentcontext, + "expanded record", + ALLOCSET_DEFAULT_SIZES); + + /* + * Since we already know the number of fields in the tupdesc, we can + * allocate the dvalues/dnulls arrays along with the record header. This + * is useless if we never need those arrays, but it costs almost nothing, + * and it will save a palloc cycle if we do need them. + */ + erh = (ExpandedRecordHeader *) + MemoryContextAlloc(objcxt, MAXALIGN(sizeof(ExpandedRecordHeader)) + + tupdesc->natts * (sizeof(Datum) + sizeof(bool))); + + /* Ensure all header fields are initialized to 0/null */ + memset(erh, 0, sizeof(ExpandedRecordHeader)); + + EOH_init_header(&erh->hdr, &ER_methods, objcxt); + erh->er_magic = ER_MAGIC; + + /* Set up dvalues/dnulls, with no valid contents as yet */ + chunk = (char *) erh + MAXALIGN(sizeof(ExpandedRecordHeader)); + erh->dvalues = (Datum *) chunk; + erh->dnulls = (bool *) (chunk + tupdesc->natts * sizeof(Datum)); + erh->nfields = tupdesc->natts; + + /* Fill in composite-type identification info */ + erh->er_decltypeid = type_id; + erh->er_typeid = tupdesc->tdtypeid; + erh->er_typmod = tupdesc->tdtypmod; + erh->er_tupdesc_id = tupdesc_id; + + erh->flags = flags; + + /* + * If what we got from the typcache is a refcounted tupdesc, we need to + * acquire our own refcount on it. We manage the refcount with a memory + * context callback rather than assuming that the CurrentResourceOwner is + * longer-lived than this expanded object. + */ + if (tupdesc->tdrefcount >= 0) + { + /* Register callback to release the refcount */ + erh->er_mcb.func = ER_mc_callback; + erh->er_mcb.arg = (void *) erh; + MemoryContextRegisterResetCallback(erh->hdr.eoh_context, + &erh->er_mcb); + + /* And save the pointer */ + erh->er_tupdesc = tupdesc; + tupdesc->tdrefcount++; + + /* If we called lookup_rowtype_tupdesc, release the pin it took */ + if (type_id == RECORDOID) + DecrTupleDescRefCount(tupdesc); + } + else + { + /* + * If it's not refcounted, just assume it will outlive the expanded + * object. (This can happen for shared record types, for instance.) + */ + erh->er_tupdesc = tupdesc; + } + + /* + * We don't set ER_FLAG_DVALUES_VALID or ER_FLAG_FVALUE_VALID, so the + * record remains logically empty. + */ + + return erh; +} + +/* + * Build an expanded record of the rowtype defined by the tupdesc + * + * The tupdesc is copied if necessary (i.e., if we can't just bump its + * reference count instead). + * + * The expanded record is initially "empty", having a state logically + * equivalent to a NULL composite value (not ROW(NULL, NULL, ...)). + * + * The expanded object will be a child of parentcontext. + */ +ExpandedRecordHeader * +make_expanded_record_from_tupdesc(TupleDesc tupdesc, + MemoryContext parentcontext) +{ + ExpandedRecordHeader *erh; + uint64 tupdesc_id; + MemoryContext objcxt; + MemoryContext oldcxt; + char *chunk; + + if (tupdesc->tdtypeid != RECORDOID) + { + /* + * If it's a named composite type (not RECORD), we prefer to reference + * the typcache's copy of the tupdesc, which is guaranteed to be + * refcounted (the given tupdesc might not be). In any case, we need + * to consult the typcache to get the correct tupdesc identifier. + * + * Note that tdtypeid couldn't be a domain type, so we need not + * consider that case here. 
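+ * (A TupleDesc always describes a plain row type; domains do not carry
+ * tuple descriptors of their own, so tdtypeid can only name the composite
+ * base type.)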
+ */ + TypeCacheEntry *typentry; + + typentry = lookup_type_cache(tupdesc->tdtypeid, TYPECACHE_TUPDESC); + if (typentry->tupDesc == NULL) + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("type %s is not composite", + format_type_be(tupdesc->tdtypeid)))); + tupdesc = typentry->tupDesc; + tupdesc_id = typentry->tupDesc_identifier; + } + else + { + /* + * For RECORD types, get the appropriate unique identifier (possibly + * freshly assigned). + */ + tupdesc_id = assign_record_type_identifier(tupdesc->tdtypeid, + tupdesc->tdtypmod); + } + + /* + * Allocate private context for expanded object. We use a regular-size + * context, not a small one, to improve the odds that we can fit a tupdesc + * into it without needing an extra malloc block. + */ + objcxt = AllocSetContextCreate(parentcontext, + "expanded record", + ALLOCSET_DEFAULT_SIZES); + + /* + * Since we already know the number of fields in the tupdesc, we can + * allocate the dvalues/dnulls arrays along with the record header. This + * is useless if we never need those arrays, but it costs almost nothing, + * and it will save a palloc cycle if we do need them. + */ + erh = (ExpandedRecordHeader *) + MemoryContextAlloc(objcxt, MAXALIGN(sizeof(ExpandedRecordHeader)) + + tupdesc->natts * (sizeof(Datum) + sizeof(bool))); + + /* Ensure all header fields are initialized to 0/null */ + memset(erh, 0, sizeof(ExpandedRecordHeader)); + + EOH_init_header(&erh->hdr, &ER_methods, objcxt); + erh->er_magic = ER_MAGIC; + + /* Set up dvalues/dnulls, with no valid contents as yet */ + chunk = (char *) erh + MAXALIGN(sizeof(ExpandedRecordHeader)); + erh->dvalues = (Datum *) chunk; + erh->dnulls = (bool *) (chunk + tupdesc->natts * sizeof(Datum)); + erh->nfields = tupdesc->natts; + + /* Fill in composite-type identification info */ + erh->er_decltypeid = erh->er_typeid = tupdesc->tdtypeid; + erh->er_typmod = tupdesc->tdtypmod; + erh->er_tupdesc_id = tupdesc_id; + + /* + * Copy tupdesc if needed, but we prefer to bump its refcount if possible. + * We manage the refcount with a memory context callback rather than + * assuming that the CurrentResourceOwner is longer-lived than this + * expanded object. + */ + if (tupdesc->tdrefcount >= 0) + { + /* Register callback to release the refcount */ + erh->er_mcb.func = ER_mc_callback; + erh->er_mcb.arg = (void *) erh; + MemoryContextRegisterResetCallback(erh->hdr.eoh_context, + &erh->er_mcb); + + /* And save the pointer */ + erh->er_tupdesc = tupdesc; + tupdesc->tdrefcount++; + } + else + { + /* Just copy it */ + oldcxt = MemoryContextSwitchTo(objcxt); + erh->er_tupdesc = CreateTupleDescCopy(tupdesc); + erh->flags |= ER_FLAG_TUPDESC_ALLOCED; + MemoryContextSwitchTo(oldcxt); + } + + /* + * We don't set ER_FLAG_DVALUES_VALID or ER_FLAG_FVALUE_VALID, so the + * record remains logically empty. + */ + + return erh; +} + +/* + * Build an expanded record of the same rowtype as the given expanded record + * + * This is faster than either of the above routines because we can bypass + * typcache lookup(s). + * + * The expanded record is initially "empty" --- we do not copy whatever + * tuple might be in the source expanded record. + * + * The expanded object will be a child of parentcontext. 
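+ *
+ * For instance (a hypothetical sketch; "src", "newerh", and "cxt" are
+ * caller-supplied, not names used elsewhere in this module):
+ *
+ *		newerh = make_expanded_record_from_exprecord(src, cxt);
+ *		expanded_record_set_tuple(newerh, expanded_record_get_tuple(src), true);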
+ */ +ExpandedRecordHeader * +make_expanded_record_from_exprecord(ExpandedRecordHeader *olderh, + MemoryContext parentcontext) +{ + ExpandedRecordHeader *erh; + TupleDesc tupdesc = expanded_record_get_tupdesc(olderh); + MemoryContext objcxt; + MemoryContext oldcxt; + char *chunk; + + /* + * Allocate private context for expanded object. We use a regular-size + * context, not a small one, to improve the odds that we can fit a tupdesc + * into it without needing an extra malloc block. + */ + objcxt = AllocSetContextCreate(parentcontext, + "expanded record", + ALLOCSET_DEFAULT_SIZES); + + /* + * Since we already know the number of fields in the tupdesc, we can + * allocate the dvalues/dnulls arrays along with the record header. This + * is useless if we never need those arrays, but it costs almost nothing, + * and it will save a palloc cycle if we do need them. + */ + erh = (ExpandedRecordHeader *) + MemoryContextAlloc(objcxt, MAXALIGN(sizeof(ExpandedRecordHeader)) + + tupdesc->natts * (sizeof(Datum) + sizeof(bool))); + + /* Ensure all header fields are initialized to 0/null */ + memset(erh, 0, sizeof(ExpandedRecordHeader)); + + EOH_init_header(&erh->hdr, &ER_methods, objcxt); + erh->er_magic = ER_MAGIC; + + /* Set up dvalues/dnulls, with no valid contents as yet */ + chunk = (char *) erh + MAXALIGN(sizeof(ExpandedRecordHeader)); + erh->dvalues = (Datum *) chunk; + erh->dnulls = (bool *) (chunk + tupdesc->natts * sizeof(Datum)); + erh->nfields = tupdesc->natts; + + /* Fill in composite-type identification info */ + erh->er_decltypeid = olderh->er_decltypeid; + erh->er_typeid = olderh->er_typeid; + erh->er_typmod = olderh->er_typmod; + erh->er_tupdesc_id = olderh->er_tupdesc_id; + + /* The only flag bit that transfers over is IS_DOMAIN */ + erh->flags = olderh->flags & ER_FLAG_IS_DOMAIN; + + /* + * Copy tupdesc if needed, but we prefer to bump its refcount if possible. + * We manage the refcount with a memory context callback rather than + * assuming that the CurrentResourceOwner is longer-lived than this + * expanded object. + */ + if (tupdesc->tdrefcount >= 0) + { + /* Register callback to release the refcount */ + erh->er_mcb.func = ER_mc_callback; + erh->er_mcb.arg = (void *) erh; + MemoryContextRegisterResetCallback(erh->hdr.eoh_context, + &erh->er_mcb); + + /* And save the pointer */ + erh->er_tupdesc = tupdesc; + tupdesc->tdrefcount++; + } + else if (olderh->flags & ER_FLAG_TUPDESC_ALLOCED) + { + /* We need to make our own copy of the tupdesc */ + oldcxt = MemoryContextSwitchTo(objcxt); + erh->er_tupdesc = CreateTupleDescCopy(tupdesc); + erh->flags |= ER_FLAG_TUPDESC_ALLOCED; + MemoryContextSwitchTo(oldcxt); + } + else + { + /* + * Assume the tupdesc will outlive this expanded object, just like + * we're assuming it will outlive the source object. + */ + erh->er_tupdesc = tupdesc; + } + + /* + * We don't set ER_FLAG_DVALUES_VALID or ER_FLAG_FVALUE_VALID, so the + * record remains logically empty. + */ + + return erh; +} + +/* + * Insert given tuple as the value of the expanded record + * + * It is caller's responsibility that the tuple matches the record's + * previously-assigned rowtype. (However domain constraints, if any, + * will be checked here.) + * + * The tuple is physically copied into the expanded record's local storage + * if "copy" is true, otherwise it's caller's responsibility that the tuple + * will live as long as the expanded record does. In any case, out-of-line + * fields in the tuple are not automatically inlined. 
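+ * (They are inlined only later, by ER_get_flat_size, if and when a flat
+ * representation is actually requested.)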
+ * + * Alternatively, tuple can be NULL, in which case we just set the expanded + * record to be empty. + */ +void +expanded_record_set_tuple(ExpandedRecordHeader *erh, + HeapTuple tuple, + bool copy) +{ + int oldflags; + HeapTuple oldtuple; + char *oldfstartptr; + char *oldfendptr; + int newflags; + HeapTuple newtuple; + MemoryContext oldcxt; + + /* Shouldn't ever be trying to assign new data to a dummy header */ + Assert(!(erh->flags & ER_FLAG_IS_DUMMY)); + + /* + * Before performing the assignment, see if result will satisfy domain. + */ + if (erh->flags & ER_FLAG_IS_DOMAIN) + check_domain_for_new_tuple(erh, tuple); + + /* + * Initialize new flags, keeping only non-data status bits. + */ + oldflags = erh->flags; + newflags = oldflags & ER_FLAGS_NON_DATA; + + /* + * Copy tuple into local storage if needed. We must be sure this succeeds + * before we start to modify the expanded record's state. + */ + if (copy && tuple) + { + oldcxt = MemoryContextSwitchTo(erh->hdr.eoh_context); + newtuple = heap_copytuple(tuple); + newflags |= ER_FLAG_FVALUE_ALLOCED; + MemoryContextSwitchTo(oldcxt); + } + else + newtuple = tuple; + + /* Make copies of fields we're about to overwrite */ + oldtuple = erh->fvalue; + oldfstartptr = erh->fstartptr; + oldfendptr = erh->fendptr; + + /* + * It's now safe to update the expanded record's state. + */ + if (newtuple) + { + /* Save flat representation */ + erh->fvalue = newtuple; + erh->fstartptr = (char *) newtuple->t_data; + erh->fendptr = ((char *) newtuple->t_data) + newtuple->t_len; + newflags |= ER_FLAG_FVALUE_VALID; + + /* Remember if we have any out-of-line field values */ + if (HeapTupleHasExternal(newtuple)) + newflags |= ER_FLAG_HAVE_EXTERNAL; + } + else + { + erh->fvalue = NULL; + erh->fstartptr = erh->fendptr = NULL; + } + + erh->flags = newflags; + + /* Reset flat-size info; we don't bother to make it valid now */ + erh->flat_size = 0; + + /* + * Now, release any storage belonging to old field values. It's safe to + * do this because ER_FLAG_DVALUES_VALID is no longer set in erh->flags; + * even if we fail partway through, the record is valid, and at worst + * we've failed to reclaim some space. + */ + if (oldflags & ER_FLAG_DVALUES_ALLOCED) + { + TupleDesc tupdesc = erh->er_tupdesc; + int i; + + for (i = 0; i < erh->nfields; i++) + { + if (!erh->dnulls[i] && + !(TupleDescAttr(tupdesc, i)->attbyval)) + { + char *oldValue = (char *) DatumGetPointer(erh->dvalues[i]); + + if (oldValue < oldfstartptr || oldValue >= oldfendptr) + pfree(oldValue); + } + } + } + + /* Likewise free the old tuple, if it was locally allocated */ + if (oldflags & ER_FLAG_FVALUE_ALLOCED) + heap_freetuple(oldtuple); + + /* We won't make a new deconstructed representation until/unless needed */ +} + +/* + * make_expanded_record_from_datum: build expanded record from composite Datum + * + * This combines the functions of make_expanded_record_from_typeid and + * expanded_record_set_tuple. However, we do not force a lookup of the + * tupdesc immediately, reasoning that it might never be needed. + * + * The expanded object will be a child of parentcontext. + * + * Note: a composite datum cannot self-identify as being of a domain type, + * so we need not consider domain cases here. + */ +Datum +make_expanded_record_from_datum(Datum recorddatum, MemoryContext parentcontext) +{ + ExpandedRecordHeader *erh; + HeapTupleHeader tuphdr; + HeapTupleData tmptup; + HeapTuple newtuple; + MemoryContext objcxt; + MemoryContext oldcxt; + + /* + * Allocate private context for expanded object. 
We use a regular-size + * context, not a small one, to improve the odds that we can fit a tupdesc + * into it without needing an extra malloc block. + */ + objcxt = AllocSetContextCreate(parentcontext, + "expanded record", + ALLOCSET_DEFAULT_SIZES); + + /* Set up expanded record header, initializing fields to 0/null */ + erh = (ExpandedRecordHeader *) + MemoryContextAllocZero(objcxt, sizeof(ExpandedRecordHeader)); + + EOH_init_header(&erh->hdr, &ER_methods, objcxt); + erh->er_magic = ER_MAGIC; + + /* + * Detoast and copy source record into private context, as a HeapTuple. + * (If we actually have to detoast the source, we'll leak some memory in + * the caller's context, but it doesn't seem worth worrying about.) + */ + tuphdr = DatumGetHeapTupleHeader(recorddatum); + + tmptup.t_len = HeapTupleHeaderGetDatumLength(tuphdr); + ItemPointerSetInvalid(&(tmptup.t_self)); + tmptup.t_tableOid = InvalidOid; + tmptup.t_data = tuphdr; + + oldcxt = MemoryContextSwitchTo(objcxt); + newtuple = heap_copytuple(&tmptup); + erh->flags |= ER_FLAG_FVALUE_ALLOCED; + MemoryContextSwitchTo(oldcxt); + + /* Fill in composite-type identification info */ + erh->er_decltypeid = erh->er_typeid = HeapTupleHeaderGetTypeId(tuphdr); + erh->er_typmod = HeapTupleHeaderGetTypMod(tuphdr); + + /* remember we have a flat representation */ + erh->fvalue = newtuple; + erh->fstartptr = (char *) newtuple->t_data; + erh->fendptr = ((char *) newtuple->t_data) + newtuple->t_len; + erh->flags |= ER_FLAG_FVALUE_VALID; + + /* Shouldn't need to set ER_FLAG_HAVE_EXTERNAL */ + Assert(!HeapTupleHeaderHasExternal(tuphdr)); + + /* + * We won't look up the tupdesc till we have to, nor make a deconstructed + * representation. We don't have enough info to fill flat_size and + * friends, either. + */ + + /* return a R/W pointer to the expanded record */ + return EOHPGetRWDatum(&erh->hdr); +} + +/* + * get_flat_size method for expanded records + * + * Note: call this in a reasonably short-lived memory context, in case of + * memory leaks from activities such as detoasting. + */ +static Size +ER_get_flat_size(ExpandedObjectHeader *eohptr) +{ + ExpandedRecordHeader *erh = (ExpandedRecordHeader *) eohptr; + TupleDesc tupdesc; + Size len; + Size data_len; + int hoff; + bool hasnull; + int i; + + Assert(erh->er_magic == ER_MAGIC); + + /* + * The flat representation has to be a valid composite datum. Make sure + * that we have a registered, not anonymous, RECORD type. + */ + if (erh->er_typeid == RECORDOID && + erh->er_typmod < 0) + { + tupdesc = expanded_record_get_tupdesc(erh); + assign_record_type_typmod(tupdesc); + erh->er_typmod = tupdesc->tdtypmod; + } + + /* + * If we have a valid flattened value without out-of-line fields, we can + * just use it as-is. + */ + if (erh->flags & ER_FLAG_FVALUE_VALID && + !(erh->flags & ER_FLAG_HAVE_EXTERNAL)) + return erh->fvalue->t_len; + + /* If we have a cached size value, believe that */ + if (erh->flat_size) + return erh->flat_size; + + /* If we haven't yet deconstructed the tuple, do that */ + if (!(erh->flags & ER_FLAG_DVALUES_VALID)) + deconstruct_expanded_record(erh); + + /* Tuple descriptor must be valid by now */ + tupdesc = erh->er_tupdesc; + + /* + * Composite datums mustn't contain any out-of-line values. 
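+ * (An external TOAST pointer embedded in a composite datum could be
+ * written out inside some other tuple and later dangle once the
+ * referenced value goes away, so we must detoast such fields here.)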
+ */ + if (erh->flags & ER_FLAG_HAVE_EXTERNAL) + { + for (i = 0; i < erh->nfields; i++) + { + Form_pg_attribute attr = TupleDescAttr(tupdesc, i); + + if (!erh->dnulls[i] && + !attr->attbyval && attr->attlen == -1 && + VARATT_IS_EXTERNAL(DatumGetPointer(erh->dvalues[i]))) + { + /* + * It's an external toasted value, so we need to dereference + * it so that the flat representation will be self-contained. + * Do this step in the caller's context because the TOAST + * fetch might leak memory. That means making an extra copy, + * which is a tad annoying, but repetitive leaks in the + * record's context would be worse. + */ + Datum newValue; + + newValue = PointerGetDatum(PG_DETOAST_DATUM(erh->dvalues[i])); + /* expanded_record_set_field can do the rest */ + /* ... and we don't need it to recheck domain constraints */ + expanded_record_set_field_internal(erh, i + 1, + newValue, false, + false); + /* Might as well free the detoasted value */ + pfree(DatumGetPointer(newValue)); + } + } + + /* + * We have now removed all external field values, so we can clear the + * flag about them. This won't cause ER_flatten_into() to mistakenly + * take the fast path, since expanded_record_set_field() will have + * cleared ER_FLAG_FVALUE_VALID. + */ + erh->flags &= ~ER_FLAG_HAVE_EXTERNAL; + } + + /* Test if we currently have any null values */ + hasnull = false; + for (i = 0; i < erh->nfields; i++) + { + if (erh->dnulls[i]) + { + hasnull = true; + break; + } + } + + /* Determine total space needed */ + len = offsetof(HeapTupleHeaderData, t_bits); + + if (hasnull) + len += BITMAPLEN(tupdesc->natts); + + if (tupdesc->tdhasoid) + len += sizeof(Oid); + + hoff = len = MAXALIGN(len); /* align user data safely */ + + data_len = heap_compute_data_size(tupdesc, erh->dvalues, erh->dnulls); + + len += data_len; + + /* Cache for next time */ + erh->flat_size = len; + erh->data_len = data_len; + erh->hoff = hoff; + erh->hasnull = hasnull; + + return len; +} + +/* + * flatten_into method for expanded records + */ +static void +ER_flatten_into(ExpandedObjectHeader *eohptr, + void *result, Size allocated_size) +{ + ExpandedRecordHeader *erh = (ExpandedRecordHeader *) eohptr; + HeapTupleHeader tuphdr = (HeapTupleHeader) result; + TupleDesc tupdesc; + + Assert(erh->er_magic == ER_MAGIC); + + /* Easy if we have a valid flattened value without out-of-line fields */ + if (erh->flags & ER_FLAG_FVALUE_VALID && + !(erh->flags & ER_FLAG_HAVE_EXTERNAL)) + { + Assert(allocated_size == erh->fvalue->t_len); + memcpy(tuphdr, erh->fvalue->t_data, allocated_size); + /* The original flattened value might not have datum header fields */ + HeapTupleHeaderSetDatumLength(tuphdr, allocated_size); + HeapTupleHeaderSetTypeId(tuphdr, erh->er_typeid); + HeapTupleHeaderSetTypMod(tuphdr, erh->er_typmod); + return; + } + + /* Else allocation should match previous get_flat_size result */ + Assert(allocated_size == erh->flat_size); + + /* We'll need the tuple descriptor */ + tupdesc = expanded_record_get_tupdesc(erh); + + /* We must ensure that any pad space is zero-filled */ + memset(tuphdr, 0, allocated_size); + + /* Set up header fields of composite Datum */ + HeapTupleHeaderSetDatumLength(tuphdr, allocated_size); + HeapTupleHeaderSetTypeId(tuphdr, erh->er_typeid); + HeapTupleHeaderSetTypMod(tuphdr, erh->er_typmod); + /* We also make sure that t_ctid is invalid unless explicitly set */ + ItemPointerSetInvalid(&(tuphdr->t_ctid)); + + HeapTupleHeaderSetNatts(tuphdr, tupdesc->natts); + tuphdr->t_hoff = erh->hoff; + + if (tupdesc->tdhasoid) /* else leave 
infomask = 0 */ + tuphdr->t_infomask = HEAP_HASOID; + + /* And fill the data area from dvalues/dnulls */ + heap_fill_tuple(tupdesc, + erh->dvalues, + erh->dnulls, + (char *) tuphdr + erh->hoff, + erh->data_len, + &tuphdr->t_infomask, + (erh->hasnull ? tuphdr->t_bits : NULL)); +} + +/* + * Look up the tupdesc for the expanded record's actual type + * + * Note: code internal to this module is allowed to just fetch + * erh->er_tupdesc if ER_FLAG_DVALUES_VALID is set; otherwise it should call + * expanded_record_get_tupdesc. This function is the out-of-line portion + * of expanded_record_get_tupdesc. + */ +TupleDesc +expanded_record_fetch_tupdesc(ExpandedRecordHeader *erh) +{ + TupleDesc tupdesc; + + /* Easy if we already have it (but caller should have checked already) */ + if (erh->er_tupdesc) + return erh->er_tupdesc; + + /* Lookup the composite type's tupdesc using the typcache */ + tupdesc = lookup_rowtype_tupdesc(erh->er_typeid, erh->er_typmod); + + /* + * If it's a refcounted tupdesc rather than a statically allocated one, we + * want to manage the refcount with a memory context callback rather than + * assuming that the CurrentResourceOwner is longer-lived than this + * expanded object. + */ + if (tupdesc->tdrefcount >= 0) + { + /* Register callback if we didn't already */ + if (erh->er_mcb.arg == NULL) + { + erh->er_mcb.func = ER_mc_callback; + erh->er_mcb.arg = (void *) erh; + MemoryContextRegisterResetCallback(erh->hdr.eoh_context, + &erh->er_mcb); + } + + /* Remember our own pointer */ + erh->er_tupdesc = tupdesc; + tupdesc->tdrefcount++; + + /* Release the pin lookup_rowtype_tupdesc acquired */ + DecrTupleDescRefCount(tupdesc); + } + else + { + /* Just remember the pointer */ + erh->er_tupdesc = tupdesc; + } + + /* In either case, fetch the process-global ID for this tupdesc */ + erh->er_tupdesc_id = assign_record_type_identifier(tupdesc->tdtypeid, + tupdesc->tdtypmod); + + return tupdesc; +} + +/* + * Get a HeapTuple representing the current value of the expanded record + * + * If valid, the originally stored tuple is returned, so caller must not + * scribble on it. Otherwise, we return a HeapTuple created in the current + * memory context. In either case, no attempt has been made to inline + * out-of-line toasted values, so the tuple isn't usable as a composite + * datum. + * + * Returns NULL if expanded record is empty. + */ +HeapTuple +expanded_record_get_tuple(ExpandedRecordHeader *erh) +{ + /* Easy case if we still have original tuple */ + if (erh->flags & ER_FLAG_FVALUE_VALID) + return erh->fvalue; + + /* Else just build a tuple from datums */ + if (erh->flags & ER_FLAG_DVALUES_VALID) + return heap_form_tuple(erh->er_tupdesc, erh->dvalues, erh->dnulls); + + /* Expanded record is empty */ + return NULL; +} + +/* + * Memory context reset callback for cleaning up external resources + */ +static void +ER_mc_callback(void *arg) +{ + ExpandedRecordHeader *erh = (ExpandedRecordHeader *) arg; + TupleDesc tupdesc = erh->er_tupdesc; + + /* Release our privately-managed tupdesc refcount, if any */ + if (tupdesc) + { + erh->er_tupdesc = NULL; /* just for luck */ + if (tupdesc->tdrefcount > 0) + { + if (--tupdesc->tdrefcount == 0) + FreeTupleDesc(tupdesc); + } + } +} + +/* + * DatumGetExpandedRecord: get a writable expanded record from an input argument + * + * Caution: if the input is a read/write pointer, this returns the input + * argument; so callers must be sure that their changes are "safe", that is + * they cannot leave the record in a corrupt state. 
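+ *
+ * A hypothetical fmgr-style caller might do, using the macros declared
+ * in expandedrecord.h ("newtup" stands for a tuple the caller built):
+ *
+ *		ExpandedRecordHeader *erh = PG_GETARG_EXPANDED_RECORD(0);
+ *
+ *		expanded_record_set_tuple(erh, newtup, true);
+ *		PG_RETURN_EXPANDED_RECORD(erh);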
+ */ +ExpandedRecordHeader * +DatumGetExpandedRecord(Datum d) +{ + /* If it's a writable expanded record already, just return it */ + if (VARATT_IS_EXTERNAL_EXPANDED_RW(DatumGetPointer(d))) + { + ExpandedRecordHeader *erh = (ExpandedRecordHeader *) DatumGetEOHP(d); + + Assert(erh->er_magic == ER_MAGIC); + return erh; + } + + /* Else expand the hard way */ + d = make_expanded_record_from_datum(d, CurrentMemoryContext); + return (ExpandedRecordHeader *) DatumGetEOHP(d); +} + +/* + * Create the Datum/isnull representation of an expanded record object + * if we didn't do so already. After calling this, it's OK to read the + * dvalues/dnulls arrays directly, rather than going through get_field. + * + * Note that if the object is currently empty ("null"), this will change + * it to represent a row of nulls. + */ +void +deconstruct_expanded_record(ExpandedRecordHeader *erh) +{ + TupleDesc tupdesc; + Datum *dvalues; + bool *dnulls; + int nfields; + + if (erh->flags & ER_FLAG_DVALUES_VALID) + return; /* already valid, nothing to do */ + + /* We'll need the tuple descriptor */ + tupdesc = expanded_record_get_tupdesc(erh); + + /* + * Allocate arrays in private context, if we don't have them already. We + * don't expect to see a change in nfields here, so while we cope if it + * happens, we don't bother avoiding a leak of the old arrays (which might + * not be separately palloc'd, anyway). + */ + nfields = tupdesc->natts; + if (erh->dvalues == NULL || erh->nfields != nfields) + { + char *chunk; + + /* + * To save a palloc cycle, we allocate both the Datum and isnull + * arrays in one palloc chunk. + */ + chunk = MemoryContextAlloc(erh->hdr.eoh_context, + nfields * (sizeof(Datum) + sizeof(bool))); + dvalues = (Datum *) chunk; + dnulls = (bool *) (chunk + nfields * sizeof(Datum)); + erh->dvalues = dvalues; + erh->dnulls = dnulls; + erh->nfields = nfields; + } + else + { + dvalues = erh->dvalues; + dnulls = erh->dnulls; + } + + if (erh->flags & ER_FLAG_FVALUE_VALID) + { + /* Deconstruct tuple */ + heap_deform_tuple(erh->fvalue, tupdesc, dvalues, dnulls); + } + else + { + /* If record was empty, instantiate it as a row of nulls */ + memset(dvalues, 0, nfields * sizeof(Datum)); + memset(dnulls, true, nfields * sizeof(bool)); + } + + /* Mark the dvalues as valid */ + erh->flags |= ER_FLAG_DVALUES_VALID; +} + +/* + * Look up a record field by name + * + * If there is a field named "fieldname", fill in the contents of finfo + * and return "true". Else return "false" without changing *finfo. + */ +bool +expanded_record_lookup_field(ExpandedRecordHeader *erh, const char *fieldname, + ExpandedRecordFieldInfo *finfo) +{ + TupleDesc tupdesc; + int fno; + Form_pg_attribute attr; + + tupdesc = expanded_record_get_tupdesc(erh); + + /* First, check user-defined attributes */ + for (fno = 0; fno < tupdesc->natts; fno++) + { + attr = TupleDescAttr(tupdesc, fno); + if (namestrcmp(&attr->attname, fieldname) == 0 && + !attr->attisdropped) + { + finfo->fnumber = attr->attnum; + finfo->ftypeid = attr->atttypid; + finfo->ftypmod = attr->atttypmod; + finfo->fcollation = attr->attcollation; + return true; + } + } + + /* How about system attributes? 
*/ + attr = SystemAttributeByName(fieldname, tupdesc->tdhasoid); + if (attr != NULL) + { + finfo->fnumber = attr->attnum; + finfo->ftypeid = attr->atttypid; + finfo->ftypmod = attr->atttypmod; + finfo->fcollation = attr->attcollation; + return true; + } + + return false; +} + +/* + * Fetch value of record field + * + * expanded_record_get_field is the frontend for this; it handles the + * easy inline-able cases. + */ +Datum +expanded_record_fetch_field(ExpandedRecordHeader *erh, int fnumber, + bool *isnull) +{ + if (fnumber > 0) + { + /* Empty record has null fields */ + if (ExpandedRecordIsEmpty(erh)) + { + *isnull = true; + return (Datum) 0; + } + /* Make sure we have deconstructed form */ + deconstruct_expanded_record(erh); + /* Out-of-range field number reads as null */ + if (unlikely(fnumber > erh->nfields)) + { + *isnull = true; + return (Datum) 0; + } + *isnull = erh->dnulls[fnumber - 1]; + return erh->dvalues[fnumber - 1]; + } + else + { + /* System columns read as null if we haven't got flat tuple */ + if (erh->fvalue == NULL) + { + *isnull = true; + return (Datum) 0; + } + /* heap_getsysattr doesn't actually use tupdesc, so just pass null */ + return heap_getsysattr(erh->fvalue, fnumber, NULL, isnull); + } +} + +/* + * Set value of record field + * + * If the expanded record is of domain type, the assignment will be rejected + * (without changing the record's state) if the domain's constraints would + * be violated. + * + * Internal callers can pass check_constraints = false to skip application + * of domain constraints. External callers should never do that. + */ +void +expanded_record_set_field_internal(ExpandedRecordHeader *erh, int fnumber, + Datum newValue, bool isnull, + bool check_constraints) +{ + TupleDesc tupdesc; + Form_pg_attribute attr; + Datum *dvalues; + bool *dnulls; + char *oldValue; + + /* + * Shouldn't ever be trying to assign new data to a dummy header, except + * in the case of an internal call for field inlining. + */ + Assert(!(erh->flags & ER_FLAG_IS_DUMMY) || !check_constraints); + + /* Before performing the assignment, see if result will satisfy domain */ + if ((erh->flags & ER_FLAG_IS_DOMAIN) && check_constraints) + check_domain_for_new_field(erh, fnumber, newValue, isnull); + + /* If we haven't yet deconstructed the tuple, do that */ + if (!(erh->flags & ER_FLAG_DVALUES_VALID)) + deconstruct_expanded_record(erh); + + /* Tuple descriptor must be valid by now */ + tupdesc = erh->er_tupdesc; + Assert(erh->nfields == tupdesc->natts); + + /* Caller error if fnumber is system column or nonexistent column */ + if (unlikely(fnumber <= 0 || fnumber > erh->nfields)) + elog(ERROR, "cannot assign to field %d of expanded record", fnumber); + + /* + * Copy new field value into record's context, if needed. + */ + attr = TupleDescAttr(tupdesc, fnumber - 1); + if (!isnull && !attr->attbyval) + { + MemoryContext oldcxt; + + oldcxt = MemoryContextSwitchTo(erh->hdr.eoh_context); + newValue = datumCopy(newValue, false, attr->attlen); + MemoryContextSwitchTo(oldcxt); + + /* Remember that we have field(s) that may need to be pfree'd */ + erh->flags |= ER_FLAG_DVALUES_ALLOCED; + + /* + * While we're here, note whether it's an external toasted value, + * because that could mean we need to inline it later. + */ + if (attr->attlen == -1 && + VARATT_IS_EXTERNAL(DatumGetPointer(newValue))) + erh->flags |= ER_FLAG_HAVE_EXTERNAL; + } + + /* + * We're ready to make irreversible changes. 
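+ * (Everything that could fail, notably the datumCopy and the domain
+ * check, has already been done, so from here on the record cannot be
+ * left in a half-updated state.)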
+ */ + dvalues = erh->dvalues; + dnulls = erh->dnulls; + + /* Flattened value will no longer represent record accurately */ + erh->flags &= ~ER_FLAG_FVALUE_VALID; + /* And we don't know the flattened size either */ + erh->flat_size = 0; + + /* Grab old field value for pfree'ing, if needed. */ + if (!attr->attbyval && !dnulls[fnumber - 1]) + oldValue = (char *) DatumGetPointer(dvalues[fnumber - 1]); + else + oldValue = NULL; + + /* And finally we can insert the new field. */ + dvalues[fnumber - 1] = newValue; + dnulls[fnumber - 1] = isnull; + + /* + * Free old field if needed; this keeps repeated field replacements from + * bloating the record's storage. If the pfree somehow fails, it won't + * corrupt the record. + * + * If we're updating a dummy header, we can't risk pfree'ing the old + * value, because most likely the expanded record's main header still has + * a pointer to it. This won't result in any sustained memory leak, since + * whatever we just allocated here is in the short-lived domain check + * context. + */ + if (oldValue && !(erh->flags & ER_FLAG_IS_DUMMY)) + { + /* Don't try to pfree a part of the original flat record */ + if (oldValue < erh->fstartptr || oldValue >= erh->fendptr) + pfree(oldValue); + } +} + +/* + * Set all record field(s) + * + * Caller must ensure that the provided datums are of the right types + * to match the record's previously assigned rowtype. + * + * Unlike repeated application of expanded_record_set_field(), this does not + * guarantee to leave the expanded record in a non-corrupt state in event + * of an error. Typically it would only be used for initializing a new + * expanded record. + */ +void +expanded_record_set_fields(ExpandedRecordHeader *erh, + const Datum *newValues, const bool *isnulls) +{ + TupleDesc tupdesc; + Datum *dvalues; + bool *dnulls; + int fnumber; + MemoryContext oldcxt; + + /* Shouldn't ever be trying to assign new data to a dummy header */ + Assert(!(erh->flags & ER_FLAG_IS_DUMMY)); + + /* If we haven't yet deconstructed the tuple, do that */ + if (!(erh->flags & ER_FLAG_DVALUES_VALID)) + deconstruct_expanded_record(erh); + + /* Tuple descriptor must be valid by now */ + tupdesc = erh->er_tupdesc; + Assert(erh->nfields == tupdesc->natts); + + /* Flattened value will no longer represent record accurately */ + erh->flags &= ~ER_FLAG_FVALUE_VALID; + /* And we don't know the flattened size either */ + erh->flat_size = 0; + + oldcxt = MemoryContextSwitchTo(erh->hdr.eoh_context); + + dvalues = erh->dvalues; + dnulls = erh->dnulls; + + for (fnumber = 0; fnumber < erh->nfields; fnumber++) + { + Form_pg_attribute attr = TupleDescAttr(tupdesc, fnumber); + Datum newValue; + bool isnull; + + /* Ignore dropped columns */ + if (attr->attisdropped) + continue; + + newValue = newValues[fnumber]; + isnull = isnulls[fnumber]; + + if (!attr->attbyval) + { + /* + * Copy new field value into record's context, if needed. + */ + if (!isnull) + { + newValue = datumCopy(newValue, false, attr->attlen); + + /* Remember that we have field(s) that need to be pfree'd */ + erh->flags |= ER_FLAG_DVALUES_ALLOCED; + + /* + * While we're here, note whether it's an external toasted + * value, because that could mean we need to inline it later. + */ + if (attr->attlen == -1 && + VARATT_IS_EXTERNAL(DatumGetPointer(newValue))) + erh->flags |= ER_FLAG_HAVE_EXTERNAL; + } + + /* + * Free old field value, if any (not likely, since really we ought + * to be inserting into an empty record). 
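+ * The pointer-range test distinguishes values that still point into the
+ * original flat tuple (freed only along with the whole record) from
+ * separately palloc'd ones.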
+ */ + if (unlikely(!dnulls[fnumber])) + { + char *oldValue; + + oldValue = (char *) DatumGetPointer(dvalues[fnumber]); + /* Don't try to pfree a part of the original flat record */ + if (oldValue < erh->fstartptr || oldValue >= erh->fendptr) + pfree(oldValue); + } + } + + /* And finally we can insert the new field. */ + dvalues[fnumber] = newValue; + dnulls[fnumber] = isnull; + } + + /* + * Because we don't guarantee atomicity of set_fields(), we can just leave + * checking of domain constraints to occur as the final step; if it throws + * an error, too bad. + */ + if (erh->flags & ER_FLAG_IS_DOMAIN) + { + /* We run domain_check in a short-lived context to limit cruft */ + MemoryContextSwitchTo(get_domain_check_cxt(erh)); + + domain_check(ExpandedRecordGetRODatum(erh), false, + erh->er_decltypeid, + &erh->er_domaininfo, + erh->hdr.eoh_context); + } + + MemoryContextSwitchTo(oldcxt); +} + +/* + * Construct (or reset) working memory context for domain checks. + * + * If we don't have a working memory context for domain checking, make one; + * if we have one, reset it to get rid of any leftover cruft. (It is a tad + * annoying to need a whole context for this, since it will often go unused + * --- but it's hard to avoid memory leaks otherwise. We can make the + * context small, at least.) + */ +static MemoryContext +get_domain_check_cxt(ExpandedRecordHeader *erh) +{ + if (erh->er_domain_check_cxt == NULL) + erh->er_domain_check_cxt = + AllocSetContextCreate(erh->hdr.eoh_context, + "expanded record domain checks", + ALLOCSET_SMALL_SIZES); + else + MemoryContextReset(erh->er_domain_check_cxt); + return erh->er_domain_check_cxt; +} + +/* + * Construct "dummy header" for checking domain constraints. + * + * Since we don't want to modify the state of the expanded record until + * we've validated the constraints, our approach is to set up a dummy + * record header containing the new field value(s) and then pass that to + * domain_check. We retain the dummy header as part of the expanded + * record's state to save palloc cycles, but reinitialize (most of) + * its contents on each use. + */ +static void +build_dummy_expanded_header(ExpandedRecordHeader *main_erh) +{ + ExpandedRecordHeader *erh; + TupleDesc tupdesc = expanded_record_get_tupdesc(main_erh); + + /* Ensure we have a domain_check_cxt */ + (void) get_domain_check_cxt(main_erh); + + /* + * Allocate dummy header on first time through, or in the unlikely event + * that the number of fields changes (in which case we just leak the old + * one). Include space for its field values in the request. + */ + erh = main_erh->er_dummy_header; + if (erh == NULL || erh->nfields != tupdesc->natts) + { + char *chunk; + + erh = (ExpandedRecordHeader *) + MemoryContextAlloc(main_erh->hdr.eoh_context, + MAXALIGN(sizeof(ExpandedRecordHeader)) + + tupdesc->natts * (sizeof(Datum) + sizeof(bool))); + + /* Ensure all header fields are initialized to 0/null */ + memset(erh, 0, sizeof(ExpandedRecordHeader)); + + /* + * We set up the dummy header with an indication that its memory + * context is the short-lived context. This is so that, if any + * detoasting of out-of-line values happens due to an attempt to + * extract a composite datum from the dummy header, the detoasted + * stuff will end up in the short-lived context and not cause a leak. 
+ * This is cheating a bit on the expanded-object protocol; but since + * we never pass a R/W pointer to the dummy object to any other code, + * nothing else is authorized to delete or transfer ownership of the + * object's context, so it should be safe enough. + */ + EOH_init_header(&erh->hdr, &ER_methods, main_erh->er_domain_check_cxt); + erh->er_magic = ER_MAGIC; + + /* Set up dvalues/dnulls, with no valid contents as yet */ + chunk = (char *) erh + MAXALIGN(sizeof(ExpandedRecordHeader)); + erh->dvalues = (Datum *) chunk; + erh->dnulls = (bool *) (chunk + tupdesc->natts * sizeof(Datum)); + erh->nfields = tupdesc->natts; + + /* + * The fields we just set are assumed to remain constant through + * multiple uses of the dummy header to check domain constraints. All + * other dummy header fields should be explicitly reset below, to + * ensure there's not accidental effects of one check on the next one. + */ + + main_erh->er_dummy_header = erh; + } + + /* + * If anything inquires about the dummy header's declared type, it should + * report the composite base type, not the domain type (since the VALUE in + * a domain check constraint is of the base type not the domain). Hence + * we do not transfer over the IS_DOMAIN flag, nor indeed any of the main + * header's flags, since the dummy header is empty of data at this point. + * But don't forget to mark header as dummy. + */ + erh->flags = ER_FLAG_IS_DUMMY; + + /* Copy composite-type identification info */ + erh->er_decltypeid = erh->er_typeid = main_erh->er_typeid; + erh->er_typmod = main_erh->er_typmod; + + /* Dummy header does not need its own tupdesc refcount */ + erh->er_tupdesc = tupdesc; + erh->er_tupdesc_id = main_erh->er_tupdesc_id; + + /* + * It's tempting to copy over whatever we know about the flat size, but + * there's no point since we're surely about to modify the dummy record's + * field(s). Instead just clear anything left over from a previous usage + * cycle. + */ + erh->flat_size = 0; + + /* Copy over fvalue if we have it, so that system columns are available */ + erh->fvalue = main_erh->fvalue; + erh->fstartptr = main_erh->fstartptr; + erh->fendptr = main_erh->fendptr; +} + +/* + * Precheck domain constraints for a set_field operation + */ +static pg_noinline void +check_domain_for_new_field(ExpandedRecordHeader *erh, int fnumber, + Datum newValue, bool isnull) +{ + ExpandedRecordHeader *dummy_erh; + MemoryContext oldcxt; + + /* Construct dummy header to contain proposed new field set */ + build_dummy_expanded_header(erh); + dummy_erh = erh->er_dummy_header; + + /* + * If record isn't empty, just deconstruct it (if needed) and copy over + * the existing field values. If it is empty, just fill fields with nulls + * manually --- don't call deconstruct_expanded_record prematurely. + */ + if (!ExpandedRecordIsEmpty(erh)) + { + deconstruct_expanded_record(erh); + memcpy(dummy_erh->dvalues, erh->dvalues, + dummy_erh->nfields * sizeof(Datum)); + memcpy(dummy_erh->dnulls, erh->dnulls, + dummy_erh->nfields * sizeof(bool)); + /* There might be some external values in there... 
*/ + dummy_erh->flags |= erh->flags & ER_FLAG_HAVE_EXTERNAL; + } + else + { + memset(dummy_erh->dvalues, 0, dummy_erh->nfields * sizeof(Datum)); + memset(dummy_erh->dnulls, true, dummy_erh->nfields * sizeof(bool)); + } + + /* Either way, we now have valid dvalues */ + dummy_erh->flags |= ER_FLAG_DVALUES_VALID; + + /* Caller error if fnumber is system column or nonexistent column */ + if (unlikely(fnumber <= 0 || fnumber > dummy_erh->nfields)) + elog(ERROR, "cannot assign to field %d of expanded record", fnumber); + + /* Insert proposed new value into dummy field array */ + dummy_erh->dvalues[fnumber - 1] = newValue; + dummy_erh->dnulls[fnumber - 1] = isnull; + + /* + * The proposed new value might be external, in which case we'd better set + * the flag for that in dummy_erh. (This matters in case something in the + * domain check expressions tries to extract a flat value from the dummy + * header.) + */ + if (!isnull) + { + Form_pg_attribute attr = TupleDescAttr(erh->er_tupdesc, fnumber - 1); + + if (!attr->attbyval && attr->attlen == -1 && + VARATT_IS_EXTERNAL(DatumGetPointer(newValue))) + dummy_erh->flags |= ER_FLAG_HAVE_EXTERNAL; + } + + /* + * We call domain_check in the short-lived context, so that any cruft + * leaked by expression evaluation can be reclaimed. + */ + oldcxt = MemoryContextSwitchTo(erh->er_domain_check_cxt); + + /* + * And now we can apply the check. Note we use main header's domain cache + * space, so that caching carries across repeated uses. + */ + domain_check(ExpandedRecordGetRODatum(dummy_erh), false, + erh->er_decltypeid, + &erh->er_domaininfo, + erh->hdr.eoh_context); + + MemoryContextSwitchTo(oldcxt); + + /* We might as well clean up cruft immediately. */ + MemoryContextReset(erh->er_domain_check_cxt); +} + +/* + * Precheck domain constraints for a set_tuple operation + */ +static pg_noinline void +check_domain_for_new_tuple(ExpandedRecordHeader *erh, HeapTuple tuple) +{ + ExpandedRecordHeader *dummy_erh; + MemoryContext oldcxt; + + /* If we're being told to set record to empty, just see if NULL is OK */ + if (tuple == NULL) + { + /* We run domain_check in a short-lived context to limit cruft */ + oldcxt = MemoryContextSwitchTo(get_domain_check_cxt(erh)); + + domain_check((Datum) 0, true, + erh->er_decltypeid, + &erh->er_domaininfo, + erh->hdr.eoh_context); + + MemoryContextSwitchTo(oldcxt); + + /* We might as well clean up cruft immediately. */ + MemoryContextReset(erh->er_domain_check_cxt); + + return; + } + + /* Construct dummy header to contain replacement tuple */ + build_dummy_expanded_header(erh); + dummy_erh = erh->er_dummy_header; + + /* Insert tuple, but don't bother to deconstruct its fields for now */ + dummy_erh->fvalue = tuple; + dummy_erh->fstartptr = (char *) tuple->t_data; + dummy_erh->fendptr = ((char *) tuple->t_data) + tuple->t_len; + dummy_erh->flags |= ER_FLAG_FVALUE_VALID; + + /* Remember if we have any out-of-line field values */ + if (HeapTupleHasExternal(tuple)) + dummy_erh->flags |= ER_FLAG_HAVE_EXTERNAL; + + /* + * We call domain_check in the short-lived context, so that any cruft + * leaked by expression evaluation can be reclaimed. + */ + oldcxt = MemoryContextSwitchTo(erh->er_domain_check_cxt); + + /* + * And now we can apply the check. Note we use main header's domain cache + * space, so that caching carries across repeated uses. 
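+ * (That is also why er_domaininfo lives in the main header rather than
+ * the dummy one: it must survive resets of the short-lived check
+ * context.)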
+ */ + domain_check(ExpandedRecordGetRODatum(dummy_erh), false, + erh->er_decltypeid, + &erh->er_domaininfo, + erh->hdr.eoh_context); + + MemoryContextSwitchTo(oldcxt); + + /* We might as well clean up cruft immediately. */ + MemoryContextReset(erh->er_domain_check_cxt); +} diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c index cf22306b20..874d8cd1c9 100644 --- a/src/backend/utils/cache/typcache.c +++ b/src/backend/utils/cache/typcache.c @@ -259,12 +259,22 @@ static const dshash_parameters srtr_typmod_table_params = { LWTRANCHE_SESSION_TYPMOD_TABLE }; +/* hashtable for recognizing registered record types */ static HTAB *RecordCacheHash = NULL; +/* arrays of info about registered record types, indexed by assigned typmod */ static TupleDesc *RecordCacheArray = NULL; -static int32 RecordCacheArrayLen = 0; /* allocated length of array */ +static uint64 *RecordIdentifierArray = NULL; +static int32 RecordCacheArrayLen = 0; /* allocated length of above arrays */ static int32 NextRecordTypmod = 0; /* number of entries used */ +/* + * Process-wide counter for generating unique tupledesc identifiers. + * Zero and one (INVALID_TUPLEDESC_IDENTIFIER) aren't allowed to be chosen + * as identifiers, so we start the counter at INVALID_TUPLEDESC_IDENTIFIER. + */ +static uint64 tupledesc_id_counter = INVALID_TUPLEDESC_IDENTIFIER; + static void load_typcache_tupdesc(TypeCacheEntry *typentry); static void load_rangetype_info(TypeCacheEntry *typentry); static void load_domaintype_info(TypeCacheEntry *typentry); @@ -793,10 +803,10 @@ load_typcache_tupdesc(TypeCacheEntry *typentry) typentry->tupDesc->tdrefcount++; /* - * In future, we could take some pains to not increment the seqno if the - * tupdesc didn't really change; but for now it's not worth it. + * In future, we could take some pains to not change tupDesc_identifier if + * the tupdesc didn't really change; but for now it's not worth it. */ - typentry->tupDescSeqNo++; + typentry->tupDesc_identifier = ++tupledesc_id_counter; relation_close(rel, AccessShareLock); } @@ -1496,7 +1506,8 @@ cache_range_element_properties(TypeCacheEntry *typentry) } /* - * Make sure that RecordCacheArray is large enough to store 'typmod'. + * Make sure that RecordCacheArray and RecordIdentifierArray are large enough + * to store 'typmod'. */ static void ensure_record_cache_typmod_slot_exists(int32 typmod) @@ -1505,6 +1516,8 @@ ensure_record_cache_typmod_slot_exists(int32 typmod) { RecordCacheArray = (TupleDesc *) MemoryContextAllocZero(CacheMemoryContext, 64 * sizeof(TupleDesc)); + RecordIdentifierArray = (uint64 *) + MemoryContextAllocZero(CacheMemoryContext, 64 * sizeof(uint64)); RecordCacheArrayLen = 64; } @@ -1519,6 +1532,10 @@ ensure_record_cache_typmod_slot_exists(int32 typmod) newlen * sizeof(TupleDesc)); memset(RecordCacheArray + RecordCacheArrayLen, 0, (newlen - RecordCacheArrayLen) * sizeof(TupleDesc)); + RecordIdentifierArray = (uint64 *) repalloc(RecordIdentifierArray, + newlen * sizeof(uint64)); + memset(RecordIdentifierArray + RecordCacheArrayLen, 0, + (newlen - RecordCacheArrayLen) * sizeof(uint64)); RecordCacheArrayLen = newlen; } } @@ -1581,11 +1598,17 @@ lookup_rowtype_tupdesc_internal(Oid type_id, int32 typmod, bool noError) /* * Our local array can now point directly to the TupleDesc - * in shared memory. + * in shared memory, which is non-reference-counted. */ RecordCacheArray[typmod] = tupdesc; Assert(tupdesc->tdrefcount == -1); + /* + * We don't share tupdesc identifiers across processes, so + * assign one locally. 
+ */ + RecordIdentifierArray[typmod] = ++tupledesc_id_counter; + dshash_release_lock(CurrentSession->shared_typmod_table, entry); @@ -1790,12 +1813,61 @@ assign_record_type_typmod(TupleDesc tupDesc) RecordCacheArray[entDesc->tdtypmod] = entDesc; recentry->tupdesc = entDesc; + /* Assign a unique tupdesc identifier, too. */ + RecordIdentifierArray[entDesc->tdtypmod] = ++tupledesc_id_counter; + /* Update the caller's tuple descriptor. */ tupDesc->tdtypmod = entDesc->tdtypmod; MemoryContextSwitchTo(oldcxt); } +/* + * assign_record_type_identifier + * + * Get an identifier, which will be unique over the lifespan of this backend + * process, for the current tuple descriptor of the specified composite type. + * For named composite types, the value is guaranteed to change if the type's + * definition does. For registered RECORD types, the value will not change + * once assigned, since the registered type won't either. If an anonymous + * RECORD type is specified, we return a new identifier on each call. + */ +uint64 +assign_record_type_identifier(Oid type_id, int32 typmod) +{ + if (type_id != RECORDOID) + { + /* + * It's a named composite type, so use the regular typcache. + */ + TypeCacheEntry *typentry; + + typentry = lookup_type_cache(type_id, TYPECACHE_TUPDESC); + if (typentry->tupDesc == NULL) + ereport(ERROR, + (errcode(ERRCODE_WRONG_OBJECT_TYPE), + errmsg("type %s is not composite", + format_type_be(type_id)))); + Assert(typentry->tupDesc_identifier != 0); + return typentry->tupDesc_identifier; + } + else + { + /* + * It's a transient record type, so look in our record-type table. + */ + if (typmod >= 0 && typmod < RecordCacheArrayLen && + RecordCacheArray[typmod] != NULL) + { + Assert(RecordIdentifierArray[typmod] != 0); + return RecordIdentifierArray[typmod]; + } + + /* For anonymous or unrecognized record type, generate a new ID */ + return ++tupledesc_id_counter; + } +} + /* * Return the amout of shmem required to hold a SharedRecordTypmodRegistry. * This exists only to avoid exposing private innards of diff --git a/src/include/utils/expandedrecord.h b/src/include/utils/expandedrecord.h new file mode 100644 index 0000000000..a95c9cce22 --- /dev/null +++ b/src/include/utils/expandedrecord.h @@ -0,0 +1,227 @@ +/*------------------------------------------------------------------------- + * + * expandedrecord.h + * Declarations for composite expanded objects. + * + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/utils/expandedrecord.h + * + *------------------------------------------------------------------------- + */ +#ifndef EXPANDEDRECORD_H +#define EXPANDEDRECORD_H + +#include "access/htup.h" +#include "access/tupdesc.h" +#include "fmgr.h" +#include "utils/expandeddatum.h" + + +/* + * An expanded record is contained within a private memory context (as + * all expanded objects must be) and has a control structure as below. + * + * The expanded record might contain a regular "flat" tuple if that was the + * original input and we've not modified it. Otherwise, the contents are + * represented by Datum/isnull arrays plus type information. We could also + * have both forms, if we've deconstructed the original tuple for access + * purposes but not yet changed it. For pass-by-reference field types, the + * Datums would point into the flat tuple in this situation. 
Once we start + * modifying tuple fields, new pass-by-ref fields are separately palloc'd + * within the memory context. + * + * It's possible to build an expanded record that references a "flat" tuple + * stored externally, if the caller can guarantee that that tuple will not + * change for the lifetime of the expanded record. (This frammish is mainly + * meant to avoid unnecessary data copying in trigger functions.) + */ +#define ER_MAGIC 1384727874 /* ID for debugging crosschecks */ + +typedef struct ExpandedRecordHeader +{ + /* Standard header for expanded objects */ + ExpandedObjectHeader hdr; + + /* Magic value identifying an expanded record (for debugging only) */ + int er_magic; + + /* Assorted flag bits */ + int flags; +#define ER_FLAG_FVALUE_VALID 0x0001 /* fvalue is up to date? */ +#define ER_FLAG_FVALUE_ALLOCED 0x0002 /* fvalue is local storage? */ +#define ER_FLAG_DVALUES_VALID 0x0004 /* dvalues/dnulls are up to date? */ +#define ER_FLAG_DVALUES_ALLOCED 0x0008 /* any field values local storage? */ +#define ER_FLAG_HAVE_EXTERNAL 0x0010 /* any field values are external? */ +#define ER_FLAG_TUPDESC_ALLOCED 0x0020 /* tupdesc is local storage? */ +#define ER_FLAG_IS_DOMAIN 0x0040 /* er_decltypeid is domain? */ +#define ER_FLAG_IS_DUMMY 0x0080 /* this header is dummy (see below) */ +/* flag bits that are not to be cleared when replacing tuple data: */ +#define ER_FLAGS_NON_DATA \ + (ER_FLAG_TUPDESC_ALLOCED | ER_FLAG_IS_DOMAIN | ER_FLAG_IS_DUMMY) + + /* Declared type of the record variable (could be a domain type) */ + Oid er_decltypeid; + + /* + * Actual composite type/typmod; never a domain (if ER_FLAG_IS_DOMAIN, + * these identify the composite base type). These will match + * er_tupdesc->tdtypeid/tdtypmod, as well as the header fields of + * composite datums made from or stored in this expanded record. + */ + Oid er_typeid; /* type OID of the composite type */ + int32 er_typmod; /* typmod of the composite type */ + + /* + * Tuple descriptor, if we have one, else NULL. This may point to a + * reference-counted tupdesc originally belonging to the typcache, in + * which case we use a memory context reset callback to release the + * refcount. It can also be locally allocated in this object's private + * context (in which case ER_FLAG_TUPDESC_ALLOCED is set). + */ + TupleDesc er_tupdesc; + + /* + * Unique-within-process identifier for the tupdesc (see typcache.h). This + * field will never be equal to INVALID_TUPLEDESC_IDENTIFIER. + */ + uint64 er_tupdesc_id; + + /* + * If we have a Datum-array representation of the record, it's kept here; + * else ER_FLAG_DVALUES_VALID is not set, and dvalues/dnulls may be NULL + * if they've not yet been allocated. If allocated, the dvalues and + * dnulls arrays are palloc'd within the object private context, and are + * of length matching er_tupdesc->natts. For pass-by-ref field types, + * dvalues entries might point either into the fstartptr..fendptr area, or + * to separately palloc'd chunks. + */ + Datum *dvalues; /* array of Datums */ + bool *dnulls; /* array of is-null flags for Datums */ + int nfields; /* length of above arrays */ + + /* + * flat_size is the current space requirement for the flat equivalent of + * the expanded record, if known; otherwise it's 0. We store this to make + * consecutive calls of get_flat_size cheap. If flat_size is not 0, the + * component values data_len, hoff, and hasnull must be valid too. 
+ */ + Size flat_size; + + Size data_len; /* data len within flat_size */ + int hoff; /* header offset */ + bool hasnull; /* null bitmap needed? */ + + /* + * fvalue points to the flat representation if we have one, else it is + * NULL. If the flat representation is valid (up to date) then + * ER_FLAG_FVALUE_VALID is set. Even if we've outdated the flat + * representation due to changes of user fields, it can still be used to + * fetch system column values. If we have a flat representation then + * fstartptr/fendptr point to the start and end+1 of its data area; this + * is so that we can tell which Datum pointers point into the flat + * representation rather than being pointers to separately palloc'd data. + */ + HeapTuple fvalue; /* might or might not be private storage */ + char *fstartptr; /* start of its data area */ + char *fendptr; /* end+1 of its data area */ + + /* Working state for domain checking, used if ER_FLAG_IS_DOMAIN is set */ + MemoryContext er_domain_check_cxt; /* short-term memory context */ + struct ExpandedRecordHeader *er_dummy_header; /* dummy record header */ + void *er_domaininfo; /* cache space for domain_check() */ + + /* Callback info (it's active if er_mcb.arg is not NULL) */ + MemoryContextCallback er_mcb; +} ExpandedRecordHeader; + +/* fmgr macros for expanded record objects */ +#define PG_GETARG_EXPANDED_RECORD(n) DatumGetExpandedRecord(PG_GETARG_DATUM(n)) +#define ExpandedRecordGetDatum(erh) EOHPGetRWDatum(&(erh)->hdr) +#define ExpandedRecordGetRODatum(erh) EOHPGetRODatum(&(erh)->hdr) +#define PG_RETURN_EXPANDED_RECORD(x) PG_RETURN_DATUM(ExpandedRecordGetDatum(x)) + +/* assorted other macros */ +#define ExpandedRecordIsEmpty(erh) \ + (((erh)->flags & (ER_FLAG_DVALUES_VALID | ER_FLAG_FVALUE_VALID)) == 0) +#define ExpandedRecordIsDomain(erh) \ + (((erh)->flags & ER_FLAG_IS_DOMAIN) != 0) + +/* this can substitute for TransferExpandedObject() when we already have erh */ +#define TransferExpandedRecord(erh, cxt) \ + MemoryContextSetParent((erh)->hdr.eoh_context, cxt) + +/* information returned by expanded_record_lookup_field() */ +typedef struct ExpandedRecordFieldInfo +{ + int fnumber; /* field's attr number in record */ + Oid ftypeid; /* field's type/typmod info */ + int32 ftypmod; + Oid fcollation; /* field's collation if any */ +} ExpandedRecordFieldInfo; + +/* + * prototypes for functions defined in expandedrecord.c + */ +extern ExpandedRecordHeader *make_expanded_record_from_typeid(Oid type_id, int32 typmod, + MemoryContext parentcontext); +extern ExpandedRecordHeader *make_expanded_record_from_tupdesc(TupleDesc tupdesc, + MemoryContext parentcontext); +extern ExpandedRecordHeader *make_expanded_record_from_exprecord(ExpandedRecordHeader *olderh, + MemoryContext parentcontext); +extern void expanded_record_set_tuple(ExpandedRecordHeader *erh, + HeapTuple tuple, bool copy); +extern Datum make_expanded_record_from_datum(Datum recorddatum, + MemoryContext parentcontext); +extern TupleDesc expanded_record_fetch_tupdesc(ExpandedRecordHeader *erh); +extern HeapTuple expanded_record_get_tuple(ExpandedRecordHeader *erh); +extern ExpandedRecordHeader *DatumGetExpandedRecord(Datum d); +extern void deconstruct_expanded_record(ExpandedRecordHeader *erh); +extern bool expanded_record_lookup_field(ExpandedRecordHeader *erh, + const char *fieldname, + ExpandedRecordFieldInfo *finfo); +extern Datum expanded_record_fetch_field(ExpandedRecordHeader *erh, int fnumber, + bool *isnull); +extern void expanded_record_set_field_internal(ExpandedRecordHeader *erh, + int fnumber, 
+ Datum newValue, bool isnull, + bool check_constraints); +extern void expanded_record_set_fields(ExpandedRecordHeader *erh, + const Datum *newValues, const bool *isnulls); + +/* outside code should never call expanded_record_set_field_internal as such */ +#define expanded_record_set_field(erh, fnumber, newValue, isnull) \ + expanded_record_set_field_internal(erh, fnumber, newValue, isnull, true) + +/* + * Inline-able fast cases. The expanded_record_fetch_xxx functions above + * handle the general cases. + */ + +/* Get the tupdesc for the expanded record's actual type */ +static inline TupleDesc +expanded_record_get_tupdesc(ExpandedRecordHeader *erh) +{ + if (likely(erh->er_tupdesc != NULL)) + return erh->er_tupdesc; + else + return expanded_record_fetch_tupdesc(erh); +} + +/* Get value of record field */ +static inline Datum +expanded_record_get_field(ExpandedRecordHeader *erh, int fnumber, + bool *isnull) +{ + if ((erh->flags & ER_FLAG_DVALUES_VALID) && + likely(fnumber > 0 && fnumber <= erh->nfields)) + { + *isnull = erh->dnulls[fnumber - 1]; + return erh->dvalues[fnumber - 1]; + } + else + return expanded_record_fetch_field(erh, fnumber, isnull); +} + +#endif /* EXPANDEDRECORD_H */ diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h index f25448d316..217d064da5 100644 --- a/src/include/utils/typcache.h +++ b/src/include/utils/typcache.h @@ -76,11 +76,14 @@ typedef struct TypeCacheEntry /* * Tuple descriptor if it's a composite type (row type). NULL if not * composite or information hasn't yet been requested. (NOTE: this is a - * reference-counted tupledesc.) To simplify caching dependent info, - * tupDescSeqNo is incremented each time tupDesc is rebuilt in a session. + * reference-counted tupledesc.) + * + * To simplify caching dependent info, tupDesc_identifier is an identifier + * for this tupledesc that is unique for the life of the process, and + * changes anytime the tupledesc does. Zero if not yet determined. */ TupleDesc tupDesc; - int64 tupDescSeqNo; + uint64 tupDesc_identifier; /* * Fields computed when TYPECACHE_RANGE_INFO is requested. Zeroes if not @@ -138,6 +141,9 @@ typedef struct TypeCacheEntry #define TYPECACHE_HASH_EXTENDED_PROC 0x4000 #define TYPECACHE_HASH_EXTENDED_PROC_FINFO 0x8000 +/* This value will not equal any valid tupledesc identifier, nor 0 */ +#define INVALID_TUPLEDESC_IDENTIFIER ((uint64) 1) + /* * Callers wishing to maintain a long-lived reference to a domain's constraint * set must store it in one of these. 
Use InitDomainConstraintRef() and @@ -179,6 +185,8 @@ extern TupleDesc lookup_rowtype_tupdesc_domain(Oid type_id, int32 typmod, extern void assign_record_type_typmod(TupleDesc tupDesc); +extern uint64 assign_record_type_identifier(Oid type_id, int32 typmod); + extern int compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2); extern size_t SharedRecordTypmodRegistryEstimate(void); diff --git a/src/pl/plpgsql/src/Makefile b/src/pl/plpgsql/src/Makefile index 91e1ada7ad..2190eab616 100644 --- a/src/pl/plpgsql/src/Makefile +++ b/src/pl/plpgsql/src/Makefile @@ -26,7 +26,7 @@ DATA = plpgsql.control plpgsql--1.0.sql plpgsql--unpackaged--1.0.sql REGRESS_OPTS = --dbname=$(PL_TESTDB) -REGRESS = plpgsql_call plpgsql_control plpgsql_transaction +REGRESS = plpgsql_call plpgsql_control plpgsql_record plpgsql_transaction all: all-lib diff --git a/src/pl/plpgsql/src/expected/plpgsql_record.out b/src/pl/plpgsql/src/expected/plpgsql_record.out new file mode 100644 index 0000000000..3f7cab2088 --- /dev/null +++ b/src/pl/plpgsql/src/expected/plpgsql_record.out @@ -0,0 +1,662 @@ +-- +-- Tests for PL/pgSQL handling of composite (record) variables +-- +create type two_int4s as (f1 int4, f2 int4); +create type two_int8s as (q1 int8, q2 int8); +-- base-case return of a composite type +create function retc(int) returns two_int8s language plpgsql as +$$ begin return row($1,1)::two_int8s; end $$; +select retc(42); + retc +-------- + (42,1) +(1 row) + +-- ok to return a matching record type +create or replace function retc(int) returns two_int8s language plpgsql as +$$ begin return row($1::int8, 1::int8); end $$; +select retc(42); + retc +-------- + (42,1) +(1 row) + +-- we don't currently support implicit casting +create or replace function retc(int) returns two_int8s language plpgsql as +$$ begin return row($1,1); end $$; +select retc(42); +ERROR: returned record type does not match expected record type +DETAIL: Returned type integer does not match expected type bigint in column 1. +CONTEXT: PL/pgSQL function retc(integer) while casting return value to function's return type +-- nor extra columns +create or replace function retc(int) returns two_int8s language plpgsql as +$$ begin return row($1::int8, 1::int8, 42); end $$; +select retc(42); +ERROR: returned record type does not match expected record type +DETAIL: Number of returned columns (3) does not match expected column count (2). +CONTEXT: PL/pgSQL function retc(integer) while casting return value to function's return type +-- same cases with an intermediate "record" variable +create or replace function retc(int) returns two_int8s language plpgsql as +$$ declare r record; begin r := row($1::int8, 1::int8); return r; end $$; +select retc(42); + retc +-------- + (42,1) +(1 row) + +create or replace function retc(int) returns two_int8s language plpgsql as +$$ declare r record; begin r := row($1,1); return r; end $$; +select retc(42); +ERROR: returned record type does not match expected record type +DETAIL: Returned type integer does not match expected type bigint in column 1. +CONTEXT: PL/pgSQL function retc(integer) while casting return value to function's return type +create or replace function retc(int) returns two_int8s language plpgsql as +$$ declare r record; begin r := row($1::int8, 1::int8, 42); return r; end $$; +select retc(42); +ERROR: returned record type does not match expected record type +DETAIL: Number of returned columns (3) does not match expected column count (2). 
+CONTEXT: PL/pgSQL function retc(integer) while casting return value to function's return type +-- but, for mostly historical reasons, we do convert when assigning +-- to a named-composite-type variable +create or replace function retc(int) returns two_int8s language plpgsql as +$$ declare r two_int8s; begin r := row($1::int8, 1::int8, 42); return r; end $$; +select retc(42); + retc +-------- + (42,1) +(1 row) + +do $$ declare c two_int8s; +begin c := row(1,2); raise notice 'c = %', c; end$$; +NOTICE: c = (1,2) +do $$ declare c two_int8s; +begin for c in select 1,2 loop raise notice 'c = %', c; end loop; end$$; +NOTICE: c = (1,2) +do $$ declare c4 two_int4s; c8 two_int8s; +begin + c8 := row(1,2); + c4 := c8; + c8 := c4; + raise notice 'c4 = %', c4; + raise notice 'c8 = %', c8; +end$$; +NOTICE: c4 = (1,2) +NOTICE: c8 = (1,2) +-- check passing composite result to another function +create function getq1(two_int8s) returns int8 language plpgsql as $$ +declare r two_int8s; begin r := $1; return r.q1; end $$; +select getq1(retc(344)); + getq1 +------- + 344 +(1 row) + +select getq1(row(1,2)); + getq1 +------- + 1 +(1 row) + +do $$ +declare r1 two_int8s; r2 record; x int8; +begin + r1 := retc(345); + perform getq1(r1); + x := getq1(r1); + raise notice 'x = %', x; + r2 := retc(346); + perform getq1(r2); + x := getq1(r2); + raise notice 'x = %', x; +end$$; +NOTICE: x = 345 +NOTICE: x = 346 +-- check assignments of composites +do $$ +declare r1 two_int8s; r2 two_int8s; r3 record; r4 record; +begin + r1 := row(1,2); + raise notice 'r1 = %', r1; + r1 := r1; -- shouldn't do anything + raise notice 'r1 = %', r1; + r2 := r1; + raise notice 'r1 = %', r1; + raise notice 'r2 = %', r2; + r2.q2 = r1.q1 + 3; -- check that r2 has distinct storage + raise notice 'r1 = %', r1; + raise notice 'r2 = %', r2; + r1 := null; + raise notice 'r1 = %', r1; + raise notice 'r2 = %', r2; + r1 := row(7,11)::two_int8s; + r2 := r1; + raise notice 'r1 = %', r1; + raise notice 'r2 = %', r2; + r3 := row(1,2); + r4 := r3; + raise notice 'r3 = %', r3; + raise notice 'r4 = %', r4; + r4.f1 := r4.f1 + 3; -- check that r4 has distinct storage + raise notice 'r3 = %', r3; + raise notice 'r4 = %', r4; + r1 := r3; + raise notice 'r1 = %', r1; + r4 := r1; + raise notice 'r4 = %', r4; + r4.q2 := r4.q2 + 1; -- r4's field names have changed + raise notice 'r4 = %', r4; +end$$; +NOTICE: r1 = (1,2) +NOTICE: r1 = (1,2) +NOTICE: r1 = (1,2) +NOTICE: r2 = (1,2) +NOTICE: r1 = (1,2) +NOTICE: r2 = (1,4) +NOTICE: r1 = +NOTICE: r2 = (1,4) +NOTICE: r1 = (7,11) +NOTICE: r2 = (7,11) +NOTICE: r3 = (1,2) +NOTICE: r4 = (1,2) +NOTICE: r3 = (1,2) +NOTICE: r4 = (4,2) +NOTICE: r1 = (1,2) +NOTICE: r4 = (1,2) +NOTICE: r4 = (1,3) +-- fields of named-type vars read as null if uninitialized +do $$ +declare r1 two_int8s; +begin + raise notice 'r1 = %', r1; + raise notice 'r1.q1 = %', r1.q1; + raise notice 'r1.q2 = %', r1.q2; + raise notice 'r1 = %', r1; +end$$; +NOTICE: r1 = +NOTICE: r1.q1 = +NOTICE: r1.q2 = +NOTICE: r1 = +do $$ +declare r1 two_int8s; +begin + raise notice 'r1.q1 = %', r1.q1; + raise notice 'r1.q2 = %', r1.q2; + raise notice 'r1 = %', r1; + raise notice 'r1.nosuchfield = %', r1.nosuchfield; +end$$; +NOTICE: r1.q1 = +NOTICE: r1.q2 = +NOTICE: r1 = +ERROR: record "r1" has no field "nosuchfield" +CONTEXT: SQL statement "SELECT r1.nosuchfield" +PL/pgSQL function inline_code_block line 7 at RAISE +-- records, not so much +do $$ +declare r1 record; +begin + raise notice 'r1 = %', r1; + raise notice 'r1.f1 = %', r1.f1; + raise notice 'r1.f2 = %', r1.f2; + raise 
notice 'r1 = %', r1; +end$$; +NOTICE: r1 = +ERROR: record "r1" is not assigned yet +DETAIL: The tuple structure of a not-yet-assigned record is indeterminate. +CONTEXT: SQL statement "SELECT r1.f1" +PL/pgSQL function inline_code_block line 5 at RAISE +-- but OK if you assign first +do $$ +declare r1 record; +begin + raise notice 'r1 = %', r1; + r1 := row(1,2); + raise notice 'r1.f1 = %', r1.f1; + raise notice 'r1.f2 = %', r1.f2; + raise notice 'r1 = %', r1; + raise notice 'r1.nosuchfield = %', r1.nosuchfield; +end$$; +NOTICE: r1 = +NOTICE: r1.f1 = 1 +NOTICE: r1.f2 = 2 +NOTICE: r1 = (1,2) +ERROR: record "r1" has no field "nosuchfield" +CONTEXT: SQL statement "SELECT r1.nosuchfield" +PL/pgSQL function inline_code_block line 9 at RAISE +-- check repeated assignments to composite fields +create table some_table (id int, data text); +do $$ +declare r some_table; +begin + r := (23, 'skidoo'); + for i in 1 .. 10 loop + r.id := r.id + i; + r.data := r.data || ' ' || i; + end loop; + raise notice 'r = %', r; +end$$; +NOTICE: r = (78,"skidoo 1 2 3 4 5 6 7 8 9 10") +-- check behavior of function declared to return "record" +create function returnsrecord(int) returns record language plpgsql as +$$ begin return row($1,$1+1); end $$; +select returnsrecord(42); + returnsrecord +--------------- + (42,43) +(1 row) + +select * from returnsrecord(42) as r(x int, y int); + x | y +----+---- + 42 | 43 +(1 row) + +select * from returnsrecord(42) as r(x int, y int, z int); -- fail +ERROR: returned record type does not match expected record type +DETAIL: Number of returned columns (2) does not match expected column count (3). +CONTEXT: PL/pgSQL function returnsrecord(integer) while casting return value to function's return type +select * from returnsrecord(42) as r(x int, y bigint); -- fail +ERROR: returned record type does not match expected record type +DETAIL: Returned type integer does not match expected type bigint in column 2. +CONTEXT: PL/pgSQL function returnsrecord(integer) while casting return value to function's return type +-- same with an intermediate record variable +create or replace function returnsrecord(int) returns record language plpgsql as +$$ declare r record; begin r := row($1,$1+1); return r; end $$; +select returnsrecord(42); + returnsrecord +--------------- + (42,43) +(1 row) + +select * from returnsrecord(42) as r(x int, y int); + x | y +----+---- + 42 | 43 +(1 row) + +select * from returnsrecord(42) as r(x int, y int, z int); -- fail +ERROR: returned record type does not match expected record type +DETAIL: Number of returned columns (2) does not match expected column count (3). +CONTEXT: PL/pgSQL function returnsrecord(integer) while casting return value to function's return type +select * from returnsrecord(42) as r(x int, y bigint); -- fail +ERROR: returned record type does not match expected record type +DETAIL: Returned type integer does not match expected type bigint in column 2. 
+CONTEXT: PL/pgSQL function returnsrecord(integer) while casting return value to function's return type +-- should work the same with a missing column in the actual result value +create table has_hole(f1 int, f2 int, f3 int); +alter table has_hole drop column f2; +create or replace function returnsrecord(int) returns record language plpgsql as +$$ begin return row($1,$1+1)::has_hole; end $$; +select returnsrecord(42); + returnsrecord +--------------- + (42,43) +(1 row) + +select * from returnsrecord(42) as r(x int, y int); + x | y +----+---- + 42 | 43 +(1 row) + +select * from returnsrecord(42) as r(x int, y int, z int); -- fail +ERROR: returned record type does not match expected record type +DETAIL: Number of returned columns (2) does not match expected column count (3). +CONTEXT: PL/pgSQL function returnsrecord(integer) while casting return value to function's return type +select * from returnsrecord(42) as r(x int, y bigint); -- fail +ERROR: returned record type does not match expected record type +DETAIL: Returned type integer does not match expected type bigint in column 2. +CONTEXT: PL/pgSQL function returnsrecord(integer) while casting return value to function's return type +-- same with an intermediate record variable +create or replace function returnsrecord(int) returns record language plpgsql as +$$ declare r record; begin r := row($1,$1+1)::has_hole; return r; end $$; +select returnsrecord(42); + returnsrecord +--------------- + (42,43) +(1 row) + +select * from returnsrecord(42) as r(x int, y int); + x | y +----+---- + 42 | 43 +(1 row) + +select * from returnsrecord(42) as r(x int, y int, z int); -- fail +ERROR: returned record type does not match expected record type +DETAIL: Number of returned columns (2) does not match expected column count (3). +CONTEXT: PL/pgSQL function returnsrecord(integer) while casting return value to function's return type +select * from returnsrecord(42) as r(x int, y bigint); -- fail +ERROR: returned record type does not match expected record type +DETAIL: Returned type integer does not match expected type bigint in column 2. +CONTEXT: PL/pgSQL function returnsrecord(integer) while casting return value to function's return type +-- check access to a field of an argument declared "record" +create function getf1(x record) returns int language plpgsql as +$$ begin return x.f1; end $$; +select getf1(1); +ERROR: function getf1(integer) does not exist +LINE 1: select getf1(1); + ^ +HINT: No function matches the given name and argument types. You might need to add explicit type casts. 
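(The "returned record type does not match expected record type" errors exercised
above are raised by convert_tuples_by_position(), which the pl_exec.c hunks
further below use to reconcile a returned row's shape with the expected rowtype.
Schematically, with retdesc, tupdesc, and rettup standing in for the actual
variables in that code:

	TupleConversionMap *tupmap;

	/* ereports the message above if the row shapes are incompatible */
	tupmap = convert_tuples_by_position(retdesc, tupdesc,
										gettext_noop("returned record type does not match expected record type"));

	/* NULL result means the shapes already match; else remap the columns */
	if (tupmap)
		rettup = do_convert_tuple(rettup, tupmap);

The DETAIL lines come from that same check, which reports either the
column-count mismatch or the first column whose type differs.)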
+select getf1(row(1,2)); + getf1 +------- + 1 +(1 row) + +select getf1(row(1,2)::two_int8s); +ERROR: record "x" has no field "f1" +CONTEXT: PL/pgSQL function getf1(record) line 1 at RETURN +select getf1(row(1,2)); + getf1 +------- + 1 +(1 row) + +-- check behavior when assignment to FOR-loop variable requires coercion +do $$ +declare r two_int8s; +begin + for r in select i, i+1 from generate_series(1,4) i + loop + raise notice 'r = %', r; + end loop; +end$$; +NOTICE: r = (1,2) +NOTICE: r = (2,3) +NOTICE: r = (3,4) +NOTICE: r = (4,5) +-- check behavior when returning setof composite +create function returnssetofholes() returns setof has_hole language plpgsql as +$$ +declare r record; + h has_hole; +begin + return next h; + r := (1,2); + h := (3,4); + return next r; + return next h; + return next row(5,6); + return next row(7,8)::has_hole; +end$$; +select returnssetofholes(); + returnssetofholes +------------------- + (,) + (1,2) + (3,4) + (5,6) + (7,8) +(5 rows) + +create or replace function returnssetofholes() returns setof has_hole language plpgsql as +$$ +declare r record; +begin + return next r; -- fails, not assigned yet +end$$; +select returnssetofholes(); +ERROR: record "r" is not assigned yet +DETAIL: The tuple structure of a not-yet-assigned record is indeterminate. +CONTEXT: PL/pgSQL function returnssetofholes() line 4 at RETURN NEXT +create or replace function returnssetofholes() returns setof has_hole language plpgsql as +$$ +begin + return next row(1,2,3); -- fails +end$$; +select returnssetofholes(); +ERROR: returned record type does not match expected record type +DETAIL: Number of returned columns (3) does not match expected column count (2). +CONTEXT: PL/pgSQL function returnssetofholes() line 3 at RETURN NEXT +-- check behavior with changes of a named rowtype +create table mutable(f1 int, f2 text); +create function sillyaddone(int) returns int language plpgsql as +$$ declare r mutable; begin r.f1 := $1; return r.f1 + 1; end $$; +select sillyaddone(42); + sillyaddone +------------- + 43 +(1 row) + +alter table mutable drop column f1; +alter table mutable add column f1 float8; +-- currently, this fails due to cached plan for "r.f1 + 1" expression +select sillyaddone(42); +ERROR: type of parameter 4 (double precision) does not match that when preparing the plan (integer) +CONTEXT: PL/pgSQL function sillyaddone(integer) line 1 at RETURN +\c - +-- but it's OK after a reconnect +select sillyaddone(42); + sillyaddone +------------- + 43 +(1 row) + +alter table mutable drop column f1; +select sillyaddone(42); -- fail +ERROR: record "r" has no field "f1" +CONTEXT: PL/pgSQL function sillyaddone(integer) line 1 at assignment +create function getf3(x mutable) returns int language plpgsql as +$$ begin return x.f3; end $$; +select getf3(null::mutable); -- doesn't work yet +ERROR: record "x" has no field "f3" +CONTEXT: SQL statement "SELECT x.f3" +PL/pgSQL function getf3(mutable) line 1 at RETURN +alter table mutable add column f3 int; +select getf3(null::mutable); -- now it works + getf3 +------- + +(1 row) + +alter table mutable drop column f3; +select getf3(null::mutable); -- fails again +ERROR: record "x" has no field "f3" +CONTEXT: PL/pgSQL function getf3(mutable) line 1 at RETURN +-- check access to system columns in a record variable +create function sillytrig() returns trigger language plpgsql as +$$begin + raise notice 'old.ctid = %', old.ctid; + raise notice 'old.tableoid = %', old.tableoid::regclass; + return new; +end$$; +create trigger mutable_trig before update on mutable 
for each row +execute procedure sillytrig(); +insert into mutable values ('foo'), ('bar'); +update mutable set f2 = f2 || ' baz'; +NOTICE: old.ctid = (0,1) +NOTICE: old.tableoid = mutable +NOTICE: old.ctid = (0,2) +NOTICE: old.tableoid = mutable +table mutable; + f2 +--------- + foo baz + bar baz +(2 rows) + +-- check returning a composite datum from a trigger +create or replace function sillytrig() returns trigger language plpgsql as +$$begin + return row(new.*); +end$$; +update mutable set f2 = f2 || ' baz'; +table mutable; + f2 +------------- + foo baz baz + bar baz baz +(2 rows) + +create or replace function sillytrig() returns trigger language plpgsql as +$$declare r record; +begin + r := row(new.*); + return r; +end$$; +update mutable set f2 = f2 || ' baz'; +table mutable; + f2 +----------------- + foo baz baz baz + bar baz baz baz +(2 rows) + +-- +-- Domains of composite +-- +create domain ordered_int8s as two_int8s check((value).q1 <= (value).q2); +create function read_ordered_int8s(p ordered_int8s) returns int8 as $$ +begin return p.q1 + p.q2; end +$$ language plpgsql; +select read_ordered_int8s(row(1, 2)); + read_ordered_int8s +-------------------- + 3 +(1 row) + +select read_ordered_int8s(row(2, 1)); -- fail +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +create function build_ordered_int8s(i int8, j int8) returns ordered_int8s as $$ +begin return row(i,j); end +$$ language plpgsql; +select build_ordered_int8s(1,2); + build_ordered_int8s +--------------------- + (1,2) +(1 row) + +select build_ordered_int8s(2,1); -- fail +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +CONTEXT: PL/pgSQL function build_ordered_int8s(bigint,bigint) while casting return value to function's return type +create function build_ordered_int8s_2(i int8, j int8) returns ordered_int8s as $$ +declare r record; begin r := row(i,j); return r; end +$$ language plpgsql; +select build_ordered_int8s_2(1,2); + build_ordered_int8s_2 +----------------------- + (1,2) +(1 row) + +select build_ordered_int8s_2(2,1); -- fail +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +CONTEXT: PL/pgSQL function build_ordered_int8s_2(bigint,bigint) while casting return value to function's return type +create function build_ordered_int8s_3(i int8, j int8) returns ordered_int8s as $$ +declare r two_int8s; begin r := row(i,j); return r; end +$$ language plpgsql; +select build_ordered_int8s_3(1,2); + build_ordered_int8s_3 +----------------------- + (1,2) +(1 row) + +select build_ordered_int8s_3(2,1); -- fail +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +CONTEXT: PL/pgSQL function build_ordered_int8s_3(bigint,bigint) while casting return value to function's return type +create function build_ordered_int8s_4(i int8, j int8) returns ordered_int8s as $$ +declare r ordered_int8s; begin r := row(i,j); return r; end +$$ language plpgsql; +select build_ordered_int8s_4(1,2); + build_ordered_int8s_4 +----------------------- + (1,2) +(1 row) + +select build_ordered_int8s_4(2,1); -- fail +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +CONTEXT: PL/pgSQL function build_ordered_int8s_4(bigint,bigint) line 2 at assignment +create function build_ordered_int8s_a(i int8, j int8) returns ordered_int8s[] as $$ +begin return array[row(i,j), row(i,j+1)]; end +$$ language plpgsql; +select build_ordered_int8s_a(1,2); + build_ordered_int8s_a +----------------------- + 
{"(1,2)","(1,3)"} +(1 row) + +select build_ordered_int8s_a(2,1); -- fail +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +CONTEXT: PL/pgSQL function build_ordered_int8s_a(bigint,bigint) while casting return value to function's return type +-- check field assignment +do $$ +declare r ordered_int8s; +begin + r.q1 := null; + r.q2 := 43; + r.q1 := 42; + r.q2 := 41; -- fail +end$$; +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +CONTEXT: PL/pgSQL function inline_code_block line 7 at assignment +-- check whole-row assignment +do $$ +declare r ordered_int8s; +begin + r := null; + r := row(null,null); + r := row(1,2); + r := row(2,1); -- fail +end$$; +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +CONTEXT: PL/pgSQL function inline_code_block line 7 at assignment +-- check assignment in for-loop +do $$ +declare r ordered_int8s; +begin + for r in values (1,2),(3,4),(6,5) loop + raise notice 'r = %', r; + end loop; +end$$; +NOTICE: r = (1,2) +NOTICE: r = (3,4) +ERROR: value for domain ordered_int8s violates check constraint "ordered_int8s_check" +CONTEXT: PL/pgSQL function inline_code_block line 4 at FOR over SELECT rows +-- check behavior with toastable fields, too +create type two_texts as (f1 text, f2 text); +create domain ordered_texts as two_texts check((value).f1 <= (value).f2); +create table sometable (id int, a text, b text); +-- b should be compressed, but in-line +insert into sometable values (1, 'a', repeat('ffoob',1000)); +-- this b should be out-of-line +insert into sometable values (2, 'a', repeat('ffoob',100000)); +-- this pair should fail the domain check +insert into sometable values (3, 'z', repeat('ffoob',100000)); +do $$ +declare d ordered_texts; +begin + for d in select a, b from sometable loop + raise notice 'succeeded at "%"', d.f1; + end loop; +end$$; +NOTICE: succeeded at "a" +NOTICE: succeeded at "a" +ERROR: value for domain ordered_texts violates check constraint "ordered_texts_check" +CONTEXT: PL/pgSQL function inline_code_block line 4 at FOR over SELECT rows +do $$ +declare r record; d ordered_texts; +begin + for r in select * from sometable loop + raise notice 'processing row %', r.id; + d := row(r.a, r.b); + end loop; +end$$; +NOTICE: processing row 1 +NOTICE: processing row 2 +NOTICE: processing row 3 +ERROR: value for domain ordered_texts violates check constraint "ordered_texts_check" +CONTEXT: PL/pgSQL function inline_code_block line 6 at assignment +do $$ +declare r record; d ordered_texts; +begin + for r in select * from sometable loop + raise notice 'processing row %', r.id; + d := null; + d.f1 := r.a; + d.f2 := r.b; + end loop; +end$$; +NOTICE: processing row 1 +NOTICE: processing row 2 +NOTICE: processing row 3 +ERROR: value for domain ordered_texts violates check constraint "ordered_texts_check" +CONTEXT: PL/pgSQL function inline_code_block line 8 at assignment diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index 43de3f752c..09ecaec635 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -32,6 +32,7 @@ #include "utils/regproc.h" #include "utils/rel.h" #include "utils/syscache.h" +#include "utils/typcache.h" #include "plpgsql.h" @@ -104,7 +105,6 @@ static Node *plpgsql_param_ref(ParseState *pstate, ParamRef *pref); static Node *resolve_column_ref(ParseState *pstate, PLpgSQL_expr *expr, ColumnRef *cref, bool error_if_no_field); static Node *make_datum_param(PLpgSQL_expr *expr, int dno, int 
location); -static PLpgSQL_row *build_row_from_class(Oid classOid); static PLpgSQL_row *build_row_from_vars(PLpgSQL_variable **vars, int numvars); static PLpgSQL_type *build_datatype(HeapTuple typeTup, int32 typmod, Oid collation); static void plpgsql_start_datums(void); @@ -425,8 +425,7 @@ do_compile(FunctionCallInfo fcinfo, /* Disallow pseudotype argument */ /* (note we already replaced polymorphic types) */ /* (build_variable would do this, but wrong message) */ - if (argdtype->ttype != PLPGSQL_TTYPE_SCALAR && - argdtype->ttype != PLPGSQL_TTYPE_ROW) + if (argdtype->ttype == PLPGSQL_TTYPE_PSEUDO) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("PL/pgSQL functions cannot accept type %s", @@ -447,8 +446,8 @@ do_compile(FunctionCallInfo fcinfo, } else { - Assert(argvariable->dtype == PLPGSQL_DTYPE_ROW); - argitemtype = PLPGSQL_NSTYPE_ROW; + Assert(argvariable->dtype == PLPGSQL_DTYPE_REC); + argitemtype = PLPGSQL_NSTYPE_REC; } /* Remember arguments in appropriate arrays */ @@ -557,29 +556,25 @@ do_compile(FunctionCallInfo fcinfo, format_type_be(rettypeid)))); } - if (typeStruct->typrelid != InvalidOid || - rettypeid == RECORDOID) - function->fn_retistuple = true; - else + function->fn_retistuple = type_is_rowtype(rettypeid); + function->fn_retbyval = typeStruct->typbyval; + function->fn_rettyplen = typeStruct->typlen; + + /* + * install $0 reference, but only for polymorphic return + * types, and not when the return is specified through an + * output parameter. + */ + if (IsPolymorphicType(procStruct->prorettype) && + num_out_args == 0) { - function->fn_retbyval = typeStruct->typbyval; - function->fn_rettyplen = typeStruct->typlen; - - /* - * install $0 reference, but only for polymorphic return - * types, and not when the return is specified through an - * output parameter. 
- */ - if (IsPolymorphicType(procStruct->prorettype) && - num_out_args == 0) - { - (void) plpgsql_build_variable("$0", 0, - build_datatype(typeTup, - -1, - function->fn_input_collation), - true); - } + (void) plpgsql_build_variable("$0", 0, + build_datatype(typeTup, + -1, + function->fn_input_collation), + true); } + ReleaseSysCache(typeTup); } break; @@ -599,11 +594,11 @@ do_compile(FunctionCallInfo fcinfo, errhint("The arguments of the trigger can be accessed through TG_NARGS and TG_ARGV instead."))); /* Add the record for referencing NEW ROW */ - rec = plpgsql_build_record("new", 0, true); + rec = plpgsql_build_record("new", 0, RECORDOID, true); function->new_varno = rec->dno; /* Add the record for referencing OLD ROW */ - rec = plpgsql_build_record("old", 0, true); + rec = plpgsql_build_record("old", 0, RECORDOID, true); function->old_varno = rec->dno; /* Add the variable tg_name */ @@ -1240,19 +1235,22 @@ resolve_column_ref(ParseState *pstate, PLpgSQL_expr *expr, if (nnames == nnames_field) { /* colname could be a field in this record */ + PLpgSQL_rec *rec = (PLpgSQL_rec *) estate->datums[nse->itemno]; int i; /* search for a datum referencing this field */ - for (i = 0; i < estate->ndatums; i++) + i = rec->firstfield; + while (i >= 0) { PLpgSQL_recfield *fld = (PLpgSQL_recfield *) estate->datums[i]; - if (fld->dtype == PLPGSQL_DTYPE_RECFIELD && - fld->recparentno == nse->itemno && - strcmp(fld->fieldname, colname) == 0) + Assert(fld->dtype == PLPGSQL_DTYPE_RECFIELD && + fld->recparentno == nse->itemno); + if (strcmp(fld->fieldname, colname) == 0) { return make_datum_param(expr, i, cref->location); } + i = fld->nextfield; } /* @@ -1270,34 +1268,6 @@ resolve_column_ref(ParseState *pstate, PLpgSQL_expr *expr, parser_errposition(pstate, cref->location))); } break; - case PLPGSQL_NSTYPE_ROW: - if (nnames == nnames_wholerow) - return make_datum_param(expr, nse->itemno, cref->location); - if (nnames == nnames_field) - { - /* colname could be a field in this row */ - PLpgSQL_row *row = (PLpgSQL_row *) estate->datums[nse->itemno]; - int i; - - for (i = 0; i < row->nfields; i++) - { - if (row->fieldnames[i] && - strcmp(row->fieldnames[i], colname) == 0) - { - return make_datum_param(expr, row->varnos[i], - cref->location); - } - } - /* Not found, so throw error or return NULL */ - if (error_if_no_field) - ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_COLUMN), - errmsg("record \"%s\" has no field \"%s\"", - (nnames_field == 1) ? name1 : name2, - colname), - parser_errposition(pstate, cref->location))); - } - break; default: elog(ERROR, "unrecognized plpgsql itemtype: %d", nse->itemtype); } @@ -1385,7 +1355,6 @@ plpgsql_parse_word(char *word1, const char *yytxt, switch (ns->itemtype) { case PLPGSQL_NSTYPE_VAR: - case PLPGSQL_NSTYPE_ROW: case PLPGSQL_NSTYPE_REC: wdatum->datum = plpgsql_Datums[ns->itemno]; wdatum->ident = word1; @@ -1461,14 +1430,11 @@ plpgsql_parse_dblword(char *word1, char *word2, * datum whether it is or not --- any error will be * detected later. 
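 				 * (plpgsql_build_recfield interns these, so repeated
 				 * references to the same field share one RECFIELD datum.)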
*/ + PLpgSQL_rec *rec; PLpgSQL_recfield *new; - new = palloc(sizeof(PLpgSQL_recfield)); - new->dtype = PLPGSQL_DTYPE_RECFIELD; - new->fieldname = pstrdup(word2); - new->recparentno = ns->itemno; - - plpgsql_adddatum((PLpgSQL_datum *) new); + rec = (PLpgSQL_rec *) (plpgsql_Datums[ns->itemno]); + new = plpgsql_build_recfield(rec, word2); wdatum->datum = (PLpgSQL_datum *) new; } @@ -1482,43 +1448,6 @@ plpgsql_parse_dblword(char *word1, char *word2, wdatum->idents = idents; return true; - case PLPGSQL_NSTYPE_ROW: - if (nnames == 1) - { - /* - * First word is a row name, so second word could be a - * field in this row. Again, no error now if it - * isn't. - */ - PLpgSQL_row *row; - int i; - - row = (PLpgSQL_row *) (plpgsql_Datums[ns->itemno]); - for (i = 0; i < row->nfields; i++) - { - if (row->fieldnames[i] && - strcmp(row->fieldnames[i], word2) == 0) - { - wdatum->datum = plpgsql_Datums[row->varnos[i]]; - wdatum->ident = NULL; - wdatum->quoted = false; /* not used */ - wdatum->idents = idents; - return true; - } - } - /* fall through to return CWORD */ - } - else - { - /* Block-qualified reference to row variable. */ - wdatum->datum = plpgsql_Datums[ns->itemno]; - wdatum->ident = NULL; - wdatum->quoted = false; /* not used */ - wdatum->idents = idents; - return true; - } - break; - default: break; } @@ -1572,14 +1501,11 @@ plpgsql_parse_tripword(char *word1, char *word2, char *word3, * words 1/2 are a record name, so third word could be * a field in this record. */ + PLpgSQL_rec *rec; PLpgSQL_recfield *new; - new = palloc(sizeof(PLpgSQL_recfield)); - new->dtype = PLPGSQL_DTYPE_RECFIELD; - new->fieldname = pstrdup(word3); - new->recparentno = ns->itemno; - - plpgsql_adddatum((PLpgSQL_datum *) new); + rec = (PLpgSQL_rec *) (plpgsql_Datums[ns->itemno]); + new = plpgsql_build_recfield(rec, word3); wdatum->datum = (PLpgSQL_datum *) new; wdatum->ident = NULL; @@ -1588,32 +1514,6 @@ plpgsql_parse_tripword(char *word1, char *word2, char *word3, return true; } - case PLPGSQL_NSTYPE_ROW: - { - /* - * words 1/2 are a row name, so third word could be a - * field in this row. - */ - PLpgSQL_row *row; - int i; - - row = (PLpgSQL_row *) (plpgsql_Datums[ns->itemno]); - for (i = 0; i < row->nfields; i++) - { - if (row->fieldnames[i] && - strcmp(row->fieldnames[i], word3) == 0) - { - wdatum->datum = plpgsql_Datums[row->varnos[i]]; - wdatum->ident = NULL; - wdatum->quoted = false; /* not used */ - wdatum->idents = idents; - return true; - } - } - /* fall through to return CWORD */ - break; - } - default: break; } @@ -1864,8 +1764,8 @@ plpgsql_parse_cwordrowtype(List *idents) * plpgsql_build_variable - build a datum-array entry of a given * datatype * - * The returned struct may be a PLpgSQL_var, PLpgSQL_row, or - * PLpgSQL_rec depending on the given datatype, and is allocated via + * The returned struct may be a PLpgSQL_var or PLpgSQL_rec + * depending on the given datatype, and is allocated via * palloc. The struct is automatically added to the current datum * array, and optionally to the current namespace. 
*/ @@ -1902,31 +1802,13 @@ plpgsql_build_variable(const char *refname, int lineno, PLpgSQL_type *dtype, result = (PLpgSQL_variable *) var; break; } - case PLPGSQL_TTYPE_ROW: - { - /* Composite type -- build a row variable */ - PLpgSQL_row *row; - - row = build_row_from_class(dtype->typrelid); - - row->dtype = PLPGSQL_DTYPE_ROW; - row->refname = pstrdup(refname); - row->lineno = lineno; - - plpgsql_adddatum((PLpgSQL_datum *) row); - if (add2namespace) - plpgsql_ns_additem(PLPGSQL_NSTYPE_ROW, - row->dno, - refname); - result = (PLpgSQL_variable *) row; - break; - } case PLPGSQL_TTYPE_REC: { - /* "record" type -- build a record variable */ + /* Composite type -- build a record variable */ PLpgSQL_rec *rec; - rec = plpgsql_build_record(refname, lineno, add2namespace); + rec = plpgsql_build_record(refname, lineno, dtype->typoid, + add2namespace); result = (PLpgSQL_variable *) rec; break; } @@ -1950,7 +1832,8 @@ plpgsql_build_variable(const char *refname, int lineno, PLpgSQL_type *dtype, * Build empty named record variable, and optionally add it to namespace */ PLpgSQL_rec * -plpgsql_build_record(const char *refname, int lineno, bool add2namespace) +plpgsql_build_record(const char *refname, int lineno, Oid rectypeid, + bool add2namespace) { PLpgSQL_rec *rec; @@ -1958,10 +1841,9 @@ plpgsql_build_record(const char *refname, int lineno, bool add2namespace) rec->dtype = PLPGSQL_DTYPE_REC; rec->refname = pstrdup(refname); rec->lineno = lineno; - rec->tup = NULL; - rec->tupdesc = NULL; - rec->freetup = false; - rec->freetupdesc = false; + rec->rectypeid = rectypeid; + rec->firstfield = -1; + rec->erh = NULL; plpgsql_adddatum((PLpgSQL_datum *) rec); if (add2namespace) plpgsql_ns_additem(PLPGSQL_NSTYPE_REC, rec->dno, rec->refname); @@ -1969,104 +1851,9 @@ plpgsql_build_record(const char *refname, int lineno, bool add2namespace) return rec; } -/* - * Build a row-variable data structure given the pg_class OID. - */ -static PLpgSQL_row * -build_row_from_class(Oid classOid) -{ - PLpgSQL_row *row; - Relation rel; - Form_pg_class classStruct; - const char *relname; - int i; - - /* - * Open the relation to get info. - */ - rel = relation_open(classOid, AccessShareLock); - classStruct = RelationGetForm(rel); - relname = RelationGetRelationName(rel); - - /* - * Accept relation, sequence, view, materialized view, composite type, or - * foreign table. - */ - if (classStruct->relkind != RELKIND_RELATION && - classStruct->relkind != RELKIND_SEQUENCE && - classStruct->relkind != RELKIND_VIEW && - classStruct->relkind != RELKIND_MATVIEW && - classStruct->relkind != RELKIND_COMPOSITE_TYPE && - classStruct->relkind != RELKIND_FOREIGN_TABLE && - classStruct->relkind != RELKIND_PARTITIONED_TABLE) - ereport(ERROR, - (errcode(ERRCODE_WRONG_OBJECT_TYPE), - errmsg("relation \"%s\" is not a table", relname))); - - /* - * Create a row datum entry and all the required variables that it will - * point to. 
- */ - row = palloc0(sizeof(PLpgSQL_row)); - row->dtype = PLPGSQL_DTYPE_ROW; - row->rowtupdesc = CreateTupleDescCopy(RelationGetDescr(rel)); - row->nfields = classStruct->relnatts; - row->fieldnames = palloc(sizeof(char *) * row->nfields); - row->varnos = palloc(sizeof(int) * row->nfields); - - for (i = 0; i < row->nfields; i++) - { - Form_pg_attribute attrStruct; - - /* - * Get the attribute and check for dropped column - */ - attrStruct = TupleDescAttr(row->rowtupdesc, i); - - if (!attrStruct->attisdropped) - { - char *attname; - char refname[(NAMEDATALEN * 2) + 100]; - PLpgSQL_variable *var; - - attname = NameStr(attrStruct->attname); - snprintf(refname, sizeof(refname), "%s.%s", relname, attname); - - /* - * Create the internal variable for the field - * - * We know if the table definitions contain a default value or if - * the field is declared in the table as NOT NULL. But it's - * possible to create a table field as NOT NULL without a default - * value and that would lead to problems later when initializing - * the variables due to entering a block at execution time. Thus - * we ignore this information for now. - */ - var = plpgsql_build_variable(refname, 0, - plpgsql_build_datatype(attrStruct->atttypid, - attrStruct->atttypmod, - attrStruct->attcollation), - false); - - /* Add the variable to the row */ - row->fieldnames[i] = attname; - row->varnos[i] = var->dno; - } - else - { - /* Leave a hole in the row structure for the dropped col */ - row->fieldnames[i] = NULL; - row->varnos[i] = -1; - } - } - - relation_close(rel, AccessShareLock); - - return row; -} - /* * Build a row-variable data structure given the component variables. + * Include a rowtupdesc, since we will need to materialize the row result. */ static PLpgSQL_row * build_row_from_vars(PLpgSQL_variable **vars, int numvars) @@ -2084,9 +1871,9 @@ build_row_from_vars(PLpgSQL_variable **vars, int numvars) for (i = 0; i < numvars; i++) { PLpgSQL_variable *var = vars[i]; - Oid typoid = RECORDOID; - int32 typmod = -1; - Oid typcoll = InvalidOid; + Oid typoid; + int32 typmod; + Oid typcoll; switch (var->dtype) { @@ -2097,19 +1884,17 @@ build_row_from_vars(PLpgSQL_variable **vars, int numvars) break; case PLPGSQL_DTYPE_REC: - break; - - case PLPGSQL_DTYPE_ROW: - if (((PLpgSQL_row *) var)->rowtupdesc) - { - typoid = ((PLpgSQL_row *) var)->rowtupdesc->tdtypeid; - typmod = ((PLpgSQL_row *) var)->rowtupdesc->tdtypmod; - /* composite types have no collation */ - } + typoid = ((PLpgSQL_rec *) var)->rectypeid; + typmod = -1; /* don't know typmod, if it's used at all */ + typcoll = InvalidOid; /* composite types have no collation */ break; default: elog(ERROR, "unrecognized dtype: %d", var->dtype); + typoid = InvalidOid; /* keep compiler quiet */ + typmod = 0; + typcoll = InvalidOid; + break; } row->fieldnames[i] = var->refname; @@ -2125,6 +1910,46 @@ build_row_from_vars(PLpgSQL_variable **vars, int numvars) return row; } +/* + * Build a RECFIELD datum for the named field of the specified record variable + * + * If there's already such a datum, just return it; we don't need duplicates. 
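+ * (Each new RECFIELD starts with rectupledescid set to
+ * INVALID_TUPLEDESC_IDENTIFIER, so per-field state cached against a
+ * particular tupdesc version can later be recognized as stale.)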
+ */ +PLpgSQL_recfield * +plpgsql_build_recfield(PLpgSQL_rec *rec, const char *fldname) +{ + PLpgSQL_recfield *recfield; + int i; + + /* search for an existing datum referencing this field */ + i = rec->firstfield; + while (i >= 0) + { + PLpgSQL_recfield *fld = (PLpgSQL_recfield *) plpgsql_Datums[i]; + + Assert(fld->dtype == PLPGSQL_DTYPE_RECFIELD && + fld->recparentno == rec->dno); + if (strcmp(fld->fieldname, fldname) == 0) + return fld; + i = fld->nextfield; + } + + /* nope, so make a new one */ + recfield = palloc0(sizeof(PLpgSQL_recfield)); + recfield->dtype = PLPGSQL_DTYPE_RECFIELD; + recfield->fieldname = pstrdup(fldname); + recfield->recparentno = rec->dno; + recfield->rectupledescid = INVALID_TUPLEDESC_IDENTIFIER; + + plpgsql_adddatum((PLpgSQL_datum *) recfield); + + /* now we can link it into the parent's chain */ + recfield->nextfield = rec->firstfield; + rec->firstfield = recfield->dno; + + return recfield; +} + /* * plpgsql_build_datatype * Build PLpgSQL_type struct given type OID, typmod, and collation. @@ -2171,14 +1996,18 @@ build_datatype(HeapTuple typeTup, int32 typmod, Oid collation) switch (typeStruct->typtype) { case TYPTYPE_BASE: - case TYPTYPE_DOMAIN: case TYPTYPE_ENUM: case TYPTYPE_RANGE: typ->ttype = PLPGSQL_TTYPE_SCALAR; break; case TYPTYPE_COMPOSITE: - Assert(OidIsValid(typeStruct->typrelid)); - typ->ttype = PLPGSQL_TTYPE_ROW; + typ->ttype = PLPGSQL_TTYPE_REC; + break; + case TYPTYPE_DOMAIN: + if (type_is_rowtype(typeStruct->typbasetype)) + typ->ttype = PLPGSQL_TTYPE_REC; + else + typ->ttype = PLPGSQL_TTYPE_SCALAR; break; case TYPTYPE_PSEUDO: if (typ->typoid == RECORDOID) @@ -2194,7 +2023,6 @@ build_datatype(HeapTuple typeTup, int32 typmod, Oid collation) typ->typlen = typeStruct->typlen; typ->typbyval = typeStruct->typbyval; typ->typtype = typeStruct->typtype; - typ->typrelid = typeStruct->typrelid; typ->collation = typeStruct->typcollation; if (OidIsValid(collation) && OidIsValid(typ->collation)) typ->collation = collation; diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index 4478c5332e..7612902e8f 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -232,6 +232,8 @@ static HTAB *shared_cast_hash = NULL; /************************************************************ * Local function forward declarations ************************************************************/ +static void coerce_function_result_tuple(PLpgSQL_execstate *estate, + TupleDesc tupdesc); static void plpgsql_exec_error_callback(void *arg); static PLpgSQL_datum *copy_plpgsql_datum(PLpgSQL_datum *datum); static MemoryContext get_stmt_mcontext(PLpgSQL_execstate *estate); @@ -291,9 +293,9 @@ static int exec_stmt_dynexecute(PLpgSQL_execstate *estate, static int exec_stmt_dynfors(PLpgSQL_execstate *estate, PLpgSQL_stmt_dynfors *stmt); static int exec_stmt_commit(PLpgSQL_execstate *estate, - PLpgSQL_stmt_commit *stmt); + PLpgSQL_stmt_commit *stmt); static int exec_stmt_rollback(PLpgSQL_execstate *estate, - PLpgSQL_stmt_rollback *stmt); + PLpgSQL_stmt_rollback *stmt); static void plpgsql_estate_setup(PLpgSQL_execstate *estate, PLpgSQL_function *func, @@ -349,7 +351,7 @@ static ParamListInfo setup_param_list(PLpgSQL_execstate *estate, PLpgSQL_expr *expr); static ParamExternData *plpgsql_param_fetch(ParamListInfo params, int paramid, bool speculative, - ParamExternData *prm); + ParamExternData *workspace); static void plpgsql_param_compile(ParamListInfo params, Param *param, ExprState *state, Datum *resv, bool *resnull); @@ -357,19 +359,35 @@ static void 
plpgsql_param_eval_var(ExprState *state, ExprEvalStep *op, ExprContext *econtext); static void plpgsql_param_eval_var_ro(ExprState *state, ExprEvalStep *op, ExprContext *econtext); -static void plpgsql_param_eval_non_var(ExprState *state, ExprEvalStep *op, +static void plpgsql_param_eval_recfield(ExprState *state, ExprEvalStep *op, + ExprContext *econtext); +static void plpgsql_param_eval_generic(ExprState *state, ExprEvalStep *op, ExprContext *econtext); +static void plpgsql_param_eval_generic_ro(ExprState *state, ExprEvalStep *op, + ExprContext *econtext); static void exec_move_row(PLpgSQL_execstate *estate, PLpgSQL_variable *target, HeapTuple tup, TupleDesc tupdesc); +static ExpandedRecordHeader *make_expanded_record_for_rec(PLpgSQL_execstate *estate, + PLpgSQL_rec *rec, + TupleDesc srctupdesc, + ExpandedRecordHeader *srcerh); +static void exec_move_row_from_fields(PLpgSQL_execstate *estate, + PLpgSQL_variable *target, + ExpandedRecordHeader *newerh, + Datum *values, bool *nulls, + TupleDesc tupdesc); +static bool compatible_tupdescs(TupleDesc src_tupdesc, TupleDesc dst_tupdesc); static HeapTuple make_tuple_from_row(PLpgSQL_execstate *estate, PLpgSQL_row *row, TupleDesc tupdesc); -static HeapTuple get_tuple_from_datum(Datum value); -static TupleDesc get_tupdesc_from_datum(Datum value); +static TupleDesc deconstruct_composite_datum(Datum value, + HeapTupleData *tmptup); static void exec_move_row_from_datum(PLpgSQL_execstate *estate, PLpgSQL_variable *target, Datum value); +static void instantiate_empty_record_variable(PLpgSQL_execstate *estate, + PLpgSQL_rec *rec); static char *convert_value_to_string(PLpgSQL_execstate *estate, Datum value, Oid valtype); static Datum exec_cast_value(PLpgSQL_execstate *estate, @@ -387,6 +405,8 @@ static void assign_simple_var(PLpgSQL_execstate *estate, PLpgSQL_var *var, Datum newvalue, bool isnull, bool freeable); static void assign_text_var(PLpgSQL_execstate *estate, PLpgSQL_var *var, const char *str); +static void assign_record_var(PLpgSQL_execstate *estate, PLpgSQL_rec *rec, + ExpandedRecordHeader *erh); static PreparedParamsData *exec_eval_using_params(PLpgSQL_execstate *estate, List *params); static Portal exec_dynquery_with_params(PLpgSQL_execstate *estate, @@ -482,7 +502,7 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, /* take ownership of R/W object */ assign_simple_var(&estate, var, TransferExpandedObject(var->value, - CurrentMemoryContext), + estate.datum_context), false, true); } @@ -495,7 +515,7 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, /* flat array, so force to expanded form */ assign_simple_var(&estate, var, expand_array(var->value, - CurrentMemoryContext, + estate.datum_context, NULL), false, true); @@ -504,21 +524,21 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, } break; - case PLPGSQL_DTYPE_ROW: + case PLPGSQL_DTYPE_REC: { - PLpgSQL_row *row = (PLpgSQL_row *) estate.datums[n]; + PLpgSQL_rec *rec = (PLpgSQL_rec *) estate.datums[n]; if (!fcinfo->argnull[i]) { /* Assign row value from composite datum */ exec_move_row_from_datum(&estate, - (PLpgSQL_variable *) row, + (PLpgSQL_variable *) rec, fcinfo->arg[i]); } else { /* If arg is null, treat it as an empty row */ - exec_move_row(&estate, (PLpgSQL_variable *) row, + exec_move_row(&estate, (PLpgSQL_variable *) rec, NULL, NULL); } /* clean up after exec_move_row() */ @@ -582,15 +602,12 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, /* If we produced any tuples, send back the result 
*/ if (estate.tuple_store) { - rsi->setResult = estate.tuple_store; - if (estate.rettupdesc) - { - MemoryContext oldcxt; + MemoryContext oldcxt; - oldcxt = MemoryContextSwitchTo(estate.tuple_store_cxt); - rsi->setDesc = CreateTupleDescCopy(estate.rettupdesc); - MemoryContextSwitchTo(oldcxt); - } + rsi->setResult = estate.tuple_store; + oldcxt = MemoryContextSwitchTo(estate.tuple_store_cxt); + rsi->setDesc = CreateTupleDescCopy(estate.tuple_store_desc); + MemoryContextSwitchTo(oldcxt); } estate.retval = (Datum) 0; fcinfo->isnull = true; @@ -598,62 +615,80 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, else if (!estate.retisnull) { if (!func->fn_rettype) - { ereport(ERROR, (errmsg("cannot return a value from a procedure"))); - } + /* + * Cast result value to function's declared result type, and copy it + * out to the upper executor memory context. We must treat tuple + * results specially in order to deal with cases like rowtypes + * involving dropped columns. + */ if (estate.retistuple) { - /* - * We have to check that the returned tuple actually matches the - * expected result type. XXX would be better to cache the tupdesc - * instead of repeating get_call_result_type() - */ - HeapTuple rettup = (HeapTuple) DatumGetPointer(estate.retval); - TupleDesc tupdesc; - TupleConversionMap *tupmap; - - switch (get_call_result_type(fcinfo, NULL, &tupdesc)) + /* Don't need coercion if rowtype is known to match */ + if (func->fn_rettype == estate.rettype && + func->fn_rettype != RECORDOID) { - case TYPEFUNC_COMPOSITE: - /* got the expected result rowtype, now check it */ - tupmap = convert_tuples_by_position(estate.rettupdesc, - tupdesc, - gettext_noop("returned record type does not match expected record type")); - /* it might need conversion */ - if (tupmap) - rettup = do_convert_tuple(rettup, tupmap); - /* no need to free map, we're about to return anyway */ - break; - case TYPEFUNC_RECORD: + /* + * Copy the tuple result into upper executor memory context. + * However, if we have a R/W expanded datum, we can just + * transfer its ownership out to the upper context. + */ + estate.retval = SPI_datumTransfer(estate.retval, + false, + -1); + } + else + { + /* + * Need to look up the expected result type. XXX would be + * better to cache the tupdesc instead of repeating + * get_call_result_type(), but the only easy place to save it + * is in the PLpgSQL_function struct, and that's too + * long-lived: composite types could change during the + * existence of a PLpgSQL_function. + */ + Oid resultTypeId; + TupleDesc tupdesc; - /* - * Failed to determine actual type of RECORD. We could - * raise an error here, but what this means in practice is - * that the caller is expecting any old generic rowtype, - * so we don't really need to be restrictive. Pass back - * the generated result type, instead. - */ - tupdesc = estate.rettupdesc; - if (tupdesc == NULL) /* shouldn't happen */ + switch (get_call_result_type(fcinfo, &resultTypeId, &tupdesc)) + { + case TYPEFUNC_COMPOSITE: + /* got the expected result rowtype, now coerce it */ + coerce_function_result_tuple(&estate, tupdesc); + break; + case TYPEFUNC_COMPOSITE_DOMAIN: + /* got the expected result rowtype, now coerce it */ + coerce_function_result_tuple(&estate, tupdesc); + /* and check domain constraints */ + /* XXX allowing caching here would be good, too */ + domain_check(estate.retval, false, resultTypeId, + NULL, NULL); + break; + case TYPEFUNC_RECORD: + + /* + * Failed to determine actual type of RECORD. 
We + * could raise an error here, but what this means in + * practice is that the caller is expecting any old + * generic rowtype, so we don't really need to be + * restrictive. Pass back the generated result as-is. + */ + estate.retval = SPI_datumTransfer(estate.retval, + false, + -1); + break; + default: + /* shouldn't get here if retistuple is true ... */ elog(ERROR, "return type must be a row type"); - break; - default: - /* shouldn't get here if retistuple is true ... */ - elog(ERROR, "return type must be a row type"); - break; + break; + } } - - /* - * Copy tuple to upper executor memory, as a tuple Datum. Make - * sure it is labeled with the caller-supplied tuple type. - */ - estate.retval = PointerGetDatum(SPI_returntuple(rettup, tupdesc)); } else { - /* Cast value to proper type */ + /* Scalar case: use exec_cast_value */ estate.retval = exec_cast_value(&estate, estate.retval, &fcinfo->isnull, @@ -699,6 +734,94 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, return estate.retval; } +/* + * Helper for plpgsql_exec_function: coerce composite result to the specified + * tuple descriptor, and copy it out to upper executor memory. This is split + * out mostly for cosmetic reasons --- the logic would be very deeply nested + * otherwise. + * + * estate->retval is updated in-place. + */ +static void +coerce_function_result_tuple(PLpgSQL_execstate *estate, TupleDesc tupdesc) +{ + HeapTuple rettup; + TupleDesc retdesc; + TupleConversionMap *tupmap; + + /* We assume exec_stmt_return verified that result is composite */ + Assert(type_is_rowtype(estate->rettype)); + + /* We can special-case expanded records for speed */ + if (VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(estate->retval))) + { + ExpandedRecordHeader *erh = (ExpandedRecordHeader *) DatumGetEOHP(estate->retval); + + Assert(erh->er_magic == ER_MAGIC); + + /* Extract record's TupleDesc */ + retdesc = expanded_record_get_tupdesc(erh); + + /* check rowtype compatibility */ + tupmap = convert_tuples_by_position(retdesc, + tupdesc, + gettext_noop("returned record type does not match expected record type")); + + /* it might need conversion */ + if (tupmap) + { + rettup = expanded_record_get_tuple(erh); + Assert(rettup); + rettup = do_convert_tuple(rettup, tupmap); + + /* + * Copy tuple to upper executor memory, as a tuple Datum. Make + * sure it is labeled with the caller-supplied tuple type. + */ + estate->retval = PointerGetDatum(SPI_returntuple(rettup, tupdesc)); + /* no need to free map, we're about to return anyway */ + } + else + { + /* + * We need only copy result into upper executor memory context. + * However, if we have a R/W expanded datum, we can just transfer + * its ownership out to the upper executor context. + */ + estate->retval = SPI_datumTransfer(estate->retval, + false, + -1); + } + } + else + { + /* Convert composite datum to a HeapTuple and TupleDesc */ + HeapTupleData tmptup; + + retdesc = deconstruct_composite_datum(estate->retval, &tmptup); + rettup = &tmptup; + + /* check rowtype compatibility */ + tupmap = convert_tuples_by_position(retdesc, + tupdesc, + gettext_noop("returned record type does not match expected record type")); + + /* it might need conversion */ + if (tupmap) + rettup = do_convert_tuple(rettup, tupmap); + + /* + * Copy tuple to upper executor memory, as a tuple Datum. Make sure + * it is labeled with the caller-supplied tuple type. 
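For reference, these are the two copy-out routes coerce_function_result_tuple chooses between, reduced to a standalone sketch. The wrapper name is illustrative; SPI_datumTransfer and SPI_returntuple are the real SPI calls the hunk uses, and an active SPI connection is assumed. SPI_datumTransfer can hand a read/write expanded object across memory contexts without flattening it, which is why the no-conversion path is the cheap one:

#include "postgres.h"
#include "executor/spi.h"

static Datum
copy_result_out(Datum retval, bool needs_conversion,
                HeapTuple converted, TupleDesc tupdesc)
{
    if (!needs_conversion)
        /* move/copy wholesale; typByVal = false, typLen = -1 (varlena) */
        return SPI_datumTransfer(retval, false, -1);

    /* column conversion happened: materialize under the caller's tupdesc */
    return PointerGetDatum(SPI_returntuple(converted, tupdesc));
}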
+ */ + estate->retval = PointerGetDatum(SPI_returntuple(rettup, tupdesc)); + + /* no need to free map, we're about to return anyway */ + + ReleaseTupleDesc(retdesc); + } +} + /* ---------- * plpgsql_exec_trigger Called by the call handler for @@ -713,6 +836,7 @@ plpgsql_exec_trigger(PLpgSQL_function *func, ErrorContextCallback plerrcontext; int i; int rc; + TupleDesc tupdesc; PLpgSQL_var *var; PLpgSQL_rec *rec_new, *rec_old; @@ -747,37 +871,34 @@ plpgsql_exec_trigger(PLpgSQL_function *func, * might have a test like "if (TG_OP = 'INSERT' and NEW.foo = 'xyz')", * which should parse regardless of the current trigger type. */ + tupdesc = RelationGetDescr(trigdata->tg_relation); + rec_new = (PLpgSQL_rec *) (estate.datums[func->new_varno]); - rec_new->freetup = false; - rec_new->tupdesc = trigdata->tg_relation->rd_att; - rec_new->freetupdesc = false; rec_old = (PLpgSQL_rec *) (estate.datums[func->old_varno]); - rec_old->freetup = false; - rec_old->tupdesc = trigdata->tg_relation->rd_att; - rec_old->freetupdesc = false; + + rec_new->erh = make_expanded_record_from_tupdesc(tupdesc, + estate.datum_context); + rec_old->erh = make_expanded_record_from_exprecord(rec_new->erh, + estate.datum_context); if (!TRIGGER_FIRED_FOR_ROW(trigdata->tg_event)) { /* * Per-statement triggers don't use OLD/NEW variables */ - rec_new->tup = NULL; - rec_old->tup = NULL; } else if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event)) { - rec_new->tup = trigdata->tg_trigtuple; - rec_old->tup = NULL; + expanded_record_set_tuple(rec_new->erh, trigdata->tg_trigtuple, false); } else if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event)) { - rec_new->tup = trigdata->tg_newtuple; - rec_old->tup = trigdata->tg_trigtuple; + expanded_record_set_tuple(rec_new->erh, trigdata->tg_newtuple, false); + expanded_record_set_tuple(rec_old->erh, trigdata->tg_trigtuple, false); } else if (TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)) { - rec_new->tup = NULL; - rec_old->tup = trigdata->tg_trigtuple; + expanded_record_set_tuple(rec_old->erh, trigdata->tg_trigtuple, false); } else elog(ERROR, "unrecognized trigger action: not INSERT, DELETE, or UPDATE"); @@ -936,20 +1057,68 @@ plpgsql_exec_trigger(PLpgSQL_function *func, rettup = NULL; else { + TupleDesc retdesc; TupleConversionMap *tupmap; - rettup = (HeapTuple) DatumGetPointer(estate.retval); - /* check rowtype compatibility */ - tupmap = convert_tuples_by_position(estate.rettupdesc, - trigdata->tg_relation->rd_att, - gettext_noop("returned row structure does not match the structure of the triggering table")); - /* it might need conversion */ - if (tupmap) - rettup = do_convert_tuple(rettup, tupmap); - /* no need to free map, we're about to return anyway */ + /* We assume exec_stmt_return verified that result is composite */ + Assert(type_is_rowtype(estate.rettype)); - /* Copy tuple to upper executor memory */ - rettup = SPI_copytuple(rettup); + /* We can special-case expanded records for speed */ + if (VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(estate.retval))) + { + ExpandedRecordHeader *erh = (ExpandedRecordHeader *) DatumGetEOHP(estate.retval); + + Assert(erh->er_magic == ER_MAGIC); + + /* Extract HeapTuple and TupleDesc */ + rettup = expanded_record_get_tuple(erh); + Assert(rettup); + retdesc = expanded_record_get_tupdesc(erh); + + if (retdesc != RelationGetDescr(trigdata->tg_relation)) + { + /* check rowtype compatibility */ + tupmap = convert_tuples_by_position(retdesc, + RelationGetDescr(trigdata->tg_relation), + gettext_noop("returned row structure does not match the structure of the triggering 
table")); + /* it might need conversion */ + if (tupmap) + rettup = do_convert_tuple(rettup, tupmap); + /* no need to free map, we're about to return anyway */ + } + + /* + * Copy tuple to upper executor memory. But if user just did + * "return new" or "return old" without changing anything, there's + * no need to copy; we can return the original tuple (which will + * save a few cycles in trigger.c as well as here). + */ + if (rettup != trigdata->tg_newtuple && + rettup != trigdata->tg_trigtuple) + rettup = SPI_copytuple(rettup); + } + else + { + /* Convert composite datum to a HeapTuple and TupleDesc */ + HeapTupleData tmptup; + + retdesc = deconstruct_composite_datum(estate.retval, &tmptup); + rettup = &tmptup; + + /* check rowtype compatibility */ + tupmap = convert_tuples_by_position(retdesc, + RelationGetDescr(trigdata->tg_relation), + gettext_noop("returned row structure does not match the structure of the triggering table")); + /* it might need conversion */ + if (tupmap) + rettup = do_convert_tuple(rettup, tupmap); + + ReleaseTupleDesc(retdesc); + /* no need to free map, we're about to return anyway */ + + /* Copy tuple to upper executor memory */ + rettup = SPI_copytuple(rettup); + } } /* @@ -1146,11 +1315,8 @@ copy_plpgsql_datum(PLpgSQL_datum *datum) PLpgSQL_rec *new = palloc(sizeof(PLpgSQL_rec)); memcpy(new, datum, sizeof(PLpgSQL_rec)); - /* should be preset to null/non-freeable */ - Assert(new->tup == NULL); - Assert(new->tupdesc == NULL); - Assert(!new->freetup); - Assert(!new->freetupdesc); + /* should be preset to empty */ + Assert(new->erh == NULL); result = (PLpgSQL_datum *) new; } @@ -1162,8 +1328,8 @@ copy_plpgsql_datum(PLpgSQL_datum *datum) /* * These datum records are read-only at runtime, so no need to - * copy them (well, ARRAYELEM contains some cached type data, but - * we'd just as soon centralize the caching anyway) + * copy them (well, RECFIELD and ARRAYELEM contain cached data, + * but we'd just as soon centralize the caching anyway) */ result = datum; break; @@ -1334,18 +1500,9 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) { PLpgSQL_rec *rec = (PLpgSQL_rec *) datum; - if (rec->freetup) - { - heap_freetuple(rec->tup); - rec->freetup = false; - } - if (rec->freetupdesc) - { - FreeTupleDesc(rec->tupdesc); - rec->freetupdesc = false; - } - rec->tup = NULL; - rec->tupdesc = NULL; + if (rec->erh) + DeleteExpandedObject(ExpandedRecordGetDatum(rec->erh)); + rec->erh = NULL; } break; @@ -1401,16 +1558,12 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) /* * If the block ended with RETURN, we may need to copy the return - * value out of the subtransaction eval_context. This is - * currently only needed for scalar result types --- rowtype - * values will always exist in the function's main memory context, - * cf. exec_stmt_return(). We can avoid a physical copy if the - * value happens to be a R/W expanded object. + * value out of the subtransaction eval_context. We can avoid a + * physical copy if the value happens to be a R/W expanded object. */ if (rc == PLPGSQL_RC_RETURN && !estate->retisset && - !estate->retisnull && - estate->rettupdesc == NULL) + !estate->retisnull) { int16 resTypLen; bool resTypByVal; @@ -2574,12 +2727,8 @@ exec_stmt_exit(PLpgSQL_execstate *estate, PLpgSQL_stmt_exit *stmt) * exec_stmt_return Evaluate an expression and start * returning from the function. 
* - * Note: in the retistuple code paths, the returned tuple is always in the - * function's main context, whereas for non-tuple data types the result may - * be in the eval_mcontext. The former case is not a memory leak since we're - * about to exit the function anyway. (If you want to change it, note that - * exec_stmt_block() knows about this behavior.) The latter case means that - * we must not do exec_eval_cleanup while unwinding the control stack. + * Note: The result may be in the eval_mcontext. Therefore, we must not + * do exec_eval_cleanup while unwinding the control stack. * ---------- */ static int @@ -2593,9 +2742,8 @@ exec_stmt_return(PLpgSQL_execstate *estate, PLpgSQL_stmt_return *stmt) if (estate->retisset) return PLPGSQL_RC_RETURN; - /* initialize for null result (possibly a tuple) */ + /* initialize for null result */ estate->retval = (Datum) 0; - estate->rettupdesc = NULL; estate->retisnull = true; estate->rettype = InvalidOid; @@ -2626,10 +2774,12 @@ exec_stmt_return(PLpgSQL_execstate *estate, PLpgSQL_stmt_return *stmt) estate->rettype = var->datatype->typoid; /* - * Cope with retistuple case. A PLpgSQL_var could not be - * of composite type, so we needn't make any effort to - * convert. However, for consistency with the expression - * code path, don't throw error if the result is NULL. + * A PLpgSQL_var could not be of composite type, so + * conversion must fail if retistuple. We throw a custom + * error mainly for consistency with historical behavior. + * For the same reason, we don't throw error if the result + * is NULL. (Note that plpgsql_exec_trigger assumes that + * any non-null result has been verified to be composite.) */ if (estate->retistuple && !estate->retisnull) ereport(ERROR, @@ -2641,23 +2791,13 @@ exec_stmt_return(PLpgSQL_execstate *estate, PLpgSQL_stmt_return *stmt) case PLPGSQL_DTYPE_REC: { PLpgSQL_rec *rec = (PLpgSQL_rec *) retvar; - int32 rettypmod; - if (HeapTupleIsValid(rec->tup)) + /* If record is empty, we return NULL not a row of nulls */ + if (rec->erh && !ExpandedRecordIsEmpty(rec->erh)) { - if (estate->retistuple) - { - estate->retval = PointerGetDatum(rec->tup); - estate->rettupdesc = rec->tupdesc; - estate->retisnull = false; - } - else - exec_eval_datum(estate, - retvar, - &estate->rettype, - &rettypmod, - &estate->retval, - &estate->retisnull); + estate->retval = ExpandedRecordGetDatum(rec->erh); + estate->retisnull = false; + estate->rettype = rec->rectypeid; } } break; @@ -2667,26 +2807,13 @@ exec_stmt_return(PLpgSQL_execstate *estate, PLpgSQL_stmt_return *stmt) PLpgSQL_row *row = (PLpgSQL_row *) retvar; int32 rettypmod; - if (estate->retistuple) - { - HeapTuple tup; - - if (!row->rowtupdesc) /* should not happen */ - elog(ERROR, "row variable has no tupdesc"); - tup = make_tuple_from_row(estate, row, row->rowtupdesc); - if (tup == NULL) /* should not happen */ - elog(ERROR, "row not compatible with its own tupdesc"); - estate->retval = PointerGetDatum(tup); - estate->rettupdesc = row->rowtupdesc; - estate->retisnull = false; - } - else - exec_eval_datum(estate, - retvar, - &estate->rettype, - &rettypmod, - &estate->retval, - &estate->retisnull); + /* We get here if there are multiple OUT parameters */ + exec_eval_datum(estate, + (PLpgSQL_datum *) row, + &estate->rettype, + &rettypmod, + &estate->retval, + &estate->retisnull); } break; @@ -2706,23 +2833,15 @@ exec_stmt_return(PLpgSQL_execstate *estate, PLpgSQL_stmt_return *stmt) &(estate->rettype), &rettypmod); - if (estate->retistuple && !estate->retisnull) - { - /* Convert composite 
datum to a HeapTuple and TupleDesc */ - HeapTuple tuple; - TupleDesc tupdesc; - - /* Source must be of RECORD or composite type */ - if (!type_is_rowtype(estate->rettype)) - ereport(ERROR, - (errcode(ERRCODE_DATATYPE_MISMATCH), - errmsg("cannot return non-composite value from function returning composite type"))); - tuple = get_tuple_from_datum(estate->retval); - tupdesc = get_tupdesc_from_datum(estate->retval); - estate->retval = PointerGetDatum(tuple); - estate->rettupdesc = CreateTupleDescCopy(tupdesc); - ReleaseTupleDesc(tupdesc); - } + /* + * As in the DTYPE_VAR case above, throw a custom error if a non-null, + * non-composite value is returned in a function returning tuple. + */ + if (estate->retistuple && !estate->retisnull && + !type_is_rowtype(estate->rettype)) + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("cannot return non-composite value from function returning composite type"))); return PLPGSQL_RC_RETURN; } @@ -2765,8 +2884,8 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, if (estate->tuple_store == NULL) exec_init_tuple_store(estate); - /* rettupdesc will be filled by exec_init_tuple_store */ - tupdesc = estate->rettupdesc; + /* tuple_store_desc will be filled by exec_init_tuple_store */ + tupdesc = estate->tuple_store_desc; natts = tupdesc->natts; /* @@ -2819,22 +2938,22 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, case PLPGSQL_DTYPE_REC: { PLpgSQL_rec *rec = (PLpgSQL_rec *) retvar; + TupleDesc rec_tupdesc; TupleConversionMap *tupmap; - if (!HeapTupleIsValid(rec->tup)) - ereport(ERROR, - (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("record \"%s\" is not assigned yet", - rec->refname), - errdetail("The tuple structure of a not-yet-assigned" - " record is indeterminate."))); + /* If rec is null, try to convert it to a row of nulls */ + if (rec->erh == NULL) + instantiate_empty_record_variable(estate, rec); + if (ExpandedRecordIsEmpty(rec->erh)) + deconstruct_expanded_record(rec->erh); /* Use eval_mcontext for tuple conversion work */ oldcontext = MemoryContextSwitchTo(get_eval_mcontext(estate)); - tupmap = convert_tuples_by_position(rec->tupdesc, + rec_tupdesc = expanded_record_get_tupdesc(rec->erh); + tupmap = convert_tuples_by_position(rec_tupdesc, tupdesc, gettext_noop("wrong record type supplied in RETURN NEXT")); - tuple = rec->tup; + tuple = expanded_record_get_tuple(rec->erh); if (tupmap) tuple = do_convert_tuple(tuple, tupmap); tuplestore_puttuple(estate->tuple_store, tuple); @@ -2846,10 +2965,12 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, { PLpgSQL_row *row = (PLpgSQL_row *) retvar; + /* We get here if there are multiple OUT parameters */ + /* Use eval_mcontext for tuple conversion work */ oldcontext = MemoryContextSwitchTo(get_eval_mcontext(estate)); tuple = make_tuple_from_row(estate, row, tupdesc); - if (tuple == NULL) + if (tuple == NULL) /* should not happen */ ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("wrong record type supplied in RETURN NEXT"))); @@ -2881,6 +3002,7 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, /* Expression should be of RECORD or composite type */ if (!isNull) { + HeapTupleData tmptup; TupleDesc retvaldesc; TupleConversionMap *tupmap; @@ -2891,8 +3013,8 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, /* Use eval_mcontext for tuple conversion work */ oldcontext = MemoryContextSwitchTo(get_eval_mcontext(estate)); - tuple = get_tuple_from_datum(retval); - retvaldesc = get_tupdesc_from_datum(retval); + retvaldesc = deconstruct_composite_datum(retval, &tmptup); + tuple 
= &tmptup; tupmap = convert_tuples_by_position(retvaldesc, tupdesc, gettext_noop("returned record type does not match expected record type")); if (tupmap) @@ -2992,7 +3114,7 @@ exec_stmt_return_query(PLpgSQL_execstate *estate, oldcontext = MemoryContextSwitchTo(get_eval_mcontext(estate)); tupmap = convert_tuples_by_position(portal->tupDesc, - estate->rettupdesc, + estate->tuple_store_desc, gettext_noop("structure of query does not match function result type")); while (true) @@ -3069,7 +3191,7 @@ exec_init_tuple_store(PLpgSQL_execstate *estate) CurrentResourceOwner = oldowner; MemoryContextSwitchTo(oldcxt); - estate->rettupdesc = rsi->expectedDesc; + estate->tuple_store_desc = rsi->expectedDesc; } #define SET_RAISE_OPTION_TEXT(opt, name) \ @@ -3363,11 +3485,11 @@ plpgsql_estate_setup(PLpgSQL_execstate *estate, estate->readonly_func = func->fn_readonly; - estate->rettupdesc = NULL; estate->exitlabel = NULL; estate->cur_error = NULL; estate->tuple_store = NULL; + estate->tuple_store_desc = NULL; if (rsi) { estate->tuple_store_cxt = rsi->econtext->ecxt_per_query_memory; @@ -3384,6 +3506,7 @@ plpgsql_estate_setup(PLpgSQL_execstate *estate, estate->ndatums = func->ndatums; estate->datums = palloc(sizeof(PLpgSQL_datum *) * estate->ndatums); /* caller is expected to fill the datums array */ + estate->datum_context = CurrentMemoryContext; /* initialize our ParamListInfo with appropriate hook functions */ estate->paramLI = (ParamListInfo) @@ -4449,7 +4572,7 @@ exec_assign_value(PLpgSQL_execstate *estate, { /* array and not already R/W, so apply expand_array */ newvalue = expand_array(newvalue, - CurrentMemoryContext, + estate->datum_context, NULL); } else @@ -4534,64 +4657,58 @@ exec_assign_value(PLpgSQL_execstate *estate, */ PLpgSQL_recfield *recfield = (PLpgSQL_recfield *) target; PLpgSQL_rec *rec; - int fno; - HeapTuple newtup; - int colnums[1]; - Datum values[1]; - bool nulls[1]; - Oid atttype; - int32 atttypmod; + ExpandedRecordHeader *erh; rec = (PLpgSQL_rec *) (estate->datums[recfield->recparentno]); + erh = rec->erh; /* - * Check that there is already a tuple in the record. We need - * that because records don't have any predefined field - * structure. - */ - if (!HeapTupleIsValid(rec->tup)) - ereport(ERROR, - (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("record \"%s\" is not assigned yet", - rec->refname), - errdetail("The tuple structure of a not-yet-assigned record is indeterminate."))); - - /* - * Get the number of the record field to change. Disallow - * system columns because the code below won't cope. + * If record variable is NULL, instantiate it if it has a + * named composite type, else complain. (This won't change + * the logical state of the record, but if we successfully + * assign below, the unassigned fields will all become NULLs.) */ - fno = SPI_fnumber(rec->tupdesc, recfield->fieldname); - if (fno <= 0) - ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_COLUMN), - errmsg("record \"%s\" has no field \"%s\"", - rec->refname, recfield->fieldname))); - colnums[0] = fno; + if (erh == NULL) + { + instantiate_empty_record_variable(estate, rec); + erh = rec->erh; + } /* - * Now insert the new value, being careful to cast it to the - * right type. + * Look up the field's properties if we have not already, or + * if the tuple descriptor ID changed since last time. 
*/ - atttype = TupleDescAttr(rec->tupdesc, fno - 1)->atttypid; - atttypmod = TupleDescAttr(rec->tupdesc, fno - 1)->atttypmod; - values[0] = exec_cast_value(estate, - value, - &isNull, - valtype, - valtypmod, - atttype, - atttypmod); - nulls[0] = isNull; - - newtup = heap_modify_tuple_by_cols(rec->tup, rec->tupdesc, - 1, colnums, values, nulls); - - if (rec->freetup) - heap_freetuple(rec->tup); - - rec->tup = newtup; - rec->freetup = true; + if (unlikely(recfield->rectupledescid != erh->er_tupdesc_id)) + { + if (!expanded_record_lookup_field(erh, + recfield->fieldname, + &recfield->finfo)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_COLUMN), + errmsg("record \"%s\" has no field \"%s\"", + rec->refname, recfield->fieldname))); + recfield->rectupledescid = erh->er_tupdesc_id; + } + /* We don't support assignments to system columns. */ + if (recfield->finfo.fnumber <= 0) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("cannot assign to system column \"%s\"", + recfield->fieldname))); + + /* Cast the new value to the right type, if needed. */ + value = exec_cast_value(estate, + value, + &isNull, + valtype, + valtypmod, + recfield->finfo.ftypeid, + recfield->finfo.ftypmod); + + /* And assign it. */ + expanded_record_set_field(erh, recfield->finfo.fnumber, + value, isNull); break; } @@ -4837,6 +4954,7 @@ exec_eval_datum(PLpgSQL_execstate *estate, PLpgSQL_row *row = (PLpgSQL_row *) datum; HeapTuple tup; + /* We get here if there are multiple OUT parameters */ if (!row->rowtupdesc) /* should not happen */ elog(ERROR, "row variable has no tupdesc"); /* Make sure we have a valid type/typmod setting */ @@ -4857,22 +4975,41 @@ exec_eval_datum(PLpgSQL_execstate *estate, { PLpgSQL_rec *rec = (PLpgSQL_rec *) datum; - if (!HeapTupleIsValid(rec->tup)) - ereport(ERROR, - (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("record \"%s\" is not assigned yet", - rec->refname), - errdetail("The tuple structure of a not-yet-assigned record is indeterminate."))); - Assert(rec->tupdesc != NULL); - /* Make sure we have a valid type/typmod setting */ - BlessTupleDesc(rec->tupdesc); - - oldcontext = MemoryContextSwitchTo(get_eval_mcontext(estate)); - *typeid = rec->tupdesc->tdtypeid; - *typetypmod = rec->tupdesc->tdtypmod; - *value = heap_copy_tuple_as_datum(rec->tup, rec->tupdesc); - *isnull = false; - MemoryContextSwitchTo(oldcontext); + if (rec->erh == NULL) + { + /* Treat uninstantiated record as a simple NULL */ + *value = (Datum) 0; + *isnull = true; + /* Report variable's declared type */ + *typeid = rec->rectypeid; + *typetypmod = -1; + } + else + { + if (ExpandedRecordIsEmpty(rec->erh)) + { + /* Empty record is also a NULL */ + *value = (Datum) 0; + *isnull = true; + } + else + { + *value = ExpandedRecordGetDatum(rec->erh); + *isnull = false; + } + if (rec->rectypeid != RECORDOID) + { + /* Report variable's declared type, if not RECORD */ + *typeid = rec->rectypeid; + *typetypmod = -1; + } + else + { + /* Report record's actual type if declared RECORD */ + *typeid = rec->erh->er_typeid; + *typetypmod = rec->erh->er_typmod; + } + } break; } @@ -4880,31 +5017,46 @@ exec_eval_datum(PLpgSQL_execstate *estate, { PLpgSQL_recfield *recfield = (PLpgSQL_recfield *) datum; PLpgSQL_rec *rec; - int fno; + ExpandedRecordHeader *erh; rec = (PLpgSQL_rec *) (estate->datums[recfield->recparentno]); - if (!HeapTupleIsValid(rec->tup)) - ereport(ERROR, - (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("record \"%s\" is not assigned yet", - rec->refname), - errdetail("The tuple 
structure of a not-yet-assigned record is indeterminate."))); - fno = SPI_fnumber(rec->tupdesc, recfield->fieldname); - if (fno == SPI_ERROR_NOATTRIBUTE) - ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_COLUMN), - errmsg("record \"%s\" has no field \"%s\"", - rec->refname, recfield->fieldname))); - *typeid = SPI_gettypeid(rec->tupdesc, fno); - if (fno > 0) + erh = rec->erh; + + /* + * If record variable is NULL, instantiate it if it has a + * named composite type, else complain. (This won't change + * the logical state of the record: it's still NULL.) + */ + if (erh == NULL) { - Form_pg_attribute attr = TupleDescAttr(rec->tupdesc, fno - 1); + instantiate_empty_record_variable(estate, rec); + erh = rec->erh; + } - *typetypmod = attr->atttypmod; + /* + * Look up the field's properties if we have not already, or + * if the tuple descriptor ID changed since last time. + */ + if (unlikely(recfield->rectupledescid != erh->er_tupdesc_id)) + { + if (!expanded_record_lookup_field(erh, + recfield->fieldname, + &recfield->finfo)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_COLUMN), + errmsg("record \"%s\" has no field \"%s\"", + rec->refname, recfield->fieldname))); + recfield->rectupledescid = erh->er_tupdesc_id; } - else - *typetypmod = -1; - *value = SPI_getbinval(rec->tup, rec->tupdesc, fno, isnull); + + /* Report type data. */ + *typeid = recfield->finfo.ftypeid; + *typetypmod = recfield->finfo.ftypmod; + + /* And fetch the field value. */ + *value = expanded_record_get_field(erh, + recfield->finfo.fnumber, + isnull); break; } @@ -4916,10 +5068,8 @@ exec_eval_datum(PLpgSQL_execstate *estate, /* * plpgsql_exec_get_datum_type Get datatype of a PLpgSQL_datum * - * This is the same logic as in exec_eval_datum, except that it can handle - * some cases where exec_eval_datum has to fail; specifically, we may have - * a tupdesc but no row value for a record variable. (This currently can - * happen only for a trigger's NEW/OLD records.) + * This is the same logic as in exec_eval_datum, but we skip acquiring + * the actual value of the variable. Also, needn't support DTYPE_ROW. 
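Condensing the exec_eval_datum rules above for record variables: having no expanded object at all, or an "empty" one, both read as SQL NULL, and the reported type is the variable's declared type unless it was declared plain RECORD, in which case the expanded object's actual rowtype is reported. A sketch against the patch's expandedrecord.h API (wrapper name ours; typmod handling omitted):

#include "postgres.h"
#include "plpgsql.h"
#include "utils/expandedrecord.h"

static Datum
rec_read_value(PLpgSQL_rec *rec, bool *isnull, Oid *typid)
{
    if (rec->erh == NULL)
    {
        *isnull = true;
        *typid = rec->rectypeid;        /* only the declared type is known */
        return (Datum) 0;
    }
    *isnull = ExpandedRecordIsEmpty(rec->erh);  /* empty record reads as NULL */
    *typid = (rec->rectypeid != RECORDOID) ? rec->rectypeid
                                           : rec->erh->er_typeid;
    return *isnull ? (Datum) 0 : ExpandedRecordGetDatum(rec->erh);
}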
*/ Oid plpgsql_exec_get_datum_type(PLpgSQL_execstate *estate, @@ -4937,31 +5087,20 @@ plpgsql_exec_get_datum_type(PLpgSQL_execstate *estate, break; } - case PLPGSQL_DTYPE_ROW: - { - PLpgSQL_row *row = (PLpgSQL_row *) datum; - - if (!row->rowtupdesc) /* should not happen */ - elog(ERROR, "row variable has no tupdesc"); - /* Make sure we have a valid type/typmod setting */ - BlessTupleDesc(row->rowtupdesc); - typeid = row->rowtupdesc->tdtypeid; - break; - } - case PLPGSQL_DTYPE_REC: { PLpgSQL_rec *rec = (PLpgSQL_rec *) datum; - if (rec->tupdesc == NULL) - ereport(ERROR, - (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("record \"%s\" is not assigned yet", - rec->refname), - errdetail("The tuple structure of a not-yet-assigned record is indeterminate."))); - /* Make sure we have a valid type/typmod setting */ - BlessTupleDesc(rec->tupdesc); - typeid = rec->tupdesc->tdtypeid; + if (rec->erh == NULL || rec->rectypeid != RECORDOID) + { + /* Report variable's declared type */ + typeid = rec->rectypeid; + } + else + { + /* Report record's actual type if declared RECORD */ + typeid = rec->erh->er_typeid; + } break; } @@ -4969,22 +5108,34 @@ plpgsql_exec_get_datum_type(PLpgSQL_execstate *estate, { PLpgSQL_recfield *recfield = (PLpgSQL_recfield *) datum; PLpgSQL_rec *rec; - int fno; rec = (PLpgSQL_rec *) (estate->datums[recfield->recparentno]); - if (rec->tupdesc == NULL) - ereport(ERROR, - (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("record \"%s\" is not assigned yet", - rec->refname), - errdetail("The tuple structure of a not-yet-assigned record is indeterminate."))); - fno = SPI_fnumber(rec->tupdesc, recfield->fieldname); - if (fno == SPI_ERROR_NOATTRIBUTE) - ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_COLUMN), - errmsg("record \"%s\" has no field \"%s\"", - rec->refname, recfield->fieldname))); - typeid = SPI_gettypeid(rec->tupdesc, fno); + + /* + * If record variable is NULL, instantiate it if it has a + * named composite type, else complain. (This won't change + * the logical state of the record: it's still NULL.) + */ + if (rec->erh == NULL) + instantiate_empty_record_variable(estate, rec); + + /* + * Look up the field's properties if we have not already, or + * if the tuple descriptor ID changed since last time. + */ + if (unlikely(recfield->rectupledescid != rec->erh->er_tupdesc_id)) + { + if (!expanded_record_lookup_field(rec->erh, + recfield->fieldname, + &recfield->finfo)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_COLUMN), + errmsg("record \"%s\" has no field \"%s\"", + rec->refname, recfield->fieldname))); + recfield->rectupledescid = rec->erh->er_tupdesc_id; + } + + typeid = recfield->finfo.ftypeid; break; } @@ -5001,7 +5152,8 @@ plpgsql_exec_get_datum_type(PLpgSQL_execstate *estate, * plpgsql_exec_get_datum_type_info Get datatype etc of a PLpgSQL_datum * * An extended version of plpgsql_exec_get_datum_type, which also retrieves the - * typmod and collation of the datum. + * typmod and collation of the datum. Note however that we don't report the + * possibly-mutable typmod of RECORD values, but say -1 always. 
*/ void plpgsql_exec_get_datum_type_info(PLpgSQL_execstate *estate, @@ -5020,37 +5172,23 @@ plpgsql_exec_get_datum_type_info(PLpgSQL_execstate *estate, break; } - case PLPGSQL_DTYPE_ROW: - { - PLpgSQL_row *row = (PLpgSQL_row *) datum; - - if (!row->rowtupdesc) /* should not happen */ - elog(ERROR, "row variable has no tupdesc"); - /* Make sure we have a valid type/typmod setting */ - BlessTupleDesc(row->rowtupdesc); - *typeid = row->rowtupdesc->tdtypeid; - /* do NOT return the mutable typmod of a RECORD variable */ - *typmod = -1; - /* composite types are never collatable */ - *collation = InvalidOid; - break; - } - case PLPGSQL_DTYPE_REC: { PLpgSQL_rec *rec = (PLpgSQL_rec *) datum; - if (rec->tupdesc == NULL) - ereport(ERROR, - (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("record \"%s\" is not assigned yet", - rec->refname), - errdetail("The tuple structure of a not-yet-assigned record is indeterminate."))); - /* Make sure we have a valid type/typmod setting */ - BlessTupleDesc(rec->tupdesc); - *typeid = rec->tupdesc->tdtypeid; - /* do NOT return the mutable typmod of a RECORD variable */ - *typmod = -1; + if (rec->erh == NULL || rec->rectypeid != RECORDOID) + { + /* Report variable's declared type */ + *typeid = rec->rectypeid; + *typmod = -1; + } + else + { + /* Report record's actual type if declared RECORD */ + *typeid = rec->erh->er_typeid; + /* do NOT return the mutable typmod of a RECORD variable */ + *typmod = -1; + } /* composite types are never collatable */ *collation = InvalidOid; break; @@ -5060,38 +5198,36 @@ plpgsql_exec_get_datum_type_info(PLpgSQL_execstate *estate, { PLpgSQL_recfield *recfield = (PLpgSQL_recfield *) datum; PLpgSQL_rec *rec; - int fno; rec = (PLpgSQL_rec *) (estate->datums[recfield->recparentno]); - if (rec->tupdesc == NULL) - ereport(ERROR, - (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), - errmsg("record \"%s\" is not assigned yet", - rec->refname), - errdetail("The tuple structure of a not-yet-assigned record is indeterminate."))); - fno = SPI_fnumber(rec->tupdesc, recfield->fieldname); - if (fno == SPI_ERROR_NOATTRIBUTE) - ereport(ERROR, - (errcode(ERRCODE_UNDEFINED_COLUMN), - errmsg("record \"%s\" has no field \"%s\"", - rec->refname, recfield->fieldname))); - *typeid = SPI_gettypeid(rec->tupdesc, fno); - if (fno > 0) - { - Form_pg_attribute attr = TupleDescAttr(rec->tupdesc, fno - 1); - *typmod = attr->atttypmod; - } - else - *typmod = -1; - if (fno > 0) - { - Form_pg_attribute attr = TupleDescAttr(rec->tupdesc, fno - 1); + /* + * If record variable is NULL, instantiate it if it has a + * named composite type, else complain. (This won't change + * the logical state of the record: it's still NULL.) + */ + if (rec->erh == NULL) + instantiate_empty_record_variable(estate, rec); - *collation = attr->attcollation; + /* + * Look up the field's properties if we have not already, or + * if the tuple descriptor ID changed since last time. 
+ */ + if (unlikely(recfield->rectupledescid != rec->erh->er_tupdesc_id)) + { + if (!expanded_record_lookup_field(rec->erh, + recfield->fieldname, + &recfield->finfo)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_COLUMN), + errmsg("record \"%s\" has no field \"%s\"", + rec->refname, recfield->fieldname))); + recfield->rectupledescid = rec->erh->er_tupdesc_id; } - else /* no system column types have collation */ - *collation = InvalidOid; + + *typeid = recfield->finfo.ftypeid; + *typmod = recfield->finfo.ftypmod; + *collation = recfield->finfo.fcollation; break; } @@ -5315,6 +5451,8 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, SPITupleTable *tuptab; bool found = false; int rc = PLPGSQL_RC_OK; + uint64 previous_id = INVALID_TUPLEDESC_IDENTIFIER; + bool tupdescs_match = true; uint64 n; /* Fetch loop variable's datum entry */ @@ -5357,10 +5495,57 @@ exec_for_query(PLpgSQL_execstate *estate, PLpgSQL_stmt_forq *stmt, for (i = 0; i < n; i++) { /* - * Assign the tuple to the target + * Assign the tuple to the target. Here, because we know that all + * loop iterations should be assigning the same tupdesc, we can + * optimize away repeated creations of expanded records with + * identical tupdescs. Testing for changes of er_tupdesc_id is + * reliable even if the loop body contains assignments that + * replace the target's value entirely, because it's assigned from + * a process-global counter. The case where the tupdescs don't + * match could possibly be handled more efficiently than this + * coding does, but it's not clear extra effort is worthwhile. */ - exec_move_row(estate, var, tuptab->vals[i], tuptab->tupdesc); - exec_eval_cleanup(estate); + if (var->dtype == PLPGSQL_DTYPE_REC) + { + PLpgSQL_rec *rec = (PLpgSQL_rec *) var; + + if (rec->erh && + rec->erh->er_tupdesc_id == previous_id && + tupdescs_match) + { + /* Only need to assign a new tuple value */ + expanded_record_set_tuple(rec->erh, tuptab->vals[i], true); + } + else + { + /* + * First time through, or var's tupdesc changed in loop, + * or we have to do it the hard way because type coercion + * is needed. + */ + exec_move_row(estate, var, + tuptab->vals[i], tuptab->tupdesc); + + /* + * Check to see if physical assignment is OK next time. + * Once the tupdesc comparison has failed once, we don't + * bother rechecking in subsequent loop iterations. + */ + if (tupdescs_match) + { + tupdescs_match = + (rec->rectypeid == RECORDOID || + rec->rectypeid == tuptab->tupdesc->tdtypeid || + compatible_tupdescs(tuptab->tupdesc, + expanded_record_get_tupdesc(rec->erh))); + } + previous_id = rec->erh->er_tupdesc_id; + } + } + else + exec_move_row(estate, var, tuptab->vals[i], tuptab->tupdesc); + + exec_eval_cleanup(estate); /* * Execute the statements @@ -5684,27 +5869,33 @@ plpgsql_param_fetch(ParamListInfo params, break; case PLPGSQL_DTYPE_REC: - { - PLpgSQL_rec *rec = (PLpgSQL_rec *) datum; - - if (!HeapTupleIsValid(rec->tup)) - ok = false; - break; - } + /* always safe (might return NULL, that's fine) */ + break; case PLPGSQL_DTYPE_RECFIELD: { PLpgSQL_recfield *recfield = (PLpgSQL_recfield *) datum; PLpgSQL_rec *rec; - int fno; rec = (PLpgSQL_rec *) (estate->datums[recfield->recparentno]); - if (!HeapTupleIsValid(rec->tup)) + + /* + * If record variable is NULL, don't risk anything. + */ + if (rec->erh == NULL) ok = false; - else + + /* + * Look up the field's properties if we have not already, + * or if the tuple descriptor ID changed since last time. 
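The exec_for_query hunk above is the payoff case for tupdesc identifiers: once the first row has been assigned, later rows whose descriptor generation is unchanged can simply be swapped into the existing expanded record, skipping record construction and field-by-field coercion entirely. The control flow, reduced to a dependency-free skeleton (the callbacks are hypothetical stand-ins for expanded_record_set_tuple, exec_move_row, and compatible_tupdescs):

#include <stdbool.h>
#include <stdint.h>

static void
assign_rows(int nrows, uint64_t source_desc_id,
            void (*fast_assign)(int row),
            uint64_t (*full_assign)(int row),   /* returns target's desc id */
            bool (*descs_compatible)(void))
{
    uint64_t prev_id = 0;               /* 0 plays the "invalid id" role */
    bool     match = true;

    for (int i = 0; i < nrows; i++)
    {
        if (prev_id != 0 && prev_id == source_desc_id && match)
            fast_assign(i);             /* same generation: swap tuple only */
        else
        {
            prev_id = full_assign(i);   /* rebuild target, coerce fields */
            if (match)
                match = descs_compatible();     /* once false, stays false */
        }
    }
}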
+ */ + else if (unlikely(recfield->rectupledescid != rec->erh->er_tupdesc_id)) { - fno = SPI_fnumber(rec->tupdesc, recfield->fieldname); - if (fno == SPI_ERROR_NOATTRIBUTE) + if (expanded_record_lookup_field(rec->erh, + recfield->fieldname, + &recfield->finfo)) + recfield->rectupledescid = rec->erh->er_tupdesc_id; + else ok = false; } break; @@ -5737,10 +5928,17 @@ plpgsql_param_fetch(ParamListInfo params, * If it's a read/write expanded datum, convert reference to read-only, * unless it's safe to pass as read-write. */ - if (datum->dtype == PLPGSQL_DTYPE_VAR && dno != expr->rwparam) - prm->value = MakeExpandedObjectReadOnly(prm->value, - prm->isnull, - ((PLpgSQL_var *) datum)->datatype->typlen); + if (dno != expr->rwparam) + { + if (datum->dtype == PLPGSQL_DTYPE_VAR) + prm->value = MakeExpandedObjectReadOnly(prm->value, + prm->isnull, + ((PLpgSQL_var *) datum)->datatype->typlen); + else if (datum->dtype == PLPGSQL_DTYPE_REC) + prm->value = MakeExpandedObjectReadOnly(prm->value, + prm->isnull, + -1); + } return prm; } @@ -5774,7 +5972,13 @@ plpgsql_param_compile(ParamListInfo params, Param *param, scratch.resvalue = resv; scratch.resnull = resnull; - /* Select appropriate eval function */ + /* + * Select appropriate eval function. It seems worth special-casing + * DTYPE_VAR and DTYPE_RECFIELD for performance. Also, we can determine + * in advance whether MakeExpandedObjectReadOnly() will be required. + * Currently, only VAR and REC datums could contain read/write expanded + * objects. + */ if (datum->dtype == PLPGSQL_DTYPE_VAR) { if (dno != expr->rwparam && @@ -5783,8 +5987,13 @@ plpgsql_param_compile(ParamListInfo params, Param *param, else scratch.d.cparam.paramfunc = plpgsql_param_eval_var; } + else if (datum->dtype == PLPGSQL_DTYPE_RECFIELD) + scratch.d.cparam.paramfunc = plpgsql_param_eval_recfield; + else if (datum->dtype == PLPGSQL_DTYPE_REC && + dno != expr->rwparam) + scratch.d.cparam.paramfunc = plpgsql_param_eval_generic_ro; else - scratch.d.cparam.paramfunc = plpgsql_param_eval_non_var; + scratch.d.cparam.paramfunc = plpgsql_param_eval_generic; /* * Note: it's tempting to use paramarg to store the estate pointer and @@ -5868,12 +6077,85 @@ plpgsql_param_eval_var_ro(ExprState *state, ExprEvalStep *op, } /* - * plpgsql_param_eval_non_var evaluation of EEOP_PARAM_CALLBACK step + * plpgsql_param_eval_recfield evaluation of EEOP_PARAM_CALLBACK step * - * This handles all variable types except DTYPE_VAR. + * This is specialized to the case of DTYPE_RECFIELD variables, for which + * we never need to invoke MakeExpandedObjectReadOnly. */ static void -plpgsql_param_eval_non_var(ExprState *state, ExprEvalStep *op, +plpgsql_param_eval_recfield(ExprState *state, ExprEvalStep *op, + ExprContext *econtext) +{ + ParamListInfo params; + PLpgSQL_execstate *estate; + int dno = op->d.cparam.paramid - 1; + PLpgSQL_recfield *recfield; + PLpgSQL_rec *rec; + ExpandedRecordHeader *erh; + + /* fetch back the hook data */ + params = econtext->ecxt_param_list_info; + estate = (PLpgSQL_execstate *) params->paramFetchArg; + Assert(dno >= 0 && dno < estate->ndatums); + + /* now we can access the target datum */ + recfield = (PLpgSQL_recfield *) estate->datums[dno]; + Assert(recfield->dtype == PLPGSQL_DTYPE_RECFIELD); + + /* inline the relevant part of exec_eval_datum */ + rec = (PLpgSQL_rec *) (estate->datums[recfield->recparentno]); + erh = rec->erh; + + /* + * If record variable is NULL, instantiate it if it has a named composite + * type, else complain. 
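plpgsql_param_compile now front-loads the dispatch: instead of one evaluator that re-examines the datum type, and whether read-only forcing is needed, on every execution, the decision is made once and baked into the step as a function pointer. The shape of that pattern, with illustrative types (the real code also consults the variable's typlen and the expression's read/write parameter):

#include <stdbool.h>

typedef enum { DTYPE_VAR, DTYPE_REC, DTYPE_RECFIELD } DType;
typedef struct EvalStep EvalStep;
typedef void (*ParamEval)(EvalStep *step);

static void eval_var(EvalStep *s)        { (void) s; }
static void eval_var_ro(EvalStep *s)     { (void) s; }
static void eval_recfield(EvalStep *s)   { (void) s; }
static void eval_generic(EvalStep *s)    { (void) s; }
static void eval_generic_ro(EvalStep *s) { (void) s; }

/* Decide once, at expression-compile time; the per-row path then has no
 * dtype or read-only branching left in it. */
static ParamEval
choose_param_eval(DType dtype, bool may_hold_rw_expanded)
{
    if (dtype == DTYPE_VAR)
        return may_hold_rw_expanded ? eval_var_ro : eval_var;
    if (dtype == DTYPE_RECFIELD)
        return eval_recfield;       /* field fetch never yields a R/W ref */
    return may_hold_rw_expanded ? eval_generic_ro : eval_generic;
}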
(This won't change the logical state of the + * record: it's still NULL.) + */ + if (erh == NULL) + { + instantiate_empty_record_variable(estate, rec); + erh = rec->erh; + } + + /* + * Look up the field's properties if we have not already, or if the tuple + * descriptor ID changed since last time. + */ + if (unlikely(recfield->rectupledescid != erh->er_tupdesc_id)) + { + if (!expanded_record_lookup_field(erh, + recfield->fieldname, + &recfield->finfo)) + ereport(ERROR, + (errcode(ERRCODE_UNDEFINED_COLUMN), + errmsg("record \"%s\" has no field \"%s\"", + rec->refname, recfield->fieldname))); + recfield->rectupledescid = erh->er_tupdesc_id; + } + + /* OK to fetch the field value. */ + *op->resvalue = expanded_record_get_field(erh, + recfield->finfo.fnumber, + op->resnull); + + /* safety check -- needed for, eg, record fields */ + if (unlikely(recfield->finfo.ftypeid != op->d.cparam.paramtype)) + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("type of parameter %d (%s) does not match that when preparing the plan (%s)", + op->d.cparam.paramid, + format_type_be(recfield->finfo.ftypeid), + format_type_be(op->d.cparam.paramtype)))); +} + +/* + * plpgsql_param_eval_generic evaluation of EEOP_PARAM_CALLBACK step + * + * This handles all variable types, but assumes we do not need to invoke + * MakeExpandedObjectReadOnly. + */ +static void +plpgsql_param_eval_generic(ExprState *state, ExprEvalStep *op, ExprContext *econtext) { ParamListInfo params; @@ -5890,8 +6172,8 @@ plpgsql_param_eval_non_var(ExprState *state, ExprEvalStep *op, /* now we can access the target datum */ datum = estate->datums[dno]; - Assert(datum->dtype != PLPGSQL_DTYPE_VAR); + /* fetch datum's value */ exec_eval_datum(estate, datum, &datumtype, &datumtypmod, op->resvalue, op->resnull); @@ -5904,114 +6186,384 @@ plpgsql_param_eval_non_var(ExprState *state, ExprEvalStep *op, op->d.cparam.paramid, format_type_be(datumtype), format_type_be(op->d.cparam.paramtype)))); +} - /* - * Currently, if the dtype isn't VAR, the value couldn't be a read/write - * expanded datum. - */ +/* + * plpgsql_param_eval_generic_ro evaluation of EEOP_PARAM_CALLBACK step + * + * This handles all variable types, but assumes we need to invoke + * MakeExpandedObjectReadOnly (hence, variable must be of a varlena type). 
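The _ro evaluator variants exist solely to apply MakeExpandedObjectReadOnly after fetching the value, so that downstream expression code cannot mutate a variable's read/write expanded object in place. At a call site that amounts to the following (real macro from utils/expandeddatum.h; -1 is the varlena typlen, and pass-by-value or non-expanded datums pass through unchanged):

#include "postgres.h"
#include "utils/expandeddatum.h"

static Datum
fetch_readonly(Datum value, bool isnull)
{
    /* flips a R/W expanded-object pointer to its R/O form */
    return MakeExpandedObjectReadOnly(value, isnull, -1);
}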
+ */ +static void +plpgsql_param_eval_generic_ro(ExprState *state, ExprEvalStep *op, + ExprContext *econtext) +{ + ParamListInfo params; + PLpgSQL_execstate *estate; + int dno = op->d.cparam.paramid - 1; + PLpgSQL_datum *datum; + Oid datumtype; + int32 datumtypmod; + + /* fetch back the hook data */ + params = econtext->ecxt_param_list_info; + estate = (PLpgSQL_execstate *) params->paramFetchArg; + Assert(dno >= 0 && dno < estate->ndatums); + + /* now we can access the target datum */ + datum = estate->datums[dno]; + + /* fetch datum's value */ + exec_eval_datum(estate, datum, + &datumtype, &datumtypmod, + op->resvalue, op->resnull); + + /* safety check -- needed for, eg, record fields */ + if (unlikely(datumtype != op->d.cparam.paramtype)) + ereport(ERROR, + (errcode(ERRCODE_DATATYPE_MISMATCH), + errmsg("type of parameter %d (%s) does not match that when preparing the plan (%s)", + op->d.cparam.paramid, + format_type_be(datumtype), + format_type_be(op->d.cparam.paramtype)))); + + /* force the value to read-only */ + *op->resvalue = MakeExpandedObjectReadOnly(*op->resvalue, + *op->resnull, + -1); } -/* ---------- +/* * exec_move_row Move one tuple's values into a record or row * - * Since this uses exec_assign_value, caller should eventually call + * tup and tupdesc may both be NULL if we're just assigning an indeterminate + * composite NULL to the target. Alternatively, can have tup be NULL and + * tupdesc not NULL, in which case we assign a row of NULLs to the target. + * + * Since this uses the mcontext for workspace, caller should eventually call * exec_eval_cleanup to prevent long-term memory leaks. - * ---------- */ static void exec_move_row(PLpgSQL_execstate *estate, PLpgSQL_variable *target, HeapTuple tup, TupleDesc tupdesc) { + ExpandedRecordHeader *newerh = NULL; + /* - * Record is simple - just copy the tuple and its descriptor into the - * record variable + * If target is RECORD, we may be able to avoid field-by-field processing. */ if (target->dtype == PLPGSQL_DTYPE_REC) { PLpgSQL_rec *rec = (PLpgSQL_rec *) target; /* - * Copy input first, just in case it is pointing at variable's value + * If we have no source tupdesc, just set the record variable to NULL. + * (If we have a source tupdesc but not a tuple, we'll set the + * variable to a row of nulls, instead. This is odd perhaps, but + * backwards compatible.) + */ + if (tupdesc == NULL) + { + if (rec->erh) + DeleteExpandedObject(ExpandedRecordGetDatum(rec->erh)); + rec->erh = NULL; + return; + } + + /* + * Build a new expanded record with appropriate tupdesc. + */ + newerh = make_expanded_record_for_rec(estate, rec, tupdesc, NULL); + + /* + * If the rowtypes match, or if we have no tuple anyway, we can + * complete the assignment without field-by-field processing. + * + * The tests here are ordered more or less in order of cheapness. We + * can easily detect it will work if the target is declared RECORD or + * has the same typeid as the source. But when assigning from a query + * result, it's common to have a source tupdesc that's labeled RECORD + * but is actually physically compatible with a named-composite-type + * target, so it's worth spending extra cycles to check for that. 
*/ - if (HeapTupleIsValid(tup)) - tup = heap_copytuple(tup); - else if (tupdesc) + if (rec->rectypeid == RECORDOID || + rec->rectypeid == tupdesc->tdtypeid || + !HeapTupleIsValid(tup) || + compatible_tupdescs(tupdesc, expanded_record_get_tupdesc(newerh))) { - /* If we have a tupdesc but no data, form an all-nulls tuple */ - bool *nulls; + if (!HeapTupleIsValid(tup)) + { + /* No data, so force the record into all-nulls state */ + deconstruct_expanded_record(newerh); + } + else + { + /* No coercion is needed, so just assign the row value */ + expanded_record_set_tuple(newerh, tup, true); + } - nulls = (bool *) - eval_mcontext_alloc(estate, tupdesc->natts * sizeof(bool)); - memset(nulls, true, tupdesc->natts * sizeof(bool)); + /* Complete the assignment */ + assign_record_var(estate, rec, newerh); - tup = heap_form_tuple(tupdesc, NULL, nulls); + return; } + } - if (tupdesc) - tupdesc = CreateTupleDescCopy(tupdesc); + /* + * Otherwise, deconstruct the tuple and do field-by-field assignment, + * using exec_move_row_from_fields. + */ + if (tupdesc && HeapTupleIsValid(tup)) + { + int td_natts = tupdesc->natts; + Datum *values; + bool *nulls; + Datum values_local[64]; + bool nulls_local[64]; - /* Free the old value ... */ - if (rec->freetup) + /* + * Need workspace arrays. If td_natts is small enough, use local + * arrays to save doing a palloc. Even if it's not small, we can + * allocate both the Datum and isnull arrays in one palloc chunk. + */ + if (td_natts <= lengthof(values_local)) { - heap_freetuple(rec->tup); - rec->freetup = false; + values = values_local; + nulls = nulls_local; } - if (rec->freetupdesc) + else { - FreeTupleDesc(rec->tupdesc); - rec->freetupdesc = false; + char *chunk; + + chunk = eval_mcontext_alloc(estate, + td_natts * (sizeof(Datum) + sizeof(bool))); + values = (Datum *) chunk; + nulls = (bool *) (chunk + td_natts * sizeof(Datum)); } - /* ... and install the new */ - if (HeapTupleIsValid(tup)) + heap_deform_tuple(tup, tupdesc, values, nulls); + + exec_move_row_from_fields(estate, target, newerh, + values, nulls, tupdesc); + } + else + { + /* + * Assign all-nulls. + */ + exec_move_row_from_fields(estate, target, newerh, + NULL, NULL, NULL); + } +} + +/* + * Build an expanded record object suitable for assignment to "rec". + * + * Caller must supply either a source tuple descriptor or a source expanded + * record (not both). If the record variable has declared type RECORD, + * it'll adopt the source's rowtype. Even if it doesn't, we may be able to + * piggyback on a source expanded record to save a typcache lookup. + * + * Caller must fill the object with data, then do assign_record_var(). + * + * The new record is initially put into the mcontext, so it will be cleaned up + * if we fail before reaching assign_record_var(). + */ +static ExpandedRecordHeader * +make_expanded_record_for_rec(PLpgSQL_execstate *estate, + PLpgSQL_rec *rec, + TupleDesc srctupdesc, + ExpandedRecordHeader *srcerh) +{ + ExpandedRecordHeader *newerh; + MemoryContext mcontext = get_eval_mcontext(estate); + + if (rec->rectypeid != RECORDOID) + { + /* + * New record must be of desired type, but maybe srcerh has already + * done all the same lookups. + */ + if (srcerh && rec->rectypeid == srcerh->er_decltypeid) + newerh = make_expanded_record_from_exprecord(srcerh, + mcontext); + else + newerh = make_expanded_record_from_typeid(rec->rectypeid, -1, + mcontext); + } + else + { + /* + * We'll adopt the input tupdesc. 
We can still use + * make_expanded_record_from_exprecord, if srcerh isn't a composite + * domain. (If it is, we effectively adopt its base type.) + */ + if (srcerh && !ExpandedRecordIsDomain(srcerh)) + newerh = make_expanded_record_from_exprecord(srcerh, + mcontext); + else { - rec->tup = tup; - rec->freetup = true; + if (!srctupdesc) + srctupdesc = expanded_record_get_tupdesc(srcerh); + newerh = make_expanded_record_from_tupdesc(srctupdesc, + mcontext); } - else - rec->tup = NULL; + } + + return newerh; +} - if (tupdesc) +/* + * exec_move_row_from_fields Move arrays of field values into a record or row + * + * When assigning to a record, the caller must have already created a suitable + * new expanded record object, newerh. Pass NULL when assigning to a row. + * + * tupdesc describes the input row, which might have different column + * types and/or different dropped-column positions than the target. + * values/nulls/tupdesc can all be NULL if we just want to assign nulls to + * all fields of the record or row. + * + * Since this uses the mcontext for workspace, caller should eventually call + * exec_eval_cleanup to prevent long-term memory leaks. + */ +static void +exec_move_row_from_fields(PLpgSQL_execstate *estate, + PLpgSQL_variable *target, + ExpandedRecordHeader *newerh, + Datum *values, bool *nulls, + TupleDesc tupdesc) +{ + int td_natts = tupdesc ? tupdesc->natts : 0; + int fnum; + int anum; + + /* Handle RECORD-target case */ + if (target->dtype == PLPGSQL_DTYPE_REC) + { + PLpgSQL_rec *rec = (PLpgSQL_rec *) target; + TupleDesc var_tupdesc; + Datum newvalues_local[64]; + bool newnulls_local[64]; + + Assert(newerh != NULL); /* caller must have built new object */ + + var_tupdesc = expanded_record_get_tupdesc(newerh); + + /* + * Coerce field values if needed. This might involve dealing with + * different sets of dropped columns and/or coercing individual column + * types. That's sort of a pain, but historically plpgsql has allowed + * it, so we preserve the behavior. However, it's worth a quick check + * to see if the tupdescs are identical. (Since expandedrecord.c + * prefers to use refcounted tupdescs from the typcache, expanded + * records with the same rowtype will have pointer-equal tupdescs.) + */ + if (var_tupdesc != tupdesc) { - rec->tupdesc = tupdesc; - rec->freetupdesc = true; + int vtd_natts = var_tupdesc->natts; + Datum *newvalues; + bool *newnulls; + + /* + * Need workspace arrays. If vtd_natts is small enough, use local + * arrays to save doing a palloc. Even if it's not small, we can + * allocate both the Datum and isnull arrays in one palloc chunk. 
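Both exec_move_row and exec_move_row_from_fields use the workspace idiom seen here: small rows are deformed into stack arrays, and larger ones take a single allocation carrying both the Datum and null-flag arrays, halving allocator traffic. A standalone rendering (the real code allocates from the estate's eval_mcontext rather than calling palloc directly):

#include "postgres.h"
#include "access/htup_details.h"

#define LOCAL_NATTS 64

static void
deform_with_workspace(HeapTuple tup, TupleDesc tupdesc)
{
    int     natts = tupdesc->natts;
    Datum   values_local[LOCAL_NATTS];
    bool    nulls_local[LOCAL_NATTS];
    Datum  *values;
    bool   *nulls;

    if (natts <= LOCAL_NATTS)
    {
        values = values_local;          /* no palloc at all */
        nulls = nulls_local;
    }
    else
    {
        /* one chunk for both arrays; bool needs no stricter alignment */
        char *chunk = palloc(natts * (sizeof(Datum) + sizeof(bool)));

        values = (Datum *) chunk;
        nulls = (bool *) (chunk + natts * sizeof(Datum));
    }

    heap_deform_tuple(tup, tupdesc, values, nulls);
    /* ... use values/nulls ... */
}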
+ */ + if (vtd_natts <= lengthof(newvalues_local)) + { + newvalues = newvalues_local; + newnulls = newnulls_local; + } + else + { + char *chunk; + + chunk = eval_mcontext_alloc(estate, + vtd_natts * (sizeof(Datum) + sizeof(bool))); + newvalues = (Datum *) chunk; + newnulls = (bool *) (chunk + vtd_natts * sizeof(Datum)); + } + + /* Walk over destination columns */ + anum = 0; + for (fnum = 0; fnum < vtd_natts; fnum++) + { + Form_pg_attribute attr = TupleDescAttr(var_tupdesc, fnum); + Datum value; + bool isnull; + Oid valtype; + int32 valtypmod; + + if (attr->attisdropped) + { + /* expanded_record_set_fields should ignore this column */ + continue; /* skip dropped column in record */ + } + + while (anum < td_natts && + TupleDescAttr(tupdesc, anum)->attisdropped) + anum++; /* skip dropped column in tuple */ + + if (anum < td_natts) + { + value = values[anum]; + isnull = nulls[anum]; + valtype = TupleDescAttr(tupdesc, anum)->atttypid; + valtypmod = TupleDescAttr(tupdesc, anum)->atttypmod; + anum++; + } + else + { + value = (Datum) 0; + isnull = true; + valtype = UNKNOWNOID; + valtypmod = -1; + } + + /* Cast the new value to the right type, if needed. */ + newvalues[fnum] = exec_cast_value(estate, + value, + &isnull, + valtype, + valtypmod, + attr->atttypid, + attr->atttypmod); + newnulls[fnum] = isnull; + } + + values = newvalues; + nulls = newnulls; } - else - rec->tupdesc = NULL; + + /* Insert the coerced field values into the new expanded record */ + expanded_record_set_fields(newerh, values, nulls); + + /* Complete the assignment */ + assign_record_var(estate, rec, newerh); return; } + /* newerh should not have been passed in non-RECORD cases */ + Assert(newerh == NULL); + /* - * Row is a bit more complicated in that we assign the individual - * attributes of the tuple to the variables the row points to. + * For a row, we assign the individual field values to the variables the + * row points to. * - * NOTE: this code used to demand row->nfields == - * HeapTupleHeaderGetNatts(tup->t_data), but that's wrong. The tuple - * might have more fields than we expected if it's from an - * inheritance-child table of the current table, or it might have fewer if - * the table has had columns added by ALTER TABLE. Ignore extra columns - * and assume NULL for missing columns, the same as heap_getattr would do. - * We also have to skip over dropped columns in either the source or - * destination. + * NOTE: both this code and the record code above silently ignore extra + * columns in the source and assume NULL for missing columns. This is + * pretty dubious but it's the historical behavior. * - * If we have no tuple data at all, we'll assign NULL to all columns of + * If we have no input data at all, we'll assign NULL to all columns of * the row variable. */ if (target->dtype == PLPGSQL_DTYPE_ROW) { PLpgSQL_row *row = (PLpgSQL_row *) target; - int td_natts = tupdesc ? 
tupdesc->natts : 0; - int t_natts; - int fnum; - int anum; - - if (HeapTupleIsValid(tup)) - t_natts = HeapTupleHeaderGetNatts(tup->t_data); - else - t_natts = 0; anum = 0; for (fnum = 0; fnum < row->nfields; fnum++) @@ -6022,9 +6574,6 @@ exec_move_row(PLpgSQL_execstate *estate, Oid valtype; int32 valtypmod; - if (row->varnos[fnum] < 0) - continue; /* skip dropped column in row struct */ - var = (PLpgSQL_var *) (estate->datums[row->varnos[fnum]]); while (anum < td_natts && @@ -6033,13 +6582,8 @@ exec_move_row(PLpgSQL_execstate *estate, if (anum < td_natts) { - if (anum < t_natts) - value = SPI_getbinval(tup, tupdesc, anum + 1, &isnull); - else - { - value = (Datum) 0; - isnull = true; - } + value = values[anum]; + isnull = nulls[anum]; valtype = TupleDescAttr(tupdesc, anum)->atttypid; valtypmod = TupleDescAttr(tupdesc, anum)->atttypmod; anum++; @@ -6062,6 +6606,47 @@ exec_move_row(PLpgSQL_execstate *estate, elog(ERROR, "unsupported target"); } +/* + * compatible_tupdescs: detect whether two tupdescs are physically compatible + * + * TRUE indicates that a tuple satisfying src_tupdesc can be used directly as + * a value for a composite variable using dst_tupdesc. + */ +static bool +compatible_tupdescs(TupleDesc src_tupdesc, TupleDesc dst_tupdesc) +{ + int i; + + /* Possibly we could allow src_tupdesc to have extra columns? */ + if (dst_tupdesc->natts != src_tupdesc->natts) + return false; + + for (i = 0; i < dst_tupdesc->natts; i++) + { + Form_pg_attribute dattr = TupleDescAttr(dst_tupdesc, i); + Form_pg_attribute sattr = TupleDescAttr(src_tupdesc, i); + + if (dattr->attisdropped != sattr->attisdropped) + return false; + if (!dattr->attisdropped) + { + /* Normal columns must match by type and typmod */ + if (dattr->atttypid != sattr->atttypid || + (dattr->atttypmod >= 0 && + dattr->atttypmod != sattr->atttypmod)) + return false; + } + else + { + /* Dropped columns are OK as long as length/alignment match */ + if (dattr->attlen != sattr->attlen || + dattr->attalign != sattr->attalign) + return false; + } + } + return true; +} + /* ---------- * make_tuple_from_row Make a tuple from the values of a row object * @@ -6098,8 +6683,6 @@ make_tuple_from_row(PLpgSQL_execstate *estate, nulls[i] = true; /* leave the column as null */ continue; } - if (row->varnos[i] < 0) /* should not happen */ - elog(ERROR, "dropped rowtype entry for non-dropped column"); exec_eval_datum(estate, estate->datums[row->varnos[i]], &fieldtypeid, &fieldtypmod, @@ -6114,86 +6697,290 @@ make_tuple_from_row(PLpgSQL_execstate *estate, return tuple; } -/* ---------- - * get_tuple_from_datum extract a tuple from a composite Datum +/* + * deconstruct_composite_datum extract tuple+tupdesc from composite Datum * - * Returns a HeapTuple, freshly palloc'd in caller's context. + * The caller must supply a HeapTupleData variable, in which we set up a + * tuple header pointing to the composite datum's body. To make the tuple + * value outlive that variable, caller would need to apply heap_copytuple... + * but current callers only need a short-lived tuple value anyway. * - * Note: it's caller's responsibility to be sure value is of composite type. 
- * ---------- - */ -static HeapTuple -get_tuple_from_datum(Datum value) -{ - HeapTupleHeader td = DatumGetHeapTupleHeader(value); - HeapTupleData tmptup; - - /* Build a temporary HeapTuple control structure */ - tmptup.t_len = HeapTupleHeaderGetDatumLength(td); - ItemPointerSetInvalid(&(tmptup.t_self)); - tmptup.t_tableOid = InvalidOid; - tmptup.t_data = td; - - /* Build a copy and return it */ - return heap_copytuple(&tmptup); -} - -/* ---------- - * get_tupdesc_from_datum get a tuple descriptor for a composite Datum - * - * Returns a pointer to the TupleDesc of the tuple's rowtype. + * Returns a pointer to the TupleDesc of the datum's rowtype. * Caller is responsible for calling ReleaseTupleDesc when done with it. * * Note: it's caller's responsibility to be sure value is of composite type. - * ---------- + * Also, best to call this in a short-lived context, as it might leak memory. */ static TupleDesc -get_tupdesc_from_datum(Datum value) +deconstruct_composite_datum(Datum value, HeapTupleData *tmptup) { - HeapTupleHeader td = DatumGetHeapTupleHeader(value); + HeapTupleHeader td; Oid tupType; int32 tupTypmod; + /* Get tuple body (note this could involve detoasting) */ + td = DatumGetHeapTupleHeader(value); + + /* Build a temporary HeapTuple control structure */ + tmptup->t_len = HeapTupleHeaderGetDatumLength(td); + ItemPointerSetInvalid(&(tmptup->t_self)); + tmptup->t_tableOid = InvalidOid; + tmptup->t_data = td; + /* Extract rowtype info and find a tupdesc */ tupType = HeapTupleHeaderGetTypeId(td); tupTypmod = HeapTupleHeaderGetTypMod(td); return lookup_rowtype_tupdesc(tupType, tupTypmod); } -/* ---------- +/* * exec_move_row_from_datum Move a composite Datum into a record or row * - * This is equivalent to get_tuple_from_datum() followed by exec_move_row(), - * but we avoid constructing an intermediate physical copy of the tuple. - * ---------- + * This is equivalent to deconstruct_composite_datum() followed by + * exec_move_row(), but we can optimize things if the Datum is an + * expanded-record reference. + * + * Note: it's caller's responsibility to be sure value is of composite type. */ static void exec_move_row_from_datum(PLpgSQL_execstate *estate, PLpgSQL_variable *target, Datum value) { - HeapTupleHeader td = DatumGetHeapTupleHeader(value); - Oid tupType; - int32 tupTypmod; - TupleDesc tupdesc; - HeapTupleData tmptup; + /* Check to see if source is an expanded record */ + if (VARATT_IS_EXTERNAL_EXPANDED(DatumGetPointer(value))) + { + ExpandedRecordHeader *erh = (ExpandedRecordHeader *) DatumGetEOHP(value); + ExpandedRecordHeader *newerh = NULL; - /* Extract rowtype info and find a tupdesc */ - tupType = HeapTupleHeaderGetTypeId(td); - tupTypmod = HeapTupleHeaderGetTypMod(td); - tupdesc = lookup_rowtype_tupdesc(tupType, tupTypmod); + Assert(erh->er_magic == ER_MAGIC); - /* Build a temporary HeapTuple control structure */ - tmptup.t_len = HeapTupleHeaderGetDatumLength(td); - ItemPointerSetInvalid(&(tmptup.t_self)); - tmptup.t_tableOid = InvalidOid; - tmptup.t_data = td; + /* These cases apply if the target is record not row... */ + if (target->dtype == PLPGSQL_DTYPE_REC) + { + PLpgSQL_rec *rec = (PLpgSQL_rec *) target; + + /* + * If it's the same record already stored in the variable, do + * nothing. This would happen only in silly cases like "r := r", + * but we need some check to avoid possibly freeing the variable's + * live value below. Note that this applies even if what we have + * is a R/O pointer. 
+ */ + if (erh == rec->erh) + return; + + /* + * If we have a R/W pointer, we're allowed to just commandeer + * ownership of the expanded record. If it's of the right type to + * put into the record variable, do that. (Note we don't accept + * an expanded record of a composite-domain type as a RECORD + * value. We'll treat it as the base composite type instead; + * compare logic in make_expanded_record_for_rec.) + */ + if (VARATT_IS_EXTERNAL_EXPANDED_RW(DatumGetPointer(value)) && + (rec->rectypeid == erh->er_decltypeid || + (rec->rectypeid == RECORDOID && + !ExpandedRecordIsDomain(erh)))) + { + assign_record_var(estate, rec, erh); + return; + } + + /* + * If we already have an expanded record object in the target + * variable, and the source record contains a valid tuple + * representation with the right rowtype, then we can skip making + * a new expanded record and just assign the tuple with + * expanded_record_set_tuple. (We can't do the equivalent if we + * have to do field-by-field assignment, since that wouldn't be + * atomic if there's an error.) We consider that there's a + * rowtype match only if it's the same named composite type or + * same registered rowtype; checking for matches of anonymous + * rowtypes would be more expensive than this is worth. + */ + if (rec->erh && + (erh->flags & ER_FLAG_FVALUE_VALID) && + erh->er_typeid == rec->erh->er_typeid && + (erh->er_typeid != RECORDOID || + (erh->er_typmod == rec->erh->er_typmod && + erh->er_typmod >= 0))) + { + expanded_record_set_tuple(rec->erh, erh->fvalue, true); + return; + } + + /* + * Otherwise we're gonna need a new expanded record object. Make + * it here in hopes of piggybacking on the source object's + * previous typcache lookup. + */ + newerh = make_expanded_record_for_rec(estate, rec, NULL, erh); + + /* + * If the expanded record contains a valid tuple representation, + * and we don't need rowtype conversion, then just copying the + * tuple is probably faster than field-by-field processing. (This + * isn't duplicative of the previous check, since here we will + * catch the case where the record variable was previously empty.) + */ + if ((erh->flags & ER_FLAG_FVALUE_VALID) && + (rec->rectypeid == RECORDOID || + rec->rectypeid == erh->er_typeid)) + { + expanded_record_set_tuple(newerh, erh->fvalue, true); + assign_record_var(estate, rec, newerh); + return; + } + + /* + * Need to special-case empty source record, else code below would + * leak newerh. + */ + if (ExpandedRecordIsEmpty(erh)) + { + /* Set newerh to a row of NULLs */ + deconstruct_expanded_record(newerh); + assign_record_var(estate, rec, newerh); + return; + } + } /* end of record-target-only cases */ + + /* + * If the source expanded record is empty, we should treat that like a + * NULL tuple value. (We're unlikely to see such a case, but we must + * check this; deconstruct_expanded_record would cause a change of + * logical state, which is not OK.) + */ + if (ExpandedRecordIsEmpty(erh)) + { + exec_move_row(estate, target, NULL, + expanded_record_get_tupdesc(erh)); + return; + } + + /* + * Otherwise, ensure that the source record is deconstructed, and + * assign from its field values. + */ + deconstruct_expanded_record(erh); + exec_move_row_from_fields(estate, target, newerh, + erh->dvalues, erh->dnulls, + expanded_record_get_tupdesc(erh)); + } + else + { + /* + * Nope, we've got a plain composite Datum. Deconstruct it; but we + * don't use deconstruct_composite_datum(), because we may be able to + * skip calling lookup_rowtype_tupdesc(). 
+ */ + HeapTupleHeader td; + HeapTupleData tmptup; + Oid tupType; + int32 tupTypmod; + TupleDesc tupdesc; + MemoryContext oldcontext; + + /* Ensure that any detoasted data winds up in the eval_mcontext */ + oldcontext = MemoryContextSwitchTo(get_eval_mcontext(estate)); + /* Get tuple body (note this could involve detoasting) */ + td = DatumGetHeapTupleHeader(value); + MemoryContextSwitchTo(oldcontext); + + /* Build a temporary HeapTuple control structure */ + tmptup.t_len = HeapTupleHeaderGetDatumLength(td); + ItemPointerSetInvalid(&(tmptup.t_self)); + tmptup.t_tableOid = InvalidOid; + tmptup.t_data = td; - /* Do the move */ - exec_move_row(estate, target, &tmptup, tupdesc); + /* Extract rowtype info */ + tupType = HeapTupleHeaderGetTypeId(td); + tupTypmod = HeapTupleHeaderGetTypMod(td); - /* Release tupdesc usage count */ - ReleaseTupleDesc(tupdesc); + /* Now, if the target is record not row, maybe we can optimize ... */ + if (target->dtype == PLPGSQL_DTYPE_REC) + { + PLpgSQL_rec *rec = (PLpgSQL_rec *) target; + + /* + * If we already have an expanded record object in the target + * variable, and the source datum has a matching rowtype, then we + * can skip making a new expanded record and just assign the tuple + * with expanded_record_set_tuple. We consider that there's a + * rowtype match only if it's the same named composite type or + * same registered rowtype. (Checking to reject an anonymous + * rowtype here should be redundant, but let's be safe.) + */ + if (rec->erh && + tupType == rec->erh->er_typeid && + (tupType != RECORDOID || + (tupTypmod == rec->erh->er_typmod && + tupTypmod >= 0))) + { + expanded_record_set_tuple(rec->erh, &tmptup, true); + return; + } + + /* + * If the source datum has a rowtype compatible with the target + * variable, just build a new expanded record and assign the tuple + * into it. Using make_expanded_record_from_typeid() here saves + * one typcache lookup compared to the code below. + */ + if (rec->rectypeid == RECORDOID || rec->rectypeid == tupType) + { + ExpandedRecordHeader *newerh; + MemoryContext mcontext = get_eval_mcontext(estate); + + newerh = make_expanded_record_from_typeid(tupType, tupTypmod, + mcontext); + expanded_record_set_tuple(newerh, &tmptup, true); + assign_record_var(estate, rec, newerh); + return; + } + + /* + * Otherwise, we're going to need conversion, so fall through to + * do it the hard way. + */ + } + + /* + * ROW target, or unoptimizable RECORD target, so we have to expend a + * lookup to obtain the source datum's tupdesc. + */ + tupdesc = lookup_rowtype_tupdesc(tupType, tupTypmod); + + /* Do the move */ + exec_move_row(estate, target, &tmptup, tupdesc); + + /* Release tupdesc usage count */ + ReleaseTupleDesc(tupdesc); + } +} + +/* + * If we have not created an expanded record to hold the record variable's + * value, do so. The expanded record will be "empty", so this does not + * change the logical state of the record variable: it's still NULL. + * However, now we'll have a tupdesc with which we can e.g. look up fields. 
+ */ +static void +instantiate_empty_record_variable(PLpgSQL_execstate *estate, PLpgSQL_rec *rec) +{ + Assert(rec->erh == NULL); /* else caller error */ + + /* If declared type is RECORD, we can't instantiate */ + if (rec->rectypeid == RECORDOID) + ereport(ERROR, + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), + errmsg("record \"%s\" is not assigned yet", rec->refname), + errdetail("The tuple structure of a not-yet-assigned record is indeterminate."))); + + /* OK, do it */ + rec->erh = make_expanded_record_from_typeid(rec->rectypeid, -1, + estate->datum_context); } /* ---------- @@ -6906,6 +7693,26 @@ assign_text_var(PLpgSQL_execstate *estate, PLpgSQL_var *var, const char *str) assign_simple_var(estate, var, CStringGetTextDatum(str), false, true); } +/* + * assign_record_var --- assign a new value to any REC datum. + */ +static void +assign_record_var(PLpgSQL_execstate *estate, PLpgSQL_rec *rec, + ExpandedRecordHeader *erh) +{ + Assert(rec->dtype == PLPGSQL_DTYPE_REC); + + /* Transfer new record object into datum_context */ + TransferExpandedRecord(erh, estate->datum_context); + + /* Free the old value ... */ + if (rec->erh) + DeleteExpandedObject(ExpandedRecordGetDatum(rec->erh)); + + /* ... and install the new */ + rec->erh = erh; +} + /* * exec_eval_using_params --- evaluate params of USING clause * diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c index f0e85fcfcd..b36fab67bc 100644 --- a/src/pl/plpgsql/src/pl_funcs.c +++ b/src/pl/plpgsql/src/pl_funcs.c @@ -1618,15 +1618,16 @@ plpgsql_dumptree(PLpgSQL_function *func) printf("ROW %-16s fields", row->refname); for (i = 0; i < row->nfields; i++) { - if (row->fieldnames[i]) - printf(" %s=var %d", row->fieldnames[i], - row->varnos[i]); + printf(" %s=var %d", row->fieldnames[i], + row->varnos[i]); } printf("\n"); } break; case PLPGSQL_DTYPE_REC: - printf("REC %s\n", ((PLpgSQL_rec *) d)->refname); + printf("REC %-16s typoid %u\n", + ((PLpgSQL_rec *) d)->refname, + ((PLpgSQL_rec *) d)->rectypeid); break; case PLPGSQL_DTYPE_RECFIELD: printf("RECFIELD %-16s of REC %d\n", diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y index 42f6a2e161..97c0d4f98a 100644 --- a/src/pl/plpgsql/src/pl_gram.y +++ b/src/pl/plpgsql/src/pl_gram.y @@ -512,7 +512,7 @@ decl_statement : decl_varname decl_const decl_datatype decl_collate decl_notnull else ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("row or record variable cannot be CONSTANT"), + errmsg("record variable cannot be CONSTANT"), parser_errposition(@2))); } if ($5) @@ -522,7 +522,7 @@ decl_statement : decl_varname decl_const decl_datatype decl_collate decl_notnull else ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("row or record variable cannot be NOT NULL"), + errmsg("record variable cannot be NOT NULL"), parser_errposition(@4))); } @@ -533,7 +533,7 @@ decl_statement : decl_varname decl_const decl_datatype decl_collate decl_notnull else ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("default value for row or record variable is not supported"), + errmsg("default value for record variable is not supported"), parser_errposition(@5))); } } @@ -1333,7 +1333,7 @@ for_control : for_variable K_IN { ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), - errmsg("loop variable of loop over rows must be a record or row variable or list of scalar variables"), + errmsg("loop variable of loop over rows must be a record variable or list of scalar variables"), parser_errposition(@1))); } new->query = expr; @@ -1386,6 
+1386,7 @@ for_control : for_variable K_IN new->var = (PLpgSQL_variable *) plpgsql_build_record($1.name, $1.lineno, + RECORDOID, true); $$ = (PLpgSQL_stmt *) new; @@ -1524,7 +1525,7 @@ for_control : for_variable K_IN { ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR), - errmsg("loop variable of loop over rows must be a record or row variable or list of scalar variables"), + errmsg("loop variable of loop over rows must be a record variable or list of scalar variables"), parser_errposition(@1))); } @@ -3328,7 +3329,7 @@ check_assignable(PLpgSQL_datum *datum, int location) parser_errposition(location))); break; case PLPGSQL_DTYPE_ROW: - /* always assignable? */ + /* always assignable? Shouldn't we check member vars? */ break; case PLPGSQL_DTYPE_REC: /* always assignable? What about NEW/OLD? */ @@ -3385,7 +3386,7 @@ read_into_target(PLpgSQL_variable **target, bool *strict) if ((tok = yylex()) == ',') ereport(ERROR, (errcode(ERRCODE_SYNTAX_ERROR), - errmsg("record or row variable cannot be part of multiple-item INTO list"), + errmsg("record variable cannot be part of multiple-item INTO list"), parser_errposition(yylloc))); plpgsql_push_back_token(tok); } diff --git a/src/pl/plpgsql/src/pl_handler.c b/src/pl/plpgsql/src/pl_handler.c index c49428d923..f38ef04077 100644 --- a/src/pl/plpgsql/src/pl_handler.c +++ b/src/pl/plpgsql/src/pl_handler.c @@ -443,14 +443,15 @@ plpgsql_validator(PG_FUNCTION_ARGS) } /* Disallow pseudotypes in arguments (either IN or OUT) */ - /* except for polymorphic */ + /* except for RECORD and polymorphic */ numargs = get_func_arg_info(tuple, &argtypes, &argnames, &argmodes); for (i = 0; i < numargs; i++) { if (get_typtype(argtypes[i]) == TYPTYPE_PSEUDO) { - if (!IsPolymorphicType(argtypes[i])) + if (argtypes[i] != RECORDOID && + !IsPolymorphicType(argtypes[i])) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("PL/pgSQL functions cannot accept type %s", diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index a9b9d91de7..d4eb67b738 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -20,6 +20,8 @@ #include "commands/event_trigger.h" #include "commands/trigger.h" #include "executor/spi.h" +#include "utils/expandedrecord.h" + /********************************************************************** * Definitions @@ -37,10 +39,9 @@ */ typedef enum PLpgSQL_nsitem_type { - PLPGSQL_NSTYPE_LABEL, - PLPGSQL_NSTYPE_VAR, - PLPGSQL_NSTYPE_ROW, - PLPGSQL_NSTYPE_REC + PLPGSQL_NSTYPE_LABEL, /* block label */ + PLPGSQL_NSTYPE_VAR, /* scalar variable */ + PLPGSQL_NSTYPE_REC /* composite variable */ } PLpgSQL_nsitem_type; /* @@ -72,9 +73,8 @@ typedef enum PLpgSQL_datum_type typedef enum PLpgSQL_type_type { PLPGSQL_TTYPE_SCALAR, /* scalar types and domains */ - PLPGSQL_TTYPE_ROW, /* composite types */ - PLPGSQL_TTYPE_REC, /* RECORD pseudotype */ - PLPGSQL_TTYPE_PSEUDO /* other pseudotypes */ + PLPGSQL_TTYPE_REC, /* composite types, including RECORD */ + PLPGSQL_TTYPE_PSEUDO /* pseudotypes */ } PLpgSQL_type_type; /* @@ -183,7 +183,6 @@ typedef struct PLpgSQL_type int16 typlen; /* stuff copied from its pg_type entry */ bool typbyval; char typtype; - Oid typrelid; Oid collation; /* from pg_type, but can be overridden */ bool typisarray; /* is "true" array, or domain over one */ int32 atttypmod; /* typmod (taken from someplace else) */ @@ -274,7 +273,12 @@ typedef struct PLpgSQL_var } PLpgSQL_var; /* - * Row variable + * Row variable - this represents one or more variables that are listed in an + * INTO clause, FOR-loop targetlist, 
cursor argument list, etc. We also use + * a row to represent a function's OUT parameters when there's more than one. + * + * Note that there's no way to name the row as such from PL/pgSQL code, + * so many functions don't need to support these. */ typedef struct PLpgSQL_row { @@ -283,21 +287,20 @@ typedef struct PLpgSQL_row char *refname; int lineno; - /* Note: TupleDesc is only set up for named rowtypes, else it is NULL. */ - TupleDesc rowtupdesc; - /* - * Note: if the underlying rowtype contains a dropped column, the - * corresponding fieldnames[] entry will be NULL, and there is no - * corresponding var (varnos[] will be -1). + * rowtupdesc is only set up if we might need to convert the row into a + * composite datum, which currently only happens for OUT parameters. + * Otherwise it is NULL. */ + TupleDesc rowtupdesc; + int nfields; char **fieldnames; int *varnos; } PLpgSQL_row; /* - * Record variable (non-fixed structure) + * Record variable (any composite type, including RECORD) */ typedef struct PLpgSQL_rec { @@ -305,11 +308,11 @@ typedef struct PLpgSQL_rec int dno; char *refname; int lineno; - - HeapTuple tup; - TupleDesc tupdesc; - bool freetup; - bool freetupdesc; + Oid rectypeid; /* declared type of variable */ + /* RECFIELDs for this record are chained together for easy access */ + int firstfield; /* dno of first RECFIELD, or -1 if none */ + /* We always store record variables as "expanded" records */ + ExpandedRecordHeader *erh; } PLpgSQL_rec; /* @@ -319,8 +322,12 @@ typedef struct PLpgSQL_recfield { PLpgSQL_datum_type dtype; int dno; - char *fieldname; + char *fieldname; /* name of field */ int recparentno; /* dno of parent record */ + int nextfield; /* dno of next child, or -1 if none */ + uint64 rectupledescid; /* record's tupledesc ID as of last lookup */ + ExpandedRecordFieldInfo finfo; /* field's attnum and type info */ + /* if rectupledescid == INVALID_TUPLEDESC_IDENTIFIER, finfo isn't valid */ } PLpgSQL_recfield; /* @@ -903,12 +910,12 @@ typedef struct PLpgSQL_execstate bool readonly_func; - TupleDesc rettupdesc; char *exitlabel; /* the "target" label of the current EXIT or * CONTINUE stmt, if any */ ErrorData *cur_error; /* current exception handler's error */ Tuplestorestate *tuple_store; /* SRFs accumulate results here */ + TupleDesc tuple_store_desc; /* descriptor for tuples in tuple_store */ MemoryContext tuple_store_cxt; ResourceOwner tuple_store_owner; ReturnSetInfo *rsi; @@ -917,6 +924,8 @@ typedef struct PLpgSQL_execstate int found_varno; int ndatums; PLpgSQL_datum **datums; + /* context containing variable values (same as func's SPI_proc context) */ + MemoryContext datum_context; /* * paramLI is what we use to pass local variable values to the executor. 
@@ -1088,7 +1097,9 @@ extern PLpgSQL_variable *plpgsql_build_variable(const char *refname, int lineno, PLpgSQL_type *dtype, bool add2namespace); extern PLpgSQL_rec *plpgsql_build_record(const char *refname, int lineno, - bool add2namespace); + Oid rectypeid, bool add2namespace); +extern PLpgSQL_recfield *plpgsql_build_recfield(PLpgSQL_rec *rec, + const char *fldname); extern int plpgsql_recognize_err_condition(const char *condname, bool allow_sqlstate); extern PLpgSQL_condition *plpgsql_parse_err_condition(char *condname); diff --git a/src/pl/plpgsql/src/sql/plpgsql_record.sql b/src/pl/plpgsql/src/sql/plpgsql_record.sql new file mode 100644 index 0000000000..069d2643cf --- /dev/null +++ b/src/pl/plpgsql/src/sql/plpgsql_record.sql @@ -0,0 +1,441 @@ +-- +-- Tests for PL/pgSQL handling of composite (record) variables +-- + +create type two_int4s as (f1 int4, f2 int4); +create type two_int8s as (q1 int8, q2 int8); + +-- base-case return of a composite type +create function retc(int) returns two_int8s language plpgsql as +$$ begin return row($1,1)::two_int8s; end $$; +select retc(42); + +-- ok to return a matching record type +create or replace function retc(int) returns two_int8s language plpgsql as +$$ begin return row($1::int8, 1::int8); end $$; +select retc(42); + +-- we don't currently support implicit casting +create or replace function retc(int) returns two_int8s language plpgsql as +$$ begin return row($1,1); end $$; +select retc(42); + +-- nor extra columns +create or replace function retc(int) returns two_int8s language plpgsql as +$$ begin return row($1::int8, 1::int8, 42); end $$; +select retc(42); + +-- same cases with an intermediate "record" variable +create or replace function retc(int) returns two_int8s language plpgsql as +$$ declare r record; begin r := row($1::int8, 1::int8); return r; end $$; +select retc(42); + +create or replace function retc(int) returns two_int8s language plpgsql as +$$ declare r record; begin r := row($1,1); return r; end $$; +select retc(42); + +create or replace function retc(int) returns two_int8s language plpgsql as +$$ declare r record; begin r := row($1::int8, 1::int8, 42); return r; end $$; +select retc(42); + +-- but, for mostly historical reasons, we do convert when assigning +-- to a named-composite-type variable +create or replace function retc(int) returns two_int8s language plpgsql as +$$ declare r two_int8s; begin r := row($1::int8, 1::int8, 42); return r; end $$; +select retc(42); + +do $$ declare c two_int8s; +begin c := row(1,2); raise notice 'c = %', c; end$$; + +do $$ declare c two_int8s; +begin for c in select 1,2 loop raise notice 'c = %', c; end loop; end$$; + +do $$ declare c4 two_int4s; c8 two_int8s; +begin + c8 := row(1,2); + c4 := c8; + c8 := c4; + raise notice 'c4 = %', c4; + raise notice 'c8 = %', c8; +end$$; + +-- check passing composite result to another function +create function getq1(two_int8s) returns int8 language plpgsql as $$ +declare r two_int8s; begin r := $1; return r.q1; end $$; + +select getq1(retc(344)); +select getq1(row(1,2)); + +do $$ +declare r1 two_int8s; r2 record; x int8; +begin + r1 := retc(345); + perform getq1(r1); + x := getq1(r1); + raise notice 'x = %', x; + r2 := retc(346); + perform getq1(r2); + x := getq1(r2); + raise notice 'x = %', x; +end$$; + +-- check assignments of composites +do $$ +declare r1 two_int8s; r2 two_int8s; r3 record; r4 record; +begin + r1 := row(1,2); + raise notice 'r1 = %', r1; + r1 := r1; -- shouldn't do anything + raise notice 'r1 = %', r1; + r2 := r1; + raise notice 'r1 
= %', r1; + raise notice 'r2 = %', r2; + r2.q2 = r1.q1 + 3; -- check that r2 has distinct storage + raise notice 'r1 = %', r1; + raise notice 'r2 = %', r2; + r1 := null; + raise notice 'r1 = %', r1; + raise notice 'r2 = %', r2; + r1 := row(7,11)::two_int8s; + r2 := r1; + raise notice 'r1 = %', r1; + raise notice 'r2 = %', r2; + r3 := row(1,2); + r4 := r3; + raise notice 'r3 = %', r3; + raise notice 'r4 = %', r4; + r4.f1 := r4.f1 + 3; -- check that r4 has distinct storage + raise notice 'r3 = %', r3; + raise notice 'r4 = %', r4; + r1 := r3; + raise notice 'r1 = %', r1; + r4 := r1; + raise notice 'r4 = %', r4; + r4.q2 := r4.q2 + 1; -- r4's field names have changed + raise notice 'r4 = %', r4; +end$$; + +-- fields of named-type vars read as null if uninitialized +do $$ +declare r1 two_int8s; +begin + raise notice 'r1 = %', r1; + raise notice 'r1.q1 = %', r1.q1; + raise notice 'r1.q2 = %', r1.q2; + raise notice 'r1 = %', r1; +end$$; + +do $$ +declare r1 two_int8s; +begin + raise notice 'r1.q1 = %', r1.q1; + raise notice 'r1.q2 = %', r1.q2; + raise notice 'r1 = %', r1; + raise notice 'r1.nosuchfield = %', r1.nosuchfield; +end$$; + +-- records, not so much +do $$ +declare r1 record; +begin + raise notice 'r1 = %', r1; + raise notice 'r1.f1 = %', r1.f1; + raise notice 'r1.f2 = %', r1.f2; + raise notice 'r1 = %', r1; +end$$; + +-- but OK if you assign first +do $$ +declare r1 record; +begin + raise notice 'r1 = %', r1; + r1 := row(1,2); + raise notice 'r1.f1 = %', r1.f1; + raise notice 'r1.f2 = %', r1.f2; + raise notice 'r1 = %', r1; + raise notice 'r1.nosuchfield = %', r1.nosuchfield; +end$$; + +-- check repeated assignments to composite fields +create table some_table (id int, data text); + +do $$ +declare r some_table; +begin + r := (23, 'skidoo'); + for i in 1 .. 
10 loop + r.id := r.id + i; + r.data := r.data || ' ' || i; + end loop; + raise notice 'r = %', r; +end$$; + +-- check behavior of function declared to return "record" + +create function returnsrecord(int) returns record language plpgsql as +$$ begin return row($1,$1+1); end $$; + +select returnsrecord(42); +select * from returnsrecord(42) as r(x int, y int); +select * from returnsrecord(42) as r(x int, y int, z int); -- fail +select * from returnsrecord(42) as r(x int, y bigint); -- fail + +-- same with an intermediate record variable +create or replace function returnsrecord(int) returns record language plpgsql as +$$ declare r record; begin r := row($1,$1+1); return r; end $$; + +select returnsrecord(42); +select * from returnsrecord(42) as r(x int, y int); +select * from returnsrecord(42) as r(x int, y int, z int); -- fail +select * from returnsrecord(42) as r(x int, y bigint); -- fail + +-- should work the same with a missing column in the actual result value +create table has_hole(f1 int, f2 int, f3 int); +alter table has_hole drop column f2; + +create or replace function returnsrecord(int) returns record language plpgsql as +$$ begin return row($1,$1+1)::has_hole; end $$; + +select returnsrecord(42); +select * from returnsrecord(42) as r(x int, y int); +select * from returnsrecord(42) as r(x int, y int, z int); -- fail +select * from returnsrecord(42) as r(x int, y bigint); -- fail + +-- same with an intermediate record variable +create or replace function returnsrecord(int) returns record language plpgsql as +$$ declare r record; begin r := row($1,$1+1)::has_hole; return r; end $$; + +select returnsrecord(42); +select * from returnsrecord(42) as r(x int, y int); +select * from returnsrecord(42) as r(x int, y int, z int); -- fail +select * from returnsrecord(42) as r(x int, y bigint); -- fail + +-- check access to a field of an argument declared "record" +create function getf1(x record) returns int language plpgsql as +$$ begin return x.f1; end $$; +select getf1(1); +select getf1(row(1,2)); +select getf1(row(1,2)::two_int8s); +select getf1(row(1,2)); + +-- check behavior when assignment to FOR-loop variable requires coercion +do $$ +declare r two_int8s; +begin + for r in select i, i+1 from generate_series(1,4) i + loop + raise notice 'r = %', r; + end loop; +end$$; + +-- check behavior when returning setof composite +create function returnssetofholes() returns setof has_hole language plpgsql as +$$ +declare r record; + h has_hole; +begin + return next h; + r := (1,2); + h := (3,4); + return next r; + return next h; + return next row(5,6); + return next row(7,8)::has_hole; +end$$; +select returnssetofholes(); + +create or replace function returnssetofholes() returns setof has_hole language plpgsql as +$$ +declare r record; +begin + return next r; -- fails, not assigned yet +end$$; +select returnssetofholes(); + +create or replace function returnssetofholes() returns setof has_hole language plpgsql as +$$ +begin + return next row(1,2,3); -- fails +end$$; +select returnssetofholes(); + +-- check behavior with changes of a named rowtype +create table mutable(f1 int, f2 text); + +create function sillyaddone(int) returns int language plpgsql as +$$ declare r mutable; begin r.f1 := $1; return r.f1 + 1; end $$; +select sillyaddone(42); + +alter table mutable drop column f1; +alter table mutable add column f1 float8; + +-- currently, this fails due to cached plan for "r.f1 + 1" expression +select sillyaddone(42); +\c - +-- but it's OK after a reconnect +select sillyaddone(42); + +alter table 
mutable drop column f1; +select sillyaddone(42); -- fail + +create function getf3(x mutable) returns int language plpgsql as +$$ begin return x.f3; end $$; +select getf3(null::mutable); -- doesn't work yet +alter table mutable add column f3 int; +select getf3(null::mutable); -- now it works +alter table mutable drop column f3; +select getf3(null::mutable); -- fails again + +-- check access to system columns in a record variable + +create function sillytrig() returns trigger language plpgsql as +$$begin + raise notice 'old.ctid = %', old.ctid; + raise notice 'old.tableoid = %', old.tableoid::regclass; + return new; +end$$; + +create trigger mutable_trig before update on mutable for each row +execute procedure sillytrig(); + +insert into mutable values ('foo'), ('bar'); +update mutable set f2 = f2 || ' baz'; +table mutable; + +-- check returning a composite datum from a trigger + +create or replace function sillytrig() returns trigger language plpgsql as +$$begin + return row(new.*); +end$$; + +update mutable set f2 = f2 || ' baz'; +table mutable; + +create or replace function sillytrig() returns trigger language plpgsql as +$$declare r record; +begin + r := row(new.*); + return r; +end$$; + +update mutable set f2 = f2 || ' baz'; +table mutable; + +-- +-- Domains of composite +-- + +create domain ordered_int8s as two_int8s check((value).q1 <= (value).q2); + +create function read_ordered_int8s(p ordered_int8s) returns int8 as $$ +begin return p.q1 + p.q2; end +$$ language plpgsql; + +select read_ordered_int8s(row(1, 2)); +select read_ordered_int8s(row(2, 1)); -- fail + +create function build_ordered_int8s(i int8, j int8) returns ordered_int8s as $$ +begin return row(i,j); end +$$ language plpgsql; + +select build_ordered_int8s(1,2); +select build_ordered_int8s(2,1); -- fail + +create function build_ordered_int8s_2(i int8, j int8) returns ordered_int8s as $$ +declare r record; begin r := row(i,j); return r; end +$$ language plpgsql; + +select build_ordered_int8s_2(1,2); +select build_ordered_int8s_2(2,1); -- fail + +create function build_ordered_int8s_3(i int8, j int8) returns ordered_int8s as $$ +declare r two_int8s; begin r := row(i,j); return r; end +$$ language plpgsql; + +select build_ordered_int8s_3(1,2); +select build_ordered_int8s_3(2,1); -- fail + +create function build_ordered_int8s_4(i int8, j int8) returns ordered_int8s as $$ +declare r ordered_int8s; begin r := row(i,j); return r; end +$$ language plpgsql; + +select build_ordered_int8s_4(1,2); +select build_ordered_int8s_4(2,1); -- fail + +create function build_ordered_int8s_a(i int8, j int8) returns ordered_int8s[] as $$ +begin return array[row(i,j), row(i,j+1)]; end +$$ language plpgsql; + +select build_ordered_int8s_a(1,2); +select build_ordered_int8s_a(2,1); -- fail + +-- check field assignment +do $$ +declare r ordered_int8s; +begin + r.q1 := null; + r.q2 := 43; + r.q1 := 42; + r.q2 := 41; -- fail +end$$; + +-- check whole-row assignment +do $$ +declare r ordered_int8s; +begin + r := null; + r := row(null,null); + r := row(1,2); + r := row(2,1); -- fail +end$$; + +-- check assignment in for-loop +do $$ +declare r ordered_int8s; +begin + for r in values (1,2),(3,4),(6,5) loop + raise notice 'r = %', r; + end loop; +end$$; + +-- check behavior with toastable fields, too + +create type two_texts as (f1 text, f2 text); +create domain ordered_texts as two_texts check((value).f1 <= (value).f2); + +create table sometable (id int, a text, b text); +-- b should be compressed, but in-line +insert into sometable values (1, 'a', 
repeat('ffoob',1000)); +-- this b should be out-of-line +insert into sometable values (2, 'a', repeat('ffoob',100000)); +-- this pair should fail the domain check +insert into sometable values (3, 'z', repeat('ffoob',100000)); + +do $$ +declare d ordered_texts; +begin + for d in select a, b from sometable loop + raise notice 'succeeded at "%"', d.f1; + end loop; +end$$; + +do $$ +declare r record; d ordered_texts; +begin + for r in select * from sometable loop + raise notice 'processing row %', r.id; + d := row(r.a, r.b); + end loop; +end$$; + +do $$ +declare r record; d ordered_texts; +begin + for r in select * from sometable loop + raise notice 'processing row %', r.id; + d := null; + d.f1 := r.a; + d.f2 := r.b; + end loop; +end$$; diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c index 6c6b16f4d7..d6a6a849c3 100644 --- a/src/pl/plpython/plpy_typeio.c +++ b/src/pl/plpython/plpy_typeio.c @@ -384,7 +384,7 @@ PLy_output_setup_func(PLyObToDatum *arg, MemoryContext arg_mcxt, /* We'll set up the per-field data later */ arg->u.tuple.recdesc = NULL; arg->u.tuple.typentry = typentry; - arg->u.tuple.tupdescseq = typentry ? typentry->tupDescSeqNo - 1 : 0; + arg->u.tuple.tupdescid = INVALID_TUPLEDESC_IDENTIFIER; arg->u.tuple.atts = NULL; arg->u.tuple.natts = 0; /* Mark this invalid till needed, too */ @@ -499,7 +499,7 @@ PLy_input_setup_func(PLyDatumToOb *arg, MemoryContext arg_mcxt, /* We'll set up the per-field data later */ arg->u.tuple.recdesc = NULL; arg->u.tuple.typentry = typentry; - arg->u.tuple.tupdescseq = typentry ? typentry->tupDescSeqNo - 1 : 0; + arg->u.tuple.tupdescid = INVALID_TUPLEDESC_IDENTIFIER; arg->u.tuple.atts = NULL; arg->u.tuple.natts = 0; } @@ -969,11 +969,11 @@ PLyObject_ToComposite(PLyObToDatum *arg, PyObject *plrv, /* We should have the descriptor of the type's typcache entry */ Assert(desc == arg->u.tuple.typentry->tupDesc); /* Detect change of descriptor, update cache if needed */ - if (arg->u.tuple.tupdescseq != arg->u.tuple.typentry->tupDescSeqNo) + if (arg->u.tuple.tupdescid != arg->u.tuple.typentry->tupDesc_identifier) { PLy_output_setup_tuple(arg, desc, PLy_current_execution_context()->curr_proc); - arg->u.tuple.tupdescseq = arg->u.tuple.typentry->tupDescSeqNo; + arg->u.tuple.tupdescid = arg->u.tuple.typentry->tupDesc_identifier; } } else diff --git a/src/pl/plpython/plpy_typeio.h b/src/pl/plpython/plpy_typeio.h index 91870c91b0..82bdfae548 100644 --- a/src/pl/plpython/plpy_typeio.h +++ b/src/pl/plpython/plpy_typeio.h @@ -42,7 +42,7 @@ typedef struct PLyTupleToOb TupleDesc recdesc; /* If we're dealing with a named composite type, these fields are set: */ TypeCacheEntry *typentry; /* typcache entry for type */ - int64 tupdescseq; /* last tupdesc seqno seen in typcache */ + uint64 tupdescid; /* last tupdesc identifier seen in typcache */ /* These fields are NULL/0 if not yet set: */ PLyDatumToOb *atts; /* array of per-column conversion info */ int natts; /* length of array */ @@ -107,7 +107,7 @@ typedef struct PLyObToTuple TupleDesc recdesc; /* If we're dealing with a named composite type, these fields are set: */ TypeCacheEntry *typentry; /* typcache entry for type */ - int64 tupdescseq; /* last tupdesc seqno seen in typcache */ + uint64 tupdescid; /* last tupdesc identifier seen in typcache */ /* These fields are NULL/0 if not yet set: */ PLyObToDatum *atts; /* array of per-column conversion info */ int natts; /* length of array */ diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index 
4f9501db00..0c1da08869 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -4595,23 +4595,32 @@ begin x int; y int := i; r record; + c int8_tbl; begin if i = 1 then x := 42; r := row(i, i+1); + c := row(i, i+1); end if; raise notice 'x = %', x; raise notice 'y = %', y; raise notice 'r = %', r; + raise notice 'c = %', c; end; end loop; end$$; NOTICE: x = 42 NOTICE: y = 1 NOTICE: r = (1,2) +NOTICE: c = (1,2) NOTICE: x = <NULL> NOTICE: y = 2 -ERROR: record "r" is not assigned yet +NOTICE: r = <NULL> +NOTICE: c = <NULL> +NOTICE: x = <NULL> +NOTICE: y = 3 +NOTICE: r = <NULL> +NOTICE: c = <NULL> \set VERBOSITY default -- Check handling of conflicts between plpgsql vars and table columns. set plpgsql.variable_conflict = error; diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 3914651bf6..6bdcfe7cc5 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -3745,14 +3745,17 @@ begin x int; y int := i; r record; + c int8_tbl; begin if i = 1 then x := 42; r := row(i, i+1); + c := row(i, i+1); end if; raise notice 'x = %', x; raise notice 'y = %', y; raise notice 'r = %', r; + raise notice 'c = %', c; end; end loop; end$$; From 40301c1c8bcbe92a9ba9bf017da03e83476ae0e5 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 13 Feb 2018 19:10:43 -0500 Subject: [PATCH 0989/1087] Speed up plpgsql function startup by doing fewer pallocs. Previously, copy_plpgsql_datum did a separate palloc for each variable needing instance-local storage. In simple benchmarks this made for a noticeable fraction of the total runtime. Improve it by precalculating the space needed for all of a function's variables and doing just one palloc for all of them. In passing, remove PLPGSQL_DTYPE_EXPR from the list of plpgsql "datum" types, since in fact it has nothing in common with the others, and there is no place that needs to discriminate on the basis of dtype between an expression and any type of datum. And add comments clarifying which datum struct fields are generic and which aren't.
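In outline: the function's compile step now precomputes the total space needed (copiable_size), and per-call setup carves every datum copy out of a single allocation. A minimal standalone sketch of that pattern, using hypothetical names (Item, copy_items) rather than the PLpgSQL_var/PLpgSQL_rec structs this patch actually copies, and assuming every item is copiable (the real code skips read-only datum kinds):

    #include "postgres.h"

    typedef struct Item
    {
        int		kind;
        char	payload[32];
    } Item;

    /* One-time step: total the MAXALIGN'd sizes of everything to be copied */
    static Size
    items_copiable_size(int nitems)
    {
        return (Size) nitems * MAXALIGN(sizeof(Item));
    }

    /* Per-call step: a single palloc serves all per-instance copies */
    static Item **
    copy_items(Item **src, int nitems, Size copiable_size)
    {
        Item	  **out = palloc(sizeof(Item *) * nitems);
        char	   *workspace = palloc(copiable_size);
        char	   *ws_next = workspace;
        int			i;

        for (i = 0; i < nitems; i++)
        {
            Item	   *copy = (Item *) ws_next;

            memcpy(copy, src[i], sizeof(Item));
            out[i] = copy;
            ws_next += MAXALIGN(sizeof(Item));	/* keep each copy maxaligned */
        }
        Assert(ws_next == workspace + copiable_size);
        return out;
    }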
Tom Lane, reviewed by Pavel Stehule Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us --- src/pl/plpgsql/src/pl_comp.c | 17 ++++++ src/pl/plpgsql/src/pl_exec.c | 111 +++++++++++++++++++---------------- src/pl/plpgsql/src/pl_gram.y | 4 -- src/pl/plpgsql/src/plpgsql.h | 84 +++++++++++++++----------- 4 files changed, 130 insertions(+), 86 deletions(-) diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index 09ecaec635..97cb763641 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -2183,12 +2183,29 @@ plpgsql_adddatum(PLpgSQL_datum *new) static void plpgsql_finish_datums(PLpgSQL_function *function) { + Size copiable_size = 0; int i; function->ndatums = plpgsql_nDatums; function->datums = palloc(sizeof(PLpgSQL_datum *) * plpgsql_nDatums); for (i = 0; i < plpgsql_nDatums; i++) + { function->datums[i] = plpgsql_Datums[i]; + + /* This must agree with copy_plpgsql_datums on what is copiable */ + switch (function->datums[i]->dtype) + { + case PLPGSQL_DTYPE_VAR: + copiable_size += MAXALIGN(sizeof(PLpgSQL_var)); + break; + case PLPGSQL_DTYPE_REC: + copiable_size += MAXALIGN(sizeof(PLpgSQL_rec)); + break; + default: + break; + } + } + function->copiable_size = copiable_size; } diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index 7612902e8f..c90024064a 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -235,7 +235,8 @@ static HTAB *shared_cast_hash = NULL; static void coerce_function_result_tuple(PLpgSQL_execstate *estate, TupleDesc tupdesc); static void plpgsql_exec_error_callback(void *arg); -static PLpgSQL_datum *copy_plpgsql_datum(PLpgSQL_datum *datum); +static void copy_plpgsql_datums(PLpgSQL_execstate *estate, + PLpgSQL_function *func); static MemoryContext get_stmt_mcontext(PLpgSQL_execstate *estate); static void push_stmt_mcontext(PLpgSQL_execstate *estate); static void pop_stmt_mcontext(PLpgSQL_execstate *estate); @@ -458,8 +459,7 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, * Make local execution copies of all the datums */ estate.err_text = gettext_noop("during initialization of execution state"); - for (i = 0; i < estate.ndatums; i++) - estate.datums[i] = copy_plpgsql_datum(func->datums[i]); + copy_plpgsql_datums(&estate, func); /* * Store the actual call argument values into the appropriate variables @@ -859,8 +859,7 @@ plpgsql_exec_trigger(PLpgSQL_function *func, * Make local execution copies of all the datums */ estate.err_text = gettext_noop("during initialization of execution state"); - for (i = 0; i < estate.ndatums; i++) - estate.datums[i] = copy_plpgsql_datum(func->datums[i]); + copy_plpgsql_datums(&estate, func); /* * Put the OLD and NEW tuples into record variables @@ -1153,7 +1152,6 @@ plpgsql_exec_event_trigger(PLpgSQL_function *func, EventTriggerData *trigdata) { PLpgSQL_execstate estate; ErrorContextCallback plerrcontext; - int i; int rc; PLpgSQL_var *var; @@ -1174,8 +1172,7 @@ plpgsql_exec_event_trigger(PLpgSQL_function *func, EventTriggerData *trigdata) * Make local execution copies of all the datums */ estate.err_text = gettext_noop("during initialization of execution state"); - for (i = 0; i < estate.ndatums; i++) - estate.datums[i] = copy_plpgsql_datum(func->datums[i]); + copy_plpgsql_datums(&estate, func); /* * Assign the special tg_ variables @@ -1290,57 +1287,73 @@ plpgsql_exec_error_callback(void *arg) * Support function for initializing local execution variables * ---------- */ -static PLpgSQL_datum * 
-copy_plpgsql_datum(PLpgSQL_datum *datum) +static void +copy_plpgsql_datums(PLpgSQL_execstate *estate, + PLpgSQL_function *func) { - PLpgSQL_datum *result; + int ndatums = estate->ndatums; + PLpgSQL_datum **indatums; + PLpgSQL_datum **outdatums; + char *workspace; + char *ws_next; + int i; - switch (datum->dtype) - { - case PLPGSQL_DTYPE_VAR: - { - PLpgSQL_var *new = palloc(sizeof(PLpgSQL_var)); + /* Allocate local datum-pointer array */ + estate->datums = (PLpgSQL_datum **) + palloc(sizeof(PLpgSQL_datum *) * ndatums); - memcpy(new, datum, sizeof(PLpgSQL_var)); - /* should be preset to null/non-freeable */ - Assert(new->isnull); - Assert(!new->freeval); + /* + * To reduce palloc overhead, we make a single palloc request for all the + * space needed for locally-instantiated datums. + */ + workspace = palloc(func->copiable_size); + ws_next = workspace; - result = (PLpgSQL_datum *) new; - } - break; + /* Fill datum-pointer array, copying datums into workspace as needed */ + indatums = func->datums; + outdatums = estate->datums; + for (i = 0; i < ndatums; i++) + { + PLpgSQL_datum *indatum = indatums[i]; + PLpgSQL_datum *outdatum; - case PLPGSQL_DTYPE_REC: - { - PLpgSQL_rec *new = palloc(sizeof(PLpgSQL_rec)); + /* This must agree with plpgsql_finish_datums on what is copiable */ + switch (indatum->dtype) + { + case PLPGSQL_DTYPE_VAR: + outdatum = (PLpgSQL_datum *) ws_next; + memcpy(outdatum, indatum, sizeof(PLpgSQL_var)); + ws_next += MAXALIGN(sizeof(PLpgSQL_var)); + break; - memcpy(new, datum, sizeof(PLpgSQL_rec)); - /* should be preset to empty */ - Assert(new->erh == NULL); + case PLPGSQL_DTYPE_REC: + outdatum = (PLpgSQL_datum *) ws_next; + memcpy(outdatum, indatum, sizeof(PLpgSQL_rec)); + ws_next += MAXALIGN(sizeof(PLpgSQL_rec)); + break; - result = (PLpgSQL_datum *) new; - } - break; + case PLPGSQL_DTYPE_ROW: + case PLPGSQL_DTYPE_RECFIELD: + case PLPGSQL_DTYPE_ARRAYELEM: - case PLPGSQL_DTYPE_ROW: - case PLPGSQL_DTYPE_RECFIELD: - case PLPGSQL_DTYPE_ARRAYELEM: + /* + * These datum records are read-only at runtime, so no need to + * copy them (well, RECFIELD and ARRAYELEM contain cached + * data, but we'd just as soon centralize the caching anyway). 
+ */ + outdatum = indatum; + break; - /* - * These datum records are read-only at runtime, so no need to - * copy them (well, RECFIELD and ARRAYELEM contain cached data, - * but we'd just as soon centralize the caching anyway) - */ - result = datum; - break; + default: + elog(ERROR, "unrecognized dtype: %d", indatum->dtype); + outdatum = NULL; /* keep compiler quiet */ + break; + } - default: - elog(ERROR, "unrecognized dtype: %d", datum->dtype); - result = NULL; /* keep compiler quiet */ - break; + outdatums[i] = outdatum; } - return result; + Assert(ws_next == workspace + func->copiable_size); } /* @@ -3504,8 +3517,8 @@ plpgsql_estate_setup(PLpgSQL_execstate *estate, estate->found_varno = func->found_varno; estate->ndatums = func->ndatums; - estate->datums = palloc(sizeof(PLpgSQL_datum *) * estate->ndatums); - /* caller is expected to fill the datums array */ + estate->datums = NULL; + /* the datums array will be filled by copy_plpgsql_datums() */ estate->datum_context = CurrentMemoryContext; /* initialize our ParamListInfo with appropriate hook functions */ diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y index 97c0d4f98a..ee943eed93 100644 --- a/src/pl/plpgsql/src/pl_gram.y +++ b/src/pl/plpgsql/src/pl_gram.y @@ -564,7 +564,6 @@ decl_statement : decl_varname decl_const decl_datatype decl_collate decl_notnull curname_def = palloc0(sizeof(PLpgSQL_expr)); - curname_def->dtype = PLPGSQL_DTYPE_EXPR; strcpy(buf, "SELECT "); cp1 = new->refname; cp2 = buf + strlen(buf); @@ -2697,7 +2696,6 @@ read_sql_construct(int until, } expr = palloc0(sizeof(PLpgSQL_expr)); - expr->dtype = PLPGSQL_DTYPE_EXPR; expr->query = pstrdup(ds.data); expr->plan = NULL; expr->paramnos = NULL; @@ -2944,7 +2942,6 @@ make_execsql_stmt(int firsttoken, int location) ds.data[--ds.len] = '\0'; expr = palloc0(sizeof(PLpgSQL_expr)); - expr->dtype = PLPGSQL_DTYPE_EXPR; expr->query = pstrdup(ds.data); expr->plan = NULL; expr->paramnos = NULL; @@ -3816,7 +3813,6 @@ read_cursor_args(PLpgSQL_var *cursor, int until, const char *expected) appendStringInfoChar(&ds, ';'); expr = palloc0(sizeof(PLpgSQL_expr)); - expr->dtype = PLPGSQL_DTYPE_EXPR; expr->query = pstrdup(ds.data); expr->plan = NULL; expr->paramnos = NULL; diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index d4eb67b738..dadbfb569c 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -63,8 +63,7 @@ typedef enum PLpgSQL_datum_type PLPGSQL_DTYPE_ROW, PLPGSQL_DTYPE_REC, PLPGSQL_DTYPE_RECFIELD, - PLPGSQL_DTYPE_ARRAYELEM, - PLPGSQL_DTYPE_EXPR + PLPGSQL_DTYPE_ARRAYELEM } PLpgSQL_datum_type; /* @@ -188,39 +187,11 @@ typedef struct PLpgSQL_type int32 atttypmod; /* typmod (taken from someplace else) */ } PLpgSQL_type; -/* - * Generic datum array item - * - * PLpgSQL_datum is the common supertype for PLpgSQL_expr, PLpgSQL_var, - * PLpgSQL_row, PLpgSQL_rec, PLpgSQL_recfield, and PLpgSQL_arrayelem - */ -typedef struct PLpgSQL_datum -{ - PLpgSQL_datum_type dtype; - int dno; -} PLpgSQL_datum; - -/* - * Scalar or composite variable - * - * The variants PLpgSQL_var, PLpgSQL_row, and PLpgSQL_rec share these - * fields - */ -typedef struct PLpgSQL_variable -{ - PLpgSQL_datum_type dtype; - int dno; - char *refname; - int lineno; -} PLpgSQL_variable; - /* * SQL Query to plan and execute */ typedef struct PLpgSQL_expr { - PLpgSQL_datum_type dtype; - int dno; char *query; SPIPlanPtr plan; Bitmapset *paramnos; /* all dnos referenced by this query */ @@ -249,6 +220,32 @@ typedef struct PLpgSQL_expr LocalTransactionId 
expr_simple_lxid; } PLpgSQL_expr; +/* + * Generic datum array item + * + * PLpgSQL_datum is the common supertype for PLpgSQL_var, PLpgSQL_row, + * PLpgSQL_rec, PLpgSQL_recfield, and PLpgSQL_arrayelem. + */ +typedef struct PLpgSQL_datum +{ + PLpgSQL_datum_type dtype; + int dno; +} PLpgSQL_datum; + +/* + * Scalar or composite variable + * + * The variants PLpgSQL_var, PLpgSQL_row, and PLpgSQL_rec share these + * fields. + */ +typedef struct PLpgSQL_variable +{ + PLpgSQL_datum_type dtype; + int dno; + char *refname; + int lineno; +} PLpgSQL_variable; + /* * Scalar variable */ @@ -258,11 +255,18 @@ typedef struct PLpgSQL_var int dno; char *refname; int lineno; + /* end of PLpgSQL_variable fields */ + bool isconst; + bool notnull; PLpgSQL_type *datatype; - int isconst; - int notnull; PLpgSQL_expr *default_val; + + /* + * Variables declared as CURSOR FOR <query> are mostly like ordinary + * scalar variables of type refcursor, but they have these additional + * properties: + */ PLpgSQL_expr *cursor_explicit_expr; int cursor_explicit_argrow; int cursor_options; @@ -286,6 +290,7 @@ typedef struct PLpgSQL_row int dno; char *refname; int lineno; + /* end of PLpgSQL_variable fields */ /* * rowtupdesc is only set up if we might need to convert the row into a @@ -308,6 +313,8 @@ typedef struct PLpgSQL_rec int dno; char *refname; int lineno; + /* end of PLpgSQL_variable fields */ + Oid rectypeid; /* declared type of variable */ /* RECFIELDs for this record are chained together for easy access */ int firstfield; /* dno of first RECFIELD, or -1 if none */ @@ -322,6 +329,8 @@ typedef struct PLpgSQL_recfield { PLpgSQL_datum_type dtype; int dno; + /* end of PLpgSQL_datum fields */ + char *fieldname; /* name of field */ int recparentno; /* dno of parent record */ int nextfield; /* dno of next child, or -1 if none */ @@ -337,6 +346,8 @@ typedef struct PLpgSQL_arrayelem { PLpgSQL_datum_type dtype; int dno; + /* end of PLpgSQL_datum fields */ + PLpgSQL_expr *subscript; int arrayparentno; /* dno of parent array variable */ @@ -884,6 +895,7 @@ typedef struct PLpgSQL_function /* the datums representing the function's local variables */ int ndatums; PLpgSQL_datum **datums; + Size copiable_size; /* space for locally instantiated datums */ /* function body parsetree */ PLpgSQL_stmt_block *action; @@ -920,8 +932,14 @@ typedef struct PLpgSQL_execstate ResourceOwner tuple_store_owner; ReturnSetInfo *rsi; - /* the datums representing the function's local variables */ int found_varno; + + /* + * The datums representing the function's local variables. Some of these + * are local storage in this execstate, but some just point to the shared + * copy belonging to the PLpgSQL_function, depending on whether or not we + * need any per-execution state for the datum's dtype. + */ int ndatums; PLpgSQL_datum **datums; /* context containing variable values (same as func's SPI_proc context) */ From fd333bc763ea104f2a2c10c6b0061c996d4a2f5a Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 13 Feb 2018 19:20:37 -0500 Subject: [PATCH 0990/1087] Speed up plpgsql trigger startup by introducing "promises". Over the years we've accreted quite a few special variables that are predefined in plpgsql trigger functions. The cost of initializing these variables to their defined values turns out to be a significant part of the runtime of simple triggers; but, undoubtedly, most real-world triggers never examine the values of most of these variables.
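The fix, described below, amounts to lazy initialization: the variable merely records which predefined value it stands for, and that value is computed only if the variable is ever read. A minimal sketch of the idea, using hypothetical names (LazyVar, read_lazy_var) rather than this patch's actual structures:

    #include "postgres.h"
    #include "commands/trigger.h"
    #include "utils/builtins.h"

    typedef enum
    {
        PROMISE_NONE,			/* value already computed, or no promise */
        PROMISE_TG_NAME,
        PROMISE_TG_LEVEL
    } Promise;

    typedef struct LazyVar
    {
        Promise		promise;	/* which predefined value to supply */
        Datum		value;		/* meaningful only once promise is fulfilled */
        bool		isnull;
    } LazyVar;

    /* Called on each read of the variable; cheap once the promise is gone */
    static Datum
    read_lazy_var(LazyVar *var, TriggerData *trigdata)
    {
        switch (var->promise)
        {
            case PROMISE_TG_NAME:
                var->value = CStringGetTextDatum(trigdata->tg_trigger->tgname);
                var->isnull = false;
                break;
            case PROMISE_TG_LEVEL:
                var->value = CStringGetTextDatum(
                    TRIGGER_FIRED_FOR_ROW(trigdata->tg_event) ?
                    "ROW" : "STATEMENT");
                var->isnull = false;
                break;
            case PROMISE_NONE:
                break;			/* nothing to do */
        }
        var->promise = PROMISE_NONE;	/* disarm: cost is paid at most once */
        return var->value;
    }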
To improve matters, invent the notion of a variable that has a "promise" attached to it, specifying which of the predetermined values should be assigned to the variable if anything ever reads it. This eliminates all the unneeded startup overhead, in return for a small penalty on accesses to these variables. Tom Lane, reviewed by Pavel Stehule Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us --- src/pl/plpgsql/src/pl_comp.c | 50 +++-- src/pl/plpgsql/src/pl_exec.c | 334 ++++++++++++++++++++++------------ src/pl/plpgsql/src/pl_funcs.c | 5 + src/pl/plpgsql/src/pl_gram.y | 3 + src/pl/plpgsql/src/plpgsql.h | 56 ++++-- 5 files changed, 306 insertions(+), 142 deletions(-) diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index 97cb763641..526aa8f621 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -607,7 +607,9 @@ do_compile(FunctionCallInfo fcinfo, -1, InvalidOid), true); - function->tg_name_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_NAME; /* Add the variable tg_when */ var = plpgsql_build_variable("tg_when", 0, @@ -615,7 +617,9 @@ do_compile(FunctionCallInfo fcinfo, -1, function->fn_input_collation), true); - function->tg_when_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_WHEN; /* Add the variable tg_level */ var = plpgsql_build_variable("tg_level", 0, @@ -623,7 +627,9 @@ do_compile(FunctionCallInfo fcinfo, -1, function->fn_input_collation), true); - function->tg_level_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_LEVEL; /* Add the variable tg_op */ var = plpgsql_build_variable("tg_op", 0, @@ -631,7 +637,9 @@ do_compile(FunctionCallInfo fcinfo, -1, function->fn_input_collation), true); - function->tg_op_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_OP; /* Add the variable tg_relid */ var = plpgsql_build_variable("tg_relid", 0, @@ -639,7 +647,9 @@ do_compile(FunctionCallInfo fcinfo, -1, InvalidOid), true); - function->tg_relid_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_RELID; /* Add the variable tg_relname */ var = plpgsql_build_variable("tg_relname", 0, @@ -647,7 +657,9 @@ do_compile(FunctionCallInfo fcinfo, -1, InvalidOid), true); - function->tg_relname_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_TABLE_NAME; /* tg_table_name is now preferred to tg_relname */ var = plpgsql_build_variable("tg_table_name", 0, @@ -655,7 +667,9 @@ do_compile(FunctionCallInfo fcinfo, -1, InvalidOid), true); - function->tg_table_name_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_TABLE_NAME; /* add the variable tg_table_schema */ var = plpgsql_build_variable("tg_table_schema", 0, @@ -663,7 +677,9 @@ do_compile(FunctionCallInfo fcinfo, -1, InvalidOid), true); - function->tg_table_schema_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) 
var)->promise = PLPGSQL_PROMISE_TG_TABLE_SCHEMA; /* Add the variable tg_nargs */ var = plpgsql_build_variable("tg_nargs", 0, @@ -671,7 +687,9 @@ do_compile(FunctionCallInfo fcinfo, -1, InvalidOid), true); - function->tg_nargs_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_NARGS; /* Add the variable tg_argv */ var = plpgsql_build_variable("tg_argv", 0, @@ -679,7 +697,9 @@ do_compile(FunctionCallInfo fcinfo, -1, function->fn_input_collation), true); - function->tg_argv_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_ARGV; break; @@ -701,7 +721,9 @@ do_compile(FunctionCallInfo fcinfo, -1, function->fn_input_collation), true); - function->tg_event_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_EVENT; /* Add the variable tg_tag */ var = plpgsql_build_variable("tg_tag", 0, @@ -709,7 +731,9 @@ do_compile(FunctionCallInfo fcinfo, -1, function->fn_input_collation), true); - function->tg_tag_varno = var->dno; + Assert(var->dtype == PLPGSQL_DTYPE_VAR); + var->dtype = PLPGSQL_DTYPE_PROMISE; + ((PLpgSQL_var *) var)->promise = PLPGSQL_PROMISE_TG_TAG; break; @@ -1878,6 +1902,7 @@ build_row_from_vars(PLpgSQL_variable **vars, int numvars) switch (var->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: typoid = ((PLpgSQL_var *) var)->datatype->typoid; typmod = ((PLpgSQL_var *) var)->datatype->atttypmod; typcoll = ((PLpgSQL_var *) var)->datatype->collation; @@ -2196,6 +2221,7 @@ plpgsql_finish_datums(PLpgSQL_function *function) switch (function->datums[i]->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: copiable_size += MAXALIGN(sizeof(PLpgSQL_var)); break; case PLPGSQL_DTYPE_REC: diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index c90024064a..f6866743ac 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -237,6 +237,8 @@ static void coerce_function_result_tuple(PLpgSQL_execstate *estate, static void plpgsql_exec_error_callback(void *arg); static void copy_plpgsql_datums(PLpgSQL_execstate *estate, PLpgSQL_function *func); +static void plpgsql_fulfill_promise(PLpgSQL_execstate *estate, + PLpgSQL_var *var); static MemoryContext get_stmt_mcontext(PLpgSQL_execstate *estate); static void push_stmt_mcontext(PLpgSQL_execstate *estate); static void pop_stmt_mcontext(PLpgSQL_execstate *estate); @@ -547,6 +549,7 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, break; default: + /* Anything else should not be an argument variable */ elog(ERROR, "unrecognized dtype: %d", func->datums[i]->dtype); } } @@ -834,10 +837,8 @@ plpgsql_exec_trigger(PLpgSQL_function *func, { PLpgSQL_execstate estate; ErrorContextCallback plerrcontext; - int i; int rc; TupleDesc tupdesc; - PLpgSQL_var *var; PLpgSQL_rec *rec_new, *rec_old; HeapTuple rettup; @@ -846,6 +847,7 @@ plpgsql_exec_trigger(PLpgSQL_function *func, * Setup the execution state */ plpgsql_estate_setup(&estate, func, NULL, NULL); + estate.trigdata = trigdata; /* * Setup error traceback support for ereport() @@ -906,106 +908,6 @@ plpgsql_exec_trigger(PLpgSQL_function *func, rc = SPI_register_trigger_data(trigdata); Assert(rc >= 0); - /* - * Assign the special tg_ variables - */ - - var = (PLpgSQL_var *) (estate.datums[func->tg_op_varno]); - if 
(TRIGGER_FIRED_BY_INSERT(trigdata->tg_event)) - assign_text_var(&estate, var, "INSERT"); - else if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event)) - assign_text_var(&estate, var, "UPDATE"); - else if (TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)) - assign_text_var(&estate, var, "DELETE"); - else if (TRIGGER_FIRED_BY_TRUNCATE(trigdata->tg_event)) - assign_text_var(&estate, var, "TRUNCATE"); - else - elog(ERROR, "unrecognized trigger action: not INSERT, DELETE, UPDATE, or TRUNCATE"); - - var = (PLpgSQL_var *) (estate.datums[func->tg_name_varno]); - assign_simple_var(&estate, var, - DirectFunctionCall1(namein, - CStringGetDatum(trigdata->tg_trigger->tgname)), - false, true); - - var = (PLpgSQL_var *) (estate.datums[func->tg_when_varno]); - if (TRIGGER_FIRED_BEFORE(trigdata->tg_event)) - assign_text_var(&estate, var, "BEFORE"); - else if (TRIGGER_FIRED_AFTER(trigdata->tg_event)) - assign_text_var(&estate, var, "AFTER"); - else if (TRIGGER_FIRED_INSTEAD(trigdata->tg_event)) - assign_text_var(&estate, var, "INSTEAD OF"); - else - elog(ERROR, "unrecognized trigger execution time: not BEFORE, AFTER, or INSTEAD OF"); - - var = (PLpgSQL_var *) (estate.datums[func->tg_level_varno]); - if (TRIGGER_FIRED_FOR_ROW(trigdata->tg_event)) - assign_text_var(&estate, var, "ROW"); - else if (TRIGGER_FIRED_FOR_STATEMENT(trigdata->tg_event)) - assign_text_var(&estate, var, "STATEMENT"); - else - elog(ERROR, "unrecognized trigger event type: not ROW or STATEMENT"); - - var = (PLpgSQL_var *) (estate.datums[func->tg_relid_varno]); - assign_simple_var(&estate, var, - ObjectIdGetDatum(trigdata->tg_relation->rd_id), - false, false); - - var = (PLpgSQL_var *) (estate.datums[func->tg_relname_varno]); - assign_simple_var(&estate, var, - DirectFunctionCall1(namein, - CStringGetDatum(RelationGetRelationName(trigdata->tg_relation))), - false, true); - - var = (PLpgSQL_var *) (estate.datums[func->tg_table_name_varno]); - assign_simple_var(&estate, var, - DirectFunctionCall1(namein, - CStringGetDatum(RelationGetRelationName(trigdata->tg_relation))), - false, true); - - var = (PLpgSQL_var *) (estate.datums[func->tg_table_schema_varno]); - assign_simple_var(&estate, var, - DirectFunctionCall1(namein, - CStringGetDatum(get_namespace_name( - RelationGetNamespace( - trigdata->tg_relation)))), - false, true); - - var = (PLpgSQL_var *) (estate.datums[func->tg_nargs_varno]); - assign_simple_var(&estate, var, - Int16GetDatum(trigdata->tg_trigger->tgnargs), - false, false); - - var = (PLpgSQL_var *) (estate.datums[func->tg_argv_varno]); - if (trigdata->tg_trigger->tgnargs > 0) - { - /* - * For historical reasons, tg_argv[] subscripts start at zero not one. - * So we can't use construct_array(). 
- */ - int nelems = trigdata->tg_trigger->tgnargs; - Datum *elems; - int dims[1]; - int lbs[1]; - - elems = palloc(sizeof(Datum) * nelems); - for (i = 0; i < nelems; i++) - elems[i] = CStringGetTextDatum(trigdata->tg_trigger->tgargs[i]); - dims[0] = nelems; - lbs[0] = 0; - - assign_simple_var(&estate, var, - PointerGetDatum(construct_md_array(elems, NULL, - 1, dims, lbs, - TEXTOID, - -1, false, 'i')), - false, true); - } - else - { - assign_simple_var(&estate, var, (Datum) 0, true, false); - } - estate.err_text = gettext_noop("during function entry"); /* @@ -1153,12 +1055,12 @@ plpgsql_exec_event_trigger(PLpgSQL_function *func, EventTriggerData *trigdata) PLpgSQL_execstate estate; ErrorContextCallback plerrcontext; int rc; - PLpgSQL_var *var; /* * Setup the execution state */ plpgsql_estate_setup(&estate, func, NULL, NULL); + estate.evtrigdata = trigdata; /* * Setup error traceback support for ereport() @@ -1174,15 +1076,6 @@ plpgsql_exec_event_trigger(PLpgSQL_function *func, EventTriggerData *trigdata) estate.err_text = gettext_noop("during initialization of execution state"); copy_plpgsql_datums(&estate, func); - /* - * Assign the special tg_ variables - */ - var = (PLpgSQL_var *) (estate.datums[func->tg_event_varno]); - assign_text_var(&estate, var, trigdata->event); - - var = (PLpgSQL_var *) (estate.datums[func->tg_tag_varno]); - assign_text_var(&estate, var, trigdata->tag); - /* * Let the instrumentation plugin peek at this function */ @@ -1321,6 +1214,7 @@ copy_plpgsql_datums(PLpgSQL_execstate *estate, switch (indatum->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: outdatum = (PLpgSQL_datum *) ws_next; memcpy(outdatum, indatum, sizeof(PLpgSQL_var)); ws_next += MAXALIGN(sizeof(PLpgSQL_var)); @@ -1356,6 +1250,166 @@ copy_plpgsql_datums(PLpgSQL_execstate *estate, Assert(ws_next == workspace + func->copiable_size); } +/* + * If the variable has an armed "promise", compute the promised value + * and assign it to the variable. + * The assignment automatically disarms the promise. + */ +static void +plpgsql_fulfill_promise(PLpgSQL_execstate *estate, + PLpgSQL_var *var) +{ + MemoryContext oldcontext; + + if (var->promise == PLPGSQL_PROMISE_NONE) + return; /* nothing to do */ + + /* + * This will typically be invoked in a short-lived context such as the + * mcontext. We must create variable values in the estate's datum + * context. This quick-and-dirty solution risks leaking some additional + * cruft there, but since any one promise is honored at most once per + * function call, it's probably not worth being more careful. 
+ */ + oldcontext = MemoryContextSwitchTo(estate->datum_context); + + switch (var->promise) + { + case PLPGSQL_PROMISE_TG_NAME: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + assign_simple_var(estate, var, + DirectFunctionCall1(namein, + CStringGetDatum(estate->trigdata->tg_trigger->tgname)), + false, true); + break; + + case PLPGSQL_PROMISE_TG_WHEN: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + if (TRIGGER_FIRED_BEFORE(estate->trigdata->tg_event)) + assign_text_var(estate, var, "BEFORE"); + else if (TRIGGER_FIRED_AFTER(estate->trigdata->tg_event)) + assign_text_var(estate, var, "AFTER"); + else if (TRIGGER_FIRED_INSTEAD(estate->trigdata->tg_event)) + assign_text_var(estate, var, "INSTEAD OF"); + else + elog(ERROR, "unrecognized trigger execution time: not BEFORE, AFTER, or INSTEAD OF"); + break; + + case PLPGSQL_PROMISE_TG_LEVEL: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + if (TRIGGER_FIRED_FOR_ROW(estate->trigdata->tg_event)) + assign_text_var(estate, var, "ROW"); + else if (TRIGGER_FIRED_FOR_STATEMENT(estate->trigdata->tg_event)) + assign_text_var(estate, var, "STATEMENT"); + else + elog(ERROR, "unrecognized trigger event type: not ROW or STATEMENT"); + break; + + case PLPGSQL_PROMISE_TG_OP: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + if (TRIGGER_FIRED_BY_INSERT(estate->trigdata->tg_event)) + assign_text_var(estate, var, "INSERT"); + else if (TRIGGER_FIRED_BY_UPDATE(estate->trigdata->tg_event)) + assign_text_var(estate, var, "UPDATE"); + else if (TRIGGER_FIRED_BY_DELETE(estate->trigdata->tg_event)) + assign_text_var(estate, var, "DELETE"); + else if (TRIGGER_FIRED_BY_TRUNCATE(estate->trigdata->tg_event)) + assign_text_var(estate, var, "TRUNCATE"); + else + elog(ERROR, "unrecognized trigger action: not INSERT, DELETE, UPDATE, or TRUNCATE"); + break; + + case PLPGSQL_PROMISE_TG_RELID: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + assign_simple_var(estate, var, + ObjectIdGetDatum(estate->trigdata->tg_relation->rd_id), + false, false); + break; + + case PLPGSQL_PROMISE_TG_TABLE_NAME: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + assign_simple_var(estate, var, + DirectFunctionCall1(namein, + CStringGetDatum(RelationGetRelationName(estate->trigdata->tg_relation))), + false, true); + break; + + case PLPGSQL_PROMISE_TG_TABLE_SCHEMA: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + assign_simple_var(estate, var, + DirectFunctionCall1(namein, + CStringGetDatum(get_namespace_name(RelationGetNamespace(estate->trigdata->tg_relation)))), + false, true); + break; + + case PLPGSQL_PROMISE_TG_NARGS: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + assign_simple_var(estate, var, + Int16GetDatum(estate->trigdata->tg_trigger->tgnargs), + false, false); + break; + + case PLPGSQL_PROMISE_TG_ARGV: + if (estate->trigdata == NULL) + elog(ERROR, "trigger promise is not in a trigger function"); + if (estate->trigdata->tg_trigger->tgnargs > 0) + { + /* + * For historical reasons, tg_argv[] subscripts start at zero + * not one. So we can't use construct_array(). 
+ */ + int nelems = estate->trigdata->tg_trigger->tgnargs; + Datum *elems; + int dims[1]; + int lbs[1]; + int i; + + elems = palloc(sizeof(Datum) * nelems); + for (i = 0; i < nelems; i++) + elems[i] = CStringGetTextDatum(estate->trigdata->tg_trigger->tgargs[i]); + dims[0] = nelems; + lbs[0] = 0; + + assign_simple_var(estate, var, + PointerGetDatum(construct_md_array(elems, NULL, + 1, dims, lbs, + TEXTOID, + -1, false, 'i')), + false, true); + } + else + { + assign_simple_var(estate, var, (Datum) 0, true, false); + } + break; + + case PLPGSQL_PROMISE_TG_EVENT: + if (estate->evtrigdata == NULL) + elog(ERROR, "event trigger promise is not in an event trigger function"); + assign_text_var(estate, var, estate->evtrigdata->event); + break; + + case PLPGSQL_PROMISE_TG_TAG: + if (estate->evtrigdata == NULL) + elog(ERROR, "event trigger promise is not in an event trigger function"); + assign_text_var(estate, var, estate->evtrigdata->tag); + break; + + default: + elog(ERROR, "unrecognized promise type: %d", var->promise); + } + + MemoryContextSwitchTo(oldcontext); +} + /* * Create a memory context for statement-lifespan variables, if we don't * have one already. It will be a child of stmt_mcontext_parent, which is @@ -1464,6 +1518,10 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) /* * The set of dtypes handled here must match plpgsql_add_initdatums(). + * + * Note that we currently don't support promise datums within blocks, + * only at a function's outermost scope, so we needn't handle those + * here. */ switch (datum->dtype) { @@ -2778,6 +2836,12 @@ exec_stmt_return(PLpgSQL_execstate *estate, PLpgSQL_stmt_return *stmt) switch (retvar->dtype) { + case PLPGSQL_DTYPE_PROMISE: + /* fulfill promise if needed, then handle like regular var */ + plpgsql_fulfill_promise(estate, (PLpgSQL_var *) retvar); + + /* FALL THRU */ + case PLPGSQL_DTYPE_VAR: { PLpgSQL_var *var = (PLpgSQL_var *) retvar; @@ -2917,6 +2981,12 @@ exec_stmt_return_next(PLpgSQL_execstate *estate, switch (retvar->dtype) { + case PLPGSQL_DTYPE_PROMISE: + /* fulfill promise if needed, then handle like regular var */ + plpgsql_fulfill_promise(estate, (PLpgSQL_var *) retvar); + + /* FALL THRU */ + case PLPGSQL_DTYPE_VAR: { PLpgSQL_var *var = (PLpgSQL_var *) retvar; @@ -3487,6 +3557,8 @@ plpgsql_estate_setup(PLpgSQL_execstate *estate, func->cur_estate = estate; estate->func = func; + estate->trigdata = NULL; + estate->evtrigdata = NULL; estate->retval = (Datum) 0; estate->retisnull = true; @@ -4542,6 +4614,7 @@ exec_assign_value(PLpgSQL_execstate *estate, switch (target->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: { /* * Target is a variable @@ -4604,10 +4677,16 @@ exec_assign_value(PLpgSQL_execstate *estate, * cannot reliably be made any earlier; we have to be looking * at the object's standard R/W pointer to be sure pointer * equality is meaningful. + * + * Also, if it's a promise variable, we should disarm the + * promise in any case --- otherwise, assigning null to an + * armed promise variable would fail to disarm the promise. 
*/ if (var->value != newvalue || var->isnull || isNull) assign_simple_var(estate, var, newvalue, isNull, (!var->datatype->typbyval && !isNull)); + else + var->promise = PLPGSQL_PROMISE_NONE; break; } @@ -4951,6 +5030,12 @@ exec_eval_datum(PLpgSQL_execstate *estate, switch (datum->dtype) { + case PLPGSQL_DTYPE_PROMISE: + /* fulfill promise if needed, then handle like regular var */ + plpgsql_fulfill_promise(estate, (PLpgSQL_var *) datum); + + /* FALL THRU */ + case PLPGSQL_DTYPE_VAR: { PLpgSQL_var *var = (PLpgSQL_var *) datum; @@ -5093,6 +5178,7 @@ plpgsql_exec_get_datum_type(PLpgSQL_execstate *estate, switch (datum->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: { PLpgSQL_var *var = (PLpgSQL_var *) datum; @@ -5176,6 +5262,7 @@ plpgsql_exec_get_datum_type_info(PLpgSQL_execstate *estate, switch (datum->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: { PLpgSQL_var *var = (PLpgSQL_var *) datum; @@ -5874,6 +5961,7 @@ plpgsql_param_fetch(ParamListInfo params, switch (datum->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: /* always safe */ break; @@ -5989,8 +6077,8 @@ plpgsql_param_compile(ParamListInfo params, Param *param, * Select appropriate eval function. It seems worth special-casing * DTYPE_VAR and DTYPE_RECFIELD for performance. Also, we can determine * in advance whether MakeExpandedObjectReadOnly() will be required. - * Currently, only VAR and REC datums could contain read/write expanded - * objects. + * Currently, only VAR/PROMISE and REC datums could contain read/write + * expanded objects. */ if (datum->dtype == PLPGSQL_DTYPE_VAR) { @@ -6002,6 +6090,14 @@ plpgsql_param_compile(ParamListInfo params, Param *param, } else if (datum->dtype == PLPGSQL_DTYPE_RECFIELD) scratch.d.cparam.paramfunc = plpgsql_param_eval_recfield; + else if (datum->dtype == PLPGSQL_DTYPE_PROMISE) + { + if (dno != expr->rwparam && + ((PLpgSQL_var *) datum)->datatype->typlen == -1) + scratch.d.cparam.paramfunc = plpgsql_param_eval_generic_ro; + else + scratch.d.cparam.paramfunc = plpgsql_param_eval_generic; + } else if (datum->dtype == PLPGSQL_DTYPE_REC && dno != expr->rwparam) scratch.d.cparam.paramfunc = plpgsql_param_eval_generic_ro; @@ -7680,7 +7776,8 @@ static void assign_simple_var(PLpgSQL_execstate *estate, PLpgSQL_var *var, Datum newvalue, bool isnull, bool freeable) { - Assert(var->dtype == PLPGSQL_DTYPE_VAR); + Assert(var->dtype == PLPGSQL_DTYPE_VAR || + var->dtype == PLPGSQL_DTYPE_PROMISE); /* Free the old value if needed */ if (var->freeval) { @@ -7695,6 +7792,13 @@ assign_simple_var(PLpgSQL_execstate *estate, PLpgSQL_var *var, var->value = newvalue; var->isnull = isnull; var->freeval = freeable; + + /* + * If it's a promise variable, then either we just assigned the promised + * value, or the user explicitly assigned an overriding value. Either + * way, cancel the promise. 
+ */ + var->promise = PLPGSQL_PROMISE_NONE; } /* diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c index b36fab67bc..379fd69f44 100644 --- a/src/pl/plpgsql/src/pl_funcs.c +++ b/src/pl/plpgsql/src/pl_funcs.c @@ -729,6 +729,7 @@ plpgsql_free_function_memory(PLpgSQL_function *func) switch (d->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: { PLpgSQL_var *var = (PLpgSQL_var *) d; @@ -1582,6 +1583,7 @@ plpgsql_dumptree(PLpgSQL_function *func) switch (d->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: { PLpgSQL_var *var = (PLpgSQL_var *) d; @@ -1608,6 +1610,9 @@ plpgsql_dumptree(PLpgSQL_function *func) dump_expr(var->cursor_explicit_expr); printf("\n"); } + if (var->promise != PLPGSQL_PROMISE_NONE) + printf(" PROMISE %d\n", + (int) var->promise); } break; case PLPGSQL_DTYPE_ROW: diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y index ee943eed93..5bf45942a6 100644 --- a/src/pl/plpgsql/src/pl_gram.y +++ b/src/pl/plpgsql/src/pl_gram.y @@ -3170,6 +3170,7 @@ make_return_stmt(int location) if (tok == T_DATUM && plpgsql_peek() == ';' && (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_VAR || + yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_PROMISE || yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_ROW || yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_REC)) { @@ -3231,6 +3232,7 @@ make_return_next_stmt(int location) if (tok == T_DATUM && plpgsql_peek() == ';' && (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_VAR || + yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_PROMISE || yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_ROW || yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_REC)) { @@ -3318,6 +3320,7 @@ check_assignable(PLpgSQL_datum *datum, int location) switch (datum->dtype) { case PLPGSQL_DTYPE_VAR: + case PLPGSQL_DTYPE_PROMISE: if (((PLpgSQL_var *) datum)->isconst) ereport(ERROR, (errcode(ERRCODE_ERROR_IN_ASSIGNMENT), diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index dadbfb569c..01b89a5ffa 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -63,9 +63,29 @@ typedef enum PLpgSQL_datum_type PLPGSQL_DTYPE_ROW, PLPGSQL_DTYPE_REC, PLPGSQL_DTYPE_RECFIELD, - PLPGSQL_DTYPE_ARRAYELEM + PLPGSQL_DTYPE_ARRAYELEM, + PLPGSQL_DTYPE_PROMISE } PLpgSQL_datum_type; +/* + * DTYPE_PROMISE datums have these possible ways of computing the promise + */ +typedef enum PLpgSQL_promise_type +{ + PLPGSQL_PROMISE_NONE = 0, /* not a promise, or promise satisfied */ + PLPGSQL_PROMISE_TG_NAME, + PLPGSQL_PROMISE_TG_WHEN, + PLPGSQL_PROMISE_TG_LEVEL, + PLPGSQL_PROMISE_TG_OP, + PLPGSQL_PROMISE_TG_RELID, + PLPGSQL_PROMISE_TG_TABLE_NAME, + PLPGSQL_PROMISE_TG_TABLE_SCHEMA, + PLPGSQL_PROMISE_TG_NARGS, + PLPGSQL_PROMISE_TG_ARGV, + PLPGSQL_PROMISE_TG_EVENT, + PLPGSQL_PROMISE_TG_TAG +} PLpgSQL_promise_type; + /* * Variants distinguished in PLpgSQL_type structs */ @@ -248,6 +268,14 @@ typedef struct PLpgSQL_variable /* * Scalar variable + * + * DTYPE_VAR and DTYPE_PROMISE datums both use this struct type. + * A PROMISE datum works exactly like a VAR datum for most purposes, + * but if it is read without having previously been assigned to, then + * a special "promised" value is computed and assigned to the datum + * before the read is performed. This technique avoids the overhead of + * computing the variable's value in cases where we expect that many + * functions will never read it. 
*/ typedef struct PLpgSQL_var { @@ -271,9 +299,18 @@ typedef struct PLpgSQL_var int cursor_explicit_argrow; int cursor_options; + /* Fields below here can change at runtime */ + Datum value; bool isnull; bool freeval; + + /* + * The promise field records which "promised" value to assign if the + * promise must be honored. If it's a normal variable, or the promise has + * been fulfilled, this is PLPGSQL_PROMISE_NONE. + */ + PLpgSQL_promise_type promise; } PLpgSQL_var; /* @@ -869,20 +906,6 @@ typedef struct PLpgSQL_function int found_varno; int new_varno; int old_varno; - int tg_name_varno; - int tg_when_varno; - int tg_level_varno; - int tg_op_varno; - int tg_relid_varno; - int tg_relname_varno; - int tg_table_name_varno; - int tg_table_schema_varno; - int tg_nargs_varno; - int tg_argv_varno; - - /* for event triggers */ - int tg_event_varno; - int tg_tag_varno; PLpgSQL_resolve_option resolve_option; @@ -912,6 +935,9 @@ typedef struct PLpgSQL_execstate { PLpgSQL_function *func; /* function being executed */ + TriggerData *trigdata; /* if regular trigger, data about firing */ + EventTriggerData *evtrigdata; /* if event trigger, data about firing */ + Datum retval; bool retisnull; Oid rettype; /* type of current retval */ From f9263006d871d127794a402a7bef713fdd882156 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 13 Feb 2018 22:15:08 -0500 Subject: [PATCH 0991/1087] Support CONSTANT/NOT NULL/initial value for plpgsql composite variables. These features were never implemented previously for composite or record variables ... not that the documentation admitted it, so there are no doc updates here. This also fixes some issues concerning enforcing DOMAIN NOT NULL constraints against plpgsql variables, although I'm not sure that that topic is completely dealt with. I created a new plpgsql test file for these features, and moved the one relevant existing test case into that file.
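For illustration, all three properties can now be attached to a composite variable at once. Here is a hypothetical sketch (this exact combination is not taken verbatim from the committed tests) using the var_record type that the new test file creates:

    do $$
    declare
        -- CONSTANT, NOT NULL, and an initializer on a composite variable
        x constant var_record not null := row(1, 2);
    begin
        raise notice 'x = %', x;    -- prints (1,2)
        -- x := row(3, 4);          -- would fail: variable "x" is declared CONSTANT
    end$$;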
Tom Lane, reviewed by Daniel Gustafsson Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us --- src/pl/plpgsql/src/Makefile | 3 +- .../plpgsql/src/expected/plpgsql_varprops.out | 300 ++++++++++++++++++ src/pl/plpgsql/src/pl_comp.c | 17 +- src/pl/plpgsql/src/pl_exec.c | 72 ++++- src/pl/plpgsql/src/pl_funcs.c | 15 + src/pl/plpgsql/src/pl_gram.y | 75 ++--- src/pl/plpgsql/src/plpgsql.h | 27 +- src/pl/plpgsql/src/sql/plpgsql_varprops.sql | 249 +++++++++++++++ src/test/regress/expected/plpgsql.out | 36 --- src/test/regress/sql/plpgsql.sql | 26 -- 10 files changed, 686 insertions(+), 134 deletions(-) create mode 100644 src/pl/plpgsql/src/expected/plpgsql_varprops.out create mode 100644 src/pl/plpgsql/src/sql/plpgsql_varprops.sql diff --git a/src/pl/plpgsql/src/Makefile b/src/pl/plpgsql/src/Makefile index 2190eab616..3ac64e2d44 100644 --- a/src/pl/plpgsql/src/Makefile +++ b/src/pl/plpgsql/src/Makefile @@ -26,7 +26,8 @@ DATA = plpgsql.control plpgsql--1.0.sql plpgsql--unpackaged--1.0.sql REGRESS_OPTS = --dbname=$(PL_TESTDB) -REGRESS = plpgsql_call plpgsql_control plpgsql_record plpgsql_transaction +REGRESS = plpgsql_call plpgsql_control plpgsql_record \ + plpgsql_transaction plpgsql_varprops all: all-lib diff --git a/src/pl/plpgsql/src/expected/plpgsql_varprops.out b/src/pl/plpgsql/src/expected/plpgsql_varprops.out new file mode 100644 index 0000000000..109056c054 --- /dev/null +++ b/src/pl/plpgsql/src/expected/plpgsql_varprops.out @@ -0,0 +1,300 @@ +-- +-- Tests for PL/pgSQL variable properties: CONSTANT, NOT NULL, initializers +-- +create type var_record as (f1 int4, f2 int4); +create domain int_nn as int not null; +create domain var_record_nn as var_record not null; +create domain var_record_colnn as var_record check((value).f2 is not null); +-- CONSTANT +do $$ +declare x constant int := 42; +begin + raise notice 'x = %', x; +end$$; +NOTICE: x = 42 +do $$ +declare x constant int; +begin + x := 42; -- fail +end$$; +ERROR: variable "x" is declared CONSTANT +LINE 4: x := 42; -- fail + ^ +do $$ +declare x constant int; y int; +begin + for x, y in select 1, 2 loop -- fail + end loop; +end$$; +ERROR: variable "x" is declared CONSTANT +LINE 4: for x, y in select 1, 2 loop -- fail + ^ +do $$ +declare x constant int[]; +begin + x[1] := 42; -- fail +end$$; +ERROR: variable "x" is declared CONSTANT +LINE 4: x[1] := 42; -- fail + ^ +do $$ +declare x constant int[]; y int; +begin + for x[1], y in select 1, 2 loop -- fail (currently, unsupported syntax) + end loop; +end$$; +ERROR: syntax error at or near "[" +LINE 4: for x[1], y in select 1, 2 loop -- fail (currently, unsup... 
+ ^ +do $$ +declare x constant var_record; +begin + x.f1 := 42; -- fail +end$$; +ERROR: variable "x" is declared CONSTANT +LINE 4: x.f1 := 42; -- fail + ^ +do $$ +declare x constant var_record; y int; +begin + for x.f1, y in select 1, 2 loop -- fail + end loop; +end$$; +ERROR: variable "x" is declared CONSTANT +LINE 4: for x.f1, y in select 1, 2 loop -- fail + ^ +-- initializer expressions +do $$ +declare x int := sin(0); +begin + raise notice 'x = %', x; +end$$; +NOTICE: x = 0 +do $$ +declare x int := 1/0; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: division by zero +CONTEXT: SQL statement "SELECT 1/0" +PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x bigint[] := array[1,3,5]; +begin + raise notice 'x = %', x; +end$$; +NOTICE: x = {1,3,5} +do $$ +declare x record := row(1,2,3); +begin + raise notice 'x = %', x; +end$$; +NOTICE: x = (1,2,3) +do $$ +declare x var_record := row(1,2); +begin + raise notice 'x = %', x; +end$$; +NOTICE: x = (1,2) +-- NOT NULL +do $$ +declare x int not null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: variable "x" must have a default value, since it's declared NOT NULL +LINE 2: declare x int not null; -- fail + ^ +do $$ +declare x int not null := 42; +begin + raise notice 'x = %', x; + x := null; -- fail +end$$; +NOTICE: x = 42 +ERROR: null value cannot be assigned to variable "x" declared NOT NULL +CONTEXT: PL/pgSQL function inline_code_block line 5 at assignment +do $$ +declare x int not null := null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: null value cannot be assigned to variable "x" declared NOT NULL +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x record not null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: variable "x" must have a default value, since it's declared NOT NULL +LINE 2: declare x record not null; -- fail + ^ +do $$ +declare x record not null := row(42); +begin + raise notice 'x = %', x; + x := row(null); -- ok + raise notice 'x = %', x; + x := null; -- fail +end$$; +NOTICE: x = (42) +NOTICE: x = () +ERROR: null value cannot be assigned to variable "x" declared NOT NULL +CONTEXT: PL/pgSQL function inline_code_block line 7 at assignment +do $$ +declare x record not null := null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: null value cannot be assigned to variable "x" declared NOT NULL +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x var_record not null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: variable "x" must have a default value, since it's declared NOT NULL +LINE 2: declare x var_record not null; -- fail + ^ +do $$ +declare x var_record not null := row(41,42); +begin + raise notice 'x = %', x; + x := row(null,null); -- ok + raise notice 'x = %', x; + x := null; -- fail +end$$; +NOTICE: x = (41,42) +NOTICE: x = (,) +ERROR: null value cannot be assigned to variable "x" declared NOT NULL +CONTEXT: PL/pgSQL function inline_code_block line 7 at assignment +do $$ +declare x var_record not null := null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: null value cannot be assigned to variable "x" declared NOT NULL +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +-- Check that variables are reinitialized on block re-entry. 
+\set VERBOSITY terse \\ -- needed for output stability +do $$ +begin + for i in 1..3 loop + declare + x int; + y int := i; + r record; + c var_record; + begin + if i = 1 then + x := 42; + r := row(i, i+1); + c := row(i, i+1); + end if; + raise notice 'x = %', x; + raise notice 'y = %', y; + raise notice 'r = %', r; + raise notice 'c = %', c; + end; + end loop; +end$$; +NOTICE: x = 42 +NOTICE: y = 1 +NOTICE: r = (1,2) +NOTICE: c = (1,2) +NOTICE: x = +NOTICE: y = 2 +NOTICE: r = +NOTICE: c = +NOTICE: x = +NOTICE: y = 3 +NOTICE: r = +NOTICE: c = +\set VERBOSITY default +-- Check enforcement of domain constraints during initialization +do $$ +declare x int_nn; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: domain int_nn does not allow null values +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x int_nn := null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: domain int_nn does not allow null values +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x int_nn := 42; +begin + raise notice 'x = %', x; + x := null; -- fail +end$$; +NOTICE: x = 42 +ERROR: domain int_nn does not allow null values +CONTEXT: PL/pgSQL function inline_code_block line 5 at assignment +do $$ +declare x var_record_nn; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: domain var_record_nn does not allow null values +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x var_record_nn := null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: domain var_record_nn does not allow null values +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x var_record_nn := row(1,2); +begin + raise notice 'x = %', x; + x := row(null,null); -- ok + x := null; -- fail +end$$; +NOTICE: x = (1,2) +ERROR: domain var_record_nn does not allow null values +CONTEXT: PL/pgSQL function inline_code_block line 6 at assignment +do $$ +declare x var_record_colnn; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: value for domain var_record_colnn violates check constraint "var_record_colnn_check" +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x var_record_colnn := null; -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: value for domain var_record_colnn violates check constraint "var_record_colnn_check" +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x var_record_colnn := row(1,null); -- fail +begin + raise notice 'x = %', x; +end$$; +ERROR: value for domain var_record_colnn violates check constraint "var_record_colnn_check" +CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization +do $$ +declare x var_record_colnn := row(1,2); +begin + raise notice 'x = %', x; + x := null; -- fail +end$$; +NOTICE: x = (1,2) +ERROR: value for domain var_record_colnn violates check constraint "var_record_colnn_check" +CONTEXT: PL/pgSQL function inline_code_block line 5 at assignment +do $$ +declare x var_record_colnn := row(1,2); +begin + raise notice 'x = %', x; + x := row(null,null); -- fail +end$$; +NOTICE: x = (1,2) +ERROR: value for domain var_record_colnn violates check constraint "var_record_colnn_check" +CONTEXT: PL/pgSQL function 
inline_code_block line 5 at assignment diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index 526aa8f621..aab92c4711 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -594,11 +594,11 @@ do_compile(FunctionCallInfo fcinfo, errhint("The arguments of the trigger can be accessed through TG_NARGS and TG_ARGV instead."))); /* Add the record for referencing NEW ROW */ - rec = plpgsql_build_record("new", 0, RECORDOID, true); + rec = plpgsql_build_record("new", 0, NULL, RECORDOID, true); function->new_varno = rec->dno; /* Add the record for referencing OLD ROW */ - rec = plpgsql_build_record("old", 0, RECORDOID, true); + rec = plpgsql_build_record("old", 0, NULL, RECORDOID, true); function->old_varno = rec->dno; /* Add the variable tg_name */ @@ -1811,7 +1811,7 @@ plpgsql_build_variable(const char *refname, int lineno, PLpgSQL_type *dtype, var->refname = pstrdup(refname); var->lineno = lineno; var->datatype = dtype; - /* other fields might be filled by caller */ + /* other fields are left as 0, might be changed by caller */ /* preset to NULL */ var->value = 0; @@ -1831,7 +1831,8 @@ plpgsql_build_variable(const char *refname, int lineno, PLpgSQL_type *dtype, /* Composite type -- build a record variable */ PLpgSQL_rec *rec; - rec = plpgsql_build_record(refname, lineno, dtype->typoid, + rec = plpgsql_build_record(refname, lineno, + dtype, dtype->typoid, add2namespace); result = (PLpgSQL_variable *) rec; break; @@ -1856,7 +1857,8 @@ plpgsql_build_variable(const char *refname, int lineno, PLpgSQL_type *dtype, * Build empty named record variable, and optionally add it to namespace */ PLpgSQL_rec * -plpgsql_build_record(const char *refname, int lineno, Oid rectypeid, +plpgsql_build_record(const char *refname, int lineno, + PLpgSQL_type *dtype, Oid rectypeid, bool add2namespace) { PLpgSQL_rec *rec; @@ -1865,6 +1867,8 @@ plpgsql_build_record(const char *refname, int lineno, Oid rectypeid, rec->dtype = PLPGSQL_DTYPE_REC; rec->refname = pstrdup(refname); rec->lineno = lineno; + /* other fields are left as 0, might be changed by caller */ + rec->datatype = dtype; rec->rectypeid = rectypeid; rec->firstfield = -1; rec->erh = NULL; @@ -1899,6 +1903,9 @@ build_row_from_vars(PLpgSQL_variable **vars, int numvars) int32 typmod; Oid typcoll; + /* Member vars of a row should never be const */ + Assert(!var->isconst); + switch (var->dtype) { case PLPGSQL_DTYPE_VAR: diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index f6866743ac..5054d20ab1 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -539,7 +539,7 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, } else { - /* If arg is null, treat it as an empty row */ + /* If arg is null, set variable to null */ exec_move_row(&estate, (PLpgSQL_variable *) rec, NULL, NULL); } @@ -1539,11 +1539,9 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) { /* * If needed, give the datatype a chance to reject - * NULLs, by assigning a NULL to the variable. We + * NULLs, by assigning a NULL to the variable. We * claim the value is of type UNKNOWN, not the var's - * datatype, else coercion will be skipped. (Do this - * before the notnull check to be consistent with - * exec_assign_value.) + * datatype, else coercion will be skipped. 
*/ if (var->datatype->typtype == TYPTYPE_DOMAIN) exec_assign_value(estate, @@ -1553,11 +1551,8 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) UNKNOWNOID, -1); - if (var->notnull) - ereport(ERROR, - (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED), - errmsg("variable \"%s\" declared NOT NULL cannot default to NULL", - var->refname))); + /* parser should have rejected NOT NULL */ + Assert(!var->notnull); } else { @@ -1571,9 +1566,28 @@ exec_stmt_block(PLpgSQL_execstate *estate, PLpgSQL_stmt_block *block) { PLpgSQL_rec *rec = (PLpgSQL_rec *) datum; - if (rec->erh) - DeleteExpandedObject(ExpandedRecordGetDatum(rec->erh)); - rec->erh = NULL; + /* + * Deletion of any existing object will be handled during + * the assignments below, and in some cases it's more + * efficient for us not to get rid of it beforehand. + */ + if (rec->default_val == NULL) + { + /* + * If needed, give the datatype a chance to reject + * NULLs, by assigning a NULL to the variable. + */ + exec_move_row(estate, (PLpgSQL_variable *) rec, + NULL, NULL); + + /* parser should have rejected NOT NULL */ + Assert(!rec->notnull); + } + else + { + exec_assign_expr(estate, (PLpgSQL_datum *) rec, + rec->default_val); + } } break; @@ -4725,7 +4739,13 @@ exec_assign_value(PLpgSQL_execstate *estate, if (isNull) { - /* If source is null, just assign nulls to the record */ + if (rec->notnull) + ereport(ERROR, + (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED), + errmsg("null value cannot be assigned to variable \"%s\" declared NOT NULL", + rec->refname))); + + /* Set variable to a simple NULL */ exec_move_row(estate, (PLpgSQL_variable *) rec, NULL, NULL); } @@ -6375,9 +6395,27 @@ exec_move_row(PLpgSQL_execstate *estate, */ if (tupdesc == NULL) { - if (rec->erh) - DeleteExpandedObject(ExpandedRecordGetDatum(rec->erh)); - rec->erh = NULL; + if (rec->datatype && + rec->datatype->typtype == TYPTYPE_DOMAIN) + { + /* + * If it's a composite domain, NULL might not be a legal + * value, so we instead need to make an empty expanded record + * and ensure that domain type checking gets done. If there + * is already an expanded record, piggyback on its lookups. 
+ */ + newerh = make_expanded_record_for_rec(estate, rec, + NULL, rec->erh); + expanded_record_set_tuple(newerh, NULL, false); + assign_record_var(estate, rec, newerh); + } + else + { + /* Just clear it to NULL */ + if (rec->erh) + DeleteExpandedObject(ExpandedRecordGetDatum(rec->erh)); + rec->erh = NULL; + } return; } diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c index 379fd69f44..b986fc39b3 100644 --- a/src/pl/plpgsql/src/pl_funcs.c +++ b/src/pl/plpgsql/src/pl_funcs.c @@ -740,6 +740,11 @@ plpgsql_free_function_memory(PLpgSQL_function *func) case PLPGSQL_DTYPE_ROW: break; case PLPGSQL_DTYPE_REC: + { + PLpgSQL_rec *rec = (PLpgSQL_rec *) d; + + free_expr(rec->default_val); + } break; case PLPGSQL_DTYPE_RECFIELD: break; @@ -1633,6 +1638,16 @@ plpgsql_dumptree(PLpgSQL_function *func) printf("REC %-16s typoid %u\n", ((PLpgSQL_rec *) d)->refname, ((PLpgSQL_rec *) d)->rectypeid); + if (((PLpgSQL_rec *) d)->isconst) + printf(" CONSTANT\n"); + if (((PLpgSQL_rec *) d)->notnull) + printf(" NOT NULL\n"); + if (((PLpgSQL_rec *) d)->default_val != NULL) + { + printf(" DEFAULT "); + dump_expr(((PLpgSQL_rec *) d)->default_val); + printf("\n"); + } break; case PLPGSQL_DTYPE_RECFIELD: printf("RECFIELD %-16s of REC %d\n", diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y index 5bf45942a6..688fbd6531 100644 --- a/src/pl/plpgsql/src/pl_gram.y +++ b/src/pl/plpgsql/src/pl_gram.y @@ -505,37 +505,20 @@ decl_statement : decl_varname decl_const decl_datatype decl_collate decl_notnull var = plpgsql_build_variable($1.name, $1.lineno, $3, true); - if ($2) - { - if (var->dtype == PLPGSQL_DTYPE_VAR) - ((PLpgSQL_var *) var)->isconst = $2; - else - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("record variable cannot be CONSTANT"), - parser_errposition(@2))); - } - if ($5) - { - if (var->dtype == PLPGSQL_DTYPE_VAR) - ((PLpgSQL_var *) var)->notnull = $5; - else - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("record variable cannot be NOT NULL"), - parser_errposition(@4))); + var->isconst = $2; + var->notnull = $5; + var->default_val = $6; - } - if ($6 != NULL) - { - if (var->dtype == PLPGSQL_DTYPE_VAR) - ((PLpgSQL_var *) var)->default_val = $6; - else - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("default value for record variable is not supported"), - parser_errposition(@5))); - } + /* + * The combination of NOT NULL without an initializer + * can't work, so let's reject it at compile time. 
+ */ + if (var->notnull && var->default_val == NULL) + ereport(ERROR, + (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED), + errmsg("variable \"%s\" must have a default value, since it's declared NOT NULL", + var->refname), + parser_errposition(@5))); } | decl_varname K_ALIAS K_FOR decl_aliasitem ';' { @@ -635,6 +618,7 @@ decl_cursor_args : foreach (l, $2) { PLpgSQL_variable *arg = (PLpgSQL_variable *) lfirst(l); + Assert(!arg->isconst); new->fieldnames[i] = arg->refname; new->varnos[i] = arg->dno; i++; @@ -1385,6 +1369,7 @@ for_control : for_variable K_IN new->var = (PLpgSQL_variable *) plpgsql_build_record($1.name, $1.lineno, + NULL, RECORDOID, true); @@ -2237,7 +2222,7 @@ exception_sect : -1, plpgsql_curr_compile->fn_input_collation), true); - ((PLpgSQL_var *) var)->isconst = true; + var->isconst = true; new->sqlstate_varno = var->dno; var = plpgsql_build_variable("sqlerrm", lineno, @@ -2245,7 +2230,7 @@ exception_sect : -1, plpgsql_curr_compile->fn_input_collation), true); - ((PLpgSQL_var *) var)->isconst = true; + var->isconst = true; new->sqlerrm_varno = var->dno; $$ = new; @@ -3321,24 +3306,26 @@ check_assignable(PLpgSQL_datum *datum, int location) { case PLPGSQL_DTYPE_VAR: case PLPGSQL_DTYPE_PROMISE: - if (((PLpgSQL_var *) datum)->isconst) + case PLPGSQL_DTYPE_REC: + if (((PLpgSQL_variable *) datum)->isconst) ereport(ERROR, (errcode(ERRCODE_ERROR_IN_ASSIGNMENT), - errmsg("\"%s\" is declared CONSTANT", - ((PLpgSQL_var *) datum)->refname), + errmsg("variable \"%s\" is declared CONSTANT", + ((PLpgSQL_variable *) datum)->refname), parser_errposition(location))); break; case PLPGSQL_DTYPE_ROW: - /* always assignable? Shouldn't we check member vars? */ - break; - case PLPGSQL_DTYPE_REC: - /* always assignable? What about NEW/OLD? */ + /* always assignable; member vars were checked at compile time */ break; case PLPGSQL_DTYPE_RECFIELD: - /* always assignable? */ + /* assignable if parent record is */ + check_assignable(plpgsql_Datums[((PLpgSQL_recfield *) datum)->recparentno], + location); break; case PLPGSQL_DTYPE_ARRAYELEM: - /* always assignable? 
*/ + /* assignable if parent array is */ + check_assignable(plpgsql_Datums[((PLpgSQL_arrayelem *) datum)->arrayparentno], + location); break; default: elog(ERROR, "unrecognized dtype: %d", datum->dtype); @@ -3463,9 +3450,8 @@ read_into_scalar_list(char *initial_name, */ plpgsql_push_back_token(tok); - row = palloc(sizeof(PLpgSQL_row)); + row = palloc0(sizeof(PLpgSQL_row)); row->dtype = PLPGSQL_DTYPE_ROW; - row->refname = pstrdup("*internal*"); row->lineno = plpgsql_location_to_lineno(initial_location); row->rowtupdesc = NULL; row->nfields = nfields; @@ -3498,9 +3484,8 @@ make_scalar_list1(char *initial_name, check_assignable(initial_datum, location); - row = palloc(sizeof(PLpgSQL_row)); + row = palloc0(sizeof(PLpgSQL_row)); row->dtype = PLPGSQL_DTYPE_ROW; - row->refname = pstrdup("*internal*"); row->lineno = lineno; row->rowtupdesc = NULL; row->nfields = 1; diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index 01b89a5ffa..c2449f03cf 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -264,6 +264,9 @@ typedef struct PLpgSQL_variable int dno; char *refname; int lineno; + bool isconst; + bool notnull; + PLpgSQL_expr *default_val; } PLpgSQL_variable; /* @@ -283,12 +286,12 @@ typedef struct PLpgSQL_var int dno; char *refname; int lineno; - /* end of PLpgSQL_variable fields */ - bool isconst; bool notnull; - PLpgSQL_type *datatype; PLpgSQL_expr *default_val; + /* end of PLpgSQL_variable fields */ + + PLpgSQL_type *datatype; /* * Variables declared as CURSOR FOR are mostly like ordinary @@ -320,6 +323,11 @@ typedef struct PLpgSQL_var * * Note that there's no way to name the row as such from PL/pgSQL code, * so many functions don't need to support these. + * + * refname, isconst, notnull, and default_val are unsupported (and hence + * always zero/null) for a row. The member variables of a row should have + * been checked to be writable at compile time, so isconst is correctly set + * to false. notnull and default_val aren't applicable. 
*/ typedef struct PLpgSQL_row { @@ -327,6 +335,9 @@ typedef struct PLpgSQL_row int dno; char *refname; int lineno; + bool isconst; + bool notnull; + PLpgSQL_expr *default_val; /* end of PLpgSQL_variable fields */ /* @@ -350,11 +361,18 @@ typedef struct PLpgSQL_rec int dno; char *refname; int lineno; + bool isconst; + bool notnull; + PLpgSQL_expr *default_val; /* end of PLpgSQL_variable fields */ + PLpgSQL_type *datatype; /* can be NULL, if rectypeid is RECORDOID */ Oid rectypeid; /* declared type of variable */ /* RECFIELDs for this record are chained together for easy access */ int firstfield; /* dno of first RECFIELD, or -1 if none */ + + /* Fields below here can change at runtime */ + /* We always store record variables as "expanded" records */ ExpandedRecordHeader *erh; } PLpgSQL_rec; @@ -1141,7 +1159,8 @@ extern PLpgSQL_variable *plpgsql_build_variable(const char *refname, int lineno, PLpgSQL_type *dtype, bool add2namespace); extern PLpgSQL_rec *plpgsql_build_record(const char *refname, int lineno, - Oid rectypeid, bool add2namespace); + PLpgSQL_type *dtype, Oid rectypeid, + bool add2namespace); extern PLpgSQL_recfield *plpgsql_build_recfield(PLpgSQL_rec *rec, const char *fldname); extern int plpgsql_recognize_err_condition(const char *condname, diff --git a/src/pl/plpgsql/src/sql/plpgsql_varprops.sql b/src/pl/plpgsql/src/sql/plpgsql_varprops.sql new file mode 100644 index 0000000000..c0e7f95f4e --- /dev/null +++ b/src/pl/plpgsql/src/sql/plpgsql_varprops.sql @@ -0,0 +1,249 @@ +-- +-- Tests for PL/pgSQL variable properties: CONSTANT, NOT NULL, initializers +-- + +create type var_record as (f1 int4, f2 int4); +create domain int_nn as int not null; +create domain var_record_nn as var_record not null; +create domain var_record_colnn as var_record check((value).f2 is not null); + +-- CONSTANT + +do $$ +declare x constant int := 42; +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x constant int; +begin + x := 42; -- fail +end$$; + +do $$ +declare x constant int; y int; +begin + for x, y in select 1, 2 loop -- fail + end loop; +end$$; + +do $$ +declare x constant int[]; +begin + x[1] := 42; -- fail +end$$; + +do $$ +declare x constant int[]; y int; +begin + for x[1], y in select 1, 2 loop -- fail (currently, unsupported syntax) + end loop; +end$$; + +do $$ +declare x constant var_record; +begin + x.f1 := 42; -- fail +end$$; + +do $$ +declare x constant var_record; y int; +begin + for x.f1, y in select 1, 2 loop -- fail + end loop; +end$$; + +-- initializer expressions + +do $$ +declare x int := sin(0); +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x int := 1/0; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x bigint[] := array[1,3,5]; +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x record := row(1,2,3); +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x var_record := row(1,2); +begin + raise notice 'x = %', x; +end$$; + +-- NOT NULL + +do $$ +declare x int not null; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x int not null := 42; +begin + raise notice 'x = %', x; + x := null; -- fail +end$$; + +do $$ +declare x int not null := null; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x record not null; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x record not null := row(42); +begin + raise notice 'x = %', x; + x := row(null); -- ok + raise notice 'x = %', x; + x := null; -- fail +end$$; + +do $$ +declare x record not null := null; -- fail +begin + raise 
notice 'x = %', x; +end$$; + +do $$ +declare x var_record not null; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x var_record not null := row(41,42); +begin + raise notice 'x = %', x; + x := row(null,null); -- ok + raise notice 'x = %', x; + x := null; -- fail +end$$; + +do $$ +declare x var_record not null := null; -- fail +begin + raise notice 'x = %', x; +end$$; + +-- Check that variables are reinitialized on block re-entry. + +\set VERBOSITY terse \\ -- needed for output stability +do $$ +begin + for i in 1..3 loop + declare + x int; + y int := i; + r record; + c var_record; + begin + if i = 1 then + x := 42; + r := row(i, i+1); + c := row(i, i+1); + end if; + raise notice 'x = %', x; + raise notice 'y = %', y; + raise notice 'r = %', r; + raise notice 'c = %', c; + end; + end loop; +end$$; +\set VERBOSITY default + +-- Check enforcement of domain constraints during initialization + +do $$ +declare x int_nn; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x int_nn := null; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x int_nn := 42; +begin + raise notice 'x = %', x; + x := null; -- fail +end$$; + +do $$ +declare x var_record_nn; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x var_record_nn := null; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x var_record_nn := row(1,2); +begin + raise notice 'x = %', x; + x := row(null,null); -- ok + x := null; -- fail +end$$; + +do $$ +declare x var_record_colnn; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x var_record_colnn := null; -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x var_record_colnn := row(1,null); -- fail +begin + raise notice 'x = %', x; +end$$; + +do $$ +declare x var_record_colnn := row(1,2); +begin + raise notice 'x = %', x; + x := null; -- fail +end$$; + +do $$ +declare x var_record_colnn := row(1,2); +begin + raise notice 'x = %', x; + x := row(null,null); -- fail +end$$; diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out index 0c1da08869..d294e53634 100644 --- a/src/test/regress/expected/plpgsql.out +++ b/src/test/regress/expected/plpgsql.out @@ -4586,42 +4586,6 @@ select scope_test(); (1 row) drop function scope_test(); --- Check that variables are reinitialized on block re-entry. -\set VERBOSITY terse \\ -- needed for output stability -do $$ -begin - for i in 1..3 loop - declare - x int; - y int := i; - r record; - c int8_tbl; - begin - if i = 1 then - x := 42; - r := row(i, i+1); - c := row(i, i+1); - end if; - raise notice 'x = %', x; - raise notice 'y = %', y; - raise notice 'r = %', r; - raise notice 'c = %', c; - end; - end loop; -end$$; -NOTICE: x = 42 -NOTICE: y = 1 -NOTICE: r = (1,2) -NOTICE: c = (1,2) -NOTICE: x = -NOTICE: y = 2 -NOTICE: r = -NOTICE: c = -NOTICE: x = -NOTICE: y = 3 -NOTICE: r = -NOTICE: c = -\set VERBOSITY default -- Check handling of conflicts between plpgsql vars and table columns. set plpgsql.variable_conflict = error; create function conflict_test() returns setof int8_tbl as $$ diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql index 6bdcfe7cc5..f17cf0b49b 100644 --- a/src/test/regress/sql/plpgsql.sql +++ b/src/test/regress/sql/plpgsql.sql @@ -3735,32 +3735,6 @@ select scope_test(); drop function scope_test(); --- Check that variables are reinitialized on block re-entry. 
- -\set VERBOSITY terse \\ -- needed for output stability -do $$ -begin - for i in 1..3 loop - declare - x int; - y int := i; - r record; - c int8_tbl; - begin - if i = 1 then - x := 42; - r := row(i, i+1); - c := row(i, i+1); - end if; - raise notice 'x = %', x; - raise notice 'y = %', y; - raise notice 'r = %', r; - raise notice 'c = %', c; - end; - end loop; -end$$; -\set VERBOSITY default - -- Check handling of conflicts between plpgsql vars and table columns. set plpgsql.variable_conflict = error; From e748e902def40ee52d1ef0b770fd24aa0835e304 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 14 Feb 2018 14:47:18 -0500 Subject: [PATCH 0992/1087] Fix broken logic for reporting PL/Python function names in errcontext. plpython_error_callback() reported the name of the function associated with the topmost PL/Python execution context. This was not merely wrong if there were nested PL/Python contexts, but it risked a core dump if the topmost one is an inline code block rather than a named function. That will have proname = NULL, and so we were passing a NULL pointer to snprintf("%s"). It seems that none of the PL/Python-testing machines in the buildfarm will dump core for that, but some platforms do, as reported by Marina Polyakova. Investigation finds that there actually is an existing regression test that used to prove that the behavior was wrong, though apparently no one had noticed that it was printing the wrong function name. It stopped showing the problem in 9.6 when we adjusted psql to not print CONTEXT by default for NOTICE messages. The problem is masked (if your platform avoids the core dump) in error cases, because PL/Python will throw away the originally generated error info in favor of a new traceback produced at the outer level. Repair by using ErrorContextCallback.arg to pass the correct context to the error callback. Add a regression test illustrating correct behavior. Back-patch to all supported branches, since they're all broken this way. 
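In outline, the repair is the usual pattern of threading per-invocation state through ErrorContextCallback.arg instead of letting the callback consult global state; a condensed sketch of the change below:

    /* in the call handler, inside the PG_TRY block */
    plerrcontext.callback = plpython_error_callback;
    plerrcontext.arg = exec_ctx;    /* the context this invocation owns */
    plerrcontext.previous = error_context_stack;
    error_context_stack = &plerrcontext;

    /* the callback reports the context it was armed with */
    static void
    plpython_error_callback(void *arg)
    {
        PLyExecutionContext *exec_ctx = (PLyExecutionContext *) arg;

        if (exec_ctx->curr_proc)
            errcontext("PL/Python function \"%s\"",
                       PLy_procedure_name(exec_ctx->curr_proc));
    }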
Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru --- src/pl/plpython/expected/plpython_error.out | 23 ++++++++ src/pl/plpython/expected/plpython_error_0.out | 23 ++++++++ src/pl/plpython/expected/plpython_error_5.out | 23 ++++++++ src/pl/plpython/plpy_main.c | 52 +++++++++---------- src/pl/plpython/plpy_procedure.c | 4 +- src/pl/plpython/sql/plpython_error.sql | 16 ++++++ 6 files changed, 112 insertions(+), 29 deletions(-) diff --git a/src/pl/plpython/expected/plpython_error.out b/src/pl/plpython/expected/plpython_error.out index 1f52af7fe0..4d615b41cc 100644 --- a/src/pl/plpython/expected/plpython_error.out +++ b/src/pl/plpython/expected/plpython_error.out @@ -422,3 +422,26 @@ EXCEPTION WHEN SQLSTATE 'SILLY' THEN -- NOOP END $$ LANGUAGE plpgsql; +/* test the context stack trace for nested execution levels + */ +CREATE FUNCTION notice_innerfunc() RETURNS int AS $$ +plpy.execute("DO LANGUAGE plpythonu $x$ plpy.notice('inside DO') $x$") +return 1 +$$ LANGUAGE plpythonu; +CREATE FUNCTION notice_outerfunc() RETURNS int AS $$ +plpy.execute("SELECT notice_innerfunc()") +return 1 +$$ LANGUAGE plpythonu; +\set SHOW_CONTEXT always +SELECT notice_outerfunc(); +NOTICE: inside DO +CONTEXT: PL/Python anonymous code block +SQL statement "DO LANGUAGE plpythonu $x$ plpy.notice('inside DO') $x$" +PL/Python function "notice_innerfunc" +SQL statement "SELECT notice_innerfunc()" +PL/Python function "notice_outerfunc" + notice_outerfunc +------------------ + 1 +(1 row) + diff --git a/src/pl/plpython/expected/plpython_error_0.out b/src/pl/plpython/expected/plpython_error_0.out index 5323906122..290902b182 100644 --- a/src/pl/plpython/expected/plpython_error_0.out +++ b/src/pl/plpython/expected/plpython_error_0.out @@ -422,3 +422,26 @@ EXCEPTION WHEN SQLSTATE 'SILLY' THEN -- NOOP END $$ LANGUAGE plpgsql; +/* test the context stack trace for nested execution levels + */ +CREATE FUNCTION notice_innerfunc() RETURNS int AS $$ +plpy.execute("DO LANGUAGE plpythonu $x$ plpy.notice('inside DO') $x$") +return 1 +$$ LANGUAGE plpythonu; +CREATE FUNCTION notice_outerfunc() RETURNS int AS $$ +plpy.execute("SELECT notice_innerfunc()") +return 1 +$$ LANGUAGE plpythonu; +\set SHOW_CONTEXT always +SELECT notice_outerfunc(); +NOTICE: inside DO +CONTEXT: PL/Python anonymous code block +SQL statement "DO LANGUAGE plpythonu $x$ plpy.notice('inside DO') $x$" +PL/Python function "notice_innerfunc" +SQL statement "SELECT notice_innerfunc()" +PL/Python function "notice_outerfunc" + notice_outerfunc +------------------ + 1 +(1 row) + diff --git a/src/pl/plpython/expected/plpython_error_5.out b/src/pl/plpython/expected/plpython_error_5.out index 5ff46ca50a..bc66ab5534 100644 --- a/src/pl/plpython/expected/plpython_error_5.out +++ b/src/pl/plpython/expected/plpython_error_5.out @@ -422,3 +422,26 @@ EXCEPTION WHEN SQLSTATE 'SILLY' THEN -- NOOP END $$ LANGUAGE plpgsql; +/* test the context stack trace for nested execution levels + */ +CREATE FUNCTION notice_innerfunc() RETURNS int AS $$ +plpy.execute("DO LANGUAGE plpythonu $x$ plpy.notice('inside DO') $x$") +return 1 +$$ LANGUAGE plpythonu; +CREATE FUNCTION notice_outerfunc() RETURNS int AS $$ +plpy.execute("SELECT notice_innerfunc()") +return 1 +$$ LANGUAGE plpythonu; +\set SHOW_CONTEXT always +SELECT notice_outerfunc(); +NOTICE: inside DO +CONTEXT: PL/Python anonymous code block +SQL statement "DO LANGUAGE plpythonu $x$ plpy.notice('inside DO') $x$" +PL/Python function "notice_innerfunc" +SQL statement "SELECT notice_innerfunc()" +PL/Python function 
"notice_outerfunc" + notice_outerfunc +------------------ + 1 +(1 row) + diff --git a/src/pl/plpython/plpy_main.c b/src/pl/plpython/plpy_main.c index 5a197ce27a..6a66eba176 100644 --- a/src/pl/plpython/plpy_main.c +++ b/src/pl/plpython/plpy_main.c @@ -237,23 +237,26 @@ plpython_call_handler(PG_FUNCTION_ARGS) /* * Push execution context onto stack. It is important that this get * popped again, so avoid putting anything that could throw error between - * here and the PG_TRY. (plpython_error_callback expects the stack entry - * to be there, so we have to make the context first.) + * here and the PG_TRY. */ exec_ctx = PLy_push_execution_context(!nonatomic); - /* - * Setup error traceback support for ereport() - */ - plerrcontext.callback = plpython_error_callback; - plerrcontext.previous = error_context_stack; - error_context_stack = &plerrcontext; - PG_TRY(); { Oid funcoid = fcinfo->flinfo->fn_oid; PLyProcedure *proc; + /* + * Setup error traceback support for ereport(). Note that the PG_TRY + * structure pops this for us again at exit, so we needn't do that + * explicitly, nor do we risk the callback getting called after we've + * destroyed the exec_ctx. + */ + plerrcontext.callback = plpython_error_callback; + plerrcontext.arg = exec_ctx; + plerrcontext.previous = error_context_stack; + error_context_stack = &plerrcontext; + if (CALLED_AS_TRIGGER(fcinfo)) { Relation tgrel = ((TriggerData *) fcinfo->context)->tg_relation; @@ -279,9 +282,7 @@ plpython_call_handler(PG_FUNCTION_ARGS) } PG_END_TRY(); - /* Pop the error context stack */ - error_context_stack = plerrcontext.previous; - /* ... and then the execution context */ + /* Destroy the execution context */ PLy_pop_execution_context(); return retval; @@ -333,21 +334,22 @@ plpython_inline_handler(PG_FUNCTION_ARGS) /* * Push execution context onto stack. It is important that this get * popped again, so avoid putting anything that could throw error between - * here and the PG_TRY. (plpython_inline_error_callback doesn't currently - * need the stack entry, but for consistency with plpython_call_handler we - * do it in this order.) + * here and the PG_TRY. */ exec_ctx = PLy_push_execution_context(codeblock->atomic); - /* - * Setup error traceback support for ereport() - */ - plerrcontext.callback = plpython_inline_error_callback; - plerrcontext.previous = error_context_stack; - error_context_stack = &plerrcontext; - PG_TRY(); { + /* + * Setup error traceback support for ereport(). + * plpython_inline_error_callback doesn't currently need exec_ctx, but + * for consistency with plpython_call_handler we do it the same way. + */ + plerrcontext.callback = plpython_inline_error_callback; + plerrcontext.arg = exec_ctx; + plerrcontext.previous = error_context_stack; + error_context_stack = &plerrcontext; + PLy_procedure_compile(&proc, codeblock->source_text); exec_ctx->curr_proc = &proc; PLy_exec_function(&fake_fcinfo, &proc); @@ -361,9 +363,7 @@ plpython_inline_handler(PG_FUNCTION_ARGS) } PG_END_TRY(); - /* Pop the error context stack */ - error_context_stack = plerrcontext.previous; - /* ... 
and then the execution context */ + /* Destroy the execution context */ PLy_pop_execution_context(); /* Now clean up the transient procedure we made */ @@ -391,7 +391,7 @@ PLy_procedure_is_trigger(Form_pg_proc procStruct) static void plpython_error_callback(void *arg) { - PLyExecutionContext *exec_ctx = PLy_current_execution_context(); + PLyExecutionContext *exec_ctx = (PLyExecutionContext *) arg; if (exec_ctx->curr_proc) { diff --git a/src/pl/plpython/plpy_procedure.c b/src/pl/plpython/plpy_procedure.c index 990a33cc6d..4e06413cd4 100644 --- a/src/pl/plpython/plpy_procedure.c +++ b/src/pl/plpython/plpy_procedure.c @@ -47,9 +47,7 @@ init_procedure_caches(void) } /* - * Get the name of the last procedure called by the backend (the - * innermost, if a plpython procedure call calls the backend and the - * backend calls another plpython procedure). + * PLy_procedure_name: get the name of the specified procedure. * * NB: this returns the SQL name, not the internal Python procedure name */ diff --git a/src/pl/plpython/sql/plpython_error.sql b/src/pl/plpython/sql/plpython_error.sql index d0df7e607d..d712eb1078 100644 --- a/src/pl/plpython/sql/plpython_error.sql +++ b/src/pl/plpython/sql/plpython_error.sql @@ -328,3 +328,19 @@ EXCEPTION WHEN SQLSTATE 'SILLY' THEN -- NOOP END $$ LANGUAGE plpgsql; + +/* test the context stack trace for nested execution levels + */ +CREATE FUNCTION notice_innerfunc() RETURNS int AS $$ +plpy.execute("DO LANGUAGE plpythonu $x$ plpy.notice('inside DO') $x$") +return 1 +$$ LANGUAGE plpythonu; + +CREATE FUNCTION notice_outerfunc() RETURNS int AS $$ +plpy.execute("SELECT notice_innerfunc()") +return 1 +$$ LANGUAGE plpythonu; + +\set SHOW_CONTEXT always + +SELECT notice_outerfunc(); From 0c62356cc8777961221a643fa77f62e1c7361085 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 14 Feb 2018 15:06:01 -0500 Subject: [PATCH 0993/1087] Add an assertion that we don't pass NULL to snprintf("%s"). Per commit e748e902d, we appear to have little or no coverage in the buildfarm of machines that will dump core when asked to printf a null string pointer. Let's try to improve that situation by adding an assertion that will make src/port/snprintf.c behave that way. Since it's just an assertion, it won't break anything in production builds, but it will help developers find this type of oversight. Note that while our buildfarm coverage of machines that use that snprintf implementation is pretty thin on the Unix side (apparently amounting only to gaur/pademelon), all of the MSVC critters use it. Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru --- src/port/snprintf.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/port/snprintf.c b/src/port/snprintf.c index 43c17e702e..8358425980 100644 --- a/src/port/snprintf.c +++ b/src/port/snprintf.c @@ -745,6 +745,8 @@ dopr(PrintfTarget *target, const char *format, va_list args) strvalue = argvalues[fmtpos].cptr; else strvalue = va_arg(args, char *); + /* Whine if someone tries to print a NULL string */ + Assert(strvalue != NULL); fmtstr(strvalue, leftjust, fieldwidth, precision, pointflag, target); break; From 9a725f7b5cb7e8c8894ef121b49eff9c265245c8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 14 Feb 2018 16:06:49 -0500 Subject: [PATCH 0994/1087] Silence assorted "variable may be used uninitialized" warnings. All of these are false positives, but in each case a fair amount of analysis is needed to see that, and it's not too surprising that not all compilers are smart enough. 
(In particular, in the logtape.c case, a compiler lacking the knowledge provided by the Assert would almost surely complain, so that this warning will be seen in any non-assert build.) Some of these are of long standing while others are pretty recent, but it only seems worth fixing them in HEAD. Jaime Casanova, tweaked a bit by me Discussion: https://postgr.es/m/CAJGNTeMcYAMJdPAom52dppLMtF-UnEZi0dooj==75OEv1EoBZA@mail.gmail.com --- src/backend/access/transam/xloginsert.c | 2 +- src/backend/catalog/objectaddress.c | 2 ++ src/backend/utils/sort/logtape.c | 2 +- src/bin/pgbench/pgbench.c | 7 ++++--- 4 files changed, 8 insertions(+), 5 deletions(-) diff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c index de869e00ff..5bea073a2b 100644 --- a/src/backend/access/transam/xloginsert.c +++ b/src/backend/access/transam/xloginsert.c @@ -584,7 +584,7 @@ XLogRecordAssemble(RmgrId rmid, uint8 info, if (include_image) { Page page = regbuf->page; - uint16 compressed_len; + uint16 compressed_len = 0; /* * The page needs to be backed up, so calculate its hole length diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c index b4c2467710..80f561df1c 100644 --- a/src/backend/catalog/objectaddress.c +++ b/src/backend/catalog/objectaddress.c @@ -1593,6 +1593,8 @@ get_object_address_opf_member(ObjectType objtype, famaddr = get_object_address_opcf(OBJECT_OPFAMILY, copy, false); /* find out left/right type names and OIDs */ + typenames[0] = typenames[1] = NULL; + typeoids[0] = typeoids[1] = InvalidOid; i = 0; foreach(cell, lsecond(object)) { diff --git a/src/backend/utils/sort/logtape.c b/src/backend/utils/sort/logtape.c index 66bfcced8d..d6794bf3de 100644 --- a/src/backend/utils/sort/logtape.c +++ b/src/backend/utils/sort/logtape.c @@ -411,7 +411,7 @@ ltsConcatWorkerTapes(LogicalTapeSet *lts, TapeShare *shared, SharedFileSet *fileset) { LogicalTape *lt = NULL; - long tapeblocks; + long tapeblocks = 0L; long nphysicalblocks = 0L; int i; diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index 31ea6ca06e..d4209421f5 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -1495,6 +1495,7 @@ coerceToBool(PgBenchValue *pval, bool *bval) else /* NULL, INT or DOUBLE */ { fprintf(stderr, "cannot coerce %s to boolean\n", valueTypeName(pval)); + *bval = false; /* suppress uninitialized-variable warnings */ return false; } } @@ -1725,9 +1726,9 @@ evalLazyFunc(TState *thread, CState *st, * which do not require lazy evaluation. */ static bool -evalStandardFunc( - TState *thread, CState *st, - PgBenchFunction func, PgBenchExprLink *args, PgBenchValue *retval) +evalStandardFunc(TState *thread, CState *st, + PgBenchFunction func, PgBenchExprLink *args, + PgBenchValue *retval) { /* evaluate all function arguments */ int nargs = 0; From 6d7dc5350042697bbb141a7362649db7fa67bd55 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Wed, 14 Feb 2018 14:17:28 -0800 Subject: [PATCH 0995/1087] Return implementation defined value if pg_$op_s$bit_overflow overflows. Some older compilers otherwise sometimes complain about undefined values, even though the return value should not be used in the overflow case. We assume that any decent compiler will optimize away the unnecessary assignment in performance critical cases. We do not want to restrain the returned value to a specific value, e.g. 0 or the wrapped-around value, because some fast ways to implement overflow detecting math do not easily allow for that (e.g. msvc intrinsics). 
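Concretely, the convention every hunk below follows can be sketched in
isolation like this (an editorial illustration, not code from the patch;
the helper name is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical stand-in for the pg_add_s32_overflow() pattern: return
     * true on overflow, false otherwise.  On overflow, *result receives an
     * arbitrary, implementation-defined value, purely so that the output
     * parameter is written on every path and compilers cannot complain
     * about callers reading an uninitialized variable.
     */
    static inline bool
    add_s32_overflow(int32_t a, int32_t b, int32_t *result)
    {
        int64_t res = (int64_t) a + (int64_t) b;

        if (res > INT32_MAX || res < INT32_MIN)
        {
            *result = 0x5EED;   /* meaningless; silences warnings */
            return true;
        }
        *result = (int32_t) res;
        return false;
    }

Callers must still test the return value; the 0x5EED sentinel carries no
meaning and cannot be relied on.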
Since the function documentation already describes the returned value as
implementation-defined when intrinsics are used, no documentation changes
are needed.

Per complaint from Tom Lane and his buildfarm member prairiedog.

Author: Andres Freund
Discussion: https://postgr.es/m/18169.1513958454@sss.pgh.pa.us
---
 src/include/common/int.h | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/src/include/common/int.h b/src/include/common/int.h
index feb84102b4..82e38d4b7b 100644
--- a/src/include/common/int.h
+++ b/src/include/common/int.h
@@ -34,7 +34,10 @@ pg_add_s16_overflow(int16 a, int16 b, int16 *result)
     int32       res = (int32) a + (int32) b;
 
     if (res > PG_INT16_MAX || res < PG_INT16_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int16) res;
     return false;
 #endif
@@ -54,7 +57,10 @@ pg_sub_s16_overflow(int16 a, int16 b, int16 *result)
     int32       res = (int32) a - (int32) b;
 
     if (res > PG_INT16_MAX || res < PG_INT16_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int16) res;
     return false;
 #endif
@@ -74,7 +80,10 @@ pg_mul_s16_overflow(int16 a, int16 b, int16 *result)
     int32       res = (int32) a * (int32) b;
 
     if (res > PG_INT16_MAX || res < PG_INT16_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int16) res;
     return false;
 #endif
@@ -94,7 +103,10 @@ pg_add_s32_overflow(int32 a, int32 b, int32 *result)
     int64       res = (int64) a + (int64) b;
 
     if (res > PG_INT32_MAX || res < PG_INT32_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int32) res;
     return false;
 #endif
@@ -114,7 +126,10 @@ pg_sub_s32_overflow(int32 a, int32 b, int32 *result)
     int64       res = (int64) a - (int64) b;
 
     if (res > PG_INT32_MAX || res < PG_INT32_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int32) res;
     return false;
 #endif
@@ -134,7 +149,10 @@ pg_mul_s32_overflow(int32 a, int32 b, int32 *result)
     int64       res = (int64) a * (int64) b;
 
     if (res > PG_INT32_MAX || res < PG_INT32_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int32) res;
     return false;
 #endif
@@ -154,13 +172,19 @@ pg_add_s64_overflow(int64 a, int64 b, int64 *result)
     int128      res = (int128) a + (int128) b;
 
     if (res > PG_INT64_MAX || res < PG_INT64_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int64) res;
     return false;
 #else
     if ((a > 0 && b > 0 && a > PG_INT64_MAX - b) ||
         (a < 0 && b < 0 && a < PG_INT64_MIN - b))
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = a + b;
     return false;
 #endif
@@ -180,13 +204,19 @@ pg_sub_s64_overflow(int64 a, int64 b, int64 *result)
     int128      res = (int128) a - (int128) b;
 
     if (res > PG_INT64_MAX || res < PG_INT64_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int64) res;
     return false;
 #else
     if ((a < 0 && b > 0 && a < PG_INT64_MIN + b) ||
         (a > 0 && b < 0 && a > PG_INT64_MAX + b))
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = a - b;
     return false;
 #endif
@@ -206,7 +236,10 @@ pg_mul_s64_overflow(int64 a, int64 b, int64 *result)
     int128      res = (int128) a * (int128) b;
 
     if (res > PG_INT64_MAX || res < PG_INT64_MIN)
+    {
+        *result = 0x5EED;       /* to avoid spurious warnings */
         return true;
+    }
     *result = (int64) res;
     return false;
 #else
@@ -229,6 +262,7 @@ pg_mul_s64_overflow(int64 a, int64 b, int64 *result)
         (a < 0 && b > 0 && a < PG_INT64_MIN / b) ||
         (a <
0 && b < 0 && a < PG_INT64_MAX / b))) { + *result = 0x5EED; /* to avoid spurious warnings */ return true; } *result = a * b; From feb1cc5593a5188796c2f52241f407500209fff2 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 14 Feb 2018 18:17:22 -0500 Subject: [PATCH 0996/1087] Stabilize new plpgsql_record regression tests. The buildfarm's CLOBBER_CACHE_ALWAYS animals aren't happy with some of the test cases added in commit 4b93f5799. There are two different problems: * In two places, a different CONTEXT stack is shown because the error is detected in a different place, due to recompiling an expression from scratch rather than re-using a previously cached plan for it. I fixed these via the expedient of hiding the CONTEXT stack altogether. * In one place, a test expected to fail (because a cached plan hadn't been updated) actually succeeds (because the forced recompile makes it good). I couldn't think of a simple workaround for this, so I've just commented out that test step altogether. I have hopes of improving things enough that both of these kluges can be reverted eventually. The first one is the same kind of problem previously discussed at https://postgr.es/m/31545.1512924904@sss.pgh.pa.us but there was insufficient agreement about how to fix it, so we just hacked around the output instability (commit 9edc97b71). The second issue should be fixed by allowing the plan to be rebuilt when a type conflict is detected. But for today, let's just make the buildfarm green again. --- src/pl/plpgsql/src/expected/plpgsql_record.out | 14 +++++++++----- src/pl/plpgsql/src/sql/plpgsql_record.sql | 10 +++++++++- 2 files changed, 18 insertions(+), 6 deletions(-) diff --git a/src/pl/plpgsql/src/expected/plpgsql_record.out b/src/pl/plpgsql/src/expected/plpgsql_record.out index 3f7cab2088..29e42fda6c 100644 --- a/src/pl/plpgsql/src/expected/plpgsql_record.out +++ b/src/pl/plpgsql/src/expected/plpgsql_record.out @@ -343,9 +343,12 @@ select getf1(row(1,2)); 1 (1 row) +-- a CLOBBER_CACHE_ALWAYS build will report this error with a different +-- context stack than other builds, so suppress context output +\set SHOW_CONTEXT never select getf1(row(1,2)::two_int8s); ERROR: record "x" has no field "f1" -CONTEXT: PL/pgSQL function getf1(record) line 1 at RETURN +\set SHOW_CONTEXT errors select getf1(row(1,2)); getf1 ------- @@ -421,9 +424,7 @@ select sillyaddone(42); alter table mutable drop column f1; alter table mutable add column f1 float8; -- currently, this fails due to cached plan for "r.f1 + 1" expression -select sillyaddone(42); -ERROR: type of parameter 4 (double precision) does not match that when preparing the plan (integer) -CONTEXT: PL/pgSQL function sillyaddone(integer) line 1 at RETURN +-- select sillyaddone(42); \c - -- but it's OK after a reconnect select sillyaddone(42); @@ -450,9 +451,12 @@ select getf3(null::mutable); -- now it works (1 row) alter table mutable drop column f3; +-- a CLOBBER_CACHE_ALWAYS build will report this error with a different +-- context stack than other builds, so suppress context output +\set SHOW_CONTEXT never select getf3(null::mutable); -- fails again ERROR: record "x" has no field "f3" -CONTEXT: PL/pgSQL function getf3(mutable) line 1 at RETURN +\set SHOW_CONTEXT errors -- check access to system columns in a record variable create function sillytrig() returns trigger language plpgsql as $$begin diff --git a/src/pl/plpgsql/src/sql/plpgsql_record.sql b/src/pl/plpgsql/src/sql/plpgsql_record.sql index 069d2643cf..781ccb0ccb 100644 --- 
a/src/pl/plpgsql/src/sql/plpgsql_record.sql +++ b/src/pl/plpgsql/src/sql/plpgsql_record.sql @@ -215,7 +215,11 @@ create function getf1(x record) returns int language plpgsql as $$ begin return x.f1; end $$; select getf1(1); select getf1(row(1,2)); +-- a CLOBBER_CACHE_ALWAYS build will report this error with a different +-- context stack than other builds, so suppress context output +\set SHOW_CONTEXT never select getf1(row(1,2)::two_int8s); +\set SHOW_CONTEXT errors select getf1(row(1,2)); -- check behavior when assignment to FOR-loop variable requires coercion @@ -270,7 +274,7 @@ alter table mutable drop column f1; alter table mutable add column f1 float8; -- currently, this fails due to cached plan for "r.f1 + 1" expression -select sillyaddone(42); +-- select sillyaddone(42); \c - -- but it's OK after a reconnect select sillyaddone(42); @@ -284,7 +288,11 @@ select getf3(null::mutable); -- doesn't work yet alter table mutable add column f3 int; select getf3(null::mutable); -- now it works alter table mutable drop column f3; +-- a CLOBBER_CACHE_ALWAYS build will report this error with a different +-- context stack than other builds, so suppress context output +\set SHOW_CONTEXT never select getf3(null::mutable); -- fails again +\set SHOW_CONTEXT errors -- check access to system columns in a record variable From cbadba8dd632fc0d4162f7d686fec631bce7dfd0 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 14 Feb 2018 18:42:14 -0500 Subject: [PATCH 0997/1087] Revert "Stabilize output of new regression test case". This effectively reverts commit 9edc97b71 (although the test is now in a different place and has different contents). We don't need that hack anymore, because since commit 4b93f5799, this test case no longer throws an error and so there's no displayed CONTEXT that could vary depending on CLOBBER_CACHE_ALWAYS. The underlying unstable-output problem isn't really gone, of course, but it no longer manifests here. --- src/pl/plpgsql/src/expected/plpgsql_varprops.out | 2 -- src/pl/plpgsql/src/sql/plpgsql_varprops.sql | 2 -- 2 files changed, 4 deletions(-) diff --git a/src/pl/plpgsql/src/expected/plpgsql_varprops.out b/src/pl/plpgsql/src/expected/plpgsql_varprops.out index 109056c054..18f03d75b4 100644 --- a/src/pl/plpgsql/src/expected/plpgsql_varprops.out +++ b/src/pl/plpgsql/src/expected/plpgsql_varprops.out @@ -176,7 +176,6 @@ end$$; ERROR: null value cannot be assigned to variable "x" declared NOT NULL CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization -- Check that variables are reinitialized on block re-entry. -\set VERBOSITY terse \\ -- needed for output stability do $$ begin for i in 1..3 loop @@ -210,7 +209,6 @@ NOTICE: x = NOTICE: y = 3 NOTICE: r = NOTICE: c = -\set VERBOSITY default -- Check enforcement of domain constraints during initialization do $$ declare x int_nn; -- fail diff --git a/src/pl/plpgsql/src/sql/plpgsql_varprops.sql b/src/pl/plpgsql/src/sql/plpgsql_varprops.sql index c0e7f95f4e..778119d223 100644 --- a/src/pl/plpgsql/src/sql/plpgsql_varprops.sql +++ b/src/pl/plpgsql/src/sql/plpgsql_varprops.sql @@ -151,7 +151,6 @@ end$$; -- Check that variables are reinitialized on block re-entry. 
-\set VERBOSITY terse \\ -- needed for output stability do $$ begin for i in 1..3 loop @@ -173,7 +172,6 @@ begin end; end loop; end$$; -\set VERBOSITY default -- Check enforcement of domain constraints during initialization From 03c5a00ea3867f5736da6cedce73b1eea88a98af Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 14 Feb 2018 19:43:33 -0500 Subject: [PATCH 0998/1087] Move the extern declaration for ExceptionalCondition into c.h. This is the logical conclusion of our decision to support Assert() in both frontend and backend code: it should be possible to use that after including just c.h. But as things were arranged before, if you wanted to use Assert() in code that might be compiled for either environment, you had to include postgres.h for the backend case. Let's simplify that. Per buildfarm, some of whose members started throwing warnings after commit 0c62356cc added an Assert in src/port/snprintf.c. It's possible that some other src/port files that use the stanza #ifndef FRONTEND #include "postgres.h" #else #include "postgres_fe.h" #endif could now be simplified to just say '#include "c.h"'. I have not tested for that, though, and it'd be unlikely to apply for more than a small number of them. --- src/include/c.h | 19 ++++++++++++++++--- src/include/postgres.h | 16 ---------------- 2 files changed, 16 insertions(+), 19 deletions(-) diff --git a/src/include/c.h b/src/include/c.h index 9b7fe87f32..c38ef8aed3 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -36,9 +36,10 @@ * 8) random stuff * 9) system-specific hacks * - * NOTE: since this file is included by both frontend and backend modules, it's - * almost certainly wrong to put an "extern" declaration here. typedefs and - * macros are the kind of thing that might go here. + * NOTE: since this file is included by both frontend and backend modules, + * it's usually wrong to put an "extern" declaration here, unless it's + * ifdef'd so that it's seen in only one case or the other. + * typedefs and macros are the kind of thing that might go here. * *---------------------------------------------------------------- */ @@ -747,6 +748,18 @@ typedef NameData *Name; #endif /* USE_ASSERT_CHECKING && !FRONTEND */ +/* + * ExceptionalCondition is compiled into the backend whether or not + * USE_ASSERT_CHECKING is defined, so as to support use of extensions + * that are built with that #define with a backend that isn't. Hence, + * we should declare it as long as !FRONTEND. + */ +#ifndef FRONTEND +extern void ExceptionalCondition(const char *conditionName, + const char *errorType, + const char *fileName, int lineNumber) pg_attribute_noreturn(); +#endif + /* * Macros to support compile-time assertion checks. * diff --git a/src/include/postgres.h b/src/include/postgres.h index 3dc62801aa..bbcb50e41f 100644 --- a/src/include/postgres.h +++ b/src/include/postgres.h @@ -25,7 +25,6 @@ * ------- ------------------------------------------------ * 1) variable-length datatypes (TOAST support) * 2) Datum type + support macros - * 3) exception handling backend support * * NOTES * @@ -766,19 +765,4 @@ extern Datum Float8GetDatum(float8 X); #define Float4GetDatumFast(X) PointerGetDatum(&(X)) #endif - -/* ---------------------------------------------------------------- - * Section 3: exception handling backend support - * ---------------------------------------------------------------- - */ - -/* - * Backend only infrastructure for the assertion-related macros in c.h. - * - * ExceptionalCondition must be present even when assertions are not enabled. 
- */ -extern void ExceptionalCondition(const char *conditionName, - const char *errorType, - const char *fileName, int lineNumber) pg_attribute_noreturn(); - #endif /* POSTGRES_H */ From 51940f97607b7cb4d03bdd99e43abb1a1c6a0c47 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 15 Feb 2018 13:41:30 -0500 Subject: [PATCH 0999/1087] Cast to void in StaticAssertExpr, not its callers. Seems a bit silly that many (in fact all, as of today) uses of StaticAssertExpr would need to cast it to void to avoid warnings from pickier compilers. Let's just do the cast right in the macro, instead. In passing, change StaticAssertExpr to StaticAssertStmt in one place where that seems more apropos. Discussion: https://postgr.es/m/16161.1518715186@sss.pgh.pa.us --- src/backend/storage/lmgr/lwlock.c | 4 ++-- src/include/c.h | 8 ++++---- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c index 71caac1a1f..233606b414 100644 --- a/src/backend/storage/lmgr/lwlock.c +++ b/src/backend/storage/lmgr/lwlock.c @@ -380,10 +380,10 @@ LWLockShmemSize(void) void CreateLWLocks(void) { - StaticAssertExpr(LW_VAL_EXCLUSIVE > (uint32) MAX_BACKENDS, + StaticAssertStmt(LW_VAL_EXCLUSIVE > (uint32) MAX_BACKENDS, "MAX_BACKENDS too big for lwlock.c"); - StaticAssertExpr(sizeof(LWLock) <= LWLOCK_MINIMAL_SIZE && + StaticAssertStmt(sizeof(LWLock) <= LWLOCK_MINIMAL_SIZE && sizeof(LWLock) <= LWLOCK_PADDED_SIZE, "Miscalculated LWLock padding"); diff --git a/src/include/c.h b/src/include/c.h index c38ef8aed3..6b55181e0a 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -779,7 +779,7 @@ extern void ExceptionalCondition(const char *conditionName, #define StaticAssertStmt(condition, errmessage) \ do { _Static_assert(condition, errmessage); } while(0) #define StaticAssertExpr(condition, errmessage) \ - ({ StaticAssertStmt(condition, errmessage); true; }) + ((void) ({ StaticAssertStmt(condition, errmessage); true; })) #else /* !HAVE__STATIC_ASSERT */ #define StaticAssertStmt(condition, errmessage) \ ((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; })) @@ -796,7 +796,7 @@ extern void ExceptionalCondition(const char *conditionName, #define StaticAssertStmt(condition, errmessage) \ do { struct static_assert_struct { int static_assert_failure : (condition) ? 
1 : -1; }; } while(0) #define StaticAssertExpr(condition, errmessage) \ - ({ StaticAssertStmt(condition, errmessage); }) + ((void) ({ StaticAssertStmt(condition, errmessage); })) #endif #endif /* C++ */ @@ -817,14 +817,14 @@ extern void ExceptionalCondition(const char *conditionName, StaticAssertStmt(__builtin_types_compatible_p(__typeof__(varname), typename), \ CppAsString(varname) " does not have type " CppAsString(typename)) #define AssertVariableIsOfTypeMacro(varname, typename) \ - ((void) StaticAssertExpr(__builtin_types_compatible_p(__typeof__(varname), typename), \ + (StaticAssertExpr(__builtin_types_compatible_p(__typeof__(varname), typename), \ CppAsString(varname) " does not have type " CppAsString(typename))) #else /* !HAVE__BUILTIN_TYPES_COMPATIBLE_P */ #define AssertVariableIsOfType(varname, typename) \ StaticAssertStmt(sizeof(varname) == sizeof(typename), \ CppAsString(varname) " does not have type " CppAsString(typename)) #define AssertVariableIsOfTypeMacro(varname, typename) \ - ((void) StaticAssertExpr(sizeof(varname) == sizeof(typename), \ + (StaticAssertExpr(sizeof(varname) == sizeof(typename), \ CppAsString(varname) " does not have type " CppAsString(typename))) #endif /* HAVE__BUILTIN_TYPES_COMPATIBLE_P */ From 439c7bc1a070d746fab69d8696fca78673e64ba9 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 15 Feb 2018 13:56:38 -0500 Subject: [PATCH 1000/1087] Doc: fix minor bug in CREATE TABLE example. One example in create_table.sgml claimed to be showing table constraint syntax, but it was really column constraint syntax due to the omission of a comma. This is both wrong and confusing, so fix it in all supported branches. Per report from neil@postgrescompare.com. Discussion: https://postgr.es/m/151871659877.1393.2431103178451978795@wrigleys.postgresql.org --- doc/src/sgml/ref/create_table.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index d2df40d543..8bf9dc992b 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -1531,7 +1531,7 @@ CREATE TABLE distributors ( CREATE TABLE distributors ( did integer, - name varchar(40) + name varchar(40), CONSTRAINT con1 CHECK (did > 100 AND name <> '') ); From 51db0d18fbf58b0c2e5ebc2b5b2c48daf45c8d93 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Thu, 15 Feb 2018 16:25:19 -0500 Subject: [PATCH 1001/1087] Fix plpgsql to enforce domain checks when returning a NULL domain value. If a plpgsql function is declared to return a domain type, and the domain's constraints forbid a null value, it was nonetheless possible to return NULL, because we didn't bother to check the constraints for a null result. I'd noticed this while fooling with domains-over-composite, but had not gotten around to fixing it immediately. Add a regression test script exercising this and various other domain cases, largely borrowed from the plpython_types test. Although this is clearly a bug fix, I'm not sure whether anyone would thank us for changing the behavior in stable branches, so I'm inclined not to back-patch. 
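The crux of the fix is that a NULL result headed for a domain return type
must still pass the domain's NOT NULL and CHECK constraints instead of
being handed back untouched.  The patch routes the result through
exec_cast_value(); reduced to its essentials, the idea is roughly the
following sketch (the wrapper function is hypothetical, but domain_check()
is the backend's real constraint-checking helper, which ereports on a
violation):

    #include "postgres.h"
    #include "utils/builtins.h"

    /*
     * Hypothetical condensation of the fix: before a NULL is returned as a
     * value of a domain type, run it through the domain's constraints.
     * domain_check() raises an error if the null value violates a NOT NULL
     * or CHECK constraint; otherwise the NULL passes through unchanged.
     * (The constraint cache normally kept via *extra is discarded here.)
     */
    static Datum
    check_null_domain_result(Oid rettype, bool retisdomain, bool *isnull)
    {
        if (*isnull && retisdomain)
        {
            void   *extra = NULL;

            domain_check((Datum) 0, true, rettype, &extra,
                         CurrentMemoryContext);
        }
        return (Datum) 0;       /* the NULL result itself */
    }

Going through the regular cast path, as the patch does, reaches the same
checks without introducing a separate code path.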
--- src/pl/plpgsql/src/Makefile | 2 +- .../plpgsql/src/expected/plpgsql_domain.out | 397 ++++++++++++++++++ src/pl/plpgsql/src/pl_comp.c | 4 + src/pl/plpgsql/src/pl_exec.c | 16 + src/pl/plpgsql/src/plpgsql.h | 1 + src/pl/plpgsql/src/sql/plpgsql_domain.sql | 279 ++++++++++++ 6 files changed, 698 insertions(+), 1 deletion(-) create mode 100644 src/pl/plpgsql/src/expected/plpgsql_domain.out create mode 100644 src/pl/plpgsql/src/sql/plpgsql_domain.sql diff --git a/src/pl/plpgsql/src/Makefile b/src/pl/plpgsql/src/Makefile index 3ac64e2d44..fc60618618 100644 --- a/src/pl/plpgsql/src/Makefile +++ b/src/pl/plpgsql/src/Makefile @@ -26,7 +26,7 @@ DATA = plpgsql.control plpgsql--1.0.sql plpgsql--unpackaged--1.0.sql REGRESS_OPTS = --dbname=$(PL_TESTDB) -REGRESS = plpgsql_call plpgsql_control plpgsql_record \ +REGRESS = plpgsql_call plpgsql_control plpgsql_domain plpgsql_record \ plpgsql_transaction plpgsql_varprops all: all-lib diff --git a/src/pl/plpgsql/src/expected/plpgsql_domain.out b/src/pl/plpgsql/src/expected/plpgsql_domain.out new file mode 100644 index 0000000000..efc877cdd1 --- /dev/null +++ b/src/pl/plpgsql/src/expected/plpgsql_domain.out @@ -0,0 +1,397 @@ +-- +-- Tests for PL/pgSQL's behavior with domain types +-- +CREATE DOMAIN booltrue AS bool CHECK (VALUE IS TRUE OR VALUE IS NULL); +CREATE FUNCTION test_argresult_booltrue(x booltrue, y bool) RETURNS booltrue AS $$ +begin +return y; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_argresult_booltrue(true, true); + test_argresult_booltrue +------------------------- + t +(1 row) + +SELECT * FROM test_argresult_booltrue(false, true); +ERROR: value for domain booltrue violates check constraint "booltrue_check" +SELECT * FROM test_argresult_booltrue(true, false); +ERROR: value for domain booltrue violates check constraint "booltrue_check" +CONTEXT: PL/pgSQL function test_argresult_booltrue(booltrue,boolean) while casting return value to function's return type +CREATE FUNCTION test_assign_booltrue(x bool, y bool) RETURNS booltrue AS $$ +declare v booltrue := x; +begin +v := y; +return v; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_assign_booltrue(true, true); + test_assign_booltrue +---------------------- + t +(1 row) + +SELECT * FROM test_assign_booltrue(false, true); +ERROR: value for domain booltrue violates check constraint "booltrue_check" +CONTEXT: PL/pgSQL function test_assign_booltrue(boolean,boolean) line 3 during statement block local variable initialization +SELECT * FROM test_assign_booltrue(true, false); +ERROR: value for domain booltrue violates check constraint "booltrue_check" +CONTEXT: PL/pgSQL function test_assign_booltrue(boolean,boolean) line 4 at assignment +CREATE DOMAIN uint2 AS int2 CHECK (VALUE >= 0); +CREATE FUNCTION test_argresult_uint2(x uint2, y int) RETURNS uint2 AS $$ +begin +return y; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_argresult_uint2(100::uint2, 50); + test_argresult_uint2 +---------------------- + 50 +(1 row) + +SELECT * FROM test_argresult_uint2(100::uint2, -50); +ERROR: value for domain uint2 violates check constraint "uint2_check" +CONTEXT: PL/pgSQL function test_argresult_uint2(uint2,integer) while casting return value to function's return type +SELECT * FROM test_argresult_uint2(null, 1); + test_argresult_uint2 +---------------------- + 1 +(1 row) + +CREATE FUNCTION test_assign_uint2(x int, y int) RETURNS uint2 AS $$ +declare v uint2 := x; +begin +v := y; +return v; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_assign_uint2(100, 50); + test_assign_uint2 +------------------- + 50 +(1 row) + 
+SELECT * FROM test_assign_uint2(100, -50); +ERROR: value for domain uint2 violates check constraint "uint2_check" +CONTEXT: PL/pgSQL function test_assign_uint2(integer,integer) line 4 at assignment +SELECT * FROM test_assign_uint2(-100, 50); +ERROR: value for domain uint2 violates check constraint "uint2_check" +CONTEXT: PL/pgSQL function test_assign_uint2(integer,integer) line 3 during statement block local variable initialization +SELECT * FROM test_assign_uint2(null, 1); + test_assign_uint2 +------------------- + 1 +(1 row) + +CREATE DOMAIN nnint AS int NOT NULL; +CREATE FUNCTION test_argresult_nnint(x nnint, y int) RETURNS nnint AS $$ +begin +return y; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_argresult_nnint(10, 20); + test_argresult_nnint +---------------------- + 20 +(1 row) + +SELECT * FROM test_argresult_nnint(null, 20); +ERROR: domain nnint does not allow null values +SELECT * FROM test_argresult_nnint(10, null); +ERROR: domain nnint does not allow null values +CONTEXT: PL/pgSQL function test_argresult_nnint(nnint,integer) while casting return value to function's return type +CREATE FUNCTION test_assign_nnint(x int, y int) RETURNS nnint AS $$ +declare v nnint := x; +begin +v := y; +return v; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_assign_nnint(10, 20); + test_assign_nnint +------------------- + 20 +(1 row) + +SELECT * FROM test_assign_nnint(null, 20); +ERROR: domain nnint does not allow null values +CONTEXT: PL/pgSQL function test_assign_nnint(integer,integer) line 3 during statement block local variable initialization +SELECT * FROM test_assign_nnint(10, null); +ERROR: domain nnint does not allow null values +CONTEXT: PL/pgSQL function test_assign_nnint(integer,integer) line 4 at assignment +-- +-- Domains over arrays +-- +CREATE DOMAIN ordered_pair_domain AS integer[] CHECK (array_length(VALUE,1)=2 AND VALUE[1] < VALUE[2]); +CREATE FUNCTION test_argresult_array_domain(x ordered_pair_domain) + RETURNS ordered_pair_domain AS $$ +begin +return x; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_argresult_array_domain(ARRAY[0, 100]::ordered_pair_domain); + test_argresult_array_domain +----------------------------- + {0,100} +(1 row) + +SELECT * FROM test_argresult_array_domain(NULL::ordered_pair_domain); + test_argresult_array_domain +----------------------------- + +(1 row) + +CREATE FUNCTION test_argresult_array_domain_check_violation() + RETURNS ordered_pair_domain AS $$ +begin +return array[2,1]; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_argresult_array_domain_check_violation(); +ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check" +CONTEXT: PL/pgSQL function test_argresult_array_domain_check_violation() while casting return value to function's return type +CREATE FUNCTION test_assign_ordered_pair_domain(x int, y int, z int) RETURNS ordered_pair_domain AS $$ +declare v ordered_pair_domain := array[x, y]; +begin +v[2] := z; +return v; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_assign_ordered_pair_domain(1,2,3); + test_assign_ordered_pair_domain +--------------------------------- + {1,3} +(1 row) + +SELECT * FROM test_assign_ordered_pair_domain(1,2,0); +ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check" +CONTEXT: PL/pgSQL function test_assign_ordered_pair_domain(integer,integer,integer) line 4 at assignment +SELECT * FROM test_assign_ordered_pair_domain(2,1,3); +ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check" +CONTEXT: 
PL/pgSQL function test_assign_ordered_pair_domain(integer,integer,integer) line 3 during statement block local variable initialization +-- +-- Arrays of domains +-- +CREATE FUNCTION test_read_uint2_array(x uint2[]) RETURNS uint2 AS $$ +begin +return x[1]; +end +$$ LANGUAGE plpgsql; +select test_read_uint2_array(array[1::uint2]); + test_read_uint2_array +----------------------- + 1 +(1 row) + +CREATE FUNCTION test_build_uint2_array(x int2) RETURNS uint2[] AS $$ +begin +return array[x, x]; +end +$$ LANGUAGE plpgsql; +select test_build_uint2_array(1::int2); + test_build_uint2_array +------------------------ + {1,1} +(1 row) + +select test_build_uint2_array(-1::int2); -- fail +ERROR: value for domain uint2 violates check constraint "uint2_check" +CONTEXT: PL/pgSQL function test_build_uint2_array(smallint) while casting return value to function's return type +CREATE FUNCTION test_argresult_domain_array(x integer[]) + RETURNS ordered_pair_domain[] AS $$ +begin +return array[x::ordered_pair_domain, x::ordered_pair_domain]; +end +$$ LANGUAGE plpgsql; +select test_argresult_domain_array(array[2,4]); + test_argresult_domain_array +----------------------------- + {"{2,4}","{2,4}"} +(1 row) + +select test_argresult_domain_array(array[4,2]); -- fail +ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check" +CONTEXT: PL/pgSQL function test_argresult_domain_array(integer[]) line 3 at RETURN +CREATE FUNCTION test_argresult_domain_array2(x ordered_pair_domain) + RETURNS integer AS $$ +begin +return x[1]; +end +$$ LANGUAGE plpgsql; +select test_argresult_domain_array2(array[2,4]); + test_argresult_domain_array2 +------------------------------ + 2 +(1 row) + +select test_argresult_domain_array2(array[4,2]); -- fail +ERROR: value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check" +CREATE FUNCTION test_argresult_array_domain_array(x ordered_pair_domain[]) + RETURNS ordered_pair_domain AS $$ +begin +return x[1]; +end +$$ LANGUAGE plpgsql; +select test_argresult_array_domain_array(array[array[2,4]::ordered_pair_domain]); + test_argresult_array_domain_array +----------------------------------- + {2,4} +(1 row) + +-- +-- Domains within composite +-- +CREATE TYPE nnint_container AS (f1 int, f2 nnint); +CREATE FUNCTION test_result_nnint_container(x int, y int) + RETURNS nnint_container AS $$ +begin +return row(x, y)::nnint_container; +end +$$ LANGUAGE plpgsql; +SELECT test_result_nnint_container(null, 3); + test_result_nnint_container +----------------------------- + (,3) +(1 row) + +SELECT test_result_nnint_container(3, null); -- fail +ERROR: domain nnint does not allow null values +CONTEXT: PL/pgSQL function test_result_nnint_container(integer,integer) line 3 at RETURN +CREATE FUNCTION test_assign_nnint_container(x int, y int, z int) + RETURNS nnint_container AS $$ +declare v nnint_container := row(x, y); +begin +v.f2 := z; +return v; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_assign_nnint_container(1,2,3); + f1 | f2 +----+---- + 1 | 3 +(1 row) + +SELECT * FROM test_assign_nnint_container(1,2,null); +ERROR: domain nnint does not allow null values +CONTEXT: PL/pgSQL function test_assign_nnint_container(integer,integer,integer) line 4 at assignment +SELECT * FROM test_assign_nnint_container(1,null,3); +ERROR: domain nnint does not allow null values +CONTEXT: PL/pgSQL function test_assign_nnint_container(integer,integer,integer) line 3 during statement block local variable initialization +-- Since core system allows this: +SELECT 
null::nnint_container; + nnint_container +----------------- + +(1 row) + +-- so should PL/PgSQL +CREATE FUNCTION test_assign_nnint_container2(x int, y int, z int) + RETURNS nnint_container AS $$ +declare v nnint_container; +begin +v.f2 := z; +return v; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_assign_nnint_container2(1,2,3); + f1 | f2 +----+---- + | 3 +(1 row) + +SELECT * FROM test_assign_nnint_container2(1,2,null); +ERROR: domain nnint does not allow null values +CONTEXT: PL/pgSQL function test_assign_nnint_container2(integer,integer,integer) line 4 at assignment +-- +-- Domains of composite +-- +CREATE TYPE named_pair AS ( + i integer, + j integer +); +CREATE DOMAIN ordered_named_pair AS named_pair CHECK((VALUE).i <= (VALUE).j); +CREATE FUNCTION read_ordered_named_pair(p ordered_named_pair) RETURNS integer AS $$ +begin +return p.i + p.j; +end +$$ LANGUAGE plpgsql; +SELECT read_ordered_named_pair(row(1, 2)); + read_ordered_named_pair +------------------------- + 3 +(1 row) + +SELECT read_ordered_named_pair(row(2, 1)); -- fail +ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CREATE FUNCTION build_ordered_named_pair(i int, j int) RETURNS ordered_named_pair AS $$ +begin +return row(i, j); +end +$$ LANGUAGE plpgsql; +SELECT build_ordered_named_pair(1,2); + build_ordered_named_pair +-------------------------- + (1,2) +(1 row) + +SELECT build_ordered_named_pair(2,1); -- fail +ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CONTEXT: PL/pgSQL function build_ordered_named_pair(integer,integer) while casting return value to function's return type +CREATE FUNCTION test_assign_ordered_named_pair(x int, y int, z int) + RETURNS ordered_named_pair AS $$ +declare v ordered_named_pair := row(x, y); +begin +v.j := z; +return v; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_assign_ordered_named_pair(1,2,3); + i | j +---+--- + 1 | 3 +(1 row) + +SELECT * FROM test_assign_ordered_named_pair(1,2,0); +ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CONTEXT: PL/pgSQL function test_assign_ordered_named_pair(integer,integer,integer) line 4 at assignment +SELECT * FROM test_assign_ordered_named_pair(2,1,3); +ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CONTEXT: PL/pgSQL function test_assign_ordered_named_pair(integer,integer,integer) line 3 during statement block local variable initialization +CREATE FUNCTION build_ordered_named_pairs(i int, j int) RETURNS ordered_named_pair[] AS $$ +begin +return array[row(i, j), row(i, j+1)]; +end +$$ LANGUAGE plpgsql; +SELECT build_ordered_named_pairs(1,2); + build_ordered_named_pairs +--------------------------- + {"(1,2)","(1,3)"} +(1 row) + +SELECT build_ordered_named_pairs(2,1); -- fail +ERROR: value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CONTEXT: PL/pgSQL function build_ordered_named_pairs(integer,integer) while casting return value to function's return type +CREATE FUNCTION test_assign_ordered_named_pairs(x int, y int, z int) + RETURNS ordered_named_pair[] AS $$ +declare v ordered_named_pair[] := array[row(x, y)]; +begin +-- ideally this would work, but it doesn't yet: +-- v[1].j := z; +return v; +end +$$ LANGUAGE plpgsql; +SELECT * FROM test_assign_ordered_named_pairs(1,2,3); + test_assign_ordered_named_pairs +--------------------------------- + {"(1,2)"} +(1 row) + +SELECT * FROM test_assign_ordered_named_pairs(2,1,3); +ERROR: 
value for domain ordered_named_pair violates check constraint "ordered_named_pair_check" +CONTEXT: PL/pgSQL function test_assign_ordered_named_pairs(integer,integer,integer) line 3 during statement block local variable initialization +SELECT * FROM test_assign_ordered_named_pairs(1,2,0); -- should fail someday + test_assign_ordered_named_pairs +--------------------------------- + {"(1,2)"} +(1 row) + diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c index aab92c4711..d07a16a7ea 100644 --- a/src/pl/plpgsql/src/pl_comp.c +++ b/src/pl/plpgsql/src/pl_comp.c @@ -557,6 +557,7 @@ do_compile(FunctionCallInfo fcinfo, } function->fn_retistuple = type_is_rowtype(rettypeid); + function->fn_retisdomain = (typeStruct->typtype == TYPTYPE_DOMAIN); function->fn_retbyval = typeStruct->typbyval; function->fn_rettyplen = typeStruct->typlen; @@ -584,6 +585,7 @@ do_compile(FunctionCallInfo fcinfo, function->fn_rettype = InvalidOid; function->fn_retbyval = false; function->fn_retistuple = true; + function->fn_retisdomain = false; function->fn_retset = false; /* shouldn't be any declared arguments */ @@ -707,6 +709,7 @@ do_compile(FunctionCallInfo fcinfo, function->fn_rettype = VOIDOID; function->fn_retbyval = false; function->fn_retistuple = true; + function->fn_retisdomain = false; function->fn_retset = false; /* shouldn't be any declared arguments */ @@ -886,6 +889,7 @@ plpgsql_compile_inline(char *proc_source) function->fn_rettype = VOIDOID; function->fn_retset = false; function->fn_retistuple = false; + function->fn_retisdomain = false; /* a bit of hardwired knowledge about type VOID here */ function->fn_retbyval = true; function->fn_rettyplen = sizeof(int32); diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index 5054d20ab1..eae51e316a 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -712,6 +712,22 @@ plpgsql_exec_function(PLpgSQL_function *func, FunctionCallInfo fcinfo, func->fn_rettyplen); } } + else + { + /* + * We're returning a NULL, which normally requires no conversion work + * regardless of datatypes. But, if we are casting it to a domain + * return type, we'd better check that the domain's constraints pass. 
+ */ + if (func->fn_retisdomain) + estate.retval = exec_cast_value(&estate, + estate.retval, + &fcinfo->isnull, + estate.rettype, + -1, + func->fn_rettype, + -1); + } estate.err_text = gettext_noop("during function exit"); diff --git a/src/pl/plpgsql/src/plpgsql.h b/src/pl/plpgsql/src/plpgsql.h index c2449f03cf..26a7344e9a 100644 --- a/src/pl/plpgsql/src/plpgsql.h +++ b/src/pl/plpgsql/src/plpgsql.h @@ -915,6 +915,7 @@ typedef struct PLpgSQL_function int fn_rettyplen; bool fn_retbyval; bool fn_retistuple; + bool fn_retisdomain; bool fn_retset; bool fn_readonly; diff --git a/src/pl/plpgsql/src/sql/plpgsql_domain.sql b/src/pl/plpgsql/src/sql/plpgsql_domain.sql new file mode 100644 index 0000000000..8f99aae5a9 --- /dev/null +++ b/src/pl/plpgsql/src/sql/plpgsql_domain.sql @@ -0,0 +1,279 @@ +-- +-- Tests for PL/pgSQL's behavior with domain types +-- + +CREATE DOMAIN booltrue AS bool CHECK (VALUE IS TRUE OR VALUE IS NULL); + +CREATE FUNCTION test_argresult_booltrue(x booltrue, y bool) RETURNS booltrue AS $$ +begin +return y; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_argresult_booltrue(true, true); +SELECT * FROM test_argresult_booltrue(false, true); +SELECT * FROM test_argresult_booltrue(true, false); + +CREATE FUNCTION test_assign_booltrue(x bool, y bool) RETURNS booltrue AS $$ +declare v booltrue := x; +begin +v := y; +return v; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_assign_booltrue(true, true); +SELECT * FROM test_assign_booltrue(false, true); +SELECT * FROM test_assign_booltrue(true, false); + + +CREATE DOMAIN uint2 AS int2 CHECK (VALUE >= 0); + +CREATE FUNCTION test_argresult_uint2(x uint2, y int) RETURNS uint2 AS $$ +begin +return y; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_argresult_uint2(100::uint2, 50); +SELECT * FROM test_argresult_uint2(100::uint2, -50); +SELECT * FROM test_argresult_uint2(null, 1); + +CREATE FUNCTION test_assign_uint2(x int, y int) RETURNS uint2 AS $$ +declare v uint2 := x; +begin +v := y; +return v; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_assign_uint2(100, 50); +SELECT * FROM test_assign_uint2(100, -50); +SELECT * FROM test_assign_uint2(-100, 50); +SELECT * FROM test_assign_uint2(null, 1); + + +CREATE DOMAIN nnint AS int NOT NULL; + +CREATE FUNCTION test_argresult_nnint(x nnint, y int) RETURNS nnint AS $$ +begin +return y; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_argresult_nnint(10, 20); +SELECT * FROM test_argresult_nnint(null, 20); +SELECT * FROM test_argresult_nnint(10, null); + +CREATE FUNCTION test_assign_nnint(x int, y int) RETURNS nnint AS $$ +declare v nnint := x; +begin +v := y; +return v; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_assign_nnint(10, 20); +SELECT * FROM test_assign_nnint(null, 20); +SELECT * FROM test_assign_nnint(10, null); + + +-- +-- Domains over arrays +-- + +CREATE DOMAIN ordered_pair_domain AS integer[] CHECK (array_length(VALUE,1)=2 AND VALUE[1] < VALUE[2]); + +CREATE FUNCTION test_argresult_array_domain(x ordered_pair_domain) + RETURNS ordered_pair_domain AS $$ +begin +return x; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_argresult_array_domain(ARRAY[0, 100]::ordered_pair_domain); +SELECT * FROM test_argresult_array_domain(NULL::ordered_pair_domain); + +CREATE FUNCTION test_argresult_array_domain_check_violation() + RETURNS ordered_pair_domain AS $$ +begin +return array[2,1]; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_argresult_array_domain_check_violation(); + +CREATE FUNCTION test_assign_ordered_pair_domain(x int, y int, z int) RETURNS ordered_pair_domain AS $$ +declare v 
ordered_pair_domain := array[x, y]; +begin +v[2] := z; +return v; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_assign_ordered_pair_domain(1,2,3); +SELECT * FROM test_assign_ordered_pair_domain(1,2,0); +SELECT * FROM test_assign_ordered_pair_domain(2,1,3); + + +-- +-- Arrays of domains +-- + +CREATE FUNCTION test_read_uint2_array(x uint2[]) RETURNS uint2 AS $$ +begin +return x[1]; +end +$$ LANGUAGE plpgsql; + +select test_read_uint2_array(array[1::uint2]); + +CREATE FUNCTION test_build_uint2_array(x int2) RETURNS uint2[] AS $$ +begin +return array[x, x]; +end +$$ LANGUAGE plpgsql; + +select test_build_uint2_array(1::int2); +select test_build_uint2_array(-1::int2); -- fail + +CREATE FUNCTION test_argresult_domain_array(x integer[]) + RETURNS ordered_pair_domain[] AS $$ +begin +return array[x::ordered_pair_domain, x::ordered_pair_domain]; +end +$$ LANGUAGE plpgsql; + +select test_argresult_domain_array(array[2,4]); +select test_argresult_domain_array(array[4,2]); -- fail + +CREATE FUNCTION test_argresult_domain_array2(x ordered_pair_domain) + RETURNS integer AS $$ +begin +return x[1]; +end +$$ LANGUAGE plpgsql; + +select test_argresult_domain_array2(array[2,4]); +select test_argresult_domain_array2(array[4,2]); -- fail + +CREATE FUNCTION test_argresult_array_domain_array(x ordered_pair_domain[]) + RETURNS ordered_pair_domain AS $$ +begin +return x[1]; +end +$$ LANGUAGE plpgsql; + +select test_argresult_array_domain_array(array[array[2,4]::ordered_pair_domain]); + + +-- +-- Domains within composite +-- + +CREATE TYPE nnint_container AS (f1 int, f2 nnint); + +CREATE FUNCTION test_result_nnint_container(x int, y int) + RETURNS nnint_container AS $$ +begin +return row(x, y)::nnint_container; +end +$$ LANGUAGE plpgsql; + +SELECT test_result_nnint_container(null, 3); +SELECT test_result_nnint_container(3, null); -- fail + +CREATE FUNCTION test_assign_nnint_container(x int, y int, z int) + RETURNS nnint_container AS $$ +declare v nnint_container := row(x, y); +begin +v.f2 := z; +return v; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_assign_nnint_container(1,2,3); +SELECT * FROM test_assign_nnint_container(1,2,null); +SELECT * FROM test_assign_nnint_container(1,null,3); + +-- Since core system allows this: +SELECT null::nnint_container; +-- so should PL/PgSQL + +CREATE FUNCTION test_assign_nnint_container2(x int, y int, z int) + RETURNS nnint_container AS $$ +declare v nnint_container; +begin +v.f2 := z; +return v; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_assign_nnint_container2(1,2,3); +SELECT * FROM test_assign_nnint_container2(1,2,null); + + +-- +-- Domains of composite +-- + +CREATE TYPE named_pair AS ( + i integer, + j integer +); + +CREATE DOMAIN ordered_named_pair AS named_pair CHECK((VALUE).i <= (VALUE).j); + +CREATE FUNCTION read_ordered_named_pair(p ordered_named_pair) RETURNS integer AS $$ +begin +return p.i + p.j; +end +$$ LANGUAGE plpgsql; + +SELECT read_ordered_named_pair(row(1, 2)); +SELECT read_ordered_named_pair(row(2, 1)); -- fail + +CREATE FUNCTION build_ordered_named_pair(i int, j int) RETURNS ordered_named_pair AS $$ +begin +return row(i, j); +end +$$ LANGUAGE plpgsql; + +SELECT build_ordered_named_pair(1,2); +SELECT build_ordered_named_pair(2,1); -- fail + +CREATE FUNCTION test_assign_ordered_named_pair(x int, y int, z int) + RETURNS ordered_named_pair AS $$ +declare v ordered_named_pair := row(x, y); +begin +v.j := z; +return v; +end +$$ LANGUAGE plpgsql; + +SELECT * FROM test_assign_ordered_named_pair(1,2,3); +SELECT * FROM 
test_assign_ordered_named_pair(1,2,0);
+SELECT * FROM test_assign_ordered_named_pair(2,1,3);
+
+CREATE FUNCTION build_ordered_named_pairs(i int, j int) RETURNS ordered_named_pair[] AS $$
+begin
+return array[row(i, j), row(i, j+1)];
+end
+$$ LANGUAGE plpgsql;
+
+SELECT build_ordered_named_pairs(1,2);
+SELECT build_ordered_named_pairs(2,1); -- fail
+
+CREATE FUNCTION test_assign_ordered_named_pairs(x int, y int, z int)
+  RETURNS ordered_named_pair[] AS $$
+declare v ordered_named_pair[] := array[row(x, y)];
+begin
+-- ideally this would work, but it doesn't yet:
+-- v[1].j := z;
+return v;
+end
+$$ LANGUAGE plpgsql;
+
+SELECT * FROM test_assign_ordered_named_pairs(1,2,3);
+SELECT * FROM test_assign_ordered_named_pairs(2,1,3);
+SELECT * FROM test_assign_ordered_named_pairs(1,2,0); -- should fail someday

From 773aec7aa98abd38d6d9435913bb8e14e392c274 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Thu, 15 Feb 2018 21:55:31 -0800
Subject: [PATCH 1002/1087] Do execGrouping.c via expression eval machinery.

This has a performance benefit on its own, although not hugely so. The
primary benefit is that it will allow JITing of tuple deforming and
comparator invocations.

Author: Andres Freund
Discussion: https://postgr.es/m/20171129080934.amqqkke2zjtekd4t@alap3.anarazel.de
---
 src/backend/executor/execExpr.c           | 118 +++++++++++
 src/backend/executor/execExprInterp.c     |  29 +++
 src/backend/executor/execGrouping.c       | 236 +++++-----------------
 src/backend/executor/nodeAgg.c            | 143 ++++++++-----
 src/backend/executor/nodeGroup.c          |  24 +--
 src/backend/executor/nodeRecursiveunion.c |   5 +-
 src/backend/executor/nodeSetOp.c          |  48 ++---
 src/backend/executor/nodeSubplan.c        |  81 +++++++-
 src/backend/executor/nodeUnique.c         |  31 ++-
 src/backend/executor/nodeWindowAgg.c      |  38 ++--
 src/backend/utils/adt/orderedsetaggs.c    |  56 ++---
 src/include/executor/execExpr.h           |   1 +
 src/include/executor/executor.h           |  28 ++-
 src/include/executor/nodeAgg.h            |  12 +-
 src/include/nodes/execnodes.h             |  14 +-
 15 files changed, 498 insertions(+), 366 deletions(-)

diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index c6eb3ebacf..47c1c1a49b 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -3193,3 +3193,121 @@ ExecBuildAggTransCall(ExprState *state, AggState *aggstate,
         as->d.agg_strict_trans_check.jumpnull = state->steps_len;
     }
 }
+
+/*
+ * Build equality expression that can be evaluated using ExecQual(), returning
+ * true if the expression context's inner/outer tuples are NOT DISTINCT. I.e.
+ * two nulls match, a null and a not-null don't match.
+ *
+ * desc: tuple descriptor of the to-be-compared tuples
+ * numCols: the number of attributes to be examined
+ * keyColIdx: array of attribute column numbers
+ * eqfunctions: array of function oids of the equality functions to use
+ * parent: parent executor node
+ */
+ExprState *
+ExecBuildGroupingEqual(TupleDesc desc,
+                       int numCols,
+                       AttrNumber *keyColIdx,
+                       Oid *eqfunctions,
+                       PlanState *parent)
+{
+    ExprState  *state = makeNode(ExprState);
+    ExprEvalStep scratch = {0};
+    int         natt;
+    int         maxatt = -1;
+    List       *adjust_jumps = NIL;
+    ListCell   *lc;
+
+    /*
+     * When no columns are actually compared, the result's always true. See
+     * special case in ExecQual().
+     */
+    if (numCols == 0)
+        return NULL;
+
+    state->expr = NULL;
+    state->flags = EEO_FLAG_IS_QUAL;
+    state->parent = parent;
+
+    scratch.resvalue = &state->resvalue;
+    scratch.resnull = &state->resnull;
+
+    /* compute max needed attribute */
+    for (natt = 0; natt < numCols; natt++)
+    {
+        int         attno = keyColIdx[natt];
+
+        if (attno > maxatt)
+            maxatt = attno;
+    }
+    Assert(maxatt >= 0);
+
+    /* push deform steps */
+    scratch.opcode = EEOP_INNER_FETCHSOME;
+    scratch.d.fetch.last_var = maxatt;
+    ExprEvalPushStep(state, &scratch);
+
+    scratch.opcode = EEOP_OUTER_FETCHSOME;
+    scratch.d.fetch.last_var = maxatt;
+    ExprEvalPushStep(state, &scratch);
+
+    /*
+     * Start comparing at the last field (least significant sort key). That's
+     * the most likely to be different if we are dealing with sorted input.
+     */
+    for (natt = numCols; --natt >= 0;)
+    {
+        int         attno = keyColIdx[natt];
+        Form_pg_attribute att = TupleDescAttr(desc, attno - 1);
+        Var        *larg,
+                   *rarg;
+        List       *args;
+
+        /*
+         * Reusing ExecInitFunc() requires creating Vars, but still seems
+         * worth it from a code reuse perspective.
+         */
+
+        /* left arg */
+        larg = makeVar(INNER_VAR, attno, att->atttypid,
+                       att->atttypmod, InvalidOid, 0);
+        /* right arg */
+        rarg = makeVar(OUTER_VAR, attno, att->atttypid,
+                       att->atttypmod, InvalidOid, 0);
+        args = list_make2(larg, rarg);
+
+        /* evaluate distinctness */
+        ExecInitFunc(&scratch, NULL,
+                     args, eqfunctions[natt], InvalidOid,
+                     state);
+        scratch.opcode = EEOP_NOT_DISTINCT;
+        ExprEvalPushStep(state, &scratch);
+
+        /* then emit EEOP_QUAL to detect if result is false (or null) */
+        scratch.opcode = EEOP_QUAL;
+        scratch.d.qualexpr.jumpdone = -1;
+        ExprEvalPushStep(state, &scratch);
+        adjust_jumps = lappend_int(adjust_jumps,
+                                   state->steps_len - 1);
+    }
+
+    /* adjust jump targets */
+    foreach(lc, adjust_jumps)
+    {
+        ExprEvalStep *as = &state->steps[lfirst_int(lc)];
+
+        Assert(as->opcode == EEOP_QUAL);
+        Assert(as->d.qualexpr.jumpdone == -1);
+        as->d.qualexpr.jumpdone = state->steps_len;
+    }
+
+    scratch.resvalue = NULL;
+    scratch.resnull = NULL;
+    scratch.opcode = EEOP_DONE;
+    ExprEvalPushStep(state, &scratch);
+
+    ExecReadyExpr(state);
+
+    return state;
+}
diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c
index 9c6c2b02e9..771b7e3945 100644
--- a/src/backend/executor/execExprInterp.c
+++ b/src/backend/executor/execExprInterp.c
@@ -355,6 +355,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)
         &&CASE_EEOP_MAKE_READONLY,
         &&CASE_EEOP_IOCOERCE,
         &&CASE_EEOP_DISTINCT,
+        &&CASE_EEOP_NOT_DISTINCT,
         &&CASE_EEOP_NULLIF,
         &&CASE_EEOP_SQLVALUEFUNCTION,
         &&CASE_EEOP_CURRENTOFEXPR,
@@ -1198,6 +1199,34 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)
             EEO_NEXT();
         }
 
+        /* see EEOP_DISTINCT for comments, this is just inverted */
+        EEO_CASE(EEOP_NOT_DISTINCT)
+        {
+            FunctionCallInfo fcinfo = op->d.func.fcinfo_data;
+
+            if (fcinfo->argnull[0] && fcinfo->argnull[1])
+            {
+                *op->resvalue = BoolGetDatum(true);
+                *op->resnull = false;
+            }
+            else if (fcinfo->argnull[0] || fcinfo->argnull[1])
+            {
+                *op->resvalue = BoolGetDatum(false);
+                *op->resnull = false;
+            }
+            else
+            {
+                Datum       eqresult;
+
+                fcinfo->isnull = false;
+                eqresult = op->d.func.fn_addr(fcinfo);
+                *op->resvalue = eqresult;
+                *op->resnull = fcinfo->isnull;
+            }
+
+            EEO_NEXT();
+        }
+
         EEO_CASE(EEOP_NULLIF)
         {
             /*
diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c
index 8e8dbb1f20..4f604fb286 100644
--- a/src/backend/executor/execGrouping.c
+++ b/src/backend/executor/execGrouping.c
@@ -51,173 +51,34 @@ static int TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tup
  * Utility routines for grouping tuples together
  *****************************************************************************/
 
-/*
- * execTuplesMatch
- *      Return true if two tuples match in all the indicated fields.
- *
- * This actually implements SQL's notion of "not distinct".  Two nulls
- * match, a null and a not-null don't match.
- *
- * slot1, slot2: the tuples to compare (must have same columns!)
- * numCols: the number of attributes to be examined
- * matchColIdx: array of attribute column numbers
- * eqFunctions: array of fmgr lookup info for the equality functions to use
- * evalContext: short-term memory context for executing the functions
- *
- * NB: evalContext is reset each time!
- */
-bool
-execTuplesMatch(TupleTableSlot *slot1,
-                TupleTableSlot *slot2,
-                int numCols,
-                AttrNumber *matchColIdx,
-                FmgrInfo *eqfunctions,
-                MemoryContext evalContext)
-{
-    MemoryContext oldContext;
-    bool        result;
-    int         i;
-
-    /* Reset and switch into the temp context. */
-    MemoryContextReset(evalContext);
-    oldContext = MemoryContextSwitchTo(evalContext);
-
-    /*
-     * We cannot report a match without checking all the fields, but we can
-     * report a non-match as soon as we find unequal fields.  So, start
-     * comparing at the last field (least significant sort key).  That's the
-     * most likely to be different if we are dealing with sorted input.
-     */
-    result = true;
-
-    for (i = numCols; --i >= 0;)
-    {
-        AttrNumber  att = matchColIdx[i];
-        Datum       attr1,
-                    attr2;
-        bool        isNull1,
-                    isNull2;
-
-        attr1 = slot_getattr(slot1, att, &isNull1);
-
-        attr2 = slot_getattr(slot2, att, &isNull2);
-
-        if (isNull1 != isNull2)
-        {
-            result = false;     /* one null and one not; they aren't equal */
-            break;
-        }
-
-        if (isNull1)
-            continue;           /* both are null, treat as equal */
-
-        /* Apply the type-specific equality function */
-
-        if (!DatumGetBool(FunctionCall2(&eqfunctions[i],
-                                        attr1, attr2)))
-        {
-            result = false;     /* they aren't equal */
-            break;
-        }
-    }
-
-    MemoryContextSwitchTo(oldContext);
-
-    return result;
-}
-
-/*
- * execTuplesUnequal
- *      Return true if two tuples are definitely unequal in the indicated
- *      fields.
- *
- * Nulls are neither equal nor unequal to anything else.  A true result
- * is obtained only if there are non-null fields that compare not-equal.
- *
- * Parameters are identical to execTuplesMatch.
- */
-bool
-execTuplesUnequal(TupleTableSlot *slot1,
-                  TupleTableSlot *slot2,
-                  int numCols,
-                  AttrNumber *matchColIdx,
-                  FmgrInfo *eqfunctions,
-                  MemoryContext evalContext)
-{
-    MemoryContext oldContext;
-    bool        result;
-    int         i;
-
-    /* Reset and switch into the temp context. */
-    MemoryContextReset(evalContext);
-    oldContext = MemoryContextSwitchTo(evalContext);
-
-    /*
-     * We cannot report a match without checking all the fields, but we can
-     * report a non-match as soon as we find unequal fields.  So, start
-     * comparing at the last field (least significant sort key).  That's the
-     * most likely to be different if we are dealing with sorted input.
-     */
-    result = false;
-
-    for (i = numCols; --i >= 0;)
-    {
-        AttrNumber  att = matchColIdx[i];
-        Datum       attr1,
-                    attr2;
-        bool        isNull1,
-                    isNull2;
-
-        attr1 = slot_getattr(slot1, att, &isNull1);
-
-        if (isNull1)
-            continue;           /* can't prove anything here */
-
-        attr2 = slot_getattr(slot2, att, &isNull2);
-
-        if (isNull2)
-            continue;           /* can't prove anything here */
-
-        /* Apply the type-specific equality function */
-
-        if (!DatumGetBool(FunctionCall2(&eqfunctions[i],
-                                        attr1, attr2)))
-        {
-            result = true;      /* they are unequal */
-            break;
-        }
-    }
-
-    MemoryContextSwitchTo(oldContext);
-
-    return result;
-}
-
-
 /*
  * execTuplesMatchPrepare
- *      Look up the equality functions needed for execTuplesMatch or
- *      execTuplesUnequal, given an array of equality operator OIDs.
- *
- * The result is a palloc'd array.
+ *      Build expression that can be evaluated using ExecQual(), returning
+ *      whether an ExprContext's inner/outer tuples are NOT DISTINCT
 */
-FmgrInfo *
-execTuplesMatchPrepare(int numCols,
-                       Oid *eqOperators)
+ExprState *
+execTuplesMatchPrepare(TupleDesc desc,
+                       int numCols,
+                       AttrNumber *keyColIdx,
+                       Oid *eqOperators,
+                       PlanState *parent)
 {
-    FmgrInfo   *eqFunctions = (FmgrInfo *) palloc(numCols * sizeof(FmgrInfo));
+    Oid        *eqFunctions = (Oid *) palloc(numCols * sizeof(Oid));
     int         i;
+    ExprState  *expr;
+
+    if (numCols == 0)
+        return NULL;
 
+    /* lookup equality functions */
     for (i = 0; i < numCols; i++)
-    {
-        Oid         eq_opr = eqOperators[i];
-        Oid         eq_function;
+        eqFunctions[i] = get_opcode(eqOperators[i]);
 
-        eq_function = get_opcode(eq_opr);
-        fmgr_info(eq_function, &eqFunctions[i]);
-    }
+    /* build actual expression */
+    expr = ExecBuildGroupingEqual(desc, numCols, keyColIdx, eqFunctions,
+                                  parent);
 
-    return eqFunctions;
+    return expr;
 }
 
 /*
@@ -288,7 +149,9 @@ execTuplesHashPrepare(int numCols,
  * storage that will live as long as the hashtable does.
 */
 TupleHashTable
-BuildTupleHashTable(int numCols, AttrNumber *keyColIdx,
+BuildTupleHashTable(PlanState *parent,
+                    TupleDesc inputDesc,
+                    int numCols, AttrNumber *keyColIdx,
                     FmgrInfo *eqfunctions,
                     FmgrInfo *hashfunctions,
                     long nbuckets, Size additionalsize,
@@ -297,6 +160,9 @@ BuildTupleHashTable(int numCols, AttrNumber *keyColIdx,
 {
     TupleHashTable hashtable;
     Size        entrysize = sizeof(TupleHashEntryData) + additionalsize;
+    MemoryContext oldcontext;
+    Oid        *eqoids = (Oid *) palloc(numCols * sizeof(Oid));
+    int         i;
 
     Assert(nbuckets > 0);
@@ -333,6 +199,26 @@ BuildTupleHashTable(int numCols, AttrNumber *keyColIdx,
 
     hashtable->hashtab = tuplehash_create(tablecxt, nbuckets, hashtable);
 
+    oldcontext = MemoryContextSwitchTo(hashtable->tablecxt);
+
+    /*
+     * We copy the input tuple descriptor just for safety --- we assume all
+     * input tuples will have equivalent descriptors.
+     */
+    hashtable->tableslot = MakeSingleTupleTableSlot(CreateTupleDescCopy(inputDesc));
+
+    /* build comparator for all columns */
+    for (i = 0; i < numCols; i++)
+        eqoids[i] = eqfunctions[i].fn_oid;
+    hashtable->eq_func = ExecBuildGroupingEqual(inputDesc,
+                                                numCols,
+                                                keyColIdx, eqoids,
+                                                parent);
+
+    MemoryContextSwitchTo(oldcontext);
+
+    hashtable->exprcontext = CreateExprContext(parent->state);
+
     return hashtable;
 }
 
@@ -357,22 +243,6 @@ LookupTupleHashEntry(TupleHashTable hashtable, TupleTableSlot *slot,
     bool        found;
     MinimalTuple key;
 
-    /* If first time through, clone the input slot to make table slot */
-    if (hashtable->tableslot == NULL)
-    {
-        TupleDesc   tupdesc;
-
-        oldContext = MemoryContextSwitchTo(hashtable->tablecxt);
-
-        /*
-         * We copy the input tuple descriptor just for safety --- we assume
-         * all input tuples will have equivalent descriptors.
-         */
-        tupdesc = CreateTupleDescCopy(slot->tts_tupleDescriptor);
-        hashtable->tableslot = MakeSingleTupleTableSlot(tupdesc);
-        MemoryContextSwitchTo(oldContext);
-    }
-
     /* Need to run the hash functions in short-lived context */
     oldContext = MemoryContextSwitchTo(hashtable->tempcxt);
 
@@ -524,9 +394,6 @@ TupleHashTableHash(struct tuplehash_hash *tb, const MinimalTuple tuple)
  * See whether two tuples (presumably of the same hash value) match
 *
 * As above, the passed pointers are pointers to TupleHashEntryData.
- *
- * Also, the caller must select an appropriate memory context for running
- * the compare functions.  (dynahash.c doesn't change CurrentMemoryContext.)
 */
 static int
 TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const MinimalTuple tuple2)
@@ -534,6 +401,7 @@ TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const
     TupleTableSlot *slot1;
     TupleTableSlot *slot2;
     TupleHashTable hashtable = (TupleHashTable) tb->private_data;
+    ExprContext *econtext = hashtable->exprcontext;
 
     /*
      * We assume that simplehash.h will only ever call us with the first
@@ -548,13 +416,7 @@ TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const
     slot2 = hashtable->inputslot;
 
     /* For crosstype comparisons, the inputslot must be first */
-    if (execTuplesMatch(slot2,
-                        slot1,
-                        hashtable->numCols,
-                        hashtable->keyColIdx,
-                        hashtable->cur_eq_funcs,
-                        hashtable->tempcxt))
-        return 0;
-    else
-        return 1;
+    econtext->ecxt_innertuple = slot1;
+    econtext->ecxt_outertuple = slot2;
+    return !ExecQualAndReset(hashtable->eq_func, econtext);
 }
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index a86d4b68ea..467f8d896e 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -755,7 +755,7 @@ process_ordered_aggregate_single(AggState *aggstate,
         ((oldIsNull && *isNull) ||
          (!oldIsNull && !*isNull &&
           oldAbbrevVal == newAbbrevVal &&
-          DatumGetBool(FunctionCall2(&pertrans->equalfns[0],
+          DatumGetBool(FunctionCall2(&pertrans->equalfnOne,
                                      oldVal, *newVal)))))
     {
         /* equal to prior, so forget this one */
@@ -802,7 +802,7 @@ process_ordered_aggregate_multi(AggState *aggstate,
                                 AggStatePerTrans pertrans,
                                 AggStatePerGroup pergroupstate)
 {
-    MemoryContext workcontext = aggstate->tmpcontext->ecxt_per_tuple_memory;
+    ExprContext *tmpcontext = aggstate->tmpcontext;
     FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo;
     TupleTableSlot *slot1 = pertrans->sortslot;
     TupleTableSlot *slot2 = pertrans->uniqslot;
@@ -811,6 +811,7 @@ process_ordered_aggregate_multi(AggState *aggstate,
     Datum       newAbbrevVal = (Datum) 0;
     Datum       oldAbbrevVal = (Datum) 0;
     bool        haveOldValue = false;
+
+ */ + if (numCols == 0) + return NULL; + + state->expr = NULL; + state->flags = EEO_FLAG_IS_QUAL; + state->parent = parent; + + scratch.resvalue = &state->resvalue; + scratch.resnull = &state->resnull; + + /* compute max needed attribute */ + for (natt = 0; natt < numCols; natt++) + { + int attno = keyColIdx[natt]; + + if (attno > maxatt) + maxatt = attno; + } + Assert(maxatt >= 0); + + /* push deform steps */ + scratch.opcode = EEOP_INNER_FETCHSOME; + scratch.d.fetch.last_var = maxatt; + ExprEvalPushStep(state, &scratch); + + scratch.opcode = EEOP_OUTER_FETCHSOME; + scratch.d.fetch.last_var = maxatt; + ExprEvalPushStep(state, &scratch); + + /* + * Start comparing at the last field (least significant sort key). That's + * the most likely to be different if we are dealing with sorted input. + */ + for (natt = numCols; --natt >= 0;) + { + int attno = keyColIdx[natt]; + Form_pg_attribute att = TupleDescAttr(desc, attno - 1); + Var *larg, + *rarg; + List *args; + + /* + * Reusing ExecInitFunc() requires creating Vars, but still seems + * worth it from a code reuse perspective. + */ + + /* left arg */ + larg = makeVar(INNER_VAR, attno, att->atttypid, + att->atttypmod, InvalidOid, 0); + /* right arg */ + rarg = makeVar(OUTER_VAR, attno, att->atttypid, + att->atttypmod, InvalidOid, 0); + args = list_make2(larg, rarg); + + /* evaluate distinctness */ + ExecInitFunc(&scratch, NULL, + args, eqfunctions[natt], InvalidOid, + state); + scratch.opcode = EEOP_NOT_DISTINCT; + ExprEvalPushStep(state, &scratch); + + /* then emit EEOP_QUAL to detect if result is false (or null) */ + scratch.opcode = EEOP_QUAL; + scratch.d.qualexpr.jumpdone = -1; + ExprEvalPushStep(state, &scratch); + adjust_jumps = lappend_int(adjust_jumps, + state->steps_len - 1); + } + + /* adjust jump targets */ + foreach(lc, adjust_jumps) + { + ExprEvalStep *as = &state->steps[lfirst_int(lc)]; + + Assert(as->opcode == EEOP_QUAL); + Assert(as->d.qualexpr.jumpdone == -1); + as->d.qualexpr.jumpdone = state->steps_len; + } + + scratch.resvalue = NULL; + scratch.resnull = NULL; + scratch.opcode = EEOP_DONE; + ExprEvalPushStep(state, &scratch); + + ExecReadyExpr(state); + + return state; +} diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 9c6c2b02e9..771b7e3945 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -355,6 +355,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) &&CASE_EEOP_MAKE_READONLY, &&CASE_EEOP_IOCOERCE, &&CASE_EEOP_DISTINCT, + &&CASE_EEOP_NOT_DISTINCT, &&CASE_EEOP_NULLIF, &&CASE_EEOP_SQLVALUEFUNCTION, &&CASE_EEOP_CURRENTOFEXPR, @@ -1198,6 +1199,34 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_NEXT(); } + /* see EEOP_DISTINCT for comments, this is just inverted */ + EEO_CASE(EEOP_NOT_DISTINCT) + { + FunctionCallInfo fcinfo = op->d.func.fcinfo_data; + + if (fcinfo->argnull[0] && fcinfo->argnull[1]) + { + *op->resvalue = BoolGetDatum(true); + *op->resnull = false; + } + else if (fcinfo->argnull[0] || fcinfo->argnull[1]) + { + *op->resvalue = BoolGetDatum(false); + *op->resnull = false; + } + else + { + Datum eqresult; + + fcinfo->isnull = false; + eqresult = op->d.func.fn_addr(fcinfo); + *op->resvalue = eqresult; + *op->resnull = fcinfo->isnull; + } + + EEO_NEXT(); + } + EEO_CASE(EEOP_NULLIF) { /* diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c index 8e8dbb1f20..4f604fb286 100644 --- a/src/backend/executor/execGrouping.c +++ 
b/src/backend/executor/execGrouping.c @@ -51,173 +51,34 @@ static int TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tup * Utility routines for grouping tuples together *****************************************************************************/ -/* - * execTuplesMatch - * Return true if two tuples match in all the indicated fields. - * - * This actually implements SQL's notion of "not distinct". Two nulls - * match, a null and a not-null don't match. - * - * slot1, slot2: the tuples to compare (must have same columns!) - * numCols: the number of attributes to be examined - * matchColIdx: array of attribute column numbers - * eqFunctions: array of fmgr lookup info for the equality functions to use - * evalContext: short-term memory context for executing the functions - * - * NB: evalContext is reset each time! - */ -bool -execTuplesMatch(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext) -{ - MemoryContext oldContext; - bool result; - int i; - - /* Reset and switch into the temp context. */ - MemoryContextReset(evalContext); - oldContext = MemoryContextSwitchTo(evalContext); - - /* - * We cannot report a match without checking all the fields, but we can - * report a non-match as soon as we find unequal fields. So, start - * comparing at the last field (least significant sort key). That's the - * most likely to be different if we are dealing with sorted input. - */ - result = true; - - for (i = numCols; --i >= 0;) - { - AttrNumber att = matchColIdx[i]; - Datum attr1, - attr2; - bool isNull1, - isNull2; - - attr1 = slot_getattr(slot1, att, &isNull1); - - attr2 = slot_getattr(slot2, att, &isNull2); - - if (isNull1 != isNull2) - { - result = false; /* one null and one not; they aren't equal */ - break; - } - - if (isNull1) - continue; /* both are null, treat as equal */ - - /* Apply the type-specific equality function */ - - if (!DatumGetBool(FunctionCall2(&eqfunctions[i], - attr1, attr2))) - { - result = false; /* they aren't equal */ - break; - } - } - - MemoryContextSwitchTo(oldContext); - - return result; -} - -/* - * execTuplesUnequal - * Return true if two tuples are definitely unequal in the indicated - * fields. - * - * Nulls are neither equal nor unequal to anything else. A true result - * is obtained only if there are non-null fields that compare not-equal. - * - * Parameters are identical to execTuplesMatch. - */ -bool -execTuplesUnequal(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext) -{ - MemoryContext oldContext; - bool result; - int i; - - /* Reset and switch into the temp context. */ - MemoryContextReset(evalContext); - oldContext = MemoryContextSwitchTo(evalContext); - - /* - * We cannot report a match without checking all the fields, but we can - * report a non-match as soon as we find unequal fields. So, start - * comparing at the last field (least significant sort key). That's the - * most likely to be different if we are dealing with sorted input. 
- */ - result = false; - - for (i = numCols; --i >= 0;) - { - AttrNumber att = matchColIdx[i]; - Datum attr1, - attr2; - bool isNull1, - isNull2; - - attr1 = slot_getattr(slot1, att, &isNull1); - - if (isNull1) - continue; /* can't prove anything here */ - - attr2 = slot_getattr(slot2, att, &isNull2); - - if (isNull2) - continue; /* can't prove anything here */ - - /* Apply the type-specific equality function */ - - if (!DatumGetBool(FunctionCall2(&eqfunctions[i], - attr1, attr2))) - { - result = true; /* they are unequal */ - break; - } - } - - MemoryContextSwitchTo(oldContext); - - return result; -} - - /* * execTuplesMatchPrepare - * Look up the equality functions needed for execTuplesMatch or - * execTuplesUnequal, given an array of equality operator OIDs. - * - * The result is a palloc'd array. + * Build expression that can be evaluated using ExecQual(), returning + * whether an ExprContext's inner/outer tuples are NOT DISTINCT */ -FmgrInfo * -execTuplesMatchPrepare(int numCols, - Oid *eqOperators) +ExprState * +execTuplesMatchPrepare(TupleDesc desc, + int numCols, + AttrNumber *keyColIdx, + Oid *eqOperators, + PlanState *parent) { - FmgrInfo *eqFunctions = (FmgrInfo *) palloc(numCols * sizeof(FmgrInfo)); + Oid *eqFunctions = (Oid *) palloc(numCols * sizeof(Oid)); int i; + ExprState *expr; + + if (numCols == 0) + return NULL; + /* lookup equality functions */ for (i = 0; i < numCols; i++) - { - Oid eq_opr = eqOperators[i]; - Oid eq_function; + eqFunctions[i] = get_opcode(eqOperators[i]); - eq_function = get_opcode(eq_opr); - fmgr_info(eq_function, &eqFunctions[i]); - } + /* build actual expression */ + expr = ExecBuildGroupingEqual(desc, numCols, keyColIdx, eqFunctions, + parent); - return eqFunctions; + return expr; } /* @@ -288,7 +149,9 @@ execTuplesHashPrepare(int numCols, * storage that will live as long as the hashtable does. */ TupleHashTable -BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, +BuildTupleHashTable(PlanState *parent, + TupleDesc inputDesc, + int numCols, AttrNumber *keyColIdx, FmgrInfo *eqfunctions, FmgrInfo *hashfunctions, long nbuckets, Size additionalsize, @@ -297,6 +160,9 @@ BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, { TupleHashTable hashtable; Size entrysize = sizeof(TupleHashEntryData) + additionalsize; + MemoryContext oldcontext; + Oid *eqoids = (Oid *) palloc(numCols * sizeof(Oid)); + int i; Assert(nbuckets > 0); @@ -333,6 +199,26 @@ BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, hashtable->hashtab = tuplehash_create(tablecxt, nbuckets, hashtable); + oldcontext = MemoryContextSwitchTo(hashtable->tablecxt); + + /* + * We copy the input tuple descriptor just for safety --- we assume all + * input tuples will have equivalent descriptors. 
+ */ + hashtable->tableslot = MakeSingleTupleTableSlot(CreateTupleDescCopy(inputDesc)); + + /* build comparator for all columns */ + for (i = 0; i < numCols; i++) + eqoids[i] = eqfunctions[i].fn_oid; + hashtable->eq_func = ExecBuildGroupingEqual(inputDesc, + numCols, + keyColIdx, eqoids, + parent); + + MemoryContextSwitchTo(oldcontext); + + hashtable->exprcontext = CreateExprContext(parent->state); + return hashtable; } @@ -357,22 +243,6 @@ LookupTupleHashEntry(TupleHashTable hashtable, TupleTableSlot *slot, bool found; MinimalTuple key; - /* If first time through, clone the input slot to make table slot */ - if (hashtable->tableslot == NULL) - { - TupleDesc tupdesc; - - oldContext = MemoryContextSwitchTo(hashtable->tablecxt); - - /* - * We copy the input tuple descriptor just for safety --- we assume - * all input tuples will have equivalent descriptors. - */ - tupdesc = CreateTupleDescCopy(slot->tts_tupleDescriptor); - hashtable->tableslot = MakeSingleTupleTableSlot(tupdesc); - MemoryContextSwitchTo(oldContext); - } - /* Need to run the hash functions in short-lived context */ oldContext = MemoryContextSwitchTo(hashtable->tempcxt); @@ -524,9 +394,6 @@ TupleHashTableHash(struct tuplehash_hash *tb, const MinimalTuple tuple) * See whether two tuples (presumably of the same hash value) match * * As above, the passed pointers are pointers to TupleHashEntryData. - * - * Also, the caller must select an appropriate memory context for running - * the compare functions. (dynahash.c doesn't change CurrentMemoryContext.) */ static int TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const MinimalTuple tuple2) @@ -534,6 +401,7 @@ TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const TupleTableSlot *slot1; TupleTableSlot *slot2; TupleHashTable hashtable = (TupleHashTable) tb->private_data; + ExprContext *econtext = hashtable->exprcontext; /* * We assume that simplehash.h will only ever call us with the first @@ -548,13 +416,7 @@ TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const slot2 = hashtable->inputslot; /* For crosstype comparisons, the inputslot must be first */ - if (execTuplesMatch(slot2, - slot1, - hashtable->numCols, - hashtable->keyColIdx, - hashtable->cur_eq_funcs, - hashtable->tempcxt)) - return 0; - else - return 1; + econtext->ecxt_innertuple = slot1; + econtext->ecxt_outertuple = slot2; + return !ExecQualAndReset(hashtable->eq_func, econtext); } diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index a86d4b68ea..467f8d896e 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -755,7 +755,7 @@ process_ordered_aggregate_single(AggState *aggstate, ((oldIsNull && *isNull) || (!oldIsNull && !*isNull && oldAbbrevVal == newAbbrevVal && - DatumGetBool(FunctionCall2(&pertrans->equalfns[0], + DatumGetBool(FunctionCall2(&pertrans->equalfnOne, oldVal, *newVal))))) { /* equal to prior, so forget this one */ @@ -802,7 +802,7 @@ process_ordered_aggregate_multi(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroupstate) { - MemoryContext workcontext = aggstate->tmpcontext->ecxt_per_tuple_memory; + ExprContext *tmpcontext = aggstate->tmpcontext; FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo; TupleTableSlot *slot1 = pertrans->sortslot; TupleTableSlot *slot2 = pertrans->uniqslot; @@ -811,6 +811,7 @@ process_ordered_aggregate_multi(AggState *aggstate, Datum newAbbrevVal = (Datum) 0; Datum oldAbbrevVal = (Datum) 0; bool haveOldValue = false; + 
TupleTableSlot *save = aggstate->tmpcontext->ecxt_outertuple; int i; tuplesort_performsort(pertrans->sortstates[aggstate->current_set]); @@ -824,22 +825,20 @@ process_ordered_aggregate_multi(AggState *aggstate, { CHECK_FOR_INTERRUPTS(); - /* - * Extract the first numTransInputs columns as datums to pass to the - * transfn. (This will help execTuplesMatch too, so we do it - * immediately.) - */ - slot_getsomeattrs(slot1, numTransInputs); + tmpcontext->ecxt_outertuple = slot1; + tmpcontext->ecxt_innertuple = slot2; if (numDistinctCols == 0 || !haveOldValue || newAbbrevVal != oldAbbrevVal || - !execTuplesMatch(slot1, slot2, - numDistinctCols, - pertrans->sortColIdx, - pertrans->equalfns, - workcontext)) + !ExecQual(pertrans->equalfnMulti, tmpcontext)) { + /* + * Extract the first numTransInputs columns as datums to pass to + * the transfn. + */ + slot_getsomeattrs(slot1, numTransInputs); + /* Load values into fcinfo */ /* Start from 1, since the 0th arg will be the transition value */ for (i = 0; i < numTransInputs; i++) @@ -857,15 +856,14 @@ process_ordered_aggregate_multi(AggState *aggstate, slot2 = slot1; slot1 = tmpslot; - /* avoid execTuplesMatch() calls by reusing abbreviated keys */ + /* avoid ExecQual() calls by reusing abbreviated keys */ oldAbbrevVal = newAbbrevVal; haveOldValue = true; } } - /* Reset context each time, unless execTuplesMatch did it for us */ - if (numDistinctCols == 0) - MemoryContextReset(workcontext); + /* Reset context each time */ + ResetExprContext(tmpcontext); ExecClearTuple(slot1); } @@ -875,6 +873,9 @@ process_ordered_aggregate_multi(AggState *aggstate, tuplesort_end(pertrans->sortstates[aggstate->current_set]); pertrans->sortstates[aggstate->current_set] = NULL; + + /* restore previous slot, potentially in use for grouping sets */ + tmpcontext->ecxt_outertuple = save; } /* @@ -1276,7 +1277,9 @@ build_hash_table(AggState *aggstate) Assert(perhash->aggnode->numGroups > 0); - perhash->hashtable = BuildTupleHashTable(perhash->numCols, + perhash->hashtable = BuildTupleHashTable(&aggstate->ss.ps, + perhash->hashslot->tts_tupleDescriptor, + perhash->numCols, perhash->hashGrpColIdxHash, perhash->eqfunctions, perhash->hashfunctions, @@ -1314,6 +1317,7 @@ find_hash_columns(AggState *aggstate) Bitmapset *base_colnos; List *outerTlist = outerPlanState(aggstate)->plan->targetlist; int numHashes = aggstate->num_hashes; + EState *estate = aggstate->ss.ps.state; int j; /* Find Vars that will be needed in tlist and qual */ @@ -1393,6 +1397,12 @@ find_hash_columns(AggState *aggstate) } hashDesc = ExecTypeFromTL(hashTlist, false); + + execTuplesHashPrepare(perhash->numCols, + perhash->aggnode->grpOperators, + &perhash->eqfunctions, + &perhash->hashfunctions); + perhash->hashslot = ExecAllocTableSlot(&estate->es_tupleTable); ExecSetSlotDescriptor(perhash->hashslot, hashDesc); list_free(hashTlist); @@ -1694,17 +1704,14 @@ agg_retrieve_direct(AggState *aggstate) * of the next grouping set *---------- */ + tmpcontext->ecxt_innertuple = econtext->ecxt_outertuple; if (aggstate->input_done || (node->aggstrategy != AGG_PLAIN && aggstate->projected_set != -1 && aggstate->projected_set < (numGroupingSets - 1) && nextSetSize > 0 && - !execTuplesMatch(econtext->ecxt_outertuple, - tmpcontext->ecxt_outertuple, - nextSetSize, - node->grpColIdx, - aggstate->phase->eqfunctions, - tmpcontext->ecxt_per_tuple_memory))) + !ExecQualAndReset(aggstate->phase->eqfunctions[nextSetSize - 1], + tmpcontext))) { aggstate->projected_set += 1; @@ -1847,12 +1854,9 @@ agg_retrieve_direct(AggState *aggstate) 
*/ if (node->aggstrategy != AGG_PLAIN) { - if (!execTuplesMatch(firstSlot, - outerslot, - node->numCols, - node->grpColIdx, - aggstate->phase->eqfunctions, - tmpcontext->ecxt_per_tuple_memory)) + tmpcontext->ecxt_innertuple = firstSlot; + if (!ExecQual(aggstate->phase->eqfunctions[node->numCols - 1], + tmpcontext)) { aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot); break; @@ -2078,6 +2082,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) AggStatePerGroup *pergroups; Plan *outerPlan; ExprContext *econtext; + TupleDesc scanDesc; int numaggs, transno, aggno; @@ -2233,9 +2238,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) * initialize source tuple type. */ ExecAssignScanTypeFromOuterPlan(&aggstate->ss); + scanDesc = aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; if (node->chain) - ExecSetSlotDescriptor(aggstate->sort_slot, - aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); + ExecSetSlotDescriptor(aggstate->sort_slot, scanDesc); /* * Initialize result tuple type and projection info. @@ -2355,11 +2360,43 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) */ if (aggnode->aggstrategy == AGG_SORTED) { + int i = 0; + Assert(aggnode->numCols > 0); + /* + * Build a separate function for each subset of columns that + * need to be compared. + */ phasedata->eqfunctions = - execTuplesMatchPrepare(aggnode->numCols, - aggnode->grpOperators); + (ExprState **) palloc0(aggnode->numCols * sizeof(ExprState *)); + + /* for each grouping set */ + for (i = 0; i < phasedata->numsets; i++) + { + int length = phasedata->gset_lengths[i]; + + if (phasedata->eqfunctions[length - 1] != NULL) + continue; + + phasedata->eqfunctions[length - 1] = + execTuplesMatchPrepare(scanDesc, + length, + aggnode->grpColIdx, + aggnode->grpOperators, + (PlanState *) aggstate); + } + + /* and for all grouped columns, unless already computed */ + if (phasedata->eqfunctions[aggnode->numCols - 1] == NULL) + { + phasedata->eqfunctions[aggnode->numCols - 1] = + execTuplesMatchPrepare(scanDesc, + aggnode->numCols, + aggnode->grpColIdx, + aggnode->grpOperators, + (PlanState *) aggstate); + } } phasedata->aggnode = aggnode; @@ -2412,16 +2449,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) */ if (use_hashing) { - for (i = 0; i < numHashes; ++i) - { - aggstate->perhash[i].hashslot = ExecInitExtraTupleSlot(estate); - - execTuplesHashPrepare(aggstate->perhash[i].numCols, - aggstate->perhash[i].aggnode->grpOperators, - &aggstate->perhash[i].eqfunctions, - &aggstate->perhash[i].hashfunctions); - } - /* this is an array of pointers, not structures */ aggstate->hash_pergroup = pergroups; @@ -3101,24 +3128,28 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, if (aggref->aggdistinct) { + Oid *ops; + Assert(numArguments > 0); + Assert(list_length(aggref->aggdistinct) == numDistinctCols); - /* - * We need the equal function for each DISTINCT comparison we will - * make. 
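
The eqfunctions array built in ExecInitAgg above is indexed by comparison
width: with grouping sets over (a, b, c) of lengths {2, 3}, one equality
ExprState is built per distinct prefix length and stored at
eqfunctions[length - 1]. The lookup that agg_retrieve_direct then performs
amounts to the following fragment (names follow the hunks above; this is an
illustration of the indexing, not patch code):

/* boundary test for a grouping set spanning nextSetSize key columns */
ExprState  *eq = aggstate->phase->eqfunctions[nextSetSize - 1];

tmpcontext->ecxt_innertuple = econtext->ecxt_outertuple;	/* new tuple */
if (!ExecQualAndReset(eq, tmpcontext))	/* outer slot holds the old tuple */
	aggstate->projected_set += 1;		/* crossed a group boundary */
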
- */ - pertrans->equalfns = - (FmgrInfo *) palloc(numDistinctCols * sizeof(FmgrInfo)); + ops = palloc(numDistinctCols * sizeof(Oid)); i = 0; foreach(lc, aggref->aggdistinct) - { - SortGroupClause *sortcl = (SortGroupClause *) lfirst(lc); + ops[i++] = ((SortGroupClause *) lfirst(lc))->eqop; - fmgr_info(get_opcode(sortcl->eqop), &pertrans->equalfns[i]); - i++; - } - Assert(i == numDistinctCols); + /* lookup / build the necessary comparators */ + if (numDistinctCols == 1) + fmgr_info(get_opcode(ops[0]), &pertrans->equalfnOne); + else + pertrans->equalfnMulti = + execTuplesMatchPrepare(pertrans->sortdesc, + numDistinctCols, + pertrans->sortColIdx, + ops, + &aggstate->ss.ps); + pfree(ops); } pertrans->sortstates = (Tuplesortstate **) diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c index f1cdbaa4e6..8f7bf459ef 100644 --- a/src/backend/executor/nodeGroup.c +++ b/src/backend/executor/nodeGroup.c @@ -25,6 +25,7 @@ #include "executor/executor.h" #include "executor/nodeGroup.h" #include "miscadmin.h" +#include "utils/memutils.h" /* @@ -37,8 +38,6 @@ ExecGroup(PlanState *pstate) { GroupState *node = castNode(GroupState, pstate); ExprContext *econtext; - int numCols; - AttrNumber *grpColIdx; TupleTableSlot *firsttupleslot; TupleTableSlot *outerslot; @@ -50,8 +49,6 @@ ExecGroup(PlanState *pstate) if (node->grp_done) return NULL; econtext = node->ss.ps.ps_ExprContext; - numCols = ((Group *) node->ss.ps.plan)->numCols; - grpColIdx = ((Group *) node->ss.ps.plan)->grpColIdx; /* * The ScanTupleSlot holds the (copied) first tuple of each group. @@ -59,7 +56,7 @@ ExecGroup(PlanState *pstate) firsttupleslot = node->ss.ss_ScanTupleSlot; /* - * We need not call ResetExprContext here because execTuplesMatch will + * We need not call ResetExprContext here because ExecQualAndReset() will * reset the per-tuple memory context once per input tuple. */ @@ -124,10 +121,9 @@ ExecGroup(PlanState *pstate) * Compare with first tuple and see if this tuple is of the same * group. If so, ignore it and keep scanning. 
 		 */
-		if (!execTuplesMatch(firsttupleslot, outerslot,
-							 numCols, grpColIdx,
-							 node->eqfunctions,
-							 econtext->ecxt_per_tuple_memory))
+		econtext->ecxt_innertuple = firsttupleslot;
+		econtext->ecxt_outertuple = outerslot;
+		if (!ExecQualAndReset(node->eqfunction, econtext))
 			break;
 	}
 
@@ -166,6 +162,7 @@ GroupState *
 ExecInitGroup(Group *node, EState *estate, int eflags)
 {
 	GroupState *grpstate;
+	AttrNumber *grpColIdx = node->grpColIdx;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -215,9 +212,12 @@ ExecInitGroup(Group *node, EState *estate, int eflags)
 	/*
 	 * Precompute fmgr lookup data for inner loop
 	 */
-	grpstate->eqfunctions =
-		execTuplesMatchPrepare(node->numCols,
-							   node->grpOperators);
+	grpstate->eqfunction =
+		execTuplesMatchPrepare(ExecGetResultType(outerPlanState(grpstate)),
+							   node->numCols,
+							   grpColIdx,
+							   node->grpOperators,
+							   &grpstate->ss.ps);
 
 	return grpstate;
 }
diff --git a/src/backend/executor/nodeRecursiveunion.c b/src/backend/executor/nodeRecursiveunion.c
index 817749855f..c070338fdb 100644
--- a/src/backend/executor/nodeRecursiveunion.c
+++ b/src/backend/executor/nodeRecursiveunion.c
@@ -32,11 +32,14 @@ static void
 build_hash_table(RecursiveUnionState *rustate)
 {
 	RecursiveUnion *node = (RecursiveUnion *) rustate->ps.plan;
+	TupleDesc	desc = ExecGetResultType(outerPlanState(rustate));
 
 	Assert(node->numCols > 0);
 	Assert(node->numGroups > 0);
 
-	rustate->hashtable = BuildTupleHashTable(node->numCols,
+	rustate->hashtable = BuildTupleHashTable(&rustate->ps,
+											 desc,
+											 node->numCols,
 											 node->dupColIdx,
 											 rustate->eqfunctions,
 											 rustate->hashfunctions,
diff --git a/src/backend/executor/nodeSetOp.c b/src/backend/executor/nodeSetOp.c
index c91c3402d2..ba2d3159c0 100644
--- a/src/backend/executor/nodeSetOp.c
+++ b/src/backend/executor/nodeSetOp.c
@@ -120,18 +120,22 @@ static void
 build_hash_table(SetOpState *setopstate)
 {
 	SetOp	   *node = (SetOp *) setopstate->ps.plan;
+	ExprContext *econtext = setopstate->ps.ps_ExprContext;
+	TupleDesc	desc = ExecGetResultType(outerPlanState(setopstate));
 
 	Assert(node->strategy == SETOP_HASHED);
 	Assert(node->numGroups > 0);
 
-	setopstate->hashtable = BuildTupleHashTable(node->numCols,
+	setopstate->hashtable = BuildTupleHashTable(&setopstate->ps,
+												desc,
+												node->numCols,
 												node->dupColIdx,
 												setopstate->eqfunctions,
 												setopstate->hashfunctions,
 												node->numGroups,
 												0,
 												setopstate->tableContext,
-												setopstate->tempContext,
+												econtext->ecxt_per_tuple_memory,
 												false);
 }
 
@@ -220,11 +224,11 @@ ExecSetOp(PlanState *pstate)
 static TupleTableSlot *
 setop_retrieve_direct(SetOpState *setopstate)
 {
-	SetOp	   *node = (SetOp *) setopstate->ps.plan;
 	PlanState  *outerPlan;
 	SetOpStatePerGroup pergroup;
 	TupleTableSlot *outerslot;
 	TupleTableSlot *resultTupleSlot;
+	ExprContext *econtext = setopstate->ps.ps_ExprContext;
 
 	/*
 	 * get state info from node
@@ -292,11 +296,10 @@ setop_retrieve_direct(SetOpState *setopstate)
 			/*
 			 * Check whether we've crossed a group boundary.
 			 */
-			if (!execTuplesMatch(resultTupleSlot,
-								 outerslot,
-								 node->numCols, node->dupColIdx,
-								 setopstate->eqfunctions,
-								 setopstate->tempContext))
+			econtext->ecxt_outertuple = resultTupleSlot;
+			econtext->ecxt_innertuple = outerslot;
+
+			if (!ExecQualAndReset(setopstate->eqfunction, econtext))
 			{
 				/*
 				 * Save the first input tuple of the next group.
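
Each call site converted in this patch follows the same shape: load the two
tuples into the ExprContext's inner and outer slots, then evaluate the
prebuilt equality ExprState as a qual. ExecQualAndReset() additionally resets
the per-tuple memory context that the old code reset by hand. A sketch of the
convention, assuming the in-tree executor headers; the helper name
tuples_match() is illustrative and not part of the patch:

#include "postgres.h"
#include "executor/executor.h"

bool
tuples_match(ExprState *eqfunction, ExprContext *econtext,
			 TupleTableSlot *slot1, TupleTableSlot *slot2)
{
	/* the generated expression compares inner against outer */
	econtext->ecxt_innertuple = slot1;
	econtext->ecxt_outertuple = slot2;

	/* evaluate the equality qual and reset per-tuple memory */
	return ExecQualAndReset(eqfunction, econtext);
}
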
@@ -338,6 +341,7 @@ setop_fill_hash_table(SetOpState *setopstate) PlanState *outerPlan; int firstFlag; bool in_first_rel PG_USED_FOR_ASSERTS_ONLY; + ExprContext *econtext = setopstate->ps.ps_ExprContext; /* * get state info from node @@ -404,8 +408,8 @@ setop_fill_hash_table(SetOpState *setopstate) advance_counts((SetOpStatePerGroup) entry->additional, flag); } - /* Must reset temp context after each hashtable lookup */ - MemoryContextReset(setopstate->tempContext); + /* Must reset expression context after each hashtable lookup */ + ResetExprContext(econtext); } setopstate->table_filled = true; @@ -476,6 +480,7 @@ SetOpState * ExecInitSetOp(SetOp *node, EState *estate, int eflags) { SetOpState *setopstate; + TupleDesc outerDesc; /* check for unsupported flags */ Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -498,16 +503,9 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) setopstate->tableContext = NULL; /* - * Miscellaneous initialization - * - * SetOp nodes have no ExprContext initialization because they never call - * ExecQual or ExecProject. But they do need a per-tuple memory context - * anyway for calling execTuplesMatch. + * create expression context */ - setopstate->tempContext = - AllocSetContextCreate(CurrentMemoryContext, - "SetOp", - ALLOCSET_DEFAULT_SIZES); + ExecAssignExprContext(estate, &setopstate->ps); /* * If hashing, we also need a longer-lived context to store the hash @@ -534,6 +532,7 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) if (node->strategy == SETOP_HASHED) eflags &= ~EXEC_FLAG_REWIND; outerPlanState(setopstate) = ExecInitNode(outerPlan(node), estate, eflags); + outerDesc = ExecGetResultType(outerPlanState(setopstate)); /* * setop nodes do no projections, so initialize projection info for this @@ -553,9 +552,12 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) &setopstate->eqfunctions, &setopstate->hashfunctions); else - setopstate->eqfunctions = - execTuplesMatchPrepare(node->numCols, - node->dupOperators); + setopstate->eqfunction = + execTuplesMatchPrepare(outerDesc, + node->numCols, + node->dupColIdx, + node->dupOperators, + &setopstate->ps); if (node->strategy == SETOP_HASHED) { @@ -585,9 +587,9 @@ ExecEndSetOp(SetOpState *node) ExecClearTuple(node->ps.ps_ResultTupleSlot); /* free subsidiary stuff including hashtable */ - MemoryContextDelete(node->tempContext); if (node->tableContext) MemoryContextDelete(node->tableContext); + ExecFreeExprContext(&node->ps); ExecEndNode(outerPlanState(node)); } diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c index edf7d034bd..fcf739b5e2 100644 --- a/src/backend/executor/nodeSubplan.c +++ b/src/backend/executor/nodeSubplan.c @@ -494,7 +494,9 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) if (nbuckets < 1) nbuckets = 1; - node->hashtable = BuildTupleHashTable(ncols, + node->hashtable = BuildTupleHashTable(node->parent, + node->descRight, + ncols, node->keyColIdx, node->tab_eq_funcs, node->tab_hash_funcs, @@ -514,7 +516,9 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) if (nbuckets < 1) nbuckets = 1; } - node->hashnulls = BuildTupleHashTable(ncols, + node->hashnulls = BuildTupleHashTable(node->parent, + node->descRight, + ncols, node->keyColIdx, node->tab_eq_funcs, node->tab_hash_funcs, @@ -598,6 +602,78 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) MemoryContextSwitchTo(oldcontext); } + +/* + * execTuplesUnequal + * Return true if two tuples are definitely unequal in the indicated + * fields. 
+ * + * Nulls are neither equal nor unequal to anything else. A true result + * is obtained only if there are non-null fields that compare not-equal. + * + * slot1, slot2: the tuples to compare (must have same columns!) + * numCols: the number of attributes to be examined + * matchColIdx: array of attribute column numbers + * eqFunctions: array of fmgr lookup info for the equality functions to use + * evalContext: short-term memory context for executing the functions + */ +static bool +execTuplesUnequal(TupleTableSlot *slot1, + TupleTableSlot *slot2, + int numCols, + AttrNumber *matchColIdx, + FmgrInfo *eqfunctions, + MemoryContext evalContext) +{ + MemoryContext oldContext; + bool result; + int i; + + /* Reset and switch into the temp context. */ + MemoryContextReset(evalContext); + oldContext = MemoryContextSwitchTo(evalContext); + + /* + * We cannot report a match without checking all the fields, but we can + * report a non-match as soon as we find unequal fields. So, start + * comparing at the last field (least significant sort key). That's the + * most likely to be different if we are dealing with sorted input. + */ + result = false; + + for (i = numCols; --i >= 0;) + { + AttrNumber att = matchColIdx[i]; + Datum attr1, + attr2; + bool isNull1, + isNull2; + + attr1 = slot_getattr(slot1, att, &isNull1); + + if (isNull1) + continue; /* can't prove anything here */ + + attr2 = slot_getattr(slot2, att, &isNull2); + + if (isNull2) + continue; /* can't prove anything here */ + + /* Apply the type-specific equality function */ + + if (!DatumGetBool(FunctionCall2(&eqfunctions[i], + attr1, attr2))) + { + result = true; /* they are unequal */ + break; + } + } + + MemoryContextSwitchTo(oldContext); + + return result; +} + /* * findPartialMatch: does the hashtable contain an entry that is not * provably distinct from the tuple? @@ -887,6 +963,7 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) NULL); tupDesc = ExecTypeFromTL(righttlist, false); + sstate->descRight = tupDesc; slot = ExecInitExtraTupleSlot(estate); ExecSetSlotDescriptor(slot, tupDesc); sstate->projRight = ExecBuildProjectionInfo(righttlist, diff --git a/src/backend/executor/nodeUnique.c b/src/backend/executor/nodeUnique.c index e330650593..9f823c58e1 100644 --- a/src/backend/executor/nodeUnique.c +++ b/src/backend/executor/nodeUnique.c @@ -47,7 +47,7 @@ static TupleTableSlot * /* return: a tuple or NULL */ ExecUnique(PlanState *pstate) { UniqueState *node = castNode(UniqueState, pstate); - Unique *plannode = (Unique *) node->ps.plan; + ExprContext *econtext = node->ps.ps_ExprContext; TupleTableSlot *resultTupleSlot; TupleTableSlot *slot; PlanState *outerPlan; @@ -89,10 +89,9 @@ ExecUnique(PlanState *pstate) * If so then we loop back and fetch another new tuple from the * subplan. */ - if (!execTuplesMatch(slot, resultTupleSlot, - plannode->numCols, plannode->uniqColIdx, - node->eqfunctions, - node->tempContext)) + econtext->ecxt_innertuple = slot; + econtext->ecxt_outertuple = resultTupleSlot; + if (!ExecQualAndReset(node->eqfunction, econtext)) break; } @@ -129,16 +128,9 @@ ExecInitUnique(Unique *node, EState *estate, int eflags) uniquestate->ps.ExecProcNode = ExecUnique; /* - * Miscellaneous initialization - * - * Unique nodes have no ExprContext initialization because they never call - * ExecQual or ExecProject. But they do need a per-tuple memory context - * anyway for calling execTuplesMatch. 
+ * create expression context */ - uniquestate->tempContext = - AllocSetContextCreate(CurrentMemoryContext, - "Unique", - ALLOCSET_DEFAULT_SIZES); + ExecAssignExprContext(estate, &uniquestate->ps); /* * Tuple table initialization @@ -160,9 +152,12 @@ ExecInitUnique(Unique *node, EState *estate, int eflags) /* * Precompute fmgr lookup data for inner loop */ - uniquestate->eqfunctions = - execTuplesMatchPrepare(node->numCols, - node->uniqOperators); + uniquestate->eqfunction = + execTuplesMatchPrepare(ExecGetResultType(outerPlanState(uniquestate)), + node->numCols, + node->uniqColIdx, + node->uniqOperators, + &uniquestate->ps); return uniquestate; } @@ -180,7 +175,7 @@ ExecEndUnique(UniqueState *node) /* clean up tuple table */ ExecClearTuple(node->ps.ps_ResultTupleSlot); - MemoryContextDelete(node->tempContext); + ExecFreeExprContext(&node->ps); ExecEndNode(outerPlanState(node)); } diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index f6412576f4..1c807a8292 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -1272,12 +1272,13 @@ spool_tuples(WindowAggState *winstate, int64 pos) if (node->partNumCols > 0) { + ExprContext *econtext = winstate->tmpcontext; + + econtext->ecxt_innertuple = winstate->first_part_slot; + econtext->ecxt_outertuple = outerslot; + /* Check if this tuple still belongs to the current partition */ - if (!execTuplesMatch(winstate->first_part_slot, - outerslot, - node->partNumCols, node->partColIdx, - winstate->partEqfunctions, - winstate->tmpcontext->ecxt_per_tuple_memory)) + if (!ExecQualAndReset(winstate->partEqfunction, econtext)) { /* * end of partition; copy the tuple for the next cycle. @@ -2245,6 +2246,7 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) wfuncno, numaggs, aggno; + TupleDesc scanDesc; ListCell *l; /* check for unsupported flags */ @@ -2327,6 +2329,7 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) * store in the tuplestore and use in all our working slots). */ ExecAssignScanTypeFromOuterPlan(&winstate->ss); + scanDesc = winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; ExecSetSlotDescriptor(winstate->first_part_slot, winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); @@ -2351,11 +2354,20 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) /* Set up data for comparing tuples */ if (node->partNumCols > 0) - winstate->partEqfunctions = execTuplesMatchPrepare(node->partNumCols, - node->partOperators); + winstate->partEqfunction = + execTuplesMatchPrepare(scanDesc, + node->partNumCols, + node->partColIdx, + node->partOperators, + &winstate->ss.ps); + if (node->ordNumCols > 0) - winstate->ordEqfunctions = execTuplesMatchPrepare(node->ordNumCols, - node->ordOperators); + winstate->ordEqfunction = + execTuplesMatchPrepare(scanDesc, + node->ordNumCols, + node->ordColIdx, + node->ordOperators, + &winstate->ss.ps); /* * WindowAgg nodes use aggvalues and aggnulls as well as Agg nodes. 
@@ -2879,15 +2891,15 @@ are_peers(WindowAggState *winstate, TupleTableSlot *slot1, TupleTableSlot *slot2) { WindowAgg *node = (WindowAgg *) winstate->ss.ps.plan; + ExprContext *econtext = winstate->tmpcontext; /* If no ORDER BY, all rows are peers with each other */ if (node->ordNumCols == 0) return true; - return execTuplesMatch(slot1, slot2, - node->ordNumCols, node->ordColIdx, - winstate->ordEqfunctions, - winstate->tmpcontext->ecxt_per_tuple_memory); + econtext->ecxt_outertuple = slot1; + econtext->ecxt_innertuple = slot2; + return ExecQualAndReset(winstate->ordEqfunction, econtext); } /* diff --git a/src/backend/utils/adt/orderedsetaggs.c b/src/backend/utils/adt/orderedsetaggs.c index 63d9c67027..50b34fcbc6 100644 --- a/src/backend/utils/adt/orderedsetaggs.c +++ b/src/backend/utils/adt/orderedsetaggs.c @@ -27,6 +27,7 @@ #include "utils/array.h" #include "utils/builtins.h" #include "utils/lsyscache.h" +#include "utils/memutils.h" #include "utils/timestamp.h" #include "utils/tuplesort.h" @@ -54,6 +55,8 @@ typedef struct OSAPerQueryState Aggref *aggref; /* Memory context containing this struct and other per-query data: */ MemoryContext qcontext; + /* Context for expression evaluation */ + ExprContext *econtext; /* Do we expect multiple final-function calls within one group? */ bool rescan_needed; @@ -71,7 +74,7 @@ typedef struct OSAPerQueryState Oid *sortCollations; bool *sortNullsFirsts; /* Equality operator call info, created only if needed: */ - FmgrInfo *equalfns; + ExprState *compareTuple; /* These fields are used only when accumulating datums: */ @@ -1287,6 +1290,8 @@ hypothetical_cume_dist_final(PG_FUNCTION_ARGS) Datum hypothetical_dense_rank_final(PG_FUNCTION_ARGS) { + ExprContext *econtext; + ExprState *compareTuple; int nargs = PG_NARGS() - 1; int64 rank = 1; int64 duplicate_count = 0; @@ -1294,12 +1299,9 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) int numDistinctCols; Datum abbrevVal = (Datum) 0; Datum abbrevOld = (Datum) 0; - AttrNumber *sortColIdx; - FmgrInfo *equalfns; TupleTableSlot *slot; TupleTableSlot *extraslot; TupleTableSlot *slot2; - MemoryContext tmpcontext; int i; Assert(AggCheckCallContext(fcinfo, NULL) == AGG_CONTEXT_AGGREGATE); @@ -1309,6 +1311,9 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) PG_RETURN_INT64(rank); osastate = (OSAPerGroupState *) PG_GETARG_POINTER(0); + econtext = osastate->qstate->econtext; + if (!econtext) + osastate->qstate->econtext = econtext = CreateStandaloneExprContext(); /* Adjust nargs to be the number of direct (or aggregated) args */ if (nargs % 2 != 0) @@ -1323,26 +1328,22 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) */ numDistinctCols = osastate->qstate->numSortCols - 1; - /* Look up the equality function(s), if we didn't already */ - equalfns = osastate->qstate->equalfns; - if (equalfns == NULL) + /* Build tuple comparator, if we didn't already */ + compareTuple = osastate->qstate->compareTuple; + if (compareTuple == NULL) { - MemoryContext qcontext = osastate->qstate->qcontext; - - equalfns = (FmgrInfo *) - MemoryContextAlloc(qcontext, numDistinctCols * sizeof(FmgrInfo)); - for (i = 0; i < numDistinctCols; i++) - { - fmgr_info_cxt(get_opcode(osastate->qstate->eqOperators[i]), - &equalfns[i], - qcontext); - } - osastate->qstate->equalfns = equalfns; + AttrNumber *sortColIdx = osastate->qstate->sortColIdx; + MemoryContext oldContext; + + oldContext = MemoryContextSwitchTo(osastate->qstate->qcontext); + compareTuple = execTuplesMatchPrepare(osastate->qstate->tupdesc, + numDistinctCols, + sortColIdx, + 
osastate->qstate->eqOperators, + NULL); + MemoryContextSwitchTo(oldContext); + osastate->qstate->compareTuple = compareTuple; } - sortColIdx = osastate->qstate->sortColIdx; - - /* Get short-term context we can use for execTuplesMatch */ - tmpcontext = AggGetTempMemoryContext(fcinfo); /* because we need a hypothetical row, we can't share transition state */ Assert(!osastate->sort_done); @@ -1385,19 +1386,18 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) break; /* count non-distinct tuples */ + econtext->ecxt_outertuple = slot; + econtext->ecxt_innertuple = slot2; + if (!TupIsNull(slot2) && abbrevVal == abbrevOld && - execTuplesMatch(slot, slot2, - numDistinctCols, - sortColIdx, - equalfns, - tmpcontext)) + ExecQualAndReset(compareTuple, econtext)) duplicate_count++; tmpslot = slot2; slot2 = slot; slot = tmpslot; - /* avoid execTuplesMatch() calls by reusing abbreviated keys */ + /* avoid ExecQual() calls by reusing abbreviated keys */ abbrevOld = abbrevVal; rank++; diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h index 117fc892f4..0cab431f65 100644 --- a/src/include/executor/execExpr.h +++ b/src/include/executor/execExpr.h @@ -148,6 +148,7 @@ typedef enum ExprEvalOp /* evaluate assorted special-purpose expression types */ EEOP_IOCOERCE, EEOP_DISTINCT, + EEOP_NOT_DISTINCT, EEOP_NULLIF, EEOP_SQLVALUEFUNCTION, EEOP_CURRENTOFEXPR, diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index 1d824eff36..f648af2789 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -113,25 +113,18 @@ extern bool execCurrentOf(CurrentOfExpr *cexpr, /* * prototypes from functions in execGrouping.c */ -extern bool execTuplesMatch(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext); -extern bool execTuplesUnequal(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext); -extern FmgrInfo *execTuplesMatchPrepare(int numCols, - Oid *eqOperators); +extern ExprState *execTuplesMatchPrepare(TupleDesc desc, + int numCols, + AttrNumber *keyColIdx, + Oid *eqOperators, + PlanState *parent); extern void execTuplesHashPrepare(int numCols, Oid *eqOperators, FmgrInfo **eqFunctions, FmgrInfo **hashFunctions); -extern TupleHashTable BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, +extern TupleHashTable BuildTupleHashTable(PlanState *parent, + TupleDesc inputDesc, + int numCols, AttrNumber *keyColIdx, FmgrInfo *eqfunctions, FmgrInfo *hashfunctions, long nbuckets, Size additionalsize, @@ -257,6 +250,11 @@ extern ExprState *ExecInitCheck(List *qual, PlanState *parent); extern List *ExecInitExprList(List *nodes, PlanState *parent); extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase, bool doSort, bool doHash); +extern ExprState *ExecBuildGroupingEqual(TupleDesc desc, + int numCols, + AttrNumber *keyColIdx, + Oid *eqfunctions, + PlanState *parent); extern ProjectionInfo *ExecBuildProjectionInfo(List *targetList, ExprContext *econtext, TupleTableSlot *slot, diff --git a/src/include/executor/nodeAgg.h b/src/include/executor/nodeAgg.h index 3b06db86fd..24be7d2daa 100644 --- a/src/include/executor/nodeAgg.h +++ b/src/include/executor/nodeAgg.h @@ -102,11 +102,12 @@ typedef struct AggStatePerTransData bool *sortNullsFirst; /* - * fmgr lookup data for input columns' equality operators --- only - * set/used when aggregate has 
DISTINCT flag. Note that these are in
-	 * order of sort column index, not parameter index.
+	 * Comparators for input columns --- only set/used when aggregate has
+	 * DISTINCT flag. equalfnOne version is used for single-column
+	 * comparisons, equalfnMulti for the case of multiple columns.
 	 */
-	FmgrInfo   *equalfns;		/* array of length numDistinctCols */
+	FmgrInfo	equalfnOne;
+	ExprState  *equalfnMulti;
 
 	/*
 	 * initial value from pg_aggregate entry
@@ -270,7 +271,8 @@ typedef struct AggStatePerPhaseData
 	int			numsets;		/* number of grouping sets (or 0) */
 	int		   *gset_lengths;	/* lengths of grouping sets */
 	Bitmapset **grouped_cols;	/* column groupings for rollup */
-	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
+	ExprState **eqfunctions;	/* expression returning equality, indexed by
+								 * number of cols to compare */
 
 	Agg		   *aggnode;		/* Agg node for phase data */
 	Sort	   *sortnode;		/* Sort node for input ordering for phase */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 286d55be03..74c359901c 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -635,6 +635,8 @@ typedef struct TupleHashTableData
 	FmgrInfo   *in_hash_funcs;	/* hash functions for input datatype(s) */
 	FmgrInfo   *cur_eq_funcs;	/* equality functions for input vs. table */
 	uint32		hash_iv;		/* hash-function IV */
+	ExprState  *eq_func;		/* tuple equality comparator */
+	ExprContext *exprcontext;	/* expression context */
 } TupleHashTableData;
 
 typedef tuplehash_iterator TupleHashIterator;
@@ -781,6 +783,7 @@ typedef struct SubPlanState
 	HeapTuple	curTuple;		/* copy of most recent tuple from subplan */
 	Datum		curArray;		/* most recent array from ARRAY() subplan */
 	/* these are used when hashing the subselect's output: */
+	TupleDesc	descRight;		/* subselect desc after projection */
 	ProjectionInfo *projLeft;	/* for projecting lefthand exprs */
 	ProjectionInfo *projRight;	/* for projecting subselect output */
 	TupleHashTable hashtable;	/* hash table for no-nulls subselect rows */
@@ -1795,7 +1798,7 @@ typedef struct SortState
 typedef struct GroupState
 {
 	ScanState	ss;				/* its first field is NodeTag */
-	FmgrInfo   *eqfunctions;	/* per-field lookup data for equality fns */
+	ExprState  *eqfunction;		/* equality function */
 	bool		grp_done;		/* indicates completion of Group scan */
 } GroupState;
 
@@ -1885,8 +1888,8 @@ typedef struct WindowAggState
 	WindowStatePerFunc perfunc; /* per-window-function information */
 	WindowStatePerAgg peragg;	/* per-plain-aggregate information */
 
-	FmgrInfo   *partEqfunctions;	/* equality funcs for partition columns */
-	FmgrInfo   *ordEqfunctions; /* equality funcs for ordering columns */
+	ExprState  *partEqfunction; /* equality func for partition columns */
+	ExprState  *ordEqfunction;	/* equality func for ordering columns */
 	Tuplestorestate *buffer;	/* stores rows of current partition */
 	int			current_ptr;	/* read pointer # for current row */
 	int			framehead_ptr; /* read pointer # for frame head, if used */
@@ -1964,8 +1967,7 @@ typedef struct WindowAggState
 typedef struct UniqueState
 {
 	PlanState	ps;				/* its first field is NodeTag */
-	FmgrInfo   *eqfunctions;	/* per-field lookup data for equality fns */
-	MemoryContext tempContext;	/* short-term context for comparisons */
+	ExprState  *eqfunction;		/* tuple equality qual */
 } UniqueState;
 
 /* ----------------
@@ -2079,11 +2081,11 @@ typedef struct SetOpStatePerGroupData *SetOpStatePerGroup;
 typedef struct SetOpState
 {
 	PlanState	ps;				/* its first field is NodeTag */
+	ExprState  *eqfunction;		/* equality comparator */
 	FmgrInfo   *eqfunctions;	/* per-grouping-field equality fns */
 	FmgrInfo   *hashfunctions;	/* per-grouping-field hash fns */
 	bool		setop_done;		/* indicates completion of output scan */
 	long		numOutput;		/* number of dups left to output */
-	MemoryContext tempContext;	/* short-term context for comparisons */
 	/* these fields are used in SETOP_SORTED mode: */
 	SetOpStatePerGroup pergroup;	/* per-group working state */
 	HeapTuple	grp_firstTuple; /* copy of first tuple of current group */

From 2a41507dab0f293ff241fe8ae326065998668af8 Mon Sep 17 00:00:00 2001
From: Andres Freund
Date: Thu, 15 Feb 2018 22:39:18 -0800
Subject: [PATCH 1003/1087] Revert "Do execGrouping.c via expression eval machinery."

This reverts commit 773aec7aa98abd38d6d9435913bb8e14e392c274.

There's an unresolved issue in the reverted commit: it only creates one
comparator function, but for the nodeSubplan.c case we need more (cf. the
FindTupleHashEntry vs LookupTupleHashEntry calls in nodeSubplan.c). This
isn't too difficult to fix, but it's not entirely trivial either. The fact
that the issue only causes breakage on 32-bit systems shows that the
current test coverage isn't that great.

To avoid turning half the buildfarm red till those two issues are
addressed, revert.
---
 src/backend/executor/execExpr.c           | 118 -----------
 src/backend/executor/execExprInterp.c     |  29 ---
 src/backend/executor/execGrouping.c       | 236 +++++++++++++++++-----
 src/backend/executor/nodeAgg.c            | 143 +++++--------
 src/backend/executor/nodeGroup.c          |  24 +--
 src/backend/executor/nodeRecursiveunion.c |   5 +-
 src/backend/executor/nodeSetOp.c          |  48 +++--
 src/backend/executor/nodeSubplan.c        |  81 +-------
 src/backend/executor/nodeUnique.c         |  31 +--
 src/backend/executor/nodeWindowAgg.c      |  38 ++--
 src/backend/utils/adt/orderedsetaggs.c    |  56 ++---
 src/include/executor/execExpr.h           |   1 -
 src/include/executor/executor.h           |  28 +--
 src/include/executor/nodeAgg.h            |  12 +-
 src/include/nodes/execnodes.h             |  14 +-
 15 files changed, 366 insertions(+), 498 deletions(-)

diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index 47c1c1a49b..c6eb3ebacf 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -3193,121 +3193,3 @@ ExecBuildAggTransCall(ExprState *state, AggState *aggstate,
 		as->d.agg_strict_trans_check.jumpnull = state->steps_len;
 	}
 }
-
-/*
- * Build equality expression that can be evaluated using ExecQual(), returning
- * true if the expression context's inner/outer tuple are NOT DISTINCT. I.e
- * two nulls match, a null and a not-null don't match.
- *
- * desc: tuple descriptor of the to-be-compared tuples
- * numCols: the number of attributes to be examined
- * keyColIdx: array of attribute column numbers
- * eqFunctions: array of function oids of the equality functions to use
- * parent: parent executor node
- */
-ExprState *
-ExecBuildGroupingEqual(TupleDesc desc,
-					   int numCols,
-					   AttrNumber *keyColIdx,
-					   Oid *eqfunctions,
-					   PlanState *parent)
-{
-	ExprState  *state = makeNode(ExprState);
-	ExprEvalStep scratch = {0};
-	int			natt;
-	int			maxatt = -1;
-	List	   *adjust_jumps = NIL;
-	ListCell   *lc;
-
-	/*
-	 * When no columns are actually compared, the result's always true. See
-	 * special case in ExecQual().
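
The nodeSubplan.c gap described above can be seen in the TupleHashTableData
fields earlier in this series: the table is probed both through
LookupTupleHashEntry(), which uses the equality functions baked into the
table, and through FindTupleHashEntry(), where the caller supplies separate
(potentially cross-type) equality functions. A single eq_func ExprState
attached to the table can serve only the first entry point. A sketch of the
two call shapes, using the signatures of this era; the surrounding variables
are assumed:

/* insertion / same-type probe: the table's own equality functions */
entry = LookupTupleHashEntry(hashtable, slot, &isnew);

/* nodeSubplan.c probe: caller-supplied cross-type equality functions,
 * which one prebuilt comparator expression cannot express */
entry = FindTupleHashEntry(hashtable, slot,
						   hashtable->cur_eq_funcs,
						   hashtable->in_hash_funcs);
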
- */ - if (numCols == 0) - return NULL; - - state->expr = NULL; - state->flags = EEO_FLAG_IS_QUAL; - state->parent = parent; - - scratch.resvalue = &state->resvalue; - scratch.resnull = &state->resnull; - - /* compute max needed attribute */ - for (natt = 0; natt < numCols; natt++) - { - int attno = keyColIdx[natt]; - - if (attno > maxatt) - maxatt = attno; - } - Assert(maxatt >= 0); - - /* push deform steps */ - scratch.opcode = EEOP_INNER_FETCHSOME; - scratch.d.fetch.last_var = maxatt; - ExprEvalPushStep(state, &scratch); - - scratch.opcode = EEOP_OUTER_FETCHSOME; - scratch.d.fetch.last_var = maxatt; - ExprEvalPushStep(state, &scratch); - - /* - * Start comparing at the last field (least significant sort key). That's - * the most likely to be different if we are dealing with sorted input. - */ - for (natt = numCols; --natt >= 0;) - { - int attno = keyColIdx[natt]; - Form_pg_attribute att = TupleDescAttr(desc, attno - 1); - Var *larg, - *rarg; - List *args; - - /* - * Reusing ExecInitFunc() requires creating Vars, but still seems - * worth it from a code reuse perspective. - */ - - /* left arg */ - larg = makeVar(INNER_VAR, attno, att->atttypid, - att->atttypmod, InvalidOid, 0); - /* right arg */ - rarg = makeVar(OUTER_VAR, attno, att->atttypid, - att->atttypmod, InvalidOid, 0); - args = list_make2(larg, rarg); - - /* evaluate distinctness */ - ExecInitFunc(&scratch, NULL, - args, eqfunctions[natt], InvalidOid, - state); - scratch.opcode = EEOP_NOT_DISTINCT; - ExprEvalPushStep(state, &scratch); - - /* then emit EEOP_QUAL to detect if result is false (or null) */ - scratch.opcode = EEOP_QUAL; - scratch.d.qualexpr.jumpdone = -1; - ExprEvalPushStep(state, &scratch); - adjust_jumps = lappend_int(adjust_jumps, - state->steps_len - 1); - } - - /* adjust jump targets */ - foreach(lc, adjust_jumps) - { - ExprEvalStep *as = &state->steps[lfirst_int(lc)]; - - Assert(as->opcode == EEOP_QUAL); - Assert(as->d.qualexpr.jumpdone == -1); - as->d.qualexpr.jumpdone = state->steps_len; - } - - scratch.resvalue = NULL; - scratch.resnull = NULL; - scratch.opcode = EEOP_DONE; - ExprEvalPushStep(state, &scratch); - - ExecReadyExpr(state); - - return state; -} diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 771b7e3945..9c6c2b02e9 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -355,7 +355,6 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) &&CASE_EEOP_MAKE_READONLY, &&CASE_EEOP_IOCOERCE, &&CASE_EEOP_DISTINCT, - &&CASE_EEOP_NOT_DISTINCT, &&CASE_EEOP_NULLIF, &&CASE_EEOP_SQLVALUEFUNCTION, &&CASE_EEOP_CURRENTOFEXPR, @@ -1199,34 +1198,6 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_NEXT(); } - /* see EEOP_DISTINCT for comments, this is just inverted */ - EEO_CASE(EEOP_NOT_DISTINCT) - { - FunctionCallInfo fcinfo = op->d.func.fcinfo_data; - - if (fcinfo->argnull[0] && fcinfo->argnull[1]) - { - *op->resvalue = BoolGetDatum(true); - *op->resnull = false; - } - else if (fcinfo->argnull[0] || fcinfo->argnull[1]) - { - *op->resvalue = BoolGetDatum(false); - *op->resnull = false; - } - else - { - Datum eqresult; - - fcinfo->isnull = false; - eqresult = op->d.func.fn_addr(fcinfo); - *op->resvalue = eqresult; - *op->resnull = fcinfo->isnull; - } - - EEO_NEXT(); - } - EEO_CASE(EEOP_NULLIF) { /* diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c index 4f604fb286..8e8dbb1f20 100644 --- a/src/backend/executor/execGrouping.c +++ 
b/src/backend/executor/execGrouping.c @@ -52,33 +52,172 @@ static int TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tup *****************************************************************************/ /* - * execTuplesMatchPrepare - * Build expression that can be evaluated using ExecQual(), returning - * whether an ExprContext's inner/outer tuples are NOT DISTINCT + * execTuplesMatch + * Return true if two tuples match in all the indicated fields. + * + * This actually implements SQL's notion of "not distinct". Two nulls + * match, a null and a not-null don't match. + * + * slot1, slot2: the tuples to compare (must have same columns!) + * numCols: the number of attributes to be examined + * matchColIdx: array of attribute column numbers + * eqFunctions: array of fmgr lookup info for the equality functions to use + * evalContext: short-term memory context for executing the functions + * + * NB: evalContext is reset each time! + */ +bool +execTuplesMatch(TupleTableSlot *slot1, + TupleTableSlot *slot2, + int numCols, + AttrNumber *matchColIdx, + FmgrInfo *eqfunctions, + MemoryContext evalContext) +{ + MemoryContext oldContext; + bool result; + int i; + + /* Reset and switch into the temp context. */ + MemoryContextReset(evalContext); + oldContext = MemoryContextSwitchTo(evalContext); + + /* + * We cannot report a match without checking all the fields, but we can + * report a non-match as soon as we find unequal fields. So, start + * comparing at the last field (least significant sort key). That's the + * most likely to be different if we are dealing with sorted input. + */ + result = true; + + for (i = numCols; --i >= 0;) + { + AttrNumber att = matchColIdx[i]; + Datum attr1, + attr2; + bool isNull1, + isNull2; + + attr1 = slot_getattr(slot1, att, &isNull1); + + attr2 = slot_getattr(slot2, att, &isNull2); + + if (isNull1 != isNull2) + { + result = false; /* one null and one not; they aren't equal */ + break; + } + + if (isNull1) + continue; /* both are null, treat as equal */ + + /* Apply the type-specific equality function */ + + if (!DatumGetBool(FunctionCall2(&eqfunctions[i], + attr1, attr2))) + { + result = false; /* they aren't equal */ + break; + } + } + + MemoryContextSwitchTo(oldContext); + + return result; +} + +/* + * execTuplesUnequal + * Return true if two tuples are definitely unequal in the indicated + * fields. + * + * Nulls are neither equal nor unequal to anything else. A true result + * is obtained only if there are non-null fields that compare not-equal. + * + * Parameters are identical to execTuplesMatch. */ -ExprState * -execTuplesMatchPrepare(TupleDesc desc, - int numCols, - AttrNumber *keyColIdx, - Oid *eqOperators, - PlanState *parent) +bool +execTuplesUnequal(TupleTableSlot *slot1, + TupleTableSlot *slot2, + int numCols, + AttrNumber *matchColIdx, + FmgrInfo *eqfunctions, + MemoryContext evalContext) { - Oid *eqFunctions = (Oid *) palloc(numCols * sizeof(Oid)); + MemoryContext oldContext; + bool result; int i; - ExprState *expr; - if (numCols == 0) - return NULL; + /* Reset and switch into the temp context. */ + MemoryContextReset(evalContext); + oldContext = MemoryContextSwitchTo(evalContext); + + /* + * We cannot report a match without checking all the fields, but we can + * report a non-match as soon as we find unequal fields. So, start + * comparing at the last field (least significant sort key). That's the + * most likely to be different if we are dealing with sorted input. 
+ */ + result = false; + + for (i = numCols; --i >= 0;) + { + AttrNumber att = matchColIdx[i]; + Datum attr1, + attr2; + bool isNull1, + isNull2; + + attr1 = slot_getattr(slot1, att, &isNull1); + + if (isNull1) + continue; /* can't prove anything here */ + + attr2 = slot_getattr(slot2, att, &isNull2); + + if (isNull2) + continue; /* can't prove anything here */ + + /* Apply the type-specific equality function */ + + if (!DatumGetBool(FunctionCall2(&eqfunctions[i], + attr1, attr2))) + { + result = true; /* they are unequal */ + break; + } + } + + MemoryContextSwitchTo(oldContext); + + return result; +} + + +/* + * execTuplesMatchPrepare + * Look up the equality functions needed for execTuplesMatch or + * execTuplesUnequal, given an array of equality operator OIDs. + * + * The result is a palloc'd array. + */ +FmgrInfo * +execTuplesMatchPrepare(int numCols, + Oid *eqOperators) +{ + FmgrInfo *eqFunctions = (FmgrInfo *) palloc(numCols * sizeof(FmgrInfo)); + int i; - /* lookup equality functions */ for (i = 0; i < numCols; i++) - eqFunctions[i] = get_opcode(eqOperators[i]); + { + Oid eq_opr = eqOperators[i]; + Oid eq_function; - /* build actual expression */ - expr = ExecBuildGroupingEqual(desc, numCols, keyColIdx, eqFunctions, - parent); + eq_function = get_opcode(eq_opr); + fmgr_info(eq_function, &eqFunctions[i]); + } - return expr; + return eqFunctions; } /* @@ -149,9 +288,7 @@ execTuplesHashPrepare(int numCols, * storage that will live as long as the hashtable does. */ TupleHashTable -BuildTupleHashTable(PlanState *parent, - TupleDesc inputDesc, - int numCols, AttrNumber *keyColIdx, +BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, FmgrInfo *eqfunctions, FmgrInfo *hashfunctions, long nbuckets, Size additionalsize, @@ -160,9 +297,6 @@ BuildTupleHashTable(PlanState *parent, { TupleHashTable hashtable; Size entrysize = sizeof(TupleHashEntryData) + additionalsize; - MemoryContext oldcontext; - Oid *eqoids = (Oid *) palloc(numCols * sizeof(Oid)); - int i; Assert(nbuckets > 0); @@ -199,26 +333,6 @@ BuildTupleHashTable(PlanState *parent, hashtable->hashtab = tuplehash_create(tablecxt, nbuckets, hashtable); - oldcontext = MemoryContextSwitchTo(hashtable->tablecxt); - - /* - * We copy the input tuple descriptor just for safety --- we assume all - * input tuples will have equivalent descriptors. - */ - hashtable->tableslot = MakeSingleTupleTableSlot(CreateTupleDescCopy(inputDesc)); - - /* build comparator for all columns */ - for (i = 0; i < numCols; i++) - eqoids[i] = eqfunctions[i].fn_oid; - hashtable->eq_func = ExecBuildGroupingEqual(inputDesc, - numCols, - keyColIdx, eqoids, - parent); - - MemoryContextSwitchTo(oldcontext); - - hashtable->exprcontext = CreateExprContext(parent->state); - return hashtable; } @@ -243,6 +357,22 @@ LookupTupleHashEntry(TupleHashTable hashtable, TupleTableSlot *slot, bool found; MinimalTuple key; + /* If first time through, clone the input slot to make table slot */ + if (hashtable->tableslot == NULL) + { + TupleDesc tupdesc; + + oldContext = MemoryContextSwitchTo(hashtable->tablecxt); + + /* + * We copy the input tuple descriptor just for safety --- we assume + * all input tuples will have equivalent descriptors. 
+ */ + tupdesc = CreateTupleDescCopy(slot->tts_tupleDescriptor); + hashtable->tableslot = MakeSingleTupleTableSlot(tupdesc); + MemoryContextSwitchTo(oldContext); + } + /* Need to run the hash functions in short-lived context */ oldContext = MemoryContextSwitchTo(hashtable->tempcxt); @@ -394,6 +524,9 @@ TupleHashTableHash(struct tuplehash_hash *tb, const MinimalTuple tuple) * See whether two tuples (presumably of the same hash value) match * * As above, the passed pointers are pointers to TupleHashEntryData. + * + * Also, the caller must select an appropriate memory context for running + * the compare functions. (dynahash.c doesn't change CurrentMemoryContext.) */ static int TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const MinimalTuple tuple2) @@ -401,7 +534,6 @@ TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const TupleTableSlot *slot1; TupleTableSlot *slot2; TupleHashTable hashtable = (TupleHashTable) tb->private_data; - ExprContext *econtext = hashtable->exprcontext; /* * We assume that simplehash.h will only ever call us with the first @@ -416,7 +548,13 @@ TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const slot2 = hashtable->inputslot; /* For crosstype comparisons, the inputslot must be first */ - econtext->ecxt_innertuple = slot1; - econtext->ecxt_outertuple = slot2; - return !ExecQualAndReset(hashtable->eq_func, econtext); + if (execTuplesMatch(slot2, + slot1, + hashtable->numCols, + hashtable->keyColIdx, + hashtable->cur_eq_funcs, + hashtable->tempcxt)) + return 0; + else + return 1; } diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index 467f8d896e..a86d4b68ea 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -755,7 +755,7 @@ process_ordered_aggregate_single(AggState *aggstate, ((oldIsNull && *isNull) || (!oldIsNull && !*isNull && oldAbbrevVal == newAbbrevVal && - DatumGetBool(FunctionCall2(&pertrans->equalfnOne, + DatumGetBool(FunctionCall2(&pertrans->equalfns[0], oldVal, *newVal))))) { /* equal to prior, so forget this one */ @@ -802,7 +802,7 @@ process_ordered_aggregate_multi(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroupstate) { - ExprContext *tmpcontext = aggstate->tmpcontext; + MemoryContext workcontext = aggstate->tmpcontext->ecxt_per_tuple_memory; FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo; TupleTableSlot *slot1 = pertrans->sortslot; TupleTableSlot *slot2 = pertrans->uniqslot; @@ -811,7 +811,6 @@ process_ordered_aggregate_multi(AggState *aggstate, Datum newAbbrevVal = (Datum) 0; Datum oldAbbrevVal = (Datum) 0; bool haveOldValue = false; - TupleTableSlot *save = aggstate->tmpcontext->ecxt_outertuple; int i; tuplesort_performsort(pertrans->sortstates[aggstate->current_set]); @@ -825,20 +824,22 @@ process_ordered_aggregate_multi(AggState *aggstate, { CHECK_FOR_INTERRUPTS(); - tmpcontext->ecxt_outertuple = slot1; - tmpcontext->ecxt_innertuple = slot2; + /* + * Extract the first numTransInputs columns as datums to pass to the + * transfn. (This will help execTuplesMatch too, so we do it + * immediately.) + */ + slot_getsomeattrs(slot1, numTransInputs); if (numDistinctCols == 0 || !haveOldValue || newAbbrevVal != oldAbbrevVal || - !ExecQual(pertrans->equalfnMulti, tmpcontext)) + !execTuplesMatch(slot1, slot2, + numDistinctCols, + pertrans->sortColIdx, + pertrans->equalfns, + workcontext)) { - /* - * Extract the first numTransInputs columns as datums to pass to - * the transfn. 
- */ - slot_getsomeattrs(slot1, numTransInputs); - /* Load values into fcinfo */ /* Start from 1, since the 0th arg will be the transition value */ for (i = 0; i < numTransInputs; i++) @@ -856,14 +857,15 @@ process_ordered_aggregate_multi(AggState *aggstate, slot2 = slot1; slot1 = tmpslot; - /* avoid ExecQual() calls by reusing abbreviated keys */ + /* avoid execTuplesMatch() calls by reusing abbreviated keys */ oldAbbrevVal = newAbbrevVal; haveOldValue = true; } } - /* Reset context each time */ - ResetExprContext(tmpcontext); + /* Reset context each time, unless execTuplesMatch did it for us */ + if (numDistinctCols == 0) + MemoryContextReset(workcontext); ExecClearTuple(slot1); } @@ -873,9 +875,6 @@ process_ordered_aggregate_multi(AggState *aggstate, tuplesort_end(pertrans->sortstates[aggstate->current_set]); pertrans->sortstates[aggstate->current_set] = NULL; - - /* restore previous slot, potentially in use for grouping sets */ - tmpcontext->ecxt_outertuple = save; } /* @@ -1277,9 +1276,7 @@ build_hash_table(AggState *aggstate) Assert(perhash->aggnode->numGroups > 0); - perhash->hashtable = BuildTupleHashTable(&aggstate->ss.ps, - perhash->hashslot->tts_tupleDescriptor, - perhash->numCols, + perhash->hashtable = BuildTupleHashTable(perhash->numCols, perhash->hashGrpColIdxHash, perhash->eqfunctions, perhash->hashfunctions, @@ -1317,7 +1314,6 @@ find_hash_columns(AggState *aggstate) Bitmapset *base_colnos; List *outerTlist = outerPlanState(aggstate)->plan->targetlist; int numHashes = aggstate->num_hashes; - EState *estate = aggstate->ss.ps.state; int j; /* Find Vars that will be needed in tlist and qual */ @@ -1397,12 +1393,6 @@ find_hash_columns(AggState *aggstate) } hashDesc = ExecTypeFromTL(hashTlist, false); - - execTuplesHashPrepare(perhash->numCols, - perhash->aggnode->grpOperators, - &perhash->eqfunctions, - &perhash->hashfunctions); - perhash->hashslot = ExecAllocTableSlot(&estate->es_tupleTable); ExecSetSlotDescriptor(perhash->hashslot, hashDesc); list_free(hashTlist); @@ -1704,14 +1694,17 @@ agg_retrieve_direct(AggState *aggstate) * of the next grouping set *---------- */ - tmpcontext->ecxt_innertuple = econtext->ecxt_outertuple; if (aggstate->input_done || (node->aggstrategy != AGG_PLAIN && aggstate->projected_set != -1 && aggstate->projected_set < (numGroupingSets - 1) && nextSetSize > 0 && - !ExecQualAndReset(aggstate->phase->eqfunctions[nextSetSize - 1], - tmpcontext))) + !execTuplesMatch(econtext->ecxt_outertuple, + tmpcontext->ecxt_outertuple, + nextSetSize, + node->grpColIdx, + aggstate->phase->eqfunctions, + tmpcontext->ecxt_per_tuple_memory))) { aggstate->projected_set += 1; @@ -1854,9 +1847,12 @@ agg_retrieve_direct(AggState *aggstate) */ if (node->aggstrategy != AGG_PLAIN) { - tmpcontext->ecxt_innertuple = firstSlot; - if (!ExecQual(aggstate->phase->eqfunctions[node->numCols - 1], - tmpcontext)) + if (!execTuplesMatch(firstSlot, + outerslot, + node->numCols, + node->grpColIdx, + aggstate->phase->eqfunctions, + tmpcontext->ecxt_per_tuple_memory)) { aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot); break; @@ -2082,7 +2078,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) AggStatePerGroup *pergroups; Plan *outerPlan; ExprContext *econtext; - TupleDesc scanDesc; int numaggs, transno, aggno; @@ -2238,9 +2233,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) * initialize source tuple type. 
*/ ExecAssignScanTypeFromOuterPlan(&aggstate->ss); - scanDesc = aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; if (node->chain) - ExecSetSlotDescriptor(aggstate->sort_slot, scanDesc); + ExecSetSlotDescriptor(aggstate->sort_slot, + aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); /* * Initialize result tuple type and projection info. @@ -2360,43 +2355,11 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) */ if (aggnode->aggstrategy == AGG_SORTED) { - int i = 0; - Assert(aggnode->numCols > 0); - /* - * Build a separate function for each subset of columns that - * need to be compared. - */ phasedata->eqfunctions = - (ExprState **) palloc0(aggnode->numCols * sizeof(ExprState *)); - - /* for each grouping set */ - for (i = 0; i < phasedata->numsets; i++) - { - int length = phasedata->gset_lengths[i]; - - if (phasedata->eqfunctions[length - 1] != NULL) - continue; - - phasedata->eqfunctions[length - 1] = - execTuplesMatchPrepare(scanDesc, - length, - aggnode->grpColIdx, - aggnode->grpOperators, - (PlanState *) aggstate); - } - - /* and for all grouped columns, unless already computed */ - if (phasedata->eqfunctions[aggnode->numCols - 1] == NULL) - { - phasedata->eqfunctions[aggnode->numCols - 1] = - execTuplesMatchPrepare(scanDesc, - aggnode->numCols, - aggnode->grpColIdx, - aggnode->grpOperators, - (PlanState *) aggstate); - } + execTuplesMatchPrepare(aggnode->numCols, + aggnode->grpOperators); } phasedata->aggnode = aggnode; @@ -2449,6 +2412,16 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) */ if (use_hashing) { + for (i = 0; i < numHashes; ++i) + { + aggstate->perhash[i].hashslot = ExecInitExtraTupleSlot(estate); + + execTuplesHashPrepare(aggstate->perhash[i].numCols, + aggstate->perhash[i].aggnode->grpOperators, + &aggstate->perhash[i].eqfunctions, + &aggstate->perhash[i].hashfunctions); + } + /* this is an array of pointers, not structures */ aggstate->hash_pergroup = pergroups; @@ -3128,28 +3101,24 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, if (aggref->aggdistinct) { - Oid *ops; - Assert(numArguments > 0); - Assert(list_length(aggref->aggdistinct) == numDistinctCols); - ops = palloc(numDistinctCols * sizeof(Oid)); + /* + * We need the equal function for each DISTINCT comparison we will + * make. 
+ */ + pertrans->equalfns = + (FmgrInfo *) palloc(numDistinctCols * sizeof(FmgrInfo)); i = 0; foreach(lc, aggref->aggdistinct) - ops[i++] = ((SortGroupClause *) lfirst(lc))->eqop; + { + SortGroupClause *sortcl = (SortGroupClause *) lfirst(lc); - /* lookup / build the necessary comparators */ - if (numDistinctCols == 1) - fmgr_info(get_opcode(ops[0]), &pertrans->equalfnOne); - else - pertrans->equalfnMulti = - execTuplesMatchPrepare(pertrans->sortdesc, - numDistinctCols, - pertrans->sortColIdx, - ops, - &aggstate->ss.ps); - pfree(ops); + fmgr_info(get_opcode(sortcl->eqop), &pertrans->equalfns[i]); + i++; + } + Assert(i == numDistinctCols); } pertrans->sortstates = (Tuplesortstate **) diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c index 8f7bf459ef..f1cdbaa4e6 100644 --- a/src/backend/executor/nodeGroup.c +++ b/src/backend/executor/nodeGroup.c @@ -25,7 +25,6 @@ #include "executor/executor.h" #include "executor/nodeGroup.h" #include "miscadmin.h" -#include "utils/memutils.h" /* @@ -38,6 +37,8 @@ ExecGroup(PlanState *pstate) { GroupState *node = castNode(GroupState, pstate); ExprContext *econtext; + int numCols; + AttrNumber *grpColIdx; TupleTableSlot *firsttupleslot; TupleTableSlot *outerslot; @@ -49,6 +50,8 @@ ExecGroup(PlanState *pstate) if (node->grp_done) return NULL; econtext = node->ss.ps.ps_ExprContext; + numCols = ((Group *) node->ss.ps.plan)->numCols; + grpColIdx = ((Group *) node->ss.ps.plan)->grpColIdx; /* * The ScanTupleSlot holds the (copied) first tuple of each group. @@ -56,7 +59,7 @@ ExecGroup(PlanState *pstate) firsttupleslot = node->ss.ss_ScanTupleSlot; /* - * We need not call ResetExprContext here because ExecQualAndReset() will + * We need not call ResetExprContext here because execTuplesMatch will * reset the per-tuple memory context once per input tuple. */ @@ -121,9 +124,10 @@ ExecGroup(PlanState *pstate) * Compare with first tuple and see if this tuple is of the same * group. If so, ignore it and keep scanning. 
*/ - econtext->ecxt_innertuple = firsttupleslot; - econtext->ecxt_outertuple = outerslot; - if (!ExecQualAndReset(node->eqfunction, econtext)) + if (!execTuplesMatch(firsttupleslot, outerslot, + numCols, grpColIdx, + node->eqfunctions, + econtext->ecxt_per_tuple_memory)) break; } @@ -162,7 +166,6 @@ GroupState * ExecInitGroup(Group *node, EState *estate, int eflags) { GroupState *grpstate; - AttrNumber *grpColIdx = grpColIdx = node->grpColIdx; /* check for unsupported flags */ Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -212,12 +215,9 @@ ExecInitGroup(Group *node, EState *estate, int eflags) /* * Precompute fmgr lookup data for inner loop */ - grpstate->eqfunction = - execTuplesMatchPrepare(ExecGetResultType(outerPlanState(grpstate)), - node->numCols, - grpColIdx, - node->grpOperators, - &grpstate->ss.ps); + grpstate->eqfunctions = + execTuplesMatchPrepare(node->numCols, + node->grpOperators); return grpstate; } diff --git a/src/backend/executor/nodeRecursiveunion.c b/src/backend/executor/nodeRecursiveunion.c index c070338fdb..817749855f 100644 --- a/src/backend/executor/nodeRecursiveunion.c +++ b/src/backend/executor/nodeRecursiveunion.c @@ -32,14 +32,11 @@ static void build_hash_table(RecursiveUnionState *rustate) { RecursiveUnion *node = (RecursiveUnion *) rustate->ps.plan; - TupleDesc desc = ExecGetResultType(outerPlanState(rustate)); Assert(node->numCols > 0); Assert(node->numGroups > 0); - rustate->hashtable = BuildTupleHashTable(&rustate->ps, - desc, - node->numCols, + rustate->hashtable = BuildTupleHashTable(node->numCols, node->dupColIdx, rustate->eqfunctions, rustate->hashfunctions, diff --git a/src/backend/executor/nodeSetOp.c b/src/backend/executor/nodeSetOp.c index ba2d3159c0..c91c3402d2 100644 --- a/src/backend/executor/nodeSetOp.c +++ b/src/backend/executor/nodeSetOp.c @@ -120,22 +120,18 @@ static void build_hash_table(SetOpState *setopstate) { SetOp *node = (SetOp *) setopstate->ps.plan; - ExprContext *econtext = setopstate->ps.ps_ExprContext; - TupleDesc desc = ExecGetResultType(outerPlanState(setopstate)); Assert(node->strategy == SETOP_HASHED); Assert(node->numGroups > 0); - setopstate->hashtable = BuildTupleHashTable(&setopstate->ps, - desc, - node->numCols, + setopstate->hashtable = BuildTupleHashTable(node->numCols, node->dupColIdx, setopstate->eqfunctions, setopstate->hashfunctions, node->numGroups, 0, setopstate->tableContext, - econtext->ecxt_per_tuple_memory, + setopstate->tempContext, false); } @@ -224,11 +220,11 @@ ExecSetOp(PlanState *pstate) static TupleTableSlot * setop_retrieve_direct(SetOpState *setopstate) { + SetOp *node = (SetOp *) setopstate->ps.plan; PlanState *outerPlan; SetOpStatePerGroup pergroup; TupleTableSlot *outerslot; TupleTableSlot *resultTupleSlot; - ExprContext *econtext = setopstate->ps.ps_ExprContext; /* * get state info from node @@ -296,10 +292,11 @@ setop_retrieve_direct(SetOpState *setopstate) /* * Check whether we've crossed a group boundary. */ - econtext->ecxt_outertuple = resultTupleSlot; - econtext->ecxt_innertuple = outerslot; - - if (!ExecQualAndReset(setopstate->eqfunction, econtext)) + if (!execTuplesMatch(resultTupleSlot, + outerslot, + node->numCols, node->dupColIdx, + setopstate->eqfunctions, + setopstate->tempContext)) { /* * Save the first input tuple of the next group. 
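
/*
 * For reference, the execTuplesMatch() these call sites revert to has
 * roughly the following shape.  This is a condensed sketch reconstructed
 * from the execTuplesUnequal() logic shown below, not a verbatim copy of
 * execGrouping.c: two NULLs are treated as matching, and all equality
 * function calls run in the caller-supplied short-lived context, which
 * is reset on entry.
 */
bool
execTuplesMatch(TupleTableSlot *slot1, TupleTableSlot *slot2,
				int numCols, AttrNumber *matchColIdx,
				FmgrInfo *eqfunctions, MemoryContext evalContext)
{
	MemoryContext oldContext;
	bool		result = true;
	int			i;

	/* Reset and switch into the temp context for the equality calls. */
	MemoryContextReset(evalContext);
	oldContext = MemoryContextSwitchTo(evalContext);

	/* Compare from the last column: sorted input tends to differ there. */
	for (i = numCols; --i >= 0;)
	{
		AttrNumber	att = matchColIdx[i];
		Datum		attr1,
					attr2;
		bool		isNull1,
					isNull2;

		attr1 = slot_getattr(slot1, att, &isNull1);
		attr2 = slot_getattr(slot2, att, &isNull2);

		if (isNull1 != isNull2)
		{
			result = false;		/* one null and one not-null: no match */
			break;
		}
		if (isNull1)
			continue;			/* both null: treat as equal */
		if (!DatumGetBool(FunctionCall2(&eqfunctions[i], attr1, attr2)))
		{
			result = false;		/* non-null values differ */
			break;
		}
	}

	MemoryContextSwitchTo(oldContext);
	return result;
}
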
@@ -341,7 +338,6 @@ setop_fill_hash_table(SetOpState *setopstate) PlanState *outerPlan; int firstFlag; bool in_first_rel PG_USED_FOR_ASSERTS_ONLY; - ExprContext *econtext = setopstate->ps.ps_ExprContext; /* * get state info from node @@ -408,8 +404,8 @@ setop_fill_hash_table(SetOpState *setopstate) advance_counts((SetOpStatePerGroup) entry->additional, flag); } - /* Must reset expression context after each hashtable lookup */ - ResetExprContext(econtext); + /* Must reset temp context after each hashtable lookup */ + MemoryContextReset(setopstate->tempContext); } setopstate->table_filled = true; @@ -480,7 +476,6 @@ SetOpState * ExecInitSetOp(SetOp *node, EState *estate, int eflags) { SetOpState *setopstate; - TupleDesc outerDesc; /* check for unsupported flags */ Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -503,9 +498,16 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) setopstate->tableContext = NULL; /* - * create expression context + * Miscellaneous initialization + * + * SetOp nodes have no ExprContext initialization because they never call + * ExecQual or ExecProject. But they do need a per-tuple memory context + * anyway for calling execTuplesMatch. */ - ExecAssignExprContext(estate, &setopstate->ps); + setopstate->tempContext = + AllocSetContextCreate(CurrentMemoryContext, + "SetOp", + ALLOCSET_DEFAULT_SIZES); /* * If hashing, we also need a longer-lived context to store the hash @@ -532,7 +534,6 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) if (node->strategy == SETOP_HASHED) eflags &= ~EXEC_FLAG_REWIND; outerPlanState(setopstate) = ExecInitNode(outerPlan(node), estate, eflags); - outerDesc = ExecGetResultType(outerPlanState(setopstate)); /* * setop nodes do no projections, so initialize projection info for this @@ -552,12 +553,9 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) &setopstate->eqfunctions, &setopstate->hashfunctions); else - setopstate->eqfunction = - execTuplesMatchPrepare(outerDesc, - node->numCols, - node->dupColIdx, - node->dupOperators, - &setopstate->ps); + setopstate->eqfunctions = + execTuplesMatchPrepare(node->numCols, + node->dupOperators); if (node->strategy == SETOP_HASHED) { @@ -587,9 +585,9 @@ ExecEndSetOp(SetOpState *node) ExecClearTuple(node->ps.ps_ResultTupleSlot); /* free subsidiary stuff including hashtable */ + MemoryContextDelete(node->tempContext); if (node->tableContext) MemoryContextDelete(node->tableContext); - ExecFreeExprContext(&node->ps); ExecEndNode(outerPlanState(node)); } diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c index fcf739b5e2..edf7d034bd 100644 --- a/src/backend/executor/nodeSubplan.c +++ b/src/backend/executor/nodeSubplan.c @@ -494,9 +494,7 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) if (nbuckets < 1) nbuckets = 1; - node->hashtable = BuildTupleHashTable(node->parent, - node->descRight, - ncols, + node->hashtable = BuildTupleHashTable(ncols, node->keyColIdx, node->tab_eq_funcs, node->tab_hash_funcs, @@ -516,9 +514,7 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) if (nbuckets < 1) nbuckets = 1; } - node->hashnulls = BuildTupleHashTable(node->parent, - node->descRight, - ncols, + node->hashnulls = BuildTupleHashTable(ncols, node->keyColIdx, node->tab_eq_funcs, node->tab_hash_funcs, @@ -602,78 +598,6 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) MemoryContextSwitchTo(oldcontext); } - -/* - * execTuplesUnequal - * Return true if two tuples are definitely unequal in the indicated - * fields. 
- * - * Nulls are neither equal nor unequal to anything else. A true result - * is obtained only if there are non-null fields that compare not-equal. - * - * slot1, slot2: the tuples to compare (must have same columns!) - * numCols: the number of attributes to be examined - * matchColIdx: array of attribute column numbers - * eqFunctions: array of fmgr lookup info for the equality functions to use - * evalContext: short-term memory context for executing the functions - */ -static bool -execTuplesUnequal(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext) -{ - MemoryContext oldContext; - bool result; - int i; - - /* Reset and switch into the temp context. */ - MemoryContextReset(evalContext); - oldContext = MemoryContextSwitchTo(evalContext); - - /* - * We cannot report a match without checking all the fields, but we can - * report a non-match as soon as we find unequal fields. So, start - * comparing at the last field (least significant sort key). That's the - * most likely to be different if we are dealing with sorted input. - */ - result = false; - - for (i = numCols; --i >= 0;) - { - AttrNumber att = matchColIdx[i]; - Datum attr1, - attr2; - bool isNull1, - isNull2; - - attr1 = slot_getattr(slot1, att, &isNull1); - - if (isNull1) - continue; /* can't prove anything here */ - - attr2 = slot_getattr(slot2, att, &isNull2); - - if (isNull2) - continue; /* can't prove anything here */ - - /* Apply the type-specific equality function */ - - if (!DatumGetBool(FunctionCall2(&eqfunctions[i], - attr1, attr2))) - { - result = true; /* they are unequal */ - break; - } - } - - MemoryContextSwitchTo(oldContext); - - return result; -} - /* * findPartialMatch: does the hashtable contain an entry that is not * provably distinct from the tuple? @@ -963,7 +887,6 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) NULL); tupDesc = ExecTypeFromTL(righttlist, false); - sstate->descRight = tupDesc; slot = ExecInitExtraTupleSlot(estate); ExecSetSlotDescriptor(slot, tupDesc); sstate->projRight = ExecBuildProjectionInfo(righttlist, diff --git a/src/backend/executor/nodeUnique.c b/src/backend/executor/nodeUnique.c index 9f823c58e1..e330650593 100644 --- a/src/backend/executor/nodeUnique.c +++ b/src/backend/executor/nodeUnique.c @@ -47,7 +47,7 @@ static TupleTableSlot * /* return: a tuple or NULL */ ExecUnique(PlanState *pstate) { UniqueState *node = castNode(UniqueState, pstate); - ExprContext *econtext = node->ps.ps_ExprContext; + Unique *plannode = (Unique *) node->ps.plan; TupleTableSlot *resultTupleSlot; TupleTableSlot *slot; PlanState *outerPlan; @@ -89,9 +89,10 @@ ExecUnique(PlanState *pstate) * If so then we loop back and fetch another new tuple from the * subplan. */ - econtext->ecxt_innertuple = slot; - econtext->ecxt_outertuple = resultTupleSlot; - if (!ExecQualAndReset(node->eqfunction, econtext)) + if (!execTuplesMatch(slot, resultTupleSlot, + plannode->numCols, plannode->uniqColIdx, + node->eqfunctions, + node->tempContext)) break; } @@ -128,9 +129,16 @@ ExecInitUnique(Unique *node, EState *estate, int eflags) uniquestate->ps.ExecProcNode = ExecUnique; /* - * create expression context + * Miscellaneous initialization + * + * Unique nodes have no ExprContext initialization because they never call + * ExecQual or ExecProject. But they do need a per-tuple memory context + * anyway for calling execTuplesMatch. 
*/ - ExecAssignExprContext(estate, &uniquestate->ps); + uniquestate->tempContext = + AllocSetContextCreate(CurrentMemoryContext, + "Unique", + ALLOCSET_DEFAULT_SIZES); /* * Tuple table initialization @@ -152,12 +160,9 @@ ExecInitUnique(Unique *node, EState *estate, int eflags) /* * Precompute fmgr lookup data for inner loop */ - uniquestate->eqfunction = - execTuplesMatchPrepare(ExecGetResultType(outerPlanState(uniquestate)), - node->numCols, - node->uniqColIdx, - node->uniqOperators, - &uniquestate->ps); + uniquestate->eqfunctions = + execTuplesMatchPrepare(node->numCols, + node->uniqOperators); return uniquestate; } @@ -175,7 +180,7 @@ ExecEndUnique(UniqueState *node) /* clean up tuple table */ ExecClearTuple(node->ps.ps_ResultTupleSlot); - ExecFreeExprContext(&node->ps); + MemoryContextDelete(node->tempContext); ExecEndNode(outerPlanState(node)); } diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index 1c807a8292..f6412576f4 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -1272,13 +1272,12 @@ spool_tuples(WindowAggState *winstate, int64 pos) if (node->partNumCols > 0) { - ExprContext *econtext = winstate->tmpcontext; - - econtext->ecxt_innertuple = winstate->first_part_slot; - econtext->ecxt_outertuple = outerslot; - /* Check if this tuple still belongs to the current partition */ - if (!ExecQualAndReset(winstate->partEqfunction, econtext)) + if (!execTuplesMatch(winstate->first_part_slot, + outerslot, + node->partNumCols, node->partColIdx, + winstate->partEqfunctions, + winstate->tmpcontext->ecxt_per_tuple_memory)) { /* * end of partition; copy the tuple for the next cycle. @@ -2246,7 +2245,6 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) wfuncno, numaggs, aggno; - TupleDesc scanDesc; ListCell *l; /* check for unsupported flags */ @@ -2329,7 +2327,6 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) * store in the tuplestore and use in all our working slots). */ ExecAssignScanTypeFromOuterPlan(&winstate->ss); - scanDesc = winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; ExecSetSlotDescriptor(winstate->first_part_slot, winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); @@ -2354,20 +2351,11 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) /* Set up data for comparing tuples */ if (node->partNumCols > 0) - winstate->partEqfunction = - execTuplesMatchPrepare(scanDesc, - node->partNumCols, - node->partColIdx, - node->partOperators, - &winstate->ss.ps); - + winstate->partEqfunctions = execTuplesMatchPrepare(node->partNumCols, + node->partOperators); if (node->ordNumCols > 0) - winstate->ordEqfunction = - execTuplesMatchPrepare(scanDesc, - node->ordNumCols, - node->ordColIdx, - node->ordOperators, - &winstate->ss.ps); + winstate->ordEqfunctions = execTuplesMatchPrepare(node->ordNumCols, + node->ordOperators); /* * WindowAgg nodes use aggvalues and aggnulls as well as Agg nodes. 
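
/*
 * The execTuplesMatchPrepare() reverted to here needs only the column
 * count and the equality operator OIDs.  A plausible sketch (an
 * assumption, mirroring the fmgr_info(get_opcode(...)) pattern used for
 * the DISTINCT case in nodeAgg.c above) is simply:
 */
FmgrInfo *
execTuplesMatchPrepare(int numCols, Oid *eqOperators)
{
	FmgrInfo   *eqFunctions = (FmgrInfo *) palloc(numCols * sizeof(FmgrInfo));
	int			i;

	for (i = 0; i < numCols; i++)
	{
		/* map each equality operator OID to its implementing function */
		fmgr_info(get_opcode(eqOperators[i]), &eqFunctions[i]);
	}

	return eqFunctions;
}

/*
 * Since neither a tuple descriptor nor a parent PlanState is required,
 * callers such as ExecInitWindowAgg above can drop their scanDesc
 * bookkeeping entirely.
 */
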
@@ -2891,15 +2879,15 @@ are_peers(WindowAggState *winstate, TupleTableSlot *slot1, TupleTableSlot *slot2) { WindowAgg *node = (WindowAgg *) winstate->ss.ps.plan; - ExprContext *econtext = winstate->tmpcontext; /* If no ORDER BY, all rows are peers with each other */ if (node->ordNumCols == 0) return true; - econtext->ecxt_outertuple = slot1; - econtext->ecxt_innertuple = slot2; - return ExecQualAndReset(winstate->ordEqfunction, econtext); + return execTuplesMatch(slot1, slot2, + node->ordNumCols, node->ordColIdx, + winstate->ordEqfunctions, + winstate->tmpcontext->ecxt_per_tuple_memory); } /* diff --git a/src/backend/utils/adt/orderedsetaggs.c b/src/backend/utils/adt/orderedsetaggs.c index 50b34fcbc6..63d9c67027 100644 --- a/src/backend/utils/adt/orderedsetaggs.c +++ b/src/backend/utils/adt/orderedsetaggs.c @@ -27,7 +27,6 @@ #include "utils/array.h" #include "utils/builtins.h" #include "utils/lsyscache.h" -#include "utils/memutils.h" #include "utils/timestamp.h" #include "utils/tuplesort.h" @@ -55,8 +54,6 @@ typedef struct OSAPerQueryState Aggref *aggref; /* Memory context containing this struct and other per-query data: */ MemoryContext qcontext; - /* Context for expression evaluation */ - ExprContext *econtext; /* Do we expect multiple final-function calls within one group? */ bool rescan_needed; @@ -74,7 +71,7 @@ typedef struct OSAPerQueryState Oid *sortCollations; bool *sortNullsFirsts; /* Equality operator call info, created only if needed: */ - ExprState *compareTuple; + FmgrInfo *equalfns; /* These fields are used only when accumulating datums: */ @@ -1290,8 +1287,6 @@ hypothetical_cume_dist_final(PG_FUNCTION_ARGS) Datum hypothetical_dense_rank_final(PG_FUNCTION_ARGS) { - ExprContext *econtext; - ExprState *compareTuple; int nargs = PG_NARGS() - 1; int64 rank = 1; int64 duplicate_count = 0; @@ -1299,9 +1294,12 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) int numDistinctCols; Datum abbrevVal = (Datum) 0; Datum abbrevOld = (Datum) 0; + AttrNumber *sortColIdx; + FmgrInfo *equalfns; TupleTableSlot *slot; TupleTableSlot *extraslot; TupleTableSlot *slot2; + MemoryContext tmpcontext; int i; Assert(AggCheckCallContext(fcinfo, NULL) == AGG_CONTEXT_AGGREGATE); @@ -1311,9 +1309,6 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) PG_RETURN_INT64(rank); osastate = (OSAPerGroupState *) PG_GETARG_POINTER(0); - econtext = osastate->qstate->econtext; - if (!econtext) - osastate->qstate->econtext = econtext = CreateStandaloneExprContext(); /* Adjust nargs to be the number of direct (or aggregated) args */ if (nargs % 2 != 0) @@ -1328,22 +1323,26 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) */ numDistinctCols = osastate->qstate->numSortCols - 1; - /* Build tuple comparator, if we didn't already */ - compareTuple = osastate->qstate->compareTuple; - if (compareTuple == NULL) + /* Look up the equality function(s), if we didn't already */ + equalfns = osastate->qstate->equalfns; + if (equalfns == NULL) { - AttrNumber *sortColIdx = osastate->qstate->sortColIdx; - MemoryContext oldContext; - - oldContext = MemoryContextSwitchTo(osastate->qstate->qcontext); - compareTuple = execTuplesMatchPrepare(osastate->qstate->tupdesc, - numDistinctCols, - sortColIdx, - osastate->qstate->eqOperators, - NULL); - MemoryContextSwitchTo(oldContext); - osastate->qstate->compareTuple = compareTuple; + MemoryContext qcontext = osastate->qstate->qcontext; + + equalfns = (FmgrInfo *) + MemoryContextAlloc(qcontext, numDistinctCols * sizeof(FmgrInfo)); + for (i = 0; i < numDistinctCols; i++) + { + 
fmgr_info_cxt(get_opcode(osastate->qstate->eqOperators[i]), + &equalfns[i], + qcontext); + } + osastate->qstate->equalfns = equalfns; } + sortColIdx = osastate->qstate->sortColIdx; + + /* Get short-term context we can use for execTuplesMatch */ + tmpcontext = AggGetTempMemoryContext(fcinfo); /* because we need a hypothetical row, we can't share transition state */ Assert(!osastate->sort_done); @@ -1386,18 +1385,19 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) break; /* count non-distinct tuples */ - econtext->ecxt_outertuple = slot; - econtext->ecxt_innertuple = slot2; - if (!TupIsNull(slot2) && abbrevVal == abbrevOld && - ExecQualAndReset(compareTuple, econtext)) + execTuplesMatch(slot, slot2, + numDistinctCols, + sortColIdx, + equalfns, + tmpcontext)) duplicate_count++; tmpslot = slot2; slot2 = slot; slot = tmpslot; - /* avoid ExecQual() calls by reusing abbreviated keys */ + /* avoid execTuplesMatch() calls by reusing abbreviated keys */ abbrevOld = abbrevVal; rank++; diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h index 0cab431f65..117fc892f4 100644 --- a/src/include/executor/execExpr.h +++ b/src/include/executor/execExpr.h @@ -148,7 +148,6 @@ typedef enum ExprEvalOp /* evaluate assorted special-purpose expression types */ EEOP_IOCOERCE, EEOP_DISTINCT, - EEOP_NOT_DISTINCT, EEOP_NULLIF, EEOP_SQLVALUEFUNCTION, EEOP_CURRENTOFEXPR, diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index f648af2789..1d824eff36 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -113,18 +113,25 @@ extern bool execCurrentOf(CurrentOfExpr *cexpr, /* * prototypes from functions in execGrouping.c */ -extern ExprState *execTuplesMatchPrepare(TupleDesc desc, - int numCols, - AttrNumber *keyColIdx, - Oid *eqOperators, - PlanState *parent); +extern bool execTuplesMatch(TupleTableSlot *slot1, + TupleTableSlot *slot2, + int numCols, + AttrNumber *matchColIdx, + FmgrInfo *eqfunctions, + MemoryContext evalContext); +extern bool execTuplesUnequal(TupleTableSlot *slot1, + TupleTableSlot *slot2, + int numCols, + AttrNumber *matchColIdx, + FmgrInfo *eqfunctions, + MemoryContext evalContext); +extern FmgrInfo *execTuplesMatchPrepare(int numCols, + Oid *eqOperators); extern void execTuplesHashPrepare(int numCols, Oid *eqOperators, FmgrInfo **eqFunctions, FmgrInfo **hashFunctions); -extern TupleHashTable BuildTupleHashTable(PlanState *parent, - TupleDesc inputDesc, - int numCols, AttrNumber *keyColIdx, +extern TupleHashTable BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, FmgrInfo *eqfunctions, FmgrInfo *hashfunctions, long nbuckets, Size additionalsize, @@ -250,11 +257,6 @@ extern ExprState *ExecInitCheck(List *qual, PlanState *parent); extern List *ExecInitExprList(List *nodes, PlanState *parent); extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase, bool doSort, bool doHash); -extern ExprState *ExecBuildGroupingEqual(TupleDesc desc, - int numCols, - AttrNumber *keyColIdx, - Oid *eqfunctions, - PlanState *parent); extern ProjectionInfo *ExecBuildProjectionInfo(List *targetList, ExprContext *econtext, TupleTableSlot *slot, diff --git a/src/include/executor/nodeAgg.h b/src/include/executor/nodeAgg.h index 24be7d2daa..3b06db86fd 100644 --- a/src/include/executor/nodeAgg.h +++ b/src/include/executor/nodeAgg.h @@ -102,12 +102,11 @@ typedef struct AggStatePerTransData bool *sortNullsFirst; /* - * Comparators for input columns --- only set/used when aggregate has - * DISTINCT flag. 
equalfnOne version is used for single-column - * commparisons, equalfnMulti for the case of multiple columns. + * fmgr lookup data for input columns' equality operators --- only + * set/used when aggregate has DISTINCT flag. Note that these are in + * order of sort column index, not parameter index. */ - FmgrInfo equalfnOne; - ExprState *equalfnMulti; + FmgrInfo *equalfns; /* array of length numDistinctCols */ /* * initial value from pg_aggregate entry @@ -271,8 +270,7 @@ typedef struct AggStatePerPhaseData int numsets; /* number of grouping sets (or 0) */ int *gset_lengths; /* lengths of grouping sets */ Bitmapset **grouped_cols; /* column groupings for rollup */ - ExprState **eqfunctions; /* expression returning equality, indexed by - * nr of cols to compare */ + FmgrInfo *eqfunctions; /* per-grouping-field equality fns */ Agg *aggnode; /* Agg node for phase data */ Sort *sortnode; /* Sort node for input ordering for phase */ diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 74c359901c..286d55be03 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -635,8 +635,6 @@ typedef struct TupleHashTableData FmgrInfo *in_hash_funcs; /* hash functions for input datatype(s) */ FmgrInfo *cur_eq_funcs; /* equality functions for input vs. table */ uint32 hash_iv; /* hash-function IV */ - ExprState *eq_func; /* tuple equality comparator */ - ExprContext *exprcontext; /* expression context */ } TupleHashTableData; typedef tuplehash_iterator TupleHashIterator; @@ -783,7 +781,6 @@ typedef struct SubPlanState HeapTuple curTuple; /* copy of most recent tuple from subplan */ Datum curArray; /* most recent array from ARRAY() subplan */ /* these are used when hashing the subselect's output: */ - TupleDesc descRight; /* subselect desc after projection */ ProjectionInfo *projLeft; /* for projecting lefthand exprs */ ProjectionInfo *projRight; /* for projecting subselect output */ TupleHashTable hashtable; /* hash table for no-nulls subselect rows */ @@ -1798,7 +1795,7 @@ typedef struct SortState typedef struct GroupState { ScanState ss; /* its first field is NodeTag */ - ExprState *eqfunction; /* equality function */ + FmgrInfo *eqfunctions; /* per-field lookup data for equality fns */ bool grp_done; /* indicates completion of Group scan */ } GroupState; @@ -1888,8 +1885,8 @@ typedef struct WindowAggState WindowStatePerFunc perfunc; /* per-window-function information */ WindowStatePerAgg peragg; /* per-plain-aggregate information */ - ExprState *partEqfunction; /* equality funcs for partition columns */ - ExprState *ordEqfunction; /* equality funcs for ordering columns */ + FmgrInfo *partEqfunctions; /* equality funcs for partition columns */ + FmgrInfo *ordEqfunctions; /* equality funcs for ordering columns */ Tuplestorestate *buffer; /* stores rows of current partition */ int current_ptr; /* read pointer # for current row */ int framehead_ptr; /* read pointer # for frame head, if used */ @@ -1967,7 +1964,8 @@ typedef struct WindowAggState typedef struct UniqueState { PlanState ps; /* its first field is NodeTag */ - ExprState *eqfunction; /* tuple equality qual */ + FmgrInfo *eqfunctions; /* per-field lookup data for equality fns */ + MemoryContext tempContext; /* short-term context for comparisons */ } UniqueState; /* ---------------- @@ -2081,11 +2079,11 @@ typedef struct SetOpStatePerGroupData *SetOpStatePerGroup; typedef struct SetOpState { PlanState ps; /* its first field is NodeTag */ - ExprState *eqfunction; /* equality comparator */ FmgrInfo 
*eqfunctions; /* per-grouping-field equality fns */ FmgrInfo *hashfunctions; /* per-grouping-field hash fns */ bool setop_done; /* indicates completion of output scan */ long numOutput; /* number of dups left to output */ + MemoryContext tempContext; /* short-term context for comparisons */ /* these fields are used in SETOP_SORTED mode: */ SetOpStatePerGroup pergroup; /* per-group working state */ HeapTuple grp_firstTuple; /* copy of first tuple of current group */ From f8437c819acc37b43bd2d5b19a6b7609b4ea1292 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Fri, 16 Feb 2018 12:46:41 +0100 Subject: [PATCH 1004/1087] Fix typo in comment --- src/backend/access/transam/xlog.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index 18b7471597..47a6c4d895 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -4821,7 +4821,7 @@ check_wal_buffers(int *newval, void **extra, GucSource source) * This is to be called during startup, including a crash recovery cycle, * unless in bootstrap mode, where no control file yet exists. As there's no * usable shared memory yet (its sizing can depend on the contents of the - * control file!), first store the contents in local memory. XLOGShemInit() + * control file!), first store the contents in local memory. XLOGShmemInit() * will then copy it to shared memory later. * * reset just controls whether previous contents are to be expected (in the From 2fb1abaeb016aeb45b9e6d0b81b7a7e92bb251b9 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 16 Feb 2018 10:33:59 -0500 Subject: [PATCH 1005/1087] Rename enable_partition_wise_join to enable_partitionwise_join Discussion: https://www.postgresql.org/message-id/flat/ad24e4f4-6481-066e-e3fb-6ef4a3121882%402ndquadrant.com --- .../postgres_fdw/expected/postgres_fdw.out | 6 ++-- contrib/postgres_fdw/sql/postgres_fdw.sql | 6 ++-- doc/src/sgml/config.sgml | 12 +++---- src/backend/optimizer/README | 6 ++-- src/backend/optimizer/geqo/geqo_eval.c | 4 +-- src/backend/optimizer/path/allpaths.c | 20 +++++------ src/backend/optimizer/path/costsize.c | 2 +- src/backend/optimizer/path/joinrels.c | 24 ++++++------- src/backend/optimizer/util/relnode.c | 6 ++-- src/backend/utils/misc/guc.c | 6 ++-- src/backend/utils/misc/postgresql.conf.sample | 2 +- src/include/optimizer/cost.h | 2 +- src/include/optimizer/paths.h | 2 +- src/test/regress/expected/partition_join.out | 22 ++++++------ src/test/regress/expected/sysviews.out | 34 +++++++++---------- src/test/regress/sql/partition_join.sql | 22 ++++++------ 16 files changed, 88 insertions(+), 88 deletions(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 62e084fb3d..262c635cdb 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -7682,9 +7682,9 @@ AND ftoptions @> array['fetch_size=60000']; ROLLBACK; -- =================================================================== --- test partition-wise-joins +-- test partitionwise joins -- =================================================================== -SET enable_partition_wise_join=on; +SET enable_partitionwise_join=on; CREATE TABLE fprt1 (a int, b int, c varchar) PARTITION BY RANGE(a); CREATE TABLE fprt1_p1 (LIKE fprt1); CREATE TABLE fprt1_p2 (LIKE fprt1); @@ -7800,4 +7800,4 @@ SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t 400 | 400 (4 rows) -RESET 
enable_partition_wise_join; +RESET enable_partitionwise_join; diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 68fdfdc765..28635498ca 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -1863,9 +1863,9 @@ AND ftoptions @> array['fetch_size=60000']; ROLLBACK; -- =================================================================== --- test partition-wise-joins +-- test partitionwise joins -- =================================================================== -SET enable_partition_wise_join=on; +SET enable_partitionwise_join=on; CREATE TABLE fprt1 (a int, b int, c varchar) PARTITION BY RANGE(a); CREATE TABLE fprt1_p1 (LIKE fprt1); @@ -1913,4 +1913,4 @@ EXPLAIN (COSTS OFF) SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t1.a = t2.b AND t1.b = t2.a) q WHERE t1.a%25 = 0 ORDER BY 1,2; SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t1.a = t2.b AND t1.b = t2.a) q WHERE t1.a%25 = 0 ORDER BY 1,2; -RESET enable_partition_wise_join; +RESET enable_partitionwise_join; diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index c45979dee4..4c998fe51f 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -3736,20 +3736,20 @@ ANY num_sync ( - enable_partition_wise_join (boolean) + + enable_partitionwise_join (boolean) - enable_partition_wise_join configuration parameter + enable_partitionwise_join configuration parameter - Enables or disables the query planner's use of partition-wise join, + Enables or disables the query planner's use of partitionwise join, which allows a join between partitioned tables to be performed by - joining the matching partitions. Partition-wise join currently applies + joining the matching partitions. Partitionwise join currently applies only when the join conditions include all the partition keys, which must be of the same data type and have exactly matching sets of child - partitions. Because partition-wise join planning can use significantly + partitions. Because partitionwise join planning can use significantly more CPU time and memory during planning, the default is off. diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README index 1e4084dcf4..84e60f7f6f 100644 --- a/src/backend/optimizer/README +++ b/src/backend/optimizer/README @@ -1076,8 +1076,8 @@ plan as possible. Expanding the range of cases in which more work can be pushed below the Gather (and costing them accurately) is likely to keep us busy for a long time to come. -Partition-wise joins --------------------- +Partitionwise joins +------------------- A join between two similarly partitioned tables can be broken down into joins between their matching partitions if there exists an equi-join condition between the partition keys of the joining tables. The equi-join between @@ -1089,7 +1089,7 @@ partitioned in the same way as the joining relations, thus allowing an N-way join between similarly partitioned tables having equi-join condition between their partition keys to be broken down into N-way joins between their matching partitions. This technique of breaking down a join between partitioned tables -into joins between their partitions is called partition-wise join. We will use +into joins between their partitions is called partitionwise join. We will use term "partitioned relation" for either a partitioned table or a join between compatibly partitioned tables. 
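
/*
 * Condensing the conditions above into code shape -- a simplified
 * sketch, not the exact planner source (the real tests live in
 * try_partitionwise_join() and build_joinrel_partition_info(), which
 * appear later in this patch):
 */
if (enable_partitionwise_join &&
	IS_PARTITIONED_REL(rel1) &&
	IS_PARTITIONED_REL(rel2) &&
	rel1->part_scheme == rel2->part_scheme &&	/* same partitioning scheme */
	have_partkey_equi_join(rel1, rel2, jointype, restrictlist))
{
	/* join matching partitions pairwise, then Append the child joins */
}
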
diff --git a/src/backend/optimizer/geqo/geqo_eval.c b/src/backend/optimizer/geqo/geqo_eval.c index 9053cfd0b9..57f0f594e5 100644 --- a/src/backend/optimizer/geqo/geqo_eval.c +++ b/src/backend/optimizer/geqo/geqo_eval.c @@ -264,8 +264,8 @@ merge_clump(PlannerInfo *root, List *clumps, Clump *new_clump, bool force) /* Keep searching if join order is not valid */ if (joinrel) { - /* Create paths for partition-wise joins. */ - generate_partition_wise_join_paths(root, joinrel); + /* Create paths for partitionwise joins. */ + generate_partitionwise_join_paths(root, joinrel); /* Create GatherPaths for any useful partial paths for rel */ generate_gather_paths(root, joinrel); diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index 6e842f93d0..f714247ebb 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -929,7 +929,7 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel, /* * We need attr_needed data for building targetlist of a join * relation representing join between matching partitions for - * partition-wise join. A given attribute of a child will be + * partitionwise join. A given attribute of a child will be * needed in the same highest joinrel where the corresponding * attribute of parent is needed. Hence it suffices to use the * same Relids set for parent and child. @@ -973,7 +973,7 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel, /* * Copy/Modify targetlist. Even if this child is deemed empty, we need * its targetlist in case it falls on nullable side in a child-join - * because of partition-wise join. + * because of partitionwise join. * * NB: the resulting childrel->reltarget->exprs may contain arbitrary * expressions, which otherwise would not occur in a rel's targetlist. @@ -2636,7 +2636,7 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) join_search_one_level(root, lev); /* - * Run generate_partition_wise_join_paths() and + * Run generate_partitionwise_join_paths() and * generate_gather_paths() for each just-processed joinrel. We could * not do this earlier because both regular and partial paths can get * added to a particular joinrel at multiple times within @@ -2649,8 +2649,8 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) { rel = (RelOptInfo *) lfirst(lc); - /* Create paths for partition-wise joins. */ - generate_partition_wise_join_paths(root, rel); + /* Create paths for partitionwise joins. */ + generate_partitionwise_join_paths(root, rel); /* Create GatherPaths for any useful partial paths for rel */ generate_gather_paths(root, rel); @@ -3405,8 +3405,8 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages, } /* - * generate_partition_wise_join_paths - * Create paths representing partition-wise join for given partitioned + * generate_partitionwise_join_paths + * Create paths representing partitionwise join for given partitioned * join relation. * * This must not be called until after we are done adding paths for all @@ -3414,7 +3414,7 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages, * generated here has a reference. 
*/ void -generate_partition_wise_join_paths(PlannerInfo *root, RelOptInfo *rel) +generate_partitionwise_join_paths(PlannerInfo *root, RelOptInfo *rel) { List *live_children = NIL; int cnt_parts; @@ -3442,8 +3442,8 @@ generate_partition_wise_join_paths(PlannerInfo *root, RelOptInfo *rel) Assert(child_rel != NULL); - /* Add partition-wise join paths for partitioned child-joins. */ - generate_partition_wise_join_paths(root, child_rel); + /* Add partitionwise join paths for partitioned child-joins. */ + generate_partitionwise_join_paths(root, child_rel); /* Dummy children will not be scanned, so ignore those. */ if (IS_DUMMY_REL(child_rel)) diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 29fea48ee2..16ef348f40 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -127,7 +127,7 @@ bool enable_material = true; bool enable_mergejoin = true; bool enable_hashjoin = true; bool enable_gathermerge = true; -bool enable_partition_wise_join = false; +bool enable_partitionwise_join = false; bool enable_parallel_append = true; bool enable_parallel_hash = true; diff --git a/src/backend/optimizer/path/joinrels.c b/src/backend/optimizer/path/joinrels.c index f74afdb4dd..3f1c1b3477 100644 --- a/src/backend/optimizer/path/joinrels.c +++ b/src/backend/optimizer/path/joinrels.c @@ -39,7 +39,7 @@ static bool restriction_is_constant_false(List *restrictlist, static void populate_joinrel_with_paths(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2, RelOptInfo *joinrel, SpecialJoinInfo *sjinfo, List *restrictlist); -static void try_partition_wise_join(PlannerInfo *root, RelOptInfo *rel1, +static void try_partitionwise_join(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2, RelOptInfo *joinrel, SpecialJoinInfo *parent_sjinfo, List *parent_restrictlist); @@ -903,8 +903,8 @@ populate_joinrel_with_paths(PlannerInfo *root, RelOptInfo *rel1, break; } - /* Apply partition-wise join technique, if possible. */ - try_partition_wise_join(root, rel1, rel2, joinrel, sjinfo, restrictlist); + /* Apply partitionwise join technique, if possible. */ + try_partitionwise_join(root, rel1, rel2, joinrel, sjinfo, restrictlist); } @@ -1286,25 +1286,25 @@ restriction_is_constant_false(List *restrictlist, bool only_pushed_down) /* * Assess whether join between given two partitioned relations can be broken * down into joins between matching partitions; a technique called - * "partition-wise join" + * "partitionwise join" * - * Partition-wise join is possible when a. Joining relations have same + * Partitionwise join is possible when a. Joining relations have same * partitioning scheme b. There exists an equi-join between the partition keys * of the two relations. * - * Partition-wise join is planned as follows (details: optimizer/README.) + * Partitionwise join is planned as follows (details: optimizer/README.) * * 1. Create the RelOptInfos for joins between matching partitions i.e * child-joins and add paths to them. * * 2. Construct Append or MergeAppend paths across the set of child joins. - * This second phase is implemented by generate_partition_wise_join_paths(). + * This second phase is implemented by generate_partitionwise_join_paths(). * * The RelOptInfo, SpecialJoinInfo and restrictlist for each child join are * obtained by translating the respective parent join structures. 
*/ static void -try_partition_wise_join(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2, +try_partitionwise_join(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2, RelOptInfo *joinrel, SpecialJoinInfo *parent_sjinfo, List *parent_restrictlist) { @@ -1334,7 +1334,7 @@ try_partition_wise_join(PlannerInfo *root, RelOptInfo *rel1, RelOptInfo *rel2, joinrel->part_scheme == rel2->part_scheme); /* - * Since we allow partition-wise join only when the partition bounds of + * Since we allow partitionwise join only when the partition bounds of * the joining relations exactly match, the partition bounds of the join * should match those of the joining relations. */ @@ -1478,7 +1478,7 @@ have_partkey_equi_join(RelOptInfo *rel1, RelOptInfo *rel2, JoinType jointype, /* * Only clauses referencing the partition keys are useful for - * partition-wise join. + * partitionwise join. */ ipk1 = match_expr_to_partition_keys(expr1, rel1, strict_op); if (ipk1 < 0) @@ -1489,13 +1489,13 @@ have_partkey_equi_join(RelOptInfo *rel1, RelOptInfo *rel2, JoinType jointype, /* * If the clause refers to keys at different ordinal positions, it can - * not be used for partition-wise join. + * not be used for partitionwise join. */ if (ipk1 != ipk2) continue; /* - * The clause allows partition-wise join if only it uses the same + * The clause allows partitionwise join if only it uses the same * operator family as that specified by the partition key. */ if (rel1->part_scheme->strategy == PARTITION_STRATEGY_HASH) diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c index 5c368321e6..da8f0f93fc 100644 --- a/src/backend/optimizer/util/relnode.c +++ b/src/backend/optimizer/util/relnode.c @@ -1601,15 +1601,15 @@ build_joinrel_partition_info(RelOptInfo *joinrel, RelOptInfo *outer_rel, int cnt; PartitionScheme part_scheme; - /* Nothing to do if partition-wise join technique is disabled. */ - if (!enable_partition_wise_join) + /* Nothing to do if partitionwise join technique is disabled. */ + if (!enable_partitionwise_join) { Assert(!IS_PARTITIONED_REL(joinrel)); return; } /* - * We can only consider this join as an input to further partition-wise + * We can only consider this join as an input to further partitionwise * joins if (a) the input relations are partitioned, (b) the partition * schemes match, and (c) we can identify an equi-join between the * partition keys. 
Note that if it were possible for diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index 87ba67661a..1db7845d5a 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -914,11 +914,11 @@ static struct config_bool ConfigureNamesBool[] = NULL, NULL, NULL }, { - {"enable_partition_wise_join", PGC_USERSET, QUERY_TUNING_METHOD, - gettext_noop("Enables partition-wise join."), + {"enable_partitionwise_join", PGC_USERSET, QUERY_TUNING_METHOD, + gettext_noop("Enables partitionwise join."), NULL }, - &enable_partition_wise_join, + &enable_partitionwise_join, false, NULL, NULL, NULL }, diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample index 9a3535559e..39272925fb 100644 --- a/src/backend/utils/misc/postgresql.conf.sample +++ b/src/backend/utils/misc/postgresql.conf.sample @@ -303,7 +303,7 @@ #enable_seqscan = on #enable_sort = on #enable_tidscan = on -#enable_partition_wise_join = off +#enable_partitionwise_join = off #enable_parallel_hash = on # - Planner Cost Constants - diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h index 0e9f858b9e..132e35551b 100644 --- a/src/include/optimizer/cost.h +++ b/src/include/optimizer/cost.h @@ -67,7 +67,7 @@ extern PGDLLIMPORT bool enable_material; extern PGDLLIMPORT bool enable_mergejoin; extern PGDLLIMPORT bool enable_hashjoin; extern PGDLLIMPORT bool enable_gathermerge; -extern PGDLLIMPORT bool enable_partition_wise_join; +extern PGDLLIMPORT bool enable_partitionwise_join; extern PGDLLIMPORT bool enable_parallel_append; extern PGDLLIMPORT bool enable_parallel_hash; extern PGDLLIMPORT int constraint_exclusion; diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h index 4708443c39..c9e44318ad 100644 --- a/src/include/optimizer/paths.h +++ b/src/include/optimizer/paths.h @@ -58,7 +58,7 @@ extern int compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages, int max_workers); extern void create_partial_bitmap_paths(PlannerInfo *root, RelOptInfo *rel, Path *bitmapqual); -extern void generate_partition_wise_join_paths(PlannerInfo *root, +extern void generate_partitionwise_join_paths(PlannerInfo *root, RelOptInfo *rel); #ifdef OPTIMIZER_DEBUG diff --git a/src/test/regress/expected/partition_join.out b/src/test/regress/expected/partition_join.out index 333f93889c..636bedadf2 100644 --- a/src/test/regress/expected/partition_join.out +++ b/src/test/regress/expected/partition_join.out @@ -1,9 +1,9 @@ -- -- PARTITION_JOIN --- Test partition-wise join between partitioned tables +-- Test partitionwise join between partitioned tables -- --- Enable partition-wise join, which by default is disabled. -SET enable_partition_wise_join to true; +-- Enable partitionwise join, which by default is disabled. 
+SET enable_partitionwise_join to true; -- -- partitioned by a single column -- @@ -1578,7 +1578,7 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE prt1_l.b = 0) t1 | | 525 | 0001 (16 rows) --- lateral partition-wise join +-- lateral partitionwise join EXPLAIN (COSTS OFF) SELECT * FROM prt1_l t1 LEFT JOIN LATERAL (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM prt1_l t2 JOIN prt2_l t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss @@ -1695,7 +1695,7 @@ CREATE TABLE prt4_n_p2 PARTITION OF prt4_n FOR VALUES FROM (300) TO (500); CREATE TABLE prt4_n_p3 PARTITION OF prt4_n FOR VALUES FROM (500) TO (600); INSERT INTO prt4_n SELECT i, i, to_char(i, 'FM0000') FROM generate_series(0, 599, 2) i; ANALYZE prt4_n; --- partition-wise join can not be applied if the partition ranges differ +-- partitionwise join can not be applied if the partition ranges differ EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2 WHERE t1.a = t2.a; QUERY PLAN @@ -1742,7 +1742,7 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2, prt2 t3 WHERE t1.a = t2.a -> Seq Scan on prt2_p3 t3_2 (23 rows) --- partition-wise join can not be applied if there are no equi-join conditions +-- partitionwise join can not be applied if there are no equi-join conditions -- between partition keys EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 LEFT JOIN prt2 t2 ON (t1.a < t2.b); @@ -1763,7 +1763,7 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 LEFT JOIN prt2 t2 ON (t1.a < t2.b); (12 rows) -- equi-join with join condition on partial keys does not qualify for --- partition-wise join +-- partitionwise join EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1, prt2_m t2 WHERE t1.a = (t2.b + t2.a)/2; QUERY PLAN @@ -1782,7 +1782,7 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1, prt2_m t2 WHERE t1.a = (t2.b + t2. 
(11 rows) -- equi-join between out-of-order partition key columns does not qualify for --- partition-wise join +-- partitionwise join EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.a = t2.b; QUERY PLAN @@ -1800,7 +1800,7 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.a = t2.b; -> Seq Scan on prt2_m_p3 t2_2 (11 rows) --- equi-join between non-key columns does not qualify for partition-wise join +-- equi-join between non-key columns does not qualify for partitionwise join EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.c = t2.c; QUERY PLAN @@ -1818,7 +1818,7 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.c = t2.c; -> Seq Scan on prt2_m_p3 t2_2 (11 rows) --- partition-wise join can not be applied between tables with different +-- partitionwise join can not be applied between tables with different -- partition lists EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 LEFT JOIN prt2_n t2 ON (t1.c = t2.c); @@ -1857,7 +1857,7 @@ SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 JOIN prt2_n t2 ON (t1.c = t2.c) JOI -> Seq Scan on prt1_n_p2 t1_1 (16 rows) --- partition-wise join can not be applied for a join between list and range +-- partitionwise join can not be applied for a join between list and range -- partitioned table EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 FULL JOIN prt1 t2 ON (t1.c = t2.c); diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out index c9c8f51e1c..759f7d9d59 100644 --- a/src/test/regress/expected/sysviews.out +++ b/src/test/regress/expected/sysviews.out @@ -70,23 +70,23 @@ select count(*) >= 0 as ok from pg_prepared_xacts; -- This is to record the prevailing planner enable_foo settings during -- a regression test run. select name, setting from pg_settings where name like 'enable%'; - name | setting -----------------------------+--------- - enable_bitmapscan | on - enable_gathermerge | on - enable_hashagg | on - enable_hashjoin | on - enable_indexonlyscan | on - enable_indexscan | on - enable_material | on - enable_mergejoin | on - enable_nestloop | on - enable_parallel_append | on - enable_parallel_hash | on - enable_partition_wise_join | off - enable_seqscan | on - enable_sort | on - enable_tidscan | on + name | setting +---------------------------+--------- + enable_bitmapscan | on + enable_gathermerge | on + enable_hashagg | on + enable_hashjoin | on + enable_indexonlyscan | on + enable_indexscan | on + enable_material | on + enable_mergejoin | on + enable_nestloop | on + enable_parallel_append | on + enable_parallel_hash | on + enable_partitionwise_join | off + enable_seqscan | on + enable_sort | on + enable_tidscan | on (15 rows) -- Test that the pg_timezone_names and pg_timezone_abbrevs views are diff --git a/src/test/regress/sql/partition_join.sql b/src/test/regress/sql/partition_join.sql index 55c5615d06..4b2e781060 100644 --- a/src/test/regress/sql/partition_join.sql +++ b/src/test/regress/sql/partition_join.sql @@ -1,10 +1,10 @@ -- -- PARTITION_JOIN --- Test partition-wise join between partitioned tables +-- Test partitionwise join between partitioned tables -- --- Enable partition-wise join, which by default is disabled. -SET enable_partition_wise_join to true; +-- Enable partitionwise join, which by default is disabled. 
+SET enable_partitionwise_join to true; -- -- partitioned by a single column @@ -306,7 +306,7 @@ EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE prt1_l.b = 0) t1 FULL JOIN (SELECT * FROM prt2_l WHERE prt2_l.a = 0) t2 ON (t1.a = t2.b AND t1.c = t2.c) ORDER BY t1.a, t2.b; SELECT t1.a, t1.c, t2.b, t2.c FROM (SELECT * FROM prt1_l WHERE prt1_l.b = 0) t1 FULL JOIN (SELECT * FROM prt2_l WHERE prt2_l.a = 0) t2 ON (t1.a = t2.b AND t1.c = t2.c) ORDER BY t1.a, t2.b; --- lateral partition-wise join +-- lateral partitionwise join EXPLAIN (COSTS OFF) SELECT * FROM prt1_l t1 LEFT JOIN LATERAL (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM prt1_l t2 JOIN prt2_l t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss @@ -348,39 +348,39 @@ CREATE TABLE prt4_n_p3 PARTITION OF prt4_n FOR VALUES FROM (500) TO (600); INSERT INTO prt4_n SELECT i, i, to_char(i, 'FM0000') FROM generate_series(0, 599, 2) i; ANALYZE prt4_n; --- partition-wise join can not be applied if the partition ranges differ +-- partitionwise join can not be applied if the partition ranges differ EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2 WHERE t1.a = t2.a; EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt4_n t2, prt2 t3 WHERE t1.a = t2.a and t1.a = t3.b; --- partition-wise join can not be applied if there are no equi-join conditions +-- partitionwise join can not be applied if there are no equi-join conditions -- between partition keys EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1 LEFT JOIN prt2 t2 ON (t1.a < t2.b); -- equi-join with join condition on partial keys does not qualify for --- partition-wise join +-- partitionwise join EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1, prt2_m t2 WHERE t1.a = (t2.b + t2.a)/2; -- equi-join between out-of-order partition key columns does not qualify for --- partition-wise join +-- partitionwise join EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.a = t2.b; --- equi-join between non-key columns does not qualify for partition-wise join +-- equi-join between non-key columns does not qualify for partitionwise join EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_m t1 LEFT JOIN prt2_m t2 ON t1.c = t2.c; --- partition-wise join can not be applied between tables with different +-- partitionwise join can not be applied between tables with different -- partition lists EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 LEFT JOIN prt2_n t2 ON (t1.c = t2.c); EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 JOIN prt2_n t2 ON (t1.c = t2.c) JOIN plt1 t3 ON (t1.c = t3.c); --- partition-wise join can not be applied for a join between list and range +-- partitionwise join can not be applied for a join between list and range -- partitioned table EXPLAIN (COSTS OFF) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1_n t1 FULL JOIN prt1 t2 ON (t1.c = t2.c); From 49bff412edd9eb226e146f6e4db7b5a8e843bd1f Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 16 Feb 2018 12:14:08 -0500 Subject: [PATCH 1006/1087] Remove some inappropriate #includes. Other header files should never #include postgres.h (nor postgres_fe.h, nor c.h), per project policy. Also, there's no need for any backend .c file to explicitly include elog.h or palloc.h, because postgres.h pulls those in already. Extracted from a larger patch by Kyotaro Horiguchi. The rest of the removals he suggests require more study, but these are no-brainers. 
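
As a concrete illustration of the policy (a hypothetical file; the names
are made up for exposition, not taken from the tree):

    /* foo.c --- a hypothetical backend source file */
    #include "postgres.h"       /* always first; pulls in elog.h and palloc.h */

    #include "lib/knapsack.h"   /* other headers assume postgres.h is in place */

    static void
    foo(void)
    {
        char   *buf = palloc(16);       /* no explicit utils/palloc.h needed */

        elog(DEBUG1, "buf at %p", buf); /* nor utils/elog.h */
        pfree(buf);
    }
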
Discussion: https://postgr.es/m/20180215.200447.209320006.horiguchi.kyotaro@lab.ntt.co.jp --- src/backend/lib/knapsack.c | 1 - src/backend/replication/basebackup.c | 1 - src/backend/utils/misc/pg_config.c | 1 - src/backend/utils/misc/rls.c | 1 - src/include/lib/knapsack.h | 1 - src/pl/plpython/plpy_spi.h | 1 - src/pl/plpython/plpy_util.c | 1 - 7 files changed, 7 deletions(-) diff --git a/src/backend/lib/knapsack.c b/src/backend/lib/knapsack.c index 25ce4f2365..c7d9c4d8d2 100644 --- a/src/backend/lib/knapsack.c +++ b/src/backend/lib/knapsack.c @@ -32,7 +32,6 @@ #include "nodes/bitmapset.h" #include "utils/builtins.h" #include "utils/memutils.h" -#include "utils/palloc.h" /* * DiscreteKnapsack diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c index dd7ad64862..185f32a5f9 100644 --- a/src/backend/replication/basebackup.c +++ b/src/backend/replication/basebackup.c @@ -34,7 +34,6 @@ #include "storage/fd.h" #include "storage/ipc.h" #include "utils/builtins.h" -#include "utils/elog.h" #include "utils/ps_status.h" #include "utils/relcache.h" #include "utils/timestamp.h" diff --git a/src/backend/utils/misc/pg_config.c b/src/backend/utils/misc/pg_config.c index 436d2efb21..aa434bc3ab 100644 --- a/src/backend/utils/misc/pg_config.c +++ b/src/backend/utils/misc/pg_config.c @@ -19,7 +19,6 @@ #include "catalog/pg_type.h" #include "common/config_info.h" #include "utils/builtins.h" -#include "utils/elog.h" #include "port.h" Datum diff --git a/src/backend/utils/misc/rls.c b/src/backend/utils/misc/rls.c index 5ed64dc1dd..94449c7e8a 100644 --- a/src/backend/utils/misc/rls.c +++ b/src/backend/utils/misc/rls.c @@ -22,7 +22,6 @@ #include "miscadmin.h" #include "utils/acl.h" #include "utils/builtins.h" -#include "utils/elog.h" #include "utils/lsyscache.h" #include "utils/rls.h" #include "utils/syscache.h" diff --git a/src/include/lib/knapsack.h b/src/include/lib/knapsack.h index f2a61675cb..9f17004d48 100644 --- a/src/include/lib/knapsack.h +++ b/src/include/lib/knapsack.h @@ -8,7 +8,6 @@ #ifndef KNAPSACK_H #define KNAPSACK_H -#include "postgres.h" #include "nodes/bitmapset.h" extern Bitmapset *DiscreteKnapsack(int max_weight, int num_items, diff --git a/src/pl/plpython/plpy_spi.h b/src/pl/plpython/plpy_spi.h index d6b0a4707b..5a0eef78dc 100644 --- a/src/pl/plpython/plpy_spi.h +++ b/src/pl/plpython/plpy_spi.h @@ -5,7 +5,6 @@ #ifndef PLPY_SPI_H #define PLPY_SPI_H -#include "utils/palloc.h" #include "utils/resowner.h" extern PyObject *PLy_spi_prepare(PyObject *self, PyObject *args); diff --git a/src/pl/plpython/plpy_util.c b/src/pl/plpython/plpy_util.c index 35d57a9e80..51e2461ec3 100644 --- a/src/pl/plpython/plpy_util.c +++ b/src/pl/plpython/plpy_util.c @@ -8,7 +8,6 @@ #include "mb/pg_wchar.h" #include "utils/memutils.h" -#include "utils/palloc.h" #include "plpython.h" From ad9a274778d2d88c46b90309212b92ee7fdf9afe Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 1 Feb 2018 17:07:38 -0500 Subject: [PATCH 1007/1087] Fix crash when canceling parallel query elog(FATAL) would end up calling PortalCleanup(), which would call executor shutdown code, which could fail and crash, especially under parallel query. This was introduced by 8561e4840c81f7e345be2df170839846814fa004, which did not want to mark an active portal as failed by a normal transaction abort anymore. But we do need to do that for an elog(FATAL) exit. Introduce a variable shmem_exit_inprogress similar to the existing proc_exit_inprogress, so we can tell whether we are in the FATAL exit scenario. 
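
In code terms the distinction works out as below -- a sketch of the
resulting convention, on the assumption that the flag is read from an
abort path such as AtAbort_Portals() (the flag is true during any
shmem_exit(), but an ordinary transaction abort never runs inside one):

    if (shmem_exit_inprogress)
    {
        /* backend is dying, e.g. via elog(FATAL): skip executor shutdown */
    }
    else
    {
        /* plain transaction abort: the usual cleanup remains safe */
    }
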
Reported-by: Andres Freund --- src/backend/storage/ipc/ipc.c | 7 +++++++ src/backend/utils/mmgr/portalmem.c | 8 ++++++++ src/include/storage/ipc.h | 1 + 3 files changed, 16 insertions(+) diff --git a/src/backend/storage/ipc/ipc.c b/src/backend/storage/ipc/ipc.c index 2de35efbd4..726db7b7f1 100644 --- a/src/backend/storage/ipc/ipc.c +++ b/src/backend/storage/ipc/ipc.c @@ -39,6 +39,11 @@ */ bool proc_exit_inprogress = false; +/* + * Set when shmem_exit() is in progress. + */ +bool shmem_exit_inprogress = false; + /* * This flag tracks whether we've called atexit() in the current process * (or in the parent postmaster). @@ -214,6 +219,8 @@ proc_exit_prepare(int code) void shmem_exit(int code) { + shmem_exit_inprogress = true; + /* * Call before_shmem_exit callbacks. * diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c index f3f0add1d6..75a6dde32b 100644 --- a/src/backend/utils/mmgr/portalmem.c +++ b/src/backend/utils/mmgr/portalmem.c @@ -22,6 +22,7 @@ #include "catalog/pg_type.h" #include "commands/portalcmds.h" #include "miscadmin.h" +#include "storage/ipc.h" #include "utils/builtins.h" #include "utils/memutils.h" #include "utils/snapmgr.h" @@ -757,6 +758,13 @@ AtAbort_Portals(void) { Portal portal = hentry->portal; + /* + * When elog(FATAL) is in progress, we need to set the active portal to + * failed, so that PortalCleanup() doesn't run the executor shutdown. + */ + if (portal->status == PORTAL_ACTIVE && shmem_exit_inprogress) + MarkPortalFailed(portal); + /* * Do nothing else to cursors held over from a previous transaction. */ diff --git a/src/include/storage/ipc.h b/src/include/storage/ipc.h index e934a83a1c..6a05a89349 100644 --- a/src/include/storage/ipc.h +++ b/src/include/storage/ipc.h @@ -63,6 +63,7 @@ typedef void (*shmem_startup_hook_type) (void); /* ipc.c */ extern PGDLLIMPORT bool proc_exit_inprogress; +extern PGDLLIMPORT bool shmem_exit_inprogress; extern void proc_exit(int code) pg_attribute_noreturn(); extern void shmem_exit(int code); From bf6c614a2f2c58312b3be34a47e7fb7362e07bcb Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Thu, 15 Feb 2018 21:55:31 -0800 Subject: [PATCH 1008/1087] Do execGrouping.c via expression eval machinery, take two. This has a performance benefit on its own, although not hugely so. The primary benefit is that it will allow for JIT compilation of tuple deforming and comparator invocations. Large parts of this were previously committed (773aec7aa), but the commit contained an omission around cross-type comparisons and was thus reverted.
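For reference, the NOT DISTINCT semantics implemented by the new EEOP_NOT_DISTINCT step can be sketched as follows (illustration only, not part of the patch; the committed logic is in the execExprInterp.c hunk below):

/* SQL's "IS NOT DISTINCT FROM" for one pair of values */
static bool
not_distinct(Datum a, bool anull, Datum b, bool bnull, FmgrInfo *eqfn)
{
	if (anull && bnull)
		return true;			/* two nulls match */
	if (anull || bnull)
		return false;			/* null vs. not-null: no match */
	return DatumGetBool(FunctionCall2(eqfn, a, b)); /* datatype equality */
}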
Author: Andres Freund Discussion: https://postgr.es/m/20171129080934.amqqkke2zjtekd4t@alap3.anarazel.de --- src/backend/executor/execExpr.c | 143 +++++++++++++ src/backend/executor/execExprInterp.c | 29 +++ src/backend/executor/execGrouping.c | 249 +++++----------------- src/backend/executor/nodeAgg.c | 145 ++++++++----- src/backend/executor/nodeGroup.c | 24 +-- src/backend/executor/nodeRecursiveunion.c | 11 +- src/backend/executor/nodeSetOp.c | 54 ++--- src/backend/executor/nodeSubplan.c | 110 +++++++++- src/backend/executor/nodeUnique.c | 31 ++- src/backend/executor/nodeWindowAgg.c | 38 ++-- src/backend/utils/adt/orderedsetaggs.c | 56 ++--- src/include/executor/execExpr.h | 1 + src/include/executor/executor.h | 34 ++- src/include/executor/nodeAgg.h | 14 +- src/include/nodes/execnodes.h | 27 +-- 15 files changed, 566 insertions(+), 400 deletions(-) diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index c6eb3ebacf..463e185a9a 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -3193,3 +3193,146 @@ ExecBuildAggTransCall(ExprState *state, AggState *aggstate, as->d.agg_strict_trans_check.jumpnull = state->steps_len; } } + +/* + * Build equality expression that can be evaluated using ExecQual(), returning + * true if the expression context's inner/outer tuples are NOT DISTINCT. I.e. + * two nulls match, a null and a not-null don't match. + * + * desc: tuple descriptor of the to-be-compared tuples + * numCols: the number of attributes to be examined + * keyColIdx: array of attribute column numbers + * eqfunctions: array of function oids of the equality functions to use + * parent: parent executor node + */ +ExprState * +ExecBuildGroupingEqual(TupleDesc ldesc, TupleDesc rdesc, + int numCols, + AttrNumber *keyColIdx, + Oid *eqfunctions, + PlanState *parent) +{ + ExprState *state = makeNode(ExprState); + ExprEvalStep scratch = {0}; + int natt; + int maxatt = -1; + List *adjust_jumps = NIL; + ListCell *lc; + + /* + * When no columns are actually compared, the result's always true. See + * special case in ExecQual(). + */ + if (numCols == 0) + return NULL; + + state->expr = NULL; + state->flags = EEO_FLAG_IS_QUAL; + state->parent = parent; + + scratch.resvalue = &state->resvalue; + scratch.resnull = &state->resnull; + + /* compute max needed attribute */ + for (natt = 0; natt < numCols; natt++) + { + int attno = keyColIdx[natt]; + + if (attno > maxatt) + maxatt = attno; + } + Assert(maxatt >= 0); + + /* push deform steps */ + scratch.opcode = EEOP_INNER_FETCHSOME; + scratch.d.fetch.last_var = maxatt; + ExprEvalPushStep(state, &scratch); + + scratch.opcode = EEOP_OUTER_FETCHSOME; + scratch.d.fetch.last_var = maxatt; + ExprEvalPushStep(state, &scratch); + + /* + * Start comparing at the last field (least significant sort key). That's + * the most likely to be different if we are dealing with sorted input.
+ */ + for (natt = numCols; --natt >= 0;) + { + int attno = keyColIdx[natt]; + Form_pg_attribute latt = TupleDescAttr(ldesc, attno - 1); + Form_pg_attribute ratt = TupleDescAttr(rdesc, attno - 1); + Oid foid = eqfunctions[natt]; + FmgrInfo *finfo; + FunctionCallInfo fcinfo; + AclResult aclresult; + + /* Check permission to call function */ + aclresult = pg_proc_aclcheck(foid, GetUserId(), ACL_EXECUTE); + if (aclresult != ACLCHECK_OK) + aclcheck_error(aclresult, OBJECT_FUNCTION, get_func_name(foid)); + + InvokeFunctionExecuteHook(foid); + + /* Set up the primary fmgr lookup information */ + finfo = palloc0(sizeof(FmgrInfo)); + fcinfo = palloc0(sizeof(FunctionCallInfoData)); + fmgr_info(foid, finfo); + fmgr_info_set_expr(NULL, finfo); + InitFunctionCallInfoData(*fcinfo, finfo, 2, + InvalidOid, NULL, NULL); + + /* left arg */ + scratch.opcode = EEOP_INNER_VAR; + scratch.d.var.attnum = attno - 1; + scratch.d.var.vartype = latt->atttypid; + scratch.resvalue = &fcinfo->arg[0]; + scratch.resnull = &fcinfo->argnull[0]; + ExprEvalPushStep(state, &scratch); + + /* right arg */ + scratch.opcode = EEOP_OUTER_VAR; + scratch.d.var.attnum = attno - 1; + scratch.d.var.vartype = ratt->atttypid; + scratch.resvalue = &fcinfo->arg[1]; + scratch.resnull = &fcinfo->argnull[1]; + ExprEvalPushStep(state, &scratch); + + /* evaluate distinctness */ + scratch.opcode = EEOP_NOT_DISTINCT; + scratch.d.func.finfo = finfo; + scratch.d.func.fcinfo_data = fcinfo; + scratch.d.func.fn_addr = finfo->fn_addr; + scratch.d.func.nargs = 2; + scratch.resvalue = &state->resvalue; + scratch.resnull = &state->resnull; + ExprEvalPushStep(state, &scratch); + + /* then emit EEOP_QUAL to detect if result is false (or null) */ + scratch.opcode = EEOP_QUAL; + scratch.d.qualexpr.jumpdone = -1; + scratch.resvalue = &state->resvalue; + scratch.resnull = &state->resnull; + ExprEvalPushStep(state, &scratch); + adjust_jumps = lappend_int(adjust_jumps, + state->steps_len - 1); + } + + /* adjust jump targets */ + foreach(lc, adjust_jumps) + { + ExprEvalStep *as = &state->steps[lfirst_int(lc)]; + + Assert(as->opcode == EEOP_QUAL); + Assert(as->d.qualexpr.jumpdone == -1); + as->d.qualexpr.jumpdone = state->steps_len; + } + + scratch.resvalue = NULL; + scratch.resnull = NULL; + scratch.opcode = EEOP_DONE; + ExprEvalPushStep(state, &scratch); + + ExecReadyExpr(state); + + return state; +} diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c index 9c6c2b02e9..771b7e3945 100644 --- a/src/backend/executor/execExprInterp.c +++ b/src/backend/executor/execExprInterp.c @@ -355,6 +355,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) &&CASE_EEOP_MAKE_READONLY, &&CASE_EEOP_IOCOERCE, &&CASE_EEOP_DISTINCT, + &&CASE_EEOP_NOT_DISTINCT, &&CASE_EEOP_NULLIF, &&CASE_EEOP_SQLVALUEFUNCTION, &&CASE_EEOP_CURRENTOFEXPR, @@ -1198,6 +1199,34 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull) EEO_NEXT(); } + /* see EEOP_DISTINCT for comments, this is just inverted */ + EEO_CASE(EEOP_NOT_DISTINCT) + { + FunctionCallInfo fcinfo = op->d.func.fcinfo_data; + + if (fcinfo->argnull[0] && fcinfo->argnull[1]) + { + *op->resvalue = BoolGetDatum(true); + *op->resnull = false; + } + else if (fcinfo->argnull[0] || fcinfo->argnull[1]) + { + *op->resvalue = BoolGetDatum(false); + *op->resnull = false; + } + else + { + Datum eqresult; + + fcinfo->isnull = false; + eqresult = op->d.func.fn_addr(fcinfo); + *op->resvalue = eqresult; + *op->resnull = fcinfo->isnull; + } + + EEO_NEXT(); + } + 
EEO_CASE(EEOP_NULLIF) { /* diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c index 8e8dbb1f20..c4d0e04058 100644 --- a/src/backend/executor/execGrouping.c +++ b/src/backend/executor/execGrouping.c @@ -51,173 +51,34 @@ static int TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tup * Utility routines for grouping tuples together *****************************************************************************/ -/* - * execTuplesMatch - * Return true if two tuples match in all the indicated fields. - * - * This actually implements SQL's notion of "not distinct". Two nulls - * match, a null and a not-null don't match. - * - * slot1, slot2: the tuples to compare (must have same columns!) - * numCols: the number of attributes to be examined - * matchColIdx: array of attribute column numbers - * eqFunctions: array of fmgr lookup info for the equality functions to use - * evalContext: short-term memory context for executing the functions - * - * NB: evalContext is reset each time! - */ -bool -execTuplesMatch(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext) -{ - MemoryContext oldContext; - bool result; - int i; - - /* Reset and switch into the temp context. */ - MemoryContextReset(evalContext); - oldContext = MemoryContextSwitchTo(evalContext); - - /* - * We cannot report a match without checking all the fields, but we can - * report a non-match as soon as we find unequal fields. So, start - * comparing at the last field (least significant sort key). That's the - * most likely to be different if we are dealing with sorted input. - */ - result = true; - - for (i = numCols; --i >= 0;) - { - AttrNumber att = matchColIdx[i]; - Datum attr1, - attr2; - bool isNull1, - isNull2; - - attr1 = slot_getattr(slot1, att, &isNull1); - - attr2 = slot_getattr(slot2, att, &isNull2); - - if (isNull1 != isNull2) - { - result = false; /* one null and one not; they aren't equal */ - break; - } - - if (isNull1) - continue; /* both are null, treat as equal */ - - /* Apply the type-specific equality function */ - - if (!DatumGetBool(FunctionCall2(&eqfunctions[i], - attr1, attr2))) - { - result = false; /* they aren't equal */ - break; - } - } - - MemoryContextSwitchTo(oldContext); - - return result; -} - -/* - * execTuplesUnequal - * Return true if two tuples are definitely unequal in the indicated - * fields. - * - * Nulls are neither equal nor unequal to anything else. A true result - * is obtained only if there are non-null fields that compare not-equal. - * - * Parameters are identical to execTuplesMatch. - */ -bool -execTuplesUnequal(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext) -{ - MemoryContext oldContext; - bool result; - int i; - - /* Reset and switch into the temp context. */ - MemoryContextReset(evalContext); - oldContext = MemoryContextSwitchTo(evalContext); - - /* - * We cannot report a match without checking all the fields, but we can - * report a non-match as soon as we find unequal fields. So, start - * comparing at the last field (least significant sort key). That's the - * most likely to be different if we are dealing with sorted input. 
- */ - result = false; - - for (i = numCols; --i >= 0;) - { - AttrNumber att = matchColIdx[i]; - Datum attr1, - attr2; - bool isNull1, - isNull2; - - attr1 = slot_getattr(slot1, att, &isNull1); - - if (isNull1) - continue; /* can't prove anything here */ - - attr2 = slot_getattr(slot2, att, &isNull2); - - if (isNull2) - continue; /* can't prove anything here */ - - /* Apply the type-specific equality function */ - - if (!DatumGetBool(FunctionCall2(&eqfunctions[i], - attr1, attr2))) - { - result = true; /* they are unequal */ - break; - } - } - - MemoryContextSwitchTo(oldContext); - - return result; -} - - /* * execTuplesMatchPrepare - * Look up the equality functions needed for execTuplesMatch or - * execTuplesUnequal, given an array of equality operator OIDs. - * - * The result is a palloc'd array. + * Build expression that can be evaluated using ExecQual(), returning + * whether an ExprContext's inner/outer tuples are NOT DISTINCT */ -FmgrInfo * -execTuplesMatchPrepare(int numCols, - Oid *eqOperators) +ExprState * +execTuplesMatchPrepare(TupleDesc desc, + int numCols, + AttrNumber *keyColIdx, + Oid *eqOperators, + PlanState *parent) { - FmgrInfo *eqFunctions = (FmgrInfo *) palloc(numCols * sizeof(FmgrInfo)); + Oid *eqFunctions = (Oid *) palloc(numCols * sizeof(Oid)); int i; + ExprState *expr; + + if (numCols == 0) + return NULL; + /* lookup equality functions */ for (i = 0; i < numCols; i++) - { - Oid eq_opr = eqOperators[i]; - Oid eq_function; + eqFunctions[i] = get_opcode(eqOperators[i]); - eq_function = get_opcode(eq_opr); - fmgr_info(eq_function, &eqFunctions[i]); - } + /* build actual expression */ + expr = ExecBuildGroupingEqual(desc, desc, numCols, keyColIdx, eqFunctions, + parent); - return eqFunctions; + return expr; } /* @@ -233,12 +94,12 @@ execTuplesMatchPrepare(int numCols, void execTuplesHashPrepare(int numCols, Oid *eqOperators, - FmgrInfo **eqFunctions, + Oid **eqFuncOids, FmgrInfo **hashFunctions) { int i; - *eqFunctions = (FmgrInfo *) palloc(numCols * sizeof(FmgrInfo)); + *eqFuncOids = (Oid *) palloc(numCols * sizeof(Oid)); *hashFunctions = (FmgrInfo *) palloc(numCols * sizeof(FmgrInfo)); for (i = 0; i < numCols; i++) @@ -255,7 +116,7 @@ execTuplesHashPrepare(int numCols, eq_opr); /* We're not supporting cross-type cases here */ Assert(left_hash_function == right_hash_function); - fmgr_info(eq_function, &(*eqFunctions)[i]); + (*eqFuncOids)[i] = eq_function; fmgr_info(right_hash_function, &(*hashFunctions)[i]); } } @@ -288,8 +149,10 @@ execTuplesHashPrepare(int numCols, * storage that will live as long as the hashtable does. 
*/ TupleHashTable -BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, - FmgrInfo *eqfunctions, +BuildTupleHashTable(PlanState *parent, + TupleDesc inputDesc, + int numCols, AttrNumber *keyColIdx, + Oid *eqfuncoids, FmgrInfo *hashfunctions, long nbuckets, Size additionalsize, MemoryContext tablecxt, MemoryContext tempcxt, @@ -297,6 +160,7 @@ BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, { TupleHashTable hashtable; Size entrysize = sizeof(TupleHashEntryData) + additionalsize; + MemoryContext oldcontext; Assert(nbuckets > 0); @@ -309,14 +173,13 @@ BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, hashtable->numCols = numCols; hashtable->keyColIdx = keyColIdx; hashtable->tab_hash_funcs = hashfunctions; - hashtable->tab_eq_funcs = eqfunctions; hashtable->tablecxt = tablecxt; hashtable->tempcxt = tempcxt; hashtable->entrysize = entrysize; hashtable->tableslot = NULL; /* will be made on first lookup */ hashtable->inputslot = NULL; hashtable->in_hash_funcs = NULL; - hashtable->cur_eq_funcs = NULL; + hashtable->cur_eq_func = NULL; /* * If parallelism is in use, even if the master backend is performing the @@ -333,6 +196,24 @@ BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, hashtable->hashtab = tuplehash_create(tablecxt, nbuckets, hashtable); + oldcontext = MemoryContextSwitchTo(hashtable->tablecxt); + + /* + * We copy the input tuple descriptor just for safety --- we assume all + * input tuples will have equivalent descriptors. + */ + hashtable->tableslot = MakeSingleTupleTableSlot(CreateTupleDescCopy(inputDesc)); + + /* build comparator for all columns */ + hashtable->tab_eq_func = ExecBuildGroupingEqual(inputDesc, inputDesc, + numCols, + keyColIdx, eqfuncoids, + parent); + + MemoryContextSwitchTo(oldcontext); + + hashtable->exprcontext = CreateExprContext(parent->state); + return hashtable; } @@ -357,29 +238,13 @@ LookupTupleHashEntry(TupleHashTable hashtable, TupleTableSlot *slot, bool found; MinimalTuple key; - /* If first time through, clone the input slot to make table slot */ - if (hashtable->tableslot == NULL) - { - TupleDesc tupdesc; - - oldContext = MemoryContextSwitchTo(hashtable->tablecxt); - - /* - * We copy the input tuple descriptor just for safety --- we assume - * all input tuples will have equivalent descriptors. 
- */ - tupdesc = CreateTupleDescCopy(slot->tts_tupleDescriptor); - hashtable->tableslot = MakeSingleTupleTableSlot(tupdesc); - MemoryContextSwitchTo(oldContext); - } - /* Need to run the hash functions in short-lived context */ oldContext = MemoryContextSwitchTo(hashtable->tempcxt); /* set up data needed by hash and match functions */ hashtable->inputslot = slot; hashtable->in_hash_funcs = hashtable->tab_hash_funcs; - hashtable->cur_eq_funcs = hashtable->tab_eq_funcs; + hashtable->cur_eq_func = hashtable->tab_eq_func; key = NULL; /* flag to reference inputslot */ @@ -424,7 +289,7 @@ LookupTupleHashEntry(TupleHashTable hashtable, TupleTableSlot *slot, */ TupleHashEntry FindTupleHashEntry(TupleHashTable hashtable, TupleTableSlot *slot, - FmgrInfo *eqfunctions, + ExprState *eqcomp, FmgrInfo *hashfunctions) { TupleHashEntry entry; @@ -437,7 +302,7 @@ FindTupleHashEntry(TupleHashTable hashtable, TupleTableSlot *slot, /* Set up data needed by hash and match functions */ hashtable->inputslot = slot; hashtable->in_hash_funcs = hashfunctions; - hashtable->cur_eq_funcs = eqfunctions; + hashtable->cur_eq_func = eqcomp; /* Search the hash table */ key = NULL; /* flag to reference inputslot */ @@ -524,9 +389,6 @@ TupleHashTableHash(struct tuplehash_hash *tb, const MinimalTuple tuple) * See whether two tuples (presumably of the same hash value) match * * As above, the passed pointers are pointers to TupleHashEntryData. - * - * Also, the caller must select an appropriate memory context for running - * the compare functions. (dynahash.c doesn't change CurrentMemoryContext.) */ static int TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const MinimalTuple tuple2) @@ -534,6 +396,7 @@ TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const TupleTableSlot *slot1; TupleTableSlot *slot2; TupleHashTable hashtable = (TupleHashTable) tb->private_data; + ExprContext *econtext = hashtable->exprcontext; /* * We assume that simplehash.h will only ever call us with the first @@ -548,13 +411,7 @@ TupleHashTableMatch(struct tuplehash_hash *tb, const MinimalTuple tuple1, const slot2 = hashtable->inputslot; /* For crosstype comparisons, the inputslot must be first */ - if (execTuplesMatch(slot2, - slot1, - hashtable->numCols, - hashtable->keyColIdx, - hashtable->cur_eq_funcs, - hashtable->tempcxt)) - return 0; - else - return 1; + econtext->ecxt_innertuple = slot2; + econtext->ecxt_outertuple = slot1; + return !ExecQualAndReset(hashtable->cur_eq_func, econtext); } diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index a86d4b68ea..e74b3a9391 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -755,7 +755,7 @@ process_ordered_aggregate_single(AggState *aggstate, ((oldIsNull && *isNull) || (!oldIsNull && !*isNull && oldAbbrevVal == newAbbrevVal && - DatumGetBool(FunctionCall2(&pertrans->equalfns[0], + DatumGetBool(FunctionCall2(&pertrans->equalfnOne, oldVal, *newVal))))) { /* equal to prior, so forget this one */ @@ -802,7 +802,7 @@ process_ordered_aggregate_multi(AggState *aggstate, AggStatePerTrans pertrans, AggStatePerGroup pergroupstate) { - MemoryContext workcontext = aggstate->tmpcontext->ecxt_per_tuple_memory; + ExprContext *tmpcontext = aggstate->tmpcontext; FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo; TupleTableSlot *slot1 = pertrans->sortslot; TupleTableSlot *slot2 = pertrans->uniqslot; @@ -811,6 +811,7 @@ process_ordered_aggregate_multi(AggState *aggstate, Datum newAbbrevVal = (Datum) 0; Datum 
oldAbbrevVal = (Datum) 0; bool haveOldValue = false; + TupleTableSlot *save = aggstate->tmpcontext->ecxt_outertuple; int i; tuplesort_performsort(pertrans->sortstates[aggstate->current_set]); @@ -824,22 +825,20 @@ process_ordered_aggregate_multi(AggState *aggstate, { CHECK_FOR_INTERRUPTS(); - /* - * Extract the first numTransInputs columns as datums to pass to the - * transfn. (This will help execTuplesMatch too, so we do it - * immediately.) - */ - slot_getsomeattrs(slot1, numTransInputs); + tmpcontext->ecxt_outertuple = slot1; + tmpcontext->ecxt_innertuple = slot2; if (numDistinctCols == 0 || !haveOldValue || newAbbrevVal != oldAbbrevVal || - !execTuplesMatch(slot1, slot2, - numDistinctCols, - pertrans->sortColIdx, - pertrans->equalfns, - workcontext)) + !ExecQual(pertrans->equalfnMulti, tmpcontext)) { + /* + * Extract the first numTransInputs columns as datums to pass to + * the transfn. + */ + slot_getsomeattrs(slot1, numTransInputs); + /* Load values into fcinfo */ /* Start from 1, since the 0th arg will be the transition value */ for (i = 0; i < numTransInputs; i++) @@ -857,15 +856,14 @@ process_ordered_aggregate_multi(AggState *aggstate, slot2 = slot1; slot1 = tmpslot; - /* avoid execTuplesMatch() calls by reusing abbreviated keys */ + /* avoid ExecQual() calls by reusing abbreviated keys */ oldAbbrevVal = newAbbrevVal; haveOldValue = true; } } - /* Reset context each time, unless execTuplesMatch did it for us */ - if (numDistinctCols == 0) - MemoryContextReset(workcontext); + /* Reset context each time */ + ResetExprContext(tmpcontext); ExecClearTuple(slot1); } @@ -875,6 +873,9 @@ process_ordered_aggregate_multi(AggState *aggstate, tuplesort_end(pertrans->sortstates[aggstate->current_set]); pertrans->sortstates[aggstate->current_set] = NULL; + + /* restore previous slot, potentially in use for grouping sets */ + tmpcontext->ecxt_outertuple = save; } /* @@ -1276,9 +1277,11 @@ build_hash_table(AggState *aggstate) Assert(perhash->aggnode->numGroups > 0); - perhash->hashtable = BuildTupleHashTable(perhash->numCols, + perhash->hashtable = BuildTupleHashTable(&aggstate->ss.ps, + perhash->hashslot->tts_tupleDescriptor, + perhash->numCols, perhash->hashGrpColIdxHash, - perhash->eqfunctions, + perhash->eqfuncoids, perhash->hashfunctions, perhash->aggnode->numGroups, additionalsize, @@ -1314,6 +1317,7 @@ find_hash_columns(AggState *aggstate) Bitmapset *base_colnos; List *outerTlist = outerPlanState(aggstate)->plan->targetlist; int numHashes = aggstate->num_hashes; + EState *estate = aggstate->ss.ps.state; int j; /* Find Vars that will be needed in tlist and qual */ @@ -1393,6 +1397,12 @@ find_hash_columns(AggState *aggstate) } hashDesc = ExecTypeFromTL(hashTlist, false); + + execTuplesHashPrepare(perhash->numCols, + perhash->aggnode->grpOperators, + &perhash->eqfuncoids, + &perhash->hashfunctions); + perhash->hashslot = ExecAllocTableSlot(&estate->es_tupleTable); ExecSetSlotDescriptor(perhash->hashslot, hashDesc); list_free(hashTlist); @@ -1694,17 +1704,14 @@ agg_retrieve_direct(AggState *aggstate) * of the next grouping set *---------- */ + tmpcontext->ecxt_innertuple = econtext->ecxt_outertuple; if (aggstate->input_done || (node->aggstrategy != AGG_PLAIN && aggstate->projected_set != -1 && aggstate->projected_set < (numGroupingSets - 1) && nextSetSize > 0 && - !execTuplesMatch(econtext->ecxt_outertuple, - tmpcontext->ecxt_outertuple, - nextSetSize, - node->grpColIdx, - aggstate->phase->eqfunctions, - tmpcontext->ecxt_per_tuple_memory))) + 
!ExecQualAndReset(aggstate->phase->eqfunctions[nextSetSize - 1], + tmpcontext))) { aggstate->projected_set += 1; @@ -1847,12 +1854,9 @@ agg_retrieve_direct(AggState *aggstate) */ if (node->aggstrategy != AGG_PLAIN) { - if (!execTuplesMatch(firstSlot, - outerslot, - node->numCols, - node->grpColIdx, - aggstate->phase->eqfunctions, - tmpcontext->ecxt_per_tuple_memory)) + tmpcontext->ecxt_innertuple = firstSlot; + if (!ExecQual(aggstate->phase->eqfunctions[node->numCols - 1], + tmpcontext)) { aggstate->grp_firstTuple = ExecCopySlotTuple(outerslot); break; @@ -2078,6 +2082,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) AggStatePerGroup *pergroups; Plan *outerPlan; ExprContext *econtext; + TupleDesc scanDesc; int numaggs, transno, aggno; @@ -2233,9 +2238,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) * initialize source tuple type. */ ExecAssignScanTypeFromOuterPlan(&aggstate->ss); + scanDesc = aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; if (node->chain) - ExecSetSlotDescriptor(aggstate->sort_slot, - aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); + ExecSetSlotDescriptor(aggstate->sort_slot, scanDesc); /* * Initialize result tuple type and projection info. @@ -2355,11 +2360,43 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) */ if (aggnode->aggstrategy == AGG_SORTED) { + int i = 0; + Assert(aggnode->numCols > 0); + /* + * Build a separate function for each subset of columns that + * need to be compared. + */ phasedata->eqfunctions = - execTuplesMatchPrepare(aggnode->numCols, - aggnode->grpOperators); + (ExprState **) palloc0(aggnode->numCols * sizeof(ExprState *)); + + /* for each grouping set */ + for (i = 0; i < phasedata->numsets; i++) + { + int length = phasedata->gset_lengths[i]; + + if (phasedata->eqfunctions[length - 1] != NULL) + continue; + + phasedata->eqfunctions[length - 1] = + execTuplesMatchPrepare(scanDesc, + length, + aggnode->grpColIdx, + aggnode->grpOperators, + (PlanState *) aggstate); + } + + /* and for all grouped columns, unless already computed */ + if (phasedata->eqfunctions[aggnode->numCols - 1] == NULL) + { + phasedata->eqfunctions[aggnode->numCols - 1] = + execTuplesMatchPrepare(scanDesc, + aggnode->numCols, + aggnode->grpColIdx, + aggnode->grpOperators, + (PlanState *) aggstate); + } } phasedata->aggnode = aggnode; @@ -2412,16 +2449,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) */ if (use_hashing) { - for (i = 0; i < numHashes; ++i) - { - aggstate->perhash[i].hashslot = ExecInitExtraTupleSlot(estate); - - execTuplesHashPrepare(aggstate->perhash[i].numCols, - aggstate->perhash[i].aggnode->grpOperators, - &aggstate->perhash[i].eqfunctions, - &aggstate->perhash[i].hashfunctions); - } - /* this is an array of pointers, not structures */ aggstate->hash_pergroup = pergroups; @@ -3101,24 +3128,28 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, if (aggref->aggdistinct) { + Oid *ops; + Assert(numArguments > 0); + Assert(list_length(aggref->aggdistinct) == numDistinctCols); - /* - * We need the equal function for each DISTINCT comparison we will - * make. 
- */ - pertrans->equalfns = - (FmgrInfo *) palloc(numDistinctCols * sizeof(FmgrInfo)); + ops = palloc(numDistinctCols * sizeof(Oid)); i = 0; foreach(lc, aggref->aggdistinct) - { - SortGroupClause *sortcl = (SortGroupClause *) lfirst(lc); + ops[i++] = ((SortGroupClause *) lfirst(lc))->eqop; - fmgr_info(get_opcode(sortcl->eqop), &pertrans->equalfns[i]); - i++; - } - Assert(i == numDistinctCols); + /* lookup / build the necessary comparators */ + if (numDistinctCols == 1) + fmgr_info(get_opcode(ops[0]), &pertrans->equalfnOne); + else + pertrans->equalfnMulti = + execTuplesMatchPrepare(pertrans->sortdesc, + numDistinctCols, + pertrans->sortColIdx, + ops, + &aggstate->ss.ps); + pfree(ops); } pertrans->sortstates = (Tuplesortstate **) diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c index f1cdbaa4e6..8f7bf459ef 100644 --- a/src/backend/executor/nodeGroup.c +++ b/src/backend/executor/nodeGroup.c @@ -25,6 +25,7 @@ #include "executor/executor.h" #include "executor/nodeGroup.h" #include "miscadmin.h" +#include "utils/memutils.h" /* @@ -37,8 +38,6 @@ ExecGroup(PlanState *pstate) { GroupState *node = castNode(GroupState, pstate); ExprContext *econtext; - int numCols; - AttrNumber *grpColIdx; TupleTableSlot *firsttupleslot; TupleTableSlot *outerslot; @@ -50,8 +49,6 @@ ExecGroup(PlanState *pstate) if (node->grp_done) return NULL; econtext = node->ss.ps.ps_ExprContext; - numCols = ((Group *) node->ss.ps.plan)->numCols; - grpColIdx = ((Group *) node->ss.ps.plan)->grpColIdx; /* * The ScanTupleSlot holds the (copied) first tuple of each group. @@ -59,7 +56,7 @@ ExecGroup(PlanState *pstate) firsttupleslot = node->ss.ss_ScanTupleSlot; /* - * We need not call ResetExprContext here because execTuplesMatch will + * We need not call ResetExprContext here because ExecQualAndReset() will * reset the per-tuple memory context once per input tuple. */ @@ -124,10 +121,9 @@ ExecGroup(PlanState *pstate) * Compare with first tuple and see if this tuple is of the same * group. If so, ignore it and keep scanning. 
*/ - if (!execTuplesMatch(firsttupleslot, outerslot, - numCols, grpColIdx, - node->eqfunctions, - econtext->ecxt_per_tuple_memory)) + econtext->ecxt_innertuple = firsttupleslot; + econtext->ecxt_outertuple = outerslot; + if (!ExecQualAndReset(node->eqfunction, econtext)) break; } @@ -166,6 +162,7 @@ GroupState * ExecInitGroup(Group *node, EState *estate, int eflags) { GroupState *grpstate; + AttrNumber *grpColIdx = node->grpColIdx; /* check for unsupported flags */ Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -215,9 +212,12 @@ ExecInitGroup(Group *node, EState *estate, int eflags) /* * Precompute fmgr lookup data for inner loop */ - grpstate->eqfunctions = - execTuplesMatchPrepare(node->numCols, - node->grpOperators); + grpstate->eqfunction = + execTuplesMatchPrepare(ExecGetResultType(outerPlanState(grpstate)), + node->numCols, + grpColIdx, + node->grpOperators, + &grpstate->ss.ps); return grpstate; } diff --git a/src/backend/executor/nodeRecursiveunion.c b/src/backend/executor/nodeRecursiveunion.c index 817749855f..ba48a69a3b 100644 --- a/src/backend/executor/nodeRecursiveunion.c +++ b/src/backend/executor/nodeRecursiveunion.c @@ -32,13 +32,16 @@ static void build_hash_table(RecursiveUnionState *rustate) { RecursiveUnion *node = (RecursiveUnion *) rustate->ps.plan; + TupleDesc desc = ExecGetResultType(outerPlanState(rustate)); Assert(node->numCols > 0); Assert(node->numGroups > 0); - rustate->hashtable = BuildTupleHashTable(node->numCols, + rustate->hashtable = BuildTupleHashTable(&rustate->ps, + desc, + node->numCols, node->dupColIdx, - rustate->eqfunctions, + rustate->eqfuncoids, rustate->hashfunctions, node->numGroups, 0, @@ -175,7 +178,7 @@ ExecInitRecursiveUnion(RecursiveUnion *node, EState *estate, int eflags) rustate->ps.state = estate; rustate->ps.ExecProcNode = ExecRecursiveUnion; - rustate->eqfunctions = NULL; + rustate->eqfuncoids = NULL; rustate->hashfunctions = NULL; rustate->hashtable = NULL; rustate->tempContext = NULL; @@ -250,7 +253,7 @@ ExecInitRecursiveUnion(RecursiveUnion *node, EState *estate, int eflags) { execTuplesHashPrepare(node->numCols, node->dupOperators, - &rustate->eqfunctions, + &rustate->eqfuncoids, &rustate->hashfunctions); build_hash_table(rustate); } diff --git a/src/backend/executor/nodeSetOp.c b/src/backend/executor/nodeSetOp.c index c91c3402d2..eb5449fc3e 100644 --- a/src/backend/executor/nodeSetOp.c +++ b/src/backend/executor/nodeSetOp.c @@ -120,18 +120,22 @@ static void build_hash_table(SetOpState *setopstate) { SetOp *node = (SetOp *) setopstate->ps.plan; + ExprContext *econtext = setopstate->ps.ps_ExprContext; + TupleDesc desc = ExecGetResultType(outerPlanState(setopstate)); Assert(node->strategy == SETOP_HASHED); Assert(node->numGroups > 0); - setopstate->hashtable = BuildTupleHashTable(node->numCols, + setopstate->hashtable = BuildTupleHashTable(&setopstate->ps, + desc, + node->numCols, node->dupColIdx, - setopstate->eqfunctions, + setopstate->eqfuncoids, setopstate->hashfunctions, node->numGroups, 0, setopstate->tableContext, - setopstate->tempContext, + econtext->ecxt_per_tuple_memory, false); } @@ -220,11 +224,11 @@ ExecSetOp(PlanState *pstate) static TupleTableSlot * setop_retrieve_direct(SetOpState *setopstate) { - SetOp *node = (SetOp *) setopstate->ps.plan; PlanState *outerPlan; SetOpStatePerGroup pergroup; TupleTableSlot *outerslot; TupleTableSlot *resultTupleSlot; + ExprContext *econtext = setopstate->ps.ps_ExprContext; /* * get state info from node @@ -292,11 +296,10 @@ setop_retrieve_direct(SetOpState
*setopstate) /* * Check whether we've crossed a group boundary. */ - if (!execTuplesMatch(resultTupleSlot, - outerslot, - node->numCols, node->dupColIdx, - setopstate->eqfunctions, - setopstate->tempContext)) + econtext->ecxt_outertuple = resultTupleSlot; + econtext->ecxt_innertuple = outerslot; + + if (!ExecQualAndReset(setopstate->eqfunction, econtext)) { /* * Save the first input tuple of the next group. @@ -338,6 +341,7 @@ setop_fill_hash_table(SetOpState *setopstate) PlanState *outerPlan; int firstFlag; bool in_first_rel PG_USED_FOR_ASSERTS_ONLY; + ExprContext *econtext = setopstate->ps.ps_ExprContext; /* * get state info from node @@ -404,8 +408,8 @@ setop_fill_hash_table(SetOpState *setopstate) advance_counts((SetOpStatePerGroup) entry->additional, flag); } - /* Must reset temp context after each hashtable lookup */ - MemoryContextReset(setopstate->tempContext); + /* Must reset expression context after each hashtable lookup */ + ResetExprContext(econtext); } setopstate->table_filled = true; @@ -476,6 +480,7 @@ SetOpState * ExecInitSetOp(SetOp *node, EState *estate, int eflags) { SetOpState *setopstate; + TupleDesc outerDesc; /* check for unsupported flags */ Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -488,7 +493,7 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) setopstate->ps.state = estate; setopstate->ps.ExecProcNode = ExecSetOp; - setopstate->eqfunctions = NULL; + setopstate->eqfuncoids = NULL; setopstate->hashfunctions = NULL; setopstate->setop_done = false; setopstate->numOutput = 0; @@ -498,16 +503,9 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) setopstate->tableContext = NULL; /* - * Miscellaneous initialization - * - * SetOp nodes have no ExprContext initialization because they never call - * ExecQual or ExecProject. But they do need a per-tuple memory context - * anyway for calling execTuplesMatch. 
+ * create expression context */ - setopstate->tempContext = - AllocSetContextCreate(CurrentMemoryContext, - "SetOp", - ALLOCSET_DEFAULT_SIZES); + ExecAssignExprContext(estate, &setopstate->ps); /* * If hashing, we also need a longer-lived context to store the hash @@ -534,6 +532,7 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) if (node->strategy == SETOP_HASHED) eflags &= ~EXEC_FLAG_REWIND; outerPlanState(setopstate) = ExecInitNode(outerPlan(node), estate, eflags); + outerDesc = ExecGetResultType(outerPlanState(setopstate)); /* * setop nodes do no projections, so initialize projection info for this @@ -550,12 +549,15 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) if (node->strategy == SETOP_HASHED) execTuplesHashPrepare(node->numCols, node->dupOperators, - &setopstate->eqfunctions, + &setopstate->eqfuncoids, &setopstate->hashfunctions); else - setopstate->eqfunctions = - execTuplesMatchPrepare(node->numCols, - node->dupOperators); + setopstate->eqfunction = + execTuplesMatchPrepare(outerDesc, + node->numCols, + node->dupColIdx, + node->dupOperators, + &setopstate->ps); if (node->strategy == SETOP_HASHED) { @@ -585,9 +587,9 @@ ExecEndSetOp(SetOpState *node) ExecClearTuple(node->ps.ps_ResultTupleSlot); /* free subsidiary stuff including hashtable */ - MemoryContextDelete(node->tempContext); if (node->tableContext) MemoryContextDelete(node->tableContext); + ExecFreeExprContext(&node->ps); ExecEndNode(outerPlanState(node)); } diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c index edf7d034bd..4927e21217 100644 --- a/src/backend/executor/nodeSubplan.c +++ b/src/backend/executor/nodeSubplan.c @@ -149,7 +149,7 @@ ExecHashSubPlan(SubPlanState *node, if (node->havehashrows && FindTupleHashEntry(node->hashtable, slot, - node->cur_eq_funcs, + node->cur_eq_comp, node->lhs_hash_funcs) != NULL) { ExecClearTuple(slot); @@ -494,9 +494,11 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) if (nbuckets < 1) nbuckets = 1; - node->hashtable = BuildTupleHashTable(ncols, + node->hashtable = BuildTupleHashTable(node->parent, + node->descRight, + ncols, node->keyColIdx, - node->tab_eq_funcs, + node->tab_eq_funcoids, node->tab_hash_funcs, nbuckets, 0, @@ -514,9 +516,11 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) if (nbuckets < 1) nbuckets = 1; } - node->hashnulls = BuildTupleHashTable(ncols, + node->hashnulls = BuildTupleHashTable(node->parent, + node->descRight, + ncols, node->keyColIdx, - node->tab_eq_funcs, + node->tab_eq_funcoids, node->tab_hash_funcs, nbuckets, 0, @@ -598,6 +602,77 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext) MemoryContextSwitchTo(oldcontext); } +/* + * execTuplesUnequal + * Return true if two tuples are definitely unequal in the indicated + * fields. + * + * Nulls are neither equal nor unequal to anything else. A true result + * is obtained only if there are non-null fields that compare not-equal. + * + * slot1, slot2: the tuples to compare (must have same columns!) 
+ * numCols: the number of attributes to be examined + * matchColIdx: array of attribute column numbers + * eqFunctions: array of fmgr lookup info for the equality functions to use + * evalContext: short-term memory context for executing the functions + */ +static bool +execTuplesUnequal(TupleTableSlot *slot1, + TupleTableSlot *slot2, + int numCols, + AttrNumber *matchColIdx, + FmgrInfo *eqfunctions, + MemoryContext evalContext) +{ + MemoryContext oldContext; + bool result; + int i; + + /* Reset and switch into the temp context. */ + MemoryContextReset(evalContext); + oldContext = MemoryContextSwitchTo(evalContext); + + /* + * We cannot report a match without checking all the fields, but we can + * report a non-match as soon as we find unequal fields. So, start + * comparing at the last field (least significant sort key). That's the + * most likely to be different if we are dealing with sorted input. + */ + result = false; + + for (i = numCols; --i >= 0;) + { + AttrNumber att = matchColIdx[i]; + Datum attr1, + attr2; + bool isNull1, + isNull2; + + attr1 = slot_getattr(slot1, att, &isNull1); + + if (isNull1) + continue; /* can't prove anything here */ + + attr2 = slot_getattr(slot2, att, &isNull2); + + if (isNull2) + continue; /* can't prove anything here */ + + /* Apply the type-specific equality function */ + + if (!DatumGetBool(FunctionCall2(&eqfunctions[i], + attr1, attr2))) + { + result = true; /* they are unequal */ + break; + } + } + + MemoryContextSwitchTo(oldContext); + + return result; +} + /* * findPartialMatch: does the hashtable contain an entry that is not * provably distinct from the tuple? @@ -719,6 +794,7 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) sstate->hashtempcxt = NULL; sstate->innerecontext = NULL; sstate->keyColIdx = NULL; + sstate->tab_eq_funcoids = NULL; sstate->tab_hash_funcs = NULL; sstate->tab_eq_funcs = NULL; sstate->lhs_hash_funcs = NULL; @@ -757,7 +833,8 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) { int ncols, i; - TupleDesc tupDesc; + TupleDesc tupDescLeft; + TupleDesc tupDescRight; TupleTableSlot *slot; List *oplist, *lefttlist, @@ -815,6 +892,7 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) Assert(list_length(oplist) == ncols); lefttlist = righttlist = NIL; + sstate->tab_eq_funcoids = (Oid *) palloc(ncols * sizeof(Oid)); sstate->tab_hash_funcs = (FmgrInfo *) palloc(ncols * sizeof(FmgrInfo)); sstate->tab_eq_funcs = (FmgrInfo *) palloc(ncols * sizeof(FmgrInfo)); sstate->lhs_hash_funcs = (FmgrInfo *) palloc(ncols * sizeof(FmgrInfo)); @@ -848,6 +926,7 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) righttlist = lappend(righttlist, tle); /* Lookup the equality function (potentially cross-type) */ + sstate->tab_eq_funcoids[i - 1] = opexpr->opfuncid; fmgr_info(opexpr->opfuncid, &sstate->cur_eq_funcs[i - 1]); fmgr_info_set_expr((Node *) opexpr, &sstate->cur_eq_funcs[i - 1]); @@ -877,23 +956,34 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) * (hack alert!). The righthand expressions will be evaluated in our * own innerecontext. 
*/ - tupDesc = ExecTypeFromTL(lefttlist, false); + tupDescLeft = ExecTypeFromTL(lefttlist, false); slot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(slot, tupDesc); + ExecSetSlotDescriptor(slot, tupDescLeft); sstate->projLeft = ExecBuildProjectionInfo(lefttlist, NULL, slot, parent, NULL); - tupDesc = ExecTypeFromTL(righttlist, false); + sstate->descRight = tupDescRight = ExecTypeFromTL(righttlist, false); slot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(slot, tupDesc); + ExecSetSlotDescriptor(slot, tupDescRight); sstate->projRight = ExecBuildProjectionInfo(righttlist, sstate->innerecontext, slot, sstate->planstate, NULL); + + /* + * Create comparator for lookups of rows in the table (potentially + * across-type comparison). + */ + sstate->cur_eq_comp = ExecBuildGroupingEqual(tupDescLeft, tupDescRight, + ncols, + sstate->keyColIdx, + sstate->tab_eq_funcoids, + parent); + } return sstate; diff --git a/src/backend/executor/nodeUnique.c b/src/backend/executor/nodeUnique.c index e330650593..9f823c58e1 100644 --- a/src/backend/executor/nodeUnique.c +++ b/src/backend/executor/nodeUnique.c @@ -47,7 +47,7 @@ static TupleTableSlot * /* return: a tuple or NULL */ ExecUnique(PlanState *pstate) { UniqueState *node = castNode(UniqueState, pstate); - Unique *plannode = (Unique *) node->ps.plan; + ExprContext *econtext = node->ps.ps_ExprContext; TupleTableSlot *resultTupleSlot; TupleTableSlot *slot; PlanState *outerPlan; @@ -89,10 +89,9 @@ ExecUnique(PlanState *pstate) * If so then we loop back and fetch another new tuple from the * subplan. */ - if (!execTuplesMatch(slot, resultTupleSlot, - plannode->numCols, plannode->uniqColIdx, - node->eqfunctions, - node->tempContext)) + econtext->ecxt_innertuple = slot; + econtext->ecxt_outertuple = resultTupleSlot; + if (!ExecQualAndReset(node->eqfunction, econtext)) break; } @@ -129,16 +128,9 @@ ExecInitUnique(Unique *node, EState *estate, int eflags) uniquestate->ps.ExecProcNode = ExecUnique; /* - * Miscellaneous initialization - * - * Unique nodes have no ExprContext initialization because they never call - * ExecQual or ExecProject. But they do need a per-tuple memory context - * anyway for calling execTuplesMatch. 
+ * create expression context */ - uniquestate->tempContext = - AllocSetContextCreate(CurrentMemoryContext, - "Unique", - ALLOCSET_DEFAULT_SIZES); + ExecAssignExprContext(estate, &uniquestate->ps); /* * Tuple table initialization @@ -160,9 +152,12 @@ ExecInitUnique(Unique *node, EState *estate, int eflags) /* * Precompute fmgr lookup data for inner loop */ - uniquestate->eqfunctions = - execTuplesMatchPrepare(node->numCols, - node->uniqOperators); + uniquestate->eqfunction = + execTuplesMatchPrepare(ExecGetResultType(outerPlanState(uniquestate)), + node->numCols, + node->uniqColIdx, + node->uniqOperators, + &uniquestate->ps); return uniquestate; } @@ -180,7 +175,7 @@ ExecEndUnique(UniqueState *node) /* clean up tuple table */ ExecClearTuple(node->ps.ps_ResultTupleSlot); - MemoryContextDelete(node->tempContext); + ExecFreeExprContext(&node->ps); ExecEndNode(outerPlanState(node)); } diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index f6412576f4..1c807a8292 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -1272,12 +1272,13 @@ spool_tuples(WindowAggState *winstate, int64 pos) if (node->partNumCols > 0) { + ExprContext *econtext = winstate->tmpcontext; + + econtext->ecxt_innertuple = winstate->first_part_slot; + econtext->ecxt_outertuple = outerslot; + /* Check if this tuple still belongs to the current partition */ - if (!execTuplesMatch(winstate->first_part_slot, - outerslot, - node->partNumCols, node->partColIdx, - winstate->partEqfunctions, - winstate->tmpcontext->ecxt_per_tuple_memory)) + if (!ExecQualAndReset(winstate->partEqfunction, econtext)) { /* * end of partition; copy the tuple for the next cycle. @@ -2245,6 +2246,7 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) wfuncno, numaggs, aggno; + TupleDesc scanDesc; ListCell *l; /* check for unsupported flags */ @@ -2327,6 +2329,7 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) * store in the tuplestore and use in all our working slots). */ ExecAssignScanTypeFromOuterPlan(&winstate->ss); + scanDesc = winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; ExecSetSlotDescriptor(winstate->first_part_slot, winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); @@ -2351,11 +2354,20 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) /* Set up data for comparing tuples */ if (node->partNumCols > 0) - winstate->partEqfunctions = execTuplesMatchPrepare(node->partNumCols, - node->partOperators); + winstate->partEqfunction = + execTuplesMatchPrepare(scanDesc, + node->partNumCols, + node->partColIdx, + node->partOperators, + &winstate->ss.ps); + if (node->ordNumCols > 0) - winstate->ordEqfunctions = execTuplesMatchPrepare(node->ordNumCols, - node->ordOperators); + winstate->ordEqfunction = + execTuplesMatchPrepare(scanDesc, + node->ordNumCols, + node->ordColIdx, + node->ordOperators, + &winstate->ss.ps); /* * WindowAgg nodes use aggvalues and aggnulls as well as Agg nodes. 
@@ -2879,15 +2891,15 @@ are_peers(WindowAggState *winstate, TupleTableSlot *slot1, TupleTableSlot *slot2) { WindowAgg *node = (WindowAgg *) winstate->ss.ps.plan; + ExprContext *econtext = winstate->tmpcontext; /* If no ORDER BY, all rows are peers with each other */ if (node->ordNumCols == 0) return true; - return execTuplesMatch(slot1, slot2, - node->ordNumCols, node->ordColIdx, - winstate->ordEqfunctions, - winstate->tmpcontext->ecxt_per_tuple_memory); + econtext->ecxt_outertuple = slot1; + econtext->ecxt_innertuple = slot2; + return ExecQualAndReset(winstate->ordEqfunction, econtext); } /* diff --git a/src/backend/utils/adt/orderedsetaggs.c b/src/backend/utils/adt/orderedsetaggs.c index 63d9c67027..50b34fcbc6 100644 --- a/src/backend/utils/adt/orderedsetaggs.c +++ b/src/backend/utils/adt/orderedsetaggs.c @@ -27,6 +27,7 @@ #include "utils/array.h" #include "utils/builtins.h" #include "utils/lsyscache.h" +#include "utils/memutils.h" #include "utils/timestamp.h" #include "utils/tuplesort.h" @@ -54,6 +55,8 @@ typedef struct OSAPerQueryState Aggref *aggref; /* Memory context containing this struct and other per-query data: */ MemoryContext qcontext; + /* Context for expression evaluation */ + ExprContext *econtext; /* Do we expect multiple final-function calls within one group? */ bool rescan_needed; @@ -71,7 +74,7 @@ typedef struct OSAPerQueryState Oid *sortCollations; bool *sortNullsFirsts; /* Equality operator call info, created only if needed: */ - FmgrInfo *equalfns; + ExprState *compareTuple; /* These fields are used only when accumulating datums: */ @@ -1287,6 +1290,8 @@ hypothetical_cume_dist_final(PG_FUNCTION_ARGS) Datum hypothetical_dense_rank_final(PG_FUNCTION_ARGS) { + ExprContext *econtext; + ExprState *compareTuple; int nargs = PG_NARGS() - 1; int64 rank = 1; int64 duplicate_count = 0; @@ -1294,12 +1299,9 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) int numDistinctCols; Datum abbrevVal = (Datum) 0; Datum abbrevOld = (Datum) 0; - AttrNumber *sortColIdx; - FmgrInfo *equalfns; TupleTableSlot *slot; TupleTableSlot *extraslot; TupleTableSlot *slot2; - MemoryContext tmpcontext; int i; Assert(AggCheckCallContext(fcinfo, NULL) == AGG_CONTEXT_AGGREGATE); @@ -1309,6 +1311,9 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) PG_RETURN_INT64(rank); osastate = (OSAPerGroupState *) PG_GETARG_POINTER(0); + econtext = osastate->qstate->econtext; + if (!econtext) + osastate->qstate->econtext = econtext = CreateStandaloneExprContext(); /* Adjust nargs to be the number of direct (or aggregated) args */ if (nargs % 2 != 0) @@ -1323,26 +1328,22 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) */ numDistinctCols = osastate->qstate->numSortCols - 1; - /* Look up the equality function(s), if we didn't already */ - equalfns = osastate->qstate->equalfns; - if (equalfns == NULL) + /* Build tuple comparator, if we didn't already */ + compareTuple = osastate->qstate->compareTuple; + if (compareTuple == NULL) { - MemoryContext qcontext = osastate->qstate->qcontext; - - equalfns = (FmgrInfo *) - MemoryContextAlloc(qcontext, numDistinctCols * sizeof(FmgrInfo)); - for (i = 0; i < numDistinctCols; i++) - { - fmgr_info_cxt(get_opcode(osastate->qstate->eqOperators[i]), - &equalfns[i], - qcontext); - } - osastate->qstate->equalfns = equalfns; + AttrNumber *sortColIdx = osastate->qstate->sortColIdx; + MemoryContext oldContext; + + oldContext = MemoryContextSwitchTo(osastate->qstate->qcontext); + compareTuple = execTuplesMatchPrepare(osastate->qstate->tupdesc, + numDistinctCols, + sortColIdx, + 
osastate->qstate->eqOperators, + NULL); + MemoryContextSwitchTo(oldContext); + osastate->qstate->compareTuple = compareTuple; } - sortColIdx = osastate->qstate->sortColIdx; - - /* Get short-term context we can use for execTuplesMatch */ - tmpcontext = AggGetTempMemoryContext(fcinfo); /* because we need a hypothetical row, we can't share transition state */ Assert(!osastate->sort_done); @@ -1385,19 +1386,18 @@ hypothetical_dense_rank_final(PG_FUNCTION_ARGS) break; /* count non-distinct tuples */ + econtext->ecxt_outertuple = slot; + econtext->ecxt_innertuple = slot2; + if (!TupIsNull(slot2) && abbrevVal == abbrevOld && - execTuplesMatch(slot, slot2, - numDistinctCols, - sortColIdx, - equalfns, - tmpcontext)) + ExecQualAndReset(compareTuple, econtext)) duplicate_count++; tmpslot = slot2; slot2 = slot; slot = tmpslot; - /* avoid execTuplesMatch() calls by reusing abbreviated keys */ + /* avoid ExecQual() calls by reusing abbreviated keys */ abbrevOld = abbrevVal; rank++; diff --git a/src/include/executor/execExpr.h b/src/include/executor/execExpr.h index 117fc892f4..0cab431f65 100644 --- a/src/include/executor/execExpr.h +++ b/src/include/executor/execExpr.h @@ -148,6 +148,7 @@ typedef enum ExprEvalOp /* evaluate assorted special-purpose expression types */ EEOP_IOCOERCE, EEOP_DISTINCT, + EEOP_NOT_DISTINCT, EEOP_NULLIF, EEOP_SQLVALUEFUNCTION, EEOP_CURRENTOFEXPR, diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index 1d824eff36..621e7c3dc4 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -113,26 +113,19 @@ extern bool execCurrentOf(CurrentOfExpr *cexpr, /* * prototypes from functions in execGrouping.c */ -extern bool execTuplesMatch(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext); -extern bool execTuplesUnequal(TupleTableSlot *slot1, - TupleTableSlot *slot2, - int numCols, - AttrNumber *matchColIdx, - FmgrInfo *eqfunctions, - MemoryContext evalContext); -extern FmgrInfo *execTuplesMatchPrepare(int numCols, - Oid *eqOperators); +extern ExprState *execTuplesMatchPrepare(TupleDesc desc, + int numCols, + AttrNumber *keyColIdx, + Oid *eqOperators, + PlanState *parent); extern void execTuplesHashPrepare(int numCols, Oid *eqOperators, - FmgrInfo **eqFunctions, + Oid **eqFuncOids, FmgrInfo **hashFunctions); -extern TupleHashTable BuildTupleHashTable(int numCols, AttrNumber *keyColIdx, - FmgrInfo *eqfunctions, +extern TupleHashTable BuildTupleHashTable(PlanState *parent, + TupleDesc inputDesc, + int numCols, AttrNumber *keyColIdx, + Oid *eqfuncoids, FmgrInfo *hashfunctions, long nbuckets, Size additionalsize, MemoryContext tablecxt, @@ -142,7 +135,7 @@ extern TupleHashEntry LookupTupleHashEntry(TupleHashTable hashtable, bool *isnew); extern TupleHashEntry FindTupleHashEntry(TupleHashTable hashtable, TupleTableSlot *slot, - FmgrInfo *eqfunctions, + ExprState *eqcomp, FmgrInfo *hashfunctions); /* @@ -257,6 +250,11 @@ extern ExprState *ExecInitCheck(List *qual, PlanState *parent); extern List *ExecInitExprList(List *nodes, PlanState *parent); extern ExprState *ExecBuildAggTrans(AggState *aggstate, struct AggStatePerPhaseData *phase, bool doSort, bool doHash); +extern ExprState *ExecBuildGroupingEqual(TupleDesc ldesc, TupleDesc rdesc, + int numCols, + AttrNumber *keyColIdx, + Oid *eqfunctions, + PlanState *parent); extern ProjectionInfo *ExecBuildProjectionInfo(List *targetList, ExprContext *econtext, TupleTableSlot *slot, diff --git 
a/src/include/executor/nodeAgg.h b/src/include/executor/nodeAgg.h index 3b06db86fd..aa6ebaaf97 100644 --- a/src/include/executor/nodeAgg.h +++ b/src/include/executor/nodeAgg.h @@ -102,11 +102,12 @@ typedef struct AggStatePerTransData bool *sortNullsFirst; /* - * fmgr lookup data for input columns' equality operators --- only - * set/used when aggregate has DISTINCT flag. Note that these are in - * order of sort column index, not parameter index. + * Comparators for input columns --- only set/used when aggregate has + * DISTINCT flag. equalfnOne version is used for single-column + * comparisons, equalfnMulti for the case of multiple columns. */ - FmgrInfo *equalfns; /* array of length numDistinctCols */ + FmgrInfo equalfnOne; + ExprState *equalfnMulti; /* * initial value from pg_aggregate entry @@ -270,7 +271,8 @@ typedef struct AggStatePerPhaseData int numsets; /* number of grouping sets (or 0) */ int *gset_lengths; /* lengths of grouping sets */ Bitmapset **grouped_cols; /* column groupings for rollup */ - FmgrInfo *eqfunctions; /* per-grouping-field equality fns */ + ExprState **eqfunctions; /* expression returning equality, indexed by + * nr of cols to compare */ Agg *aggnode; /* Agg node for phase data */ Sort *sortnode; /* Sort node for input ordering for phase */ @@ -290,7 +292,7 @@ typedef struct AggStatePerHashData TupleHashIterator hashiter; /* for iterating through hash table */ TupleTableSlot *hashslot; /* slot for loading hash table */ FmgrInfo *hashfunctions; /* per-grouping-field hash fns */ - FmgrInfo *eqfunctions; /* per-grouping-field equality fns */ + Oid *eqfuncoids; /* per-grouping-field equality fns */ int numCols; /* number of hash key columns */ int numhashGrpCols; /* number of columns in hash table */ int largestGrpColIdx; /* largest col required for hashing */ diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h index 286d55be03..a953820f43 100644 --- a/src/include/nodes/execnodes.h +++ b/src/include/nodes/execnodes.h @@ -594,10 +594,10 @@ typedef struct ExecAuxRowMark * Normally these are the only functions used, but FindTupleHashEntry() * supports searching a hashtable using cross-data-type hashing. For that, * the caller must supply hash functions for the LHS datatype as well as - * the cross-type equality operators to use. in_hash_funcs and cur_eq_funcs + * the cross-type equality operators to use. in_hash_funcs and cur_eq_func * are set to point to the caller's function arrays while doing such a search. * During LookupTupleHashEntry(), they point to tab_hash_funcs and - * tab_eq_funcs respectively. + * tab_eq_func respectively.
* ---------------------------------------------------------------- */ typedef struct TupleHashEntryData *TupleHashEntry; @@ -625,7 +625,7 @@ typedef struct TupleHashTableData int numCols; /* number of columns in lookup key */ AttrNumber *keyColIdx; /* attr numbers of key columns */ FmgrInfo *tab_hash_funcs; /* hash functions for table datatype(s) */ - FmgrInfo *tab_eq_funcs; /* equality functions for table datatype(s) */ + ExprState *tab_eq_func; /* comparator for table datatype(s) */ MemoryContext tablecxt; /* memory context containing table */ MemoryContext tempcxt; /* context for function evaluations */ Size entrysize; /* actual size to make each hash entry */ @@ -633,8 +633,9 @@ typedef struct TupleHashTableData /* The following fields are set transiently for each table search: */ TupleTableSlot *inputslot; /* current input tuple's slot */ FmgrInfo *in_hash_funcs; /* hash functions for input datatype(s) */ - FmgrInfo *cur_eq_funcs; /* equality functions for input vs. table */ + ExprState *cur_eq_func; /* comparator for input vs. table */ uint32 hash_iv; /* hash-function IV */ + ExprContext *exprcontext; /* expression context */ } TupleHashTableData; typedef tuplehash_iterator TupleHashIterator; @@ -781,6 +782,7 @@ typedef struct SubPlanState HeapTuple curTuple; /* copy of most recent tuple from subplan */ Datum curArray; /* most recent array from ARRAY() subplan */ /* these are used when hashing the subselect's output: */ + TupleDesc descRight; /* subselect desc after projection */ ProjectionInfo *projLeft; /* for projecting lefthand exprs */ ProjectionInfo *projRight; /* for projecting subselect output */ TupleHashTable hashtable; /* hash table for no-nulls subselect rows */ @@ -791,10 +793,12 @@ typedef struct SubPlanState MemoryContext hashtempcxt; /* temp memory context for hash tables */ ExprContext *innerecontext; /* econtext for computing inner tuples */ AttrNumber *keyColIdx; /* control data for hash tables */ + Oid *tab_eq_funcoids; /* equality func oids for table datatype(s) */ FmgrInfo *tab_hash_funcs; /* hash functions for table datatype(s) */ FmgrInfo *tab_eq_funcs; /* equality functions for table datatype(s) */ FmgrInfo *lhs_hash_funcs; /* hash functions for lefthand datatype(s) */ FmgrInfo *cur_eq_funcs; /* equality functions for LHS vs. table */ + ExprState *cur_eq_comp; /* equality comparator for LHS vs.
table */ } SubPlanState; /* ---------------- @@ -1067,7 +1071,7 @@ typedef struct RecursiveUnionState Tuplestorestate *working_table; Tuplestorestate *intermediate_table; /* Remaining fields are unused in UNION ALL case */ - FmgrInfo *eqfunctions; /* per-grouping-field equality fns */ + Oid *eqfuncoids; /* per-grouping-field equality fns */ FmgrInfo *hashfunctions; /* per-grouping-field hash fns */ MemoryContext tempContext; /* short-term context for comparisons */ TupleHashTable hashtable; /* hash table for tuples already seen */ @@ -1795,7 +1799,7 @@ typedef struct SortState typedef struct GroupState { ScanState ss; /* its first field is NodeTag */ - FmgrInfo *eqfunctions; /* per-field lookup data for equality fns */ + ExprState *eqfunction; /* equality function */ bool grp_done; /* indicates completion of Group scan */ } GroupState; @@ -1885,8 +1889,8 @@ typedef struct WindowAggState WindowStatePerFunc perfunc; /* per-window-function information */ WindowStatePerAgg peragg; /* per-plain-aggregate information */ - FmgrInfo *partEqfunctions; /* equality funcs for partition columns */ - FmgrInfo *ordEqfunctions; /* equality funcs for ordering columns */ + ExprState *partEqfunction; /* equality funcs for partition columns */ + ExprState *ordEqfunction; /* equality funcs for ordering columns */ Tuplestorestate *buffer; /* stores rows of current partition */ int current_ptr; /* read pointer # for current row */ int framehead_ptr; /* read pointer # for frame head, if used */ @@ -1964,8 +1968,7 @@ typedef struct WindowAggState typedef struct UniqueState { PlanState ps; /* its first field is NodeTag */ - FmgrInfo *eqfunctions; /* per-field lookup data for equality fns */ - MemoryContext tempContext; /* short-term context for comparisons */ + ExprState *eqfunction; /* tuple equality qual */ } UniqueState; /* ---------------- @@ -2079,11 +2082,11 @@ typedef struct SetOpStatePerGroupData *SetOpStatePerGroup; typedef struct SetOpState { PlanState ps; /* its first field is NodeTag */ - FmgrInfo *eqfunctions; /* per-grouping-field equality fns */ + ExprState *eqfunction; /* equality comparator */ + Oid *eqfuncoids; /* per-grouping-field equality fns */ FmgrInfo *hashfunctions; /* per-grouping-field hash fns */ bool setop_done; /* indicates completion of output scan */ long numOutput; /* number of dups left to output */ - MemoryContext tempContext; /* short-term context for comparisons */ /* these fields are used in SETOP_SORTED mode: */ SetOpStatePerGroup pergroup; /* per-group working state */ HeapTuple grp_firstTuple; /* copy of first tuple of current group */ From ad7dbee368a7cd9e595d2a957be784326b08c943 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Fri, 16 Feb 2018 21:17:38 -0800 Subject: [PATCH 1009/1087] Allow tupleslots to have a fixed tupledesc, use in executor nodes. The reason for doing so is that it will allow expression evaluation to optimize based on the underlying tupledesc. In particular it will allow JIT compilation of tuple deforming together with the expression itself. For that, expression initialization needs to be moved until after the relevant slots are initialized; this is mostly unproblematic, except in the case of nodeWorktablescan.c. After doing so there's no need for ExecAssignResultType() and ExecAssignResultTypeFromTL() anymore, as all former callers have been converted to create a slot with a fixed descriptor. When creating a slot with a fixed descriptor, tts_values/isnull can be allocated together with the main slot, reducing allocation overhead and increasing cache density a bit.
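To illustrate that single-allocation layout, here is a minimal standalone C sketch; it is not part of the patch, and Slot, Datum, MAXALIGN, and make_fixed_slot are simplified stand-ins for the executor's real TupleTableSlot machinery (shown in the MakeTupleTableSlot() hunk below):

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <stdint.h>

typedef uintptr_t Datum;                 /* stand-in for the executor's Datum */
#define MAXALIGN(LEN) (((uintptr_t) (LEN) + 7) & ~(uintptr_t) 7)

typedef struct Slot                      /* stand-in for TupleTableSlot */
{
	int		natts;                       /* column count, fixed for lifetime */
	Datum  *values;                      /* points into the same allocation */
	bool   *isnull;                      /* likewise */
} Slot;

/* Allocate the slot struct plus both per-column arrays in one block. */
static Slot *
make_fixed_slot(int natts)
{
	size_t	sz = MAXALIGN(sizeof(Slot))
		+ MAXALIGN(natts * sizeof(Datum))
		+ MAXALIGN(natts * sizeof(bool));
	Slot   *slot = calloc(1, sz);

	if (slot == NULL)
		return NULL;
	slot->natts = natts;
	slot->values = (Datum *) ((char *) slot + MAXALIGN(sizeof(Slot)));
	slot->isnull = (bool *) ((char *) slot + MAXALIGN(sizeof(Slot))
							 + MAXALIGN(natts * sizeof(Datum)));
	return slot;
}

int
main(void)
{
	Slot   *slot = make_fixed_slot(4);

	slot->values[0] = 42;
	slot->isnull[0] = false;
	printf("values[0]=%lu isnull[0]=%d\n",
		   (unsigned long) slot->values[0], (int) slot->isnull[0]);
	free(slot);                          /* one free releases all three parts */
	return 0;
}

One allocation instead of three means less palloc bookkeeping and the arrays land on the same cache lines as the struct that points at them, which is the overhead/density benefit the message describes.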
Author: Andres Freund Discussion: https://postgr.es/m/20171206093717.vqdxe5icqttpxs3p@alap3.anarazel.de --- src/backend/commands/copy.c | 5 +- src/backend/commands/trigger.c | 6 +- src/backend/executor/README | 2 + src/backend/executor/execExpr.c | 2 +- src/backend/executor/execMain.c | 2 +- src/backend/executor/execPartition.c | 4 +- src/backend/executor/execScan.c | 2 +- src/backend/executor/execTuples.c | 114 +++++++++++++----- src/backend/executor/execUtils.c | 62 +--------- src/backend/executor/nodeAgg.c | 62 ++++------ src/backend/executor/nodeAppend.c | 18 +-- src/backend/executor/nodeBitmapAnd.c | 14 +-- src/backend/executor/nodeBitmapHeapscan.c | 56 ++++----- src/backend/executor/nodeBitmapIndexscan.c | 18 +-- src/backend/executor/nodeBitmapOr.c | 14 +-- src/backend/executor/nodeCtescan.c | 26 ++-- src/backend/executor/nodeCustom.c | 20 ++- src/backend/executor/nodeForeignscan.c | 30 ++--- src/backend/executor/nodeFunctionscan.c | 30 ++--- src/backend/executor/nodeGather.c | 31 ++--- src/backend/executor/nodeGatherMerge.c | 19 +-- src/backend/executor/nodeGroup.c | 26 ++-- src/backend/executor/nodeHash.c | 23 ++-- src/backend/executor/nodeHashjoin.c | 49 ++++---- src/backend/executor/nodeIndexonlyscan.c | 34 +++--- src/backend/executor/nodeIndexscan.c | 45 +++---- src/backend/executor/nodeLimit.c | 18 +-- src/backend/executor/nodeLockRows.c | 9 +- src/backend/executor/nodeMaterial.c | 22 ++-- src/backend/executor/nodeMergeAppend.c | 6 +- src/backend/executor/nodeMergejoin.c | 53 ++++---- src/backend/executor/nodeModifyTable.c | 25 ++-- .../executor/nodeNamedtuplestorescan.c | 21 ++-- src/backend/executor/nodeNestloop.c | 29 ++--- src/backend/executor/nodeProjectSet.c | 14 +-- src/backend/executor/nodeRecursiveunion.c | 9 +- src/backend/executor/nodeResult.c | 25 ++-- src/backend/executor/nodeSamplescan.c | 70 ++++------- src/backend/executor/nodeSeqscan.c | 56 +++------ src/backend/executor/nodeSetOp.c | 11 +- src/backend/executor/nodeSort.c | 18 ++- src/backend/executor/nodeSubplan.c | 6 +- src/backend/executor/nodeSubqueryscan.c | 28 ++--- src/backend/executor/nodeTableFuncscan.c | 26 ++-- src/backend/executor/nodeTidscan.c | 29 ++--- src/backend/executor/nodeUnique.c | 11 +- src/backend/executor/nodeValuesscan.c | 25 ++-- src/backend/executor/nodeWindowAgg.c | 65 ++++------ src/backend/executor/nodeWorktablescan.c | 16 +-- src/backend/replication/logical/worker.c | 22 ++-- src/include/executor/executor.h | 11 +- src/include/executor/tuptable.h | 5 +- 52 files changed, 566 insertions(+), 778 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index b3933df9af..d5883c98d1 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -2444,10 +2444,9 @@ CopyFrom(CopyState cstate) estate->es_range_table = cstate->range_table; /* Set up a tuple slot too */ - myslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(myslot, tupDesc); + myslot = ExecInitExtraTupleSlot(estate, tupDesc); /* Triggers might need a slot as well */ - estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate); + estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate, NULL); /* Prepare to catch AFTER triggers. 
*/ AfterTriggerBeginQuery(); diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c index 160d941c00..fffc0095a7 100644 --- a/src/backend/commands/trigger.c +++ b/src/backend/commands/trigger.c @@ -3251,7 +3251,8 @@ TriggerEnabled(EState *estate, ResultRelInfo *relinfo, if (estate->es_trig_oldtup_slot == NULL) { oldContext = MemoryContextSwitchTo(estate->es_query_cxt); - estate->es_trig_oldtup_slot = ExecInitExtraTupleSlot(estate); + estate->es_trig_oldtup_slot = + ExecInitExtraTupleSlot(estate, NULL); MemoryContextSwitchTo(oldContext); } oldslot = estate->es_trig_oldtup_slot; @@ -3264,7 +3265,8 @@ TriggerEnabled(EState *estate, ResultRelInfo *relinfo, if (estate->es_trig_newtup_slot == NULL) { oldContext = MemoryContextSwitchTo(estate->es_query_cxt); - estate->es_trig_newtup_slot = ExecInitExtraTupleSlot(estate); + estate->es_trig_newtup_slot = + ExecInitExtraTupleSlot(estate, NULL); MemoryContextSwitchTo(oldContext); } newslot = estate->es_trig_newtup_slot; diff --git a/src/backend/executor/README b/src/backend/executor/README index b3e74aa1a5..0d7cd552eb 100644 --- a/src/backend/executor/README +++ b/src/backend/executor/README @@ -243,6 +243,8 @@ This is a sketch of control flow for full query processing: switch to per-query context to run ExecInitNode AfterTriggerBeginQuery ExecInitNode --- recursively scans plan tree + ExecInitNode + recurse into subsidiary nodes CreateExprContext creates per-tuple context ExecInitExpr diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c index 463e185a9a..db5fcafbfe 100644 --- a/src/backend/executor/execExpr.c +++ b/src/backend/executor/execExpr.c @@ -2415,7 +2415,7 @@ ExecInitWholeRowVar(ExprEvalStep *scratch, Var *variable, ExprState *state) scratch->d.wholerow.junkFilter = ExecInitJunkFilter(subplan->plan->targetlist, ExecGetResultType(subplan)->tdhasoid, - ExecInitExtraTupleSlot(parent->state)); + ExecInitExtraTupleSlot(parent->state, NULL)); } } } diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 5d3e923cca..91ba939bdc 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -1073,7 +1073,7 @@ InitPlan(QueryDesc *queryDesc, int eflags) j = ExecInitJunkFilter(planstate->plan->targetlist, tupType->tdhasoid, - ExecInitExtraTupleSlot(estate)); + ExecInitExtraTupleSlot(estate, NULL)); estate->es_junkFilter = j; /* Want to return the cleaned tuple type */ diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index 4048c3ebc6..00523ce250 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -93,7 +93,7 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, * We need an additional tuple slot for storing transient tuples that * are converted to the root table descriptor. */ - proute->root_tuple_slot = MakeTupleTableSlot(); + proute->root_tuple_slot = MakeTupleTableSlot(NULL); } else { @@ -112,7 +112,7 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, * (such as ModifyTableState) and released when the node finishes * processing. 
*/ - proute->partition_tuple_slot = MakeTupleTableSlot(); + proute->partition_tuple_slot = MakeTupleTableSlot(NULL); i = 0; foreach(cell, leaf_parts) diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c index bf4f603fd3..caf91730ce 100644 --- a/src/backend/executor/execScan.c +++ b/src/backend/executor/execScan.c @@ -229,7 +229,7 @@ ExecScan(ScanState *node, * the scan node, because the planner will preferentially generate a matching * tlist. * - * ExecAssignScanType must have been called already. + * The scan slot's descriptor must have been set already. */ void ExecAssignScanProjectionInfo(ScanState *node) diff --git a/src/backend/executor/execTuples.c b/src/backend/executor/execTuples.c index 5df89e419c..c46d65cf93 100644 --- a/src/backend/executor/execTuples.c +++ b/src/backend/executor/execTuples.c @@ -58,7 +58,7 @@ * At ExecutorStart() * ---------------- * - ExecInitSeqScan() calls ExecInitScanTupleSlot() and - * ExecInitResultTupleSlot() to construct TupleTableSlots + * ExecInitResultTupleSlotTL() to construct TupleTableSlots * for the tuples returned by the access methods and the * tuples resulting from performing target list projections. * @@ -104,19 +104,36 @@ static TupleDesc ExecTypeFromTLInternal(List *targetList, /* -------------------------------- * MakeTupleTableSlot * - * Basic routine to make an empty TupleTableSlot. + * Basic routine to make an empty TupleTableSlot. If tupleDesc is + * specified the slot's descriptor is fixed for its lifetime, gaining + * some efficiency. If that's undesirable, pass NULL. * -------------------------------- */ TupleTableSlot * -MakeTupleTableSlot(void) +MakeTupleTableSlot(TupleDesc tupleDesc) { - TupleTableSlot *slot = makeNode(TupleTableSlot); + Size sz; + TupleTableSlot *slot; + /* + * When a fixed descriptor is specified, we can reduce overhead by + * allocating the entire slot in one go.
+ */ + if (tupleDesc) + sz = MAXALIGN(sizeof(TupleTableSlot)) + + MAXALIGN(tupleDesc->natts * sizeof(Datum)) + + MAXALIGN(tupleDesc->natts * sizeof(bool)); + else + sz = sizeof(TupleTableSlot); + + slot = palloc0(sz); + slot->type = T_TupleTableSlot; slot->tts_isempty = true; slot->tts_shouldFree = false; slot->tts_shouldFreeMin = false; slot->tts_tuple = NULL; - slot->tts_tupleDescriptor = NULL; + slot->tts_fixedTupleDescriptor = tupleDesc != NULL; + slot->tts_tupleDescriptor = tupleDesc; slot->tts_mcxt = CurrentMemoryContext; slot->tts_buffer = InvalidBuffer; slot->tts_nvalid = 0; @@ -124,6 +141,19 @@ MakeTupleTableSlot(void) slot->tts_isnull = NULL; slot->tts_mintuple = NULL; + if (tupleDesc != NULL) + { + slot->tts_values = (Datum *) + (((char *) slot) + + MAXALIGN(sizeof(TupleTableSlot))); + slot->tts_isnull = (bool *) + (((char *) slot) + + MAXALIGN(sizeof(TupleTableSlot)) + + MAXALIGN(tupleDesc->natts * sizeof(Datum))); + + PinTupleDesc(tupleDesc); + } + return slot; } @@ -134,9 +164,9 @@ MakeTupleTableSlot(void) * -------------------------------- */ TupleTableSlot * -ExecAllocTableSlot(List **tupleTable) +ExecAllocTableSlot(List **tupleTable, TupleDesc desc) { - TupleTableSlot *slot = MakeTupleTableSlot(); + TupleTableSlot *slot = MakeTupleTableSlot(desc); *tupleTable = lappend(*tupleTable, slot); @@ -173,10 +203,13 @@ ExecResetTupleTable(List *tupleTable, /* tuple table */ /* If shouldFree, release memory occupied by the slot itself */ if (shouldFree) { - if (slot->tts_values) - pfree(slot->tts_values); - if (slot->tts_isnull) - pfree(slot->tts_isnull); + if (!slot->tts_fixedTupleDescriptor) + { + if (slot->tts_values) + pfree(slot->tts_values); + if (slot->tts_isnull) + pfree(slot->tts_isnull); + } pfree(slot); } } @@ -198,9 +231,7 @@ ExecResetTupleTable(List *tupleTable, /* tuple table */ TupleTableSlot * MakeSingleTupleTableSlot(TupleDesc tupdesc) { - TupleTableSlot *slot = MakeTupleTableSlot(); - - ExecSetSlotDescriptor(slot, tupdesc); + TupleTableSlot *slot = MakeTupleTableSlot(tupdesc); return slot; } @@ -220,10 +251,13 @@ ExecDropSingleTupleTableSlot(TupleTableSlot *slot) ExecClearTuple(slot); if (slot->tts_tupleDescriptor) ReleaseTupleDesc(slot->tts_tupleDescriptor); - if (slot->tts_values) - pfree(slot->tts_values); - if (slot->tts_isnull) - pfree(slot->tts_isnull); + if (!slot->tts_fixedTupleDescriptor) + { + if (slot->tts_values) + pfree(slot->tts_values); + if (slot->tts_isnull) + pfree(slot->tts_isnull); + } pfree(slot); } @@ -247,6 +281,8 @@ void ExecSetSlotDescriptor(TupleTableSlot *slot, /* slot to change */ TupleDesc tupdesc) /* new tuple descriptor */ { + Assert(!slot->tts_fixedTupleDescriptor); + /* For safety, make sure slot is empty before changing it */ ExecClearTuple(slot); @@ -816,7 +852,7 @@ ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot) */ /* -------------------------------- - * ExecInit{Result,Scan,Extra}TupleSlot + * ExecInit{Result,Scan,Extra}TupleSlot[TL] * * These are convenience routines to initialize the specified slot * in nodes inheriting the appropriate state. ExecInitExtraTupleSlot @@ -825,13 +861,30 @@ ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot) */ /* ---------------- - * ExecInitResultTupleSlot + * ExecInitResultTupleSlotTL + * + * Initialize result tuple slot, using the plan node's targetlist. 
* ---------------- */ void -ExecInitResultTupleSlot(EState *estate, PlanState *planstate) +ExecInitResultTupleSlotTL(EState *estate, PlanState *planstate) { - planstate->ps_ResultTupleSlot = ExecAllocTableSlot(&estate->es_tupleTable); + bool hasoid; + TupleDesc tupDesc; + + if (ExecContextForcesOids(planstate, &hasoid)) + { + /* context forces OID choice; hasoid is now set correctly */ + } + else + { + /* given free choice, don't leave space for OIDs in result tuples */ + hasoid = false; + } + + tupDesc = ExecTypeFromTL(planstate->plan->targetlist, hasoid); + + planstate->ps_ResultTupleSlot = ExecAllocTableSlot(&estate->es_tupleTable, tupDesc); } /* ---------------- @@ -839,19 +892,24 @@ ExecInitResultTupleSlot(EState *estate, PlanState *planstate) * ---------------- */ void -ExecInitScanTupleSlot(EState *estate, ScanState *scanstate) +ExecInitScanTupleSlot(EState *estate, ScanState *scanstate, TupleDesc tupledesc) { - scanstate->ss_ScanTupleSlot = ExecAllocTableSlot(&estate->es_tupleTable); + scanstate->ss_ScanTupleSlot = ExecAllocTableSlot(&estate->es_tupleTable, + tupledesc); } /* ---------------- * ExecInitExtraTupleSlot + * + * Return a newly created slot. If tupledesc is non-NULL the slot will have + * that as its fixed tupledesc. Otherwise the caller needs to use + * ExecSetSlotDescriptor() to set the descriptor before use. * ---------------- */ TupleTableSlot * -ExecInitExtraTupleSlot(EState *estate) +ExecInitExtraTupleSlot(EState *estate, TupleDesc tupledesc) { - return ExecAllocTableSlot(&estate->es_tupleTable); + return ExecAllocTableSlot(&estate->es_tupleTable, tupledesc); } /* ---------------- @@ -865,9 +923,7 @@ ExecInitExtraTupleSlot(EState *estate) TupleTableSlot * ExecInitNullTupleSlot(EState *estate, TupleDesc tupType) { - TupleTableSlot *slot = ExecInitExtraTupleSlot(estate); - - ExecSetSlotDescriptor(slot, tupType); + TupleTableSlot *slot = ExecInitExtraTupleSlot(estate, tupType); return ExecStoreAllNullTuple(slot); } diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c index 50b6edce63..a8ae37ebc8 100644 --- a/src/backend/executor/execUtils.c +++ b/src/backend/executor/execUtils.c @@ -22,7 +22,6 @@ * ReScanExprContext * * ExecAssignExprContext Common code for plan node init routines. - * ExecAssignResultType * etc * * ExecOpenScanRelation Common code for scan node init routines. @@ -428,47 +427,6 @@ ExecAssignExprContext(EState *estate, PlanState *planstate) planstate->ps_ExprContext = CreateExprContext(estate); } -/* ---------------- - * ExecAssignResultType - * ---------------- - */ -void -ExecAssignResultType(PlanState *planstate, TupleDesc tupDesc) -{ - TupleTableSlot *slot = planstate->ps_ResultTupleSlot; - - ExecSetSlotDescriptor(slot, tupDesc); -} - -/* ---------------- - * ExecAssignResultTypeFromTL - * ---------------- - */ -void -ExecAssignResultTypeFromTL(PlanState *planstate) -{ - bool hasoid; - TupleDesc tupDesc; - - if (ExecContextForcesOids(planstate, &hasoid)) - { - /* context forces OID choice; hasoid is now set correctly */ - } - else - { - /* given free choice, don't leave space for OIDs in result tuples */ - hasoid = false; - } - - /* - * ExecTypeFromTL needs the parse-time representation of the tlist, not a - * list of ExprStates. This is good because some plan nodes don't bother - * to set up planstate->targetlist ... 
- */ - tupDesc = ExecTypeFromTL(planstate->plan->targetlist, hasoid); - ExecAssignResultType(planstate, tupDesc); -} - /* ---------------- * ExecGetResultType * ---------------- @@ -609,13 +567,9 @@ ExecFreeExprContext(PlanState *planstate) planstate->ps_ExprContext = NULL; } + /* ---------------------------------------------------------------- - * the following scan type support functions are for - * those nodes which are stubborn and return tuples in - * their Scan tuple slot instead of their Result tuple - * slot.. luck fur us, these nodes do not do projections - * so we don't have to worry about getting the ProjectionInfo - * right for them... -cim 6/3/91 + * Scan node support * ---------------------------------------------------------------- */ @@ -632,11 +586,11 @@ ExecAssignScanType(ScanState *scanstate, TupleDesc tupDesc) } /* ---------------- - * ExecAssignScanTypeFromOuterPlan + * ExecCreateScanSlotFromOuterPlan * ---------------- */ void -ExecAssignScanTypeFromOuterPlan(ScanState *scanstate) +ExecCreateScanSlotFromOuterPlan(EState *estate, ScanState *scanstate) { PlanState *outerPlan; TupleDesc tupDesc; @@ -644,15 +598,9 @@ ExecAssignScanTypeFromOuterPlan(ScanState *scanstate) outerPlan = outerPlanState(scanstate); tupDesc = ExecGetResultType(outerPlan); - ExecAssignScanType(scanstate, tupDesc); + ExecInitScanTupleSlot(estate, scanstate, tupDesc); } - -/* ---------------------------------------------------------------- - * Scan node support - * ---------------------------------------------------------------- - */ - /* ---------------------------------------------------------------- * ExecRelationIsTargetRelation * diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c index e74b3a9391..1b1334006f 100644 --- a/src/backend/executor/nodeAgg.c +++ b/src/backend/executor/nodeAgg.c @@ -1402,8 +1402,8 @@ find_hash_columns(AggState *aggstate) perhash->aggnode->grpOperators, &perhash->eqfuncoids, &perhash->hashfunctions); - perhash->hashslot = ExecAllocTableSlot(&estate->es_tupleTable); - ExecSetSlotDescriptor(perhash->hashslot, hashDesc); + perhash->hashslot = + ExecAllocTableSlot(&estate->es_tupleTable, hashDesc); list_free(hashTlist); bms_free(colnos); @@ -2198,31 +2198,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) ExecAssignExprContext(estate, &aggstate->ss.ps); - /* - * tuple table initialization. - * - * For hashtables, we create some additional slots below. - */ - ExecInitScanTupleSlot(estate, &aggstate->ss); - ExecInitResultTupleSlot(estate, &aggstate->ss.ps); - aggstate->sort_slot = ExecInitExtraTupleSlot(estate); - - /* - * initialize child expressions - * - * We expect the parser to have checked that no aggs contain other agg - * calls in their arguments (and just to be sure, we verify it again while - * initializing the plan node). This would make no sense under SQL - * semantics, and it's forbidden by the spec. Because it is true, we - * don't need to worry about evaluating the aggs in any particular order. - * - * Note: execExpr.c finds Aggrefs for us, and adds their AggrefExprState - * nodes to aggstate->aggs. Aggrefs in the qual are found here; Aggrefs - * in the targetlist are found during ExecAssignProjectionInfo, below. - */ - aggstate->ss.ps.qual = - ExecInitQual(node->plan.qual, (PlanState *) aggstate); - /* * Initialize child nodes. * @@ -2237,17 +2212,33 @@ ExecInitAgg(Agg *node, EState *estate, int eflags) /* * initialize source tuple type.
*/ - ExecAssignScanTypeFromOuterPlan(&aggstate->ss); + ExecCreateScanSlotFromOuterPlan(estate, &aggstate->ss); scanDesc = aggstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; if (node->chain) - ExecSetSlotDescriptor(aggstate->sort_slot, scanDesc); + aggstate->sort_slot = ExecInitExtraTupleSlot(estate, scanDesc); /* - * Initialize result tuple type and projection info. + * Initialize result type, slot and projection. */ - ExecAssignResultTypeFromTL(&aggstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &aggstate->ss.ps); ExecAssignProjectionInfo(&aggstate->ss.ps, NULL); + /* + * initialize child expressions + * + * We expect the parser to have checked that no aggs contain other agg + * calls in their arguments (and just to be sure, we verify it again while + * initializing the plan node). This would make no sense under SQL + * semantics, and it's forbidden by the spec. Because it is true, we + * don't need to worry about evaluating the aggs in any particular order. + * + * Note: execExpr.c finds Aggrefs for us, and adds their AggrefExprState + * nodes to aggstate->aggs. Aggrefs in the qual are found here; Aggrefs + * in the targetlist are found during ExecAssignProjectionInfo, below. + */ + aggstate->ss.ps.qual = + ExecInitQual(node->plan.qual, (PlanState *) aggstate); + /* * We should now have found all Aggrefs in the targetlist and quals. */ @@ -3071,8 +3062,8 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, if (numSortCols > 0 || aggref->aggfilter) { pertrans->sortdesc = ExecTypeFromTL(aggref->args, false); - pertrans->sortslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(pertrans->sortslot, pertrans->sortdesc); + pertrans->sortslot = + ExecInitExtraTupleSlot(estate, pertrans->sortdesc); } if (numSortCols > 0) @@ -3093,9 +3084,8 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans, else if (numDistinctCols > 0) { /* we will need an extra slot to store prior values */ - pertrans->uniqslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(pertrans->uniqslot, - pertrans->sortdesc); + pertrans->uniqslot = + ExecInitExtraTupleSlot(estate, pertrans->sortdesc); } /* Extract the sort information for use later */ diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c index 264d8fea8d..7a3dd2ee2d 100644 --- a/src/backend/executor/nodeAppend.c +++ b/src/backend/executor/nodeAppend.c @@ -129,17 +129,9 @@ ExecInitAppend(Append *node, EState *estate, int eflags) appendstate->as_nplans = nplans; /* - * Miscellaneous initialization - * - * Append plans don't have expression contexts because they never call - * ExecQual or ExecProject. + * Initialize result tuple type and slot. */ - - /* - * append nodes still have Result slots, which hold pointers to tuples, so - * we have to initialize them. - */ - ExecInitResultTupleSlot(estate, &appendstate->ps); + ExecInitResultTupleSlotTL(estate, &appendstate->ps); /* * call ExecInitNode on each of the plans to be executed and save the @@ -155,9 +147,11 @@ ExecInitAppend(Append *node, EState *estate, int eflags) } /* - * initialize output tuple type + * Miscellaneous initialization + * + * Append plans don't have expression contexts because they never call + * ExecQual or ExecProject. 
*/ - ExecAssignResultTypeFromTL(&appendstate->ps); appendstate->ps.ps_ProjInfo = NULL; /* diff --git a/src/backend/executor/nodeBitmapAnd.c b/src/backend/executor/nodeBitmapAnd.c index 913046c987..23d0d94326 100644 --- a/src/backend/executor/nodeBitmapAnd.c +++ b/src/backend/executor/nodeBitmapAnd.c @@ -80,13 +80,6 @@ ExecInitBitmapAnd(BitmapAnd *node, EState *estate, int eflags) bitmapandstate->bitmapplans = bitmapplanstates; bitmapandstate->nplans = nplans; - /* - * Miscellaneous initialization - * - * BitmapAnd plans don't have expression contexts because they never call - * ExecQual or ExecProject. They don't need any tuple slots either. - */ - /* * call ExecInitNode on each of the plans to be executed and save the * results into the array "bitmapplanstates". @@ -99,6 +92,13 @@ ExecInitBitmapAnd(BitmapAnd *node, EState *estate, int eflags) i++; } + /* + * Miscellaneous initialization + * + * BitmapAnd plans don't have expression contexts because they never call + * ExecQual or ExecProject. They don't need any tuple slots either. + */ + return bitmapandstate; } diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c index fa65d4efbe..3e1c9e0714 100644 --- a/src/backend/executor/nodeBitmapHeapscan.c +++ b/src/backend/executor/nodeBitmapHeapscan.c @@ -907,23 +907,39 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags) ExecAssignExprContext(estate, &scanstate->ss.ps); /* - * initialize child expressions + * open the base relation and acquire appropriate lock on it. */ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); - scanstate->bitmapqualorig = - ExecInitQual(node->bitmapqualorig, (PlanState *) scanstate); + currentRelation = ExecOpenScanRelation(estate, node->scan.scanrelid, eflags); + + /* + * initialize child nodes + * + * We do this after ExecOpenScanRelation because the child nodes will open + * indexscans on our relation's indexes, and we want to be sure we have + * acquired a lock on the relation first. + */ + outerPlanState(scanstate) = ExecInitNode(outerPlan(node), estate, eflags); /* - * tuple table initialization + * get the scan type from the relation descriptor. */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); + ExecInitScanTupleSlot(estate, &scanstate->ss, + RelationGetDescr(currentRelation)); + /* - * open the base relation and acquire appropriate lock on it. + * Initialize result slot, type and projection. */ - currentRelation = ExecOpenScanRelation(estate, node->scan.scanrelid, eflags); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); + ExecAssignScanProjectionInfo(&scanstate->ss); + + /* + * initialize child expressions + */ + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + scanstate->bitmapqualorig = + ExecInitQual(node->bitmapqualorig, (PlanState *) scanstate); /* * Determine the maximum for prefetch_target. If the tablespace has a @@ -952,26 +968,6 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags) 0, NULL); - /* - * get the scan type from the relation descriptor. - */ - ExecAssignScanType(&scanstate->ss, RelationGetDescr(currentRelation)); - - /* - * Initialize result tuple type and projection info. 
- */ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); - ExecAssignScanProjectionInfo(&scanstate->ss); - - /* - * initialize child nodes - * - * We do this last because the child nodes will open indexscans on our - * relation's indexes, and we want to be sure we have acquired a lock on - * the relation first. - */ - outerPlanState(scanstate) = ExecInitNode(outerPlan(node), estate, eflags); - /* * all done. */ diff --git a/src/backend/executor/nodeBitmapIndexscan.c b/src/backend/executor/nodeBitmapIndexscan.c index bb5e4da187..d04f4901b4 100644 --- a/src/backend/executor/nodeBitmapIndexscan.c +++ b/src/backend/executor/nodeBitmapIndexscan.c @@ -226,6 +226,15 @@ ExecInitBitmapIndexScan(BitmapIndexScan *node, EState *estate, int eflags) /* normally we don't make the result bitmap till runtime */ indexstate->biss_result = NULL; + /* + * We do not open or lock the base relation here. We assume that an + * ancestor BitmapHeapScan node is holding AccessShareLock (or better) on + * the heap relation throughout the execution of the plan tree. + */ + + indexstate->ss.ss_currentRelation = NULL; + indexstate->ss.ss_currentScanDesc = NULL; + /* * Miscellaneous initialization * @@ -242,15 +251,6 @@ ExecInitBitmapIndexScan(BitmapIndexScan *node, EState *estate, int eflags) * sub-parts corresponding to runtime keys (see below). */ - /* - * We do not open or lock the base relation here. We assume that an - * ancestor BitmapHeapScan node is holding AccessShareLock (or better) on - * the heap relation throughout the execution of the plan tree. - */ - - indexstate->ss.ss_currentRelation = NULL; - indexstate->ss.ss_currentScanDesc = NULL; - /* * If we are just doing EXPLAIN (ie, aren't going to run the plan), stop * here. This allows an index-advisor plugin to EXPLAIN a plan containing diff --git a/src/backend/executor/nodeBitmapOr.c b/src/backend/executor/nodeBitmapOr.c index 8047549f7d..3f0a0a0544 100644 --- a/src/backend/executor/nodeBitmapOr.c +++ b/src/backend/executor/nodeBitmapOr.c @@ -81,13 +81,6 @@ ExecInitBitmapOr(BitmapOr *node, EState *estate, int eflags) bitmaporstate->bitmapplans = bitmapplanstates; bitmaporstate->nplans = nplans; - /* - * Miscellaneous initialization - * - * BitmapOr plans don't have expression contexts because they never call - * ExecQual or ExecProject. They don't need any tuple slots either. - */ - /* * call ExecInitNode on each of the plans to be executed and save the * results into the array "bitmapplanstates". @@ -100,6 +93,13 @@ ExecInitBitmapOr(BitmapOr *node, EState *estate, int eflags) i++; } + /* + * Miscellaneous initialization + * + * BitmapOr plans don't have expression contexts because they never call + * ExecQual or ExecProject. They don't need any tuple slots either. + */ + return bitmaporstate; } diff --git a/src/backend/executor/nodeCtescan.c b/src/backend/executor/nodeCtescan.c index ec6d75cbd4..218619c760 100644 --- a/src/backend/executor/nodeCtescan.c +++ b/src/backend/executor/nodeCtescan.c @@ -242,31 +242,25 @@ ExecInitCteScan(CteScan *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &scanstate->ss.ps); - /* - * initialize child expressions - */ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); - - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); - /* * The scan tuple type (ie, the rowtype we expect to find in the work * table) is the same as the result rowtype of the CTE query. 
*/ - ExecAssignScanType(&scanstate->ss, - ExecGetResultType(scanstate->cteplanstate)); + ExecInitScanTupleSlot(estate, &scanstate->ss, + ExecGetResultType(scanstate->cteplanstate)); /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. */ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); ExecAssignScanProjectionInfo(&scanstate->ss); + /* + * initialize child expressions + */ + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + return scanstate; } diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c index 936a2221f5..b816e0b31d 100644 --- a/src/backend/executor/nodeCustom.c +++ b/src/backend/executor/nodeCustom.c @@ -54,14 +54,6 @@ ExecInitCustomScan(CustomScan *cscan, EState *estate, int eflags) /* create expression context for node */ ExecAssignExprContext(estate, &css->ss.ps); - /* initialize child expressions */ - css->ss.ps.qual = - ExecInitQual(cscan->scan.plan.qual, (PlanState *) css); - - /* tuple table initialization */ - ExecInitScanTupleSlot(estate, &css->ss); - ExecInitResultTupleSlot(estate, &css->ss.ps); - /* * open the base relation, if any, and acquire an appropriate lock on it */ @@ -81,23 +73,27 @@ ExecInitCustomScan(CustomScan *cscan, EState *estate, int eflags) TupleDesc scan_tupdesc; scan_tupdesc = ExecTypeFromTL(cscan->custom_scan_tlist, false); - ExecAssignScanType(&css->ss, scan_tupdesc); + ExecInitScanTupleSlot(estate, &css->ss, scan_tupdesc); /* Node's targetlist will contain Vars with varno = INDEX_VAR */ tlistvarno = INDEX_VAR; } else { - ExecAssignScanType(&css->ss, RelationGetDescr(scan_rel)); + ExecInitScanTupleSlot(estate, &css->ss, RelationGetDescr(scan_rel)); /* Node's targetlist will contain Vars with varno = scanrelid */ tlistvarno = scanrelid; } /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. */ - ExecAssignResultTypeFromTL(&css->ss.ps); + ExecInitResultTupleSlotTL(estate, &css->ss.ps); ExecAssignScanProjectionInfoWithVarno(&css->ss, tlistvarno); + /* initialize child expressions */ + css->ss.ps.qual = + ExecInitQual(cscan->scan.plan.qual, (PlanState *) css); + /* * The callback of custom-scan provider applies the final initialization * of the custom-scan-state node according to its logic. 
diff --git a/src/backend/executor/nodeForeignscan.c b/src/backend/executor/nodeForeignscan.c index 59865f5cca..0084234b35 100644 --- a/src/backend/executor/nodeForeignscan.c +++ b/src/backend/executor/nodeForeignscan.c @@ -155,20 +155,6 @@ ExecInitForeignScan(ForeignScan *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &scanstate->ss.ps); - /* - * initialize child expressions - */ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); - scanstate->fdw_recheck_quals = - ExecInitQual(node->fdw_recheck_quals, (PlanState *) scanstate); - - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); - /* * open the base relation, if any, and acquire an appropriate lock on it; * also acquire function pointers from the FDW's handler @@ -194,23 +180,31 @@ ExecInitForeignScan(ForeignScan *node, EState *estate, int eflags) TupleDesc scan_tupdesc; scan_tupdesc = ExecTypeFromTL(node->fdw_scan_tlist, false); - ExecAssignScanType(&scanstate->ss, scan_tupdesc); + ExecInitScanTupleSlot(estate, &scanstate->ss, scan_tupdesc); /* Node's targetlist will contain Vars with varno = INDEX_VAR */ tlistvarno = INDEX_VAR; } else { - ExecAssignScanType(&scanstate->ss, RelationGetDescr(currentRelation)); + ExecInitScanTupleSlot(estate, &scanstate->ss, RelationGetDescr(currentRelation)); /* Node's targetlist will contain Vars with varno = scanrelid */ tlistvarno = scanrelid; } /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. */ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); ExecAssignScanProjectionInfoWithVarno(&scanstate->ss, tlistvarno); + /* + * initialize child expressions + */ + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + scanstate->fdw_recheck_quals = + ExecInitQual(node->fdw_recheck_quals, (PlanState *) scanstate); + /* * Initialize FDW-related state. */ diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c index 69f8d3e814..fb7c9f6787 100644 --- a/src/backend/executor/nodeFunctionscan.c +++ b/src/backend/executor/nodeFunctionscan.c @@ -334,18 +334,6 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &scanstate->ss.ps); - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); - - /* - * initialize child expressions - */ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); - scanstate->funcstates = palloc(nfuncs * sizeof(FunctionScanPerFuncState)); natts = 0; @@ -436,8 +424,7 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags) */ if (!scanstate->simple) { - fs->func_slot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(fs->func_slot, fs->tupdesc); + fs->func_slot = ExecInitExtraTupleSlot(estate, fs->tupdesc); } else fs->func_slot = NULL; @@ -492,14 +479,23 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags) Assert(attno == natts); } - ExecAssignScanType(&scanstate->ss, scan_tupdesc); + /* + * Initialize scan slot and type. + */ + ExecInitScanTupleSlot(estate, &scanstate->ss, scan_tupdesc); /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. 
*/ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); ExecAssignScanProjectionInfo(&scanstate->ss); + /* + * initialize child expressions + */ + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + /* * Create a memory context that ExecMakeTableFunctionResult can use to * evaluate function arguments in. We can't use the per-tuple context for diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c index 58eadd45b8..eaf7d2d563 100644 --- a/src/backend/executor/nodeGather.c +++ b/src/backend/executor/nodeGather.c @@ -59,7 +59,6 @@ ExecInitGather(Gather *node, EState *estate, int eflags) { GatherState *gatherstate; Plan *outerNode; - bool hasoid; TupleDesc tupDesc; /* Gather node doesn't have innerPlan node. */ @@ -85,37 +84,29 @@ ExecInitGather(Gather *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &gatherstate->ps); - /* - * Gather doesn't support checking a qual (it's always more efficient to - * do it in the child node). - */ - Assert(!node->plan.qual); - - /* - * tuple table initialization - */ - gatherstate->funnel_slot = ExecInitExtraTupleSlot(estate); - ExecInitResultTupleSlot(estate, &gatherstate->ps); - /* * now initialize outer plan */ outerNode = outerPlan(node); outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags); + tupDesc = ExecGetResultType(outerPlanState(gatherstate)); + + /* + * Initialize result slot, type and projection. + */ + ExecInitResultTupleSlotTL(estate, &gatherstate->ps); + ExecConditionalAssignProjectionInfo(&gatherstate->ps, tupDesc, OUTER_VAR); /* * Initialize funnel slot to same tuple descriptor as outer plan. */ - if (!ExecContextForcesOids(outerPlanState(gatherstate), &hasoid)) - hasoid = false; - tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid); - ExecSetSlotDescriptor(gatherstate->funnel_slot, tupDesc); + gatherstate->funnel_slot = ExecInitExtraTupleSlot(estate, tupDesc); /* - * Initialize result tuple type and projection info. + * Gather doesn't support checking a qual (it's always more efficient to + * do it in the child node). */ - ExecAssignResultTypeFromTL(&gatherstate->ps); - ExecConditionalAssignProjectionInfo(&gatherstate->ps, tupDesc, OUTER_VAR); + Assert(!node->plan.qual); return gatherstate; } diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c index 6858c91e8c..83221cdbae 100644 --- a/src/backend/executor/nodeGatherMerge.c +++ b/src/backend/executor/nodeGatherMerge.c @@ -73,7 +73,6 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) { GatherMergeState *gm_state; Plan *outerNode; - bool hasoid; TupleDesc tupDesc; /* Gather merge node doesn't have innerPlan node. */ @@ -104,11 +103,6 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) */ Assert(!node->plan.qual); - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &gm_state->ps); - /* * now initialize outer plan */ @@ -119,15 +113,13 @@ ExecInitGatherMerge(GatherMerge *node, EState *estate, int eflags) * Store the tuple descriptor into gather merge state, so we can use it * while initializing the gather merge slots. */ - if (!ExecContextForcesOids(outerPlanState(gm_state), &hasoid)) - hasoid = false; - tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid); + tupDesc = ExecGetResultType(outerPlanState(gm_state)); gm_state->tupDesc = tupDesc; /* - * Initialize result tuple type and projection info. 
+ * Initialize result slot, type and projection. */ - ExecAssignResultTypeFromTL(&gm_state->ps); + ExecInitResultTupleSlotTL(estate, &gm_state->ps); ExecConditionalAssignProjectionInfo(&gm_state->ps, tupDesc, OUTER_VAR); /* @@ -410,9 +402,8 @@ gather_merge_setup(GatherMergeState *gm_state) (HeapTuple *) palloc0(sizeof(HeapTuple) * MAX_TUPLE_STORE); /* Initialize tuple slot for worker */ - gm_state->gm_slots[i + 1] = ExecInitExtraTupleSlot(gm_state->ps.state); - ExecSetSlotDescriptor(gm_state->gm_slots[i + 1], - gm_state->tupDesc); + gm_state->gm_slots[i + 1] = + ExecInitExtraTupleSlot(gm_state->ps.state, gm_state->tupDesc); } /* Allocate the resources for the merge */ diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c index 8f7bf459ef..c6efd64d00 100644 --- a/src/backend/executor/nodeGroup.c +++ b/src/backend/executor/nodeGroup.c @@ -181,34 +181,28 @@ ExecInitGroup(Group *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &grpstate->ss.ps); - /* - * tuple table initialization - */ - ExecInitScanTupleSlot(estate, &grpstate->ss); - ExecInitResultTupleSlot(estate, &grpstate->ss.ps); - - /* - * initialize child expressions - */ - grpstate->ss.ps.qual = - ExecInitQual(node->plan.qual, (PlanState *) grpstate); - /* * initialize child nodes */ outerPlanState(grpstate) = ExecInitNode(outerPlan(node), estate, eflags); /* - * initialize tuple type. + * Initialize scan slot and type. */ - ExecAssignScanTypeFromOuterPlan(&grpstate->ss); + ExecCreateScanSlotFromOuterPlan(estate, &grpstate->ss); /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. */ - ExecAssignResultTypeFromTL(&grpstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &grpstate->ss.ps); ExecAssignProjectionInfo(&grpstate->ss.ps, NULL); + /* + * initialize child expressions + */ + grpstate->ss.ps.qual = + ExecInitQual(node->plan.qual, (PlanState *) grpstate); + /* * Precompute fmgr lookup data for inner loop */ diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c index b10f847452..06bb44b163 100644 --- a/src/backend/executor/nodeHash.c +++ b/src/backend/executor/nodeHash.c @@ -373,29 +373,24 @@ ExecInitHash(Hash *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &hashstate->ps); - /* - * initialize our result slot - */ - ExecInitResultTupleSlot(estate, &hashstate->ps); - - /* - * initialize child expressions - */ - hashstate->ps.qual = - ExecInitQual(node->plan.qual, (PlanState *) hashstate); - /* * initialize child nodes */ outerPlanState(hashstate) = ExecInitNode(outerPlan(node), estate, eflags); /* - * initialize tuple type. no need to initialize projection info because - * this node doesn't do projections + * initialize our result slot and type. No need to build projection + * because this node doesn't do projections. 
*/ - ExecAssignResultTypeFromTL(&hashstate->ps); + ExecInitResultTupleSlotTL(estate, &hashstate->ps); hashstate->ps.ps_ProjInfo = NULL; + /* + * initialize child expressions + */ + hashstate->ps.qual = + ExecInitQual(node->plan.qual, (PlanState *) hashstate); + return hashstate; } diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c index 03d78042fa..ab91eb2527 100644 --- a/src/backend/executor/nodeHashjoin.c +++ b/src/backend/executor/nodeHashjoin.c @@ -596,6 +596,7 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags) List *lclauses; List *rclauses; List *hoperators; + TupleDesc outerDesc, innerDesc; ListCell *l; /* check for unsupported flags */ @@ -614,6 +615,7 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags) * managed to launch a parallel query. */ hjstate->js.ps.ExecProcNode = ExecHashJoin; + hjstate->js.jointype = node->join.jointype; /* * Miscellaneous initialization @@ -622,17 +624,6 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &hjstate->js.ps); - /* - * initialize child expressions - */ - hjstate->js.ps.qual = - ExecInitQual(node->join.plan.qual, (PlanState *) hjstate); - hjstate->js.jointype = node->join.jointype; - hjstate->js.joinqual = - ExecInitQual(node->join.joinqual, (PlanState *) hjstate); - hjstate->hashclauses = - ExecInitQual(node->hashclauses, (PlanState *) hjstate); - /* * initialize child nodes * @@ -644,13 +635,20 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags) hashNode = (Hash *) innerPlan(node); outerPlanState(hjstate) = ExecInitNode(outerNode, estate, eflags); + outerDesc = ExecGetResultType(outerPlanState(hjstate)); innerPlanState(hjstate) = ExecInitNode((Plan *) hashNode, estate, eflags); + innerDesc = ExecGetResultType(innerPlanState(hjstate)); + + /* + * Initialize result slot, type and projection. 
+ */ + ExecInitResultTupleSlotTL(estate, &hjstate->js.ps); + ExecAssignProjectionInfo(&hjstate->js.ps, NULL); /* * tuple table initialization */ - ExecInitResultTupleSlot(estate, &hjstate->js.ps); - hjstate->hj_OuterTupleSlot = ExecInitExtraTupleSlot(estate); + hjstate->hj_OuterTupleSlot = ExecInitExtraTupleSlot(estate, outerDesc); /* * detect whether we need only consider the first matching inner tuple @@ -667,21 +665,17 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags) case JOIN_LEFT: case JOIN_ANTI: hjstate->hj_NullInnerTupleSlot = - ExecInitNullTupleSlot(estate, - ExecGetResultType(innerPlanState(hjstate))); + ExecInitNullTupleSlot(estate, innerDesc); break; case JOIN_RIGHT: hjstate->hj_NullOuterTupleSlot = - ExecInitNullTupleSlot(estate, - ExecGetResultType(outerPlanState(hjstate))); + ExecInitNullTupleSlot(estate, outerDesc); break; case JOIN_FULL: hjstate->hj_NullOuterTupleSlot = - ExecInitNullTupleSlot(estate, - ExecGetResultType(outerPlanState(hjstate))); + ExecInitNullTupleSlot(estate, outerDesc); hjstate->hj_NullInnerTupleSlot = - ExecInitNullTupleSlot(estate, - ExecGetResultType(innerPlanState(hjstate))); + ExecInitNullTupleSlot(estate, innerDesc); break; default: elog(ERROR, "unrecognized join type: %d", @@ -703,13 +697,14 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags) } /* - * initialize tuple type and projection info + * initialize child expressions */ - ExecAssignResultTypeFromTL(&hjstate->js.ps); - ExecAssignProjectionInfo(&hjstate->js.ps, NULL); - - ExecSetSlotDescriptor(hjstate->hj_OuterTupleSlot, - ExecGetResultType(outerPlanState(hjstate))); + hjstate->js.ps.qual = + ExecInitQual(node->join.plan.qual, (PlanState *) hjstate); + hjstate->js.joinqual = + ExecInitQual(node->join.joinqual, (PlanState *) hjstate); + hjstate->hashclauses = + ExecInitQual(node->hashclauses, (PlanState *) hjstate); /* * initialize hash-specific info diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c index 8ffcc52bea..ddc0ae9061 100644 --- a/src/backend/executor/nodeIndexonlyscan.c +++ b/src/backend/executor/nodeIndexonlyscan.c @@ -518,23 +518,6 @@ ExecInitIndexOnlyScan(IndexOnlyScan *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &indexstate->ss.ps); - /* - * initialize child expressions - * - * Note: we don't initialize all of the indexorderby expression, only the - * sub-parts corresponding to runtime keys (see below). - */ - indexstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) indexstate); - indexstate->indexqual = - ExecInitQual(node->indexqual, (PlanState *) indexstate); - - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &indexstate->ss.ps); - ExecInitScanTupleSlot(estate, &indexstate->ss); - /* * open the base relation and acquire appropriate lock on it. */ @@ -551,16 +534,27 @@ ExecInitIndexOnlyScan(IndexOnlyScan *node, EState *estate, int eflags) * suitable data anyway.) */ tupDesc = ExecTypeFromTL(node->indextlist, false); - ExecAssignScanType(&indexstate->ss, tupDesc); + ExecInitScanTupleSlot(estate, &indexstate->ss, tupDesc); /* - * Initialize result tuple type and projection info. The node's + * Initialize result slot, type and projection info. The node's * targetlist will contain Vars with varno = INDEX_VAR, referencing the * scan tuple. 
*/ - ExecAssignResultTypeFromTL(&indexstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &indexstate->ss.ps); ExecAssignScanProjectionInfoWithVarno(&indexstate->ss, INDEX_VAR); + /* + * initialize child expressions + * + * Note: we don't initialize all of the indexorderby expression, only the + * sub-parts corresponding to runtime keys (see below). + */ + indexstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) indexstate); + indexstate->indexqual = + ExecInitQual(node->indexqual, (PlanState *) indexstate); + /* * If we are just doing EXPLAIN (ie, aren't going to run the plan), stop * here. This allows an index-advisor plugin to EXPLAIN a plan containing diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c index b8b961add4..01c9de88f4 100644 --- a/src/backend/executor/nodeIndexscan.c +++ b/src/backend/executor/nodeIndexscan.c @@ -940,6 +940,26 @@ ExecInitIndexScan(IndexScan *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &indexstate->ss.ps); + /* + * open the base relation and acquire appropriate lock on it. + */ + currentRelation = ExecOpenScanRelation(estate, node->scan.scanrelid, eflags); + + indexstate->ss.ss_currentRelation = currentRelation; + indexstate->ss.ss_currentScanDesc = NULL; /* no heap scan here */ + + /* + * get the scan type from the relation descriptor. + */ + ExecInitScanTupleSlot(estate, &indexstate->ss, + RelationGetDescr(currentRelation)); + + /* + * Initialize result slot, type and projection. + */ + ExecInitResultTupleSlotTL(estate, &indexstate->ss.ps); + ExecAssignScanProjectionInfo(&indexstate->ss); + /* * initialize child expressions * @@ -957,31 +977,6 @@ ExecInitIndexScan(IndexScan *node, EState *estate, int eflags) indexstate->indexorderbyorig = ExecInitExprList(node->indexorderbyorig, (PlanState *) indexstate); - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &indexstate->ss.ps); - ExecInitScanTupleSlot(estate, &indexstate->ss); - - /* - * open the base relation and acquire appropriate lock on it. - */ - currentRelation = ExecOpenScanRelation(estate, node->scan.scanrelid, eflags); - - indexstate->ss.ss_currentRelation = currentRelation; - indexstate->ss.ss_currentScanDesc = NULL; /* no heap scan here */ - - /* - * get the scan type from the relation descriptor. - */ - ExecAssignScanType(&indexstate->ss, RelationGetDescr(currentRelation)); - - /* - * Initialize result tuple type and projection info. - */ - ExecAssignResultTypeFromTL(&indexstate->ss.ps); - ExecAssignScanProjectionInfo(&indexstate->ss); - /* * If we are just doing EXPLAIN (ie, aren't going to run the plan), stop * here. This allows an index-advisor plugin to EXPLAIN a plan containing diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c index 29d2deac23..56d98b4490 100644 --- a/src/backend/executor/nodeLimit.c +++ b/src/backend/executor/nodeLimit.c @@ -353,6 +353,12 @@ ExecInitLimit(Limit *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &limitstate->ps); + /* + * initialize outer plan + */ + outerPlan = outerPlan(node); + outerPlanState(limitstate) = ExecInitNode(outerPlan, estate, eflags); + /* * initialize child expressions */ @@ -362,21 +368,15 @@ ExecInitLimit(Limit *node, EState *estate, int eflags) (PlanState *) limitstate); /* - * Tuple table initialization (XXX not actually used...) + * Initialize result slot and type. (XXX not actually used, but upper + * nodes access it to get this node's result tupledesc...) 
*/ - ExecInitResultTupleSlot(estate, &limitstate->ps); - - /* - * then initialize outer plan - */ - outerPlan = outerPlan(node); - outerPlanState(limitstate) = ExecInitNode(outerPlan, estate, eflags); + ExecInitResultTupleSlotTL(estate, &limitstate->ps); /* * limit nodes do no projections, so initialize projection info for this * node appropriately */ - ExecAssignResultTypeFromTL(&limitstate->ps); limitstate->ps.ps_ProjInfo = NULL; return limitstate; diff --git a/src/backend/executor/nodeLockRows.c b/src/backend/executor/nodeLockRows.c index 7961b4be6a..b39ccf7dc1 100644 --- a/src/backend/executor/nodeLockRows.c +++ b/src/backend/executor/nodeLockRows.c @@ -370,13 +370,15 @@ ExecInitLockRows(LockRows *node, EState *estate, int eflags) /* * Miscellaneous initialization * - * LockRows nodes never call ExecQual or ExecProject. + * LockRows nodes never call ExecQual or ExecProject, therefore no + * ExprContext is needed. */ /* - * Tuple table initialization (XXX not actually used...) + * Tuple table initialization (XXX not actually used, but upper nodes + * access it to get this node's result tupledesc...) */ - ExecInitResultTupleSlot(estate, &lrstate->ps); + ExecInitResultTupleSlotTL(estate, &lrstate->ps); /* * then initialize outer plan @@ -387,7 +389,6 @@ ExecInitLockRows(LockRows *node, EState *estate, int eflags) * LockRows nodes do no projections, so initialize projection info for * this node appropriately */ - ExecAssignResultTypeFromTL(&lrstate->ps); lrstate->ps.ps_ProjInfo = NULL; /* diff --git a/src/backend/executor/nodeMaterial.c b/src/backend/executor/nodeMaterial.c index 85afe87c44..8c2e57dbd0 100644 --- a/src/backend/executor/nodeMaterial.c +++ b/src/backend/executor/nodeMaterial.c @@ -206,14 +206,6 @@ ExecInitMaterial(Material *node, EState *estate, int eflags) * ExecQual or ExecProject. */ - /* - * tuple table initialization - * - * material nodes only return tuples from their materialized relation. - */ - ExecInitResultTupleSlot(estate, &matstate->ss.ps); - ExecInitScanTupleSlot(estate, &matstate->ss); - /* * initialize child nodes * @@ -226,13 +218,19 @@ ExecInitMaterial(Material *node, EState *estate, int eflags) outerPlanState(matstate) = ExecInitNode(outerPlan, estate, eflags); /* - * initialize tuple type. no need to initialize projection info because - * this node doesn't do projections. + * Initialize result type and slot. No need to initialize projection info + * because this node doesn't do projections. + * + * material nodes only return tuples from their materialized relation. */ - ExecAssignResultTypeFromTL(&matstate->ss.ps); - ExecAssignScanTypeFromOuterPlan(&matstate->ss); + ExecInitResultTupleSlotTL(estate, &matstate->ss.ps); matstate->ss.ps.ps_ProjInfo = NULL; + /* + * initialize tuple type. + */ + ExecCreateScanSlotFromOuterPlan(estate, &matstate->ss); + return matstate; } diff --git a/src/backend/executor/nodeMergeAppend.c b/src/backend/executor/nodeMergeAppend.c index ab4009c967..118f4ef07d 100644 --- a/src/backend/executor/nodeMergeAppend.c +++ b/src/backend/executor/nodeMergeAppend.c @@ -109,7 +109,7 @@ ExecInitMergeAppend(MergeAppend *node, EState *estate, int eflags) * MergeAppend nodes do have Result slots, which hold pointers to tuples, * so we have to initialize them. 
*/ - ExecInitResultTupleSlot(estate, &mergestate->ps); + ExecInitResultTupleSlotTL(estate, &mergestate->ps); /* * call ExecInitNode on each of the plans to be executed and save the @@ -124,10 +124,6 @@ ExecInitMergeAppend(MergeAppend *node, EState *estate, int eflags) i++; } - /* - * initialize output tuple type - */ - ExecAssignResultTypeFromTL(&mergestate->ps); mergestate->ps.ps_ProjInfo = NULL; /* diff --git a/src/backend/executor/nodeMergejoin.c b/src/backend/executor/nodeMergejoin.c index ec5f82f6a9..f3cbe2f889 100644 --- a/src/backend/executor/nodeMergejoin.c +++ b/src/backend/executor/nodeMergejoin.c @@ -1436,6 +1436,7 @@ MergeJoinState * ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) { MergeJoinState *mergestate; + TupleDesc outerDesc, innerDesc; /* check for unsupported flags */ Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -1450,6 +1451,8 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) mergestate->js.ps.plan = (Plan *) node; mergestate->js.ps.state = estate; mergestate->js.ps.ExecProcNode = ExecMergeJoin; + mergestate->js.jointype = node->join.jointype; + mergestate->mj_ConstFalseJoin = false; /* * Miscellaneous initialization @@ -1466,17 +1469,6 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) mergestate->mj_OuterEContext = CreateExprContext(estate); mergestate->mj_InnerEContext = CreateExprContext(estate); - /* - * initialize child expressions - */ - mergestate->js.ps.qual = - ExecInitQual(node->join.plan.qual, (PlanState *) mergestate); - mergestate->js.jointype = node->join.jointype; - mergestate->js.joinqual = - ExecInitQual(node->join.joinqual, (PlanState *) mergestate); - mergestate->mj_ConstFalseJoin = false; - /* mergeclauses are handled below */ - /* * initialize child nodes * @@ -1488,10 +1480,12 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) mergestate->mj_SkipMarkRestore = node->skip_mark_restore; outerPlanState(mergestate) = ExecInitNode(outerPlan(node), estate, eflags); + outerDesc = ExecGetResultType(outerPlanState(mergestate)); innerPlanState(mergestate) = ExecInitNode(innerPlan(node), estate, mergestate->mj_SkipMarkRestore ? eflags : (eflags | EXEC_FLAG_MARK)); + innerDesc = ExecGetResultType(innerPlanState(mergestate)); /* * For certain types of inner child nodes, it is advantageous to issue @@ -1514,14 +1508,25 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) else mergestate->mj_ExtraMarks = false; + /* + * Initialize result slot, type and projection. 
+ */ + ExecInitResultTupleSlotTL(estate, &mergestate->js.ps); + ExecAssignProjectionInfo(&mergestate->js.ps, NULL); + /* * tuple table initialization */ - ExecInitResultTupleSlot(estate, &mergestate->js.ps); + mergestate->mj_MarkedTupleSlot = ExecInitExtraTupleSlot(estate, innerDesc); - mergestate->mj_MarkedTupleSlot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(mergestate->mj_MarkedTupleSlot, - ExecGetResultType(innerPlanState(mergestate))); + /* + * initialize child expressions + */ + mergestate->js.ps.qual = + ExecInitQual(node->join.plan.qual, (PlanState *) mergestate); + mergestate->js.joinqual = + ExecInitQual(node->join.joinqual, (PlanState *) mergestate); + /* mergeclauses are handled below */ /* * detect whether we need only consider the first matching inner tuple @@ -1542,15 +1547,13 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) mergestate->mj_FillOuter = true; mergestate->mj_FillInner = false; mergestate->mj_NullInnerTupleSlot = - ExecInitNullTupleSlot(estate, - ExecGetResultType(innerPlanState(mergestate))); + ExecInitNullTupleSlot(estate, innerDesc); break; case JOIN_RIGHT: mergestate->mj_FillOuter = false; mergestate->mj_FillInner = true; mergestate->mj_NullOuterTupleSlot = - ExecInitNullTupleSlot(estate, - ExecGetResultType(outerPlanState(mergestate))); + ExecInitNullTupleSlot(estate, outerDesc); /* * Can't handle right or full join with non-constant extra @@ -1566,11 +1569,9 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) mergestate->mj_FillOuter = true; mergestate->mj_FillInner = true; mergestate->mj_NullOuterTupleSlot = - ExecInitNullTupleSlot(estate, - ExecGetResultType(outerPlanState(mergestate))); + ExecInitNullTupleSlot(estate, outerDesc); mergestate->mj_NullInnerTupleSlot = - ExecInitNullTupleSlot(estate, - ExecGetResultType(innerPlanState(mergestate))); + ExecInitNullTupleSlot(estate, innerDesc); /* * Can't handle right or full join with non-constant extra @@ -1587,12 +1588,6 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags) (int) node->join.jointype); } - /* - * initialize tuple type and projection info - */ - ExecAssignResultTypeFromTL(&mergestate->js.ps); - ExecAssignProjectionInfo(&mergestate->js.ps, NULL); - /* * preprocess the merge clauses */ diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c index 2a8ecbd830..93c03cfb07 100644 --- a/src/backend/executor/nodeModifyTable.c +++ b/src/backend/executor/nodeModifyTable.c @@ -2367,8 +2367,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) mtstate->ps.plan->targetlist = (List *) linitial(node->returningLists); /* Set up a slot for the output of the RETURNING projection(s) */ - ExecInitResultTupleSlot(estate, &mtstate->ps); - ExecAssignResultTypeFromTL(&mtstate->ps); + ExecInitResultTupleSlotTL(estate, &mtstate->ps); slot = mtstate->ps.ps_ResultTupleSlot; /* Need an econtext too */ @@ -2435,8 +2434,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) * expects one (maybe should change that?). 
*/ mtstate->ps.plan->targetlist = NIL; - ExecInitResultTupleSlot(estate, &mtstate->ps); - ExecAssignResultTypeFromTL(&mtstate->ps); + ExecInitResultTupleSlotTL(estate, &mtstate->ps); mtstate->ps.ps_ExprContext = NULL; } @@ -2449,6 +2447,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) if (node->onConflictAction == ONCONFLICT_UPDATE) { ExprContext *econtext; + TupleDesc relationDesc; TupleDesc tupDesc; /* insert may only have one plan, inheritance is not expanded */ @@ -2459,26 +2458,26 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) ExecAssignExprContext(estate, &mtstate->ps); econtext = mtstate->ps.ps_ExprContext; + relationDesc = resultRelInfo->ri_RelationDesc->rd_att; /* initialize slot for the existing tuple */ - mtstate->mt_existing = ExecInitExtraTupleSlot(mtstate->ps.state); - ExecSetSlotDescriptor(mtstate->mt_existing, - resultRelInfo->ri_RelationDesc->rd_att); + mtstate->mt_existing = + ExecInitExtraTupleSlot(mtstate->ps.state, relationDesc); /* carried forward solely for the benefit of explain */ mtstate->mt_excludedtlist = node->exclRelTlist; /* create target slot for UPDATE SET projection */ tupDesc = ExecTypeFromTL((List *) node->onConflictSet, - resultRelInfo->ri_RelationDesc->rd_rel->relhasoids); - mtstate->mt_conflproj = ExecInitExtraTupleSlot(mtstate->ps.state); - ExecSetSlotDescriptor(mtstate->mt_conflproj, tupDesc); + relationDesc->tdhasoid); + mtstate->mt_conflproj = + ExecInitExtraTupleSlot(mtstate->ps.state, tupDesc); /* build UPDATE SET projection state */ resultRelInfo->ri_onConflictSetProj = ExecBuildProjectionInfo(node->onConflictSet, econtext, mtstate->mt_conflproj, &mtstate->ps, - resultRelInfo->ri_RelationDesc->rd_att); + relationDesc); /* build DO UPDATE WHERE clause expression */ if (node->onConflictWhere) @@ -2583,7 +2582,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) j = ExecInitJunkFilter(subplan->targetlist, resultRelInfo->ri_RelationDesc->rd_att->tdhasoid, - ExecInitExtraTupleSlot(estate)); + ExecInitExtraTupleSlot(estate, NULL)); if (operation == CMD_UPDATE || operation == CMD_DELETE) { @@ -2633,7 +2632,7 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) * we keep it in the estate. */ if (estate->es_trig_tuple_slot == NULL) - estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate); + estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate, NULL); /* * Lastly, if this is not the primary (canSetTag) ModifyTable node, add it diff --git a/src/backend/executor/nodeNamedtuplestorescan.c b/src/backend/executor/nodeNamedtuplestorescan.c index c3b28176e4..4d898b1f83 100644 --- a/src/backend/executor/nodeNamedtuplestorescan.c +++ b/src/backend/executor/nodeNamedtuplestorescan.c @@ -133,26 +133,21 @@ ExecInitNamedTuplestoreScan(NamedTuplestoreScan *node, EState *estate, int eflag ExecAssignExprContext(estate, &scanstate->ss.ps); /* - * initialize child expressions + * Tuple table and result type initialization. The scan tuple type is + * specified for the tuplestore. */ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); + ExecInitScanTupleSlot(estate, &scanstate->ss, scanstate->tupdesc); /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); - - /* - * The scan tuple type is specified for the tuplestore. 
+ * initialize child expressions */ - ExecAssignScanType(&scanstate->ss, scanstate->tupdesc); + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); /* - * Initialize result tuple type and projection info. + * Initialize projection. */ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); ExecAssignScanProjectionInfo(&scanstate->ss); return scanstate; diff --git a/src/backend/executor/nodeNestloop.c b/src/backend/executor/nodeNestloop.c index 9b4f8cc432..9ae9863226 100644 --- a/src/backend/executor/nodeNestloop.c +++ b/src/backend/executor/nodeNestloop.c @@ -285,15 +285,6 @@ ExecInitNestLoop(NestLoop *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &nlstate->js.ps); - /* - * initialize child expressions - */ - nlstate->js.ps.qual = - ExecInitQual(node->join.plan.qual, (PlanState *) nlstate); - nlstate->js.jointype = node->join.jointype; - nlstate->js.joinqual = - ExecInitQual(node->join.joinqual, (PlanState *) nlstate); - /* * initialize child nodes * @@ -311,9 +302,19 @@ ExecInitNestLoop(NestLoop *node, EState *estate, int eflags) innerPlanState(nlstate) = ExecInitNode(innerPlan(node), estate, eflags); /* - * tuple table initialization + * Initialize result slot, type and projection. */ - ExecInitResultTupleSlot(estate, &nlstate->js.ps); + ExecInitResultTupleSlotTL(estate, &nlstate->js.ps); + ExecAssignProjectionInfo(&nlstate->js.ps, NULL); + + /* + * initialize child expressions + */ + nlstate->js.ps.qual = + ExecInitQual(node->join.plan.qual, (PlanState *) nlstate); + nlstate->js.jointype = node->join.jointype; + nlstate->js.joinqual = + ExecInitQual(node->join.joinqual, (PlanState *) nlstate); /* * detect whether we need only consider the first matching inner tuple @@ -338,12 +339,6 @@ ExecInitNestLoop(NestLoop *node, EState *estate, int eflags) (int) node->join.jointype); } - /* - * initialize tuple type and projection info - */ - ExecAssignResultTypeFromTL(&nlstate->js.ps); - ExecAssignProjectionInfo(&nlstate->js.ps, NULL); - /* * finally, wipe the current outer tuple clean. 
*/ diff --git a/src/backend/executor/nodeProjectSet.c b/src/backend/executor/nodeProjectSet.c index 3b79993ade..6d6ed38cee 100644 --- a/src/backend/executor/nodeProjectSet.c +++ b/src/backend/executor/nodeProjectSet.c @@ -243,14 +243,6 @@ ExecInitProjectSet(ProjectSet *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &state->ps); - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &state->ps); - - /* We don't support any qual on ProjectSet nodes */ - Assert(node->plan.qual == NIL); - /* * initialize child nodes */ @@ -262,9 +254,9 @@ ExecInitProjectSet(ProjectSet *node, EState *estate, int eflags) Assert(innerPlan(node) == NULL); /* - * initialize tuple type and projection info + * tuple table and result type initialization */ - ExecAssignResultTypeFromTL(&state->ps); + ExecInitResultTupleSlotTL(estate, &state->ps); /* Create workspace for per-tlist-entry expr state & SRF-is-done state */ state->nelems = list_length(node->plan.targetlist); @@ -301,6 +293,8 @@ ExecInitProjectSet(ProjectSet *node, EState *estate, int eflags) off++; } + /* We don't support any qual on ProjectSet nodes */ + Assert(node->plan.qual == NIL); /* * Create a memory context that ExecMakeFunctionResult can use to evaluate diff --git a/src/backend/executor/nodeRecursiveunion.c b/src/backend/executor/nodeRecursiveunion.c index ba48a69a3b..6b3ea5afb3 100644 --- a/src/backend/executor/nodeRecursiveunion.c +++ b/src/backend/executor/nodeRecursiveunion.c @@ -229,14 +229,13 @@ ExecInitRecursiveUnion(RecursiveUnion *node, EState *estate, int eflags) * RecursiveUnion nodes still have Result slots, which hold pointers to * tuples, so we have to initialize them. */ - ExecInitResultTupleSlot(estate, &rustate->ps); + ExecInitResultTupleSlotTL(estate, &rustate->ps); /* - * Initialize result tuple type and projection info. (Note: we have to - * set up the result type before initializing child nodes, because - * nodeWorktablescan.c expects it to be valid.) + * Initialize result tuple type. (Note: we have to set up the result type + * before initializing child nodes, because nodeWorktablescan.c expects it + * to be valid.) */ - ExecAssignResultTypeFromTL(&rustate->ps); rustate->ps.ps_ProjInfo = NULL; /* diff --git a/src/backend/executor/nodeResult.c b/src/backend/executor/nodeResult.c index 5860d9c1ce..e4418a29bb 100644 --- a/src/backend/executor/nodeResult.c +++ b/src/backend/executor/nodeResult.c @@ -204,19 +204,6 @@ ExecInitResult(Result *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &resstate->ps); - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &resstate->ps); - - /* - * initialize child expressions - */ - resstate->ps.qual = - ExecInitQual(node->plan.qual, (PlanState *) resstate); - resstate->resconstantqual = - ExecInitQual((List *) node->resconstantqual, (PlanState *) resstate); - /* * initialize child nodes */ @@ -228,11 +215,19 @@ ExecInitResult(Result *node, EState *estate, int eflags) Assert(innerPlan(node) == NULL); /* - * initialize tuple type and projection info + * Initialize result slot, type and projection. 
*/ - ExecAssignResultTypeFromTL(&resstate->ps); + ExecInitResultTupleSlotTL(estate, &resstate->ps); ExecAssignProjectionInfo(&resstate->ps, NULL); + /* + * initialize child expressions + */ + resstate->ps.qual = + ExecInitQual(node->plan.qual, (PlanState *) resstate); + resstate->resconstantqual = + ExecInitQual((List *) node->resconstantqual, (PlanState *) resstate); + return resstate; } diff --git a/src/backend/executor/nodeSamplescan.c b/src/backend/executor/nodeSamplescan.c index e88cd18737..872d6e5735 100644 --- a/src/backend/executor/nodeSamplescan.c +++ b/src/backend/executor/nodeSamplescan.c @@ -26,7 +26,6 @@ #include "utils/rel.h" #include "utils/tqual.h" -static void InitScanRelation(SampleScanState *node, EState *estate, int eflags); static TupleTableSlot *SampleNext(SampleScanState *node); static void tablesample_init(SampleScanState *scanstate); static HeapTuple tablesample_getnext(SampleScanState *scanstate); @@ -106,35 +105,6 @@ ExecSampleScan(PlanState *pstate) (ExecScanRecheckMtd) SampleRecheck); } -/* ---------------------------------------------------------------- - * InitScanRelation - * - * Set up to access the scan relation. - * ---------------------------------------------------------------- - */ -static void -InitScanRelation(SampleScanState *node, EState *estate, int eflags) -{ - Relation currentRelation; - - /* - * get the relation object id from the relid'th entry in the range table, - * open that relation and acquire appropriate lock on it. - */ - currentRelation = ExecOpenScanRelation(estate, - ((SampleScan *) node->ss.ps.plan)->scan.scanrelid, - eflags); - - node->ss.ss_currentRelation = currentRelation; - - /* we won't set up the HeapScanDesc till later */ - node->ss.ss_currentScanDesc = NULL; - - /* and report the scan tuple slot's rowtype */ - ExecAssignScanType(&node->ss, RelationGetDescr(currentRelation)); -} - - /* ---------------------------------------------------------------- * ExecInitSampleScan * ---------------------------------------------------------------- @@ -165,31 +135,39 @@ ExecInitSampleScan(SampleScan *node, EState *estate, int eflags) ExecAssignExprContext(estate, &scanstate->ss.ps); /* - * initialize child expressions + * Initialize scan relation. + * + * Get the relation object id from the relid'th entry in the range table, + * open that relation and acquire appropriate lock on it. */ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + scanstate->ss.ss_currentRelation = + ExecOpenScanRelation(estate, + node->scan.scanrelid, + eflags); - scanstate->args = ExecInitExprList(tsc->args, (PlanState *) scanstate); - scanstate->repeatable = - ExecInitExpr(tsc->repeatable, (PlanState *) scanstate); + /* we won't set up the HeapScanDesc till later */ + scanstate->ss.ss_currentScanDesc = NULL; - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); + /* and create slot with appropriate rowtype */ + ExecInitScanTupleSlot(estate, &scanstate->ss, + RelationGetDescr(scanstate->ss.ss_currentRelation)); /* - * initialize scan relation + * Initialize result slot, type and projection. + * tuple table and result tuple initialization */ - InitScanRelation(scanstate, estate, eflags); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); + ExecAssignScanProjectionInfo(&scanstate->ss); /* - * Initialize result tuple type and projection info. 
+ * initialize child expressions */ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); - ExecAssignScanProjectionInfo(&scanstate->ss); + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + + scanstate->args = ExecInitExprList(tsc->args, (PlanState *) scanstate); + scanstate->repeatable = + ExecInitExpr(tsc->repeatable, (PlanState *) scanstate); /* * If we don't have a REPEATABLE clause, select a random seed. We want to diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c index 58631378d5..9db368922a 100644 --- a/src/backend/executor/nodeSeqscan.c +++ b/src/backend/executor/nodeSeqscan.c @@ -32,7 +32,6 @@ #include "executor/nodeSeqscan.h" #include "utils/rel.h" -static void InitScanRelation(SeqScanState *node, EState *estate, int eflags); static TupleTableSlot *SeqNext(SeqScanState *node); /* ---------------------------------------------------------------- @@ -132,31 +131,6 @@ ExecSeqScan(PlanState *pstate) (ExecScanRecheckMtd) SeqRecheck); } -/* ---------------------------------------------------------------- - * InitScanRelation - * - * Set up to access the scan relation. - * ---------------------------------------------------------------- - */ -static void -InitScanRelation(SeqScanState *node, EState *estate, int eflags) -{ - Relation currentRelation; - - /* - * get the relation object id from the relid'th entry in the range table, - * open that relation and acquire appropriate lock on it. - */ - currentRelation = ExecOpenScanRelation(estate, - ((SeqScan *) node->ss.ps.plan)->scanrelid, - eflags); - - node->ss.ss_currentRelation = currentRelation; - - /* and report the scan tuple slot's rowtype */ - ExecAssignScanType(&node->ss, RelationGetDescr(currentRelation)); -} - /* ---------------------------------------------------------------- * ExecInitSeqScan @@ -190,27 +164,31 @@ ExecInitSeqScan(SeqScan *node, EState *estate, int eflags) ExecAssignExprContext(estate, &scanstate->ss.ps); /* - * initialize child expressions + * Initialize scan relation. + * + * Get the relation object id from the relid'th entry in the range table, + * open that relation and acquire appropriate lock on it. */ - scanstate->ss.ps.qual = - ExecInitQual(node->plan.qual, (PlanState *) scanstate); + scanstate->ss.ss_currentRelation = + ExecOpenScanRelation(estate, + node->scanrelid, + eflags); - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); + /* and create slot with the appropriate rowtype */ + ExecInitScanTupleSlot(estate, &scanstate->ss, + RelationGetDescr(scanstate->ss.ss_currentRelation)); /* - * initialize scan relation + * Initialize result slot, type and projection. */ - InitScanRelation(scanstate, estate, eflags); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); + ExecAssignScanProjectionInfo(&scanstate->ss); /* - * Initialize result tuple type and projection info. 
+ * initialize child expressions */ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); - ExecAssignScanProjectionInfo(&scanstate->ss); + scanstate->ss.ps.qual = + ExecInitQual(node->plan.qual, (PlanState *) scanstate); return scanstate; } diff --git a/src/backend/executor/nodeSetOp.c b/src/backend/executor/nodeSetOp.c index eb5449fc3e..3fa4a5fcc6 100644 --- a/src/backend/executor/nodeSetOp.c +++ b/src/backend/executor/nodeSetOp.c @@ -518,11 +518,6 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) "SetOp hash table", ALLOCSET_DEFAULT_SIZES); - /* - * Tuple table initialization - */ - ExecInitResultTupleSlot(estate, &setopstate->ps); - /* * initialize child nodes * @@ -535,10 +530,10 @@ ExecInitSetOp(SetOp *node, EState *estate, int eflags) outerDesc = ExecGetResultType(outerPlanState(setopstate)); /* - * setop nodes do no projections, so initialize projection info for this - * node appropriately + * Initialize result slot and type. Setop nodes do no projections, so + * initialize projection info for this node appropriately. */ - ExecAssignResultTypeFromTL(&setopstate->ps); + ExecInitResultTupleSlotTL(estate, &setopstate->ps); setopstate->ps.ps_ProjInfo = NULL; /* diff --git a/src/backend/executor/nodeSort.c b/src/backend/executor/nodeSort.c index d61c859fce..73f16c9aba 100644 --- a/src/backend/executor/nodeSort.c +++ b/src/backend/executor/nodeSort.c @@ -198,14 +198,6 @@ ExecInitSort(Sort *node, EState *estate, int eflags) * ExecQual or ExecProject. */ - /* - * tuple table initialization - * - * sort nodes only return scan tuples from their sorted relation. - */ - ExecInitResultTupleSlot(estate, &sortstate->ss.ps); - ExecInitScanTupleSlot(estate, &sortstate->ss); - /* * initialize child nodes * @@ -217,11 +209,15 @@ ExecInitSort(Sort *node, EState *estate, int eflags) outerPlanState(sortstate) = ExecInitNode(outerPlan(node), estate, eflags); /* - * initialize tuple type. no need to initialize projection info because + * Initialize scan slot and type. + */ + ExecCreateScanSlotFromOuterPlan(estate, &sortstate->ss); + + /* + * Initialize return slot and type. No need to initialize projection info because * this node doesn't do projections. */ - ExecAssignResultTypeFromTL(&sortstate->ss.ps); - ExecAssignScanTypeFromOuterPlan(&sortstate->ss); + ExecInitResultTupleSlotTL(estate, &sortstate->ss.ps); sortstate->ss.ps.ps_ProjInfo = NULL; SO1_printf("ExecInitSort: %s\n", diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c index 4927e21217..d5411500a2 100644 --- a/src/backend/executor/nodeSubplan.c +++ b/src/backend/executor/nodeSubplan.c @@ -957,8 +957,7 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) * own innerecontext. 
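+ * (Note: ExecInitExtraTupleSlot now takes the slot's tuple descriptor
+ * directly, replacing the separate ExecSetSlotDescriptor() calls.)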
*/ tupDescLeft = ExecTypeFromTL(lefttlist, false); - slot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(slot, tupDescLeft); + slot = ExecInitExtraTupleSlot(estate, tupDescLeft); sstate->projLeft = ExecBuildProjectionInfo(lefttlist, NULL, slot, @@ -966,8 +965,7 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent) NULL); sstate->descRight = tupDescRight = ExecTypeFromTL(righttlist, false); - slot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(slot, tupDescRight); + slot = ExecInitExtraTupleSlot(estate, tupDescRight); sstate->projRight = ExecBuildProjectionInfo(righttlist, sstate->innerecontext, slot, diff --git a/src/backend/executor/nodeSubqueryscan.c b/src/backend/executor/nodeSubqueryscan.c index 715a5b6a84..fa61884785 100644 --- a/src/backend/executor/nodeSubqueryscan.c +++ b/src/backend/executor/nodeSubqueryscan.c @@ -120,35 +120,29 @@ ExecInitSubqueryScan(SubqueryScan *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &subquerystate->ss.ps); - /* - * initialize child expressions - */ - subquerystate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) subquerystate); - - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &subquerystate->ss.ps); - ExecInitScanTupleSlot(estate, &subquerystate->ss); - /* * initialize subquery */ subquerystate->subplan = ExecInitNode(node->subplan, estate, eflags); /* - * Initialize scan tuple type (needed by ExecAssignScanProjectionInfo) + * Initialize scan slot and type (needed by ExecInitResultTupleSlotTL) */ - ExecAssignScanType(&subquerystate->ss, - ExecGetResultType(subquerystate->subplan)); + ExecInitScanTupleSlot(estate, &subquerystate->ss, + ExecGetResultType(subquerystate->subplan)); /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. */ - ExecAssignResultTypeFromTL(&subquerystate->ss.ps); + ExecInitResultTupleSlotTL(estate, &subquerystate->ss.ps); ExecAssignScanProjectionInfo(&subquerystate->ss); + /* + * initialize child expressions + */ + subquerystate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) subquerystate); + return subquerystate; } diff --git a/src/backend/executor/nodeTableFuncscan.c b/src/backend/executor/nodeTableFuncscan.c index 3b609765d4..fed6f2b3a5 100644 --- a/src/backend/executor/nodeTableFuncscan.c +++ b/src/backend/executor/nodeTableFuncscan.c @@ -139,18 +139,6 @@ ExecInitTableFuncScan(TableFuncScan *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &scanstate->ss.ps); - /* - * initialize child expressions - */ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, &scanstate->ss.ps); - - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); - /* * initialize source tuple type */ @@ -158,15 +146,21 @@ ExecInitTableFuncScan(TableFuncScan *node, EState *estate, int eflags) tf->coltypes, tf->coltypmods, tf->colcollations); - - ExecAssignScanType(&scanstate->ss, tupdesc); + /* and the corresponding scan slot */ + ExecInitScanTupleSlot(estate, &scanstate->ss, tupdesc); /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. 
*/ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); ExecAssignScanProjectionInfo(&scanstate->ss); + /* + * initialize child expressions + */ + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, &scanstate->ss.ps); + /* Only XMLTABLE is supported currently */ scanstate->routine = &XmlTableRoutine; diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c index f2737bb7ef..e207b1ffb5 100644 --- a/src/backend/executor/nodeTidscan.c +++ b/src/backend/executor/nodeTidscan.c @@ -530,20 +530,6 @@ ExecInitTidScan(TidScan *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &tidstate->ss.ps); - /* - * initialize child expressions - */ - tidstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) tidstate); - - TidExprListCreate(tidstate); - - /* - * tuple table initialization - */ - ExecInitResultTupleSlot(estate, &tidstate->ss.ps); - ExecInitScanTupleSlot(estate, &tidstate->ss); - /* * mark tid list as not computed yet */ @@ -562,14 +548,23 @@ ExecInitTidScan(TidScan *node, EState *estate, int eflags) /* * get the scan type from the relation descriptor. */ - ExecAssignScanType(&tidstate->ss, RelationGetDescr(currentRelation)); + ExecInitScanTupleSlot(estate, &tidstate->ss, + RelationGetDescr(currentRelation)); /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. */ - ExecAssignResultTypeFromTL(&tidstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &tidstate->ss.ps); ExecAssignScanProjectionInfo(&tidstate->ss); + /* + * initialize child expressions + */ + tidstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) tidstate); + + TidExprListCreate(tidstate); + /* * all done. */ diff --git a/src/backend/executor/nodeUnique.c b/src/backend/executor/nodeUnique.c index 9f823c58e1..05d65330a0 100644 --- a/src/backend/executor/nodeUnique.c +++ b/src/backend/executor/nodeUnique.c @@ -132,21 +132,16 @@ ExecInitUnique(Unique *node, EState *estate, int eflags) */ ExecAssignExprContext(estate, &uniquestate->ps); - /* - * Tuple table initialization - */ - ExecInitResultTupleSlot(estate, &uniquestate->ps); - /* * then initialize outer plan */ outerPlanState(uniquestate) = ExecInitNode(outerPlan(node), estate, eflags); /* - * unique nodes do no projections, so initialize projection info for this - * node appropriately + * Initialize result slot and type. Unique nodes do no projections, so + * initialize projection info for this node appropriately. */ - ExecAssignResultTypeFromTL(&uniquestate->ps); + ExecInitResultTupleSlotTL(estate, &uniquestate->ps); uniquestate->ps.ps_ProjInfo = NULL; /* diff --git a/src/backend/executor/nodeValuesscan.c b/src/backend/executor/nodeValuesscan.c index c3d78b6295..63b7e7ef5b 100644 --- a/src/backend/executor/nodeValuesscan.c +++ b/src/backend/executor/nodeValuesscan.c @@ -248,23 +248,22 @@ ExecInitValuesScan(ValuesScan *node, EState *estate, int eflags) ExecAssignExprContext(estate, planstate); /* - * tuple table initialization + * Get info about values list, initialize scan slot with it. */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); + tupdesc = ExecTypeFromExprList((List *) linitial(node->values_lists)); + ExecInitScanTupleSlot(estate, &scanstate->ss, tupdesc); /* - * initialize child expressions + * Initialize result slot, type and projection. 
*/ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); + ExecAssignScanProjectionInfo(&scanstate->ss); /* - * get info about values list + * initialize child expressions */ - tupdesc = ExecTypeFromExprList((List *) linitial(node->values_lists)); - - ExecAssignScanType(&scanstate->ss, tupdesc); + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); /* * Other node-specific setup @@ -281,12 +280,6 @@ ExecInitValuesScan(ValuesScan *node, EState *estate, int eflags) scanstate->exprlists[i++] = (List *) lfirst(vtl); } - /* - * Initialize result tuple type and projection info. - */ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); - ExecAssignScanProjectionInfo(&scanstate->ss); - return scanstate; } diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index 1c807a8292..a56c3e89fd 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -2287,30 +2287,6 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) "WindowAgg Aggregates", ALLOCSET_DEFAULT_SIZES); - /* - * tuple table initialization - */ - ExecInitScanTupleSlot(estate, &winstate->ss); - ExecInitResultTupleSlot(estate, &winstate->ss.ps); - winstate->first_part_slot = ExecInitExtraTupleSlot(estate); - winstate->agg_row_slot = ExecInitExtraTupleSlot(estate); - winstate->temp_slot_1 = ExecInitExtraTupleSlot(estate); - winstate->temp_slot_2 = ExecInitExtraTupleSlot(estate); - - /* - * create frame head and tail slots only if needed (must match logic in - * update_frameheadpos and update_frametailpos) - */ - winstate->framehead_slot = winstate->frametail_slot = NULL; - - if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS)) - { - if (!(frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)) - winstate->framehead_slot = ExecInitExtraTupleSlot(estate); - if (!(frameOptions & FRAMEOPTION_END_UNBOUNDED_FOLLOWING)) - winstate->frametail_slot = ExecInitExtraTupleSlot(estate); - } - /* * WindowAgg nodes never have quals, since they can only occur at the * logical top level of a query (ie, after any WHERE or HAVING filters) @@ -2328,28 +2304,35 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags) * initialize source tuple type (which is also the tuple type that we'll * store in the tuplestore and use in all our working slots). 
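+ * (The scan slot is now created here, from the outer plan's result type;
+ * the extra working slots set up below share that same descriptor.)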
*/ - ExecAssignScanTypeFromOuterPlan(&winstate->ss); + ExecCreateScanSlotFromOuterPlan(estate, &winstate->ss); scanDesc = winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor; - ExecSetSlotDescriptor(winstate->first_part_slot, - winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); - ExecSetSlotDescriptor(winstate->agg_row_slot, - winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); - ExecSetSlotDescriptor(winstate->temp_slot_1, - winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); - ExecSetSlotDescriptor(winstate->temp_slot_2, - winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); - if (winstate->framehead_slot) - ExecSetSlotDescriptor(winstate->framehead_slot, - winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); - if (winstate->frametail_slot) - ExecSetSlotDescriptor(winstate->frametail_slot, - winstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor); + /* + * tuple table initialization + */ + winstate->first_part_slot = ExecInitExtraTupleSlot(estate, scanDesc); + winstate->agg_row_slot = ExecInitExtraTupleSlot(estate, scanDesc); + winstate->temp_slot_1 = ExecInitExtraTupleSlot(estate, scanDesc); + winstate->temp_slot_2 = ExecInitExtraTupleSlot(estate, scanDesc); + + /* + * create frame head and tail slots only if needed (must match logic in + * update_frameheadpos and update_frametailpos) + */ + winstate->framehead_slot = winstate->frametail_slot = NULL; + + if (frameOptions & (FRAMEOPTION_RANGE | FRAMEOPTION_GROUPS)) + { + if (!(frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING)) + winstate->framehead_slot = ExecInitExtraTupleSlot(estate, scanDesc); + if (!(frameOptions & FRAMEOPTION_END_UNBOUNDED_FOLLOWING)) + winstate->frametail_slot = ExecInitExtraTupleSlot(estate, scanDesc); + } /* - * Initialize result tuple type and projection info. + * Initialize result slot, type and projection. */ - ExecAssignResultTypeFromTL(&winstate->ss.ps); + ExecInitResultTupleSlotTL(estate, &winstate->ss.ps); ExecAssignProjectionInfo(&winstate->ss.ps, NULL); /* Set up data for comparing tuples */ diff --git a/src/backend/executor/nodeWorktablescan.c b/src/backend/executor/nodeWorktablescan.c index 66d2111bd9..2ff9a215b1 100644 --- a/src/backend/executor/nodeWorktablescan.c +++ b/src/backend/executor/nodeWorktablescan.c @@ -157,21 +157,21 @@ ExecInitWorkTableScan(WorkTableScan *node, EState *estate, int eflags) ExecAssignExprContext(estate, &scanstate->ss.ps); /* - * initialize child expressions + * tuple table initialization */ - scanstate->ss.ps.qual = - ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); + ExecInitResultTupleSlotTL(estate, &scanstate->ss.ps); + ExecInitScanTupleSlot(estate, &scanstate->ss, NULL); /* - * tuple table initialization + * initialize child expressions */ - ExecInitResultTupleSlot(estate, &scanstate->ss.ps); - ExecInitScanTupleSlot(estate, &scanstate->ss); + scanstate->ss.ps.qual = + ExecInitQual(node->scan.plan.qual, (PlanState *) scanstate); /* - * Initialize result tuple type, but not yet projection info. + * Do not yet initialize projection info, see ExecWorkTableScan() for + * details. 
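+ * (For the same reason, the scan slot above was created with a NULL
+ * descriptor: the worktable's actual rowtype is only known at runtime.)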
*/ - ExecAssignResultTypeFromTL(&scanstate->ss.ps); return scanstate; } diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c index 83c69092ae..04985c9f91 100644 --- a/src/backend/replication/logical/worker.c +++ b/src/backend/replication/logical/worker.c @@ -208,7 +208,7 @@ create_estate_for_relation(LogicalRepRelMapEntry *rel) /* Triggers might need a slot */ if (resultRelInfo->ri_TrigDesc) - estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate); + estate->es_trig_tuple_slot = ExecInitExtraTupleSlot(estate, NULL); /* Prepare to catch AFTER triggers. */ AfterTriggerBeginQuery(); @@ -585,8 +585,8 @@ apply_handle_insert(StringInfo s) /* Initialize the executor state. */ estate = create_estate_for_relation(rel); - remoteslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(remoteslot, RelationGetDescr(rel->localrel)); + remoteslot = ExecInitExtraTupleSlot(estate, + RelationGetDescr(rel->localrel)); /* Process and store remote tuple in the slot */ oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate)); @@ -689,10 +689,10 @@ apply_handle_update(StringInfo s) /* Initialize the executor state. */ estate = create_estate_for_relation(rel); - remoteslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(remoteslot, RelationGetDescr(rel->localrel)); - localslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(localslot, RelationGetDescr(rel->localrel)); + remoteslot = ExecInitExtraTupleSlot(estate, + RelationGetDescr(rel->localrel)); + localslot = ExecInitExtraTupleSlot(estate, + RelationGetDescr(rel->localrel)); EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1); PushActiveSnapshot(GetTransactionSnapshot()); @@ -807,10 +807,10 @@ apply_handle_delete(StringInfo s) /* Initialize the executor state. 
*/ estate = create_estate_for_relation(rel); - remoteslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(remoteslot, RelationGetDescr(rel->localrel)); - localslot = ExecInitExtraTupleSlot(estate); - ExecSetSlotDescriptor(localslot, RelationGetDescr(rel->localrel)); + remoteslot = ExecInitExtraTupleSlot(estate, + RelationGetDescr(rel->localrel)); + localslot = ExecInitExtraTupleSlot(estate, + RelationGetDescr(rel->localrel)); EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1); PushActiveSnapshot(GetTransactionSnapshot()); diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h index 621e7c3dc4..45a077a949 100644 --- a/src/include/executor/executor.h +++ b/src/include/executor/executor.h @@ -431,9 +431,10 @@ extern void ExecScanReScan(ScanState *node); /* * prototypes from functions in execTuples.c */ -extern void ExecInitResultTupleSlot(EState *estate, PlanState *planstate); -extern void ExecInitScanTupleSlot(EState *estate, ScanState *scanstate); -extern TupleTableSlot *ExecInitExtraTupleSlot(EState *estate); +extern void ExecInitResultTupleSlotTL(EState *estate, PlanState *planstate); +extern void ExecInitScanTupleSlot(EState *estate, ScanState *scanstate, TupleDesc tupleDesc); +extern TupleTableSlot *ExecInitExtraTupleSlot(EState *estate, + TupleDesc tupleDesc); extern TupleTableSlot *ExecInitNullTupleSlot(EState *estate, TupleDesc tupType); extern TupleDesc ExecTypeFromTL(List *targetList, bool hasoid); @@ -502,8 +503,6 @@ extern ExprContext *MakePerTupleExprContext(EState *estate); } while (0) extern void ExecAssignExprContext(EState *estate, PlanState *planstate); -extern void ExecAssignResultType(PlanState *planstate, TupleDesc tupDesc); -extern void ExecAssignResultTypeFromTL(PlanState *planstate); extern TupleDesc ExecGetResultType(PlanState *planstate); extern void ExecAssignProjectionInfo(PlanState *planstate, TupleDesc inputDesc); @@ -511,7 +510,7 @@ extern void ExecConditionalAssignProjectionInfo(PlanState *planstate, TupleDesc inputDesc, Index varno); extern void ExecFreeExprContext(PlanState *planstate); extern void ExecAssignScanType(ScanState *scanstate, TupleDesc tupDesc); -extern void ExecAssignScanTypeFromOuterPlan(ScanState *scanstate); +extern void ExecCreateScanSlotFromOuterPlan(EState *estate, ScanState *scanstate); extern bool ExecRelationIsTargetRelation(EState *estate, Index scanrelid); diff --git a/src/include/executor/tuptable.h b/src/include/executor/tuptable.h index 5b54834d33..8be0d5edc2 100644 --- a/src/include/executor/tuptable.h +++ b/src/include/executor/tuptable.h @@ -127,6 +127,7 @@ typedef struct TupleTableSlot MinimalTuple tts_mintuple; /* minimal tuple, or NULL if none */ HeapTupleData tts_minhdr; /* workspace for minimal-tuple-only case */ long tts_off; /* saved state for slot_deform_tuple */ + bool tts_fixedTupleDescriptor; /* descriptor can't be changed */ } TupleTableSlot; #define TTS_HAS_PHYSICAL_TUPLE(slot) \ @@ -139,8 +140,8 @@ typedef struct TupleTableSlot ((slot) == NULL || (slot)->tts_isempty) /* in executor/execTuples.c */ -extern TupleTableSlot *MakeTupleTableSlot(void); -extern TupleTableSlot *ExecAllocTableSlot(List **tupleTable); +extern TupleTableSlot *MakeTupleTableSlot(TupleDesc desc); +extern TupleTableSlot *ExecAllocTableSlot(List **tupleTable, TupleDesc desc); extern void ExecResetTupleTable(List *tupleTable, bool shouldFree); extern TupleTableSlot *MakeSingleTupleTableSlot(TupleDesc tupdesc); extern void ExecDropSingleTupleTableSlot(TupleTableSlot *slot); From 
cef60043dd27c47a1a4a220158836ccff20be07a Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Tue, 13 Feb 2018 19:47:16 -0300 Subject: [PATCH 1010/1087] Mention trigger name in trigger test This makes it more explicit exactly what is going on, for further proposed behavior changes. Discussion: https://postgr.es/m/20180214212624.hm7of76flesodamf@alvherre.pgsql --- src/test/regress/expected/triggers.out | 42 +++++++++++++------------- src/test/regress/sql/triggers.sql | 2 +- 2 files changed, 22 insertions(+), 22 deletions(-) diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index 98db323337..e7b4b31afc 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -1865,7 +1865,7 @@ create table parted2_stmt_trig1 partition of parted2_stmt_trig for values in (1) create table parted2_stmt_trig2 partition of parted2_stmt_trig for values in (2); create or replace function trigger_notice() returns trigger as $$ begin - raise notice 'trigger on % % % for %', TG_TABLE_NAME, TG_WHEN, TG_OP, TG_LEVEL; + raise notice 'trigger % on % % % for %', TG_NAME, TG_TABLE_NAME, TG_WHEN, TG_OP, TG_LEVEL; if TG_LEVEL = 'ROW' then return NEW; end if; @@ -1910,12 +1910,12 @@ create trigger trig_del_after after delete on parted2_stmt_trig with ins (a) as ( insert into parted2_stmt_trig values (1), (2) returning a ) insert into parted_stmt_trig select a from ins returning tableoid::regclass, a; -NOTICE: trigger on parted_stmt_trig BEFORE INSERT for STATEMENT -NOTICE: trigger on parted2_stmt_trig BEFORE INSERT for STATEMENT -NOTICE: trigger on parted_stmt_trig1 BEFORE INSERT for ROW -NOTICE: trigger on parted_stmt_trig1 AFTER INSERT for ROW -NOTICE: trigger on parted2_stmt_trig AFTER INSERT for STATEMENT -NOTICE: trigger on parted_stmt_trig AFTER INSERT for STATEMENT +NOTICE: trigger trig_ins_before on parted_stmt_trig BEFORE INSERT for STATEMENT +NOTICE: trigger trig_ins_before on parted2_stmt_trig BEFORE INSERT for STATEMENT +NOTICE: trigger trig_ins_before on parted_stmt_trig1 BEFORE INSERT for ROW +NOTICE: trigger trig_ins_after on parted_stmt_trig1 AFTER INSERT for ROW +NOTICE: trigger trig_ins_after on parted2_stmt_trig AFTER INSERT for STATEMENT +NOTICE: trigger trig_ins_after on parted_stmt_trig AFTER INSERT for STATEMENT tableoid | a -------------------+--- parted_stmt_trig1 | 1 @@ -1925,25 +1925,25 @@ NOTICE: trigger on parted_stmt_trig AFTER INSERT for STATEMENT with upd as ( update parted2_stmt_trig set a = a ) update parted_stmt_trig set a = a; -NOTICE: trigger on parted_stmt_trig BEFORE UPDATE for STATEMENT -NOTICE: trigger on parted_stmt_trig1 BEFORE UPDATE for ROW -NOTICE: trigger on parted2_stmt_trig BEFORE UPDATE for STATEMENT -NOTICE: trigger on parted_stmt_trig1 AFTER UPDATE for ROW -NOTICE: trigger on parted_stmt_trig AFTER UPDATE for STATEMENT -NOTICE: trigger on parted2_stmt_trig AFTER UPDATE for STATEMENT +NOTICE: trigger trig_upd_before on parted_stmt_trig BEFORE UPDATE for STATEMENT +NOTICE: trigger trig_upd_before on parted_stmt_trig1 BEFORE UPDATE for ROW +NOTICE: trigger trig_upd_before on parted2_stmt_trig BEFORE UPDATE for STATEMENT +NOTICE: trigger trig_upd_after on parted_stmt_trig1 AFTER UPDATE for ROW +NOTICE: trigger trig_upd_after on parted_stmt_trig AFTER UPDATE for STATEMENT +NOTICE: trigger trig_upd_after on parted2_stmt_trig AFTER UPDATE for STATEMENT delete from parted_stmt_trig; -NOTICE: trigger on parted_stmt_trig BEFORE DELETE for STATEMENT -NOTICE: trigger on parted_stmt_trig AFTER DELETE for 
STATEMENT +NOTICE: trigger trig_del_before on parted_stmt_trig BEFORE DELETE for STATEMENT +NOTICE: trigger trig_del_after on parted_stmt_trig AFTER DELETE for STATEMENT -- insert via copy on the parent copy parted_stmt_trig(a) from stdin; -NOTICE: trigger on parted_stmt_trig BEFORE INSERT for STATEMENT -NOTICE: trigger on parted_stmt_trig1 BEFORE INSERT for ROW -NOTICE: trigger on parted_stmt_trig1 AFTER INSERT for ROW -NOTICE: trigger on parted_stmt_trig AFTER INSERT for STATEMENT +NOTICE: trigger trig_ins_before on parted_stmt_trig BEFORE INSERT for STATEMENT +NOTICE: trigger trig_ins_before on parted_stmt_trig1 BEFORE INSERT for ROW +NOTICE: trigger trig_ins_after on parted_stmt_trig1 AFTER INSERT for ROW +NOTICE: trigger trig_ins_after on parted_stmt_trig AFTER INSERT for STATEMENT -- insert via copy on the first partition copy parted_stmt_trig1(a) from stdin; -NOTICE: trigger on parted_stmt_trig1 BEFORE INSERT for ROW -NOTICE: trigger on parted_stmt_trig1 AFTER INSERT for ROW +NOTICE: trigger trig_ins_before on parted_stmt_trig1 BEFORE INSERT for ROW +NOTICE: trigger trig_ins_after on parted_stmt_trig1 AFTER INSERT for ROW drop table parted_stmt_trig, parted2_stmt_trig; -- -- Test the interaction between transition tables and both kinds of diff --git a/src/test/regress/sql/triggers.sql b/src/test/regress/sql/triggers.sql index dba9bdd98b..ae8349ccbf 100644 --- a/src/test/regress/sql/triggers.sql +++ b/src/test/regress/sql/triggers.sql @@ -1317,7 +1317,7 @@ create table parted2_stmt_trig2 partition of parted2_stmt_trig for values in (2) create or replace function trigger_notice() returns trigger as $$ begin - raise notice 'trigger on % % % for %', TG_TABLE_NAME, TG_WHEN, TG_OP, TG_LEVEL; + raise notice 'trigger % on % % % for %', TG_NAME, TG_TABLE_NAME, TG_WHEN, TG_OP, TG_LEVEL; if TG_LEVEL = 'ROW' then return NEW; end if; From a26116c6cbf4117e8efaa7cfc5bacc887f01517f Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Sat, 17 Feb 2018 19:02:15 -0300 Subject: [PATCH 1011/1087] Refactor format_type APIs to be more modular Introduce a new format_type_extended, with a flags bitmask argument that can modify the default behavior. A few compatibility and readability wrappers remain: format_type_be format_type_be_qualified format_type_with_typemod while format_type_with_typemod_qualified, which had a single caller, is removed. 
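For illustration, the behavior of the removed wrapper is now spelled as a
flag combination at its former call site, mirroring what deparse_type_name()
does in the deparse.c hunk below. A minimal sketch using the new entry point
(the flag and function names are the ones this patch introduces; the local
variable is illustrative only):

    /* what format_type_with_typemod_qualified(type_oid, typemod) returned */
    char *name = format_type_extended(type_oid, typemod,
                                      FORMAT_TYPE_TYPEMOD_GIVEN |
                                      FORMAT_TYPE_FORCE_QUALIFY);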
Author: Michael Paquier, some revisions by me Discussion: 20180213035107.GA2915@paquier.xyz --- contrib/postgres_fdw/deparse.c | 10 ++- src/backend/utils/adt/format_type.c | 134 ++++++++++++++-------------- src/include/utils/builtins.h | 9 +- 3 files changed, 81 insertions(+), 72 deletions(-) diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c index f4b38c65ac..02894a7e35 100644 --- a/contrib/postgres_fdw/deparse.c +++ b/contrib/postgres_fdw/deparse.c @@ -854,10 +854,12 @@ foreign_expr_walker(Node *node, static char * deparse_type_name(Oid type_oid, int32 typemod) { - if (is_builtin(type_oid)) - return format_type_with_typemod(type_oid, typemod); - else - return format_type_with_typemod_qualified(type_oid, typemod); + uint8 flags = FORMAT_TYPE_TYPEMOD_GIVEN; + + if (!is_builtin(type_oid)) + flags |= FORMAT_TYPE_FORCE_QUALIFY; + + return format_type_extended(type_oid, typemod, flags); } /* diff --git a/src/backend/utils/adt/format_type.c b/src/backend/utils/adt/format_type.c index de3da3607a..872574fdd5 100644 --- a/src/backend/utils/adt/format_type.c +++ b/src/backend/utils/adt/format_type.c @@ -28,9 +28,6 @@ #define MAX_INT32_LEN 11 -static char *format_type_internal(Oid type_oid, int32 typemod, - bool typemod_given, bool allow_invalid, - bool force_qualify); static char *printTypmod(const char *typname, int32 typmod, Oid typmodout); @@ -72,81 +69,52 @@ format_type(PG_FUNCTION_ARGS) PG_RETURN_NULL(); type_oid = PG_GETARG_OID(0); + typemod = PG_ARGISNULL(1) ? -1 : PG_GETARG_INT32(1); - if (PG_ARGISNULL(1)) - result = format_type_internal(type_oid, -1, false, true, false); - else - { - typemod = PG_GETARG_INT32(1); - result = format_type_internal(type_oid, typemod, true, true, false); - } + result = format_type_extended(type_oid, typemod, + FORMAT_TYPE_TYPEMOD_GIVEN | + FORMAT_TYPE_ALLOW_INVALID); PG_RETURN_TEXT_P(cstring_to_text(result)); } /* - * This version is for use within the backend in error messages, etc. - * One difference is that it will fail for an invalid type. + * format_type_extended + * Generate a possibly-qualified type name. * - * The result is always a palloc'd string. - */ -char * -format_type_be(Oid type_oid) -{ - return format_type_internal(type_oid, -1, false, false, false); -} - -/* - * This version returns a name that is always qualified (unless it's one - * of the SQL-keyword type names, such as TIMESTAMP WITH TIME ZONE). - */ -char * -format_type_be_qualified(Oid type_oid) -{ - return format_type_internal(type_oid, -1, false, false, true); -} - -/* - * This version allows a nondefault typemod to be specified. - */ -char * -format_type_with_typemod(Oid type_oid, int32 typemod) -{ - return format_type_internal(type_oid, typemod, true, false, false); -} - -/* - * This version allows a nondefault typemod to be specified, and forces - * qualification of normal type names. + * The default is to only qualify if the type is not in the search path, to + * ignore the given typmod, and to raise an error if a non-existent type_oid is + * given. + * + * The following bits in 'flags' modify the behavior: + * - FORMAT_TYPE_TYPEMOD_GIVEN + * consider the given typmod in the output (may be -1 to request + * the default behavior) + * + * - FORMAT_TYPE_ALLOW_INVALID + * if the type OID is invalid or unknown, return ??? 
or such instead + * of failing + * + * - FORMAT_TYPE_FORCE_QUALIFY + * always schema-qualify type names, regardless of search_path */ char * -format_type_with_typemod_qualified(Oid type_oid, int32 typemod) +format_type_extended(Oid type_oid, int32 typemod, bits16 flags) { - return format_type_internal(type_oid, typemod, true, false, true); -} - -/* - * Common workhorse. - */ -static char * -format_type_internal(Oid type_oid, int32 typemod, - bool typemod_given, bool allow_invalid, - bool force_qualify) -{ - bool with_typemod = typemod_given && (typemod >= 0); HeapTuple tuple; Form_pg_type typeform; Oid array_base_type; bool is_array; char *buf; + bool with_typemod; - if (type_oid == InvalidOid && allow_invalid) + if (type_oid == InvalidOid && (flags & FORMAT_TYPE_ALLOW_INVALID) != 0) return pstrdup("-"); tuple = SearchSysCache1(TYPEOID, ObjectIdGetDatum(type_oid)); if (!HeapTupleIsValid(tuple)) { - if (allow_invalid) + if ((flags & FORMAT_TYPE_ALLOW_INVALID) != 0) return pstrdup("???"); else elog(ERROR, "cache lookup failed for type %u", type_oid); @@ -162,15 +130,14 @@ format_type_internal(Oid type_oid, int32 typemod, */ array_base_type = typeform->typelem; - if (array_base_type != InvalidOid && - typeform->typstorage != 'p') + if (array_base_type != InvalidOid && typeform->typstorage != 'p') { /* Switch our attention to the array element type */ ReleaseSysCache(tuple); tuple = SearchSysCache1(TYPEOID, ObjectIdGetDatum(array_base_type)); if (!HeapTupleIsValid(tuple)) { - if (allow_invalid) + if ((flags & FORMAT_TYPE_ALLOW_INVALID) != 0) return pstrdup("???[]"); else elog(ERROR, "cache lookup failed for type %u", type_oid); @@ -182,6 +149,8 @@ format_type_internal(Oid type_oid, int32 typemod, else is_array = false; + with_typemod = (flags & FORMAT_TYPE_TYPEMOD_GIVEN) != 0 && (typemod >= 0); + /* * See if we want to special-case the output for certain built-in types. * Note that these special cases should all correspond to special @@ -200,7 +169,7 @@ format_type_internal(Oid type_oid, int32 typemod, case BITOID: if (with_typemod) buf = printTypmod("bit", typemod, typeform->typmodout); - else if (typemod_given) + else if ((flags & FORMAT_TYPE_TYPEMOD_GIVEN) != 0) { /* * bit with typmod -1 is not the same as BIT, which means @@ -219,7 +188,7 @@ format_type_internal(Oid type_oid, int32 typemod, case BPCHAROID: if (with_typemod) buf = printTypmod("character", typemod, typeform->typmodout); - else if (typemod_given) + else if ((flags & FORMAT_TYPE_TYPEMOD_GIVEN) != 0) { /* * bpchar with typmod -1 is not the same as CHARACTER, which @@ -313,13 +282,14 @@ format_type_internal(Oid type_oid, int32 typemod, /* * Default handling: report the name as it appears in the catalog. * Here, we must qualify the name if it is not visible in the search - * path, and we must double-quote it if it's not a standard identifier - * or if it matches any keyword. + * path or if caller requests it; and we must double-quote it if it's + * not a standard identifier or if it matches any keyword. */ char *nspname; char *typname; - if (!force_qualify && TypeIsVisible(type_oid)) + if ((flags & FORMAT_TYPE_FORCE_QUALIFY) == 0 && + TypeIsVisible(type_oid)) nspname = NULL; else nspname = get_namespace_name_or_temp(typeform->typnamespace); @@ -340,6 +310,36 @@ format_type_internal(Oid type_oid, int32 typemod, return buf; } +/* + * This version is for use within the backend in error messages, etc. + * One difference is that it will fail for an invalid type. + * + * The result is always a palloc'd string. 
+ */ +char * +format_type_be(Oid type_oid) +{ + return format_type_extended(type_oid, -1, 0); +} + +/* + * This version returns a name that is always qualified (unless it's one + * of the SQL-keyword type names, such as TIMESTAMP WITH TIME ZONE). + */ +char * +format_type_be_qualified(Oid type_oid) +{ + return format_type_extended(type_oid, -1, FORMAT_TYPE_FORCE_QUALIFY); +} + +/* + * This version allows a nondefault typemod to be specified. + */ +char * +format_type_with_typemod(Oid type_oid, int32 typemod) +{ + return format_type_extended(type_oid, typemod, FORMAT_TYPE_TYPEMOD_GIVEN); +} /* * Add typmod decoration to the basic type name @@ -437,8 +437,8 @@ oidvectortypes(PG_FUNCTION_ARGS) for (num = 0; num < numargs; num++) { - char *typename = format_type_internal(oidArray->values[num], -1, - false, true, false); + char *typename = format_type_extended(oidArray->values[num], -1, + FORMAT_TYPE_ALLOW_INVALID); size_t slen = strlen(typename); if (left < (slen + 2)) diff --git a/src/include/utils/builtins.h b/src/include/utils/builtins.h index 8bb57c5829..3e462f1a9c 100644 --- a/src/include/utils/builtins.h +++ b/src/include/utils/builtins.h @@ -112,10 +112,17 @@ extern void clean_ipv6_addr(int addr_family, char *addr); extern Datum numeric_float8_no_overflow(PG_FUNCTION_ARGS); /* format_type.c */ + +/* Control flags for format_type_extended */ +#define FORMAT_TYPE_TYPEMOD_GIVEN 0x01 /* typemod defined by caller */ +#define FORMAT_TYPE_ALLOW_INVALID 0x02 /* allow invalid types */ +#define FORMAT_TYPE_FORCE_QUALIFY 0x04 /* force qualification of type */ +extern char *format_type_extended(Oid type_oid, int32 typemod, bits16 flags); + extern char *format_type_be(Oid type_oid); extern char *format_type_be_qualified(Oid type_oid); extern char *format_type_with_typemod(Oid type_oid, int32 typemod); -extern char *format_type_with_typemod_qualified(Oid type_oid, int32 typemod); + extern int32 type_maximum_size(Oid type_oid, int32 typemod); /* quote.c */ From 7923118c16aa3408a994f297d8bdd68292f45324 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 17 Feb 2018 20:45:02 -0500 Subject: [PATCH 1012/1087] Minor comment fix --- src/backend/access/transam/xact.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index ea81f4b5de..dbaaf8e005 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -219,7 +219,7 @@ static TransactionStateData TopTransactionStateData = { false, /* entry-time xact r/o state */ false, /* startedInRecovery */ false, /* didLogXid */ - 0, /* parallelMode */ + 0, /* parallelModeLevel */ NULL /* link to parent state block */ }; From 1a1adb215c69bbf64fd8e01cc1706812dc8ba15b Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 17 Feb 2018 20:45:28 -0500 Subject: [PATCH 1013/1087] Move function comment to the right place --- src/backend/commands/tablecmds.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 89454d8e80..87539d6c0b 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -5879,9 +5879,6 @@ ATExecDropNotNull(Relation rel, const char *colName, LOCKMODE lockmode) /* * ALTER TABLE ALTER COLUMN SET NOT NULL - * - * Return the address of the modified column. If the column was already NOT - * NULL, InvalidObjectAddress is returned. 
*/ static void @@ -5904,6 +5901,10 @@ ATPrepSetNotNull(Relation rel, bool recurse, bool recursing) } } +/* + * Return the address of the modified column. If the column was already NOT + * NULL, InvalidObjectAddress is returned. + */ static ObjectAddress ATExecSetNotNull(AlteredTableInfo *tab, Relation rel, const char *colName, LOCKMODE lockmode) From 97a804cb2bba49d5ff04795cf500722977e5af9a Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 18 Feb 2018 17:16:11 -0500 Subject: [PATCH 1014/1087] Message style fix --- src/backend/commands/dbcommands.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c index d2020d07cf..d1718f04ee 100644 --- a/src/backend/commands/dbcommands.c +++ b/src/backend/commands/dbcommands.c @@ -857,8 +857,8 @@ dropdb(const char *dbname, bool missing_ok) (errcode(ERRCODE_OBJECT_IN_USE), errmsg("database \"%s\" is used by an active logical replication slot", dbname), - errdetail_plural("There is %d active slot", - "There are %d active slots", + errdetail_plural("There is %d active slot.", + "There are %d active slots.", nslots_active, nslots_active))); } From 2e1d1ebdffa2c69779573c2e561056cd08541e74 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 18 Feb 2018 22:20:27 -0500 Subject: [PATCH 1015/1087] Remove redundant function declaration --- src/backend/catalog/partition.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 31c80c7f1a..4dddfcc014 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -189,9 +189,6 @@ static int get_partition_bound_num_indexes(PartitionBoundInfo b); static int get_greatest_modulus(PartitionBoundInfo b); static uint64 compute_hash_value(PartitionKey key, Datum *values, bool *isnull); -/* SQL-callable function for use in hash partition CHECK constraints */ -PG_FUNCTION_INFO_V1(satisfies_hash_partition); - /* * RelationBuildPartitionDesc * Form rel's partition descriptor From ebf6049ebea19e4123fefce7b542189e84084cd1 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sun, 18 Feb 2018 22:20:54 -0500 Subject: [PATCH 1016/1087] Fix StaticAssertExpr() under C++ The previous code didn't compile, because static_assert() must end with a semicolon. To fix, wrap it in a block, similar to the C code. --- src/include/c.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/include/c.h b/src/include/c.h index 6b55181e0a..37c0c39199 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -791,7 +791,7 @@ extern void ExceptionalCondition(const char *conditionName, #define StaticAssertStmt(condition, errmessage) \ static_assert(condition, errmessage) #define StaticAssertExpr(condition, errmessage) \ - StaticAssertStmt(condition, errmessage) + ({ static_assert(condition, errmessage); }) #else #define StaticAssertStmt(condition, errmessage) \ do { struct static_assert_struct { int static_assert_failure : (condition) ? 1 : -1; }; } while(0) From 8c44802b6ed4846accb08e2ffe93040b8b42aae9 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 18 Feb 2018 23:32:56 -0500 Subject: [PATCH 1017/1087] Remove redundant initialization of a local variable. In what was doubtless a typo, commit bf6c614a2 introduced a duplicate initialization of a local variable. This made Coverity unhappy, as well as pretty much anybody reading the code. We don't even have a real use for the local variable, so just remove it. 
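The pattern at issue, reduced to a standalone sketch (hypothetical struct
and values, not the executor code itself): the nested assignment stores the
value into the variable, after which the initialization stores the same
value again, so the construct was redundant rather than wrong -- but it
reads like a typo and trips analyzers.

    #include <stdio.h>

    struct Group { int grpColIdx; };

    int
    main(void)
    {
        struct Group node = { 7 };

        /*
         * The inner assignment writes 7 into idx; the outer initialization
         * then writes 7 again -- a harmless but confusing duplicate store.
         */
        int idx = idx = node.grpColIdx;

        printf("%d\n", idx);    /* prints 7 */
        return 0;
    }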
--- src/backend/executor/nodeGroup.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c index c6efd64d00..2ea80e817d 100644 --- a/src/backend/executor/nodeGroup.c +++ b/src/backend/executor/nodeGroup.c @@ -162,7 +162,6 @@ GroupState * ExecInitGroup(Group *node, EState *estate, int eflags) { GroupState *grpstate; - AttrNumber *grpColIdx = grpColIdx = node->grpColIdx; /* check for unsupported flags */ Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -209,7 +208,7 @@ ExecInitGroup(Group *node, EState *estate, int eflags) grpstate->eqfunction = execTuplesMatchPrepare(ExecGetResultType(outerPlanState(grpstate)), node->numCols, - grpColIdx, + node->grpColIdx, node->grpOperators, &grpstate->ss.ps); From 524d64ea8e3e49b4fda41ff9b2f048b697384058 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 19 Feb 2018 12:07:44 -0500 Subject: [PATCH 1018/1087] Remove bogus "extern" annotations on function definitions. While this is not illegal C, project style is to put "extern" only on declarations not definitions. David Rowley Discussion: https://postgr.es/m/CAKJS1f9RKLWXcMBQhvDYhmsMEo+ALuNgA-NE+AX5Uoke9DJ2Xg@mail.gmail.com --- contrib/postgres_fdw/deparse.c | 4 ++-- contrib/postgres_fdw/postgres_fdw.c | 2 +- src/backend/catalog/index.c | 6 +++--- src/backend/catalog/partition.c | 2 +- src/backend/foreign/foreign.c | 2 +- src/backend/storage/ipc/shm_toc.c | 6 +++--- src/backend/utils/adt/json.c | 8 ++++---- 7 files changed, 15 insertions(+), 15 deletions(-) diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c index 02894a7e35..8cd5843885 100644 --- a/contrib/postgres_fdw/deparse.c +++ b/contrib/postgres_fdw/deparse.c @@ -927,7 +927,7 @@ build_tlist_to_deparse(RelOptInfo *foreignrel) * * List of columns selected is returned in retrieved_attrs. */ -extern void +void deparseSelectStmtForRel(StringInfo buf, PlannerInfo *root, RelOptInfo *rel, List *tlist, List *remote_conds, List *pathkeys, bool is_subquery, List **retrieved_attrs, @@ -1313,7 +1313,7 @@ appendConditions(List *exprs, deparse_expr_cxt *context) } /* Output join name for given join type */ -extern const char * +const char * get_jointype_name(JoinType jointype) { switch (jointype) diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index d37180ae10..941a2e75a5 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -5567,7 +5567,7 @@ conversion_error_callback(void *arg) * Find an equivalence class member expression, all of whose Vars come from * the indicated relation. */ -extern Expr * +Expr * find_em_expr_for_rel(EquivalenceClass *ec, RelOptInfo *rel) { ListCell *lc_em; diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c index f2cb6d7fb8..5fa87f5757 100644 --- a/src/backend/catalog/index.c +++ b/src/backend/catalog/index.c @@ -4023,7 +4023,7 @@ ResetReindexPending(void) * EstimateReindexStateSpace * Estimate space needed to pass reindex state to parallel workers. */ -extern Size +Size EstimateReindexStateSpace(void) { return offsetof(SerializedReindexState, pendingReindexedIndexes) @@ -4034,7 +4034,7 @@ EstimateReindexStateSpace(void) * SerializeReindexState * Serialize reindex state for parallel workers.
*/ -extern void +void SerializeReindexState(Size maxsize, char *start_address) { SerializedReindexState *sistate = (SerializedReindexState *) start_address; @@ -4052,7 +4052,7 @@ SerializeReindexState(Size maxsize, char *start_address) * RestoreReindexState * Restore reindex state in a parallel worker. */ -extern void +void RestoreReindexState(void *reindexstate) { SerializedReindexState *sistate = (SerializedReindexState *) reindexstate; diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index 4dddfcc014..b1c7cd6c72 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -856,7 +856,7 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval, * Return a copy of given PartitionBoundInfo structure. The data types of bounds * are described by given partition key specification. */ -extern PartitionBoundInfo +PartitionBoundInfo partition_bounds_copy(PartitionBoundInfo src, PartitionKey key) { diff --git a/src/backend/foreign/foreign.c b/src/backend/foreign/foreign.c index e7fd507fa5..eac78a5d31 100644 --- a/src/backend/foreign/foreign.c +++ b/src/backend/foreign/foreign.c @@ -712,7 +712,7 @@ get_foreign_server_oid(const char *servername, bool missing_ok) * path list in RelOptInfo is anyway sorted by total cost we are likely to * choose the most efficient path, which is all for the best. */ -extern Path * +Path * GetExistingLocalJoinPath(RelOptInfo *joinrel) { ListCell *lc; diff --git a/src/backend/storage/ipc/shm_toc.c b/src/backend/storage/ipc/shm_toc.c index 2abd140a96..ee5ec6e380 100644 --- a/src/backend/storage/ipc/shm_toc.c +++ b/src/backend/storage/ipc/shm_toc.c @@ -60,7 +60,7 @@ shm_toc_create(uint64 magic, void *address, Size nbytes) * Attach to an existing table of contents. If the magic number found at * the target address doesn't match our expectations, return NULL. */ -extern shm_toc * +shm_toc * shm_toc_attach(uint64 magic, void *address) { shm_toc *toc = (shm_toc *) address; @@ -84,7 +84,7 @@ shm_toc_attach(uint64 magic, void *address) * We allocate backwards from the end of the segment, so that the TOC entries * can grow forward from the start of the segment. */ -extern void * +void * shm_toc_allocate(shm_toc *toc, Size nbytes) { volatile shm_toc *vtoc = toc; @@ -127,7 +127,7 @@ shm_toc_allocate(shm_toc *toc, Size nbytes) /* * Return the number of bytes that can still be allocated. 
*/ -extern Size +Size shm_toc_freespace(shm_toc *toc) { volatile shm_toc *vtoc = toc; diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c index 3ba9bb3519..6f0fe94d63 100644 --- a/src/backend/utils/adt/json.c +++ b/src/backend/utils/adt/json.c @@ -1843,7 +1843,7 @@ add_json(Datum val, bool is_null, StringInfo result, /* * SQL function array_to_json(row) */ -extern Datum +Datum array_to_json(PG_FUNCTION_ARGS) { Datum array = PG_GETARG_DATUM(0); @@ -1859,7 +1859,7 @@ array_to_json(PG_FUNCTION_ARGS) /* * SQL function array_to_json(row, prettybool) */ -extern Datum +Datum array_to_json_pretty(PG_FUNCTION_ARGS) { Datum array = PG_GETARG_DATUM(0); @@ -1876,7 +1876,7 @@ array_to_json_pretty(PG_FUNCTION_ARGS) /* * SQL function row_to_json(row) */ -extern Datum +Datum row_to_json(PG_FUNCTION_ARGS) { Datum array = PG_GETARG_DATUM(0); @@ -1892,7 +1892,7 @@ row_to_json(PG_FUNCTION_ARGS) /* * SQL function row_to_json(row, prettybool) */ -extern Datum +Datum row_to_json_pretty(PG_FUNCTION_ARGS) { Datum array = PG_GETARG_DATUM(0); From eb7ed3f3063401496e4aa4bd68fa33f0be31a72f Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 19 Feb 2018 16:59:37 -0300 Subject: [PATCH 1019/1087] Allow UNIQUE indexes on partitioned tables If we restrict unique constraints on partitioned tables so that they must always include the partition key, then our standard approach to unique indexes already works --- each unique key is forced to exist within a single partition, so enforcing the unique restriction in each index individually is enough to have it enforced globally. Therefore we can implement unique indexes on partitions by simply removing a few restrictions (and adding others.) Discussion: https://postgr.es/m/20171222212921.hi6hg6pem2w2t36z@alvherre.pgsql Discussion: https://postgr.es/m/20171229230607.3iib6b62fn3uaf47@alvherre.pgsql Reviewed-by: Simon Riggs, Jesper Pedersen, Peter Eisentraut, Jaime Casanova, Amit Langote --- doc/src/sgml/ddl.sgml | 9 +- doc/src/sgml/ref/alter_table.sgml | 15 +- doc/src/sgml/ref/create_index.sgml | 5 + doc/src/sgml/ref/create_table.sgml | 18 +- src/backend/bootstrap/bootparse.y | 2 + src/backend/catalog/index.c | 50 ++- src/backend/catalog/pg_constraint.c | 76 +++++ src/backend/catalog/toasting.c | 4 +- src/backend/commands/indexcmds.c | 125 +++++++- src/backend/commands/tablecmds.c | 71 ++++- src/backend/parser/analyze.c | 7 + src/backend/parser/parse_utilcmd.c | 31 +- src/backend/tcop/utility.c | 1 + src/bin/pg_dump/t/002_pg_dump.pl | 65 ++++ src/include/catalog/index.h | 5 +- src/include/catalog/pg_constraint_fn.h | 4 +- src/include/commands/defrem.h | 1 + src/include/parser/parse_utilcmd.h | 3 +- src/test/regress/expected/alter_table.out | 8 - src/test/regress/expected/create_index.out | 6 + src/test/regress/expected/create_table.out | 12 - src/test/regress/expected/indexing.out | 294 +++++++++++++++++- src/test/regress/expected/insert_conflict.out | 2 +- src/test/regress/sql/alter_table.sql | 2 - src/test/regress/sql/create_index.sql | 6 + src/test/regress/sql/create_table.sql | 8 - src/test/regress/sql/indexing.sql | 172 +++++++++- 27 files changed, 907 insertions(+), 95 deletions(-) diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 8c3be5b103..15a9285136 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -3146,9 +3146,8 @@ CREATE TABLE measurement_y2006m02 PARTITION OF measurement Create an index on the key column(s), as well as any other indexes you might want, on the partitioned table. 
(The key index is not strictly - necessary, but in most scenarios it is helpful. If you intend the key - values to be unique then you should always create a unique or - primary-key constraint for each partition.) This automatically creates + necessary, but in most scenarios it is helpful.) + This automatically creates one index on each partition, and any partitions you create or attach later will also contain the index. @@ -3270,7 +3269,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - There is no way to create a primary key, unique constraint, or + There is no way to create an exclusion constraint spanning all partitions; it is only possible to constrain each leaf partition individually. @@ -3278,7 +3277,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 - Since primary keys are not supported on partitioned tables, foreign + While primary keys are supported on partitioned tables, foreign keys referencing partitioned tables are not supported, nor are foreign key references from a partitioned table to some other table. diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 2b514b7606..5be56d4b28 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -412,6 +412,11 @@ WITH ( MODULUS numeric_literal, REM disappear too. + + Additional restrictions apply when unique or primary key constraints + are added to partitioned tables; see . + + Adding a constraint using an existing index can be helpful in @@ -834,9 +839,9 @@ WITH ( MODULUS numeric_literal, REM This form attaches an existing table (which might itself be partitioned) as a partition of the target table. The table can be attached - as a partition for specific values using FOR VALUES - or as a default partition by using DEFAULT - . For each index in the target table, a corresponding + as a partition for specific values using FOR VALUES + or as a default partition by using DEFAULT. + For each index in the target table, a corresponding one will be created in the attached table; or, if an equivalent index already exists, will be attached to the target table's index, as if ALTER INDEX ATTACH PARTITION had been executed. @@ -851,8 +856,10 @@ WITH ( MODULUS numeric_literal, REM as the target table and no more; moreover, the column types must also match. Also, it must have all the NOT NULL and CHECK constraints of the target table. Currently - UNIQUE, PRIMARY KEY, and FOREIGN KEY constraints are not considered. + UNIQUE and PRIMARY KEY constraints + from the parent table will be created in the partition, if they don't + already exist. If any of the CHECK constraints of the table being attached is marked NO INHERIT, the command will fail; such a constraint must be recreated without the NO INHERIT diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index f464557de8..1fd21e12bd 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -108,6 +108,11 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] + + + Additional restrictions apply when unique indexes are applied to + partitioned tables; see .
+ diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml index 8bf9dc992b..338dddd7cc 100644 --- a/doc/src/sgml/ref/create_table.sgml +++ b/doc/src/sgml/ref/create_table.sgml @@ -546,8 +546,7 @@ WITH ( MODULUS numeric_literal, REM - Partitioned tables do not support UNIQUE, - PRIMARY KEY, EXCLUDE, or + Partitioned tables do not support EXCLUDE or FOREIGN KEY constraints; however, you can define these constraints on individual partitions. @@ -786,6 +785,14 @@ WITH ( MODULUS numeric_literal, REM primary key constraint defined for the table. (Otherwise it would just be the same constraint listed twice.) + + + When used on partitioned tables, unique constraints must include all the + columns of the partition key. + If any partitions are in turn partitioned, all columns of each partition + key are considered at each level below the UNIQUE + constraint. + @@ -814,6 +821,13 @@ WITH ( MODULUS numeric_literal, REM about the design of the schema, since a primary key implies that other tables can rely on this set of columns as a unique identifier for rows. + + + PRIMARY KEY constraints share the restrictions that + UNIQUE constraints have when placed on partitioned + tables. + + diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y index dfd53fa054..9e81f9514d 100644 --- a/src/backend/bootstrap/bootparse.y +++ b/src/backend/bootstrap/bootparse.y @@ -322,6 +322,7 @@ Boot_DeclareIndexStmt: stmt, $4, InvalidOid, + InvalidOid, false, false, false, @@ -367,6 +368,7 @@ Boot_DeclareUniqueIndexStmt: stmt, $5, InvalidOid, + InvalidOid, false, false, false, diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c index 5fa87f5757..564f2069cf 100644 --- a/src/backend/catalog/index.c +++ b/src/backend/catalog/index.c @@ -691,6 +691,8 @@ UpdateIndexRelation(Oid indexoid, * nonzero to specify a preselected OID. * parentIndexRelid: if creating an index partition, the OID of the * parent index; otherwise InvalidOid. + * parentConstraintId: if creating a constraint on a partition, the OID + * of the constraint in the parent; otherwise InvalidOid. * relFileNode: normally, pass InvalidOid to get new storage. May be * nonzero to attach an existing valid build. * indexInfo: same info executor uses to insert into the index @@ -722,6 +724,7 @@ UpdateIndexRelation(Oid indexoid, * (only if INDEX_CREATE_ADD_CONSTRAINT is set) * allow_system_table_mods: allow table to be a system catalog * is_internal: if true, post creation hook for new index + * constraintId: if not NULL, receives OID of created constraint * * Returns the OID of the created index. 
*/ @@ -730,6 +733,7 @@ index_create(Relation heapRelation, const char *indexRelationName, Oid indexRelationId, Oid parentIndexRelid, + Oid parentConstraintId, Oid relFileNode, IndexInfo *indexInfo, List *indexColNames, @@ -742,7 +746,8 @@ index_create(Relation heapRelation, bits16 flags, bits16 constr_flags, bool allow_system_table_mods, - bool is_internal) + bool is_internal, + Oid *constraintId) { Oid heapRelationId = RelationGetRelid(heapRelation); Relation pg_class; @@ -989,6 +994,7 @@ index_create(Relation heapRelation, if ((flags & INDEX_CREATE_ADD_CONSTRAINT) != 0) { char constraintType; + ObjectAddress localaddr; if (isprimary) constraintType = CONSTRAINT_PRIMARY; @@ -1002,14 +1008,17 @@ index_create(Relation heapRelation, constraintType = 0; /* keep compiler quiet */ } - index_constraint_create(heapRelation, + localaddr = index_constraint_create(heapRelation, indexRelationId, + parentConstraintId, indexInfo, indexRelationName, constraintType, constr_flags, allow_system_table_mods, is_internal); + if (constraintId) + *constraintId = localaddr.objectId; } else { @@ -1181,6 +1190,8 @@ index_create(Relation heapRelation, * * heapRelation: table owning the index (must be suitably locked by caller) * indexRelationId: OID of the index + * parentConstraintId: if constraint is on a partition, the OID of the + * constraint in the parent. * indexInfo: same info executor uses to insert into the index * constraintName: what it say (generally, should match name of index) * constraintType: one of CONSTRAINT_PRIMARY, CONSTRAINT_UNIQUE, or @@ -1198,6 +1209,7 @@ index_create(Relation heapRelation, ObjectAddress index_constraint_create(Relation heapRelation, Oid indexRelationId, + Oid parentConstraintId, IndexInfo *indexInfo, const char *constraintName, char constraintType, @@ -1212,6 +1224,9 @@ index_constraint_create(Relation heapRelation, bool deferrable; bool initdeferred; bool mark_as_primary; + bool islocal; + bool noinherit; + int inhcount; deferrable = (constr_flags & INDEX_CONSTR_CREATE_DEFERRABLE) != 0; initdeferred = (constr_flags & INDEX_CONSTR_CREATE_INIT_DEFERRED) != 0; @@ -1246,6 +1261,19 @@ index_constraint_create(Relation heapRelation, deleteDependencyRecordsForClass(RelationRelationId, indexRelationId, RelationRelationId, DEPENDENCY_AUTO); + if (OidIsValid(parentConstraintId)) + { + islocal = false; + inhcount = 1; + noinherit = false; + } + else + { + islocal = true; + inhcount = 0; + noinherit = true; + } + /* * Construct a pg_constraint entry. */ @@ -1273,9 +1301,9 @@ index_constraint_create(Relation heapRelation, NULL, /* no check constraint */ NULL, NULL, - true, /* islocal */ - 0, /* inhcount */ - true, /* noinherit */ + islocal, + inhcount, + noinherit, is_internal); /* @@ -1294,6 +1322,18 @@ index_constraint_create(Relation heapRelation, recordDependencyOn(&myself, &referenced, DEPENDENCY_INTERNAL); + /* + * Also, if this is a constraint on a partition, mark it as depending + * on the constraint in the parent. + */ + if (OidIsValid(parentConstraintId)) + { + ObjectAddress parentConstr; + + ObjectAddressSet(parentConstr, ConstraintRelationId, parentConstraintId); + recordDependencyOn(&referenced, &parentConstr, DEPENDENCY_INTERNAL_AUTO); + } + /* * If the constraint is deferrable, create the deferred uniqueness * checking trigger. 
(The trigger will be given an internal dependency on diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c index 442ae7e23d..731c5e4317 100644 --- a/src/backend/catalog/pg_constraint.c +++ b/src/backend/catalog/pg_constraint.c @@ -747,6 +747,43 @@ AlterConstraintNamespaces(Oid ownerId, Oid oldNspId, heap_close(conRel, RowExclusiveLock); } +/* + * ConstraintSetParentConstraint + * Set a partition's constraint as child of its parent table's + * + * This updates the constraint's pg_constraint row to show it as inherited, and + * adds a dependency to the parent so that it cannot be removed on its own. + */ +void +ConstraintSetParentConstraint(Oid childConstrId, Oid parentConstrId) +{ + Relation constrRel; + Form_pg_constraint constrForm; + HeapTuple tuple, + newtup; + ObjectAddress depender; + ObjectAddress referenced; + + constrRel = heap_open(ConstraintRelationId, RowExclusiveLock); + tuple = SearchSysCache1(CONSTROID, ObjectIdGetDatum(childConstrId)); + if (!HeapTupleIsValid(tuple)) + elog(ERROR, "cache lookup failed for constraint %u", childConstrId); + newtup = heap_copytuple(tuple); + constrForm = (Form_pg_constraint) GETSTRUCT(newtup); + constrForm->conislocal = false; + constrForm->coninhcount++; + CatalogTupleUpdate(constrRel, &tuple->t_self, newtup); + ReleaseSysCache(tuple); + + ObjectAddressSet(referenced, ConstraintRelationId, parentConstrId); + ObjectAddressSet(depender, ConstraintRelationId, childConstrId); + + recordDependencyOn(&depender, &referenced, DEPENDENCY_INTERNAL_AUTO); + + heap_close(constrRel, RowExclusiveLock); +} + + /* * get_relation_constraint_oid * Find a constraint on the specified relation with the specified name. @@ -903,6 +940,45 @@ get_relation_constraint_attnos(Oid relid, const char *conname, return conattnos; } +/* + * Return the OID of the constraint associated with the given index in the + * given relation; or InvalidOid if no such index is catalogued. + */ +Oid +get_relation_idx_constraint_oid(Oid relationId, Oid indexId) +{ + Relation pg_constraint; + SysScanDesc scan; + ScanKeyData key; + HeapTuple tuple; + Oid constraintId = InvalidOid; + + pg_constraint = heap_open(ConstraintRelationId, AccessShareLock); + + ScanKeyInit(&key, + Anum_pg_constraint_conrelid, + BTEqualStrategyNumber, + F_OIDEQ, + ObjectIdGetDatum(relationId)); + scan = systable_beginscan(pg_constraint, ConstraintRelidIndexId, + true, NULL, 1, &key); + while ((tuple = systable_getnext(scan)) != NULL) + { + Form_pg_constraint constrForm; + + constrForm = (Form_pg_constraint) GETSTRUCT(tuple); + if (constrForm->conindid == indexId) + { + constraintId = HeapTupleGetOid(tuple); + break; + } + } + systable_endscan(scan); + + heap_close(pg_constraint, AccessShareLock); + return constraintId; +} + /* * get_domain_constraint_oid * Find a constraint on the specified domain with the specified name.
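The catalog state that ConstraintSetParentConstraint() produces can be
inspected directly: after a partition is created under a table with a primary
key, the partition's copy of the constraint is recorded as inherited rather
than local.  A minimal sketch, assuming this patch is applied (the uq_demo
table names are invented for illustration):

    create table uq_demo (a int, b int, primary key (a, b))
        partition by range (a);
    create table uq_demo_1 partition of uq_demo for values from (0) to (100);
    select conrelid::regclass, conname, conislocal, coninhcount
      from pg_constraint
     where conname like 'uq_demo%'
     order by conrelid::regclass::text;
    -- uq_demo_1_pkey should show conislocal = f, coninhcount = 1
    drop table uq_demo;

The DEPENDENCY_INTERNAL_AUTO edge recorded above is what blocks dropping the
child's index or constraint on its own, while a drop of the constraint on the
parent still cascades to the whole set, as the indexing regression test
further below demonstrates.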
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c index dcbad1286b..8bf2698545 100644 --- a/src/backend/catalog/toasting.c +++ b/src/backend/catalog/toasting.c @@ -330,13 +330,13 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, coloptions[1] = 0; index_create(toast_rel, toast_idxname, toastIndexOid, InvalidOid, - InvalidOid, + InvalidOid, InvalidOid, indexInfo, list_make2("chunk_id", "chunk_seq"), BTREE_AM_OID, rel->rd_rel->reltablespace, collationObjectId, classObjectId, coloptions, (Datum) 0, - INDEX_CREATE_IS_PRIMARY, 0, true, true); + INDEX_CREATE_IS_PRIMARY, 0, true, true, NULL); heap_close(toast_rel, NoLock); diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c index 7c46613215..3e48a58dcb 100644 --- a/src/backend/commands/indexcmds.c +++ b/src/backend/commands/indexcmds.c @@ -25,6 +25,7 @@ #include "catalog/indexing.h" #include "catalog/partition.h" #include "catalog/pg_am.h" +#include "catalog/pg_constraint_fn.h" #include "catalog/pg_inherits.h" #include "catalog/pg_inherits_fn.h" #include "catalog/pg_opclass.h" @@ -301,6 +302,8 @@ CheckIndexCompatible(Oid oldId, * nonzero to specify a preselected OID for the index. * 'parentIndexId': the OID of the parent index; InvalidOid if not the child * of a partitioned index. + * 'parentConstraintId': the OID of the parent constraint; InvalidOid if not + * the child of a constraint (only used when recursing) * 'is_alter_table': this is due to an ALTER rather than a CREATE operation. * 'check_rights': check for CREATE rights in namespace and tablespace. (This * should be true except when ALTER is deleting/recreating an index.) @@ -317,6 +320,7 @@ DefineIndex(Oid relationId, IndexStmt *stmt, Oid indexRelationId, Oid parentIndexId, + Oid parentConstraintId, bool is_alter_table, bool check_rights, bool check_not_in_use, @@ -331,6 +335,7 @@ DefineIndex(Oid relationId, Oid accessMethodId; Oid namespaceId; Oid tablespaceId; + Oid createdConstraintId = InvalidOid; List *indexColNames; Relation rel; Relation indexRelation; @@ -432,20 +437,11 @@ DefineIndex(Oid relationId, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("cannot create index on partitioned table \"%s\" concurrently", RelationGetRelationName(rel)))); - if (stmt->unique) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("cannot create unique index on partitioned table \"%s\"", - RelationGetRelationName(rel)))); if (stmt->excludeOpNames) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("cannot create exclusion constraints on partitioned table \"%s\"", RelationGetRelationName(rel)))); - if (stmt->primary || stmt->isconstraint) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("cannot create constraints on partitioned tables"))); } /* @@ -643,6 +639,84 @@ DefineIndex(Oid relationId, if (stmt->primary) index_check_primary_key(rel, indexInfo, is_alter_table); + /* + * If this table is partitioned and we're creating a unique index or a + * primary key, make sure that the indexed columns are part of the + * partition key. Otherwise it would be possible to violate uniqueness by + * putting values that ought to be unique in different partitions. + * + * We could lift this limitation if we had global indexes, but those have + * their own problems, so this is a useful feature combination. 
+ */ + if (partitioned && (stmt->unique || stmt->primary)) + { + PartitionKey key = rel->rd_partkey; + int i; + + /* + * A partitioned table can have unique indexes, as long as all the + * columns in the partition key appear in the unique key. A + * partition-local index can enforce global uniqueness iff the PK + * value completely determines the partition that a row is in. + * + * Thus, verify that all the columns in the partition key appear + * in the unique key definition. + */ + for (i = 0; i < key->partnatts; i++) + { + bool found = false; + int j; + const char *constraint_type; + + if (stmt->primary) + constraint_type = "PRIMARY KEY"; + else if (stmt->unique) + constraint_type = "UNIQUE"; + else if (stmt->excludeOpNames != NIL) + constraint_type = "EXCLUDE"; + else + { + elog(ERROR, "unknown constraint type"); + constraint_type = NULL; /* keep compiler quiet */ + } + + /* + * It may be possible to support UNIQUE constraints when partition + * keys are expressions, but is it worth it? Give up for now. + */ + if (key->partattrs[i] == 0) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("unsupported %s constraint with partition key definition", + constraint_type), + errdetail("%s constraints cannot be used when partition keys include expressions.", + constraint_type))); + + for (j = 0; j < indexInfo->ii_NumIndexAttrs; j++) + { + if (key->partattrs[i] == indexInfo->ii_KeyAttrNumbers[j]) + { + found = true; + break; + } + } + if (!found) + { + Form_pg_attribute att; + + att = TupleDescAttr(RelationGetDescr(rel), key->partattrs[i] - 1); + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("insufficient columns in %s constraint definition", + constraint_type), + errdetail("%s constraint on table \"%s\" lacks column \"%s\" which is part of the partition key.", + constraint_type, RelationGetRelationName(rel), + NameStr(att->attname)))); + } + } + } + + /* * We disallow indexes on system columns other than OID. They would not * necessarily get updated correctly, and they don't seem useful anyway. @@ -740,12 +814,14 @@ DefineIndex(Oid relationId, indexRelationId = index_create(rel, indexRelationName, indexRelationId, parentIndexId, + parentConstraintId, stmt->oldNode, indexInfo, indexColNames, accessMethodId, tablespaceId, collationObjectId, classObjectId, coloptions, reloptions, flags, constr_flags, - allowSystemTableMods, !check_rights); + allowSystemTableMods, !check_rights, + &createdConstraintId); ObjectAddressSet(address, RelationRelationId, indexRelationId); @@ -832,16 +908,40 @@ DefineIndex(Oid relationId, opfamOids, attmap, maplen)) { + Oid cldConstrOid = InvalidOid; + /* - * Found a match. Attach index to parent and we're - * done, but keep lock till commit. + * Found a match. + * + * If this index is being created in the parent + * because of a constraint, then the child needs to + * have a constraint also, so look for one. If there + * is no such constraint, this index is no good, so + * keep looking. */ + if (createdConstraintId != InvalidOid) + { + cldConstrOid = + get_relation_idx_constraint_oid(childRelid, + cldidxid); + if (cldConstrOid == InvalidOid) + { + index_close(cldidx, lockmode); + continue; + } + } + + /* Attach index to parent and we're done. 
*/ IndexSetParentIndex(cldidx, indexRelationId); + if (createdConstraintId != InvalidOid) + ConstraintSetParentConstraint(cldConstrOid, + createdConstraintId); if (!IndexIsValid(cldidx->rd_index)) invalidate_parent = true; found = true; + /* keep lock till commit */ index_close(cldidx, NoLock); break; } @@ -872,6 +972,7 @@ DefineIndex(Oid relationId, DefineIndex(childRelid, childStmt, InvalidOid, /* no predefined OID */ indexRelationId, /* this is our child */ + createdConstraintId, false, check_rights, check_not_in_use, false, quiet); } diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 87539d6c0b..db6c8ff00e 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -939,17 +939,20 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId, Relation idxRel = index_open(lfirst_oid(cell), AccessShareLock); AttrNumber *attmap; IndexStmt *idxstmt; + Oid constraintOid; attmap = convert_tuples_by_name_map(RelationGetDescr(rel), RelationGetDescr(parent), gettext_noop("could not convert row type")); idxstmt = generateClonedIndexStmt(NULL, RelationGetRelid(rel), idxRel, - attmap, RelationGetDescr(rel)->natts); + attmap, RelationGetDescr(rel)->natts, + &constraintOid); DefineIndex(RelationGetRelid(rel), idxstmt, InvalidOid, RelationGetRelid(idxRel), + constraintOid, false, false, false, false, false); index_close(idxRel, AccessShareLock); @@ -6824,6 +6827,7 @@ ATExecAddIndex(AlteredTableInfo *tab, Relation rel, stmt, InvalidOid, /* no predefined OID */ InvalidOid, /* no parent index */ + InvalidOid, /* no parent constraint */ true, /* is_alter_table */ check_rights, false, /* check_not_in_use - we did it already */ @@ -6869,6 +6873,15 @@ ATExecAddIndexConstraint(AlteredTableInfo *tab, Relation rel, Assert(OidIsValid(index_oid)); Assert(stmt->isconstraint); + /* + * Doing this on partitioned tables is not a simple feature to implement, + * so let's punt for now. + */ + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("ALTER TABLE / ADD CONSTRAINT USING INDEX is not supported on partitioned tables"))); + indexRel = index_open(index_oid, AccessShareLock); indexName = pstrdup(RelationGetRelationName(indexRel)); @@ -6916,6 +6929,7 @@ ATExecAddIndexConstraint(AlteredTableInfo *tab, Relation rel, address = index_constraint_create(rel, index_oid, + InvalidOid, indexInfo, constraintName, constraintType, @@ -14147,6 +14161,7 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel) IndexInfo *info; AttrNumber *attmap; bool found = false; + Oid constraintOid; /* * Ignore indexes in the partitioned table other than partitioned @@ -14163,6 +14178,7 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel) attmap = convert_tuples_by_name_map(RelationGetDescr(attachrel), RelationGetDescr(rel), gettext_noop("could not convert row type")); + constraintOid = get_relation_idx_constraint_oid(RelationGetRelid(rel), idx); /* * Scan the list of existing indexes in the partition-to-be, and mark @@ -14171,6 +14187,8 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel) */ for (i = 0; i < list_length(attachRelIdxs); i++) { + Oid cldConstrOid = InvalidOid; + /* does this index have a parent? 
if so, can't use it */ if (has_superclass(RelationGetRelid(attachrelIdxRels[i]))) continue; @@ -14183,8 +14201,26 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel) attmap, RelationGetDescr(rel)->natts)) { + /* + * If this index is being created in the parent because of a + * constraint, then the child needs to have a constraint also, + * so look for one. If there is no such constraint, this + * index is no good, so keep looking. + */ + if (OidIsValid(constraintOid)) + { + cldConstrOid = + get_relation_idx_constraint_oid(RelationGetRelid(attachrel), + RelationGetRelid(attachrelIdxRels[i])); + /* no dice */ + if (!OidIsValid(cldConstrOid)) + continue; + } + /* bingo. */ IndexSetParentIndex(attachrelIdxRels[i], idx); + if (OidIsValid(constraintOid)) + ConstraintSetParentConstraint(cldConstrOid, constraintOid); found = true; break; } @@ -14197,12 +14233,15 @@ AttachPartitionEnsureIndexes(Relation rel, Relation attachrel) if (!found) { IndexStmt *stmt; + Oid constraintOid; stmt = generateClonedIndexStmt(NULL, RelationGetRelid(attachrel), idxRel, attmap, - RelationGetDescr(rel)->natts); + RelationGetDescr(rel)->natts, + &constraintOid); DefineIndex(RelationGetRelid(attachrel), stmt, InvalidOid, RelationGetRelid(idxRel), + constraintOid, false, false, false, false, false); } @@ -14445,6 +14484,8 @@ ATExecAttachPartitionIdx(List **wqueue, Relation parentIdx, RangeVar *name) bool found; int i; PartitionDesc partDesc; + Oid constraintOid, + cldConstrId = InvalidOid; /* * If this partition already has an index attached, refuse the operation. @@ -14500,8 +14541,34 @@ ATExecAttachPartitionIdx(List **wqueue, Relation parentIdx, RangeVar *name) RelationGetRelationName(parentIdx)), errdetail("The index definitions do not match."))); + /* + * If there is a constraint in the parent, make sure there is one + * in the child too. + */ + constraintOid = get_relation_idx_constraint_oid(RelationGetRelid(parentTbl), + RelationGetRelid(parentIdx)); + + if (OidIsValid(constraintOid)) + { + cldConstrId = get_relation_idx_constraint_oid(RelationGetRelid(partTbl), + partIdxId); + if (!OidIsValid(cldConstrId)) + ereport(ERROR, + (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), + errmsg("cannot attach index \"%s\" as a partition of index \"%s\"", + RelationGetRelationName(partIdx), + RelationGetRelationName(parentIdx)), + errdetail("The index \"%s\" belongs to a constraint in table \"%s\" but no constraint exists for index \"%s\".", + RelationGetRelationName(parentIdx), + RelationGetRelationName(parentTbl), + RelationGetRelationName(partIdx)))); + } + /* All good -- do it */ IndexSetParentIndex(partIdx, RelationGetRelid(parentIdx)); + if (OidIsValid(constraintOid)) + ConstraintSetParentConstraint(cldConstrId, constraintOid); + pfree(attmap); CommandCounterIncrement(); diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c index e7b2bc7e73..5b3a610cf9 100644 --- a/src/backend/parser/analyze.c +++ b/src/backend/parser/analyze.c @@ -1017,6 +1017,13 @@ transformOnConflictClause(ParseState *pstate, TargetEntry *te; int attno; + if (targetrel->rd_partdesc) + ereport(ERROR, + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), + errmsg("%s cannot be applied to partitioned table \"%s\"", + "ON CONFLICT DO UPDATE", + RelationGetRelationName(targetrel)))); + /* * All INSERT expressions have been parsed, get ready for potentially * existing SET statements that need to be processed like an UPDATE. 
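The transformOnConflictClause() check added above fires only for ON CONFLICT
DO UPDATE; DO NOTHING on a partitioned table keeps working, which the
insert_conflict test changes further below rely on.  A sketch of both
outcomes, assuming this patch is applied (oc_demo is an invented example
table):

    create table oc_demo (a int primary key, b text) partition by range (a);
    create table oc_demo_1 partition of oc_demo for values from (0) to (100);
    insert into oc_demo values (1, 'first');
    insert into oc_demo values (1, 'again')
        on conflict do nothing;                        -- ok, keeps 'first'
    insert into oc_demo values (1, 'again')
        on conflict (a) do update set b = excluded.b;  -- fails with the new
                                                       -- error message
    drop table oc_demo;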
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c index 7c2cd4656a..6029eb13d7 100644 --- a/src/backend/parser/parse_utilcmd.c +++ b/src/backend/parser/parse_utilcmd.c @@ -712,12 +712,6 @@ transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column) errmsg("primary key constraints are not supported on foreign tables"), parser_errposition(cxt->pstate, constraint->location))); - if (cxt->ispartitioned) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("primary key constraints are not supported on partitioned tables"), - parser_errposition(cxt->pstate, - constraint->location))); /* FALL THRU */ case CONSTR_UNIQUE: @@ -727,12 +721,6 @@ transformColumnDefinition(CreateStmtContext *cxt, ColumnDef *column) errmsg("unique constraints are not supported on foreign tables"), parser_errposition(cxt->pstate, constraint->location))); - if (cxt->ispartitioned) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("unique constraints are not supported on partitioned tables"), - parser_errposition(cxt->pstate, - constraint->location))); if (constraint->keys == NIL) constraint->keys = list_make1(makeString(column->colname)); cxt->ixconstraints = lappend(cxt->ixconstraints, constraint); @@ -829,12 +817,6 @@ transformTableConstraint(CreateStmtContext *cxt, Constraint *constraint) errmsg("primary key constraints are not supported on foreign tables"), parser_errposition(cxt->pstate, constraint->location))); - if (cxt->ispartitioned) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("primary key constraints are not supported on partitioned tables"), - parser_errposition(cxt->pstate, - constraint->location))); cxt->ixconstraints = lappend(cxt->ixconstraints, constraint); break; @@ -845,12 +827,6 @@ transformTableConstraint(CreateStmtContext *cxt, Constraint *constraint) errmsg("unique constraints are not supported on foreign tables"), parser_errposition(cxt->pstate, constraint->location))); - if (cxt->ispartitioned) - ereport(ERROR, - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), - errmsg("unique constraints are not supported on partitioned tables"), - parser_errposition(cxt->pstate, - constraint->location))); cxt->ixconstraints = lappend(cxt->ixconstraints, constraint); break; @@ -1192,7 +1168,7 @@ transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_cla /* Build CREATE INDEX statement to recreate the parent_index */ index_stmt = generateClonedIndexStmt(cxt->relation, InvalidOid, parent_index, - attmap, tupleDesc->natts); + attmap, tupleDesc->natts, NULL); /* Copy comment on index, if requested */ if (table_like_clause->options & CREATE_TABLE_LIKE_COMMENTS) @@ -1275,7 +1251,7 @@ transformOfType(CreateStmtContext *cxt, TypeName *ofTypename) */ IndexStmt * generateClonedIndexStmt(RangeVar *heapRel, Oid heapRelid, Relation source_idx, - const AttrNumber *attmap, int attmap_length) + const AttrNumber *attmap, int attmap_length, Oid *constraintOid) { Oid source_relid = RelationGetRelid(source_idx); HeapTuple ht_idxrel; @@ -1373,6 +1349,9 @@ generateClonedIndexStmt(RangeVar *heapRel, Oid heapRelid, Relation source_idx, HeapTuple ht_constr; Form_pg_constraint conrec; + if (constraintOid) + *constraintOid = constraintId; + ht_constr = SearchSysCache1(CONSTROID, ObjectIdGetDatum(constraintId)); if (!HeapTupleIsValid(ht_constr)) diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index 3abe7d6155..8c23ee53e2 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -1353,6 
+1353,7 @@ ProcessUtilitySlow(ParseState *pstate, stmt, InvalidOid, /* no predefined OID */ InvalidOid, /* no parent index */ + InvalidOid, /* no parent constraint */ false, /* is_alter_table */ true, /* check_rights */ true, /* check_not_in_use */ diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl index 7b21709f76..ac9cfa04c1 100644 --- a/src/bin/pg_dump/t/002_pg_dump.pl +++ b/src/bin/pg_dump/t/002_pg_dump.pl @@ -5242,6 +5242,40 @@ role => 1, section_pre_data => 1, }, }, + 'ALTER TABLE measurement PRIMARY KEY' => { + all_runs => 1, + catch_all => 'CREATE ... commands', + create_order => 93, + create_sql => 'ALTER TABLE dump_test.measurement ADD PRIMARY KEY (city_id, logdate);', + regexp => qr/^ + \QALTER TABLE ONLY measurement\E \n^\s+ + \QADD CONSTRAINT measurement_pkey PRIMARY KEY (city_id, logdate);\E + /xm, + like => { + binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, + createdb => 1, + defaults => 1, + exclude_test_table => 1, + exclude_test_table_data => 1, + no_blobs => 1, + no_privs => 1, + no_owner => 1, + only_dump_test_schema => 1, + pg_dumpall_dbprivs => 1, + schema_only => 1, + section_post_data => 1, + test_schema_plus_blobs => 1, + with_oids => 1, }, + unlike => { + exclude_dump_test_schema => 1, + only_dump_test_table => 1, + pg_dumpall_globals => 1, + pg_dumpall_globals_clean => 1, + role => 1, + section_pre_data => 1, }, }, + 'CREATE INDEX ... ON measurement_y2006_m2' => { all_runs => 1, catch_all => 'CREATE ... commands', @@ -5304,6 +5338,37 @@ section_pre_data => 1, test_schema_plus_blobs => 1, }, }, + 'ALTER INDEX ... ATTACH PARTITION (primary key)' => { + all_runs => 1, + catch_all => 'CREATE ... commands', + regexp => qr/^ + \QALTER INDEX dump_test.measurement_pkey ATTACH PARTITION dump_test_second_schema.measurement_y2006m2_pkey\E + /xm, + like => { + binary_upgrade => 1, + clean => 1, + clean_if_exists => 1, + createdb => 1, + defaults => 1, + exclude_dump_test_schema => 1, + exclude_test_table => 1, + exclude_test_table_data => 1, + no_blobs => 1, + no_privs => 1, + no_owner => 1, + pg_dumpall_dbprivs => 1, + role => 1, + schema_only => 1, + section_post_data => 1, + with_oids => 1, }, + unlike => { + only_dump_test_schema => 1, + only_dump_test_table => 1, + pg_dumpall_globals => 1, + pg_dumpall_globals_clean => 1, + section_pre_data => 1, + test_schema_plus_blobs => 1, }, }, + 'CREATE VIEW test_view' => { all_runs => 1, catch_all => 'CREATE ... 
commands', diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h index a5cd8ddb1e..f20c5f789b 100644 --- a/src/include/catalog/index.h +++ b/src/include/catalog/index.h @@ -54,6 +54,7 @@ extern Oid index_create(Relation heapRelation, const char *indexRelationName, Oid indexRelationId, Oid parentIndexRelid, + Oid parentConstraintId, Oid relFileNode, IndexInfo *indexInfo, List *indexColNames, @@ -66,7 +67,8 @@ extern Oid index_create(Relation heapRelation, bits16 flags, bits16 constr_flags, bool allow_system_table_mods, - bool is_internal); + bool is_internal, + Oid *constraintId); #define INDEX_CONSTR_CREATE_MARK_AS_PRIMARY (1 << 0) #define INDEX_CONSTR_CREATE_DEFERRABLE (1 << 1) @@ -76,6 +78,7 @@ extern Oid index_create(Relation heapRelation, extern ObjectAddress index_constraint_create(Relation heapRelation, Oid indexRelationId, + Oid parentConstraintId, IndexInfo *indexInfo, const char *constraintName, char constraintType, diff --git a/src/include/catalog/pg_constraint_fn.h b/src/include/catalog/pg_constraint_fn.h index 6bb1b09714..d3351f4a83 100644 --- a/src/include/catalog/pg_constraint_fn.h +++ b/src/include/catalog/pg_constraint_fn.h @@ -58,7 +58,6 @@ extern Oid CreateConstraintEntry(const char *constraintName, extern void RemoveConstraintById(Oid conId); extern void RenameConstraintById(Oid conId, const char *newname); -extern void SetValidatedConstraintById(Oid conId); extern bool ConstraintNameIsUsed(ConstraintCategory conCat, Oid objId, Oid objNamespace, const char *conname); @@ -68,10 +67,13 @@ extern char *ChooseConstraintName(const char *name1, const char *name2, extern void AlterConstraintNamespaces(Oid ownerId, Oid oldNspId, Oid newNspId, bool isType, ObjectAddresses *objsMoved); +extern void ConstraintSetParentConstraint(Oid childConstrId, + Oid parentConstrId); extern Oid get_relation_constraint_oid(Oid relid, const char *conname, bool missing_ok); extern Bitmapset *get_relation_constraint_attnos(Oid relid, const char *conname, bool missing_ok, Oid *constraintOid); extern Oid get_domain_constraint_oid(Oid typid, const char *conname, bool missing_ok); +extern Oid get_relation_idx_constraint_oid(Oid relationId, Oid indexId); extern Bitmapset *get_primary_key_attnos(Oid relid, bool deferrableOk, Oid *constraintOid); diff --git a/src/include/commands/defrem.h b/src/include/commands/defrem.h index 7b824c95af..f510f40945 100644 --- a/src/include/commands/defrem.h +++ b/src/include/commands/defrem.h @@ -26,6 +26,7 @@ extern ObjectAddress DefineIndex(Oid relationId, IndexStmt *stmt, Oid indexRelationId, Oid parentIndexId, + Oid parentConstraintId, bool is_alter_table, bool check_rights, bool check_not_in_use, diff --git a/src/include/parser/parse_utilcmd.h b/src/include/parser/parse_utilcmd.h index 64aa8234e5..35ac97940a 100644 --- a/src/include/parser/parse_utilcmd.h +++ b/src/include/parser/parse_utilcmd.h @@ -29,6 +29,7 @@ extern PartitionBoundSpec *transformPartitionBound(ParseState *pstate, Relation PartitionBoundSpec *spec); extern IndexStmt *generateClonedIndexStmt(RangeVar *heapRel, Oid heapOid, Relation source_idx, - const AttrNumber *attmap, int attmap_length); + const AttrNumber *attmap, int attmap_length, + Oid *constraintOid); #endif /* PARSE_UTILCMD_H */ diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out index e9a1d37f6f..ccd2c38dbc 100644 --- a/src/test/regress/expected/alter_table.out +++ b/src/test/regress/expected/alter_table.out @@ -3305,14 +3305,6 @@ CREATE TABLE partitioned ( a int, b int ) 
PARTITION BY RANGE (a, (a+b+1)); -ALTER TABLE partitioned ADD UNIQUE (a); -ERROR: unique constraints are not supported on partitioned tables -LINE 1: ALTER TABLE partitioned ADD UNIQUE (a); - ^ -ALTER TABLE partitioned ADD PRIMARY KEY (a); -ERROR: primary key constraints are not supported on partitioned tables -LINE 1: ALTER TABLE partitioned ADD PRIMARY KEY (a); - ^ ALTER TABLE partitioned ADD FOREIGN KEY (a) REFERENCES blah; ERROR: foreign key constraints are not supported on partitioned tables LINE 1: ALTER TABLE partitioned ADD FOREIGN KEY (a) REFERENCES blah; diff --git a/src/test/regress/expected/create_index.out b/src/test/regress/expected/create_index.out index 031a0bcec9..057faff2e5 100644 --- a/src/test/regress/expected/create_index.out +++ b/src/test/regress/expected/create_index.out @@ -2559,6 +2559,12 @@ DROP INDEX cwi_replaced_pkey; -- Should fail; a constraint depends on it ERROR: cannot drop index cwi_replaced_pkey because constraint cwi_replaced_pkey on table cwi_test requires it HINT: You can drop constraint cwi_replaced_pkey on table cwi_test instead. DROP TABLE cwi_test; +-- ADD CONSTRAINT USING INDEX is forbidden on partitioned tables +CREATE TABLE cwi_test(a int) PARTITION BY hash (a); +create unique index on cwi_test (a); +alter table cwi_test add primary key using index cwi_test_a_idx ; +ERROR: ALTER TABLE / ADD CONSTRAINT USING INDEX is not supported on partitioned tables +DROP TABLE cwi_test; -- -- Check handling of indexes on system columns -- diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out index bef5463bab..39a963888d 100644 --- a/src/test/regress/expected/create_table.out +++ b/src/test/regress/expected/create_table.out @@ -281,12 +281,6 @@ CREATE TABLE partitioned ( ) PARTITION BY LIST (a1, a2); -- fail ERROR: cannot use "list" partition strategy with more than one column -- unsupported constraint type for partitioned tables -CREATE TABLE partitioned ( - a int PRIMARY KEY -) PARTITION BY RANGE (a); -ERROR: primary key constraints are not supported on partitioned tables -LINE 2: a int PRIMARY KEY - ^ CREATE TABLE pkrel ( a int PRIMARY KEY ); @@ -297,12 +291,6 @@ ERROR: foreign key constraints are not supported on partitioned tables LINE 2: a int REFERENCES pkrel(a) ^ DROP TABLE pkrel; -CREATE TABLE partitioned ( - a int UNIQUE -) PARTITION BY RANGE (a); -ERROR: unique constraints are not supported on partitioned tables -LINE 2: a int UNIQUE - ^ CREATE TABLE partitioned ( a int, EXCLUDE USING gist (a WITH &&) diff --git a/src/test/regress/expected/indexing.out b/src/test/regress/expected/indexing.out index e034ad3aad..0a980dc07d 100644 --- a/src/test/regress/expected/indexing.out +++ b/src/test/regress/expected/indexing.out @@ -26,8 +26,6 @@ drop table idxpart; -- Some unsupported features create table idxpart (a int, b int, c text) partition by range (a); create table idxpart1 partition of idxpart for values from (0) to (10); -create unique index on idxpart (a); -ERROR: cannot create unique index on partitioned table "idxpart" create index concurrently on idxpart (a); ERROR: cannot create index on partitioned table "idxpart" concurrently drop table idxpart; @@ -754,6 +752,296 @@ select attrelid::regclass, attname, attnum from pg_attribute idxpart_col_keep_idx | col_keep | 1 (7 rows) +drop table idxpart; +-- +-- Constraint-related indexes +-- +-- Verify that it works to add primary key / unique to partitioned tables +create table idxpart (a int primary key, b int) partition by range (a); +\d idxpart + Table 
"public.idxpart" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | not null | + b | integer | | | +Partition key: RANGE (a) +Indexes: + "idxpart_pkey" PRIMARY KEY, btree (a) +Number of partitions: 0 + +drop table idxpart; +-- but not if you fail to use the full partition key +create table idxpart (a int unique, b int) partition by range (a, b); +ERROR: insufficient columns in UNIQUE constraint definition +DETAIL: UNIQUE constraint on table "idxpart" lacks column "b" which is part of the partition key. +create table idxpart (a int, b int unique) partition by range (a, b); +ERROR: insufficient columns in UNIQUE constraint definition +DETAIL: UNIQUE constraint on table "idxpart" lacks column "a" which is part of the partition key. +create table idxpart (a int primary key, b int) partition by range (b, a); +ERROR: insufficient columns in PRIMARY KEY constraint definition +DETAIL: PRIMARY KEY constraint on table "idxpart" lacks column "b" which is part of the partition key. +create table idxpart (a int, b int primary key) partition by range (b, a); +ERROR: insufficient columns in PRIMARY KEY constraint definition +DETAIL: PRIMARY KEY constraint on table "idxpart" lacks column "a" which is part of the partition key. +-- OK if you use them in some other order +create table idxpart (a int, b int, c text, primary key (a, b, c)) partition by range (b, c, a); +drop table idxpart; +-- not other types of index-based constraints +create table idxpart (a int, exclude (a with = )) partition by range (a); +ERROR: exclusion constraints are not supported on partitioned tables +LINE 1: create table idxpart (a int, exclude (a with = )) partition ... + ^ +-- no expressions in partition key for PK/UNIQUE +create table idxpart (a int primary key, b int) partition by range ((b + a)); +ERROR: unsupported PRIMARY KEY constraint with partition key definition +DETAIL: PRIMARY KEY constraints cannot be used when partition keys include expressions. +create table idxpart (a int unique, b int) partition by range ((b + a)); +ERROR: unsupported UNIQUE constraint with partition key definition +DETAIL: UNIQUE constraints cannot be used when partition keys include expressions. +-- use ALTER TABLE to add a primary key +create table idxpart (a int, b int, c text) partition by range (a, b); +alter table idxpart add primary key (a); -- not an incomplete one though +ERROR: insufficient columns in PRIMARY KEY constraint definition +DETAIL: PRIMARY KEY constraint on table "idxpart" lacks column "b" which is part of the partition key. 
+alter table idxpart add primary key (a, b); -- this works +\d idxpart + Table "public.idxpart" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | not null | + b | integer | | not null | + c | text | | | +Partition key: RANGE (a, b) +Indexes: + "idxpart_pkey" PRIMARY KEY, btree (a, b) +Number of partitions: 0 + +create table idxpart1 partition of idxpart for values from (0, 0) to (1000, 1000); +\d idxpart1 + Table "public.idxpart1" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | not null | + b | integer | | not null | + c | text | | | +Partition of: idxpart FOR VALUES FROM (0, 0) TO (1000, 1000) +Indexes: + "idxpart1_pkey" PRIMARY KEY, btree (a, b) + +drop table idxpart; +-- use ALTER TABLE to add a unique constraint +create table idxpart (a int, b int) partition by range (a, b); +alter table idxpart add unique (a); -- not an incomplete one though +ERROR: insufficient columns in UNIQUE constraint definition +DETAIL: UNIQUE constraint on table "idxpart" lacks column "b" which is part of the partition key. +alter table idxpart add unique (b, a); -- this works +\d idxpart + Table "public.idxpart" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + a | integer | | | + b | integer | | | +Partition key: RANGE (a, b) +Indexes: + "idxpart_b_a_key" UNIQUE CONSTRAINT, btree (b, a) +Number of partitions: 0 + +drop table idxpart; +-- Exclusion constraints cannot be added +create table idxpart (a int, b int) partition by range (a); +alter table idxpart add exclude (a with =); +ERROR: exclusion constraints are not supported on partitioned tables +LINE 1: alter table idxpart add exclude (a with =); + ^ +drop table idxpart; +-- When (sub)partitions are created, they also contain the constraint +create table idxpart (a int, b int, primary key (a, b)) partition by range (a, b); +create table idxpart1 partition of idxpart for values from (1, 1) to (10, 10); +create table idxpart2 partition of idxpart for values from (10, 10) to (20, 20) + partition by range (b); +create table idxpart21 partition of idxpart2 for values from (10) to (15); +create table idxpart22 partition of idxpart2 for values from (15) to (20); +create table idxpart3 (b int not null, a int not null); +alter table idxpart attach partition idxpart3 for values from (20, 20) to (30, 30); +select conname, contype, conrelid::regclass, conindid::regclass, conkey + from pg_constraint where conrelid::regclass::text like 'idxpart%' + order by conname; + conname | contype | conrelid | conindid | conkey +----------------+---------+-----------+----------------+-------- + idxpart1_pkey | p | idxpart1 | idxpart1_pkey | {1,2} + idxpart21_pkey | p | idxpart21 | idxpart21_pkey | {1,2} + idxpart22_pkey | p | idxpart22 | idxpart22_pkey | {1,2} + idxpart2_pkey | p | idxpart2 | idxpart2_pkey | {1,2} + idxpart3_pkey | p | idxpart3 | idxpart3_pkey | {2,1} + idxpart_pkey | p | idxpart | idxpart_pkey | {1,2} +(6 rows) + +drop table idxpart; +-- Verify that multi-layer partitioning honors the requirement that all +-- columns in the partition key must appear in primary key +create table idxpart (a int, b int, primary key (a)) partition by range (a); +create table idxpart2 partition of idxpart +for values from (0) to (1000) partition by range (b); -- fail +ERROR: insufficient columns in PRIMARY KEY constraint definition +DETAIL: PRIMARY KEY constraint on table "idxpart2" lacks 
column "b" which is part of the partition key. +drop table idxpart; +-- Multi-layer partitioning works correctly in this case: +create table idxpart (a int, b int, primary key (a, b)) partition by range (a); +create table idxpart2 partition of idxpart for values from (0) to (1000) partition by range (b); +create table idxpart21 partition of idxpart2 for values from (0) to (1000); +select conname, contype, conrelid::regclass, conindid::regclass, conkey + from pg_constraint where conrelid::regclass::text like 'idxpart%' + order by conname; + conname | contype | conrelid | conindid | conkey +----------------+---------+-----------+----------------+-------- + idxpart21_pkey | p | idxpart21 | idxpart21_pkey | {1,2} + idxpart2_pkey | p | idxpart2 | idxpart2_pkey | {1,2} + idxpart_pkey | p | idxpart | idxpart_pkey | {1,2} +(3 rows) + +drop table idxpart; +-- If a partitioned table has a unique/PK constraint, then it's not possible +-- to drop the corresponding constraint in the children; nor it's possible +-- to drop the indexes individually. Dropping the constraint in the parent +-- gets rid of the lot. +create table idxpart (i int) partition by hash (i); +create table idxpart0 partition of idxpart (i) for values with (modulus 2, remainder 0); +create table idxpart1 partition of idxpart (i) for values with (modulus 2, remainder 1); +alter table idxpart0 add primary key(i); +alter table idxpart add primary key(i); +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; + indrelid | indexrelid | inhparent | indisvalid | conname | conislocal | coninhcount | connoinherit | convalidated +----------+---------------+--------------+------------+---------------+------------+-------------+--------------+-------------- + idxpart0 | idxpart0_pkey | idxpart_pkey | t | idxpart0_pkey | f | 1 | t | t + idxpart1 | idxpart1_pkey | idxpart_pkey | t | idxpart1_pkey | f | 1 | t | t + idxpart | idxpart_pkey | | t | idxpart_pkey | t | 0 | t | t +(3 rows) + +drop index idxpart0_pkey; -- fail +ERROR: cannot drop index idxpart0_pkey because index idxpart_pkey requires it +HINT: You can drop index idxpart_pkey instead. +drop index idxpart1_pkey; -- fail +ERROR: cannot drop index idxpart1_pkey because index idxpart_pkey requires it +HINT: You can drop index idxpart_pkey instead. 
+alter table idxpart0 drop constraint idxpart0_pkey; -- fail +ERROR: cannot drop inherited constraint "idxpart0_pkey" of relation "idxpart0" +alter table idxpart1 drop constraint idxpart1_pkey; -- fail +ERROR: cannot drop inherited constraint "idxpart1_pkey" of relation "idxpart1" +alter table idxpart drop constraint idxpart_pkey; -- ok +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; + indrelid | indexrelid | inhparent | indisvalid | conname | conislocal | coninhcount | connoinherit | convalidated +----------+------------+-----------+------------+---------+------------+-------------+--------------+-------------- +(0 rows) + +drop table idxpart; +-- If a partitioned table has a constraint whose index is not valid, +-- attaching a missing partition makes it valid. +create table idxpart (a int) partition by range (a); +create table idxpart0 (like idxpart); +alter table idxpart0 add primary key (a); +alter table idxpart attach partition idxpart0 for values from (0) to (1000); +alter table only idxpart add primary key (a); +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; + indrelid | indexrelid | inhparent | indisvalid | conname | conislocal | coninhcount | connoinherit | convalidated +----------+---------------+-----------+------------+---------------+------------+-------------+--------------+-------------- + idxpart0 | idxpart0_pkey | | t | idxpart0_pkey | t | 0 | t | t + idxpart | idxpart_pkey | | f | idxpart_pkey | t | 0 | t | t +(2 rows) + +alter index idxpart_pkey attach partition idxpart0_pkey; +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; + indrelid | indexrelid | inhparent | indisvalid | conname | conislocal | coninhcount | connoinherit | convalidated +----------+---------------+--------------+------------+---------------+------------+-------------+--------------+-------------- + idxpart0 | idxpart0_pkey | idxpart_pkey | t | idxpart0_pkey | f | 1 | t | t + idxpart | idxpart_pkey | | t | idxpart_pkey | t | 0 | t | t +(2 rows) + +drop table idxpart; +-- if a partition has a unique index without a constraint, does not attach +-- automatically; creates a new index instead. 
+create table idxpart (a int, b int) partition by range (a); +create table idxpart1 (a int not null, b int); +create unique index on idxpart1 (a); +alter table idxpart add primary key (a); +alter table idxpart attach partition idxpart1 for values from (1) to (1000); +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; + indrelid | indexrelid | inhparent | indisvalid | conname | conislocal | coninhcount | connoinherit | convalidated +----------+----------------+--------------+------------+---------------+------------+-------------+--------------+-------------- + idxpart1 | idxpart1_a_idx | | t | | | | | + idxpart1 | idxpart1_pkey | idxpart_pkey | t | idxpart1_pkey | f | 1 | t | t + idxpart | idxpart_pkey | | t | idxpart_pkey | t | 0 | t | t +(3 rows) + +drop table idxpart; +-- Can't attach an index without a corresponding constraint +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 (a int not null, b int); +create unique index on idxpart1 (a); +alter table idxpart attach partition idxpart1 for values from (1) to (1000); +alter table only idxpart add primary key (a); +alter index idxpart_pkey attach partition idxpart1_a_idx; -- fail +ERROR: cannot attach index "idxpart1_a_idx" as a partition of index "idxpart_pkey" +DETAIL: The index "idxpart_pkey" belongs to a constraint in table "idxpart" but no constraint exists for index "idxpart1_a_idx". +drop table idxpart; +-- Test that unique constraints are working +create table idxpart (a int, b text, primary key (a, b)) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (100000); +create table idxpart2 (c int, like idxpart); +insert into idxpart2 (c, a, b) values (42, 572814, 'inserted first'); +alter table idxpart2 drop column c; +create unique index on idxpart (a); +alter table idxpart attach partition idxpart2 for values from (100000) to (1000000); +insert into idxpart values (0, 'zero'), (42, 'life'), (2^16, 'sixteen'); +insert into idxpart select 2^g, format('two to power of %s', g) from generate_series(15, 17) g; +ERROR: duplicate key value violates unique constraint "idxpart1_a_idx" +DETAIL: Key (a)=(65536) already exists. +insert into idxpart values (16, 'sixteen'); +insert into idxpart (b, a) values ('one', 142857), ('two', 285714); +insert into idxpart select a * 2, b || b from idxpart where a between 2^16 and 2^19; +ERROR: duplicate key value violates unique constraint "idxpart2_a_idx" +DETAIL: Key (a)=(285714) already exists. +insert into idxpart values (572814, 'five'); +ERROR: duplicate key value violates unique constraint "idxpart2_a_idx" +DETAIL: Key (a)=(572814) already exists. 
+insert into idxpart values (857142, 'six'); +select tableoid::regclass, * from idxpart order by a; + tableoid | a | b +----------+--------+---------------- + idxpart1 | 0 | zero + idxpart1 | 16 | sixteen + idxpart1 | 42 | life + idxpart1 | 65536 | sixteen + idxpart2 | 142857 | one + idxpart2 | 285714 | two + idxpart2 | 572814 | inserted first + idxpart2 | 857142 | six +(8 rows) + drop table idxpart; -- intentionally leave some objects around create table idxpart (a int) partition by range (a); @@ -766,3 +1054,5 @@ create index on idxpart22 (a); create index on only idxpart2 (a); alter index idxpart2_a_idx attach partition idxpart22_a_idx; create index on idxpart (a); +create table idxpart_another (a int, b int, primary key (a, b)) partition by range (a); +create table idxpart_another_1 partition of idxpart_another for values from (0) to (100); diff --git a/src/test/regress/expected/insert_conflict.out b/src/test/regress/expected/insert_conflict.out index 8fd2027d6a..2650faedee 100644 --- a/src/test/regress/expected/insert_conflict.out +++ b/src/test/regress/expected/insert_conflict.out @@ -794,7 +794,7 @@ insert into parted_conflict_test values (1, 'a') on conflict do nothing; insert into parted_conflict_test values (1, 'a') on conflict do nothing; -- however, on conflict do update is not supported yet insert into parted_conflict_test values (1) on conflict (b) do update set a = excluded.a; -ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification +ERROR: ON CONFLICT DO UPDATE cannot be applied to partitioned table "parted_conflict_test" -- but it works OK if we target the partition directly insert into parted_conflict_test_1 values (1) on conflict (b) do update set a = excluded.a; diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql index b27e8f6777..b73f523e8a 100644 --- a/src/test/regress/sql/alter_table.sql +++ b/src/test/regress/sql/alter_table.sql @@ -2035,8 +2035,6 @@ CREATE TABLE partitioned ( a int, b int ) PARTITION BY RANGE (a, (a+b+1)); -ALTER TABLE partitioned ADD UNIQUE (a); -ALTER TABLE partitioned ADD PRIMARY KEY (a); ALTER TABLE partitioned ADD FOREIGN KEY (a) REFERENCES blah; ALTER TABLE partitioned ADD EXCLUDE USING gist (a WITH &&); diff --git a/src/test/regress/sql/create_index.sql b/src/test/regress/sql/create_index.sql index a45e8ebeff..7f17588b0d 100644 --- a/src/test/regress/sql/create_index.sql +++ b/src/test/regress/sql/create_index.sql @@ -834,6 +834,12 @@ DROP INDEX cwi_replaced_pkey; -- Should fail; a constraint depends on it DROP TABLE cwi_test; +-- ADD CONSTRAINT USING INDEX is forbidden on partitioned tables +CREATE TABLE cwi_test(a int) PARTITION BY hash (a); +create unique index on cwi_test (a); +alter table cwi_test add primary key using index cwi_test_a_idx ; +DROP TABLE cwi_test; + -- -- Check handling of indexes on system columns -- diff --git a/src/test/regress/sql/create_table.sql b/src/test/regress/sql/create_table.sql index fdd6d14104..7d67ce05d9 100644 --- a/src/test/regress/sql/create_table.sql +++ b/src/test/regress/sql/create_table.sql @@ -298,10 +298,6 @@ CREATE TABLE partitioned ( ) PARTITION BY LIST (a1, a2); -- fail -- unsupported constraint type for partitioned tables -CREATE TABLE partitioned ( - a int PRIMARY KEY -) PARTITION BY RANGE (a); - CREATE TABLE pkrel ( a int PRIMARY KEY ); @@ -310,10 +306,6 @@ CREATE TABLE partitioned ( ) PARTITION BY RANGE (a); DROP TABLE pkrel; -CREATE TABLE partitioned ( - a int UNIQUE -) PARTITION BY RANGE (a); - CREATE TABLE partitioned 
( a int, EXCLUDE USING gist (a WITH &&) diff --git a/src/test/regress/sql/indexing.sql b/src/test/regress/sql/indexing.sql index 1a9ea89ade..f3d0387f34 100644 --- a/src/test/regress/sql/indexing.sql +++ b/src/test/regress/sql/indexing.sql @@ -15,7 +15,6 @@ drop table idxpart; -- Some unsupported features create table idxpart (a int, b int, c text) partition by range (a); create table idxpart1 partition of idxpart for values from (0) to (10); -create unique index on idxpart (a); create index concurrently on idxpart (a); drop table idxpart; @@ -383,6 +382,175 @@ select attrelid::regclass, attname, attnum from pg_attribute order by attrelid::regclass, attnum; drop table idxpart; +-- +-- Constraint-related indexes +-- + +-- Verify that it works to add primary key / unique to partitioned tables +create table idxpart (a int primary key, b int) partition by range (a); +\d idxpart +drop table idxpart; + +-- but not if you fail to use the full partition key +create table idxpart (a int unique, b int) partition by range (a, b); +create table idxpart (a int, b int unique) partition by range (a, b); +create table idxpart (a int primary key, b int) partition by range (b, a); +create table idxpart (a int, b int primary key) partition by range (b, a); + +-- OK if you use them in some other order +create table idxpart (a int, b int, c text, primary key (a, b, c)) partition by range (b, c, a); +drop table idxpart; + +-- not other types of index-based constraints +create table idxpart (a int, exclude (a with = )) partition by range (a); + +-- no expressions in partition key for PK/UNIQUE +create table idxpart (a int primary key, b int) partition by range ((b + a)); +create table idxpart (a int unique, b int) partition by range ((b + a)); + +-- use ALTER TABLE to add a primary key +create table idxpart (a int, b int, c text) partition by range (a, b); +alter table idxpart add primary key (a); -- not an incomplete one though +alter table idxpart add primary key (a, b); -- this works +\d idxpart +create table idxpart1 partition of idxpart for values from (0, 0) to (1000, 1000); +\d idxpart1 +drop table idxpart; + +-- use ALTER TABLE to add a unique constraint +create table idxpart (a int, b int) partition by range (a, b); +alter table idxpart add unique (a); -- not an incomplete one though +alter table idxpart add unique (b, a); -- this works +\d idxpart +drop table idxpart; + +-- Exclusion constraints cannot be added +create table idxpart (a int, b int) partition by range (a); +alter table idxpart add exclude (a with =); +drop table idxpart; + +-- When (sub)partitions are created, they also contain the constraint +create table idxpart (a int, b int, primary key (a, b)) partition by range (a, b); +create table idxpart1 partition of idxpart for values from (1, 1) to (10, 10); +create table idxpart2 partition of idxpart for values from (10, 10) to (20, 20) + partition by range (b); +create table idxpart21 partition of idxpart2 for values from (10) to (15); +create table idxpart22 partition of idxpart2 for values from (15) to (20); +create table idxpart3 (b int not null, a int not null); +alter table idxpart attach partition idxpart3 for values from (20, 20) to (30, 30); +select conname, contype, conrelid::regclass, conindid::regclass, conkey + from pg_constraint where conrelid::regclass::text like 'idxpart%' + order by conname; +drop table idxpart; + +-- Verify that multi-layer partitioning honors the requirement that all +-- columns in the partition key must appear in primary key +create table idxpart (a int, b 
int, primary key (a)) partition by range (a); +create table idxpart2 partition of idxpart +for values from (0) to (1000) partition by range (b); -- fail +drop table idxpart; + +-- Multi-layer partitioning works correctly in this case: +create table idxpart (a int, b int, primary key (a, b)) partition by range (a); +create table idxpart2 partition of idxpart for values from (0) to (1000) partition by range (b); +create table idxpart21 partition of idxpart2 for values from (0) to (1000); +select conname, contype, conrelid::regclass, conindid::regclass, conkey + from pg_constraint where conrelid::regclass::text like 'idxpart%' + order by conname; +drop table idxpart; + +-- If a partitioned table has a unique/PK constraint, then it's not possible +-- to drop the corresponding constraint in the children; nor is it possible +-- to drop the indexes individually. Dropping the constraint in the parent +-- gets rid of the lot. +create table idxpart (i int) partition by hash (i); +create table idxpart0 partition of idxpart (i) for values with (modulus 2, remainder 0); +create table idxpart1 partition of idxpart (i) for values with (modulus 2, remainder 1); +alter table idxpart0 add primary key(i); +alter table idxpart add primary key(i); +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; +drop index idxpart0_pkey; -- fail +drop index idxpart1_pkey; -- fail +alter table idxpart0 drop constraint idxpart0_pkey; -- fail +alter table idxpart1 drop constraint idxpart1_pkey; -- fail +alter table idxpart drop constraint idxpart_pkey; -- ok +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; +drop table idxpart; + +-- If a partitioned table has a constraint whose index is not valid, +-- attaching a missing partition makes it valid.
+create table idxpart (a int) partition by range (a); +create table idxpart0 (like idxpart); +alter table idxpart0 add primary key (a); +alter table idxpart attach partition idxpart0 for values from (0) to (1000); +alter table only idxpart add primary key (a); +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; +alter index idxpart_pkey attach partition idxpart0_pkey; +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; +drop table idxpart; + +-- if a partition has a unique index without a constraint, it does not attach +automatically; a new index is created instead. +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 (a int not null, b int); +create unique index on idxpart1 (a); +alter table idxpart add primary key (a); +alter table idxpart attach partition idxpart1 for values from (1) to (1000); +select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid, + conname, conislocal, coninhcount, connoinherit, convalidated + from pg_index idx left join pg_inherits inh on (idx.indexrelid = inh.inhrelid) + left join pg_constraint con on (idx.indexrelid = con.conindid) + where indrelid::regclass::text like 'idxpart%' + order by indexrelid::regclass::text collate "C"; +drop table idxpart; + +-- Can't attach an index without a corresponding constraint +create table idxpart (a int, b int) partition by range (a); +create table idxpart1 (a int not null, b int); +create unique index on idxpart1 (a); +alter table idxpart attach partition idxpart1 for values from (1) to (1000); +alter table only idxpart add primary key (a); +alter index idxpart_pkey attach partition idxpart1_a_idx; -- fail +drop table idxpart; + +-- Test that unique constraints are working +create table idxpart (a int, b text, primary key (a, b)) partition by range (a); +create table idxpart1 partition of idxpart for values from (0) to (100000); +create table idxpart2 (c int, like idxpart); +insert into idxpart2 (c, a, b) values (42, 572814, 'inserted first'); +alter table idxpart2 drop column c; +create unique index on idxpart (a); +alter table idxpart attach partition idxpart2 for values from (100000) to (1000000); +insert into idxpart values (0, 'zero'), (42, 'life'), (2^16, 'sixteen'); +insert into idxpart select 2^g, format('two to power of %s', g) from generate_series(15, 17) g; +insert into idxpart values (16, 'sixteen'); +insert into idxpart (b, a) values ('one', 142857), ('two', 285714); +insert into idxpart select a * 2, b || b from idxpart where a between 2^16 and 2^19; +insert into idxpart values (572814, 'five'); +insert into idxpart values (857142, 'six'); +select tableoid::regclass, * from idxpart order by a; +drop table idxpart; + -- intentionally leave some objects around create table idxpart (a int) partition by range (a); create table idxpart1 partition of idxpart for values from (0) to (100); @@ -394,3
+562,5 @@ create index on idxpart22 (a); create index on only idxpart2 (a); alter index idxpart2_a_idx attach partition idxpart22_a_idx; create index on idxpart (a); +create table idxpart_another (a int, b int, primary key (a, b)) partition by range (a); +create table idxpart_another_1 partition of idxpart_another for values from (0) to (100); From 4108a28d3a02c4226b0f558cf00738e00e8ea2a1 Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 19 Feb 2018 17:56:43 -0300 Subject: [PATCH 1020/1087] Fix expected output --- src/test/regress/expected/indexing.out | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/test/regress/expected/indexing.out b/src/test/regress/expected/indexing.out index 0a980dc07d..85e3575b99 100644 --- a/src/test/regress/expected/indexing.out +++ b/src/test/regress/expected/indexing.out @@ -918,7 +918,7 @@ select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid indrelid | indexrelid | inhparent | indisvalid | conname | conislocal | coninhcount | connoinherit | convalidated ----------+---------------+--------------+------------+---------------+------------+-------------+--------------+-------------- idxpart0 | idxpart0_pkey | idxpart_pkey | t | idxpart0_pkey | f | 1 | t | t - idxpart1 | idxpart1_pkey | idxpart_pkey | t | idxpart1_pkey | f | 1 | t | t + idxpart1 | idxpart1_pkey | idxpart_pkey | t | idxpart1_pkey | f | 1 | f | t idxpart | idxpart_pkey | | t | idxpart_pkey | t | 0 | t | t (3 rows) @@ -993,7 +993,7 @@ select indrelid::regclass, indexrelid::regclass, inhparent::regclass, indisvalid indrelid | indexrelid | inhparent | indisvalid | conname | conislocal | coninhcount | connoinherit | convalidated ----------+----------------+--------------+------------+---------------+------------+-------------+--------------+-------------- idxpart1 | idxpart1_a_idx | | t | | | | | - idxpart1 | idxpart1_pkey | idxpart_pkey | t | idxpart1_pkey | f | 1 | t | t + idxpart1 | idxpart1_pkey | idxpart_pkey | t | idxpart1_pkey | f | 1 | f | t idxpart | idxpart_pkey | | t | idxpart_pkey | t | 0 | t | t (3 rows) From 159efe4af4509741c25d6b95ddd9fda86facce42 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 19 Feb 2018 16:00:18 -0500 Subject: [PATCH 1021/1087] Fix misbehavior of CTE-used-in-a-subplan during EPQ rechecks. An updating query that reads a CTE within an InitPlan or SubPlan could get incorrect results if it updates rows that are concurrently being modified. This is caused by CteScanNext supposing that nothing inside its recursive ExecProcNode call could change which read pointer is selected in the CTE's shared tuplestore. While that's normally true because of scoping considerations, it can break down if an EPQ plan tree gets built during the call, because EvalPlanQualStart builds execution trees for all subplans whether they're going to be used during the recheck or not. And it seems like a pretty shaky assumption anyway, so let's just reselect our own read pointer here. Per bug #14870 from Andrei Gorita. This has been broken since CTEs were implemented, so back-patch to all supported branches. 
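In outline, the fix is a single defensive call in CteScanNext. As a condensed sketch (paraphrased from the function with details elided; the actual hunk appears below):

    /*
     * Condensed sketch of CteScanNext's producer path: the recursive
     * ExecProcNode() call may have built an EPQ plan tree and thereby moved
     * the tuplestore's active read pointer, so reselect our own pointer
     * before appending the tuple we just fetched.
     */
    cteslot = ExecProcNode(node->cteplanstate);
    if (TupIsNull(cteslot))
    {
        node->leader->eof_cte = true;   /* subplan is exhausted */
        return NULL;
    }
    tuplestore_select_read_pointer(tuplestorestate, node->readptr);
    tuplestore_puttupleslot(tuplestorestate, cteslot);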
Discussion: https://postgr.es/m/20171024155358.1471.82377@wrigleys.postgresql.org --- src/backend/executor/nodeCtescan.c | 13 +++++++++++++ src/test/isolation/expected/eval-plan-qual.out | 15 +++++++++++++++ src/test/isolation/specs/eval-plan-qual.spec | 9 +++++++++ 3 files changed, 37 insertions(+) diff --git a/src/backend/executor/nodeCtescan.c b/src/backend/executor/nodeCtescan.c index 218619c760..24700dd396 100644 --- a/src/backend/executor/nodeCtescan.c +++ b/src/backend/executor/nodeCtescan.c @@ -107,6 +107,13 @@ CteScanNext(CteScanState *node) return NULL; } + /* + * There are corner cases where the subplan could change which + * tuplestore read pointer is active, so be sure to reselect ours + * before storing the tuple we got. + */ + tuplestore_select_read_pointer(tuplestorestate, node->readptr); + /* * Append a copy of the returned tuple to tuplestore. NOTE: because * our read pointer is certainly in EOF state, its read position will @@ -178,6 +185,12 @@ ExecInitCteScan(CteScan *node, EState *estate, int eflags) * we might be asked to rescan the CTE even though upper levels didn't * tell us to be prepared to do it efficiently. Annoying, since this * prevents truncation of the tuplestore. XXX FIXME + * + * Note: if we are in an EPQ recheck plan tree, it's likely that no access + * to the tuplestore is needed at all, making this even more annoying. + * It's not worth improving that as long as all the read pointers would + * have REWIND anyway, but if we ever improve this logic then that aspect + * should be considered too. */ eflags |= EXEC_FLAG_REWIND; diff --git a/src/test/isolation/expected/eval-plan-qual.out b/src/test/isolation/expected/eval-plan-qual.out index eb40717679..fed01459cf 100644 --- a/src/test/isolation/expected/eval-plan-qual.out +++ b/src/test/isolation/expected/eval-plan-qual.out @@ -217,3 +217,18 @@ id data id data 9 0 9 0 10 0 10 0 step c1: COMMIT; + +starting permutation: wrtwcte multireadwcte c1 c2 +step wrtwcte: UPDATE table_a SET value = 'tableAValue2' WHERE id = 1; +step multireadwcte: + WITH updated AS ( + UPDATE table_a SET value = 'tableAValue3' WHERE id = 1 RETURNING id + ) + SELECT (SELECT id FROM updated) AS subid, * FROM updated; + +step c1: COMMIT; +step c2: COMMIT; +step multireadwcte: <... completed> +subid id + +1 1 diff --git a/src/test/isolation/specs/eval-plan-qual.spec b/src/test/isolation/specs/eval-plan-qual.spec index d2b34ec7cc..51ab61dc86 100644 --- a/src/test/isolation/specs/eval-plan-qual.spec +++ b/src/test/isolation/specs/eval-plan-qual.spec @@ -139,6 +139,14 @@ step "readwcte" { SELECT * FROM cte2; } +# this test exercises a different CTE misbehavior, cf bug #14870 +step "multireadwcte" { + WITH updated AS ( + UPDATE table_a SET value = 'tableAValue3' WHERE id = 1 RETURNING id + ) + SELECT (SELECT id FROM updated) AS subid, * FROM updated; +} + teardown { COMMIT; } permutation "wx1" "wx2" "c1" "c2" "read" @@ -151,3 +159,4 @@ permutation "wx2" "lockwithvalues" "c2" "c1" "read" permutation "updateforss" "readforss" "c1" "c2" permutation "wrtwcte" "readwcte" "c1" "c2" permutation "wrjt" "selectjoinforupdate" "c2" "c1" +permutation "wrtwcte" "multireadwcte" "c1" "c2" From 6f1d723b6359507ef55a81617167507bc25e3e2b Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Mon, 19 Feb 2018 18:00:53 -0300 Subject: [PATCH 1022/1087] Fix crash in pg_replication_slot_advance We were trying to use a LSN variable after releasing its containing slot structure. 
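In outline (a condensed sketch of the affected code path, not the full function):

    /*
     * Condensed sketch: copy anything still needed out of the shared slot
     * structure *before* releasing it.  After ReplicationSlotRelease(),
     * MyReplicationSlot is no longer ours and its memory may be reused.
     */
    XLogRecPtr  startlsn = MyReplicationSlot->data.confirmed_flush; /* save first */

    ReplicationSlotRelease();
    /* ... from here on, report startlsn, never MyReplicationSlot->data.* ... */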
Reported by: tushar Author: amul sul Reviewed-by: Petr Jelinek, Masahiko Sawada Discussion: https://postgr.es/m/94ba999c-f76a-0423-6523-b8d531dfe4c7@enterprisedb.com --- src/backend/replication/slotfuncs.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c index cf2195bc93..e873dd1f81 100644 --- a/src/backend/replication/slotfuncs.c +++ b/src/backend/replication/slotfuncs.c @@ -480,8 +480,7 @@ pg_replication_slot_advance(PG_FUNCTION_ARGS) (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("cannot move slot to %X/%X, minimum is %X/%X", (uint32) (moveto >> 32), (uint32) moveto, - (uint32) (MyReplicationSlot->data.confirmed_flush >> 32), - (uint32) (MyReplicationSlot->data.confirmed_flush)))); + (uint32) (startlsn >> 32), (uint32) startlsn))); } if (OidIsValid(MyReplicationSlot->data.database)) From 9a44a26b65d3d36867267624b76d3dea3dc4f6f6 Mon Sep 17 00:00:00 2001 From: Magnus Hagander Date: Tue, 20 Feb 2018 12:03:18 +0100 Subject: [PATCH 1023/1087] Fix typo Author: Masahiko Sawada --- src/backend/storage/ipc/procarray.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c index 1a00011adc..afe1c03aa3 100644 --- a/src/backend/storage/ipc/procarray.c +++ b/src/backend/storage/ipc/procarray.c @@ -2958,7 +2958,7 @@ CountOtherDBBackends(Oid databaseId, int *nbackends, int *nprepared) * * Install limits to future computations of the xmin horizon to prevent vacuum * and HOT pruning from removing affected rows still needed by clients with - * replicaton slots. + * replication slots. */ void ProcArraySetReplicationSlotXmin(TransactionId xmin, TransactionId catalog_xmin, From 9a89f6d85467be362f4d426c76439cea70cd327f Mon Sep 17 00:00:00 2001 From: Alvaro Herrera Date: Tue, 20 Feb 2018 12:08:55 -0300 Subject: [PATCH 1024/1087] Adjust ALTER TABLE docs on partitioned constraints Move the "additional restrictions" comment to ALTER TABLE ADD CONSTRAINT instead of ADD CONSTRAINT USING INDEX; and in the latter instead indicate that partitioned tables are unsupported Noted by David G. Johnston Discussion: https://postgr.es/m/CAKFQuwY4Ld7ecxL_KAmaxwt0FUu5VcPPN2L4dh+3BeYbrdBa5g@mail.gmail.com --- doc/src/sgml/ref/alter_table.sgml | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml index 5be56d4b28..afe213910c 100644 --- a/doc/src/sgml/ref/alter_table.sgml +++ b/doc/src/sgml/ref/alter_table.sgml @@ -370,6 +370,12 @@ WITH ( MODULUS numeric_literal, REM the table, until it is validated by using the VALIDATE CONSTRAINT option. + + + Additional restrictions apply when unique or primary key constraints + are added to partitioned tables; see . + + @@ -413,8 +419,7 @@ WITH ( MODULUS numeric_literal, REM - Additional restrictions apply when unique or primary key constraints - are added to partitioned tables; see . + This form is not currently supported on partitioned tables. From 3486bcf9e89d87b59d0e370af098fda38be97209 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Tue, 20 Feb 2018 11:23:33 -0500 Subject: [PATCH 1025/1087] Fix pg_dump's logic for eliding sequence limits that match the defaults. The previous coding here applied atoi() to strings that could represent values too large to fit in an int. 
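A standalone C illustration of the hazard (ordinary libc code, not part of the patch):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        /*
         * atoi() has undefined behavior on out-of-range input.  With a
         * typical glibc-style implementation on a 64-bit platform, the
         * bigint limit below comes back truncated to -1.
         */
        const char *maxv = "9223372036854775807";   /* PG_INT64_MAX as text */

        printf("atoi(\"%s\") = %d\n", maxv, atoi(maxv));  /* commonly prints -1 */
        return 0;
    }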
If the overflowed value happened to match one of the cases it was looking for, it would drop that limit value from the output, leading to incorrect restoration of the sequence. Avoid the unsafe behavior, and also make the logic cleaner by explicitly calculating the default min/max values for the appropriate kind of sequence. Reported and patched by Alexey Bashtanov, though I whacked his patch around a bit. Back-patch to v10 where the faulty logic was added. Discussion: https://postgr.es/m/cb85a9a5-946b-c7c4-9cf2-6cd6e25d7a33@imap.cc --- src/bin/pg_dump/pg_dump.c | 53 +++++++++++++++++++++------------------ 1 file changed, 29 insertions(+), 24 deletions(-) diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 06bbc5033d..0ffb3725e0 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -16843,6 +16843,10 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) *seqtype; bool cycled; bool is_ascending; + int64 default_minv, + default_maxv; + char bufm[32], + bufx[32]; PQExpBuffer query = createPQExpBuffer(); PQExpBuffer delqry = createPQExpBuffer(); PQExpBuffer labelq = createPQExpBuffer(); @@ -16912,40 +16916,41 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) cache = PQgetvalue(res, 0, 5); cycled = (strcmp(PQgetvalue(res, 0, 6), "t") == 0); - is_ascending = incby[0] != '-'; - - if (is_ascending && atoi(minv) == 1) - minv = NULL; - if (!is_ascending && atoi(maxv) == -1) - maxv = NULL; - + /* Calculate default limits for a sequence of this type */ + is_ascending = (incby[0] != '-'); if (strcmp(seqtype, "smallint") == 0) { - if (!is_ascending && atoi(minv) == PG_INT16_MIN) - minv = NULL; - if (is_ascending && atoi(maxv) == PG_INT16_MAX) - maxv = NULL; + default_minv = is_ascending ? 1 : PG_INT16_MIN; + default_maxv = is_ascending ? PG_INT16_MAX : -1; } else if (strcmp(seqtype, "integer") == 0) { - if (!is_ascending && atoi(minv) == PG_INT32_MIN) - minv = NULL; - if (is_ascending && atoi(maxv) == PG_INT32_MAX) - maxv = NULL; + default_minv = is_ascending ? 1 : PG_INT32_MIN; + default_maxv = is_ascending ? PG_INT32_MAX : -1; } else if (strcmp(seqtype, "bigint") == 0) { - char bufm[100], - bufx[100]; + default_minv = is_ascending ? 1 : PG_INT64_MIN; + default_maxv = is_ascending ? PG_INT64_MAX : -1; + } + else + { + exit_horribly(NULL, "unrecognized sequence type: %s\n", seqtype); + default_minv = default_maxv = 0; /* keep compiler quiet */ + } - snprintf(bufm, sizeof(bufm), INT64_FORMAT, PG_INT64_MIN); - snprintf(bufx, sizeof(bufx), INT64_FORMAT, PG_INT64_MAX); + /* + * 64-bit strtol() isn't very portable, so convert the limits to strings + * and compare that way. 
+ */ + snprintf(bufm, sizeof(bufm), INT64_FORMAT, default_minv); + snprintf(bufx, sizeof(bufx), INT64_FORMAT, default_maxv); - if (!is_ascending && strcmp(minv, bufm) == 0) - minv = NULL; - if (is_ascending && strcmp(maxv, bufx) == 0) - maxv = NULL; - } + /* Don't print minv/maxv if they match the respective default limit */ + if (strcmp(minv, bufm) == 0) + minv = NULL; + if (strcmp(maxv, bufx) == 0) + maxv = NULL; /* * DROP must be fully qualified in case same name appears in pg_catalog From c2ff42c6c1631c6c67d09fc8574186a984566a0d Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 20 Feb 2018 17:58:27 -0500 Subject: [PATCH 1026/1087] Error message improvement --- src/backend/commands/tablecmds.c | 2 +- src/test/regress/expected/truncate.out | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index db6c8ff00e..74e020bffc 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -1369,7 +1369,7 @@ ExecuteTruncate(TruncateStmt *stmt) ereport(ERROR, (errcode(ERRCODE_WRONG_OBJECT_TYPE), errmsg("cannot truncate only a partitioned table"), - errhint("Do not specify the ONLY keyword, or use truncate only on the partitions directly."))); + errhint("Do not specify the ONLY keyword, or use TRUNCATE ONLY on the partitions directly."))); } /* diff --git a/src/test/regress/expected/truncate.out b/src/test/regress/expected/truncate.out index d967e8dd21..735d0e862d 100644 --- a/src/test/regress/expected/truncate.out +++ b/src/test/regress/expected/truncate.out @@ -455,12 +455,12 @@ CREATE TABLE truncparted (a int, b char) PARTITION BY LIST (a); -- error, can't truncate a partitioned table TRUNCATE ONLY truncparted; ERROR: cannot truncate only a partitioned table -HINT: Do not specify the ONLY keyword, or use truncate only on the partitions directly. +HINT: Do not specify the ONLY keyword, or use TRUNCATE ONLY on the partitions directly. CREATE TABLE truncparted1 PARTITION OF truncparted FOR VALUES IN (1); INSERT INTO truncparted VALUES (1, 'a'); -- error, must truncate partitions TRUNCATE ONLY truncparted; ERROR: cannot truncate only a partitioned table -HINT: Do not specify the ONLY keyword, or use truncate only on the partitions directly. +HINT: Do not specify the ONLY keyword, or use TRUNCATE ONLY on the partitions directly. TRUNCATE truncparted; DROP TABLE truncparted; From 4c0ec9ee28279cc6a610cde8470fc8b606267b68 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 20 Feb 2018 15:12:52 -0800 Subject: [PATCH 1027/1087] Use platform independent type for TupleTableSlot->tts_off. Previously tts_off was, for unknown reasons, of type long. For one that's unnecessary as tuples are restricted in length, for another long would be a bad choice of type even if that weren't the case, as it's not reliably wider than an int. Also HeapTupleHeader->t_len is a uint32. This is split off from a larger patch implementing JITed tuple deforming. Seems like an independent improvement, as tiny as it is. 
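The width claim is easy to verify with a standalone program (ordinary C, not part of the patch):

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        /*
         * On LP64 platforms (64-bit Linux, macOS) long is 8 bytes, but on
         * LLP64 (64-bit Windows) it is only 4 -- no wider than int -- while
         * uint32_t matches HeapTupleHeader's uint32 t_len everywhere.
         */
        printf("sizeof(int) = %zu, sizeof(long) = %zu, sizeof(uint32_t) = %zu\n",
               sizeof(int), sizeof(long), sizeof(uint32_t));
        return 0;
    }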
Author: Andres Freund --- src/backend/access/common/heaptuple.c | 4 ++-- src/include/executor/tuptable.h | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c index 0a13251067..a2f67f2332 100644 --- a/src/backend/access/common/heaptuple.c +++ b/src/backend/access/common/heaptuple.c @@ -942,7 +942,7 @@ heap_deform_tuple(HeapTuple tuple, TupleDesc tupleDesc, int natts; /* number of atts to extract */ int attnum; char *tp; /* ptr to tuple data */ - long off; /* offset in tuple data */ + uint32 off; /* offset in tuple data */ bits8 *bp = tup->t_bits; /* ptr to null bitmap in tuple */ bool slow = false; /* can we use/set attcacheoff? */ @@ -1043,7 +1043,7 @@ slot_deform_tuple(TupleTableSlot *slot, int natts) bool hasnulls = HeapTupleHasNulls(tuple); int attnum; char *tp; /* ptr to tuple data */ - long off; /* offset in tuple data */ + uint32 off; /* offset in tuple data */ bits8 *bp = tup->t_bits; /* ptr to null bitmap in tuple */ bool slow; /* can we use/set attcacheoff? */ diff --git a/src/include/executor/tuptable.h b/src/include/executor/tuptable.h index 8be0d5edc2..0642a3ada5 100644 --- a/src/include/executor/tuptable.h +++ b/src/include/executor/tuptable.h @@ -126,7 +126,7 @@ typedef struct TupleTableSlot bool *tts_isnull; /* current per-attribute isnull flags */ MinimalTuple tts_mintuple; /* minimal tuple, or NULL if none */ HeapTupleData tts_minhdr; /* workspace for minimal-tuple-only case */ - long tts_off; /* saved state for slot_deform_tuple */ + uint32 tts_off; /* saved state for slot_deform_tuple */ bool tts_fixedTupleDescriptor; /* descriptor can't be changed */ } TupleTableSlot; From 29d432e477a99f4c1e18820c5fc820a6b178c695 Mon Sep 17 00:00:00 2001 From: Andres Freund Date: Tue, 20 Feb 2018 18:24:00 -0800 Subject: [PATCH 1028/1087] Blindly attempt to adapt sepgsql regression tests. Commit bf6c614a2f2c58312b3be34a47e7fb7362e07bcb broke the sepgsql test due to a new invocation of the function access hook during grouping equal initialization. The new behaviour seems at least as correct as the old one, so try to adapt the tests. As I've no working sepgsql setup here, this is just going from buildfarm results.
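For context, a minimal hypothetical extension skeleton showing the hook mechanism involved; sepgsql's real callback performs SELinux permission checks rather than a plain log line, which is why the executor's extra function lookup surfaces as new "db_procedure ... int4eq" lines in the expected output below:

    #include "postgres.h"
    #include "catalog/objectaccess.h"
    #include "catalog/pg_proc.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    static object_access_hook_type prev_object_access_hook = NULL;

    /* Called by the backend on object accesses, including function execution. */
    static void
    demo_object_access(ObjectAccessType access, Oid classId,
                       Oid objectId, int subId, void *arg)
    {
        if (prev_object_access_hook)
            prev_object_access_hook(access, classId, objectId, subId, arg);

        if (access == OAT_FUNCTION_EXECUTE && classId == ProcedureRelationId)
            elog(LOG, "execute check for function with OID %u", objectId);
    }

    void
    _PG_init(void)
    {
        prev_object_access_hook = object_access_hook;
        object_access_hook = demo_object_access;
    }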
Author: Andres Freund Discussion: https://postgr.es/m/20180217000337.lfsdvro3l6ccsksp@alap3.anarazel.de --- contrib/sepgsql/expected/misc.out | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/contrib/sepgsql/expected/misc.out b/contrib/sepgsql/expected/misc.out index 98f8005a60..fdf07298bb 100644 --- a/contrib/sepgsql/expected/misc.out +++ b/contrib/sepgsql/expected/misc.out @@ -143,6 +143,7 @@ LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_reg LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name="table t1 column x" LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name="table t1 column y" LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.textlike(pg_catalog.text,pg_catalog.text)" +LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.int4eq(integer,integer)" LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.row_number()" row_number | x | y ------------+----+---------------------------------- @@ -172,6 +173,7 @@ LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_reg LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name="table t1p_tens column p" LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.textlike(pg_catalog.text,pg_catalog.text)" LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.textlike(pg_catalog.text,pg_catalog.text)" +LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.int4eq(integer,integer)" LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.row_number()" row_number | o | p ------------+----+---------------------------------- @@ -194,6 +196,7 @@ LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_reg LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name="table t1p_ones column o" LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name="table t1p_ones column p" LOG: SELinux: allowed { execute } 
scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.textlike(pg_catalog.text,pg_catalog.text)" +LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.int4eq(integer,integer)" LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.row_number()" row_number | o | p ------------+---+---------------------------------- @@ -205,6 +208,7 @@ LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_reg LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name="table t1p_tens column o" LOG: SELinux: allowed { select } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name="table t1p_tens column p" LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.textlike(pg_catalog.text,pg_catalog.text)" +LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.int4eq(integer,integer)" LOG: SELinux: allowed { execute } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure name="pg_catalog.row_number()" row_number | o | p ------------+----+---------------------------------- From 38b41f182a66b67e36e2adf53d078599b1b65483 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Wed, 21 Feb 2018 18:40:24 -0500 Subject: [PATCH 1029/1087] Repair pg_upgrade's failure to preserve relfrozenxid for matviews. This oversight led to data corruption in matviews, manifesting as "could not access status of transaction" before our most recent releases, and "found xmin from before relfrozenxid" errors since then. The proximate cause of the problem seems to have been confusion between the task of preserving dropped-column status and the task of preserving frozenxid status. Those are required for distinct sets of relkinds, and the reasoning was entirely undocumented in the source code. In hopes of forestalling future errors of the same kind, try to improve the commentary in this area. In passing, also improve the remarkably unhelpful comments around pg_upgrade's set_frozenxids(). That's not actually buggy AFAICS, but good luck figuring out what it does from the old comments. Per report from Claudio Freire. It appears that bug #14852 from Alexey Ermakov is an earlier report of the same issue, and there may be other cases that we failed to identify at the time. Patch by me based on analysis by Andres Freund. The bug dates back to the introduction of matviews, so back-patch to all supported branches. 
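The distinction the new comments draw can be distilled into two predicates (hypothetical helper names; pg_dump writes these conditions inline, as the hunks below show):

    /*
     * Hypothetical distillation of the two tasks the old code conflated.
     * Column-order/dropped-column fixup applies to relkinds that can appear
     * in inheritance trees; relfrozenxid/relminmxid preservation applies to
     * relkinds with vacuumable heap storage -- which includes matviews.
     */
    static bool
    needs_column_order_fixup(char relkind)
    {
        return relkind == RELKIND_RELATION ||
               relkind == RELKIND_FOREIGN_TABLE ||
               relkind == RELKIND_PARTITIONED_TABLE;
    }

    static bool
    needs_frozenxid_preservation(char relkind)
    {
        return relkind == RELKIND_RELATION ||
               relkind == RELKIND_MATVIEW;      /* the case this commit repairs */
    }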
Discussion: https://postgr.es/m/CAGTBQpbrY9CdRGGhyBZ9yqY4jWaGC85rUF4X+R7d-aim=mBNsw@mail.gmail.com Discussion: https://postgr.es/m/20171013115320.28049.86457@wrigleys.postgresql.org --- src/bin/pg_dump/pg_dump.c | 31 ++++++++++++++++++++++++++++--- src/bin/pg_upgrade/pg_upgrade.c | 33 ++++++++++++++++++++++----------- 2 files changed, 50 insertions(+), 14 deletions(-) diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 0ffb3725e0..f7a079f0b1 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -6548,7 +6548,8 @@ getTables(Archive *fout, int *numTables) * alterations to parent tables. * * NOTE: it'd be kinda nice to lock other relations too, not only - * plain tables, but the backend doesn't presently allow that. + * plain or partitioned tables, but the backend doesn't presently + * allow that. * * We only need to lock the table for certain components; see * pg_dump.h @@ -15898,6 +15899,14 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) * column order. That also means we have to take care about setting * attislocal correctly, plus fix up any inherited CHECK constraints. * Analogously, we set up typed tables using ALTER TABLE / OF here. + * + * We process foreign and partitioned tables here, even though they + * lack heap storage, because they can participate in inheritance + * relationships and we want this stuff to be consistent across the + * inheritance tree. We can exclude indexes, toast tables, sequences + * and matviews, even though they have storage, because we don't + * support altering or dropping columns in them, nor can they be part + * of inheritance trees. */ if (dopt->binary_upgrade && (tbinfo->relkind == RELKIND_RELATION || @@ -16009,7 +16018,19 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) fmtId(tbinfo->dobj.name), tbinfo->reloftype); } + } + /* + * In binary_upgrade mode, arrange to restore the old relfrozenxid and + * relminmxid of all vacuumable relations. (While vacuum.c processes + * TOAST tables semi-independently, here we see them only as children + * of other relations; so this "if" lacks RELKIND_TOASTVALUE, and the + * child toast table is handled below.) + */ + if (dopt->binary_upgrade && + (tbinfo->relkind == RELKIND_RELATION || + tbinfo->relkind == RELKIND_MATVIEW)) + { appendPQExpBufferStr(q, "\n-- For binary upgrade, set heap's relfrozenxid and relminmxid\n"); appendPQExpBuffer(q, "UPDATE pg_catalog.pg_class\n" "SET relfrozenxid = '%u', relminmxid = '%u'\n" @@ -16020,7 +16041,10 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) if (tbinfo->toast_oid) { - /* We preserve the toast oids, so we can use it during restore */ + /* + * The toast table will have the same OID at restore, so we + * can safely target it by OID. + */ appendPQExpBufferStr(q, "\n-- For binary upgrade, set toast's relfrozenxid and relminmxid\n"); appendPQExpBuffer(q, "UPDATE pg_catalog.pg_class\n" "SET relfrozenxid = '%u', relminmxid = '%u'\n" @@ -16034,7 +16058,8 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) * In binary_upgrade mode, restore matviews' populated status by * poking pg_class directly. This is pretty ugly, but we can't use * REFRESH MATERIALIZED VIEW since it's possible that some underlying - * matview is not populated even though this matview is. + * matview is not populated even though this matview is; in any case, + * we want to transfer the matview's heap storage, not run REFRESH. 
*/ if (dopt->binary_upgrade && tbinfo->relkind == RELKIND_MATVIEW && tbinfo->relispopulated) diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index 3f57a25b05..bbfa4c1ef3 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -278,13 +278,13 @@ static void prepare_new_globals(void) { /* - * We set autovacuum_freeze_max_age to its maximum value so autovacuum - * does not launch here and delete clog files, before the frozen xids are - * set. + * Before we restore anything, set frozenxids of initdb-created tables. */ - set_frozenxids(false); + /* + * Now restore global objects (roles and tablespaces). + */ prep_status("Restoring global objects in the new cluster"); exec_prog(UTILITY_LOG_FILE, NULL, true, true, @@ -506,14 +506,25 @@ copy_xact_xlog_xid(void) /* * set_frozenxids() * - * We have frozen all xids, so set datfrozenxid, relfrozenxid, and - * relminmxid to be the old cluster's xid counter, which we just set - * in the new cluster. User-table frozenxid and minmxid values will - * be set by pg_dump --binary-upgrade, but objects not set by the pg_dump - * must have proper frozen counters. + * This is called on the new cluster before we restore anything, with + * minmxid_only = false. Its purpose is to ensure that all initdb-created + * vacuumable tables have relfrozenxid/relminmxid matching the old cluster's + * xid/mxid counters. We also initialize the datfrozenxid/datminmxid of the + * built-in databases to match. + * + * As we create user tables later, their relfrozenxid/relminmxid fields will + * be restored properly by the binary-upgrade restore script. Likewise for + * user-database datfrozenxid/datminmxid. However, if we're upgrading from a + * pre-9.3 database, which does not store per-table or per-DB minmxid, then + * the relminmxid/datminmxid values filled in by the restore script will just + * be zeroes. + * + * Hence, with a pre-9.3 source database, a second call occurs after + * everything is restored, with minmxid_only = true. This pass will + * initialize all tables and databases, both those made by initdb and user + * objects, with the desired minmxid value. frozenxid values are left alone. */ -static -void +static void set_frozenxids(bool minmxid_only) { int dbnum; From 7d8ac9814bc9bb6df2d845dbabed38d7284c7c2c Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Wed, 21 Feb 2018 23:09:27 -0500 Subject: [PATCH 1030/1087] Charge cpu_tuple_cost * 0.5 for Append and MergeAppend nodes. Previously, Append didn't charge anything at all, and MergeAppend charged only cpu_operator_cost, about half the value used here. This change might make MergeAppend plans slightly more likely to be chosen than before, since this commit increases the assumed cost for Append -- with default values -- by 0.005 per tuple but MergeAppend by only 0.0025 per tuple. Since the comparisons required by MergeAppend are costed separately, it's not clear why MergeAppend needs to be otherwise more expensive than Append, so hopefully this is OK. Prior to partition-wise join, it didn't really matter whether or not an Append node had any cost of its own, because every plan had to use the same number of Append or MergeAppend nodes and in the same places. Only the relative cost of Append vs. MergeAppend made a difference. Now, however, it is possible to avoid some of the Append nodes using a partition-wise join, so it's worth making an effort. 
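To make the relative shift concrete, a standalone calculation using the default cost parameters named in this commit (cpu_tuple_cost = 0.01, cpu_operator_cost = 0.0025), for a node emitting 1000 tuples:

    #include <stdio.h>

    int
    main(void)
    {
        const double cpu_tuple_cost = 0.01;         /* default GUC value */
        const double cpu_operator_cost = 0.0025;    /* default GUC value */
        const double ntuples = 1000.0;

        /* Append: previously free, now cpu_tuple_cost * 0.5 per tuple. */
        printf("Append:      old %.2f, new %.2f\n",
               0.0, cpu_tuple_cost * 0.5 * ntuples);        /* 0.00 -> 5.00 */

        /* MergeAppend: previously charged cpu_operator_cost per tuple. */
        printf("MergeAppend: old %.2f, new %.2f\n",
               cpu_operator_cost * ntuples,                 /* 2.50 */
               cpu_tuple_cost * 0.5 * ntuples);             /* 5.00 */
        return 0;
    }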
Pending patches for partition-wise aggregate care too, because an Append of Aggregate nodes will incur the Append overhead fewer times than an Aggregate over an Append. Although in most cases this change will favor the use of partition-wise techniques, it does the opposite when the join cardinality is greater than the sum of the input cardinalities. Since this situation arises in an existing regression test, I [rhaas] adjusted it to keep the overall plan shape approximately the same. Jeevan Chalke, per a suggestion from David Rowley. Reviewed by Ashutosh Bapat. Some changes by me. The larger patch series of which this patch is a part was also reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal Legrand, Rafia Sabih, and me. Discussion: http://postgr.es/m/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B=ZRh-rxy9qxfPA5Gw@mail.gmail.com --- src/backend/optimizer/path/costsize.c | 26 ++-- src/test/regress/expected/partition_join.out | 136 +++++++++---------- src/test/regress/expected/subselect.out | 4 +- src/test/regress/sql/partition_join.sql | 8 +- 4 files changed, 91 insertions(+), 83 deletions(-) diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c index 16ef348f40..d8db0b29e1 100644 --- a/src/backend/optimizer/path/costsize.c +++ b/src/backend/optimizer/path/costsize.c @@ -100,6 +100,13 @@ #define LOG2(x) (log(x) / 0.693147180559945) +/* + * Append and MergeAppend nodes are less expensive than some other operations + * which use cpu_tuple_cost; instead of adding a separate GUC, estimate the + * per-tuple cost as cpu_tuple_cost multiplied by this value. + */ +#define APPEND_CPU_COST_MULTIPLIER 0.5 + double seq_page_cost = DEFAULT_SEQ_PAGE_COST; double random_page_cost = DEFAULT_RANDOM_PAGE_COST; @@ -1828,10 +1835,6 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers) /* * cost_append * Determines and returns the cost of an Append node. - * - * We charge nothing extra for the Append itself, which perhaps is too - * optimistic, but since it doesn't do any selection or projection, it is a - * pretty cheap node. */ void cost_append(AppendPath *apath) @@ -1914,6 +1917,13 @@ cost_append(AppendPath *apath) apath->first_partial_path, apath->path.parallel_workers); } + + /* + * Although Append does not do any selection or projection, it's not free; + * add a small per-tuple overhead. + */ + apath->path.total_cost += + cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER * apath->path.rows; } /* @@ -1968,12 +1978,10 @@ cost_merge_append(Path *path, PlannerInfo *root, run_cost += tuples * comparison_cost * logN; /* - * Also charge a small amount (arbitrarily set equal to operator cost) per - * extracted tuple. We don't charge cpu_tuple_cost because a MergeAppend - * node doesn't do qual-checking or projection, so it has less overhead - * than most plan nodes. + * Although MergeAppend does not do any selection or projection, it's not + * free; add a small per-tuple overhead. 
*/ - run_cost += cpu_operator_cost * tuples; + run_cost += cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER * tuples; path->startup_cost = startup_cost + input_startup_cost; path->total_cost = startup_cost + run_cost + input_total_cost; diff --git a/src/test/regress/expected/partition_join.out b/src/test/regress/expected/partition_join.out index 636bedadf2..a72d8bc208 100644 --- a/src/test/regress/expected/partition_join.out +++ b/src/test/regress/expected/partition_join.out @@ -1144,59 +1144,59 @@ INSERT INTO plt1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_ser ANALYZE plt1_e; -- test partition matching with N-way join EXPLAIN (COSTS OFF) -SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; QUERY PLAN -------------------------------------------------------------------------------------- - Sort - Sort Key: t1.c, t3.c - -> HashAggregate - Group Key: t1.c, t2.c, t3.c + GroupAggregate + Group Key: t1.c, t2.c, t3.c + -> Sort + Sort Key: t1.c, t3.c -> Result -> Append -> Hash Join - Hash Cond: (t1.c = t2.c) - -> Seq Scan on plt1_p1 t1 - -> Hash - -> Hash Join - Hash Cond: (t2.c = ltrim(t3.c, 'A'::text)) + Hash Cond: (t1.c = ltrim(t3.c, 'A'::text)) + -> Hash Join + Hash Cond: ((t1.b = t2.b) AND (t1.c = t2.c)) + -> Seq Scan on plt1_p1 t1 + -> Hash -> Seq Scan on plt2_p1 t2 - -> Hash - -> Seq Scan on plt1_e_p1 t3 - -> Hash Join - Hash Cond: (t1_1.c = t2_1.c) - -> Seq Scan on plt1_p2 t1_1 -> Hash - -> Hash Join - Hash Cond: (t2_1.c = ltrim(t3_1.c, 'A'::text)) - -> Seq Scan on plt2_p2 t2_1 - -> Hash - -> Seq Scan on plt1_e_p2 t3_1 + -> Seq Scan on plt1_e_p1 t3 -> Hash Join - Hash Cond: (t1_2.c = t2_2.c) - -> Seq Scan on plt1_p3 t1_2 + Hash Cond: (t1_1.c = ltrim(t3_1.c, 'A'::text)) + -> Hash Join + Hash Cond: ((t1_1.b = t2_1.b) AND (t1_1.c = t2_1.c)) + -> Seq Scan on plt1_p2 t1_1 + -> Hash + -> Seq Scan on plt2_p2 t2_1 -> Hash - -> Hash Join - Hash Cond: (t2_2.c = ltrim(t3_2.c, 'A'::text)) + -> Seq Scan on plt1_e_p2 t3_1 + -> Hash Join + Hash Cond: (t1_2.c = ltrim(t3_2.c, 'A'::text)) + -> Hash Join + Hash Cond: ((t1_2.b = t2_2.b) AND (t1_2.c = t2_2.c)) + -> Seq Scan on plt1_p3 t1_2 + -> Hash -> Seq Scan on plt2_p3 t2_2 - -> Hash - -> Seq Scan on plt1_e_p3 t3_2 + -> Hash + -> Seq Scan on plt1_e_p3 t3_2 (33 rows) -SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; avg | avg | avg | c | c | c ----------------------+----------------------+-----------------------+------+------+------- 24.0000000000000000 | 24.0000000000000000 | 48.0000000000000000 | 0000 | 0000 | A0000 - 74.0000000000000000 | 75.0000000000000000 | 148.0000000000000000 | 0001 | 0001 | A0001 - 124.0000000000000000 | 124.5000000000000000 | 248.0000000000000000 | 0002 | 0002 | A0002 + 75.0000000000000000 | 75.0000000000000000 | 148.0000000000000000 | 0001 | 0001 | A0001 + 123.0000000000000000 | 123.0000000000000000 | 
248.0000000000000000 | 0002 | 0002 | A0002 174.0000000000000000 | 174.0000000000000000 | 348.0000000000000000 | 0003 | 0003 | A0003 - 224.0000000000000000 | 225.0000000000000000 | 448.0000000000000000 | 0004 | 0004 | A0004 - 274.0000000000000000 | 274.5000000000000000 | 548.0000000000000000 | 0005 | 0005 | A0005 + 225.0000000000000000 | 225.0000000000000000 | 448.0000000000000000 | 0004 | 0004 | A0004 + 273.0000000000000000 | 273.0000000000000000 | 548.0000000000000000 | 0005 | 0005 | A0005 324.0000000000000000 | 324.0000000000000000 | 648.0000000000000000 | 0006 | 0006 | A0006 - 374.0000000000000000 | 375.0000000000000000 | 748.0000000000000000 | 0007 | 0007 | A0007 - 424.0000000000000000 | 424.5000000000000000 | 848.0000000000000000 | 0008 | 0008 | A0008 + 375.0000000000000000 | 375.0000000000000000 | 748.0000000000000000 | 0007 | 0007 | A0007 + 423.0000000000000000 | 423.0000000000000000 | 848.0000000000000000 | 0008 | 0008 | A0008 474.0000000000000000 | 474.0000000000000000 | 948.0000000000000000 | 0009 | 0009 | A0009 - 524.0000000000000000 | 525.0000000000000000 | 1048.0000000000000000 | 0010 | 0010 | A0010 - 574.0000000000000000 | 574.5000000000000000 | 1148.0000000000000000 | 0011 | 0011 | A0011 + 525.0000000000000000 | 525.0000000000000000 | 1048.0000000000000000 | 0010 | 0010 | A0010 + 573.0000000000000000 | 573.0000000000000000 | 1148.0000000000000000 | 0011 | 0011 | A0011 (12 rows) -- joins where one of the relations is proven empty @@ -1289,59 +1289,59 @@ INSERT INTO pht1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_ser ANALYZE pht1_e; -- test partition matching with N-way join EXPLAIN (COSTS OFF) -SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; - QUERY PLAN --------------------------------------------------------------------------------------- - Sort - Sort Key: t1.c, t3.c - -> HashAggregate - Group Key: t1.c, t2.c, t3.c +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; + QUERY PLAN +-------------------------------------------------------------------------------------------- + GroupAggregate + Group Key: t1.c, t2.c, t3.c + -> Sort + Sort Key: t1.c, t3.c -> Result -> Append -> Hash Join - Hash Cond: (t1.c = t2.c) - -> Seq Scan on pht1_p1 t1 - -> Hash - -> Hash Join - Hash Cond: (t2.c = ltrim(t3.c, 'A'::text)) + Hash Cond: (t1.c = ltrim(t3.c, 'A'::text)) + -> Hash Join + Hash Cond: ((t1.b = t2.b) AND (t1.c = t2.c)) + -> Seq Scan on pht1_p1 t1 + -> Hash -> Seq Scan on pht2_p1 t2 - -> Hash - -> Seq Scan on pht1_e_p1 t3 + -> Hash + -> Seq Scan on pht1_e_p1 t3 -> Hash Join - Hash Cond: (t1_1.c = t2_1.c) - -> Seq Scan on pht1_p2 t1_1 + Hash Cond: (ltrim(t3_1.c, 'A'::text) = t1_1.c) + -> Seq Scan on pht1_e_p2 t3_1 -> Hash -> Hash Join - Hash Cond: (t2_1.c = ltrim(t3_1.c, 'A'::text)) - -> Seq Scan on pht2_p2 t2_1 + Hash Cond: ((t1_1.b = t2_1.b) AND (t1_1.c = t2_1.c)) + -> Seq Scan on pht1_p2 t1_1 -> Hash - -> Seq Scan on pht1_e_p2 t3_1 + -> Seq Scan on pht2_p2 t2_1 -> Hash Join - Hash Cond: (t1_2.c = t2_2.c) - -> Seq Scan on pht1_p3 t1_2 + Hash Cond: (ltrim(t3_2.c, 'A'::text) = t1_2.c) + -> Seq Scan on pht1_e_p3 t3_2 -> Hash -> Hash Join - Hash Cond: (t2_2.c = ltrim(t3_2.c, 'A'::text)) - -> Seq Scan on pht2_p3 t2_2 + Hash Cond: ((t1_2.b = t2_2.b) AND (t1_2.c = 
t2_2.c)) + -> Seq Scan on pht1_p3 t1_2 -> Hash - -> Seq Scan on pht1_e_p3 t3_2 + -> Seq Scan on pht2_p3 t2_2 (33 rows) -SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; avg | avg | avg | c | c | c ----------------------+----------------------+-----------------------+------+------+------- 24.0000000000000000 | 24.0000000000000000 | 48.0000000000000000 | 0000 | 0000 | A0000 - 74.0000000000000000 | 75.0000000000000000 | 148.0000000000000000 | 0001 | 0001 | A0001 - 124.0000000000000000 | 124.5000000000000000 | 248.0000000000000000 | 0002 | 0002 | A0002 + 75.0000000000000000 | 75.0000000000000000 | 148.0000000000000000 | 0001 | 0001 | A0001 + 123.0000000000000000 | 123.0000000000000000 | 248.0000000000000000 | 0002 | 0002 | A0002 174.0000000000000000 | 174.0000000000000000 | 348.0000000000000000 | 0003 | 0003 | A0003 - 224.0000000000000000 | 225.0000000000000000 | 448.0000000000000000 | 0004 | 0004 | A0004 - 274.0000000000000000 | 274.5000000000000000 | 548.0000000000000000 | 0005 | 0005 | A0005 + 225.0000000000000000 | 225.0000000000000000 | 448.0000000000000000 | 0004 | 0004 | A0004 + 273.0000000000000000 | 273.0000000000000000 | 548.0000000000000000 | 0005 | 0005 | A0005 324.0000000000000000 | 324.0000000000000000 | 648.0000000000000000 | 0006 | 0006 | A0006 - 374.0000000000000000 | 375.0000000000000000 | 748.0000000000000000 | 0007 | 0007 | A0007 - 424.0000000000000000 | 424.5000000000000000 | 848.0000000000000000 | 0008 | 0008 | A0008 + 375.0000000000000000 | 375.0000000000000000 | 748.0000000000000000 | 0007 | 0007 | A0007 + 423.0000000000000000 | 423.0000000000000000 | 848.0000000000000000 | 0008 | 0008 | A0008 474.0000000000000000 | 474.0000000000000000 | 948.0000000000000000 | 0009 | 0009 | A0009 - 524.0000000000000000 | 525.0000000000000000 | 1048.0000000000000000 | 0010 | 0010 | A0010 - 574.0000000000000000 | 574.5000000000000000 | 1148.0000000000000000 | 0011 | 0011 | A0011 + 525.0000000000000000 | 525.0000000000000000 | 1048.0000000000000000 | 0010 | 0010 | A0010 + 573.0000000000000000 | 573.0000000000000000 | 1148.0000000000000000 | 0011 | 0011 | A0011 (12 rows) -- diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out index 4b893856cf..3b2bf3273e 100644 --- a/src/test/regress/expected/subselect.out +++ b/src/test/regress/expected/subselect.out @@ -235,7 +235,7 @@ SELECT *, pg_typeof(f1) FROM explain verbose select '42' union all select '43'; QUERY PLAN ------------------------------------------------- - Append (cost=0.00..0.04 rows=2 width=32) + Append (cost=0.00..0.05 rows=2 width=32) -> Result (cost=0.00..0.01 rows=1 width=32) Output: '42'::text -> Result (cost=0.00..0.01 rows=1 width=32) @@ -245,7 +245,7 @@ explain verbose select '42' union all select '43'; explain verbose select '42' union all select 43; QUERY PLAN ------------------------------------------------ - Append (cost=0.00..0.04 rows=2 width=4) + Append (cost=0.00..0.05 rows=2 width=4) -> Result (cost=0.00..0.01 rows=1 width=4) Output: 42 -> Result (cost=0.00..0.01 rows=1 width=4) diff --git a/src/test/regress/sql/partition_join.sql b/src/test/regress/sql/partition_join.sql index 4b2e781060..17772a9300 100644 --- 
a/src/test/regress/sql/partition_join.sql +++ b/src/test/regress/sql/partition_join.sql @@ -213,8 +213,8 @@ ANALYZE plt1_e; -- test partition matching with N-way join EXPLAIN (COSTS OFF) -SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; -SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM plt1 t1, plt2 t2, plt1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; -- joins where one of the relations is proven empty EXPLAIN (COSTS OFF) @@ -258,8 +258,8 @@ ANALYZE pht1_e; -- test partition matching with N-way join EXPLAIN (COSTS OFF) -SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; -SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; +SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; -- -- multiple levels of partitioning From 9a5c4f58f36dc7c87619602a7a2ec7de5a287068 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 22 Feb 2018 08:51:00 -0500 Subject: [PATCH 1031/1087] Try to stabilize EXPLAIN output in partition_check test. Commit 7d8ac9814bc9bb6df2d845dbabed38d7284c7c2c adjusted these tests in the hope of preserving the plan shape, but I failed to notice that the three partitions were, on my local machine, choosing two different plan shapes. This is probably related to the fact that all three tables have exactly the same row count. Try to improve the situation by making pht1_e about half as large as the other two. Per Tom Lane and the buildfarm. 
Discussion: http://postgr.es/m/25380.1519277713@sss.pgh.pa.us --- src/test/regress/expected/partition_join.out | 58 +++++++++----------- src/test/regress/sql/partition_join.sql | 2 +- 2 files changed, 27 insertions(+), 33 deletions(-) diff --git a/src/test/regress/expected/partition_join.out b/src/test/regress/expected/partition_join.out index a72d8bc208..4fccd9ae54 100644 --- a/src/test/regress/expected/partition_join.out +++ b/src/test/regress/expected/partition_join.out @@ -1285,13 +1285,13 @@ CREATE TABLE pht1_e (a int, b int, c text) PARTITION BY HASH(ltrim(c, 'A')); CREATE TABLE pht1_e_p1 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 0); CREATE TABLE pht1_e_p2 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 1); CREATE TABLE pht1_e_p3 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 2); -INSERT INTO pht1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i; +INSERT INTO pht1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_series(0, 299, 2) i; ANALYZE pht1_e; -- test partition matching with N-way join EXPLAIN (COSTS OFF) SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; - QUERY PLAN --------------------------------------------------------------------------------------------- + QUERY PLAN +-------------------------------------------------------------------------------------- GroupAggregate Group Key: t1.c, t2.c, t3.c -> Sort @@ -1308,41 +1308,35 @@ SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, ph -> Hash -> Seq Scan on pht1_e_p1 t3 -> Hash Join - Hash Cond: (ltrim(t3_1.c, 'A'::text) = t1_1.c) - -> Seq Scan on pht1_e_p2 t3_1 + Hash Cond: (t1_1.c = ltrim(t3_1.c, 'A'::text)) + -> Hash Join + Hash Cond: ((t1_1.b = t2_1.b) AND (t1_1.c = t2_1.c)) + -> Seq Scan on pht1_p2 t1_1 + -> Hash + -> Seq Scan on pht2_p2 t2_1 -> Hash - -> Hash Join - Hash Cond: ((t1_1.b = t2_1.b) AND (t1_1.c = t2_1.c)) - -> Seq Scan on pht1_p2 t1_1 - -> Hash - -> Seq Scan on pht2_p2 t2_1 + -> Seq Scan on pht1_e_p2 t3_1 -> Hash Join - Hash Cond: (ltrim(t3_2.c, 'A'::text) = t1_2.c) - -> Seq Scan on pht1_e_p3 t3_2 + Hash Cond: (t1_2.c = ltrim(t3_2.c, 'A'::text)) + -> Hash Join + Hash Cond: ((t1_2.b = t2_2.b) AND (t1_2.c = t2_2.c)) + -> Seq Scan on pht1_p3 t1_2 + -> Hash + -> Seq Scan on pht2_p3 t2_2 -> Hash - -> Hash Join - Hash Cond: ((t1_2.b = t2_2.b) AND (t1_2.c = t2_2.c)) - -> Seq Scan on pht1_p3 t1_2 - -> Hash - -> Seq Scan on pht2_p3 t2_2 + -> Seq Scan on pht1_e_p3 t3_2 (33 rows) SELECT avg(t1.a), avg(t2.b), avg(t3.a + t3.b), t1.c, t2.c, t3.c FROM pht1 t1, pht2 t2, pht1_e t3 WHERE t1.b = t2.b AND t1.c = t2.c AND ltrim(t3.c, 'A') = t1.c GROUP BY t1.c, t2.c, t3.c ORDER BY t1.c, t2.c, t3.c; - avg | avg | avg | c | c | c -----------------------+----------------------+-----------------------+------+------+------- - 24.0000000000000000 | 24.0000000000000000 | 48.0000000000000000 | 0000 | 0000 | A0000 - 75.0000000000000000 | 75.0000000000000000 | 148.0000000000000000 | 0001 | 0001 | A0001 - 123.0000000000000000 | 123.0000000000000000 | 248.0000000000000000 | 0002 | 0002 | A0002 - 174.0000000000000000 | 174.0000000000000000 | 348.0000000000000000 | 0003 | 0003 | A0003 - 225.0000000000000000 | 225.0000000000000000 | 448.0000000000000000 | 0004 | 0004 | A0004 - 273.0000000000000000 | 273.0000000000000000 | 548.0000000000000000 | 0005 | 0005 | A0005 - 
324.0000000000000000 |  324.0000000000000000 |  648.0000000000000000 | 0006 | 0006 | A0006
- 375.0000000000000000 |  375.0000000000000000 |  748.0000000000000000 | 0007 | 0007 | A0007
- 423.0000000000000000 |  423.0000000000000000 |  848.0000000000000000 | 0008 | 0008 | A0008
- 474.0000000000000000 |  474.0000000000000000 |  948.0000000000000000 | 0009 | 0009 | A0009
- 525.0000000000000000 |  525.0000000000000000 | 1048.0000000000000000 | 0010 | 0010 | A0010
- 573.0000000000000000 |  573.0000000000000000 | 1148.0000000000000000 | 0011 | 0011 | A0011
-(12 rows)
+         avg          |         avg          |         avg          |  c   |  c   |   c
+----------------------+----------------------+----------------------+------+------+-------
+  24.0000000000000000 |  24.0000000000000000 |  48.0000000000000000 | 0000 | 0000 | A0000
+  75.0000000000000000 |  75.0000000000000000 | 148.0000000000000000 | 0001 | 0001 | A0001
+ 123.0000000000000000 | 123.0000000000000000 | 248.0000000000000000 | 0002 | 0002 | A0002
+ 174.0000000000000000 | 174.0000000000000000 | 348.0000000000000000 | 0003 | 0003 | A0003
+ 225.0000000000000000 | 225.0000000000000000 | 448.0000000000000000 | 0004 | 0004 | A0004
+ 273.0000000000000000 | 273.0000000000000000 | 548.0000000000000000 | 0005 | 0005 | A0005
+(6 rows)
 
 --
 -- multiple levels of partitioning
diff --git a/src/test/regress/sql/partition_join.sql b/src/test/regress/sql/partition_join.sql
index 17772a9300..a2d8b1be55 100644
--- a/src/test/regress/sql/partition_join.sql
+++ b/src/test/regress/sql/partition_join.sql
@@ -253,7 +253,7 @@ CREATE TABLE pht1_e (a int, b int, c text) PARTITION BY HASH(ltrim(c, 'A'));
 CREATE TABLE pht1_e_p1 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 0);
 CREATE TABLE pht1_e_p2 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 1);
 CREATE TABLE pht1_e_p3 PARTITION OF pht1_e FOR VALUES WITH (MODULUS 3, REMAINDER 2);
-INSERT INTO pht1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_series(0, 599, 2) i;
+INSERT INTO pht1_e SELECT i, i, 'A' || to_char(i/50, 'FM0000') FROM generate_series(0, 299, 2) i;
 ANALYZE pht1_e;
 
 -- test partition matching with N-way join

From de6428afe13bb6eb1c99a70aada1a105966bc27e Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 22 Feb 2018 09:28:12 -0500
Subject: [PATCH 1032/1087] Avoid another valgrind complaint about write() of
 uninitialized bytes.

Peter Geoghegan, per buildfarm member skink and Andres Freund

Discussion: http://postgr.es/m/20180221053426.gp72lw67yfpzkw7a@alap3.anarazel.de
---
 src/backend/utils/sort/logtape.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/src/backend/utils/sort/logtape.c b/src/backend/utils/sort/logtape.c
index d6794bf3de..05dde631dd 100644
--- a/src/backend/utils/sort/logtape.c
+++ b/src/backend/utils/sort/logtape.c
@@ -739,6 +739,18 @@ LogicalTapeRewindForRead(LogicalTapeSet *lts, int tapenum, size_t buffer_size)
 	 */
 	if (lt->dirty)
 	{
+		/*
+		 * As long as we've filled the buffer at least once, its contents
+		 * are entirely defined from valgrind's point of view, even though
+		 * contents beyond the current end point may be stale.  But it's
+		 * possible - at least in the case of a parallel sort - to sort
+		 * such a small amount of data that we do not fill the buffer even
+		 * once.  Tell valgrind that its contents are defined, so it
+		 * doesn't bleat.
+ */ + VALGRIND_MAKE_MEM_DEFINED(lt->buffer + lt->nbytes, + lt->buffer_size - lt->nbytes); + TapeBlockSetNBytes(lt->buffer, lt->nbytes); ltsWriteBlock(lts, lt->curBlockNumber, (void *) lt->buffer); } From 84cb51b4e24b4e3a7057105971d0d385e179d978 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 22 Feb 2018 10:03:14 -0500 Subject: [PATCH 1033/1087] postgres_fdw: Fix interaction of PHVs with child joins. Commit f49842d1ee31b976c681322f76025d7732e860f3 introduced the concept of a child join, but did not update this code accordingly. Ashutosh Bapat, with cosmetic changes by me Discussion: http://postgr.es/m/CAFjFpRf=J_KPOtw+bhZeURYkbizr8ufSaXg6gPEF6DKpgH-t6g@mail.gmail.com --- .../postgres_fdw/expected/postgres_fdw.out | 40 +++++++++++++++++++ contrib/postgres_fdw/postgres_fdw.c | 6 ++- contrib/postgres_fdw/sql/postgres_fdw.sql | 5 +++ 3 files changed, 50 insertions(+), 1 deletion(-) diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index 262c635cdb..b0636c9d36 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -7800,4 +7800,44 @@ SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t 400 | 400 (4 rows) +-- with PHVs, partition-wise join selected but no join pushdown +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.phv, t2.b, t2.phv FROM (SELECT 't1_phv' phv, * FROM fprt1 WHERE a % 25 = 0) t1 FULL JOIN (SELECT 't2_phv' phv, * FROM fprt2 WHERE b % 25 = 0) t2 ON (t1.a = t2.b) ORDER BY t1.a, t2.b; + QUERY PLAN +------------------------------------------------------------ + Sort + Sort Key: ftprt1_p1.a, ftprt2_p1.b + -> Result + -> Append + -> Hash Full Join + Hash Cond: (ftprt1_p1.a = ftprt2_p1.b) + -> Foreign Scan on ftprt1_p1 + -> Hash + -> Foreign Scan on ftprt2_p1 + -> Hash Full Join + Hash Cond: (ftprt1_p2.a = ftprt2_p2.b) + -> Foreign Scan on ftprt1_p2 + -> Hash + -> Foreign Scan on ftprt2_p2 +(14 rows) + +SELECT t1.a, t1.phv, t2.b, t2.phv FROM (SELECT 't1_phv' phv, * FROM fprt1 WHERE a % 25 = 0) t1 FULL JOIN (SELECT 't2_phv' phv, * FROM fprt2 WHERE b % 25 = 0) t2 ON (t1.a = t2.b) ORDER BY t1.a, t2.b; + a | phv | b | phv +-----+--------+-----+-------- + 0 | t1_phv | 0 | t2_phv + 50 | t1_phv | | + 100 | t1_phv | | + 150 | t1_phv | 150 | t2_phv + 200 | t1_phv | | + 250 | t1_phv | 250 | t2_phv + 300 | t1_phv | | + 350 | t1_phv | | + 400 | t1_phv | 400 | t2_phv + 450 | t1_phv | | + | | 75 | t2_phv + | | 225 | t2_phv + | | 325 | t2_phv + | | 475 | t2_phv +(14 rows) + RESET enable_partitionwise_join; diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c index 941a2e75a5..e8a0d5482a 100644 --- a/contrib/postgres_fdw/postgres_fdw.c +++ b/contrib/postgres_fdw/postgres_fdw.c @@ -4565,7 +4565,11 @@ foreign_join_ok(PlannerInfo *root, RelOptInfo *joinrel, JoinType jointype, foreach(lc, root->placeholder_list) { PlaceHolderInfo *phinfo = lfirst(lc); - Relids relids = joinrel->relids; + Relids relids; + + /* PlaceHolderInfo refers to parent relids, not child relids. */ + relids = IS_OTHER_REL(joinrel) ? 
+ joinrel->top_parent_relids : joinrel->relids; if (bms_is_subset(phinfo->ph_eval_at, relids) && bms_nonempty_difference(relids, phinfo->ph_eval_at)) diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index 28635498ca..9e47729278 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -1913,4 +1913,9 @@ EXPLAIN (COSTS OFF) SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t1.a = t2.b AND t1.b = t2.a) q WHERE t1.a%25 = 0 ORDER BY 1,2; SELECT t1.a,t1.b FROM fprt1 t1, LATERAL (SELECT t2.a, t2.b FROM fprt2 t2 WHERE t1.a = t2.b AND t1.b = t2.a) q WHERE t1.a%25 = 0 ORDER BY 1,2; +-- with PHVs, partition-wise join selected but no join pushdown +EXPLAIN (COSTS OFF) +SELECT t1.a, t1.phv, t2.b, t2.phv FROM (SELECT 't1_phv' phv, * FROM fprt1 WHERE a % 25 = 0) t1 FULL JOIN (SELECT 't2_phv' phv, * FROM fprt2 WHERE b % 25 = 0) t2 ON (t1.a = t2.b) ORDER BY t1.a, t2.b; +SELECT t1.a, t1.phv, t2.b, t2.phv FROM (SELECT 't1_phv' phv, * FROM fprt1 WHERE a % 25 = 0) t1 FULL JOIN (SELECT 't2_phv' phv, * FROM fprt2 WHERE b % 25 = 0) t2 ON (t1.a = t2.b) ORDER BY t1.a, t2.b; + RESET enable_partitionwise_join; From 810e7e264ab547c404e32dba4f8733db53912084 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 22 Feb 2018 10:08:03 -0500 Subject: [PATCH 1034/1087] Remove extra word from comment. Etsuro Fujita Discussion: http://postgr.es/m/5A8EAF74.5010905@lab.ntt.co.jp --- src/backend/executor/execPartition.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index 00523ce250..beb2362ab0 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -140,10 +140,10 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, partrel = leaf_part_rri->ri_RelationDesc; /* - * This is required in order to we convert the partition's - * tuple to be compatible with the root partitioned table's - * tuple descriptor. When generating the per-subplan result - * rels, this was not set. + * This is required in order to convert the partition's tuple + * to be compatible with the root partitioned table's tuple + * descriptor. When generating the per-subplan result rels, + * this was not set. */ leaf_part_rri->ri_PartitionRoot = rel; From edd44738bc88148784899a8949519364d81d9ea8 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Thu, 22 Feb 2018 10:55:54 -0500 Subject: [PATCH 1035/1087] Be lazier about partition tuple routing. It's not necessary to fully initialize the executor data structures for partitions to which no tuples are ever routed. Consider, for example, an INSERT statement that inserts only one row: it only cares about the partition to which that one row is routed. The new function ExecInitPartitionInfo performs the initialization in question only when a particular partition is about to receive a tuple. This includes creating, validating, and saving a pointer to the ResultRelInfo, setting up for speculative insertions, translating WCOs and initializing the resulting expressions, translating returning lists and building the appropriate projection information, and setting up a tuple conversion map. One thing that's not deferred is locking the child partitions; that seems desirable but would need more thought. Still, testing shows that this makes single-row inserts significantly faster on a table with many partitions without harming the bulk-insert case. 
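As an illustrative sketch of the case this helps (the table and
partition names below are hypothetical, not part of this patch), a
single-row insert into a widely partitioned table now initializes only
the one partition the row is routed to:

    CREATE TABLE measurement (logdate date, peaktemp int)
        PARTITION BY RANGE (logdate);
    CREATE TABLE measurement_y2018m01 PARTITION OF measurement
        FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');
    CREATE TABLE measurement_y2018m02 PARTITION OF measurement
        FOR VALUES FROM ('2018-02-01') TO ('2018-03-01');
    -- ... suppose many more monthly partitions here ...

    -- Routed to measurement_y2018m02 alone; ExecInitPartitionInfo now
    -- runs for just that partition instead of every partition.
    INSERT INTO measurement VALUES ('2018-02-22', 35);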
Amit Langote, reviewed by Etsuro Fujita, with a few changes by me Discussion: http://postgr.es/m/8975331d-d961-cbdd-f862-fdd3d97dc2d0@lab.ntt.co.jp --- src/backend/commands/copy.c | 10 +- src/backend/executor/execPartition.c | 359 +++++++++++++++++-------- src/backend/executor/nodeModifyTable.c | 133 +-------- src/include/executor/execPartition.h | 9 +- 4 files changed, 279 insertions(+), 232 deletions(-) diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c index d5883c98d1..4562a5121d 100644 --- a/src/backend/commands/copy.c +++ b/src/backend/commands/copy.c @@ -2469,7 +2469,7 @@ CopyFrom(CopyState cstate) PartitionTupleRouting *proute; proute = cstate->partition_tuple_routing = - ExecSetupPartitionTupleRouting(NULL, cstate->rel, 1, estate); + ExecSetupPartitionTupleRouting(NULL, cstate->rel); /* * If we are capturing transition tuples, they may need to be @@ -2606,6 +2606,14 @@ CopyFrom(CopyState cstate) */ saved_resultRelInfo = resultRelInfo; resultRelInfo = proute->partitions[leaf_part_index]; + if (resultRelInfo == NULL) + { + resultRelInfo = ExecInitPartitionInfo(NULL, + saved_resultRelInfo, + proute, estate, + leaf_part_index); + Assert(resultRelInfo != NULL); + } /* We do not yet have a way to insert into a foreign partition */ if (resultRelInfo->ri_FdwRoutine) diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c index beb2362ab0..54efc9e545 100644 --- a/src/backend/executor/execPartition.c +++ b/src/backend/executor/execPartition.c @@ -44,21 +44,25 @@ static char *ExecBuildSlotPartitionKeyDescription(Relation rel, * * Note that all the relations in the partition tree are locked using the * RowExclusiveLock mode upon return from this function. + * + * While we allocate the arrays of pointers of ResultRelInfo and + * TupleConversionMap for all partitions here, actual objects themselves are + * lazily allocated for a given partition if a tuple is actually routed to it; + * see ExecInitPartitionInfo. However, if the function is invoked for update + * tuple routing, caller would already have initialized ResultRelInfo's for + * some of the partitions, which are reused and assigned to their respective + * slot in the aforementioned array. */ PartitionTupleRouting * -ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, - Relation rel, Index resultRTindex, - EState *estate) +ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, Relation rel) { TupleDesc tupDesc = RelationGetDescr(rel); List *leaf_parts; ListCell *cell; int i; - ResultRelInfo *leaf_part_arr = NULL, - *update_rri = NULL; + ResultRelInfo *update_rri = NULL; int num_update_rri = 0, update_rri_index = 0; - bool is_update = false; PartitionTupleRouting *proute; /* @@ -76,13 +80,14 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, proute->parent_child_tupconv_maps = (TupleConversionMap **) palloc0(proute->num_partitions * sizeof(TupleConversionMap *)); + proute->partition_oids = (Oid *) palloc(proute->num_partitions * + sizeof(Oid)); /* Set up details specific to the type of tuple routing we are doing. 
*/ if (mtstate && mtstate->operation == CMD_UPDATE) { ModifyTable *node = (ModifyTable *) mtstate->ps.plan; - is_update = true; update_rri = mtstate->resultRelInfo; num_update_rri = list_length(node->plans); proute->subplan_partition_offsets = @@ -95,16 +100,6 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, */ proute->root_tuple_slot = MakeTupleTableSlot(NULL); } - else - { - /* - * Since we are inserting tuples, we need to create all new result - * rels. Avoid repeated pallocs by allocating memory for all the - * result rels in bulk. - */ - leaf_part_arr = (ResultRelInfo *) palloc0(proute->num_partitions * - sizeof(ResultRelInfo)); - } /* * Initialize an empty slot that will be used to manipulate tuples of any @@ -117,107 +112,58 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, i = 0; foreach(cell, leaf_parts) { - ResultRelInfo *leaf_part_rri; - Relation partrel = NULL; - TupleDesc part_tupdesc; + ResultRelInfo *leaf_part_rri = NULL; Oid leaf_oid = lfirst_oid(cell); - if (is_update) - { - /* - * If the leaf partition is already present in the per-subplan - * result rels, we re-use that rather than initialize a new result - * rel. The per-subplan resultrels and the resultrels of the leaf - * partitions are both in the same canonical order. So while going - * through the leaf partition oids, we need to keep track of the - * next per-subplan result rel to be looked for in the leaf - * partition resultrels. - */ - if (update_rri_index < num_update_rri && - RelationGetRelid(update_rri[update_rri_index].ri_RelationDesc) == leaf_oid) - { - leaf_part_rri = &update_rri[update_rri_index]; - partrel = leaf_part_rri->ri_RelationDesc; - - /* - * This is required in order to convert the partition's tuple - * to be compatible with the root partitioned table's tuple - * descriptor. When generating the per-subplan result rels, - * this was not set. - */ - leaf_part_rri->ri_PartitionRoot = rel; - - /* Remember the subplan offset for this ResultRelInfo */ - proute->subplan_partition_offsets[update_rri_index] = i; - - update_rri_index++; - } - else - leaf_part_rri = (ResultRelInfo *) palloc0(sizeof(ResultRelInfo)); - } - else - { - /* For INSERTs, we already have an array of result rels allocated */ - leaf_part_rri = &leaf_part_arr[i]; - } + proute->partition_oids[i] = leaf_oid; /* - * If we didn't open the partition rel, it means we haven't - * initialized the result rel either. + * If the leaf partition is already present in the per-subplan result + * rels, we re-use that rather than initialize a new result rel. The + * per-subplan resultrels and the resultrels of the leaf partitions + * are both in the same canonical order. So while going through the + * leaf partition oids, we need to keep track of the next per-subplan + * result rel to be looked for in the leaf partition resultrels. */ - if (!partrel) + if (update_rri_index < num_update_rri && + RelationGetRelid(update_rri[update_rri_index].ri_RelationDesc) == leaf_oid) { - /* - * We locked all the partitions above including the leaf - * partitions. Note that each of the newly opened relations in - * proute->partitions are eventually closed by the caller. 
- */ - partrel = heap_open(leaf_oid, NoLock); - InitResultRelInfo(leaf_part_rri, - partrel, - resultRTindex, - rel, - estate->es_instrument); + Relation partrel; + TupleDesc part_tupdesc; + + leaf_part_rri = &update_rri[update_rri_index]; + partrel = leaf_part_rri->ri_RelationDesc; /* - * Since we've just initialized this ResultRelInfo, it's not in - * any list attached to the estate as yet. Add it, so that it can - * be found later. + * This is required in order to convert the partition's tuple to + * be compatible with the root partitioned table's tuple + * descriptor. When generating the per-subplan result rels, this + * was not set. */ - estate->es_tuple_routing_result_relations = - lappend(estate->es_tuple_routing_result_relations, - leaf_part_rri); - } + leaf_part_rri->ri_PartitionRoot = rel; - part_tupdesc = RelationGetDescr(partrel); + /* Remember the subplan offset for this ResultRelInfo */ + proute->subplan_partition_offsets[update_rri_index] = i; - /* - * Save a tuple conversion map to convert a tuple routed to this - * partition from the parent's type to the partition's. - */ - proute->parent_child_tupconv_maps[i] = - convert_tuples_by_name(tupDesc, part_tupdesc, - gettext_noop("could not convert row type")); + update_rri_index++; - /* - * Verify result relation is a valid target for an INSERT. An UPDATE - * of a partition-key becomes a DELETE+INSERT operation, so this check - * is still required when the operation is CMD_UPDATE. - */ - CheckValidResultRel(leaf_part_rri, CMD_INSERT); + part_tupdesc = RelationGetDescr(partrel); - /* - * Open partition indices. The user may have asked to check for - * conflicts within this leaf partition and do "nothing" instead of - * throwing an error. Be prepared in that case by initializing the - * index information needed by ExecInsert() to perform speculative - * insertions. - */ - if (leaf_part_rri->ri_RelationDesc->rd_rel->relhasindex && - leaf_part_rri->ri_IndexRelationDescs == NULL) - ExecOpenIndices(leaf_part_rri, - mtstate != NULL && - mtstate->mt_onconflict != ONCONFLICT_NONE); + /* + * Save a tuple conversion map to convert a tuple routed to this + * partition from the parent's type to the partition's. + */ + proute->parent_child_tupconv_maps[i] = + convert_tuples_by_name(tupDesc, part_tupdesc, + gettext_noop("could not convert row type")); + + /* + * Verify result relation is a valid target for an INSERT. An + * UPDATE of a partition-key becomes a DELETE+INSERT operation, so + * this check is required even when the operation is CMD_UPDATE. + */ + CheckValidResultRel(leaf_part_rri, CMD_INSERT); + } proute->partitions[i] = leaf_part_rri; i++; @@ -225,9 +171,9 @@ ExecSetupPartitionTupleRouting(ModifyTableState *mtstate, /* * For UPDATE, we should have found all the per-subplan resultrels in the - * leaf partitions. + * leaf partitions. (If this is an INSERT, both values will be zero.) 
*/ - Assert(!is_update || update_rri_index == num_update_rri); + Assert(update_rri_index == num_update_rri); return proute; } @@ -351,6 +297,201 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd, return result; } +/* + * ExecInitPartitionInfo + * Initialize ResultRelInfo and other information for a partition if not + * already done + * + * Returns the ResultRelInfo + */ +ResultRelInfo * +ExecInitPartitionInfo(ModifyTableState *mtstate, + ResultRelInfo *resultRelInfo, + PartitionTupleRouting *proute, + EState *estate, int partidx) +{ + Relation rootrel = resultRelInfo->ri_RelationDesc, + partrel; + ResultRelInfo *leaf_part_rri; + ModifyTable *node = mtstate ? (ModifyTable *) mtstate->ps.plan : NULL; + MemoryContext oldContext; + + /* + * We locked all the partitions in ExecSetupPartitionTupleRouting + * including the leaf partitions. + */ + partrel = heap_open(proute->partition_oids[partidx], NoLock); + + /* + * Keep ResultRelInfo and other information for this partition in the + * per-query memory context so they'll survive throughout the query. + */ + oldContext = MemoryContextSwitchTo(estate->es_query_cxt); + + leaf_part_rri = (ResultRelInfo *) palloc0(sizeof(ResultRelInfo)); + InitResultRelInfo(leaf_part_rri, + partrel, + node ? node->nominalRelation : 1, + rootrel, + estate->es_instrument); + + /* + * Verify result relation is a valid target for an INSERT. An UPDATE of a + * partition-key becomes a DELETE+INSERT operation, so this check is still + * required when the operation is CMD_UPDATE. + */ + CheckValidResultRel(leaf_part_rri, CMD_INSERT); + + /* + * Since we've just initialized this ResultRelInfo, it's not in any list + * attached to the estate as yet. Add it, so that it can be found later. + * + * Note that the entries in this list appear in no predetermined order, + * because partition result rels are initialized as and when they're + * needed. + */ + estate->es_tuple_routing_result_relations = + lappend(estate->es_tuple_routing_result_relations, + leaf_part_rri); + + /* + * Open partition indices. The user may have asked to check for conflicts + * within this leaf partition and do "nothing" instead of throwing an + * error. Be prepared in that case by initializing the index information + * needed by ExecInsert() to perform speculative insertions. + */ + if (partrel->rd_rel->relhasindex && + leaf_part_rri->ri_IndexRelationDescs == NULL) + ExecOpenIndices(leaf_part_rri, + (mtstate != NULL && + mtstate->mt_onconflict != ONCONFLICT_NONE)); + + /* + * Build WITH CHECK OPTION constraints for the partition. Note that we + * didn't build the withCheckOptionList for partitions within the planner, + * but simple translation of varattnos will suffice. This only occurs for + * the INSERT case or in the case of UPDATE tuple routing where we didn't + * find a result rel to reuse in ExecSetupPartitionTupleRouting(). + */ + if (node && node->withCheckOptionLists != NIL) + { + List *wcoList; + List *wcoExprs = NIL; + ListCell *ll; + int firstVarno = mtstate->resultRelInfo[0].ri_RangeTableIndex; + Relation firstResultRel = mtstate->resultRelInfo[0].ri_RelationDesc; + + /* + * In the case of INSERT on a partitioned table, there is only one + * plan. Likewise, there is only one WCO list, not one per partition. + * For UPDATE, there are as many WCO lists as there are plans. 
+ */ + Assert((node->operation == CMD_INSERT && + list_length(node->withCheckOptionLists) == 1 && + list_length(node->plans) == 1) || + (node->operation == CMD_UPDATE && + list_length(node->withCheckOptionLists) == + list_length(node->plans))); + + /* + * Use the WCO list of the first plan as a reference to calculate + * attno's for the WCO list of this partition. In the INSERT case, + * that refers to the root partitioned table, whereas in the UPDATE + * tuple routing case, that refers to the first partition in the + * mtstate->resultRelInfo array. In any case, both that relation and + * this partition should have the same columns, so we should be able + * to map attributes successfully. + */ + wcoList = linitial(node->withCheckOptionLists); + + /* + * Convert Vars in it to contain this partition's attribute numbers. + */ + wcoList = map_partition_varattnos(wcoList, firstVarno, + partrel, firstResultRel, NULL); + foreach(ll, wcoList) + { + WithCheckOption *wco = castNode(WithCheckOption, lfirst(ll)); + ExprState *wcoExpr = ExecInitQual(castNode(List, wco->qual), + mtstate->mt_plans[0]); + + wcoExprs = lappend(wcoExprs, wcoExpr); + } + + leaf_part_rri->ri_WithCheckOptions = wcoList; + leaf_part_rri->ri_WithCheckOptionExprs = wcoExprs; + } + + /* + * Build the RETURNING projection for the partition. Note that we didn't + * build the returningList for partitions within the planner, but simple + * translation of varattnos will suffice. This only occurs for the INSERT + * case or in the case of UPDATE tuple routing where we didn't find a + * result rel to reuse in ExecSetupPartitionTupleRouting(). + */ + if (node && node->returningLists != NIL) + { + TupleTableSlot *slot; + ExprContext *econtext; + List *returningList; + int firstVarno = mtstate->resultRelInfo[0].ri_RangeTableIndex; + Relation firstResultRel = mtstate->resultRelInfo[0].ri_RelationDesc; + + /* See the comment above for WCO lists. */ + Assert((node->operation == CMD_INSERT && + list_length(node->returningLists) == 1 && + list_length(node->plans) == 1) || + (node->operation == CMD_UPDATE && + list_length(node->returningLists) == + list_length(node->plans))); + + /* + * Use the RETURNING list of the first plan as a reference to + * calculate attno's for the RETURNING list of this partition. See + * the comment above for WCO lists for more details on why this is + * okay. + */ + returningList = linitial(node->returningLists); + + /* + * Convert Vars in it to contain this partition's attribute numbers. + */ + returningList = map_partition_varattnos(returningList, firstVarno, + partrel, firstResultRel, + NULL); + + /* + * Initialize the projection itself. + * + * Use the slot and the expression context that would have been set up + * in ExecInitModifyTable() for projection's output. + */ + Assert(mtstate->ps.ps_ResultTupleSlot != NULL); + slot = mtstate->ps.ps_ResultTupleSlot; + Assert(mtstate->ps.ps_ExprContext != NULL); + econtext = mtstate->ps.ps_ExprContext; + leaf_part_rri->ri_projectReturning = + ExecBuildProjectionInfo(returningList, econtext, slot, + &mtstate->ps, RelationGetDescr(partrel)); + } + + Assert(proute->partitions[partidx] == NULL); + proute->partitions[partidx] = leaf_part_rri; + + /* + * Save a tuple conversion map to convert a tuple routed to this partition + * from the parent's type to the partition's. 
+	 */
+	proute->parent_child_tupconv_maps[partidx] =
+		convert_tuples_by_name(RelationGetDescr(rootrel),
+							   RelationGetDescr(partrel),
+							   gettext_noop("could not convert row type"));
+
+	MemoryContextSwitchTo(oldContext);
+
+	return leaf_part_rri;
+}
+
 /*
  * ExecSetupChildParentMapForLeaf -- Initialize the per-leaf-partition
  *		child-to-root tuple conversion map array.
@@ -477,6 +618,10 @@ ExecCleanupTupleRouting(PartitionTupleRouting *proute)
 	{
 		ResultRelInfo *resultRelInfo = proute->partitions[i];
 
+		/* skip further processing for uninitialized partitions */
+		if (resultRelInfo == NULL)
+			continue;
+
 		/*
 		 * If this result rel is one of the UPDATE subplan result rels, let
 		 * ExecEndPlan() close it.  For INSERT or COPY,
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 93c03cfb07..c32928d9bd 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -306,10 +306,18 @@ ExecInsert(ModifyTableState *mtstate,
 
 		/*
 		 * Save the old ResultRelInfo and switch to the one corresponding to
-		 * the selected partition.
+		 * the selected partition.  (We might need to initialize it first.)
 		 */
 		saved_resultRelInfo = resultRelInfo;
 		resultRelInfo = proute->partitions[leaf_part_index];
+		if (resultRelInfo == NULL)
+		{
+			resultRelInfo = ExecInitPartitionInfo(mtstate,
+												  saved_resultRelInfo,
+												  proute, estate,
+												  leaf_part_index);
+			Assert(resultRelInfo != NULL);
+		}
 
 		/* We do not yet have a way to insert into a foreign partition */
 		if (resultRelInfo->ri_FdwRoutine)
@@ -2098,14 +2106,10 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 	ResultRelInfo *saved_resultRelInfo;
 	ResultRelInfo *resultRelInfo;
 	Plan	   *subplan;
-	int			firstVarno = 0;
-	Relation	firstResultRel = NULL;
 	ListCell   *l;
 	int			i;
 	Relation	rel;
 	bool		update_tuple_routing_needed = node->partColsUpdated;
-	PartitionTupleRouting *proute = NULL;
-	int			num_partitions = 0;
 
 	/* check for unsupported flags */
 	Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
@@ -2228,20 +2232,8 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 	 */
 	if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&
 		(operation == CMD_INSERT || update_tuple_routing_needed))
-	{
-		proute = mtstate->mt_partition_tuple_routing =
-			ExecSetupPartitionTupleRouting(mtstate,
-										   rel, node->nominalRelation,
-										   estate);
-		num_partitions = proute->num_partitions;
-
-		/*
-		 * Below are required as reference objects for mapping partition
-		 * attno's in expressions such as WithCheckOptions and RETURNING.
-		 */
-		firstVarno = mtstate->resultRelInfo[0].ri_RangeTableIndex;
-		firstResultRel = mtstate->resultRelInfo[0].ri_RelationDesc;
-	}
+		mtstate->mt_partition_tuple_routing =
+			ExecSetupPartitionTupleRouting(mtstate, rel);
 
 	/*
 	 * Build state for collecting transition tuples.  This requires having a
@@ -2287,70 +2279,6 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags)
 		i++;
 	}
 
-	/*
-	 * Build WITH CHECK OPTION constraints for each leaf partition rel. Note
-	 * that we didn't build the withCheckOptionList for each partition within
-	 * the planner, but simple translation of the varattnos for each partition
-	 * will suffice.  This only occurs for the INSERT case or for UPDATE row
-	 * movement. DELETEs and local UPDATEs are handled above.
-	 */
-	if (node->withCheckOptionLists != NIL && num_partitions > 0)
-	{
-		List	   *first_wcoList;
-
-		/*
-		 * In case of INSERT on partitioned tables, there is only one plan.
-		 * Likewise, there is only one WITH CHECK OPTIONS list, not one per
-		 * partition.
Whereas for UPDATE, there are as many WCOs as there are - * plans. So in either case, use the WCO expression of the first - * resultRelInfo as a reference to calculate attno's for the WCO - * expression of each of the partitions. We make a copy of the WCO - * qual for each partition. Note that, if there are SubPlans in there, - * they all end up attached to the one parent Plan node. - */ - Assert(update_tuple_routing_needed || - (operation == CMD_INSERT && - list_length(node->withCheckOptionLists) == 1 && - mtstate->mt_nplans == 1)); - - first_wcoList = linitial(node->withCheckOptionLists); - for (i = 0; i < num_partitions; i++) - { - Relation partrel; - List *mapped_wcoList; - List *wcoExprs = NIL; - ListCell *ll; - - resultRelInfo = proute->partitions[i]; - - /* - * If we are referring to a resultRelInfo from one of the update - * result rels, that result rel would already have - * WithCheckOptions initialized. - */ - if (resultRelInfo->ri_WithCheckOptions) - continue; - - partrel = resultRelInfo->ri_RelationDesc; - - mapped_wcoList = map_partition_varattnos(first_wcoList, - firstVarno, - partrel, firstResultRel, - NULL); - foreach(ll, mapped_wcoList) - { - WithCheckOption *wco = castNode(WithCheckOption, lfirst(ll)); - ExprState *wcoExpr = ExecInitQual(castNode(List, wco->qual), - &mtstate->ps); - - wcoExprs = lappend(wcoExprs, wcoExpr); - } - - resultRelInfo->ri_WithCheckOptions = mapped_wcoList; - resultRelInfo->ri_WithCheckOptionExprs = wcoExprs; - } - } - /* * Initialize RETURNING projections if needed. */ @@ -2358,7 +2286,6 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) { TupleTableSlot *slot; ExprContext *econtext; - List *firstReturningList; /* * Initialize result tuple slot and assign its rowtype using the first @@ -2388,44 +2315,6 @@ ExecInitModifyTable(ModifyTable *node, EState *estate, int eflags) resultRelInfo->ri_RelationDesc->rd_att); resultRelInfo++; } - - /* - * Build a projection for each leaf partition rel. Note that we - * didn't build the returningList for each partition within the - * planner, but simple translation of the varattnos for each partition - * will suffice. This only occurs for the INSERT case or for UPDATE - * row movement. DELETEs and local UPDATEs are handled above. - */ - firstReturningList = linitial(node->returningLists); - for (i = 0; i < num_partitions; i++) - { - Relation partrel; - List *rlist; - - resultRelInfo = proute->partitions[i]; - - /* - * If we are referring to a resultRelInfo from one of the update - * result rels, that result rel would already have a returningList - * built. - */ - if (resultRelInfo->ri_projectReturning) - continue; - - partrel = resultRelInfo->ri_RelationDesc; - - /* - * Use the returning expression of the first resultRelInfo as a - * reference to calculate attno's for the returning expression of - * each of the partitions. - */ - rlist = map_partition_varattnos(firstReturningList, - firstVarno, - partrel, firstResultRel, NULL); - resultRelInfo->ri_projectReturning = - ExecBuildProjectionInfo(rlist, econtext, slot, &mtstate->ps, - resultRelInfo->ri_RelationDesc->rd_att); - } } else { diff --git a/src/include/executor/execPartition.h b/src/include/executor/execPartition.h index 3df9c498bb..e94718608f 100644 --- a/src/include/executor/execPartition.h +++ b/src/include/executor/execPartition.h @@ -58,6 +58,7 @@ typedef struct PartitionDispatchData *PartitionDispatch; * partition tree. 
 * num_dispatch					number of partitioned tables in the partition
 *								tree (= length of partition_dispatch_info[])
+ * partition_oids				Array of leaf partition OIDs
 * partitions					Array of ResultRelInfo* objects with one entry
 *								for every leaf partition in the partition tree.
 * num_partitions				Number of leaf partitions in the partition tree
@@ -91,6 +92,7 @@ typedef struct PartitionTupleRouting
 {
 	PartitionDispatch *partition_dispatch_info;
 	int			num_dispatch;
+	Oid		   *partition_oids;
 	ResultRelInfo **partitions;
 	int			num_partitions;
 	TupleConversionMap **parent_child_tupconv_maps;
@@ -103,12 +105,15 @@ typedef struct PartitionTupleRouting
 } PartitionTupleRouting;
 
 extern PartitionTupleRouting *ExecSetupPartitionTupleRouting(ModifyTableState *mtstate,
-							   Relation rel, Index resultRTindex,
-							   EState *estate);
+							   Relation rel);
 extern int ExecFindPartition(ResultRelInfo *resultRelInfo,
				  PartitionDispatch *pd,
				  TupleTableSlot *slot,
				  EState *estate);
+extern ResultRelInfo *ExecInitPartitionInfo(ModifyTableState *mtstate,
+					  ResultRelInfo *resultRelInfo,
+					  PartitionTupleRouting *proute,
+					  EState *estate, int partidx);
 extern void ExecSetupChildParentMapForLeaf(PartitionTupleRouting *proute);
 extern TupleConversionMap *TupConvMapForLeaf(PartitionTupleRouting *proute,
				   ResultRelInfo *rootRelInfo, int leaf_index);

From 10cfce34c0fe20d2caed5750bbc5c315c0e4cc63 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Tue, 6 Feb 2018 21:46:46 -0500
Subject: [PATCH 1036/1087] Add user-callable SHA-2 functions

Add the user-callable functions sha224, sha256, sha384, sha512.

We already had these in the C code to support SCRAM, but there was no
test coverage outside of the SCRAM tests.  Adding these as user-callable
functions allows writing some tests.  Also, we have a user-callable md5
function but no more modern alternative, which has led to wide use of
md5 as a general-purpose hash function and to occasional complaints
about using md5.

Also mark the existing md5 functions as leak-proof.
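For example (a quick sketch of the new functions; the expected output
shown here matches the documentation table added below):

    SELECT sha256('abc');
    -- \xba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

    -- The SHA-2 functions return bytea, unlike md5() which returns hex
    -- text; use encode() to get an md5()-style hex string.
    SELECT encode(sha256('abc'), 'hex');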
Reviewed-by: Michael Paquier --- doc/src/sgml/func.sgml | 71 +++++++++- src/backend/utils/adt/Makefile | 3 +- src/backend/utils/adt/cryptohashes.c | 169 +++++++++++++++++++++++ src/backend/utils/adt/varlena.c | 48 ------- src/include/catalog/pg_proc.h | 12 +- src/test/regress/expected/opr_sanity.out | 6 + src/test/regress/expected/strings.out | 53 +++++++ src/test/regress/sql/strings.sql | 18 +++ 8 files changed, 328 insertions(+), 52 deletions(-) create mode 100644 src/backend/utils/adt/cryptohashes.c diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 1e535cf215..2f59af25a6 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -3640,7 +3640,7 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); returning the result in hexadecimal md5(E'Th\\000omas'::bytea) - 8ab2d3c9689aaf18 b4958c334c82d8b1 + 8ab2d3c9689aaf18​b4958c334c82d8b1 @@ -3674,6 +3674,66 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); set_byte(E'Th\\000omas'::bytea, 4, 64) Th\000o@as + + + + + sha224 + + sha224(bytea) + + bytea + + SHA-224 hash + + sha224('abc') + \x23097d223405d8228642a477bda2​55b32aadbce4bda0b3f7e36c9da7 + + + + + + sha256 + + sha256(bytea) + + bytea + + SHA-256 hash + + sha256('abc') + \xba7816bf8f01cfea414140de5dae2223​b00361a396177a9cb410ff61f20015ad + + + + + + sha384 + + sha384(bytea) + + bytea + + SHA-384 hash + + sha384('abc') + \xcb00753f45a35e8bb5a03d699ac65007​272c32ab0eded1631a8b605a43ff5bed​8086072ba1e7cc2358baeca134c825a7 + + + + + + sha512 + + sha512(bytea) + + bytea + + SHA-512 hash + + sha512('abc') + \xddaf35a193617abacc417349ae204131​12e6fa4e89a97ea20a9eeee64b55d39a​2192992a274fc1a836ba3c23a3feebbd​454d4423643ce80e2a9ac94fa54ca49f + @@ -3686,6 +3746,15 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); the first byte, and bit 15 is the most significant bit of the second byte. + + Note that for historic reasons, the function md5 + returns a hex-encoded value of type text whereas the SHA-2 + functions return type bytea. Use the functions + encode and decode to convert + between the two, for example encode(sha256('abc'), + 'hex') to get a hex-encoded text representation. 
+ + See also the aggregate function string_agg in and the large object functions diff --git a/src/backend/utils/adt/Makefile b/src/backend/utils/adt/Makefile index 61ca90312f..4b35dbb8bb 100644 --- a/src/backend/utils/adt/Makefile +++ b/src/backend/utils/adt/Makefile @@ -11,7 +11,8 @@ include $(top_builddir)/src/Makefile.global # keep this list arranged alphabetically or it gets to be a mess OBJS = acl.o amutils.o arrayfuncs.o array_expanded.o array_selfuncs.o \ array_typanalyze.o array_userfuncs.o arrayutils.o ascii.o \ - bool.o cash.o char.o date.o datetime.o datum.o dbsize.o domains.o \ + bool.o cash.o char.o cryptohashes.o \ + date.o datetime.o datum.o dbsize.o domains.o \ encode.o enum.o expandeddatum.o expandedrecord.o \ float.o format_type.o formatting.o genfile.o \ geo_ops.o geo_selfuncs.o geo_spgist.o inet_cidr_ntop.o inet_net_pton.o \ diff --git a/src/backend/utils/adt/cryptohashes.c b/src/backend/utils/adt/cryptohashes.c new file mode 100644 index 0000000000..e1ff17590e --- /dev/null +++ b/src/backend/utils/adt/cryptohashes.c @@ -0,0 +1,169 @@ +/*------------------------------------------------------------------------- + * + * cryptohashes.c + * Cryptographic hash functions + * + * Portions Copyright (c) 2018, PostgreSQL Global Development Group + * + * + * IDENTIFICATION + * src/backend/utils/adt/cryptohashes.c + * + *------------------------------------------------------------------------- + */ +#include "postgres.h" + +#include "common/md5.h" +#include "common/sha2.h" +#include "utils/builtins.h" + + +/* + * MD5 + */ + +/* MD5 produces a 16 byte (128 bit) hash; double it for hex */ +#define MD5_HASH_LEN 32 + +/* + * Create an MD5 hash of a text value and return it as hex string. + */ +Datum +md5_text(PG_FUNCTION_ARGS) +{ + text *in_text = PG_GETARG_TEXT_PP(0); + size_t len; + char hexsum[MD5_HASH_LEN + 1]; + + /* Calculate the length of the buffer using varlena metadata */ + len = VARSIZE_ANY_EXHDR(in_text); + + /* get the hash result */ + if (pg_md5_hash(VARDATA_ANY(in_text), len, hexsum) == false) + ereport(ERROR, + (errcode(ERRCODE_OUT_OF_MEMORY), + errmsg("out of memory"))); + + /* convert to text and return it */ + PG_RETURN_TEXT_P(cstring_to_text(hexsum)); +} + +/* + * Create an MD5 hash of a bytea value and return it as a hex string. 
+ */ +Datum +md5_bytea(PG_FUNCTION_ARGS) +{ + bytea *in = PG_GETARG_BYTEA_PP(0); + size_t len; + char hexsum[MD5_HASH_LEN + 1]; + + len = VARSIZE_ANY_EXHDR(in); + if (pg_md5_hash(VARDATA_ANY(in), len, hexsum) == false) + ereport(ERROR, + (errcode(ERRCODE_OUT_OF_MEMORY), + errmsg("out of memory"))); + + PG_RETURN_TEXT_P(cstring_to_text(hexsum)); +} + + +/* + * SHA-2 variants + */ + +Datum +sha224_bytea(PG_FUNCTION_ARGS) +{ + bytea *in = PG_GETARG_BYTEA_PP(0); + const uint8 *data; + size_t len; + pg_sha224_ctx ctx; + unsigned char buf[PG_SHA224_DIGEST_LENGTH]; + bytea *result; + + len = VARSIZE_ANY_EXHDR(in); + data = (unsigned char *) VARDATA_ANY(in); + + pg_sha224_init(&ctx); + pg_sha224_update(&ctx, data, len); + pg_sha224_final(&ctx, buf); + + result = palloc(sizeof(buf) + VARHDRSZ); + SET_VARSIZE(result, sizeof(buf) + VARHDRSZ); + memcpy(VARDATA(result), buf, sizeof(buf)); + + PG_RETURN_BYTEA_P(result); +} + +Datum +sha256_bytea(PG_FUNCTION_ARGS) +{ + bytea *in = PG_GETARG_BYTEA_PP(0); + const uint8 *data; + size_t len; + pg_sha256_ctx ctx; + unsigned char buf[PG_SHA256_DIGEST_LENGTH]; + bytea *result; + + len = VARSIZE_ANY_EXHDR(in); + data = (unsigned char *) VARDATA_ANY(in); + + pg_sha256_init(&ctx); + pg_sha256_update(&ctx, data, len); + pg_sha256_final(&ctx, buf); + + result = palloc(sizeof(buf) + VARHDRSZ); + SET_VARSIZE(result, sizeof(buf) + VARHDRSZ); + memcpy(VARDATA(result), buf, sizeof(buf)); + + PG_RETURN_BYTEA_P(result); +} + +Datum +sha384_bytea(PG_FUNCTION_ARGS) +{ + bytea *in = PG_GETARG_BYTEA_PP(0); + const uint8 *data; + size_t len; + pg_sha384_ctx ctx; + unsigned char buf[PG_SHA384_DIGEST_LENGTH]; + bytea *result; + + len = VARSIZE_ANY_EXHDR(in); + data = (unsigned char *) VARDATA_ANY(in); + + pg_sha384_init(&ctx); + pg_sha384_update(&ctx, data, len); + pg_sha384_final(&ctx, buf); + + result = palloc(sizeof(buf) + VARHDRSZ); + SET_VARSIZE(result, sizeof(buf) + VARHDRSZ); + memcpy(VARDATA(result), buf, sizeof(buf)); + + PG_RETURN_BYTEA_P(result); +} + +Datum +sha512_bytea(PG_FUNCTION_ARGS) +{ + bytea *in = PG_GETARG_BYTEA_PP(0); + const uint8 *data; + size_t len; + pg_sha512_ctx ctx; + unsigned char buf[PG_SHA512_DIGEST_LENGTH]; + bytea *result; + + len = VARSIZE_ANY_EXHDR(in); + data = (unsigned char *) VARDATA_ANY(in); + + pg_sha512_init(&ctx); + pg_sha512_update(&ctx, data, len); + pg_sha512_final(&ctx, buf); + + result = palloc(sizeof(buf) + VARHDRSZ); + SET_VARSIZE(result, sizeof(buf) + VARHDRSZ); + memcpy(VARDATA(result), buf, sizeof(buf)); + + PG_RETURN_BYTEA_P(result); +} diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c index 304cb26691..4346410d5a 100644 --- a/src/backend/utils/adt/varlena.c +++ b/src/backend/utils/adt/varlena.c @@ -22,7 +22,6 @@ #include "catalog/pg_collation.h" #include "catalog/pg_type.h" #include "common/int.h" -#include "common/md5.h" #include "lib/hyperloglog.h" #include "libpq/pqformat.h" #include "miscadmin.h" @@ -4564,53 +4563,6 @@ to_hex64(PG_FUNCTION_ARGS) PG_RETURN_TEXT_P(cstring_to_text(ptr)); } -/* - * Create an md5 hash of a text string and return it as hex - * - * md5 produces a 16 byte (128 bit) hash; double it for hex - */ -#define MD5_HASH_LEN 32 - -Datum -md5_text(PG_FUNCTION_ARGS) -{ - text *in_text = PG_GETARG_TEXT_PP(0); - size_t len; - char hexsum[MD5_HASH_LEN + 1]; - - /* Calculate the length of the buffer using varlena metadata */ - len = VARSIZE_ANY_EXHDR(in_text); - - /* get the hash result */ - if (pg_md5_hash(VARDATA_ANY(in_text), len, hexsum) == false) - ereport(ERROR, - 
(errcode(ERRCODE_OUT_OF_MEMORY), - errmsg("out of memory"))); - - /* convert to text and return it */ - PG_RETURN_TEXT_P(cstring_to_text(hexsum)); -} - -/* - * Create an md5 hash of a bytea field and return it as a hex string: - * 16-byte md5 digest is represented in 32 hex characters. - */ -Datum -md5_bytea(PG_FUNCTION_ARGS) -{ - bytea *in = PG_GETARG_BYTEA_PP(0); - size_t len; - char hexsum[MD5_HASH_LEN + 1]; - - len = VARSIZE_ANY_EXHDR(in); - if (pg_md5_hash(VARDATA_ANY(in), len, hexsum) == false) - ereport(ERROR, - (errcode(ERRCODE_OUT_OF_MEMORY), - errmsg("out of memory"))); - - PG_RETURN_TEXT_P(cstring_to_text(hexsum)); -} - /* * Return the size of a datum, possibly compressed * diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index 2a5321315a..62e16514cc 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -3917,10 +3917,18 @@ DATA(insert OID = 3314 ( system PGNSP PGUID 12 1 0 0 0 f f f f t f v s 1 0 33 DESCR("SYSTEM tablesample method handler"); /* cryptographic */ -DATA(insert OID = 2311 ( md5 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 25 "25" _null_ _null_ _null_ _null_ _null_ md5_text _null_ _null_ _null_ )); +DATA(insert OID = 2311 ( md5 PGNSP PGUID 12 1 0 0 0 f f f t t f i s 1 0 25 "25" _null_ _null_ _null_ _null_ _null_ md5_text _null_ _null_ _null_ )); DESCR("MD5 hash"); -DATA(insert OID = 2321 ( md5 PGNSP PGUID 12 1 0 0 0 f f f f t f i s 1 0 25 "17" _null_ _null_ _null_ _null_ _null_ md5_bytea _null_ _null_ _null_ )); +DATA(insert OID = 2321 ( md5 PGNSP PGUID 12 1 0 0 0 f f f t t f i s 1 0 25 "17" _null_ _null_ _null_ _null_ _null_ md5_bytea _null_ _null_ _null_ )); DESCR("MD5 hash"); +DATA(insert OID = 3419 ( sha224 PGNSP PGUID 12 1 0 0 0 f f f t t f i s 1 0 17 "17" _null_ _null_ _null_ _null_ _null_ sha224_bytea _null_ _null_ _null_ )); +DESCR("SHA-224 hash"); +DATA(insert OID = 3420 ( sha256 PGNSP PGUID 12 1 0 0 0 f f f t t f i s 1 0 17 "17" _null_ _null_ _null_ _null_ _null_ sha256_bytea _null_ _null_ _null_ )); +DESCR("SHA-256 hash"); +DATA(insert OID = 3421 ( sha384 PGNSP PGUID 12 1 0 0 0 f f f t t f i s 1 0 17 "17" _null_ _null_ _null_ _null_ _null_ sha384_bytea _null_ _null_ _null_ )); +DESCR("SHA-384 hash"); +DATA(insert OID = 3422 ( sha512 PGNSP PGUID 12 1 0 0 0 f f f t t f i s 1 0 17 "17" _null_ _null_ _null_ _null_ _null_ sha512_bytea _null_ _null_ _null_ )); +DESCR("SHA-512 hash"); /* crosstype operations for date vs. 
timestamp and timestamptz */ DATA(insert OID = 2338 ( date_lt_timestamp PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "1082 1114" _null_ _null_ _null_ _null_ _null_ date_lt_timestamp _null_ _null_ _null_ )); diff --git a/src/test/regress/expected/opr_sanity.out b/src/test/regress/expected/opr_sanity.out index 684f7f20a8..6616cc1bf0 100644 --- a/src/test/regress/expected/opr_sanity.out +++ b/src/test/regress/expected/opr_sanity.out @@ -699,6 +699,8 @@ timestamp_lt(timestamp without time zone,timestamp without time zone) timestamp_le(timestamp without time zone,timestamp without time zone) timestamp_ge(timestamp without time zone,timestamp without time zone) timestamp_gt(timestamp without time zone,timestamp without time zone) +md5(text) +md5(bytea) tidgt(tid,tid) tidlt(tid,tid) tidge(tid,tid) @@ -711,6 +713,10 @@ uuid_gt(uuid,uuid) uuid_ne(uuid,uuid) xidneq(xid,xid) xidneqint4(xid,integer) +sha224(bytea) +sha256(bytea) +sha384(bytea) +sha512(bytea) macaddr8_eq(macaddr8,macaddr8) macaddr8_lt(macaddr8,macaddr8) macaddr8_le(macaddr8,macaddr8) diff --git a/src/test/regress/expected/strings.out b/src/test/regress/expected/strings.out index 8073eb4fad..cbe66c375c 100644 --- a/src/test/regress/expected/strings.out +++ b/src/test/regress/expected/strings.out @@ -1439,6 +1439,58 @@ select md5('12345678901234567890123456789012345678901234567890123456789012345678 t (1 row) +-- +-- SHA-2 +-- +SET bytea_output TO hex; +SELECT sha224(''); + sha224 +------------------------------------------------------------ + \xd14a028c2a3a2bc9476102bb288234c415a2b01f828ea62ac5b3e42f +(1 row) + +SELECT sha224('The quick brown fox jumps over the lazy dog.'); + sha224 +------------------------------------------------------------ + \x619cba8e8e05826e9b8c519c0a5c68f4fb653e8a3d8aa04bb2c8cd4c +(1 row) + +SELECT sha256(''); + sha256 +-------------------------------------------------------------------- + \xe3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 +(1 row) + +SELECT sha256('The quick brown fox jumps over the lazy dog.'); + sha256 +-------------------------------------------------------------------- + \xef537f25c895bfa782526529a9b63d97aa631564d5d789c2b765448c8635fb6c +(1 row) + +SELECT sha384(''); + sha384 +---------------------------------------------------------------------------------------------------- + \x38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b +(1 row) + +SELECT sha384('The quick brown fox jumps over the lazy dog.'); + sha384 +---------------------------------------------------------------------------------------------------- + \xed892481d8272ca6df370bf706e4d7bc1b5739fa2177aae6c50e946678718fc67a7af2819a021c2fc34e91bdb63409d7 +(1 row) + +SELECT sha512(''); + sha512 +------------------------------------------------------------------------------------------------------------------------------------ + \xcf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e +(1 row) + +SELECT sha512('The quick brown fox jumps over the lazy dog.'); + sha512 +------------------------------------------------------------------------------------------------------------------------------------ + \x91ea1245f20d46ae9a037a989f54f1f790f0a47607eeb8a14d12890cea77a1bbc6c7ed9cf205e67b7f2b8fd4c7dfd3a7a8617e45f3c463d481c7e586c39ac1ed +(1 row) + -- -- test behavior of escape_string_warning and standard_conforming_strings options -- @@ -1525,6 +1577,7 @@ select 'a\\bcd' as f1, 'a\\b\'cd' as f2, 'a\\b\'''cd' as 
f3, 'abcd\\' as f4, ' -- -- Additional string functions -- +SET bytea_output TO escape; SELECT initcap('hi THOMAS'); initcap ----------- diff --git a/src/test/regress/sql/strings.sql b/src/test/regress/sql/strings.sql index 9ed242208f..5a82237870 100644 --- a/src/test/regress/sql/strings.sql +++ b/src/test/regress/sql/strings.sql @@ -506,6 +506,23 @@ select md5('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'::byt select md5('12345678901234567890123456789012345678901234567890123456789012345678901234567890'::bytea) = '57edf4a22be3c955ac49da2e2107b67a' AS "TRUE"; +-- +-- SHA-2 +-- +SET bytea_output TO hex; + +SELECT sha224(''); +SELECT sha224('The quick brown fox jumps over the lazy dog.'); + +SELECT sha256(''); +SELECT sha256('The quick brown fox jumps over the lazy dog.'); + +SELECT sha384(''); +SELECT sha384('The quick brown fox jumps over the lazy dog.'); + +SELECT sha512(''); +SELECT sha512('The quick brown fox jumps over the lazy dog.'); + -- -- test behavior of escape_string_warning and standard_conforming_strings options -- @@ -540,6 +557,7 @@ select 'a\\bcd' as f1, 'a\\b\'cd' as f2, 'a\\b\'''cd' as f3, 'abcd\\' as f4, ' -- -- Additional string functions -- +SET bytea_output TO escape; SELECT initcap('hi THOMAS'); From 0db2fc98cdf4135f9dcfa3740db6f2548682fe7e Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Tue, 6 Feb 2018 21:59:40 -0500 Subject: [PATCH 1037/1087] Update gratuitous use of MD5 in documentation It seems some people are bothered by the outdated MD5 appearing in example code. So replace it with more modern alternatives or by a different example function. Reported-by: Jon Wolski --- doc/src/sgml/citext.sgml | 10 +++++----- doc/src/sgml/sepgsql.sgml | 2 +- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/doc/src/sgml/citext.sgml b/doc/src/sgml/citext.sgml index 82251de852..b1fe7101b2 100644 --- a/doc/src/sgml/citext.sgml +++ b/doc/src/sgml/citext.sgml @@ -80,11 +80,11 @@ CREATE TABLE users ( pass TEXT NOT NULL ); -INSERT INTO users VALUES ( 'larry', md5(random()::text) ); -INSERT INTO users VALUES ( 'Tom', md5(random()::text) ); -INSERT INTO users VALUES ( 'Damian', md5(random()::text) ); -INSERT INTO users VALUES ( 'NEAL', md5(random()::text) ); -INSERT INTO users VALUES ( 'Bjørn', md5(random()::text) ); +INSERT INTO users VALUES ( 'larry', sha256(random()::text::bytea) ); +INSERT INTO users VALUES ( 'Tom', sha256(random()::text::bytea) ); +INSERT INTO users VALUES ( 'Damian', sha256(random()::text::bytea) ); +INSERT INTO users VALUES ( 'NEAL', sha256(random()::text::bytea) ); +INSERT INTO users VALUES ( 'Bjørn', sha256(random()::text::bytea) ); SELECT * FROM users WHERE nick = 'Larry'; diff --git a/doc/src/sgml/sepgsql.sgml b/doc/src/sgml/sepgsql.sgml index 273efc6e27..f8c99e1b00 100644 --- a/doc/src/sgml/sepgsql.sgml +++ b/doc/src/sgml/sepgsql.sgml @@ -370,7 +370,7 @@ $ sudo semodule -r sepgsql-regtest For example, consider: -UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100; +UPDATE t1 SET x = 2, y = func1(y) WHERE z = 100; Here, db_column:update will be checked for From abcba7001e481a565b8fba2393666dc54e90db61 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Thu, 22 Feb 2018 15:13:57 -0500 Subject: [PATCH 1038/1087] Fix perlcritic warnings --- src/bin/pgbench/t/001_pgbench_with_server.pl | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl index 99286f6bc0..e585220dc0 100644 --- 
a/src/bin/pgbench/t/001_pgbench_with_server.pl
+++ b/src/bin/pgbench/t/001_pgbench_with_server.pl
@@ -584,7 +584,7 @@ sub check_pgbench_logs
 {
 	my ($prefix, $nb, $min, $max, $re) = @_;
 
-	my @logs = <$prefix.*>;
+	my @logs = glob "$prefix.*";
 	ok(@logs == $nb, "number of log files");
 	ok(grep(/^$prefix\.\d+(\.\d+)?$/, @logs) == $nb, "file name format");
@@ -592,14 +592,14 @@ sub check_pgbench_logs
 	for my $log (sort @logs)
 	{
 		eval {
-			open LOG, $log or die "$@";
-			my @contents = <LOG>;
+			open my $fh, '<', $log or die "$@";
+			my @contents = <$fh>;
 			my $clen = @contents;
 			ok( $min <= $clen && $clen <= $max,
 				"transaction count for $log ($clen)");
 			ok( grep($re, @contents) == $clen,
 				"transaction format for $prefix");
-			close LOG or die "$@";
+			close $fh or die "$@";
 		};
 	}
 	ok(unlink(@logs), "remove log files");

From a6a80134e3bffa0678a82ed7477d9d46dea07d3a Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Thu, 22 Feb 2018 18:05:30 -0500
Subject: [PATCH 1039/1087] Remove extra words.

Thomas Munro

Discussion: http://postgr.es/m/CAEepm=2x3NUSPed6=-wDYs39KtUU5Dw3mK_NAMWps+18FmkApQ@mail.gmail.com
---
 src/backend/storage/lmgr/predicate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c
index d1ff2b1edc..654eca4f3f 100644
--- a/src/backend/storage/lmgr/predicate.c
+++ b/src/backend/storage/lmgr/predicate.c
@@ -493,8 +493,8 @@ PredicateLockingNeededForRelation(Relation relation)
 * as RO-safe since the last call, we release all predicate locks and reset
 * MySerializableXact. That makes subsequent calls to return quickly.
 *
- * This is marked as 'inline' to make to eliminate the function call overhead
- * in the common case that serialization is not needed.
+ * This is marked as 'inline' to eliminate the function call overhead in the
+ * common case that serialization is not needed.
 */
 static inline bool
 SerializationNeededForRead(Relation relation, Snapshot snapshot)

From 76b6aa41f41db66004b1c430f17a546d4102fbe7 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut
Date: Tue, 20 Feb 2018 18:03:31 -0500
Subject: [PATCH 1040/1087] Support parameters in CALL

To support parameters in CALL, move the parse analysis of the procedure
and arguments into the global transformation phase, so that the parser
hooks can be applied.  And then at execution time pass the parameters
from ProcessUtility on to ExecuteCallStmt.
---
 src/backend/commands/functioncmds.c          | 25 ++---------
 src/backend/nodes/copyfuncs.c                |  1 +
 src/backend/nodes/equalfuncs.c               |  1 +
 src/backend/parser/analyze.c                 | 45 +++++++++++++++++++
 src/backend/tcop/utility.c                   |  2 +-
 src/include/commands/defrem.h                |  3 +-
 src/include/nodes/parsenodes.h               |  3 +-
 src/pl/plpgsql/src/expected/plpgsql_call.out | 19 ++++++++
 src/pl/plpgsql/src/sql/plpgsql_call.sql      | 18 ++++++++
 .../regress/expected/create_procedure.out    | 16 +++++++
 src/test/regress/sql/create_procedure.sql    | 15 +++++++
 11 files changed, 124 insertions(+), 24 deletions(-)

diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index a027b19744..abdfa249c0 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -2212,11 +2212,9 @@ ExecuteDoStmt(DoStmt *stmt, bool atomic)
 * commits that might occur inside the procedure.
 */
 void
-ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic)
+ExecuteCallStmt(CallStmt *stmt, ParamListInfo params, bool atomic)
 {
-	List *targs;
 	ListCell *lc;
-	Node *node;
 	FuncExpr *fexpr;
 	int nargs;
 	int i;
@@ -2228,24 +2226,8 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic)
 	ExprContext *econtext;
 	HeapTuple tp;
 
-	/* We need to do parse analysis on the procedure call and its arguments */
-	targs = NIL;
-	foreach(lc, stmt->funccall->args)
-	{
-		targs = lappend(targs, transformExpr(pstate,
-					(Node *) lfirst(lc),
-					EXPR_KIND_CALL_ARGUMENT));
-	}
-
-	node = ParseFuncOrColumn(pstate,
-				stmt->funccall->funcname,
-				targs,
-				pstate->p_last_srf,
-				stmt->funccall,
-				true,
-				stmt->funccall->location);
-
-	fexpr = castNode(FuncExpr, node);
+	fexpr = stmt->funcexpr;
+	Assert(fexpr);
 
 	aclresult = pg_proc_aclcheck(fexpr->funcid, GetUserId(), ACL_EXECUTE);
 	if (aclresult != ACLCHECK_OK)
@@ -2289,6 +2271,7 @@ ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic)
 	 * we can't free this context till the procedure returns.
 	 */
 	estate = CreateExecutorState();
+	estate->es_param_list_info = params;
 	econtext = CreateExprContext(estate);
 
 	i = 0;
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 82255b0d1d..266a3ef8ef 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -3231,6 +3231,7 @@ _copyCallStmt(const CallStmt *from)
 	CallStmt *newnode = makeNode(CallStmt);
 
 	COPY_NODE_FIELD(funccall);
+	COPY_NODE_FIELD(funcexpr);
 
 	return newnode;
 }
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index b9bc8e38d7..bbffc87842 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -1206,6 +1206,7 @@ static bool
 _equalCallStmt(const CallStmt *a, const CallStmt *b)
 {
 	COMPARE_NODE_FIELD(funccall);
+	COMPARE_NODE_FIELD(funcexpr);
 
 	return true;
 }
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index 5b3a610cf9..c3a9617f67 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -36,6 +36,8 @@
 #include "parser/parse_coerce.h"
 #include "parser/parse_collate.h"
 #include "parser/parse_cte.h"
+#include "parser/parse_expr.h"
+#include "parser/parse_func.h"
 #include "parser/parse_oper.h"
 #include "parser/parse_param.h"
 #include "parser/parse_relation.h"
@@ -74,6 +76,8 @@ static Query *transformExplainStmt(ParseState *pstate,
 				ExplainStmt *stmt);
 static Query *transformCreateTableAsStmt(ParseState *pstate,
 				CreateTableAsStmt *stmt);
+static Query *transformCallStmt(ParseState *pstate,
+				CallStmt *stmt);
 static void transformLockingClause(ParseState *pstate, Query *qry,
 				LockingClause *lc, bool pushedDown);
 #ifdef RAW_EXPRESSION_COVERAGE_TEST
@@ -318,6 +322,10 @@ transformStmt(ParseState *pstate, Node *parseTree)
 				(CreateTableAsStmt *) parseTree);
 			break;
 
+		case T_CallStmt:
+			result = transformCallStmt(pstate,
+				(CallStmt *) parseTree);
+			break;
 
 		default:
 
 			/*
@@ -2571,6 +2579,43 @@ transformCreateTableAsStmt(ParseState *pstate, CreateTableAsStmt *stmt)
 	return result;
 }
 
+/*
+ * transform a CallStmt
+ *
+ * We need to do parse analysis on the procedure call and its arguments.
+ */ +static Query * +transformCallStmt(ParseState *pstate, CallStmt *stmt) +{ + List *targs; + ListCell *lc; + Node *node; + Query *result; + + targs = NIL; + foreach(lc, stmt->funccall->args) + { + targs = lappend(targs, transformExpr(pstate, + (Node *) lfirst(lc), + EXPR_KIND_CALL_ARGUMENT)); + } + + node = ParseFuncOrColumn(pstate, + stmt->funccall->funcname, + targs, + pstate->p_last_srf, + stmt->funccall, + true, + stmt->funccall->location); + + stmt->funcexpr = castNode(FuncExpr, node); + + result = makeNode(Query); + result->commandType = CMD_UTILITY; + result->utilityStmt = (Node *) stmt; + + return result; +} /* * Produce a string representation of a LockClauseStrength value. diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index 8c23ee53e2..f78efdf359 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -660,7 +660,7 @@ standard_ProcessUtility(PlannedStmt *pstmt, break; case T_CallStmt: - ExecuteCallStmt(pstate, castNode(CallStmt, parsetree), + ExecuteCallStmt(castNode(CallStmt, parsetree), params, (context != PROCESS_UTILITY_TOPLEVEL || IsTransactionBlock())); break; diff --git a/src/include/commands/defrem.h b/src/include/commands/defrem.h index f510f40945..c829abfea7 100644 --- a/src/include/commands/defrem.h +++ b/src/include/commands/defrem.h @@ -15,6 +15,7 @@ #define DEFREM_H #include "catalog/objectaddress.h" +#include "nodes/params.h" #include "nodes/parsenodes.h" #include "utils/array.h" @@ -61,7 +62,7 @@ extern void DropTransformById(Oid transformOid); extern void IsThereFunctionInNamespace(const char *proname, int pronargs, oidvector *proargtypes, Oid nspOid); extern void ExecuteDoStmt(DoStmt *stmt, bool atomic); -extern void ExecuteCallStmt(ParseState *pstate, CallStmt *stmt, bool atomic); +extern void ExecuteCallStmt(CallStmt *stmt, ParamListInfo params, bool atomic); extern Oid get_cast_oid(Oid sourcetypeid, Oid targettypeid, bool missing_ok); extern Oid get_transform_oid(Oid type_id, Oid lang_id, bool missing_ok); extern void interpret_function_parameter_list(ParseState *pstate, diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h index c7a43b8933..ac292bc6e7 100644 --- a/src/include/nodes/parsenodes.h +++ b/src/include/nodes/parsenodes.h @@ -2814,7 +2814,8 @@ typedef struct InlineCodeBlock typedef struct CallStmt { NodeTag type; - FuncCall *funccall; + FuncCall *funccall; /* from the parser */ + FuncExpr *funcexpr; /* transformed */ } CallStmt; typedef struct CallContext diff --git a/src/pl/plpgsql/src/expected/plpgsql_call.out b/src/pl/plpgsql/src/expected/plpgsql_call.out index d0f35163bc..e2442c603c 100644 --- a/src/pl/plpgsql/src/expected/plpgsql_call.out +++ b/src/pl/plpgsql/src/expected/plpgsql_call.out @@ -35,7 +35,26 @@ SELECT * FROM test1; 55 (1 row) +-- nested CALL +TRUNCATE TABLE test1; +CREATE PROCEDURE test_proc4(y int) +LANGUAGE plpgsql +AS $$ +BEGIN + CALL test_proc3(y); + CALL test_proc3($1); +END; +$$; +CALL test_proc4(66); +SELECT * FROM test1; + a +---- + 66 + 66 +(2 rows) + DROP PROCEDURE test_proc1; DROP PROCEDURE test_proc2; DROP PROCEDURE test_proc3; +DROP PROCEDURE test_proc4; DROP TABLE test1; diff --git a/src/pl/plpgsql/src/sql/plpgsql_call.sql b/src/pl/plpgsql/src/sql/plpgsql_call.sql index 38fd220e8f..321ed43af8 100644 --- a/src/pl/plpgsql/src/sql/plpgsql_call.sql +++ b/src/pl/plpgsql/src/sql/plpgsql_call.sql @@ -40,8 +40,26 @@ CALL test_proc3(55); SELECT * FROM test1; +-- nested CALL +TRUNCATE TABLE test1; + +CREATE PROCEDURE test_proc4(y int) +LANGUAGE plpgsql +AS $$ 
+BEGIN + CALL test_proc3(y); + CALL test_proc3($1); +END; +$$; + +CALL test_proc4(66); + +SELECT * FROM test1; + + DROP PROCEDURE test_proc1; DROP PROCEDURE test_proc2; DROP PROCEDURE test_proc3; +DROP PROCEDURE test_proc4; DROP TABLE test1; diff --git a/src/test/regress/expected/create_procedure.out b/src/test/regress/expected/create_procedure.out index e7bede24fa..182b325ea1 100644 --- a/src/test/regress/expected/create_procedure.out +++ b/src/test/regress/expected/create_procedure.out @@ -55,6 +55,22 @@ AS $$ SELECT 5; $$; CALL ptest2(); +-- nested CALL +TRUNCATE cp_test; +CREATE PROCEDURE ptest3(y text) +LANGUAGE SQL +AS $$ +CALL ptest1(y); +CALL ptest1($1); +$$; +CALL ptest3('b'); +SELECT * FROM cp_test; + a | b +---+--- + 1 | b + 1 | b +(2 rows) + -- various error cases CALL version(); -- error: not a procedure ERROR: version() is not a procedure diff --git a/src/test/regress/sql/create_procedure.sql b/src/test/regress/sql/create_procedure.sql index 774c12ee34..52318bf2a6 100644 --- a/src/test/regress/sql/create_procedure.sql +++ b/src/test/regress/sql/create_procedure.sql @@ -31,6 +31,21 @@ $$; CALL ptest2(); +-- nested CALL +TRUNCATE cp_test; + +CREATE PROCEDURE ptest3(y text) +LANGUAGE SQL +AS $$ +CALL ptest1(y); +CALL ptest1($1); +$$; + +CALL ptest3('b'); + +SELECT * FROM cp_test; + + -- various error cases CALL version(); -- error: not a procedure From b0229235564fbe3a9b1cc115ea738a07e274bf30 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 23 Feb 2018 08:43:52 -0500 Subject: [PATCH 1041/1087] Revise API for partition_rbound_cmp/partition_rbound_datum_cmp. Instead of passing the PartitionKey, pass just the required bits of it. This allows these functions to be used without needing the PartitionKey to be available, which is important for several pending patches. Ashutosh Bapat, reviewed by Amit Langote, with a comment tweak by me. 
Discussion: http://postgr.es/m/3d835ed1-36ab-f06d-0ce8-a76a2bbf7677@lab.ntt.co.jp Discussion: http://postgr.es/m/b4d88995-094b-320c-b614-2282fae0bf6c@lab.ntt.co.jp --- src/backend/catalog/partition.c | 62 ++++++++++++++++++++++----------- 1 file changed, 41 insertions(+), 21 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index b1c7cd6c72..d34487ce80 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -165,10 +165,12 @@ static PartitionRangeBound *make_one_range_bound(PartitionKey key, int index, List *datums, bool lower); static int32 partition_hbound_cmp(int modulus1, int remainder1, int modulus2, int remainder2); -static int32 partition_rbound_cmp(PartitionKey key, - Datum *datums1, PartitionRangeDatumKind *kind1, - bool lower1, PartitionRangeBound *b2); -static int32 partition_rbound_datum_cmp(PartitionKey key, +static int32 partition_rbound_cmp(int partnatts, FmgrInfo *partsupfunc, + Oid *partcollation, Datum *datums1, + PartitionRangeDatumKind *kind1, bool lower1, + PartitionRangeBound *b2); +static int32 partition_rbound_datum_cmp(FmgrInfo *partsupfunc, + Oid *partcollation, Datum *rb_datums, PartitionRangeDatumKind *rb_kind, Datum *tuple_datums, int n_tuple_datums); @@ -1113,8 +1115,9 @@ check_new_partition_bound(char *relname, Relation parent, * First check if the resulting range would be empty with * specified lower and upper bounds */ - if (partition_rbound_cmp(key, lower->datums, lower->kind, true, - upper) >= 0) + if (partition_rbound_cmp(key->partnatts, key->partsupfunc, + key->partcollation, lower->datums, + lower->kind, true, upper) >= 0) { ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), @@ -1174,7 +1177,10 @@ check_new_partition_bound(char *relname, Relation parent, kind = boundinfo->kind[offset + 1]; is_lower = (boundinfo->indexes[offset + 1] == -1); - cmpval = partition_rbound_cmp(key, datums, kind, + cmpval = partition_rbound_cmp(key->partnatts, + key->partsupfunc, + key->partcollation, + datums, kind, is_lower, upper); if (cmpval < 0) { @@ -2614,10 +2620,11 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) if (!range_partkey_has_null) { bound_offset = partition_range_datum_bsearch(key, - partdesc->boundinfo, - key->partnatts, - values, - &equal); + partdesc->boundinfo, + key->partnatts, + values, + &equal); + /* * The bound at bound_offset is less than or equal to the * tuple value, so the bound at offset+1 is the upper @@ -2811,7 +2818,9 @@ qsort_partition_rbound_cmp(const void *a, const void *b, void *arg) PartitionRangeBound *b2 = (*(PartitionRangeBound *const *) b); PartitionKey key = (PartitionKey) arg; - return partition_rbound_cmp(key, b1->datums, b1->kind, b1->lower, b2); + return partition_rbound_cmp(key->partnatts, key->partsupfunc, + key->partcollation, b1->datums, b1->kind, + b1->lower, b2); } /* @@ -2820,6 +2829,10 @@ qsort_partition_rbound_cmp(const void *a, const void *b, void *arg) * Return for two range bounds whether the 1st one (specified in datums1, * kind1, and lower1) is <, =, or > the bound specified in *b2. * + * partnatts, partsupfunc and partcollation give the number of attributes in the + * bounds to be compared, comparison function to be used and the collations of + * attributes, respectively. + * * Note that if the values of the two range bounds compare equal, then we take * into account whether they are upper or lower bounds, and an upper bound is * considered to be smaller than a lower bound. 
This is important to the way @@ -2828,7 +2841,7 @@ qsort_partition_rbound_cmp(const void *a, const void *b, void *arg) * two contiguous partitions. */ static int32 -partition_rbound_cmp(PartitionKey key, +partition_rbound_cmp(int partnatts, FmgrInfo *partsupfunc, Oid *partcollation, Datum *datums1, PartitionRangeDatumKind *kind1, bool lower1, PartitionRangeBound *b2) { @@ -2838,7 +2851,7 @@ partition_rbound_cmp(PartitionKey key, PartitionRangeDatumKind *kind2 = b2->kind; bool lower2 = b2->lower; - for (i = 0; i < key->partnatts; i++) + for (i = 0; i < partnatts; i++) { /* * First, handle cases where the column is unbounded, which should not @@ -2859,8 +2872,8 @@ partition_rbound_cmp(PartitionKey key, */ break; - cmpval = DatumGetInt32(FunctionCall2Coll(&key->partsupfunc[i], - key->partcollation[i], + cmpval = DatumGetInt32(FunctionCall2Coll(&partsupfunc[i], + partcollation[i], datums1[i], datums2[i])); if (cmpval != 0) @@ -2884,9 +2897,14 @@ partition_rbound_cmp(PartitionKey key, * * Return whether range bound (specified in rb_datums, rb_kind, and rb_lower) * is <, =, or > partition key of tuple (tuple_datums) + * + * n_tuple_datums, partsupfunc and partcollation give number of attributes in + * the bounds to be compared, comparison function to be used and the collations + * of attributes resp. + * */ static int32 -partition_rbound_datum_cmp(PartitionKey key, +partition_rbound_datum_cmp(FmgrInfo *partsupfunc, Oid *partcollation, Datum *rb_datums, PartitionRangeDatumKind *rb_kind, Datum *tuple_datums, int n_tuple_datums) { @@ -2900,8 +2918,8 @@ partition_rbound_datum_cmp(PartitionKey key, else if (rb_kind[i] == PARTITION_RANGE_DATUM_MAXVALUE) return 1; - cmpval = DatumGetInt32(FunctionCall2Coll(&key->partsupfunc[i], - key->partcollation[i], + cmpval = DatumGetInt32(FunctionCall2Coll(&partsupfunc[i], + partcollation[i], rb_datums[i], tuple_datums[i])); if (cmpval != 0) @@ -2978,7 +2996,8 @@ partition_range_bsearch(PartitionKey key, int32 cmpval; mid = (lo + hi + 1) / 2; - cmpval = partition_rbound_cmp(key, + cmpval = partition_rbound_cmp(key->partnatts, key->partsupfunc, + key->partcollation, boundinfo->datums[mid], boundinfo->kind[mid], (boundinfo->indexes[mid] == -1), @@ -3022,7 +3041,8 @@ partition_range_datum_bsearch(PartitionKey key, int32 cmpval; mid = (lo + hi + 1) / 2; - cmpval = partition_rbound_datum_cmp(key, + cmpval = partition_rbound_datum_cmp(key->partsupfunc, + key->partcollation, boundinfo->datums[mid], boundinfo->kind[mid], values, From f724022d0ae04e687c309f99df27b7ce64d19761 Mon Sep 17 00:00:00 2001 From: Robert Haas Date: Fri, 23 Feb 2018 09:08:43 -0500 Subject: [PATCH 1042/1087] Revise API for partition bound search functions. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Similar to what commit b0229235564fbe3a9b1cc115ea738a07e274bf30 for a different set of functions, pass the required bits of the PartitionKey instead of the whole thing. This allows these functions to be used without needing the PartitionKey to be available. Amit Langote. The larger patch series of which this patch is a part has been reviewed and tested by Ashutosh Bapat, David Rowley, Dilip Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Kyotaro Horiguchi, Álvaro Herrera, and me, but especially and in great detail by David Rowley. 
Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp Discussion: http://postgr.es/m/1f6498e8-377f-d077-e791-5dc84dba2c00@lab.ntt.co.jp --- src/backend/catalog/partition.c | 66 +++++++++++++++++++-------------- 1 file changed, 38 insertions(+), 28 deletions(-) diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c index d34487ce80..f8c9a11493 100644 --- a/src/backend/catalog/partition.c +++ b/src/backend/catalog/partition.c @@ -174,22 +174,24 @@ static int32 partition_rbound_datum_cmp(FmgrInfo *partsupfunc, Datum *rb_datums, PartitionRangeDatumKind *rb_kind, Datum *tuple_datums, int n_tuple_datums); -static int partition_list_bsearch(PartitionKey key, +static int partition_list_bsearch(FmgrInfo *partsupfunc, Oid *partcollation, PartitionBoundInfo boundinfo, Datum value, bool *is_equal); -static int partition_range_bsearch(PartitionKey key, +static int partition_range_bsearch(int partnatts, FmgrInfo *partsupfunc, + Oid *partcollation, PartitionBoundInfo boundinfo, PartitionRangeBound *probe, bool *is_equal); -static int partition_range_datum_bsearch(PartitionKey key, +static int partition_range_datum_bsearch(FmgrInfo *partsupfunc, + Oid *partcollation, PartitionBoundInfo boundinfo, int nvalues, Datum *values, bool *is_equal); -static int partition_hash_bsearch(PartitionKey key, - PartitionBoundInfo boundinfo, +static int partition_hash_bsearch(PartitionBoundInfo boundinfo, int modulus, int remainder); static int get_partition_bound_num_indexes(PartitionBoundInfo b); static int get_greatest_modulus(PartitionBoundInfo b); -static uint64 compute_hash_value(PartitionKey key, Datum *values, bool *isnull); +static uint64 compute_hash_value(int partnatts, FmgrInfo *partsupfunc, + Datum *values, bool *isnull); /* * RelationBuildPartitionDesc @@ -1004,7 +1006,7 @@ check_new_partition_bound(char *relname, Relation parent, * boundinfo->datums that is less than or equal to the * (spec->modulus, spec->remainder) pair. */ - offset = partition_hash_bsearch(key, boundinfo, + offset = partition_hash_bsearch(boundinfo, spec->modulus, spec->remainder); if (offset < 0) @@ -1080,7 +1082,9 @@ check_new_partition_bound(char *relname, Relation parent, int offset; bool equal; - offset = partition_list_bsearch(key, boundinfo, + offset = partition_list_bsearch(key->partsupfunc, + key->partcollation, + boundinfo, val->constvalue, &equal); if (offset >= 0 && equal) @@ -1155,7 +1159,10 @@ check_new_partition_bound(char *relname, Relation parent, * since the index array is initialised with an extra -1 * at the end. 
*/ - offset = partition_range_bsearch(key, boundinfo, lower, + offset = partition_range_bsearch(key->partnatts, + key->partsupfunc, + key->partcollation, + boundinfo, lower, &equal); if (boundinfo->indexes[offset + 1] < 0) @@ -2574,7 +2581,9 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) { PartitionBoundInfo boundinfo = partdesc->boundinfo; int greatest_modulus = get_greatest_modulus(boundinfo); - uint64 rowHash = compute_hash_value(key, values, isnull); + uint64 rowHash = compute_hash_value(key->partnatts, + key->partsupfunc, + values, isnull); part_index = boundinfo->indexes[rowHash % greatest_modulus]; } @@ -2590,7 +2599,8 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) { bool equal = false; - bound_offset = partition_list_bsearch(key, + bound_offset = partition_list_bsearch(key->partsupfunc, + key->partcollation, partdesc->boundinfo, values[0], &equal); if (bound_offset >= 0 && equal) @@ -2619,7 +2629,8 @@ get_partition_for_tuple(Relation relation, Datum *values, bool *isnull) if (!range_partkey_has_null) { - bound_offset = partition_range_datum_bsearch(key, + bound_offset = partition_range_datum_bsearch(key->partsupfunc, + key->partcollation, partdesc->boundinfo, key->partnatts, values, @@ -2938,7 +2949,7 @@ partition_rbound_datum_cmp(FmgrInfo *partsupfunc, Oid *partcollation, * to the input value. */ static int -partition_list_bsearch(PartitionKey key, +partition_list_bsearch(FmgrInfo *partsupfunc, Oid *partcollation, PartitionBoundInfo boundinfo, Datum value, bool *is_equal) { @@ -2953,8 +2964,8 @@ partition_list_bsearch(PartitionKey key, int32 cmpval; mid = (lo + hi + 1) / 2; - cmpval = DatumGetInt32(FunctionCall2Coll(&key->partsupfunc[0], - key->partcollation[0], + cmpval = DatumGetInt32(FunctionCall2Coll(&partsupfunc[0], + partcollation[0], boundinfo->datums[mid][0], value)); if (cmpval <= 0) @@ -2981,7 +2992,8 @@ partition_list_bsearch(PartitionKey key, * to the input range bound */ static int -partition_range_bsearch(PartitionKey key, +partition_range_bsearch(int partnatts, FmgrInfo *partsupfunc, + Oid *partcollation, PartitionBoundInfo boundinfo, PartitionRangeBound *probe, bool *is_equal) { @@ -2996,8 +3008,7 @@ partition_range_bsearch(PartitionKey key, int32 cmpval; mid = (lo + hi + 1) / 2; - cmpval = partition_rbound_cmp(key->partnatts, key->partsupfunc, - key->partcollation, + cmpval = partition_rbound_cmp(partnatts, partsupfunc, partcollation, boundinfo->datums[mid], boundinfo->kind[mid], (boundinfo->indexes[mid] == -1), @@ -3026,7 +3037,7 @@ partition_range_bsearch(PartitionKey key, * to the input tuple. 
*/ static int -partition_range_datum_bsearch(PartitionKey key, +partition_range_datum_bsearch(FmgrInfo *partsupfunc, Oid *partcollation, PartitionBoundInfo boundinfo, int nvalues, Datum *values, bool *is_equal) { @@ -3041,8 +3052,8 @@ partition_range_datum_bsearch(PartitionKey key, int32 cmpval; mid = (lo + hi + 1) / 2; - cmpval = partition_rbound_datum_cmp(key->partsupfunc, - key->partcollation, + cmpval = partition_rbound_datum_cmp(partsupfunc, + partcollation, boundinfo->datums[mid], boundinfo->kind[mid], values, @@ -3069,8 +3080,7 @@ partition_range_datum_bsearch(PartitionKey key, * all of them are greater */ static int -partition_hash_bsearch(PartitionKey key, - PartitionBoundInfo boundinfo, +partition_hash_bsearch(PartitionBoundInfo boundinfo, int modulus, int remainder) { int lo, @@ -3268,27 +3278,27 @@ get_greatest_modulus(PartitionBoundInfo bound) * Compute the hash value for given not null partition key values. */ static uint64 -compute_hash_value(PartitionKey key, Datum *values, bool *isnull) +compute_hash_value(int partnatts, FmgrInfo *partsupfunc, + Datum *values, bool *isnull) { int i; - int nkeys = key->partnatts; uint64 rowHash = 0; Datum seed = UInt64GetDatum(HASH_PARTITION_SEED); - for (i = 0; i < nkeys; i++) + for (i = 0; i < partnatts; i++) { if (!isnull[i]) { Datum hash; - Assert(OidIsValid(key->partsupfunc[i].fn_oid)); + Assert(OidIsValid(partsupfunc[i].fn_oid)); /* * Compute hash for each datum value by calling respective * datatype-specific hash functions of each partition key * attribute. */ - hash = FunctionCall2(&key->partsupfunc[i], values[i], seed); + hash = FunctionCall2(&partsupfunc[i], values[i], seed); /* Form a single 64-bit hash value */ rowHash = hash_combine64(rowHash, DatumGetUInt64(hash)); From 9afd513df042b22b98bb9b55f27265e95d34f9e6 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 23 Feb 2018 13:47:33 -0500 Subject: [PATCH 1043/1087] Fix planner failures with overlapping mergejoin clauses in an outer join. Given overlapping or partially redundant join clauses, for example t1 JOIN t2 ON t1.a = t2.x AND t1.b = t2.x the planner's EquivalenceClass machinery will ordinarily refactor the clauses as "t1.a = t1.b AND t1.a = t2.x", so that join processing doesn't see multiple references to the same EquivalenceClass in a list of join equality clauses. However, if the join is outer, it's incorrect to derive a restriction clause on the outer side from the join conditions, so the clause refactoring does not happen and we end up with overlapping join conditions. The code that attempted to deal with such cases had several subtle bugs, which could result in "left and right pathkeys do not match in mergejoin" or "outer pathkeys do not match mergeclauses" planner errors, if the selected join plan type was a mergejoin. (It does not appear that any actually incorrect plan could have been emitted.) The core of the problem really was failure to recognize that the outer and inner relations' pathkeys have different relationships to the mergeclause list. A join's mergeclause list is constructed by reference to the outer pathkeys, so it will always be ordered the same as the outer pathkeys, but this cannot be presumed true for the inner pathkeys. If the inner sides of the mergeclauses contain multiple references to the same EquivalenceClass ({t2.x} in the above example) then a simplistic rendering of the required inner sort order is like "ORDER BY t2.x, t2.x", but the pathkey machinery recognizes that the second sort column is redundant and throws it away. 
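For illustration, a sketch of a query shape that exercises this case
(mirroring the o/i example used in the code comments below; the table
and column names are hypothetical):

    SELECT *
    FROM o LEFT JOIN i
      ON o.a = i.x AND o.b = i.y AND o.c = i.x;

Here the simplistic rendering of the inner ordering is "ORDER BY i.x,
i.y, i.x", which the pathkey machinery canonicalizes to just i.x, i.y.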
The mergejoin planning code failed to account for that behavior properly. One error was to try to generate cut-down versions of the mergeclause list from cut-down versions of the inner pathkeys in the same way as the initial construction of the mergeclause list from the outer pathkeys was done; this could lead to choosing a mergeclause list that fails to match the outer pathkeys. The other problem was that the pathkey cross-checking code in create_mergejoin_plan treated the inner and outer pathkey lists identically, whereas actually the expectations for them must be different. That led to false "pathkeys do not match" failures in some cases, and in principle could have led to failure to detect bogus plans in other cases, though there is no indication that such bogus plans could be generated. Reported by Alexander Kuzmenkov, who also reviewed this patch. This has been broken for years (back to around 8.3 according to my testing), so back-patch to all supported branches. Discussion: https://postgr.es/m/5dad9160-4632-0e47-e120-8e2082000c01@postgrespro.ru --- src/backend/optimizer/path/joinpath.c | 30 +++-- src/backend/optimizer/path/pathkeys.c | 150 +++++++++++++++++++----- src/backend/optimizer/plan/createplan.c | 145 +++++++++++------------ src/include/optimizer/paths.h | 10 +- src/test/regress/expected/join.out | 80 +++++++++++++ src/test/regress/sql/join.sql | 31 +++++ 6 files changed, 322 insertions(+), 124 deletions(-) diff --git a/src/backend/optimizer/path/joinpath.c b/src/backend/optimizer/path/joinpath.c index 396ee2747a..688f440b92 100644 --- a/src/backend/optimizer/path/joinpath.c +++ b/src/backend/optimizer/path/joinpath.c @@ -1009,10 +1009,10 @@ sort_inner_and_outer(PlannerInfo *root, outerkeys = all_pathkeys; /* no work at first one... */ /* Sort the mergeclauses into the corresponding ordering */ - cur_mergeclauses = find_mergeclauses_for_pathkeys(root, - outerkeys, - true, - extra->mergeclause_list); + cur_mergeclauses = + find_mergeclauses_for_outer_pathkeys(root, + outerkeys, + extra->mergeclause_list); /* Should have used them all... */ Assert(list_length(cur_mergeclauses) == list_length(extra->mergeclause_list)); @@ -1102,10 +1102,10 @@ generate_mergejoin_paths(PlannerInfo *root, jointype = JOIN_INNER; /* Look for useful mergeclauses (if any) */ - mergeclauses = find_mergeclauses_for_pathkeys(root, - outerpath->pathkeys, - true, - extra->mergeclause_list); + mergeclauses = + find_mergeclauses_for_outer_pathkeys(root, + outerpath->pathkeys, + extra->mergeclause_list); /* * Done with this outer path if no chance for a mergejoin. 
@@ -1228,10 +1228,9 @@ generate_mergejoin_paths(PlannerInfo *root, if (sortkeycnt < num_sortkeys) { newclauses = - find_mergeclauses_for_pathkeys(root, - trialsortkeys, - false, - mergeclauses); + trim_mergeclauses_for_inner_pathkeys(root, + mergeclauses, + trialsortkeys); Assert(newclauses != NIL); } else @@ -1272,10 +1271,9 @@ generate_mergejoin_paths(PlannerInfo *root, if (sortkeycnt < num_sortkeys) { newclauses = - find_mergeclauses_for_pathkeys(root, - trialsortkeys, - false, - mergeclauses); + trim_mergeclauses_for_inner_pathkeys(root, + mergeclauses, + trialsortkeys); Assert(newclauses != NIL); } else diff --git a/src/backend/optimizer/path/pathkeys.c b/src/backend/optimizer/path/pathkeys.c index ef58cff28d..6d1cc3b8a0 100644 --- a/src/backend/optimizer/path/pathkeys.c +++ b/src/backend/optimizer/path/pathkeys.c @@ -981,16 +981,14 @@ update_mergeclause_eclasses(PlannerInfo *root, RestrictInfo *restrictinfo) } /* - * find_mergeclauses_for_pathkeys - * This routine attempts to find a set of mergeclauses that can be - * used with a specified ordering for one of the input relations. + * find_mergeclauses_for_outer_pathkeys + * This routine attempts to find a list of mergeclauses that can be + * used with a specified ordering for the join's outer relation. * If successful, it returns a list of mergeclauses. * - * 'pathkeys' is a pathkeys list showing the ordering of an input path. - * 'outer_keys' is true if these keys are for the outer input path, - * false if for inner. + * 'pathkeys' is a pathkeys list showing the ordering of an outer-rel path. * 'restrictinfos' is a list of mergejoinable restriction clauses for the - * join relation being formed. + * join relation being formed, in no particular order. * * The restrictinfos must be marked (via outer_is_left) to show which side * of each clause is associated with the current outer path. (See @@ -998,12 +996,12 @@ update_mergeclause_eclasses(PlannerInfo *root, RestrictInfo *restrictinfo) * * The result is NIL if no merge can be done, else a maximal list of * usable mergeclauses (represented as a list of their restrictinfo nodes). + * The list is ordered to match the pathkeys, as required for execution. */ List * -find_mergeclauses_for_pathkeys(PlannerInfo *root, - List *pathkeys, - bool outer_keys, - List *restrictinfos) +find_mergeclauses_for_outer_pathkeys(PlannerInfo *root, + List *pathkeys, + List *restrictinfos) { List *mergeclauses = NIL; ListCell *i; @@ -1044,19 +1042,20 @@ find_mergeclauses_for_pathkeys(PlannerInfo *root, * * It's possible that multiple matching clauses might have different * ECs on the other side, in which case the order we put them into our - * result makes a difference in the pathkeys required for the other - * input path. However this routine hasn't got any info about which + * result makes a difference in the pathkeys required for the inner + * input rel. However this routine hasn't got any info about which * order would be best, so we don't worry about that. * * It's also possible that the selected mergejoin clauses produce - * a noncanonical ordering of pathkeys for the other side, ie, we + * a noncanonical ordering of pathkeys for the inner side, ie, we * might select clauses that reference b.v1, b.v2, b.v1 in that * order. This is not harmful in itself, though it suggests that - * the clauses are partially redundant. Since it happens only with - * redundant query conditions, we don't bother to eliminate it. 
- * make_inner_pathkeys_for_merge() has to delete duplicates when - * it constructs the canonical pathkeys list, and we also have to - * deal with the case in create_mergejoin_plan(). + * the clauses are partially redundant. Since the alternative is + * to omit mergejoin clauses and thereby possibly fail to generate a + * plan altogether, we live with it. make_inner_pathkeys_for_merge() + * has to delete duplicates when it constructs the inner pathkeys + * list, and we also have to deal with such cases specially in + * create_mergejoin_plan(). *---------- */ foreach(j, restrictinfos) @@ -1064,12 +1063,8 @@ find_mergeclauses_for_pathkeys(PlannerInfo *root, RestrictInfo *rinfo = (RestrictInfo *) lfirst(j); EquivalenceClass *clause_ec; - if (outer_keys) - clause_ec = rinfo->outer_is_left ? - rinfo->left_ec : rinfo->right_ec; - else - clause_ec = rinfo->outer_is_left ? - rinfo->right_ec : rinfo->left_ec; + clause_ec = rinfo->outer_is_left ? + rinfo->left_ec : rinfo->right_ec; if (clause_ec == pathkey_ec) matched_restrictinfos = lappend(matched_restrictinfos, rinfo); } @@ -1273,8 +1268,8 @@ select_outer_pathkeys_for_merge(PlannerInfo *root, * must be applied to an inner path to make it usable with the * given mergeclauses. * - * 'mergeclauses' is a list of RestrictInfos for mergejoin clauses - * that will be used in a merge join. + * 'mergeclauses' is a list of RestrictInfos for the mergejoin clauses + * that will be used in a merge join, in order. * 'outer_pathkeys' are the already-known canonical pathkeys for the outer * side of the join. * @@ -1351,8 +1346,13 @@ make_inner_pathkeys_for_merge(PlannerInfo *root, opathkey->pk_nulls_first); /* - * Don't generate redundant pathkeys (can happen if multiple - * mergeclauses refer to same EC). + * Don't generate redundant pathkeys (which can happen if multiple + * mergeclauses refer to the same EC). Because we do this, the output + * pathkey list isn't necessarily ordered like the mergeclauses, which + * complicates life for create_mergejoin_plan(). But if we didn't, + * we'd have a noncanonical sort key list, which would be bad; for one + * reason, it certainly wouldn't match any available sort order for + * the input relation. */ if (!pathkey_is_redundant(pathkey, pathkeys)) pathkeys = lappend(pathkeys, pathkey); @@ -1361,6 +1361,98 @@ make_inner_pathkeys_for_merge(PlannerInfo *root, return pathkeys; } +/* + * trim_mergeclauses_for_inner_pathkeys + * This routine trims a list of mergeclauses to include just those that + * work with a specified ordering for the join's inner relation. + * + * 'mergeclauses' is a list of RestrictInfos for mergejoin clauses for the + * join relation being formed, in an order known to work for the + * currently-considered sort ordering of the join's outer rel. + * 'pathkeys' is a pathkeys list showing the ordering of an inner-rel path; + * it should be equal to, or a truncation of, the result of + * make_inner_pathkeys_for_merge for these mergeclauses. + * + * What we return will be a prefix of the given mergeclauses list. + * + * We need this logic because make_inner_pathkeys_for_merge's result isn't + * necessarily in the same order as the mergeclauses. That means that if we + * consider an inner-rel pathkey list that is a truncation of that result, + * we might need to drop mergeclauses even though they match a surviving inner + * pathkey. This happens when they are to the right of a mergeclause that + * matches a removed inner pathkey. 
+ * + * The mergeclauses must be marked (via outer_is_left) to show which side + * of each clause is associated with the current outer path. (See + * select_mergejoin_clauses()) + */ +List * +trim_mergeclauses_for_inner_pathkeys(PlannerInfo *root, + List *mergeclauses, + List *pathkeys) +{ + List *new_mergeclauses = NIL; + PathKey *pathkey; + EquivalenceClass *pathkey_ec; + bool matched_pathkey; + ListCell *lip; + ListCell *i; + + /* No pathkeys => no mergeclauses (though we don't expect this case) */ + if (pathkeys == NIL) + return NIL; + /* Initialize to consider first pathkey */ + lip = list_head(pathkeys); + pathkey = (PathKey *) lfirst(lip); + pathkey_ec = pathkey->pk_eclass; + lip = lnext(lip); + matched_pathkey = false; + + /* Scan mergeclauses to see how many we can use */ + foreach(i, mergeclauses) + { + RestrictInfo *rinfo = (RestrictInfo *) lfirst(i); + EquivalenceClass *clause_ec; + + /* Assume we needn't do update_mergeclause_eclasses again here */ + + /* Check clause's inner-rel EC against current pathkey */ + clause_ec = rinfo->outer_is_left ? + rinfo->right_ec : rinfo->left_ec; + + /* If we don't have a match, attempt to advance to next pathkey */ + if (clause_ec != pathkey_ec) + { + /* If we had no clauses matching this inner pathkey, must stop */ + if (!matched_pathkey) + break; + + /* Advance to next inner pathkey, if any */ + if (lip == NULL) + break; + pathkey = (PathKey *) lfirst(lip); + pathkey_ec = pathkey->pk_eclass; + lip = lnext(lip); + matched_pathkey = false; + } + + /* If mergeclause matches current inner pathkey, we can use it */ + if (clause_ec == pathkey_ec) + { + new_mergeclauses = lappend(new_mergeclauses, rinfo); + matched_pathkey = true; + } + else + { + /* Else, no hope of adding any more mergeclauses */ + break; + } + } + + return new_mergeclauses; +} + + /**************************************************************************** * PATHKEY USEFULNESS CHECKS * diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c index da0cc7f266..9ae1bf31d5 100644 --- a/src/backend/optimizer/plan/createplan.c +++ b/src/backend/optimizer/plan/createplan.c @@ -3791,6 +3791,8 @@ create_mergejoin_plan(PlannerInfo *root, Oid *mergecollations; int *mergestrategies; bool *mergenullsfirst; + PathKey *opathkey; + EquivalenceClass *opeclass; int i; ListCell *lc; ListCell *lop; @@ -3909,7 +3911,8 @@ create_mergejoin_plan(PlannerInfo *root, * Compute the opfamily/collation/strategy/nullsfirst arrays needed by the * executor. The information is in the pathkeys for the two inputs, but * we need to be careful about the possibility of mergeclauses sharing a - * pathkey (compare find_mergeclauses_for_pathkeys()). + * pathkey, as well as the possibility that the inner pathkeys are not in + * an order matching the mergeclauses. 
*/ nClauses = list_length(mergeclauses); Assert(nClauses == list_length(best_path->path_mergeclauses)); @@ -3918,6 +3921,8 @@ create_mergejoin_plan(PlannerInfo *root, mergestrategies = (int *) palloc(nClauses * sizeof(int)); mergenullsfirst = (bool *) palloc(nClauses * sizeof(bool)); + opathkey = NULL; + opeclass = NULL; lop = list_head(outerpathkeys); lip = list_head(innerpathkeys); i = 0; @@ -3926,11 +3931,9 @@ create_mergejoin_plan(PlannerInfo *root, RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc); EquivalenceClass *oeclass; EquivalenceClass *ieclass; - PathKey *opathkey; - PathKey *ipathkey; - EquivalenceClass *opeclass; - EquivalenceClass *ipeclass; - ListCell *l2; + PathKey *ipathkey = NULL; + EquivalenceClass *ipeclass = NULL; + bool first_inner_match = false; /* fetch outer/inner eclass from mergeclause */ if (rinfo->outer_is_left) @@ -3947,104 +3950,96 @@ create_mergejoin_plan(PlannerInfo *root, Assert(ieclass != NULL); /* - * For debugging purposes, we check that the eclasses match the paths' - * pathkeys. In typical cases the merge clauses are one-to-one with - * the pathkeys, but when dealing with partially redundant query - * conditions, we might have clauses that re-reference earlier path - * keys. The case that we need to reject is where a pathkey is - * entirely skipped over. + * We must identify the pathkey elements associated with this clause + * by matching the eclasses (which should give a unique match, since + * the pathkey lists should be canonical). In typical cases the merge + * clauses are one-to-one with the pathkeys, but when dealing with + * partially redundant query conditions, things are more complicated. * - * lop and lip reference the first as-yet-unused pathkey elements; - * it's okay to match them, or any element before them. If they're - * NULL then we have found all pathkey elements to be used. + * lop and lip reference the first as-yet-unmatched pathkey elements. + * If they're NULL then all pathkey elements have been matched. + * + * The ordering of the outer pathkeys should match the mergeclauses, + * by construction (see find_mergeclauses_for_outer_pathkeys()). There + * could be more than one mergeclause for the same outer pathkey, but + * no pathkey may be entirely skipped over. */ - if (lop) + if (oeclass != opeclass) /* multiple matches are not interesting */ { + /* doesn't match the current opathkey, so must match the next */ + if (lop == NULL) + elog(ERROR, "outer pathkeys do not match mergeclauses"); opathkey = (PathKey *) lfirst(lop); opeclass = opathkey->pk_eclass; - if (oeclass == opeclass) - { - /* fast path for typical case */ - lop = lnext(lop); - } - else - { - /* redundant clauses ... must match something before lop */ - foreach(l2, outerpathkeys) - { - if (l2 == lop) - break; - opathkey = (PathKey *) lfirst(l2); - opeclass = opathkey->pk_eclass; - if (oeclass == opeclass) - break; - } - if (oeclass != opeclass) - elog(ERROR, "outer pathkeys do not match mergeclauses"); - } - } - else - { - /* redundant clauses ... 
must match some already-used pathkey */ - opathkey = NULL; - opeclass = NULL; - foreach(l2, outerpathkeys) - { - opathkey = (PathKey *) lfirst(l2); - opeclass = opathkey->pk_eclass; - if (oeclass == opeclass) - break; - } - if (l2 == NULL) + lop = lnext(lop); + if (oeclass != opeclass) elog(ERROR, "outer pathkeys do not match mergeclauses"); } + /* + * The inner pathkeys likewise should not have skipped-over keys, but + * it's possible for a mergeclause to reference some earlier inner + * pathkey if we had redundant pathkeys. For example we might have + * mergeclauses like "o.a = i.x AND o.b = i.y AND o.c = i.x". The + * implied inner ordering is then "ORDER BY x, y, x", but the pathkey + * mechanism drops the second sort by x as redundant, and this code + * must cope. + * + * It's also possible for the implied inner-rel ordering to be like + * "ORDER BY x, y, x DESC". We still drop the second instance of x as + * redundant; but this means that the sort ordering of a redundant + * inner pathkey should not be considered significant. So we must + * detect whether this is the first clause matching an inner pathkey. + */ if (lip) { ipathkey = (PathKey *) lfirst(lip); ipeclass = ipathkey->pk_eclass; if (ieclass == ipeclass) { - /* fast path for typical case */ + /* successful first match to this inner pathkey */ lip = lnext(lip); - } - else - { - /* redundant clauses ... must match something before lip */ - foreach(l2, innerpathkeys) - { - if (l2 == lip) - break; - ipathkey = (PathKey *) lfirst(l2); - ipeclass = ipathkey->pk_eclass; - if (ieclass == ipeclass) - break; - } - if (ieclass != ipeclass) - elog(ERROR, "inner pathkeys do not match mergeclauses"); + first_inner_match = true; } } - else + if (!first_inner_match) { - /* redundant clauses ... must match some already-used pathkey */ - ipathkey = NULL; - ipeclass = NULL; + /* redundant clause ... must match something before lip */ + ListCell *l2; + foreach(l2, innerpathkeys) { + if (l2 == lip) + break; ipathkey = (PathKey *) lfirst(l2); ipeclass = ipathkey->pk_eclass; if (ieclass == ipeclass) break; } - if (l2 == NULL) + if (ieclass != ipeclass) elog(ERROR, "inner pathkeys do not match mergeclauses"); } - /* pathkeys should match each other too (more debugging) */ + /* + * The pathkeys should always match each other as to opfamily and + * collation (which affect equality), but if we're considering a + * redundant inner pathkey, its sort ordering might not match. In + * such cases we may ignore the inner pathkey's sort ordering and use + * the outer's. (In effect, we're lying to the executor about the + * sort direction of this inner column, but it does not matter since + * the run-time row comparisons would only reach this column when + * there's equality for the earlier column containing the same eclass. + * There could be only one value in this column for the range of inner + * rows having a given value in the earlier column, so it does not + * matter which way we imagine this column to be ordered.) But a + * non-redundant inner pathkey had better match outer's ordering too. 
+ */ if (opathkey->pk_opfamily != ipathkey->pk_opfamily || - opathkey->pk_eclass->ec_collation != ipathkey->pk_eclass->ec_collation || - opathkey->pk_strategy != ipathkey->pk_strategy || - opathkey->pk_nulls_first != ipathkey->pk_nulls_first) + opathkey->pk_eclass->ec_collation != ipathkey->pk_eclass->ec_collation) + elog(ERROR, "left and right pathkeys do not match in mergejoin"); + if (first_inner_match && + (opathkey->pk_strategy != ipathkey->pk_strategy || + opathkey->pk_nulls_first != ipathkey->pk_nulls_first)) elog(ERROR, "left and right pathkeys do not match in mergejoin"); /* OK, save info for executor */ diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h index c9e44318ad..b2b4353eea 100644 --- a/src/include/optimizer/paths.h +++ b/src/include/optimizer/paths.h @@ -216,16 +216,18 @@ extern void initialize_mergeclause_eclasses(PlannerInfo *root, RestrictInfo *restrictinfo); extern void update_mergeclause_eclasses(PlannerInfo *root, RestrictInfo *restrictinfo); -extern List *find_mergeclauses_for_pathkeys(PlannerInfo *root, - List *pathkeys, - bool outer_keys, - List *restrictinfos); +extern List *find_mergeclauses_for_outer_pathkeys(PlannerInfo *root, + List *pathkeys, + List *restrictinfos); extern List *select_outer_pathkeys_for_merge(PlannerInfo *root, List *mergeclauses, RelOptInfo *joinrel); extern List *make_inner_pathkeys_for_merge(PlannerInfo *root, List *mergeclauses, List *outer_pathkeys); +extern List *trim_mergeclauses_for_inner_pathkeys(PlannerInfo *root, + List *mergeclauses, + List *pathkeys); extern List *truncate_useless_pathkeys(PlannerInfo *root, RelOptInfo *rel, List *pathkeys); diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index c50a206efb..4d5931d67e 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -2290,6 +2290,86 @@ where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous; ----+----+----------+---------- (0 rows) +-- +-- check a case where we formerly got confused by conflicting sort orders +-- in redundant merge join path keys +-- +explain (costs off) +select * from + j1_tbl full join + (select * from j2_tbl order by j2_tbl.i desc, j2_tbl.k asc) j2_tbl + on j1_tbl.i = j2_tbl.i and j1_tbl.i = j2_tbl.k; + QUERY PLAN +----------------------------------------------------------------- + Merge Full Join + Merge Cond: ((j2_tbl.i = j1_tbl.i) AND (j2_tbl.k = j1_tbl.i)) + -> Sort + Sort Key: j2_tbl.i DESC, j2_tbl.k + -> Seq Scan on j2_tbl + -> Sort + Sort Key: j1_tbl.i DESC + -> Seq Scan on j1_tbl +(8 rows) + +select * from + j1_tbl full join + (select * from j2_tbl order by j2_tbl.i desc, j2_tbl.k asc) j2_tbl + on j1_tbl.i = j2_tbl.i and j1_tbl.i = j2_tbl.k; + i | j | t | i | k +---+---+-------+---+---- + | | | | 0 + | | | | + | 0 | zero | | + | | null | | + 8 | 8 | eight | | + 7 | 7 | seven | | + 6 | 6 | six | | + | | | 5 | -5 + | | | 5 | -5 + 5 | 0 | five | | + 4 | 1 | four | | + | | | 3 | -3 + 3 | 2 | three | | + 2 | 3 | two | 2 | 2 + | | | 2 | 4 + | | | 1 | -1 + | | | 0 | + 1 | 4 | one | | + 0 | | zero | | +(19 rows) + +-- +-- a different check for handling of redundant sort keys in merge joins +-- +explain (costs off) +select count(*) from + (select * from tenk1 x order by x.thousand, x.twothousand, x.fivethous) x + left join + (select * from tenk1 y order by y.unique2) y + on x.thousand = y.unique2 and x.twothousand = y.hundred and x.fivethous = y.unique2; + QUERY PLAN 
+---------------------------------------------------------------------------------- + Aggregate + -> Merge Left Join + Merge Cond: (x.thousand = y.unique2) + Join Filter: ((x.twothousand = y.hundred) AND (x.fivethous = y.unique2)) + -> Sort + Sort Key: x.thousand, x.twothousand, x.fivethous + -> Seq Scan on tenk1 x + -> Materialize + -> Index Scan using tenk1_unique2 on tenk1 y +(9 rows) + +select count(*) from + (select * from tenk1 x order by x.thousand, x.twothousand, x.fivethous) x + left join + (select * from tenk1 y order by y.unique2) y + on x.thousand = y.unique2 and x.twothousand = y.hundred and x.fivethous = y.unique2; + count +------- + 10000 +(1 row) + -- -- Clean up -- diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index fc84237ce9..30dfde223e 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -417,6 +417,37 @@ select a.f1, b.f1, t.thousand, t.tenthous from (select sum(f1) as f1 from int4_tbl i4b) b where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous; +-- +-- check a case where we formerly got confused by conflicting sort orders +-- in redundant merge join path keys +-- +explain (costs off) +select * from + j1_tbl full join + (select * from j2_tbl order by j2_tbl.i desc, j2_tbl.k asc) j2_tbl + on j1_tbl.i = j2_tbl.i and j1_tbl.i = j2_tbl.k; + +select * from + j1_tbl full join + (select * from j2_tbl order by j2_tbl.i desc, j2_tbl.k asc) j2_tbl + on j1_tbl.i = j2_tbl.i and j1_tbl.i = j2_tbl.k; + +-- +-- a different check for handling of redundant sort keys in merge joins +-- +explain (costs off) +select count(*) from + (select * from tenk1 x order by x.thousand, x.twothousand, x.fivethous) x + left join + (select * from tenk1 y order by y.unique2) y + on x.thousand = y.unique2 and x.twothousand = y.hundred and x.fivethous = y.unique2; + +select count(*) from + (select * from tenk1 x order by x.thousand, x.twothousand, x.fivethous) x + left join + (select * from tenk1 y order by y.unique2) y + on x.thousand = y.unique2 and x.twothousand = y.hundred and x.fivethous = y.unique2; + -- -- Clean up From fe35cea7cf896574d765edf86a293fbc67c74365 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Fri, 23 Feb 2018 11:24:04 -0800 Subject: [PATCH 1044/1087] Synchronize doc/ copies of src/test/examples/. This is mostly cosmetic, but it might fix build failures, on some platform, when copying from the documentation. Back-patch to 9.3 (all supported versions). --- doc/src/sgml/libpq.sgml | 15 ++++++++++++++- src/test/examples/testlibpq2.sql | 2 +- 2 files changed, 15 insertions(+), 2 deletions(-) diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index b66c6da4f7..f327d4b5b5 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -8256,13 +8256,16 @@ testlibpq.o(.text+0xa4): undefined reference to `PQerrorMessage' #include -#include +#include "libpq-fe.h" static void exit_nicely(PGconn *conn) @@ -8383,6 +8386,9 @@ main(int argc, char **argv) #include #include +#ifdef HAVE_SYS_SELECT_H +#include +#endif + #include "libpq-fe.h" static void @@ -8526,6 +8536,9 @@ main(int argc, char **argv) Date: Fri, 23 Feb 2018 14:38:19 -0500 Subject: [PATCH 1045/1087] Allow auto_explain.log_min_duration to go up to INT_MAX. The previous limit of INT_MAX / 1000 seems to have been cargo-culted in from somewhere else. Or possibly the value was converted to microseconds at some point; but in all supported releases, it's just compared to other values, so there's no need for the restriction. 
This change raises the effective limit from ~35 minutes to ~24 days, which conceivably is useful to somebody, and anyway it's more consistent with the range of the core log_min_duration_statement GUC. Per complaint from Kevin Bloch. Back-patch to all supported releases. Discussion: https://postgr.es/m/8ea82d7e-cb78-8e05-0629-73aa14d2a0ca@codingthat.com --- contrib/auto_explain/auto_explain.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/contrib/auto_explain/auto_explain.c b/contrib/auto_explain/auto_explain.c index d146bf4bc9..ea4f957cfa 100644 --- a/contrib/auto_explain/auto_explain.c +++ b/contrib/auto_explain/auto_explain.c @@ -78,7 +78,7 @@ _PG_init(void) "Zero prints all plans. -1 turns this feature off.", &auto_explain_log_min_duration, -1, - -1, INT_MAX / 1000, + -1, INT_MAX, PGC_SUSET, GUC_UNIT_MS, NULL, From 9fe802c8185e9a53158b6797d0f6fd8bfbb01af1 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 23 Feb 2018 15:11:40 -0500 Subject: [PATCH 1046/1087] Fix brown-paper-bag bug in commit 0a459cec96d3856f476c2db298c6b52f592894e8. RANGE_OFFSET comparisons need to examine the first ORDER BY column, which isn't necessarily the first column in the incoming tuples. No idea how this slipped through initial testing. Per bug #15082 from Zhou Digoal. Discussion: https://postgr.es/m/151939899974.1461.9411971793110285476@wrigleys.postgresql.org --- src/backend/executor/nodeWindowAgg.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c index a56c3e89fd..fe5369a0c7 100644 --- a/src/backend/executor/nodeWindowAgg.c +++ b/src/backend/executor/nodeWindowAgg.c @@ -1559,6 +1559,7 @@ update_frameheadpos(WindowAggState *winstate) * reach end of partition, we will leave frameheadpos = end+1 and * framehead_slot empty. */ + int sortCol = node->ordColIdx[0]; bool sub, less; @@ -1593,9 +1594,9 @@ update_frameheadpos(WindowAggState *winstate) bool headisnull, currisnull; - headval = slot_getattr(winstate->framehead_slot, 1, + headval = slot_getattr(winstate->framehead_slot, sortCol, &headisnull); - currval = slot_getattr(winstate->ss.ss_ScanTupleSlot, 1, + currval = slot_getattr(winstate->ss.ss_ScanTupleSlot, sortCol, &currisnull); if (headisnull || currisnull) { @@ -1809,6 +1810,7 @@ update_frametailpos(WindowAggState *winstate) * necessary. Note that if we reach end of partition, we will * leave frametailpos = end+1 and frametail_slot empty. */ + int sortCol = node->ordColIdx[0]; bool sub, less; @@ -1843,9 +1845,9 @@ update_frametailpos(WindowAggState *winstate) bool tailisnull, currisnull; - tailval = slot_getattr(winstate->frametail_slot, 1, + tailval = slot_getattr(winstate->frametail_slot, sortCol, &tailisnull); - currval = slot_getattr(winstate->ss.ss_ScanTupleSlot, 1, + currval = slot_getattr(winstate->ss.ss_ScanTupleSlot, sortCol, &currisnull); if (tailisnull || currisnull) { From eec1a8cb6cbc6ea44cf58cfaeaa01ad8ee2bc8e8 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Fri, 23 Feb 2018 17:20:26 -0500 Subject: [PATCH 1047/1087] First-draft release notes for 10.3. --- doc/src/sgml/release-10.sgml | 206 +++++++++++++++++++++++++++++++++++ 1 file changed, 206 insertions(+) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 7b0fde2b93..718ad4cb0c 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -1,6 +1,212 @@ + + Release 10.3 + + + Release date: + 2018-03-01 + + + + This release contains a variety of fixes from 10.2. 
+ For information about new features in major release 10, see + . + + + + Migration to Version 10.3 + + + A dump/restore is not required for those running 10.X. + + + + However, if you are upgrading from a version earlier than 10.2, + see . + + + + + Changes + + + + + + + Fix misbehavior of concurrent-update rechecks with CTE references + appearing in subplans (Tom Lane) + + + + If a CTE (WITH clause reference) is used in an + InitPlan or SubPlan, and the query requires a recheck due to trying + to update or lock a concurrently-updated row, incorrect results could + be obtained. + + + + + + + Fix planner failures with overlapping mergejoin clauses in an outer + join (Tom Lane) + + + + These mistakes led to left and right pathkeys do not match in + mergejoin or outer pathkeys do not match + mergeclauses planner errors in corner cases. + + + + + + + Repair pg_upgrade's failure to + preserve relfrozenxid for materialized + views (Tom Lane, Andres Freund) + + + + This oversight could lead to data corruption in materialized views + after an upgrade, manifesting as could not access status of + transaction or found xmin from before + relfrozenxid errors. The problem would be more likely to + occur in seldom-refreshed materialized views, or ones that were + maintained only with REFRESH MATERIALIZED VIEW + CONCURRENTLY. + + + + If such corruption is observed, it can be repaired by refreshing the + materialized view (without CONCURRENTLY). + + + + + + + Fix incorrect pg_dump output for some + non-default sequence limit values (Alexey Bashtanov) + + + + + + + Fix pg_dump's mishandling + of STATISTICS objects (Tom Lane) + + + + An extended statistics object's schema was mislabeled in the dump's + table of contents, possibly leading to the wrong results in a + schema-selective restore. Its ownership was not correctly restored, + either. Also, change the logic so that statistics objects are + dumped/restored, or not, as independent objects rather than tying + them to the dump/restore decision for the table they are on. The + original definition could not scale to the planned future extension to + cross-table statistics. + + + + + + + Fix incorrect reporting of PL/Python function names in + error CONTEXT stacks (Tom Lane) + + + + An error occurring within a nested PL/Python function call (that is, + one reached via a SPI query from another PL/Python function) would + result in a stack trace showing the inner function's name twice, + rather than the expected results. Also, an error in a nested + PL/Python DO block could result in a null pointer + dereference crash on some platforms. + + + + + + + Allow contrib/auto_explain's + log_min_duration setting to range up + to INT_MAX, or about 24 days instead of 35 minutes + (Tom Lane) + + + + + + + Mark assorted GUC variables as PGDLLIMPORT, to + ease porting extension modules to Windows (Metin Doslu) + + + + + + + + Release 10.2 From bc1adc651b8e60680aea144d51ae8bc78ea6b2fb Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 23 Feb 2018 22:13:21 -0500 Subject: [PATCH 1048/1087] Fix filtering of unsupported relations in logical replication In the pgoutput plugin, skip changes for relations that are not publishable, per is_publishable_class(). This concerns in particular materialized views and information_schema tables. While those relations cannot be part of a publication, per existing checks, they will be considered by a FOR ALL TABLES publication. 
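For example, a sketch of the reported scenario (object names are
illustrative; the TAP test added below exercises the same case):

    CREATE TABLE t1 (a int PRIMARY KEY, b text);
    CREATE MATERIALIZED VIEW mv1 AS SELECT * FROM t1;
    CREATE PUBLICATION allpub FOR ALL TABLES;
    -- Logical decoding produces change records for mv1 here, and
    -- pgoutput previously passed them on to subscribers.
    REFRESH MATERIALIZED VIEW mv1;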
A subscription would not actually apply changes for those relations, again per existing checks, but trying to match incoming changes to local tables on the subscriber would lead to errors if no matching local table exists. Skipping those changes on the publisher avoids sending useless changes and eliminates the error. Bug: #15044 Reported-by: Chad Trabant Reviewed-by: Petr Jelinek --- src/backend/catalog/pg_publication.c | 9 +++++ src/backend/replication/pgoutput/pgoutput.c | 3 ++ src/include/catalog/pg_publication.h | 1 + src/test/subscription/t/009_matviews.pl | 45 +++++++++++++++++++++ 4 files changed, 58 insertions(+) create mode 100644 src/test/subscription/t/009_matviews.pl diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c index b4a5f48b4e..ba18258ebb 100644 --- a/src/backend/catalog/pg_publication.c +++ b/src/backend/catalog/pg_publication.c @@ -105,6 +105,15 @@ is_publishable_class(Oid relid, Form_pg_class reltuple) relid >= FirstNormalObjectId; } +/* + * Another variant of this, taking a Relation. + */ +bool +is_publishable_relation(Relation rel) +{ + return is_publishable_class(RelationGetRelid(rel), rel->rd_rel); +} + /* * SQL-callable variant of the above diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c index 40a1ef3c1d..d538f25ede 100644 --- a/src/backend/replication/pgoutput/pgoutput.c +++ b/src/backend/replication/pgoutput/pgoutput.c @@ -262,6 +262,9 @@ pgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn, MemoryContext old; RelationSyncEntry *relentry; + if (!is_publishable_relation(relation)) + return; + relentry = get_rel_sync_entry(data, RelationGetRelid(relation)); /* First check the table filter */ diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h index 7bdc634cf3..37e77b8be7 100644 --- a/src/include/catalog/pg_publication.h +++ b/src/include/catalog/pg_publication.h @@ -93,6 +93,7 @@ extern List *GetPublicationRelations(Oid pubid); extern List *GetAllTablesPublications(void); extern List *GetAllTablesPublicationRelations(void); +extern bool is_publishable_relation(Relation rel); extern ObjectAddress publication_add_relation(Oid pubid, Relation targetrel, bool if_not_exists); diff --git a/src/test/subscription/t/009_matviews.pl b/src/test/subscription/t/009_matviews.pl new file mode 100644 index 0000000000..c55c62c95d --- /dev/null +++ b/src/test/subscription/t/009_matviews.pl @@ -0,0 +1,45 @@ +# Test materialized views behavior +use strict; +use warnings; +use PostgresNode; +use TestLib; +use Test::More tests => 1; + +my $node_publisher = get_new_node('publisher'); +$node_publisher->init(allows_streaming => 'logical'); +$node_publisher->start; + +my $node_subscriber = get_new_node('subscriber'); +$node_subscriber->init(allows_streaming => 'logical'); +$node_subscriber->start; + +my $publisher_connstr = $node_publisher->connstr . 
' dbname=postgres'; +my $appname = 'replication_test'; + +$node_publisher->safe_psql('postgres', + "CREATE PUBLICATION mypub FOR ALL TABLES;"); +$node_subscriber->safe_psql('postgres', +"CREATE SUBSCRIPTION mysub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION mypub;" +); + +$node_publisher->safe_psql('postgres', q{CREATE TABLE test1 (a int PRIMARY KEY, b text)}); +$node_publisher->safe_psql('postgres', q{INSERT INTO test1 (a, b) VALUES (1, 'one'), (2, 'two');}); + +$node_subscriber->safe_psql('postgres', q{CREATE TABLE test1 (a int PRIMARY KEY, b text);}); + +$node_publisher->wait_for_catchup($appname); + +# Materialized views are not supported by logical replication, but +# logical decoding does produce change information for them, so we +# need to make sure they are properly ignored. (bug #15044) + +# create a MV with some data +$node_publisher->safe_psql('postgres', q{CREATE MATERIALIZED VIEW testmv1 AS SELECT * FROM test1;}); +$node_publisher->wait_for_catchup($appname); +# There is no equivalent relation on the subscriber, but MV data is +# not replicated, so this does not hang. + +pass "materialized view data not replicated"; + +$node_subscriber->stop; +$node_publisher->stop; From 081bfc19b3b7914b78eb44e00af9dd45325dda3e Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 23 Feb 2018 13:54:45 -0500 Subject: [PATCH 1049/1087] Check error messages in SSL tests In tests that check whether a connection fails, also check the error message. That makes sure that the connection was rejected for the right reason. This discovered that two tests had their connection failing for the wrong reason. One test failed because pg_hba.conf was not set up to allow that user, one test failed because the client key file did not have the right permissions. Fix those tests and add a new one that is really supposed to check the file permission issue. Reviewed-by: Michael Paquier --- src/test/ssl/ServerSetup.pm | 42 +++++++++++++-------------------- src/test/ssl/ssl/.gitignore | 2 +- src/test/ssl/t/001_ssltests.pl | 43 ++++++++++++++++++++++++++++++---- src/test/ssl/t/002_scram.pl | 4 +++- 4 files changed, 59 insertions(+), 32 deletions(-) diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm index 45991d61a2..27a676b65c 100644 --- a/src/test/ssl/ServerSetup.pm +++ b/src/test/ssl/ServerSetup.pm @@ -27,7 +27,6 @@ use Test::More; use Exporter 'import'; our @EXPORT = qw( configure_test_server_for_ssl - run_test_psql switch_server_cert test_connect_fails test_connect_ok @@ -35,37 +34,28 @@ our @EXPORT = qw( # Define a couple of helper functions to test connecting to the server. -# Attempt connection to server with given connection string. -sub run_test_psql -{ - my $connstr = $_[0]; - - my $cmd = [ - 'psql', '-X', '-A', '-t', '-c', "SELECT \$\$connected with $connstr\$\$", - '-d', "$connstr" ]; - - my $result = run_log($cmd); - return $result; -} - # The first argument is a base connection string to use for connection. # The second argument is a complementary connection string. 
sub test_connect_ok { - my $common_connstr = $_[0]; - my $connstr = $_[1]; - my $test_name = $_[2]; + my ($common_connstr, $connstr, $test_name) = @_; - ok(run_test_psql("$common_connstr $connstr"), $test_name); + my $cmd = [ + 'psql', '-X', '-A', '-t', '-c', "SELECT \$\$connected with $connstr\$\$", + '-d', "$common_connstr $connstr" ]; + + command_ok($cmd, $test_name); } sub test_connect_fails { - my $common_connstr = $_[0]; - my $connstr = $_[1]; - my $test_name = $_[2]; + my ($common_connstr, $connstr, $expected_stderr, $test_name) = @_; + + my $cmd = [ + 'psql', '-X', '-A', '-t', '-c', "SELECT \$\$connected with $connstr\$\$", + '-d', "$common_connstr $connstr" ]; - ok(!run_test_psql("$common_connstr $connstr"), $test_name); + command_fails_like($cmd, $expected_stderr, $test_name); } # Copy a set of files, taking into account wildcards @@ -169,12 +159,12 @@ sub configure_hba_for_ssl print $hba "# TYPE DATABASE USER ADDRESS METHOD\n"; print $hba -"hostssl trustdb ssltestuser $serverhost/32 $authmethod\n"; +"hostssl trustdb all $serverhost/32 $authmethod\n"; print $hba -"hostssl trustdb ssltestuser ::1/128 $authmethod\n"; +"hostssl trustdb all ::1/128 $authmethod\n"; print $hba -"hostssl certdb ssltestuser $serverhost/32 cert\n"; +"hostssl certdb all $serverhost/32 cert\n"; print $hba -"hostssl certdb ssltestuser ::1/128 cert\n"; +"hostssl certdb all ::1/128 cert\n"; close $hba; } diff --git a/src/test/ssl/ssl/.gitignore b/src/test/ssl/ssl/.gitignore index 10b74f0848..af753d4c7d 100644 --- a/src/test/ssl/ssl/.gitignore +++ b/src/test/ssl/ssl/.gitignore @@ -1,3 +1,3 @@ /*.old /new_certs_dir/ -/client_tmp.key +/client*_tmp.key diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl index e53bd12ae9..4b097a69bf 100644 --- a/src/test/ssl/t/001_ssltests.pl +++ b/src/test/ssl/t/001_ssltests.pl @@ -2,7 +2,7 @@ use warnings; use PostgresNode; use TestLib; -use Test::More tests => 40; +use Test::More tests => 62; use ServerSetup; use File::Copy; @@ -20,6 +20,14 @@ # of the key stored in the code tree and update its permissions. copy("ssl/client.key", "ssl/client_tmp.key"); chmod 0600, "ssl/client_tmp.key"; +copy("ssl/client-revoked.key", "ssl/client-revoked_tmp.key"); +chmod 0600, "ssl/client-revoked_tmp.key"; + +# Also make a copy of that explicitly world-readable. We can't +# necessarily rely on the file in the source tree having those +# permissions. +copy("ssl/client.key", "ssl/client_wrongperms_tmp.key"); +chmod 0644, "ssl/client_wrongperms_tmp.key"; #### Part 0. Set up the server. @@ -48,6 +56,7 @@ # The server should not accept non-SSL connections. test_connect_fails($common_connstr, "sslmode=disable", + qr/\Qno pg_hba.conf entry\E/, "server doesn't accept non-SSL connections"); # Try without a root cert. In sslmode=require, this should work. In verify-ca @@ -55,26 +64,32 @@ test_connect_ok($common_connstr, "sslrootcert=invalid sslmode=require", "connect without server root cert sslmode=require"); test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-ca", + qr/root certificate file "invalid" does not exist/, "connect without server root cert sslmode=verify-ca"); test_connect_fails($common_connstr, "sslrootcert=invalid sslmode=verify-full", + qr/root certificate file "invalid" does not exist/, "connect without server root cert sslmode=verify-full"); # Try with wrong root cert, should fail. (We're using the client CA as the # root, but the server's key is signed by the server CA.) 
test_connect_fails($common_connstr, "sslrootcert=ssl/client_ca.crt sslmode=require", + qr/SSL error/, "connect with wrong server root cert sslmode=require"); test_connect_fails($common_connstr, "sslrootcert=ssl/client_ca.crt sslmode=verify-ca", + qr/SSL error/, "connect with wrong server root cert sslmode=verify-ca"); test_connect_fails($common_connstr, "sslrootcert=ssl/client_ca.crt sslmode=verify-full", + qr/SSL error/, "connect with wrong server root cert sslmode=verify-full"); # Try with just the server CA's cert. This fails because the root file # must contain the whole chain up to the root CA. test_connect_fails($common_connstr, "sslrootcert=ssl/server_ca.crt sslmode=verify-ca", + qr/SSL error/, "connect with server CA cert, without root CA"); # And finally, with the correct root cert. @@ -107,6 +122,7 @@ # A CRL belonging to a different CA is not accepted, fails test_connect_fails($common_connstr, "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/client.crl", + qr/SSL error/, "CRL belonging to a different CA"); # With the correct CRL, succeeds (this cert is not revoked) @@ -124,9 +140,9 @@ test_connect_ok($common_connstr, "sslmode=verify-ca host=wronghost.test", "mismatch between host name and server certificate sslmode=verify-ca"); test_connect_fails($common_connstr, "sslmode=verify-full host=wronghost.test", + qr/\Qserver certificate for "common-name.pg-ssltest.test" does not match host name "wronghost.test"\E/, "mismatch between host name and server certificate sslmode=verify-full"); - # Test Subject Alternative Names. switch_server_cert($node, 'server-multiple-alt-names'); @@ -141,9 +157,11 @@ "host name matching with X.509 Subject Alternative Names wildcard"); test_connect_fails($common_connstr, "host=wronghost.alt-name.pg-ssltest.test", + qr/\Qserver certificate for "dns1.alt-name.pg-ssltest.test" (and 2 other names) does not match host name "wronghost.alt-name.pg-ssltest.test"\E/, "host name not matching with X.509 Subject Alternative Names"); test_connect_fails($common_connstr, "host=deep.subdomain.wildcard.pg-ssltest.test", + qr/\Qserver certificate for "dns1.alt-name.pg-ssltest.test" (and 2 other names) does not match host name "deep.subdomain.wildcard.pg-ssltest.test"\E/, "host name not matching with X.509 Subject Alternative Names wildcard"); # Test certificate with a single Subject Alternative Name. (this gives a @@ -157,9 +175,11 @@ "host name matching with a single X.509 Subject Alternative Name"); test_connect_fails($common_connstr, "host=wronghost.alt-name.pg-ssltest.test", + qr/\Qserver certificate for "single.alt-name.pg-ssltest.test" does not match host name "wronghost.alt-name.pg-ssltest.test"\E/, "host name not matching with a single X.509 Subject Alternative Name"); test_connect_fails($common_connstr, "host=deep.subdomain.wildcard.pg-ssltest.test", + qr/\Qserver certificate for "single.alt-name.pg-ssltest.test" does not match host name "deep.subdomain.wildcard.pg-ssltest.test"\E/, "host name not matching with a single X.509 Subject Alternative Name wildcard"); # Test server certificate with a CN and SANs. 
Per RFCs 2818 and 6125, the CN @@ -174,6 +194,7 @@ test_connect_ok($common_connstr, "host=dns2.alt-name.pg-ssltest.test", "certificate with both a CN and SANs 2"); test_connect_fails($common_connstr, "host=common-name.pg-ssltest.test", + qr/\Qserver certificate for "dns1.alt-name.pg-ssltest.test" (and 1 other name) does not match host name "common-name.pg-ssltest.test"\E/, "certificate with both a CN and SANs ignores CN"); # Finally, test a server certificate that has no CN or SANs. Of course, that's @@ -187,6 +208,7 @@ "server certificate without CN or SANs sslmode=verify-ca"); test_connect_fails($common_connstr, "sslmode=verify-full host=common-name.pg-ssltest.test", + qr/could not get server's host name from server certificate/, "server certificate without CN or SANs sslmode=verify-full"); # Test that the CRL works @@ -201,6 +223,7 @@ "connects without client-side CRL"); test_connect_fails($common_connstr, "sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl", + qr/SSL error/, "does not connect with client-side CRL"); ### Part 2. Server-side tests. @@ -215,6 +238,7 @@ # no client cert test_connect_fails($common_connstr, "user=ssltestuser sslcert=invalid", + qr/connection requires a valid client certificate/, "certificate authorization fails without client cert"); # correct client cert @@ -222,14 +246,22 @@ "user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key", "certificate authorization succeeds with correct client cert"); +# client key with wrong permissions +test_connect_fails($common_connstr, + "user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_wrongperms_tmp.key", + qr!\Qprivate key file "ssl/client_wrongperms_tmp.key" has group or world access\E!, + "certificate authorization fails because of file permissions"); + # client cert belonging to another user test_connect_fails($common_connstr, "user=anotheruser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key", + qr/certificate authentication failed for user "anotheruser"/, "certificate authorization fails with client cert belonging to another user"); # revoked client cert test_connect_fails($common_connstr, - "user=ssltestuser sslcert=ssl/client-revoked.crt sslkey=ssl/client-revoked.key", + "user=ssltestuser sslcert=ssl/client-revoked.crt sslkey=ssl/client-revoked_tmp.key", + qr/SSL error/, "certificate authorization fails with revoked client cert"); # intermediate client_ca.crt is provided by client, and isn't in server's ssl_ca_file @@ -241,7 +273,10 @@ "sslmode=require sslcert=ssl/client+client_ca.crt", "intermediate client certificate is provided by client"); test_connect_fails($common_connstr, "sslmode=require sslcert=ssl/client.crt", + qr/SSL error/, "intermediate client certificate is missing"); # clean up -unlink "ssl/client_tmp.key"; +unlink("ssl/client_tmp.key", + "ssl/client_wrongperms_tmp.key", + "ssl/client-revoked_tmp.key"); diff --git a/src/test/ssl/t/002_scram.pl b/src/test/ssl/t/002_scram.pl index 67c1409a6e..9460763a65 100644 --- a/src/test/ssl/t/002_scram.pl +++ b/src/test/ssl/t/002_scram.pl @@ -4,7 +4,7 @@ use warnings; use PostgresNode; use TestLib; -use Test::More tests => 5; +use Test::More tests => 6; use ServerSetup; use File::Copy; @@ -59,8 +59,10 @@ { test_connect_fails($common_connstr, "scram_channel_binding=tls-server-end-point", + qr/unsupported SCRAM channel-binding type/, "SCRAM authentication with tls-server-end-point as channel binding"); } test_connect_fails($common_connstr, "scram_channel_binding=not-exists", + qr/unsupported SCRAM channel-binding 
type/, "SCRAM authentication with invalid channel binding"); From 8b29e88cdce17705f0b2c43e50219ce1d7d2f603 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 24 Feb 2018 13:23:38 -0500 Subject: [PATCH 1050/1087] Add window RANGE support for float4, float8, numeric. Commit 0a459cec9 left this for later, but since time's running out, I went ahead and took care of it. There are more data types that somebody might someday want RANGE support for, but this is enough to satisfy all expectations of the SQL standard, which just says that "numeric, datetime, and interval" types should have RANGE support. --- src/backend/utils/adt/float.c | 87 +++++++++++++ src/backend/utils/adt/numeric.c | 75 +++++++++++ src/include/catalog/catversion.h | 2 +- src/include/catalog/pg_amproc.h | 3 + src/include/catalog/pg_proc.h | 6 + src/test/regress/expected/window.out | 185 +++++++++++++++++++++++++++ src/test/regress/sql/window.sql | 72 +++++++++++ 7 files changed, 429 insertions(+), 1 deletion(-) diff --git a/src/backend/utils/adt/float.c b/src/backend/utils/adt/float.c index bc6a3e09b5..4f718c3eff 100644 --- a/src/backend/utils/adt/float.c +++ b/src/backend/utils/adt/float.c @@ -1180,6 +1180,93 @@ btfloat84cmp(PG_FUNCTION_ARGS) PG_RETURN_INT32(float8_cmp_internal(arg1, arg2)); } +/* + * in_range support function for float8. + * + * Note: we needn't supply a float8_float4 variant, as implicit coercion + * of the offset value takes care of that scenario just as well. + */ +Datum +in_range_float8_float8(PG_FUNCTION_ARGS) +{ + float8 val = PG_GETARG_FLOAT8(0); + float8 base = PG_GETARG_FLOAT8(1); + float8 offset = PG_GETARG_FLOAT8(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + float8 sum; + + /* + * Reject negative or NaN offset. Negative is per spec, and NaN is + * because appropriate semantics for that seem non-obvious. + */ + if (isnan(offset) || offset < 0) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + /* + * Deal with cases where val and/or base is NaN, following the rule that + * NaN sorts after non-NaN (cf float8_cmp_internal). The offset cannot + * affect the conclusion. + */ + if (isnan(val)) + { + if (isnan(base)) + PG_RETURN_BOOL(true); /* NAN = NAN */ + else + PG_RETURN_BOOL(!less); /* NAN > non-NAN */ + } + else if (isnan(base)) + { + PG_RETURN_BOOL(less); /* non-NAN < NAN */ + } + + /* + * Deal with infinite offset (necessarily +inf, at this point). We must + * special-case this because if base happens to be -inf, their sum would + * be NaN, which is an overflow-ish condition we should avoid. + */ + if (isinf(offset)) + { + PG_RETURN_BOOL(sub ? !less : less); + } + + /* + * Otherwise it should be safe to compute base +/- offset. We trust the + * FPU to cope if base is +/-inf or the true sum would overflow, and + * produce a suitably signed infinity, which will compare properly against + * val whether or not that's infinity. + */ + if (sub) + sum = base - offset; + else + sum = base + offset; + + if (less) + PG_RETURN_BOOL(val <= sum); + else + PG_RETURN_BOOL(val >= sum); +} + +/* + * in_range support function for float4. + * + * We would need a float4_float8 variant in any case, so we supply that and + * let implicit coercion take care of the float4_float4 case. 
+ */ +Datum +in_range_float4_float8(PG_FUNCTION_ARGS) +{ + /* Doesn't seem worth duplicating code for, so just invoke float8_float8 */ + return DirectFunctionCall5(in_range_float8_float8, + Float8GetDatumFast((float8) PG_GETARG_FLOAT4(0)), + Float8GetDatumFast((float8) PG_GETARG_FLOAT4(1)), + PG_GETARG_DATUM(2), + PG_GETARG_DATUM(3), + PG_GETARG_DATUM(4)); +} + /* * =================== diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c index 5b34badd5b..6f40072971 100644 --- a/src/backend/utils/adt/numeric.c +++ b/src/backend/utils/adt/numeric.c @@ -2165,6 +2165,81 @@ cmp_numerics(Numeric num1, Numeric num2) return result; } +/* + * in_range support function for numeric. + */ +Datum +in_range_numeric_numeric(PG_FUNCTION_ARGS) +{ + Numeric val = PG_GETARG_NUMERIC(0); + Numeric base = PG_GETARG_NUMERIC(1); + Numeric offset = PG_GETARG_NUMERIC(2); + bool sub = PG_GETARG_BOOL(3); + bool less = PG_GETARG_BOOL(4); + bool result; + + /* + * Reject negative or NaN offset. Negative is per spec, and NaN is + * because appropriate semantics for that seem non-obvious. + */ + if (NUMERIC_IS_NAN(offset) || NUMERIC_SIGN(offset) == NUMERIC_NEG) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PRECEDING_FOLLOWING_SIZE), + errmsg("invalid preceding or following size in window function"))); + + /* + * Deal with cases where val and/or base is NaN, following the rule that + * NaN sorts after non-NaN (cf cmp_numerics). The offset cannot affect + * the conclusion. + */ + if (NUMERIC_IS_NAN(val)) + { + if (NUMERIC_IS_NAN(base)) + result = true; /* NAN = NAN */ + else + result = !less; /* NAN > non-NAN */ + } + else if (NUMERIC_IS_NAN(base)) + { + result = less; /* non-NAN < NAN */ + } + else + { + /* + * Otherwise go ahead and compute base +/- offset. While it's + * possible for this to overflow the numeric format, it's unlikely + * enough that we don't take measures to prevent it. 
+ */ + NumericVar valv; + NumericVar basev; + NumericVar offsetv; + NumericVar sum; + + init_var_from_num(val, &valv); + init_var_from_num(base, &basev); + init_var_from_num(offset, &offsetv); + init_var(&sum); + + if (sub) + sub_var(&basev, &offsetv, &sum); + else + add_var(&basev, &offsetv, &sum); + + if (less) + result = (cmp_var(&valv, &sum) <= 0); + else + result = (cmp_var(&valv, &sum) >= 0); + + free_var(&sum); + } + + PG_FREE_IF_COPY(val, 0); + PG_FREE_IF_COPY(base, 1); + PG_FREE_IF_COPY(offset, 2); + + PG_RETURN_BOOL(result); +} + Datum hash_numeric(PG_FUNCTION_ARGS) { diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 433d6db4f6..e9484e10a4 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -53,6 +53,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 201802061 +#define CATALOG_VERSION_NO 201802241 #endif diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h index c3d0ff70e6..eb595e81db 100644 --- a/src/include/catalog/pg_amproc.h +++ b/src/include/catalog/pg_amproc.h @@ -105,6 +105,8 @@ DATA(insert ( 1970 700 701 1 2194 )); DATA(insert ( 1970 701 701 1 355 )); DATA(insert ( 1970 701 701 2 3133 )); DATA(insert ( 1970 701 700 1 2195 )); +DATA(insert ( 1970 701 701 3 4139 )); +DATA(insert ( 1970 700 701 3 4140 )); DATA(insert ( 1974 869 869 1 926 )); DATA(insert ( 1976 21 21 1 350 )); DATA(insert ( 1976 21 21 2 3129 )); @@ -133,6 +135,7 @@ DATA(insert ( 1986 19 19 1 359 )); DATA(insert ( 1986 19 19 2 3135 )); DATA(insert ( 1988 1700 1700 1 1769 )); DATA(insert ( 1988 1700 1700 2 3283 )); +DATA(insert ( 1988 1700 1700 3 4141 )); DATA(insert ( 1989 26 26 1 356 )); DATA(insert ( 1989 26 26 2 3134 )); DATA(insert ( 1991 30 30 1 404 )); diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h index 62e16514cc..c00d055940 100644 --- a/src/include/catalog/pg_proc.h +++ b/src/include/catalog/pg_proc.h @@ -661,6 +661,12 @@ DATA(insert OID = 4131 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 DESCR("window RANGE support"); DATA(insert OID = 4132 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "21 21 21 16 16" _null_ _null_ _null_ _null_ _null_ in_range_int2_int2 _null_ _null_ _null_ )); DESCR("window RANGE support"); +DATA(insert OID = 4139 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "701 701 701 16 16" _null_ _null_ _null_ _null_ _null_ in_range_float8_float8 _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4140 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "700 700 701 16 16" _null_ _null_ _null_ _null_ _null_ in_range_float4_float8 _null_ _null_ _null_ )); +DESCR("window RANGE support"); +DATA(insert OID = 4141 ( in_range PGNSP PGUID 12 1 0 0 0 f f f f t f i s 5 0 16 "1700 1700 1700 16 16" _null_ _null_ _null_ _null_ _null_ in_range_numeric_numeric _null_ _null_ _null_ )); +DESCR("window RANGE support"); DATA(insert OID = 361 ( lseg_distance PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 701 "601 601" _null_ _null_ _null_ _null_ _null_ lseg_distance _null_ _null_ _null_ )); DATA(insert OID = 362 ( lseg_interpt PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 600 "601 601" _null_ _null_ _null_ _null_ _null_ lseg_interpt _null_ _null_ _null_ )); diff --git a/src/test/regress/expected/window.out b/src/test/regress/expected/window.out index b675487729..85d81e7c9f 100644 --- a/src/test/regress/expected/window.out +++ b/src/test/regress/expected/window.out @@ -1864,6 +1864,191 @@ from generate_series(-9223372036854775806, 
-9223372036854775804) x; -9223372036854775806 | -9223372036854775806 (3 rows) +-- Test in_range for other numeric datatypes +create temp table numerics( + id int, + f_float4 float4, + f_float8 float8, + f_numeric numeric +); +insert into numerics values +(0, '-infinity', '-infinity', '-1000'), -- numeric type lacks infinities +(1, -3, -3, -3), +(2, -1, -1, -1), +(3, 0, 0, 0), +(4, 1.1, 1.1, 1.1), +(5, 1.12, 1.12, 1.12), +(6, 2, 2, 2), +(7, 100, 100, 100), +(8, 'infinity', 'infinity', '1000'), +(9, 'NaN', 'NaN', 'NaN'); +select id, f_float4, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float4 range between + 1 preceding and 1 following); + id | f_float4 | first_value | last_value +----+-----------+-------------+------------ + 0 | -Infinity | 0 | 0 + 1 | -3 | 1 | 1 + 2 | -1 | 2 | 3 + 3 | 0 | 2 | 3 + 4 | 1.1 | 4 | 6 + 5 | 1.12 | 4 | 6 + 6 | 2 | 4 | 6 + 7 | 100 | 7 | 7 + 8 | Infinity | 8 | 8 + 9 | NaN | 9 | 9 +(10 rows) + +select id, f_float4, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float4 range between + 1 preceding and 1.1::float4 following); + id | f_float4 | first_value | last_value +----+-----------+-------------+------------ + 0 | -Infinity | 0 | 0 + 1 | -3 | 1 | 1 + 2 | -1 | 2 | 3 + 3 | 0 | 2 | 4 + 4 | 1.1 | 4 | 6 + 5 | 1.12 | 4 | 6 + 6 | 2 | 4 | 6 + 7 | 100 | 7 | 7 + 8 | Infinity | 8 | 8 + 9 | NaN | 9 | 9 +(10 rows) + +select id, f_float4, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float4 range between + 'inf' preceding and 'inf' following); + id | f_float4 | first_value | last_value +----+-----------+-------------+------------ + 0 | -Infinity | 0 | 8 + 1 | -3 | 0 | 8 + 2 | -1 | 0 | 8 + 3 | 0 | 0 | 8 + 4 | 1.1 | 0 | 8 + 5 | 1.12 | 0 | 8 + 6 | 2 | 0 | 8 + 7 | 100 | 0 | 8 + 8 | Infinity | 0 | 8 + 9 | NaN | 9 | 9 +(10 rows) + +select id, f_float4, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float4 range between + 1.1 preceding and 'NaN' following); -- error, NaN disallowed +ERROR: invalid preceding or following size in window function +select id, f_float8, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float8 range between + 1 preceding and 1 following); + id | f_float8 | first_value | last_value +----+-----------+-------------+------------ + 0 | -Infinity | 0 | 0 + 1 | -3 | 1 | 1 + 2 | -1 | 2 | 3 + 3 | 0 | 2 | 3 + 4 | 1.1 | 4 | 6 + 5 | 1.12 | 4 | 6 + 6 | 2 | 4 | 6 + 7 | 100 | 7 | 7 + 8 | Infinity | 8 | 8 + 9 | NaN | 9 | 9 +(10 rows) + +select id, f_float8, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float8 range between + 1 preceding and 1.1::float8 following); + id | f_float8 | first_value | last_value +----+-----------+-------------+------------ + 0 | -Infinity | 0 | 0 + 1 | -3 | 1 | 1 + 2 | -1 | 2 | 3 + 3 | 0 | 2 | 4 + 4 | 1.1 | 4 | 6 + 5 | 1.12 | 4 | 6 + 6 | 2 | 4 | 6 + 7 | 100 | 7 | 7 + 8 | Infinity | 8 | 8 + 9 | NaN | 9 | 9 +(10 rows) + +select id, f_float8, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float8 range between + 'inf' preceding and 'inf' following); + id | f_float8 | first_value | last_value +----+-----------+-------------+------------ + 0 | -Infinity | 0 | 8 + 1 | -3 | 0 | 8 + 2 | -1 | 0 | 8 + 3 | 0 | 0 | 8 + 4 | 1.1 | 0 | 8 + 5 | 1.12 | 0 | 8 + 6 | 2 | 0 | 8 + 7 | 100 | 0 | 8 + 8 | Infinity | 0 | 8 + 9 | NaN | 9 | 9 +(10 rows) + +select id, f_float8, first_value(id) over w, 
last_value(id) over w +from numerics +window w as (order by f_float8 range between + 1.1 preceding and 'NaN' following); -- error, NaN disallowed +ERROR: invalid preceding or following size in window function +select id, f_numeric, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_numeric range between + 1 preceding and 1 following); + id | f_numeric | first_value | last_value +----+-----------+-------------+------------ + 0 | -1000 | 0 | 0 + 1 | -3 | 1 | 1 + 2 | -1 | 2 | 3 + 3 | 0 | 2 | 3 + 4 | 1.1 | 4 | 6 + 5 | 1.12 | 4 | 6 + 6 | 2 | 4 | 6 + 7 | 100 | 7 | 7 + 8 | 1000 | 8 | 8 + 9 | NaN | 9 | 9 +(10 rows) + +select id, f_numeric, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_numeric range between + 1 preceding and 1.1::numeric following); + id | f_numeric | first_value | last_value +----+-----------+-------------+------------ + 0 | -1000 | 0 | 0 + 1 | -3 | 1 | 1 + 2 | -1 | 2 | 3 + 3 | 0 | 2 | 4 + 4 | 1.1 | 4 | 6 + 5 | 1.12 | 4 | 6 + 6 | 2 | 4 | 6 + 7 | 100 | 7 | 7 + 8 | 1000 | 8 | 8 + 9 | NaN | 9 | 9 +(10 rows) + +select id, f_numeric, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_numeric range between + 1 preceding and 1.1::float8 following); -- currently unsupported +ERROR: RANGE with offset PRECEDING/FOLLOWING is not supported for column type numeric and offset type double precision +LINE 4: 1 preceding and 1.1::float8 following); + ^ +HINT: Cast the offset value to an appropriate type. +select id, f_numeric, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_numeric range between + 1.1 preceding and 'NaN' following); -- error, NaN disallowed +ERROR: invalid preceding or following size in window function -- Test in_range for other datetime datatypes create temp table datetimes( id int, diff --git a/src/test/regress/sql/window.sql b/src/test/regress/sql/window.sql index 3320aa81f8..051b50b2d3 100644 --- a/src/test/regress/sql/window.sql +++ b/src/test/regress/sql/window.sql @@ -489,6 +489,78 @@ from generate_series(9223372036854775804, 9223372036854775806) x; select x, last_value(x) over (order by x desc range between current row and 5 following) from generate_series(-9223372036854775806, -9223372036854775804) x; +-- Test in_range for other numeric datatypes + +create temp table numerics( + id int, + f_float4 float4, + f_float8 float8, + f_numeric numeric +); + +insert into numerics values +(0, '-infinity', '-infinity', '-1000'), -- numeric type lacks infinities +(1, -3, -3, -3), +(2, -1, -1, -1), +(3, 0, 0, 0), +(4, 1.1, 1.1, 1.1), +(5, 1.12, 1.12, 1.12), +(6, 2, 2, 2), +(7, 100, 100, 100), +(8, 'infinity', 'infinity', '1000'), +(9, 'NaN', 'NaN', 'NaN'); + +select id, f_float4, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float4 range between + 1 preceding and 1 following); +select id, f_float4, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float4 range between + 1 preceding and 1.1::float4 following); +select id, f_float4, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float4 range between + 'inf' preceding and 'inf' following); +select id, f_float4, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float4 range between + 1.1 preceding and 'NaN' following); -- error, NaN disallowed + +select id, f_float8, first_value(id) over w, last_value(id) over w +from numerics +window w as 
(order by f_float8 range between + 1 preceding and 1 following); +select id, f_float8, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float8 range between + 1 preceding and 1.1::float8 following); +select id, f_float8, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float8 range between + 'inf' preceding and 'inf' following); +select id, f_float8, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_float8 range between + 1.1 preceding and 'NaN' following); -- error, NaN disallowed + +select id, f_numeric, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_numeric range between + 1 preceding and 1 following); +select id, f_numeric, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_numeric range between + 1 preceding and 1.1::numeric following); +select id, f_numeric, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_numeric range between + 1 preceding and 1.1::float8 following); -- currently unsupported +select id, f_numeric, first_value(id) over w, last_value(id) over w +from numerics +window w as (order by f_numeric range between + 1.1 preceding and 'NaN' following); -- error, NaN disallowed + -- Test in_range for other datetime datatypes create temp table datetimes( From 32291aed494d425a548e45b3b6ad95f9d5c94e67 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sat, 24 Feb 2018 14:46:37 -0500 Subject: [PATCH 1051/1087] Fix thinko in in_range_float4_float8. I forgot the coding rule for correct use of Float8GetDatumFast. Per buildfarm. --- src/backend/utils/adt/float.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/src/backend/utils/adt/float.c b/src/backend/utils/adt/float.c index 4f718c3eff..aadb92de66 100644 --- a/src/backend/utils/adt/float.c +++ b/src/backend/utils/adt/float.c @@ -1259,9 +1259,12 @@ Datum in_range_float4_float8(PG_FUNCTION_ARGS) { /* Doesn't seem worth duplicating code for, so just invoke float8_float8 */ + float8 val = (float8) PG_GETARG_FLOAT4(0); + float8 base = (float8) PG_GETARG_FLOAT4(1); + return DirectFunctionCall5(in_range_float8_float8, - Float8GetDatumFast((float8) PG_GETARG_FLOAT4(0)), - Float8GetDatumFast((float8) PG_GETARG_FLOAT4(1)), + Float8GetDatumFast(val), + Float8GetDatumFast(base), PG_GETARG_DATUM(2), PG_GETARG_DATUM(3), PG_GETARG_DATUM(4)); From fde03e8b559d0e00bf4acd8cea3bb49411099c34 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 24 Feb 2018 14:35:54 -0500 Subject: [PATCH 1052/1087] Use croak instead of die in Perl code when appropriate --- src/backend/utils/mb/Unicode/convutils.pm | 5 +++-- src/bin/pg_rewind/RewindTest.pm | 3 ++- src/test/perl/PostgresNode.pm | 25 ++++++++++++----------- src/test/perl/RecursiveCopy.pm | 11 +++++----- 4 files changed, 24 insertions(+), 20 deletions(-) diff --git a/src/backend/utils/mb/Unicode/convutils.pm b/src/backend/utils/mb/Unicode/convutils.pm index 854df8cf2a..69494d0df3 100644 --- a/src/backend/utils/mb/Unicode/convutils.pm +++ b/src/backend/utils/mb/Unicode/convutils.pm @@ -7,6 +7,7 @@ package convutils; use strict; +use Carp; use Exporter 'import'; our @EXPORT = @@ -698,7 +699,7 @@ sub make_charmap { my ($out, $charset, $direction, $verbose) = @_; - die "unacceptable direction : $direction" + croak "unacceptable direction : $direction" if ($direction != TO_UNICODE && $direction != FROM_UNICODE); # In verbose mode, print a large comment with the source and 
comment of @@ -759,7 +760,7 @@ sub make_charmap_combined { my ($charset, $direction) = @_; - die "unacceptable direction : $direction" + croak "unacceptable direction : $direction" if ($direction != TO_UNICODE && $direction != FROM_UNICODE); my @combined; diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm index 42fd577f21..00b5b42dd7 100644 --- a/src/bin/pg_rewind/RewindTest.pm +++ b/src/bin/pg_rewind/RewindTest.pm @@ -35,6 +35,7 @@ package RewindTest; use strict; use warnings; +use Carp; use Config; use Exporter 'import'; use File::Copy; @@ -228,7 +229,7 @@ sub run_pg_rewind { # Cannot come here normally - die("Incorrect test mode specified"); + croak("Incorrect test mode specified"); } # Now move back postgresql.conf with old settings diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm index 1d5ac4ee35..80188315f1 100644 --- a/src/test/perl/PostgresNode.pm +++ b/src/test/perl/PostgresNode.pm @@ -82,6 +82,7 @@ package PostgresNode; use strict; use warnings; +use Carp; use Config; use Cwd; use Exporter 'import'; @@ -359,7 +360,7 @@ sub set_replication_conf my $pgdata = $self->data_dir; $self->host eq $test_pghost - or die "set_replication_conf only works with the default host"; + or croak "set_replication_conf only works with the default host"; open my $hba, '>>', "$pgdata/pg_hba.conf"; print $hba "\n# Allow replication (set up by PostgresNode.pm)\n"; @@ -624,7 +625,7 @@ sub init_from_backup print "# Initializing node \"$node_name\" from backup \"$backup_name\" of node \"$root_name\"\n"; - die "Backup \"$backup_name\" does not exist at $backup_path" + croak "Backup \"$backup_name\" does not exist at $backup_path" unless -d $backup_path; mkdir $self->backup_dir; @@ -1445,7 +1446,7 @@ sub lsn 'replay' => 'pg_last_wal_replay_lsn()'); $mode = '' if !defined($mode); - die "unknown mode for 'lsn': '$mode', valid modes are " + croak "unknown mode for 'lsn': '$mode', valid modes are " . join(', ', keys %modes) if !defined($modes{$mode}); @@ -1490,7 +1491,7 @@ sub wait_for_catchup $mode = defined($mode) ? $mode : 'replay'; my %valid_modes = ('sent' => 1, 'write' => 1, 'flush' => 1, 'replay' => 1); - die "unknown mode $mode for 'wait_for_catchup', valid modes are " + croak "unknown mode $mode for 'wait_for_catchup', valid modes are " . join(', ', keys(%valid_modes)) unless exists($valid_modes{$mode}); @@ -1517,7 +1518,7 @@ sub wait_for_catchup my $query = qq[SELECT $lsn_expr <= ${mode}_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = '$standby_name';]; $self->poll_query_until('postgres', $query) - or die "timed out waiting for catchup"; + or croak "timed out waiting for catchup"; print "done\n"; } @@ -1547,9 +1548,9 @@ sub wait_for_slot_catchup $mode = defined($mode) ? $mode : 'restart'; if (!($mode eq 'restart' || $mode eq 'confirmed_flush')) { - die "valid modes are restart, confirmed_flush"; + croak "valid modes are restart, confirmed_flush"; } - die 'target lsn must be specified' unless defined($target_lsn); + croak 'target lsn must be specified' unless defined($target_lsn); print "Waiting for replication slot " . $slot_name . "'s " . $mode @@ -1559,7 +1560,7 @@ sub wait_for_slot_catchup my $query = qq[SELECT '$target_lsn' <= ${mode}_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = '$slot_name';]; $self->poll_query_until('postgres', $query) - or die "timed out waiting for catchup"; + or croak "timed out waiting for catchup"; print "done\n"; } @@ -1588,7 +1589,7 @@ null columns. 
sub query_hash { my ($self, $dbname, $query, @columns) = @_; - die 'calls in array context for multi-row results not supported yet' + croak 'calls in array context for multi-row results not supported yet' if (wantarray); # Replace __COLUMNS__ if found @@ -1663,8 +1664,8 @@ sub pg_recvlogical_upto my $timeout_exception = 'pg_recvlogical timed out'; - die 'slot name must be specified' unless defined($slot_name); - die 'endpos must be specified' unless defined($endpos); + croak 'slot name must be specified' unless defined($slot_name); + croak 'endpos must be specified' unless defined($endpos); my @cmd = ( 'pg_recvlogical', '-S', $slot_name, '--dbname', @@ -1674,7 +1675,7 @@ sub pg_recvlogical_upto while (my ($k, $v) = each %plugin_options) { - die "= is not permitted to appear in replication option name" + croak "= is not permitted to appear in replication option name" if ($k =~ qr/=/); push @cmd, "-o", "$k=$v"; } diff --git a/src/test/perl/RecursiveCopy.pm b/src/test/perl/RecursiveCopy.pm index 19f7dd2fff..5bce720b35 100644 --- a/src/test/perl/RecursiveCopy.pm +++ b/src/test/perl/RecursiveCopy.pm @@ -19,6 +19,7 @@ package RecursiveCopy; use strict; use warnings; +use Carp; use File::Basename; use File::Copy; @@ -68,7 +69,7 @@ sub copypath if (defined $params{filterfn}) { - die "if specified, filterfn must be a subroutine reference" + croak "if specified, filterfn must be a subroutine reference" unless defined(ref $params{filterfn}) and (ref $params{filterfn} eq 'CODE'); @@ -80,7 +81,7 @@ sub copypath } # Complain if original path is bogus, because _copypath_recurse won't. - die "\"$base_src_dir\" does not exist" if !-e $base_src_dir; + croak "\"$base_src_dir\" does not exist" if !-e $base_src_dir; # Start recursive copy from current directory return _copypath_recurse($base_src_dir, $base_dest_dir, "", $filterfn); @@ -98,11 +99,11 @@ sub _copypath_recurse # Check for symlink -- needed only on source dir # (note: this will fall through quietly if file is already gone) - die "Cannot operate on symlink \"$srcpath\"" if -l $srcpath; + croak "Cannot operate on symlink \"$srcpath\"" if -l $srcpath; # Abort if destination path already exists. Should we allow directories # to exist already? - die "Destination path \"$destpath\" already exists" if -e $destpath; + croak "Destination path \"$destpath\" already exists" if -e $destpath; # If this source path is a file, simply copy it to destination with the # same name and we're done. @@ -148,7 +149,7 @@ sub _copypath_recurse return 1 if !-e $srcpath; # Else it's some weird file type; complain. - die "Source path \"$srcpath\" is not a regular file or directory"; + croak "Source path \"$srcpath\" is not a regular file or directory"; } 1; From 9ee0573ef146ab37d7b85951f83e00bcbd305ff3 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 24 Feb 2018 14:38:23 -0500 Subject: [PATCH 1053/1087] Add current directory to Perl include path Recent Perl versions don't have the current directory in the module include path anymore, so we need to add it here explicitly to make these scripts continue to work. 
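(Concretely: Perl 5.26 removed "." from the default @INC, so scripts in
this directory can no longer load the sibling convutils.pm module unless
the source directory is supplied explicitly via -I.)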
--- src/backend/utils/mb/Unicode/Makefile | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/src/backend/utils/mb/Unicode/Makefile b/src/backend/utils/mb/Unicode/Makefile index 06e22de950..00a6256b66 100644 --- a/src/backend/utils/mb/Unicode/Makefile +++ b/src/backend/utils/mb/Unicode/Makefile @@ -73,40 +73,40 @@ GENERICTEXTS = $(ISO8859TEXTS) $(WINTEXTS) \ all: $(MAPS) $(GENERICMAPS): UCS_to_most.pl $(GENERICTEXTS) - $(PERL) $< + $(PERL) -I $(srcdir) $< johab_to_utf8.map utf8_to_johab.map: UCS_to_JOHAB.pl JOHAB.TXT - $(PERL) $< + $(PERL) -I $(srcdir) $< uhc_to_utf8.map utf8_to_uhc.map: UCS_to_UHC.pl windows-949-2000.xml - $(PERL) $< + $(PERL) -I $(srcdir) $< euc_jp_to_utf8.map utf8_to_euc_jp.map: UCS_to_EUC_JP.pl CP932.TXT JIS0212.TXT - $(PERL) $< + $(PERL) -I $(srcdir) $< euc_cn_to_utf8.map utf8_to_euc_cn.map: UCS_to_EUC_CN.pl gb-18030-2000.xml - $(PERL) $< + $(PERL) -I $(srcdir) $< euc_kr_to_utf8.map utf8_to_euc_kr.map: UCS_to_EUC_KR.pl KSX1001.TXT - $(PERL) $< + $(PERL) -I $(srcdir) $< euc_tw_to_utf8.map utf8_to_euc_tw.map: UCS_to_EUC_TW.pl CNS11643.TXT - $(PERL) $< + $(PERL) -I $(srcdir) $< sjis_to_utf8.map utf8_to_sjis.map: UCS_to_SJIS.pl CP932.TXT - $(PERL) $< + $(PERL) -I $(srcdir) $< gb18030_to_utf8.map utf8_to_gb18030.map: UCS_to_GB18030.pl gb-18030-2000.xml - $(PERL) $< + $(PERL) -I $(srcdir) $< big5_to_utf8.map utf8_to_big5.map: UCS_to_BIG5.pl BIG5.TXT CP950.TXT - $(PERL) $< + $(PERL) -I $(srcdir) $< euc_jis_2004_to_utf8.map utf8_to_euc_jis_2004.map: UCS_to_EUC_JIS_2004.pl euc-jis-2004-std.txt - $(PERL) $< + $(PERL) -I $(srcdir) $< shift_jis_2004_to_utf8.map utf8_to_shift_jis_2004.map: UCS_to_SHIFT_JIS_2004.pl sjis-0213-2004-std.txt - $(PERL) $< + $(PERL) -I $(srcdir) $< distclean: clean rm -f $(TEXTS) From c4ba1bee68abe217e441fb81343e5f9e9e2a5353 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Sat, 24 Feb 2018 14:44:32 -0500 Subject: [PATCH 1054/1087] Update headers of generated files The scripts were changed in c98c35cd084a25c6cf9b08c76de8b89facd75fe7, but the output files were not updated to reflect the script changes. 
--- src/backend/utils/mb/Unicode/big5_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/euc_cn_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/euc_jis_2004_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/euc_jp_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/euc_kr_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/euc_tw_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/gb18030_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/gbk_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_10_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_13_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_14_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_15_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_16_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_2_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_3_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_4_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_5_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_6_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_7_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_8_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/iso8859_9_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/johab_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/koi8r_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/koi8u_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/shift_jis_2004_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/sjis_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/uhc_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_big5.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_euc_cn.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_euc_jis_2004.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_euc_jp.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_euc_kr.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_euc_tw.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_gb18030.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_gbk.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_10.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_13.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_14.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_15.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_16.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_2.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_3.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_4.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_5.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_6.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_7.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_8.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_iso8859_9.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_johab.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_koi8r.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_koi8u.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_shift_jis_2004.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_sjis.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_uhc.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win1250.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win1251.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win1252.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win1253.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win1254.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win1255.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win1256.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win1257.map | 2 +- 
src/backend/utils/mb/Unicode/utf8_to_win1258.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win866.map | 2 +- src/backend/utils/mb/Unicode/utf8_to_win874.map | 2 +- src/backend/utils/mb/Unicode/win1250_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win1251_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win1252_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win1253_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win1254_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win1255_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win1256_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win1257_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win1258_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win866_to_utf8.map | 2 +- src/backend/utils/mb/Unicode/win874_to_utf8.map | 2 +- 76 files changed, 76 insertions(+), 76 deletions(-) diff --git a/src/backend/utils/mb/Unicode/big5_to_utf8.map b/src/backend/utils/mb/Unicode/big5_to_utf8.map index a28715cd0f..aa417bc9c8 100644 --- a/src/backend/utils/mb/Unicode/big5_to_utf8.map +++ b/src/backend/utils/mb/Unicode/big5_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/big5_to_utf8.map */ -/* This file is generated by UCS_to_BIG5.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_BIG5.pl */ static const uint32 big5_to_unicode_tree_table[17088]; diff --git a/src/backend/utils/mb/Unicode/euc_cn_to_utf8.map b/src/backend/utils/mb/Unicode/euc_cn_to_utf8.map index a4090512cf..3801e08ef5 100644 --- a/src/backend/utils/mb/Unicode/euc_cn_to_utf8.map +++ b/src/backend/utils/mb/Unicode/euc_cn_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/euc_cn_to_utf8.map */ -/* This file is generated by UCS_to_EUC_CN.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl */ static const uint32 euc_cn_to_unicode_tree_table[7792]; diff --git a/src/backend/utils/mb/Unicode/euc_jis_2004_to_utf8.map b/src/backend/utils/mb/Unicode/euc_jis_2004_to_utf8.map index eb9c35ace3..d2da4a383b 100644 --- a/src/backend/utils/mb/Unicode/euc_jis_2004_to_utf8.map +++ b/src/backend/utils/mb/Unicode/euc_jis_2004_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/euc_jis_2004_to_utf8.map */ -/* This file is generated by UCS_to_EUC_JIS_2004.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl */ static const uint32 euc_jis_2004_to_unicode_tree_table[11727]; diff --git a/src/backend/utils/mb/Unicode/euc_jp_to_utf8.map b/src/backend/utils/mb/Unicode/euc_jp_to_utf8.map index df8e7fdbdf..96b79a58a1 100644 --- a/src/backend/utils/mb/Unicode/euc_jp_to_utf8.map +++ b/src/backend/utils/mb/Unicode/euc_jp_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/euc_jp_to_utf8.map */ -/* This file is generated by UCS_to_EUC_JP.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl */ static const uint32 euc_jp_to_unicode_tree_table[14254]; diff --git a/src/backend/utils/mb/Unicode/euc_kr_to_utf8.map b/src/backend/utils/mb/Unicode/euc_kr_to_utf8.map index 4dcce2f1ab..bf1fc4a98b 100644 --- a/src/backend/utils/mb/Unicode/euc_kr_to_utf8.map +++ b/src/backend/utils/mb/Unicode/euc_kr_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/euc_kr_to_utf8.map */ -/* This file is generated by UCS_to_EUC_KR.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl */ static const uint32 euc_kr_to_unicode_tree_table[8553]; diff --git a/src/backend/utils/mb/Unicode/euc_tw_to_utf8.map b/src/backend/utils/mb/Unicode/euc_tw_to_utf8.map index 
8a04289d38..22af269924 100644 --- a/src/backend/utils/mb/Unicode/euc_tw_to_utf8.map +++ b/src/backend/utils/mb/Unicode/euc_tw_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/euc_tw_to_utf8.map */ -/* This file is generated by UCS_to_EUC_TW.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl */ static const uint32 euc_tw_to_unicode_tree_table[27068]; diff --git a/src/backend/utils/mb/Unicode/gb18030_to_utf8.map b/src/backend/utils/mb/Unicode/gb18030_to_utf8.map index e2c94e410b..79072fe767 100644 --- a/src/backend/utils/mb/Unicode/gb18030_to_utf8.map +++ b/src/backend/utils/mb/Unicode/gb18030_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/gb18030_to_utf8.map */ -/* This file is generated by UCS_to_GB18030.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_GB18030.pl */ static const uint32 gb18030_to_unicode_tree_table[32795]; diff --git a/src/backend/utils/mb/Unicode/gbk_to_utf8.map b/src/backend/utils/mb/Unicode/gbk_to_utf8.map index 51098776cb..35d710c3b3 100644 --- a/src/backend/utils/mb/Unicode/gbk_to_utf8.map +++ b/src/backend/utils/mb/Unicode/gbk_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/gbk_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 gbk_to_unicode_tree_table[24354]; diff --git a/src/backend/utils/mb/Unicode/iso8859_10_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_10_to_utf8.map index 6440d3f070..83961d1327 100644 --- a/src/backend/utils/mb/Unicode/iso8859_10_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_10_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_10_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 iso8859_10_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/iso8859_13_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_13_to_utf8.map index 65e65cbe31..9efdc9a207 100644 --- a/src/backend/utils/mb/Unicode/iso8859_13_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_13_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_13_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 iso8859_13_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/iso8859_14_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_14_to_utf8.map index 3f9267da17..766b8703b3 100644 --- a/src/backend/utils/mb/Unicode/iso8859_14_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_14_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_14_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 iso8859_14_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/iso8859_15_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_15_to_utf8.map index 12eeda7e4a..6ab3f21146 100644 --- a/src/backend/utils/mb/Unicode/iso8859_15_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_15_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_15_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 iso8859_15_to_unicode_tree_table[256]; diff --git 
a/src/backend/utils/mb/Unicode/iso8859_16_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_16_to_utf8.map index 0146f5ffdb..a639654034 100644 --- a/src/backend/utils/mb/Unicode/iso8859_16_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_16_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_16_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 iso8859_16_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/iso8859_2_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_2_to_utf8.map index c4bd0f186a..9046300917 100644 --- a/src/backend/utils/mb/Unicode/iso8859_2_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_2_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_2_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_2_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/iso8859_3_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_3_to_utf8.map index bab3556ce8..c716fc3159 100644 --- a/src/backend/utils/mb/Unicode/iso8859_3_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_3_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_3_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_3_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/iso8859_4_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_4_to_utf8.map index e0e0998211..3b711062e4 100644 --- a/src/backend/utils/mb/Unicode/iso8859_4_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_4_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_4_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_4_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/iso8859_5_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_5_to_utf8.map index 90a73503c7..447a4f194f 100644 --- a/src/backend/utils/mb/Unicode/iso8859_5_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_5_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_5_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 iso8859_5_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/iso8859_6_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_6_to_utf8.map index 20b855fded..5b8be5b6a0 100644 --- a/src/backend/utils/mb/Unicode/iso8859_6_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_6_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_6_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_6_to_unicode_tree_table[230]; diff --git a/src/backend/utils/mb/Unicode/iso8859_7_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_7_to_utf8.map index 69cb1582a7..f4dd1d815a 100644 --- a/src/backend/utils/mb/Unicode/iso8859_7_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_7_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_7_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by 
src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 iso8859_7_to_unicode_tree_table[254]; diff --git a/src/backend/utils/mb/Unicode/iso8859_8_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_8_to_utf8.map index 25112c3e4b..194a1b8ebc 100644 --- a/src/backend/utils/mb/Unicode/iso8859_8_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_8_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_8_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 iso8859_8_to_unicode_tree_table[254]; diff --git a/src/backend/utils/mb/Unicode/iso8859_9_to_utf8.map b/src/backend/utils/mb/Unicode/iso8859_9_to_utf8.map index 5d761c918d..eec3cb8c1b 100644 --- a/src/backend/utils/mb/Unicode/iso8859_9_to_utf8.map +++ b/src/backend/utils/mb/Unicode/iso8859_9_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/iso8859_9_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_9_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/johab_to_utf8.map b/src/backend/utils/mb/Unicode/johab_to_utf8.map index 0767ca0d4c..07d2505c26 100644 --- a/src/backend/utils/mb/Unicode/johab_to_utf8.map +++ b/src/backend/utils/mb/Unicode/johab_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/johab_to_utf8.map */ -/* This file is generated by UCS_to_JOHAB.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl */ static const uint32 johab_to_unicode_tree_table[22987]; diff --git a/src/backend/utils/mb/Unicode/koi8r_to_utf8.map b/src/backend/utils/mb/Unicode/koi8r_to_utf8.map index a4497c9b62..0988748585 100644 --- a/src/backend/utils/mb/Unicode/koi8r_to_utf8.map +++ b/src/backend/utils/mb/Unicode/koi8r_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/koi8r_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 koi8r_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/koi8u_to_utf8.map b/src/backend/utils/mb/Unicode/koi8u_to_utf8.map index 404eea8840..2b2ddc9b81 100644 --- a/src/backend/utils/mb/Unicode/koi8u_to_utf8.map +++ b/src/backend/utils/mb/Unicode/koi8u_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/koi8u_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 koi8u_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/shift_jis_2004_to_utf8.map b/src/backend/utils/mb/Unicode/shift_jis_2004_to_utf8.map index f655663f5a..e591a1135b 100644 --- a/src/backend/utils/mb/Unicode/shift_jis_2004_to_utf8.map +++ b/src/backend/utils/mb/Unicode/shift_jis_2004_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/shift_jis_2004_to_utf8.map */ -/* This file is generated by UCS_to_SHIFT_JIS_2004.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl */ static const uint32 shift_jis_2004_to_unicode_tree_table[11716]; diff --git a/src/backend/utils/mb/Unicode/sjis_to_utf8.map b/src/backend/utils/mb/Unicode/sjis_to_utf8.map index cec9990fdc..2b60a240b6 100644 --- a/src/backend/utils/mb/Unicode/sjis_to_utf8.map +++ b/src/backend/utils/mb/Unicode/sjis_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/sjis_to_utf8.map */ -/* This 
file is generated by UCS_to_SJIS.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_SJIS.pl */ static const uint32 sjis_to_unicode_tree_table[8786]; diff --git a/src/backend/utils/mb/Unicode/uhc_to_utf8.map b/src/backend/utils/mb/Unicode/uhc_to_utf8.map index 0d56e6d807..65d57639da 100644 --- a/src/backend/utils/mb/Unicode/uhc_to_utf8.map +++ b/src/backend/utils/mb/Unicode/uhc_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/uhc_to_utf8.map */ -/* This file is generated by UCS_to_UHC.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_UHC.pl */ static const uint32 uhc_to_unicode_tree_table[24256]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_big5.map b/src/backend/utils/mb/Unicode/utf8_to_big5.map index 132383a7ed..ea26c8a80e 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_big5.map +++ b/src/backend/utils/mb/Unicode/utf8_to_big5.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_big5.map */ -/* This file is generated by UCS_to_BIG5.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_BIG5.pl */ static const uint16 big5_from_unicode_tree_table[22839]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_euc_cn.map b/src/backend/utils/mb/Unicode/utf8_to_euc_cn.map index f3bdf772d3..488d3be043 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_euc_cn.map +++ b/src/backend/utils/mb/Unicode/utf8_to_euc_cn.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_euc_cn.map */ -/* This file is generated by UCS_to_EUC_CN.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl */ static const uint16 euc_cn_from_unicode_tree_table[21644]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_euc_jis_2004.map b/src/backend/utils/mb/Unicode/utf8_to_euc_jis_2004.map index 72337da8ca..fa90f3958f 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_euc_jis_2004.map +++ b/src/backend/utils/mb/Unicode/utf8_to_euc_jis_2004.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_euc_jis_2004.map */ -/* This file is generated by UCS_to_EUC_JIS_2004.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl */ static const uint32 euc_jis_2004_from_unicode_tree_table[39163]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_euc_jp.map b/src/backend/utils/mb/Unicode/utf8_to_euc_jp.map index 7193fa7be9..1adcedf26d 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_euc_jp.map +++ b/src/backend/utils/mb/Unicode/utf8_to_euc_jp.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_euc_jp.map */ -/* This file is generated by UCS_to_EUC_JP.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl */ static const uint32 euc_jp_from_unicode_tree_table[23370]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_euc_kr.map b/src/backend/utils/mb/Unicode/utf8_to_euc_kr.map index 23221f4fa9..b5c4823850 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_euc_kr.map +++ b/src/backend/utils/mb/Unicode/utf8_to_euc_kr.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_euc_kr.map */ -/* This file is generated by UCS_to_EUC_KR.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl */ static const uint16 euc_kr_from_unicode_tree_table[33954]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_euc_tw.map b/src/backend/utils/mb/Unicode/utf8_to_euc_tw.map index 74dd9e78d5..e3a35ea653 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_euc_tw.map +++ b/src/backend/utils/mb/Unicode/utf8_to_euc_tw.map @@ -1,5 +1,5 @@ /* 
src/backend/utils/mb/Unicode/utf8_to_euc_tw.map */ -/* This file is generated by UCS_to_EUC_TW.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl */ static const uint32 euc_tw_from_unicode_tree_table[22640]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_gb18030.map b/src/backend/utils/mb/Unicode/utf8_to_gb18030.map index 7741b8b3c2..5acee4554c 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_gb18030.map +++ b/src/backend/utils/mb/Unicode/utf8_to_gb18030.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_gb18030.map */ -/* This file is generated by UCS_to_GB18030.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_GB18030.pl */ static const uint32 gb18030_from_unicode_tree_table[31972]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_gbk.map b/src/backend/utils/mb/Unicode/utf8_to_gbk.map index fd8aed6bb5..528b054a63 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_gbk.map +++ b/src/backend/utils/mb/Unicode/utf8_to_gbk.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_gbk.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 gbk_from_unicode_tree_table[24198]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_10.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_10.map index db36f10cd7..899eaf9753 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_10.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_10.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_10.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_10_from_unicode_tree_table[319]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_13.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_13.map index eea83e6159..cae360fc9d 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_13.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_13.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_13.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_13_from_unicode_tree_table[326]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_14.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_14.map index 4cdc0550cd..bb2bc3b55f 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_14.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_14.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_14.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_14_from_unicode_tree_table[507]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_15.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_15.map index 52c7813de9..6bc8a32908 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_15.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_15.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_15.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_15_from_unicode_tree_table[263]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_16.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_16.map index fbe7df22c1..59a2ecac90 100644 --- 
a/src/backend/utils/mb/Unicode/utf8_to_iso8859_16.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_16.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_16.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_16_from_unicode_tree_table[425]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_2.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_2.map index f7a71c378a..94b65c2c62 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_2.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_2.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_2.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_2_from_unicode_tree_table[386]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_3.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_3.map index b50364d257..c489fda08d 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_3.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_3.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_3.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_3_from_unicode_tree_table[373]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_4.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_4.map index f5043b92ad..b273518f18 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_4.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_4.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_4.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_4_from_unicode_tree_table[385]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_5.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_5.map index a3c2540d16..75c6ea615a 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_5.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_5.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_5.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_5_from_unicode_tree_table[274]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_6.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_6.map index ab6fb628ae..b237f54be1 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_6.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_6.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_6.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_6_from_unicode_tree_table[248]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_7.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_7.map index 9ec5ae5cfb..656b80abd1 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_7.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_7.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_7.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_7_from_unicode_tree_table[386]; diff --git 
a/src/backend/utils/mb/Unicode/utf8_to_iso8859_8.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_8.map index 59ae5b5968..b512bd5ed7 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_8.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_8_from_unicode_tree_table[279]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_iso8859_9.map b/src/backend/utils/mb/Unicode/utf8_to_iso8859_9.map index 3dc5163e84..98a2eabbc8 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_iso8859_9.map +++ b/src/backend/utils/mb/Unicode/utf8_to_iso8859_9.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_iso8859_9.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 iso8859_9_from_unicode_tree_table[324]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_johab.map b/src/backend/utils/mb/Unicode/utf8_to_johab.map index 08957072ea..8a4e69a9e5 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_johab.map +++ b/src/backend/utils/mb/Unicode/utf8_to_johab.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_johab.map */ -/* This file is generated by UCS_to_JOHAB.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl */ static const uint16 johab_from_unicode_tree_table[34515]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_koi8r.map b/src/backend/utils/mb/Unicode/utf8_to_koi8r.map index 342d3bfe19..bdc52ad09a 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_koi8r.map +++ b/src/backend/utils/mb/Unicode/utf8_to_koi8r.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_koi8r.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 koi8r_from_unicode_tree_table[678]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_koi8u.map b/src/backend/utils/mb/Unicode/utf8_to_koi8u.map index 2957e96bba..bb5c56a6de 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_koi8u.map +++ b/src/backend/utils/mb/Unicode/utf8_to_koi8u.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_koi8u.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 koi8u_from_unicode_tree_table[727]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_shift_jis_2004.map b/src/backend/utils/mb/Unicode/utf8_to_shift_jis_2004.map index db52dd0845..b756b5f157 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_shift_jis_2004.map +++ b/src/backend/utils/mb/Unicode/utf8_to_shift_jis_2004.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_shift_jis_2004.map */ -/* This file is generated by UCS_to_SHIFT_JIS_2004.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl */ static const uint16 shift_jis_2004_from_unicode_tree_table[39196]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_sjis.map b/src/backend/utils/mb/Unicode/utf8_to_sjis.map index 20ad5dfe7c..72acc0977c 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_sjis.map +++ b/src/backend/utils/mb/Unicode/utf8_to_sjis.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_sjis.map */ -/* This file is generated by UCS_to_SJIS.pl */ +/* This file is generated by 
src/backend/utils/mb/Unicode/UCS_to_SJIS.pl */ static const uint16 sjis_from_unicode_tree_table[22895]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_uhc.map b/src/backend/utils/mb/Unicode/utf8_to_uhc.map index 14ad2455e7..4e8b857cd7 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_uhc.map +++ b/src/backend/utils/mb/Unicode/utf8_to_uhc.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_uhc.map */ -/* This file is generated by UCS_to_UHC.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_UHC.pl */ static const uint16 uhc_from_unicode_tree_table[34768]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1250.map b/src/backend/utils/mb/Unicode/utf8_to_win1250.map index e33983ad4f..3d83c18fe6 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1250.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1250.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1250.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1250_from_unicode_tree_table[507]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1251.map b/src/backend/utils/mb/Unicode/utf8_to_win1251.map index f8a817bf36..d4160171ed 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1251.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1251.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1251.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1251_from_unicode_tree_table[446]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1252.map b/src/backend/utils/mb/Unicode/utf8_to_win1252.map index 4b536c6775..b98b73f239 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1252.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1252.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1252.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1252_from_unicode_tree_table[513]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1253.map b/src/backend/utils/mb/Unicode/utf8_to_win1253.map index 790a778cf1..e1812715a0 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1253.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1253.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1253.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1253_from_unicode_tree_table[454]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1254.map b/src/backend/utils/mb/Unicode/utf8_to_win1254.map index dc920e78c3..fd601c168a 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1254.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1254.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1254.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1254_from_unicode_tree_table[557]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1255.map b/src/backend/utils/mb/Unicode/utf8_to_win1255.map index b140b6d7b7..1ece6434eb 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1255.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1255.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1255.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is 
generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1255_from_unicode_tree_table[562]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1256.map b/src/backend/utils/mb/Unicode/utf8_to_win1256.map index e69d890e3b..7fce018a1d 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1256.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1256.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1256.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1256_from_unicode_tree_table[765]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1257.map b/src/backend/utils/mb/Unicode/utf8_to_win1257.map index 660e8761a3..1453887ead 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1257.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1257.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1257.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1257_from_unicode_tree_table[513]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win1258.map b/src/backend/utils/mb/Unicode/utf8_to_win1258.map index b298f9d04f..dedcb109c0 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win1258.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win1258.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win1258.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win1258_from_unicode_tree_table[618]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win866.map b/src/backend/utils/mb/Unicode/utf8_to_win866.map index 51adc24599..50e88c901f 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win866.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win866.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win866.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win866_from_unicode_tree_table[571]; diff --git a/src/backend/utils/mb/Unicode/utf8_to_win874.map b/src/backend/utils/mb/Unicode/utf8_to_win874.map index c415c07b66..fe8ded3da4 100644 --- a/src/backend/utils/mb/Unicode/utf8_to_win874.map +++ b/src/backend/utils/mb/Unicode/utf8_to_win874.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/utf8_to_win874.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint16 win874_from_unicode_tree_table[421]; diff --git a/src/backend/utils/mb/Unicode/win1250_to_utf8.map b/src/backend/utils/mb/Unicode/win1250_to_utf8.map index 42c1b3dd00..67909ecf43 100644 --- a/src/backend/utils/mb/Unicode/win1250_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1250_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1250_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1250_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/win1251_to_utf8.map b/src/backend/utils/mb/Unicode/win1251_to_utf8.map index f8a0b18ff9..4d5a7efef7 100644 --- a/src/backend/utils/mb/Unicode/win1251_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1251_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1251_to_utf8.map */ -/* This file is generated by 
UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1251_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/win1252_to_utf8.map b/src/backend/utils/mb/Unicode/win1252_to_utf8.map index 56aa5d1382..c09e134b1a 100644 --- a/src/backend/utils/mb/Unicode/win1252_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1252_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1252_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1252_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/win1253_to_utf8.map b/src/backend/utils/mb/Unicode/win1253_to_utf8.map index df90e6e26b..2298371a7e 100644 --- a/src/backend/utils/mb/Unicode/win1253_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1253_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1253_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1253_to_unicode_tree_table[254]; diff --git a/src/backend/utils/mb/Unicode/win1254_to_utf8.map b/src/backend/utils/mb/Unicode/win1254_to_utf8.map index eed765f7a4..84e009709f 100644 --- a/src/backend/utils/mb/Unicode/win1254_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1254_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1254_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1254_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/win1255_to_utf8.map b/src/backend/utils/mb/Unicode/win1255_to_utf8.map index 4c9f7c0b10..133a06b9a3 100644 --- a/src/backend/utils/mb/Unicode/win1255_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1255_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1255_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1255_to_unicode_tree_table[254]; diff --git a/src/backend/utils/mb/Unicode/win1256_to_utf8.map b/src/backend/utils/mb/Unicode/win1256_to_utf8.map index b617abb775..6f55484b0f 100644 --- a/src/backend/utils/mb/Unicode/win1256_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1256_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1256_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1256_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/win1257_to_utf8.map b/src/backend/utils/mb/Unicode/win1257_to_utf8.map index 85b3ddf8ae..a58bb81d97 100644 --- a/src/backend/utils/mb/Unicode/win1257_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1257_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1257_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1257_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/win1258_to_utf8.map b/src/backend/utils/mb/Unicode/win1258_to_utf8.map index 5f288ce917..97e203aef7 100644 --- a/src/backend/utils/mb/Unicode/win1258_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win1258_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win1258_to_utf8.map */ -/* 
This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win1258_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/win866_to_utf8.map b/src/backend/utils/mb/Unicode/win866_to_utf8.map index 04988b86ab..ed674f9503 100644 --- a/src/backend/utils/mb/Unicode/win866_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win866_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win866_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win866_to_unicode_tree_table[256]; diff --git a/src/backend/utils/mb/Unicode/win874_to_utf8.map b/src/backend/utils/mb/Unicode/win874_to_utf8.map index c27b7e484e..dcccaf0363 100644 --- a/src/backend/utils/mb/Unicode/win874_to_utf8.map +++ b/src/backend/utils/mb/Unicode/win874_to_utf8.map @@ -1,5 +1,5 @@ /* src/backend/utils/mb/Unicode/win874_to_utf8.map */ -/* This file is generated by UCS_to_most.pl */ +/* This file is generated by src/backend/utils/mb/Unicode/UCS_to_most.pl */ static const uint32 win874_to_unicode_tree_table[248]; From 1316417bbab0821f99eb21c0b654e33f5f6f90a4 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 25 Feb 2018 14:52:51 -0500 Subject: [PATCH 1055/1087] Release notes for 10.3, 9.6.8, 9.5.12, 9.4.17, 9.3.22. --- doc/src/sgml/release-10.sgml | 21 ++++++ doc/src/sgml/release-9.3.sgml | 112 ++++++++++++++++++++++++++++++++ doc/src/sgml/release-9.4.sgml | 112 ++++++++++++++++++++++++++++++++ doc/src/sgml/release-9.5.sgml | 112 ++++++++++++++++++++++++++++++++ doc/src/sgml/release-9.6.sgml | 119 ++++++++++++++++++++++++++++++++++ 5 files changed, 476 insertions(+) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index 718ad4cb0c..d543849715 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -35,6 +35,27 @@ + + Prevent logical replication from trying to ship changes for + unpublishable relations (Peter Eisentraut) + + + + A publication marked FOR ALL TABLES would + incorrectly ship changes in materialized views + and information_schema tables, which are + supposed to be omitted from the change stream. + + + + + + + + + Release 9.3.22 + + + Release date: + 2018-03-01 + + + + This release contains a variety of fixes from 9.3.21. + For information about new features in the 9.3 major release, see + . + + + + Migration to Version 9.3.22 + + + A dump/restore is not required for those running 9.3.X. + + + + However, if you are upgrading from a version earlier than 9.3.18, + see . + + + + + Changes + + + + + + Fix misbehavior of concurrent-update rechecks with CTE references + appearing in subplans (Tom Lane) + + + + If a CTE (WITH clause reference) is used in an + InitPlan or SubPlan, and the query requires a recheck due to trying + to update or lock a concurrently-updated row, incorrect results could + be obtained. + + + + + + Fix planner failures with overlapping mergejoin clauses in an outer + join (Tom Lane) + + + + These mistakes led to left and right pathkeys do not match in + mergejoin or outer pathkeys do not match + mergeclauses planner errors in corner cases. + + + + + + Repair pg_upgrade's failure to + preserve relfrozenxid for materialized + views (Tom Lane, Andres Freund) + + + + This oversight could lead to data corruption in materialized views + after an upgrade, manifesting as could not access status of + transaction or found xmin from before + relfrozenxid errors. 
The problem would be more likely to + occur in seldom-refreshed materialized views, or ones that were + maintained only with REFRESH MATERIALIZED VIEW + CONCURRENTLY. + + + + If such corruption is observed, it can be repaired by refreshing the + materialized view (without CONCURRENTLY). + + + + + + Fix incorrect reporting of PL/Python function names in + error CONTEXT stacks (Tom Lane) + + + + An error occurring within a nested PL/Python function call (that is, + one reached via a SPI query from another PL/Python function) would + result in a stack trace showing the inner function's name twice, + rather than the expected results. Also, an error in a nested + PL/Python DO block could result in a null pointer + dereference crash on some platforms. + + + + + + Allow contrib/auto_explain's + log_min_duration setting to range up + to INT_MAX, or about 24 days instead of 35 minutes + (Tom Lane) + + + + + + + + Release 9.3.21 diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml index c524271e90..68ac961436 100644 --- a/doc/src/sgml/release-9.4.sgml +++ b/doc/src/sgml/release-9.4.sgml @@ -1,6 +1,118 @@ + + Release 9.4.17 + + + Release date: + 2018-03-01 + + + + This release contains a variety of fixes from 9.4.16. + For information about new features in the 9.4 major release, see + . + + + + Migration to Version 9.4.17 + + + A dump/restore is not required for those running 9.4.X. + + + + However, if you are upgrading from a version earlier than 9.4.13, + see . + + + + + Changes + + + + + + Fix misbehavior of concurrent-update rechecks with CTE references + appearing in subplans (Tom Lane) + + + + If a CTE (WITH clause reference) is used in an + InitPlan or SubPlan, and the query requires a recheck due to trying + to update or lock a concurrently-updated row, incorrect results could + be obtained. + + + + + + Fix planner failures with overlapping mergejoin clauses in an outer + join (Tom Lane) + + + + These mistakes led to left and right pathkeys do not match in + mergejoin or outer pathkeys do not match + mergeclauses planner errors in corner cases. + + + + + + Repair pg_upgrade's failure to + preserve relfrozenxid for materialized + views (Tom Lane, Andres Freund) + + + + This oversight could lead to data corruption in materialized views + after an upgrade, manifesting as could not access status of + transaction or found xmin from before + relfrozenxid errors. The problem would be more likely to + occur in seldom-refreshed materialized views, or ones that were + maintained only with REFRESH MATERIALIZED VIEW + CONCURRENTLY. + + + + If such corruption is observed, it can be repaired by refreshing the + materialized view (without CONCURRENTLY). + + + + + + Fix incorrect reporting of PL/Python function names in + error CONTEXT stacks (Tom Lane) + + + + An error occurring within a nested PL/Python function call (that is, + one reached via a SPI query from another PL/Python function) would + result in a stack trace showing the inner function's name twice, + rather than the expected results. Also, an error in a nested + PL/Python DO block could result in a null pointer + dereference crash on some platforms. 
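As a rough sketch of the nested-call shape described above (illustrative only; the
function names are invented, and the plpythonu extension must be installed):

    CREATE EXTENSION plpythonu;

    CREATE FUNCTION inner_fail() RETURNS int AS $$
        raise Exception("boom")          # the error originates here
    $$ LANGUAGE plpythonu;

    CREATE FUNCTION outer_call() RETURNS int AS $$
        # reach the inner PL/Python function via a SPI query
        return plpy.execute("SELECT inner_fail()")[0]["inner_fail"]
    $$ LANGUAGE plpythonu;

    SELECT outer_call();
    -- Before this fix, the error CONTEXT stack for such a call could name
    -- the inner function twice instead of showing the inner and outer frames.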
+ + + + + + Allow contrib/auto_explain's + log_min_duration setting to range up + to INT_MAX, or about 24 days instead of 35 minutes + (Tom Lane) + + + + + + + + Release 9.4.16 diff --git a/doc/src/sgml/release-9.5.sgml b/doc/src/sgml/release-9.5.sgml index ab92fb0134..cb545b08ab 100644 --- a/doc/src/sgml/release-9.5.sgml +++ b/doc/src/sgml/release-9.5.sgml @@ -1,6 +1,118 @@ + + Release 9.5.12 + + + Release date: + 2018-03-01 + + + + This release contains a variety of fixes from 9.5.11. + For information about new features in the 9.5 major release, see + . + + + + Migration to Version 9.5.12 + + + A dump/restore is not required for those running 9.5.X. + + + + However, if you are upgrading from a version earlier than 9.5.10, + see . + + + + + Changes + + + + + + Fix misbehavior of concurrent-update rechecks with CTE references + appearing in subplans (Tom Lane) + + + + If a CTE (WITH clause reference) is used in an + InitPlan or SubPlan, and the query requires a recheck due to trying + to update or lock a concurrently-updated row, incorrect results could + be obtained. + + + + + + Fix planner failures with overlapping mergejoin clauses in an outer + join (Tom Lane) + + + + These mistakes led to left and right pathkeys do not match in + mergejoin or outer pathkeys do not match + mergeclauses planner errors in corner cases. + + + + + + Repair pg_upgrade's failure to + preserve relfrozenxid for materialized + views (Tom Lane, Andres Freund) + + + + This oversight could lead to data corruption in materialized views + after an upgrade, manifesting as could not access status of + transaction or found xmin from before + relfrozenxid errors. The problem would be more likely to + occur in seldom-refreshed materialized views, or ones that were + maintained only with REFRESH MATERIALIZED VIEW + CONCURRENTLY. + + + + If such corruption is observed, it can be repaired by refreshing the + materialized view (without CONCURRENTLY). + + + + + + Fix incorrect reporting of PL/Python function names in + error CONTEXT stacks (Tom Lane) + + + + An error occurring within a nested PL/Python function call (that is, + one reached via a SPI query from another PL/Python function) would + result in a stack trace showing the inner function's name twice, + rather than the expected results. Also, an error in a nested + PL/Python DO block could result in a null pointer + dereference crash on some platforms. + + + + + + Allow contrib/auto_explain's + log_min_duration setting to range up + to INT_MAX, or about 24 days instead of 35 minutes + (Tom Lane) + + + + + + + + Release 9.5.11 diff --git a/doc/src/sgml/release-9.6.sgml b/doc/src/sgml/release-9.6.sgml index 6d7a500933..62d0594a00 100644 --- a/doc/src/sgml/release-9.6.sgml +++ b/doc/src/sgml/release-9.6.sgml @@ -1,6 +1,125 @@ + + Release 9.6.8 + + + Release date: + 2018-03-01 + + + + This release contains a variety of fixes from 9.6.7. + For information about new features in the 9.6 major release, see + . + + + + Migration to Version 9.6.8 + + + A dump/restore is not required for those running 9.6.X. + + + + However, if you are upgrading from a version earlier than 9.6.7, + see . + + + + + Changes + + + + + + Fix misbehavior of concurrent-update rechecks with CTE references + appearing in subplans (Tom Lane) + + + + If a CTE (WITH clause reference) is used in an + InitPlan or SubPlan, and the query requires a recheck due to trying + to update or lock a concurrently-updated row, incorrect results could + be obtained. 
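A minimal sketch of the affected query shape (illustrative only; the table and
data are invented):

    CREATE TABLE t (id int PRIMARY KEY, val int);
    INSERT INTO t VALUES (1, 10);

    -- Session 1: the CTE is referenced from an InitPlan, and FOR UPDATE can
    -- force a recheck (EvalPlanQual) of a concurrently-updated row
    BEGIN;
    WITH w AS (SELECT val FROM t)
    SELECT * FROM t WHERE val = (SELECT max(val) FROM w) FOR UPDATE;

    -- Session 2, at the same time:
    --     UPDATE t SET val = val + 1 WHERE id = 1; COMMIT;
    -- Once session 2 commits, session 1's recheck of the updated row could
    -- previously yield incorrect results.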
+ + + + + + Fix planner failures with overlapping mergejoin clauses in an outer + join (Tom Lane) + + + + These mistakes led to left and right pathkeys do not match in + mergejoin or outer pathkeys do not match + mergeclauses planner errors in corner cases. + + + + + + Repair pg_upgrade's failure to + preserve relfrozenxid for materialized + views (Tom Lane, Andres Freund) + + + + This oversight could lead to data corruption in materialized views + after an upgrade, manifesting as could not access status of + transaction or found xmin from before + relfrozenxid errors. The problem would be more likely to + occur in seldom-refreshed materialized views, or ones that were + maintained only with REFRESH MATERIALIZED VIEW + CONCURRENTLY. + + + + If such corruption is observed, it can be repaired by refreshing the + materialized view (without CONCURRENTLY). + + + + + + Fix incorrect reporting of PL/Python function names in + error CONTEXT stacks (Tom Lane) + + + + An error occurring within a nested PL/Python function call (that is, + one reached via a SPI query from another PL/Python function) would + result in a stack trace showing the inner function's name twice, + rather than the expected results. Also, an error in a nested + PL/Python DO block could result in a null pointer + dereference crash on some platforms. + + + + + + Allow contrib/auto_explain's + log_min_duration setting to range up + to INT_MAX, or about 24 days instead of 35 minutes + (Tom Lane) + + + + + + Mark assorted GUC variables as PGDLLIMPORT, to + ease porting extension modules to Windows (Metin Doslu) + + + + + + + + Release 9.6.7 From 5b570d771b80aadc98755208f8f1b81e9a5eb366 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Sun, 25 Feb 2018 17:27:20 -0500 Subject: [PATCH 1056/1087] Un-break parallel pg_upgrade. Commit b3f840120 changed pg_upgrade so that it'd actually drop and re-create the template1 and postgres databases in the new cluster. That works fine, serially. With the -j option it's not so fine, because other per-database jobs might be launched while the template1 database is dropped. Since they attempt to connect there to start up, kaboom. This is the cause of the intermittent failures buildfarm member jacana has been showing for the last month; evidently it is the only BF member configured to run the pg_upgrade test with parallelism enabled. Fix by processing template1 separately before we get into the parallel sub-job launch loop. (We could alternatively have made the postgres DB be the special case, but it seems likely that template1 will contain less stuff and so we lose less parallelism with this choice.) --- src/bin/pg_upgrade/pg_upgrade.c | 59 +++++++++++++++++++++++++++------ 1 file changed, 48 insertions(+), 11 deletions(-) diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c index bbfa4c1ef3..d12412799f 100644 --- a/src/bin/pg_upgrade/pg_upgrade.c +++ b/src/bin/pg_upgrade/pg_upgrade.c @@ -302,13 +302,21 @@ create_new_objects(void) prep_status("Restoring database schemas in the new cluster\n"); + /* + * We cannot process the template1 database concurrently with others, + * because when it's transiently dropped, connection attempts would fail. + * So handle it in a separate non-parallelized pass. 
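+	 * (As the commit message above notes, each parallel restore job begins
+	 * by connecting to template1, so a job launched while template1 is
+	 * transiently dropped would fail to connect.)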
+	 */
	for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
	{
		char		sql_file_name[MAXPGPATH],
					log_file_name[MAXPGPATH];
		DbInfo	   *old_db = &old_cluster.dbarr.dbs[dbnum];
		const char *create_opts;
-		const char *starting_db;
+
+		/* Process only template1 in this pass */
+		if (strcmp(old_db->db_name, "template1") != 0)
+			continue;

		pg_log(PG_STATUS, "%s", old_db->db_name);
		snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
@@ -320,26 +328,55 @@ create_new_objects(void)
	 * otherwise we would fail to propagate their database-level
	 * properties.
	 */
-		if (strcmp(old_db->db_name, "template1") == 0 ||
-			strcmp(old_db->db_name, "postgres") == 0)
-			create_opts = "--clean --create";
-		else
-			create_opts = "--create";
+		create_opts = "--clean --create";
+
+		exec_prog(log_file_name,
+				  NULL,
+				  true,
+				  true,
+				  "\"%s/pg_restore\" %s %s --exit-on-error --verbose "
+				  "--dbname postgres \"%s\"",
+				  new_cluster.bindir,
+				  cluster_conn_opts(&new_cluster),
+				  create_opts,
+				  sql_file_name);

-		/* When processing template1, we can't connect there to start with */
+		break;					/* done once we've processed template1 */
+	}
+
+	for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
+	{
+		char		sql_file_name[MAXPGPATH],
+					log_file_name[MAXPGPATH];
+		DbInfo	   *old_db = &old_cluster.dbarr.dbs[dbnum];
+		const char *create_opts;
+
+		/* Skip template1 in this pass */
		if (strcmp(old_db->db_name, "template1") == 0)
-			starting_db = "postgres";
+			continue;
+
+		pg_log(PG_STATUS, "%s", old_db->db_name);
+		snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
+		snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
+
+		/*
+		 * template1 and postgres databases will already exist in the target
+		 * installation, so tell pg_restore to drop and recreate them;
+		 * otherwise we would fail to propagate their database-level
+		 * properties.
+		 */
+		if (strcmp(old_db->db_name, "postgres") == 0)
+			create_opts = "--clean --create";
		else
-			starting_db = "template1";
+			create_opts = "--create";

		parallel_exec_prog(log_file_name, NULL,
						   "\"%s/pg_restore\" %s %s --exit-on-error --verbose "
-						   "--dbname %s \"%s\"",
+						   "--dbname template1 \"%s\"",
						   new_cluster.bindir,
						   cluster_conn_opts(&new_cluster),
						   create_opts,
-						   starting_db,
						   sql_file_name);
	}

From 3bf05e096b9f8375e640c5d7996aa57efd7f240c Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Mon, 26 Feb 2018 09:30:12 -0500
Subject: [PATCH 1057/1087] Add a new upper planner relation for
 partially-aggregated results.

Up until now, we've abused grouped_rel->partial_pathlist as a place to
store partial paths that have been partially aggregated, but that's really
not correct, because a partial path for a relation is supposed to be one
which produces the correct results with the addition of only a Gather or
Gather Merge node, and these paths also require a Finalize Aggregate step.

Instead, add a new partially_grouped_rel which can hold either partial
paths (which need to be gathered and then have aggregation finalized) or
non-partial paths (which only need to have aggregation finalized). This
allows us to reuse generate_gather_paths for partially_grouped_rel instead
of writing new code, so that this patch actually adds basically no net new
code while making things cleaner, simplifying things for pending patches
for partition-wise aggregate.

Robert Haas and Jeevan Chalke.
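As a rough illustration of the plan shape involved (a sketch under assumed
parallel settings, not output taken from this patch; the table name is invented):

    SET max_parallel_workers_per_gather = 2;
    EXPLAIN (COSTS OFF)
    SELECT key, count(*) FROM big_table GROUP BY key;
    --  Finalize HashAggregate
    --    ->  Gather
    --          ->  Partial HashAggregate
    --                ->  Parallel Seq Scan on big_table
    -- The paths below the Gather are the partially aggregated ones that now
    -- live in partially_grouped_rel; grouped_rel keeps only paths whose
    -- aggregation has been finalized.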
The larger patch series of which this patch is a part was also reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal Legrand, Rafia Sabih, and me. Discussion: http://postgr.es/m/CA+TgmobrzFYS3+U8a_BCy3-hOvh5UyJbC18rEcYehxhpw5=ETA@mail.gmail.com Discussion: http://postgr.es/m/CA+TgmoZyQEjdBNuoG9-wC5GQ5GrO4544Myo13dVptvx+uLg9uQ@mail.gmail.com --- src/backend/optimizer/README | 1 + src/backend/optimizer/geqo/geqo_eval.c | 2 +- src/backend/optimizer/path/allpaths.c | 26 ++- src/backend/optimizer/plan/planner.c | 273 ++++++++++++------------- src/include/nodes/relation.h | 2 + src/include/optimizer/paths.h | 3 +- 6 files changed, 154 insertions(+), 153 deletions(-) diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README index 84e60f7f6f..3e254c8b2d 100644 --- a/src/backend/optimizer/README +++ b/src/backend/optimizer/README @@ -998,6 +998,7 @@ considered useful for each step. Currently, we may create these types of additional RelOptInfos during upper-level planning: UPPERREL_SETOP result of UNION/INTERSECT/EXCEPT, if any +UPPERREL_PARTIAL_GROUP_AGG result of partial grouping/aggregation, if any UPPERREL_GROUP_AGG result of grouping/aggregation, if any UPPERREL_WINDOW result of window functions, if any UPPERREL_DISTINCT result of "SELECT DISTINCT", if any diff --git a/src/backend/optimizer/geqo/geqo_eval.c b/src/backend/optimizer/geqo/geqo_eval.c index 57f0f594e5..0be2a73e05 100644 --- a/src/backend/optimizer/geqo/geqo_eval.c +++ b/src/backend/optimizer/geqo/geqo_eval.c @@ -268,7 +268,7 @@ merge_clump(PlannerInfo *root, List *clumps, Clump *new_clump, bool force) generate_partitionwise_join_paths(root, joinrel); /* Create GatherPaths for any useful partial paths for rel */ - generate_gather_paths(root, joinrel); + generate_gather_paths(root, joinrel, false); /* Find and save the cheapest paths for this joinrel */ set_cheapest(joinrel); diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c index f714247ebb..1c792a00eb 100644 --- a/src/backend/optimizer/path/allpaths.c +++ b/src/backend/optimizer/path/allpaths.c @@ -488,7 +488,7 @@ set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel, * we'll consider gathering partial paths for the parent appendrel.) */ if (rel->reloptkind == RELOPT_BASEREL) - generate_gather_paths(root, rel); + generate_gather_paths(root, rel, false); /* * Allow a plugin to editorialize on the set of Paths for this base @@ -2444,27 +2444,42 @@ set_worktable_pathlist(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte) * This must not be called until after we're done creating all partial paths * for the specified relation. (Otherwise, add_partial_path might delete a * path that some GatherPath or GatherMergePath has a reference to.) + * + * If we're generating paths for a scan or join relation, override_rows will + * be false, and we'll just use the relation's size estimate. When we're + * being called for a partially-grouped path, though, we need to override + * the rowcount estimate. (It's not clear that the particular value we're + * using here is actually best, but the underlying rel has no estimate so + * we must do something.) */ void -generate_gather_paths(PlannerInfo *root, RelOptInfo *rel) +generate_gather_paths(PlannerInfo *root, RelOptInfo *rel, bool override_rows) { Path *cheapest_partial_path; Path *simple_gather_path; ListCell *lc; + double rows; + double *rowsp = NULL; /* If there are no partial paths, there's nothing to do here. 
*/ if (rel->partial_pathlist == NIL) return; + /* Should we override the rel's rowcount estimate? */ + if (override_rows) + rowsp = &rows; + /* * The output of Gather is always unsorted, so there's only one partial * path of interest: the cheapest one. That will be the one at the front * of partial_pathlist because of the way add_partial_path works. */ cheapest_partial_path = linitial(rel->partial_pathlist); + rows = + cheapest_partial_path->rows * cheapest_partial_path->parallel_workers; simple_gather_path = (Path *) create_gather_path(root, rel, cheapest_partial_path, rel->reltarget, - NULL, NULL); + NULL, rowsp); add_path(rel, simple_gather_path); /* @@ -2479,8 +2494,9 @@ generate_gather_paths(PlannerInfo *root, RelOptInfo *rel) if (subpath->pathkeys == NIL) continue; + rows = subpath->rows * subpath->parallel_workers; path = create_gather_merge_path(root, rel, subpath, rel->reltarget, - subpath->pathkeys, NULL, NULL); + subpath->pathkeys, NULL, rowsp); add_path(rel, &path->path); } } @@ -2653,7 +2669,7 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) generate_partitionwise_join_paths(root, rel); /* Create GatherPaths for any useful partial paths for rel */ - generate_gather_paths(root, rel); + generate_gather_paths(root, rel, false); /* Find and save the cheapest paths for this rel */ set_cheapest(rel); diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c index 3e8cd1447c..e8f6cc559b 100644 --- a/src/backend/optimizer/plan/planner.c +++ b/src/backend/optimizer/plan/planner.c @@ -186,19 +186,17 @@ static PathTarget *make_sort_input_target(PlannerInfo *root, static void adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel, List *targets, List *targets_contain_srfs); static void add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel, - RelOptInfo *grouped_rel, PathTarget *target, - PathTarget *partial_grouping_target, + RelOptInfo *grouped_rel, + PathTarget *target, + RelOptInfo *partially_grouped_rel, const AggClauseCosts *agg_costs, const AggClauseCosts *agg_final_costs, grouping_sets_data *gd, bool can_sort, bool can_hash, double dNumGroups, List *havingQual); -static void add_partial_paths_to_grouping_rel(PlannerInfo *root, +static void add_paths_to_partial_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel, - RelOptInfo *grouped_rel, - PathTarget *target, - PathTarget *partial_grouping_target, + RelOptInfo *partial_grouped_rel, AggClauseCosts *agg_partial_costs, - AggClauseCosts *agg_final_costs, grouping_sets_data *gd, bool can_sort, bool can_hash, @@ -3601,6 +3599,11 @@ estimate_hashagg_tablesize(Path *path, const AggClauseCosts *agg_costs, * create_grouping_paths * * Build a new upperrel containing Paths for grouping and/or aggregation. + * Along the way, we also build an upperrel for Paths which are partially + * grouped and/or aggregated. A partially grouped and/or aggregated path + * needs a FinalizeAggregate node to complete the aggregation. Currently, + * the only partially grouped paths we build are also partial paths; that + * is, they need a Gather and then a FinalizeAggregate. 
* * input_rel: contains the source-data Paths * target: the pathtarget for the result Paths to compute @@ -3627,7 +3630,7 @@ create_grouping_paths(PlannerInfo *root, Query *parse = root->parse; Path *cheapest_path = input_rel->cheapest_total_path; RelOptInfo *grouped_rel; - PathTarget *partial_grouping_target = NULL; + RelOptInfo *partially_grouped_rel; AggClauseCosts agg_partial_costs; /* parallel only */ AggClauseCosts agg_final_costs; /* parallel only */ double dNumGroups; @@ -3635,26 +3638,41 @@ create_grouping_paths(PlannerInfo *root, bool can_sort; bool try_parallel_aggregation; - /* For now, do all work in the (GROUP_AGG, NULL) upperrel */ + /* + * For now, all aggregated paths are added to the (GROUP_AGG, NULL) + * upperrel. Paths that are only partially aggregated go into the + * (UPPERREL_PARTIAL_GROUP_AGG, NULL) upperrel. + */ grouped_rel = fetch_upper_rel(root, UPPERREL_GROUP_AGG, NULL); + partially_grouped_rel = fetch_upper_rel(root, UPPERREL_PARTIAL_GROUP_AGG, + NULL); /* * If the input relation is not parallel-safe, then the grouped relation * can't be parallel-safe, either. Otherwise, it's parallel-safe if the - * target list and HAVING quals are parallel-safe. + * target list and HAVING quals are parallel-safe. The partially grouped + * relation obeys the same rules. */ if (input_rel->consider_parallel && is_parallel_safe(root, (Node *) target->exprs) && is_parallel_safe(root, (Node *) parse->havingQual)) + { grouped_rel->consider_parallel = true; + partially_grouped_rel->consider_parallel = true; + } /* - * If the input rel belongs to a single FDW, so does the grouped rel. + * If the input rel belongs to a single FDW, so does the grouped rel. Same + * for the partially_grouped_rel. */ grouped_rel->serverid = input_rel->serverid; grouped_rel->userid = input_rel->userid; grouped_rel->useridiscurrent = input_rel->useridiscurrent; grouped_rel->fdwroutine = input_rel->fdwroutine; + partially_grouped_rel->serverid = input_rel->serverid; + partially_grouped_rel->userid = input_rel->userid; + partially_grouped_rel->useridiscurrent = input_rel->useridiscurrent; + partially_grouped_rel->fdwroutine = input_rel->fdwroutine; /* * Check for degenerate grouping. @@ -3778,14 +3796,13 @@ create_grouping_paths(PlannerInfo *root, /* * Before generating paths for grouped_rel, we first generate any possible - * partial paths; that way, later code can easily consider both parallel - * and non-parallel approaches to grouping. Note that the partial paths - * we generate here are also partially aggregated, so simply pushing a - * Gather node on top is insufficient to create a final path, as would be - * the case for a scan/join rel. + * partial paths for partially_grouped_rel; that way, later code can + * easily consider both parallel and non-parallel approaches to grouping. */ if (try_parallel_aggregation) { + PathTarget *partial_grouping_target; + /* * Build target list for partial aggregate paths. These paths cannot * just emit the same tlist as regular aggregate paths, because (1) we @@ -3794,6 +3811,7 @@ create_grouping_paths(PlannerInfo *root, * partial mode. 
 		 */
 		partial_grouping_target = make_partial_grouping_target(root, target);
+		partially_grouped_rel->reltarget = partial_grouping_target;
 
 		/*
 		 * Collect statistics about aggregates for estimating costs of
@@ -3817,16 +3835,16 @@ create_grouping_paths(PlannerInfo *root,
 								 &agg_final_costs);
 		}
 
-		add_partial_paths_to_grouping_rel(root, input_rel, grouped_rel, target,
-										  partial_grouping_target,
-										  &agg_partial_costs, &agg_final_costs,
+		add_paths_to_partial_grouping_rel(root, input_rel,
+										  partially_grouped_rel,
+										  &agg_partial_costs,
 										  gd, can_sort, can_hash,
 										  (List *) parse->havingQual);
 	}
 
 	/* Build final grouping paths */
 	add_paths_to_grouping_rel(root, input_rel, grouped_rel, target,
-							  partial_grouping_target, agg_costs,
+							  partially_grouped_rel, agg_costs,
 							  &agg_final_costs, gd, can_sort, can_hash,
 							  dNumGroups, (List *) parse->havingQual);
@@ -3854,16 +3872,6 @@ create_grouping_paths(PlannerInfo *root,
 	/* Now choose the best path(s) */
 	set_cheapest(grouped_rel);
 
-	/*
-	 * We've been using the partial pathlist for the grouped relation to hold
-	 * partially aggregated paths, but that's actually a little bit bogus
-	 * because it's unsafe for later planning stages -- like ordered_rel ---
-	 * to get the idea that they can use these partial paths as if they didn't
-	 * need a FinalizeAggregate step.  Zap the partial pathlist at this stage
-	 * so we don't get confused.
-	 */
-	grouped_rel->partial_pathlist = NIL;
-
 	return grouped_rel;
 }
 
@@ -5996,8 +6004,9 @@ get_partitioned_child_rels_for_join(PlannerInfo *root, Relids join_relids)
  */
 static void
 add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel,
-						  RelOptInfo *grouped_rel, PathTarget *target,
-						  PathTarget *partial_grouping_target,
+						  RelOptInfo *grouped_rel,
+						  PathTarget *target,
+						  RelOptInfo *partially_grouped_rel,
 						  const AggClauseCosts *agg_costs,
 						  const AggClauseCosts *agg_final_costs,
 						  grouping_sets_data *gd, bool can_sort, bool can_hash,
@@ -6079,32 +6088,27 @@ add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel,
 		}
 
 		/*
-		 * Now generate a complete GroupAgg Path atop of the cheapest partial
-		 * path.  We can do this using either Gather or Gather Merge.
+		 * Instead of operating directly on the input relation, we can
+		 * consider finalizing a partially aggregated path.
 		 */
-		if (grouped_rel->partial_pathlist)
+		foreach(lc, partially_grouped_rel->pathlist)
 		{
-			Path	   *path = (Path *) linitial(grouped_rel->partial_pathlist);
-			double		total_groups = path->rows * path->parallel_workers;
-
-			path = (Path *) create_gather_path(root,
-											   grouped_rel,
-											   path,
-											   partial_grouping_target,
-											   NULL,
-											   &total_groups);
+			Path	   *path = (Path *) lfirst(lc);
 
 			/*
-			 * Since Gather's output is always unsorted, we'll need to sort,
-			 * unless there's no GROUP BY clause or a degenerate (constant)
-			 * one, in which case there will only be a single group.
+			 * Insert a Sort node, if required.  But there's no point in
+			 * sorting anything but the cheapest path.
 			 */
-			if (root->group_pathkeys)
+			if (!pathkeys_contained_in(root->group_pathkeys, path->pathkeys))
+			{
+				if (path != partially_grouped_rel->cheapest_total_path)
+					continue;
 				path = (Path *) create_sort_path(root,
 												 grouped_rel,
 												 path,
 												 root->group_pathkeys,
 												 -1.0);
+			}
 
 			if (parse->hasAggs)
 				add_path(grouped_rel, (Path *)
@@ -6127,70 +6131,6 @@ add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel,
 								   parse->groupClause,
 								   havingQual,
 								   dNumGroups));
-
-			/*
-			 * The point of using Gather Merge rather than Gather is that it
-			 * can preserve the ordering of the input path, so there's no
-			 * reason to try it unless (1) it's possible to produce more than
-			 * one output row and (2) we want the output path to be ordered.
-			 */
-			if (parse->groupClause != NIL && root->group_pathkeys != NIL)
-			{
-				foreach(lc, grouped_rel->partial_pathlist)
-				{
-					Path	   *subpath = (Path *) lfirst(lc);
-					Path	   *gmpath;
-					double		total_groups;
-
-					/*
-					 * It's useful to consider paths that are already properly
-					 * ordered for Gather Merge, because those don't need a
-					 * sort.  It's also useful to consider the cheapest path,
-					 * because sorting it in parallel and then doing Gather
-					 * Merge may be better than doing an unordered Gather
-					 * followed by a sort.  But there's no point in considering
-					 * non-cheapest paths that aren't already sorted
-					 * correctly.
-					 */
-					if (path != subpath &&
-						!pathkeys_contained_in(root->group_pathkeys,
-											   subpath->pathkeys))
-						continue;
-
-					total_groups = subpath->rows * subpath->parallel_workers;
-
-					gmpath = (Path *)
-						create_gather_merge_path(root,
-												 grouped_rel,
-												 subpath,
-												 partial_grouping_target,
-												 root->group_pathkeys,
-												 NULL,
-												 &total_groups);
-
-					if (parse->hasAggs)
-						add_path(grouped_rel, (Path *)
-								 create_agg_path(root,
-												 grouped_rel,
-												 gmpath,
-												 target,
-												 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
-												 AGGSPLIT_FINAL_DESERIAL,
-												 parse->groupClause,
-												 havingQual,
-												 agg_final_costs,
-												 dNumGroups));
-					else
-						add_path(grouped_rel, (Path *)
-								 create_group_path(root,
-												   grouped_rel,
-												   gmpath,
-												   target,
-												   parse->groupClause,
-												   havingQual,
-												   dNumGroups));
-				}
-			}
 		}
 	}
@@ -6240,29 +6180,19 @@ add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel,
 		}
 
 		/*
-		 * Generate a HashAgg Path atop of the cheapest partial path.  Once
-		 * again, we'll only do this if it looks as though the hash table
-		 * won't exceed work_mem.
+		 * Generate a Finalize HashAgg Path atop of the cheapest partially
+		 * grouped path.  Once again, we'll only do this if it looks as though
+		 * the hash table won't exceed work_mem.
 		 */
-		if (grouped_rel->partial_pathlist)
+		if (partially_grouped_rel->pathlist)
 		{
-			Path	   *path = (Path *) linitial(grouped_rel->partial_pathlist);
+			Path	   *path = partially_grouped_rel->cheapest_total_path;
 
 			hashaggtablesize = estimate_hashagg_tablesize(path,
 														  agg_final_costs,
 														  dNumGroups);
 
 			if (hashaggtablesize < work_mem * 1024L)
-			{
-				double		total_groups = path->rows * path->parallel_workers;
-
-				path = (Path *) create_gather_path(root,
-												   grouped_rel,
-												   path,
-												   partial_grouping_target,
-												   NULL,
-												   &total_groups);
-
 				add_path(grouped_rel, (Path *)
 						 create_agg_path(root,
 										 grouped_rel,
@@ -6274,25 +6204,24 @@ add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel,
 										 havingQual,
 										 agg_final_costs,
 										 dNumGroups));
-			}
 		}
 	}
 }
 
 /*
- * add_partial_paths_to_grouping_rel
+ * add_paths_to_partial_grouping_rel
  *
- * Add partial paths to grouping relation.  These paths are not fully
- * aggregated; a FinalizeAggregate step is still required.
+ * First, generate partially aggregated partial paths from the partial paths
+ * for the input relation, and then generate partially aggregated non-partial
+ * paths using Gather or Gather Merge.  All paths for this relation -- both
+ * partial and non-partial -- have been partially aggregated but require a
+ * subsequent FinalizeAggregate step.
  */
 static void
-add_partial_paths_to_grouping_rel(PlannerInfo *root,
+add_paths_to_partial_grouping_rel(PlannerInfo *root,
 								  RelOptInfo *input_rel,
-								  RelOptInfo *grouped_rel,
-								  PathTarget *target,
-								  PathTarget *partial_grouping_target,
+								  RelOptInfo *partially_grouped_rel,
 								  AggClauseCosts *agg_partial_costs,
-								  AggClauseCosts *agg_final_costs,
 								  grouping_sets_data *gd,
 								  bool can_sort,
 								  bool can_hash,
@@ -6330,17 +6259,17 @@ add_partial_paths_to_grouping_rel(PlannerInfo *root,
 			/* Sort the cheapest partial path, if it isn't already */
 			if (!is_sorted)
 				path = (Path *) create_sort_path(root,
-												 grouped_rel,
+												 partially_grouped_rel,
 												 path,
 												 root->group_pathkeys,
 												 -1.0);
 
 			if (parse->hasAggs)
-				add_partial_path(grouped_rel, (Path *)
+				add_partial_path(partially_grouped_rel, (Path *)
 								 create_agg_path(root,
-												 grouped_rel,
+												 partially_grouped_rel,
 												 path,
-												 partial_grouping_target,
+												 partially_grouped_rel->reltarget,
 												 parse->groupClause ? AGG_SORTED : AGG_PLAIN,
 												 AGGSPLIT_INITIAL_SERIAL,
 												 parse->groupClause,
@@ -6348,11 +6277,11 @@ add_partial_paths_to_grouping_rel(PlannerInfo *root,
 												 agg_partial_costs,
 												 dNumPartialGroups));
 			else
-				add_partial_path(grouped_rel, (Path *)
+				add_partial_path(partially_grouped_rel, (Path *)
 								 create_group_path(root,
-												   grouped_rel,
+												   partially_grouped_rel,
 												   path,
-												   partial_grouping_target,
+												   partially_grouped_rel->reltarget,
 												   parse->groupClause,
 												   NIL,
 												   dNumPartialGroups));
@@ -6376,11 +6305,11 @@ add_partial_paths_to_grouping_rel(PlannerInfo *root,
 		 */
 		if (hashaggtablesize < work_mem * 1024L)
 		{
-			add_partial_path(grouped_rel, (Path *)
+			add_partial_path(partially_grouped_rel, (Path *)
 							 create_agg_path(root,
-											 grouped_rel,
+											 partially_grouped_rel,
 											 cheapest_partial_path,
-											 partial_grouping_target,
+											 partially_grouped_rel->reltarget,
 											 AGG_HASHED,
 											 AGGSPLIT_INITIAL_SERIAL,
 											 parse->groupClause,
@@ -6389,6 +6318,58 @@ add_partial_paths_to_grouping_rel(PlannerInfo *root,
 											 dNumPartialGroups));
 		}
 	}
+
+	/*
+	 * If there is an FDW that's responsible for all baserels of the query,
+	 * let it consider adding partially grouped ForeignPaths.
+	 */
+	if (partially_grouped_rel->fdwroutine &&
+		partially_grouped_rel->fdwroutine->GetForeignUpperPaths)
+	{
+		FdwRoutine *fdwroutine = partially_grouped_rel->fdwroutine;
+
+		fdwroutine->GetForeignUpperPaths(root,
+										 UPPERREL_PARTIAL_GROUP_AGG,
+										 input_rel, partially_grouped_rel);
+	}
+
+	/*
+	 * Try adding Gather or Gather Merge to partial paths to produce
+	 * non-partial paths.
+	 */
+	generate_gather_paths(root, partially_grouped_rel, true);
+
+	/*
+	 * generate_gather_paths won't consider sorting the cheapest path to match
+	 * the group keys and then applying a Gather Merge node to the result;
+	 * that might be a winning strategy.
+	 */
+	if (!pathkeys_contained_in(root->group_pathkeys,
+							   cheapest_partial_path->pathkeys))
+	{
+		Path	   *path;
+		double		total_groups;
+
+		total_groups =
+			cheapest_partial_path->rows * cheapest_partial_path->parallel_workers;
+		path = (Path *) create_sort_path(root, partially_grouped_rel,
+										 cheapest_partial_path,
+										 root->group_pathkeys,
+										 -1.0);
+		path = (Path *)
+			create_gather_merge_path(root,
+									 partially_grouped_rel,
+									 path,
+									 partially_grouped_rel->reltarget,
+									 root->group_pathkeys,
+									 NULL,
+									 &total_groups);
+
+		add_path(partially_grouped_rel, path);
+	}
+
+	/* Now choose the best path(s) */
+	set_cheapest(partially_grouped_rel);
 }
 
 /*
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index b1c63173c2..db8de2dfd0 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -71,6 +71,8 @@ typedef struct AggClauseCosts
 typedef enum UpperRelationKind
 {
 	UPPERREL_SETOP,				/* result of UNION/INTERSECT/EXCEPT, if any */
+	UPPERREL_PARTIAL_GROUP_AGG, /* result of partial grouping/aggregation, if
+								 * any */
 	UPPERREL_GROUP_AGG,			/* result of grouping/aggregation, if any */
 	UPPERREL_WINDOW,			/* result of window functions, if any */
 	UPPERREL_DISTINCT,			/* result of "SELECT DISTINCT", if any */
diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h
index b2b4353eea..94f9bb2b57 100644
--- a/src/include/optimizer/paths.h
+++ b/src/include/optimizer/paths.h
@@ -53,7 +53,8 @@ extern void set_dummy_rel_pathlist(RelOptInfo *rel);
 extern RelOptInfo *standard_join_search(PlannerInfo *root, int levels_needed,
 					   List *initial_rels);
 
-extern void generate_gather_paths(PlannerInfo *root, RelOptInfo *rel);
+extern void generate_gather_paths(PlannerInfo *root, RelOptInfo *rel,
+					  bool override_rows);
 extern int	compute_parallel_worker(RelOptInfo *rel, double heap_pages,
 						double index_pages, int max_workers);
 extern void create_partial_bitmap_paths(PlannerInfo *root, RelOptInfo *rel,
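
[Likewise a sketch rather than patch content: the partially grouped rel
introduced above holds the Partial half of a two-phase parallel aggregation,
which the grouped rel then finalizes.  Assuming the same hypothetical table
big(a int):]

    EXPLAIN (COSTS OFF) SELECT a, count(*) FROM big GROUP BY a;
    --   Finalize HashAggregate
    --     Group Key: a
    --     ->  Gather
    --           Workers Planned: 2
    --           ->  Partial HashAggregate
    --                 Group Key: a
    --                 ->  Parallel Seq Scan on big
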
From 3d2aed664ee8271fd6c721ed0aa10168cda112ea Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Mon, 26 Feb 2018 10:18:21 -0500
Subject: [PATCH 1058/1087] Avoid using unsafe search_path settings during dump
 and restore.

Historically, pg_dump has "set search_path = foo, pg_catalog" when
dumping an object in schema "foo", and has also caused that setting
to be used while restoring the object.  This is problematic because
functions and operators in schema "foo" could capture references meant
to refer to pg_catalog entries, both in the queries issued by pg_dump
and those issued during the subsequent restore run.  That could result
in dump/restore misbehavior, or in privilege escalation if a nefarious
user installs trojan-horse functions or operators.

This patch changes pg_dump so that it does not change the search_path
dynamically.  The emitted restore script sets the search_path to what
was used at dump time, and then leaves it alone thereafter.  Created
objects are placed in the correct schema, regardless of the active
search_path, by dint of schema-qualifying their names in the CREATE
commands, as well as in subsequent ALTER and ALTER-like commands.

Since this change requires a change in the behavior of pg_restore
when processing an archive file made according to this new convention,
bump the archive file version number; old versions of pg_restore will
therefore refuse to process files made with new versions of pg_dump.

Security: CVE-2018-1058
---
 src/backend/utils/adt/ruleutils.c             |   48 +-
 src/bin/pg_dump/dumputils.c                   |  100 +-
 src/bin/pg_dump/dumputils.h                   |    6 +-
 src/bin/pg_dump/pg_backup.h                   |    3 +
 src/bin/pg_dump/pg_backup_archiver.c          |   53 +-
 src/bin/pg_dump/pg_backup_archiver.h          |    4 +-
 src/bin/pg_dump/pg_dump.c                     | 1752 +++++++----------
 src/bin/pg_dump/pg_dumpall.c                  |   34 +-
 src/bin/pg_dump/t/002_pg_dump.pl              |  384 ++--
 src/test/modules/test_pg_dump/t/001_base.pl   |   50 +-
 .../regress/expected/collate.icu.utf8.out     |   12 +-
 .../regress/expected/collate.linux.utf8.out   |   12 +-
 src/test/regress/expected/collate.out         |   12 +-
 src/test/regress/expected/indexing.out        |   98 +-
 src/test/regress/expected/rules.out           |   78 +-
 src/test/regress/expected/triggers.out        |   12 +-
 16 files changed, 1166 insertions(+), 1492 deletions(-)
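
[Illustration of the convention this commit establishes, based on the
dumpSearchPath() and schema-qualification changes below; the table and role
names are hypothetical.  A restore script now pins search_path once and
schema-qualifies every object it creates, rather than flipping search_path
per object:]

    SELECT pg_catalog.set_config('search_path', '', false);

    CREATE TABLE public.t (
        a integer
    );
    ALTER TABLE public.t OWNER TO postgres;
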
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index ba9fab4582..3697466789 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -86,15 +86,17 @@
 #define PRETTYINDENT_LIMIT	40	/* wrap limit */
 
 /* Pretty flags */
-#define PRETTYFLAG_PAREN	1
-#define PRETTYFLAG_INDENT	2
+#define PRETTYFLAG_PAREN	0x0001
+#define PRETTYFLAG_INDENT	0x0002
+#define PRETTYFLAG_SCHEMA	0x0004
 
 /* Default line length for pretty-print wrapping: 0 means wrap always */
 #define WRAP_COLUMN_DEFAULT		0
 
-/* macro to test if pretty action needed */
+/* macros to test if pretty action needed */
 #define PRETTY_PAREN(context)	((context)->prettyFlags & PRETTYFLAG_PAREN)
 #define PRETTY_INDENT(context)	((context)->prettyFlags & PRETTYFLAG_INDENT)
+#define PRETTY_SCHEMA(context)	((context)->prettyFlags & PRETTYFLAG_SCHEMA)
 
 /* ----------
@@ -499,7 +501,7 @@ pg_get_ruledef_ext(PG_FUNCTION_ARGS)
 	int			prettyFlags;
 	char	   *res;
 
-	prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+	prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
 
 	res = pg_get_ruledef_worker(ruleoid, prettyFlags);
 
@@ -620,7 +622,7 @@ pg_get_viewdef_ext(PG_FUNCTION_ARGS)
 	int			prettyFlags;
 	char	   *res;
 
-	prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+	prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
 
 	res = pg_get_viewdef_worker(viewoid, prettyFlags, WRAP_COLUMN_DEFAULT);
 
@@ -640,7 +642,7 @@ pg_get_viewdef_wrap(PG_FUNCTION_ARGS)
 	char	   *res;
 
 	/* calling this implies we want pretty printing */
-	prettyFlags = PRETTYFLAG_PAREN | PRETTYFLAG_INDENT;
+	prettyFlags = PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA;
 
 	res = pg_get_viewdef_worker(viewoid, prettyFlags, wrap);
 
@@ -686,7 +688,7 @@ pg_get_viewdef_name_ext(PG_FUNCTION_ARGS)
 	Oid			viewoid;
 	char	   *res;
 
-	prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+	prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
 
 	/* Look up view name.  Can't lock it - we might not have privileges. */
 	viewrel = makeRangeVarFromNameList(textToQualifiedNameList(viewname));
@@ -922,8 +924,15 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 			appendStringInfoString(&buf, " TRUNCATE");
 		findx++;
 	}
+
+	/*
+	 * In non-pretty mode, always schema-qualify the target table name for
+	 * safety.  In pretty mode, schema-qualify only if not visible.
+	 */
 	appendStringInfo(&buf, " ON %s ",
-					 generate_relation_name(trigrec->tgrelid, NIL));
+					 pretty ?
+					 generate_relation_name(trigrec->tgrelid, NIL) :
+					 generate_qualified_relation_name(trigrec->tgrelid));
 
 	if (OidIsValid(trigrec->tgconstraint))
 	{
@@ -1017,7 +1026,7 @@ pg_get_triggerdef_worker(Oid trigid, bool pretty)
 		context.windowClause = NIL;
 		context.windowTList = NIL;
 		context.varprefix = true;
-		context.prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+		context.prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
 		context.wrapColumn = WRAP_COLUMN_DEFAULT;
 		context.indentLevel = PRETTYINDENT_STD;
 		context.special_exprkind = EXPR_KIND_NONE;
@@ -1104,7 +1113,7 @@ pg_get_indexdef_ext(PG_FUNCTION_ARGS)
 	int			prettyFlags;
 	char	   *res;
 
-	prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+	prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
 
 	res = pg_get_indexdef_worker(indexrelid, colno, NULL, colno != 0,
 								 false, false, prettyFlags, true);
@@ -1132,7 +1141,8 @@ pg_get_indexdef_columns(Oid indexrelid, bool pretty)
 {
 	int			prettyFlags;
 
-	prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+	prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
+
 	return pg_get_indexdef_worker(indexrelid, 0, NULL, true,
 								  false, false, prettyFlags, false);
 }
@@ -1264,7 +1274,9 @@ pg_get_indexdef_worker(Oid indexrelid, int colno,
 						 quote_identifier(NameStr(idxrelrec->relname)),
 						 idxrelrec->relkind == RELKIND_PARTITIONED_INDEX
						 && !inherits ? "ONLY " : "",
-						 generate_relation_name(indrelid, NIL),
+						 (prettyFlags & PRETTYFLAG_SCHEMA) ?
+						 generate_relation_name(indrelid, NIL) :
+						 generate_qualified_relation_name(indrelid),
 						 quote_identifier(NameStr(amrec->amname)));
 	else						/* currently, must be EXCLUDE constraint */
 		appendStringInfo(&buf, "EXCLUDE USING %s (",
@@ -1575,7 +1587,8 @@ pg_get_partkeydef_columns(Oid relid, bool pretty)
 {
 	int			prettyFlags;
 
-	prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+	prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
+
 	return pg_get_partkeydef_worker(relid, prettyFlags, true, false);
 }
 
@@ -1803,7 +1816,7 @@ pg_get_constraintdef_ext(PG_FUNCTION_ARGS)
 	int			prettyFlags;
 	char	   *res;
 
-	prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+	prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
 
 	res = pg_get_constraintdef_worker(constraintId, false, prettyFlags, true);
 
@@ -2258,7 +2271,7 @@ pg_get_expr_ext(PG_FUNCTION_ARGS)
 	int			prettyFlags;
 	char	   *relname;
 
-	prettyFlags = pretty ? PRETTYFLAG_PAREN | PRETTYFLAG_INDENT : PRETTYFLAG_INDENT;
+	prettyFlags = pretty ? (PRETTYFLAG_PAREN | PRETTYFLAG_INDENT | PRETTYFLAG_SCHEMA) : PRETTYFLAG_INDENT;
 
 	if (OidIsValid(relid))
 	{
@@ -4709,7 +4722,10 @@ make_ruledef(StringInfo buf, HeapTuple ruletup, TupleDesc rulettc,
 	}
 
 	/* The relation the rule is fired on */
-	appendStringInfo(buf, " TO %s", generate_relation_name(ev_class, NIL));
+	appendStringInfo(buf, " TO %s",
+					 (prettyFlags & PRETTYFLAG_SCHEMA) ?
+					 generate_relation_name(ev_class, NIL) :
+					 generate_qualified_relation_name(ev_class));
 
 	/* If the rule has an event qualification, add it */
 	if (ev_qual == NULL)
diff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c
index 7afddc3153..7f5bb1343e 100644
--- a/src/bin/pg_dump/dumputils.c
+++ b/src/bin/pg_dump/dumputils.c
@@ -32,6 +32,7 @@ static void AddAcl(PQExpBuffer aclbuf, const char *keyword,
  *
  *	name: the object name, in the form to use in the commands (already quoted)
  *	subname: the sub-object name, if any (already quoted); NULL if none
+ *	nspname: the namespace the object is in (NULL if none); not pre-quoted
  *	type: the object type (as seen in GRANT command: must be one of
  *		TABLE, SEQUENCE, FUNCTION, PROCEDURE, LANGUAGE, SCHEMA, DATABASE, TABLESPACE,
  *		FOREIGN DATA WRAPPER, SERVER, or LARGE OBJECT)
@@ -52,7 +53,7 @@ static void AddAcl(PQExpBuffer aclbuf, const char *keyword,
  * since this routine uses fmtId() internally.
  */
 bool
-buildACLCommands(const char *name, const char *subname,
+buildACLCommands(const char *name, const char *subname, const char *nspname,
 				 const char *type, const char *acls, const char *racls,
 				 const char *owner, const char *prefix, int remoteVersion,
 				 PQExpBuffer sql)
@@ -152,7 +153,10 @@ buildACLCommands(const char *name, const char *subname,
 		appendPQExpBuffer(firstsql, "%sREVOKE ALL", prefix);
 		if (subname)
 			appendPQExpBuffer(firstsql, "(%s)", subname);
-		appendPQExpBuffer(firstsql, " ON %s %s FROM PUBLIC;\n", type, name);
+		appendPQExpBuffer(firstsql, " ON %s ", type);
+		if (nspname && *nspname)
+			appendPQExpBuffer(firstsql, "%s.", fmtId(nspname));
+		appendPQExpBuffer(firstsql, "%s FROM PUBLIC;\n", name);
 	}
 	else
 	{
@@ -170,8 +174,11 @@ buildACLCommands(const char *name, const char *subname,
 			{
 				if (privs->len > 0)
 				{
-					appendPQExpBuffer(firstsql, "%sREVOKE %s ON %s %s FROM ",
-									  prefix, privs->data, type, name);
+					appendPQExpBuffer(firstsql, "%sREVOKE %s ON %s ",
+									  prefix, privs->data, type);
+					if (nspname && *nspname)
+						appendPQExpBuffer(firstsql, "%s.", fmtId(nspname));
+					appendPQExpBuffer(firstsql, "%s FROM ", name);
 					if (grantee->len == 0)
 						appendPQExpBufferStr(firstsql, "PUBLIC;\n");
 					else if (strncmp(grantee->data, "group ",
@@ -185,8 +192,11 @@ buildACLCommands(const char *name, const char *subname,
 				if (privswgo->len > 0)
 				{
 					appendPQExpBuffer(firstsql,
-									  "%sREVOKE GRANT OPTION FOR %s ON %s %s FROM ",
-									  prefix, privswgo->data, type, name);
+									  "%sREVOKE GRANT OPTION FOR %s ON %s ",
+									  prefix, privswgo->data, type);
+					if (nspname && *nspname)
+						appendPQExpBuffer(firstsql, "%s.", fmtId(nspname));
+					appendPQExpBuffer(firstsql, "%s FROM ", name);
 					if (grantee->len == 0)
 						appendPQExpBufferStr(firstsql, "PUBLIC");
 					else if (strncmp(grantee->data, "group ",
@@ -251,18 +261,33 @@ buildACLCommands(const char *name, const char *subname,
 					appendPQExpBuffer(firstsql, "%sREVOKE ALL", prefix);
 					if (subname)
 						appendPQExpBuffer(firstsql, "(%s)", subname);
-					appendPQExpBuffer(firstsql, " ON %s %s FROM %s;\n",
-									  type, name, fmtId(grantee->data));
+					appendPQExpBuffer(firstsql, " ON %s ", type);
+					if (nspname && *nspname)
+						appendPQExpBuffer(firstsql, "%s.", fmtId(nspname));
+					appendPQExpBuffer(firstsql, "%s FROM %s;\n",
+									  name, fmtId(grantee->data));
 					if (privs->len > 0)
+					{
 						appendPQExpBuffer(firstsql,
-										  "%sGRANT %s ON %s %s TO %s;\n",
-										  prefix, privs->data, type, name,
-										  fmtId(grantee->data));
+										  "%sGRANT %s ON %s ",
+										  prefix, privs->data, type);
+						if (nspname && *nspname)
+							appendPQExpBuffer(firstsql, "%s.", fmtId(nspname));
+						appendPQExpBuffer(firstsql,
+										  "%s TO %s;\n",
+										  name, fmtId(grantee->data));
+					}
 					if (privswgo->len > 0)
+					{
 						appendPQExpBuffer(firstsql,
-										  "%sGRANT %s ON %s %s TO %s WITH GRANT OPTION;\n",
-										  prefix, privswgo->data, type, name,
-										  fmtId(grantee->data));
+										  "%sGRANT %s ON %s ",
+										  prefix, privswgo->data, type);
+						if (nspname && *nspname)
+							appendPQExpBuffer(firstsql, "%s.", fmtId(nspname));
+						appendPQExpBuffer(firstsql,
+										  "%s TO %s WITH GRANT OPTION;\n",
+										  name, fmtId(grantee->data));
+					}
 				}
 			}
 			else
@@ -284,8 +309,11 @@ buildACLCommands(const char *name, const char *subname,
 
 				if (privs->len > 0)
 				{
-					appendPQExpBuffer(secondsql, "%sGRANT %s ON %s %s TO ",
-									  prefix, privs->data, type, name);
+					appendPQExpBuffer(secondsql, "%sGRANT %s ON %s ",
+									  prefix, privs->data, type);
+					if (nspname && *nspname)
+						appendPQExpBuffer(secondsql, "%s.", fmtId(nspname));
+					appendPQExpBuffer(secondsql, "%s TO ", name);
 					if (grantee->len == 0)
 						appendPQExpBufferStr(secondsql, "PUBLIC;\n");
 					else if (strncmp(grantee->data, "group ",
@@ -297,8 +325,11 @@ buildACLCommands(const char *name, const char *subname,
 				}
 				if (privswgo->len > 0)
 				{
-					appendPQExpBuffer(secondsql, "%sGRANT %s ON %s %s TO ",
-									  prefix, privswgo->data, type, name);
+					appendPQExpBuffer(secondsql, "%sGRANT %s ON %s ",
+									  prefix, privswgo->data, type);
+					if (nspname && *nspname)
+						appendPQExpBuffer(secondsql, "%s.", fmtId(nspname));
+					appendPQExpBuffer(secondsql, "%s TO ", name);
 					if (grantee->len == 0)
 						appendPQExpBufferStr(secondsql, "PUBLIC");
 					else if (strncmp(grantee->data, "group ",
@@ -328,8 +359,11 @@ buildACLCommands(const char *name, const char *subname,
 		appendPQExpBuffer(firstsql, "%sREVOKE ALL", prefix);
 		if (subname)
 			appendPQExpBuffer(firstsql, "(%s)", subname);
-		appendPQExpBuffer(firstsql, " ON %s %s FROM %s;\n",
-						  type, name, fmtId(owner));
+		appendPQExpBuffer(firstsql, " ON %s ", type);
+		if (nspname && *nspname)
+			appendPQExpBuffer(firstsql, "%s.", fmtId(nspname));
+		appendPQExpBuffer(firstsql, "%s FROM %s;\n",
+						  name, fmtId(owner));
 	}
 
 	destroyPQExpBuffer(grantee);
@@ -388,7 +422,8 @@ buildDefaultACLCommands(const char *type, const char *nspname,
 	if (strlen(initacls) != 0 || strlen(initracls) != 0)
 	{
 		appendPQExpBuffer(sql, "SELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\n");
-		if (!buildACLCommands("", NULL, type, initacls, initracls, owner,
+		if (!buildACLCommands("", NULL, NULL, type,
+							  initacls, initracls, owner,
 							  prefix->data, remoteVersion, sql))
 		{
 			destroyPQExpBuffer(prefix);
@@ -397,7 +432,8 @@ buildDefaultACLCommands(const char *type, const char *nspname,
 		appendPQExpBuffer(sql, "SELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\n");
 	}
 
-	if (!buildACLCommands("", NULL, type, acls, racls, owner,
+	if (!buildACLCommands("", NULL, NULL, type,
+						  acls, racls, owner,
 						  prefix->data, remoteVersion, sql))
 	{
 		destroyPQExpBuffer(prefix);
@@ -641,26 +677,32 @@ AddAcl(PQExpBuffer aclbuf, const char *keyword, const char *subname)
  * buildShSecLabelQuery
  *
  * Build a query to retrieve security labels for a shared object.
+ * The object is identified by its OID plus the name of the catalog
+ * it can be found in (e.g., "pg_database" for database names).
+ * The query is appended to "sql".  (We don't execute it here so as to
*/ void -buildShSecLabelQuery(PGconn *conn, const char *catalog_name, uint32 objectId, +buildShSecLabelQuery(PGconn *conn, const char *catalog_name, Oid objectId, PQExpBuffer sql) { appendPQExpBuffer(sql, "SELECT provider, label FROM pg_catalog.pg_shseclabel " - "WHERE classoid = '%s'::pg_catalog.regclass AND " - "objoid = %u", catalog_name, objectId); + "WHERE classoid = 'pg_catalog.%s'::pg_catalog.regclass " + "AND objoid = '%u'", catalog_name, objectId); } /* * emitShSecLabels * - * Format security label data retrieved by the query generated in - * buildShSecLabelQuery. + * Construct SECURITY LABEL commands using the data retrieved by the query + * generated by buildShSecLabelQuery, and append them to "buffer". + * Here, the target object is identified by its type name (e.g. "DATABASE") + * and its name (not pre-quoted). */ void emitShSecLabels(PGconn *conn, PGresult *res, PQExpBuffer buffer, - const char *target, const char *objname) + const char *objtype, const char *objname) { int i; @@ -672,7 +714,7 @@ emitShSecLabels(PGconn *conn, PGresult *res, PQExpBuffer buffer, /* must use fmtId result before calling it again */ appendPQExpBuffer(buffer, "SECURITY LABEL FOR %s ON %s", - fmtId(provider), target); + fmtId(provider), objtype); appendPQExpBuffer(buffer, " %s IS ", fmtId(objname)); diff --git a/src/bin/pg_dump/dumputils.h b/src/bin/pg_dump/dumputils.h index 23a0645be8..a9e26ae72a 100644 --- a/src/bin/pg_dump/dumputils.h +++ b/src/bin/pg_dump/dumputils.h @@ -36,7 +36,7 @@ #endif -extern bool buildACLCommands(const char *name, const char *subname, +extern bool buildACLCommands(const char *name, const char *subname, const char *nspname, const char *type, const char *acls, const char *racls, const char *owner, const char *prefix, int remoteVersion, PQExpBuffer sql); @@ -47,9 +47,9 @@ extern bool buildDefaultACLCommands(const char *type, const char *nspname, int remoteVersion, PQExpBuffer sql); extern void buildShSecLabelQuery(PGconn *conn, const char *catalog_name, - uint32 objectId, PQExpBuffer sql); + Oid objectId, PQExpBuffer sql); extern void emitShSecLabels(PGconn *conn, PGresult *res, - PQExpBuffer buffer, const char *target, const char *objname); + PQExpBuffer buffer, const char *objtype, const char *objname); extern void buildACLQueries(PQExpBuffer acl_subquery, PQExpBuffer racl_subquery, PQExpBuffer init_acl_subquery, PQExpBuffer init_racl_subquery, diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h index 520cd095d3..ceedd481fb 100644 --- a/src/bin/pg_dump/pg_backup.h +++ b/src/bin/pg_dump/pg_backup.h @@ -197,6 +197,9 @@ typedef struct Archive /* info needed for string escaping */ int encoding; /* libpq code for client_encoding */ bool std_strings; /* standard_conforming_strings */ + + /* other important stuff */ + char *searchpath; /* search_path to set during restore */ char *use_role; /* Issue SET ROLE to this */ /* error handling */ diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index a4deb53e3a..fc233a608f 100644 --- a/src/bin/pg_dump/pg_backup_archiver.c +++ b/src/bin/pg_dump/pg_backup_archiver.c @@ -70,6 +70,7 @@ static void _selectOutputSchema(ArchiveHandle *AH, const char *schemaName); static void _selectTablespace(ArchiveHandle *AH, const char *tablespace); static void processEncodingEntry(ArchiveHandle *AH, TocEntry *te); static void processStdStringsEntry(ArchiveHandle *AH, TocEntry *te); +static void processSearchPathEntry(ArchiveHandle *AH, TocEntry *te); static teReqs _tocEntryRequired(TocEntry *te, 
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index a4deb53e3a..fc233a608f 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -70,6 +70,7 @@ static void _selectOutputSchema(ArchiveHandle *AH, const char *schemaName);
 static void _selectTablespace(ArchiveHandle *AH, const char *tablespace);
 static void processEncodingEntry(ArchiveHandle *AH, TocEntry *te);
 static void processStdStringsEntry(ArchiveHandle *AH, TocEntry *te);
+static void processSearchPathEntry(ArchiveHandle *AH, TocEntry *te);
 static teReqs _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH);
 static RestorePass _tocEntryRestorePass(TocEntry *te);
 static bool _tocEntryIsACL(TocEntry *te);
@@ -900,7 +901,9 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel)
 				ahprintf(AH, "TRUNCATE TABLE %s%s;\n\n",
 						 (PQserverVersion(AH->connection) >= 80400 ?
 						  "ONLY " : ""),
-						 fmtId(te->tag));
+						 fmtQualifiedId(PQserverVersion(AH->connection),
+										te->namespace,
+										te->tag));
 			}
 
 			/*
@@ -987,10 +990,10 @@ _disableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te)
 	/*
 	 * Disable them.
 	 */
-	_selectOutputSchema(AH, te->namespace);
-
 	ahprintf(AH, "ALTER TABLE %s DISABLE TRIGGER ALL;\n\n",
-			 fmtId(te->tag));
+			 fmtQualifiedId(PQserverVersion(AH->connection),
+							te->namespace,
+							te->tag));
 }
 
 static void
@@ -1015,10 +1018,10 @@ _enableTriggersIfNecessary(ArchiveHandle *AH, TocEntry *te)
 	/*
 	 * Enable them.
 	 */
-	_selectOutputSchema(AH, te->namespace);
-
 	ahprintf(AH, "ALTER TABLE %s ENABLE TRIGGER ALL;\n\n",
-			 fmtId(te->tag));
+			 fmtQualifiedId(PQserverVersion(AH->connection),
+							te->namespace,
+							te->tag));
 }
 
 /*
@@ -2711,6 +2714,8 @@ ReadToc(ArchiveHandle *AH)
 			processEncodingEntry(AH, te);
 		else if (strcmp(te->desc, "STDSTRINGS") == 0)
 			processStdStringsEntry(AH, te);
+		else if (strcmp(te->desc, "SEARCHPATH") == 0)
+			processSearchPathEntry(AH, te);
 	}
 }
 
@@ -2758,6 +2763,16 @@ processStdStringsEntry(ArchiveHandle *AH, TocEntry *te)
 				  te->defn);
 }
 
+static void
+processSearchPathEntry(ArchiveHandle *AH, TocEntry *te)
+{
+	/*
+	 * te->defn should contain a command to set search_path.  We just copy it
+	 * verbatim for use later.
+	 */
+	AH->public.searchpath = pg_strdup(te->defn);
+}
+
 static void
 StrictNamesCheck(RestoreOptions *ropt)
 {
@@ -2814,9 +2829,10 @@ _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH)
 	teReqs		res = REQ_SCHEMA | REQ_DATA;
 	RestoreOptions *ropt = AH->public.ropt;
 
-	/* ENCODING and STDSTRINGS items are treated specially */
+	/* These items are treated specially */
 	if (strcmp(te->desc, "ENCODING") == 0 ||
-		strcmp(te->desc, "STDSTRINGS") == 0)
+		strcmp(te->desc, "STDSTRINGS") == 0 ||
+		strcmp(te->desc, "SEARCHPATH") == 0)
 		return REQ_SPECIAL;
 
 	/*
@@ -3117,6 +3133,10 @@ _doSetFixedOutputState(ArchiveHandle *AH)
 	if (ropt && ropt->use_role)
 		ahprintf(AH, "SET ROLE %s;\n", fmtId(ropt->use_role));
 
+	/* Select the dump-time search_path */
+	if (AH->public.searchpath)
+		ahprintf(AH, "%s", AH->public.searchpath);
+
 	/* Make sure function checking is disabled */
 	ahprintf(AH, "SET check_function_bodies = false;\n");
 
@@ -3321,6 +3341,15 @@ _selectOutputSchema(ArchiveHandle *AH, const char *schemaName)
 {
 	PQExpBuffer qry;
 
+	/*
+	 * If there was a SEARCHPATH TOC entry, we're supposed to just stay with
+	 * that search_path rather than switching to entry-specific paths.
+	 * Otherwise, it's an old archive that will not restore correctly unless
+	 * we set the search_path as it's expecting.
+	 */
+	if (AH->public.searchpath)
+		return;
+
 	if (!schemaName || *schemaName == '\0' ||
 		(AH->currSchema && strcmp(AH->currSchema, schemaName) == 0))
 		return;					/* no need to do anything */
@@ -3453,8 +3482,10 @@ _getObjectDescription(PQExpBuffer buf, TocEntry *te, ArchiveHandle *AH)
 		strcmp(type, "SUBSCRIPTION") == 0 ||
 		strcmp(type, "USER MAPPING") == 0)
 	{
-		/* We already know that search_path was set properly */
-		appendPQExpBuffer(buf, "%s %s", type, fmtId(te->tag));
+		appendPQExpBuffer(buf, "%s ", type);
+		if (te->namespace && *te->namespace)
+			appendPQExpBuffer(buf, "%s.", fmtId(te->namespace));
+		appendPQExpBufferStr(buf, fmtId(te->tag));
 		return;
 	}
 
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index becfee6e81..8dd1915998 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -92,10 +92,12 @@ typedef z_stream *z_streamp;
 										 * indicator */
 #define K_VERS_1_12 MAKE_ARCHIVE_VERSION(1, 12, 0)	/* add separate BLOB
 													 * entries */
+#define K_VERS_1_13 MAKE_ARCHIVE_VERSION(1, 13, 0)	/* change search_path
+													 * behavior */
 
 /* Current archive version number (the format we can output) */
 #define K_VERS_MAJOR 1
-#define K_VERS_MINOR 12
+#define K_VERS_MINOR 13
 #define K_VERS_REV 0
 #define K_VERS_SELF MAKE_ARCHIVE_VERSION(K_VERS_MAJOR, K_VERS_MINOR, K_VERS_REV);
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index f7a079f0b1..8b67ec1aa1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -132,6 +132,15 @@ char	   g_comment_end[10];
 
 static const CatalogId nilCatalogId = {0, 0};
 
+/*
+ * Macro for producing quoted, schema-qualified name of a dumpable object.
+ * Note implicit dependence on "fout"; we should get rid of that argument.
+ */
+#define fmtQualifiedDumpable(obj) \
+	fmtQualifiedId(fout->remoteVersion, \
+				   (obj)->dobj.namespace->dobj.name, \
+				   (obj)->dobj.name)
+
 static void help(const char *progname);
 static void setup_connection(Archive *AH,
 				 const char *dumpencoding, const char *dumpsnapshot,
@@ -149,13 +158,13 @@ static NamespaceInfo *findNamespace(Archive *fout, Oid nsoid);
 static void dumpTableData(Archive *fout, TableDataInfo *tdinfo);
 static void refreshMatViewData(Archive *fout, TableDataInfo *tdinfo);
 static void guessConstraintInheritance(TableInfo *tblinfo, int numTables);
-static void dumpComment(Archive *fout, const char *target,
+static void dumpComment(Archive *fout, const char *type, const char *name,
 			const char *namespace, const char *owner,
 			CatalogId catalogId, int subid, DumpId dumpId);
 static int findComments(Archive *fout, Oid classoid, Oid objoid,
 			 CommentItem **items);
 static int	collectComments(Archive *fout, CommentItem **items);
-static void dumpSecLabel(Archive *fout, const char *target,
+static void dumpSecLabel(Archive *fout, const char *type, const char *name,
 			 const char *namespace, const char *owner,
 			 CatalogId catalogId, int subid, DumpId dumpId);
 static int findSecLabels(Archive *fout, Oid classoid, Oid objoid,
@@ -210,7 +219,7 @@ static void dumpDefaultACL(Archive *fout, DefaultACLInfo *daclinfo);
 static void dumpACL(Archive *fout, CatalogId objCatId, DumpId objDumpId,
 		const char *type, const char *name, const char *subname,
-		const char *tag, const char *nspname, const char *owner,
+		const char *nspname, const char *owner,
 		const char *acls, const char *racls,
 		const char *initacls, const char *initracls);
 
@@ -239,10 +248,9 @@ static char *format_function_signature(Archive *fout,
 						  FuncInfo *finfo, bool honor_quotes);
 static char *convertRegProcReference(Archive *fout,
 						const char *proc);
-static char *convertOperatorReference(Archive *fout, const char *opr);
+static char *getFormattedOperatorName(Archive *fout, const char *oproid);
 static char *convertTSFunction(Archive *fout, Oid funcOid);
 static Oid	findLastBuiltinOid_V71(Archive *fout, const char *);
-static void selectSourceSchema(Archive *fout, const char *schemaName);
 static char *getFormattedTypeName(Archive *fout, Oid oid, OidOptions opts);
 static void getBlobs(Archive *fout);
 static void dumpBlob(Archive *fout, BlobInfo *binfo);
@@ -256,6 +264,7 @@ static void dumpDatabaseConfig(Archive *AH, PQExpBuffer outbuf,
 				   const char *dbname, Oid dboid);
 static void dumpEncoding(Archive *AH);
 static void dumpStdStrings(Archive *AH);
+static void dumpSearchPath(Archive *AH);
 static void binary_upgrade_set_type_oids_by_type_oid(Archive *fout,
 										 PQExpBuffer upgrade_buffer,
 										 Oid pg_type_oid,
@@ -267,7 +276,9 @@ static void binary_upgrade_set_pg_class_oids(Archive *fout,
 								 Oid pg_class_oid, bool is_index);
 static void binary_upgrade_extension_member(PQExpBuffer upgrade_buffer,
 								DumpableObject *dobj,
-								const char *objlabel);
+								const char *objtype,
+								const char *objname,
+								const char *objnamespace);
 static const char *getAttrName(int attrnum, TableInfo *tblInfo);
 static const char *fmtCopyColumnList(const TableInfo *ti, PQExpBuffer buffer);
 static bool nonemptyReloptions(const char *reloptions);
@@ -844,9 +855,10 @@ main(int argc, char **argv)
 	 * order.
 	 */
 
-	/* First the special ENCODING and STDSTRINGS entries. */
+	/* First the special ENCODING, STDSTRINGS, and SEARCHPATH entries. */
 	dumpEncoding(fout);
 	dumpStdStrings(fout);
+	dumpSearchPath(fout);
 
 	/* The database items are always next, unless we don't want them at all */
 	if (dopt.outputCreateDB)
@@ -1733,14 +1745,6 @@ dumpTableData_copy(Archive *fout, void *dcontext)
 		write_msg(NULL, "dumping contents of table \"%s.%s\"\n",
 				  tbinfo->dobj.namespace->dobj.name, classname);
 
-	/*
-	 * Make sure we are in proper schema.  We will qualify the table name
-	 * below anyway (in case its name conflicts with a pg_catalog table); but
-	 * this ensures reproducible results in case the table contains regproc,
-	 * regclass, etc columns.
-	 */
-	selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name);
-
 	/*
 	 * Specify the column list explicitly so that we have no possibility of
 	 * retrieving data in the wrong column order.  (The default column
@@ -1752,9 +1756,7 @@ dumpTableData_copy(Archive *fout, void *dcontext)
 	if (oids && hasoids)
 	{
 		appendPQExpBuffer(q, "COPY %s %s WITH OIDS TO stdout;",
-						  fmtQualifiedId(fout->remoteVersion,
-										 tbinfo->dobj.namespace->dobj.name,
-										 classname),
+						  fmtQualifiedDumpable(tbinfo),
 						  column_list);
 	}
 	else if (tdinfo->filtercond)
@@ -1770,17 +1772,13 @@ dumpTableData_copy(Archive *fout, void *dcontext)
 		else
 			appendPQExpBufferStr(q, "* ");
 		appendPQExpBuffer(q, "FROM %s %s) TO stdout;",
-						  fmtQualifiedId(fout->remoteVersion,
-										 tbinfo->dobj.namespace->dobj.name,
-										 classname),
+						  fmtQualifiedDumpable(tbinfo),
 						  tdinfo->filtercond);
 	}
 	else
 	{
 		appendPQExpBuffer(q, "COPY %s %s TO stdout;",
-						  fmtQualifiedId(fout->remoteVersion,
-										 tbinfo->dobj.namespace->dobj.name,
-										 classname),
+						  fmtQualifiedDumpable(tbinfo),
 						  column_list);
 	}
 	res = ExecuteSqlQuery(fout, q->data, PGRES_COPY_OUT);
@@ -1890,7 +1888,6 @@ dumpTableData_insert(Archive *fout, void *dcontext)
 {
 	TableDataInfo *tdinfo = (TableDataInfo *) dcontext;
 	TableInfo  *tbinfo = tdinfo->tdtable;
-	const char *classname = tbinfo->dobj.name;
 	DumpOptions *dopt = fout->dopt;
 	PQExpBuffer q = createPQExpBuffer();
 	PQExpBuffer insertStmt = NULL;
@@ -1899,19 +1896,9 @@ dumpTableData_insert(Archive *fout, void *dcontext)
 	int			nfields;
 	int			field;
 
-	/*
-	 * Make sure we are in proper schema.  We will qualify the table name
-	 * below anyway (in case its name conflicts with a pg_catalog table); but
-	 * this ensures reproducible results in case the table contains regproc,
-	 * regclass, etc columns.
-	 */
-	selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name);
-
 	appendPQExpBuffer(q, "DECLARE _pg_dump_cursor CURSOR FOR "
 					  "SELECT * FROM ONLY %s",
-					  fmtQualifiedId(fout->remoteVersion,
-									 tbinfo->dobj.namespace->dobj.name,
-									 classname));
+					  fmtQualifiedDumpable(tbinfo));
 	if (tdinfo->filtercond)
 		appendPQExpBuffer(q, " %s", tdinfo->filtercond);
 
@@ -1933,6 +1920,8 @@ dumpTableData_insert(Archive *fout, void *dcontext)
 			 */
 			if (insertStmt == NULL)
 			{
+				TableInfo  *targettab;
+
 				insertStmt = createPQExpBuffer();
 
 				/*
@@ -1941,25 +1930,12 @@ dumpTableData_insert(Archive *fout, void *dcontext)
 				 * through the root table.
 				 */
 				if (dopt->load_via_partition_root && tbinfo->ispartition)
-				{
-					TableInfo  *parentTbinfo;
-
-					parentTbinfo = getRootTableInfo(tbinfo);
-
-					/*
-					 * When we loading data through the root, we will qualify
-					 * the table name. This is needed because earlier
-					 * search_path will be set for the partition table.
-					 */
-					classname = (char *) fmtQualifiedId(fout->remoteVersion,
-														parentTbinfo->dobj.namespace->dobj.name,
-														parentTbinfo->dobj.name);
-				}
+					targettab = getRootTableInfo(tbinfo);
 				else
-					classname = fmtId(tbinfo->dobj.name);
+					targettab = tbinfo;
 
 				appendPQExpBuffer(insertStmt, "INSERT INTO %s ",
-								  classname);
+								  fmtQualifiedDumpable(targettab));
 
 				/* corner case for zero-column table */
 				if (nfields == 0)
@@ -2135,17 +2111,10 @@ dumpTableData(Archive *fout, TableDataInfo *tdinfo)
 		TableInfo  *parentTbinfo;
 
 		parentTbinfo = getRootTableInfo(tbinfo);
-
-		/*
-		 * When we load data through the root, we will qualify the table
-		 * name, because search_path is set for the partition.
-		 */
-		copyFrom = fmtQualifiedId(fout->remoteVersion,
-								  parentTbinfo->dobj.namespace->dobj.name,
-								  parentTbinfo->dobj.name);
+		copyFrom = fmtQualifiedDumpable(parentTbinfo);
 	}
 	else
-		copyFrom = fmtId(tbinfo->dobj.name);
+		copyFrom = fmtQualifiedDumpable(tbinfo);
 
 	/* must use 2 steps here 'cause fmtId is nonreentrant */
 	appendPQExpBuffer(copyBuf, "COPY %s ",
@@ -2200,7 +2169,7 @@ refreshMatViewData(Archive *fout, TableDataInfo *tdinfo)
 	q = createPQExpBuffer();
 
 	appendPQExpBuffer(q, "REFRESH MATERIALIZED VIEW %s;\n",
-					  fmtId(tbinfo->dobj.name));
+					  fmtQualifiedDumpable(tbinfo));
 
 	if (tdinfo->dobj.dump & DUMP_COMPONENT_DATA)
 		ArchiveEntry(fout,
@@ -2328,9 +2297,6 @@ buildMatViewRefreshDependencies(Archive *fout)
 	if (fout->remoteVersion < 90300)
 		return;
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	query = createPQExpBuffer();
 
 	appendPQExpBufferStr(query, "WITH RECURSIVE w AS "
@@ -2592,9 +2558,6 @@ dumpDatabase(Archive *fout)
 	if (g_verbose)
 		write_msg(NULL, "saving database definition\n");
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	/* Fetch the database-level properties for this database */
 	if (fout->remoteVersion >= 90600)
 	{
@@ -2770,7 +2733,7 @@ dumpDatabase(Archive *fout)
 				 NULL,			/* Dumper */
 				 NULL);			/* Dumper Arg */
 
-	/* Compute correct tag for comments etc */
+	/* Compute correct tag for archive entry */
 	appendPQExpBuffer(labelq, "DATABASE %s", qdatname);
 
 	/* Dump DB comment if any */
@@ -2805,7 +2768,7 @@ dumpDatabase(Archive *fout)
 	}
 	else
 	{
-		dumpComment(fout, labelq->data, NULL, dba,
+		dumpComment(fout, "DATABASE", qdatname, NULL, dba,
 					dbCatId, 0, dbDumpId);
 	}
 
@@ -2837,7 +2800,7 @@ dumpDatabase(Archive *fout)
 	 * (pg_init_privs) on databases.
 	 */
 	dumpACL(fout, dbCatId, dbDumpId, "DATABASE",
-			qdatname, NULL, labelq->data, NULL,
+			qdatname, NULL, NULL,
 			dba, datacl, rdatacl, "", "");
 
 	/*
@@ -3125,6 +3088,69 @@ dumpStdStrings(Archive *AH)
 	destroyPQExpBuffer(qry);
 }
 
+/*
+ * dumpSearchPath: record the active search_path in the archive
+ */
+static void
+dumpSearchPath(Archive *AH)
+{
+	PQExpBuffer qry = createPQExpBuffer();
+	PQExpBuffer path = createPQExpBuffer();
+	PGresult   *res;
+	char	  **schemanames = NULL;
+	int			nschemanames = 0;
+	int			i;
+
+	/*
+	 * We use the result of current_schemas(), not the search_path GUC,
+	 * because that might contain wildcards such as "$user", which won't
+	 * necessarily have the same value during restore.  Also, this way avoids
+	 * listing schemas that may appear in search_path but not actually exist,
+	 * which seems like a prudent exclusion.
+	 */
+	res = ExecuteSqlQueryForSingleRow(AH,
+									  "SELECT pg_catalog.current_schemas(false)");
+
+	if (!parsePGArray(PQgetvalue(res, 0, 0), &schemanames, &nschemanames))
+		exit_horribly(NULL, "could not parse result of current_schemas()\n");
+
+	/*
+	 * We use set_config(), not a simple "SET search_path" command, because
+	 * the latter has less-clean behavior if the search path is empty.  While
+	 * that's likely to get fixed at some point, it seems like a good idea to
+	 * be as backwards-compatible as possible in what we put into archives.
+	 */
+	for (i = 0; i < nschemanames; i++)
+	{
+		if (i > 0)
+			appendPQExpBufferStr(path, ", ");
+		appendPQExpBufferStr(path, fmtId(schemanames[i]));
+	}
+
+	appendPQExpBufferStr(qry, "SELECT pg_catalog.set_config('search_path', ");
+	appendStringLiteralAH(qry, path->data, AH);
+	appendPQExpBufferStr(qry, ", false);\n");
+
+	if (g_verbose)
+		write_msg(NULL, "saving search_path = %s\n", path->data);
+
+	ArchiveEntry(AH, nilCatalogId, createDumpId(),
+				 "SEARCHPATH", NULL, NULL, "",
+				 false, "SEARCHPATH", SECTION_PRE_DATA,
+				 qry->data, "", NULL,
+				 NULL, 0,
+				 NULL, NULL);
+
+	/* Also save it in AH->searchpath, in case we're doing plain text dump */
+	AH->searchpath = pg_strdup(qry->data);
+
+	if (schemanames)
+		free(schemanames);
+	PQclear(res);
+	destroyPQExpBuffer(qry);
+	destroyPQExpBuffer(path);
+}
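
[For orientation: why dumpSearchPath() reads current_schemas(false) rather
than the raw search_path GUC.  Assuming a default setup in which no schema
matches the current user name:]

    SHOW search_path;
    --   "$user", public
    SELECT pg_catalog.current_schemas(false);
    --   {public}
    -- so the archive records:
    SELECT pg_catalog.set_config('search_path', 'public', false);
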
" AS RESTRICTIVE" : "", cmd); if (polinfo->polroles != NULL) @@ -3649,7 +3660,7 @@ dumpPolicy(Archive *fout, PolicyInfo *polinfo) appendPQExpBuffer(query, ";\n"); appendPQExpBuffer(delqry, "DROP POLICY %s", fmtId(polinfo->polname)); - appendPQExpBuffer(delqry, " ON %s;\n", fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delqry, " ON %s;\n", fmtQualifiedDumpable(tbinfo)); tag = psprintf("%s %s", tbinfo->dobj.name, polinfo->dobj.name); @@ -3698,9 +3709,6 @@ getPublications(Archive *fout) resetPQExpBuffer(query); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - /* Get the publications. */ appendPQExpBuffer(query, "SELECT p.tableoid, p.oid, p.pubname, " @@ -3763,7 +3771,7 @@ dumpPublication(Archive *fout, PublicationInfo *pubinfo) { PQExpBuffer delq; PQExpBuffer query; - PQExpBuffer labelq; + char *qpubname; bool first = true; if (!(pubinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)) @@ -3771,15 +3779,14 @@ dumpPublication(Archive *fout, PublicationInfo *pubinfo) delq = createPQExpBuffer(); query = createPQExpBuffer(); - labelq = createPQExpBuffer(); + + qpubname = pg_strdup(fmtId(pubinfo->dobj.name)); appendPQExpBuffer(delq, "DROP PUBLICATION %s;\n", - fmtId(pubinfo->dobj.name)); + qpubname); appendPQExpBuffer(query, "CREATE PUBLICATION %s", - fmtId(pubinfo->dobj.name)); - - appendPQExpBuffer(labelq, "PUBLICATION %s", fmtId(pubinfo->dobj.name)); + qpubname); if (pubinfo->puballtables) appendPQExpBufferStr(query, " FOR ALL TABLES"); @@ -3822,17 +3829,18 @@ dumpPublication(Archive *fout, PublicationInfo *pubinfo) NULL, NULL); if (pubinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "PUBLICATION", qpubname, NULL, pubinfo->rolname, pubinfo->dobj.catId, 0, pubinfo->dobj.dumpId); if (pubinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "PUBLICATION", qpubname, NULL, pubinfo->rolname, pubinfo->dobj.catId, 0, pubinfo->dobj.dumpId); destroyPQExpBuffer(delq); destroyPQExpBuffer(query); + free(qpubname); } /* @@ -3857,9 +3865,6 @@ getPublicationTables(Archive *fout, TableInfo tblinfo[], int numTables) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - for (i = 0; i < numTables; i++) { TableInfo *tbinfo = &tblinfo[i]; @@ -3948,8 +3953,8 @@ dumpPublicationTable(Archive *fout, PublicationRelInfo *pubrinfo) appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY", fmtId(pubrinfo->pubname)); - appendPQExpBuffer(query, " %s;", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(query, " %s;\n", + fmtQualifiedDumpable(tbinfo)); /* * There is no point in creating drop query as drop query as the drop is @@ -4011,9 +4016,6 @@ getSubscriptions(Archive *fout) if (dopt->no_subscriptions || fout->remoteVersion < 100000) return; - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - if (!is_superuser(fout)) { int n; @@ -4099,8 +4101,8 @@ dumpSubscription(Archive *fout, SubscriptionInfo *subinfo) { PQExpBuffer delq; PQExpBuffer query; - PQExpBuffer labelq; PQExpBuffer publications; + char *qsubname; char **pubnames = NULL; int npubnames = 0; int i; @@ -4110,13 +4112,14 @@ dumpSubscription(Archive *fout, SubscriptionInfo *subinfo) delq = createPQExpBuffer(); query = createPQExpBuffer(); - labelq = createPQExpBuffer(); + + qsubname = pg_strdup(fmtId(subinfo->dobj.name)); appendPQExpBuffer(delq, "DROP SUBSCRIPTION %s;\n", - fmtId(subinfo->dobj.name)); + qsubname); appendPQExpBuffer(query, "CREATE 
SUBSCRIPTION %s CONNECTION ", - fmtId(subinfo->dobj.name)); + qsubname); appendStringLiteralAH(query, subinfo->subconninfo, fout); /* Build list of quoted publications and append them to query. */ @@ -4150,8 +4153,6 @@ dumpSubscription(Archive *fout, SubscriptionInfo *subinfo) appendPQExpBufferStr(query, ");\n"); - appendPQExpBuffer(labelq, "SUBSCRIPTION %s", fmtId(subinfo->dobj.name)); - ArchiveEntry(fout, subinfo->dobj.catId, subinfo->dobj.dumpId, subinfo->dobj.name, NULL, @@ -4163,12 +4164,12 @@ dumpSubscription(Archive *fout, SubscriptionInfo *subinfo) NULL, NULL); if (subinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "SUBSCRIPTION", qsubname, NULL, subinfo->rolname, subinfo->dobj.catId, 0, subinfo->dobj.dumpId); if (subinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "SUBSCRIPTION", qsubname, NULL, subinfo->rolname, subinfo->dobj.catId, 0, subinfo->dobj.dumpId); @@ -4178,6 +4179,7 @@ dumpSubscription(Archive *fout, SubscriptionInfo *subinfo) destroyPQExpBuffer(delq); destroyPQExpBuffer(query); + free(qsubname); } static void @@ -4361,11 +4363,16 @@ binary_upgrade_set_pg_class_oids(Archive *fout, /* * If the DumpableObject is a member of an extension, add a suitable * ALTER EXTENSION ADD command to the creation commands in upgrade_buffer. + * + * For somewhat historical reasons, objname should already be quoted, + * but not objnamespace (if any). */ static void binary_upgrade_extension_member(PQExpBuffer upgrade_buffer, DumpableObject *dobj, - const char *objlabel) + const char *objtype, + const char *objname, + const char *objnamespace) { DumpableObject *extobj = NULL; int i; @@ -4387,13 +4394,17 @@ binary_upgrade_extension_member(PQExpBuffer upgrade_buffer, extobj = NULL; } if (extobj == NULL) - exit_horribly(NULL, "could not find parent extension for %s\n", objlabel); + exit_horribly(NULL, "could not find parent extension for %s %s\n", + objtype, objname); appendPQExpBufferStr(upgrade_buffer, "\n-- For binary upgrade, handle extension membership the hard way\n"); - appendPQExpBuffer(upgrade_buffer, "ALTER EXTENSION %s ADD %s;\n", + appendPQExpBuffer(upgrade_buffer, "ALTER EXTENSION %s ADD %s ", fmtId(extobj->name), - objlabel); + objtype); + if (objnamespace && *objnamespace) + appendPQExpBuffer(upgrade_buffer, "%s.", fmtId(objnamespace)); + appendPQExpBuffer(upgrade_buffer, "%s;\n", objname); } /* @@ -4423,9 +4434,6 @@ getNamespaces(Archive *fout, int *numNamespaces) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - /* * we fetch all namespaces including system ones, so that every object we * read in can be linked to a containing namespace. @@ -4581,9 +4589,6 @@ getExtensions(Archive *fout, int *numExtensions) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBufferStr(query, "SELECT x.tableoid, x.oid, " "x.extname, n.nspname, x.extrelocatable, x.extversion, x.extconfig, x.extcondition " "FROM pg_extension x " @@ -4681,9 +4686,6 @@ getTypes(Archive *fout, int *numTypes) * be revisited if the backend ever allows renaming of array types. */ - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - if (fout->remoteVersion >= 90600) { PQExpBuffer acl_subquery = createPQExpBuffer(); @@ -4913,9 +4915,6 @@ getOperators(Archive *fout, int *numOprs) * system-defined operators at dump-out time. 
 	 */
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	appendPQExpBuffer(query, "SELECT tableoid, oid, oprname, "
					  "oprnamespace, "
					  "(%s oprowner) AS rolname, "
@@ -5006,9 +5005,6 @@ getCollations(Archive *fout, int *numCollations)
 	 * system-defined collations at dump-out time.
 	 */
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	appendPQExpBuffer(query, "SELECT tableoid, oid, collname, "
					  "collnamespace, "
					  "(%s collowner) AS rolname "
@@ -5082,9 +5078,6 @@ getConversions(Archive *fout, int *numConversions)
 	 * system-defined conversions at dump-out time.
 	 */
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	appendPQExpBuffer(query, "SELECT tableoid, oid, conname, "
					  "connamespace, "
					  "(%s conowner) AS rolname "
@@ -5160,9 +5153,6 @@ getAccessMethods(Archive *fout, int *numAccessMethods)
 
 	query = createPQExpBuffer();
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	/* Select all access methods from pg_am table */
 	appendPQExpBuffer(query, "SELECT tableoid, oid, amname, amtype, "
					  "amhandler::pg_catalog.regproc AS amhandler "
@@ -5233,9 +5223,6 @@ getOpclasses(Archive *fout, int *numOpclasses)
 	 * system-defined opclasses at dump-out time.
 	 */
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	appendPQExpBuffer(query, "SELECT tableoid, oid, opcname, "
					  "opcnamespace, "
					  "(%s opcowner) AS rolname "
@@ -5320,9 +5307,6 @@ getOpfamilies(Archive *fout, int *numOpfamilies)
 	 * system-defined opfamilies at dump-out time.
 	 */
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	appendPQExpBuffer(query, "SELECT tableoid, oid, opfname, "
					  "opfnamespace, "
					  "(%s opfowner) AS rolname "
@@ -5400,9 +5384,6 @@ getAggregates(Archive *fout, int *numAggs)
 	int			i_initaggacl;
 	int			i_initraggacl;
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	/*
 	 * Find all interesting aggregates.  See comment in getFuncs() for the
 	 * rationale behind the filtering logic.
@@ -5594,9 +5575,6 @@ getFuncs(Archive *fout, int *numFuncs)
 	int			i_initproacl;
 	int			i_initrproacl;
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	/*
 	 * Find all interesting functions.  This is a bit complicated:
 	 *
@@ -5853,9 +5831,6 @@ getTables(Archive *fout, int *numTables)
 	int			i_ispartition;
 	int			i_partbound;
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	/*
 	 * Find all the tables and table-like objects.
 	 *
@@ -6562,9 +6537,7 @@ getTables(Archive *fout, int *numTables)
 			resetPQExpBuffer(query);
 			appendPQExpBuffer(query,
							  "LOCK TABLE %s IN ACCESS SHARE MODE",
-							  fmtQualifiedId(fout->remoteVersion,
-											 tblinfo[i].dobj.namespace->dobj.name,
-											 tblinfo[i].dobj.name));
+							  fmtQualifiedDumpable(&tblinfo[i]));
 			ExecuteSqlStatement(fout, query->data);
 		}
 
@@ -6656,9 +6629,6 @@ getInherits(Archive *fout, int *numInherits)
 	int			i_inhrelid;
 	int			i_inhparent;
 
-	/* Make sure we are in proper schema */
-	selectSourceSchema(fout, "pg_catalog");
-
 	/*
 	 * Find all the inheritance information, excluding implicit inheritance
We handle that case using getPartitions(), because @@ -6750,9 +6720,6 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables) tbinfo->dobj.namespace->dobj.name, tbinfo->dobj.name); - /* Make sure we are in proper schema so indexdef is right */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); - /* * The point of the messy-looking outer join is to find a constraint * that is related by an internal dependency link to the index. If we @@ -7046,9 +7013,6 @@ getExtendedStatistics(Archive *fout) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBuffer(query, "SELECT tableoid, oid, stxname, " "stxnamespace, (%s stxowner) AS rolname " "FROM pg_catalog.pg_statistic_ext", @@ -7128,12 +7092,6 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables) tbinfo->dobj.namespace->dobj.name, tbinfo->dobj.name); - /* - * select table schema to ensure constraint expr is qualified if - * needed - */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); - resetPQExpBuffer(query); appendPQExpBuffer(query, "SELECT tableoid, oid, conname, confrelid, " @@ -7198,12 +7156,6 @@ getDomainConstraints(Archive *fout, TypeInfo *tyinfo) i_consrc; int ntups; - /* - * select appropriate schema to ensure names in constraint are properly - * qualified - */ - selectSourceSchema(fout, tyinfo->dobj.namespace->dobj.name); - query = createPQExpBuffer(); if (fout->remoteVersion >= 90100) @@ -7298,9 +7250,6 @@ getRules(Archive *fout, int *numRules) int i_is_instead; int i_ev_enabled; - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - if (fout->remoteVersion >= 80300) { appendPQExpBufferStr(query, "SELECT " @@ -7436,11 +7385,6 @@ getTriggers(Archive *fout, TableInfo tblinfo[], int numTables) tbinfo->dobj.namespace->dobj.name, tbinfo->dobj.name); - /* - * select table schema to ensure regproc name is qualified if needed - */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); - resetPQExpBuffer(query); if (fout->remoteVersion >= 90000) { @@ -7624,9 +7568,6 @@ getEventTriggers(Archive *fout, int *numEventTriggers) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBuffer(query, "SELECT e.tableoid, e.oid, evtname, evtenabled, " "evtevent, (%s evtowner) AS evtowner, " @@ -7714,9 +7655,6 @@ getProcLangs(Archive *fout, int *numProcLangs) int i_initrlanacl; int i_lanowner; - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - if (fout->remoteVersion >= 90600) { PQExpBuffer acl_subquery = createPQExpBuffer(); @@ -7889,9 +7827,6 @@ getCasts(Archive *fout, int *numCasts) int i_castcontext; int i_castmethod; - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - if (fout->remoteVersion >= 80400) { appendPQExpBufferStr(query, "SELECT tableoid, oid, " @@ -8013,9 +7948,6 @@ getTransforms(Archive *fout, int *numTransforms) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBuffer(query, "SELECT tableoid, oid, " "trftype, trflang, trffromsql::oid, trftosql::oid " "FROM pg_transform " @@ -8129,12 +8061,6 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables) if (!tbinfo->interesting) continue; - /* - * Make sure we are in proper schema for this table; this allows - * correct retrieval of formatted type names and default exprs - */ - selectSourceSchema(fout, 
tbinfo->dobj.namespace->dobj.name); - /* find all the user attributes and their types */ /* @@ -8609,9 +8535,6 @@ getTSParsers(Archive *fout, int *numTSParsers) * system-defined objects at dump-out time. */ - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBufferStr(query, "SELECT tableoid, oid, prsname, prsnamespace, " "prsstart::oid, prstoken::oid, " "prsend::oid, prsheadline::oid, prslextype::oid " @@ -8696,9 +8619,6 @@ getTSDictionaries(Archive *fout, int *numTSDicts) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBuffer(query, "SELECT tableoid, oid, dictname, " "dictnamespace, (%s dictowner) AS rolname, " "dicttemplate, dictinitoption " @@ -8782,9 +8702,6 @@ getTSTemplates(Archive *fout, int *numTSTemplates) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBufferStr(query, "SELECT tableoid, oid, tmplname, " "tmplnamespace, tmplinit::oid, tmpllexize::oid " "FROM pg_ts_template"); @@ -8861,9 +8778,6 @@ getTSConfigurations(Archive *fout, int *numTSConfigs) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBuffer(query, "SELECT tableoid, oid, cfgname, " "cfgnamespace, (%s cfgowner) AS rolname, cfgparser " "FROM pg_ts_config", @@ -8947,9 +8861,6 @@ getForeignDataWrappers(Archive *fout, int *numForeignDataWrappers) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - if (fout->remoteVersion >= 90600) { PQExpBuffer acl_subquery = createPQExpBuffer(); @@ -9117,9 +9028,6 @@ getForeignServers(Archive *fout, int *numForeignServers) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - if (fout->remoteVersion >= 90600) { PQExpBuffer acl_subquery = createPQExpBuffer(); @@ -9266,9 +9174,6 @@ getDefaultACLs(Archive *fout, int *numDefaultACLs) query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - if (fout->remoteVersion >= 90600) { PQExpBuffer acl_subquery = createPQExpBuffer(); @@ -9369,13 +9274,18 @@ getDefaultACLs(Archive *fout, int *numDefaultACLs) * dumpComment -- * * This routine is used to dump any comments associated with the - * object handed to this routine. The routine takes a constant character - * string for the target part of the comment-creation command, plus + * object handed to this routine. The routine takes the object type + * and object name (ready to print, except for schema decoration), plus * the namespace and owner of the object (for labeling the ArchiveEntry), * plus catalog ID and subid which are the lookup key for pg_description, * plus the dump ID for the object (for setting a dependency). * If a matching pg_description entry is found, it is dumped. * + * Note: in some cases, such as comments for triggers and rules, the "type" + * string really looks like, e.g., "TRIGGER name ON". This is a bit of a hack + * but it doesn't seem worth complicating the API for all callers to make + * it cleaner. + * * Note: although this routine takes a dumpId for dependency purposes, * that purpose is just to mark the dependency in the emitted dump file * for possible future use by pg_restore. 
We do NOT use it for determining @@ -9384,7 +9294,7 @@ getDefaultACLs(Archive *fout, int *numDefaultACLs) * calling ArchiveEntry() for the specified object. */ static void -dumpComment(Archive *fout, const char *target, +dumpComment(Archive *fout, const char *type, const char *name, const char *namespace, const char *owner, CatalogId catalogId, int subid, DumpId dumpId) { @@ -9397,7 +9307,7 @@ dumpComment(Archive *fout, const char *target, return; /* Comments are schema not data ... except blob comments are data */ - if (strncmp(target, "LARGE OBJECT ", 13) != 0) + if (strcmp(type, "LARGE OBJECT") != 0) { if (dopt->dataOnly) return; @@ -9426,24 +9336,31 @@ dumpComment(Archive *fout, const char *target, if (ncomments > 0) { PQExpBuffer query = createPQExpBuffer(); + PQExpBuffer tag = createPQExpBuffer(); - appendPQExpBuffer(query, "COMMENT ON %s IS ", target); + appendPQExpBuffer(query, "COMMENT ON %s ", type); + if (namespace && *namespace) + appendPQExpBuffer(query, "%s.", fmtId(namespace)); + appendPQExpBuffer(query, "%s IS ", name); appendStringLiteralAH(query, comments->descr, fout); appendPQExpBufferStr(query, ";\n"); + appendPQExpBuffer(tag, "%s %s", type, name); + /* * We mark comments as SECTION_NONE because they really belong in the * same section as their parent, whether that is pre-data or * post-data. */ ArchiveEntry(fout, nilCatalogId, createDumpId(), - target, namespace, NULL, owner, + tag->data, namespace, NULL, owner, false, "COMMENT", SECTION_NONE, query->data, "", NULL, &(dumpId), 1, NULL, NULL); destroyPQExpBuffer(query); + destroyPQExpBuffer(tag); } } @@ -9461,7 +9378,7 @@ dumpTableComment(Archive *fout, TableInfo *tbinfo, CommentItem *comments; int ncomments; PQExpBuffer query; - PQExpBuffer target; + PQExpBuffer tag; /* do nothing, if --no-comments is supplied */ if (dopt->no_comments) @@ -9482,7 +9399,7 @@ dumpTableComment(Archive *fout, TableInfo *tbinfo, return; query = createPQExpBuffer(); - target = createPQExpBuffer(); + tag = createPQExpBuffer(); while (ncomments > 0) { @@ -9491,17 +9408,18 @@ dumpTableComment(Archive *fout, TableInfo *tbinfo, if (objsubid == 0) { - resetPQExpBuffer(target); - appendPQExpBuffer(target, "%s %s", reltypename, + resetPQExpBuffer(tag); + appendPQExpBuffer(tag, "%s %s", reltypename, fmtId(tbinfo->dobj.name)); resetPQExpBuffer(query); - appendPQExpBuffer(query, "COMMENT ON %s IS ", target->data); + appendPQExpBuffer(query, "COMMENT ON %s %s IS ", reltypename, + fmtQualifiedDumpable(tbinfo)); appendStringLiteralAH(query, descr, fout); appendPQExpBufferStr(query, ";\n"); ArchiveEntry(fout, nilCatalogId, createDumpId(), - target->data, + tag->data, tbinfo->dobj.namespace->dobj.name, NULL, tbinfo->rolname, false, "COMMENT", SECTION_NONE, @@ -9511,18 +9429,21 @@ dumpTableComment(Archive *fout, TableInfo *tbinfo, } else if (objsubid > 0 && objsubid <= tbinfo->numatts) { - resetPQExpBuffer(target); - appendPQExpBuffer(target, "COLUMN %s.", + resetPQExpBuffer(tag); + appendPQExpBuffer(tag, "COLUMN %s.", fmtId(tbinfo->dobj.name)); - appendPQExpBufferStr(target, fmtId(tbinfo->attnames[objsubid - 1])); + appendPQExpBufferStr(tag, fmtId(tbinfo->attnames[objsubid - 1])); resetPQExpBuffer(query); - appendPQExpBuffer(query, "COMMENT ON %s IS ", target->data); + appendPQExpBuffer(query, "COMMENT ON COLUMN %s.", + fmtQualifiedDumpable(tbinfo)); + appendPQExpBuffer(query, "%s IS ", + fmtId(tbinfo->attnames[objsubid - 1])); appendStringLiteralAH(query, descr, fout); appendPQExpBufferStr(query, ";\n"); ArchiveEntry(fout, nilCatalogId, createDumpId(), - 
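To make the reworked dumpComment() concrete: the old single "target" string is split into an object type and a name, so the name part can be schema-qualified when the command is assembled. The sketch below also shows the trigger-style hack mentioned in the header comment, where the type string carries "TRIGGER name ON". All names are hypothetical and identifier quoting is omitted.

#include <stdio.h>

static void
comment_on(const char *type, const char *nspname,
           const char *name, const char *descr)
{
    printf("COMMENT ON %s ", type);
    if (nspname && *nspname)
        printf("%s.", nspname);     /* fmtId()-quoted in the real code */
    printf("%s IS '%s';\n", name, descr);
}

int
main(void)
{
    comment_on("TYPE", "public", "mood", "three-state flag");
    /* COMMENT ON TYPE public.mood IS 'three-state flag'; */
    comment_on("TRIGGER trg ON", "public", "t1", "audit hook");
    /* COMMENT ON TRIGGER trg ON public.t1 IS 'audit hook'; */
    return 0;
}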
target->data, + tag->data, tbinfo->dobj.namespace->dobj.name, NULL, tbinfo->rolname, false, "COMMENT", SECTION_NONE, @@ -9536,7 +9457,7 @@ dumpTableComment(Archive *fout, TableInfo *tbinfo, } destroyPQExpBuffer(query); - destroyPQExpBuffer(target); + destroyPQExpBuffer(tag); } /* @@ -9643,11 +9564,6 @@ collectComments(Archive *fout, CommentItem **items) int i; CommentItem *comments; - /* - * Note we do NOT change source schema here; preserve the caller's - * setting, instead. - */ - query = createPQExpBuffer(); appendPQExpBufferStr(query, "SELECT description, classoid, objoid, objsubid " @@ -9842,7 +9758,6 @@ dumpNamespace(Archive *fout, NamespaceInfo *nspinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; char *qnspname; /* Skip if not to be dumped */ @@ -9851,7 +9766,6 @@ dumpNamespace(Archive *fout, NamespaceInfo *nspinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); qnspname = pg_strdup(fmtId(nspinfo->dobj.name)); @@ -9859,10 +9773,9 @@ dumpNamespace(Archive *fout, NamespaceInfo *nspinfo) appendPQExpBuffer(q, "CREATE SCHEMA %s;\n", qnspname); - appendPQExpBuffer(labelq, "SCHEMA %s", qnspname); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &nspinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &nspinfo->dobj, + "SCHEMA", qnspname, NULL); if (nspinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, nspinfo->dobj.catId, nspinfo->dobj.dumpId, @@ -9876,18 +9789,18 @@ dumpNamespace(Archive *fout, NamespaceInfo *nspinfo) /* Dump Schema Comments and Security Labels */ if (nspinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "SCHEMA", qnspname, NULL, nspinfo->rolname, nspinfo->dobj.catId, 0, nspinfo->dobj.dumpId); if (nspinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "SCHEMA", qnspname, NULL, nspinfo->rolname, nspinfo->dobj.catId, 0, nspinfo->dobj.dumpId); if (nspinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, nspinfo->dobj.catId, nspinfo->dobj.dumpId, "SCHEMA", - qnspname, NULL, labelq->data, NULL, + qnspname, NULL, NULL, nspinfo->rolname, nspinfo->nspacl, nspinfo->rnspacl, nspinfo->initnspacl, nspinfo->initrnspacl); @@ -9895,7 +9808,6 @@ dumpNamespace(Archive *fout, NamespaceInfo *nspinfo) destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); } /* @@ -9908,7 +9820,6 @@ dumpExtension(Archive *fout, ExtensionInfo *extinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; char *qextname; /* Skip if not to be dumped */ @@ -9917,7 +9828,6 @@ dumpExtension(Archive *fout, ExtensionInfo *extinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); qextname = pg_strdup(fmtId(extinfo->dobj.name)); @@ -10003,8 +9913,6 @@ dumpExtension(Archive *fout, ExtensionInfo *extinfo) appendPQExpBufferStr(q, ");\n"); } - appendPQExpBuffer(labelq, "EXTENSION %s", qextname); - if (extinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, extinfo->dobj.catId, extinfo->dobj.dumpId, extinfo->dobj.name, @@ -10017,12 +9925,12 @@ dumpExtension(Archive *fout, ExtensionInfo *extinfo) /* Dump Extension Comments and Security Labels */ if (extinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "EXTENSION", qextname, NULL, "", extinfo->dobj.catId, 0, extinfo->dobj.dumpId); if (extinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + 
dumpSecLabel(fout, "EXTENSION", qextname, NULL, "", extinfo->dobj.catId, 0, extinfo->dobj.dumpId); @@ -10030,7 +9938,6 @@ dumpExtension(Archive *fout, ExtensionInfo *extinfo) destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); } /* @@ -10074,18 +9981,15 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q = createPQExpBuffer(); PQExpBuffer delq = createPQExpBuffer(); - PQExpBuffer labelq = createPQExpBuffer(); PQExpBuffer query = createPQExpBuffer(); PGresult *res; int num, i; Oid enum_oid; char *qtypname; + char *qualtypname; char *label; - /* Set proper schema search path */ - selectSourceSchema(fout, "pg_catalog"); - if (fout->remoteVersion >= 90100) appendPQExpBuffer(query, "SELECT oid, enumlabel " "FROM pg_catalog.pg_enum " @@ -10104,16 +10008,13 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo) num = PQntuples(res); qtypname = pg_strdup(fmtId(tyinfo->dobj.name)); + qualtypname = pg_strdup(fmtQualifiedDumpable(tyinfo)); /* - * DROP must be fully qualified in case same name appears in pg_catalog. * CASCADE shouldn't be required here as for normal types since the I/O * functions are generic and do not get dropped. */ - appendPQExpBuffer(delq, "DROP TYPE %s.", - fmtId(tyinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - qtypname); + appendPQExpBuffer(delq, "DROP TYPE %s;\n", qualtypname); if (dopt->binary_upgrade) binary_upgrade_set_type_oids_by_type_oid(fout, q, @@ -10121,7 +10022,7 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo) false); appendPQExpBuffer(q, "CREATE TYPE %s AS ENUM (", - qtypname); + qualtypname); if (!dopt->binary_upgrade) { @@ -10151,19 +10052,16 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo) appendPQExpBuffer(q, "SELECT pg_catalog.binary_upgrade_set_next_pg_enum_oid('%u'::pg_catalog.oid);\n", enum_oid); - appendPQExpBuffer(q, "ALTER TYPE %s.", - fmtId(tyinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(q, "%s ADD VALUE ", - qtypname); + appendPQExpBuffer(q, "ALTER TYPE %s ADD VALUE ", qualtypname); appendStringLiteralAH(q, label, fout); appendPQExpBufferStr(q, ";\n\n"); } } - appendPQExpBuffer(labelq, "TYPE %s", qtypname); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &tyinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &tyinfo->dobj, + "TYPE", qtypname, + tyinfo->dobj.namespace->dobj.name); if (tyinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, @@ -10178,18 +10076,18 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo) /* Dump Type Comments and Security Labels */ if (tyinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, labelq->data, + qtypname, NULL, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10197,8 +10095,9 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo) PQclear(res); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); + free(qtypname); + 
free(qualtypname); } /* @@ -10211,19 +10110,13 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q = createPQExpBuffer(); PQExpBuffer delq = createPQExpBuffer(); - PQExpBuffer labelq = createPQExpBuffer(); PQExpBuffer query = createPQExpBuffer(); PGresult *res; Oid collationOid; char *qtypname; + char *qualtypname; char *procname; - /* - * select appropriate schema to ensure names in CREATE are properly - * qualified - */ - selectSourceSchema(fout, tyinfo->dobj.namespace->dobj.name); - appendPQExpBuffer(query, "SELECT pg_catalog.format_type(rngsubtype, NULL) AS rngsubtype, " "opc.opcname AS opcname, " @@ -10242,16 +10135,13 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) res = ExecuteSqlQueryForSingleRow(fout, query->data); qtypname = pg_strdup(fmtId(tyinfo->dobj.name)); + qualtypname = pg_strdup(fmtQualifiedDumpable(tyinfo)); /* - * DROP must be fully qualified in case same name appears in pg_catalog. * CASCADE shouldn't be required here as for normal types since the I/O * functions are generic and do not get dropped. */ - appendPQExpBuffer(delq, "DROP TYPE %s.", - fmtId(tyinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - qtypname); + appendPQExpBuffer(delq, "DROP TYPE %s;\n", qualtypname); if (dopt->binary_upgrade) binary_upgrade_set_type_oids_by_type_oid(fout, q, @@ -10259,7 +10149,7 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) false); appendPQExpBuffer(q, "CREATE TYPE %s AS RANGE (", - qtypname); + qualtypname); appendPQExpBuffer(q, "\n subtype = %s", PQgetvalue(res, 0, PQfnumber(res, "rngsubtype"))); @@ -10270,7 +10160,6 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) char *opcname = PQgetvalue(res, 0, PQfnumber(res, "opcname")); char *nspname = PQgetvalue(res, 0, PQfnumber(res, "opcnsp")); - /* always schema-qualify, don't try to be smart */ appendPQExpBuffer(q, ",\n subtype_opclass = %s.", fmtId(nspname)); appendPQExpBufferStr(q, fmtId(opcname)); @@ -10282,12 +10171,8 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) CollInfo *coll = findCollationByOid(collationOid); if (coll) - { - /* always schema-qualify, don't try to be smart */ - appendPQExpBuffer(q, ",\n collation = %s.", - fmtId(coll->dobj.namespace->dobj.name)); - appendPQExpBufferStr(q, fmtId(coll->dobj.name)); - } + appendPQExpBuffer(q, ",\n collation = %s", + fmtQualifiedDumpable(coll)); } procname = PQgetvalue(res, 0, PQfnumber(res, "rngcanonical")); @@ -10300,10 +10185,10 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) appendPQExpBufferStr(q, "\n);\n"); - appendPQExpBuffer(labelq, "TYPE %s", qtypname); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &tyinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &tyinfo->dobj, + "TYPE", qtypname, + tyinfo->dobj.namespace->dobj.name); if (tyinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, @@ -10318,18 +10203,18 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) /* Dump Type Comments and Security Labels */ if (tyinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, 
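Collation references in dumpRangeType() are now printed with fmtQualifiedDumpable() unconditionally, instead of trying to skip the schema when it looked unnecessary. A sketch of the resulting clause, with hypothetical names; note the quoting matters for case-sensitive collation names like "C":

#include <stdio.h>

int
main(void)
{
    /* Hypothetical dump output for a range type with a non-default
     * collation; subtype details are elided. */
    printf("CREATE TYPE public.textrange AS RANGE (\n"
           "    subtype = text,\n"
           "    collation = pg_catalog.\"C\"\n"
           ");\n");
    return 0;
}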
tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, labelq->data, + qtypname, NULL, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10337,8 +10222,9 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo) PQclear(res); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); + free(qtypname); + free(qualtypname); } /* @@ -10356,18 +10242,13 @@ dumpUndefinedType(Archive *fout, TypeInfo *tyinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q = createPQExpBuffer(); PQExpBuffer delq = createPQExpBuffer(); - PQExpBuffer labelq = createPQExpBuffer(); char *qtypname; + char *qualtypname; qtypname = pg_strdup(fmtId(tyinfo->dobj.name)); + qualtypname = pg_strdup(fmtQualifiedDumpable(tyinfo)); - /* - * DROP must be fully qualified in case same name appears in pg_catalog. - */ - appendPQExpBuffer(delq, "DROP TYPE %s.", - fmtId(tyinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - qtypname); + appendPQExpBuffer(delq, "DROP TYPE %s;\n", qualtypname); if (dopt->binary_upgrade) binary_upgrade_set_type_oids_by_type_oid(fout, q, @@ -10375,12 +10256,12 @@ dumpUndefinedType(Archive *fout, TypeInfo *tyinfo) false); appendPQExpBuffer(q, "CREATE TYPE %s;\n", - qtypname); - - appendPQExpBuffer(labelq, "TYPE %s", qtypname); + qualtypname); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &tyinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &tyinfo->dobj, + "TYPE", qtypname, + tyinfo->dobj.namespace->dobj.name); if (tyinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, @@ -10395,25 +10276,26 @@ dumpUndefinedType(Archive *fout, TypeInfo *tyinfo) /* Dump Type Comments and Security Labels */ if (tyinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, labelq->data, + qtypname, NULL, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qtypname); + free(qualtypname); } /* @@ -10426,10 +10308,10 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q = createPQExpBuffer(); PQExpBuffer delq = createPQExpBuffer(); - PQExpBuffer labelq = createPQExpBuffer(); PQExpBuffer query = createPQExpBuffer(); PGresult *res; char *qtypname; + char *qualtypname; char *typlen; char *typinput; char *typoutput; @@ -10453,9 +10335,6 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) char *typdefault; bool typdefault_is_literal = false; - /* Set proper schema search path so regproc references list correctly */ - selectSourceSchema(fout, tyinfo->dobj.namespace->dobj.name); - /* Fetch type-specific details */ if (fout->remoteVersion >= 90100) { @@ -10564,17 +10443,14 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) typdefault = NULL; qtypname = pg_strdup(fmtId(tyinfo->dobj.name)); + qualtypname = pg_strdup(fmtQualifiedDumpable(tyinfo)); /* 
- * DROP must be fully qualified in case same name appears in pg_catalog. * The reason we include CASCADE is that the circular dependency between * the type and its I/O functions makes it impossible to drop the type any * other way. */ - appendPQExpBuffer(delq, "DROP TYPE %s.", - fmtId(tyinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s CASCADE;\n", - qtypname); + appendPQExpBuffer(delq, "DROP TYPE %s CASCADE;\n", qualtypname); /* * We might already have a shell type, but setting pg_type_oid is @@ -10588,7 +10464,7 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) appendPQExpBuffer(q, "CREATE TYPE %s (\n" " INTERNALLENGTH = %s", - qtypname, + qualtypname, (strcmp(typlen, "-1") == 0) ? "variable" : typlen); /* regproc result is sufficiently quoted already */ @@ -10621,8 +10497,6 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) { char *elemType; - /* reselect schema in case changed by function dump */ - selectSourceSchema(fout, tyinfo->dobj.namespace->dobj.name); elemType = getFormattedTypeName(fout, tyinfo->typelem, zeroAsOpaque); appendPQExpBuffer(q, ",\n ELEMENT = %s", elemType); free(elemType); @@ -10666,10 +10540,10 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) appendPQExpBufferStr(q, "\n);\n"); - appendPQExpBuffer(labelq, "TYPE %s", qtypname); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &tyinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &tyinfo->dobj, + "TYPE", qtypname, + tyinfo->dobj.namespace->dobj.name); if (tyinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, @@ -10684,18 +10558,18 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) /* Dump Type Comments and Security Labels */ if (tyinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, labelq->data, + qtypname, NULL, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10703,8 +10577,9 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo) PQclear(res); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); + free(qtypname); + free(qualtypname); } /* @@ -10717,20 +10592,17 @@ dumpDomain(Archive *fout, TypeInfo *tyinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q = createPQExpBuffer(); PQExpBuffer delq = createPQExpBuffer(); - PQExpBuffer labelq = createPQExpBuffer(); PQExpBuffer query = createPQExpBuffer(); PGresult *res; int i; char *qtypname; + char *qualtypname; char *typnotnull; char *typdefn; char *typdefault; Oid typcollation; bool typdefault_is_literal = false; - /* Set proper schema search path so type references list correctly */ - selectSourceSchema(fout, tyinfo->dobj.namespace->dobj.name); - /* Fetch domain specific details */ if (fout->remoteVersion >= 90100) { @@ -10778,10 +10650,11 @@ dumpDomain(Archive *fout, TypeInfo *tyinfo) true); /* force array type */ qtypname = pg_strdup(fmtId(tyinfo->dobj.name)); + qualtypname = pg_strdup(fmtQualifiedDumpable(tyinfo)); appendPQExpBuffer(q, "CREATE DOMAIN %s 
AS %s", - qtypname, + qualtypname, typdefn); /* Print collation only if different from base type's collation */ @@ -10791,12 +10664,7 @@ dumpDomain(Archive *fout, TypeInfo *tyinfo) coll = findCollationByOid(typcollation); if (coll) - { - /* always schema-qualify, don't try to be smart */ - appendPQExpBuffer(q, " COLLATE %s.", - fmtId(coll->dobj.namespace->dobj.name)); - appendPQExpBufferStr(q, fmtId(coll->dobj.name)); - } + appendPQExpBuffer(q, " COLLATE %s", fmtQualifiedDumpable(coll)); } if (typnotnull[0] == 't') @@ -10827,18 +10695,12 @@ dumpDomain(Archive *fout, TypeInfo *tyinfo) appendPQExpBufferStr(q, ";\n"); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "DROP DOMAIN %s.", - fmtId(tyinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - qtypname); - - appendPQExpBuffer(labelq, "DOMAIN %s", qtypname); + appendPQExpBuffer(delq, "DROP DOMAIN %s;\n", qualtypname); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &tyinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &tyinfo->dobj, + "DOMAIN", qtypname, + tyinfo->dobj.namespace->dobj.name); if (tyinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, @@ -10853,18 +10715,18 @@ dumpDomain(Archive *fout, TypeInfo *tyinfo) /* Dump Domain Comments and Security Labels */ if (tyinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "DOMAIN", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "DOMAIN", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, labelq->data, + qtypname, NULL, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -10873,26 +10735,25 @@ dumpDomain(Archive *fout, TypeInfo *tyinfo) for (i = 0; i < tyinfo->nDomChecks; i++) { ConstraintInfo *domcheck = &(tyinfo->domChecks[i]); - PQExpBuffer labelq = createPQExpBuffer(); + PQExpBuffer conprefix = createPQExpBuffer(); - appendPQExpBuffer(labelq, "CONSTRAINT %s ", + appendPQExpBuffer(conprefix, "CONSTRAINT %s ON DOMAIN", fmtId(domcheck->dobj.name)); - appendPQExpBuffer(labelq, "ON DOMAIN %s", - qtypname); if (tyinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, conprefix->data, qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, domcheck->dobj.catId, 0, tyinfo->dobj.dumpId); - destroyPQExpBuffer(labelq); + destroyPQExpBuffer(conprefix); } destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); + free(qtypname); + free(qualtypname); } /* @@ -10907,10 +10768,10 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) PQExpBuffer q = createPQExpBuffer(); PQExpBuffer dropped = createPQExpBuffer(); PQExpBuffer delq = createPQExpBuffer(); - PQExpBuffer labelq = createPQExpBuffer(); PQExpBuffer query = createPQExpBuffer(); PGresult *res; char *qtypname; + char *qualtypname; int ntups; int i_attname; int i_atttypdefn; @@ -10921,9 +10782,6 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) int i; int actual_atts; - /* Set proper schema search path so type references list correctly */ 
- selectSourceSchema(fout, tyinfo->dobj.namespace->dobj.name); - /* Fetch type specific details */ if (fout->remoteVersion >= 90100) { @@ -10983,9 +10841,10 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) } qtypname = pg_strdup(fmtId(tyinfo->dobj.name)); + qualtypname = pg_strdup(fmtQualifiedDumpable(tyinfo)); appendPQExpBuffer(q, "CREATE TYPE %s AS (", - qtypname); + qualtypname); actual_atts = 0; for (i = 0; i < ntups; i++) @@ -11023,12 +10882,8 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) coll = findCollationByOid(attcollation); if (coll) - { - /* always schema-qualify, don't try to be smart */ - appendPQExpBuffer(q, " COLLATE %s.", - fmtId(coll->dobj.namespace->dobj.name)); - appendPQExpBufferStr(q, fmtId(coll->dobj.name)); - } + appendPQExpBuffer(q, " COLLATE %s", + fmtQualifiedDumpable(coll)); } } else @@ -11050,11 +10905,11 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) "WHERE attname = ", attlen, attalign); appendStringLiteralAH(dropped, attname, fout); appendPQExpBufferStr(dropped, "\n AND attrelid = "); - appendStringLiteralAH(dropped, qtypname, fout); + appendStringLiteralAH(dropped, qualtypname, fout); appendPQExpBufferStr(dropped, "::pg_catalog.regclass;\n"); appendPQExpBuffer(dropped, "ALTER TYPE %s ", - qtypname); + qualtypname); appendPQExpBuffer(dropped, "DROP ATTRIBUTE %s;\n", fmtId(attname)); } @@ -11062,18 +10917,12 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) appendPQExpBufferStr(q, "\n);\n"); appendPQExpBufferStr(q, dropped->data); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "DROP TYPE %s.", - fmtId(tyinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - qtypname); - - appendPQExpBuffer(labelq, "TYPE %s", qtypname); + appendPQExpBuffer(delq, "DROP TYPE %s;\n", qualtypname); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &tyinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &tyinfo->dobj, + "TYPE", qtypname, + tyinfo->dobj.namespace->dobj.name); if (tyinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, @@ -11089,18 +10938,18 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) /* Dump Type Comments and Security Labels */ if (tyinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "TYPE", qtypname, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->dobj.catId, 0, tyinfo->dobj.dumpId); if (tyinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, tyinfo->dobj.catId, tyinfo->dobj.dumpId, "TYPE", - qtypname, NULL, labelq->data, + qtypname, NULL, tyinfo->dobj.namespace->dobj.name, tyinfo->rolname, tyinfo->typacl, tyinfo->rtypacl, tyinfo->inittypacl, tyinfo->initrtypacl); @@ -11109,8 +10958,9 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo) destroyPQExpBuffer(q); destroyPQExpBuffer(dropped); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); + free(qtypname); + free(qualtypname); /* Dump any per-column comments */ if (tyinfo->dobj.dump & DUMP_COMPONENT_COMMENT) @@ -11205,7 +11055,9 @@ dumpCompositeTypeColComments(Archive *fout, TypeInfo *tyinfo) appendPQExpBufferStr(target, fmtId(attname)); resetPQExpBuffer(query); - appendPQExpBuffer(query, "COMMENT ON %s IS ", target->data); + 
appendPQExpBuffer(query, "COMMENT ON COLUMN %s.", + fmtQualifiedDumpable(tyinfo)); + appendPQExpBuffer(query, "%s IS ", fmtId(attname)); appendStringLiteralAH(query, descr, fout); appendPQExpBufferStr(query, ";\n"); @@ -11261,7 +11113,7 @@ dumpShellType(Archive *fout, ShellTypeInfo *stinfo) false); appendPQExpBuffer(q, "CREATE TYPE %s;\n", - fmtId(stinfo->dobj.name)); + fmtQualifiedDumpable(stinfo)); if (stinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, stinfo->dobj.catId, stinfo->dobj.dumpId, @@ -11288,10 +11140,8 @@ dumpProcLang(Archive *fout, ProcLangInfo *plang) DumpOptions *dopt = fout->dopt; PQExpBuffer defqry; PQExpBuffer delqry; - PQExpBuffer labelq; bool useParams; char *qlanname; - char *lanschema; FuncInfo *funcInfo; FuncInfo *inlineInfo = NULL; FuncInfo *validatorInfo = NULL; @@ -11337,20 +11187,9 @@ dumpProcLang(Archive *fout, ProcLangInfo *plang) defqry = createPQExpBuffer(); delqry = createPQExpBuffer(); - labelq = createPQExpBuffer(); qlanname = pg_strdup(fmtId(plang->dobj.name)); - /* - * If dumping a HANDLER clause, treat the language as being in the handler - * function's schema; this avoids cluttering the HANDLER clause. Otherwise - * it doesn't really have a schema. - */ - if (useParams) - lanschema = funcInfo->dobj.namespace->dobj.name; - else - lanschema = NULL; - appendPQExpBuffer(delqry, "DROP PROCEDURAL LANGUAGE %s;\n", qlanname); @@ -11360,25 +11199,13 @@ dumpProcLang(Archive *fout, ProcLangInfo *plang) plang->lanpltrusted ? "TRUSTED " : "", qlanname); appendPQExpBuffer(defqry, " HANDLER %s", - fmtId(funcInfo->dobj.name)); + fmtQualifiedDumpable(funcInfo)); if (OidIsValid(plang->laninline)) - { - appendPQExpBufferStr(defqry, " INLINE "); - /* Cope with possibility that inline is in different schema */ - if (inlineInfo->dobj.namespace != funcInfo->dobj.namespace) - appendPQExpBuffer(defqry, "%s.", - fmtId(inlineInfo->dobj.namespace->dobj.name)); - appendPQExpBufferStr(defqry, fmtId(inlineInfo->dobj.name)); - } + appendPQExpBuffer(defqry, " INLINE %s", + fmtQualifiedDumpable(inlineInfo)); if (OidIsValid(plang->lanvalidator)) - { - appendPQExpBufferStr(defqry, " VALIDATOR "); - /* Cope with possibility that validator is in different schema */ - if (validatorInfo->dobj.namespace != funcInfo->dobj.namespace) - appendPQExpBuffer(defqry, "%s.", - fmtId(validatorInfo->dobj.namespace->dobj.name)); - appendPQExpBufferStr(defqry, fmtId(validatorInfo->dobj.name)); - } + appendPQExpBuffer(defqry, " VALIDATOR %s", + fmtQualifiedDumpable(validatorInfo)); } else { @@ -11396,15 +11223,14 @@ dumpProcLang(Archive *fout, ProcLangInfo *plang) } appendPQExpBufferStr(defqry, ";\n"); - appendPQExpBuffer(labelq, "LANGUAGE %s", qlanname); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(defqry, &plang->dobj, labelq->data); + binary_upgrade_extension_member(defqry, &plang->dobj, + "LANGUAGE", qlanname, NULL); if (plang->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, plang->dobj.catId, plang->dobj.dumpId, plang->dobj.name, - lanschema, NULL, plang->lanowner, + NULL, NULL, plang->lanowner, false, "PROCEDURAL LANGUAGE", SECTION_PRE_DATA, defqry->data, delqry->data, NULL, NULL, 0, @@ -11412,19 +11238,18 @@ dumpProcLang(Archive *fout, ProcLangInfo *plang) /* Dump Proc Lang Comments and Security Labels */ if (plang->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, - lanschema, plang->lanowner, + dumpComment(fout, "LANGUAGE", qlanname, + NULL, plang->lanowner, plang->dobj.catId, 0, plang->dobj.dumpId); if (plang->dobj.dump & 
DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, - lanschema, plang->lanowner, + dumpSecLabel(fout, "LANGUAGE", qlanname, + NULL, plang->lanowner, plang->dobj.catId, 0, plang->dobj.dumpId); if (plang->lanpltrusted && plang->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, plang->dobj.catId, plang->dobj.dumpId, "LANGUAGE", - qlanname, NULL, labelq->data, - lanschema, + qlanname, NULL, NULL, plang->lanowner, plang->lanacl, plang->rlanacl, plang->initlanacl, plang->initrlanacl); @@ -11432,7 +11257,6 @@ dumpProcLang(Archive *fout, ProcLangInfo *plang) destroyPQExpBuffer(defqry); destroyPQExpBuffer(delqry); - destroyPQExpBuffer(labelq); } /* @@ -11576,7 +11400,6 @@ dumpFunc(Archive *fout, FuncInfo *finfo) PQExpBuffer query; PQExpBuffer q; PQExpBuffer delqry; - PQExpBuffer labelq; PQExpBuffer asPart; PGresult *res; char *funcsig; /* identity signature */ @@ -11620,12 +11443,8 @@ dumpFunc(Archive *fout, FuncInfo *finfo) query = createPQExpBuffer(); q = createPQExpBuffer(); delqry = createPQExpBuffer(); - labelq = createPQExpBuffer(); asPart = createPQExpBuffer(); - /* Set proper schema search path so type references list correctly */ - selectSourceSchema(fout, finfo->dobj.namespace->dobj.name); - /* Fetch function-specific details */ if (fout->remoteVersion >= 90600) { @@ -11901,18 +11720,17 @@ dumpFunc(Archive *fout, FuncInfo *finfo) keyword = is_procedure ? "PROCEDURE" : "FUNCTION"; - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ appendPQExpBuffer(delqry, "DROP %s %s.%s;\n", keyword, fmtId(finfo->dobj.namespace->dobj.name), funcsig); - appendPQExpBuffer(q, "CREATE %s %s", + appendPQExpBuffer(q, "CREATE %s %s.%s", keyword, + fmtId(finfo->dobj.namespace->dobj.name), funcfullsig ? funcfullsig : funcsig); + if (is_procedure) ; else if (funcresult) @@ -12028,10 +11846,10 @@ dumpFunc(Archive *fout, FuncInfo *finfo) appendPQExpBuffer(q, "\n %s;\n", asPart->data); - appendPQExpBuffer(labelq, "%s %s", keyword, funcsig); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &finfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &finfo->dobj, + keyword, funcsig, + finfo->dobj.namespace->dobj.name); if (finfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, finfo->dobj.catId, finfo->dobj.dumpId, @@ -12046,18 +11864,18 @@ dumpFunc(Archive *fout, FuncInfo *finfo) /* Dump Function Comments and Security Labels */ if (finfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, keyword, funcsig, finfo->dobj.namespace->dobj.name, finfo->rolname, finfo->dobj.catId, 0, finfo->dobj.dumpId); if (finfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, keyword, funcsig, finfo->dobj.namespace->dobj.name, finfo->rolname, finfo->dobj.catId, 0, finfo->dobj.dumpId); if (finfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, finfo->dobj.catId, finfo->dobj.dumpId, keyword, - funcsig, NULL, labelq->data, + funcsig, NULL, finfo->dobj.namespace->dobj.name, finfo->rolname, finfo->proacl, finfo->rproacl, finfo->initproacl, finfo->initrproacl); @@ -12067,7 +11885,6 @@ dumpFunc(Archive *fout, FuncInfo *finfo) destroyPQExpBuffer(query); destroyPQExpBuffer(q); destroyPQExpBuffer(delqry); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(asPart); free(funcsig); if (funcfullsig) @@ -12094,6 +11911,7 @@ dumpCast(Archive *fout, CastInfo *cast) PQExpBuffer defqry; PQExpBuffer delqry; PQExpBuffer labelq; + PQExpBuffer castargs; FuncInfo *funcInfo = NULL; char *sourceType; char *targetType; 
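The dumpFunc() hunks above follow the same scheme: DROP and CREATE both carry an explicit schema in front of the function signature instead of relying on a previously selected schema. A sketch of the shape of the emitted commands, with a hypothetical function; the body and options are elided:

#include <stdio.h>

int
main(void)
{
    const char *keyword = "FUNCTION";
    const char *nspname = "public";             /* hypothetical */
    const char *funcsig = "add_one(integer)";   /* pre-formatted signature */

    printf("DROP %s %s.%s;\n", keyword, nspname, funcsig);
    printf("CREATE %s %s.%s RETURNS integer ...\n", keyword, nspname, funcsig);
    return 0;
}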
@@ -12111,15 +11929,10 @@ dumpCast(Archive *fout, CastInfo *cast) cast->castfunc); } - /* - * Make sure we are in proper schema (needed for getFormattedTypeName). - * Casts don't have a schema of their own, so use pg_catalog. - */ - selectSourceSchema(fout, "pg_catalog"); - defqry = createPQExpBuffer(); delqry = createPQExpBuffer(); labelq = createPQExpBuffer(); + castargs = createPQExpBuffer(); sourceType = getFormattedTypeName(fout, cast->castsource, zeroAsNone); targetType = getFormattedTypeName(fout, cast->casttarget, zeroAsNone); @@ -12143,9 +11956,8 @@ dumpCast(Archive *fout, CastInfo *cast) char *fsig = format_function_signature(fout, funcInfo, true); /* - * Always qualify the function name, in case it is not in - * pg_catalog schema (format_function_signature won't qualify - * it). + * Always qualify the function name (format_function_signature + * won't qualify it). */ appendPQExpBuffer(defqry, "WITH FUNCTION %s.%s", fmtId(funcInfo->dobj.namespace->dobj.name), fsig); @@ -12167,13 +11979,17 @@ dumpCast(Archive *fout, CastInfo *cast) appendPQExpBuffer(labelq, "CAST (%s AS %s)", sourceType, targetType); + appendPQExpBuffer(castargs, "(%s AS %s)", + sourceType, targetType); + if (dopt->binary_upgrade) - binary_upgrade_extension_member(defqry, &cast->dobj, labelq->data); + binary_upgrade_extension_member(defqry, &cast->dobj, + "CAST", castargs->data, NULL); if (cast->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, cast->dobj.catId, cast->dobj.dumpId, labelq->data, - "pg_catalog", NULL, "", + NULL, NULL, "", false, "CAST", SECTION_PRE_DATA, defqry->data, delqry->data, NULL, NULL, 0, @@ -12181,8 +11997,8 @@ dumpCast(Archive *fout, CastInfo *cast) /* Dump Cast Comments */ if (cast->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, - "pg_catalog", "", + dumpComment(fout, "CAST", castargs->data, + NULL, "", cast->dobj.catId, 0, cast->dobj.dumpId); free(sourceType); @@ -12191,6 +12007,7 @@ dumpCast(Archive *fout, CastInfo *cast) destroyPQExpBuffer(defqry); destroyPQExpBuffer(delqry); destroyPQExpBuffer(labelq); + destroyPQExpBuffer(castargs); } /* @@ -12203,6 +12020,7 @@ dumpTransform(Archive *fout, TransformInfo *transform) PQExpBuffer defqry; PQExpBuffer delqry; PQExpBuffer labelq; + PQExpBuffer transformargs; FuncInfo *fromsqlFuncInfo = NULL; FuncInfo *tosqlFuncInfo = NULL; char *lanname; @@ -12228,12 +12046,10 @@ dumpTransform(Archive *fout, TransformInfo *transform) transform->trftosql); } - /* Make sure we are in proper schema (needed for getFormattedTypeName) */ - selectSourceSchema(fout, "pg_catalog"); - defqry = createPQExpBuffer(); delqry = createPQExpBuffer(); labelq = createPQExpBuffer(); + transformargs = createPQExpBuffer(); lanname = get_language_name(fout, transform->trflang); transformType = getFormattedTypeName(fout, transform->trftype, zeroAsNone); @@ -12254,8 +12070,8 @@ dumpTransform(Archive *fout, TransformInfo *transform) char *fsig = format_function_signature(fout, fromsqlFuncInfo, true); /* - * Always qualify the function name, in case it is not in - * pg_catalog schema (format_function_signature won't qualify it). + * Always qualify the function name (format_function_signature + * won't qualify it). 
*/ appendPQExpBuffer(defqry, "FROM SQL WITH FUNCTION %s.%s", fmtId(fromsqlFuncInfo->dobj.namespace->dobj.name), fsig); @@ -12275,8 +12091,8 @@ dumpTransform(Archive *fout, TransformInfo *transform) char *fsig = format_function_signature(fout, tosqlFuncInfo, true); /* - * Always qualify the function name, in case it is not in - * pg_catalog schema (format_function_signature won't qualify it). + * Always qualify the function name (format_function_signature + * won't qualify it). */ appendPQExpBuffer(defqry, "TO SQL WITH FUNCTION %s.%s", fmtId(tosqlFuncInfo->dobj.namespace->dobj.name), fsig); @@ -12291,13 +12107,17 @@ dumpTransform(Archive *fout, TransformInfo *transform) appendPQExpBuffer(labelq, "TRANSFORM FOR %s LANGUAGE %s", transformType, lanname); + appendPQExpBuffer(transformargs, "FOR %s LANGUAGE %s", + transformType, lanname); + if (dopt->binary_upgrade) - binary_upgrade_extension_member(defqry, &transform->dobj, labelq->data); + binary_upgrade_extension_member(defqry, &transform->dobj, + "TRANSFORM", transformargs->data, NULL); if (transform->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, transform->dobj.catId, transform->dobj.dumpId, labelq->data, - "pg_catalog", NULL, "", + NULL, NULL, "", false, "TRANSFORM", SECTION_PRE_DATA, defqry->data, delqry->data, NULL, transform->dobj.dependencies, transform->dobj.nDeps, @@ -12305,8 +12125,8 @@ dumpTransform(Archive *fout, TransformInfo *transform) /* Dump Transform Comments */ if (transform->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, - "pg_catalog", "", + dumpComment(fout, "TRANSFORM", transformargs->data, + NULL, "", transform->dobj.catId, 0, transform->dobj.dumpId); free(lanname); @@ -12314,6 +12134,7 @@ dumpTransform(Archive *fout, TransformInfo *transform) destroyPQExpBuffer(defqry); destroyPQExpBuffer(delqry); destroyPQExpBuffer(labelq); + destroyPQExpBuffer(transformargs); } @@ -12328,7 +12149,6 @@ dumpOpr(Archive *fout, OprInfo *oprinfo) PQExpBuffer query; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; PQExpBuffer oprid; PQExpBuffer details; PGresult *res; @@ -12369,21 +12189,17 @@ dumpOpr(Archive *fout, OprInfo *oprinfo) query = createPQExpBuffer(); q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); oprid = createPQExpBuffer(); details = createPQExpBuffer(); - /* Make sure we are in proper schema so regoperator works correctly */ - selectSourceSchema(fout, oprinfo->dobj.namespace->dobj.name); - if (fout->remoteVersion >= 80300) { appendPQExpBuffer(query, "SELECT oprkind, " "oprcode::pg_catalog.regprocedure, " "oprleft::pg_catalog.regtype, " "oprright::pg_catalog.regtype, " - "oprcom::pg_catalog.regoperator, " - "oprnegate::pg_catalog.regoperator, " + "oprcom, " + "oprnegate, " "oprrest::pg_catalog.regprocedure, " "oprjoin::pg_catalog.regprocedure, " "oprcanmerge, oprcanhash " @@ -12397,8 +12213,8 @@ dumpOpr(Archive *fout, OprInfo *oprinfo) "oprcode::pg_catalog.regprocedure, " "oprleft::pg_catalog.regtype, " "oprright::pg_catalog.regtype, " - "oprcom::pg_catalog.regoperator, " - "oprnegate::pg_catalog.regoperator, " + "oprcom, " + "oprnegate, " "oprrest::pg_catalog.regprocedure, " "oprjoin::pg_catalog.regprocedure, " "(oprlsortop != 0) AS oprcanmerge, " @@ -12464,14 +12280,14 @@ dumpOpr(Archive *fout, OprInfo *oprinfo) else appendPQExpBufferStr(oprid, ", NONE)"); - oprref = convertOperatorReference(fout, oprcom); + oprref = getFormattedOperatorName(fout, oprcom); if (oprref) { appendPQExpBuffer(details, ",\n COMMUTATOR = %s", oprref); free(oprref); } - 
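Casts and transforms have no schema of their own, so the new castargs and transformargs buffers carry only the argument part of the object's identity; combined with the "CAST"/"TRANSFORM" type strings they serve both binary_upgrade_extension_member() and dumpComment(). A sketch with hypothetical types and extension name:

#include <stdio.h>

int
main(void)
{
    char        castargs[128];

    snprintf(castargs, sizeof(castargs), "(%s AS %s)", "integer", "text");
    printf("ALTER EXTENSION myext ADD CAST %s;\n", castargs);
    printf("COMMENT ON CAST %s IS 'lossless';\n", castargs);
    return 0;
}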
oprref = convertOperatorReference(fout, oprnegate); + oprref = getFormattedOperatorName(fout, oprnegate); if (oprref) { appendPQExpBuffer(details, ",\n NEGATOR = %s", oprref); @@ -12498,20 +12314,18 @@ dumpOpr(Archive *fout, OprInfo *oprinfo) free(oprregproc); } - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ appendPQExpBuffer(delq, "DROP OPERATOR %s.%s;\n", fmtId(oprinfo->dobj.namespace->dobj.name), oprid->data); - appendPQExpBuffer(q, "CREATE OPERATOR %s (\n%s\n);\n", + appendPQExpBuffer(q, "CREATE OPERATOR %s.%s (\n%s\n);\n", + fmtId(oprinfo->dobj.namespace->dobj.name), oprinfo->dobj.name, details->data); - appendPQExpBuffer(labelq, "OPERATOR %s", oprid->data); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &oprinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &oprinfo->dobj, + "OPERATOR", oprid->data, + oprinfo->dobj.namespace->dobj.name); if (oprinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, oprinfo->dobj.catId, oprinfo->dobj.dumpId, @@ -12526,7 +12340,7 @@ dumpOpr(Archive *fout, OprInfo *oprinfo) /* Dump Operator Comments */ if (oprinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "OPERATOR", oprid->data, oprinfo->dobj.namespace->dobj.name, oprinfo->rolname, oprinfo->dobj.catId, 0, oprinfo->dobj.dumpId); @@ -12535,7 +12349,6 @@ dumpOpr(Archive *fout, OprInfo *oprinfo) destroyPQExpBuffer(query); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(oprid); destroyPQExpBuffer(details); } @@ -12577,49 +12390,39 @@ convertRegProcReference(Archive *fout, const char *proc) } /* - * Convert an operator cross-reference obtained from pg_operator + * getFormattedOperatorName - retrieve the operator name for the + * given operator OID (presented in string form). * - * Returns an allocated string of what to print, or NULL to print nothing. + * Returns an allocated string, or NULL if the given OID is invalid. * Caller is responsible for free'ing result string. * - * The input is a REGOPERATOR display; we have to strip the argument-types - * part, and add OPERATOR() decoration if the name is schema-qualified. + * What we produce has the format "OPERATOR(schema.oprname)". This is only + * useful in commands where the operator's argument types can be inferred from + * context. We always schema-qualify the name, though. The predecessor to + * this code tried to skip the schema qualification if possible, but that led + * to wrong results in corner cases, such as if an operator and its negator + * are in different schemas. */ static char * -convertOperatorReference(Archive *fout, const char *opr) +getFormattedOperatorName(Archive *fout, const char *oproid) { - char *name; - char *oname; - char *ptr; - bool inquote; - bool sawdot; + OprInfo *oprInfo; /* In all cases "0" means a null reference */ - if (strcmp(opr, "0") == 0) + if (strcmp(oproid, "0") == 0) return NULL; - name = pg_strdup(opr); - /* find non-double-quoted left paren, and check for non-quoted dot */ - inquote = false; - sawdot = false; - for (ptr = name; *ptr; ptr++) + oprInfo = findOprByOid(atooid(oproid)); + if (oprInfo == NULL) { - if (*ptr == '"') - inquote = !inquote; - else if (*ptr == '.' 
&& !inquote) - sawdot = true; - else if (*ptr == '(' && !inquote) - { - *ptr = '\0'; - break; - } + write_msg(NULL, "WARNING: could not find operator with OID %s\n", + oproid); + return NULL; } - /* If not schema-qualified, don't need to add OPERATOR() */ - if (!sawdot) - return name; - oname = psprintf("OPERATOR(%s)", name); - free(name); - return oname; + + return psprintf("OPERATOR(%s.%s)", + fmtId(oprInfo->dobj.namespace->dobj.name), + oprInfo->dobj.name); } /* @@ -12658,7 +12461,6 @@ dumpAccessMethod(Archive *fout, AccessMethodInfo *aminfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; char *qamname; /* Skip if not to be dumped */ @@ -12667,7 +12469,6 @@ dumpAccessMethod(Archive *fout, AccessMethodInfo *aminfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); qamname = pg_strdup(fmtId(aminfo->dobj.name)); @@ -12681,10 +12482,9 @@ dumpAccessMethod(Archive *fout, AccessMethodInfo *aminfo) default: write_msg(NULL, "WARNING: invalid type \"%c\" of access method \"%s\"\n", aminfo->amtype, qamname); - pg_free(qamname); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qamname); return; } @@ -12693,11 +12493,9 @@ dumpAccessMethod(Archive *fout, AccessMethodInfo *aminfo) appendPQExpBuffer(delq, "DROP ACCESS METHOD %s;\n", qamname); - appendPQExpBuffer(labelq, "ACCESS METHOD %s", - qamname); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &aminfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &aminfo->dobj, + "ACCESS METHOD", qamname, NULL); if (aminfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, aminfo->dobj.catId, aminfo->dobj.dumpId, @@ -12712,15 +12510,13 @@ dumpAccessMethod(Archive *fout, AccessMethodInfo *aminfo) /* Dump Access Method Comments */ if (aminfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "ACCESS METHOD", qamname, NULL, "", aminfo->dobj.catId, 0, aminfo->dobj.dumpId); - pg_free(qamname); - destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qamname); } /* @@ -12734,7 +12530,7 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo) PQExpBuffer query; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; + PQExpBuffer nameusing; PGresult *res; int ntups; int i_opcintype; @@ -12779,10 +12575,7 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo) query = createPQExpBuffer(); q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); - - /* Make sure we are in proper schema so regoperator works correctly */ - selectSourceSchema(fout, opcinfo->dobj.namespace->dobj.name); + nameusing = createPQExpBuffer(); /* Get additional fields from the pg_opclass row */ if (fout->remoteVersion >= 80300) @@ -12833,19 +12626,14 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo) /* amname will still be needed after we PQclear res */ amname = pg_strdup(PQgetvalue(res, 0, i_amname)); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ appendPQExpBuffer(delq, "DROP OPERATOR CLASS %s", - fmtId(opcinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, ".%s", - fmtId(opcinfo->dobj.name)); + fmtQualifiedDumpable(opcinfo)); appendPQExpBuffer(delq, " USING %s;\n", fmtId(amname)); /* Build the fixed portion of the CREATE command */ appendPQExpBuffer(q, "CREATE OPERATOR CLASS %s\n ", - fmtId(opcinfo->dobj.name)); + fmtQualifiedDumpable(opcinfo)); if (strcmp(opcdefault, "t") == 0) appendPQExpBufferStr(q, "DEFAULT "); 
appendPQExpBuffer(q, "FOR TYPE %s USING %s", @@ -12854,8 +12642,7 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo) if (strlen(opcfamilyname) > 0) { appendPQExpBufferStr(q, " FAMILY "); - if (strcmp(opcfamilynsp, opcinfo->dobj.namespace->dobj.name) != 0) - appendPQExpBuffer(q, "%s.", fmtId(opcfamilynsp)); + appendPQExpBuffer(q, "%s.", fmtId(opcfamilynsp)); appendPQExpBufferStr(q, fmtId(opcfamilyname)); } appendPQExpBufferStr(q, " AS\n "); @@ -12973,8 +12760,7 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo) if (strlen(sortfamily) > 0) { appendPQExpBufferStr(q, " FOR ORDER BY "); - if (strcmp(sortfamilynsp, opcinfo->dobj.namespace->dobj.name) != 0) - appendPQExpBuffer(q, "%s.", fmtId(sortfamilynsp)); + appendPQExpBuffer(q, "%s.", fmtId(sortfamilynsp)); appendPQExpBufferStr(q, fmtId(sortfamily)); } @@ -13068,13 +12854,14 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo) appendPQExpBufferStr(q, ";\n"); - appendPQExpBuffer(labelq, "OPERATOR CLASS %s", - fmtId(opcinfo->dobj.name)); - appendPQExpBuffer(labelq, " USING %s", + appendPQExpBufferStr(nameusing, fmtId(opcinfo->dobj.name)); + appendPQExpBuffer(nameusing, " USING %s", fmtId(amname)); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &opcinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &opcinfo->dobj, + "OPERATOR CLASS", nameusing->data, + opcinfo->dobj.namespace->dobj.name); if (opcinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, opcinfo->dobj.catId, opcinfo->dobj.dumpId, @@ -13089,7 +12876,7 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo) /* Dump Operator Class Comments */ if (opcinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "OPERATOR CLASS", nameusing->data, opcinfo->dobj.namespace->dobj.name, opcinfo->rolname, opcinfo->dobj.catId, 0, opcinfo->dobj.dumpId); @@ -13099,7 +12886,7 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo) destroyPQExpBuffer(query); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + destroyPQExpBuffer(nameusing); } /* @@ -13116,7 +12903,7 @@ dumpOpfamily(Archive *fout, OpfamilyInfo *opfinfo) PQExpBuffer query; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; + PQExpBuffer nameusing; PGresult *res; PGresult *res_ops; PGresult *res_procs; @@ -13151,10 +12938,7 @@ dumpOpfamily(Archive *fout, OpfamilyInfo *opfinfo) query = createPQExpBuffer(); q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); - - /* Make sure we are in proper schema so regoperator works correctly */ - selectSourceSchema(fout, opfinfo->dobj.namespace->dobj.name); + nameusing = createPQExpBuffer(); /* * Fetch only those opfamily members that are tied directly to the @@ -13246,19 +13030,14 @@ dumpOpfamily(Archive *fout, OpfamilyInfo *opfinfo) /* amname will still be needed after we PQclear res */ amname = pg_strdup(PQgetvalue(res, 0, i_amname)); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ appendPQExpBuffer(delq, "DROP OPERATOR FAMILY %s", - fmtId(opfinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, ".%s", - fmtId(opfinfo->dobj.name)); + fmtQualifiedDumpable(opfinfo)); appendPQExpBuffer(delq, " USING %s;\n", fmtId(amname)); /* Build the fixed portion of the CREATE command */ appendPQExpBuffer(q, "CREATE OPERATOR FAMILY %s", - fmtId(opfinfo->dobj.name)); + fmtQualifiedDumpable(opfinfo)); appendPQExpBuffer(q, " USING %s;\n", fmtId(amname)); @@ -13268,7 +13047,7 @@ dumpOpfamily(Archive *fout, OpfamilyInfo *opfinfo) if 
(PQntuples(res_ops) > 0 || PQntuples(res_procs) > 0) { appendPQExpBuffer(q, "ALTER OPERATOR FAMILY %s", - fmtId(opfinfo->dobj.name)); + fmtQualifiedDumpable(opfinfo)); appendPQExpBuffer(q, " USING %s ADD\n ", fmtId(amname)); @@ -13302,8 +13081,7 @@ dumpOpfamily(Archive *fout, OpfamilyInfo *opfinfo) if (strlen(sortfamily) > 0) { appendPQExpBufferStr(q, " FOR ORDER BY "); - if (strcmp(sortfamilynsp, opfinfo->dobj.namespace->dobj.name) != 0) - appendPQExpBuffer(q, "%s.", fmtId(sortfamilynsp)); + appendPQExpBuffer(q, "%s.", fmtId(sortfamilynsp)); appendPQExpBufferStr(q, fmtId(sortfamily)); } @@ -13343,13 +13121,14 @@ dumpOpfamily(Archive *fout, OpfamilyInfo *opfinfo) appendPQExpBufferStr(q, ";\n"); } - appendPQExpBuffer(labelq, "OPERATOR FAMILY %s", - fmtId(opfinfo->dobj.name)); - appendPQExpBuffer(labelq, " USING %s", + appendPQExpBufferStr(nameusing, fmtId(opfinfo->dobj.name)); + appendPQExpBuffer(nameusing, " USING %s", fmtId(amname)); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &opfinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &opfinfo->dobj, + "OPERATOR FAMILY", nameusing->data, + opfinfo->dobj.namespace->dobj.name); if (opfinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, opfinfo->dobj.catId, opfinfo->dobj.dumpId, @@ -13364,7 +13143,7 @@ dumpOpfamily(Archive *fout, OpfamilyInfo *opfinfo) /* Dump Operator Family Comments */ if (opfinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "OPERATOR FAMILY", nameusing->data, opfinfo->dobj.namespace->dobj.name, opfinfo->rolname, opfinfo->dobj.catId, 0, opfinfo->dobj.dumpId); @@ -13374,7 +13153,7 @@ dumpOpfamily(Archive *fout, OpfamilyInfo *opfinfo) destroyPQExpBuffer(query); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + destroyPQExpBuffer(nameusing); } /* @@ -13388,7 +13167,7 @@ dumpCollation(Archive *fout, CollInfo *collinfo) PQExpBuffer query; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; + char *qcollname; PGresult *res; int i_collprovider; int i_collcollate; @@ -13404,10 +13183,8 @@ dumpCollation(Archive *fout, CollInfo *collinfo) query = createPQExpBuffer(); q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, collinfo->dobj.namespace->dobj.name); + qcollname = pg_strdup(fmtId(collinfo->dobj.name)); /* Get collation-specific details */ if (fout->remoteVersion >= 100000) @@ -13439,16 +13216,11 @@ dumpCollation(Archive *fout, CollInfo *collinfo) collcollate = PQgetvalue(res, 0, i_collcollate); collctype = PQgetvalue(res, 0, i_collctype); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "DROP COLLATION %s", - fmtId(collinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, ".%s;\n", - fmtId(collinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP COLLATION %s;\n", + fmtQualifiedDumpable(collinfo)); appendPQExpBuffer(q, "CREATE COLLATION %s (", - fmtId(collinfo->dobj.name)); + fmtQualifiedDumpable(collinfo)); appendPQExpBufferStr(q, "provider = "); if (collprovider[0] == 'c') @@ -13496,10 +13268,10 @@ dumpCollation(Archive *fout, CollInfo *collinfo) appendPQExpBufferStr(q, ");\n"); - appendPQExpBuffer(labelq, "COLLATION %s", fmtId(collinfo->dobj.name)); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &collinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &collinfo->dobj, + "COLLATION", qcollname, + 
collinfo->dobj.namespace->dobj.name); if (collinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, collinfo->dobj.catId, collinfo->dobj.dumpId, @@ -13514,7 +13286,7 @@ dumpCollation(Archive *fout, CollInfo *collinfo) /* Dump Collation Comments */ if (collinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "COLLATION", qcollname, collinfo->dobj.namespace->dobj.name, collinfo->rolname, collinfo->dobj.catId, 0, collinfo->dobj.dumpId); @@ -13523,7 +13295,7 @@ dumpCollation(Archive *fout, CollInfo *collinfo) destroyPQExpBuffer(query); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qcollname); } /* @@ -13537,7 +13309,7 @@ dumpConversion(Archive *fout, ConvInfo *convinfo) PQExpBuffer query; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; + char *qconvname; PGresult *res; int i_conforencoding; int i_contoencoding; @@ -13555,10 +13327,8 @@ dumpConversion(Archive *fout, ConvInfo *convinfo) query = createPQExpBuffer(); q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, convinfo->dobj.namespace->dobj.name); + qconvname = pg_strdup(fmtId(convinfo->dobj.name)); /* Get conversion-specific details */ appendPQExpBuffer(query, "SELECT " @@ -13581,27 +13351,22 @@ dumpConversion(Archive *fout, ConvInfo *convinfo) conproc = PQgetvalue(res, 0, i_conproc); condefault = (PQgetvalue(res, 0, i_condefault)[0] == 't'); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "DROP CONVERSION %s", - fmtId(convinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, ".%s;\n", - fmtId(convinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP CONVERSION %s;\n", + fmtQualifiedDumpable(convinfo)); appendPQExpBuffer(q, "CREATE %sCONVERSION %s FOR ", (condefault) ? 
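/*
 * Illustrative shape of the qualified commands being built here (names
 * made up):
 *
 *     CREATE DEFAULT CONVERSION public.myconv
 *         FOR 'LATIN1' TO 'UTF8' FROM myproc;
 *     DROP CONVERSION public.myconv;
 */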
"DEFAULT " : "", - fmtId(convinfo->dobj.name)); + fmtQualifiedDumpable(convinfo)); appendStringLiteralAH(q, conforencoding, fout); appendPQExpBufferStr(q, " TO "); appendStringLiteralAH(q, contoencoding, fout); /* regproc output is already sufficiently quoted */ appendPQExpBuffer(q, " FROM %s;\n", conproc); - appendPQExpBuffer(labelq, "CONVERSION %s", fmtId(convinfo->dobj.name)); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &convinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &convinfo->dobj, + "CONVERSION", qconvname, + convinfo->dobj.namespace->dobj.name); if (convinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, convinfo->dobj.catId, convinfo->dobj.dumpId, @@ -13616,7 +13381,7 @@ dumpConversion(Archive *fout, ConvInfo *convinfo) /* Dump Conversion Comments */ if (convinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "CONVERSION", qconvname, convinfo->dobj.namespace->dobj.name, convinfo->rolname, convinfo->dobj.catId, 0, convinfo->dobj.dumpId); @@ -13625,7 +13390,7 @@ dumpConversion(Archive *fout, ConvInfo *convinfo) destroyPQExpBuffer(query); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qconvname); } /* @@ -13679,7 +13444,6 @@ dumpAgg(Archive *fout, AggInfo *agginfo) PQExpBuffer query; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; PQExpBuffer details; char *aggsig; /* identity signature */ char *aggfullsig = NULL; /* full signature */ @@ -13739,12 +13503,8 @@ dumpAgg(Archive *fout, AggInfo *agginfo) query = createPQExpBuffer(); q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); details = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, agginfo->aggfn.dobj.namespace->dobj.name); - /* Get aggregate-specific details */ if (fout->remoteVersion >= 110000) { @@ -13754,7 +13514,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "aggminvtransfn, aggmfinalfn, aggmtranstype::pg_catalog.regtype, " "aggfinalextra, aggmfinalextra, " "aggfinalmodify, aggmfinalmodify, " - "aggsortop::pg_catalog.regoperator, " + "aggsortop, " "aggkind, " "aggtransspace, agginitval, " "aggmtransspace, aggminitval, " @@ -13775,7 +13535,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "aggminvtransfn, aggmfinalfn, aggmtranstype::pg_catalog.regtype, " "aggfinalextra, aggmfinalextra, " "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " - "aggsortop::pg_catalog.regoperator, " + "aggsortop, " "aggkind, " "aggtransspace, agginitval, " "aggmtransspace, aggminitval, " @@ -13797,7 +13557,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "aggmfinalfn, aggmtranstype::pg_catalog.regtype, " "aggfinalextra, aggmfinalextra, " "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " - "aggsortop::pg_catalog.regoperator, " + "aggsortop, " "aggkind, " "aggtransspace, agginitval, " "aggmtransspace, aggminitval, " @@ -13819,7 +13579,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "0 AS aggmtranstype, false AS aggfinalextra, " "false AS aggmfinalextra, " "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " - "aggsortop::pg_catalog.regoperator, " + "aggsortop, " "'n' AS aggkind, " "0 AS aggtransspace, agginitval, " "0 AS aggmtransspace, NULL AS aggminitval, " @@ -13841,7 +13601,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) "0 AS aggmtranstype, false AS aggfinalextra, " "false AS aggmfinalextra, " "'0' AS aggfinalmodify, '0' AS aggmfinalmodify, " - "aggsortop::pg_catalog.regoperator, " + "aggsortop, " "'n' AS aggkind, " "0 AS aggtransspace, 
agginitval, " "0 AS aggmtransspace, NULL AS aggminitval, " @@ -14061,7 +13821,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) } } - aggsortconvop = convertOperatorReference(fout, aggsortop); + aggsortconvop = getFormattedOperatorName(fout, aggsortop); if (aggsortconvop) { appendPQExpBuffer(details, ",\n SORTOP = %s", @@ -14083,20 +13843,18 @@ dumpAgg(Archive *fout, AggInfo *agginfo) agginfo->aggfn.dobj.name); } - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ appendPQExpBuffer(delq, "DROP AGGREGATE %s.%s;\n", fmtId(agginfo->aggfn.dobj.namespace->dobj.name), aggsig); - appendPQExpBuffer(q, "CREATE AGGREGATE %s (\n%s\n);\n", + appendPQExpBuffer(q, "CREATE AGGREGATE %s.%s (\n%s\n);\n", + fmtId(agginfo->aggfn.dobj.namespace->dobj.name), aggfullsig ? aggfullsig : aggsig, details->data); - appendPQExpBuffer(labelq, "AGGREGATE %s", aggsig); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &agginfo->aggfn.dobj, labelq->data); + binary_upgrade_extension_member(q, &agginfo->aggfn.dobj, + "AGGREGATE", aggsig, + agginfo->aggfn.dobj.namespace->dobj.name); if (agginfo->aggfn.dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, agginfo->aggfn.dobj.catId, @@ -14112,13 +13870,13 @@ dumpAgg(Archive *fout, AggInfo *agginfo) /* Dump Aggregate Comments */ if (agginfo->aggfn.dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "AGGREGATE", aggsig, agginfo->aggfn.dobj.namespace->dobj.name, agginfo->aggfn.rolname, agginfo->aggfn.dobj.catId, 0, agginfo->aggfn.dobj.dumpId); if (agginfo->aggfn.dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "AGGREGATE", aggsig, agginfo->aggfn.dobj.namespace->dobj.name, agginfo->aggfn.rolname, agginfo->aggfn.dobj.catId, 0, agginfo->aggfn.dobj.dumpId); @@ -14134,8 +13892,7 @@ dumpAgg(Archive *fout, AggInfo *agginfo) if (agginfo->aggfn.dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, agginfo->aggfn.dobj.catId, agginfo->aggfn.dobj.dumpId, - "FUNCTION", - aggsig, NULL, labelq->data, + "FUNCTION", aggsig, NULL, agginfo->aggfn.dobj.namespace->dobj.name, agginfo->aggfn.rolname, agginfo->aggfn.proacl, agginfo->aggfn.rproacl, @@ -14151,7 +13908,6 @@ dumpAgg(Archive *fout, AggInfo *agginfo) destroyPQExpBuffer(query); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(details); } @@ -14165,7 +13921,7 @@ dumpTSParser(Archive *fout, TSParserInfo *prsinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; + char *qprsname; /* Skip if not to be dumped */ if (!prsinfo->dobj.dump || dopt->dataOnly) @@ -14173,13 +13929,11 @@ dumpTSParser(Archive *fout, TSParserInfo *prsinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, prsinfo->dobj.namespace->dobj.name); + qprsname = pg_strdup(fmtId(prsinfo->dobj.name)); appendPQExpBuffer(q, "CREATE TEXT SEARCH PARSER %s (\n", - fmtId(prsinfo->dobj.name)); + fmtQualifiedDumpable(prsinfo)); appendPQExpBuffer(q, " START = %s,\n", convertTSFunction(fout, prsinfo->prsstart)); @@ -14193,19 +13947,13 @@ dumpTSParser(Archive *fout, TSParserInfo *prsinfo) appendPQExpBuffer(q, " LEXTYPES = %s );\n", convertTSFunction(fout, prsinfo->prslextype)); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "DROP TEXT SEARCH PARSER %s", - fmtId(prsinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, 
".%s;\n", - fmtId(prsinfo->dobj.name)); - - appendPQExpBuffer(labelq, "TEXT SEARCH PARSER %s", - fmtId(prsinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP TEXT SEARCH PARSER %s;\n", + fmtQualifiedDumpable(prsinfo)); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &prsinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &prsinfo->dobj, + "TEXT SEARCH PARSER", qprsname, + prsinfo->dobj.namespace->dobj.name); if (prsinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, prsinfo->dobj.catId, prsinfo->dobj.dumpId, @@ -14220,13 +13968,13 @@ dumpTSParser(Archive *fout, TSParserInfo *prsinfo) /* Dump Parser Comments */ if (prsinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TEXT SEARCH PARSER", qprsname, prsinfo->dobj.namespace->dobj.name, "", prsinfo->dobj.catId, 0, prsinfo->dobj.dumpId); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qprsname); } /* @@ -14239,8 +13987,8 @@ dumpTSDictionary(Archive *fout, TSDictInfo *dictinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; PQExpBuffer query; + char *qdictname; PGresult *res; char *nspname; char *tmplname; @@ -14251,11 +13999,11 @@ dumpTSDictionary(Archive *fout, TSDictInfo *dictinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); query = createPQExpBuffer(); + qdictname = pg_strdup(fmtId(dictinfo->dobj.name)); + /* Fetch name and namespace of the dictionary's template */ - selectSourceSchema(fout, "pg_catalog"); appendPQExpBuffer(query, "SELECT nspname, tmplname " "FROM pg_ts_template p, pg_namespace n " "WHERE p.oid = '%u' AND n.oid = tmplnamespace", @@ -14264,15 +14012,11 @@ dumpTSDictionary(Archive *fout, TSDictInfo *dictinfo) nspname = PQgetvalue(res, 0, 0); tmplname = PQgetvalue(res, 0, 1); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, dictinfo->dobj.namespace->dobj.name); - appendPQExpBuffer(q, "CREATE TEXT SEARCH DICTIONARY %s (\n", - fmtId(dictinfo->dobj.name)); + fmtQualifiedDumpable(dictinfo)); appendPQExpBufferStr(q, " TEMPLATE = "); - if (strcmp(nspname, dictinfo->dobj.namespace->dobj.name) != 0) - appendPQExpBuffer(q, "%s.", fmtId(nspname)); + appendPQExpBuffer(q, "%s.", fmtId(nspname)); appendPQExpBufferStr(q, fmtId(tmplname)); PQclear(res); @@ -14283,19 +14027,13 @@ dumpTSDictionary(Archive *fout, TSDictInfo *dictinfo) appendPQExpBufferStr(q, " );\n"); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "DROP TEXT SEARCH DICTIONARY %s", - fmtId(dictinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, ".%s;\n", - fmtId(dictinfo->dobj.name)); - - appendPQExpBuffer(labelq, "TEXT SEARCH DICTIONARY %s", - fmtId(dictinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP TEXT SEARCH DICTIONARY %s;\n", + fmtQualifiedDumpable(dictinfo)); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &dictinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &dictinfo->dobj, + "TEXT SEARCH DICTIONARY", qdictname, + dictinfo->dobj.namespace->dobj.name); if (dictinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, dictinfo->dobj.catId, dictinfo->dobj.dumpId, @@ -14310,14 +14048,14 @@ dumpTSDictionary(Archive *fout, TSDictInfo *dictinfo) /* Dump Dictionary Comments */ if (dictinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TEXT SEARCH DICTIONARY", qdictname, dictinfo->dobj.namespace->dobj.name, 
dictinfo->rolname, dictinfo->dobj.catId, 0, dictinfo->dobj.dumpId); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); + free(qdictname); } /* @@ -14330,7 +14068,7 @@ dumpTSTemplate(Archive *fout, TSTemplateInfo *tmplinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; + char *qtmplname; /* Skip if not to be dumped */ if (!tmplinfo->dobj.dump || dopt->dataOnly) @@ -14338,13 +14076,11 @@ dumpTSTemplate(Archive *fout, TSTemplateInfo *tmplinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, tmplinfo->dobj.namespace->dobj.name); + qtmplname = pg_strdup(fmtId(tmplinfo->dobj.name)); appendPQExpBuffer(q, "CREATE TEXT SEARCH TEMPLATE %s (\n", - fmtId(tmplinfo->dobj.name)); + fmtQualifiedDumpable(tmplinfo)); if (tmplinfo->tmplinit != InvalidOid) appendPQExpBuffer(q, " INIT = %s,\n", @@ -14352,19 +14088,13 @@ dumpTSTemplate(Archive *fout, TSTemplateInfo *tmplinfo) appendPQExpBuffer(q, " LEXIZE = %s );\n", convertTSFunction(fout, tmplinfo->tmpllexize)); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "DROP TEXT SEARCH TEMPLATE %s", - fmtId(tmplinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, ".%s;\n", - fmtId(tmplinfo->dobj.name)); - - appendPQExpBuffer(labelq, "TEXT SEARCH TEMPLATE %s", - fmtId(tmplinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP TEXT SEARCH TEMPLATE %s;\n", + fmtQualifiedDumpable(tmplinfo)); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &tmplinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &tmplinfo->dobj, + "TEXT SEARCH TEMPLATE", qtmplname, + tmplinfo->dobj.namespace->dobj.name); if (tmplinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tmplinfo->dobj.catId, tmplinfo->dobj.dumpId, @@ -14379,13 +14109,13 @@ dumpTSTemplate(Archive *fout, TSTemplateInfo *tmplinfo) /* Dump Template Comments */ if (tmplinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TEXT SEARCH TEMPLATE", qtmplname, tmplinfo->dobj.namespace->dobj.name, "", tmplinfo->dobj.catId, 0, tmplinfo->dobj.dumpId); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qtmplname); } /* @@ -14398,8 +14128,8 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; PQExpBuffer query; + char *qcfgname; PGresult *res; char *nspname; char *prsname; @@ -14414,11 +14144,11 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); query = createPQExpBuffer(); + qcfgname = pg_strdup(fmtId(cfginfo->dobj.name)); + /* Fetch name and namespace of the config's parser */ - selectSourceSchema(fout, "pg_catalog"); appendPQExpBuffer(query, "SELECT nspname, prsname " "FROM pg_ts_parser p, pg_namespace n " "WHERE p.oid = '%u' AND n.oid = prsnamespace", @@ -14427,15 +14157,10 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo) nspname = PQgetvalue(res, 0, 0); prsname = PQgetvalue(res, 0, 1); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, cfginfo->dobj.namespace->dobj.name); - appendPQExpBuffer(q, "CREATE TEXT SEARCH CONFIGURATION %s (\n", - fmtId(cfginfo->dobj.name)); + fmtQualifiedDumpable(cfginfo)); - appendPQExpBufferStr(q, " PARSER = "); - if (strcmp(nspname, 
cfginfo->dobj.namespace->dobj.name) != 0) - appendPQExpBuffer(q, "%s.", fmtId(nspname)); + appendPQExpBuffer(q, " PARSER = %s.", fmtId(nspname)); appendPQExpBuffer(q, "%s );\n", fmtId(prsname)); PQclear(res); @@ -14469,7 +14194,7 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo) if (i > 0) appendPQExpBufferStr(q, ";\n"); appendPQExpBuffer(q, "\nALTER TEXT SEARCH CONFIGURATION %s\n", - fmtId(cfginfo->dobj.name)); + fmtQualifiedDumpable(cfginfo)); /* tokenname needs quoting, dictname does NOT */ appendPQExpBuffer(q, " ADD MAPPING FOR %s WITH %s", fmtId(tokenname), dictname); @@ -14483,19 +14208,13 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo) PQclear(res); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "DROP TEXT SEARCH CONFIGURATION %s", - fmtId(cfginfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, ".%s;\n", - fmtId(cfginfo->dobj.name)); - - appendPQExpBuffer(labelq, "TEXT SEARCH CONFIGURATION %s", - fmtId(cfginfo->dobj.name)); + appendPQExpBuffer(delq, "DROP TEXT SEARCH CONFIGURATION %s;\n", + fmtQualifiedDumpable(cfginfo)); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &cfginfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &cfginfo->dobj, + "TEXT SEARCH CONFIGURATION", qcfgname, + cfginfo->dobj.namespace->dobj.name); if (cfginfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, cfginfo->dobj.catId, cfginfo->dobj.dumpId, @@ -14510,14 +14229,14 @@ dumpTSConfig(Archive *fout, TSConfigInfo *cfginfo) /* Dump Configuration Comments */ if (cfginfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "TEXT SEARCH CONFIGURATION", qcfgname, cfginfo->dobj.namespace->dobj.name, cfginfo->rolname, cfginfo->dobj.catId, 0, cfginfo->dobj.dumpId); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); + free(qcfgname); } /* @@ -14530,7 +14249,6 @@ dumpForeignDataWrapper(Archive *fout, FdwInfo *fdwinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; char *qfdwname; /* Skip if not to be dumped */ @@ -14539,7 +14257,6 @@ dumpForeignDataWrapper(Archive *fout, FdwInfo *fdwinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); qfdwname = pg_strdup(fmtId(fdwinfo->dobj.name)); @@ -14560,11 +14277,10 @@ dumpForeignDataWrapper(Archive *fout, FdwInfo *fdwinfo) appendPQExpBuffer(delq, "DROP FOREIGN DATA WRAPPER %s;\n", qfdwname); - appendPQExpBuffer(labelq, "FOREIGN DATA WRAPPER %s", - qfdwname); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &fdwinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &fdwinfo->dobj, + "FOREIGN DATA WRAPPER", qfdwname, + NULL); if (fdwinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, fdwinfo->dobj.catId, fdwinfo->dobj.dumpId, @@ -14579,15 +14295,14 @@ dumpForeignDataWrapper(Archive *fout, FdwInfo *fdwinfo) /* Dump Foreign Data Wrapper Comments */ if (fdwinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "FOREIGN DATA WRAPPER", qfdwname, NULL, fdwinfo->rolname, fdwinfo->dobj.catId, 0, fdwinfo->dobj.dumpId); /* Handle the ACL */ if (fdwinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, fdwinfo->dobj.catId, fdwinfo->dobj.dumpId, - "FOREIGN DATA WRAPPER", - qfdwname, NULL, labelq->data, + "FOREIGN DATA WRAPPER", qfdwname, NULL, NULL, fdwinfo->rolname, fdwinfo->fdwacl, fdwinfo->rfdwacl, fdwinfo->initfdwacl, 
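/*
 * dumpACL() has likewise lost its preformatted tag argument: as the
 * hunk near the end of dumpACL() itself shows below, the archive tag is
 * now built internally from the type and name ("FOREIGN DATA WRAPPER
 * myfdw"), or "COLUMN table.column" when a subname is supplied.
 */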
fdwinfo->initrfdwacl); @@ -14596,7 +14311,6 @@ dumpForeignDataWrapper(Archive *fout, FdwInfo *fdwinfo) destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); } /* @@ -14609,7 +14323,6 @@ dumpForeignServer(Archive *fout, ForeignServerInfo *srvinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; PQExpBuffer query; PGresult *res; char *qsrvname; @@ -14621,13 +14334,11 @@ dumpForeignServer(Archive *fout, ForeignServerInfo *srvinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); query = createPQExpBuffer(); qsrvname = pg_strdup(fmtId(srvinfo->dobj.name)); /* look up the foreign-data wrapper */ - selectSourceSchema(fout, "pg_catalog"); appendPQExpBuffer(query, "SELECT fdwname " "FROM pg_foreign_data_wrapper w " "WHERE w.oid = '%u'", @@ -14658,10 +14369,9 @@ dumpForeignServer(Archive *fout, ForeignServerInfo *srvinfo) appendPQExpBuffer(delq, "DROP SERVER %s;\n", qsrvname); - appendPQExpBuffer(labelq, "SERVER %s", qsrvname); - if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &srvinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &srvinfo->dobj, + "SERVER", qsrvname, NULL); if (srvinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, srvinfo->dobj.catId, srvinfo->dobj.dumpId, @@ -14676,15 +14386,14 @@ dumpForeignServer(Archive *fout, ForeignServerInfo *srvinfo) /* Dump Foreign Server Comments */ if (srvinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "SERVER", qsrvname, NULL, srvinfo->rolname, srvinfo->dobj.catId, 0, srvinfo->dobj.dumpId); /* Handle the ACL */ if (srvinfo->dobj.dump & DUMP_COMPONENT_ACL) dumpACL(fout, srvinfo->dobj.catId, srvinfo->dobj.dumpId, - "FOREIGN SERVER", - qsrvname, NULL, labelq->data, + "FOREIGN SERVER", qsrvname, NULL, NULL, srvinfo->rolname, srvinfo->srvacl, srvinfo->rsrvacl, srvinfo->initsrvacl, srvinfo->initrsrvacl); @@ -14700,7 +14409,6 @@ dumpForeignServer(Archive *fout, ForeignServerInfo *srvinfo) destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); } @@ -14740,8 +14448,6 @@ dumpUserMappings(Archive *fout, * OPTIONS clause. A possible alternative is to skip such mappings * altogether, but it's not clear that that's an improvement. */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBuffer(query, "SELECT usename, " "array_to_string(ARRAY(" @@ -14889,8 +14595,7 @@ dumpDefaultACL(Archive *fout, DefaultACLInfo *daclinfo) * FOREIGN DATA WRAPPER, SERVER, or LARGE OBJECT. * 'name' is the formatted name of the object. Must be quoted etc. already. * 'subname' is the formatted name of the sub-object, if any. Must be quoted. - * 'tag' is the tag for the archive entry (should be the same tag as would be - * used for comments etc; for example "TABLE foo"). + * (Currently we assume that subname is only provided for table columns.) * 'nspname' is the namespace the object is in (NULL if none). * 'owner' is the owner, NULL if there is no owner (for languages). 
* 'acls' contains the ACL string of the object from the appropriate system @@ -14912,7 +14617,7 @@ dumpDefaultACL(Archive *fout, DefaultACLInfo *daclinfo) static void dumpACL(Archive *fout, CatalogId objCatId, DumpId objDumpId, const char *type, const char *name, const char *subname, - const char *tag, const char *nspname, const char *owner, + const char *nspname, const char *owner, const char *acls, const char *racls, const char *initacls, const char *initracls) { @@ -14940,7 +14645,8 @@ dumpACL(Archive *fout, CatalogId objCatId, DumpId objDumpId, if (strlen(initacls) != 0 || strlen(initracls) != 0) { appendPQExpBuffer(sql, "SELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\n"); - if (!buildACLCommands(name, subname, type, initacls, initracls, owner, + if (!buildACLCommands(name, subname, nspname, type, + initacls, initracls, owner, "", fout->remoteVersion, sql)) exit_horribly(NULL, "could not parse initial GRANT ACL list (%s) or initial REVOKE ACL list (%s) for object \"%s\" (%s)\n", @@ -14948,21 +14654,32 @@ dumpACL(Archive *fout, CatalogId objCatId, DumpId objDumpId, appendPQExpBuffer(sql, "SELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\n"); } - if (!buildACLCommands(name, subname, type, acls, racls, owner, + if (!buildACLCommands(name, subname, nspname, type, + acls, racls, owner, "", fout->remoteVersion, sql)) exit_horribly(NULL, "could not parse GRANT ACL list (%s) or REVOKE ACL list (%s) for object \"%s\" (%s)\n", acls, racls, name, type); if (sql->len > 0) + { + PQExpBuffer tag = createPQExpBuffer(); + + if (subname) + appendPQExpBuffer(tag, "COLUMN %s.%s", name, subname); + else + appendPQExpBuffer(tag, "%s %s", type, name); + ArchiveEntry(fout, nilCatalogId, createDumpId(), - tag, nspname, + tag->data, nspname, NULL, owner ? owner : "", false, "ACL", SECTION_NONE, sql->data, "", NULL, &(objDumpId), 1, NULL, NULL); + destroyPQExpBuffer(tag); + } destroyPQExpBuffer(sql); } @@ -14971,8 +14688,8 @@ dumpACL(Archive *fout, CatalogId objCatId, DumpId objDumpId, * dumpSecLabel * * This routine is used to dump any security labels associated with the - * object handed to this routine. The routine takes a constant character - * string for the target part of the security-label command, plus + * object handed to this routine. The routine takes the object type + * and object name (ready to print, except for schema decoration), plus * the namespace and owner of the object (for labeling the ArchiveEntry), * plus catalog ID and subid which are the lookup key for pg_seclabel, * plus the dump ID for the object (for setting a dependency). @@ -14986,7 +14703,7 @@ dumpACL(Archive *fout, CatalogId objCatId, DumpId objDumpId, * calling ArchiveEntry() for the specified object. */ static void -dumpSecLabel(Archive *fout, const char *target, +dumpSecLabel(Archive *fout, const char *type, const char *name, const char *namespace, const char *owner, CatalogId catalogId, int subid, DumpId dumpId) { @@ -15001,7 +14718,7 @@ dumpSecLabel(Archive *fout, const char *target, return; /* Security labels are schema not data ... 
except blob labels are data */ - if (strncmp(target, "LARGE OBJECT ", 13) != 0) + if (strcmp(type, "LARGE OBJECT") != 0) { if (dopt->dataOnly) return; @@ -15027,21 +14744,29 @@ dumpSecLabel(Archive *fout, const char *target, continue; appendPQExpBuffer(query, - "SECURITY LABEL FOR %s ON %s IS ", - fmtId(labels[i].provider), target); + "SECURITY LABEL FOR %s ON %s ", + fmtId(labels[i].provider), type); + if (namespace && *namespace) + appendPQExpBuffer(query, "%s.", fmtId(namespace)); + appendPQExpBuffer(query, "%s IS ", name); appendStringLiteralAH(query, labels[i].label, fout); appendPQExpBufferStr(query, ";\n"); } if (query->len > 0) { + PQExpBuffer tag = createPQExpBuffer(); + + appendPQExpBuffer(tag, "%s %s", type, name); ArchiveEntry(fout, nilCatalogId, createDumpId(), - target, namespace, NULL, owner, + tag->data, namespace, NULL, owner, false, "SECURITY LABEL", SECTION_NONE, query->data, "", NULL, &(dumpId), 1, NULL, NULL); + destroyPQExpBuffer(tag); } + destroyPQExpBuffer(query); } @@ -15093,13 +14818,14 @@ dumpTableSecLabel(Archive *fout, TableInfo *tbinfo, const char *reltypename) if (objsubid == 0) { appendPQExpBuffer(target, "%s %s", reltypename, - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); } else { colname = getAttrName(objsubid, tbinfo); - /* first fmtId result must be consumed before calling it again */ - appendPQExpBuffer(target, "COLUMN %s", fmtId(tbinfo->dobj.name)); + /* first fmtXXX result must be consumed before calling again */ + appendPQExpBuffer(target, "COLUMN %s", + fmtQualifiedDumpable(tbinfo)); appendPQExpBuffer(target, ".%s", fmtId(colname)); } appendPQExpBuffer(query, "SECURITY LABEL FOR %s ON %s IS ", @@ -15297,14 +15023,12 @@ dumpTable(Archive *fout, TableInfo *tbinfo) { const char *objtype = (tbinfo->relkind == RELKIND_SEQUENCE) ? 
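/*
 * The acltag buffer removed just below is another instance of the same
 * cleanup: dumpACL() can now derive the "TABLE name" / "SEQUENCE name"
 * archive tag from the objtype and namecopy arguments on its own.
 */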
"SEQUENCE" : "TABLE"; - char *acltag = psprintf("%s %s", objtype, namecopy); dumpACL(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId, - objtype, namecopy, NULL, acltag, + objtype, namecopy, NULL, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, tbinfo->relacl, tbinfo->rrelacl, tbinfo->initrelacl, tbinfo->initrrelacl); - free(acltag); } /* @@ -15386,17 +15110,14 @@ dumpTable(Archive *fout, TableInfo *tbinfo) char *initattacl = PQgetvalue(res, i, 3); char *initrattacl = PQgetvalue(res, i, 4); char *attnamecopy; - char *acltag; attnamecopy = pg_strdup(fmtId(attname)); - acltag = psprintf("COLUMN %s.%s", namecopy, attnamecopy); /* Column's GRANT type is always TABLE */ - dumpACL(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId, "TABLE", - namecopy, attnamecopy, acltag, + dumpACL(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId, + "TABLE", namecopy, attnamecopy, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, attacl, rattacl, initattacl, initrattacl); free(attnamecopy); - free(acltag); } PQclear(res); destroyPQExpBuffer(query); @@ -15488,12 +15209,8 @@ createDummyViewAsClause(Archive *fout, TableInfo *tbinfo) coll = findCollationByOid(tbinfo->attcollation[j]); if (coll) - { - /* always schema-qualify, don't try to be smart */ - appendPQExpBuffer(result, " COLLATE %s.", - fmtId(coll->dobj.namespace->dobj.name)); - appendPQExpBufferStr(result, fmtId(coll->dobj.name)); - } + appendPQExpBuffer(result, " COLLATE %s", + fmtQualifiedDumpable(coll)); } appendPQExpBuffer(result, " AS %s", fmtId(tbinfo->attnames[j])); @@ -15512,7 +15229,8 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q = createPQExpBuffer(); PQExpBuffer delq = createPQExpBuffer(); - PQExpBuffer labelq = createPQExpBuffer(); + char *qrelname; + char *qualrelname; int numParents; TableInfo **parents; int actual_atts; /* number of attrs in this CREATE statement */ @@ -15523,8 +15241,8 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) int j, k; - /* Make sure we are in proper schema */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); + qrelname = pg_strdup(fmtId(tbinfo->dobj.name)); + qualrelname = pg_strdup(fmtQualifiedDumpable(tbinfo)); if (dopt->binary_upgrade) binary_upgrade_set_type_oids_by_rel_oid(fout, q, @@ -15541,20 +15259,14 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) reltypename = "VIEW"; - /* - * DROP must be fully qualified in case same name appears in - * pg_catalog - */ - appendPQExpBuffer(delq, "DROP VIEW %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP VIEW %s;\n", qualrelname); if (dopt->binary_upgrade) binary_upgrade_set_pg_class_oids(fout, q, tbinfo->dobj.catId.oid, false); - appendPQExpBuffer(q, "CREATE VIEW %s", fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(q, "CREATE VIEW %s", qualrelname); + if (tbinfo->dummy_view) result = createDummyViewAsClause(fout, tbinfo); else @@ -15573,9 +15285,6 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) if (tbinfo->checkoption != NULL && !tbinfo->dummy_view) appendPQExpBuffer(q, "\n WITH %s CHECK OPTION", tbinfo->checkoption); appendPQExpBufferStr(q, ";\n"); - - appendPQExpBuffer(labelq, "VIEW %s", - fmtId(tbinfo->dobj.name)); } else { @@ -15627,17 +15336,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) numParents = tbinfo->numParents; parents = tbinfo->parents; - /* - * DROP must be fully qualified in case same name appears in - * pg_catalog - */ - appendPQExpBuffer(delq, "DROP %s %s.", reltypename, - 
fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - fmtId(tbinfo->dobj.name)); - - appendPQExpBuffer(labelq, "%s %s", reltypename, - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP %s %s;\n", reltypename, qualrelname); if (dopt->binary_upgrade) binary_upgrade_set_pg_class_oids(fout, q, @@ -15647,7 +15346,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) tbinfo->relpersistence == RELPERSISTENCE_UNLOGGED ? "UNLOGGED " : "", reltypename, - fmtId(tbinfo->dobj.name)); + qualrelname); /* * Attach to type, if reloftype; except in case of a binary upgrade, @@ -15673,11 +15372,8 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) exit_horribly(NULL, "invalid number of parents %d for table \"%s\"\n", tbinfo->numParents, tbinfo->dobj.name); - appendPQExpBuffer(q, " PARTITION OF "); - if (parentRel->dobj.namespace != tbinfo->dobj.namespace) - appendPQExpBuffer(q, "%s.", - fmtId(parentRel->dobj.namespace->dobj.name)); - appendPQExpBufferStr(q, fmtId(parentRel->dobj.name)); + appendPQExpBuffer(q, " PARTITION OF %s", + fmtQualifiedDumpable(parentRel)); } if (tbinfo->relkind != RELKIND_MATVIEW) @@ -15762,12 +15458,8 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) coll = findCollationByOid(tbinfo->attcollation[j]); if (coll) - { - /* always schema-qualify, don't try to be smart */ - appendPQExpBuffer(q, " COLLATE %s.", - fmtId(coll->dobj.namespace->dobj.name)); - appendPQExpBufferStr(q, fmtId(coll->dobj.name)); - } + appendPQExpBuffer(q, " COLLATE %s", + fmtQualifiedDumpable(coll)); } if (has_default) @@ -15831,10 +15523,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) if (k > 0) appendPQExpBufferStr(q, ", "); - if (parentRel->dobj.namespace != tbinfo->dobj.namespace) - appendPQExpBuffer(q, "%s.", - fmtId(parentRel->dobj.namespace->dobj.name)); - appendPQExpBufferStr(q, fmtId(parentRel->dobj.name)); + appendPQExpBufferStr(q, fmtQualifiedDumpable(parentRel)); } appendPQExpBufferChar(q, ')'); } @@ -15926,16 +15615,16 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) tbinfo->attalign[j]); appendStringLiteralAH(q, tbinfo->attnames[j], fout); appendPQExpBufferStr(q, "\n AND attrelid = "); - appendStringLiteralAH(q, fmtId(tbinfo->dobj.name), fout); + appendStringLiteralAH(q, qualrelname, fout); appendPQExpBufferStr(q, "::pg_catalog.regclass;\n"); if (tbinfo->relkind == RELKIND_RELATION || tbinfo->relkind == RELKIND_PARTITIONED_TABLE) appendPQExpBuffer(q, "ALTER TABLE ONLY %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); else appendPQExpBuffer(q, "ALTER FOREIGN TABLE ONLY %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); appendPQExpBuffer(q, "DROP COLUMN %s;\n", fmtId(tbinfo->attnames[j])); } @@ -15947,7 +15636,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) "WHERE attname = "); appendStringLiteralAH(q, tbinfo->attnames[j], fout); appendPQExpBufferStr(q, "\n AND attrelid = "); - appendStringLiteralAH(q, fmtId(tbinfo->dobj.name), fout); + appendStringLiteralAH(q, qualrelname, fout); appendPQExpBufferStr(q, "::pg_catalog.regclass;\n"); } } @@ -15961,7 +15650,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) appendPQExpBufferStr(q, "\n-- For binary upgrade, set up inherited constraint.\n"); appendPQExpBuffer(q, "ALTER TABLE ONLY %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); appendPQExpBuffer(q, " ADD CONSTRAINT %s ", fmtId(constr->dobj.name)); appendPQExpBuffer(q, "%s;\n", constr->condef); @@ -15970,7 +15659,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) "WHERE contype = 'c' AND conname = "); appendStringLiteralAH(q, 
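/*
 * The appendStringLiteralAH(q, qualrelname, ...) calls in this function
 * work because regclass input accepts a quoted, schema-qualified name,
 * e.g. '"public"."my table"'::pg_catalog.regclass; the old fmtId()-only
 * form relied on search_path to resolve the bare relation name.
 */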
constr->dobj.name, fout); appendPQExpBufferStr(q, "\n AND conrelid = "); - appendStringLiteralAH(q, fmtId(tbinfo->dobj.name), fout); + appendStringLiteralAH(q, qualrelname, fout); appendPQExpBufferStr(q, "::pg_catalog.regclass;\n"); } @@ -15980,34 +15669,24 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) for (k = 0; k < numParents; k++) { TableInfo *parentRel = parents[k]; - PQExpBuffer parentname = createPQExpBuffer(); - - /* Schema-qualify the parent table, if necessary */ - if (parentRel->dobj.namespace != tbinfo->dobj.namespace) - appendPQExpBuffer(parentname, "%s.", - fmtId(parentRel->dobj.namespace->dobj.name)); - - appendPQExpBuffer(parentname, "%s", - fmtId(parentRel->dobj.name)); /* In the partitioning case, we alter the parent */ if (tbinfo->ispartition) appendPQExpBuffer(q, "ALTER TABLE ONLY %s ATTACH PARTITION ", - parentname->data); + fmtQualifiedDumpable(parentRel)); else appendPQExpBuffer(q, "ALTER TABLE ONLY %s INHERIT ", - fmtId(tbinfo->dobj.name)); + qualrelname); /* Partition needs specifying the bounds */ if (tbinfo->ispartition) appendPQExpBuffer(q, "%s %s;\n", - fmtId(tbinfo->dobj.name), + qualrelname, tbinfo->partbound); else - appendPQExpBuffer(q, "%s;\n", parentname->data); - - destroyPQExpBuffer(parentname); + appendPQExpBuffer(q, "%s;\n", + fmtQualifiedDumpable(parentRel)); } } @@ -16015,7 +15694,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) { appendPQExpBufferStr(q, "\n-- For binary upgrade, set up typed tables this way.\n"); appendPQExpBuffer(q, "ALTER TABLE ONLY %s OF %s;\n", - fmtId(tbinfo->dobj.name), + qualrelname, tbinfo->reloftype); } } @@ -16036,7 +15715,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) "SET relfrozenxid = '%u', relminmxid = '%u'\n" "WHERE oid = ", tbinfo->frozenxid, tbinfo->minmxid); - appendStringLiteralAH(q, fmtId(tbinfo->dobj.name), fout); + appendStringLiteralAH(q, qualrelname, fout); appendPQExpBufferStr(q, "::pg_catalog.regclass;\n"); if (tbinfo->toast_oid) @@ -16068,7 +15747,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) appendPQExpBufferStr(q, "UPDATE pg_catalog.pg_class\n" "SET relispopulated = 't'\n" "WHERE oid = "); - appendStringLiteralAH(q, fmtId(tbinfo->dobj.name), fout); + appendStringLiteralAH(q, qualrelname, fout); appendPQExpBufferStr(q, "::pg_catalog.regclass;\n"); } @@ -16091,7 +15770,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) tbinfo->notnull[j] && !tbinfo->inhNotNull[j]) { appendPQExpBuffer(q, "ALTER TABLE ONLY %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); appendPQExpBuffer(q, "ALTER COLUMN %s SET NOT NULL;\n", fmtId(tbinfo->attnames[j])); } @@ -16104,7 +15783,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) if (tbinfo->attstattarget[j] >= 0) { appendPQExpBuffer(q, "ALTER TABLE ONLY %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); appendPQExpBuffer(q, "ALTER COLUMN %s ", fmtId(tbinfo->attnames[j])); appendPQExpBuffer(q, "SET STATISTICS %d;\n", @@ -16141,7 +15820,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) if (storage != NULL) { appendPQExpBuffer(q, "ALTER TABLE ONLY %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); appendPQExpBuffer(q, "ALTER COLUMN %s ", fmtId(tbinfo->attnames[j])); appendPQExpBuffer(q, "SET STORAGE %s;\n", @@ -16155,7 +15834,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) if (tbinfo->attoptions[j] && tbinfo->attoptions[j][0] != '\0') { appendPQExpBuffer(q, "ALTER TABLE ONLY %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); appendPQExpBuffer(q, "ALTER COLUMN %s ", fmtId(tbinfo->attnames[j])); appendPQExpBuffer(q, "SET (%s);\n", 
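/*
 * All of the per-column fixups in this stretch (DROP COLUMN, SET
 * STATISTICS, SET STORAGE, SET (...) attribute options) now target
 * qualrelname, so they restore correctly whatever the active
 * search_path is, e.g. (illustrative):
 *
 *     ALTER TABLE ONLY public.mytab ALTER COLUMN a SET STATISTICS 1000;
 */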
@@ -16170,7 +15849,7 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) tbinfo->attfdwoptions[j][0] != '\0') { appendPQExpBuffer(q, "ALTER FOREIGN TABLE %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); appendPQExpBuffer(q, "ALTER COLUMN %s ", fmtId(tbinfo->attnames[j])); appendPQExpBuffer(q, "OPTIONS (\n %s\n);\n", @@ -16194,25 +15873,27 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) else if (tbinfo->relreplident == REPLICA_IDENTITY_NOTHING) { appendPQExpBuffer(q, "\nALTER TABLE ONLY %s REPLICA IDENTITY NOTHING;\n", - fmtId(tbinfo->dobj.name)); + qualrelname); } else if (tbinfo->relreplident == REPLICA_IDENTITY_FULL) { appendPQExpBuffer(q, "\nALTER TABLE ONLY %s REPLICA IDENTITY FULL;\n", - fmtId(tbinfo->dobj.name)); + qualrelname); } } if (tbinfo->relkind == RELKIND_FOREIGN_TABLE && tbinfo->hasoids) appendPQExpBuffer(q, "\nALTER TABLE ONLY %s SET WITH OIDS;\n", - fmtId(tbinfo->dobj.name)); + qualrelname); if (tbinfo->forcerowsec) appendPQExpBuffer(q, "\nALTER TABLE ONLY %s FORCE ROW LEVEL SECURITY;\n", - fmtId(tbinfo->dobj.name)); + qualrelname); if (dopt->binary_upgrade) - binary_upgrade_extension_member(q, &tbinfo->dobj, labelq->data); + binary_upgrade_extension_member(q, &tbinfo->dobj, + reltypename, qrelname, + tbinfo->dobj.namespace->dobj.name); if (tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId, @@ -16251,7 +15932,8 @@ dumpTableSchema(Archive *fout, TableInfo *tbinfo) destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qrelname); + free(qualrelname); } /* @@ -16265,6 +15947,7 @@ dumpAttrDef(Archive *fout, AttrDefInfo *adinfo) int adnum = adinfo->adnum; PQExpBuffer q; PQExpBuffer delq; + char *qualrelname; char *tag; /* Skip if table definition not to be dumped */ @@ -16278,19 +15961,16 @@ dumpAttrDef(Archive *fout, AttrDefInfo *adinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); + qualrelname = pg_strdup(fmtQualifiedDumpable(tbinfo)); + appendPQExpBuffer(q, "ALTER TABLE ONLY %s ", - fmtId(tbinfo->dobj.name)); + qualrelname); appendPQExpBuffer(q, "ALTER COLUMN %s SET DEFAULT %s;\n", fmtId(tbinfo->attnames[adnum - 1]), adinfo->adef_expr); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ - appendPQExpBuffer(delq, "ALTER TABLE %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s ", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delq, "ALTER TABLE %s ", + qualrelname); appendPQExpBuffer(delq, "ALTER COLUMN %s DROP DEFAULT;\n", fmtId(tbinfo->attnames[adnum - 1])); @@ -16310,6 +15990,7 @@ dumpAttrDef(Archive *fout, AttrDefInfo *adinfo) free(tag); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); + free(qualrelname); } /* @@ -16358,17 +16039,15 @@ dumpIndex(Archive *fout, IndxInfo *indxinfo) bool is_constraint = (indxinfo->indexconstraint != 0); PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; + char *qindxname; if (dopt->dataOnly) return; q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); - appendPQExpBuffer(labelq, "INDEX %s", - fmtId(indxinfo->dobj.name)); + qindxname = pg_strdup(fmtId(indxinfo->dobj.name)); /* * If there's an associated constraint, don't dump the index per se, but @@ -16390,28 +16069,24 @@ dumpIndex(Archive *fout, IndxInfo *indxinfo) if (indxinfo->indisclustered) { appendPQExpBuffer(q, "\nALTER TABLE %s CLUSTER", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); + /* index name is not qualified in this syntax */ appendPQExpBuffer(q, " ON %s;\n", - 
fmtId(indxinfo->dobj.name)); + qindxname); } /* If the index defines identity, we need to record that. */ if (indxinfo->indisreplident) { appendPQExpBuffer(q, "\nALTER TABLE ONLY %s REPLICA IDENTITY USING", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); + /* index name is not qualified in this syntax */ appendPQExpBuffer(q, " INDEX %s;\n", - fmtId(indxinfo->dobj.name)); + qindxname); } - /* - * DROP must be fully qualified in case same name appears in - * pg_catalog - */ - appendPQExpBuffer(delq, "DROP INDEX %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - fmtId(indxinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP INDEX %s;\n", + fmtQualifiedDumpable(indxinfo)); if (indxinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, indxinfo->dobj.catId, indxinfo->dobj.dumpId, @@ -16427,7 +16102,7 @@ dumpIndex(Archive *fout, IndxInfo *indxinfo) /* Dump Index Comments */ if (indxinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "INDEX", qindxname, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, indxinfo->dobj.catId, 0, @@ -16436,7 +16111,7 @@ dumpIndex(Archive *fout, IndxInfo *indxinfo) destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); + free(qindxname); } /* @@ -16454,13 +16129,9 @@ dumpIndexAttach(Archive *fout, IndexAttachInfo *attachinfo) PQExpBuffer q = createPQExpBuffer(); appendPQExpBuffer(q, "\nALTER INDEX %s ", - fmtQualifiedId(fout->remoteVersion, - attachinfo->parentIdx->dobj.namespace->dobj.name, - attachinfo->parentIdx->dobj.name)); + fmtQualifiedDumpable(attachinfo->parentIdx)); appendPQExpBuffer(q, "ATTACH PARTITION %s;\n", - fmtQualifiedId(fout->remoteVersion, - attachinfo->partitionIdx->dobj.namespace->dobj.name, - attachinfo->partitionIdx->dobj.name)); + fmtQualifiedDumpable(attachinfo->partitionIdx)); ArchiveEntry(fout, attachinfo->dobj.catId, attachinfo->dobj.dumpId, attachinfo->dobj.name, @@ -16485,8 +16156,8 @@ dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer q; PQExpBuffer delq; - PQExpBuffer labelq; PQExpBuffer query; + char *qstatsextname; PGresult *res; char *stxdef; @@ -16496,11 +16167,9 @@ dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo) q = createPQExpBuffer(); delq = createPQExpBuffer(); - labelq = createPQExpBuffer(); query = createPQExpBuffer(); - /* Make sure we are in proper schema so references are qualified */ - selectSourceSchema(fout, statsextinfo->dobj.namespace->dobj.name); + qstatsextname = pg_strdup(fmtId(statsextinfo->dobj.name)); appendPQExpBuffer(query, "SELECT " "pg_catalog.pg_get_statisticsobjdef('%u'::pg_catalog.oid)", @@ -16510,16 +16179,11 @@ dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo) stxdef = PQgetvalue(res, 0, 0); - appendPQExpBuffer(labelq, "STATISTICS %s", - fmtId(statsextinfo->dobj.name)); - /* Result of pg_get_statisticsobjdef is complete except for semicolon */ appendPQExpBuffer(q, "%s;\n", stxdef); - appendPQExpBuffer(delq, "DROP STATISTICS %s.", - fmtId(statsextinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s;\n", - fmtId(statsextinfo->dobj.name)); + appendPQExpBuffer(delq, "DROP STATISTICS %s;\n", + fmtQualifiedDumpable(statsextinfo)); if (statsextinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, statsextinfo->dobj.catId, @@ -16535,7 +16199,7 @@ dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo) /* Dump Statistics Comments */ if (statsextinfo->dobj.dump & 
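/*
 * In the CLUSTER ON and REPLICA IDENTITY USING INDEX hunks above, only
 * the table is qualified; as the new comments there note, the index
 * name is not qualified in those syntaxes.  That is safe because an
 * index always lives in the same schema as its table, e.g.
 *
 *     ALTER TABLE public.mytab CLUSTER ON mytab_pkey;
 */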
DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "STATISTICS", qstatsextname, statsextinfo->dobj.namespace->dobj.name, statsextinfo->rolname, statsextinfo->dobj.catId, 0, @@ -16544,8 +16208,8 @@ dumpStatisticsExt(Archive *fout, StatsExtInfo *statsextinfo) PQclear(res); destroyPQExpBuffer(q); destroyPQExpBuffer(delq); - destroyPQExpBuffer(labelq); destroyPQExpBuffer(query); + free(qstatsextname); } /* @@ -16587,7 +16251,7 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo) indxinfo->dobj.catId.oid, true); appendPQExpBuffer(q, "ALTER TABLE ONLY %s\n", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); appendPQExpBuffer(q, " ADD CONSTRAINT %s ", fmtId(coninfo->dobj.name)); @@ -16637,19 +16301,14 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo) if (indxinfo->indisclustered) { appendPQExpBuffer(q, "\nALTER TABLE %s CLUSTER", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); + /* index name is not qualified in this syntax */ appendPQExpBuffer(q, " ON %s;\n", fmtId(indxinfo->dobj.name)); } - /* - * DROP must be fully qualified in case same name appears in - * pg_catalog - */ - appendPQExpBuffer(delq, "ALTER TABLE ONLY %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s ", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delq, "ALTER TABLE ONLY %s ", + fmtQualifiedDumpable(tbinfo)); appendPQExpBuffer(delq, "DROP CONSTRAINT %s;\n", fmtId(coninfo->dobj.name)); @@ -16673,19 +16332,13 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo) * current table data is not processed */ appendPQExpBuffer(q, "ALTER TABLE ONLY %s\n", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); appendPQExpBuffer(q, " ADD CONSTRAINT %s %s;\n", fmtId(coninfo->dobj.name), coninfo->condef); - /* - * DROP must be fully qualified in case same name appears in - * pg_catalog - */ - appendPQExpBuffer(delq, "ALTER TABLE ONLY %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s ", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delq, "ALTER TABLE ONLY %s ", + fmtQualifiedDumpable(tbinfo)); appendPQExpBuffer(delq, "DROP CONSTRAINT %s;\n", fmtId(coninfo->dobj.name)); @@ -16711,19 +16364,13 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo) { /* not ONLY since we want it to propagate to children */ appendPQExpBuffer(q, "ALTER TABLE %s\n", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); appendPQExpBuffer(q, " ADD CONSTRAINT %s %s;\n", fmtId(coninfo->dobj.name), coninfo->condef); - /* - * DROP must be fully qualified in case same name appears in - * pg_catalog - */ - appendPQExpBuffer(delq, "ALTER TABLE %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s ", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delq, "ALTER TABLE %s ", + fmtQualifiedDumpable(tbinfo)); appendPQExpBuffer(delq, "DROP CONSTRAINT %s;\n", fmtId(coninfo->dobj.name)); @@ -16750,19 +16397,13 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo) if (coninfo->separate) { appendPQExpBuffer(q, "ALTER DOMAIN %s\n", - fmtId(tyinfo->dobj.name)); + fmtQualifiedDumpable(tyinfo)); appendPQExpBuffer(q, " ADD CONSTRAINT %s %s;\n", fmtId(coninfo->dobj.name), coninfo->condef); - /* - * DROP must be fully qualified in case same name appears in - * pg_catalog - */ - appendPQExpBuffer(delq, "ALTER DOMAIN %s.", - fmtId(tyinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delq, "%s ", - fmtId(tyinfo->dobj.name)); + appendPQExpBuffer(delq, "ALTER DOMAIN %s ", + fmtQualifiedDumpable(tyinfo)); 
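/*
 * Constraint names behave like index names: they carry no schema of
 * their own and are resolved via the (now qualified) table or domain
 * they belong to, e.g. (illustrative):
 *
 *     ALTER DOMAIN public.mydom DROP CONSTRAINT mydom_check;
 */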
appendPQExpBuffer(delq, "DROP CONSTRAINT %s;\n", fmtId(coninfo->dobj.name)); @@ -16807,21 +16448,23 @@ static void dumpTableConstraintComment(Archive *fout, ConstraintInfo *coninfo) { TableInfo *tbinfo = coninfo->contable; - PQExpBuffer labelq = createPQExpBuffer(); + PQExpBuffer conprefix = createPQExpBuffer(); + char *qtabname; - appendPQExpBuffer(labelq, "CONSTRAINT %s ", + qtabname = pg_strdup(fmtId(tbinfo->dobj.name)); + + appendPQExpBuffer(conprefix, "CONSTRAINT %s ON", fmtId(coninfo->dobj.name)); - appendPQExpBuffer(labelq, "ON %s", - fmtId(tbinfo->dobj.name)); if (coninfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, conprefix->data, qtabname, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, coninfo->dobj.catId, 0, coninfo->separate ? coninfo->dobj.dumpId : tbinfo->dobj.dumpId); - destroyPQExpBuffer(labelq); + destroyPQExpBuffer(conprefix); + free(qtabname); } /* @@ -16874,52 +16517,42 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) bufx[32]; PQExpBuffer query = createPQExpBuffer(); PQExpBuffer delqry = createPQExpBuffer(); - PQExpBuffer labelq = createPQExpBuffer(); + char *qseqname; + + qseqname = pg_strdup(fmtId(tbinfo->dobj.name)); if (fout->remoteVersion >= 100000) { - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - appendPQExpBuffer(query, "SELECT format_type(seqtypid, NULL), " "seqstart, seqincrement, " "seqmax, seqmin, " "seqcache, seqcycle " - "FROM pg_class c " - "JOIN pg_sequence s ON (s.seqrelid = c.oid) " - "WHERE c.oid = '%u'::oid", + "FROM pg_catalog.pg_sequence " + "WHERE seqrelid = '%u'::oid", tbinfo->dobj.catId.oid); } else if (fout->remoteVersion >= 80400) { /* - * Before PostgreSQL 10, sequence metadata is in the sequence itself, - * so switch to the sequence's schema instead of pg_catalog. + * Before PostgreSQL 10, sequence metadata is in the sequence itself. * * Note: it might seem that 'bigint' potentially needs to be * schema-qualified, but actually that's a keyword. */ - - /* Make sure we are in proper schema */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); - appendPQExpBuffer(query, "SELECT 'bigint' AS sequence_type, " "start_value, increment_by, max_value, min_value, " "cache_value, is_cycled FROM %s", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); } else { - /* Make sure we are in proper schema */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); - appendPQExpBuffer(query, "SELECT 'bigint' AS sequence_type, " "0 AS start_value, increment_by, max_value, min_value, " "cache_value, is_cycled FROM %s", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); } res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK); @@ -16978,14 +16611,12 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) maxv = NULL; /* - * DROP must be fully qualified in case same name appears in pg_catalog + * Identity sequences are not to be dropped separately. 
*/ if (!tbinfo->is_identity_sequence) { - appendPQExpBuffer(delqry, "DROP SEQUENCE %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delqry, "%s;\n", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delqry, "DROP SEQUENCE %s;\n", + fmtQualifiedDumpable(tbinfo)); } resetPQExpBuffer(query); @@ -17004,7 +16635,7 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) appendPQExpBuffer(query, "ALTER TABLE %s ", - fmtId(owning_tab->dobj.name)); + fmtQualifiedDumpable(owning_tab)); appendPQExpBuffer(query, "ALTER COLUMN %s ADD GENERATED ", fmtId(owning_tab->attnames[tbinfo->owning_col - 1])); @@ -17013,13 +16644,13 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) else if (owning_tab->attidentity[tbinfo->owning_col - 1] == ATTRIBUTE_IDENTITY_BY_DEFAULT) appendPQExpBuffer(query, "BY DEFAULT"); appendPQExpBuffer(query, " AS IDENTITY (\n SEQUENCE NAME %s\n", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); } else { appendPQExpBuffer(query, "CREATE SEQUENCE %s\n", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); if (strcmp(seqtype, "bigint") != 0) appendPQExpBuffer(query, " AS %s\n", seqtype); @@ -17049,13 +16680,12 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) else appendPQExpBufferStr(query, ";\n"); - appendPQExpBuffer(labelq, "SEQUENCE %s", fmtId(tbinfo->dobj.name)); - /* binary_upgrade: no need to clear TOAST table oid */ if (dopt->binary_upgrade) binary_upgrade_extension_member(query, &tbinfo->dobj, - labelq->data); + "SEQUENCE", qseqname, + tbinfo->dobj.namespace->dobj.name); if (tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, tbinfo->dobj.catId, tbinfo->dobj.dumpId, @@ -17092,9 +16722,9 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) { resetPQExpBuffer(query); appendPQExpBuffer(query, "ALTER SEQUENCE %s", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); appendPQExpBuffer(query, " OWNED BY %s", - fmtId(owning_tab->dobj.name)); + fmtQualifiedDumpable(owning_tab)); appendPQExpBuffer(query, ".%s;\n", fmtId(owning_tab->attnames[tbinfo->owning_col - 1])); @@ -17113,12 +16743,12 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) /* Dump Sequence Comments and Security Labels */ if (tbinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "SEQUENCE", qseqname, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId); if (tbinfo->dobj.dump & DUMP_COMPONENT_SECLABEL) - dumpSecLabel(fout, labelq->data, + dumpSecLabel(fout, "SEQUENCE", qseqname, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, tbinfo->dobj.catId, 0, tbinfo->dobj.dumpId); @@ -17126,7 +16756,7 @@ dumpSequence(Archive *fout, TableInfo *tbinfo) destroyPQExpBuffer(query); destroyPQExpBuffer(delqry); - destroyPQExpBuffer(labelq); + free(qseqname); } /* @@ -17142,12 +16772,9 @@ dumpSequenceData(Archive *fout, TableDataInfo *tdinfo) bool called; PQExpBuffer query = createPQExpBuffer(); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); - appendPQExpBuffer(query, "SELECT last_value, is_called FROM %s", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK); @@ -17165,7 +16792,7 @@ dumpSequenceData(Archive *fout, TableDataInfo *tdinfo) resetPQExpBuffer(query); appendPQExpBufferStr(query, "SELECT pg_catalog.setval("); - appendStringLiteralAH(query, fmtId(tbinfo->dobj.name), fout); + appendStringLiteralAH(query, fmtQualifiedDumpable(tbinfo), fout); appendPQExpBuffer(query, ", 
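/*
 * pg_catalog.setval() takes its first argument as regclass, so writing
 * the fmtQualifiedDumpable() result as a string literal yields e.g.
 *
 *     SELECT pg_catalog.setval('public.myseq', 42, true);
 *
 * which resolves independently of search_path.
 */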
%s, %s);\n", last, (called ? "true" : "false")); @@ -17196,7 +16823,8 @@ dumpTrigger(Archive *fout, TriggerInfo *tginfo) TableInfo *tbinfo = tginfo->tgtable; PQExpBuffer query; PQExpBuffer delqry; - PQExpBuffer labelq; + PQExpBuffer trigprefix; + char *qtabname; char *tgargs; size_t lentgargs; const char *p; @@ -17212,17 +16840,14 @@ dumpTrigger(Archive *fout, TriggerInfo *tginfo) query = createPQExpBuffer(); delqry = createPQExpBuffer(); - labelq = createPQExpBuffer(); + trigprefix = createPQExpBuffer(); + + qtabname = pg_strdup(fmtId(tbinfo->dobj.name)); - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ appendPQExpBuffer(delqry, "DROP TRIGGER %s ", fmtId(tginfo->dobj.name)); - appendPQExpBuffer(delqry, "ON %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delqry, "%s;\n", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delqry, "ON %s;\n", + fmtQualifiedDumpable(tbinfo)); if (tginfo->tgdef) { @@ -17286,7 +16911,7 @@ dumpTrigger(Archive *fout, TriggerInfo *tginfo) findx++; } appendPQExpBuffer(query, " ON %s\n", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); if (tginfo->tgisconstraint) { @@ -17344,7 +16969,7 @@ dumpTrigger(Archive *fout, TriggerInfo *tginfo) if (tginfo->tgenabled != 't' && tginfo->tgenabled != 'O') { appendPQExpBuffer(query, "\nALTER TABLE %s ", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); switch (tginfo->tgenabled) { case 'D': @@ -17365,10 +16990,8 @@ dumpTrigger(Archive *fout, TriggerInfo *tginfo) fmtId(tginfo->dobj.name)); } - appendPQExpBuffer(labelq, "TRIGGER %s ", + appendPQExpBuffer(trigprefix, "TRIGGER %s ON", fmtId(tginfo->dobj.name)); - appendPQExpBuffer(labelq, "ON %s", - fmtId(tbinfo->dobj.name)); tag = psprintf("%s %s", tbinfo->dobj.name, tginfo->dobj.name); @@ -17384,14 +17007,15 @@ dumpTrigger(Archive *fout, TriggerInfo *tginfo) NULL, NULL); if (tginfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, trigprefix->data, qtabname, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, tginfo->dobj.catId, 0, tginfo->dobj.dumpId); free(tag); destroyPQExpBuffer(query); destroyPQExpBuffer(delqry); - destroyPQExpBuffer(labelq); + destroyPQExpBuffer(trigprefix); + free(qtabname); } /* @@ -17404,7 +17028,7 @@ dumpEventTrigger(Archive *fout, EventTriggerInfo *evtinfo) DumpOptions *dopt = fout->dopt; PQExpBuffer query; PQExpBuffer delqry; - PQExpBuffer labelq; + char *qevtname; /* Skip if not to be dumped */ if (!evtinfo->dobj.dump || dopt->dataOnly) @@ -17412,10 +17036,11 @@ dumpEventTrigger(Archive *fout, EventTriggerInfo *evtinfo) query = createPQExpBuffer(); delqry = createPQExpBuffer(); - labelq = createPQExpBuffer(); + + qevtname = pg_strdup(fmtId(evtinfo->dobj.name)); appendPQExpBufferStr(query, "CREATE EVENT TRIGGER "); - appendPQExpBufferStr(query, fmtId(evtinfo->dobj.name)); + appendPQExpBufferStr(query, qevtname); appendPQExpBufferStr(query, " ON "); appendPQExpBufferStr(query, fmtId(evtinfo->evtevent)); @@ -17433,7 +17058,7 @@ dumpEventTrigger(Archive *fout, EventTriggerInfo *evtinfo) if (evtinfo->evtenabled != 'O') { appendPQExpBuffer(query, "\nALTER EVENT TRIGGER %s ", - fmtId(evtinfo->dobj.name)); + qevtname); switch (evtinfo->evtenabled) { case 'D': @@ -17453,10 +17078,7 @@ dumpEventTrigger(Archive *fout, EventTriggerInfo *evtinfo) } appendPQExpBuffer(delqry, "DROP EVENT TRIGGER %s;\n", - fmtId(evtinfo->dobj.name)); - - appendPQExpBuffer(labelq, "EVENT TRIGGER %s", - fmtId(evtinfo->dobj.name)); + qevtname); if (evtinfo->dobj.dump & 
DUMP_COMPONENT_DEFINITION) ArchiveEntry(fout, evtinfo->dobj.catId, evtinfo->dobj.dumpId, @@ -17468,13 +17090,13 @@ dumpEventTrigger(Archive *fout, EventTriggerInfo *evtinfo) NULL, NULL); if (evtinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, "EVENT TRIGGER", qevtname, NULL, evtinfo->evtowner, evtinfo->dobj.catId, 0, evtinfo->dobj.dumpId); destroyPQExpBuffer(query); destroyPQExpBuffer(delqry); - destroyPQExpBuffer(labelq); + free(qevtname); } /* @@ -17490,7 +17112,8 @@ dumpRule(Archive *fout, RuleInfo *rinfo) PQExpBuffer query; PQExpBuffer cmd; PQExpBuffer delcmd; - PQExpBuffer labelq; + PQExpBuffer ruleprefix; + char *qtabname; PGresult *res; char *tag; @@ -17511,15 +17134,12 @@ dumpRule(Archive *fout, RuleInfo *rinfo) */ is_view = (rinfo->ev_type == '1' && rinfo->is_instead); - /* - * Make sure we are in proper schema. - */ - selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name); - query = createPQExpBuffer(); cmd = createPQExpBuffer(); delcmd = createPQExpBuffer(); - labelq = createPQExpBuffer(); + ruleprefix = createPQExpBuffer(); + + qtabname = pg_strdup(fmtId(tbinfo->dobj.name)); if (is_view) { @@ -17530,7 +17150,7 @@ dumpRule(Archive *fout, RuleInfo *rinfo) * Otherwise this should look largely like the regular view dump code. */ appendPQExpBuffer(cmd, "CREATE OR REPLACE VIEW %s", - fmtId(tbinfo->dobj.name)); + fmtQualifiedDumpable(tbinfo)); if (nonemptyReloptions(tbinfo->reloptions)) { appendPQExpBufferStr(cmd, " WITH ("); @@ -17572,7 +17192,7 @@ dumpRule(Archive *fout, RuleInfo *rinfo) */ if (rinfo->ev_enabled != 'O') { - appendPQExpBuffer(cmd, "ALTER TABLE %s ", fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(cmd, "ALTER TABLE %s ", fmtQualifiedDumpable(tbinfo)); switch (rinfo->ev_enabled) { case 'A': @@ -17590,9 +17210,6 @@ dumpRule(Archive *fout, RuleInfo *rinfo) } } - /* - * DROP must be fully qualified in case same name appears in pg_catalog - */ if (is_view) { /* @@ -17602,9 +17219,8 @@ dumpRule(Archive *fout, RuleInfo *rinfo) */ PQExpBuffer result; - appendPQExpBuffer(delcmd, "CREATE OR REPLACE VIEW %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBufferStr(delcmd, fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delcmd, "CREATE OR REPLACE VIEW %s", + fmtQualifiedDumpable(tbinfo)); result = createDummyViewAsClause(fout, tbinfo); appendPQExpBuffer(delcmd, " AS\n%s;\n", result->data); destroyPQExpBuffer(result); @@ -17613,16 +17229,12 @@ dumpRule(Archive *fout, RuleInfo *rinfo) { appendPQExpBuffer(delcmd, "DROP RULE %s ", fmtId(rinfo->dobj.name)); - appendPQExpBuffer(delcmd, "ON %s.", - fmtId(tbinfo->dobj.namespace->dobj.name)); - appendPQExpBuffer(delcmd, "%s;\n", - fmtId(tbinfo->dobj.name)); + appendPQExpBuffer(delcmd, "ON %s;\n", + fmtQualifiedDumpable(tbinfo)); } - appendPQExpBuffer(labelq, "RULE %s", + appendPQExpBuffer(ruleprefix, "RULE %s ON", fmtId(rinfo->dobj.name)); - appendPQExpBuffer(labelq, " ON %s", - fmtId(tbinfo->dobj.name)); tag = psprintf("%s %s", tbinfo->dobj.name, rinfo->dobj.name); @@ -17639,7 +17251,7 @@ dumpRule(Archive *fout, RuleInfo *rinfo) /* Dump rule comments */ if (rinfo->dobj.dump & DUMP_COMPONENT_COMMENT) - dumpComment(fout, labelq->data, + dumpComment(fout, ruleprefix->data, qtabname, tbinfo->dobj.namespace->dobj.name, tbinfo->rolname, rinfo->dobj.catId, 0, rinfo->dobj.dumpId); @@ -17648,7 +17260,8 @@ dumpRule(Archive *fout, RuleInfo *rinfo) destroyPQExpBuffer(query); destroyPQExpBuffer(cmd); destroyPQExpBuffer(delcmd); - destroyPQExpBuffer(labelq); + 
destroyPQExpBuffer(labelq); +
destroyPQExpBuffer(ruleprefix); + free(qtabname); } /* @@ -17680,9 +17293,6 @@ getExtensionMembership(Archive *fout, ExtensionInfo extinfo[], if (numExtensions == 0) return; - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - query = createPQExpBuffer(); /* refclassid constraint is redundant but may speed the search */ @@ -17882,9 +17492,6 @@ processExtensionTables(Archive *fout, ExtensionInfo extinfo[], * recreated after the data has been loaded. */ - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - query = createPQExpBuffer(); printfPQExpBuffer(query, @@ -17952,9 +17559,6 @@ getDependencies(Archive *fout) if (g_verbose) write_msg(NULL, "reading dependency data\n"); - /* Make sure we are in proper schema */ - selectSourceSchema(fout, "pg_catalog"); - query = createPQExpBuffer(); /* @@ -18283,46 +17887,14 @@ findDumpableDependencies(ArchiveHandle *AH, DumpableObject *dobj, } -/* - * selectSourceSchema - make the specified schema the active search path - * in the source database. - * - * NB: pg_catalog is explicitly searched after the specified schema; - * so user names are only qualified if they are cross-schema references, - * and system names are only qualified if they conflict with a user name - * in the current schema. - * - * Whenever the selected schema is not pg_catalog, be careful to qualify - * references to system catalogs and types in our emitted commands! - * - * This function is called only from selectSourceSchemaOnAH and - * selectSourceSchema. - */ -static void -selectSourceSchema(Archive *fout, const char *schemaName) -{ - PQExpBuffer query; - - /* This is checked by the callers already */ - Assert(schemaName != NULL && *schemaName != '\0'); - - query = createPQExpBuffer(); - appendPQExpBuffer(query, "SET search_path = %s", - fmtId(schemaName)); - if (strcmp(schemaName, "pg_catalog") != 0) - appendPQExpBufferStr(query, ", pg_catalog"); - - ExecuteSqlStatement(fout, query->data); - - destroyPQExpBuffer(query); -} - /* * getFormattedTypeName - retrieve a nicely-formatted type name for the - * given type name. + * given type OID. + * + * This does not guarantee to schema-qualify the output, so it should not + * be used to create the target object name for CREATE or ALTER commands. * - * NB: the result may depend on the currently-selected search_path; this is - * why we don't try to cache the names. + * TODO: there might be some value in caching the results. 
*/ static char * getFormattedTypeName(Archive *fout, Oid oid, OidOptions opts) diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c index 40ee5d1d8b..fbb18b7ade 100644 --- a/src/bin/pg_dump/pg_dumpall.c +++ b/src/bin/pg_dump/pg_dumpall.c @@ -42,9 +42,10 @@ static void dumpUserConfig(PGconn *conn, const char *username); static void dumpDatabases(PGconn *conn); static void dumpTimestamp(const char *msg); static int runPgDump(const char *dbname, const char *create_opts); -static void buildShSecLabels(PGconn *conn, const char *catalog_name, - uint32 objectId, PQExpBuffer buffer, - const char *target, const char *objname); +static void buildShSecLabels(PGconn *conn, + const char *catalog_name, Oid objectId, + const char *objtype, const char *objname, + PQExpBuffer buffer); static PGconn *connectDatabase(const char *dbname, const char *connstr, const char *pghost, const char *pgport, const char *pguser, trivalue prompt_password, bool fail_on_error); static char *constructConnStr(const char **keywords, const char **values); @@ -928,7 +929,8 @@ dumpRoles(PGconn *conn) if (!no_security_labels && server_version >= 90200) buildShSecLabels(conn, "pg_authid", auth_oid, - buf, "ROLE", rolename); + "ROLE", rolename, + buf); fprintf(OPF, "%s", buf->data); } @@ -1191,7 +1193,7 @@ dumpTablespaces(PGconn *conn) for (i = 0; i < PQntuples(res); i++) { PQExpBuffer buf = createPQExpBuffer(); - uint32 spcoid = atooid(PQgetvalue(res, i, 0)); + Oid spcoid = atooid(PQgetvalue(res, i, 0)); char *spcname = PQgetvalue(res, i, 1); char *spcowner = PQgetvalue(res, i, 2); char *spclocation = PQgetvalue(res, i, 3); @@ -1216,11 +1218,12 @@ dumpTablespaces(PGconn *conn) fspcname, spcoptions); if (!skip_acls && - !buildACLCommands(fspcname, NULL, "TABLESPACE", spcacl, rspcacl, + !buildACLCommands(fspcname, NULL, NULL, "TABLESPACE", + spcacl, rspcacl, spcowner, "", server_version, buf)) { fprintf(stderr, _("%s: could not parse ACL list (%s) for tablespace \"%s\"\n"), - progname, spcacl, fspcname); + progname, spcacl, spcname); PQfinish(conn); exit_nicely(1); } @@ -1234,7 +1237,8 @@ dumpTablespaces(PGconn *conn) if (!no_security_labels && server_version >= 90200) buildShSecLabels(conn, "pg_tablespace", spcoid, - buf, "TABLESPACE", fspcname); + "TABLESPACE", spcname, + buf); fprintf(OPF, "%s", buf->data); @@ -1481,19 +1485,23 @@ runPgDump(const char *dbname, const char *create_opts) * * Build SECURITY LABEL command(s) for a shared object * - * The caller has to provide object type and identifier to select security - * labels from pg_seclabels system view. + * The caller has to provide object type and identity in two separate formats: + * catalog_name (e.g., "pg_database") and object OID, as well as + * type name (e.g., "DATABASE") and object name (not pre-quoted). + * + * The command(s) are appended to "buffer". 
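+ * + * For example, dumpRoles() above emits the labels for a role with a call + * like the following (a sketch reusing that caller's variables): + * + * buildShSecLabels(conn, "pg_authid", auth_oid, "ROLE", rolename, buf);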
*/ static void -buildShSecLabels(PGconn *conn, const char *catalog_name, uint32 objectId, - PQExpBuffer buffer, const char *target, const char *objname) +buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId, + const char *objtype, const char *objname, + PQExpBuffer buffer) { PQExpBuffer sql = createPQExpBuffer(); PGresult *res; buildShSecLabelQuery(conn, catalog_name, objectId, sql); res = executeQuery(conn, sql->data); - emitShSecLabels(conn, res, buffer, target, objname); + emitShSecLabels(conn, res, buffer, objtype, objname); PQclear(res); destroyPQExpBuffer(sql); diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl index ac9cfa04c1..6f74e15805 100644 --- a/src/bin/pg_dump/t/002_pg_dump.pl +++ b/src/bin/pg_dump/t/002_pg_dump.pl @@ -460,7 +460,7 @@ 'ALTER COLLATION test0 OWNER TO' => { all_runs => 1, catch_all => 'ALTER ... OWNER commands (except post-data objects)', - regexp => qr/^ALTER COLLATION test0 OWNER TO .*;/m, + regexp => qr/^ALTER COLLATION public.test0 OWNER TO .*;/m, collation => 1, like => { binary_upgrade => 1, @@ -604,7 +604,7 @@ FUNCTION 1 (int4, int4) btint4cmp(int4,int4), FUNCTION 2 (int4, int4) btint4sortsupport(internal);', regexp => qr/^ - \QALTER OPERATOR FAMILY op_family USING btree ADD\E\n\s+ + \QALTER OPERATOR FAMILY dump_test.op_family USING btree ADD\E\n\s+ \QOPERATOR 1 <(bigint,integer) ,\E\n\s+ \QOPERATOR 2 <=(bigint,integer) ,\E\n\s+ \QOPERATOR 3 =(bigint,integer) ,\E\n\s+ @@ -809,7 +809,7 @@ 'ALTER SEQUENCE test_table_col1_seq' => { all_runs => 1, regexp => qr/^ - \QALTER SEQUENCE test_table_col1_seq OWNED BY test_table.col1;\E + \QALTER SEQUENCE dump_test.test_table_col1_seq OWNED BY dump_test.test_table.col1;\E /xm, like => { binary_upgrade => 1, @@ -842,7 +842,7 @@ 'ALTER SEQUENCE test_third_table_col1_seq' => { all_runs => 1, regexp => qr/^ - \QALTER SEQUENCE test_third_table_col1_seq OWNED BY test_third_table.col1;\E + \QALTER SEQUENCE dump_test_second_schema.test_third_table_col1_seq OWNED BY dump_test_second_schema.test_third_table.col1;\E /xm, like => { binary_upgrade => 1, @@ -876,7 +876,7 @@ all_runs => 1, catch_all => 'ALTER TABLE ... 
commands', regexp => qr/^ - \QALTER TABLE ONLY test_table\E \n^\s+ + \QALTER TABLE ONLY dump_test.test_table\E \n^\s+ \QADD CONSTRAINT test_table_pkey PRIMARY KEY (col1);\E /xm, like => { @@ -911,7 +911,7 @@ create_sql => 'ALTER TABLE dump_test.test_table ALTER COLUMN col1 SET STATISTICS 90;', regexp => qr/^ - \QALTER TABLE ONLY test_table ALTER COLUMN col1 SET STATISTICS 90;\E\n + \QALTER TABLE ONLY dump_test.test_table ALTER COLUMN col1 SET STATISTICS 90;\E\n /xm, like => { binary_upgrade => 1, @@ -945,7 +945,7 @@ create_sql => 'ALTER TABLE dump_test.test_table ALTER COLUMN col2 SET STORAGE EXTERNAL;', regexp => qr/^ - \QALTER TABLE ONLY test_table ALTER COLUMN col2 SET STORAGE EXTERNAL;\E\n + \QALTER TABLE ONLY dump_test.test_table ALTER COLUMN col2 SET STORAGE EXTERNAL;\E\n /xm, like => { binary_upgrade => 1, @@ -979,7 +979,7 @@ create_sql => 'ALTER TABLE dump_test.test_table ALTER COLUMN col3 SET STORAGE MAIN;', regexp => qr/^ - \QALTER TABLE ONLY test_table ALTER COLUMN col3 SET STORAGE MAIN;\E\n + \QALTER TABLE ONLY dump_test.test_table ALTER COLUMN col3 SET STORAGE MAIN;\E\n /xm, like => { binary_upgrade => 1, @@ -1013,7 +1013,7 @@ create_sql => 'ALTER TABLE dump_test.test_table ALTER COLUMN col4 SET (n_distinct = 10);', regexp => qr/^ - \QALTER TABLE ONLY test_table ALTER COLUMN col4 SET (n_distinct=10);\E\n + \QALTER TABLE ONLY dump_test.test_table ALTER COLUMN col4 SET (n_distinct=10);\E\n /xm, like => { binary_upgrade => 1, @@ -1044,7 +1044,7 @@ => { all_runs => 1, regexp => qr/^ - \QALTER TABLE ONLY dump_test.measurement ATTACH PARTITION measurement_y2006m2 \E + \QALTER TABLE ONLY dump_test.measurement ATTACH PARTITION dump_test_second_schema.measurement_y2006m2 \E \QFOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\E\n /xm, like => { binary_upgrade => 1, }, @@ -1081,7 +1081,7 @@ create_sql => 'ALTER TABLE dump_test.test_table CLUSTER ON test_table_pkey', regexp => qr/^ - \QALTER TABLE test_table CLUSTER ON test_table_pkey;\E\n + \QALTER TABLE dump_test.test_table CLUSTER ON test_table_pkey;\E\n /xm, like => { binary_upgrade => 1, @@ -1112,10 +1112,10 @@ all_runs => 1, regexp => qr/^ \QSET SESSION AUTHORIZATION 'test_superuser';\E\n\n - \QALTER TABLE test_table DISABLE TRIGGER ALL;\E\n\n - \QCOPY test_table (col1, col2, col3, col4) FROM stdin;\E + \QALTER TABLE dump_test.test_table DISABLE TRIGGER ALL;\E\n\n + \QCOPY dump_test.test_table (col1, col2, col3, col4) FROM stdin;\E \n(?:\d\t\\N\t\\N\t\\N\n){9}\\\.\n\n\n - \QALTER TABLE test_table ENABLE TRIGGER ALL;\E/xm, + \QALTER TABLE dump_test.test_table ENABLE TRIGGER ALL;\E/xm, like => { data_only => 1, }, unlike => { binary_upgrade => 1, @@ -1147,7 +1147,7 @@ all_runs => 1, catch_all => 'ALTER TABLE ... commands', regexp => qr/^ - \QALTER FOREIGN TABLE foreign_table ALTER COLUMN c1 OPTIONS (\E\n + \QALTER FOREIGN TABLE dump_test.foreign_table ALTER COLUMN c1 OPTIONS (\E\n \s+\Qcolumn_name 'col1'\E\n \Q);\E\n /xm, @@ -1178,7 +1178,7 @@ 'ALTER TABLE test_table OWNER TO' => { all_runs => 1, catch_all => 'ALTER ... 
OWNER commands (except post-data objects)', - regexp => qr/^ALTER TABLE test_table OWNER TO .*;/m, + regexp => qr/^ALTER TABLE dump_test.test_table OWNER TO .*;/m, like => { binary_upgrade => 1, clean => 1, @@ -1207,7 +1207,7 @@ create_order => 23, create_sql => 'ALTER TABLE dump_test.test_table ENABLE ROW LEVEL SECURITY;', - regexp => qr/^ALTER TABLE test_table ENABLE ROW LEVEL SECURITY;/m, + regexp => qr/^ALTER TABLE dump_test.test_table ENABLE ROW LEVEL SECURITY;/m, like => { binary_upgrade => 1, clean => 1, @@ -1235,7 +1235,7 @@ 'ALTER TABLE test_second_table OWNER TO' => { all_runs => 1, catch_all => 'ALTER ... OWNER commands (except post-data objects)', - regexp => qr/^ALTER TABLE test_second_table OWNER TO .*;/m, + regexp => qr/^ALTER TABLE dump_test.test_second_table OWNER TO .*;/m, like => { binary_upgrade => 1, clean => 1, @@ -1261,7 +1261,7 @@ 'ALTER TABLE test_third_table OWNER TO' => { all_runs => 1, catch_all => 'ALTER ... OWNER commands (except post-data objects)', - regexp => qr/^ALTER TABLE test_third_table OWNER TO .*;/m, + regexp => qr/^ALTER TABLE dump_test_second_schema.test_third_table OWNER TO .*;/m, like => { binary_upgrade => 1, clean => 1, @@ -1287,7 +1287,7 @@ 'ALTER TABLE measurement OWNER TO' => { all_runs => 1, catch_all => 'ALTER ... OWNER commands (except post-data objects)', - regexp => qr/^ALTER TABLE measurement OWNER TO .*;/m, + regexp => qr/^ALTER TABLE dump_test.measurement OWNER TO .*;/m, like => { binary_upgrade => 1, clean => 1, @@ -1313,7 +1313,7 @@ 'ALTER TABLE measurement_y2006m2 OWNER TO' => { all_runs => 1, catch_all => 'ALTER ... OWNER commands (except post-data objects)', - regexp => qr/^ALTER TABLE measurement_y2006m2 OWNER TO .*;/m, + regexp => qr/^ALTER TABLE dump_test_second_schema.measurement_y2006m2 OWNER TO .*;/m, like => { binary_upgrade => 1, clean => 1, @@ -1339,7 +1339,7 @@ 'ALTER FOREIGN TABLE foreign_table OWNER TO' => { all_runs => 1, catch_all => 'ALTER ... OWNER commands (except post-data objects)', - regexp => qr/^ALTER FOREIGN TABLE foreign_table OWNER TO .*;/m, + regexp => qr/^ALTER FOREIGN TABLE dump_test.foreign_table OWNER TO .*;/m, like => { binary_upgrade => 1, clean => 1, @@ -1366,7 +1366,7 @@ all_runs => 1, catch_all => 'ALTER ... OWNER commands (except post-data objects)', regexp => - qr/^ALTER TEXT SEARCH CONFIGURATION alt_ts_conf1 OWNER TO .*;/m, + qr/^ALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 OWNER TO .*;/m, like => { binary_upgrade => 1, clean => 1, @@ -1393,7 +1393,7 @@ all_runs => 1, catch_all => 'ALTER ... 
OWNER commands (except post-data objects)', regexp => - qr/^ALTER TEXT SEARCH DICTIONARY alt_ts_dict1 OWNER TO .*;/m, + qr/^ALTER TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 OWNER TO .*;/m, like => { binary_upgrade => 1, clean => 1, @@ -1580,7 +1580,7 @@ create_order => 36, create_sql => 'COMMENT ON TABLE dump_test.test_table IS \'comment on table\';', - regexp => qr/^COMMENT ON TABLE test_table IS 'comment on table';/m, + regexp => qr/^COMMENT ON TABLE dump_test.test_table IS 'comment on table';/m, like => { binary_upgrade => 1, clean => 1, @@ -1613,7 +1613,7 @@ create_sql => 'COMMENT ON COLUMN dump_test.test_table.col1 IS \'comment on column\';', regexp => qr/^ - \QCOMMENT ON COLUMN test_table.col1 IS 'comment on column';\E + \QCOMMENT ON COLUMN dump_test.test_table.col1 IS 'comment on column';\E /xm, like => { binary_upgrade => 1, @@ -1647,7 +1647,7 @@ create_sql => 'COMMENT ON COLUMN dump_test.composite.f1 IS \'comment on column of type\';', regexp => qr/^ - \QCOMMENT ON COLUMN composite.f1 IS 'comment on column of type';\E + \QCOMMENT ON COLUMN dump_test.composite.f1 IS 'comment on column of type';\E /xm, like => { binary_upgrade => 1, @@ -1681,7 +1681,7 @@ create_sql => 'COMMENT ON COLUMN dump_test.test_second_table.col1 IS \'comment on column col1\';', regexp => qr/^ - \QCOMMENT ON COLUMN test_second_table.col1 IS 'comment on column col1';\E + \QCOMMENT ON COLUMN dump_test.test_second_table.col1 IS 'comment on column col1';\E /xm, like => { binary_upgrade => 1, @@ -1715,7 +1715,7 @@ create_sql => 'COMMENT ON COLUMN dump_test.test_second_table.col2 IS \'comment on column col2\';', regexp => qr/^ - \QCOMMENT ON COLUMN test_second_table.col2 IS 'comment on column col2';\E + \QCOMMENT ON COLUMN dump_test.test_second_table.col2 IS 'comment on column col2';\E /xm, like => { binary_upgrade => 1, @@ -1749,7 +1749,7 @@ create_sql => 'COMMENT ON CONVERSION dump_test.test_conversion IS \'comment on test conversion\';', regexp => -qr/^COMMENT ON CONVERSION test_conversion IS 'comment on test conversion';/m, +qr/^COMMENT ON CONVERSION dump_test.test_conversion IS 'comment on test conversion';/m, like => { binary_upgrade => 1, clean => 1, @@ -1782,7 +1782,7 @@ create_sql => 'COMMENT ON COLLATION test0 IS \'comment on test0 collation\';', regexp => - qr/^COMMENT ON COLLATION test0 IS 'comment on test0 collation';/m, + qr/^COMMENT ON COLLATION public.test0 IS 'comment on test0 collation';/m, collation => 1, like => { binary_upgrade => 1, @@ -1928,7 +1928,7 @@ 'COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 IS \'comment on text search configuration\';', regexp => -qr/^COMMENT ON TEXT SEARCH CONFIGURATION alt_ts_conf1 IS 'comment on text search configuration';/m, +qr/^COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 IS 'comment on text search configuration';/m, like => { binary_upgrade => 1, clean => 1, @@ -1962,7 +1962,7 @@ 'COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 IS \'comment on text search dictionary\';', regexp => -qr/^COMMENT ON TEXT SEARCH DICTIONARY alt_ts_dict1 IS 'comment on text search dictionary';/m, +qr/^COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 IS 'comment on text search dictionary';/m, like => { binary_upgrade => 1, clean => 1, @@ -1995,7 +1995,7 @@ create_sql => 'COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1 IS \'comment on text search parser\';', regexp => -qr/^COMMENT ON TEXT SEARCH PARSER alt_ts_prs1 IS 'comment on text search parser';/m, +qr/^COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1 IS 'comment on text 
search parser';/m, like => { binary_upgrade => 1, clean => 1, @@ -2028,7 +2028,7 @@ create_sql => 'COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 IS \'comment on text search template\';', regexp => -qr/^COMMENT ON TEXT SEARCH TEMPLATE alt_ts_temp1 IS 'comment on text search template';/m, +qr/^COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 IS 'comment on text search template';/m, like => { binary_upgrade => 1, clean => 1, @@ -2060,7 +2060,7 @@ create_order => 68, create_sql => 'COMMENT ON TYPE dump_test.planets IS \'comment on enum type\';', - regexp => qr/^COMMENT ON TYPE planets IS 'comment on enum type';/m, + regexp => qr/^COMMENT ON TYPE dump_test.planets IS 'comment on enum type';/m, like => { binary_upgrade => 1, clean => 1, @@ -2092,7 +2092,7 @@ create_order => 69, create_sql => 'COMMENT ON TYPE dump_test.textrange IS \'comment on range type\';', - regexp => qr/^COMMENT ON TYPE textrange IS 'comment on range type';/m, + regexp => qr/^COMMENT ON TYPE dump_test.textrange IS 'comment on range type';/m, like => { binary_upgrade => 1, clean => 1, @@ -2124,7 +2124,7 @@ create_order => 70, create_sql => 'COMMENT ON TYPE dump_test.int42 IS \'comment on regular type\';', - regexp => qr/^COMMENT ON TYPE int42 IS 'comment on regular type';/m, + regexp => qr/^COMMENT ON TYPE dump_test.int42 IS 'comment on regular type';/m, like => { binary_upgrade => 1, clean => 1, @@ -2157,7 +2157,7 @@ create_sql => 'COMMENT ON TYPE dump_test.undefined IS \'comment on undefined type\';', regexp => - qr/^COMMENT ON TYPE undefined IS 'comment on undefined type';/m, + qr/^COMMENT ON TYPE dump_test.undefined IS 'comment on undefined type';/m, like => { binary_upgrade => 1, clean => 1, @@ -2200,7 +2200,7 @@ create_sql => 'INSERT INTO dump_test.test_table (col1) ' . 'SELECT generate_series FROM generate_series(1,9);', regexp => qr/^ - \QCOPY test_table (col1, col2, col3, col4) FROM stdin;\E + \QCOPY dump_test.test_table (col1, col2, col3, col4) FROM stdin;\E \n(?:\d\t\\N\t\\N\t\\N\n){9}\\\.\n /xm, like => { @@ -2231,7 +2231,7 @@ create_sql => 'INSERT INTO dump_test.fk_reference_test_table (col1) ' . 'SELECT generate_series FROM generate_series(1,5);', regexp => qr/^ - \QCOPY fk_reference_test_table (col1) FROM stdin;\E + \QCOPY dump_test.fk_reference_test_table (col1) FROM stdin;\E \n(?:\d\n){5}\\\.\n /xm, like => { @@ -2262,9 +2262,9 @@ all_runs => 0, # really only for data-only catch_all => 'COPY ... commands', regexp => qr/^ - \QCOPY test_table (col1, col2, col3, col4) FROM stdin;\E + \QCOPY dump_test.test_table (col1, col2, col3, col4) FROM stdin;\E \n(?:\d\t\\N\t\\N\t\\N\n){9}\\\.\n.* - \QCOPY fk_reference_test_table (col1) FROM stdin;\E + \QCOPY dump_test.fk_reference_test_table (col1) FROM stdin;\E \n(?:\d\n){5}\\\.\n /xms, like => { data_only => 1, }, @@ -2278,7 +2278,7 @@ . 'SELECT generate_series, generate_series::text ' . 'FROM generate_series(1,9);', regexp => qr/^ - \QCOPY test_second_table (col1, col2) FROM stdin;\E + \QCOPY dump_test.test_second_table (col1, col2) FROM stdin;\E \n(?:\d\t\d\n){9}\\\.\n /xm, like => { @@ -2310,7 +2310,7 @@ 'INSERT INTO dump_test_second_schema.test_third_table (col1) ' . 'SELECT generate_series FROM generate_series(1,9);', regexp => qr/^ - \QCOPY test_third_table (col1) FROM stdin;\E + \QCOPY dump_test_second_schema.test_third_table (col1) FROM stdin;\E \n(?:\d\n){9}\\\.\n /xm, like => { @@ -2338,7 +2338,7 @@ all_runs => 1, catch_all => 'COPY ... 
commands', regexp => qr/^ - \QCOPY test_third_table (col1) WITH OIDS FROM stdin;\E + \QCOPY dump_test_second_schema.test_third_table (col1) WITH OIDS FROM stdin;\E \n(?:\d+\t\d\n){9}\\\.\n /xm, like => { with_oids => 1, }, @@ -2368,7 +2368,7 @@ create_sql => 'INSERT INTO dump_test.test_fourth_table DEFAULT VALUES;', regexp => qr/^ - \QCOPY test_fourth_table FROM stdin;\E + \QCOPY dump_test.test_fourth_table FROM stdin;\E \n\n\\\.\n /xm, like => { @@ -2399,7 +2399,7 @@ create_sql => 'INSERT INTO dump_test.test_fifth_table VALUES (NULL, true, false, \'11001\'::bit(5), \'NaN\');', regexp => qr/^ - \QCOPY test_fifth_table (col1, col2, col3, col4, col5) FROM stdin;\E + \QCOPY dump_test.test_fifth_table (col1, col2, col3, col4, col5) FROM stdin;\E \n\\N\tt\tf\t11001\tNaN\n\\\.\n /xm, like => { @@ -2430,7 +2430,7 @@ create_sql => 'INSERT INTO dump_test.test_table_identity (col2) VALUES (\'test\');', regexp => qr/^ - \QCOPY test_table_identity (col1, col2) FROM stdin;\E + \QCOPY dump_test.test_table_identity (col1, col2) FROM stdin;\E \n1\ttest\n\\\.\n /xm, like => { @@ -2471,7 +2471,7 @@ all_runs => 1, catch_all => 'INSERT INTO ...', regexp => qr/^ - (?:INSERT\ INTO\ test_table\ \(col1,\ col2,\ col3,\ col4\)\ VALUES\ \(\d,\ NULL,\ NULL,\ NULL\);\n){9} + (?:INSERT\ INTO\ dump_test.test_table\ \(col1,\ col2,\ col3,\ col4\)\ VALUES\ \(\d,\ NULL,\ NULL,\ NULL\);\n){9} /xm, like => { column_inserts => 1, }, unlike => {}, }, @@ -2480,7 +2480,7 @@ all_runs => 1, catch_all => 'INSERT INTO ...', regexp => qr/^ - (?:INSERT\ INTO\ test_second_table\ \(col1,\ col2\) + (?:INSERT\ INTO\ dump_test.test_second_table\ \(col1,\ col2\) \ VALUES\ \(\d,\ '\d'\);\n){9}/xm, like => { column_inserts => 1, }, unlike => {}, }, @@ -2489,7 +2489,7 @@ all_runs => 1, catch_all => 'INSERT INTO ...', regexp => qr/^ - (?:INSERT\ INTO\ test_third_table\ \(col1\) + (?:INSERT\ INTO\ dump_test_second_schema.test_third_table\ \(col1\) \ VALUES\ \(\d\);\n){9}/xm, like => { column_inserts => 1, }, unlike => {}, }, @@ -2497,7 +2497,7 @@ 'INSERT INTO test_fourth_table' => { all_runs => 1, catch_all => 'INSERT INTO ...', - regexp => qr/^\QINSERT INTO test_fourth_table DEFAULT VALUES;\E/m, + regexp => qr/^\QINSERT INTO dump_test.test_fourth_table DEFAULT VALUES;\E/m, like => { column_inserts => 1, }, unlike => {}, }, @@ -2505,7 +2505,7 @@ all_runs => 1, catch_all => 'INSERT INTO ...', regexp => -qr/^\QINSERT INTO test_fifth_table (col1, col2, col3, col4, col5) VALUES (NULL, true, false, B'11001', 'NaN');\E/m, +qr/^\QINSERT INTO dump_test.test_fifth_table (col1, col2, col3, col4, col5) VALUES (NULL, true, false, B'11001', 'NaN');\E/m, like => { column_inserts => 1, }, unlike => {}, }, @@ -2513,7 +2513,7 @@ all_runs => 1, catch_all => 'INSERT INTO ...', regexp => -qr/^\QINSERT INTO test_table_identity (col1, col2) OVERRIDING SYSTEM VALUE VALUES (1, 'test');\E/m, +qr/^\QINSERT INTO dump_test.test_table_identity (col1, col2) OVERRIDING SYSTEM VALUE VALUES (1, 'test');\E/m, like => { column_inserts => 1, }, unlike => {}, }, @@ -2621,7 +2621,7 @@ create_order => 76, create_sql => 'CREATE COLLATION test0 FROM "C";', regexp => qr/^ - \QCREATE COLLATION test0 (provider = libc, locale = 'C');\E/xm, + \QCREATE COLLATION public.test0 (provider = libc, locale = 'C');\E/xm, collation => 1, like => { binary_upgrade => 1, @@ -2793,7 +2793,7 @@ initcond1 = \'{0,0}\' );', regexp => qr/^ - \QCREATE AGGREGATE newavg(integer) (\E + \QCREATE AGGREGATE dump_test.newavg(integer) (\E \n\s+\QSFUNC = int4_avg_accum,\E \n\s+\QSTYPE = bigint[],\E \n\s+\QINITCOND = 
'{0,0}',\E @@ -2834,7 +2834,7 @@ create_sql => 'CREATE DEFAULT CONVERSION dump_test.test_conversion FOR \'LATIN1\' TO \'UTF8\' FROM iso8859_1_to_utf8;', regexp => -qr/^\QCREATE DEFAULT CONVERSION test_conversion FOR 'LATIN1' TO 'UTF8' FROM iso8859_1_to_utf8;\E/xm, +qr/^\QCREATE DEFAULT CONVERSION dump_test.test_conversion FOR 'LATIN1' TO 'UTF8' FROM iso8859_1_to_utf8;\E/xm, like => { binary_upgrade => 1, clean => 1, @@ -2872,7 +2872,7 @@ CHECK(VALUE ~ \'^\d{5}$\' OR VALUE ~ \'^\d{5}-\d{4}$\');', regexp => qr/^ - \QCREATE DOMAIN us_postal_code AS text COLLATE pg_catalog."C" DEFAULT '10014'::text\E\n\s+ + \QCREATE DOMAIN dump_test.us_postal_code AS text COLLATE pg_catalog."C" DEFAULT '10014'::text\E\n\s+ \QCONSTRAINT us_postal_code_check CHECK \E \Q(((VALUE ~ '^\d{5}\E \$\Q'::text) OR (VALUE ~ '^\d{5}-\d{4}\E\$ @@ -2913,7 +2913,7 @@ RETURNS LANGUAGE_HANDLER AS \'$libdir/plpgsql\', \'plpgsql_call_handler\' LANGUAGE C;', regexp => qr/^ - \QCREATE FUNCTION pltestlang_call_handler() \E + \QCREATE FUNCTION dump_test.pltestlang_call_handler() \E \QRETURNS language_handler\E \n\s+\QLANGUAGE c\E \n\s+AS\ \'\$ @@ -2954,7 +2954,7 @@ RETURNS trigger LANGUAGE plpgsql AS $$ BEGIN RETURN NULL; END;$$;', regexp => qr/^ - \QCREATE FUNCTION trigger_func() RETURNS trigger\E + \QCREATE FUNCTION dump_test.trigger_func() RETURNS trigger\E \n\s+\QLANGUAGE plpgsql\E \n\s+AS\ \$\$ \Q BEGIN RETURN NULL; END;\E @@ -2994,7 +2994,7 @@ RETURNS event_trigger LANGUAGE plpgsql AS $$ BEGIN RETURN; END;$$;', regexp => qr/^ - \QCREATE FUNCTION event_trigger_func() RETURNS event_trigger\E + \QCREATE FUNCTION dump_test.event_trigger_func() RETURNS event_trigger\E \n\s+\QLANGUAGE plpgsql\E \n\s+AS\ \$\$ \Q BEGIN RETURN; END;\E @@ -3033,7 +3033,7 @@ create_sql => 'CREATE OPERATOR FAMILY dump_test.op_family USING btree;', regexp => qr/^ - \QCREATE OPERATOR FAMILY op_family USING btree;\E + \QCREATE OPERATOR FAMILY dump_test.op_family USING btree;\E /xm, like => { binary_upgrade => 1, @@ -3077,8 +3077,8 @@ FUNCTION 1 btint8cmp(bigint,bigint), FUNCTION 2 btint8sortsupport(internal);', regexp => qr/^ - \QCREATE OPERATOR CLASS op_class\E\n\s+ - \QFOR TYPE bigint USING btree FAMILY op_family AS\E\n\s+ + \QCREATE OPERATOR CLASS dump_test.op_class\E\n\s+ + \QFOR TYPE bigint USING btree FAMILY dump_test.op_family AS\E\n\s+ \QOPERATOR 1 <(bigint,bigint) ,\E\n\s+ \QOPERATOR 2 <=(bigint,bigint) ,\E\n\s+ \QOPERATOR 3 =(bigint,bigint) ,\E\n\s+ @@ -3162,9 +3162,9 @@ FOR EACH ROW WHEN (NEW.col1 > 10) EXECUTE PROCEDURE dump_test.trigger_func();', regexp => qr/^ - \QCREATE TRIGGER test_trigger BEFORE INSERT ON test_table \E + \QCREATE TRIGGER test_trigger BEFORE INSERT ON dump_test.test_table \E \QFOR EACH ROW WHEN ((new.col1 > 10)) \E - \QEXECUTE PROCEDURE trigger_func();\E + \QEXECUTE PROCEDURE dump_test.trigger_func();\E /xm, like => { binary_upgrade => 1, @@ -3200,7 +3200,7 @@ create_sql => 'CREATE TYPE dump_test.planets AS ENUM ( \'venus\', \'earth\', \'mars\' );', regexp => qr/^ - \QCREATE TYPE planets AS ENUM (\E + \QCREATE TYPE dump_test.planets AS ENUM (\E \n\s+'venus', \n\s+'earth', \n\s+'mars' @@ -3236,7 +3236,7 @@ 'CREATE TYPE dump_test.planets AS ENUM pg_upgrade' => { all_runs => 1, regexp => qr/^ - \QCREATE TYPE planets AS ENUM (\E + \QCREATE TYPE dump_test.planets AS ENUM (\E \n\);.*^ \QALTER TYPE dump_test.planets ADD VALUE 'venus';\E \n.*^ @@ -3278,7 +3278,7 @@ create_sql => 'CREATE TYPE dump_test.textrange AS RANGE (subtype=text, collation="C");', regexp => qr/^ - \QCREATE TYPE textrange AS RANGE (\E + \QCREATE TYPE 
dump_test.textrange AS RANGE (\E \n\s+\Qsubtype = text,\E \n\s+\Qcollation = pg_catalog."C"\E \n\);/xm, @@ -3314,7 +3314,7 @@ all_runs => 1, create_order => 39, create_sql => 'CREATE TYPE dump_test.int42;', - regexp => qr/^CREATE TYPE int42;/m, + regexp => qr/^CREATE TYPE dump_test.int42;/m, like => { binary_upgrade => 1, clean => 1, @@ -3349,7 +3349,7 @@ create_sql => 'CREATE TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 (copy=english);', regexp => qr/^ - \QCREATE TEXT SEARCH CONFIGURATION alt_ts_conf1 (\E\n + \QCREATE TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 (\E\n \s+\QPARSER = pg_catalog."default" );\E/xm, like => { binary_upgrade => 1, @@ -3382,61 +3382,61 @@ 'ALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 ...' => { all_runs => 1, regexp => qr/^ - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR asciiword WITH english_stem;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR word WITH english_stem;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR numword WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR email WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR url WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR host WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR sfloat WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR version WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR hword_numpart WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR hword_part WITH english_stem;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR hword_asciipart WITH english_stem;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR numhword WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR asciihword WITH english_stem;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR hword WITH english_stem;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR url_path WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR file WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION 
dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR "float" WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR "int" WITH simple;\E\n \n - \QALTER TEXT SEARCH CONFIGURATION alt_ts_conf1\E\n + \QALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1\E\n \s+\QADD MAPPING FOR uint WITH simple;\E\n \n /xm, @@ -3474,7 +3474,7 @@ create_sql => 'CREATE TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 (lexize=dsimple_lexize);', regexp => qr/^ - \QCREATE TEXT SEARCH TEMPLATE alt_ts_temp1 (\E\n + \QCREATE TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 (\E\n \s+\QLEXIZE = dsimple_lexize );\E/xm, like => { binary_upgrade => 1, @@ -3510,7 +3510,7 @@ create_sql => 'CREATE TEXT SEARCH PARSER dump_test.alt_ts_prs1 (start = prsd_start, gettoken = prsd_nexttoken, end = prsd_end, lextypes = prsd_lextype);', regexp => qr/^ - \QCREATE TEXT SEARCH PARSER alt_ts_prs1 (\E\n + \QCREATE TEXT SEARCH PARSER dump_test.alt_ts_prs1 (\E\n \s+\QSTART = prsd_start,\E\n \s+\QGETTOKEN = prsd_nexttoken,\E\n \s+\QEND = prsd_end,\E\n @@ -3550,7 +3550,7 @@ create_sql => 'CREATE TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 (template=simple);', regexp => qr/^ - \QCREATE TEXT SEARCH DICTIONARY alt_ts_dict1 (\E\n + \QCREATE TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 (\E\n \s+\QTEMPLATE = pg_catalog.simple );\E\n /xm, like => { @@ -3588,7 +3588,7 @@ RETURNS dump_test.int42 AS \'int4in\' LANGUAGE internal STRICT IMMUTABLE;', regexp => qr/^ - \QCREATE FUNCTION int42_in(cstring) RETURNS int42\E + \QCREATE FUNCTION dump_test.int42_in(cstring) RETURNS dump_test.int42\E \n\s+\QLANGUAGE internal IMMUTABLE STRICT\E \n\s+AS\ \$\$int4in\$\$; /xm, @@ -3627,7 +3627,7 @@ RETURNS cstring AS \'int4out\' LANGUAGE internal STRICT IMMUTABLE;', regexp => qr/^ - \QCREATE FUNCTION int42_out(int42) RETURNS cstring\E + \QCREATE FUNCTION dump_test.int42_out(dump_test.int42) RETURNS cstring\E \n\s+\QLANGUAGE internal IMMUTABLE STRICT\E \n\s+AS\ \$\$int4out\$\$; /xm, @@ -3665,7 +3665,7 @@ create_sql => 'CREATE PROCEDURE dump_test.ptest1(a int) LANGUAGE SQL AS $$ INSERT INTO dump_test.test_table (col1) VALUES (a) $$;', regexp => qr/^ - \QCREATE PROCEDURE ptest1(a integer)\E + \QCREATE PROCEDURE dump_test.ptest1(a integer)\E \n\s+\QLANGUAGE sql\E \n\s+AS\ \$\$\Q INSERT INTO dump_test.test_table (col1) VALUES (a) \E\$\$; /xm, @@ -3708,10 +3708,10 @@ default = 42, passedbyvalue);', regexp => qr/^ - \QCREATE TYPE int42 (\E + \QCREATE TYPE dump_test.int42 (\E \n\s+\QINTERNALLENGTH = 4,\E - \n\s+\QINPUT = int42_in,\E - \n\s+\QOUTPUT = int42_out,\E + \n\s+\QINPUT = dump_test.int42_in,\E + \n\s+\QOUTPUT = dump_test.int42_out,\E \n\s+\QDEFAULT = '42',\E \n\s+\QALIGNMENT = int4,\E \n\s+\QSTORAGE = plain,\E @@ -3753,9 +3753,9 @@ f2 dump_test.int42 );', regexp => qr/^ - \QCREATE TYPE composite AS (\E + \QCREATE TYPE dump_test.composite AS (\E \n\s+\Qf1 integer,\E - \n\s+\Qf2 int42\E + \n\s+\Qf2 dump_test.int42\E \n\); /xm, like => { @@ -3790,7 +3790,7 @@ all_runs => 1, create_order => 39, create_sql => 'CREATE TYPE dump_test.undefined;', - regexp => qr/^CREATE TYPE undefined;/m, + regexp => qr/^CREATE TYPE dump_test.undefined;/m, like => { binary_upgrade => 1, clean => 1, @@ -3892,7 +3892,7 @@ 'CREATE FOREIGN TABLE dump_test.foreign_table (c1 int options (column_name \'col1\')) SERVER s1 OPTIONS (schema_name \'x1\');', regexp => qr/ - \QCREATE FOREIGN TABLE foreign_table (\E\n + \QCREATE FOREIGN TABLE dump_test.foreign_table (\E\n \s+\Qc1 integer\E\n \Q)\E\n \QSERVER s1\E\n 
@@ -4006,7 +4006,7 @@ HANDLER dump_test.pltestlang_call_handler;', regexp => qr/^ \QCREATE PROCEDURAL LANGUAGE pltestlang \E - \QHANDLER pltestlang_call_handler;\E + \QHANDLER dump_test.pltestlang_call_handler;\E /xm, like => { binary_upgrade => 1, @@ -4040,9 +4040,9 @@ create_sql => 'CREATE MATERIALIZED VIEW dump_test.matview (col1) AS SELECT col1 FROM dump_test.test_table;', regexp => qr/^ - \QCREATE MATERIALIZED VIEW matview AS\E + \QCREATE MATERIALIZED VIEW dump_test.matview AS\E \n\s+\QSELECT test_table.col1\E - \n\s+\QFROM test_table\E + \n\s+\QFROM dump_test.test_table\E \n\s+\QWITH NO DATA;\E /xm, like => { @@ -4078,9 +4078,9 @@ dump_test.matview_second (col1) AS SELECT * FROM dump_test.matview;', regexp => qr/^ - \QCREATE MATERIALIZED VIEW matview_second AS\E + \QCREATE MATERIALIZED VIEW dump_test.matview_second AS\E \n\s+\QSELECT matview.col1\E - \n\s+\QFROM matview\E + \n\s+\QFROM dump_test.matview\E \n\s+\QWITH NO DATA;\E /xm, like => { @@ -4116,9 +4116,9 @@ dump_test.matview_third (col1) AS SELECT * FROM dump_test.matview_second WITH NO DATA;', regexp => qr/^ - \QCREATE MATERIALIZED VIEW matview_third AS\E + \QCREATE MATERIALIZED VIEW dump_test.matview_third AS\E \n\s+\QSELECT matview_second.col1\E - \n\s+\QFROM matview_second\E + \n\s+\QFROM dump_test.matview_second\E \n\s+\QWITH NO DATA;\E /xm, like => { @@ -4154,9 +4154,9 @@ dump_test.matview_fourth (col1) AS SELECT * FROM dump_test.matview_third WITH NO DATA;', regexp => qr/^ - \QCREATE MATERIALIZED VIEW matview_fourth AS\E + \QCREATE MATERIALIZED VIEW dump_test.matview_fourth AS\E \n\s+\QSELECT matview_third.col1\E - \n\s+\QFROM matview_third\E + \n\s+\QFROM dump_test.matview_third\E \n\s+\QWITH NO DATA;\E /xm, like => { @@ -4192,7 +4192,7 @@ USING (true) WITH CHECK (true);', regexp => qr/^ - \QCREATE POLICY p1 ON test_table \E + \QCREATE POLICY p1 ON dump_test.test_table \E \QUSING (true) WITH CHECK (true);\E /xm, like => { @@ -4227,7 +4227,7 @@ create_sql => 'CREATE POLICY p2 ON dump_test.test_table FOR SELECT TO regress_dump_test_role USING (true);', regexp => qr/^ - \QCREATE POLICY p2 ON test_table FOR SELECT TO regress_dump_test_role \E + \QCREATE POLICY p2 ON dump_test.test_table FOR SELECT TO regress_dump_test_role \E \QUSING (true);\E /xm, like => { @@ -4262,7 +4262,7 @@ create_sql => 'CREATE POLICY p3 ON dump_test.test_table FOR INSERT TO regress_dump_test_role WITH CHECK (true);', regexp => qr/^ - \QCREATE POLICY p3 ON test_table FOR INSERT \E + \QCREATE POLICY p3 ON dump_test.test_table FOR INSERT \E \QTO regress_dump_test_role WITH CHECK (true);\E /xm, like => { @@ -4297,7 +4297,7 @@ create_sql => 'CREATE POLICY p4 ON dump_test.test_table FOR UPDATE TO regress_dump_test_role USING (true) WITH CHECK (true);', regexp => qr/^ - \QCREATE POLICY p4 ON test_table FOR UPDATE TO regress_dump_test_role \E + \QCREATE POLICY p4 ON dump_test.test_table FOR UPDATE TO regress_dump_test_role \E \QUSING (true) WITH CHECK (true);\E /xm, like => { @@ -4332,7 +4332,7 @@ create_sql => 'CREATE POLICY p5 ON dump_test.test_table FOR DELETE TO regress_dump_test_role USING (true);', regexp => qr/^ - \QCREATE POLICY p5 ON test_table FOR DELETE \E + \QCREATE POLICY p5 ON dump_test.test_table FOR DELETE \E \QTO regress_dump_test_role USING (true);\E /xm, like => { @@ -4367,7 +4367,7 @@ create_sql => 'CREATE POLICY p6 ON dump_test.test_table AS RESTRICTIVE USING (false);', regexp => qr/^ - \QCREATE POLICY p6 ON test_table AS RESTRICTIVE \E + \QCREATE POLICY p6 ON dump_test.test_table AS RESTRICTIVE \E \QUSING (false);\E /xm, like => { @@ 
-4505,7 +4505,7 @@ create_sql => 'ALTER PUBLICATION pub1 ADD TABLE dump_test.test_table;', regexp => qr/^ - \QALTER PUBLICATION pub1 ADD TABLE ONLY test_table;\E + \QALTER PUBLICATION pub1 ADD TABLE ONLY dump_test.test_table;\E /xm, like => { binary_upgrade => 1, @@ -4539,7 +4539,7 @@ create_sql => 'ALTER PUBLICATION pub1 ADD TABLE dump_test.test_second_table;', regexp => qr/^ - \QALTER PUBLICATION pub1 ADD TABLE ONLY test_second_table;\E + \QALTER PUBLICATION pub1 ADD TABLE ONLY dump_test.test_second_table;\E /xm, like => { binary_upgrade => 1, @@ -4667,7 +4667,7 @@ CHECK (col1 <= 1000) ) WITH (autovacuum_enabled = false, fillfactor=80);', regexp => qr/^ - \QCREATE TABLE test_table (\E\n + \QCREATE TABLE dump_test.test_table (\E\n \s+\Qcol1 integer NOT NULL,\E\n \s+\Qcol2 text,\E\n \s+\Qcol3 text,\E\n @@ -4708,7 +4708,7 @@ col1 int primary key references dump_test.test_table );', regexp => qr/^ - \QCREATE TABLE fk_reference_test_table (\E + \QCREATE TABLE dump_test.fk_reference_test_table (\E \n\s+\Qcol1 integer NOT NULL\E \n\); /xm, @@ -4746,7 +4746,7 @@ col2 text );', regexp => qr/^ - \QCREATE TABLE test_second_table (\E + \QCREATE TABLE dump_test.test_second_table (\E \n\s+\Qcol1 integer,\E \n\s+\Qcol2 text\E \n\); @@ -4790,7 +4790,7 @@ (\Q-- TOC entry \E[0-9]+\ \(class\ 1259\ OID\ [0-9]+\)\n)? \Q-- Name: test_third_table;\E.*\n \Q--\E\n\n - \QCREATE UNLOGGED TABLE test_third_table (\E + \QCREATE UNLOGGED TABLE dump_test_second_schema.test_third_table (\E \n\s+\Qcol1 integer NOT NULL\E \n\);\n /xm, @@ -4832,7 +4832,7 @@ regexp => qr/^ \Q-- Name: measurement;\E.*\n \Q--\E\n\n - \QCREATE TABLE measurement (\E\n + \QCREATE TABLE dump_test.measurement (\E\n \s+\Qcity_id integer NOT NULL,\E\n \s+\Qlogdate date NOT NULL,\E\n \s+\Qpeaktemp integer,\E\n @@ -4876,7 +4876,7 @@ regexp => qr/^ \Q-- Name: measurement_y2006m2;\E.*\n \Q--\E\n\n - \QCREATE TABLE measurement_y2006m2 PARTITION OF dump_test.measurement\E\n + \QCREATE TABLE dump_test_second_schema.measurement_y2006m2 PARTITION OF dump_test.measurement\E\n \QFOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\E\n /xm, like => { @@ -4911,7 +4911,7 @@ create_sql => 'CREATE TABLE dump_test.test_fourth_table ( );', regexp => qr/^ - \QCREATE TABLE test_fourth_table (\E + \QCREATE TABLE dump_test.test_fourth_table (\E \n\); /xm, like => { @@ -4951,7 +4951,7 @@ col5 float8 );', regexp => qr/^ - \QCREATE TABLE test_fifth_table (\E + \QCREATE TABLE dump_test.test_fifth_table (\E \n\s+\Qcol1 integer,\E \n\s+\Qcol2 boolean,\E \n\s+\Qcol3 boolean,\E @@ -4993,13 +4993,13 @@ col2 text );', regexp => qr/^ - \QCREATE TABLE test_table_identity (\E\n + \QCREATE TABLE dump_test.test_table_identity (\E\n \s+\Qcol1 integer NOT NULL,\E\n \s+\Qcol2 text\E\n \); .* - \QALTER TABLE test_table_identity ALTER COLUMN col1 ADD GENERATED ALWAYS AS IDENTITY (\E\n - \s+\QSEQUENCE NAME test_table_identity_col1_seq\E\n + \QALTER TABLE dump_test.test_table_identity ALTER COLUMN col1 ADD GENERATED ALWAYS AS IDENTITY (\E\n + \s+\QSEQUENCE NAME dump_test.test_table_identity_col1_seq\E\n \s+\QSTART WITH 1\E\n \s+\QINCREMENT BY 1\E\n \s+\QNO MINVALUE\E\n @@ -5039,7 +5039,7 @@ create_sql => 'CREATE STATISTICS dump_test.test_ext_stats_no_options ON col1, col2 FROM dump_test.test_fifth_table', regexp => qr/^ - \QCREATE STATISTICS dump_test.test_ext_stats_no_options ON col1, col2 FROM test_fifth_table;\E + \QCREATE STATISTICS dump_test.test_ext_stats_no_options ON col1, col2 FROM dump_test.test_fifth_table;\E /xms, like => { binary_upgrade => 1, @@ -5073,7 +5073,7 @@ create_sql => 
'CREATE STATISTICS dump_test.test_ext_stats_opts (ndistinct) ON col1, col2 FROM dump_test.test_fifth_table', regexp => qr/^ - \QCREATE STATISTICS dump_test.test_ext_stats_opts (ndistinct) ON col1, col2 FROM test_fifth_table;\E + \QCREATE STATISTICS dump_test.test_ext_stats_opts (ndistinct) ON col1, col2 FROM dump_test.test_fifth_table;\E /xms, like => { binary_upgrade => 1, @@ -5104,7 +5104,7 @@ all_runs => 1, catch_all => 'CREATE ... commands', regexp => qr/^ - \QCREATE SEQUENCE test_table_col1_seq\E + \QCREATE SEQUENCE dump_test.test_table_col1_seq\E \n\s+\QAS integer\E \n\s+\QSTART WITH 1\E \n\s+\QINCREMENT BY 1\E @@ -5141,7 +5141,7 @@ all_runs => 1, catch_all => 'CREATE ... commands', regexp => qr/^ - \QCREATE SEQUENCE test_third_table_col1_seq\E + \QCREATE SEQUENCE dump_test_second_schema.test_third_table_col1_seq\E \n\s+\QAS integer\E \n\s+\QSTART WITH 1\E \n\s+\QINCREMENT BY 1\E @@ -5182,7 +5182,7 @@ ON dump_test_second_schema.test_third_table (col1);', regexp => qr/^ \QCREATE UNIQUE INDEX test_third_table_idx \E - \QON test_third_table USING btree (col1);\E + \QON dump_test_second_schema.test_third_table USING btree (col1);\E /xm, like => { binary_upgrade => 1, @@ -5215,7 +5215,7 @@ create_order => 92, create_sql => 'CREATE INDEX ON dump_test.measurement (city_id, logdate);', regexp => qr/^ - \QCREATE INDEX measurement_city_id_logdate_idx ON ONLY measurement USING\E + \QCREATE INDEX measurement_city_id_logdate_idx ON ONLY dump_test.measurement USING\E /xm, like => { binary_upgrade => 1, @@ -5248,7 +5248,7 @@ create_order => 93, create_sql => 'ALTER TABLE dump_test.measurement ADD PRIMARY KEY (city_id, logdate);', regexp => qr/^ - \QALTER TABLE ONLY measurement\E \n^\s+ + \QALTER TABLE ONLY dump_test.measurement\E \n^\s+ \QADD CONSTRAINT measurement_pkey PRIMARY KEY (city_id, logdate);\E /xm, like => { @@ -5280,7 +5280,7 @@ all_runs => 1, catch_all => 'CREATE ... 
commands', regexp => qr/^ - \QCREATE INDEX measurement_y2006m2_city_id_logdate_idx ON measurement_y2006m2 \E + \QCREATE INDEX measurement_y2006m2_city_id_logdate_idx ON dump_test_second_schema.measurement_y2006m2 \E /xm, like => { binary_upgrade => 1, @@ -5377,9 +5377,9 @@ WITH (check_option = \'local\', security_barrier = true) AS SELECT col1 FROM dump_test.test_table;', regexp => qr/^ - \QCREATE VIEW test_view WITH (security_barrier='true') AS\E + \QCREATE VIEW dump_test.test_view WITH (security_barrier='true') AS\E \n\s+\QSELECT test_table.col1\E - \n\s+\QFROM test_table\E + \n\s+\QFROM dump_test.test_table\E \n\s+\QWITH LOCAL CHECK OPTION;\E/xm, like => { binary_upgrade => 1, @@ -5413,7 +5413,7 @@ create_sql => 'ALTER VIEW dump_test.test_view ALTER COLUMN col1 SET DEFAULT 1;', regexp => qr/^ - \QALTER TABLE ONLY test_view ALTER COLUMN col1 SET DEFAULT 1;\E/xm, + \QALTER TABLE ONLY dump_test.test_view ALTER COLUMN col1 SET DEFAULT 1;\E/xm, like => { binary_upgrade => 1, clean => 1, @@ -5799,7 +5799,7 @@ create_sql => 'GRANT USAGE ON DOMAIN dump_test.us_postal_code TO regress_dump_test_role;', regexp => qr/^ - \QGRANT ALL ON TYPE us_postal_code TO regress_dump_test_role;\E + \QGRANT ALL ON TYPE dump_test.us_postal_code TO regress_dump_test_role;\E /xm, like => { binary_upgrade => 1, @@ -5836,7 +5836,7 @@ create_sql => 'GRANT USAGE ON TYPE dump_test.int42 TO regress_dump_test_role;', regexp => qr/^ - \QGRANT ALL ON TYPE int42 TO regress_dump_test_role;\E + \QGRANT ALL ON TYPE dump_test.int42 TO regress_dump_test_role;\E /xm, like => { binary_upgrade => 1, @@ -5873,7 +5873,7 @@ create_sql => 'GRANT USAGE ON TYPE dump_test.planets TO regress_dump_test_role;', regexp => qr/^ - \QGRANT ALL ON TYPE planets TO regress_dump_test_role;\E + \QGRANT ALL ON TYPE dump_test.planets TO regress_dump_test_role;\E /xm, like => { binary_upgrade => 1, @@ -5910,7 +5910,7 @@ create_sql => 'GRANT USAGE ON TYPE dump_test.textrange TO regress_dump_test_role;', regexp => qr/^ - \QGRANT ALL ON TYPE textrange TO regress_dump_test_role;\E + \QGRANT ALL ON TYPE dump_test.textrange TO regress_dump_test_role;\E /xm, like => { binary_upgrade => 1, @@ -5979,7 +5979,7 @@ create_sql => 'GRANT SELECT ON TABLE dump_test.test_table TO regress_dump_test_role;', regexp => - qr/^GRANT SELECT ON TABLE test_table TO regress_dump_test_role;/m, + qr/^GRANT SELECT ON TABLE dump_test.test_table TO regress_dump_test_role;/m, like => { binary_upgrade => 1, clean => 1, @@ -6012,7 +6012,7 @@ TABLE dump_test_second_schema.test_third_table TO regress_dump_test_role;', regexp => -qr/^GRANT SELECT ON TABLE test_third_table TO regress_dump_test_role;/m, +qr/^GRANT SELECT ON TABLE dump_test_second_schema.test_third_table TO regress_dump_test_role;/m, like => { binary_upgrade => 1, clean => 1, @@ -6045,7 +6045,7 @@ dump_test_second_schema.test_third_table_col1_seq TO regress_dump_test_role;', regexp => qr/^ - \QGRANT ALL ON SEQUENCE test_third_table_col1_seq TO regress_dump_test_role;\E + \QGRANT ALL ON SEQUENCE dump_test_second_schema.test_third_table_col1_seq TO regress_dump_test_role;\E /xm, like => { binary_upgrade => 1, @@ -6079,7 +6079,7 @@ TABLE dump_test.measurement TO regress_dump_test_role;', regexp => - qr/^GRANT SELECT ON TABLE measurement TO regress_dump_test_role;/m, + qr/^GRANT SELECT ON TABLE dump_test.measurement TO regress_dump_test_role;/m, like => { binary_upgrade => 1, clean => 1, @@ -6112,7 +6112,7 @@ TABLE dump_test_second_schema.measurement_y2006m2 TO regress_dump_test_role;', regexp => -qr/^GRANT SELECT ON TABLE 
measurement_y2006m2 TO regress_dump_test_role;/m, +qr/^GRANT SELECT ON TABLE dump_test_second_schema.measurement_y2006m2 TO regress_dump_test_role;/m, like => { binary_upgrade => 1, clean => 1, @@ -6183,7 +6183,7 @@ 'GRANT INSERT (col1) ON TABLE dump_test.test_second_table TO regress_dump_test_role;', regexp => qr/^ - \QGRANT INSERT(col1) ON TABLE test_second_table TO regress_dump_test_role;\E + \QGRANT INSERT(col1) ON TABLE dump_test.test_second_table TO regress_dump_test_role;\E /xm, like => { binary_upgrade => 1, @@ -6216,7 +6216,7 @@ create_sql => 'GRANT EXECUTE ON FUNCTION pg_sleep(float8) TO regress_dump_test_role;', regexp => qr/^ - \QGRANT ALL ON FUNCTION pg_sleep(double precision) TO regress_dump_test_role;\E + \QGRANT ALL ON FUNCTION pg_catalog.pg_sleep(double precision) TO regress_dump_test_role;\E /xm, like => { binary_upgrade => 1, @@ -6280,37 +6280,37 @@ proacl ) ON TABLE pg_proc TO public;', regexp => qr/ - \QGRANT SELECT(tableoid) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(oid) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proname) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(pronamespace) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proowner) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(prolang) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(procost) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(prorows) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(provariadic) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(protransform) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proisagg) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proiswindow) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(prosecdef) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proleakproof) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proisstrict) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proretset) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(provolatile) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proparallel) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(pronargs) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(pronargdefaults) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(prorettype) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proargtypes) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proallargtypes) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proargmodes) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proargnames) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proargdefaults) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(protrftypes) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(prosrc) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(probin) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proconfig) ON TABLE pg_proc TO PUBLIC;\E\n.* - \QGRANT SELECT(proacl) ON TABLE pg_proc TO PUBLIC;\E/xms, + \QGRANT SELECT(tableoid) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(oid) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proname) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(pronamespace) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proowner) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(prolang) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(procost) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(prorows) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(provariadic) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(protransform) ON 
TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proisagg) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proiswindow) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(prosecdef) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proleakproof) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proisstrict) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proretset) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(provolatile) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proparallel) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(pronargs) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(pronargdefaults) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(prorettype) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proargtypes) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proallargtypes) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proargmodes) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proargnames) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proargdefaults) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(protrftypes) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(prosrc) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(probin) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proconfig) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.* + \QGRANT SELECT(proacl) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E/xms, like => { binary_upgrade => 1, clean => 1, @@ -6376,7 +6376,7 @@ 'REFRESH MATERIALIZED VIEW matview' => { all_runs => 1, - regexp => qr/^REFRESH MATERIALIZED VIEW matview;/m, + regexp => qr/^REFRESH MATERIALIZED VIEW dump_test.matview;/m, like => { clean => 1, clean_if_exists => 1, @@ -6408,9 +6408,9 @@ 'REFRESH MATERIALIZED VIEW matview_second' => { all_runs => 1, regexp => qr/^ - \QREFRESH MATERIALIZED VIEW matview;\E + \QREFRESH MATERIALIZED VIEW dump_test.matview;\E \n.* - \QREFRESH MATERIALIZED VIEW matview_second;\E + \QREFRESH MATERIALIZED VIEW dump_test.matview_second;\E /xms, like => { clean => 1, @@ -6443,7 +6443,7 @@ 'REFRESH MATERIALIZED VIEW matview_third' => { all_runs => 1, regexp => qr/^ - \QREFRESH MATERIALIZED VIEW matview_third;\E + \QREFRESH MATERIALIZED VIEW dump_test.matview_third;\E /xms, like => {}, unlike => { @@ -6476,7 +6476,7 @@ 'REFRESH MATERIALIZED VIEW matview_fourth' => { all_runs => 1, regexp => qr/^ - \QREFRESH MATERIALIZED VIEW matview_fourth;\E + \QREFRESH MATERIALIZED VIEW dump_test.matview_fourth;\E /xms, like => {}, unlike => { @@ -6547,7 +6547,7 @@ create_sql => 'REVOKE EXECUTE ON FUNCTION pg_sleep(float8) FROM public;', regexp => qr/^ - \QREVOKE ALL ON FUNCTION pg_sleep(double precision) FROM PUBLIC;\E + \QREVOKE ALL ON FUNCTION pg_catalog.pg_sleep(double precision) FROM PUBLIC;\E /xm, like => { binary_upgrade => 1, @@ -6578,7 +6578,7 @@ catch_all => 'REVOKE commands', create_order => 45, create_sql => 'REVOKE SELECT ON TABLE pg_proc FROM public;', - regexp => qr/^REVOKE SELECT ON TABLE pg_proc FROM PUBLIC;/m, + regexp => qr/^REVOKE SELECT ON TABLE pg_catalog.pg_proc FROM PUBLIC;/m, like => { binary_upgrade => 1, clean => 1, diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl index de70f4716b..fca00c6478 100644 --- a/src/test/modules/test_pg_dump/t/001_base.pl +++ b/src/test/modules/test_pg_dump/t/001_base.pl @@ -194,7 +194,7 @@ 
create_sql => 'ALTER EXTENSION test_pg_dump ADD TABLE regress_pg_dump_table_added;', regexp => qr/^ - \QCREATE TABLE regress_pg_dump_table_added (\E + \QCREATE TABLE public.regress_pg_dump_table_added (\E \n\s+\Qcol1 integer NOT NULL,\E \n\s+\Qcol2 integer\E \n\);\n/xm, @@ -250,7 +250,7 @@ 'CREATE SEQUENCE regress_pg_dump_table_col1_seq' => { regexp => qr/^ - \QCREATE SEQUENCE regress_pg_dump_table_col1_seq\E + \QCREATE SEQUENCE public.regress_pg_dump_table_col1_seq\E \n\s+\QAS integer\E \n\s+\QSTART WITH 1\E \n\s+\QINCREMENT BY 1\E @@ -276,7 +276,7 @@ create_sql => 'CREATE TABLE regress_pg_dump_table_added (col1 int not null, col2 int);', regexp => qr/^ - \QCREATE TABLE regress_pg_dump_table_added (\E + \QCREATE TABLE public.regress_pg_dump_table_added (\E \n\s+\Qcol1 integer NOT NULL,\E \n\s+\Qcol2 integer\E \n\);\n/xm, @@ -295,7 +295,7 @@ 'CREATE SEQUENCE regress_pg_dump_seq' => { regexp => qr/^ - \QCREATE SEQUENCE regress_pg_dump_seq\E + \QCREATE SEQUENCE public.regress_pg_dump_seq\E \n\s+\QSTART WITH 1\E \n\s+\QINCREMENT BY 1\E \n\s+\QNO MINVALUE\E @@ -319,7 +319,7 @@ create_order => 6, create_sql => qq{SELECT nextval('regress_seq_dumpable');}, regexp => qr/^ - \QSELECT pg_catalog.setval('regress_seq_dumpable', 1, true);\E + \QSELECT pg_catalog.setval('public.regress_seq_dumpable', 1, true);\E \n/xm, like => { clean => 1, @@ -337,7 +337,7 @@ 'CREATE TABLE regress_pg_dump_table' => { regexp => qr/^ - \QCREATE TABLE regress_pg_dump_table (\E + \QCREATE TABLE public.regress_pg_dump_table (\E \n\s+\Qcol1 integer NOT NULL,\E \n\s+\Qcol2 integer\E \n\);\n/xm, @@ -395,7 +395,7 @@ create_sql => 'GRANT SELECT ON regress_pg_dump_table_added TO regress_dump_test_role;', regexp => qr/^ - \QGRANT SELECT ON TABLE regress_pg_dump_table_added TO regress_dump_test_role;\E + \QGRANT SELECT ON TABLE public.regress_pg_dump_table_added TO regress_dump_test_role;\E \n/xm, like => { binary_upgrade => 1, }, unlike => { @@ -415,7 +415,7 @@ create_sql => 'REVOKE SELECT ON regress_pg_dump_table_added FROM regress_dump_test_role;', regexp => qr/^ - \QREVOKE SELECT ON TABLE regress_pg_dump_table_added FROM regress_dump_test_role;\E + \QREVOKE SELECT ON TABLE public.regress_pg_dump_table_added FROM regress_dump_test_role;\E \n/xm, like => { binary_upgrade => 1, @@ -434,7 +434,7 @@ 'GRANT SELECT ON TABLE regress_pg_dump_table' => { regexp => qr/^ \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\E\n - \QGRANT SELECT ON TABLE regress_pg_dump_table TO regress_dump_test_role;\E\n + \QGRANT SELECT ON TABLE public.regress_pg_dump_table TO regress_dump_test_role;\E\n \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E \n/xms, like => { binary_upgrade => 1, }, @@ -453,7 +453,7 @@ 'GRANT SELECT(col1) ON regress_pg_dump_table' => { regexp => qr/^ \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\E\n - \QGRANT SELECT(col1) ON TABLE regress_pg_dump_table TO PUBLIC;\E\n + \QGRANT SELECT(col1) ON TABLE public.regress_pg_dump_table TO PUBLIC;\E\n \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E \n/xms, like => { binary_upgrade => 1, }, @@ -474,7 +474,7 @@ create_sql => 'GRANT SELECT(col2) ON regress_pg_dump_table TO regress_dump_test_role;', regexp => qr/^ - \QGRANT SELECT(col2) ON TABLE regress_pg_dump_table TO regress_dump_test_role;\E + \QGRANT SELECT(col2) ON TABLE public.regress_pg_dump_table TO regress_dump_test_role;\E \n/xm, like => { binary_upgrade => 1, @@ -496,7 +496,7 @@ create_sql => 'GRANT USAGE ON SEQUENCE regress_pg_dump_table_col1_seq TO 
regress_dump_test_role;', regexp => qr/^ - \QGRANT USAGE ON SEQUENCE regress_pg_dump_table_col1_seq TO regress_dump_test_role;\E + \QGRANT USAGE ON SEQUENCE public.regress_pg_dump_table_col1_seq TO regress_dump_test_role;\E \n/xm, like => { binary_upgrade => 1, @@ -514,7 +514,7 @@ 'GRANT USAGE ON regress_pg_dump_seq TO regress_dump_test_role' => { regexp => qr/^ - \QGRANT USAGE ON SEQUENCE regress_pg_dump_seq TO regress_dump_test_role;\E + \QGRANT USAGE ON SEQUENCE public.regress_pg_dump_seq TO regress_dump_test_role;\E \n/xm, like => { binary_upgrade => 1, }, unlike => { @@ -534,7 +534,7 @@ create_sql => 'REVOKE SELECT(col1) ON regress_pg_dump_table FROM PUBLIC;', regexp => qr/^ - \QREVOKE SELECT(col1) ON TABLE regress_pg_dump_table FROM PUBLIC;\E + \QREVOKE SELECT(col1) ON TABLE public.regress_pg_dump_table FROM PUBLIC;\E \n/xm, like => { binary_upgrade => 1, @@ -553,7 +553,7 @@ # Objects included in extension part of a schema created by this extension */ 'CREATE TABLE regress_pg_dump_schema.test_table' => { regexp => qr/^ - \QCREATE TABLE test_table (\E + \QCREATE TABLE regress_pg_dump_schema.test_table (\E \n\s+\Qcol1 integer,\E \n\s+\Qcol2 integer\E \n\);\n/xm, @@ -573,7 +573,7 @@ 'GRANT SELECT ON regress_pg_dump_schema.test_table' => { regexp => qr/^ \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\E\n - \QGRANT SELECT ON TABLE test_table TO regress_dump_test_role;\E\n + \QGRANT SELECT ON TABLE regress_pg_dump_schema.test_table TO regress_dump_test_role;\E\n \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E \n/xms, like => { binary_upgrade => 1, }, @@ -591,7 +591,7 @@ 'CREATE SEQUENCE regress_pg_dump_schema.test_seq' => { regexp => qr/^ - \QCREATE SEQUENCE test_seq\E + \QCREATE SEQUENCE regress_pg_dump_schema.test_seq\E \n\s+\QSTART WITH 1\E \n\s+\QINCREMENT BY 1\E \n\s+\QNO MINVALUE\E @@ -614,7 +614,7 @@ 'GRANT USAGE ON regress_pg_dump_schema.test_seq' => { regexp => qr/^ \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\E\n - \QGRANT USAGE ON SEQUENCE test_seq TO regress_dump_test_role;\E\n + \QGRANT USAGE ON SEQUENCE regress_pg_dump_schema.test_seq TO regress_dump_test_role;\E\n \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E \n/xms, like => { binary_upgrade => 1, }, @@ -632,7 +632,7 @@ 'CREATE TYPE regress_pg_dump_schema.test_type' => { regexp => qr/^ - \QCREATE TYPE test_type AS (\E + \QCREATE TYPE regress_pg_dump_schema.test_type AS (\E \n\s+\Qcol1 integer\E \n\);\n/xm, like => { binary_upgrade => 1, }, @@ -651,7 +651,7 @@ 'GRANT USAGE ON regress_pg_dump_schema.test_type' => { regexp => qr/^ \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\E\n - \QGRANT ALL ON TYPE test_type TO regress_dump_test_role;\E\n + \QGRANT ALL ON TYPE regress_pg_dump_schema.test_type TO regress_dump_test_role;\E\n \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E \n/xms, like => { binary_upgrade => 1, }, @@ -669,7 +669,7 @@ 'CREATE FUNCTION regress_pg_dump_schema.test_func' => { regexp => qr/^ - \QCREATE FUNCTION test_func() RETURNS integer\E + \QCREATE FUNCTION regress_pg_dump_schema.test_func() RETURNS integer\E \n\s+\QLANGUAGE sql\E \n/xm, like => { binary_upgrade => 1, }, @@ -688,7 +688,7 @@ 'GRANT ALL ON regress_pg_dump_schema.test_func' => { regexp => qr/^ \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\E\n - \QGRANT ALL ON FUNCTION test_func() TO regress_dump_test_role;\E\n + \QGRANT ALL ON FUNCTION regress_pg_dump_schema.test_func() TO regress_dump_test_role;\E\n \QSELECT 
pg_catalog.binary_upgrade_set_record_init_privs(false);\E \n/xms, like => { binary_upgrade => 1, }, @@ -706,7 +706,7 @@ 'CREATE AGGREGATE regress_pg_dump_schema.test_agg' => { regexp => qr/^ - \QCREATE AGGREGATE test_agg(smallint) (\E + \QCREATE AGGREGATE regress_pg_dump_schema.test_agg(smallint) (\E \n\s+\QSFUNC = int2_sum,\E \n\s+\QSTYPE = bigint\E \n\);\n/xm, @@ -726,7 +726,7 @@ 'GRANT ALL ON regress_pg_dump_schema.test_agg' => { regexp => qr/^ \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\E\n - \QGRANT ALL ON FUNCTION test_agg(smallint) TO regress_dump_test_role;\E\n + \QGRANT ALL ON FUNCTION regress_pg_dump_schema.test_agg(smallint) TO regress_dump_test_role;\E\n \QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E \n/xms, like => { binary_upgrade => 1, }, @@ -748,7 +748,7 @@ create_sql => 'CREATE TABLE regress_pg_dump_schema.external_tab (col1 int);', regexp => qr/^ - \QCREATE TABLE external_tab (\E + \QCREATE TABLE regress_pg_dump_schema.external_tab (\E \n\s+\Qcol1 integer\E \n\);\n/xm, like => { diff --git a/src/test/regress/expected/collate.icu.utf8.out b/src/test/regress/expected/collate.icu.utf8.out index e1fc9984f2..e8cf5d50d1 100644 --- a/src/test/regress/expected/collate.icu.utf8.out +++ b/src/test/regress/expected/collate.icu.utf8.out @@ -968,12 +968,12 @@ ERROR: collations are not supported by type integer LINE 1: ...ATE INDEX collate_test1_idx6 ON collate_test1 ((a COLLATE "C... ^ SELECT relname, pg_get_indexdef(oid) FROM pg_class WHERE relname LIKE 'collate_test%_idx%' ORDER BY 1; - relname | pg_get_indexdef ---------------------+----------------------------------------------------------------------------------------------------- - collate_test1_idx1 | CREATE INDEX collate_test1_idx1 ON collate_test1 USING btree (b) - collate_test1_idx2 | CREATE INDEX collate_test1_idx2 ON collate_test1 USING btree (b COLLATE "C") - collate_test1_idx3 | CREATE INDEX collate_test1_idx3 ON collate_test1 USING btree (b COLLATE "C") - collate_test1_idx4 | CREATE INDEX collate_test1_idx4 ON collate_test1 USING btree (((b || 'foo'::text)) COLLATE "POSIX") + relname | pg_get_indexdef +--------------------+------------------------------------------------------------------------------------------------------------------- + collate_test1_idx1 | CREATE INDEX collate_test1_idx1 ON collate_tests.collate_test1 USING btree (b) + collate_test1_idx2 | CREATE INDEX collate_test1_idx2 ON collate_tests.collate_test1 USING btree (b COLLATE "C") + collate_test1_idx3 | CREATE INDEX collate_test1_idx3 ON collate_tests.collate_test1 USING btree (b COLLATE "C") + collate_test1_idx4 | CREATE INDEX collate_test1_idx4 ON collate_tests.collate_test1 USING btree (((b || 'foo'::text)) COLLATE "POSIX") (4 rows) -- schema manipulation commands diff --git a/src/test/regress/expected/collate.linux.utf8.out b/src/test/regress/expected/collate.linux.utf8.out index 6b7318613a..4eb2322e53 100644 --- a/src/test/regress/expected/collate.linux.utf8.out +++ b/src/test/regress/expected/collate.linux.utf8.out @@ -977,12 +977,12 @@ ERROR: collations are not supported by type integer LINE 1: ...ATE INDEX collate_test1_idx6 ON collate_test1 ((a COLLATE "C... 
^ SELECT relname, pg_get_indexdef(oid) FROM pg_class WHERE relname LIKE 'collate_test%_idx%' ORDER BY 1; - relname | pg_get_indexdef ---------------------+----------------------------------------------------------------------------------------------------- - collate_test1_idx1 | CREATE INDEX collate_test1_idx1 ON collate_test1 USING btree (b) - collate_test1_idx2 | CREATE INDEX collate_test1_idx2 ON collate_test1 USING btree (b COLLATE "C") - collate_test1_idx3 | CREATE INDEX collate_test1_idx3 ON collate_test1 USING btree (b COLLATE "C") - collate_test1_idx4 | CREATE INDEX collate_test1_idx4 ON collate_test1 USING btree (((b || 'foo'::text)) COLLATE "POSIX") + relname | pg_get_indexdef +--------------------+------------------------------------------------------------------------------------------------------------------- + collate_test1_idx1 | CREATE INDEX collate_test1_idx1 ON collate_tests.collate_test1 USING btree (b) + collate_test1_idx2 | CREATE INDEX collate_test1_idx2 ON collate_tests.collate_test1 USING btree (b COLLATE "C") + collate_test1_idx3 | CREATE INDEX collate_test1_idx3 ON collate_tests.collate_test1 USING btree (b COLLATE "C") + collate_test1_idx4 | CREATE INDEX collate_test1_idx4 ON collate_tests.collate_test1 USING btree (((b || 'foo'::text)) COLLATE "POSIX") (4 rows) -- schema manipulation commands diff --git a/src/test/regress/expected/collate.out b/src/test/regress/expected/collate.out index 3bc3713ee1..f045f2b291 100644 --- a/src/test/regress/expected/collate.out +++ b/src/test/regress/expected/collate.out @@ -572,12 +572,12 @@ ERROR: collations are not supported by type integer LINE 1: ...ATE INDEX collate_test1_idx6 ON collate_test1 ((a COLLATE "P... ^ SELECT relname, pg_get_indexdef(oid) FROM pg_class WHERE relname LIKE 'collate_test%_idx%' ORDER BY 1; - relname | pg_get_indexdef ---------------------+----------------------------------------------------------------------------------------------------- - collate_test1_idx1 | CREATE INDEX collate_test1_idx1 ON collate_test1 USING btree (b) - collate_test1_idx2 | CREATE INDEX collate_test1_idx2 ON collate_test1 USING btree (b COLLATE "POSIX") - collate_test1_idx3 | CREATE INDEX collate_test1_idx3 ON collate_test1 USING btree (b COLLATE "POSIX") - collate_test1_idx4 | CREATE INDEX collate_test1_idx4 ON collate_test1 USING btree (((b || 'foo'::text)) COLLATE "POSIX") + relname | pg_get_indexdef +--------------------+------------------------------------------------------------------------------------------------------------------- + collate_test1_idx1 | CREATE INDEX collate_test1_idx1 ON collate_tests.collate_test1 USING btree (b) + collate_test1_idx2 | CREATE INDEX collate_test1_idx2 ON collate_tests.collate_test1 USING btree (b COLLATE "POSIX") + collate_test1_idx3 | CREATE INDEX collate_test1_idx3 ON collate_tests.collate_test1 USING btree (b COLLATE "POSIX") + collate_test1_idx4 | CREATE INDEX collate_test1_idx4 ON collate_tests.collate_test1 USING btree (((b || 'foo'::text)) COLLATE "POSIX") (4 rows) -- foreign keys diff --git a/src/test/regress/expected/indexing.out b/src/test/regress/expected/indexing.out index 85e3575b99..375f55b337 100644 --- a/src/test/regress/expected/indexing.out +++ b/src/test/regress/expected/indexing.out @@ -487,11 +487,11 @@ select relname as child, inhparent::regclass as parent, pg_get_indexdef as child from pg_class join pg_inherits on inhrelid = oid, lateral pg_get_indexdef(pg_class.oid) where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; - child | parent | 
childdef --------------------+------------------+-------------------------------------------------------------------- - idxpart1_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart1_expr_idx ON idxpart1 USING btree (((a + b))) - idxpart2_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart2_expr_idx ON idxpart2 USING btree (((a + b))) - idxpart3_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart3_expr_idx ON idxpart3 USING btree (((a + b))) + child | parent | childdef +-------------------+------------------+--------------------------------------------------------------------------- + idxpart1_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart1_expr_idx ON public.idxpart1 USING btree (((a + b))) + idxpart2_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart2_expr_idx ON public.idxpart2 USING btree (((a + b))) + idxpart3_expr_idx | idxpart_expr_idx | CREATE INDEX idxpart3_expr_idx ON public.idxpart3 USING btree (((a + b))) (3 rows) drop table idxpart; @@ -511,15 +511,15 @@ select relname as child, inhparent::regclass as parent, pg_get_indexdef as child from pg_class left join pg_inherits on inhrelid = oid, lateral pg_get_indexdef(pg_class.oid) where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; - child | parent | childdef ------------------+---------------+------------------------------------------------------------------------- - idxpart1_a_idx | idxpart_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a COLLATE "C") - idxpart2_a_idx | | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a COLLATE "POSIX") - idxpart2_a_idx1 | | CREATE INDEX idxpart2_a_idx1 ON idxpart2 USING btree (a) - idxpart2_a_idx2 | idxpart_a_idx | CREATE INDEX idxpart2_a_idx2 ON idxpart2 USING btree (a COLLATE "C") - idxpart3_a_idx | idxpart_a_idx | CREATE INDEX idxpart3_a_idx ON idxpart3 USING btree (a COLLATE "C") - idxpart4_a_idx | idxpart_a_idx | CREATE INDEX idxpart4_a_idx ON idxpart4 USING btree (a COLLATE "C") - idxpart_a_idx | | CREATE INDEX idxpart_a_idx ON ONLY idxpart USING btree (a COLLATE "C") + child | parent | childdef +-----------------+---------------+-------------------------------------------------------------------------------- + idxpart1_a_idx | idxpart_a_idx | CREATE INDEX idxpart1_a_idx ON public.idxpart1 USING btree (a COLLATE "C") + idxpart2_a_idx | | CREATE INDEX idxpart2_a_idx ON public.idxpart2 USING btree (a COLLATE "POSIX") + idxpart2_a_idx1 | | CREATE INDEX idxpart2_a_idx1 ON public.idxpart2 USING btree (a) + idxpart2_a_idx2 | idxpart_a_idx | CREATE INDEX idxpart2_a_idx2 ON public.idxpart2 USING btree (a COLLATE "C") + idxpart3_a_idx | idxpart_a_idx | CREATE INDEX idxpart3_a_idx ON public.idxpart3 USING btree (a COLLATE "C") + idxpart4_a_idx | idxpart_a_idx | CREATE INDEX idxpart4_a_idx ON public.idxpart4 USING btree (a COLLATE "C") + idxpart_a_idx | | CREATE INDEX idxpart_a_idx ON ONLY public.idxpart USING btree (a COLLATE "C") (7 rows) drop table idxpart; @@ -538,14 +538,14 @@ select relname as child, inhparent::regclass as parent, pg_get_indexdef as child from pg_class left join pg_inherits on inhrelid = oid, lateral pg_get_indexdef(pg_class.oid) where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; - child | parent | childdef ------------------+---------------+----------------------------------------------------------------------------- - idxpart1_a_idx | idxpart_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a text_pattern_ops) - idxpart2_a_idx | | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a) - 
idxpart2_a_idx1 | idxpart_a_idx | CREATE INDEX idxpart2_a_idx1 ON idxpart2 USING btree (a text_pattern_ops) - idxpart3_a_idx | idxpart_a_idx | CREATE INDEX idxpart3_a_idx ON idxpart3 USING btree (a text_pattern_ops) - idxpart4_a_idx | idxpart_a_idx | CREATE INDEX idxpart4_a_idx ON idxpart4 USING btree (a text_pattern_ops) - idxpart_a_idx | | CREATE INDEX idxpart_a_idx ON ONLY idxpart USING btree (a text_pattern_ops) + child | parent | childdef +-----------------+---------------+------------------------------------------------------------------------------------ + idxpart1_a_idx | idxpart_a_idx | CREATE INDEX idxpart1_a_idx ON public.idxpart1 USING btree (a text_pattern_ops) + idxpart2_a_idx | | CREATE INDEX idxpart2_a_idx ON public.idxpart2 USING btree (a) + idxpart2_a_idx1 | idxpart_a_idx | CREATE INDEX idxpart2_a_idx1 ON public.idxpart2 USING btree (a text_pattern_ops) + idxpart3_a_idx | idxpart_a_idx | CREATE INDEX idxpart3_a_idx ON public.idxpart3 USING btree (a text_pattern_ops) + idxpart4_a_idx | idxpart_a_idx | CREATE INDEX idxpart4_a_idx ON public.idxpart4 USING btree (a text_pattern_ops) + idxpart_a_idx | | CREATE INDEX idxpart_a_idx ON ONLY public.idxpart USING btree (a text_pattern_ops) (6 rows) drop index idxpart_a_idx; @@ -584,15 +584,15 @@ select relname as child, inhparent::regclass as parent, pg_get_indexdef as child from pg_class left join pg_inherits on inhrelid = oid, lateral pg_get_indexdef(pg_class.oid) where relkind in ('i', 'I') and relname like 'idxpart%' order by relname; - child | parent | childdef ------------------+---------------+---------------------------------------------------------------------------------- - idxpart1_1_idx | idxpart_1_idx | CREATE INDEX idxpart1_1_idx ON idxpart1 USING btree (b, a) - idxpart1_1b_idx | | CREATE INDEX idxpart1_1b_idx ON idxpart1 USING btree (b) - idxpart1_2_idx | idxpart_2_idx | CREATE INDEX idxpart1_2_idx ON idxpart1 USING btree (((b + a))) WHERE (a > 1) - idxpart1_2b_idx | | CREATE INDEX idxpart1_2b_idx ON idxpart1 USING btree (((a + b))) WHERE (a > 1) - idxpart1_2c_idx | | CREATE INDEX idxpart1_2c_idx ON idxpart1 USING btree (((b + a))) WHERE (b > 1) - idxpart_1_idx | | CREATE INDEX idxpart_1_idx ON ONLY idxpart USING btree (b, a) - idxpart_2_idx | | CREATE INDEX idxpart_2_idx ON ONLY idxpart USING btree (((b + a))) WHERE (a > 1) + child | parent | childdef +-----------------+---------------+----------------------------------------------------------------------------------------- + idxpart1_1_idx | idxpart_1_idx | CREATE INDEX idxpart1_1_idx ON public.idxpart1 USING btree (b, a) + idxpart1_1b_idx | | CREATE INDEX idxpart1_1b_idx ON public.idxpart1 USING btree (b) + idxpart1_2_idx | idxpart_2_idx | CREATE INDEX idxpart1_2_idx ON public.idxpart1 USING btree (((b + a))) WHERE (a > 1) + idxpart1_2b_idx | | CREATE INDEX idxpart1_2b_idx ON public.idxpart1 USING btree (((a + b))) WHERE (a > 1) + idxpart1_2c_idx | | CREATE INDEX idxpart1_2c_idx ON public.idxpart1 USING btree (((b + a))) WHERE (b > 1) + idxpart_1_idx | | CREATE INDEX idxpart_1_idx ON ONLY public.idxpart USING btree (b, a) + idxpart_2_idx | | CREATE INDEX idxpart_2_idx ON ONLY public.idxpart USING btree (((b + a))) WHERE (a > 1) (7 rows) drop table idxpart; @@ -610,14 +610,14 @@ select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' order by indexrelid::regclass::text collate "C"; - relname | pg_get_indexdef 
-------------------+-------------------------------------------------------------- - idxpart1_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a) - idxpart1_c_b_idx | CREATE INDEX idxpart1_c_b_idx ON idxpart1 USING btree (c, b) - idxpart2_a_idx | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a) - idxpart2_c_b_idx | CREATE INDEX idxpart2_c_b_idx ON idxpart2 USING btree (c, b) - idxparti | CREATE INDEX idxparti ON ONLY idxpart USING btree (a) - idxparti2 | CREATE INDEX idxparti2 ON ONLY idxpart USING btree (c, b) + relname | pg_get_indexdef +------------------+--------------------------------------------------------------------- + idxpart1_a_idx | CREATE INDEX idxpart1_a_idx ON public.idxpart1 USING btree (a) + idxpart1_c_b_idx | CREATE INDEX idxpart1_c_b_idx ON public.idxpart1 USING btree (c, b) + idxpart2_a_idx | CREATE INDEX idxpart2_a_idx ON public.idxpart2 USING btree (a) + idxpart2_c_b_idx | CREATE INDEX idxpart2_c_b_idx ON public.idxpart2 USING btree (c, b) + idxparti | CREATE INDEX idxparti ON ONLY public.idxpart USING btree (a) + idxparti2 | CREATE INDEX idxparti2 ON ONLY public.idxpart USING btree (c, b) (6 rows) drop table idxpart; @@ -636,11 +636,11 @@ select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' order by indexrelid::regclass::text collate "C"; - relname | pg_get_indexdef -------------------+------------------------------------------------------------------- - idxpart1_abs_idx | CREATE INDEX idxpart1_abs_idx ON idxpart1 USING btree (abs(b)) - idxpart2_abs_idx | CREATE INDEX idxpart2_abs_idx ON idxpart2 USING btree (abs(b)) - idxpart_abs_idx | CREATE INDEX idxpart_abs_idx ON ONLY idxpart USING btree (abs(b)) + relname | pg_get_indexdef +------------------+-------------------------------------------------------------------------- + idxpart1_abs_idx | CREATE INDEX idxpart1_abs_idx ON public.idxpart1 USING btree (abs(b)) + idxpart2_abs_idx | CREATE INDEX idxpart2_abs_idx ON public.idxpart2 USING btree (abs(b)) + idxpart_abs_idx | CREATE INDEX idxpart_abs_idx ON ONLY public.idxpart USING btree (abs(b)) (3 rows) drop table idxpart; @@ -659,11 +659,11 @@ select c.relname, pg_get_indexdef(indexrelid) from pg_class c join pg_index i on c.oid = i.indexrelid where indrelid::regclass::text like 'idxpart%' order by indexrelid::regclass::text collate "C"; - relname | pg_get_indexdef -----------------+----------------------------------------------------------------------------- - idxpart1_a_idx | CREATE INDEX idxpart1_a_idx ON idxpart1 USING btree (a) WHERE (b > 1000) - idxpart2_a_idx | CREATE INDEX idxpart2_a_idx ON idxpart2 USING btree (a) WHERE (b > 1000) - idxpart_a_idx | CREATE INDEX idxpart_a_idx ON ONLY idxpart USING btree (a) WHERE (b > 1000) + relname | pg_get_indexdef +----------------+------------------------------------------------------------------------------------ + idxpart1_a_idx | CREATE INDEX idxpart1_a_idx ON public.idxpart1 USING btree (a) WHERE (b > 1000) + idxpart2_a_idx | CREATE INDEX idxpart2_a_idx ON public.idxpart2 USING btree (a) WHERE (b > 1000) + idxpart_a_idx | CREATE INDEX idxpart_a_idx ON ONLY public.idxpart USING btree (a) WHERE (b > 1000) (3 rows) drop table idxpart; diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out index 5433944c6a..5acb92f30f 100644 --- a/src/test/regress/expected/rules.out +++ b/src/test/regress/expected/rules.out @@ -2342,103 +2342,103 @@ toyemp| SELECT emp.name, SELECT tablename, 
rulename, definition FROM pg_rules ORDER BY tablename, rulename; pg_settings|pg_settings_n|CREATE RULE pg_settings_n AS - ON UPDATE TO pg_settings DO INSTEAD NOTHING; + ON UPDATE TO pg_catalog.pg_settings DO INSTEAD NOTHING; pg_settings|pg_settings_u|CREATE RULE pg_settings_u AS - ON UPDATE TO pg_settings + ON UPDATE TO pg_catalog.pg_settings WHERE (new.name = old.name) DO SELECT set_config(old.name, new.setting, false) AS set_config; rtest_emp|rtest_emp_del|CREATE RULE rtest_emp_del AS - ON DELETE TO rtest_emp DO INSERT INTO rtest_emplog (ename, who, action, newsal, oldsal) + ON DELETE TO public.rtest_emp DO INSERT INTO rtest_emplog (ename, who, action, newsal, oldsal) VALUES (old.ename, CURRENT_USER, 'fired'::bpchar, '$0.00'::money, old.salary); rtest_emp|rtest_emp_ins|CREATE RULE rtest_emp_ins AS - ON INSERT TO rtest_emp DO INSERT INTO rtest_emplog (ename, who, action, newsal, oldsal) + ON INSERT TO public.rtest_emp DO INSERT INTO rtest_emplog (ename, who, action, newsal, oldsal) VALUES (new.ename, CURRENT_USER, 'hired'::bpchar, new.salary, '$0.00'::money); rtest_emp|rtest_emp_upd|CREATE RULE rtest_emp_upd AS - ON UPDATE TO rtest_emp + ON UPDATE TO public.rtest_emp WHERE (new.salary <> old.salary) DO INSERT INTO rtest_emplog (ename, who, action, newsal, oldsal) VALUES (new.ename, CURRENT_USER, 'honored'::bpchar, new.salary, old.salary); rtest_nothn1|rtest_nothn_r1|CREATE RULE rtest_nothn_r1 AS - ON INSERT TO rtest_nothn1 + ON INSERT TO public.rtest_nothn1 WHERE ((new.a >= 10) AND (new.a < 20)) DO INSTEAD NOTHING; rtest_nothn1|rtest_nothn_r2|CREATE RULE rtest_nothn_r2 AS - ON INSERT TO rtest_nothn1 + ON INSERT TO public.rtest_nothn1 WHERE ((new.a >= 30) AND (new.a < 40)) DO INSTEAD NOTHING; rtest_nothn2|rtest_nothn_r3|CREATE RULE rtest_nothn_r3 AS - ON INSERT TO rtest_nothn2 + ON INSERT TO public.rtest_nothn2 WHERE (new.a >= 100) DO INSTEAD INSERT INTO rtest_nothn3 (a, b) VALUES (new.a, new.b); rtest_nothn2|rtest_nothn_r4|CREATE RULE rtest_nothn_r4 AS - ON INSERT TO rtest_nothn2 DO INSTEAD NOTHING; + ON INSERT TO public.rtest_nothn2 DO INSTEAD NOTHING; rtest_order1|rtest_order_r1|CREATE RULE rtest_order_r1 AS - ON INSERT TO rtest_order1 DO INSTEAD INSERT INTO rtest_order2 (a, b, c) + ON INSERT TO public.rtest_order1 DO INSTEAD INSERT INTO rtest_order2 (a, b, c) VALUES (new.a, nextval('rtest_seq'::regclass), 'rule 1 - this should run 1st'::text); rtest_order1|rtest_order_r2|CREATE RULE rtest_order_r2 AS - ON INSERT TO rtest_order1 DO INSERT INTO rtest_order2 (a, b, c) + ON INSERT TO public.rtest_order1 DO INSERT INTO rtest_order2 (a, b, c) VALUES (new.a, nextval('rtest_seq'::regclass), 'rule 2 - this should run 2nd'::text); rtest_order1|rtest_order_r3|CREATE RULE rtest_order_r3 AS - ON INSERT TO rtest_order1 DO INSTEAD INSERT INTO rtest_order2 (a, b, c) + ON INSERT TO public.rtest_order1 DO INSTEAD INSERT INTO rtest_order2 (a, b, c) VALUES (new.a, nextval('rtest_seq'::regclass), 'rule 3 - this should run 3rd'::text); rtest_order1|rtest_order_r4|CREATE RULE rtest_order_r4 AS - ON INSERT TO rtest_order1 + ON INSERT TO public.rtest_order1 WHERE (new.a < 100) DO INSTEAD INSERT INTO rtest_order2 (a, b, c) VALUES (new.a, nextval('rtest_seq'::regclass), 'rule 4 - this should run 4th'::text); rtest_person|rtest_pers_del|CREATE RULE rtest_pers_del AS - ON DELETE TO rtest_person DO DELETE FROM rtest_admin + ON DELETE TO public.rtest_person DO DELETE FROM rtest_admin WHERE (rtest_admin.pname = old.pname); rtest_person|rtest_pers_upd|CREATE RULE rtest_pers_upd AS - ON UPDATE TO rtest_person DO UPDATE 
rtest_admin SET pname = new.pname + ON UPDATE TO public.rtest_person DO UPDATE rtest_admin SET pname = new.pname WHERE (rtest_admin.pname = old.pname); rtest_system|rtest_sys_del|CREATE RULE rtest_sys_del AS - ON DELETE TO rtest_system DO ( DELETE FROM rtest_interface + ON DELETE TO public.rtest_system DO ( DELETE FROM rtest_interface WHERE (rtest_interface.sysname = old.sysname); DELETE FROM rtest_admin WHERE (rtest_admin.sysname = old.sysname); ); rtest_system|rtest_sys_upd|CREATE RULE rtest_sys_upd AS - ON UPDATE TO rtest_system DO ( UPDATE rtest_interface SET sysname = new.sysname + ON UPDATE TO public.rtest_system DO ( UPDATE rtest_interface SET sysname = new.sysname WHERE (rtest_interface.sysname = old.sysname); UPDATE rtest_admin SET sysname = new.sysname WHERE (rtest_admin.sysname = old.sysname); ); rtest_t4|rtest_t4_ins1|CREATE RULE rtest_t4_ins1 AS - ON INSERT TO rtest_t4 + ON INSERT TO public.rtest_t4 WHERE ((new.a >= 10) AND (new.a < 20)) DO INSTEAD INSERT INTO rtest_t5 (a, b) VALUES (new.a, new.b); rtest_t4|rtest_t4_ins2|CREATE RULE rtest_t4_ins2 AS - ON INSERT TO rtest_t4 + ON INSERT TO public.rtest_t4 WHERE ((new.a >= 20) AND (new.a < 30)) DO INSERT INTO rtest_t6 (a, b) VALUES (new.a, new.b); rtest_t5|rtest_t5_ins|CREATE RULE rtest_t5_ins AS - ON INSERT TO rtest_t5 + ON INSERT TO public.rtest_t5 WHERE (new.a > 15) DO INSERT INTO rtest_t7 (a, b) VALUES (new.a, new.b); rtest_t6|rtest_t6_ins|CREATE RULE rtest_t6_ins AS - ON INSERT TO rtest_t6 + ON INSERT TO public.rtest_t6 WHERE (new.a > 25) DO INSTEAD INSERT INTO rtest_t8 (a, b) VALUES (new.a, new.b); rtest_v1|rtest_v1_del|CREATE RULE rtest_v1_del AS - ON DELETE TO rtest_v1 DO INSTEAD DELETE FROM rtest_t1 + ON DELETE TO public.rtest_v1 DO INSTEAD DELETE FROM rtest_t1 WHERE (rtest_t1.a = old.a); rtest_v1|rtest_v1_ins|CREATE RULE rtest_v1_ins AS - ON INSERT TO rtest_v1 DO INSTEAD INSERT INTO rtest_t1 (a, b) + ON INSERT TO public.rtest_v1 DO INSTEAD INSERT INTO rtest_t1 (a, b) VALUES (new.a, new.b); rtest_v1|rtest_v1_upd|CREATE RULE rtest_v1_upd AS - ON UPDATE TO rtest_v1 DO INSTEAD UPDATE rtest_t1 SET a = new.a, b = new.b + ON UPDATE TO public.rtest_v1 DO INSTEAD UPDATE rtest_t1 SET a = new.a, b = new.b WHERE (rtest_t1.a = old.a); shoelace|shoelace_del|CREATE RULE shoelace_del AS - ON DELETE TO shoelace DO INSTEAD DELETE FROM shoelace_data + ON DELETE TO public.shoelace DO INSTEAD DELETE FROM shoelace_data WHERE (shoelace_data.sl_name = old.sl_name); shoelace|shoelace_ins|CREATE RULE shoelace_ins AS - ON INSERT TO shoelace DO INSTEAD INSERT INTO shoelace_data (sl_name, sl_avail, sl_color, sl_len, sl_unit) + ON INSERT TO public.shoelace DO INSTEAD INSERT INTO shoelace_data (sl_name, sl_avail, sl_color, sl_len, sl_unit) VALUES (new.sl_name, new.sl_avail, new.sl_color, new.sl_len, new.sl_unit); shoelace|shoelace_upd|CREATE RULE shoelace_upd AS - ON UPDATE TO shoelace DO INSTEAD UPDATE shoelace_data SET sl_name = new.sl_name, sl_avail = new.sl_avail, sl_color = new.sl_color, sl_len = new.sl_len, sl_unit = new.sl_unit + ON UPDATE TO public.shoelace DO INSTEAD UPDATE shoelace_data SET sl_name = new.sl_name, sl_avail = new.sl_avail, sl_color = new.sl_color, sl_len = new.sl_len, sl_unit = new.sl_unit WHERE (shoelace_data.sl_name = old.sl_name); shoelace_data|log_shoelace|CREATE RULE log_shoelace AS - ON UPDATE TO shoelace_data + ON UPDATE TO public.shoelace_data WHERE (new.sl_avail <> old.sl_avail) DO INSERT INTO shoelace_log (sl_name, sl_avail, log_who, log_when) VALUES (new.sl_name, new.sl_avail, 'Al Bundy'::name, 'Thu Jan 01 00:00:00 
1970'::timestamp without time zone); shoelace_ok|shoelace_ok_ins|CREATE RULE shoelace_ok_ins AS - ON INSERT TO shoelace_ok DO INSTEAD UPDATE shoelace SET sl_avail = (shoelace.sl_avail + new.ok_quant) + ON INSERT TO public.shoelace_ok DO INSTEAD UPDATE shoelace SET sl_avail = (shoelace.sl_avail + new.ok_quant) WHERE (shoelace.sl_name = new.ok_name); -- restore normal output mode \a\t @@ -2961,7 +2961,7 @@ SELECT definition FROM pg_rules WHERE tablename = 'hats' ORDER BY rulename; definition --------------------------------------------------------------------------------------------- CREATE RULE hat_nosert AS + - ON INSERT TO hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color) + + ON INSERT TO public.hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color) + VALUES (new.hat_name, new.hat_color) ON CONFLICT(hat_name COLLATE "C" bpchar_pattern_ops)+ WHERE (hat_color = 'green'::bpchar) DO NOTHING + RETURNING hat_data.hat_name, + @@ -2986,7 +2986,7 @@ SELECT tablename, rulename, definition FROM pg_rules tablename | rulename | definition -----------+------------+--------------------------------------------------------------------------------------------- hats | hat_nosert | CREATE RULE hat_nosert AS + - | | ON INSERT TO hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color) + + | | ON INSERT TO public.hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color) + | | VALUES (new.hat_name, new.hat_color) ON CONFLICT(hat_name COLLATE "C" bpchar_pattern_ops)+ | | WHERE (hat_color = 'green'::bpchar) DO NOTHING + | | RETURNING hat_data.hat_name, + @@ -3004,12 +3004,12 @@ CREATE RULE hat_nosert_all AS ON INSERT TO hats DO NOTHING RETURNING *; SELECT definition FROM pg_rules WHERE tablename = 'hats' ORDER BY rulename; - definition ------------------------------------------------------------------------------- - CREATE RULE hat_nosert_all AS + - ON INSERT TO hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color)+ - VALUES (new.hat_name, new.hat_color) ON CONFLICT DO NOTHING + - RETURNING hat_data.hat_name, + + definition +------------------------------------------------------------------------------------- + CREATE RULE hat_nosert_all AS + + ON INSERT TO public.hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color)+ + VALUES (new.hat_name, new.hat_color) ON CONFLICT DO NOTHING + + RETURNING hat_data.hat_name, + hat_data.hat_color; (1 row) @@ -3036,7 +3036,7 @@ SELECT definition FROM pg_rules WHERE tablename = 'hats' ORDER BY rulename; definition ----------------------------------------------------------------------------------------------------------------------------------------- CREATE RULE hat_upsert AS + - ON INSERT TO hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color) + + ON INSERT TO public.hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color) + VALUES (new.hat_name, new.hat_color) ON CONFLICT(hat_name) DO UPDATE SET hat_name = hat_data.hat_name, hat_color = excluded.hat_color+ WHERE ((excluded.hat_color <> 'forbidden'::bpchar) AND (hat_data.* <> excluded.*)) + RETURNING hat_data.hat_name, + @@ -3084,7 +3084,7 @@ SELECT tablename, rulename, definition FROM pg_rules tablename | rulename | definition -----------+------------+----------------------------------------------------------------------------------------------------------------------------------------- hats | hat_upsert | CREATE RULE hat_upsert AS + - | | ON INSERT TO hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color) + + | | ON INSERT TO public.hats DO INSTEAD INSERT INTO hat_data (hat_name, hat_color) 
+ | | VALUES (new.hat_name, new.hat_color) ON CONFLICT(hat_name) DO UPDATE SET hat_name = hat_data.hat_name, hat_color = excluded.hat_color+ | | WHERE ((excluded.hat_color <> 'forbidden'::bpchar) AND (hat_data.* <> excluded.*)) + | | RETURNING hat_data.hat_name, + diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out index e7b4b31afc..ce8fa211a5 100644 --- a/src/test/regress/expected/triggers.out +++ b/src/test/regress/expected/triggers.out @@ -431,9 +431,9 @@ SELECT pg_get_triggerdef(oid, true) FROM pg_trigger WHERE tgrelid = 'main_table' (1 row) SELECT pg_get_triggerdef(oid, false) FROM pg_trigger WHERE tgrelid = 'main_table'::regclass AND tgname = 'modified_a'; - pg_get_triggerdef ----------------------------------------------------------------------------------------------------------------------------------------------- - CREATE TRIGGER modified_a BEFORE UPDATE OF a ON main_table FOR EACH ROW WHEN ((old.a <> new.a)) EXECUTE PROCEDURE trigger_func('modified_a') + pg_get_triggerdef +----------------------------------------------------------------------------------------------------------------------------------------------------- + CREATE TRIGGER modified_a BEFORE UPDATE OF a ON public.main_table FOR EACH ROW WHEN ((old.a <> new.a)) EXECUTE PROCEDURE trigger_func('modified_a') (1 row) SELECT pg_get_triggerdef(oid, true) FROM pg_trigger WHERE tgrelid = 'main_table'::regclass AND tgname = 'modified_any'; @@ -461,9 +461,9 @@ FOR EACH STATEMENT EXECUTE PROCEDURE trigger_func('before_upd_a_stmt'); CREATE TRIGGER after_upd_b_stmt_trig AFTER UPDATE OF b ON main_table FOR EACH STATEMENT EXECUTE PROCEDURE trigger_func('after_upd_b_stmt'); SELECT pg_get_triggerdef(oid) FROM pg_trigger WHERE tgrelid = 'main_table'::regclass AND tgname = 'after_upd_a_b_row_trig'; - pg_get_triggerdef -------------------------------------------------------------------------------------------------------------------------------------------- - CREATE TRIGGER after_upd_a_b_row_trig AFTER UPDATE OF a, b ON main_table FOR EACH ROW EXECUTE PROCEDURE trigger_func('after_upd_a_b_row') + pg_get_triggerdef +-------------------------------------------------------------------------------------------------------------------------------------------------- + CREATE TRIGGER after_upd_a_b_row_trig AFTER UPDATE OF a, b ON public.main_table FOR EACH ROW EXECUTE PROCEDURE trigger_func('after_upd_a_b_row') (1 row) UPDATE main_table SET a = 50; From 582edc369cdbd348d68441fc50fa26a84afd0c1a Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Mon, 26 Feb 2018 07:39:44 -0800 Subject: [PATCH 1059/1087] Empty search_path in Autovacuum and non-psql/pgbench clients. This makes the client programs behave as documented regardless of the connect-time search_path and regardless of user-created objects. Today, a malicious user with CREATE permission on a search_path schema can take control of certain of these clients' queries and invoke arbitrary SQL functions under the client identity, often a superuser. This is exploitable in the default configuration, where all users have CREATE privilege on schema "public". This changes behavior of user-defined code stored in the database, like pg_index.indexprs and pg_extension_config_dump(). If they reach code bearing unqualified names, "does not exist" or "no schema has been selected to create in" errors might appear. Users may fix such errors by schema-qualifying affected names. After upgrading, consider watching server logs for these errors. 
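For illustration, a minimal sketch of the attack pattern this closes,
using hypothetical object names (the classic CVE-2018-1058 shadowing
overload; any resemblance to a real deployment is coincidental):

    -- Any user with CREATE on schema "public" plants a better-matching
    -- overload of a catalog function:
    CREATE FUNCTION public.lower(varchar) RETURNS text AS $$
        SELECT pg_catalog.lower($1);  -- plus arbitrary side effects,
                                      -- executed as the calling user
    $$ LANGUAGE sql;

    -- Under the default search_path ("$user", public), a client query
    -- such as
    --     SELECT lower(some_varchar_column) FROM some_table;
    -- prefers public.lower(varchar) over pg_catalog.lower(text),
    -- because the argument type matches exactly.

    -- The clients changed here instead begin each session with
    -- ALWAYS_SECURE_SEARCH_PATH_SQL (src/include/fe_utils/connect.h):
    SELECT pg_catalog.set_config('search_path', '', false);
    -- Afterward only pg_catalog is searched implicitly, so unqualified
    -- names can no longer resolve to attacker-controlled objects.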
The --table arguments of src/bin/scripts clients have been lax; for example, "vacuumdb -Zt pg_am\;CHECKPOINT" performed a checkpoint. That now fails, but for now, "vacuumdb -Zt 'pg_am(amname);CHECKPOINT'" still performs a checkpoint. Back-patch to 9.3 (all supported versions). Reviewed by Tom Lane, though this fix strategy was not his first choice. Reported by Arseniy Sharoglazov. Security: CVE-2018-1058 --- contrib/oid2name/oid2name.c | 13 +++ contrib/vacuumlo/vacuumlo.c | 8 +- src/backend/postmaster/autovacuum.c | 14 +++ src/bin/pg_basebackup/streamutil.c | 18 ++++ src/bin/pg_dump/pg_backup_db.c | 9 ++ src/bin/pg_dump/pg_dump.c | 17 +++- src/bin/pg_dump/pg_dumpall.c | 6 +- src/bin/pg_rewind/libpq_fetch.c | 7 ++ src/bin/pg_upgrade/server.c | 3 + src/bin/scripts/clusterdb.c | 12 ++- src/bin/scripts/common.c | 137 ++++++++++++++++++++++++-- src/bin/scripts/common.h | 10 +- src/bin/scripts/createdb.c | 2 +- src/bin/scripts/createuser.c | 2 +- src/bin/scripts/dropdb.c | 3 +- src/bin/scripts/dropuser.c | 2 +- src/bin/scripts/reindexdb.c | 23 ++--- src/bin/scripts/t/010_clusterdb.pl | 2 +- src/bin/scripts/t/090_reindexdb.pl | 6 +- src/bin/scripts/t/100_vacuumdb.pl | 28 +++++- src/bin/scripts/vacuumdb.c | 29 ++++-- src/fe_utils/string_utils.c | 16 +-- src/include/fe_utils/connect.h | 28 ++++++ src/tools/findoidjoins/findoidjoins.c | 9 ++ 24 files changed, 338 insertions(+), 66 deletions(-) create mode 100644 src/include/fe_utils/connect.h diff --git a/contrib/oid2name/oid2name.c b/contrib/oid2name/oid2name.c index 8af99decad..769e527384 100644 --- a/contrib/oid2name/oid2name.c +++ b/contrib/oid2name/oid2name.c @@ -11,6 +11,7 @@ #include "catalog/pg_class.h" +#include "fe_utils/connect.h" #include "libpq-fe.h" #include "pg_getopt.h" @@ -266,6 +267,7 @@ sql_conn(struct options *my_opts) bool have_password = false; char password[100]; bool new_pass; + PGresult *res; /* * Start the connection. Loop until we have a password if requested by @@ -323,6 +325,17 @@ sql_conn(struct options *my_opts) exit(1); } + res = PQexec(conn, ALWAYS_SECURE_SEARCH_PATH_SQL); + if (PQresultStatus(res) != PGRES_TUPLES_OK) + { + fprintf(stderr, "oid2name: could not clear search_path: %s\n", + PQerrorMessage(conn)); + PQclear(res); + PQfinish(conn); + exit(-1); + } + PQclear(res); + /* return the conn if good */ return conn; } diff --git a/contrib/vacuumlo/vacuumlo.c b/contrib/vacuumlo/vacuumlo.c index 4074262b74..ab6b17c7f6 100644 --- a/contrib/vacuumlo/vacuumlo.c +++ b/contrib/vacuumlo/vacuumlo.c @@ -23,6 +23,7 @@ #include "catalog/pg_class.h" +#include "fe_utils/connect.h" #include "libpq-fe.h" #include "pg_getopt.h" @@ -140,11 +141,8 @@ vacuumlo(const char *database, const struct _param *param) fprintf(stdout, "Test run: no large objects will be removed!\n"); } - /* - * Don't get fooled by any non-system catalogs - */ - res = PQexec(conn, "SET search_path = pg_catalog"); - if (PQresultStatus(res) != PGRES_COMMAND_OK) + res = PQexec(conn, ALWAYS_SECURE_SEARCH_PATH_SQL); + if (PQresultStatus(res) != PGRES_TUPLES_OK) { fprintf(stderr, "Failed to set search_path:\n"); fprintf(stderr, "%s", PQerrorMessage(conn)); diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index 702f8d8188..21f5e2ce95 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -574,6 +574,12 @@ AutoVacLauncherMain(int argc, char *argv[]) /* must unblock signals before calling rebuild_database_list */ PG_SETMASK(&UnBlockSig); + /* + * Set always-secure search path. 
Launcher doesn't connect to a database, + * so this has no effect. + */ + SetConfigOption("search_path", "", PGC_SUSET, PGC_S_OVERRIDE); + /* * Force zero_damaged_pages OFF in the autovac process, even if it is set * in postgresql.conf. We don't really want such a dangerous option being @@ -1584,6 +1590,14 @@ AutoVacWorkerMain(int argc, char *argv[]) PG_SETMASK(&UnBlockSig); + /* + * Set always-secure search path, so malicious users can't redirect user + * code (e.g. pg_index.indexprs). (That code runs in a + * SECURITY_RESTRICTED_OPERATION sandbox, so malicious users could not + * take control of the entire autovacuum worker in any case.) + */ + SetConfigOption("search_path", "", PGC_SUSET, PGC_S_OVERRIDE); + /* * Force zero_damaged_pages OFF in the autovac process, even if it is set * in postgresql.conf. We don't really want such a dangerous option being diff --git a/src/bin/pg_basebackup/streamutil.c b/src/bin/pg_basebackup/streamutil.c index c88cede167..296b1888aa 100644 --- a/src/bin/pg_basebackup/streamutil.c +++ b/src/bin/pg_basebackup/streamutil.c @@ -24,6 +24,7 @@ #include "access/xlog_internal.h" #include "common/fe_memutils.h" #include "datatype/timestamp.h" +#include "fe_utils/connect.h" #include "port/pg_bswap.h" #include "pqexpbuffer.h" @@ -208,6 +209,23 @@ GetConnection(void) if (conn_opts) PQconninfoFree(conn_opts); + /* Set always-secure search path, so malicious users can't get control. */ + if (dbname != NULL) + { + PGresult *res; + + res = PQexec(tmpconn, ALWAYS_SECURE_SEARCH_PATH_SQL); + if (PQresultStatus(res) != PGRES_TUPLES_OK) + { + fprintf(stderr, _("%s: could not clear search_path: %s\n"), + progname, PQerrorMessage(tmpconn)); + PQclear(res); + PQfinish(tmpconn); + exit(1); + } + PQclear(res); + } + /* * Ensure we have the same value of integer_datetimes (now always "on") as * the server we are connecting to. diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c index 3b7dd24151..5e32ee8a5b 100644 --- a/src/bin/pg_dump/pg_backup_db.c +++ b/src/bin/pg_dump/pg_backup_db.c @@ -12,6 +12,7 @@ #include "postgres_fe.h" #include "dumputils.h" +#include "fe_utils/connect.h" #include "fe_utils/string_utils.h" #include "parallel.h" #include "pg_backup_archiver.h" @@ -102,6 +103,10 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname, const char *username) PQfinish(AH->connection); AH->connection = newConn; + + /* Start strict; later phases may override this. */ + PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH, + ALWAYS_SECURE_SEARCH_PATH_SQL)); } /* @@ -304,6 +309,10 @@ ConnectDatabase(Archive *AHX, PQdb(AH->connection) ? PQdb(AH->connection) : "", PQerrorMessage(AH->connection)); + /* Start strict; later phases may override this. */ + PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH, + ALWAYS_SECURE_SEARCH_PATH_SQL)); + /* * We want to remember connection's actual password, whether or not we got * it by prompting. So we don't just store the password variable. diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c index 8b67ec1aa1..2c934e6365 100644 --- a/src/bin/pg_dump/pg_dump.c +++ b/src/bin/pg_dump/pg_dump.c @@ -60,6 +60,7 @@ #include "pg_backup_db.h" #include "pg_backup_utils.h" #include "pg_dump.h" +#include "fe_utils/connect.h" #include "fe_utils/string_utils.h" @@ -1021,6 +1022,8 @@ setup_connection(Archive *AH, const char *dumpencoding, PGconn *conn = GetConnection(AH); const char *std_strings; + PQclear(ExecuteSqlQueryForSingleRow(AH, ALWAYS_SECURE_SEARCH_PATH_SQL)); + /* * Set the client encoding if requested. 
*/ @@ -1311,11 +1314,18 @@ expand_table_name_patterns(Archive *fout, for (cell = patterns->head; cell; cell = cell->next) { + /* + * Query must remain ABSOLUTELY devoid of unqualified names. This + * would be unnecessary given a pg_table_is_visible() variant taking a + * search_path argument. + */ appendPQExpBuffer(query, "SELECT c.oid" "\nFROM pg_catalog.pg_class c" - "\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace" - "\nWHERE c.relkind in ('%c', '%c', '%c', '%c', '%c', '%c')\n", + "\n LEFT JOIN pg_catalog.pg_namespace n" + "\n ON n.oid OPERATOR(pg_catalog.=) c.relnamespace" + "\nWHERE c.relkind OPERATOR(pg_catalog.=) ANY" + "\n (array['%c', '%c', '%c', '%c', '%c', '%c'])\n", RELKIND_RELATION, RELKIND_SEQUENCE, RELKIND_VIEW, RELKIND_MATVIEW, RELKIND_FOREIGN_TABLE, RELKIND_PARTITIONED_TABLE); @@ -1323,7 +1333,10 @@ expand_table_name_patterns(Archive *fout, false, "n.nspname", "c.relname", NULL, "pg_catalog.pg_table_is_visible(c.oid)"); + ExecuteSqlStatement(fout, "RESET search_path"); res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK); + PQclear(ExecuteSqlQueryForSingleRow(fout, + ALWAYS_SECURE_SEARCH_PATH_SQL)); if (strict_names && PQntuples(res) == 0) exit_horribly(NULL, "no matching tables were found for pattern \"%s\"\n", cell->val); diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c index fbb18b7ade..75e4a539bf 100644 --- a/src/bin/pg_dump/pg_dumpall.c +++ b/src/bin/pg_dump/pg_dumpall.c @@ -23,6 +23,7 @@ #include "dumputils.h" #include "pg_backup.h" #include "common/file_utils.h" +#include "fe_utils/connect.h" #include "fe_utils/string_utils.h" /* version string we expect back from pg_dump */ @@ -1717,10 +1718,7 @@ connectDatabase(const char *dbname, const char *connection_string, exit_nicely(1); } - /* - * Make sure we are not fooled by non-system schemas in the search path. - */ - executeCommand(conn, "SET search_path = pg_catalog"); + PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL)); return conn; } diff --git a/src/bin/pg_rewind/libpq_fetch.c b/src/bin/pg_rewind/libpq_fetch.c index d1726d1c74..8f8d504455 100644 --- a/src/bin/pg_rewind/libpq_fetch.c +++ b/src/bin/pg_rewind/libpq_fetch.c @@ -24,6 +24,7 @@ #include "libpq-fe.h" #include "catalog/catalog.h" #include "catalog/pg_type.h" +#include "fe_utils/connect.h" #include "port/pg_bswap.h" static PGconn *conn = NULL; @@ -54,6 +55,12 @@ libpqConnect(const char *connstr) pg_log(PG_PROGRESS, "connected to server\n"); + res = PQexec(conn, ALWAYS_SECURE_SEARCH_PATH_SQL); + if (PQresultStatus(res) != PGRES_TUPLES_OK) + pg_fatal("could not clear search_path: %s", + PQresultErrorMessage(res)); + PQclear(res); + /* * Check that the server is not in hot standby mode. 
There is no * fundamental reason that couldn't be made to work, but it doesn't diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c index 5f55b585a8..5273ef6681 100644 --- a/src/bin/pg_upgrade/server.c +++ b/src/bin/pg_upgrade/server.c @@ -9,6 +9,7 @@ #include "postgres_fe.h" +#include "fe_utils/connect.h" #include "fe_utils/string_utils.h" #include "pg_upgrade.h" @@ -40,6 +41,8 @@ connectToServer(ClusterInfo *cluster, const char *db_name) exit(1); } + PQclear(executeQueryOrDie(conn, ALWAYS_SECURE_SEARCH_PATH_SQL)); + return conn; } diff --git a/src/bin/scripts/clusterdb.c b/src/bin/scripts/clusterdb.c index 92c42f62bf..650d2ae261 100644 --- a/src/bin/scripts/clusterdb.c +++ b/src/bin/scripts/clusterdb.c @@ -195,17 +195,21 @@ cluster_one_database(const char *dbname, bool verbose, const char *table, PGconn *conn; + conn = connectDatabase(dbname, host, port, username, prompt_password, + progname, echo, false, false); + initPQExpBuffer(&sql); appendPQExpBufferStr(&sql, "CLUSTER"); if (verbose) appendPQExpBufferStr(&sql, " VERBOSE"); if (table) - appendPQExpBuffer(&sql, " %s", table); + { + appendPQExpBufferChar(&sql, ' '); + appendQualifiedRelation(&sql, table, conn, progname, echo); + } appendPQExpBufferChar(&sql, ';'); - conn = connectDatabase(dbname, host, port, username, prompt_password, - progname, false, false); if (!executeMaintenanceCommand(conn, sql.data, echo)) { if (table) @@ -234,7 +238,7 @@ cluster_all_databases(bool verbose, const char *maintenance_db, int i; conn = connectMaintenanceDatabase(maintenance_db, host, port, username, - prompt_password, progname); + prompt_password, progname, echo); result = executeQuery(conn, "SELECT datname FROM pg_database WHERE datallowconn ORDER BY 1;", progname, echo); PQfinish(conn); diff --git a/src/bin/scripts/common.c b/src/bin/scripts/common.c index e20a5e9146..db2b9f0d68 100644 --- a/src/bin/scripts/common.c +++ b/src/bin/scripts/common.c @@ -18,6 +18,8 @@ #include #include "common.h" +#include "fe_utils/connect.h" +#include "fe_utils/string_utils.h" static PGcancel *volatile cancelConn = NULL; @@ -63,9 +65,10 @@ handle_help_version_opts(int argc, char *argv[], * as before, else we might create password exposure hazards.) */ PGconn * -connectDatabase(const char *dbname, const char *pghost, const char *pgport, - const char *pguser, enum trivalue prompt_password, - const char *progname, bool fail_ok, bool allow_password_reuse) +connectDatabase(const char *dbname, const char *pghost, + const char *pgport, const char *pguser, + enum trivalue prompt_password, const char *progname, + bool echo, bool fail_ok, bool allow_password_reuse) { PGconn *conn; bool new_pass; @@ -142,6 +145,10 @@ connectDatabase(const char *dbname, const char *pghost, const char *pgport, exit(1); } + if (PQserverVersion(conn) >= 70300) + PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL, + progname, echo)); + return conn; } @@ -149,24 +156,24 @@ connectDatabase(const char *dbname, const char *pghost, const char *pgport, * Try to connect to the appropriate maintenance database. */ PGconn * -connectMaintenanceDatabase(const char *maintenance_db, const char *pghost, - const char *pgport, const char *pguser, - enum trivalue prompt_password, - const char *progname) +connectMaintenanceDatabase(const char *maintenance_db, + const char *pghost, const char *pgport, + const char *pguser, enum trivalue prompt_password, + const char *progname, bool echo) { PGconn *conn; /* If a maintenance database name was specified, just connect to it. 
 */
	if (maintenance_db)
		return connectDatabase(maintenance_db, pghost, pgport, pguser,
-							   prompt_password, progname, false, false);
+							   prompt_password, progname, echo, false, false);

	/* Otherwise, try postgres first and then template1. */
	conn = connectDatabase("postgres", pghost, pgport, pguser, prompt_password,
-						   progname, true, false);
+						   progname, echo, true, false);
	if (!conn)
		conn = connectDatabase("template1", pghost, pgport, pguser,
-							   prompt_password, progname, false, false);
+							   prompt_password, progname, echo, false, false);

	return conn;
}
@@ -252,6 +259,116 @@ executeMaintenanceCommand(PGconn *conn, const char *query, bool echo)
 	return r;
 }
 
+
+/*
+ * Split TABLE[(COLUMNS)] into TABLE and [(COLUMNS)] portions.  When you
+ * finish using them, pg_free(*table).  *columns is a pointer into "spec",
+ * possibly to its NUL terminator.
+ */
+static void
+split_table_columns_spec(const char *spec, int encoding,
+						 char **table, const char **columns)
+{
+	bool		inquotes = false;
+	const char *cp = spec;
+
+	/*
+	 * Find the first '(' not identifier-quoted.  Based on
+	 * dequote_downcase_identifier().
+	 */
+	while (*cp && (*cp != '(' || inquotes))
+	{
+		if (*cp == '"')
+		{
+			if (inquotes && cp[1] == '"')
+				cp++;			/* pair does not affect quoting */
+			else
+				inquotes = !inquotes;
+			cp++;
+		}
+		else
+			cp += PQmblen(cp, encoding);
+	}
+	*table = pg_strdup(spec);
+	(*table)[cp - spec] = '\0';	/* no strndup */
+	*columns = cp;
+}
+
+/*
+ * Break apart TABLE[(COLUMNS)] of "spec".  With the reset_val of search_path
+ * in effect, have regclassin() interpret the TABLE portion.  Append to "buf"
+ * the qualified name of TABLE, followed by any (COLUMNS).  Exit on failure.
+ * We use this to interpret --table=foo under the search path psql would get,
+ * in advance of "ANALYZE public.foo" under the always-secure search path.
+ */
+void
+appendQualifiedRelation(PQExpBuffer buf, const char *spec,
+						PGconn *conn, const char *progname, bool echo)
+{
+	char	   *table;
+	const char *columns;
+	PQExpBufferData sql;
+	PGresult   *res;
+	int			ntups;
+
+	/* Before 7.3, the concept of qualifying a name did not exist. */
+	if (PQserverVersion(conn) < 70300)
+	{
+		appendPQExpBufferStr(buf, spec);
+		return;
+	}
+
+	split_table_columns_spec(spec, PQclientEncoding(conn), &table, &columns);
+
+	/*
+	 * Query must remain ABSOLUTELY devoid of unqualified names.  This would
+	 * be unnecessary given a regclassin() variant taking a search_path
+	 * argument.
+	 */
+	initPQExpBuffer(&sql);
+	appendPQExpBufferStr(&sql,
+						 "SELECT c.relname, ns.nspname\n"
+						 " FROM pg_catalog.pg_class c,"
+						 " pg_catalog.pg_namespace ns\n"
+						 " WHERE c.relnamespace OPERATOR(pg_catalog.=) ns.oid\n"
+						 "  AND c.oid OPERATOR(pg_catalog.=) ");
+	appendStringLiteralConn(&sql, table, conn);
+	appendPQExpBufferStr(&sql, "::pg_catalog.regclass;");
+
+	executeCommand(conn, "RESET search_path", progname, echo);
+
+	/*
+	 * One row is a typical result, as is a nonexistent relation ERROR.
+	 * regclassin() unconditionally accepts all-digits input as an OID; if no
+	 * relation has that OID, this query returns no rows.  Catalog corruption
+	 * might elicit other row counts.
+ */ + res = executeQuery(conn, sql.data, progname, echo); + ntups = PQntuples(res); + if (ntups != 1) + { + fprintf(stderr, + ngettext("%s: query returned %d row instead of one: %s\n", + "%s: query returned %d rows instead of one: %s\n", + ntups), + progname, ntups, sql.data); + PQfinish(conn); + exit(1); + } + appendPQExpBufferStr(buf, + fmtQualifiedId(PQserverVersion(conn), + PQgetvalue(res, 0, 1), + PQgetvalue(res, 0, 0))); + appendPQExpBufferStr(buf, columns); + PQclear(res); + termPQExpBuffer(&sql); + pg_free(table); + + PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL, + progname, echo)); +} + + /* * Check yes/no answer in a localized way. 1=yes, 0=no, -1=neither. */ diff --git a/src/bin/scripts/common.h b/src/bin/scripts/common.h index a660d6848a..30a39a6247 100644 --- a/src/bin/scripts/common.h +++ b/src/bin/scripts/common.h @@ -32,11 +32,12 @@ extern void handle_help_version_opts(int argc, char *argv[], extern PGconn *connectDatabase(const char *dbname, const char *pghost, const char *pgport, const char *pguser, enum trivalue prompt_password, const char *progname, - bool fail_ok, bool allow_password_reuse); + bool echo, bool fail_ok, bool allow_password_reuse); extern PGconn *connectMaintenanceDatabase(const char *maintenance_db, - const char *pghost, const char *pgport, const char *pguser, - enum trivalue prompt_password, const char *progname); + const char *pghost, const char *pgport, + const char *pguser, enum trivalue prompt_password, + const char *progname, bool echo); extern PGresult *executeQuery(PGconn *conn, const char *query, const char *progname, bool echo); @@ -47,6 +48,9 @@ extern void executeCommand(PGconn *conn, const char *query, extern bool executeMaintenanceCommand(PGconn *conn, const char *query, bool echo); +extern void appendQualifiedRelation(PQExpBuffer buf, const char *name, + PGconn *conn, const char *progname, bool echo); + extern bool yesno_prompt(const char *question); extern void setup_cancel_handler(void); diff --git a/src/bin/scripts/createdb.c b/src/bin/scripts/createdb.c index 81a8192136..fc108882e4 100644 --- a/src/bin/scripts/createdb.c +++ b/src/bin/scripts/createdb.c @@ -202,7 +202,7 @@ main(int argc, char *argv[]) maintenance_db = "template1"; conn = connectMaintenanceDatabase(maintenance_db, host, port, username, - prompt_password, progname); + prompt_password, progname, echo); if (echo) printf("%s\n", sql.data); diff --git a/src/bin/scripts/createuser.c b/src/bin/scripts/createuser.c index c488c018e0..3420e62fdd 100644 --- a/src/bin/scripts/createuser.c +++ b/src/bin/scripts/createuser.c @@ -252,7 +252,7 @@ main(int argc, char *argv[]) login = TRI_YES; conn = connectDatabase("postgres", host, port, username, prompt_password, - progname, false, false); + progname, echo, false, false); initPQExpBuffer(&sql); diff --git a/src/bin/scripts/dropdb.c b/src/bin/scripts/dropdb.c index 81929c43c4..ba0038891d 100644 --- a/src/bin/scripts/dropdb.c +++ b/src/bin/scripts/dropdb.c @@ -129,7 +129,8 @@ main(int argc, char *argv[]) maintenance_db = "template1"; conn = connectMaintenanceDatabase(maintenance_db, - host, port, username, prompt_password, progname); + host, port, username, prompt_password, + progname, echo); if (echo) printf("%s\n", sql.data); diff --git a/src/bin/scripts/dropuser.c b/src/bin/scripts/dropuser.c index e3191afc31..d9e7f7b036 100644 --- a/src/bin/scripts/dropuser.c +++ b/src/bin/scripts/dropuser.c @@ -134,7 +134,7 @@ main(int argc, char *argv[]) (if_exists ? 
"IF EXISTS " : ""), fmtId(dropuser)); conn = connectDatabase("postgres", host, port, username, prompt_password, - progname, false, false); + progname, echo, false, false); if (echo) printf("%s\n", sql.data); diff --git a/src/bin/scripts/reindexdb.c b/src/bin/scripts/reindexdb.c index 64e9a2f3ce..be1c06ebbd 100644 --- a/src/bin/scripts/reindexdb.c +++ b/src/bin/scripts/reindexdb.c @@ -282,23 +282,24 @@ reindex_one_database(const char *name, const char *dbname, const char *type, PGconn *conn; conn = connectDatabase(dbname, host, port, username, prompt_password, - progname, false, false); + progname, echo, false, false); initPQExpBuffer(&sql); - appendPQExpBufferStr(&sql, "REINDEX"); + appendPQExpBufferStr(&sql, "REINDEX "); if (verbose) - appendPQExpBufferStr(&sql, " (VERBOSE)"); + appendPQExpBufferStr(&sql, "(VERBOSE) "); - if (strcmp(type, "TABLE") == 0) - appendPQExpBuffer(&sql, " TABLE %s", name); - else if (strcmp(type, "INDEX") == 0) - appendPQExpBuffer(&sql, " INDEX %s", name); + appendPQExpBufferStr(&sql, type); + appendPQExpBufferChar(&sql, ' '); + if (strcmp(type, "TABLE") == 0 || + strcmp(type, "INDEX") == 0) + appendQualifiedRelation(&sql, name, conn, progname, echo); else if (strcmp(type, "SCHEMA") == 0) - appendPQExpBuffer(&sql, " SCHEMA %s", name); + appendPQExpBufferStr(&sql, name); else if (strcmp(type, "DATABASE") == 0) - appendPQExpBuffer(&sql, " DATABASE %s", fmtId(PQdb(conn))); + appendPQExpBufferStr(&sql, fmtId(PQdb(conn))); appendPQExpBufferChar(&sql, ';'); if (!executeMaintenanceCommand(conn, sql.data, echo)) @@ -335,7 +336,7 @@ reindex_all_databases(const char *maintenance_db, int i; conn = connectMaintenanceDatabase(maintenance_db, host, port, username, - prompt_password, progname); + prompt_password, progname, echo); result = executeQuery(conn, "SELECT datname FROM pg_database WHERE datallowconn ORDER BY 1;", progname, echo); PQfinish(conn); @@ -372,7 +373,7 @@ reindex_system_catalogs(const char *dbname, const char *host, const char *port, PQExpBufferData sql; conn = connectDatabase(dbname, host, port, username, prompt_password, - progname, false, false); + progname, echo, false, false); initPQExpBuffer(&sql); diff --git a/src/bin/scripts/t/010_clusterdb.pl b/src/bin/scripts/t/010_clusterdb.pl index e2cff0fcab..a767338f92 100644 --- a/src/bin/scripts/t/010_clusterdb.pl +++ b/src/bin/scripts/t/010_clusterdb.pl @@ -26,7 +26,7 @@ ); $node->issues_sql_like( [ 'clusterdb', '-t', 'test1' ], - qr/statement: CLUSTER test1;/, + qr/statement: CLUSTER public\.test1;/, 'cluster specific table'); $node->command_ok([qw(clusterdb --echo --verbose dbname=template1)], diff --git a/src/bin/scripts/t/090_reindexdb.pl b/src/bin/scripts/t/090_reindexdb.pl index 3aa3a95350..e57a5e2bad 100644 --- a/src/bin/scripts/t/090_reindexdb.pl +++ b/src/bin/scripts/t/090_reindexdb.pl @@ -24,11 +24,11 @@ 'CREATE TABLE test1 (a int); CREATE INDEX test1x ON test1 (a);'); $node->issues_sql_like( [ 'reindexdb', '-t', 'test1', 'postgres' ], - qr/statement: REINDEX TABLE test1;/, + qr/statement: REINDEX TABLE public\.test1;/, 'reindex specific table'); $node->issues_sql_like( [ 'reindexdb', '-i', 'test1x', 'postgres' ], - qr/statement: REINDEX INDEX test1x;/, + qr/statement: REINDEX INDEX public\.test1x;/, 'reindex specific index'); $node->issues_sql_like( [ 'reindexdb', '-S', 'pg_catalog', 'postgres' ], @@ -40,7 +40,7 @@ 'reindex system tables'); $node->issues_sql_like( [ 'reindexdb', '-v', '-t', 'test1', 'postgres' ], - qr/statement: REINDEX \(VERBOSE\) TABLE test1;/, + qr/statement: REINDEX \(VERBOSE\) 
TABLE public\.test1;/, 'reindex with verbose output'); $node->command_ok([qw(reindexdb --echo --table=pg_am dbname=template1)], diff --git a/src/bin/scripts/t/100_vacuumdb.pl b/src/bin/scripts/t/100_vacuumdb.pl index dd98df8c08..382210e3b6 100644 --- a/src/bin/scripts/t/100_vacuumdb.pl +++ b/src/bin/scripts/t/100_vacuumdb.pl @@ -3,7 +3,7 @@ use PostgresNode; use TestLib; -use Test::More tests => 19; +use Test::More tests => 23; program_help_ok('vacuumdb'); program_version_ok('vacuumdb'); @@ -26,12 +26,32 @@ qr/statement: VACUUM \(FREEZE\);/, 'vacuumdb -F'); $node->issues_sql_like( - [ 'vacuumdb', '-z', 'postgres' ], - qr/statement: VACUUM \(ANALYZE\);/, - 'vacuumdb -z'); + [ 'vacuumdb', '-zj2', 'postgres' ], + qr/statement: VACUUM \(ANALYZE\) pg_catalog\./, + 'vacuumdb -zj2'); $node->issues_sql_like( [ 'vacuumdb', '-Z', 'postgres' ], qr/statement: ANALYZE;/, 'vacuumdb -Z'); $node->command_ok([qw(vacuumdb -Z --table=pg_am dbname=template1)], 'vacuumdb with connection string'); + +$node->command_fails([qw(vacuumdb -Zt pg_am;ABORT postgres)], + 'trailing command in "-t", without COLUMNS'); +# Unwanted; better if it failed. +$node->command_ok([qw(vacuumdb -Zt pg_am(amname);ABORT postgres)], + 'trailing command in "-t", with COLUMNS'); + +$node->safe_psql('postgres', q| + CREATE TABLE "need""q(uot" (")x" text); + + CREATE FUNCTION f0(int) RETURNS int LANGUAGE SQL AS 'SELECT $1 * $1'; + CREATE FUNCTION f1(int) RETURNS int LANGUAGE SQL AS 'SELECT f0($1)'; + CREATE TABLE funcidx (x int); + INSERT INTO funcidx VALUES (0),(1),(2),(3); + CREATE INDEX i0 ON funcidx ((f1(x))); +|); +$node->command_ok([qw|vacuumdb -Z --table="need""q(uot"(")x") postgres|], + 'column list'); +$node->command_fails([qw|vacuumdb -Zt funcidx postgres|], + 'unqualified name via functional index'); diff --git a/src/bin/scripts/vacuumdb.c b/src/bin/scripts/vacuumdb.c index 663083828e..887fa48fbd 100644 --- a/src/bin/scripts/vacuumdb.c +++ b/src/bin/scripts/vacuumdb.c @@ -61,7 +61,9 @@ static void vacuum_all_databases(vacuumingOptions *vacopts, const char *progname, bool echo, bool quiet); static void prepare_vacuum_command(PQExpBuffer sql, PGconn *conn, - vacuumingOptions *vacopts, const char *table); + vacuumingOptions *vacopts, const char *table, + bool table_pre_qualified, + const char *progname, bool echo); static void run_vacuum_command(PGconn *conn, const char *sql, bool echo, const char *table, const char *progname, bool async); @@ -361,7 +363,7 @@ vacuum_one_database(const char *dbname, vacuumingOptions *vacopts, (stage >= 0 && stage < ANALYZE_NUM_STAGES)); conn = connectDatabase(dbname, host, port, username, prompt_password, - progname, false, true); + progname, echo, false, true); if (!quiet) { @@ -437,7 +439,7 @@ vacuum_one_database(const char *dbname, vacuumingOptions *vacopts, for (i = 1; i < concurrentCons; i++) { conn = connectDatabase(dbname, host, port, username, prompt_password, - progname, false, true); + progname, echo, false, true); init_slot(slots + i, conn, progname); } } @@ -463,7 +465,8 @@ vacuum_one_database(const char *dbname, vacuumingOptions *vacopts, ParallelSlot *free_slot; const char *tabname = cell ? 
cell->val : NULL; - prepare_vacuum_command(&sql, conn, vacopts, tabname); + prepare_vacuum_command(&sql, conn, vacopts, tabname, + tables == &dbtables, progname, echo); if (CancelRequested) { @@ -554,8 +557,8 @@ vacuum_all_databases(vacuumingOptions *vacopts, int stage; int i; - conn = connectMaintenanceDatabase(maintenance_db, host, port, - username, prompt_password, progname); + conn = connectMaintenanceDatabase(maintenance_db, host, port, username, + prompt_password, progname, echo); result = executeQuery(conn, "SELECT datname FROM pg_database WHERE datallowconn ORDER BY 1;", progname, echo); @@ -618,8 +621,10 @@ vacuum_all_databases(vacuumingOptions *vacopts, * quoted. The command is semicolon-terminated. */ static void -prepare_vacuum_command(PQExpBuffer sql, PGconn *conn, vacuumingOptions *vacopts, - const char *table) +prepare_vacuum_command(PQExpBuffer sql, PGconn *conn, + vacuumingOptions *vacopts, const char *table, + bool table_pre_qualified, + const char *progname, bool echo) { resetPQExpBuffer(sql); @@ -675,7 +680,13 @@ prepare_vacuum_command(PQExpBuffer sql, PGconn *conn, vacuumingOptions *vacopts, } if (table) - appendPQExpBuffer(sql, " %s", table); + { + appendPQExpBufferChar(sql, ' '); + if (table_pre_qualified) + appendPQExpBufferStr(sql, table); + else + appendQualifiedRelation(sql, table, conn, progname, echo); + } appendPQExpBufferChar(sql, ';'); } diff --git a/src/fe_utils/string_utils.c b/src/fe_utils/string_utils.c index 8c05a80d31..b47a396af1 100644 --- a/src/fe_utils/string_utils.c +++ b/src/fe_utils/string_utils.c @@ -956,8 +956,9 @@ processSQLNamePattern(PGconn *conn, PQExpBuffer buf, const char *pattern, } /* - * Now decide what we need to emit. Note there will be a leading "^(" in - * the patterns in any case. + * Now decide what we need to emit. We may run under a hostile + * search_path, so qualify EVERY name. Note there will be a leading "^(" + * in the patterns in any case. */ if (namebuf.len > 2) { @@ -970,15 +971,18 @@ processSQLNamePattern(PGconn *conn, PQExpBuffer buf, const char *pattern, WHEREAND(); if (altnamevar) { - appendPQExpBuffer(buf, "(%s ~ ", namevar); + appendPQExpBuffer(buf, + "(%s OPERATOR(pg_catalog.~) ", namevar); appendStringLiteralConn(buf, namebuf.data, conn); - appendPQExpBuffer(buf, "\n OR %s ~ ", altnamevar); + appendPQExpBuffer(buf, + "\n OR %s OPERATOR(pg_catalog.~) ", + altnamevar); appendStringLiteralConn(buf, namebuf.data, conn); appendPQExpBufferStr(buf, ")\n"); } else { - appendPQExpBuffer(buf, "%s ~ ", namevar); + appendPQExpBuffer(buf, "%s OPERATOR(pg_catalog.~) ", namevar); appendStringLiteralConn(buf, namebuf.data, conn); appendPQExpBufferChar(buf, '\n'); } @@ -994,7 +998,7 @@ processSQLNamePattern(PGconn *conn, PQExpBuffer buf, const char *pattern, if (strcmp(schemabuf.data, "^(.*)$") != 0 && schemavar) { WHEREAND(); - appendPQExpBuffer(buf, "%s ~ ", schemavar); + appendPQExpBuffer(buf, "%s OPERATOR(pg_catalog.~) ", schemavar); appendStringLiteralConn(buf, schemabuf.data, conn); appendPQExpBufferChar(buf, '\n'); } diff --git a/src/include/fe_utils/connect.h b/src/include/fe_utils/connect.h new file mode 100644 index 0000000000..fa293d2458 --- /dev/null +++ b/src/include/fe_utils/connect.h @@ -0,0 +1,28 @@ +/*------------------------------------------------------------------------- + * + * Interfaces in support of FE/BE connections. 
+ * + * + * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group + * Portions Copyright (c) 1994, Regents of the University of California + * + * src/include/fe_utils/connect.h + * + *------------------------------------------------------------------------- + */ +#ifndef CONNECT_H +#define CONNECT_H + +/* + * This SQL statement installs an always-secure search path, so malicious + * users can't take control. CREATE of an unqualified name will fail, because + * this selects no creation schema. This does not demote pg_temp, so it is + * suitable where we control the entire FE/BE connection but not suitable in + * SECURITY DEFINER functions. This is portable to PostgreSQL 7.3, which + * introduced schemas. When connected to an older version from code that + * might work with the old server, skip this. + */ +#define ALWAYS_SECURE_SEARCH_PATH_SQL \ + "SELECT pg_catalog.set_config('search_path', '', false)" + +#endif /* CONNECT_H */ diff --git a/src/tools/findoidjoins/findoidjoins.c b/src/tools/findoidjoins/findoidjoins.c index 7ea53b8789..82ef113e92 100644 --- a/src/tools/findoidjoins/findoidjoins.c +++ b/src/tools/findoidjoins/findoidjoins.c @@ -9,6 +9,7 @@ #include "catalog/pg_class.h" +#include "fe_utils/connect.h" #include "libpq-fe.h" #include "pqexpbuffer.h" @@ -46,6 +47,14 @@ main(int argc, char **argv) exit(EXIT_FAILURE); } + res = PQexec(conn, ALWAYS_SECURE_SEARCH_PATH_SQL); + if (!res || PQresultStatus(res) != PGRES_TUPLES_OK) + { + fprintf(stderr, "sql error: %s\n", PQerrorMessage(conn)); + exit(EXIT_FAILURE); + } + PQclear(res); + /* Get a list of relations that have OIDs */ printfPQExpBuffer(&sql, "%s", From 5770172cb0c9df9e6ce27c507b449557e5b45124 Mon Sep 17 00:00:00 2001 From: Noah Misch Date: Mon, 26 Feb 2018 07:39:44 -0800 Subject: [PATCH 1060/1087] Document security implications of search_path and the public schema. The ability to create like-named objects in different schemas opens up the potential for users to change the behavior of other users' queries, maliciously or accidentally. When you connect to a PostgreSQL server, you should remove from your search_path any schema for which a user other than yourself or superusers holds the CREATE privilege. If you do not, other users holding CREATE privilege can redefine the behavior of your commands, causing them to perform arbitrary SQL statements under your identity. "SET search_path = ..." and "SELECT pg_catalog.set_config(...)" are not vulnerable to such hijacking, so one can use either as the first command of a session. As special exceptions, the following client applications behave as documented regardless of search_path settings and schema privileges: clusterdb createdb createlang createuser dropdb droplang dropuser ecpg (not programs it generates) initdb oid2name pg_archivecleanup pg_basebackup pg_config pg_controldata pg_ctl pg_dump pg_dumpall pg_isready pg_receivewal pg_recvlogical pg_resetwal pg_restore pg_rewind pg_standby pg_test_fsync pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb vacuumlo. Not included are core client programs that run user-specified SQL commands, namely psql and pgbench. PostgreSQL encourages non-core client applications to do likewise. Document this in the context of libpq connections, psql connections, dblink connections, ECPG connections, extension packaging, and schema usage patterns. 
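As a minimal sketch of the attack class (assuming a database where PUBLIC
still holds CREATE on schema public, and a hypothetical victim table
users(name varchar)):

    -- attacker: any role able to CREATE in schema public
    CREATE FUNCTION public.lower(varchar) RETURNS text LANGUAGE sql
      AS $$ SELECT 'hijacked: ' || $1 $$;   -- body could run arbitrary SQL

    -- victim, under the default search_path of "$user", public:
    SELECT lower(name) FROM users;
    -- the exact varchar match resolves to public.lower,
    -- not pg_catalog.lower(text)
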
The principal defense for applications is "SELECT pg_catalog.set_config('search_path', '', false)", and the principal defense for databases is "REVOKE CREATE ON SCHEMA public FROM PUBLIC". Either one is sufficient to prevent attack. After a REVOKE, consider auditing the public schema for objects named like pg_catalog objects. Authors of SECURITY DEFINER functions use some of the same defenses, and the CREATE FUNCTION reference page already covered them thoroughly. This is a good opportunity to audit SECURITY DEFINER functions for robust security practice. Back-patch to 9.3 (all supported versions). Reviewed by Michael Paquier and Jonathan S. Katz. Reported by Arseniy Sharoglazov. Security: CVE-2018-1058 --- doc/src/sgml/config.sgml | 10 +++- doc/src/sgml/contrib.sgml | 2 +- doc/src/sgml/dblink.sgml | 36 ++++++++---- doc/src/sgml/ddl.sgml | 94 +++++++++++++++++++++++--------- doc/src/sgml/ecpg.sgml | 29 ++++++++++ doc/src/sgml/extend.sgml | 56 ++++++++++++++----- doc/src/sgml/libpq.sgml | 82 ++++++++++++++++++++++------ doc/src/sgml/lobj.sgml | 11 ++++ doc/src/sgml/ref/pgbench.sgml | 11 ++++ doc/src/sgml/ref/psql-ref.sgml | 12 ++++ doc/src/sgml/user-manag.sgml | 15 +++-- src/test/examples/testlibpq.c | 21 +++++-- src/test/examples/testlibpq2.c | 29 +++++++--- src/test/examples/testlibpq2.sql | 4 +- src/test/examples/testlibpq3.c | 13 ++++- src/test/examples/testlibpq3.sql | 3 +- src/test/examples/testlibpq4.c | 19 ++++++- src/test/examples/testlo.c | 11 ++++ src/test/examples/testlo64.c | 11 ++++ 19 files changed, 369 insertions(+), 100 deletions(-) diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index 4c998fe51f..00fc364c0a 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -6329,6 +6329,13 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; setting, either globally or per-user. + + For more information on schema handling, see + . In particular, the default + configuration is suitable only when the database has a single user or + a few mutually-trusting users. + + The current effective value of the search path can be examined via the SQL function @@ -6340,9 +6347,6 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; appearing in search_path were resolved. - - For more information on schema handling, see . - diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml index 0622227bee..b626a345f3 100644 --- a/doc/src/sgml/contrib.sgml +++ b/doc/src/sgml/contrib.sgml @@ -75,7 +75,7 @@ CREATE EXTENSION module_name; choice. To do that, add SCHEMA schema_name to the CREATE EXTENSION command. By default, the objects will be placed in your current creation - target schema, typically public. + target schema, which in turn defaults to public. diff --git a/doc/src/sgml/dblink.sgml b/doc/src/sgml/dblink.sgml index 4c07f886aa..87e14ea093 100644 --- a/doc/src/sgml/dblink.sgml +++ b/doc/src/sgml/dblink.sgml @@ -83,7 +83,7 @@ dblink_connect(text connname, text connstr) returns text libpq-style connection info string, for example hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres - password=mypasswd. + password=mypasswd options=-csearch_path=
. For details see . Alternatively, the name of a foreign server. @@ -104,6 +104,17 @@ dblink_connect(text connname, text connstr) returns text Notes + + If untrusted users have access to a database that has not adopted a + secure schema usage pattern, + begin each session by removing publicly-writable schemas from + search_path. One could, for example, + add options=-csearch_path= to + connstr. This consideration is not specific + to dblink; it applies to every interface for + executing arbitrary SQL commands. + + Only superusers may use dblink_connect to create non-password-authenticated connections. If non-superusers need this @@ -121,13 +132,13 @@ dblink_connect(text connname, text connstr) returns text Examples -SELECT dblink_connect('dbname=postgres'); +SELECT dblink_connect('dbname=postgres options=-csearch_path='); dblink_connect ---------------- OK (1 row) -SELECT dblink_connect('myconn', 'dbname=postgres'); +SELECT dblink_connect('myconn', 'dbname=postgres options=-csearch_path='); dblink_connect ---------------- OK @@ -416,7 +427,8 @@ dblink(text sql [, bool fail_on_error]) returns setof record SELECT * - FROM dblink('dbname=mydb', 'select proname, prosrc from pg_proc') + FROM dblink('dbname=mydb options=-csearch_path=', + 'select proname, prosrc from pg_proc') AS t1(proname name, prosrc text) WHERE proname LIKE 'bytea%'; @@ -450,7 +462,8 @@ SELECT * CREATE VIEW myremote_pg_proc AS SELECT * - FROM dblink('dbname=postgres', 'select proname, prosrc from pg_proc') + FROM dblink('dbname=postgres options=-csearch_path=', + 'select proname, prosrc from pg_proc') AS t1(proname name, prosrc text); SELECT * FROM myremote_pg_proc WHERE proname LIKE 'bytea%'; @@ -461,7 +474,8 @@ SELECT * FROM myremote_pg_proc WHERE proname LIKE 'bytea%'; Examples -SELECT * FROM dblink('dbname=postgres', 'select proname, prosrc from pg_proc') +SELECT * FROM dblink('dbname=postgres options=-csearch_path=', + 'select proname, prosrc from pg_proc') AS t1(proname name, prosrc text) WHERE proname LIKE 'bytea%'; proname | prosrc ------------+------------ @@ -479,7 +493,7 @@ SELECT * FROM dblink('dbname=postgres', 'select proname, prosrc from pg_proc') byteaout | byteaout (12 rows) -SELECT dblink_connect('dbname=postgres'); +SELECT dblink_connect('dbname=postgres options=-csearch_path='); dblink_connect ---------------- OK @@ -503,7 +517,7 @@ SELECT * FROM dblink('select proname, prosrc from pg_proc') byteaout | byteaout (12 rows) -SELECT dblink_connect('myconn', 'dbname=regression'); +SELECT dblink_connect('myconn', 'dbname=regression options=-csearch_path='); dblink_connect ---------------- OK @@ -778,7 +792,7 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret Examples -SELECT dblink_connect('dbname=postgres'); +SELECT dblink_connect('dbname=postgres options=-csearch_path='); dblink_connect ---------------- OK @@ -899,7 +913,7 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) Examples -SELECT dblink_connect('dbname=postgres'); +SELECT dblink_connect('dbname=postgres options=-csearch_path='); dblink_connect ---------------- OK @@ -1036,7 +1050,7 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text Examples -SELECT dblink_connect('dbname=postgres'); +SELECT dblink_connect('dbname=postgres options=-csearch_path='); dblink_connect ---------------- OK diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index 15a9285136..2b879ead4b 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -2172,6 +2172,20 @@ 
CREATE TABLE public.products ( ... ); in other schemas in the database. + + The ability to create like-named objects in different schemas complicates + writing a query that references precisely the same objects every time. It + also opens up the potential for users to change the behavior of other + users' queries, maliciously or accidentally. Due to the prevalence of + unqualified names in queries and their use + in PostgreSQL internals, adding a schema + to search_path effectively trusts all users having + CREATE privilege on that schema. When you run an + ordinary query, a malicious user able to create objects in a schema of + your search path can take control and execute arbitrary SQL functions as + though you executed them. + + schema current @@ -2288,8 +2302,9 @@ SELECT 3 OPERATOR(pg_catalog.+) 4; the schema public. This allows all users that are able to connect to a given database to create objects in its - public schema. If you do - not want to allow that, you can revoke that privilege: + public schema. + Some usage patterns call for + revoking that privilege: REVOKE CREATE ON SCHEMA public FROM PUBLIC; @@ -2339,50 +2354,75 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; Usage Patterns - Schemas can be used to organize your data in many ways. There are - a few usage patterns that are recommended and are easily supported by - the default configuration: + Schemas can be used to organize your data in many ways. There are a few + usage patterns easily supported by the default configuration, only one of + which suffices when database users mistrust other database users: + - If you do not create any schemas then all users access the - public schema implicitly. This simulates the situation where - schemas are not available at all. This setup is mainly - recommended when there is only a single user or a few cooperating - users in a database. This setup also allows smooth transition - from the non-schema-aware world. + Constrain ordinary users to user-private schemas. To implement this, + issue REVOKE CREATE ON SCHEMA public FROM PUBLIC, + and create a schema for each user with the same name as that user. If + affected users had logged in before this, consider auditing the public + schema for objects named like objects in + schema pg_catalog. Recall that the default search + path starts with $user, which resolves to the user + name. Therefore, if each user has a separate schema, they access their + own schemas by default. - You can create a schema for each user with the same name as - that user. Recall that the default search path starts with - $user, which resolves to the user name. - Therefore, if each user has a separate schema, they access their - own schemas by default. + Remove the public schema from each user's default search path + using ALTER ROLE user SET + search_path = "$user". Everyone retains the ability to + create objects in the public schema, but only qualified names will + choose those objects. A user holding the CREATEROLE + privilege can undo this setting and issue arbitrary queries under the + identity of users relying on the setting. If you + grant CREATEROLE to users not warranting this + almost-superuser ability, use the first pattern instead. + + - If you use this setup then you might also want to revoke access - to the public schema (or drop it altogether), so users are - truly constrained to their own schemas. + Remove the public schema from search_path in + postgresql.conf. + The ensuing user experience matches the previous pattern. 
In addition + to that pattern's implications for CREATEROLE, this + trusts database owners the same way. If you assign + the CREATEROLE + privilege, CREATEDB privilege or individual database + ownership to users not warranting almost-superuser access, use the + first pattern instead. - To install shared applications (tables to be used by everyone, - additional functions provided by third parties, etc.), put them - into separate schemas. Remember to grant appropriate - privileges to allow the other users to access them. Users can - then refer to these additional objects by qualifying the names - with a schema name, or they can put the additional schemas into - their search path, as they choose. + Keep the default. All users access the public schema implicitly. This + simulates the situation where schemas are not available at all, giving + a smooth transition from the non-schema-aware world. However, any user + can issue arbitrary queries under the identity of any user not electing + to protect itself individually. This pattern is acceptable only when + the database has a single user or a few mutually-trusting users. + + + For any pattern, to install shared applications (tables to be used by + everyone, additional functions provided by third parties, etc.), put them + into separate schemas. Remember to grant appropriate privileges to allow + the other users to access them. Users can then refer to these additional + objects by qualifying the names with a schema name, or they can put the + additional schemas into their search path, as they choose. + @@ -2405,7 +2445,7 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; Also, there is no concept of a public schema in the SQL standard. For maximum conformance to the standard, you should - not use (perhaps even remove) the public schema. + not use the public schema. diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index 5a8d1f1b95..98b6840520 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -186,6 +186,18 @@ EXEC SQL CONNECT TO target AS chapter). + + If untrusted users have access to a database that has not adopted a + secure schema usage pattern, + begin each session by removing publicly-writable schemas + from search_path. For example, + add options=-csearch_path= + to options, or + issue EXEC SQL SELECT pg_catalog.set_config('search_path', '', + false); after connecting. This consideration is not specific to + ECPG; it applies to every interface for executing arbitrary SQL commands. + + Here are some examples of CONNECT statements: @@ -266,8 +278,11 @@ int main() { EXEC SQL CONNECT TO testdb1 AS con1 USER testuser; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL CONNECT TO testdb2 AS con2 USER testuser; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL CONNECT TO testdb3 AS con3 USER testuser; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; /* This query would be executed in the last opened database "testdb3". 
*/ EXEC SQL SELECT current_database() INTO :dbname; @@ -1093,6 +1108,7 @@ EXEC SQL BEGIN DECLARE SECTION; EXEC SQL END DECLARE SECTION; EXEC SQL CONNECT TO testdb; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; in = PGTYPESinterval_new(); EXEC SQL SELECT '1 min'::interval INTO :in; @@ -1147,6 +1163,7 @@ EXEC SQL BEGIN DECLARE SECTION; EXEC SQL END DECLARE SECTION; EXEC SQL CONNECT TO testdb; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; num = PGTYPESnumeric_new(); dec = PGTYPESdecimal_new(); @@ -1221,6 +1238,7 @@ EXEC SQL END DECLARE SECTION; memset(dbid, 0, sizeof(int) * 8); EXEC SQL CONNECT TO testdb; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; /* Retrieve multiple rows into arrays at once. */ EXEC SQL SELECT oid,datname INTO :dbid, :dbname FROM pg_database; @@ -1887,6 +1905,7 @@ char *stmt = "SELECT u.usename as dbaname, d.datname " EXEC SQL END DECLARE SECTION; EXEC SQL CONNECT TO testdb AS con1 USER testuser; +EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL PREPARE stmt1 FROM :stmt; @@ -4317,6 +4336,7 @@ main(void) EXEC SQL END DECLARE SECTION; EXEC SQL CONNECT TO testdb AS con1 USER testuser; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL PREPARE stmt1 FROM :query; EXEC SQL DECLARE cur1 CURSOR FOR stmt1; @@ -4478,6 +4498,7 @@ main(void) EXEC SQL END DECLARE SECTION; EXEC SQL CONNECT TO uptimedb AS con1 USER uptime; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL PREPARE stmt1 FROM :query; EXEC SQL DECLARE cur1 CURSOR FOR stmt1; @@ -5909,6 +5930,7 @@ main(void) memset(buf, 1, buflen); EXEC SQL CONNECT TO testdb AS con1; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; conn = ECPGget_PGconn("con1"); printf("conn = %p\n", conn); @@ -6038,6 +6060,7 @@ class TestCpp TestCpp::TestCpp() { EXEC SQL CONNECT TO testdb1; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; } void Test::test() @@ -6117,6 +6140,7 @@ void db_connect() { EXEC SQL CONNECT TO testdb1; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; } void @@ -6510,12 +6534,14 @@ EXEC SQL END DECLARE SECTION; ECPGdebug(1, stderr); EXEC SQL CONNECT TO :dbname USER :user; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL SELECT version() INTO :ver; EXEC SQL DISCONNECT; printf("version: %s\n", ver); EXEC SQL CONNECT TO :connection USER :user; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL SELECT version() INTO :ver; EXEC SQL DISCONNECT; @@ -7116,6 +7142,7 @@ EXEC SQL BEGIN DECLARE SECTION; EXEC SQL END DECLARE SECTION; EXEC SQL CONNECT TO testdb AS con1 USER testuser; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL ALLOCATE DESCRIPTOR d; /* Declare, open a cursor, and assign a descriptor to the cursor */ @@ -7673,6 +7700,7 @@ EXEC SQL BEGIN DECLARE SECTION; EXEC SQL END DECLARE SECTION; EXEC SQL CONNECT TO testdb AS con1; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL SELECT current_database(), 256 INTO :t:t_ind LIMIT 1; @@ -7829,6 +7857,7 @@ int main(void) { EXEC SQL CONNECT TO testdb AS con1; + EXEC SQL SELECT pg_catalog.set_config('search_path', '', false); EXEC SQL COMMIT; EXEC SQL ALLOCATE DESCRIPTOR d; EXEC SQL 
DECLARE cur CURSOR FOR SELECT current_database(), 'hoge', 256; EXEC SQL OPEN cur; diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml index 5f1bb70e97..6c043cdd02 100644 --- a/doc/src/sgml/extend.sgml +++ b/doc/src/sgml/extend.sgml @@ -430,6 +430,32 @@ dropping the whole extension. + + Defining Extension Objects + + + + Widely-distributed extensions should assume little about the database + they occupy. In particular, unless you issued SET search_path = + pg_temp, assume each unqualified name could resolve to an + object that a malicious user has defined. Beware of constructs that + depend on search_path implicitly: IN + and CASE expression WHEN + always select an operator using the search path. In their place, use + OPERATOR(schema.=) ANY + and CASE WHEN expression. + + + + Extension Files @@ -984,24 +1010,24 @@ SELECT * FROM pg_extension_update_paths('extension_name (LEFTARG = pg_catalog.text, + RIGHTARG = pg_catalog.text, PROCEDURE = pair); -CREATE OR REPLACE FUNCTION pair(text, text) -RETURNS pair LANGUAGE SQL AS 'SELECT ROW($1, $2)::pair;'; +-- "SET search_path" is easy to get right, but qualified names perform better. +CREATE OR REPLACE FUNCTION lower(pair) +RETURNS pair LANGUAGE SQL +AS 'SELECT ROW(lower($1.k), lower($1.v))::@extschema@.pair;' +SET search_path = pg_temp; -CREATE OPERATOR ~> (LEFTARG = text, RIGHTARG = anyelement, PROCEDURE = pair); -CREATE OPERATOR ~> (LEFTARG = anyelement, RIGHTARG = text, PROCEDURE = pair); -CREATE OPERATOR ~> (LEFTARG = anyelement, RIGHTARG = anyelement, PROCEDURE = pair); -CREATE OPERATOR ~> (LEFTARG = text, RIGHTARG = text, PROCEDURE = pair); +CREATE OR REPLACE FUNCTION pair_concat(pair, pair) +RETURNS pair LANGUAGE SQL +AS 'SELECT ROW($1.k OPERATOR(pg_catalog.||) $2.k, + $1.v OPERATOR(pg_catalog.||) $2.v)::@extschema@.pair;'; ]]> @@ -1013,7 +1039,7 @@ CREATE OPERATOR ~> (LEFTARG = text, RIGHTARG = text, PROCEDURE = pair); # pair extension comment = 'A key/value pair data type' default_version = '1.0' -relocatable = true +relocatable = false diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml index f327d4b5b5..2a8e1f2e07 100644 --- a/doc/src/sgml/libpq.sgml +++ b/doc/src/sgml/libpq.sgml @@ -65,6 +65,22 @@ the return value for a successful connection before queries are sent via the connection object. + + + If untrusted users have access to a database that has not adopted a + secure schema usage pattern, + begin each session by removing publicly-writable schemas from + search_path. One can set parameter key + word options to + value -csearch_path=. Alternatively, one can + issue PQexec(conn, "SELECT + pg_catalog.set_config('search_path', '', false)") after + connecting. This consideration is not specific + to libpq; it applies to every interface for + executing arbitrary SQL commands. + + + + On Unix, forking a process with open libpq connections can lead to @@ -6878,7 +6894,8 @@ main(void) { mydata *data; PGresult *res; - PGconn *conn = PQconnectdb("dbname = postgres"); + PGconn *conn = + PQconnectdb("dbname=postgres options=-csearch_path="); if (PQstatus(conn) != CONNECTION_OK) { @@ -8305,6 +8322,22 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. 
*/ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + + /* + * Should PQclear PGresult whenever it is no longer needed to avoid memory + * leaks + */ + PQclear(res); + /* * Our test case here involves using a cursor, for which we must be inside * a transaction block. We could do the whole thing with a single @@ -8320,11 +8353,6 @@ main(int argc, char **argv) PQclear(res); exit_nicely(conn); } - - /* - * Should PQclear PGresult whenever it is no longer needed to avoid memory - * leaks - */ PQclear(res); /* @@ -8400,16 +8428,16 @@ main(int argc, char **argv) * populate a database with the following commands * (provided in src/test/examples/testlibpq2.sql): * + * CREATE SCHEMA TESTLIBPQ2; + * SET search_path = TESTLIBPQ2; * CREATE TABLE TBL1 (i int4); - * * CREATE TABLE TBL2 (i int4); - * * CREATE RULE r1 AS ON INSERT TO TBL1 DO * (INSERT INTO TBL2 VALUES (new.i); NOTIFY TBL2); * - * and do this four times: + * Start this program, then from psql do this four times: * - * INSERT INTO TBL1 VALUES (10); + * INSERT INTO TESTLIBPQ2.TBL1 VALUES (10); */ #ifdef WIN32 @@ -8464,6 +8492,22 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. */ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + + /* + * Should PQclear PGresult whenever it is no longer needed to avoid memory + * leaks + */ + PQclear(res); + /* * Issue LISTEN command to enable notifications from the rule's NOTIFY. */ @@ -8474,11 +8518,6 @@ main(int argc, char **argv) PQclear(res); exit_nicely(conn); } - - /* - * should PQclear PGresult whenever it is no longer needed to avoid memory - * leaks - */ PQclear(res); /* Quit after four notifies are received. */ @@ -8545,8 +8584,9 @@ main(int argc, char **argv) * Before running this, populate a database with the following commands * (provided in src/test/examples/testlibpq3.sql): * + * CREATE SCHEMA testlibpq3; + * SET search_path = testlibpq3; * CREATE TABLE test1 (i int4, t text, b bytea); - * * INSERT INTO test1 values (1, 'joe''s place', '\\000\\001\\002\\003\\004'); * INSERT INTO test1 values (2, 'ho there', '\\004\\003\\002\\001\\000'); * @@ -8678,6 +8718,16 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. */ + res = PQexec(conn, "SET search_path = testlibpq3"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + PQclear(res); + /* * The point of this program is to illustrate use of PQexecParams() with * out-of-line parameters, as well as binary transmission of data. diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml index 6b5aaebbbc..771795ae66 100644 --- a/doc/src/sgml/lobj.sgml +++ b/doc/src/sgml/lobj.sgml @@ -933,6 +933,17 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. 
*/ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + PQclear(res); + res = PQexec(conn, "begin"); PQclear(res); printf("importing file \"%s\" ...\n", in_filename); diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml index 3dd492cec1..59b573b501 100644 --- a/doc/src/sgml/ref/pgbench.sgml +++ b/doc/src/sgml/ref/pgbench.sgml @@ -1688,5 +1688,16 @@ statement latencies in milliseconds: database server. + + Security + + + If untrusted users have access to a database that has not adopted a + secure schema usage pattern, + do not run pgbench in that + database. pgbench uses unqualified names and + does not manipulate the search path. + + diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml index 8bd9b9387e..bfdf859731 100644 --- a/doc/src/sgml/ref/psql-ref.sgml +++ b/doc/src/sgml/ref/psql-ref.sgml @@ -735,6 +735,18 @@ testdb=> of the command are displayed on the screen. + + If untrusted users have access to a database that has not adopted a + secure schema usage pattern, + begin your session by removing publicly-writable schemas + from search_path. One can + add options=-csearch_path= to the connection string or + issue SELECT pg_catalog.set_config('search_path', '', + false) before other SQL commands. This consideration is not + specific to psql; it applies to every interface + for executing arbitrary SQL commands. + + Whenever a command is executed, psql also polls for asynchronous notification events generated by diff --git a/doc/src/sgml/user-manag.sgml b/doc/src/sgml/user-manag.sgml index ae15efed95..94fd4ebf58 100644 --- a/doc/src/sgml/user-manag.sgml +++ b/doc/src/sgml/user-manag.sgml @@ -571,14 +571,17 @@ GRANT pg_signal_backend TO admin_user; - Function and Trigger Security + Function Security - Functions and triggers allow users to insert code into the backend - server that other users might execute unintentionally. Hence, both - mechanisms permit users to Trojan horse - others with relative ease. The only real protection is tight - control over who can define functions. + Functions, triggers and row-level security policies allow users to insert + code into the backend server that other users might execute + unintentionally. Hence, these mechanisms permit users to Trojan + horse others with relative ease. The strongest protection is tight + control over who can define objects. Where that is infeasible, write + queries referring only to objects having trusted owners. Remove + from search_path the public schema and any other schemas + that permit untrusted users to create objects. diff --git a/src/test/examples/testlibpq.c b/src/test/examples/testlibpq.c index 4d9af82dd1..92a05e5309 100644 --- a/src/test/examples/testlibpq.c +++ b/src/test/examples/testlibpq.c @@ -48,6 +48,22 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. */ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + + /* + * Should PQclear PGresult whenever it is no longer needed to avoid memory + * leaks + */ + PQclear(res); + /* * Our test case here involves using a cursor, for which we must be inside * a transaction block. 
We could do the whole thing with a single @@ -63,11 +79,6 @@ main(int argc, char **argv) PQclear(res); exit_nicely(conn); } - - /* - * Should PQclear PGresult whenever it is no longer needed to avoid memory - * leaks - */ PQclear(res); /* diff --git a/src/test/examples/testlibpq2.c b/src/test/examples/testlibpq2.c index 07c6317a21..76787fe010 100644 --- a/src/test/examples/testlibpq2.c +++ b/src/test/examples/testlibpq2.c @@ -13,16 +13,16 @@ * populate a database with the following commands * (provided in src/test/examples/testlibpq2.sql): * + * CREATE SCHEMA TESTLIBPQ2; + * SET search_path = TESTLIBPQ2; * CREATE TABLE TBL1 (i int4); - * * CREATE TABLE TBL2 (i int4); - * * CREATE RULE r1 AS ON INSERT TO TBL1 DO * (INSERT INTO TBL2 VALUES (new.i); NOTIFY TBL2); * - * and do this four times: + * Start this program, then from psql do this four times: * - * INSERT INTO TBL1 VALUES (10); + * INSERT INTO TESTLIBPQ2.TBL1 VALUES (10); */ #ifdef WIN32 @@ -77,6 +77,22 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. */ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + + /* + * Should PQclear PGresult whenever it is no longer needed to avoid memory + * leaks + */ + PQclear(res); + /* * Issue LISTEN command to enable notifications from the rule's NOTIFY. */ @@ -87,11 +103,6 @@ main(int argc, char **argv) PQclear(res); exit_nicely(conn); } - - /* - * should PQclear PGresult whenever it is no longer needed to avoid memory - * leaks - */ PQclear(res); /* Quit after four notifies are received. */ diff --git a/src/test/examples/testlibpq2.sql b/src/test/examples/testlibpq2.sql index 1686c3ed0d..e8173e4293 100644 --- a/src/test/examples/testlibpq2.sql +++ b/src/test/examples/testlibpq2.sql @@ -1,6 +1,6 @@ +CREATE SCHEMA TESTLIBPQ2; +SET search_path = TESTLIBPQ2; CREATE TABLE TBL1 (i int4); - CREATE TABLE TBL2 (i int4); - CREATE RULE r1 AS ON INSERT TO TBL1 DO (INSERT INTO TBL2 VALUES (new.i); NOTIFY TBL2); diff --git a/src/test/examples/testlibpq3.c b/src/test/examples/testlibpq3.c index e11e0567ca..00e62b43d2 100644 --- a/src/test/examples/testlibpq3.c +++ b/src/test/examples/testlibpq3.c @@ -8,8 +8,9 @@ * Before running this, populate a database with the following commands * (provided in src/test/examples/testlibpq3.sql): * + * CREATE SCHEMA testlibpq3; + * SET search_path = testlibpq3; * CREATE TABLE test1 (i int4, t text, b bytea); - * * INSERT INTO test1 values (1, 'joe''s place', '\\000\\001\\002\\003\\004'); * INSERT INTO test1 values (2, 'ho there', '\\004\\003\\002\\001\\000'); * @@ -141,6 +142,16 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. */ + res = PQexec(conn, "SET search_path = testlibpq3"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + PQclear(res); + /* * The point of this program is to illustrate use of PQexecParams() with * out-of-line parameters, as well as binary transmission of data. 
diff --git a/src/test/examples/testlibpq3.sql b/src/test/examples/testlibpq3.sql index 9d9e217e5d..2213306509 100644 --- a/src/test/examples/testlibpq3.sql +++ b/src/test/examples/testlibpq3.sql @@ -1,4 +1,5 @@ +CREATE SCHEMA testlibpq3; +SET search_path = testlibpq3; CREATE TABLE test1 (i int4, t text, b bytea); - INSERT INTO test1 values (1, 'joe''s place', '\\000\\001\\002\\003\\004'); INSERT INTO test1 values (2, 'ho there', '\\004\\003\\002\\001\\000'); diff --git a/src/test/examples/testlibpq4.c b/src/test/examples/testlibpq4.c index 0ec04313c0..a20f6249b4 100644 --- a/src/test/examples/testlibpq4.c +++ b/src/test/examples/testlibpq4.c @@ -22,8 +22,10 @@ exit_nicely(PGconn *conn1, PGconn *conn2) } static void -check_conn(PGconn *conn, const char *dbName) +check_prepare_conn(PGconn *conn, const char *dbName) { + PGresult *res; + /* check to see that the backend connection was successfully made */ if (PQstatus(conn) != CONNECTION_OK) { @@ -31,6 +33,17 @@ check_conn(PGconn *conn, const char *dbName) dbName, PQerrorMessage(conn)); exit(1); } + + /* Set always-secure search path, so malicious users can't take control. */ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit(1); + } + PQclear(res); } int @@ -80,10 +93,10 @@ main(int argc, char **argv) /* make a connection to the database */ conn1 = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName1); - check_conn(conn1, dbName1); + check_prepare_conn(conn1, dbName1); conn2 = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName2); - check_conn(conn2, dbName2); + check_prepare_conn(conn2, dbName2); /* start a transaction block */ res1 = PQexec(conn1, "BEGIN"); diff --git a/src/test/examples/testlo.c b/src/test/examples/testlo.c index 7afe24714a..be5c72b9a6 100644 --- a/src/test/examples/testlo.c +++ b/src/test/examples/testlo.c @@ -232,6 +232,17 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. */ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + PQclear(res); + res = PQexec(conn, "begin"); PQclear(res); printf("importing file \"%s\" ...\n", in_filename); diff --git a/src/test/examples/testlo64.c b/src/test/examples/testlo64.c index bb188cc3a1..39ba009c52 100644 --- a/src/test/examples/testlo64.c +++ b/src/test/examples/testlo64.c @@ -256,6 +256,17 @@ main(int argc, char **argv) exit_nicely(conn); } + /* Set always-secure search path, so malicious users can't take control. 
*/ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_COMMAND_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + PQclear(res); + res = PQexec(conn, "begin"); PQclear(res); printf("importing file \"%s\" ...\n", in_filename); From 964bddf1e87a42bbaaa989be0aabee94dbac9432 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Mon, 26 Feb 2018 11:54:00 -0500 Subject: [PATCH 1061/1087] Fix typo in internal error message --- src/pl/plpgsql/src/pl_exec.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c index eae51e316a..4ff87e0879 100644 --- a/src/pl/plpgsql/src/pl_exec.c +++ b/src/pl/plpgsql/src/pl_exec.c @@ -1999,7 +1999,7 @@ exec_stmt(PLpgSQL_execstate *estate, PLpgSQL_stmt *stmt) default: estate->err_stmt = save_estmt; - elog(ERROR, "unrecognized cmdtype: %d", stmt->cmd_type); + elog(ERROR, "unrecognized cmd_type: %d", stmt->cmd_type); } /* Let the plugin know that we have finished executing this statement */ From 8af3855699aa6fa97b7d0d39e0bc7d3279d3fe47 Mon Sep 17 00:00:00 2001 From: Tom Lane Date: Mon, 26 Feb 2018 12:14:05 -0500 Subject: [PATCH 1062/1087] Last-minute updates for release notes. Security: CVE-2018-1058 --- doc/src/sgml/release-10.sgml | 106 +++++++++++++++++++++++++++++++++- doc/src/sgml/release-9.3.sgml | 76 +++++++++++++++++++++++- doc/src/sgml/release-9.4.sgml | 76 +++++++++++++++++++++++- doc/src/sgml/release-9.5.sgml | 76 +++++++++++++++++++++++- doc/src/sgml/release-9.6.sgml | 76 +++++++++++++++++++++++- 5 files changed, 403 insertions(+), 7 deletions(-) diff --git a/doc/src/sgml/release-10.sgml b/doc/src/sgml/release-10.sgml index d543849715..e8b34086b7 100644 --- a/doc/src/sgml/release-10.sgml +++ b/doc/src/sgml/release-10.sgml @@ -23,7 +23,23 @@ - However, if you are upgrading from a version earlier than 10.2, + However, if you run an installation in which not all users are mutually + trusting, or if you maintain an application or extension that is + intended for use in arbitrary situations, it is strongly recommended + that you read the documentation changes described in the first changelog + entry below, and take suitable steps to ensure that your installation or + code is secure. + + + + Also, the changes described in the second changelog entry below may + cause functions used in index expressions or materialized views to fail + during auto-analyze, or when reloading from a dump. After upgrading, + monitor the server logs for such problems, and fix affected functions. + + + + Also, if you are upgrading from a version earlier than 10.2, see . @@ -35,6 +51,92 @@ + + Document how to configure installations and applications to guard + against search-path-dependent trojan-horse attacks from other users + (Noah Misch) + + + + Using a search_path setting that includes any + schemas writable by a hostile user enables that user to capture + control of queries and then run arbitrary SQL code with the + permissions of the attacked user. While it is possible to write + queries that are proof against such hijacking, it is notationally + tedious, and it's very easy to overlook holes. Therefore, we now + recommend configurations in which no untrusted schemas appear in + one's search path. Relevant documentation appears in + (for database administrators and users), + (for application authors), + (for extension authors), and + (for authors + of SECURITY DEFINER functions). 
+ (CVE-2018-1058) + + + + + + + Avoid use of insecure search_path settings + in pg_dump and other client programs + (Noah Misch, Tom Lane) + + + + pg_dump, + pg_upgrade, + vacuumdb and + other PostgreSQL-provided applications were + themselves vulnerable to the type of hijacking described in the previous + changelog entry; since these applications are commonly run by + superusers, they present particularly attractive targets. To make them + secure whether or not the installation as a whole has been secured, + modify them to include only the pg_catalog + schema in their search_path settings. + Autovacuum worker processes now do the same, as well. + + + + In cases where user-provided functions are indirectly executed by + these programs — for example, user-provided functions in index + expressions — the tighter search_path may + result in errors, which will need to be corrected by adjusting those + user-provided functions to not assume anything about what search path + they are invoked under. That has always been good practice, but now + it will be necessary for correct behavior. + (CVE-2018-1058) + + + + + + + + + + From 6d933da306c993ab52a28dba9f4f5b80c80f9681 Mon Sep 17 00:00:00 2001 From: Peter Eisentraut Date: Fri, 23 Feb 2018 19:52:30 -0500 Subject: [PATCH 1075/1087] doc: Improve man build speed Turn off man.endnotes.are.numbered parameter, which we don't need, but which increases performance vastly if off. Also turn on man.output.quietly, which also makes things a bit faster, but which is also less useful now as a progress indicator because the build is so fast now. --- doc/src/sgml/stylesheet-man.xsl | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/doc/src/sgml/stylesheet-man.xsl b/doc/src/sgml/stylesheet-man.xsl index 5ef2fcd634..fcb485c293 100644 --- a/doc/src/sgml/stylesheet-man.xsl +++ b/doc/src/sgml/stylesheet-man.xsl @@ -12,11 +12,13 @@ 0 0 +0 - + 32 40 +